Microsoft Hyper-V Is Alive and Enhanced With Windows Server 2025

Yes, you read that correctly: Microsoft Hyper-V is alive and enhanced with Windows Server 2025, formerly known as Windows Server v.Next. Note that as of this writing, Windows Server 2025 is available only as a preview build for download and testing.

What About the Myth That Hyper-V Is Discontinued?

Despite recent FUD (fear, uncertainty, and doubt), misinformation, and fake news, Microsoft Hyper-V is not dead, nor has Hyper-V been discontinued, as some claim. Some of the Hyper-V FUD is tied to VMware customers and partners looking for alternatives following Broadcom’s acquisition of VMware. More on Broadcom and VMware here, here, here, here, and here.

As a result of Broadcom’s VMware acquisition and the resulting challenges for partners and customers (see links above), organizations are doing due diligence, looking for replacements or alternatives. In addition, some vendors are leveraging the current VMware challenges to try to position themselves as the best hypervisor virtualization safe harbor for customers. Thus some vendors, along with their partners, influencers, and amplifiers, are using FUD to keep prospects from looking at or considering Hyper-V.

Virtual FUD (vFUD)

First, let’s shut down some virtual FUD (vFUD). As mentioned above, some are claiming that Microsoft has discontinued Hyper-V. Specifically, the vFUD centers on Microsoft terminating a specific license SKU (e.g., the free Hyper-V Server 2019 SKU). For those unfamiliar with the discontinued SKU (Hyper-V Server 2019), it is a headless (no desktop GUI) version of Windows Server that runs Hyper-V VMs, nothing more, nothing less.

Does that mean the Hyper-V technology is discontinued? No.

Does that mean Windows Server and Hyper-V are discontinued? No.

Microsoft is terminating a particular stripped-down Windows Server SKU (e.g., Hyper-V Server 2019), not the underlying technology, including Windows Server and Hyper-V.

To repeat: a specific SKU or distribution (Hyper-V Server 2019) has been discontinued, not Hyper-V. Meanwhile, other distributions of Windows Server with Hyper-V continue to be supported and enhanced, including the upcoming Windows Server 2025 and Server 2022, among others.

On the other hand, there is also some old vFUD going back many years, even a decade, to when some last experienced using, trying, or looking at Hyper-V. For example, their last look at Hyper-V might have been in the Server 2016 era or earlier.

If you are a vendor or influencer throwing vFUD around, at least get some new vFUD and use it in new ways. Better yet, up your game and marketing so you don’t rely on old vFUD. Likewise, if you are a vendor partner and have not extended your software or service support for Hyper-V, now is a good time to do so.

Watch out for falling into the vFUD trap of thinking Hyper-V is dead and thus missing out on new revenue streams. At a minimum, take a look at current and upcoming Hyper-V enhancements while doing your due diligence, instead of working off of old vFUD.

Where is Hyper-V being used?

From on-site (aka on-premises, on-prem) and edge deployments on standalone and clustered Windows Servers, to Azure Stack HCI; from Azure and other Microsoft platforms or services to Windows desktops, as well as home labs, among many other scenarios.

Do I use Hyper-V? Yes. When I retired from the vExpert program after ten years, I moved all of my workloads from a VMware environment to Hyper-V, including *nix, container, and Windows VMs, on-site and on the Azure cloud.

How Hyper-V Is Alive and Enhanced With Windows Server 2025

Is Hyper-V alive and enhanced with Windows Server 2025? Yup.

Microsoft announced the Windows Server 2025 preview build, formerly known as Windows Server v.Next, on January 26, 2024 (you can get the bits here). Note that Microsoft uses Windows Server v.Next as a generic placeholder for next-generation Windows Server technology.

A reminder that the cadence of Windows Server Long-Term Servicing Channel (LTSC) versions has been about three years (2012 R2, 2016, 2019, 2022, and now 2025), along with interim updates.

What’s enhanced with Hyper-V and Windows Server 2025

    • Hot patching of running servers (requires Azure Arc management), with almost-instant implementation and no reboot for physical, virtual, and cloud-based Windows Servers.
    • Scaling to even more compute processors and RAM for VMs.
    • Server storage I/O performance updates, including NVMe optimizations.
    • Active Directory (AD) improvements for scaling, security, and performance.
    • Storage Replica and clustering capability enhancements.
    • Hyper-V GPU partitioning and pools, including migration of VMs using GPUs.

More Enhancements for Hyper-V and Windows Server 2025

Active Directory (AD)

Enhanced performance using all CPUs in a processor group of up to 64 cores to support scaling and faster processing. LDAP support for TLS 1.3, Kerberos support for AES SHA-256/384, new AD functional levels, a local KDC, improved replication priority, NTLM retirement, local Kerberos, and other security hardening. In addition, 64-bit Long value IDs (LIDs) are supported, along with a new database schema using 32K pages vs. the previous 8K pages. You will need to upgrade forest-wide across domain controllers (at least Server 2016 or later) to leverage the new larger page size. Note that there is also backward compatibility using 8K pages until all domain controllers are upgraded.

Storage, HA, and Clustering

Windows Server continues to offer flexible options for using storage how you want or need to, from traditional direct attached storage (DAS) to Storage Area Networks (SAN) to software-defined Storage Spaces Direct (S2D), including NVMe, NVMe over Fabrics (NVMe-oF), SAS, Fibre Channel, and iSCSI, along with file-attached storage. Other storage and HA enhancements include Storage Replica logging and compression performance, and stretch S2D multi-site optimization.

Failover Cluster enhancements include AD-less clusters, cert-based VM live migration for the edge, cluster-aware updating reliability, and performance improvements. ReFS enhancements include dedupe and compression optimizations.

Other NVMe enhancements include optimizations that boost performance while reducing CPU overhead, for example going from 1.1M IOPS to 1.86M IOPS, and then, with a new native NVMe driver (to be added), from 1.1M IOPS to 2.1M IOPS. These performance optimizations will be interesting to look at more closely, including the baseline configuration, the number and type of devices used, and other considerations.
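
Taking the cited figures at face value (the baseline and improved IOPS numbers above; actual results will depend on configuration), the relative gains work out as follows in a quick sanity-check sketch:

```python
# Relative IOPS gains from the figures cited above (illustrative math only).
baseline = 1_100_000       # cited baseline IOPS
optimized = 1_860_000      # with Server 2025 optimizations
native_driver = 2_100_000  # with the planned native NVMe driver

gain_optimized = (optimized - baseline) / baseline * 100
gain_native = (native_driver - baseline) / baseline * 100

print(f"optimized stack: +{gain_optimized:.0f}%")  # about +69%
print(f"native driver:   +{gain_native:.0f}%")     # about +91%
```

In other words, roughly a 69 percent gain from the stack optimizations and about 91 percent with the native driver, relative to the stated baseline.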

Compute, Hyper-V, and Containers

Microsoft has added and enhanced various compute, Hyper-V, and container functionality with Server 2025, including support for larger configurations and more flexibility with GPUs. There are app compatibility improvements for containers, beyond just Nano (the ultra slimmed-down Windows container), that it will be interesting to see and hear more details about.

Hyper-V

Microsoft extensively uses Hyper-V technology across different platforms, including Azure, Windows Servers, and desktops. In addition, Hyper-V is commonly found across various customer and partner deployments on Windows Servers, desktops, and Azure Stack HCI, as well as running on other clouds and in nested virtualization. While Microsoft effectively leverages Hyper-V and continues to enhance it, its marketing has not effectively told and amplified the story of Hyper-V's business benefit and value, including where and how Hyper-V is deployed.

Hyper-V with Server 2025 includes discrete device assignment to VMs (e.g., devices dedicated to a VM). However, dedicating a device such as a GPU to a VM prevents resource sharing, failover clustering, and live migration. On the other hand, Server 2025 Hyper-V supports GPU-P (GPU partitioning), enabling GPUs to be shared across multiple VMs. GPUs can be partitioned and assigned to VMs, with GPUs and GPU partitioning enabled across various hosts.

In addition to partitioning, GPUs can be placed into GPU pools for HA. Live migration and cluster failover of VMs using GPU partitions can be done, subject to requirements including PCIe SR-IOV and AMD Milan or later, or Intel Sapphire Rapids, processors. Another enhancement is Dynamic Processor Compatibility, which allows mixed processor generations to be used across VMs by masking out functionality that is not common across the processors. Other enhancements include optimized UEFI, Secure Boot, TPM, and hot add and removal of NICs.
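
Conceptually, that compatibility masking amounts to exposing only the intersection of the hosts' CPU feature sets to VMs, so a VM can live migrate to any node. A toy sketch (the feature names and node names here are illustrative assumptions, not Windows Server's actual feature list or mechanism):

```python
# Toy model of Dynamic Processor Compatibility: VMs see only the CPU
# features common to every host, so they can run (and migrate) anywhere.
host_features = {
    "node1": {"sse4.2", "avx", "avx2", "avx512f"},  # newer generation
    "node2": {"sse4.2", "avx", "avx2"},             # older generation
    "node3": {"sse4.2", "avx", "avx2", "avx512f"},
}

common = set.intersection(*host_features.values())
print(sorted(common))  # ['avx', 'avx2', 'sse4.2'] -- avx512f is masked out
```

The trade-off is the same as in the real feature: mixed generations gain mobility, while newer-only instructions are hidden from guests.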

Networking

Network ATC provides intent-based deployments where you specify desired outcomes or states, and the configuration is optimized for what you want to do. Network HUD enables always-on monitoring and network remediation. Software-Defined Networking (SDN) optimization brings transparent multi-site L2 and L3 connectivity and improved SDN gateway performance.

SMB over QUIC leverages TLS 1.3 security to streamline local, mobile, and remote networking while enhancing security, with configuration from the server or client. In addition, there is an option to turn off NTLM at the SMB level, along with controls on which versions of SMB to allow or refuse. Also being added is a brute-force attack limiter that slows down SMB authentication attacks.
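
Microsoft has not published the limiter's internals, so purely as a conceptual sketch (class name and behavior are assumptions, not the actual SMB implementation), a delay-escalating authentication limiter might look like this:

```python
class AuthRateLimiter:
    """Toy brute-force limiter: each consecutive failed attempt for an
    account doubles the enforced delay before the next try (capped)."""

    def __init__(self, base_delay=0.5, max_delay=30.0):
        self.base_delay = base_delay
        self.max_delay = max_delay
        self.failures = {}  # account -> consecutive failure count

    def delay_for(self, account):
        n = self.failures.get(account, 0)
        if n == 0:
            return 0.0
        return min(self.base_delay * (2 ** (n - 1)), self.max_delay)

    def record(self, account, success):
        if success:
            self.failures.pop(account, None)  # reset on successful auth
        else:
            self.failures[account] = self.failures.get(account, 0) + 1

limiter = AuthRateLimiter()
for _ in range(4):
    limiter.record("alice", success=False)
print(limiter.delay_for("alice"))  # 0.5 * 2**3 = 4.0 seconds
```

Even modest per-failure delays like this make password-guessing attacks dramatically slower while barely affecting a legitimate user who mistypes once.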

Management, Upgrades, and General User Experience

The upgrade process moving forward with Windows Server 2025 is intended to be seamless and less disruptive. These enhancements include hot patching and flighting (e.g., LTSC Windows Server upgrades arriving similar to how you get regular updates). For hybrid management, an easier-to-use wizard for enabling Azure Arc is planned. For flexibility, WiFi networking and Bluetooth devices, if present, are automatically enabled with Windows Server 2025, aimed at edge and remote deployment scenarios.

Also new is an optional subscription-based licensing model for Windows Server 2025, while the existing perpetual-use licensing is retained. Let me repeat that, so as not to create new vFUD: you can still license Windows Server (and thus Hyper-V) using traditional perpetual models and SKUs.

Additional Resources: Where to Learn More

The following links are additional resources to learn about Windows Server, Server 2025, Hyper-V, and related data infrastructures and tradecraft topics.

What’s New in Windows Server v.Next video from Microsoft Ignite (11/17/23)
Microsoft Windows Server 2025 What's New
Microsoft Windows Server 2025 Preview Build Download
Microsoft Windows Server 2025 Preview Build Download (site)
Microsoft Evaluation Center (various downloads for trial)
Microsoft Eval Center Windows Server 2022 download
Microsoft Hyper-V on Windows Information
Microsoft Hyper-V on Windows Server Information
Microsoft Hyper-V on Windows Desktop (e.g., Win10)
Microsoft Windows Server Release Information
Microsoft Hyper-V Server 2019
Microsoft Azure Virtual Machines Trial
Microsoft Azure Elastic SAN
If NVMe is the answer, what are the questions?
NVMe Primer (or refresh), The NVMe Place.

Additional learning experiences, along with common questions (and answers), can be found in my Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Hyper-V is very much alive and being enhanced. It is used from Microsoft Azure to Windows Server and other platforms at scale, as well as in smaller environments.

If you are looking for alternatives to VMware or simply exploring virtualization options, do your due diligence and check out Hyper-V. Hyper-V may or may not be what you want; however, is it what you need? Looking at Hyper-V now, along with its upcoming enhancements, also positions you to answer when management asks whether you have done your due diligence versus relying on vFUD.

Do a quick proof of concept: spin up a lab and check out currently available Hyper-V, for example on Server 2022 or the 2025 preview, to get a feel for whether it meets your needs and wants. Download the bits and get some hands-on time with Hyper-V and Windows Server 2025.

Wrap up

Hyper-V is alive and enhanced with Windows Server 2025 and other releases.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Nine time Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of UnlimitedIO LLC.

More modernizing data protection, virtualization and clouds with certainty

This is a follow-up to a recent post about modernizing data protection and doing more than simply swapping out media, like changing flat tires on a car, and is also part of the Quantum protecting data with certainty event series.

As part of a recent 15-city event series sponsored by Quantum (that was a disclosure, btw ;) ), titled Virtualization, Cloud and the New Realities for Data Protection, with a theme of strategies and technologies to help you adapt to a changing IT environment, I was asked to present a keynote at the events on modernizing data protection for cloud, virtual, and legacy environments (see earlier and related posts here and here).

Quantum data protection with certainty

Since late June (taking July and most of August off) and wrapping up last week, the event series has traveled to Boston, Chicago, Palo Alto, Houston, New York City, Cleveland, Raleigh, Atlanta, Washington DC, San Diego, Los Angeles, Mohegan Sun CT, St. Louis, Portland Oregon and King of Prussia (Philadelphia area).

The following are a series of posts via IT Knowledge Exchange (ITKE) that covered these events including commentary and perspectives from myself and others.

Data protection in the cloud, summary of the events
Practical solutions for data protection challenges
Big data’s new and old realities
Can you afford to gamble on data protection
Conversations in and around modernizing data protection
Can you afford not to use cloud based data protection

In addition to the themes in the above links, here are some more images, thoughts and perspectives from while being out and about at these and other events.

Datalink does your data center suck sign
While traveling, I saw this advertising sign from Datalink (a Quantum partner that participated in some of the events) in a few different airports; it is a variation of the Data Domain tape sucks attention getter. For those not familiar, the creature on the right is an oversized mosquito, with the company logos on the lower left being Datalink, NetApp, Cisco, and VMware.

goddess of data fertility
When in Atlanta for one of the events, at the Morton's in the SunTrust Plaza, the above sculpture was in the lobby. Its real title is the goddess of fertility; however, I am going to refer to it as the goddess of data fertility. After all, there is no such thing as a data or information recession.

The world and storageio runs on dunkin donuts
Traveling while out and about is like a lot of things, particularly IT and data infrastructure related: hurry up and wait. Not only does America run on Dunkin, so too does StorageIO.

Use your imagination
When out and about, sometimes instead of looking up, or around, take a moment and look down and see what is under your feet, then let your imagination go for a moment about what it means. Ok, nuff of that, drink your coffee and let’s get back to things shall we.

Delta 757 and PW2037 or PW2040
Just like virtualization and clouds, airplanes need physical engines to power them, which have to be energy-efficient and effective. This means being very reliable, performing well, and being fuel-efficient (e.g., a full 757 on a 1,500-mile trip can be in the neighborhood of 65-plus miles per gallon per passenger) with low latency (e.g., a fast trip). In this case, a Pratt and Whitney PW2037 (it could be a PW2040, as Delta has a few of them) on a Delta 757 is seen powering this flight as it climbs out of LAX on a Friday morning, after one of the event series sessions the evening before in LA.

Ambulance waiting at casino
Not sure what to make of this image; however, it was taken while walking into the Mohegan Sun casino, where we did one of the dinner events at the Michael Jordan restaurant.

David Chapa of Quantum in bank vault
Here is an image from one of the events in this series, at a restaurant in Cleveland where the vault is a dining room. No, that is not a banker (well, perhaps a data protection banker); it is the one and only David Chapa (@davidchapa), aka the Chief Technology Evangelist (CTE) of Quantum; check out his blog here.

Just before landing in portland
Nice view just before landing in Portland, Oregon, where that evening's topic was, as you might have guessed, data protection modernization, clouds, and virtualization. Don't be scared; be ready. Learn about concerns and find ways to overcome them to have certainty with data protection in cloud, virtual, and physical environments.
Teamwork
Cloud, virtualization, and data protection modernization is a shared responsibility, requiring teamwork and cooperation between the service or solution provider and the user or consumer. If the customer or consumer of a service is using the right tools, technologies, and best practices, and has done their homework on applicable levels of service with SLAs and SLOs, then they and a service provider with good capabilities should be in harmony with each other. Of course, having the right technologies and tools for the task at hand is also important.
Underground hallway connecting LAX terminals, path to the clouds
Moving your data to the cloud or a virtualized environment should not feel like a walk down a long hallway. Assuming you have done your homework and the service is safe, secure, and well taken care of, there should be fewer concerns. Now, if that is a dark, dirty, dingy, dilapidated, dungeon-like hallway, then you just might be on the highway to hell vs. the stairway to heaven or the clouds ;).

clouds along california coastline
There continues to be barriers to cloud adoption and deployment for data protection among other users.

Unlike the mountain ranges inland from the LA area coastline, which form a barrier to the marine layer clouds rolling further inland, many IT-related barriers can be overcome. The key to overcoming cloud concerns and barriers is identifying and understanding what they are so that resolutions, solutions, best practices, tools, or workarounds can be developed or put into place.

The world and storageio runs on dunkin donuts
Hmm, breakfast of champions and road warriors, Dunkin Donuts aka DD, not to be confused with DDUP the former ticker symbol of Datadomain.

Tiered coffee
In the spirit of not treating everything the same, and of having different technologies or tools to meet various needs or requirements, it only makes sense that there are various hot beverage options, including hot water for tea, plus regular and decaffeinated coffee. Hmm, tiered hot beverages?


On the lighter side, things, including technology of all types, will and do break, even with maintenance, so having a standby plan or a support service to call can come in handy. In this case, the vehicle on the right did not hit the garage door, which came off its tracks due to wear and tear as I was preparing to leave for one of the data protection events. Note to self: consider going from bi-annual garage door preventive maintenance to an annual service check-up.

Some salesman talking on phone in a quiet zone

While not part of or pertaining to data protection, clouds, virtualization, storage, or data infrastructure topics, the above photo was taken in a quiet section of an airport lounge while waiting for a flight to one of the events. This falls into the a picture is worth a thousand words category, as the sign just to the left of the salesperson talking loudly on his cell phone about his big successful customer call says Quiet Zone, with a symbol of no cell phone conversations.

How do I know the guy was not talking about clouds, virtualization, data infrastructure, or storage-related topics? Simple: his conversation was so loud that I and everybody else in the lounge could hear the details of the customer conversation as it was relayed back to sales management.

Note to those involved in sales or customer-related topics: be careful with your conversations in public and pseudo-public places, including airports, airport lounges, airplanes, trains, hotel lobbies, and other venues; you never know to whom you will be broadcasting.

Here is a link to a summary of the events along with common questions, thoughts and perspectives.

Quantum data protection with certainty

Thanks to everyone who participated in the events, including attendees, as well as Quantum and their partners for sponsoring this event series. I look forward to seeing you while out and about at some future event or venue.

Ok, nuff said.

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

StorageIO Momentus Hybrid Hard Disk Drive (HHDD) Moments

This is the third in a series of posts that I have done about Hybrid Hard Disk Drives (HHDDs), along with pieces about Hard Disk Drives (HDDs) and Solid State Devices (SSDs). Granted, the HDD received its AARP card several years ago when it turned 50, and it is routinely declared dead (or read here), even though it continues to evolve alongside maturing SSDs, with both expanding into different markets as well as usage roles.

For those who have not read previous posts about Hybrid Hard Disk Drives (HHDDs) and the Seagate Momentus XT you can find them here and here.

Since my last post, I have been using the HHDDs extensively and recently installed the latest firmware. The new HHDD firmware released by Seagate for the Momentus XT (SD25), like its predecessor SD24, cleaned up some annoyances and improved overall stability. Here is a Seagate post by Mark Wojtasiak discussing SD25 and feedback obtained via the Momentus XT forum from customers.

If you have never done an HDD firmware update, it is not as bad or intimidating as you might expect. The Seagate firmware update tools make it very easy, assuming you have a recent good backup of your data (one that can be restored) and about 10 to 15 minutes of time for a couple of reboots.

Speaking of stability, the Momentus XT HHDDs have been performing well helping to speed up accessing large documents on various projects including those for my new book. Granted an SSD would be faster across the board, however the large capacity at the price point of the HHDD is what makes it a hybrid value proposition. As I have said in previous posts, if you have the need for speed all of the time and time is money, get an SSD. Likewise if you need as much capacity as you can get and performance is not your primary objective, then leverage the high capacity HDDs. On the other hand, if you need a balance of some performance boost with capacity boost and a good value, then check out the HHDDs.

Image of Momentus XT courtesy of www.Seagate.com

Let's shift gears from the product or technology to common questions that I get asked about HHDDs.

Common questions I get asked about HHDDs include:

What is a Hybrid Hard Disk Drive?

A Hybrid Hard Disk Drive includes a combination of rotating HDD, solid state flash persistent memory along with volatile dynamic random access memory (DRAM) in an integrated package or product. The value proposition and benefit is a balance of performance and capacity at a good price for those environments, systems or applications that do not need all SSD performance (and cost) vs. those that need some performance in addition to large capacity.

How does the Seagate Momentus XT differ from other hybrid disks?
One approach is to take a traditional HDD and pair it with an SSD using a controller, packaged in various ways. For example, on a large scale, HDDs and SSDs coexist in the same tiered storage system, managed by the controllers, storage processors, or nodes in the solution, including automated tiering and cache promotion or demotion. The main difference between those paired and tiered storage systems and HHDDs, however, is that in the Momentus XT the HDD, SLC flash (SSD functionality), RAM cache, and their management are all integrated within the disk drive enclosure.

Do I use SSDs and HDDs or just HHDDs?
I have HHDDs installed internally in my laptops. I also have HDDs installed in servers, NAS and disk-to-disk (D2D) backup devices, and Digital Video Recorders (DVRs), along with external SSDs and Removable Hard Disk Drives (RHDDs). The RHDDs are used for archive and master or gold-copy data protection that goes offsite, complementing how I also use cloud backup services as part of my data protection strategy.

What are the technical specifications of a HHDD such as the Seagate Momentus XT?
A 3Gb/s SATA interface, 2.5-inch, 500GB, 7,200 RPM HDD with a 32MB RAM cache and integrated 4GB of SLC flash, all managed by an internal drive processor. Power consumption varies depending on what the device is doing, such as initial power-up, idle, normal, or other operating modes. You can view the specifications of the Seagate Momentus XT 500GB (ST95005620AS, which is what I have) here, as well as the product manual here.


One of my HHDDs on a note pad (paper) and other accessories

Do you need a special controller or management software?
Generally speaking, no. The HHDDs that I have been using plugged and played into my existing laptops' internal drive bays, replacing the HDDs that came with those systems. No extra software was needed for Windows, and no data movement or migration tools were needed other than when initially copying from the source HDD to the new HHDD. The HHDDs do their own caching, read-ahead, and write-behind, independent of the operating system or controller. The reason I say generally speaking is that, as with many devices, some operating systems or controllers may be able to leverage advanced features, so check your particular system's capabilities.

How come the storage system vendors are not talking about these HHDDs?
Good question. I assume it has a lot to do with the investment (people, time, engineering, money, and marketing) that they have made or are making in controller and storage system software functionality to effectively create hybrid tiered storage systems using SSDs and HDDs on different scales. There have been some packaged HHDD systems or solutions brought to market by different vendors that combine HDDs and SSDs into a single physical package, glued together with some software and controllers or processors to appear as a single system. I would not be surprised to see discrete HHDDs (where the HDD, flash SSD, and RAM are all one integrated product) appear in lower-end NAS or multifunction storage systems, as well as in backup, dedupe, or other systems that require large amounts of capacity and a performance boost now and then.

Why do I think this? Simple: say you have five HHDDs, each with 500GB of capacity, configured as a RAID 5 set, resulting in 2TB of usable capacity. Using the Momentus XT as a hypothetical example, that yields 5 x 4GB, or 20GB, of flash cache to help accelerate write operations during data dumps, backups, or other updates. Granted, that is an overly simplified example, and storage systems can be found with hundreds of GB of cache; however, think in terms of value, or low cost balancing performance, capacity, and cost for different usage scenarios. Examples include applications such as bulk or scale-out file and object storage, including cloud or big data, entertainment, server (Citrix/Xen, Microsoft/Hyper-V, VMware/vSphere) and desktop virtualization or VDI, disk-to-disk (D2D) backup, and business analytics, among others. The common tenet of those applications and usage scenarios is a combination of I/O and storage consolidation in a cost-effective manner, addressing the continuing gap between storage capacity and I/O performance.
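
The capacity math in that example (RAID 5 provides N-1 drives of usable capacity, with one drive's worth of space consumed by parity) works out as follows:

```python
# Hypothetical RAID 5 set of five Momentus XT HHDDs, per the example above.
drives = 5
capacity_gb = 500  # per-drive HDD capacity
flash_gb = 4       # integrated SLC flash per drive

usable_gb = (drives - 1) * capacity_gb  # one drive's worth goes to parity
total_flash_gb = drives * flash_gb      # aggregate flash cache across the set

print(usable_gb, "GB usable")      # 2000 GB, i.e. about 2TB
print(total_flash_gb, "GB flash")  # 20 GB of aggregate flash cache
```
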

Data Center and I/O Bottlenecks

Storage and I/O performance gap

Do you have to backup HHDDs?
Yes, just as you would want to backup or protect any SSD or HHD device or system.

How does data get moved between the SSD and the HDD?
Other than the initial data migration from the old HDD (or SSD) to the HHDD (unless you are starting with a new system), once your data and applications exist on the HHDD, the internal processes of the device automatically manage the RAM, flash, and HDD activity. Unlike in a tiered storage system, where data blocks or files may be moved between different types of storage devices, inside the HHDD all data gets written to the HDD, while the flash and RAM are used as buffers for caching depending on activity needs. If you have sat through or listened to a NetApp or HDS discussion of using cache for tiering, what the HHDDs do is similar in concept, however on a smaller scale at the device level, potentially even in a complementary mode in the future. Other functions performed inside the HHDD by its processor include reading and writing, managing the caches, bad block replacement or revectoring on the HDD, wear leveling of the SLC flash, and other routine tasks such as integrity checks and diagnostics. Unlike paired storage solutions, where data gets moved between tiers or types of devices, once data is stored in the HHDD it is managed by the device, similar to how an SSD or HDD would move blocks of data to and from the specific media while leveraging the RAM cache as a buffer.
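
As a toy illustration of that behavior (a deliberately simplified model of my own, not Seagate's actual caching algorithm), writes always land on the disk while recently read blocks get promoted into a small bounded flash cache:

```python
from collections import OrderedDict

class ToyHHDD:
    """Simplified HHDD model: all writes persist to the 'HDD'; recently
    read blocks are promoted into a small LRU 'flash' read cache."""

    def __init__(self, cache_blocks=4):
        self.hdd = {}               # backing store: every block lives here
        self.flash = OrderedDict()  # bounded LRU read cache
        self.cache_blocks = cache_blocks

    def write(self, lba, data):
        self.hdd[lba] = data        # writes always go to the HDD
        self.flash.pop(lba, None)   # drop any stale cached copy

    def read(self, lba):
        if lba in self.flash:       # cache hit: refresh recency, skip the HDD
            self.flash.move_to_end(lba)
            return self.flash[lba]
        data = self.hdd[lba]        # cache miss: read from the HDD
        self.flash[lba] = data      # promote the block into flash
        if len(self.flash) > self.cache_blocks:
            self.flash.popitem(last=False)  # evict least recently read block
        return data

drive = ToyHHDD(cache_blocks=4)
for lba in range(6):
    drive.write(lba, f"block{lba}")
for lba in range(6):
    drive.read(lba)
print(len(drive.hdd), len(drive.flash))  # 6 4: all blocks on HDD, last 4 reads cached
```

The point of the model is the one made above: nothing ever moves off the HDD; the flash only shadows the hottest data.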

Where is the controller that manages the SSD and HDD?
The HHDD itself is the controller, per se, in that the internal processor that manages the HDD also directly accesses the RAM and flash.

What type of flash is used and will it wear out?
The XT uses SLC (single-level cell) flash, which, with wear leveling, has a good duty cycle (life span) and is what is typically found in higher-end flash SSD solutions, vs. the lower-cost MLC (multi-level cell) flash.

Have I lost any data from it yet?
No, at least nothing that was not my own fault from saving the wrong file in the wrong place and having to recover from one of my recent D2D copies or the cloud. Oh, regarding what have I done with the HDDs that were replaced by the HHDDs? They are now an extra gold master backup copy as of a particular point in time and are being kept in a safe secure facility, encrypted of course.

Have you noticed a performance improvement?
Yes. Performance will vary; however, in many cases I have seen performance comparable to SSDs on both reads and writes, as long as the HDD keeps up with the flash and RAM cache. Even as larger amounts of data are written, I have seen better performance compared to HDDs. The caveat, however, is that initially you may see little to marginal performance improvement, yet over time, particularly with the same files, performance tends to improve. Working on large documents of tens to hundreds of MB in size, I noticed good performance when doing saves compared to working with them on an HDD.

What do the HHDDs cost?
Amazon.com has the 500GB model for about $100, which is about $40 to $50 less than when I bought my most recent one last fall. I have heard from other people that you can find them at even lower prices at other venues. In the spirit of disclosure, I bought one of my HHDDs from Amazon, and Seagate gave me one to test.

Will I buy more HHDDs or switch to SSDs?
Where applicable I will add SSDs as well as HDDs; however, where possible and practical, I will also add HHDDs, perhaps even replacing the HDDs in my NAS system with HHDDs at some time, or maybe trying them in a DVR.

What is the down side to the HHDDs?
I'm generating and saving more data on the devices at a faster rate. When I installed them I wondered if I would ever fill up a 500GB drive; I still have hundreds of GBytes free or available for use, however I am also able to carry more reference data or information than in the past. In addition to more reference data including videos, audio, images, slide decks and other content, I have also been able to keep more versions or copies of documents, which has been handy on the book project. Data that changes gets backed up D2D as well as to my cloud provider, including while traveling. Leveraging compression and dedupe, given that many chapters or other content are similar, not as much data actually gets transmitted when doing cloud backups, which has been handy when backing up from an airplane flying over the clouds. A wish that I have for the XT type of HHDD is for vendors such as Seagate to add Self Encrypting Disk (SED) capabilities along with continued intelligent power management (IPM) enhancements.
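The observation above, that similar chapters mean not much data actually gets transmitted, can be sketched with a simple content-hashing dedupe model. This is a generic illustration (fixed-size chunks, SHA-256 digests), not how any particular backup product works:

```python
import hashlib

def chunks_to_send(data, seen_hashes, chunk_size=4096):
    """Return only the chunks whose hashes the backup target has not seen,
    mimicking how dedupe cuts transmission for largely similar files."""
    new = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in seen_hashes:
            seen_hashes.add(digest)
            new.append(chunk)
    return new

seen = set()
draft1 = b"A" * 8192 + b"chapter one v1"
draft2 = b"A" * 8192 + b"chapter one v2"    # mostly the same content

# First backup: the two identical "A" chunks dedupe against each other,
# so only one of them plus the unique tail chunk gets transmitted.
sent_first = chunks_to_send(draft1, seen)
# Second backup: only the changed tail chunk is new.
sent_second = chunks_to_send(draft2, seen)
print(len(sent_first), len(sent_second))    # → 2 1
```

A repeat backup of an unchanged file would transmit nothing at all, since every chunk hash is already known to the target.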

Why do I like the HHDD?
Simple: it solves both business and technology challenges while being an enabler, giving me a balance of performance for productivity and capacity in a cost effective manner while being transparent to the systems it works with.

Here are some related links to additional material:
Data Center I/O Bottlenecks Performance Issues and Impacts
Has SSD put Hard Disk Drives (HDDs) On Endangered Species List?
Seagate Momentus XT SD 25 firmware
Seagate Momentus XT SD25 firmware update coming this week
A Storage I/O Momentus Moment
Another StorageIO Hybrid Momentus Moment
As the Hard Disk Drive (HDD) continues to spin
Funeral for a Friend
Seagate Momentus XT product specifications
Seagate Momentus XT product manual
Technology Tiering, Servers Storage and Snow Removal
Self Encrypting Disks (SEDs)

Ok, nuff said for now

Cheers Gs

Greg Schulz – Author The Green and Virtual Data Center (CRC), Resilient Storage Networks (Elsevier) and coming summer 2011 Cloud and Virtual Data Storage Networking (CRC)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

What records will EMC break in NYC January 18, 2011?


In case you have not seen or heard, EMC is doing an event next week in New York City (NYC) at the AXA Equitable Center, winter weather and snow storm clouds permitting (along with adequate tools or technologies to deal with the snow removal), that has a theme around breaking records. If you have yet to see any of the advertisements, blogs, tweets, facebook, friendfeed, youtube or other messages, here (and here and here) are a few links to learn more as well as register to view the event.


There is already speculation along with IT industry wiki leaks of what will be announced or talked about next week that you can google or find at some different venues.

The theme of the event is breaking records.

What might we hear?

In addition to the advisor, author, blogger and consultant hats that I wear, I'm also in EMC's analyst relations program and as such under NDA; consequently, as to what the actual announcement will be next week, no comment for now. BTW, I also wear other hats including one from Boeing even though I often fly on Airbus products as well.

If it's not Boeing I'm not going, except I do also fly Airbus, Embraer and Bombardier products
Other hats I wear

However, how about some fun as to what might be covered at next week's event without getting into a wiki leak situation?

  • A no brainer would be product (hardware, software, services) related, as it is mid January and, if you have been in the industry for more than a year or two, you might recall that EMC tends to do a mid winter launch around this time of year along with sometimes an early summer refresh. Guess what time of the year it is.
  • I'm guessing lots of superlatives, perhaps at a record breaking pace (e.g. revolutionary first, explosive growth, exponential explosive growth, perfect storm among others that could be candidates for the Storagebrain wall of fame or shame)
  • Maybe we will even hear that EMC has set a new record for the number of members in Chad's army, aka the vSpecialists focused on vSphere related topics, along with a quietly growing number of Microsoft Hyper-V specialists.
  • That EMC has a record number of twitter tweeps engaged in conversations (or debates) with different audiences, collectives, communities, competitors, customers, individuals, organizations, partners or venues among others.
  • Possibly that their involvement in the CDP (Carbon Disclosure Project) has resulted in enough savings to offset the impact of hosting the event, making it carbon and environment neutral. After all, we already know that EMC has been in the CDP as in Continual or Constant Data Protection as well as Complete or Comprehensive Data Protection along with Cloud Data Protection, not to mention Common Sense Data Protection (CSDP), for some time now.
  • Perhaps something around the number of acquisitions, patents, platforms, products and partners they have amassed recently.
  • For investors, wishful thinking that they will be moving their stock into record territory.
  • I'm also guessing we will hear or see a record number of tweets, posts, videos and stories.
  • To be fair and balanced, I'm also expecting a record number of counter tweets, counter posts, counter videos and counter stories coming out of the event.

Some records I would like to see EMC break, however I'm not going to hold my breath, at least for next week, include:

  • Announcement of upping the game in the performance benchmarking battles with record setting or breaking SPC benchmark results submitted on their own (instead of via a competitor or here) in different categories of block storage devices, along with entries for SSD based, clustered and virtualized. Of course we would expect to hear how those benchmarks and workload simulations really do not matter, which would be fine; at least they would have broken some records.
  • Announcement of having shipped more hard disk drives (HDD) than anyone else in conjunction with shipping more storage than anyone else. Despite HDDs being continually declared dead (they are not) and SSD gaining traction, EMC would have a record breaking leg to stand on if they qualify the amount of storage shipped as external, shared or networked (SAN or NAS) as opposed to collective (e.g. HP with servers and storage among others).
  • Announcement that they are buying Cisco, or Cisco is buying them, or that they and Cisco are buying Microsoft and Oracle.
  • Announcement of being proud of the record setting season of the Patriots, devastated by losing a close and questionable game to the NY Jets, and wishing them well in the 2010 NFL Playoffs (I'm just sayin…).
  • Announcement of being the first vendor and solution provider to establish SaaS, PaaS, IaaS, DaaS and many other XaaS offerings via their out of this world new moon base (plans underway for Mars as part of a federated offering).
  • Announcement that Fenway Park will be rebranded as the house that EMC built (or rebuilt).

Disclosure: I will be in NYC on Tuesday the 18th as one of EMC's many guests for whom they have picked up airfare and lodging; thanks to Len Devanna and the EMC social media crew for reaching out and extending the invitation.

Other guests of the event will include analysts, advisors, authors, bloggers, beat writers, consultants, columnists, customers, editors, media, paparazzi, partners, press, protesters (hopefully polite ones), publishers, pundits, twitter tweeps and writers among others.

I wonder if there will also be a record number of disclosures made by others attending the event as guests of EMC?

More after (or maybe during) the event.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

End to End (E2E) Systems Resource Analysis (SRA) for Cloud and Virtual Environments

A new StorageIO Industry Trends and Perspective (ITP) white paper titled “End to End (E2E) Systems Resource Analysis (SRA) for Cloud, Virtual and Abstracted Environments” is now available at www.storageio.com/reports compliments of SANpulse technologies.

End to End (E2E) Systems Resource Analysis (SRA) for Virtual, Cloud and abstracted environments: Importance of Situational Awareness for Virtual and Abstracted Environments

Abstract:
Many organizations are in the planning phase or already executing initiatives moving their IT applications and data to abstracted, cloud (public or private), virtualized or other forms of efficient, effective dynamic operating environments. Others are in the process of exploring where, when, why and how to use various forms of abstraction techniques and technologies to address various issues, including opportunities to leverage virtualization and abstraction techniques that enable IT agility, flexibility, resiliency and scalability in a cost effective yet productive manner.

An important need when moving to a cloud or virtualized dynamic environment is to have situational awareness of IT resources. This means having insight into how IT resources are being deployed to support business applications and to meet service objectives in a cost effective manner.

Awareness of IT resource usage provides insight necessary for both tactical and strategic planning as well as decision making. Effective management requires insight into not only what resources are at hand but also how they are being used to decide where different applications and data should be placed to effectively meet business requirements.

Learn more about the importance and opportunities associated with gaining situational awareness using E2E SRA for virtual, cloud and abstracted environments in this StorageIO Industry Trends and Perspective (ITP) white paper compliments of SANpulse technologies by clicking here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Spring 2010 StorageIO Newsletter

Welcome to the spring 2010 edition of the Server and StorageIO (StorageIO) newsletter.

This edition follows the inaugural issue (Winter 2010) incorporating feedback and suggestions as well as building on the fantastic responses received from recipients.

A couple of enhancements included in this issue (marked as New!) are a Featured Related Site along with Some Interesting Industry Links. Another enhancement based on feedback is additional commentary that in upcoming issues will expand to include a column article along with industry trends and perspectives.

Spring 2010 Newsletter

You can access this newsletter via various social media venues (some are shown below) in addition to StorageIO web sites and subscriptions. Click on the following links to view the spring 2010 newsletter as HTML or PDF, or to go to the newsletter page.

Follow via Google FeedBurner here or via email subscription here.

You can also subscribe to the newsletter by simply sending an email to newsletter@storageio.com.

Enjoy this edition of the StorageIO newsletter, let me know your comments and feedback.

Also, a very big thank you to everyone who has helped make StorageIO a success!

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

What is the Future of Servers?

Recently I provided some comments and perspectives on the future of servers in an article over at Processor.com.

In general, blade servers will become more ubiquitous; that is, they won't go away even with cloud, rather they will become more commonplace with even higher density processors with more cores and performance along with faster I/O and larger memory capacity per given footprint.

While the term blade server may fade, giving way to some new term or phrase, rest assured their capabilities and functionality will not disappear; rather they will be further enhanced to support virtualization with VMware vSphere, Microsoft Hyper-V and Citrix/Xen along with public and private clouds, both for consolidation and in the next wave of virtualization called life beyond consolidation.

The other trend is that not only will servers be able to support more processing and memory per footprint; they will also do so while drawing less energy and requiring lower cooling demands, hence more GHz per watt along with energy savings modes when less work needs to be performed.

Another trend is around convergence both in terms of packaging along with technology improvements from a server, I/O networking and storage perspective. For example, enhancements to shared PCIe with I/O virtualization, hypervisor optimization, and integration such as the recently announced EMC, Cisco, Intel and VMware VCE coalition and vblocks.

Read more including my comments in the article here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Did HP respond to EMC and Cisco VCE with a Microsoft Hyper-V bundle?

Last week EMC and Cisco, along with Intel and VMware, created the VCE coalition along with a consumption model based services joint venture called Acadia.

In other activity last week, HP made several announcements including:

  • Improvements in sensing technologies
  • StorageWorks enhancements (SVSP, IBRIX, EVA and Hyper-V, X9000 and others)

EMC and Cisco were relatively quiet this week on the announcement front; however, HP unleashed another round of announcements that among others included:

  • Quarterly financial results
  • SMB server, storage, network and virtualization enhancements (here, here, here and here)
  • Acquisition of 3Com (see related blog post here)

The reason I bring up all of this HP activity is not to simply recap all of the news and announcements, which you can find on many other blogs or news sites; rather, it is that I see a trend.

That trend appears to be one of a company on the move, not ready to sit back on its laurels, rather a company that continues to innovate in-house and via acquisitions.

Some of those acquisitions, including IBRIX, were relatively small; some, like EDS last year and 3Com this week, would be seen by some as large and by others perhaps as medium sized. Either way, HP has been busy expanding its portfolio of technology solution and services offerings along with its comprehensive IT stack.

Cisco, EMC and HP are examples of companies looking to expand their IT stacks and footprint in terms of diversifying current product focus and reach, along with extending into new or further into existing customer and market sector areas. Last week's EMC and Cisco news signaled two large players combining their resources to make virtualization and private clouds easy to acquire and deploy for mid to large size environments, with a theme around VMware.

This week, buried in all of the HP announcements, was one that caught my eye: a virtualization solution bundle designed for small business (that is, something smaller than a vblock0), something that was missing in the Cisco and EMC news of last week, however one that I'm sure will be addressed sooner versus later.

In the case of HP, the other notable thing with their virtualization bundle was the focus on the mid to small businesses that fall into the broad and diverse SMB category, not to mention including Microsoft.

Yes, that is right: while a VMware based solution from HP would be a no-brainer given all of the activity the two companies are involved in as joint partners, Microsoft Hyper-V was front and center.

Is this a reaction to last week's Cisco and EMC salvo?

Perhaps, and some will jump to that conclusion. However, I will also offer this alternative scenario: 85 to 90 percent of servers consolidated into virtual machines (VMs) on VMware or other hypervisors, including Microsoft Hyper-V, are Windows based.

Likewise, as one of the largest if not the largest server vendors (pick your favorite server category or price band), who also happens to be one of the largest Microsoft Windows partners, I would have been more surprised if HP had not done a Hyper-V bundle.

While Cisco and EMC may stay the course, or at least talk the talk with a VMware affinity in the Acadia and VCE coalition for the time being, I would expect HP to flex its wings a bit and show diversity of support for multiple hypervisors and operating systems across its various server, network, storage and services platforms.

I would not be surprised to see some VMware based bundles appear over time, building on previously announced HP BladeSystem Matrix solution bundles.

Welcome back my friends to the show that never ends, that is the on-going server, storage, networking, virtualization, hardware, software and services solutions game for enabling the adaptive, dynamic, flexible, scalable, resilient, service oriented, public or private cloud, infrastructure as a service green and virtual data center.

Stay tuned, there is much more to come!

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Should Everything Be Virtualized?

Storage I/O trends

Should everything, that is all servers, storage and I/O along with facilities, be virtualized?

The answer not surprisingly should be it depends!

Denny Cherry (aka Mrdenny) over at ITKE did a great recent post about applications not being virtualized, particularly databases. On some of the points or themes we are on the same or similar page, while on others we differ slightly, though not by much.

Unfortunately consolidation is commonly misunderstood to be the sole function or value proposition of server virtualization given its first wave focus. I agree that not all applications or servers should be consolidated (note that I did not say virtualized).

From a consolidation standpoint, the emphasis is often on boosting resource use to cut physical hardware and management costs by boosting the number of virtual machines (VMs) per physical machine (PM). Ironically, while VMs using VMware, Microsoft Hyper-V or Citrix/Xen among others can leverage a common gold image for cloning or rapid provisioning, there are still separate operating system instances and applications that need to be managed for each VM.

Sure, VM tools from the hypervisor along with 3rd party vendors help with these tasks as well as storage vendor tools including dedupe and thin provisioning help to cut the data footprint impact of these multiple images. However, there are still multiple images to manage providing a future opportunity for further cost and management reduction (more on that in a different post).
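The gold-image point above, a shared base yet separate instances to manage, can be sketched as a copy-on-write overlay. Hypervisor linked clones actually work on disk blocks rather than file paths, so treat this purely as a conceptual illustration (the class and file names are made up):

```python
# Conceptual sketch of gold-image cloning with copy-on-write overlays:
# every VM shares the read-only base image and stores only its own changes,
# yet each clone remains a separate instance to patch and manage.

class VmClone:
    def __init__(self, base_image):
        self.base = base_image     # shared, read-only gold image
        self.overlay = {}          # per-VM copy-on-write changes

    def read(self, path):
        # Prefer the VM's private copy; fall back to the shared base.
        return self.overlay.get(path, self.base.get(path))

    def write(self, path, data):
        self.overlay[path] = data  # the change stays private to this VM

gold = {"/etc/os-release": "GoldOS 1.0", "/bin/app": "v1"}
vm1, vm2 = VmClone(gold), VmClone(gold)
vm1.write("/bin/app", "v2")                          # patch only VM1

print(vm1.read("/bin/app"), vm2.read("/bin/app"))    # → v2 v1
```

The shared base keeps the storage footprint down (much as dedupe does), but as the text notes, each clone still diverges over time and must be managed individually.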

Getting back on track:

Some reasons that all servers or applications cannot be consolidated include among others:

  • Performance, response time, latency and Quality of Service (QoS)
  • Security requirements including keeping customers or applications separate
  • Vendor support of software on virtual or consolidated servers
  • Financial where different departments own hardware or software
  • Internal political or organizational barriers and turf wars

On the other hand, for those that see virtualization as enabling agility and flexibility, that is, life beyond consolidation, there are many deployment opportunities for virtualization (note that I did not say consolidation). For some environments and applications the emphasis can be on performance, quality of service (QoS) and other service characteristics, where the ratio of VMs to PMs will be much lower, if not one to one. This is where Mrdenny and I are essentially on the same page, perhaps saying it differently, with plenty of caveats and clarification needed of course.

My view is that in life beyond consolidation, many more servers or applications can be virtualized than might be otherwise hosted by VMs (note that I did not say consolidated). For example, instead of a high number or ratio of VMs to PMs, a lower number and for some workloads or applications, even one VM to PM can be leveraged with a focus beyond basic CPU use.

Yes you read that correctly, I said why not configure some VMs on a one to one PM basis!

Here's the premise: today's current wave or focus is around maximizing the number of VMs and/or the reduction of physical machines to cut capital and operating costs for under-utilized applications and servers, thus the move to stuff as many VMs onto a PM as possible.

However, for those applications that cannot be consolidated as outlined above, there is still a benefit to having a VM dedicated to a PM. For example, dedicating a PM (blade, server or perhaps core) to a VM allows performance and QoS aims to be met while still providing the ability for operational and infrastructure resource management (IRM), DCIM or ITSM flexibility and agility.

Meanwhile, during busy periods the application, such as a database server, could have its own PM, yet during off-hours some other VM could be moved onto that PM for backup or other IRM/DCIM/ITSM activities. Likewise, by having the VM under the database with a dedicated PM, the application could be moved proactively for maintenance or, in a clustered HA scenario, to support BC/DR.

What can and should be done?
First and foremost, decide how many VMs per PM is the right number for your environment and different applications to meet your particular requirements and business needs.

Identify various VM to PM ratios to align with different application service requirements. For example, some applications may run on virtual environments with a higher number of VMs to PMs, others with a lower number of VMs to PMs and some with a one VM to PM allocation.

Certainly there will be, for different reasons, the need to keep some applications on a direct PM without introducing a hypervisor and VM; however, many applications and servers can benefit from virtualization (again note, I did not say consolidation) for agility, flexibility, BC/DR, HA and ease of IRM, assuming the costs work in your favor.
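As a back-of-the-envelope illustration of aligning VM to PM ratios with service tiers as described above, here is a sketch with purely hypothetical tier names, VM counts and ratios; plug in your own numbers from your requirements and capacity planning:

```python
import math

# Hypothetical capacity sketch: each service tier gets its own VM-to-PM
# consolidation ratio; the PM count then follows from the VM inventory.
tiers = {
    # tier name: (vm_count, vms_per_pm)
    "latency-sensitive db": (4, 1),       # one VM per PM for QoS
    "general line of business": (60, 10), # moderate consolidation
    "dev/test": (90, 30),                 # aggressive consolidation
}

total_pms = 0
for tier, (vms, ratio) in tiers.items():
    pms = math.ceil(vms / ratio)          # round up: PMs come in whole units
    total_pms += pms
    print(f"{tier}: {vms} VMs at {ratio}:1 -> {pms} PMs")

print("total physical machines:", total_pms)   # → 13
```

Note that the latency-sensitive tier still buys the IRM/DCIM/ITSM agility of a VM (live migration, HA, maintenance moves) even at a 1:1 ratio.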

Additional general to do or action items include among others:

  • Look beyond CPU use also factoring in memory and I/O performance
  • Keep response time or latency in perspective as part of performance
  • More and fast memory are important for VMs as well as for applications including databases
  • High utilization may not show high hit rates or effectiveness of resource usage
  • Fast servers need fast memory, fast I/O and fast storage systems
  • Establish tiers of virtual and physical servers to meet different service requirements
  • See efficiency and optimization as more than simply driving up utilization to cut costs
  • Productivity and improved QoS are also tenets of an efficient and optimized environment

These are themes among others that are covered in chapters 3 (What Defines a Next-Generation and Virtual Data Center?), 4 (IT Infrastructure Resource Management), 5 (Measurement, Metrics, and Management of IT Resources), as well as 7 (Servers—Physical, Virtual, and Software) in my book “The Green and Virtual Data Center” (CRC), which you can learn more about here.

Welcome to life beyond consolidation, the next wave of desktop, server, storage and IO virtualization along with the many new and expanded opportunities!

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Data Protection for Virtual Environments

Storage I/O trends

Server virtualization continues to be a popular industry focus, particularly to discuss IT data center power, cooling, floor space and environmental (PCFE) issues commonly called green computing along with supporting next generation virtualized data center environments. There are many challenges and options related to protecting data and applications in a virtual server environment.

Here's a link to a new white paper by me that looks at various issues and challenges along with various approaches for enabling data protection for virtual environments. This in-depth report explains what your organization needs to know as it moves into a virtual realm. Topics include background and issues, a glossary of common virtual terms, re-architecting data protection, technologies and techniques, virtual machine movement, industry trends and much more…

The report is called Data Protection Options for Virtualized Servers: Demystifying Virtual Server Data Protection. Have a look for yourself.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved