Microsoft Hyper-V Is Alive and Enhanced With Windows Server 2025

Yes, you read that correctly: Microsoft Hyper-V is alive and enhanced with Windows Server 2025, formerly known as Windows Server v.Next. Note that as of this writing, Windows Server 2025 is available as a preview build for download and testing.

What about the myth that Hyper-V is discontinued?

Despite recent FUD (fear, uncertainty, doubt), misinformation, and fake news, Microsoft Hyper-V is not dead. Nor has Hyper-V been discontinued, as some claim. Some Hyper-V FUD is tied to customers and partners of VMware looking for alternatives following Broadcom’s acquisition of VMware. More on Broadcom and VMware here, here, here, here, and here.

As a result of Broadcom’s VMware acquisition and the resulting challenges for partners and customers (see links above), organizations are doing due diligence, looking for replacements or alternatives. In addition, some vendors are leveraging the current VMware challenges to try and position themselves as the best hypervisor virtualization safe harbor for customers. Thus, some vendors, along with their partners, influencers, and amplifiers, are using FUD to keep prospects from looking at or considering Hyper-V.

Virtual FUD (vFUD)

First, let’s shut down some Virtual FUD (vFUD). As mentioned above, some are claiming that Microsoft has discontinued Hyper-V. Specifically, the vFUD centers on Microsoft terminating a specific license SKU (e.g., the free Hyper-V Server 2019 SKU). For those unfamiliar with the discontinued SKU (Hyper-V Server 2019), it’s a headless (no desktop GUI) version of Windows Server running Hyper-V VMs, nothing more, nothing less.

Does that mean the Hyper-V technology is discontinued? No.

Does that mean Windows Server and Hyper-V are discontinued? No.

Microsoft is terminating a particular stripped-down Windows Server version SKU (e.g., Hyper-V Server 2019), not the underlying technology, including Windows Server and Hyper-V.

To repeat: a specific SKU or distribution (Hyper-V Server 2019) has been discontinued, not Hyper-V. Meanwhile, other distributions of Windows Server with Hyper-V continue to be supported and enhanced, including the upcoming Windows Server 2025 and Server 2022, among others.

On the other hand, there is also some old vFUD dating back many years, even a decade, to when some last experienced using, trying, or looking at Hyper-V. For example, their last look at Hyper-V might have been in the Server 2016 or earlier era.

If you are a vendor or influencer throwing vFUD around, at least get some new vFUD and use it in new ways. Better yet, up your game and marketing so you don’t rely on old vFUD. Likewise, if you are a vendor partner and have not extended your software or service support for Hyper-V, now is a good time to do so.

Watch out for falling into the vFUD trap of thinking Hyper-V is dead and thus missing out on new revenue streams. At a minimum, take a look at current and upcoming enhancements for Hyper-V as part of your due diligence instead of working off of old vFUD.

Where is Hyper-V being used?

From on-site (aka on-premises, on-prem) and edge deployments on standalone and clustered Windows Servers, to Azure Stack HCI. From Azure and other Microsoft platforms or services to Windows desktops, as well as home labs, among many other scenarios.

Do I use Hyper-V? Yes. When I retired from the vExpert program after ten years, I moved all of my workloads from a VMware environment to Hyper-V, including *nix, containers, and Windows VMs, on-site and on the Azure cloud.

How Hyper-V Is Alive and Enhanced With Windows Server 2025

Is Hyper-V alive and enhanced with Windows Server 2025? Yup.

Microsoft announced the Windows Server 2025 preview build, formerly known as Windows Server v.Next, on January 26, 2024 (you can get the bits here). Note that Microsoft uses Windows Server v.Next as a generic placeholder for next-generation Windows Server technology.

A reminder that the cadence of Windows Server Long Term Servicing Channel (LTSC) versions has been about three years (2012 R2, 2016, 2019, 2022, and now 2025), along with interim updates.

What’s enhanced with Hyper-V and Windows Server 2025

    • Hot patching of running servers (requires Azure Arc management) with almost instant implementation and no reboot for physical, virtual, and cloud-based Windows Servers.
    • Scaling to even more compute processors and RAM for VMs.
    • Server storage I/O performance updates, including NVMe optimizations.
    • Active Directory (AD) improvements for scaling, security, and performance.
    • Storage Replica and clustering capability enhancements.
    • Hyper-V GPU partitioning and pools, including live migration of VMs using GPUs.

More Enhancements for Hyper-V and Windows Server 2025

Active Directory (AD)

Enhanced performance using all CPUs in a processor group of up to 64 cores to support scaling and faster processing. LDAP over TLS 1.3, Kerberos support for AES SHA-256/384, new AD functional levels, a local KDC (local Kerberos), improved replication priority, NTLM retirement, and other security hardening. In addition, 64-bit Long value IDs (LIDs) are supported, along with a new database schema using 32K pages vs. the previous 8K pages. You will need to upgrade forest-wide across domain controllers (at least Server 2016 or later) to leverage the new larger page size. Note that there is also backward compatibility using 8K pages until all domain controllers are upgraded.
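As a quick sense of scale (my arithmetic, not a Microsoft benchmark), the new page size holds four times as much per database page, meaning fewer pages touched for large objects and directories:

$$ \frac{32\,\mathrm{KB}}{8\,\mathrm{KB}} = 4\times \ \text{the data per page} $$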

Storage, HA, and Clustering

Windows Server continues to offer flexible options for using storage how you want or need to, from traditional direct attached storage (DAS) to storage area networks (SAN) to Storage Spaces Direct (S2D) software-defined storage, including NVMe, NVMe over Fabrics (NVMe-oF), SAS, Fibre Channel, and iSCSI, along with file-attached storage. Some other storage and HA enhancements include Storage Replica logging and compression performance and stretched S2D multi-site optimization.

Failover Cluster enhancements include AD-less clusters, cert-based VM live migration for the edge, cluster-aware updating reliability, and performance improvements. ReFS enhancements include dedupe and compression optimizations.

Other NVMe enhancements include optimizations that boost performance while reducing CPU overhead, for example, going from 1.1M IOPS to 1.86M IOPS, and then, with a new native NVMe driver (to be added), from 1.1M IOPS to 2.1M IOPS. These performance optimizations will be interesting to look at closer, including baseline configuration, number and type of devices used, and other considerations.
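For context, here is the simple arithmetic behind those cited gains (using the numbers above; actual results will depend on configuration):

$$ \frac{1.86 - 1.1}{1.1} \approx 0.69 \ \text{(about 69% more IOPS)}, \qquad \frac{2.1 - 1.1}{1.1} \approx 0.91 \ \text{(about 91% more IOPS)} $$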

Compute, Hyper-V, and Containers

Microsoft has added and enhanced various Compute, Hyper-V, and Container functionality with Server 2025, including supporting larger configurations and more flexibility with GPUs. There are app compatibility improvements for containers that will be interesting to see and hear more details about besides just Nano (the ultra slimmed-down Windows container).

Hyper-V

Microsoft extensively uses Hyper-V technology across different platforms, including Azure, Windows Servers, and desktops. In addition, Hyper-V is commonly found across various customer and partner deployments on Windows Servers, desktops, Azure Stack HCI, running on other clouds, and nested virtualization. While Microsoft effectively leverages Hyper-V and continues to enhance it, its marketing has not effectively told and amplified the business benefits and value, including where and how Hyper-V is deployed.

Hyper-V with Server 2025 includes discrete device assignment to VMs (e.g., devices dedicated to a VM). However, dedicating a device like a GPU to a VM prevents resource sharing, failover clustering, or live migration. On the other hand, Server 2025 Hyper-V supports GPU-P (GPU partitioning), enabling GPUs to be shared across multiple VMs. GPUs can be partitioned and assigned to VMs, with GPUs and GPU partitioning enabled across various hosts.

In addition to partitioning, GPUs can be placed into GPU pools for HA. Live migration and cluster failover of VMs using GPU partitions can be done, subject to requirements including PCIe SR-IOV and AMD Milan or later or Intel Sapphire Rapids processors, among others. Another enhancement is Dynamic Processor Compatibility, which allows mixed processor generations to be used across VMs by masking out functionality that is not common across processors. Other enhancements include optimized UEFI, secure boot, TPM, and hot add and removal of NICs.
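As a rough sketch of what working with GPU partitioning looks like from a script (my illustration, assuming a Windows Server host where the Hyper-V PowerShell module exposes the GPU partitioning cmdlets Get-VMHostPartitionableGpu and Add-VMGpuPartitionAdapter, and a VM named TestVM, which is hypothetical):

    import subprocess

    def ps(command: str) -> str:
        """Run a PowerShell command and return its stdout."""
        result = subprocess.run(
            ["powershell", "-NoProfile", "-Command", command],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    # List GPUs the host can partition (names and counts vary by build and driver).
    print(ps("Get-VMHostPartitionableGpu | Select-Object Name | ConvertTo-Json"))

    # Attach a GPU partition to the (hypothetical) VM named TestVM.
    ps("Add-VMGpuPartitionAdapter -VMName 'TestVM'")

Treat this as a starting point for a lab, not production automation.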

Networking

Network ATC provides intent-based deployments where you specify desired outcomes or states, and the configuration is optimized for what you want to do. Network HUD enables always-on monitoring and network remediation. Software-defined networking (SDN) optimizations include transparent multi-site L2 and L3 connectivity and improved SDN gateway performance.

SMB over QUIC leverages TLS 1.3 security to streamline local, mobile, and remote networking while enhancing security with configuration from the server or client. In addition, there is an option to turn off SMB NTLM at the SMB level, along with controls on which versions of SMB to allow or refuse. Also being added is a brute force attack limiter that slows down SMB authentication attacks.
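As a conceptual illustration only (a toy model, not how Windows implements its SMB limiter), a brute force attack limiter can be as simple as inserting a growing delay after successive failed authentication attempts from the same source:

    import time
    from collections import defaultdict

    # Failed-attempt counters per client (illustrative only).
    failed_attempts = defaultdict(int)

    BASE_DELAY = 2.0   # seconds added per prior consecutive failure
    MAX_DELAY = 30.0   # cap the delay so the service stays usable

    def authenticate(client_id: str, password: str) -> bool:
        """Toy authenticator that slows down repeated failures."""
        # The delay grows with each prior failure from this client.
        delay = min(failed_attempts[client_id] * BASE_DELAY, MAX_DELAY)
        if delay:
            time.sleep(delay)
        ok = password == "correct-horse"  # stand-in for real credential checking
        if ok:
            failed_attempts.pop(client_id, None)  # reset on success
        else:
            failed_attempts[client_id] += 1
        return ok

The point is that each wrong guess makes the next one slower, turning a high-rate password spray into a slow crawl.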

Management, Upgrades, and General User Experience

The upgrade process moving forward with Windows Server 2025 is intended to be seamless and less disruptive. These enhancements include hot patching and flighting (e.g., LTSC Windows Server upgrades delivered similar to how you get regular updates). For hybrid management, an easier-to-use wizard to enable Azure Arc is planned. For flexibility, WiFi networking and Bluetooth devices, if present, are automatically enabled with Windows Server 2025, with a focus on edge and remote deployment scenarios.

Also new is an optional subscription-based licensing model for Windows Server 2025, while the existing perpetual licensing model is retained. Let me repeat that so as not to create new vFUD: you can still license Windows Server (and thus Hyper-V) using traditional perpetual models and SKUs.

Additional Resources and Where to Learn More

The following links are additional resources to learn about Windows Server, Server 2025, Hyper-V, and related data infrastructures and tradecraft topics.

What’s New in Windows Server v.Next video from Microsoft Ignite (11/17/23)
Microsoft Windows Server 2025 What's New
Microsoft Windows Server 2025 Preview Build Download
Microsoft Windows Server 2025 Preview Build Download (site)
Microsoft Evaluation Center (various downloads for trial)
Microsoft Eval Center Windows Server 2022 download
Microsoft Hyper-V on Windows Information
Microsoft Hyper-V on Windows Server Information
Microsoft Hyper-V on Windows Desktop (e.g., Win10)
Microsoft Windows Server Release Information
Microsoft Hyper-V Server 2019
Microsoft Azure Virtual Machines Trial
Microsoft Azure Elastic SAN
If NVMe is the answer, what are the questions?
NVMe Primer (or refresh), The NVMe Place.

Additional learning experiences along with common questions (and answers), are found in my Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Hyper-V is very much alive and being enhanced. Hyper-V is being used from Microsoft Azure to Windows Server and other platforms, at scale and in smaller environments.

If you are looking for alternatives to VMware or simply exploring virtualization options, do your due diligence and check out Hyper-V. Hyper-V may or may not be what you want; however, it might be what you need. Looking at Hyper-V now, along with its upcoming enhancements, also positions you to answer when management asks whether you have done your due diligence vs. relying on vFUD.

Do a quick proof of concept: spin up a lab and check out currently available Hyper-V, for example on Server 2022 or the 2025 preview, to get a feel for whether what is there meets your needs and wants. Download the bits and get some hands-on time with Hyper-V and Windows Server 2025.
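If you want to script part of that proof of concept, here is a minimal sketch (assuming a Windows host with the Hyper-V role and PowerShell module enabled) that inventories VMs and their state:

    import json
    import subprocess

    # Ask Hyper-V for its VMs via PowerShell and parse the JSON output.
    cmd = "Get-VM | Select-Object Name, State, MemoryAssigned | ConvertTo-Json"
    out = subprocess.run(
        ["powershell", "-NoProfile", "-Command", cmd],
        capture_output=True, text=True, check=True,
    ).stdout

    vms = json.loads(out) if out.strip() else []
    if isinstance(vms, dict):  # a single VM serializes as an object, not a list
        vms = [vms]
    for vm in vms:
        print(vm["Name"], vm["State"], vm["MemoryAssigned"])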

Wrap up

Hyper-V is alive and enhanced with Windows Server 2025 and other releases.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Nine time Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of UnlimitedIO LLC.

Dell Technology World 2018 Announcement Summary
This is part one of a five-part series summarizing Dell Technology World 2018 announcements. Last week (April 30-May 3) I traveled to Las Vegas, Nevada (LAS) to attend Dell Technology World 2018 (e.g., DTW 2018) as a guest of Dell (that is a disclosure btw). There were several announcements along with plenty of other activity from sessions, meetings, and hallway as well as event networking taking place at Dell Technology World DTW 2018.

Major data infrastructure technology announcements include:

  • PowerMax all-flash array (AFA) solid state device (SSD) NVMe storage system
  • PowerEdge four-socket 2U and 4U rack servers
  • XtremIO X2 AFA SSD storage system updates
  • PowerEdge MX preview of future composable servers
  • Desktop and thin client along with other VDI updates
  • Cloud and networking enhancements

Besides the above, additional data infrastructure related announcements were made in association with Dell Technology family members, including VMware, along with other partners, as well as customer awards. Other updates and announcements were tied to business updates from Dell Technology, Dell Technologies Capital (venture capital), and Dell Financial Services.

Dell Technology World Buzzword Bingo Lineup

Some of the buzzword bingo terms, topics, acronyms from Dell Technology World 2018 included AFA, AI, Autonomous, Azure, Bare Metal, Big Data, Blockchain, CI, Cloud, Composable, Compression, Containers, Core, Data Analytics, Dedupe, Dell, DFS (Dell Financial Services), DFR (Data Footprint Reduction), Distributed Ledger, DL, Durability, Fabric, FPGA, GDPR, Gen-Z, GPU, HCI, HDD, HPC, Hybrid, IOP, Kubernetes, Latency, MaaS (Metal as a Service), ML, NFV, NSX, NVMe, NVMeoF, PACE (Performance Availability Capacity Economics), PCIe, Pivotal, PMEM, RAID, RPO, RTO, SAS, SATA, SC, SCM, SDDC, SDS, Socket, SSD, Stamp, TBW (Terabytes Written per day), VDI, venture capital, VMware and VR among others.

Dell Technology World 2018 Venue
Dell Technology World DTW 2018 Event and Venue

Dell Technology World 2018 was located at the combined Palazzo and Venetian hotels along with the adjacent Sands Expo center, kicking off Monday, April 30th and wrapping up May 3rd.

The theme for Dell Technology World DTW 2018 was "make it real," which in some ways was interesting given the focus on virtual topics, including virtual reality (VR), software-defined data center (SDDC) virtualization, and data infrastructure, along with artificial intelligence (AI).

Virtual Sky Dell Technology World 2018
Make it real – Venetian Palazzo St. Mark’s Square on the way to Sands Expo Center

There was plenty of AI, VR, SDDC along with other technologies, tools as well as some fun stuff to do including VR games.

Dell Technology World 2018 Commons Area
Dell Technology World Village Area near Key Note and Expo Halls

Dell Technology World 2018 Commons Area Drones
Dell Technology World Drone Flying Area

During a break from some meetings, I used a few minutes to fly a drone using VR, which was interesting. I have been operating drones (see some videos here) visually for several years, without depending on first-person view (FPV) or relying on extensive autonomous operations, instead flying heads up by hand. Needless to say, the VR was interesting, granted I encountered a bit of vertigo that I had to get used to.

Dell Technology World 2018 Commons Area Virtual Village
More views of the Dell Technology World Village and Commons Area with VR activity

Dell Technology World 2018 Commons Area Virtual Village
Dell Technology World Village and VR area

Dell Technology World 2018 Commons Area Virtual Village
Dell Technology World Bean Bag Area

Dell Technology World 2018 Announcement Summary

Ok, nuff with the AI, ML, DL, VR fun, time to move on to the business and technology topics of Dell Technologies World 2018.

What was announced at Dell Technology World 2018 included among others:

Dell Technology World 2018 PowerMax
Dell PowerMax Front View

Subsequent posts in this series take a deeper look at the various announcements as well as what they mean.

Where to learn more

Learn more about Dell Technology World 2018 and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

On the surface it may appear that there was not much announced at Dell Technology World 2018, particularly compared to some of the recent Dell EMC Worlds and EMC Worlds. However, it turns out that there was a lot announced, granted without some of the entertainment and circus-like atmosphere of previous events. Continue reading here Part II Dell Technology World 2018 Modern Data Center Announcement Details in this series, along with Part III here, Part IV here (including PowerEdge MX composable infrastructure leveraging Gen-Z) and Part V (servers and converged) here.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Part II Dell Technology World 2018 Modern Data Center Announcement Details

Dell Technology World 2018 Modern Data Center Announcement Summary
This is Part II Dell Technology World 2018 Modern Data Center Announcement Details that is part of a five-post series (view part I here, part III here, part IV here and part V here). Last week (April 30-May 3) I traveled to Las Vegas Nevada (LAS) to attend Dell Technology World 2018 (e.g., DTW 2018) as a guest of Dell (that is a disclosure btw).

Dell Technology World 2018 Venue
Dell Technology World DTW 2018 Event and Venue

What was announced at Dell Technology World 2018 included among others:

Dell Technology World 2018 PowerMax
Dell PowerMax Front View

Dell Technology World 2018 Modern Data Center Announcement Details

Dell Technologies data infrastructure related announcements included new solution competencies and expanded services deployment competencies with partners to boost deal size and revenues. An Internet of Things (IoT) solution competency was added, with others planned, including High-Performance Computing (HPC) / Super Computing (SC), data analytics, business applications, and security related topics. Dell Financial Services flexible consumption models announced at Dell EMC World 2017 provide flexible financing options for both partners as well as their clients.

Flexible Dell Financial Services cloud-like consumption model (e.g., pay for what you use) enhancements include reduced entry points for the Flex on Demand solutions across the Dell EMC storage portfolio. For example, Flex on Demand velocity pricing models for Dell EMC Unity All-Flash Array (AFA) solid state device (SSD) storage solutions and XtremIO X2 AFA systems have price points of less than USD 1,000.00 per month. The benefit is that Dell partners have a financial vehicle to help their midrange customers run consumption-based financing for all-flash storage without custom configurations, resulting in faster deployment opportunities.

In other partner updates, Dell Technologies is enhancing the Dell EMC MyRewards incentive program to help drive new business. Dell EMC MyRewards is an opt-in, points-based reward program for solution provider sales reps and systems engineers. The MyRewards program is slated to replace the existing Partner Advantage and Sell & Earn programs with bigger and better promotions (up to 3x bonus payout, simplified global claiming).

What this means for partners is the ability to earn more while offering their clients new solutions with flexible financing and consumption-based pricing among other options. Other partner enhancements include an updated demo program, a Proof of Concept (POC) program, and IT transformation campaigns.

Powering up the Modern Data Center and Future of Work

Powering up the modern data center along with the future of work, part of the "make it real" theme of Dell Technologies World 2018, includes data infrastructure server, storage, I/O networking hardware, software, and service solutions. These data infrastructure solutions include NVMe-based storage, converged infrastructure (CI), hyper-converged infrastructure (HCI), software-defined data center (SDDC), VMware-based multi-clouds, along with modular infrastructure resources.

In addition to server and storage data infrastructure resources from desktop to data center, Dell also has a focus on enabling traditional as well as emerging Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) as well as analytics applications. Besides providing data infrastructure resources to support AI, ML, DL, IoT and other applications along with their workloads, Dell is leveraging AI technology in some of their products, for example PowerMax.

Other Dell Technologies announcements include Virtustream cloud risk management and compliance, along with Epic and SAP Digital Health healthcare software solutions. In addition to Virtustream, Dell Technologies cloud-related announcements also include VMware NSX network Virtual Cloud Network with Microsoft Azure support along with security enhancements. Refer here to recent April VMware vSphere, vCenter, vSAN, vRealize and other Virtual announcements as well as here for March VMware cloud updates.

Where to learn more

Learn more about Dell Technology World 2018 and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

The above set of announcements span business to technology along with partner activity. Continue reading here (Part III Dell Technology World 2018 Storage Announcement Details) of this series, and part I (general summary) here, along with Part IV (PowerEdge MX Composable) here and part V here.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Part III Dell Technology World 2018 Storage Announcement Details

This is Part III Dell Technology World 2018 Storage Announcement Details that is part of a five-post series (view part I here, part II here, part IV (PowerEdge MX Composable) here and part V here). Last week (April 30-May 3) I traveled to Las Vegas Nevada (LAS) to attend Dell Technology World 2018 (e.g., DTW 2018) as a guest of Dell (that is a disclosure btw).

Dell Technology World 2018 Storage Announcements Include:

  • PowerMax – Enterprise class tier 0 and tier 1 all-flash array (AFA)
  • XtremIO X2 – Native replication and new entry-level pricing

Dell Technology World 2018 PowerMax back view
Back view of Dell PowerMax

Dell PowerMax Something Old, Something New, Something Fast Near You Soon

PowerMax is the new companion to VMAX. Positioned for traditional tier 0 and tier 1 enterprise-class applications and workloads, PowerMax is optimized for dense server virtualization and SDDC, SAP, Oracle, SQL Server along with other low-latency, high-performance database activity. Different target workloads include Mainframe as well as Open Systems, AI, ML, DL, Big Data, as well as consolidation.

The Dell PowerMax is an all-flash array (AFA) architecture with end-to-end NVMe along with built-in AI and ML technology. It builds on the architecture of Dell EMC VMAX (some models are still available) with new, faster processors and is fully end-to-end NVMe ready (e.g., front-end server attachment and back-end devices).

The AI and ML features of the PowerMax PowerMaxOS operating system include an engine (software) that learns and makes autonomous storage management decisions, with implementations including tiering. Other AI- and ML-enabled operations include performance optimizations based on I/O pattern recognition.

Other features of PowerMax, besides increased speeds, feeds, and performance, include data footprint reduction (DFR) with inline deduplication along with enhanced compression. The DFR benefits include up to 5:1 data reduction for space efficiency without performance impact, boosting performance effectiveness. The DFR, along with improved 2x rack density and up to 40% power savings (your results may vary) based on Dell claims, enables an impressive amount of performance, availability, capacity, and economics (e.g., PACE) in a given number of cubic feet (or meters).
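As a worked example of what a 5:1 reduction ratio means (my illustration, assuming the full claimed ratio, which will vary by workload and data type):

$$ 100\,\mathrm{TB}\ \text{of application data} \div 5 = 20\,\mathrm{TB}\ \text{of physical flash consumed} $$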

There are two PowerMax models: the 2000 (scales from 1 to 2 redundant controller nodes) and the 8000 (scales from 1 to 8 redundant controller nodes). Note that controller nodes use Intel Xeon multi-socket, multi-core processors, enabling scale-up and scale-out performance, availability, and capacity. Competitors of the PowerMax include AFA solutions from HPE 3PAR, NetApp, and Pure Storage among others.

Dell Technology World 2018 PowerMax Front View
Front view of Dell PowerMax

Besides resiliency, data services, and data protection, Dell is claiming PowerMax is 2x faster than their nearest high-end storage system competitors, with up to 150GB/sec (e.g., 1,200Gbps) of bandwidth, as well as up to 10 million IOPS with 50% lower latency compared to the previous VMAX.
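For those who mix up bytes and bits, the conversion behind those two bandwidth figures is simply:

$$ 150\,\mathrm{GB/s} \times 8\,\mathrm{bits/byte} = 1{,}200\,\mathrm{Gb/s} $$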

PowerMax is also fully end-to-end NVMe ready (both back-end and front-end). Back-end includes NVMe drives, devices, shelves, and enclosures, while front-end includes future NVMe over Fabrics (e.g., NVMe-oF). Being NVMe-oF ready enables PowerMax to support future front-end server network connectivity options in addition to traditional SAN Fibre Channel (FC) and iSCSI, among others.

PowerMax is also ready for new, emerging high-speed, low-latency storage class memory (SCM). SCM is the next generation of persistent memory (PMEM), having performance closer to traditional DRAM with the persistence of flash SSD. Examples of SCM technologies entering the market include Intel Optane based on 3D XPoint, along with others such as those from Everspin.

IBM Z Zed Mainframe at Dell Technology World 2018
An IBM “Zed” Mainframe (in case you have never seen one)

Based on the performance claims, the Dell PowerMax has an interesting, if not potentially industry-leading, power, performance, availability, capacity, and economic footprint per cubic foot (or meter). It will be interesting to see some third-party validation or audits of Dell claims. Likewise, I look forward to seeing some real-world applied workloads of Dell PowerMax vs. other storage systems. Here are some additional perspectives via SearchStorage: Dell EMC all-flash PowerMax replaces VMAX, injects NVMe


Dell PowerMax configuration studio (Image via Dell.com)

To help with customer decision making, Dell has created an interactive VMAX and PowerMax configuration studio that you can use to try out as well as learn about different options here. View more Dell PowerMax speeds, feeds, slots, watts, features and functions here (PDF).

Dell Technology World 2018 XtremIO X2

XtremIO X2

The Dell XtremIO X2 and its XIOS 6.1 operating system (software-defined storage) have been enhanced with native replication across wide area networks (WANs). The new WAN replication is metadata-aware and native to the XtremIO X2, implementing data footprint reduction (DFR) technology that reduces the amount of data sent over network connections. The benefit is more data moved in a given amount of time, along with better data protection requiring less time (and network bandwidth) by only moving unique changed data.

Dell Technology World 2018 XtremIO X2 back view
Back View of XtremIO X2

Dell EMC claims to reduce WAN network bandwidth by up to 75% utilizing the new native XtremIO X2 asynchronous replication. Also, Dell says XtremIO X2 requires up to 38% less storage space at disaster recovery and business resiliency locations while maintaining predictable recovery point objectives (RPO) of 30 seconds. Another XtremIO X2 announcement is a new entry model for customers at up to 55% lower cost than previous product generations. View more information about Dell XtremIO X2 here, along with speeds and feeds here, here, as well as here.
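To put the claimed 75% WAN reduction in perspective, here is a back-of-the-envelope example (my numbers, assuming the full claimed reduction and a dedicated 1Gbps replication link):

$$ 1\,\mathrm{TB}\ \text{of changed data} \times (1 - 0.75) = 250\,\mathrm{GB}\ \text{sent over the WAN} $$
$$ \frac{250\,\mathrm{GB} \times 8\,\mathrm{bits/byte}}{1\,\mathrm{Gb/s}} = 2{,}000\,\mathrm{s} \approx 33\ \text{minutes, vs. roughly } 2.2\ \text{hours unreduced} $$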

What about Dell Midrange Storage Unity and SC?

Here are some perspectives Via SearchStorage: Dell EMC midrange storage keeps its overlapping arrays.

Dell Bulk and Elastic Cloud Storage (ECS)

One of the questions I had going into Dell Technology World 2018 was the status of ECS (and its predecessors Atmos as well as Centera) bulk object storage, given the lack of messaging and news around it. Specifically, my concern was that if ECS is the platform for storing and managing data to be preserved for the future, what is the current status, state, as well as future of ECS?

In conversations with the Dell ECS folks, I learned that ECS, which has encompassed Centera functionality, is very much alive; stay tuned for more updates. Also, note that Centera has reached end of life (EOL); however, its feature functionality has been absorbed by ECS, meaning that data preserved on Centera can now be managed by ECS. While I can not divulge the details of some meeting discussions, I can say that I am comfortable (for now) with the future direction of ECS along with the data it manages; stay tuned for updates.

Dell Data Protection

What about data protection? Security was mentioned in several different contexts during Dell Technology World 2018, and a strong physical security presence was seen at the Palazzo and Sands venues. Likewise, there was a data protection presence at Dell Technologies World 2018 in the expo hall, as well as in various sessions.

What was heard was mainly around data protection management tools, hybrid solutions, as well as data protection appliances and Data Domain-based solutions. Perhaps we will hear more from Dell Technologies World in the future about data protection related topics.

Where to learn more

Learn more about Dell Technology World 2018 and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

If there was any doubt about whether Dell would keep EMC storage progressing forward, the above announcements show some examples of what they are doing. On the other hand, let's stay tuned to see what news and updates appear in the future pertaining to midrange storage (e.g., Unity and SC), as well as Isilon, ScaleIO, and data protection platforms and software, among other technologies.

Continue reading part IV (PowerEdge MX Composable and Gen-Z) here in this series, as well as part I here, part II here, and part V here.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Part IV Dell Technology World 2018 PowerEdge MX Gen-Z Composable Infrastructure
This is Part IV Dell Technology World 2018 PowerEdge MX Gen-Z Composable Infrastructure that is part of a five-post series (view part I here, part II here, part III here and part V here). Last week (April 30-May 3) I traveled to Las Vegas Nevada (LAS) to attend Dell Technology World 2018 (e.g., DTW 2018) as a guest of Dell (that is a disclosure btw).

Introducing PowerEdge MX Composable Infrastructure (the other CI)

Dell announced at Dell Technology World 2018 a preview of the new PowerEdge MX (Kinetic) family of data infrastructure resource servers. PowerEdge MX is being developed to meet the needs of resource-centric data infrastructures that require scalability, as well as performance, availability, capacity, and economic (PACE) flexibility for diverse workloads. Read more about Dell PowerEdge MX, Gen-Z and composable infrastructures (the other CI) here.

Some of the workloads being targeted by PowerEdge MX include large-scale dense SDDC virtualization (and containers), and private clouds (or public clouds run by service providers). Other workloads include AI, ML, DL, data analytics, HPC, SC, big data, in-memory database, software-defined storage (SDS), software-defined networking (SDN), and network function virtualization (NFV) among others.

The new PowerEdge MX previewed here will be formally announced later in 2018, featuring a flexible, decomposable, as well as composable architecture that enables resources to be disaggregated and reassigned or aggregated to meet particular needs (e.g., defined or composed). Instead of traditional software-defined virtualization carving up servers into smaller virtual machines or containers to meet workload needs, PowerEdge MX is part of a next-generation approach that enables server resources to be leveraged at a finer granularity.

For example, today an entire server, including all of its sockets, cores, memory, and PCIe devices among other resources, gets allocated and defined for use. A server gets defined for use by an operating system when running bare metal (or Metal as a Service), or by a hypervisor. PowerEdge MX (and other platforms expected to enter the market) offers a finer granularity where, with the proper upper-layer (or higher-altitude) software, resources can be allocated and defined to meet different needs.

What this means is the potential to allocate resources to a given server with more granularity and flexibility, as well as combine multiple servers' resources to create what appears to be a more massive server; see the conceptual sketch below. There are vendors in the market who have been working on and enabling this type of approach for several years, ranging from ScaleMP to startups Liqid and Tidal among others. However, at the heart of the Dell PowerEdge MX is the new emerging Gen-Z technology.
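As a conceptual sketch of composability (a toy model I made up to illustrate the idea, not Dell's PowerEdge MX implementation or the Gen-Z protocol), think of resources as a shared pool from which systems are composed and back to which they are released:

    from dataclasses import dataclass, field

    @dataclass
    class ResourcePool:
        """Toy disaggregated pool: compose 'servers' from shared resources."""
        cores: int
        memory_gb: int
        gpus: int
        composed: dict = field(default_factory=dict)

        def compose(self, name: str, cores: int, memory_gb: int, gpus: int = 0) -> None:
            # Allocate only if the pool still holds enough of each resource.
            if cores > self.cores or memory_gb > self.memory_gb or gpus > self.gpus:
                raise RuntimeError(f"insufficient resources to compose {name}")
            self.cores -= cores
            self.memory_gb -= memory_gb
            self.gpus -= gpus
            self.composed[name] = (cores, memory_gb, gpus)

        def decompose(self, name: str) -> None:
            # Return a composed system's resources to the pool.
            cores, memory_gb, gpus = self.composed.pop(name)
            self.cores += cores
            self.memory_gb += memory_gb
            self.gpus += gpus

    pool = ResourcePool(cores=256, memory_gb=4096, gpus=8)
    pool.compose("analytics-node", cores=64, memory_gb=1024, gpus=2)
    pool.decompose("analytics-node")  # resources return to the pool for reuse

The real value, and the hard part, is doing this dynamically in hardware across a fabric, which is where Gen-Z comes in.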

If you are not familiar with Gen-Z, add it to your buzzword bingo lineup and learn about it, as it is coming your way. A brief overview of the Gen-Z consortium along with Gen-Z material and primer information is here. A common question is whether Gen-Z is a replacement for PCIe; the answer for now is that they will coexist and complement each other. Another common question is whether Gen-Z will replace Ethernet and InfiniBand, and the answer for now is that they complement each other. Another question is whether Gen-Z will replace Intel QuickPath and other CPU, device, and memory interconnects, and the answer is potentially; in my opinion, watch to see how long Intel drags its feet.

Note that composability is another way of saying defined without saying defined, something to pay attention to as well as have some vendor fun with. Also, note that Dell refers to PowerEdge MX as a Kinetic architecture, which is not the same as the Seagate Kinetic Ethernet-based object key-value accessed drive initiative from a few years ago (learn more about Seagate Kinetic here). Learn more about Gen-Z and what Dell is doing here.

Where to learn more

Learn more about Dell Technology World 2018 and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Dell has provided a glimpse of what they are working on pertaining to composable infrastructure (the other CI), as well as Gen-Z and the related next generation of servers with PowerEdge MX and Kinetic. Stay tuned for more about Gen-Z and composable infrastructures. Continue reading Part V (servers and converged) in this series here, as well as part I here, part II here and part III here.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

VMware vSphere vSAN vCenter version 6.7 SDDC Update Summary

Last week, VMware announced vSphere, vSAN, and vCenter version 6.7, among other updates for their software-defined data center (SDDC) and software-defined infrastructure (SDI) solutions. The new April v6.7 announcement updates followed those from this past March when VMware announced cloud enhancements with partner AWS (more on that announcement here).

VMware vSphere 6.7
VMware vSphere Web Client with vSphere 6.7

For those looking for a more extended version with a closer look and analysis of what VMware announced click here for part two and part three here.

What VMware announced is general availability (GA) meaning you can now download from here the bits (e.g., software) that include:

  • ESXi aka vSphere 6.7 hypervisor build 8169922
  • vCenter Server 6.7 build 8217866
  • vCenter Server Appliance 6.7 build 8217866
  • vSAN 6.7 and other related SDDC management tools
  • vSphere Operations Management (vROps) 6.7
  • Increased speeds, feeds, and other configuration maximums

For those not sure or needing a refresher, vCenter Server is the software for extended management across multiple vSphere ESXi hypervisors, and it runs on a Windows platform (the vCenter Server Appliance is the Linux-based alternative).

Major themes of the VMware April announcement are increased scalability along with performance enhancements, ease of use, security, as well as extended application support. As part of the v6.7 improvements, VMware is focusing on simplifying, as well as accelerating, software-defined data infrastructure along with other SDDC lifecycle operation activities.

Extended application support includes traditional demanding enterprise IT, along with High-Performance Compute (HPC), Big Data, Little Data, Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL), as well as other emerging workloads. Part of supporting demanding workloads includes enhanced support for Graphical Processing Units (GPUs) such as those from Nvidia among others.

What Happened to vSphere 6.6?

A question that comes up is that there is a vSphere 6.5 (and its smaller point releases) and now vSphere 6.7 (along with vCenter and vSAN among others). What happened to vSphere 6.6? Good question, and I am not sure what the real or virtual answer from VMware is or would be. My take is that this was a good opportunity for VMware to align the versions of their principal components (e.g., vSphere/ESXi, vCenter, vSAN) to a standard or unified numbering scheme.

Where to learn more

Learn more about VMware vSphere, vCenter, vSAN and related software-defined data center (SDDC); software-defined data infrastructures (SDDI) topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

Overall the VMware vSphere vSAN vCenter version 6.7 enhancements are a good evolution of their core technologies for enabling hybrid, converged software-defined data infrastructures and software-defined data centers. Continue reading more about VMware vSphere vSAN vCenter version 6.7 SDDC Update Summary here in part II (focus on management, vCenter plus security) and part III here (focus on server storage I/O and deployment) of this three-part series.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

VMware vSphere vSAN vCenter v6.7 SDDC details

This is part two (part one here) of a three-part (part III here) series looking at VMware vSphere, vSAN, and vCenter v6.7 SDDC announcement details, with a focus on vCenter, security, and management.

Last week VMware announced vSphere vSAN vCenter v6.7 updates as part of enhancing their software-defined data center (SDDC) and software-defined infrastructure (SDI) solutions core components. This is an expanded post as a companion to the Server StorageIO summary piece here. These April updates followed those from this past March when VMware announced cloud enhancements with partner AWS (more on that announcement here).

VMware vSphere 6.7
VMware vSphere Web Client with vSphere 6.7

What VMware announced is generally available (GA) meaning you can now download from here the bits (e.g., software) that include:

  • ESXi aka vSphere 6.7 hypervisor build 8169922
  • vCenter Server 6.7 build 8217866
  • vCenter Server Appliance 6.7 build 8217866
  • vSAN 6.7 and other related SDDC management tools
  • vSphere Operations Management (vROps) 6.7

For those not sure or needing a refresher, vCenter Server is the software for extended management across multiple vSphere ESXi hypervisors, and it runs on a Windows platform (the vCenter Server Appliance is the Linux-based alternative).

Major themes of the VMware April announcements are focused around:

  • Increased enterprise and hybrid cloud scalability
  • Resiliency, availability, durable and secure
  • Performance, efficiency and elastic
  • Intuitive, simplified management at scale
  • Expanded support for demanding application workloads

Expanded application support includes traditional demanding enterprise IT, along with High-Performance Compute (HPC), Big Data, Little Data, Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL), as well as other emerging workloads. Part of supporting demanding workloads includes enhanced support for Graphical Processing Units (GPUs) such as those from Nvidia among others.

What was announced

As mentioned above and in other posts in this series, VMware announced new versions of their ESXi hypervisor vSphere v6.7, as well as Virtual SAN (vSAN) v6.7 and Virtual Center (vCenter) v6.7, among other related tools. One of the themes of this announcement by VMware includes hybrid SDDC spanning on-site, on-premises (or on-premise if you prefer) to the public cloud. Other topics involve increasing scalability along with stability, as well as ease of management along with security and performance updates.

As part of the v6.7 enhancements, VMware is focusing on simplifying, as well as accelerating software-defined data infrastructure along with other SDDC lifecycle operation activities. Additional themes and features focus on server, storage, I/O resource enablement, as well as application extensibility support.

vSphere ESXi hypervisor

With v6.7, ESXi host maintenance times are improved with a single reboot vs. the previous multiple reboots for some upgrades, as well as quick boot. Quick boot enables restarting the ESXi hypervisor without rebooting the physical machine, skipping time-consuming hardware initialization.

The enhanced HTML5-based vSphere client GUI (along with API and CLI) has increased feature and function parity compared to predecessor versions and other VMware tools. Increased functionality includes NSX, vSAN, and VMware Update Manager (VUM) capabilities among others. In other words, not only are new technologies supported; functions for which you may in the past have resisted using the web-based interfaces due to missing capabilities are being addressed with this release.

vCenter Server and vCenter Server Appliance (VCSA)

VMware has announced that, moving forward, the hosted version (e.g., running on a Windows server platform) is being deprecated. What this means is that it is time for those not already doing so to migrate to the vCenter Server Appliance (VCSA). As a refresher, VCSA is a turnkey software-defined virtual appliance that includes vCenter Server software running on the VMware Photon Linux operating system as a virtual machine.

As part of the update, the enhanced vCenter Server Appliance (VCSA) supports new efficient, effective API management along with multiple vCenters as well as performance improvements. VMware cites 2x faster vCenter operations per second, a 3x reduction in memory usage, along with 3x quicker Distributed Resource Scheduler (DRS) related activities across powered-on VMs.

What this means is that VCSA is a self-contained virtual appliance that can be configured for very large, large, medium, and small environments in various configurations. With the v6.7 vCenter Server Appliance emphasis on scaling, as well as performance along with security and ease-of-use features, VCSA is better positioned to support large enterprise deployments along with hybrid cloud. VCSA v6.7 is more than just a UI enhancement, with v6.5 shown below followed by an image of the v6.7 UI.

VMware vSphere 6.5
VMware vCenter Appliance v6.5 main UI

VMware vSphere 6.7
VMware vCenter Appliance v6.7 main UI

Besides UI enhancements (along with API and CLI) for vCenter, other updates include more robust data protection (aka backup) capability for the vCenter Server environment. In the prior v6.5 version, there was a basic capability to specify a destination for sending vCenter configuration information to for backup data protection (see image below).

vCenter 6.5 backup
VMware vCenter Appliance 6.5 backup

Note that the VCSA backup only provides data protection for the vCenter Appliance, its configuration and settings, along with data collected from the VMware hosts (and VMs) being managed. VCSA backup does not provide data protection of the individual VMware hosts or VMs, which is accomplished via other data protection techniques, tools, and technologies.

In v6.7, vCenter now has enhanced capabilities (shown below) for enabling data protection of configuration, settings, performance, and other metrics. What this means is that with the improved UI it is now possible to set up backup schedules as part of enabling automation for data protection of vCenter servers.

vCenter 6.7 backup
VMware VCSA v6.7 enhanced UI and data protection aka backup

The following shows some of the configuration sizing options as part of VCSA deployment. Note that the vCPU, Memory, and Storage are for the VCSA itself to support a given number of VMware hosts (e.g., physical machines) as well as guest virtual machines (VM).

 

Size          vCPU   Memory   Storage   Hosts   VMs
Tiny          2      10GB     300GB     10      100
Small         4      16GB     340GB     100     1,000
Medium        8      24GB     525GB     400     4,000
Large         16     32GB     740GB     1,000   10,000
Extra Large   24     48GB     1,180GB   2,000   35,000

vCenter 6.7 VCSA sizing (vCPU, memory, and storage are for the VCSA itself) and the number of physical machines (e.g., VM hosts) and virtual machines supported

Keep in mind that in addition to the above individual VCSA configuration limits, multiple vCenters can be grouped, including linked mode spanning on-site, on-premises (on-prem if you prefer), as well as the cloud. VMware vCenter Server hybrid linked mode enables seamless visibility and insight across on-site, on-premises, as well as public clouds such as AWS among others.

In other words, vCenter with hybrid linked mode enables you to have situational awareness and avoid flying blind in and among clouds. As part of hybrid vCenter environment support, cross-cloud (public, private) hot and cold migration, including clone as well as vMotion across mixed VMware versions, is supported. Using linked mode, multiple roles, permissions, tags, and policies can be managed across different groups (e.g., unified management) as well as locations.

VMware and vSphere Security

Security is a big push for VMware with this release, including Trusted Platform Module (TPM) 2.0 along with Virtual TPM 2.0 for protecting both the hypervisor and guest operating systems. Data encryption was introduced in vSphere 6.5 and is enhanced with increased management simplicity along with protection of data at rest and in flight (while in motion).

In other words, encrypted vMotion across different vCenter instances and versions is supported, as well as across hybrid environments (e.g., on-premises and public cloud). Other security enhancements include tighter collaboration and integration with Microsoft for Windows VMs, as well as vSAN, NSX, and vRealize for a secure software-defined data infrastructure aka SDDC. For example, VMware has enhanced support for Microsoft Virtualization Based Security (VBS), including Credential Guard, where vSphere provides a secure virtual hardware platform.

Additional VMware 6.7 security enhancements include multiple SYSLOG targets and FIPS 140-2 validated modules. Note that there is a difference between FIPS certified and FIPS validated, of which VMware vCenter and ESXi leverage two modules (VM Kernel Cryptographic and OpenSSL) that are currently validated. VMware is not playing games like some vendors when it comes to disclosing FIPS 140-2 validated vs. certified.

Note, when a vendor mentions FIPS 140-2 and implies or says certified, ask them if they indeed are certified. Any vendor who is actually FIPS 140-2 certified should not get upset if you press them politely. Instead, they should thank you for asking. On the other hand, if a vendor gives you a used-car-salesperson-style dance or gets upset, ask them why they are so sensitive, or perhaps what they are ashamed of or hiding, just saying. Learn more here.

vRealize Operations Manager (vROps)

The vRealize Operations Manager (vROps) v6.7 dashboard plugin for the vSphere client provides an overview of cluster views and alerts for both vCenter and vSAN. What this means is that you will want to upgrade vROps to v6.7. The vROps benefits are dashboards for optimizing performance and capacity, troubleshooting, and management configuration.

Where to learn more

Learn more about VMware vSphere, vCenter, vSAN and related software-defined data center (SDDC); software-defined data infrastructures (SDDI) topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

VMware continues to enhance their core SDDC data infrastructure resources to support new and emerging, as well as legacy enterprise applications at scale. VMware enhancements include management, security along with other updates to support the demanding needs of various applications and workloads, along with supporting application developers.

Some examples of demanding workloads include AI, big data, machine learning, in-memory databases, and high-performance compute (HPC), among other resource-intensive new workloads, as well as existing applications. This includes enhanced support for Nvidia physical and virtual Graphical Processing Units (GPUs) that are used in support of compute-intensive graphics, as well as non-graphics processing (e.g., AI, ML) workloads.

With the v6.7 announcements, VMware is providing proof points that they are continuing to invest in their core SDDC enabling technologies. VMware is also demonstrating the evolution of the vSphere ESXi hypervisor along with associated management tools for hybrid environments with ease-of-use management at scale, along with security. View more about VMware vSphere vSAN vCenter v6.7 SDDC details in part three of this three-part series here (focus on server storage I/O, deployment information and analysis).

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

VMware vSphere vSAN vCenter Server Storage I/O Enhancements

This is part three of a three-part series looking at last week's v6.7 VMware vSphere vSAN vCenter Server Storage I/O Enhancements. The focus of this post is on server, storage, and I/O, along with deployment and other wrap-up items. In case you missed them, read part one here, and part two here.

VMware, as part of updates to vSphere, vSAN and vCenter, introduced several server storage I/O enhancements, some of which have already been mentioned.

VMware vSphere 6.7
VMware vSphere Web Client with vSphere 6.7

Server Storage I/O enhancements for vSphere, vSAN, and vCenter include:

  • Native 4K (4kn) block sector size for HDD and SSD devices
  • Intel Volume Management Device (VMD) for NVMe flash SSD
  • Support for Persistent Memory (PMEM) aka Storage Class Memory (SCM)
  • SCSI UNMAP (similar to TRIM) for SSD space reclamation
  • XCOPY and VAAI enhancements
  • VMFS-6 is now the default file system
  • VMFS-6 SESparse vSphere snapshot space reclamation
  • VVOL supporting SCSI-3 persistent reservations and IPv6
  • Reduce dependences on RDMs with VVOL enhancements
  • Software-based Fibre Channel over Ethernet (FCoE) initiator
  • Para Virtualized RDMA (PV-RDMA)
  • Various speeds and feeds enhancements

VMware vSphere 6.7 also adds native 4Kn sector size (i.e., 4,096-byte sectors) in addition to traditional native and emulated 512-byte sectors for HDDs as well as SSDs. The larger sector size means performance improvements along with better storage allocation for applications, particularly for large-capacity devices. Other server storage I/O updates include RDMA over Converged Ethernet (RoCE) enabled Remote Direct Memory Access (RDMA) as well as Intel VMD for NVMe. Learn more about NVMe here.
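To make the 4Kn math concrete, here is a minimal Python sketch (illustrative only, with hypothetical values, not from VMware) showing why partition alignment matters on 4,096-byte sectors and how sector counts compare between 512-byte and 4Kn devices.

```python
# Minimal sketch: check whether a partition offset is aligned to a
# native 4K (4Kn) sector, and count sectors needed for a given size.
# Offsets and sizes below are hypothetical, for illustration only.

SECTOR_512 = 512      # legacy native / emulated sector size (bytes)
SECTOR_4KN = 4096     # native 4K sector size (bytes)

def is_aligned(offset_bytes: int, sector_size: int = SECTOR_4KN) -> bool:
    """A misaligned partition on a 4Kn device forces read-modify-write."""
    return offset_bytes % sector_size == 0

def sectors_needed(size_bytes: int, sector_size: int) -> int:
    """Round up to whole sectors."""
    return (size_bytes + sector_size - 1) // sector_size

if __name__ == "__main__":
    offset = 1048576  # 1 MiB partition offset, a common default
    print(is_aligned(offset))                      # True: 1 MiB is 4K aligned
    print(sectors_needed(10_000_000, SECTOR_512))  # 19532 x 512 B sectors
    print(sectors_needed(10_000_000, SECTOR_4KN))  # 2442 x 4 KB sectors
```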

Other storage-related enhancements include SCSI UNMAP (e.g., the SCSI equivalent of SSD TRIM) with a selectable priority of none or low for SSD space reclamation. Also enhanced is SESparse vSphere snapshot virtual disk space reclamation (for VMFS-6). VMware XCOPY (Extended Copy) now works with vendor-specific VMware API for Array Integration (VAAI) primitives along with the SCSI T10 standard used for cloning, zeroing and copy offload to storage systems. Virtual Volumes (VVOL) have been enhanced to support IPv6 and SCSI-3 persistent reservations to help reduce dependency on or use of RDMs.

VMware configuration maximums (e.g., speeds and feeds) include server storage I/O enhancements such as boosting from 512 to 1024 LUNs per host. Other speeds and feeds improvements include going from 2048 to 4096 server storage I/O paths per host, and PVSCSI adapters now support up to 256 disks vs. 64 (virtual disks or Raw Device Mapped aka RDM). Also note that VMFS-3 is now end of life (EOL) and will be automatically upgraded to VMFS-5 during the upgrade to vSphere 6.7, while the default datastore type is VMFS-6.

Additional server storage I/O enhancements include RoCE for RDMA, enabling low-latency server-to-server memory-based data movement and access, along with Para-virtualized RDMA (PV-RDMA) for Linux guest OS. ESXi has been enhanced with iSER (iSCSI Extensions for RDMA), leveraging faster server I/O interconnects and CPU offload. Another server storage I/O enhancement is a software-based Fibre Channel over Ethernet (e.g., SW-FCoE) initiator using lossless Ethernet fabrics.

Note as a reminder or refresher that VMware also has para-virtualized (e.g., virtualization-optimized) drivers for Ethernet and other networks, NVMe as well as SCSI, in addition to standard devices. For example, from a VM you can access an NVMe-backed datastore using the standard VMware SATA and SCSI controllers, LSI Logic SAS, LSI Logic Parallel, VMware Paravirtual, or the native NVMe driver (virtual machine hardware compatible with 6.5 or higher) for better performance. Likewise, instead of using the standard SAS and SCSI VM devices, the VMware Paravirtual SCSI (PVSCSI) adapter can be used for better performance.

Besides the previously mentioned items, other vSAN enhancements include support for logical clusters such as Oracle RAC, Microsoft SQL Server Availability Groups, and Microsoft Exchange Database Availability Groups, as well as Windows Server Failover Clusters (WSFC) using the vSAN iSCSI service. Note that as a proof point of continued vSAN customer adoption, VMware is claiming 10,000 deployments. For performance, vSAN enhancements also include updates for adaptive placement, adaptive resync, as well as faster cache destage. The benefit of quicker destage is that the cache can be drained or written to disk to eliminate or prevent I/O bottlenecks.

As part of supporting expanding, more demanding enterprise workloads among others, vSAN enhancements also include resiliency updates, physical resource and configuration checks, and health and monitoring checks. Other vSAN improvements include streamlined workflows and converged management views across vCenter as well as vRealize tools. Read more from VMware about server storage I/O enhancements to vSphere, vSAN, and vCenter here.

VMware Server Storage I/O Memory Matters

VMware is also joining others with support for evolving persistent memory (PMEM), leveraging so-called storage class memories (SCM). Note that some refer to SCM or persistent memory as PM; however, context is needed, as PM also means Physical Machine, Physical Memory, and Primary Memory among others. With the new PMEM support for server memory, VMware is laying the foundation for guest operating systems as well as applications to leverage the technology.

For example, Microsoft with Windows Server 2016 supports SCM as a block-addressable storage medium with a file system, as well as Direct Access (e.g., DAX). What this means is that fast file systems can be backed by persistent memory that is faster than traditional SSD storage, and applications such as SQL Server that support DAX can do direct persistent I/O.

As a refresher, Non-Volatile DIMMs (NVDIMM) enable server memory by combining traditional DRAM with some persistent storage class memory. By combining DRAM and storage class memory (SCM), also known as PMEM, servers can use the RAM as fast read/write memory, with the data destaged to persistent memory. Examples of SCM include Micron 3D XPoint (also known as Intel Optane) along with others such as Everspin NVDIMM (available from Dell and HPE among others). Learn more about SSD and storage class memories (SCM) along with PMEM here, as well as NVMe here.
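As a rough illustration of the byte-addressable access pattern that PMEM and DAX enable, below is a minimal Python sketch using an ordinary memory-mapped file; the file name and sizes are hypothetical. On a real DAX-enabled volume such loads and stores can bypass the page cache, while on ordinary storage the same code simply goes through the normal I/O path.

```python
# Minimal sketch of byte-addressable access via a memory-mapped file.
# Illustrative only; stands in for what an application would do against
# a DAX-backed region of persistent memory.
import mmap
import os

PATH = "pmem_demo.bin"  # hypothetical file standing in for a PMEM region

with open(PATH, "wb") as f:
    f.truncate(4096)    # size the backing file (one 4 KB page)

with open(PATH, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 4096)
    mm[0:5] = b"hello"  # store bytes directly into the mapping
    mm.flush()          # ask for the dirty range to be made persistent
    mm.close()

with open(PATH, "rb") as f:
    print(f.read(5))    # b'hello'
os.remove(PATH)
```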

Deployment, be prepared before you grab the bits and install the software

For those of you who want or need to download the bits, here is a link to the VMware software download. However, before racing off to install the new software in your production (or perhaps even lab) environment, do your homework. Read the important information from VMware before upgrading to vSphere here (e.g., KB53704) as well as the release notes, and review VMware's best practices for upgrading to vCenter here.

Some of the things to be aware of include upgrade order and dependencies, as well as making sure you have good current backups of your vSphere ESXi configuration and vCenter appliance. Also view the vSphere ESXi and vCenter 6.7 release notes here.

There are some hardware compatibility items you need to be aware of, both for this as well as future versions. Check out the VMware hardware (and software) compatibility list (HCL), along with partner product interoperability matrices, as well as release notes. Pay attention to devices deprecated and no longer supported in ESXi 6.7 (e.g., VMware KB52583), as well as those that may not work in future releases, to avoid surprises.

Where to learn more

Learn more about VMware vSphere, vCenter, vSAN and related software-defined data center (SDDC) and software-defined data infrastructure (SDDI) topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

In case you missed them, read part one here and click here for part two of this series.

Some will say what's the big deal, why all the noise, coverage and discussion for a point release?

My view is that this is a big evolutionary package of upgrade enhancements and new features, even if a so-called point release (e.g., going from 6.5 to 6.7). Some vendors might have positioned this type of update as a major upgrade, e.g., version 6.x to 7.x, to make more noise, get increased coverage, or merely enhance the appearance of software maturity (e.g., V1.x to V2.x to V3.x, and so forth).

In the case of VMware, what some might refer to as smaller point releases are the ones such as vSphere 6.5.0 to 6.5.x among others. Thus, there is a lot in this package of updates from VMware, and it is good to see continued enhancements.

I also think that VMware is being challenged on different fronts, including by Microsoft as well as cloud partners among others, which is good. The reason I believe it is okay that VMware is being challenged is that, given their history, they tend to step up their game, playing harder as well as stronger when there is competition.

VMware is continuing to invest in and extend its core SDDC technologies to meet the expanding demands of various organizations, from small to ultra-large enterprises. What this means is that VMware is addressing ease of use for smaller environments, as well as removing complexity to enable simplified scaling from on-site (or on-premises and on-prem if you prefer) to the public cloud.

Overall, the announced version 6.7 of the VMware vSphere, vSAN and vCenter SDDC core components is a useful extension of existing technology, whose enhancements enable customers more flexibility, scalability, resiliency, and security to meet their various needs.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Data Protection Recovery Life Post World Backup Day Pre GDPR

It’s time for Data Protection Recovery Life Post World Backup Day Pre GDPR Start Date.

The annual March 31 world backup day focus has come and gone once again.

However, that does not mean data protection including backup as well as recovery along with security gets a 364-day vacation until March 31, 2019 (or the days leading up to it).

Granted, for some environments, as well as for public relations folks, editors, influencers and other industry folks, the backup day focus will take some time off, while others jump on the ramp up to GDPR, which goes into effect May 25, 2018.

Expanding Focus Data Protection and GDPR

As I mentioned in this post here, world backup day should be expanded to include increased focus not just on backup, but also recovery as well as other forms of data protection. Likewise, May 25, 2018 is not the deadline, finish line or destination for GDPR (e.g., General Data Protection Regulation); rather, it is the starting point for an evolving journey, one that has global impact as well as applicability. Recently I participated in a fireside chat discussion with Danny Allan of Veeam, who shared his GDPR expertise as well as experiences, lessons learned, and tips from Veeam as they started their journey; check it out here.

Expanding Focus Data Protection Recovery and other Things that start with R

As part of expanding the focus on Data Protection Recovery Life Post World Backup Day Pre GDPR, that also means looking at and discussing things that start with R (like Recovery). Some examples besides recovery include restoration, reassess, review, rethink protection, recovery point, RPO, RTO, reconstruction, resiliency, ransomware, RAID, repair, remediation, restart, resume, rollback, and regulations among others.

Data Protection Tips, Reminders and Recommendations

  • There are no blue participation ribbons for failed recovery. However, there can be pink slips.
  • Only you can prevent on-premises or cloud data loss. However, it is also a shared responsibility with vendors and service providers
  • You can’t go forward in the future when there is a disaster or loss of data if you can’t go back in time for recovery
  • GDPR applies to organizations around the world of all sizes and across all sectors including nonprofits
  • Keep new school 4 3 2 1 data protection in mind while evolving from old school 3 2 1 backup rules (see the sketch after this list)
  • 4 3 2 1 backup data protection rule

  • A fundamental premise of data infrastructures is to enable applications and their data; protect, preserve, secure and serve
  • Remember to protect your applications, as well as data including metadata, settings, and configurations
  • Test your restores, including whether you can use the data along with its security settings
  • Don’t cause a disaster in the course of testing your data protection, backups or recovery
  • Expand (or refresh) your data protection and data infrastructure education tradecraft skills experiences
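For those who like to check such rules programmatically, here is a minimal Python sketch assuming one common reading of 4 3 2 1 (at least four copies or versions, on three different systems or media, across two different sites, with at least one offline or air-gapped copy); the copy records are hypothetical examples, not from any specific tool.

```python
# Minimal sketch of a 4 3 2 1 data protection rule check, under the
# assumption stated above. Copy records are hypothetical examples.

def meets_4321(copies):
    systems = {c["system"] for c in copies}
    sites = {c["site"] for c in copies}
    offline = [c for c in copies if c["offline"]]
    return (len(copies) >= 4 and len(systems) >= 3
            and len(sites) >= 2 and len(offline) >= 1)

copies = [
    {"system": "primary-array", "site": "onsite",  "offline": False},
    {"system": "backup-server", "site": "onsite",  "offline": False},
    {"system": "cloud-bucket",  "site": "cloud",   "offline": False},
    {"system": "tape",          "site": "offsite", "offline": True},
]
print(meets_4321(copies))  # True: 4 copies, 4 systems, 3 sites, 1 offline
```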

Where to learn more

Learn more about data protection, world backup day, recovery, restoration, GDPR along with related data infrastructure topics for cloud, legacy and other software defined environments via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

Data protection, including business continuance (BC), business resiliency (BR), disaster recovery (DR), availability, accessibility, backup, snapshots, encryption, security, and privacy among others, is a 7 x 24 x 365 day a year focus. The focus of data protection also needs to evolve from an after-the-fact cost overhead to a proactive business enabler. Meanwhile, welcome to Data Protection Recovery Post World Backup Day Pre GDPR Start Date.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

November 2017 Server StorageIO Data Infrastructure Update Newsletter

Volume 17, Issue 11 (November 2017)

Hello and welcome to the November 2017 issue of the Server StorageIO update newsletter.

Software-Defined Data Infrastructure Essentials SDDI SDDC

2017 has a few more weeks left, which look to be busy with end of year, holidays and other activities. Like the rest of 2017, November saw a lot of activity in and around the industry, setting up 2018 as yet another sequel to the busiest and most exciting year ever.

This is also the time of year when predictions for the following year (e.g., 2018) start to roll out, some of which are variations of those from the past or perennial favorites (e.g., the year of flash, the year of cloud, the year of software defined, the year of <insert_your_favorite_item_here>). Look for predictions and perspectives in future posts and newsletters.

Having been a busy month, let’s get to the content…

In This Issue

Enjoy this edition of the Server StorageIO data infrastructure update newsletter.

Cheers GS

Data Infrastructure and IT Industry Activity Trends

Some recent Industry Activities, Trends, News and Announcements include:

On the heels of completing its acquisition of Brocade, Broadcom (note that previously Avago, who bought LSI, also bought Broadcom and then took on the more well-known Broadcom name) announced relocating its headquarters from Singapore to the US, along with an over $100 Billion USD acquisition offer for Qualcomm (here is an interesting perspective on how Apple might play). Broadcom has been focused more on server, storage, I/O and general networking technology, while Qualcomm focuses on mobile including phones and related items. Note that Qualcomm had previously made a $38.5 Billion USD offer for NXP Semiconductors that is awaiting regulatory approval. View recent Broadcom financial results here.

Also in November, server storage I/O controller chip maker Marvell (not to be confused with entertainment provider Marvel) announced a merger with Cavium, which had previously acquired QLogic among others. The resulting combined entity, to be called Marvell, will have an estimated $16 Billion USD revenue stream focused on server, storage, I/O and networking technologies among others.

In other merger and acquisition activity, VMware announced acquisition of VeloCloud for software defined wide area networking (SD-WAN).

With Super Compute 2017 (SC17) in November there were several announcements, including from ATTO, DDN, Enmotus, Micron, and Everspin, along with many others. By the way, in case you missed it, at the end of October Microsoft and Cray announced a partnership to bring Super Compute capabilities to Azure clouds. Speaking of Microsoft, there was also an announcement of VMware running on top of Azure (granted, without VMware support), similar in concept to VMware on AWS (read here).

Also at the end of November was AWS re:Invent, with many announcements (more on those in a follow-up newsletter and posts). Prior to re:Invent, AWS announced several server, storage and other data infrastructure security enhancements, including for S3. Highlights from AWS re:Invent include Fargate (serverless, aka containers at scale without managing infrastructure), Elastic Container Service for Kubernetes (EKS), and Greengrass (machine learning [ML] data infrastructure), along with many others.

Fargate is for those who want to leverage serverless microservices containers without having to devote DevOps and related activity to the care and feeding of their data infrastructure. In other words, Fargate is for those who want to focus maximum effort on the business applications vs. the business of setting up and maintaining the data infrastructure for serverless. On the other hand, AWS also announced EKS for those who want or need to customize their serverless data infrastructure, including around Kubernetes among others.

In other industry activity, Taiwan-based Foxconn, which manufactures technology for the who's who of the industry, announced progress toward its future Wisconsin-based factory complex.

Over at HPE, the big news announcement is that CEO Meg Whitman is stepping down. HPE also announced new AMD-powered Gen10 ProLiant servers, as well as multi-cloud management solutions. HPE also announced new partnerships with DDN for HPC and SC, with Rackspace for selling private cloud services, along with a Cloudian EMEA partnership among others.

OwnBackup announced a new version of its data protection software, while low-cost budget bulk storage service Backblaze (B2) announced its most recent quarterly drive failure (or success) reliability report. Meanwhile, over at Quantum, they released former CEO Jon Gacek and rotated in new management.

Red Hat announced Ceph Storage 3, including CephFS (a POSIX-compatible file system), an iSCSI gateway including support for VMware and Windows that lack native Ceph drivers, and daemon deployment in Linux containers for a smaller hardware footprint. Also included are enhanced monitoring, troubleshooting and diagnostics to streamline deployment and ongoing management. Red Hat also announced OpenShift version 3.7 for containers.

SANblaze announced NVMf and dual-port NVMe capabilities for NVMe fabrics, while Linbit won a European grant to build out a scale-out software defined storage cloud solution.

I often get asked who the hot, new, trendy or other vendors and services to keep an eye on are, some of which I have mentioned in previous newsletters, as well as posts such as here and here. Moving into 2018, some to keep an eye on (not all are new or trendy, yet they can enable you to be productive, or differentiate) include the following.

AWS, Bluemedora, Chelsio, Cloudian, CloudPassage, Compuverde, Databricks, Datadog, Datos, Enmotus, Everspin, Excelero, Fluree (Blockchain database), Google, Mellanox, Microsemi, Microsoft, Marvell and Cavium, MyWorkDrive, Red Hat, Rook, Rozo, Rubrik, Strongbox, Storone, Turbonomic, Ubuntu, Veeam, Velostrata, Virtuozzo, VMware, WekaIO and others.

What the above means is that it has been a busy month as well as year, and the year is not over yet. There are still plenty of shopping days left, both for Christmas and the holidays, as well as for IT year-end spending, vendors looking to do acquisitions, or other last-minute projects. Speaking of which, drop me a note if you have any end of year or new year projects Server StorageIO can assist you with.

Check out other industry news, comments, trends perspectives here.

Server StorageIO Commentary in the news, tips and articles

Recent Server StorageIO industry trends perspectives commentary in the news.

Via HPE Insights: Comments on Public cloud versus on-prem storage
Via DataCenterKnowledge: Data Center Standards: Where’s the Value?
Via arsTechnica: Comments on cloud backup disaster recovery

View more Server, Storage and I/O trends and perspectives comments here

Server StorageIOblog Data Infrastructure Posts

Recent and popular Server StorageIOblog posts include:

In Case You Missed It #ICYMI

View other recent as well as past StorageIOblog posts here

Server StorageIO Recommended Reading (Watching and Listening) List

In addition to my own books including Software Defined Data Infrastructure Essentials (CRC Press 2017), the following are Server StorageIO data infrastructure recommended reading, watching and listening list items. The list includes various IT, Data Infrastructure and related topics. Speaking of my books, Didier Van Hoye (@WorkingHardInIt) has a good review over on his site you can view here, also check out the rest of his great content while there.

Intel Recommended Reading List (IRRL) for developers is a good resource to check out.

For those who are into Linux, container and hypervisor performance along with internals, including cloud based, check out Brendan Gregg's site. He has a lot of great material, including some recent interesting posts ranging from dealing with workplace jerks to what's inside AWS EC2's new KVM-based hypervisors (a switch from Xen), among others.

Here is a post by New York Times CIO/CTO Nick Rockwell, The (Futile) Resistance to Serverless; also check out my podcast discussion with Nick here.

Over at Next Platform they have some interesting perspectives on Intel’s next Exascale architecture worth spending a few minutes to read.

Watch for more items to be added to the recommended reading list book shelf soon.

Events and Activities

Recent and upcoming event activities.

Nov. 9, 2017 – Webinar – All You Need To Know about ROBO Data Protection Backup
Nov. 2, 2017 – Webinar – Modern Data Protection for Hyper-Convergence

See more webinars and activities on the Server StorageIO Events page here.

Server StorageIO Industry Resources and Links

Useful links and pages:
Data Infrastructure Recommend Reading and watching list
Microsoft TechNet – Various Microsoft related from Azure to Docker to Windows
storageio.com/links – Various industry links (over 1,000 with more to be added soon)
objectstoragecenter.com – Cloud and object storage topics, tips and news items
OpenStack.org – Various OpenStack related items
storageio.com/downloads – Various presentations and other download material
storageio.com/protect – Various data protection items and topics
thenvmeplace.com – Focus on NVMe trends and technologies
thessdplace.com – NVM and Solid State Disk topics, tips and techniques
storageio.com/converge – Various CI, HCI and related SDS topics
storageio.com/performance – Various server, storage and I/O benchmark and tools
VMware Technical Network – Various VMware related items

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

Data Protection Diaries Fundamental Topics Tools Techniques Technologies Tips

Data Protection Fundamental Topics Tools Techniques Technologies Tips

Update 1/16/2018

Data protection fundamental companion to Software Defined Data Infrastructure Essentials – Cloud, Converged, Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017)

server storage I/O data infrastructure trends

By Greg Schulz – www.storageioblog.com – November 26, 2017

This is Part I of a multi-part series on Data Protection fundamental tools topics techniques terms technologies trends tradecraft tips as a follow-up to my Data Protection Diaries series, as well as a companion to my new book Software Defined Data Infrastructure Essentials – Cloud, Converged, Virtual Server Storage I/O Fundamental tradecraft (CRC Press 2017).

Software Defined Data Protection Fundamental Infrastructure Essentials Book SDDC

The focus of this series is around data protection fundamental topics including Data Infrastructure Services: Availability, RAS, RAID and Erasure Codes (including LRC) (Chapter 9), and Data Infrastructure Services: Availability, Recovery Point (Chapter 10). Additional data protection related chapters include Storage Mediums and Component Devices (Chapter 7), Management, Access, Tenancy, and Performance (Chapter 8), as well as Capacity, Data Footprint Reduction (Chapter 11), Storage Systems and Solutions Products and Cloud (Chapter 12), and Data Infrastructure and Software-Defined Management (Chapter 13) among others.

Posts in the series include excerpts from Software Defined Data Infrastructure (SDDI) Essentials pertaining to data protection for legacy along with software defined data centers (SDDC), and data infrastructures in general along with related topics. In addition to excerpts, the posts also contain links to articles, tips, posts, videos, webinars, events and other companion material. Note that figure numbers in this series are those from the SDDI book and not in the order that they appear in the posts.

Posts in this data protection fundamental series include:

SDDC, SDI, SDDI data infrastructure
Figure 1.5 Data Infrastructures and other IT Infrastructure Layers

Data Infrastructures

Data Infrastructures exist to support business, cloud and information technology (IT) among other applications that transform data into information or services. The fundamental role of data infrastructures is to provide a platform environment for applications and data that is resilient, flexible, scalable, agile, efficient as well as cost-effective.

Put another way, data infrastructures exist to protect, preserve, process, move, secure and serve data as well as their applications for information services delivery. Technologies that make up data infrastructures include hardware, software, or managed services, servers, storage, I/O and networking, along with people, processes, policies and various tools spanning legacy, software-defined virtual, container and cloud environments. Read more about data infrastructures (it's what's inside data centers) here.

Why SDDC SDDI Need Data Protection
Various Needs Demand Drivers For Data Protection Fundamentals

Why The Need For Data Protection

Data protection encompasses many different things, from accessibility, durability, resiliency, reliability, and serviceability (RAS) to security along with consistency. Availability includes basic and high availability (HA), business continuance (BC), business resiliency (BR), disaster recovery (DR), archiving, backup, logical and physical security, fault tolerance, isolation and containment spanning systems, applications, data, metadata, settings, and configurations.

From a data infrastructure perspective, availability of data services spans from local to remote, physical to logical and software-defined, virtual, container, and cloud, as well as mobile devices. Figure 9.2 shows various data infrastructure availability, accessibility, protection, and security points of interest. On the left side of Figure 9.2 are various data protection and security threat risks and scenarios that can impact availability, or result in a data loss event (DLE), data loss access (DLA), or disaster. The right side of Figure 9.2 shows various techniques, tools, technologies, and best practices to protect data infrastructures, applications, and data from threat risks.

SDDI SDDC Data Protection Fundamental Big Picture
Figure 9.2 Various threat vectors, issues, problems, and challenges that drive the need for data protection

A fundamental role of data infrastructures (and data centers) is to protect, preserve, secure and serve information when needed with consistency. This also means that the data infrastructure resources (servers, storage, I/O networks, hardware, software, external services) and the applications (and data) they combine and are defined to protect are also accessible, durable and secure.

Data Protection topics include:

  • Maintaining availability, accessibility to information services, applications and data
  • Data include software, actual data, metadata, settings, certificates and telemetry
  • Ensuring data is durable, consistent, secure and recoverable to past points in time
  • Everything is not the same across different environments, applications and data
  • Aligning techniques and technologies to meet various service level objectives (SLO)

Data Protection Fundamental Tradecraft Skills Experience Knowledge

Tools, technologies and trends are part of data protection; so too are the techniques of knowing (e.g., tradecraft) what to use when, where, why and how to protect against various threat risks (challenges, issues, problems).

Part of what is covered in this series of posts as well as in the Software Defined Data Infrastructure (SDDI) Essentials book is tradecraft skills, tips, experiences, insight into what to use, as well as how to use old and new things in new ways.

This means looking outside the technology box toward what it is that you need to protect and why, then knowing how to use the different skills, experiences and techniques that are part of your tradecraft, combined with your data protection toolbox tools. Read more about tradecraft here.

Where To Learn More

Continue reading additional posts in this series of Data Infrastructure Data Protection fundamentals and companion to Software Defined Data Infrastructure Essentials (CRC Press 2017) book, as well as the following links covering technology, trends, tools, techniques, tradecraft and tips.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Everything is not the same across environments, data centers, data infrastructures and applications.

Likewise everything is not, and does not have to be, the same when it comes to data protection. Data protection fundamentals encompass many different hardware, software and services (including cloud) technologies, tools, techniques, best practices, policies and tradecraft experience skills (e.g., knowing what to use when, where, why and how).

Since everything is not the same, various data protection approaches are needed to address various application performance, availability, capacity, economic (PACE) needs, as well as SLOs and SLAs.

Get your copy of Software Defined Data Infrastructure Essentials here at Amazon.com, at CRC Press among other locations and learn more here. Meanwhile, continue reading with the next post in this series, Part 2 Reliability, Availability, Serviceability (RAS) Data Protection Fundamentals.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Data Protection Diaries Reliability, Availability, Serviceability RAS Fundamentals

Reliability, Availability, Serviceability RAS Fundamentals

Companion to Software Defined Data Infrastructure Essentials – Cloud, Converged, Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017)

server storage I/O data infrastructure trends

By Greg Schulz – www.storageioblog.com – November 26, 2017

This is Part 2 of a multi-part series on Data Protection fundamental tools topics techniques terms technologies trends tradecraft tips as a follow-up to my Data Protection Diaries series, as well as a companion to my new book Software Defined Data Infrastructure Essentials – Cloud, Converged, Virtual Server Storage I/O Fundamental tradecraft (CRC Press 2017).

Software Defined Data Infrastructure Essentials Book SDDC

Click here to view the previous post Part 1 Data Infrastructure Data Protection Fundamentals, and click here to view the next post Part 3 Data Protection Access Availability RAID Erasure Codes (EC) including LRC.

Posts in the series include excerpts from Software Defined Data Infrastructure (SDDI) Essentials pertaining to data protection for legacy along with software defined data centers (SDDC), and data infrastructures in general along with related topics. In addition to excerpts, the posts also contain links to articles, tips, posts, videos, webinars, events and other companion material. Note that figure numbers in this series are those from the SDDI book and not in the order that they appear in the posts.

In this post the focus is around Data Protection availability from Chapter 9 which includes access, durability, RAS, RAID and Erasure Codes (including LRC), mirroring and replication along with related topics.

SDDC, SDI, SDDI data infrastructure
Figure 1.5 Data Infrastructures and other IT Infrastructure Layers

Reliability, Availability, Serviceability (RAS) Data Protection Fundamentals

Reliability, Availability, Serviceability (RAS) and other access, availability and data protection topics are covered in Chapter 9. A resilient data infrastructure (software-defined, SDDC and legacy) protects, preserves, secures and serves information involving various layers of technology. These technologies enable various layers (altitudes) of functionality, from devices up to and through the various applications themselves.

SDDI SDDC Data Protection Big Picture
Figure 9.2 Various threat issues and challenges that drive the need for data protection

Some applications need a faster rebuild, while others need sustained performance (bandwidth, latency, IOPs, or transactions) with a slower rebuild; some need lower cost at the expense of performance; others are ok with more space if other objectives are met. The result is that since everything is different yet there are similarities, there is also the need to tune how data infrastructure protects, preserves, secures, and serves applications and data.

General reliability, availability, serviceability, and data protection functionality includes:

  • Manually or automatically via policies, start, stop, pause, resume protection
  • Adjust priorities of protection tasks, including speed, for faster or slower protection
  • Fast-reacting to changes, disruptions or failures, or slower cautious approaches
  • Workload and application load balancing (performance, availability, and capacity)

RAS can be optimized for the following (a short availability math sketch follows this list):

  • Reduced redundancy for lower overall costs vs. resiliency
  • Basic or standard availability (leverage component plus)
  • High availability (use better components, multiple systems, multiple sites)
  • Fault-tolerant with no single points of failure (SPOF)
  • Faster restart, restore, rebuild, or repair with higher overhead costs
  • Lower overhead costs (space and performance) with lower resiliency
  • Lower impact to applications during rebuild vs. faster repair
  • Maintenance and planned outages, or for continuous operations
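As a back-of-the-envelope illustration of these trade-offs, the following minimal Python sketch shows the classic availability math: components in series multiply availability (any failure takes the stack down), while redundant (parallel) components only fail when all of them fail. The availability figures used are hypothetical.

```python
# Minimal availability math sketch; example figures are hypothetical.

def series(*avail):
    """Availability of components that all must work."""
    result = 1.0
    for a in avail:
        result *= a
    return result

def parallel(*avail):
    """Availability of redundant components (at least one must work)."""
    down = 1.0
    for a in avail:
        down *= (1.0 - a)
    return 1.0 - down

server, storage, network = 0.999, 0.9995, 0.9999
single = series(server, storage, network)
print(f"single stack : {single:.6f}")            # ~0.998401
print(f"two stacks   : {parallel(single, single):.6f}")  # ~0.999997
```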

Common availability and data protection related terms, technologies, techniques, trends and topics, spanning availability and access, durability and consistency, point-in-time protection and security, are shown below.

Data Protection Gaps and Air Gap

There are Good data protection gaps that provide recovery points to a past time, enabling recoverability going forward. Another good data protection gap is an Air Gap that isolates protection copies off-site or off-line so that they cannot be tampered with, enabling recovery from ransomware and other software defined threats. There are Bad data protection gaps, including gaps in coverage where data is not protected or items are missing. Then there are Ugly data protection gaps, which include Bad gaps where what you think is protected is not, and finding out that your copies are bad when it is too late.

Data Protection Gaps Good Bad Ugly
Data Protection Gaps Good Bad and Ugly

The following figure shows good data protection gaps including recovery points (point in time protection) along with air gaps.

Good Data Protection Gaps
Figure 9.9 Air Gaps and Data Protection

Fault / Failures To Tolerate (FTT)

FTT is how many faults or failures to tolerate for a given solution or service, which in turn determines what mode of protection, or fault tolerance mode (FTM), to use.

Fault Tolerant Mode (FTM)

FTM is the mode or technique used to enable resiliency and protect against some number of faults.

Fault / Failure Domains

Fault or failure domains are the places and things that can fail, from regions, data centers or availability zones to clusters, stamps, pods, servers, networks, storage and hardware (systems, components including SSDs and HDDs, power supplies, adapters). Other fault domain topics and focus areas include facility power, cooling, and software including applications, databases, operating systems and hypervisors among others.

SDDI SDDC Fault Domains Zones Regions
Figure 9.5 Various Fault and Failure Domains, Regions, Locations

Clustering

Clustering is a technique and technology for enabling resiliency, as well as scaling performance, availability, and capacity. Clusters can be local, remote, or wide-area to support different data infrastructure objectives, combined with replication and other techniques.

SDDI SDDC Clustering
Figure 9.12 Clustering and Replication Examples

Another characteristic of clustering and resiliency techniques is the ability to detect and react quickly to failures to isolate and contain faults, as well as invoking automatic repair if needed. Different clustering technologies enable various approaches, from proprietary hardware and software tightly coupled to loosely coupled general-purpose hardware or software.

Clustering characteristics include:

  • Application, database, file system, operating system (Windows Storage Replica)
  • Storage systems, appliances, adapters and network devices
  • Hypervisors (Hyper-V, VMware vSphere ESXi and vSAN among others)
  • Share everything, share some things, share nothing
  • Tightly or loosely coupled with common or individual system metadata
  • Local in a data center, campus, metro, or stretch cluster
  • Wide-area in different regions and availability zones
  • Active/active for fast failover or restart, or active/passive (standby) mode

Additional clustering considerations include:

  • How does performance scale as nodes are added, or what overhead exists?
  • How is cluster resource locking in shared environments handled?
  • How many (or few) nodes are needed for quorum to exist? (see the quorum sketch after this list)
  • Network and I/O interface (and management) requirements
  • Cluster partition or split-brain (i.e., cluster splits into two)?
  • Fast-reacting failover and resiliency vs. overhead of failing back
  • Locality of where applications are located vs. storage access and clustering
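On the quorum question above, the majority math is simple enough to sketch. The following minimal Python example (illustrative only) shows why odd node counts, or a witness acting as a tie-breaker, help avoid split-brain.

```python
# Minimal sketch of cluster quorum math: a majority of nodes must be
# present for the cluster to continue operating.

def quorum(nodes: int) -> int:
    """Smallest majority for a given node count."""
    return nodes // 2 + 1

for n in (2, 3, 4, 5):
    print(f"{n} nodes: quorum={quorum(n)}, "
          f"tolerates {n - quorum(n)} node failure(s)")
# 2 nodes: quorum=2, tolerates 0 -> why 2-node clusters need a witness
# 3 nodes: quorum=2, tolerates 1
# 4 nodes: quorum=3, tolerates 1 -> an even 2/2 split loses quorum
# 5 nodes: quorum=3, tolerates 2
```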

Where To Learn More

Continue reading additional posts in this series of Data Infrastructure Data Protection fundamentals and companion to Software Defined Data Infrastructure Essentials (CRC Press 2017) book, as well as the following links covering technology, trends, tools, techniques, tradecraft and tips.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Everything is not the same across different environments, data centers, data infrastructures and applications. There are various performance, availability, capacity, economic (PACE) considerations along with service level objectives (SLO). Availability means being able to access information resources (applications, data and underlying data infrastructure resources), as well as data being consistent along with durable. Being durable means enabling data to be accessible in the event of a device, component or other fault domain item failure (hardware, software, data center).

Just as everything is not the same across different environments, there are various techniques, technologies and tools that can be used in different ways to enable availability and accessibility. These include high availability (HA), RAS, mirroring, replication, parity along with derivative erasure code (EC), LRC, RS and other RAID implementations, along with clustering. Also keep in mind that pertaining to data protection, there are good gaps (e.g. time intervals for recovery points, air gaps), bad gaps (missed coverage or lack of protection), and ugly gaps (not being able to recover from a gap in time).

Note that mirroring, replication, EC, LRC, RS or other Parity and RAID approaches are not replacements for backup, rather they are companions to time interval based recovery point protection such as snapshots, backup, checkpoints, consistency points and versioning among others (discussed in follow-up posts in this series).

Which data protection tool, technology or trend is best depends on what you are trying to accomplish and your application workload PACE requirements along with SLOs. Get your copy of Software Defined Data Infrastructure Essentials here at Amazon.com, at CRC Press among other locations and learn more here. Meanwhile, continue reading with the next post in this series, Part 3 Data Protection Access Availability RAID Erasure Codes (EC) including LRC.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Data Protection Diaries Access Availability RAID Erasure Codes LRC Deep Dive

Access Availability RAID Erasure Codes including LRC Deep Dive

Companion to Software Defined Data Infrastructure Essentials – Cloud, Converged, Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017)

server storage I/O data infrastructure trends

By Greg Schulz – www.storageioblog.com – November 26, 2017

This is Part 3 of a multi-part series on Data Protection fundamental tools topics techniques terms technologies trends tradecraft tips as a follow-up to my Data Protection Diaries series, as well as a companion to my new book Software Defined Data Infrastructure Essentials – Cloud, Converged, Virtual Server Storage I/O Fundamental tradecraft (CRC Press 2017).

Software Defined Data Infrastructure Essentials Book SDDC

Click here to view the previous post Part 2 Reliability, Availability, Serviceability (RAS) Data Protection Fundamentals, and click here to view the next post Part 4 Data Protection Recovery Points (Archive, Backup, Snapshots, Versions).

Posts in the series include excerpts from Software Defined Data Infrastructure (SDDI) Essentials pertaining to data protection for legacy along with software defined data centers (SDDC), and data infrastructures in general along with related topics. In addition to excerpts, the posts also contain links to articles, tips, posts, videos, webinars, events and other companion material. Note that figure numbers in this series are those from the SDDI book and not in the order that they appear in the posts.

In this post, part of the Data Protection Diaries series as well as a companion to Chapter 9 of the SDDI Essentials book, we are going on a longer, deeper dive. We are going to look at availability, access and durability, including mirroring, replication, and RAID with its various traditional and newer parity approaches such as Erasure Codes (EC), Local Reconstruction Codes (LRC), and Reed-Solomon (RS), also known as RAID 2, among others. Later posts in this series look at point-in-time data protection to support recovery to a given time (e.g., RPO), while this and the previous post look at maintaining access and availability.

Keep in mind that if something can fail, it probably will, and that everything is not the same, meaning different environments and application workloads (along with their data). Different environments and applications have diverse performance, availability, capacity, economic (PACE) attributes, along with service level objectives (SLOs). Various SLOs include PACE attributes, recovery point objectives (RPO), and recovery time objectives (RTO) among others.

Availability, accessibility and durability (see part two in this series) along with associated RAS topics are part of what enables RTO, as well as meeting faults (or failures) to tolerate (FTT). This means that different fault tolerance modes (FTM) determine what technologies, tools, trends and techniques to use to meet different RTO, FTT and application PACE needs.

Maintaining access and availability along with durability (e.g., how many copies of data as well as where they are stored) protects against loss or failure of a component device (SSDs, HDDs, adapters, power supplies, controllers), node or system, appliance, server, rack, cluster, stamp, data center, availability zone, region, or other fault or failure domain spanning hardware, software, and services.

SDDC, SDI, SDDI data infrastructure
Figure 1.5 Data Infrastructures and other IT Infrastructure Layers

Data Protection Access Availability RAID Erasure Codes

This is a good place to mention some context for RAID and RAID array, which can mean different things pertaining to Data Protection. Some people associate RAID with a hardware storage array, or with a RAID card. Other people consider an array to be a storage array that is a RAID enabled storage system. A trend is to refer to legacy storage systems as RAID arrays or hardware-based RAID, to differentiate from newer implementations.

Context comes into play in that a RAID group (i.e., a collection of HDDs or SSDs that is part of a RAID set) can be referred to as an array, a RAID array, or a virtual array. What this means is that while some RAID implementations may not be relevant, there are many new and evolving variations extending parity-based protection, making at least software-defined RAID still relevant.

Keep context in mind, and don’t be afraid to ask what someone is referring to: a particular vendor storage system, a RAID implementation or packaging, a storage array, or a virtual array. Also keep the context of the virtual array in perspective vs. storage virtualization and virtual storage. RAID as a term is used to refer to different modes such as mirroring or parity, and parity can be legacy RAID 4, 5, or 6 along with erasure codes (EC). Note some people refer to erasure codes in the context of not being a RAID system, which can be an inference to not being a legacy storage system running hardware RAID (e.g. not software or software defined).

The following figure (9.13) shows various availability protection schemes (e.g., not recovery point) that maintain access while protecting against loss of a component, device, system, server, site, region or other part of a fault domain. Since everything is not the same, with environments and applications having different Performance, Availability, Capacity, Economic (PACE) attributes, there are various approaches for enabling availability along with accessibility.

Keep in mind that RAID and erasure codes along with their variations, as well as replication and mirroring, by themselves are not a replacement for backup or other point in time (e.g., recovery point enabling) protection.

Instead, availability technologies such as RAID and erasure code along with mirror as well as replication need to be combined with snapshots, point in time copies, consistency points, checkpoints, backups among other recovery point protection for complete data protection.

Speaking of replacement for backup, while many vendors and their pundits claim or want to see backup as being dead, as long as they keep talking about backup instead of broader data protection, backup will remain alive.

SDDC SDDI RAID Parity Erasure Code EC
Figure 9.13 Various RAID, Mirror, Parity and Erasure Code (EC) approaches
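To make the parity idea behind Figure 9.13 concrete, here is a minimal Python sketch of single-parity (RAID 4/5 style) protection, the building block that erasure codes generalize: parity is the XOR of the data chunks, so any one lost chunk can be rebuilt by XOR-ing the survivors. Chunk contents are hypothetical.

```python
# Minimal sketch of single-parity protection and rebuild using XOR.

def xor_chunks(chunks):
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data chunks (one stripe)
parity = xor_chunks(data)            # one protect chunk (m = 1)

lost = data[1]                       # simulate losing one device
rebuilt = xor_chunks([data[0], data[2], parity])
assert rebuilt == lost               # survivors + parity recover it
print(rebuilt)                       # b'BBBB'
```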

Different RAID levels (including parity, EC, LRC and RS based) will affect storage energy effectiveness, similar to various SSD or HDD performance capacity characteristics; however, a balance of performance, availability, capacity, and energy needs to occur to meet application service needs. For example, RAID 1 mirroring or RAID 10 mirroring and striping use more HDDs and, thus, power, but will yield better performance than RAID 6 and erasure code parity protection.

| RAID level | Normal performance | Availability | Performance overhead | Rebuild overhead | Availability overhead |
| --- | --- | --- | --- | --- | --- |
| RAID 0 (stripe) | Very good read & write | None | None | Full volume restore | None |
| RAID 1 (mirror or replicate) | Good reads; writes = device speed | Very good; two or more copies | Multiple copies can benefit reads | Re-synchronize with existing volume | 2:1 for dual, 3:1 for three-way copies |
| RAID 4 (stripe with dedicated parity, i.e., 4 + 1 = 5 drives total) | Poor writes without cache | Good for smaller drive groups and devices | High on write without cache (i.e., parity) | Moderate to high, based on number and type of drives | Varies; 1 parity/N, where N = number of devices |
| RAID 5 (stripe with rotating parity, 4 + 1 = 5 drives) | Poor writes without cache | Good for smaller drive groups and devices | High on write without cache (i.e., parity) | Moderate to high, based on number and type of drives | Varies; 1 parity/N, where N = number of devices |
| RAID 6 (stripe with dual parity, 4 + 2 = 6 drives) | Poor writes without cache | Better for larger drive groups and devices | High on write without cache (i.e., parity) | Moderate to high, based on number and type of drives | Varies; 2 parity/N, where N = number of devices |
| RAID 10 (mirror and stripe) | Good | Good | Minimum | Re-synchronize with existing volume | Twice mirror capacity stripe drives |
| Reed-Solomon (RS) parity, also known as erasure code (EC), local reconstruction code (LRC), and SHEC | Ok for reads, slow writes; good for static and cold data with front-end cache | Good | High on writes (CPU for parity calculation, extra I/O operations) | Moderate to high, based on number and type of drives, how implemented, extra I/Os for reconstruction | Varies; low overhead when using a large number of devices; CPU, I/O, and network overhead |

Table 9.3 Common RAID Characteristics

Besides those shown in Table 9.3, other RAID and parity-based approaches include RAID 2 (Reed-Solomon) and RAID 3 (synchronized stripe with dedicated parity), along with combinations such as 10, 01, 50, and 60 among others.

Similar to legacy parity-based RAID, some erasure code implementations use narrow drive groups while others use larger ones to increase protection and reduce capacity overhead. For example, some larger enterprise-class storage systems (RAID arrays) use narrow 3 + 1 or 4 + 1 RAID 5, or 4 + 2 or 6 + 2 RAID 6, which have higher protection storage capacity overhead and fault-impact footprint.

On the other hand, many smaller mid-range and scale-out storage systems, appliances, and solutions support wide stripes such as 7 + 1, 15 + 1, or larger RAID 5, or 14 + 2 or larger RAID 6. These solutions trade lower storage capacity protection overhead for the risk of multiple drive failures or impacts. Similarly, some EC implementations use relatively small groups such as 6, 2 (8 drives) or 4, 2 (6 drives), while others use 14, 4 (18 drives), 16, 4 (20 drives), or larger.

Table 9.4 shows options for a number of data devices (k) vs. a number of protect devices (m).

| k (data devices) | m (protect devices) | Availability; resiliency | Space capacity overhead | Normal performance | FTT | Comments; examples |
| --- | --- | --- | --- | --- | --- | --- |
| Narrow | Wide | Very good; low impact of rebuild | Very high | Good (R/W) | Very good | Trade space for RAS; larger m vs. k; 1, 1; 1, 2; 2, 2; 4, 5 |
| Narrow | Narrow | Good | Good | Good (R/W) | Good | Use with smaller drive groups; 2, 1; 3, 1; 6, 2 |
| Wide | Narrow | Ok to good; with larger m value | Low as m gets larger | Good (read); writes can be slow | Ok to good | Smaller m can impact rebuild; 3, 1; 7, 1; 14, 2; 13, 3 |
| Wide | Wide | Very good; balanced | High | Good | Very good | Trade space for RAS; 2, 2; 4, 4; 8, 4; 18, 6 |

Table 9.4 Comparing Various Data Device vs. Protect Device Configurations

Note that wide k with no m, such as 4, 0, would have no protection. If you are focused on reducing costs and storage space capacity overhead, then a wider set (i.e., more devices) with fewer protect devices might make sense. On the other hand, if performance, availability, and minimal to no impact during rebuild or reconstruction are important, then a narrower drive set, or a smaller ratio of data to protect drives, might make sense.

Also note that a higher or larger RAID number, parity scheme, or number of "m" devices in a parity or erasure code group may not be better; likewise, smaller may not be better. What is better is whichever approach meets your specific application performance, availability, capacity, economic (PACE) needs, along with SLO, RTO, and RPO requirements. What can also be good is to use hybrid approaches combining different technologies and tools to facilitate access, availability and durability along with point-in-time recovery across different layers of granularity (e.g., device, drive, adapter, controller, cabinet, file system, data center, etc.).
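To put numbers on the k and m trade-off, here is a minimal Python sketch (example configurations only) computing FTT and space protection overhead for a few of the data + protect combinations discussed above: m sets the failures to tolerate, while the protection overhead is m / (k + m).

```python
# Minimal sketch of the k (data) vs. m (protect) trade-off.

def ec_profile(k: int, m: int):
    total = k + m
    return {
        "devices": total,
        "FTT": m,               # device failures that can be tolerated
        "overhead": m / total,  # fraction of capacity spent on protection
        "usable": k / total,    # fraction of capacity usable for data
    }

for k, m in [(3, 1), (4, 2), (6, 2), (14, 2), (16, 4)]:
    p = ec_profile(k, m)
    print(f"{k}+{m}: FTT={p['FTT']}, "
          f"overhead={p['overhead']:.1%}, usable={p['usable']:.1%}")
# Wider k lowers overhead; larger m raises FTT (and rebuild work).
```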

Some focus on lower-level RAID as the single or primary point of protection; however, watch out for that becoming your single point of failure as well. For example, instead of building a resilient RAID 10 and then neglecting adequate higher-level access and recovery-point protection, combine different techniques, including file system protection, snapshots, and backups, among others.

Figure 9.14 shows various options and considerations for balancing between too many and too few data (k) and protect (m) devices. The balance is about enabling a particular FTT along with PACE attributes and SLOs. For some environments or applications, this means using different failure-tolerance modes (FTM) in various combinations and configurations.

Figure 9.14 Comparing various data drive to protection devices

The top of Figure 9.14 shows no protection overhead (and no protection); the bottom shows 13 data drives and three protection drives in an EC (RS or LRC, among others) configuration that could tolerate three devices failing before loss of data or access occurs. In between are various options that can be scaled up or down across different numbers of devices (HDDs, SSDs, or systems).

Some solutions allow the user or administrator to configure the I/O chunk, slab, shard, or stripe size, for example from 8 KB to 256 KB to 1 MB (or larger), aligning with application workload and I/O profiles. Other options include the ability to enable or disable read-ahead and to choose write-through vs. write-back cache (with battery-protected cache), among other settings.

The width, or number of devices, in a RAID parity or erasure code group is based on a combination of factors, including how much data is to be stored and what your FTT objective is, along with spreading out protection overhead. Another consideration is whether you have large or small files and objects.

For example, if you have many small files and a wide stripe, parity, or erasure code set with a large chunk or shard size, you may not have an optimal configuration from a performance perspective.
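As a rough illustration of the small-file concern, here is a hypothetical Python calculation (function and parameter names are made up for this sketch) of how many data devices a single file touches for a given chunk size and stripe width. A small file in a wide stripe with large chunks lands on one device, gaining no striping benefit on reads while still incurring parity updates on writes:

    import math

    def devices_touched(file_size: int, chunk_size: int, data_width: int) -> int:
        """How many data devices hold pieces of one file (capped at stripe width)."""
        shards = math.ceil(file_size / chunk_size)
        return min(shards, data_width)

    KB, MB = 1024, 1024 * 1024
    # 16 KB file on a 14-wide data stripe with 1 MB chunks: all on one device.
    print(devices_touched(16 * KB, 1 * MB, 14))   # -> 1
    # The same file with 8 KB chunks spreads across two devices.
    print(devices_touched(16 * KB, 8 * KB, 14))   # -> 2
    # A 64 MB file with 1 MB chunks uses the full 14-wide stripe.
    print(devices_touched(64 * MB, 1 * MB, 14))   # -> 14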

The following figure shows combining various data protection availability and accessibility technologies, including local as well as remote mirroring and replication, along with parity or erasure code (including LRC, RS, and SHEC, among others) approaches. Instead of just one technology, a hybrid approach is used, leveraging mirroring (local, on SSD) and replication across sites, both asynchronous and synchronous. Replication modes include asynchronous (time-delayed, eventual consistency) for longer-distance, higher-latency networks, and synchronous (strong consistency, real-time) for short-distance or low-latency networks.

Note that mirroring and replication can be done in software deployed as part of a storage system or appliance, or as tin-wrapped software, a virtual machine, a virtual storage appliance, a container, or some other deployment mode. Likewise, RAID, parity, and erasure code software can be deployed and packaged in different ways.

In addition to mirroring and replication, solutions also use parity-based approaches, including erasure code variations, for lower-cost, less active data. In other words, the mirror on SSD handles active hot data, as well as any buffering or cache, while lower-performance, higher-capacity, lower-cost data gets de-staged or migrated to a parity or erasure code tier. Vendors, service providers, and solutions leveraging variations of the approach in Figure 9.15 include Microsoft (Azure and Windows) and VMware, among others.

Figure 9.15 Combining various availability data protection techniques

A tradecraft skill is finding the balance: knowing your applications, the data, and how the data is allocated and used, then leveraging that insight and your experience to configure a solution that meets your application PACE requirements.

Consider:

  • Number of drives (width) in a group, along with protection copies or parity
  • Balance rebuild performance impact and time vs. storage space overhead savings
  • Ability to mix and match various devices in different drive groups in a system
  • Management interface, tools, wizards, GUIs, CLIs, APIs, and plug-ins
  • Different approaches for various applications and environments
  • Context of a physical RAID array, system, appliance, or solution vs. logical

Erasure Codes (EC)

Erasure codes (EC) combine advanced protection with variable space capacity overhead across many drives, devices, or systems, using larger parity chunks or shards compared to traditional parity RAID approaches. There are many variations of EC as well as parity-based approaches; some are tied to Reed-Solomon (RS) codes, while others use different techniques.

Note that some EC are optimized for reducing the overhead and cost of storing data (e.g., less space capacity) for inactive or primarily read data. Likewise, some EC variations are optimized for read/write performance as well as for reducing the overhead of rebuilds, reconstructions, and repairs with the least impact. Which EC or parity derivative approach is best depends on what you are trying to do, or which impact you are trying to avoid.
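As a minimal sketch of the underlying idea, the following Python uses simple XOR parity, i.e., a single-parity (m = 1) case, rather than a full Reed-Solomon implementation over a Galois field, to show how k data shards plus a parity shard allow any one lost shard to be rebuilt:

    from functools import reduce

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def encode(shards: list[bytes]) -> bytes:
        """Compute a single parity shard (m = 1) across k equal-size data shards."""
        return reduce(xor_bytes, shards)

    def rebuild(surviving: list[bytes]) -> bytes:
        """XOR of all surviving shards (data + parity) reproduces the lost one."""
        return reduce(xor_bytes, surviving)

    data = [b"blk-A1", b"blk-B2", b"blk-C3"]     # k = 3 equal-size data shards
    parity = encode(data)                        # m = 1 parity shard

    lost = data[1]                               # simulate losing one shard
    recovered = rebuild([data[0], data[2], parity])
    assert recovered == lost                     # one device failure tolerated

Tolerating more than one failure (m > 1) is where RS codes and their Galois-field math come in; the k + m layout and rebuild-by-reading-survivors pattern stay the same.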

Reed Solomon (RS) codes

Reed-Solomon (RS) codes are an advanced parity protection technique, based on a mathematical algorithm, that works well on large amounts of data, providing protection with lower space capacity overhead depending on how it is configured. Many erasure codes (EC) are based on derivatives of RS. Btw, did you know (or remember) that RAID 2 (rarely used, with few legacy implementations) has ties to RS codes? Here are some additional links to RS, including via Backblaze, CMU, and Dr. Dobbs.

Local Reconstruction Codes (LRC)

Microsoft leverages LRC in Azure as well as in Windows Server. LRC is optimized for a balance of protection, space capacity savings, and normal performance, as well as for reducing the impact on running workloads during a repair, rebuild, or reconstruction. One tradeoff LRC makes is to add some additional space capacity in exchange for performance improvements during both normal and abnormal (e.g., during repair) operation. Where RS, EC, and other parity-based derivatives typically use a (k, m) nomenclature (i.e., data, protection), LRC adds an extra variable to help with reconstruction (k, m, n).

Some might argue that LRC is not as space-efficient as other EC, RS, or parity derivative variations, to which the counterargument can be that some of those approaches are not as performance-effective. In other words, everything is not the same, and one approach does not, or should not, have to be applied to all, unless of course your preferred solution can only do one thing.
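As a hedged sketch (the parameters below are illustrative; see the linked Microsoft papers for Azure's actual LRC designs, such as 12 data fragments with 2 local and 2 global parities), the following Python compares storage overhead and the number of fragments read to repair one lost data fragment for RS (k, m) vs. a simplified LRC (k, l, g):

    def rs_cost(k: int, m: int) -> tuple[float, int]:
        """Reed-Solomon (k, m): repairing any one fragment reads k fragments."""
        return (k + m) / k, k

    def lrc_cost(k: int, l: int, g: int) -> tuple[float, int]:
        """Simplified LRC (k, l, g): k data fragments in l local groups, each
        with one local parity, plus g global parities. Repairing one data
        fragment reads only its local group (k/l - 1 data + 1 local parity)."""
        return (k + l + g) / k, k // l

    print(rs_cost(12, 4))      # ~1.33x overhead, 12 fragments read per repair
    print(lrc_cost(12, 2, 2))  # ~1.33x overhead,  6 fragments read per repair

Same nominal overhead, half the repair reads, which is the normal-vs-abnormal performance tradeoff described above.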

Additional LRC related material includes:

  • (PDF by Microsoft) LRC Erasure Coding in Windows Storage Spaces
  • (Microsoft Usenix Paper) Best Paper Award Erasure Coding in Azure
  • (Via MSDN Shared) Azure Storage Erasure Coding with LRC
  • (Via Microsoft) Azure Storage with Strong Consistency
  • (Paper via Microsoft) 23rd ACM Symposium on Operating Systems Principles (SOSP)
  • (Microsoft) Erasure Coding in Azure with LRC
  • (Via Microsoft) Good collection of EC, RS, LRC and related material
  • (Via Microsoft) Storage Spaces Fault Tolerance
  • (Via Microsoft) Better Way To Store Data with EC/LRC
  • (Via Microsoft) Volume resiliency and efficiency in Storage Spaces

Shingled Erasure Code (SHEC)

Shingled Erasure Codes (SHEC) are a variation of erasure codes leveraging a shingled overlay approach, similar to what is used in Shingled Magnetic Recording (SMR) on some HDDs. Ceph has been an early promoter of SHEC; read more here, and here.

Replication and Mirroring

Replication and mirroring create a mirror or replica copy of data across different devices, systems, servers, clusters, sites, or regions. In addition to keeping a copy, mirroring and replication can occur on different time intervals, such as real-time (synchronous) and time-deferred (asynchronous). Besides time intervals, mirroring and replication are implemented at different locations and altitudes (stack layers), from lower-level hardware adapters or storage systems and appliances, to operating systems, hypervisors, software-defined storage, volume managers, databases, and the applications themselves.

Covered in more detail in chapters 5 and 6, synchronous replication provides real-time, strong consistency, although high-latency local or remote interfaces can impact primary application performance. Note there is a common myth that high-latency networks are only long-distance, when in fact some local networks can also be high-latency. Asynchronous replication (also discussed in more depth in chapters 5 and 6) enables local and remote high-latency communications to be spanned, facilitating protection over distance without impacting primary application performance, albeit with weaker, time-deferred consistency, also known as eventual consistency.
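The difference is easier to see as a toy model. The following Python sketch is illustrative (the lists, queue, and function names are not any product's API): a synchronous write waits for the replica before acknowledging, putting remote latency on the write path, while an asynchronous write acknowledges immediately and ships the change later:

    import queue
    import threading

    local_log, replica_log = [], []         # stand-ins for two storage targets
    pending: queue.Queue = queue.Queue()    # async replication shipping queue

    def synchronous_write(data: str) -> None:
        local_log.append(data)
        replica_log.append(data)            # wait for the replica before the ack
        # ack here: RPO = 0, but remote latency is on every write

    def asynchronous_write(data: str) -> None:
        local_log.append(data)
        pending.put(data)                   # ship in the background
        # ack here: fast writes, but queued data is lost if the source site fails

    def replica_worker() -> None:           # eventual consistency applier
        while True:
            replica_log.append(pending.get())
            pending.task_done()

    threading.Thread(target=replica_worker, daemon=True).start()
    asynchronous_write("update 1")
    asynchronous_write("update 2")
    pending.join()                          # demo only; real apps do not wait
    synchronous_write("update 3")
    print(local_log == replica_log)         # True once the queue has drained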

Mirroring (also known as RAID 1) and replication create a copy (a mirror or replica) across two or more storage targets (devices, systems, file systems, cloud storage services, or applications such as a database). The reason for using mirrors is to provide a faster (both during normal running and during recovery) failure-tolerance mode enabling availability, resiliency, and data protection, particularly for active data.

Figure 9.10 shows general replication scenarios. Illustrated are two basic mirror scenarios: at the top, a device, volume, file system, or object bucket is replicated to two other targets (i.e., three-way, or three replicas); at the bottom is a primary storage device using a hybrid replica-and-dispersal technique, where multiple data chunks, shards, fragments, or extents are spread across devices in different locations.

Figure 9.10 Various Mirror and Replication Approaches

Mirroring and replication can be done locally inside a system (server, storage system, or appliance), within a cabinet, rack, or data center, or remotely, including at cloud services. Mirroring can also be implemented inside a server in software or using RAID and HBA cards to off-load the processing.

Figure 9.11 Mirror or Replication combined with Snapshots or other PiT protection

Keep in mind that mirroring and replication by themselves are not a replacement for backups, versions, snapshots, or other recovery-point, time-interval (time-gap) protection. The reason is that replication and mirroring maintain a copy of the source at one or more destination targets. This means anything that changes on the primary source also gets applied to the destination targets (mirror or replica). However, it also means that anything changed, deleted, corrupted, or damaged on the source also impacts the mirror replica (assuming the mirrors or replicas were or are mounted and accessible online).
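A trivial sketch of why (the names and structures here are purely illustrative): every change, including a bad one, flows to the replica, while a point-in-time snapshot keeps the earlier state:

    source = {"file.txt": "important data"}
    replica = dict(source)                    # mirror/replica of the source
    snapshot = dict(source)                   # point-in-time (PiT) copy

    def replicated_write(name: str, data: str) -> None:
        source[name] = data
        replica[name] = data                  # replication applies every change

    replicated_write("file.txt", "CORRUPTED")
    print(replica["file.txt"])                # -> CORRUPTED (replica is no help)
    print(snapshot["file.txt"])               # -> important data (recovery point)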

Mirror and replication implementations in various locations (hardware, software, cloud) include:

  • Applications and databases such as SQL Server, Oracle among others
  • File systems, volume manager, Software-defined storage managers
  • Third-party storage software utilities and drivers
  • Operating systems and hypervisors
  • Hardware adapter and off-load devices
  • Storage systems and appliances
  • Cloud and managed services

Where To Learn More

Continue reading additional posts in this series of Data Infrastructure Data Protection fundamentals and companion to Software Defined Data Infrastructure Essentials (CRC Press 2017) book, as well as the following links covering technology, trends, tools, techniques, tradecraft and tips.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What This All Means

There are various data protection technologies, tools, and techniques for enabling availability of information resources, including applications, data, and data infrastructure resources. Likewise, there are many different aspects of RAID, as well as contexts from legacy hardware-based to cloud, virtual, container, and software-defined. In other words, not all RAID is in legacy storage systems, and there is a lot of FUD about RAID in general that is probably actually targeted at specific implementations or products.

There are different approaches to meet various needs, from striping for performance (with no protection by itself), to mirroring and replication, to many parity approaches from legacy to erasure codes, including Reed-Solomon-based as well as LRC, among others. Which approach is best depends on your objectives, including balancing performance, availability, capacity, and economics (PACE) for normal running behavior as well as during fault and failure modes.

Get your copy of Software Defined Data Infrastructure Essentials here at Amazon.com, at CRC Press among other locations and learn more here. Meanwhile, continue reading with the next post in this series, Part 4 Data Protection Recovery Points (Archive, Backup, Snapshots, Versions).

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Data Protection Fundamentals Recovery Points (Backup, Snapshots, Versions)

Enabling Recovery Points (Backup, Snapshots, Versions)

Updated 1/7/18

Companion to Software Defined Data Infrastructure Essentials – Cloud, Converged, Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017)


By Greg Schulz – www.storageioblog.com – November 26, 2017

This is Part 4 of a multi-part series on Data Protection fundamental tools topics techniques terms technologies trends tradecraft tips as a follow-up to my Data Protection Diaries series, as well as a companion to my new book Software Defined Data Infrastructure Essentials – Cloud, Converged, Virtual Server Storage I/O Fundamental tradecraft (CRC Press 2017).


Click here to view the previous post Part 3 Data Protection Access Availability RAID Erasure Codes (EC) including LRC, and click here to view the next post Part 5 Point In Time Data Protection Granularity Points of Interest.

Posts in the series include excerpts from Software Defined Data Infrastructure (SDDI) pertaining to data protection for legacy along with software-defined data centers (SDDC) and data infrastructures in general, along with related topics. In addition to excerpts, the posts also contain links to articles, tips, posts, videos, webinars, events, and other companion material. Note that figure numbers in this series are those from the SDDI book and not in the order in which they appear in the posts.

In this post the focus is on Data Protection Recovery Points (Archive, Backup, Snapshots, Versions) from Chapter 10.

Figure 1.5 Data Infrastructures and other IT Infrastructure Layers

Enabling RPO (Archive, Backup, CDP, PIT Copy, Snapshots, Versions)

Figure 9.5 Data Protection and Availability Points of Interest

RAID, including parity and erasure codes (EC), along with mirroring and replication, provides availability and accessibility. These by themselves, however, are not a replacement for backup (or other point-in-time data protection) to support recovery points. For complete data protection, the solution is to combine resiliency technologies with point-in-time tools, enabling availability while also facilitating going back to a previous consistency point in time.

Recovery-point protection is implemented within applications using checkpoints and consistency points, as well as log and journal switches or flushes. Other places where recovery-point protection occurs include middleware, databases, key-value stores and repositories, file systems, volume managers, and software-defined storage, in addition to hypervisors, operating systems, containers, utilities, storage systems, appliances, and service providers.

In addition to where, there are also different approaches, technologies, techniques, and tools, including archive, backup, continuous data protection, point-in-time copies, or clones such as snapshots, along with versioning.

Common recovery-point data protection terms, technologies, techniques, trends, and topics, spanning availability and access, durability and consistency, and point-in-time protection and security, are shown below.

Time-interval protection, for example with snapshots, backup/restore, point-in-time copies, checkpoints, and consistency points, among other approaches, can be scheduled or dynamic. These approaches also vary in how they copy data, for example full copy or clone vs. incremental and differential (i.e., what has changed), among other techniques supporting 4 3 2 1 data protection. Other variations include how many concurrent copies, snapshots, or versions can take place, along with how many are stored and for how long (retention). A sketch of what these differences mean at restore time follows below.
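To illustrate what the restore side needs, here is an illustrative Python model (not any product's catalog format): restoring from incrementals requires the last full plus every incremental since, while a differential needs only the last full plus the latest differential:

    def restore_chain(copies: list[tuple[str, str]], target: str) -> list[str]:
        """copies: ordered (kind, label) pairs, kind in {"full", "incr", "diff"}.
        Returns which copies are needed to restore to the target point in time."""
        labels = [label for _, label in copies]
        upto = copies[: labels.index(target) + 1]
        last_full = max(i for i, (kind, _) in enumerate(upto) if kind == "full")
        chain = [upto[last_full][1]]
        for kind, label in upto[last_full + 1:]:
            if kind == "incr":
                chain.append(label)                   # every incremental is needed
            elif kind == "diff":
                chain = [upto[last_full][1], label]   # only the latest differential
        return chain

    week = [("full", "sun"), ("incr", "mon"), ("incr", "tue"), ("diff", "wed")]
    print(restore_chain(week, "tue"))   # ['sun', 'mon', 'tue']
    print(restore_chain(week, "wed"))   # ['sun', 'wed']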

Additional Data Protection Terms

Copy Data Management (CDM), as its name implies, is associated with managing various data copies for data protection, analytics, and other activities. This includes being able to identify what copies exist (along with versions) and where they are located, among other insights.

Data Protection Management (DPM), as its name implies, is the management of data protection, from backup/restore, to snapshots and other recovery-point-in-time protection, to replication. This includes configuration, monitoring, reporting, analytics, and insight into what is protected and how well it is protected, along with versions, retention, expiration, disposition, and access control, among other items.

Number of 9s Availability – Availability (access, durability, or both) can be expressed as a number of nines. For example, 99.99 (four nines) indicates the level of availability (downtime does not exceed) objective. For example, 99.99% availability means that in a 24-hour day there could be about 9 seconds of downtime, or about 52 minutes and 34 seconds per year. Note that numbers can vary depending on whether you use 30 days for a month vs. 365/12 days, or 52 weeks vs. 365/7 for weeks, along with rounding and the number of decimal places, as shown in Table 9.1.

Uptime % | 24-hour Day | Week | Month | Year
99 | 0 h 14 m 24 s | 1 h 40 m 48 s | 7 h 18 m 17 s | 3 d 15 h 36 m 15 s
99.9 | 0 h 01 m 27 s | 0 h 10 m 05 s | 0 h 43 m 26 s | 0 d 08 h 45 m 36 s
99.99 | 0 h 00 m 09 s | 0 h 01 m 01 s | 0 h 04 m 12 s | 0 d 00 h 52 m 34 s
99.999 | 0 h 00 m 01 s | 0 h 00 m 07 s | 0 h 00 m 36 s | 0 d 00 h 05 m 15 s

Table 9.1 Number of 9’s Availability Shown as Downtime per Time Interval
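The figures in Table 9.1 can be reproduced with a few lines of Python (this sketch uses 365/12 days per month; as noted above, small differences from the table appear depending on rounding conventions):

    def downtime(availability_pct: float, period_seconds: float) -> str:
        """Allowed downtime for an availability level over a given period."""
        secs = round(period_seconds * (1 - availability_pct / 100))
        d, rem = divmod(secs, 86_400)
        h, rem = divmod(rem, 3_600)
        m, s = divmod(rem, 60)
        return f"{d} d {h:02} h {m:02} m {s:02} s"

    DAY = 24 * 3_600
    periods = {"day": DAY, "week": 7 * DAY, "month": 365 / 12 * DAY, "year": 365 * DAY}
    for pct in (99, 99.9, 99.99, 99.999):
        print(pct, {name: downtime(pct, secs) for name, secs in periods.items()})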

Service-level objectives (SLO) are metrics and key performance indicators (KPI) that guide meeting performance, availability, capacity, and economic targets, for example some number of 9's availability or durability, a specific number of transactions per second, or recovery and restart of applications. A service-level agreement (SLA) specifies various service-level objectives, such as PACE requirements including RTO and RPO, that define the expected level of service and any remediation for loss of service. An SLA can also specify availability objectives, as well as penalties or remuneration should SLOs be missed.

Recovery Time Objective (RTO) is how much time is allowed before applications, data, or data infrastructure components need to be accessible, consistent, and usable. An RTO = 0 (zero) means no loss of access or service disruption, i.e., continuous availability. One example is an application end-to-end RTO of 4 hours, meaning that all components (application server, databases, file systems, settings, associated storage, networks) must be restored, rolled back, and restarted for use in 4 hours or less.

Another RTO example is at the component level, across different data infrastructure layers, as well as cumulative or end-to-end. In this scenario, the 4 hours includes the time to recover, restart, and rebuild a server, application software, storage devices, databases, networks, and other items. There are not 4 hours available to restore the database plus another 4 hours to restore the storage, as some time is also needed to verify all the pieces along with their dependencies. A worked example follows below.
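As a worked example of the component-level view (the components and times below are hypothetical), the 4-hour end-to-end RTO is a budget that all component recovery times, plus verification, must fit within, not 4 hours per component:

    # Hypothetical component recovery times within a 4-hour end-to-end RTO budget.
    end_to_end_rto_min = 4 * 60

    components = {
        "restore server image":    45,
        "restore storage volumes": 60,
        "restore database":        75,
        "restart applications":    20,
        "verify dependencies":     30,   # verification also consumes the budget
    }

    used = sum(components.values())
    print(f"used {used} of {end_to_end_rto_min} min; "
          f"slack {end_to_end_rto_min - used} min")
    # -> used 230 of 240 min; slack 10 min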

Data Loss Access (DLA) occurs when data still exists, is consistent, durable, and safe, but cannot be accessed due to a network, application, or other problem. Note that the inverse is data that can be accessed but is damaged.

Data Loss Event (DLE) is an incident that results in loss of, or damage to, data. Note that some context is needed: in one scenario data is stolen via a copy but still exists, vs. the actual data being taken and now missing (no copies exist). Also note that a DLE can have different granularity and scope, for example all data vs. just some data lost (or damaged).

Data Loss Prevention (DLP) encompasses the activities, techniques, technologies, tools, best practices, and tradecraft skills used to protect data from DLE or DLA.

Point in Time (PiT), as in a PiT copy or PiT data protection, refers to a recovery or consistency point that data can be restored from or to (i.e., an RPO), such as from a copy, snapshot, backup, sync, or clone. Essentially, as its name implies, it is the state of the data at that particular point in time.

Recovery Point Objective (RPO) is the point in time to which data needs to be recoverable (i.e., when it was last protected). Another way of looking at RPO is how much data you can afford to lose, with RPO = 0 (zero) meaning no data loss, or, for example, RPO = 5 minutes meaning up to 5 minutes of lost data.

Figure 9.8 Recovery Points (point in time to recover from), and Recovery Time (how long recovery takes)

Frequency refers to how often and on what time interval protection is performed.

Figure 9.4 Data Protection 4 3 2 1 and 3 2 1 rule

In the context of the 4 3 2 1 rule, enabling RPO is associated with durability, meaning the number of copies and versions. Simply having more copies is not sufficient, because if they are all corrupted, damaged, infected, contain deleted data, or carry latent nefarious bugs or rootkits, then they could all be bad. The solution is to have multiple versions, and copies of those versions in different locations, to provide data protection to a given point in time, as sketched below.
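As a sketch, using one common reading of the 4 3 2 1 rule (at least 4 copies, at least 3 versions, on at least 2 different systems or media, with at least 1 off-site), a protection plan can be sanity-checked along these lines (the data structure is illustrative):

    # Each protection copy: which system it lives on, where, and which version.
    copies = [
        {"system": "primary-ssd",  "site": "onsite",  "version": "t3"},
        {"system": "nas-mirror",   "site": "onsite",  "version": "t3"},
        {"system": "backup-disk",  "site": "onsite",  "version": "t2"},
        {"system": "cloud-bucket", "site": "offsite", "version": "t1"},
    ]

    ok_4 = len(copies) >= 4                                 # 4+ copies
    ok_3 = len({c["version"] for c in copies}) >= 3         # 3+ versions
    ok_2 = len({c["system"] for c in copies}) >= 2          # 2+ systems/media
    ok_1 = any(c["site"] == "offsite" for c in copies)      # 1+ off-site
    print(all([ok_4, ok_3, ok_2, ok_1]))                    # -> True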

Timeline deltas, or recovery points, are the points from which data can be recovered in order to move forward. They are consistency points in the context of what is (or was) protected. Figure 10.1 shows, on the left vertical axis, different granularities, along with protection and consistency points that occur over time (horizontal axis). For example, data “Hello” is written to storage (A); then (B) an update is made, “Oh Hello”; followed by (C), where a full backup, clone, and master snapshot or gold copy is made.

Figure 10.1 Recovery and consistency points

Next, data is changed (D) to “Oh, Hello,” followed by, at time-1 (E), an incremental backup, copy, or snapshot. At (F) a full copy, the master snapshot, is made, which now includes (H) “Hello” and “Oh, Hello.” Note that the previous full contained “Hello” and “Oh Hello,” while the new full (H) contains “Hello” and “Oh, Hello.” Next, (G) data is changed to “Oh, Hello there,” then changed (I) to “Oh, Hello there I’m here.” Next, (J) another incremental snapshot or copy is made, data is changed (K) to “Oh, Hello there I’m over here,” followed by another incremental (L), and yet another incremental (M) made a short time later.

At (N) there is a problem with the file, object, or stored item, requiring a restore, rollback, or recovery from a previous point in time. Since the incremental (M) was too close to the recovery point (RP) or consistency point (CP), and perhaps damaged or of questionable consistency, the decision is made to go to (O), the previous snapshot, copy, or backup. Alternatively, if needed, one can go back to (P) or (Q).

Note that simply having multiple copies and different versions is not enough for resiliency; some of those copies and versions need to be dispersed or placed in different systems or locations away from the source. How many copies, versions, systems, and locations are needed for your applications will depend on the applicable threat risks along with associated business impact.

The solution is to combine techniques, enabling copies with versions and point-in-time protection intervals. PiT intervals enable recovering, or accessing, data back in time, i.e., an RPO. That RPO can be an application, transactional, system, or other consistency point, or some other time interval. Some context is needed here, as “gap” is used in two ways; a gap in protection coverage means something was not protected.

A good data protection gap is a time interval enabling RPO, or simply a physical and logical break, with distance, between the active or protection copy and alternate versions and copies. By contrast, a gap in coverage (a bad data protection gap) means something was not protected.

A protection air or distance gap is having one of those versions and copies on another system, in a different location and not directly accessible. In other words, if you delete, or data gets damaged locally, the protection copies are safe. Furthermore, if the local protection copies are also damaged, an air or distance gap means that the remote or alternate copies, which may be on-line or off-line, are also safe.

Figure 9.9 Air Gaps and Data Protection

Figure 10.2 shows, on the left, various data infrastructure layers, moving from low altitude (lower in the stack) with host servers, or bare metal (BM) physical machines (PM), up to higher levels with applications. At each layer or altitude there are different hardware and software components to protect, with various policy attributes. These attributes, besides PACE, FTT, RTO, RPO, and SLOs, include granularity (full or incremental), consistency points, coverage, frequency (when protected), and retention.

Figure 10.2 Protecting data infrastructure granularity and enabling resiliency at various stack layers (or altitude)

Also shown in the top left of Figure 10.2 are protections for various data infrastructure management tools and resources, including active directory (AD), Azure AD (AAD), domain controllers (DC), group policy objects (GPO) and organizational units (OU), network DNS, routing and firewall, among others. Also included are protecting management systems such as VMware vCenter and related servers, Microsoft System Center, OpenStack, as well as data protection tools along with their associated configurations, metadata, and catalogs.

The center of Figure 10.2 lists various items that get protected along with associated technologies, techniques, and tools. On the right-hand side of Figure 10.2 is an example of how different layers get protected at various times, granularity, and what is protected.

For example, the PM or host server BIOS and UEFI as well as other related settings seldom change, so they do not have to be protected as often. Also shown on the right of Figure 10.2 are what can be a series of full and incremental backups, as well as differential or synthetic ones.

Figure 10.3 is a variation of Figure 10.2 showing on the left different frequencies and intervals, with a granularity of focus or scope of coverage on the right. The middle shows how different layers or applications and data focus have various protection intervals, type of protection (full, incremental, snap, differentials), along with retention, as well as some copies to keep.

Figure 10.3 Protecting different focus areas with various granularities

Protection in Figures 10.2 and 10.3 for the PM could be as simple as documentation of what settings to configure, versions, and other related information. A hypervisor may have changes, such as patches, upgrades, or new drivers, more frequently than a PM. How you go about protecting it may involve reinstalling from your standard or custom distribution software, then applying patches, drivers, and settings.

You might also have a master copy of a hypervisor on a USB thumb drive or another storage device that can be cloned, then customized with the server name, IP address, log location, and other information. Some backup and data protection tools also provide protection of hypervisors (or containers and cloud machine instances) in addition to virtual machines (VM), guest operating systems, applications, and data.

The point is that as you go up the stack, higher in altitude (layers), the granularity and frequency of protection increase. This means you may have more frequent, smaller protection copies and consistency points higher up at the application layer, and less frequent, yet larger, full image, volume, or VM protection lower down, combining different tools, technologies, and techniques, for example along the lines sketched below.
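As an illustrative configuration sketch (the layers, frequencies, and retention values are hypothetical, echoing Figures 10.2 and 10.3), protection frequency and granularity increase with altitude in the stack:

    # Hypothetical protection policy per data infrastructure layer (altitude).
    policy = [
        # layer,              method,                  frequency,    retention
        ("BIOS/UEFI (PM)",    "documented settings",   "on change",  "current"),
        ("hypervisor",        "image/clone + config",  "monthly",    "2 copies"),
        ("VM / OS volume",    "full + incrementals",   "daily",      "2 weeks"),
        ("database",          "log/checkpoint ship",   "15 min",     "30 days"),
        ("app transactions",  "journal/CDP",           "continuous", "7 days"),
    ]

    for layer, method, freq, keep in policy:
        print(f"{layer:<18} {method:<22} every {freq:<11} keep {keep}")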

Where To Learn More

Continue reading additional posts in this series of Data Infrastructure Data Protection fundamentals and companion to Software Defined Data Infrastructure Essentials (CRC Press 2017) book, as well as the following links covering technology, trends, tools, techniques, tradecraft and tips.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What This All Means

Everything is not the same across different environments, data centers, data infrastructures, applications and their workloads (along with data, and its value). Likewise there are different approaches for enabling data protection to meet various SLO needs including RTO, RPO, RAS, FTT and PACE attributes among others. What this means is that complete data protection requires using different new (and old) tools, technologies, trends, services (e.g. cloud) in new ways. This also means leveraging existing and new techniques, learning from lessons of the past to prevent making the same errors.

RAID (mirror, replicate, parity including erasure codes), regardless of where and how it is implemented (hardware, software, legacy, virtual, cloud), is by itself not a replacement for backup; it needs to be combined with recovery-point protection of some type (backup, checkpoint, consistency point, snapshots). Protection should also occur at multiple levels of granularity (device, system, application, database, table) to meet various SLO requirements, as well as at different time intervals, enabling 4 3 2 1 data protection.

Keep in mind what it is you are protecting, why you are protecting it and against what, what is likely to happen, and, if something does happen, what its impact will be, along with your SLO requirements, while minimizing impact during normal operations as well as during failure scenarios. For example, do you need a full system backup to support recovery of an individual database table, or can that table be protected and recovered via checkpoints, snapshots, or other fine-grained routine protection? Everything is not the same, so why treat and protect everything the same way?

Get your copy of Software Defined Data Infrastructure Essentials here at Amazon.com, at CRC Press among other locations and learn more here. Meanwhile, continue reading with the next post in this series, Part 5 Point In Time Data Protection Granularity Points of Interest.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.