Microsoft Azure Elastic SAN from Cloud to On-Prem

What is Azure Elastic SAN

Azure Elastic SAN (AES) is a new (now GA) Azure cloud-native storage service that provides scalable, resilient, and cost-effective block storage with rapid provisioning, easy management, and high performance. AES (figure 1) supports many workloads and compute resources. Workloads that benefit from AES include tier 1 and tier 2 applications such as mission critical, database, and VDI, among others that traditionally rely upon consolidated Storage Area Network (SAN) shared storage.

Compute resources that can use AES include bare metal (BM) physical machines (PM), virtual machines (VM), and containers, among others, using iSCSI for access. AES is accessible by compute resources and services within the Azure cloud in various regions (check the Azure website for specific region availability) and from on-prem core and edge locations using iSCSI. The AES management experience and value proposition are similar to traditional hardware or software-defined shared SAN storage combined with Azure cloud-based management capabilities.

Figure 1 General Concept and Use of Azure Elastic SAN (AES)

While Microsoft Azure describes AES as a cloud-native storage solution, that does not mean that AES is only for containers and other cloud-native apps or DevOps. Rather, AES has been built for and is native to the cloud (e.g., software-defined) and can be accessed by various compute and other resources (e.g., VMs, containers, AKS, etc.) using iSCSI.

How Azure Elastic SAN differs from other Azure Storage

AES differs from traditional Azure block storage (e.g., Azure Disks) in that the storage is independent of the host compute server (e.g., BM, PM, VM, containers). With AES, similar to a conventional software-defined or hardware-based shared SAN solution, storage is disaggregated from host servers for sharing and management, using iSCSI for connectivity. By comparison, traditional Azure VM-based storage is typically associated with a given virtual machine in a DAS (Direct Attached Storage) type configuration. Likewise, similar to conventional on-prem environments, there can be a mix of DAS and SAN, including some host servers that leverage both.

AES supports Azure VM, Azure Kubernetes Service (AKS), cloud-native, edge, and on-prem computing (BM, VM, etc.) via iSCSI. Support for Azure VMware Solution (AVS) is in preview; check the Microsoft Azure website for updates and new feature functionality enhancements.

Does this mean everything is moving to AES? Similar to traditional SANs, there are roles and needs for various storage options, including DAS, shared block, file, and object, among other storage offerings. Likewise, Microsoft and Azure have expanded their storage offerings to include AES, DAS (Azure Disks, including Ultra, Premium, and Standard, among other options), append, block, and page blobs (objects), and files, including Azure File Sync, tables, and Data Box, among other storage services.

Azure Elastic Storage Feature Highlights

AES feature highlights include, among others:

    • Management via Azure Portal and associated tools
    • Azure cloud-based shared scalable block storage
    • Scalable capacity, low latency, and high performance (IOPs and throughput)
    • Space capacity-optimized without the need for data reduction
    • Accessible from within Azure cloud and from on-prem using iSCSI
    • Supports Azure compute (VMs, containers/AKS, Azure VMware Solution)
    • On-prem access via iSCSI from PM/BM, VM, and containers
    • Variable number of volumes and volume size per volume group
    • Flexible easy to use Azure cloud-based management
    • Encryption and network private endpoint security
    • Local (LRS) and zone (ZRS) redundancy options for resiliency
    • Volume snapshots and cluster support

Who is Azure Elastic SAN for

AES is for those who need cost-effective, shared, resilient, high-capacity, high-performance (IOPS, bandwidth), and low-latency block storage within Azure and accessible from on-prem. Others who can benefit from AES include those who need shared block storage for clustered app workloads, server and storage consolidation, and hybrid and migration scenarios. Another consideration is for those familiar with traditional hardware and software-defined SANs who want to facilitate hybrid and migration strategies.

How Azure Elastic SAN works

Azure Elastic SAN is a software-defined (cloud-native if you prefer) block storage offering that presents a virtual SAN accessible within the Azure cloud and to on-prem core and edge locations, currently via iSCSI. Using iSCSI, Azure VMs, clusters, containers, and Azure VMware Solution, among other compute and services, as well as on-prem BM/PM, VMs, and containers, among others, can access AES storage volumes.

From the Azure Portal or associated tools (Azure CLI or PowerShell), create an AES SAN, giving it a 3 to 24-character name, and specify storage capacity (base units with performance and any additional space capacity). Next, create a volume group, assigning it to a specific subscription and resource group (new or existing), then specify which Azure region to use, the type of redundancy (LRS or ZRS), and the zone to use. LRS provides local redundancy, while ZRS provides enhanced zone resiliency with high-speed synchronous resiliency, without setting up multiple SAN systems and their associated replication configurations and networking considerations (e.g., Azure takes care of that for you within the service).

The next step is to create volumes by specifying the volume name, the volume group to use, and the volume size in GB, which determines the volume's maximum IOPS and bandwidth. Once you have made your AES volume group and volumes, you can create private endpoints, change security and access controls, and access the volumes from Azure or on-prem resources using iSCSI. Note that AES currently needs to be LRS (not ZRS) for clustered shared storage, and that key management includes using your own keys with Azure Key Vault.
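To make that workflow concrete, here is a minimal Python sketch (not the Azure Portal, CLI, or SDK) that simply models the provisioning inputs described above; the class name, the example values, and any validation beyond the 3 to 24-character SAN name and LRS/ZRS choice stated in the text are illustrative assumptions.

```python
from dataclasses import dataclass, field

# Minimal sketch, not the Azure SDK or CLI: models the AES provisioning
# inputs described above (SAN name, base vs. additional capacity,
# redundancy, volume group, volumes). Names and example values are assumptions.

@dataclass
class ElasticSanPlan:
    name: str                       # 3 to 24 characters
    region: str                     # e.g., "eastus"
    redundancy: str                 # "LRS" or "ZRS"
    base_capacity_tib: int          # base units carry performance (IOPS, MBps)
    additional_capacity_tib: int = 0    # capacity only, no added performance
    volumes_gib: dict = field(default_factory=dict)  # volume name -> size in GiB

    def validate(self) -> None:
        if not 3 <= len(self.name) <= 24:
            raise ValueError("Elastic SAN name must be 3 to 24 characters")
        if self.redundancy not in ("LRS", "ZRS"):
            raise ValueError("Redundancy must be LRS or ZRS")
        if self.base_capacity_tib < 1:
            raise ValueError("At least 1 TiB of base capacity is required")

# Example: a 10 TiB base, LRS Elastic SAN with two volumes (hypothetical names).
plan = ElasticSanPlan(
    name="storageio-aes01",
    region="eastus",
    redundancy="LRS",
    base_capacity_tib=10,
    volumes_gib={"sqldata01": 1024, "sqllogs01": 256},
)
plan.validate()
print(plan)
```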

Using Azure Elastic SAN

Using AES is straightforward, and there are good, easy-to-follow guides from Microsoft Azure, including the documentation and deployment guides linked in the resources section below.

The following images show what AES looks like from the Azure Portal, as well as from an Azure Windows Server VM and an on-prem physical machine (e.g., a Windows 10 laptop).

Figure 2 AES Azure Portal Big Picture

Figure 3 AES Volume Groups Portal View

Figure 4  AES Volumes Portal View

Figure 5 AES Volume Snapshot Views

Figure 6 AES Connected Volume Portal View

Figure 7 AES Volume iSCSI view from on-prem Windows Laptop

Figure 8 AES iSCSI Volume attached to Azure VM

Azure Elastic SAN Cost Pricing

The cost of AES is elastic, depending on whether you scale capacity with performance (e.g., base unit) or add more space capacity. If you need more performance, add base unit capacity, increasing IOPS, bandwidth, and space. In other words, base capacity includes storage space and performance, which you can grow in various increments. Remember that AES storage resources get shared across volumes within a volume group.

Azure Elastic SAN is billed hourly based on a monthly per-capacity base unit rate, with a minimum of 1TB provisioned capacity and minimum performance (e.g., 5,000 IOPS, 200MBps bandwidth). The base unit rate varies by region and type of redundancy, aka resiliency. For example, at the time of this writing, looking at US East, the Locally Redundant Storage (LRS) base unit rate is 1TB with 5,000 IOPS and 200MBps bandwidth, costing $81.92 per unit per month.

The above example breaks down to a rate of $0.08 per GB per month, or $0.000110 per GB per hour (assuming 730 hours per month). An example of simply adding storage capacity without increasing the base unit (e.g., performance) for US East is $61.44 per TB per month. That works out to $0.06 per GB per month (no additional provisioned IOPS or bandwidth) or $0.000083 per GB per hour.
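As a quick sanity check, the following Python snippet reworks the US East figures cited above; the per-TB rates are the ones quoted in the text at the time of writing, so check current Azure pricing before relying on them.

```python
# Worked example of the US East LRS rates cited above. The $81.92 (base unit:
# capacity plus performance) and $61.44 (additional capacity only) per-TB
# monthly rates come from the text; verify against current Azure pricing.

HOURS_PER_MONTH = 730
GB_PER_TB = 1024

base_unit_per_tb_month = 81.92        # includes 5,000 IOPS and 200 MBps per TB
extra_capacity_per_tb_month = 61.44   # capacity only, no added performance

for label, rate in [("base unit", base_unit_per_tb_month),
                    ("additional capacity", extra_capacity_per_tb_month)]:
    per_gb_month = rate / GB_PER_TB
    per_gb_hour = per_gb_month / HOURS_PER_MONTH
    print(f"{label}: ${per_gb_month:.2f}/GB/month, ${per_gb_hour:.6f}/GB/hour")

# Example: 10 TB of base capacity plus 10 TB of additional capacity
monthly = 10 * base_unit_per_tb_month + 10 * extra_capacity_per_tb_month
print(f"10 TB base + 10 TB extra capacity: ${monthly:,.2f}/month")
```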

Note that there are extra fees for Zone Redundant Storage (ZRS). Learn more about Azure Elastic SAN pricing and the pricing calculator via the links in the resources section below.

Azure Elastic SAN Performance

Performance for Azure Elastic SAN includes IOPS, bandwidth, and latency. AES IOPS increase in increments of 5,000 per base TB. Thus, an AES with a base of 10TB would have 50,000 IOPS distributed (shared) across all of its volumes (e.g., volumes are not restricted). For example, if the base capacity is increased from 10TB to 20TB, then the IOPS would increase from 50,000 to 100,000.

On the other hand, if the base capacity (10TB) is not increased and only the additional storage capacity grows from 10TB to 20TB, the AES would have more capacity but still only have the 50,000 IOPS. AES bandwidth throughput increases by 200MBps per base TB. For example, a 5TB AES would have 5 x 200MBps (1,000 MBps) of throughput bandwidth shared across the volume group's volumes.

Note that while the performance gets shared across volumes, individual volume performance is determined by its capacity, with a maximum of 80,000 IOPS and up to 1,024 MBps. Thus, to reach 80,000 IOPS and 1,024 MBps, an AES volume would have to be at least 107GB in space capacity. Also, note that the aggregate performance of all volumes cannot exceed the total of the AES. If you need more performance, then create another AES.
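The scaling described above boils down to simple arithmetic; the Python sketch below assumes only the 5,000 IOPS and 200 MBps per base TB figures and the per-volume maximums cited in the text.

```python
# Sketch of the provisioned performance scaling described above: 5,000 IOPS
# and 200 MBps per base TB, shared across a volume group's volumes, with
# per-volume ceilings of 80,000 IOPS and 1,024 MBps.

BASE_IOPS_PER_TB = 5_000
BASE_MBPS_PER_TB = 200
VOLUME_MAX_IOPS = 80_000
VOLUME_MAX_MBPS = 1_024

def san_performance(base_tb: int) -> tuple[int, int]:
    """Total IOPS and MBps provisioned by the base (performance) capacity."""
    return base_tb * BASE_IOPS_PER_TB, base_tb * BASE_MBPS_PER_TB

for base in (5, 10, 20):
    iops, mbps = san_performance(base)
    print(f"{base} TB base: {iops:,} IOPS, {mbps:,} MBps shared across volumes")

# Per-volume ceiling: a single volume tops out at 80,000 IOPS / 1,024 MBps,
# which per the text requires roughly 107 GB of volume capacity.
```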

Will all VMs or compute resources see performance improvements with AES? Traditional Azure Disks associated with VMs have per-disk performance resource limits, including IOPs and Bandwidth. Likewise, VMs have storage limits based on their instance type and size, including the number of disks (HDD or SSD), performance (IOPS and bandwidth), and the number of CPUs and memory.

What this means is that an AES volume could have more performance than what a given VM is limited to. Refer to your VM instance sizing and configuration to determine its IOPS and bandwidth limits; if needed, explore changing the size of your VM instance to leverage the performance of Azure Elastic SAN storage.
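In other words, the effective performance an application sees is roughly the lesser of the volume and VM limits; the short Python illustration below uses hypothetical VM limits (not a specific Azure VM size) to show the idea.

```python
# The effective performance is bounded by both the AES volume and the VM
# instance limits. The VM figures below are hypothetical placeholders, not
# the limits of any specific Azure VM size.

volume_limits = {"iops": 80_000, "mbps": 1_024}
vm_limits = {"iops": 25_600, "mbps": 768}   # hypothetical VM instance caps

effective = {k: min(volume_limits[k], vm_limits[k]) for k in volume_limits}
print(effective)   # {'iops': 25600, 'mbps': 768} -> the VM is the bottleneck
```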

Additional Resources Where to learn more

The following links are additional resources to learn about Microsoft Azure Elastic SAN and related data infrastructures and tradecraft topics.

Azure AKS Storage Concepts 
Azure Elastic SAN (AES) Documentation and Deployment Guides
Azure Elastic SAN Microsoft Blog
Azure Elastic SAN Overview
Azure Elastic SAN Performance topics
Azure Elastic SAN Pricing calculator
Azure Products by Region (see where AES is currently available)
Azure Storage Offerings 
Azure Virtual Machine (VM) sizes
Azure Virtual Machine (VM) types
Azure Elastic SAN General Pricing
Azure Storage redundancy 
Azure Service Level Agreements (SLA) 
StorageIOBlog.com Data Box Family 
StorageIOBlog.com Data Box Review
StorageIOBlog.com Data Box Test Drive 
StorageIOblog.com Microsoft Hyper-V Alive Enhanced with Win Server 2025
StorageIOblog.com If NVMe is the answer, what are the questions?
StorageIOblog.com NVMe Primer (or refresh)

Additional learning experiences, along with common questions (and answers), are found in my Software Defined Data Infrastructure Essentials book.


What this all means

Azure Elastic SAN (AES) is a new and now generally available shared block storage offering that is accessible using iSCSI from within the Azure cloud and from on-prem environments. Even with iSCSI, AES is relatively easy to set up and use for shared storage, especially if you are used to or currently working with hardware or software-defined SAN storage solutions.

With NVMe over TCP fabrics gaining industry and customer traction, I'm hoping Microsoft adds that in the future. Currently, AES supports LRS and ZRS for redundancy, and an excellent future enhancement would be to add Geo Redundant Storage (GRS) capabilities for those who need it.

I like the option of elastic shared storage regarding performance, availability, capacity, and economic costs (PACE). If you understand the value proposition of evolving from dedicated DAS to shared SAN (independent of the underlying network fabric), or are currently using some form of on-prem shared block storage, you will find AES familiar and easy to use. Granted, AES is not a solution for everything, as there are roles for other block storage, including DAS such as Azure Disks for VMs within Azure, along with on-prem DAS, as well as file, object, blobs, and tables, among others.

Wrap up

The notion that all cloud storage must be objects or blobs is tied to those who only need, provide, or prefer those solutions. The reality is that everything is not the same. Thus, there is a need for various storage mediums, devices, tiers, access methods, and types of services, and Microsoft and Azure have done an excellent job of providing them. I like what Microsoft Azure is doing with Azure Elastic SAN.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Nine time Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of UnlimitedIO LLC.

Microsoft Hyper-V Is Alive Enhanced With Windows Server 2025

Yes, you read that correctly, Microsoft Hyper-V is alive and enhanced with Windows Server 2025, formerly known as Windows Server v.Next. Note that the Windows Server 2025 preview build is just that, a preview, available for download and testing as of this time.

What about the myth that Hyper-V is discontinued?

Despite recent FUD (fear, uncertainty, and doubt), misinformation, and fake news, Microsoft Hyper-V is not dead. Nor has Hyper-V been discontinued, as some claim. Some Hyper-V FUD is tied to VMware customers and partners looking for alternatives following Broadcom's acquisition of VMware. More on Broadcom and VMware here, here, here, here, and here.

As a result of Broadcom's VMware acquisition and the challenges for partners and customers (see links above), organizations are doing due diligence, looking for replacements or alternatives. In addition, some vendors are leveraging the current VMware challenges to try and position themselves as the best hypervisor virtualization safe harbor for customers. Thus, some vendors, their partners, influencers, and amplifiers are using FUD to keep prospects from looking at or considering Hyper-V.

Virtual FUD (vFUD)

First, let's shut down some Virtual FUD (vFUD). As mentioned above, some are claiming that Microsoft has discontinued Hyper-V. Specifically, the vFUD centers on Microsoft terminating a specific license SKU (e.g., the free Hyper-V Server 2019 SKU). For those unfamiliar with the discontinued SKU (Hyper-V Server 2019), it's a headless (no desktop GUI) version of Windows Server running Hyper-V VMs, nothing more, nothing less.

Does that mean the Hyper-V technology is discontinued? No.

Does that mean Windows Server and Hyper-V are discontinued? No.

Microsoft is terminating a particular stripped-down Windows Server version SKU (e.g. Hyper-V Server 2019) and not the underlying technology, including Windows Server and Hyper-V.

To repeat, a specific SKU or distribution (Hyper-V Server 2019) has been discontinued, not Hyper-V. Meanwhile, other distributions of Windows Server with Hyper-V continue to be supported and enhanced, including the upcoming Windows Server 2025 and Server 2022, among others.

On the other hand, there is also some old vFUD going back many years, or a decade, to when some last experienced using, trying, or looking at Hyper-V. For example, the last look at Hyper-V might have been in the Server 2016 or earlier era.

If you are a vendor or influencer throwing vFUD around, at least get some new vFUD and use it in new ways. Better yet, up your game and marketing so you don’t rely on old vFUD. Likewise, if you are a vendor partner and have not extended your software or service support for Hyper-V, now is a good time to do so.

Watch out for falling into the vFUD trap of thinking Hyper-V is dead and thus missing out on new revenue streams. At a minimum, take a look at current and upcoming enhancements for Hyper-V when doing your due diligence instead of working off of old vFUD.

Where is Hyper-V being used?

From on-site (aka on-premises, on-prem) and edge on Windows Servers, standalone and clustered, to Azure Stack HCI. From Azure and other Microsoft platforms or services to Windows desktops, as well as home labs, among many other scenarios.

Do I use Hyper-V? Yes. When I retired from the vExpert program after ten years, I moved all of my workloads from my VMware environment to Hyper-V, including *nix, containers, and Windows VMs, on-site and on Azure cloud.

How Hyper-V Is Alive Enhanced With Windows Server 2025

Is Hyper-V alive and enhanced with Windows Server 2025? Yup.

Formerly known as Windows Server v.Next, Microsoft announced the Windows Server 2025 preview build on January 26, 2024 (you can get the bits here). Note that Microsoft uses Windows Server v.Next as a generic placeholder for next-generation Windows Server technology.

A reminder that the cadence of Windows Server Long Term Servicing Channel (LTSC) versions has been about three years (2012R2, 2016, 2019, 2022, now 2025), along with interim updates.

What’s enhanced with Hyper-V and Windows Server 2025

    • Hot patching of running servers (requires Azure Arc management) with almost instant implementation and no reboot for physical, virtual, and cloud-based Windows Servers.
    • Scaling to even more compute processors and RAM for VMs.
    • Server storage I/O performance updates, including NVMe optimizations.
    • Active Directory (AD) improvements for scaling, security, and performance.
    • Storage Replica and clustering capability enhancements.
    • Hyper-V GPU partitioning and pools, including migration of VMs using GPUs.

More Enhancements for Hyper-V and Windows Server 2025

Active Directory (AD)

Enhanced performance using all CPUs in a processor group of up to 64 cores to support scaling and faster processing. LDAP over TLS 1.3, Kerberos support for AES SHA-256/384, new AD functional levels, local KDC, improved replication priority, NTLM retirement, and other security hardening. In addition, 64-bit Long value IDs (LIDs) are supported, along with a new database schema using 32K pages vs. the previous 8K pages. You will need to upgrade forest-wide across domain controllers (at least Server 2016 or later) to leverage the new larger page sizes. Note that there is backward compatibility using 8K pages until all ADs are upgraded.

Storage, HA, and Clustering

Windows Server continues to offer flexible options for using storage how you want or need to, from traditional direct attached storage (DAS) to Storage Area Networks (SAN) to Storage Spaces Direct (S2D) software-defined storage, including NVMe, NVMe over Fabrics (NVMeoF), SAS, Fibre Channel, and iSCSI, along with file-attached storage. Some other storage and HA enhancements include Storage Replica performance improvements for logging and compression and stretch S2D multi-site optimization.

Failover Cluster enhancements include AD-less clusters, cert-based VM live migration for the edge, cluster-aware updating reliability, and performance improvements. ReFS enhancements include dedupe and compression optimizations.

Other NVMe enhancements include optimizations to boost performance while reducing CPU overhead, for example, going from 1.1M IOPS to 1.86M IOPS, and then with a new native NVMe driver (to be added), from 1.1M IOPS to 2.1M IOPS. These performance optimizations will be interesting to look at closer, including the baseline configuration, number and type of devices used, and other considerations.
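For rough context, here is the quick math on those cited figures (directional only, since the baseline and configuration details matter).

```python
# Quick percentage math on the NVMe optimization figures cited above;
# treat these as directional, not benchmark results.
baseline_iops = 1.1e6
optimized_iops = 1.86e6
native_driver_iops = 2.1e6

print(f"optimizations: {optimized_iops / baseline_iops - 1:.0%} more IOPS")    # ~69%
print(f"native driver: {native_driver_iops / baseline_iops - 1:.0%} more IOPS")  # ~91%
```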

Compute, Hyper-V, and Containers

Microsoft has added and enhanced various compute, Hyper-V, and container functionality with Server 2025, including support for larger configurations and more flexibility with GPUs. There are app compatibility improvements for containers, beyond just Nano (the ultra slimmed-down Windows container), that will be interesting to see and hear more details about.

Hyper-V

Microsoft extensively uses Hyper-V technology across different platforms, including Azure, Windows Servers, and Desktops. In addition, Hyper-V is commonly found across various customer and partner deployments on Windows Servers, Desktops, Azure Stack HCI, running on other clouds, and virtualization (nested). While Microsoft effectively leverages Hyper-V and continues to enhance it, its marketing has not effectively told and amplified the business benefit and value, including where and how Hyper-V is deployed.

Hyper-V with Server 2025 includes discrete device assignment to a VM (e.g., a device dedicated to a VM). However, dedicating a device like a GPU to a VM prevents resource sharing, failover clustering, or live migration. On the other hand, Server 2025 Hyper-V supports GPU-P (GPU Partitioning), enabling GPU(s) to be shared across multiple VMs. GPUs can be partitioned and assigned to VMs, with GPUs and GPU partitioning enabled across various hosts.

In addition to partitioning, GPUs can be placed into GPU pools for HA. Live migration and cluster failover can be done (this requires PCIe SR-IOV, AMD Milan or later, or Intel Sapphire Rapids, among other requirements). Another enhancement is Dynamic Processor Compatibility, which allows mixed processor generations to be used across VMs by masking out functionality that is not common across processors. Other enhancements include optimized UEFI, secure boot, TPM, and hot add and removal of NICs.

Networking

Network ATC provides intent-based deployments where you specify desired outcomes or states, and the configuration is optimized for what you want to do. Network HUD enables always-on monitoring and network remediation. Software Defined Networking (SDN) gains optimization for transparent multi-site L2 and L3 connectivity along with SDN gateway performance enhancements.

SMB over QUIC leverages TLS 1.3 security to streamline local, mobile, and remote networking while enhancing security with configuration from the server or client. In addition, there is an option to turn off SMB NTLM at the SMB level, along with controls on which versions of SMB to allow or refuse. Also being added is a brute force attack limiter that slows down SMB authentication attacks.

Management, Upgrades, General user Experience

The upgrade process moving forward with Windows Server 2025 is intended to be seamless and less disruptive. These enhancements include hot patching and flighting (e.g., LTSC Windows Server upgrades delivered similar to how you get regular updates). For hybrid management, an easier-to-use wizard to enable Azure Arc is planned. For flexibility, WiFi networking and Bluetooth devices, if present, are automatically enabled with Windows Server 2025, with a focus on edge and remote deployment scenarios.

Also new is an optional subscription-based licensing model for Windows Server 2025, while retaining the existing perpetual licensing option. Let me repeat that so as not to create new vFUD: you can still license Windows Server (and thus Hyper-V) using traditional perpetual models and SKUs.

Additional Resources Where to learn more

The following links are additional resources to learn about Windows Server, Server 2025, Hyper-V, and related data infrastructures and tradecraft topics.

What’s New in Windows Server v.Next video from Microsoft Ignite (11/17/23)
Microsoft Windows Server 2025 What's New
Microsoft Windows Server 2025 Preview Build Download
Microsoft Windows Server 2025 Preview Build Download (site)
Microsoft Evaluation Center (various downloads for trial)
Microsoft Eval Center Windows Server 2022 download
Microsoft Hyper-V on Windows Information
Microsoft Hyper-V on Windows Server Information
Microsoft Hyper-V on Windows Desktop (e.g., Win10)
Microsoft Windows Server Release Information
Microsoft Hyper-V Server 2019
Microsoft Azure Virtual Machines Trial
Microsoft Azure Elastic SAN
If NVMe is the answer, what are the questions?
NVMe Primer (or refresh), The NVMe Place.

Additional learning experiences, along with common questions (and answers), are found in my Software Defined Data Infrastructure Essentials book.


What this all means

Hyper-V is very much alive and being enhanced. Hyper-V is being used from Microsoft Azure to Windows Server and other platforms at scale, as well as in smaller environments.

If you are looking for alternatives to VMware or simply exploring virtualization options, do your due diligence and check out Hyper-V. Hyper-V may or may not be what you want; however, is it what you need? Looking at Hyper-V now, along with upcoming enhancements, also positions you for when management asks if you have done your due diligence vs. relying on vFUD.

Do a quick proof of concept, spin up a lab, and check out currently available Hyper-V, for example on Server 2022 or the 2025 preview, to get a feel for whether it meets your needs and wants. Download the bits and get some hands-on time with Hyper-V and Windows Server 2025.

Wrap up

Hyper-V is alive and enhanced with Windows Server 2025 and other releases.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Nine time Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of UnlimitedIO LLC.

PACE your Server Storage I/O decision making, it's about application requirements


PACE your Server Storage I/O decision-making, it's about application requirements. Regardless of whether you are looking for physical, software-defined virtual, cloud, or container storage, block, file, or object, primary, secondary, or protection copies, standalone, converged, hyper-converged, cluster-in-a-box, or other forms of storage and packaging, when it comes to server storage I/O decision-making, it's about the applications.

I often see people deciding on the best storage before the questions of requirements, needs, and wants are even mentioned. Sure, the technology is important; so too are the techniques and trends, including using new things in new ways, as well as old things in new ways. There are lots of buzzwords on the storage scene these days. But don't even think about buying until you truly understand your business' storage needs.

However, when it comes down to it, unless you have a unique need, most environments' server and storage I/O resources exist to protect, preserve, and serve applications and their information or data. Recently I did a couple of articles over at Network Computing; these are tied to server and storage I/O decision-making, balancing technology buzzwords with business and application requirements.

PACE and common application characteristics


A theme I mention in the above two articles, as well as elsewhere on server, storage I/O, and applications, is PACE. That is, application Performance, Availability, Capacity, Economics (PACE). Different applications will have various attributes, in general, as well as in how they are used. For example, database transaction activity vs. reporting or analytics, logs and journals vs. redo logs, indices, tables, import/export, scratch and temp space. PACE (figure 2.7) describes the application and data characteristics and needs.

Server Storage I/O PACE

Common Application PACE Attributes

All applications have PACE attributes:

  • Those PACE attributes vary by application and usage
  • Some applications and their data are more active vs. others
  • PACE characteristics will vary within different parts of an application

Think of an application, along with its associated data PACE, as its personality: how it behaves, what it does, how it does it and when, along with its value, benefit, or cost, and Quality of Service (QoS) attributes. Understanding the applications in different environments, data value, and associated PACE attributes is essential for making informed server and storage I/O decisions, from configuration to acquisitions or upgrades, when, where, why, and how to protect, performance optimization, capacity planning, reporting, and troubleshooting, not to mention addressing budget concerns.

Data and Application PACE

Primary PACE attributes for active and inactive applications and data:
P – Performance and activity (how things get used)
A – Availability and durability (resiliency and protection)
C – Capacity and space (what things use or occupy)
E – Energy and Economics (people, budgets and other barriers)

Some applications need more performance (server compute, or storage and network I/O), while others need space capacity (storage, memory, network, or I/O connectivity). Likewise, some applications have different availability needs (data protection, durability, security, resiliency, backup, BC, DR) that determine the various tools, technologies, and techniques to use. Budgets are also a concern, which for some applications means enabling more performance per cost, while others are focused on maximizing space capacity and protection level per cost. PACE attributes also define or influence policies for QoS (performance, availability, capacity), as well as thresholds, limits, quotas, retention, and disposition, among others.
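One way to put this into practice is to capture PACE attributes per application so they can drive configuration and placement discussions; the following Python sketch, with illustrative (not prescriptive) example workloads, shows the idea.

```python
# Minimal sketch of recording per-application PACE attributes so they can
# inform server and storage I/O decisions. The example workloads and their
# ratings are illustrative assumptions, not prescriptive values.

from dataclasses import dataclass

@dataclass
class PaceProfile:
    performance: str   # e.g., "low latency", "high IOPS", "throughput"
    availability: str  # durability, protection, RPO/RTO expectations
    capacity: str      # space needs, growth, data footprint
    economics: str     # budget constraints, cost per unit of work

workloads = {
    "database transactions": PaceProfile("low latency, high IOPS", "RPO near zero, clustered", "moderate", "cost per transaction"),
    "reporting/analytics":   PaceProfile("throughput oriented", "daily protection", "large, growing", "cost per TB scanned"),
    "scratch/temp space":    PaceProfile("bursty", "little or none", "transient", "lowest cost"),
}

for name, pace in workloads.items():
    print(f"{name}: {pace}")
```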

Where to learn more

Learn more about data infrastructures and tradecraft related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means

The best storage will be the one that meets or exceeds your application requirements instead of the solution that meets somebody else's needs or wants. Keep in mind: PACE your Server Storage I/O decision-making, it is about application requirements.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

March 31st is world backup day; when is world recovery day


If March 31st is world backup day, when is world recovery day?

For several years, if not decades, March 31st has been world backup day, a reminder to protect and back up your apps and data. Data protection, including backup, recovery, business continuance (BC), disaster recovery (DR), and business resilience (BR), should be a 365-day-a-year focus. If you have regular data protection, including backup, that is great; when was the last time you tested a restore?

Some related content

Upcoming and past events including webinars, tips and commentary
World Backup Day Reminder Don’t Be an April Fool Test Your Data Recovery
Data Infrastructure Overview, Its What’s Inside of a Data Center
Application Data Value Characteristics Everything Is Not The Same
Data Protection Diaries Topics Tools Techniques Technologies Tips

Reminder to Protect your data and apps and settings

Thus, this is also a reminder to protect your data, apps, and their settings regularly. What's even better is evolving from no protection, or once a year, to more frequent data protection, including backup of your critical and noncritical apps and data. Notice I keep mentioning apps and not just the usual focus on data. Application programs are broadly considered data too; after all, apps, your settings, and metadata are just data when stored and protected.

There is also often a focus on just the data, which can lead to problems when it comes time to recover an app program, settings, or metadata. Also, a reminder that data protection, including backup, is not just for large enterprises; it applies to organizations and entities of all sizes, including small and medium businesses (SMBs), non-profits, and homes (e.g., your photos, worksheets, and other documents).

What About Recovery

If March 31st is world backup day, when is world recovery day? So far, I have been talking about backup as part of data protection or ensuring your apps, data, and settings are protected; what about recovery?

Sometimes with data protection, discussions can drift into what’s more critical, backup or recovery, which is a bit like a chicken and egg situation. In other words, what’s more important, the chicken or the egg? Similar to data protection, what’s more critical, backup or recovery?

Recovery is only as good as your backup (or snapshot, point-in-time copy, checkpoint, or consistency point), and your backup or protection copy is only as good as its recoverability. Recoverability means that there is something to restore from a point in time (e.g., recovery point objective or RPO) within a given amount of time (recovery time objective or RTO).

Recoverability also means that you can pull the data (e.g., bits, bytes, blocks, blobs, objects, files, tables) from the protection medium, media, or service and use it. Recovery means that the data is valid and consistent, has integrity, or is otherwise not bad, missing, damaged, or corrupted (e.g., usable).
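As a hypothetical illustration of RPO vs. RTO (the nightly interval and four-hour restore below are made-up numbers, not recommendations):

```python
# Hypothetical RPO vs. RTO illustration: with nightly backups, the worst-case
# data loss (RPO exposure) is the protection interval, while RTO is how long
# the restore itself takes before the data is usable again.
from datetime import timedelta

protection_interval = timedelta(hours=24)   # nightly backup (assumption)
restore_duration = timedelta(hours=4)       # time to pull back and validate data (assumption)

print(f"Worst-case data loss (RPO exposure): {protection_interval}")
print(f"Time until data is usable again (RTO): {restore_duration}")
```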

What About Recovery Day?

For several years I have mentioned, and will continue to do so, that if March 31st is world backup day, then April 1st should be world recovery day. So why April 1st for world recovery day? Simple: you don't want to look like a fool the day after world backup day if you can't restore and use the data backed up the day before.

Not comfortable with April 1st for world recovery day? Then make your world recovery day (or test) a day or so later. The important message is to ensure your apps, data, and settings are protected (e.g., copied, backed up, snapshot, checkpoint, etc.), then trust yet verify, and test your restorations.

Why do I mention apps, data, and settings?

The important message here is that it is good if you are already protecting your data, your spreadsheets, worksheets, databases, files, photos, and the application programs that use them. However, also ensure that you are protecting application settings, configurations, metadata, encryption keys, the backup or protection mechanisms, and their data.

For example, when I accidentally delete a data file or configuration settings, I can restore those without recovering everything. Suppose, for instance, I accidentally or intentionally uninstall an application program. In that case, I can reinstall it (assuming I have a copy of the program), then restore my settings and pick up where I left off.

Who does this apply to?

From organizations of all sizes and types to individuals. If you have, generate, or save data, and it is worth having (or you have to keep it), then it should be protected. How often to protect data (the time interval) will be based on your recovery point objective (RPO). Likewise, how fast you need to recover is defined by your recovery time objective (RTO).

Remember that it is not if you will need to restore, recover, reload, refresh, or repair your apps, data, and settings, rather when. It might be because of accidental or planned deletion, hardware, software, or cloud service issues, ransomware, or malware, among other things that can and do happen.

What to do?

If March 31st is world backup day, when is world recovery day? Ensure you have regular copies of your apps, data, and configuration settings, including encryption keys. Implement a variation of the old school three two one (e.g., 3 2 1) data protection (backup) scheme: three or more copies, stored on two or more devices, systems, media, or mediums, with at least one of them offsite, preferably offline, including at a cloud.

A variation of the new school 4 3 2 1 data protection scheme has:
    • Four or more versions of your protected data.
    • Three or more copies (feel free to swap the number of copies and versions).
    • Stored on two or more different systems (devices, media, or locations).
    • At least one copy offsite (preferably with one offline), including cloud.

The big difference between the old school 3 2 1 and the new school 4 3 2 1 is the emphasis and distinction of having multiple copies and various versions (e.g., points in time). For example, storing three copies on two systems with one offsite is good, unless all of those copies are of the same damaged or corrupted version. Having different versions (e.g., points in time) and multiple copies of those versions stored in different places, including at least one offline (e.g., air-gapped), is essential.
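A simple way to reason about this is to check an inventory of protection copies against the 4 3 2 1 guidance; the Python sketch below uses a made-up copy inventory format as an assumption.

```python
# Sketch: check a set of protection copies against the 4 3 2 1 guidance
# above (versions, copies, systems, offsite, offline). The inventory format
# and example entries are illustrative assumptions.

copies = [
    {"version": "2024-03-28", "system": "server-a", "offsite": False, "offline": False},
    {"version": "2024-03-29", "system": "server-b", "offsite": False, "offline": False},
    {"version": "2024-03-30", "system": "azure-files", "offsite": True, "offline": False},
    {"version": "2024-03-31", "system": "removable-hdd", "offsite": True, "offline": True},
]

versions = {c["version"] for c in copies}
systems = {c["system"] for c in copies}
checks = {
    "4+ versions": len(versions) >= 4,
    "3+ copies": len(copies) >= 3,
    "2+ systems": len(systems) >= 2,
    "1+ offsite": any(c["offsite"] for c in copies),
    "1+ offline (air-gapped)": any(c["offline"] for c in copies),
}
for rule, ok in checks.items():
    print(f"{rule}: {'OK' if ok else 'MISSING'}")
```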

Trust yet verify, test your backups and recovery

Test to verify your data protection is working and that data (apps, data, settings) can be restored. When testing restores, be careful not to overwrite your good data and cause a disaster. Also, ensure your data is encrypted in multiple locations and layers and that you protect your encryption keys. Finally, make sure your backup, protection software, catalog, and settings are encrypted, secured, and protected.

If you have questions or are not sure, learn more in my books Software Defined Data Infrastructure Essentials (CRC Press) and Data Infrastructure Management Insight and Strategies (CRC Press), check out the links below, or reach out to me or others. If you are an individual consumer just looking to protect some photos, valuable documents, and heirlooms, get in touch with professionals who specialize in these types of things.

What do I do?

Implement 4 3 2 1 type data protection with different granularities and frequencies. For example, my data protection includes regular point-in-time copies, including backups and snapshots, checkpoints, and consistency points of systems, volumes, shares, apps, files, data, and settings at different intervals. Because I have different types of apps and data, some more static and others changing, protection is also varied to avoid treating everything the same, reduce cost, and increase coverage.

I protect my apps, data, and settings with multiple versions and copies locally on different systems, devices, and mediums, as well as offsite, including offline and at cloud services. So why do I store data offsite vs. having it all in the cloud? Simple: speed of recovery and flexibility.

If it's a few files, perhaps a few GBs of data, and I don't have a good copy locally, it is usually faster for me to get it from Microsoft Azure. On the other hand, if I need to restore TBs of data (something terrible happens), then it can be faster to bring an offline, offsite copy back, restore from that, and then only pull the more recent data I need from the cloud.
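The back-of-the-envelope math behind that approach looks like this; the bandwidth and data sizes below are hypothetical, so plug in your own numbers.

```python
# Rough restore-time math: small restores over the network are quick, while
# multi-TB restores over a typical internet link take days, which is why a
# local or offline offsite copy can be faster. Numbers below are hypothetical.

def hours_to_transfer(data_gb: float, megabits_per_second: float) -> float:
    """Hours to move data_gb gigabytes at the given link speed in Mbps."""
    return (data_gb * 8 * 1000) / megabits_per_second / 3600

print(f"5 GB over a 100 Mbps link: {hours_to_transfer(5, 100):.2f} hours")     # minutes
print(f"5 TB over a 100 Mbps link: {hours_to_transfer(5000, 100):.1f} hours")  # days
```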

What are some of the tools and technologies that I use?

Locally I have multiple Microsoft Windows Servers (Server 2022) with various storage (HDDs and SSDs), including removable devices. In addition to on-prem, I have data stored offsite on removable media and cloud copies. For my cloud copies, I have a mix of files and blobs stored at Microsoft Azure.

A challenge moving from AWS to Azure was that Retrospect did not support objects (Azure blobs). Then I realized, no worries, Retrospect supports storing data as files on local storage (SSD or HDD) on regular filesystems. The solution was to set up an Azure file share for Retrospect, and everything has worked fantastically.

Are there things I need and want to improve? Yes, it’s an ongoing process and journey.

What should you do next?

Make sure you have a data backup; if not, March 31st is a good reminder. Trust yet verify that your backups are working and that you can recover, and don't be an April 1st fool.

Where to learn more

Learn more about world backup day, recovery and data protection along with other related topics via the following links:

Upcoming and past events including webinars, tips and commentary
Next Generation Hybrid Data Infrastructures Are In Your Future
Cloud File Data Storage Consolidation and Economic Comparison Model
New Book Data Infrastructure Management Insight Strategies
World Backup Day Reminder Don’t Be an April Fool Test Your Data Recovery
Virtual, Cloud and IT Availability, it’s a shared responsibility
Don’t Stop Learning Expand Your Skills Experiences Everyday
Data Infrastructure Overview, Its What’s Inside of a Data Center
Application Data Value Characteristics Everything Is Not The Same
Data Protection Diaries Topics Tools Techniques Technologies Tips
Data Infrastructure Server Storage I/O related Tradecraft Overview

Additional learning experiences can be found in Software Defined Data Infrastructure Essentials book. Also check out Data Infrastructure Management Insight and Strategies.


What this all means

If March 31st is world backup day, when is world recovery day? Every day should be a backup day (e.g., some protection, backup, copy, snapshot, checkpoint, consistency point). Likewise, every day should be able to be a recovery day. World backup day and recovery day apply to organizations of all sizes and to individuals. Remember: if March 31st is world backup day, when is world recovery day?

Ok, nuff said.

Cheers gs

Greg Schulz – Multi-year Microsoft MVP Cloud and Data Center Management, ten-time VMware vExpert. Author of Data Infrastructure Insights (CRC Press), Software Defined Data Infrastructure Essentials (CRC). Cloud and Virtual Data Storage Networking (CRC), The Green and Virtual Data Center (CRC), Resilient Storage Networks (Elsevier). Visit twitter @storageio as well as www.picturesoverstillwater.com to view various UAS/UAV e.g. drone based aerial content created by Greg Schulz. Courteous comments are welcome for consideration. First published on https://storageioblog.com. Any reproduction without attribution or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. Visit our companion site https://picturesoverstillwater.com to view drone based aerial photography and video related topics. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO and UnlimitedIO LLC.

ToE NVMeoF TCP Performance Line Boost Performance Reduce Costs


ToE NVMeoF TCP Performance Line Boost Performance Reduce Costs.

Yes, you read that correctly; leverage TCP Offload Engines (TOEs) to boost the performance of TCP-based NVMeoF (e.g., NVMe over Fabrics) while reducing costs. Keep in mind that there is a difference between cutting costs (something that causes or moves problems and complexities elsewhere) and reducing and removing costs (e.g., finding, fixing, and removing complexities).

Reducing or cutting costs can be easy: simply swap items for lower-priced items, introducing performance bottlenecks or some other compromise. Likewise, boosting performance can be addressed by throwing (deploying) more hardware (and/or software) at the problem, resulting in higher costs or some other compromise.

On the other hand, as mentioned above, finding, fixing, and removing complexity and overhead results in cost savings while doing the same work, or enables more work to be done for the same cost, maximizing the value of hardware, software, and network spending. In other words, a better return on investment (ROI) and a lower total cost of ownership (TCO).

Software Defined Storage and Networks Need Hardware

With the continued shift towards software-defined data centers, software-defined data infrastructures, software-defined storage, software-defined networking, and software-defined everything, all of those need something in common, and that is hardware-based compute processing.

In the case of software-defined storage, including standalone, shared fabric or networked-based, converged infrastructure (CI) or hyper-converged infrastructure (HCI) deployment models, there is the need for CPU compute, memory, and I/O, in addition to storage devices. This means that the software to create, manage, and perform storage tasks needs to run on a server’s CPU, along with I/O networking software stacks.

However, sometimes the obvious needs to be restated, which is that software-defined anything requires hardware somewhere in the solution stack. Likewise, depending on how the software is implemented, it may require more hardware resources, including server compute, memory, I/O, and network and storage capabilities.

Keep in mind that networking stacks, including upper and lower-level protocols and interfaces, leverage software to implement their functionality. Therefore, the value proposition of using standard networks such as Ethernet and TCP is the ability to leverage lower-cost network interface cards (or chips), also known as NICs combined with server-based software stacks.

On the one hand, costs can be reduced by using less expensive NICs and using the generally available server CPU compute capabilities to run the TCP and other networking stack software. On systems with lower application or other software performance demands, this can work out ok. However, for workloads and systems using software-defined storage and other applications that compete for server resources (CPU, memory, I/O), this can result in performance bottlenecks and problems.

Many Server Storage I/O Networking Bottlenecks Are CPU Problems

There is a classic saying that the best I/O is the one that you do not have to do. Likewise, the second-best I/O is the one with the least overhead (and cost) and the best performance. Another saying is that many application, database, server, and storage I/O problems are actually due to CPU bottlenecks. Fast storage devices need fast applications on fast servers with fast networks. This means finding and removing blockages, including offloading the server CPU from performing network I/O processing using TOEs.

Wait a minute, isn't the value proposition of using software-defined storage or networking to use low-cost general-purpose servers instead of more expensive hardware devices? With some caveats, yup; however, understand how much server CPU is being used to run the software-defined storage and networking software stacks and handle upper-level functionality. Supporting higher performance or larger workloads can mean putting in larger (scale-up) and more (scale-out) servers, with their increased connectivity and management overhead.

This is where the TOEs come into play by leveraging the best of both worlds to run software-defined storage (and networking) stacks, and other software and applications on general-purpose compute servers. The benefit is the TCP network I/O processing gets offloaded from the server CPU to the TOE, thereby freeing up the server CPU to do more work or enabling a smaller, lower-cost CPU to be used.

After all, many servers, storage, and I/O networking problems are often server CPU problems. An example of this is running the TCP networking software stack using CPU cycles on a host server that competes with the other software and applications. In addition, as an application does more I/O, for example, issuing reads and write requests to network and fabric-based storage, the server’s CPUs are also becoming busier with more overhead of running the lower-layer TCP and networking stack.

The result is that server resources (CPU, memory) run at higher utilization; however, there is more overhead. Higher resource utilization with low or no overhead, low latency, and high productivity are good things, resulting in a lower cost per work done. On the other hand, high CPU utilization, server operating system or kernel mode overhead, poor latency, and low productivity are not good things, resulting in a higher cost per work done.

This means there is a loss of productivity as more time is spent waiting, and the cost to do a unit of work, for example, an I/O or transaction, increases (there is more overhead). Thus, offload engines (chips, cards, adapters) come into play to shift some software processing from the server CPU to a specialized processor. The result is lower server CPU overhead leaving more server resources for the main application or software-defined storage (and networking) while boosting performance and lowering overall costs.
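To illustrate the cost-per-unit-of-work point, here is a hypothetical Python example; all of the numbers (server cost, IOPS with and without a TOE) are made up for the math, not measured results.

```python
# Hypothetical cost-per-unit-of-work illustration: offloading the TCP stack
# frees server CPU for the application, so the same server completes more
# I/Os and the cost per I/O drops. All numbers are assumptions for the math.

server_cost_per_hour = 2.00
iops_no_offload = 500_000      # CPU cycles also spent running the network stack
iops_with_toe = 800_000        # network processing shifted to the TOE

for label, iops in [("software TCP stack", iops_no_offload),
                    ("with TOE offload", iops_with_toe)]:
    cost_per_million_ios = server_cost_per_hour / (iops * 3600) * 1_000_000
    print(f"{label}: ${cost_per_million_ios:.5f} per million I/Os")
```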

Graphics, Compute, Network, TCP Offload Engines

Offload engines are not new; they have been around for a while and in some cases are more common than some realize, going by different names. For example, Graphics Processing Units (GPUs) are used for offloading graphics and compute-intensive tasks to special chips and adapter cards. Other examples of offload processors include networking, such as the TCP Offload Engine (TOE), compression, and storage processing, among others.

The basic premise of offload engines is to move or shift processing of specific functions from having their software running on a general-purpose server CPU to a specialized processor (ASIC, FPGA, adapter, or mezzanine card). By moving the processing of functions to the offload or unique processing device, performance can be boosted while freeing up a server’s primary processor (CPU) to do other useful (and productive) work.

There is a cost associated with leveraging offloads and specialized processors; however, the business benefit should be offset by reducing primary server compute expenses or doing more work with available resources and driving network bandwidth line rates performance. The above should result in a net TCO reduction and boost your ROI for a given system or bill of material, including hardware, software, networking, and management.


Fast Storage Needs Fast Servers and I/O Networks

Ethernet network TOEs became popular in the industry back in the early 2000s, focusing on networked storage and storage networks that relied on TCP (e.g., iSCSI).

Fast forward to today, and there is continued use of networked (ok, fabric) storage over various interfaces, including Ethernet supporting different protocols. One of those protocols is NVMe in NVMe over Fabrics (NVMeoF) using TCP and underlying Ethernet-based networks for accessing fast Solid State Devices (SSDs).

Chelsio Communications T6 TOE for NVMeoF

An example of server storage I/O network TOEs, including those that support NVMeoF, are those from Chelsio Communications, such as the T6 25/100Gb devices. Chelsio today announced server storage I/O benchmark proof points for TCP-based NVMe over Fabrics (NVMeoF) TOE-accelerated performance. StorageIO had the opportunity to look at the performance-boosting ability and CPU savings benefit of the Chelsio T6 prior to today's announcement.

After reviewing and validating the Chelsio proof points, test methodology, and results, it is clear that the T6 TOE enabled solution boosts server storage I/O performance while reducing host server CPU usage. The Chelsio T6 solution combined with Storage Performance Development Kit (SPDK) software, provides local-like performance of network fabric distributed NVMe (using TCP based NVMeoF) attached SSD storage while reducing host server CPU consumption.

“Boosting application performance, efficiency, and effectiveness of server CPUs are key priorities for legacy and software defined datacenter environments,” said Greg Schulz, Sr. Analyst Server Storage. “The Chelsio NVMe over Fabrics 100GbE NVMe/TCP (TOE) demonstration provides solid proof of how high-performance NVMe SSDs can help datacenters boost performance and productivity, while getting the best return on investment of datacenter infrastructure assets, not to mention optimize cost-of-ownership at the same time. It’s like getting a three for one bonus value from your server CPUs, your network, and your application perform better, now that’s a trifecta!”

You can read more about the technical and business benefits of the Chelsio T6 TOE enabled solution along with associated proof points (benchmarks) in the PDF white paper found here and their Press Release here. Note that the best measure, benchmark, proof point, or test is your application and workload, so contact Chelsio to arrange an evaluation of the T6 using your workload, software, and platform.

Where to learn more

Learn more about TOE, server, compute, GPU, ASIC, FPGA, storage, I/O networking, TCP, data infrastructure and software defined and related topics, trends, techniques, tools via the following links:

Chelsio Communications T6 Performance Press Release (PDF)
Chelsio Communications T6 TOE White Paper (PDF)
Application Data Value Characteristics Everything Is Not the Same
PACE your Infrastructure decision-making, it’s about application requirements
Data Infrastructure Server Storage I/O Tradecraft Trends
Data Infrastructure Overview, Its What’s Inside of Data Centers
Data Infrastructure Management (Insight and Strategies)
Hyper-V and Windows Server 2025 Enhancements

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means

The large hyperscale web services and other large environments leverage offload engines and specialized processing technologies (chips, ASICs, FPGAs, GPUs, adapters) to boost performance while reducing server compute costs or getting more value out of a given server platform. If it works for the large hyperscalers, it can also work for your environment or your software-defined platform.

The benefits include reducing the number of items in, and the cost of, your software-defined platform bill of materials (BoM). Another benefit is freeing up server CPU cycles to run your storage, network, or other software to get more performance and work done. Yet another benefit is the ability to further stretch your software license investments, getting more work done per software license unit.

Have a look at the Chelsio Communications T6 line of TOEs for NVMeoF and other workloads to boost performance, reduce CPU usage, and lower costs. See for yourself how a TOE can push NVMeoF TCP performance toward line rate while reducing costs.

Ok, nuff said, for now.

Cheers GS

Greg Schulz – Microsoft MVP Cloud and Data Center Management, previous 10 time VMware vExpert. Author of Software Defined Data Infrastructure Essentials (CRC Press), Data Infrastructure Management (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

ROI From Use Of Global Control Plane For Expanding VDI Environments

The following is a new Industry Trends Perspective White Paper Report titled ROI From Use Of Global Control Plane For Expanding VDI Environments.


This new StorageIO report looks at ROI from use of a global control plane for expanding VDI environments. Using a pro-forma analysis, the report provides a financial and economic model comparison with Return on Investment (ROI) cost savings analysis for managing cloud-based virtual desktop infrastructure (VDI) environments.

Cloud File Data Storage Consolidation and Economic Comparison Model

IT data infrastructure resource (servers, storage, I/O network, hardware, software, services) decision-making involves evaluating and comparing technical attributes (speeds, feeds, features) of a solution or service. Another aspect of data infrastructure resource decision-making involves assessing how a solution or service will support and enable a given application workload, along with associated management costs from a Performance, Availability, Capacity, and Economic (PACE) perspective.

Keep in mind that all application workloads have some amount of PACE resource requirements, which may be high, low, or various permutations in between, along with associated management costs. Performance, Availability (including data protection and security), and Capacity are addressed via technical speeds, feeds, and functionality, along with workload suitability analysis.

Management costs are a function of the initial and recurring tasks needed to support a given function or service such as VDI. The cost of management includes staff salary along with the amount of time needed to perform various tasks. The E in PACE resource decision-making is about the Economic analysis of the various costs associated with different solution approaches.

ROI From Use Of Global Control Plane For Expanding VDI Environments

The above image is an example from the White Paper Report titled ROI From Use Of Global Control Plane For Expanding VDI Environments.

In the example shown above, 36-month OpEx cost (and time) savings are shown for traditional cloud-based VDI management tools, technologies, and techniques vs. a modern cloud-platform-integrated global control plane solution. Leveraging a cloud-platform-integrated global control plane solution such as NetApp VDS, among others, management costs for initial and recurring tasks can be reduced from $2,587,394 to $968,041 for 1,001 users.

In addition to the cost savings shown above, note the reduction in management hours of 21,653 over 36 months, which could be used for doing other work or for reducing your OpEx spend. Of course, your savings will vary based on tasks, time per task, and admin cost, among other considerations.
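To make the arithmetic behind this kind of pro-forma comparison concrete, the following is a minimal sketch (not the model used in the report); the task hours, hourly rate, and user count below are hypothetical placeholders, not figures from the white paper.

```python
# Minimal pro-forma OpEx sketch: traditional VDI management vs. a global control plane.
# All task efforts, rates, and counts are hypothetical placeholders.

HOURLY_RATE = 75.0   # fully burdened admin cost per hour (assumption)
MONTHS = 36
USERS = 1001

def opex(initial_hours, recurring_hours_per_month):
    """Return total hours and total cost over the analysis period."""
    hours = initial_hours + recurring_hours_per_month * MONTHS
    return hours, hours * HOURLY_RATE

# Hypothetical initial and recurring management effort for each approach.
traditional_hours, traditional_cost = opex(initial_hours=400, recurring_hours_per_month=950)
control_plane_hours, control_plane_cost = opex(initial_hours=250, recurring_hours_per_month=350)

hours_saved = traditional_hours - control_plane_hours
dollars_saved = traditional_cost - control_plane_cost

print(f"Hours saved over {MONTHS} months: {hours_saved:,.0f}")
print(f"OpEx saved over {MONTHS} months: ${dollars_saved:,.2f}")
print(f"Savings per user: ${dollars_saved / USERS:,.2f}")
```

The point of such a model is less the exact numbers and more that initial plus recurring task time, multiplied by a burdened labor rate over the analysis period, is what drives the OpEx comparison.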

The shift of IT data infrastructure spending from Capital Expenditures (CapEx) to Operational Expenditures (OpEx), particularly with IT clouds, has resulted in increased OpEx budget demands. Increased spending is more than simply moving IT spend from the CapEx to the OpEx column in budgets. OpEx increases are an accumulation of increased cloud services and data infrastructure spend, along with management (initial and recurring) costs.

The good news is that there are OpEx opportunities to reduce, or stretch, your IT budget to do more while boosting productivity, performance, and effectiveness without compromise. By looking at how to use new technologies in new ways, including leveraging cloud-platform-integrated global control planes for management of VDI (and other functions), initial and recurring OpEx management costs can be reduced.

Read more in this Server StorageIO Industry Trends Report here.

Where to learn more

Learn more about ROI From Use Of Global Control Plane For Expanding VDI Environments, Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Application Data Value Characteristics Everything Is Not the Same
PACE your Infrastructure decision-making, it’s about application requirements
Cloud conversations: confidence, certainty, and confidentiality
Industry adoption vs. industry deployment, is there a difference?
Ten tips to reduce your cloud compute storage costs 
Don’t Stop Learning Expand Your Skills Experiences Everyday 
ToE NVMeoF TCP Performance Reduce Costs
Data Infrastructure Server Storage I/O Tradecraft Trends
Data Infrastructure Overview, It’s What’s Inside of Data Centers
Data Infrastructure Management (Insight and Strategies)
Data Protection Diaries (Archive, Backup, BC, BR, DR, HA, Security)
NetApp VDS with Global Control Plane Cloud VDI Management

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Looking at your IT data infrastructure cloud spend can also help you boost effectiveness, productivity, and return on investment while reducing your OpEx spend, or doing more with it. Leveraging financial pro-forma analysis as a tool, in conjunction with your technology feature, function, speeds, and feeds comparisons, enables informed decision-making.

When comparing and making data infrastructure resource decisions, consider the application workload PACE characteristics. Shift or expand your focus from simply looking at costs from an efficiency and utilization perspective to also include the performance, productivity, and effectiveness of your IT OpEx spending.

Keep in mind that PACE means Performance (productivity), Availability (data protection), Capacity, and Economics. This includes making decisions from a technical feature and functionality (speeds and feeds) perspective as well as how the solution supports your application workload. Leverage resources, including tools such as pro-forma analysis, to perform assessments like the one in the ROI From Use Of Global Control Plane For Expanding VDI Environments report.

Ok, nuff said, for now.

Cheers GS

Greg Schulz – Microsoft MVP Cloud and Data Center Management, previous 10 time VMware vExpert. Author of Software Defined Data Infrastructure Essentials (CRC Press), Data Infrastructure Management (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

2019 Happy Holidays Seasons Greetings

2019 Happy Holidays Seasons Greetings, here’s a video from our companion site Pictures Over Stillwater with holiday lights and Stillwater Lights.

Where to learn more

Learn and view more via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

2019 Happy Holidays Seasons Greetings.

Ok, nuff said, for now.

Cheers GS

Greg Schulz – Multi-year Microsoft MVP Cloud and Data Center Management, ten-time VMware vExpert. Author of Data Infrastructure Insights (CRC Press), Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Also visit www.picturesoverstillwater.com to view various UAS/UAV e.g. drone based aerial content created by Greg Schulz. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. Visit our companion site https://picturesoverstillwater.com to view drone based aerial photography and video related topics. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Announcing My New Book Data Infrastructure Management Insight Strategies

My new book, Data Infrastructure Management Insight Strategies, published via Auerbach/CRC Press, is now available via CRC Press and Amazon.com, among other global venues.

My Fifth Solo Book Project – Data Infrastructure Management

Data Infrastructure Management Insight Strategies (e.g., the white book) is my fifth solo published book, in addition to several other collaborative works. Given its title, the focus of this new book is Data Infrastructures: the tools, technologies, techniques, and trends, including the hardware, software, services, people, and policies inside data centers that get defined to support business and application services delivery. The book (ISBN 9781138486423) is softcover (electronic Kindle versions are also available) with 250 pages and over 100 figures, tables, tips, and examples. You can explore the contents via Google Books here.

Data Infrastructure Books by Greg Schulz
Stack of my solo books with common theme around Data Infrastructure topics

Data Infrastructure Management Book
Data Infrastructure Management – Insight and Strategies e.g. the White book (CRC Press 2019)

Some of My Other Books Include

Click on the following book images to learn more about each title, as well as order your copy.

Software Defined Data Infrastructure Essentials Book – SNIA Recommended Reading List
Software Defined Data Infrastructure Essentials (SDDI) – Cloud, Converged, and Virtual Fundamental Server Storage I/O Tradecraft (e.g., the Blue book) covers software-defined, SDDC, SDDI, and hybrid topics, including serverless, containers, NVMe, SSD, flash, PMEM, and SCM, among others (CRC Press 2017). Available at Amazon.com among other global venues.

Cloud and Virtual Data Storage Networking Book – Intel Recommended Reading List
Cloud and Virtual Data Storage Networking (CVDSN) – Your Journey to efficient and effective Information Services e.g. the Yellow or Gold Book (CRC Press 2011) available at Amazon.com among other global venues.

 

The Green and Virtual Data Center Book – Intel Recommended Reading List
The Green and Virtual Data Center (TGVDC) – Enabling Efficient, Effective and Productive Data Infrastructures e.g. the Green Book (CRC Press 2009) available at Amazon.com among other venues.

Resilient Storage Networks Book
Resilient Storage Networks (RSN) – Designing Flexible Scalable Data Infrastructures (Elsevier 2004), e.g., the Red Book, is SNIA Education Endorsed Reading, available at Amazon.com among other venues. I have some free copies of RSN for anybody who is willing to pay shipping and handling; send me a note and we will go from there.

Where to learn more

Learn more via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Today, more than ever, there tends to be a focus on the date something was created or published, as there is a lot of temporal content with a short shelf life. This means a lot of content, including books, is being created that is short-lived, usually focused on a particular technology, tool, or trend that has a life span or attention focus of a couple of years at best.

On the other hand, there is also content being created today that combines new and emerging technologies, tools, and trends with time-tested strategies, techniques, and processes, some of whose names or buzzwords will evolve. My books fit into the latter category of combining current as well as emerging technologies, tools, trends, and techniques that support a longer shelf life; just insert your new favorite buzzword, buzz trend, or buzz topic as needed.

Data Infrastructure Books by Greg Schulz

Looking at the stack of books, you will also notice that Data Infrastructure Management Insight and Strategies is a smaller, softcover book compared to the others in my collection. The reason is that this new book can be a quick read to address what you need, as well as a companion to the others in the stack, depending on your focus or requirements.

A common question I get, having written several books, not to mention thousands of articles, tips, reports, blogs, columns, white papers, videos, and webinars among other content, is: what’s next? Good question; see what’s next, and check out some other things I’m doing over at www.picturesoverstillwater.com, where I’m generating big data that gets stored and processed in various data infrastructures, including cloud ;) .

Will there be another book, and if so, on or about what? As I mentioned, there are some projects I’m exploring; will they get finished or take different directions? Wait and see what’s next.

How do I find the time to create these books, and how long does it take? The time required varies, as does the amount of work and what else I’m doing. I try to leverage the book (and other content creation projects) with other things I’m doing to maximize time. Some book projects have been very fast, a year or less. Some take longer, such as Software Defined Data Infrastructure Essentials, as it is a big book with lots of material that will have a long shelf life.

Do I write and illustrate the books, or do I have somebody do them for me? For my books, I do the writing and illustrating (drawings, figures, images) myself, along with some of the layout, relying on external copy editors and production folks.

What do I recommend or what advice do I give to those wanting to write a book? Understand that publishing a book is a project: there is the actual writing, editing, reviews, artwork, research, and labs or other support items that become book companions. Also understand why you are writing a book: for fame, fortune, acclaim, to share with others, or some other reason. I also recommend, before you write your entire book, talking with others who have been published to test the waters and get feedback. You might find it easier to shop an extended outline than a completed manuscript, unless you are writing a novel or similar.

Want to learn more about writing a book (or other content), get feedback, or ask other questions? Drop me a note and I will do what I can to help out.

Data Infrastructure Management Book

There is an old saying: publish or perish. Well, I just published my fifth solo book, Data Infrastructure Management Insight Strategies, which you can buy at Amazon.com among other venues.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2019. Author of Data Infrastructure Insights (CRC Press), Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Also visit www.picturesoverstillwater.com to view various UAS/UAV e.g. drone based aerial content created by Greg Schulz. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2019 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Happy Holidays 2018 From Server StorageIO


Seasons Greetings, Merry Christmas and Happy Holidays 2018 From Server StorageIO and Pictures Over Stillwater. Some of you may be familiar with our IT Data Infrastructure (cloud, converged, container, server, storage, I/O networking, hardware, software, data protection) related site Server StorageIO also known as StorageIO(tm) along with its StorageIOblog companion.

Others might know about our new companion site Pictures Over Stillwater (FAA & MNDoT Aviation licensed commercial drone-based aerial photography and videography). Some of you might even know us for our other companion site Karen of Arcola, all of which are part of the UnlimitedIO LLC family. Regardless of which you may know us from or for, or even if you are a first-time visitor, we wish you season’s greetings and happy holidays, whatever your choice to celebrate, as well as a safe, prosperous new year.

Downtown Stillwater Lights via Pictures Over Stillwater
Downtown Stillwater Lights via www.picturesoverstillwater.com

Old Washington County Courthouse Stillwater via Pictures Over Stillwater
Old Washington County Courthouse via www.picturesoverstillwater.com

Downtown Christmas Tree Stillwater via Pictures Over Stillwater

Merry Christmas, Seasons Greetings and have a safe Happy Holidays from Stillwater MN via www.picturesoverstillwater.com

Where to learn more

Learn more via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Have a safe happy holiday season.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Microsoft Azure Data Box Family #blogtobertech

Microsoft Azure Data Box Family is part two of a four-part series looking at Data Box. View Part 1 Microsoft announced Azure Data Box updates, Part 3 Microsoft Azure Data Box Disk Test Drive Review, Part 4 Microsoft Azure Data Box Disk Impressions.

Microsoft Azure Data Box Overview

Microsoft has several Data Box solutions available or in preview to meet various customer needs. These include both online and offline solutions comprising hardware (except Data Box Gateway), software tools, and cloud services.

Data Box Online

Microsoft has two online Data Box offerings that provide real-time access to Azure cloud storage resources from on-prem, including remote and edge locations. The online Data Box solutions include Edge and Gateway, both with local on-prem storage.


Data Box Edge image via Microsoft.com

Data Box Edge (Preview)

Currently, in preview, Data Box Edge is a 1U appliance that combines hardware along with software resources for deployment on-prem at the edge or remote locations. Data Box Edge places locally converged compute and storage resources as an appliance along with connectivity to Azure cloud-based resources.

Intended workloads and applications for Data Box Edge include remote AI, ML, and DL inferencing; data processing or pre-processing before sending to the Azure cloud; and functioning as an edge compute, data protection, and data transfer platform (e.g., a cloud storage gateway) with local compute. Data Box Edge is similar in functionality and focus to other cloud service provider solutions such as AWS Snowball Edge (SBE). Management tools include the Data Box Edge resource in the Azure portal for management from a web UI to create and manage resources, devices, and shares.

Other Data Box Edge attributes include:

  • Supports Azure Blob or Files via SMB and NFS storage access protocols
  • Dual Intel Xeon processors each with 10 CPU cores, 64GB RAM
  • 2 x 10 Gbps SFP+ copper cables, 2 x 1 Gbps RJ45 cables
  • 8 NVMe SSD (1.6 TB each), no HA, 12.8 TB total raw cap
  • 2 x 1 GbE (one for management, one for user access)
  • 2 x 25 GbE (can operate at 10 GbE) and 2 x 25 GbE ports
  • Local web UI for management and configuration

Data Box Gateway (Preview)

Also in preview, Data Box Gateway is a virtual machine (VM) based software-defined appliance that runs on VMware vSphere (ESXi) or Microsoft Hyper-V hypervisors. The functionality of Data Box Gateway is that of a cloud storage gateway, providing access to Azure Blob (page and block) or Files (NAS) via SMB or NFS protocols. Learn more about both Data Box Edge and Data Box Gateway here, including pricing here.

Data Box Offline Solutions

Microsoft has several offline Data Box offerings, including previously available models and new models in preview. Offline Data Box solutions enable large amounts of data to be moved from on-prem primary, remote, and edge locations to Azure cloud storage resources. Bulk data movement operations can be one-time or recurring in support of big data migration for energy, research, media & entertainment, and other workloads with large volumes of data.

Other bulk movement use cases include archive, backup, BC/DR, and virtual machine and application migration, among others. Use Data Box offline solutions when large amounts of data need to be moved from on-prem to the Azure cloud faster than available networks can support in a timely manner.

Offline Data Box solutions include:

  • Data Box Heavy (Preview) 1 PB Storage, 800 TB usable
  • Data Box 100 TB (80 TB usable)
  • Data Box Disk (Preview) 40 TB (35 TB Usable)


Data Box Heavy 1 PB (Preview) image via Microsoft.com

Data Box Heavy 1 PB (Preview)

  • Appliance with Up to 800 TB usable capacity per order
  • One system per order
  • Supports Azure Blob or Files
  • Copy data to up to 10 storage accounts
  • 1 x 1/10 Gbps RJ45 connector, 4 x 40 Gbps QSFP+ connectors
  • AES 256-bit encryption
  • Copies data using NAS SMB and NFS protocols


Data Box 100TB image via Microsoft.com

100 TB Data Box

  • An appliance that supports 80 TB usable storage capacity
  • Supports Azure Blob or Files
  • Copies data to 10 storage accounts
  • 1 x 1/10 GbE RJ45 connector
  • 2 x 10 GbE SFP+ connector
  • AES 256-bit encryption
  • Storage access and copy via SMB and NFS NAS protocols

Case of Data Box Disks image via Microsoft.com

Data Box Disk 40 TB (Preview)

  • Up to 35 TB usable capacity per order
  • Up to 5 SSDs per order
  • This is what I tested (2 x 8 TB)
  • Supports Azure Blob storage (Block and Page)
  • Copies data to a single storage account
  • USB/SATA II, III server I/O interface (comes with SATA to USB connector cables)
  • AES 128-bit encryption
  • Copy data with standard tools
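Since Data Box Disk accepts data copied with standard tools, a simple script can do the job. The following is a minimal sketch that copies a directory tree to a mounted Data Box Disk and records checksums for later verification; the source and destination paths are hypothetical examples, and the checksum manifest is my own addition rather than a Data Box requirement.

```python
# Copy a directory tree to a mounted Data Box Disk and record SHA-256 checksums.
# SOURCE and DEST are placeholder paths; adjust for your environment.
import hashlib
import shutil
from pathlib import Path

SOURCE = Path("/data/archive")          # data to ship to Azure (assumption)
DEST = Path("/mnt/databox/BlockBlob")   # mounted Data Box Disk share (assumption)

def sha256(path: Path) -> str:
    """Hash a file in 1 MB chunks to avoid loading it all into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

manifest = []
for src in SOURCE.rglob("*"):
    if src.is_file():
        dst = DEST / src.relative_to(SOURCE)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)                    # copy file contents plus metadata
        manifest.append(f"{sha256(dst)}  {dst}")  # checksum the copied file

(DEST / "copy-manifest.sha256").write_text("\n".join(manifest))
print(f"Copied and checksummed {len(manifest)} files")
```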

Where to learn more

Learn more about Microsoft Azure Data Box, Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Which Microsoft Azure Data Box is the best? That depends on your needs and requirements.

Microsoft along with other major cloud service providers continue to evolve their data migration services. Realizing that customers who need, want, or have to get data to the cloud also need to remove barriers, solutions such as Azure Data Box are a step in eliminating cloud barriers while addressing cloud concerns. Continue reading Part 3 Microsoft Azure Data Box Disk Test Drive Review and Part 4 Microsoft Azure Data Box Disk Impressions as part of Microsoft Azure Data Box Family.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Microsoft announced Azure Data Box updates #blogtobertech

Microsoft announced Azure Data Box is the first in a series of four posts looking at Data Box including a test drive experience. View Part 2 Microsoft Azure Data Box Family, Part 3 Microsoft Azure Data Box Disk Test Drive Review, Part 4 Microsoft Azure Data Box Disk Impressions.

Microsoft Azure Data Box Family Page image via Microsoft.com

At Ignite, Microsoft announced Azure Data Box updates, which means it’s time for a test drive and review. Microsoft has several Data Box solutions available or in preview to meet various customer needs. These include both online and offline solutions comprising hardware (except Data Box Gateway), software tools, and cloud services. In general, Data Box enables bulk movement and migration of data from on-prem environments to Azure cloud storage, including blob (e.g., object) and file (e.g., NAS accessible) resources.

What’s the Need for a Data Movement Appliance Service?

Some might ask why you need a Microsoft Azure Data Box when there are fast networks. Good question, assuming you have fast networks that can move large amounts of bulk data promptly. Microsoft supports traditional Internet-based access to Azure cloud resources for data migration, along with the higher-speed ExpressRoute service, similar to Amazon Web Services (AWS) Direct Connect, among other options.

On the other hand, if you need to move an amount of data that would take weeks, months, or longer to send over expensive networks, then solutions like Data Box are an option. Microsoft is not alone or unique in having data storage migration or movement services. AWS has Snowball, Snowball Edge with compute, as well as the truck-sized Snowmobile for large-scale data movement. Google also has its transfer services, including the Google Transfer Appliance.

Who is Azure Data Box for?

Azure Data Box is for those who need to migrate data to Azure cloud storage and other services on a one-time or recurring basis. Another scenario is for those who need to have on-prem storage and optional compute at remote or edge locations in support of data acquisition, media & entertainment, energy exploration, AI, ML, DL inferencing, local data processing, pre-processing before sending to cloud among other workloads.

Yet other scenarios are for those who need to move large amounts of data online, offline, or in disconnected (also known as submarine) mode, where a connection to the internet is not always available. Bulk data movement also applies to one-time as well as recurring data protection such as archives, backups, and BC/DR, along with data shipping, virtual machine farm relocation, SQL Server data migration to the cloud, and data center consolidation, among many other scenarios.

What is Azure Data Box

Azure Data Box is a combination of hardware, software, and cloud services that supports data migration (online and offline) from on-prem environments, including remote or edge locations, to Azure cloud storage resources. There are different Data Box solutions available or in preview to meet various needs for performance, capacity, and functionality, with as well as without compute. In addition to being used for data migration, there are also Data Box solutions (e.g., Edge) that converge compute and storage for deployment at remote or edge locations.

Data Box Gateway is a software-defined virtual machine appliance that deploys on VMware and Microsoft (e.g., Hyper-V) hypervisors. Off-line Data Box solutions scale from single 8TB SSD disks to PB of capacity with various functionality.

As a reminder, blobs are analogous to, and what Microsoft Azure refers to instead of, objects (e.g., object storage). Also remember that Azure blobs include block, page (512-byte page aligned for VHDX), and append (similar to other vendors’ object storage). In addition to blobs, Microsoft Azure supports file (SMB and NFS) access, along with table (database) and queue storage services.
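As a hedged illustration of the blob access mentioned above (this is general Azure Blob usage, not part of Data Box itself), the following minimal sketch uploads a local file as a block blob using the azure-storage-blob Python SDK; the connection string, container, and file names are placeholder assumptions.

```python
# Upload a local file to Azure Blob storage as a block blob (azure-storage-blob v12).
# Connection string, container, and file names are placeholder assumptions.
import os
from azure.storage.blob import BlobServiceClient

conn_str = os.environ["AZURE_STORAGE_CONNECTION_STRING"]  # assumed to be set
service = BlobServiceClient.from_connection_string(conn_str)

container = service.get_container_client("migrated-data")
blob = container.get_blob_client("backups/archive-2018-10.tar")

with open("archive-2018-10.tar", "rb") as data:
    blob.upload_blob(data, overwrite=True)  # uploaded as a block blob by default

print(f"Uploaded {blob.blob_name} to container {container.container_name}")
```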

Where to learn more

Learn more about Microsoft Azure Data Box, Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Azure Data Box type solutions and services are becoming more common as well as diverse. With the addition of compute in some of these solutions to support remote edge workloads, the lines may blur with some of the converged and hyper-converged infrastructure (HCI) solutions. Likewise, keep an eye on how cloud service providers leverage solutions like Data Box Edge to extend their reach out to the edge, enabling fog (e.g., cloud at the edge) among other converged functionality. Continue reading Part 2 Microsoft Azure Data Box Family, Part 3 Microsoft Azure Data Box Disk Test Drive Review, and Part 4 Microsoft Azure Data Box Disk Impressions as part of Microsoft announced Azure Data Box updates.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Not Dead Yet Zombie Technology (Declared Dead yet still alive) October 2018 Update #blogtobertech

Not Dead Yet Zombie Technology (Declared Dead yet still alive) October 2018 Update. Musician Phil Collins has an excellent name for his current tour Not Dead Yet which is a reminder that he is still alive and performing, at least one more time. With Halloween just around the corner, it is that time of the year to revisit zombie technology, those technologies, tools, techniques, trends that are declared dead yet still alive.

Data Infrastructure Tools Trends Topics

IT Zombie Technology Declared Dead Not Dead Yet

A concert tour named Not Dead Yet sets the stage for this post, which is about IT zombie technology, in particular data infrastructure related technologies, tools, trends, and topics that have been declared dead by some people yet are still alive. Not only are these tools and techniques being used, they are also being enhanced, so they will be around for future years of zombie technology updates; not dead yet.

As a refresher, a Zombie technology is one that is declared dead, usually by some upstart vendor and its pundits along with other followers in favor of whatever new has been announced. As luck or fate would have it, some of these startup or new technologies that declare an older established one as being dead, tend to end up on the where are they now list.

In other words, some technologies do survive and gain in both industry adoption and, even more critically, customer deployment. Likewise, some of the technologies that caused something existing to be declared dead end up surviving to live alongside or near what their supporters declared dead.

Another not-so-uncommon occurrence is when a new technology, whose supporters declared something else dead, joins the ranks of the declared dead at the hands of a yet newer technology, thereby becoming a zombie technology itself. Put a different way, being on the zombie technology list may not be the same as being the shiny new popular trendy technology. However, it can be both a badge of honor, not to mention a revenue and profit maker.

Data Infrastructure components

Zombie Technology List

What are some old and new Zombie technologies that have been declared dead, yet are still alive, being used and enhanced, not dead yet?

IBM Mainframe

This is a perennial favorite, and while not seeing the growth associated with other platforms, including Intel, AMD, and ARM among others, it has its place with many large organizations. Not only does it continue to be manufactured and enhanced, with even some new customers buying them, it also runs native Linux in addition to traditional z/OS, among other software.

Fibre Channel (FC)

FC has been declared dead for over a decade, and while Ethernet-based server storage I/O networking continues to gain ground in both industry as well as customer deployments, there is still plenty of life in and with FC for years to come, at least for some environments. NVMe over Fabrics (NVMeoF) which is the NVMe protocol carried on top of a fabric network (SAN if you prefer) is gaining industry popularity and customer curiosity.

There are many flavors of NVMe over fabrics including NVMe over Fibre Channel, e.g., FC-NVMe which is similar to mapping the SCSI command set (SCSI_FCP) on to Fibre Channel or what is more commonly known as FCP or simply FC.

What this means is that FC-NVMe is just another upper-level protocol (ULP) that can co-exist with others on the same Fibre Channel network. In other words, FICON, FCP, and NVMe among others can co-exist on the same Fibre Channel-based network. Will everybody using Fibre Channel move to FC-NVMe? Good question; ask the FC folks, and the answer, not surprisingly, would be yes or probably. Will new customers looking to do NVMe over some type of fabric or network use Fibre Channel instead of Ethernet or another transport? Some will, while others will go other routes. For now, what is clear is that FC is still alive and thus on the zombie technology list and not dead yet.

SAS and SATA

Both have been declared dead as they have been around for a while, and over time NVMe will pick up more of their workload; however, near term, SAS and SATA will continue as lower-cost, smaller-footprint options for general-purpose and bulk direct attachment. On the other hand, look for more M.2 NVMe Next Generation Form Factor (NGFF), aka gum stick, devices appearing in physical servers along with storage systems. Likewise, watch for increased deployment of NVMe U.2 (aka 8639) drive form factor SSDs using NAND flash as well as 3D XPoint and Intel Optane, among other mediums, as part of new server and storage platforms. BTW, USB is not dead yet either, just saying.

Microsoft Windows

Windows desktop, Windows Server, and even Hyper-V virtualization have been declared dead for some time now, yet all continue to evolve. Just recently, Microsoft released Windows Server 2019, which included many enhancements spanning software-defined storage (Storage Spaces Direct, aka S2D), software-defined networking, converged and hyper-converged infrastructure (HCI) deployment options, expanded virtualization capabilities, Windows Subsystem for Linux (WSL) enhancements (e.g., a native bash shell on Windows), and containers with Kubernetes as well as Docker updates, among others. In other words, it’s not dead yet.

Hard Disk Drive (HDD)

Having been declared dead for decades, and while not the primary frontline storage medium they were in the past, HDDs continue to evolve and be used alongside faster flash SSDs, and as a front-end to magnetic tape. Some of the larger consumers of HDDs continue to be cloud service providers, also known as mega-scalers, storing large amounts of bulk data. I suspect that HDDs will continue to be on the zombie technology list for at least another decade or so, which has been the case for the past several decades.

Magnetic Tape

Like HDDs, tape is still in use in some environments, and like HDDs, the cloud service providers are significant users of tape as low-cost, low-access, high-capacity bulk storage for cold archives that are front-ended by HDD, SSD, or both.

Cloud (Public, Private and Hybrid)

Yes, believe it or not, some have declared cloud dead, along with hybrid cloud, private cloud among others, oh well.

Physical Machine (PM)

Also known as bare metal, servers were declared dead a decade or so ago at the hands of the then-emerging Intel-based virtualization hypervisors, notably VMware ESXi and to a lesser extent Microsoft Hyper-V. I say lesser extent with Hyper-V in that there was less noise about PMs and BMs being dead than there was from some in the ESXi virtual kingdom. Needless to say, PMs and BMs, from Intel to AMD and ARM-based, along with IBM Power among many others, are very much alive as dedicated servers in the cloud and as VM and container hosts, as well as being accessorized with FPGA, ASIC, GPU, and other resources.

Virtual Machines

Listen to some from the container, serverless, or something-new crowd, and you will hear that virtual machines (VMs) are dead, which for some workloads may be right. On the other hand, similar to the physical machine (PM) or bare metal (BM) servers that were declared dead by the VMs a decade or so ago, VMs are alive and doing well. Not only are they doing well; like containers, VMs will see continued adoption and deployment both on-prem and in the cloud, as will BMs and PMs, now known as dedicated servers in the clouds.

NAS and Files

If you listened to some of the pundits and press, NAS and files were supposed to have been dead several years ago at the hands of object storage. The reality today is that object storage continues to grow in customer deployments, and while the industry is not as enamored (or drunk) with it as it was a few years ago, the technology is here to stay and will be around for many decades to come.

That brings us back to NAS and files, which were declared dead by the object opportunists; file access is very much alive and continues to gain ground. In fact, most cloud providers have added NAS file-based access (NFS, SMB, and POSIX among others) to their solutions, either natively or via partners. Likewise, most object storage platforms have also added or enhanced their NAS file-based access for compatibility while their customers re-engineer their applications or create new apps that are object and blob native. Thus, NAS and file-based access are proud members of the zombie technology list.

Data Infrastructure tools

There are many more tools, technologies, trends, and techniques that belong on the above list. For example, backup has been declared dead, along with the PCIe bus, NAND flash, programming, data centers, databases, and SQL, among many others. What they have in common is that they are part of a growing list of things declared dead yet not dead yet, and thus are zombie technologies.

Where to learn more

Learn more about Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

What is your favorite zombie technology, tool, trend or technique?

What zombie technologies, tools, trends or techniques should be added to the list and why?

Many tools, technologies, techniques, and trends are declared dead, sometimes before they are even really alive and mature, by those who have something new or who simply lack creativity (e.g., dead marketing?), so it’s easier to declare something dead. While some succeed, prospering and being added to the zombie technology list (a badge of honor), others quietly end up on the where-are-they-now list. The where-are-they-now list comprises those vendors, tools, technologies, techniques, and trends that were on the famous hit parade in the past and have since faded away or ended up dead (unlike a zombie).

Don’t be scared of zombie technology, while also being prepared to embrace what is new and to use both in new ways. Right now, I don’t have tickets to go see Phil Collins’ Not Dead Yet tour; maybe that will change. However, for now, keep in mind: don’t be scared when looking at Not Dead Yet Zombie Technology (Declared Dead yet still alive) October 2018 Update #blogtobertech.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

How I saved money storing more data on aws s3 simple storage service #blogtobertech

How I saved money storing more data on AWS S3 Simple Storage Service is an example of reducing cloud costs as opposed to merely cutting cloud costs. What this means is that instead of just cutting my cloud storage costs with a focus on how much I could save, I wanted to remove some costs while also storing more data without compromise. For example, since making the changes, storage capacity usage has almost doubled, yet my costs remain 37% lower than two years ago, before the changes were made.

How I saved money storing more data on aws s3?

Without adding any context, the typical reaction might be that I saved money storing more data on (or in) AWS S3 as opposed to locally on-site (on-prem). Another typical response would be that I moved all of my data from a different, more expensive cloud service to AWS S3. Yet another common reaction would be that I moved my AWS S3 data into AWS Glacier cold storage, or deleted a large amount of data.

Some might even comment that I must have used some type of dedupe, compression, or other data footprint reduction (DFR) technology. On the other hand, some might determine that I probably did all or some of the above, or leveraged AWS tiered storage, aligning different storage classes to the type of data activity.

How I saved money storing more data in AWS S3 actually involved spending some money to eventually save money by leveraging different S3 storage classes. As part of rebalancing or moving different data to its new storage class, some one-time charges were incurred, which were recouped after several months of savings. Those costs pertained to EC2 compute instances and associated storage used for some of the data tiering; other fees were access charges along with excessive API calls. For example, some of the data was in storage classes that had fees for early retrieval or deletion, or fees for access, among others.
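As a back-of-the-envelope way to judge whether such a rebalancing pays off, the payback period is simply the one-time charges divided by the resulting monthly savings. The figures in the following sketch are hypothetical examples, not my actual AWS bill.

```python
# Hypothetical payback-period check for a one-time storage-class rebalancing.
one_time_charges = 42.00       # EC2 time plus early-retrieval/deletion and API fees (example)
monthly_bill_before = 65.00    # example monthly S3 spend before rebalancing
monthly_bill_after = 41.00     # example monthly S3 spend after rebalancing

monthly_savings = monthly_bill_before - monthly_bill_after
payback_months = one_time_charges / monthly_savings
print(f"One-time charges recouped after about {payback_months:.1f} months")
```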

How I use different AWS S3 storage classes (tiers)

  • Standard – Frequently changing data, or data with frequent access
  • Infrequent Access (IA) – Data that does not change frequently or that is not routinely accessed. In the past, before OZA, I placed data that did not need to be in Standard, yet was too warm for Glacier, in this storage class. After the migrations, I have less data stored in IA, with more in OZA as well as some in Standard.
  • One Zone Availability (OZA) – Data that is frequently accessed for reading; however, it is static and not yet cold enough to move to Glacier or deep archive. A mix of backups, online, and active archives. Note that I use OZA as an additional copy or location and not as a single, lowest-cost place to store data. In other words, anything that I put into OZA has at least one additional copy somewhere else, which may not be in the cloud.
  • Glacier – Very cold, seldom accessed, archives
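One way to automate tiering across the storage classes listed above (a sketch of how such moves can be scripted, not necessarily how I did my rebalancing) is an S3 lifecycle rule that transitions objects by prefix and age using boto3; the bucket name, prefix, and day thresholds below are assumptions.

```python
# Apply an S3 lifecycle rule that tiers objects by age (boto3).
# Bucket name, prefix, and day thresholds are placeholder assumptions.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-storageio-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-archives",
                "Filter": {"Prefix": "archives/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "ONEZONE_IA"},
                    {"Days": 365, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
print("Lifecycle tiering rule applied")
```

Note that S3 also allows changing an object’s storage class directly (for example, by copying it in place with a new StorageClass), which is closer in spirit to the one-time rebalancing described above, while lifecycle rules handle the recurring case.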

Where to learn more

Learn more about Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

I decreased my monthly AWS bill by rebalancing things; there was a one-month period where my costs increased during the changes, followed by a subsequent reduction. However, while I saw my monthly AWS storage invoices decrease, I’m also storing more data per month. How I saved money storing more data on AWS S3 Simple Storage Service involved using different storage classes.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.