Microsoft Azure Elastic SAN from Cloud to On-Prem

What is Azure Elastic SAN

Azure Elastic SAN (AES) is a new (now GA) Azure cloud-native storage service that provides scalable, resilient, easy-to-manage, rapidly provisioned, high-performance, and cost-effective storage. AES (figure 1) supports many workloads and compute resources. Workloads that benefit from AES include tier 1 and tier 2, such as mission critical, database, and VDI, among others that traditionally rely upon consolidated Storage Area Network (SAN) shared storage.

Compute resources that can use AES include bare metal (BM) physical machines (PM), virtual machines (VM), and containers, among others, using iSCSI for access. AES is accessible by compute resources and services within the Azure cloud in various regions (check the Azure website for specific region availability) and from on-prem core and edge locations using iSCSI. The AES management experience and value proposition are similar to traditional hardware or software-defined shared SAN storage combined with Azure cloud-based management capabilities.

Figure 1 General Concept and Use of Azure Elastic SAN (AES)

While Microsoft Azure describes AES as a cloud-native storage solution, that does not mean AES is only for containers and other cloud-native apps or DevOps. Rather, AES has been built for and is native to the cloud (e.g., software-defined) and can be accessed by various compute and other resources (e.g., VMs, containers, AKS) using iSCSI.

How Azure Elastic SAN differs from other Azure Storage

AES differs from traditional Azure block storage (e.g., Azure Disks) in that the storage is independent of the host compute server (e.g., BM, PM, VM, containers). With AES, similar to a conventional software-defined or hardware-based shared SAN solution, storage is disaggregated from host servers for sharing and management, using iSCSI for connectivity. By comparison, traditional Azure VM-based storage is typically associated with a given virtual machine in a DAS (Direct Attached Storage) type configuration. Likewise, similar to conventional on-prem environments, there can be a mix of DAS and SAN, including some host servers that leverage both.

AES supports Azure VM, Azure Kubernetes Service (AKS), cloud-native, edge, and on-prem computing (BM, VM, etc.) via iSCSI. Support for Azure VMware Solution (AVS) is in preview; check the Microsoft Azure website for updates and new feature functionality enhancements.

Does this mean everything is moving to AES? As with traditional SANs, there are roles and needs for various storage options, including DAS, shared block, file, and object, among other storage offerings. Likewise, Microsoft and Azure have expanded their storage offerings to include AES, DAS (Azure Disks, including Ultra, Premium, and Standard options), append, block, and page blobs (objects), and files, including Azure File Sync, tables, and Data Box, among other storage services.

Azure Elastic SAN Feature Highlights

AES feature highlights include, among others:

    • Management via Azure Portal and associated tools
    • Azure cloud-based shared scalable block storage
    • Scalable capacity, low latency, and high performance (IOPS and throughput)
    • Space capacity-optimized without the need for data reduction
    • Accessible from within Azure cloud and from on-prem using iSCSI
    • Supports Azure compute (VMs, containers/AKS, Azure VMware Solution)
    • On-prem access via iSCSI from PM/BM, VM, and containers
    • Variable number of volumes and volume size per volume group
    • Flexible, easy-to-use Azure cloud-based management
    • Encryption and network private endpoint security
    • Local (LRS) and zone (ZRS) redundancy for replication resiliency
    • Volume snapshots and cluster support

Who is Azure Elastic SAN for

AES is for those who need cost-effective, shared, resilient, high-capacity, high-performance (IOPS, bandwidth), low-latency block storage within Azure and from on-prem. Others who can benefit from AES include those who need shared block storage for clustering app workloads, server and storage consolidation, and hybrid and migration scenarios. AES is also worth considering for those familiar with traditional hardware and software-defined SANs looking to facilitate hybrid and migration strategies.

How Azure Elastic SAN works

Azure Elastic SAN is a software-defined (cloud-native if you prefer) block storage offering that presents a virtual SAN accessible within the Azure cloud and to on-prem core and edge locations, currently via iSCSI. Using iSCSI, Azure VMs, clusters, containers, and Azure VMware Solution, among other compute and services, as well as on-prem BM/PM, VMs, and containers, can access AES storage volumes.

From the Azure Portal or associated tools (Azure CLI or PowerShell), create an AES SAN, giving it a 3 to 24-character name, and specify storage capacity (base units with performance, plus any additional space capacity). Next, create a volume group, assigning it to a specific subscription and resource group (new or existing), then specify which Azure region to use, the type of redundancy (LRS or ZRS), and the zone to use. LRS provides local redundancy, while ZRS provides enhanced zone resiliency with high-speed synchronous replication, without setting up multiple SAN systems and their associated replication and networking configurations (e.g., Azure takes care of that for you within the service).

The next step is to create volumes by specifying the volume name, volume group to use, volume size in GB, maximum IOPS, and bandwidth. Once you have made your AES volume group and volumes, you can create private endpoints, change security and access controls, and access the volumes from Azure or on-prem resources using iSCSI. Note that AES currently needs to be LRS (not ZRS) for clustered shared storage, and that key management includes using your own keys with Azure Key Vault.
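As a rough sketch, here is what that workflow looks like from PowerShell, assuming the Az.ElasticSan module; the resource names are made-up placeholders, and cmdlet or parameter names may change, so verify against the current Azure documentation.

    # Hedged sketch using the Az.ElasticSan PowerShell module; names are
    # illustrative placeholders, verify cmdlets and parameters in current docs.
    Connect-AzAccount

    $rg  = "rg-aes-demo"        # hypothetical resource group
    $san = "aes-demo"           # hypothetical Elastic SAN name (3-24 characters)
    $loc = "eastus"
    New-AzResourceGroup -Name $rg -Location $loc

    # Create the SAN itself: base capacity carries performance, extended adds space only
    New-AzElasticSan -ResourceGroupName $rg -Name $san -Location $loc `
        -BaseSizeTiB 1 -ExtendedCapacitySizeTiB 0 -SkuName "Premium_LRS"

    # Create a volume group, then a 100 GiB volume within it
    New-AzElasticSanVolumeGroup -ResourceGroupName $rg -ElasticSanName $san -Name "vg-demo"
    New-AzElasticSanVolume -ResourceGroupName $rg -ElasticSanName $san `
        -VolumeGroupName "vg-demo" -Name "vol-demo" -SizeGiB 100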

Using Azure Elastic SAN

Using AES is straightforward, and Microsoft Azure provides good, easy-to-follow deployment guides (see the additional resources section below).
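For the on-prem or in-VM side, a minimal sketch of attaching a Windows machine to an AES volume with the built-in iSCSI initiator cmdlets follows; the portal IP address and IQN are placeholders you would copy from the volume's connect details in the Azure Portal.

    # Start the Microsoft iSCSI initiator service and make it start automatically
    Start-Service -Name MSiSCSI
    Set-Service -Name MSiSCSI -StartupType Automatic

    # Register the AES target portal (placeholder private endpoint IP)
    New-IscsiTargetPortal -TargetPortalAddress "10.0.0.4"

    # Discover targets, then connect persistently (placeholder IQN)
    Get-IscsiTarget | Format-List NodeAddress
    Connect-IscsiTarget -NodeAddress "iqn.2023-03.net.example:aes-vol-demo" -IsPersistent $true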

The following images show what AES looks like from the Azure Portal, as well as from an Azure Windows Server VM and an on-prem physical machine (e.g., Windows 10 laptop).

Figure 2 AES Azure Portal Big Picture

Figure 3 AES Volume Groups Portal View

Figure 4 AES Volumes Portal View

Figure 5 AES Volume Snapshot Views

Figure 6 AES Connected Volume Portal View

Figure 7 AES Volume iSCSI view from on-prem Windows Laptop

Figure 8 AES iSCSI Volume attached to Azure VM

Azure Elastic SAN Cost Pricing

The cost of AES is elastic, depending on whether you scale capacity with performance (e.g., base unit) or add more space capacity. If you need more performance, add base unit capacity, increasing IOPS, bandwidth, and space. In other words, base capacity includes storage space and performance, which you can grow in various increments. Remember that AES storage resources get shared across volumes within a volume group.

Azure Elastic SAN is billed hourly based on a monthly per-capacity base unit rate, with a minimum of 1TB provisioned capacity with minimum performance (e.g., 5,000 IOPS, 200MBps bandwidth). The base unit rate varies by region and type of redundancy, aka resiliency. For example, at the time of this writing, looking at US East, the Locally Redundant Storage (LRS) base unit rate is 1TB with 5,000 IOPS and 200MBps bandwidth, costing $81.92 per unit per month.

The above example breaks down to a rate of $0.08 per GB per month, or $0.000110 per GB per hour (assuming 730 hours per month). An example of simply adding storage capacity without increasing base units (e.g., performance) for US East is $61.44 per TB per month. That works out to $0.06 per GB per month (no additional provisioned IOPS or bandwidth) or $0.000083 per GB per hour.
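As a quick sanity check of those numbers (rates vary by region and change over time, so treat these as illustrative), the arithmetic is:

    # Worked pricing arithmetic for the US East example rates quoted above
    $hours  = 730          # assumed hours per month
    $gbTb   = 1024         # GB per TB as billed

    $base  = 81.92         # 1TB base unit (capacity plus performance), LRS
    $extra = 61.44         # 1TB additional capacity only (no added IOPS/MBps)

    $base  / $gbTb             # ~0.08 per GB per month
    $base  / $gbTb / $hours    # ~0.00011 per GB per hour
    $extra / $gbTb             # ~0.06 per GB per month
    $extra / $gbTb / $hours    # ~0.00008 per GB per hour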

Note that there are extra fees for Zone Redundant Storage (ZRS). Learn more about Azure Elastic SAN pricing here, as well as via a cost calculator here.

Azure Elastic SAN Performance

Performance for Azure Elastic SAN includes IOPS, bandwidth, and latency. AES IOPS increase in increments of 5,000 per base TB. Thus, an AES with a base of 10TB would have 50,000 IOPS distributed (shared) across all of its volumes (e.g., volumes are not restricted). For example, if the base capacity is increased from 10TB to 20TB, the IOPS would increase from 50,000 to 100,000.

On the other hand, if the base capacity (10TB) is not increased and only the storage capacity grows from 10TB to 20TB, the AES would have more capacity but still only 50,000 IOPS. AES bandwidth (throughput) increases by 200MBps per base TB. For example, a 5TB AES would have 5 x 200MBps (1,000 MBps) of throughput bandwidth shared across the volume group's volumes.

Note that while the performance gets shared across volumes, individual volume performance is determined by its capacity, with a maximum of 80,000 IOPS and up to 1,024 MBps per volume. Thus, to reach 80,000 IOPS and 1,024 MBps, an AES volume would have to be at least 107GB in space capacity. Also, note that the aggregate performance of all volumes cannot exceed the total of the AES. If you need more performance, then create another AES.
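To make the scaling rules above concrete, here is a small illustrative helper that applies the published per-base-TB increments (5,000 IOPS and 200MBps, shared across a volume group's volumes):

    # Provisioned AES performance scales with base (not extended) capacity
    function Get-AesPerf {
        param([int]$BaseTB)
        [pscustomobject]@{
            BaseTB = $BaseTB
            IOPS   = $BaseTB * 5000   # shared across all volumes in the group
            MBps   = $BaseTB * 200    # per-volume caps: 80,000 IOPS, 1,024 MBps
        }
    }

    Get-AesPerf -BaseTB 10   # 50,000 IOPS, 2,000 MBps
    Get-AesPerf -BaseTB 20   # 100,000 IOPS, 4,000 MBps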

Will all VMs or compute resources see performance improvements with AES? Traditional Azure Disks associated with VMs have per-disk performance resource limits, including IOPS and bandwidth. Likewise, VMs have storage limits based on their instance type and size, including the number of disks (HDD or SSD), performance (IOPS and bandwidth), as well as the number of CPUs and memory.

What this means is that an AES volume could have more performance than a given VM can use. Refer to your VM instance sizing and configuration to determine its IOPS and bandwidth limits; if needed, explore changing the size of your VM instance to leverage the performance of Azure Elastic SAN storage.

Additional Resources Where to learn more

The following links are additional resources to learn about Microsoft Azure Elastic SAN and related data infrastructures and tradecraft topics.

Azure AKS Storage Concepts 
Azure Elastic SAN (AES) Documentation and Deployment Guides
Azure Elastic SAN Microsoft Blog
Azure Elastic SAN Overview
Azure Elastic SAN Performance topics
Azure Elastic SAN Pricing calculator
Azure Products by Region (see where AES is currently available)
Azure Storage Offerings 
Azure Virtual Machine (VM) sizes
Azure Virtual Machine (VM) types
Azure Elastic SAN General Pricing
Azure Storage redundancy 
Azure Service Level Agreements (SLA) 
StorageIOBlog.com Data Box Family 
StorageIOBlog.com Data Box Review
StorageIOBlog.com Data Box Test Drive 
StorageIOblog.com Microsoft Hyper-V Alive Enhanced with Win Server 2025
StorageIOblog.com If NVMe is the answer, what are the questions?
StorageIOblog.com NVMe Primer (or refresh)
RTO Context Matters (Blog Post)

Additional learning experiences along with common questions (and answers) are found in my Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials (CRC Press) by Greg Schulz

What this all means

Azure Elastic SAN (AES) is a new, now generally available, shared block storage offering that is accessible using iSCSI from within the Azure cloud and from on-prem environments. Even with iSCSI, AES is relatively easy to set up and use for shared storage, especially if you are used to or currently working with hardware or software-defined SAN storage solutions.

With NVMe over TCP fabrics gaining industry and customer traction, I’m hoping for Microsoft to add that in the future. Currently, AES supports LRS and ZRS for redundancy, and an excellent future enhancement would be to add Geo Redundant Storage (GRS) capabilities for those who need it.

I like the option of elastic shared storage regarding performance, availability, capacity, and economic costs (PACE). If you understand the value proposition of evolving from dedicated DAS to shared SAN (independent of the underlying fabric network), or are currently using some form of on-prem shared block storage, you will find AES familiar and easy to use. Granted, AES is not a solution for everything, as there are roles for other block storage, including DAS such as Azure Disks for VMs within Azure, along with on-prem DAS, as well as file, object, and blob storage and tables, among others.

Wrap up

The notion that all cloud storage must be objects or blobs is tied to those who only need, provide, or prefer those solutions. The reality is that everything is not the same. Thus, there is a need for various storage mediums, devices, tiers, access methods, and types of services. Microsoft and Azure have done an excellent job of providing that variety. I like what Microsoft Azure is doing with Azure Elastic SAN.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Nine time Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of UnlimitedIO LLC.

Microsoft Hyper-V Is Alive Enhanced With Windows Server 2025

Yes, you read that correctly, Microsoft Hyper-V is alive and enhanced with Windows Server 2025, formerly known as Windows Server v.Next. Note that the Windows Server 2025 preview build is just that, a preview, available for download and testing as of this writing.

What about the myth that Hyper-V is discontinued?

Despite recent FUD (fear, uncertainty, doubt), misinformation, and fake news, Microsoft Hyper-V is not dead. Nor has Hyper-V been discontinued, as some claim. Some Hyper-V FUD is tied to VMware customers and partners looking for alternatives following Broadcom’s acquisition of VMware. More on Broadcom and VMware here, here, here, here, and here.

As a result of Broadcom’s VMware acquisition and the challenges it creates for partners and customers (see links above), organizations are doing due diligence, looking for replacements or alternatives. In addition, some vendors are leveraging the current VMware challenges to try and position themselves as the best hypervisor virtualization safe harbor for customers. Thus, some vendors, their partners, influencers, and amplifiers are using FUD to keep prospects from looking at or considering Hyper-V.

Virtual FUD (vFUD)

First, let’s shut down some virtual FUD (vFUD). As mentioned above, some are claiming that Microsoft has discontinued Hyper-V. Specifically, the vFUD centers on Microsoft terminating a specific license SKU (e.g., the free Hyper-V Server 2019 SKU). For those unfamiliar with the discontinued SKU (Hyper-V Server 2019), it’s a headless (no desktop GUI) version of Windows Server running Hyper-V VMs, nothing more, nothing less.

Does that mean the Hyper-V technology is discontinued? No.

Does that mean Windows Server and Hyper-V are discontinued? No.

Microsoft is terminating a particular stripped-down Windows Server version SKU (e.g., Hyper-V Server 2019), not the underlying technology, including Windows Server and Hyper-V.

To repeat, a specific SKU or distribution (Hyper-V Server 2019) has been discontinued, not Hyper-V. Meanwhile, other distributions of Windows Server with Hyper-V continue to be supported and enhanced, including the upcoming Windows Server 2025 and Server 2022, among others.

On the other hand, there is also some old vFUD going back many years, even a decade, to when some last experienced using, trying, or looking at Hyper-V. For example, the last look at Hyper-V might have been in the Server 2016 or earlier era.

If you are a vendor or influencer throwing vFUD around, at least get some new vFUD and use it in new ways. Better yet, up your game and marketing so you don’t rely on old vFUD. Likewise, if you are a vendor partner and have not extended your software or service support for Hyper-V, now is a good time to do so.

Watch out for falling into the vFUD trap of thinking Hyper-V is dead and thus missing out on new revenue streams. At a minimum, take a look at current and upcoming enhancements for Hyper-V as part of your due diligence instead of working off of old vFUD.

Where is Hyper-V being used?

From on-site (aka on-premises, on-premise, on-prem) and edge on Windows Servers, standalone and clustered, to Azure Stack HCI. From Azure and other Microsoft platforms or services to Windows desktops, as well as home labs, among many other scenarios.

Do I use Hyper-V? Yes. When I retired from the vExpert program after ten years, I moved all of my workloads from a VMware environment to Hyper-V, including *nix, container, and Windows VMs, on-site and on Azure cloud.

How Hyper-V Is Alive Enhanced With Windows Server 2025

Is Hyper-V alive and enhanced with Windows Server 2025? Yup.

Formerly known as Windows Server v.Next, Microsoft announced the Windows Server 2025 preview build on January 26, 2024 (you can get the bits here). Note that Microsoft uses Windows Server v.Next as a generic placeholder for next-generation Windows Server technology.

A reminder that the cadence of Windows Server Long-Term Servicing Channel (LTSC) versions has been about three years (2012R2, 2016, 2019, 2022, now 2025), along with interim updates.

What’s enhanced with Hyper-V and Windows Server 2025

    • Hot patching of running servers (requires Azure Arc management) with almost instant implementation and no reboot for physical, virtual, and cloud-based Windows Servers.
    • Scaling to even more compute processors and RAM for VMs.
    • Server storage I/O performance updates, including NVMe optimizations.
    • Active Directory (AD) improvements for scaling, security, and performance.
    • Enhancements to Storage Replica and clustering capabilities.
    • Hyper-V GPU partitioning and pools, including migration of VMs using GPUs.

More Enhancements for Hyper-V and Windows Server 2025

Active Directory (AD)

Enhanced performance using all CPUs in a processor group of up to 64 cores to support scaling and faster processing. LDAP support for TLS 1.3, Kerberos support for AES SHA-256/384, new AD functional levels, local KDC, improved replication priority, NTLM retirement, and other security hardening. In addition, 64-bit long value IDs (LIDs) are supported, along with a new database schema using 32K pages vs. the previous 8K pages. You will need to upgrade forest-wide across domain controllers (at least Server 2016 or later) to leverage the new larger page sizes. Note that there is backward compatibility using 8K pages until all ADs are upgraded.

Storage, HA, and Clustering

Windows Server continues to offer flexible options for using storage how you want or need, from traditional direct attached storage (DAS) to Storage Area Networks (SAN) to Storage Spaces Direct (S2D) software-defined storage, including NVMe, NVMe over Fabrics (NVMeoF), SAS, Fibre Channel, and iSCSI, along with file-attached storage. Other storage and HA enhancements include Storage Replica logging and compression performance and stretch S2D multi-site optimization.

Failover Cluster enhancements include AD-less clusters, cert-based VM live migration for the edge, cluster-aware updating reliability, and performance improvements. ReFS enhancements include dedupe and compression optimizations.

Other NVMe enhancements include optimizations to boost performance while reducing CPU overhead, for example, going from 1.1M IOPS to 1.86M IOPS, and then with a new native NVMe driver (to be added), from 1.1M IOPS to 2.1M IOPS. These performance optimizations will be interesting to look at more closely, including baseline configuration, number and type of devices used, and other considerations.

Compute, Hyper-V, and Containers

Microsoft has added and enhanced various compute, Hyper-V, and container functionality with Server 2025, including support for larger configurations and more flexibility with GPUs. There are app compatibility improvements for containers that will be interesting to see and hear more details about, besides just Nano (the ultra slimmed-down Windows container).

Hyper-V

Microsoft extensively uses Hyper-V technology across different platforms, including Azure, Windows Servers, and Desktops. In addition, Hyper-V is commonly found across various customer and partner deployments on Windows Servers, Desktops, Azure Stack HCI, running on other clouds, and virtualization (nested). While Microsoft effectively leverages Hyper-V and continues to enhance it, its marketing has not effectively told and amplified the business benefit and value, including where and how Hyper-V is deployed.

Hyper-V with Server 2025 includes discrete device assignment to VMs (e.g., devices dedicated to a VM). However, dedicating a device like a GPU to a VM prevents resource sharing, failover clustering, or live migration. On the other hand, Server 2025 Hyper-V supports GPU-P (GPU Partitioning), enabling GPU(s) to be shared across multiple VMs. GPUs can be partitioned and assigned to VMs, with GPU partitioning enabled across various hosts.

In addition to partitioning, GPUs can be placed into GPU pools for HA. Live migration and cluster failover of VMs using GPU partitions can be done, subject to requirements including PCIe SR-IOV and AMD Milan or later, or Intel Sapphire Rapids, among other requirements. Another enhancement is Dynamic Processor Compatibility, which allows mixed processor generations to be used across VMs by masking out functionality that is not common across processors. Other enhancements include optimized UEFI, secure boot, TPM, and hot add and removal of NICs.
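As a rough sketch, GPU partitioning and processor compatibility are surfaced through Hyper-V PowerShell cmdlets along the following lines; the VM name is a placeholder, and exact cmdlets, parameters, and driver requirements should be verified against current Windows Server documentation.

    # List GPUs on the host that support partitioning (GPU-P)
    Get-VMHostPartitionableGpu

    # Assign a GPU partition to a VM (placeholder VM name) and confirm it
    Add-VMGpuPartitionAdapter -VMName "vm01"
    Get-VMGpuPartitionAdapter -VMName "vm01"

    # Dynamic Processor Compatibility: allow migration across mixed CPU generations
    Set-VMProcessor -VMName "vm01" -CompatibilityForMigrationEnabled $true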

Networking

Network ATC provides intent-based deployments where you specify desired outcomes or states, and the configuration is optimized for what you want to do. Network HUD enables always-on monitoring and network remediation. Software-Defined Networking (SDN) gains optimizations for transparent multi-site L2 and L3 connectivity, along with improved SDN gateway performance.

SMB over QUIC leverages TLS 1.3 security to streamline local, mobile, and remote networking while enhancing security, with configuration from the server or client. In addition, there is an option to turn off SMB NTLM at the SMB level, along with controls on which versions of SMB to allow or refuse. Also being added is a brute-force attack limiter that slows down SMB authentication attacks.
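A hedged sketch of what SMB over QUIC configuration looks like with current SMB cmdlets follows; the server name, certificate thumbprint, and share are placeholders, and the NTLM and SMB version controls mentioned above have their own settings to check in the Server 2025 documentation.

    # Server side: map a TLS 1.3 certificate to the SMB server for QUIC
    New-SmbServerCertificateMapping -Name "fs1.contoso.com" `
        -Thumbprint "<certificate-thumbprint>" -StoreName My

    # Client side: map a share forcing the QUIC transport
    New-SmbMapping -LocalPath "Z:" -RemotePath "\\fs1.contoso.com\share" -TransportType QUIC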

Management, Upgrades, General User Experience

The upgrade process moving forward with Windows Server 2025 is intended to be seamless and less disruptive. These enhancements include hot patching and flighting (e.g., LTSC Windows Server upgrades delivered similar to how you get regular updates). For hybrid management, an easier-to-use wizard to enable Azure Arc is planned. For flexibility, WiFi networking and Bluetooth devices, if present, are automatically enabled with Windows Server 2025, focused on edge and remote deployment scenarios.

Also new is an optional subscription-based licensing model for Windows Server 2025, while retaining the existing perpetual licensing. Let me repeat that so as not to create new vFUD: you can still license Windows Server (and thus Hyper-V) using traditional perpetual models and SKUs.

Additional Resources Where to learn more

The following links are additional resources to learn about Windows Server, Server 2025, Hyper-V, and related data infrastructures and tradecraft topics.

What’s New in Windows Server v.Next video from Microsoft Ignite (11/17/23)
Microsoft Windows Server 2025 What's New
Microsoft Windows Server 2025 Preview Build Download
Microsoft Windows Server 2025 Preview Build Download (site)
Microsoft Evaluation Center (various downloads for trial)
Microsoft Eval Center Windows Server 2022 download
Microsoft Hyper-V on Windows Information
Microsoft Hyper-V on Windows Server Information
Microsoft Hyper-V on Windows Desktop (e.g., Win10)
Microsoft Windows Server Release Information
Microsoft Hyper-V Server 2019
Microsoft Azure Virtual Machines Trial
Microsoft Azure Elastic SAN
If NVMe is the answer, what are the questions?
NVMe Primer (or refresh), The NVMe Place.

Additional learning experiences along with common questions (and answers) are found in my Software Defined Data Infrastructure Essentials book.


What this all means

Hyper-V is very much alive and being enhanced. Hyper-V is used from Microsoft Azure to Windows Server and other platforms, at scale and in smaller environments.

If you are looking for alternatives to VMware or simply exploring virtualization options, do your due diligence and check out Hyper-V. Hyper-V may or may not be what you want; however, is it what you need? Looking at Hyper-V now and at upcoming enhancements also positions you for when management asks if you have done your due diligence vs. relying on vFUD.

Do a quick proof of concept: spin up a lab and check out currently available Hyper-V, for example, on Server 2022 or the 2025 preview, to get a feel for whether what is there meets your needs and wants. Download the bits and get some hands-on time with Hyper-V and Windows Server 2025.
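A minimal lab sketch (names and paths are placeholders) for standing up Hyper-V and a test VM on Windows Server:

    # Enable the Hyper-V role and management tools (reboots the server)
    Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

    # Create a virtual switch and a generation 2 test VM with a new 40GB disk
    New-VMSwitch -Name "LabSwitch" -SwitchType Internal
    New-VM -Name "lab-vm01" -Generation 2 -MemoryStartupBytes 2GB -SwitchName "LabSwitch" `
        -NewVHDPath "C:\VMs\lab-vm01.vhdx" -NewVHDSizeBytes 40GB
    Start-VM -Name "lab-vm01"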

Wrap up

Hyper-V is alive and enhanced with Windows Server 2025 and other releases.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Nine time Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of UnlimitedIO LLC.

PACE your Server Storage I/O decision-making, it's about application requirements

PACE your Server Storage I/O decision-making, it’s about application requirements. Regardless of whether you are looking for physical, software-defined virtual, cloud, or container storage; block, file, or object; primary, secondary, or protection copies; standalone, converged, hyper-converged, cluster-in-a-box, or other forms of storage and packaging, when it comes to server storage I/O decision-making, it’s about the applications.

I often see people deciding on the best storage before the questions of requirements, needs, and wants are even mentioned. Sure, the technology is important; so too are the techniques and trends, including using new things in new ways, as well as old things in new ways. There are lots of buzzwords on the storage scene these days. But don’t even think about buying until you truly understand your business’ storage needs.

However, when it comes down to it, unless you have a unique need, most environments’ server and storage I/O resources exist to protect, preserve, and serve applications and their information or data. Recently I did a couple of articles over at Network Computing; these are tied to server and storage I/O decision-making, balancing technology buzzwords with business and application requirements.

PACE and common applications characteristics

A theme I mention in the above two articles, as well as elsewhere on server, storage I/O, and applications, is PACE. That is, application Performance, Availability, Capacity, Economics (PACE). Different applications will have various attributes, in general, as well as in how they are used. For example, database transaction activity vs. reporting or analytics, logs and journals vs. redo logs, indices, tables, import/export, scratch and temp space. PACE (figure 2.7) describes the application and data characteristics and needs.

Figure 2.7 Server Storage I/O PACE

Common Application Pace Attributes

All applications have PACE attributes:

  • Those PACE attributes vary by application and usage
  • Some applications and their data are more active than others
  • PACE characteristics will vary within different parts of an application

Think of an application, along with its associated data, PACE as its personality: how it behaves, what it does, how it does it, and when, along with value, benefit, or cost, as well as Quality of Service (QoS) attributes. Understanding the applications in different environments, data value, and associated PACE attributes is essential for making informed server and storage I/O decisions, from configuration to acquisitions or upgrades; when, where, why, and how to protect; performance optimization; capacity planning; reporting; and troubleshooting, not to mention addressing budget concerns.

Data and Application PACE

Primary PACE attributes for active and inactive applications and data:
P – Performance and activity (how things get used)
A – Availability and durability (resiliency and protection)
C – Capacity and space (what things use or occupy)
E – Energy and Economics (people, budgets and other barriers)

Some applications need more performance (server compute, or storage and network I/O), while others need space capacity (storage, memory, network, or I/O connectivity). Likewise, some applications have different availability needs (data protection, durability, security, resiliency, backup, BC, DR) that determine the various tools, technologies, and techniques to use. Budgets are also a concern, which for some applications means enabling more performance per cost, while for others the focus is on maximizing space capacity and protection level per cost. PACE attributes also define or influence policies for QoS (performance, availability, capacity), as well as thresholds, limits, quotas, retention, and disposition, among others.
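As an illustrative (made-up) example, capturing an application's PACE attributes as a simple record makes them concrete and usable for driving QoS policies and thresholds:

    # Hypothetical PACE profile for a database application
    $pace = [pscustomobject]@{
        Application  = "OrderDB"                                  # made-up name
        Performance  = @{ IOPS = 20000; LatencyMs = 2 }           # P: activity
        Availability = @{ Copies = 4; RPOMin = 15; RTOMin = 60 }  # A: protection
        Capacity     = @{ SpaceGB = 2048; GrowthPctYr = 25 }      # C: space
        Economics    = @{ BudgetPerGBMonth = 0.08 }               # E: cost limits
    }
    $pace | ConvertTo-Json -Depth 3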

Where to learn more

Learn more about data infrastructures and tradecraft related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means

The best storage will be the one that meets or exceeds your application requirements instead of the solution that meets somebody else’s needs or wants. Keep in mind: PACE your Server Storage I/O decision-making, it is about application requirements.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

ToE NVMeoF TCP Performance Line Boost Performance Reduce Costs

ToE NVMeoF TCP Performance Line Boost Performance Reduce Costs.

Yes, you read that correctly; leverage TCP Offload Engines (TOE) to boost the performance of TCP-based NVMeoF (e.g., NVMe over Fabrics) while reducing costs. Keep in mind that there is a difference between cutting costs (something that causes or moves problems and complexities elsewhere) and reducing and removing costs (e.g., finding, fixing, and removing complexities).

Reducing or cutting costs can be easy: simply swap items for lower-priced items, introducing performance bottlenecks or some other compromise. Likewise, boosting performance can be addressed by throwing (deploying) more hardware (and/or software) at the problem, resulting in higher costs or some other compromise.

On the other hand, as mentioned above, finding, fixing, and removing complexity and overhead results in cost savings while doing the same work, or enables more work to be done for the same cost, maximizing hardware, software, and network investments. In other words, a better return on investment (ROI) and a lower total cost of ownership (TCO).

Software Defined Storage and Networks Need Hardware

With the continued shift towards software-defined data centers, software-defined data infrastructures, software-defined storage, software-defined networking, and software-defined everything, those all need something in common, and that is hardware-based compute processing.

In the case of software-defined storage, including standalone, shared fabric or networked-based, converged infrastructure (CI) or hyper-converged infrastructure (HCI) deployment models, there is the need for CPU compute, memory, and I/O, in addition to storage devices. This means that the software to create, manage, and perform storage tasks needs to run on a server’s CPU, along with I/O networking software stacks.

However, it should be evident that sometimes the obvious needs to be restated: software-defined anything requires hardware somewhere in the solution stack. Likewise, depending on how the software is implemented, it may require more hardware resources, including server compute, memory, I/O, and network and storage capabilities.

Keep in mind that networking stacks, including upper and lower-level protocols and interfaces, leverage software to implement their functionality. Therefore, the value proposition of using standard networks such as Ethernet and TCP is the ability to leverage lower-cost network interface cards (or chips), also known as NICs combined with server-based software stacks.

On the one hand, costs can be reduced by using less expensive NICs and using generally available server CPU compute capabilities to run the TCP and other networking stack software. On systems with lower application or other software performance demands, this can work out ok. However, for workloads and systems using software-defined storage and other applications that compete for server resources (CPU, memory, I/O), this can result in performance bottlenecks and problems.

Many Server Storage I/O Networking Bottlenecks Are CPU Problems

There is a classic saying that the best I/O is the one that you do not have to do. Likewise, the second-best I/O is the one with the least overhead (and cost) and the best performance. Another saying is that many application, database, server, and storage I/O problems are actually due to CPU bottlenecks. Fast storage devices need fast applications on fast servers with fast networks. This means finding and removing blockages, including offloading server CPUs from performing network I/O processing using TOEs.

Wait a minute, isn’t the value proposition of using software-defined storage or networking to use low-cost general-purpose servers instead of more expensive hardware devices? With some caveats, yup; however, understand how much server CPU is being used to run the software-defined storage and networking stacks and handle upper-level functionality. Supporting higher performance or larger workloads can mean putting in larger (scale-up) and more (scale-out) servers, with their increased connectivity and management overhead.

This is where the TOEs come into play by leveraging the best of both worlds to run software-defined storage (and networking) stacks, and other software and applications on general-purpose compute servers. The benefit is the TCP network I/O processing gets offloaded from the server CPU to the TOE, thereby freeing up the server CPU to do more work or enabling a smaller, lower-cost CPU to be used.

After all, many server, storage, and I/O networking problems are often server CPU problems. An example of this is running the TCP networking software stack using CPU cycles on a host server that compete with the other software and applications. In addition, as an application does more I/O, for example, issuing read and write requests to network and fabric-based storage, the server’s CPUs become busier with the added overhead of running the lower-layer TCP and networking stack.

The result is server resources (CPU, memory) running at higher utilization, however with more overhead. Higher resource utilization with low or no overhead, low latency, and high productivity are good things, resulting in lower cost per work done. On the other hand, high CPU utilization, server operating system or kernel-mode overhead, poor latency, and low productivity are not good things, resulting in higher cost per work done.

This means there is a loss of productivity as more time is spent waiting, and the cost to do a unit of work, for example, an I/O or transaction, increases (there is more overhead). Thus, offload engines (chips, cards, adapters) come into play to shift some software processing from the server CPU to a specialized processor. The result is lower server CPU overhead, leaving more server resources for the main application or software-defined storage (and networking) while boosting performance and lowering overall costs.
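A back-of-envelope illustration of the cost-per-unit-of-work point, using made-up numbers: if less server CPU is burned on the network stack, more of the same server does productive work and the cost per I/O drops.

    # Hypothetical numbers purely for illustration
    $serverCostHr = 2.00                  # fully loaded server cost per hour
    $ioCapacityHr = 50000 * 3600          # I/Os the server could do per hour

    $overheadNoToe = 0.30                 # 30% of CPU lost to the TCP stack
    $overheadToe   = 0.05                 # 5% with TCP offloaded to a TOE

    $serverCostHr / ($ioCapacityHr * (1 - $overheadNoToe)) * 1e6   # cost per million I/Os, no TOE
    $serverCostHr / ($ioCapacityHr * (1 - $overheadToe))   * 1e6   # cost per million I/Os, with TOE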

Graphics, Compute, Network, TCP Offload Engines

Offload engines are not new; they have been around for a while and are in some cases more common than some realize, going by different names. For example, Graphics Processing Units (GPUs) are used for offloading graphics and compute-intensive tasks to special chips and adapter cards. Other examples of offload processors include network TCP Offload Engines (TOE), compression, and storage processing, among others.

The basic premise of offload engines is to move or shift processing of specific functions from having their software running on a general-purpose server CPU to a specialized processor (ASIC, FPGA, adapter, or mezzanine card). By moving the processing of functions to the offload or unique processing device, performance can be boosted while freeing up a server’s primary processor (CPU) to do other useful (and productive) work.

There is a cost associated with leveraging offloads and specialized processors; however, the business benefit should offset it by reducing primary server compute expenses or doing more work with available resources while driving network bandwidth line-rate performance. The above should result in a net TCO reduction and boost your ROI for a given system or bill of materials, including hardware, software, networking, and management.

Fast Storage Needs Fast Servers and I/O Networks

Ethernet network TOEs became popular in the industry back in the early 2000s, focusing on networked storage and storage networks that relied on TCP (e.g., iSCSI).

Fast forward to today, and there is continued use of networked (ok, fabric) storage over various interfaces, including Ethernet supporting different protocols. One of those protocols is NVMe in NVMe over Fabrics (NVMeoF) using TCP and underlying Ethernet-based networks for accessing fast Solid State Devices (SSDs).

Chelsio Communications T6 TOE for NVMeoF

An example of server storage I/O network TOEs, including those that support NVMeoF, are those from Chelsio Communications, such as the T6 25/100Gb devices. Chelsio announced today server storage I/O benchmark proof points for TCP-based NVMe over Fabrics (NVMeoF) TOE-accelerated performance. StorageIO had the opportunity to look at the performance-boosting ability and CPU savings benefit of the Chelsio T6 prior to today’s announcement.

After reviewing and validating the Chelsio proof points, test methodology, and results, it is clear that the T6 TOE-enabled solution boosts server storage I/O performance while reducing host server CPU usage. The Chelsio T6 solution, combined with Storage Performance Development Kit (SPDK) software, provides local-like performance for network fabric distributed NVMe (using TCP-based NVMeoF) attached SSD storage while reducing host server CPU consumption.

“Boosting application performance, efficiency, and effectiveness of server CPUs are key priorities for legacy and software defined datacenter environments,” said Greg Schulz, Sr. Analyst Server Storage. “The Chelsio NVMe over Fabrics 100GbE NVMe/TCP (TOE) demonstration provides solid proof of how high-performance NVMe SSDs can help datacenters boost performance and productivity, while getting the best return on investment of datacenter infrastructure assets, not to mention optimize cost-of-ownership at the same time. It’s like getting a three for one bonus value from your server CPUs, your network, and your application perform better, now that’s a trifecta!”

You can read more about the technical and business benefits of the Chelsio T6 TOE enabled solution along with associated proof points (benchmarks) in the PDF white paper found here and their Press Release here. Note that the best measure, benchmark, proof point, or test is your application and workload, so contact Chelsio to arrange an evaluation of the T6 using your workload, software, and platform.

Where to learn more

Learn more about TOE, server, compute, GPU, ASIC, FPGA, storage, I/O networking, TCP, data infrastructure, and software-defined and related topics, trends, techniques, and tools via the following links:

Chelsio Communications T6 Performance Press Release (PDF)
Chelsio Communications T6 TOE White Paper (PDF)
Application Data Value Characteristics Everything Is Not the Same
PACE your Infrastructure decision-making, it’s about application requirements
Data Infrastructure Server Storage I/O Tradecraft Trends
Data Infrastructure Overview, Its What’s Inside of Data Centers
Data Infrastructure Management (Insight and Strategies)
Hyper-V and Windows Server 2025 Enhancements

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means

The large superscalar web services and other large environments leverage offload engines and specialized processing technologies (chips, ASICs, FPGAs, GPUs, adapters) to boost performance while reducing server compute costs or getting more value out of a given server platform. If it works for the large superscalars, it can also work for your environment or your software-defined platform.

The benefits include reducing the number and cost of items in your software-defined platform bill of materials (BoM). Another benefit is freeing up server CPU cycles to run your storage, network, or other software to get more performance and work done. Yet another benefit is the ability to further stretch your software license investments, getting more work done per software license unit.

Have a look at the Chelsio Communications T6 line of TOEs for NVMeoF and other workloads to boost performance, reduce CPU usage, and lower costs. See the ToE NVMeoF TCP performance, line rate, and cost benefits for yourself.

Ok, nuff said, for now.

Cheers GS

Greg Schulz – Microsoft MVP Cloud and Data Center Management, previous 10 time VMware vExpert. Author of Software Defined Data Infrastructure Essentials (CRC Press), Data Infrastructure Management (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Cloud Ready Data Protection for Hybrid Data Centers Are In Your Future

Join me for a free webinar Cloud Ready Data Protection for Hybrid Data Centers and Data Infrastructures 11AM PT Thursday July 11th produced by Redmond Magazine sponsored by Quest Software.

Hybrid Data Infrastructures and Data Centers

Hybrid cloud and on-prem data centers are in your future, if not already a reality. In addition to using public cloud and on-prem resources, your environment is likely a mix of many different operating systems, applications, and servers (virtual and physical), along with multiple backup and recovery technologies.

Cloud Ready Data Protection for Hybrid Data Centers

In this engaging, interactive webinar, we will look at trends, issues, and challenges, as well as provide best practices for what you can do to address them today. You’ll learn how to simplify and streamline your system, application, and data protection in both the cloud and data center without compromise, all while removing complexity and cost.

What You Will Learn

Join Microsoft MVP, VMware vExpert and IT analyst Greg Schulz of Server StorageIO along with Michael Gogos, Data Protection expert from Quest, as they discuss how to:

  • Become hybrid and cloud data protection ready
  • Use the cloud for backup and disaster recovery
  • Protect cloud applications and their data
  • Address different hybrid data protection scenarios
  • Take action today to prepare for tomorrow

 

Where to learn more

Learn more about world backup day, recovery and data protection along with other related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means

I look forward to you joining Michael Gogos of Quest Software and myself on Thursday July 11th 11AM PT for our interactive discussion (bring your questions) around Cloud Ready Data Protection for Hybrid Data Centers and what you can do today (Register here).

Ok, nuff said, for now.

Cheers GS

Greg Schulz – Multi-year Microsoft MVP Cloud and Data Center Management, ten-time VMware vExpert. Author of Data Infrastructure Insights (CRC Press), Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Also visit www.picturesoverstillwater.com to view various UAS/UAV e.g. drone based aerial content created by Greg Schulz. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2019 Server StorageIO and UnlimitedIO. Visit our companion site https://picturesoverstillwater.com to view drone based aerial photography and video related topics. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Driving ROI with Cloud Storage Consolidation Seminars

Join me in a series of in-person seminars on driving ROI with cloud storage consolidation for unstructured file data.

Various Data Infrastructure options from on-prem to edge to cloud and beyond

These initial seminars are being held at Amazon Web Services (AWS) locations: April 30 in New York City, May 1 in Chicago, and May 2 in Houston. In each of these three cities, I will be joined by experts from NetApp, Talon, and AWS as we look at issues, trends, and what can be done today (including hands-on demos) driving ROI with cloud storage consolidation for unstructured file data.

What The Seminars Are About

These seminars look at how to remove cost and complexity while boosting productivity for distributed sites with unstructured data and NAS file servers. The seminars look at making informed decisions balancing technical considerations with a business return on investment (ROI) model, along with return on innovation (the other ROI) from boosting productivity. It’s not about simply cutting costs, which can create chaos or compromise elsewhere; it’s about removing complexity and cost while boosting productivity with smart cloud storage consolidation for unstructured file data.

Distributed File Server Cloud Storage Consolidation ROI Economic Comparison

During these seminars I will discuss various industry and customer trends, challenges, and solutions, particularly for environments with distributed file servers for unstructured file data. As part of my discussion, we will look at both a technical and an ROI business-based model for distributed file server cloud storage consolidation, based on the Server StorageIO white paper report titled Cloud File Data Storage Consolidation and Economic Comparison Model (free PDF download here).

Where When and How to Register

New York City Tuesday April 30, 2019 9:00AM
Amazon Web Services
7 West 34th St.
6th Floor
Learn more and register here.

Chicago Illinois  Wednesday May 1, 2019 9:00AM
Amazon Web Services
222 West Adams Street
Suite 1400
Learn more and register here

Houston Texas Thursday May 2, 2019 9:00AM
Amazon Web Services
825 Town and Country Lane
Suite 1000
Learn more and register here

Where to learn more

Learn more about cloud storage consolidation, ROI, and data infrastructure along with other related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means

Making informed decisions for data infrastructure resources including cloud storage consolidation and distributed file servers involves technical, application workload as well as business economic analysis. Which of the three (technical, application workload, financial) is more important for enabling a business benefit will depend on your perspective, as well as area of focus. However, all the above need to be considered in the balance as part of making an informed data infrastructure resource decision. That is where a discussion about a business financial ROI model (pro forma if you prefer) comes into play as part of cloud storage consolidation, including for distributed file server of unstructured file data.

I look forward to meeting with attendees and hope to see you at the events: April 30th in New York City, May 1st in Chicago, and May 2nd in Houston, as we discuss driving ROI with cloud storage consolidation at these seminars.

Ok, nuff said, for now.

Cheers GS

Greg Schulz – Multi-year Microsoft MVP Cloud and Data Center Management, ten-time VMware vExpert. Author of Data Infrastructure Insights (CRC Press), Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Also visit www.picturesoverstillwater.com to view various UAS/UAV e.g. drone based aerial content created by Greg Schulz. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. Visit our companion site https://picturesoverstillwater.com to view drone based aerial photography and video related topics. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

World Backup Day Reminder Don’t Be an April Fool Test Your Data Recovery

World Backup Day Reminder Don’t Be an April Fool Test Your Data Recovery.

March 31 is the annual World Backup Day, to spotlight awareness around the importance of protecting your data and testing your data recovery. The focus of world backup and recovery day spans from the largest enterprise and cloud service providers (e.g., super scalars) to the smallest SMB, SOHO, ROBO, and home consumers (including your photos and other valuable items).

Granted, the technology, tools, techniques, and trends will differ with scope as well as scale.

However, the fundamental data protection approaches apply to all. That is, having multiple copies of different points in time spread across separate storage (systems, servers, devices, media, cloud services) as well as offsite (and off-line).

Why The Need For Data Protection And Recovery

Data protection encompasses many different things, from accessibility, durability, resiliency, reliability, and serviceability (RAS) to security and consistency. Availability includes basic and high availability (HA), business continuance (BC), business resiliency (BR), disaster recovery (DR), archiving, backup, logical and physical security, fault tolerance, and isolation and containment, spanning systems, applications, data, metadata, settings, and configurations.

From a data infrastructure perspective, availability of data services spans from local to remote, physical to logical and software-defined, virtual, container, and cloud, as well as mobile devices. On the left side of the following figure are various data protection and security threat risks and scenarios that can impact availability or result in a data loss event (DLE), data loss access (DLA), or disaster. The figure shows various techniques, tools, technologies, and best practices to protect data infrastructures, applications, and data from threat risks.

Don’t Become An April 1st Recovery Fool

April 1st, also known as April Fool’s Day, should be a reminder to plan as well as test your recovery, so the joke is not on you. Data protection, including backup, archiving, security, disaster recovery (DR), business continuance (BC), and business resiliency (BR), is not a once-a-year focus; instead, it is a 365-day-a-year continuum. Likewise, the focus needs to expand from just making sure you backed up or made copies of your data to being able to recover. After all, what good is a check box that you did a backup on world backup day, only to find out the next day that you cannot recover, or that what you thought was protected is not there.

If you already have good backups and data protection copies, verify that they are in fact good by restoring their contents to a different location. It should go without saying, however all too often common sense needs to be repeated: make sure in the course of testing data protection, including restores, that you do not inadvertently cause a disaster. Also, go a step beyond verifying that you can read the data stored on disk, tape, SSD, or optical; actually try to use or open the data. This verifies that you can both access and restore the data from the protection medium or cloud location, as well as unlock, decrypt, uncompress, or re-inflate deduped data.

Evolving Data Protection Including Backup and Recovery

While the emphasis of World Backup Day is on the importance of data protection, including having backup copies, there also needs to be an emphasis on recovering. It is essential to make sure data is protected, which means having multiple copies from different time intervals stored on several mediums or systems across one or more locations. The previous is the basis of 4 3 2 1 data protection: having four or more copies, with three or more point-in-time versions, spread across two or more different systems or storage mediums, with at least one offsite (a simple scripted check of the rule appears after the list below).

4 3 2 1 data protection (via Software Defined Data Infrastructure Essentials)

4    At least four copies of data (or more); enables durability in case a copy goes bad, is deleted or corrupted, a device fails, or a site is lost.
3    Three (or more) point-in-time versions of the data to retain; enables various recovery points in time to restore, resume, or restart from.
2    Data located on two or more systems (devices or media/mediums); enables protection against a device, system, server, file system, or other fault/failure.
1    At least one of those copies off-premises and not live (isolated from the active primary copy); enables resiliency across sites, as well as a space, time, and distance gap for protection.
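
Here is the simple scripted check mentioned above: a minimal Python sketch, assuming a small hand-maintained inventory of protection copies rather than any particular backup product, that tests the inventory against the 4 3 2 1 rule:

    from datetime import date

    # Hypothetical inventory of protection copies for one data set
    copies = [
        {"system": "nas01",  "version": date(2019, 3, 29), "offsite": False, "offline": False},
        {"system": "usb01",  "version": date(2019, 3, 30), "offsite": False, "offline": True},
        {"system": "cloud1", "version": date(2019, 3, 31), "offsite": True,  "offline": False},
        {"system": "tape01", "version": date(2019, 3, 24), "offsite": True,  "offline": True},
    ]

    checks = {
        "4+ copies": len(copies) >= 4,
        "3+ point-in-time versions": len({c["version"] for c in copies}) >= 3,
        "2+ systems or mediums": len({c["system"] for c in copies}) >= 2,
        "1+ offsite and isolated copy": any(c["offsite"] and c["offline"] for c in copies),
    }

    for rule, passed in checks.items():
        print(f"{rule}: {'PASS' if passed else 'FAIL'}")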

Also, make sure that at least one of those copies is offsite and preferably offline. Likewise, it is crucial that whatever is protected, backed up, copied, cloned, snapshotted, checkpointed, consistency-pointed, or replicated is also usable. In addition to having multiple copies and versions, data protection should also occur at various altitudes or layers in the data infrastructure stack, from applications to databases, file systems to virtual machines or containers, among others.

What About Individual Data Protection at Home

For consumers and individuals, as well as small businesses, make sure that you are copying your essential data from your computer to some other storage medium (or several). For example, have a local copy on an external hard disk drive (HDD) or a solid-state device (SSD). Better yet, have a couple of copies from different time intervals, both on-site as well as off-site. Anything important you have stored on-site, including copies of photos, images, video, audio, records, spreadsheets, and other documents, should have extra copies, including off-site or in the cloud.

Likewise, anything you store in the cloud should have at least one other copy stored elsewhere. Don't be scared of the cloud; however, do your homework to be prepared. Just as having only one copy of your data on-site is risky, so is the other extreme of having only one copy in the cloud. Instead, put a copy in the cloud as well as keeping one on-site (or on-prem if you prefer) or elsewhere.
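
As a simple illustration of keeping dated local copies, the following Python sketch mirrors an important folder to an external drive under a date-stamped directory, giving you the different time intervals mentioned above; the source folder and drive letter are assumptions, and a second run pointed at a cloud-synced folder would cover the off-site copy:

    import shutil
    from datetime import date
    from pathlib import Path

    source = Path.home() / "Documents" / "Important"  # what to protect (assumed)
    external = Path("E:/Backups")                     # external HDD or SSD (assumed)

    # A separate dated folder per run preserves older points in time
    target = external / date.today().isoformat() / source.name
    shutil.copytree(source, target, dirs_exist_ok=True)
    print(f"Copied {source} to {target}")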

Don’t Forget Your Home Photos and Movies

Speaking of photos and other documents, for those that are not yet digitized or scanned into electronic copies, get them converted. Get in touch with a data protection and backup professional, as well as a photo (and digital asset) organizer. They can provide advice on best practices and techniques, as well as tools, technologies, and services to keep your digital data safe and secure. Some photo organizer professionals can also help with converting your old photos, movies, and videos to new digital formats. For example, get in touch with Holly Corbid at Capture Your Photos (www.captureyourphotos.com), a certified professional photo organizer and member of the Association of Professional Photo Organizers.

Where to learn more

Learn more about world backup day, recovery and data protection along with other related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means

March 31, World Backup Day, is more than an annual event for vendors to send out press releases on the importance of data protection. The focus should also expand to a world recovery day or something similar, as well as span 365 days a year. Now is a good time to review and verify that your existing data protection, including backup and recovery, works as expected. Keep in mind the World Backup Day reminder: don't be an April fool, test your data recovery before you need it.

Ok, nuff said, for now.

Cheers GS


Deliver Data Management Availability For Multi Cloud Environments Webinar

Join me on Thursday, March 14th at 11AM PT when I host a webinar on the topic Deliver Data Management Availability For Multi Cloud Environments. This is a free webinar (it will also be available for replay) sponsored by Veeam and produced by Redmond Magazine, where I will be joined by Dave Russell, Vice President of Enterprise Strategy at Veeam Software, for an interactive, engaging discussion.

Our discussion, including questions for attendees, will look at how IT landscapes are evolving, how hybrid and multi-cloud have become the new normal, and what can be done to protect, preserve, secure, and serve data spread across on-prem and different public clouds. Topics will include what to do today to prepare for tomorrow, minimizing the risk of hybrid environments, changing environments along with their requirements, and identifying strategies for sound data management and data protection, including backup, for hybrid environments.

Register for the Deliver Data Management Availability For Multi Cloud Environments Webinar here (Live Thursday March 14th 11AM PT).

Where to learn more

Learn more about cloud, multi-cloud, hybrid and data protection via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means

Remember to register here for the live March 14, 2019 event. Join me for an interactive discussion with Dave Russell as we discuss the trends, issues, challenges and what can be done to put a strategy in place for data protection and to Deliver Data Management Availability For Multi Cloud Environments.

Ok, nuff said, for now.

Cheers GS

Announcing My New Book Data Infrastructure Management Insight Strategies

My new book, Data Infrastructure Management – Insight and Strategies, published via Auerbach/CRC Press, is now available via CRC Press and Amazon.com, among other global venues.

My Fifth Solo Book Project – Data Infrastructure Management

Data Infrastructure Management – Insight and Strategies (e.g., the white book) is my fifth solo published book, in addition to several other collaborative works. Given its title, the focus of this new book is data infrastructures: the tools, technologies, techniques, and trends, including the hardware, software, services, people, and policies inside data centers that get defined to support business and application services delivery. The book (ISBN 9781138486423) is soft covered (electronic Kindle versions are also available) with 250 pages and over 100 figures, tables, tips, and examples. You can explore the contents via Google Books here.

Stack of my solo books with common theme around Data Infrastructure topics

Data Infrastructure Management – Insight and Strategies e.g. the White book (CRC Press 2019)

Some of My Other Books Include

Click on the following book images to learn more about each, as well as to order your copy.

Software Defined Data Infrastructure Essentials (SDDI) – Cloud, Converged, and Virtual Fundamental Server Storage I/O Tradecraft, e.g. the Blue book (CRC Press 2017, on the SNIA recommended reading list), covers software-defined, SDDC, SDDI, and hybrid topics, including serverless, containers, NVMe, SSD, flash, PMEM, and SCM, among others; available at Amazon.com among other global venues.

Cloud and Virtual Data Storage Networking (CVDSN) – Your Journey to Efficient and Effective Information Services, e.g. the Yellow or Gold book (CRC Press 2011, on the Intel recommended reading list), available at Amazon.com among other global venues.

 

The Green and Virtual Data Center (TGVDC) – Enabling Efficient, Effective and Productive Data Infrastructures, e.g. the Green book (CRC Press 2009, on the Intel recommended reading list), available at Amazon.com among other venues.

Resilient Storage Networks (RSN) – Designing Flexible, Scalable Data Infrastructures (Elsevier 2004), e.g. the Red book, is SNIA Education endorsed reading, available at Amazon.com among other venues. I have some free copies of RSN for anybody willing to pay shipping and handling; send me a note and we will go from there.

Where to learn more

Learn more via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means

Today more than ever, there tends to be a focus on the date something was created or published, as there is a lot of temporal content with a short shelf life. This means a lot of content, including books, is being created that is short-lived, usually focused on a particular technology, tool, or trend with a lifespan or attention focus of a couple of years at best.

On the other hand, there is also content being created today that combines new and emerging technologies, tools, and trends with time-tested strategies, techniques, and processes, some of whose names or buzzwords will evolve. My books fit into the latter category, combining current as well as emerging technologies, tools, trends, and techniques that support a longer shelf life; just insert your new favorite buzzword, buzz trend, or buzz topic as needed.


You will also notice, looking at the stack of books, that Data Infrastructure Management – Insight and Strategies is a smaller, soft-covered book compared to others in my collection. The reason is that this new book can be a quick read to address what you need, as well as a companion to others in the stack, depending on your focus or requirements.

A common question I get, having written several books, not to mention thousands of articles, tips, reports, blogs, columns, white papers, videos, and webinars among other content, is: what's next? Good question; see what's next, as well as check out some other things I'm doing over at www.picturesoverstillwater.com, where I'm generating big data that gets stored and processed in various data infrastructures, including cloud ;) .

Will there be another book, and if so, on or about what? As mentioned, there are some projects I'm exploring; whether they get finished or take different directions, wait and see what's next.

How do I find the time to create these books, and how long does it take? The time required varies, as does the amount of work and whatever else I'm doing. I try to leverage the book (and other content creation projects) along with other things I'm doing to maximize time. Some book projects have been very fast, a year or less. Some take longer, such as Software Defined Data Infrastructure Essentials, as it is a big book with lots of material that will have a long shelf life.

Do I write and illustrate the books, or do I have somebody do that for me? For my books, I do the writing and illustrating (drawings, figures, images) myself, along with some of the layout, relying on external copy editors and production folks.

What do I recommend or advise those wanting to write a book? Understand that publishing a book is a project: there is the actual writing, editing, reviews, artwork, research, and labs or other supporting items as book companions. Also understand why you are writing a book: for fame, fortune, acclaim, to share with others, or for some other reason. I also recommend, before you write your entire book, talking with others who have been published to test the waters and get feedback. You might find it easier to shop an extended outline than a completed manuscript, unless you are writing a novel or similar.

Want to learn more about writing a book (or other content), get feedback, or ask other questions? Drop me a note and I will do what I can to help out.


There is an old saying: publish or perish. Well, I just published my fifth solo book, Data Infrastructure Management – Insight and Strategies, which you can buy at Amazon.com among other venues.

Ok, nuff said, for now.

Cheers GS

Fall 2018 Dutch Data Infrastructure Industry Trends Decision Making Seminars

There is still time to register for the fall 2018 Dutch data infrastructure industry trends and decision-making seminars on November 27th and 28th. The workshops are being organized by Brouwer Storage Consultancy of Holland and will be held in Nijkerk.

On Tuesday, November 27th, I will present an advanced education workshop seminar covering data infrastructure industry trends and technology updates. On Wednesday, November 28th, there will be a deeper-dive workgroup seminar session addressing data infrastructure related strategy, planning, and decision-making.


Data Infrastructures Industry Trends November 27

What's new, what's the buzz, and what you need to know about: from speeds and feeds, slots and watts, to who's doing what, and from what's interesting to what's relevant for your environment.

This one-day seminar is a new and improved version of the popular speeds-and-feeds session, where we look at what's new and emerging in the industry as well as applicable to your environment. You will be updated on the latest trends and emerging data infrastructure technologies to support digital transformation, little and big data analytics, AI/ML/DL, GDPR, data protection, edge/fog compute, and IoT, among others; from legacy to software-defined cloud, container, converged, and virtual to composable. The seminar is a mix of presentation and engaging discussion as we look into the details of favorite or new technologies, for those who are old-school, new-school, or future-school.

Part I – Industry Trends, Applications, and Workload
Part II – Server Compute, Memory, I/O, hardware and software
Part III – Storage and Data protection for on-prem and cloud
Part IV – Bringing it all together, managing and decision making

Topics to be covered include among others:

  • What these trends, tools, and technologies mean for environments of various sizes.
  • Tips on evaluating legacy and startup or newer vendors as well as technologies.
  • Updates on vendors, services, technologies, products you may or may not have heard of.
  • Cloud (public/private/multi-cloud/hybrid) compute, storage and management.
  • Containers (including Docker, Windows, Kubernetes, FaaS, serverless, Lambda).
  • Converged and hyper-converged; Gen-Z and composable; NVMe and NVMeoF.
  • Persistent Memory (PMEM), Storage Class Memory (SCM), 3D XPoint, NAND Flash SSD.
  • Legacy vs. software-defined, appliances, storage systems, block, NAS file, object, table.
  • Bulk cloud data migration appliances, storage for the edge, file sync and share.
  • Role and importance of context (what’s applicable, what something means).
  • Who's doing what, and what to look for today as well as for the future.

This seminar is for those involved with ICT/IT servers, storage, I/O networking, and associated management activities, including data protection, across legacy as well as software-defined cloud, containers, converged, hyper-converged, and virtualization. It is for professionals who manage, architect, or are otherwise involved with data infrastructure related strategy and acquisitions.

Data Infrastructures Deep Dive Decision Making November 28

Enabling informed strategy and decision-making: moving from what the tools, trends, and technologies are to what to use when, where, why, and how, along with strategy, planning, decision-making, and ongoing management.

If the answer is cloud, converged, container, composable, edge, fog, digital transformation, on-prem, hybrid, or software-defined, what were (or are) the questions to plan as well as prepare for deployment today, along with in the future? This workshop-format seminar provides answers to fundamental questions, with essential insight into software-defined data infrastructures (SDDI) and software-defined data centers (SDDC). For ICT/IT professionals (architects, strategists, administrators, managers) currently or planning on being involved with servers, storage, I/O networking, hardware, software, converged, containers, cloud, backup/data protection, and associated topics, this seminar is for you.

Clouds, converged, and containers will be a primary focus, along with related themes and topics that you need to know more about. Don't be scared of clouds; be prepared, and this includes on-prem, public, hybrid, and multi-cloud. As part of our deeper-dive decision-making strategy focus, we look at cloud cost considerations, including whether you are paying too much or not enough (e.g., are you depriving your applications of performance to save money?). We will explore various decision-making and strategy topics spanning AWS, Microsoft Azure, Azure Stack, Windows and Hyper-V, VMware (including on AWS), and OpenStack (is it still open for business?).

Additional topics, trends, themes include:

  • Everything is not the same across cloud services, converged, or containers.
  • Different environments have various data infrastructure resource needs.
  • How to balance legacy on-prem application needs with emerging technology options.
  • Different comparison criteria for smaller environments and remote offices vs. larger enterprises.
  • Do-it-yourself (DiY) vs. turnkey software vs. bundled tin-wrapped software solutions.
  • Strategy, planning, decision-making, and ongoing management.

How To Register For Seminar Workshops

Learn more about the fall 2018 Dutch Server StorageIO Tuesday data infrastructure trends workshop seminar here (PDF), and the Wednesday deeper-dive decision-making workshop session here (PDF).

To register and obtain more information, contact event organizer Brouwer Storage Consultancy at +31-33-246-6825 or +31-652-601-309, or info at brouwerconsultancy.com.

Where to learn more

Learn more about Data Infrastructure and related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means

Everything is not the same across different organizations, environments, application workloads, data, technologies, tools, and trends. These two one-day interactive workshop seminars provide timely insight into what's going on in the data infrastructure industry, along with common IT organization challenges and how to address them. Moving from the what to what to use when, where, why, and how, along with alternatives, gaining the insight and awareness to avoid flying blind enables effective strategy, decision-making, planning, and ongoing management. Learn more and sign up for the Fall 2018 Dutch Data Infrastructure Industry Trends Decision Making Seminars; see you in Nijkerk.

Ok, nuff said, for now.

Cheers GS

Microsoft Azure Data Box Disk Test Drive Impressions #blogtobertech

Data Box Disk Test Drive Impressions is the last of a four-post series looking at Microsoft Azure Data Box. View Part 1 Microsoft announced Azure Data Box updates, Part 2 Microsoft Azure Data Box Family, and Part 3 Microsoft Azure Data Box Disk Test Drive Review.

Overall, I liked the Azure Data Box experience, along with the range of options to select the best-fit solution for my needs. A common trend among the major cloud service providers such as AWS, Microsoft Azure, and Google is recognizing that a one-size-fits-all solution does not meet different customer needs.

The only things that I did not like, and would like to see improved with Azure Data Box, are two items: one at the beginning, the other at the end of the process. Granted, with Data Box Disks still in preview, there is time for those items to be addressed before general availability, and I have passed the feedback on to Microsoft.

At the beginning of the process, things are pretty straightforward, with good tools along with resources to help you navigate which type of Data Box to order, how to order it, and how to specify your account details and other information.

What I did not like with the up-front experience was the time delay, after the quick ordering and notification process, of a week or more until being notified when a Data Box would be arriving. Granted, I was not in a rush, and Microsoft did indicate that it could take about ten days to be informed of availability; still, this is something that should happen quickly as resources become available. Another option would be for Microsoft to add a priority or low-priority ordering option in the future.

The other experience that I did not like was at the very end: unless it got stuck in an email spam trap (I checked and could not find it), the final notification could be better. Not only a final email note saying your data is copied, but also a reminder of where your block or page blobs were copied to (e.g., what you set up when ordering).

Monitoring the progress of the process, I knew when the Data Box drives arrived at Microsoft and when the copy started and completed, including error status. Having gotten used to receiving update notifications from Azure, not receiving one at the end was noticeable; a closing note saying congratulations, your data has been copied, check here for any errors or other info, along with a reminder of where the data was copied to, would be useful.

Likewise, a follow-up note from Microsoft saying that the Azure Data Box drives used as part of the transfer were securely erased along with a certificate of digital destruction would be useful for compliance purposes.

As mentioned above, overall I found the Data Box Disk experience very positive and a great way to move bulk data faster than what could be done with available networks. My next step is to migrate some of the transferred data to cold long-term archive storage and some to Azure Files, with the rest staying in block blobs. There are also a couple of VHDs and VHDXs that will be moved and attached to VMs for additional testing.
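
As a sketch of that archive step, using the azure-storage-blob Python package with a placeholder connection string and made-up container and blob names, re-tiering an already transferred block blob to the Archive tier looks roughly like this:

    from azure.storage.blob import BlobServiceClient

    # Placeholder connection string; use your own storage account credentials
    conn_str = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>"
    service = BlobServiceClient.from_connection_string(conn_str)

    # Move one transferred block blob to cold long-term (Archive) storage
    blob = service.get_blob_client(container="databox-import", blob="projects/archive1.zip")
    blob.set_standard_blob_tier("Archive")  # archived blobs must be rehydrated before reading
    print("Archive tier change requested")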

Where to learn more

Learn more about Microsoft Azure Data Box, Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means

For those who need to move large amounts of data, including structured, unstructured, semi-structured, little or big data, to a cloud resource, solutions such as Azure Data Box may be in your future. Likewise, for those looking to support remote and edge workloads, from AI, ML, and DL inferencing to large-scale data pre-processing, data collection and acquisition, video, telemetry, and IoT among others, Data Box type solutions may be in your future. Overall, my impressions of Microsoft Azure Data Box Disk were favorable, and I was able to address a project that had been on my to-do list for some time.

Ok, nuff said, for now.

Cheers GS

Microsoft Azure Data Box Disk Test Drive Review #blogtobertech

Microsoft Azure Data Box Test Drive is part three of a four-part series looking at Data Box. View Part 1 Microsoft announced Azure Data Box updates, Part 2 Microsoft Azure Data Box Family, and Part 4 Microsoft Azure Data Box Disk Impressions.

Getting Started

The workflow for using Data Box involves selecting the type of Data Box to use via the Microsoft Azure portal (here), or the Data Box Family page (here).

Getting Started via the Microsoft Azure Data Box Family Page, image via Microsoft.com

The first step of ordering a Data Box is to specify your Azure subscription, the type of operation (e.g., import data into Azure, or export out), the source country/region, and the destination Azure region.

Selecting Data Box from Azure Portal

The next step is to determine the type of Data Box; in this test I chose 40 TB Data Box Disks. Make a note of the fees to avoid any surprises.

Selecting Data Box Disks (40 TB) From Azure Portal

After selecting the type of Data Box, fill in the storage account information using an existing resource, or create new ones as needed. Make a note of these selections, as you will need them after the copy is done; this is where your data will be located.

Specify Azure Storage Account Information Where Data Will Transfer To

Once the order is placed, an email is received confirming the order and, this being a preview, indicating that it might take about ten days to hear a status update on availability of the devices.

Email notification received after the order is placed

After about ten days, I was contacted by Microsoft via email (not shown) to confirm the amount of data to be copied and determine how many disks would be needed. Once this was confirmed with Microsoft, a status update was noted on the Azure dashboard.

Azure Data Box Dashboard Status after order placed

After a few days, a box arrived with the Data Box disks, cables, and return shipping labels enclosed. I also received an email notification indicating the disks had arrived.

Email notice Data Box has arrived on site (on-prem if you prefer)

The following is the physical box that contains the Data Box disks that I received from Microsoft.

The shipping box with Data Box Disks arrives

Once you get the Data Box, go to the Azure portal for Data Box and access the tools. There are tools and commands for Windows as well as Linux that are needed for accessing and unlocking the disks. This is also where you obtain device IDs. You will also need the access key phrase you specified in an earlier step as part of placing the order.

Access Data Box Software Tools and Keys from Azure Portal

Inside the shipping box was a pair of 8 TB SATA SSDs, SATA to USB cables, along with return shipping labels.

Contents inside the shipping box, two Data Box 8 TB disks

From the Azure portal, access the device IDs along with the passphrase needed for unlocking the Data Box disks. You will also want to download the tools as well as follow the other instructions on the portal for accessing the disks.

Azure Data Box tools, device IDs and Keys

The Windows system I used for testing is a virtual machine hosted on a VMware vSphere ESXi 6.7 host. After physically attaching the Data Box Disks to the VM host, a virtual or software attachment was done by adding USB devices to the VM.

Virtual Attach of Data Box Disks to VMware vSphere ESXi host and guest VM

Once the VM had the Data Box disks attached and mapped, they appeared to Windows. After downloading the Data Box software tools and unlocking the devices, they were ready to copy data to. Note that the disks appear as regular Windows devices once unlocked; simply using BitLocker does not unlock the drives, you need to use the Data Box tools. Speaking of Windows disks, there are a couple of folders on the Data Box disk as shipped, including one each for Block Blob and Page Blob, along with verification items.

View of Data Box Disks (8 TB each) after attaching to Windows system

Note that you are given several days as part of the base transfer cost; extra days incur additional fees. Since I had a few extra days, I used some of the excess capacity to do some staging and reorganization of data before the actual copy.

Data copy is done using your choice of tools, for example Robocopy among many others; I used a combination of Robocopy and Retrospect. Also note that most data should be placed in the folder or directory structure of your choice within the Block Blob folder; Page Blobs are for VHDXs to be used with virtual machines on Azure. After spending a few days copying the data I wanted to move, along with performing verification, it was time to pack up the devices.
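
For example, a Robocopy pass from a staging area into the Block Blob folder might look like the following; the drive letters and folder names are assumptions for illustration, so adjust them for how the Data Box disk mounts on your system:

    robocopy D:\StagedData E:\BlockBlob\StagedData /E /COPY:DAT /R:2 /W:5 /MT:16 /LOG:databox_copy.log

The /E switch carries the full folder structure across, /MT:16 runs multiple copy threads to keep the USB-attached SSDs busy, and the log file gives you something to review during the verification pass.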

As a reminder, blobs are what Microsoft Azure refers to instead of objects (e.g., object storage). Also remember that Azure blobs include block, page (512-byte aligned, for VHDX), and append (similar to other vendors' object storage). In addition to blobs, Microsoft Azure supports file (SMB and NFS) access, along with table (database) and queue storage services.
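
To make the blob-type distinction concrete, here is a minimal sketch using the azure-storage-blob Python package; the connection string, container, and file names are placeholders I made up, and note that the blob type is chosen at upload time and cannot be changed afterward:

    from azure.storage.blob import BlobServiceClient

    conn_str = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>"
    container = BlobServiceClient.from_connection_string(conn_str).get_container_client("mydata")

    # Block blob: general-purpose files, documents, backups (the default type)
    with open("report.pdf", "rb") as data:
        container.upload_blob(name="docs/report.pdf", data=data)

    # Page blob: random access, 512-byte aligned, used for VHD/VHDX disk images
    with open("server1.vhdx", "rb") as data:
        container.upload_blob(name="vhds/server1.vhdx", data=data, blob_type="PageBlob")

    # Append blob: optimized for append-only data such as logs
    container.upload_blob(name="logs/app.log", data=b"startup\n", blob_type="AppendBlob")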

The following shows the return label attached to the shipping box that contains the Data Box disks and cables. I also included a copy of the shipping label inside the box, just in case something happened during shipment. Once prepared for delivery, I took the box to a local UPS store, where I received a shipment receipt (not shown). Later that day I also received an email from Microsoft indicating the shipment was in progress.

Data Box disks packaged with return receipt (was in the box)

The Azure portal shows the status of the Data Box shipment being sent to Microsoft, along with a follow-up email notification.

Azure Data Box portal status

Email notification of Data Box on the way to Microsoft.

Notice data box is on the way to Azure

After a few days, checking the Azure portal showed the Data Box had arrived at Microsoft, with copy operations underway. Remember, the storage account you specified back in the early steps is where you will look for your data. This is something I think Microsoft can improve on by providing a link, or some reminder in the status of where the data is being copied to. Likewise, a copy-completion email notice would be handy after getting used to the other alerts earlier in the process.

Azure Data Box portal showing disk copy operation status

Looking in the Blob storage resources of the Azure storage account specified during the ordering process, the contents of the Data Box Disks can be found.

Contents of Data Box disks copied into specified Azure Blobs and storage account

The following shows folders that I had copied from on-prem systems to the Data Box, now located in the proper Azure block blobs. Not shown are the page blobs where I moved some VHDXs.

Mission accomplished, data folders now stored in Azure block blobs
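
A quick scripted way to confirm what landed in the storage account, again using the azure-storage-blob Python package with hypothetical names, is to enumerate the container and tally sizes:

    from azure.storage.blob import BlobServiceClient

    conn_str = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>"
    container = BlobServiceClient.from_connection_string(conn_str).get_container_client("databox-import")

    # List everything under one of the copied folders and total the bytes
    total_bytes = 0
    for blob in container.list_blobs(name_starts_with="projects/"):
        total_bytes += blob.size
        print(blob.name, blob.size)
    print(f"{total_bytes} bytes found under projects/")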

Where to learn more

Learn more about Microsoft Azure Data Box, Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means

Overall, the test drive of the Azure Data Box Disk solution was positive, and I look forward to trying out some of the other Data Box solutions, both offline and online options, in the future. Continue reading Part 4 Microsoft Azure Data Box Disk Impressions, the conclusion of this series.

Ok, nuff said, for now.

Cheers GS

Microsoft Azure Data Box Family #blogtobertech

Microsoft Azure Data Box Family is part two of a four-part series looking at Data Box. View Part 1 Microsoft announced Azure Data Box updates, Part 3 Microsoft Azure Data Box Disk Test Drive Review, Part 4 Microsoft Azure Data Box Disk Impressions.

Microsoft Azure Data Box Overview

Microsoft has several Data Box solutions available or in preview to meet various customer needs. These include both online and offline solutions comprising hardware (except Data Box Gateway), software tools, and cloud services.

Data Box Online

Microsoft has two online Data Box offerings that provide real-time access to Azure cloud storage resources from on-prem locations, including remote and edge sites. The online Data Box solutions are Edge and Gateway, both with local on-prem storage.


Data Box Edge image via Microsoft.com

Data Box Edge (Preview)

Currently in preview, Data Box Edge is a 1U appliance that combines hardware and software resources for deployment on-prem at edge or remote locations. Data Box Edge places converged compute and storage resources locally as an appliance, along with connectivity to Azure cloud-based resources.

Intended workloads and applications for Data Box Edge include remote AI, ML, and DL inferencing; data processing or pre-processing before sending to the Azure cloud; and functioning as an edge compute, data protection, and data transfer platform (e.g., a cloud storage gateway) with local compute. Data Box Edge is similar in functionality and focus to other cloud service provider solutions such as AWS Snowball Edge (SBE). Management tools include the Data Box Edge resource in the Azure portal for management from a web UI to create and manage resources, devices, and shares.

Other Data Box Edge attributes include:

  • Supports Azure Blob or Files via SMB and NFS storage access protocols
  • Dual Intel Xeon processors each with 10 CPU cores, 64GB RAM
  • 2 x 10 Gbps SFP+ copper cables, 2 x 1 Gbps RJ45 cables
  • 8 NVMe SSD (1.6 TB each), no HA, 12.8 TB total raw cap
  • 2 x 1 GbE (one for management, one for user access)
  • 2 x 25 GbE (can operate at 10 GbE) and 2 x 25 GbE ports
  • Local web UI for management and configuration

Data Box Gateway (Preview)

Also in preview, Data Box Gateway is a virtual machine (VM) based, software-defined appliance that runs on VMware vSphere (ESXi) or Microsoft Hyper-V hypervisors. The functionality of Data Box Gateway is that of a cloud storage gateway, providing access to Azure Blob (page and block) or Files (NAS) via SMB or NFS protocols. Learn more about both Data Box Edge and Data Box Gateway here, including pricing here.
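
Since the gateway presents ordinary NAS protocols, access from a client is a standard share mapping or mount; for example (the gateway host and share names below are hypothetical):

    Windows (SMB):  net use Z: \\databoxgw01\engineering
    Linux (NFS):    mount -t nfs databoxgw01:/engineering /mnt/engineering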

Data Box Offline Solutions

Microsoft has several offline Data Box offerings, including previously available models and new ones in preview. Offline Data Box solutions enable large amounts of data to be moved from on-prem primary, remote, and edge locations to Azure cloud storage resources. Bulk data movement operations can be one-time or recurring, in support of big data migrations for energy, research, media and entertainment, and other large volumes of data.

Other bulk movement use cases include archive, backup, BC/DR, and virtual machine and application migration, among others. Use Data Box offline solutions when large amounts of data need to be moved from on-prem to the Azure cloud faster than available networks can support in a timely manner.

Offline Data Box solutions include:

  • Data Box Heavy (Preview) 1 PB Storage, 800 TB usable
  • Data Box 100 TB (80 TB usable)
  • Data Box Disk (Preview) 40 TB (35 TB Usable)


Data Box Heavy 1 PB (Preview) image via Microsoft.com

Data Box Heavy 1 PB (Preview)

  • Appliance with up to 800 TB usable capacity per order
  • One system per order
  • Supports Azure Blob or Files
  • Copy data to up to 10 storage accounts
  • 1 x 1/10 Gbps RJ45 connector, 4 x 40 Gbps QSFP+ connectors
  • AES 256-bit encryption
  • Copies data using NAS SMB and NFS protocols


Data Box 100TB image via Microsoft.com

100 TB Data Box

  • An appliance that supports 80 TB usable storage capacity
  • Supports Azure Blob or Files
  • Copies data to up to 10 storage accounts
  • 1 x 1/10 GbE RJ45 connector
  • 2 x 10 GbE SFP+ connector
  • AES 256-bit encryption
  • Storage access and copy via SMB and NFS NAS protocols

Case of Data Box Disks image via Microsoft.com

Data Box Disk 40 TB (Preview)

  • Up to 35 TB usable capacity per order
  • Up to 5 SSDs per order
  • This is what I tested (2 x 8 TB)
  • Supports Azure Blob storage (Block and Page)
  • Copies data to a single storage account
  • USB/SATA II, III server I/O interface (comes with SATA to USB connector cables)
  • AES 128-bit encryption
  • Copy data with standard tools

Where to learn more

Learn more about Microsoft Azure Data Box, Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means

Which Microsoft Azure Data Box is the best? That depends on your needs and requirements.

Microsoft, along with other major cloud service providers, continues to evolve its data migration services. Realizing that customers who need, want, or have to get data to the cloud also need barriers removed, solutions such as Azure Data Box are a step toward eliminating those barriers while addressing cloud concerns. Continue reading Part 3 Microsoft Azure Data Box Disk Test Drive Review and Part 4 Microsoft Azure Data Box Disk Impressions as part of the Microsoft Azure Data Box Family series.

Ok, nuff said, for now.

Cheers GS

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.