Microsoft Azure Elastic SAN from Cloud to On-Prem

What is Azure Elastic SAN

Azure Elastic SAN (AES) is a new (now generally available) cloud-native Azure storage service that provides scalable, resilient, cost-effective block storage with easy management, rapid provisioning, and high performance. AES (figure 1) supports many workloads and computing resources. Workloads that benefit from AES include tier 1 and tier 2 workloads such as mission-critical applications, databases, and VDI, among others that traditionally rely on consolidated Storage Area Network (SAN) shared storage.

Compute resources that can use AES include bare metal (BM) physical machines (PM), virtual machines (VM), and containers, among others, all using iSCSI for access. AES is accessible by computing resources and services within the Azure cloud in various regions (check the Azure website for specific region availability) and from on-prem core and edge locations using iSCSI. The AES management experience and value proposition are similar to traditional hardware or software-defined shared SAN storage, combined with Azure cloud-based management capabilities.

Figure 1 General Concept and Use of Azure Elastic SAN (AES)

While Microsoft Azure describes AES as a cloud-native storage solution, that does not mean AES is only for containers and other cloud-native apps or DevOps. Rather, AES has been built for and is native to the cloud (e.g., software-defined) and can be accessed by various compute and other resources (e.g., VMs, containers, AKS) using iSCSI.

How Azure Elastic SAN differs from other Azure Storage

AES differs from traditional Azure block storage (e.g., Azure Disks) in that the storage is independent of the host compute server (e.g., BM, PM, VM, containers). With AES, similar to a conventional software-defined or hardware-based shared SAN solution, storage is disaggregated from host servers for sharing and management, using iSCSI for connectivity. By comparison, traditional Azure VM-based storage is typically associated with a given virtual machine in a DAS (Direct Attached Storage) type configuration. Likewise, as in conventional on-prem environments, there can be a mix of DAS and SAN, including some host servers that leverage both.

AES supports Azure VM, Azure Kubernetes Service (AKS), cloud-native, edge, and on-prem computing (BM, VM, etc.) via iSCSI. Support for Azure VMware Solution (AVS) is in preview; check the Microsoft Azure website for updates and new feature functionality enhancements.

Does this mean everything is moving to AES? Similar to traditional SANs, there are roles and needs for various storage options, including DAS, shared block, file, and object, among other storage offerings. Likewise, Microsoft and Azure have expanded their storage offerings to include AES, DAS (Azure Disks, including Ultra, Premium, and Standard, among other options), append, block, and page blobs (objects), and files, including Azure File Sync, tables, and Data Box, among other storage services.

Azure Elastic SAN Feature Highlights

AES feature highlights include, among others:

    • Management via Azure Portal and associated tools
    • Azure cloud-based shared scalable block storage
    • Scalable capacity, low latency, and high performance (IOPS and throughput)
    • Space capacity optimized without the need for data reduction
    • Accessible from within the Azure cloud and from on-prem using iSCSI
    • Supports Azure compute (VMs, containers/AKS, Azure VMware Solution)
    • On-prem access via iSCSI from PM/BM, VM, and containers
    • Variable number of volumes and volume size per volume group
    • Flexible, easy-to-use Azure cloud-based management
    • Encryption and network private endpoint security
    • Local redundant storage (LRS) and zone redundant storage (ZRS) resiliency
    • Volume snapshots and cluster support

Who is Azure Elastic SAN for

AES is for those who need cost-effective, shared, resilient, high-capacity, high-performance (IOPS, bandwidth), low-latency block storage within Azure or accessed from on-prem. Others who can benefit from AES include those who need shared block storage for clustered app workloads, server and storage consolidation, and hybrid and migration scenarios. It is also worth considering for those familiar with traditional hardware and software-defined SANs as a way to facilitate hybrid and migration strategies.

How Azure Elastic SAN works

Azure Elastic SAN is a software-defined (cloud-native if you prefer) block storage offering that presents a virtual SAN accessible within the Azure cloud and from on-prem core and edge locations, currently via iSCSI. Using iSCSI, Azure VMs, clusters, containers, and Azure VMware Solution, among other compute resources and services, as well as on-prem BM/PM, VMs, and containers, can access AES storage volumes.

From the Azure Portal or associated tools (Azure CLI or PowerShell), create an AES SAN, giving it a 3 to 24-character name, and specify storage capacity (base units with performance, plus any additional space capacity). Next, create a volume group, assigning it to a specific subscription and resource group (new or existing), then specify which Azure region to use, the type of redundancy (LRS or ZRS), and the zone to use. LRS provides local redundancy, while ZRS provides enhanced zone resiliency with high-speed synchronous replication, without setting up multiple SAN systems and their associated replication and networking configurations (e.g., Azure takes care of that for you within the service).

The next step is to create volumes by specifying the volume name, the volume group to use, and the volume size in GB, which determines the volume's maximum IOPS and bandwidth. Once you have made your AES volume group and volumes, you can create private endpoints, change security and access controls, and access the volumes from Azure or on-prem resources using iSCSI. Note that AES currently needs to be LRS (not ZRS) for clustered shared storage, and that key management includes using your own keys with Azure Key Vault.
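
To make that flow concrete, here is a minimal Azure PowerShell sketch of the same steps. It assumes the Az.ElasticSan module; the resource group, names, region, and sizes are illustrative placeholders, and parameter names may vary slightly between module versions, so treat this as a sketch rather than a copy-paste recipe.

    # Create the Elastic SAN (1 TB base unit, LRS redundancy); names and region are placeholders
    New-AzElasticSan -ResourceGroupName "rg-demo" -Name "aes-demo" -Location "eastus" `
        -BaseSizeTiB 1 -ExtendedCapacitySizeTiB 0 -SkuName "Premium_LRS"

    # Create a volume group within the Elastic SAN
    New-AzElasticSanVolumeGroup -ResourceGroupName "rg-demo" -ElasticSanName "aes-demo" -Name "vg1"

    # Create a volume; per-volume IOPS and bandwidth derive from its size (see Performance below)
    New-AzElasticSanVolume -ResourceGroupName "rg-demo" -ElasticSanName "aes-demo" `
        -VolumeGroupName "vg1" -Name "vol1" -SizeGiB 107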

Using Azure Elastic SAN

Using AES is straightforward, and there are good, easy-to-follow guides from Microsoft Azure, including the documentation and deployment guides linked in the resources section below.

The following images show what AES looks like from the Azure Portal, as well as from an Azure Windows Server VM and an on-prem physical machine (e.g., a Windows 10 laptop).

Figure 2 AES Azure Portal Big Picture

Figure 3 AES Volume Groups Portal View

Figure 4 AES Volumes Portal View

Figure 5 AES Volume Snapshot Views

Figure 6 AES Connected Volume Portal View

Figure 7 AES Volume iSCSI view from on-prem Windows Laptop

Figure 8 AES iSCSI Volume attached to Azure VM
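
For the iSCSI connections shown in Figures 7 and 8, the Windows-side attach looks roughly like the following PowerShell sketch. The target portal address is a placeholder; the real address and IQN come from the volume's connect details in the Azure Portal, and for production use follow Microsoft's published connect scripts (including multipath setup).

    # Ensure the Microsoft iSCSI initiator service is running and starts automatically
    Start-Service msiscsi
    Set-Service msiscsi -StartupType Automatic

    # Register the AES target portal (placeholder address; use your volume's connect details)
    New-IscsiTargetPortal -TargetPortalAddress "aes-demo-target.example.com"

    # Discover and connect to the target(s), persisting the connection across reboots
    Get-IscsiTarget | ForEach-Object {
        Connect-IscsiTarget -NodeAddress $_.NodeAddress -IsPersistent $true
    }

    # The connected volume then appears as a raw disk to initialize and format
    Get-Disk | Where-Object PartitionStyle -eq "RAW" |
        Initialize-Disk -PassThru |
        New-Partition -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem NTFS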

Azure Elastic SAN Cost Pricing

The cost of AES is elastic, depending on whether you scale capacity along with performance (e.g., base units) or add more space capacity only. If you need more performance, add base unit capacity, which increases IOPS, bandwidth, and space. In other words, base capacity includes storage space and performance, which you can grow in various increments. Remember that AES storage resources get shared across volumes within a volume group.

Azure Elastic SAN is billed hourly based on a monthly per-capacity base unit rate, with a minimum of 1TB provisioned capacity and minimum performance (e.g., 5,000 IOPS, 200 MBps bandwidth). The base unit rate varies by region and type of redundancy, aka resiliency. For example, at the time of this writing, looking at US East, the locally redundant storage (LRS) base unit rate is 1TB with 5,000 IOPS and 200 MBps bandwidth, costing $81.92 per unit per month.

The above example breaks down to a rate of $0.08 per GB per month, or $0.000110 per GB per hour (assuming 730 hours per month). As an example of simply adding storage capacity without increasing base units (e.g., performance), US East capacity-only storage is $61.44 per TB per month. That works out to $0.06 per GB per month (no additional provisioned IOPS or bandwidth), or about $0.000083 per GB per hour.
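
As a quick sanity check of those per-GB figures, the arithmetic (here in PowerShell, using the US East rates cited above; rates change, so verify against current pricing) works out as follows:

    $hours = 730                      # assumed hours per month
    $baseUnit = 81.92                 # 1 TB base unit (5,000 IOPS, 200 MBps) per month
    $baseUnit / 1024                  # = 0.08 per GB per month
    $baseUnit / 1024 / $hours         # = ~0.000110 per GB per hour

    $capacityOnly = 61.44             # 1 TB capacity-only increment per month
    $capacityOnly / 1024              # = 0.06 per GB per month
    $capacityOnly / 1024 / $hours     # = ~0.000082 (about the $0.000083 cited, depending on rounding)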

Note that there are extra fees for Zone Redundant Storage (ZRS). Learn more about Azure Elastic SAN pricing here, as well as via a cost calculator here.

Azure Elastic SAN Performance

Performance for Azure Elastic SAN includes IOPS, bandwidth, and latency. AES IOPS increase in increments of 5,000 per base TB. Thus, an AES with a base of 10TB would have 50,000 IOPS distributed (shared) across all of its volumes (e.g., individual volumes are not restricted to a fixed allocation). For example, if the base capacity is increased from 10TB to 20TB, then the IOPS would increase from 50,000 to 100,000.

On the other hand, if the base capacity (10TB) is not increased, only the storage capacity would increase from 10TB to 20TB, and the AES would have more space but still only 50,000 IOPS. AES bandwidth throughput increases by 200 MBps per base TB. For example, a 5TB AES would have 5 x 200 MBps (1,000 MBps) of throughput bandwidth shared across the volume group's volumes.

Note that while performance gets shared across volumes, individual volume performance is determined by its capacity, with a maximum of 80,000 IOPS and up to 1,024 MBps per volume. Thus, to reach 80,000 IOPS and 1,024 MBps, an AES volume would have to be at least 107GB in space capacity. Also, note that the aggregate performance of all volumes cannot exceed the total of the AES. If you need more performance, then create another AES.
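
The scaling described above reduces to simple arithmetic, sketched below. The 5,000 IOPS and 200 MBps per base TB figures come from the text above; the roughly 750 IOPS per provisioned GiB per-volume rate is an inference from the 80,000 IOPS at 107 GB numbers, so verify it against current Azure documentation.

    $baseTB = 10
    $sanIOPS = $baseTB * 5000        # 50,000 IOPS shared across the SAN's volumes
    $sanMBps = $baseTB * 200         # 2,000 MBps shared across the SAN's volumes

    # Per-volume limits scale with size up to 80,000 IOPS and 1,024 MBps;
    # 80,000 / 107 GB implies ~750 IOPS per GiB (assumption), hence the minimum size to hit the cap:
    $minVolGiB = [math]::Ceiling(80000 / 750)   # = 107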

Will all VMs or compute resources see performance improvements with AES? Traditional Azure Disks associated with VMs have per-disk performance resource limits, including IOPS and bandwidth. Likewise, VMs have storage limits based on their instance type and size, including the number of disks (HDD or SSD) and their performance (IOPS and bandwidth), as well as CPU count and memory.

What this means is that an AES volume could have more performance than a given VM is limited to. Refer to your VM instance sizing and configuration to determine its IOPS and bandwidth limits; if needed, explore changing the size of your VM instance to leverage the performance of Azure Elastic SAN storage.
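
One hedged way to look up a VM size's advertised disk limits, rather than clicking through the docs, is to query the compute resource SKUs with the Az.Compute module, as sketched below. The size name is a placeholder, and the exact capability names (e.g., UncachedDiskIOPS) can vary by SKU.

    # Inspect the storage-related capabilities advertised for a given VM size
    $sku = Get-AzComputeResourceSku |
        Where-Object { $_.ResourceType -eq "virtualMachines" -and $_.Name -eq "Standard_D8s_v3" } |
        Select-Object -First 1
    $sku.Capabilities | Where-Object { $_.Name -like "*Disk*" }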

Additional resources and where to learn more

The following links are additional resources to learn about Microsoft Azure Elastic SAN and related data infrastructures and tradecraft topics.

Azure AKS Storage Concepts 
Azure Elastic SAN (AES) Documentation and Deployment Guides
Azure Elastic SAN Microsoft Blog
Azure Elastic SAN Overview
Azure Elastic SAN Performance topics
Azure Elastic SAN Pricing calculator
Azure Products by Region (see where AES is currently available)
Azure Storage Offerings 
Azure Virtual Machine (VM) sizes
Azure Virtual Machine (VM) types
Azure Elastic SAN General Pricing
Azure Storage redundancy 
Azure Service Level Agreements (SLA) 
StorageIOBlog.com Data Box Family 
StorageIOBlog.com Data Box Review
StorageIOBlog.com Data Box Test Drive 
StorageIOblog.com Microsoft Hyper-V Alive Enhanced with Win Server 2025
StorageIOblog.com If NVMe is the answer, what are the questions?
StorageIOblog.com NVMe Primer (or refresh)

Additional learning experiences along with common questions (and answers), are found in my Software Defined Data Infrastructure Essentials book.

What this all means

Azure Elastic SAN (AES) is a new and now generally available shared block storage offering that is accessible using iSCSI from within the Azure cloud and from on-prem environments. Even with iSCSI, AES is relatively easy to set up and use for shared storage, especially if you are used to or currently working with hardware or software-defined SAN storage solutions.

With NVMe over TCP fabrics gaining industry and customer traction, I'm hoping Microsoft adds that in the future. Currently, AES supports LRS and ZRS for redundancy, and an excellent future enhancement would be to add geo redundant storage (GRS) capabilities for those who need it.

I like the option of elastic shared storage regarding performance, availability, capacity, and economic costs (PACE). If you understand the value proposition of evolving from dedicated DAS to shared SAN (independent of the underlying network fabric), or are currently using some form of on-prem shared block storage, you will find AES familiar and easy to use. Granted, AES is not a solution for everything, as there are roles for other block storage, including DAS such as Azure Disks with VMs within Azure, along with on-prem DAS, as well as file, object, and blob storage, and tables, among others.

Wrap up

The notion that all cloud storage must be objects or blobs is tied to those who only need, provide, or prefer those solutions. The reality is that everything is not the same. Thus, there is a need for various storage mediums, devices, tiers, access methods, and types of services. Microsoft and Azure have done an excellent job of providing those options. I like what Microsoft Azure is doing with Azure Elastic SAN.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Nine time Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of UnlimitedIO LLC.

PACE your Server Storage I/O decision making, its about application requirements

PACE your Server Storage I/O decision-making; it's about application requirements. Regardless of whether you are looking for physical, software-defined virtual, cloud, or container storage; block, file, or object; primary, secondary, or protection copies; standalone, converged, hyper-converged, cluster-in-a-box, or other forms of storage and packaging, when it comes to server storage I/O decision-making, it's about the applications.

I often see people deciding on the best storage before the questions of requirements, needs, and wants are even mentioned. Sure, the technology is important, but so too are the techniques and trends, including using new things in new ways as well as old things in new ways. There are lots of buzzwords on the storage scene these days. But don't even think about buying until you truly understand your business' storage needs.

However, when it comes down to it, unless you have a unique need, most environments' server and storage I/O resources exist to protect, preserve, and serve applications and their information or data. Recently I did a couple of articles over at Network Computing tied to server and storage I/O decision-making, balancing technology buzzwords with business and application requirements.

PACE and common applications characteristics

PACE your server storage decisions

A theme I mention in the above two articles, as well as elsewhere on servers, storage I/O, and applications, is PACE. That is, application Performance, Availability, Capacity, and Economics (PACE). Different applications will have various attributes, in general as well as in how they are used. For example, database transaction activity vs. reporting or analytics; logs and journals vs. redo logs; indices, tables, import/export, scratch and temp space. PACE (figure 2.7) describes an application's and its data's characteristics and needs.

Server Storage I/O PACE

Common Application Pace Attributes

All applications have PACE attributes

  • Those PACE attributes vary by application and usage
  • Some applications and their data are more active vs. others
  • PACE characteristics will vary within different parts of an application

Think of an application and its associated data PACE as its personality: how it behaves, what it does, how and when it does it, along with its value, benefit, or cost, and its Quality of Service (QoS) attributes. Understanding applications in different environments, data value, and associated PACE attributes is essential for making informed server and storage I/O decisions, from configuration to acquisitions or upgrades; when, where, why, and how to protect; performance optimization; capacity planning; reporting; and troubleshooting, not to mention addressing budget concerns.

Data and Application PACE

Primary PACE attributes for active and inactive applications and data:
P – Performance and activity (how things get used)
A – Availability and durability (resiliency and protection)
C – Capacity and space (what things use or occupy)
E – Energy and Economics (people, budgets and other barriers)

Some applications need more performance (server compute, or storage and network I/O), while others need space capacity (storage, memory, network, or I/O connectivity). Likewise, some applications have different availability needs (data protection, durability, security, resiliency, backup, BC, DR) that determine the various tools, technologies, and techniques to use. Budgets are also a concern, which for some applications means enabling more performance per cost, while others focus on maximizing space capacity and protection level per cost. PACE attributes also define or influence policies for QoS (performance, availability, capacity), as well as thresholds, limits, quotas, retention, and disposition, among others.

Where to learn more

Learn more about data infrastructures and tradecraft related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

What this all means

The best storage will be the one that meets or exceeds your application requirements instead of the solution that meets somebody else's needs or wants. Keep in mind: PACE your server storage I/O decision-making, it is about application requirements.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

March 31st is world backup day; when is world recovery day

If March 31st is world backup day, when is world recovery day?

For several years, if not decades, March 31st has been world backup day, a reminder to protect and backup your apps and data. Data protection, including backup, recovery, business continuance (BC), disaster recovery (DR), and business resilience (BR), should be a 365-day-a-year focus. If you have regular data protection, including backup, that is great; when was the last time you tested restore?

Some related content

Upcoming and past events including webinars, tips and commentary
World Backup Day Reminder Don’t Be an April Fool Test Your Data Recovery
Data Infrastructure Overview, Its What’s Inside of a Data Center
Application Data Value Characteristics Everything Is Not The Same
Data Protection Diaries Topics Tools Techniques Technologies Tips

Reminder to Protect your data and apps and settings

Thus, this is also a reminder to protect your data, apps, and their settings regularly. What's even better is evolving from no protection, or once a year, to more frequent data protection, including backup of your critical and noncritical apps and data. Notice I keep mentioning apps and not just the usual focus on data. Application programs are, broadly considered, data; after all, apps along with your settings and metadata are just data when stored and protected.

There is also often a focus on just the data, which can lead to problems when it comes time to recover an app program, settings, or metadata. Also, a reminder that data protection, including backup, is not just for large enterprises; it applies to organizations and entities of all sizes, including small and medium businesses (SMBs), non-profits, and homes (e.g., your photos, worksheets, and other documents).

What About Recovery

If March 31st is world backup day, when is world recovery day? So far, I have been talking about backup as part of data protection or ensuring your apps, data, and settings are protected; what about recovery?

Sometimes with data protection, discussions can drift into what’s more critical, backup or recovery, which is a bit like a chicken and egg situation. In other words, what’s more important, the chicken or the egg? Similar to data protection, what’s more critical, backup or recovery?

Recovery is only as good as your backup (or snapshot, point-in-time copy, checkpoint, or consistency point), and your backup or protection copy is only as good as its recoverability. Recoverability means not only that there is something to restore from a point in time (e.g., recovery point objective, or RPO) within a given amount of time (recovery time objective, or RTO).

Recoverability also means that you can pull the data (e.g., bits, bytes, blocks, blobs, objects, files, tables) from the protection medium, media, or service and use it. Recovery means that the data is valid and consistent, has integrity, or is otherwise not bad, missing, damaged, or corrupted (e.g., usable).

What About Recovery Day?

For several years I have said, and will continue to say, that if March 31st is world backup day, then April 1st should be world recovery day. So why April 1st for world recovery day? Simple: you don't want to look like a fool the day after world backup day if you can't restore and use the data you backed up the day before.

Not comfortable with April 1st as world recovery day? Then make your world recovery day (or test) a day or so later. The important message is to ensure your apps, data, and settings are protected (e.g., copied, backed up, snapshot, checkpoint, etc.); trust yet verify, and test your restorations.

Why do I mention apps, data, and settings?

The important message here is that it is good if you are already protecting your data, your spreadsheets, worksheets, databases, files, photos, and the application programs that use them. However, also ensure that you are protecting application settings, configurations, metadata, encryption keys, the backup or protection mechanisms, and their data.

For example, when I accidentally delete a data file or configuration settings, I can restore those without recovering everything. Suppose, for instance, I accidentally or intentionally uninstall an application program. In that case, I can reinstall (assuming I have a copy of the program), then restore my settings and pick up where I left off.

Who does this apply to?

This applies to organizations of all sizes and types, as well as individuals. If you have, generate, or save data that is worth having (or that you have to keep), then it should be protected. How often to protect data (the time interval) will be based on your recovery point objective (RPO); likewise, how fast you need to recover is defined by your recovery time objective (RTO).

Remember that it is not if you will need to restore, recover, reload, refresh, or repair your apps, data, and settings, but when. It might be because of accidental or planned deletion; an accident; a hardware, software, or cloud service situation; or ransomware or malware, among other things that can and do happen.

What to do?

If March 31st is world backup day, when is world recovery day? Ensure you have regular copies of your apps, data, and configuration settings, including encryption keys. Implement a variation of the old-school three-two-one (3 2 1) data protection (e.g., backup) scheme: three or more copies, stored on two or more devices, systems, or media, with at least one of them offsite (preferably offline), including in the cloud.

A variation of the new school 4 3 2 1 data protection scheme has:

  • Four or more versions of your protected data.
  • Three or more copies (feel free to swap the number of copies and versions).
  • Stored on two or more different systems (devices, media, or locations).
  • At least one copy offsite (preferably with one offline), including cloud.

The big difference between the old school 3 2 1 and the new school 4 3 2 1 is the emphasis on having multiple copies and various versions (e.g., points in time). For example, storing three copies on two systems with one offsite is good, unless all copies are damaged. Having different versions (e.g., points in time) and multiple copies of those versions stored in different places, including at least one offline (e.g., air-gapped), is essential.

Trust yet verify, test your backups and recovery

Test to verify your data protection is working and that data (apps, data, settings) can be restored. When testing restores, be careful not to overwrite your good data and cause a disaster. Also, ensure your data is encrypted in multiple locations and layers and that you protect your encryption keys. Finally, make sure your backup, protection software, catalog, and settings are encrypted, secured, and protected.

If you have questions or are not sure, learn more in my books Software Defined Data Infrastructure Essentials (CRC Press) and Data Infrastructure Management Insight and Strategies (CRC Press), check out the links listed below, or reach out to me or others. If you are an individual consumer just looking to protect some photos, valuable documents, and heirlooms, get in touch with professionals who specialize in these types of things.

What do I do?

Implement 4 3 2 1 type data protection with different granularities and frequencies. For example, my data protection includes regular point-in-time copies, including backups, snapshots, checkpoints, and consistency points of systems, volumes, shares, apps, files, data, and settings at different intervals. Because I have different types of apps and data, some more static and others changing, protection also varies to avoid treating everything the same, reducing cost while increasing coverage.

I protect my apps, data, and settings with multiple versions and copies locally on different systems, devices, and mediums, as well as offsite, including offline and at cloud services. So why do I store data offsite vs. having it all in the cloud? Simple: speed of recovery and flexibility.

If it's a few files, perhaps a few GBs of data, and I don't have a good copy locally, it is usually faster to get it from Microsoft Azure. On the other hand, if I need to restore TBs of data (something terrible happens), it can be faster to bring an offline, offsite copy back, then pull only the more recent data I need from the cloud.

What are some of the tools and technologies that I use?

Locally I have multiple Microsoft Windows Servers (Server 2022) with various storage (HDDs and SSDs), including removable devices. In addition to on-prem, I have data stored offsite on removable media and cloud copies. For my cloud copies, I have a mix of files and blobs stored at Microsoft Azure.

A challenge moving from AWS to Azure was that Retrospect did not support objects (Azure blobs). Then I realized, no worries: Retrospect supports storing data as files on regular filesystems on local storage (SSD or HDD). The solution was to set up an Azure file share for Retrospect, and everything has worked fantastically.
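
For anyone wanting to replicate that setup, creating the file share itself is only a couple of lines of Azure PowerShell. This is a minimal sketch assuming the Az.Storage module; the storage account and share names are hypothetical, and the account key comes from your storage account's access keys.

    # Create an Azure file share to use as a file-based backup target
    $ctx = New-AzStorageContext -StorageAccountName "mystorageacct" -StorageAccountKey "<account-key>"
    New-AzStorageShare -Name "retrospect-backups" -Context $ctx

    # Map it on-prem over SMB (outbound port 445 must be open), then point the
    # backup software at the mapped drive as if it were local file storage:
    # net use Z: \\mystorageacct.file.core.windows.net\retrospect-backups /user:Azure\mystorageacct <account-key>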

Are there things I need and want to improve? Yes, it’s an ongoing process and journey.

What should you do next?

Make sure you have a data backup; if not, March 31st is a good reminder. Trust yet verify that your backups are working and that you can recover, and not be an April 1st fool.

Where to learn more

Learn more about world backup day, recovery and data protection along with other related topics via the following links:

Upcoming and past events including webinars, tips and commentary
Next Generation Hybrid Data Infrastructures Are In Your Future
Cloud File Data Storage Consolidation and Economic Comparison Model
New Book Data Infrastructure Management Insight Strategies
World Backup Day Reminder Don’t Be an April Fool Test Your Data Recovery
Virtual, Cloud and IT Availability, it’s a shared responsibility
Don’t Stop Learning Expand Your Skills Experiences Everyday
Data Infrastructure Overview, Its What’s Inside of a Data Center
Application Data Value Characteristics Everything Is Not The Same
Data Protection Diaries Topics Tools Techniques Technologies Tips
Data Infrastructure Server Storage I/O related Tradecraft Overview

Additional learning experiences can be found in Software Defined Data Infrastructure Essentials book. Also check out Data Infrastructure Management Insight and Strategies.

What this all means

If March 31st is world backup day, when is world recovery day? Every day should be a backup day (e.g., some protection, backup, copy, snapshot, checkpoint, consistency point). Likewise, every day should be able to be a recovery day. World backup day and recovery apply to organizations of all sizes as well as individuals. Remember: if March 31st is world backup day, when is world recovery day?

Ok, nuff said.

Cheers gs

Greg Schulz – Multi-year Microsoft MVP Cloud and Data Center Management, ten-time VMware vExpert. Author of Data Infrastructure Insights (CRC Press), Software Defined Data Infrastructure Essentials (CRC). Cloud and Virtual Data Storage Networking (CRC), The Green and Virtual Data Center (CRC), Resilient Storage Networks (Elsevier). Visit twitter @storageio as well as www.picturesoverstillwater.com to view various UAS/UAV e.g. drone based aerial content created by Greg Schulz. Courteous comments are welcome for consideration. First published on https://storageioblog.com. Any reproduction without attribution or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. Visit our companion site https://picturesoverstillwater.com to view drone based aerial photography and video related topics. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO and UnlimitedIO LLC.

Driving ROI with Cloud Storage Consolidation Seminars

Join me in a series of in-person seminars driving ROI with cloud storage consolidation for unstructured file data.

Various Data Infrastructure options from on-prem to edge to cloud and beyond

These initial seminars are being held at Amazon Web Services (AWS) locations: April 30 in New York City, May 1 in Chicago, and May 2 in Houston. At each of these three cities, I will be joined by experts from NetApp, Talon, and AWS as we look at issues, trends, and what can be done today (including hands-on demos) driving ROI with cloud storage consolidation for unstructured file data.

What The Seminars Are About

These seminars look at how to remove cost and complexity while boosting productivity for distributed sites with unstructured data and NAS file servers. The seminars look at making informed decisions balancing technical considerations with a business return on investment (ROI) model, along with return on innovation (the other ROI) from boosting productivity. It's not about simply cutting costs, which can create chaos or compromise elsewhere; it's about removing complexity and cost while boosting productivity with smart cloud storage consolidation for unstructured file data.

Distributed File Server Cloud Storage Consolidation ROI Economic Comparison

During these seminars I will discuss various industry and customer trends, challenges, and solutions, particularly for environments with distributed file servers for unstructured file data. As part of my discussion, we will look at both a technical and an ROI business-based model for distributed file server cloud storage consolidation, based on the Server StorageIO white paper report titled Cloud File Data Storage Consolidation and Economic Comparison Model (free PDF download here).

Where When and How to Register

New York City Tuesday April 30, 2019 9:00AM
Amazon Web Services
7 West 34th St.
6th Floor
Learn more and register here.

Chicago Illinois Wednesday May 1, 2019 9:00AM
Amazon Web Services
222 West Adams Street
Suite 1400
Learn more and register here

Houston Texas Thursday May 2, 2019 9:00AM
Amazon Web Services
825 Town and Country Lane
Suite 1000
Learn more and register here

Where to learn more

Learn more about cloud storage consolidation, distributed file servers, and related data infrastructure topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

What this all means

Making informed decisions for data infrastructure resources, including cloud storage consolidation and distributed file servers, involves technical, application workload, and business economic analysis. Which of the three (technical, application workload, financial) is more important for enabling a business benefit will depend on your perspective as well as area of focus. However, all of the above need to be considered in balance as part of making an informed data infrastructure resource decision. That is where a discussion of a business financial ROI model (pro forma if you prefer) comes into play as part of cloud storage consolidation, including for distributed file servers with unstructured file data.

I look forward to meeting with attendees and hope to see you at the events April 30th in New York City, May 1st in Chicago, and May 2nd in Houston as we discuss driving ROI with cloud storage consolidation at these seminars.

Ok, nuff said, for now.

Cheers GS

Greg Schulz – Multi-year Microsoft MVP Cloud and Data Center Management, ten-time VMware vExpert. Author of Data Infrastructure Insights (CRC Press), Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Also visit www.picturesoverstillwater.com to view various UAS/UAV e.g. drone based aerial content created by Greg Schulz. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. Visit our companion site https://picturesoverstillwater.com to view drone based aerial photography and video related topics. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Announcing My New Book Data Infrastructure Management Insight Strategies

Announcing my new book: Data Infrastructure Management Insight and Strategies, published via Auerbach/CRC Press, is now available via CRC Press and Amazon.com, among other global venues.

My Fifth Solo Book Project – Data Infrastructure Management

Data Infrastructure Management Insight and Strategies (e.g., the white book) is my fifth solo published book, in addition to several other collaborative works. Given its title, the focus of this new book is data infrastructures: the tools, technologies, techniques, and trends, including the hardware, software, services, people, and policies inside data centers that get defined to support business and application services delivery. The book (ISBN 9781138486423) is soft-covered (electronic Kindle versions are also available) with 250 pages and over 100 figures, tables, tips, and examples. You can explore the contents via Google Books here.

Stack of my solo books with common theme around Data Infrastructure topics

Data Infrastructure Management – Insight and Strategies e.g. the White book (CRC Press 2019)

Some of My Other Books Include

Click on the following book images to learn more about, as well as order your copy.

Software Defined Data Infrastructure Essentials (SDDI) – Cloud, Converged, and Virtual Fundamental Server Storage I/O Tradecraft, e.g., the Blue book (CRC Press 2017, on the SNIA recommended reading list), covers software-defined, SDDC, SDDI, and hybrid topics, among others including serverless, containers, NVMe, SSD, flash, PMEM, and SCM. Available at Amazon.com among other global venues.

Cloud and Virtual Data Storage Networking (CVDSN) – Your Journey to Efficient and Effective Information Services, e.g., the Yellow or Gold book (CRC Press 2011, on the Intel recommended reading list), available at Amazon.com among other global venues.

The Green and Virtual Data Center (TGVDC) – Enabling Efficient, Effective and Productive Data Infrastructures, e.g., the Green book (CRC Press 2009, on the Intel recommended reading list), available at Amazon.com among other venues.

Resilient Storage Networks (RSN) – Designing Flexible Scalable Data Infrastructures, e.g., the Red book (Elsevier 2004), is SNIA Education endorsed reading, available at Amazon.com among other venues. I have some free copies of RSN for anybody willing to pay shipping and handling; send me a note and we will go from there.

Where to learn more

Learn more via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

What this all means

Today more than ever, there tends to be a focus on the date something was created or published, as there is a lot of temporal content with a short shelf life. This means a lot of content, including books, is being created that is short-lived, usually focused on a particular technology, tool, or trend that has a life span or attention focus of a couple of years at best.

On the other hand, there is also content being created today that combines new and emerging technologies, tools, and trends with time-tested strategies, techniques, and processes, some of whose names or buzzwords will evolve. My books fit into the latter category, combining current as well as emerging technologies, tools, trends, and techniques that support a longer shelf life; just insert your new favorite buzzword, buzz trend, or buzz topic as needed.

Data Infrastructure Books by Greg Schulz

You will also notice, looking at the stack of books, that Data Infrastructure Management Insight and Strategies is a smaller, soft-covered book compared to others in my collection. The reason is that this new book can be a quick read to address what you need, as well as a companion to others in the stack, depending on your focus or requirements.

A common question I get, having written several books, not to mention thousands of articles, tips, reports, blogs, columns, white papers, videos, and webinars among other content, is: what's next? Good question; see what's next, as well as check out some other things I'm doing over at www.picturesoverstillwater.com, where I'm generating big data that gets stored and processed in various data infrastructures, including cloud ;) .

Will there be another book, and if so, on or about what? As I mentioned, there are some projects I'm exploring; will they get finished or take different directions? Wait and see what's next.

How do I find the time to create these books, and how long does it take? The time required varies, as does the amount of work, depending on what else I'm doing. I try to leverage the book (and other content creation projects) with other things I'm doing to maximize time. Some book projects have been very fast, a year or less. Some take longer, such as Software Defined Data Infrastructure Essentials, as it is a big book with lots of material that will have a long shelf life.

Do I write and illustrate the books, or do I have somebody do them for me? For my books, I do the writing and illustrating (drawings, figures, images) myself, along with some of the layout, relying on external copy editors and production folks.

What do I recommend or advise those wanting to write a book? Understand that publishing a book is a project: there's the actual writing, editing, reviews, artwork, research, labs, or other supporting items as book companions. Also understand why you are writing a book: for fame, fortune, acclaim, to share with others, or some other reason. I also recommend, before you write your entire book, talking with others who have been published to test the waters and get feedback. You might find it easier to shop an extended outline than a completed manuscript, unless you are writing a novel or similar.

Want to learn more about writing a book (or other content), get feedback, or have other questions? Drop me a note and I will do what I can to help out.

There is an old saying: publish or perish. Well, I just published my fifth solo book, Data Infrastructure Management Insight and Strategies, which you can buy at Amazon.com among other venues.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2019. Author of Data Infrastructure Insights (CRC Press), Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Also visit www.picturesoverstillwater.com to view various UAS/UAV e.g. drone based aerial content created by Greg Schulz. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2019 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Fall 2018 Dutch Data Infrastructure Industry Trends Decision Making Seminars

There is still time to register for the fall 2018 Dutch data infrastructure industry trends decision-making seminars November 27th and 28th. The workshops are being organized by Brouwer Storage Consultancy of Holland and will be held in Nijkerk.

On Tuesday, November 27th, there will be an advanced education workshop seminar covering data infrastructure industry trends and technology update presented by myself. On Wednesday, November 28th, there will be a deeper dive workgroup seminar session addressing data infrastructure related strategy, planning, and decision-making.

Data Infrastructures Industry Trends November 27

What's new, what's the buzz, and what you need to know about: from speeds and feeds, slots and watts, to who's doing what, from what's interesting to what's relevant for your environment.

This one-day seminar is a new and improved version of the popular speeds and feeds session, where we look at what's new and emerging in the industry as well as what is applicable to your environments. You will be updated on the latest trends and emerging data infrastructure technologies to support digital transformation, little and big data analytics, AI/ML/DL, GDPR, data protection, edge/fog compute, and IoT, among others; from legacy to the software-defined, cloud, container, converged, and virtual to composable. The seminar is a mix of presentation and engaging discussion as we look into the details of favorite or new technologies, for those who are old-school, new-school, and current or future school.

Part I – Industry Trends, Applications, and Workload
Part II – Server Compute, Memory, I/O, hardware and software
Part III – Storage and Data protection for on-prem and cloud
Part IV – Bringing it all together, managing and decision making

Topics to be covered include among others:

  • What these trends, tools, technologies mean for different environments of various size.
  • Tips on evaluating legacy and startup or newer vendors as well as technologies.
  • Updates on vendors, services, technologies, products you may or may not have heard of.
  • Cloud (public/private/multi-cloud/hybrid) compute, storage and management.
  • Containers (including docker, windows, kubernetes, FaaS, serverless, lambda).
  • Converged and hyper-converged; Gen-Z and composable; NVMe and NVMeoF.
  • Persistent Memory (PMEM), Storage Class Memory (SCM), 3D XPoint, NAND Flash SSD.
  • Legacy vs. software-defined, appliances, storage systems, block, NAS file, object, table.
  • Bulk cloud data migration appliances, storage for the edge, file sync and share.
  • Role and importance of context (what’s applicable, what something means).
  • Who’s doing what, what to look for today for the future.

This seminar is for those involved with ICT/IT servers, storage, I/O networking, and associated management activities, including data protection, across legacy as well as software-defined cloud, containers, converged, hyper-converged, and virtualization. This seminar is for professionals who manage, architect, or are otherwise involved with data infrastructure related strategy and acquisitions.

Data Infrastructures Deep Dive Decision Making November 28

Enabling informed strategy and decision-making: moving from what the tools, trends, and technologies are, to what to use when, where, why, and how, along with strategy, planning, decision-making, and ongoing management.

If the answer is cloud, converged, container, composable, edge, fog, digital transformation, on-prem, hybrid, or software-defined, what were (or are) the questions to plan as well as prepare for deployment today, along with in the future? This workshop-format seminar provides answers to fundamental questions, with essential insight into software-defined data infrastructures (SDDI) and software-defined data centers (SDDC). For ICT/IT professionals (architects, strategists, administrators, managers) currently or planning on being involved with servers, storage, I/O networking, hardware, software, converged, containers, cloud, backup/data protection, and associated topics, this seminar is for you.

Clouds, converged, and containers will be a primary focus, along with related themes and topics that you need to know more about. Don't be scared of clouds; be prepared, and this includes for on-prem, public, hybrid, and multi-cloud. As part of our deeper-dive decision-making strategy focus, we look at cloud cost considerations, including whether you are paying too much or not enough (e.g., are you depriving your applications of performance to save money?). We will explore various decision-making and strategy topics spanning AWS, Microsoft Azure, Azure Stack, Windows and Hyper-V, VMware (including on AWS), and OpenStack (is it still open for business?).

Additional topics, trends, themes include:

  • Everything is not the same across cloud services, converged, or containers.
  • Different environments have various data infrastructure resource needs.
  • How to balance legacy on-prem application needs with emerging technology options.
  • Different comparison criteria for smaller environments and remote offices vs. larger enterprises
  • Do it yourself (DiY) vs. turnkey software vs. bundled tin-wrapped software solutions
  • Strategy, planning, decision-making, and ongoing management

How To Register For Seminar Workshops

Learn more about the fall 2018 Dutch Server StorageIO Data Infrastructure Tuesday trends workshop seminar here (PDF), and the Wednesday deeper-dive decision-making workshop session here (PDF).

To register and obtain more information, contact event organizers Brouwer Storage Consultancy at +31-33-246-6825 or +31-652-601-309, or info at brouwerconsultancy.com.

Where to learn more

Learn more about Data Infrastructure and related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

What this all means

Everything is not the same across different organizations, environments, application workloads, data, technologies, tools, and trends. These two one-day interactive workshop seminars provide timely insight into what's going on in the data infrastructure industry, along with common IT organization challenges as well as how to address them. Moving from the what, to what to use when, where, why, and how, along with alternatives, gaining insight and awareness to avoid flying blind, enables effective strategy, decision-making, planning, and ongoing management. Learn more and sign up for the Fall 2018 Dutch Data Infrastructure Industry Trends Decision Making Seminars; see you in Nijkerk.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Microsoft Azure Data Box Disk Impressions #blogtobertech

Data Box Disk Test Drive Impressions is the last of a four-post series looking at Microsoft Azure Data Box. View Part 1 Microsoft announced Azure Data Box updates, Part 2 Microsoft Azure Data Box Family, and Part 3 Microsoft Azure Data Box Disk Test Drive Review.

Overall, I liked the Azure Data Box experience, along with the range of options to select the best-fit solution for my needs. A common trend among the major cloud service providers such as AWS, Microsoft Azure, and Google is recognizing that a one-size-fits-all solution does not meet different customer needs.

The only things I did not like about, and would like to see improved with, Azure Data Box are two items: one at the beginning, the other at the end of the process. Granted, with Data Box Disks still in preview, there is time for those items to be addressed before general availability, and I have passed the feedback on to Microsoft.

At the beginning of the process, things are pretty straightforward with good tools along with resources to help you navigate which type of Data Box to order, how to order, specify your account details and other information.

What I did not like with the up-front experience was, after the quick ordering and notification process, the delay of a week or more until being notified when a Data Box would be arriving. Granted, I was not in a rush, and Microsoft did indicate that it could take about ten days to be informed of availability; still, this is something that should happen quickly as resources become available. Another option would be for Microsoft to add priority and low-priority ordering options in the future.

The other experience I did not like was at the very end: unless it got stuck in an email spam trap (I checked, could not find it), the final notification could be better. Not only a final email saying your data is copied, but also a reminder of where your block or page blobs were copied to (e.g., what you set up when ordering).

Monitoring the progress of the process, I knew when the Data Box drives arrived at Microsoft, and when the copy started and completed, including error status. Having gotten used to receiving update notifications from Azure, it would be useful to receive one at the end saying congratulations, your data has been copied, check here for any errors or other info, along with a reminder of where the data was copied to.

Likewise, a follow-up note from Microsoft saying that the Azure Data Box drives used as part of the transfer were securely erased along with a certificate of digital destruction would be useful for compliance purposes.

As mentioned above, overall I found the Data Box Disk experience very positive and a great way to move bulk data faster than what could be done over available networks. My next step is to migrate some of the transferred data to cold, long-term archive storage, some to Azure Files, with some staying in block blobs. There are also a couple of VHD and VHDX files that will be moved and attached to VMs for additional testing.

Where to learn more

Learn more about Microsoft Azure Data Box, Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

What this all means

For those who need to move large amounts of data, including structured, unstructured, semi-structured, little or big data, to a cloud resource, solutions such as Azure Data Box may be in your future. Likewise, for those looking to support remote and edge workloads, from AI, ML, and DL inferencing to large-scale data pre-processing, data collection and acquisition, video, telemetry, and IoT, among others, Data Box type solutions may be in your future. Overall, my Microsoft Azure Data Box Disk impressions were favorable, and I was able to address a project that had been on my to-do list for some time.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Microsoft Azure Data Box Disk Test Drive Review #blogtobertech

Microsoft Azure Data Box Test Drive is part three of a four-part series looking at Data Box. View Part 1 Microsoft announced Azure Data Box updates, Part 2 Microsoft Azure Data Box Family, and Part 4 Microsoft Azure Data Box Disk Impressions.

Getting Started

The workflow for using Data Box involves selecting the type of Data Box to use via the Microsoft Azure portal (here) or the Data Box Family page (here).

Getting Started via the Microsoft Azure Data Box Family Page image via Microsoft.com

The first step of ordering a Data Box is to specify your Azure subscription, the type of operation (e.g., import data into Azure, or export out), the source country/region, and the destination Azure region.

Selecting Data Box from Azure Portal

The next step is to determine the type of Data Box; for this test I chose 40 TB Data Box Disks. Make a note of the fees to avoid any surprises.

Selecting Data Box Disks (40 TB) From Azure Portal

After selecting the type of Data Box, fill in the storage account information using an existing resource, or create new ones as needed. Make a note of these selections, as you will need them after the copy is done; this is where your data will be located.

Specify Azure Storage Account Information Where Data Will Transfer To

Once the order is placed, an email is received confirming the order and, this being a preview, indicating that it might take about ten days to hear a status update on availability of the devices.

Email notification received after the order is placed

After about ten days, I was contacted by Microsoft via an email (not shown) confirming the amount of data to be copied to determine how many disks would be needed. Once this was confirmed with Microsoft, a status update was noted on the Azure dashboard.

Azure Data Box Dashboard Status after order placed

After a few days, a box arrived with the Data Box disks, cables and return shipping labels enclosed. Also received was an email notification indicating the disks had arrived.

Email notice Data Box has arrived on site (on-prem if you prefer)

The following is the physical box that contains the Data Box disks that I received from Microsoft.

The shipping box with Data Box Disks arrives

Once you get the Data Box, go to the Azure portal for Data Box and access the tools. There are tools and commands for Windows as well as Linux that are needed for accessing and unlocking the disks. This is where you also obtain device IDs. You will also need to have the access key phrase you specified in an earlier step as part of placing the order.

Access Data Box Software Tools and Keys from Azure Portal

Inside the shipping box was a pair of 8 TB SATA SSDs, SATA to USB cables, along with return shipping labels.

Contents inside the shipping box, two Data Box 8 TB disks

From the Azure portal, access the device IDs that will be needed along with passphrase for obtaining and unlocking the Data Box disks. You will also want to download the tools as well as follow other instructions on the portal for accessing disks.

Azure Data Box tools, device IDs and Keys

The Windows system I used for testing is a virtual machine hosted on a VMware vSphere ESXi 6.7 host. After physically attaching the Data Box Disks to the VM host, a virtual or software attachment was done by adding USB devices to the VM.

Virtual Attach of Data Box Disks to VMware vSphere ESXi host and guest VM

Once the VM had the Data Box disks attached and mapped, they appeared to Windows. After downloading the Data Box software tools and unlocking the devices, they were ready to copy data to. Note that the disks appear as regular Windows devices once unlocked; simply using BitLocker does not unlock the drives, you need to use the Data Box tools. Speaking of Windows disks, there are a couple of folders on each Data Box disk when shipped, including one for Block Blob and one for Page Blob, along with verification items.

View of Data Box Disks (8 TB each) after attaching to Windows system

Note that you are given several days as part of the base transfer cost, after which fees for extra days apply. Since I had a few extra days, I used some of the excess capacity to do some staging and reorganization of data before the actual copy.

Data copy is done using your choice of tools, for example Robocopy among many others; I used a combination of Robocopy and Retrospect. Note that most data should be placed in the folder or directory structure of your choice within the Block Blob folder, while Page Blobs are for VHDX files to be used with virtual machines on Azure. After spending a few days copying the data I wanted to move, along with performing verification, it was time to pack up the devices.
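To make the copy step concrete, the following is a minimal sketch in Python that wraps Robocopy to mirror a source folder into the Block Blob folder of an unlocked Data Box disk. The drive letters, paths, and log location are hypothetical placeholders, not the ones from my actual run.

```python
# Hypothetical sketch: mirror a folder onto a Data Box disk with Robocopy.
# Source, destination, and log paths are placeholders.
import subprocess

src = r"D:\projects"                # placeholder source folder
dst = r"E:\BlockBlob\projects"      # placeholder unlocked Data Box target
log = r"C:\temp\databox-copy.log"   # keep the log for later verification

# /MIR mirrors the tree, /R and /W bound retries on busy files,
# /LOG records what was copied for audit and verification.
cmd = ["robocopy", src, dst, "/MIR", "/R:3", "/W:5", f"/LOG:{log}"]
result = subprocess.run(cmd)

# Robocopy exit codes of 8 or higher indicate failures
if result.returncode >= 8:
    raise RuntimeError(f"robocopy failed with exit code {result.returncode}")
```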

As a reminder, blobs are analogous to and what Microsoft Azure refers to instead of objects (e.g., object storage). Also remember that Azure blobs include block, page (512-byte page aligned for VHDX) and append (similar to other vendors object storage). Microsoft Azure in addition to blobs, supports file (SMB and NFS) access, along with table (database) and queue storage services.
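For those newer to Azure storage, the following sketch using the azure-storage-blob Python SDK illustrates the block vs. page blob distinction once data lands in Azure; the connection string, container, and file names are hypothetical placeholders.

```python
# Hypothetical sketch: uploading block vs. page blobs with azure-storage-blob.
from azure.storage.blob import BlobServiceClient, BlobType

svc = BlobServiceClient.from_connection_string("<connection-string>")
container = svc.get_container_client("databox-import")

# Most files become block blobs (analogous to the Block Blob disk folder)
with open("report.pdf", "rb") as data:
    container.upload_blob(name="docs/report.pdf", data=data)

# VHDX images become page blobs (512-byte aligned, the Page Blob folder)
with open("vm01.vhdx", "rb") as data:
    container.upload_blob(name="vhds/vm01.vhdx", data=data,
                          blob_type=BlobType.PageBlob)
```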

The following shows the return label attached to the shipping box that contains the Data Box disks and cables. I also included a copy of the shipping label inside the box just in case something happened during shipment. Once prepared for delivery, I took the box to a local UPS store where I received a shipment receipt (not shown). Later that day I also received an email from Microsoft indicating the shipment was in-progress.

Data Box disks packaged with return receipt (was in the box)

The Azure portal shows status of Data Box shipment being sent to Microsoft, along with a follow-up email notification.

Azure Data Box portal status

Email notification of Data Box on the way to Microsoft.

Notice data box is on the way to Azure

After a few days, checking the Azure Portal showed the Data Box had arrived at Microsoft with copy operations underway. Remember, the storage account you specified back in the early steps is where you will look for your data. This is something I think Microsoft can improve on by providing a link, or some reminder in the status, of where the data is being copied to. Likewise, a copy completion email notice would be handy after getting used to the other alerts earlier in the process.

Azure Data Box portal showing disk copy operation status

Looking in the Blob storage resources of the Azure storage account specified during the ordering process, the contents of the Data Box Disks can be found.

Contents of Data Box disks copied into specified Azure Blobs and storage account

The following shows folders that I had copied from on-prem systems to the Data Box, now located in the proper Azure block blobs. Not shown are the page blobs to which I moved some VHDX files.

Mission accomplished, data folders now stored in Azure block blobs
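As a sketch of verifying the results programmatically (the portal view above works fine too), the following lists blobs under a folder-like prefix using the azure-storage-blob Python SDK; the connection string, container, and prefix are hypothetical.

```python
# Hypothetical sketch: list copied blobs to spot-check the transfer.
from azure.storage.blob import BlobServiceClient

svc = BlobServiceClient.from_connection_string("<connection-string>")
container = svc.get_container_client("databox-import")

# name_starts_with filters to a folder-like prefix within the container
for blob in container.list_blobs(name_starts_with="docs/"):
    print(blob.name, blob.size)
```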

Where to learn more

Learn more about Microsoft Azure Data Box, Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

What this all means

Overall the test drive of the Azure Data Box Disk solution was positive, and I look forward to trying out some of the other Data Box solutions, both offline and online options, in the future. Continue reading Part 4 Microsoft Azure Data Box Disk Impressions as part of this series including Microsoft Azure Data Box Disk Test Drive Review.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Microsoft Azure Data Box Family #blogtobertech

Microsoft Azure Data Box Family is part two of a four-part series looking at Data Box. View Part 1 Microsoft announced Azure Data Box updates, Part 3 Microsoft Azure Data Box Disk Test Drive Review, Part 4 Microsoft Azure Data Box Disk Impressions.

Microsoft Azure Data Box Overview

Microsoft has several Data Box solutions available or in preview to meet various customer needs. These include both online and offline solutions that combine hardware (except Data Box Gateway), software tools, and cloud services.

Data Box Online

Microsoft has two online Data Box offerings that provide real-time access to Azure cloud storage resources from on-prem, including remote and edge locations. The online Data Box solutions are Edge and Gateway, both with local on-prem storage.


Data Box Edge image via Microsoft.com

Data Box Edge (Preview)

Currently, in preview, Data Box Edge is a 1U appliance that combines hardware along with software resources for deployment on-prem at the edge or remote locations. Data Box Edge places locally converged compute and storage resources as an appliance along with connectivity to Azure cloud-based resources.

Intended workloads and applications for Data Box Edge include remote AI, ML, and DL inferencing, data processing or pre-processing before sending to the Azure Cloud, and functioning as an edge compute, data protection, and data transfer platform (e.g., cloud storage gateway) with local compute. Data Box Edge is similar in functionality and focus to other cloud service provider solutions such as AWS Snowball Edge (SBE). Management tools include the Data Box Edge resource in the Azure portal for management from a web UI, to create and manage resources, devices, and shares.

Other Data Box Edge attributes include:

  • Supports Azure Blob or Files via SMB and NFS storage access protocols
  • Dual Intel Xeon processors, each with 10 CPU cores, 64 GB RAM
  • 2 x 10 Gbps SFP+ copper cables and 2 x 1 Gbps RJ45 cables included
  • 8 NVMe SSDs (1.6 TB each), no HA, 12.8 TB total raw capacity
  • 2 x 1 GbE ports (one for management, one for user access)
  • 2 x 25 GbE ports (can operate at 10 GbE) plus 2 x 25 GbE ports
  • Local web UI for management and configuration

Data Box Gateway (Preview)

Also in preview, Data Box Gateway is a virtual machine (VM) based software-defined appliance that runs on VMware vSphere (ESXi) or Microsoft Hyper-V hypervisors. The functionality of Data Box Gateway is that of a cloud storage gateway, providing access to Azure Blob (page and block) or Files (NAS) via SMB or NFS protocols. Learn more about both Data Box Edge and Data Box Gateway here, including pricing here.

Data Box Offline Solutions

Microsoft has several offline Data Box offerings, including previously available models and new ones in preview. Offline Data Box solutions enable large amounts of data to be moved from on-prem primary, remote, and edge locations to Azure cloud storage resources. Bulk data movement operations can be one-time or recurring in support of big data migration for energy, research, media & entertainment, and other large volumes of data.

Other bulk movement scenarios include archive, backup, BC/DR, and virtual machine and application migration, among others. Use offline Data Box solutions when large amounts of data need to be moved from on-prem to the Azure cloud faster than available networks can support.

Offline Data Box solutions include:

  • Data Box Heavy (Preview) 1 PB Storage, 800 TB usable
  • Data Box 100 TB (80 TB usable)
  • Data Box Disk (Preview) 40 TB (35 TB Usable)


Data Box Heavy 1 PB (Preview) image via Microsoft.com

Data Box Heavy 1 PB (Preview)

  • Appliance with up to 800 TB usable capacity per order
  • One system per order
  • Supports Azure Blob or Files
  • Copy data to up to 10 storage accounts
  • 1 x 1/10 Gbps RJ45 connector, 4 x 40 Gbps QSFP+ connectors
  • AES 256-bit encryption
  • Copies data using NAS SMB and NFS protocols


Data Box 100TB image via Microsoft.com

100 TB Data Box

  • An appliance that supports 80 TB usable storage capacity
  • Supports Azure Blob or Files
  • Copies data to 10 storage accounts
  • 1 x 1/10 GbE RJ45 connector
  • 2 x 10 GbE SFP+ connector
  • AES 256-bit encryption
  • Storage access and copy via SMB and NFS NAS protocols

Case of Data Box Disks image via Microsoft.com

Data Box Disk 40 TB (Preview)

  • Up to 35 TB usable capacity per order
  • Up to 5 SSDs per order
  • This is what I tested (2 x 8 TB)
  • Supports Azure Blob storage (Block and Page)
  • Copies data to a single storage account
  • USB/SATA II, III server I/O interface (comes with SATA to USB connector cables)
  • AES 128-bit encryption
  • Copy data with standard tools

Where to learn more

Learn more about Microsoft Azure Data Box, Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

What this all means

Which Microsoft Azure Data Box is the best? That depends on your needs and requirements.

Microsoft along with other major cloud service providers continue to evolve their data migration services. Realizing that customers who need, want, or have to get data to the cloud also need to remove barriers, solutions such as Azure Data Box are a step in eliminating cloud barriers while addressing cloud concerns. Continue reading Part 3 Microsoft Azure Data Box Disk Test Drive Review and Part 4 Microsoft Azure Data Box Disk Impressions as part of Microsoft Azure Data Box Family.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Microsoft announced Azure Data Box updates #blogtobertech

Microsoft announced Azure Data Box updates is the first in a series of four posts looking at Data Box, including a test drive experience. View Part 2 Microsoft Azure Data Box Family, Part 3 Microsoft Azure Data Box Disk Test Drive Review, Part 4 Microsoft Azure Data Box Disk Impressions.

Microsoft Azure Data Box Family Page image via Microsoft.com

At Ignite, Microsoft announced Azure Data Box updates, which means it's time for a test drive and review. Microsoft has several Data Box solutions available or in preview to meet various customer needs. These include both online and offline solutions that combine hardware (except Data Box Gateway), software tools, and cloud services. In general, Data Box enables bulk movement and migration of data from on-prem environments to Azure cloud storage, including blob (e.g., object) and file (e.g., NAS accessible) resources.

What's the Need for a Data Movement Appliance Service?

Some might ask: why do you need a Microsoft Azure Data Box when there are fast networks? Good question, assuming you have fast networks that can move large amounts of bulk data promptly. Microsoft supports traditional Internet-based access to Azure cloud resources for data migration, along with the higher-speed ExpressRoute service, similar to Amazon Web Services (AWS) Direct Connect, among other options.

On the other hand, if you need to move an amount of data that would take weeks, months, or longer to send over expensive networks, then solutions like Data Box are an option. Microsoft is not alone or unique in having data storage migration or movement services. AWS has Snowball, Snowball Edge with compute, as well as the truck-size Snowmobile for large-scale data movement. Google also has its Transfer services, including the Google Transfer Appliance.
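To put the network option in perspective, here is a quick back-of-the-envelope sketch of how long moving the 35 TB usable capacity of a Data Box Disk order would take at various line rates, assuming an unrealistic 100 percent sustained utilization with no protocol overhead.

```python
# Back-of-the-envelope transfer times for 35 TB at full line rate.
tb = 35
bits = tb * 10**12 * 8  # decimal terabytes to bits

for name, bps in [("100 Mbps", 100e6), ("1 Gbps", 1e9), ("10 Gbps", 10e9)]:
    days = bits / bps / 86_400  # seconds per day
    print(f"{name}: {days:,.1f} days")

# Roughly 32 days at 100 Mbps, about 3 days at 1 Gbps, and under half a
# day at 10 Gbps, before overhead, contention, or network costs.
```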

Who is Azure Data Box for?

Azure Data Box is for those who need to migrate data to Azure cloud storage and other services on a one-time or recurring basis. Another scenario is for those who need to have on-prem storage and optional compute at remote or edge locations in support of data acquisition, media & entertainment, energy exploration, AI, ML, DL inferencing, local data processing, pre-processing before sending to cloud among other workloads.

Yet other scenarios are for those who need to move large amounts of data online, offline, or in disconnected (also known as submarine) mode, where a connection to the internet is not always available. Bulk data movement also applies to one-time as well as recurring data protection such as archive, backups, and BC/DR, along with data shipping, virtual machine farm relocation, SQL Server data migration to the cloud, and data center consolidation, among many other scenarios.

What is Azure Data Box

Azure Data Box is a combination of hardware, software, and cloud services that support data migration (online and offline) from on-prem environments, including remote or edge locations, to Azure cloud storage resources. There are different Data Box solutions available or in preview to meet various needs for performance, capacity, and functionality, with as well as without compute. In addition to being used for data migration, there are also Data Box solutions (e.g., Edge) that converge compute and storage for deployment at remote or edge locations.

Data Box Gateway is a software-defined virtual machine appliance that deploys on VMware and Microsoft (e.g., Hyper-V) hypervisors. Offline Data Box solutions scale from a single 8 TB SSD to a PB of capacity with varying functionality.

As a reminder, blobs are analogous to and what Microsoft Azure refers to instead of objects (e.g., object storage). Also remember that Azure blobs include block, page (512-byte page aligned for VHDX) and append (similar to other vendors object storage). Microsoft Azure in addition to blobs, supports file (SMB and NFS) access, along with table (database) and queue storage services.

Where to learn more

Learn more about Microsoft Azure Data Box, Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

What this all means

Azure Data Box type solutions and services are becoming more common as well as diverse. With the addition of compute in some of these solutions to support remote edge workloads, the lines may blur with some of the converged and hyper-converged infrastructure (HCI) solutions. Likewise, keep an eye to see how cloud service providers leverage solutions like Data Box Edge to further place their reach out to the edge enabling fog (e.g., cloud at the edge) among other converged functionality. Continue reading Part 2 Microsoft Azure Data Box Family, Part 3 Microsoft Azure Data Box Disk Test Drive Review, and Part 4 Microsoft Azure Data Box Disk Impressions as part of Microsoft announced Azure Data Box updates.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Cloud File Data Storage Consolidation and Economic Comparison Model #blogtobertech

The following is a new Industry Trends Perspective White Paper Report titled Cloud File Data Storage Consolidation and Economic Comparison Model.

This new report looks at a distributed file server and consolidated cloud storage economic comparison, with a fundamental economic comparison model for remote (on-prem) distributed file server and cloud storage consolidation decision-making. IT data infrastructure resource (server, storage, I/O network, hardware, software, services) decision-making involves evaluating and comparing the technical attributes (speeds, feeds, features) of a solution or service. Another aspect of data infrastructure resource decision-making involves assessing how a solution or service will support and enable a given application workload from a Performance, Availability, Capacity, and Economic (PACE) perspective.

Keep in mind that all application workloads have some amount of PACE resource requirements that may be high, low, or various permutations thereof. Performance, Availability (including data protection along with security), and Capacity are addressed via technical speeds, feeds, and functionality, along with workload suitability analysis. The E in PACE resource decision-making is about the economic analysis of the various costs associated with different solution approaches.
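As a simple illustration of the E in PACE, the following sketch compares a notional three-year cost of distributed on-prem file servers versus consolidated cloud file storage. Every figure is a made-up placeholder; a real model, like the one in the report, would factor in many more cost items as well as the P, A, and C dimensions.

```python
# Hypothetical sketch: three-year economic comparison, placeholder figures.
sites = 10
onprem_refresh_per_site = 12_000    # $ hardware/software refresh, placeholder
onprem_opex_per_site_yr = 3_000     # $ power, space, admin, placeholder

cloud_gb_per_site = 4_000
cloud_price_gb_month = 0.05         # $ per GB-month, placeholder
cloud_egress_per_site_yr = 600      # $ network egress, placeholder

years = 3
onprem = sites * (onprem_refresh_per_site + onprem_opex_per_site_yr * years)
cloud = sites * (cloud_gb_per_site * cloud_price_gb_month * 12 * years
                 + cloud_egress_per_site_yr * years)

print(f"On-prem 3-year: ${onprem:,.0f}")  # $210,000 with these placeholders
print(f"Cloud 3-year:   ${cloud:,.0f}")   # $90,000 with these placeholders
```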

Read more in this Server StorageIO Industry Trends and Perspective (ITP) Report.

Where to learn more

Learn more about Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

What this all means

When comparing and making data infrastructure resource decisions, consider the application workload PACE characteristics. Also keep in mind that PACE means Performance (productivity), Availability (data protection), Capacity, and Economics. This includes making decisions from a technical feature, functionality (speeds and feeds), and capacity perspective, as well as how the solution supports your application workload. Leverage resources, including tools to perform analysis such as the Cloud File Data Storage Consolidation and Economic Comparison Model approaches.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Not Dead Yet Zombie Technology (Declared Dead yet still alive) October 2018 Update #blogtobertech

Not Dead Yet Zombie Technology (Declared Dead yet still alive) October 2018 Update. Musician Phil Collins has an excellent name for his current tour Not Dead Yet which is a reminder that he is still alive and performing, at least one more time. With Halloween just around the corner, it is that time of the year to revisit zombie technology, those technologies, tools, techniques, trends that are declared dead yet still alive.

A concert tour named Not Dead Yet sets the stage for this post, which is about IT zombie technology, in particular data infrastructure related technologies, tools, trends, and related topics that have been declared dead by some people, yet are still alive. Not only are these tools and techniques being used, they are also being enhanced and will be around for future years of zombie technology updates; not dead yet.

As a refresher, a Zombie technology is one that is declared dead, usually by some upstart vendor and its pundits along with other followers in favor of whatever new has been announced. As luck or fate would have it, some of these startup or new technologies that declare an older established one as being dead, tend to end up on the where are they now list.

In other words, some technologies do survive and gain in both industry adoption, as well as the even more critical customer deployment category. Likewise, some of these technologies that result in something existing being declared dead-end up surviving to live alongside or near what its supporters declared dead.

Another not so uncommon occurrence is when the new technology whose supporters declared something else as being dead joins the ranks of being declared dead by a yet more modern technology, thereby becoming a zombie technology itself. Put a different way, being on the zombie technology list may not be the same as being the shiny new popular trendy technology. However, it can be both a badge of honor, not to mention a revenue and profit maker.

Zombie Technology List

What are some old and new Zombie technologies that have been declared dead, yet are still alive, being used and enhanced, not dead yet?

IBM Mainframe

This is a perennial favorite, and while not seeing the growth associated with other platforms including Intel, AMD, and ARM among others, it has its place with many large organizations. Not only does it continue to be manufactured and enhanced, with even some new customers buying them, it also runs native Linux in addition to traditional z/OS, among other software.

Fibre Channel (FC)

FC has been declared dead for over a decade, and while Ethernet-based server storage I/O networking continues to gain ground in both industry as well as customer deployments, there is still plenty of life in and with FC for years to come, at least for some environments. NVMe over Fabrics (NVMeoF) which is the NVMe protocol carried on top of a fabric network (SAN if you prefer) is gaining industry popularity and customer curiosity.

There are many flavors of NVMe over fabrics, including NVMe over Fibre Channel, e.g., FC-NVMe, which is similar to mapping the SCSI command set (SCSI_FCP) onto Fibre Channel, more commonly known as FCP or simply FC.

What this means is that FC-NVMe is just another upper-level protocol (ULP) that can co-exist with others on the same Fibre Channel network. In other words, FICON, FCP, and NVMe, among others, can co-exist on the same Fibre Channel-based network. Will everybody using Fibre Channel move to FC-NVMe? Good question; ask the FC folks, and the answer, not surprisingly, would be yes or probably. Will new customers looking to do NVMe over some type of fabric or network use Fibre Channel instead of Ethernet or another transport? Some will, while others will go other routes. For now, what is clear is that FC is still alive and thus on the zombie technology list, not dead yet.

SAS and SATA

Both have been declared dead as they have been around for a while, and over time NVMe will pick up more of their workload; however, near term, SAS and SATA will continue as lower-cost, smaller-footprint options for general-purpose and bulk lower-cost direct attachment. On the other hand, look for more M.2 NVMe Next Generation Form Factor (NGFF), aka gum stick, devices appearing on physical servers along with storage systems. Likewise, watch for increased deployment of NVMe U.2 (aka SFF-8639) drive form factor SSDs using NAND flash as well as 3D XPoint and Intel Optane, among other mediums, as part of new server and storage platforms. BTW, USB is not dead yet either, just saying.

Microsoft Windows

Windows desktop, Windows Server, and even Hyper-V virtualization have been declared dead for some time now, yet all continue to evolve. Just recently, Microsoft released Windows Server 2019, which includes many enhancements, from software-defined storage (Storage Spaces Direct, aka S2D), software-defined networking, converged and hyper-converged infrastructure (HCI) deployment options, and expanded virtualization capabilities, to Windows Subsystem for Linux (WSL) enhancements (e.g., a native bash shell on Windows), and containers with Kubernetes as well as Docker updates, among others. In other words, it's not dead yet.

Hard Disk Drive (HDD)

Having been declared dead for decades, while not the primary frontline storage medium it was in the past, HDDs continue to evolve and be used alongside faster flash SSDs, and as a front end to magnetic tape. Some of the larger consumers of HDDs continue to be cloud service providers, also known as mega-scalers, storing large amounts of bulk data. I suspect that HDDs will continue to be on the zombie technology list for at least another decade or so, as has been the case for the past several decades.

Magnetic Tape

Like HDDs, tape is still in use in some environments, and as with HDDs, the cloud service providers are significant users of tape as low-cost, low-access, high-capacity bulk storage for cold archives, front-ended by HDDs, SSDs, or both.

Cloud (Public, Private and Hybrid)

Yes, believe it or not, some have declared cloud dead, along with hybrid cloud, private cloud among others, oh well.

Physical Machine (PM)

Also known as bare metal, servers were declared dead a decade or so ago at the hands of the then emerging Intel-based virtualization hypervisors, notably VMware ESXi and, to a lesser extent, Microsoft Hyper-V. I say lesser extent with Hyper-V in that there was less noise about PMs and BMs being dead than there was from some in the ESXi virtual kingdom. Needless to say, PMs and BMs, from Intel to AMD and ARM-based, along with IBM Power among many others, are very much alive as dedicated servers in the cloud and as VM and container hosts, as well as being accessorized with FPGA, ASIC, GPU, and other resources.

Virtual Machines

Listen to some from the container, serverless, or something-new crowd, and you will hear that virtual machines (VMs) are dead, which for some workloads may be right. On the other hand, similar to the physical machine (PM) or bare metal (BM) servers that were declared dead by the VMs a decade or so ago, VMs are alive and doing well. Not only are they doing well; like containers, continued adoption and deployment of VMs will occur both on-prem as well as in the cloud, as will BMs and PMs, now known as dedicated servers in the clouds.

NAS and Files

If you listened to some of the pundits and press, NAS and files were supposed to have been dead several years ago at the hands of object storage. The reality today is that object storage continues to grow in customer deployments, and while the industry is not as enamored (or drunk) with it as it was a few years ago, the technology is here to stay and will be around for many decades to come.

That brings us back to NAS and files, which were declared dead by the object opportunists; file access is very much alive and continues to gain ground. In fact, most cloud providers have added NAS file-based access (NFS, SMB, and POSIX among others), either natively or via partners, to their solutions. Likewise, most object storage platforms have also added or enhanced their NAS file-based access for compatibility while their customers re-engineer their applications or create new apps that are object and blob native. Thus, NAS and file-based access are proud members of the zombie technology list.

There are many more tools, technologies, trends, and techniques that belong on the above list; for example, backup has been declared dead, along with the PCIe bus, NAND flash, programming, data centers, databases, and SQL, among many others. What they have in common is that they are part of a growing list of things declared dead yet not dead yet, and thus are zombie technologies.

Where to learn more

Learn more about Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

What this all means

What is your favorite zombie technology, tool, trend or technique?

What zombie technologies, tools, trends or techniques should be added to the list and why?

Many tools, technologies, techniques, and trends are often declared dead, sometimes before they are even really alive and mature, by those who have something new or who simply lack creativity (e.g., dead marketing?), so it is easier to declare something else dead. While some succeed, prospering and being added to the zombie technology list (a badge of honor), others quietly end up on the where are they now list: those vendors, tools, technologies, techniques, and trends that were on the famous hit parade in the past, having faded away or ended up dead (unlike a zombie).

Don't be scared of zombie technology, while also being prepared to embrace what is new and to use both in new ways. Right now, I don't have tickets to go see Phil Collins' Not Dead Yet tour; maybe that will change. However, for now, keep in mind: don't be scared when looking at Not Dead Yet Zombie Technology (Declared Dead yet still alive) October 2018 Update #blogtobertech.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Ten tips to reduce your cloud compute storage costs #blogtobertech

The following are Ten tips to reduce your cloud compute storage costs.

In some cases, reducing your cloud costs means spending the same yet getting more value and resources that provide a business benefit. For example, paying the same yet upgrading to fewer, faster server, storage, and I/O network resources to support growth while boosting productivity. In other words, when measured on a cost per unit of work done or service enabled, there should be an improvement.

On the other hand, cost cutting can be measured by an actual reduction in spending, for example, consolidating multiple applications to a lower cost compute instance running at higher utilization. The caveat is that while the spend may be reduced, is the corresponding level of service or application and user productivity negatively impacted?
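Here is a quick sketch of the difference between cutting spend and improving cost per unit of work, using made-up numbers: flat spend with faster resources can still reduce the cost of each transaction.

```python
# Hypothetical sketch: cost per transaction before and after an upgrade.
before_spend, before_tx = 10_000, 2_000_000  # $/month, transactions/month
after_spend, after_tx = 10_000, 3_500_000    # same spend, faster instances

print(f"Before: ${before_spend / before_tx:.4f} per transaction")  # $0.0050
print(f"After:  ${after_spend / after_tx:.4f} per transaction")    # $0.0029

# Spend is flat, yet cost per unit of work drops by about 43 percent.
```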

Other examples are a hybrid of removing complexity and cost, as well as cost-cutting, for instance finding orphan resources that are powered on yet not used. Orphan resources include IP addresses that are assigned, being charged for, yet not used, or a virtual machine instance that is powered on yet not used. Another orphan example is a VM instance that is powered off and no longer used, along with the disks still assigned to it, as well as any snapshots or backups.
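As a sketch of hunting for orphan resources programmatically, the following uses the Azure management SDKs for Python to flag unattached managed disks and unused public IP addresses. The subscription ID is a placeholder, and a real sweep would also cover snapshots, images, and powered-off instances.

```python
# Hypothetical sketch: find unattached disks and unused public IPs.
# Assumes azure-identity, azure-mgmt-compute, and azure-mgmt-network.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<your-subscription-id>"  # placeholder
cred = DefaultAzureCredential()

compute = ComputeManagementClient(cred, subscription_id)
network = NetworkManagementClient(cred, subscription_id)

# Managed disks not attached to any VM have managed_by set to None
for disk in compute.disks.list():
    if disk.managed_by is None:
        print(f"Orphan disk: {disk.name} ({disk.disk_size_gb} GB)")

# Public IPs with no IP configuration are allocated but unused
for ip in network.public_ip_addresses.list_all():
    if ip.ip_configuration is None:
        print(f"Orphan public IP: {ip.name} ({ip.ip_address})")
```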

Ten tips to reduce your cloud costs

  • Utilize client and remote site data file cache to reduce cloud egress network fees
  • Bring your own software licenses for operating systems and applications
  • Monitor your cloud cost summaries regularly to watch out for surprises
  • Find and remove orphan resources including instances, images, IP address, storage volumes, buckets
  • Revisit whether your data is stored in the appropriate storage class or tier for how it is used (see the tier-change sketch after this list). Likewise, leverage lower-durability storage tiers as locations for additional protection instead of merely as a single destination to support cost-cutting. For example, cost-cutting would be placing your only data protection copy and archive on a lower-cost, lower-durability storage tier; removing cost while boosting availability would be putting a copy of your data on two or more economically priced, less durable storage tiers in different locations, instead of a single copy on a highly durable tier in one place.
  • Consolidate many smaller, lower cost instances into fewer larger instances, removing complexity and costs
  • Utilize reserved instances (RI) along with prepayment discounts, also check with your finance department to see if there are benefits of considering as OpEx or CapEx.
  • Audit your RIs to make sure you have the appropriately sized resources to meet workload needs.
  • Utilize spot instances for spot or ad-hoc interruptible workloads
  • Leverage ephemeral on-instance storage as a cache to boost performance
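Regarding the storage class and tier tip above, the following is a minimal sketch using the azure-storage-blob Python SDK to move a blob to a cheaper access tier; the connection string, container, and blob names are hypothetical placeholders.

```python
# Hypothetical sketch: re-tier an existing blob to the Archive tier.
from azure.storage.blob import BlobServiceClient, StandardBlobTier

svc = BlobServiceClient.from_connection_string("<connection-string>")
blob = svc.get_blob_client(container="backups", blob="2018/archive.tar")

# Archive is the cheapest tier to hold, but rehydration is slow and
# reads incur fees, so only archive rarely accessed data.
blob.set_standard_blob_tier(StandardBlobTier.Archive)
```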

Additional Tips and Recommendations

Everything is not the same, so why treat everything the same, including assigning it to the same type of resources? Keep in mind that all applications have some level of Performance, Availability, Capacity, and Economic (PACE) resource requirements that need to be balanced.

Similar to on-prem environments, one of the top mistakes when choosing storage is looking only at cost per capacity, particularly with flash-based SSD and NVMe-accessed storage. Also look into what the storage performance thresholds are, as well as any access, API, or service call fees.

Watch out for excessive API and cloud service calls beyond your normal monthly limits. For example, consistently running rsync on some storage classes can result in surprise monthly invoices. Likewise, moving data around, changing encryption, or other operations may wipe out the savings from going to a lower storage tier. Look beyond the monthly cost per capacity: what are the access fees, including egress (reading data), as well as API calls such as list, dir, or other operations?
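To see how access and API fees can rival the capacity line item, here is a toy monthly estimate with made-up placeholder prices; substitute your provider's actual rate card before drawing conclusions.

```python
# Hypothetical sketch: monthly storage bill beyond raw capacity.
capacity_gb = 5_000
price_per_gb = 0.01         # $/GB-month, placeholder
egress_gb = 800
egress_price_per_gb = 0.08  # $/GB read out, placeholder
api_ops = 30_000_000        # list/read/write calls per month
price_per_10k_ops = 0.004   # $ per 10,000 operations, placeholder

total = (capacity_gb * price_per_gb
         + egress_gb * egress_price_per_gb
         + api_ops / 10_000 * price_per_10k_ops)

# Capacity is $50, yet egress ($64) plus API calls ($12) exceed it.
print(f"Estimated monthly cost: ${total:,.2f}")
```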

Likewise, for compute instances, look beyond the base cost, also considering how much memory (DRAM), I/O for storage and networking, and on-instance storage (temporary or persistent) is included, bring-your-own-license options, and the number of cores or virtual CPUs along with their speed. Also, watch for any limits on the number of I/O operations per instance, particularly with fast flash SSD, including NVMe-accessed storage. Just because it's flash or NVMe does not mean it's going to be fast.

Where to learn more

Learn more about Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

What this all means

Have situational awareness of your on-prem environment, knowing your costs of resources as well as the level of service, to make informed decisions. Don't be scared; be prepared, avoid flying blind, plan ahead, and apply the appropriate resources, in the appropriate quantity, to meet application workload needs. Keep in mind that there are more than ten tips to reduce your cloud compute storage costs; however, these should get you off to a good start.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.