PACE your Server Storage I/O decision-making, it's about application requirements

PACE your Server Storage I/O decision-making, it’s about application requirements. Regardless of whether you are looking for physical, software-defined, virtual, cloud or container storage; block, file or object; primary, secondary or protection copies; standalone, converged, hyper-converged, cluster-in-a-box or other forms of storage and packaging, when it comes to server storage I/O decision-making, it’s about the applications.

I often see people deciding on the best storage before the questions of requirements, needs and wants are even mentioned. Sure, the technology is important, and so too are the techniques and trends, including using new things in new ways as well as old things in new ways. There are lots of buzzwords on the storage scene these days, but don’t even think about buying until you truly understand your business’ storage needs.

However, when it comes down to it, unless you have a unique need, most environments’ server and storage I/O resources exist to protect, preserve and serve applications and their information or data. Recently I wrote a couple of articles over at Network Computing tied to server and storage I/O decision-making, balancing technology buzzwords with business and application requirements.

PACE and common applications characteristics

PACE your server storage decisions

A theme I mention in the above two articles, as well as elsewhere on server, storage I/O and applications, is PACE. That is, application Performance, Availability, Capacity, Economics (PACE). Different applications will have various attributes, in general as well as in how they are used. For example, database transaction activity vs. reporting or analytics, logs and journals vs. redo logs, indices, tables, import/export, scratch and temp space. PACE (figure 2.7) describes the application and data characteristics and needs.

Server Storage I/O PACE

Common Application Pace Attributes

All applications have PACE attributes

  • Those PACE attributes vary by application and usage
  • Some applications and their data are more active vs. others
  • PACE characteristics will vary within different parts of an application

Think of an application along with associated data PACE as its personality or how it behaves, what it does, how it does it and when along with value, benefit or cost along with Quality of Service (QoS) attributes. Understanding the applications in different environments, data value and associated PACE attributes is essential for making informed server, storage I/O decisions from configuration to acquisitions or upgrades, when, where, why and how to protect, or performance optimization along with capacity planning, reporting, and troubleshooting, not to mention addressing budget concerns.

Data and Application PACE

Primary PACE attributes for active and inactive applications and data:
P – Performance and activity (how things get used)
A – Availability and durability (resiliency and protection)
C – Capacity and space (what things use or occupy)
E – Energy and Economics (people, budgets and other barriers)

Some applications need more performance (server compute, or storage and network I/O), while others need space capacity (storage, memory, network or I/O connectivity). Likewise, some applications have different availability needs (data protection, durability, security, resiliency, backup, BC, DR) that determine which tools, technologies and techniques to use. Budgets are also a concern; for some applications this means enabling more performance per cost, while others focus on maximizing space capacity and protection level per cost. PACE attributes also define or influence policies for QoS (performance, availability, capacity), as well as thresholds, limits, quotas, retention and disposition, among others.
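As a rough sketch of how PACE attributes can inform decisions, the snippet below models an application's PACE personality as a small data structure and uses it to pick a storage tier. The class, field names, thresholds, and tier names are all illustrative assumptions for this example, not from any particular product:

```python
from dataclasses import dataclass

@dataclass
class PaceProfile:
    """Illustrative PACE attributes for an application workload."""
    performance_iops: int      # P - activity level (how things get used)
    availability_pct: float    # A - resiliency/protection target
    capacity_gb: int           # C - space consumed or reserved
    budget_per_month: float    # E - economic constraint

def storage_tier(profile: PaceProfile) -> str:
    """Pick a storage tier from PACE attributes (deliberately simplified policy)."""
    if profile.performance_iops > 50_000:
        return "nvme-ssd"
    if profile.availability_pct >= 99.99:
        return "replicated-ssd"
    return "capacity-hdd"

# Different parts of one application can have different PACE personalities:
oltp_tables = PaceProfile(80_000, 99.999, 500, 2_000.0)
archive_logs = PaceProfile(200, 99.9, 10_000, 300.0)
print(storage_tier(oltp_tables))   # high activity maps to a fast tier
print(storage_tier(archive_logs))  # low activity maps to a capacity tier
```

The point of the sketch is that the same application yields different placement decisions for its transaction tables vs. its archive logs once PACE attributes are captured explicitly.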

Where to learn more

Learn more about data infrastructures and tradecraft related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

The best storage will be the one that meets or exceeds your application requirements instead of the solution that meets somebody else’s needs or wants. Keep in mind: PACE your Server Storage I/O decision-making, it is about application requirements.

Ok, nuff said, for now.

Cheers GS

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

TOE NVMeoF TCP Performance Line Boost Performance Reduce Costs

Yes, you read that correctly: leverage TCP Offload Engines (TOE) to boost the performance of TCP-based NVMeoF (e.g., NVMe over Fabrics) while reducing costs. Keep in mind that there is a difference between cutting costs (something that causes or moves problems and complexities elsewhere) and reducing or removing costs (e.g., finding, fixing, removing complexities).

Cutting costs can be easy: simply swap items for lower-priced ones, introducing performance bottlenecks or some other compromise. Likewise, boosting performance can be addressed by throwing (deploying) more hardware (and/or software) at the problem, resulting in higher costs or some other compromise.

On the other hand, as mentioned above, finding, fixing, and removing complexity and overhead results in cost savings while doing the same work, or enables more work to be done for the same cost, maximizing hardware, software, and network investments. In other words, a better return on investment (ROI) and a lower total cost of ownership (TCO).

Software Defined Storage and Networks Need Hardware

With the continued shift towards software-defined data centers, software-defined data infrastructures, software-defined storage, software-defined networking, and software-defined everything, all of those need something in common: hardware-based compute processing.

In the case of software-defined storage, including standalone, shared fabric or networked-based, converged infrastructure (CI) or hyper-converged infrastructure (HCI) deployment models, there is the need for CPU compute, memory, and I/O, in addition to storage devices. This means that the software to create, manage, and perform storage tasks needs to run on a server’s CPU, along with I/O networking software stacks.

However, sometimes the obvious needs to be restated, which is that software-defined anything requires hardware somewhere in the solution stack. Likewise, depending on how the software is implemented, it may require more hardware resources, including server compute, memory, I/O, and network and storage capabilities.

Keep in mind that networking stacks, including upper and lower-level protocols and interfaces, leverage software to implement their functionality. Therefore, the value proposition of using standard networks such as Ethernet and TCP is the ability to leverage lower-cost network interface cards (or chips), also known as NICs combined with server-based software stacks.

On the one hand, costs can be reduced by using less expensive NICs and using the generally available server CPU compute capabilities to run the TCP and other networking stack software. On systems with lower application or other software performance demands, this can work out OK. However, for workloads and systems using software-defined storage and other applications that compete for server resources (CPU, memory, I/O), this can result in performance bottlenecks and problems.

Many Server Storage I/O Networking Bottlenecks Are CPU Problems

There is a classic saying that the best I/O is the one that you do not have to do. Likewise, the second-best I/O is the one with the least overhead (and cost) and the best performance. Another saying is that many application, database, server, and storage I/O problems are actually due to CPU bottlenecks. Fast storage devices need fast applications on fast servers with fast networks. This means finding and removing blockages, including offloading the server CPU from performing network I/O processing using TOEs.

Wait a minute, isn’t the value proposition of using software-defined storage or networking to use low-cost general-purpose servers instead of more expensive hardware devices? With some caveats, yes; however, understand how much server CPU is being used to run the software-defined storage and networking stacks and handle upper-level functionality. Supporting higher performance or larger workloads can mean putting in bigger (scale-up) and more (scale-out) servers, along with their increased connectivity and management overhead.

This is where the TOEs come into play by leveraging the best of both worlds to run software-defined storage (and networking) stacks, and other software and applications on general-purpose compute servers. The benefit is the TCP network I/O processing gets offloaded from the server CPU to the TOE, thereby freeing up the server CPU to do more work or enabling a smaller, lower-cost CPU to be used.

After all, many servers, storage, and I/O networking problems are often server CPU problems. An example of this is running the TCP networking software stack using CPU cycles on a host server that competes with the other software and applications. In addition, as an application does more I/O, for example, issuing reads and write requests to network and fabric-based storage, the server’s CPUs are also becoming busier with more overhead of running the lower-layer TCP and networking stack.

The result is that server resources (CPU, memory) run at higher utilization, but with more overhead. Higher resource utilization with low or no overhead, low latency, and high productivity are good things, resulting in lower cost per work done. On the other hand, high CPU utilization, server operating system or kernel-mode overhead, poor latency, and low productivity are not good things, resulting in higher cost per work done.

This means there is a loss of productivity as more time is spent waiting, and the cost to do a unit of work, for example, an I/O or transaction, increases (there is more overhead). Thus, offload engines (chips, cards, adapters) come into play to shift some software processing from the server CPU to a specialized processor. The result is lower server CPU overhead leaving more server resources for the main application or software-defined storage (and networking) while boosting performance and lowering overall costs.
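To make the cost-per-work-done point concrete, here is a back-of-the-envelope sketch. Every figure (server cost, IOPS capacity, stack overhead percentages) is a hypothetical assumption chosen for illustration, not a benchmark result:

```python
# Hypothetical illustration: cost per I/O with and without TCP offload.
# All numbers are made-up assumptions for the sketch, not measurements.

server_cost_per_hour = 2.00          # fully burdened server cost (assumed)
iops_capacity = 400_000              # I/Os the server could drive per second

# Without offload, suppose 30% of CPU goes to the TCP/network stack,
# leaving 70% for productive application work.
tcp_stack_overhead = 0.30
useful_iops = iops_capacity * (1 - tcp_stack_overhead)

cost_per_million_io = server_cost_per_hour / (useful_iops * 3600 / 1e6)
print(f"no offload: ${cost_per_million_io:.4f} per million I/Os")

# With a TOE, assume the host-side stack overhead drops to 5%; the server
# does more useful work for the same hourly cost, lowering cost per I/O.
useful_iops_toe = iops_capacity * (1 - 0.05)
cost_per_million_io_toe = server_cost_per_hour / (useful_iops_toe * 3600 / 1e6)
print(f"with TOE:   ${cost_per_million_io_toe:.4f} per million I/Os")
```

Same server, same hourly cost; only the share of CPU spent on the networking stack changes, and the unit cost of work drops accordingly.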

Graphics, Compute, Network, TCP Offload Engines

Offload engines are not new; they have been around for a while and, in some cases, are more common than some realize, going by different names. For example, Graphics Processing Units (GPUs) are used for offloading graphics- and compute-intensive tasks to special chips and adapter cards. Other examples of offload processors include network TCP Offload Engines (TOE), compression, and storage processing, among others.

The basic premise of offload engines is to move or shift processing of specific functions from having their software running on a general-purpose server CPU to a specialized processor (ASIC, FPGA, adapter, or mezzanine card). By moving the processing of functions to the offload or unique processing device, performance can be boosted while freeing up a server’s primary processor (CPU) to do other useful (and productive) work.

There is a cost associated with leveraging offloads and specialized processors; however, the business benefit should be offset by reducing primary server compute expenses or doing more work with available resources and driving network bandwidth line rates performance. The above should result in a net TCO reduction and boost your ROI for a given system or bill of material, including hardware, software, networking, and management.


Fast Storage Needs Fast Servers and I/O Networks

Ethernet network TOEs became popular in the industry back in the early 2000s, focusing on networked storage and storage networks that relied on TCP (e.g., iSCSI).

Fast forward to today, and there is continued use of networked (ok, fabric) storage over various interfaces, including Ethernet supporting different protocols. One of those protocols is NVMe in NVMe over Fabrics (NVMeoF) using TCP and underlying Ethernet-based networks for accessing fast Solid State Devices (SSDs).

Chelsio Communications T6 TOE for NVMeoF

An example of server storage I/O network TOEs, including those that support NVMeoF, are those from Chelsio Communications, such as the T6 25/100Gb devices. Chelsio announced today server storage I/O benchmark proof points for TCP-based NVMe over Fabrics (NVMeoF) TOE-accelerated performance. StorageIO had the opportunity to look at the performance-boosting ability and CPU savings benefit of the Chelsio T6 prior to today’s announcement.

After reviewing and validating the Chelsio proof points, test methodology, and results, it is clear that the T6 TOE enabled solution boosts server storage I/O performance while reducing host server CPU usage. The Chelsio T6 solution, combined with Storage Performance Development Kit (SPDK) software, provides local-like performance of network fabric distributed NVMe (using TCP-based NVMeoF) attached SSD storage while reducing host server CPU consumption.

“Boosting application performance, efficiency, and effectiveness of server CPUs are key priorities for legacy and software defined datacenter environments,” said Greg Schulz, Sr. Analyst Server Storage. “The Chelsio NVMe over Fabrics 100GbE NVMe/TCP (TOE) demonstration provides solid proof of how high-performance NVMe SSDs can help datacenters boost performance and productivity, while getting the best return on investment of datacenter infrastructure assets, not to mention optimize cost-of-ownership at the same time. It’s like getting a three for one bonus value from your server CPUs, your network, and your application perform better, now that’s a trifecta!”

You can read more about the technical and business benefits of the Chelsio T6 TOE enabled solution along with associated proof points (benchmarks) in the PDF white paper found here and their Press Release here. Note that the best measure, benchmark, proof point, or test is your application and workload, so contact Chelsio to arrange an evaluation of the T6 using your workload, software, and platform.

Where to learn more

Learn more about TOE, server, compute, GPU, ASIC, FPGA, storage, I/O networking, TCP, data infrastructure and software defined and related topics, trends, techniques, tools via the following links:

Chelsio Communications T6 Performance Press Release (PDF)
Chelsio Communications T6 TOE White Paper (PDF)
Application Data Value Characteristics Everything Is Not the Same
PACE your Infrastructure decision-making, it’s about application requirements
Data Infrastructure Server Storage I/O Tradecraft Trends
Data Infrastructure Overview, Its What’s Inside of Data Centers
Data Infrastructure Management (Insight and Strategies)
Hyper-V and Windows Server 2025 Enhancements

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

The large hyperscale web services and other large environments leverage offload engines and specialized processing technologies (chips, ASICs, FPGAs, GPUs, adapters) to boost performance while reducing server compute costs or getting more value out of a given server platform. If it works for the large hyperscalers, it can also work for your environment or your software-defined platform.

One benefit is reducing the number and cost of items on your software-defined platform bill of materials (BoM). Another benefit is freeing up server CPU cycles to run your storage, network or other software to get more performance and work done. Yet another benefit is the ability to further stretch your software license investments, getting more work done per software license unit.

Have a look at the Chelsio Communications T6 line of TOEs for NVMeoF and other workloads to boost performance, reduce CPU usage and lower costs. See for yourself how the TOE NVMeoF TCP performance line can boost performance and reduce costs.

Ok, nuff said, for now.

Cheers GS

Greg Schulz – Microsoft MVP Cloud and Data Center Management, previous 10 time VMware vExpert. Author of Software Defined Data Infrastructure Essentials (CRC Press), Data Infrastructure Management (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.


Cloud Ready Data Protection for Hybrid Data Centers Are In Your Future

Join me for a free webinar Cloud Ready Data Protection for Hybrid Data Centers and Data Infrastructures 11AM PT Thursday July 11th produced by Redmond Magazine sponsored by Quest Software.


Hybrid Data Infrastructures and Data Centers

Hybrid cloud and on-prem data centers are in your future if not already a reality. In addition to using public cloud and on-prem resources, your environment is likely a mix of many different operating systems, applications and servers (virtual and physical), along with multiple backup and recovery technologies.

Cloud Ready Data Protection for Hybrid Data Centers

In this engaging, interactive webinar, we will look at trends, issues, and challenges, as well as provide best practices in what you can do to address them today. You’ll learn how to simplify and streamline your system, application and data protection in both the cloud and data center without compromise, all while removing complexity and cost.

What You Will Learn

Join Microsoft MVP, VMware vExpert and IT analyst Greg Schulz of Server StorageIO along with Michael Gogos, Data Protection expert from Quest, as they discuss how to:

  • Become hybrid and cloud data protection ready
  • Use the cloud for backup and disaster recovery
  • Protect cloud applications and their data
  • Address different hybrid data protection scenarios
  • Take action today to prepare for tomorrow

 

Where to learn more

Learn more about data protection, backup and recovery along with other related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

I look forward to you joining Michael Gogos of Quest Software and me on Thursday, July 11th at 11AM PT for our interactive discussion (bring your questions) around Cloud Ready Data Protection for Hybrid Data Centers and what you can do today (Register here).

Ok, nuff said, for now.

Cheers GS

Greg Schulz – Multi-year Microsoft MVP Cloud and Data Center Management, ten-time VMware vExpert. Author of Data Infrastructure Insights (CRC Press), Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Also visit www.picturesoverstillwater.com to view various UAS/UAV e.g. drone based aerial content created by Greg Schulz. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2019 Server StorageIO and UnlimitedIO. Visit our companion site https://picturesoverstillwater.com to view drone based aerial photography and video related topics. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Driving ROI with Cloud Storage Consolidation Seminars


Join me in a series of in-person seminars driving ROI with cloud storage consolidation for unstructured file data.

Various Data Infrastructure options from on-prem to edge to cloud and beyond

These initial seminars are being held at Amazon Web Services (AWS) locations: April 30 in New York City, May 1 in Chicago, and May 2 in Houston. At each of these three cities, I will be joined by experts from NetApp, Talon and AWS as we look at issues, trends and what can be done today (including hands-on demos) driving ROI with cloud storage consolidation for unstructured file data.

What The Seminars Are About

These seminars look at how to remove cost and complexity while boosting productivity for distributed sites with unstructured data and NAS file servers. The seminars look at making informed decisions, balancing technical considerations with a business return on investment (ROI) model, along with return on innovation (the other ROI) from boosting productivity. It’s not about simply cutting costs, which can create chaos or compromise elsewhere; it’s about removing complexity and cost while boosting productivity with smart cloud storage consolidation for unstructured file data.


Distributed File Server Cloud Storage Consolidation ROI Economic Comparison

During these seminars I will discuss various industry and customer trends, challenges and solutions, particularly for environments with distributed file servers for unstructured file data. As part of my discussion, we will look at both a technical and a business ROI model for distributed file server cloud storage consolidation, based on the Server StorageIO white paper report titled Cloud File Data Storage Consolidation and Economic Comparison Model (free PDF download here).
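As a hedged illustration of what such a pro forma comparison might look like, the sketch below compares annual costs for distributed on-prem file servers vs. a consolidated cloud model. Every figure here is a made-up assumption for demonstration, not a number from the white paper:

```python
# Hypothetical pro forma sketch: consolidating N distributed file servers
# into cloud storage with local caches. All figures are illustrative
# assumptions, not results from the StorageIO economic comparison model.

sites = 10
onprem_cost_per_site_yr = 12_000    # server, storage, backup, admin time (assumed)
cloud_storage_yr = 30_000           # consolidated capacity plus egress (assumed)
cache_appliance_per_site_yr = 2_500 # per-site cache/edge device (assumed)

current_yr = sites * onprem_cost_per_site_yr
proposed_yr = cloud_storage_yr + sites * cache_appliance_per_site_yr
savings_yr = current_yr - proposed_yr
roi_pct = 100 * savings_yr / proposed_yr

print(f"current:  ${current_yr:,}/yr")
print(f"proposed: ${proposed_yr:,}/yr")
print(f"savings:  ${savings_yr:,}/yr  (ROI {roi_pct:.0f}%)")
```

A real analysis would add migration cost, productivity impact, and multi-year depreciation; the point is simply that the comparison is a small, explicit model rather than a guess.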

Where When and How to Register

New York City Tuesday April 30, 2019 9:00AM
Amazon Web Services
7 West 34th St.
6th Floor
Learn more and register here.

Chicago, Illinois Wednesday May 1, 2019 9:00AM
Amazon Web Services
222 West Adams Street
Suite 1400
Learn more and register here

Houston Texas Thursday May 2, 2019 9:00AM
Amazon Web Services
825 Town and Country Lane
Suite 1000
Learn more and register here

Where to learn more

Learn more about cloud storage consolidation, data infrastructures and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Making informed decisions for data infrastructure resources, including cloud storage consolidation and distributed file servers, involves technical, application workload and business economic analysis. Which of the three (technical, application workload, financial) is more important for enabling a business benefit will depend on your perspective, as well as area of focus. However, all of the above need to be considered in balance as part of making an informed data infrastructure resource decision. That is where a discussion about a business financial ROI model (pro forma if you prefer) comes into play as part of cloud storage consolidation, including for distributed file servers of unstructured file data.

I look forward to meeting with attendees and hope to see you at the events April 30th in New York City, May 1st in Chicago, and May 2nd in Houston as we discuss driving ROI with cloud storage consolidation at these seminars.

Ok, nuff said, for now.

Cheers GS

Greg Schulz – Multi-year Microsoft MVP Cloud and Data Center Management, ten-time VMware vExpert. Author of Data Infrastructure Insights (CRC Press), Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Also visit www.picturesoverstillwater.com to view various UAS/UAV e.g. drone based aerial content created by Greg Schulz. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.


World Backup Day Reminder Don’t Be an April Fool Test Your Data Recovery


March 31 is the annual World Backup Day, spotlighting awareness around the importance of protecting your data and testing your data recovery. The focus of world backup and recovery day spans from the largest enterprise and cloud service providers (e.g., hyperscalers) to the smallest SMB, SOHO, ROBO and home consumers (including your photos or other valuable items).

Granted, the technology, tools, techniques and trends will differ with scope as well as scale.

However, the fundamental data protection approaches apply to all. That is, having multiple copies of different points in time spread across separate storage (systems, servers, devices, media, cloud services) as well as offsite (and off-line).


Why The Need For Data Protection And Recovery

Data protection encompasses many different things, from accessibility, durability, resiliency, reliability, and serviceability (RAS) to security and consistency. Availability includes basic and high availability (HA), business continuance (BC), business resiliency (BR), disaster recovery (DR), archiving, backup, logical and physical security, fault tolerance, and isolation and containment spanning systems, applications, data, metadata, settings, and configurations.

From a data infrastructure perspective, availability of data services spans from local to remote, physical to logical and software-defined, virtual, container, and cloud, as well as mobile devices. On the left side of the following figure are various data protection and security threat risks and scenarios that can impact availability, or result in a data loss event (DLE), data loss access (DLA), or disaster. The figure also shows various techniques, tools, technologies, and best practices to protect data infrastructures, applications, and data from those threat risks.


Don’t Become An April 1st Recovery Fool

April 1st, also known as April Fool’s Day, should be a reminder to plan as well as test your recovery, so the joke is not on you. Data protection, including backup, archiving, security, disaster recovery (DR), business continuance (BC) and business resiliency (BR), is not a once-a-year focus but a 365-day-a-year continuum. Likewise, the focus needs to expand from just making sure you backed up or made copies of your data to making sure you can recover it. After all, what good is a check box that you did a backup on world backup day only to find out the next day that you cannot recover, or that what you thought was protected is not there.

If you already have good backups and data protection copies, verify that they are in fact good by restoring their contents to a different location. It should go without saying (however, all too often common sense needs to be repeated): make sure in the course of testing data protection, including restoring, that you do not inadvertently cause a disaster. Also, go a step beyond verifying that you can read the data stored on disk, tape, SSD, or optical media; actually try to use or open the data. This verifies that you can both access and restore the data from the protection medium or cloud location, as well as unlock, decrypt, uncompress or re-inflate deduped data.

Evolving Data Protection Including Backup and Recovery

While the emphasis of world backup day is on the importance of data protection, including having backup copies, there also needs to be an emphasis on recovery. It is essential to make sure data is protected, which means having multiple copies at different time intervals stored on several mediums or systems across one or more locations. The previous is the basis of 4 3 2 1 data protection: having four or more copies, with three or more time-interval versions, spread across two or more different systems or storage mediums, with at least one off-site.

4 3 2 1 data protection (via Software Defined Data Infrastructure Essentials)

4 – At least four copies of data (or more); enables durability in case a copy goes bad, is deleted or corrupted, or a device or site fails.
3 – Three (or more) versions of the data to retain; enables various recovery points in time to restore, resume, or restart from.
2 – Data located on two or more systems (devices or media); enables protection against device, system, server, file system, or other fault/failure.
1 – At least one of those copies off-premises and not live (isolated from the active primary copy); enables resiliency across sites, as well as a space, time, and distance gap for protection.

Also, make sure that at least one of those copies is off-site and preferably offline. Likewise, it is crucial that whatever is protected, backed up, copied, cloned, snapshotted, checkpointed, or replicated is also usable. In addition to having multiple copies and versions, those data protection copies should also occur at various altitudes or layers in the data infrastructure stack, from applications to databases, file systems, virtual machines, or containers, among others.
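The 4 3 2 1 guideline above lends itself to a simple automated check. The sketch below is a minimal illustration; the copy inventory format (`system`, `version`, `offsite` fields) is an assumption made for this example:

```python
# Minimal 4 3 2 1 data protection rule checker (illustrative sketch).
# Each copy records where it lives, which point-in-time version it is,
# and whether it is off-site; field names are assumptions for the example.

def meets_4321(copies: list[dict]) -> bool:
    """True if the copies satisfy the 4 3 2 1 guideline described above."""
    four = len(copies) >= 4                              # 4+ copies of the data
    three = len({c["version"] for c in copies}) >= 3     # 3+ time-interval versions
    two = len({c["system"] for c in copies}) >= 2        # 2+ systems or mediums
    one = any(c["offsite"] for c in copies)              # 1+ copy off-site
    return four and three and two and one

copies = [
    {"system": "primary-nas", "version": "mon", "offsite": False},
    {"system": "backup-disk", "version": "sun", "offsite": False},
    {"system": "backup-disk", "version": "sat", "offsite": False},
    {"system": "cloud-vault", "version": "sun", "offsite": True},
]
print(meets_4321(copies))  # True: 4 copies, 3 versions, 3 systems, 1 off-site
```

A real inventory would also track whether the off-site copy is offline (air-gapped) and whether each copy has passed a restore test, per the advice above.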

What About Individual Data Protection at Home

For consumers and individuals, as well as small businesses, make sure that you are copying your essential data from your computer to some other storage medium (or several). For example, have a local copy on an external hard disk drive (HDD) or a solid-state device (SSD). Better yet, have a couple of copies for different time intervals, both on-site and off-site. Anything important you have stored on-site, including photos, images, video, audio, records, spreadsheets, and other documents, should have extra copies off-site or in the cloud.

Likewise, anything you store in the cloud should have at least one other copy stored elsewhere. Don’t be scared of the cloud; however, do your homework and be prepared. Just as having only one copy of your data on-site is risky, so is the other extreme of having only one copy in the cloud. Instead, put a copy in the cloud as well as keeping one on-site (or on-prem if you prefer) or elsewhere.

Don’t Forget Your Home Photos and Movies

Speaking of photos and other documents, for those that are not yet digitized, scanned, or available as electronic copies, get them converted. Get in touch with a data protection and backup professional, as well as a photo (and digital asset) organizer. They can provide advice on best practices and techniques, as well as tools, technologies, and services to keep your digital data safe and secure. Some photo organizer professionals can also help with converting your old photos, movies, and videos to new digital formats. For example, get in touch with Holly Corbid at Capture Your Photos (www.captureyourphotos.com), a certified professional photo organizer and member of the Association of Professional Photo Organizers.

Where to learn more

Learn more about World Backup Day, recovery, and data protection along with other related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

March 31, World Backup Day, is more than an annual event for vendors to send out press releases on the importance of data protection. The focus should also expand to a world recovery day or something similar, and span 365 days a year. Now is a good time to review and verify that your existing data protection, including backup and recovery, works as expected. Keep in mind the World Backup Day reminder: don’t be an April fool; test your data recovery before you need it.

Ok, nuff said, for now.

Cheers GS

Greg Schulz – Multi-year Microsoft MVP Cloud and Data Center Management, ten-time VMware vExpert. Author of Data Infrastructure Insights (CRC Press), Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Also visit www.picturesoverstillwater.com to view various UAS/UAV e.g. drone based aerial content created by Greg Schulz. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. Visit our companion site https://picturesoverstillwater.com to view drone based aerial photography and video related topics. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Deliver Data Management Availability For Multi Cloud Environments Webinar


Join me on Thursday, March 14th at 11AM PT when I host a webinar on the topic Deliver Data Management Availability For Multi Cloud Environments. This is a free webinar (also available for replay) sponsored by Veeam and produced by Redmond Magazine, where I will be joined by Dave Russell, Vice President of Enterprise Strategy at Veeam Software, for an interactive, engaging discussion.

Our discussion, including questions for attendees, will look at how IT landscapes are evolving, how hybrid and multi-cloud have become the new normal, and what can be done to protect, preserve, secure, and serve data spread across on-prem and different public clouds. Topics will include what to do today to prepare for tomorrow, minimizing the risk of hybrid environments, changing environments along with their requirements, and identifying strategies for sound data management and data protection, including backup for hybrid environments.

Register for the Deliver Data Management Availability For Multi Cloud Environments Webinar here (live Thursday, March 14th at 11AM PT).

Where to learn more

Learn more about cloud, multi-cloud, hybrid and data protection via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Remember to register here for the live March 14, 2019 event. Join me for an interactive discussion with Dave Russell as we discuss the trends, issues, challenges and what can be done to put a strategy in place for data protection and to Deliver Data Management Availability For Multi Cloud Environments.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Multi-year Microsoft MVP Cloud and Data Center Management, ten-time VMware vExpert. Author of Data Infrastructure Insights (CRC Press), Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Also visit www.picturesoverstillwater.com to view various UAS/UAV e.g. drone based aerial content created by Greg Schulz. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. Visit our companion site https://picturesoverstillwater.com to view drone based aerial photography and video related topics. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Announcing My New Book Data Infrastructure Management Insight Strategies


My new book, Data Infrastructure Management Insight Strategies, published via Auerbach/CRC Press, is now available via CRC Press and Amazon.com among other global venues.

My Fifth Solo Book Project – Data Infrastructure Management

Data Infrastructure Management Insight Strategies (e.g. the white book) is my fifth solo published book, in addition to several other collaborative works. As its title suggests, the focus of this new book is data infrastructures: the tools, technologies, techniques, and trends, including the hardware, software, services, people, and policies inside data centers that get defined to support business and application service delivery. The book (ISBN 9781138486423) is soft-covered (electronic Kindle versions are also available) with 250 pages and over 100 figures, tables, tips, and examples. You can explore the contents via Google Books here.

Data Infrastructure Books by Greg Schulz
Stack of my solo books with common theme around Data Infrastructure topics

Data Infrastructure Management Book
Data Infrastructure Management – Insight and Strategies e.g. the White book (CRC Press 2019)

Some of My Other Books Include

Click on the following book images to learn more about, as well as order your copy.

Software Defined Data Infrastructure Essentials Book (SNIA Recommended Reading List)
Software Defined Data Infrastructure Essentials (SDDI) – Cloud, Converged, and Virtual Fundamental Server Storage I/O Tradecraft, e.g. the Blue book, covers software-defined, SDDC, SDDI, and hybrid among other topics, including serverless, containers, NVMe, SSD, flash, PMEM, and SCM (CRC Press 2017). Available at Amazon.com among other global venues.

Cloud and Virtual Data Storage Networking Book (Intel Recommended Reading List)
Cloud and Virtual Data Storage Networking (CVDSN) – Your Journey to efficient and effective Information Services e.g. the Yellow or Gold Book (CRC Press 2011) available at Amazon.com among other global venues.


The Green and Virtual Data Center Book (Intel Recommended Reading List)
The Green and Virtual Data Center (TGVDC) – Enabling Efficient, Effective and Productive Data Infrastructures e.g. the Green Book (CRC Press 2009) available at Amazon.com among other venues.

Resilient Storage Networks Book
Resilient Storage Networks (RSN) – Designing Flexible, Scalable Data Infrastructures (Elsevier 2004), e.g. the Red Book, is SNIA Education Endorsed Reading, available at Amazon.com among other venues. I have some free copies of RSN for anybody who is willing to pay shipping and handling; send me a note and we will go from there.

Where to learn more

Learn more via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Today more than ever, there tends to be a focus on the date something was created or published, as there is a lot of temporal content with a short shelf life. This means a lot of content, including books, is being created that is temporal, usually focused on a particular technology, tool, or trend with a life span or attention focus of a couple of years at best.

On the other hand, there is also content being created today that combines new and emerging technologies, tools, and trends with time-tested strategies, techniques, and processes, some of whose names or buzzwords will evolve. My books fit into the latter category of combining current as well as emerging technologies, tools, trends, and techniques that support a longer shelf life; just insert your new favorite buzzword, buzz trend, or buzz topic as needed.

Data Infrastructure Books by Greg Schulz

You will also notice, looking at the stack of books, that Data Infrastructure Management Insight and Strategies is a smaller, soft-covered book compared to others in my collection. The reason is that this new book can be a quick read to address what you need, as well as a companion to others in the stack, depending on your focus or requirements.

A common question I get, having written several books, not to mention the thousands of articles, tips, reports, blogs, columns, white papers, videos, and webinars among other content, is: what’s next? Good question; see what’s next, as well as check out some other things I’m doing over at www.picturesoverstillwater.com where I’m generating big data that gets stored and processed in various data infrastructures including cloud ;) .

Will there be another book, and if so, on or about what? As mentioned, there are some projects I’m exploring; whether they get finished or take different directions, wait and see what’s next.

How do I find the time to create these books, and how long does it take? The time required varies, as does the amount of work and whatever else I’m doing. I try to leverage the book (and other content creation projects) with other things I’m doing to maximize time. Some book projects have been very fast, a year or less. Some take longer, such as Software Defined Data Infrastructure Essentials, as it is a big book with lots of material that will have a long shelf life.

Do I write and illustrate the books, or do I have somebody do them for me? For my books I do the writing and illustrating (drawings, figures, images) myself, along with some of the layout, relying on external copy editors and production folks.

What advice do I give to those wanting to write a book? Understand that publishing a book is a project: there is the actual writing, editing, reviews, artwork, research, labs, and other supporting items as book companions. Also understand why you are writing a book: for fame, fortune, acclaim, to share with others, or some other reason. I also recommend, before you write your entire book, talking with others who have been published to test the waters and get feedback. You might find it easier to shop an extended outline than a completed manuscript, unless you are writing a novel or similar.

Want to learn more about writing a book (or other content), get feedback, have other questions, drop me a note and will do what I can to help out.

Data Infrastructure Management Book

There is an old saying, publish or perish; well, I just published my fifth solo book, Data Infrastructure Management Insight Strategies, which you can buy at Amazon.com among other venues.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2019. Author of Data Infrastructure Insights (CRC Press), Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Also visit www.picturesoverstillwater.com to view various UAS/UAV e.g. drone based aerial content created by Greg Schulz. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2019 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Fall 2018 Dutch Data Infrastructure Industry Trends Decision Making Seminars


There is still time to register for the fall 2018 Dutch data infrastructure industry trends decision-making seminars, November 27th and 28th. The workshops are being organized by Brouwer Storage Consultancy of Holland and will be held in Nijkerk.

On Tuesday, November 27th, there will be an advanced education workshop seminar covering data infrastructure industry trends and technology update presented by myself. On Wednesday, November 28th, there will be a deeper dive workgroup seminar session addressing data infrastructure related strategy, planning, and decision-making.


Data Infrastructures Industry Trends November 27

What’s new, what’s the buzz, what you need to know about: from speeds and feeds, slots and watts, to who’s doing what, from what’s interesting to what’s relevant for your environment.

This one-day seminar is a new and improved version of the popular speeds and feeds session, where we look at what’s new and emerging in the industry as well as what’s applicable to your environment. You will be updated on the latest trends and emerging data infrastructure technologies to support digital transformation, little and big data analytics, AI/ML/DL, GDPR, data protection, edge/fog compute, and IoT among others, from legacy to software-defined cloud, container, converged, virtual, and composable. The seminar is a mix of presentation and engaging discussion as we look into the details of favorite or new technologies for those who are old-school, new-school, and current or future school.

Part I – Industry Trends, Applications, and Workload
Part II – Server Compute, Memory, I/O, hardware and software
Part III – Storage and Data protection for on-prem and cloud
Part IV – Bringing it all together, managing and decision making

Topics to be covered include among others:

  • What these trends, tools, technologies mean for different environments of various size.
  • Tips on evaluating legacy and startup or newer vendors as well as technologies.
  • Updates on vendors, services, technologies, products you may or may not have heard of.
  • Cloud (public/private/multi-cloud/hybrid) compute, storage and management.
  • Containers (including docker, windows, kubernetes, FaaS, serverless, lambda).
  • Converged and hyper-converged; Gen-Z and composable; NVMe and NVMeoF.
  • Persistent Memory (PMEM), Storage Class Memory (SCM), 3D XPoint, NAND Flash SSD.
  • Legacy vs. software-defined, appliances, storage systems, block, NAS file, object, table.
  • Bulk cloud data migration appliances, storage for the edge, file sync and share.
  • Role and importance of context (what’s applicable, what something means).
  • Who’s doing what, what to look for today for the future.

This seminar is for those involved with ICT/IT servers, storage, I/O networking, and associated management activities, including data protection, across legacy as well as software-defined cloud, container, converged, hyper-converged, and virtual environments. It is for professionals who manage, architect, or are otherwise involved with data infrastructure strategy and acquisitions.

Data Infrastructures Deep Dive Decision Making November 28

Enabling Informed Strategy and Decision Making, moving from what are the tools, trends and technologies evolving to what to use, when, where, why, how, along with strategy, planning, decision-making, and ongoing management.

If the answer is cloud, converged, container, composable, edge, fog, digital transformation, on-prem, hybrid, or software-defined, what were, or are, the questions to plan and prepare for deployment today, along with in the future? This workshop-format seminar provides answers to fundamental questions, with essential insight into software-defined data infrastructures (SDDI) and software-defined data centers (SDDC). For ICT/IT professionals (architects, strategists, administrators, managers) currently or planning on being involved with servers, storage, I/O networking, hardware, software, converged, containers, cloud backup/data protection, and associated topics, this seminar is for you.

Clouds, converged systems, and containers will be a primary focus, along with related themes and topics that you need to know more about. Don’t be scared of clouds; be prepared, including for on-prem, public, hybrid, and multi-cloud. As part of our deeper-dive decision-making strategy focus, we look at cloud cost considerations, including whether you are paying too much or not enough (e.g., are you depriving your applications of performance to save money?). We will explore various decision-making and strategy topics spanning AWS, Microsoft Azure, Azure Stack, Windows and Hyper-V, VMware (including on AWS), and OpenStack (is it still open for business?).

Additional topics, trends, themes include:

  • Everything is not the same across cloud services, converged, or containers.
  • Different environments have various data infrastructure resource needs.
  • How to balance legacy on-prem application needs with emerging technology options.
  • Different comparison criteria for smaller environments and remote offices vs. larger enterprises.
  • Do-it-yourself (DiY) vs. turnkey software vs. bundled tin-wrapped software solutions.
  • Strategy, planning, decision-making, and ongoing management.

How To Register For Seminar Workshops

Learn more about fall 2018 Dutch Server StorageIO Data Infrastructure Tuesday trends workshop seminar here (PDF), and Wednesday deeper dive decision-making workshop session here (PDF).

To register and obtain more information, contact event organizers Brouwer Storage consultancy at +31-33-246-6825 or +31-652-601-309 and info at brouwerconsultancy.com.

Where to learn more

Learn more about Data Infrastructure and related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Everything is not the same across different organizations, environments, application workloads, data, technologies, tools, and trends. These two one-day interactive workshop seminars provide timely insight into what’s going on in the data infrastructure industry, along with common IT organization challenges and how to address them. Moving from the what to what to use when, where, why, and how, along with alternatives, gaining insight and awareness to avoid flying blind, enables effective strategy, decision-making, planning, and ongoing management. Learn more and sign up for the Fall 2018 Dutch Data Infrastructure Industry Trends Decision Making Seminars; see you in Nijkerk.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Microsoft Azure Data Box Disk Impressions #blogtobertech


Data Box Disk Test Drive Impressions is the last of a four-post series looking at Microsoft Azure Data Box. View Part 1 Microsoft announced Azure Data Box updates, Part 2 Microsoft Azure Data Box Family, and Part 3 Microsoft Azure Data Box Disk Test Drive Review.

Overall, I liked the Azure Data Box experience, along with the range of options to select the best-fit solution for my needs. A common trend among the major cloud service providers such as AWS, Microsoft Azure, and Google is recognizing that a one-size-fits-all solution does not meet different customer needs.

The only things that I did not like and would like to see improved with Azure Data Box are two items: one at the beginning, the other at the end of the process. Granted, with Data Box Disks still in preview, there is time for those items to be addressed before general availability, and I have passed the feedback on to Microsoft.

At the beginning of the process, things are pretty straightforward with good tools along with resources to help you navigate which type of Data Box to order, how to order, specify your account details and other information.

What I did not like with the up-front experience was, after the quick ordering and notification process, the delay of a week or more until being notified when a Data Box would be arriving. Granted, I was not in a rush, and Microsoft did indicate that it could take about ten days to be informed of availability; still, this is something that should happen quickly as resources become available. Another option is for Microsoft to add an ordering option for priority or low priority in the future.

The other experience that I did not like was at the very end: the final notification could be better (perhaps it was stuck in an email spam trap, though I checked and could not find it). Not only a final email saying your data is copied, but also a reminder of where your block or page blobs were copied to (e.g., what you set up when ordering).

Monitoring the progress of the process, I knew when the Data Box drives arrived at Microsoft, and when the copy started and completed, including error status. Having gotten used to receiving update notifications from Azure, a final one saying congratulations, your data has been copied, check here for any errors or other info, along with a reminder of where the data was copied to, would be useful.

Likewise, a follow-up note from Microsoft saying that the Azure Data Box drives used as part of the transfer were securely erased along with a certificate of digital destruction would be useful for compliance purposes.

As mentioned above, overall I found the Data Box Disk experience very positive and a great way to move bulk data faster than what could be done with available networks. My next step is to migrate some of the transferred data to cold long-term archive storage, and some to Azure Files, with some staying in block blobs. There are also a couple of VHD and VHDX files that will be moved and attached to VMs for additional testing.

Where to learn more

Learn more about Microsoft Azure Data Box, Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

For those who need to move large amounts of data, including structured, unstructured, semi-structured, little or big data, to a cloud resource, solutions such as Azure Data Box may be in your future. Likewise, for those looking to support remote and edge workloads, from AI, ML, and DL inferencing to large-scale data pre-processing, data collection and acquisition, video, telemetry, and IoT among others, Data Box type solutions may be in your future. Overall, my impressions of Microsoft Azure Data Box Disk were favorable, and I was able to address a project that had been on my to-do list for some time.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Microsoft Azure Data Box Disk Test Drive Review #blogtobertech


Microsoft Azure Data Box Test Drive is part three of a four-part series looking at Data Box. View Part 1 Microsoft announced Azure Data Box updates, Part 2 Microsoft Azure Data Box Family, and Part 4 Microsoft Azure Data Box Disk Impressions.

Getting Started

The workflow for using Data Box involves selecting the type of Data Box to use via the Microsoft Azure portal (here), or the Data Box Family page (here).

Getting Started via the Microsoft Azure Data Box Family Page image via Microsoft.com

The first step in ordering a Data Box is to specify your Azure subscription, the type of operation (e.g., import data into Azure, or export out), the source country/region, and the destination Azure region.

Selecting Data Box from Azure Portal

The next step is to determine what type of Data Box; in this test I chose 40 TB Data Box Disks. Make a note of the fees to avoid any surprises.

Selecting Data Box Disks (40 TB) From Azure Portal

After selecting the type of Data Box, fill in the storage account information, using existing resources or creating new ones as needed. Make a note of these selections, as you will need them after the copy is done; this is where your data will be located.

Specify Azure Storage Account Information Where Data Will Transfer To

Once the order is placed, an email is received confirming the order and, the service being in preview, indicating that it might take ten days to hear a status update on availability of the devices.

Email notification received after the order is placed

After about ten days, I was contacted by Microsoft via an email (not shown) confirming the amount of data to be copied to determine how many disks would be needed. Once this was confirmed with Microsoft, a status update was noted on the Azure dashboard.

Azure Data Box Dashboard Status after order placed

After a few days, a box arrived with the Data Box disks, cables and return shipping labels enclosed. Also received was an email notification indicating the disks had arrived.

Email notice Data Box has arrived on site (on-prem if you prefer)

The following is the physical box that contains the Data Box disks that I received from Microsoft.

The shipping box with Data Box Disks arrives

Once you get the Data Box, go to the Azure portal for Data Box and access the tools. There are tools and commands for Windows as well as Linux that are needed for accessing and unlocking the disks. This is also where you obtain device IDs. You will also need the access key phrase you specified earlier as part of placing the order.

Access Data Box Software Tools and Keys from Azure Portal

Inside the shipping box was a pair of 8 TB SATA SSDs, SATA to USB cables, along with return shipping labels.

Contents inside the shipping box, two Data Box 8 TB disks

From the Azure portal, access the device IDs that will be needed, along with the passphrase for unlocking the Data Box disks. You will also want to download the tools and follow the other instructions on the portal for accessing the disks.

Azure Data Box tools, device IDs and Keys

The Windows system I used for testing is a virtual machine hosted on a VMware vSphere ESXi 6.7 host. After physically attaching the Data Box Disks to the VM host, a virtual or software attachment was done by adding USB devices to the VM.

Virtual Attach of Data Box Disks to VMware vSphere ESXi host and guest VM

Once the VM had the Data Box disks attached and mapped, they appeared to Windows. After downloading the Data Box software tools and unlocking the devices, they were ready to have data copied to them. Note that the disks appear as regular Windows devices once unlocked. Simply using BitLocker does not unlock the drives; you need to use the Data Box tools. Speaking of Windows disks, there are a couple of folders on the Data Box disk when shipped, including ones for Block Blob and Page Blob, along with verification items.

View of Data Box Disks (8 TB each) after attaching to Windows system

Note that you are given several days as part of the base transfer cost, after which charges for extra days apply. Since I had a few extra days, I used some of that time to do staging and reorganization of data before the actual copy.

Data copy is done using your choice of tools, for example, Robocopy among many others; I used a combination of Robocopy and Retrospect. Also note that most data should be placed in the folder or directory structure of your choice inside the Block Blob folder; Page Blobs are for VHDX files to be used with virtual machines on Azure. After spending a few days copying the data I wanted to move, along with performing verification, it was time to pack up the devices.
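To sketch the staging idea, here is a minimal Python example that mirrors a source tree into the disk's Block Blob folder while preserving directory structure. The folder name "BlockBlob" and the paths are my assumptions for illustration, not a documented Data Box tool:

```python
import shutil
from pathlib import Path

def stage_for_data_box(source: Path, data_box_root: Path) -> int:
    """Mirror a source tree into the disk's BlockBlob folder, preserving
    the directory structure; returns the number of files staged."""
    # "BlockBlob" folder name assumed per the shipped disk layout described above
    dest = data_box_root / "BlockBlob" / source.name
    shutil.copytree(source, dest, dirs_exist_ok=True)
    # Count regular files that landed in the destination tree
    return sum(1 for p in dest.rglob("*") if p.is_file())
```

In practice a tool like Robocopy adds restartable copies, logging and verification on top of a plain copy, which matters at multi-terabyte scale.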

As a reminder, blobs are what Microsoft Azure refers to as objects (e.g., object storage). Also remember that Azure blobs include block, page (512-byte page aligned, for VHDX) and append (similar to other vendors' object storage) types. In addition to blobs, Microsoft Azure supports file (SMB and NFS) access, along with table (database) and queue storage services.
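Since page blobs are addressed in 512-byte pages, a VHDX destined for the Page Blob folder needs a size that is a multiple of 512 bytes. A quick Python sketch of the alignment arithmetic (illustrative only, not part of the Data Box tooling):

```python
PAGE_SIZE = 512  # Azure page blobs are addressed in 512-byte pages

def page_blob_padding(size_bytes: int) -> int:
    """Bytes of zero padding needed to round a file size up to the
    next 512-byte boundary (0 if already aligned)."""
    remainder = size_bytes % PAGE_SIZE
    return 0 if remainder == 0 else PAGE_SIZE - remainder
```

For example, a 1,000-byte file would need 24 bytes of padding to reach 1,024 bytes (two pages).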

The following shows the return label attached to the shipping box that contains the Data Box disks and cables. I also included a copy of the shipping label inside the box just in case something happened during shipment. Once prepared for delivery, I took the box to a local UPS store where I received a shipment receipt (not shown). Later that day I also received an email from Microsoft indicating the shipment was in-progress.

Data Box disks packaged with return receipt (was in the box)

The Azure portal shows status of Data Box shipment being sent to Microsoft, along with a follow-up email notification.

Azure Data Box portal status

Email notification of Data Box on the way to Microsoft.

Notice data box is on the way to Azure

After a few days' wait, checking the Azure portal showed the Data Box had arrived at Microsoft and copy operations were underway. Remember, the storage account you specified back in the early steps is where you will look for your data. This is something I think Microsoft can improve on by providing a link, or some reminder in the status of where the data is being copied to. Likewise, a copy completion email notice would be handy after getting used to the other alerts earlier in the process.

Azure Data Box portal showing disk copy operation status

Looking in the Blob storage resources of the Azure storage account specified during the ordering process, the contents of the Data Box Disks can be found.

Contents of Data Box disks copied into specified Azure Blobs and storage account

The following shows folders that I had copied from on-prem systems to the Data Box, now located in the proper Azure block blobs. Not shown are the page blobs where I moved some VHDXs.

Mission accomplished, data folders now stored in Azure block blobs

Where to learn more

Learn more about Microsoft Azure Data Box, Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Overall the test drive of the Azure Data Box Disk solution was positive, and I look forward to trying out some of the other Data Box solutions, both offline and online options, in the future. Continue reading Part 4 Microsoft Azure Data Box Disk Impressions as part of this series including Microsoft Azure Data Box Disk Test Drive Review.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Microsoft Azure Data Box Family #blogtobertech


Microsoft Azure Data Box Family is part two of a four-part series looking at Data Box. View Part 1 Microsoft announced Azure Data Box updates, Part 3 Microsoft Azure Data Box Disk Test Drive Review, Part 4 Microsoft Azure Data Box Disk Impressions.

Microsoft Azure Data Box Overview

Microsoft has several Data Box solutions available or in preview to meet various customer needs. These include both online as well as offline solutions comprising hardware (except Data Box Gateway), software tools and cloud services.

Data Box Online

Microsoft has two online Data Box offerings that provide real-time access to Azure cloud storage resources from on-prem locations, including remote and edge sites. The online Data Box solutions are Edge and Gateway, both with local on-prem storage.


Data Box Edge image via Microsoft.com

Data Box Edge (Preview)

Currently in preview, Data Box Edge is a 1U appliance that combines hardware along with software resources for deployment on-prem at edge or remote locations. Data Box Edge places converged compute and storage resources locally as an appliance, along with connectivity to Azure cloud-based resources.

Intended workloads and applications for Data Box Edge include remote AI, ML, and DL inferencing, data processing or pre-processing before sending to the Azure cloud, and functioning as an edge compute, data protection and data transfer platform (e.g., a cloud storage gateway) with local compute. Data Box Edge is similar in functionality and focus to other cloud service provider solutions such as AWS Snowball Edge (SBE). Management is via the Data Box Edge resource in the Azure portal, a web UI to create and manage resources, devices, and shares.

Other Data Box Edge attributes include:

  • Supports Azure Blob or Files via SMB and NFS storage access protocols
  • Dual Intel Xeon processors each with 10 CPU cores, 64GB RAM
  • 2 x 10 Gbps SFP+ copper cables, 2 x 1 Gbps RJ45 cables
  • 8 NVMe SSD (1.6 TB each), no HA, 12.8 TB total raw cap
  • 2 x 1 GbE (one for management, one for user access)
  • 2 x 25 GbE (can operate at 10 GbE) and 2 x 25 GbE ports
  • Local web UI for management and configuration

Data Box Gateway (Preview)

Also in preview, Data Box Gateway is a virtual machine (VM) based software-defined appliance that runs on VMware vSphere (ESXi) or Microsoft Hyper-V hypervisors. The functionality of Data Box Gateway is that of a cloud storage gateway, providing access to Azure Blob (Page and Block) or Files (NAS) via SMB or NFS protocols. Learn more about both Data Box Edge and Data Box Gateway, including pricing, via the Azure site.

Data Box Offline Solutions

Microsoft has several offline Data Box offerings, including previously available models and new ones in preview. Offline Data Box solutions enable large amounts of data to be moved from on-prem primary, remote and edge locations to Azure cloud storage resources. Bulk data movement operations can be one-time or recurring, in support of migrating big data for energy, research, media & entertainment and other large volumes of data.

Other bulk movement use cases include archive, backup, BC/DR, and virtual machine and application migration among others. Use Data Box Offline solutions when large amounts of data need to be moved from on-prem to the Azure cloud faster than available networks will support.

Offline Data Box solutions include:

  • Data Box Heavy (Preview) 1 PB Storage, 800 TB usable
  • Data Box 100 TB (80 TB usable)
  • Data Box Disk (Preview) 40 TB (35 TB Usable)


Data Box Heavy 1 PB (Preview) image via Microsoft.com

Data Box Heavy 1 PB (Preview)

  • Appliance with Up to 800 TB usable capacity per order
  • One system per order
  • Supports Azure Blob or Files
  • Copy data to up to 10 storage accounts
  • 1 x 1/10 Gbps RJ45 connector, 4 x 40 Gbps QSFP+ connectors
  • AES 256-bit encryption
  • Copies data using NAS SMB and NFS protocols


Data Box 100TB image via Microsoft.com

100 TB Data Box

  • An appliance that supports 80 TB usable storage capacity
  • Supports Azure Blob or Files
  • Copies data to 10 storage accounts
  • 1 x 1/10 GbE RJ45 connector
  • 2 x 10 GbE SFP+ connector
  • AES 256-bit encryption
  • Storage access and copy via SMB and NFS NAS protocols

Case of Data Box Disks image via Microsoft.com

Data Box Disk 40 TB (Preview)

  • Up to 35 TB usable capacity per order
  • Up to 5 SSDs per order
  • This is what I tested (2 x 8 TB)
  • Supports Azure Blob storage (Block and Page)
  • Copies data to a single storage account
  • USB/SATA II, III server I/O interface (comes with SATA to USB connector cables)
  • AES 128-bit encryption
  • Copy data with standard tools

Where to learn more

Learn more about Microsoft Azure Data Box, Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Which Microsoft Azure Data Box is the best? That depends on your needs and requirements.

Microsoft along with other major cloud service providers continue to evolve their data migration services. Realizing that customers who need, want, or have to get data to the cloud also need to remove barriers, solutions such as Azure Data Box are a step in eliminating cloud barriers while addressing cloud concerns. Continue reading Part 3 Microsoft Azure Data Box Disk Test Drive Review and Part 4 Microsoft Azure Data Box Disk Impressions as part of Microsoft Azure Data Box Family.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Microsoft announced Azure Data Box updates #blogtobertech


Microsoft announced Azure Data Box is the first in a series of four posts looking at Data Box including a test drive experience. View Part 2 Microsoft Azure Data Box Family, Part 3 Microsoft Azure Data Box Disk Test Drive Review, Part 4 Microsoft Azure Data Box Disk Impressions.

Microsoft Azure Data Box Family Page image via Microsoft.com

At Ignite, Microsoft announced Azure Data Box updates, which means it's time for a test drive and review. Microsoft has several Data Box solutions available or in preview to meet various customer needs. These include both online as well as offline solutions comprising hardware (except Data Box Gateway), software tools and cloud services. In general, Data Box enables bulk movement and migration of data from on-prem environments to Azure cloud storage resources, including blobs (e.g., objects) and files (e.g., NAS accessible).

What's the Need for a Data Movement Appliance Service?

Some might ask why you need a Microsoft Azure Data Box when there are fast networks. Good question, assuming you have fast networks that can move large amounts of bulk data in a timely manner. Microsoft supports traditional Internet-based access to Azure cloud resources for data migration, along with the higher speed ExpressRoute service, similar to Amazon Web Services (AWS) Direct Connect among other options.

On the other hand, if you need to move a large amount of data that would take weeks, months or longer sending over expensive networks, then solutions like Data Box are an option. Microsoft is not alone or unique having data storage migration or movement services. AWS has Snowball, Snowball Edge with compute, as well as the truck size Snowmobile for large-scale data movement. Google also has their Transfer services including Google Transfer Appliance.
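Some rough arithmetic shows why. The following Python sketch estimates wire-transfer time for a given capacity and link speed; the 80% sustained-efficiency default is my assumption for protocol overhead and link sharing, not a vendor figure:

```python
def network_transfer_days(capacity_tb: float, link_mbps: float,
                          efficiency: float = 0.8) -> float:
    """Estimated days to push capacity_tb (decimal terabytes) over a
    link_mbps link at a given sustained efficiency."""
    bits = capacity_tb * 1e12 * 8                    # TB -> bits
    seconds = bits / (link_mbps * 1e6 * efficiency)  # sustained throughput
    return seconds / 86400.0
```

A full 35 TB Data Box Disk order at a sustained 100 Mbps works out to roughly 40 days on the wire, while the disks themselves ship in days; even a Data Box Heavy's 800 TB over 1 Gbps is around three months.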

Who is Azure Data Box for?

Azure Data Box is for those who need to migrate data to Azure cloud storage and other services on a one-time or recurring basis. Another scenario is for those who need to have on-prem storage and optional compute at remote or edge locations in support of data acquisition, media & entertainment, energy exploration, AI, ML, DL inferencing, local data processing, pre-processing before sending to cloud among other workloads.

Yet other scenarios are for those who need to move large amounts of data online, offline, or in disconnected (also known as submarine) mode where a connection to the internet is not always available. Bulk data movement also applies to one-time as well as recurring data protection such as archive, backups, and BC/DR, as well as data shipping, virtual machine farm relocation, SQL Server data migration to the cloud, and data center consolidation among many other scenarios.

What is Azure Data Box

Azure Data Box is a combination of hardware, software and cloud services that supports data migration (online and offline) from on-prem environments, including remote or edge locations, to Azure cloud storage resources. There are different Data Box solutions available or in preview to meet various needs for performance, capacity and functionality, with as well as without compute. In addition to being used for data migration, there are also Data Box solutions (e.g., Edge) that converge compute and storage for deployment at remote or edge locations.

Data Box Gateway is a software-defined virtual machine appliance that deploys on VMware and Microsoft (e.g., Hyper-V) hypervisors. Offline Data Box solutions scale from a single 8 TB SSD to a PB of capacity with various functionality.

As a reminder, blobs are what Microsoft Azure refers to as objects (e.g., object storage). Also remember that Azure blobs include block, page (512-byte page aligned, for VHDX) and append (similar to other vendors' object storage) types. In addition to blobs, Microsoft Azure supports file (SMB and NFS) access, along with table (database) and queue storage services.

Where to learn more

Learn more about Microsoft Azure Data Box, Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Azure Data Box type solutions and services are becoming more common as well as diverse. With the addition of compute in some of these solutions to support remote edge workloads, the lines may blur with some of the converged and hyper-converged infrastructure (HCI) solutions. Likewise, keep an eye on how cloud service providers leverage solutions like Data Box Edge to further extend their reach out to the edge, enabling fog (e.g., cloud at the edge) among other converged functionality. Continue reading Part 2 Microsoft Azure Data Box Family, Part 3 Microsoft Azure Data Box Disk Test Drive Review, and Part 4 Microsoft Azure Data Box Disk Impressions as part of Microsoft announced Azure Data Box updates.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Cloud File Data Storage Consolidation and Economic Comparison Model #blogtobertech


The following is a new Industry Trends Perspective White Paper Report titled Cloud File Data Storage Consolidation and Economic Comparison Model.

Cloud File Data Storage Consolidation and Economic Comparison Model

This new report looks at Distributed File Server and Consolidated Cloud Storage Economic Comparison with a fundamental economic comparison model for remote (on-prem) distributed file-servers and cloud storage consolidation decision-making. IT data infrastructure resource (servers, storage, I/O network, hardware, software, services) decision-making involves evaluating and comparing technical attributes (speeds, feeds, features) of a solution or service. Another aspect of data infrastructure resource decision-making involves assessing how a solution or service will support and enable a given application workload from a Performance, Availability, Capacity, and Economic (PACE) perspective.

Cloud File Data Storage Consolidation and Economic Comparison Model

Keep in mind that all application workloads have some amount of PACE resource requirements that may be high, low or various permutations. Performance, Availability (including data protection along with security) as well as Capacity are addressed via technical speeds, feeds, functionality along with workload suitability analysis. The E in PACE resource decision-making is about the Economic analysis of various costs associated with different solution approaches.
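To illustrate the E in PACE, here is a toy Python model comparing distributed remote file servers against consolidated cloud storage with a small per-site cache. All cost figures and parameters are made-up placeholders for the sketch, not numbers from the report:

```python
def distributed_cost(sites: int, server_cost: float,
                     admin_per_site_year: float, years: int = 3) -> float:
    """Total cost of a file server at every remote site: hardware
    up front plus per-site admin cost each year."""
    return sites * (server_cost + admin_per_site_year * years)

def consolidated_cost(total_tb: float, cloud_tb_month: float,
                      cache_per_site: float, sites: int, years: int = 3) -> float:
    """Cloud capacity billed per TB-month plus a caching appliance per site."""
    return total_tb * cloud_tb_month * 12 * years + sites * cache_per_site
```

With 20 sites, a $10,000 server and $5,000/year of admin per site, three years of distributed file serving costs $500,000; 100 TB consolidated at $20 per TB-month plus $2,000 caches comes to $112,000. A real model also weighs network, egress, migration and availability costs, which is what the report's framework addresses.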

Read more in this Server StorageIO Industry Trends and Perspective (ITP) Report.

Where to learn more

Learn more about Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

When comparing and making data infrastructure resource decisions, consider the application workload PACE characteristics. Also keep in mind that PACE means Performance (productivity), Availability (data protection), Capacity and Economics. This includes making decisions from a technical feature, functionality (speeds and feeds) capacity as well as how the solution supports your application workload. Leverage resources including tools to perform analysis including Cloud File Data Storage Consolidation and Economic Comparison Model approaches.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Not Dead Yet Zombie Technology (Declared Dead yet still alive) October 2018 Update #blogtobertech


Not Dead Yet Zombie Technology (Declared Dead yet still alive) October 2018 Update. Musician Phil Collins has an excellent name for his current tour Not Dead Yet which is a reminder that he is still alive and performing, at least one more time. With Halloween just around the corner, it is that time of the year to revisit zombie technology, those technologies, tools, techniques, trends that are declared dead yet still alive.

Data Infrastructure Tools Trends Topics

IT Zombie Technology Declared Dead Not Dead Yet

With a concert tour named Not Dead Yet, that sets the stage for this post which is about IT Zombie Technology and in particular data infrastructure related technology, tools, trends and related topics that have been declared dead by some people, yet are still alive. Not only are these tools and techniques being used, but they are also being enhanced to be around for future years of zombie technology updates, not dead yet.

As a refresher, a Zombie technology is one that is declared dead, usually by some upstart vendor and its pundits along with other followers, in favor of whatever new has been announced. As luck or fate would have it, some of these startup or new technologies that declare an older established one dead tend to end up on the where are they now list.

In other words, some technologies do survive and gain in both industry adoption as well as the even more critical customer deployment category. Likewise, some of these technologies that result in something existing being declared dead end up surviving to live alongside or near what their supporters declared dead.

Another not so uncommon occurrence is when the new technology whose supporters declared something else dead joins the ranks of being declared dead by a yet more modern technology, thereby becoming a Zombie technology itself. Put a different way, being on the Zombie technology list may not be the same as being the shiny new popular trendy technology. However, it can be a badge of honor, not to mention a revenue and profit maker.

Data Infrastructure components

Zombie Technology List

What are some old and new Zombie technologies that have been declared dead, yet are still alive, being used and enhanced, not dead yet?

IBM Mainframe

This is a perennial favorite, and while not seeing the growth associated with other platforms including Intel, AMD and ARM among others, it has its place with many large organizations. Not only does it continue to be manufactured and enhanced, with even some new customers buying them, it also runs native Linux in addition to traditional z/OS among other software.

Fibre Channel (FC)

FC has been declared dead for over a decade, and while Ethernet-based server storage I/O networking continues to gain ground in both industry as well as customer deployments, there is still plenty of life in and with FC for years to come, at least for some environments. NVMe over Fabrics (NVMeoF) which is the NVMe protocol carried on top of a fabric network (SAN if you prefer) is gaining industry popularity and customer curiosity.

There are many flavors of NVMe over fabrics including NVMe over Fibre Channel, e.g., FC-NVMe which is similar to mapping the SCSI command set (SCSI_FCP) on to Fibre Channel or what is more commonly known as FCP or simply FC.

What this means is that FC-NVMe is just another upper-level protocol (ULP) that can co-exist with others on the same Fibre Channel network. In other words, FICON, FCP and NVMe among others can co-exist on the same Fibre Channel-based network. Will everybody using Fibre Channel move to FC-NVMe? Good question; ask the FC folks, and the answer not surprisingly would be yes or probably. Will new customers looking to do NVMe over some type of fabric or network use Fibre Channel instead of Ethernet or another transport? Some will, while others will go other routes. For now, what is clear is that FC is still alive and thus on the Zombie technology list, not dead yet.

SAS and SATA

Both have been declared dead as they have been around for a while, and over time NVMe will pick up more of their workload; however, near term, SAS and SATA will continue as lower cost, smaller footprint options for general purpose and bulk lower cost direct attachment. On the other hand, look for more M.2 NVMe Next Generation Form Factor (NGFF) aka gum stick devices appearing in physical servers along with storage systems. Likewise, watch for increased deployment of NVMe U.2 (aka SFF-8639) drive form factor SSDs using NAND flash as well as 3D XPoint and Intel Optane among other mediums as part of new server and storage platforms. BTW, USB is not dead yet either, just saying.

Microsoft Windows

Windows desktop, Windows Server, and even Hyper-V virtualization have been declared dead for some time now, yet all continue to evolve. Just recently, Microsoft released Windows Server 2019, which includes many enhancements spanning software-defined storage (Storage Spaces Direct aka S2D), software-defined networking, converged and hyper-converged infrastructure (HCI) deployment options, expanded virtualization capabilities, Windows Subsystem for Linux (WSL) enhancements (e.g., native bash shell on Windows), and containers with Kubernetes as well as Docker updates among others. In other words, it's not dead yet.

Hard Disk Drive (HDD)

Having been declared dead for decades, and while no longer the primary frontline storage medium it once was, HDDs continue to evolve and be used alongside faster flash SSDs, and as a front-end to magnetic tape. Some of the larger consumers of HDDs continue to be cloud service providers, also known as mega scalers, storing large amounts of bulk data. I suspect that HDDs will continue to be on the Zombie technology list for at least another decade or so, which has been the case for the past several decades.

Magnetic Tape

Like HDDs, tape is still in use in some environments, and like HDDs, the cloud service providers are significant users of tape as low-cost, low-access, high-capacity bulk storage for cold archives front-ended by HDDs or SSDs or both.

Cloud (Public, Private and Hybrid)

Yes, believe it or not, some have declared cloud dead, along with hybrid cloud, private cloud among others, oh well.

Physical Machine (PM)

Also known as bare metal (BM), physical machine servers were declared dead a decade or so ago at the hands of the then-emerging Intel-based virtualization hypervisors, notably VMware ESXi and to a lesser extent Microsoft Hyper-V. I say lesser extent with Hyper-V in that there was less noise about PMs and BMs being dead than there was from some in the ESXi virtual kingdom. Needless to say, PMs and BMs, from Intel to AMD and ARM-based, along with IBM Power among many others, are very much alive as dedicated servers in the cloud and as VM and container hosts, as well as being accessorized with FPGA, ASIC, GPU, and other resources.

Virtual Machines

Listen to some from the container, serverless or something-new crowd, and you will hear that virtual machines (VMs) are dead, which for some workloads may be right. On the other hand, similar to the physical machine (PM) or bare metal (BM) servers that were declared dead by the VMs a decade or so ago, VMs are alive and doing well. Not only are they doing well; like containers, continued adoption and deployment of VMs will stay strong both on-prem as well as in the cloud, as will BMs and PMs, now known as dedicated servers in the clouds.

NAS and Files

If you listened to some of the pundits and press, NAS and files were supposed to have been dead several years ago at the hands of object storage. The reality today is that object storage continues to grow in customer deployments, and while the industry is not as enamored (or drunk) with it as it was a few years ago, the new technology is here to stay and will be around for many decades to come.

That brings us back to NAS and files, which were declared dead by the object opportunists; file access is very much alive and continues to gain ground. In fact, most cloud providers have either added NAS file-based access (NFS, SMB, POSIX among others) natively or via partners to their solutions. Likewise, most object storage platforms have also added or enhanced their NAS file-based access for compatibility while their customers are re-engineering their applications or creating new apps that are object and blob native. Thus, NAS and file-based access are proud members of the Zombie technology list.

Data Infrastructure tools

There are many more tools, technologies, trends and techniques that could be part of the above list; for example, backup has been declared dead, along with the PCIe bus, NAND flash, programming, data centers, databases and SQL among many others. What they have in common is that they are part of a growing list of technologies declared dead yet not dead yet, and thus Zombie technologies.

Where to learn more

Learn more about Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

What is your favorite zombie technology, tool, trend or technique?

What zombie technologies, tools, trends or techniques should be added to the list and why?

Many tools, technologies, techniques and trends are often declared dead, sometimes before they are even really alive and mature, by those who have something new, or who simply lack creativity (e.g., dead marketing?) so it's easier to declare something dead. While some succeed, prospering and being added to the Zombie technology list (a badge of honor), others quietly end up on the where are they now list. The where are they now list comprises those vendors, tools, technologies, techniques and trends that were on the famous hit parade in the past, having faded away or ended up dead (unlike a zombie).

Don't be scared of zombie technology, while also being prepared to embrace what is new and to use both in new ways. Right now, I don't have tickets to go see Phil Collins' Not Dead Yet tour; maybe that will change. However, for now, keep in mind, don't be scared when looking at Not Dead Yet Zombie Technology (Declared Dead yet still alive) October 2018 Update #blogtobertech.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.