Microsoft Azure Data Box Family #blogtobertech

Microsoft Azure Data Box Family is part two of a four-part series looking at Data Box. View Part 1 Microsoft announced Azure Data Box updates, Part 3 Microsoft Azure Data Box Disk Test Drive Review, Part 4 Microsoft Azure Data Box Disk Impressions.

Microsoft Azure Data Box Overview

Microsoft has several Data Box solutions, available or in preview, to meet various customer needs. These include both online and offline offerings that combine hardware (except Data Box Gateway, which is software only), software tools, and cloud services.

Data Box Online

Microsoft has two online Data Box offerings that provide real-time access to Azure cloud storage resources from on-prem locations, including remote and edge sites. The online Data Box solutions are Data Box Edge and Data Box Gateway, both with local on-prem storage.


Data Box Edge image via Microsoft.com

Data Box Edge (Preview)

Currently in preview, Data Box Edge is a 1U appliance that combines hardware and software resources for deployment on-prem at edge or remote locations. Data Box Edge places locally converged compute and storage resources as an appliance, along with connectivity to Azure cloud-based resources.

Intended workloads and applications for Data Box Edge include remote AI, ML, and DL inferencing; data processing or pre-processing before sending to the Azure cloud; and functioning as an edge compute, data protection, and data transfer platform (e.g., a cloud storage gateway) with local compute. Data Box Edge is similar in functionality and focus to other cloud service provider solutions such as AWS Snowball Edge (SBE). Management tools include the Data Box Edge resource in the Azure portal, a web UI used to create and manage resources, devices, and shares.

Other Data Box Edge attributes include:

  • Supports Azure Blob or Files via SMB and NFS storage access protocols
  • Dual Intel Xeon processors each with 10 CPU cores, 64GB RAM
  • 2 x 10 Gbps SFP+ copper cables, 2 x 1 Gbps RJ45 cables
  • 8 NVMe SSD (1.6 TB each), no HA, 12.8 TB total raw cap
  • 2 x 1 GbE (one for management, one for user access)
  • 2 x 25 GbE (can operate at 10 GbE) and 2 x 25 GbE ports
  • Local web UI for management and configuration

Data Box Gateway (Preview)

Also in preview, Data Box Gateway is a virtual machine (VM) based software-defined appliance that runs on VMware vSphere (ESXi) or Microsoft Hyper-V hypervisors. The functionality of Data Box Gateway is that of a cloud storage gateway, providing access to Azure Blob (Page and Block) or Files (NAS) via SMB or NFS protocols. Learn more about both Data Box Edge and Data Box Gateway, including pricing, here.
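
To illustrate the cloud storage gateway usage pattern, the following is a minimal Python sketch that copies a local directory tree to a share exposed by a gateway-style appliance; files written to the share are then synced to the cloud by the gateway. This is illustrative only and not Data Box specific tooling; the share is assumed to be provisioned and already mounted by the operating system, and the paths shown are hypothetical.

```python
import shutil
from pathlib import Path

# Hypothetical paths: a local data set and a mounted gateway share
# (e.g., an SMB or NFS share already mounted by the operating system).
SOURCE = Path("/data/projects")
GATEWAY_SHARE = Path("/mnt/databox-gateway/projects")

def copy_to_share(source: Path, share: Path) -> int:
    """Copy a directory tree to the gateway share, returning the number of files copied."""
    copied = 0
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        dest = share / src_file.relative_to(source)
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src_file, dest)  # preserves timestamps where possible
        copied += 1
    return copied

if __name__ == "__main__":
    print(f"Copied {copy_to_share(SOURCE, GATEWAY_SHARE)} files to the share")
```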

Data Box Offline Solutions

Microsoft has several offline Data Box offerings, including previously available models and new models in preview. Offline Data Box solutions enable large amounts of data to be moved from on-prem primary, remote, and edge locations to Azure cloud storage resources. Bulk data movement operations can be one-time or recurring in support of big data migration for energy, research, media & entertainment, and other large volumes of data.

Other bulk movement use cases include archive, backup, BC/DR, and virtual machine and application migration, among others. Use Data Box offline solutions when large amounts of data need to be moved from on-prem to the Azure cloud faster than available networks can support.

Offline Data Box solutions include:

  • Data Box Heavy (Preview) 1 PB Storage, 800 TB usable
  • Data Box 100 TB (80 TB usable)
  • Data Box Disk (Preview) 40 TB (35 TB Usable)


Data Box Heavy 1 PB (Preview) image via Microsoft.com

Data Box Heavy 1 PB (Preview)

  • Appliance with Up to 800 TB usable capacity per order
  • One system per order
  • Supports Azure Blob or Files
  • Copy data to up to 10 storage accounts
  • 1 x 1/10 Gbps RJ45 connector, 4 x 40 Gbps QSFP+ connectors
  • AES 256-bit encryption
  • Copies data using NAS SMB and NFS protocols


Data Box 100TB image via Microsoft.com

100 TB Data Box

  • An appliance that supports 80 TB usable storage capacity
  • Supports Azure Blob or Files
  • Copies data to 10 storage accounts
  • 1 x 1/10 GbE RJ45 connector
  • 2 x 10 GbE SFP+ connector
  • AES 256-bit encryption
  • Storage access and copy via SMB and NFS NAS protocols

Case of Data Box Disks image via Microsoft.com

Data Box Disk 40 TB (Preview)

  • Up to 35 TB usable capacity per order
  • Up to 5 SSDs per order
  • This is what I tested (2 x 8 TB)
  • Supports Azure Blob storage (Block and Page)
  • Copies data to a single storage account
  • USB/SATA II, III server I/O interface (comes with SATA to USB connector cables)
  • AES 128-bit encryption
  • Copy data with standard tools (see the verification sketch below)
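
Because offline transfers involve shipping drives, it is worth building a checksum manifest of what was copied so the data that lands in Azure can be spot-checked later. The following is a small illustrative Python sketch, not part of any Data Box tooling; the mount point of the unlocked disk and the manifest file name are hypothetical.

```python
import csv
import hashlib
from pathlib import Path

# Hypothetical mount point of an unlocked Data Box Disk.
DISK_ROOT = Path("/mnt/databox-disk")
MANIFEST = Path("manifest.csv")

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(root: Path, manifest: Path) -> None:
    """Record relative path, size, and checksum for every file on the disk."""
    with manifest.open("w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["relative_path", "bytes", "sha256"])
        for file in sorted(p for p in root.rglob("*") if p.is_file()):
            writer.writerow([file.relative_to(root), file.stat().st_size, sha256_of(file)])

if __name__ == "__main__":
    build_manifest(DISK_ROOT, MANIFEST)
```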

Where to learn more

Learn more about Microsoft Azure Data Box, Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Which Microsoft Azure Data Box is the best? That depends on your needs and requirements.

Microsoft, along with other major cloud service providers, continues to evolve its data migration services. Realizing that customers who need, want, or have to get data to the cloud also need barriers removed, solutions such as Azure Data Box are a step toward eliminating those barriers while addressing cloud concerns. Continue reading Part 3 Microsoft Azure Data Box Disk Test Drive Review and Part 4 Microsoft Azure Data Box Disk Impressions as part of Microsoft Azure Data Box Family.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Microsoft announced Azure Data Box updates #blogtobertech

Microsoft announced Azure Data Box updates is the first in a series of four posts looking at Data Box, including a test drive experience. View Part 2 Microsoft Azure Data Box Family, Part 3 Microsoft Azure Data Box Disk Test Drive Review, and Part 4 Microsoft Azure Data Box Disk Impressions.

Microsoft Azure Data Box Family Page image via Microsoft.com

At Ignite, Microsoft announced Azure Data Box updates, which means it's time for a test drive and review. Microsoft has several Data Box solutions available or in preview to meet various customer needs. These include both online and offline solutions that include hardware (except Data Box Gateway), software tools, and cloud services. In general, Data Box enables bulk movement and migration of data from on-prem environments to Azure cloud storage, including blob (e.g., object) and file (e.g., NAS accessible) resources.

What's The Need for a Data Movement Appliance Service?

Some might ask: why do you need a Microsoft Azure Data Box when there are fast networks? Good question, assuming you have fast networks that can move large amounts of bulk data promptly. Microsoft supports traditional Internet-based access to Azure cloud resources for data migration, along with the higher-speed ExpressRoute service, similar to Amazon Web Services (AWS) Direct Connect, among other options.

On the other hand, if you need to move a large amount of data that would take weeks, months, or longer sending over expensive networks, then solutions like Data Box are an option. Microsoft is not alone or unique in having data storage migration or movement services. AWS has Snowball, Snowball Edge with compute, as well as the truck-sized Snowmobile for large-scale data movement. Google also has its transfer services, including the Google Transfer Appliance.
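
To put the network-versus-shipping tradeoff in perspective, here is a rough back-of-the-envelope estimate in Python. The line rates and effective utilization figure are assumptions purely for illustration; real results vary with protocol overhead, distance, and competing traffic.

```python
def transfer_days(terabytes: float, gigabits_per_second: float, utilization: float = 0.7) -> float:
    """Estimate days to move a data set over a link at a given effective utilization."""
    bits = terabytes * 8 * 10**12          # decimal TB converted to bits
    seconds = bits / (gigabits_per_second * 10**9 * utilization)
    return seconds / 86_400

# Example: 100 TB over a 1 Gbps link at 70% effective utilization is roughly 13 days,
# while the same data over 10 Gbps is roughly 1.3 days (before any retries or restarts).
for gbps in (1, 10):
    print(f"{gbps} Gbps: {transfer_days(100, gbps):.1f} days for 100 TB")
```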

Who is Azure Data Box for?

Azure Data Box is for those who need to migrate data to Azure cloud storage and other services on a one-time or recurring basis. Another scenario is for those who need on-prem storage and optional compute at remote or edge locations in support of data acquisition, media & entertainment, energy exploration, AI, ML, DL inferencing, local data processing, and pre-processing before sending to the cloud, among other workloads.

Yet other scenarios are for those who need to move large amounts of data online, offline, or in disconnected (also known as submarine) mode where a connection to the internet is not always available. Bulk data movement also applies to one-time as well as recurring data protection such as archive, backups, and BC/DR, as well as data shipping, virtual machine farm relocation, SQL Server data migration to the cloud, and data center consolidation, among many other scenarios.

What is Azure Data Box

Azure Data Box is a combination of hardware, software, and cloud services that support data migration (online and offline) from on-prem environments, including remote or edge locations, to Azure cloud storage resources. There are different Data Box solutions available or in preview to meet various needs for performance, capacity, and functionality, with as well as without compute. In addition to being used for data migration, there are also Data Box solutions (e.g., Edge) that converge compute and storage for deployment at remote or edge locations.

Data Box Gateway is a software-defined virtual machine appliance that deploys on VMware and Microsoft (e.g., Hyper-V) hypervisors. Offline Data Box solutions scale from single 8 TB SSDs to a PB of capacity with varying functionality.

As a reminder, blobs are analogous to, and what Microsoft Azure refers to instead of, objects (e.g., object storage). Also remember that Azure blobs include block, page (512-byte page aligned for VHDX), and append (similar to other vendors' object storage). In addition to blobs, Microsoft Azure supports file (SMB and NFS) access, along with table (database) and queue storage services.
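
As a quick illustration of blob access once data reaches Azure, the snippet below uploads a file as a block blob using the azure-storage-blob Python package (v12-style API). The connection string, container, and file names are placeholders; page or append blobs would use their respective clients instead.

```python
import os
from azure.storage.blob import BlobServiceClient

# Placeholder values; a real connection string comes from the storage account.
CONNECTION_STRING = os.environ["AZURE_STORAGE_CONNECTION_STRING"]
CONTAINER = "example-container"
LOCAL_FILE = "example.bin"

service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
container = service.get_container_client(CONTAINER)

# Upload the local file as a block blob (the default blob type for upload_blob).
with open(LOCAL_FILE, "rb") as data:
    container.upload_blob(name=LOCAL_FILE, data=data, overwrite=True)

print(f"Uploaded {LOCAL_FILE} to container {CONTAINER}")
```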

Where to learn more

Learn more about Microsoft Azure Data Box, Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Azure Data Box type solutions and services are becoming more common as well as diverse. With the addition of compute in some of these solutions to support remote edge workloads, the lines may blur with some of the converged and hyper-converged infrastructure (HCI) solutions. Likewise, keep an eye on how cloud service providers leverage solutions like Data Box Edge to extend their reach out to the edge, enabling fog (e.g., cloud at the edge) among other converged functionality. Continue reading Part 2 Microsoft Azure Data Box Family, Part 3 Microsoft Azure Data Box Disk Test Drive Review, and Part 4 Microsoft Azure Data Box Disk Impressions as part of Microsoft announced Azure Data Box updates.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Cloud File Data Storage Consolidation and Economic Comparison Model #blogtobertech

The following is a new Industry Trends Perspective White Paper Report titled Cloud File Data Storage Consolidation and Economic Comparison Model.

Cloud File Data Storage Consolidation and Economic Comparison Model

This new report looks at a distributed file server and consolidated cloud storage economic comparison, with a fundamental economic comparison model for remote (on-prem) distributed file server and cloud storage consolidation decision-making. IT data infrastructure resource (servers, storage, I/O network, hardware, software, services) decision-making involves evaluating and comparing the technical attributes (speeds, feeds, features) of a solution or service. Another aspect of data infrastructure resource decision-making involves assessing how a solution or service will support and enable a given application workload from a Performance, Availability, Capacity, and Economic (PACE) perspective.

Keep in mind that all application workloads have some amount of PACE resource requirements that may be high, low, or various permutations thereof. Performance, Availability (including data protection along with security), as well as Capacity are addressed via technical speeds, feeds, and functionality along with workload suitability analysis. The E in PACE resource decision-making is about the economic analysis of the various costs associated with different solution approaches.
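
As a trivial illustration of the E in PACE, the sketch below compares an annualized cost for keeping distributed file servers at remote sites versus consolidating that capacity onto cloud storage. All of the dollar figures and counts are made-up assumptions for demonstration only; the report's actual model is more involved.

```python
# Hypothetical inputs for a simple economic comparison (illustrative only).
sites = 12
onprem_server_cost_per_site_per_year = 9_000   # hardware amortization, support, power
onprem_admin_cost_per_site_per_year = 4_000    # remote hands, patching, backup handling
cloud_cost_per_tb_month = 25                   # consolidated cloud file storage
tb_per_site = 10
cloud_ops_cost_per_year = 15_000               # central administration of the cloud service

onprem_total = sites * (onprem_server_cost_per_site_per_year + onprem_admin_cost_per_site_per_year)
cloud_total = sites * tb_per_site * cloud_cost_per_tb_month * 12 + cloud_ops_cost_per_year

print(f"Distributed on-prem file servers: ${onprem_total:,.0f}/year")
print(f"Consolidated cloud file storage:  ${cloud_total:,.0f}/year")
```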

Read more in this Server StorageIO Industry Trends and Perspective (ITP) Report.

Where to learn more

Learn more about Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

When comparing and making data infrastructure resource decisions, consider the application workload PACE characteristics. Also keep in mind that PACE means Performance (productivity), Availability (data protection), Capacity, and Economics. This includes making decisions from a technical feature and functionality (speeds and feeds) capability perspective, as well as how the solution supports your application workload. Leverage resources including tools to perform analysis, including Cloud File Data Storage Consolidation and Economic Comparison Model approaches.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Not Dead Yet Zombie Technology (Declared Dead yet still alive) October 2018 Update #blogtobertech

Musician Phil Collins has an excellent name for his current tour, Not Dead Yet, which is a reminder that he is still alive and performing, at least one more time. With Halloween just around the corner, it is that time of the year to revisit zombie technology: those technologies, tools, techniques, and trends that are declared dead yet still alive.

Data Infrastructure Tools Trends Topics

IT Zombie Technology Declared Dead Not Dead Yet

With a concert tour named Not Dead Yet, the stage is set for this post, which is about IT Zombie technology, in particular data infrastructure related technologies, tools, trends, and related topics that have been declared dead by some people yet are still alive. Not only are these tools and techniques being used, they are also being enhanced and will be around for future years of zombie technology updates; not dead yet.

As a refresher, a Zombie technology is one that is declared dead, usually by some upstart vendor and its pundits along with other followers in favor of whatever new has been announced. As luck or fate would have it, some of these startup or new technologies that declare an older established one as being dead, tend to end up on the where are they now list.

In other words, some technologies do survive and gain in both industry adoption, as well as the even more critical customer deployment category. Likewise, some of these technologies that result in something existing being declared dead end up surviving to live alongside or near what their supporters declared dead.

Another not-so-uncommon occurrence is when the new technology whose supporters declared something else dead joins the ranks of being declared dead by a yet more modern technology, thereby becoming a Zombie technology itself. Put a different way, being on the Zombie technology list may not be the same as being the shiny new popular trendy technology. However, it can be a badge of honor, not to mention a revenue and profit maker.

Data Infrastructure components

Zombie Technology List

What are some old and new Zombie technologies that have been declared dead, yet are still alive, being used and enhanced, not dead yet?

IBM Mainframe

This is a perennial favorite, and while not seeing the growth associated with other platforms including Intel, AMD, and ARM among others, it has its place with many large organizations. Not only does it continue to be manufactured and enhanced, with even some new customers buying them, it also runs native Linux in addition to traditional zOS among other software.

Fibre Channel (FC)

FC has been declared dead for over a decade, and while Ethernet-based server storage I/O networking continues to gain ground in both industry as well as customer deployments, there is still plenty of life in and with FC for years to come, at least for some environments. NVMe over Fabrics (NVMeoF) which is the NVMe protocol carried on top of a fabric network (SAN if you prefer) is gaining industry popularity and customer curiosity.

There are many flavors of NVMe over fabrics including NVMe over Fibre Channel, e.g., FC-NVMe which is similar to mapping the SCSI command set (SCSI_FCP) on to Fibre Channel or what is more commonly known as FCP or simply FC.

What this means is that FC-NVMe is just another upper-level protocol (ULP) that can co-exist with others on the same Fibre Channel network. In other words, FICON, FCP, NVMe among others can co-exist on the same Fibre Channel-based network. Will everybody using Fibre Channel move to FC-NVMe? Good question, ask the FC folks, and the answer not surprisingly would be yes or probably. Will new customers looking to do NVMe over some type of fabric or network use Fibre Channel instead of Ethernet or other transport? Some will while others will go other routes. For now, what is clear is that FC is still alive and thus on the Zombie technology list and not dead yet.

SAS and SATA

Both have been declared dead as they have been around for a while, and over time NVMe will pick up more of their workload; however, near term, SAS and SATA will continue as lower cost, smaller footprint options for general purpose and bulk lower cost direct attachment. On the other hand, look for more M.2 NVMe Next Generation Form Factor (NGFF), aka gum stick, devices appearing on physical servers along with storage systems. Likewise, watch for increased deployment of NVMe U.2 (aka SFF-8639) drive form factor SSDs using NAND flash as well as 3D XPoint and Intel Optane among other mediums as part of new server and storage platforms. BTW, USB is not dead yet either, just saying.

Microsoft Windows

Windows desktop, Windows Servers, even Hyper-V virtualization have been declared dead for some time now, yet all continue to evolve. Just recently, Microsoft released Windows Server 2019 which included many enhancements from software-defined storage (Storage Spaces Direct aka S2D), software-defined networking, converged and hyper-converged infrastructure (HCI) deployment options, expanded virtualization capabilities, Windows Subsystem for Linux (WSL) enhancements (e.g. bash shell on Windows native), containers with Kubernetes as well as Docker updates among others. In other words, it’s not dead yet.

Hard Disk Drive (HDD)

Having been declared dead for decades, while not the primary frontline storage medium it was in the past, HDDs continue to evolve and be used alongside faster flash SSDs, and as a front-end to magnetic tape. Some of the larger consumers of HDDs continue to be cloud service providers, also known as mega scalers, for storing large amounts of bulk data. I suspect that HDDs will continue to be on the Zombie technology list for at least another decade or so, which has been the case for the past several decades.

Magnetic Tape

Like HDDs, tape is still in use in some environments, and like HDDs, cloud service providers are significant users of tape as low-cost, low-access, high-capacity bulk storage for cold archives that are front-ended by HDD or SSD or both.

Cloud (Public, Private and Hybrid)

Yes, believe it or not, some have declared cloud dead, along with hybrid cloud, private cloud among others, oh well.

Physical Machine (PM)

Also known as bare metal, servers were declared dead a decade or so ago at the hands of the then-emerging Intel-based virtualization hypervisors, notably VMware ESXi and to a lesser extent Microsoft Hyper-V. I say lesser extent with Hyper-V in that there was less noise about PMs and BMs being dead than there was from some in the ESXi virtual kingdom. Needless to say, PMs and BMs, from Intel to AMD and ARM-based, along with IBM Power among many others, are very much alive as dedicated servers in the cloud, VM and container hosts, as well as being accessorized with FPGA, ASIC, GPU, and other resources.

Virtual Machines

Listen to some from the container, serverless, or something-new crowd, and you will hear that virtual machines (VMs) are dead, which for some workloads may be right. On the other hand, similar to the physical machine (PM) or bare metal (BM) servers that were declared dead by the VMs a decade or so ago, VMs are alive and doing well. Not only are they doing well, but like containers, continued adoption and deployment of VMs will stay both on-prem as well as in the cloud, as will BMs and PMs, now known as dedicated servers in the clouds.

NAS and Files

If you listened to some of the pundits and press, NAS and files were supposed to have been dead several years ago at the hands of object storage. The reality today is that object storage continues to grow in customer deployments, and while the industry is not as enamored (or drunk) with it as it was a few years ago, the newer technology is here to stay and will be around for many decades to come.

That brings us back to NAS and files, which were declared dead by the object opportunists; file access is very much alive and continues to gain ground. In fact, most cloud providers have either added NAS file-based access (NFS, SMB, POSIX among others) natively or via partners to their solutions. Likewise, most object storage platforms have also added or enhanced their NAS file-based access for compatibility while their customers re-engineer their applications, or create new apps that are object and blob native. Thus, NAS and file-based access are proud members of the Zombie technology list.

Data Infrastructure tools

There are many more tools, technologies, trends, and techniques that could be part of the above list. For example, backup has been declared dead, along with the PCIe bus, NAND flash, programming, data centers, databases, and SQL among many others. What they have in common is that they are part of a growing list of things declared dead yet not dead yet, and thus are Zombie technologies.

Where to learn more

Learn more about Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

What is your favorite zombie technology, tool, trend or technique?

What zombie technologies, tools, trends or techniques should be added to the list and why?

Many tools, technologies, techniques, and trends are declared dead, sometimes before they are even really alive and mature, by those who have something new, or who simply lack creativity (e.g., dead marketing?) and find it easier to declare something dead. While some succeed, prospering and being added to the Zombie technology list (a badge of honor), others quietly end up on the where-are-they-now list. The where-are-they-now list contains those vendors, tools, technologies, techniques, and trends that were on the famous hit parade in the past, having faded away or ended up dead (unlike a zombie).

Don't be scared of zombie technology, while also being prepared to embrace what is new and use both in new ways. Right now, I don't have tickets to go see Phil Collins' Not Dead Yet tour; maybe that will change. However, for now, keep in mind: don't be scared when looking at Not Dead Yet Zombie Technology (Declared Dead yet still alive) October 2018 Update #blogtobertech.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Ten tips to reduce your cloud compute storage costs #blogtobertech

The following are Ten tips to reduce your cloud compute storage costs.

In some cases, reducing your cloud costs means spending the same yet getting more value and resources that provide a business benefit. For example, paying the same yet upgrading to fewer, faster servers, storage, I/O network resources to support growth while boosting productivity. In other words, when measured on a cost per unit of work done or service enabled, there should be an improvement.

On the other hand, cost cutting can be measured by an actual reduction in spending, for example, consolidating multiple applications to a lower cost compute instance running at higher utilization. The caveat is that while the spend may be reduced, is the corresponding level of service or application and user productivity negatively impacted?

Other examples are a hybrid of removing complexity and cost, as well as cost-cutting, for instance finding orphan resources that are powered on and not used. Orphan resources include IP addresses that are assigned and being charged for yet not used, or a virtual machine instance that is powered on yet not used. Another orphan example is a VM instance that is powered off and no longer used, along with the disks assigned to it, as well as any snapshots or backups.

Ten tips to reduce your cloud costs

  • Utilize client and remote site data file cache to reduce cloud egress network fees
  • Bring your own software licenses for operating systems and applications
  • Monitor your cloud cost summaries regularly to watch out for surprises
  • Find and remove orphan resources including instances, images, IP addresses, storage volumes, and buckets (see the sketch after this list)
  • Revisit whether your data is stored in the appropriate storage class or tier for how it is used. Likewise, leverage lower durability storage tiers as locations for additional protection copies instead of merely as a single destination to support cost-cutting. For example, cost cutting would be placing your only data protection copy and archive on a lower cost, lower durability storage tier. Removing cost while boosting availability would be putting a copy of your data on two or more economically priced, less durable storage tiers in different locations, instead of a single copy on a highly durable tier in one place.
  • Consolidate many smaller, lower cost instances into fewer larger instances, removing complexity and costs
  • Utilize reserved instances (RI) along with prepayment discounts; also check with your finance department to see if there are benefits to treating them as OpEx or CapEx.
  • Audit your RIs to make sure you have the appropriately sized resources to meet workload needs.
  • Utilize spot instances for spot or ad-hoc interruptible workloads
  • Leverage ephemeral on-instance storage as a cache to boost performance
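
As an example of hunting for the orphan resources mentioned in the fourth tip above, the sketch below uses boto3 to list unattached EBS volumes and unassociated Elastic IPs in an AWS account; equivalent queries exist for other clouds. Region, credentials, and any tagging conventions are left as assumptions.

```python
import boto3

ec2 = boto3.client("ec2")  # uses the default credentials and region

# Volumes in the "available" state are not attached to any instance.
orphan_volumes = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)["Volumes"]

# Elastic IPs without an association are still billed while unused.
orphan_ips = [
    addr for addr in ec2.describe_addresses()["Addresses"]
    if "AssociationId" not in addr
]

for vol in orphan_volumes:
    print(f"Unattached volume {vol['VolumeId']} ({vol['Size']} GiB)")
for addr in orphan_ips:
    print(f"Unassociated Elastic IP {addr.get('PublicIp')}")
```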

Additional Tips and Recommendations

Everything is not the same, so why treat everything the same, including assigning it to the same type of resources? Keep in mind that all applications have some level of Performance, Availability, Capacity, and Economic (PACE) resource requirements that need to be balanced.

Similar to on-prem environments, one of the top mistakes when choosing storage is looking only at cost per capacity, particularly with flash-based SSD and NVMe accessed storage. Also look into what the storage performance thresholds are, as well as any access and API or service call fees.

Watch out for excessive API and cloud service calls beyond your normal monthly limits. For example, consistently running rsync on some storage classes can result in surprise monthly invoices. Likewise, moving data around, changing encryption, or other operations may wipe out savings from going to a lower storage tier. Look beyond the monthly cost per capacity: what are the access fees, including egress (reading data), as well as API calls such as list, dir, or other operations?
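
To make the point about fees beyond capacity concrete, here is a simple monthly cost estimate that adds request and egress charges to the per-GB storage price. The unit prices are placeholder assumptions, not any provider's actual rate card.

```python
def monthly_cost(stored_gb: float, egress_gb: float, requests: int,
                 price_per_gb: float = 0.01, egress_per_gb: float = 0.09,
                 price_per_1k_requests: float = 0.005) -> float:
    """Estimate a monthly bill: capacity + egress + API/request charges (illustrative rates)."""
    return (stored_gb * price_per_gb
            + egress_gb * egress_per_gb
            + (requests / 1_000) * price_per_1k_requests)

# A "cheap" tier can cost more once frequent reads and listings are added in.
quiet = monthly_cost(stored_gb=5_000, egress_gb=50, requests=20_000)
chatty = monthly_cost(stored_gb=5_000, egress_gb=2_000, requests=5_000_000)
print(f"Quiet workload:  ${quiet:,.2f}/month")
print(f"Chatty workload: ${chatty:,.2f}/month")
```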

Likewise, for compute instances, look beyond the base cost, also considering how much memory (DRAM), I/O for storage and networking, on-instance storage (temporary or persistent), bring-your-own-license options, and number of cores or virtual CPUs along with their speed. Also, watch for any limits on the number of I/O operations per instance, particularly with fast flash SSD including NVMe accessed storage. Just because it's flash or NVMe does not mean it's going to be fast.

Where to learn more

Learn more about Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Have situational awareness of your on-prem environment, knowing your costs of resources as well as the level of services, to make informed decisions. Don't be scared, be prepared, avoid flying blind, plan ahead, and apply the appropriate resources, in the right quantity, to meet application workload needs. Keep in mind that there are more than ten tips to reduce your cloud compute storage costs; however, these should get you off to a good start.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

How I saved money storing more data on aws s3 simple storage service #blogtobertech

How I saved money storing more data on AWS S3 Simple Storage Service is an example of reducing cloud costs as opposed to merely cutting cloud costs. What this means is that instead of just cutting my cloud storage costs with a focus on how much I could save, I wanted to remove some costs while also storing more data without compromise. For example, since making the changes, storage capacity usage has almost doubled, yet costs remain 37% lower than two years ago, before the changes were made.

How I saved money storing more data on aws s3?

Without adding any context, the typical reaction might be that I saved money storing more data on (or in) AWS S3 as opposed to locally on-site (on-prem). Another typical response would be that I moved all of my data from a different, more expensive cloud service to AWS S3. Yet another common reaction would be that I moved my AWS S3 data into AWS Glacier cold storage, or deleted a large amount of data.

Some might even comment that I must have used some type of dedupe, compression or other data footprint reduction (DFR) technology. On the other hand, some might determine that I probably did all or some of the above, or, leveraged AWS tiered storage, aligning different storage classes to the type of data activity.

How I saved money storing more data in AWS S3 actually involved spending some money in order to eventually save money by leveraging different S3 storage classes. As part of rebalancing or moving different data to its new storage class, some one-time charges were incurred, which were recouped after several months of savings. The costs pertained to EC2 compute instances and associated storage used for some of the data tiering; other fees were for access charges along with excessive API calls. For example, some of the data was in storage classes that had fees for early retrieval or deletion, or fees for access, among others. A sketch of how such storage class transitions can be automated follows the list below.

How I use different AWS S3 storage classes (tiers)

  • Standard – Frequently changing data, or data with frequent access
  • Infrequent Access (IA) – Data that does not change frequently or that is not routinely accessed. In the past, before OZA, I had placed data that did not need to be in Standard, yet was too warm for Glacier, in this storage class. After the migrations, I have less data stored in IA, with more in OZA as well as some in Standard.
  • One Zone Availability (OZA) – Data that is frequently accessed for reading, however is static and not yet cold enough to move to Glacier or deep archive. A mix of backups, online and active archives. Note that I use OZA as an additional copy or location and not as a single, lowest cost place to store data. In other words, anything that I put into OZA has at least one additional copy somewhere else, which may not be in the cloud.
  • Glacier – Very cold, seldom accessed, archives
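
One way to keep data aligned to storage classes over time is an S3 lifecycle configuration rather than manual moves. The boto3 sketch below is illustrative only; the bucket name, prefixes, and transition ages are assumptions, and any early deletion or retrieval fees for the target classes still apply.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {   # Additional copies of protection data move to One Zone-IA after 30 days.
                "ID": "copies-to-oza",
                "Status": "Enabled",
                "Filter": {"Prefix": "protection-copies/"},
                "Transitions": [{"Days": 30, "StorageClass": "ONEZONE_IA"}],
            },
            {   # Cold, seldom accessed archives move to Glacier after 90 days.
                "ID": "archives-to-glacier",
                "Status": "Enabled",
                "Filter": {"Prefix": "archives/"},
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            },
        ]
    },
)
```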

Where to learn more

Learn more about Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

I decreased my AWS monthly bill by rebalancing things; there was a one-month period where my costs increased during the changes, then a subsequent reduction. However, while I saw my monthly AWS storage invoices decrease, I'm also storing more data per month. How I saved money storing more data on AWS S3 Simple Storage Service involved using different storage classes.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Dont Stop Learning Expand Your Skills Experiences Everyday #blogtobertech

Don't stop learning; expand your skills and experiences every day, including moving beyond or outside your current tradecraft focus area. If you are an expert in a field or given focus area, learn something new about an area outside your expertise or comfort zone. If you are of the mindset that there is nothing new to learn about, that it's all old and boring, perhaps it's time to step back, look around, and explore other areas.

Doing something new can be in an adjacent technology area, or something completely unrelated. For example, in a recent VMUG keynote presentation and blog post I discussed how Next Generation Hybrid Software Defined Data Infrastructures Are In Your Future.

Dont Stop Learning Expand Your Skills Experiences Everyday
Next Generation Data Infrastructures are in your future (if not already)

What tradecraft skills and experience do you need to have, expand, or refresh to support next-generation hybrid software-defined data infrastructures? If you are a server person, then you need to broaden your tradecraft skills and experience to storage, I/O networking, cloud, virtual, and container across hardware as well as software. Likewise, if you are a storage or I/O and networking person, you need to expand into other areas. If you are a VMware focused professional, then learn about Microsoft Hyper-V or vice versa. If you are an AWS focused person, learn about Google, Azure, or vice versa; the same applies across different technology domains.

On the other hand, if you know all there is to know, chances are there are other areas you need to learn more about, or you need to determine what you don't know and address that. If by chance you do happen to know everything there is to know, how much time are you spending interacting with others to teach them, possibly learning something new yourself?

Invest Time into Your Tradecraft Skill set

If you are not spending at least an hour a day learning something new, you are missing out on an opportunity. Part of that hour should also be outside your comfort zone or core focus area. For example, if you are a software pro, learn more about hardware, clouds, or something different. If you are a VMware focused person, learn Hyper-V, AWS, Azure, or something else. If you are storage, learn server, network, cloud, and beyond. If you are focused on data infrastructures, then learn about the upper-level business applications along with the users who use them, and vice versa.

How I Continue to Learn Expanding My Tradecraft Skills Experience Every day

As part of expanding my tradecraft, I spend part of my day learning and refreshing on core data infrastructure focus areas (servers, storage, I/O networking, hardware, software, cloud, containers, converged, software-defined, data protection) and related topics. Learning involves vendor briefings, research, talking with others, reading, and hands-on technology trials to gain insight, experience, and perspective.

I have also expanded my tradecraft experiences by becoming an FAA Part 107 licensed commercial pilot of small unmanned aerial systems (sUAS), small unmanned aerial vehicles (sUAV), or what are more commonly called drones. Besides being FAA licensed, I am also Minnesota sUAV/drone and aerial photography licensed. Drone flying is adjacent to data infrastructures in that one of my drones records at 4K 60 frames per second (fps), meaning about 1 GByte of data every two minutes of video, plus telemetry. Note that the drones have internet capability and can be considered IoT for their video as well as telemetry.


Above is a 4K video flight via my companion site www.picturesoverstillwater.com

Where to learn more

Learn more about learning, data infrastructures, tradecraft, drones as well as related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

What this means is that in addition to expanding as well as refreshing my data infrastructure related tradecraft skills, I'm also expanding my experiences into other adjacent areas. In other words, instead of just talking about big data, fast data, video, IoT, drones, and related topics, I'm involved with them hands on.

Keep in mind, at some point the student becomes the teacher, and a teacher is a student. Leverage your pair of eyes and ears to see things in different ways, listen to and learn about items outside your primary focus area as you expand or refresh your tradecraft skill set experiences.

If you can’t learn something new every day, either you are not trying, or you are in trouble. Even experts and unicorns can learn something new every day, even if that is as simple as learning to listen to others.

With October being #blogtobertech, there are plenty of opportunities to not stop learning and expand your skills and experiences every day, which also includes the student becoming the teacher and the teacher being the student.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Next Generation Hybrid Software Defined Data Infrastructures Are In Your Future #blogtobertech

A few weeks ago I was invited to present a keynote at the 1st annual Minnesota VMware User Group (VMUG) Super VMUG mega event in Minneapolis titled Next Generation Hybrid Software Defined Data Infrastructures Are In Your Future (download PDF presentation here).

Key themes of the presentation focused around data infrastructures (e.g., what's inside physical data centers, including server, storage, I/O networking, hardware, software, policies, and procedures) along with industry trends including hybrid software defined clouds (and containers). Another aspect of the presentation focused around building, refreshing, and expanding our fundamental data infrastructure tradecraft skills. Also keep in mind that everything is not the same across different environments, granted there are similarities that can be leveraged.


Data infrastructures are defined to support business applications' information service delivery

Data Infrastructures

The fundamental role of data infrastructures is to provide a platform environment for applications and data that is resilient, flexible, scalable, agile, efficient as well as cost-effective. Put another way, data infrastructures exist to protect, preserve, process, move, secure and serve data as well as their applications for information services delivery. Technologies that makeup data infrastructures include hardware, software, cloud or managed services, servers, storage, I/O and networking along with people, processes, policies along with various tools spanning legacy, software-defined virtual, containers and cloud.

Depending on your role or focus, you may have a different view than somebody else of what is infrastructure, or what an infrastructure is. Generally speaking, people tend to refer to infrastructure as those things that support what they are doing at work, at home, or in other aspects of their lives. For example, the roads and bridges that carry you over rivers or valleys when traveling in a vehicle are referred to as infrastructure.

Similarly, the system of pipes, valves, meters, lifts, and pumps that brings fresh water to you, and the sewer system that takes away wastewater, are called infrastructure. The telecommunications network, both wired and wireless such as cell phone networks, along with electrical generation and transmission networks, are considered infrastructure. Even the airplanes, trains, boats, and buses that transport us locally or globally are considered part of the transportation infrastructure. Anything that is below what you do, or that supports what you do, is considered infrastructure.

The following figure shows various layers or altitudes of encapsulation and abstraction of data infrastructures along with their underlying resources that are defined to support a business enablement outcome, as well as support information services delivery.


Data Infrastructure Stack Layers and Resources Defined To Support Business Information Services

The following figure shows the evolution of data infrastructures from on-prem bare metal to software-defined virtual, cloud, container, converged and hyper-converged packaging, as well as emerging composable infrastructure. Also shown below are hybrid as well as multi-clouds, including bare metal dedicated services in addition to virtual machine instances and container-based services.


Data Infrastructure and Resource Packaging Deployment Evolution

Hybrid Software Defined Industry Trends

Some of the trends discussed in the presentation include:

Clouds – Public, private, hybrid, and multi-clouds along with how they are being used, along with technology evolution including virtual machine (VM) instances, bare metal dedicated private servers (DPS), as well as metal as a service. Other cloud trends include data migration appliances such as AWS Snowball Edge and Microsoft Azure Data Box among others, VMware on AWS, as well as fog and edge computing.

Other trend topics included converged, hyper-converged, serverless, containers, persistent memory (PMEM) also known as storage class memory (SCM) along with other server storage I/O topics. Additional trend topics included data protection, Azure Stack, security, NVMe as well as NVMe over Fabrics (NVMeoF) along with composable and Gen-Z.

Tradecraft Skills Experience

Expanding your data infrastructure tradecraft means evolving from your primary focus area, gaining insight into other technologies, tools, and techniques in adjacent areas outside your comfort zone. For industry veterans with several years to many decades of experience, this means refreshing on what you know, think you know, or need to know with what's new or evolving. On the other hand, for those who are new, expanding your tradecraft means moving beyond learning to memorize to pass a certification test, to gaining insight on how, when, where, and why to apply different tools, technologies, and trends to the tasks at hand.

For example, developing tradecraft means moving from knowing the different hardware, software, and services resources as well as tools, to knowing what to use when, where, why, and how. Another dimension of expanding data infrastructure tradecraft skills is gaining the experience and insight to troubleshoot problems, gain situational awareness with dashboard or monitoring tools, as well as how to design and manage to cut or reduce the chance of things going wrong.

From Tools and Technologies to Techniques and Tricks of the Trade

Expanding your awareness of new technologies along with how they work is important, so too is understanding application and organization needs. Developing your tradecraft means balancing the focus on new and old technologies, tools, and techniques with business or organizational application functionality.

This is where using various tools that themselves are applications to gain insight into how your data infrastructure is configured and being used, along with the applications they support, is important.

Data Infrastructure Tools Tradecraft
Data Infrastructure Toolbox (Hardware, Software, Scripts)

Next Generation Hybrid Software Defined Data Infrastructures What Next


Balance head in the clouds (thinking, strategy, vision) with feet on the ground (what you can do today)

The following are some additional tips, comments, recommendations to keep in mind for enabling your next generation hybrid software defined data infrastructure.

Where to learn more

Learn more about data infrastructures and tradecraft related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Everything is not the same across different organizations, IT environments, application workloads, and the data infrastructures that support them. Data infrastructures span from legacy on-prem to software-defined cloud (public, private, hybrid, multi-cloud), container, serverless, virtual, hybrid, converged and hyper-converged, as well as central, core, and distributed edge or remote office branch office (ROBO). Even though everything is not the same, there are similarities across different environments, technologies, and workloads that can be leveraged. Fundamental tradecraft skills and experiences are what enable you to know what to use when, where, why, and how, including using new as well as old things in new ways, while not making old mistakes in new ways.

Some other tips include: avoid flying blind, particularly in software-defined and cloud environments; have situational awareness and end to end (E2E) insight, leveraging metrics that matter, are relevant, timely, accurate, and hold context to the data infrastructures as well as the applications they support. Part of expanding your tradecraft skills is refreshing on what you know, as well as expanding into new adjacent areas and getting out of your comfort zone. Also understand the context of different terms, technologies, and tools. For example, SAS can be big data analytics statistical analysis software, a serial attached SCSI storage device, as well as a shared access signature for Azure clouds, among others.

Also keep in mind that while software defined things are popular and trendy with the industry, keep the focus on what is being defined to enable an outcome or business enablement. In other words, the emphasis should not be on the software aspect per se, rather on how something (hardware, software, service) is defined to enable something. Also keep in mind with software defined marketing and trends such as serverless, servers and software still need hardware (somewhere), and hardware still needs software, from microcode to firmware to many other places in the data infrastructure layers or stack. Meanwhile, keep in mind that it is #blogtobertech and Next Generation Hybrid Software Defined Data Infrastructures Are In Your Future.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Server StorageIO 2018 VMworld Data Infrastructure Buzzword Bingo Puzzle

Following up from last year's 2017 crossword puzzle for travel fun, here is the Server StorageIO 2018 VMworld Data Infrastructure Buzzword Bingo Puzzle (click on the below image for a PDF version that includes answers). The puzzle can be something to do while traveling, or while taking a break between (or during) sessions as well as keynotes. I wonder which buzzword term will get used the most, as well as what new ones will be added to an updated version of this?

Server StorageIO 2018 VMworld Data Infrastructure Buzzword Bingo Puzzle

Where to learn more

Learn more about VMworld and data infrastructures related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Next week is VMworld 2018 in Las Vegas, which means for some a week of traveling and long days. Feel free to suggest additions, as there could be a revision or update or two between now and VMworld. Have fun, safe travels, and hope to see you next week; in the meantime enjoy the Server StorageIO 2018 VMworld Data Infrastructure Buzzword Bingo Puzzle.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Dell EMC PowerEdge MX 7000 Kinetic Based Data Infrastructure Architecture

Dell EMC today announced, with the tagline IT Unbound, their new PowerEdge MX 7000 Kinetic Based Data Infrastructure Architecture, slated for general availability September 21, 2018. Previewed earlier this year at Dell Technology World in Las Vegas, PowerEdge MX 7000 is a new family of modular, scalable servers for various data infrastructure roles.

What is different with PowerEdge MX 7000 compared to other new 14th generation (Gen 14) Dell servers is the finer granularity of resource allocation based around the new Kinetic composable infrastructure. Also previewed at Dell Technology World earlier this year in Las Vegas, Kinetic (not to be confused with the Seagate Kinetic object storage key value drive initiative) is a new composable architecture.

Dell EMC PowerEdge MX 7000 Kinetic What Was Announced

  • First instantiation of Kinetic composable based data infrastructure resources
  • OpenManage Enterprise Modular Edition
  • PowerEdge MX 7000 modular data infrastructure server

Dell EMC PowerEdge MX 7000 and Kinetic Architecture
Dell EMC PowerEdge MX 7000 and Kinetic Architecture Image via Dell.com

Dell EMC Kinetic Composability What Is It

By being a composable data infrastructure resource and server, Dell EMC Kinetic based solutions can be decomposed with finer granularity than previous servers. What this means is that in the past, memory, I/O network, physical storage devices, compute sockets and cores were assigned to a single image instance. That single image instance could be an operating system (OS) such as Linux or Windows, a hypervisor such as KVM, Microsoft Hyper-V, Nitro (AWS), Oracle, VMware vSphere ESXi, or Xen among others, or proprietary decomposition and aggregation software (and hardware) technology (ScaleMP among others).

With a composable based solution, instead of the entire server, or motherboard(s) and its resources, being allocated to a single OS as a bare metal (BM) or Metal as a Service (MaaS) instance, or to a hypervisor, different resources can be allocated to various instances. On the surface it would be easy to say that sounds a lot like what hypervisors such as those from Microsoft, VMware, and others are doing, particularly with clusters.

Dell EMC Kinetic Data Infrastructure Architecture
Dell EMC Kinetic Data Infrastructure Architecture Image via Dell.com

However, the difference is that with hypervisors, all of a server’s physical resources (compute, memory, I/O, storage devices, GPU, FPGA/ASIC) are allocated to the OS, hypervisor, or composition software, which then creates vCPU, vRAM, and related resources. With a composable approach such as Kinetic, the emphasis is on enabling more granular allocation of the physical resources themselves, as well as scaling out. The business or organizational outcome is what is essential, which means better allocation and effective use of resources to boost productivity vs. merely driving up utilization and efficiency.
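
To make the distinction more concrete, below is a small, purely illustrative Python sketch of the composable idea: rather than dedicating an entire server's resources to one image instance, instances of different shapes are carved out of a shared pool. This is a conceptual model only, not Dell EMC Kinetic's actual implementation, API, or resource granularity; the pool sizes and instance names are made up.

```python
# Illustrative only: a simplified model of composable resource allocation,
# not Dell EMC Kinetic's actual implementation or API.
from dataclasses import dataclass

@dataclass
class ResourcePool:
    cpu_cores: int
    memory_gb: int
    nvme_drives: int

@dataclass
class Instance:
    name: str
    cpu_cores: int
    memory_gb: int
    nvme_drives: int = 0

def compose(pool: ResourcePool, inst: Instance) -> None:
    """Carve an instance out of the shared pool instead of dedicating a whole server."""
    if (inst.cpu_cores > pool.cpu_cores or
            inst.memory_gb > pool.memory_gb or
            inst.nvme_drives > pool.nvme_drives):
        raise ValueError(f"Not enough free resources for {inst.name}")
    pool.cpu_cores -= inst.cpu_cores
    pool.memory_gb -= inst.memory_gb
    pool.nvme_drives -= inst.nvme_drives

# One chassis-wide pool, multiple instances of different shapes (hypothetical numbers)
pool = ResourcePool(cpu_cores=80, memory_gb=6144, nvme_drives=6)
compose(pool, Instance("db-node", cpu_cores=32, memory_gb=2048, nvme_drives=4))
compose(pool, Instance("web-node", cpu_cores=8, memory_gb=256))
print(pool)  # remaining capacity still available to further instances
```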

The Dell EMC PowerEdge MX 7000 eliminates the traditional hardware-based mid-plane, using an internal fabric connection per node that can also be exposed outside of the physical MX enclosure. By using an industry standard connector on the edge of server motherboard resource nodes, different server I/O connectivity can be leveraged as it becomes available or improves. For example, IMHO it is not too complicated to envision a time in the not so distant future when Kinetic enabled resources (e.g., server nodes) evolve to support the emerging Gen-Z server I/O connectivity protocol.

What is Gen-Z

Do PowerEdge MX 7000 and Kinetic use Gen-Z today? Not yet; however, Dell has been showing demos and technology proofs of concept at various events.

Why bring up Gen-Z now? Simple: it is something that will be part of many data infrastructure, server I/O, storage, networking, hardware and software-defined discussions in the not so distant future.

As a refresher or primer, Gen-Z is a new server I/O fabric interface that supports access to and by CPU sockets along with their cores and memory, including DRAM as well as emerging SCM and PMEM. In addition to server memory access, Gen-Z also enables local as well as remote access to memory, storage, GPU, FPGA and ASIC among other resources. For backward compatibility as well as investment protection, Gen-Z is intended to work with existing PCIe, Ethernet, Fibre Channel, SAS, SATA, NVMe and InfiniBand among other server I/O interconnects and protocols.

Does this mean Gen-Z is a challenger for Ethernet and other IP-based general LAN networking? IMHO no, at least not in the foreseeable future. Granted, like PCIe, Fibre Channel, InfiniBand, Ethernet and some others (including those that have joined the where-are-they-now list) that at some point promised to be the end-all network for everything, near-term Gen-Z is focused on use inside a modular enclosure or perhaps within a rack. Read more about Gen-Z here, as well as the Dell EMC blog The Gen-Z Journey road to composability.

Dell OpenManage Enterprise
Dell OpenManage Management Interface Image via Dell.com

OpenManage Enterprise Modular Edition

Management for PowerEdge MX 7000 utilizes OpenManage Enterprise Modular Edition, an HTML5 based tool with REST APIs. Management capabilities include workflows for simplicity of operation and lifecycle management. Besides its HTML5 and REST API interfaces, OpenManage Enterprise Modular Edition is also Redfish inspired for further interoperability. Note that PowerEdge MX 7000 also integrates with the Dell iDRAC physical machine level management interface, providing unified management from a single server to multiple server groups spanning towers to racks.
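
Since the tool is REST based and Redfish inspired, automation should follow the familiar Redfish pattern of walking the service root and its collections. Below is a minimal sketch using the Python requests library against a hypothetical chassis address with placeholder credentials; the exact resource paths and payloads exposed by OpenManage Enterprise Modular Edition may differ.

```python
# Minimal sketch of querying a Redfish-style REST endpoint; host and credentials
# are placeholders, and the paths below follow the generic Redfish specification.
import requests

BASE = "https://mx7000-chassis.example.com"   # hypothetical management address
AUTH = ("admin", "password")                  # placeholder credentials

# Service root is defined by the Redfish specification
root = requests.get(f"{BASE}/redfish/v1/", auth=AUTH, verify=False).json()
print(root.get("Name"))

# Enumerate compute systems (sleds) exposed by the management module
systems = requests.get(f"{BASE}/redfish/v1/Systems", auth=AUTH, verify=False).json()
for member in systems.get("Members", []):
    print(member["@odata.id"])   # verify=False is for lab use; use real certificates in production
```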

Dell EMC PowerEdge MX 7000
Dell EMC PowerEdge MX 7000 Image via Dell.com

Dell EMC PowerEdge MX 7000 Kinetic Based Data Infrastructure Server

The new Dell EMC PowerEdge MX 7000 is the first installment of their new Kinetic based composable architecture. Its components consist of a 7U chassis with power and cooling fans, compute sleds, storage sleds, I/O connectivity and inner fabric, along with management tools.

Dell EMC PowerEdge MX 7000 Modules
Dell EMC PowerEdge MX 7000 Modules Image via Dell.com

Dell EMC PowerEdge MX 7000 Server Compute modules

Dell EMC PowerEdge MX 7000 Compute sleds include MX740c (single width) and MX840c (double width) that are two and four socket modules with local on-board NVMe (e.g., U.2 8639 small form factor SFF) drives (per module). These initial compute modules support Intel Xeon processors and up to six (6) TBytes of memory. The MX740c supports up to six (6) local NVMe, SAS or SATA drives (e.g., 8639 connectors), while the MX840c supports up to eight (8) local drives. Note that these local onboard drives can be shared with other sled modules, as well as compute sleds can access the shared storage sled-based drives.

Dell EMC PowerEdge MX 7000 Server Storage modules

Dell EMC PowerEdge MX 7000 Storage sled consists of MX5016s holding up to 16 hot-pluggable SAS HDD, up to seven MX5016s sleds can be configured per MX chassis for up to 112 direct attached storage (DAS) drives. Each of the drives can be individually mapped to one or more servers supporting aggregated (e.g., HCI) as well as disaggregated (CI and legacy) deployment topologies.

Dell EMC PowerEdge MX 7000 Server I/O Networking Modules

Initial server I/O modules for the new Dell EMC PowerEdge MX include 25 GbE and 32G Fibre Channel (GFC) host connectivity along with 100 GbE and 32 GFC uplink capabilities, with top of rack (ToR) support built in and Open Networking OS10EE software enabled. The server I/O modules provide both north-south as well as east-west connectivity inside and outside the chassis for data plane and management plane traffic.

Server I/O connectivity options include:

  • MX5108n Ethernet Switch with 8 x 25GbE (server facing ports), 2 x 100GbE ports, 1 x 40GbE port, 4 x 10GbE ports.
  • MX9116n Fabric Switching Engine (e.g., Kinetic fabric) with 16 x 25GbE server facing ports, 2 x 100GbE/8 x 32GFC unified ports, 2 x 100 GbE ports and 12 fabric expansion ports.
  • MXG610s Fibre Channel Switch with 16 x 32GFC internal ports, 8 x 32 GFC SFP+ ports and 2 QSFP (4 x 32GFC) uplink ports.

Where to learn more

Learn more about Dell EMC PowerEdge MX, Kinetic, Composable and data infrastructures related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Overall this is a good announcement of technology and product, as well as of where resources are headed to meet different workload demands, and I look forward to getting some test time with a Dell EMC PowerEdge MX 7000.

Dell EMC PowerEdge MX 7000 Three Tenants
Dell EMC PowerEdge MX 7000 Three Tenants Image via Dell.com

The new Dell EMC PowerEdge MX 7000 provides a data infrastructure resource platform for deploying traditional, cloud, software-defined and composable configurations, as well as disaggregated converged infrastructure (CI), aggregated hyper-converged infrastructure (HCI) and hybrid configurations.

With the Dell EMC PowerEdge MX 7000, there is more resource granularity and future-proof capabilities than traditional high-density blade, as well as twin, quad or eight node server configuration solutions.

Many vendors talk about solutions being future proof or enabling investment protection; with PowerEdge MX 7000, Dell EMC is taking the next step in discussing trends, technology, and what you can do today. Unlike traditional dual, quad, eight or high-density node and blade servers with dedicated discrete mid-planes tied to a given technology, the Dell PowerEdge MX 7000 and Kinetic based architecture are mid-plane (aka backplane) free. There is still connectivity between the different PowerEdge MX 7000 chassis modules, which is a fabric (network if you prefer).

For example, server compute sled modules have an industry standard connector that connects with other components in the chassis. What differs from traditional blade and multi-node server configurations is that on board the compute sleds, an adapter module can be changed to support a new interface over different generations of technology (as an example, keep an eye on what happens with Gen-Z).

The result is that the Dell EMC PowerEdge MX 7000 should be an excellent platform for software-defined data centers (SDDC), software-defined data infrastructures (SDDI), software-defined infrastructures (SDI) as well as other software defined or traditional deployments. The Dell EMC PowerEdge MX 7000 will make for a good CI, HCI, SDDC, SDDI, SDI platform for public, private as well as hybrid clouds, PaaS as well as IaaS deployments, along with VMware, Microsoft (Hyper-V, Windows Storage Spaces Direct (S2D), as well as Azure Stack) among other scenarios.

By being flexible, scalable, agile and adaptable, with easy management and a responsive, future proof design enabling a pool of dynamic data infrastructure resources, the Dell EMC PowerEdge MX 7000 should be well positioned to deliver on IT Unbound.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Catching Up With Summer 2018 IBM Cloudy Software Defined Storage Announcements

Time for some catching up with the summer 2018 IBM cloudy software-defined storage announcements made earlier this week. The SHARE event (mainframe centric) is occurring this week in St. Louis, so it is no surprise that these announcements are geared to mainframe Z environments. These cloud and software-defined storage announcements for the mainframe environment follow those from a few weeks ago, including new Power9 based servers and the IBM FlashSystem 9100 flash SSD.

What was announced

What IBM announced this week was a mix of mainframe Z server storage with software-defined storage and cloud (e.g., cloudy) support, including:

IBM Spectrum Protect 8.1.6 multi-cloud updates with tiered backup across on-site and cloud. For example, active data remains on-site (or on-prem), inactive data protection copies get moved (tiered) to cloud storage. Other enhancements include software-defined threat protection such as malware and ransomware extending to hypervisor data, along with blueprint guides for IBM Cloud (e.g., Softlayer), AWS and Microsoft Azure.

IBM Spectrum Protect Plus 10.1.1 enhanced with encryption of vSnap repositories for security, VMware vSphere 6.7 support, improved dashboards user interfaces (UI), and DB2 support in addition to Microsoft SQL Server and Oracle.

IBM DS8882F storage
IBM DS8882F Z mainframe rack mount storage Image via IBM.com

IBM DS8882F rack-mounted storage system (part of the DS8000 storage family) integrated with IBM Z ZR1 (mainframe) and LinuxONE Rockhopper II (mainframe) servers. The DS8882F supports from 6.4 TB to 368.64 TB raw capacity, along with safeguarded copy protection including read-only copies (e.g., a variation of WORM), encrypted digital signatures, and 256-bit AES encryption.

IBM Cloud Object Storage aka COS (formerly known as Cleversafe) functions as a target tier for DS8880 without the need for an external gateway. Enhancements also include a new 1U server (via Quanta) supporting up to 72 TB configurations.

IBM Elastic Storage Server File and Object, pre-configured storage for AI, ML, big data and high-performance compute (HPC), includes integrated file (NFS, SMB) and object (S3, Swift) access. The solution is pre-installed on IBM Power8 servers running Red Hat Linux (e.g., RHEL). IBM claims high throughput for NAS NFS workloads with a large number of server connections; however, some performance numbers along with a side of context would be impressive to see.
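
Because the solution includes S3 object access, applications should be able to talk to it like any other S3-compatible endpoint. The following is a minimal sketch using boto3 with a hypothetical endpoint URL, placeholder credentials and bucket name; actual Elastic Storage Server deployments will have their own endpoints and authentication details.

```python
# Minimal sketch of object access over an S3-compatible interface using boto3.
# Endpoint URL, credentials and bucket name are placeholders, not real values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://ess-object.example.com",  # hypothetical S3-compatible endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# List buckets, then upload a small object
for bucket in s3.list_buckets().get("Buckets", []):
    print(bucket["Name"])

s3.put_object(Bucket="analytics-data", Key="sample.txt", Body=b"hello object storage")
```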

IBM Spectrum Scale on AWS is a software-defined storage alternative to the traditional appliance-based solution. With Spectrum Scale 5.0.2, IBM is joining other vendors who have made their software-defined storage solutions available on clouds such as AWS, Azure and Google among others. Besides running on AWS and working with Virtual Private Clouds (VPC), IBM supports per-TB licenses including bring your own license, a growing industry trend.

Where to learn more

Learn more about IBM Server, Storage, Data Protection and data infrastructures related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Despite having been declared dead for decades, IBM Z series systems are still prevalent in many large environments, even in a software-defined cloudy era. It is good to see IBM continuing to invest in them, and joining other industry vendors in supporting various cloudy deployments as well as legacy on-site aka on-prem.

Likewise, IBM is making its legacy Z mainframe systems trendy and cloudy with these new enhancements to support customer hybrid server, storage, and data infrastructure deployments.

Overall, a nice set of incremental improvements following industry trends, and catching up with summer 2018 IBM cloudy software defined storage announcements.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

IBM announces new Power9 processor based E950 E980 server systems

As a single server or node, the Power9 E950 supports up to four (4) CPU processor sockets, each with multiple cores. An E980 system comprises up to four E950 based systems as a solution. The new E950 succeeds the Power E850 and E850C; its machine type model number is 9040-MR9, a 4U single enclosure with two or four processor modules.


Power9 Processor image via IBM.com

IBM Power9 E950 and E980

As a refresher, these systems leverage IBM's proprietary processor chip technology called Power, which is used in their various mid-range and higher-end server solutions.

The Power9 E950 and E980 systems support PowerVM virtualization, along with virtual machine (VM) mobility as well as optimization for OpenStack among other workloads.

IBM touts Power9 E950 (AIX and Linux) and E980 (AIX, Linux, I systems) optimized for:

  • Analytics, AI (ML/DL) and Cognitive computing
    • Faster cores and threads, more performance per socket
    • More bandwidth and lower latency
  • Super Compute (SC), Technical, High Performance Compute (HPC)
    • High bandwidth graphical processing unit (GPU) attachment
    • Optimized CPU GPU memory sharing and interaction
    • Bandwidth optimized main memory
    • Virtual addressing optimization
  • Cloud and Hyper Scale Data Infrastructures and Data Centers
    • Dense performance and energy consumption
    • Virtualization assist, QoS, power management and security
    • Fast I/O subsystem for server I/O to storage and networks
  • Enterprise data infrastructures and data centers
    • Scale-up and scale-out
    • Server and workload consolidation
    • Up to 4TB of buffered memory per socket (16TB per E950 node)

IBM E950 Power9 System

Front view of E950 System Image via IBM.com

The following image (via IBM.com) shows an exploded component view of the E950.
IBM Power9 E950 exploded view

The following image (via IBM.com) shows a top view looking down into an E950.

IBM Power9 E950 top view

E950 is a 4U server (or E980 node) with compute and memory features including:

  • Power9 8,10,11 or 12 cores per socket, up to 48 cores (4 x 12 cores)
  • Four times memory compared to E850 systems (up to 16TB or 4TB per socket)
  • Eight (8) memory riser cards with 16 DDR4 DIMM each (8,16,32,64 or 128GB DIMM)
  • Memory bandwidth of up to 920 GB/sec (note that is big B not Gb or little b)
  • Refresh your server, CPU, compute, socket, core and threads knowledge here.

E950 also features faster I/O subsystem for server I/O to storage and networks:

  • 630 GB/sec (e.g., about 5 Tbps) I/O bandwidth (see the conversion sketch after this list)
  • NVIDIA NVLink GPU attachment, PCIe Gen4 and OpenCAPI I/O
  • Up to eight (8) (4 socket systems) PCIe Gen4 x16 (16 lanes each) card slots
  • Up to two (2) PCIe Gen4 x8 (8 lanes each) card slots
  • Up to 144 PCIe lanes (4 socket systems), full height, half length
  • USB 3 (2 front, 2 rear)
  • 12 internal 2.5” form factor storage bays for HDDs and SSDs, including up to eight (8) SAS and four NVMe U.2 (8639) drives. Note that NVMe devices attach via PCIe ports and lanes.
  • Hot plug components and optional I/O expansion as well as storage drawers
  • Here is a refresher (or primer) on PCIe, as well as NVMe, SAS, and SSD technologies.
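
As a quick sanity check on the big B (bytes) versus little b (bits) figures above, the following converts the 920 GB/sec memory bandwidth and 630 GB/sec I/O bandwidth numbers into terabits per second (multiply bytes by 8, then divide by 1,000).

```python
# Quick sanity check of the big-B (bytes) vs little-b (bits) figures above.
def gbytes_to_tbits(gb_per_sec: float) -> float:
    """Convert GB/s (bytes) to Tb/s (bits): multiply by 8, divide by 1000."""
    return gb_per_sec * 8 / 1000

print(gbytes_to_tbits(920))  # memory bandwidth: 920 GB/s ~= 7.36 Tb/s
print(gbytes_to_tbits(630))  # I/O bandwidth:    630 GB/s ~= 5.04 Tb/s (the ~5 Tbps above)
```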

IBM E980

The IBM E980 system is a collection of up to four nodes along with a control module; a cabinet rack E980 system is shown below (image via IBM.com).
IBM Power9 E980

IBM Power9 E950 E980
Via IBM.com

View more features for E950 here (PDF) and E980 here (PDF).

Where to learn more

Learn more about IBM Power and data infrastructures related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

These new systems provide increases not only in compute, but also in memory as well as server I/O for storage and networking. With the addition of multiple PCIe Gen4 x16 card slots, more GPUs such as those from NVIDIA, as well as fast Fibre Channel, SAS and NVMe based storage, can be attached to these systems.

With a good number of x16 PCIe Gen4 slots, the E950 and E980 systems are capable of supporting more GPU offload cards such as those from NVIDIA, along with other ASIC or FPGA accelerator devices. In addition to compute offload, the x16 PCIe Gen4 slots enable server I/O cards to more storage devices including faster Fibre Channel, Ethernet, SAS as well as NVMe attachment.

Overall, the new Power9 processor based E950 and E980 server systems are a good move by IBM for existing customers of AIX and Linux, as well as, with the E980, for IBM i systems.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

July 2018 Server StorageIO Data Infrastructure Update Newsletter

Volume 18, Issue 7 (July 2018)

Hello and welcome to the July 2018 Server StorageIO Data Infrastructure Update Newsletter.

In case you missed it, the June 2018 Server StorageIO Data Infrastructure Update Newsletter can be viewed here (HTML and PDF).

In this issue, buzzword topics include Dell Technologies and VMware, AWS and Google public, private and hybrid cloud, machine learning, 3D XPoint, SCM, SSD, NVMe, and data infrastructure management tools, among others.

Enjoy this edition of the Server StorageIO Data Infrastructure update newsletter.

Cheers GS

Data Infrastructure and IT Industry Activity Trends

July 2018 data infrastructure, server, storage, I/O network, hardware, software, cloud, converged, and container as well as data protection industry activity includes among others:

Amazon Web Services (AWS) July 2018 updates include enhancements to the SageMaker machine learning (ML) service, faster S3 access, and new EC2 instances along with Snowball Edge (SBE) as an on-prem converged server and compute appliance (read more about SBE here). In other public cloud activity, Google Cloud Platform (GCP) announced a new Los Angeles region.

Intel and Micron have announced that they will be pursuing different paths once they complete, in 2019, the second generation of 3D XPoint used in Intel Optane NVMe SSD and Storage Class Memory (SCM) technologies; read more here: Intel Micron 3D XPoint Evolving. Meanwhile, Broadcom buying CA, brilliant or a brainbuster? This deal is a bit of a head scratcher, with Broadcom spending $18.9 billion USD (cash) to buy CA Technologies.

In other data infrastructure news and activity, DataDirect Networks stages a bid to acquire Tintri's assets and expand its storage portfolio into the enterprise. Dell EMC announced a new integrated data protection appliance (IDPA DP4400) for small and midsize organizations. In other activity, VMware declared a dividend; Dell Technologies, being a majority owner, will use the cash to fund Dell business structuring. Read more about Dell Technologies Announces Class V VMware Tracking Stock exchange for stock or cash here.

Spectra (whom some of you know as Spectra Logic) has announced enhancements to their tape libraries. Note that one of the larger growth (or sustainment) markets for tape based technologies in recent years has been the larger cloud scale service providers. Granted, those providers are not using tape in old ways (e.g., for direct backup), but rather in new ways where it is a companion to SSD and HDD as another storage class, tier or technology enabler.

IBM has jumped on the NVMe bandwagon, announcing updates to their FlashSystem 9100 systems (e.g., the technology they acquired via TMS a few years ago). Opvizor has announced a new VMware vSAN performance monitoring and troubleshooting feature for their insight and awareness management tools.

Check out other industry news, comments, trends perspectives here.

Data Infrastructure Server StorageIO Comments Content

Server StorageIO Commentary in the news, tips and articles

Recent Server StorageIO industry trends perspectives commentary in the news.

Via SearchStorage: Comments on GDPR and Cloudian File Sync Share
Via NetworkComputing: Comments Software Defined Storage SDS Getting Started
Via SearchStorage: Comments The storage administrator skills you need to keep up today
Via SearchStorage: Comments Managing storage for IoT data at the enterprise edge
Via SearchCloudComputing: Comments Hybrid cloud deployment demands a change in security mind set

View more Server, Storage and I/O trends and perspectives comments here.

Data Infrastructure Server StorageIOblog posts

Server StorageIOblog Data Infrastructure Posts

Recent and popular Server StorageIOblog posts include:

2018 Hot Popular New Trending Data Infrastructure Vendors to Watch
June 2018 Server StorageIO Data Infrastructure Update Newsletter
May 2018 Server StorageIO Data Infrastructure Update Newsletter
Have you heard about the new CLOUD Act data regulation?
Data Protection Recovery Life Post World Backup Day Pre GDPR
Microsoft Windows Server 2019 Insiders Preview
Server Storage I/O Benchmark Performance Resource Tools
Data Infrastructure Primer Overview (Its Whats Inside The Data Center)
If NVMe is the answer, what are the questions?

View other recent as well as past StorageIOblog posts here

Server StorageIO Recommended Reading (Watching and Listening) List

Software-Defined Data Infrastructure Essentials SDDI SDDC

In addition to my own books, including Software Defined Data Infrastructure Essentials (CRC Press 2017) available at Amazon.com (check out the special sale price), the following are Server StorageIO data infrastructure recommended reading, watching and listening list items. The Server StorageIO data infrastructure recommended reading list spans various IT, data infrastructure and related topics; the Intel Recommended Reading List (IRRL) for developers is also a good resource to check out.

Duncan Epping (@DuncanYB), Frank Denneman (@FrankDenneman) and Niels Hagoort (@NHagoort) have released their VMware vSphere 6.7 Clustering Deep Dive book, available at venues including Amazon.com. This is the latest in a series of clustering deep dive books from Frank and Duncan; if you are involved with VMware, SDDC and related software defined data infrastructures, these should be on your bookshelf.

Watch for more items to be added to the recommended reading list book shelf soon.

Data Infrastructure Server StorageIO event activities

Events and Activities

Recent and upcoming event activities.

July 25, 2018 – Webinar – Data Protect & Storage

June 27, 2018 – Webinar – App Server Performance

June 26, 2018 – Webinar – Cloud App Optimize

See more webinars and activities on the Server StorageIO Events page here.

Data Infrastructure Server StorageIO Industry Resources and Links

Various useful links and resources:

Data Infrastructure Recommend Reading and watching list
Microsoft TechNet – Various Microsoft related from Azure to Docker to Windows
storageio.com/links – Various industry links (over 1,000 with more to be added soon)
objectstoragecenter.com – Cloud and object storage topics, tips and news items
OpenStack.org – Various OpenStack related items
storageio.com/downloads – Various presentations and other download material
storageio.com/protect – Various data protection items and topics
thenvmeplace.com – Focus on NVMe trends and technologies
thessdplace.com – NVM and Solid State Disk topics, tips and techniques
storageio.com/converge – Various CI, HCI and related SDS topics
storageio.com/performance – Various server, storage and I/O benchmark and tools
VMware Technical Network – Various VMware related items

What this all means and wrap-up

Summer is here in North America and the Northern Hemisphere, which means holidays as well as vacations. However, data infrastructures continue to evolve, as do the tools, technologies, trends, hardware, software and services, along with those who take care of and define them. Enjoy your summer vacation and holidays, as well as this July 2018 Server StorageIO Data Infrastructure Update Newsletter edition.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Amazon Web Services AWS July 2018 Updates

Amazon Web Services (AWS) July 2018 updates continue to expand the features, functionality and service capabilities of the public cloud provider across various geographies.

Recent AWS updates include Snowball Edge (SBE), which adds local, on-site, on-premises aka on-prem EC2 compute capabilities as part of the Snowball appliance. Previously Snowball was a data and storage migration only appliance; now with the new capabilities, compute is also enabled as part of a turnkey converged platform. Read more about SBE here.

In other updates, AWS has extended its Elastic Cloud Compute (EC2) capabilities (besides Snowball Edge) with new instance types, along with leveraging their next generation hypervisor as part of Nitro enabled systems. New EC2 instances span from on-prem Snowball Edge (SBE) to AWS Dedicated aka bare metal instances, along with traditional cloud instances (e.g., virtual machines).

These new instances, including R5, R5D, and Z1D among others, leverage faster Intel Xeon Platinum 8000 series processors along with more memory. For example, Z1D is a compute-intensive instance with 4.0 GHz all-turbo cores, while R5 is memory optimized with 3.1 GHz cores (up to 96 vCPU) and up to 768 GB of RAM. The R5D is a memory-optimized instance that also supports up to 3.6 TB of on-instance NVMe based storage. View additional AWS instance types here.
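
For those who want to kick the tires on the new instance types, here is a minimal, hypothetical boto3 sketch that launches an R5 instance; the AMI ID, key pair and region are placeholders to replace with your own values.

```python
# Minimal, hypothetical sketch of launching one of the new memory-optimized
# R5 instances with boto3; AMI ID, key pair and region are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="r5.2xlarge",         # one of the new R5 memory-optimized types
    KeyName="my-keypair",              # placeholder key pair
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```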

AWS has enhanced the SageMaker machine learning service to support higher throughput, enabling faster batch transform jobs for non-real-time inference. To enable higher data and API call rates, AWS has also increased Simple Storage Service (S3) request rates. Another enhancement is a preview of bring your own IP address for virtual private clouds (VPC), as part of enabling hybrid clouds.
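
For the batch (non-real-time) inference case, SageMaker batch transform jobs can be driven via the API. Below is a minimal sketch using boto3; the job name, model name, bucket paths and instance type are placeholders, assuming a model has already been trained and registered in SageMaker.

```python
# Minimal sketch of starting a SageMaker batch transform job with boto3;
# job name, model name, S3 paths and instance type are placeholders.
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

sm.create_transform_job(
    TransformJobName="nightly-batch-inference",   # hypothetical job name
    ModelName="my-trained-model",                 # placeholder, previously created model
    TransformInput={
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://example-bucket/input/",
        }},
        "ContentType": "text/csv",
    },
    TransformOutput={"S3OutputPath": "s3://example-bucket/output/"},
    TransformResources={"InstanceType": "ml.m5.xlarge", "InstanceCount": 1},
)
```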

View additional new, recent and past AWS updates here, and here.

Where to learn more

Learn more about AWS, Cloud and data infrastructures related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Amazon Web Services AWS July 2018 Updates continue to expand the number, type and extensiveness of public cloud services, as well as enabling hybrid capabilities. The Amazon Web Services AWS July 2018 Updates also address different data infrastructure layers, from lower level Infrastructure as a Service (IaaS) including EC2 compute, to higher level artificial intelligence (AI), machine learning (ML) and deep learning (DL) among other cognitive as well as analytic offerings.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.