Not Dead Yet Zombie Technology (Declared Dead yet still alive) October 2018 Update #blogtobertech

Musician Phil Collins has an excellent name for his current tour, Not Dead Yet, a reminder that he is still alive and performing, at least one more time. With Halloween just around the corner, it is that time of the year to revisit zombie technology: those technologies, tools, techniques and trends that have been declared dead yet are still alive.

Data Infrastructure Tools Trends Topics

IT Zombie Technology Declared Dead Not Dead Yet

A concert tour named Not Dead Yet sets the stage for this post, which is about IT zombie technology, in particular data infrastructure related technologies, tools, trends and topics that have been declared dead by some people, yet are still alive. Not only are these tools and techniques still being used, they are also being enhanced, meaning they will be around for future years of zombie technology updates. Not dead yet.

As a refresher, a zombie technology is one that has been declared dead, usually by some upstart vendor along with its pundits and other followers, in favor of whatever new thing has just been announced. As luck or fate would have it, some of these startups or new technologies that declare an older, established one dead tend to end up on the where-are-they-now list themselves.

In other words, some technologies do survive and gain both industry adoption and, even more critical, customer deployment. Likewise, some of the technologies whose arrival caused something existing to be declared dead end up surviving to live alongside, or near, what their supporters declared dead.

Another not so uncommon occurrence is when a new technology whose supporters declared something else dead is later itself declared dead by a yet more modern technology, thereby becoming a zombie technology in its own right. Put a different way, being on the zombie technology list may not be the same as being the shiny new popular trendy technology. However, it can be a badge of honor, not to mention a revenue and profit maker.

Data Infrastructure components

Zombie Technology List

What are some old and new Zombie technologies that have been declared dead, yet are still alive, being used and enhanced, not dead yet?

IBM Mainframe

This is a perennial favorite, and while it is not seeing the growth associated with other platforms including Intel, AMD and ARM among others, it has its place with many large organizations. Not only does it continue to be manufactured and enhanced, with even some new customers buying them, it also runs native Linux in addition to traditional z/OS among other software.

Fibre Channel (FC)

FC has been declared dead for over a decade, and while Ethernet-based server storage I/O networking continues to gain ground in both industry as well as customer deployments, there is still plenty of life in and with FC for years to come, at least for some environments. NVMe over Fabrics (NVMeoF) which is the NVMe protocol carried on top of a fabric network (SAN if you prefer) is gaining industry popularity and customer curiosity.

There are many flavors of NVMe over fabrics including NVMe over Fibre Channel, e.g., FC-NVMe which is similar to mapping the SCSI command set (SCSI_FCP) on to Fibre Channel or what is more commonly known as FCP or simply FC.

What this means is that FC-NVMe is just another upper-level protocol (ULP) that can co-exist with others on the same Fibre Channel network. In other words, FICON, FCP and NVMe among others can co-exist on the same Fibre Channel-based network. Will everybody using Fibre Channel move to FC-NVMe? Good question; ask the FC folks, and the answer, not surprisingly, would be yes or probably. Will new customers looking to do NVMe over some type of fabric or network use Fibre Channel instead of Ethernet or another transport? Some will, while others will go other routes. For now, what is clear is that FC is still alive and thus on the zombie technology list, not dead yet.

SAS and SATA

Both have been declared dead as they have been around for a while, and over time NVMe will pick up more of their workload. Near term, however, SAS and SATA will continue as lower cost, smaller footprint options for general purpose and bulk low-cost direct attachment. On the other hand, look for more M.2 NVMe Next Generation Form Factor (NGFF), aka gum stick, devices appearing on physical servers along with storage systems. Likewise, watch for increased deployment of NVMe U.2 (aka 8639) drive form factor SSDs using NAND flash as well as 3D XPoint and Intel Optane among other media as part of new server and storage platforms. BTW, USB is not dead yet either, just saying.

Microsoft Windows

Windows desktop, Windows Server, even Hyper-V virtualization have been declared dead for some time now, yet all continue to evolve. Just recently, Microsoft released Windows Server 2019, which includes many enhancements spanning software-defined storage (Storage Spaces Direct aka S2D), software-defined networking, converged and hyper-converged infrastructure (HCI) deployment options, expanded virtualization capabilities, Windows Subsystem for Linux (WSL) enhancements (e.g., a native bash shell on Windows), and container updates with Kubernetes as well as Docker among others. In other words, it's not dead yet.

Hard Disk Drive (HDD)

Having been declared dead for decades, and while no longer the primary frontline storage medium it was in the past, HDDs continue to evolve and be used alongside faster flash SSDs, as well as a front end to magnetic tape. Some of the larger consumers of HDDs continue to be cloud service providers, also known as mega scalers, storing large amounts of bulk data. I suspect that HDDs will remain on the zombie technology list for at least another decade or so, as has been the case for the past several decades.

Magnetic Tape

Like HDDs, tape is still in use in some environments, and like HDDs, the cloud service providers are significant users of tape as low-cost, low-access, high-capacity bulk storage for cold archives that is front-ended by HDDs, SSDs or both.

Cloud (Public, Private and Hybrid)

Yes, believe it or not, some have declared cloud dead, along with hybrid cloud and private cloud among others. Oh well.

Physical Machine (PM)

Also known as bare metal (BM), physical servers were declared dead a decade or so ago at the hands of the then emerging Intel-based virtualization hypervisors, notably VMware ESXi and to a lesser extent Microsoft Hyper-V. I say lesser extent with Hyper-V in that there was less noise about PMs and BMs being dead from that camp than there was from some in the ESXi virtual kingdom. Needless to say, PMs and BMs, from Intel to AMD and ARM-based, along with IBM Power among many others, are very much alive as dedicated servers in the cloud, as VM and container hosts, as well as being accessorized with FPGA, ASIC, GPU and other resources.

Virtual Machines

Listen to some in the container, serverless or something-newer crowd, and you will hear that virtual machines (VMs) are dead, which for some workloads may be right. On the other hand, similar to the physical machine (PM) or bare metal (BM) servers that were declared dead by the VM crowd a decade or so ago, VMs are alive and doing well. Not only are they doing well, but like containers, VMs will see continued adoption and deployment both on-prem as well as in the cloud, as will BMs and PMs, now known as dedicated servers in the clouds.

NAS and Files

If you listened to some of the pundits and press, NAS and files were supposed to have been dead several years ago at the hands of object storage. The reality today is that object storage continues to grow in customer deployments, and while the industry is not as enamored (or drunk) with it as it was a few years ago, the newer technology is here to stay and will be around for many decades to come.

That brings us back to NAS and files, which were declared dead by the object opportunists, yet file access is very much alive and continues to gain ground. In fact, most cloud providers have added NAS file-based access (NFS, SMB, POSIX among others), either natively or via partners, to their solutions. Likewise, most object storage platforms have also added or enhanced their NAS file-based access for compatibility while their customers re-engineer their applications or create new apps that are object and blob native. Thus, NAS and file-based access are proud members of the zombie technology list.

Data Infrastructure tools

There are many more tools, technologies, trends and techniques that belong on the above list. For example, backup has been declared dead, along with the PCIe bus, NAND flash, programming, data centers, databases and SQL among many others. What they have in common is that they are part of a growing list of things declared dead that are not dead yet, thus zombie technologies.

Where to learn more

Learn more about Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means

What is your favorite zombie technology, tool, trend or technique?

What zombie technologies, tools, trends or techniques should be added to the list and why?

Many tools, technologies, techniques and trends are declared dead, sometimes before they are even really alive and mature, by those who have something new, or who simply lack creativity (e.g., dead marketing?), so it's easier to declare something else dead. While some of those challengers succeed, prospering and eventually being added to the zombie technology list themselves (a badge of honor), others quietly end up on the where-are-they-now list: the vendors, tools, technologies, techniques and trends that were on the famous hit parade in the past but have faded away or ended up dead (unlike a zombie).

Don't be scared of zombie technology; be prepared to embrace what is new while using both old and new in new ways. Right now, I don't have tickets to go see Phil Collins' Not Dead Yet tour; maybe that will change. For now, keep in mind: don't be scared when looking at Not Dead Yet Zombie Technology (Declared Dead yet still alive) October 2018 Update #blogtobertech.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Ten tips to reduce your cloud compute storage costs #blogtobertech

The following are Ten tips to reduce your cloud compute storage costs.

In some cases, reducing your cloud costs means spending the same yet getting more value and resources that provide a business benefit, for example, paying the same yet upgrading to fewer, faster servers, storage and I/O network resources to support growth while boosting productivity. In other words, when measured on a cost per unit of work done or service enabled, there should be an improvement.
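
As a simple hypothetical illustration of that cost-per-unit-of-work view (the spend and workload counts below are made up for the example), spending the same amount while supporting more workloads still improves the unit cost:

```python
# Hypothetical cost-per-unit-of-work comparison; spend and workload counts
# are illustrative only.
monthly_spend = 2000.0          # $ per month, unchanged before and after

workloads_before = 100          # workloads supported on the old resources
workloads_after = 160           # workloads supported after the refresh

cost_per_workload_before = monthly_spend / workloads_before  # $20.00
cost_per_workload_after = monthly_spend / workloads_after    # $12.50

print(f"Before: ${cost_per_workload_before:.2f} per workload")
print(f"After:  ${cost_per_workload_after:.2f} per workload")
```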

On the other hand, cost cutting can be measured by an actual reduction in spending, for example, consolidating multiple applications to a lower cost compute instance running at higher utilization. The caveat is that while the spend may be reduced, is the corresponding level of service or application and user productivity negatively impacted?

Other examples are a hybrid of removing complexity and cost as well as cost-cutting, for instance finding orphan resources that are powered on yet not used. Orphan resources include IP addresses that are assigned and being charged for yet not used, or a virtual machine instance that is powered on but idle. Another orphan example is a VM instance that is powered off and no longer used, along with the disks still assigned to it, as well as any snapshots or backups.
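
As an illustration of hunting for orphan resources, here is a minimal sketch assuming AWS with boto3 and credentials already configured (the region is a placeholder); similar queries exist for other providers and for other resource types such as snapshots and images.

```python
# Minimal sketch: enumerate a couple of common orphan resource types on AWS.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Unattached (available) EBS volumes are a common orphan resource
volumes = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)["Volumes"]
for vol in volumes:
    print(f"Orphan volume {vol['VolumeId']} size {vol['Size']} GiB")

# Elastic IP addresses that are allocated but not associated still accrue charges
addresses = ec2.describe_addresses()["Addresses"]
for addr in addresses:
    if "AssociationId" not in addr:
        print(f"Unassociated Elastic IP {addr['PublicIp']}")
```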

Ten tips to reduce your cloud costs

  • Utilize client and remote site data file cache to reduce cloud egress network fees
  • Bring your own software licenses for operating systems and applications
  • Monitor your cloud cost summaries regularly to watch out for surprises (see the reporting sketch following this list)
  • Find and remove orphan resources including instances, images, IP addresses, storage volumes and buckets
  • Revisit whether your data is stored in the appropriate storage class or tier for how it is used. Likewise, leverage lower durability storage tiers as locations for additional protection copies instead of merely as a single destination to support cost-cutting. For example, cost cutting would be placing your only data protection copy or archive on a lower cost, lower durability storage tier. Removing cost while boosting availability would be putting a copy of your data on two or more economically priced, less durable storage tiers in different locations, instead of a single copy on a highly durable tier in one place.
  • Consolidate many smaller, lower cost instances into fewer larger instances, removing complexity and costs
  • Utilize reserved instances (RI) along with prepayment discounts; also check with your finance department to see whether there are benefits to treating the spend as OpEx or CapEx.
  • Audit your RIs to make sure you have the appropriately sized resources to meet workload needs.
  • Utilize spot instances for spot or ad-hoc interruptible workloads
  • Leverage ephemeral on-instance storage as a cache to boost performance
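
Following up on the tip about monitoring cost summaries, below is a minimal sketch assuming AWS Cost Explorer is enabled and boto3 credentials are configured (the dates are illustrative); it breaks one month's spend down by service so surprises show up early. Azure and Google Cloud offer comparable billing and reporting APIs.

```python
# Minimal sketch: month-over-month cost summary by service via Cost Explorer.
import boto3

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer endpoint

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2018-09-01", "End": "2018-10-01"},  # illustrative month
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = group["Metrics"]["UnblendedCost"]["Amount"]
    print(f"{service}: ${float(amount):,.2f}")
```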

Additional Tips and Recommendations

Everything is not the same, so why treat everything the same, including assigning it to the same type of resources? Keep in mind that all applications have some level of Performance, Availability, Capacity, and Economic (PACE) resource requirements that need to be balanced.

Similar to on-prem environments, one of the top mistakes when choosing storage is looking only at cost per capacity, particularly with flash-based SSD and NVMe accessed storage. Also look into what the storage performance thresholds are, as well as any access and API or service call fees.

Watch out for excessive API and cloud service calls beyond your normal monthly limits. For example, consistently running rsync against some storage classes can result in surprise monthly invoices. Likewise, moving data around, changing encryption or other operations may wipe out savings from going to a lower storage tier. Look beyond the monthly cost per capacity: what are the access fees, including egress (reading data back), as well as charges for API calls such as list, dir or other operations?
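
As a back-of-the-envelope illustration of how those access fees add up on top of the capacity charge, here is a quick sketch; the rates are hypothetical placeholders, not any provider's published price list, so substitute your own.

```python
# Back-of-the-envelope access fee estimate; all rates below are illustrative.
egress_gb_per_month = 500          # data read back out of the cloud (GB)
requests_per_month = 2_000_000     # list/get/put style API calls

egress_rate_per_gb = 0.09          # illustrative $/GB egress
request_rate_per_1000 = 0.005      # illustrative $ per 1,000 requests

egress_cost = egress_gb_per_month * egress_rate_per_gb
request_cost = (requests_per_month / 1000) * request_rate_per_1000

print(f"Egress:   ${egress_cost:,.2f}")              # $45.00
print(f"Requests: ${request_cost:,.2f}")             # $10.00
print(f"Total access fees: ${egress_cost + request_cost:,.2f}")
```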

Likewise, for compute instances, look beyond the basic cost, also considering how much memory (DRAM), I/O for storage and networking, on-instance storage (temporary or persistent), bring-your-own-license options, and the number of cores or virtual CPUs along with their speed. Also, watch for any limits on the number of I/O operations per instance, particularly with fast flash SSD including NVMe accessed storage. Just because it's flash or NVMe does not mean it's going to be fast.

Where to learn more

Learn more about Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means

Have situational awareness of your on-prem environment, knowing your resource costs as well as your level of service, to make informed decisions. Don't be scared, be prepared: avoid flying blind, plan ahead and apply the appropriate type and quantity of resources to meet application workload needs. Keep in mind that there are more than ten tips to reduce your cloud compute and storage costs; however, these should get you off to a good start.

Ok, nuff said, for now.

Cheers Gs


How I saved money storing more data on aws s3 simple storage service #blogtobertech

How I saved money storing more data on AWS S3 Simple Storage Service is an example of reducing cloud costs as opposed to merely cutting cloud costs. What this means is that instead of just cutting my cloud storage costs with a focus on how much I could save, I wanted to remove some costs while also storing more data without compromise. For example, since making the changes, storage capacity usage has almost doubled, yet my costs remain 37% lower than two years ago, before the changes were made.

How did I save money storing more data on AWS S3?

Without any added context, the typical reaction might be that I saved money storing more data on (or in) AWS S3 as opposed to locally on-site (on-prem). Another typical response would be that I moved all of my data from a different, more expensive cloud service to AWS S3. Yet another common reaction would be that I moved my AWS S3 data into AWS Glacier cold storage, or deleted a large amount of data.

Some might even comment that I must have used some type of dedupe, compression or other data footprint reduction (DFR) technology. On the other hand, some might determine that I probably did all or some of the above, or leveraged AWS tiered storage, aligning different storage classes to the type of data activity.

How I saved money storing more data in AWS S3 actually involved spending some money in order to eventually save money by leveraging different S3 storage classes. As part of rebalancing or moving different data to its new storage class, some one-time charges were incurred, which were recouped after several months of savings. The costs pertained to EC2 compute instances and associated storage used for some of the data tiering; other fees were for access charges along with excessive API calls. For example, some of the data was in storage classes that have fees for early retrieval or deletion, or fees for access, among others.
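
To estimate whether a similar rebalancing pays off, a simple payback calculation helps; the figures below are hypothetical illustrations, not my actual invoice amounts.

```python
# Hypothetical payback calculation: one-time migration charges vs. recurring savings.
one_time_migration_cost = 60.0   # EC2 time, early retrieval/delete and API fees ($)
monthly_cost_before = 40.0       # storage bill before rebalancing ($/month)
monthly_cost_after = 25.0        # storage bill after rebalancing ($/month)

monthly_savings = monthly_cost_before - monthly_cost_after   # $15/month
payback_months = one_time_migration_cost / monthly_savings   # 4 months

print(f"Monthly savings: ${monthly_savings:.2f}")
print(f"Payback period: {payback_months:.1f} months")
```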

How I use different AWS S3 storage classes (tiers)

  • Standard – Frequently changing data, or data with frequent access
  • Infrequent Access (IA) – Data that does not change frequently or that is not routinely accessed. In the past, before OZA, I placed data in this class that did not need to be in Standard yet was too warm for Glacier. After the migrations, I have less data stored in IA, with more in OZA as well as some in Standard.
  • One Zone Availability (OZA) – Data that is frequently accessed for reading yet is static and not cold enough to move to Glacier or a deep archive; a mix of backups, online and active archives. Note that I use OZA as an additional copy or location and not as a single, lowest cost place to store data. In other words, anything that I put into OZA has at least one additional copy somewhere else, which may not be in the cloud.
  • Glacier – Very cold, seldom accessed archives (a lifecycle rule sketch for automating transitions between these classes follows this list)
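
Rather than moving objects by hand, S3 lifecycle rules can automate transitions between classes. Below is a minimal sketch using boto3; the bucket name, prefix and day thresholds are hypothetical, and remember that anything headed for a single lower durability class should already have another copy elsewhere.

```python
# Minimal sketch: an S3 lifecycle rule that tiers aging objects automatically.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-aging-data",
                "Status": "Enabled",
                "Filter": {"Prefix": "archives/"},  # hypothetical prefix
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "ONEZONE_IA"},
                    {"Days": 365, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```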

Where to learn more

Learn more about Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means

I decreased my AWS monthly bill by rebalancing things; there was a one-month period where my costs increased during the changes, followed by a subsequent reduction. However, while I saw my monthly AWS storage invoices decrease, I am also storing more data per month. How I saved money storing more data on AWS S3 Simple Storage Service came down to using different storage classes.

Ok, nuff said, for now.

Cheers Gs


Dont Stop Learning Expand Your Skills Experiences Everyday #blogtobertech

Don't stop learning; expand your skills and experiences every day, including moving beyond or outside your current tradecraft focus area. If you are an expert in a field or given focus area, learn something new about an area outside your expertise or comfort zone. If you are of the mindset that there is nothing new to learn about, that it's all old and boring, perhaps it's time to step back, look around and explore other areas.

Doing something new can be in an adjacent technology area, or something completely unrelated. For example, in a recent VMUG keynote presentation and blog post I discussed how Next Generation Hybrid Software Defined Data Infrastructures Are In Your Future.

Next Generation Data Infrastructures are in your future (if not already)

What tradecraft skills and experience do you need to have, expand or refresh to support next-generation hybrid software-defined data infrastructures? If you are a server person, then you need to broaden your tradecraft skills and experience into storage, I/O networking, cloud, virtual and containers across hardware as well as software. Likewise, if you are a storage or I/O and networking person, you need to expand into other areas. If you are a VMware focused professional, then learn about Microsoft Hyper-V or vice versa. If you are an AWS focused person, learn about Google or Azure, or vice versa; the same applies across different technology domains.

On the other hand, if you know all there is to know, chances are there are other areas you need to learn more about, or you need to determine what you don't know so you can address it. If by chance you do happen to know everything there is to know, how much time are you spending interacting with others to teach them, possibly learning something new yourself?

Invest Time into Your Tradecraft Skill set

If you are not spending at least an hour a day learning something new, you are missing out on an opportunity. Part of that hour should also be outside your comfort zone and core focus area. For example, if you are a software pro, learn more about hardware, clouds, or something different. If you are a VMware focused person, learn Hyper-V, AWS, Azure, something else. If you are storage, learn server, network, cloud and beyond. If you are focused on data infrastructures, then learn about the upper-level business applications along with the users who use them, and vice versa.

How I Continue to Learn Expanding My Tradecraft Skills Experience Every day

As part of expanding my tradecraft, I spend part of my day learning and refreshing on core data infrastructure focus areas (servers, storage, I/O networking, hardware, software, cloud, containers, converged, software-defined, data protection) and related topics. Learning involves vendor briefings, research, talking with others, reading, and hands-on technology trials to gain insight, experience and perspective.

I have also expanded my tradecraft experiences by becoming an FAA Part 107 licensed commercial pilot of small unmanned aerial systems (sUAS), also known as small unmanned aerial vehicles (sUAV) or, more commonly, simply drones. Besides being FAA licensed, I also became Minnesota licensed for sUAV/drone operation and aerial photography. Drone flying is adjacent to data infrastructures in that one of my drones records at 4K 60 frames per second (fps), meaning about 1 GByte of data every two minutes of video, plus telemetry. Note that the drones have internet capability and can be considered IoT devices for their video as well as telemetry.
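
To put that figure in perspective, here is a quick back-of-the-envelope calculation based on the 1 GByte per two minutes quoted above; the flight time per battery is an assumption for the example.

```python
# Back-of-the-envelope data rate from ~1 GByte per two minutes of 4K 60 fps video.
bytes_per_two_minutes = 1e9                        # ~1 GByte
mb_per_second = bytes_per_two_minutes / 120 / 1e6  # ~8.3 MB/s
mbit_per_second = mb_per_second * 8                # ~67 Mbps video bitrate

flight_minutes_per_battery = 20                    # assumed typical flight time
gb_per_flight = flight_minutes_per_battery / 2     # ~10 GB of video per flight

print(f"{mb_per_second:.1f} MB/s ≈ {mbit_per_second:.0f} Mbps")
print(f"≈ {gb_per_flight:.0f} GB of video per {flight_minutes_per_battery}-minute flight")
```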


Above is 4K video from my flights via my companion site www.picturesoverstillwater.com

Where to learn more

Learn more about learning, data infrastructures, tradecraft, drones as well as related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means

What this means is that in addition to expanding as well as refreshing my data infrastructure related tradecraft skills, I am also expanding my experiences into other adjacent areas. In other words, instead of just talking about big data, fast data, video, IoT, drones and related topics, I am involved with them hands on.

Keep in mind that at some point the student becomes the teacher, and a teacher is a student. Use your eyes and ears to see things in different ways, and listen to and learn about items outside your primary focus area as you expand or refresh your tradecraft skill set and experiences.

If you can’t learn something new every day, either you are not trying, or you are in trouble. Even experts and unicorns can learn something new every day, even if that is as simple as learning to listen to others.

With October being #blogtobertech, there are plenty of opportunities to not stop learning and to expand your skills and experiences every day, which also includes the student becoming the teacher, and the teacher being a student.

Ok, nuff said, for now.

Cheers Gs


Next Generation Hybrid Software Defined Data Infrastructures Are In Your Future #blogtobertech

A few weeks ago I was invited to present a keynote at the 1st annual Minnesota VMware User Group (VMUG) Super VMUG mega event in Minneapolis titled Next Generation Hybrid Software Defined Data Infrastructures Are In Your Future (download PDF presentation here).

Key themes of the presentation focused on data infrastructures (e.g., what's inside physical data centers, including server, storage, I/O networking, hardware, software, policies, procedures) along with industry trends including hybrid software defined clouds (and containers). Another aspect of the presentation focused on building, refreshing and expanding our fundamental data infrastructure tradecraft skills. Also keep in mind that everything is not the same across different environments, granted there are similarities that can be leveraged.


Data infrastructures are defined to support business applications and information services delivery

Data Infrastructures

The fundamental role of data infrastructures is to provide a platform environment for applications and data that is resilient, flexible, scalable, agile, efficient as well as cost-effective. Put another way, data infrastructures exist to protect, preserve, process, move, secure and serve data as well as the applications that deliver information services. Technologies that make up data infrastructures include hardware, software, cloud or managed services, servers, storage, I/O and networking, along with people, processes, policies and various tools spanning legacy, software-defined virtual, containers and cloud.

Depending on your role or focus, you may have a different view than somebody else of what is infrastructure, or what an infrastructure is. Generally speaking, people tend to refer to infrastructure as those things that support what they are doing at work, at home, or in other aspects of their lives. For example, the roads and bridges that carry you over rivers or valleys when traveling in a vehicle are referred to as infrastructure.

Similarly, the system of pipes, valves, meters, lifts, and pumps that bring fresh water to you, and the sewer system that takes away waste water, are called infrastructure. The telecommunications networks, both wired and wireless such as cell phone networks, along with electrical generation and transmission networks, are considered infrastructure. Even the airplanes, trains, boats, and buses that transport us locally or globally are considered part of the transportation infrastructure. Anything that is below what you do, or that supports what you do, is considered infrastructure.

The following figure shows various layers or altitudes of encapsulation and abstraction of data infrastructures along with their underlying resources that are defined to support a business enablement outcome, as well as support information services delivery.


Data Infrastructure Stack Layers and Resources Defined To Support Business Information Services

The following figure shows the evolution of data infrastructures from on-prem bare metal to software-defined virtual, cloud, containers, converged and hyper-converged packaging, as well as emerging composable options. Also shown below are hybrid as well as multi-clouds, including bare metal dedicated services in addition to virtual machine instances and container-based services.


Data Infrastructure and Resource Packaging Deployment Evolution

Hybrid Software Defined Industry Trends

Some of the trends discussed in the presentation include:

Clouds – Public, private, hybrid and multi-clouds, along with how they are being used, plus technology evolution including virtual machine (VM) instances, bare metal dedicated private servers (DPS) as well as metal as a service. Other cloud trends include data migration appliances such as AWS Snowball Edge and Microsoft Azure Data Box among others, VMware on AWS, as well as fog and edge computing.

Other trend topics included converged, hyper-converged, serverless, containers, persistent memory (PMEM) also known as storage class memory (SCM) along with other server storage I/O topics. Additional trend topics included data protection, Azure Stack, security, NVMe as well as NVMe over Fabrics (NVMeoF) along with composable and Gen-Z.

Tradecraft Skills Experience

Expanding your data infrastructure tradecraft means evolving from your primary focus area, gaining insight into other technologies, tools and techniques in adjacent areas outside your comfort zone. For industry veterans with several years to many decades of experience, this means refreshing what you know, think you know or need to know with what's new or evolving. On the other hand, for those who are new, expanding your tradecraft means moving beyond memorizing to pass a certification test, to gaining insight into how, when, where and why to apply different tools, technologies and trends to the tasks at hand.

For example, developing tradecraft from knowing the different hardware, software, and services resources as well as tools, to what to use when, where, why, and how. Another dimension of expanding data infrastructure tradecraft skills is gaining the experience and insight to troubleshoot problems, gain insight awareness with dashboard or monitoring tools, as well as how to design and manage to cut or reduce the chance of things going wrong.

From Tools and Technologies to Techniques and Tricks of the Trade

Expanding your awareness of new technologies along with how they work is important, so too is understanding application and organization needs. Developing your tradecraft means balancing the focus on new and old technologies, tools, and techniques with business or organizational application functionality.

This is where using various tools that themselves are applications to gain insight into how your data infrastructure is configured and being used, along with the applications they support, is important.

Data Infrastructure Tools Tradecraft
Data Infrastructure Toolbox (Hardware, Software, Scripts)

Next Generation Hybrid Software Defined Data Infrastructures What Next


Balance head in the clouds (thinking, strategy, vision) with feet on the ground (what you can do today)

The following are some additional tips, comments, recommendations to keep in mind for enabling your next generation hybrid software defined data infrastructure.

Where to learn more

Learn more about data infrastructures and tradecraft related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means

Everything is not the same across different organizations, IT environments, application workloads and the data infrastructures that support them. Data infrastructures span from legacy on-prem to software-defined cloud (public, private, hybrid, multi-cloud), container, serverless, virtual, hybrid, converged and hyper-converged, as well as central, core and distributed edge or remote office branch office (ROBO). Even though everything is not the same, there are similarities across different environments, technologies and workloads that can be leveraged. Fundamental tradecraft skills and experiences are what enable you to know what to use when, where, why and how, including using new as well as old things in new ways, while not making old mistakes in new ways.

Some other tips include: avoid flying blind, particularly in software defined and cloud environments; have situational awareness and end to end (E2E) insight, leveraging metrics that matter, that are relevant, timely and accurate, and that hold context to the data infrastructures as well as the applications they support. Part of expanding your tradecraft skills is refreshing what you know while also expanding into new adjacent areas, getting out of your comfort zone. Also understand the context of different terms, technologies and tools. For example, SAS can be big data analytics statistical analysis software, a serial attached SCSI storage device, as well as a shared access signature for Azure clouds, among others.

Also keep in mind that while software defined things are popular and trendy with the industry, keep the focus on what is being defined to enable an outcome or business enablement. In other words, the emphasis should not be on the software aspect per se, rather on how something (hardware, software, service) is defined to enable something. Also keep in mind, with software defined marketing and trends such as serverless, servers and software still need hardware (somewhere), and hardware still needs software, from microcode to firmware to many other places in the data infrastructure layers or stack. Meanwhile, keep in mind that it is #blogtobertech and Next Generation Hybrid Software Defined Data Infrastructures Are In Your Future.

Ok, nuff said, for now.

Cheers Gs


Server StorageIO 2018 VMworld Data Infrastructure Buzzword Bingo Puzzle

Following up on last year's 2017 crossword puzzle for travel fun, here is the Server StorageIO 2018 VMworld Data Infrastructure Buzzword Bingo Puzzle (click on the below image for a PDF version that includes answers). The Server StorageIO 2018 VMworld Data Infrastructure Buzzword Bingo Puzzle can be something to do while traveling, or while taking a break between (or during) sessions as well as keynotes. I wonder which buzzword term will get used the most, as well as which new ones should be added to an updated version of this.

Server StorageIO 2018 VMworld Data Infrastructure Buzzword Bingo Puzzle

Where to learn more

Learn more about VMworld and data infrastructures related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means

Next week is VMworld 2018 in Las Vegas, which for some means traveling and a long week. Feel free to suggest additions, as there could be a revision or update or two between now and VMworld. Have fun, safe travels, and hope to see you next week; in the meantime, enjoy the Server StorageIO 2018 VMworld Data Infrastructure Buzzword Bingo Puzzle.

Ok, nuff said, for now.

Cheers Gs


Dell EMC PowerEdge MX 7000 Kinetic Based Data Infrastructure Architecture

Dell EMC today announced, with the tag line IT Unbound, their new PowerEdge MX 7000 Kinetic based data infrastructure architecture, slated for general availability September 21, 2018. Previewed earlier this year at Dell Technology World in Las Vegas, the PowerEdge MX 7000 is a new family of modular, scalable servers for various data infrastructure roles.

What is different with the PowerEdge MX 7000 compared to other new 14th generation (Gen 14) Dell servers is the finer granularity of resource allocation based around the new Kinetic composable infrastructure. Also previewed at Dell Technology World earlier this year in Las Vegas, Kinetic (not to be confused with the Seagate Kinetic object storage key value drive initiative) is a new composable architecture.

Dell EMC PowerEdge MX 7000 Kinetic What Was Announced

  • First instantiation of Kinetic composable based data infrastructure resources
  • OpenManage Enterprise Modular Edition
  • PowerEdge MX 7000 modular data infrastructure server

Dell EMC PowerEdge MX 7000 and Kinetic Architecture
Dell EMC PowerEdge MX 7000 and Kinetic Architecture Image via Dell.com

Dell EMC Kinetic Composability What Is It

By being a composable data infrastructure resource and server, Dell EMC Kinetic based solutions can be decomposed with finer granularity than previous servers. What this means is that in the past, memory, I/O networking, physical storage devices, compute sockets and cores were assigned to a single image instance. That single image instance could be an operating system (OS) such as Linux or Windows, a hypervisor such as KVM, Microsoft Hyper-V, Nitro (AWS), Oracle, VMware vSphere ESXi or Xen among others, or proprietary decomposition and aggregation software (and hardware) technology (ScaleMP among others).

With a composable based solution, instead of the entire server, or motherboard(s) and its resources, being allocated to a single OS as a bare metal (BM) or Metal as a Service (MaaS) instance, or to a hypervisor, different resources can be allocated to various instances. On the surface it would be easy to say that sounds a lot like what hypervisors such as those from Microsoft, VMware, and others are doing, particularly with clusters.

Dell EMC Kinetic Data Infrastructure Architecture
Dell EMC Kinetic Data Infrastructure Architecture Image via Dell.com

However, the difference is that with hypervisors, all of a server's physical resources (compute, memory, I/O, storage devices, GPU, FPGA/ASIC) are allocated to the OS, hypervisor, or composition software, which then creates vCPU, vRAM, and related resources. With composability, the emphasis is on enabling more granular resource allocation as well as scaling out. The business or organizational outcome is what is essential, which means better allocation and effective use of resources to boost productivity vs. merely driving up utilization and efficiency.

The Dell EMC PowerEdge MX 7000 eliminates the traditional hardware-based mid-plane, using an internal fabric connector per node that can also be exposed outside of the physical MX enclosure. By using an industry standard connector on the edge of server motherboard resource nodes, different server I/O connectivity can be leveraged as it becomes available or improves. For example, IMHO it is not too complicated to envision a time in the not so distant future when Kinetic enabled resources (e.g., server nodes) evolve to support the emerging Gen-Z server I/O connectivity protocol.

What is Gen-Z

Do the PowerEdge MX 7000 and Kinetic use Gen-Z today? Not yet; however, Dell has been showing demos and technology proofs of concept at various events.

Why bring up Gen-Z now? Simple, it’s something that will be part of many data infrastructure, the server I/O, storage, networking, hardware and software-defined discussions in the not so distant future.

As a refresher or primer, Gen-Z is a new server I/O fabric interface that supports access by CPU sockets and their cores to memory, including DRAM as well as emerging SCM and PMEM. In addition to server memory access, Gen-Z also enables local as well as remote access to memory, storage, GPU, FPGA and ASIC among other resources. For backward compatibility as well as investment protection, Gen-Z is intended to work with existing PCIe, Ethernet, Fibre Channel, SAS, SATA, NVMe and InfiniBand among other server I/O interconnects and protocols.

Does this mean Gen-Z is a challenger to Ethernet and other IP-based general LAN networking? IMHO no, at least not in the foreseeable future. Granted, PCIe, Fibre Channel, InfiniBand, Ethernet and some others have at times been promoted as the end-all network for everything, with some of those contenders joining the where-are-they-now list; near term, Gen-Z is focused inside a modular enclosure or perhaps within a rack. Read more about Gen-Z here, as well as the Dell EMC blog The Gen-Z Journey road to composability.

Dell OpenManage Enterprise
Dell OpenManage Management Interface Image via Dell.com

OpenManage Enterprise Modular Edition

Management for the PowerEdge MX 7000 utilizes OpenManage Enterprise Modular Edition, an HTML5 based tool with a REST API. Management capabilities include workflows for simplicity of operation and lifecycle management. OpenManage Enterprise Modular Edition, besides being HTML5 and REST API based, is also Redfish inspired for further interoperability. Note that the PowerEdge MX 7000 also integrates with the Dell iDRAC physical machine level management interface, providing unified management from a single server to multiple server groups spanning towers to racks.
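
For a sense of what a Redfish-style REST interface looks like in practice, here is a minimal sketch; the host, credentials and use of the standard DMTF Redfish paths are assumptions on my part, so consult the Dell OpenManage API documentation for the exact URIs a given release exposes.

```python
# Minimal sketch: list compute systems via a Redfish-style REST endpoint.
import requests

BASE = "https://mx7000-chassis.example.com"  # hypothetical management address
AUTH = ("admin", "password")                 # placeholder credentials

# Enumerate the systems (compute sleds) the service exposes
resp = requests.get(f"{BASE}/redfish/v1/Systems", auth=AUTH, verify=False)
resp.raise_for_status()

for member in resp.json().get("Members", []):
    # Each member is a reference such as {"@odata.id": "/redfish/v1/Systems/..."}
    detail = requests.get(f"{BASE}{member['@odata.id']}", auth=AUTH, verify=False)
    system = detail.json()
    print(system.get("Name"), system.get("PowerState"), system.get("Model"))
```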

Dell EMC PowerEdge MX 7000
Dell EMC PowerEdge MX 7000 Image via Dell.com

Dell EMC PowerEdge MX 7000 Kinetic Based Data Infrastructure Server

The new Dell EMC PowerEdge MX 7000 is the first installment of their new Kinetic based composable architecture. The Dell EMC PowerEdge MX 7000 components consist of a 7U chassis with power and cooling fans, along with compute sleds, storage sleds, I/O connectivity and an inner fabric, plus management tools.

Dell EMC PowerEdge MX 7000 Modules
Dell EMC PowerEdge MX 7000 Modules Image via Dell.com

Dell EMC PowerEdge MX 7000 Server Compute modules

Dell EMC PowerEdge MX 7000 compute sleds include the MX740c (single width) and MX840c (double width), which are two and four socket modules respectively, with local on-board NVMe (e.g., U.2 8639 small form factor, SFF) drives per module. These initial compute modules support Intel Xeon processors and up to six (6) TBytes of memory. The MX740c supports up to six (6) local NVMe, SAS or SATA drives (e.g., 8639 connectors), while the MX840c supports up to eight (8) local drives. Note that these local onboard drives can be shared with other sled modules, and compute sleds can also access the shared storage sled-based drives.

Dell EMC PowerEdge MX 7000 Server Storage modules

The Dell EMC PowerEdge MX 7000 storage sled is the MX5016s, holding up to 16 hot-pluggable SAS HDDs; up to seven MX5016s sleds can be configured per MX chassis for up to 112 direct attached storage (DAS) drives. Each of the drives can be individually mapped to one or more servers, supporting aggregated (e.g., HCI) as well as disaggregated (CI and legacy) deployment topologies.

Dell EMC PowerEdge MX 7000 Server I/O Networking Modules

Initial server I/O modules for the new Dell EMC PowerEdge MX include 25GbE and 32G Fibre Channel (GFC) host connectivity along with 100GbE and 32 GFC uplink capabilities, with top of rack (ToR) support built in and Open Networking OS10EE software enabled. The server I/O modules provide both north-south as well as east-west connectivity inside and outside the chassis for data plane and management plane traffic.

Server I/O connectivity options include:

  • MX5108n Ethernet Switch with 8 x 25GbE (server facing ports), 2 x 100GbE ports, 1 x 40GbE port, 4 x 10GbE ports.
  • MX9116n Fabric Switching Engine (e.g., Kinetic fabric) with 16 x 25GbE server facing ports, 2 x 100GbE/8 x 32GFC unified ports, 2 x 100 GbE ports and 12 fabric expansion ports.
  • MXG610s Fibre Channel Switch with 16 x 32GFC internal ports, 8 x 32 GFC SFP+ ports and 2 QSFP (4 x 32GFC) uplink ports.

Where to learn more

Learn more about Dell EMC PowerEdge MX, Kinetic, Composable and data infrastructures related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means

Overall this is a good announcement of technology and product, as well as of where resources are headed to meet different workload demands; I look forward to getting some test time with a Dell EMC PowerEdge MX 7000.

Dell EMC PowerEdge MX 7000 Three Tenants
Dell EMC PowerEdge MX 7000 Three Tenants Image via Dell.com

The new Dell EMC PowerEdge MX 7000 provides a data infrastructure resource platform for deploying traditional, cloud, software-defined and composable configurations, as well as disaggregated converged infrastructure (CI), aggregated hyper-converged infrastructure (HCI) and hybrid configurations.

With the Dell EMC PowerEdge MX 7000, there is more resource granularity and future-proof capability than with traditional high-density blade servers, as well as twin, quad or eight node server configuration solutions.

Many vendors talk about solutions being future proof or enabling investment protection; with the PowerEdge MX 7000, Dell EMC is taking the next step, discussing trends and technology along with what you can do today. Unlike traditional dual, quad, eight or high-density node and blade servers with dedicated, discrete mid-planes tied to a given technology, the Dell PowerEdge MX 7000 and Kinetic based architecture are mid-plane (aka back-plane) free. There is still connectivity between the different PowerEdge MX 7000 chassis modules, which is a fabric (a network if you prefer).

For example, server compute sled modules have an industry standard connector that connects with other components in the chassis. What differs from traditional blade and multi-node server configurations is that, on board the compute sleds, an adapter module can be changed to support a new interface over different generations of technology (as an example, keep an eye on what happens with Gen-Z).

The result is that the Dell EMC PowerEdge MX 7000 should be an excellent platform for software-defined data centers (SDDC), software-defined data infrastructures (SDDI), software-defined infrastructures (SDI) as well as other software defined or traditional deployments. The Dell EMC PowerEdge MX 7000 will make for a good CI, HCI, SDDC, SDDI, SDI platform for public, private as well as hybrid clouds, PaaS as well as IaaS deployments, along with VMware, Microsoft (Hyper-V, Windows Storage Spaces Direct (S2D), as well as Azure Stack) among other scenarios.

By being flexible, scalable, agile and adaptable, with easy management and a responsive, future-proof design enabling a pool of dynamic data infrastructure resources, the Dell EMC PowerEdge MX 7000 should be good at enabling IT Unbound.

Ok, nuff said, for now.

Cheers Gs


Catching Up With Summer 2018 IBM Cloudy Software Defined Storage Announcements

Time for some catching up with the summer 2018 IBM cloudy software-defined storage announcements made earlier this week. The SHARE event (mainframe centric) is occurring this week in St. Louis, so it is no surprise that these announcements are geared to mainframe Z environments. These cloud and software-defined storage announcements for the mainframe environment follow those from a few weeks ago, including new Power9 based servers and the IBM FlashSystem 9100 flash SSD.

What was announced

What IBM announced this week was a mix of mainframe Z server storage with software-defined storage and cloud (e.g., cloudy) support, including:

IBM Spectrum Protect 8.1.6 multi-cloud updates with tiered backup across on-site and cloud. For example, active data remains on-site (or on-prem), while inactive data protection copies get moved (tiered) to cloud storage. Other enhancements include software-defined protection against threats such as malware and ransomware extending to hypervisor data, along with blueprint guides for IBM Cloud (e.g., SoftLayer), AWS and Microsoft Azure.

IBM Spectrum Protect Plus 10.1.1 enhanced with encryption of vSnap repositories for security, VMware vSphere 6.7 support, improved dashboard user interfaces (UI), and DB2 support in addition to Microsoft SQL Server and Oracle.

IBM DS8882F storage
IBM DS8882F Z mainframe rack mount storage Image via IBM.com

IBM DS8882F rack-mounted storage system (part of the DS8000 storage family) integrated with IBM Z ZR1 (mainframe) and LinuxONE Rockhopper II (mainframe) servers. The DS8882F supports from 6.4TB to 368.64TB raw capacity, along with safeguarded copy protection including read-only copies (e.g., a variation of WORM), encrypted digital signatures, and 256-bit AES encryption.

IBM Cloud Object Storage aka COS (formerly known as Cleversafe) functions as a target tier for DS8880 without the need for an external gateway. Enhancements also include a new 1U server (via Quanta) supporting up to 72 TB configurations.

IBM Elastic Storage Server File and Object pre-configured storage for AI, ML, big data and high-performance compute (HPC) includes integrated file (NFS, SMB) and object (S3, Swift) access. The solution is pre-installed on IBM Power8 servers running Red Hat Linux (e.g., RHEL). IBM claims high throughput for NAS NFS workloads with a large number of server connections; however, it would be good to see some performance numbers along with a side of context.

IBM Spectrum Scale on AWS is a software-defined storage alternative to the traditional appliance-based solution. With Spectrum Scale 5.0.2, IBM is joining other vendors who have made their software-defined storage solutions available on clouds such as AWS, Azure and Google among others. Besides running on AWS and working with Virtual Private Clouds (VPC), IBM supports per-TB licensing including bring your own license, a growing industry trend.

Where to learn more

Learn more about IBM Server, Storage, Data Protection and data infrastructures related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means

Despite having been declared dead for decades, IBM Z series systems are still prevalent in many large environments, even in a software-defined, cloudy era. It's good to see IBM continuing to invest in them, and joining other industry vendors in supporting various cloudy deployments as well as legacy on-site aka on-prem ones.

Likewise, IBM is making its legacy Z mainframe systems trendy and cloudy with these new enhancements to support customer hybrid server, storage, and data infrastructure deployments.

Overall, a nice set of incremental improvements following industry trends, and catching up with summer 2018 IBM cloudy software defined storage announcements.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

IBM announces new Power9 processor based E950 E980 server systems

As a single server or node, the Power9 E950 supports up to four (4) CPU processor sockets, each with multiple cores. An E980 system comprises up to four E950 based systems as a solution. The new E950 succeeds the Power E850 and E850C; its machine type model number is 9040-MR9, which is a 4U single enclosure with two or four processor modules.


Power9 Processor image via IBM.com

IBM Power9 E950 and E980

As a refresher, these systems leverage IBM's proprietary processor technology called Power, which is used in its various mid-range and higher-end server solutions.

The Power9 E950 and E980 systems support PowerVM virtualization, along with virtual machine (VM) mobility as well as optimization for OpenStack among other workloads.

IBM touts Power9 E950 (AIX and Linux) and E980 (AIX, Linux, IBM i) systems optimized for:

  • Analytics, AI (ML/DL) and Cognitive computing
    • Faster cores and threads, more performance per socket
    • More bandwidth and lower latency
  • Super Compute (SC), Technical, High Performance Compute (HPC)
    • High bandwidth graphical processing unit (GPU) attachment
    • Optimized CPU GPU memory sharing and interaction
    • Bandwidth optimized main memory
    • Virtual addressing optimization
  • Cloud and Hyper Scale Data Infrastructures and Data Centers
    • Dense performance and energy consumption
    • Virtualization assist, QoS, power management and security
    • Fast I/O subsystem for server I/O to storage and networks
  • Enterprise data infrastructures and data centers
    • Scale-up and scale-out
    • Server and workload consolidation
    • Up to 4TB of buffered memory per socket (16TB per E950 node)

IBM E950 Power9 System

Front view of E950 System Image via IBM.com

The following image (via IBM.com) shows an exploded component view of the E950.
IBM Power9 E950 exploded view

The following image (via IBM.com) shows a top view looking down into an E950.

IBM Power9 E950 top view

E950 is a 4U server (or E980 node) with compute and memory features including:

  • Power9 8,10,11 or 12 cores per socket, up to 48 cores (4 x 12 cores)
  • Four times memory compared to E850 systems (up to 16TB or 4TB per socket)
  • Eight (8) memory riser cards with 16 DDR4 DIMM each (8,16,32,64 or 128GB DIMM)
  • Memory bandwidth of up to 920 GB/sec (note that is big B not Gb or little b)
  • Refresh your server, CPU, compute, socket, core and threads knowledge here.

E950 also features faster I/O subsystem for server I/O to storage and networks:

  • 630 GB/sec (e.g., about 5 Tbps) of I/O bandwidth (see the quick conversion sketch after this list)
  • NVIDIA NVLink GPU attachment, PCIe Gen4 and OpenCAPI I/O
  • Up to eight (8) (4 socket systems) PCIe Gen4 x16 (16 lanes each) card slots
  • Up to two (2) PCIe Gen4 x8 (8 lanes each) card slots
  • Up to 144 PCIe lanes (4 socket systems), full height, half length
  • USB 3 (2 front, 2 rear)
  • 12 internal 2.5” form factor storage bays for HDDs and SSDs, including up to eight (8) SAS and four NVMe U.2 (8639). Note that NVMe devices attach via PCIe ports and lanes.
  • Hot plug components and optional I/O expansion as well as storage drawers
  • Here is a refresher (or primer) on PCIe, as well as NVMe, SAS, and SSD technologies.
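As a quick sanity check on the bandwidth figures above (big B bytes vs. little b bits), here is a small Python conversion using the E950 numbers quoted in the lists:

    def gbytes_per_sec_to_tbps(gb_per_sec):
        """Convert gigabytes per second (big B) to terabits per second (little b)."""
        return gb_per_sec * 8 / 1000  # 8 bits per byte, 1000 Gb per Tb

    print(gbytes_per_sec_to_tbps(920))  # memory bandwidth: 920 GB/s is about 7.36 Tbps
    print(gbytes_per_sec_to_tbps(630))  # I/O bandwidth: 630 GB/s is about 5.04 Tbps (the ~5 Tbps above)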

IBM E980

The IBM E980 system is a collection of up to four nodes along with a control module; a rack cabinet E980 system is shown below (image via IBM.com).
IBM Power9 E980

IBM Power9 E950 E980
Via IBM.com

View more features for E950 here (PDF) and E980 here (PDF).

Where to learn more

Learn more about IBM Power and data infrastructures related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

These new systems provide an increase not only in compute, but also in memory as well as server I/O for storage and networking. With the addition of multiple PCIe Gen4 x16 card slots, more GPUs such as those from NVIDIA, as well as fast Fibre Channel, SAS and NVMe based storage, can be attached to these systems.

With a good number of x16 PCIe Gen4 slots, the E950 and E980 systems are capable of supporting more GPU offload cards such as those from NVIDIA, along with other ASIC or FPGA accelerator devices. In addition to compute offload, the x16 PCIe Gen4 slots enable server I/O cards to more storage devices including faster Fibre Channel, Ethernet, SAS as well as NVMe attachment.

Overall, the new Power9 processor based E950 and E980 server systems are a good move for existing AIX and Linux customers, as well as, with the E980, for IBM i environments.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

July 2018 Server StorageIO Data Infrastructure Update Newsletter

Volume 18, Issue 7 (July 2018)

Hello and welcome to the July 2018 Server StorageIO Data Infrastructure Update Newsletter.

In case you missed it, the June 2018 Server StorageIO Data Infrastructure Update Newsletter can be viewed here (HTML and PDF).

In this issue, buzzword topics include Dell Technologies and VMware, AWS and Google public, private and hybrid cloud, machine learning, 3D XPoint, SCM, SSD, NVMe, and data infrastructure management tools among other topics.

Enjoy this edition of the Server StorageIO Data Infrastructure update newsletter.

Cheers GS

Data Infrastructure and IT Industry Activity Trends

July 2018 data infrastructure, server, storage, I/O network, hardware, software, cloud, converged, and container as well as data protection industry activity includes among others:

Amazon Web Services AWS July 2018 Updates include enhancements to the machine learning (ML) SageMaker service, faster S3 access, and new EC2 instances along with Snowball Edge (SBE), an on-prem converged server and compute appliance (read more about SBE here). In other public cloud activity, Google Cloud Platform GCP announced a new Los Angeles Region.

Intel and Micron have announced that they will be pursuing different paths when they complete, in 2019, the second generation of 3D XPoint, which is used in Intel Optane NVMe SSD and Storage Class Memory (SCM) technologies; read more here: Intel Micron 3D XPoint Evolving. Meanwhile, Broadcom buying CA, Brilliant or a Brainbuster? This deal is a bit of a head scratcher with Broadcom spending $18.9 Billion USD (cash) to buy CA Technologies.

In other data infrastructure news and activity, DataDirect Networks Stages Bid to Acquire Tintri’s Assets and Expand Its Storage Portfolio into the Enterprise. Dell EMC announced a new integrated data protection appliance (IDPA DP4400) for small and midsize organizations. In other activity, VMware declared a dividend; Dell Technologies, as the majority owner, will use the cash to fund Dell business restructuring. Read more about Dell Technologies Announces Class V VMware Tracking Stock exchange for stock or cash here.

Spectra (e.g., who some of you know as Spectra Logic) has announced enhancements to their tape libraries. Note that one of the larger growth (or sustainment) markets for tape-based technologies in recent years has been the larger cloud scale service providers. Granted, those providers are not using tape in old ways (e.g., for direct backup); rather, in new ways where it is a companion to SSD and HDD as another storage class, tier or technology enabler.

IBM has jumped on the NVMe bandwagon, announcing updates to their FlashSystem 9100 systems (e.g., technology they acquired via TMS a few years ago). Opvisor has announced a new VMware vSAN performance monitoring and troubleshooting feature for their insight and awareness management tools.

Check out other industry news, comments, trends perspectives here.

Data Infrastructure Server StorageIO Comments Content

Server StorageIO Commentary in the news, tips and articles

Recent Server StorageIO industry trends perspectives commentary in the news.

Via SearchStorage: Comments on GDPR and Cloudian File Sync Share
Via NetworkComputing: Comments Software Defined Storage SDS Getting Started
Via SearchStorage: Comments The storage administrator skills you need to keep up today
Via SearchStorage: Comments Managing storage for IoT data at the enterprise edge
Via SearchCloudComputing: Comments Hybrid cloud deployment demands a change in security mind set

View more Server, Storage and I/O trends and perspectives comments here.

Data Infrastructure Server StorageIOblog posts

Server StorageIOblog Data Infrastructure Posts

Recent and popular Server StorageIOblog posts include:

2018 Hot Popular New Trending Data Infrastructure Vendors to Watch
June 2018 Server StorageIO Data Infrastructure Update Newsletter
May 2018 Server StorageIO Data Infrastructure Update Newsletter
Have you heard about the new CLOUD Act data regulation?
Data Protection Recovery Life Post World Backup Day Pre GDPR
Microsoft Windows Server 2019 Insiders Preview
Server Storage I/O Benchmark Performance Resource Tools
Data Infrastructure Primer Overview (Its Whats Inside The Data Center)
If NVMe is the answer, what are the questions?

View other recent as well as past StorageIOblog posts here

Server StorageIO Recommended Reading (Watching and Listening) List

Software-Defined Data Infrastructure Essentials SDDI SDDC

In addition to my own books, including Software Defined Data Infrastructure Essentials (CRC Press 2017) available at Amazon.com (check out the special sale price), the following are Server StorageIO data infrastructure recommended reading, watching and listening list items. The recommended reading list spans various IT, data infrastructure and related topics; the Intel Recommended Reading List (IRRL) for developers is also a good resource to check out.

Duncan Epping (@DuncanYB), Frank Denneman (@FrankDenneman) and Niels Hagoort (@NHagoort) have released their VMware vSphere 6.7 Clustering Deep Dive book, available at venues including Amazon.com. This is the latest in a series of clustering deep dive books from Frank and Duncan; if you are involved with VMware, SDDC and related software-defined data infrastructures, it should be on your bookshelf.

Watch for more items to be added to the recommended reading list book shelf soon.

Data Infrastructure Server StorageIO event activities

Events and Activities

Recent and upcoming event activities.

July 25, 2018 – Webinar – Data Protect & Storage

June 27, 2018 – Webinar – App Server Performance

June 26, 2018 – Webinar – Cloud App Optimize

See more webinars and activities on the Server StorageIO Events page here.

Data Infrastructure Server StorageIO Industry Resources and Links

Various useful links and resources:

Data Infrastructure Recommend Reading and watching list
Microsoft TechNet – Various Microsoft related from Azure to Docker to Windows
storageio.com/links – Various industry links (over 1,000 with more to be added soon)
objectstoragecenter.com – Cloud and object storage topics, tips and news items
OpenStack.org – Various OpenStack related items
storageio.com/downloads – Various presentations and other download material
storageio.com/protect – Various data protection items and topics
thenvmeplace.com – Focus on NVMe trends and technologies
thessdplace.com – NVM and Solid State Disk topics, tips and techniques
storageio.com/converge – Various CI, HCI and related SDS topics
storageio.com/performance – Various server, storage and I/O benchmark and tools
VMware Technical Network – Various VMware related items

What this all means and wrap-up

Summer is here in North America and the Northern Hemisphere, which means holidays as well as vacations. However, data infrastructures continue to evolve, as do the tools, technologies, trends, hardware, software and services, along with those who take care of, and define them. Enjoy your summer vacation and holidays, as well as this July 2018 Server StorageIO Data Infrastructure Update Newsletter edition.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Amazon Web Services AWS July 2018 Updates

Amazon Web Services AWS July 2018 Updates continue to expand the features, functionality and service capabilities of the public cloud provider across various geographies.

Recent AWS updates include Snowball Edge (SBE), which adds local, on-site, on-premises aka on-prem EC2 compute capabilities as part of the Snowball appliance. Previously, Snowball was a data and storage migration only appliance; now, with the new capabilities, compute is also enabled as part of a turnkey converged platform. Read more about SBE here.

In other updates, AWS has extended its Elastic Compute Cloud (EC2) capabilities (besides Snowball Edge) with new instance types, along with leveraging its next generation hypervisor as part of Nitro enabled systems. New EC2 instances span from on-prem Snowball Edge (SBE) to AWS Dedicated aka bare metal instances, along with traditional cloud instances (e.g., virtual machines).

These new instances, including R5, R5d and Z1d among others, leverage faster Intel Xeon Platinum 8000 series processors along with more memory. For example, Z1d is a compute-intensive instance with 4.0 GHz all-core turbo, while R5 is memory optimized with 3.1 GHz cores (up to 96 vCPU) and up to 768GB of RAM. The R5d is a memory-optimized instance that also supports up to 3.6TB of on-instance NVMe based storage. View additional AWS instance types here.
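As a hedged illustration (a sketch, not AWS documentation) of launching one of these newer instance types programmatically with the AWS SDK for Python (boto3), the following starts a single r5d.4xlarge; the AMI ID, key pair name and region are placeholders you would replace with your own:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID
        InstanceType="r5d.4xlarge",        # memory optimized with on-instance NVMe storage
        KeyName="my-keypair",              # hypothetical key pair name
        MinCount=1,
        MaxCount=1,
    )

    print(response["Instances"][0]["InstanceId"])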

AWS has enhanced the SageMaker (machine learning) service to support higher throughput, enabling faster data transformation batch jobs for non-real-time inference. To enable higher data and API call rates, AWS has also enhanced the Simple Storage Service (S3) request rate. Another enhancement by AWS is enabling bring your own IP address (preview) for virtual private cloud (VPC) as part of allowing hybrid clouds.

View additional new, recent and past AWS updates here, and here.

Where to learn more

Learn more about AWS, Cloud and data infrastructures related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Amazon Web Services AWS July 2018 Updates continue to expand the number, type and extensiveness of public cloud services, as well as enabling hybrid capabilities. The Amazon Web Services AWS July 2018 Updates also address different data infrastructure layers, from lower level Infrastructure as a Service (IaaS) including EC2 compute, to higher level artificial intelligence (AI), machine learning (ML) and deep learning (DL), among other cognitive as well as analytic offerings.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Intel Micron 3D XPoint Evolving

Generations of memory
Major memory classes or categories timeline (Image via Intel and Micron)

Co-creators of 3D XPoint, the next generation of non-volatile memory (NVM) also known as storage class memory (SCM) or persistent memory (PMEM), have announced they will complete joint development of the second-generation technology, then pursue their separate paths. Intel and Micron jointly announced 3D XPoint three years ago (July 2015) as a new technology, with the first generation of products having appeared in the market over the past year or so.

Various industry vs customer adoption deployment timelines
Various Adoption Deployment Timelines for different focus areas

For those in the industry who measure technology adoption and deployment in months rather than years, or by the time from press release until new news, 3D XPoint may seem late or behind schedule, which perhaps it is based on some timelines. On the other hand, IT customers tend to be on a different timeline that may seem like glacial speed to industry focused rapid change. IMHO 3D XPoint is about on the right timeline based on IT customer deployment, which may very well accelerate for broader usage with second-generation based products.

3D XPoint based Intel Optane
Top Intel 750 NVMe PCIe AiC SSD, bottom Intel Optane NVMe 900P U.2 SSD with Ableconn carrier

While the focus is easily around Intel and Micron going separate ways, keep in mind that there is a second generation of 3D XPoint in the works. Some might consider the second generation of 3D XPoint as the first real production and volume technology, with the first being just that, the first generation. An example of a first generation 3D XPoint based product is the Intel Optane NVMe devices such as the one shown above, and discussed in this StorageIO Lab test drive post here.

NVMe and NVM along with SCM as well as PMEM better together

Where to learn more

Learn more about Intel, Micron, NVM, NVMe, 3D XPoint, SCM, PMEM and data infrastructures related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Some may see the announcement of Intel and Micron pursuing separate paths as a negative while others as a positive. While completing the second-generation development together, both can leverage what they have done while seeking different, presumably divergent or expand paths forward.

A concern could be if Intel and Micron merely go their separate ways yet focus on the same market areas. A benefit could be if Intel and Micron pursue different market focus areas with some overlap while expanding to broader opportunities.

The latter scenario could be useful for moving the technology forward by giving it new and different opportunities. For example, some that favor Intel along with its ecosystem would prefer whatever Intel does next. Likewise, those that favor Micron and their ecosystem may influence the direction Micron goes.

Does this mean Micron and Intel are all done collaborating? Tough to say.

However, they still share a fabrication facility (fab), IM Flash, in Lehi, Utah.

Overall, I think this is a good move for both Intel and Micron once they get the second generation of 3D XPoint developed and into production for customer deployments. With Intel Micron 3D XPoint Evolving, let’s see what’s next.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

AWS Snowball Edge SBE Converged Cloud Storage Appliance

As part of extending their cloud platform reach, recent Amazon Web Services (AWS) announcements include the AWS Snowball Edge SBE Converged Cloud Storage Appliance. Snowball Edge (SBE) has evolved from its previous focus as a data transfer and migration platform appliance to now include support for on-prem compute. SBE has previously been available as an appliance that ships from AWS to your location as a service to enable bulk data movement to the public cloud (e.g., an AWS Simple Storage Service (S3) bucket). With this new capability, AWS is enabling SBE to support on-prem compute similar to Elastic Compute Cloud (EC2) cloud instances.

AWS Snowball Data Migration at PB scale
AWS Snowball Appliance Image via AWS.com

What is AWS Snowball

Snowball is a bulk physical data migration appliance that AWS ships to your location. You use Snowball by setting up a copy job with AWS; when the device arrives at your site, you set it up and enable the copy jobs to move data from source to the Snowball destination. Once data is copied, you ship the Snowball back to an AWS region and availability zone (AZ), where its contents are copied into a Simple Storage Service (S3) bucket of your choice. Once the copy job into AWS S3 is complete, AWS performs a secure erase of the Snowball.

Basic Snowball includes 10 GbE network connections (RJ45 and SFP+ [fiber or copper]). Security and Encryption includes 256-bit keys that can be managed via AWS Key Management Service (KMS). Note that keys are not sent to or stored on the device for security during transit. For additional protection, tamper-resistant seals are included along with the Trusted Platform Module (TPM) to detect unauthorized hardware, firmware or software changes.

End-to-end tracking is enabled using E Ink shipping labels, which also allow monitoring via AWS Simple Notification Service (SNS). Once your data transfer job completes and is verified, a software erasure of the device is performed by AWS following NIST media handling guidelines.

For management, SBE has an API for customer integration, as well as the ability to create and manage transfer jobs via the AWS management console. SBE Adapter also gives customers direct access to Snowball where it appears as an S3 endpoint (how you access the storage and data).
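As a rough sketch of what using that S3-compatible endpoint can look like from Python with boto3 (the address, credentials and bucket below are placeholders for illustration, not actual Snowball defaults), you point an S3 client at the local device and copy objects to it:

    import boto3

    # Point the S3 client at the local Snowball endpoint instead of the AWS public endpoint
    snowball_s3 = boto3.client(
        "s3",
        endpoint_url="http://192.0.2.10:8080",   # hypothetical local Snowball address
        aws_access_key_id="LOCAL_ACCESS_KEY",    # placeholder local credentials
        aws_secret_access_key="LOCAL_SECRET_KEY",
    )

    # Copy a local file into the bucket defined for the transfer job
    snowball_s3.upload_file("bigdata.tar", "my-transfer-bucket", "bigdata.tar")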

Backside view of AWS Snowball
Backside view of Snowball Image via Amazon.com

Additional Snowball Speeds and Specification Feature Feeds include:

  • Storage space capacity of 50TB (42TB usable) or 80TB (72TB usable)
  • Network connectivity 10 GbE RJ45 (Cat6), SFP+ (Copper and Optical). Cables include RJ45 and Copper SFP+. For Fiber attached Ethernet, the customer supplies their own SFP+ optical cables.
  • Snowball is designed for office environments as well as data centers (noise of about 68 dB) and weighs about 47 pounds.
  • Power requirements include NEMA 5-15p (standard wall outlet) 100-200 volts with power cable included.

Note for traditional Snowball deployments an on-prem workstation or server is needed to copy data from source locations to the Snowball device.

How AWS Snowball and Snowball Edge work

How AWS Snowball Works

Referring to the image above, the first step in using AWS Snowball (or Snowball Edge) is to place an order via the AWS management console (A). Part of the ordering process involves setting up the data transfer job and, in the case of AWS Snowball Edge, defining the EC2 instance and image (read more about that here via AWS). After placing the order and setup, the AWS Snowball arrives at your location (B), on-site setup is done and the data transfer performed (C). Once data is transferred, the AWS Snowball is returned to the designated AWS location via two-day shipping (D) and the data copied into your specified S3 or Glacier bucket (E). After your data is transferred into the S3 or Glacier bucket you specify as part of the transfer job, you are able to do what you want with your files, folders, images, videos, VHDXs, VMDKs, ISOs, little data, big data.
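For those who prefer automation over the console, step (A) can also be scripted. Here is a hedged boto3 sketch that creates an import job; the bucket, role and address identifiers are placeholders, and additional options (for example SnowballType) apply when ordering a Snowball Edge:

    import boto3

    snowball = boto3.client("snowball", region_name="us-east-1")

    # Create an import (into AWS) job; all ARNs and IDs below are placeholders
    job = snowball.create_job(
        JobType="IMPORT",
        Resources={
            "S3Resources": [
                {"BucketArn": "arn:aws:s3:::my-transfer-bucket"}  # destination bucket
            ]
        },
        Description="Bulk data migration example",
        AddressId="ADID00000000-0000-0000-0000-000000000000",  # shipping address created earlier
        RoleARN="arn:aws:iam::123456789012:role/snowball-import-role",
        SnowballCapacityPreference="T80",  # 80TB Snowball
        ShippingOption="SECOND_DAY",
    )

    print(job["JobId"])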

What is AWS Snowball Edge

AWS has enhanced its Snowball Edge (SBE) data mobility, migration, and transport appliance to now also include compute. For those not familiar, Snowball is an appliance that comes in various sizes that you order from AWS; it shows up at your site, and then you copy your data to it for migration into AWS. Once data is copied, you return the appliance to AWS, where the data then appears in your designated S3 bucket. From your S3 bucket, you can then move the data, files, volumes or images to other locations, use them for standing up EC2 compute, populate databases, or perform other tasks.

With the new compute feature, AWS is enabling compute on the Snowball Edge appliance functioning similar to an EC2 instance, except that it is on your site. This means you can use the compute to run your own custom AMIs (Amazon Machine Images) on site or on-prem in support of data migration, conversion or another process. You can also keep the appliance on-site for as long as you want (granted, your credit card gets charged) to support development, test, extended migration, or to have a converged, or hyper-converged, platform.

Note that with SBE having compute capability, you can now run an EC2 image that functions as your copy server eliminating the need to have a workstation or server on-prem for the copy operation.

Additional AWS Snowball Edge Speeds and feature function feeds include:

  • 100TB (82TB usable) storage space capacity
  • 10 GbE network, along with 10/25 GbE SFP28 and 40 GbE QSFP+ with device-based encryption (customer provided network cables)
  • Local computing with EC2 and Lambda functions for remote deployment along with scale-out clustering of multiple SBE’s
  • S3 compatible endpoint along with NFS endpoint (mount point) using both NFS v3 and v4.1.
  • Weighs about 50 pounds, tamper evident seals along with TPM similar to traditional Snowball along with detection of hardware, firmware or software changes.
  • Can exist in an office environment, or data center.
  • Power cables are included, NEMA 5-15p, 100-220 volts, 400 watts.

What is AWS Snowmobile

Need something with more capacity than an SBE? AWS has a more extensive version called Snowmobile that supports up to 100PB that is brought to your site via a 45-foot-long tractor-trailer truck. Both SBE and Snowmobile physically move data from your location to an AWS region availability zone (AZ) aka data center where it is placed into the Simple Storage Service (S3) or Glacier bucket of your choice. Once in the S3 or Glacier bucket, you can move the data to where ever you need it.

Why Snowball Edge and Snowmobile vs. Fast Networks

Some people ask why the need for services such as SBE and Snowmobile, or, physically shipping your SSDs, HDD’s, tape or other storage media to a cloud provider in the Internet era of fast networks. The reason can be quite simple; most environments do not have internet connection speeds of 10 GbE or higher that can be dedicated outside of regular use for data movement at scale.

Likewise, some public cloud service providers have limitations on the network speed of their front-end general-purpose Internet access.

Note that some such as AWS have high-speed, low latency direct connect services from partner staging locations. However, those too may be limited in speed for large bulk transfers. AWS also has other performance-enhanced services for general Internet access including S3 Transfer Acceleration. Note that Microsoft Azure has special connectivity options such as ExpressRoute, while Google Compute Platform (GCP) has Cloud Interconnect.
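To illustrate the S3 Transfer Acceleration option mentioned above (a minimal sketch; the bucket name is a placeholder), acceleration is enabled once per bucket and then transfers can be routed through the accelerated endpoints:

    import boto3
    from botocore.client import Config

    BUCKET = "my-transfer-bucket"  # placeholder bucket name

    # One-time configuration: enable Transfer Acceleration on the bucket
    boto3.client("s3").put_bucket_accelerate_configuration(
        Bucket=BUCKET,
        AccelerateConfiguration={"Status": "Enabled"},
    )

    # Subsequent transfers can use the accelerated edge endpoints
    s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
    s3_accel.upload_file("bigdata.tar", BUCKET, "bigdata.tar")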

Is AWS SBE a CI, HCI, CiB or Appliance?

The answer to the question of whether SBE is a Converged Infrastructure (CI), Hyper-Converged Infrastructure (HCI), Cloud in a Box (CiB) or Cloud Appliance depends on your view and definition of those deployment models. Some will argue that SBE is a CI or HCI as well as a CiB based on what Cisco, Dell Technologies, HPE, Microsoft (Azure Stack and Windows S2D), NetApp, Nutanix, Pivot3 and VMware vSAN among others offer.

On the other hand, some will argue that SBE is not the same as the above offerings, given it does not meet their definition of CI, HCI, CiB or cloud appliance. What is important is not whether it is CI, HCI, CiB or appliance, rather what it can do, how it can adapt to your environment and work for you vs. you working for it. In other words, what is important is the enablement a solution provides vs. whether it is CI, HCI, CiB or something else. Meanwhile, watch to see who ignores SBE, who welcomes it to their market space, and who throws mud balls and fud balls at snowball.

When to use Snowball vs. Snowball Edge

If all you need is bulk data migration appliance using one of your servers or workstations for smaller amounts of data, traditional Snowball is a good fit. On the other hand, if you need to move more data, leverage SBE enabled on-prem compute with EC2 and Lambda functionality for short, or long-term duration, as well as scale-out to create a cluster, then SBE is for you. SBE is also a good fit for environments that need short-term, as well as the longer-term deployment of compute, storage and network (e.g., converged). For example, factory environments, rugged implementations on ships, energy exploration and processing, traveling venues and sporting events, distributed environments being consolidated among others.

AWS Regions, AZ locations
AWS Regions and AZ’s image Via AWS.com

What About AWS Snowball Edge Pricing

Pricing varies based on the AWS region you are transferring to and managing from. Another variable is whether you are selecting data transfer only, or enabling an EC2 compute instance on-prem. Yet another pricing variable is how long you will keep the Snowball Edge on-prem. You are given ten (10) free days as part of your data transfer job, along with days for shipping and return.

Beyond the ten free days, you will pay a daily rate that varies. The longer you keep the SBE on-prem, for example committing to a one or three-year pre-pay, the larger the discount you receive. Also note that there are no data transfer fees for moving data into AWS; however, standard pricing applies once data is stored in AWS or moved. Standard AWS storage charges (e.g., S3, Glacier, along with API calls) apply once data is stored.

As an example, for data transfer only, the service fee for a data transfer job is USD 300 for US and other non-Asia-Pacific (Singapore) regions. Additional days are $30 each.

Another example is selecting data transfer plus an EC2 compute instance, which varies by region; for example, $500 for the transfer job (US East Northern Virginia or Ohio) plus a $50 a day fee. However, if you are willing to pay up front for one year, the day fee drops to $42 (varies by region), and to $35 a day for a three-year commitment.

For some environments, it may cost less to buy a server with storage, set it up and manage, while for others, the simplicity of a turnkey converged platform may be more cost-effective along with better value. Learn more about AWS Snowball Edge pricing here.
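As a rough back-of-the-envelope helper using the example figures quoted above (actual fees vary by region, term and options), the total SBE service cost for a job works out roughly as follows:

    def sbe_cost(days_on_prem, job_fee=500.0, daily_fee=50.0, free_days=10):
        """Estimate Snowball Edge (data transfer plus EC2) service cost using the
        example figures quoted above; not official AWS pricing."""
        billable_days = max(0, days_on_prem - free_days)
        return job_fee + billable_days * daily_fee

    print(sbe_cost(30))                  # 30 days on-demand: 500 + 20 * 50 = 1500.0
    print(sbe_cost(30, daily_fee=42.0))  # with the one-year committed daily rate: 1340.0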

Where to learn more

Learn more about AWS, Snowball Edge, Cloud and data infrastructures related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Has AWS embraced hybrid public cloud and on-prem computing? IMHO while AWS is making it easier for environments to use, access as well as move to public cloud, they are still focused on the public cloud as the destination. In other words, AWS is making it easy to move your data and applications to their services as well as access them with AWS Snowball Edge SBE Converged Cloud Storage Appliance.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Google Cloud Platform GCP announced new Los Angeles Region

Google Cloud Platform (GCP) has announced a new Los Angeles Region (e.g., us-west2) with three initial Availability Zones (AZ), also known as data centers. Keep in mind that a region is a geographic area that is made up of two or more AZ’s. Thus, a region has multiple data centers for availability, resiliency and durability.

The new GCP uswest-2 region is the fifth in the US and seventh in the Americas. GCP regions (and AZ’s) in the Americas include Iowa (us-central1), Montreal Quebec Canada (northamerica-northeast1), Northern Virginia (us-east4), Oregon (us-west1), Los Angeles (us-west2), South Carolina (us-east1) and Sao Paulo Brazil (southamerica-east1). View other Geographies as well as services including Europe and the Asia-Pacific here.

How Does GCP Compare to AWS and Azure?

The following are simple graphical comparisons of what Amazon Web Services (AWS) and Microsoft Azure currently have deployed for regions and AZ’s across different geographies. Note that each region may have a different set of services available, so check your cloud provider’s notes as to what is currently available at various locations.

Google Cloud Compute Platform regions
Google Compute Platform Locations (Regions and AZ’s) Image via Google.com

AWS Regions, AZ locations
AWS Regions and AZ’s image Via AWS.com

Microsoft Azure Cloud Region Locations
Microsoft Azure Regions and AZ’s image Via Azure.com

Where to learn more

Learn more about data infrastructures and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Google continues to evolve its public cloud platform (GCP) both regarding geographical global physical locations (e.g., regions and AZ’s), as well as regarding features, functions and extensibility. By adding a new Los Angeles (e.g., us-west2) region and three AZ’s within it, Google is providing a local point of presence for data infrastructure intense (server compute, memory, I/O, storage) applications such as those in media, entertainment, high-performance compute and aerospace, among others, in the southern California region. Overall, with Google Cloud Platform GCP announcing a new Los Angeles Region, it is good to see not only new features being added to GCP but also new physical points of presence.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.