Not Dead Yet Zombie Technology (Declared Dead yet still alive) October 2018 Update #blogtobertech

Musician Phil Collins has an apt name for his current tour, Not Dead Yet, a reminder that he is still alive and performing, at least one more time. With Halloween just around the corner, it is that time of year to revisit zombie technology: those technologies, tools, techniques, and trends that have been declared dead yet are still alive.

Data Infrastructure Tools Trends Topics

IT Zombie Technology Declared Dead Not Dead Yet

A concert tour named Not Dead Yet sets the stage for this post, which is about IT zombie technology, in particular data infrastructure related technology, tools, trends, and topics that have been declared dead by some people yet are still alive. Not only are these tools and techniques still being used, they are also being enhanced, so they will be around for future years of zombie technology updates. Not dead yet.

As a refresher, a zombie technology is one that has been declared dead, usually by some upstart vendor, its pundits, and other followers, in favor of whatever new has just been announced. As luck or fate would have it, some of the startup or new technologies that declare an older, established one dead tend to end up on the where-are-they-now list themselves.

In other words, some technologies do survive, gaining both industry adoption and, even more critically, customer deployment. Likewise, some of the technologies whose supporters declared something else dead end up surviving alongside, or near, the very thing they declared dead.

Another not-so-uncommon occurrence is when a new technology, whose supporters declared something else dead, is itself declared dead by a still newer technology, thereby becoming a zombie technology in its own right. Put differently, being on the zombie technology list may not be the same as being the shiny new popular trendy technology. However, it can be a badge of honor, not to mention a revenue and profit maker.

Data Infrastructure components

Zombie Technology List

What are some old and new Zombie technologies that have been declared dead, yet are still alive, being used and enhanced, not dead yet?

IBM Mainframe

This is a perennial favorite. While it does not see the growth associated with other platforms including Intel, AMD, and ARM among others, it has its place with many large organizations. Not only does it continue to be manufactured and enhanced, with some new customers even buying them, it also runs native Linux in addition to traditional z/OS among other software.

Fibre Channel (FC)

FC has been declared dead for over a decade, and while Ethernet-based server storage I/O networking continues to gain ground in both industry and customer deployments, there is still plenty of life in FC for years to come, at least for some environments. NVMe over Fabrics (NVMeoF), the NVMe protocol carried on top of a fabric network (a SAN if you prefer), is gaining industry popularity and customer curiosity.

There are many flavors of NVMe over Fabrics, including NVMe over Fibre Channel (FC-NVMe), which is similar to how the SCSI command set (SCSI_FCP) is mapped onto Fibre Channel, more commonly known as FCP or simply FC.

What this means is that FC-NVMe is just another upper-level protocol (ULP) that can co-exist with others on the same Fibre Channel network. In other words, FICON, FCP, and NVMe among others can co-exist on the same Fibre Channel based network. Will everybody using Fibre Channel move to FC-NVMe? Good question; ask the FC folks, and the answer, not surprisingly, would be yes or probably. Will new customers looking to do NVMe over some type of fabric or network use Fibre Channel instead of Ethernet or another transport? Some will, while others will go other routes. For now, what is clear is that FC is still alive, thus on the zombie technology list and not dead yet.

SAS and SATA

Both have been declared dead since they have been around for a while, and over time NVMe will pick up more of their workload. Near term, however, SAS and SATA will continue as lower-cost, smaller-footprint options for general purpose and bulk direct attachment. On the other hand, look for more M.2 NVMe Next Generation Form Factor (NGFF) aka gum-stick devices appearing on physical servers along with storage systems. Likewise, watch for increased deployment of NVMe U.2 (aka SFF-8639) drive form factor SSDs using NAND flash as well as 3D XPoint and Intel Optane among other mediums as part of new server and storage platforms. BTW, USB is not dead yet either, just saying.

Microsoft Windows

Windows desktop, Windows Server, and even Hyper-V virtualization have been declared dead for some time now, yet all continue to evolve. Microsoft recently released Windows Server 2019, which includes many enhancements spanning software-defined storage (Storage Spaces Direct aka S2D), software-defined networking, converged and hyper-converged infrastructure (HCI) deployment options, expanded virtualization capabilities, Windows Subsystem for Linux (WSL) enhancements (e.g., a native bash shell on Windows), and containers with Kubernetes as well as Docker updates among others. In other words, it's not dead yet.
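As a small, hedged illustration of what the WSL enhancement enables, the sketch below calls a Linux command from the Windows side via wsl.exe. It assumes Windows Server 2019 (or Windows 10) with the WSL feature and a Linux distribution installed, plus Python 3.7 or newer; none of that setup is shown in this post.

    # Minimal sketch: run a Linux command through WSL from the Windows side.
    # Assumes the WSL feature and a Linux distro are installed (my assumption,
    # not something covered in this post), plus Python 3.7+ for capture_output.
    import subprocess

    result = subprocess.run(
        ["wsl.exe", "uname", "-a"],  # wsl.exe forwards the command to the default distro
        capture_output=True,
        text=True,
    )
    print(result.stdout)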

Hard Disk Drive (HDD)

Having been declared dead for decades, and while no longer the primary frontline storage medium it was in the past, HDDs continue to evolve and be used alongside faster flash SSDs, and as a front end to magnetic tape. Some of the larger consumers of HDDs continue to be cloud service providers, also known as hyperscalers, storing large amounts of bulk data. I suspect that HDDs will continue to be on the zombie technology list for at least another decade or so, which has been the case for the past several decades.

Magnetic Tape

Like HDDs, tape is still in use in some environments, and like HDDs, the cloud service providers are significant users of tape as low-cost, low-access, high-capacity bulk storage for cold archives, front-ended by HDDs, SSDs, or both.

Cloud (Public, Private and Hybrid)

Yes, believe it or not, some have declared cloud dead, along with hybrid cloud, private cloud among others, oh well.

Physical Machine (PM)

Also known as bare metal, servers were declared dead a decade or so ago at the hands of the then-emerging Intel-based virtualization hypervisors, notably VMware ESXi and, to a lesser extent, Microsoft Hyper-V. I say lesser extent with Hyper-V in that there was less noise about PMs and BMs being dead than there was from some in the ESXi virtual kingdom. Needless to say, PMs and BMs, from Intel to AMD and ARM-based along with IBM Power among many others, are very much alive as dedicated servers in the cloud and as VM and container hosts, as well as being accessorized with FPGA, ASIC, GPU, and other resources.

Virtual Machines

Listen to some in the container, serverless, or whatever-is-new crowd and you will hear that virtual machines (VMs) are dead, which for some workloads may be right. On the other hand, much like the physical machine (PM) or bare metal (BM) servers that were declared dead by the VM crowd a decade or so ago, VMs are alive and doing well. Not only are they doing well, but like containers, continued adoption and deployment of VMs will occur both on-prem as well as in the cloud, as will BMs and PMs, now known as dedicated servers in the clouds.

NAS and Files

If you listened to some of the pundits and press, NAS and files were supposed to have been dead several years ago at the hands of object storage. The reality today is that object storage continues to grow in customer deployments, and while the industry is not as enamored (or drunk) with it as it was a few years ago, the technology is here to stay and will be around for many decades to come.

That brings us back to NAS and files, which were declared dead by the object opportunists, yet file access is very much alive and continues to gain ground. In fact, most cloud providers have added NAS file-based access (NFS, SMB, POSIX among others), either natively or via partners, to their solutions. Likewise, most object storage platforms have also added or enhanced their NAS file-based access for compatibility while their customers re-engineer their applications or create new apps that are object and blob native. Thus, NAS and file-based access are proud members of the zombie technology list.
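To make the file vs. object distinction concrete, here is a minimal sketch writing the same payload through a POSIX file call (the kind a NAS mount serves) and through an S3-style object call. The mount path and bucket name are hypothetical placeholders, and the boto3 client assumes AWS (or S3-compatible) credentials are already configured.

    # Same data, two access methods: file (NAS/POSIX) vs. object (S3 API).
    # The mount path and bucket are hypothetical; credentials assumed configured.
    import boto3

    data = b"hello data infrastructure"

    # File-based (POSIX) access, e.g., over an NFS or SMB mount
    with open("/mnt/nas_share/hello.txt", "wb") as f:
        f.write(data)

    # Object-based access via the S3 API
    s3 = boto3.client("s3")
    s3.put_object(Bucket="example-bucket", Key="hello.txt", Body=data)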

Data Infrastructure tools

There are many more tools, technologies, trends, and techniques that could join the above list. For example, backup has been declared dead, along with the PCIe bus, NAND flash, programming, data centers, databases, and SQL among many others. What they have in common is that they have been declared dead yet are not dead yet, and thus are zombie technologies.

Where to learn more

Learn more about Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

What is your favorite zombie technology, tool, trend or technique?

What zombie technologies, tools, trends or techniques should be added to the list and why?

Many tools, technologies, techniques, and trends are declared dead, sometimes before they are even really alive and mature, by those who have something new or who simply lack creativity (dead marketing?), making it easier to declare something else dead. While some succeed, prospering and eventually being added to the zombie technology list (a badge of honor), others quietly end up on the where-are-they-now list: vendors, tools, technologies, techniques, and trends that were on the famous hit parade in the past but have faded away or ended up dead (unlike a zombie).

Don’t be scared of zombie technology; be prepared to embrace what is new while using both in new ways. Right now, I don’t have tickets to see Phil Collins’s Not Dead Yet tour; maybe that will change. For now, keep in mind: don’t be scared when looking at Not Dead Yet Zombie Technology (Declared Dead yet still alive) October 2018 Update #blogtobertech.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Don't Stop Learning Expand Your Skills Experiences Everyday #blogtobertech

Don't stop learning; expand your skills and experiences every day, including moving beyond or outside your current tradecraft focus area. If you are an expert in a given field or focus area, learn something new about an area outside your expertise or comfort zone. If you are of the mindset that there is nothing new to learn, that it's all old and boring, perhaps it's time to step back, look around, and explore other areas.

Doing something new can be in an adjacent technology area, or something completely unrelated. For example, in a recent VMUG keynote presentation and blog post I discussed how Next Generation Hybrid Software Defined Data Infrastructures Are In Your Future.

Next Generation Data Infrastructures are in your future (if not already)

What tradecraft skills and experience do you need to have, expand, or refresh to support next-generation hybrid software-defined data infrastructures? If you are a server person, then you need to broaden your tradecraft skills and experience into storage, I/O networking, cloud, virtual, and containers, across hardware as well as software. Likewise, if you are a storage or I/O and networking person, you need to expand into other areas. If you are a VMware-focused professional, then learn about Microsoft Hyper-V or vice versa. If you are an AWS-focused person, learn about Google or Azure or vice versa; the same applies across different technology domains.

On the other hand, if you know all there is to know, chances are there are other areas you need to learn more about, or at least determine what you don't know so you can address it. And if by chance you do happen to know everything there is to know, how much time are you spending interacting with others to teach them, possibly learning something new yourself?

Invest Time into Your Tradecraft Skill set

If you are not spending at least an hour a day learning something new, you are missing out on the opportunity. Part of that hour should also be outside your comfort zone or core focus area. For example, if you are a software pro, learn more about hardware, clouds, or something different. If you are a VMware-focused person, learn Hyper-V, AWS, Azure, or something else. If you are storage, learn server, network, cloud and beyond. If you are focused on data infrastructures, then learn about the upper-level business applications along with the users who use them, and vice versa.

How I Continue to Learn Expanding My Tradecraft Skills Experience Every day

As part of expanding my tradecraft, I spend part of my day learning and refreshing on core data infrastructure focus areas (servers, storage, I/O networking, hardware, software, cloud, containers, converged, software-defined, data protection) and related topics. Learning involves vendor briefings, research, talking with others, reading, and hands-on technology trials to gain insight, experience, and perspective.

I have also expanded my tradecraft experiences by becoming an FAA Part 107 licensed commercial pilot of small unmanned aerial systems (sUAS), also known as small unmanned aerial vehicles (sUAV) or more commonly simply drones. Besides being FAA licensed, I also became Minnesota sUAV/drone and aerial photography licensed. Drone flying is adjacent to data infrastructures in that one of my drones records at 4K 60 frames per second (fps), generating about 1 GByte of data every two minutes of video, plus telemetry. Note that the drones have internet capability and can be considered IoT devices for their video as well as telemetry.


Above is a 4K video from flights via my companion site www.picturesoverstillwater.com
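For a quick back-of-the-envelope view of that video data rate, here is the arithmetic; the 20-minute flight time is a hypothetical example, not a figure from the post.

    # Arithmetic behind the 4K drone video data rate mentioned above:
    # roughly 1 GByte per two minutes of video (plus telemetry).
    GB_PER_TWO_MINUTES = 1.0

    gb_per_minute = GB_PER_TWO_MINUTES / 2       # 0.5 GB per minute
    mb_per_second = gb_per_minute * 1000 / 60    # ~8.3 MB/sec, ~67 Mbps

    flight_minutes = 20                          # hypothetical flight time
    print(f"~{gb_per_minute * flight_minutes:.0f} GB per {flight_minutes} minute flight")
    print(f"~{mb_per_second:.1f} MB/sec (~{mb_per_second * 8:.0f} Mbps)")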

Where to learn more

Learn more about learning, data infrastructures, tradecraft, drones as well as related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

What this means is that in addition to expanding as well as refreshing my data infrastructure related tradecraft skills, I am also expanding my experiences into other adjacent areas. In other words, instead of just talking about big data, fast data, video, IoT, drones, and related topics, I am involved with them hands-on.

Keep in mind, at some point the student becomes the teacher, and a teacher is a student. Leverage your pair of eyes and ears to see things in different ways, listen to and learn about items outside your primary focus area as you expand or refresh your tradecraft skill set experiences.

If you can’t learn something new every day, either you are not trying, or you are in trouble. Even experts and unicorns can learn something new every day, even if that is as simple as learning to listen to others.

With October being #blogtobertech, there are plenty of opportunities to not stop learning and to expand your skills and experiences every day, which also includes the student becoming the teacher and the teacher being the student.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Next Generation Hybrid Software Defined Data Infrastructures Are In Your Future #blogtobertech

A few weeks ago I was invited to present a keynote at the 1st annual Minnesota VMware User Group (VMUG) Super VMUG mega event in Minneapolis titled Next Generation Hybrid Software Defined Data Infrastructures Are In Your Future (download PDF presentation here).

Key themes of the presentation focused on data infrastructures (e.g., what's inside physical data centers, including server, storage, I/O networking, hardware, software, policies, and procedures) along with industry trends including hybrid software defined clouds (and containers). Another aspect of the presentation focused on building, refreshing, and expanding our fundamental data infrastructure tradecraft skills. Also keep in mind that everything is not the same across different environments, granted there are similarities that can be leveraged.


Data infrastructures are defined to support business applications and information services delivery

Data Infrastructures

The fundamental role of data infrastructures is to provide a platform environment for applications and data that is resilient, flexible, scalable, agile, efficient, and cost-effective. Put another way, data infrastructures exist to protect, preserve, process, move, secure, and serve data as well as their applications for information services delivery. Technologies that make up data infrastructures include hardware, software, cloud or managed services, servers, storage, I/O and networking, along with people, processes, policies, and various tools, spanning legacy, software-defined virtual, container, and cloud environments.

Depending on your role or focus, you may have a different view than somebody else of what is infrastructure, or what an infrastructure is. Generally speaking, people tend to refer to infrastructure as those things that support what they are doing at work, at home, or in other aspects of their lives. For example, the roads and bridges that carry you over rivers or valleys when traveling in a vehicle are referred to as infrastructure.

Similarly, the system of pipes, valves, meters, lifts, and pumps that bring fresh water to you, and the sewer system that takes away waste water, are called infrastructure. The telecommunications network, both wired and wireless such as cell phone networks, along with electrical generation and transmission networks, is considered infrastructure. Even the airplanes, trains, boats, and buses that transport us locally or globally are considered part of the transportation infrastructure. Anything that is below what you do, or that supports what you do, is considered infrastructure.

The following figure shows various layers or altitudes of encapsulation and abstraction of data infrastructures along with their underlying resources that are defined to support a business enablement outcome, as well as support information services delivery.


Data Infrastructure Stack Layers and Resources Defined To Support Business Information Services

The following figure shows the evolution of data infrastructures from on-prem bare metal to software-defined virtual, cloud, container, converged and hyper-converged packaging, as well as emerging composable infrastructure. Also shown below are hybrid as well as multi-clouds, including bare metal dedicated services in addition to virtual machine instances and container-based services.


Data Infrastructure and Resource Packaging Deployment Evolution

Hybrid Software Defined Industry Trends

Some of the trends discussed in the presentation include:

Clouds – Public, private, hybrid, and multi-clouds along with how they are being used, plus technology evolution including virtual machine (VM) instances, bare metal dedicated private servers (DPS), as well as metal as a service. Other cloud trends include data migration appliances such as AWS Snowball Edge and Microsoft Azure Data Box among others, VMware on AWS, as well as fog and edge computing.

Other trend topics included converged, hyper-converged, serverless, containers, persistent memory (PMEM) also known as storage class memory (SCM) along with other server storage I/O topics. Additional trend topics included data protection, Azure Stack, security, NVMe as well as NVMe over Fabrics (NVMeoF) along with composable and Gen-Z.

Tradecraft Skills Experience

Expanding your data infrastructure tradecraft means evolving from your primary focus area, gaining insight into other technologies, tools, and techniques in adjacent areas outside your comfort zone. For industry veterans with several years to many decades of experience, this means refreshing what you know, think you know, or need to know with what's new or evolving. On the other hand, for those who are new, expanding your tradecraft means moving beyond memorizing to pass a certification test, to gaining insight into how, when, where, and why to apply different tools, technologies, and trends to the tasks at hand.

For example, developing tradecraft from knowing the different hardware, software, and services resources as well as tools, to what to use when, where, why, and how. Another dimension of expanding data infrastructure tradecraft skills is gaining the experience and insight to troubleshoot problems, gain insight awareness with dashboard or monitoring tools, as well as how to design and manage to cut or reduce the chance of things going wrong.

From Tools and Technologies to Techniques and Tricks of the Trade

Expanding your awareness of new technologies along with how they work is important, so too is understanding application and organization needs. Developing your tradecraft means balancing the focus on new and old technologies, tools, and techniques with business or organizational application functionality.

This is where using various tools that themselves are applications to gain insight into how your data infrastructure is configured and being used, along with the applications they support, is important.

Data Infrastructure Tools Tradecraft
Data Infrastructure Toolbox (Hardware, Software, Scripts)

Next Generation Hybrid Software Defined Data Infrastructures What Next


Balance head in the clouds (thinking, strategy, vision) with feet on the ground (what you can do today)

The following are some additional tips, comments, recommendations to keep in mind for enabling your next generation hybrid software defined data infrastructure.

Where to learn more

Learn more about data infrastructures and tradecraft related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Everything is not the same across different organizations, IT environments, application workloads, and the data infrastructures that support them. Data infrastructures span from legacy on-prem to software-defined cloud (public, private, hybrid, multi-cloud), container, serverless, virtual, converged and hyper-converged, as well as central, core, and distributed edge or remote office branch office (ROBO) deployments. Even though everything is not the same, there are similarities across different environments, technologies, and workloads that can be leveraged. Fundamental tradecraft skills and experience are what enable you to know what to use when, where, why, and how, including using new as well as old things in new ways, while not making old mistakes in new ways.

Some other tips: avoid flying blind, particularly in software-defined and cloud environments; maintain situational awareness and end-to-end (E2E) insight, leveraging metrics that matter, that are relevant, timely, and accurate, and that hold context to the data infrastructures as well as the applications they support. Part of expanding your tradecraft skills is refreshing what you know while also expanding into new adjacent areas, getting out of your comfort zone. Also understand the context of different terms, technologies, and tools. For example, SAS can be big data analytics statistical analysis software, a serial attached SCSI storage device, or a shared access signature for Azure clouds, among others.

Also keep in mind that while software defined things are popular and trendy with the industry, keep the focus on what is being defined to enable an outcome or business enablement. In other words, the emphasis should not be on the software aspect per se, rather on how something (hardware, software, service) is defined to enable something. Also keep in mind, with software defined marketing and trends such as serverless, that servers and software still need hardware (somewhere), and hardware still needs software, from microcode to firmware to many other places in the data infrastructure layers or stack. Meanwhile, keep in mind that it is #blogtobertech and Next Generation Hybrid Software Defined Data Infrastructures Are In Your Future.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Server StorageIO 2018 VMworld Data Infrastructure Buzzword Bingo Puzzle

Following up on last year's 2017 crossword puzzle for travel fun, here is the Server StorageIO 2018 VMworld Data Infrastructure Buzzword Bingo Puzzle (click on the image below for a PDF version that includes answers). The puzzle can be something to do while traveling, or while taking a break between (or during) sessions as well as keynotes. I wonder which buzzword term will get used the most, as well as which new ones should be added to an updated version?

Server StorageIO 2018 VMworld Data Infrastructure Buzzword Bingo Puzzle

Where to learn more

Learn more about VMworld and data infrastructures related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Next week is VMworld 2018 in Las Vegas, which for some means travel and a long week. Feel free to suggest additions, as there could be a revision or update or two between now and VMworld. Have fun, safe travels, and hope to see you next week; in the meantime, enjoy the Server StorageIO 2018 VMworld Data Infrastructure Buzzword Bingo Puzzle.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Dell EMC PowerEdge MX 7000 Kinetic Based Data Infrastructure Architecture

Dell EMC today announced, with the tagline IT Unbound, their new PowerEdge MX 7000 Kinetic based data infrastructure architecture, slated for general availability September 21, 2018. Previewed earlier this year at Dell Technology World in Las Vegas, the PowerEdge MX 7000 is a new family of modular, scalable servers for various data infrastructure roles.

What is different with the PowerEdge MX 7000, compared to other new 14th generation (Gen 14) Dell servers, is the finer granularity of resource allocation based around the new Kinetic composable infrastructure. Also previewed at Dell Technology World earlier this year in Las Vegas, Kinetic (not to be confused with the Seagate Kinetic object storage key value drive initiative) is a new composable architecture.

Dell EMC PowerEdge MX 7000 Kinetic What Was Announced

  • First instantiation of Kinetic composable based data infrastructure resources
  • OpenManage Enterprise Modular Edition
  • PowerEdge MX 7000 modular data infrastructure server

Dell EMC PowerEdge MX 7000 and Kinetic Architecture
Dell EMC PowerEdge MX 7000 and Kinetic Architecture Image via Dell.com

Dell EMC Kinetic Composability What Is It

By being a composable data infrastructure resource and server, Dell EMC Kinetic based solutions can be decomposed with finer granularity than previous servers. What this means is that in the past, memory, I/O networking, physical storage devices, and compute sockets and cores were assigned to a single image instance. That single image instance could be an operating system (OS) such as Linux or Windows, a hypervisor such as KVM, Microsoft Hyper-V, Nitro (AWS), Oracle, VMware vSphere ESXi, or Xen among others, or proprietary decomposition and aggregation software (and hardware) technology (ScaleMP among others).

With a composable based solution, instead of the entire server, or motherboard(s) and its resources, being allocated to a single OS as a bare metal (BM) or Metal as a Service (MaaS) instance, or to a hypervisor, different resources can be allocated to various instances. On the surface it would be easy to say that sounds a lot like what hypervisors such as those from Microsoft, VMware, and others are doing, particularly with clusters.

Dell EMC Kinetic Data Infrastructure Architecture
Dell EMC Kinetic Data Infrastructure Architecture Image via Dell.com

However, the difference is that with hypervisors, all of a server's physical resources (compute, memory, I/O, storage devices, GPU, FPGA/ASIC) are allocated to the OS, hypervisor, or composition software, which then creates vCPU, vRAM, and related resources. With composability, the emphasis is on enabling more granular resource allocation as well as scale-out. The business or organizational outcome is what is essential, which means better allocation and more effective use of resources to boost productivity, versus merely driving up utilization and efficiency.
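As a conceptual sketch only (my simplified toy model, not Dell EMC Kinetic's actual implementation or API), the following shows the idea: rather than handing an entire node to one OS or hypervisor, individual resources are carved from a shared pool and assigned to different instances.

    # Toy model of composable resource allocation (illustrative assumption,
    # not the actual Kinetic implementation or API).
    pool = {
        "cpu_cores": 96,
        "memory_gb": 6144,
        "nvme_drives": ["nvme0", "nvme1", "nvme2", "nvme3"],
    }

    def compose(name, cores, mem_gb, drive_count):
        # Carve a granular slice of pooled resources into one instance.
        assert cores <= pool["cpu_cores"] and mem_gb <= pool["memory_gb"]
        pool["cpu_cores"] -= cores
        pool["memory_gb"] -= mem_gb
        drives = [pool["nvme_drives"].pop() for _ in range(drive_count)]
        return {"name": name, "cores": cores, "memory_gb": mem_gb, "drives": drives}

    # Two instances composed from the same physical pool
    database = compose("database", cores=32, mem_gb=2048, drive_count=2)
    web = compose("web", cores=8, mem_gb=256, drive_count=1)
    print(database, web, pool, sep="\n")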

The Dell EMC PowerEdge MX 7000 eliminates the traditional hardware-based mid-plane with an internal fabric connector per node that can also be exposed outside the physical MX enclosure. By using an industry-standard connector on the edge of the server motherboard resource nodes, different server I/O connectivity can be leveraged as it becomes available or improves. For example, IMHO it is not too complicated to envision a time in the not-so-distant future when Kinetic enabled resources (e.g., server nodes) evolve to support the emerging Gen-Z server I/O connectivity protocol.

What is Gen-Z

Do the PowerEdge MX 7000 and Kinetic use Gen-Z today? Not yet; however, Dell has been showing demos and technology proofs of concept at various events.

Why bring up Gen-Z now? Simple: it's something that will be part of many data infrastructure, server I/O, storage, networking, hardware, and software-defined discussions in the not-so-distant future.

As a refresher or primer, Gen-Z is a new server I/O fabric interface that supports access to and by CPU sockets along with their cores, and to memory including DRAM as well as emerging SCM and PMEM. In addition to server memory access, Gen-Z also enables local as well as remote access to memory, storage, GPU, FPGA, and ASIC among other resources. For backward compatibility as well as investment protection, Gen-Z is intended to work with existing PCIe, Ethernet, Fibre Channel, SAS, SATA, NVMe, and InfiniBand among other server I/O interconnects and protocols.

Does this mean Gen-Z is a challenger for Ethernet and other IP-based general LAN networking? IMHO no, at least not in the foreseeable future. Granted, PCIe, Fibre Channel, InfiniBand, Ethernet, and some others have all seen rivals that promised to be the end-all network for everything join the where-are-they-now list; near term, Gen-Z is focused inside a modular enclosure or perhaps within a rack. Read more about Gen-Z here, as well as the Dell EMC blog The Gen-Z Journey road to composability.

Dell OpenManage Enterprise
Dell OpenManage Management Interface Image via Dell.com

OpenManage Enterprise Modular Edition

Management for the PowerEdge MX 7000 utilizes OpenManage Enterprise Modular Edition, an HTML5, REST based tool with APIs. Management capabilities include workflows for simplicity of operation and lifecycle management. Besides being HTML5 and REST API based, OpenManage Enterprise Modular Edition is also Redfish inspired for further interoperability. Note that the PowerEdge MX 7000 also integrates with the Dell iDRAC physical machine level management interface, providing unified management from single to multiple server groups spanning towers to racks.
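Because the management plane is REST based and Redfish inspired, programmatic access should look roughly like any Redfish-style API. The sketch below uses the standard DMTF Redfish systems collection path; the hostname, credentials, and the exact schema OpenManage exposes are my assumptions, so consult the OpenManage documentation for specifics.

    # Hedged sketch of a Redfish-style REST query; the endpoint hostname and
    # credentials are hypothetical, and the exact OpenManage schema may differ.
    import requests

    BASE = "https://mx7000-mgmt.example.com"   # hypothetical management endpoint

    resp = requests.get(
        f"{BASE}/redfish/v1/Systems",          # standard DMTF Redfish collection path
        auth=("admin", "password"),            # placeholder credentials
        verify=False,                          # lab-only; verify certificates in production
    )
    resp.raise_for_status()
    for member in resp.json().get("Members", []):
        print(member.get("@odata.id"))         # one entry per managed system/sled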

Dell EMC PowerEdge MX 7000
Dell EMC PowerEdge MX 7000 Image via Dell.com

Dell EMC PowerEdge MX 7000 Kinetic Based Data Infrastructure Server

The new Dell EMC PowerEdge MX 7000 is the first installment of their new Kinetic based composable architecture. Its components consist of a 7U chassis with power and cooling fans, compute sleds, storage sleds, I/O connectivity with an internal fabric, and management tools.

Dell EMC PowerEdge MX 7000 Modules
Dell EMC PowerEdge MX 7000 Modules Image via Dell.com

Dell EMC PowerEdge MX 7000 Server Compute modules

Dell EMC PowerEdge MX 7000 compute sleds include the MX740c (single width) and MX840c (double width), two and four socket modules with local on-board NVMe (e.g., U.2 8639 small form factor, SFF) drives per module. These initial compute modules support Intel Xeon processors and up to six (6) TBytes of memory. The MX740c supports up to six (6) local NVMe, SAS, or SATA drives (e.g., 8639 connectors), while the MX840c supports up to eight (8) local drives. Note that these local onboard drives can be shared with other sled modules, and compute sleds can also access the shared storage sled based drives.

Dell EMC PowerEdge MX 7000 Server Storage modules

The Dell EMC PowerEdge MX 7000 storage sled is the MX5016s, holding up to 16 hot-pluggable SAS HDDs; up to seven MX5016s sleds can be configured per MX chassis for up to 112 direct attached storage (DAS) drives. Each of the drives can be individually mapped to one or more servers, supporting aggregated (e.g., HCI) as well as disaggregated (CI and legacy) deployment topologies.

Dell EMC PowerEdge MX 7000 Server I/O Networking Modules

Initial server I/O modules for the new Dell EMC PowerEdge MX include 25 GbE and 32G Fibre Channel (GFC) host connectivity, along with 100 GbE and 32 GFC uplink capabilities, with top of rack (ToR) support built in and Open Networking OS10EE software enabled. The server I/O modules provide both north-south as well as east-west connectivity inside and outside the chassis for data plane and management plane traffic.

Server I/O connectivity options include:

  • MX5108n Ethernet Switch with 8 x 25GbE (server facing ports), 2 x 100GbE ports, 1 x 40GbE port, 4 x 10GbE ports.
  • MX9116n Fabric Switching Engine (e.g., Kinetic fabric) with 16 x 25GbE server facing ports, 2 x 100GbE/8 x 32GFC unified ports, 2 x 100 GbE ports and 12 fabric expansion ports.
  • MXG610s Fibre Channel Switch with 16 x 32GFC internal ports, 8 x 32 GFC SFP+ ports and 2 QSFP (4 x 32GFC) uplink ports.

Where to learn more

Learn more about Dell EMC PowerEdge MX, Kinetic, Composable and data infrastructures related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Overall this is a good announcement of technology, product, as well as where resources are headed to meet different workload demands and look forward to getting some test time with a Dell EMC PowerEdge MX 7000.

Dell EMC PowerEdge MX 7000 Three Tenants
Dell EMC PowerEdge MX 7000 Three Tenants Image via Dell.com

The new Dell EMC PowerEdge MX 7000 provides a data infrastructure resource platform for deploying traditional, cloud, software-defined, and composable workloads, whether as converged infrastructure (CI) disaggregated, hyper-converged infrastructure (HCI) aggregated, or hybrid configurations.

With the Dell EMC PowerEdge MX 7000, there is more resource granularity and future-proof capabilities than traditional high-density blade, as well as twin, quad or eight node server configuration solutions.

Many vendors talk about solutions being future proof or enabling investment protection; with the PowerEdge MX 7000, Dell EMC is taking the next step, discussing trends and technology along with what you can do today. Unlike traditional dual, quad, eight-node, or high-density node and blade servers with dedicated discrete mid-planes tied to a given technology, the Dell PowerEdge MX 7000 and Kinetic based architecture are mid-plane (aka backplane) free. There is still connectivity between the different PowerEdge MX 7000 chassis modules, which is a fabric (a network if you prefer).

For example, server compute sled modules have an industry standard connector that connects with other components in the chassis. What differs from traditional blade and multi-node server configurations is that, on board the compute sleds, an adapter module can be changed to support a new interface over different generations of technology (as an example, keep an eye on what happens with Gen-Z).

The result is that the Dell EMC PowerEdge MX 7000 should be an excellent platform for software-defined data centers (SDDC), software-defined data infrastructures (SDDI), software-defined infrastructures (SDI) as well as other software defined or traditional deployments. The Dell EMC PowerEdge MX 7000 will make for a good CI, HCI, SDDC, SDDI, SDI platform for public, private as well as hybrid clouds, PaaS as well as IaaS deployments, along with VMware, Microsoft (Hyper-V, Windows Storage Spaces Direct (S2D), as well as Azure Stack) among other scenarios.

By being flexible, scalable, agile, and adaptable, with easy management and a future-proof, responsive design enabling a pool of dynamic data infrastructure resources, the Dell EMC PowerEdge MX 7000 should be good at enabling IT Unbound.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

IBM announces new Power9 processor based E950 E980 server systems

As a single server or node, the Power9 E950 supports up to four (4) CPU processor sockets, each with multiple cores. An E980 system comprises up to four E950 based systems as a solution. The new E950 succeeds the Power E850 and E850C; its machine type model number is 9040-MR9, a 4U single enclosure with two or four processor modules.


Power9 Processor image via IBM.com

IBM Power9 E950 and E980

As a refresher, these systems leverage IBM's proprietary processor chip technology called Power, used in their various mid-range and higher-end server solutions.

The Power9 E950 and E980 systems support PowerVM virtualization, along with virtual machine (VM) mobility as well as optimization for OpenStack among other workloads.

IBM touts the Power9 E950 (AIX and Linux) and E980 (AIX, Linux, IBM i) as optimized for:

  • Analytics, AI (ML/DL) and Cognitive computing
    • Faster cores and threads, more performance per socket
    • More bandwidth and lower latency
  • Super Compute (SC), Technical, High Performance Compute (HPC)
    • High bandwidth graphical processing unit (GPU) attachment
    • Optimized CPU GPU memory sharing and interaction
    • Bandwidth optimized main memory
    • Virtual addressing optimization
  • Cloud and Hyper Scale Data Infrastructures and Data Centers
    • Dense performance and energy consumption
    • Virtualization assist, QoS, power management and security
    • Fast I/O subsystem for server I/O to storage and networks
  • Enterprise data infrastructures and data centers
    • Scale-up and scale-out
    • Server and workload consolidation
    • Up to 4TB of buffered memory per socket (16TB per E950 node)

IBM E950 Power9 System

Front view of E950 System Image via IBM.com

The following image (via IBM.com) shows an exploded component view of the E950.
IBM Power9 E950 exploded view

The following image (via IBM.com) shows a top view looking down into an E950.

IBM Power9 E950 top view

E950 is a 4U server (or E980 node) with compute and memory features including:

  • Power9 8,10,11 or 12 cores per socket, up to 48 cores (4 x 12 cores)
  • Four times memory compared to E850 systems (up to 16TB or 4TB per socket)
  • Eight (8) memory riser cards with 16 DDR4 DIMM each (8,16,32,64 or 128GB DIMM)
  • Memory bandwidth of up to 920 GB/sec (note that is big B as in bytes, not Gb or little b as in bits; see the quick conversion sketch below)
  • Refresh your server, CPU, compute, socket, core and threads knowledge here.

E950 also features faster I/O subsystem for server I/O to storage and networks:

  • 630 GB/sec (e.g., ~5 Tbps) I/O bandwidth (see the conversion sketch after this list)
  • NVIDIA NVLink GPU attachment, PCIe Gen4 and OpenCAPI I/O
  • Up to eight (8) (4 socket systems) PCIe Gen4 x16 (16 lanes each) card slots
  • Up to two (2) PCIe Gen4 x8 (8 lanes each) card slots
  • Up to 144 PCIe lanes (4 socket systems), full height, half length
  • USB 3 (2 front, 2 rear)
  • 12 internal 2.5” form factor storage bays for HDDs and SSDs, including up to eight (8) SAS and four (4) NVMe U.2 (8639). Note that NVMe devices attach via PCIe ports and lanes.
  • Hot plug components and optional I/O expansion as well as storage drawers
  • Here is a refresher (or primer) on PCIe, as well as NVMe, SAS, and SSD technologies.
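For those keeping big B (bytes) and little b (bits) straight, here is the quick arithmetic behind the memory and I/O bandwidth figures in the lists above:

    # Quick conversions for the E950 figures above: bytes (big B) vs. bits (little b).
    sockets = 4
    tb_per_socket = 4
    print(f"Max memory: {sockets * tb_per_socket} TB per node")      # 16 TB

    memory_bw_gbytes = 920                       # GBytes/sec (big B)
    print(f"Memory bandwidth: {memory_bw_gbytes * 8} Gbps")          # 7,360 Gbit/sec

    io_bw_gbytes = 630                           # I/O subsystem, GBytes/sec
    print(f"I/O bandwidth: ~{io_bw_gbytes * 8 / 1000:.0f} Tbps")     # ~5 Tbit/sec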

IBM E980

The IBM E980 system is a collection of up to four nodes along with a control module; a cabinet rack E980 system is shown below (image via IBM.com).
IBM Power9 E980

IBM Power9 E950 E980
Via IBM.com

View more features for E950 here (PDF) and E980 here (PDF).

Where to learn more

Learn more about IBM Power and data infrastructures related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

These new systems provide an increase not only in compute, but also in memory as well as server I/O for storage and networking. With the addition of multiple PCIe Gen4 x16 card slots, more GPUs such as those from NVIDIA, as well as fast Fibre Channel, SAS, and NVMe based storage, can be attached to these systems.

With a good number of x16 PCIe Gen4 slots, the E950 and E980 systems are capable of supporting more GPU offload cards such as those from NVIDIA, along with other ASIC or FPGA accelerator devices. In addition to compute offload, the x16 PCIe Gen4 slots enable server I/O cards to attach more storage devices, including faster Fibre Channel, Ethernet, SAS, and NVMe attachment.

Overall, the new Power9 processor based E950 and E980 server systems are a good move for existing customers of AIX and Linux, as well as, with the E980, IBM i.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

July 2018 Server StorageIO Data Infrastructure Update Newsletter

Volume 18, Issue 7 (July 2018)

Hello and welcome to the July 2018 Server StorageIO Data Infrastructure Update Newsletter.

In case you missed it, the June 2018 Server StorageIO Data Infrastructure Update Newsletter can be viewed here (HTML and PDF).

In this issue, buzzword topics include Dell Technologies and VMware; AWS and Google public, private, and hybrid clouds; machine learning; 3D XPoint; SCM; SSD; NVMe; and data infrastructure management tools, among others.

Enjoy this edition of the Server StorageIO Data Infrastructure update newsletter.

Cheers GS

Data Infrastructure and IT Industry Activity Trends

July 2018 data infrastructure, server, storage, I/O network, hardware, software, cloud, converged, and container as well as data protection industry activity includes among others:

Amazon Web Services (AWS) July 2018 updates include enhancements to the SageMaker machine learning (ML) service, faster S3 access, and new EC2 instances, along with Snowball Edge (SBE), an on-prem converged server and compute appliance (read more about SBE here). In other public cloud activity, Google Cloud Platform (GCP) announced a new Los Angeles region.

Intel and Micron have announced that they will pursue different paths once they complete, in 2019, the second generation of 3D XPoint, used in Intel Optane NVMe SSD and Storage Class Memory (SCM) technologies; read more here: Intel Micron 3D XPoint Evolving. Meanwhile, Broadcom buying CA: brilliant or a brainbuster? This deal is a bit of a head scratcher, with Broadcom spending $18.9 billion USD (cash) to buy CA Technologies.

In other data infrastructure news and activity, DataDirect Networks stages a bid to acquire Tintri's assets and expand its storage portfolio into the enterprise. Dell EMC announced a new integrated data protection appliance (IDPA DP4400) for small and midsize organizations. In other activity, VMware declared a dividend; Dell Technologies, as a majority owner, will use the cash to fund Dell business restructuring. Read more about the Dell Technologies Class V VMware tracking stock exchange for stock or cash here.

Spectra (e.g., who some of you know as Spectra Logic) has announced enhancements to their tape libraries. Note that one of the larger growth (or sustainment) markets for tape based technologies in recent years has been the larger cloud scale service providers. Granted, those providers are not using tape in old ways (e.g., for direct backup), but rather in new ways where it is a companion to SSDs and HDDs as another storage class, tier, or technology enabler.

IBM has jumped on the NVMe bandwagon, announcing updates to their FlashSystem 9100 systems (e.g., the technology they acquired via TMS a few years ago). Opvizor has announced a new VMware vSAN performance monitoring and troubleshooting feature for their insight and awareness management tools.

Check out other industry news, comments, trends perspectives here.

Data Infrastructure Server StorageIO Comments Content

Server StorageIO Commentary in the news, tips and articles

Recent Server StorageIO industry trends perspectives commentary in the news.

Via SearchStorage: Comments on GDPR and Cloudian File Sync Share
Via NetworkComputing: Comments Software Defined Storage SDS Getting Started
Via SearchStorage: Comments The storage administrator skills you need to keep up today
Via SearchStorage: Comments Managing storage for IoT data at the enterprise edge
Via SearchCloudComputing: Comments Hybrid cloud deployment demands a change in security mind set

View more Server, Storage and I/O trends and perspectives comments here.

Data Infrastructure Server StorageIOblog posts

Server StorageIOblog Data Infrastructure Posts

Recent and popular Server StorageIOblog posts include:

2018 Hot Popular New Trending Data Infrastructure Vendors to Watch
June 2018 Server StorageIO Data Infrastructure Update Newsletter
May 2018 Server StorageIO Data Infrastructure Update Newsletter
Have you heard about the new CLOUD Act data regulation?
Data Protection Recovery Life Post World Backup Day Pre GDPR
Microsoft Windows Server 2019 Insiders Preview
Server Storage I/O Benchmark Performance Resource Tools
Data Infrastructure Primer Overview (Its Whats Inside The Data Center)
If NVMe is the answer, what are the questions?

View other recent as well as past StorageIOblog posts here

Server StorageIO Recommended Reading (Watching and Listening) List

Software-Defined Data Infrastructure Essentials SDDI SDDC

In addition to my own books, including Software Defined Data Infrastructure Essentials (CRC Press 2017) available at Amazon.com (check out the special sale price), the following are Server StorageIO data infrastructure recommended reading, watching, and listening list items. The list covers various IT, data infrastructure, and related topics; the Intel Recommended Reading List (IRRL) for developers is also a good resource to check out.

Duncan Epping (@DuncanYB), Frank Denneman (@FrankDenneman), and Niels Hagoort (@NHagoort) have released their VMware vSphere 6.7 Clustering Deep Dive book, available at venues including Amazon.com. This is the latest in a series of clustering deep dive books from Frank and Duncan; if you are involved with VMware, SDDC, and related software-defined data infrastructures, it should be on your bookshelf.

Watch for more items to be added to the recommended reading list book shelf soon.

Data Infrastructure Server StorageIO event activities

Events and Activities

Recent and upcoming event activities.

July 25, 2018 – Webinar – Data Protect & Storage

June 27, 2018 – Webinar – App Server Performance

June 26, 2018 – Webinar – Cloud App Optimize

See more webinars and activities on the Server StorageIO Events page here.

Data Infrastructure Server StorageIO Industry Resources and Links

Various useful links and resources:

Data Infrastructure Recommend Reading and watching list
Microsoft TechNet – Various Microsoft related from Azure to Docker to Windows
storageio.com/links – Various industry links (over 1,000 with more to be added soon)
objectstoragecenter.com – Cloud and object storage topics, tips and news items
OpenStack.org – Various OpenStack related items
storageio.com/downloads – Various presentations and other download material
storageio.com/protect – Various data protection items and topics
thenvmeplace.com – Focus on NVMe trends and technologies
thessdplace.com – NVM and Solid State Disk topics, tips and techniques
storageio.com/converge – Various CI, HCI and related SDS topics
storageio.com/performance – Various server, storage and I/O benchmark and tools
VMware Technical Network – Various VMware related items

What this all means and wrap-up

Summer is here in North America and the Northern Hemisphere, which means holidays as well as vacations. However, data infrastructures continue to evolve, as do the tools, technologies, trends, hardware, software, and services, along with those who take care of and define them. Enjoy your summer vacation and holidays, as well as this July 2018 Server StorageIO Data Infrastructure Update Newsletter edition.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Amazon Web Services (AWS) July 2018 Updates

Amazon Web Services (AWS) July 2018 updates continue to expand the features, functionality, and service capabilities of the public cloud provider across various geographies.

Recent AWS updates include Snowball Edge (SBE), which adds local, on-site, on-premises (aka on-prem) EC2 compute capabilities to the Snowball appliance. Previously, Snowball was a data and storage migration only appliance; now, with the new capabilities, compute is also enabled as part of a turnkey converged platform. Read more about SBE here.

In other updates, AWS has extended its Elastic Compute Cloud (EC2) capabilities (besides Snowball Edge) with new instance types, along with leveraging their next-generation hypervisor as part of Nitro enabled systems. New EC2 instances span from on-prem Snowball Edge (SBE) to AWS dedicated aka bare metal instances, along with traditional cloud instances (e.g., virtual machines).

These new instances, including R5, R5D, and Z1D among others, leverage faster Intel Xeon Platinum 8000 series processors, along with more memory. For example, Z1D is a compute-intensive instance with 4.0 GHz all-turbo cores, while R5 is memory optimized with 3.1 GHz cores (up to 96 vCPU) and up to 768GB of RAM. The R5D is a memory-optimized instance that also supports up to 3.6TB of on-instance NVMe based storage. View additional AWS instance types here.
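To put these instance types in context, below is a minimal sketch using the AWS boto3 Python SDK to launch an R5D instance; the AMI ID and region are placeholders rather than values from the announcement, so substitute your own.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch a single memory-optimized R5D instance with local NVMe storage
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
        InstanceType="r5d.large",
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])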

AWS has enhanced its SageMaker (Machine Learning) service to support higher throughput, enabling faster batch data transformation jobs for non-real-time inference. To enable higher data and API call rates, AWS has also enhanced Simple Storage Service (S3) request rates. Another enhancement by AWS is enabling bring-your-own-IP-address preview for virtual private cloud (VPC) as part of allowing hybrid clouds.

View additional new, recent and past AWS updates here, and here.

Where to learn more

Learn more about AWS, Cloud and data infrastructures related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Amazon Web Services AWS July 2018 Updates continue to expand the number, type and extensiveness of public cloud services, as well as enabling hybrid capabilities. The Amazon Web Services AWS July 2018 Updates also address different data infrastructure layers, from lower-level Infrastructure as a Service (IaaS) including EC2 compute, to higher-level artificial intelligence (AI), machine learning (ML), and deep learning (DL) among other cognitive as well as analytic offerings.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Intel Micron 3D XPoint Evolving

Generations of memory
Major memory classes or categories timeline (Image via Intel and Micron)

Co-creators of 3D XPoint, the next generation of non-volatile memory (NVM) also known as storage class memory (SCM) or persistent memory (PMEM), have announced they will complete joint development of the second-generation technology, then pursue their separate paths. Intel and Micron jointly announced 3D XPoint three years ago (July 2015) as a new technology, with first-generation products having appeared in the market over the past year or so.

Various industry vs customer adoption deployment timelines
Various Adoption Deployment Timelines for different focus areas

For those in the industry who measure technology in months rather than years of adoption and deployment, or in time from press release until new news, some would say 3D XPoint is late or behind schedule, which perhaps it is on some timelines. On the other hand, IT customers tend to be on a different timeline, one that may seem like glacial speed to an industry focused on rapid change. IMHO, 3D XPoint is about on the right timeline based on IT customer deployment, which may very well accelerate toward broader usage with second-generation based products.

3D XPoint based Intel Optane
Top Intel 750 NVMe PCIe AiC SSD, bottom Intel Optane NVMe 900P U.2 SSD with Ableconn carrier

While the focus is easily placed on Intel and Micron going separate ways, keep in mind that the second generation of 3D XPoint is in the works. Some might consider the second generation of 3D XPoint to be the first real production and volume technology, with the first being just that, the first generation. An example of a first-generation 3D XPoint based product is the Intel Optane NVMe device such as the one shown above, discussed in this StorageIO Lab test drive post here.

NVMe and NVM along with SCM as well as PMEM better together

Where to learn more

Learn more about Intel, Micron, NVM, NVMe, 3D XPoint, SCM, PMEM and data infrastructures related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Some may see the announcement of Intel and Micron pursuing separate paths as a negative, while others see it as a positive. While completing the second-generation development together, both can leverage what they have done while seeking different, presumably divergent or expanded, paths forward.

A concern could be if Intel and Micron merely go their separate ways yet focus on the same market areas. A benefit could be if Intel and Micron pursue different market focus areas with some overlap while expanding to broader opportunities.

The latter scenario could be useful for moving the technology forward by giving it new and different opportunities. For example, some that favor Intel along with its ecosystem would prefer whatever Intel does next. Likewise, those that favor Micron and their ecosystem may influence the direction Micron goes.

Does this mean Micron and Intel are all done collaborating? Tough to say.

However, they still share the IM Flash fabrication facility (fab) in Lehi, Utah.

Overall, I think this is a good move for both Intel and Micron once they get the second generation of 3D XPoint developed and into production for customer deployments. With Intel Micron 3D XPoint evolving, let's see what's next.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Broadcom buying CA, Brilliant or a Brainbuster?

For some in the IT industry as well as financial markets, there is skepticism about Broadcom (formerly known as Avago) announcing that it is buying CA Technologies (CA) for USD 18.9 Billion (cash). For example, Broadcom stock (AVGO) took a significant negative hit (13%) on the news.

Broadcom Stock impact after announcing CA purchase
Broadcom Stock upon announcing buying CA (via Google)

Broadcom aka Avago and CA rewind

Why the backlash over buying CA? A couple of reasons: CA is not exactly the most loved software vendor by customers in the industry, and Broadcom (Avago) has traditionally focused on hardware.

However, to understand this better, let's take a step back.

After digesting the likes of Broadcom, Brocade, and LSI among others, as well as after failing to capture Qualcomm in a USD 117 Billion takeover attempt, Avago (e.g., Broadcom) has set its sights on mainframe and legacy enterprise software vendor CA Technologies (CA), formerly known as Computer Associates. CA has about USD 4.2 Billion in annual revenue, with about two-thirds tied to legacy IBM mainframe software and the rest in other enterprise software. While not a growth segment, the IBM mainframe software business is a good annuity revenue and margin stream.

Data Infrastructures
Data Infrastructures support IT business applications

Broadcom had 2017 revenues of about USD 17.6 Billion made up of a diverse product set including data infrastructure hardware along with associated software spanning legacy to new and emerging cloud environments. Some of Broadcom technologies include server I/O devices such as PCIe, SAS, SATA and NVMe adapters, RAID controllers and chips, Fibre Channel, NVMe over Fabric (NVMeoF), Ethernet, switches and much more.
Broadcom and CA, Brainbuster or Brilliant?

This deal is a bit of a head-scratcher or brainbuster on the surface, as Broadcom aka Avago has been primarily a hardware company (they do have a portfolio of drivers, management tools, monitoring and other software), and I can understand them wanting to get more into the software business.

Avago (excuse me, Broadcom) has had a focus on leaning out acquisitions to drive volume and integration across its portfolio, bringing value to its partners and customers. For its part, CA has been known as where old (or new) software goes to die or retire, garnering CA a reputation as a software retirement home, or undertaker, for technology. Refer to the Broadcom SEC filing for more information here.

On the other hand, CA has made a successful business of wringing out value from existing software as opposed to substantial investment in new development; they do still do some new development.

Perhaps this is the risk and reward that Avago sees, where similar to themselves of wringing out value from existing hardware, maybe they will do the same with CA, however, taking it to a new level. If that is the game, then once CA is bought by Broadcom, who will they pursue from a software acquisition target list similar to what Avago has done with hardware?

Where to learn more

Learn more about Broadcom (Avago), CA and data infrastructures related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

For now, Broadcom buying CA is a brainbuster, especially on the surface. However, it could be a brilliant move if Broadcom can leverage CA to do what it has done in the past. That is, similar to Avago buying various companies and leaning them out, CA has done similar by boosting recurring revenues and increasing market footprint. The combined companies can also leverage their reach into various partner ecosystems; keep in mind, hardware needs software, software needs hardware, and Broadcom is now a supplier of both.

It will be interesting to see how much Broadcom leans out CA; perhaps the lessons from buying Brocade might help more than previous purchases. My point is that Brocade solutions are higher up the data infrastructure technology stack than traditional Broadcom, Avago, and LSI components, and require more direct customer-facing sales and marketing.

CA for its part also relies on direct customer-facing sales and marketing; however, is there room or opportunity for leaning things out?

Something else interesting to watch is how much Broadcom allows CA to operate on its own, vs. more under the direct Broadcom umbrella.

Then there is the question of whether, to sustain growth, Broadcom and CA go on additional shopping sprees for undervalued software companies, and who those would be. Perhaps some of the big legacy vendors such as Cisco, Dell Technologies, HPE, IBM, and Oracle among others might be interested in selling off some underperforming software.

On the other hand, perhaps there are some opportunities for Broadcom and CA to do some buyout deals with private equity firms?

Keep in mind that over the past few years, several software business units have been divested from the likes of the combined Dell and EMC, HPE among others.

For now, I’m sticking with Broadcom buying CA as a brainbuster, however, see some interesting scenarios in the future.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

AWS Snowball Edge SBE Converged Cloud Storage Appliance

As part of extending their cloud platform reach, recent Amazon Web Services (AWS) announcements include the AWS Snowball Edge SBE converged cloud storage appliance. Snowball Edge (SBE) has evolved from its previous focus as a data transfer and migration platform appliance to now include support for on-prem compute. SBE has previously been available as an appliance that ships from AWS to your location as a service to enable bulk data movement to the public cloud (e.g., an AWS Simple Storage Service (S3) bucket). With this new capability, AWS is enabling SBE to support on-prem compute similar to Elastic Compute Cloud (EC2) cloud instances.

AWS Snowball Data Migration at PB scale
AWS Snowball Appliance Image via AWS.com

What is AWS Snowball

Snowball is a bulk physical data migration appliance that AWS ships to your location. You use Snowball by setting up a copy job with AWS; when the device arrives at your site, you set it up and enable the copy jobs to move data from the source to the Snowball destination. Once data is copied, you ship the Snowball back to an AWS region and availability zone (AZ) where its contents are copied into a Simple Storage Service (S3) bucket of your choice. Once the copy job into AWS S3 is complete, AWS performs a secure erase of the Snowball.

Basic Snowball includes 10 GbE network connections (RJ45 and SFP+ [fiber or copper]). Security and encryption include 256-bit keys that can be managed via AWS Key Management Service (KMS). Note that keys are not sent to or stored on the device, for security during transit. For additional protection, tamper-resistant seals are included along with a Trusted Platform Module (TPM) to detect unauthorized hardware, firmware or software changes.

End-to-end tracking is enabled using E Ink shipping labels, and monitoring is available via AWS Simple Notification Service (SNS). Once your data transfer job completes and is verified, a software erasure of the device is performed by AWS following NIST media handling guidelines.

For management, Snowball has an API for customer integration, as well as the ability to create and manage transfer jobs via the AWS management console. The Snowball adapter also gives customers direct access to the device, where it appears as an S3 endpoint (how you access the storage and data).
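Because the device presents an S3-compatible endpoint, standard S3 tooling works against it. Below is a minimal sketch in Python with boto3; the adapter address and credentials are hypothetical placeholders, as the real values come from the Snowball client tooling for your specific device.

    import boto3

    # Point a standard S3 client at the local Snowball adapter endpoint
    s3 = boto3.client(
        "s3",
        endpoint_url="http://192.0.2.10:8080",       # hypothetical adapter address
        aws_access_key_id="SNOWBALL_ACCESS_KEY",     # placeholder credentials
        aws_secret_access_key="SNOWBALL_SECRET_KEY",
    )

    # Copy a local file onto the Snowball just like any S3 bucket
    s3.upload_file("backup.tar", "my-transfer-bucket", "backup.tar")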

Backside view of AWS Snowball
Backside view of Snowball Image via Amazon.com

Additional Snowball speeds, feeds and specifications include:

  • Storage space capacity of 50TB (42TB usable) or 80TB (72TB usable)
  • Network connectivity 10 GbE RJ45 (Cat6) and SFP+ (copper and optical). Cables include RJ45 and copper SFP+; for fiber attached Ethernet, the customer supplies their own SFP+ optical cables.
  • Snowball is designed for office environments as well as data centers (e.g., about 68 dB noise) and weighs about 47 pounds.
  • Power requirements include NEMA 5-15p (standard wall outlet), 100-220 volts, with power cable included.

Note that for traditional Snowball deployments, an on-prem workstation or server is needed to copy data from source locations to the Snowball device.

How AWS Snowball and Snowball Edge work

How AWS Snowball Works

Referring to the image above, the first step to using AWS Snowball (or Snowball Edge) is to place an order via the AWS management console (A). Part of the ordering process involves setting up the data transfer job and, in the case of AWS Snowball Edge, defining the EC2 instance and image (read more about that here via AWS). After placing the order and completing setup, the AWS Snowball arrives at your location (B), on-site setup is done and the data transfer performed (C). Once data is transferred, the AWS Snowball is returned to the designated AWS location via two-day shipping (D) and the data copied into your specified S3 or Glacier bucket (E). After your data is transferred into the S3 or Glacier bucket you specify as part of the transfer job, you are able to do what you want with your files, folders, images, videos, VHDXs, VMDKs, ISOs, little data, and big data.
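As a companion to step (A), here is a minimal boto3 sketch of creating an import job programmatically rather than via the management console; the address ID, role ARN and bucket ARN are placeholders for values from your own AWS account.

    import boto3

    snowball = boto3.client("snowball", region_name="us-east-1")

    # Create an import job for an Edge device; data lands in the named S3 bucket
    job = snowball.create_job(
        JobType="IMPORT",
        SnowballType="EDGE",
        SnowballCapacityPreference="T100",
        ShippingOption="SECOND_DAY",
        AddressId="ADID00000000-0000-0000-0000-000000000000",  # placeholder
        RoleARN="arn:aws:iam::111122223333:role/SnowballRole",  # placeholder
        Resources={"S3Resources": [{"BucketArn": "arn:aws:s3:::my-transfer-bucket"}]},
    )
    print(job["JobId"])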

What is AWS Snowball Edge

AWS has enhanced its Snowball Edge (SBE) data mobility, migration, and transport appliance to now also include compute. For those not familiar, Snowball is an appliance that comes in various sizes that you order from AWS; it shows up at your site, and then you copy your data to it for migration into AWS. Once data is copied, you return the device to AWS, where the data then appears in your designated S3 bucket. From your S3 bucket, you can then move the data, files, volumes, and images to other locations, use them for standing up EC2 compute, populating databases, or other items.

With the new compute feature, AWS is enabling compute on the Snowball Edge appliance functioning similar to an EC2 instance, except that it is on your site. This means you can use the compute to run your own custom AMIs (Amazon Machine Images) on site or on-prem in support of data migration, conversion or other processes. You can also keep the appliance on-site for as long as you want, granted your credit card gets charged, to support development, test, extended migration, or to have a converged, or hyper-converged, platform.

Note that with SBE having compute capability, you can now run an EC2 image that functions as your copy server eliminating the need to have a workstation or server on-prem for the copy operation.

Additional AWS Snowball Edge speeds, feeds and feature functions include:

  • 100TB (82TB usable) storage space capacity
  • 10 GbE network, along with 10/25 GbE SFP28 and 40 GbE QSFP+ with device-based encryption (customer provided network cables)
  • Local computing with EC2 and Lambda functions for remote deployment along with scale-out clustering of multiple SBE’s
  • S3 compatible endpoint along with NFS endpoint (mount point) using both NFS v3 and v4.1.
  • Weighs about 50 pounds; tamper-evident seals along with a TPM similar to traditional Snowball, with detection of hardware, firmware or software changes.
  • Can exist in an office environment, or data center.
  • Power cables are included, NEMA 5-15p, 100-220 volts, 400 watts.

What is AWS Snowmobile

Need something with more capacity than an SBE? AWS has a larger version called Snowmobile that supports up to 100PB and is brought to your site via a 45-foot-long tractor-trailer truck. Both SBE and Snowmobile physically move data from your location to an AWS region availability zone (AZ) aka data center, where it is placed into the Simple Storage Service (S3) or Glacier bucket of your choice. Once in the S3 or Glacier bucket, you can move the data wherever you need it.

Why Snowball Edge and Snowmobile vs. Fast Networks

Some people ask why the need for services such as SBE and Snowmobile, or physically shipping your SSDs, HDDs, tape or other storage media to a cloud provider, in the Internet era of fast networks. The reason can be quite simple: most environments do not have Internet connection speeds of 10 GbE or higher that can be dedicated to data movement at scale outside of regular use.
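Some back-of-the-envelope Python math illustrates the point, assuming an idealized, fully dedicated 1 Gbps link with no protocol overhead (real-world transfers would be slower):

    capacity_tb = 100                             # SBE-class capacity
    link_gbps = 1.0                               # dedicated link speed

    bits_to_move = capacity_tb * 1e12 * 8         # terabytes to bits
    seconds = bits_to_move / (link_gbps * 1e9)    # transfer time at line rate
    print(f"{seconds / 86400:.1f} days")          # roughly 9+ days, before overhead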

Likewise, some public cloud service providers have limitations on the network speed of their front-end general-purpose Internet access.

Note that some providers such as AWS have high-speed, low-latency direct connect services from partner staging locations. However, those too may be limited in speed for large bulk transfers. AWS also has other performance-enhanced services for general Internet access, including S3 Transfer Acceleration. Note that Microsoft Azure has special connectivity options such as ExpressRoute, while Google Cloud Platform (GCP) has Cloud Interconnect.
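As a simple illustration of the S3 Transfer Acceleration option mentioned above, a minimal boto3 sketch might look like the following; the bucket name is a placeholder.

    import boto3
    from botocore.config import Config

    # Enable Transfer Acceleration on an existing bucket
    s3 = boto3.client("s3")
    s3.put_bucket_accelerate_configuration(
        Bucket="my-transfer-bucket",
        AccelerateConfiguration={"Status": "Enabled"},
    )

    # Subsequent clients can opt into the accelerated endpoint
    s3_fast = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
    s3_fast.upload_file("data.bin", "my-transfer-bucket", "data.bin")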

Is AWS SBE and CI, HCI, CiB or Appliance?

The answer to the question of whether SBE is a Converged Infrastructure (CI), Hyper-Converged Infrastructure (HCI), Cloud in a Box (CiB) or Cloud Appliance depends on your view and definition of those deployment models. Some will argue that SBE is a CI or HCI as well as CiB based on what Cisco, Dell Technologies, HPE, Microsoft (Azure Stack and Windows S2D), NetApp, Nutanix, Pivot3 and VMware vSAN among others offer.

On the other hand, some will argue that SBE is not the same as the above offerings and does not meet the definition of their CI, HCI, CiB or cloud appliance. What is important is not whether it is CI, HCI, CiB or appliance, rather what it can do, how it can adapt to your environment and work for you vs. you working for it. In other words, what is important is the enablement a solution provides vs. whether it is CI, HCI, CiB or something else. Meanwhile, watch to see who ignores SBE, who welcomes it to their market space, and who throws mud balls and fud balls at Snowball.

When to use Snowball vs. Snowball Edge

If all you need is a bulk data migration appliance using one of your servers or workstations for smaller amounts of data, traditional Snowball is a good fit. On the other hand, if you need to move more data, leverage SBE-enabled on-prem compute with EC2 and Lambda functionality for short or long durations, or scale out to create a cluster, then SBE is for you. SBE is also a good fit for environments that need short-term as well as longer-term deployment of compute, storage and network (e.g., converged). Examples include factory environments, rugged implementations on ships, energy exploration and processing, traveling venues and sporting events, and distributed environments being consolidated, among others.

AWS Regions, AZ locations
AWS Regions and AZ’s image Via AWS.com

What About AWS Snowball Edge Pricing

Pricing varies based on the AWS region you are using for your transfer and management. Another variable is whether you select data transfer only, or enable an EC2 compute instance on-prem. Yet another pricing variable is how long you keep the Snowball Edge on-prem. You are given ten (10) free days as part of your data transfer job, along with days for shipping and return.

Beyond the ten free days, you pay a daily rate that varies. The longer you keep the SBE on-prem, for example committing to a one or three-year pre-pay, the larger the discount you receive. Also note that there are no data transfer fees for moving data into AWS; however, standard pricing applies once data is stored in AWS or moved. Standard AWS storage charges (e.g., S3 or Glacier, along with API calls) apply once data is stored.

As an example, for data transfer only, the service fee for a data transfer job is USD 300 for US and other non-Asia-Pacific (Singapore) regions. Additional days are $30 each.

Another example is selecting data transfer plus an EC2 compute instance, which varies by region; an example is $500 for a transfer job (US East Northern Virginia or Ohio), with a $50 a day extra fee. However, if you are willing to pay up front for one year, the day fee drops to $42 (varies by region), and to $35 a day for a three-year commitment.
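A quick worked example in Python using the on-demand figures above (US East, data transfer plus EC2); actual charges vary by region and commitment level:

    job_fee = 500          # transfer job with EC2 compute, US East
    free_days = 10         # included on-prem days
    extra_day_rate = 50    # on demand; about $42/day with a 1-year, $35/day with a 3-year commit

    days_on_prem = 30
    extra_days = max(0, days_on_prem - free_days)
    total = job_fee + extra_days * extra_day_rate
    print(f"Keeping the SBE {days_on_prem} days costs about USD {total}")  # USD 1500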

For some environments, it may cost less to buy a server with storage, then set it up and manage it, while for others, the simplicity of a turnkey converged platform may be more cost-effective along with better value. Learn more about AWS Snowball Edge pricing here.

Where to learn more

Learn more about AWS, Snowball Edge, Cloud and data infrastructures related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Has AWS embraced hybrid public cloud and on-prem computing? IMHO, while AWS is making it easier for environments to use, access, as well as move to the public cloud, they are still focused on the public cloud as the destination. In other words, AWS is making it easy to move your data and applications to their services, as well as access them, with the AWS Snowball Edge SBE converged cloud storage appliance.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Google Cloud Platform GCP announced new Los Angeles Region

Google Cloud Platform (GCP) has announced a new Los Angeles region (e.g., us-west2) with three initial Availability Zones (AZ), also known as data centers. Keep in mind that a region is a geographic area that is made up of two or more AZ's. Thus, a region has multiple data centers for availability, resiliency, and durability.

The new GCP us-west2 region is the fifth in the US and seventh in the Americas. GCP regions (and AZ's) in the Americas include Iowa (us-central1), Montreal Quebec Canada (northamerica-northeast1), Northern Virginia (us-east4), Oregon (us-west1), Los Angeles (us-west2), South Carolina (us-east1) and Sao Paulo Brazil (southamerica-east1). View other geographies as well as services, including Europe and the Asia-Pacific, here.

How Does GCP Compare to AWS and Azure?

The following are simple graphical comparisons of what Google Cloud Platform (GCP), Amazon Web Services (AWS) and Microsoft Azure currently have deployed for regions and AZ's across different geographies. Note, each region may have a different set of services available, so check your cloud provider's notes as to what is currently available at various locations.

Google Cloud Compute Platform regions
Google Compute Platform Locations (Regions and AZ’s) Image via Google.com

AWS Regions, AZ locations
AWS Regions and AZ’s image Via AWS.com

Microsoft Azure Cloud Region Locations
Microsoft Azure Regions and AZ’s image Via Azure.com

Where to learn more

Learn more about data infrastructures and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Google continues to evolve its public cloud platform (GCP), both regarding geographical physical locations (e.g., regions and AZ's) and regarding features, functions, and extensibility. By adding a new Los Angeles (e.g., us-west2) region and three AZ's within it, Google is providing a local point of presence for data infrastructure intense (server compute, memory, I/O, storage) applications such as those in media, entertainment, high performance compute, and aerospace among others in the southern California region. Overall, it is good to see not only new features being added to GCP, but also new physical points of presence.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

2018 Hot Popular New Trending Data Infrastructure Vendors to Watch

Here is the 2018 Hot Popular New Trending Data Infrastructure Vendors To Watch list, which includes startups as well as established vendors doing new things. This piece follows last year's hot favorite trending data infrastructure vendors to watch list (here), as well as the who will be at the top of the storage world in a decade piece here.

2018 Hot Popular New Trending Data Infrastructure Vendors to Watch
Data Infrastructures Support Information Systems Applications and Their Data

Data Infrastructures are what exist inside physical data centers and cloud availability zones (AZ) that are defined to provide traditional, as well as cloud services. Cloud and legacy data infrastructures combine hardware (server, storage, I/O network) and software along with management tools, policies, tradecraft techniques (skills), and best practices to support applications and their data. There are different types of data infrastructures to meet the needs of various environments that range in size, scope, focus, application workloads, along with performance and capacity.

Another important aspect of data infrastructures is that they exist to protect, preserve, secure and serve applications that transform data into information. This means that availability and Data Protection including archive, backup, business continuance (BC), business resiliency (BR), disaster recovery (DR), privacy and security among other related topics, technology, techniques, and trends are essential data infrastructure topics.

2018 Hot Popular New Trending Data Infrastructure Vendors to Watch
Different timelines of adoption and deployment for various audiences

2018 Hot Popular New Trending Data Infrastructure Vendors to Watch

Some of those on this year's list are focused on different technology areas, while others vary by size or type of vendor, supplier, or service provider. Others on the list are new, startup, evolving, or established, which reads differently depending on whether you are an industry insider or an IT customer environment. Meanwhile, some are new and some are established players doing new things: a mix of names you may not have heard of, for those who want or need the most current list of startups to rattle off for industry adoption (and deployment), as well as what some established players are doing that might lead to customer deployment (and adoption).

AMD – The AMD EPYC family of processors is opening up new opportunities for AMD to challenge Intel among others for a more significant share of the general-purpose compute market in support of data center and data infrastructure markets. An advantage that AMD has, and is playing to, in the industry speeds, feeds, slots and watts price-performance game is the ability to support more memory and PCIe lanes per socket than others, including Intel. Keep in mind that PCIe lanes will become even more critical as NVMe deployment increases, along with the use of GPUs and faster Ethernet among other devices. Name brand vendors including Dell and HPE among others have announced or are shipping AMD EPYC based processors.

Aperion – Cloud and managed service provider with diverse capabilities.

Amazon Web Services (AWS) – Continues to expand its footprint regarding regions, availability zones (AZ), also known as data centers in regions, as well as services along with the breadth of those capabilities. AWS has recently announced a new Snowball Edge (SBE), which in the past has been a data migration appliance, now enhanced with on-prem Elastic Compute Cloud (EC2) capabilities. What this means is that AWS can put on-prem compute capabilities as part of a storage appliance for short-term data movement, migration, conversion, and importing of virtual machines among other items.

On the other hand, AWS can also be seen as using SBE as a first entry to placing equipment on-prem for hybrid clouds, or, converged infrastructure (CI), hyper-converged infrastructure (HCI), cloud in a box similar to Microsoft Azure Stack, as well as CI/HCI solutions from others.

My prediction near term, however, is that CI/HCI vendors will either ignore SBE, downplay it, create some new marketing on why it is not CI/HCI or fud about vendor lock-in. In other words, make some popcorn and sit back, watch the show.

Backblaze – Low-cost, high-capacity cloud storage for backup and archiving provider known for their quarterly disk drive reliability ratings (or failure) reports. They have been around for a while, have a good reputation among those who use their services for being a low-cost alternative to the larger providers.

Barefoot Networks – Some of you may already be aware of or following Barefoot Networks, while others may not have heard of them outside of the networking space. They have some impressive capabilities and are new, and you probably have not heard of them, thus an excellent addition to this list.

Cloudian – Continuing to evolve and no longer just another object storage solution, Cloudian has been expanding via organic technology development as well as acquisitions, giving them a broad portfolio of software-defined storage and tiering from on-prem to the cloud, with block, file and object access.

Cloudflare – Not exactly a startup, some of you may know or are using Cloudflare, while to others, their role as a web cache, DNS, and other service is transparent. I have been using Cloudflare on my various sites for over a year, and like the security, DNS, cache and analytics tools they provide as a customer.

Cobalt Iron – For some, they might be new. Software-defined data protection and management is the name of the game over at Cobalt Iron, which has been around a few years under the radar compared to more popular players. If you have or are involved with IBM Tivoli aka TSM based backup and data protection among others, check out the exciting capabilities that Cobalt Iron can bring to the table.

CTERA – Having been around for a while, to some they might not be a startup; on the other hand, they may be new to others, while offering new data and file management options.

DataCore – You might know of DataCore for their software-defined storage and past storage hypervisor activity. However, they have a new piece of software, MaxParallel, that boosts server storage I/O performance. The software installs on your Windows Server instance (bare metal, VM, or cloud instance) and shows you performance with and without acceleration, which you can dynamically turn on and off.

DataDirect Networks (DDN) – Recently acquired Lustre assets from Intel, and is now picking up the pieces of storage startup Tintri after it ceased operations. What this means is that while beefing up their traditional High-Performance Compute (HPC) and Super Compute (SC) focus, DDN is also expanding into broader markets.

Dell Technologies – At its recent Dell Technology World event in Las Vegas during late April and early May 2018, several announcements were made, including some tied to the emerging Gen-Z along with composability. More recently, Dell Technologies along with VMware announced business structure and finance changes. Changes include VMware declaring a dividend; Dell Technologies, being its largest shareholder, will use the proceeds to fund restructuring and debt service. Read more about VMware and Dell Technology business and financial changes here.

Densify – With a name like Densify no surprise they propose to drive densification and automation with AI-powered deep learning to optimize application resource use across on-prem software-defined virtual as well as cloud instances and containers.

FlureDB – If you are into databases (SQL or NoSQL), as well as Blockchain or distributed ledgers, check out FlureDB.

Innovium.com – When it comes to data infrastructure and data center networking, Innovium is probably not on your radar, however, keep an eye on these folks and their TERALYNX switching silicon to see where it ends up given their performance claims.

Komprise – File and data management solutions including tiering, along with partners such as IBM.

Kubernetes – A few years ago OpenStack, then Docker containers, was the favorite and trending discussion topic, then Mesos, and along comes Kubernetes. It's safe to say, at least for now, Kubernetes is settling in as a preferred open source industry and customer de facto choice (I want to say standard, however, will hold off on that for now) for container and related orchestration management. Besides do-it-yourself (DiY) leveraging open source, there are also managed AWS Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE), and VMware Pivotal Container Service (PKS) among others. Besides Azure, Microsoft also includes Kubernetes support (along with Docker and Windows containers) as part of Windows Server.
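For a taste of what working with any of these clusters looks like, here is a minimal sketch using the official Kubernetes Python client; it assumes a kubeconfig is already in place and works the same against DiY or managed (EKS, AKS, GKE, PKS) endpoints.

    from kubernetes import client, config

    config.load_kube_config()   # reads ~/.kube/config for cluster credentials
    v1 = client.CoreV1Api()

    # List every pod in the cluster along with its namespace and status
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)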

ManageEngine (part of Zoho) – Has data infrastructure monitoring technology called OpManager for keeping an eye on networking.

Marvell – Marvell may not be a familiar name (don't confuse it with the comics), however, it has been a critical component supplier to partners whose server or storage technology you may be familiar with or have yourself. The server, storage and I/O networking chip maker has closed its acquisition of Cavium (who previously bought QLogic among others). The combined company is well positioned as a key data infrastructure component supplier to various partners spanning servers, storage, and I/O networking including Fibre Channel (FC), Ethernet, InfiniBand, and NVMe (and NVMeoF) among others.

Mellanox – Known for their InfiniBand adapters, switches, and associated software, along with a growing presence in RDMA over Converged Ethernet (RoCE), they are also well positioned for NVMe over Fabrics among other growth opportunities following recent boardroom updates, along with technology roadmaps.

Microsoft – The Azure public cloud continues to evolve similarly to AWS with more region locations, availability zone (AZ) data centers, as well as features and extensions. Microsoft also introduced about a year ago its hybrid on-prem CI/HCI cloud-in-a-box platform appliance Azure Stack (read about my test drive here). However, there is more to Microsoft than just their current cloud-first focus, which means Windows (desktop), as well as Server, are also evolving. Currently in public preview, the Windows Server 2019 insiders build is available to try out many new capabilities, some of which were covered in the recent free Microsoft Virtual Summit held in June. Key themes of Windows Server 2019 include security, performance, hybrid cloud, containers, software-defined storage and much more.

Microsemi – Has been around for a while and is the combination of some vendors you may not have heard of, or heard about in some time, including PMC-Sierra (which acquired Adaptec) and Vitesse among others. The reason I have Microsemi on this list is a combination of their acquisitions, which might be an indicator of whom they pick up next. Another reason is that their components span data infrastructure topics from servers, storage, I/O and networking, PCIe and many more.

NVIDIA – GPU high performance compute and related compute offload technologies have been accessible for over a decade. More recently, with new graphics and computational demands, GPUs such as those from NVIDIA are in demand. Demand includes traditional graphics acceleration for physical and virtual, augmented and virtual reality, as well as cloud, along with compute-intensive analytics, AI, ML, DL and other cognitive workloads.

NGDSystems (NGD) – Similar to what NVIDIA and other GPU vendors do for enabling compute offload for specific applications and workloads, NGD is working on a variation. That variation is to move offload compute capabilities for server I/O storage-intensive workloads closer to, in fact into, storage system components such as SSDs and emerging SCMs and PMEMs. Unlike GPU-based applications or workloads that tend to be more memory and compute intensive, NGD is positioned for applications that are server I/O and storage intensive.

The premise of NGD is that they move the compute and application closer to where the data is, eliminating extra I/O as well as reducing the amount of main server memory and compute cycles needed. If you are familiar with other server storage I/O offload engines and systems such as the Oracle Exadata database appliance, NGD is working at a tighter integration granularity. How it works is that your application gets ported to run on the NGD storage platform, which is SSD based and has a general-purpose processor. Your application is initiated from a host server and then runs on the NGD, meaning I/Os are kept local to the storage system. Keep in mind that the best I/O is the one that you do not have to do; the second best is the one with the least resource or user impact.

Opvisor – Performance activity and capacity monitoring tools including for VMware environments.

Pavilion – Startup with an interesting NVMe based hardware appliance.

Quest – Having gained their independence as a free-standing company since divestiture from Dell Technologies (Dell had previously acquired Quest before EMC acquisition), Quest continues to make their data infrastructure related management tools available. Besides now being a standalone company again, keep an eye on Quest to see how they evolve their existing data protection and data infrastructure resource management tools portfolio via growth, acquisition, or, perhaps Quest will be on somebody else’s future growth list.

Retrospect – Far from being a startup, after gaining their independence from when EMC bought them several years ago, they have since continued to enhance their data protection technology. Disclosure, I have been a Retrospect customer since 2001 using it for on-site, as well as cloud data protection backups to the cloud.

Rubrik – Becoming more of a data infrastructure household name given their expanding technology portfolio and marketing efforts. More commonly known in smaller customer environments, as well as broadly within industry insider circles, Rubrik has the potential, with continued technology evolution, to move further upmarket similar to how Commvault did back in the late 90s, just saying.

SkyScale – Cloud service provider that offers dedicated bare metal, as well as private and hybrid cloud instances, along with GPUs to support AI, ML, DL and other high-performance compute workloads.

Snowflake – The name does not describe well what they do or who they are. However, they have interesting cloud data warehouse (old school) and large-scale data lake (new school) technologies.

Strongbox – Not to be confused with technology such as that from Iosafe (e.g., waterproof, fireproof), Strongbox is a data protection storage solution for storing archives, backups, BC/BR/DR data, as well as cloud tiering. For those who are into buzzword bingo, think cloud tiering, object, and cold storage among others. The technology evolved out of Crossroads and, with David Cerf at the helm, has branched out into a private company worth keeping an eye on.

Storbyte – With longtime industry insider sales and marketing pro Diamond Lauffin (formerly of Nexsan) involved as Chief Evangelist, this is worth keeping an eye on and could be entertaining as well as exciting. In some ways it could be seen as a bit of a Nexsan meets NVMe meets NAND flash meets cost-effective value storage dejavu play.

Talon – Enterprise storage and management solutions for file sharing across organizations, ROBO and cloud environments.

Ubiquiti – Also known as UBNT, Ubiquiti is a data infrastructure networking vendor whose technologies span from WiFi access points (AP), high-performance antennas, routing, switching and related hardware, to software solutions. UBNT is not as well-known in larger environments as a Cisco or others. However, they are making a name for themselves moving from the edge to the core. That is, working from the edge with APs and routers, firewalls, gateways for the SMB, ROBO, SOHO as well as consumer markets (I have several of their APs, switches, routers and high-performance antennas along with management software), these technologies are also finding their way into larger environments.

My first use of UBNT was several years ago when I needed to get an IP network connection to a remote building separated by several hundred yards of forest. The solution I found was to get a pair of UBNT NANO APs and put them in secure bridge mode; now I have high-performance WiFi service through a forest of trees. Since then I have replaced an older Cisco router, several Cisco and other APs, as well as done a phased migration of switches.

UpdraftPlus – If you have a WordPress web or blog site, you should also have the UpdraftPlus plugin (go premium btw) for data protection. I have been using UpdraftPlus for several years on my various sites to back up and protect the MySQL databases and all other content. For those of you who are familiar with Spanning (e.g., acquired by EMC then divested by Dell) and what they do for cloud applications, UpdraftPlus does similar for lower-end, smaller cloud-based applications.

Vexata – Startup with a scale-out NVMe based storage solution.

VMware – Expanding their cloud foundation from on-prem to in and on clouds including AWS among others. Data infrastructure focus continues to expand from core to edge, across server, storage, I/O and networking. With Dell Technologies and VMware recently declaring a dividend, it should be interesting to see what lies ahead for both entities.

What About Those Not Mentioned?

By the way, if you were wondering about or why others are not in the above list, simple: check out last year's list, which includes Apcera, Blue Medora, Broadcom, Chelsio, Commvault, Compuverde, Datadog, Datrium, Docker, E8 Storage, Elastifile, Enmotus, Everspin, Excelero, Hedvig, Huawei, Intel, Kubernetes, Liqid, Maxta, Micron, Minio, NetApp, Neuvector, Noobaa, NVIDIA, Pivot3, Pluribus Networks, Portworx, Rozo Systems, ScaleMP, Storpool, Stratoscale, SUSE Technology, Tidalscale, Turbonomic, Ubuntu, Veeam, Virtuozzo and WekaIO. Note that many of the above have expanded their capabilities in the past year and remain, or have become, even more interesting to watch, while some might be on the future where are they now list sometime down the road. View additional vendors and service providers via our industry links and resources page here.

What About New, Emerging, Trending and Trendy Technologies

Bitcoin and Blockchain storage startups, some of which claim or would like to replace cloud storage, taking on giants such as AWS S3 in the not so distant future, have been popping up lately. Some of these have good and exciting stories, if they can deliver on the hype along with the premise. A couple of names to drop include, among others, Filecoin, Maidsafe, Sia, and Storj, along with services from AWS, Azure, Google and a long list of others.

Besides Blockchain distributed ledgers, other technologies and trends to keep an eye on include compute processes from ARM to SoC, GPU, FPGA, ASIC for offload and specialized processing. GPU, ASIC, and FPGA are appearing in new deployments across cloud providers as they look to offload processing from their general servers to derive total effective productivity out of them. In other words, innovating by offloading to boost their effective return on investment (old ROI), as well as increase their return on innovation (the new ROI).

Other data infrastructure server I/O trends to watch, which also tie into storage and networking, include Gen-Z, which some may claim as the successor to PCIe, Ethernet, and InfiniBand among others (hint, get ready for a new round of "something is dead" hype). Near-term, the objective of Gen-Z is to coexist with and complement PCIe, Ethernet, and CPU-to-memory interconnects, while enabling more granular allocation of data infrastructure resources (e.g., composability). Besides watching who is part of the Gen-Z movement, keep an eye on who is not part of it yet, specifically Intel.

NVMe and its many variations, from server internal to networked NVMe over Fabrics (NVMeoF) along with its derivatives, continue to gain both industry adoption as well as customer deployment. There are some early NVMeoF based server storage deployments (along with marketing dollars). However, server-side NVMe is where customer adoption, and the dollars, are moving to vendors. In other words, it's still early in the bigger, broader NVMe and NVMeoF game.

Where to learn more

Learn more about data infrastructures and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Let's see how those mentioned last year as well as this year, along with some new and emerging vendors and service providers who did not get mentioned, end up next year, as well as the years after that.

2018 Hot Popular New Trending Data Infrastructure Vendors to Watch
Different timelines of adoption and deployment for various audiences

Keep in mind that there is a difference between industry adoption and customer deployment, granted they are related. Likewise, let's see who will be at the top in three, five and ten years, which means some of the current top or favorite vendors may or may not be on the list, same with some of the established vendors. Meanwhile, check out the 2018 Hot Popular New Trending Data Infrastructure Vendors to Watch.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Dell Technologies Announces Class V VMware Tracking Stock exchange for stock or cash

Dell Technologies Announces Class V VMware Tracking Stock exchange for stock or cash
Image via Dell Technologies

Summary of Dell transaction announcement includes:

  • VMware declares an $11 Billion USD cash dividend pro rata to all VMware stockholders.
  • Given ownership percentage of VMware, Dell Technologies will receive approximately $9 Billion USD cash dividend.
  • Dell plans to list its Class C common stock shares on the New York Stock Exchange (NYSE).
  • Dell plans to use the VMware dividend proceeds to fund cash consideration to be paid to Class V (tracking stock) shareholders.
  • For each Class V share (e.g. VMware tracking stock) shareholders can choose to receive:

    1.3665 shares of Dell Technologies Class C common stock, or
    $109 in cash per DVMT (Class V) share, a 29% premium per share (see the worked example below)
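A quick worked example in Python of the exchange terms above; the implied pre-announcement DVMT price is simply derived from the stated 29% premium, and the break-even point shows when cash beats stock.

    cash_per_share = 109.00
    premium = 0.29
    exchange_ratio = 1.3665

    # Implied prior DVMT price from the stated premium
    implied_prior = cash_per_share / (1 + premium)
    print(f"Implied prior DVMT price: ${implied_prior:.2f}")   # about $84.50

    # Cash beats stock if Class C trades below this break-even value
    breakeven = cash_per_share / exchange_ratio
    print(f"Break-even Class C price: ${breakeven:.2f}")       # about $79.77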

Dell Announces Class V VMware Tracking Stock exchange for stock or cash
Image via Dell Technologies

Additional interest points of this transaction include:

  • Transaction expected to close Q4 CY2018, subject to Class V shareholder approval.
  • VMware maintains its independence as a separate publicly traded company.
  • Dell Technologies maintains its 81% ownership of VMware common stock.
  • Dell Technologies Class V (DVMT) shareholders will own 20.8% to 31.0% of Dell Class C (depending on cash election amounts).
  • Streamline Dell capital and ownership structure.
  • Establishes a public security (stock) in global end to end data infrastructure provider (e.g. Dell Technologies Stock on NYSE).
  • Enables financial flexibility for future strategic initiatives

Dell Announces Class V VMware Tracking Stock exchange for stock or cash
Image via Dell Technologies

Michael Dell and Silver Lake Continued Ownership

As part of this transaction, both Michael Dell and Silver Lake Partners announced their commitment to Dell Technologies. Michael Dell will continue to serve as Chairman and CEO, as well as a committed stockholder, beneficially owning between about 47% and 54% of Dell Technologies on a fully diluted basis. Silver Lake Partners, an equity investor in Dell, will continue its long-term partnership with Michael Dell, beneficially owning between about 16% and 18% of Dell Technologies on a fully diluted basis.

Where to learn more

Learn more about Dell Technologies, VMware, Data Infrastructures and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

This announcement enables Dell to streamline its financial structure while providing VMware shareholders with a dividend value. In addition, this Dell Technologies announcement puts to rest industry discussions of what Michael Dell, along with Dell Technologies and VMware, will do in the future. Speaking of the future, this transaction could also pave the way for future investments or acquisitions by Dell and/or VMware. Now the question is: if you are a DVMT tracking stock shareholder, do you take the $109 USD cash, or new Class C Dell Technologies stock? Now let's see how the Dell Technologies Class V VMware tracking stock exchange plays out during the rest of summer and into the fall.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.