Don't Stop Learning: Expand Your Skills and Experiences Every Day #blogtobertech


Don't stop learning: expand your skills and experiences every day, including moving beyond or outside your current tradecraft focus area. If you are an expert in a given field or focus area, learn something new about an area outside your expertise or comfort zone. If you are of the mindset that there is nothing new to learn, that it is all old and boring, perhaps it's time to step back, look around, and explore other areas.

Doing something new can be in an adjacent technology area, or something completely unrelated. For example, in a recent VMUG keynote presentation and blog post I discussed how Next Generation Hybrid Software Defined Data Infrastructures Are In Your Future.

Don't Stop Learning: Expand Your Skills and Experiences Every Day
Next Generation Data Infrastructures are in your future (if not already)

What tradecraft skills and experience do you need to have, expand or refresh to support next-generation hybrid software-defined data infrastructures? If you are a server person, then you need to broaden your tradecraft skills and experience to storage, I/O networking, cloud, virtual and containers across hardware as well as software. Likewise, if you are a storage or I/O and networking person, you need to expand into other areas. If you are a VMware focused professional, then learn about Microsoft Hyper-V or vice versa. If you are an AWS focused person, learn about Google or Azure and vice versa; the same applies across different technology domains.

On the other hand, if you know all there is to know, chances are there are other areas you need to learn more about, or you need to determine what you don't know in order to address that. If by chance you do happen to know everything there is to know, how much time are you spending interacting with others to teach them, possibly learning something new yourself?

Invest Time in Your Tradecraft Skill Set

If you are not spending at least an hour a day learning something new, you are missing out on the opportunity. Part of that hour should also be outside your comfort zone or core focus area. For example, if you are a software pro, learn more about hardware, clouds, or something different. If you are a VMware focused person, learn Hyper-V, AWS, Azure, or something else. If you are storage, learn server, network, cloud and beyond. If you are focused on data infrastructures, then learn about the upper-level business applications along with the users who use them, and vice versa.

How I Continue to Learn, Expanding My Tradecraft Skills and Experience Every Day

As part of expanding my tradecraft, I spend part of my day learning and refreshing on core data infrastructure focus areas (servers, storage, I/O networking, hardware, software, cloud, containers, converged, software-defined, data protection) and related topics. Learning involves vendor briefings, researching, talking with others, reading, and hands-on technology trials to gain insight, experience and perspective.

I also have expanded my tradecraft experiences by becoming an FAA Part 107 licensed commercial pilot of small unmanned aerial systems (sUAS), also known as small unmanned aerial vehicles (sUAV) or, more commonly, simply drones. Besides being FAA licensed, I also expanded by becoming Minnesota sUAV/drone and aerial photography licensed. Drone flying is adjacent to data infrastructures in that one of my drones records 4K video at 60 frames per second (fps), meaning about 1 GByte of data every two minutes of video, plus telemetry. Note that the drones have internet capability and can be considered IoT for their video as well as telemetry.
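To put that data rate in perspective, here is a quick back-of-the-envelope sketch in Python; the 1 GByte per two minutes figure comes from the paragraph above, while the per-battery flight times are illustrative assumptions:

    # Rough 4K 60fps drone video data-rate math (flight times are illustrative).
    GB_PER_TWO_MINUTES = 1.0                 # ~1 GByte every two minutes of 4K video
    gb_per_minute = GB_PER_TWO_MINUTES / 2

    for flight_minutes in (20, 25, 30):      # assumed flight time per battery
        video_gb = flight_minutes * gb_per_minute
        print(f"{flight_minutes} minute flight ~= {video_gb:.1f} GB of video (plus telemetry)")

A few flights in an afternoon can easily generate tens of GBytes that then need to be moved, protected and stored, which is where the data infrastructure tie-in comes from.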


Above is a 4K video from one of my flights via my companion site www.picturesoverstillwater.com

Where to learn more

Learn more about learning, data infrastructures, tradecraft, drones as well as related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

What this means is that in addition to expanding as well as refreshing my data infrastructure related tradecraft skills, I am also expanding my experiences into other adjacent areas. In other words, instead of just talking about big data, fast data, video, IoT, drones and related topics, I am involved with them hands-on.

Keep in mind, at some point the student becomes the teacher, and a teacher is a student. Leverage your eyes and ears to see things in different ways, and listen to and learn about items outside your primary focus area as you expand or refresh your tradecraft skill set and experiences.

If you can’t learn something new every day, either you are not trying, or you are in trouble. Even experts and unicorns can learn something new every day, even if that is as simple as learning to listen to others.

With October being #blogtobertech, there are plenty of opportunities to keep learning and expand your skills and experiences every day, which also includes the student becoming the teacher and the teacher being the student.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Next Generation Hybrid Software Defined Data Infrastructures Are In Your Future #blogtobertech


A few weeks ago I was invited to present a keynote at the 1st annual Minnesota VMware User Group (VMUG) Super VMUG mega event in Minneapolis titled Next Generation Hybrid Software Defined Data Infrastructures Are In Your Future (download PDF presentation here).

Key themes of the presentation focused around data infrastructures (e.g. what's inside physical data centers including server, storage, I/O networking, hardware, software, policies, procedures) along with industry trends including hybrid software defined clouds (and containers). Another aspect of the presentation focused on building, refreshing and expanding our fundamental data infrastructure tradecraft skills. Also keep in mind that everything is not the same across different environments, granted there are similarities that can be leveraged.


Data infrastructures are defined to support business applications and information services delivery

Data Infrastructures

The fundamental role of data infrastructures is to provide a platform environment for applications and data that is resilient, flexible, scalable, agile, efficient as well as cost-effective. Put another way, data infrastructures exist to protect, preserve, process, move, secure and serve data as well as their applications for information services delivery. Technologies that make up data infrastructures include hardware, software, cloud or managed services, servers, storage, I/O and networking, along with people, processes, policies and various tools spanning legacy, software-defined virtual, containers and cloud.

Depending on your role or focus, you may have a different view than somebody else of what is infrastructure, or what an infrastructure is. Generally speaking, people tend to refer to infrastructure as those things that support what they are doing at work, at home, or in other aspects of their lives. For example, the roads and bridges that carry you over rivers or valleys when traveling in a vehicle are referred to as infrastructure.

Similarly, the system of pipes, valves, meters, lifts, and pumps that bring fresh water to you, and the sewer system that takes away waste water, are called infrastructure. The telecommunications network, both wired and wireless such as cell phone networks, along with electrical generation and transmission networks, is also considered infrastructure. Even the airplanes, trains, boats, and buses that transport us locally or globally are considered part of the transportation infrastructure. Anything that is below what you do, or that supports what you do, is considered infrastructure.

The following figure shows various layers or altitudes of encapsulation and abstraction of data infrastructures along with their underlying resources that are defined to support a business enablement outcome, as well as support information services delivery.


Data Infrastructure Stack Layers and Resources Defined To Support Business Information Services

The following figure shows the evolution of data infrastructures from on-prem bare metal to software-defined virtual, cloud, containers, converged and hyper-converged packaging, as well as emerging composable. Also shown below are hybrid as well as multi-clouds, including bare metal dedicated services in addition to virtual machine instances and container-based services.


Data Infrastructure and Resource Packaging Deployment Evolution

Hybrid Software Defined Industry Trends

Some of the trends discussed in the presentation include:

Clouds – Public, private, hybrid and multi-clouds along with how they are being used, along with technology evolution including virtual machine (VM) instances, bare metal dedicated private servers (DPS) as well as metal as a service. Other cloud trends include data migration appliances such as AWS Snowball Edge and Microsoft Azure Databox among others, VMware on AWS, as well as fog and edge computing.

Other trend topics included converged, hyper-converged, serverless, containers, persistent memory (PMEM) also known as storage class memory (SCM) along with other server storage I/O topics. Additional trend topics included data protection, Azure Stack, security, NVMe as well as NVMe over Fabrics (NVMeoF) along with composable and Gen-Z.

Tradecraft Skills Experience

Expanding your data infrastructure tradecraft means evolving from your primary focus area, gaining insight into other technologies, tools and techniques in adjacent areas outside your comfort zone. For industry veterans with several years to many decades of experience, this means refreshing what you know, think you know or need to know with what's new or evolving. On the other hand, for those who are new, expanding your tradecraft means moving beyond learning to memorize to pass a certification test, to gaining insight on how, when, where and why to apply different tools, technologies and trends to the tasks at hand.

For example, developing tradecraft means moving from knowing the different hardware, software and service resources as well as tools, to knowing what to use when, where, why, and how. Another dimension of expanding data infrastructure tradecraft skills is gaining the experience and insight to troubleshoot problems, gain insight and awareness with dashboard or monitoring tools, as well as how to design and manage to cut or reduce the chance of things going wrong.

From Tools and Technologies to Techniques and Tricks of the Trade

Expanding your awareness of new technologies along with how they work is important, so too is understanding application and organization needs. Developing your tradecraft means balancing the focus on new and old technologies, tools, and techniques with business or organizational application functionality.

This is where using various tools that themselves are applications to gain insight into how your data infrastructure is configured and being used, along with the applications they support, is important.
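As a trivial illustration of using such a tool (itself an application) to gain insight into resources, here is a small Python sketch using the psutil package; this is one example tool choice among many, not a specific product recommendation:

    import psutil  # pip install psutil

    # Point-in-time view of server compute, memory and storage resource usage.
    print("CPU busy %:", psutil.cpu_percent(interval=1))

    mem = psutil.virtual_memory()
    print("Memory used %:", mem.percent)

    disk = psutil.disk_usage("/")
    print("Root filesystem used %:", disk.percent)

The point is less about the specific tool and more about building the habit of looking at what your data infrastructure and the applications it supports are actually doing.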

Data Infrastructure Tools Tradecraft
Data Infrastructure Toolbox (Hardware, Software, Scripts)

Next Generation Hybrid Software Defined Data Infrastructures What Next


Balance head in the clouds (thinking, strategy, vision) with feet on the ground (what you can do today)

The following are some additional tips, comments, recommendations to keep in mind for enabling your next generation hybrid software defined data infrastructure.

Where to learn more

Learn more about data infrastructures and tradecraft related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Everything is not the same across different organizations, IT environments, application workloads and the data infrastructures that support them. Data infrastructures span from legacy on-prem to software-defined cloud (public, private, hybrid, multi-cloud), container, serverless, virtual, hybrid, converged and hyper-converged, as well as central, core and distributed edge or remote office branch office (ROBO). Even though everything is not the same, there are similarities across different environments, technologies and workloads that can be leveraged. Fundamental tradecraft skills and experiences are what enable you to know what to use when, where, why and how, including using new as well as old things in new ways, while not making old mistakes in new ways.

Some other tips: avoid flying blind, particularly in software-defined and cloud environments; have situational awareness and end-to-end (E2E) insight, leveraging metrics that matter, that are relevant, timely, accurate and hold context for the data infrastructures as well as the applications they support. Part of expanding your tradecraft skills is refreshing what you know while also expanding into new adjacent areas, getting out of your comfort zone. Also understand the context of different terms, technologies and tools. For example, SAS can be big data analytics statistical analysis software, a serial attached SCSI storage device, as well as a shared access signature for Azure clouds, among others.

Also keep in mind that while software defined things are popular and trendy with the industry, keep the focus on what is being defined to enable an outcome or business enablement. In other words, the emphasis should not be on the software aspect per se, rather on how something (hardware, software, service) is defined to enable something. Also keep in mind with software defined marketing and trends such as serverless, servers and software still need hardware (somewhere), and hardware still needs software, from microcode to firmware to many other places in the data infrastructure layers or stack. Meanwhile, keep in mind that it is #blogtobertech and Next Generation Hybrid Software Defined Data Infrastructures Are In Your Future.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Server StorageIO 2018 VMworld Data Infrastructure Buzzword Bingo Puzzle


Following up from last year's 2017 crossword puzzle for travel fun, here is the Server StorageIO 2018 VMworld Data Infrastructure Buzzword Bingo Puzzle (click on the below image for a PDF version that includes answers). The Server StorageIO 2018 VMworld Data Infrastructure Buzzword Bingo Puzzle can be something to do while traveling, or while taking a break between (or during) sessions as well as keynotes. I wonder which buzzword term will get used the most, as well as which new ones will need to be added to an updated version of this?

Server StorageIO 2018 VMworld Data Infrastructure Buzzword Bingo Puzzle

Where to learn more

Learn more about VMworld and data infrastructures related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Next week is VMworld 2018 in Las Vegas, which for some means traveling and a long week. Feel free to suggest additions as there could be a revision or update or two between now and VMworld. Have fun, safe travels, and hope to see you next week; in the meantime enjoy the Server StorageIO 2018 VMworld Data Infrastructure Buzzword Bingo Puzzle.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Dell EMC PowerEdge MX 7000 Kinetic Based Data Infrastructure Architecture


Dell EMC today announced, with the tagline IT Unbound, their new PowerEdge MX 7000 Kinetic based data infrastructure architecture, slated for general availability September 21, 2018. Previewed earlier this year at Dell Technology World in Las Vegas, PowerEdge MX 7000 is a new family of modular, scalable servers for various data infrastructure roles.

What is different with PowerEdge MX 7000 compared to other new 14th generation (Gen 14) Dell servers is the finer granularity of resource allocation based around the new Kinetic composable infrastructure. Also previewed at Dell Technology World earlier this year in Las Vegas, Kinetic (not to be confused with the Seagate Kinetic object storage key value drive initiative) is a new composable architecture.

Dell EMC PowerEdge MX 7000 Kinetic What Was Announced

  • First instantiation of Kinetic composable based data infrastructure resources
  • OpenManage Enterprise Modular Edition
  • PowerEdge MX 7000 modular data infrastructure server

Dell EMC PowerEdge MX 7000 and Kinetic Architecture
Dell EMC PowerEdge MX 7000 and Kinetic Architecture Image via Dell.com

Dell EMC Kinetic Composability What Is It

By being a composable data infrastructure resource and server, Dell EMC Kinetic based solutions can be decomposed with finer granularity than previous servers. What this means is that in the past, memory, I/O network, physical storage devices, compute sockets and cores were assigned to a single image instance. That single image instance could be an operating system (OS) such as Linux or Windows based, a hypervisor such as KVM, Microsoft Hyper-V, Nitro (AWS), Oracle, VMware vSphere ESXi, or Xen among others, as well as proprietary decomposition and aggregation software (and hardware) technology (ScaleMP among others).

With a composable based solution, instead of the entire server, or motherboard(s) and its resources, being allocated to a single OS as a bare metal (BM) or Metal as a Service (MaaS) instance, or to a hypervisor, different resources can be allocated to various instances. On the surface it would be easy to say that sounds a lot like what hypervisors such as those from Microsoft, VMware, and others are doing, particularly with clusters.

Dell EMC Kinetic Data Infrastructure Architecture
Dell EMC Kinetic Data Infrastructure Architecture Image via Dell.com

However, the difference is that with hypervisors, all of a server’s physical resources (compute, memory, I/O, storage devices, GPU, FPGA/ASIC) are allocated to the OS, hypervisor, or composition software, that then creates vCPU, vRAM, and related resources. Emphasis is on enabling more granular resource allocation as well as scaling out. The business or organizational outcome is what is essential which means, better allocation and effective use of resources to boost productivity vs. merely driving up utilization and efficiency.

The Dell EMC PowerEdge MX 7000 eliminates the traditional hardware-based mid-plane with an internal fabric connector per node that can also be exposed outside of the physical MX enclosure. By using an industry standard connector on the edge of server motherboard resource nodes, different server I/O connectivity can be leveraged as it becomes available or improves. For example, IMHO it is not too complicated to envision a time in the not so distant future when Kinetic enabled resources (e.g., server nodes) evolve to support the emerging Gen-Z server I/O connectivity protocol.

What is Gen-Z

Does PowerEdge MX 7000 and Kinetic use Gen-Z today? Not yet, however, Dell has been showing demos and technology proof of concepts at various events.

Why bring up Gen-Z now? Simple, it's something that will be part of many data infrastructure, server I/O, storage, networking, hardware and software-defined discussions in the not so distant future.

As a refresher or primer, Gen-Z is a new server I/O fabric interface that supports access of and by CPU sockets along with their cores to memory including DRAM as well as emerging SCM and PMEM. In addition to server memory access, Gen-Z also enables local as well as remote access to memory, storage, GPU, FPGA and ASIC among other resources. For backward compatibility as well as investment protection, Gen-Z is intended to work with existing PCIe, Ethernet, Fibre Channel, SAS, SATA, NVMe and InfiniBand among other server I/O interconnects and protocols.

Does this mean Gen-Z is a challenger for Ethernet and other IP-based general LAN networking? IMHO no, at least not in the foreseeable future. Granted, like PCIe, Fibre Channel, InfiniBand, Ethernet and some others that promised to be the end-all network for everything (some of which have joined the where-are-they-now list of technologies), near-term Gen-Z is focused inside a modular enclosure or perhaps within a rack. Read more about Gen-Z here, as well as the Dell EMC blog The Gen-Z Journey road to composability.

Dell OpenManage Enterprise
Dell OpenManage Management Interface Image via Dell.com

OpenManage Enterprise Modular Edition

Management for PowerEdge MX 7000 utilizes OpenManage Enterprise Modular Edition, an HTML5-based tool with a REST API. Management capabilities include workflows for simplicity of operation and lifecycle management. OpenManage Enterprise Modular Edition, besides having an HTML5 interface and REST API, is also Redfish inspired for further interoperability. Note that PowerEdge MX 7000 is also integrated with the Dell iDRAC physical machine level management interface, providing unified management from single to multiple server groups spanning towers to racks.
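As a rough illustration of what talking to a Redfish-style REST endpoint looks like from Python, here is a hedged sketch; the hostname and credentials are placeholders, and the exact resource paths exposed by a given MX 7000 chassis may differ:

    import requests

    # Hypothetical management endpoint and credentials; replace with your own.
    BASE = "https://mx7000-mgmt.example.com"
    AUTH = ("admin", "password")

    # /redfish/v1 is the standard Redfish service root; Systems lists compute resources.
    resp = requests.get(f"{BASE}/redfish/v1/Systems", auth=AUTH, verify=False)
    resp.raise_for_status()

    for member in resp.json().get("Members", []):
        # Each member links to a system resource, for example a compute sled.
        print(member.get("@odata.id"))

The same style of scripted access is what enables integrating chassis management into broader automation and monitoring workflows.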

Dell EMC PowerEdge MX 7000
Dell EMC PowerEdge MX 7000 Image via Dell.com

Dell EMC PowerEdge MX 7000 Kinetic Based Data Infrastructure Server

The new Dell EMC PowerEdge MX 7000 is the first installment of their new Kinetic based composable architecture. The new Dell EMC PowerEdge MX 7000 components consist of a 7U chassis with power and cooling fans, along with compute sled, storage sled, I/O connectivity and inner fabric, along with management tools.

Dell EMC PowerEdge MX 7000 Modules
Dell EMC PowerEdge MX 7000 Modules Image via Dell.com

Dell EMC PowerEdge MX 7000 Server Compute modules

Dell EMC PowerEdge MX 7000 Compute sleds include MX740c (single width) and MX840c (double width) that are two and four socket modules with local on-board NVMe (e.g., U.2 8639 small form factor SFF) drives (per module). These initial compute modules support Intel Xeon processors and up to six (6) TBytes of memory. The MX740c supports up to six (6) local NVMe, SAS or SATA drives (e.g., 8639 connectors), while the MX840c supports up to eight (8) local drives. Note that these local onboard drives can be shared with other sled modules, as well as compute sleds can access the shared storage sled-based drives.

Dell EMC PowerEdge MX 7000 Server Storage modules

The Dell EMC PowerEdge MX 7000 storage sled consists of the MX5016s holding up to 16 hot-pluggable SAS HDDs; up to seven MX5016s sleds can be configured per MX chassis for up to 112 direct attached storage (DAS) drives. Each of the drives can be individually mapped to one or more servers supporting aggregated (e.g., HCI) as well as disaggregated (CI and legacy) deployment topologies.

Dell EMC PowerEdge MX 7000 Server I/O Networking Modules

Initial server I/O modules for the new Dell EMC PowerEdge MX include 25GbE and 32G Fibre Channel (GFC) host connectivity along with 100GbE and 32 GFC uplink capabilities, with top of rack (ToR) support built in along with Open Networking OS10EE software enabled. The server I/O modules provide both north-south as well as east-west connectivity inside and outside the chassis for data plane and management plane traffic.

Server I/O connectivity options include:

  • MX5108n Ethernet Switch with 8 x 25GbE (server facing ports), 2 x 100GbE ports, 1 x 40GbE port, 4 x 10GbE ports.
  • MX9116n Fabric Switching Engine (e.g., Kinetic fabric) with 16 x 25GbE server facing ports, 2 x 100GbE/8 x 32GFC unified ports, 2 x 100 GbE ports and 12 fabric expansion ports.
  • MXG610s Fibre Channel Switch with 16 x 32GFC internal ports, 8 x 32 GFC SFP+ ports and 2 QSFP (4 x 32GFC) uplink ports.

Where to learn more

Learn more about Dell EMC PowerEdge MX, Kinetic, Composable and data infrastructures related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Overall this is a good announcement of technology and product, as well as of where resources are headed to meet different workload demands, and I look forward to getting some test time with a Dell EMC PowerEdge MX 7000.

Dell EMC PowerEdge MX 7000 Three Tenants
Dell EMC PowerEdge MX 7000 Three Tenants Image via Dell.com

The new Dell EMC PowerEdge MX 7000 provides a data infrastructure resource platform for deploying traditional, cloud, software-defined, composable, as well as converged infrastructure (CI) disaggregated and hyper-converged infrastructure (HCI) aggregated along with hybrid configurations.

With the Dell EMC PowerEdge MX 7000, there is more resource granularity and future-proof capabilities than traditional high-density blade, as well as twin, quad or eight node server configuration solutions.

Many vendors talk about solutions being future proof or enabling investment protection; with PowerEdge MX 7000, Dell EMC is taking the next step in discussing trends, technology, and what you can do today. Unlike traditional dual, quad, eight or high-density node and blade servers with dedicated discrete mid-planes tied to a given technology, the Dell PowerEdge MX 7000 and Kinetic based architecture are mid-plane aka back-plane free. Now there is still connectivity between the different PowerEdge MX 7000 chassis modules, which is a fabric (network if you prefer).

For example, server compute sled modules have an industry standard connector that connects with other components in the chassis. What differs from the traditional blade and multi-node server configurations is that on board the compute sleds, an adapter module can be changed to support a new interface over different generations of technology (as an example, keep an eye on what happens with Gen-Z).

The result is that the Dell EMC PowerEdge MX 7000 should be an excellent platform for software-defined data centers (SDDC), software-defined data infrastructures (SDDI), software-defined infrastructures (SDI) as well as other software defined or traditional deployments. The Dell EMC PowerEdge MX 7000 will make for a good CI, HCI, SDDC, SDDI, SDI platform for public, private as well as hybrid clouds, PaaS as well as IaaS deployments, along with VMware, Microsoft (Hyper-V, Windows Storage Spaces Direct (S2D), as well as Azure Stack) among other scenarios.

By being flexible, scalable, agile and adaptable, with easy management and a responsive, future-proof design enabling a pool of dynamic data infrastructure resources, the Dell EMC PowerEdge MX 7000 should be good at enabling IT Unbound.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Catching Up With Summer 2018 IBM Cloudy Software Defined Storage Announcements


Time for some catching up with summer 2018 IBM cloudy software defined storage announcements that were made earlier this week. The Share Event (Mainframe centric) is occurring this week in St. Louis. Thus, it is no surprise that it is time for catching up with summer 2018 IBM cloudy software-defined storage announcements that are geared to mainframe Z environments. These cloud and software-defined storage for the mainframe environment announcements follow those from a few weeks ago including new Power9 based servers and IBM FlashSystem 9100 flash SSD.

What was announced

What IBM announced this week were a mix of mainframe Z server storage with software-defined storage and cloud (e.g. cloudy) support including:

IBM Spectrum Protect 8.1.6 multi-cloud updates with tiered backup across on-site and cloud. For example, active data remains on-site (or on-prem), while inactive data protection copies get moved (tiered) to cloud storage. Other enhancements include software-defined threat protection, such as against malware and ransomware, extending to hypervisor data, along with blueprint guides for IBM Cloud (e.g., Softlayer), AWS and Microsoft Azure.

IBM Spectrum Protect Plus 10.1.1 enhanced with encryption of vSnap repositories for security, VMware vSphere 6.7 support, improved dashboards user interfaces (UI), and DB2 support in addition to Microsoft SQL Server and Oracle.

IBM DS8882F storage
IBM DS8882F Z mainframe rack mount storage Image via IBM.com

IBM DS8882F rack-mounted storage system (part of the DS8000 storage family) integrated with IBM Z ZR1 (mainframe) and LinuxONE Rockhopper II (mainframe) servers. The DS8882F supports from 6.4TB to 368.64TB raw capacity, along with safeguarded copy protection including read-only copies (e.g., a variation of WORM), encrypted digital signatures, and 256-bit AES encryption.

IBM Cloud Object Storage aka COS (formerly known as Cleversafe) functions as a target tier for DS8880 without the need for an external gateway. Enhancements also include a new 1U server (via Quanta) supporting up to 72 TB configurations.

IBM Elastic Storage Server File and Object pre-configured storage for AI, ML, Big Data and High-Performance Compute (HPC) includes an integrated file (NFS, SMB, S3, Swift) and object access. The solution is pre-installed on IBM Power8 servers running Red Hat Linux (e.g., RHEL). IBM claims high throughput for NAS NFS workloads with a large number of server connections. However, some performance numbers would be impressive to see along with a side of context.

IBM Spectrum Scale on AWS is a software-defined storage solution alternative to the traditional appliance-based solution. With Spectrum Scale 5.0.2, IBM is joining other vendors who have made their software-defined storage solutions available on clouds such as AWS, Azure and Google among others. Besides running on AWS working with Virtual Private Clouds (VPC), IBM supports per-TB licenses including bring your own license, a growing industry trend.

Where to learn more

Learn more about IBM Server, Storage, Data Protection and data infrastructures related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Despite having been declared dead for decades, IBM Z series are still prevalent in many large environments even in a software-defined cloudy era. It’s good to see IBM continuing to invest in, and join other industry vendors who are supporting various cloudy deployments, as well as legacy on-site aka on-prem.

Likewise, IBM is making its legacy Z mainframe systems trendy and cloudy with these new enhancements to support customer hybrid server, storage, and data infrastructure deployments.

Overall, a nice set of incremental improvements following industry trends, and catching up with summer 2018 IBM cloudy software defined storage announcements.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

July 2018 Server StorageIO Data Infrastructure Update Newsletter


Volume 18, Issue 7 (July 2018)

Hello and welcome to the July 2018 Server StorageIO Data Infrastructure Update Newsletter.

In case you missed it, the June 2018 Server StorageIO Data Infrastructure Update Newsletter can be viewed here (HTML and PDF).

In this issue buzzwords topics include Dell Technology and VMware, AWS and Google public, private and hybrid cloud, machine learning, 3D XPoint, SCM, SSD, NVMe, data infrastructure management tools among other topics.

Enjoy this edition of the Server StorageIO Data Infrastructure update newsletter.

Cheers GS

Data Infrastructure and IT Industry Activity Trends

July 2018 data infrastructure, server, storage, I/O network, hardware, software, cloud, converged, and container as well as data protection industry activity includes among others:

Amazon Web Services AWS July 2018 Updates include enhancements to the SageMaker machine learning (ML) service, faster S3 access, and new EC2 instances along with Snowball Edge (SBE) for on-prem converged server and compute appliances (read more about SBE here). In other public cloud activity, Google Cloud Platform GCP announced a new Los Angeles Region.

Intel and Micron have announced that they will be pursuing different paths when they complete the second generation of 3D XPoint in 2019, the technology used in Intel Optane NVMe SSD and Storage Class Memory (SCM) products; read more here: Intel Micron 3D XPoint Evolving. Meanwhile, Broadcom buying CA, brilliant or a brainbuster? This deal is a bit of a head scratcher, with Broadcom spending $18.9 Billion USD (cash) to buy CA Technologies.

In other data infrastructure news and activity, DataDirect Networks Stages Bid to Acquire Tintri's Assets and Expand Its Storage Portfolio into the Enterprise. Dell EMC announced a new integrated data protection appliance (IDPA DP4400) for small and midsize organizations. In other activity, VMware declared a dividend; Dell Technologies, being a majority owner, will use the cash to fund Dell business restructuring. Read more about Dell Technologies Announces Class V VMware Tracking Stock exchange for stock or cash here.

Spectra (e.g. who some of you know as Spectra Logic) has announced enhancements to their tape libraries. Note that one of the larger growth (or sustainment) markets for tape based technologies in recent years has been the larger cloud scale service providers. Granted those providers are not using tape in old ways (e.g. for direct backup), rather in new ways where it is a companion to SSD and HDD as another storage class, tier or technology enabler.

IBM has jumped on the NVMe bandwagon, announcing updates to their FlashSystem 9100 systems (e.g. what they acquired via TMS a few years ago). Opvisor has announced a new VMware vSAN performance monitoring and troubleshooting feature for their insight and awareness management tools.

Check out other industry news, comments, trends perspectives here.

Data Infrastructure Server StorageIO Comments Content

Server StorageIO Commentary in the news, tips and articles

Recent Server StorageIO industry trends perspectives commentary in the news.

Via SearchStorage: Comments on GDPR and Cloudian File Sync Share
Via NetworkComputing: Comments Software Defined Storage SDS Getting Started
Via SearchStorage: Comments The storage administrator skills you need to keep up today
Via SearchStorage: Comments Managing storage for IoT data at the enterprise edge
Via SearchCloudComputing: Comments Hybrid cloud deployment demands a change in security mind set

View more Server, Storage and I/O trends and perspectives comments here.

Data Infrastructure Server StorageIOblog posts

Server StorageIOblog Data Infrastructure Posts

Recent and popular Server StorageIOblog posts include:

2018 Hot Popular New Trending Data Infrastructure Vendors to Watch
June 2018 Server StorageIO Data Infrastructure Update Newsletter
May 2018 Server StorageIO Data Infrastructure Update Newsletter
Have you heard about the new CLOUD Act data regulation?
Data Protection Recovery Life Post World Backup Day Pre GDPR
Microsoft Windows Server 2019 Insiders Preview
Server Storage I/O Benchmark Performance Resource Tools
Data Infrastructure Primer Overview (Its Whats Inside The Data Center)
If NVMe is the answer, what are the questions?

View other recent as well as past StorageIOblog posts here

Server StorageIO Recommended Reading (Watching and Listening) List

Software-Defined Data Infrastructure Essentials SDDI SDDC

In addition to my own books, including Software Defined Data Infrastructure Essentials (CRC Press 2017) available at Amazon.com (check out the special sale price), the following are Server StorageIO data infrastructure recommended reading, watching and listening list items. The Server StorageIO data infrastructure recommended reading list includes various IT, data infrastructure and related topics; the Intel Recommended Reading List (IRRL) for developers is a good resource to check out.

Duncan Epping (@DuncanYB), Frank Denneman (@FrankDenneman) and Niels Hagoort (@NHagoort) have released their VMware vSphere 6.7 Clustering Deep Dive book, available at venues including Amazon.com. This is the latest in a series of clustering deep dive books from Frank and Duncan; if you are involved with VMware, SDDC and related software defined data infrastructures, these should be on your bookshelf.

Watch for more items to be added to the recommended reading list book shelf soon.

Data Infrastructure Server StorageIO event activities

Events and Activities

Recent and upcoming event activities.

July 25, 2018 – Webinar – Data Protect & Storage

June 27, 2018 – Webinar – App Server Performance

June 26, 2018 – Webinar – Cloud App Optimize

See more webinars and activities on the Server StorageIO Events page here.

Data Infrastructure Server StorageIO Industry Resources and Links

Various useful links and resources:

Data Infrastructure Recommend Reading and watching list
Microsoft TechNet – Various Microsoft related from Azure to Docker to Windows
storageio.com/links – Various industry links (over 1,000 with more to be added soon)
objectstoragecenter.com – Cloud and object storage topics, tips and news items
OpenStack.org – Various OpenStack related items
storageio.com/downloads – Various presentations and other download material
storageio.com/protect – Various data protection items and topics
thenvmeplace.com – Focus on NVMe trends and technologies
thessdplace.com – NVM and Solid State Disk topics, tips and techniques
storageio.com/converge – Various CI, HCI and related SDS topics
storageio.com/performance – Various server, storage and I/O benchmark and tools
VMware Technical Network – Various VMware related items

What this all means and wrap-up

Summer is here in North America and the Northern Hemisphere, which means holidays as well as vacations. However, data infrastructures continue to evolve, as do the tools, technologies, trends, hardware, software and services, along with those who take care of, and define, them. Enjoy your summer vacation and holidays as well as this July 2018 Server StorageIO Data Infrastructure Update Newsletter edition.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Amazon Web Services AWS July 2018 Updates


Amazon Web Services AWS July 2018 Updates continue to expand the features, functionality and service capabilities of the public cloud provider across various geographies.

Recent AWS updates include Snowball Edge (SBE) that adds local, on-site, on-premises aka on-prem EC2 compute capabilities as part of the Snowball appliance. Previously Snowball was a data and storage migration only appliance, now with the new capabilities, compute is also enabled as part of a turnkey converged platform. Read more about SBE here.

In other updates, AWS has extended its Elastic Cloud Compute (EC2) capabilities (besides Snowball Edge) with new instance types, along with leveraging their next generation hypervisor as part of Nitro enabled systems. New EC2 instances span from on-prem Snowball Edge (SBE) to AWS Dedicated aka bare metal instances, along with traditional cloud instances (e.g., virtual machines).

These new instances, including R5, R5d, and Z1d among others, leverage faster Intel Xeon Platinum 8000 series processors, along with more memory. For example, Z1d is a compute-intensive instance with 4.0 GHz all-turbo cores, while R5 is memory optimized with 3.1 GHz cores (up to 96 vCPU) and up to 768GB of RAM. The R5d is a memory-optimized instance that also supports up to 3.6TB of on-instance NVMe based storage. View additional AWS instance types here.

AWS has enhanced the SageMaker (machine learning) service, supporting higher throughput enabling faster batch data transformation jobs for non-real-time inference. To enable higher data and API call rates, AWS has also enhanced the Simple Storage Service (S3) request rate. Another enhancement by AWS is enabling a bring your own IP address preview for virtual private cloud (VPC) as part of allowing hybrid clouds.
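For a sense of what taking advantage of the higher S3 request rates can look like in practice, here is a minimal Python (boto3) sketch issuing PUTs in parallel; the bucket name and keys are placeholders, and real workloads would tune worker counts and payload sizes:

    import boto3
    from concurrent.futures import ThreadPoolExecutor

    s3 = boto3.client("s3")
    BUCKET = "example-bucket"  # placeholder bucket name

    def upload(key: str) -> str:
        # Each call is an independent PUT; S3 now sustains higher request rates per prefix.
        s3.put_object(Bucket=BUCKET, Key=key, Body=b"example payload")
        return key

    keys = [f"telemetry/part-{i:04d}.json" for i in range(100)]
    with ThreadPoolExecutor(max_workers=16) as pool:
        for done in pool.map(upload, keys):
            print("uploaded", done)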

View additional new, recent and past AWS updates here, and here.

Where to learn more

Learn more about AWS, Cloud and data infrastructures related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Amazon Web Services AWS July 2018 Updates continue to expand the number, type and extensiveness of public cloud services, as well as enabling hybrid capabilities. The Amazon Web Services AWS July 2018 Updates also address different data infrastructure layers, from lower level Infrastructure as a Service (IaaS) including EC2 compute, to higher level artificial intelligence (AI), machine learning (ML) and deep learning (DL) among other cognitive as well as analytic offerings.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Intel Micron 3D XPoint Evolving


Generations of memory
Major memory classes or categories timeline (Image via Intel and Micron)

Co-creators of 3D XPoint, the next generation of non-volatile memory (NVM) also known as storage class memory (SCM) or persistent memory (PMEM), have announced they will complete joint development of the second-generation technology, then pursue their separate paths. Intel and Micron jointly announced 3D XPoint three years ago (July 2015) as a new technology, with the first generation of products having appeared in the market over the past year or so.

Various industry vs customer adoption deployment timelines
Various Adoption Deployment Timelines for different focus areas

For those in the industry who measure technology adoption and deployment in months rather than years, or in time from press release until new news, some would say 3D XPoint is late or behind schedule, which perhaps it is based on some timelines. On the other hand, IT customers tend to be on a different timeline that may seem like glacial speed to the industry's focus on rapid change. IMHO 3D XPoint is about on the right timeline based on IT customer deployment, which may very well accelerate toward broader usage with the second generation based products.

3D XPoint based Intel Optane
Top Intel 750 NVMe PCIe AiC SSD, bottom Intel Optane NVMe 900P U.2 SSD with Ableconn carrier

While the focus is easily around Intel and Micron going separate ways, keep in mind that there is a second generation of 3D XPoint in the works. Some might consider the second generation of 3D XPoint as the first real production and volume technology, with the first being just that, the first generation. An example of a first generation 3D XPoint based product is the Intel Optane NVMe device such as the one shown above, discussed in this StorageIO Lab test drive post here.

NVMe and NVM along with SCM as well as PMEM better together

Where to learn more

Learn more about Intel, Micron, NVM, NVMe, 3D XPoint, SCM, PMEM and data infrastructures related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Some may see the announcement of Intel and Micron pursuing separate paths as a negative, while others see it as a positive. While completing the second-generation development together, both can leverage what they have done while seeking different, presumably divergent or expanded, paths forward.

A concern could be if Intel and Micron merely go their separate ways yet focus on the same market areas. A benefit could be if Intel and Micron pursue different market focus areas with some overlap while expanding to broader opportunities.

The latter scenario could be useful for moving the technology forward by giving it new and different opportunities. For example, some that favor Intel along with its ecosystem would prefer whatever Intel does next. Likewise, those that favor Micron and their ecosystem may influence the direction Micron goes.

Does this mean Micron and Intel are all done collaborating? Tough to say.

However, they still share a fabrication facility (fab), IM Flash in Lehi, Utah.

Overall, I think this is a good move for both Intel and Micron once they get the second generation of 3D XPoint developed and into production for customer deployments. With Intel Micron 3D XPoint evolving, let's see what's next.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

AWS Snowball Edge SBE Converged Cloud Storage Appliance


As part of extending their cloud platform reach, recent Amazon Web Services (AWS) announcements include AWS Snowball Edge SBE Converged Cloud Storage Appliance. Snowball Edge (SBE) has evolved from its previous focus as a data transfer, migration platform appliance to now include support for on-prem compute. SBE has previously been available as an appliance that ships from AWS to your location as a service to enable bulk data movement to the public cloud (e.g. AWS Simple Storage Service (S3) bucket). With this new capability, AWS is enabling SBE to support on-prem compute similar to Elastic Cloud Compute (EC2) cloud instances.

AWS Snowball Data Migration at PB scale
AWS Snowball Appliance Image via AWS.com

What is AWS Snowball

Snowball is a bulk physical data migration appliance that AWS ships to your location. You use Snowball by setting up a copy job with AWS; when the device arrives at your site, you set it up and enable the copy jobs to move data from the source to the Snowball destination. Once data is copied, you ship the Snowball back to an AWS region and availability zone (AZ) where its contents are copied into a Simple Storage Service (S3) bucket of your choice. Once the copy job into AWS S3 is complete, AWS performs a secure erase of the Snowball.
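In addition to the AWS management console, a transfer job can be set up programmatically; the following is a minimal boto3 sketch of creating an import job, where the bucket ARN, address ID and role ARN are placeholders you would replace with your own values:

    import boto3

    snowball = boto3.client("snowball", region_name="us-east-1")

    # All identifiers below are placeholders for illustration only.
    job = snowball.create_job(
        JobType="IMPORT",  # move data from your site into AWS
        Resources={"S3Resources": [
            {"BucketArn": "arn:aws:s3:::example-destination-bucket"}
        ]},
        AddressId="ADID00000000-0000-0000-0000-000000000000",
        RoleARN="arn:aws:iam::123456789012:role/example-snowball-import-role",
        SnowballCapacityPreference="T80",  # 80TB device
        ShippingOption="SECOND_DAY",
    )
    print("Created job:", job["JobId"])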

Basic Snowball includes 10 GbE network connections (RJ45 and SFP+ [fiber or copper]). Security and encryption include 256-bit keys that can be managed via AWS Key Management Service (KMS). Note that keys are not sent to or stored on the device for security during transit. For additional protection, tamper-resistant seals are included along with a Trusted Platform Module (TPM) to detect unauthorized hardware, firmware or software changes.

End-to-end tracking is enabled using E Ink shipping labels, which also allow monitoring via AWS Simple Notification Service (SNS). Once your data transfer job completes and is verified, a software erasure of the SBE is performed by AWS following NIST media handling guidelines.

For management, SBE has an API for customer integration, as well as the ability to create and manage transfer jobs via the AWS management console. The SBE Adapter also gives customers direct access to the Snowball, where it appears as an S3 endpoint (how you access the storage and data).
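Because the Snowball presents itself as an S3 endpoint, standard S3 tooling can be pointed at it; here is a hedged boto3 sketch where the device address and credentials are placeholders (in practice they come from the Snowball client once the device is unlocked):

    import boto3

    # Point a standard S3 client at the on-prem Snowball S3 endpoint (placeholder address).
    s3 = boto3.client(
        "s3",
        endpoint_url="https://192.0.2.10:8443",   # hypothetical Snowball adapter address
        aws_access_key_id="SNOWBALL_ACCESS_KEY",  # placeholder credentials
        aws_secret_access_key="SNOWBALL_SECRET_KEY",
        verify=False,                             # device uses a self-signed certificate
    )

    # Copy a local file into the bucket configured for the transfer job.
    s3.upload_file("backup.vhdx", "example-destination-bucket", "backups/backup.vhdx")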

Backside view of AWS Snowball
Backside view of Snowball Image via Amazon.com

Additional Snowball Speeds and Specification Feature Feeds include:

  • Storage space capacity of 50TB (42TB usable) or 80TB (72TB usable)
  • Network connectivity 10 GbE RJ45 (Cat6), SFP+ (Copper and Optical). Cables include RJ45 and Copper SFP+. For Fiber attached Ethernet, the customer supplies their own SFP+ optical cables.
  • SBE is designed for office environments, as well as data centers (e.g., about 68db) and weigh about 47 pounds.
  • Power requirements include NEMA 5-15p (standard wall outlet) 100-200 volts with power cable included.

Note for traditional Snowball deployments an on-prem workstation or server is needed to copy data from source locations to the Snowball device.

How AWS Snowball and Snowball Edge work

How AWS Snowball Works

Referring to the image above, the first step to using AWS Snowball (or Snowball Edge) is to place an order via the AWS management console (A). Part of the ordering process involves setting up the data transfer job and, in the case of AWS Snowball Edge, defining the EC2 instance and image (read more about that here via AWS). After placing the order and setup, the AWS Snowball arrives at your location (B), on-site setup is done and the data transfer performed (C). Once data is transferred, the AWS Snowball is returned to the designated AWS location via two-day shipping (D) and data is copied into your specified S3 or Glacier bucket (E). After your data is transferred into the S3 or Glacier bucket you specify as part of the transfer job, you are able to do what you want with your files, folders, images, videos, VHDXs, VMDKs, ISOs, little data and big data.

What is AWS Snowball Edge

AWS has enhanced its Snowball Edge (SBE) data mobility, migration, and transport appliance to now also include compute. For those not familiar, Snowball is an appliance that comes in various sizes that you order from AWS; it shows up at your site, and then you copy your data to it for migration into AWS. Once data is copied, you return it to AWS where the data then appears in your designated S3 bucket. From your S3 bucket, you can then move the data, files, volumes and images to other locations, use them for standing up EC2 compute, populate databases, or do other things.

With the new compute feature, AWS is enabling compute on the Snowball Edge appliance functioning similar to an EC2 instance, except that it is on your site. This means you can use the compute to run your own custom AMIs (Amazon Machine Images) on-site or on-prem in support of data migration, conversion or another process. You can also keep the appliance on-site for as long as you want, granted your credit card gets charged, to support development, test, extended migration, or to have a converged, or hyper-converged, platform.

Note that with SBE having compute capability, you can now run an EC2 image that functions as your copy server, eliminating the need for an on-prem workstation or server to perform the copy operation.
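To give a feel for what driving that on-device compute looks like, here is a hedged sketch that launches an instance through the EC2-compatible interface a Snowball Edge exposes locally. The endpoint address, credentials, AMI ID and instance type are placeholder assumptions for illustration, not values from AWS documentation or this post.

```python
# Hypothetical sketch: start a custom AMI on a Snowball Edge via its local
# EC2-compatible endpoint. Endpoint, credentials, AMI ID and instance type are placeholders.
import boto3

ec2_local = boto3.client(
    "ec2",
    endpoint_url="http://192.168.1.100:8008",    # local SBE compute endpoint (assumption)
    aws_access_key_id="SBE_ACCESS_KEY",           # placeholder credentials
    aws_secret_access_key="SBE_SECRET_KEY",
)

response = ec2_local.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI loaded when the job was created
    InstanceType="sbe1.medium",        # assumed SBE instance size
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```

Such an instance could then run your copy tooling locally against the device's S3 endpoint.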

Additional AWS Snowball Edge speeds, feeds and feature functions include:

  • 100TB (82TB usable) storage space capacity
  • 10 GbE network, along with 10/25 GbE SFP28 and 40 GbE QSFP+ with device-based encryption (customer provided network cables)
  • Local computing with EC2 and Lambda functions for remote deployment along with scale-out clustering of multiple SBE’s
  • S3 compatible endpoint along with NFS endpoint (mount point) using both NFS v3 and v4.1.
  • Weighs about 50 pounds; includes tamper-evident seals along with a TPM similar to traditional Snowball, with detection of unauthorized hardware, firmware or software changes.
  • Can exist in an office environment, or data center.
  • Power cables are included, NEMA 5-15p, 100-220 volts, 400 watts.

What is AWS Snowmobile

Need something with more capacity than an SBE? AWS has a larger offering called Snowmobile that supports up to 100PB and is brought to your site via a 45-foot-long tractor-trailer truck. Both SBE and Snowmobile physically move data from your location to an AWS region availability zone (AZ), aka data center, where it is placed into the Simple Storage Service (S3) or Glacier bucket of your choice. Once in the S3 or Glacier bucket, you can move the data to wherever you need it.

Why Snowball Edge and Snowmobile vs. Fast Networks

Some people ask why there is a need for services such as SBE and Snowmobile, or for physically shipping your SSDs, HDDs, tape or other storage media to a cloud provider in an Internet era of fast networks. The reason can be quite simple: most environments do not have Internet connection speeds of 10 GbE or higher that can be dedicated, outside of regular use, to data movement at scale.
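To put that in perspective, here is a quick back-of-the-envelope calculation of how long bulk transfers take over a network link. It assumes a dedicated link at roughly 80% effective utilization and ignores everything else, so treat the numbers as rough orders of magnitude.

```python
# Rough transfer-time estimate: dedicated link, ~80% effective utilization assumed.
def transfer_days(terabytes, link_gbps, efficiency=0.8):
    bits = terabytes * 1e12 * 8                      # decimal terabytes to bits
    seconds = bits / (link_gbps * 1e9 * efficiency)  # effective line rate
    return seconds / 86400

for tb in (50, 100, 1000):            # e.g., Snowball, Snowball Edge, a petabyte-class move
    for gbps in (0.1, 1, 10):
        print(f"{tb:5d} TB over {gbps:4} Gb/s ~ {transfer_days(tb, gbps):7.1f} days")
```

For example, 100TB over a dedicated 1 Gb/s link at 80% utilization works out to roughly 11 to 12 days, which is why trucks and appliances still have a role.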

Likewise, some public cloud service providers have limitations on the network speed of their front-end general-purpose Internet access.

Note that some providers such as AWS have high-speed, low-latency direct connect services available from partner staging locations. However, those too may be limited in speed for large bulk transfers. AWS also has other performance-enhanced services for general Internet access, including S3 Transfer Acceleration. Note that Microsoft Azure has special connectivity options such as ExpressRoute, while Google Cloud Platform (GCP) has Cloud Interconnect.
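If you do stay on the wire for smaller transfers, S3 Transfer Acceleration is enabled per bucket. The hedged sketch below (the bucket name and file path are placeholders) turns it on with boto3 and then uploads through the accelerate endpoint.

```python
# Hypothetical sketch: enable S3 Transfer Acceleration on a bucket and upload through
# the accelerate endpoint. Bucket name and file path are placeholder assumptions.
import boto3
from botocore.config import Config

s3 = boto3.client("s3")
s3.put_bucket_accelerate_configuration(
    Bucket="example-bulk-transfer-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Clients configured this way route uploads via the s3-accelerate endpoint
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("/data/big-archive.tar", "example-bulk-transfer-bucket", "big-archive.tar")
```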

Is AWS SBE a CI, HCI, CiB or Appliance?

The answer to the question of whether SBE is a Converged Infrastructure (CI), Hyper-Converged Infrastructure (HCI), Cloud in a Box (CiB) or cloud appliance depends on your view and definition of those deployment models. Some will argue that SBE is a CI or HCI as well as a CiB, comparable to offerings from Cisco, Dell Technologies, HPE, Microsoft (Azure Stack and Windows S2D), NetApp, Nutanix, Pivot3 and VMware vSAN among others.

On the other hand, some will argue that SBE is not the same as the above offerings, given it does not meet their definition of CI, HCI, CiB or cloud appliance. What is important is not whether it is CI, HCI, CiB or appliance, but rather what it can do, how it can adapt to your environment, and whether it works for you vs. you working for it. In other words, what is important is the enablement a solution provides vs. whether it is CI, HCI, CiB or something else. Meanwhile, watch to see who ignores SBE, who welcomes it to their market space, and who throws mud balls and FUD balls at Snowball.

When to use Snowball vs. Snowball Edge

If all you need is a bulk data migration appliance, using one of your servers or workstations for smaller amounts of data, traditional Snowball is a good fit. On the other hand, if you need to move more data, leverage SBE-enabled on-prem compute with EC2 and Lambda functionality for short or long durations, or scale out to create a cluster, then SBE is for you. SBE is also a good fit for environments that need short-term as well as longer-term deployment of compute, storage and network (e.g., converged): for example, factory environments, rugged implementations on ships, energy exploration and processing, traveling venues and sporting events, and distributed environments being consolidated, among others.

AWS Regions, AZ locations
AWS Regions and AZ’s image Via AWS.com

What About AWS Snowball Edge Pricing

Pricing varies based on the AWS region you are transferring to and managing from. Another variable is whether you select data transfer only or also enable an EC2 compute instance on-prem. Yet another pricing variable is how long you keep the Snowball Edge on-prem. You are given ten (10) free days as part of your data transfer job, along with days for shipping and return.

Beyond the ten free days, you pay a daily rate that varies. The longer you commit to keeping the SBE on-prem, for example with a one or three-year pre-pay, the larger the discount. Also note that there are no data transfer fees for moving data into AWS; however, standard pricing applies once data is stored in AWS or moved. Standard AWS storage charges (e.g., S3 or Glacier, along with API calls) apply once data is stored.

As an example, for data transfer only, the service fee for a data transfer job is USD 300 for the US and other non-Asia-Pacific (Singapore) regions. Additional days are $30 each.

Another example is selecting data transfer plus an EC2 compute instance, which varies by region; for example, $500 for a transfer job (US East Northern Virginia or Ohio) plus a $50-per-day extra fee. However, if you are willing to pay up front for one year, the daily fee drops to $42 (varies by region), and to $35 a day for a three-year commitment.
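As a simple way to compare those options, the sketch below estimates a job cost from the example fees quoted above. The fees are the examples given in this post; they vary by region and change over time, so verify current pricing with AWS before planning.

```python
# Rough SBE job-cost estimate using the example fees quoted above (US region examples;
# actual fees vary by region and over time, and exclude S3/Glacier storage charges).
def sbe_job_cost(days_on_site, service_fee, daily_fee, free_days=10):
    extra_days = max(0, days_on_site - free_days)
    return service_fee + extra_days * daily_fee

# Data transfer only: $300 job fee, $30/day beyond the ten free days
print(sbe_job_cost(25, service_fee=300, daily_fee=30))   # -> 750

# Data transfer plus on-prem EC2 compute: $500 job fee, $50/day beyond the free days
print(sbe_job_cost(25, service_fee=500, daily_fee=50))   # -> 1250
```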

For some environments, it may cost less to buy a server with storage, then set it up and manage it, while for others, the simplicity of a turnkey converged platform may be more cost-effective along with better value. Learn more about AWS Snowball Edge pricing here.

Where to learn more

Learn more about AWS, Snowball Edge, Cloud and data infrastructures related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Has AWS embraced hybrid public cloud and on-prem computing? IMHO, while AWS is making it easier for environments to use, access, as well as move to public cloud, they are still focused on the public cloud as the destination. In other words, AWS is making it easy to move your data and applications to their services, as well as access them, with the AWS Snowball Edge (SBE) converged cloud storage appliance.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Google Cloud Platform GCP announced new Los Angeles Region

Google Cloud Platform GCP announced new Los Angeles Region

Google Cloud Platform GCP announced new Los Angeles Region

Google Cloud Platform GCP announced new Los Angeles Region

Google Cloud Platform (GCP) has announced a new Los Angeles region (us-west2) with three initial Availability Zones (AZ), also known as data centers. Keep in mind that a region is a geographic area made up of two or more AZs. Thus, a region has multiple data centers for availability, resiliency and durability.

The new GCP us-west2 region is the fifth in the US and seventh in the Americas. GCP regions (and AZs) in the Americas include Iowa (us-central1), Montreal Quebec Canada (northamerica-northeast1), Northern Virginia (us-east4), Oregon (us-west1), Los Angeles (us-west2), South Carolina (us-east1) and Sao Paulo Brazil (southamerica-east1). View other geographies and services, including Europe and Asia-Pacific, here.

How Does GCP Compare to AWS and Azure?

The following are simple graphical comparisons of what Google Cloud Platform (GCP), Amazon Web Services (AWS) and Microsoft Azure currently have deployed for regions and AZs across different geographies. Note that each region may have a different set of services available, so check your cloud provider's notes as to what is currently available at various locations.

Google Cloud Compute Platform regions
Google Cloud Platform Locations (Regions and AZs) image via Google.com

AWS Regions, AZ locations
AWS Regions and AZ’s image Via AWS.com

Microsoft Azure Cloud Region Locations
Microsoft Azure Regions and AZ’s image Via Azure.com

Where to learn more

Learn more about data infrastructures and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Google continues to evolve its public cloud platform (GCP), both regarding global physical locations (e.g., regions and AZs) and regarding features, functions and extensibility. By adding a new Los Angeles (us-west2) region with three AZs, Google provides a local point of presence for data infrastructure-intense (server compute, memory, I/O, storage) applications such as those in media, entertainment, high-performance compute and aerospace, among others, in the southern California area. Overall, with the new Los Angeles region it is good to see not only new features being added to GCP but also new physical points of presence.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Dell Technologies Announces Class V VMware Tracking Stock exchange for stock or cash

Dell Technologies Announces Class V VMware Tracking Stock exchange for stock or cash

Dell Technologies Announces Class V VMware Tracking Stock exchange for stock or cash

Dell Technologies Announces Class V VMware Tracking Stock exchange for stock or cash.

Dell Technologies Announces Class V VMware Tracking Stock exchange for stock or cash
Image via Dell Technologies

Summary of Dell transaction announcement includes:

  • VMware declares an $11 Billion USD cash dividend pro rata to all VMware stockholders.
  • Given ownership percentage of VMware, Dell Technologies will receive approximately $9 Billion USD cash dividend.
  • Dell plans to list its Class C common stock shares on the New York Stock Exchange (NYSE).
  • Dell plans to use the VMware dividend proceeds to fund cash consideration to be paid to Class V (tracking stock) shareholders.
  • For each Class V share (e.g. VMware tracking stock) shareholders can choose to receive:

    1.3665 shares of Dell Technologies Class C common stock, or
    $109 in cash per DVMT (Class V) share, a 29% premium per share (see the quick arithmetic below)
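For a rough sense of that trade-off (a back-of-the-envelope sketch only, not investment guidance), the breakeven Class C price is simply the cash option divided by the exchange ratio; above that price, the stock election is worth more than the cash.

```python
# Back-of-the-envelope comparison of the two Class V elections (not investment advice).
cash_per_share = 109.00     # USD cash option per DVMT (Class V) share
exchange_ratio = 1.3665     # Class C shares received per DVMT share

breakeven = cash_per_share / exchange_ratio
print(f"Breakeven Class C price: ${breakeven:.2f}")      # about $79.77

for class_c_price in (70.00, 80.00, 90.00):              # hypothetical Class C trading prices
    stock_value = exchange_ratio * class_c_price
    print(f"Class C at ${class_c_price:.2f}: stock election worth ${stock_value:.2f} vs ${cash_per_share:.2f} cash")
```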

Dell Announces Class V VMware Tracking Stock exchange for stock or cash
Image via Dell Technologies

Additional points of interest for this transaction include:

  • Transaction expected to close Q4 CY2018, subject to Class V shareholder approval.
  • VMware maintains its independence as a separate publicly traded company.
  • Dell Technologies maintains its 81% ownership of VMware common stock.
  • Dell Technologies Class V (DVMT) shareholders will own 20.8% to 31.0% of Dell Class C (depending on cash election amounts).
  • Streamlines Dell capital and ownership structure.
  • Establishes a public security (stock) in a global end-to-end data infrastructure provider (e.g., Dell Technologies stock on the NYSE).
  • Enables financial flexibility for future strategic initiatives.

Dell Announces Class V VMware Tracking Stock exchange for stock or cash
Image via Dell Technologies

Michael Dell and Silver Lake Continued Ownership

As part of this transaction, both Michael Dell and Silver Lake Partners announced continued commitment to Dell Technologies. Michael Dell will continue to serve as Chairman and CEO, and remains a committed stockholder beneficially owning approximately 47% to 54% of Dell Technologies on a fully diluted basis. Silver Lake, an equity partner and investor in Dell, will continue its long-term partnership with Michael Dell, beneficially owning approximately 16% to 18% of Dell Technologies on a fully diluted basis.

Where to learn more

Learn more about Dell Technologies, VMware, Data Infrastructures and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

This announcement enables Dell to streamline its financial structure while providing VMware shareholders with a dividend value. In addition, this Dell Technologies announcement puts to rest industry discussions of what Michael Dell, along with Dell Technologies and VMware, will do in the future. Speaking of the future, this transaction could also pave the way for future investment or acquisitions by Dell and/or VMware. Now the question is, if you are a DVMT tracking stock shareholder, do you take the $109 USD cash, or new Class C Dell Technologies stock? Let's see how the Class V VMware tracking stock exchange for stock or cash plays out during the rest of summer and into the fall.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Announcing Windows Server Summit Virtual Online Event

Announcing Windows Server Summit Virtual Online Event


Announcing Windows Server Summit Virtual Online Event

Microsoft will host a free (no registration required) half-day virtual (e.g., online) Windows Server Summit event on June 26, 2018, starting at 9AM PT. As part of its continued focus on a hybrid strategy spanning on-premises Windows Server to Azure (among other clouds, including AWS), Microsoft is preparing for the launch later this year of Windows Server 2019.

There is no registration required; you can just show up without concern about getting email or other spam. However, you can also click here to save the date, as well as here to get updates on the event.

Microsoft Windows Server LTSC and SAC release

Windows Server 2019 is now in Insider preview (get it here) and is the next Long-Term Servicing Channel (LTSC) release following Windows Server 2016. In the past, Microsoft would have called Windows Server 2019 something such as Windows Server 2016 R2; however, that has changed with the new Semi-Annual Channel (SAC) and LTSC release cycles.

Keynote kickoff presentations will be from Erin Chapple, Director of Program Management, Cloud + AI (which includes Windows kernel, hypervisors, containers and storage); Arpan Shah, General Manager of Azure Infrastructure marketing (Windows Server, Azure IaaS, Azure Stack, Azure management and security); and Jeff Woolsey, Principal PM, Windows Server. In addition to the kickoff presentations covering the current state and status of Windows Server (available for on-premises bare metal, virtual, container as well as cloud deployments), there will be demos, Q&A, roadmaps and much more. Topics will include new and recent functionality such as Windows Server 2019, Windows Admin Center (formerly known as Honolulu) and IoT.

Windows Server Summit tracks: Hybrid, Security, HCI and Application Development
Images Via Microsoft Windows Server Summit Page

Windows Server Summit Break Out Tracks

During the Windows Server Summit, there will be four technology focused tracks including:

  • Hybrid – From on-premises to Azure, how Windows Server supports different workloads in various configurations, along with associated management tools (including Windows Admin Center, aka Honolulu).
  • Security – New and recent security enhancements for Windows Server along with Hyper-V and other related topics.
  • Application Platform – Containers and Linux support, along with associated management tools for on-premises and Azure.
  • Hyper-converged infrastructure (HCI) – Leveraging software defined storage (SDS) with Storage Spaces Direct (S2D) in Windows Server 2016, along with Hyper-V and other technologies, learn how Microsoft supports HCI and beyond.

Where to learn more

Learn more about Windows Server Summit and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Windows Server remains relevant today for traditional on-site, on-premises deployments, as well as cloud and container deployments among others. Remember to click here to save the date, click here to sign up for Windows Server Summit updates, and learn more about the Windows Server Summit virtual online event here. See you there, or at least virtually.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

May 2018 Server StorageIO Data Infrastructure Update Newsletter

May 2018 Server StorageIO Data Infrastructure Update Newsletter

May 2018 Server StorageIO Data Infrastructure Update Newsletter

Volume 18, Issue 5 (May 2018)

Hello and welcome to the May 2018 Server StorageIO Data Infrastructure Update Newsletter.

In case you missed it, the April 2018 Server StorageIO Data Infrastructure Update Newsletter can be viewed here (HTML and PDF).

May has been a busy month with a lot of data infrastructure related activity from software-defined virtual, cloud, container, converged, serverless to legacy, hardware, software, services, server, storage, I/O and networking along with data protection topics among others.

In this issue, buzzword topics include GDPR, NVMe, NVMeoF, Composable, Serverless, Data Protection, SCM, Gen-Z and MaaS:

Enjoy this edition of the Server StorageIO Data Infrastructure update newsletter.

Cheers GS

Data Infrastructure and IT Industry Activity Trends

May has been a busy month, some data infrastructure, server, storage, I/O network, hardware, software, cloud, converged, and container as well as data protection activity includes among others:

Depending on when you read this, the new General Data Protection Regulation (GDPR) is either days away or already in effect. For those who are not aware of GDPR, other than seeing many inbox items in your email pertaining to it, here are some resources as a refresher or primer:

May Buzzword, Buzz Topic and Trends

Besides data protection and GDPR, other recent data infrastructure related news, trends, technologies and topics to keep an eye on (besides AI, ML, DL, AR/VR, IoT, Blockchain and Serverless) include Metal as a Service (MaaS), which may be familiar to some and new to others. Canonical has been busy for some time now with MaaS, including in Ubuntu, and they are not alone, with variations appearing from various managed service providers, hosting and cloud providers as well. NVMe has become a more common topic, technology and trend, including for use in servers as well as over fabrics (e.g., NVMe over Fabrics) as a language for server, storage and I/O communication.

A new emerging companion to NVMe is Gen-Z, which initially is a companion to PCIe. Longer term, Gen-Z could possibly be a replacement, as well as being used for accessing dynamic random access memory (DRAM) among other uses. Storage Class Memory (SCM) has been an industry conversation topic for several years now, with new persistent memories (PMEM) that combine the best of traditional DRAM (speed and write endurance) with the persistence, higher capacity and lower cost of traditional NAND flash SSDs.

Another trend topic is that for some, ASICs, FPGAs and GPUs are new companions to standard commodity compute processors and servers, yet for others it may be déjà vu, as they have been used for years (ok, decades) in some solutions. For now, a few other buzzwords and buzz terms to add to or refresh in your data infrastructure vocabulary include distributed ledgers (aka blockchains), composable resources and ephemeral instance storage (storage on a cloud instance).

May NVMe Momentum Movement Activity

May saw a lot of NVMe-related activity, from chips and components (adapters, devices) to systems, spanning direct attached to NVMe over Fabrics (NVMeoF). Here is a primer (or refresher) for NVMe along with various deployment options. NVMeoF includes RDMA over Converged Ethernet (RoCE) based transports, along with NVMe over Fibre Channel (FC-NVMe), as well as emerging NVMe over IP.

NVMe options
NVMe being used for front-end accessed via shared PCIe along with back-end devices

There are many different facets of NVMe, including use as a front-end on storage systems supporting server attachment (e.g., competing with Fibre Channel, iSCSI and SAS among others). Another variation of NVMe is as a back-end for attachment of drives or other NVMe-based devices in storage systems, as well as servers.

NVMe backend
Front-end using traditional block SAN access with back-end NVMe, SAS and SATA devices

Read more about the many different options and variations of NVMe including key questions to ask or understand, deployment topology along with other related topics at thenvmeplace.com.

NVMe frontend NVMeoF
Various NVMe front-end including NVMeoF along with NVMe back-end devices (U.2, M.2, AiC)

Software Defined Data Infrastructure Activity

Amazon Web Services (AWS) continues to add new features and functionality, as well as extending those along with existing capabilities into various regions. Some recent updates include new Elastic Compute Cloud (EC2) Microsoft Windows Server versions 1709 and 1803 Amazon Machine Images (AMIs). Other AWS updates include spot instance support for Red Hat BYOL (Bring Your Own License), VPN enhancements, X1e instances available in Frankfurt, an H1 instance price reduction, as well as LightSail now in the Canada, Paris, and Seoul regions.

For those not familiar with LightSail, these are virtual private servers (VPS), which are different from traditional EC2 instances. LightSail can be a cost-effective way for those who need to move out of general-population shared hosting, yet cannot justify a full EC2 instance while requiring more than a container.

The LightSail instance also is available with various software pre-installed such as for WordPress websites among others. For example, I have used LightSail as a backup and standby WordPress site for StorageIOblog using Updraft Plus  Pro for data protection.

In other news, AWS C5d EC2 instances are available in various regions. C5d instances are available with 2, 4, 8, 16, 36 and 72 vCPUs along with up to 1800GB of NVMe-based ephemeral storage for on-demand, reserved or spot instances.

Note that instance-based storage is temporary, meaning it persists only for the life of the instance. What this means is that if you stop and restart the instance, the data does not persist. Instance-based storage is useful for data that can be protected or persisted to other storage, including EBS (Elastic Block Store). Usage includes batch, log and analytics processing, burst buffers, cache or workspace.
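As a small illustration of that pattern (the paths and bucket name below are placeholder assumptions, not anything from AWS documentation), persisting results off the instance store to S3 before the instance is stopped can be as simple as:

```python
# Hypothetical sketch: archive results from ephemeral instance-store storage and copy
# them to durable S3 before stopping the instance. Paths and bucket name are placeholders.
import tarfile
import boto3

ARCHIVE = "/tmp/results.tar.gz"
with tarfile.open(ARCHIVE, "w:gz") as tar:
    tar.add("/mnt/instance-store/results", arcname="results")   # assumed mount point

boto3.client("s3").upload_file(ARCHIVE, "example-durable-bucket", "results.tar.gz")
```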

AWS also announced a new Simple Storage Service (S3) storage class a month or so ago called One Zone-Infrequent Access (One Zone-IA). This new storage class primarily provides a lower cost of storage with reduced resiliency (data is kept in a single availability zone rather than spread across multiple). Over the past couple of months, I have been migrating from S3 Infrequent Access (IA) as well as Standard into One Zone-IA. Some of my active data remains in the S3 Standard storage class, while cold archives are in Glacier.

A tip about migrating to One Zone-IA, as well as between other S3 storage classes: pay attention to your API calls and monthly budget. You might see an increase in S3 costs during the migration period, which then settles into lower prices once data has been moved, due to API calls (gets, puts, lists, dir). In other words, pay attention to how many API calls you make per storage class per month, along with other fees, rather than focusing only on cost per TByte. Read about other recent AWS news updates here.
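As an illustration of how such a migration can be done object by object, the hedged sketch below rewrites objects in place with a new storage class; the bucket and prefix are placeholders, and every list and copy is itself a billable request, which is exactly the point about watching API charges. A lifecycle rule that transitions objects automatically is the lower-touch alternative.

```python
# Hypothetical sketch: move existing objects under a prefix to the One Zone-IA storage
# class by copying them in place. Bucket and prefix are placeholders; each list and copy
# call is billed, which is why request counts matter during a migration.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-archive-bucket"

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix="cold/"):
    for obj in page.get("Contents", []):
        s3.copy_object(
            Bucket=BUCKET,
            Key=obj["Key"],
            CopySource={"Bucket": BUCKET, "Key": obj["Key"]},
            StorageClass="ONEZONE_IA",
            MetadataDirective="COPY",
        )
```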

Software-defined storage startup Cloudian announced that its technology is available for test drive on Google Cloud Platform, as part of a continued industry trend. That trend is for storage vendors to make their storage software technology available on different cloud platforms such as AWS, Azure, Google and Softlayer among others.

Dell Technology World 2018

Dell Technologies made several announcements as part of Dell Technologies World that are covered in a series of posts here. Announcements included PowerMax the successor to VMAX, XtremIO X2 updates, new servers, workstations among many other items, read more here.

Besides the data infrastructure, cloud service provider and systems vendors, component suppliers including Cavium announced NVMe over Fibre Channel updates (here and here), along with Marvell NVMe updates here. HPE announced new thin clients and software (t430 Thin Client, HP mt44 Mobile Thin Client, HP ThinPro software), as well as updates to 3PAR and other storage solutions.

IBM announced various storage enhancements (and here) as well as a Happy 30th anniversary to the IBM Power9 based i systems. In other news, Kaseya bought backup data protection vendor Unitrends.

NVMe NAND flash Intel Optane

Micron announced that its first quad level cell (QLC) NAND flash solid state device (SSD), the 5210 ION, has begun shipping to select customers (and vendors). QLC packs or stacks 4 bits per cell. The 5210 is optimized for read-intensive workloads with up to 33% higher densities compared to previous-generation TLC (triple level cell) NAND flash. Broader market availability is expected later in fall 2018; the 5210 form factor is 2.5 inches (standard SSD or HDD form factor), with capacities from 1.92TB to 7.68TB.

In other news, Micron also announced a $10 Billion (USD) stock repurchase plan, along with an extension of its Intel 3D NAND flash memory partnership, including 96-layer 3D NAND. Meanwhile, various vendors are increasingly talking about how their systems are or will be storage class memory (SCM) ready, including for use with technologies such as Micron 3D XPoint, also known as Intel Optane, among others.

Microsoft has placed Azure Active Directory (AAD) storage authentication for Azure Blobs and Queues into public preview. Azure Storage Explorer is now released as version 1.0. AAD storage authentication enables organizations to implement role-based access control of Azure storage resources. Speaking of Azure, Microsoft has published several reference architectures and other content at the Azure Virtual Datacenter portal here.

If you have not done so, check out Azure File Sync, which is currently in public preview. Having been involved with and using it for over a year, including during the private preview, I find Azure File Sync an exciting, useful technology for creating hybrid distributed file sharing with cloud tiering. Learn more about Azure File Sync here and here. In other news, Microsoft has announced, as part of the April 2018 Windows 10 build, preview support for running the Google Android emulator on Hyper-V.

NetApp has had Azure based NAS storage in preview for a while now, and also announced Cloud Volumes on Google Cloud Platform (GCP). In addition to Cloud Volumes on AWS, Azure, and GCP, NetApp also announced enhanced NVMe based storage systems among other updates.

Two companies that have similar names are Opendrives (video workflow acceleration) and Opendrive (cloud storage, backup, and data protection). Meanwhile, data infrastructure startup Pavilion has received new funding and has begun talking about its NVMe-based hardware storage system, including NVMe over Fabrics (NVMeoF) support. Long-time data infrastructure converged server storage startup Pivot3 announced additional cloud workload mobility.

Pure Storage made a couple of announcements, including the FlashArray//X NVMe-based shared accelerated storage system, as well as the NVIDIA GPU-powered AIRI Mini for AI/DL/ML.

Have you heard about Snowflake Computing, aka the cloud data warehouse solution? If not, check them out here. Another cloud-related data infrastructure vendor to look into is Upbound.io, who have received additional funding for their multi-cloud management solutions.

Building off recent VMware vSphere updates (here) and Dell Technology World (here), there is an excellent post about Instant Clone in vSphere 6.7, and a VMware vSAN HCI assessment tool here.

Check out other industry news, comments, trends perspectives here.

Data Infrastructure Server StorageIO Comments Content

Server StorageIO Commentary in the news, tips and articles

Recent Server StorageIO industry trends perspectives commentary in the news.

Via SearchStorage: Comments Managing storage for IoT data at the enterprise edge
Via SearchCloudComputing: Comments Hybrid cloud deployment demands a change in security mindset
Via SearchStorage: Comments Dell EMC storage IPO, VMware merger plans still unclear
Via SearchStorage: Comments Dell EMC midrange storage keeps its overlapping arrays
Via SearchStorage: Comments Dell EMC all-flash PowerMax replaces VMAX, injects NVMe
Via IronMountain InfoGoto:  The growing Trend of Secondary Data Storage

View more Server, Storage and I/O trends and perspectives comments here.

Data Infrastructure Server StorageIOblog posts

Server StorageIOblog Data Infrastructure Posts

Recent and popular Server StorageIOblog posts include:

Dell Technology World 2018 Announcement Summary
Part II Dell Technology World 2018 Modern Data Center Announcement Details
Part III Dell Technology World 2018 Storage Announcement Details
Part IV Dell Technology World 2018 PowerEdge MX Gen-Z Composable Infrastructure
Part V Dell Technology World 2018 Server Converged Announcement Details
April 2018 Server StorageIO Data Infrastructure Update Newsletter
VMware vSphere vSAN vCenter version 6.7 SDDC Update Summary
PCIe Fundamentals Server Storage I/O Network Essentials
Have you heard about the new CLOUD Act data regulation?
Data Protection Recovery Life Post World Backup Day Pre GDPR
Microsoft Windows Server 2019 Insiders Preview
Application Data Value Characteristics Everything Is Not The Same
Data Infrastructure Resource Links cloud data protection tradecraft trends
IT transformation Serverless Life Beyond DevOps Podcast
Data Protection Diaries Fundamental Topics Tools Techniques Technologies Tips
Introducing Windows Subsystem for Linux WSL Overview
Data Infrastructure Primer Overview (Its Whats Inside The Data Center)
If NVMe is the answer, what are the questions?

View other recent as well as past StorageIOblog posts here

Server StorageIO Recommended Reading (Watching and Listening) List

Software-Defined Data Infrastructure Essentials SDDI SDDC

In addition to my own books, including Software Defined Data Infrastructure Essentials (CRC Press 2017) available at Amazon.com (check out the special sale price), the following are Server StorageIO data infrastructure recommended reading, watching and listening list items. The Server StorageIO data infrastructure recommended reading list covers various IT, data infrastructure and related topics; the Intel Recommended Reading List (IRRL) for developers is a good resource to check out. Speaking of my books, Didier Van Hoye (@WorkingHardInIt) has a good review over on his site that you can view here; also check out the rest of his great content while there.

Containers, serverless and Kubernetes continue to gain industry adoption as well as customer deployments. Here is some information about Microsoft Azure Kubernetes Service (AKS). Note that AWS has Elastic Kubernetes Service (EKS), Google has its Kubernetes Engine (GKE), and VMware and Pivotal offer Pivotal Container Service (PKS), among others.

Here is an interesting perspective by Ben Kepes about serverless (e.g., life beyond Kubernetes and containers, which themselves were life beyond virtualization, which for some was life beyond bare metal), as well as the all-too-frequent punditry and evangelism that something new must cause something else to be dead.

SNIA has updated its Emerald (aka green) energy effectiveness (focus on productivity) measurement specification (V3.01), including NAS NFS file activity (besides block). Learn more at snia.org/forums/green.

Watch for more items to be added to the recommended reading list book shelf soon.

Data Infrastructure Server StorageIO event activities

Events and Activities

Recent and upcoming event activities.

June 27, 2018 – Webinar – TBA

May 29, 2018 – Webinar – Microsoft Windows as a Service

April 24, 2018 – Webinar – AWS and on-site, on-premises hybrid data protection

See more webinars and activities on the Server StorageIO Events page here.

Data Infrastructure Server StorageIO Industry Resources and Links

Various useful links and resources:

Data Infrastructure Recommend Reading and watching list
Microsoft TechNet – Various Microsoft related from Azure to Docker to Windows
storageio.com/links – Various industry links (over 1,000 with more to be added soon)
objectstoragecenter.com – Cloud and object storage topics, tips and news items
OpenStack.org – Various OpenStack related items
storageio.com/downloads – Various presentations and other download material
storageio.com/protect – Various data protection items and topics
thenvmeplace.com – Focus on NVMe trends and technologies
thessdplace.com – NVM and Solid State Disk topics, tips and techniques
storageio.com/converge – Various CI, HCI and related SDS topics
storageio.com/performance – Various server, storage and I/O benchmark and tools
VMware Technical Network – Various VMware related items

Connect and Converse With Us

Connect with Server StorageIO via RSS, LinkedIn, Facebook, Twitter @StorageIO, Google+, email, YouTube and Instagram.

Subscribe to Newsletter – Newsletter Archives – StorageIO.com – StorageIOblog.com

What this all means and wrap-up

Data infrastructures are what exist inside physical data centers, spanning cloud, converged, hyper-converged, virtual, serverless and other software-defined as well as legacy environments. So far this spring there has been a lot of data infrastructure related activity, from new technology announcements to events and trends among others. Enjoy this edition of the Server StorageIO Data Infrastructure update newsletter, and watch for more NVMe, Gen-Z, cloud and data protection topics among others in future posts, articles, events, and newsletters.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Dell Technology World 2018 Announcement Summary

Dell Technology World 2018 Announcement Summary

Dell Technology World 2018 Announcement Summary
This is part one of a five-part series summarizing Dell Technology World 2018 announcements. Last week (April 30-May 3) I traveled to Las Vegas, Nevada (LAS) to attend Dell Technology World 2018 (e.g., DTW 2018) as a guest of Dell (that is a disclosure, btw). There were several announcements, along with plenty of other activity from sessions, meetings, hallway and event networking taking place at Dell Technology World DTW 2018.

Major data infrastructure technology announcements include:

  • PowerMax all-flash array (AFA) solid state device (SSD) NVMe storage system
  • PowerEdge four-socket 2U and 4U rack servers
  • XtremIO X2 AFA SSD storage system updates
  • PowerEdge MX preview of future composable servers
  • Desktop and thin client along with other VDI updates
  • Cloud and networking enhancements

Besides the above, additional data infrastructure related announcements were made in association with Dell Technology family members including VMware along with other partners, as well as customer awards. Other updates and announcements were tied to business updates from Dell Technology, Dell Technical Capital (venture capital), and, Dell Financial Services.

Dell Technology World Buzzword Bingo Lineup

Some of the buzzword bingo terms, topics, acronyms from Dell Technology World 2018 included AFA, AI, Autonomous, Azure, Bare Metal, Big Data, Blockchain, CI, Cloud, Composable, Compression, Containers, Core, Data Analytics, Dedupe, Dell, DFS (Dell Financial Services), DFR (Data Footprint Reduction), Distributed Ledger, DL, Durability, Fabric, FPGA, GDPR, Gen-Z, GPU, HCI, HDD, HPC, Hybrid, IOP, Kubernetes, Latency, MaaS (Metal as a Service), ML, NFV, NSX, NVMe, NVMeoF, PACE (Performance Availability Capacity Economics), PCIe, Pivotal, PMEM, RAID, RPO, RTO, SAS, SATA, SC, SCM, SDDC, SDS, Socket, SSD, Stamp, TBW (Terabytes Written per day), VDI, venture capital, VMware and VR among others.

Dell Technology World 2018 Venue
Dell Technology World DTW 2018 Event and Venue

Dell Technology World 2018 was located at the combined Palazzo and Venetian hotels along with the adjacent Sands Expo Center, kicking off Monday, April 30th and wrapping up May 4th.

The theme for Dell Technology World DTW 2018 was make it real, which in some ways was interesting given the focus on virtual including virtual reality (VR), software-defined data center (SDDC) virtualization, data infrastructure topics, along with artificial intelligence (AI).

Virtual Sky Dell Technology World 2018
Make it real – Venetian Palazzo St. Mark’s Square on the way to Sands Expo Center

There was plenty of AI, VR, SDDC along with other technologies, tools as well as some fun stuff to do including VR games.

Dell Technology World 2018 Commons Area
Dell Technology World Village Area near Key Note and Expo Halls

Dell Technology World 2018 Commons Area Drones
Dell Technology World Drone Flying Area

During a break from some meetings, I used a few minutes to fly a drone using VR. I have been operating drones (see some videos here) visually for several years, without depending on first-person view (FPV) or extensive autonomous operation, instead flying heads-up by hand. Needless to say, the VR was interesting, granted I encountered a bit of vertigo that I had to get used to.

Dell Technology World 2018 Commons Area Virtual Village
More views of the Dell Technology World Village and Commons Area with VR activity

Dell Technology World 2018 Commons Area Virtual Village
Dell Technology World Village and VR area

Dell Technology World 2018 Commons Area Virtual Village
Dell Technology World Bean Bag Area

Dell Technology World 2018 Announcement Summary

Ok, nuff with the AI, ML, DL, VR fun, time to move on to the business and technology topics of Dell Technologies World 2018.

What was announced at Dell Technology World 2018 included among others:

Dell Technology World 2018 PowerMax
Dell PowerMax Front View

Subsequent posts in this series take a deeper look at the various announcements as well as what they mean.

Where to learn more

Learn more about Dell Technology World 2018 and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

On the surface it may appear that there was not much announced at Dell Technology World 2018, particularly compared to some of the recent Dell EMC Worlds and EMC Worlds. However, it turns out that there was a lot announced, granted without some of the entertainment and circus-like atmosphere of previous events. Continue reading here Part II Dell Technology World 2018 Modern Data Center Announcement Details in this series, along with Part III here, Part IV here (including PowerEdge MX composable infrastructure leveraging Gen-Z) and Part V (servers and converged) here.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.