Dell Technology World 2018 Announcement Summary
This is part one of a five-part series summarizing the Dell Technology World 2018 announcements. Last week (April 30-May 3) I traveled to Las Vegas, Nevada (LAS) to attend Dell Technology World 2018 (DTW 2018) as a guest of Dell (that is a disclosure, btw). There were several announcements along with plenty of other activity from sessions, meetings, and hallway as well as event networking taking place at DTW 2018.

Major data infrastructure technology announcements include:

  • PowerMax all-flash array (AFA) solid state device (SSD) NVMe storage system
  • PowerEdge four-socket 2U and 4U rack servers
  • XtremIO X2 AFA SSD storage system updates
  • PowerEdge MX preview of future composable servers
  • Desktop and thin client along with other VDI updates
  • Cloud and networking enhancements

Besides the above, additional data infrastructure related announcements were made in association with Dell Technologies family members including VMware along with other partners, as well as customer awards. Other updates and announcements were tied to business updates from Dell Technologies, Dell Technologies Capital (venture capital), and Dell Financial Services.

Dell Technology World Buzzword Bingo Lineup

Some of the buzzword bingo terms, topics, acronyms from Dell Technology World 2018 included AFA, AI, Autonomous, Azure, Bare Metal, Big Data, Blockchain, CI, Cloud, Composable, Compression, Containers, Core, Data Analytics, Dedupe, Dell, DFS (Dell Financial Services), DFR (Data Footprint Reduction), Distributed Ledger, DL, Durability, Fabric, FPGA, GDPR, Gen-Z, GPU, HCI, HDD, HPC, Hybrid, IOP, Kubernetes, Latency, MaaS (Metal as a Service), ML, NFV, NSX, NVMe, NVMeoF, PACE (Performance Availability Capacity Economics), PCIe, Pivotal, PMEM, RAID, RPO, RTO, SAS, SATA, SC, SCM, SDDC, SDS, Socket, SSD, Stamp, TBW (Terabytes Written), VDI, venture capital, VMware and VR among others.

Dell Technology World 2018 Venue
Dell Technology World DTW 2018 Event and Venue

Dell Technology World 2018 was held at the combined Palazzo and Venetian hotels along with the adjacent Sands Expo Center, kicking off Monday, April 30th and wrapping up May 3rd.

The theme for Dell Technology World DTW 2018 was make it real, which in some ways was interesting given the focus on the virtual, including virtual reality (VR), software-defined data center (SDDC) virtualization, and data infrastructure topics, along with artificial intelligence (AI).

Virtual Sky Dell Technology World 2018
Make it real – Venetian Palazzo St. Mark’s Square on the way to Sands Expo Center

There was plenty of AI, VR, SDDC along with other technologies, tools as well as some fun stuff to do including VR games.

Dell Technology World 2018 Commons Area
Dell Technology World Village Area near Key Note and Expo Halls

Dell Technology World 2018 Commons Area Drones
Dell Technology World Drone Flying Area

During a break from meetings, I spent a few minutes flying a drone using VR, which was interesting. I have been operating drones (see some videos here) for several years, flying heads-up by hand and visually, without depending on first-person view (FPV) or extensive autonomous operation. Needless to say, the VR was interesting, granted I encountered a bit of vertigo that I had to get used to.

Dell Technology World 2018 Commons Area Virtual Village
More views of the Dell Technology World Village and Commons Area with VR activity

Dell Technology World 2018 Commons Area Virtual Village
Dell Technology World Village and VR area

Dell Technology World 2018 Commons Area Virtual Village
Dell Technology World Bean Bag Area

Dell Technology World 2018 Announcement Summary

Ok, nuff with the AI, ML, DL, VR fun, time to move on to the business and technology topics of Dell Technologies World 2018.

What was announced at Dell Technology World 2018 included among others:

Dell Technology World 2018 PowerMax
Dell PowerMax Front View

Subsequent posts in this series take a deeper look at the various announcements as well as what they mean.

Where to learn more

Learn more about Dell Technology World 2018 and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

On the surface it may appear that there was not much announced at Dell Technology World 2018, particularly compared to some of the recent Dell EMC World and EMC World events. However, it turns out that there was a lot announced, granted without some of the entertainment and circus-like atmosphere of previous events. Continue reading with Part II Dell Technology World 2018 Modern Data Center Announcement Details in this series, along with Part III here, Part IV here (including PowerEdge MX composable infrastructure leveraging Gen-Z) and Part V (servers and converged) here.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Part II Dell Technology World 2018 Modern Data Center Announcement Details
This is Part II, Dell Technology World 2018 Modern Data Center Announcement Details, part of a five-post series (view part I here, part III here, part IV here and part V here). Last week (April 30-May 3) I traveled to Las Vegas, Nevada (LAS) to attend Dell Technology World 2018 (DTW 2018) as a guest of Dell (that is a disclosure, btw).

Dell Technology World 2018 Venue
Dell Technology World DTW 2018 Event and Venue

What was announced at Dell Technology World 2018 included among others:

Dell Technology World 2018 PowerMax
Dell PowerMax Front View

Dell Technology World 2018 Modern Data Center Announcement Details

Dell Technologies data infrastructure related announcements included new solution competencies and expanded services deployment competencies with partners to boost deal size and revenues. An Internet of Things (IoT) solution competency was added, with others planned including High-Performance Computing (HPC) / Super Computing (SC), Data Analytics, Business Applications and Security-related topics. Dell Financial Services flexible consumption models announced at Dell EMC World 2017 provide flexible financing options for both partners as well as their clients.

Flexible Dell Financial Services cloud-like consumption model (e.g., pay for what you use) enhancements include reduced entry points for the Flex on Demand solutions across the Dell EMC storage portfolio. For example, Flex on Demand velocity pricing models are available for the Dell EMC Unity All-Flash Array (AFA) solid state device (SSD) storage solution and XtremIO X2 AFA systems, with price points of less than USD 1,000.00 per month. The benefit is that Dell partners have a financial vehicle to help their midrange customers use consumption-based financing for all-flash storage without custom configurations, resulting in faster deployment opportunities.

In other partner updates, Dell Technologies is enhancing the Dell EMC MyRewards incentive program to help drive new business. Dell EMC MyRewards is an opt-in, points-based reward program for solution provider sales reps and systems engineers. The MyRewards program is slated to replace the existing Partner Advantage and Sell & Earn programs with bigger and better promotions (up to 3x bonus payout, simplified global claiming).

What this means for partners is the ability to earn more while offering their clients new solutions with flexible financing and consumption-based pricing among other options. Other partner enhancements include an updated demo program, a Proof of Concept (POC) program, and IT transformation campaigns.

Powering up the Modern Data Center and Future of Work

Powering up the modern data center along with the future of work, part of the make it real theme of Dell Technologies World 2018, includes data infrastructure server, storage, and I/O networking hardware, software and service solutions. These data infrastructure solutions include NVMe-based storage, converged infrastructure (CI), hyper-converged infrastructure (HCI), software-defined data center (SDDC), VMware-based multi-clouds, along with modular infrastructure resources.

In addition to server and storage data infrastructure resources from desktop to data center, Dell also has a focus on enabling traditional as well as emerging Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL) as well as analytics applications. Besides providing data infrastructure resources to support AI, ML, DL, IoT and other applications along with their workloads, Dell is leveraging AI technology in some of its products, for example PowerMax.

Other Dell Technologies announcements include Virtustream cloud risk management and compliance, along with Epic and SAP Digital Health healthcare software solutions. In addition to Virtustream, Dell Technologies cloud-related announcements also include the VMware NSX-based Virtual Cloud Network with Microsoft Azure support along with security enhancements. Refer here to recent April VMware vSphere, vCenter, vSAN, vRealize and other virtual announcements, as well as here for March VMware cloud updates.

Where to learn more

Learn more about Dell Technology World 2018 and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

The above set of announcements spans business and technology along with partner activity. Continue reading here (Part III Dell Technology World 2018 Storage Announcement Details) in this series, along with part I (general summary) here, Part IV (PowerEdge MX Composable) here and part V here.

Ok, nuff said, for now.

Cheers Gs


Part III Dell Technology World 2018 Storage Announcement Details

This is Part III, Dell Technology World 2018 Storage Announcement Details, part of a five-post series (view part I here, part II here, part IV (PowerEdge MX Composable) here and part V here). Last week (April 30-May 3) I traveled to Las Vegas, Nevada (LAS) to attend Dell Technology World 2018 (DTW 2018) as a guest of Dell (that is a disclosure, btw).

Dell Technology World 2018 Storage Announcements Include:

  • PowerMax – Enterprise class tier 0 and tier 1 all-flash array (AFA)
  • XtremIO X2 – Native replication and new entry-level pricing

Dell Technology World 2018 PowerMax back view
Back view of Dell PowerMax

Dell PowerMax Something Old, Something New, Something Fast Near You Soon

PowerMax is the new companion to VMAX. Positioned for traditional tier 0 and tier 1 enterprise-class applications and workloads, PowerMax is optimized for dense server virtualization and SDDC, SAP, Oracle, SQL Server along with other low-latency, high-performance database activity. Different target workloads include Mainframe as well as Open Systems, AI, ML, DL, Big Data, as well as consolidation.

The Dell PowerMax is an all-flash array (AFA) architecture with end-to-end NVMe along with built-in AI and ML technology. It builds on the architecture of the Dell EMC VMAX (some models are still available) with newer, faster processors and is fully end-to-end NVMe ready (e.g., front-end server attachment and back-end devices).

The AI and ML features of the PowerMax operating environment (PowerMaxOS) include an engine (software) that learns and makes autonomous storage management decisions, including tiering. Other AI and ML enabled operations include performance optimizations based on I/O pattern recognition.

Other PowerMax features besides increased speeds, feeds and performance include data footprint reduction (DFR) with inline deduplication along with enhanced compression. The DFR benefits include up to 5:1 data reduction for space efficiency without performance impact, boosting performance effectiveness. The DFR, along with 2x improved rack density and up to 40% power savings (your results may vary), based on Dell claims, enables an impressive amount of performance, availability, capacity and economics (e.g., PACE) in a given number of cubic feet (or meters).
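To put the DFR and density claims in perspective, here is a small Python sketch using hypothetical raw capacity, rack unit and power figures (illustrative placeholders, not Dell specifications) that shows how the "up to 5:1" reduction, 2x density and 40% power claims translate into an effective footprint.

```python
# Illustrative sketch of what the "up to 5:1" data footprint reduction (DFR)
# claim translates to in effective capacity. The raw capacity, rack units and
# watts below are hypothetical placeholder inputs, not Dell specifications.

def effective_capacity_tb(raw_tb: float, dfr_ratio: float) -> float:
    """Effective (logical) capacity given raw capacity and a DFR ratio."""
    return raw_tb * dfr_ratio

raw_tb = 100.0           # hypothetical raw flash capacity (TB)
dfr = 5.0                # Dell's "up to" 5:1 inline dedupe plus compression
print(f"{raw_tb:.0f} TB raw at {dfr:.0f}:1 DFR ~= "
      f"{effective_capacity_tb(raw_tb, dfr):.0f} TB effective")

# 2x rack density means roughly half the rack units for a comparable footprint,
# and a 40% power saving scales the watts by 0.6 (your results may vary).
prev_rack_u, prev_watts = 40, 10_000   # hypothetical previous-generation figures
print(f"~{prev_rack_u // 2}U and ~{prev_watts * 0.6:,.0f} W for a comparable footprint")
```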

There are two PowerMax models including 2000 (scales from 1 to 2 redundant controllers) and 8000 (scales from 1 to 8 redundant controller nodes). Note that controller nodes are Intel Xeon multi-socket, multi-core processors enabling scale-up and scale-out performance, availability, and capacity. Competitors of the PowerMax include AFA solutions from HPE 3PAR, NetApp, and Pure Storage among others.

Dell Technology World 2018 PowerMax Front View
Front view of Dell PowerMax

Besides resiliency, data services and data protection, Dell claims PowerMax is 2x faster than its nearest high-end storage system competitor, with up to 150GB/sec (e.g., 1,200Gbps) of bandwidth, as well as up to 10 million IOPS with 50% lower latency compared to the previous VMAX.
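As a quick sanity check on the units behind those claims, the following sketch converts the quoted bandwidth between GB/s and Gbps and divides the IOPS figure evenly across controller nodes; the even split is my own illustrative assumption rather than a Dell specification.

```python
# Unit check on the PowerMax performance claims: 1 GB/s equals 8 Gbps, so the
# quoted 150 GB/sec matches the 1,200 Gbps figure. The even eight-way split of
# IOPS across controller nodes is an illustrative assumption, not a Dell figure.

bandwidth_gb_s = 150          # claimed aggregate bandwidth (GB/sec)
iops_total = 10_000_000       # claimed aggregate IOPS
max_nodes = 8                 # PowerMax 8000 scales to 8 controller nodes

print(f"{bandwidth_gb_s} GB/s = {bandwidth_gb_s * 8:,} Gbps aggregate")
print(f"{iops_total:,} IOPS over {max_nodes} nodes ~= "
      f"{iops_total // max_nodes:,} IOPS per node (even split assumed)")
```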

PowerMax is also fully end-to-end NVMe ready (both back-end and front-end). The back-end includes NVMe drives, devices, shelves and enclosures, while the front-end includes future NVMe over Fabrics (e.g., NVMeoF). Being NVMeoF ready enables PowerMax to support future front-end server network connectivity options in addition to traditional SAN Fibre Channel (FC) and iSCSI among others.

PowerMax is also ready for new, emerging high-speed, low-latency storage class memory (SCM). SCM is the next generation of persistent memory (PMEM), with performance closer to traditional DRAM while providing the persistence of flash SSD. Examples of SCM technologies entering the market include Intel Optane based on 3D XPoint, along with others such as those from Everspin.

IBM Z Zed Mainframe at Dell Technology World 2018
An IBM “Zed” Mainframe (in case you have never seen one)

Based on the performance claims, the Dell PowerMax has an interesting, if not potentially industry-leading, power, performance, availability, capacity and economic footprint per cubic foot (or meter). It will be interesting to see some third-party validation or audits of Dell's claims. Likewise, I look forward to seeing some real-world applied workloads of Dell PowerMax vs. other storage systems. Here are some additional perspectives via SearchStorage: Dell EMC all-flash PowerMax replaces VMAX, injects NVMe


Dell PowerMax Visual Studio (Image via Dell.com)

To help with customer decision making, Dell has created an interactive VMAX and PowerMax configuration studio that you can use to try out as well as learn about different options here. View more Dell PowerMax speeds, feeds, slots, watts, features and functions here (PDF).

Dell Technology World 2018 XtremIO X2

XtremIO X2

The Dell XtremIO X2 and XIOS 6.1 operating system (software-defined storage) are enhanced with native replication across wide area networks (WAN). The new WAN replication is metadata-aware and native to the XtremIO X2, implementing data footprint reduction (DFR) technology that reduces the amount of data sent over network connections. The benefit is more data moved in a given amount of time, along with better data protection requiring less time (and network) by moving only unique changed data.

Dell Technology World 2018 XtremIO X2 back view
Back View of XtremIO X2

Dell EMC claims to reduce WAN network bandwidth by up to 75% utilizing the new XtremIO X2 native asynchronous replication. Also, Dell says XtremIO X2 requires up to 38% less storage space at disaster recovery and business resiliency locations while maintaining a predictable recovery point objective (RPO) of 30 seconds. Another XtremIO X2 announcement is a new entry model for customers at up to 55% lower cost than previous product generations. View more information about Dell XtremIO X2 here, along with speeds and feeds here, here, as well as here.
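To illustrate what the claimed 75% WAN reduction and 38% remote capacity savings could mean in practice, here is a small sketch with hypothetical change-rate and capacity inputs (not Dell test data); only the percentage claims come from the announcement.

```python
# Rough illustration of the XtremIO X2 replication claims. The daily change
# volume and DR copy size are hypothetical inputs; the 75% WAN reduction and
# 38% capacity reduction are Dell's "up to" claims.

daily_changed_gb = 500.0      # hypothetical unique changed data per day (GB)
wan_reduction = 0.75          # up to 75% less data sent over the WAN
dr_copy_tb = 100.0            # hypothetical DR copy size without savings (TB)
dr_reduction = 0.38           # up to 38% less capacity at the DR site

print(f"WAN transfer: ~{daily_changed_gb * (1 - wan_reduction):.0f} GB/day "
      f"instead of {daily_changed_gb:.0f} GB/day")
print(f"DR capacity:  ~{dr_copy_tb * (1 - dr_reduction):.0f} TB "
      f"instead of {dr_copy_tb:.0f} TB")
```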

What about Dell Midrange Storage Unity and SC?

Here are some perspectives Via SearchStorage: Dell EMC midrange storage keeps its overlapping arrays.

Dell Bulk and Elastic Cloud Storage (ECS)

One of the questions I had going into Dell Technology World 2018 was the status of ECS (and its predecessors Atmos as well as Centera) bulk object storage, given the lack of messaging and news around it. Specifically, my concern was that if ECS is the platform for storing and managing data to be preserved for the future, what is the current status, state, as well as future of ECS?

In conversations with the Dell ECS folks, ECS, which has encompassed Centera functionality, is very much alive; stay tuned for more updates. Also, note that Centera has reached end of life (EOL). However, its feature functionality has been absorbed by ECS, meaning that preserved data can now be managed by ECS. While I cannot divulge the details of some meeting discussions, I can say that I am comfortable (for now) with the future direction of ECS along with the data it manages; stay tuned for updates.

Dell Data Protection

What about data protection? Security was mentioned in several different contexts during Dell Technology World 2018, and a strong physical security presence was seen at the Palazzo and Sands venues. Likewise, there was a data protection presence at Dell Technologies World 2018 in the expo hall, as well as in various sessions.

What was heard was mainly around data protection management tools, hybrid approaches, as well as data protection appliances and Data Domain-based solutions. Perhaps we will hear more from Dell Technologies World in the future about data protection related topics.

Where to learn more

Learn more about Dell Technology World 2018 and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

If there was any doubt about whether Dell would keep EMC storage progressing forward, the above announcements help to show some examples of what they are doing. On the other hand, let's stay tuned to see what news and updates appear in the future pertaining to midrange storage (e.g., Unity and SC) as well as Isilon, ScaleIO, and data protection platforms as well as software among other technologies.

Continue reading part IV (PowerEdge MX Composable and Gen-Z) here in this series, as well as part I here, part II here, and part V here.

Ok, nuff said, for now.

Cheers Gs


Part IV Dell Technology World 2018 PowerEdge MX Gen-Z Composable Infrastructure
This is Part IV, Dell Technology World 2018 PowerEdge MX Gen-Z Composable Infrastructure, part of a five-post series (view part I here, part II here, part III here and part V here). Last week (April 30-May 3) I traveled to Las Vegas, Nevada (LAS) to attend Dell Technology World 2018 (DTW 2018) as a guest of Dell (that is a disclosure, btw).

Introducing PowerEdge MX Composable Infrastructure (the other CI)

Dell announced at Dell Technology World 2018 a preview of the new PowerEdge MX (Kinetic) family of data infrastructure resource servers. PowerEdge MX is being developed to meet the needs of resource-centric data infrastructures that require scalability, as well as performance, availability, capacity, economic (PACE) flexibility for diverse workloads. Read more about Dell PowerEdge MX, Gen-Z and composable infrastructures (the other CI) here.

Some of the workloads being targeted by PowerEdge MX include large-scale dense SDDC virtualization (and containers), along with private clouds (or public clouds operated by service providers). Other workloads include AI, ML, DL, data analytics, HPC, SC, big data, in-memory database, software-defined storage (SDS), software-defined networking (SDN), and network function virtualization (NFV) among others.

The newly previewed PowerEdge MX will be formally announced later in 2018, featuring a flexible, decomposable, as well as composable architecture that enables resources to be disaggregated and reassigned or aggregated to meet particular needs (e.g., defined or composed). Instead of traditional software-defined virtualization carving up servers into smaller virtual machines or containers to meet workload needs, PowerEdge MX is part of a next-generation approach that enables server resources to be leveraged at a finer granularity.

For example, today an entire server, including all of its sockets, cores, memory, and PCIe devices among other resources, gets allocated and defined for use. A server gets defined for use by an operating system when running bare metal (or Metal as a Service), or by a hypervisor. PowerEdge MX (and other platforms expected to enter the market) has a finer granularity where, with the proper upper-layer (or higher-altitude) software, resources can be allocated and defined to meet different needs.

What this means is the potential to allocate resources to a given server with more granularity and flexibility, as well as to combine multiple servers' resources to create what appears to be a larger server. There are vendors in the market who have been working on and enabling this type of approach for several years, ranging from ScaleMP to startups Liqid and Tidal among others. However, at the heart of the Dell PowerEdge MX is the new, emerging Gen-Z technology.

If you are not familiar with Gen-Z, add it to your buzzword bingo lineup and learn about it, as it is coming your way. A brief overview of the Gen-Z consortium along with Gen-Z material and primer information is here. A common question is whether Gen-Z is a replacement for PCIe; for now the answer is that they will coexist and complement each other. Another common question is whether Gen-Z will replace Ethernet and InfiniBand, and the answer for now is that they complement each other. Yet another question is whether Gen-Z will replace Intel QuickPath and other CPU, device and memory interconnects; the answer is potentially, and in my opinion, watch to see how long Intel drags its feet.

Note that composability is another way of saying defined without saying defined, something to pay attention to as well as have some vendor fun with. Also, note that Dell refers to PowerEdge MX and a Kinetic architecture, which is not the same as the Seagate Kinetic Ethernet-based object key-value accessed drive initiative from a few years ago (learn more about Seagate Kinetic here). Learn more about Gen-Z and what Dell is doing here.

Where to learn more

Learn more about Dell Technology World 2018 and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Dell has provided a glimpse of what they are working on pertaining to composable infrastructure (the other CI), as well as Gen-Z and the related next generation of servers with PowerEdge MX and Kinetic. Stay tuned for more about Gen-Z and composable infrastructures. Continue reading Part V (servers and converged) in this series here, as well as part I here, part II here and part III here.

Ok, nuff said, for now.

Cheers Gs


VMware vSphere vSAN vCenter version 6.7 SDDC Update Summary

VMware last week announced vSphere, vSAN and vCenter version 6.7 among other updates for its software-defined data center (SDDC) and software-defined infrastructure (SDI) solutions. The new April v6.7 announcements followed those from this past March when VMware announced cloud enhancements with partner AWS (more on that announcement here).

VMware vSphere 6.7
VMware vSphere Web Client with vSphere 6.7

For those looking for a more extended version with a closer look and analysis of what VMware announced, click here for part two, and here for part three.

What VMware announced is general availability (GA) meaning you can now download from here the bits (e.g., software) that include:

  • ESXi aka vSphere 6.7 hypervisor build 8169922
  • vCenter Server 6.7 build 8217866
  • vCenter Server Appliance 6.7 build 8217866
  • vSAN 6.7 and other related SDDC management tools
  • vSphere Operations Management (vROps) 6.7
  • Increased the speeds, feeds and other configuration maximum limits

For those not sure or needing a refresher, vCenter Server is the software for extended management across multiple vSphere ESXi hypervisor hosts; it runs on a Windows platform or as the Linux-based vCenter Server Appliance (VCSA).

Major themes of the VMware April announcement are increased scalability along with performance enhancements, ease of use, security, as well as extended application support. As part of the v6.7 improvements, VMware is focusing on simplifying as well as accelerating software-defined data infrastructure along with other SDDC lifecycle operation activities.

Extended application support includes support for traditional demanding enterprise IT applications, along with High-Performance Compute (HPC), Big Data, Little Data, Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL), as well as other emerging workloads. Part of supporting demanding workloads includes enhanced support for Graphics Processing Units (GPU) such as those from Nvidia among others.

What Happened to vSphere 6.6?

A question that comes up is that there is vSphere 6.5 (and its smaller point releases) and now vSphere 6.7 (along with vCenter and vSAN among others). What happened to vSphere 6.6? Good question, and I am not sure what the real or virtual answer from VMware is or would be. My take is that this is a good opportunity for VMware to align the versions of its principal components (e.g., vSphere/ESXi, vCenter, vSAN) to a standard or unified numbering scheme.

Where to learn more

Learn more about VMware vSphere, vCenter, vSAN and related software-defined data center (SDDC); software-defined data infrastructures (SDDI) topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

Overall, the VMware vSphere, vSAN and vCenter version 6.7 enhancements are a good evolution of their core technologies for enabling hybrid, converged software-defined data infrastructures and software-defined data centers. Continue reading more about the VMware vSphere vSAN vCenter version 6.7 SDDC update here in part II (focus on management, vCenter plus security) and part III here (focus on server storage I/O and deployment) of this three-part series.

Ok, nuff said, for now.

Cheers Gs


VMware vSphere vSAN vCenter v6.7 SDDC details

This VMware vSphere vSAN vCenter v6.7 SDDC details post focuses on vCenter, security, and management. This is part two (part one here) of a three-part series (part III here) looking at the VMware vSphere vSAN vCenter v6.7 announcement details.

Last week VMware announced vSphere, vSAN and vCenter v6.7 updates as part of enhancing the core components of its software-defined data center (SDDC) and software-defined infrastructure (SDI) solutions. This is an expanded post as a companion to the Server StorageIO summary piece here. These April updates followed those from this past March when VMware announced cloud enhancements with partner AWS (more on that announcement here).

VMware vSphere 6.7
VMware vSphere Web Client with vSphere 6.7

What VMware announced is generally available (GA) meaning you can now download from here the bits (e.g., software) that include:

  • ESXi aka vSphere 6.7 hypervisor build 8169922
  • vCenter Server 6.7 build 8217866
  • vCenter Server Appliance 6.7 build 8217866
  • vSAN 6.7 and other related SDDC management tools
  • vSphere Operations Management (vROps) 6.7

For those not sure or needing a refresher, vCenter Server is the software for extended management across multiple vSphere ESXi hypervisor hosts; it runs on a Windows platform or as the Linux-based vCenter Server Appliance (VCSA).

Major themes of the VMware April announcements are focused around:

  • Increased enterprise and hybrid cloud scalability
  • Resiliency, availability, durable and secure
  • Performance, efficiency and elastic
  • Intuitive, simplified management at scale
  • Expanded support for demanding application workloads

Expanded application support includes support for traditional demanding enterprise IT applications, along with High-Performance Compute (HPC), Big Data, Little Data, Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL), as well as other emerging workloads. Part of supporting demanding workloads includes enhanced support for Graphics Processing Units (GPU) such as those from Nvidia among others.

What was announced

As mentioned above and in other posts in this series, VMware announced new versions of its ESXi hypervisor vSphere v6.7, as well as virtual SAN (vSAN) v6.7 and virtual Center (vCenter) v6.7 among other related tools. One of the themes of this announcement by VMware is hybrid SDDC spanning on-site, on-premises (or on-premise if you prefer) to the public cloud. Other topics involve increasing scalability along with stability, as well as ease of management along with security and performance updates.

As part of the v6.7 enhancements, VMware is focusing on simplifying, as well as accelerating software-defined data infrastructure along with other SDDC lifecycle operation activities. Additional themes and features focus on server, storage, I/O resource enablement, as well as application extensibility support.

vSphere ESXi hypervisor

With v6.7, ESXi host maintenance times are improved with a single reboot vs. the previous multiple reboots for some upgrades, as well as with quick boot. Quick boot enables restarting the ESXi hypervisor without rebooting the physical machine, skipping time-consuming hardware initialization.

The enhanced HTML5-based vSphere client GUI (along with API and CLI) has increased feature and function parity compared to predecessor versions and other VMware tools. Increased functionality includes NSX, vSAN and VMware Update Manager (VUM) capabilities among others. In other words, not only are new technologies supported, but functions for which you may have in the past resisted using the web-based interfaces due to gaps are being addressed with this release.

vCenter Server and vCenter Server Appliance (VCSA)

VMware has announced that moving forward the hosted (e.g., running on a Windows server platform) version is being deprecated. What this means is that it is time for those not already doing so to migrate to the vCenter Server Appliance (VCSA). As a refresher, VCSA is a turnkey software-defined virtual appliance that includes vCenter Server software running on the VMware Photon OS Linux operating system as a virtual machine.

As part of the update, the enhanced vCenter Server Appliance (VCSA) supports new efficient, effective API management along with multiple vCenters, as well as performance improvements. VMware cites 2x faster vCenter operations per second, a 3x reduction in memory usage, along with 3x quicker Distributed Resource Scheduler (DRS) related activities (across powered-on VMs).

What this means is that VCSA is a self-contained virtual appliance that can be configured for small, medium, large and very large environments. With the v6.7 vCenter Server Appliance emphasis on scaling, as well as performance along with security and ease-of-use features, VCSA is better positioned to support large enterprise deployments along with hybrid cloud. VCSA v6.7 is more than just a UI enhancement, with the v6.5 UI shown below followed by an image of the v6.7 UI.

VMware vSphere 6.5
VMware vCenter Appliance v6.5 main UI

VMware vSphere 6.7
VMware vCenter Appliance v6.7 main UI

Besides UI enhancements (along with API and CLI) for vCenter, other updates include more robust data protection (aka backup) capability for the vCenter Server environment. In the prior v6.5 version there was a basic capability to specify a destination to which vCenter configuration information is sent for backup data protection (see image below).

vCenter 6.5 backup
VMware vCenter Appliance 6.5 backup

Note that the VCSA backup only provides data protection for the vCenter Appliance itself, including its configuration, settings, along with data collected about the VMware hosts (and VMs) being managed. VCSA backup does not provide data protection of the individual VMware hosts or VMs, which is accomplished via other data protection techniques, tools and technologies.

In v6.7 vCenter now has enhanced capabilities (shown below) for enabling data protection of configuration, settings, performance and other metrics. What this means is that with the improved UI it is now possible to set up backup schedules as part of enabling automation for data protection of vCenter servers.

vCenter 6.7 backup
VMware VCSA v6.7 enhanced UI and data protection aka backup

The following shows some of the configuration sizing options as part of VCSA deployment. Note that the vCPU, Memory, and Storage are for the VCSA itself to support a given number of VMware hosts (e.g., physical machines) as well as guest virtual machines (VM).

 

Size          VCSA vCPU    VCSA Memory    VCSA Storage    Hosts    VMs
Tiny          2            10GB           300GB           10       100
Small         4            16GB           340GB           100      1000
Medium        8            24GB           525GB           400      4000
Large         16           32GB           740GB           1000     10000
Extra Large   24           48GB           1180GB          2000     35000

vCenter Server Appliance 6.7 sizing options and the number of physical machines (e.g., VM hosts) and virtual machines supported
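As a quick illustration of how the sizing tiers above might be applied, here is a small helper sketch (my own illustrative code, not a VMware-provided sizing tool) that picks the smallest VCSA size covering a given host and VM count.

```python
# Hypothetical helper that maps an environment to the smallest VCSA deployment
# size from the table above. An illustrative sketch, not a VMware sizing tool.

VCSA_SIZES = [
    # (name, vCPU, memory_GB, storage_GB, max_hosts, max_VMs)
    ("Tiny",         2, 10,  300,   10,   100),
    ("Small",        4, 16,  340,  100,  1000),
    ("Medium",       8, 24,  525,  400,  4000),
    ("Large",       16, 32,  740, 1000, 10000),
    ("Extra Large", 24, 48, 1180, 2000, 35000),
]

def pick_vcsa_size(hosts: int, vms: int):
    """Return the smallest size whose host and VM limits cover the environment."""
    for name, vcpu, mem_gb, disk_gb, max_hosts, max_vms in VCSA_SIZES:
        if hosts <= max_hosts and vms <= max_vms:
            return name, vcpu, mem_gb, disk_gb
    raise ValueError("Exceeds a single VCSA; consider multiple linked vCenters")

print(pick_vcsa_size(hosts=250, vms=2500))   # -> ('Medium', 8, 24, 525)
```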

Keep in mind that in addition to the above individual VCSA configuration limits, multiple vCenters can be grouped, including via linked mode spanning on-site, on-premises (on-prem if you prefer) as well as the cloud. VMware vCenter Server hybrid linked mode enables seamless visibility and insight across on-site, on-premises, as well as public clouds such as AWS among others.

In other words, vCenter with hybrid linked mode enables you to have situational awareness and avoid flying blind in and among clouds. As part of hybrid vCenter environment support, cross-cloud (public, private) hot and cold migration, including clone as well as vMotion across mixed VMware versions, is supported. Using linked mode, multiple roles, permissions, tags, and policies can be managed across different groups (e.g., unified management) as well as locations.

VMware and vSphere Security

Security is a big push for VMware with this release, including Trusted Platform Module (TPM) 2.0 along with virtual TPM 2.0 for protecting both the hypervisor and guest operating systems. Data encryption was introduced in vSphere 6.5 and is enhanced with simplified management along with protection of data at rest and in flight (while in motion).

In other words, encrypted vMotion across different vCenter instances and versions is supported, as well as across hybrid environments (e.g., on-premises and public cloud). Other security enhancements include tighter collaboration and integration with Microsoft for Windows VMs, as well as with vSAN, NSX and vRealize for a secure software-defined data infrastructure, aka SDDC. For example, VMware has enhanced support for Microsoft Virtualization Based Security (VBS) including Credential Guard, where vSphere provides a secure virtual hardware platform.

Additional VMware 6.7 security enhancements include multiple syslog targets and FIPS 140-2 validated modules. Note that there is a difference between FIPS certified and FIPS validated; VMware vCenter and ESXi leverage two modules (VM Kernel Cryptographic and OpenSSL) that are currently validated. VMware is not playing games like some vendors when it comes to disclosing FIPS 140-2 validated vs. certified.

Note, when a vendor mentions FIPS 140-2 and implies or says certified, ask them if they indeed are certified. Any vendor who is actually FIPS 140-2 certified should not get upset if you press them politely. Instead, they should thank you for asking. Otoh, if a vendor gives you a used-car-salesperson-style dance or gets upset, ask them why they are so sensitive, or, perhaps, what they are ashamed of or hiding, just saying. Learn more here.

vRealize Operations Manager (vROps)

The vRealize Operations Manager (vROps) v6.7 dashboard for the vSphere client plugin provides an overview of cluster views and alerts for both vCenter and vSAN. What this means is that you will want to upgrade vROps to v6.7. The vROps benefit is dashboards for optimal performance, capacity, troubleshooting, and management configuration.

Where to learn more

Learn more about VMware vSphere, vCenter, vSAN and related software-defined data center (SDDC); software-defined data infrastructures (SDDI) topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

VMware continues to enhance their core SDDC data infrastructure resources to support new and emerging, as well as legacy enterprise applications at scale. VMware enhancements include management, security along with other updates to support the demanding needs of various applications and workloads, along with supporting application developers.

Some examples of demanding workloads include, among others, AI, Big Data, Machine Learning, in-memory and high-performance compute (HPC), among other resource-intensive new workloads, as well as existing applications. This includes enhanced support for Nvidia physical and virtual Graphics Processing Units (GPU) that are used for compute-intensive graphics, as well as non-graphics processing (e.g., AI, ML) workloads.

With the v6.7 announcements, VMware is providing proof points that they are continuing to invest in their core SDDC enabling technologies. VMware is also demonstrating the evolution of the vSphere ESXi hypervisor along with associated management tools for hybrid environments, with ease-of-use management at scale along with security. View more about VMware vSphere vSAN vCenter v6.7 SDDC details in part three of this three-part series here (focus on server storage I/O, deployment information and analysis).

Ok, nuff said, for now.

Cheers Gs


VMware vSphere vSAN vCenter Server Storage I/O Enhancements

This is part three of a three-part series looking at last week's v6.7 VMware vSphere, vSAN and vCenter server storage I/O enhancements. The focus of this post is on server, storage and I/O along with deployment and other wrap-up items. In case you missed them, read part one here and part two here.

VMware, as part of updates to vSphere, vSAN and vCenter, introduced several server storage I/O enhancements, some of which have already been mentioned.

VMware vSphere 6.7
VMware vSphere Web Client with vSphere 6.7

Server Storage I/O enhancements for vSphere, vSAN, and vCenter include:

  • Native 4K (4kn) block sector size for HDD and SSD devices
  • Intel Volume Management Device (VMD) for NVMe flash SSD
  • Support for Persistent Memory (PMEM) aka Storage Class Memory (SCM)
  • SCSI UNMAP (similar to TRIM) for SSD space reclamation
  • XCOPY and VAAI enhancements
  • VMFS-6 is now the default datastore type (VMFS-3 is EOL and auto-upgrades to VMFS-5)
  • VMFS-6 SESparse vSphere snapshot space reclamation
  • VVOL supporting SCSI-3 persistent reservations and IPv6
  • Reduced dependencies on RDMs with VVOL enhancements
  • Software-based Fibre Channel over Ethernet (FCoE) initiator
  • Para Virtualized RDMA (PV-RDMA)
  • Various speeds and feeds enhancements

VMware vSphere 6.7 also adds native 4Kn sector size (e.g., 4,096-byte blocks) in addition to traditional native and emulated 512-byte sectors for HDDs as well as SSDs. The larger block size means performance improvements along with better storage allocation for applications, particularly for large-capacity devices. Other server storage I/O updates include RDMA over Converged Ethernet (RoCE) enabled Remote Direct Memory Access (RDMA), as well as Intel VMD for NVMe. Learn more about NVMe here.
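For a sense of what the move from 512-byte to 4Kn sectors means at the addressing level, the following sketch computes the logical block count for a hypothetical device at each sector size; it is plain arithmetic rather than anything vSphere-specific.

```python
# Simple arithmetic showing what moving from 512-byte to 4K native (4Kn)
# sectors means for logical block (LBA) counts. The 4 TB capacity is a
# hypothetical example device, not a vSphere-specific figure.

def lba_count(capacity_bytes: int, sector_bytes: int) -> int:
    """Number of addressable logical blocks for a given capacity and sector size."""
    return capacity_bytes // sector_bytes

capacity = 4 * 10**12   # hypothetical 4 TB (decimal) device
for sector in (512, 4096):
    print(f"{sector}-byte sectors: {lba_count(capacity, sector):,} LBAs")
# 4Kn yields 8x fewer LBAs to track, which is part of the efficiency benefit
# for large-capacity devices.
```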

Other storage-related enhancements include SCSI UNMAP (e.g., the SCSI equivalent of SSD TRIM) with a selectable priority of none or low for SSD space reclamation. Also enhanced is SESparse vSphere snapshot virtual disk space reclamation (for VMFS-6). VMware XCOPY (Extended Copy) now works with vendor-specific VMware API for Array Integration (VAAI) primitives along with the SCSI T10 standard used for cloning, zeroing and copy offload to storage systems. Virtual Volumes (VVOL) have been enhanced to support IPv6 and SCSI-3 persistent reservations to help reduce dependency on or use of RDMs.

VMware configuration maximums (e.g., speeds and feeds) include server storage I/O enhancements such as boosting from 512 to 1,024 LUNs per host. Other speeds and feeds improvements include going from 2,048 to 4,096 server storage I/O paths per host, and PVSCSI adapters now support up to 256 disks vs. 64 (virtual disks or Raw Device Mapped aka RDM). Also note that VMFS-3 is now end of life (EOL) and will be automatically upgraded to VMFS-5 during the upgrade to vSphere 6.7, while the default datastore type is VMFS-6.

Additional server storage I/O enhancements include RoCE for RDMA, enabling low-latency server-to-server memory-based data movement, along with para-virtualized RDMA (PV-RDMA) for Linux guest OSs. ESXi has been enhanced with iSER (iSCSI Extensions for RDMA), leveraging faster server I/O interconnects and CPU offload. Another server storage I/O enhancement is a software-based Fibre Channel over Ethernet (e.g., SW-FCoE) initiator using lossless Ethernet fabrics.

Note as a reminder or refresher that VMware also has para-virtualized (e.g., virtualization-optimized) drivers for Ethernet and other networks, NVMe, as well as SCSI, in addition to standard devices. For example, from a VM you can access an NVMe-backed datastore using the standard VMware SATA, SCSI Controller, LSI Logic SAS, LSI Logic Parallel, VMware Paravirtual, or native NVMe controller (virtual machine version 6.5 or higher) for better performance. Likewise, instead of using the standard SAS and SCSI VM devices, the VMware para-virtualized SCSI (PVSCSI) adapter can be used for better performance and lower CPU overhead.

Besides the previously mentioned items, other enhancements for vSAN include support for logical clusters such as Oracle RAC, Microsoft SQL Server Availability Groups, Microsoft Exchange Database Availability Groups, as well as Windows Server Failover Clusters (WSFC) using the vSAN iSCSI service. Note that as a proof point of continued vSAN customer adoption, VMware is claiming 10,000 deployments. For performance, vSAN enhancements also include updates for adaptive placement, adaptive resync, as well as faster cache destage. The benefit of quicker destage is that cache can be drained or written to disk to eliminate or prevent I/O bottlenecks.

As part of supporting expanding, more demanding enterprise and other workloads, vSAN enhancements also include resiliency updates, physical resource and configuration checks, and health and monitoring checks. Other vSAN improvements include streamlined workflows and converged management views across vCenter as well as vRealize tools. Read more from VMware about server storage I/O enhancements to vSphere, vSAN, and vCenter here.

VMware Server Storage I/O Memory Matters

VMware is also joining others with support for evolving persistent memory (PMEM) leveraging so-called storage class memories (SCM). Note, some refer to SCM as persistent memory or PM; however, context is needed, as PM also means physical machine, physical memory, or primary memory among others. With the new PMEM support for server memory, VMware is laying the foundation for guest operating systems as well as applications to leverage the technology.

For example, Microsoft Windows Server 2016 supports SCM as a block-addressable storage medium and file system, as well as for Direct Access (e.g., DAX). What this means is that fast file systems can be backed by persistent storage that is faster than traditional SSDs, and applications such as SQL Server that support DAX can do direct persistent I/O.

As a refresher, Non-Volatile DIMMs (NVDIMM) enable persistent server memory by combining traditional DRAM with some persistent storage class memory. By combining DRAM and storage class memory (SCM), also known as PMEM, servers can use the RAM as fast read/write memory, with the data destaged to persistent memory. Examples of SCM include 3D XPoint (the basis of Intel Optane) along with others such as Everspin NVDIMMs (available from Dell, HPE among others). Learn more about SSD and storage class memories (SCM) along with PMEM here, as well as NVMe here.

Deployment, be prepared before you grab the bits and install the software

For those of you who want or need to download the bits, here is a link to the VMware software download. However, before racing off to install the new software in your production (or perhaps even lab) environment, do your homework. Read the important information from VMware before upgrading to vSphere here (e.g., KB 53704) as well as the release notes, and review VMware's best practices for upgrading to vCenter here.

Some of the things to be aware of include upgrade order and dependencies; also make sure you have good, current backups of your vSphere ESXi configuration and vCenter appliance. In addition, view the vSphere ESXi and vCenter 6.7 release notes here.

There are some hardware compatibility items you need to be aware of, both for this as well as future versions. Check out the VMware hardware (and software) compatibility list (HCL), along with partner product interoperability matrices, as well as release notes. Pay attention to devices deprecated and no longer supported in ESXi 6.7 (e.g., VMware KB 52583) as well as those that may not work in future releases, to avoid surprises.

Where to learn more

Learn more about VMware vSphere, vCenter, vSAN and related software-defined data center (SDDC); software-defined data infrastructures (SDDI) topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

In case you missed them, read part one here and click here for part two of this series.

Some will say: what's the big deal, why all the noise, coverage and discussion for a point release?

My view is that this is a big evolutionary package of upgrade enhancements and new features, even if a so-called point release (e.g., going from 6.5 to 6.7). Some vendors might have done this type of update as a major, e.g., version 6.x to 7.x, upgrade to make more noise, get increased coverage or merely enhance the appearance of software maturity (e.g., V1.x to V2.x to V3.x, and so forth).

In the case of VMware, what some might refer to as smaller point releases are the ones such as vSphere 6.5.0 to 6.5.x among others. Thus, there is a lot in this package of updates from VMware, and it is good to see continued enhancements.

I also think that VMware is getting challenged on different fronts, including by Microsoft as well as cloud partners among others, which is good. The reason I believe it is okay that VMware is being challenged is that, given their history, they tend to step up their game, playing harder as well as stronger against the competition.

VMware is continuing to invest in and extend its core SDDC technologies to meet the expanding demands of various organizations, from small to ultra-large enterprises. What this means is that VMware is addressing ease of use for smaller environments, as well as removing complexity to enable simplified scaling from on-site (or on-premises and on-prem if you prefer) to the public cloud.

Overall, the announced version 6.7 of the VMware vSphere, vSAN and vCenter SDDC core components is a useful extension of their existing technology, enabling customers more flexibility, scalability, resiliency, and security to meet their various needs.

Ok, nuff said, for now.

Cheers Gs


Use Intel Optane NVMe U.2 SFF 8639 SSD drive in PCIe slot

Use NVMe U.2 SFF 8639 disk drive form factor SSD in PCIe slot

server storage I/O data infrastructure trends

Need to install or use an Intel Optane NVMe 900P or other Non-Volatile Memory Express (NVMe) based U.2 SFF 8639 disk drive form factor Solid State Device (SSD) in a PCIe slot?

For example, I needed to connect an Intel Optane NVMe 900P U.2 SFF 8639 drive form factor SSD into one of my servers using an available PCIe slot.

The solution I used was a carrier adapter card such as those from Ableconn (PEXU2-132 NVMe 2.5-inch U.2 [SFF-8639]), available via Amazon.com among other global venues.

Top Intel 750 NVMe PCIe AiC SSD, bottom Intel Optane NVMe 900P U.2 SSD with Ableconn carrier

The above image shows, on top, an Intel 750 NVMe PCIe Add-in Card (AiC) SSD and, on the bottom, an Intel Optane NVMe 900P 280GB U.2 (SFF 8639) drive form factor SSD mounted on an Ableconn carrier adapter.

NVMe server storage I/O sddc

NVMe Tradecraft Refresher

NVMe is a protocol that is implemented across different topologies, including locally via PCIe using U.2 aka SFF-8639 (aka disk drive form factor), M.2 aka Next Generation Form Factor (NGFF) also known as "gum stick", along with PCIe Add-in Card (AiC). NVMe-accessed devices can be installed in laptops, ultrabooks, workstations, servers and storage systems using the various form factors. U.2 drives are also referred to by some as PCIe drives in that the NVMe command set protocol is implemented over a PCIe x4 physical connection to the devices. Jump ahead if you want to skip over the NVMe primer refresher material to learn more about U.2 8639 devices.

data infrastructure nvme u.2 8639 ssd
Various SSD device form factors and interfaces

In addition to form factor, NVMe devices can be direct attached and dedicated, rack and shared, as well as accessed via networks also known as fabrics such as NVMe over Fabrics.

NVMeoF FC-NVMe NVMe fabric SDDC
The many facets of NVMe as a front-end, back-end, direct attach and fabric

Context is important with NVMe in that fabric can mean NVMe over Fibre Channel (FC-NVMe), where the NVMe command set protocol is used in place of the SCSI Fibre Channel Protocol (e.g., SCSI_FCP), aka FCP, or what many simply know and refer to as Fibre Channel. NVMe over Fabrics can also mean the NVMe command set implemented over an RDMA over Converged Ethernet (RoCE) based network.

NVM and NVMe accessed flash SCM SSD storage

Another point of context is not to confuse Nonvolatile Memory (NVM), which is the storage or memory media, with NVMe, which is the interface for accessing that storage (similar to SAS, SATA and others). As a refresher, NVM (the media) includes the various persistent memories (PM) such as NVRAM, NAND Flash and 3D XPoint, along with other storage class memories (SCM) used in SSDs (in various packaging).

Learn more about 3D XPoint with the following resources:

Learn more (or refresh) your NVMe server storage I/O knowledge, experience and tradecraft skill set with this post here. View this piece here looking at NVM vs. NVMe and how one is the media where data is stored, while the other is an access protocol (e.g. NVMe). Also visit www.thenvmeplace.com to view additional NVMe tips, tools, technologies, and related resources.

NVMe U.2 SFF-8639 aka 8639 SSD

At a quick glance, an NVMe U.2 SFF-8639 SSD may look like a SAS small form factor (SFF) 2.5" HDD or SSD. Also, keep in mind that HDDs and SSDs with a SAS interface have a small key tab to prevent inserting them into a SATA port. As a reminder, SATA devices can plug into SAS ports, however not the other way around, which is what the key tab prevents (accidental insertion of SAS into SATA). Looking at the left-hand side of the following image you will see an NVMe SFF 8639 aka U.2 backplane connector which looks similar to a SAS port.

Note that depending on how it is implemented, including its internal controller, flash translation layer (FTL), firmware and other considerations, an NVMe U.2 or 8639 x4 SSD should have similar performance to a comparable NVMe x4 PCIe AiC (e.g. card) device. By comparable device, I mean the same type of NVM media (e.g. flash or 3D XPoint), FTL and controller. Likewise a PCIe x8 device should generally be faster than an x4, however more PCIe lanes do not automatically mean more performance; it's what's inside and how those lanes are actually used that matters.
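
As a rough back of the envelope sketch (not vendor specifications), theoretical PCIe Gen3 bandwidth by lane count works out roughly as follows; real-world throughput will be lower due to protocol and device overhead.

    # Back of the envelope theoretical PCIe Gen3 bandwidth per lane count.
    # Gen3 runs at 8 GT/s per lane with 128b/130b encoding; actual throughput
    # is lower due to protocol (TLP/DLLP) and device overhead.
    GEN3_GT_PER_SEC = 8.0        # giga-transfers per second, per lane
    ENCODING = 128.0 / 130.0     # 128b/130b line encoding efficiency
    BITS_PER_BYTE = 8.0

    per_lane_gbytes = GEN3_GT_PER_SEC * ENCODING / BITS_PER_BYTE  # ~0.985 GB/s

    for lanes in (1, 4, 8, 16):
        print("x%-2d ~ %.2f GB/s theoretical" % (lanes, per_lane_gbytes * lanes))

In other words, an x4 U.2 device tops out around 3.9 GB/s of theoretical bandwidth in each direction on Gen3, which is plenty for many workloads, while an x8 AiC roughly doubles that ceiling.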

NVMe U.2 8639 2.5" 1.8" SSD driveNVMe U.2 8639 2.5 1.8 SSD drive slot pin
NVMe U.2 SFF 8639 Drive (Software Defined Data Infrastructure Essentials CRC Press)

With U.2 devices, the key tab that prevents SAS drives from being inserted into a SATA port is where four pins that support PCIe x4 are located. What this all means is that a U.2 8639 port or socket can accept an NVMe, SAS or SATA device depending on how the port is configured. Note that the U.2 8639 port is either connected to a SAS controller for SAS and SATA devices or to a PCIe port, riser or adapter.

On the left of the above figure is a view towards the backplane of a storage enclosure in a server that supports SAS, SATA, and NVMe (e.g. 8639). On the right of the above figure is the connector end of an 8639 NVM SSD showing additional pin connectors compared to a SAS or SATA device. Those extra pins give PCIe x4 connectivity to the NVMe devices. The 8639 drive connectors enable a device such as an NVM or NAND flash SSD to share a common physical storage enclosure with SAS and SATA devices, including optional dual-pathing.

More PCIe lanes do not necessarily mean faster performance; verify whether those lanes (e.g. x4, x8, x16) are present just mechanically (physically), are also wired electrically (actually usable), and are actually being used. Also note that some PCIe storage devices or adapters might be, for example, an x8 supporting two channels or devices each at x4. Likewise, some devices might be x16 yet only support four x4 devices.
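
Here is a minimal sketch, assuming a Linux host (ESXi and Windows have their own tools), of checking whether a PCIe device has negotiated its full link width and speed by reading the standard sysfs attributes.

    # Minimal sketch (Linux host assumed): compare negotiated vs maximum PCIe link
    # width and speed for devices that expose these attributes in sysfs. Devices
    # running below their maximum width get flagged.
    import glob, os

    def read_attr(path):
        try:
            with open(path) as f:
                return f.read().strip()
        except OSError:
            return None

    for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
        cur_w = read_attr(os.path.join(dev, "current_link_width"))
        max_w = read_attr(os.path.join(dev, "max_link_width"))
        cur_s = read_attr(os.path.join(dev, "current_link_speed"))
        max_s = read_attr(os.path.join(dev, "max_link_speed"))
        if cur_w and max_w:
            note = "  <-- below max width" if cur_w != max_w else ""
            print("%s: x%s of x%s at %s (max %s)%s"
                  % (os.path.basename(dev), cur_w, max_w, cur_s, max_s, note))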

NVMe U.2 SFF 8639 PCIe Drive SSD FAQ

Some common questions pertaining to NVMe U.2 aka SFF 8639 interface and form factor based SSDs include:

Why use U.2 type devices?

Compatibility with what’s available for server storage I/O slots in a server, appliance, storage enclosure. Ability to mix and match SAS, SATA and NVMe with some caveats in the same enclosure. Support higher density storage configurations maximizing available PCIe slots and enclosure density.

Is PCIe x4 with NVMe U.2 devices fast enough?

While not as fast as a PCIe AiC that fully supports x8 or x16 or higher, an x4 U.2 NVMe accessed SSD should be plenty fast for many applications. If you need more performance, then go with a faster AiC card.

Why not go with all PCIe AiC?

If you need the speed and simplicity and have available PCIe card slots, then put as many of those in your systems or appliances as possible. On the other hand, some servers or appliances are PCIe slot constrained, so U.2 devices can be used to increase the number of devices attached to a PCIe backplane while also supporting SAS and SATA based SSDs or HDDs.

Why not use M.2 devices?

If your system or appliance supports NVMe M.2, those are good options. Some systems even support a combination: M.2 for local boot, staging, logs, work and other storage space, with PCIe AiC along with U.2 devices for performance.

Why not use NVMeoF?

Good question; if your shared storage system supports NVMeoF or FC-NVMe, go ahead and use that, however you might also need some local NVMe devices. Likewise, if yours is a software-defined storage platform that needs local storage, then NVMe U.2, M.2 and AiC or custom cards are an option. On the other hand, a shared fabric NVMe based solution may support a mixed pool of SAS, SATA along with NVMe U.2, M.2, AiC or custom cards as its back-end storage resources.

When not to use U.2?

If your system, appliance or enclosure does not support U.2 and you do not have a need for it. Or, if you need more performance such as from an x8 or x16 based AiC, or you need shared storage. Granted a shared storage system may have U.2 based SSD drives as back-end storage among other options.

How does the U.2 backplane connector attach to PCIe?

Via the enclosure's backplane, there is either a direct hardwired connection to the PCIe backplane, or a connector cable to a riser card or similar mechanism.

Does NVMe replace SAS, SATA or Fibre Channel as an interface?

The NVMe command set is an alternative to the traditional SCSI command set used in SAS and Fibre Channel. That means it can replace, or co-exist with, those interfaces depending on your needs and preferences for accessing various storage devices.

Who supports U.2 devices?

Dell has supported U.2 aka PCIe drives in some of their servers for many years, as have Intel and many others. Likewise, U.2 8639 SSD drives including 3D XPoint and NAND flash-based are available from Intel among others.

Can you have AiC, U.2 and M.2 devices in the same system?

If your server or appliance or storage system supports them, then yes. Likewise, there are M.2 to PCIe AiC, M.2 to SATA along with other adapters available for your servers, workstations or software-defined storage system platform.

NVMe U.2 carrier to PCIe adapter

The following images show examples of mounting an Intel Optane NVMe 900P accessed U.2 8639 SSD on an Ableconn PCIe AiC carrier. Once the U.2 SSD is mounted, the Ableconn adapter inserts into an available PCIe slot similar to other AiC devices. From a server or storage appliance software perspective, the Ableconn is a pass-through device, so your normal device drivers are used; for example, VMware vSphere ESXi 6.5 recognizes the Intel Optane device, as do Windows and other operating systems.
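
As a quick sanity check on a Linux host (ESXi has its own esxcli tooling and Windows its Device Manager), the following minimal sketch lists the NVMe controllers the kernel has enumerated, which is one way to confirm a carrier-mounted U.2 drive is visible.

    # Minimal sketch (Linux host assumed): list NVMe controllers the kernel has
    # enumerated along with their model, serial and firmware revision.
    import glob, os

    def read_attr(ctrl, attr):
        try:
            with open(os.path.join(ctrl, attr)) as f:
                return f.read().strip()
        except OSError:
            return "n/a"

    for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
        name = os.path.basename(ctrl)
        print("%s: model=%s serial=%s firmware=%s"
              % (name, read_attr(ctrl, "model"), read_attr(ctrl, "serial"),
                 read_attr(ctrl, "firmware_rev")))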

intel optane 900p u.2 8639 nvme drive bottom view
Intel Optane NVMe 900P U.2 SSD and Ableconn PCIe AiC carrier

The above image shows the Ableconn adapter carrier card along with NVMe U.2 8639 pins on the Intel Optane NVMe 900P.

intel optane 900p u.2 8639 nvme drive end view
Views of Intel Optane NVMe 900P U.2 8639 and Ableconn carrier connectors

The above image shows an edge view of the connectors on the NVMe U.2 SFF 8639 Intel Optane NVMe 900P SSD along with those on the Ableconn adapter carrier. The following images show an Intel Optane NVMe 900P SSD installed in a PCIe AiC slot using an Ableconn carrier, along with how VMware vSphere ESXi 6.5 sees the device using plug and play NVMe device drivers.

NVMe U.2 8639 installed in PCIe AiC Slot
Intel Optane NVMe 900P U.2 SSD installed in PCIe AiC Slot

NVMe U.2 8639 and VMware vSphere ESXi
How VMware vSphere ESXi 6.5 sees NVMe U.2 device

Intel NVMe Optane NVMe 3D XPoint based and other SSDs

Here are some Amazon.com links to various Intel Optane NVMe 3D XPoint based SSDs in different packaging form factors:

Here are some Amazon.com links to various Intel and other vendor NAND flash based NVMe accessed SSDs including U.2, M.2 and AiC form factors:

Note that in addition to carriers that adapt U.2 8639 devices to the PCIe AiC form factor and interface, there are also M.2 NGFF to PCIe AiC adapters among others. An example is the Ableconn M.2 NGFF PCIe SSD to PCI Express 3.0 x4 Host Adapter Card.

In addition to Amazon.com, venues such as Newegg.com, Ebay and many others carry NVMe related technologies.
The Intel Optane NVMe 900P is newer, however the Intel 750 Series along with other Intel NAND Flash based SSDs are still good price performers and provide value. I have accumulated several Intel 750 NVMe devices over the past few years as they are great price performers. Check out this related post Get in the NVMe SSD game (if you are not already).

Where To Learn More

View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

NVMe accessed storage is in your future, however there are various questions to address including exploring your options for type of devices, form factors and configurations among other topics. Some NVMe accessed storage is direct attached and dedicated in laptops, ultrabooks, workstations and servers including PCIe AiC, M.2 and U.2 SSDs, while other NVMe storage is shared networked aka fabric based. NVMe over Fabrics (e.g. NVMeoF) includes RDMA over Converged Ethernet (RoCE) as well as NVMe over Fibre Channel (e.g. FC-NVMe). Fabric accessed pooled shared storage systems and appliances can also include internal NVMe attached devices (e.g. as part of back-end storage) as well as other SSDs (e.g. SAS, SATA).

General wrap-up (for now) of NVMe U.2 8639 and related tips:

  • Verify the performance of the device vs. how many PCIe lanes exist
  • Update any applicable BIOS/UEFI, device drivers and other software
  • Check the form factor and interface needed (e.g. U.2, M.2 / NGFF, AiC) for a given scenario
  • Look carefully at the NVMe devices being ordered for proper form factor and interface
  • With M.2 verify that it is an NVMe enabled device vs. SATA
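
Following up on that last tip (verifying whether an M.2 device is NVMe vs. SATA), here is a minimal sketch assuming a Linux host that classifies block devices by how the kernel attaches them.

    # Minimal sketch (Linux host assumed): classify block devices as NVMe vs
    # ATA/SATA vs other based on the sysfs path the kernel places them under;
    # an NVMe M.2 shows up as an nvme device, a SATA M.2 as an ATA-attached disk.
    import glob, os

    for blk in sorted(glob.glob("/sys/block/*")):
        name = os.path.basename(blk)
        real_path = os.path.realpath(blk)
        if name.startswith("nvme") or "/nvme/" in real_path:
            kind = "NVMe"
        elif "/ata" in real_path:
            kind = "ATA/SATA"
        else:
            kind = "other (SAS, USB, virtual, ...)"
        print("%s: %s" % (name, kind))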

Learn more about NVMe at www.thenvmeplace.com including how to use Intel Optane NVMe 900P U.2 SFF 8639 disk drive form factor SSDs in PCIe slots as well as for fabric among other scenarios.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Data Infrastructure Resource Links cloud data protection tradecraft trends

Data Infrastructure Resource Links Server Storage I/O Network

data infrastructure resource links server storage I/O cloud data protection tradecraft links

By Greg Schulz www.storageioblog.com April 28, 2018

Various data infrastructure resource links.

SDDC Data Infrastructure

The following are a collection of server storageioblog data infrastructure resource links.

Where to learn more

Vmware Vsphere Vsan Vcenter Version 6 7 Summary

Vmware Vsphere Vsan Vcenter V6 7 Sddc Details

Vmware Vsphere Vsan Server Storage Io Enhancements

New Cloud Act Data Regulation

Data Protection Recovery World Backup Day

Aws Cloud Application Data Protection Webinar

Microsoft Windows Server 2019 Insiders Preview

March 2018 Data Infrastructure Update Newsletter

Application Data Value Characteristics Part1

4 3 2 1 Data Protection Availability

Application Data Characteristics Types Part3

Application Data Volume Velocity

Application Data Access Life Cycle

Veeam Gdpr Experiences Walking Talk

Vmware Continues Cloud Construction March Announcements

Cloud Benefits Hyperv Disaster Recovery Draas

World Backup Day 2018 Data Protection Readiness Reminder

Install Intel Optane Nvme U2 8639 Ssd Drive In Pcie Slot

Data Infrastructure Resource Links Tradecraft Trends

Achieve Flexible Data Protection Availability Flash Storage Solutions Webinar

2017 Holiday Greetings From Serverstorageio

November 2017 Server Storageio Data Infrastructure Update Newsletter

Transformation Serverless Life Beyond Devops New York Times Cto Nick Rockwell

Data Protection Fundamentals

Reliability Availability Serviceability Ras Data Protection Fundamentals

Data Protection Acess Availabity Raid Erasure Codes

Enabling Data Protection Rpo Archive Backup Cdp Pit Copy Snapshots Versions

Point Time Data Protection Granularity Points Interest

Nvme Place Volatile Memory Express

Nand Flash Ssd Storage Io Conversation

Welcome To The Obeject Storage Resources Center

Server And Storage Io Benchmark Resources

Server Storage Io Converged Infrastructure Hci Overview

Data Protection Diaries Main

Data Infrastructure Server Storage Io Networking Recommended Reading Book Shelf Blogtober

Gdpr General Data Protection Regulation Resources Areyou Ready

Data Infrastructure Primer Overview

Data Infrastructure Tradecraft Overview

Announcing Software Defined Data Infrastructure Sddc Book

Travel Fun Crossword Puzzle Vmworld 2017 Las Vegas

Hot Popular Trending Data Infrastructure Vendors Watch

Data Protection Security Logical Physical Software Defined

Data Protection Tools Technologies Toolbox Buzzword Bingo Trends

Walking Data Protection Talk

Whos Toolbox Technology Tools

Data Protection Resources Learn

October 2017 Server Storageio Update Newsletter

Introducing Windows Subsystem For Linux Wsl

Enterprise Hdd Content Servers

Why Fc And Fcoe Vendors Get Beat Up Over Bandwidth

Are Vmware Vvols In Your Virtual Server And Storage Io Future

Putting Some Vmware Esx Storage Tips Together Part I

Server Storage Io Memory Dram Nand Flash

Intel Micron 3d Xpoint Nvm Scm Pm Nvme Ssd

Garbage Data In Garbage Information Out Big Data Or Big Garbage

Only You Can Prevent Cloud Data Loss

Cloud Conversations Aws Ebs Glacier And S3 Overview Part I

Cloud Conversations Confidence Certainty And Confidentiality

Cloud Conversations Azure Aws Service Maps

Aws S3 Storage Gateway Revisited Part

Cloud Conversations Aws S3 Cross Region Replication Storage Enhancements

Cloud Conversations Aws Ebs Glacier And S3 Overview Part Ii S3

Aws Announces S3 Cloud Storage Security Encryption Features

Fixing Windows 10 1709 Post Upgrade Restart Loop

Microsoft September 2017 Software Defined Data Infrastructure Updates

Nvme Wont Replace Flash Complement

Intel Micron Unveil New 3d Xpoint Nvm For Servers Storage

Answer Nvme Questions

Gaining Industry Traction Adoption

Industry Adoption Vs Industry Deployment Is There A Difference

Seven Databases In Seven Weeks A Book Review Of Nosql Databases

Hpe Announces Amd Powered Gen 10 Proliant Dl385 Software Defined Workloads

August 2017 Sddi Update Newsletter

Backyard Black Bears Stillwater St Croix River Valley

Story Stadiums Along Seismic Activity

Side Slbs Serverless Bs Software Hardware Fud

Standing Tall Proud September 11 2001 Forget

Participate In Top Vblog 2016 Voting Now

Cloud Constellation Spacebelt Out Of This World Cloud Data Centers

Water Data Storage Analogy

S3motion Buckets Containers Objects Aws S3 Cloud Emccode

Server Storage Io Cables Connectors Chargers Geek Gifts

Storageio Out And About Update Vmworld 2014

Happy Earth Day 2016 Eliminating Digital Data Ewaste

Green And Virtual Data Center Primer

Green Virtual Data Center Productive Economical Efficient Effective Flexible

Green And Virtual Data Center Links

Part Ii Geek2014

Data Center Sustainability Convergence Zone

June 2013 Server Storageio Update Newsletter

Epa Energy Star Data Center Storage Draft Specification Review

Web Chat Thur 30th Hot Storage Trends 2013

Spring Snw 2013 Storage Networking World Recap

Server Storageio Data Infrastructure Related Links

Server Storageio Data Infrastructure Related Links 2

Server Storageio Data Infrastructure Related Links 3

Server Storageio Data Infrastructure Related Links 4

Server Storageio Data Infrastructure Related Links 5

Data Centers Trade Show Exhibit Infrastructure Granted

Family Intel Xeon Scalable Processors Enable Software Defined Data Infrastructures Sddi Sddc

Azure Stack Technical Preview 3 Tp3 Overview Preview Review

Broadcom Aka Avago Aka Lsi Announces Sas Sata Nvme Adapters Raid

Pace Your Server Storage Io Decision Making Its About Application Requirements

More Data Footprint Reduction Dfr Material

Revisiting Raid Remains Relevant Resources Context Matters

Preparing World Backup Day 2017 Prepared

Data Storage Tape Update V2014 Alive

Server Storageio August 2016 Update Newsletter

Farley Flies Into Snw Spring 2013

Talking With Tony Dicenzo At Snw Spring 2013

Dave Demming Talking Tech Education Snw Fall 2012

Amazon Web Service Aws September 2017 Software Defined Data Infrastructure Updates

Dell Emc Vmware September 2017 Software Defined Data Infrastructure Updates

September 2017 Server Storageio Data Infrastructure Update Newsletter

July 2017 Server Storageio Data Infrastructures Update Newsletter

2017 Server Storageio Data Infrastructures Update Newsletter

Pcie Fundamentals Server Storage Io

Emc Dell Emc Part Dell Technologies Updates

Vmware Vsan V66 Part Vsan Evolution Summary

Dell Emc World 2017 Day News Announcement Summary

Getting Caught Happened September 2017

February 2017 Server Storageio Update Newsletter

Gdpr Effect 25 2018 Ready

Part Iii Focus Expands Data Protection Action

Backup Big Data Big Data Protection Cmg Tom Becchetti Podcast

Data Infrastructure Data Center Software Defined Management Dashboard Tools

Zombie Technology Life Death Tape Alive

Cloud Bulk Object Storage Fundamentals

Nvme Overview And Primer Part I

Nvme Ssd Game Intel 750

Part Ii Nvme Overview And Primer Different Configurations

Part Iii Nvme Overview And Primer Need For Performance Speed

Part Iv Nvme Overview And Primer Where And How To Use Nvme

Part V Nvme Overview And Primer Where To Learn More What This All Means

Server Storage Io Benchmark Workload Scripts Part

Part Ii Server Storage Io Benchmark Workload Scripts Results

Politics And Storage Or Storage In An Election Year V2008

Sherwood Becomes Atrato

Updated Look And Feel

Chargeback For Storage

Beware Of Announcements On April 1st

Im Leaving On A Jet Plane

Links To Upcoming And Recent Webcasts And Videocasts

Off To Snw In Dallas For The Day

Poll Whats Your Take On Windows 7

Update Energystar For Server Workshop

Emc And Cisco Acadia Vce What Does It Mean

Moving Beyond The Benchmark Brouhaha

Snw Spring 2008 Audio And Podcasts

Presentation Downloads From Storage Decisions New York 2008

Us Epa Energystar For Servers Wants To Hear From You

Upcoming Event Industry Trends And Perspective European Seminar

Could Huawei Buy Brocade

Back From Fall 2008 Snw In Dallas

Another Storageio Appearance On Storage Monkeys Infosmack

Atrato Part Deux

Updated Look And Feel Part Deux

Summer Dog Days

My How Time Flys By

Missing Dedupe Debate Detail

Trick Or Treat Either Way Be Safe

Storage Performance Council Releases Component Spc 1c And Spc 2c Results

Happy Earth Day 2008

Something You May Not See Everyday

The Function Of Xaasx Pick A Letter

Recent Storageio Media Coverage And Comments

The Many Faces Of Solid State Devicesdisks Ssd

Snw Spring 2008

Downloads For Fall 2008 San Francisco Storage Decisions Now Available

On The Road Again An Update

Dutch Storageexpo Recap

Worried About It Ma Here Come The New Startups

Out And About Update Off To Vmworld Next Week

Visit My New Amazon Authors Page

Upcoming Out And About Events

Happy Labor Day V2 009

Storageio Aka Greg Schulz Appears On Infosmack

Storageio Debuts At 79 In Technobabble Top 400 Analyst List

Going Rouge In It

Poll What Was Hot In 2009 And What Was Not Cast Your Vote

Upcoming Events And Activities Update V2010 1

Epa Server And Storage Workshop Feb 2 2010

Networking With Bruce Ravid And Bruce Rave

Practical Email Optimization And Archiving Strategies

Why Vasa Is Important To Have In Your Vmware Casa

Convergence People Processes Polices And Products

Cloud Virtualization And Storage Networking Conversations

New Seagate Momentus Xt Hybrid Drive Ssd And Hdd

Top 2011 Cloud Virtualization Storage And Networking Posts

A Conversation From Snw 2011 With Jenny Hamel

2012 Industry Trends Perspectives And Commentary Predictions

Should You Feel Sorry For Revenue Prevention Departments

Top Storageio Cloud Virtualization Networking And Data Protection Posts

Can I Ask For Your Support Please Vote For My Blog

Is 14 4tbytes Of Data Storage For 52503 A Good Deal It Depends

Are Large Storage Arrays Dead At The Hands Of Ssd

Is Ssd Dead No However Some Vendors Might Be

More Storage Io Momentus Hhdd And Ssd Moments Part Ii

What Is The Best Kind Of Io The One You Do Not Have To Do

How Much Ssd Do You Need Vs Want

Various Cloud Virtualization Server Storage Io Polls

3rd Of July Fireworks Grand Finale Video

Dell Is Buying Quest Software Not The Phone Company Qwest

Dell Storage Customer Advisory Panel Cap

Epa Energy Star For Data Center Storage Draft 3 Specification

Kudos To Lenovo Customer Service Redefined Or Re Established

What Does New Emc And Lenovo Partnership Mean

What Are Some Endangered It Species

Over 1000 Entries Now On The Storageio Industry Links Page

Cloud Conversations Aws Government Cloud Govcloud

Who Will Be Winner With Oracle 10 Million Dollar Challenge

Cloud Virtualization Storage And Networking In An Election Year

Technology Buying Do You Decide On G2 Or Gq

Raid And Iops And Io Observations

Trick Or Treat And Vendor Fun Games

Industry Trends And Perspectives Snw 2012 Rapping With Dave Raffo Of Searchstorage

Industry Trends And Perspectives Ray Lucchesi On Storage And Snw

Industry Trends And Perspectives Catching Up With Quantum Cte David Chapa

Industry Trends And Perspectives Snw 2012 Waynes World

Industry Trends And Perspectives Chatting With Karl Chen At Snw 2012

Industry Trends And Perspectives Learning With Leo Leger Of Snia

Industry Trends And Perspectives Meeting Up With Marty Foltyn Of Snia

Have Ssds Been Unsuccessful With Storage Arrays With Poll

Little Data Big Data And Very Big Data Vbd Or Big Bs

Data Center Infrastructure Management Dcim And Irm

Is Ssd Only For Performance

Ssd Flash And Dram Dejavu Or Something New

Thanks For Viewing Storageio Content And Top 2012 Viewed Posts

Summary Emc Vmax 10k High End Storage Systems Stayin Alive

Cloud Conversations Public Private Hybrid And Community Clouds Part Ii

Hardware Software What About Valueware

Cloud Virtualization Storage Io Trends For 2013 And Beyond

Vote For Top 2013 Vblogs Thanks For Your Continued Support

Conversation With Justin Stottlemyer Of Shutterfly And Object Storage Discussion

Snias New Spdecon Conference

Snia Spring 2013 Update With Wayne Adams

Speaking Of Ssds With Poll

Io Io Its Off To Virtual Work And Vmworld I Go Or Went

Blame It On The Un In Nyc This Week

Trick Or Treat Have You Seen Any It Frankenstacks

Cloud And Travel Fun

Some Alternative And Fun Cloud Api Meanings

Emcworld 2012 Tust And Marketing Can They Coexist

Iod Iot Ioe Ios Iop Iou Iox Future

Storage Decisions Spring 2009 Sessions Update

Removing Complexity Cost Drive Return Innovation Roi

Storageio Industry Links Page Updated 1200 Entries

School School Current Future School 2

Ivmcontrol Iphone Vmware Management Itool Itoy

Lenovo Ts140 Server Storage Io Review

Aws Adds Zocalo Enterprise File Sync Share Collaboration

Vmware Vvols And Storage Io Fundementals Part 2

Docker Smarties Nondummies Vmworld 2014

Server Storage Io Networking Virtualization Cloud Scaling

Remember The Alamo

Do You Have Your Copy Of The Green And Virtual Data Center Yet

Green It Deferral Blamed On Economic Recession Might Be Result Of Green Gap

Just For Fun Roses Are Red

Snw And Other Conferences Want And Need You

R U Twittering Yet

More Storage Io Momentus Hhdd And Ssd Moments Part I

Ssd And Green It Moving Beyond Green Washing

Io Io How Well Do You Know About Good Or Bad Server And Storage Ios

In The Data Center Or Information Factory Not Everything Is The Same

Cloud Conversations Public Private Hybrid What About Community Clouds

Data Protection Modernization More Than Swapping Out Media

Modernizing Data Protection With Certainty

Trick Or Treat 2011 It Zombie Technology Poll

Is There An Information Or Data Recession Are You Using Less Storage With Polls

Spring 2014 Storageio Events Activities Update

Seagate Shipped 10 Million Hhdds Lot

Revisiting Reinvent 2014 Aws News

Data Protection Diaries Are Your Restores Ready For World Backup Day 2015

How To Test Your Hdd Ssd Or All Flash Array Afa Storage Fundamentals

Introducing Us Hr2454 Waxman Markey Climate Bill

Cloud And Virtual Data Storage Networking Now On Kindle

Modernizing Data Protection Ways

Storageio In The News Update V2010 1

Ibm Speed Of Light Energy Saving Or Speed Of Light Green Marketing

Amazon Web Services Aws And The Netflix Fix

Spring 2008 Storage Descisions Wrap Up

Why Ssd Based Arrays And Storage Appliances Can Be A Good Idea Part Ii

Director Dinner Discussions Of The San Kind

Hello From Emc World Bloggers Lounge

Going Dutch And Other Spring Spring 2012 Storageio Activities

Storageio Going Dutch And Deutsch Fall 2012

Some August 2015 Amazon Web Services Aws And Microsoft Azure Cloud Updates

What Am I Hearing And Seeing While Out And About

Work And Entertainment From Coast To Coast

Snia Announces Cloud Data Management Initiative Cdmi V1 1

Storage Magazine In A Virtual World

Dude Dell Is Getting Buying An Emc And Vmware Deal

Check Out These Top 50 It Blogs 3

It Optimization Efficiency Convergence And Cloud Conversations From Snw

Usenix Fast File Storage Technologies 2014 Conference Proceedings

Putting Some Vmware Esx Storage Tips Together Part Ii

Out And About Update

Part Ii Seagate 1200 12gbs Enterprise Sas Ssd Storgeio Lab Review

Ben Woo On Big Data Buzzword Bingo And Business Benefits

Declared Dead Fibre Channel Continues Evolve Fcbb6

Getting Caught Up Its Been A Busy Year

Airport Parking Tiered Storage And Latency

Green Data Storage And Server Io Topics

Introducing Josh Apter And The Padcaster From Nab 2013

Amazon Cloud Storage Options Enhanced With Glacier

Software Defined Virtual Hard Disk Vhd

Ibm Vs Oracle Nad Intervenes Again

Vmware Announces Vsphere V6 Virtualization Technologies

Server And Storage Io Benchmarking 101 For Smarties

Cloud Conversations Focused Cost Missing Cloud Opportunities

Logo Ology

If March 31st Is Backup Day Dont Be Fooled With Restore On April 1st

The Blame Game Does Cloud Storage Result In Data Loss

Commentary On Clouds Storage Networking Green It And Other Topics

Future Ethernet 2016 Roadmap Released Ethernet Alliance

Brocade To Buy Foundry Networks Prelude To Upcoming Converged Ethernet Battle

Podcast Vbrownbags Vforums And Vmware Vtraining With Alastair Cooke

Snw Fall 2011 Revisited And Snia Emerald Program

Goodbye 2013 2014 Predictions Present Future

March And Mileage Mania Wrap Up

Was Today The Proverbal Day That He Froze Over

Something For Free From Vmware Other Than Your Time

Speaking Of Speeding Up Business With Ssd Storage

Just When You Thought It Was Safe To Go In The Water Again

What Industry Pundits Love And Loathe About Data Storage

Lenovo Thinkserver Td340 Storageio Lab Review

Fall 2015 Server Storage Io Cloud Virtual Seminars Dutch

Networking Convergence Ethernet Infiniband Or Both

Data Storage Innovation Chat Snia Wayne Adams David

My Server And Storage Io Holiday Break Projects

Vmware Vcloud Air Server Storageiolab Test Drive With Videos

More Modernizing Data Protection Virtualization And Clouds With Certainty

Congratulations Imation And Nexsan Are There Any Independent Storage Vendors Left

Cloud Conversations Aws Efs Elastic File System Cloud Nas Preview

Does Dell Have A Cloudy Cloud Strategy Story Part Ii

Infosmack Episode 34 Vmware Microsoft And More

Nad Recommends Oracle Discontinue Certain Exadata Performance Claims

Vmware Buys Virsto Is It About Storage Hypervisors

Part Ii Focus Expands Data Protection

Hps Big December 3rd Storage Announcement

Did Hp Respond To Emc And Cisco Vce With Microsoft Hyperv Bundle

Plenty Of Industry Firsts At Vmworld Europe

Ibm Mainframe Part Deux

California Center For Sustainable Energy Ccse

Help Save A Life

Congratulations To Ibm For Releasing Xiv Spc Results

Storageio Books Added To Intel Recommended Reading Lists

Collecting Transaction Minute Sql Server Hammerdb

Time For Top Vblog Voting V2015 Its It Award Season Cast Your Votes

Award Season Time 2014 Top Vmware Virtualization Blog Voting

525 Media Bay Add 25 12 Gbps Sas Sata Drives Server

Aws Amazon Storage Gateway First Second And Third Impressions

More Storage And Io Metrics That Matter

Snow Birds

The Human Face Of Big Data A Book Review

Netapp On Rough Ground Or A Diamond In The Rough

Data Protection Gumbo Protect Preserve Serve Information

Rip Windows Sis Single Instance Storage Or At Least In Server 2016

Ubuntu 16 04 Lts Aka Xenial Xerus Whats In The Bits And Bytes

Securing Information Assets Data Storage

Mirror Mirror On The Wall Whos The Greenest Of Them All

Missing Mh370 Remind Digital Assets

Hardware Sas Sata Nvm M2 Software Vhd Defined Odds Ends

Focus Expands Data Protection Backup Staying Alive

Odds And Ends Getting Caught Up News And Other Updates

Ceph Day In Amsterdam And Stage Weil On Object Storage

Emcworld 2016 Getting Started On Dell Emc

Emcworld 2015 How Do You Want Your Storage Wrapped

How Can Direct Attached Storage Das Make A Comeback If It Never Left

Ssd Past Present And Future With Jim Handy

Announcing Sas Sans For Dummies Book Lsi Edition

Recent Tips Videos Articles And More

Vmware Vvols And Storage Io Fundementals

Two Companies On Parallel Tracks Moving Like Trains Offset By Time Emc And Netapp

Big Files Lots File Processing Benchmarking Vdbench

Server Storage Io Benchmarking Tools Microsoft Diskspd Part

Data Protection Diaries World Backup Day March 31 Restore Data Test Time

Part Ii Iops Hdd Hhdd Ssd

Ceph Day Amsterdam 2012 Object And Cloud Storage

Mr Backup Curtis Preston Goes Back To Ceph School

Emc Dssd D5 Rack Scale Shared Direct Attached Ssd All Flash Array Part I

Part Ii Emc Dssd D5 Direct Attached Shared Afa

Blog Roll Dj Vu And Storage Monkeys

Give Hp Storage Some Love And Short Strokin

Vce Revisited Now Zen

Funeral For A Friend

April 2017 Server Storageio Data Infrastructure Update Newsletter

Vmware Vsan V6 6 Part Ii Just Speeds Feeds Please

Introducing Vsan 6 6 Hyper Converged Hci Software Defined Data Infrastructure

Vmware Vsan V66 Part Iii Reducing Cost Complexity

Vmware Vsan V6 6 Part Iv Scaling Robo Data Centers Today

Cisco Gen 32gb Fibre Channel Nvme San Updates

Kevin Closson Discusses Slob Server Cpu Io Database Performance Benchmarks

Congratulations Returning Fellow Vexperts 2017

Sdx Summit London Uk Planning Enabling Journey Software Defined

Ssd Flash Nonvolatile Memory Nvm Storage Trends Tips Topics

Cloud Object Storage Future Questions

Updated Software Defined Data Infrastructure Webinars Fall 2016 Events

Value Infrastructure Insight Enabling Informed Decision Making

Software Defined Data Infrastructure School Webinar Fall 2016 Events

12gb Sas Ssd Enabling Server Storage Io Performance Effectiveness

Netapp Announces Ontap 9 Software Defined Storage Management

Going Dutch Seminars And Workshops In Holland June 2016

Enabling Bitlocker On Microsoft Windows 7 Professional 64 Bit

Tape Is Still Alive Or At Least In Conversations And Discussions

Comptia Input Storage Certification

Vmware Cisco Emc Vce Zen

It And Storage Economics 101 Supply And Demand

Part Ii Revisting Aws S3 Storage Gateway Test Drive Deployment

It And Technology Turkeys

Emc Vmax 10k Looks Like High End Storage Systems Are Still Alive Part Ii

Part Ii Lenovo Ts140 Server Storage Io Review

Recent Tips Videos Articles And More Update V2010 1

Industry Trends And Perspectives Thoughts On Ipad For Business

Volatile Memory Nvm Nvme Flash Memory Summit Ssd Updates

April 2015 Server Storageio Update Newsletter

Researchers And Marketers Dont Agree On Future Of Nand Flash Ssd

Emc Vfcache Respinning Ssd And Intelligent Caching Part I

Why Ssd Based Arrays And Storage Appliances Can Be A Good Idea Part I

Ibm Buys Flash Solid State Device Ssd Industry Veteran Tms

Cloud Conversations Gaining Cloud Confidence From Insights Into Aws Outages Part Ii

January 2015 Server Storageio Newsletter

Computer Data Storage Complex Depends

December 2014 Server Storageio Newsletter

Diy Converged Server Software Defined Storage Budget Lenovo Ts140

Server Storageio December 2015 Update Newsletter

November 2014 Server Storageio Update Newsletter

February 2015 Server Storageio Update Newsletter

July 2015 Server Storageio Update Newsletter

March 2015 Server Storageio Update Newsletter

August Server Storageio Update Newsletter

Server Storageio October 2015 Update Newsletter

Server Storage Io Network Benchmark Winter Olympic Games

Enterprise Sshd And Flash Ssd Part Of An Enterprise Tiered Storage Strategy

Microsoft Diskspd Part Ii Server Storage Io Benchmark Tools

September October 2014 Server And Storageio Update Newsletter

Seagate 1200 12gbs Enterprise Sas Ssd Server Storgeio Lab Review

Microsoft Windows Server Azure Nano Life Cycle Updates

Server Storage Io Intel Nuc Nick Knack Notes Impressions

Emcworld 2016 Emc Hybrid And Converged Clouds Your Way

Server Storageio 2016 Update Newsletter

Server Storageio Industry Trends Perspectives Report Wekaio Matrix

Data Quantum Revenues Continue Grow

Chelsio Storage Ip Networks Enable Data Infrastructures

Post Holiday It Shopping Bargains Dell Buying Exanet

Predictions Did Mayans Have It Right Or Did We Read It Wrong

Overview Review Microsoft Refs Reliable File System

Gaining Server Storage Io Insight Microsoft Windows Server 2016

How Many Degrees Separate You And Your Information

Inaugural Storageio Newsletter

Spring 2010 Storageio Newsletter

Storage Comments From The Field And Customers In The Trenches

Virtual Storage And Social Media What Did Emc Not Announce

Are Social Media And Networking A Waste Of Time

Congratulations To New And Returning 2012 Vmware Vexperts

Hitting The Road Again

It Feels Like Grand Central Station Here

Storageio Outlines Intelligent Power Management And Maid 20 Storage Techniques Advocates New Technologies To Address Modern Data Center Energy Concerns

Trains Going Green Ah Well Maybe Blue

Happy Earth Day 2009

Mirror Mirror On The Wall Who Is The Greenest Of Them All

Green Virtual Servers Storage And Networking 2008 Beijing Olympics

Hot Storage Topics Converge On Chicago Next Week

John Carpenters Escape From New York Back From Storage Decisions Ny 2008

Does Dell Have A Cloudy Cloud Strategy Story Part I

Dell Updates Storage Center Operating System 7 Scos 7

Lenovo Buys Ibms Xseries Aka X86 Server Business Emc

Cloud And Virtual Data Storage Networking Book Vmworld 2011 Debut

Cloud And Virtual Data Storage Networking Book Released

Server Storageio September 2015 Update Newsletter

Some Windows Server Storage Io Related Commands

Server Storageio November 2015 Update Newsletter

Dell Emc Azure Stack Hybrid Cloud Solution

Msp Business Journal Names Greg Schulz An Eco Tech Warrior

Continuing Education And Refresher Time Raid And Luns

Many Different Implementations Of Raid

Wide World Of Archiving Life Beyond Compliance

Comfort Zones Stating What Might Be Obvious To Some

The Differences Between Singapore And Houston In May

Do Disk Based Vtls Draw Less Power Than Tape

More On Fibre Channel Over Ethernet Fcoe

Green Hype Or Reality

Thank You Gartner For Generating Awareness For My New Book

Why Xiv Is So Important To Ibms Storage Business

Das Sas Fcoe Green Efficient Storage And Io Podcast Faqs

Cmg Enabling The Green And Virtual Data Center

It Belt Tightening And Stratigies For It Economic Sustainment

Vendors Who Dont Want To Be Virtualized

Did Someone Forget To Tell Dell That Tape Is Dead

Ssd Activity Continues To Go Virtually Round And Round

All Work And No Play Ok How About An Education Half Day

Industry Trend And Perspective Seagate Changes Disk Drive Warranties

Just For Fun Of Flying

Raid Data Protection Remains Relevant

Protecting And Storing Personal Digital Documents

Is There Still Innovation For It And Storage

Io Virtualization Iov Revisited

Shifting Industry Trend From Purchase To Leasing

Is There A Data And Io Activity Recession

Us Epa Looking For Industry Input On Energy Star For Storage

Shifting From Energy Avoidance To Energy Efficiency

Ibm Out Oracle In As Buyer Of Sun

Us Epa Energy Star For Server Update

Data Center Io Bottlenecks Performance Issues And Impacts

Clarifying Clustered Storage Confusion

Green It Confusion Continues Opportunities Missed

Clouds Are Like Electricity Dont Be Scared

Hp Buys One Of The Seven Networking Dwarfs And Gets A Bargain

Should Everything Be Virtualized

Optimize Data Storage For Performance And Capacity Efficiency

Justifying Green It And Home Hardware Upgrades With Energystar

How To Win Approval For Upgrades Link Them To Business Benefits

What Is The Future Of Servers

Ssd And Storage System Performance

Green It And Virtual Data Centers

Emc Storage And Management Software Getting Fast

Its Us Census Time What About It Data Centers

Nas Nasa And Nascar Do They Have Anything In Common

Is Maid Dead I Dont Think So

Happy Earth Day 2010

Who Or What Is Your Sphere Of Influence

Apple Ipad Is It A Business Itool Or Itoy

Cloud Conversations Nirvanix Shutdown Caused Cloud Confidence Concerns

Industry Trends And Perspectives Raid Rebuild Rates

Industry Trends And Perspectives Storage Virtualization And Virtual Storage

Industry Trends And Perspectives Converged Networking And Io Virtualization Iov

Industry Trends And Perspectives Tiered Storage Systems And Mediums

Initial Virtumania Appearance Episode 14 With Fellow Vexperts

Industry Trends And Perspectives Tiered Hypervisors And Microsoft Hyperv

Vmware Vexpert 2010 Thank You Im Honored To Be Named A Member

Industry Trends And Perspectives Blog Series

My Favorite Late Summer Reading Material

Supreme Court Rules Sarbox Intact Oversight Board Changes

While Hp And Dell Make Counter Bids Exclusive Interview With 3par Ceo David Scott

End To End E2e Systems Resource Analysis Sra For Cloud And Virtual Environments

Has Fcoe Entered The Trough Of Disillusionment

What Is Dfr Or Data Footprint Reduction

Santas It Elf Limited Time Discount

What Do You Do When Your Service Provider Drops The Ball

Green It Goes Mainstream What About Data Storage Environments

Storageio Momentus Hybrid Hard Disk Drive Hhdd Moments

Buzzword Bingo 1 0 Are You Ready For Fall Product Announcemnts

Happy Holidays 2010

What Have I Been Doing This Winter

What Do Vars And Clouds As Well As Msps Have In Common

What Do You Need When Its Time To Buy A New Server

Securing Data At Rest Self Encrypting Disks Seds

Buzzword Bingo And Acronym Update V2 011

Happy Earth Day 2011

The Data Storage Prayer

Cloud And Virtual Data Storage Networking

Cloud Storage Dont Be Scared However Look Before You Leap

Storageio Going Dutch Seminar For Storage And Io Professionals

Seagate Kinetic Cloud Object Storage Io Platform

Summer Greetings And Happy Holidays V2011

Industry Trend People Plus Data Are Aging And Living Longer

Dell Storage Forum 2011 Revisited

Storageio Going Dutch Again October 2011 Seminar For Storage Professionals

Time In And Around Clouds

Congratulations To Infosmack On Episode 100

Industry Trends And Perspectives Public And Private It Clouds

Dude Is Dell Going To Buy Brocade

Spring May 2012 Storageio News Letter

Data Migration Tips

Cloud Conversation Thanks Gartner For Saying What Has Been Said

December 2012 Storageio Update News Letter

January 2013 Server And Storageio Update Newsletter

Behind The Scenes Santa Claus Global Cloud Story

Emc Vmax 10k Looks Like High End Storage Systems Are Still Alive Part Iii

Many Faces Of Storage Hypervisor Virtual Storage Or Storage Virtualization

February 2013 Server And Storageio Update Newsletter

Xtremio Xtremsw And Xtremsf Emc Flash Ssd Portfolio Redefined

Some Things Keep Going Around Seagate Ships 2 Billion Hdds

Where Has The Fcoe Hype And Fud Gone With Poll

A Pivotal Or Cloudy Moment For Emc And Vmware

March Metrics And Measuring Social Media

Are Your Analyst Blogger Media Or Press Requests Being Read

March 2013 Server And Storageio Update Newsletter

Pressure Cooker Good

Hp Moonshot 1500 Software Defined Capable Compute Servers

Netapp And Akorri An E2e Cross Technology Domain Sra Play

Full Rss Archive Feeds Are Now Available For Storageioblog

2013 Server Storageio Update Newsletter

Morning Summer Storms Walking Midwest

Ibm Buys Softlayer Software Defined Infrastructures Clouds

Upgrading Lenovo X1 Windows 7 Samsung 840 Ssd

Geek Gadgets Kill A Watt Meter

Green Storage Practical Ways To Reduce Power Consumption

Data Proteciton For Virtual Environments At Vmware Vmworld

From Ilm To Iim Is This A Solution Sell Looking For A Problem

Industry Trends And Perspectives Tape Disk And Dedupe Coexistence

Ilm Has It Losts Its Meaning

Is Ibm Xiv Still Relevant

Data Proteciton For Virtual Environments

Spc And Storage Benchmarking Games

Server And Storage Virtualization Life Beyond Consolidation

Epa Draft 3 Of Energy Star For Computer Server Specification

Cloud Virtual Server Storage Io Technology Tiering

Disruptive Updates

Virtual Cloud Availability Shared Responsibility Common Sense

Storage Performance

Will 6gb Sas Kill Fibre Channel

Poll Whats Do You Think Of It Clouds

Closing The Green Gap Green Washing May Be Endangered However Addressing Real Green Issues Is Here To Stay

Catch Of The Day Or Post Of The Day

Availability Or Lack There Of Lessons From Our Frail Aging Infrastructure

Cisco Wins Fcoe Pre Season And Primaries Now For The Main Event

Power Cooling Floor Space Environmental Pcfe And Green Metrics

Tape Talk Changing Role Of Tape

Sas Disk Drives Appearing In Larger Mid Range Arrays

Blog Post March Metric Madness Fun With Simple Math

Hard Product Vs Soft Product

Optical Storage Oppourtunities Or Obsolence

Storage Efficiency And Optimization The Other Green

Smb Capacity Planning Focusing On Energy Conservation

Whats Your Take On Ftc Guidelines For Bloggers

Technology And Traveling

Clouds And Data Loss Time For Cdp Commonsense Data Protection

Epa Energy Star For Data Center Storage Update 2

From Bits To Bytes Decoding Encoding

Industry Trends And Perspectives 6gb Sas And Das Are Not Dumb A Storage

As The Hard Disk Drive Hdd Continues To Spin

Another Storageio Hybrid Momentus Moment

Cloud Conversations Aws Ebs Optimized Instances

Unified Storage Systems Showdown Netapp Fas Vs Emc Vnx

April 2013 Server Storageio Update Newsletter

Cloud Conversations Aws Ebs Glacier And S3 Overview Part Iii

Part Ii Ibm Server Side Storage Io Ssd Flash Cache Software

Are Hard Disk Drives Hdds Getting To Big

2011 Summer Momentus Hybrid Hard Disk Drive Hhdd Moment

Measuring Windows Performance Impact For Vdi Planning

Getting Sasy The Other Shared Storage Option For Disk And Ssd Systems

Supporting It Growth Demand During Economic Uncertain Times

Inaugural Ssd Show

Care Coraid Content Conversation

Wd Buys Nand Flash Ssd Storage Io Cache Vendor Virident

Depends

Fall 2013 Dutch Cloud Virtual Storage Io Seminars

Data Footprint Reduction Part 2 Dell Ibm Ocarina And Storwize

Fall 2010 Storageio News Letter

Spring 2011 Server And Storageio News Letter

Winter 2011 Server And Storageio News Letter

Summer 2011 Storageio News Letter

A Storage Io Momentus Moment

Part Ii Emc Announces Xtremio General Availability

Fall December 2011 Storageio News Letter

Merry Christmas Seasons Happy Holidays 2013 Server Storageio

Fusionio Fio Ssd Vendor Ceo Flash Whats

Server Virtualization Nested Tiered Hypervisors

Book Review Rethinking Enterprise Storage Microsoftstorsimple Marc Farley

Kudos To Hp Ceo Mark Hurd For Dignity To Step Down From His Post

Dell Inspiron 660 Virtual Diamond Rough

August 2010 Storageio News Letter

Small Medium Business Smb Continues Gain Respect Soho

Using Removable Hard Disk Drives Rhdds

Storage Bridge Bay Sbb Industry Group Update

Emc Announces Xtremio General Availability Part

Emc Evolves Enterprise Data Protection Enhancements Part

Raid Extend Life Nand Flash Ssd

Fall 2013 Aws Cloud Storage Compute Enhancements

Emc Vplex Virtual Storage Redefined Or Respun

The Other Green Storage Efficiency And Optimization

Is Fcoe Struggling To Gain Traction Or On A Normal Adoption Course

Big Fish And Small Fish Fish Story Or The One That Did Not Get Away

Side Context Iops

Part Ii Revisiting Reinvent 2014 And Other Aws Updates

Summer 2013 Server And Storageio Update Newsletter

Dell Will Buy Someone However Not Brocade At Least For Now

Happy Thanks Giving 2010

June 2010 Storageio Newsletter

What Records Will Emc Break In Nyc January 18 2011

Smb Soho And Low End Nas Gaining Enterprise Features

Gregs Storageio Out And About Update June 2010

Vmware Vsphere V5 And Storage Drs

Storage Effiency And Optimizaiton Balancing Time And Space

Pue Are You Managing Power Energy Or Productivity

Emc Vnx Mcx Storage Io Work

The New Green Gaining Realistic Economic Efficiencys Now

Closing The Green Gap Wsradio Internet Radio Interview

Determining Computer Or Server Energy Use

Epa Energy Star For Data Center Storage Update

Saving Money With Green It Time To Invest In Information Factories

Webcast E2e Awareness And Insight For It Environments

Ibm Server Side Storage Io Ssd Flash Cache Software

Part Ii Emc Evolves Enterprise Data Protection Enhancements

Cisco Buys Whiptail Continuing Storage Storage Io Flash Cash Cache Dash

Fall 2013 Storageio Update Newsletter

Raid Relevance Revisited

Have You Heard Of 2drs Data Protection Technology

July 2010 Odds And Ends Perspectives Tips And Articles

Has Ssd Put Hard Disk Drives Hdds On Endangered Species List

Seagate Proof Life Enterprise Hdd Enhancements

Seagate To Say Goodbye To Cayman Islands Hello Ireland

Cloud Conversations Gaining Cloud Confidence From Insights Into Aws Outages

Have Vtls Or Vxls Become Zombies Declared Dead Yet Still Alive

Tiered Communication And Media Venues

Are You On The Storageio It Data Infrastructure Industry Links Page

Green Storage Is Alive And Well Energy Star Enterprise Storage Stakeholder Meeting Details

Tape Talk Time

Back To School Dedupe School

Storageio V20 11 2011 Events Seminars And Web Casts Schedule

Getting Caught Up And Holiday Shopping

Performance Availability Storageioblog Featured Itke Guest Blog

The New Green It Efficient Effective Smart And Productive

Dude Is Dell Doing A Disk Deal Again With Compellent

Intelligent Power Management Ipm And Second Generation Maid 20 On The Rise

2010 And 2011 Trends Perspectives And Predictions More Of The Same

Mainframe Cmg Virtualization Storage And Zombie Technologies

Vmworld 2010 Virtual Roads Clouds And Inxs Devil Inside

Green Power And Cooling Tools And Calculators

Green It Green Gap Tiered Energy And Green Myths

Vmworld 2013 Vmware Server Storage Io Networking Update Day 1

Part Ii Xtremio Xtremsw And Xtremsf Emc Flash Ssd Portfolio Redefined

Datadynamics Storagex 70 File Data Management Migration Software

Whats Your Take On Open Virtualization Alliance And Vmware

September October Server Storageio Update Newsletter

Server Storageio June July 2016 Update Newsletter

Open Data Center Alliance Odca Bmw Private Cloud Strategy

Happy 20th Birthday Microsoft Windows Server Get Ready Windows Server 2016

Server Storageio March 2016 Update Newsletter

Netapp Ef540 Something Familiar Something New

Data Footprint Reduction Part 1 Life Beyond Dedupe And Changing Data Lifecycles

Emc Vipr Software Defined Object Storage Part Ii

Emc Vipr Software Defined Object Storage Part Iii

Emc Vipr Virtual Physical Object Software Defined Storage Sds

Breaking Vmware Esxi 55 Acpi Boot Loop Lenovo Td350

Storageio In The News

Summer Book Update And Back To School Reading

February 2014 Server Storageio Update Newsletter

November 2013 Server Storageio Update Newsletter

Matt Vogt Computex Talks Vmware Vcops Podcast

August 2014 Server Storageio Update Newsletter

July 2014 Server Storageio Update Newsletter

Storage Virtualization In Band Vs Out Of Band Debates To Be Resurrected

Snow Fun And Information Technology They Do Mix

Technology Tiering Servers Storage And Snow Removal

Netapp Buying Lsis Engenio Storage Business Unit

Summer Weddings Emcdatadomain And Hpibrix

Server Storage Io Intel Nuc Nick Knack Notes Second Impressions

Emc Vfcache Respinning Ssd And Intelligent Caching Part Ii

Hds Claus Mikkelsen Talking Storage Snw Fall 2012

How To Write Publish And Promote A Book Or Blog

Oracle Xsigo Vmware Nicira Sdn And Iov Io Io Its Off To Work They Go

Open Data Center Alliance Odca Publishes Two New Cloud Usage Models

Nand Flash Sata Ssd Ddr3 Dimm Slot

Server Storageio February 2016 Update Newsletter

Server Storageio January 2016 Update Newsletter

June 2017 Server Storageio Data Infrastructures Update Newsletter

Ibms Storwize Or Wise Storage The V7000 And Dfr

Re Visiting If Ibm Xiv Is Still Relevant With V7000

Part I Puresystems Something Old Something New Something From Big Blue

Part V Puresystems Something Old Something New Something From Big Blue

Part Iv Puresystems Something Old Something New Something From Big Blue

Part Ii Puresystems Something Old Something New Something From Big Blue

Microsoft Azure Cloud Software Defined Data Infrastructure Reference Architecture Resources

Happy 100th Birthday Or Anniversary Wishes

Azure Stack Tp3 Overview Preview Review Part Ii

Data Protection Diaries Data Protection

March2014 Storageio Newsletter Cisco Cloud Vmware Vsan

June 2014 Server Storageio Update Newsletter

Chat With Cash Coleman Talking Cleardb Cloud Database And Johnny Cash

April 2014 Server Storageio Update Newsletter

Acadia Vce Vmware Cisco Emc Virtual Computing Environment

Storageio Spring Keynote And Speaking Tour V2008

Server Storageio April 2016 Update Newsletter

Cloud Conversations Loss Of Data Access Vs Data Loss

Hpe Buying Server Storage Io Data Infrastructures

January 2017 Server Storageio Update Newsletter

Top Vblog 2017 Voting Open

Data Infrastructure Tradecraft Trends

Converged Ci Hyperconverged Hci Mean Storage Io

Popular Viewed Storageioblog Posts 2016

March 2017 Server Storageio Update Newsletter

Top Storage World Decade

Back To School Shopping Dude Dell Digests 3par Disk Storage

Does Ibm Power7 Processor Announcement Signal Storage Upgrades

Do You Know Hds Or What It Means

Is The New Hds Vsp Really The Mvsp

Hds Mid Summer Storage Converged Compute Enhancements

Object Storage News Trends Cloud Bulk Storage

Hds Buys Bluearc Any Surprises Here

June 2015 Server Storageio Update Newsletter

Server Storageio Holiday Seasons 2016

Do Software Vendors Eliminate Or Move Location Of Vendor Lock In

Vendor Lockin Responsibiity

Spam Of A Different Kind

Part Iii Puresystems Something Old Something New Something From Big Blue

Emc Vmax 10k Looks Like High End Storage Systems Are Still Alive

Which Enterprise Hdd Content Application Testing

Which Enterprise Hdd Content Server Test Configuration

Hdd Ssd Flash Storage Iops

Which Enterprise Hdd Use For Database Workloads

Enterprise Hdd For Content Server Different File Size

Which Enterprise Hdd General Io Performance

Enterprise Hdds Evolve For Content Server Applications

Achieve Flexible Data Protection

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

SDDC Data Infrastructure

Check out the above data infrastructure resource links.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

New family of Intel Xeon Scalable Processors enable software defined data infrastructures (SDDI) and SDDC

Intel Xeon Scalable Processors SDDI and SDDC

server storage I/O data infrastructure trends

Today Intel announced a new family of Xeon Scalable Processors (aka Purley) that for some workloads Intel claims to be on average 1.65x faster than their predecessors. Note your real improvement will vary based on workload, configuration, benchmark testing, type of processor, memory, and many other server storage I/O performance considerations.

Intel Scalable Xeon Processors
Image via Intel.com

In general the new Intel Xeon Scalable Processors enable legacy and software defined data infrastructures (SDDI), along with software defined data centers (SDDC), cloud and other environments to support expanding workloads more efficiently as well as effectively (e.g. boosting productivity).

Data Infrastructures and workloads

Some target application and environment workloads Intel is positioning these new processors for include among others:

  • Machine Learning (ML), Artificial Intelligence (AI), advanced analytics, deep learning and big data
  • Networking including software defined network (SDN) and network function virtualization (NFV)
  • Cloud and Virtualization including Azure Stack, Docker and Kubernetes containers, Hyper-V, KVM, OpenStack and VMware vSphere among others
  • High Performance Compute (HPC) and High Productivity Compute (e.g. the other HPC)
  • Storage including legacy and emerging software defined storage software deployed as appliances, systems or in server less deployment modes.

Features of the new Intel Xeon Scalable Processors include:

  • New core micro architecture with interconnects and on die memory controllers
  • Sockets (processors) scalable up to 28 cores
  • Improved networking performance using Quick Assist and Data Plane Development Kit (DPDK)
  • Leverages Intel Quick Assist Technology for CPU offload of compute-intensive functions including I/O networking, security, AI, ML, big data, analytics and storage functions. Functions that benefit from Quick Assist include cryptography, encryption, authentication, cipher operations, digital signatures, key exchange, lossless data compression and data footprint reduction along with data at rest encryption (DARE).
  • Optane Non-Volatile Dual Inline Memory Module (NVDIMM) for storage class memory (SCM) also referred to by some as Persistent Memory (PM), not to be confused with Physical Machine (PM).
  • Supports Advanced Vector Extensions 512 (AVX-512) for HPC and other workloads
  • Optional Omni-Path Fabrics in addition to 1/10Gb Ethernet among other I/O options
  • Six memory channels supporting up to 6TB of RDIMM with multi-socket systems
  • From two to eight sockets per node (system)
  • Systems support PCIe 3.x (some supporting x4 based M.2 interconnects)

Note that exact speeds, feeds, slots and watts will vary by specific server model and vendor options. Also note that some server system solutions have two or more nodes (e.g. two or more real servers) in a single package not to be confused with two or more sockets per node (system or motherboard). Refer to the where to learn more section below for links to Intel benchmarks and other resources.
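As a rough sanity check on the memory bullet above, here is a quick back-of-the-envelope calculation (a sketch only; the two DIMMs per channel, 128GB DIMM size and four sockets are illustrative assumptions that vary by vendor platform):

```python
# Back-of-the-envelope memory capacity estimate for a multi-socket node.
# Assumptions (vendor/platform dependent): 6 channels per socket,
# 2 DIMM slots per channel, 128GB DIMMs, 4 sockets.
channels_per_socket = 6
dimms_per_channel = 2          # assumption; varies by motherboard
dimm_capacity_gb = 128         # assumption; large-capacity DIMM
sockets = 4                    # assumption; nodes range from two to eight sockets

total_gb = channels_per_socket * dimms_per_channel * dimm_capacity_gb * sockets
print(f"Estimated max memory: {total_gb} GB (~{total_gb / 1024:.1f} TB)")
# -> 6144 GB (~6.0 TB), in line with the up to 6TB figure above
```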

Software Defined Data Infrastructures, SDDC, SDX and SDDI

What About Speeds and Feeds

Watch for and check out the various Intel partners who have or will be announcing their new server compute platforms based on Intel Xeon Scalable Processors. Each of the different vendors will have various speeds and feeds options that build on the fundamental Intel Xeon Scalable Processor capabilities.

For example Dell EMC announced their 14G server platforms at the May 2017 Dell EMC World event with details to follow (e.g. after the Intel announcements).

Some things to keep in mind include that the amount of DDR4 DRAM (or Optane NVDIMM) supported will vary by vendor server platform configuration, motherboard, number of sockets and DIMM slots. Also keep in mind the differences between registered DIMMs (e.g. buffered RDIMM) that give good capacity and great performance, and load reduced DIMMs (LRDIMM) that have great capacity and ok performance.

Various nvme options

What about NVMe

It’s there, as these systems, like previous Intel models, support NVMe devices via PCIe 3.x slots, with some vendor solutions also supporting M.2 x4 physical interconnects.

server storageIO flash and SSD
Image via Software Defined Data Infrastructure Essentials (CRC)

Note that Broadcom (formerly Avago, which had earlier acquired LSI) recently announced PCIe based RAID and adapter cards that support NVMe attached devices in addition to SAS and SATA.

server storage data infrastructure sddi

What About Intel and Storage

In case you have not connected the dots yet, the Intel Xeon Scalable Processor based server (aka compute) systems are also a fundamental platform for storage systems, services, solutions, appliances along with tin-wrapped software.

What this means is that the Intel Xeon Scalable Processor based systems can be used for deploying legacy as well as new and emerging software-defined storage software solutions. This also means that the Intel platforms can be used to support SDDC, SDDI, SDX and SDI, as well as other forms of legacy and software-defined data infrastructures, along with cloud, virtual, container and serverless among other modes of deployment.

Image Via Intel.com

Moving beyond server and compute platforms, there is another tie to storage as part of this recent as well as other Intel announcements. Just a few weeks ago Intel announced 64 layer triple level cell (TLC) 3D NAND solutions positioned for the client market (laptops, workstations, tablets, thin clients). With that announcement Intel increased the traditional areal density (e.g. bits per square inch or cm) as well as boosting the number of layers (stacking more bits as well).

The net result is not only more bits per square inch, but also more per cubic inch or cm. This is all part of a continued evolution of NAND flash, including from 2D to 3D, MLC to TLC, and 32 to 64 layers. In other words, NAND flash-based Solid State Devices (SSDs) remain a very relevant technology that continues to be enhanced, even with the emerging 3D XPoint and Optane (also available via Amazon in M.2) in the wings.
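To put the cell-level and layer scaling in perspective, here is a simple relative-capacity comparison (a sketch that normalizes per-cell footprint; real die capacities also depend on lithography, die size and over-provisioning, which are ignored here):

```python
# Relative NAND capacity scaling from bits per cell and layer count.
# Illustrative only: ignores lithography shrink, die size and spare area.
def relative_capacity(bits_per_cell, layers):
    return bits_per_cell * layers

baseline = relative_capacity(bits_per_cell=2, layers=32)   # 32-layer MLC
newer    = relative_capacity(bits_per_cell=3, layers=64)   # 64-layer TLC
print(f"64-layer TLC vs 32-layer MLC: {newer / baseline:.1f}x per cell footprint")
# -> 3.0x more bits for the same planar cell area, all else being equal
```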

server memory evolution
Via Intel and Micron (3D XPoint launch)

Keep in mind that NAND flash-based technologies were announced almost 20 years ago (1999), and are still evolving. 3D XPoint announced two years ago, along with other emerging storage class memories (SCM), non-volatile memory (NVM) and persistent memory (PM) devices are part of the future as is 3D NAND (among others). Speaking of 3D XPoint and Optane, Intel had announcements about that in the past as well.

Where To Learn More

Learn more about Intel Xeon Scalable Processors along with related technology, trends, tools, techniques and tips with the following links.

What This All Means

Some say the PC is dead, and IMHO that depends on what you mean by, or how you define, a PC. For example, if you refer to a PC generically to also include servers besides workstations or other devices, then they are alive. If however your view is that PCs are only workstations and client devices, then they are on the decline.

However if your view is that a PC is defined by the underlying processor, such as an Intel general purpose 64 bit x86 derivative (or descendent), then they are very much alive. Just as older generations of PCs leveraging general purpose Intel based x86 (and its predecessor) processors were deployed for many uses, so too are today's line of Xeon (among other) processors.

Even with the increase of ARM, GPU and other specialized processors, as well as ASICs and FPGAs for offloads, the role of general purpose processors continues to increase, as does the technology evolution around them. Even so-called serverless architectures still need underlying compute server platforms for running software, which also includes software defined storage, software defined networks, SDDC, SDDI, SDX and IoT among others.

Overall this is a good set of announcements by Intel and what we can also expect to be a flood of enhancements from their partners who will use the new family of Intel Xeon Scalable Processors in their products to enable software defined data infrastructures (SDDI) and SDDC.

Ok, nuff said (for now…).

Cheers
Gs

Greg Schulz – Multi-year Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book "Software-Defined Data Infrastructure Essentials" (CRC Press).

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

Part 3 – Which HDD for content applications – Test Configuration

Which HDD for content applications – HDD Test Configuration

HDD Test Configuration server storage I/O trends

Updated 1/23/2018

Which enterprise HDD to use with a content server platform hdd test configuration

Insight for effective server storage I/O decision making
Server StorageIO Lab Review

Which enterprise HDD to use for content servers

This is the third in a multi-part series (read part two here) based on a white paper hands-on lab report I did compliments of Servers Direct and Seagate that you can read in PDF form here. The focus is looking at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drives (HDDs). In this post the focus expands to hardware and software defining as well as configuring the test environments along with application workloads.

Defining Hardware Software Environment

Servers Direct content platforms are software defined and hardware defined to your specific solution needs. For my test-drive, I used a pair of 2U Content Solution platforms, one as a client System Test Initiator (STI) (3), the other as the server System Under Test (SUT) shown in figure-1 (below). With the STI configured and the SUT set up, Seagate Enterprise class 2.5” 12Gbps SAS HDDs were added to the configuration.

(Note 3) The System Test Initiator (STI) was hardware defined with dual Intel Xeon E5-2695 v3 (2.30 GHz) processors and 32GB RAM running Windows Server 2012 R2, with two network connections to the SUT. Network connections from the STI to SUT included an Intel GbE X540-AT2 as well as an Intel XL710 Q2 40 GbE Converged Network Adapter (CNA). In addition to software defining the STI with Windows Server 2012 R2, Dell Benchmark Factory (V7.1 64 bit 496), part of the Database Administrators (DBA) Toad Tools (including free versions), was also used. For those familiar with HammerDB, Sysbench among others, Benchmark Factory is an alternative that supports various workloads and database connections with robust reporting, scripting and automation. Other installed tools included Spotlight on Windows, Iperf 2.0.5 for generating network traffic and reporting results, as well as Vdbench with various scripts.
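For the network traffic portion, a small wrapper such as the following can drive Iperf from the STI and capture throughput (a minimal sketch, assuming an iperf 2.x binary is on the PATH and an iperf server is already running on the SUT; the address and durations below are placeholders, not the lab's actual values):

```python
# Sketch: run iperf (2.x) from the STI against the SUT and report throughput.
# Assumes "iperf -s" is already running on the SUT; values are placeholders.
import subprocess

SUT_ADDRESS = "192.168.1.100"   # hypothetical SUT data-network address

def run_iperf(server, seconds=60):
    # -y C requests CSV output; in iperf 2.x the final field is bits per second
    cmd = ["iperf", "-c", server, "-t", str(seconds), "-y", "C"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    bits_per_sec = float(out.strip().splitlines()[-1].split(",")[-1])
    return bits_per_sec / 1e9    # Gbits/sec

if __name__ == "__main__":
    print(f"Throughput: {run_iperf(SUT_ADDRESS):.1f} Gb/s")
```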

SUT setup (4) included four Enterprise 10K and two 15K Performance drives with the enhanced performance caching feature enabled, along with two Enterprise Capacity 2TB HDDs, all attached to an internal 12Gbps SAS RAID controller.

(Note 4) The System Under Test (SUT) had dual Intel Xeon E5-2697 v3 (2.60 GHz) processors providing 54 logical processors, 64GB of RAM (expandable to 768GB with 32GB DIMMs, or 3TB with 128GB DIMMs) and two network connections. Network connections from the STI to SUT consisted of an Intel 1 GbE X540-AT2 as well as an Intel XL710 Q2 40 GbE CNA. The GbE LAN connection was used for management purposes while the 40 GbE was used for data traffic. The system disk was a 6Gbps SATA flash SSD. Seagate Enterprise class HDDs were installed into the 16 available 2.5” small form factor (SFF) drive slots. The eight left-most drive slots were connected to an Intel RMS3CC080 12 Gbps SAS RAID internal controller. The “Blue” drives in the middle were connected to both an NVMe PCIe card and the motherboard 6 Gbps SATA controller using an SFF-8637 connector. The four right-most drives were also connected to the motherboard 6 Gbps SATA controller.

System Test Configuration
Figure-1 STI and SUT hardware as well as software defined test configuration

This included four Enterprise 10K and two 15K Performance drives with the enhanced performance caching feature enabled, along with two Enterprise Capacity 2TB HDDs, all attached to an internal 12Gbps SAS RAID controller. Five 6 Gbps SATA Enterprise Capacity 2TB HDDs were set up using Microsoft Windows as a spanned volume. The system disk was a 6Gbps flash SSD and an NVMe flash SSD drive was used for database temp space.

What About NVM Flash SSD?

NAND flash and other Non-Volatile Memory (NVM) and SSD complement content solutions. A little bit of flash SSD in the right place can have a big impact. The focus for these tests is HDDs, however some flash SSDs were used as system boot and database temp (e.g. tempdb) space. Refer to StorageIO Lab reviews and visit www.thessdplace.com

Seagate Enterprise HDD’s Used During Testing

Various Seagate Enterprise HDD specifications used in the testing are shown below in table-1.

 

Qty | Seagate HDD's | Capacity | RPM | Interface | Size | Model | Servers Direct Price Each | Configuration
4 | Enterprise 10K Performance | 1.8TB | 10K with cache | 12 Gbps SAS | 2.5” | ST1800MM0128 with enhanced cache | $875.00 USD | HW(5) RAID 10 and RAID 1
2 | Enterprise Capacity 7.2K | 2TB | 7.2K | 12 Gbps SAS | 2.5” | ST2000NX0273 | $399.00 USD | HW RAID 1
2 | Enterprise 15K Performance | 600GB | 15K with cache | 12 Gbps SAS | 2.5” | ST600MX0082 with enhanced cache | $595.00 USD | HW RAID 1
5 | Enterprise Capacity 7.2K | 2TB | 7.2K | 6 Gbps SATA | 2.5” | ST2000NX0273 | $399.00 USD | SW(6) RAID Span Volume

Table-1 Seagate Enterprise HDD specification and Servers Direct pricing
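As a quick cross-check of the configurations in table-1, usable capacity by RAID layout works out as follows (a simple sketch using vendor decimal TB; actual formatted capacity after RAID metadata and filesystem overhead will be lower):

```python
# Usable capacity estimates for the table-1 drive groupings (decimal TB).
def raid_usable(drives, capacity_tb, level):
    if level in ("raid1", "raid10"):    # mirrored (or striped mirrors): half of raw
        return drives * capacity_tb / 2
    if level == "span":                 # simple concatenation: all of raw
        return drives * capacity_tb
    raise ValueError(level)

print(raid_usable(4, 1.8, "raid10"))   # 4x 1.8TB 10K, HW RAID 10 -> 3.6 TB
print(raid_usable(2, 2.0, "raid1"))    # 2x 2TB 7.2K SAS, HW RAID 1 -> 2.0 TB
print(raid_usable(2, 0.6, "raid1"))    # 2x 600GB 15K, HW RAID 1 -> 0.6 TB
print(raid_usable(5, 2.0, "span"))     # 5x 2TB SATA, SW spanned volume -> 10.0 TB
```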

URLs for additional Servers Direct content platform information:
https://serversdirect.com/solutions/content-solutions
https://serversdirect.com/solutions/content-solutions/video-streaming
https://www.serversdirect.com/File%20Library/Data%20Sheets/Intel-SDR-2P16D-001-ds2.pdf

URLs for additional Seagate Enterprise HDD information:
https://serversdirect.com/Components/Drives/id-HD1558/Seagate_ST2000NX0273_2TB_Hard_Drive

https://serversdirect.com/Components/Drives/id-HD1559/Seagate_ST600MX0082_SSHD

Seagate Performance Enhanced Cache Feature

The Enterprise 10K and 15K Performance HDDs tested had the enhanced cache feature enabled. This feature provides a “turbo” boost like acceleration for both read and write I/O operations. HDDs with the enhanced cache feature leverage the fact that some NVM such as flash in the right place can have a big impact on performance (7).

In addition to their performance benefit, combining a best-of or hybrid storage model (combining flash with HDDs along with software defined cache algorithms), these devices are “plug-and-play”. By being “plug-and-play” no extra special adapters, controllers, device drivers, tiering or cache management software tools are required.

(Note 5) Hardware (HW) RAID using Intel server on-board LSI based 12 Gbps SAS RAID card, RAID 1 with two (2) drives, RAID 10 with four (4) drives. RAID configured in write-through mode with default stripe / chunk size.

(Note 6) Software (SW) RAID using Microsoft Windows Server 2012 R2 (span). Hardware RAID used write-through cache (e.g. no buffering) with read-ahead enabled and a default 256KB stripe/chunk size.

(Note 7) Refer to Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy

The Seagate Enterprise Performance 10K and 15K with the enhanced cache feature are a good example of how there is more to performance in today's HDDs than simply comparing RPMs, drive form factor or interface.

Where To Learn More

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Careful and practical planning are key steps for testing various resources as well as aligning the applicable tools, configuration to meet your needs.

Continue reading part four of this multi-part series here where the focus expands to database application workloads.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

NVMe overview primer

server storage I/O trends
Updated 2/2/2018

This is the first in a five-part mini-series providing a primer and overview of NVMe. View companion posts and more material at www.thenvmeplace.com.

What is NVM Express (NVMe)

Non-Volatile Memory (NVM) includes persistent memory such as NAND flash and other forms of Solid State Devices (SSD). NVM Express (NVMe) is a new server storage I/O protocol alternative to AHCI/SATA and the SCSI protocol used by Serial Attached SCSI (SAS). Note that the name NVMe is owned and managed by the NVM Express industry trade group (www.nvmexpress.org).

The key question with NVMe is not if, rather when, where, why, how and with what will it appear in your data center or server storage I/O data infrastructure. This is a companion to material that I have on my micro site www.thenvmeplace.com that provides an overview of NVMe, as well as helps to discuss some of the questions about NVMe.

Main features of NVMe include among others:

  • Lower latency due to improved drivers and increased queues (and queue sizes)
  • Lower CPU overhead to handle larger numbers of I/Os (more CPU available for useful work)
  • Higher I/O activity rates (IOPs) to boost productivity and unlock the value of fast flash and NVM
  • Bandwidth improvements leveraging the fast PCIe interface and available lanes
  • Dual-pathing of devices similar to what is available with dual-path SAS devices
  • Unlocks the value of more cores per processor socket and software threads (productivity)
  • Various packaging options, deployment scenarios and configuration options
  • Appears as a standard storage device on most operating systems
  • Plug-and-play with in-box drivers on many popular operating systems and hypervisors

Why NVMe for Server Storage I/O?
NVMe has been designed from the ground up for accessing fast storage, including flash SSD, leveraging PCI Express (PCIe). The benefits include lower latency, improved concurrency, increased performance and the ability to unleash a lot more of the potential of modern multi-core processors.

NVMe Server Storage I/O
Figure 1 shows common server I/O connectivity including PCIe, SAS, SATA and NVMe.

NVMe, leveraging PCIe, enables modern applications to reach their full potential. NVMe is one of those rare, generational protocol upgrades that comes around every couple of decades to help unlock the full performance value of servers and storage. NVMe does need new drivers, but once in place, it plugs and plays seamlessly with existing tools, software and user experiences. Likewise many of those drivers are now in the box (e.g. ship with) for popular operating systems and hypervisors.

While SATA and SAS provided enough bandwidth for HDDs and some SSD uses, more performance is needed. NVMe does not replace SAS or SATA in the near term; they can and will coexist for years to come, enabling different tiers of server storage I/O performance.

NVMe unlocks the potential of flash-based storage by allowing up to 65,536 (64K) queues, each with 64K commands per queue. SATA allows for only one command queue capable of holding 32 commands, while SAS supports a single queue with 64K command entries. As a result, the storage I/O capabilities of flash can now be fed across PCIe much faster, enabling modern multi-core processors to complete more useful work in less time.
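The queueing difference is easier to appreciate with simple arithmetic (a sketch using the protocol maximums cited above; real devices and drivers typically expose far fewer queues):

```python
# Maximum outstanding commands by interface, using the protocol limits cited above.
# Real devices and drivers expose fewer queues, so treat these as upper bounds.
interfaces = {
    "SATA (AHCI/NCQ)": 1 * 32,           # 1 queue x 32 commands
    "SAS":             1 * 65536,        # 1 queue x 64K command entries
    "NVMe":            65536 * 65536,    # up to 64K queues x 64K commands each
}
for name, depth in interfaces.items():
    print(f"{name:17s} up to {depth:,} outstanding commands")
```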

Where To Learn More

View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Continue reading about NVMe with Part II (Different NVMe configurations) in this five-part series, or jump to Part III, Part IV or Part V.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy

Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy

The question to ask yourself is not if flash Solid State Device (SSD) technologies are in your future.

Instead the questions are when, where, using what, how to configure and related themes. SSD including traditional DRAM and NAND flash-based technologies are like real estate where location matters; however, there are different types of properties to meet various needs. This means leveraging different types of NAND flash SSD technologies in different locations in a complementary and cooperative aka hybrid way.

Introducing Solid State Hybrid Drives (SSHD)

Solid State Hybrid Drives (SSHD) are the successors to the previous generation of Hybrid Hard Disk Drives (HHDD) that I have used for several years (you can read more about them here, and here).

While it would be nice to simply have SSD for everything, there are also economic budget realities to be dealt with. Keep in mind that a bit of NAND flash SSD cache in the right location for a given purpose can go a long way, which is the case with SSHDs. This is also why in many environments today there is a mix of SSDs and HDDs of various makes, types, speeds and capacities (e.g. different tiers) to support diverse application needs (e.g. not everything in the data center is the same).

However, if you have the need for speed and can afford or benefit from the increased productivity, by all means go SSD!

Otoh, if you have budget constraints and need more space capacity yet want some performance boost, then SSHDs are an option. The big difference, however, with today's SSHDs, which are available for both enterprise class storage systems and servers as well as desktop environments, is that they can accelerate both reads and writes. This is different from their predecessors that I have used for several years now, which had basic read acceleration but no write optimizations.

SSHD storage I/O opportunity
Better Together: Where SSHDs fit in an enterprise tiered storage environment with SSD and HDDs

As their names imply, they are a hybrid between a NAND flash Solid State Device (SSD) and a traditional Hard Disk Drive (HDD), meaning a best-of-both situation. This means that an SSHD is based on a traditional spinning HDD (various models with different speeds, space capacities and interfaces) along with DRAM (which is found on most modern HDDs), NAND flash for read cache, and some extra nonvolatile memory for persistent write cache, combined with a bit of software defined storage performance optimization algorithms.

Btw, if you were paying attention to that last sentence, you would have picked up on something about nonvolatile memory being used for persistent write cache, which should prompt the question: would that help with NAND flash write endurance? Yup.
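A simple weighted-average model illustrates why a modest amount of NVM cache in the right place has an outsized effect (a sketch with illustrative latencies, not measured Seagate numbers):

```python
# Effective service time for a hybrid (flash cache + HDD) device as a function
# of cache hit ratio. Latencies below are illustrative, not measured values.
def effective_latency_ms(hit_ratio, flash_ms=0.2, hdd_ms=6.0):
    return hit_ratio * flash_ms + (1.0 - hit_ratio) * hdd_ms

for hit in (0.0, 0.5, 0.8, 0.95):
    print(f"hit ratio {hit:.0%}: ~{effective_latency_ms(hit):.2f} ms average")
# Even an 80% hit ratio drops the average from ~6 ms toward ~1.4 ms,
# which is why a small cache can deliver a large share of the benefit.
```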

Where and when to use SSHD?

In the StorageIO Industry Trends Perspective thought leadership white paper I recently released compliments of Seagate (that's a disclosure btw ;), enterprise class Seagate Enterprise Turbo Solid State Hybrid Drives (SSHD) were looked at and test driven in the StorageIO Labs with various application workloads. These activities included running in a virtual environment for common applications including database and email messaging using industry standard benchmark workloads (e.g. TPC-B and TPC-E for database, JetStress for Exchange).

Storage I/O sshd white paper

Conventional storage system focused workloads using Iometer, iorate and Vdbench were also run in the StorageIO Labs to set up baselines for reads, writes, random, sequential, small and large I/O sizes with IOPs, bandwidth and response time latency results. Some of those results can be found here (Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?) with other ongoing workloads continuing in different configurations. The various test drive proof points were done in the StorageIO Labs comparing SSHD, SSD and different HDDs.
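When reviewing baseline numbers like these, the relationships among IOPS, I/O size, bandwidth and response time are easy to sanity check (a sketch of the standard arithmetic and Little's Law; the example numbers are illustrative and not results from the lab runs):

```python
# Relate IOPS, I/O size, bandwidth and response time (Little's Law).
# Example numbers are illustrative only, not results from the lab runs.
def bandwidth_mb_s(iops, io_size_kb):
    return iops * io_size_kb / 1024.0

def concurrency(iops, latency_ms):
    # Little's Law: average outstanding I/Os = arrival rate x response time
    return iops * (latency_ms / 1000.0)

print(bandwidth_mb_s(iops=2000, io_size_kb=8))      # ~15.6 MB/s for 8KB random I/O
print(bandwidth_mb_s(iops=400, io_size_kb=256))     # ~100 MB/s for 256KB sequential I/O
print(concurrency(iops=2000, latency_ms=4))         # ~8 outstanding I/Os on average
```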

Data Protection (Archiving, Backup, BC, DR)

Staging cache or buffer area for snapshots, replication or current copies before streaming to another storage tier, using fast read/write capabilities. Metadata, indexes and catalogs benefit from fast reads and writes for faster protection.

Big Data DSS
Data Warehouse

Support sequential read-ahead operations and “hot-band” data caching in a cost-effective way using SSHD vs. slower similar capacity size HDDs for Data warehouse, DSS and other analytic environments.

Email, Text and Voice Messaging

Microsoft Exchange and other email journals, mailbox or object repositories can leverage faster read and write I/Os with more space capacity.

OLTP, Database
 Key Value Stores SQL and NoSQL

Eliminate the need to short stroke HDDs to gain performance, offering more space capacity and IOPS performance per device for tables, logs, journals, import/export and scratch, temporary ephemeral storage. Leverage random and sequential read acceleration to complement server-side SSD-based read and write-thru caching. Utilize fast magnetic media for persistent data, reducing wear and tear on more costly flash SSD storage devices.

Server Virtualization

Fast disk storage for data stores and virtual disks supporting VMware vSphere/ESXi, Microsoft Hyper-V, KVM, Xen and others, holding virtual machines such as VMware VMDKs along with Hyper-V and other hypervisor virtual disks. Complement virtual server read cache and I/O optimization using SSD as a cache with writes going to fast SSHD. For example VMware vSphere 5.5 Virtual SAN host disk groups use SSD as a read cache and can use SSHD as the magnetic disk for storing data, boosting performance without breaking the budget or adding complexity.

Speaking of virtual, as mentioned the various proof points were run using Windows systems that were VMware guests with the SSHD and other devices being Raw Device Mapped (RDM) SAS and SATA attached; read how to do that here.

Hint: If you know about the VMware trick for making a HDD look like a SSD to vSphere/ESXi (refer to here and here) think outside the virtual box for a moment on some things you could do with SSHD in a VSAN environment among other things, for now, just sayin ;).

Virtual Desktop Infrastructure (VDI)

SSHDs can be used as high performance magnetic disks for storing linked clone images, applications and data. Leverage fast reads to support read-ahead or pre-fetch to complement SSD based read cache solutions. Utilize fast writes to quickly store data, enabling SSD-based read or write-thru cache solutions to be more effective. Reduce the impact of boot, shutdown, virus scan or maintenance storms while providing more space capacity.

Table 1 Example application and workload scenarios benefiting from SSHDs

Test drive application proof points

Various workloads were run using the Seagate Enterprise Turbo SSHD in the StorageIO lab environment across different real-world like application workload scenarios. These include general storage I/O performance characteristics profiling (e.g. reads, writes, random, sequential and various I/O sizes) to understand how these devices compare to other HDD, HHDD and SSD storage devices in terms of IOPS, bandwidth and response time (latency). In addition to basic storage I/O profiling, the Enterprise Turbo SSHD was also used with various SQL database workloads including Transaction Processing Council (TPC) workloads, along with VMware server virtualization among other use case scenarios.

Note that in the following workload proof points a single drive was used, meaning that using more drives in a server or storage system should yield better performance. This also means scaling would be bound by the constraints of a given configuration, server or storage system. These tests were also conducted using 6Gbps SAS with PCIe Gen 2 based servers, and ongoing testing is confirming even better results with 12Gbps SAS and faster servers with PCIe Gen 3.

SSHD large file storage i/o
Copy (read and write) 80GB and 220GB file copies (time to copy entire file)

SSHD storage I/O TPCB Database performance
SQLserver TPC-B batch database updates

Test configuration: 600GB 2.5” Enterprise Turbo SSHD (ST600MX) 6 Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6 Gbps SAS, 500GB 3.5” 7.2K RPM HDD 3 Gbps SATA, 1TB 3.5” 7.2K RPM HDD 3 Gbps SATA. Workload generator and virtual clients ran on Windows 7 Ultimate. Microsoft SQL Server 2012 Database was on Windows 7 Ultimate SP1 (64 bit), 14 GB DRAM, Dual CPU (Intel x3490 2.93 GHz), with LSI 9211 6Gbps SAS adapters and TPC-B (www.tpc.org) workloads. The VM resided on a separate data store from the devices being tested. All devices being tested with the SQL MDF were Raw Device Mapped (RDM) independent persistent, with the database log file (LDF) on a separate SSD device also persistent (no delayed writes). Tests were performed in StorageIO Lab facilities by StorageIO personnel.

SSHD storage I/O TPCE Database performance
SQLserver TPC-E transactional workload

Test configuration: 600GB 2.5” Enterprise Turbo SSHD (ST600MX) 6 Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6 Gbps SAS, 300GB 2.5” Savio 10K RPM HDD 6 Gbps SAS, 1TB 3.5” 7.2K RPM HDD 6 Gbps SATA. Workload generator and virtual clients ran on Windows 7 Ultimate. Microsoft SQL Server 2012 database was on Windows 7 Ultimate SP1 (64 bit), 14 GB DRAM, Dual CPU (E8400 2.99GHz), with LSI 9211 6Gbps SAS adapters and TPC-E (www.tpc.org) workloads. The VM resided on a separate SSD based data store from the devices being tested (e.g., where the MDF resided). All devices being tested were Raw Device Mapped (RDM) independent persistent, with the database log file on a separate SSD device also persistent (no delayed writes). Tests were performed in StorageIO Lab facilities by StorageIO personnel.

SSHD storage I/O Exchange performance
Microsoft Exchange workload

Test configuration: 2.5” Seagate 600 Pro 120GB (ST120FP0021) SSD 6 Gbps SATA, 600GB 2.5” Enterprise Turbo SSHD (ST600MX) 6 Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6 Gbps SAS, 2.5” Savio 146GB HDD 6 Gbps SAS, 3.5” Barracuda 500GB 7.2K RPM HDD 3 Gbps SATA. Email server hosted as a guest on VMware vSphere/ESXi V5.5, Microsoft Small Business Server (SBS) 2011 Service Pack 1 64 bit, 8GB DRAM, one CPU (Intel X3490 2.93 GHz), LSI 9211 6 Gbps SAS adapter, JetStress 2010 (no other active workload during test intervals). All devices being tested were Raw Device Mapped (RDM) where the EDB resided. The VM was on a separate SSD based data store from the devices being tested. Log file IOPs were handled via a separate SSD device.

Read more about the above proof points, along with data points and configuration information, in the associated white paper found here (no registration required).

What this all means

Similar to flash-based SSD technologies, the question is not if, rather when, where, why and how to deploy hybrid solutions such as SSHDs. If your applications and data infrastructure environments have the need for storage I/O speed without loss of space capacity and without breaking your budget, SSD enabled devices like the Seagate Enterprise Turbo 600GB SSHD are in your future. You can learn more about enterprise class SSHDs such as those from Seagate by visiting this link here.

Watch for extra workload proof points being performed including with 12Gbps SAS and faster servers using PCIe Gen 3.

Ok, nuff said.

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

IoD, IoT, IoE, IoS, IoP, IoU and IoX are in your future

Storage I/O trends

IoD, IoT, IoE, IoS, IoP, IoU and IoX are in your future

Have you figured out the new buzzword trend for 2014 that started ramping up in 2013?

Yup, it's the Internet of Things (IoT) and the Internet of Devices (IoD)

Assuming that IoT, IoD and other variations catch on, which it looks like they will, this could bring relief and rest for the over-worked Big Data and Software Defined "X" buzzword bingo bandwagon.

Buzzword bingo

Introducing IoX?

For those not familiar with Software Defined "X", simply replace "X" with your favorite term such as Data Center (SDDC), Networking (SDN), Storage (SDS) or Marketing (SDM) among others. The new IoX term for IT (and beyond) might just take some pressure off the over-worked software defined "x" usage (you pick "x" such as data center, networking, storage, marketing, etc).

This is good news as we now have IoX, where "X" can be leveraged from Things (IoT) and Devices (IoD) to People, Places, Protocols or Platforms (IoP), not to mention APIs, Applications and Apple (IoA).

How about Internet of Items (IoI) or Internet of Objects (IoO)?

We are already seeing Cisco with the Internet of Everything (IoE) from CES, and rest assured the Big Data folks will want to get all over IoBD, while storage folks serve up the Internet of Storage (IoS), granted that might be a little too close to Apple's iOS for the comfort of some.

Of course this should also prompt the question: if the Internet of Things (IoT) or IoX is public, would an Intranet of Things or other items (e.g. IoX) be considered private?

And if you just said or thought, what about hybrid? Sure, why not, it's 2014 after all…

Here’s my point

There are many other variations, particularly if you apply some cloud and virtual based Big Data analytics with some software defined marketing creativity.

So what's your take on IoT, IoD, IoP and other IoX variations: is it all IoH (Internet of Hype) and Internet of Marketing (IoM), or something new to get excited about for those who suffer from technology buzzword ADD?

What say you?

Ok, nuff said

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved