October 2017 Server StorageIO Data Infrastructure Update Newsletter



Server StorageIO October 2017 Data Infrastructure Update Newsletter

Volume 17, Issue 10 (October 2017)

Hello and welcome to the October 2017 issue of the Server StorageIO data infrastructure update newsletter.

Software-Defined Data Infrastructure Essentials SDDI SDDC

October has been a busy month pertaining to data infrastructure, including server storage I/O related trends, activities, news, perspectives and related topics, so let's have a look at them.

In This Issue

Enjoy this edition of the Server StorageIO data infrastructure update newsletter.

Cheers GS

Data Infrastructure and IT Industry Activity Trends

Some recent Industry Activities, Trends, News and Announcements include:

Startup Aparavi launched with a SaaS platform for managing long-term data retention. As part of a move to streamline the acquisition of Brocade by Broadcom (formerly known as Avago), the Brocade data center Ethernet networking business is being sold to Extreme Networks. DataCore also updated its software-defined storage solutions in October.

Cisco announced new storage networking products and the acquisition of BroadSoft (cloud calling and contact center solutions). As part of continued support for Fibre Channel based data infrastructure environments, Cisco has announced a 1U MDS 9132T 32 port 32 Gbps Fibre Channel switch with FCP (SCSI Fibre Channel Protocol) support now, and emerging FC-NVMe support in the future. Also announced are SAN telemetry activity monitoring, insight and event streaming for analysis in the MDS 9700 32Gbps module.

Cisco also announced interoperability for data center and data infrastructure insight, activity monitoring and telemetry with Virtual Instruments VirtualWisdom technology, eliminating the reliance on hardware-based probes, along with Fibre Channel N-Port virtualization on the Nexus 9300-FX DC switch.

Commvault announced scale-out data protection with ScaleProtect for Cisco UCS platforms, along with their HyperScale appliance and HyperScale software.

IBM had several October announcements including LTO 8 related items, FlashSystem V9000 updates (e.g. All Flash Array) with new enclosures as well as hardware-based compression, and FlashSystem A9000 leveraging 3D TLC NAND flash (lower cost, higher capacity) among others.

There is plenty of content (blogs, articles, podcasts, webinars, videos, white papers, presentations) on when to use containers, microservices and serverless compute including Mesos, Kubernetes and Docker among others. What about when not to use those approaches, or caveats to be aware of? Here is such a piece (via Red Hat) to have a look at.

Granted, if you are part of the microservices cheerleading bandwagon crowd you might not agree with the author's points; after all, everything is not the same in data centers and data infrastructures. Speaking of serverless and containers, here is a good post about Docker Swarm vs. Kubernetes management over at UpCloud.

In Microsoft and Azure related activity, despite some early speculation in some venues that Storage Spaces Direct (S2D) was being discontinued as it was not part of Server release 1709, the reality is S2D is very much alive.

Microsoft LTSC and SAC release cycles
Image via Microsoft.com

However, some clarification is needed, as the initial speculation likely resulted from a lack of understanding of the new Microsoft release cycle.

Microsoft has gone to Semi-Annual Channel (SAC) releases that introduce new features in advance of the Long-Term Servicing Channel (LTSC). LTSC releases are what you might be familiar with from Windows and Windows Server: updates spread out over time for a given major version (e.g. going from Server 2012 to Server 2012 R2 and so forth). The current Windows Server LTSC base was introduced in the fall of 2016, along with incremental updates since.

By comparison, think of SAC as a branch channel for early adopters to get new features; with 1709 (e.g. September 2017), the focus is on containers. A mistake that has been made is to assume that a SAC release is actually a new major LTSC release, which is probably why some thought S2D was dead since it is not in SAC 1709. Indications from Microsoft are that there will be S2D enhancements in the next SAC, as well as in a future LTSC.
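For those wanting to check which release a given Windows Server is running (for example the 1607 LTSC base vs. the 1709 SAC), one simple approach, assuming Windows Server 2016 or later, is reading the ReleaseId value from the registry with PowerShell:

# Displays the release identifier, for example 1709
(Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion').ReleaseId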

For those interested in IoT, check out this Microsoft Azure IoT Hub and device twin document. Here is a post by Thomas Maurer looking at 10 hidden Hyper-V features to know about.

In other activity, Minio announced experimental AWS S3 API support for the Backblaze storage service. Software-defined serverless storage startup OpenIO received $5M USD in additional funding. Quantum and other LTO Organization vendors have announced support for the new LTO version 8 tape drives and media. In addition to LTO 8, new roadmaps looking out to LTO 12 are outlined here. Meanwhile, VMware vCloud Air is now hosted by OVH. Western Digital Corporation (WDC) announced Microwave Assisted Magnetic Recording (MAMR) enabled Hard Disk Drives (HDD) that will enable future, larger capacity devices to be brought to market.

Check out other industry news, comments, trends perspectives here.

Server StorageIO Commentary in the news

Recent Server StorageIO industry trends perspectives commentary in the news.

Via HPE Insights: Comments on Public cloud versus on-prem storage
Via arsTechnica: Comments on cloud backup disaster recovery
Via Gizmodo: Comments on WDC 40TB HDD
Via CDW: Comments on Is Your Network About To Fail?
Via EnterpriseStorageForum: Comments on Trends for Data Storage with Big Data Analytics
Via EnterpriseStorageForum: Comments on 8 ways to save on cloud storage
Via EnterpriseStorageForum: Comments on Google Cloud Platform and Storage

View more Server, Storage and I/O trends and perspectives comments here

Server StorageIOblog Posts

Recent and popular Server StorageIOblog posts include:

In Case You Missed It #ICYMI

View other recent as well as past StorageIOblog posts here

Server StorageIO Data Infrastructure Tips and Articles

Recent Server StorageIO data infrastructure tips and articles along with industry trends perspectives commentary.

Via EnterpriseStorageForum: Comments on Who Will Rule the Storage World?
Via InfoGoto: Comments on Google Cloud Platform Gaining Data Storage Momentum
Via InfoGoto: Comments on Singapore High Rise Data Centers
Via InfoGoto: Comments on New Tape Storage Capacity
Via EnterpriseStorageForum: Comments on 8 ways to save on cloud storage
Via EnterpriseStorageForum: Comments on Google Cloud Platform and Storage

View more Server, Storage and I/O trends and perspectives comments here

Server StorageIO Recommended Reading (Watching and Listening) List

In addition to my own books including Software Defined Data Infrastructure Essentials (CRC Press 2017), the following are Server StorageIO recommended reading, watching and listening list items. The list includes various IT, Data Infrastructure and related topics.

Intel Recommended Reading List (IRRL) for developers is a good resource to check out.

It's October, which means that it is also Blogtober; check out some of the blogs and posts occurring during October here.

For those involved with VMware, check out Frank Denneman's VMware vSphere 6.5 Host Resources Deep Dive book here at Amazon.com.

Docker: Up & Running: Shipping Reliable Containers in Production by Karl Matthias & Sean P. Kane via Amazon.com here.

Essential Virtual SAN (VSAN): Administrator's Guide to VMware Virtual SAN, 2nd ed. by Cormac Hogan & Duncan Epping via Amazon.com here.

Hadoop: The Definitive Guide: Storage and Analysis at Internet Scale by Tom White via Amazon.com here.

Cisco IOS Cookbook: Field-Tested Solutions to Cisco Router Problems by Kevin Dooley and Ian Brown via Amazon.com here.

Watch for more items to be added to the recommended reading list book shelf soon.

Events and Activities

Recent and upcoming event activities.

Nov. 9, 2017 – Webinar – All You Need To Know about ROBO Data Protection Backup
Nov. 2, 2017 – Webinar – Modern Data Protection for Hyper-Convergence
Sep. 21, 2017 – MSP CMG – Minneapolis MN
Sep. 20, 2017 – Webinar – BC, DR and Business Resiliency (BR) tips
Sep. 14, 2017 – Fujifilm IT Executive Summit – Seattle WA
Sep. 12, 2017 – SNIA Software Developers Conference (SDC) – Santa Clara CA
Sep. 7, 2017 – Wipro SDX – Enabling, Planning Your Software Defined Journey

See more webinars and activities on the Server StorageIO Events page here.

Server StorageIO Industry Resources and Links

Useful links and pages:
Microsoft TechNet – Various Microsoft related from Azure to Docker to Windows
storageio.com/links – Various industry links (over 1,000 with more to be added soon)
objectstoragecenter.com – Cloud and object storage topics, tips and news items
OpenStack.org – Various OpenStack related items
storageio.com/downloads – Various presentations and other download material
storageio.com/protect – Various data protection items and topics
thenvmeplace.com – Focus on NVMe trends and technologies
thessdplace.com – NVM and Solid State Disk topics, tips and techniques
storageio.com/converge – Various CI, HCI and related SDS topics
storageio.com/performance – Various server, storage and I/O benchmark and tools
VMware Technical Network – Various VMware related items

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier); twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com. Any reproduction in whole or in part, with changes to content, without source attribution under title, or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

April 2017 Server StorageIO Data Infrastructures Update Newsletter

Volume 17, Issue IV

Hello and welcome to the April 2017 issue of the Server StorageIO data infrastructures update newsletter.

Spring is here in the northern hemisphere, which means there are a lot of things going on, or about to occur soon. April has been a busy month for me, including spending time in Europe doing seminar and workshop presentations, along with other consulting advisory activities involving data infrastructures.

Besides travel, I have been busy working on client projects and attending to various post-production activities for my new book Software Defined Data Infrastructure Essentials (more about this in the May issue). Other things I have been doing include being briefed on upcoming technology announcements, along with some hands-on activities trying out things that will be covered in future updates, as well as working with some interesting NDA items that, well, are NDA.

Be sure to check out the recent blog posts, as well as the industry trends perspectives commentary below, along with recent and upcoming webinars among other events.

In This Issue

Enjoy this abbreviated edition of the Server StorageIO update newsletter.

Cheers GS

 

Server StorageIOblog Posts

Recent and popular Server StorageIOblog posts include:

View other recent as well as past StorageIOblog posts here

Server StorageIO Commentary in the news

Recent Server StorageIO industry trends perspectives commentary in the news.

Via SearchCloudComputing: Virtual private clouds an alternative to on-premises computing
Hybrid clouds continue to grow in popularity as well as deployed usage, from storage to compute to networking, said Greg Schulz, the senior advisory analyst at StorageIO in Stillwater, Minn. Most cloud and service providers talk about hybrid along with public clouds, while AWS tends to talk about [VPC aka virtual private clouds].

Via SearchDataCenter: Ask the right questions before committing to a colocation SLA policy
Do you just need a physical space to put things, or do you need high bandwidth and ultra-reliable power? asked Greg Schulz, senior advisory analyst at StorageIO, a consultancy in Stillwater, Minn.

Via EnterpriseStorageForum: Tips for Enterprise SSD Form Factor Selection Deployment
It’s doubtful that there is one form factor to rule them all. Some may be best for X but lousy for Y. But Greg Schulz, an analyst at StorageIO Group notes that many vendors attempt to champion a particular flash SSD form factor and interface, claiming it’s the best and only fit for the enterprise.

Via SearchITOperations: Storage performance analysis reveals IT’s ongoing bottleneck
Sometimes it takes more than an aspirin to cure a headache, said Greg Schulz.

Via SearchDNS: Parsing through the software-defined storage hype
Beyond scalability, SDS technology aims for freedom from the limits of proprietary hardware, explained StorageIO analyst Greg Schulz.

Via InfoStor: Data Storage Industry Braces for AI and Machine Learning
AI could also lead to untapped hidden or unknown value in existing data that has no or little perceived value, said Greg Schulz.

View more Server, Storage and I/O trends and perspectives comments here

Events and Activities

Recent and upcoming event activities.

May 11, 2017 – Webinar – Email Archiving, Compliance and Ransomware

May 8-10, 2017 – Dell EMCworld – Las Vegas

April 3-7, 2017 – Seminars – Dutch workshop seminar series – Nijkerk Netherlands

March 15, 2017 – Webinar – SNIA/BrightTalk HyperConverged and Storage – 10AM PT

See more webinars and activities on the Server StorageIO Events page here.

Server StorageIO Industry Resources and Links

Useful links and pages:
Microsoft TechNet – Various Microsoft related from Azure to Docker to Windows
storageio.com/links – Various industry links (over 1,000 with more to be added soon)
objectstoragecenter.com – Cloud and object storage topics, tips and news items
OpenStack.org – Various OpenStack related items
storageio.com/protect – Various data protection items and topics
thenvmeplace.com – Focus on NVMe trends and technologies
thessdplace.com – NVM and Solid State Disk topics, tips and techniques
storageio.com/converge – Various CI, HCI and related SDS topics
storageio.com/performance – Various server, storage and I/O benchmark and tools
VMware Technical Network – Various VMware related items

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author of Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier); twitter @storageio. Watch for the spring 2017 release of his new book Software-Defined Data Infrastructure Essentials (CRC Press).

Courteous comments are welcome for consideration. First published on https://storageioblog.com. Any reproduction in whole or in part, with changes to content, without source attribution under title, or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

Cisco Next Gen 32Gb Fibre Channel NVMe SAN Updates

server storage I/O trends

Cisco Next Gen 32Gb Fibre Channel and NVMe SAN Updates

Cisco today announced next-generation MDS storage area networking (SAN) Fibre Channel (FC) switches with 32Gb, along with NVMe over FC support.

Cisco Fibre Channel (FC) Directors (Left) and Switches (Right)

Highlights of the Cisco announcement include:

  • MDS 9700 48 port 32Gbps FC switching module
  • High density 768 port 32Gbps FC directors
  • NVMe over FC for attaching fast flash SSD devices (current MDS 9700, 9396S, 9250i and 9148S)
  • Integrated analytics engine for management insight awareness
  • Multiple concurrent protocols including NVMe, SCSI (e.g. SCSI_FCP aka FCP) and FCoE

Where to Learn More

The following are additional resources to learn more.

What this all means, wrap up and summary

Fibre Channel remains relevant for many environments, so it makes sense that Cisco, known for Ethernet along with IP networking, enhances its offerings. Having 32Gb Fibre Channel, along with adding NVMe over Fabric, enables existing (and new) Cisco customers to support their legacy (e.g. FC) and emerging (NVMe) workloads as well as devices. For those environments that still need some mix of Fibre Channel as well as NVMe over fabric, this is a good announcement. Keep an eye and ear open for which storage vendors jump on the NVMe over Fabric bandwagon now that Cisco as well as Brocade have made switch support announcements.

Ok, nuff said (for now…).

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author of Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier); twitter @storageio. Watch for the spring 2017 release of his new book “Software-Defined Data Infrastructure Essentials” (CRC Press).

Courteous comments are welcome for consideration. First published on https://storageioblog.com. Any reproduction in whole or in part, with changes to content, without source attribution under title, or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

If NVMe is the answer, what are the questions?

If NVMe is the answer, what are the questions?

If NVMe is the answer, then what are the various questions that should be asked?

Some common questions that NVMe is the answer to include: What is the difference between NVM and NVMe? Is NVMe only for servers? Does NVMe require fabrics? And what benefits does NVMe provide beyond more IOPs?

Let's take a look at some of these common NVMe conversations and other questions.

Main Features and Benefits of NVMe

Some of the main features and benefits of NVMe among others include:

    • Lower latency due to improved drivers and increased queues (and queue sizes)
    • Lower CPU usage to handle a larger number of I/Os (more CPU available for useful work)
    • Higher I/O activity rates (IOPS) to boost productivity and unlock the value of fast flash and NVM
    • Bandwidth improvements leveraging fast PCIe interfaces and available lanes
    • Dual-pathing of devices like what is available with dual-path SAS devices
    • Unlocking the value of more cores per processor socket and software threads (productivity)
    • Various packaging options, deployment scenarios and configuration options
    • Appears as a standard storage device on most operating systems (see the example following this list)
    • Plug-and-play with in-box drivers on many popular operating systems and hypervisors
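As a simple illustration of those last two points, here is a minimal sketch, assuming a Windows server (Windows Server 2016 or similar with the in-box Storage module), of how NVMe devices show up alongside SAS and SATA as regular disks via PowerShell; the cmdlet and properties are standard, while the output naturally varies by system.

# List physical disks and show which bus (NVMe, SAS, SATA, etc.) each uses
Get-PhysicalDisk | Select-Object FriendlyName, BusType, MediaType, Size

On Linux the equivalent quick look is lsblk, where NVMe devices appear with /dev/nvme0n1 style names.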

NVM and Media memory matters

What's the difference between NVM and NVMe? Non-Volatile Memory (NVM), as its name implies, is a persistent electronic memory medium where data is stored. Today you commonly encounter NVM as NAND flash Solid State Devices (SSD), along with NVRAM and other emerging storage class memories (SCM).

Emerging SCM such as 3D XPoint among other mediums (or media if you prefer) hold the promise of boosting both read and write performance beyond traditional NAND flash, closer to DRAM, while having durability also closer to DRAM. For now let's set the media and mediums aside and get back to how they are (or will be) accessed as well as used.

server storage I/O NVMe fundamentals

Server and Storage I/O Media access matters

NVM Express (e.g. NVMe) is a standard industry protocol for accessing NVM media (SSD and flash devices, storage systems, appliances). If NVMe is the answer, then depending on your point of view, NVMe can be (or is) a replacement (today or in the future) for AHCI/SATA and Serial Attached SCSI (SAS). What this means is that NVMe can coexist with or replace other block SCSI protocol implementations (e.g. SCSI_FCP aka FCP on Fibre Channel, iSCSI, SRP) as well as NBD (among others).

Similar to the SCSI command set that is implemented on different networks (e.g. iSCSI (IP), FCP (Fibre Channel), SRP (InfiniBand), SAS), NVMe as a protocol is now implemented using PCIe with form factors of add-in cards (AiC), M.2 (e.g. gum sticks aka next-gen form factor or NGFF) as well as U.2 aka 8639 drive form factors. There are also the emerging NVMe over Fabrics variants including FC-NVMe (e.g. the NVMe protocol over Fibre Channel), an alternative to SCSI_FCP (e.g. SCSI on Fibre Channel). An example of a PCIe AiC that I have is the Intel 750 400GB NVMe (among others). You should be able to find the Intel among other NVMe devices from your preferred vendor as well as different venues including Amazon.com.

NVM, flash and NVMe SSD
Left PCIe AiC x4 NVMe SSD, lower center M.2 NGFF, right SAS and SATA SSD

The following image shows an NVMe U.2 (e.g. 8639) drive form factor device that from a distance looks like a SAS device and connector. However, looking closer, there are some extra pins or connectors that present a PCIe Gen 3 x4 (4 PCIe lanes) connection from the server or enclosure backplane to the device. These U.2 devices plug into 8639 slots (right) that look like SAS slots, which can also accommodate SATA. Remember, SATA can plug into SAS, however not the other way around.

NVMe U.2 8639 drive; NVMe 8639 slot
Left NVMe U.2 drive showing PCIe x4 connectors, right, NVMe U.2 8639 connector

What NVMe U.2 means is that the 8639 slots can be used for 12Gbps SAS, 6Gbps SATA or x4 PCIe-based NVMe. Those devices in turn attach to their respective controllers (or adapters) and device driver software protocol stack. Several servers have U.2 or 8639 drive slots in either 2.5” or 1.8” form factors; sometimes these are also called or known as “blue” drives (or slots). The color coding simply helps to keep track of which slots can be used for different things.

Navigating your various NVMe options

If NVMe is the answer, then some device and component options are as follows.

NVMe device components and options include:

    • Enclosures and connector port slots
    • Adapters and controllers
    • U.2, PCIe AIC and M.2 devices
    • Shared storage system or appliances
    • PCIe and NVMe switches

If NVMe is the answer, what to use when, where and why?

Why use a U.2 or 8639 slot when you could use a PCIe AiC? Simple: your server or storage system may be PCIe slot constrained, yet have more available U.2 slots. There are U.2 drives from various vendors including Intel and Micron, as well as servers from Dell, Intel and Lenovo among many others.

Why and when would you use an NVMe M.2 device? As a local read/write cache, or perhaps a boot and system device on servers or appliances that have M.2 slots. Many servers and smaller workstations including Intel NUC support M.2. Likewise, there are M.2 devices from many different vendors including Micron, Samsung among others.

Where and why would you use NVMe PCIe AiC? Whenever you can and if you have enough PCIe slots of the proper form factor, mechanical and electrical (e.g. x1, x4, x8, x16) to support a particular card.

Can you mix and match different types of NVMe devices on the same server or appliance? As long as the physical server and its software (BIOS/UEFI, operating system, hypervisors, drivers) support it, yes. Most server and appliance vendors support PCIe NVMe AiCs; however, pay attention to whether they are x4 or x8, both mechanical as well as electrical. Also verify operating system and hypervisor device driver support (see the quick check below). PCIe NVMe AiCs are available from Dell, Intel, Micron and many other vendors.
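As a quick hedged sanity check along those lines on Windows (assuming Windows Server 2012 R2 or later, where the PnpDevice module ships in the box), the following PowerShell sketch lists the NVMe controllers and devices the operating system actually recognizes:

# Show present Plug and Play devices whose name mentions NVMe
Get-PnpDevice -PresentOnly | Where-Object { $_.FriendlyName -like '*NVMe*' } | Select-Object FriendlyName, Class, Status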

Networking with your Server and NVMe Storage

Keep in mind that context is important when discussing NVMe as there are devices for attaching as the back-end to servers, storage systems or appliances, as well as for front-end attachment (e.g. for attaching storage systems to servers). NVMe devices can also be internal to a server or storage system and appliance, or, accessible over a network. Think of NVMe as an upper-level command set protocol like SCSI that gets implemented on different networks (e.g. iSCSI, FCP, SRP).

How can NVMe use PCIe as a transport to reach devices that are outside of a server? Different vendors have PCIe adapter cards that support longer distances (a few meters) to attach to devices. For example, Dell EMC DSSD has special dual-port (two x4 ports) PCIe x8 cards for attachment to the DSSD shared SSD devices.

Note that there are also PCIe switches, similar to SAS and InfiniBand among other switches. However, just because these are switches does not mean they are regular off-the-shelf network switches that your networking folks will know what to do with (or want to manage).

The following example shows a shared storage system or appliance being accessed by servers using traditional block, NAS file or object protocols. In this example, the storage system or appliance has implemented NVMe devices (PCIe AiC, M.2, U.2) as part of their back-end storage. The back-end storage might be all NVMe, or a mix of NVMe, SAS or SATA SSD and perhaps some high-capacity HDD.

NVMe and server storage access
Servers accessing shared storage with NVMe back-end devices

NVMe and server storage access via PCIe
NVMe PCIe attached (via front-end) storage with various back-end devices

In addition to shared PCIe-attached storage such as Dell EMC DSSD similar to what is shown above, there are also other NVMe options. For example, there are industry initiatives to support the NVMe protocol over shared fabric networks. These fabric networks vary, ranging from RDMA over Converged Ethernet (RoCE) based to Fibre Channel NVMe (e.g. FC-NVMe) among others.

An option that on the surface may not seem like a natural fit, or one that leverages NVMe to its fullest, is simply adding NVMe devices as back-end media to existing arrays and appliances. For example, adding NVMe devices as the back-end to iSCSI, SAS, FC, FCoE or other block-based, NAS file or object systems.

NVMe and server storage access via shared PCIe
NVMe over a fabric network (via front-end) with various back-end devices

A common argument against using legacy storage access of shared NVMe is along the lines of why would you want to put a slow network or controller in front of a fast NVM device? You might not want to do that, or your vendor may tell you many reasons why you don’t want to do it particularly if they do not support it. On the other hand, just like other fast NVM SSD storage on shared systems, it may not be all about 100% full performance. Rather, for some environments, it might be about maximizing connectivity over many interfaces to faster NVM devices for several servers.

NVMe and server storage I/O performance

Is NVMe all about boosting the number of IOPS? NVMe can increase the number of IOPS, as well as support more bandwidth. However, it also reduces response time latency, as would be expected with an SSD or NVM type of solution. The following image shows an example of, not surprisingly, an NVMe PCIe AiC x4 SSD outperforming (more IOPs, lower response time) a 6Gb SATA SSD (an apples-to-oranges comparison). Also keep in mind that the best benchmark or workload tool is your own application, and your performance mileage will vary.

NVMe using less CPU per IOP
SATA SSD vs. NVMe PCIe AiC SSD IOPS, Latency and CPU per IOP

The above image shows the lower amount of CPU per IOP given the newer, more streamlined driver and I/O software protocol of NVMe. With NVMe there is less overhead due to the new design, more queues and the ability to unlock value not only in SSDs but also in servers with more sockets, cores and threads.

What this means is that NVMe and SSD can boost performance for activity (TPS, IOPs, gets, puts, reads, writes). NVMe can also lower response time latency while enabling higher throughput bandwidth. In other words, you get more work out of your server's CPU (and memory). Granted, SSDs have been used for decades to boost server performance and in many cases delay an upgrade to a newer, faster system by getting more work out of the existing one (e.g. SSD marketing 202).

NVMe maximizing your software license investments

What may not be so obvious (e.g. SSD marketing 404) is that by getting more work activity done in a given amount of time, you can also stretch your software licenses further. What this means is that you can get more out of your IBM, Microsoft, Oracle, SAP, VMware and other software licenses by increasing their effective productivity. You might already be using virtualization to increase server hardware efficiency and utilization to cut costs. Why not go further and boost productivity to increase your software license (as well as servers) effectiveness by using NVMe and SSDs?
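As a simple illustration of the license math (the numbers here are made up purely for the arithmetic): suppose a per-core licensed database must sustain 200,000 transactions per hour, and each core sustains 25,000 transactions per hour while I/O-bound on SATA SSDs, requiring 8 licensed cores. If NVMe removes enough I/O wait that each core sustains 50,000 transactions per hour, the same workload fits on 4 licensed cores, roughly doubling the effective productivity of each license.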

Note that fast applications need fast software, servers, drivers, I/O protocols and devices.

Also, just because you have NVMe present or PCIe does not mean full performance, similar to how some vendors put SSDs behind their slow controllers and saw, well, slow performance. On the other hand, vendors who had or have fast controllers (software, firmware, hardware) that were HDD or even SSD performance constrained can see a performance boost.

Additional NVMe and related tips

If you have a Windows server and have not done so already, check your power plan to make sure it is not improperly set to balanced instead of high performance. For example, using PowerShell, issue the following command to activate the built-in High performance plan (selected via its well-known GUID, which differs from the Balanced plan's):

PowerCfg -SetActive "8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c"
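To verify which plan is active before and after the change, powercfg can display the current scheme:

PowerCfg /GetActiveScheme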

Another Windows related tip, if you have not done so, is to enable Task Manager disk stats by issuing “diskperf -y” from a command line. Then display Task Manager and its performance view to see drive performance.

Need to benchmark, validate, compare or test an NVMe, SSD (or even HDD) device or system? There are various tools and workloads for different scenarios, and those tools can be configured for different activity to reflect your needs (and application workloads). For example, Microsoft Diskspd, fio.exe, Iometer and vdbench sample scripts are shown here (along with results) as a starting point for comparison or validation testing.
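As a hedged starting point, a Diskspd run along the following lines exercises a device with small random I/O similar to the workloads discussed here. The flags are standard Diskspd options, while the file path, size, duration and read/write mix are illustrative assumptions to adjust for your environment.

# 8KB random I/O, 70% read / 30% write, 8 threads with 32 outstanding I/Os each,
# 60 second run, caching disabled (-Sh), latency statistics collected (-L)
diskspd -c10G -b8K -r -w30 -t8 -o32 -d60 -Sh -L C:\test\testfile.dat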

Does M.2 mean you have NVMe? That depends, as some systems implement M.2 with SATA while others support NVMe; read the fine print or ask for clarification.

Do all NVMe devices using PCIe run at the same speed? Not necessarily, as some might be PCIe x1, x4 or x8. Likewise some NVMe PCIe cards might be x8 (mechanical and electrical) yet split out into a pair of x4 ports. Also keep in mind that, similar to a dual-port HDD, NVMe U.2 drives can have two paths to a server, storage system controller or adapter; however, both might not be active at the same time. You might also have a fast NVMe device attached to a slow server, storage system or adapter.

Who to watch and keep an eye on in the NVMe ecosystem? Besides those mentioned above, others to keep an eye on include Broadcom, E8, Enmotus Fuzedrive (micro-tiering software), Excelero, Magnotics, Mellanox, Microsemi (e.g. PMC Sierra), Microsoft (Windows Server 2016 S2D + ReFS + Storage Tiering), NVM Express trade group, Seagate, VMware (Virtual NVMe driver part of vSphere ESXi in addition to previous driver support) and WD/Sandisk among many others.

Where To Learn More

Additional related content can be found at:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

NVMe is in your future; that was the answer. However, there are the when, where, how and with-what among other questions to be addressed. One of the great things IMHO about NVMe is that you can have it your way, where and when you need it, as a replacement or companion to what you have. Granted, that will vary based on your preferred vendors as well as what they support today or in the future.

If NVMe is the answer, ask your vendor when they will support NVMe as a back-end for their storage systems, as well as a front-end. Also determine when your servers (hardware, operating systems, hypervisors) will support NVMe and in what variation. Learn more about why NVMe is the answer and related topics at www.thenvmeplace.com

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier); twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com. Any reproduction in whole or in part, with changes to content, without source attribution under title, or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Server StorageIO August 2016 Update Newsletter

Volume 16, Issue VIII

Hello and welcome to this August 2016 Server StorageIO update newsletter.

In This Issue

  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Events and Webinars
  • Industry Activity Trends
  • Resources and Links
Enjoy this shortened summer edition of the Server StorageIO update newsletter.

    Cheers GS

    Industry Activity Trends

    With VMworld coming up this week, rest assured, there will be plenty to talk about and discuss in the following weeks. However for now, here are a few things from this past month.

    At Flash Memory Summit (FMS), which is more of a component, vendor-to-vendor industry type event, there was buzz about analytics; however, what was shown as analytics tended to be Iometer. Hmmm, more on that in a future post. Something else at FMS, besides variations of Non-Volatile Memory (NVM) including SSD, NAND flash and Storage Class Memory (SCM) such as 3D XPoint (among its new marketing names) along with NVM Express (NVMe), was NVMe over Fabric.

    This includes NVMe over RoCE (RDMA over Converged Ethernet), which can be implemented on some 10 Gb (and faster) Ethernet adapters as well as some InfiniBand adapters from Mellanox among others. Another variation is Fibre Channel NVMe (FC-NVMe), where the NVMe protocol command set is transported as an Upper Level Protocol (ULP) over FC. This is similar to how the SCSI command set is implemented on FC (e.g. SCSI_FCP or FCP), which means NVMe can be seen as a competing protocol to FCP (which it will or could be). Naturally not to be left out, some of the marketers have already started with Persistent Memory over Fabric among other variations of Non-Ephemeral Memory over Fabrics. More on NVM, NVMe and fabrics in future posts, commentary and newsletters.

    Some other buzzword topics regaining mention (or perhaps appearing for the first time for some) include FedRAMP, Authority To Operate (ATO) clouds for government entities, and FISMA among others. Many service providers, cloud and hosting providers, from large AWS and Azure to smaller BlackMesh, have added FedRAMP among other compliance options in addition to their traditional and DevOps offerings.

    Some of you may recall me mentioning ScaleMP in the past, which is a technology for aggregating multiple compute servers including processors and memory into a converged resource pool. Think the opposite of a hypervisor that divides up resources to support consolidation. In other words, it is for where you need to scale up without the complexity of clustering, or to avoid having to change and partition your software applications. In addition to ScaleMP, a newer hardware-agnostic startup to check out is TidalScale.

    On the merger and acquisition front, the Dell / EMC deal is moving forward, expected to close soon, perhaps by the time or before you read this. In other news, HPE announced that it is buying SGI to gain access to a larger part of the traditional legacy big data Super Compute and High Performance Compute (HPC) market. One of the SGI diamonds in the rough, in case you are not aware, is DMF for data management. HPE and Dropbox also announced a partnership deal earlier this summer.

    That’s all for now, time to pack my bags and off to Las Vegas for VMworld 2016.

    Ok, nuff said, for now…

     

    StorageIOblog Posts

    Recent and popular Server StorageIOblog posts include:

    View other recent as well as past StorageIOblog posts here

     

    StorageIO Commentary in the news

    Recent Server StorageIO industry trends perspectives commentary in the news.

    Via FutureReadyOEM Q&A: when to implement ultra-dense storage
    Via EnterpriseStorageForum Comments on Top 10 Enterprise SSD Market Trends
    Via SearchStorage Comments on NAS system buying decisions
    Via EnterpriseStorageForum Comments on Cloud Storage Pros and Cons

    View more Server, Storage and I/O hardware as well as software trends comments here

     

    StorageIO Tips and Articles

    Recent and past Server StorageIO articles appearing in different venues include:

    Via Iron Mountain Preventing Unexpected Disasters: IT and Data Infrastructure
    Via FutureReadyOEM Q&A: When to implement ultra-dense storage servers
    Via Micron Blog What's next for NVMe and your Data Center – Preparing for Tomorrow
    Redmond Magazine: Trends – Evolving from Data Protection to Data Resiliency
    IronMountain: 5 Noteworthy Data Privacy Trends From 2015
    InfoStor: Data Protection Gaps, Some Good, Some Not So Good
    Virtual Blocks (VMware Blogs): EVO:RAIL – When And Where To Use It?

    Check out these resources for techniques and trends as well as tools. View more tips and articles here

    StorageIO Webinars and Industry Events

    December 7: BrightTalk Webinar – Hyper-Converged Infrastructure (HCI) Webinar 11AM PT

    November 23: BrightTalk Webinar – BCDR and Cloud Backup – Software Defined Data Infrastructures (SDDI) and Data Protection – 10AM PT

    November 23: BrightTalk Webinar – Cloud Storage – Hybrid and Software Defined Data Infrastructures (SDDI) – 9AM PT

    November 22: BrightTalk Webinar – Cloud Infrastructure – Hybrid and Software Defined Data Infrastructures (SDDI) – 10AM PT

    October 20: BrightTalk Webinar – Next-Gen Data Centers – Software Defined Data Infrastructures (SDDI) including Servers, Storage and Virtualizations – 9AM PT

    September 29: TBA Webinar – 10AM PT

    September 27-28 – NetApp – Las Vegas

    September 20: BrightTalk Webinar – Software Defined Data Infrastructures (SDDI) Enabling Software Defined Data Centers – Part of Software-Defined Storage summit – 8AM PT

    September 13: Redmond Webinar – Windows Server 2016 and Active Directory What’s New and How to Plan for Migration – 11AM PT

    September 8: Redmond Webinar – Data Protection for Modern Microsoft Environments – 11AM PT

    August 29-31: VMworld Las Vegas

    August 25 – MSP CMG – The Answer is Software Defined – What was the question?

    August 16: BrightTalk Webinar Software Defined Data Centers (SDDC) are in your future (if not already) – Part of Enterprise Software and Infrastructure summit 8AM PT

    August 10-11 Flash Memory Summit (Panel discussion August 11th) – NVMe over Fabric

    See more webinars and other activities on the Server StorageIO Events page here.

     

    Server StorageIO Industry Resources and Links

    Check out these useful links and pages:

    storageio.com/links – Various industry links (over 1,000 with more to be added soon)
    objectstoragecenter.com – Cloud and object storage topics, tips and news items
    storageioblog.com/data-protection-diaries-main/ – Various data protection items and topics
    thenvmeplace.com – Focus on NVMe trends and technologies
    thessdplace.com – NVM and Solid State Disk topics, tips and techniques
    storageio.com/performance – Various server, storage and I/O performance and benchmarking

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Part V – NVMe overview primer (Where to learn more, what this all means)

    This is the fifth in a five-part mini-series providing an NVMe primer overview.

    View Part I, Part II, Part III, Part IV, Part V as well as companion posts and more NVMe primer material at www.thenvmeplace.com.

    There are many different facets of NVMe, including a protocol that can be deployed on PCIe (AiC, U.2/8639 drives, M.2) for local direct attached, dedicated or shared use, for the front-end or back-end of storage systems. NVMe direct attach is also found in servers and laptops using M.2 NGFF mini cards (e.g. “gum sticks”). In addition to direct attached, dedicated and shared, NVMe is also deployed on fabrics, including over Fibre Channel (FC-NVMe) as well as NVMe over Fabrics (NVMeoF) leveraging RDMA based networks (e.g. iWARP, RoCE among others).

    The storage I/O capabilities of flash can now be fed across PCIe faster to enable modern multi-core processors to complete more useful work in less time, resulting in greater application productivity. NVMe has been designed from the ground up with more and deeper queues, supporting a larger number of commands in those queues. This in turn enables the SSD to better optimize command execution for much higher concurrent IOPS. NVMe will coexist along with SAS, SATA and other server storage I/O technologies for some time to come. But NVMe will be at the top-tier of storage as it takes full advantage of the inherent speed and low latency of flash while complementing the potential of multi-core processors that can support the latest applications.

    With NVMe, the capabilities of underlying NVM and storage memories are further realized. (The devices used in the comparisons earlier in this series include a PCIe x4 NVMe AiC SSD, a 12 Gbps SAS SSD and a 6 Gbps SATA SSD.) These and other improvements with NVMe enable concurrency while reducing latency to remove server storage I/O traffic congestion. The result is that applications demanding more concurrent I/O activity along with lower latency will gravitate toward NVMe for accessing fast storage.

    Like the robust PCIe physical server storage I/O interface it leverages, NVMe provides both flexibility and compatibility. It removes complexity, overhead and latency while allowing far more concurrent I/O work to be accomplished. Those on the cutting edge will embrace NVMe rapidly. Others may prefer a phased approach.

    Some environments will initially focus on NVMe for local server storage I/O performance and capacity available today. Other environments will phase in emerging external NVMe flash-based shared storage systems over time.

    Planning is an essential ingredient for any enterprise. Because NVMe spans servers, storage, I/O hardware and software, those intending to adopt NVMe need to take into account all ramifications. Decisions made today will have a big impact on future data and information infrastructures.

    Key questions should be, how much speed do your applications need now, and how do growth plans affect those requirements? How and where can you maximize your financial return on investment (ROI) when deploying NVMe and how will that success be measured?

    Several vendors are working on, or have already introduced, NVMe related technologies or initiatives. Keep an eye on, among others, AWS, Broadcom (Avago, Brocade), Cisco (servers), Dell EMC, Excelero, HPE, Intel (servers, drives and cards), Lenovo, Micron, Microsoft (Azure, drivers, operating systems, Storage Spaces), Mellanox, NetApp, OCZ, Oracle, PMC, Samsung, Seagate, Supermicro, VMware, Western Digital (acquisition of SanDisk and HGST) among others.

    Where To Learn More

    View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What this all means

    NVMe is in your future if not already, so if NVMe is the answer, what are the questions?

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier); twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com. Any reproduction in whole or in part, with changes to content, without source attribution under title, or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Where, How to use NVMe overview primer

    server storage I/O trends
    Updated 1/12/2018

    This is the fourth in a five-part miniseries providing a primer and overview of NVMe. View companion posts and more material at www.thenvmeplace.com.

    Where and how to use NVMe

    As mentioned and shown in the second post of this series, initially NVMe is being deployed inside servers as “back-end,” fast, low-latency storage using PCIe Add-In-Cards (AiC) and flash drives. Similar to SAS NVM SSDs and HDDs that support dual paths, NVMe has a primary path and an alternate path. If one path fails, traffic keeps flowing without causing slowdowns. This feature is an advantage to those already familiar with the dual-path capabilities of SAS, enabling them to design and configure resilient solutions.

    NVMe devices, including NVM flash AiCs, will also find their way into storage systems and appliances as back-end storage, co-existing with SAS or SATA devices. Another emerging deployment configuration scenario is shared NVMe direct attached storage (DAS) with multiple server access via PCIe external storage with dual paths for resiliency.

    Even though NVMe is a new protocol, it leverages existing skill sets. Anyone familiar with SAS/SCSI and AHCI/SATA storage devices will need little or no training to deploy and manage NVMe. Since NVMe-enabled storage appears to a host server or storage appliance as a LUN or volume, existing Windows, Linux and other OS or hypervisor tools can be used. On Windows, for example, other than going to the device manager to see what the device is and what controller it is attached to, it is no different from installing and using any other storage device. The experience on Linux is similar, particularly when using in-the-box drivers that ship with the OS. One minor Linux difference of note is that instead of seeing a /dev/sda device as an example, you might see a device name like /dev/nvme0n1 or /dev/nvme0n1p1 (with a partition).
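    As a minimal sketch of that Windows experience (assuming Windows Server 2012 R2 or later with the in-box Storage module), PowerShell shows an NVMe device as just another disk, with only the bus type giving it away:

    # An NVMe device lists like any other disk; BusType distinguishes it from SAS or SATA
    Get-Disk | Format-Table Number, FriendlyName, BusType, Size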

    Keep in mind that NVMe, like SAS, can be used as “back-end” access from servers (or storage systems) to a storage device or system, for example JBOD SSD drives (e.g. 8639), PCIe AiC or M.2 devices. NVMe can also, like SAS, be used as a “front-end” on storage systems or appliances in place of, or in addition to, other access such as GbE-based iSCSI, Fibre Channel, FCoE, InfiniBand, NAS or Object.

    What this means is that NVMe can be implemented in a storage system or appliance on both the “front-end” (e.g. server or host side) as well as on the “back-end” (e.g. device or drive side), just like SAS. Another similarity to SAS is NVMe dual-pathing of devices, permitting system architects to design resiliency into their solutions. When the primary path fails, access to the storage device can be maintained with failover so that fast I/O operations continue when using SAS and NVMe.

    NVM connectivity options including NVMe
    Various NVM NAND flash SSD devices and their connectivity including NVMe, M.2, SATA and 12 Gbps SAS are shown in figure 6.

    Various NVM SSD interfaces including NVMe and M2
    Figure 6 Various NVM flash SSDs (Via StorageIO Labs)

    Left in figure 6 is a NAND flash NVMe PCIe AiC, top center is a USB thumb drive that has been opened up showing a NAND die (chip), middle center is an mSATA card, bottom center is an M.2 card, next on the right is a 2.5” 6 Gbps SATA device, and far right is a 12 Gbps SAS device. Note that an M.2 card can be either a SATA or NVMe device depending on its internal controller, which determines which host or server protocol device driver to use.

    The role of PCIe has evolved over the years, as have its performance and packaging form factors. In addition to add-in card (AiC) slots, PCIe form factors also include the M.2 small form factor (aka Next Generation Form Factor or NGFF) that replaces legacy mini-PCIe cards and that, like other devices, can be an NVMe or SATA device.

    The 8639 (or possibly 8637) connector (figure 7) can be used to support SATA as well as NVMe depending on the card device installed and host server driver support. There are various M.2 NGFF form factors including 2230, 2242, 2260 and 2280. There are also M.2 to regular physical SATA converter or adapter cards available, enabling M.2 devices to attach to legacy SAS/SATA RAID adapters or HBAs.

    NVMe 8637 and 8639 interface backplane slots; NVMe 8637 and 8639 interface
    Figure 7 PCIe NVMe 8639 Drive (Via StorageIO Labs)

    On the left of figure 7 is a view toward the backplane of a storage enclosure in a server that supports SAS, SATA, and NVMe (e.g. 8639). On the right of figure 7 is the connector end of an 8639 NVM SSD showing additional pin connectors compared to a SAS or SATA device. Those extra pins give PCIe x4 connectivity to the NVMe devices. The 8639 drive connectors enable a device such as an NVM or NAND flash SSD to share a common physical storage enclosure with SAS and SATA devices, including optional dual-pathing.

    Where To Learn More

    View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    Be careful judging a device or component by its physical packaging or interface connection as to what it is or is not. In figure 6 the device has SAS/SATA along with PCIe physical connections, yet it's what's inside (e.g. its controller) that determines if it is a SAS, SATA or NVMe enabled device. This also applies to HDDs and PCIe AiC devices, as well as I/O networking cards and adapters that may use common physical connectors yet implement different protocols. For example, the SFF-8643 HD-Mini SAS internal connector is used for 12 Gbps SAS attachment as well as PCIe to devices such as 8639.

    Depending on the type of device inserted, access can be via NVMe over PCIe x4, SAS (12 Gbps or 6Gb) or SATA. 8639 connector based enclosures have a physical connection with their backplanes to the individual drive connectors, as well as to PCIe, SAS, and SATA cards or connectors on the server motherboard or via PCIe riser slots.

    While PCIe devices, including AiC slot based, M.2 or 8639, can have common physical interfaces and lower-level signaling, it's the protocols, controllers, and drivers that determine how they get software defined and used. Keep in mind that it's not just the physical connector or interface that determines what a device is or how it is used; it's also the protocol, command set, controller and device drivers.

    Continue reading about NVMe with Part V (Where to learn more, what this all means) in this five-part series, or jump to Part I, Part II or Part III.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier); twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com. Any reproduction in whole or in part, with changes to content, without source attribution under title, or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    NVMe Need for Performance Speed

    server storage I/O trends
    Updated 1/12/2018

    This is the third in a five-part mini-series providing a primer and overview of NVMe. View companion posts and more material at www.thenvmeplace.com.

    How fast is NVMe?

    It depends! Generally speaking NVMe is fast!

    However fast interfaces and protocols also need fast storage devices, adapters, drivers, servers, operating systems and hypervisors as well as applications that drive or benefit from the increased speed.

    A server storage I/O example is in figure 5, where a 6 Gbps SATA NVM flash SSD (left) is shown with an NVMe 8639 (x4) drive; both were directly attached to a server. The workload is 8 Kbyte sized random writes with 128 threads (workers), showing results for IOPs (solid bar) along with response time (dotted line). Not surprisingly, the NVMe device has a lower response time and a higher number of IOPs. However, also note how the amount of CPU time used per IOP is lower on the right with the NVMe drive.

    NVMe storage I/O performance
    Figure 5: 6 Gbps SATA NVM flash SSD vs. NVMe flash SSD

    While many people are aware of or learning about the IOP and bandwidth improvements as well as the decrease in latency with NVMe, something that gets overlooked is how much less CPU is used. If a server is spending time in wait modes, that can result in lost productivity; by finding and removing the barriers, more work can be done on a given server, perhaps even delaying a server upgrade.

    In figure 5 notice the lower amount of CPU used per work activity being done (e.g. I/O or IOP) which translates to more effective resource use of your server. What that means is either doing more work with what you have, or potentially delaying a CPU server upgrade, or, using those extra CPU cycles to power software defined storage management stacks including erasure coding or advanced parity RAID, replication and other functions.

    Table 1 shows relative server I/O performance of some NVM flash SSD devices across various workloads. As with any performance comparison, take these and the following numbers with a grain of salt, as your speed will vary.

                                       -------------- 8KB I/O Size ---------------    -------------- 1MB I/O Size ---------------
    NAND flash SSD                     100% Seq.  100% Seq.  100% Ran.  100% Ran.      100% Seq.  100% Seq.  100% Ran.  100% Ran.
                                       Read       Write      Read       Write          Read       Write      Read       Write

    NVMe PCIe AiC
      IOPs                             41829.19   33349.36   112353.6   28520.82       1437.26    889.36     1336.94    496.74
      Bandwidth (MBps)                 326.79     260.54     877.76     222.82         1437.26    889.36     1336.94    496.74
      Resp. (ms)                       3.23       3.90       1.30       4.56           178.11     287.83     191.27     515.17
      CPU / IOP                        0.001571   0.002003   0.000689   0.002342       0.007793   0.011244   0.009798   0.015098

    12Gb SAS
      IOPs                             34792.91   34863.42   29373.5    27069.56       427.19     439.42     416.68     385.9
      Bandwidth (MBps)                 271.82     272.37     229.48     211.48         427.19     429.42     416.68     385.9
      Resp. (ms)                       3.76       3.77       4.56       5.71           599.26     582.66     614.22     663.21
      CPU / IOP                        0.001857   0.00189    0.002267   0.00229        0.011236   0.011834   0.01416    0.015548

    6Gb SATA
      IOPs                             33861.29   9228.49    28677.12   6974.32        363.25     65.58      356.06     55.86
      Bandwidth (MBps)                 264.54     72.1       224.04     54.49          363.25     65.58      356.06     55.86
      Resp. (ms)                       4.05       26.34      4.67       35.65          704.70     3838.59    718.81     4535.63
      CPU / IOP                        0.001899   0.002546   0.002298   0.003269       0.012113   0.032022   0.015166   0.046545

    Table 1 Relative performance of various protocols and interfaces

The workload results in table 1 were generated using a vdbench script running on a Windows 2012 R2 based server and are intended as a relative indicator of different protocols and interfaces; your performance mileage will vary. The results compare the number of IOPs (activity rate) for reads, writes, random and sequential access across small 8KB and large 1MB sized I/Os.
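
For readers who want to run a similar test, the following is a minimal vdbench parameter-file sketch for one of the workloads above (8KB, 100% random write, 128 threads against a raw Windows physical drive). The device path, run length and reporting interval are illustrative assumptions, not the exact script used to produce table 1.

    # Storage definition: raw physical drive with 128 concurrent threads (workers)
    sd=sd1,lun=\\.\PhysicalDrive1,threads=128
    # Workload definition: 8KB transfers, 0% reads (100% writes), 100% random seeks
    wd=wd1,sd=sd1,xfersize=8k,rdpct=0,seekpct=100
    # Run definition: maximum I/O rate for 300 seconds, reporting every 30 seconds
    rd=rd1,wd=wd1,iorate=max,elapsed=300,interval=30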

Also shown in table 1 are bandwidth or throughput (e.g. amount of data moved), response time and the amount of CPU used per IOP. Note in table 1 how NVMe can do higher IOPs at a lower CPU cost per IOP, or, using a similar amount of CPU, do more work at a lower latency. SSDs have been used for decades to help reduce CPU bottlenecks or defer server upgrades by removing I/O wait times and reducing CPU consumption (e.g. wait or lost time).
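
To make the CPU point concrete, here is a small, illustrative Python sketch (not from the original test harness) that uses the 8KB 100% random read CPU/IOP values from table 1 to compute the relative CPU cost per I/O of each interface; since the units are relative, only the ratios matter.

    # CPU/IOP values for 8KB 100% random reads, taken from table 1 above
    cpu_per_iop = {
        "NVMe PCIe AiC": 0.000689,
        "12Gb SAS": 0.002267,
        "6Gb SATA": 0.002298,
    }
    baseline = cpu_per_iop["NVMe PCIe AiC"]
    for interface, cost in cpu_per_iop.items():
        # A ratio above 1 means more CPU is consumed per I/O than on the NVMe device
        print(f"{interface}: {cost / baseline:.2f}x the CPU per IOP of NVMe")

Running this shows the SAS and SATA devices consuming roughly 3.3x the CPU per random read compared to the NVMe device.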

    Can NVMe solutions run faster than those shown above? Absolutely!

    Where To Learn More

    View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

Additional learning experiences, along with common questions (and answers) as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    Continue reading about NVMe with Part IV (Where and How to use NVMe) in this five-part series, or jump to Part I, Part II or Part V.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Different NVMe Configurations

    server storage I/O trends
    Updated 1/12/2018

    This is the second in a five-part mini-series providing a primer and overview of NVMe. View companion posts and more material at www.thenvmeplace.com.

    The many different faces or facets of NVMe configurations

NVMe can be deployed and used in many ways; the following are some examples to show you its flexibility today as well as where it may be headed in the future. An initial deployment scenario is NVMe devices (e.g. PCIe cards, M2 or 8639 drives) installed as storage in servers or as back-end storage in storage systems. Figure 2 below shows a networked storage system or appliance that uses traditional server storage I/O interfaces and protocols for front-end access, while the back-end storage is all NVMe, or a hybrid of NVMe, SAS and SATA devices.
    NVMe as back-end server storage I/O interface to NVM
    Figure 2 NVMe as back-end server storage I/O interface to NVM storage

A variation of the above is using NVMe for shared direct attached storage (DAS) such as the EMC DSSD D5. In the following scenario (figure 3), multiple servers in a rack or cabinet configuration have an extended PCIe connection that attaches to a shared all flash storage array using NVMe on the front-end. Read more about this approach and EMC DSSD D5 here or click on the image below.

    EMC DSSD D5 NVMe
    Figure 3 Shared DAS All Flash NVM Storage using NVMe (e.g. EMC DSSD D5)

Next up, in figure 4, is a variation of the previous example, except NVMe is implemented over an RDMA (Remote Direct Memory Access) based fabric network, such as Converged 10GbE/40GbE Ethernet in what is known as RoCE (RDMA over Converged Ethernet, pronounced "Rocky"), or InfiniBand.

    NVMe over Fabric RoCE
    Figure 4 NVMe as a “front-end” interface for servers or storage systems/appliances

    Where To Learn More

    View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

Additional learning experiences, along with common questions (and answers) as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    Watch for more topology and configuration options as NVMe along with associated hardware, software and I/O networking tools and technologies emerge over time.

    Continue reading about NVMe with Part III (Need for Performance Speed) in this five-part series, or jump to Part I, Part IV or Part V.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    NVMe overview primer

    server storage I/O trends
    Updated 2/2/2018

    This is the first in a five-part mini-series providing a primer and overview of NVMe. View companion posts and more material at www.thenvmeplace.com.

    What is NVM Express (NVMe)

Non-Volatile Memory (NVM) includes persistent memory such as NAND flash and other forms of Solid State Devices (SSD). NVM Express (NVMe) is a new server storage I/O protocol alternative to AHCI/SATA and the SCSI protocol used by Serial Attached SCSI (SAS). Note that the NVMe name is owned and managed by the industry trade group NVM Express (www.nvmexpress.org).

The key question with NVMe is not if, but rather when, where, why, how and with what it will appear in your data center or server storage I/O data infrastructure. This is a companion to material on my micro site www.thenvmeplace.com, which provides an overview of NVMe as well as helping to answer some of the questions about NVMe.

    Main features of NVMe include among others:

• Lower latency due to improved drivers and increased queues (and queue sizes)
• Lower CPU usage to handle larger numbers of I/Os (more CPU available for useful work)
• Higher I/O activity rates (IOPs) to boost productivity and unlock the value of fast flash and NVM
• Bandwidth improvements leveraging fast PCIe interfaces and available lanes
• Dual-pathing of devices, similar to what is available with dual-path SAS devices
• Unlocking the value of more cores per processor socket and software threads (productivity)
• Various packaging options, deployment scenarios and configuration options
• Appears as a standard storage device on most operating systems
• Plug-and-play with in-box drivers on many popular operating systems and hypervisors

    Why NVMe for Server Storage I/O?
NVMe has been designed from the ground up for accessing fast storage, including flash SSD, leveraging PCI Express (PCIe). The benefits include lower latency, improved concurrency, increased performance and the ability to unleash a lot more of the potential of modern multi-core processors.

    NVMe Server Storage I/O
    Figure 1 shows common server I/O connectivity including PCIe, SAS, SATA and NVMe.

NVMe, leveraging PCIe, enables modern applications to reach their full potential. NVMe is one of those rare, generational protocol upgrades that comes around every couple of decades to help unlock the full performance value of servers and storage. NVMe does need new drivers, but once in place, it plugs and plays seamlessly with existing tools, software and user experiences. Likewise, many of those drivers are now in-box (e.g. ship with popular operating systems and hypervisors).

While SATA and SAS provided enough bandwidth for HDDs and some SSD uses, more performance is needed. NVMe near-term does not replace SAS or SATA; they can and will coexist for years to come, enabling different tiers of server storage I/O performance.

NVMe unlocks the potential of flash-based storage by allowing up to 65,536 (64K) queues, each with 64K commands per queue. SATA allowed for only one command queue capable of holding 32 commands, while SAS supports a single queue with 64K command entries. As a result, the storage I/O capability of flash can now be fed across PCIe much faster, enabling modern multi-core processors to complete more useful work in less time.
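
As a back-of-the-envelope illustration of that queueing difference, this small Python sketch computes the theoretical maximum number of outstanding commands implied by each protocol's queue model as described above (real devices and drivers typically implement far fewer queues):

    # Theoretical outstanding commands = number of queues x commands per queue
    protocols = {
        "SATA (AHCI)": (1, 32),      # one queue, 32 commands
        "SAS": (1, 65536),           # one queue, 64K command entries
        "NVMe": (65536, 65536),      # up to 64K queues, 64K commands each
    }
    for name, (queues, depth) in protocols.items():
        print(f"{name}: {queues} x {depth} = {queues * depth:,} commands")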

    Where To Learn More

    View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

Additional learning experiences, along with common questions (and answers) as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    Continue reading about NVMe with Part II (Different NVMe configurations) in this five-part series, or jump to Part III, Part IV or Part V.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    NVMe Place NVM Non Volatile Memory Express Resources

    Updated 8/31/19
    NVMe place server Storage I/O data infrastructure trends

    Welcome to NVMe place NVM Non Volatile Memory Express Resources. NVMe place is about Non Volatile Memory (NVM) Express (NVMe) with Industry Trends Perspectives, Tips, Tools, Techniques, Technologies, News and other information.

    Disclaimer

    Please note that this NVMe place resources site is independent of the industry trade and promoters group NVM Express, Inc. (e.g. www.nvmexpress.org). NVM Express, Inc. is the sole owner of the NVM Express specifications and trademarks.

    NVM Express Organization
    Image used with permission of NVM Express, Inc.

    Visit the NVM Express industry promoters site here to learn more about their members, news, events, product information, software driver downloads, and other useful NVMe resources content.

     

    The NVMe Place resources and NVM including SCM, PMEM, Flash

NVMe place covers Non Volatile Memory (NVM) including NAND flash, storage class memories (SCM) and persistent memories (PM), which are storage memory mediums, while NVM Express (NVMe) is an interface for accessing NVM. This NVMe resources page is a companion to The SSD Place, which has a broader Non Volatile Memory (NVM) focus including flash among other SSD topics. NVMe is a new server storage I/O access method and protocol for fast access to NVM based storage and memory technologies. NVMe is an alternative to existing block based server storage I/O access protocols such as AHCI/SATA and SCSI/SAS, commonly used for accessing Hard Disk Drives (HDD) along with SSDs among other things.

    Server Storage I/O NVMe PCIe SAS SATA AHCI
Comparing AHCI/SATA, SCSI/SAS and NVMe, all of which can coexist to address different needs.

Leveraging the standard PCIe hardware interface, NVMe based devices (that have an NVMe controller) can be accessed via various operating systems (and hypervisors such as VMware ESXi) with either in-box drivers or optional third-party device drivers. Devices that support NVMe can be packaged in a 2.5″ drive form factor using a converged 8637/8639 connector (e.g. PCIe x4), coexisting with SAS and SATA devices, as well as in add-in card (AIC) PCIe form factors supporting x4, x8 and other implementations. Initially, NVMe is being positioned as a back-end interface for servers (or storage systems) accessing fast flash and other NVM based devices.
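
As a quick illustration of NVMe devices appearing as standard storage devices, the following hypothetical Python sketch simply lists NVMe namespace block devices on a Linux host, where the in-box driver exposes them under /dev (e.g. /dev/nvme0n1):

    # Enumerate NVMe namespaces (and their partitions) exposed as standard block devices
    import glob

    for device in sorted(glob.glob("/dev/nvme*n*")):
        print(device)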

    NVMe as back-end storage
    NVMe as a “back-end” I/O interface for NVM storage media

    NVMe as front-end server storage I/O interface
    NVMe as a “front-end” interface for servers or storage systems/appliances

NVMe has also been shown to work over low latency, high-speed RDMA based network interfaces including RoCE (RDMA over Converged Ethernet) and InfiniBand (read more here, here and here involving Mangstor, Mellanox and PMC among others). What this means is that, like SCSI based SAS, which can be both a back-end drive (HDD, SSD, etc.) access protocol and interface, NVMe can be used on the back-end as well as on the front-end as a server-to-storage interface, similar to how Fibre Channel SCSI_Protocol (aka FCP), SCSI based iSCSI, and SCSI RDMA Protocol via InfiniBand (among others) are used.

    NVMe features

    Main features of NVMe include among others:

• Lower latency due to improved drivers and increased queues (and queue sizes)
• Lower CPU usage to handle larger numbers of I/Os (more CPU available for useful work)
• Higher I/O activity rates (IOPs) to boost productivity and unlock the value of fast flash and NVM
• Bandwidth improvements leveraging fast PCIe interfaces and available lanes
• Dual-pathing of devices, similar to what is available with dual-path SAS devices
• Unlocking the value of more cores per processor socket and software threads (productivity)
• Various packaging options, deployment scenarios and configuration options
• Appears as a standard storage device on most operating systems
• Plug-and-play with in-box drivers on many popular operating systems and hypervisors

    Shared external PCIe using NVMe
    NVMe and shared PCIe (e.g. shared PCIe flash DAS)

    NVMe related content and links

The following are some of my tips, articles, blog posts, presentations and other content, along with material from others, pertaining to NVMe. Keep in mind that the question should not be if NVMe is in your future, but rather when, where, with what, from whom and how much of it will be used, as well as how it will be used.

    • How to Prepare for the NVMe Server Storage I/O Wave (Via Micron.com)
    • Why NVMe Should Be in Your Data Center (Via Micron.com)
    • NVMe U2 (8639) vs. M2 interfaces (Via Gamersnexus)
    • Enmotus FuzeDrive MicroTiering (StorageIO Lab Report)
    • EMC DSSD D5 Rack Scale Direct Attached Shared SSD All Flash Array Part I (Via StorageIOBlog)
    • Part II – EMC DSSD D5 Direct Attached Shared AFA (Via StorageIOBlog)
    • NAND, DRAM, SAS/SCSI & SATA/AHCI: Not Dead, Yet! (Via EnterpriseStorageForum)
    • Non Volatile Memory (NVM), NVMe, Flash Memory Summit and SSD updates (Via StorageIOblog)
    • Microsoft and Intel showcase Storage Spaces Direct with NVM Express at IDF ’15 (Via TechNet)
• NVM Express solutions (Via SuperMicro)
    • Gaining Server Storage I/O Insight into Microsoft Windows Server 2016 (Via StorageIOblog)
    • PMC-Sierra Scales Storage with PCIe, NVMe (Via EEtimes)
    • RoCE updates among other items (Via InfiniBand Trade Association (IBTA) December Newsletter)
    • NVMe: The Golden Ticket for Faster Flash Storage? (Via EnterpriseStorageForum)
    • What should I consider when using SSD cloud? (Via SearchCloudStorage)
    • MSP CMG, Sept. 2014 Presentation (Flash back to reality – Myths and Realities – Flash and SSD Industry trends perspectives plus benchmarking tips)– PDF
    • Selecting Storage: Start With Requirements (Via NetworkComputing)
    • PMC Announces Flashtec NVMe SSD NVMe2106, NVMe2032 Controllers With LDPC (Via TomsITpro)
    • Exclusive: If Intel and Micron’s “Xpoint” is 3D Phase Change Memory, Boy Did They Patent It (Via Dailytech)
    • Intel & Micron 3D XPoint memory — is it just CBRAM hyped up? Curation of various posts (Via Computerworld)
    • How many IOPS can a HDD, HHDD or SSD do (Part I)?
    • How many IOPS can a HDD, HHDD or SSD do with VMware? (Part II)
    • I/O Performance Issues and Impacts on Time-Sensitive Applications (Via CMG)
    • Via EnterpriseStorageForum: 5 Hot Storage Technologies to Watch
    • Via EnterpriseStorageForum: 10-Year Review of Data Storage

Non-Volatile Memory (NVM) Express (NVMe) continues to evolve as a technology for enabling and improving server storage I/O for NVM including NAND flash SSD storage. NVMe streamlines performance, enabling more work to be done (e.g. IOPs) and more data to be moved (bandwidth) at a lower response time using less CPU.

    NVMe and SATA flash SSD performance

The above figure is a quick look comparing NAND flash SSD being accessed via SATA III (6Gbps) on the left and NVMe (x4) on the right. As with any server storage I/O performance comparison, there are many variables, so take the results with a grain of salt. While IOPs and bandwidth are often discussed, keep in mind that because the new protocol, drivers and device controllers with NVMe streamline I/O, less CPU is needed.

    Additional NVMe Resources

Also check out the Server StorageIO companion micro site landing pages, including thessdplace.com (SSD focus), data protection diaries (backup, BC/DR/HA and related topics), cloud and object storage, and server storage I/O performance and benchmarking.

If you are into the real bits and bytes details, such as device driver level content, check out the Linux NVMe reflector forum. The linux-nvme forum is a good source for developers to stay up on what is happening in and around device drivers and associated topics.

Additional learning experiences, along with common questions (and answers) as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    Disclaimer

    Disclaimer: Please note that this site is independent of the industry trade and promoters group NVM Express, Inc. (e.g. www.nvmexpress.org). NVM Express, Inc. is the sole owner of the NVM Express specifications and trademarks. Check out the NVM Express industry promoters site here to learn more about their members, news, events, product information, software driver downloads, and other useful NVMe resources content.

    NVM Express Organization
    Image used with permission of NVM Express, Inc.

    Wrap Up

    Watch for updates with more content, links and NVMe resources to be added here soon.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Intel Micron 3D XPoint server storage NVM SCM PM SSD

    3D XPoint server storage class memory SCM


    Storage I/O trends

    Updated 1/31/2018


    This is the second of a three-part series on the recent Intel and Micron 3D XPoint server storage memory announcement. Read Part I here and Part III here.

    Is this 3D XPoint marketing, manufacturing or material technology?

You can't have a successfully manufactured material technology without some marketing; likewise, marketing without some manufactured material would be manufactured marketing. In the case of the 3D XPoint announcement launch, there was real technology shown, granted it was only wafers and dies as opposed to an actual DDR4 DIMM, PCIe Add In Card (AIC) or drive form factor Solid State Device (SSD) product. On the other hand, on a relative comparison basis, even though there is marketing collateral available to learn more from, this was far from an over-the-big-top, made-for-TV or web circus event, which can be a good thing.


    Wafer unveiled containing 3D XPoint 128 Gb dies

    Who will get access to 3D XPoint?

Initially, 3D XPoint production capacity will supply the two companies themselves, who will offer early samples to their customers later this year, with general production slated for 2016, meaning the first real customer-deployed products should appear sometime in 2016.

    Is it NAND or NOT?

    3D XPoint is not NAND flash, it is also not NVRAM or DRAM, it’s a new class of NVM that can be used for server class main memory with persistency, or as persistent data storage among other uses (cell phones, automobiles, appliances and other electronics). In addition, 3D XPoint is more durable with a longer useful life for writing and storing data vs. NAND flash.

    Why is 3D XPoint important?

As mentioned during the Intel and Micron announcement, there have only been seven major memory technologies introduced since the transistor back in 1947; granted, there have been many variations along with generational enhancements of those. Thus 3D XPoint is being positioned by Intel and Micron as the eighth memory class, joining its predecessors, many of which continue to be used today in various roles.


    Major memory classes or categories timeline

    In addition to the above memory classes or categories timeline, the following shows in more detail various memory categories (click on the image below to get access to the Intel interactive infographic).

    Intel History of Memory Infographic
    Via: https://intelsalestraining.com/memory timeline/ (Click on image to view)

    What capacity size is 3D XPoint?

Initially the 3D XPoint technology is available as a two-layer design with 128 Gbits (cells) per die. Keep in mind that there are 8 bits to a byte, resulting in 16 GBytes of capacity per chip initially. With density improvements, as well as increased stacking of layers, the number of cells or bits per die (e.g. what makes up a chip) should improve; in addition, most implementations will package multiple chips together in some type of configuration.

    What will 3D XPoint cost?

During the 3D XPoint launch webinar, Intel and Micron hinted that initial pricing will be between that of current DRAM and NAND flash on a per-cell or per-bit basis; however, real pricing and costs will vary depending on how the technology is packaged for use. For example, whether it is placed on a DDR4 or different type of DIMM, on a PCIe Add In Card (AIC), or in a drive form factor SSD, among other options, will affect the real price. Likewise, as with other memories and storage mediums, as production yields and volumes increase, along with denser designs, the cost per usable cell or bit can be expected to improve further.

    Where to read, watch and learn more

    Storage I/O trends

Additional learning experiences, along with common questions (and answers) as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

DRAM, which has been around for some time, has plenty of life left for many applications, as does NAND flash, including new 3D NAND, vNAND and other variations. For the next several years there will be coexistence between new and old NVM and DRAM among other memory technologies, including 3D XPoint. Read more in this series including Part I here and Part III here.

    Disclosure: Micron and Intel have been direct and/or indirect clients in the past via third-parties and partners, also I have bought and use some of their technologies direct and/or in-direct via their partners.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    3D XPoint nvm pm scm storage class memory

    Part III – 3D XPoint server storage class memory SCM


    Storage I/O trends

    Updated 1/31/2018


    This is the third of a three-part series on the recent Intel and Micron 3D XPoint server storage memory announcement. Read Part I here and Part II here.

    What is 3D XPoint and how does it work?

3D XPoint is a new class or category of memory (view other categories of memory here) that provides read and write performance closer to that of DRAM with about 10x the capacity density. In addition to speed closer to DRAM vs. slower NAND flash, 3D XPoint is also non-volatile memory (NVM) like NAND flash, NVRAM and others. What this means is that 3D XPoint can be used as persistent, higher density, fast server memory (or main memory for other computers and electronics). Besides being fast persistent main memory, 3D XPoint will also be a faster medium for solid state devices (SSDs) including PCIe Add In Cards (AIC), M2 cards and drive form factor 8637/8639 NVM Express (NVMe) accessed devices that also have better endurance or life span compared to NAND flash.


    3D XPoint architecture and attributes

The initial die, or basic chip building block, of the 3D XPoint implementation is a two-layer 128 Gbit device, which at 8 bits per byte yields 16 GBytes raw. Over time, increased densities should become available as bit density improves with more cells and further scaling of the technology, combined with packaging. For example, while a current die can hold up to 16 GBytes of data, multiple dies can be packaged together to create a 32 GB, 64 GB, 128 GB or larger actual product. Think about not only where packaged flash-based SSD capacities are today, but also where DDR3 and DDR4 DIMMs are, at densities such as 4 GB, 8 GB, 16 GB and 32 GB.
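
To put that die-to-package math in perspective, this small illustrative Python sketch works through the capacities mentioned above:

    # A two-layer die stores 128 Gbits; at 8 bits per byte that is 16 GBytes raw
    gbits_per_die = 128
    gbytes_per_die = gbits_per_die // 8  # 16 GBytes per die
    for dies in (1, 2, 4, 8):
        # Packaging multiple dies together yields 16, 32, 64 and 128 GByte products
        print(f"{dies} die(s): {dies * gbytes_per_die} GBytes raw")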

The 3D aspect comes from the memory being arranged in a matrix, initially two layers high, with multiple rows and columns that intersect; at each intersection is a microscopic material-based switch for accessing a particular memory cell. Unlike NAND flash, where an individual cell or bit is accessed as part of a larger block or page comprising several thousand bytes at once, 3D XPoint cells or bits can be individually accessed, speeding up reads and writes in a more granular fashion. It is this more granular access, along with performance, that will enable 3D XPoint to be used in lower latency scenarios where DRAM would normally be used.

Instead of trapping electrons in a cell to create a bit of capacity (e.g. on or off) like NAND flash, 3D XPoint leverages the underlying physical material properties to store a bit as a phase change, enabling use of all cells. In other words, instead of being electron based, it is material based. While Intel and Micron did not specify the actual chemistry and physical materials used in 3D XPoint, they did discuss some of the characteristics. If you want to go deep, check out Dailytech's interesting educated speculation or thesis on the underlying technology.

    Watch the following video to get a better idea and visually see how 3D XPoint works.



    3D XPoint YouTube Video

    What are these chips, cells, wafers and dies?

Left: many dies on a wafer; right: a closer look at dies cut from the wafer

Dies (here and here) are the basic building blocks of what goes into chips, which in turn are the components used for creating DDR DIMMs for main computer memory, as well as for creating SD and MicroSD cards, USB thumb drives, PCIe AIC and drive form factor SSDs, custom modules on motherboards, or consumption at the bare die and wafer level (e.g. where you are doing really custom things at volume, beyond soldering-iron scale).

    Storage I/O trends

    Has Intel and Micron cornered the NVM and memory market?

We have heard proclamations, speculation and statements about the demise of DRAM, NAND flash and other volatile and NVM memories for years, if not decades. Each year there is the usual "this will be the year of x" claim, where "x" can include, among others: Resistive RAM (aka ReRAM or RRAM, aka the memristor), which HP earlier announced they were going to bring to market before canceling those plans earlier this year, while Crossbar continues to pursue RRAM; MRAM or Magnetoresistive RAM; Phase Change Memory aka CRAM or PCM and PRAM; and FRAM aka FeRAM or Ferroelectric RAM.

    flash SSD and NVM trends

    Expanding persistent memory and SSD storage markets

Keep in mind that there are many steps, taking time measured in years or decades, to go from a research and development lab idea to a prototype that can then be produced at production volumes with economic yields. As a point of reference, there is still plenty of life in both DRAM as well as NAND flash, the latter having appeared around 1989.

    Industry vs. Customer Adoption and deployment timeline

    Technology industry adoption precedes customer adoption and deployment

There is a difference between industry adoption and deployment vs. customer adoption and deployment; they are related, yet separated by time, as shown in the above figure. What this means is that there can be several years between the time a new technology is initially introduced and when it becomes generally available. Keep in mind that NAND flash has yet to reach its full market potential despite having made significant inroads over the past few years since it was introduced in 1989.

This raises the question of whether 3D XPoint is a variation of phase change, RRAM, MRAM or something else. Over at Dailytech they lay out a line of thinking (or educated speculation) that 3D XPoint is some derivative or variation of phase change; time will tell what it really is.

    What’s the difference between 3D NAND flash and 3D XPoint?

3D NAND is a form of NAND flash NVM, while 3D XPoint is a completely new and different type of NVM (e.g. it's not NAND).

3D NAND is a variation of traditional flash, the difference being vertical stacking vs. horizontal to improve density, also known as vertical NAND or V-NAND. Vertical stacking is like building up to house more tenants or occupants in a dense environment (scaling up), vs. scaling out by using up more space where density is not an issue. Note that magnetic HDDs shifted to perpendicular (e.g. vertical) recording about ten years ago to break through the superparamagnetic barrier, and more recently magnetic tape has also adopted perpendicular recording. Also keep in mind that 3D XPoint and the earlier announced Intel and Micron 3D NAND flash are two separate classes of memory that both just happen to have 3D in their marketing names.

    Where to read, watch and learn more

    Storage I/O trends

Additional learning experiences, along with common questions (and answers) as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

First, keep in mind that this is very early in the 3D XPoint technology evolution life-cycle, and both DRAM and NAND flash are far from dead, at least near term. Keep in mind that NAND flash appeared back in 1989 and only over the past several years has finally hit its mainstream adoption stride, with plenty of market upside left. The same goes for DRAM, which has been around for some time and still has plenty of life left for many applications. However, other applications that need improved speed over NAND flash, or persistence and density vs. DRAM, will be some of the first to leverage new NVM technologies such as 3D XPoint. Thus, at least for the next several years, there will be coexistence between new and old NVM and DRAM among other memory technologies. Bottom line: 3D XPoint is a new class of NVM memory that can be used for persistent main server memory or for persistent fast storage memory. If you have not done so, check out Part I here and Part II here of this three-part series on Intel and Micron 3D XPoint.

    Disclosure: Micron and Intel have been direct and/or indirect clients in the past via third-parties and partners, also I have bought and use some of their technologies direct and/or in-direct via their partners.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.