Use Intel Optane NVMe U.2 SFF 8639 SSD drive in PCIe slot

Use NVMe U.2 SFF 8639 disk drive form factor SSD in PCIe slot


Need to install or use an Intel Optane NVMe 900P or other Non-Volatile Memory (NVM) Express (NVMe) based U.2 SFF 8639 disk drive form factor Solid State Device (SSD) in a PCIe slot?

For example, I needed to connect an Intel Optane NVMe 900P U.2 SFF 8639 drive form factor SSD into one of my servers using an available PCIe slot.

The solution I used was a carrier adapter card such as those from Ableconn (e.g. the PEXU2-132 NVMe 2.5-inch U.2 [SFF-8639] adapter), available via Amazon.com among other global venues.

Top Intel 750 NVMe PCIe AiC SSD, bottom Intel Optane NVMe 900P U.2 SSD with Ableconn carrier

The above image shows, on top, an Intel 750 NVMe PCIe Add-in Card (AiC) SSD and, on the bottom, an Intel Optane NVMe 900P 280GB U.2 (SFF 8639) drive form factor SSD mounted on an Ableconn carrier adapter.


NVMe Tradecraft Refresher

NVMe is a protocol that is implemented across different topologies, including locally via PCIe using U.2 aka SFF-8639 (aka disk drive form factor), M.2 aka Next Generation Form Factor (NGFF) also known as "gum stick", along with PCIe Add-in Card (AiC). NVMe accessed devices can be installed in laptops, ultrabooks, workstations, servers and storage systems using the various form factors. U.2 drives are also referred to by some as PCIe drives in that the NVMe command set protocol is implemented over a PCIe x4 physical connection to the devices. Jump ahead if you want to skip over the NVMe primer refresh material to learn more about U.2 8639 devices.
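As a quick hands-on example (a minimal sketch assuming a Linux host with the nvme-cli package installed), you can see how NVMe devices enumerate the same way regardless of whether they are U.2, M.2 or AiC packaged:

# List NVMe controllers and namespaces; devices appear as /dev/nvme0n1, /dev/nvme1n1 and so on
nvme list
# Confirm the NVMe controller is visible on the PCIe bus
lspci | grep -i "non-volatile memory"

Similar views are available on Windows and VMware ESXi using their respective tools.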

data infrastructure nvme u.2 8639 ssd
Various SSD device form factors and interfaces

In addition to form factor, NVMe devices can be direct attached and dedicated, rack and shared, as well as accessed via networks also known as fabrics such as NVMe over Fabrics.

NVMeoF FC-NVMe NVMe fabric SDDC
The many facets of NVMe as a front-end, back-end, direct attach and fabric

Context is important with NVMe in that fabric can mean NVMe over Fibre Channel (FC-NVMe) where the NVMe command set protocol is used in place of SCSI Fibre Channel Protocol (e.g. SCSI_FCP) aka FCP or what many simply know and refer to as Fibre Channel. NVMe over Fabric can also mean NVMe command set implemented over an RDMA over Converged Ethernet (RoCE) based network.

NVM and NVMe accessed flash SCM SSD storage

Another point of context is not to confuse Non-Volatile Memory (NVM), which is the storage or memory media, with NVMe, which is the interface and protocol for accessing that storage (similar in role to SAS, SATA and others). As a refresher, NVM, or the media, are the various persistent memories (PM) including NVRAM, NAND flash, 3D XPoint along with other storage class memories (SCM) used in SSDs (in various packaging).

Learn more about 3D XPoint with the following resources:

Learn more about (or refresh) your NVMe server storage I/O knowledge, experience and tradecraft skill set with this post here. View this piece here looking at NVM vs. NVMe and how one is the media where data is stored, while the other is the access protocol (e.g. NVMe). Also visit www.thenvmeplace.com to view additional NVMe tips, tools, technologies, and related resources.

NVMe U.2 SFF-8639 aka 8639 SSD

At a quick glance, an NVMe U.2 SFF-8639 SSD may look like a SAS small form factor (SFF) 2.5" HDD or SSD. Also keep in mind that HDDs and SSDs with a SAS interface have a small key tab to prevent inserting them into a SATA port. As a reminder, SATA devices can plug into SAS ports, however not the other way around, which is what the key tab prevents (accidental insertion of SAS into SATA). Looking at the left-hand side of the following image you will see an NVMe SFF 8639 aka U.2 backplane connector, which looks similar to a SAS port.

Note that depending on how it is implemented, including its internal controller, flash translation layer (FTL), firmware and other considerations, an NVMe U.2 or 8639 x4 SSD should have similar performance to a comparable NVMe x4 PCIe AiC (e.g. card) device. By comparable device, I mean the same type of NVM media (e.g. flash or 3D XPoint), FTL and controller. Likewise, a PCIe x8 device should generally be faster than an x4, however more PCIe lanes does not automatically mean more performance; it's what's inside and how those lanes are actually used that matter.

NVMe U.2 SFF 8639 Drive (Software Defined Data Infrastructure Essentials CRC Press)

With U.2 devices the key tab that prevents SAS drives from inserting into a SATA port is where four pins that support PCIe x4 are located. What this all means is that a U.2 8639 port or socket can accept an NVMe, SAS or SATA device depending on how the port is configured. Note that the U.2 8639 port is either connected to a SAS controller for SAS and SATA devices or a PCIe port, riser or adapter.

On the left of the above figure is a view towards the backplane of a storage enclosure in a server that supports SAS, SATA, and NVMe (e.g. 8639). On the right of the above figure is the connector end of an 8639 NVMe SSD showing additional pin connectors compared to a SAS or SATA device. Those extra pins give PCIe x4 connectivity to the NVMe devices. The 8639 drive connectors enable a device such as an NVM or NAND flash SSD to share a common physical storage enclosure with SAS and SATA devices, including optional dual-pathing.

More PCIe lanes may not mean faster performance; verify whether those lanes (e.g. x4, x8, x16, etc.) are present just mechanically (e.g. physically) or are also wired electrically (i.e. actually usable) and being used. Also note that some PCIe storage devices or adapters might be, for example, x8 in order to support two channels or devices each at x4. Likewise, some devices might be x16 yet only support four x4 devices.
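One way to check what has actually been negotiated (a minimal sketch assuming a Linux host; the 3b:00.0 device address is a hypothetical example, substitute the address reported by lspci for your device) is to compare the link capability against the current link status:

# LnkCap shows the maximum speed and width the device supports, LnkSta shows what was actually negotiated
sudo lspci -vv -s 3b:00.0 | grep -E "LnkCap|LnkSta"

If LnkSta reports a narrower width or lower speed than LnkCap, the slot may be mechanically larger than it is electrically wired, or the device may be sharing lanes.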

NVMe U.2 SFF 8639 PCIe Drive SSD FAQ

Some common questions pertaining to NVMe U.2 aka SFF 8639 interface and form factor based SSDs include:

Why use U.2 type devices?

Compatibility with what is available for server storage I/O slots in a server, appliance or storage enclosure; the ability to mix and match SAS, SATA and NVMe (with some caveats) in the same enclosure; and support for higher density storage configurations that maximize available PCIe slots and enclosure density.

Is PCIe x4 with NVMe U.2 devices fast enough?

While not as fast as a PCIe AiC that fully supports x8 or x16 or higher, an x4 U.2 NVMe accessed SSD should be plenty fast for many applications. If you need more performance, then go with a faster AiC card.
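As a rough sanity check on "fast enough" (assuming PCIe Gen 3): each lane provides roughly 985 MB/s of usable bandwidth after 128b/130b encoding, so an x4 connection works out to roughly 3.9 GB/s per direction and an x8 to roughly 7.9 GB/s, versus about 600 MB/s for a 6Gbps SATA III link. In other words, even an x4 U.2 connection has several times the bandwidth headroom of SATA before the AiC question comes into play.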

Why not go with all PCIe AiC?

If you need the speed and simplicity and have available PCIe card slots, then put as many of those in your systems or appliances as possible. On the other hand, some servers or appliances are PCIe slot constrained, so U.2 devices can be used to increase the number of devices attached to a PCIe backplane while also supporting SAS and SATA based SSDs or HDDs.

Why not use M.2 devices?

If your system or appliance supports NVMe M.2, those are good options. Some systems even support a combination, with M.2 used for local boot, staging, logs, work and other storage space, while PCIe AiC along with U.2 devices are used for performance.

Why not use NVMeoF?

Good question, why not indeed; if your shared storage system supports NVMeoF or FC-NVMe, go ahead and use that, however you might also need some local NVMe devices. Likewise, if yours is a software-defined storage platform that needs local storage, then NVMe U.2, M.2 and AiC or custom cards are an option. On the other hand, a shared fabric NVMe based solution may support a mixed pool of SAS, SATA along with NVMe U.2, M.2, AiC or custom cards as its back-end storage resources.

When not to use U.2?

If your system, appliance or enclosure does not support U.2 and you do not have a need for it. Or, if you need more performance such as from an x8 or x16 based AiC, or you need shared storage. Granted a shared storage system may have U.2 based SSD drives as back-end storage among other options.

How does the U.2 backplane connector attach to PCIe?

Via the enclosure backplane: there is either a direct hardwired connection to the PCIe backplane, or a connector cable to a riser card or similar mechanism.

Does NVMe replace SAS, SATA or Fibre Channel as an interface?

The NVMe command set is an alternative to the traditional SCSI command set used in SAS and Fibre Channel. That means it can replace them, or co-exist with them, depending on your needs and preferences for accessing various storage devices.

Who supports U.2 devices?

Dell has supported U.2 aka PCIe drives in some of their servers for many years, as have Intel and many others. Likewise, U.2 8639 SSD drives, including 3D XPoint and NAND flash-based, are available from Intel among others.

Can you have AiC, U.2 and M.2 devices in the same system?

If your server, appliance or storage system supports them, then yes. Likewise, there are M.2 to PCIe AiC, M.2 to SATA along with other adapters available for your servers, workstations or software-defined storage system platform.

NVMe U.2 carrier to PCIe adapter

The following images show examples of mounting an Intel Optane NVMe 900P U.2 8639 SSD on an Ableconn PCIe AiC carrier. Once the U.2 SSD is mounted, the Ableconn adapter inserts into an available PCIe slot similar to other AiC devices. From a server or storage appliance software perspective, the Ableconn is a pass-through device, so your normal device drivers are used; for example, VMware vSphere ESXi 6.5 recognizes the Intel Optane device, as do Windows and other operating systems.
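For example, to confirm from an ESXi shell that the device behind the carrier is seen as a normal NVMe device (a quick sketch; command namespaces can vary slightly between ESXi releases):

# List storage adapters; the Optane should show up behind the nvme driver
esxcli storage core adapter list
# List NVMe controllers and devices (available in ESXi 6.5 and later releases)
esxcli nvme device list

No special Ableconn driver is involved; the carrier simply routes the U.2 connector's PCIe x4 lanes to the slot.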

intel optane 900p u.2 8639 nvme drive bottom view
Intel Optane NVMe 900P U.2 SSD and Ableconn PCIe AiC carrier

The above image shows the Ableconn adapter carrier card along with NVMe U.2 8639 pins on the Intel Optane NVMe 900P.

intel optane 900p u.2 8639 nvme drive end view
Views of Intel Optane NVMe 900P U.2 8639 and Ableconn carrier connectors

The above image shows an edge view of the NVMe U.2 SFF 8639 Intel Optane NVMe 900P SSD along with those on the Ableconn adapter carrier. The following images show an Intel Optane NVMe 900P SSD installed in a PCIe AiC slot using an Ableconn carrier, along with how VMware vSphere ESXi 6.5 sees the device using plug and play NVMe device drivers.

NVMe U.2 8639 installed in PCIe AiC Slot
Intel Optane NVMe 900P U.2 SSD installed in PCIe AiC Slot

NVMe U.2 8639 and VMware vSphere ESXi
How VMware vSphere ESXi 6.5 sees NVMe U.2 device

Intel Optane NVMe 3D XPoint based and other SSDs

Here are some Amazon.com links to various Intel Optane NVMe 3D XPoint based SSDs in different packaging form factors:

Here are some Amazon.com links to various Intel and other vendor NAND flash based NVMe accessed SSDs including U.2, M.2 and AiC form factors:

Note in addition to carriers to adapt U.2 8639 devices to PCIe AiC form factor and interfaces, there are also M.2 NGFF to PCIe AiC among others. An example is the Ableconn M.2 NGFF PCIe SSD to PCI Express 3.0 x4 Host Adapter Card.

In addition to Amazon.com, venues such as Newegg.com and eBay among many others carry NVMe related technologies.
The Intel Optane NVMe 900P is newer, however the Intel 750 Series along with other Intel NAND flash based SSDs are still good price performers and provide value. I have accumulated several Intel 750 NVMe devices over the past few years as they are great price performers. Check out this related post Get in the NVMe SSD game (if you are not already).

Where To Learn More

View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

NVMe accessed storage is in your future, however there are various questions to address, including exploring your options for type of devices, form factors and configurations among other topics. Some NVMe accessed storage is direct attached and dedicated in laptops, ultrabooks, workstations and servers, including PCIe AiC, M.2 and U.2 SSDs, while other NVMe storage is shared and networked, aka fabric based. NVMe over fabric (e.g. NVMeoF) includes RDMA over Converged Ethernet (RoCE) as well as NVMe over Fibre Channel (e.g. FC-NVMe). Fabric accessed, pooled shared storage systems and appliances can in turn include internal NVMe attached devices (e.g. as part of their back-end storage) as well as other SSDs (e.g. SAS, SATA).

General wrap-up (for now) NVMe U.2 8639 and related tips include:

  • Verify the performance of the device vs. how many PCIe lanes exist
  • Update any applicable BIOS/UEFI, device drivers and other software
  • Check the form factor and interface needed (e.g. U.2, M.2 / NGFF, AiC) for a given scenario
  • Look carefully at the NVMe devices being ordered for proper form factor and interface
  • With M.2 verify that it is an NVMe enabled device vs. SATA

Learn more about NVMe at www.thenvmeplace.com including how to use Intel Optane NVMe 900P U.2 SFF 8639 disk drive form factor SSDs in PCIe slots as well as for fabric among other scenarios.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Cisco Next Gen 32Gb Fibre Channel NVMe SAN Updates


Cisco Next Gen 32Gb Fibre Channel and NVMe SAN Updates

Cisco announced today next generation MDS storage area networking (SAN) Fibre Channel (FC) switches with 32Gb, along with NVMe over FC support.

Cisco Fibre Channel (FC) Directors (Left) and Switches (Right)

Highlights of the Cisco announcement include:

  • MDS 9700 48 port 32Gbps FC switching module
  • High density 768 port 32Gbps FC directors
  • NVMe over FC for attaching fast flash SSD devices (current MDS 9700, 9396S, 9250i and 9148S)
  • Integrated analytics engine for management insight awareness
  • Multiple concurrent protocols including NVMe, SCSI (e.g. SCSI_FCP aka FCP) and FCoE

Where to Learn More

The following are additional resources to learn more.

What this all means, wrap up and summary

Fibre Channel remains relevant for many environments, and it makes sense that Cisco, known for Ethernet along with IP networking, enhances its offerings here. Having 32Gb Fibre Channel, along with adding NVMe over Fabric, enables existing (and new) Cisco customers to support their legacy (e.g. FC) and emerging (NVMe) workloads as well as devices. For those environments that still need some mix of Fibre Channel as well as NVMe over fabric, this is a good announcement. Keep an eye and ear open for which storage vendors jump on the NVMe over Fabric bandwagon now that Cisco as well as Brocade have made switch support announcements.

Ok, nuff said (for now…).

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book “Software-Defined Data Infrastructure Essentials” (CRC Press).

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

March 2017 Server StorageIO Data Infrastructure Update Newsletter

Volume 17, Issue III

Hello and welcome to the March 2017 issue of the Server StorageIO update newsletter.

First, a reminder that World Backup (and recovery) Day is on March 31. Following up from the February Server StorageIO update newsletter that had a focus on data protection, this edition includes some additional posts, articles, tips and commentary below.

Other data infrastructure (and tradecraft) topics in this edition include cloud, virtual, server, storage and I/O including NVMe as well as networks. Industry trends include new technology and services announcements, cloud services, HPE buying Nimble among other activity. Check out the Converged Infrastructure (CI), Hyper-Converged (HCI) and Cluster in Box (or Cloud in Box) coverage including a recent SNIA webinar I was invited to be the guest presenter for, along with companion post below.

In This Issue

Enjoy this edition of the Server StorageIO update newsletter.

Cheers GS

Data Infrastructure and IT Industry Activity Trends

Some recent Industry Activities, Trends, News and Announcements include:

Dell EMC has discontinued the DSSD D5, its NVMe direct attached shared all flash array. At about the same time it is shutting down the DSSD D5 product, Dell EMC has also signaled it will leverage various technologies including NVMe across its broad server storage portfolio in different ways moving forward. While Dell EMC is shutting down DSSD D5, it is also bringing additional NVMe solutions to market, including those it has been shipping for years (e.g. on the server side). Learn more about DSSD D5 here and here, including perspectives of how it could have been used (plays for playbooks).

Meanwhile NVMe industry activity continues to expand with different solutions from startups such as E8, Excelero, Everspin, Intel, Mellanox, Micron, Samsung and WD SANdisk among others. Also keep in mind, if the answer is NVMe, then what were and are the questions to ask, as well as what are some easy to use benchmark scripts (using fio, diskspd, vdbench, iometer).

Speaking of NVMe, flash and SSDs, Amazon Web Services (AWS) has added new Elastic Cloud Compute (EC2) storage and I/O optimized i3 instances. These new instances are available in various configurations with different amounts of vCPU (cores or logical processors), memory and NVMe SSD capacities (and quantity) along with price.

Note that the price per i3 instance varies not only by its configuration, but also by the image and region it is deployed in. The flash SSD capacities range from an entry-level (i3.large) with 2 vCPU (logical processors), 15.25GB of RAM and a single 475GB NVMe SSD, which for example in the US East Region was recently priced at $0.156 per hour. At the high-end there is the i3.16xlarge with 64 vCPU (logical processors), 488GB RAM and 8 x 1900GB NVMe SSDs, with a recent US East Region price of $4.992 per hour. Note that the vCPU refers to the available number of logical processors and not necessarily cores or sockets.

Also note that your performance will vary, and while the NVMe protocol tends to use less CPU per I/O, generating a large number of I/Os still requires some CPU. What this means is that if you find your performance limited compared to expectations with the lower end i3 instances, move up to a larger instance and see what happens. If you have a Windows-based environment, you can use a tool such as Diskspd to see what happens with I/O performance as you decrease the number of CPUs used.
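As a hedged starting point (not a tuned benchmark; the test file path and size are assumptions, adjust them for the instance's local NVMe volume), a simple Diskspd 4KB random read run looks like this:

# 4KB random reads, 4 threads, 32 outstanding I/Os per thread, 60 seconds, caching disabled, latency stats
diskspd.exe -c10G -b4K -o32 -t4 -r -w0 -d60 -Sh -L D:\iotest.dat

Rerun it while varying the thread count (-t) and instance size, and compare IOPS, latency and CPU utilization to see where the smaller i3 instances become processor bound.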

Chelsio has announced they are now Microsoft Azure Stack Certified with their iWARP RDMA host adapter solutions, as well as for converged infrastructure (CI), hyper-converged (HCI) and legacy server storage deployments. As part of the announcement, Chelsio is also offering a 30 day no cost trial of their adapters for Microsoft Azure Stack, Windows Server 2016 and Windows 10 client environments. Learn more about the Chelsio trial offer here.

Everspin (the MRAM Spin-torque, persistent RAM folks) has announced a new Storage Class Memory (SCM) NVMe accessible family (nvNITRO) of storage accelerator devices (PCIe AiC, U.2). What's interesting about Everspin is that they are using NVMe for accessing their persistent RAM (e.g. MRAM), making it easily plug compatible with existing operating systems or hypervisors. This means using standard out of the box NVMe drivers where the Everspin SCM appears as a block device (for compatibility) functioning as a low latency, high performance persistent write cache.

Something else interesting, besides making the new memory compatible with existing server CPU complexes via PCIe, is how Everspin is demonstrating that NVMe as a general access protocol is not exclusive to NAND flash-based SSDs. What this means is that instead of using non-persistent DRAM, or slower NAND flash (or 3D XPoint SCM), Everspin nvNITRO enables a high endurance write cache with persistence to complement existing NAND flash as well as emerging 3D XPoint based storage. Keep an eye on Everspin as they are doing some interesting things for future discussions.

Google Cloud Services has added additional regions (cloud locations) and other enhancements.

HPE continued buying into server storage I/O data infrastructure technologies, announcing an all cash (e.g. no stock) acquisition of Nimble Storage (NMBL). The cash acquisition for a little over $1B USD amounts to $12.50 USD per Nimble share, double what it had traded at. As a refresh, or overview, Nimble is an all flash shared storage system leveraging NAND flash solid state device (SSD) performance. Note that Nimble also partners with Cisco and Lenovo platforms that compete with HPE servers for converged systems. View additional perspectives here.

Riverbed has announced the release of SteelFusion 5 which, while its name implies physical hardware metal, is available as tin-wrapped (e.g. hardware appliance) software. The solution is also available for deployment as a VMware virtual appliance for remote office branch office (ROBO) scenarios among others. Enhancements include converged functionality such as NAS support along with network latency and bandwidth improvements among other features.

Check out other industry news, comments, trends perspectives here.

Server StorageIOblog Posts

Recent and popular Server StorageIOblog posts include:

View other recent as well as past StorageIOblog posts here

Server StorageIO Commentary in the news

Recent Server StorageIO industry trends perspectives commentary in the news.

Via InfoStor: 8 Big Enterprise SSD Trends to Expect in 2017
Watch for increased capacities at lower cost, differentiation awareness of high-capacity, low-cost and lower performing SSDs versus improved durability and performance along with cost capacity enhancements for active SSD (read and write optimized). You can also expect increased support for NVMe both as a back-end storage device with different form factors (e.g., M.2 gum sticks, U.2 8639 drives, PCIe cards) as well as front-end (e.g., storage systems that are NVMe-attached) including local direct-attached and fiber-attached. This means more awareness around NVMe both as front-end and back-end deployment options.

Via SearchITOperations: Storage performance bottlenecks
Sometimes it takes more than an aspirin to cure a headache. There may be a bottleneck somewhere else, in hardware, software, storage system architecture or something else.

Via SearchDNS: Parsing through the software-defined storage hype
Beyond scalability, SDS technology aims for freedom from the limits of proprietary hardware.

Via InfoStor: Data Storage Industry Braces for AI and Machine Learning
AI could also lead to untapped hidden or unknown value in existing data that has no or little perceived value

Via SearchDataCenter: New options to evolve data backup recovery

View more Server, Storage and I/O trends and perspectives comments here

Various Tips, Tools, Technology and Tradecraft Topics

Recent Data Infrastructure Tradecraft Articles, Tips, Tools, Tricks and related topics.

Via ComputerWeekly: Time to restore from backup: Do you know where your data is?
Via IDG/NetworkWorld: Ensure your data infrastructure remains available and resilient
Via IDG/NetworkWorld: Whats a data infrastructure?

Check out Scott Lowe (@Scott_Lowe) of VMware fame who, while having a virtual networking focus, has a nice roundup of related data infrastructure topics including cloud and open source among others.

Want to take a break from reading or listening to tech talk, check out some of the fun videos including aerial drone (and some technology topics) at www.storageio.tv.

View more tips and articles here

Events and Activities

Recent and upcoming event activities.

May 8-10, 2017 – Dell EMCworld – Las Vegas

April 3-7, 2017 – Seminars – Dutch workshop seminar series – Nijkerk Netherlands

March 15, 2017 – Webinar – SNIA/BrightTalk HyperConverged and Storage – 10AM PT

January 26 2017 – Seminar – Presenting at Wipro SDx Summit London UK

See more webinars and activities on the Server StorageIO Events page here.


Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book Software-Defined Data Infrastructure Essentials (CRC Press).

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

If NVMe is the answer, what are the questions?

If NVMe is the answer, what are the questions?


Updated 5/31/2018

If NVMe is the answer, then what are the various questions that should be asked?

Some common questions that NVMe is the answer to include: what is the difference between NVM and NVMe?

Is NVMe only for servers? Does NVMe require fabrics? What benefit does NVMe provide beyond more IOPs?

Let's take a look at some of these common NVMe conversations and other questions.

Main Features and Benefits of NVMe

Some of the main feature and benefits of NVMe among others include:

  • Lower latency due to improved drivers and increased queues (and queue sizes)
  • Lower CPU usage to handle a larger number of I/Os (more CPU available for useful work)
  • Higher I/O activity rates (IOPS) to boost productivity and unlock the value of fast flash and NVM
  • Bandwidth improvements leveraging fast PCIe interfaces and available lanes
  • Dual-pathing of devices, similar to what is available with dual-path SAS devices
  • Unlock the value of more cores per processor socket and software threads (productivity)
  • Various packaging options, deployment scenarios and configuration options
  • Appears as a standard storage device on most operating systems
  • Plug-and-play with in-box drivers on many popular operating systems and hypervisors

NVM and Media memory matters

What's the difference between NVM and NVMe? Non-Volatile Memory (NVM), as its name implies, is the persistent electronic memory medium where data is stored. Today you commonly know NVM as the NAND flash used in Solid State Devices (SSD), along with NVRAM and other emerging storage class memories (SCM).

Emerging SCM such as 3D XPoint, among other mediums (or media if you prefer), hold the promise of boosting both read and write performance beyond traditional NAND flash, closer to DRAM, while also having durability closer to DRAM. For now let's set the media and mediums aside and get back to how they are (or will be) accessed as well as used.


Server and Storage I/O Media access matters

NVM Express (e.g. NVMe) is a standard industry protocol for accessing NVM media (SSD and flash devices, storage systems, appliances). If NVMe is the answer, then depending on your point of view, NVMe can be (or is) a replacement (today or in the future) for AHCI/SATA and Serial Attached SCSI (SAS). What this means is that NVMe can coexist with or replace other block SCSI protocol implementations (e.g. Fibre Channel SCSI_FCP aka FCP, iSCSI, SRP) as well as NBD (among others).

Similar to the SCSI command set that is implemented on different networks (e.g. iSCSI (IP), FCP (Fibre Channel), SRP (InfiniBand), SAS), NVMe as a protocol is now implemented using PCIe with form factors of add-in cards (AiC), M.2 (e.g. gum sticks aka next-gen form factor or NGFF) as well as U.2 aka 8639 drive form factors. There are also the emerging NVMe over Fabrics variants, including FC-NVMe (e.g. the NVMe protocol over Fibre Channel), which is an alternative to SCSI_FCP (e.g. SCSI on Fibre Channel). An example of a PCIe AiC that I have is the Intel 750 400GB NVMe (among others). You should be able to find the Intel among other NVMe devices from your preferred vendor as well as different venues including Amazon.com.

NVM, flash and NVMe SSD
Left PCIe AiC x4 NVMe SSD, lower center M.2 NGFF, right SAS and SATA SSD

The following image shows an NVMe U.2 (e.g. 8639) drive form factor device that from a distance looks like a SAS device and connector. However, looking closer, there are some extra pins or connectors that present a PCIe Gen 3 x4 (4 PCIe lanes) connection from the server or enclosure backplane to the device. These U.2 devices plug into 8639 slots (right) that look like a SAS slot that can also accommodate SATA. Remember, SATA can plug into SAS, however not the other way around.

NVMe U.2 8639 driveNVMe 8639 slot
Left NVMe U.2 drive showing PCIe x4 connectors, right, NVMe U.2 8639 connector

What NVMe U.2 means is that the 8639 slots can be used for 12Gbps SAS, 6Gbps SATA or x4 PCIe-based NVMe. Those devices in turn attach to their respective controllers (or adapters) and device driver software protocol stacks. Several servers have U.2 or 8639 drive slots in either 2.5" or 1.8" form factors; sometimes these are also called or known as "blue" drives (or slots). The color coding simply helps to keep track of which slots can be used for different things.
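A quick way to see how the devices in those slots are actually being accessed (a minimal sketch assuming a recent Windows Server release where the BusType property reports NVMe) is from PowerShell:

# BusType shows NVMe, SAS or SATA depending on how each device and slot is connected
Get-PhysicalDisk | Select-Object FriendlyName, BusType, MediaType, Size | Format-Table

On Linux, lsblk with its TRAN (transport) column gives a similar view.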

Navigating your various NVMe options

If NVMe is the answer, then some device and component options are as follows.

NVMe device components and options include:

  • Enclosures and connector port slots
  • Adapters and controllers
  • U.2, PCIe AIC and M.2 devices
  • Shared storage system or appliances
  • PCIe and NVMe switches

If NVMe is the answer, what to use when, where and why?

Why use a U.2 or 8639 slot when you could use PCIe AiC? Simple, your server or storage system may be PCIe slot constrained, yet have more available U.2 slots. There are U.2 drives from various vendors including Intel and Micron, as well as servers from Dell, Intel and Lenovo among many others.

Why and when would you use an NVMe M.2 device? As a local read/write cache, or perhaps a boot and system device on servers or appliances that have M.2 slots. Many servers and smaller workstations including Intel NUC support M.2. Likewise, there are M.2 devices from many different vendors including Micron, Samsung among others.

Where and why would you use NVMe PCIe AiC? Whenever you can and if you have enough PCIe slots of the proper form factor, mechanical and electrical (e.g. x1, x4, x8, x16) to support a particular card.

Can you mix and match different types of NVMe devices on the same server or appliance? As long as the physical server and its software (BIOS/UEFI, operating system, hypervisors, drivers) support it, yes. Most server and appliance vendors support PCIe NVMe AiCs, however pay attention to whether they are x4 or x8, both mechanically as well as electrically. Also, verify operating system and hypervisor device driver support. PCIe NVMe AiCs are available from Dell, Intel, Micron and many other vendors.

Networking with your Server and NVMe Storage

Keep in mind that context is important when discussing NVMe as there are devices for attaching as the back-end to servers, storage systems or appliances, as well as for front-end attachment (e.g. for attaching storage systems to servers). NVMe devices can also be internal to a server or storage system and appliance, or, accessible over a network. Think of NVMe as an upper-level command set protocol like SCSI that gets implemented on different networks (e.g. iSCSI, FCP, SRP).

How can NVMe use PCIe as a transport to use devices that are outside of a server? Different vendors have PCIe adapter cards that support longer distances (few meters) to attach to devices. For example, Dell EMC DSSD has a special dual port (two x4 ports) that are PCIe x8 cards for attachment to the DSSD shared SSD devices.

Note that there are also PCIe switches similar to SAS and InfiniBand among other switches. However just because these are switches, does not mean they are your regular off the shelf network type switch that your networking folks will know what to do with (or want to manage).

The following example shows a shared storage system or appliance being accessed by servers using traditional block, NAS file or object protocols. In this example, the storage system or appliance has implemented NVMe devices (PCIe AiC, M.2, U.2) as part of their back-end storage. The back-end storage might be all NVMe, or a mix of NVMe, SAS or SATA SSD and perhaps some high-capacity HDD.

NVMe and server storage access
Servers accessing shared storage with NVMe back-end devices

NVMe and server storage access via PCIe
NVMe PCIe attached (via front-end) storage with various back-end devices

In addition to shared PCIe-attached storage such as Dell EMC DSSD, similar to what is shown above, there are also other NVMe options. For example, there are industry initiatives to support the NVMe protocol for shared storage over fabric networks. There are different fabric networks, ranging from RDMA over Converged Ethernet (RoCE) based to NVMe over Fibre Channel (e.g. FC-NVMe) among others.

An option that on the surface may not seem like a natural fit or leverage NVMe to its fullest is simply adding NVMe devices as back-end media to existing arrays and appliances. For example, adding NVMe devices as the back-end to iSCSI, SAS, FC, FCoE or other block-based, NAS file or object systems.

NVMe and server storage access via shared PCIe
NVMe over a fabric network (via front-end) with various back-end devices

A common argument against using legacy storage access of shared NVMe is along the lines of why would you want to put a slow network or controller in front of a fast NVM device? You might not want to do that, or your vendor may tell you many reasons why you don’t want to do it particularly if they do not support it. On the other hand, just like other fast NVM SSD storage on shared systems, it may not be all about 100% full performance. Rather, for some environments, it might be about maximizing connectivity over many interfaces to faster NVM devices for several servers.

NVMe and server storage I/O performance

Is NVMe all about boosting the number of IOPS? NVMe can increase the number of IOPS, as well as support more bandwidth. However, it also reduces response time latency, as would be expected with an SSD or NVM type of solution. The following image shows an example of, not surprisingly, an NVMe PCIe AiC x4 SSD outperforming (more IOPs, lower response time) a 6Gb SATA SSD (an apples to oranges comparison). Also keep in mind that the best benchmark or workload tool is your own application, and your performance mileage will vary.

NVMe using less CPU per IOP
SATA SSD vs. NVMe PCIe AiC SSD IOPS, Latency and CPU per IOP

The above image shows the lower amount of CPU per IOP, given the newer, more streamlined driver and I/O software protocol of NVMe. With NVMe there is less overhead due to the new design and more queues, along with the ability to unlock value not only in SSDs but also in servers with more sockets, cores and threads.
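If you want to approximate CPU per I/O from your own benchmark runs (an informal calculation rather than a formal standard metric): multiply average CPU utilization by the number of logical processors to get processor-seconds consumed per second of wall time, then divide by the measured IOPS. For example, 20% utilization across 16 logical processors at 400,000 IOPS works out to (0.20 x 16) / 400,000, or roughly 8 microseconds of processor time per I/O; a lower number indicates a more efficient I/O path.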

What this means is that NVMe and SSDs can boost performance for activity (TPS, IOPs, gets, puts, reads, writes). NVMe can also lower response time latency while enabling higher throughput bandwidth. In other words, you get more work out of your server's CPU (and memory). Granted, SSDs have been used for decades to boost server performance and, in many cases, to delay an upgrade to a newer faster system by getting more work out of the existing one (e.g. SSD marketing 202).

NVMe maximizing your software license investments

What may not be so obvious (e.g. SSD marketing 404) is that by getting more work activity done in a given amount of time, you can also stretch your software licenses further. What this means is that you can get more out of your IBM, Microsoft, Oracle, SAP, VMware and other software licenses by increasing their effective productivity. You might already be using virtualization to increase server hardware efficiency and utilization to cut costs. Why not go further and boost productivity to increase your software license (as well as servers) effectiveness by using NVMe and SSDs?

Note that fast applications need fast software, servers, drivers, I/O protocols and devices.

Also, just because you have NVMe or PCIe present does not mean full performance, similar to how some vendors put SSDs behind their slow controllers and saw, well, slow performance. On the other hand, vendors who had or have fast controllers (software, firmware, hardware) that were HDD constrained, or that are even SSD performance constrained, can see a performance boost.

Additional NVMe and related tips

If you have a Windows server and have not already done so, check your power plan to make sure it is not improperly set to Balanced instead of High performance. For example, from a command line or PowerShell, issue the following command to activate the built-in High performance scheme (shown by its well-known GUID):

PowerCfg -SetActive "8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c"
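To verify the active plan before and after the change (the powercfg tool ships with Windows; OEM images sometimes add their own schemes with different GUIDs):

# Show the currently active power scheme
powercfg /getactivescheme
# List all power schemes and their GUIDs on this system
powercfg /list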

Another Windows related tip, if you have not done so already, is to enable Task Manager disk stats by issuing "diskperf -y" from a command line. Then open Task Manager and view drive performance on the Performance tab.

Need to benchmark, validate, compare or test an NVMe, SSD (or even HDD) device or system? There are various tools and workloads for different scenarios. Likewise, those tools can be configured for different activity to reflect your needs (and application workloads). For example, Microsoft Diskspd, fio.exe, iometer and vdbench sample scripts are shown here (along with results) as a starting point for comparison or validation testing.
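As one hedged illustration (a baseline sketch rather than a tuned workload; /dev/nvme0n1 is an assumed example device, so point the run at an idle device or a scratch file), a 4KB random read fio run against an NVMe namespace on Linux might look like:

# 4KB random reads, 4 jobs, queue depth 32 per job, 60 seconds, direct I/O
fio --name=4krandread --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --rw=randread --bs=4k --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting

Vary the block size, read/write mix and queue depth to approximate your application rather than chasing a single hero number.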

Does M.2 mean you have NVMe? That depends, as some systems implement M.2 with SATA while others support NVMe; read the fine print or ask for clarification.
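On a Linux system, one quick way to tell (a minimal sketch using lsblk from util-linux) is to look at the transport column for each disk:

# TRAN shows sata vs nvme for each physical disk, answering the M.2 SATA vs NVMe question
lsblk -d -o NAME,TRAN,MODEL

On Windows, the BusType column of Get-PhysicalDisk gives a similar answer.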

Do all NVMe devices using PCIe run at the same speed? Not necessarily, as some might be PCIe x1, x4 or x8. Likewise some NVMe PCIe cards might be x8 (mechanical and electrical) yet split out into a pair of x4 ports. Also keep in mind that, similar to a dual port HDD, NVMe U.2 drives can have two paths to a server, storage system controller or adapter, however both might not be active at the same time. You might also have a fast NVMe device attached to a slow server, storage system or adapter.

Who to watch and keep an eye on in the NVMe ecosystem? Besides those mentioned above, others to keep an eye on include Broadcom, E8, Enmotus Fuzedrive (micro-tiering software), Excelero, Magnotics, Mellanox, Microsemi (e.g. PMC Sierra), Microsoft (Windows Server 2016 S2D + ReFS + Storage Tiering), NVM Express trade group, Seagate, VMware (Virtual NVMe driver part of vSphere ESXi in addition to previous driver support) and WD/Sandisk among many others.

Where To Learn More

Additional related content can be found at:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

NVMe is in your future, that was the answer, however there are the when, where, how, with what among other questions to be addressed. One of the great things IMHO about NVMe is that you can have it your way, where and when you need it, as a replacement or companion to what you have. Granted that will vary based on your preferred vendors as well as what they support today or in the future.

If NVMe is the answer, ask your vendor when they will support NVMe as a back-end for their storage systems, as well as a front-end. Also determine when your servers (hardware, operating systems, hypervisors) will support NVMe and in what variation. Learn more about why NVMe is the answer and related topics at www.thenvmeplace.com

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

SSD, flash, Non-volatile memory (NVM) storage Trends, Tips & Topics

SSD, flash, Non-volatile memory (NVM) storage Trends, Tips & Topics

Updated 2/2/2018


Will 2017 be the year of solid state device (SSD), all flash, or all Non-volatile memory (NVM) based storage data centers and data infrastructures?

Recently I did a piece over at InfoStor looking at SSD trends, tips and related topics. SSDs of some type, shape and form are in your future, if they are not already. In my InfoStor piece, I look at some non-volatile memory (NVM) and SSD trends, technologies, tools and tips that you can leverage today to help prepare for tomorrow. This also includes NVM Express (NVMe) based components and solutions.

By way of background, SSD can refer to solid state drive or solid state device (e.g. more generic). The latter is what I am using in this post. NVM refers to different types of persistent memories, including NAND flash and its variants most commonly used today in SSDs. Other NVM mediums include NVRAM along with storage class memories (SCMs) such as 3D XPoint and phase change memory (PCM) among others. Let’s focus on NAND flash as that is what is primarily available and shipping for production enterprise environments today.

Continue reading about SSD, flash, NVM and related trends, topics and tips over at InfoStor by clicking here.

Where To Learn More

Additional related content can be found at:

What This All Means

Will 2017 finally be the year of all flash, all SSD and all NVM including emerging storage class memories (SCM)? Or, as we have seen over the past decade, will adoption and deployment simply keep increasing across most environments, some of which have already gone all SSD or NVM? In the meantime it is safe to say that NVMe, NVM, SSD, flash and other related technologies are in your future in some shape, form and quantity. Check out my piece over at InfoStor on SSD trends, tips and related topics.

What say you, are you going all flash, SSD or NVM in 2017, if not, what are your concerns or constraints and plans?

Ok, nuff said, for now…

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, vSAN and VMware vExpert. Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book "Software-Defined Data Infrastructure Essentials" (CRC Press).

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

NVMe Place NVM Non Volatile Memory Express Resources

Updated 8/31/19

Welcome to NVMe place NVM Non Volatile Memory Express Resources. NVMe place is about Non Volatile Memory (NVM) Express (NVMe) with Industry Trends Perspectives, Tips, Tools, Techniques, Technologies, News and other information.

Disclaimer

Please note that this NVMe place resources site is independent of the industry trade and promoters group NVM Express, Inc. (e.g. www.nvmexpress.org). NVM Express, Inc. is the sole owner of the NVM Express specifications and trademarks.

NVM Express Organization
Image used with permission of NVM Express, Inc.

Visit the NVM Express industry promoters site here to learn more about their members, news, events, product information, software driver downloads, and other useful NVMe resources content.

 

The NVMe Place resources and NVM including SCM, PMEM, Flash

NVMe place covers Non Volatile Memory (NVM) including NAND flash, storage class memories (SCM) and persistent memories (PM), which are storage memory mediums, while NVM Express (NVMe) is an interface for accessing NVM. This NVMe resources page is a companion to The SSD Place, which has a broader Non Volatile Memory (NVM) focus including flash among other SSD topics. NVMe is a newer server storage I/O access method and protocol for fast access to NVM based storage and memory technologies. NVMe is an alternative to existing block based server storage I/O access protocols such as AHCI/SATA and SCSI/SAS, commonly used for accessing Hard Disk Drives (HDD) along with SSDs among other things.

Server Storage I/O NVMe PCIe SAS SATA AHCI
Comparing AHCI/SATA, SCSI/SAS and NVMe all of which can coexist to address different needs.

Leveraging the standard PCIe hardware interface, NVMe based devices (that have an NVMe controller) can be accessed via various operating systems (and hypervisors such as VMware ESXi) with either in-box drivers or optional third-party device drivers. Devices that support NVMe can be packaged in a 2.5″ drive format that uses a converged 8637/8639 connector (e.g. PCIe x4), coexisting with SAS and SATA devices, or as add-in card (AIC) PCIe cards supporting x4, x8 and other implementations. Initially, NVMe is being positioned as a back-end interface for servers (or storage systems) to access fast flash and other NVM based devices.

NVMe as back-end storage
NVMe as a “back-end” I/O interface for NVM storage media

NVMe as front-end server storage I/O interface
NVMe as a “front-end” interface for servers or storage systems/appliances

NVMe has also been shown to work over low latency, high-speed RDMA based network interfaces including RoCE (RDMA over Converged Ethernet) and InfiniBand (read more here, here and here involving Mangstor, Mellanox and PMC among others). What this means is that, like SCSI based SAS which can be both a back-end drive (HDD, SSD, etc.) access protocol and interface, NVMe can be used for back-end device access and also as a front-end server-to-storage interface, similar to how Fibre Channel SCSI_FCP (aka FCP), SCSI based iSCSI and SCSI RDMA Protocol via InfiniBand (among others) are used.

NVMe features

Main features of NVMe include among others:

  • Lower latency due to improved drivers and increased queues (and queue sizes)
  • Lower CPU usage to handle a larger number of I/Os (more CPU available for useful work)
  • Higher I/O activity rates (IOPs) to boost productivity and unlock the value of fast flash and NVM
  • Bandwidth improvements leveraging fast PCIe interfaces and available lanes
  • Dual-pathing of devices, similar to what is available with dual-path SAS devices
  • Unlock the value of more cores per processor socket and software threads (productivity)
  • Various packaging options, deployment scenarios and configuration options
  • Appears as a standard storage device on most operating systems
  • Plug-and-play with in-box drivers on many popular operating systems and hypervisors

Shared external PCIe using NVMe
NVMe and shared PCIe (e.g. shared PCIe flash DAS)

NVMe related content and links

The following are some of my tips, articles, blog posts, presentations and other content, along with material from others pertaining to NVMe. Keep in mind that the question should not be if NVMe is in your future, rather when, where, with what, from whom and how much of it will be used as well as how it will be used.

  • How to Prepare for the NVMe Server Storage I/O Wave (Via Micron.com)
  • Why NVMe Should Be in Your Data Center (Via Micron.com)
  • NVMe U2 (8639) vs. M2 interfaces (Via Gamersnexus)
  • Enmotus FuzeDrive MicroTiering (StorageIO Lab Report)
  • EMC DSSD D5 Rack Scale Direct Attached Shared SSD All Flash Array Part I (Via StorageIOBlog)
  • Part II – EMC DSSD D5 Direct Attached Shared AFA (Via StorageIOBlog)
  • NAND, DRAM, SAS/SCSI & SATA/AHCI: Not Dead, Yet! (Via EnterpriseStorageForum)
  • Non Volatile Memory (NVM), NVMe, Flash Memory Summit and SSD updates (Via StorageIOblog)
  • Microsoft and Intel showcase Storage Spaces Direct with NVM Express at IDF ’15 (Via TechNet)
  • MNVM Express solutions (Via SuperMicro)
  • Gaining Server Storage I/O Insight into Microsoft Windows Server 2016 (Via StorageIOblog)
  • PMC-Sierra Scales Storage with PCIe, NVMe (Via EEtimes)
  • RoCE updates among other items (Via InfiniBand Trade Association (IBTA) December Newsletter)
  • NVMe: The Golden Ticket for Faster Flash Storage? (Via EnterpriseStorageForum)
  • What should I consider when using SSD cloud? (Via SearchCloudStorage)
  • MSP CMG, Sept. 2014 Presentation (Flash back to reality – Myths and Realities – Flash and SSD Industry trends perspectives plus benchmarking tips)– PDF
  • Selecting Storage: Start With Requirements (Via NetworkComputing)
  • PMC Announces Flashtec NVMe SSD NVMe2106, NVMe2032 Controllers With LDPC (Via TomsITpro)
  • Exclusive: If Intel and Micron’s “Xpoint” is 3D Phase Change Memory, Boy Did They Patent It (Via Dailytech)
  • Intel & Micron 3D XPoint memory — is it just CBRAM hyped up? Curation of various posts (Via Computerworld)
  • How many IOPS can a HDD, HHDD or SSD do (Part I)?
  • How many IOPS can a HDD, HHDD or SSD do with VMware? (Part II)
  • I/O Performance Issues and Impacts on Time-Sensitive Applications (Via CMG)
  • Via EnterpriseStorageForum: 5 Hot Storage Technologies to Watch
  • Via EnterpriseStorageForum: 10-Year Review of Data Storage

Non-Volatile Memory (NVM) Express (NVMe) continues to evolve as a technology for enabling and improving server storage I/O for NVM including NAND flash SSD storage. NVMe streamlines performance, enabling more work to be done (e.g. IOPs) and more data to be moved (bandwidth) at a lower response time while using less CPU.

NVMe and SATA flash SSD performance

The above figure is a quick look comparing NAND flash SSD being accessed via SATA III (6Gbps) on the left and NVMe (x4) on the right. As with any server storage I/O performance comparison there are many variables, so take the results with a grain of salt. While IOPs and bandwidth are often discussed, keep in mind that with NVMe's newer protocol, drivers and device controllers that streamline I/O, less CPU is needed.

Additional NVMe Resources

Also check out the Server StorageIO companion micro sites landing pages including thessdplace.com (SSD focus), data protection diaries (backup, BC/DR/HA and related topics), cloud and object storage, and server storage I/O performance and benchmarking here.

If you are into the real bits and bytes details, such as device driver level content, check out the Linux NVMe reflector forum. The linux-nvme forum is a good source if you are a developer looking to stay up on what is happening in and around device drivers and associated topics.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

Disclaimer

Disclaimer: Please note that this site is independent of the industry trade and promoters group NVM Express, Inc. (e.g. www.nvmexpress.org). NVM Express, Inc. is the sole owner of the NVM Express specifications and trademarks. Check out the NVM Express industry promoters site here to learn more about their members, news, events, product information, software driver downloads, and other useful NVMe resources content.

NVM Express Organization
Image used with permission of NVM Express, Inc.

Wrap Up

Watch for updates with more content, links and NVMe resources to be added here soon.

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

August Server StorageIO Update Newsletter – NVM and Flash SSD Focus

Volume 15, Issue VIII

Hello and welcome to this August 2015 Server StorageIO update newsletter. Summer is wrapping up here in the northern hemisphere which means the fall conference season has started, holidays in progress as well as getting ready for back to school time. I have been spending my summer working on various things involving servers, storage, I/O networking hardware, software, services from cloud to containers, virtual and physical. This includes OpenStack, VMware vCloud Air, AWS, Microsoft Azure, GCS among others, as well as new versions of Microsoft Windows and Servers, Non Volatile Memory (NVM) including flash SSD, NVM Express (NVMe), databases, data protection, software defined, cache, micro-tiering and benchmarking using various tools among other things (some are still under wraps).

Enjoy this edition of the Server StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts along with in the news commentary appearing soon.

Cheers GS

In This Issue

  • Feature Topic
  • Industry Trends News
  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Videos and Podcasts
  • Events and Webinars
  • Recommended Reading List
  • Industry Activity Trends
  • Server StorageIO Lab reports
  • New and Old Vendor Update
  • Resources and Links
  • Feature Topic – Non Volatile Memory including NAND flash SSD

    Via Intel: History of Memory (image)

    This month's feature topic theme is Non Volatile Memory (NVM), which includes technologies such as the NAND flash commonly used in Solid State Devices (SSDs) storage today, as well as in USB thumb drives, mobile and hand-held devices among many other uses. NVM spans servers, storage and I/O devices along with mobile and handheld among many other technologies. In addition to NAND flash, other forms of NVM include Non Volatile Random Access Memory (NVRAM), Read Only Memory (ROM) along with some emerging new technologies including the recently announced Intel and Micron 3D XPoint among others.

    • NVMe: The Golden Ticket for Faster Flash Storage? (Via EnterpriseStorageForum)
    • What should I consider when using SSD cloud? (Via SearchCloudStorage)
    • MSP CMG, Sept. 2014 Presentation (Flash back to reality – Myths and Realities – Flash and SSD Industry trends perspectives plus benchmarking tips) – PDF
    • Selecting Storage: Start With Requirements (Via NetworkComputing)
    • Spot The Newest & Best Server Trends (Via Processor)
    • Market ripe for embedded flash storage as prices drop (Via Powermore (Dell))

    Continue reading more about NVM, NVMe, NAND flash, SSD Server and storage I/O related topics at www.thessdplace.com as well as about I/O performance, monitoring and benchmarking tools at www.storageperformance.us.

     

    StorageIOblog Posts

    Recent and popular Server StorageIOblog posts include:

    View other recent as well as past blog posts here

    Server Storage I/O Industry Activity Trends (Cloud, Virtual, Physical)


    • PMC Announces NVMe SSD Controllers (Via TomsITpro)
    • New SATA SSD powers elastic cloud agility for CSPs (Via Cbronline)
    • Toshiba Solid-State Drive Family Features PCIe Technology (Via Eweek)
    • SanDisk aims CloudSpeed Ultra SSD at cloud providers (Via ITwire)
    • Everspin & Aupera reveal MRAM Module M.2 Form Factor (Via BusinessWire)
    • PMC-Sierra Scales Storage with PCIe, NVMe (Via EEtimes)
    • Seagate Grows Its Nytro Enterprise Flash Storage Line (Via InfoStor)
    • New SAS Solid State Drive From Seagate Micron Alliance (Via Seagate)
    • Samsung ups the SSD ante with faster, higher capacity drives (Via ITworld)

    View other recent news and industry trends here

    StorageIO Commentary in the news

    Recent Server StorageIO commentary and industry trends perspectives about news, activities, tips, and announcements.

    • Processor: Comments on Spot The Newest & Best Server Trends
    • Processor: Comments on A Snapshot Strategy For Backups & Data Recovery
    • EnterpriseStorageForum: Comments on Defining the Future of DR Storage
    • EnterpriseStorageForum: Comments on Top Ten Tips for DR as a Service
    • EnterpriseStorageForum: Comments on NVMe: Golden Ticket for Faster Storage

    View more Server, Storage and I/O hardware as well as software trends comments here

    Vendors you may not have heard of

    Various vendors (and service providers) you may not know of or have heard about recently.

    • Scala – Scale out storage management software tools
    • Reduxio – Enterprise hybrid storage with data services
    • Jam TreeSize Pro – Data discovery and storage resource analysis and reporting

    Check out more vendors you may know, have heard of, or that are perhaps new on the Server StorageIO Industry Links page here (over 1,000 entries and growing).

    StorageIO Tips and Articles

    Recent Server StorageIO articles appearing in different venues include:

    • IronMountain:  Information Lifecycle Management: Which Data Types Have Value?
      It’s important to keep in mind that on a fundamental level there are three types of data: information that has value, information that does not have value and information that has unknown value. Data value can be measured along performance, availability, capacity and economic attributes, which define how the data gets managed across different tiers of storage. Read more here.
    • EnterpriseStorageForum:  Is Future Storage Converging Around Hyper-Converged?
      Depending on who you talk or listen to, hyper-converged storage is either the future of storage, or it is a hyped niche market that is not for everybody, particularly not larger environments. How converged is the hyper-converged market? There are many environments that can leverage CI along with HCI, CiB or other bundled solutions. Granted, not all of those environments will converge around the same CI, CiB and HCI or pod solution bundles, as everything is not the same in most IT environments and data centers. Not all markets, environments or solutions are the same. Read more here.

    Check out these resources and links on technology, techniques, trends as well as tools. View more tips and articles here

    StorageIO Videos and Podcasts

    StorageIO podcasts are also available at StorageIO.tv

    StorageIO Webinars and Industry Events

    Server Storage I/O Workshop Seminars
    Nijkerk Netherlands October 13-16 2015

    VMworld August 30-September 3 2015

    See additional webinars and other activities on the Server StorageIO Events page here.

    From StorageIO Labs

    Research, Reviews and Reports

    Enmotus FuzeDrive (Server based Micro-Tiering)
    Enmotus FuzeDrive
    • Micro-tiering of reads and writes
    • FuzeDrive for transparent tiering
    • Dynamic tiering with selectable options
    • Monitoring and diagnostics tools
    • Transparent to operating systems
    • Hardware transparent (HDD and SSD)
    • Server I/O interface agnostic
    • Optional RAM cache and file pinning
    • Maximize NVM flash SSD investment
    • Complement other SDS solutions
    • Use for servers or workstations

    Enmotus FuzeDrive provides micro-tiering boosting performance (reads and writes) of storage attached to physical bare metal servers, virtual and cloud instances including Windows and Linux operating systems across various applications. In the simple example above five separate SQL Server databases (260GB each) were placed on a single 6TB HDD. A TPCC workload was run concurrently against all databases with various numbers of users. One workload used a single 6TB HDD (blue) while the other used a FuzeDrive (green) comprised of a 6TB HDD and a 400GB SSD showing basic micro-tiering improvements.
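
    To make the micro-tiering idea above more concrete, the following is a minimal sketch (hypothetical logic in Python, not the actual Enmotus FuzeDrive algorithm): count accesses per extent and keep the hottest extents on the SSD portion of the fused volume, leaving the rest on the HDD.

    # Minimal micro-tiering sketch (hypothetical, not FuzeDrive internals):
    # promote the most frequently accessed extents to the SSD tier.
    from collections import Counter

    SSD_EXTENTS = 4                        # assumed capacity of the fast tier, in extents

    def place_extents(access_trace):
        heat = Counter(access_trace)       # accesses observed per extent
        hot = {e for e, _ in heat.most_common(SSD_EXTENTS)}
        return {e: ("SSD" if e in hot else "HDD") for e in heat}

    trace = [3, 3, 7, 3, 9, 7, 3, 1, 7, 3, 9, 2, 7, 9]   # extent IDs being read/written
    for extent, tier in sorted(place_extents(trace).items()):
        print(f"extent {extent}: {tier}")

    In a real product the heat statistics would be gathered continuously and extents migrated in the background, which is the transparent behavior described above.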

    View other StorageIO lab review reports here

    Server StorageIO Recommended Reading List

    The following are various recommended reading including books, blogs and videos. If you have not done so recently, also check out the Intel Recommended Reading List (here) where you will also find a couple of my books.

    Get What's Yours via Amazon.com
    While not a technology book, you do not have to be at or near retirement age to be planning for retirement. Some of you may already be at or near retirement age; for others, it's time to start planning or refining your plans. A friend recommended this book and I'm recommending it to others. It's pretty straightforward and you might be surprised how much money people may be leaving on the table! Check it out here at Amazon.com.

    Server StorageIO Industry Resources and Links

    Check out these useful links and pages:

    storageio.com/links
    objectstoragecenter.com
    storageioblog.com/data-protection-diaries-main/
    storageperformance.us
    thenvmeplace.com
    thessdplace.com
    storageio.com/raid
    storageio.com/ssd

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    3D XPoint nvm pm scm storage class memory

    Part III – 3D XPoint server storage class memory SCM


    Storage I/O trends

    Updated 1/31/2018


    This is the third of a three-part series on the recent Intel and Micron 3D XPoint server storage memory announcement. Read Part I here and Part II here.

    What is 3D XPoint and how does it work?

    3D XPoint is a new category or class of memory (view other categories of memory here) that provides performance for reads and writes closer to that of DRAM with about 10x the capacity density. In addition to speed closer to DRAM vs. slower NAND flash, 3D XPoint is also non-volatile memory (NVM) like NAND flash, NVRAM and others. What this means is that 3D XPoint can be used as persistent, higher density fast server memory (or main memory for other computers and electronics). Besides being fast persistent main memory, 3D XPoint will also be a faster medium for solid state devices (SSDs) including PCIe Add In Cards (AIC), M.2 cards and drive form factor 8637/8639 NVM Express (NVMe) accessed devices that also have better endurance or life span compared to NAND flash.


    3D XPoint architecture and attributes

    The initial die or basic chip building block of the 3D XPoint implementation is a two-layer 128 Gbit device, which at 8 bits per byte yields 16 GBytes raw. Over time increased densities should become available as the bit density improves with more cells and further scaling of the technology, combined with packaging. For example, while a current die could hold up to 16 GBytes of data, multiple dies could be packaged together to create a 32GB, 64GB, 128GB or larger actual product. Think about not only where packaged flash-based SSD capacities are today, but also where DDR3 and DDR4 DIMMs are, such as 4GB, 8GB, 16GB and 32GB densities.
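
    As a quick back-of-the-envelope check on those numbers (simple arithmetic, not vendor specifications):

    # 128 Gbit per die at 8 bits per byte = 16 GBytes raw per die;
    # packaging multiple dies multiplies the raw capacity.
    die_gbit = 128
    die_gbyte = die_gbit / 8
    for dies in (1, 2, 4, 8):
        print(f"{dies} die(s) packaged together -> {int(dies * die_gbyte)} GB raw")
    # 1 -> 16 GB, 2 -> 32 GB, 4 -> 64 GB, 8 -> 128 GB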

    The 3D aspect comes from the memory being arranged in a matrix, initially two layers high, with multiple rows and columns that intersect; at each of those intersections is a microscopic material-based switch for accessing a particular memory cell. Unlike NAND flash, where an individual cell or bit is accessed as part of a larger block or page comprising several thousand bytes at once, 3D XPoint cells or bits can be individually accessed to speed up reads and writes in a more granular fashion. It is this more granular access, along with performance, that will enable 3D XPoint to be used in lower latency scenarios where DRAM would normally be used.
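
    To illustrate why that granularity matters, here is a small back-of-the-envelope comparison (the 16 KiB page size is an assumption for illustration, not a 3D XPoint or specific NAND spec):

    # Byte-addressable access vs. page-based NAND access for a small 64 byte read.
    request_bytes = 64
    nand_page_bytes = 16 * 1024                      # assumed NAND page size
    print("NAND page read     :", nand_page_bytes, "bytes moved")
    print("Byte-addressable   :", request_bytes, "bytes moved")
    print("Read amplification :", nand_page_bytes // request_bytes, "x")   # 256x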

    Instead of trapping electrons in a cell to create a bit of capacity (e.g. on or off) like NAND flash, 3D XPoint leverages the underlying physical material properties to store a bit as a phase change, enabling use of all cells. In other words, instead of being electron based, it is material based. While Intel and Micron did not specify the actual chemistry and physical materials used in 3D XPoint, they did discuss some of the characteristics. If you want to go deep, check out how Dailytech makes an interesting educated speculation or thesis on the underlying technology.

    Watch the following video to get a better idea and visually see how 3D XPoint works.



    3D XPoint YouTube Video

    What are these chips, cells, wafers and dies?

    Left: many dies on a wafer; right: a closer look at the dies cut from the wafer

    Dies (here and here) are the basic building block of what goes into the chips that in turn are the components used for creating DDR DIMMs for main computer memory, as well as for creating SD and MicroSD cards, USB thumb drives, PCIe AIC and drive form factor SSDs, as well as custom modules on motherboards, or consumption at the bare die and wafer level (e.g. where you are doing really custom things at volume, beyond soldering iron scale).

    Storage I/O trends

    Have Intel and Micron cornered the NVM and memory market?

    We have heard proclamations, speculation and statements of the demise of DRAM, NAND flash and other volatile and NVM memories for years, if not decades now. Each year there is the usual "this will be the year of x", where "x" can include, among others: Resistive RAM aka ReRAM or RRAM (aka the memristor that HP earlier announced they were going to bring to market, then canceled those plans earlier this year, while Crossbar continues to pursue RRAM); MRAM or Magnetoresistive RAM; Phase Change Memory aka CRAM, PCM or PRAM; and FRAM aka FeRAM or Ferroelectric RAM.

    flash SSD and NVM trends

    Expanding persistent memory and SSD storage markets

    Keep in mind that there are many steps, taking time measured in years or decades, to go from a research and development lab idea to a prototype that can then be produced at production volumes with economic yields. As a reference point, there is still plenty of life in both DRAM as well as NAND flash, the latter having appeared around 1989.

    Industry vs. Customer Adoption and deployment timeline

    Technology industry adoption precedes customer adoption and deployment

    There is a difference between industry adoption and deployment vs. customer adoption and deployment; they are related, yet separated by time as shown in the above figure. What this means is that there can be several years from the time a new technology is initially introduced to when it becomes generally available. Keep in mind that NAND flash has yet to reach its full market potential despite having made significant inroads over the past few years since it was introduced in 1989.

    This begs the question of whether 3D XPoint is a variation of phase change, RRAM, MRAM or something else. Over at Dailytech they lay out a line of thinking (or educated speculation) that 3D XPoint is some derivative or variation of phase change; time will tell what it really is.

    What’s the difference between 3D NAND flash and 3D XPoint?

    3D NAND is a form of NAND flash NVM, while 3D XPoint is a completely new and different type of NVM (e.g. it is not NAND).

    3D NAND is a variation of traditional flash, with the difference being vertical stacking vs. horizontal to improve density, also known as vertical NAND or V-NAND. Vertical stacking is like building up to house more tenants or occupants in a dense environment (scaling up), vs. scaling out by using more space where density is not an issue. Note that magnetic HDDs shifted to perpendicular (e.g. vertical) recording about ten years ago to break through the superparamagnetic barrier, and more recently magnetic tape has also adopted perpendicular recording. Also keep in mind that 3D XPoint and the earlier announced Intel and Micron 3D NAND flash are two separate classes of memory that both just happen to have 3D in their marketing names.

    Where to read, watch and learn more

    Storage I/O trends

    Additional learning experiences along with common questions (and answers), as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    First, keep in mind that this is very early in the 3D XPoint technology evolution life-cycle and both DRAM and NAND flash will not be dead, at least near term. Keep in mind that NAND flash appeared back in 1989 and only over the past several years has finally hit its mainstream adoption stride, with plenty of market upside left. Same with DRAM, which has been around for some time; it too still has plenty of life left for many applications. However other applications that have the need for improved speed over NAND flash, or persistency and density vs. DRAM, will be some of the first to leverage new NVM technologies such as 3D XPoint. Thus at least for the next several years there will be a co-existence between new and old NVM and DRAM among other memory technologies. Bottom line, 3D XPoint is a new class of NVM memory that can be used for persistent main server memory or for persistent fast storage memory. If you have not done so, check out Part I here and Part II here of this three-part series on Intel and Micron 3D XPoint.

    Disclosure: Micron and Intel have been direct and/or indirect clients in the past via third-parties and partners; also, I have bought and use some of their technologies directly and/or indirectly via their partners.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy

    The question to ask yourself is not if flash Solid State Device (SSD) technologies are in your future.

    Instead the questions are when, where, using what, how to configure and related themes. SSD including traditional DRAM and NAND flash-based technologies are like real estate where location matters; however, there are different types of properties to meet various needs. This means leveraging different types of NAND flash SSD technologies in different locations in a complementary and cooperative aka hybrid way.

    Introducing Solid State Hybrid Drives (SSHD)

    Solid State Hybrid Disks (SSHD) are the successors to previous generation Hybrid Hard Disk Drives (HHDD) that I have used for several years (you can read more about them here, and here).

    While it would be nice to simply have SSD for everything, there are also economic budget realities to be dealt with. Keep in mind that a bit of nand flash SSD cache in the right location for a given purpose can go a long way which is the case with SSHDs. This is also why in many environments today there is a mix of SSD, HDD of various makes, types, speeds and capacities (e.g. different tiers) to support diverse application needs (e.g. not everything in the data center is the same).

    However, if you have the need for speed and can afford or benefit from the increased productivity, by all means go SSD!

    Otoh, if you have budget constraints and need more space capacity yet want some performance boost, then SSHDs are an option. The big difference between today's SSHDs, which are available for enterprise class storage systems and servers as well as desktop environments, is that they can accelerate both reads and writes. This is different from their predecessors that I have used for several years now, which had basic read acceleration, however no write optimizations.

    SSHD storage I/O opportunity
    Better Together: Where SSHDs fit in an enterprise tiered storage environment with SSD and HDDs

    As their name implies, they are a hybrid between a nand flash Solid State Device (SSD) and a traditional Hard Disk Drive (HDD), meaning a best of both worlds situation. This means that SSHDs are based on a traditional spinning HDD (various models with different speeds, space capacities and interfaces) along with DRAM (which is found on most modern HDDs), nand flash for read cache, and some extra nonvolatile memory for persistent write cache, combined with a bit of software defined storage performance optimization algorithms.

    Btw, if you were paying attention to that last sentence you would have picked up on something about nonvolatile memory being used for persistent write cache, which should prompt the question: would that help with nand flash write endurance? Yup.
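
    As a rough illustration of why a persistent write cache helps endurance, here is a minimal sketch (a hypothetical model in Python, not actual SSHD firmware behavior): re-writes of hot blocks are absorbed in the nonvolatile buffer, so far fewer program/erase cycles reach the nand flash media.

    # Model: writes land in a small nonvolatile buffer; only evictions and the
    # final flush reach the flash, so re-written hot blocks cost one flash write.
    import random

    def flash_writes(stream, cache_slots=0):
        cache = {}                         # lba -> dirty data held in NV buffer
        writes = 0
        for lba in stream:
            if cache_slots == 0:
                writes += 1                # no buffer: every write hits flash
                continue
            cache[lba] = True              # overwrite in buffer is absorbed
            if len(cache) > cache_slots:
                cache.pop(next(iter(cache)))   # evict oldest -> one flash write
                writes += 1
        return writes + len(cache)         # flush remaining dirty blocks

    random.seed(1)
    hot_blocks = [random.randint(0, 99) for _ in range(10_000)]
    print("no write buffer :", flash_writes(hot_blocks))                   # 10,000
    print("128 slot buffer :", flash_writes(hot_blocks, cache_slots=128))  # about 100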

    Where and when to use SSHD?

    In the StorageIO Industry Trends Perspective thought leadership white paper I recently released, compliments of Seagate (that's a disclosure btw ;), enterprise class Seagate Enterprise Turbo Solid State Hybrid Drives (SSHD) were looked at and test driven in the StorageIO Labs with various application workloads. These activities included being in a virtual environment for common applications including database and email messaging using industry standard benchmark workloads (e.g. TPC-B and TPC-E for database, JetStress for Exchange).

    Storage I/O sshd white paper

    Conventional storage system focused workloads using iometer, iorate and vdbench were also run in the StorageIO Labs to set up baseline reads, writes, random, sequential, small and large I/O sizes with IOPS, bandwidth and response time latency results. Some of those results can be found here (Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?) with other ongoing workloads continuing in different configurations. The various test drive proof points were done in the StorageIO Labs comparing SSHD, SSD and different HDDs.

    Data Protection (Archiving, Backup, BC, DR)

    Staging cache buffer area for snapshots, replication or current copies before streaming to other storage tier using fast read/write capabilities. Meta data, index and catalogs benefit from fast reads and writes for faster protection.

    Big Data DSS
    Data Warehouse

    Support sequential read-ahead operations and “hot-band” data caching in a cost-effective way using SSHD vs. slower similar capacity size HDDs for Data warehouse, DSS and other analytic environments.

    Email, Text and Voice Messaging

    Microsoft Exchange and other email journals, mailbox or object repositories can leverage faster read and write I/Os with more space capacity.

    OLTP, Database, Key Value Stores (SQL and NoSQL)

    Eliminate the need to short stroke HDDs to gain performance, offer more space capacity and IOPS performance per device for tables, logs, journals, import/export and scratch, temporary ephemeral storage. Leverage random and sequential read acceleration to complement server-side SSD-based read and write-thru caching. Utilize fast magnetic media for persistent data, reducing wear and tear on more costly flash SSD storage devices.

    Server Virtualization

    Fast disk storage for data stores and virtual disks supporting VMware vSphere/ESXi, Microsoft Hyper-V, KVM, Xen and others, holding virtual machines such as VMware VMDKs along with Hyper-V and other hypervisor virtual disks. Complement virtual server read cache and I/O optimization using SSD as a cache with writes going to fast SSHD. For example, VMware vSphere 5.5 Virtual SAN host disk groups use SSD as a read cache and can use SSHD as the magnetic disk for storing data, boosting performance without breaking the budget or adding complexity.

    Speaking of virtual, as mentioned the various proof points were run using Windows systems that were VMware guests, with the SSHD and other devices being Raw Device Mapped (RDM) SAS and SATA attached; read how to do that here.

    Hint: If you know about the VMware trick for making a HDD look like a SSD to vSphere/ESXi (refer to here and here) think outside the virtual box for a moment on some things you could do with SSHD in a VSAN environment among other things, for now, just sayin ;).

    Virtual Desktop Infrastructure (VDI)

    SSHD can be used as high performance magnetic disk for storing linked clone images, applications and data. Leverage fast reads to support read-ahead or pre-fetch to complement SSD based read cache solutions. Utilize fast writes to quickly store data, enabling SSD-based read or write-thru cache solutions to be more effective. Reduce the impact of boot, shutdown, virus scan or maintenance storms while providing more space capacity.

    Table 1 Example application and workload scenarios benefiting from SSHDs

    Test drive application proof points

    Various workloads were run using the Seagate Enterprise Turbo SSHD in the StorageIO lab environment across different real world like application workload scenarios. These include general storage I/O performance characteristics profiling (e.g. reads, writes, random, sequential and various I/O sizes) to understand how these devices compare to other HDD, HHDD and SSD storage devices in terms of IOPS, bandwidth and response time (latency). In addition to basic storage I/O profiling, the Enterprise Turbo SSHD was also used with various SQL database workloads including Transaction Processing Council (TPC), along with VMware server virtualization among other use case scenarios.
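
    For readers new to these metrics, the basic relationships between IOPS, bandwidth and response time can be sketched with simple arithmetic (the numbers below are illustrative assumptions, not results from these lab runs):

    # Bandwidth is roughly IOPS x I/O size; at queue depth 1, average response
    # time is roughly the inverse of IOPS.
    io_size_kb = 8                  # assumed I/O size
    iops = 400                      # assumed device IOPS at that size
    bandwidth_mb_s = iops * io_size_kb / 1024
    latency_ms = 1000 / iops        # queue depth 1 approximation
    print(f"{iops} IOPS @ {io_size_kb} KB ~= {bandwidth_mb_s:.2f} MB/s, "
          f"~{latency_ms:.1f} ms average response time")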

    Note that in the following workload proof points a single drive was used, meaning that using more drives in a server or storage system should yield better performance. This also means scaling would be bound by the constraints of a given configuration, server or storage system. These were also conducted using 6Gbps SAS with PCIe Gen 2 based servers, and ongoing testing is confirming even better results with 12Gbps SAS and faster servers with PCIe Gen 3.

    SSHD large file storage i/o
    Copy (read and write) 80GB and 220GB file copies (time to copy entire file)

    SSHD storage I/O TPCB Database performance
    SQLserver TPC-B batch database updates

    Test configuration: 600GB 2.5” Enterprise Turbo SSHD (ST600MX) 6 Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6 Gbps SAS, 500GB 3.5” 7.2K RPM HDD 3 Gbps SATA, 1TB 3.5” 7.2K RPM HDD 3 Gbps SATA. Workload generator and virtual clients ran on Windows 7 Ultimate. Microsoft SQL Server 2012 database was on Windows 7 Ultimate SP1 (64 bit), 14 GB DRAM, dual CPU (Intel X3490 2.93 GHz), with LSI 9211 6Gbps SAS adapters and TPC-B (www.tpc.org) workloads. VM resided on a separate data store from devices being tested. All devices being tested with SQL MDF were Raw Device Mapped (RDM) independent persistent, with the database log file (LDF) on a separate SSD device also persistent (no delayed writes). Tests were performed in StorageIO Lab facilities by StorageIO personnel.

    SSHD storage I/O TPCE Database performance
    SQLserver TPC-E transactional workload

    Test configuration: 600GB 2.5” Enterprise Turbo SSHD (ST600MX) 6 Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6 Gbps SAS, 300GB 2.5” Savio 10K RPM HDD 6 Gbps SAS, 1TB 3.5” 7.2K RPM HDD 6 Gbps SATA. Workload generator and virtual clients ran on Windows 7 Ultimate. Microsoft SQL Server 2012 database was on Windows 7 Ultimate SP1 (64 bit), 14 GB DRAM, dual CPU (E8400 2.99GHz), with LSI 9211 6Gbps SAS adapters and TPC-E (www.tpc.org) workloads. VM resided on a separate SSD based data store from devices being tested (e.g., where MDF resided). All devices being tested were Raw Device Mapped (RDM) independent persistent, with the database log file on a separate SSD device also persistent (no delayed writes). Tests were performed in StorageIO Lab facilities by StorageIO personnel.

    SSHD storage I/O Exchange performance
    Microsoft Exchange workload

    Test configuration: 2.5” Seagate 600 Pro 120GB (ST120FP0021) SSD 6 Gbps SATA, 600GB 2.5” Enterprise Turbo SSHD (ST600MX) 6 Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6 Gbps SAS, 2.5” Savio 146GB HDD 6 Gbps SAS, 3.5” Barracuda 500GB 7.2K RPM HDD 3 Gbps SATA. Email server hosted as a guest on VMware vSphere/ESXi V5.5, Microsoft Small Business Server (SBS) 2011 Service Pack 1 64 bit, 8GB DRAM, one CPU (Intel X3490 2.93 GHz), LSI 9211 6 Gbps SAS adapter, JetStress 2010 (no other active workload during test intervals). All devices being tested were Raw Device Mapped (RDM) where the EDB resided. VM was on a separate SSD based data store from the devices being tested. Log file IOPS were handled via a separate SSD device.

    Read more about the above proof points, along with viewing data points and configuration information, in the associated white paper found here (no registration required).

    What this all means

    Similar to flash-based SSD technologies, the question is not if, rather when, where, why and how to deploy hybrid solutions such as SSHDs. If your applications and data infrastructure environment have the need for storage I/O speed without giving up space capacity or breaking your budget, SSD enabled devices like the Seagate Enterprise Turbo 600GB SSHD are in your future. You can learn more about enterprise class SSHDs such as those from Seagate by visiting this link here.

    Watch for extra workload proof points being performed including with 12Gbps SAS and faster servers using PCIe Gen 3.

    Ok, nuff said.

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Nand flash SSD NVM SCM server storage I/O memory conversations

    Updated 8/31/19
    Server Storage I/O storageioblog SDDC SDDI Data Infrastructure trends

    The SSD Place NVM, SCM, PMEM, Flash, Optane, 3D XPoint, MRAM, NVMe Server, Storage, I/O Topics

    Now and then somebody asks me if I’m familiar with flash or nand flash Solid State Devices (SSD) along with other non-volatile memory (NVM) technologies and trends including NVM Express (NVMe).

    Having been involved with various types of SSD technology, products and solutions since the late 80s, initially as a customer in IT (including as a launch customer for DEC's ESE20 SSDs), then later as a vendor selling SSD solutions, as well as an analyst and advisory consultant covering the technologies, I tell the person asking, well, yes, of course.

    That gave me the idea, as well as a way to help me keep track of some of the content and make it easy to find, to put it here in this post (which will be updated now and then).

    Thus this is a collection of articles, tips, posts, presentations, blog posts and other content on SSD including nand flash drives, PCIe cards, DIMMs, NVM Express (NVMe), hybrid and other storage solutions along with related themes.

    Also if you can’t find it here, you can always do a Google search like this or this to find some more material (some of which is on this page).

    HDD, SSHD, HHDD and HDD

    Flash SSD Articles, posts and presentations

    The following are some of my tips, articles, blog posts, presentations and other content on SSD. Keep in mind that the question should not be if SSD are in your future, rather when, where, with what, from whom and how much. Also keep in mind that a bit of SSD as storage or cache in the right place can go a long way, while a lot of SSD will give you a benefit however also cost a lot of cash.

    • How to Prepare for the NVMe Server Storage I/O Wave (Via Micron.com)
    • Why NVMe Should Be in Your Data Center (Via Micron.com)
    • NVMe U2 (8639) vs. M2 interfaces (Via Gamersnexus)
    • Enmotus FuzeDrive MicroTiering (StorageIO Lab Report)
    • EMC DSSD D5 Rack Scale Direct Attached Shared SSD All Flash Array Part I (Via StorageIOBlog)
    • Part II – EMC DSSD D5 Direct Attached Shared AFA (Via StorageIOBlog)
    • NAND, DRAM, SAS/SCSI & SATA/AHCI: Not Dead, Yet! (Via EnterpriseStorageForum)
    • Non Volatile Memory (NVM), NVMe, Flash Memory Summit and SSD updates (Via StorageIOblog)
    • Microsoft and Intel showcase Storage Spaces Direct with NVM Express at IDF ’15 (Via TechNet)
    • MNVM Express solutions (Via SuperMicro)
    • Gaining Server Storage I/O Insight into Microsoft Windows Server 2016 (Via StorageIOblog)
    • PMC-Sierra Scales Storage with PCIe, NVMe (Via EEtimes)
    • RoCE updates among other items (Via InfiniBand Trade Association (IBTA) December Newsletter)
    • NVMe: The Golden Ticket for Faster Flash Storage? (Via EnterpriseStorageForum)
    • What should I consider when using SSD cloud? (Via SearchCloudStorage)
    • MSP CMG, Sept. 2014 Presentation (Flash back to reality – Myths and Realities – Flash and SSD Industry trends perspectives plus benchmarking tips)– PDF
    • Selecting Storage: Start With Requirements (Via NetworkComputing)
    • PMC Announces Flashtec NVMe SSD NVMe2106, NVMe2032 Controllers With LDPC (Via TomsITpro)
    • Exclusive: If Intel and Micron’s “Xpoint” is 3D Phase Change Memory, Boy Did They Patent It (Via Dailytech)
    • Intel & Micron 3D XPoint memory — is it just CBRAM hyped up? Curation of various posts (Via Computerworld)
    • How many IOPS can a HDD, HHDD or SSD do (Part I)?
    • How many IOPS can a HDD, HHDD or SSD do with VMware? (Part II)
    • I/O Performance Issues and Impacts on Time-Sensitive Applications (Via CMG)
    • Via EnterpriseStorageForum: 5 Hot Storage Technologies to Watch
    • Via EnterpriseStorageForum: 10-Year Review of Data Storage
    • Via CustomPCreview: Samsung SM961 PCIe NVMe SSD Shows Up for Pre-Order
    • StorageIO Industry Trends Perspective White Paper: Seagate 1200 Enterprise SSD (12Gbps SAS) with proof points (e.g. Lab test results)
    • Companion: Seagate 1200 12Gbs Enterprise SAS SSD StorgeIO lab review (blog post part I and Part II)
    • NewEggBusiness: Seagate 1200 12Gbs Enterprise SAS SSD StorgeIO lab review Are NVMe m.2 drives ready for the limelight?
    • Google (Research White Paper): Disks for Data Centers (vs. just SSD)
    • CMU (PDF White Paper): A Large-Scale Study of Flash Memory Failures in the Field
    • Via ZDnet: Google doubles Cloud Compute local SSD capacity: Now it’s 3TB per VM
    • EMC DSSD D5 Rack Scale Direct Attached Shared SSD All Flash Array Part I (Via StorageIOBlog)
    • Part II – EMC DSSD D5 Direct Attached Shared AFA (Via StorageIOBlog)
    • NAND, DRAM, SAS/SCSI & SATA/AHCI: Not Dead, Yet! (Via EnterpriseStorageForum)
    • Here’s why Western Digital is buying SanDisk (Via ComputerWorld)
    • HP, SanDisk partner to bring storage-class memory to market (Via ComputerWorld)
    • Non Volatile Memory (NVM), NVMe, Flash Memory Summit and SSD updates (Via StorageIOblog)
    • Microsoft and Intel showcase Storage Spaces Direct with NVM Express at IDF ’15 (Via TechNet)
    • PMC-Sierra Scales Storage with PCIe, NVMe (Via EEtimes)
    • Seagate Grows Its Nytro Enterprise Flash Storage Line (Via InfoStor)
    • New SAS Solid State Drive First Product From Seagate Micron Alliance (Via Seagate)
    • Wow, Samsung’s New 16 Terabyte SSD Is the World’s Largest Hard Drive (Via Gizmodo)
    • Samsung ups the SSD ante with faster, higher capacity drives (Via ITworld)
    • PMC Announces Flashtec NVMe SSD NVMe2106, NVMe2032 Controllers With LDPC (Via TomsITpro)
    • New SATA SSD powers elastic cloud agility for CSPs (Via Cbronline)
    • Toshiba Solid-State Drive Family Features PCIe Technology (Via Eweek)
    • SanDisk aims CloudSpeed Ultra SSD at cloud providers (Via ITwire)
    • Everspin & Aupera reveal all-MRAM Storage Module in M.2 Form Factor (Via BusinessWire)
    • Intel, Micron Launch “Bulk-Switching” ReRAM (Via EEtimes)
    • Exclusive: If Intel and Micron’s “Xpoint” is 3D Phase Change Memory, Boy Did They Patent It (Via Dailytech)
    • Intel & Micron 3D XPoint memory — is it just CBRAM hyped up? Curation of various posts (Via Computerworld)
    • NVMe: The Golden Ticket for Faster Flash Storage? (Via EnterpriseStorageForum)

    server I/O hierarchy

    • What should I consider when using SSD cloud? (Via SearchCloudStorage)
    • MSP CMG, September 2014 Presentation (Flash back to reality – Myths and Realities Flash and SSD Industry trends perspectives plus benchmarking tips) – PDF
    • Selecting Storage: Start With Requirements (Via NetworkComputing)
    • Spot The Newest & Best Server Trends (Via Processor)
    • Market ripe for embedded flash storage as prices drop (Via Powermore (Dell))
    • 2015 Tech Preview: SSD and SMBs (Via ChannelProNetworks )
    • How to test your HDD, SSD or all flash array (AFA) storage fundamentals (Via StorageIOBlog)
    • Processor: Comments on What Abandoned Data Is Costing Your Company
    • Processor: Comments on Match Application Needs & Infrastructure Capabilities
    • Processor: Comments on Explore The Argument For Flash-Based Storage
    • Processor: Comments on Understand The True Cost Of Acquiring More Storage
    • Processor: Comments on What Resilient & Highly Available Mean
    • Processor: Explore The Argument For Flash-Based Storage
    • SearchCloudStorage What should I consider when using SSD cloud?
    • StorageSearch.com: (not to be confused with TechTarget, good site with lots of SSD related content)
    • StorageSearch.com: What kind of SSD world… 2015?
    • StorageSearch.com: Various links about SSD
    • FlashStorage.com: (Various flash links curated by Tegile and analyst firm Actual Tech Media [Scott D. Lowe])
    • StorageSearch.com: How fast can your SSD run backwards?
    • Seagate has shipped over 10 Million storage HHDD’s (SSHDs), is that a lot?
    • Are large storage arrays dead at the hands of SSD?
    • Can we get a side of context with them IOPS and other storage metrics?
    • Cisco buys Whiptail continuing the SSD storage I/O flash cash cache dash
    • EMC VFCache respinning SSD and intelligent caching (Part I)
    • Flash Data Storage: Myth vs. Reality (Via InfoStor)
    • Have SSDs been unsuccessful with storage arrays (with poll)?
    • How many IOPS can a HDD, HHDD or SSD do (Part I)?
    • How many IOPS can a HDD, HHDD or SSD do with VMware? (Part II)
    • I/O Performance Issues and Impacts on Time-Sensitive Applications (Via CMG)

    server storage i/o memory hierarchy

    • Spiceworks SSD and related conversation here and here, profiling IOPs here, and SSD endurance here.
    • SSD is in your future, How, when, with what and where you will be using it (PDF Presentation)
    • SSD for Virtual (and Physical) Environments: Part I Spinning up to speed on SSD (Via TheVirtualizationPractice), Part II, The call to duty, SSD endurance, Part III What SSD is best for you?, and Part IV what’s best for your needs.
    • IT and storage economics 101, supply and demand
    • SSD, flash and DRAM, DejaVu or something new?
    • The Many Faces of Solid State Devices/Disks (SSD)
    • The Nand Flash Cache SSD Cash Dance (Via InfoStor)
    • The Right Storage Option Is Important for Big Data Success (Via FedTech)

    server storage i/o nand flash ssd options

    • Viking SATADIMM: Nand flash SATA SSD in DDR3 DIMM slot?
    • WD buys nand flash SSD storage I/O cache vendor Virident (Via VMware Communities)
    • What is the best kind of IO? The one you do not have to do
    • When and Where to Use NAND Flash SSD for Virtual Servers (Via TheVirtualizationPractice)
    • Why SSD based arrays and storage appliances can be a good idea (Part I)
    • Why SSD based arrays and storage appliances can be a good idea (Part II)
    • Q&A on Access data more efficiently with automated storage tiering and flash (Via SearchSolidStateStorage)
    • InfoStor: Flash Data Storage: Myth vs. Reality (Via InfoStor)
    • Enterprise Storage Forum: Not Just a Flash in the Pan (Via EnterpriseStorageForum)

    SSD Storage I/O and related technologies comments in the news

    The following are some of my commentary and industry trend perspectives that appear in various global venues.

    Storage I/O ssd news

    • Comments on using Flash Drives To Boost Performance (Via Processor)
    • Comments on selecting the Right Type, Amount & Location of Flash Storage (Via Toms It Pro)
    • Comments Google vs. AWS SSD: Which is the better deal? (Via SearchAWS)
    • Tech News World: SANdisk SSD comments and perspectives.
    • Tech News World: Samsung Jumbo SSD drives perspectives
    • Comments on Why Degaussing Isn’t Always Effective (Via StateTech Magazine)
    • Processor: SSD (FLASH and RAM)
    • SearchStorage: FLASH and SSD Storage
    • Internet News: Steve Wozniak joining SSD startup
    • Internet News: SANdisk sale to Toshiba
    • SearchSMBStorage: Comments on SanDisk and wireless storage product
    • StorageAcceleration: Comments on When VDI Hits a Storage Roadblock and SSD
    • Statetechmagazine: Boosting performance with SSD
    • Edtechmagazine: Driving toward SSDs
    • SearchStorage: Seagate SLC and MLC flash SSD
    • SearchWindowServer: Making the move to SSD in a SAN/NAS
    • SearchSolidStateStorage: Comments SSD marketplace
    • InfoStor: Comments on SSD approaches and opportunities
    • SearchSMBStorage: Solid State Devices (SSD) benefits
    • SearchSolidState: Comments on Fusion-IO flash SSD and API’s
    • SeaarchSolidStateStorage: Comments on SSD industry activity and OCZ bankruptcy
    • Processor: Comments on Plan Your Storage Future including SSD
    • Processor: Comments on Incorporate SSDs Into Your Storage Plan
    • Digistor: Comments on SSD and flash storage
    • ITbusinessEdge: Comments on flash SSD and hybrid storage environments
    • SearchStorage: Perspectives on Cisco buying SSD storage vendor Whiptail
    • StateTechMagazine: Comments on all flash SSD storage arrays
    • Processor: Comments on choosing SSDs for your data center needs
    • Searchsolidstatestorage: Comments on how to add solid state devices (SSD) to your storage system
    • Networkcomputing: Comments on SSD/Hard Disk Hybrids Bridge Storage Divide
    • Internet Evolution: Comments on IBM buying flash SSD vendor TMS
    • ITKE: Comments on IBM buying flash SSD vendor TMS
    • Searchsolidstatestorage: SSD, Green IT and economic benefits
    • IT World Canada: Cloud computing, dot be scared, look before you leap
    • SearchStorage: SSD in storage systems
    • SearchStorage: SAS SSD
    • SearchSolidStateStorage: Comments on Access data more efficiently with automated storage tiering and flash
    • InfoStor: Comments on EMC’s Light to Speed: Flash, VNX, and Software-Defined
    • EnterpriseStorageForum: Cloud Storage Mergers and Acquisitions: What’s Going On?

    Check out the Server StorageIO NVM Express (NVMe) focus page aka www.thenvmeplace.com for additional related content. Interested in data protection? Check out the data protection diaries series of posts here, or cloud and object storage here, and server storage I/O performance benchmarking here. Also check out the StorageIO events and activities page here, as well as tips and articles here, news commentary here, along with our newsletter here.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    WD buys nand flash SSD storage I/O cache vendor Virident

    Storage I/O trends

    WD buys nand flash SSD storage I/O cache vendor Virident

    Congratulations to Virident for being bought today for $645 Million USD by Western Digital (WD). Virident, a nand flash PCIe card startup vendor, has been around for several years and in the last year or two has gained more industry awareness as a competitor to FusionIO among others.

    There is a nand flash solid state device (SSD) cash dash occurring, not to mention fast cache dances, in the IT and data infrastructure (e.g. storage and I/O) sector specifically.

    Why the nand flash SSD cash dash and cache dance?

    Here is a piece that I did today over at InfoStor on a related theme that sets the basis of why the nand flash-based SSD market is popular for storage and as a cache. Hence there is a flash cash dash, and for some a dance, for increased storage I/O performance.

    Like the hard disk drive (HDD) industry before it, which despite what some pundits and prophets have declared (for years if not decades) as being dead is still alive, there have been many startups, shutdowns, mergers and acquisitions along with some transformations. Granted, solid-state memory is part of the present and future, being deployed in new and different ways.

    The same thing has occurred in the nand flash-based SSD sector with LSI acquiring SANDforce, SANdisk picking up Pliant and Flashsoft among others. Then there is Western Digital (WD) that recently has danced with their cash as they dash to buy up all things flash including Stec (drives & PCIe cards), Velobit (cache software), Virident (PCIe cards), along with Arkeia (backup) and an investment in Skyera.

    Storage I/O trends

    What about industry trends and market dynamics?

    Meanwhile there have been some other changes, with former industry darling and high-flying post IPO stock FusionIO hitting market reality and a sudden CEO departure a few months ago. However, after a few months of their stock being pummeled, today it bounced back, perhaps as people now speculate who will buy FusionIO with WD picking up Virident. Note that one of Virident's OEM customers is EMC for their PCIe flash card XtremSF, as are Micron and LSI.

    Meanwhile Stec, also now owned by WD, was also EMC's original flash SSD drive supplier, or what they refer to as EFDs (Enterprise Flash Drives), not to mention having also supplied HDDs to them (also keep in mind WD bought HGST a year or so back).

    There are some early signs, such as FusionIO's stock price jumping today, which was probably oversold. Perhaps people are now speculating that Seagate, which had been an investor in Virident (bought by WD for $645 million today), might be in the market for somebody else? Alternatively, perhaps WD didn't see the value in a FusionIO, or wasn't willing to make a big flash cache cash grab dash of that size? Also note Seagate won a $630 million (and the next appeal was recently upheld) infringement lawsuit vs. WD (here and here).

    Does that mean FusionIO could become Seagate’s target or that of NetApp, Oracle or somebody else with the cash and willingness to dash, grab a chunk of the nand flash, and cache market?

    Likewise, there are the software I/O and caching tool vendors, some of which are tied to VMware and virtual servers vs. others that are more flexible, that are gaining popularity. What about the systems or solution appliance play, could that be in the hunt for a Seagate?

    Anything is possible however IMHO that would be a risky move, one that many at Seagate probably still remember from their experiment with Xiotech, not to mention stepping on the toes of their major OEM customer partners.

    Storage I/O trends

    Thus I would expect Seagate, if they do anything, to go more along the lines of a component type supplier, meaning a FusionIO (yes they have Nexgen, however that could be easily dealt with), OCZ, perhaps even an LSI or Micron, however some of those start to get rather expensive for a quick flash cache grab for some stock and cash.

    Also, keep in mind that FusionIO, in addition to having their PCIe flash cards, also has the ioTurbine software caching tool. If you are not familiar with it, IBM recently made an announcement of their Flash Cache Storage Accelerator (FCSA) that has an affiliation to guess who?

    Closing comments (for now)

    Some of the systems or solutions players will survive, perhaps even being acquired as XtremIO was by EMC, or file for IPO like Violin, or express their wish to IPO and/or be bought, such as all the others (e.g. Skyera, Whiptail, Pure, Solidfire, Cloudbyte, Nimbus, Nimble, Nutanix, Tegile, Kaminario, Greenbyte, and Simplivity among others).

    Here’s the thing: those who really do know what is going to happen are not and probably cannot say, and those who are talking about what will happen are, like the rest of us, just speculating, providing perspectives or stirring the pot among other things.

    So who will be next in the flash cache ssd cash dash dance?

    Ok, nuff said (for now).

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Inaugural episode of the SSD Show podcast at Myce.com

    Storage I/O trends

    Inaugural episode of the SSD Show podcast at Myce.com

    The other day I was invited by Jeremy Reynolds and J.W. Aldershoff to be a guest on the Inaugural episode of their new SSD Show podcast (click here to learn more or listen in).


    Many different facets or faces of nand flash SSD and SSHD or HHDD

    With this first episode we discuss the latest developments in and around the solid-state device (SSD) and related storage industry, from consumer to enterprise, hardware and software, along with hands-on experience insight on products, trends, technologies and technique themes. In this first podcast we discuss Solid State Hybrid Disks (SSHDs) aka Hybrid Hard Disk Drives (HHDD) with flash (read about some of my SSD, HHDD/SSHD hands-on personal experiences here), the state of NAND memory (also here about nand DIMMs), the market and SSD pricing.

    I had a lot of fun doing this first episode with Jeremy and hope to be invited back to do some more, following up on themes we discussed along with new ones in future episodes. One question remains after the podcast: will I convince Jeremy to get a Twitter account? Stay tuned!

    Check out the new SSD Show podcast here.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

    Part II: IBM Server Side Storage I/O SSD Flash Cache Software

    Storage I/O trends

    Part II IBM Server Flash Cache Storage I/O accelerator for SSD

    This is the second in a two-part post series on IBM’s Flash Cache Storage Accelerator (FCSA) for Solid State Device (SSD) storage announced today. You can view part I of the IBM FCSA announcement synopsis here.

    Some FCSA ssd cache questions and perspectives

    What is FCSA?
    FCSA is a server-side storage I/O or IOP caching software tool that makes use of local (server-side) nand flash SSD (PCIe cards or drives). As a cache tool (view the IBM flash site here) FCSA provides persistent read caching on IBM servers (xSeries, Flex and Blade x86 based systems) with write-through cache (e.g. data cached for later reads) while write data is written directly to block attached storage including SANs. Back-end storage can be iSCSI, SAS, FC or FCoE based block systems from IBM or others, including all SSD, hybrid SSD or traditional HDD based solutions from IBM and others.

    How is this different from just using a dedicated PCIe nand flash SSD card?
    FCSA complements those by using them as persistent storage to cache storage I/O reads to boost performance. By using the PCIe nand flash card or SSD drives, FCSA and other storage I/O cache optimization tools free up valuable server-side DRAM from having to be used as a read cache on the servers. On the other hand, caching tools such as FCSA also keep local cached reads closer to the applications on the servers (e.g. locality of reference), reducing the impact on back-end shared block storage systems.

    What is FCSA for?
    With storage I/O or IOPS and application performance in general, location matters due to locality of reference, hence the need for using different approaches for various environments. IBM FCSA is a storage I/O caching software technology that reduces the impact of applications having to do random read operations. In addition to caching reads, FCSA also has a write-through cache, which means that while data is written to back-end block storage including iSCSI, SAS, FC or FCoE based storage (IBM or other vendors), a copy of the data is cached for later reads. Thus while the best storage I/O is the one that does not have to be done (e.g. can be resolved from cache), the second best would be writes that go to a storage system without competing with read requests (which are handled via cache).
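
    As a conceptual illustration of the read cache with write-through behavior described above (a minimal Python sketch, not IBM FCSA code): writes always go to the back-end block storage, a copy is kept in the local SSD cache, and later reads are served from that cache when possible.

    # Minimal write-through read cache model; a dict stands in for the SAN LUN
    # and another dict stands in for the server-side nand flash SSD cache.
    class WriteThroughCache:
        def __init__(self, backend):
            self.backend = backend
            self.cache = {}

        def write(self, lba, data):
            self.backend[lba] = data       # write-through: back-end always updated
            self.cache[lba] = data         # keep a copy for later reads

        def read(self, lba):
            if lba in self.cache:          # cache hit: no I/O to the back-end
                return self.cache[lba]
            data = self.backend[lba]       # cache miss: fetch then populate
            self.cache[lba] = data
            return data

    lun = {}
    c = WriteThroughCache(lun)
    c.write(42, b"hello")
    print(c.read(42), lun[42])             # both the cache and the LUN hold the data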

    Storage I/O trends

    Who else is doing this?
    This is similar to what EMC initially announced and released in February 2012 with VFcache, now renamed XtremSW, along with other caching and IO optimization software from others (e.g. SANdisk, Proximal and Pernix among others).

    Does this replace IBM EasyTier?
    Simple answer is no, one is for tiering (e.g. EasyTier), the other is for IO caching and optimization (e.g. FCSA).

    Does this replace or compete with other IBM SSD technologies?
    With anything, it is possible to find a way to make or view it as competitive. However in general FCSA complements other IBM storage I/O optimization and management software tools such as EasyTier as well as leverage and coexist with their various SSD products (from PCIe cards to drives to drive shelves to all SSD and hybrid SSD solutions).

    How does FCSA work?
    The FCSA software works either in a physical machine (PM) bare metal mode with Microsoft Windows operating systems (OS) such as Server 2008 and 2012 among others, or in a VMware virtual machine (VM) environment; there is also *nix support for RedHat Linux. In a VMware environment, High Availability (HA), DRS and VMotion services and capabilities are supported. Hopefully it will be sooner vs. later that we hear IBM do a follow-up announcement (pure speculation and wishful thinking) on more hypervisor (e.g. Hyper-V, Xen, KVM) support along with Centos, Ubuntu or Power based systems including IBM pSeries. Read more about IBM Pure and Flex systems here.

    What about server CPU and DRAM overhead?
    As should be expected, a minimal amount of server DRAM (e.g. main memory) and CPU processing cycles are used to support the FCSA software and its drivers. Note the reason I say "as should be expected" is that you cannot have software running on a server doing any type of work without it needing some amount of DRAM and processing cycles. Granted some vendors will try to spin and say that there is no server-side DRAM or CPU consumed, which would only be true if they are completely external to the server (VM or PM). The important thing is to understand how much of an impact in terms of CPU along with DRAM is consumed, along with the corresponding effectiveness benefit that is derived.

    Storage I/O trends

    Does FCSA work with NAS (NFS or CIFS) back-end storage?
    No, this is a server-side block only cache solution. However having said that, if your applications or server are presenting shared storage to others (e.g. out the front-end) as NAS (NFS, CIFS, HDFS) using block storage (back-end), then FCSA can cache the storage I/O going to those back-end block devices.

    Is this an appliance?
    Short and simple answer is no, however I would not be surprised to hear some creative software defined marketer try to spin it as a flash cache software appliance. What this means is that FCSA is simply IO and storage optimization software for caching to boost read performance for VM and PM servers.

    What is this hardware or storage agnostic stuff mean?
    Simple, it means that FCSA can work with various nand flash PCIe cards or flash SSD drives installed in servers, as well as with various back-end block storage including SAN from IBM or others. This includes being able to use block storage using iSCSI, SAS, FC or FCoE attached storage.

    What is the difference between Easytier and FCSA?
    Simple, FCSA provides read acceleration via caching, which in turn should offload some reads from affecting storage systems so that they can focus on handling writes or read-ahead operations. Easytier, on the other hand, is, as its name implies, for tiering or movement of data in a more deterministic fashion.

    How do you get FCSA?
    It is software that you buy from IBM that runs on an IBM x86 based server. It is licensed on a per server basis including one-year service and support. IBM has also indicated that they have volume or multiple servers based licensing options.

    Storage I/O trends

    Does this mean IBM is competing with other software based IO optimization and cache tool vendors?
    IBM is focusing on selling and adding value to their server solutions. Thus while you can buy the software from IBM for their servers (e.g. no bundling required), you cannot buy the software to run on your AMD/Seamicro, Cisco (including EMC/VCE and NetApp), Dell, Fujitsu, HDS, HP, Lenovo, Oracle, SuperMicro among other vendors' servers.

    Will this work on non-IBM servers?
    IBM is only supporting FCSA on IBM x86 based servers; however, you can buy the software without having to buy a solution bundle (e.g. servers or storage).

    What is this Cooperative Caching stuff?
    Cooperative caching takes the next step from simple read cache with write-through to also support cache coherency in a shared environment, as well as leverage tighter application or guest operating system and storage system integration. For example, applications can work with storage systems to make intelligent, predictive, informed decisions on what to pre-fetch or read ahead and cache, as well as enable cache warming on restart. Another example is where, in a shared storage environment, if one server makes a change to a shared LUN or volume, the local server-side caches on other servers are also updated to prevent stale or inconsistent reads from occurring.
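
    The coherency part of that can be sketched as follows (a hypothetical toy model, not IBM's implementation): when one server writes a shared LUN block, its peers invalidate their cached copy so they do not return stale reads.

    # Toy model of cooperative cache invalidation across servers sharing a LUN.
    class Node:
        def __init__(self, name, peers):
            self.name, self.peers, self.cache = name, peers, {}
            peers.append(self)

        def write(self, lba, data, lun):
            lun[lba] = data
            self.cache[lba] = data
            for peer in self.peers:            # notify peers of the change
                if peer is not self:
                    peer.cache.pop(lba, None)  # invalidate their stale copy

        def read(self, lba, lun):
            if lba not in self.cache:
                self.cache[lba] = lun[lba]     # miss: read from shared storage
            return self.cache[lba]

    lun, peers = {}, []
    a, b = Node("a", peers), Node("b", peers)
    a.write(7, "v1", lun)
    b.read(7, lun)                             # b now caches v1
    a.write(7, "v2", lun)                      # b's cached v1 gets invalidated
    print(b.read(7, lun))                      # prints v2, not the stale v1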

    Can FCSA use multiple nand flash SSD devices on the same server?
    Yes, IBM FCSA supports the use of multiple server-side PCIe and/or drive based SSD devices.

    How is cache coherency maintained including during a reboot?
    While data stored in the nand flash SSD device is persistent, it's up to the server and applications working with the storage systems to decide if there is coherent or stale data that needs to be refreshed. Likewise, since FCSA is server-side and back-end storage system or SAN agnostic, without cooperative caching it will not know if the underlying data for a storage volume changed without being notified by another server that modified it. Thus if using shared back-end including SAN storage, do your due diligence to make sure multi-host access to the same LUNs or volumes is being coordinated with some server-side software to support cache coherency, something that would apply to all vendors.

    Storage I/O trends

    What about cache warming or reloading of the read cache?
    Some vendors have tightly integrated caching software and storage systems, something IBM refers to as cooperative caching, which can have the ability to re-warm the cache. With solutions that support cache re-warming, the cache software and storage systems work together to maintain cache coherency while pre-loading data from the underlying storage system based on hot bands or other profiles and experience. As of this announcement, FCSA does not support cache warming on its own.
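
    For illustration only, here is a hypothetical Python sketch of what cache re-warming amounts to: persist a profile of the hottest blocks, then pre-load them from the back-end after a restart. The file format and function names are made up for this example and do not represent FCSA or any vendor's implementation.

```python
# Hypothetical cache re-warming sketch: save a hot-block profile before (or
# during) operation, then pre-fetch those blocks into a fresh cache on restart.
import json

def save_hot_profile(access_counts, path="hot_blocks.json", top_n=1024):
    """Persist the most frequently read block addresses as a warm-up profile."""
    hottest = sorted(access_counts, key=access_counts.get, reverse=True)[:top_n]
    with open(path, "w") as f:
        json.dump(hottest, f)

def rewarm_cache(backend_read, path="hot_blocks.json"):
    """After restart, pre-fetch previously hot blocks into a fresh cache dict."""
    cache = {}
    with open(path) as f:
        for block in json.load(f):
            cache[block] = backend_read(block)   # read from back-end storage
    return cache
```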

    Does IBM have service or tools to complement FCSA?
    Yes, IBM has assessment, profiling and planning tools that are available on a free consultation basis with a technician to check your environment. Of course, the next logical step would be for IBM to make the tools available via free download or on some other basis as well.

    Do I recommend and have I tried FCSA?
    On paper, or via WebEx, YouTube or other venues, FCSA looks interesting and capable, a good fit for some environments, particularly if they are IBM server based. However, since my PM and VMware VM based servers are from other vendors, and FCSA only runs on IBM servers, I have not actually given it a hands-on test drive yet. Thus if you are looking at storage I/O optimization and caching software tools for your VM or PM environment, check out IBM FCSA to see if it meets your needs.

    Storage I/O trends

    General comments

    It is great to see server and storage systems vendors add value to their solutions with I/O and performance optimization as well as caching software tools. However, I am also concerned with the growing number of different software tools that only work with one vendor's servers or storage systems, or at least are supported as such.

    This reminds me of a time not all that long ago (ok, for some longer than others) when we had a proliferation of different host bus adapter (HBA) drivers and pathing drivers from various vendors. The result is a hodge podge (a technical term) of software running on different operating systems, hypervisors, PMs, VMs, and storage systems, all of which need to be managed. On the other hand, for the time being perhaps the benefit will outweigh the pain of having different tools. That is, there are options from server-side vendor-centric, storage-system-focused, or third-party software tool providers.

    Another consideration is that some tools work only in VMware environments; others support multiple hypervisors, while others also support bare metal servers or PMs. Which applies to your environment will of course depend. After all, if you are an all-VMware environment you have more options, given that many of the caching tools tend to be VMware focused, versus those who are still predominantly PM environments.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Viking SATADIMM: Nand flash SATA SSD in DDR3 DIMM slot?

    Storage I/O trends

    Today computer and data storage memory vendor Viking announced that SSD vendor SolidFire has deployed their SATADIMM modules in DDR3 DIMM (e.g. Random Access Memory (RAM) main memory) slots of their SF SSD based storage solution.

    solidfire ssd storage with satadimm
    SolidFire SF solution with SATADIMM via Viking

    Nand flash SATA SSD in a DDR3 DIMM slot?

    Per Viking, SolidFire uses the SATADIMM as boot devices and cache to complement the normal SSD drives used in their SF SSD storage grid or cluster. For those not familiar, SolidFire SF storage systems or appliances are based on industry standard servers that are populated with SSD devices, which in turn are interconnected with other nodes (servers) to create a grid or cluster of SSD performance and space capacity. Thus as nodes are added, performance, availability and capacity also increase, all of which is accessed via iSCSI. Learn more about SolidFire SF solutions on their website here.
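
    As a back-of-the-envelope illustration of that scale-out behavior, the short Python sketch below shows how aggregate capacity and performance grow as nodes are added to a grid. The per-node figures are placeholders for illustration only, not SolidFire specifications.

```python
# Illustrative scale-out arithmetic: totals grow with node count.
# Per-node capacity and IOPS below are placeholder values, not vendor specs.

def grid_totals(nodes, capacity_tb_per_node=2.4, iops_per_node=50_000):
    return {
        "nodes": nodes,
        "raw_capacity_tb": nodes * capacity_tb_per_node,
        "aggregate_iops": nodes * iops_per_node,
    }

for n in (4, 8, 16):
    print(grid_totals(n))
# Adding nodes scales capacity and performance together, which is the point of
# a scale-out SSD grid accessed via iSCSI.
```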

    Here is the press release that Viking put out today:

    Viking Technology SATADIMM Increases SSD Capacity in SolidFire’s Storage System (Press Release)

    Viking Technology’s SATADIMM enables higher total SSD capacity for SolidFire systems, offering cloud infrastructure providers an optimized and more powerful solution

    FOOTHILL RANCH, Calif., August 12, 2013 – Viking Technology, an industry leading supplier of Solid State Drives (SSDs), Non-Volatile Dual In-line Memory Module (NVDIMMs), and DRAM, today announced that SolidFire has selected its SATADIMM SSD as both the cache SSD and boot volume SSD for their storage nodes. Viking Technology’s SATADIMM SSD enables SolidFire to offer enhanced products by increasing both the number and the total capacity of SSDs in their solution.

    “The Viking SATADIMM gives us an additional SSD within the chassis allowing us to dedicate more drives towards storage capacity, while storing boot and metadata information securely inside the system,” says Adam Carter, Director of Product Management at SolidFire. “Viking’s SATADIMM technology is unique in the market and an important part of our hardware design.”

    SATADIMM is an enterprise-class SSD in a Dual In-line Memory Module (DIMM) form factor that resides within any empty DDR3 DIMM socket. The drive enables SSD caching and boot capabilities without using a hard disk drive bay. The integration of Viking Technology’s SATADIMM not only boosts overall system performance but allows SolidFire to minimize potential human errors associated with data center management, such as accidentally removing a boot or cache drive when replacing an adjacent failed drive.

    “We are excited to support SolidFire with an optimal solid state solution that delivers increased value to their customers compared to traditional SSDs,” says Adrian Proctor, VP of Marketing, Viking Technology. “SATADIMM is a solid state drive that takes advantage of existing empty DDR3 sockets and provides a valuable increase in both performance and capacity.”

    SATADIMM is a 6Gb SATA SSD with capacities up to 512GB. A next generation SAS solution with capacities of 1TB & 2TB will be available early in 2014. For more information, visit our website www.vikingtechnology.com or email us at sales@vikingtechnology.com.

    Sales information is available at: www.vikingtechnology.com, via email at sales@vikingtechnology.com or by calling (949) 643-7255.

    About Viking Technology Viking Technology is recognized as a leader in NVDIMM technology. Supporting a broad range of memory solutions that bridge DRAM and SSD, Viking delivers solutions to OEMs in the enterprise, high-performance computing, industrial and the telecommunications markets. Viking Technology is a division of Sanmina Corporation (Nasdaq: SANM), a leading Electronics Manufacturing Services (EMS) provider. More information is available at www.vikingtechnology.com.

    About SolidFire SolidFire is the market leader in high-performance data storage systems designed for large-scale public and private cloud infrastructure. Leveraging an all-flash scale-out architecture with patented volume-level quality of service (QoS) control, providers can now guarantee storage performance to thousands of applications within a shared infrastructure. In-line data reduction techniques along with system-wide automation are fueling new block-storage services and advancing the way the world uses the cloud.

    What’s inside the press release

    On the surface this might cause some to jump to the conclusion that the nand flash SSD is being accessed via the fast memory bus normally used for DRAM (e.g. main memory) of a server or storage system controller. For some this might even cause a jump to the conclusion that Viking has figured out a way to use nand flash for reads and writes not only via a DDR3 DIMM memory location, but also doing so with the Serial ATA (SATA) protocol, enabling server boot and use by any operating system or hypervisor (e.g. VMware vSphere or ESXi, Microsoft Hyper-V, Xen or KVM among others).

    Note for those not familiar or needing a refresh on DRAM, DIMM and related items, here is an excerpt from Chapter 7 (Servers – Physical, Virtual and Software) from my book "The Green and Virtual Data Center" (CRC Press).

    7.2.2 Memory

    Computers rely on some form of memory ranging from internal registers, local on-board processor Level 1 (L1) and Level 2 (L2) caches, random accessible memory (RAM), non-volatile RAM (NVRAM) or Flash along with external disk storage. Memory, which includes external disk storage, is used for storing operating system software along with associated tools or utilities, application programs and data. Read more of the excerpt here…

    Is SATADIMM memory bus nand flash SSD storage?

    In short, no.

    Some vendors or their surrogates might be tempted to spin such a story by masking some details to allow your imagination to run wild a bit. When I saw the press release announcement I reached out to Tinh Ngo (Director Marketing Communications) over at Viking with some questions. I was expecting the usual marketing spin story, dancing around the questions with long answers or simply not responding with anything of substance (or that requires some substance to believe). Instead what I found was the opposite, and thus I want to share with you some of the types of questions and answers.

    So what actually is SATADIMM? See for yourself in the following image (click on it to view, or visit the Viking site).

    Via Viking website, click on image or here to learn more about SATADIMM

    Does SATADIMM actually move data via the DDR3 memory bus? No, SATADIMM only draws power from it (yes, nand flash does need power when in use, contrary to a myth I was told about).

    Wait, then how is data moved and how does it get to and through the SATA IO stack (hardware and software)?

    Simple, there is a cable connector that attaches to the SATADIMM and in turn attaches to an internal SATA port. Or, using a different connector cable, attach the SATADIMM (up to four) to a standard internal SAS port such as on a main board, HBA, RAID or caching adapter.

    industry trend

    Does that mean that Viking and whoever uses SATADIMM is not actually moving data or implementing SATA via the memory bus and DDR3 DIMM sockets? That would be correct; data movement occurs via cable connection to standard SATA or SAS ports.

    Wait, why would I give up a DDR3 DIMM socket in my server that could be used for more DRAM? Great question, and the answer should be: it depends on whether you need more DRAM or more nand flash. If you are out of drive slots or PCIe card slots, have enough DRAM for your needs, and have available DDR3 slots, you can stuff more nand flash into those locations assuming you have SAS or SATA connectivity.

    satadimm
    SATADIMM with SATA connector top right via Viking

    satadimm sata connector
    SATADIMM SATA connector via Viking

    satadimm sas connector
    SATADIMM SAS (Internal) connector via Viking

    Why not just use the onboard USB ports and plug in some high-capacity USB thumb drives to cut cost? If that is your primary objective it would probably work, and I can also think of some other ways to cut cost. However those are also probably not the primary tenets that people looking to deploy something like SATADIMM would be looking for.

    What are the storage capacities that can be placed on the SATADIMM? They are available in different sizes up to 400GB for SLC and 480GB for MLC. Viking indicated that there are larger capacities and faster 12Gb SAS interfaces in the works, which would be more of a surprise if there were not. Learn more about current product specifications here.

    Good questions. Attached are three images that sort of illustrate the connector. As well, why not a USB drive? Well, there are customers that put 12 of these in the system (each with up to 480GB usable capacity), which equates to roughly an added 5.7TB inside the box without touching the drive bays (left for mass HDDs). You will then need to RAID/connect all the SATADIMMs via an HBA.
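
    For what it is worth, here is a quick Python check of that capacity math, using the 480GB MLC capacity mentioned above:

```python
# Quick check of the capacity math quoted above: twelve SATADIMMs at up to
# 480GB each, without consuming any drive bays.
modules = 12
capacity_gb_each = 480            # MLC capacity per SATADIMM (per Viking)

total_gb = modules * capacity_gb_each
print(total_gb, "GB ~=", total_gb / 1000, "TB")   # 5760 GB ~= 5.76 TB, i.e. roughly 5.7TB
```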

    How fast is the SATADIMM, and does putting it into a DDR3 slot speed things up or slow them down? Viking has some basic performance information on their site (here). However, performance generally should be the same as or similar to a SAS or SATA SSD drive, although keep SSD metrics and performance in the proper context. Also keep in mind that the DDR3 DIMM slot is only being used for power and not for actual data movement.

    Is the SATADIMM using 3Gb or 6Gb SATA? Good question; today it is 6Gb SATA (remember that SATA can attach to a SAS port, however not vice versa). Let's see if Viking responds in the comments with more, including RAID support (hardware or software), along with other insight such as UNMAP, TRIM, and Advanced Format (AF) 4KByte blocks among other things.

    Have I actually tried SATADIMM yet? No, not yet. However I would like to give it a test drive and workout if one were to show up on my doorstep (along with disclosure), and share the results if applicable.

    industry trend

    Future of nand flash in DRAM DIMM sockets

    Keep in mind that someday nand flash will actually be seen not only in a WebEx or PowerPoint demo preso (e.g. similar to what Diablo Technology is previewing), but also in real use, for example what Micron earlier this year predicted for flash on DDR4 (more on DDR3 vs. DDR4 here).

    Is SATADIMM the best nand flash SSD approach for every solution or environment? No, however it does give some interesting options for those who are PCIe card or HDD and SSD drive slot constrained and also have available DDR3 DIMM sockets. As to price, check with Viking; I wish I could say tell them Greg from StorageIO sent you for a good value, however I am not sure what they would say or do.

    Related more reading:
    How much storage performance do you want vs. need?
    Can RAID extend the life of nand flash SSD?
    Can we get a side of context with them IOPS and other storage metrics?
    SSD & Real Estate: Location, Location, Location
    What is the best kind of IO? The one you do not have to do
    SSD, flash and DRAM, DejaVu or something new?

    Ok, nuff said (for now).

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved