Where and How to use NVMe (overview primer)

server storage I/O trends
Updated 1/12/2018

This is the fourth in a five-part mini-series providing a primer and overview of NVMe. View companion posts and more material at www.thenvmeplace.com.

Where and how to use NVMe

As mentioned and shown in the second post of this series, NVMe is initially being deployed inside servers as “back-end,” fast, low-latency storage using PCIe Add-In-Cards (AIC) and flash drives. Similar to dual-path SAS NVM SSDs and HDDs, NVMe supports a primary path and an alternate path; if one path fails, traffic keeps flowing without causing slowdowns. This is an advantage for those already familiar with the dual-path capabilities of SAS, enabling them to design and configure resilient solutions.

NVMe devices, including NVM flash AICs, will also find their way into storage systems and appliances as back-end storage, co-existing with SAS or SATA devices. Another emerging deployment scenario is shared NVMe direct attached storage (DAS) with multiple-server access via PCIe external storage with dual paths for resiliency.

Even though NVMe is a new protocol, it leverages existing skill sets. Anyone familiar with SAS/SCSI and AHCI/SATA storage devices will need little or no training to deploy and manage NVMe. Since NVMe-enabled storage appears to a host server or storage appliance as a LUN or volume, existing Windows, Linux and other OS or hypervisor tools can be used. On Windows, for example, other than going to the device manager to see what the device is and what controller it is attached to, it is no different from installing and using any other storage device. The experience on Linux is similar, particularly when using in-the-box drivers that ship with the OS. One minor Linux difference of note is that instead of seeing a /dev/sda device, you might see a device name like /dev/nvme0n1 or /dev/nvme0n1p1 (with a partition).
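To illustrate the Linux naming convention difference, here is a small, hypothetical Python sketch (not from the original post) that classifies block device names by the patterns described above:

```python
import re

def classify_block_device(name: str) -> str:
    """Classify a Linux block device name by its driver naming convention.

    /dev/sdX       -> SCSI stack (SAS, SATA via libata, USB, etc.)
    /dev/nvmeXnY   -> NVMe namespace Y on controller X
    /dev/nvmeXnYpZ -> partition Z of that NVMe namespace
    """
    if re.fullmatch(r"sd[a-z]+(\d+)?", name):
        return "scsi"
    m = re.fullmatch(r"nvme(\d+)n(\d+)(p(\d+))?", name)
    if m:
        return "nvme-partition" if m.group(3) else "nvme-namespace"
    return "unknown"

print(classify_block_device("sda"))        # scsi
print(classify_block_device("nvme0n1"))    # nvme-namespace
print(classify_block_device("nvme0n1p1"))  # nvme-partition
```

Either way, once the device is discovered, the same partitioning, file system and volume tools apply.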

Keep in mind that NVMe, like SAS, can be used as “back-end” access from servers (or storage systems) to a storage device or system, for example JBOD SSD drives (e.g. 8639), PCIe AiC or M.2 devices. NVMe can also, like SAS, be used as a “front-end” on storage systems or appliances in place of, or in addition to, other access such as GbE-based iSCSI, Fibre Channel, FCoE, InfiniBand, NAS or Object.

What this means is that NVMe can be implemented in a storage system or appliance on both the “front-end” (e.g. server or host side) as well as on the “back-end” (e.g. device or drive side), much like SAS. Another similarity to SAS is that NVMe supports dual-pathing of devices, permitting system architects to design resiliency into their solutions. When the primary path fails, access to the storage device can be maintained with failover so that fast I/O operations continue, whether using SAS or NVMe.

NVM connectivity options including NVMe
Various NVM NAND flash SSD devices and their connectivity options, including NVMe, M.2, SATA and 12 Gbps SAS, are shown in figure 6.

Figure 6 Various NVM flash SSDs (Via StorageIO Labs)

On the left in figure 6 is a NAND flash NVMe PCIe AiC, top center is a USB thumb drive that has been opened up showing a NAND die (chip), middle center is an mSATA card, bottom center is an M.2 card, next on the right is a 2.5” 6 Gbps SATA device, and far right is a 12 Gbps SAS device. Note that an M.2 card can be either a SATA or NVMe device depending on its internal controller, which determines which host or server protocol device driver to use.

The role of PCIe has evolved over the years, as have its performance and packaging form factors. In addition to add-in-card (AiC) slots, PCIe form factors also include the M.2 small form factor (aka Next Generation Form Factor or NGFF) that replaces legacy mini-PCIe cards. Like other devices, an M.2 card can be either an NVMe or SATA device.

The 8639 (and related 8637) connector (figure 7) can be used to support SATA as well as NVMe depending on the drive installed and host server driver support. There are various M.2 NGFF form factors including 2230, 2242, 2260 and 2280. There are also M.2-to-SATA converter or adapter cards available, enabling M.2 devices to attach to legacy SAS/SATA RAID adapters or HBAs.
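As an aside, those four-digit M.2 codes encode the card's physical size: the first two digits are the width in millimeters and the remaining digits the length. A quick illustrative sketch (hypothetical helper, not part of any M.2 tooling):

```python
def m2_dimensions(code: str):
    """Decode an M.2 (NGFF) form factor code into (width_mm, length_mm).

    The first two digits give the card width in mm, the remaining digits
    the length, e.g. "2280" -> 22 mm wide by 80 mm long.
    """
    return int(code[:2]), int(code[2:])

for code in ("2230", "2242", "2260", "2280"):
    width, length = m2_dimensions(code)
    print(f"M.2 {code}: {width} mm x {length} mm")
```

All four common sizes share the 22 mm width; the length is what varies between laptops, servers and adapters.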

Figure 7 PCIe NVMe 8639 Drive (Via StorageIO Labs)

On the left of figure 7 is a view toward the backplane of a storage enclosure in a server that supports SAS, SATA, and NVMe (e.g. 8639). On the right of figure 7 is the connector end of an 8639 NVM SSD, showing additional pin connectors compared to a SAS or SATA device. Those extra pins provide PCIe x4 connectivity to the NVMe devices. The 8639 drive connectors enable a device such as a NAND flash SSD to share a common physical storage enclosure with SAS and SATA devices, including optional dual-pathing.

Where To Learn More

View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What This All Means

Be careful when judging a device or component by its physical packaging or interface connection. In figure 6 the device has SAS/SATA along with PCIe physical connections, yet it is what is inside (e.g. its controller) that determines whether it is a SAS, SATA or NVMe enabled device. This also applies to HDDs and PCIe AiC devices, as well as I/O networking cards and adapters that may use common physical connectors yet implement different protocols. For example, the SFF-8643 HD-Mini SAS internal connector is used for 12 Gbps SAS attachment as well as PCIe to devices such as 8639.

Depending on the type of device inserted, access can be via NVMe over PCIe x4, SAS (12 Gbps or 6 Gbps) or SATA. 8639 connector-based enclosures have a physical connection from their backplanes to the individual drive connectors, as well as to PCIe, SAS, and SATA cards or connectors on the server motherboard or via PCIe riser slots.

While PCIe devices, including AiC slot-based, M.2 or 8639, can share common physical interfaces and lower-level signaling, it is the protocols, controllers, and drivers that determine how they get software defined and used. Keep in mind that it is not just the physical connector or interface that determines what a device is or how it is used; it is also the protocol, command set, controller and device drivers.

Continue reading about NVMe with Part V (Where to learn more, what this all means) in this five-part series, or jump to Part I, Part II or Part III.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

NVMe Need for Performance Speed

Updated 1/12/2018

This is the third in a five-part mini-series providing a primer and overview of NVMe. View companion posts and more material at www.thenvmeplace.com.

How fast is NVMe?

It depends! Generally speaking, NVMe is fast!

However, fast interfaces and protocols also need fast storage devices, adapters, drivers, servers, operating systems and hypervisors, as well as applications that drive or benefit from the increased speed.

A server storage I/O example is shown in figure 5, where a 6 Gbps SATA NVM flash SSD (left) is compared with an NVMe 8639 (x4) drive, both directly attached to a server. The workload is 8 Kbyte random writes with 128 threads (workers), showing results for IOPs (solid bar) along with response time (dotted line). Not surprisingly, the NVMe device has a lower response time and a higher number of IOPs. However, also note how the amount of CPU time used per IOP is lower on the right with the NVMe drive.

NVMe storage I/O performance
Figure 5 6 Gbps SATA NVM flash SSD vs. NVMe flash SSD

While many people are aware of or learning about the IOP and bandwidth improvements as well as the decrease in latency with NVMe, something that gets overlooked is how much less CPU is used. If a server is spending time in wait modes, that can result in lost productivity; by finding and removing the barriers, more work can be done on a given server, perhaps even delaying a server upgrade.

In figure 5, notice the lower amount of CPU used per unit of work done (e.g. I/O or IOP), which translates to more effective use of your server resources. That means either doing more work with what you have, potentially delaying a CPU or server upgrade, or using those extra CPU cycles to power software defined storage management stacks including erasure coding or advanced parity RAID, replication and other functions.

Table 1 shows relative server I/O performance of some NVM flash SSD devices across various workloads. As with any performance comparison, take these and the following results with a grain of salt, as your speed will vary.

NAND flash SSD        ---------- 8KB I/O Size ----------     ---------- 1MB I/O Size ----------
                       100%      100%      100%      100%      100%      100%      100%      100%
                       Seq.Rd    Seq.Wr    Ran.Rd    Ran.Wr    Seq.Rd    Seq.Wr    Ran.Rd    Ran.Wr
NVMe      IOPs       41829.19  33349.36  112353.6  28520.82   1437.26    889.36   1336.94    496.74
PCIe      Bandwidth    326.79    260.54    877.76    222.82   1437.26    889.36   1336.94    496.74
AiC       Resp.          3.23      3.90      1.30      4.56    178.11    287.83    191.27    515.17
          CPU / IOP  0.001571  0.002003  0.000689  0.002342  0.007793  0.011244  0.009798  0.015098
12Gb      IOPs       34792.91  34863.42   29373.5  27069.56    427.19    439.42    416.68     385.9
SAS       Bandwidth    271.82    272.37    229.48    211.48    427.19    429.42    416.68     385.9
          Resp.          3.76      3.77      4.56      5.71    599.26    582.66    614.22    663.21
          CPU / IOP  0.001857   0.00189  0.002267   0.00229  0.011236  0.011834   0.01416  0.015548
6Gb       IOPs       33861.29   9228.49  28677.12   6974.32    363.25     65.58    356.06     55.86
SATA      Bandwidth    264.54      72.1    224.04     54.49    363.25     65.58    356.06     55.86
          Resp.          4.05     26.34      4.67     35.65    704.70   3838.59    718.81   4535.63
          CPU / IOP  0.001899  0.002546  0.002298  0.003269  0.012113  0.032022  0.015166  0.046545

Table 1 Relative performance of various protocols and interfaces

The workload results in table 1 were generated using a vdbench script running on a Windows 2012 R2 based server and are intended as a relative indicator of different protocols and interfaces; your performance mileage will vary. The results compare the number of IOPs (activity rate) for reads and writes, random and sequential, across small 8KB and large 1MB sized I/Os.
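The actual vdbench script is not shown; as a rough illustration, a minimal vdbench parameter file for one of these workloads (8KB, 100% random write, 128 threads against a raw Windows physical drive) might look something like the following, where the device name is a placeholder:

```
* Hypothetical vdbench parameter file: 8KB random writes, 128 threads
sd=sd1,lun=\\.\PhysicalDrive1
wd=wd1,sd=sd1,xfersize=8k,rdpct=0,seekpct=100
rd=rd1,wd=wd1,iorate=max,threads=128,elapsed=300,interval=30
```

Changing xfersize, rdpct and seekpct produces the other seven workload cells in table 1.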

Also shown in table 1 are bandwidth or throughput (e.g. amount of data moved), response time and the amount of CPU used per IOP. Note in table 1 how NVMe can do more IOPs with less CPU per IOP, or, using a similar amount of CPU, do more work at a lower latency. SSDs have been used for decades to help reduce CPU bottlenecks or defer server upgrades by removing I/O wait times (e.g. wait or lost time) and reducing CPU consumption.
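To make the CPU-per-IOP point concrete, this small Python sketch (using the 8KB 100% random read numbers from table 1) computes each device's CPU cost per I/O relative to the 6Gb SATA baseline:

```python
# 8KB 100% random read results from table 1 (IOPs and CPU used per IOP)
devices = {
    "NVMe PCIe AiC": {"iops": 112353.6, "cpu_per_iop": 0.000689},
    "12Gb SAS":      {"iops": 29373.5,  "cpu_per_iop": 0.002267},
    "6Gb SATA":      {"iops": 28677.12, "cpu_per_iop": 0.002298},
}

baseline = devices["6Gb SATA"]["cpu_per_iop"]
for name, d in devices.items():
    # Fraction of the SATA CPU cost this device pays per I/O
    relative_cpu = d["cpu_per_iop"] / baseline
    print(f"{name:14s} {d['iops']:>9.1f} IOPs, {relative_cpu:.2f}x SATA CPU per IOP")
```

In this run NVMe delivers roughly 3.9 times the IOPs of SATA at about 30 percent of the CPU cost per I/O, which is exactly the "more work per CPU cycle" argument made above.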

Can NVMe solutions run faster than those shown above? Absolutely!


What This All Means

Continue reading about NVMe with Part IV (Where and How to use NVMe) in this five-part series, or jump to Part I, Part II or Part V.


Different NVMe Configurations

Updated 1/12/2018

This is the second in a five-part mini-series providing a primer and overview of NVMe. View companion posts and more material at www.thenvmeplace.com.

The many different faces or facets of NVMe configurations

NVMe can be deployed and used in many ways; the following are some examples to show its flexibility today as well as where it may be headed in the future. An initial deployment scenario is NVMe devices (e.g. PCIe cards, M.2 or 8639 drives) installed as storage in servers or as back-end storage in storage systems. Figure 2 below shows a networked storage system or appliance that uses traditional server storage I/O interfaces and protocols for front-end access, with the back-end storage being all NVMe, or a hybrid of NVMe, SAS and SATA devices.
Figure 2 NVMe as back-end server storage I/O interface to NVM storage

A variation of the above is using NVMe for shared direct attached storage (DAS) such as the EMC DSSD D5. In the following scenario (figure 3), multiple servers in a rack or cabinet configuration have an extended PCIe connection that attaches to a shared all-flash storage array using NVMe on the front-end. Read more about this approach and the EMC DSSD D5 here or click on the image below.

Figure 3 Shared DAS All Flash NVM Storage using NVMe (e.g. EMC DSSD D5)

Next up, in figure 4, is a variation of the previous example, except NVMe is implemented over an RDMA (Remote Direct Memory Access) based fabric network using Converged 10GbE/40GbE or InfiniBand, in what is known as RoCE (RDMA over Converged Ethernet, pronounced “Rocky”).

Figure 4 NVMe as a “front-end” interface for servers or storage systems/appliances


What This All Means

Watch for more topology and configuration options as NVMe along with associated hardware, software and I/O networking tools and technologies emerge over time.

Continue reading about NVMe with Part III (Need for Performance Speed) in this five-part series, or jump to Part I, Part IV or Part V.


NVMe overview primer

Updated 2/2/2018

This is the first in a five-part mini-series providing a primer and overview of NVMe. View companion posts and more material at www.thenvmeplace.com.

What is NVM Express (NVMe)

Non-Volatile Memory (NVM) includes persistent memory such as NAND flash and other forms of Solid State Devices (SSD). NVM Express (NVMe) is a new server storage I/O protocol, an alternative to AHCI/SATA and the SCSI protocol used by Serial Attached SCSI (SAS). Note that the name NVMe is owned and managed by the NVM Express industry trade group (www.nvmexpress.org).

The key question with NVMe is not if, rather when, where, why, how and with what it will appear in your data center or server storage I/O data infrastructure. This post is a companion to material on my micro site www.thenvmeplace.com, which provides an overview of NVMe and helps to address some common questions about it.

Main features of NVMe include among others:

  • Lower latency due to improved drivers and increased queues (and queue sizes)
  • Lower CPU usage to handle larger numbers of I/Os (more CPU available for useful work)
  • Higher I/O activity rates (IOPs) to boost productivity and unlock the value of fast flash and NVM
  • Bandwidth improvements leveraging fast PCIe interfaces and available lanes
  • Dual-pathing of devices, similar to what is available with dual-path SAS devices
  • Unlocking the value of more cores per processor socket and software threads (productivity)
  • Various packaging options, deployment scenarios and configuration options
  • Appears as a standard storage device on most operating systems
  • Plug-and-play with in-box drivers on many popular operating systems and hypervisors

Why NVMe for Server Storage I/O?
NVMe has been designed from the ground up for accessing fast storage, including flash SSDs, leveraging PCI Express (PCIe). The benefits include lower latency, improved concurrency, increased performance and the ability to unleash a lot more of the potential of modern multi-core processors.

Figure 1 shows common server I/O connectivity including PCIe, SAS, SATA and NVMe.

NVMe, leveraging PCIe, enables modern applications to reach their full potential. NVMe is one of those rare generational protocol upgrades that comes around every couple of decades to help unlock the full performance value of servers and storage. NVMe does need new drivers, but once in place, it plugs and plays seamlessly with existing tools, software and user experiences. Likewise, many of those drivers now ship in the box with popular operating systems and hypervisors.

While SATA and SAS provided enough bandwidth for HDDs and some SSD uses, more performance is needed. NVMe near-term does not replace SAS or SATA; they can and will coexist for years to come, enabling different tiers of server storage I/O performance.

NVMe unlocks the potential of flash-based storage by allowing up to 65,536 (64K) queues, each with up to 64K commands per queue. SATA allowed for only one command queue capable of holding 32 commands, and SAS supports one queue with up to 64K command entries. As a result, the storage I/O capabilities of flash can now be fed across PCIe much faster, enabling modern multi-core processors to complete more useful work in less time.
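The arithmetic behind that queue comparison can be sketched as follows (theoretical protocol limits, not what any single device necessarily implements):

```python
# Theoretical protocol queue limits discussed above
protocols = {
    "NVMe":      {"queues": 65536, "commands_per_queue": 65536},
    "SAS":       {"queues": 1,     "commands_per_queue": 65536},
    "SATA/AHCI": {"queues": 1,     "commands_per_queue": 32},
}

for name, p in protocols.items():
    total = p["queues"] * p["commands_per_queue"]
    print(f"{name:9s} {p['queues']:>6} queue(s) x {p['commands_per_queue']:>6} = {total:,} outstanding commands")
```

That works out to over four billion potentially outstanding commands for NVMe versus 32 for AHCI/SATA, which is why highly parallel flash devices and multi-core hosts benefit so much.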


What This All Means

Continue reading about NVMe with Part II (Different NVMe configurations) in this five-part series, or jump to Part III, Part IV or Part V.


Server StorageIO February 2016 Update Newsletter

Volume 16, Issue II

Hello and welcome to the February 2016 Server StorageIO update newsletter.

Even with an extra day during the month of February, there was a lot going on in a short amount of time. This included industry activity from servers to storage and I/O networking, hardware, software, services, mergers and acquisitions for cloud, virtual, containers and legacy environments. Check out the sampling of some of the various industry activities below.

Meanwhile, it's now time for March Madness, which also means metrics that matter and getting ready for World Backup Day on March 31st. Speaking of World Backup Day, check out the StorageIO events and activities page for a webinar on March 31st involving data protection as part of smart backups.

While your focus for March may be around brackets and other related themes, check out the Carnegie Mellon University (CMU) white paper listed below that looks at NAND flash SSD failures at Facebook. Some of the takeaways involve the importance of cooling and thermal management for flash, as well as wear management and the role of flash translation layer firmware and controllers.

Also see the links to the Google white paper on its request to the industry for a new type of Hard Disk Drive (HDD) to store capacity data while SSDs handle the IOPs. The takeaway is that while Google uses a lot of flash SSD for high-performance, low-latency workloads, it also needs a lot of high-capacity bulk storage that is more affordable on a cost-per-capacity basis. Google also makes several proposals and suggestions to the industry on what should and can be done on a go-forward basis.

Backblaze also has a new report out on its 2015 HDD reliability and failure analysis, which makes for an interesting read. One of the takeaways is that while there are newer, larger-capacity 6TB and 8TB drives, Backblaze is leveraging the lower cost per capacity of 4TB drives that are also available in volume quantity.

Enjoy this edition of the Server StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts, along with in-the-news commentary appearing soon.

Cheers GS

In This Issue

  • StorageIOblog posts
  • Industry Activity Trends
  • New and Old Vendor Update
  • Events and Webinars
  • StorageIOblog Posts

    Recent and popular Server StorageIOblog posts include:

    • EMC DSSD D5 Rack Scale Direct Attached Shared SSD All Flash Array Part I
      and Part II – EMC DSSD D5 Direct Attached Shared AFA
      EMC announced the general availability of their DSSD D5 Shared Direct Attached SSD (DAS) flash storage system (e.g. All Flash Array or AFA) which is a rack-scale solution. If you recall, EMC acquired DSSD back in 2014 which you can read more about here. EMC announced four configurations that include 36TB, 72TB and 144TB raw flash SSD capacity with support for up to 48 dual-ported host client servers.
    • Various Hardware (SAS, SATA, NVM, M2) and Software (VHD) Defined Odd’s and Ends
Ever need to add another GbE port to a small server, workstation or perhaps Intel NUC when no PCIe slots are available? How about attaching an M.2 form factor flash SSD card to a server or device that does not have an M.2 port, or mirroring two M.2 cards together with a RAID adapter? Looking for a tool to convert a Windows system to a Virtual Hard Disk (VHD) while it is running? The following is a collection of odds-and-ends devices and tools for hardware and software defining your environment.
    • Software Defined Storage Virtual Hard Disk (VHD) Algorithms + Data Structures
For those who are into, or simply like to talk about, software defined storage (SDS), APIs, Windows, Virtual Hard Disks (VHD) or VHDX, or Hyper-V among other related themes, have you ever actually looked at the specification for VHDX? If not, here is the link to the open specification that Microsoft published (this one dates back to 2012).
    • Big Files and Lots of Little File Processing and Benchmarking with Vdbench
Need to test a server, storage I/O networking, hardware, software, services, cloud, virtual or physical environment that is doing some form of file processing, or where you simply want some extra workload running in the background for whatever reason?

    View other recent as well as past blog posts here

    Server Storage I/O Industry Activity Trends (Cloud, Virtual, Physical)

    StorageIO news (image licensed for use from Shutterstock by StorageIO)

    Some new Products Technology Services Announcements (PTSA) include:

  • Tegile – IntelliFlash HD Now Available To Enterprises Worldwide
  • Via Forbes – Competitors and Cash Bleed Put Pressure on Pure Storage
  • Via HealthCareBusiness – Philips and Amazon team up on cloud-based health record storage
  • Via Zacks – IBM Advances Hybrid Cloud Object Based Storage
  • DataONstorage expands Microsoft Hyper Converged Infrastructure platforms
  • Via ITBusinessEdge – Nimble updates All Flash Array (AFA) storage
  • Carnegie Mellon University – A Large-Scale Study of Flash Memory Failures
  • Cisco Buys Cliqr Cloud Orchestration
  • Backblaze – 2015 Hard Drive Reliability Reports and Analysis
  • Via BusinessCloudNews – Verizon Closing Down Its Public Cloud
  • Via BusinessInsider – US Government Approves Dell and EMC Deal
  • EMC and VMware announce new VCE VxRAIL Converged Solutions
  • EMC announces new IBM zSeries Mainframe enhancements for VMAX
  • EMC announces new DSSD D5 AFA and VMAX AFA enhancements
  • HPE announces enhancements to StoreEasy 1650 storage
  • Seagate now shipping world's slimmest and fastest 2TB mobile HDD
  • Via VMblog – Oracle Scoops Up Ravello to Boost Its Public Cloud Offerings
  • Via Investors – SSD and Chinese Investments in Western Digital
  • ATTO announces 32G (e.g. Gen 6) Fibre Channel adapters
  • Google to disk vendors: Make hard drives like this, even if they lose more data
  • Google Disk for Data Centers White Paper (PDF Here)
  • View other recent news and industry trends here

    Vendors you may not have heard of

    Various vendors (and service providers) you may not know or heard about recently.

    StorageIO news (image licensed for use from Shutterstock by StorageIO)

    • SkySync – Enterprise File Sync and Share
    • SANblaze – Storage protocol emulation tools
    • OpenIT – DCIM and Data Infrastructure Management Tools
    • Infinit.sh – Decentralized Software Based File Storage Platform
    • Alluxio – Open Source Software Defined Storage Abstraction Layer
    • Genie9 – Backup and Data Protection Tools
    • E8 Storage – Software Defined Stealth Storage Startup

    Check out more vendors you may know, have heard of, or that are perhaps new on the Server StorageIO Industry Links page here (over 1,000 entries and growing).

     

    StorageIO Webinars and Industry Events

    EMCworld (Las Vegas) May 2-4, 2016

    Interop (Las Vegas) May 4-6 2016

    NAB (Las Vegas) April 19-20, 2016

    March 31, 2016 Webinar (1PM ET) – Smart Backup and World Backup Day

    February 25, 2016 Webinar (11AM PT) – Migrating to Hyper-V including from Vmware

    February 24, 2016 Webinar (11AM ET) – How To Become a Data Protection Hero

    February 23, 2016 Webinar (11AM PT) – Rethinking Data Protection

    January 19, 2016 Webinar (9AM PT) – Solve Virtualization Performance Issues Like a Pro

    See more webinars and other activities on the Server StorageIO Events page here.

    Server StorageIO Industry Resources and Links

    Check out these useful links and pages:

    storageio.com/links,
    objectstoragecenter.com, storageioblog.com/data-protection-diaries-main/,
    thenvmeplace.com, thessdplace.com and storageio.com/performance among others.

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio


    Part II – EMC DSSD D5 Direct Attached Shared AFA


    This is the second post in a two-part series on the EMC DSSD D5 announcement, you can read part one here.

    Let's take a closer look at how the EMC DSSD D5 works, its hardware and software components, how it compares, and other considerations.

    How Does DSSD D5 Work

    Up to 48 Linux servers attach via dual-port PCIe Gen 3 x8 cards that are stateless. Stateless simply means they do not have any flash and are not being used as storage cards; rather, they are essentially just NVMe adapter cards. With the first release, block, HDFS file, along with object access and APIs are available for Linux systems. These drivers enable the shared NVMe storage to be accessed by applications using streamlined server and storage I/O driver software stacks to cut latency. DSSD D5 is meant to be a rack-scale solution, so distance is measured as inside a rack (e.g. a couple of meters).

    The 5U-tall DSSD D5 supports 48 servers via a pair of I/O Modules (IOM), each with 48 ports, that in turn attach to the data plane and on to the Flash Modules (FM). Also attached to the data plane are a pair of controllers that are active/active for performing management tasks; however, they do not sit in the data path. This means that host clients directly access the FMs without having to go through a controller, which is the case in traditional storage systems and AFAs. The controllers only get involved when there is setup, configuration or other management activity; otherwise they get out of the way, kind of like how management should function: there when you need them to help, then out of the way so productive work can be done.

    Pardon the following hand-drawn sketches; you can see some nicer diagrams, videos and other content via the EMC Pulse Blog as well as elsewhere.

    Note that the host client servers take on the responsibility for managing and coordinating data consistency, meaning data can be shared between servers assuming applicable software is used for implementing integrity. This means that clustering and other software that can support shared storage are able to perform low-latency, high-performance read and write activity against the DSSD D5, as opposed to relying on the underlying storage system for handling the shared storage coordination, such as in a NAS. Another note is that the DSSD D5 is optimized for concurrent multi-threaded and asynchronous I/O operations, along with atomic writes for data integrity, which enable the multiple cores in today's faster processors to be more effectively leveraged.

    The data plane is a mesh, switch or expander based back plane enabling any of the 96 (2 x 48) north-bound (host client-server) PCIe Gen 3 x4 ports to reach the up to 36 (or as few as 18) FMs, which are also dual-pathed. Note that the host client-server PCIe dual-port cards are Gen 3 x8 while the DSSD D5 ports are Gen 3 x4. Simple math should tell you that if you are going to have 2 x PCIe Gen 3 x4 ports running at full speed, you want a Gen 3 x8 connection inside the server to get full performance.
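The lane math above can be sanity-checked with a quick calculation. PCIe Gen 3 runs at 8 GT/s per lane with 128b/130b encoding, giving roughly 0.98 GB/s of usable bandwidth per lane; the figures below are a back-of-the-envelope sketch that ignores protocol overhead, not vendor-rated numbers.

```python
# Back-of-the-envelope PCIe Gen 3 bandwidth math (approximate, ignores protocol overhead)
GT_PER_SEC = 8.0            # PCIe Gen 3 raw rate per lane (gigatransfers/sec)
ENCODING = 128.0 / 130.0    # 128b/130b encoding efficiency

def gen3_gbytes_per_sec(lanes: int) -> float:
    """Approximate usable GB/s for a PCIe Gen 3 link with the given lane count."""
    return GT_PER_SEC * ENCODING * lanes / 8.0  # bits -> bytes

x4 = gen3_gbytes_per_sec(4)   # one DSSD D5 x4 port
x8 = gen3_gbytes_per_sec(8)   # dual-port x8 host adapter

print(f"x4 ~ {x4:.2f} GB/s, x8 ~ {x8:.2f} GB/s")
```

Two x4 ports running flat out add up to exactly one x8 link’s worth of bandwidth, which is why the host card needs the wider connection.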

    Think of the data plane as similar to how a SAS expander works in an enclosure, or a SAS switch, the difference being that it is PCIe and not SAS or another protocol. Note that even though the terms mesh, fabric, switch and network are used, these are NOT attached to traditional LAN, SAN, NAS or other networks. Instead, this is a private “networked back plane” between the servers and storage devices (e.g. FMs).

    EMC DSSD D5 details

    The dual controllers (e.g. the control plane) oversee flash management, including garbage collection among other tasks; storage is also thin provisioned.

    The dual controllers (active/active) are connected to each other (e.g. the control plane) as well as to the data plane; however, they do not sit in the data path. This is a fast-path/control-path approach, meaning the controllers get involved to do management functions when needed, and get out of the way of work when not. The controllers are hot-swappable and provide global management functions, including setting up and tearing down host client/server I/O paths, mappings and affinities. Controllers also support the management of CUBIC RAID data protection functions performed by the Flash Modules (FM).

    Other functions the controllers implement, leveraging their CPUs and DRAM, include flash translation layer (FTL) functions normally handled by SSD cards, drives or other devices. These FTL functions include wear-leveling for durability, garbage collection and voltage/power management among other tasks. The result is that the flash modules are able to spend more of their time and resources handling I/O operations rather than management tasks, compared to traditional off-the-shelf SSD drives, cards or devices.

    The FMs insert from the front and come in two sizes, 2TB and 4TB of raw NAND capacity. What’s different about the FMs vs. some other vendors’ approaches is that these are not your traditional PCIe flash cards; instead they are custom cards with a proprietary ASIC and raw NAND dies. DRAM is used in the FM as a buffer to hold data for write optimization, as well as to enhance wear-leveling to increase flash endurance.

    The result is up to thousands of NAND dies spread over up to 36 FMs and, more important, more performance being derived out of those resources. The increased performance comes from DSSD implementing its own flash translation layer, garbage collection and power/voltage management among other techniques to derive more useful work per watt of energy consumed.

    EMC DSSD performance claims:

    • 100 microsecond latency for small IOs
    • 100GB/sec bandwidth for large IOs
    • 10 million IOPs for small IOs
    • Up to 144TB raw capacity
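Taking the latency and IOPs claims together, Little’s Law gives a feel for the concurrency required: outstanding I/Os = IOPs x latency. The sketch below is a sanity check using the vendor’s round numbers, not a measured result.

```python
# Little's Law: concurrency (outstanding I/Os) = arrival rate * service time
iops = 10_000_000          # 10 million small IOs per second (claimed)
latency_s = 100e-6         # 100 microseconds per IO (claimed)

outstanding = iops * latency_s
print(f"Sustaining {iops:,} IOPs at {latency_s * 1e6:.0f}us implies ~{outstanding:.0f} concurrent I/Os")
# ~1000 I/Os in flight at any instant
```

Roughly a thousand I/Os need to be in flight at once to hit both numbers, which is why the D5’s emphasis on multi-threaded, asynchronous I/O across up to 48 servers matters.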

    How Does It Compare To Other AFA and SSD solutions

    There will be many apples to oranges comparisons as is often the case with new technologies or at least until others arrive in the market.

    Some general comparisons that may be apples to oranges as opposed to apples to apples include:

    • Shared and dense fast nand flash (eMLC) SSD storage
    • disaggregated flash SSD storage from server while enabling high performance, low latency
    • Eliminate pools or ponds of dedicated SSD storage capacity and performance
    • Not a SAN yet more than server-side flash or flash SSD JBOD
    • Underlying Flash Translation Layer (FTL) is disaggregated from SSD devices
    • Optimized hardware and software data path
    • Requires special server-side stateless adapter for accessing shared storage

    Some other comparisons include:

    • Hybrid and AFA shared via some server storage I/O network (good sharing, feature rich, resilient, slower performance and higher latency due to hardware, network and server I/O software stacks). For example EMC VMAX, VNX, XtremIO among others.
    • Server attached flash SSD aka server SAN (flash SSD creates islands of technology, lower resource sharing, data shuffling between servers, limited or no data services, management complexity). For example PCIe flash SSD state full (persistent) cards where data is stored or used as a cache along with associated management tools and drivers.
    • DSSD D5 is a rack-scale hybrid approach combining direct attached shared flash with lower latency and higher performance vs. a traditional AFA or hybrid storage array, plus better resource usage, sharing, management and performance vs. traditional dedicated server flash. Complements server-side data infrastructure and application scale-out software. Server applications can reach NVMe storage via user space with block, HDFS, Flood and other APIs.

    Using EMC DSSD D5 in possible hybrid ways

    What Happened to Server PCIe cards and Server SANs

    If you recall, a few years ago the industry rage was flash SSD PCIe server cards from vendors such as EMC, FusionIO (now part of SanDisk), Intel (still Intel), LSI (now part of Seagate), Micron (still Micron) and STEC (now part of Western Digital) among others. Server-side flash SSD PCIe cards are still popular, particularly with newer NVMe controller based models that use the NVMe protocol stack instead of AHCI/SATA or others.

    However, as is often the case, things evolve, and while there is still a place for server-side stateful PCIe flash cards, either for data or as cache, there is also the need to combine and simplify management as well as streamline the software I/O stacks, which is where EMC DSSD D5 comes into play. It enables consolidation of server-side SSD cards into a shared 5U chassis, giving up to 48 dual-pathed servers access to the flash pools while using streamlined server software stacks and drivers that leverage NVMe over PCIe.

    Where to learn more

    Continue reading with the following links about NVMe, flash SSD and EMC DSSD.

  • Part one of this series here and part two here.
  • Performance Redefined! Introducing DSSD D5 Rack-Scale Flash Solution (EMC Pulse Blog)
  • EMC Unveils DSSD D5: A Quantum Leap In Flash Storage (EMC Press Release)
  • EMC Declares 2016 The “Year of All-Flash” For Primary Storage (EMC Press Release)
  • EMC DSSD D5 Rack-Scale Flash (EMC PDF Overview)
  • EMC DSSD and Cloudera Evolve Hadoop (EMC White Paper Overview)
  • Software Aspects of The EMC DSSD D5 Rack-Scale Flash Storage Platform (EMC PDF White Paper)
  • EMC DSSD D5 (EMC PDF Architecture and Product Specification)
  • EMC VFCache respinning SSD and intelligent caching (Part II)
  • EMC To Acquire DSSD, Inc., Extends Flash Storage Leadership
  • Part II: XtremIO, XtremSW and XtremSF EMC flash ssd portfolio redefined
  • XtremIO, XtremSW and XtremSF EMC flash ssd portfolio redefined
  • Learn more about flash SSD here and NVMe here at thenvmeplace.com

    What this all means

    EMC with DSSD D5 now has another solution to offer clients. Granted, their challenge, as it has been over the past couple of decades, will be to educate and compensate their sales force and partners on which technology solution to propose for different needs.

    On one hand, life could be simpler for EMC if they only had one platform solution that would then be the answer to every problem, something that some other vendors and startups face. If all you have is one solution, you can try to make that solution fit different environments, or get the environment to adapt to the solution. Having options is a good thing if those options can remove complexity along with cost while boosting productivity.

    I would like to see support for other operating systems such as Windows, particularly with the future Windows Server 2016 based Nano, as well as hypervisors including VMware, Hyper-V among others. On the other hand, I also would like to see a Sharp Aquos Quattron 80" 1080p 240Hz 3D TV on my wall to watch HD videos from my DJI Phantom drone. For now, focusing on Linux makes sense; however, it would be nice to see some more platforms supported.

    Keep an eye on the NVMe space, as we are seeing NVMe solutions appearing inside servers, storage systems, and external dedicated and shared storage, as well as some other emerging things including NVMe over Fabric. Learn more about EMC DSSD D5 here.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    Software Defined Storage Virtual Hard Disk (VHD) Algorithms + Data Structures


    server storage I/O trends

    For those who are into, or simply like to talk about software defined storage (SDS), APIs, Windows, Virtual Hard Disks (VHD) or VHDX, or Hyper-V among other related themes, have you ever actually looked at the specification for VHDX? If not, here is the link to the open specification that Microsoft published (this one dates back to 2012).

    Microsoft VHDX specification document
    Click on above image to download the VHDX specification from Microsoft.com

    How about Algorithms + Data Structures = Programs by Niklaus Wirth? Some of you might remember it from the past; if not, it’s a timeless piece of work with many fundamental concepts for understanding software defined anything. I came across Algorithms + Data Structures = Programs back in graduate school when I was getting my masters degree in software engineering at night, while working during the day in an IT environment on servers, storage, I/O networking hardware and software.


    Algorithms + Data Structures = Programs on Amazon.com

    In addition to the Amazon.com link above, here is a link to a free (legitimate PDF) copy.

    The reason I mention Software Defined, Virtual Hard Disk and Algorithms + Data Structures = Programs is that they are all directly related, or at a minimum can help demystify things.

    Inside a VHD and VHDX

    The following is an excerpt from the Microsoft VHDX specification document mentioned above that shows a logical view of how a VHDX is defined as a data structure, as well as how algorithms should use and access them.

    Microsoft VHDX specification

    Keep in mind that anything software defined is a collection of data structures that describe how bits, bytes, blocks, blobs or other entities are organized, which are then accessed by algorithms that define how those data structures are used. Thus the connection to Algorithms + Data Structures = Programs mentioned above.

    In the case of a Virtual Hard Disk (VHD) or VHDX, the data structures are defined (see the specification here) and then used by various programs (applications or algorithms) such as Windows or other operating systems, hypervisors or utilities.

    A VHDX (or VMDK or VVOL or qcow or other virtual disk for that matter) is a file whose contents are organized (e.g. the data structures) per a given specification (here).
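As a concrete taste of those data structures, the specification defines a file type identifier at the very start of a VHDX file: an 8-byte ASCII signature "vhdxfile" followed by a 512-byte UTF-16 creator string. The helper below is a minimal sketch based on my reading of the published specification; the function name and the hand-built buffer are illustrative, not production code.

```python
def parse_vhdx_identifier(header: bytes) -> str:
    """Parse the VHDX file type identifier (first structure in the spec):
    an 8-byte ASCII signature 'vhdxfile' followed by a 512-byte UTF-16LE
    creator string. Sketch based on the published specification."""
    if header[0:8] != b"vhdxfile":
        raise ValueError("not a VHDX file")
    # Creator string is fixed at 512 bytes, null-padded UTF-16LE
    return header[8:8 + 512].decode("utf-16-le").rstrip("\x00")

# Synthetic example (a hand-built buffer, not a real VHDX file):
buf = b"vhdxfile" + "Example Creator 1.0".encode("utf-16-le").ljust(512, b"\x00")
print(parse_vhdx_identifier(buf))  # -> Example Creator 1.0
```

Point a similar reader at the first 520 bytes of a real VHDX and you can see which tool created it, which is a nice way to connect the spec document to actual bytes on disk.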

    The VHDX can then be moved around like any other file and used for booting some operating systems, as well as simply mounted and used like any other disk or device.

    This also means that you can nest, putting a VHDX inside of a VHDX and so forth.

    Where to learn more

    Continue reading with the following links about Virtual Hard Disks pertaining to Microsoft Windows, Hyper-V, VMware among others.

  • Algorithms + Data Structures = Programs on Amazon.com
  • Microsoft Technet Virtual Hard Disk Sharing Overview
  • Download the VHDX specification from Microsoft.com
  • Microsoft Technet Hyper-V Virtual Hard Disk (VHD) Format Overview
  • Microsoft Technet Online Virtual Hard Disk Resizing Overview
  • VMware Developer Resource Center (VDDK for vSphere 6.0)
  • VMware VVOLs and storage I/O fundamentals (Part 1)

    What this all means

    Applications, utilities, or basically anything where algorithms work with data structures, is a program. Software Defined Storage, or software defined anything, involves defining the data structures that describe various entities, along with the algorithms that work with and use those data structures.

    Sharpen, refresh or expand your software defined data center, software defined network, software defined storage or software defined storage management, as well as software defined marketing game, by digging a bit deeper into the bits and bytes. Who knows, you might just go from talking the talk to walking the talk, or if nothing else, talking the talk better.

    Ok, nuff said (for now)

    Cheers
    Gs


    Big Files Lots of Little File Processing Benchmarking with Vdbench



    server storage data infrastructure i/o File Processing Benchmarking with Vdbench

    Updated 2/10/2018

    Need to test a server, storage I/O networking, hardware, software, services, cloud, virtual, physical or other environment that is either doing some form of file processing, or that you simply want to have some extra workload running in the background for whatever reason? An option is file processing benchmarking with Vdbench.

    I/O performance

    Getting Started


    Here’s a quick and relatively easy way to do it with Vdbench (free from Oracle). Granted, there are other tools, both free and for fee, that can do similar things; however, we will leave those for another day and post. Here’s the con to this approach: there is no GUI like what you have available with some other tools. Here’s the pro: it’s free, flexible, and limited only by your creativity, amount of storage space, server memory and I/O capacity.

    If you need a background on Vdbench and benchmarking, check out the series of related posts here (e.g. www.storageio.com/performance).

    Get and Install the Vdbench Bits and Bytes


    If you do not already have Vdbench installed, get a copy from the Oracle or Source Forge site (now points to Oracle here).

    Vdbench is free; you simply sign up and accept the free license, select the version, then download the bits (it is a single, common distribution for all OSs) as well as the documentation.

    Installation, particularly on Windows, is really easy: basically follow the instructions in the documentation by copying the contents of the download folder to a specified directory, set up any environment variables, and make sure that you have Java installed.

    Here is a hint and tip for Windows Servers: if you get an error message about counters, open a command prompt with Administrator rights and type the command:

    $ lodctr /r


    The above command will reset your I/O counters. Note, however, that the command will also overwrite existing counters, so only use it if you have to.

    Likewise, the *nix install is also easy: copy the files, make sure to copy the applicable *nix shell script (they are in the download folder), and verify Java is installed and working.

    You can run vdbench -t (Windows) or ./vdbench -t (*nix) to verify that it is working.

    Vdbench File Processing

    There are many options with Vdbench, as it has a very robust command and scripting language, including the ability to set up for-loops among other things. We are only going to touch the surface here using its file processing capabilities. Likewise, Vdbench can run from a single server accessing multiple storage systems or file systems, as well as from multiple servers to a single file system. For simplicity, we will stick with the basics in the following examples to exercise a local file system. The number of files and file size are limited by server memory and storage space.

    You can specify the number and depth of directories to put files into for processing. One of the parameters is the anchor point for the file processing; in the following examples S:\SIOTEMP\FS1 is used as the anchor point. Other parameters include the I/O size, percent reads, number of threads, run time and sample interval, as well as the output folder name for the result files. Note that unlike some tools, Vdbench does not create a single file of results, rather a folder with several files including summary, totals, parameters, histograms and CSV among others.
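Since the same script gets reused with different substitution values, a small wrapper can assemble the command line from a dict of parameters. The parameter names below match the substitution variables in the script that follows; the wrapper function itself (build_vdbench_cmd) is an illustrative convenience I am sketching, not part of Vdbench.

```python
def build_vdbench_cmd(script, params, proc_num=None, outdir=None, vdbench="vdbench"):
    """Assemble a Vdbench command line: -f script, name=value substitutions,
    optional -p process number and -o output folder."""
    cmd = [vdbench, "-f", script]
    cmd += [f"{name}={value}" for name, value in params.items()]
    if proc_num is not None:
        cmd += ["-p", str(proc_num)]   # unique process number if running multiple instances
    if outdir is not None:
        cmd += ["-o", outdir]          # folder where the result files are written
    return cmd

cmd = build_vdbench_cmd(
    "SIO_vdbench_filesystest.txt",
    {"fanchor": r"S:\SIOTEMP\FS1", "dirwid": 1, "numfiles": 60,
     "filesize": "5G", "fxfersize": "128k", "thrds": 64,
     "etime": "10h", "itime": 30, "dirdep": 1, "filrdpct": 90},
    proc_num=5576, outdir="BigFiles_Results")
print(" ".join(cmd))
```

Hand the resulting list to subprocess.run(cmd) (or join it for a shell) to launch a run; changing one dict entry is all it takes to flip between the big-file and little-file workloads shown below.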


    Simple Vdbench File Processing Commands

    For flexibility and ease of use, I put the following three Vdbench commands into a simple text file that is then called with parameters on the command line.
    fsd=fsd1,anchor=!fanchor,depth=!dirdep,width=!dirwid,files=!numfiles,size=!filesize

    fwd=fwd1,fsd=fsd1,rdpct=!filrdpct,xfersize=!fxfersize,fileselect=random,fileio=random,threads=!thrds

    rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=!etime,interval=!itime

    Simple Vdbench script

    # SIO_vdbench_filesystest.txt
    #
    # Example Vdbench script for file processing
    #
    # fanchor = file system place where directories and files will be created
    # dirwid = how wide should the directories be (e.g. how many directories wide)
    # numfiles = how many files per directory
    # filesize = file size in k, m, g e.g. 16k = 16KBytes
    # fxfersize = file I/O transfer size in kbytes
    # thrds = how many threads or workers
    # etime = how long to run in minutes (m) or hours (h)
    # itime = interval sample time e.g. 30 seconds
    # dirdep = how deep the directory tree
    # filrdpct = percent of reads e.g. 90 = 90 percent reads
    # -p processnumber = optional specify a process number, only needed if running multiple vdbenchs at same time, number should be unique
    # -o output folder for the result files describing what is being done and some config info
    #
    # Sample command line shown for Windows, for *nix add ./
    #
    # The real Vdbench script, with command line substitution parameters indicated by a leading !
    #

    fsd=fsd1,anchor=!fanchor,depth=!dirdep,width=!dirwid,files=!numfiles,size=!filesize

    fwd=fwd1,fsd=fsd1,rdpct=!filrdpct,xfersize=!fxfersize,fileselect=random,fileio=random,threads=!thrds

    rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=!etime,interval=!itime

    Big Files Processing Script


    With the above script file defined, for Big Files I specify a command line such as the following.
    $ vdbench -f SIO_vdbench_filesystest.txt fanchor=S:\SIOTemp\FS1 dirwid=1 numfiles=60 filesize=5G fxfersize=128k thrds=64 etime=10h itime=30 numdir=1 dirdep=1 filrdpct=90 -p 5576 -o SIOWS2012R220_NOFUZE_5Gx60_BigFiles_64TH_STX1200_020116

    Big Files Processing Example Results


    The following is one of the result files from the folder of results created via the above command for Big File processing showing totals.


    Run totals

    21:09:36.001 Starting RD=format_for_rd1

    Feb 01, 2016 .Interval. .ReqstdOps.. ...cpu%... read ....read.... ...write.... ..mb/sec... mb/sec .xfer.. ...mkdir... ...rmdir... ..create... ...open.... ...close... ..delete...
    rate resp total sys pct rate resp rate resp read write total size rate resp rate resp rate resp rate resp rate resp rate resp
    21:23:34.101 avg_2-28 2848.2 2.70 8.8 8.32 0.0 0.0 0.00 2848.2 2.70 0.00 356.0 356.02 131071 0.0 0.00 0.0 0.00 0.1 109176 0.1 0.55 0.1 2006 0.0 0.00

    21:23:35.009 Starting RD=rd1; elapsed=36000; fwdrate=max. For loops: None

    07:23:35.000 avg_2-1200 4939.5 1.62 18.5 17.3 90.0 4445.8 1.79 493.7 0.07 555.7 61.72 617.44 131071 0.0 0.00 0.0 0.00 0.0 0.00 0.1 0.03 0.1 2.95 0.0 0.00


    Lots of Little Files Processing Script


    For lots of little files, the following is used.


    $ vdbench -f SIO_vdbench_filesystest.txt fanchor=S:\SIOTEMP\FS1 dirwid=64 numfiles=25600 filesize=16k fxfersize=1k thrds=64 etime=10h itime=30 dirdep=1 filrdpct=90 -p 5576 -o SIOWS2012R220_NOFUZE_SmallFiles_64TH_STX1200_020116

    Lots of Little Files Processing Example Results


    The following is one of the result files from the folder of results created via the above command for lots of little files processing, showing totals.
    Run totals

    09:17:38.001 Starting RD=format_for_rd1

    Feb 02, 2016 .Interval. .ReqstdOps.. ...cpu%... read ....read.... ...write.... ..mb/sec... mb/sec .xfer.. ...mkdir... ...rmdir... ..create... ...open.... ...close... ..delete...
    rate resp total sys pct rate resp rate resp read write total size rate resp rate resp rate resp rate resp rate resp rate resp
    09:19:48.016 avg_2-5 10138 0.14 75.7 64.6 0.0 0.0 0.00 10138 0.14 0.00 158.4 158.42 16384 0.0 0.00 0.0 0.00 10138 0.65 10138 0.43 10138 0.05 0.0 0.00

    09:19:49.000 Starting RD=rd1; elapsed=36000; fwdrate=max. For loops: None

    19:19:49.001 avg_2-1200 113049 0.41 67.0 55.0 90.0 101747 0.19 11302 2.42 99.36 11.04 110.40 1023 0.0 0.00 0.0 0.00 0.0 0.00 7065 0.85 7065 1.60 0.0 0.00
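The summary lines above can be mined with a few lines of script. Assuming (per the column headers in the sample output) that the first two numeric columns after the avg label are the requested ops rate and response time, something like the following pulls them out; the column positions are an assumption inferred from this sample, so verify them against your own result files.

```python
def parse_avg_line(line):
    """Extract the requested-ops rate and response time from a Vdbench 'avg'
    totals line, assuming columns: timestamp, avg_label, rate, resp, ...
    (column order inferred from the sample output above)."""
    fields = line.split()
    return {"label": fields[1], "rate": float(fields[2]), "resp_ms": float(fields[3])}

# The lots-of-little-files run totals line from above:
line = ("19:19:49.001 avg_2-1200 113049 0.41 67.0 55.0 90.0 101747 0.19 "
        "11302 2.42 99.36 11.04 110.40 1023 0.0 0.00 0.0 0.00 0.0 0.00 "
        "7065 0.85 7065 1.60 0.0 0.00")
stats = parse_avg_line(line)
print(stats)  # {'label': 'avg_2-1200', 'rate': 113049.0, 'resp_ms': 0.41}
```

Loop that over the totals files from several runs and you have the makings of a quick comparison table without opening each result folder by hand.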


    Where To Learn More

    View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    The above examples can easily be modified to do different things, particularly if you read the Vdbench documentation on how to set up multi-host, multi-storage system, multiple job streams to do different types of processing. This means you can benchmark a storage system, server, or converged and hyper-converged platform, or simply put a workload on it as part of other testing. There are even options for handling data footprint reduction such as compression and dedupe.

    Ok, nuff said, for now.

    Gs

    Greg Schulz - Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Server StorageIO January 2016 Update Newsletter

    Volume 16, Issue I – beginning of Year (BoY) Edition

    Hello and welcome to the January 2016 Server StorageIO update newsletter.

    Is it just me, or did January disappear in a flash like data stored in non-persistent volatile DRAM memory when the power is turned off? It seems like just the other day that it was the first day of the new year and now we are about to welcome in February. Needless to say, like many of you I have been busy with various projects, many of which are behind the scenes, some of which will start appearing publicly sooner while others later.

    In terms of what I have been working on, it includes the usual performance, availability, capacity and economics (e.g. PACE) themes related to servers, storage, I/O networks, hardware, software, cloud, virtual and containers. This includes NVM as well as NVMe based SSDs, HDDs, cache and tiering technologies, as well as data protection among other things with Hyper-V, VMware as well as various cloud services.

    Enjoy this edition of the Server StorageIO update newsletter, and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts, along with in-the-news commentary appearing soon.

    Cheers GS

    In This Issue

  • Feature Topic
  • Industry Trends News
  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Videos and Podcasts
  • Events and Webinars
  • Recommended Reading List
  • Industry Activity Trends
  • Server StorageIO Lab reports
  • New and Old Vendor Update
  • Resources and Links

    Feature Topic – Microsoft Nano, Server 2016 TP4 and VMware

    This month’s feature topic is virtual servers and software defined storage, including those from VMware and Microsoft. Back in November I mentioned Windows Server 2016 Technical Preview 4 (e.g. TP4) along with Storage Spaces Direct and Nano. As a reminder, you can download your free trial copy of Windows Server 2016 TP4 from this Microsoft site here.

    Three good Microsoft Blog posts about storage spaces to check out include:

    • Storage Spaces Direct in Technical Preview 4 (here)
    • Hardware options for evaluating Storage Spaces Direct in Technical Preview 4 (here)
    • Storage Spaces Direct – Under the hood with the Software Storage Bus (here)

    As for Microsoft Nano, for those not familiar, it’s not a new tablet or mobile device; instead, it is a very lightweight, streamlined version of Windows Server 2016. How streamlined? Much more so than the earlier Windows Server versions that simply disabled the GUI and desktop interfaces. Nano is smaller from a memory and disk storage space perspective, meaning it uses less RAM, boots faster, and has fewer moving parts (e.g. software modules) to break (or need patching).

    Specifically, Nano removes 32-bit support and anything related to the desktop and GUI interfaces, as well as the console interface. That’s right, no console or virtual console to log into; WOW64 is gone, and access is via PowerShell or Windows Management Interface tools from remote systems. How small is it? I have a Nano instance built on a VHDX that is under a GB in size; granted, it’s only for testing. The goal of Nano is to have a very lightweight, streamlined version of Windows Server that can run hundreds (or more) VMs in a small memory footprint, not to mention support lots of containers. Nano is part of Windows TP4; learn more about Nano here in this Microsoft post, including how to get started using it.

    Speaking of VMware, if you have not received an invite yet to their Digital Enterprise February 6, 2016 announcement event, click here to register.

    StorageIOblog Posts

    Recent and popular Server StorageIOblog posts include:

    View other recent as well as past blog posts here

    Server Storage I/O Industry Activity Trends (Cloud, Virtual, Physical)

    StorageIO news (image licensed for use from Shutterstock by StorageIO)

    Some new Products Technology Services Announcements (PTSA) include:

    • EMC announced Elastic Cloud Storage (ECS) V2.2. A main theme of V2.2 is that besides being the 3rd generation of EMC object storage (dating back to Centera, then Atmos), ECS is also where the functionality of Centera, Atmos and other products converges. ECS provides object storage access along with HDFS (Hadoop and Hortonworks certified) and traditional NFS file access.

      Object storage access includes Amazon S3, OpenStack Swift, ATMOS and CAS (Centera). In addition to the access, added Centera functionality for regulatory compliance has been folded into the ECS software stack. For example, ECS is now compatible with SEC 17 a-4(f) and CFTC 1.3(b)-(c) regulations protecting data from being overwritten or erased for a specified retention period. Other enhancements besides scalability, resiliency and ease of use include meta data and search capabilities. You can download and try ECS for non-production workloads with no capacity or functionality limitations from EMC here.

    View other recent news and industry trends here

    StorageIO Commentary in the news

    StorageIO news (image licensed for use from Shutterstock by StorageIO)
    Recent Server StorageIO commentary and industry trends perspectives about news, activities tips, and announcements. In case you missed them from last month:

    • TheFibreChannel.com: Industry Analyst Interview: Greg Schulz, StorageIO
    • EnterpriseStorageForum: Comments Handling Virtual Storage Challenges
    • PowerMore (Dell): Q&A: When to implement ultra-dense storage

    View more Server, Storage and I/O hardware as well as software trends comments here

    Vendors you may not have heard of

    Various vendors (and service providers) you may not know or heard about recently.

    • Datrium – DVX and NetShelf server software defined flash storage and converged infrastructure
    • DataDynamics – StorageX is the software solution for enabling intelligent data migration, including from NetApp OnTap 7 to Clustered OnTap, as well as to and from EMC among other NAS file serving solutions.
    • Paxata – Little and Big Data management solutions

    Check out more vendors you may know, have heard of, or that are perhaps new on the Server StorageIO Industry Links page here (over 1,000 entries and growing).

    StorageIO Tips and Articles

    Recent Server StorageIO articles appearing in different venues include:

    • InfoStor:  Data Protection Gaps, Some Good, Some Not So Good

    And in case you missed them from last month

    • IronMountain:  5 Noteworthy Data Privacy Trends From 2015
    • Virtual Blocks (VMware Blogs):  Part III EVO:RAIL – When And Where To Use It?
    • InfoStor:  Object Storage Is In Your Future
    • InfoStor:  Water, Data and Storage Analogy

    Check out these resources and links technology, techniques, trends as well as tools. View more tips and articles here

    StorageIO Videos and Podcasts

    StorageIO podcasts are also available via and at StorageIO.tv

    StorageIO Webinars and Industry Events

    EMCworld (Las Vegas) May 2-4, 2016

    Interop (Las Vegas) May 4-6 2016

    NAB (Las Vegas) April 19-20, 2016

    TBA – March 31, 2016

    Redmond Magazine Gridstore (How to Migrate from VMware to Hyper-V) February 25, 2016 Webinar (11AM PT)

    TBA – February 23, 2016

    Redmond Magazine and Dell Foglight – Manage and Solve Virtualization Performance Issues Like a Pro (Webinar 9AM PT) – January 19, 2016

    See more webinars and other activities on the Server StorageIO Events page here.

    From StorageIO Labs

    Research, Reviews and Reports

    Quick Look: What’s the Best Enterprise HDD for a Content Server?
    Which enterprise HDD for content servers

    Insight for Effective Server Storage I/O decision-making
    This StorageIO® Industry Trends Perspectives Solution Brief and Lab Review (compliments of Seagate and Servers Direct) looks at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate (www.seagate.com) Enterprise Hard Disk Drive (HDDs).

    I was given the opportunity to do some hands-on testing, running different application workloads on a 2U content solution platform with various Seagate Enterprise 2.5” HDDs. This includes Seagate’s Enterprise Performance HDDs with the enhanced caching feature.

    Read more in this Server StorageIO industry Trends Perspective white paper and lab review.

    Looking for NVM including SSD information? Visit the Server StorageIO www.thessdplace.com and www.thenvmeplace.com micro sites. View other StorageIO lab review and test drive reports here.

    Server StorageIO Recommended Reading List

    The following are various recommended readings including books, blogs and videos. If you have not done so recently, also check out the Intel Recommended Reading List (here) where you will find a couple of mine as well as books from others. For this month’s recommended reading, it’s a blog site. If you have not visited Duncan Epping’s (@DuncanYB) Yellow-Bricks site, you should, particularly if you are interested in virtualization, high availability and related topics.

    Seven Databases in Seven Weeks, a guide to NoSQL, via Amazon.com

    Granted, Duncan, being a member of the VMware CTO office, covers a lot of VMware related themes; however, as the author of several books, he also covers non-VMware topics. Duncan recently did a really good and simple post about rebuilding a failed disk in a VMware VSAN vs. in a legacy RAID or erasure code based storage solution.

    One of the things that struck me as being important with what Duncan wrote about is avoiding apples to oranges comparisons. What I mean by this is that it is easy to compare traditional parity based or mirror type solutions that chunk or shard data on a KByte basis spread over disks, vs. data that is chunked or sharded on a GByte (or larger) basis over multiple servers and their disks. Anyway, check out Duncan’s site and recent post by clicking here.

    Server StorageIO Industry Resources and Links

    Check out these useful links and pages:

    storageio.com/links
    objectstoragecenter.com
    storageioblog.com/data-protection-diaries-main/
    storageperformance.us
    thenvmeplace.com
    thessdplace.com
    storageio.com/performance
    storageio.com/raid
    storageio.com/ssd

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    RIP Windows SIS (Single Instance Storage), or at least in Server 2016


    server storage I/O trends

    As a Microsoft MVP, I received a partner communication today from Microsoft: a heads up, to pass on to others, that Single Instance Storage (SIS) has been removed from Windows Server 2016 (read the Microsoft announcement here, or below). Windows SIS is part of Microsoft’s portfolio of tools and technology for implementing Data Footprint Reduction (DFR).

    Granted, Windows Server 2016 has not been released yet; however, you can download and try out the latest release, Technical Preview 4 (TP4); get the bits from Microsoft here. Learn more about some of the server and storage I/O enhancements in TP4, including storage spaces direct, here.

    Partner Communication from Microsoft

    Partner Communication
    Please relay or forward this notification to ISVs and hardware partners that have used Single Instance Storage (SIS) or implemented the SIS backup API.

    Single Instance Storage (SIS) has been removed from Windows Server 2016
    Summary: Single Instance Storage (SIS), a file system filter driver used for NTFS file deduplication, has been removed from Windows Server. In Dec 2015, the SIS feature was completely removed from Windows Server and Windows Storage Server editions. SIS was officially deprecated in Windows Server 2012 R2 in this announcement and will be removed from future Windows Server Technical Preview releases.

    Call to action:
    Storage vendors that have any application dependencies on legacy SIS functions or SIS backup and restore APIs should verify that their applications behave as expected on Windows Server 2016 and Windows Storage Server 2016. Windows Server 2012 included Microsoft’s next generation of deduplication technology that uses variable-sized chunking and hashing and offers far superior deduplication rates. Users and backup vendors have already moved to support the latest Microsoft deduplication technology and should continue to do so.

    Background:
    SIS was developed and used in Windows Server since 2000, when it was part of Remote Installation Services. SIS became a general purpose file system filter driver in Windows Storage Server 2003 and the SIS groveler (the deduplication engine) was included in Windows Storage Server. In Windows Storage Server 2008, the SIS legacy read/write filter driver was upgraded to a mini-filter and it shipped in Windows Server 2008, Windows Server 2012 and Windows Server 2012 R2 editions. Creating SIS-controlled volumes could only occur on Windows Storage Server, however, all editions of Windows Server could read and write to volumes that were under SIS control and could restore and backup volumes that had SIS applied.

    Volumes using SIS that are restored or plugged into Windows Server 2016 will only be able to read data that was not deduplicated. Prior to migrating or restoring a volume, users must remove SIS from the volume by copying it to another location or removing SIS using SISadmin commands.

    The SIS components and features:

    • SIS Groveler. The SIS Groveler searched for files that were identical on the NTFS file system volume. It then reported those files to the SIS filter driver.
    • SIS Storage Filter. The SIS Storage Filter was a file system filter that managed duplicate copies of files on logical volumes. This filter copied one instance of the duplicate file into the Common Store. The duplicate copies were replaced with a link to the Common Store to improve disk space utilization.
    • SIS Link. SIS links were pointers within the file system, maintaining both application and user experience (including attributes such as file size and directory path) while I/O was transparently redirected to the actual duplicate file located within the SIS Common Store.
    • SIS Common Store. The SIS Common Store served as the repository for each file identified as having duplicates. Each SIS-maintained volume contained one SIS Common Store, which contained all of the merged duplicate files that exist on that volume.
    • SIS Administrative Interface. The SIS Administrative Interface gave network administrators easy access to all SIS controls to simplify management.
    • SIS Backup API. The SIS Backup API (Sisbkup.dll) helped OEMs create SIS-aware backup and restoration solutions.
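For illustration (this sketch is mine, not part of the Microsoft communication above), the groveler / common store / link division of labor can be mimicked in a few lines of Python, using symlinks in place of the NTFS filter driver and its link machinery; the `grovel` function name and `_common_store` directory are made up for this example:

```python
import hashlib
import os

def grovel(volume: str, store_name: str = "_common_store") -> None:
    """Toy single-instance store: one copy of each duplicated file is kept in
    a common store and every original path becomes a symlink to it.
    Illustrative only; real SIS used an NTFS filter driver, not symlinks.
    """
    store = os.path.abspath(os.path.join(volume, store_name))
    os.makedirs(store, exist_ok=True)
    # Pass 1, the "groveler": hash every regular file on the volume.
    by_digest = {}
    for root, dirs, files in os.walk(volume):
        dirs[:] = [d for d in dirs if d != store_name]  # never grovel the store
        for name in files:
            path = os.path.join(root, name)
            if not os.path.islink(path):
                with open(path, "rb") as f:
                    digest = hashlib.sha256(f.read()).hexdigest()
                by_digest.setdefault(digest, []).append(path)
    # Pass 2, the "filter" and "links": merge duplicates into the common store.
    for digest, paths in by_digest.items():
        if len(paths) < 2:
            continue  # SIS only touched files that actually had duplicates
        kept = os.path.join(store, digest)
        os.replace(paths[0], kept)  # master copy moves into the common store
        for path in paths:
            if os.path.lexists(path):
                os.remove(path)
            os.symlink(kept, path)  # original path still resolves to the data
```

After running this over a directory, duplicated files share one stored copy while every original path still opens and reads normally, which is the user-visible behavior SIS aimed for.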

    References:
    https://msdn.microsoft.com/en-us/library/windows/desktop/aa362538(v=vs.85).aspx
    https://msdn.microsoft.com/en-us/library/windows/desktop/aa362512(v=vs.85).aspx
    https://msdn.microsoft.com/en-us/library/dexter.functioncatall.sis(v=vs.90).aspx
    https://blogs.technet.com/b/filecab/archive/2012/05/21/introduction-to-data-deduplication-in-windows-server-2012.aspx
    https://blogs.technet.com/b/filecab/archive/2006/02/03/single-instance-store-sis-in-windows-storage-server-r2.aspx

    What this all means

    Like it or not, SIS is being removed from Windows Server 2016, replaced by the newer Microsoft deduplication data footprint reduction (DFR) technology.

    You have been advised…

    RIP Windows SIS

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    Server StorageIO December 2015 Update Newsletter


    Server and StorageIO Update Newsletter

    Volume 15, Issue XII – End of Year (EOY) Edition

    Hello and welcome to this December 2015 Server StorageIO update newsletter.

    Seasons Greetings and Happy New Years.

    Winter has arrived here in the northern hemisphere and it is also the last day of 2015 (e.g. End Of Year or EOY). For some this means relaxing and having fun after a busy year; for others, it’s the last day of the most important quarter of the most important year ever, particularly if you are involved in sales or spending.

    This is also that time of year where predictions for 2016 will start streaming out as well as reflections looking back at 2015 appear (more on these in January). Another EOY activity is planning for 2016 as well as getting items ready for roll-out or launch in the new year. Overall 2015 has been a very good year with many things in the works both public facing, as well as several behind the scenes some of which will start to appear throughout 2016.

    Enjoy this abbreviated edition of the Server StorageIO update newsletter and watch for new tips, articles, predictions, StorageIO lab report reviews, blog posts, videos and podcasts along with in the news commentary appearing soon.

    Thank you for enabling a successful 2015 and wishing you all a prosperous new year in 2016.

    Cheers GS

    In This Issue

  • Tips and Articles
  • Events and Webinars
  • Resources and Links

    StorageIO Tips and Articles

    Recent Server StorageIO articles appearing in different venues include:

    • IronMountain:  5 Noteworthy Data Privacy Trends From 2015
    • Virtual Blocks (VMware Blogs):  Part III EVO:RAIL – When And Where To Use It?
    • InfoStor:  Object Storage Is In Your Future
    • InfoStor:  Water, Data and Storage Analogy

    Check out these resources and links covering technology, techniques, trends as well as tools. View more tips and articles here

    StorageIO Webinars and Industry Events

    EMCworld (Las Vegas) May 2-4, 2016

    Interop (Las Vegas) May 4-6, 2016

    NAB (Las Vegas) April 19-20, 2016

    Redmond Magazine Gridstore (How to Migrate from VMware to Hyper-V) February 25, 2016 Webinar (11AM PT)

    See more webinars and other activities on the Server StorageIO Events page here.

    Server StorageIO Industry Resources and Links

    Check out these useful links and pages:

    storageio.com/links
    objectstoragecenter.com
    storageioblog.com/data-protection-diaries-main/
    storageperformance.us
    thenvmeplace.com
    thessdplace.com
    storageio.com/raid
    storageio.com/ssd

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Server StorageIO November 2015 Update Newsletter


    Server and StorageIO Update Newsletter

    Volume 15, Issue XI – November 2015

    Hello and welcome to this November 2015 Server StorageIO update newsletter. Winter has arrived here in the northern hemisphere, although technically it’s still fall until the winter solstice in December. Regardless of whether it is summer or winter in your hemisphere, 2015 is about to wrap up, meaning end of year (EOY) activities.

    EOY activities can mean final shopping or acquisitions for technology and services, or simply for home and fun. This is also that time of year when predictions for 2016 will start streaming out as well as reflections looking back at 2015 appear (let’s save those for December ;). Another EOY activity is planning for 2016 as well as getting items ready for roll-out or launch in the new year. Needless to say there is a lot going on, so with that, enjoy this edition of the Server StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts along with in the news commentary appearing soon.

    Cheers GS

    In This Issue

  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Events and Webinars
  • Recommended Reading List
  • Resources and Links

    StorageIOblog Posts

    Recent and popular Server StorageIOblog posts include:

    View other recent as well as past blog posts here

    StorageIO Commentary in the news

    StorageIO news (image licensed for use from Shutterstock by StorageIO)
    Recent Server StorageIO commentary and industry trends perspectives about news, activities tips, and announcements.

    • TheFibreChannel.com: Industry Analyst Interview: Greg Schulz, StorageIO
    • EnterpriseStorageForum: Comments Handling Virtual Storage Challenges
    • PowerMore (Dell): Q&A: When to implement ultra-dense storage

    View more Server, Storage and I/O hardware as well as software trends comments here

     

    StorageIO Tips and Articles

    Recent Server StorageIO articles appearing in different venues include:

    • Virtual Blocks (VMware Blogs):  EVO:RAIL Part II – Why And When To Use It?
      This is the second of a multi-part series looking at Converged Infrastructures (CI), Hyper-Converged Infrastructures (HCI), Cluster in Box (CiB) and other unified solution bundles. There is a growing industry trend of talking about CI, HCI, CiB and other bundled solutions, along with growing IT customer adoption and deployment. Different sized organizations are looking at various types of CI solutions to meet various application and workload needs. Read more here and part I here.
    • TheFibreChannel.com:  Industry Analyst Interview: Greg Schulz, StorageIO
      In part one of a two part article series, Frank Berry, storage industry analyst and Founder of IT Brand Pulse and editor of TheFibreChannel.com, recently spoke with StorageIO Founder Greg Schulz about Fibre Channel SAN integration with OpenStack, why Rackspace is using Fibre Channel and more. Read more here
    • CloudComputingAdmin.com:  Cloud Storage Decision Making – Using Microsoft Azure for cloud storage
      Let’s say that you have been tasked with, or decided that it is time to use (or try) public cloud storage such as Microsoft Azure. Ok, now what do you do and what decisions need to be made? Keep in mind that Microsoft Azure like many other popular public clouds provides many different services available for fee (subscription) along with free trials. These services include applications, compute, networking, storage along with development and management platform tools. Read more here.

    Check out these resources and links covering technology, techniques, trends as well as tools. View more tips and articles here

    StorageIO Videos and Podcasts

    StorageIO podcasts are also available at StorageIO.tv

    StorageIO Webinars and Industry Events

    Deltaware Emerging Technology Summit November 10, 2015

    Dell Data Protection Summit Nov 4, 2015 7AM PT

    Microsoft MVP Summit Nov 2-5, 2015

    See more webinars and other activities on the Server StorageIO Events page here.

    Server StorageIO Recommended Reading List

    The following are various recommended reading including books, blogs and videos. If you have not done so recently, also check out the Intel Recommended Reading List (here) where you will also find a couple of my books.

    In case you had not heard, Microsoft recently released the bits (e.g. software download) for Windows Server 2016 Technical Preview 4 (TP4). TP4 is the successor to Technical Preview 3 (TP3) that was released this past August and is the most recent public preview version of the next Windows Server. TP4 adds a new tiering capability where Windows and storage spaces can cache and migrate data between Hard Disk Drives (HDD) and Non-Volatile Memory (NVM) including flash SSD. The new tiering feature supports mixed HDD and NVM with flash SSD (including NVM Express or NVMe), as well as an all-NVM scenario. Yes, that is correct, tiering with all NVM is not a typo; instead it enables using lower latency, faster NVM along with lower cost, higher capacity flash SSD. Learn more about what’s in TP4 from a server and storage I/O perspective in this Microsoft post, as well as more about Storage Spaces Direct (S2D) in this Microsoft Technet post here and here. You can get the Windows Server 2016 TP4 bits here, which are already running in the Server StorageIO lab.
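As a toy illustration of the tiering idea (not the actual Storage Spaces algorithm), a policy that promotes the most-accessed items to the fast NVM/SSD tier and leaves the rest on the capacity HDD tier can be sketched as:

```python
from collections import Counter

def place_on_tiers(access_counts: Counter, fast_capacity: int):
    """Toy tiering policy: the `fast_capacity` most-accessed items land on the
    fast (NVM/flash SSD) tier, everything else on the capacity (HDD) tier."""
    ranked = [item for item, _ in access_counts.most_common()]
    return set(ranked[:fast_capacity]), set(ranked[fast_capacity:])

# Four items with different access counts, room for two on the fast tier:
hot, cold = place_on_tiers(Counter({"a": 90, "b": 5, "c": 40, "d": 1}), fast_capacity=2)
# hot == {"a", "c"}, cold == {"b", "d"}
```

A real implementation re-evaluates placement as access patterns change and migrates data in the background, which is the "cache and migrate" behavior described above.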

    Server StorageIO Industry Resources and Links

    Check out these useful links and pages:

    storageio.com/links
    objectstoragecenter.com
    storageioblog.com/data-protection-diaries-main/
    storageperformance.us
    thenvmeplace.com
    thessdplace.com
    storageio.com/raid
    storageio.com/ssd

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Water, Data and Storage Analogy

    server storage I/O trends

    Recently I did a piece over at InfoStor titled "Water, Data and Storage Analogy". Besides being taken for granted and all of us being dependent on them, several other similarities exist between water, data, and storage. In addition to linking to that piece, this is a companion post with some different images to help show the similarities between water, data and storage, if for no other reason than to have a few moments of fun. Read the entire piece here.

    Water, Data and Storage Similarities

    Water can get cold and freeze, data can also go cold becoming dormant and a candidate for archiving or cold cloud storage.

    Like data and storage water can be frozen

    Various types of storage drives (HDD & SSD)

    Different types and tiers of frozen water storage containers

    Data, like water, can move or be dormant, can be warm and active, or cold, frozen and inactive. Water, data and storage can also be used for work or fun.

    Fishing on water vs. phishing for data on storage

    Eagle fly fishing on water over the St. Croix river

    Data can be transformed into 3D images and video; water, transformed into snow, can also be made into various images or things.

    Data on storage can be transformed like water (e.g. snow)

    Data, like water, can exist in clouds, resulting in storms that if not properly prepared for, can cause problems.

    Data and storage can be damaged including by water, water can also be damaged by putting things into it or the environment.

    Water can destroy things, data and storage can be destroyed

    There are data lakes, data pools, data ponds, oceans of storage and seas of data as well as data centers.

    inside a data center
    Rows of servers and storage in a data center

    An indoor water lake (e.g. not an indoor data lake)

    As water flows downstream it tends to increase in volume as tributaries or streams add to the volume in lakes, reservoirs, rivers and streams. Another similarity is that water will tend to flow and seek its level filling up space, while data can involve a seek on an HDD in addition to filling up space.

    Flood of water vs. flood of data (e.g. need for Data Protection)

    There are also hybrid uses (or types) of water, just like hybrid technologies for supporting data infrastructures.

    Amphicar hybrid automobile on water

    What this all means

    We might take water, data and storage for granted, yet they each need to be managed, protected, preserved and served. Servers utilize storage to support applications for managing water; water is used for cooling and powering storage, not to mention for making coffee for those who take care of IT resources.

    When you hear about data lakes, ponds or pools, keep in mind that there are also data streams, all of which need to be managed to prevent the flood of data from overwhelming you.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    NVMe Place NVM Non Volatile Memory Express Resources

    Updated 8/31/19
    NVMe place server Storage I/O data infrastructure trends

    Welcome to NVMe place NVM Non Volatile Memory Express Resources. NVMe place is about Non Volatile Memory (NVM) Express (NVMe) with Industry Trends Perspectives, Tips, Tools, Techniques, Technologies, News and other information.

    Disclaimer

    Please note that this NVMe place resources site is independent of the industry trade and promoters group NVM Express, Inc. (e.g. www.nvmexpress.org). NVM Express, Inc. is the sole owner of the NVM Express specifications and trademarks.

    NVM Express Organization
    Image used with permission of NVM Express, Inc.

    Visit the NVM Express industry promoters site here to learn more about their members, news, events, product information, software driver downloads, and other useful NVMe resources content.

     

    The NVMe Place resources and NVM including SCM, PMEM, Flash

    The NVMe place covers Non Volatile Memory (NVM) including nand flash, storage class memories (SCM) and persistent memories (PM), which are storage memory mediums, while NVM Express (NVMe) is an interface for accessing NVM. This NVMe resources page is a companion to The SSD Place which has a broader Non Volatile Memory (NVM) focus including flash among other SSD topics. NVMe is a newer server storage I/O access method and protocol for fast access to NVM based storage and memory technologies. NVMe is an alternative to existing block based server storage I/O access protocols such as AHCI/SATA and SCSI/SAS commonly used for accessing Hard Disk Drives (HDD) along with SSDs among other things.

    Server Storage I/O NVMe PCIe SAS SATA AHCI
    Comparing AHCI/SATA, SCSI/SAS and NVMe all of which can coexist to address different needs.

    Leveraging the standard PCIe hardware interface, NVMe based devices (that have an NVMe controller) can be accessed via various operating systems (and hypervisors such as VMware ESXi) with either in-box drivers or optional third-party device drivers. Devices that support NVMe can be packaged in a 2.5″ drive format that uses a converged 8637/8639 connector (e.g. PCIe x4), coexisting with SAS and SATA devices, or as add-in card (AIC) PCIe cards supporting x4, x8 and other implementations. Initially, NVMe is being positioned as a back-end interface for servers (or storage systems) for accessing fast flash and other NVM based devices.
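As an aside on how such devices surface to an operating system, Linux names NVMe block devices by controller, namespace and partition (e.g. /dev/nvme0n1p1); a small sketch can decompose such names:

```python
import re

# Linux names NVMe block devices nvme<controller>n<namespace>[p<partition>],
# e.g. nvme0n1p1 = controller 0, namespace 1, partition 1.
_NVME_NAME = re.compile(r"^nvme(?P<ctrl>\d+)n(?P<ns>\d+)(?:p(?P<part>\d+))?$")

def parse_nvme_name(dev: str):
    """Split an NVMe block-device name into (controller, namespace, partition).

    Accepts "/dev/nvme0n1" or bare "nvme0n1"; partition is None when absent;
    raises ValueError for non-NVMe names such as sda.
    """
    name = dev.rsplit("/", 1)[-1]
    m = _NVME_NAME.match(name)
    if not m:
        raise ValueError("not an NVMe block device name: " + dev)
    part = m.group("part")
    return int(m.group("ctrl")), int(m.group("ns")), (int(part) if part else None)
```

This also highlights an NVMe concept that AHCI/SATA lacks: a single controller can expose multiple namespaces, each appearing as its own block device.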

    NVMe as back-end storage
    NVMe as a “back-end” I/O interface for NVM storage media

    NVMe as front-end server storage I/O interface
    NVMe as a “front-end” interface for servers or storage systems/appliances

    NVMe has also been shown to work over low latency, high-speed RDMA based network interfaces including RoCE (RDMA over Converged Ethernet) and InfiniBand (read more here, here and here involving Mangstor, Mellanox and PMC among others). What this means is that, like SCSI based SAS, which can be both a back-end drive (HDD, SSD, etc.) access protocol and interface, NVMe can be used as a back-end as well as a front-end server-to-storage interface, similar to how Fibre Channel SCSI_Protocol (aka FCP), SCSI based iSCSI and SCSI RDMA Protocol via InfiniBand (among others) are used.

    NVMe features

    Main features of NVMe include among others:

    • Lower latency due to improved drivers and increased queues (and queue sizes)
    • Lower CPU usage to handle larger numbers of I/Os (more CPU available for useful work)
    • Higher I/O activity rates (IOPs) to boost productivity and unlock the value of fast flash and NVM
    • Bandwidth improvements leveraging the fast PCIe interface and available lanes
    • Dual-pathing of devices like what is available with dual-path SAS devices
    • Unlocking the value of more cores per processor socket and software threads (productivity)
    • Various packaging options, deployment scenarios and configuration options
    • Appears as a standard storage device on most operating systems
    • Plug-and-play with in-box drivers on many popular operating systems and hypervisors
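The latency and queueing bullets above can be made concrete with Little's Law (sustainable IOPS = outstanding I/Os / per-I/O latency); the 100 microsecond latency used below is an assumed example value, not a measured one:

```python
def iops_ceiling(outstanding_ios: int, latency_s: float) -> float:
    """Little's Law: sustainable IOPS = concurrent outstanding I/Os / latency."""
    return outstanding_ios / latency_s

# A single AHCI/SATA queue is limited to 32 commands; at an assumed
# 100 microsecond per-I/O latency that caps out around 320,000 IOPS:
sata_ceiling = iops_ceiling(32, 100e-6)
# NVMe allows up to 64K queues of up to 64K commands each; even a modest
# 4 queues x 256 entries at the same latency raises the ceiling ~32x:
nvme_ceiling = iops_ceiling(4 * 256, 100e-6)
```

The point is that deeper, more numerous queues let fast NVM stay busy, which is where the "unlock the value of fast flash and NVM" bullet comes from.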

    Shared external PCIe using NVMe
    NVMe and shared PCIe (e.g. shared PCIe flash DAS)

    NVMe related content and links

    The following are some of my tips, articles, blog posts, presentations and other content, along with material from others pertaining to NVMe. Keep in mind that the question should not be if NVMe is in your future, rather when, where, with what, from whom and how much of it will be used as well as how it will be used.

    • How to Prepare for the NVMe Server Storage I/O Wave (Via Micron.com)
    • Why NVMe Should Be in Your Data Center (Via Micron.com)
    • NVMe U2 (8639) vs. M2 interfaces (Via Gamersnexus)
    • Enmotus FuzeDrive MicroTiering (StorageIO Lab Report)
    • EMC DSSD D5 Rack Scale Direct Attached Shared SSD All Flash Array Part I (Via StorageIOBlog)
    • Part II – EMC DSSD D5 Direct Attached Shared AFA (Via StorageIOBlog)
    • NAND, DRAM, SAS/SCSI & SATA/AHCI: Not Dead, Yet! (Via EnterpriseStorageForum)
    • Non Volatile Memory (NVM), NVMe, Flash Memory Summit and SSD updates (Via StorageIOblog)
    • Microsoft and Intel showcase Storage Spaces Direct with NVM Express at IDF ’15 (Via TechNet)
    • NVM Express solutions (Via SuperMicro)
    • Gaining Server Storage I/O Insight into Microsoft Windows Server 2016 (Via StorageIOblog)
    • PMC-Sierra Scales Storage with PCIe, NVMe (Via EEtimes)
    • RoCE updates among other items (Via InfiniBand Trade Association (IBTA) December Newsletter)
    • NVMe: The Golden Ticket for Faster Flash Storage? (Via EnterpriseStorageForum)
    • What should I consider when using SSD cloud? (Via SearchCloudStorage)
    • MSP CMG, Sept. 2014 Presentation (Flash back to reality – Myths and Realities – Flash and SSD Industry trends perspectives plus benchmarking tips)– PDF
    • Selecting Storage: Start With Requirements (Via NetworkComputing)
    • PMC Announces Flashtec NVMe SSD NVMe2106, NVMe2032 Controllers With LDPC (Via TomsITpro)
    • Exclusive: If Intel and Micron’s “Xpoint” is 3D Phase Change Memory, Boy Did They Patent It (Via Dailytech)
    • Intel & Micron 3D XPoint memory — is it just CBRAM hyped up? Curation of various posts (Via Computerworld)
    • How many IOPS can a HDD, HHDD or SSD do (Part I)?
    • How many IOPS can a HDD, HHDD or SSD do with VMware? (Part II)
    • I/O Performance Issues and Impacts on Time-Sensitive Applications (Via CMG)
    • Via EnterpriseStorageForum: 5 Hot Storage Technologies to Watch
    • Via EnterpriseStorageForum: 10-Year Review of Data Storage

    Non-Volatile Memory (NVM) Express (NVMe) continues to evolve as a technology for enabling and improving server storage I/O for NVM including nand flash SSD storage. NVMe streamlines performance, enabling more work to be done (e.g. IOPs) and more data to be moved (bandwidth) at a lower response time while using less CPU.

    NVMe and SATA flash SSD performance

    The above figure is a quick look comparing nand flash SSD being accessed via SATA III (6Gbps) on the left and NVMe (x4) on the right. As with any server storage I/O performance comparison there are many variables, so take the results with a grain of salt. While IOPs and bandwidth are often discussed, keep in mind that NVMe’s new protocol, drivers and device controllers streamline I/O so that less CPU is needed.
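Part of the bandwidth gap in such comparisons is simply the interface ceiling. A back-of-envelope sketch (real throughput also depends on the device, protocol overhead and workload):

```python
def effective_mb_s(line_rate_gbit: float, payload_bits: int, coded_bits: int) -> float:
    """Usable MB/s of a link after its physical-layer encoding overhead."""
    return line_rate_gbit * 1e9 * payload_bits / coded_bits / 8 / 1e6

# SATA III: 6 Gbit/s line rate with 8b/10b encoding -> ~600 MB/s ceiling.
sata3_mb_s = effective_mb_s(6.0, 8, 10)
# NVMe on PCIe Gen3 x4: 4 lanes x 8 GT/s with 128b/130b -> ~3,938 MB/s.
pcie_g3_x4_mb_s = effective_mb_s(4 * 8.0, 128, 130)
```

So even before protocol efficiencies are counted, a PCIe Gen3 x4 NVMe device has roughly six and a half times the raw bandwidth ceiling of a SATA III attachment.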

    Additional NVMe Resources

    Also check out the Server StorageIO companion micro sites landing pages including thessdplace.com (SSD focus), data protection diaries (backup, BC/DR/HA and related topics), cloud and object storage, and server storage I/O performance and benchmarking here.

    If you are into the real bits and bytes details, such as device driver level content, check out the Linux NVMe reflector forum. The linux-nvme forum is a good source for developers to stay up on what is happening in and around device drivers and associated topics.

    Additional learning experiences along with common questions (and answers), as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC


    Wrap Up

    Watch for updates with more content, links and NVMe resources to be added here soon.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.