Where, How to use NVMe overview primer

server storage I/O trends
Updated 1/12/2018

This is the fourth in a five-part miniseries providing a primer and overview of NVMe. View companion posts and more material at www.thenvmeplace.com.

Where and how to use NVMe

As mentioned and shown in the second post of this series, NVMe is initially being deployed inside servers as “back-end,” fast, low latency storage using PCIe Add-In-Cards (AIC) and flash drives. Similar to SAS NVM SSDs and HDDs that support dual paths, NVMe has a primary path and an alternate path; if one path fails, traffic keeps flowing without causing slowdowns. This capability is an advantage for those already familiar with the dual-path capabilities of SAS, enabling them to design and configure resilient solutions.

NVMe devices, including NVM flash AICs, will also find their way into storage systems and appliances as back-end storage, co-existing with SAS or SATA devices. Another emerging deployment scenario is shared NVMe direct attached storage (DAS) with multiple-server access via PCIe external storage with dual paths for resiliency.

Even though NVMe is a new protocol, it leverages existing skill sets. Anyone familiar with SAS/SCSI and AHCI/SATA storage devices will need little or no training to deploy and manage NVMe. Since NVMe-enabled storage appears to a host server or storage appliance as a LUN or volume, existing Windows, Linux and other OS or hypervisor tools can be used. On Windows, for example, other than going to the device manager to see what the device is and what controller it is attached to, it is no different from installing and using any other storage device. The experience on Linux is similar, particularly when using in-the-box drivers that ship with the OS. One minor Linux difference of note is that instead of seeing a /dev/sda device, you might see a device name such as /dev/nvme0n1 or /dev/nvme0n1p1 (with a partition).
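For example, the following commands (illustrative, not from the original post) show how NVMe devices surface through standard tools; the Linux nvme list command assumes the optional nvme-cli package is installed, and the PowerShell cmdlet assumes the Windows Storage module is present.

# Linux: NVMe namespaces show up as block devices such as nvme0n1 and nvme0n1p1
$ lsblk

# Linux (optional nvme-cli package): list NVMe controllers, namespaces and models
$ sudo nvme list

# Windows PowerShell: show each disk along with its bus type (NVMe, SAS, SATA, etc.)
PS> Get-PhysicalDisk | Select-Object FriendlyName, BusType, MediaType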

Keep in mind that NVMe, like SAS, can be used as a “back-end” access from servers (or storage systems) to a storage device or system, for example JBOD SSD drives (e.g. 8639), PCIe AiC or M.2 devices. NVMe can also, like SAS, be used as a “front-end” on storage systems or appliances in place of, or in addition to, other access such as GbE-based iSCSI, Fibre Channel, FCoE, InfiniBand, NAS or Object.

What this means is that NVMe can be implemented in a storage system or appliance on both the “front-end” (e.g. server or host side) as well as on the “back-end” (e.g. device or drive side), just like SAS. Another similarity to SAS is that NVMe supports dual-pathing of devices, permitting system architects to design resiliency into their solutions. When the primary path fails, access to the storage device can be maintained with failover so that fast I/O operations continue, whether using SAS or NVMe.
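As a hedged illustration (not from the original post), here is one way to check for native NVMe multipathing on a Linux host; it assumes a reasonably current kernel plus the optional nvme-cli package, and output will vary by platform.

# Show whether the kernel's native NVMe multipath support is enabled (Y or N)
$ cat /sys/module/nvme_core/parameters/multipath

# List NVMe subsystems along with the controller paths behind each namespace
$ sudo nvme list-subsys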

NVM connectivity options including NVMe
Various NVM NAND flash SSD devices and their connectivity, including NVMe, M.2, SATA and 12 Gbps SAS, are shown in figure 6.

Various NVM SSD interfaces including NVMe and M.2
Figure 6 Various NVM flash SSDs (Via StorageIO Labs)

On the left in figure 6 is a NAND flash NVMe PCIe AiC, top center is a USB thumb drive that has been opened up showing a NAND die (chip), middle center is an mSATA card, bottom center is an M.2 card, next on the right is a 2.5” 6 Gbps SATA device, and far right is a 12 Gbps SAS device. Note that an M.2 card can be either a SATA or an NVMe device, depending on its internal controller, which determines which host or server protocol and device driver to use.
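As a simple, assumption-laden sketch (not from the original post): on a Linux system an M.2 NVMe card enumerates as its own PCIe storage controller and its namespaces appear as /dev/nvmeXnY, while an M.2 SATA card is accessed through the AHCI controller and shows up as /dev/sdX.

# An NVMe M.2 card appears as a PCIe "Non-Volatile memory controller"
$ lspci | grep -i 'non-volatile memory'

# The TRAN column shows the transport (nvme vs. sata) for each block device
$ lsblk -o NAME,TRAN,MODEL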

The role of PCIe has evolved over the years, as have its performance and packaging form factors. In addition to add-in card (AiC) slots, PCIe form factors also include the M.2 small form factor (aka Next Generation Form Factor or NGFF) that replaces legacy mini-PCIe cards and that, like other devices, can be either an NVMe or a SATA device.

Like M.2 NGFF, the 8639 (and related 8637) drive connector shown in figure 7 can be used to support SATA as well as NVMe, depending on the device installed and host server driver support. There are various M.2 NGFF form factors including 2230, 2242, 2260 and 2280. There are also M.2 to regular physical SATA converter or adapter cards available, enabling M.2 devices to attach to legacy SAS/SATA RAID adapters or HBAs.

NVMe 8637 and 8639 interface backplane slots
Figure 7 PCIe NVMe 8639 Drive (Via StorageIO Labs)

On the left of figure 7 is a view toward the backplane of a storage enclosure in a server that supports SAS, SATA, and NVMe (e.g. 8639). On the right of figure 7 is the connector end of an 8639 NVM SSD showing additional pin connectors compared to a SAS or SATA device. Those extra pins provide PCIe x4 connectivity to the NVMe devices. The 8639 drive connectors enable a device such as an NVM or NAND flash SSD to share a common physical storage enclosure with SAS and SATA devices, including optional dual-pathing.

Where To Learn More

View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Be careful about judging a device or component by its physical packaging or interface connection. In figure 6 the device has SAS/SATA along with PCIe physical connections, yet it’s what’s inside (e.g. its controller) that determines if it is a SAS, SATA or NVMe enabled device. This also applies to HDDs and PCIe AiC devices, as well as I/O networking cards and adapters that may use common physical connectors, yet implement different protocols. For example, the SFF-8643 HD-Mini SAS internal connector is used for 12 Gbps SAS attachment as well as for carrying PCIe to devices such as 8639 drives.

Depending on the type of device inserted, access can be via NVMe over PCIe x4, SAS (12 Gbps or 6 Gbps) or SATA. Enclosures based on the 8639 connector have a physical connection from their backplanes to the individual drive connectors, as well as to PCIe, SAS, and SATA cards or connectors on the server motherboard or via PCIe riser slots.

While PCIe devices, including AiC slot based, M.2 or 8639, can have common physical interfaces and lower level signaling, it’s the protocols, controllers, and drivers that determine how they get software defined and used. Keep in mind that it’s not just the physical connector or interface that determines what a device is or how it is used; it’s also the protocol, command set, controller and device drivers.

Continue reading about NVMe with Part V (Where to learn more, what this all means) in this five-part series, or jump to Part I, Part II or Part III.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

NVMe Need for Performance Speed

server storage I/O trends
Updated 1/12/2018

This is the third in a five-part mini-series providing a primer and overview of NVMe. View companion posts and more material at www.thenvmeplace.com.

How fast is NVMe?

It depends! Generally speaking NVMe is fast!

However, fast interfaces and protocols also need fast storage devices, adapters, drivers, servers, operating systems and hypervisors, as well as applications that drive or benefit from the increased speed.

A server storage I/O example is shown in figure 5, where a 6 Gbps SATA NVM flash SSD (left) is compared with an NVMe 8639 (x4) drive; both were directly attached to a server. The workload is 8 Kbyte sized random writes with 128 threads (workers), showing results for IOPs (solid bar) along with response time (dotted line). Not surprisingly, the NVMe device has a lower response time and a higher number of IOPs. However, also note how the amount of CPU time used per IOP is lower on the right with the NVMe drive.

NVMe storage I/O performance
Figure 5 6 Gbps SATA NVM flash SSD vs. NVMe flash SSD

While many people are aware of or learning about the IOP and bandwidth improvements as well as the decrease in latency with NVMe, something that gets overlooked is how much less CPU is used. If a server is spending time in wait modes, that can result in lost productivity; by finding and removing those barriers, more work can be done on a given server, perhaps even delaying a server upgrade.

In figure 5, notice the lower amount of CPU used per unit of work being done (e.g. I/O or IOP), which translates to more effective use of your server resources. What that means is either doing more work with what you have, potentially delaying a CPU or server upgrade, or using those extra CPU cycles to power software defined storage management stacks including erasure coding or advanced parity RAID, replication and other functions.

Table 1 shows relative server I/O performance of some NVM flash SSD devices across various workloads. As with any performance comparison, take these and the following results with a grain of salt, as your speed will vary.

NAND flash SSD | Metric | 8KB Seq. Read | 8KB Seq. Write | 8KB Ran. Read | 8KB Ran. Write | 1MB Seq. Read | 1MB Seq. Write | 1MB Ran. Read | 1MB Ran. Write
NVMe PCIe AiC | IOPs | 41829.19 | 33349.36 | 112353.6 | 28520.82 | 1437.26 | 889.36 | 1336.94 | 496.74
NVMe PCIe AiC | Bandwidth (MBps) | 326.79 | 260.54 | 877.76 | 222.82 | 1437.26 | 889.36 | 1336.94 | 496.74
NVMe PCIe AiC | Resp. (ms) | 3.23 | 3.90 | 1.30 | 4.56 | 178.11 | 287.83 | 191.27 | 515.17
NVMe PCIe AiC | CPU / IOP | 0.001571 | 0.002003 | 0.000689 | 0.002342 | 0.007793 | 0.011244 | 0.009798 | 0.015098
12Gb SAS | IOPs | 34792.91 | 34863.42 | 29373.5 | 27069.56 | 427.19 | 439.42 | 416.68 | 385.9
12Gb SAS | Bandwidth (MBps) | 271.82 | 272.37 | 229.48 | 211.48 | 427.19 | 429.42 | 416.68 | 385.9
12Gb SAS | Resp. (ms) | 3.76 | 3.77 | 4.56 | 5.71 | 599.26 | 582.66 | 614.22 | 663.21
12Gb SAS | CPU / IOP | 0.001857 | 0.00189 | 0.002267 | 0.00229 | 0.011236 | 0.011834 | 0.01416 | 0.015548
6Gb SATA | IOPs | 33861.29 | 9228.49 | 28677.12 | 6974.32 | 363.25 | 65.58 | 356.06 | 55.86
6Gb SATA | Bandwidth (MBps) | 264.54 | 72.1 | 224.04 | 54.49 | 363.25 | 65.58 | 356.06 | 55.86
6Gb SATA | Resp. (ms) | 4.05 | 26.34 | 4.67 | 35.65 | 704.70 | 3838.59 | 718.81 | 4535.63
6Gb SATA | CPU / IOP | 0.001899 | 0.002546 | 0.002298 | 0.003269 | 0.012113 | 0.032022 | 0.015166 | 0.046545

Table 1 Relative performance of various protocols and interfaces

The workload results in table 1 were generated using a vdbench script running on a Windows 2012 R2 based server and are intended to be a relative indicator of different protocols and interfaces; your performance mileage will vary. The results compare the number of IOPs (activity rate) for reads and writes, random and sequential, across small 8KB and large 1MB sized I/Os.
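For reference, a minimal vdbench raw-device (SD/WD/RD) definition along the following lines could drive the 8KB random write portion of such a test; the device path, thread count and run time shown here are illustrative assumptions, not the actual StorageIO script.

# sd = storage definition (raw device under test), wd = workload definition, rd = run definition
sd=sd1,lun=\\.\PhysicalDrive1,threads=128
# 8KB transfers, 0 percent reads (100% writes), 100 percent random (seekpct=100)
wd=wd1,sd=sd1,xfersize=8k,rdpct=0,seekpct=100
# Run at maximum I/O rate for 600 seconds, reporting every 30 seconds
rd=rd1,wd=wd1,iorate=max,elapsed=600,interval=30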

Also shown in table 1 are bandwidth or throughput (e.g. amount of data moved), response time and the amount of CPU used per IOP. Note in table 1 how NVMe can do higher IOPs with a lower CPU cost per IOP, or, using a similar amount of CPU, do more work at a lower latency. SSDs have been used for decades to help reduce CPU bottlenecks or defer server upgrades by removing I/O wait times and reducing CPU consumption (e.g. wait or lost time).

Can NVMe solutions run faster than those shown above? Absolutely!

Where To Learn More

View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Continue reading about NVMe with Part IV (Where and How to use NVMe) in this five-part series, or jump to Part I, Part II or Part V.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Different NVMe Configurations

server storage I/O trends
Updated 1/12/2018

This is the second in a five-part mini-series providing a primer and overview of NVMe. View companion posts and more material at www.thenvmeplace.com.

The many different faces or facets of NVMe configurations

NVMe can be deployed and used in many ways; the following are some examples to show you its flexibility today as well as where it may be headed in the future. An initial deployment scenario is NVMe devices (e.g. PCIe cards, M.2 or 8639 drives) installed as storage in servers or as back-end storage in storage systems. Figure 2 below shows a networked storage system or appliance that uses traditional server storage I/O interfaces and protocols for front-end access, however with the back-end storage being all NVMe, or a hybrid of NVMe, SAS and SATA devices.
NVMe as back-end server storage I/O interface to NVM
Figure 2 NVMe as back-end server storage I/O interface to NVM storage

A variation of the above is using NVMe for shared direct attached storage (DAS) such as the EMC DSSD D5. In the following scenario (figure 3), multiple servers in a rack or cabinet configuration have an extended PCIe connection that attaches to a shared all flash storage array using NVMe on the front-end. Read more about this approach and EMC DSSD D5 here or click on the image below.

EMC DSSD D5 NVMe
Figure 3 Shared DAS All Flash NVM Storage using NVMe (e.g. EMC DSSD D5)

Next up, in figure 4, is a variation of the previous example, except NVMe is implemented over an RDMA (Remote Direct Memory Access) based fabric network, for example RoCE (RDMA over Converged Ethernet, pronounced “Rocky”) over converged 10GbE/40GbE, or InfiniBand (see the connection sketch after figure 4).

NVMe over Fabric RoCE
Figure 4 NVMe as a “front-end” interface for servers or storage systems/appliances
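As a sketch of what host-side access to such a fabric could look like with a Linux initiator and the nvme-cli package (the transport module, IP address, port and NQN below are placeholder assumptions, and the target must of course support NVMe over Fabrics via RDMA):

# Load the NVMe over Fabrics RDMA transport
$ sudo modprobe nvme-rdma

# Discover subsystems exported by an RDMA (RoCE or InfiniBand) target
$ sudo nvme discover -t rdma -a 192.168.100.10 -s 4420

# Connect to a discovered subsystem; it then appears as a local /dev/nvmeXnY device
$ sudo nvme connect -t rdma -a 192.168.100.10 -s 4420 -n nqn.2016-06.io.example:subsys1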

Where To Learn More

View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Watch for more topology and configuration options as NVMe along with associated hardware, software and I/O networking tools and technologies emerge over time.

Continue reading about NVMe with Part III (Need for Performance Speed) in this five-part series, or jump to Part I, Part IV or Part V.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

NVMe overview primer

server storage I/O trends
Updated 2/2/2018

This is the first in a five-part mini-series providing a primer and overview of NVMe. View companion posts and more material at www.thenvmeplace.com.

What is NVM Express (NVMe)

Non-Volatile Memory (NVM) includes persistent memory such as NAND flash and other forms of Solid State Devices (SSD). NVM Express (NVMe) is a new server storage I/O protocol alternative to AHCI/SATA and the SCSI protocol used by Serial Attached SCSI (SAS). Note that the name NVMe is owned and managed by the industry trade group NVM Express (www.nvmexpress.org).

The key question with NVMe is not if, rather when, where, why, how and with what will it appear in your data center or server storage I/O data infrastructure. This is a companion to material that I have on my micro site www.thenvmeplace.com that provides an overview of NVMe, as well as helps to discuss some of the questions about NVMe.

Main features of NVMe include among others:

  • Lower latency due to improved drivers and increased queues (and queue sizes)
  • Lower CPU used to handle a larger number of I/Os (more CPU available for useful work)
  • Higher I/O activity rates (IOPs) to boost productivity and unlock the value of fast flash and NVM
  • Bandwidth improvements leveraging various fast PCIe interfaces and available lanes
  • Dual-pathing of devices, similar to what is available with dual-path SAS devices
  • Unlocking the value of more cores per processor socket and software threads (productivity)
  • Various packaging options, deployment scenarios and configuration options
  • Appears as a standard storage device on most operating systems
  • Plug-and-play with in-box drivers on many popular operating systems and hypervisors

Why NVMe for Server Storage I/O?
NVMe has been designed from the ground up for accessing fast storage, including flash SSD, leveraging PCI Express (PCIe). The benefits include lower latency, improved concurrency, increased performance and the ability to unleash a lot more of the potential of modern multi-core processors.

NVMe Server Storage I/O
Figure 1 shows common server I/O connectivity including PCIe, SAS, SATA and NVMe.

NVMe, leveraging PCIe, enables modern applications to reach their full potential. NVMe is one of those rare, generational protocol upgrades that comes around every couple of decades to help unlock the full performance value of servers and storage. NVMe does need new drivers, but once in place, it plugs and plays seamlessly with existing tools, software and user experiences. Likewise, many of those drivers now ship in the box with popular operating systems and hypervisors.
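A quick, illustrative way to confirm those in-box drivers are present and loaded (assuming a recent Linux distribution, and Windows 8.1/Server 2012 R2 or later where the stornvme driver ships in the box):

# Linux: the in-box NVMe driver modules
$ lsmod | grep nvme

# Windows: look for the inbox stornvme driver
C:\> driverquery | findstr /i nvme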

While SATA and SAS provided enough bandwidth for HDDs and some SSD uses, more performance is needed. Near-term, NVMe does not replace SAS or SATA; they can and will coexist for years to come, enabling different tiers of server storage I/O performance.

NVMe unlocks the potential of flash-based storage by allowing up to 65,536 (64K) queues, each with 64K commands per queue. SATA allowed for only one command queue capable of holding 32 commands, and SAS supports a single queue with 64K command entries. As a result, the storage I/O capabilities of flash can now be fed across PCIe much faster, enabling modern multi-core processors to complete more useful work in less time.
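A hedged example of peeking at what a given device and driver actually negotiated on a Linux host (assumes the optional nvme-cli package; feature 0x07 is the NVMe Number of Queues feature and the values it reports are zero-based):

# Show controller identify data, including queue entry sizes and related capabilities
$ sudo nvme id-ctrl /dev/nvme0

# Query the Number of Queues feature (0x07) in human readable form
$ sudo nvme get-feature /dev/nvme0 -f 0x07 -H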

Where To Learn More

View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Continue reading about NVMe with Part II (Different NVMe configurations) in this five-part series, or jump to Part III, Part IV or Part V.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Big Files Lots of Little File Processing Benchmarking with Vdbench



server storage data infrastructure i/o File Processing Benchmarking with Vdbench

Updated 2/10/2018

Need to test a server, storage I/O networking, hardware, software, services, cloud, virtual, physical or other environment that is doing some form of file processing, or where you simply want some extra workload running in the background for whatever reason? An option is file processing benchmarking with Vdbench.

I/O performance

Getting Started


Here’s a quick and relatively easy way to do it with Vdbench (free from Oracle). Granted, there are other tools, both free and for fee, that can do similar things; however, we will leave those for another day and post. Here’s the con to this approach: there is no GUI like what you have available with some other tools. Here’s the pro to this approach: it’s free, flexible and limited only by your creativity, amount of storage space, server memory and I/O capacity.

If you need a background on Vdbench and benchmarking, check out the series of related posts here (e.g. www.storageio.com/performance).

Get and Install the Vdbench Bits and Bytes


If you do not already have Vdbench installed, get a copy from the Oracle or Source Forge site (now points to Oracle here).

Vdbench is free; you simply sign up, accept the free license, and select the version to download (it is a single, common distribution for all operating systems) along with the documentation.

Installation, particularly on Windows, is really easy: basically follow the instructions in the documentation by copying the contents of the download folder to a specified directory, set up any environment variables, and make sure that you have Java installed.

Here is a hint and tip for Windows Servers, if you get an error message about counters, open a command prompt with Administrator rights, and type the command:

$ lodctr /r


The above command will reset your I/O counters. Note however that the command will also overwrite counters if enabled, so only use it if you have to.

Likewise, the *nix install is also easy: copy the files, make sure to copy the applicable *nix shell script (they are in the download folder), and verify Java is installed and working.

You can do a vdbench -t (windows) or ./vdbench -t (*nix) to verify that it is working.

Vdbench File Processing

There are many options with Vdbench, as it has a very robust command and scripting language including the ability to set up for loops among other things. We are only going to touch the surface here using its file processing capabilities. Likewise, Vdbench can run from a single server accessing multiple storage systems or file systems, as well as from multiple servers to a single file system. For simplicity, we will stick with the basics in the following examples to exercise a local file system. The number of files and file size are limited by server memory and storage space.

You can specify the number and depth of directories to put files into for processing. One of the parameters is the anchor point for the file processing; in the following examples S:\SIOTEMP\FS1 is used as the anchor point. Other parameters include the I/O size, percent reads, number of threads, run time and sample interval, as well as the output folder name for the result files. Note that unlike some tools, Vdbench does not create a single file of results, rather a folder with several files including summary, totals, parameters, histograms and CSV among others.


Simple Vdbench File Processing Commands

For flexibility and ease of use I put the following three Vdbench commands into a simple text file that is then called with parameters on the command line.
fsd=fsd1,anchor=!fanchor,depth=!dirdep,width=!dirwid,files=!numfiles,size=!filesize

fwd=fwd1,fsd=fsd1,rdpct=!filrdpct,xfersize=!fxfersize,fileselect=random,fileio=random,threads=!thrds

rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=!etime,interval=!itime

Simple Vdbench script

# SIO_vdbench_filesystest.txt
#
# Example Vdbench script for file processing
#
# fanchor = file system place where directories and files will be created
# dirwid = how wide should the directories be (e.g. how many directories wide)
# numfiles = how many files per directory
# filesize = size in k, m, g e.g. 16k = 16KBytes
# fxfersize = file I/O transfer size in kbytes
# thrds = how many threads or workers
# etime = how long to run in minutes (m) or hours (h)
# itime = interval sample time e.g. 30 seconds
# dirdep = how deep the directory tree
# filrdpct = percent of reads e.g. 90 = 90 percent reads
# -p processnumber = optional specify a process number, only needed if running multiple vdbenchs at same time, number should be unique
# -o output file that describes what being done and some config info
#
# Sample command line shown for Windows, for *nix add ./
#
# The real Vdbench script, with command line parameters indicated by a leading !
#

fsd=fsd1,anchor=!fanchor,depth=!dirdep,width=!dirwid,files=!numfiles,size=!filesize

fwd=fwd1,fsd=fsd1,rdpct=!filrdpct,xfersize=!fxfersize,fileselect=random,fileio=random,threads=!thrds

rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=!etime,interval=!itime

Big Files Processing Script


With the above script file defined, for Big Files I specify a command line such as the following.
$ vdbench -f SIO_vdbench_filesystest.txt fanchor=S:\SIOTemp\FS1 dirwid=1 numfiles=60 filesize=5G fxfersize=128k thrds=64 etime=10h itime=30 numdir=1 dirdep=1 filrdpct=90 -p 5576 -o SIOWS2012R220_NOFUZE_5Gx60_BigFiles_64TH_STX1200_020116

Big Files Processing Example Results


The following is one of the result files from the folder of results created via the above command for Big File processing showing totals.


Run totals

21:09:36.001 Starting RD=format_for_rd1

Feb 01, 2016 .Interval. .ReqstdOps.. ...cpu%... read ....read.... ...write.... ..mb/sec... mb/sec .xfer.. ...mkdir... ...rmdir... ..create... ...open.... ...close... ..delete...
rate resp total sys pct rate resp rate resp read write total size rate resp rate resp rate resp rate resp rate resp rate resp
21:23:34.101 avg_2-28 2848.2 2.70 8.8 8.32 0.0 0.0 0.00 2848.2 2.70 0.00 356.0 356.02 131071 0.0 0.00 0.0 0.00 0.1 109176 0.1 0.55 0.1 2006 0.0 0.00

21:23:35.009 Starting RD=rd1; elapsed=36000; fwdrate=max. For loops: None

07:23:35.000 avg_2-1200 4939.5 1.62 18.5 17.3 90.0 4445.8 1.79 493.7 0.07 555.7 61.72 617.44 131071 0.0 0.00 0.0 0.00 0.0 0.00 0.1 0.03 0.1 2.95 0.0 0.00


Lots of Little Files Processing Script


For lots of little files, the following is used.


$ vdbench -f SIO_vdbench_filesystest.txt fanchor=S:\SIOTEMP\FS1 dirwid=64 numfiles=25600 filesize=16k fxfersize=1k thrds=64 etime=10h itime=30 dirdep=1 filrdpct=90 -p 5576 -o SIOWS2012R220_NOFUZE_SmallFiles_64TH_STX1200_020116

Lots of Little Files Processing Example Results


The following is one of the result files from the folder of results created via the above command for lots of little files processing, showing totals.
Run totals

09:17:38.001 Starting RD=format_for_rd1

Feb 02, 2016 .Interval. .ReqstdOps.. ...cpu%... read ....read.... ...write.... ..mb/sec... mb/sec .xfer.. ...mkdir... ...rmdir... ..create... ...open.... ...close... ..delete...
rate resp total sys pct rate resp rate resp read write total size rate resp rate resp rate resp rate resp rate resp rate resp
09:19:48.016 avg_2-5 10138 0.14 75.7 64.6 0.0 0.0 0.00 10138 0.14 0.00 158.4 158.42 16384 0.0 0.00 0.0 0.00 10138 0.65 10138 0.43 10138 0.05 0.0 0.00

09:19:49.000 Starting RD=rd1; elapsed=36000; fwdrate=max. For loops: None

19:19:49.001 avg_2-1200 113049 0.41 67.0 55.0 90.0 101747 0.19 11302 2.42 99.36 11.04 110.40 1023 0.0 0.00 0.0 0.00 0.0 0.00 7065 0.85 7065 1.60 0.0 0.00


Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

The above examples can easily be modified to do different things, particularly if you read the Vdbench documentation on how to set up multi-host, multi-storage system, and multiple job streams to do different types of processing. This means you can benchmark storage systems, servers, or converged and hyper-converged platforms, or simply put a workload on them as part of other testing. There are even options for handling data footprint reduction such as compression and dedupe.
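For example, adding host definitions (hd=) ahead of the fsd/fwd/rd lines is how vdbench fans the same workload out across several servers; the host names, user and install path below are illustrative assumptions, so check the vdbench documentation for the specifics of your environment.

# hd = host definition; vdbench drives the workload from each listed host over ssh
hd=default,vdbench=/opt/vdbench,user=tester,shell=ssh
hd=host1,system=server1.example.com
hd=host2,system=server2.example.com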

Ok, nuff said, for now.

Gs

Greg Schulz - Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

NVMe Place NVM Non Volatile Memory Express Resources

Updated 8/31/19
NVMe place server Storage I/O data infrastructure trends

Welcome to NVMe place NVM Non Volatile Memory Express Resources. NVMe place is about Non Volatile Memory (NVM) Express (NVMe) with Industry Trends Perspectives, Tips, Tools, Techniques, Technologies, News and other information.

Disclaimer

Please note that this NVMe place resources site is independent of the industry trade and promoters group NVM Express, Inc. (e.g. www.nvmexpress.org). NVM Express, Inc. is the sole owner of the NVM Express specifications and trademarks.

NVM Express Organization
Image used with permission of NVM Express, Inc.

Visit the NVM Express industry promoters site here to learn more about their members, news, events, product information, software driver downloads, and other useful NVMe resources content.

 

The NVMe Place resources and NVM including SCM, PMEM, Flash

Non Volatile Memory (NVM), including NAND flash, storage class memories (SCM) and persistent memories (PM), are storage memory media, while NVM Express (NVMe) is an interface for accessing NVM. This NVMe resources page is a companion to The SSD Place, which has a broader Non Volatile Memory (NVM) focus including flash among other SSD topics. NVMe is a new server storage I/O access method and protocol for fast access to NVM based storage and memory technologies. NVMe is an alternative to existing block based server storage I/O access protocols such as AHCI/SATA and SCSI/SAS commonly used for accessing Hard Disk Drives (HDD) along with SSDs among other things.

Server Storage I/O NVMe PCIe SAS SATA AHCI
Comparing AHCI/SATA, SCSI/SAS and NVMe all of which can coexist to address different needs.

Leveraging the standard PCIe hardware interface, NVMe based devices (that have an NVMe controller) can be accessed via various operating systems (and hypervisors such as VMware ESXi) with either in-the-box drivers or optional third-party device drivers. Devices that support NVMe can be packaged in a 2.5″ drive format using a converged 8637/8639 connector (e.g. PCIe x4), coexisting with SAS and SATA devices, or as add-in card (AIC) PCIe cards supporting x4, x8 and other implementations. Initially, NVMe is being positioned as a back-end interface in servers (or storage systems) for accessing fast flash and other NVM based devices.
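For example, one way to confirm how many PCIe lanes an NVMe device actually negotiated on a Linux host (the PCI address below is a placeholder; find the real one with the first command):

# Locate the NVMe controller(s) and note the PCI address (e.g. 01:00.0)
$ lspci | grep -i 'non-volatile memory'

# Show the negotiated link speed and width (e.g. x4) for that device
$ sudo lspci -vv -s 01:00.0 | grep -i 'lnksta:'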

NVMe as back-end storage
NVMe as a “back-end” I/O interface for NVM storage media

NVMe as front-end server storage I/O interface
NVMe as a “front-end” interface for servers or storage systems/appliances

NVMe has also been shown to work over low latency, high-speed RDMA based network interfaces including RoCE (RDMA over Converged Ethernet) and InfiniBand (read more here, here and here involving Mangstor, Mellanox and PMC among others). What this means is that, like SCSI based SAS, which can be both a back-end drive (HDD, SSD, etc.) access protocol and interface, NVMe can be used on the back-end as well as on the front-end as a server to storage interface, similar to how Fibre Channel SCSI_Protocol (aka FCP), SCSI based iSCSI, and SCSI RDMA Protocol via InfiniBand (among others) are used.

NVMe features

Main features of NVMe include among others:

  • Lower latency due to improved drivers and increased queues (and queue sizes)
  • Lower CPU used to handle a larger number of I/Os (more CPU available for useful work)
  • Higher I/O activity rates (IOPs) to boost productivity and unlock the value of fast flash and NVM
  • Bandwidth improvements leveraging various fast PCIe interfaces and available lanes
  • Dual-pathing of devices, similar to what is available with dual-path SAS devices
  • Unlocking the value of more cores per processor socket and software threads (productivity)
  • Various packaging options, deployment scenarios and configuration options
  • Appears as a standard storage device on most operating systems
  • Plug-and-play with in-box drivers on many popular operating systems and hypervisors

Shared external PCIe using NVMe
NVMe and shared PCIe (e.g. shared PCIe flash DAS)

NVMe related content and links

The following are some of my tips, articles, blog posts, presentations and other content, along with material from others pertaining to NVMe. Keep in mind that the question should not be if NVMe is in your future, rather when, where, with what, from whom and how much of it will be used as well as how it will be used.

  • How to Prepare for the NVMe Server Storage I/O Wave (Via Micron.com)
  • Why NVMe Should Be in Your Data Center (Via Micron.com)
  • NVMe U2 (8639) vs. M2 interfaces (Via Gamersnexus)
  • Enmotus FuzeDrive MicroTiering (StorageIO Lab Report)
  • EMC DSSD D5 Rack Scale Direct Attached Shared SSD All Flash Array Part I (Via StorageIOBlog)
  • Part II – EMC DSSD D5 Direct Attached Shared AFA (Via StorageIOBlog)
  • NAND, DRAM, SAS/SCSI & SATA/AHCI: Not Dead, Yet! (Via EnterpriseStorageForum)
  • Non Volatile Memory (NVM), NVMe, Flash Memory Summit and SSD updates (Via StorageIOblog)
  • Microsoft and Intel showcase Storage Spaces Direct with NVM Express at IDF ’15 (Via TechNet)
  • MNVM Express solutions (Via SuperMicro)
  • Gaining Server Storage I/O Insight into Microsoft Windows Server 2016 (Via StorageIOblog)
  • PMC-Sierra Scales Storage with PCIe, NVMe (Via EEtimes)
  • RoCE updates among other items (Via InfiniBand Trade Association (IBTA) December Newsletter)
  • NVMe: The Golden Ticket for Faster Flash Storage? (Via EnterpriseStorageForum)
  • What should I consider when using SSD cloud? (Via SearchCloudStorage)
  • MSP CMG, Sept. 2014 Presentation (Flash back to reality – Myths and Realities – Flash and SSD Industry trends perspectives plus benchmarking tips)– PDF
  • Selecting Storage: Start With Requirements (Via NetworkComputing)
  • PMC Announces Flashtec NVMe SSD NVMe2106, NVMe2032 Controllers With LDPC (Via TomsITpro)
  • Exclusive: If Intel and Micron’s “Xpoint” is 3D Phase Change Memory, Boy Did They Patent It (Via Dailytech)
  • Intel & Micron 3D XPoint memory — is it just CBRAM hyped up? Curation of various posts (Via Computerworld)
  • How many IOPS can a HDD, HHDD or SSD do (Part I)?
  • How many IOPS can a HDD, HHDD or SSD do with VMware? (Part II)
  • I/O Performance Issues and Impacts on Time-Sensitive Applications (Via CMG)
  • Via EnterpriseStorageForum: 5 Hot Storage Technologies to Watch
  • Via EnterpriseStorageForum: 10-Year Review of Data Storage

Non-Volatile Memory (NVM) Express (NVMe) continues to evolve as a technology for enabling and improving server storage I/O for NVM, including NAND flash SSD storage. NVMe streamlines performance, enabling more work to be done (e.g. IOPs) and more data to be moved (bandwidth) at a lower response time using less CPU.

NVMe and SATA flash SSD performance

The above figure is a quick look comparing NAND flash SSD being accessed via SATA III (6Gbps) on the left and NVMe (x4) on the right. As with any server storage I/O performance comparison, there are many variables, so take the results with a grain of salt. While IOPs and bandwidth are often discussed, keep in mind that with NVMe's new protocol, drivers and device controllers that streamline I/O, less CPU is needed.

Additional NVMe Resources

Also check out the Server StorageIO companion micro sites landing pages including thessdplace.com (SSD focus), data protection diaries (backup, BC/DR/HA and related topics), cloud and object storage, and server storage I/O performance and benchmarking here.

If you are into the real bits and bytes details, such as device driver level content, check out the Linux NVMe reflector forum. The linux-nvme forum is a good source for developers to stay up on what is happening in and around device drivers and associated topics.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

Disclaimer

Disclaimer: Please note that this site is independent of the industry trade and promoters group NVM Express, Inc. (e.g. www.nvmexpress.org). NVM Express, Inc. is the sole owner of the NVM Express specifications and trademarks. Check out the NVM Express industry promoters site here to learn more about their members, news, events, product information, software driver downloads, and other useful NVMe resources content.

NVM Express Organization
Image used with permission of NVM Express, Inc.

Wrap Up

Watch for updates with more content, links and NVMe resources to be added here soon.

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Intel Micron 3D XPoint server storage NVM SCM PM SSD

3D XPoint server storage class memory SCM


Storage I/O trends

Updated 1/31/2018

Intel Micron 3D XPoint server storage NVM SCM PM SSD.

This is the second of a three-part series on the recent Intel and Micron 3D XPoint server storage memory announcement. Read Part I here and Part III here.

Is this 3D XPoint marketing, manufacturing or material technology?

You can’t have a successful manufactured material technology without some marketing; likewise, marketing without some manufactured material would be manufactured marketing. In the case of 3D XPoint and its announcement launch, there was real technology shown, granted it was only wafers and dies as opposed to an actual DDR4 DIMM or PCIe Add In Card (AIC) or drive form factor Solid State Device (SSD) product. On the other hand, on a relative comparison basis, even though there is marketing collateral available to learn more from, this was far from an over-the-big-top, made for TV or web circus event, which can be a good thing.


Wafer unveiled containing 3D XPoint 128 Gb dies

Who will get access to 3D XPoint?

Initially, 3D XPoint production capacity will allow the two companies to offer early samples to their customers later this year, with general production slated for 2016, meaning the first real customer deployed products starting sometime in 2016.

Is it NAND or NOT?

3D XPoint is not NAND flash, it is also not NVRAM or DRAM, it’s a new class of NVM that can be used for server class main memory with persistency, or as persistent data storage among other uses (cell phones, automobiles, appliances and other electronics). In addition, 3D XPoint is more durable with a longer useful life for writing and storing data vs. NAND flash.

Why is 3D XPoint important?

As mentioned during the Intel and Micron announcement, there have only been seven major memory technologies introduced since the transistor back in 1947, granted there have been many variations along with generational enhancements of those. Thus 3D XPoint is being positioned by Intel and Micron as the eighth memory class joining its predecessors many of which continue to be used today in various roles.


Major memory classes or categories timeline

In addition to the above memory classes or categories timeline, the following shows in more detail various memory categories (click on the image below to get access to the Intel interactive infographic).

Intel History of Memory Infographic
Via: https://intelsalestraining.com/memory timeline/ (Click on image to view)

What capacity size is 3D XPoint?

Initially the 3D XPoint technology is available in a two layer, 128 Gbit per die capacity. Keep in mind that there are 8 bits to a byte, resulting in 16 GBytes of capacity per chip initially. With density improvements, as well as increased stacking of layers, the number of cells or bits per die (e.g. what makes up a chip) should improve, and most implementations will have multiple chips in some type of configuration.

What will 3D XPoint cost?

During the 3D XPoint launch webinar, Intel and Micron hinted that initial pricing will be between current DRAM and NAND flash on a per cell or bit basis; however, real pricing and costs will vary depending on how it is packaged for use. For example, whether it is placed on a DDR4 (or different type of) DIMM, on a PCIe Add In Card (AIC), or in a drive form factor SSD, among other options, will affect the real price. Likewise, as with other memories and storage mediums, as production yields and volumes increase, along with denser designs, the cost per usable cell or bit can be expected to further improve.

Where to read, watch and learn more

Storage I/O trends

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

DRAM which has been around for sometime has plenty of life left for many applications as does NAND flash including new 3D NAND, vNAND and other variations. For the next several years, there will be a co-existences between new and old NVM and DRAM among other memory technologies including 3D XPoint. Read more in this series including Part I here and Part III here.

Disclosure: Micron and Intel have been direct and/or indirect clients in the past via third-parties and partners, also I have bought and use some of their technologies direct and/or in-direct via their partners.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

3D XPoint nvm pm scm storage class memory

Part III – 3D XPoint server storage class memory SCM


Storage I/O trends

Updated 1/31/2018

3D XPoint nvm pm scm storage class memory.

This is the third of a three-part series on the recent Intel and Micron 3D XPoint server storage memory announcement. Read Part I here and Part II here.

What is 3D XPoint and how does it work?

3D XPoint is a new class or category of memory (view other categories of memory here) that provides performance for reads and writes closer to that of DRAM with about 10x the capacity density. In addition to speed closer to DRAM vs. slower NAND flash, 3D XPoint is also non-volatile memory (NVM) like NAND flash, NVRAM and others. What this means is that 3D XPoint can be used as persistent, higher density, fast server memory (or main memory for other computers and electronics). Besides being fast persistent main memory, 3D XPoint will also be a faster medium for solid state devices (SSDs) including PCIe Add In Cards (AIC), M.2 cards and drive form factor 8637/8639 NVM Express (NVMe) accessed devices, and it also has better endurance or life span compared to NAND flash.


3D XPoint architecture and attributes

The initial die or basic chip building block 3D XPoint implementation is a two layer, 128 Gbit device which, at 8 bits per byte, would yield 16GB raw. Over time, increased densities should become available as the bit density improves with more cells and further scaling of the technology, combined with packaging. For example, while a current die could hold up to 16 GBytes of data, multiple dies could be packaged together to create a 32GB, 64GB, 128GB or larger actual product. Think about not only where packaged flash based SSD capacities are today, but also in terms of where DDR3 and DDR4 DIMMs are at, such as 4GB, 8GB, 16GB and 32GB densities.

The 3D aspect comes from the memory being in a matrix, initially two layers high, with multiple rows and columns that intersect; where those intersections occur there is a microscopic material based switch for accessing a particular memory cell. Unlike NAND flash, where an individual cell or bit is accessed as part of a larger block or page comprising several thousand bytes at once, 3D XPoint cells or bits can be individually accessed to speed up reads and writes in a more granular fashion. It is this more granular access, along with performance, that will enable 3D XPoint to be used in lower latency scenarios where DRAM would normally be used.

Instead of trapping electrons in a cell to create a bit of capacity (e.g. on or off) like NAND flash, 3D XPoint leverages the underlying physical material properties to store a bit as a phase change, enabling use of all cells. In other words, instead of being electron based, it is material based. While Intel and Micron did not specify the actual chemistry and physical materials that are used in 3D XPoint, they did discuss some of the characteristics. If you want to go deep, check out how Dailytech makes an interesting educated speculation or thesis on the underlying technology.

Watch the following video to get a better idea and visually see how 3D XPoint works.



3D XPoint YouTube Video

What are these chips, cells, wafers and dies?

Left: many dies on a wafer; right: a closer look at the dies cut from the wafer

Dies (here and here) are the basic building block of what goes into the chips that in turn are the components used for creating DDR DIMMs for main computer memory, as well as for creating SD and MicroSD cards, USB thumb drives, PCIe AIC and drive form factor SSDs, as well as custom modules on motherboards, or consumption at the bare die and wafer level (e.g. where you are doing really custom things at volume, beyond soldering iron scale).

Storage I/O trends

Has Intel and Micron cornered the NVM and memory market?

We have heard proclamations, speculation and statements of the demise of DRAM, NAND flash and other volatile and NVM memories for years, if not decades now. Each year there is the usual "this will be the year of x," where "x" can include, among others: Resistive RAM aka ReRAM or RRAM (aka the memristor that HP earlier announced they were going to bring to market, then earlier this year canceled those plans, while Crossbar continues to pursue RRAM), MRAM or Magnetoresistive RAM, Phase Change Memory aka CRAM or PCM and PRAM, and FRAM aka FeRAM or Ferroelectric RAM.

flash SSD and NVM trends

Expanding persistent memory and SSD storage markets

Keep in mind that there are many steps, taking time measured in years or decades, to go from a research and development lab idea to a prototype that can then be produced at production volumes with economic yields. As a point of reference, there is still plenty of life in both DRAM as well as NAND flash, the latter having appeared around 1989.

Industry vs. Customer Adoption and deployment timeline

Technology industry adoption precedes customer adoption and deployment

There is a difference between industry adoption and deployment vs. customer adoption and deployment, they are related, yet separated by time as shown in the above figure. What this means is that there can be several years from the time a new technology is initially introduced and when it becomes generally available. Keep in mind that NAND flash has yet to reach its full market potential despite having made significant inroads the past few years since it was introduced in 1989.

This begs the question of whether 3D XPoint is a variation of phase change, RRAM, MRAM or something else. Over at Dailytech they lay out a line of thinking (or educated speculation) that 3D XPoint is some derivative or variation of phase change; time will tell what it really is.

What’s the difference between 3D NAND flash and 3D XPoint?

3D NAND is a form of NAND flash NVM, while 3D XPoint is a completely new and different type of NVM (e.g. its not NAND).

3D NAND is a variation of traditional flash, with the difference being vertical stacking vs. horizontal to improve density, also known as vertical NAND or V-NAND. Vertical stacking is like building up to house more tenants or occupants in a dense environment (scaling up), vs. scaling out by using more space where density is not an issue. Note that magnetic HDDs shifted to perpendicular (e.g. vertical) recording about ten years ago to break through the superparamagnetic barrier, and more recently magnetic tape has also adopted perpendicular recording. Also keep in mind that 3D XPoint and the earlier announced Intel and Micron 3D NAND flash are two separate classes of memory that both just happen to have 3D in their marketing names.

Where to read, watch and learn more

Storage I/O trends

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

First, keep in mind that this is very early in the 3D XPoint technology evolution life-cycle and both DRAM and NAND flash will not be dead at least near term. Keep in mind that NAND flash appeared back in 1989 and only over the past several years has finally hit its mainstream adoption stride with plenty of market upside left. Same with DRAM which has been around for sometime, it too still has plenty of life left for many applications. However other applications that have the need for improved speed over NAND flash, or persistency and density vs. DRAM will be some of the first to leverage new NVM technologies such as 3D XPoint. Thus at least for the next several years, there will be a co-existences between new and old NVM and DRAM among other memory technologies. Bottom line, 3D XPoint is a new class of NVM memory, can be used for persistent main server memory or for persistent fast storage memory. If you have not done so, check out Part I here and Part II here of this three-part series on Intel and Micron 3D XPoint.

Disclosure: Micron and Intel have been direct and/or indirect clients in the past via third-parties and partners, also I have bought and use some of their technologies direct and/or in-direct via their partners.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Intel Micron unveil new 3D XPoint Non Volatile Memory NVM for servers storage

3D XPoint NVM persistent memory PM storage class memory SCM


Storage I/O trends

Updated 1/31/2018

This is the first of a three-part series on the Intel and Micron announcement unveiling new 3D XPoint Non Volatile Memory (NVM) for servers and storage. Read Part II here and Part III here.

In a webcast the other day, Intel and Micron announced new 3D XPoint non-volatile memory (NVM) that can be used for both primary main memory (e.g. what’s in computers, servers, laptops, tablets and many other things) in place of Dynamic Random Access Memory (DRAM), and for persistent storage faster than today’s NAND flash-based solid state devices (SSD), not to mention future hybrid usage scenarios. Note that this announcement, while having the common term 3D in it, is different from the earlier Intel and Micron announcement about 3D NAND flash (read more about that here).

Twitter hash tag #3DXpoint

The big picture, why this type of NVM technology is needed

Server and Storage I/O trends

  • Memory is storage and storage is persistent memory
  • No such thing as a data or information recession, more data being create, processed and stored
  • Increased demand is also driving density along with convergence across server storage I/O resources
  • Larger amounts of data needing to be processed faster (large amounts of little data and big fast data)
  • Fast applications need more and faster processors, memory along with I/O interfaces
  • The best server or storage I/O is the one you do not need to do
  • The second best I/O is one with least impact or overhead
  • Data needs to be close to processing, processing needs to be close to the data (locality of reference)


Server Storage I/O memory hardware and software hierarchy along with technology tiers

What did Intel and Micron announce?

Intel SVP and General Manager of the Non-Volatile Memory Solutions Group Robert Crooke (left) and Micron CEO D. Mark Durcan did the joint announcement presentation of 3D XPoint (webinar here). What was announced is the 3D XPoint technology jointly developed and manufactured by Intel and Micron, which is a new form or category of NVM that can be used for primary memory in servers, laptops and other computers among other uses, as well as for persistent data storage.


Robert Crooke (Left) and Mark Durcan (Right)

Summary of 3D XPoint announcement

  • New category of NVM memory for servers and storage
  • Joint development and manufacturing by Intel and Micron in Utah
  • Non volatile so can be used for storage or persistent server main memory
  • Allows NVM to scale with data, storage and processors performance
  • Leverages capabilities of both Intel and Micron who have collaborated in the past
  • Performance: Intel and Micron claim up to 1,000x faster vs. NAND flash
  • Availability: persistent NVM (unlike DRAM) with better durability (life span) vs. NAND flash
  • Capacity: densities about 10x better vs. traditional DRAM
  • Economics: cost per bit between DRAM and NAND flash (depending on packaging of resulting products)

What applications and products is 3D XPoint suited for?

In general, 3D XPoint should be able to be used for many of the same applications and associated products that current DRAM and NAND flash-based storage memories are used for. These range from IT, cloud and managed service provider data center applications and services to consumer-focused uses, among many others.


3D XPoint enabling various applications

In general, applications or usage scenarios (along with supporting products) that can benefit from 3D XPoint include, among others, those that need larger amounts of main memory in a denser footprint, such as in-memory databases, little and big data analytics, gaming, waveform analysis for security, copyright or other detection analysis, life sciences, high-performance compute and high-productivity compute, energy, and video and content serving, among many others.

In addition, there are applications that need persistent main memory for resiliency, or to cut the delays and impacts of planned or unplanned maintenance, or of having to wait for memories and caches to be warmed or re-populated after a server boot (or re-boot). 3D XPoint will also be useful for applications that need faster read and write performance compared to current-generation NAND flash for data storage. This means both existing and emerging applications, as well as some that do not yet exist, will benefit from 3D XPoint over time, much as today’s applications and others have benefited from DRAM used in Dual Inline Memory Modules (DIMM) and from NAND flash advances over the past several decades.

Where to read, watch and learn more

Storage I/O trends

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

First, keep in mind that this is very early in the 3D XPoint technology evolution life-cycle, and neither DRAM nor NAND flash will be dead, at least not near term. Keep in mind that NAND flash appeared back in 1989 and only over the past several years has it finally hit its mainstream adoption stride, with plenty of market upside left. Continue reading Part II here and Part III here of this three-part series on Intel and Micron 3D XPoint along with more analysis and commentary.

Disclosure: Micron and Intel have been direct and/or indirect clients in the past via third-parties and partners, also I have bought and use some of their technologies direct and/or in-direct via their partners.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

How to test your HDD SSD or all flash array (AFA) storage fundamentals

How to test your HDD SSD AFA Hybrid or cloud storage

server storage data infrastructure i/o hdd ssd all flash array afa fundamentals

Updated 2/14/2018

Over at BizTech Magazine I have a new article, 4 Ways to Performance Test Your New HDD or SSD, that provides a quick guide to verifying or learning what the speed characteristics of your new storage device are.

An out-take from the article used by BizTech as a "tease" is:

These four steps will help you evaluate new storage drives. And … psst … we included the metrics that matter.

Building off the basics, server storage I/O benchmark fundamentals

The four basic steps in the article are:

  • Plan what and how you are going to test (what’s applicable for you)
  • Decide on a benchmarking tool (learn about various tools here)
  • Test the test (find bugs, errors before a long running test)
  • Focus on metrics that matter (what’s important for your environment; see the measurement sketch below)
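
To make that last step more concrete, here is a minimal Python sketch (not a replacement for purpose-built benchmark tools such as fio or vdbench) that times 4 KB random reads against a test file and reports IOPS along with average and 99th percentile latency. The file name, block size and I/O count are placeholder assumptions; adjust them for your environment, and make the test file large enough to exceed cache for more meaningful results.

import os, random, time, statistics

TEST_FILE = "testfile.bin"       # hypothetical test file, create it beforehand
BLOCK_SIZE = 4096                # 4 KB I/O size
NUM_IOS = 10000                  # number of random reads to issue

def random_read_test(path, block_size, num_ios):
    size = os.path.getsize(path)
    latencies = []
    # O_DIRECT (Linux) would bypass the page cache for truer device numbers,
    # but it needs aligned buffers; this simplified version reads through the cache.
    with open(path, "rb") as f:
        for _ in range(num_ios):
            offset = random.randrange(0, size - block_size)
            start = time.perf_counter()
            f.seek(offset)
            f.read(block_size)
            latencies.append(time.perf_counter() - start)
    elapsed = sum(latencies)
    return {
        "iops": num_ios / elapsed,
        "avg_ms": statistics.mean(latencies) * 1000,
        "p99_ms": sorted(latencies)[int(num_ios * 0.99)] * 1000,
    }

if __name__ == "__main__":
    print(random_read_test(TEST_FILE, BLOCK_SIZE, NUM_IOS))

Treat the output as a relative indicator (test the test, as noted above), then move to a fuller benchmarking tool once you know what workload profile you actually care about.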

Server Storage I/O performance

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

To some, the above (read the full article here) may seem like common-sense tips that everybody should know. On the other hand, there are many people who are new to server, storage, I/O and networking hardware and software, cloud and virtual environments along with various applications, not to mention the different tools.

Thus the above is a refresher for some (e.g. déjà vu), while for others it might be new, revolutionary or simply helpful. If you are interested in HDDs and SSDs as well as other server storage I/O performance topics, along with benchmarking tools, techniques and trends, check out the collection of links here (Server and Storage I/O Benchmarking and Performance Resources).

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

I/O, I/O how well do you know good bad ugly server storage I/O iops?

How well do you know good bad ugly I/O iops?

server storage i/o iops activity data infrastructure trends

Updated 2/10/2018

There are many different types of server storage I/O IOPS associated with various environments, applications and workloads. Some I/O activity is measured in IOPS, other activity in transactions per second (TPS), files or messages per unit of time (hour, minute, second), gets, puts or other operations. The best I/O is the one you do not have to do.

What about all the cloud, virtual, software-defined and legacy-based applications that still need to do I/O?

If no IO operation is the best IO, then the second best IO is the one that can be done as close to the application and processor as possible with the best locality of reference.

Also keep in mind that aggregation (e.g. consolidation) can cause aggravation (server storage I/O performance bottlenecks).

aggregation causes aggravation
Example of aggregation (consolidation) causing aggravation (server storage i/o blender bottlenecks)

And the third best?

It’s the one that can be done in less time, or with the least cost or impact to the requesting application, which usually means moving further down the memory and storage stack.

solving server storage i/o blender and other bottlenecks
Leveraging flash SSD and cache technologies to find and fix server storage I/O bottlenecks

On the other hand, any IOP, regardless of whether it is for block, file or object storage, that involves some context is better than one without, particularly when it involves metrics that matter (here, here and here [webinar]).

Server Storage I/O optimization and effectiveness

The problem with I/Os is that they are basic operations for getting data into and out of a computer or processor, so there is no way to avoid all of them, unless you have a very large budget. Even if you have a large budget that can afford an all-flash SSD solution, you may still run into bottlenecks or other barriers.

I/Os require CPU or processor time and memory to set up and then process the results, as well as I/O and networking resources to move data to its destination or retrieve it from where it is stored. While I/Os cannot be eliminated, their impact can be greatly improved or optimized by, among other techniques, doing fewer of them via caching and by grouping reads or writes (pre-fetch, write-behind).
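
As a simple illustration of grouping writes (write-behind), the following Python sketch buffers small application writes in memory and flushes them as larger, coalesced back-end writes. It is a toy model only; the file name and thresholds are arbitrary assumptions, and real write-back caches add ordering, mirroring or battery/flash backing, and crash-consistency handling that are omitted here.

import os

class WriteBehindBuffer:
    """Toy write-behind cache: hold small writes in memory, flush in coalesced batches."""
    def __init__(self, path, flush_threshold=64 * 1024):
        self.path = path
        self.flush_threshold = flush_threshold   # flush once 64 KB is buffered
        self.pending = []                        # list of (offset, data) pairs
        self.pending_bytes = 0
        self.backend_writes = 0                  # actual write calls issued to the file

    def write(self, offset, data):
        self.pending.append((offset, data))
        self.pending_bytes += len(data)
        if self.pending_bytes >= self.flush_threshold:
            self.flush()

    def flush(self):
        if not self.pending:
            return
        self.pending.sort()                      # sort by offset so runs become sequential
        mode = "r+b" if os.path.exists(self.path) else "wb"
        with open(self.path, mode) as f:
            run_offset, run_data = self.pending[0][0], bytearray(self.pending[0][1])
            for offset, data in self.pending[1:]:
                if offset == run_offset + len(run_data):   # contiguous: extend the run
                    run_data += data
                else:                                       # gap: write out the current run
                    f.seek(run_offset)
                    f.write(run_data)
                    self.backend_writes += 1
                    run_offset, run_data = offset, bytearray(data)
            f.seek(run_offset)
            f.write(run_data)
            self.backend_writes += 1
        self.pending.clear()
        self.pending_bytes = 0

buf = WriteBehindBuffer("demo.dat")             # hypothetical demo file
for i in range(1000):                           # 1,000 small 512-byte application writes
    buf.write(i * 512, b"x" * 512)
buf.flush()
print("coalesced back-end writes issued:", buf.backend_writes)   # far fewer than 1,000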

server storage I/O STI and SUT

Think of it this way: Instead of going on multiple errands, sometimes you can group multiple destinations together making for a shorter, more efficient trip. However, that optimization may also mean your drive will take longer. So, sometimes it makes sense to go on a couple of quick, short, low-latency trips instead of one larger one that takes half a day even as it accomplishes many tasks. Of course, how far you have to go on those trips (i.e., their locality) makes a difference about how many you can do in a given amount of time.

Locality of reference (or proximity)

What is locality of reference?

This refers to how close (i.e., its place) data exists to where it is needed (being referenced) for use. For example, the best locality of reference in a computer would be registers in the processor core, ready to be acted on immediately. This would be followed by levels 1, 2, and 3 (L1, L2, and L3) onboard caches, followed by main memory, or DRAM. After that comes solid-state memory typically NAND flash either on PCIe cards or accessible on a direct attached storage (DAS), SAN, or NAS device. 

server storage I/O locality of reference

Even though a PCIe NAND flash card is close to the processor, there still remains the overhead of traversing the PCIe bus and the associated drivers. To help offset that impact, PCIe cards use DRAM as a cache or buffer for data along with meta or control information to further optimize and improve locality of reference. In other words, this information is used to help with cache hits, cache use, and cache effectiveness vs. simply boosting cache size.
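
To illustrate why locality matters to a cache, here is a small, self-contained Python sketch of an LRU block cache fed two access patterns: one with a hot working set (good locality) and one uniformly random (poor locality). The block counts, cache size and percentages are made-up assumptions purely for illustration, not tied to any real device or product.

from collections import OrderedDict
import random

class LRUBlockCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()          # block number -> (pretend) cached data
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)   # mark as most recently used
        else:
            self.misses += 1
            self.cache[block] = b"data"     # "fetch" from the slower back-end media
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict the least recently used block

    def hit_rate(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

total_blocks, hot_blocks, cache_blocks, ios = 100_000, 1_000, 1_000, 50_000

good = LRUBlockCache(cache_blocks)          # good locality: 90% of reads hit a small hot set
for _ in range(ios):
    block = random.randrange(hot_blocks) if random.random() < 0.9 else random.randrange(total_blocks)
    good.read(block)

poor = LRUBlockCache(cache_blocks)          # poor locality: uniformly random reads
for _ in range(ios):
    poor.read(random.randrange(total_blocks))

print(f"good locality hit rate: {good.hit_rate():.0%}")   # expect a high hit rate
print(f"poor locality hit rate: {poor.hit_rate():.0%}")   # expect roughly 1% with these numbers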

SSD to the rescue?

What can you do to cut the impact of I/Os?

There are many steps one can take, starting with establishing baseline performance and availability metrics.

The metrics that matter include IOPS, latency, bandwidth, and availability. Then, leverage those metrics to gain insight into your application’s performance.
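
A quick back-of-the-envelope way to relate those metrics, shown here in Python with purely illustrative numbers (not vendor specifications): bandwidth is IOPS multiplied by I/O size, and at a given queue depth, average latency caps the achievable IOPS (Little's Law).

def bandwidth_mb_s(iops, io_size_bytes):
    # bandwidth = IOPS x I/O size
    return iops * io_size_bytes / 1_000_000

def iops_from_latency(queue_depth, avg_latency_ms):
    # IOPS = queue depth / average latency (steady state)
    return queue_depth / (avg_latency_ms / 1000)

# Example: a device doing 10,000 x 4 KB IOPS moves about 41 MB/s...
print(bandwidth_mb_s(10_000, 4096))        # ~40.96 MB/s
# ...while the same device doing 200 x 256 KB IOPS moves about 52 MB/s.
print(bandwidth_mb_s(200, 262_144))        # ~52.4 MB/s
# At queue depth 1 and 0.5 ms average latency, the ceiling is ~2,000 IOPS.
print(iops_from_latency(1, 0.5))           # 2000.0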

Understand that I/Os are a fact of applications doing work (storing, retrieving, managing data), no matter whether systems are virtual, physical, or running in the cloud. But it is important to understand just what a bad I/O is, along with its impact on performance. Try to identify those that are bad, and then find and fix the problem, either with software, application, or database changes. Perhaps you need to throw more software caching tools, hypervisors, or hardware at the problem. Hardware may include faster processors with more DRAM and faster internal busses.

Leveraging local PCIe flash SSD cards for caching or as targets is another option.

You may want to use storage systems or appliances that rely on intelligent caching and storage optimization capabilities to help with performance, availability, and capacity.

Where to gain insight into your server storage I/O environment

There are many tools that can be used to gain insight into your server storage I/O environment across cloud, virtual, software-defined and legacy systems, as well as from different layers (e.g. applications, database, file systems, operating systems, hypervisors, server, storage, I/O networking). Many applications and databases have either built-in or optional tools from their provider, third parties, or other sources that can give information about the work activity being done. Likewise, there are tools to dig down deeper into the various data infrastructure layers to see what is happening, as shown in the following figures.

application storage I/O performance
Gaining application and operating system level performance insight via different tools

windows and linux storage I/O performance
Insight and awareness via operating system tools on Windows and Linux

In the above example, Spotlight on Windows (SoW), which you can download for free from Dell here, along with Ubuntu utilities are shown. You could also use other tools to look at server storage I/O performance, including Windows Perfmon among others.
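
In addition to the OS and hypervisor tools shown here, a quick scripted sample can be handy. The following Python sketch uses the third-party psutil package (assuming it is installed, for example via pip install psutil) to sample operating-system-wide disk counters over an interval and report read and write IOPS and throughput; it is a generic example, not one of the vendor tools pictured.

import time
import psutil

def sample_disk_activity(interval_s=5):
    before = psutil.disk_io_counters()      # system-wide counters at the start
    time.sleep(interval_s)
    after = psutil.disk_io_counters()       # counters again after the interval
    reads = after.read_count - before.read_count
    writes = after.write_count - before.write_count
    read_mb = (after.read_bytes - before.read_bytes) / 1_000_000
    write_mb = (after.write_bytes - before.write_bytes) / 1_000_000
    print(f"read IOPS:  {reads / interval_s:,.0f}  ({read_mb / interval_s:.1f} MB/s)")
    print(f"write IOPS: {writes / interval_s:,.0f}  ({write_mb / interval_s:.1f} MB/s)")

if __name__ == "__main__":
    sample_disk_activity()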

vmware server storage I/O
Hypervisor performance using VMware ESXi / vsphere built-in tools

vmware server storage I/O performance
Using Visual ESXtop to dig deeper into virtual server storage I/O performance

vmware server storage i/o cache
Gaining insight into virtual server storage I/O cache performance

Wrap up and summary

There are many approaches to address (e.g. find and fix) data center and server storage I/O bottlenecks vs. simply moving or masking them. Having insight and awareness into how your environment and applications behave is important in order to know where to focus resources. Also keep in mind that a bit of flash SSD or DRAM cache in the applicable place can go a long way, while a lot of cache will also cost you cash. Even if you can’t eliminate I/Os, look for ways to decrease their impact on your applications and systems.

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Keep in mind: SSD, including flash and DRAM among others, is in your future; the question is where, when, with what, how much, and whose technology or packaging.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Revisiting RAID data protection remains relevant resource links

Revisiting RAID data protection remains relevant and resources

Storage I/O trends

Updated 2/10/2018

RAID data protection remains relevant, including erasure codes (EC) and local reconstruction codes (LRC) among other technologies. If RAID were really not relevant anymore (e.g. actually dead), why do some people spend so much time trying to convince others that it is dead, or to use a different RAID level, enhanced RAID, or beyond-RAID related advanced approaches?

When you hear RAID, what comes to mind?

A legacy monolithic storage system that supports narrow 4, 5 or 6 drive-wide stripe sets, or a modern system supporting dozens of drives in a RAID group with different options?

RAID means many things, likewise there are different implementations (hardware, software, systems, adapters, operating systems) with various functionality, some better than others.

For example, which of the items in the following figure come to mind, or perhaps are new to your RAID vocabulary?

RAID questions

There are many variations of RAID storage: some for the enterprise, some for SMB, SOHO or consumer. Some have better performance than others; some have poor performance, for example causing extra writes, which leads to the perception that all parity-based RAID does extra writes (some implementations actually do write gathering and optimization).

Some hardware and software implementations use a write-back cache (WBC), mirrored or battery-backed (BBU), along with the ability to group writes together in memory (cache) to do full-stripe writes. The result can be fewer back-end writes compared to other systems. Hence, not all RAID implementations in either hardware or software are the same. Likewise, just because a RAID definition shows a particular theoretical implementation approach does not mean all vendors have implemented it in that way.
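
To show the idea behind parity and full-stripe writes, here is a simplified single-parity (RAID 5 style) Python sketch: parity is computed once across a whole stripe held in memory, and a lost chunk can be rebuilt by XOR-ing the survivors with the parity. The chunk contents and the 3+1 layout are arbitrary assumptions, and real implementations rotate parity, handle partial stripes, and add caching and many other details omitted here.

def xor_parity(chunks):
    # XOR all chunks byte by byte to produce the parity chunk.
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)

def rebuild_missing(surviving_chunks, parity):
    # XOR of the parity with all surviving data chunks recreates the lost chunk.
    return xor_parity(list(surviving_chunks) + [parity])

# Full-stripe write across a 3 data + 1 parity layout (4 "drives").
data_chunks = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(data_chunks)

# Simulate losing "drive" 1 (chunk b"BBBB") and rebuilding it from the survivors.
recovered = rebuild_missing([data_chunks[0], data_chunks[2]], parity)
assert recovered == b"BBBB"
print("rebuilt chunk:", recovered)

Because the whole stripe is in memory, the parity is computed once per stripe; a small partial-stripe update would instead force the read-modify-write penalty that gives parity RAID its reputation for extra writes.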

RAID is not a replacement for backup; rather, it is part of an overall approach to providing data availability and accessibility.

data protection and durability

What’s the best RAID level? The one that meets YOUR needs

There are different RAID levels and implementations (hardware, software, controller, storage system, operating system, adapter among others) for various environments (enterprise, SME, SMB, SOHO, consumer) supporting primary, secondary, tertiary (backup/data protection, archiving).

RAID comparison
General RAID comparisons

Thus one size or approach does not fit all solutions; likewise, RAID rules of thumb or guides need context. Context means that a RAID rule or guide for consumer, SOHO or SMB might be different for enterprise and vice versa, not to mention depending on the type of storage system, number of drives, drive type and capacity among other factors.

RAID comparison
General basic RAID comparisons

Thus the best RAID level is the one that meets your specific needs in your environment. What is best for one environment and application may be different from what is applicable to yours, as the rough capacity comparison below illustrates.
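
As one rough input into that decision, the following Python sketch compares rule-of-thumb usable capacity for a few common RAID levels. It is deliberately simplified: hot spares, formatting overhead, rebuild behavior and vendor-specific implementations are ignored, and the drive counts and sizes are just example assumptions.

def usable_tb(level, drives, drive_tb):
    if level == "RAID 0":                 # striping, no protection
        return drives * drive_tb
    if level in ("RAID 1", "RAID 10"):    # mirroring (assumes 2-way mirrors)
        return drives * drive_tb / 2
    if level == "RAID 5":                 # single parity, one drive's worth of parity
        return (drives - 1) * drive_tb
    if level == "RAID 6":                 # dual parity, two drives' worth of parity
        return (drives - 2) * drive_tb
    raise ValueError(f"unhandled level: {level}")

for level in ("RAID 0", "RAID 10", "RAID 5", "RAID 6"):
    print(f"{level:7s} with 8 x 4 TB drives -> {usable_tb(level, 8, 4):.0f} TB usable")
# RAID 0: 32 TB, RAID 10: 16 TB, RAID 5: 28 TB, RAID 6: 24 TB

Capacity is only one axis; weigh it against rebuild time, write performance and fault tolerance for your environment.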

Key points and RAID considerations include:

· Not all RAID implementations are the same; some are very much alive and evolving while others are in need of a rest or rewrite. So it is often not the technology or techniques that are the problem, rather how they are implemented and then deployed.

· It may not be RAID that is dead, rather the solution that uses it. Hence, if you think a particular storage system, appliance, product or software is old and dead along with its RAID implementation, then just say that product or vendor’s solution is dead.

· RAID can be implemented in hardware controllers, adapters or storage systems and appliances as well as via software and those have different features, capabilities or constraints.

· Long or slow drive rebuilds are a reality with larger disk drives and parity-based approaches; however, you have options on how to balance performance, availability, capacity, and economics.

· RAID can be single, dual or multiple parity or mirroring-based.

· Erasure codes and other coding schemes leverage parity, and guess which umbrella parity-based schemes fall under.

· RAID may not be cool, sexy or a fun topic and technology to talk about, however many trendy tools, solutions and services actually use some form or variation of RAID as part of their basic building blocks. This is an example of using new and old things in new ways to help each other do more without increasing complexity.

·  Even if you are not a fan of RAID and think it is old and dead, at least take a few minutes to learn more about what it is that you do not like to update your dead FUD.

Wait, Isn’t RAID dead?

There is some "RAID is dead" marketing that paints a broad picture that RAID is dead in order to prop up something new, which in some cases may itself be a derivative variation of parity RAID.

data dispersal
Data dispersal and durability

RAID rebuild improving
RAID continues to evolve with rapid rebuilds for some systems

On the other hand, there are some specific products, technologies and implementations that may be end of life or actually dead. Likewise, what might be dead, dying or simply not in vogue are specific RAID implementations or packaging. Certainly there is a lot of buzz around object storage, cloud storage, forward error correction (FEC) and erasure coding, including messages about how they cut out RAID. The catch is that some object storage solutions are overlaid on top of lower-level file systems that do things such as RAID 6; granted, they are out of sight, out of mind.

RAID comparison
General RAID parity and erasure code/FEC comparisons

Then there are advanced parity protection schemes, which include FEC and erasure codes. While they are not your traditional RAID levels, they have characteristics including chunking or sharding data and spreading it out over multiple devices with multiple parity (or derivatives of parity) protection.

Bottom line is that for some environments, different RAID levels may be more applicable and alive than for others.

Via BizTech – How to Turn Storage Networks into Better Performers

  • Maintain Situational Awareness
  • Design for Performance and Availability
  • Determine Networked Server and Storage Patterns
  • Make Use of Applicable Technologies and Techniques

If RAID is alive, what to do with it?

If you are new to RAID, learn more about the past, present and future, keeping context in mind. Keeping context in mind means that there are different RAID levels and implementations for various environments. Not all RAID 0, 1, 1/0, 10, 2, 3, 4, 5, 6 or other variations (past, present and emerging) are the same for consumer vs. SOHO vs. SMB vs. SME vs. enterprise, nor are the usage cases. Some need performance for reads, others for writes, some high capacity with low performance, using hardware or software. RAID rules of thumb are OK and useful; however, keep them in context with what you are doing as well as using.

What to do next?

Take some time to learn and ask questions, including what to use when, where, why and how, as well as whether an approach or recommendation is applicable to your needs. Check out the following links to read some extra perspectives about RAID, and keep in mind that what might apply to enterprise may not be relevant for consumer or SMB and vice versa.

Some advise needed on SSD’s and Raid (Via Spiceworks)
RAID 5 URE Rebuild Means The Sky Is Falling (Via BenchmarkReview)
Double drive failures in a RAID-10 configuration (Via SearchStorage)
Industry Trends and Perspectives: RAID Rebuild Rates (Via StorageIOblog)
RAID, IOPS and IO observations (Via StorageIOBlog)
RAID Relevance Revisited (Via StorageIOBlog)
HDDs Are Still Spinning (Rust Never Sleeps) (Via InfoStor)
When and Where to Use NAND Flash SSD for Virtual Servers (Via TheVirtualizationPractice)
What’s the best way to learn about RAID storage? (Via Spiceworks)
Design considerations for the host local FVP architecture (Via Frank Denneman)
Some basic RAID fundamentals and definitions (Via SearchStorage)
Can RAID extend nand flash SSD life? (Via StorageIOBlog)
I/O Performance Issues and Impacts on Time-Sensitive Applications (Via CMG)
The original RAID white paper (PDF) by Patterson, Gibson and Katz, which, while over 20 years old, provides a basis, foundation and some history
Storage Interview Series (Via Infortrend)
Different RAID methods (Via RAID Recovery Guide)
A good RAID tutorial (Via TheGeekStuff)
Basics of RAID explained (Via ZDNet)
RAID and IOPs (Via VMware Communities)

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

What is my favorite or preferred RAID level?

That depends: for some things it is RAID 1, for others RAID 10, for yet others RAID 4, 5, 6 or DP, and still other situations could be a fit for RAID 0 or erasure codes and FEC. Instead of being focused on just one or two RAID levels as the solution for different problems, I prefer to look at the environment (consumer, SOHO, small or large SMB, SME, enterprise), type of usage (primary, secondary or data protection), performance characteristics, reads, writes, and the type and number of drives among other factors. What might be a fit for one environment may not be a fit for others; thus my preferred RAID level, along with where it is implemented, is the one that meets the given situation. However, also keep in mind that RAID should tie into part of an overall data protection strategy; remember, RAID is not a replacement for backup.

What this all means

Like other technologies that have been declared dead for years or decades, aka the zombie technologies (e.g. dead yet still alive), RAID continues to be used while the technology evolves. There are specific products, implementations or even RAID levels that have faded away, or are declining in some environments yet alive in others. RAID and its variations are still alive; however, how it is used or deployed in conjunction with other technologies is also evolving.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

July 2014 Server and StorageIO Update newsletter


Server and StorageIO Update newsletter – July 2014

Welcome to the July 2014 edition of the StorageIO Update (newsletter) containing trends and perspectives on cloud, virtualization and data infrastructure topics. For some of you it is mid-summer (e.g. in the northern hemisphere) while for others it is mid-winter (southern hemisphere). Here in the Stillwater MN area it is mid-summer, which means enjoying the warm outdoor weather as well as getting ready for the busy late summer and early fall 2014 schedule of events, including VMworld among others. Starting in this edition there are a couple of new and expanded sections, including Technology tips and tools and the return of Just For Fun.

Greg Schulz @StorageIO

Let’s jump into this mid-summer, or for some of you mid-winter, edition of the StorageIO Update newsletter.

Industry and Technology Updates

StorageIO Industry Trends and Perspectives

Following up from our June 2014 newsletter, which included coverage of NetApp and of Avago selling its newly acquired (via the LSI acquisition) flash storage division to Seagate, along with other activity, here are some current industry activities. From a flash memory and solid state device (SSD) perspective, the Flash Memory Summit (FMS) is occurring in Santa Clara the week of August 5, 2014. Having insight under NDA into some of the many announcements as well as other things occurring, I suggest keeping an eye out for various news from the FMS event.

EMC MegaLaunch and MegaActivist Investor

In addition to its recent MegaLaunch series of product announcements and updates, EMC is also in the news as it comes under pressure from activist investor hedge fund Elliott Management Corp to spin off VMware to increase shareholder value. What would a spin-off of VMware mean for customers of the EMC federation of EMC core technologies, VMware and Pivotal labs if the activists get their way? Here are some additional comments and perspectives via CruxialCIO. Click here to view the recent (July 23, 2014) earnings announcement for a summary of how EMC is doing in the market and financially.

Speaking of the EMC MegaLaunch on July 8, 2014, EMC announced enhancements and new models of its Isilon scale-out storage, new VMAX3 models with embedded virtualization and other enhancements, and XtremIO 3.0 with new models, among other enhancements. EMC also announced the general availability of some solutions previously announced at EMCworld (May 2014), including Elastic Cloud Storage (ECS) and ViPR 2.0 along with SRM 2.0, among other items. For those not familiar with EMC ViPR Software Defined Storage Management, you can read more here, here, here and here.

What’s in the works?

Several projects and things are in the works that will show themselves in the coming weeks or months if not sooner. Some of which are more proof points coming out of the StorageIO labs involving software defined, converged, cloud, virtual, SSD, cache software, data protection and more.

Speaking of software defined, join me for a free webinar on August 7: Hardware agnostic Virtual SAN for VMware ESXi Free (sponsored by Starwind Software). Other upcoming webinars include the BackupU Summer Semester series (sponsored by Dell Software), where we continue Exploring the Data Protection Toolbox. August also means VMworld in San Francisco, so see you there. Check out the activities calendar below and at our main website to learn about these and other events.

Watch for more StorageIO posts, commentary, perspectives, presentations, webinars, tips and events on information and data infrastructure topics, themes and trends. Data Infrastructure topics include among others cloud, virtual, legacy server, storage I/O networking, data protection, hardware and software.

Enjoy this edition of the StorageIO Update newsletter and look forward to catching up with you live or online while out and about this summer.

StorageIO comments and perspectives in the news

StorageIO in the news

The following is a synopsis of some StorageIOblog posts, articles and comments in different venues on various industry trends, perspectives and related themes about clouds, virtualization, data and storage infrastructure topics among related themes.

NetworkComputing: Comments on Data Backup: Beyond Band-Aids
StorageNewsletter: Comments on Unified Storage Appliance Buying Guide
Forbes: Comments on Big Data and Enterprise Information Management
Toms Hardware: Comments on Server SAN: Demystifying Today’s Newest Storage Buzzword
CruxialCIO: Comments on EMC Bridges Cloud, On-Premise Storage With TwinStrata Buy
ComputerWeekly: Comments on Backup vs archive: Can they be merged?
CruxialCIO: Comments on EMC under pressure to spin-off VMware
EnterpriseStorageForum: Comments on Unified Storage and buyers guide tips

StorageIO video and audio podcasts

StorageIOblog post
StorageIO audio podcasts are also available at StorageIO.tv

StorageIOblog posts and perspectives

StorageIOblog post

  • AWS adds Zocalo Enterprise File Sync Share and Collaboration
  • June 2014 Server and StorageIO Update newsletter
  • Remember to check out our objectstoragecenter.com page where you will find a growing collection of information and links on cloud and object storage themes, technologies and trends from various sources.

    If you are interested in data protection including Backup/Restore, BC, DR, BR and Archiving along with associated technologies, tools, techniques and trends visit our storageioblog.com/data-protection-diaries-main/ page.

    StorageIO events and activities

Server and StorageIO seminars, conferences, webcasts, events, activities

    The StorageIO calendar continues to evolve including several new events being added for August and well into the fall with more in the works. Here are some recent and upcoming activities including live in-person seminars, conferences, keynote and speaking activities as well as on-line webinars, twitter chats, Google+ hangouts among others.

  • October 10, 2014 – Seminar: Server, Storage and IO Data Center Virtualization Jumpstart (Nijkerk Holland, Netherlands)
  • October 9, 2014 – Seminar: Data Infrastructure Industry Trends and Perspectives – What’s The Buzz (Nijkerk Holland, Netherlands)
  • October 8, 2014 – Private Seminar – Contact Brouwer Storage Consultancy (Nijkerk Holland, Netherlands)
  • Sep. 2, 2014 – Dell BackupU – Exploring the Data Protection Toolbox – Data and Application Replication (Online Webinar, 11AM PT)
  • August 25-28, 2014 – VMworld – Various Activities (San Francisco)
  • Aug. 21, 2014 – BrightTalk – SAN, LAN, MAN, WAN, POTS & PANs – How you CAN network your servers & storage beyond the cable? (Webinar, 11AM PT)
  • Aug. 20, 2014 – TBA – Software Defined Data Centers (TBA, 1:30PM CT)
  • Aug. 19, 2014 – Dell BackupU – Exploring the Data Protection Toolbox – The ABCDs of DFR (Data Footprint Reduction), part II (Google+ Hangout, 9AM PT)
  • Aug. 13, 2014 – BrightTalk – What is Your Virtualization Optimization Objective? (Webinar, 1PM PT)
  • Aug. 7, 2014 – Starwind Software – Live webinar: Hardware agnostic Virtual SAN for VMware ESXi Free (Webinar, 1PM CT)
  • Aug. 5, 2014 – Dell BackupU – Exploring the Data Protection Toolbox – The ABCDs of DFR (Data Footprint Reduction), part II (Online Webinar, 11AM PT)

    Click here to view other upcoming along with earlier event activities. Watch for more 2014 events to be added soon to the StorageIO events calendar page. Topics include data protection modernization (backup/restore, HA, BC, DR, archive), data footprint reduction (archive, compression, dedupe), storage optimization, SSD, object storage, server and storage virtualization, big data, little data, cloud and object storage, performance and management trends among others.

    Vendors, VAR’s and event organizers, give us a call or send an email to discuss having us involved in your upcoming pod cast, web cast, virtual seminar, conference or other events.

    Server and StorageIO Technology Tips and Tools

Server and StorageIO seminars, conferences, webcasts, events, activities

    FedTech:  Use a VPN for More than Remote Access
    InfoStor:  Data Archiving: Life Beyond Compliance

    Just for fun and on a lighter note

Wrapping up this edition of the StorageIO Update newsletter is the return of the Just for fun and on a lighter note section, where we share something non-IT related. In this edition, how about a summertime backyard home video taken a few weeks ago? Check out this video of a black bear and her two cubs walking in, well, my backyard. First you will see Big Mama Bear, then Yogi Jr. appears from the left, followed by baby Boo Boo, also from the left.

    Backyard Black Bears
    Video courtesy of KarenofArcola – Click on image to view

    StorageIO Update Newsletter Archives

Click here to view earlier StorageIO Update newsletters (HTML and PDF versions) at www.storageio.com/newsletter, where you can also download PDF versions. Subscribe to this newsletter (and pass it along) by clicking here (via the secure Campaigner site).

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Part II: What I did with Lenovo TS140 in my Server and Storage I/O Review

    Storage I/O trends

    Part II: Lenovo TS140 Server and Storage I/O Review


    This is the second of a two-part post series on my recent experiences with a Lenovo TS140 Server, you can read part I here.

    What Did I do with the TS140

    After initial check out in an office type environment, I moved the TS140 into the lab area where it joined other servers to be used for various things.

Some of those activities included using Windows Server 2012 Essentials along with associated admin activities. I also installed VMware ESXi 5.5 and ran into a few surprises. One of those was that I needed to apply an update to the VMware drivers to support the onboard Intel NIC, as well as enable the VT and EP virtualization-assist modes via the BIOS. The biggest surprise was that I discovered I could not install VMware onto an internal drive attached via one of the internal SATA ports, which turns out to be a BIOS firmware issue.

    Lenovo confirmed this when I brought it to their attention, and the workaround is to use USB to install VMware onto a USB flash SSD thumb drive, or other USB attached drive or to use external storage via an adapter. As of this time Lenovo is aware of the VMware issue, however, no date for new BIOS or firmware is available. Speaking of BIOS, I did notice that there was some newer BIOS and firmware available (FBKT70AUS December 2013) than what was installed (FB48A August of 2013). So I went ahead and did this upgrade which was a smooth, quick and easy process. The process included going to the Lenovo site (see resource links below), selecting the applicable download, and then installing it following the directions.

Since I was going to install various PCIe SAS adapters into the TS140 attached to external SAS and SATA storage, this was not a big issue, more of an inconvenience. Likewise, for using storage mounted internally, the workaround is to use a SAS or SATA adapter with internal ports (or cable). Speaking of USB workarounds, if you have an HDD, HHDD, SSHD or SSD that is a SATA device and need to attach it to USB, then get one of these cables. Note that there are USB 3.0 and USB 2.0 cables (see below) available, so choose wisely.

    USB to SATA adapter cable

In addition to running various VMware-based workloads with different guest VMs, I also ran Futuremark PCMark (btw, if you do not have this in your server storage I/O toolbox, it should be there) to gauge the system's performance. As mentioned, the TS140 is quiet. However, it also has good performance depending on what processor you select. Note that while the TS140 has a list price as of the time of this post of under $400 USD, that will change depending on which processor, amount of memory, software and other options you choose.

    Futuremark PCMark
    PCmark

PCMark test: Result
Composite score: 2274
Compute: 11530
System Storage: 2429
Secondary Storage: 2428
Productivity: 1682
Lightweight: 2137

    PCmark results are shown above for the Windows Server 2012 system (non-virtualized) configured as shipped and received from Lenovo.

    What I liked

Unbelievably quiet, which may not seem like a big deal; however, if you are looking to deploy a server or system into a small office workspace, this becomes an important consideration. On the other hand, if you are a power user and want a robust server that can be installed into a home media entertainment system, well, this might be a nice-to-have consideration ;).

Something else that I liked is that the TS140 with the E3-1220 v3 processor family supports PCIe G3 adapters, which are useful if you are going to be using 10GbE cards or 12Gbps SAS and faster cards to move lots of data, support more IOPS or reduce response-time latency.

In addition, while only 4 DIMM slots is not very much, it is more than what some other similarly focused systems have; plus, with large-capacity DIMMs, you can still get a nice system, or two, or three, or four for a cluster at a good price or value (hmm, VSAN anybody?). Also, while not a big item, the TS140 does not require ordering an HDD or SSD if you are not also ordering software; you can get the system diskless and use your own drives.

Speaking of I/O slots, naturally I'm interested in server storage I/O, so having multiple slots is a must-have, along with a processor that is quad core (pretty much standard these days) and supports VT and EP for VMware (these were disabled in the BIOS; however, that was an easy fix).

    Then there is the price as of this posting starting at $379 USD which is a bare bones system (e.g. minimal memory, basic processor, no software) whose price increases as you add more items. What I like about this price is that it has the PCIe G3 slot as well as other PCIe G2 slots for expansion meaning I can install 12Gbps (or 6Gbps) SAS storage I/O adapters, or other PCIe cards including SSD, RAID, 10GbE CNA or other cards to meet various needs including software defined storage.

    What I did not like

I would like to have had at least six vs. four DIMM slots; however, keeping in mind the price point of where this system is positioned, not to mention what you could do with it thinking outside of the box, I'm fine with only 4 DIMMs. Space for more internal storage would be nice; however, if that is what you need, then there are larger Lenovo models to look at. By the way, thinking outside of the box, could you do something like a Hadoop, OpenStack, object storage, VMware VSAN or other cluster with these, in addition to using one as a Windows Server?

    Yup.

    Granted you won’t have as much internal storage, as the TS140 only has two fixed drive slots (for more storage there is the model TD340 among others).

However, it is not that difficult to add more (not Lenovo endorsed) by adding a StarTech enclosure like I did with my other systems (see here). Oh, and those extra PCIe slots, that's where a 12Gbps (or 6Gbps) adapter comes into play while leaving room for GbE cards and PCIe SSD cards. Btw, not sure what to do with that PCIe x1 slot; that's a good place for a dual GbE NIC to add more networking ports or a SATA adapter for attaching to larger-capacity, slower drives.

    StarTech 2.5" SAS and SATA drive enclosure on Amazon.com
    StarTech 2.5″ SAS SATA drive enclosure via Amazon.com

    If VMware is not a requirement, and you need a good entry level server for a large SOHO or small SMB environment, or, if you are looking to add a flexible server to a lab or for other things the TS140 is good (see disclosure below) and quiet.

    Otoh as mentioned, there is a current issue with the BIOS/firmware with the TS140 involving VMware (tried ESXi 5 & 5.5).

    However I did find a workaround which is that the current TS140 BIOS/Firmware does work with VMware if you install onto a USB drive, and then use external SAS, SATA or other accessible storage which is how I ended up using it.

    Lenovo TS140 resources include

  • TS140 Lenovo ordering website
  • TS140 Data and Spec Sheet (PDF here)
  • Lenovo ThinkServer TS140 Manual (PDF here)
  • Intel E3-1200 v3 processors capabilities (Web page here)
  • Lenovo Drivers and Software (Web page here)
  • Lenovo BIOS and Drivers (Web page here)
  • Enabling Virtualization Technology (VT) in TS140 BIOS (Press F1) (Read here)
  • Enabling Intel NIC (82579LM) GbE with VMware (Link to user forum and a blog site here)
  • My experience from a couple years ago dealing with Lenovo support for a laptop issue

Summary

    Disclosure: Lenovo loaned the TS140 to me for just under two months including covering shipping costs at no charge (to them or to me) hence this is not a sponsored post or review. On the other hand I have placed an order for a new TS140 similar to the one tested that I bought on-line from Lenovo.

    This new TS140 server that I bought joins the Dell Inspiron I added late last year (read more about that here) as well as other HP and Dell systems.

Overall I give the Lenovo TS140 a provisional "A", which would be a solid "A" once the BIOS/firmware issue mentioned above is resolved for VMware. On the other hand, if you are not concerned about using the TS140 with VMware (or can do a workaround), then consider it an "A".

    As mentioned above, I liked it so much I actually bought one to add to my collection.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, vSAN and VMware vExpert. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved