March 2017 Server StorageIO Data Infrastructure Update Newsletter

Volume 17, Issue III

Hello and welcome to the March 2017 issue of the Server StorageIO update newsletter.

First, a reminder that World Backup (and recovery) Day is March 31. Following up on the February Server StorageIO update newsletter, which focused on data protection, this edition includes some additional posts, articles, tips and commentary below.

Other data infrastructure (and tradecraft) topics in this edition include cloud, virtual, server, storage and I/O including NVMe as well as networks. Industry trends include new technology and services announcements, cloud services, and HPE buying Nimble among other activity. Check out the Converged Infrastructure (CI), Hyper-Converged (HCI) and Cluster in Box (or Cloud in Box) coverage, including a recent SNIA webinar where I was the guest presenter, along with a companion post below.

In This Issue

Enjoy this edition of the Server StorageIO update newsletter.

Cheers GS

Data Infrastructure and IT Industry Activity Trends

Some recent Industry Activities, Trends, News and Announcements include:

Dell EMC has discontinued the DSSD D5, its NVMe direct attached shared all flash array. At about the same time it is shutting down the DSSD D5 product, Dell EMC has also signaled it will leverage the various technologies including NVMe across its broad server storage portfolio in different ways moving forward. While Dell EMC is shutting down DSSD D5, it is also bringing additional NVMe solutions to the market, including those it has been shipping for years (e.g. on the server-side). Learn more about DSSD D5 here and here, including perspectives on how it could have been used (plays for playbooks).

Meanwhile NVMe industry activity continues to expand, with different solutions from startups such as E8 and Excelero, along with Everspin, Intel, Mellanox, Micron, Samsung and WD (SanDisk) among others. Also keep in mind: if the answer is NVMe, then what were (and are) the questions to ask? Likewise check out some easy-to-use benchmark scripts (using fio, diskspd, vdbench and Iometer).

Speaking of NVMe, flash and SSDs, Amazon Web Services (AWS) has added new Elastic Compute Cloud (EC2) storage and I/O optimized i3 instances. These new instances are available in various configurations with different amounts of vCPU (cores or logical processors), memory and NVMe SSD capacity (and quantity), along with different prices.

Note that the price per i3 instance varies not only by its configuration, but also by the image and region it is deployed in. The flash SSD capacities range from an entry-level i3.large with 2 vCPU (logical processors), 15.25GB of RAM and a single 475GB NVMe SSD, which for example in the US East Region was recently priced at $0.156 per hour. At the high-end there is the i3.16xlarge with 64 vCPU (logical processors), 488GB RAM and 8 x 1900GB NVMe SSDs, with a recent US East Region price of $4.992 per hour. Note that vCPU refers to the number of logical processors available, not necessarily cores or sockets.
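
For a rough sense of scale (my math as an approximation, not an AWS quote, assuming on-demand pricing and about 730 hours in a month), the i3.large works out to roughly $0.156 x 730 ≈ $114 per month, while the i3.16xlarge is roughly $4.992 x 730 ≈ $3,644 per month, before any storage, data transfer or other charges.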

Also note that your performance will vary, and while the NVMe protocol tends to use less CPU per I/O, generating a large number of I/Os still requires some CPU. What this means is that if you find your performance limited compared to expectations with the lower end i3 instances, move up to a larger instance and see what happens. If you have a Windows-based environment, you can use a tool such as Diskspd to see what happens to I/O performance as you decrease the number of CPUs used.
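
For example, here is a minimal sketch (assuming a test file on an N: volume with sufficient free space and a recent Diskspd build; sizes, threads and duration are placeholders to adjust for your environment) that runs the same 4KB random read workload first with all CPUs and then affinitized to only CPUs 0 and 1 via -a, so you can compare IOPs, latency and CPU usage as the number of processors is reduced:

diskspd.exe -c10G -b4K -r -o32 -t4 -d60 -h -L N:\iobw.tst > i3_allcpu.txt
diskspd.exe -c10G -b4K -r -o32 -t4 -d60 -h -L -a0,1 N:\iobw.tst > i3_twocpu.txt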

Chelsio has announced that it is now Microsoft Azure Stack certified with its iWARP RDMA host adapter solutions, which also apply to converged infrastructure (CI), hyper-converged (HCI) and legacy server storage deployments. As part of the announcement, Chelsio is also offering a 30-day no-cost trial of its adapters for Microsoft Azure Stack, Windows Server 2016 and Windows 10 client environments. Learn more about the Chelsio trial offer here.

Everspin (the spin-torque MRAM persistent RAM folks) has announced nvNITRO, a new family of NVMe-accessible Storage Class Memory (SCM) storage accelerator devices (PCIe AiC, U.2). What's interesting about Everspin is that it uses NVMe for accessing its persistent RAM (e.g. MRAM), making it easily plug compatible with existing operating systems and hypervisors. This means using standard out-of-the-box NVMe drivers, where the Everspin SCM appears as a block device (for compatibility) functioning as a low latency, high performance persistent write cache.

Something else interesting, besides making the new memory compatible with existing servers' CPU complex via PCIe, is how Everspin is demonstrating that NVMe as a general access protocol is not exclusive to NAND flash-based SSDs. What this means is that instead of using non-persistent DRAM, or slower NAND flash (or 3D XPoint SCM), Everspin nvNITRO enables a high endurance write cache with persistence to complement existing NAND flash as well as emerging 3D XPoint based storage. Keep an eye on Everspin as they are doing some interesting things worth future discussions.

Google Cloud Services has added additional regions (cloud locations) and other enhancements.

HPE continued buying into server storage I/O data infrastructure technologies, announcing an all cash (e.g. no stock) acquisition of Nimble Storage (NMBL). The cash acquisition, a little over $1B USD, amounts to $12.50 USD per Nimble share, double what it had been trading at. As a refresher, or overview, Nimble makes all flash shared storage systems leveraging NAND flash solid state device (SSD) performance. Note that Nimble also partners with Cisco and Lenovo, whose platforms compete with HPE servers for converged systems. View additional perspectives here.

Riverbed has announced the release of SteelFusion 5. While the name implies physical hardware (metal), the solution is available as tin wrapped software (e.g. a hardware appliance) as well as for deployment as a VMware virtual appliance for remote office branch office (ROBO) and other environments. Enhancements include converged functionality such as NAS support, along with network latency and bandwidth improvements, among other features.

Check out other industry news, comments, trends perspectives here.

Server StorageIOblog Posts

Recent and popular Server StorageIOblog posts include:

View other recent as well as past StorageIOblog posts here

Server StorageIO Commentary in the news

Recent Server StorageIO industry trends perspectives commentary in the news.

Via InfoStor: 8 Big Enterprise SSD Trends to Expect in 2017
Watch for increased capacities at lower cost, greater awareness of the differences between high-capacity, low-cost, lower performing SSDs and those with improved durability and performance, along with cost and capacity enhancements for active (read and write optimized) SSDs. You can also expect increased support for NVMe, both as a back-end storage device with different form factors (e.g., M.2 gum sticks, U.2 8639 drives, PCIe cards) as well as on the front-end (e.g., storage systems that are NVMe-attached), including local direct-attached and fiber-attached. This means more awareness around NVMe both as a front-end and back-end deployment option.

Via SearchITOperations: Storage performance bottlenecks
Sometimes it takes more than an aspirin to cure a headache. There may be a bottleneck somewhere else, in hardware, software, storage system architecture or something else.

Via SearchDNS: Parsing through the software-defined storage hype
Beyond scalability, SDS technology aims for freedom from the limits of proprietary hardware.

Via InfoStor: Data Storage Industry Braces for AI and Machine Learning
AI could also lead to untapped, hidden or unknown value in existing data that has little or no perceived value.

Via SearchDataCenter: New options to evolve data backup recovery

View more Server, Storage and I/O trends and perspectives comments here

Various Tips, Tools, Technology and Tradecraft Topics

Recent Data Infrastructure Tradecraft Articles, Tips, Tools, Tricks and related topics.

Via ComputerWeekly: Time to restore from backup: Do you know where your data is?
Via IDG/NetworkWorld: Ensure your data infrastructure remains available and resilient
Via IDG/NetworkWorld: What's a data infrastructure?

Check out Scott Lowe (@Scott_Lowe) of VMware fame who, while having a virtual networking focus, has a nice roundup of related data infrastructure topics including cloud and open source among others.

Want to take a break from reading or listening to tech talk? Check out some of the fun videos, including aerial drone footage (and some technology topics), at www.storageio.tv.

View more tips and articles here

Events and Activities

Recent and upcoming event activities.

May 8-10, 2017 – Dell EMCworld – Las Vegas

April 3-7, 2017 – Seminars – Dutch workshop seminar series – Nijkerk Netherlands

March 15, 2017 – Webinar – SNIA/BrightTalk – HyperConverged and Storage – 10AM PT

January 26, 2017 – Seminar – Presenting at Wipro SDx Summit – London UK

See more webinars and activities on the Server StorageIO Events page here.


Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book Software-Defined Data Infrastructure Essentials (CRC Press).

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

Kevin Closson discusses SLOB Server CPU I/O Database Performance benchmarks

Silly Little Oracle Benchmark (SLOB) Database Server I/O Podcast


In this Server StorageIO podcast episode, I am joined by @Kevinclosson, who is an Oracle (along with other databases) performance expert and creator of the Silly Little Oracle Benchmark (SLOB) tool. Not surprisingly, our data infrastructure discussion involves server CPU, software, I/O, storage, performance, tools, best practices and fundamental tradecraft skills among other items.


Kevin has been involved in database performance (and porting) optimization for decades, which means server CPU, memory, I/O and storage issues, resources and tuning. Part of server and storage I/O tuning is understanding the workloads along with the demands of software such as databases, including how they use CPU and the resulting impact on resources. This means that somewhere in the technology stack, server CPUs are still needed, even in serverless environments.

We also discuss metrics, gaining insight into resource usage and what the metrics mean, including how CPU wait may be costing you lost productivity via overhead, as well as benchmarks, simulations, and related themes. Check out Kevin's website www.kevinclosson.net to learn more about Oracle, databases, SLOB, tools and other content. Listen to the podcast discussion here (42 minutes) as well as on iTunes.

Where to learn more

Learn more about Oracle, Database Performance, Benchmarking along with other tools via the following links:

What this all means and wrap-up

Check out my discussion here with Kevin Closson where you may have some déjà vu, or learn something new about server and storage I/O, database performance, software and benchmark workloads, as well as much more. Also available on iTunes.

Ok, nuff said for now…

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

Server storage I/O performance benchmark workload scripts Part I


Update 1/28/2018

This is part one of a two-part series of posts about Server storage I/O performance benchmark workload tools and scripts. View part II here which includes the workload scripts and where to view sample results.

There are various tools and workloads for server I/O benchmark testing, validation and exercising different storage devices (or systems and appliances) such as Non-Volatile Memory (NVM) flash Solid State Devices (SSDs) or Hard Disk Drives (HDD) among others.

Various NVM flash SSD including NVMe devices

For example, let's say you have an SSD such as an Intel 750 (here, here, and here) or some other vendor's NVMe PCIe Add in Card (AiC) installed in a Microsoft Windows server and would like to see how it compares with expected results. The following scripts allow you to validate your system against those of others running the same workload, granted of course your mileage (performance) may vary.


Why Your Performance May Vary

Reasons your performance may vary include, among others:

  • GHz Speed of your server, number of sockets, cores
  • Amount of main DRAM memory
  • Number, type and speed of PCIe slots
  • Speed of storage device and any adapters
  • Device drivers and firmware of storage devices and adapters
  • Server power mode setting (e.g. low or balanced power vs. high-performance)
  • Other workload running on system and device under test
  • Solar flares (kp-index) among other urban (or real) myths and issues
  • Typos or misconfiguration of workload test scripts
  • Test server, storage, I/O device, software and workload configuration
  • Versions of test software tools among others

Windows Power (and performance) Settings

Some things are assumed or taken for granted that everybody knows and does, however sometimes the obvious needs to be stated or re-stated. An example is remembering to check your server power management settings to see if they are in an energy efficiency power savings mode, or in high-performance mode. Note that if your focus is on getting the best possible performance for effective productivity, then you want to be in high-performance mode. On the other hand, if performance is not your main concern and the focus is instead on energy avoidance, then use low power mode, or perhaps balanced.

For Microsoft Windows Servers, Desktop Workstations, Laptops and Tablets you can adjust power settings via the control panel GUI as well as from the command line or PowerShell. From the command line (privileged or administrator), the following set the balanced or high-performance power settings.

Balanced

powercfg.exe /setactive 381b4222-f694-41f0-9685-ff5bb260df2e

High Performance

powercfg.exe /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c

From PowerShell, the following set balanced or high-performance.

Balanced
PowerCfg -SetActive "381b4222-f694-41f0-9685-ff5bb260df2e"

High Performance
PowerCfg -SetActive "8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c"

Note that you can list Windows power management settings using powercfg -LIST and powercfg -QUERY.
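
For example (assuming a recent Windows version), the following list the available power schemes, show which scheme is currently active, and dump the settings of the current (or a specified) scheme:

powercfg -LIST
powercfg -GETACTIVESCHEME
powercfg -QUERY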


Btw, if you have not already done so, enable the Windows disk (HDD and SSD) performance counters so that they appear via Task Manager by entering the following from a command prompt:

diskperf -y

Workload (Benchmark) Simulation Test Tools Used

There are many tools (see storageio.com/performance) that can be used for creating and running workloads, just as there are various application server I/O characteristics. Different server I/O and application performance attributes include, among others, read vs. write, random vs. sequential, large vs. small, long vs. short stride, burst vs. sustained, cache and non-cache friendly, and activity vs. data movement vs. latency vs. CPU usage. Likewise the number of workers, jobs, threads, and outstanding or overlapped I/Os, among other configuration settings, can have an impact on the workload and results (see the example after the tool list below).

The four free tools that I’m using with this set of scripts are:

  • Microsoft Diskspd (free), get the tool and bits here or here (open source), learn more about Diskspd here.
  • FIO.exe (free), get the tool and bits here or here among other venues.
  • Vdbench (free with registration), get the tool and bits here or here among other venues.
  • Iometer (free), get the tool and bits here among other venues.

Notice: While best effort has been made to verify the above links, they may change over time and you are responsible for verifying the safety of links and your downloads.
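
To make the workload attributes above concrete, here is a single hedged fio example (Windows syntax, with a hypothetical 10GB test file on an F: volume) showing how each knob maps to a parameter: --rw controls random vs. sequential, --rwmixread the read/write split, --bs the I/O size, and --numjobs/--iodepth the workers and outstanding I/Os:

fio --name=sample --filename=F\:\iobw.tst --filesize=10240M --direct=1 --ioengine=windowsaio --rw=randrw --rwmixread=70 --bs=8k --numjobs=4 --iodepth=16 --ramp_time=30 --runtime=120 --time_based --group_reporting

The full scripts in part II vary these same parameters to produce the different workload mixes.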

Where To Learn More

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What This All Means

Remember, everything is not the same in the data center or with data infrastructures that support different applications.

While some tools are more robust or better than others for different things, ultimately it's usually not the tool that results in a bad benchmark or comparison, it's the configuration, or the use of workload settings that are not relevant or applicable. The best benchmark, workload or simulation is your own application. Second best is one that closely resembles your application workload characteristics. A bad benchmark is one that has no relevance to your environment or application use scenario. Take and treat all benchmark or workload simulation results with a grain of salt, as something to compare, contrast or make reference to in the proper context. Read part two of this post series to view test tool workload scripts along with sample results.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Part II – Some server storage I/O workload scripts and results


Updated 1/28/2018

This is the second in a two part series of posts pertaining to using some common server storage I/O workload benchmark tools and scripts. View part I here which includes overview, background and information about the tools used and related topics.

Various NVM flash SSD including NVMe devices

Following are some server I/O benchmark workload scripts to exercise various storage devices such as Non-Volatile Memory (NVM) flash Solid State Devices (SSDs) or Hard Disk Drives (HDD) among others.

The Workloads

Some things that can impact the workload performance results, besides changing the I/O size, read/write mix and random/sequential mix, are the number of threads, workers and jobs. Note that in the workload steps, the larger 1MB and sequential scenarios have fewer threads and workers vs. the smaller IOP or activity focused workloads. Too many threads or workers can cause overhead and you will reach a point of diminishing returns; likewise with too few you will not drive the system under test (SUT) or device under test (DUT) to its full potential. If you are not sure how many threads or workers to use, run some short calibration tests (see the sketch below) and review the results before doing a larger, longer test.
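
As a simple sketch of such a calibration pass (assuming a hypothetical N: test volume, short 60 second runs and an 8KB random read workload; adjust sizes, threads and durations for your environment), you could step through a few thread counts with Diskspd and compare the resulting IOPs and latency before settling on values for the longer runs:

diskspd.exe -c10G -b8K -r -o32 -t4 -W10 -d60 -h -L N:\iobw.tst > cal_4threads.txt
diskspd.exe -c10G -b8K -r -o32 -t8 -W10 -d60 -h -L N:\iobw.tst > cal_8threads.txt
diskspd.exe -c10G -b8K -r -o32 -t16 -W10 -d60 -h -L N:\iobw.tst > cal_16threads.txt

When additional threads stop adding IOPs and only add response time, you have found the point of diminishing returns for that device.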

Keep in mind that the best benchmark or workload is your own application running with similar load to what you would see in real world, along with applicable features, configuration and functionality enabled. The second best would be those that closely resemble your workload characteristics and that are relevant.

The following workloads involved a system test initiator (STI) server driving the workload using the different tools and scripts shown. The STI sends the workload to a SUT or DUT that can be a single drive, card or multiple devices, storage system or appliance. Warning: The following workload tests do both reads and writes, which can be destructive to your device under test. Exercise caution with the device and file name specified to avoid causing a problem that might result in you testing your backup / recovery process. Likewise no warranty is given, implied or made for these scripts or their use or results; they are simply supplied as-is for your reference.

The four free tools that I’m using with this set of scripts are:

  • Microsoft Diskspd (free), get the tool and bits here or here (open source), learn more about Diskspd here.
  • FIO.exe (free), get the tool and bits here or here among other venues.
  • Vdbench (free with registration), get the tool and bits here or here among other venues.
  • Iometer (free), get the tool and bits here among other venues.

Notice: While best effort has been made to verify the above links, they may change over time and you are responsible for verifying the safety of links and your downloads

Microsoft Diskspd workloads

Note that a 300GB file named iobw.tst on device N: is used for performing the read and write I/Os. There are 160 threads, I/O sizes of 4KB and 8KB, varying from 100% read (0% write), to 70% read (30% write) and 0% read (100% write), with random (seek) access and no hardware or software cache. Also specified are collecting latency statistics, a 30 second warm up (ramp up) time, and a quick 5 minute duration (test time). 5 minutes is a quick test for calibration and verifying your environment, however it is relatively short for a real test, which should run for hours or more depending on your needs.

Note that the output results are put into a file with a name describing the test tool, workload and other useful information such as date and time. You may also want to specify a different directory where output files are placed.

diskspd.exe -c300G -o160 -t160 -b4K -w0 -W30 -d300 -h -fr  N:iobw.tst -L  > DiskSPD_300G_4KRan100Read_160x160_072416_8AM.txt
diskspd.exe -c300G -o160 -t160 -b4K -w30 -W30 -d300 -h -fr  N:iobw.tst -L  > DiskSPD_300G_4KRan70Read_160x160_072416_8AM.txt
diskspd.exe -c300G -o160 -t160 -b4K -w100 -W30 -d300 -h -fr  N:iobw.tst -L  > DiskSPD_300G_4KRan0Read_160x160_072416_8AM.txt
diskspd.exe -c300G -o160 -t160 -b8K -w0 -W30 -d300 -h -fr  N:iobw.tst -L  > DiskSPD_300G_8KRan100Read_160x160_072416_8AM.txt
diskspd.exe -c300G -o160 -t160 -b8K -w30 -W30 -d300 -h -fr  N:iobw.tst -L  > DiskSPD_300G_8KRan70Read_160x160_072416_8AM.txt
diskspd.exe -c300G -o160 -t160 -b8K -w100 -W30 -d300 -h -fr  N:iobw.tst -L  > DiskSPD_300G_8KRan0Read_160x160_072416_8AM.txt

The following Diskspd tests use similar settings as above, however instead of random, sequential access is specified, threads and outstanding I/Os are reduced, and the I/O size is set to 1MB, then 8KB, with 100% read and 100% write scenarios. The -t parameter specifies the number of threads and -o the number of outstanding I/Os per thread.

diskspd.exe -c300G -o32 -t32 -b1M -w0 -W30 -d300 -h -si  N:iobw.tst -L  > DiskSPD_300G_1MSeq100Read_32x32_072416_8AM.txt
diskspd.exe -c300G -o32 -t32 -b1M -w100 -W30 -d300 -h -si  N:iobw.tst -L  > DiskSPD_300G_1MSeq0Read_32x32_072416_8AM.txt
diskspd.exe -c300G -o160 -t160 -b8K -w0 -W30 -d300 -h -si  N:iobw.tst -L  > DiskSPD_300G_8KSeq100Read_32x32_072416_8AM.txt
diskspd.exe -c300G -o160 -t160 -b8K -w100 -W30 -d300 -h -si  N:iobw.tst -L  > DiskSPD_300G_8KSeq0Read_32x32_072416_8AM.txt

Fio.exe workloads

Next are the fio workloads similar to those run using Diskspd except the sequential scenarios are skipped.

fio --filename=N\:\iobw.tst --filesize=300000M --direct=1  --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=windowsaio  --ba=4k --bs=4k --rwmixread=100 --iodepth=32 --numjobs=5 --exitall --time_based  --ramp_time=30 --runtime=300 --group_reporting --name=xxx  --output=FIO_300000M_4KRan100Read_5x32_072416_8AM.txt
fio --filename=N\:\iobw.tst --filesize=300000M --direct=1  --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=windowsaio  --ba=4k --bs=4k --rwmixread=70 --iodepth=32 --numjobs=5 --exitall --time_based  --ramp_time=30 --runtime=300 --group_reporting --name=xxx  --output=FIO_300000M_4KRan70Read_5x32_072416_8AM.txt
fio --filename=N\:\iobw.tst --filesize=300000M --direct=1  --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=windowsaio  --ba=4k --bs=4k --rwmixread=0 --iodepth=32 --numjobs=5 --exitall --time_based  --ramp_time=30 --runtime=300 --group_reporting --name=xxx  --output=FIO_300000M_4KRan0Read_5x32_072416_8AM.txt
fio --filename=N\:\iobw.tst --filesize=300000M --direct=1  --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=windowsaio  --ba=8k --bs=8k --rwmixread=100 --iodepth=32 --numjobs=5 --exitall --time_based  --ramp_time=30 --runtime=300 --group_reporting --name=xxx  --output=FIO_300000M_8KRan100Read_5x32_072416_8AM.txt
fio --filename=N\:\iobw.tst --filesize=300000M --direct=1  --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=windowsaio  --ba=8k --bs=8k --rwmixread=70 --iodepth=32 --numjobs=5 --exitall --time_based  --ramp_time=30 --runtime=300 --group_reporting --name=xxx  --output=FIO_300000M_8KRan70Read_5x32_072416_8AM.txt
fio --filename=N\:\iobw.tst --filesize=300000M --direct=1  --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=windowsaio  --ba=8k --bs=8k --rwmixread=0 --iodepth=32 --numjobs=5 --exitall --time_based  --ramp_time=30 --runtime=300 --group_reporting --name=xxx  --output=FIO_300000M_8KRan0Read_5x32_072416_8AM.txt

Vdbench workloads

Next are the Vdbench workloads similar to those used with the Microsoft Diskspd scenarios. In addition to making sure Vdbench is installed and working, you will need to create a text file called seqrxx.txt containing the following:

hd=localhost,jvms=!jvmn
sd=sd1,lun=!drivename,openflags=directio,size=!dsize
wd=mix,sd=sd1
rd=!jobname,wd=mix,elapsed=!etime,interval=!itime,iorate=max,forthreads=(!tthreads),forxfersize=(!worktbd),forseekpct=(!workseek),forrdpct=(!workread),openflags=directio

The following are the commands that call the Vdbench script file. Note Vdbench puts output files (yes, plural there are many results) in a output folder.

vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=4k workseek=100 workread=100 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o  vdbench_NNVMe_300GB_64JVM_160TH_4K100Ran100Read_0726166AM
vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=4k workseek=100 workread=70 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_4K100Ran70Read_072416_8AM
vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=4k workseek=100 workread=0 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_4K100Ran0Read_072416_8AM
vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=8k workseek=100 workread=100 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_8K100Ran100Read_072416_8AM
vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=8k workseek=100 workread=70 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_8K100Ran70Read_072416_8AM
vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=8k workseek=100 workread=0 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_8K100Ran0Read_072416_8AM
vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=8k workseek=0 workread=100 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_8K100Seq100Read_072416_8AM
vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=8k workseek=0 workread=70 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_8K100Seq70Read_072416_8AM
vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=8k workseek=0 workread=0 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_8K100Seq0Read_072416_8AM
vdbench -f seqrxx.txt dsize=300G  tthreads=32 jvmn=64 worktbd=1M workseek=0 workread=100 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_32TH_1M100Seq100Read_072416_8AM
vdbench -f seqrxx.txt dsize=300G  tthreads=32 jvmn=64 worktbd=1M workseek=0 workread=0 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_32TH_1M100Seq0Read_072416_8AM

Iometer workloads

Last but not least, let's do an Iometer run. The following command calls an Iometer input file (icf) that you can find here. In that file you will need to make a few changes, including the name of the server where Iometer is running, the description, and the device under test address. For example, in the icf file change SIOSERVER to the name of the server where you will be running Iometer. Also change the address of the DUT, for example N:, to whatever address, drive or mount point you are using. Also update the description accordingly (e.g. "NVME" to "Your test example").

Here is the command line to run Iometer specifying an icf and where to put the results in a CSV file that can be imported into Excel or other tools.

iometer /c  iometer_5work32q_intel_Profile.icf /r iometer_nvmetest_5work32q_072416_8AM.csv


What About The Results?

For context, the following results were run on a Lenovo TS140 (32GB RAM), single socket quad core (3.2GHz) Intel E3-1225 v3 with an Intel NVMe 750 PCIe AiC (Intel SSDPEDMW40). Out of the box Microsoft Windows NVMe drive and controller drivers were used (e.g. 6.3.9600.18203 and 6.3.9600.16421). Operating system is Windows 2012 R2 (bare metal) with NVMe PCIe card formatted with ReFS file system. Workload generator and benchmark driver tools included Microsoft Diskspd version 2.012, Fio.exe version 2.2.3, Vdbench 50403 and Iometer 1.1.0. Note that there are newer versions of the various workload generation tools.

Example results are located here.

Where To Learn More

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What This All Means

Remember, everything is not the same in the data center or with data infrastructures that support different applications.

While some tools are more robust or better than others for different things, ultimately it's usually not the tool that results in a bad benchmark or comparison, it's the configuration, or the use of workload settings that are not relevant or applicable. The best benchmark, workload or simulation is your own application. Second best is one that closely resembles your application workload characteristics. A bad benchmark is one that has no relevance to your environment or application use scenario. Take and treat all benchmark or workload simulation results with a grain of salt, as something to compare, contrast or make reference to in the proper context.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Which Enterprise HDD for Content Applications General I/O Performance


Updated 1/23/2018

Which enterprise HDD to use with a content server platform: general I/O performance. Insight for effective server storage I/O decision making.
Server StorageIO Lab Review

This is the sixth in a multi-part series (read part five here) based on a hands-on lab report white paper I did compliments of Servers Direct and Seagate, which you can read in PDF form here. The focus is looking at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drives (HDDs). In this post the focus is on general I/O performance, including 8KB and 128KB I/O sizes.

General I/O Performance

In addition to running database and file (large and small) processing workloads, Vdbench was also used to collect basic small (8KB) and large (128KB) sized I/O operation results. This consisted of random and sequential reads as well as writes, with the results shown below. In addition to using Vdbench, other tools that could be used include Microsoft Diskspd, fio, iorate and Iometer among many others.

These workloads used Vdbench configured (see note 13) to do direct I/O to a Windows file system mounted device, using as much of the available disk space as possible. All workloads used 16 threads and were run concurrently, similar to the database and file processing tests.

(Note 13) Sample vdbench configuration for general I/O, note different settings were used for various tests
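
While the exact parameter file used in the lab is not reproduced here, a minimal sketch of what a vdbench general I/O configuration along these lines might look like (assuming a Windows file system target, direct I/O and 16 threads; the device name, size and run times are placeholders) is:

sd=sd1,lun=E:\vdbench.tst,size=250g,openflags=directio
wd=wd_8krand,sd=sd1,xfersize=8k,seekpct=100,rdpct=75
rd=rd_8krand,wd=wd_8krand,iorate=max,elapsed=7200,interval=30,threads=16

Changing xfersize (e.g. to 128k), seekpct (100 for random, 0 for sequential) and rdpct produces the other workload mixes described in this post.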

Table-7 shows workload results for 8KB random I/Os with 75% read (25% write) and 25% read (75% write) mixes, including IOPs, bandwidth and response time.

 

                    ENT 15K RAID1        ENT 10K RAID1        ENT CAP RAID1        ENT 10K R10 (4 Drives)   ECAP SW RAID (5 Drives)
                    75% Read   25% Read  75% Read   25% Read  75% Read   25% Read  75% Read   25% Read      75% Read   25% Read
I/O Rate (IOPs)     597.11     559.26    514        475       285        293       979        984           491        644
MB/sec              4.7        4.4       4.0        3.7       2.2        2.3       7.7        7.7           3.8        5.0
Resp. Time (Sec.)   25.9       27.6      30.2       32.7      55.5       53.7      16.3       16.3          32.6       24.8

Table-7 8KB sized random IOPs workload results

Figure-6 shows small (8KB) random I/O (75% read and 25% read) across different HDD configurations. Performance including activity rates (e.g. IOPs), bandwidth and response time for mixed reads / writes are shown. Note how response time increases with the Enterprise Capacity configurations vs. other performance optimized drives.

Figure-6 8KB random reads and write showing IOP activity, bandwidth and response time

Table-8 below shows workload results for 8KB sized I/Os, 100% sequential, with 75% read (25% write) and 25% read (75% write) mixes, including IOPs, MB/sec and response time in seconds.

                    ENT 15K RAID1        ENT 10K RAID1        ENT CAP RAID1        ENT 10K R10 (4 Drives)   ECAP SW RAID (5 Drives)
                    75% Read   25% Read  75% Read   25% Read  75% Read   25% Read  75% Read   25% Read      75% Read   25% Read
I/O Rate (IOPs)     3,778      3,414     3,761      3,986     3,379      1,274     11,840     8,368         2,891      1,146
MB/sec              29.5       26.7      29.4       31.1      26.4       10.0      92.5       65.4          22.6       9.0
Resp. Time (Sec.)   2.2        3.1       2.3        2.4       2.7        10.9      1.3        1.9           5.5        14.0

Table-8 8KB sized sequential workload results

Figure-7 shows small 8KB sequential mixed reads and writes (75% read / 25% write and 25% read / 75% write). While the Enterprise Capacity 2TB HDD has a large amount of space capacity, its performance in a RAID 1 configuration is slower vs. other similarly configured drives.

Figure-7 8KB sequential mixed reads and writes (75% and 25% read) showing bandwidth activity

Table-9 shows workload results for 100% sequential, 100% read and 100% write 128KB sized I/Os including IOPs, bandwidth and response time.

                    ENT 15K RAID1     ENT 10K RAID1     ENT CAP RAID1     ENT 10K R10 (4 Drives)   ECAP SW RAID (5 Drives)
                    Read     Write    Read     Write    Read     Write    Read      Write           Read     Write
I/O Rate (IOPs)     1,798    1,771    1,716    1,688    921      912      3,552     3,486           780      721
MB/sec              224.7    221.3    214.5    210.9    115.2    114.0    444.0     435.8           97.4     90.1
Resp. Time (Sec.)   8.9      9.0      9.3      9.5      17.4     17.5     4.5       4.6             19.3     20.2

Table-9 128KB sized sequential workload results

Figure-8 shows sequential or streaming operations of larger I/O (100% read and 100% write) request sizes (128KB) that would be found with large content applications. Figure-8 highlights the relationship between lower response time and increased IOPs as well as bandwidth.

Figure-8 128KB sequential reads and write showing IOP activity, bandwidth and response time

Where To Learn More

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What This All Means

Some content applications are doing small random I/Os for databases, key value stores or repositories as well as metadata processing, while others are doing large sequential I/O. 128KB sized I/O may be large for your environment; on the other hand, with an increasing number of applications, file systems and software defined storage management tools among others, 1 to 10MB or even larger I/O sizes are becoming common. The key is selecting I/O sizes, read/write mixes and random vs. sequential access, along with I/O or queue depths, that align with your environment.

Continue reading part seven, the final post in this multi-part series, here, where the focus is on how HDDs continue to evolve, including performance beyond traditional RPM based expectations, along with a wrap up.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Server Storage I/O Benchmark Tools: Microsoft Diskspd (Part I)


This is part one of a two-part post pertaining to Microsoft Diskspd, which is also part of a broader series focused on server storage I/O benchmarking, performance, capacity planning, tools and related technologies. You can view part two of this post here, along with companion links here.

Background

Many people use Iometer for creating synthetic (artificial) workloads to support benchmarking for testing, validation and other activities. While Iometer with its GUI is relatively easy to use and available across many operating system (OS) environments, the tool also has its limits. One of the bigger limits is that Iometer has become dated, with little to no new development for a long time, while other tools, including some new ones, continue to evolve in functionality along with extensibility. Some of these tools have an optional GUI for ease of use or configuration, while others simply have extensive scripting and command parameter capabilities. Many tools are supported across different OS environments including physical, virtual and cloud, while others such as Microsoft Diskspd are OS specific.

Instead of focusing on Iometer and other tools as well as benchmarking techniques (we cover those elsewhere), let's focus on Microsoft Diskspd.



What is Microsoft Diskspd?

Microsoft Diskspd is a synthetic workload generation (e.g. benchmark) tool that runs on various Windows systems as an alternative to Iometer, vdbench, iozone, iorate, fio, sqlio and other tools. Diskspd is a command line tool, which means it can easily be scripted to do reads and writes of various I/O sizes, including random as well as sequential activity. Server and storage I/O can be buffered file system as well as non-buffered, across different types of storage and interfaces. Various performance and CPU usage information is provided to gauge the impact on a system when doing a given number of IOPs and amount of bandwidth, along with response time latency.

What can Diskspd do?

Microsoft Diskspd creates synthetic benchmark workload activity with the ability to define various options to simulate different application characteristics. This includes specifying reads and writes, random or sequential access, and I/O size, along with the number of threads to simulate concurrent activity. Diskspd can be used for testing or validating server and storage I/O systems along with associated software, tools and components. In addition to being able to specify different workloads, Diskspd can also be told which processors to use (e.g. CPU affinity) and whether to use buffered or non-buffered IO, among other things.
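
As a small illustration (a sketch only, with a hypothetical target file; the full option syntax appears later in this post), the same workload can be run with caching left enabled, with caching disabled, and pinned to specific processors:

diskspd -c1G -b8K -r -o8 -t2 -d30 -L C:\temp\testfile.dat
diskspd -c1G -b8K -r -o8 -t2 -d30 -h -L C:\temp\testfile.dat
diskspd -c1G -b8K -r -o8 -t2 -d30 -h -L -a0,1 C:\temp\testfile.dat

The first run uses buffered I/O, the second disables software and hardware caching with -h, and the third additionally affinitizes the threads to CPUs 0 and 1 with -a, so you can see how much of the measured performance comes from buffering and which processors are doing the work.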

What type of storage does Diskspd work with?

Physical and virtual storage including hard disk drives (HDD), solid state devices (SSD) and solid state hybrid drives (SSHD) in various systems or solutions. Storage can be physical devices as well as partitions or file systems. As with any workload tool, when doing writes exercise caution to prevent accidental deletion or destruction of your data.


What information does Diskspd produce?

Diskspd provides output in text as well as XML formats. See an example of Diskspd output further down in this post.

Where to get Diskspd?

You can download your free copy of Diskspd from the Microsoft site here.

The download and installation are quick and easy, just remember to select the proper version for your Windows system and type of processor.

Another tip is to remember to set your path environment variable to point to where you put the Diskspd image.
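
For example, from an administrator command prompt (assuming a hypothetical install location of C:\Tools\Diskspd; adjust to wherever you extracted the appropriate amd64 or x86 binary):

set PATH=%PATH%;C:\Tools\Diskspd

or, to persist it for your user profile via PowerShell:

[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\Tools\Diskspd", "User")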

Also, stating what should be obvious: don't forget that if you are going to be doing any benchmark or workload generation activity on a system where data could potentially be over-written or deleted, make sure you have a good backup and tested restore before you begin, in case something goes wrong.


New to server storage I/O benchmarking or tools?

If you are not familiar with server storage I/O performance benchmarking or using various workload generation tools (e.g. benchmark tools), Drew Robb (@robbdrew) has a Data Storage Benchmarking Guide article over at Enterprise Storage Forum that provides a good framework and quick summary guide to server storage I/O benchmarking.




Via Drew:

Data storage benchmarking can be quite esoteric in that vast complexity awaits anyone attempting to get to the heart of a particular benchmark.

Case in point: The Storage Networking Industry Association (SNIA) has developed the Emerald benchmark to measure power consumption. This invaluable benchmark has a vast amount of supporting literature. That so much could be written about one benchmark test tells you just how technical a subject this is. And in SNIA’s defense, it is creating a Quick Reference Guide for Emerald (coming soon).


But rather than getting into the nitty-gritty nuances of the tests, the purpose of this article is to provide a high-level overview of a few basic storage benchmarks, what value they might have and where you can find out more. 

Read more here including some of my comments, tips and recommendations.


In addition to Drew's benchmarking quick reference guide, also see the server storage I/O benchmarking tools, technologies and techniques resource page (Server and Storage I/O Benchmarking 101 for Smarties).

How do you use Diskspd?


Tip: When you run Microsoft Diskspd it will create a file or data set on the device or volume being tested that it will do its I/O to; make sure that you have enough disk space for what will be tested (e.g. if you are going to test 1TB you need to have more than 1TB of disk space free for use). Another tip is to run as administrator to speed up the initialization (e.g. when Diskspd creates the file that I/Os will be done to).

Tip: In case you forgot, a couple of other useful Microsoft tools (besides Perfmon) for working with and displaying server storage I/O devices including disks (HDD and SSDs) are the commands "wmic diskdrive list [brief]" and "diskpart". With diskpart exercise caution as it can get you in trouble just as fast as it can get you out of trouble.

You can view the Diskspd commands after installing the tool and from a Windows command prompt type:

C:\Users\Username> Diskspd


The above command will display Diskspd help and information about the commands as follows.

Usage: diskspd [options] target1 [ target2 [ target3 ...] ]
version 2.0.12 (2014/09/17)

Available targets:
    file_path
    #<physical drive number>
    <drive letter>:

Available options:

  -?                    display usage information
  -a#[,#[...]]          advanced CPU affinity - affinitize threads to CPUs provided after -a in a round-robin manner within the current KGroup (CPU count starts with 0); the same CPU can be listed more than once and the number of CPUs can be different than the number of files or threads (cannot be used with -n)
  -ag                   group affinity - affinitize threads in a round-robin manner across KGroups
  -b<size>[K|M|G]       block size in bytes/KB/MB/GB [default=64K]
  -B<offset>[K|M|G|b]   base file offset in bytes/KB/MB/GB/blocks [default=0] (offset from the beginning of the file)
  -c<size>[K|M|G|b]     create files of the given size; size can be stated in bytes/KB/MB/GB/blocks
  -C<seconds>           cool down time - duration of the test after measurements finished [default=0s]
  -D<milliseconds>      print IOPS standard deviations; the deviations are calculated for samples of <milliseconds> duration, given in milliseconds (default is 1000)
  -d<seconds>           duration (in seconds) to run test [default=10s]
  -f<size>[K|M|G|b]     file size - this parameter can be used to use only part of the file/disk/partition, for example to test only the first sectors of a disk
  -fr                   open file with the FILE_FLAG_RANDOM_ACCESS hint
  -fs                   open file with the FILE_FLAG_SEQUENTIAL_SCAN hint
  -F<count>             total number of threads (cannot be used with -t)
  -g<bytes per ms>      throughput per thread is throttled to the given bytes per millisecond; note that this cannot be specified when using completion routines
  -h                    disable both software and hardware caching
  -i<count>             number of IOs (burst size) before thinking; must be specified with -j
  -j<milliseconds>      time to think in ms before issuing a burst of IOs (burst size); must be specified with -i
  -I<priority>          set IO priority to <priority>; available values are: 1-very low, 2-low, 3-normal (default)
  -l                    use large pages for IO buffers
  -L                    measure latency statistics
  -n                    disable affinity (cannot be used with -a)
  -o<count>             number of overlapped I/O requests per file per thread (1=synchronous I/O, unless more than 1 thread is specified with -F) [default=2]
  -p                    start async (overlapped) I/O operations with the same offset (makes sense only with -o2 or greater)
  -P<count>             enable printing a progress dot after each <count> completed I/O operations (counted separately by each thread) [default count=65536]
  -r<align>[K|M|G|b]    random I/O aligned to <align> bytes (does not make sense with -s); <align> can be stated in bytes/KB/MB/GB/blocks [default access=sequential, default alignment=block size]
  -R<text|xml>          output format; default is text
  -s<size>[K|M|G|b]     stride size (offset between starting positions of subsequent I/O operations)
  -S                    disable OS caching
  -t<count>             number of threads per file (cannot be used with -F)
  -T<offset>[K|M|G|b]   stride between I/O operations performed on the same file by different threads [default=0] (starting offset = base file offset + (thread number * <offset>)); makes sense only with -t or -F
  -v                    verbose mode
  -w<percentage>        percentage of write requests (-w and -w0 are equivalent); absence of this switch indicates 100% reads. IMPORTANT: your data will be destroyed without a warning
  -W<seconds>           warm up time - duration of the test before measurements start [default=5s]
  -x                    use completion routines instead of I/O Completion Ports
  -X<filepath>          use an XML file for configuring the workload; cannot be used with other parameters
  -z[seed]              set random seed [default=0 if parameter not provided, GetTickCount() if value not provided]

  Write buffer command options. By default, the write buffers are filled with a repeating pattern (0, 1, 2, ..., 255, 0, 1, ...)
  -Z                    zero buffers used for write tests
  -Z<size>[K|M|G|b]     use a global buffer filled with random data as a source for write operations
  -Z<size>[K|M|G|b],<file>  use a global buffer filled with data from <file> as a source for write operations; if <file> is smaller than <size>, its content will be repeated multiple times in the buffer

  Synchronization command options
  -ys<eventname>        signals event <eventname> before starting the actual run (no warmup) (creates a notification event if <eventname> does not exist)
  -yf<eventname>        signals event <eventname> after the actual run finishes (no cooldown) (creates a notification event if <eventname> does not exist)
  -yr<eventname>        waits on event <eventname> before starting the run (including warmup) (creates a notification event if <eventname> does not exist)
  -yp<eventname>        allows stopping the run when event <eventname> is set; it also binds CTRL+C to this event (creates a notification event if <eventname> does not exist)
  -ye<eventname>        sets event <eventname> and quits

  Event Tracing command options
  -ep                   use paged memory for the NT Kernel Logger (by default it uses non-paged memory)
  -eq                   use perf timer
  -es                   use system timer (default)
  -ec                   use cycle count
  -ePROCESS             process start & end
  -eTHREAD              thread start & end
  -eIMAGE_LOAD          image load
  -eDISK_IO             physical disk IO
  -eMEMORY_PAGE_FAULTS  all page faults
  -eMEMORY_HARD_FAULTS  hard faults only
  -eNETWORK             TCP/IP, UDP/IP send & receive
  -eREGISTRY            registry calls


Examples:

Create 8192KB file and run read test on it for 1 second:

diskspd -c8192K -d1 testfile.dat

Set block size to 4KB, create 2 threads per file, 32 overlapped (outstanding)
I/O operations per thread, disable all caching mechanisms and run block-aligned random
access read test lasting 10 seconds:

diskspd -b4K -t2 -r -o32 -d10 -h testfile.dat

Create two 1GB files, set block size to 4KB, create 2 threads per file, affinitize threads
to CPUs 0 and 1 (each file will have threads affinitized to both CPUs) and run read test
lasting 10 seconds:

diskspd -c1G -b4K -t2 -d10 -a0,1 testfile1.dat testfile2.dat

Where to learn more


The following are related links to read more about server (cloud, virtual and physical) storage I/O benchmarking tools, technologies and techniques.

Server and Storage I/O Benchmarking 101 for Smarties (resource page)

Microsoft Diskspd download and Microsoft Diskspd overview (via Technet)

I/O, I/O how well do you know about good or bad server and storage I/Os?

Server and Storage I/O Benchmark Tools: Microsoft Diskspd (Part I and Part II)

Wrap up and summary, for now…


This wraps up part one of this two-part post taking a look at the Microsoft Diskspd benchmark and workload generation tool. In part two (here) of this post series we take a closer look, including a test drive using Microsoft Diskspd.

Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

twitter @storageio


All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Server Storage I/O Benchmark Performance Resource Tools


Updated 1/23/2018

Server storage I/O benchmark performance resource tools, various articles and tips. These include tools for legacy, virtual, cloud and software defined environments.


The best server and storage I/O (input/output operation) is the one that you do not have to do; the second best is the one with the least impact.

server storage I/O locality of reference

This is where the idea of locality of reference (e.g. how close is the data to where your application is running) comes into play which is implemented via tiered memory, storage and caching shown in the figure above.

Server storage I/O performance applies to cloud, virtual, software defined and legacy environments

What this has to do with server storage I/O (and networking) performance benchmarking is keeping the idea of locality of reference, context and the application workload in perspective regardless of if cloud, virtual, software defined or legacy physical environments.

StorageIOblog: I/O, I/O how well do you know about good or bad server and storage I/Os?
StorageIOblog: Server and Storage I/O benchmarking 101 for smarties
StorageIOblog: Which Enterprise HDDs to use for a Content Server Platform (7 part series with using benchmark tools)
StorageIO.com: Enmotus FuzeDrive MicroTiering lab test using various tools
StorageIOblog: Some server storage I/O benchmark tools, workload scripts and examples (Part I) and (Part II)
StorageIOblog: Get in the NVMe SSD game (if you are not already)
Doridmen.com: Transcend SSD360S Review with tips on using ATTO and Crystal benchmark tools
ComputerWeekly: Storage performance metrics: How suppliers spin performance specifications

Via StorageIO Podcast: Kevin Closson discusses SLOB Server CPU I/O Database Performance benchmarks
Via @KevinClosson: SLOB Use Cases By Industry Vendors. Learn SLOB, Speak The Experts’ Language
Via BeyondTheBlocks (Reduxio): 8 Useful Tools for Storage I/O Benchmarking
Via CCSIObench: Cold-cache Sequential I/O Benchmark
Doridmen.com: Transcend SSD360S Review with tips on using ATTO and Crystal benchmark tools
CISJournal: Benchmarking the Performance of Microsoft Hyper-V server, VMware ESXi and Xen Hypervisors (PDF)
Microsoft TechNet:Windows Server 2016 Hyper-V large-scale VM performance for in-memory transaction processing
InfoStor: What’s The Best Storage Benchmark?
StorageIOblog: How to test your HDD, SSD or all flash array (AFA) storage fundamentals
Via ATTO: Atto V3.05 free storage test tool available
Via StorageIOblog: Big Files and Lots of Little File Processing and Benchmarking with Vdbench

Via StorageIO.com: Which Enterprise Hard Disk Drives (HDDs) to use with a Content Server Platform (White Paper)
Via VMware Blogs: A Free Storage Performance Testing Tool For Hyperconverged
Microsoft Technet: Test Storage Spaces Performance Using Synthetic Workloads in Windows Server
Microsoft Technet: Microsoft Windows Server Storage Spaces – Designing for Performance
BizTech: 4 Ways to Performance-Test Your New HDD or SSD
EnterpriseStorageForum: Data Storage Benchmarking Guide
StorageSearch.com: How fast can your SSD run backwards?
OpenStack: How to calculate IOPS for Cinder Storage ?
StorageAcceleration: Tips for Measuring Your Storage Acceleration


Spiceworks: Determining HDD SSD SSHD IOP Performance
Spiceworks: Calculating IOPS from Perfmon data
Spiceworks: profiling IOPs

Vdbench example via StorageIOblog.com

StorageIOblog: What does server storage I/O scaling mean to you?
StorageIOblog: What is the best kind of IO? The one you do not have to do
Testmyworkload.com: Collect and report various OS workloads
Whoishostingthis: Various SQL resources
StorageAcceleration: What, When, Why & How to Accelerate Storage
Filesystems.org: Various tools and links
StorageIOblog: Can we get a side of context with them IOPS and other storage metrics?


BrightTalk Webinar: Data Center Monitoring – Metrics that Matter for Effective Management
StorageIOblog: Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy
StorageIOblog: Has SSD put Hard Disk Drives (HDD’s) On Endangered Species List?


Microsoft TechNet: Measuring Disk Latency with Windows Performance Monitor (Perfmon)
Via Scalegrid.io: How to benchmark MongoDB with YCSB? (Perfmon)
Microsoft MSDN: List of Perfmon counters for sql server
Microsoft TechNet: Taking Your Server’s Pulse
StorageIOblog: Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?
CMG: I/O Performance Issues and Impacts on Time-Sensitive Applications

Virtualization Practice: IO IO it is off to Storage and IO metrics we go
InfoStor: Is HP Short Stroking for Performance and Capacity Gains?
StorageIOblog: Is Computer Data Storage Complex? It Depends
StorageIOblog: More storage and IO metrics that matter
StorageIOblog: Moving Beyond the Benchmark Brouhaha
Yellow-Bricks: VSAN VDI Benchmarking and Beta refresh!

server storage I/O benchmark example

Yellow-Bricks: VSAN performance: many SAS low capacity VS some SATA high capacity?
Yellow-Bricks: VSAN VDI Benchmarking and Beta refresh!
StorageIOblog: Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review
StorageIOblog: Part II: Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review
StorageIOblog: Server Storage I/O Network Benchmark Winter Olympic Games

VMware VDImark aka View Planner (also here, here and here) as well as VMmark here
StorageIOblog: SPC and Storage Benchmarking Games
StorageIOblog: Speaking of speeding up business with SSD storage
StorageIOblog: SSD and Storage System Performance

Hadoop server storage I/O performance
Various Server Storage I/O tools in a hadoop environment

Michael-noll.com: Benchmarking and Stress Testing an Hadoop Cluster With TeraSort, TestDFSIO
Virtualization Practice: SSD options for Virtual (and Physical) Environments Part I: Spinning up to speed on SSD
StorageIOblog: Storage and IO metrics that matter
InfoStor: Storage Metrics and Measurements That Matter: Getting Started
SilvertonConsulting: Storage throughput vs. IO response time and why it matters
Splunk: The percentage of Read / Write utilization to get to 800 IOPS?

Various server storage I/O benchmarking tools

Spiceworks: What is the best IO IOPs testing tool out there
StorageIOblog: How many IOPS can a HDD, HHDD or SSD do?
StorageIOblog: Some Windows Server Storage I/O related commands
Openmaniak: Iperf overview and Iperf.fr: Iperf overview
StorageIOblog: Server and Storage I/O Benchmark Tools: Microsoft Diskspd (Part I and Part II)
Quest: SQL Server Perfmon Poster (PDF)
Server and Storage I/O Networking Performance Management (webinar)
Data Center Monitoring – Metrics that Matter for Effective Management (webinar)
Flash back to reality – Flash SSD Myths and Realities (Industry trends & benchmarking tips), (MSP CMG presentation)
DBAstackexchange: How can I determine how many IOPs I need for my AWS RDS database?
ITToolbox: Benchmarking the Performance of SANs

server storage IO labs

StorageIOblog: Dell Inspiron 660 i660, Virtual Server Diamond in the rough (Server review)
StorageIOblog: Part II: Lenovo TS140 Server and Storage I/O Review (Server review)
StorageIOblog: DIY converged server software defined storage on a budget using Lenovo TS140
StorageIOblog: Server storage I/O Intel NUC nick knack notes First impressions (Server review)
StorageIOblog & ITKE: Storage performance needs availability, availability needs performance
StorageIOblog: Why SSD based arrays and storage appliances can be a good idea (Part I)
StorageIOblog: Revisiting RAID storage remains relevant and resources

Interested in cloud and object storage? Visit our objectstoragecenter.com page; for flash SSD check out the storageio.com/ssd page, along with data protection, RAID, various industry links and more here.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Watch for additional links to be added above in addition to those that appear via comments.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy

Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy

The question to ask yourself is not if flash Solid State Device (SSD) technologies are in your future.

Instead the questions are when, where, using what, how to configure and related themes. SSD, including traditional DRAM and NAND flash-based technologies, are like real estate where location matters; however, there are different types of properties to meet various needs. This means leveraging different types of NAND flash SSD technologies in different locations in a complementary and cooperative (aka hybrid) way.

Introducing Solid State Hybrid Drives (SSHD)

Solid State Hybrid Drives (SSHD) are the successors to previous generation Hybrid Hard Disk Drives (HHDD) that I have used for several years (you can read more about them here, and here).

While it would be nice to simply have SSD for everything, there are also economic budget realities to be dealt with. Keep in mind that a bit of nand flash SSD cache in the right location for a given purpose can go a long way, which is the case with SSHDs. This is also why in many environments today there is a mix of SSD and HDD of various makes, types, speeds and capacities (e.g. different tiers) to support diverse application needs (e.g. not everything in the data center is the same).

However, if you have the need for speed and can afford or benefit from the increased productivity, by all means go SSD!

Otoh, if you have budget constraints and need more space capacity yet want some performance boost, then SSHDs are an option. The big difference with today's SSHDs, which are available for enterprise class storage systems and servers as well as desktop environments, is that they can accelerate both reads and writes. This is different from their predecessors that I have used for several years, which had basic read acceleration, however no write optimizations.

SSHD storage I/O opportunity
Better Together: Where SSHDs fit in an enterprise tiered storage environment with SSD and HDDs

As their name implies, they are a hybrid between a nand flash Solid State Device (SSD) and a traditional Hard Disk Drive (HDD), meaning a best of both worlds situation. This means that SSHDs are based on a traditional spinning HDD (various models with different speeds, space capacities and interfaces) along with DRAM (which is found on most modern HDDs), nand flash for read cache, and some extra nonvolatile memory for persistent write cache, combined with a bit of software defined storage performance optimization algorithms.

Btw, if you were paying attention to that last sentence, you would have picked up on the nonvolatile memory being used as a persistent write cache, which should prompt the question: would that help with nand flash write endurance? Yup.
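
To illustrate the idea (and only the idea; this is a simplified conceptual sketch, not how any vendor's SSHD firmware actually works), here is a small Python model of a hybrid device that answers repeat reads from a flash read cache and acknowledges writes from a persistent buffer before lazily destaging them to the magnetic media:

from collections import OrderedDict

# Conceptual sketch only: an SSHD-like device with a flash read cache and a
# small persistent (nonvolatile) write buffer in front of the magnetic media.
class HybridDriveModel:
    def __init__(self, read_cache_blocks=1024, write_buffer_blocks=256):
        self.read_cache = OrderedDict()   # LBA -> data, kept in LRU order
        self.write_buffer = {}            # LBA -> data, persistent NVM in a real SSHD
        self.disk = {}                    # LBA -> data, the spinning HDD media
        self.read_cache_blocks = read_cache_blocks
        self.write_buffer_blocks = write_buffer_blocks

    def read(self, lba):
        if lba in self.write_buffer:      # newest data may still be buffered
            return self.write_buffer[lba]
        if lba in self.read_cache:        # flash read cache hit (fast)
            self.read_cache.move_to_end(lba)
            return self.read_cache[lba]
        data = self.disk.get(lba)         # cache miss: go out to the HDD media
        self._cache(lba, data)
        return data

    def write(self, lba, data):
        self.write_buffer[lba] = data     # fast acknowledgement from persistent buffer
        if len(self.write_buffer) >= self.write_buffer_blocks:
            self._destage()               # lazy write-back to the magnetic media

    def _destage(self):
        for lba, data in self.write_buffer.items():
            self.disk[lba] = data
        self.write_buffer.clear()

    def _cache(self, lba, data):
        self.read_cache[lba] = data
        if len(self.read_cache) > self.read_cache_blocks:
            self.read_cache.popitem(last=False)   # evict the least recently used block

drive = HybridDriveModel()
drive.write(100, b"hello")
print(drive.read(100))                    # served from the write buffer, not the disk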

Where and when to use SSHD?

In the StorageIO Industry Trends Perspective thought leadership white paper I recently released, compliments of Seagate Enterprise Turbo SSHD (that's a disclosure btw ;)), enterprise class Solid State Hybrid Drives (SSHD) were looked at and test driven in the StorageIO Labs with various application workloads. These activities included running common applications in a virtual environment, including database and email messaging, using industry standard benchmark workloads (e.g. TPC-B and TPC-E for database, JetStress for Exchange).

Storage I/O sshd white paper

Conventional storage system focused workloads using Iometer, Iorate and Vdbench were also run in the StorageIO Labs to establish baselines for reads, writes, random, sequential, small and large I/O sizes, with IOPS, bandwidth and response time (latency) results. Some of those results can be found here (Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?) with other ongoing workloads continuing in different configurations. The various test drive proof points were done in the StorageIO Labs comparing SSHD, SSD and different HDDs.

Data Protection (Archiving, Backup, BC, DR)

Staging cache buffer area for snapshots, replication or current copies before streaming to other storage tier using fast read/write capabilities. Meta data, index and catalogs benefit from fast reads and writes for faster protection.

Big Data DSS
Data Warehouse

Support sequential read-ahead operations and “hot-band” data caching in a cost-effective way using SSHD vs. slower similar capacity size HDDs for Data warehouse, DSS and other analytic environments.

Email, Text and Voice Messaging

Microsoft Exchange and other email journals, mailbox or object repositories can leverage faster read and write I/Os with more space capacity.

OLTP, Database, Key Value Stores (SQL and NoSQL)

Eliminate the need to short stroke HDDs to gain performance, offer more space capacity and IOP performance per device for tables, logs, journals, import/export and scratch, temporary ephemeral storage. Leverage random and sequential read acceleration to complement server-side SSD-based read and write-thru caching. Utilize fast magnetic media for persistent data, reducing wear and tear on more costly flash SSD storage devices.

Server Virtualization

Fast disk storage for data stores and virtual disks supporting VMware vSphere/ESXi, Microsoft Hyper-V, KVM, Xen and others, holding virtual machines such as VMware VMDKs along with Hyper-V and other hypervisor virtual disks. Complement virtual server read cache and I/O optimization using SSD as a cache with writes going to fast SSHD. For example, VMware vSphere V5.5 Virtual SAN host disk groups use SSD as a read cache and can use SSHD as the magnetic disk for storing data, boosting performance without breaking the budget or adding complexity.

Speaking of virtual, as mentioned the various proof points were run using Windows systems that were VMware guests, with the SSHD and other devices being Raw Device Mapped (RDM) SAS and SATA attached; read how to do that here.

Hint: If you know about the VMware trick for making a HDD look like a SSD to vSphere/ESXi (refer to here and here) think outside the virtual box for a moment on some things you could do with SSHD in a VSAN environment among other things, for now, just sayin ;).

Virtual Desktop Infrastructure (VDI)

SSHD can be used as high performance magnetic disk for storing linked clone images, applications and data. Leverage fast reads to support read-ahead or pre-fetch, complementing SSD based read cache solutions. Utilize fast writes to quickly store data, enabling SSD-based read or write-thru cache solutions to be more effective. Reduce the impact of boot, shutdown, virus scan or maintenance storms while providing more space capacity.

Table 1 Example application and workload scenarios benefiting from SSHDs

Test drive application proof points

Various workloads were run using the Seagate Enterprise Turbo SSHD in the StorageIO lab environment across different real world like application workload scenarios. These include general storage I/O performance characteristics profiling (e.g. reads, writes, random, sequential and various IOP sizes) to understand how these devices compare to other HDD, HHDD and SSD storage devices in terms of IOPS, bandwidth and response time (latency). In addition to basic storage I/O profiling, the Enterprise Turbo SSHD was also used with various SQL database workloads including Transaction Processing Council (TPC) workloads, along with VMware server virtualization among other use case scenarios.

Note that in the following workload proof points a single drive was used, meaning that using more drives in a server or storage system should yield better performance. This also means scaling would be bound by the constraints of a given configuration, server or storage system. These were also conducted using 6Gbps SAS with PCIe Gen 2 based servers, and ongoing testing is confirming even better results with 12Gbps SAS and faster servers with PCIe Gen 3.

SSHD large file storage i/o
Copy (read and write) 80GB and 220GB file copies (time to copy entire file)

SSHD storage I/O TPCB Database performance
SQLserver TPC-B batch database updates

Test configuration: 600GB 2.5” Enterprise Turbo SSHD (ST600MX) 6 Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6 Gbps SAS, 500GB 3.5” 7.2K RPM HDD 3 Gbps SATA, 1TB 3.5” 7.2K RPM HDD 3 Gbps SATA. Workload generator and virtual clients ran on Windows 7 Ultimate. Microsoft SQL Server 2012 database was on Windows 7 Ultimate SP1 (64 bit), 14 GB DRAM, Dual CPU (Intel X3490 2.93 GHz), with LSI 9211 6Gbps SAS adapters and TPC-B (www.tpc.org) workloads. VM resided on a separate data store from devices being tested. All devices being tested with SQL MDF were Raw Device Mapped (RDM) independent persistent, with the database log file (LDF) on a separate SSD device also persistent (no delayed writes). Tests were performed in StorageIO Lab facilities by StorageIO personnel.

SSHD storage I/O TPCE Database performance
SQLserver TPC-E transactional workload

Test configuration: 600GB 2.5” Enterprise Turbo SSHD (ST600MX) 6 Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6 Gbps SAS, 300GB 2.5” Savio 10K RPM HDD 6 Gbps SAS, 1TB 3.5” 7.2K RPM HDD 6 Gbps SATA. Workload generator and virtual clients ran on Windows 7 Ultimate. Microsoft SQL Server 2012 database was on Windows 7 Ultimate SP1 (64 bit), 14 GB DRAM, Dual CPU (E8400 2.99GHz), with LSI 9211 6Gbps SAS adapters and TPC-E (www.tpc.org) workloads. VM resided on a separate SSD based data store from devices being tested (e.g., where MDF resided). All devices being tested were Raw Device Mapped (RDM) independent persistent, with the database log file on a separate SSD device also persistent (no delayed writes). Tests were performed in StorageIO Lab facilities by StorageIO personnel.

SSHD storage I/O Exchange performance
Microsoft Exchange workload

Test configuration: 2.5” Seagate 600 Pro 120GB (ST120FP0021) SSD 6 Gbps SATA, 600GB 2.5” Enterprise Turbo SSHD (ST600MX) 6 Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6 Gbps SAS, 2.5” Savio 146GB HDD 6 Gbps SAS, 3.5” Barracuda 500GB 7.2K RPM HDD 3 Gbps SATA. Email server hosted as a guest on VMware vSphere/ESXi V5.5, Microsoft Small Business Server (SBS) 2011 Service Pack 1 64 bit, 8GB DRAM, One CPU (Intel X3490 2.93 GHz), LSI 9211 6 Gbps SAS adapter, JetStress 2010 (no other active workload during test intervals). All devices being tested were Raw Device Mapped (RDM) where the EDB resided. VM resided on a separate SSD based data store from the devices being tested. Log file IOPs were handled via a separate SSD device.

Read more about the above proof points, along with additional data points and configuration information, in the associated white paper found here (no registration required).

What this all means

Similar to flash-based SSD technologies, the question is not if, rather when, where, why and how to deploy hybrid solutions such as SSHDs. If your applications and data infrastructure environment have the need for storage I/O speed without giving up space capacity or breaking your budget, SSD enabled devices like the Seagate Enterprise Turbo 600GB SSHD are in your future. You can learn more about enterprise class SSHDs such as those from Seagate by visiting this link here.

Watch for extra workload proof points being performed including with 12Gbps SAS and faster servers using PCIe Gen 3.

Ok, nuff said.

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Can we get a side of context with them IOPS server storage metrics?

Can we get a side of context with them server storage metrics?

What's the best server storage I/O network metric or benchmark? It depends, as there needs to be some context with them IOPS and other server storage I/O metrics that matter.

There is an old saying that the best I/O (Input/Output) is the one that you do not have to do.

In the meantime, let’s get a side of some context with them IOPS from vendors, marketers and their pundits who are tossing them around for server, storage and IO metrics that matter.

Expanding the conversation, the need for more context

The good news is that people are beginning to discuss storage beyond space capacity and cost per GByte, TByte or PByte for both DRAM and nand flash Solid State Devices (SSD), Hard Disk Drives (HDD), along with Hybrid HDD (HHDD) and Solid State Hybrid Drive (SSHD) based solutions. This applies to traditional enterprise or SMB IT data centers with physical, virtual or cloud based infrastructures.

hdd and ssd iops

This is good because it expands the conversation beyond just cost for space capacity into other aspects including performance (IOPS, latency, bandwidth) for various workload scenarios, along with availability, energy effectiveness and management.

Adding a side of context

The catch is that IOPS while part of the equation are just one aspect of performance and by themselves without context, may have little meaning if not misleading in some situations.

Granted a million IOPS can be entertaining, fun to talk about or simply make good press copy. IOPS vary in size depending on the type of work being done, not to mention reads or writes, random and sequential, which also have a bearing on data throughput or bandwidth (Mbytes per second) along with response time. Not to mention block, file, object or blob as well as table access.

However, are those million IOP’s applicable to your environment or needs?

Likewise, what do those million or more IOPS represent about type of work being done? For example, are they small 64 byte or large 64 Kbyte sized, random or sequential, cached reads or lazy writes (deferred or buffered) on a SSD or HDD?

How about the response time or latency for achieving them IOPS?

In other words, what is the context of those metrics and why do they matter?

storage i/o iops
Click on image to view more metrics that matter including IOP’s for HDD and SSD’s

Metrics that matter give context, for example IO sizes closer to what your real needs are, reads and writes, mixed workloads, random or sequential, sustained or bursty; in other words, real world reflective.

As with any benchmark, take them with a grain (or more) of salt; the key is to use them as an indicator, then align to your needs. The tool or technology should work for you, not the other way around.
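
As a back of the envelope example of why that context matters, the following sketch (illustrative numbers, not benchmark results) converts IOPS plus IO size into bandwidth, and uses Little's Law to relate IOPS, response time and outstanding IOs (queue depth):

# Back-of-the-envelope sketch: the same IOPS number means very different things
# once IO size and response time are added. All figures below are illustrative.

def bandwidth_mb_per_sec(iops, io_size_bytes):
    return iops * io_size_bytes / 1_000_000

def outstanding_ios(iops, response_time_ms):
    # Little's Law: concurrency (queue depth) = arrival rate x response time
    return iops * (response_time_ms / 1000.0)

# A "million IOPS" hero number at 64 bytes per IO...
print(bandwidth_mb_per_sec(1_000_000, 64))       # ~64 MB/sec of data moved
# ...versus a more modest 100,000 IOPS at 8KB per IO
print(bandwidth_mb_per_sec(100_000, 8_192))      # ~819 MB/sec of data moved

# Queue depth needed to sustain 100,000 IOPS at 0.5 ms average response time
print(outstanding_ios(100_000, 0.5))             # 50 outstanding IOs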

Here are some examples of context that can be added to help make IOP’s and other metrics matter:

  • What is the IOP size, are they 512 byte (or smaller) vs. 4K bytes (or larger)?
  • Are they reads, writes, random, sequential or mixed and what percentage?
  • How was the storage configured including RAID, replication, erasure or dispersal codes?
  • Then there is the latency or response time and IO queue depths for the given number of IOPS.
  • Let us not forget if the storage systems (and servers) were busy with other work or not.
  • If there is a cost per IOP, is that list price or discount (hint, if discount start negotiations from there)
  • What was the number of threads or workers, along with how many servers?
  • What tool was used, its configuration, as well as raw or cooked (aka file system) IO?
  • Was the IOP’s number with one worker or multiple workers on a single or multiple servers?
  • Did the IOP’s number come from a single storage system or total of multiple systems?
  • Fast storage needs fast servers and networks, so what was their configuration?
  • Was the performance a short burst, or long sustained period?
  • What was the size of the test data used; did it all fit into cache?
  • Were short stroking for IOPS or long stroking for bandwidth techniques used?
  • Data footprint reduction (DFR) techniques (thin provisioned, compression or dedupe) used?
  • Were write data committed synchronously to storage, or deferred (aka lazy writes used)?

The above are just a sampling and not all may be relevant to your particular needs, however they help to put IOP's into more context. Another consideration around IOPS is how they were generated: from an actual running application using some measurement tool, or from a workload generation tool such as Iometer, Iorate or Vdbench among others.
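
For example, if you happen to use a workload generation tool such as fio, most of the context items above map directly onto its parameters. The following sketch assembles such a command line; the flags shown are common fio options, however verify them against your fio version (and /path/to/testfile is just a placeholder):

# Sketch: mapping the context questions above onto workload generator options.
# Shown here for fio; adjust similarly for diskspd, vdbench or iorate.

context = {
    "io_size":     "8k",       # IOP size
    "pattern":     "randrw",   # random, mixed reads and writes
    "read_pct":    70,         # 70% reads / 30% writes
    "queue_depth": 16,         # outstanding IOs per worker
    "workers":     4,          # number of concurrent jobs (threads)
    "target_size": "20g",      # test data footprint (does it all fit in cache?)
    "runtime_sec": 300,        # sustained period rather than a short burst
}

fio_cmd = (
    "fio --name=context_demo --filename=/path/to/testfile "
    f"--rw={context['pattern']} --rwmixread={context['read_pct']} "
    f"--bs={context['io_size']} --iodepth={context['queue_depth']} "
    f"--numjobs={context['workers']} --size={context['target_size']} "
    f"--runtime={context['runtime_sec']} --time_based --direct=1 --group_reporting"
)
print(fio_cmd)   # run the printed command (or keep it in a script) to generate the workload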

Sure, there are more contexts and information that would be interesting as well, however learning to walk before running will help prevent falling down.

Storage I/O trends

Does size or age of vendors make a difference when it comes to context?

Some vendors are doing a good job of going for out of this world record-setting marketing hero numbers.

Meanwhile other vendors are doing a good job of adding context to their IOP, response time or bandwidth among other metrics that matter. There is a mix of startup and established vendors that give context with their IOP's or other metrics; likewise, size or age does not seem to matter for those who lack context.

Some vendors may not offer metrics or information publicly, so fine, go under NDA to learn more and see if the results are applicable to your environments.

Likewise, if they do not want to provide the context, then ask some tough yet fair questions to decide if their solution is applicable for your needs.

Storage I/O trends

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

What this means is let us start providing, and asking for, metrics that matter such as IOP's with context.

If you have a great IOP metric and want it to matter, then include some context such as the IO size (e.g. 4K, 8K, 16K, 32K, etc.), percentage of reads vs. writes, latency or response time, random or sequential.

IMHO the most interesting or applicable metrics that matter are those relevant to your environment and application. For example, if your main application that needs SSD does about 75% reads (random) and 25% writes (sequential) with an average size of 32K, then while fun to hear about, how relevant is a million 64 byte read IOPS? Likewise when looking at IOPS, pay attention to the latency, particularly if SSD or performance is your main concern.

Get in the habit of asking or telling vendors or their surrogates to provide some context with them metrics if you want them to matter.

So how about some context around them IOP’s (or latency and bandwidth or availability for that matter)?

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

How many I/O IOPS can flash SSD or HDD do?

How many I/O IOPS can flash SSD or HDD do with VMware?

sddc data infrastructure Storage I/O ssd trends

Updated 2/10/2018

A common question I run across is how many I/O operations per second (IOPS) a flash SSD or HDD storage device or system can do or give.

The answer is or should be it depends.

This is the first of a two-part series looking at storage performance, specifically in the context of drive or device (e.g. media) characteristics across HDD, HHDD and SSD that can be found in cloud, virtual, and legacy environments. In this first part the focus is on putting some context around drive or device performance, with the second part looking at some workload characteristics (e.g. benchmarks).

What about cloud, tape summit resources, storage systems or appliances?

Let's leave those for a different discussion at another time.

Getting started

Part of my interest in tools, metrics that matter, measurements, analysis and forecasting ties back to having been a server, storage and IO performance and capacity planning analyst when I worked in IT. Another aspect ties back to also having been a sys admin as well as a business applications developer when on the IT customer side of things. This was followed by switching over to the vendor world, involved with among other things competitive positioning, customer design configurations, validation, simulation and benchmarking of HDD and SSD based solutions (e.g. life before becoming an analyst and advisory consultant).

Btw, if you happen to be interested in learning more about server, storage and IO performance and capacity planning, check out my first book Resilient Storage Networks (Elsevier) that has a bit of information on it. There is also coverage of metrics and planning in my two other books The Green and Virtual Data Center (CRC Press) and Cloud and Virtual Data Storage Networking (CRC Press). I have some copies of Resilient Storage Networks available at a special reader or viewer rate (essentially shipping and handling). If interested, drop me a note and I can fill you in on the details.

There are many rules of thumb (RUT) when it comes to metrics that matter such as IOPS, some that are older while others may be guessed or measured in different ways. However the answer is that it depends on many things, ranging from whether it is a standalone hard disk drive (HDD), Hybrid HDD (HHDD) or Solid State Device (SSD), or whether it is attached to a storage system, appliance, or RAID adapter card among others.

Taking a step back, the big picture

hdd image
Various HDD, HHDD and SSD’s

Server, storage and I/O performance and benchmark fundamentals

Even just looking at a HDD, there are many variables, ranging from the rotational speed or Revolutions Per Minute (RPM) to the interface, including 1.5Gb, 3.0Gb, 6Gb or 12Gb SAS or SATA, or 4Gb Fibre Channel. Simply using a RUT or number based on RPM can cause issues, particularly with 2.5 vs. 3.5 inch or enterprise vs. desktop drives. For example, some current generation 10K 2.5 inch HDDs can deliver the same or better performance than an older generation 3.5 inch 15K. Other drive factors (see this link for HDD fundamentals) include physical size such as 3.5 inch or 2.5 inch small form factor (SFF), enterprise or desktop or consumer class, and the amount of drive level cache (DRAM). Space capacity of a drive can also have an impact, such as whether all or just a portion of a large or small capacity device is used. Not to mention what the drive is attached to, ranging from an internal SAS or SATA drive bay, USB port, HBA or RAID adapter card, or a storage system.

disk iops
HDD fundamentals

How about benchmark and performance marketing or comparison tricks, including delayed, deferred or asynchronous writes vs. synchronous or actually committed data to devices? Let's not forget about short stroking (only using a portion of a drive for better IOP's) or even long stroking (to get better bandwidth leveraging spiral transfers) among others.
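
As a rough rule of thumb sketch of why short stroking helps, a random HDD IOP costs roughly an average seek plus half a rotation; the seek times below are assumed for illustration only, not measured values for any particular drive:

# Rough rule-of-thumb sketch: random HDD IOPS ~= 1000 / (avg seek ms + half a
# rotation in ms). Seek times below are assumptions, not measured drive specs.

def hdd_random_iops(avg_seek_ms, rpm):
    rotational_latency_ms = 0.5 * (60_000.0 / rpm)   # half a revolution on average
    return 1000.0 / (avg_seek_ms + rotational_latency_ms)

# A 15K RPM enterprise HDD using its full capacity (longer average seeks)
print(round(hdd_random_iops(avg_seek_ms=3.5, rpm=15_000)))   # ~182 IOPS

# The same drive "short stroked" to a small outer band (shorter average seeks),
# trading usable space capacity for more IOPS per device
print(round(hdd_random_iops(avg_seek_ms=1.5, rpm=15_000)))   # ~286 IOPS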

Almost forgot, there are also thick, standard, thin and ultra thin drives in 2.5 and 3.5 inch form factors. What's the difference? The number of platters and read/write heads. Look at the following image showing various thickness 2.5 inch drives that have different numbers of platters to increase space capacity in a given density. Want to take a wild guess as to which one has the most space capacity in a given footprint? Also want to guess which type I use for removable disk based archives along with onsite disk based backup targets (complementing my offsite cloud backups)?

types of disks
Thick, thin and ultra thin devices

Beyond physical and configuration items, there are logical configurations including the type of workload: large or small IOPS, random, sequential, reads, writes or mixed (various random, sequential, read, write, large and small IO). Other considerations include file system or raw device, number of workers or concurrent IO threads, and the size of the target storage space area, which determines the impact of any locality of reference or buffering. Some other items include how long the test or workload simulation ran for, and whether the device was new or worn in before use, among other items.

Tools and the performance toolbox

Then there are the various tools for generating IO’s or workloads along with recording metrics such as reads, writes, response time and other information. Some examples (mix of free or for fee) include Bonnie, Iometer, Iorate, IOzone, Vdbench, TPC, SPC, Microsoft ESRP, SPEC and netmist, Swifttest, Vmark, DVDstore and PCmark 7 among many others. Some are focused just on the storage system and IO path while others are application specific thus exercising servers, storage and IO paths.

performance tools
Server, storage and IO performance toolbox

Having used Iometer since the late 90s, I know it has its place and is popular given its ease of use. Iometer is also long in the tooth and has its limits, including not much if any new development; nevertheless, I have it in the toolbox. I also have Futuremark PCmark 7 (full version), which it turns out has some interesting abilities to do more than exercise an entire Windows PC. For example, PCmark can be pointed at a secondary drive for doing IO.

PCmark can be handy for spinning up (with VMware or other tools) lots of virtual Windows systems pointing to a NAS or other shared storage device doing real world type activity. Something that could be handy for testing or stressing virtual desktop infrastructures (VDI) along with other storage systems, servers and solutions. I also have Vdbench among other tools in the toolbox, including Iorate, which was used to drive the workloads shown below.

What I look for in a tool is how extensible the scripting capabilities are for defining various workloads, along with the capabilities of the test engine. A nice GUI is handy, which makes Iometer popular, and yes there are script capabilities with Iometer. That is also where Iometer is long in the tooth compared to some of the newer generation of tools that have more emphasis on extensibility vs. ease of use interfaces. This also assumes knowing what workloads to generate vs. simply kicking off some IOPs using default settings to see what happens.

Another handy type of tool is one for recording what's going on with a running system, including IO's, reads, writes, bandwidth or transfers, random and sequential among other things. This is where, when needed, I turn to something like HiMon from HyperIO; if you have not tried it, get in touch with Tom West over at HyperIO and tell him StorageIO sent you to get a demo or trial. HiMon is what I used for doing start, stop and boot among other testing, being able to see IO's at the Windows file system level (or below) including very early in the boot or shutdown phase.

Here is a link to some other things I did a while back with HiMon to profile some Windows and VDI activity.

What’s the best tool or benchmark or workload generator?

The one that meets your needs, usually your applications or something as close as possible to it.

disk iops
Various 2.5 and 3.5 inch HDD, HHDD, SSD with different performance

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

That depends; however, continue reading part II of this series to see some results for various types of drives and workloads.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.