
Server and Storage I/O Benchmark Tools: Microsoft Diskspd (Part I)

February 1, 2015 – 12:12 pm



This is part-one of a two-part post about Microsoft Diskspd that is also part of a broader series focused on server storage I/O benchmarking, performance, capacity planning, tools and related technologies. You can view part-two of this post here, along with companion links here.


Many people use Iometer for creating synthetic (artificial) workloads to support benchmarking for testing, validation and other activities. While Iometer with its GUI is relatively easy to use and available across many operating system (OS) environments, the tool also has its limits. One of the bigger limits is that Iometer has become dated, with little to no new development for a long time, while other tools, including some new ones, continue to evolve in functionality and extensibility. Some of these tools have an optional GUI for ease of use or configuration, while others simply have extensive scripting and command parameter capabilities. Many tools are supported across different OSs, including physical, virtual and cloud environments, while others such as Microsoft Diskspd are OS specific.

Instead of focusing on Iometer and other tools as well as benchmarking techniques (we cover those elsewhere), let's focus on Microsoft Diskspd.


What is Microsoft Diskspd?

Microsoft Diskspd is a synthetic workload generation (e.g. benchmark) tool that runs on various Windows systems as an alternative to Iometer, vdbench, iozone, iorate, fio, sqlio and other tools. Diskspd is a command line tool, which means it can easily be scripted to do reads and writes of various I/O sizes, including random as well as sequential activity. Server and storage I/O can be buffered file system as well as non-buffered, across different types of storage and interfaces. Various performance and CPU usage information is provided to gauge the impact on a system when doing a given number of IOPS, amount of bandwidth, along with response time latency.
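To put those metrics in context, here is a quick back-of-the-envelope sketch (not part of Diskspd itself) of how IOPS, I/O size, bandwidth and latency relate. The function names are illustrative:

```python
# Rough relationships among the metrics Diskspd reports, based on
# Little's Law: outstanding I/Os = IOPS x per-I/O latency.
# This is a sketch for intuition, not a Diskspd feature.

def iops(outstanding_ios: int, latency_s: float) -> float:
    """Achievable IOPS for a given queue depth and per-I/O latency."""
    return outstanding_ios / latency_s

def bandwidth_mb_s(io_per_s: float, block_bytes: int) -> float:
    """Bandwidth in MB/s given an IOPS rate and an I/O (block) size."""
    return io_per_s * block_bytes / (1024 * 1024)

# Example: 32 outstanding 4KB I/Os at 1ms each -> 32,000 IOPS, 125 MB/s
rate = iops(32, 0.001)           # 32000.0
mb = bandwidth_mb_s(rate, 4096)  # 125.0
print(rate, mb)
```

This also shows why small-block tests stress IOPS and latency while large-block tests stress bandwidth, even on the same device.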

What can Diskspd do?

Microsoft Diskspd creates synthetic benchmark workload activity with the ability to define various options to simulate different application characteristics. This includes specifying reads and writes, random or sequential access, and I/O size, along with the number of threads to simulate concurrent activity. Diskspd can be used for testing or validating server and storage I/O systems along with associated software, tools and components. In addition to being able to specify different workloads, Diskspd can also be told which processors to use (e.g. CPU affinity), and whether to use buffered or non-buffered IO, among other things.

What type of storage does Diskspd work with?

Diskspd works with physical and virtual storage including hard disk drives (HDD), solid state devices (SSD) and solid state hybrid drives (SSHD) in various systems or solutions. Storage can be a physical device as well as a partition or file system. As with any workload tool, exercise caution when doing writes to prevent accidental deletion or destruction of your data.

What information does Diskspd produce?

Diskspd provides output in text as well as XML formats. See an example of Diskspd output further down in this post.
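When run with XML output selected, the results can be post-processed with any XML parser. The sketch below uses a simplified, hand-made sample whose element names follow the general shape of Diskspd 2.x XML output but may vary by version, so inspect your own output first:

```python
import xml.etree.ElementTree as ET

# Illustrative sketch of pulling per-target byte counts out of
# Diskspd XML results. The sample below is hand-made; element names
# may differ from what your Diskspd version emits.
sample = """
<Results>
  <TimeSpan>
    <Thread>
      <Target>
        <Path>testfile.dat</Path>
        <BytesCount>1048576</BytesCount>
        <ReadBytes>786432</ReadBytes>
        <WriteBytes>262144</WriteBytes>
      </Target>
    </Thread>
  </TimeSpan>
</Results>
"""

root = ET.fromstring(sample)
totals = {}
for target in root.iter("Target"):
    path = target.findtext("Path")
    totals[path] = int(target.findtext("BytesCount"))

print(totals)  # {'testfile.dat': 1048576}
```

Parsing the XML rather than scraping the text output makes it easier to aggregate results across many scripted runs.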

Where to get Diskspd?

You can download your free copy of Diskspd from the Microsoft site here.

The download and installation are quick and easy, just remember to select the proper version for your Windows system and type of processor.

Another tip is to remember to set your PATH environment variable to point to where you put the Diskspd executable.

Also, stating what should be obvious: if you are going to be doing any benchmark or workload generation activity on a system where data could be over-written or deleted, make sure you have a good backup and a tested restore before you begin, in case something goes wrong.

New to server storage I/O benchmarking or tools?

If you are not familiar with server storage I/O performance benchmarking or using various workload generation tools (e.g. benchmark tools), Drew Robb (@robbdrew) has a Data Storage Benchmarking Guide article over at Enterprise Storage Forum that provides a good framework and summary quick guide to server storage I/O benchmarking.

Via Drew:

Data storage benchmarking can be quite esoteric in that vast complexity awaits anyone attempting to get to the heart of a particular benchmark.

Case in point: The Storage Networking Industry Association (SNIA) has developed the Emerald benchmark to measure power consumption. This invaluable benchmark has a vast amount of supporting literature. That so much could be written about one benchmark test tells you just how technical a subject this is. And in SNIA’s defense, it is creating a Quick Reference Guide for Emerald (coming soon).

But rather than getting into the nitty-gritty nuances of the tests, the purpose of this article is to provide a high-level overview of a few basic storage benchmarks, what value they might have and where you can find out more. 

Read more here including some of my comments, tips and recommendations.

In addition to Drew’s benchmarking quick reference guide, along with the server storage I/O benchmarking tools, technologies and techniques resource page (here), check out this companion post as a primer for benchmarking and associated topics titled Server and Storage I/O Benchmarking 101 for Smarties.

How do you use Diskspd?

Tip: When you run Microsoft Diskspd it will create a file (data set) on the device or volume being tested that it does its I/O to. Make sure that you have enough disk space for what will be tested (e.g. if you are going to test 1TB you need to have more than 1TB of disk space free for use). Another tip: to speed up initializing (e.g. when Diskspd creates the file that I/Os will be done to), run as administrator.
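As a pre-flight sanity check along the lines of that tip, something like the following sketch (not a Diskspd feature, just a helper you could script around it) can verify there is room for the test file before kicking off a run:

```python
import shutil

def enough_free_space(path: str, test_file_bytes: int,
                      headroom: float = 1.1) -> bool:
    """Return True if the volume holding `path` has room for the
    Diskspd test file plus some headroom. Illustrative helper only."""
    free = shutil.disk_usage(path).free
    return free >= test_file_bytes * headroom

# e.g. before running "diskspd -c1G ... testfile.dat" on the current volume:
print(enough_free_space(".", 1 * 1024**3))
```

Running the check from the target volume (or passing its path) avoids a failed or misleading run caused by the volume filling mid-test.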

Tip: In case you forgot, a couple of other useful Microsoft tools (besides Perfmon) for working with and displaying server storage I/O devices including disks (HDDs and SSDs) are the commands "wmic diskdrive list [brief]" and "diskpart". With diskpart, exercise caution as it can get you into trouble just as fast as it can get you out of trouble.

You can view the Diskspd commands after installing the tool and from a Windows command prompt type:

C:\Users\Username> Diskspd

The above command will display Diskspd help and information about the commands as follows.

Usage: diskspd [options] target1 [ target2 [ target3 ...] ]
version 2.0.12 (2014/09/17)

Available targets:
    file_path
    #&lt;physical drive number&gt;

Available options:
  -?                    display usage information
  -a#[,#[...]]          advanced CPU affinity - affinitize threads to CPUs provided after -a in a round-robin manner within current KGroup (CPU count starts with 0); the same CPU can be listed more than once and the number of CPUs can be different than the number of files or threads (cannot be used with -n)
  -ag                   group affinity - affinitize threads in a round-robin manner across KGroups
  -b&lt;size&gt;[K|M|G]       block size in bytes/KB/MB/GB [default=64K]
  -B&lt;offs&gt;[K|M|G|b]     base file offset in bytes/KB/MB/GB/blocks [default=0] (offset from the beginning of the file)
  -c&lt;size&gt;[K|M|G|b]     create files of the given size; size can be stated in bytes/KB/MB/GB/blocks
  -C&lt;seconds&gt;           cool down time - duration of the test after measurements finished [default=0s]
  -D&lt;milliseconds&gt;      print IOPS standard deviations; the deviations are calculated for samples of &lt;milliseconds&gt; duration [default=1000]
  -d&lt;seconds&gt;           duration (in seconds) to run test [default=10s]
  -f&lt;size&gt;[K|M|G|b]     file size - this parameter can be used to use only part of the file/disk/partition, for example to test only the first sectors of a disk
  -fr                   open file with the FILE_FLAG_RANDOM_ACCESS hint
  -fs                   open file with the FILE_FLAG_SEQUENTIAL_SCAN hint
  -F&lt;count&gt;             total number of threads (cannot be used with -t)
  -g&lt;bytes per ms&gt;      throughput per thread is throttled to given bytes per millisecond; note that this cannot be specified when using completion routines
  -h                    disable both software and hardware caching
  -i&lt;count&gt;             number of IOs (burst size) before thinking; must be specified with -j
  -j&lt;milliseconds&gt;      time to think in ms before issuing a burst of IOs (burst size); must be specified with -i
  -I&lt;priority&gt;          set IO priority to &lt;priority&gt;; available values are: 1-very low, 2-low, 3-normal (default)
  -l                    use large pages for IO buffers
  -L                    measure latency statistics
  -n                    disable affinity (cannot be used with -a)
  -o&lt;count&gt;             number of overlapped I/O requests per file per thread (1=synchronous I/O, unless more than 1 thread is specified with -F) [default=2]
  -p                    start async (overlapped) I/O operations with the same offset (makes sense only with -o2 or greater)
  -P&lt;count&gt;             enable printing a progress dot after each &lt;count&gt; completed I/O operations (counted separately by each thread) [default count=65536]
  -r&lt;align&gt;[K|M|G|b]    random I/O aligned to &lt;align&gt; bytes (doesn't make sense with -s); &lt;align&gt; can be stated in bytes/KB/MB/GB/blocks [default access=sequential, default alignment=block size]
  -R&lt;text|xml&gt;          output format [default=text]
  -s&lt;size&gt;[K|M|G|b]     stride size (offset between starting positions of subsequent I/O operations)
  -S                    disable OS caching
  -t&lt;count&gt;             number of threads per file (cannot be used with -F)
  -T&lt;offs&gt;[K|M|G|b]     stride between I/O operations performed on the same file by different threads [default=0] (starting offset = base file offset + (thread number * &lt;offs&gt;)); makes sense only with -t or -F
  -v                    verbose mode
  -w&lt;percentage&gt;        percentage of write requests (-w and -w0 are equivalent); absence of this switch indicates 100% reads. IMPORTANT: your data will be destroyed without a warning
  -W&lt;seconds&gt;           warm up time - duration of the test before measurements start [default=5s]
  -x                    use completion routines instead of I/O Completion Ports
  -X&lt;filepath&gt;          use an XML file for configuring the workload (cannot be used with other parameters)
  -z[seed]              set random seed [default=0 if parameter not provided, GetTickCount() if value not provided]

Write buffers command options. By default, the write buffers are filled with a repeating pattern (0, 1, 2, ..., 255, 0, 1, ...)
  -Z                    zero buffers used for write tests
  -Z&lt;size&gt;[K|M|G|b]     use a global buffer filled with random data as a source for write operations
  -Z&lt;size&gt;[K|M|G|b],&lt;file&gt;
                        use a global buffer filled with data from &lt;file&gt; as a source for write operations; if &lt;file&gt; is smaller than &lt;size&gt;, its content will be repeated multiple times in the buffer

Synchronization command options
  -ys&lt;eventname&gt;        signals event &lt;eventname&gt; before starting the actual run (no warmup) (creates a notification event if &lt;eventname&gt; does not exist)
  -yf&lt;eventname&gt;        signals event &lt;eventname&gt; after the actual run finishes (no cooldown) (creates a notification event if &lt;eventname&gt; does not exist)
  -yr&lt;eventname&gt;        waits on event &lt;eventname&gt; before starting the run (including warmup) (creates a notification event if &lt;eventname&gt; does not exist)
  -yp&lt;eventname&gt;        allows to stop the run when event &lt;eventname&gt; is set; it also binds CTRL+C to this event (creates a notification event if &lt;eventname&gt; does not exist)
  -ye&lt;eventname&gt;        sets event &lt;eventname&gt; and quits

Event Tracing command options
  -ep                   use paged memory for NT Kernel Logger (by default it uses non-paged memory)
  -eq                   use perf timer
  -es                   use system timer (default)
  -ec                   use cycle count
  -ePROCESS             process start &amp; end
  -eTHREAD              thread start &amp; end
  -eIMAGE_LOAD          image load
  -eDISK_IO             physical disk IO
  -eMEMORY_PAGE_FAULTS  all page faults
  -eMEMORY_HARD_FAULTS  hard faults only
  -eNETWORK             TCP/IP, UDP/IP send &amp; receive
  -eREGISTRY            registry calls
Create 8192KB file and run read test on it for 1 second:
  diskspd -c8192K -d1 testfile.dat
Set block size to 4KB, create 2 threads per file, 32 overlapped (outstanding)
I/O operations per thread, disable all caching mechanisms and run block-aligned random
access read test lasting 10 seconds:
  diskspd -b4K -t2 -r -o32 -d10 -h testfile.dat
Create two 1GB files, set block size to 4KB, create 2 threads per file, affinitize threads
to CPUs 0 and 1 (each file will have threads affinitized to both CPUs) and run read test
lasting 10 seconds:
  diskspd -c1G -b4K -t2 -d10 -a0,1 testfile1.dat testfile2.dat
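Building on the examples above, a parameter sweep (e.g. across block sizes) is easy to script since Diskspd is command line driven. The sketch below only builds hypothetical command lines; on a Windows test system you would hand each one to subprocess.run():

```python
# Sketch of scripting a Diskspd block-size sweep. The target file name
# and parameter values are placeholders; the commands are only built
# here, not executed, so this can be adapted to any test plan.

def build_diskspd_cmd(block_size: str, target: str, *, threads: int = 2,
                      outstanding: int = 32, seconds: int = 10) -> list:
    """Assemble one diskspd invocation: random reads, caching disabled
    (-h), latency statistics enabled (-L)."""
    return ["diskspd", f"-b{block_size}", f"-t{threads}", "-r",
            f"-o{outstanding}", f"-d{seconds}", "-h", "-L", target]

sweep = [build_diskspd_cmd(bs, "testfile.dat") for bs in ("4K", "8K", "64K")]
for cmd in sweep:
    print(" ".join(cmd))
```

Pairing a sweep like this with the -Rxml output option makes it straightforward to collect and compare results programmatically across runs.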

Where to learn more

The following are related links to read more about server (cloud, virtual and physical) storage I/O benchmarking tools, technologies and techniques.

Drew Robb’s benchmarking quick reference guide
Server storage I/O benchmarking tools, technologies and techniques resource page
Server and Storage I/O Benchmarking 101 for Smarties.
Microsoft Diskspd download and Microsoft Diskspd overview (via Technet)
I/O, I/O how well do you know about good or bad server and storage I/Os?
Server and Storage I/O Benchmark Tools: Microsoft Diskspd (Part I and Part II)

Wrap up and summary, for now…

This wraps up part-one of this two-part post taking a look at Microsoft Diskspd benchmark and workload generation tool. In part-two (here) of this post series we take a closer look including a test drive using Microsoft Diskspd.

Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2016 Server StorageIO and UnlimitedIO LLC All Rights Reserved
