Server Storage I/O Benchmark Tools: Microsoft Diskspd (Part I)
This is part one of a two-part post about Microsoft Diskspd, which is also part of a broader series focused on server storage I/O benchmarking, performance, capacity planning, tools and related technologies. You can view part two of this post here, along with companion links here.
Background
Many people use Iometer to create synthetic (artificial) workloads for benchmarking, testing, validation and other activities. While Iometer with its GUI is relatively easy to use and available across many operating system (OS) environments, the tool also has its limits. One of the bigger limits is that Iometer has become dated, with little to no new development for a long time, while other tools, including some newer ones, continue to evolve in functionality and extensibility. Some of these tools have an optional GUI for ease of use or configuration, while others simply have extensive scripting and command parameter capabilities. Many tools are supported across different OSes, including physical, virtual and cloud environments, while others such as Microsoft Diskspd are OS specific.
Instead of focusing on Iometer and other tools, as well as benchmarking techniques (we cover those elsewhere), let's focus on Microsoft Diskspd.
What is Microsoft Diskspd?
Microsoft Diskspd is a synthetic workload generation (e.g. benchmark) tool that runs on various Windows systems as an alternative to Iometer, vdbench, iozone, iorate, fio, sqlio and other tools. Diskspd is a command line tool, which means it can easily be scripted to do reads and writes of various I/O sizes, including random as well as sequential activity. Server and storage I/O can be buffered file system as well as non-buffered, across different types of storage and interfaces. Various performance and CPU usage information is provided to gauge the impact on a system when doing a given number of IOPs and amount of bandwidth, along with response time latency.
What can Diskspd do?
Microsoft Diskspd creates synthetic benchmark workload activity with the ability to define various options to simulate different application characteristics. This includes specifying reads and writes, random or sequential access, and I/O size, along with the number of threads to simulate concurrent activity. Diskspd can be used for testing or validating server and storage I/O systems along with associated software, tools and components. In addition to being able to specify different workloads, Diskspd can also be told which processors to use (e.g. CPU affinity) and whether to use buffered or non-buffered I/O, among other things.
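As an illustration of how those characteristics map onto the command line, here is a hypothetical invocation (the file path, sizes and thread counts below are illustrative, not from this post): it creates a 1GB test file and runs a 30-second, 8KB, 70% read / 30% write random workload with four threads, eight outstanding I/Os per thread, caching disabled and latency statistics collected.

```shell
:: Hypothetical example (path, sizes and counts are illustrative):
::   -c1G  create a 1GB test file
::   -d30  run for 30 seconds
::   -b8K  use an 8KB I/O size
::   -r    random access
::   -w30  30% writes (hence 70% reads)
::   -t4   4 threads, -o8 eight outstanding I/Os per thread
::   -h    disable software and hardware caching
::   -L    collect latency statistics
diskspd -c1G -d30 -b8K -r -w30 -t4 -o8 -h -L C:\test\testfile.dat
```

Because it is a single command, variations (different block sizes, read/write mixes, thread counts) are easy to script in a loop for comparison runs.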
What type of storage does Diskspd work with?
Physical and virtual storage including hard disk drives (HDD), solid state devices (SSD) and solid state hybrid drives (SSHD) in various systems or solutions. Storage targets can be physical devices as well as partitions or file systems. As with any workload tool that does writes, exercise caution to prevent accidental deletion or destruction of your data.
What information does Diskspd produce?
Diskspd provides output in text as well as XML formats. See an example of Diskspd output further down in this post.
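Because Diskspd can emit XML, results can be post-processed with a small script. The sketch below assumes element names such as TestTimeSeconds, Target, ReadCount and WriteCount, which follow the general shape of Diskspd's XML results but may vary by version; the inline sample document and its numbers are made up for illustration.

```python
# Sketch: summarize totals from a Diskspd XML result.
# Element names are assumptions based on the typical shape of Diskspd XML
# output; verify against the output of your Diskspd version.
import xml.etree.ElementTree as ET

# Made-up sample standing in for a real Diskspd XML result file.
sample = """<Results>
  <TimeSpan>
    <TestTimeSeconds>30.00</TestTimeSeconds>
    <Thread>
      <Target>
        <Path>C:\\test\\testfile.dat</Path>
        <ReadCount>120000</ReadCount>
        <WriteCount>30000</WriteCount>
      </Target>
    </Thread>
  </TimeSpan>
</Results>"""

root = ET.fromstring(sample)
seconds = float(root.findtext(".//TestTimeSeconds"))
# Sum counts across all per-thread targets.
reads = sum(int(t.findtext("ReadCount")) for t in root.iter("Target"))
writes = sum(int(t.findtext("WriteCount")) for t in root.iter("Target"))
print(f"Total IOPS: {(reads + writes) / seconds:.0f}")  # prints: Total IOPS: 5000
```

For real runs you would read the XML from a file (e.g. `ET.parse("result.xml")`) instead of an inline string.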
Where to get Diskspd?
You can download your free copy of Diskspd from the Microsoft site here.
The download and installation are quick and easy, just remember to select the proper version for your Windows system and type of processor.
Another tip is to remember to set your PATH environment variable to point to where you put the Diskspd executable.
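For instance, assuming you unpacked Diskspd to a folder such as C:\Tools\Diskspd (an illustrative path, not from this post), you could add it to the PATH for the current command prompt session:

```shell
:: Illustrative path; adjust to wherever you placed the Diskspd binaries.
:: This affects only the current command prompt session; use the System
:: Properties environment dialog (or setx) to make the change permanent.
set PATH=%PATH%;C:\Tools\Diskspd
```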
Also, stating what should be obvious: if you are going to do any benchmark or workload generation activity on a system where data could potentially be over-written or deleted, make sure you have a good backup and a tested restore before you begin, in case something goes wrong.
New to server storage I/O benchmarking or tools?
If you are not familiar with server storage I/O performance benchmarking or using various workload generation tools (e.g. benchmark tools), Drew Robb (@robbdrew) has a Data Storage Benchmarking Guide article over at Enterprise Storage Forum that provides a good framework and summary quick guide to server storage I/O benchmarking.
Via Drew: Data storage benchmarking can be quite esoteric in that vast complexity awaits anyone attempting to get to the heart of a particular benchmark. Case in point: The Storage Networking Industry Association (SNIA) has developed the Emerald benchmark to measure power consumption. This invaluable benchmark has a vast amount of supporting literature. That so much could be written about one benchmark test tells you just how technical a subject this is. And in SNIA’s defense, it is creating a Quick Reference Guide for Emerald (coming soon). But rather than getting into the nitty-gritty nuances of the tests, the purpose of this article is to provide a high-level overview of a few basic storage benchmarks, what value they might have and where you can find out more. Read more here including some of my comments, tips and recommendations.
In addition to Drew’s benchmarking quick reference guide, also check out the server storage I/O benchmarking tools, technologies and techniques resource page (Server and Storage I/O Benchmarking 101 for Smarties).
How do you use Diskspd?
Tip: When you run Microsoft Diskspd, it will create a file or data set on the device or volume being tested that it will do its I/O to. Make sure that you have enough disk space for what will be tested (e.g. if you are going to test 1TB you need to have more than 1TB of disk space free for use). Another tip: to speed up initialization (e.g. when Diskspd creates the file that I/Os will be done to), run as administrator.
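One way to pay the initialization cost once rather than on every run is to let a short throwaway run create the test file, then point subsequent runs at that same file. The command below is a hypothetical sketch (path and size are illustrative):

```shell
:: Hypothetical: a short 1-second, read-only (-w0) run whose main purpose is
:: the -c10G file creation; later runs can reuse the same file and omit -c.
diskspd -c10G -d1 -w0 C:\test\testfile.dat
```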
Tip: In case you forgot, a couple of other useful Microsoft tools (besides Perfmon) for working with and displaying server storage I/O devices, including disks (HDDs and SSDs), are the commands "wmic diskdrive list [brief]" and "diskpart". With diskpart, exercise caution as it can get you into trouble just as fast as it can get you out of trouble.
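For example, the following queries list the disks on a system before you pick a test target; the diskpart script file name here is a hypothetical one for illustration:

```shell
:: List physical disk drives (read-only query).
wmic diskdrive list brief
:: Run diskpart non-interactively with a script; listdisks.txt is a
:: hypothetical file containing the single line "list disk" (a read-only
:: command; many other diskpart commands can destroy data, so be careful).
diskpart /s listdisks.txt
```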
You can view the Diskspd commands after installing the tool and from a Windows command prompt type:
C:\Users\Username> Diskspd
The above command will display Diskspd help and information about the commands as follows.
Usage: diskspd [options] target1 [ target2 [ target3 …] ]
version 2.0.12 (2014/09/17)

Available targets:
file_path
#:

Available options:
-?  display usage information
-a#[,#[…]]  advanced CPU affinity – affinitize threads to CPUs provided after -a in a round-robin manner within the current KGroup (CPU count starts with 0); the same CPU can be listed more than once and the number of CPUs can be different than the number of files or threads (cannot be used with -n)
-ag  group affinity – affinitize threads in a round-robin manner across KGroups
-b  block size in bytes/KB/MB/GB [default=64K]
-B  base file offset in bytes/KB/MB/GB/blocks [default=0] (offset from the beginning of the file)
-c  create files of the given size; size can be stated in bytes/KB/MB/GB/blocks
-C  cool down time – duration of the test after measurements finish [default=0s]
-D  print IOPS standard deviations; the deviations are calculated for samples of the given duration
-d  duration (in seconds) to run the test [default=10s]
-f  file size – this parameter can be used to use only part of the file/disk/partition, for example to test only the first sectors of a disk
-fr  open file with the FILE_FLAG_RANDOM_ACCESS hint
-fs  open file with the FILE_FLAG_SEQUENTIAL_SCAN hint
-F  total number of threads (cannot be used with -t)
-g  throughput per thread is throttled to the given bytes per millisecond; note that this cannot be specified when using completion routines
-h  disable both software and hardware caching
-i  number of IOs (burst size) to issue before thinking; must be specified with -j
-j  time to think in ms before issuing a burst of IOs (burst size); must be specified with -i
-I  set IO priority
-l  use large pages for IO buffers
-L  measure latency statistics
-n  disable affinity (cannot be used with -a)
-o  number of overlapped I/O requests per file per thread (1=synchronous I/O, unless more than 1 thread is specified with -F) [default=2]
-p  start async (overlapped) I/O operations with the same offset (makes sense only with -o2 or greater)
-P  enable printing a progress dot after each given number of completed I/Os
-r  random I/O aligned to the given size
-R  output format; default is text
-s  stride size (offset between starting positions of subsequent I/O operations)
-S  disable OS caching
-t  number of threads per file (cannot be used with -F)
-T  stride between I/O operations performed on the same file by different threads [default=0] (starting offset = base file offset + thread number * stride)
-v  verbose mode
-w  percentage of write requests (-w and -w0 are equivalent); absence of this switch indicates 100% reads. IMPORTANT: your data will be destroyed without warning
-W  warm up time – duration of the test before measurements start [default=5s]
-x  use completion routines instead of I/O Completion Ports
-X  use an XML file for configuring the workload; cannot be used with other parameters
-z  set random seed [default=0 if parameter not provided, GetTickCount() if value not provided]

Write buffer command options. By default, the write buffers are filled with a repeating pattern (0, 1, 2, …, 255, 0, 1, …):
-Z  zero buffers used for write tests
-Z (with a size argument)  use a global buffer of the given size as the source for write operations

Synchronization command options:
-ys  signals the named event before starting the run
-yf  signals the named event after the run finishes
-yr  waits on the named event before starting the run
-yp  allows stopping the run when the named event is set
-ye  sets the named event

Event Tracing command options:
-ep  use paged memory for the NT Kernel Logger (by default it uses non-paged memory)
-eq  use perf timer
-es  use system timer (default)
-ec  use cycle count
-ePROCESS  process start & end
-eTHREAD  thread start & end
-eIMAGE_LOAD  image load
-eDISK_IO  physical disk IO
-eMEMORY_PAGE_FAULTS  all page faults
-eMEMORY_HARD_FAULTS  hard faults only
-eNETWORK  TCP/IP, UDP/IP send & receive
-eREGISTRY  registry calls
Where to learn more
The following are related links to read more about server (cloud, virtual and physical) storage I/O benchmarking tools, technologies and techniques.
Server and Storage I/O Benchmarking 101 for Smarties resource page
Microsoft Diskspd download and Microsoft Diskspd overview (via Technet)
I/O, I/O how well do you know about good or bad server and storage I/Os?
Server and Storage I/O Benchmark Tools: Microsoft Diskspd (Part I and Part II)
Wrap up and summary, for now…
This wraps up part one of this two-part post looking at the Microsoft Diskspd benchmark and workload generation tool. In part two (here) of this post series we take a closer look, including a test drive using Microsoft Diskspd.
Ok, nuff said (for now)
Cheers gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved