Which Enterprise HDD for Content Applications: Different File Size Impact

Updated 1/23/2018

Insight for effective server storage I/O decision making
Server StorageIO Lab Review

This is the fifth in a multi-part series (read part four here) based on a white paper hands-on lab report I did compliments of Servers Direct and Seagate that you can read in PDF form here. The focus is looking at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drives (HDD's). In this post the focus is on large and small file I/O processing.

File Performance Activity

Tip: Content solutions use files in various ways. Use the following to gain perspective on how various HDD's handle workloads similar to your specific needs.

Two separate file processing workloads were run (see note 12), one with a relatively small number of large files, and another with a large number of small files. For the large file processing (table-3), 5 GByte sized files were created and then accessed via 128 Kbyte (128KB) sized I/O over a 10 hour period with 90% reads using 64 threads (workers). The large file workload simulates what might be seen with higher definition video, image or other content streaming.

(Note 12) File processing workloads were run using Vdbench 5.04 and file anchors with the sample script configurations shown below. Instead of vdbench you could also use other tools such as sysbench or fio among others.

VdbenchFSBigTest.txt
# Sample script for big files testing
fsd=fsd1,anchor=H:,depth=1,width=5,files=20,size=5G
fwd=fwd1,fsd=fsd1,rdpct=90,xfersize=128k,fileselect=random,fileio=random,threads=64
rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=10h,interval=30

vdbench -f VdbenchFSBigTest.txt -m 16 -o Results_FSbig_H_060615

VdbenchFSSmallTest.txt
# Sample script for small files testing
fsd=fsd1,anchor=H:,depth=1,width=64,files=25600,size=16k
fwd=fwd1,fsd=fsd1,rdpct=90,xfersize=1k,fileselect=random,fileio=random,threads=64
rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=10h,interval=30

vdbench -f VdbenchFSSmallTest.txt -m 16 -o Results_FSsmall_H_060615

The 10% writes are intended to reflect some update activity for new content or other changes to content. Note that 128 MBytes per second (e.g. about 1,000 x 128KB I/Os) translates to roughly 1 Gbps of streaming content such as higher definition video. However 4K video (not optimized) would require a higher speed as well as result in larger file sizes. Table-3 shows the performance during the large file access period, including average read/write rates and response times, CPU utilization and bandwidth (MBps).
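
As a quick back-of-the-envelope check (a worksheet sketch in Python, not part of the lab workload, with illustrative numbers), the conversion behind that statement, along with how many 128KB I/Os per second it takes to sustain such a stream, looks like this:

# Rough streaming bandwidth arithmetic (illustrative, not measured results)
io_size_kb = 128                          # I/O size used in the large file workload
mbps = 128                                # ~128 MBytes per second of streaming content

gbps = mbps * 8 / 1000                    # bytes to bits (decimal units)
ios_per_sec = mbps * 1000 / io_size_kb    # 128KB I/Os needed to sustain that rate

print(f"{mbps} MBps is roughly {gbps:.2f} Gbps")                        # ~1.02 Gbps
print(f"which takes about {ios_per_sec:.0f} x {io_size_kb}KB I/Os per second")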

| Drive set | Avg. File Read Rate | Avg. Read Resp. Time (Sec.) | Avg. File Write Rate | Avg. Write Resp. Time (Sec.) | Avg. CPU % Total | Avg. CPU % System | Avg. MBps Read | Avg. MBps Write |
|---|---|---|---|---|---|---|---|---|
| ENT 15K R1 | 580.7 | 107.9 | 64.5 | 19.7 | 52.2 | 35.5 | 72.6 | 8.1 |
| ENT 10K R1 | 455.4 | 135.5 | 50.6 | 44.6 | 34.0 | 22.7 | 56.9 | 6.3 |
| ENT CAP R1 | 285.5 | 221.9 | 31.8 | 19.0 | 43.9 | 28.3 | 37.7 | 4.0 |
| ENT 10K R10 | 690.9 | 87.21 | 76.8 | 48.6 | 35.0 | 21.8 | 86.4 | 9.6 |

Table-3 Performance summary for large file access operations (90% read)

Table-3 shows that for the two-drive RAID 1 configurations, the Enterprise 15K drives are the fastest, however using RAID 10 with four 10K HDD's with enhanced cache features provides a good price, performance and space capacity option. Software RAID was used in this workload test.

Figure-4 shows the relative performance of various HDD options handling large files. Keep in mind that for the response time lines lower is better, while for the activity rates higher is better.

large file processing
Figure-4 Large file processing 90% read, 10% write rate and response time

In figure-4 you can see the performance in terms of response time (reads larger dashed line, writes smaller dotted line) along with number of file read operations per second (reads solid blue column bar, writes green column bar). Reminder that lower response time, and higher activity rates are better. Performance declines moving from left to right, from 15K to 10K Enterprise Performance with enhanced cache feature to Enterprise Capacity (7.2K), all of which were hardware RAID 1. Also shown is a hardware RAID 10 (four x 10K HDD’s).

Results in figure-4 above and table-4 below show how various drives can be configured to balance their performance, capacity and costs to meet different needs. Table-4 below shows an analysis looking at average file reads per second (RPS) performance vs. HDD costs, usable capacity and protection level.

Table-4 is an example of looking at multiple metrics to make informed decisions as to which HDD would be best suited to your specific needs. For example RAID 10 using four 10K drives provides good performance and protection along with large usable space, however that also comes at a budget cost (e.g. price).
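
To make the math behind such a table concrete, here is a small worksheet sketch (Python) recreating the ENT 15K R1 row of table-4 from the drive pair price and the table-3 read rate; the values come from the tables themselves, everything else is just arithmetic.

# Cost per RPS worksheet for the ENT 15K R1 row (two-drive RAID 1)
pair_cost = 1190            # $1,190 shown in table-4 for the two drives
drives = 2
avg_rps = 580.7             # average file reads per second from table-3

single_drive_cost_per_rps = (pair_cost / drives) / avg_rps    # ~$1.02
multi_drive_cost_per_rps = pair_cost / avg_rps                # ~$2.05

print(f"single drive $/RPS: {single_drive_cost_per_rps:.2f}")
print(f"multi drive  $/RPS: {multi_drive_cost_per_rps:.2f}")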

| Drive set | Avg. File Reads Per Sec. (RPS) | Single Drive Cost per RPS | Multi-Drive Cost per RPS | Single Drive Cost per GB Capacity | Cost per GB Usable (Protected) Cap. | Drive Cost (Multiple Drives) | Protection Overhead (Space Capacity for RAID) | Cost per Usable GB per RPS | Avg. File Read Resp. (Sec.) |
|---|---|---|---|---|---|---|---|---|---|
| ENT 15K R1 | 580.7 | $1.02 | $2.05 | $0.99 | $0.99 | $1,190 | 100% | $2.1 | 107.9 |
| ENT 10K R1 | 455.5 | $1.92 | $3.84 | $0.49 | $0.49 | $1,750 | 100% | $3.8 | 135.5 |
| ENT CAP R1 | 285.5 | $1.40 | $2.80 | $0.20 | $0.20 | $798 | 100% | $2.8 | 271.9 |
| ENT 10K R10 | 690.9 | $1.27 | $5.07 | $0.49 | $0.97 | $3,500 | 100% | $5.1 | 87.2 |

Table-4 Performance, capacity and cost analysis for big file processing

Small File Size Processing

To simulate a general file sharing environment, or content streaming with many smaller objects, 1,638,400 16KB sized files were created on each device being tested (table-5). These files were spread across 64 directories (25,600 files each) and accessed via 64 threads (workers) doing 90% reads with a 1KB I/O size over a ten hour time frame. Like the large file test, and database activity, all workloads were run at the same time (e.g. test devices were concurrently busy).
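
For perspective on the resulting data footprint, here is a quick sizing sketch (Python) using the same parameters as the VdbenchFSSmallTest.txt script shown earlier:

# Sizing the small file dataset (width=64, files=25600, size=16k)
dirs = 64
files_per_dir = 25_600
file_size_kb = 16

total_files = dirs * files_per_dir                        # 1,638,400 files
total_gib = total_files * file_size_kb / (1024 * 1024)    # KBytes to GiBytes

print(f"{total_files:,} files, about {total_gib:.0f} GiB per device under test")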

| Drive set | Avg. File Read Rate | Avg. Read Resp. Time (Sec.) | Avg. File Write Rate | Avg. Write Resp. Time (Sec.) | Avg. CPU % Total | Avg. CPU % System | Avg. MBps Read | Avg. MBps Write |
|---|---|---|---|---|---|---|---|---|
| ENT 15K R1 | 3,415.7 | 1.5 | 379.4 | 132.2 | 24.9 | 19.5 | 3.3 | 0.4 |
| ENT 10K R1 | 2,203.4 | 2.9 | 244.7 | 172.8 | 24.7 | 19.3 | 2.2 | 0.2 |
| ENT CAP R1 | 1,063.1 | 12.7 | 118.1 | 303.3 | 24.6 | 19.2 | 1.1 | 0.1 |
| ENT 10K R10 | 4,590.5 | 0.7 | 509.9 | 101.7 | 27.7 | 22.1 | 4.5 | 0.5 |

Table-5 Performance summary for small sized (16KB) file access operations (90% read)

Figure-5 shows the relative performance of various HDD options handling small files. Keep in mind that for the response time lines lower is better, while for the activity rates higher is better.

small file processing
Figure-5 Small file processing 90% read, 10% write rate and response time

In figure-5 you can see the performance in terms of response time (reads larger dashed line, writes smaller dotted line) along with number of file read operations per second (reads solid blue column bar, writes green column bar). Reminder that lower response time, and higher activity rates are better. Performance declines moving from left to right, from 15K to 10K Enterprise Performance with enhanced cache feature to Enterprise Capacity (7.2K RPM), all of which were hardware RAID 1. Also shown is a hardware RAID 10 (four x 10K RPM HDD’s) that has higher performance and capacity along with costs (table-5).

Results in figure-5 above and table-6 below show how various drives can be configured to balance their performance, capacity and costs to meet different needs, with table-6 providing an analysis of average file reads per second (RPS) performance vs. HDD costs, usable capacity and protection level.

Table-6 is an example of looking at multiple metrics to make informed decisions as to which HDD would be best suited to your specific needs. For example RAID 10 using four 10K drives provides good performance and protection along with large usable space, however that also comes at a budget cost (e.g. price).

| Drive set | Avg. File Reads Per Sec. (RPS) | Single Drive Cost per RPS | Multi-Drive Cost per RPS | Single Drive Cost per GB Capacity | Cost per GB Usable (Protected) Cap. | Drive Cost (Multiple Drives) | Protection Overhead (Space Capacity for RAID) | Cost per Usable GB per RPS | Avg. File Read Resp. (Sec.) |
|---|---|---|---|---|---|---|---|---|---|
| ENT 15K R1 | 3,415.7 | $0.17 | $0.35 | $0.99 | $0.99 | $1,190 | 100% | $0.35 | 1.51 |
| ENT 10K R1 | 2,203.4 | $0.40 | $0.79 | $0.49 | $0.49 | $1,750 | 100% | $0.79 | 2.90 |
| ENT CAP R1 | 1,063.1 | $0.38 | $0.75 | $0.20 | $0.20 | $798 | 100% | $0.75 | 12.70 |
| ENT 10K R10 | 4,590.5 | $0.19 | $0.76 | $0.49 | $0.97 | $3,500 | 100% | $0.76 | 0.70 |

Table-6 Performance, capacity and cost analysis for small file processing

Looking at the small file processing analysis in table-6 shows that the 15K HDD's on an apples-to-apples basis (e.g. same RAID level and number of drives) provide the best performance. However when also factoring in space capacity, performance, different RAID levels or other protection schemes along with cost, there are other considerations. On the other hand the Enterprise Capacity 2TB HDD's have a low cost per capacity, however they do not have the performance of other options, assuming your applications need more performance.

Thus the right HDD for one application may not be the best one for a different scenario, and multiple metrics as shown in table-6 need to be included in an informed storage decision making process.

Where To Learn More

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What This All Means

File processing is a common content application task, with some files being small, others large or mixed, as well as read and written. Even if your content environment is using object storage, chances are that unless it is a new application or a gateway exists, you may be using NAS or file based access. Thus the importance: if your applications are doing file based processing, either run your own applications or use tools that can simulate as close as possible what your environment is doing.

Continue reading part six in this multi-part series here where the focus is around general I/O including 8KB and 128KB sized IOPs along with associated metrics.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Big Files Lots of Little File Processing Benchmarking with Vdbench

Updated 2/10/2018

Need to test a server, storage I/O networking, hardware, software, services, cloud, virtual, physical or other environment that is either doing some form of file processing, or that you simply want to have some extra workload running in the background for whatever reason? An option is File Processing Benchmarking with Vdbench.


Getting Started


Here’s a quick and relatively easy way to do it with Vdbench (free from Oracle). Granted there are other tools, both free and for fee, that can do similar things, however we will leave those for another day and post. Here’s the con to this approach: there is no GUI like what you have available with some other tools. Here’s the pro: it’s free, flexible and limited only by your creativity, amount of storage space, server memory and I/O capacity.

If you need a background on Vdbench and benchmarking, check out the series of related posts here (e.g. www.storageio.com/performance).

Get and Install the Vdbench Bits and Bytes


If you do not already have Vdbench installed, get a copy from the Oracle or Source Forge site (now points to Oracle here).

Vdbench is free; you simply sign-up and accept the free license, select the version, then download the bits (it is a single, common distribution for all OS) as well as the documentation.

Installation, particularly on Windows, is really easy: basically follow the instructions in the documentation by copying the contents of the download folder to a specified directory, set up any environment variables, and make sure that you have Java installed.

Here is a hint and tip for Windows Servers: if you get an error message about counters, open a command prompt with Administrator rights and type the command:

$ lodctr /r


The above command will reset your I/O counters. Note however that the command will also overwrite counters if enabled, so only use it if you have to.

Likewise *nix install is also easy, copy the files, make sure to copy the applicable *nix shell script (they are in the download folder), and verify Java is installed and working.

You can do a vdbench -t (windows) or ./vdbench -t (*nix) to verify that it is working.

Vdbench File Processing

There are many options with Vdbench as it has a very robust command and scripting language, including the ability to set up for loops among other things. We are only going to touch the surface here using its file processing capabilities. Likewise, Vdbench can run from a single server accessing multiple storage systems or file systems, as well as from multiple servers to a single file system. For simplicity, we will stick with the basics in the following examples to exercise a local file system. The number of files and file sizes are limited by server memory and storage space.

You can specify the number and depth of directories to put files into for processing. One of the parameters is the anchor point for the file processing; in the following examples S:\SIOTEMP\FS1 is used as the anchor point. Other parameters include the I/O size, percent reads, number of threads, run time and sample interval as well as output folder name for the result files. Note that unlike some tools, Vdbench does not create a single file of results, rather a folder with several files including summary, totals, parameters, histograms and CSV among others.


Simple Vdbench File Processing Commands

For flexibility and ease of use I put the following three Vdbench commands into a simple text file that is then called with parameters on the command line.
fsd=fsd1,anchor=!fanchor,depth=!dirdep,width=!dirwid,files=!numfiles,size=!filesize

fwd=fwd1,fsd=fsd1,rdpct=!filrdpct,xfersize=!fxfersize,fileselect=random,fileio=random,threads=!thrds

rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=!etime,interval=!itime

Simple Vdbench script

# SIO_vdbench_filesystest.txt
#
# Example Vdbench script for file processing
#
# fanchor = file system place where directories and files will be created
# dirwid = how wide should the directories be (e.g. how many directories wide)
# numfiles = how many files per directory
# filesize = size in k, m, g e.g. 16k = 16KBytes
# fxfersize = file I/O transfer size in kbytes
# thrds = how many threads or workers
# etime = how long to run in minutes (m) or hours (h)
# itime = interval sample time e.g. 30 seconds
# dirdep = how deep the directory tree
# filrdpct = percent of reads e.g. 90 = 90 percent reads
# -p processnumber = optional specify a process number, only needed if running multiple vdbenchs at same time, number should be unique
# -o output file that describes what being done and some config info
#
# Sample command line shown for Windows, for *nix add ./
#
# The real Vdbench script with command line parameters indicated by !
#

fsd=fsd1,anchor=!fanchor,depth=!dirdep,width=!dirwid,files=!numfiles,size=!filesize

fwd=fwd1,fsd=fsd1,rdpct=!filrdpct,xfersize=!fxfersize,fileselect=random,fileio=random,threads=!thrds

rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=!etime,interval=!itime

Big Files Processing Script


With the above script file defined, for Big Files I specify a command line such as the following.
$ vdbench -f SIO_vdbench_filesystest.txt fanchor=S:\SIOTemp\FS1 dirwid=1 numfiles=60 filesize=5G fxfersize=128k thrds=64 etime=10h itime=30 numdir=1 dirdep=1 filrdpct=90 -p 5576 -o SIOWS2012R220_NOFUZE_5Gx60_BigFiles_64TH_STX1200_020116
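
If you find yourself running many variations, a small wrapper can help assemble those long command lines. The following is a hypothetical Python helper (the function name and example values are this sketch's own, not from the lab setup); it simply builds and launches the same style of vdbench invocation shown above, assuming vdbench is on your PATH.

# Hypothetical helper to assemble a vdbench command line from parameters
import subprocess

def run_vdbench(script, outdir, **params):
    """Build and run: vdbench -f <script> key=value ... -o <outdir>"""
    args = ["vdbench", "-f", script]
    args += [f"{key}={value}" for key, value in params.items()]
    args += ["-o", outdir]
    print(" ".join(args))          # show the command that would be run
    return subprocess.run(args)    # assumes vdbench is on the PATH

# Example: a big files run in the style shown above
# run_vdbench("SIO_vdbench_filesystest.txt", "Results_BigFiles",
#             fanchor=r"S:\SIOTemp\FS1", dirwid=1, numfiles=60, filesize="5G",
#             fxfersize="128k", thrds=64, etime="10h", itime=30,
#             dirdep=1, filrdpct=90)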

Big Files Processing Example Results


The following is one of the result files from the folder of results created via the above command for Big File processing showing totals.


Run totals

21:09:36.001 Starting RD=format_for_rd1

Feb 01, 2016 .Interval. .ReqstdOps.. ...cpu%... read ....read.... ...write.... ..mb/sec... mb/sec .xfer.. ...mkdir... ...rmdir... ..create... ...open.... ...close... ..delete...
rate resp total sys pct rate resp rate resp read write total size rate resp rate resp rate resp rate resp rate resp rate resp
21:23:34.101 avg_2-28 2848.2 2.70 8.8 8.32 0.0 0.0 0.00 2848.2 2.70 0.00 356.0 356.02 131071 0.0 0.00 0.0 0.00 0.1 109176 0.1 0.55 0.1 2006 0.0 0.00

21:23:35.009 Starting RD=rd1; elapsed=36000; fwdrate=max. For loops: None

07:23:35.000 avg_2-1200 4939.5 1.62 18.5 17.3 90.0 4445.8 1.79 493.7 0.07 555.7 61.72 617.44 131071 0.0 0.00 0.0 0.00 0.0 0.00 0.1 0.03 0.1 2.95 0.0 0.00


Lots of Little Files Processing Script


For lots of little files, the following is used.


$ vdbench -f SIO_vdbench_filesystest.txt fanchor=S:\SIOTEMP\FS1 dirwid=64 numfiles=25600 filesize=16k fxfersize=1k thrds=64 etime=10h itime=30 dirdep=1 filrdpct=90 -p 5576 -o SIOWS2012R220_NOFUZE_SmallFiles_64TH_STX1200_020116

Lots of Little Files Processing Example Results


The following is one of the result files from the folder of results created via the above command for the lots of little files processing, showing totals.
Run totals

09:17:38.001 Starting RD=format_for_rd1

Feb 02, 2016 .Interval. .ReqstdOps.. ...cpu%... read ....read.... ...write.... ..mb/sec... mb/sec .xfer.. ...mkdir... ...rmdir... ..create... ...open.... ...close... ..delete...
rate resp total sys pct rate resp rate resp read write total size rate resp rate resp rate resp rate resp rate resp rate resp
09:19:48.016 avg_2-5 10138 0.14 75.7 64.6 0.0 0.0 0.00 10138 0.14 0.00 158.4 158.42 16384 0.0 0.00 0.0 0.00 10138 0.65 10138 0.43 10138 0.05 0.0 0.00

09:19:49.000 Starting RD=rd1; elapsed=36000; fwdrate=max. For loops: None

19:19:49.001 avg_2-1200 113049 0.41 67.0 55.0 90.0 101747 0.19 11302 2.42 99.36 11.04 110.40 1023 0.0 0.00 0.0 0.00 0.0 0.00 7065 0.85 7065 1.60 0.0 0.00


Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What This All Means

The above examples can easily be modified to do different things particular if you read the Vdbench documentation on how to setup multi-host, multi-storage system, multiple job streams to do different types of processing. This means you can benchmark a storage systems, server or converged and hyper-converged platform, or simply put a workload on it as part of other testing. There are even options for handling data footprint reduction such as compression and dedupe.

Ok, nuff said, for now.

Gs

Greg Schulz - Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Cloud Conversations: AWS EFS Elastic File System (Cloud NAS) First Preview Look

Amazon Web Services (AWS) recently announced (preview) its new Elastic File System (EFS) providing Network File System (NFS) NAS (Network Attached Storage) capabilities for AWS Elastic Cloud Compute (EC2) instances. AWS EFS complements other AWS storage offerings including Simple Storage Service (S3) along with Elastic Block Storage (EBS), Glacier and Relational Data Services (RDS) among others.

Ok, that’s a lot of buzzwords and acronyms so let’s break this down a bit.


AWS EFS and Cloud Storage, Beyond Buzzword Bingo

  • EC2 – Instances exist in various Availability Zones (AZ’s) in different AWS Regions. These are compute instances with various operating systems including Windows and Ubuntu among others that also can be pre-configured with applications such as SQL Server or web services among others. EC2 instances vary from low-cost to high-performance compute, memory, GPU, storage or general purpose optimized. For example, some EC2 instances rely solely on EBS, S3, RDS or other AWS storage offerings while others include on-board Solid State Disk (SSD) like DAS SSD found on traditional servers. EC2 instances on EBS volumes can be snapshotted to S3 storage, which in turn can be replicated to another region.
  • EBS – Scalable block accessible storage for EC2 instances that can be configured for performance or bulk storage, as well as for persistent images for EC2 instances (if you choose to configure your instance to be persistent)
  • EFS – New file (aka NAS) accessible storage service accessible from EC2 instances in various AZ’s in a given AWS region
  • Glacier – Cloud based near-line (or by some comparisons off-line) cold-storage archives.
  • RDS – Relational Database Services for SQL and other data repositories
  • S3 – Provides durable, scalable low-cost bulk (aka object) storage accessible from inside AWS as well as via externally. S3 can be used by EC2 instances for bulk durable storage as well as being used as a target for EBS snapshots.
  • Learn more about EC2, EBS, S3, Glacier, Regions, AZ’s and other AWS topics in this primer here

aws regions architecture

What is EFS

EFS implements NFS V4 (SNIA NFS V4 primer), providing network attached storage (NAS), meaning data sharing. AWS is indicating initial pricing for EFS at $0.30 per GByte per month. EFS is designed for storage and data sharing from multiple EC2 instances in different AZ’s in the same AWS region with scalability into the PBs.
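
Based on that indicated preview pricing (which could change by general availability), a quick cost sketch (Python, illustrative only) looks like this:

# Monthly cost estimate at the indicated $0.30 per GB per month preview price
efs_price_per_gb_month = 0.30

for gb in (100, 1_000, 10_000):
    print(f"{gb:>6} GB ~ ${gb * efs_price_per_gb_month:>9,.2f} per month")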

What EFS is not

Currently it seems that EFS has an end-point inside AWS accessible via an EC2 instance, like EBS. This appears to be like EBS in that the storage service is accessible only to AWS EC2 instances, unlike S3 which can be accessed from the outside world as well as via EC2 instances.

Note however, that depending on how you configure your EC2 instance with different software, as well as configure a Virtual Private Cloud (VPC) and other settings, it is possible to have an application, software tool or operating system running on EC2 accessible from the outside world. For example, NAS software such as those from SoftNAS and NetApp among many others can be installed on an EC2 instance and with proper configuration, as well as being accessible to other EC2 instances, they can also be accessed from outside of AWS (with proper settings and security).

AWS EFS at this time is NFS version 4 based and does not support Windows SMB/CIFS, HDFS or other NAS access protocols. In addition AWS EFS is accessible from multiple AZ’s within a region; to share NAS data across regions some other software would be required.

EFS is not yet released as of this writing, and AWS is currently accepting requests to join the EFS preview here.


Where to learn more

Here are some links to learn more about AWS S3 and related topics

What this all means and wrap-up

AWS continues to extend its cloud platform including both compute and storage offerings. EFS complements EBS along with S3, Glacier and RDS. For many environments NFS support will be welcome, while for others CIFS/SMB would be appreciated, and others are starting to find value in HDFS accessible NAS.

Overall I like this announcement and look forward to moving beyond the preview.

Ok, nuff said, for now..

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

How to test your HDD SSD or all flash array (AFA) storage fundamentals

Updated 2/14/2018

Over at BizTech Magazine I have a new article 4 Ways to Performance Test Your New HDD or SSD that provides a quick guide to verifying or learning what the speed characteristics of your new storage device are.

An out-take from the article used by BizTech as a "tease" is:

These four steps will help you evaluate new storage drives. And … psst … we included the metrics that matter.

Building off the basics, server storage I/O benchmark fundamentals

The four basic steps in the article are:

  • Plan what and how you are going to test (what’s applicable for you)
  • Decide on a benchmarking tool (learn about various tools here)
  • Test the test (find bugs, errors before a long running test)
  • Focus on metrics that matter (what’s important for your environment)


Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What This All Means

To some, the above (read the full article here) may seem like common sense tips and things everybody should know; otoh there are many people who are new to server, storage, I/O networking, hardware, software, cloud and virtual environments, along with various applications, not to mention different tools.

Thus the above is a refresher for some (e.g. deja vu) while for others it might be new and revolutionary or simply helpful. Interested in HDD’s and SSD’s as well as other server storage I/O performance along with benchmarking tools, techniques and trends? Check out the collection of links here (Server and Storage I/O Benchmarking and Performance Resources).

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

I/O, I/O how well do you know good bad ugly server storage I/O iops?

Updated 2/10/2018

There are many different types of server storage I/O iops associated with various environments, applications and workloads. Some I/O activity is measured as iops, other activity as transactions per second (TPS), files or messages per unit of time (hour, minute, second), gets, puts or other operations. The best IO is the one you do not have to do.

What about all the cloud, virtual, software defined and legacy based application that still need to do I/O?

If no IO operation is the best IO, then the second best IO is the one that can be done as close to the application and processor as possible with the best locality of reference.

Also keep in mind that aggregation (e.g. consolidation) can cause aggravation (server storage I/O performance bottlenecks).

Example of aggregation (consolidation) causing aggravation (server storage i/o blender bottlenecks)

And the third best?

It’s the one that can be done in less time or at least cost or effect to the requesting application, which means moving further down the memory and storage stack.

Leveraging flash SSD and cache technologies to find and fix server storage I/O bottlenecks

On the other hand, any IOP, regardless of whether it is for block, file or object storage, that involves some context is better than one without, particularly involving metrics that matter (here, here and here [webinar]).

Server Storage I/O optimization and effectiveness

The problem with IO’s is that they are basic operations for getting data into and out of a computer or processor, so there is no way to avoid all of them, unless you have a very large budget. Even if you have a large budget that can afford an all flash SSD solution, you may still meet bottlenecks or other barriers.

IO’s require CPU or processor time and memory to set up and then process the results, as well as IO and networking resources to move data to its destination or retrieve it from where it is stored. While IO’s cannot be eliminated, their impact can be greatly improved or optimized by, among other techniques, doing fewer of them via caching and by grouping reads or writes (pre-fetch, write-behind).
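
Here is a minimal sketch (Python) of the write grouping idea: issuing many small unbuffered writes vs. accumulating them in memory (write-behind style) and issuing one large sequential write. Absolute timings will vary by system; the point is the reduced number of write operations.

# Many small writes vs. one grouped write (timings vary by system)
import io, os, tempfile, time

payload = os.urandom(1024)    # 1KB records
records = 10_000

def small_writes(path):
    with open(path, "wb", buffering=0) as f:    # unbuffered: one write call each
        for _ in range(records):
            f.write(payload)

def grouped_write(path):
    buf = io.BytesIO()
    for _ in range(records):
        buf.write(payload)        # accumulate in memory (write-behind style)
    with open(path, "wb") as f:
        f.write(buf.getvalue())   # one large sequential write

for fn in (small_writes, grouped_write):
    path = os.path.join(tempfile.gettempdir(), f"grouptest_{fn.__name__}.bin")
    start = time.perf_counter()
    fn(path)
    print(f"{fn.__name__}: {time.perf_counter() - start:.3f} seconds")
    os.remove(path)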


Think of it this way: Instead of going on multiple errands, sometimes you can group multiple destinations together making for a shorter, more efficient trip. However, that optimization may also mean your drive will take longer. So, sometimes it makes sense to go on a couple of quick, short, low-latency trips instead of one larger one that takes half a day even as it accomplishes many tasks. Of course, how far you have to go on those trips (i.e., their locality) makes a difference about how many you can do in a given amount of time.

Locality of reference (or proximity)

What is locality of reference?

This refers to how close (i.e., its place) data exists to where it is needed (being referenced) for use. For example, the best locality of reference in a computer would be registers in the processor core, ready to be acted on immediately. This would be followed by levels 1, 2, and 3 (L1, L2, and L3) onboard caches, followed by main memory, or DRAM. After that comes solid-state memory typically NAND flash either on PCIe cards or accessible on a direct attached storage (DAS), SAN, or NAS device. 
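
To put rough numbers on that hierarchy, the following sketch (Python) uses commonly cited ballpark access times; these are order-of-magnitude illustrations, not measurements from this lab.

# Ballpark access times per tier (order of magnitude, not measured here)
tiers = {
    "CPU register / L1 cache": 1e-9,    # ~1 ns
    "L2 / L3 cache":           1e-8,    # ~10 ns
    "DRAM main memory":        1e-7,    # ~100 ns
    "PCIe NAND flash SSD":     1e-4,    # ~100 us
    "HDD (random access)":     5e-3,    # ~5 ms
}

dram = tiers["DRAM main memory"]
for tier, seconds in tiers.items():
    print(f"{tier:<26} {seconds * 1e9:>12,.0f} ns ({seconds / dram:>9,.2f}x DRAM)")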

server storage I/O locality of reference

Even though a PCIe NAND flash card is close to the processor, there still remains the overhead of traversing the PCIe bus and associated drivers. To help offset that impact, PCIe cards use DRAM as cache or buffers for data along with meta or control information to further optimize and improve locality of reference. In other words, this information is used to help with cache hits, cache use, and cache effectiveness vs. simply boosting cache use.

SSD to the rescue?

What can you do to cut the impact of IO’s?

There are many steps one can take, starting with establishing baseline performance and availability metrics.

The metrics that matter include IOP’s, latency, bandwidth, and availability. Then, leverage metrics to gain insight into your application’s performance.
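
Those metrics are related; as a simple worksheet sketch (Python, with illustrative numbers), bandwidth falls out of IOPS times I/O size, and Little's Law ties activity rate and response time to the concurrency (outstanding I/Os) needed to sustain them.

# Relating IOPS, I/O size, bandwidth and latency (illustrative numbers)
iops = 4000
io_size_kb = 8
latency_s = 0.005    # 5 ms average response time

bandwidth_mbps = iops * io_size_kb / 1024    # MBps = IOPS x I/O size
outstanding = iops * latency_s               # Little's Law: N = X * R

print(f"{iops} IOPS @ {io_size_kb}KB = {bandwidth_mbps:.1f} MBps")
print(f"~{outstanding:.0f} I/Os in flight needed to sustain that rate")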

Understand that IO’s are a fact of applications doing work (storing, retrieving, managing data) no matter whether systems are virtual, physical, or running up in the cloud. But it’s important to understand just what a bad IO is, along with its impact on performance. Try to identify those that are bad, and then find and fix the problem, either with software, application, or database changes. Perhaps you need to throw more software caching tools, hypervisors, or hardware at the problem. Hardware may include faster processors with more DRAM and faster internal busses.

Leveraging local PCIe flash SSD cards for caching or as targets is another option.

You may want to use storage systems or appliances that rely on intelligent caching and storage optimization capabilities to help with performance, availability, and capacity.

Where to gain insight into your server storage I/O environment

There are many tools that you can be used to gain insight into your server storage I/O environment across cloud, virtual, software defined and legacy as well as from different layers (e.g. applications, database, file systems, operating systems, hypervisors, server, storage, I/O networking). Many applications along with databases have either built-in or optional tools from their provider, third-party, or via other sources that can give information about work activity being done. Likewise there are tools to dig down deeper into the various data information infrastructure to see what is happening at the various layers as shown in the following figures.

Gaining application and operating system level performance insight via different tools

Insight and awareness via operating system tools on Windows and Linux

In the above example, Spotlight on Windows (SoW), which you can download for free from Dell here, is shown along with Ubuntu utilities. You could also use other tools to look at server storage I/O performance including Windows Perfmon among others.

Hypervisor performance using VMware ESXi / vsphere built-in tools

Using Visual ESXtop to dig deeper into virtual server storage I/O performance

Gaining insight into virtual server storage I/O cache performance

Wrap up and summary

There are many approaches to address (e.g. find and fix) vs. simply move or mask data center and server storage I/O bottlenecks. Having insight and awareness into how your environment along with its applications is behaving is important for knowing where to focus resources. Also keep in mind that a bit of flash SSD or DRAM cache in the applicable place can go a long way, while a lot of cache will also cost you cash. Even if you can’t eliminate I/Os, look for ways to decrease their impact on your applications and systems.

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Keep in mind: SSD including flash and DRAM among others are in your future; the question is where, when, with what, how much and whose technology or packaging.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Revisiting RAID data protection remains relevant and resources

Updated 2/10/2018

RAID data protection remains relevant, including erasure codes (EC) and local reconstruction codes (LRC) among other technologies. If RAID were really not relevant anymore (e.g. actually dead), why do some people spend so much time trying to convince others that it is dead, or to use a different RAID level, enhanced RAID or beyond-RAID related advanced approaches?

When you hear RAID, what comes to mind?

A legacy monolithic storage system that supports narrow 4, 5 or 6 drive wide stripe sets, or a modern system supporting dozens of drives in a RAID group with different options?

RAID means many things, likewise there are different implementations (hardware, software, systems, adapters, operating systems) with various functionality, some better than others.

For example, which of the items in the following figure come to mind, or perhaps are new to your RAID vocabulary?

RAID questions

There are many variations of RAID storage, some for the enterprise, some for SMB, SOHO or consumer. Some have better performance than others; some have poor performance, for example causing extra writes that lead to the perception that all parity based RAID does extra writes (some actually do write gathering and optimization).

Some hardware and software implementations use WBC (write back cache), mirrored or battery backed (BBU), along with the ability to group writes together in memory (cache) to do full stripe writes. The result can be fewer back-end writes compared to other systems. Hence, not all RAID implementations in either hardware or software are the same. Likewise, just because a RAID definition shows a particular theoretical implementation approach does not mean all vendors have implemented it in that way.
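
The arithmetic behind why write gathering helps parity RAID is worth seeing once. A classic RAID 5 small (read-modify-write) update costs four back-end I/Os (read old data, read old parity, write new data, write new parity), while a gathered full-stripe write needs one write per data drive plus one parity write. A minimal sketch (Python, illustrative only):

# RAID 5 small-write penalty vs. a gathered full-stripe write
data_drives = 4                        # e.g. a 4+1 RAID 5 set
front_end_writes = data_drives         # enough small writes to fill one stripe

small_write_ios = front_end_writes * 4    # read-modify-write: 4 I/Os per update
full_stripe_ios = data_drives + 1         # data writes plus one parity write

print(f"{front_end_writes} individual small writes: {small_write_ios} back-end I/Os")
print(f"one gathered full-stripe write: {full_stripe_ios} back-end I/Os")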

RAID is not a replacement for backup rather part of an overall approach to providing data availability and accessibility.

data protection and durability

What’s the best RAID level? The one that meets YOUR needs

There are different RAID levels and implementations (hardware, software, controller, storage system, operating system, adapter among others) for various environments (enterprise, SME, SMB, SOHO, consumer) supporting primary, secondary, tertiary (backup/data protection, archiving).

General RAID comparisons

Thus one size or approach does not fit all solutions; likewise RAID rules of thumb or guides need context. Context means that a RAID rule or guide for consumer or SOHO or SMB might be different for enterprise and vice versa, not to mention varying with the type of storage system, number of drives, drive type and capacity among other factors.

General basic RAID comparisons

Thus the best RAID level is the one that meets your specific needs in your environment. What is best for one environment and application may be different from what is applicable to your needs.

Key points and RAID considerations include:

· Not all RAID implementations are the same, some are very much alive and evolving while others are in need of a rest or rewrite. So it is not the technology or techniques that are often the problem, rather how it is implemented and then deployed.

· It may not be RAID that is dead, rather the solution that uses it, hence if you think a particular storage system, appliance, product or software is old and dead along with its RAID implementation, then just say that product or vendors solution is dead.

· RAID can be implemented in hardware controllers, adapters or storage systems and appliances as well as via software and those have different features, capabilities or constraints.

· Long or slow drive rebuilds are a reality with larger disk drives and parity-based approaches; however, you have options on how to balance performance, availability, capacity, and economics (see the simple estimator sketch after this list).

· RAID can be single, dual or multiple parity or mirroring-based.

· Erasure and other coding schemes leverage parity schemes and guess what umbrella parity schemes fall under.

· RAID may not be cool, sexy or a fun topic and technology to talk about, however many trendy tools, solutions and services actually use some form or variation of RAID as part of their basic building blocks. This is an example of using new and old things in new ways to help each other do more without increasing complexity.

·  Even if you are not a fan of RAID and think it is old and dead, at least take a few minutes to learn more about what it is that you do not like to update your dead FUD.
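
On the rebuild point above, a simple estimator sketch (Python) shows why larger drives mean longer rebuilds; the capacity and sustained rebuild rate below are assumptions for illustration, as real rates vary with load, RAID level and implementation.

# Rough rebuild time estimate (assumed values, real rates vary widely)
capacity_gb = 2000       # e.g. a 2TB Enterprise Capacity HDD
rebuild_mbps = 50        # assumed sustained rebuild rate under load

rebuild_hours = capacity_gb * 1000 / rebuild_mbps / 3600
print(f"~{rebuild_hours:.1f} hours to rebuild {capacity_gb}GB at {rebuild_mbps}MBps")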

Wait, Isn’t RAID dead?

There is some dead marketing that paints a broad picture that RAID is dead to prop up something new, which in some cases may be a derivative variation of parity RAID.

Data dispersal and durability

RAID continues to evolve with rapid rebuilds for some systems

Otoh, there are some specific products, technologies and implementations that may be end of life or actually dead. Likewise what might be dead, dying or simply not in vogue are specific RAID implementations or packaging. Certainly there is a lot of buzz around object storage, cloud storage, forward error correction (FEC) and erasure coding, including messages of how they do away with RAID. The catch is that some object storage solutions are overlayed on top of lower level file systems that do things such as RAID 6, granted they are out of sight, out of mind.

General RAID parity and erasure code/FEC comparisons

Then there are advanced parity protection schemes, which include FEC and erasure codes, that while not your traditional RAID levels, have characteristics including chunking or sharding data and spreading it out over multiple devices with multiple parity (or derivatives of parity) protection.
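
At the heart of those parity schemes is simple XOR math. The following minimal sketch (Python) shows the basic idea that single-parity RAID and, in generalized form, erasure codes build on: compute a parity chunk across data chunks, lose any one chunk, and rebuild it from the survivors.

# XOR parity: protect 4 data chunks with 1 parity chunk, then rebuild one
import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

chunks = [os.urandom(16) for _ in range(4)]    # data spread over 4 devices

parity = chunks[0]
for c in chunks[1:]:
    parity = xor(parity, c)                    # parity chunk on a 5th device

lost = 2                                       # pretend device 2 failed
survivors = [c for i, c in enumerate(chunks) if i != lost] + [parity]

rebuilt = survivors[0]
for c in survivors[1:]:
    rebuilt = xor(rebuilt, c)

assert rebuilt == chunks[lost]                 # the lost chunk is recovered
print("lost chunk rebuilt from parity")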

Bottom line is that for some environments, different RAID levels may be more applicable and alive than for others.

Via BizTech – How to Turn Storage Networks into Better Performers

  • Maintain Situational Awareness
  • Design for Performance and Availability
  • Determine Networked Server and Storage Patterns
  • Make Use of Applicable Technologies and Techniques

If RAID is alive, what to do with it?

If you are new to RAID, learn more about the past, present and future, keeping context in mind. Keeping context in mind means that there are different RAID levels and implementations for various environments. Not all RAID 0, 1, 1/0, 10, 2, 3, 4, 5, 6 or other variations (past, present and emerging) are the same for consumer vs. SOHO vs. SMB vs. SME vs. Enterprise, nor are the usage cases. Some need performance for reads, others for writes, some high capacity with low performance, using hardware or software. RAID rules of thumb are ok and useful, however keep them in context of what you are doing as well as using.

What to do next?

Take some time to learn, ask questions including what to use when, where, why and how, as well as whether an approach or recommendation is applicable to your needs. Check out the following links to read some extra perspectives about RAID and keep in mind, what might apply to enterprise may not be relevant for consumer or SMB and vice versa.

Some advise needed on SSD’s and Raid (Via Spiceworks)
RAID 5 URE Rebuild Means The Sky Is Falling (Via BenchmarkReview)
Double drive failures in a RAID-10 configuration (Via SearchStorage)
Industry Trends and Perspectives: RAID Rebuild Rates (Via StorageIOblog)
RAID, IOPS and IO observations (Via StorageIOBlog)
RAID Relevance Revisited (Via StorageIOBlog)
HDDs Are Still Spinning (Rust Never Sleeps) (Via InfoStor)
When and Where to Use NAND Flash SSD for Virtual Servers (Via TheVirtualizationPractice)
What’s the best way to learn about RAID storage? (Via Spiceworks)
Design considerations for the host local FVP architecture (Via Frank Denneman)
Some basic RAID fundamentals and definitions (Via SearchStorage)
Can RAID extend nand flash SSD life? (Via StorageIOBlog)
I/O Performance Issues and Impacts on Time-Sensitive Applications (Via CMG)
The original RAID white paper (PDF) that while over 20 years old, it provides a basis, foundation and some history by Katz, Gibson, Patterson et al
Storage Interview Series (Via Infortrend)
Different RAID methods (Via RAID Recovery Guide)
A good RAID tutorial (Via TheGeekStuff)
Basics of RAID explained (Via ZDNet)
RAID and IOPs (Via VMware Communities)

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What This All Means

What is my favorite or preferred RAID level?

That depends; for some things it’s RAID 1, for others RAID 10, yet for others RAID 4, 5, 6 or DP, and yet other situations could be a fit for RAID 0 or erasure codes and FEC. Instead of being focused on just one or two RAID levels as the solution for different problems, I prefer to look at the environment (consumer, SOHO, small or large SMB, SME, enterprise), type of usage (primary, secondary or data protection), performance characteristics, reads, writes, and the type and number of drives among other factors. What might be a fit for one environment may not be a fit for others; thus my preferred RAID level, along with where it is implemented, is the one that meets the given situation. However also keep in mind tying RAID into part of an overall data protection strategy; remember, RAID is not a replacement for backup.

What this all means

Like other technologies that have been declared dead for years or decades, aka the Zombie technologies (e.g. dead yet still alive) RAID continues to be used while the technologies evolves. There are specific products, implementations or even RAID levels that have faded away, or are declining in some environments, yet alive in others. RAID and its variations are still alive, however how it is used or deployed in conjunction with other technologies also is evolving.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

CompTIA needs input for their Storage+ certification, can you help?

The CompTIA folks are looking for some comments and feedback from those who are involved with data storage in various ways as part of planning for their upcoming enhancements to the Storage+ certification testing.

As a point of disclosure, I am a member of the CompTIA Storage+ certification advisory committee (CAC), however I don’t get paid or receive any other type of remuneration for contributing my time to give them feedback and guidance, other than a thanks and Atta boy for giving back and playing it forward to help others in the IT community, similar to what my predecessors did.

I have been asked to pass this along to others (e.g. you or who ever forwards it on to you).

Please take a few moments and feel free to share with others this link here to the survey for CompTIA Storage+.

What they are looking for is to validate the exam blueprint generated from a recent Job Task Analysis (JTA) process.

In other words, does the certification exam show real-world relevance to what you and your associates may be doing involved with data storage.

This is opposed to being aligned with those whose job it is to create test questions and who may not understand what it is you, the IT pro involved with storage, do or do not do.

If you have ever taken a certification exam and scratched your head or wondered why some questions that seem to lack real-world relevance were included, while ones based on practical on-the-job experience were missing, here’s your chance to give feedback.

Note that you will not be rewarded with an Amex or Amazon gift card, Starbucks or Dunkin Donuts certificates, free software download or some other incentive to play and win, however if you take the survey let me know and I will be sure to tweet you an Atta boy or Atta girl! However they are giving away a free T-Shirt to one of every 10 survey takers.

Btw, if you really need something for free, send me a note (I’m not that difficult to find) as I have some free copies of Resilient Storage Networking (RSN): Designing Flexible Scalable Data Infrastructures (Elsevier); you simply pay shipping and handling. RSN can be used to help prepare you for various storage testing as well as other day-to-day activities.

CompTIA is looking for survey takers who have some hands-on experience or involvement with data storage (e.g. if you can spell SAN, NAS, Disk or SSD and work with them hands-on then you are a candidate ;).

Welcome to the CompTIA Storage+ Certification Job Task Analysis (JTA) Survey

  • Your input will help CompTIA evaluate which test objectives are most important to include in the CompTIA Storage+ Certification Exam
  • Your responses are completely confidential.
  • The results will only be viewed in the aggregate.
  • Here is what (and whom) CompTIA is looking for feedback from:

  • Has at least 12 to 18 months of experience with storage-related technologies.
  • Makes recommendations and decisions regarding storage configuration.
  • Facilitates data security and data integrity.
  • Supports a multiplatform and multiprotocol storage environment with little assistance.
  • Has basic knowledge of cloud technologies and object storage concepts.
  • As a small token of CompTIA appreciation for your participation, they will provide an official CompTIA T-shirt to every tenth (1 of every 10) person who completes this survey. Go here for official rules.

    Click here to complete the CompTIA Storage+ survey

    Contact CompTIA with any survey issues, research@comptia.org

    What say you, take a few minutes like I did and give some feedback, you will not be on the hook for anything, and if you do get spammed by the CompTIA folks, let me know and I in turn will spam them back for spamming you as well as me.

    Ok, nuff said

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    DataDynamics StorageX 7.0 file and data management migration software

    Some of you may recall back in 2006 (here and here) when Brocade bought a file management storage startup called NuView whose product was StorageX, and then in 2009 issued end of life (EOL) notice letters that the solution was being discontinued.

    Fast forward to 2013 and there is a new storage startup (DataDynamics) with an existing product, just updated and re-released, called StorageX 7.0.

    Software Defined File Management – SDFM?

    Granted from an industry buzz focused adoption perspective you may not have heard of DataDynamics or perhaps even StorageX. However many other customers around the world from different industry sectors have as well as are using the solution.

    The current industry buzz is around software defined data centers (SDDC), which has led to software defined networking (SDN), software defined storage (SDS), and other software defined marketing (SDM) terms, not to mention Valueware. So for those who like software defined marketing or software defined buzzwords, you can think of StorageX as software defined file management (SDFM), however don’t ask or blame them about using it as I just thought of it for them ;).

    This is an example of industry adoption traction (what is being talked about) vs. industry deployment and customer adoption (what is actually in use on a revenue basis) in that DataDynamics is not a well-known company yet, however they have what many of the high-flying startups with industry adoption don’t have which is an installed base with revenue customers that also now have a new version 7.0 product to deploy.

    StorageX 7.0 enabling intelligent file and data migration management

    Thus, a common theme is adding management, including automated data movement and migration, to bring structure to unstructured NAS file data. More than a data mover or storage migration tool, Data Dynamics StorageX is a software platform for adding storage management structure around unstructured local and distributed NAS file data. This includes heterogeneous vendor support across different storage systems, protocols and tools including Windows CIFS and Unix/Linux NFS.


    A few months back prior to its release, I had an opportunity to test drive StorageX 7.0 and have included some of my comments in this industry trends perspective technology solution brief (PDF). This solution brief titled Data Dynamics StorageX 7.0 Intelligent Policy Based File Data Migration is a free download with no registration required (as are others found here), however per our disclosure policy to give transparency, DataDynamics has been a StorageIO client.

    If you have a need for gaining insight and management control around your unstructured file data to support migrations for upgrades, technology refresh, archiving or tiering across different vendors including EMC and NetApp, check out DataDynamics StorageX 7.0, take it for a test drive like I did and tell them StorageIO sent you.

    Ok, nuff said,

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Care to help Coraid with a Storage I/O Content Conversation?

    Over the past week or so I have had many email conversations with the Coraid marketing/public relations (PR) folks who want to share some of their unique or custom content with you.

    Normally I (aka @StorageIO) do not accept unsolicited (placed) content (particularly product pitches/placements) from vendors or their VARs, PR, or surrogates including third or fourth party placement firms. Granted StorageIOblog.com does have site sponsors; per our policies that is all that those are, advertisements with no more or less influence than for others. StorageIO does do commissioned or sponsored custom content including white papers and solution briefs among other things, with applicable disclosures and retention of editorial tone and control.

    Who is Coraid and what do they do?

    However wanting to experiment with things, not to mention given Coraids persistence, let’s try something to see how it works.

Coraid, for those who are not aware, provides an alternative storage and I/O networking solution called ATA over Ethernet or AoE (here is a link to Coraid's analyst-supplied content page). AoE enables servers with the applicable software to access storage equipped with AoE technology (or fronted by an appropriately equipped appliance) using Ethernet as the interconnect and transport. On the low-end, AoE is an alternative to USB, Thunderbolt or direct attached SATA or SAS, along with switched or shared SAS (keep in mind SATA can plug into SAS, not vice versa).

In addition, AoE is an alternative to the industry standard iSCSI (the SCSI command set mapped onto IP), which can be found in various solutions including as a software stack. Another area where AoE is positioned by Coraid is as an alternative to Fibre Channel SCSI_FCP (FCP) and Fibre Channel over Ethernet (FCoE). Keep in mind that Coraid AoE is block based (granted, they have other solutions) as opposed to NAS (file) such as NFS, CIFS/SMB/Samba, pNFS or HDFS among others, and that it uses native Ethernet as opposed to being layered on top of IP the way iSCSI is.
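From a Linux host perspective, bringing up the initiator side of AoE is fairly simple; here is a minimal sketch (assuming the open source aoe kernel module and the aoetools package are installed; target addresses and device names will vary on your network):

# Load the AoE initiator module and discover AoE targets on the local Ethernet segment
modprobe aoe
aoe-discover

# List discovered targets (shelf.slot addresses such as e1.0)
aoe-stat

# AoE LUNs then appear as regular block devices, for example /dev/etherd/e1.0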

    Storage I/O trends

    So here is the experiment

Since Coraid wanted to get their unique content placed either by them or via others, let's see what happens in the comments section here at StorageIOblog.com. The ground rule of course is to keep it respectful and courteous: no bashing or disparaging comments about others (vendors, products, technology).

Thus the experiment is simple: let's see how the conversation evolves around the caveats, benefits, tradeoffs and experiences of those who have used or looked into the solution (pro or con), and the reasons for a particular opinion. If you have a perspective or opinion, no worries, however put it in context, including whether you are a Coraid employee, VAR, reseller or surrogate, and likewise for those with other views (state who you are, your affiliation and other disclosures). Likewise, if providing or supplying links to any content (white papers, videos, webinars), including via third parties, provide applicable disclosures (e.g. whether it was sponsored and by whom).

    Disclosure

    While I have mentioned or provided perspectives about them via different venues (online, print and in person) in the past, Coraid has never been a StorageIO client. Likewise this is not an endorsement for or against Coraid and their AoE or other solutions, simply an industry trends perspective.

    Ok, nuff said (for now).

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Can we get a side of context with them IOPS server storage metrics?

    Can we get a side of context with them server storage metrics?

What's the best server storage I/O network metric or benchmark? It depends, as there needs to be some context with them IOPS and other server storage I/O metrics that matter.

    There is an old saying that the best I/O (Input/Output) is the one that you do not have to do.

    In the meantime, let’s get a side of some context with them IOPS from vendors, marketers and their pundits who are tossing them around for server, storage and IO metrics that matter.

    Expanding the conversation, the need for more context

The good news is that people are beginning to discuss storage beyond space capacity and cost per GByte, TByte or PByte for DRAM and NAND flash Solid State Devices (SSD), Hard Disk Drives (HDD), along with Hybrid HDD (HHDD) and Solid State Hybrid Drive (SSHD) based solutions. This applies to traditional enterprise or SMB IT data centers with physical, virtual or cloud based infrastructures.

    hdd and ssd iops

This is good because it expands the conversation beyond just cost for space capacity into other aspects, including performance (IOPS, latency, bandwidth) for various workload scenarios, along with availability, energy effectiveness and management.

    Adding a side of context

The catch is that IOPS, while part of the equation, are just one aspect of performance, and by themselves, without context, may have little meaning if not be misleading in some situations.

Granted it can be entertaining, fun to talk about, or simply make good press copy to cite a million IOPS. IOPS vary in size depending on the type of work being done, not to mention reads or writes, random and sequential, which also have a bearing on data throughput or bandwidth (MBytes per second) along with response time. Not to mention block, file, object or blob, as well as table.

However, are those million IOPS applicable to your environment or needs?

Likewise, what do those million or more IOPS represent about the type of work being done? For example, are they small 64 byte or large 64 KByte sized, random or sequential, cached reads or lazy writes (deferred or buffered), on a SSD or HDD?

    How about the response time or latency for achieving them IOPS?

    In other words, what is the context of those metrics and why do they matter?

    storage i/o iops
Click on image to view more metrics that matter including IOPS for HDDs and SSDs

Metrics that matter give context: for example, IO sizes closer to what your real needs are, reads and writes, mixed workloads, random or sequential, sustained or bursty; in other words, real world reflective.

As with any benchmark, take them with a grain (or more) of salt; the key is to use them as an indicator and then align them to your needs. The tool or technology should work for you, not the other way around.

    Here are some examples of context that can be added to help make IOP’s and other metrics matter:

    • What is the IOP size, are they 512 byte (or smaller) vs. 4K bytes (or larger)?
    • Are they reads, writes, random, sequential or mixed and what percentage?
    • How was the storage configured including RAID, replication, erasure or dispersal codes?
    • Then there is the latency or response time and IO queue depths for the given number of IOPS.
    • Let us not forget if the storage systems (and servers) were busy with other work or not.
• If there is a cost per IOP, is that at list price or discounted (hint: if discounted, start negotiations from there)?
    • What was the number of threads or workers, along with how many servers?
    • What tool was used, its configuration, as well as raw or cooked (aka file system) IO?
    • Was the IOP’s number with one worker or multiple workers on a single or multiple servers?
    • Did the IOP’s number come from a single storage system or total of multiple systems?
• Fast storage needs fast servers and networks, what was their configuration?
    • Was the performance a short burst, or long sustained period?
    • What was the size of the test data used; did it all fit into cache?
• Were short-stroking (for IOPS) or long-stroking (for bandwidth) techniques used?
• Were data footprint reduction (DFR) techniques (thin provisioning, compression or dedupe) used?
• Was write data committed synchronously to storage, or deferred (aka lazy writes)?

The above are just a sampling and not all may be relevant to your particular needs, however they help to put IOPS into more context. Another consideration around IOPS is the configuration of the environment: are the numbers from an actual running application using some measurement tool, or are they generated from a workload tool such as Iometer, Iorate or Vdbench among others?
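As a simple illustration, here is a hedged example of how several of those context items (IO size, read/write mix, queue depth, workers, run time) can be stated explicitly using fio, one of many workload generators (the device path is a placeholder, and writes to a raw device are destructive, so point it at a scratch device):

# 75% read / 25% write random workload, 32K IO size, queue depth 16, 4 workers, 10 minutes
fio --name=context_example --filename=/dev/sdX --direct=1 --ioengine=libaio \
 --rw=randrw --rwmixread=75 --bs=32k --iodepth=16 --numjobs=4 \
 --runtime=600 --time_based --group_reporting

The resulting report then carries its own context (IO size, mix, concurrency and duration) rather than being a naked IOPS number.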

    Sure, there are more contexts and information that would be interesting as well, however learning to walk before running will help prevent falling down.

    Storage I/O trends

    Does size or age of vendors make a difference when it comes to context?

    Some vendors are doing a good job of going for out of this world record-setting marketing hero numbers.

Meanwhile other vendors are doing a good job of adding context to their IOPS, response time, bandwidth and other metrics that matter. There is a mix of startups and established vendors that give context with their IOPS and other metrics; likewise, size or age does not seem to matter for those who lack context.

    Some vendors may not offer metrics or information publicly, so fine, go under NDA to learn more and see if the results are applicable to your environments.

    Likewise, if they do not want to provide the context, then ask some tough yet fair questions to decide if their solution is applicable for your needs.

    Storage I/O trends

    Where To Learn More

    View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

What this means is let us start putting out, and asking for, metrics that matter, such as IOPS with context.

If you have a great IOPS metric and you want it to matter, then include some context such as IO size (e.g. 4K, 8K, 16K, 32K, etc.), percentage of reads vs. writes, latency or response time, and random or sequential.

IMHO the most interesting or applicable metrics that matter are those relevant to your environment and application. For example, if your main application that needs SSD does about 75% reads (random) and 25% writes (sequential) with an average size of 32K, then while fun to hear about, how relevant is a million 64 byte read IOPS? Likewise, when looking at IOPS, pay attention to the latency, particularly if SSD or performance is your main concern.

    Get in the habit of asking or telling vendors or their surrogates to provide some context with them metrics if you want them to matter.

So how about some context around them IOPS (or latency and bandwidth or availability for that matter)?

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Web chat Thur May 30th: Hot Storage Trends for 2013 (and beyond)

    Storage I/O trends

Join me on Thursday May 30, 2013 at Noon ET (9AM PT) for a live web chat at the 21st Century IT (21cit) site (click here to register, sign-up, or view earlier posts). This will be an online, interactive web chat format conversation, so if you are not able to attend, you can visit at your convenience to view and post your questions along with comments. I have done several of these web chats with 21cit as well as other venues; they are a lot of fun and engaging (time flies by fast).

For those not familiar, 21cIT is part of the DeusM/UBM family of sites, including Internet Evolution, SMB Authority, and Enterprise Efficiency among others, that I do article posts, videos and live chats for.


    Sponsored by NetApp

I like these types of sites in that while they have a sponsor, the content is generally kept separate between that of the editors and contributors like myself and the vendor supplied material. In other words, I coordinate with the site editors on what topics I feel like writing (or doing videos) about that align with the given site's focus and themes, as opposed to following an advertorial calendar script.

During this industry trends perspective web chat, one of the topics and themes planned for discussion is software defined storage (SDS). View a recent video blog post I did here about SDS. In addition to SDS, Solid State Devices (SSD) including nand flash, cloud, virtualization, object, backup and data protection, performance, and management tools among others are topics that will be put out on the virtual discussion table.

    Storage I/O trends

    Following are some examples of recent and earlier industry trends perspectives posts that I have done over at 21cit:

    Video: And Now, Software-Defined Storage!
    There are many different views on what is or is not “software-defined” with products, protocols, preferences and even press releases. Check out the video and comments here.

    Big Data and the Boston Marathon Investigation
    How the human face of big-data will help investigators piece together all the evidence in the Boston bombing tragedy and bring those responsible to justice. Check out the post and comments here.

    Don’t Use New Technologies in Old Ways
    You can add new technologies to your data center infrastructure, but you won’t get the full benefit unless you update your approach with people, processes, and policies. Check out the post and comments here.

    Don’t Let Clouds Scare You, Be Prepared
    The idea of moving to cloud computing and cloud services can be scary, but it doesn’t have to be so if you prepare as you would for implementing any other IT tool. Check out the post and comments here.

    Storage and IO trends for 2013 (& Beyond)
    Efficiency, new media, data protection, and management are some of the keywords for the storage sector in 2013. Check out these and other trends, predictions along with comments here.

    SSD and Real Estate: Location, Location, Location
You might be surprised how many similarities there are between buying real estate and buying SSDs.
Location matters, and it's not if, rather when, where, why and how you will be using SSD including nand flash in the future; read more and view comments here.

    Everything Is Not Equal in the Data center, Part 3
    Here are steps you can take to give the right type of backup and protection to data and solutions, depending on the risks and scenarios they face. The result? Savings and efficiencies. Read more and view comments here.

    Everything Is Not Equal in the Data center, Part 2
    Your data center’s operations can be affected at various levels, by multiple factors, in a number of degrees. And, therefore, each scenario requires different responses. Read more and view comments here.

    Everything Is Not Equal in the Data center, Part 1
It pays to check your data center. Different components need different levels of security, storage, and availability. Read more and view comments here.

    Data Protection Modernizing: More Than Buzzword Bingo
    IT professionals and solution providers should put technologies such as disk based backup, dedupe, cloud, and data protection management tools as assets and resources to make sure they receive necessary funding and buy in. Read more and view comments here.

    Don’t Take Your Server & Storage IO Pathing Software for Granted
    Path managers are valuable resources. They will become even more useful as companies continue to carry out cloud and virtualization solutions. Read more and view comments here.

    SSD Is in Your Future: Where, When & With What Are the Questions
    During EMC World 2012, EMC (as have other vendors) made many announcements around flash solid-state devices (SSDs), underscoring the importance of SSDs to organizations future storage needs. Read more here about why SSD is in your future along with view comments.

    Changing Life cycles and Data Footprint Reduction (DFR), Part 2
    In the second part of this series, the ABCDs (Archive, Backup modernize, Compression, Dedupe and data management, storage tiering) of data footprint reduction, as well as SLOs, RTOs, and RPOs are discussed. Read more and view comments here.

    Changing Life cycles and Data Footprint Reduction (DFR), Part 1
    Web 2.0 and related data needs to stay online and readily accessible, creating storage challenges for many organizations that want to cut their data footprint. Read more and view comments here.

    No Such Thing as an Information Recession
    Data, even older information, must be protected and made accessible cost-effectively. Not to mention that people and data are living longer as well as getting larger. Read more and view comments here.

    Storage I/O trends

These real-time, industry trends perspective interactive chats at 21cit are open forum format (however, be polite and civil) as well as free of vendor sales or marketing pitches. If you have specific questions you'd like to ask or points of view to express, click here and post them in the chat room at any time (before, during or after).

    Mark your calendar for this event live Thursday, May 30, at noon ET or visit after the fact.

    Ok, nuff said (for now)

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Part II: How many IOPS can a HDD HHDD SSD do with VMware?

    How many IOPS can a HDD HHDD SSD do with VMware?

    server storage data infrastructure i/o iop hdd ssd trends

    Updated 2/10/2018

This is the second post of a two-part series looking at storage performance, specifically in the context of drive or device (e.g. medium) characteristics, continuing How many IOPS can a HDD HHDD SSD do with VMware. In the first post the focus was on putting some context around drive or device performance, with this second part looking at some workload characteristics (e.g. benchmarks).

A common question is: how many IOPS (IO Operations Per Second) can a storage device or system do?

    The answer is or should be it depends.

    Here are some examples to give you some more insight.

For example, the following shows how IOPS vary by changing the percentage of reads, writes, random and sequential for a 4K (4,096 bytes or 4 KBytes) IO size, with each test step running for 4 minutes.

IO Size for test | Workload Pattern of test | Avg. Resp (R+W) ms | Avg. IOPS (R+W) | Bandwidth KB/sec (R+W)
4KB | 100% Seq 100% Read | 0.0 | 29,736 | 118,944
4KB | 60% Seq 100% Read | 4.2 | 236 | 947
4KB | 30% Seq 100% Read | 7.1 | 140 | 563
4KB | 0% Seq 100% Read | 10.0 | 100 | 400
4KB | 100% Seq 60% Read | 3.4 | 293 | 1,174
4KB | 60% Seq 60% Read | 7.2 | 138 | 554
4KB | 30% Seq 60% Read | 9.1 | 109 | 439
4KB | 0% Seq 60% Read | 10.9 | 91 | 366
4KB | 100% Seq 30% Read | 5.9 | 168 | 675
4KB | 60% Seq 30% Read | 9.1 | 109 | 439
4KB | 30% Seq 30% Read | 10.7 | 93 | 373
4KB | 0% Seq 30% Read | 11.5 | 86 | 346
4KB | 100% Seq 0% Read | 8.4 | 118 | 474
4KB | 60% Seq 0% Read | 13.0 | 76 | 307
4KB | 30% Seq 0% Read | 11.6 | 86 | 344
4KB | 0% Seq 0% Read | 12.1 | 82 | 330

    Dell/Western Digital (WD) 1TB 7200 RPM SATA HDD (Raw IO) thread count 1 4K IO size

In the above example the drive is a 1TB 7200 RPM 3.5 inch Dell (Western Digital) 3Gb SATA device doing raw (non file system) IO. Note the high IOPS rate with 100 percent sequential reads and a small IO size, which might be a result of locality of reference due to drive level cache or buffering.

Some drives have larger buffers than others, from a couple up to 16MB (or more) of DRAM that can be used for read-ahead caching. Note that this level of cache is independent of a storage system, RAID adapter or controller, or other forms and levels of buffering.
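For example, one quick way to see what a given drive reports for its onboard cache from a Linux host is a quick identity query (a sketch assuming the hdparm utility is installed; the device path is a placeholder and not every drive reports this field):

# Query the drive identity data and show the reported cache/buffer size
sudo hdparm -I /dev/sdX | grep -i buffer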

    Does this mean you can expect or plan on getting those levels of performance?

    I would not make that assumption, and thus this serves as an example of using metrics like these in the proper context.

    Building off of the previous example, the following is using the same drive however with a 16K IO size.

IO Size for test | Workload Pattern of test | Avg. Resp (R+W) ms | Avg. IOPS (R+W) | Bandwidth KB/sec (R+W)
16KB | 100% Seq 100% Read | 0.1 | 7,658 | 122,537
16KB | 60% Seq 100% Read | 4.7 | 210 | 3,370
16KB | 30% Seq 100% Read | 7.7 | 130 | 2,080
16KB | 0% Seq 100% Read | 10.1 | 98 | 1,580
16KB | 100% Seq 60% Read | 3.5 | 282 | 4,522
16KB | 60% Seq 60% Read | 7.7 | 130 | 2,090
16KB | 30% Seq 60% Read | 9.3 | 107 | 1,715
16KB | 0% Seq 60% Read | 11.1 | 90 | 1,443
16KB | 100% Seq 30% Read | 6.0 | 165 | 2,644
16KB | 60% Seq 30% Read | 9.2 | 109 | 1,745
16KB | 30% Seq 30% Read | 11.0 | 90 | 1,450
16KB | 0% Seq 30% Read | 11.7 | 85 | 1,364
16KB | 100% Seq 0% Read | 8.5 | 117 | 1,874
16KB | 60% Seq 0% Read | 10.9 | 92 | 1,472
16KB | 30% Seq 0% Read | 11.8 | 84 | 1,353
16KB | 0% Seq 0% Read | 12.2 | 81 | 1,310

    Dell/Western Digital (WD) 1TB 7200 RPM SATA HDD (Raw IO) thread count 1 16K IO size

    The previous two examples are excerpts of a series of workload simulation tests (ok, you can call them benchmarks) that I have done to collect information, as well as try some different things out.
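If you want to approximate a similar sweep yourself, the following is a minimal sketch of a Vdbench raw device script (the device path is a placeholder and the workload will write to it, so use a scratch device; note that in Vdbench seekpct=0 means sequential, so the values below mirror the 100, 60, 30 and 0 percent sequential steps above):

VdbenchRawSweep.txt
# Sample script sweeping read percentage and random vs. sequential mix in 4 minute steps
sd=sd1,lun=/dev/sdb,openflags=o_direct
wd=wd1,sd=sd1
rd=rd1,wd=wd1,iorate=max,elapsed=240,interval=30,forxfersize=(4k,16k),forrdpct=(100,60,30,0),forseekpct=(0,40,70,100)

vdbench -f VdbenchRawSweep.txt -o Results_RawSweep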

The following is an example of the summary for each test output, which includes the IO size, workload pattern (reads, writes, random, sequential), duration for each workload step, totals for reads and writes, along with averages including IOPS, bandwidth and latency or response time.

    disk iops

Want to see more numbers, speeds and feeds? Check out the following table, which will be updated with extra results as they become available.

Device | Vendor | Make | Model | Form Factor | Capacity | Interface | RPM Speed
HDD | HGST | Desktop | HK250-160 | 2.5 | 160GB | SATA | 5.4K
HDD | Seagate | Mobile | ST2000LM003 | 2.5 | 2TB | SATA | 5.4K
HDD | Fujitsu | Desktop | MHWZ160BH | 2.5 | 160GB | SATA | 7.2K
HDD | Seagate | Momentus | ST9160823AS | 2.5 | 160GB | SATA | 7.2K
HDD | Seagate | MomentusXT | ST95005620AS | 2.5 | 500GB | SATA | 7.2K(1)
HDD | Seagate | Barracuda | ST3500320AS | 3.5 | 500GB | SATA | 7.2K
HDD | WD/Dell | Enterprise | WD1003FBYX | 3.5 | 1TB | SATA | 7.2K
HDD | Seagate | Barracuda | ST3000DM01 | 3.5 | 3TB | SATA | 7.2K
HDD | Seagate | Desktop | ST4000DM000 | 3.5 | 4TB | SATA | HDD
HDD | Seagate | Capacity | ST6000NM00 | 3.5 | 6TB | SATA | HDD
HDD | Seagate | Capacity | ST6000NM00 | 3.5 | 6TB | 12GSAS | HDD
HDD | Seagate | Savio 10K.3 | ST9300603SS | 2.5 | 300GB | SAS | 10K
HDD | Seagate | Cheetah | ST3146855SS | 3.5 | 146GB | SAS | 15K
HDD | Seagate | Savio 15K.2 | ST9146852SS | 2.5 | 146GB | SAS | 15K
HDD | Seagate | Ent. 15K | ST600MP0003 | 2.5 | 600GB | SAS | 15K
SSHD | Seagate | Ent. Turbo | ST600MX0004 | 2.5 | 600GB | SAS | SSHD
SSD | Samsung | 840 Pro | MZ-7PD256 | 2.5 | 256GB | SATA | SSD
SSD | Seagate | 600 SSD | ST480HM000 | 2.5 | 480GB | SATA | SSD
SSD | Seagate | 1200 SSD | ST400FM0073 | 2.5 | 400GB | 12GSAS | SSD

    Performance characteristics 1 worker (thread count) for RAW IO (non-file system)

Note: (1) The Seagate Momentus XT is a Hybrid Hard Disk Drive (HHDD) based on a 7.2K 2.5 inch HDD with SLC nand flash integrated as a read buffer in addition to the normal DRAM buffer. This model is an XT I (4GB SLC nand flash); an XT II (8GB SLC nand flash) may be added at some future time.

As a starting point, these results are raw IO, with file system based information to be added soon along with more devices. These results are for tests with one worker or thread count; other results, such as with 16 workers or thread counts, will be added to show how those differ.

The above results include all reads, all writes, a mix of reads and writes, along with all random, sequential and mixed for each IO size. IO sizes include 4K, 8K, 16K, 32K, 64K, 128K, 256K, 512K, 1024K and 2048K. As with any workload simulation, benchmark or comparison test, take these results with a grain of salt as your mileage can and will vary. For example, you will see some of what I consider very high IO rates with sequential reads even without file system buffering. These results might be due to locality of reference of IOs being resolved out of the drive's DRAM cache (read ahead), which varies in size across devices. Use the vendor model numbers in the table above to check the manufacturer's specs on drive DRAM and other attributes.

If you are used to seeing 4K or 8K and wonder why anybody would be interested in some of the larger sizes, take a look at big fast data or cloud and object storage. For some of those applications, 2048K may not seem all that big. Likewise, if you are used to the larger sizes, there are still applications doing smaller sizes. Sorry for those who like 512 byte or smaller IOs, as they are not included. Note that for all of these, unless indicated, a 512 byte standard sector or drive format is used as opposed to the emerging Advanced Format (AF) 4KB sector or block size. Watch for some more drive and device types to be added to the above, along with results for more workers or thread counts, along with file system and other scenarios.

    Using VMware as part of a Server, Storage and IO (aka StorageIO) test platform

    vmware vexpert

The above performance results were generated on Ubuntu 12.04 (since upgraded to 14.04), which was hosted on a purchased VMware vSphere 5.1 (since upgraded to 5.5U2) system with vCenter enabled (you can get the ESXi free version here). I also have VMware Workstation installed on some of my Windows-based laptops for doing preliminary testing of scripts and other activity prior to running them on the larger server-based VMware environment. Other VMware tools include vCenter Converter, vSphere Client and the CLI. Note that other guest virtual machines (VMs) were idle during the tests (e.g. other guest VMs were quiet). You may experience different results if you run Ubuntu native on a physical machine or with different adapters, processors and device configurations among many other variables (that was a disclaimer btw ;) ).

    Storage I/O trends

All of the devices (HDD, HHDD and SSDs, including those not shown or published yet) were Raw Device Mapped (RDM) to the Ubuntu VM, bypassing the VMware file system.

    Example of creating an RDM for local SAS or SATA direct attached device.

    vmkfstools -z /vmfs/devices/disks/naa.600605b0005f125018e923064cc17e7c /vmfs/volumes/dat1/RDM_ST1500Z110S6M5.vmdk

The above uses the drive's address (found by doing an ls -l /dev/disks via the VMware shell command line) to create a vmdk container stored in a datastore. Note that the RDM being created does not actually store data in the .vmdk; it is there for VMware management operations.
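For example (a sketch; the device identifiers shown will differ on your system):

# From the ESXi shell, list local devices and their naa identifiers
ls -l /vmfs/devices/disks

# Note that vmkfstools -z creates a physical compatibility (pass-through) RDM;
# vmkfstools -r would create a virtual compatibility RDM instead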

If you are not familiar with how to create a RDM of a local SAS or SATA device, check out this post to learn how. This is important to note in that while VMware was used as a platform to support the guest operating systems (e.g. Ubuntu or Windows), the real devices are not being mapped through or via VMware virtual drives.

    vmware iops

The above shows examples of RDM SAS and SATA devices along with other VMware devices and datastores. In the next figure is an example of a workload being run in the test environment.

    vmware iops

One of the advantages of using VMware (or another hypervisor) with RDMs is that I can quickly define via software commands where a device gets attached to different operating systems (e.g. the other aspect of software defined storage). This means that after a test run, I can simply shut down Ubuntu, remove the RDM device from that guest's settings, move the device just tested to a Windows guest if needed, and restart those VMs. All of that from wherever I happen to be working, without physically changing things or dealing with multi-boot or cabling issues.

    Where To Learn More

    View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

So how many IOPS can a device do?

    That depends, however have a look at the above information and results.

    Check back from time to time here to see what is new or has been added including more drives, devices and other related themes.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

How many I/O IOPS can flash SSD or HDD do?

How many I/O IOPS can flash SSD or HDD do with VMware?

    sddc data infrastructure Storage I/O ssd trends

    Updated 2/10/2018

A common question I run across is how many I/O IOPS a flash SSD or HDD storage device or system can do or give.

    The answer is or should be it depends.

This is the first of a two-part series looking at storage performance, specifically in the context of drive or device (e.g. medium) characteristics across HDD, HHDD and SSD that can be found in cloud, virtual and legacy environments. In this first part the focus is on putting some context around drive or device performance, with the second part looking at some workload characteristics (e.g. benchmarks).

What about cloud, tape summit resources, storage systems or appliances?

Let's leave those for a different discussion at another time.

    Getting started

Part of my interest in tools, metrics that matter, measurements, analysis and forecasting ties back to having been a server, storage and IO performance and capacity planning analyst when I worked in IT. Another aspect ties back to also having been a sys admin as well as a business applications developer when on the IT customer side of things. This was followed by switching over to the vendor world, involved with among other things competitive positioning, customer design configuration, validation, simulation and benchmarking of HDD and SSD based solutions (e.g. life before becoming an analyst and advisory consultant).

Btw, if you happen to be interested in learning more about server, storage and IO performance and capacity planning, check out my first book Resilient Storage Networks (Elsevier), which has a bit of information on it. There is also coverage of metrics and planning in my two other books, The Green and Virtual Data Center (CRC Press) and Cloud and Virtual Data Storage Networking (CRC Press). I have some copies of Resilient Storage Networks available at a special reader or viewer rate (essentially shipping and handling). If interested, drop me a note and I can fill you in on the details.

There are many rules of thumb (RUT) when it comes to metrics that matter such as IOPS; some are older, while others may be guessed at or measured in different ways. However, the answer is that it depends on many things, ranging from whether it is a standalone hard disk drive (HDD), Hybrid HDD (HHDD) or Solid State Device (SSD), to whether it is attached to a storage system, appliance or RAID adapter card, among others.

    Taking a step back, the big picture

    hdd image
    Various HDD, HHDD and SSD’s

    Server, storage and I/O performance and benchmark fundamentals

Even if just looking at a HDD, there are many variables, ranging from the rotational speed or Revolutions Per Minute (RPM) to the interface, including 1.5Gb, 3.0Gb, 6Gb or 12Gb SAS or SATA, or 4Gb Fibre Channel. Simply using a RUT or number based on RPM can cause issues, particularly with 2.5 vs. 3.5 inch or enterprise vs. desktop drives. For example, some current generation 10K 2.5 inch HDDs can deliver the same or better performance than an older generation 3.5 inch 15K. Other drive factors (see this link for HDD fundamentals) include physical size, such as 3.5 inch or 2.5 inch small form factor (SFF), enterprise, desktop or consumer class, and the amount of drive level cache (DRAM). Space capacity of a drive can also have an impact, such as whether all or just a portion of a large or small capacity device is used. Not to mention what the drive is attached to, ranging from an internal SAS or SATA drive bay, a USB port, an HBA or RAID adapter card, or a storage system.

    disk iops
    HDD fundamentals

How about benchmark and performance tricks for marketing or comparisons, including delayed, deferred or asynchronous writes vs. synchronous or actually committed data to devices? Let's not forget about short stroking (only using a portion of a drive for better IOPS) or even long stroking (to get better bandwidth leveraging spiral transfers), among others. An example of the former follows below.
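As an illustration, short stroking can be approximated with a workload generator by restricting the addressable range of a drive; here is a minimal Vdbench-style sketch (the device path and range are examples, not a recommendation):

VdbenchShortStroke.txt
# Confine random 4K reads to the first 20GB of the device to reduce average seek distance
sd=sd1,lun=/dev/sdb,size=20g,openflags=o_direct
wd=wd1,sd=sd1,xfersize=4k,rdpct=100,seekpct=100
rd=rd1,wd=wd1,iorate=max,elapsed=300,interval=30

vdbench -f VdbenchShortStroke.txt -o Results_ShortStroke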

Almost forgot, there are also thick, standard, thin and ultra thin drives in 2.5 and 3.5 inch form factors. What's the difference? The number of platters and read/write heads. Look at the following image showing various thickness 2.5 inch drives that have various numbers of platters to increase space capacity in a given density. Want to take a wild guess as to which one has the most space capacity in a given footprint? Also want to guess which type I use for removable disk based archives along with onsite disk based backup targets (complementing my offsite cloud backups)?

    types of disks
    Thick, thin and ultra thin devices

Beyond physical and configuration items, there are logical configuration items, including the type of workload: large or small IOPS, random, sequential, reads, writes or mixed (various random, sequential, read, write, large and small IO). Other considerations include file system or raw device, the number of workers or concurrent IO threads, and the size of the target storage space area, which determines the impact of any locality of reference or buffering. Some other items include how long the test or workload simulation ran for, and whether the device was new or worn in before use, among other items.

    Tools and the performance toolbox

Then there are the various tools for generating IOs or workloads along with recording metrics such as reads, writes, response time and other information. Some examples (a mix of free and for-fee) include Bonnie, Iometer, Iorate, IOzone, Vdbench, TPC, SPC, Microsoft ESRP, SPEC and netmist, Swifttest, Vmark, DVDstore and PCmark 7, among many others. Some are focused just on the storage system and IO path, while others are application specific, thus exercising servers, storage and IO paths.

    performance tools
    Server, storage and IO performance toolbox

Having used Iometer since the late 90s, I know it has its place and is popular given its ease of use. Iometer is also long in the tooth and has its limits, including not much if any new development; nevertheless, I have it in the toolbox. I also have Futuremark PCmark 7 (full version), which it turns out has some interesting abilities to do more than exercise an entire Windows PC. For example, PCmark can use a secondary drive for doing IO to.

PCmark can be handy for spinning up with VMware (or other tools) lots of virtual Windows systems pointing to a NAS or other shared storage device doing real world type activity. Something that could be handy for testing or stressing virtual desktop infrastructures (VDI) along with other storage systems, servers and solutions. I also have Vdbench among other tools in the toolbox, including Iorate, which was used to drive the workloads shown below.

What I look for in a tool is how extensible the scripting capabilities are for defining various workloads, along with the capabilities of the test engine. A nice GUI is handy, which makes Iometer popular, and yes, there are script capabilities with Iometer. That is also where Iometer is long in the tooth compared to some of the newer generation of tools that put more emphasis on extensibility vs. ease of use interfaces. This also assumes knowing what workloads to generate vs. simply kicking off some IOPS using default settings to see what happens.

Another handy kind of tool is one for recording what's going on with a running system, including IOs, reads, writes, bandwidth or transfers, random and sequential, among other things. This is where, when needed, I turn to something like HiMon from HyperIO; if you have not tried it, get in touch with Tom West over at HyperIO, tell him StorageIO sent you, and get a demo or trial. HiMon is what I used for doing start, stop and boot testing among other things, being able to see IOs at the Windows file system level (or below), including very early in the boot or shutdown phase.

Here is a link to some other things I did awhile back with HiMon to profile some Windows and VDI activity.

    What’s the best tool or benchmark or workload generator?

    The one that meets your needs, usually your applications or something as close as possible to it.

    disk iops
    Various 2.5 and 3.5 inch HDD, HHDD, SSD with different performance

    Where To Learn More

    View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

That depends; however, continue reading part II of this series to see some results for various types of drives and workloads.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Trick or treat and vendor fun games

In the spirit of the Halloween and zombie season, a couple of thoughts come to mind about vendor tricks and treats. This is an industry trends and perspectives post, part of an ongoing series looking at various technology and fun topics.

The first trick or treat game pertains to the blame game; you know, either when something breaks, or at the other extreme, before you have even made a decision to buy something. The trick or treat game for decision-making goes something like this.

    StorageIO industry trends cloud, virtualization and big data

Vendor “A” says you will succeed with their products or solution, while failure results with a solution from “B” when doing “X”. Otoh, vendor “B” claims that “X” will fail when using a solution from vendor “A”. In fact, you can pick what you want to substitute for “X”: perhaps VDI, PCIe, Big Data, Little Data, Backup, Archive, Analytics, Private Cloud, Public Cloud, Hybrid Cloud or eDiscovery, you name it.

This is not complicated math or a big data problem requiring a high-performance computing (HPC) platform. An HPC Zetta-Flop processing ability using 512 bit addressing of 9.9 (e.g. 1 nine) PetaBytes of battery-backed DRAM and an IO capability of 9.99999 (e.g. 5 9's) trillion 8 bit IOPS to do table pivots or Runge-Kutta numerical analysis, map reduce, SAS or other modeling with optional iProduct or Android interface is not needed.

image of StorageIO big data HPC cloud storage
    StorageIO images of touring Texas Advanced Computing (e.g. HPC) Center

Can you solve this equation? Hint: it does not need a PhD or any other advanced degree. Another hint: if you have ever been on any side of the technology product and services decision-making table, regardless of the costume you wore, you should know the answer.

Of course there is the question of whether “X” would fail regardless of “A” or “B”, let alone a “C”, “D” or “F”. In other words, it is not the solution, technology, vendor or provider, rather the problem, or perhaps even the lack thereof, that is the issue. Or is it a case where there is a solution from “A”, “B” or any other that is looking for a problem, and if it is the wrong problem, there can be a wrong solution and thus failure?

    StorageIO industry trends cloud, virtualization and big data

Another trick or treat game is vendor public relations (PR) or analyst relations (AR) people asking for one thing and delivering or asking for another. For example, some vendor, service provider, or their marketing AR and PR people or surrogates make contact wanting to tell of various success and failure stories. Of course, this is usually their success and somebody else's failure, or their victory over something or someone, which sometimes can be interesting. Of course, there are also the treats to get you to listen to the above, such as tempting you with a project if you meet with their subject, which may be a trick of a disappearing treat (e.g. magic, poof, it is gone after the discussion).

There is another AR and PR trick and treat where they offer, on behalf of the organization or client they represent, a perspective or exclusive insight on their competitor. Of course, the treat from their perspective is that they will generously expose all that is wrong with what a competitor is saying about their own (e.g. the competitor's) product.

    StorageIO industry trends cloud, virtualization and big data

Let me get this straight: I am not supposed to believe what somebody says about his or her own product; however, I am supposed to believe what a competitor says is wrong with the competition's product, and what is right with his or her own product.

Hmm, ok, so let me get this straight: a competitor, say “A”, wants to tell me that what somebody from “B” has told me is wrong, and I should schedule a visit with a truth squad member from “A” to get the record set straight about “B”?

    Does that mean then that I go to “B” for a rebuttal, as well as an update about “A” from “B”, assuming that what “A” has told me is also false about themselves, and perhaps about “B” or any other?

To be fair, depending on your level of trust and confidence in a vendor, their personnel or surrogates, you might tend to believe more from them vs. others, at least until you have been tricked after being given treats. There may be some who have been tricked, or who applied too many treats to present a story where what is behind the costume might be a bit scary.

    StorageIO industry trends cloud, virtualization and big data

Having been through enough of these, I candidly believe that sometimes “A” or “B” or any other party actually does believe that they have more or better info about their competitor, and that they can convince somebody about what their competitor is doing better than the competitor can. I also believe that there are people out there who will go to “A” or “B” and believe what they are told based on their preference, bias or interests.

When I hear from vendors, VARs, solution or service providers and others, it's interesting hearing point, counterpoint and so forth; however, if time is limited, I'm more interested in hearing from, say, “A” about themselves: what they are doing, where they are having success, where the challenges are, where they are going, and, if applicable, going into more detail under NDA.

    StorageIO industry trends cloud, virtualization and big data

Customer success stories are good; however, again, if you are interested in what works, what kind of works, or what does not work, chances are when looking for G2 vs. GQ, a non-scripted customer conversation or perspective of the good, the bad and the ugly is preferred, even if under NDA. Again, if time is limited, which it usually is, focus on what is being done with your solution and where it is going, and if compelled, send follow-up material that can of course include MUD and FUD about others if that is your preference.

Then there is the 30 minute briefing where, 21 minutes into the call, the vendor or solution provider is still talking about trends, customer pain points and what competitors are doing, with no sign of an announcement, update or news in sight.

Let's not forget about the trick where the vendor marketing or PR person reaches out and says that the CEO, CMO, CTO or some other CxO or Chief Jailable Officer (CJO) wants to talk with you. Part of the trick is when the CxO actually makes it to the briefing and is not ready, does not know why the call is occurring, or thinks that a request for an audience has been made with them for an interview or something else.

    StorageIO industry trends cloud, virtualization and big data

A treat is when, 3 to 4 minutes into a briefing, the vendor or solution provider has already framed up what they are doing and why. This means getting to what they are announcing or planning on doing, getting into a conversation to discuss it, and making good follow-up content and resources available.

    StorageIO industry trends cloud, virtualization and big data

Sometimes a treat is when a briefer goes on autopilot, nailing their script for 29 minutes of a 30 minute session, then uses the last minute to ask if there are any questions. The reason autopilot briefings can be a treat is that when they are going over what is in the slide deck, webex or press release, it affords an opportunity to get caught up on other things while they talk at you. Hmm, perhaps I need to consider playing some tricks in reward for those kinds of treats? ;)

    StorageIO industry trends cloud, virtualization and big data

    Do not be scared, not everybody is out to trick you with treats, and not all treats have tricks attached to them. Be prepared, figure out who is playing tricks with treats, and who has treats without tricks.

Oh, and as a former IT customer, vendor and analyst, one of my favorites is giving the contact information of my dogs to vendors who require registration on their websites for basic things such as data sheets. Another is supplying the contact information of competing vendors' sales reps to vendors who also require registration for basic data sheets or what should otherwise be generally available information, as opposed to more premium treats. Of course there are many more fun tricks; however, let's leave those alone for now.

Note: Zombie voting rules apply, which means vote early, vote often, and of course vote for those who cannot, including those who are dead (real or virtual).

    Where To Learn More

View additional related material via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

Watch out for tricks and treats, and have a safe and fun Zombie (aka Halloween) season. See you while out and about this fall, and don't forget to take part in the ongoing zombie technology poll. Oh, and be safe with trick or treat and vendor fun games.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.