Can we get a side of context with them IOPS and server storage metrics?

Updated 2/10/2018

What's the best server storage I/O network metric or benchmark? It depends, as there needs to be some context with them IOPS and other server storage I/O metrics that matter.

There is an old saying that the best I/O (Input/Output) is the one that you do not have to do.

In the meantime, let's get a side of context with them IOPS from the vendors, marketers and pundits who toss them around as server, storage and I/O metrics that matter.

Expanding the conversation, the need for more context

The good news is that people are beginning to discuss storage beyond space capacity and cost per GByte, TByte or PByte, for DRAM and NAND flash Solid State Devices (SSD), Hard Disk Drives (HDD), along with Hybrid HDD (HHDD) and Solid State Hybrid Drive (SSHD) based solutions. This applies to traditional enterprise or SMB IT data centers with physical, virtual or cloud based infrastructures.

This is good because it expands the conversation beyond just cost for space capacity into other aspects, including performance (IOPS, latency, bandwidth) for various workload scenarios, along with availability, energy effectiveness and management.

Adding a side of context

The catch is that IOPS, while part of the equation, are just one aspect of performance; by themselves, without context, they may have little meaning and can even be misleading in some situations.

Granted, a million IOPS can be entertaining, fun to talk about, or simply make good press copy. IOPS vary in size depending on the type of work being done, not to mention reads vs. writes and random vs. sequential access, all of which have a bearing on data throughput or bandwidth (MBytes per second) along with response time.

However, are those million IOPS applicable to your environment or needs?

Likewise, what do those million or more IOPS say about the type of work being done? For example, are they small 64 byte or large 64 Kbyte sized, random or sequential, cached reads or lazy writes (deferred or buffered), on an SSD or HDD?

How about the response time or latency for achieving them IOPS?

In other words, what is the context of those metrics and why do they matter?


Metrics that matter give context: for example, IO sizes closer to your real needs, reads and writes, mixed workloads, random or sequential, sustained or bursty; in other words, reflective of the real world.

As with any benchmark, take them with a grain (or more) of salt; the key is to use them as an indicator and then align them to your needs. The tool or technology should work for you, not the other way around.

Here are some examples of context that can be added to help make IOPS and other metrics matter:

  • What is the IO size: are they 512 bytes (or smaller) vs. 4 Kbytes (or larger)?
  • Are they reads, writes, random, sequential or mixed, and in what percentages?
  • How was the storage configured, including RAID, replication, erasure or dispersal codes?
  • Then there is the latency or response time and IO queue depth for the given number of IOPS.
  • Let us not forget if the storage systems (and servers) were busy with other work or not.
  • If there is a cost per IOP, is that at list price or discounted (hint: if discounted, start negotiations from there)?
  • What was the number of threads or workers, along with how many servers?
  • What tool was used, its configuration, as well as raw or cooked (aka file system) IO?
  • Was the IOPS number achieved with one worker or multiple workers on a single server or multiple servers?
  • Did the IOPS number come from a single storage system or the total of multiple systems?
  • Fast storage needs fast servers and networks; what was their configuration?
  • Was the performance a short burst, or a long sustained period?
  • What was the size of the test data used; did it all fit into cache?
  • Were short-stroking (for IOPS) or long-stroking (for bandwidth) techniques used?
  • Were data footprint reduction (DFR) techniques (thin provisioning, compression or dedupe) used?
  • Was write data committed synchronously to storage, or deferred (aka lazy writes)?

The above are just a sampling, and not all may be relevant to your particular needs; however, they help to put IOPS into more context. Another consideration is the configuration of the environment: did the metrics come from an actual running application using some measurement tool, or were they generated by a workload tool such as IOmeter, IOrate or VDbench, among others?
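
To make a couple of those context items concrete, here is a minimal sketch (the numbers are my own illustrative assumptions, not vendor results) of how IO size turns an IOPS claim into bandwidth, and how latency implies the number of IOs in flight via Little's Law.

```python
# Illustrative sketch: the rates, sizes and latency below are assumptions
# for demonstration, not measured or vendor-published figures.

def bandwidth_mb_s(iops: float, io_size_bytes: int) -> float:
    """Bandwidth implied by an IOPS rate at a given IO size."""
    return iops * io_size_bytes / 1_000_000  # MBytes per second

def ios_in_flight(iops: float, latency_ms: float) -> float:
    """Little's Law: average concurrency = arrival rate x response time."""
    return iops * (latency_ms / 1000.0)

# The same "1 million IOPS" headline at two different IO sizes:
for size in (64, 64 * 1024):  # 64 bytes vs. 64 KBytes
    print(f"{size:>6} byte IOs: {bandwidth_mb_s(1_000_000, size):>10,.1f} MB/s")

# Queue depth needed to sustain 1M IOPS at an assumed 0.5 ms latency:
print("IOs in flight:", ios_in_flight(1_000_000, 0.5))  # 500
```

Same headline IOPS number, wildly different bandwidth and concurrency, which is exactly why the size, latency and queue depth context matters.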

Sure, there is more context and information that would be interesting as well; however, learning to walk before running helps prevent falling down.

Does size or age of vendors make a difference when it comes to context?

Some vendors are doing a good job of going after out-of-this-world, record-setting marketing hero numbers.

Meanwhile, other vendors are doing a good job of adding context to their IOPS, response time, bandwidth and other metrics that matter. There is a mix of startup and established vendors that give context with their IOPS and other metrics; likewise, size or age does not seem to matter among those who lack context.

Some vendors may not offer metrics or information publicly; fine, go under NDA to learn more and see if the results are applicable to your environment.

Likewise, if they do not want to provide the context, then ask some tough yet fair questions to decide if their solution is applicable for your needs.

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences, along with common questions (and answers) as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.

What This All Means

What this means is: let us start providing, and asking for, metrics that matter, such as IOPS with context.

If you have a great IOPS metric and want it to matter, then include some context, such as the IO size (e.g. 4K, 8K, 16K, 32K, etc.), the percentage of reads vs. writes, latency or response time, and random vs. sequential access.
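
As a sketch of what "IOPS with context" might look like when recorded or reported, here is a minimal example; the field names and values are my own illustrative assumptions, not any standard or vendor format.

```python
from dataclasses import dataclass

@dataclass
class IopsWithContext:
    """An IOPS figure plus the context that makes it meaningful."""
    iops: float
    io_size_bytes: int     # e.g. 4K, 8K, 16K, 32K
    read_pct: float        # percentage of reads vs. writes
    random_pct: float      # random vs. sequential mix
    avg_latency_ms: float  # response time at that IOPS rate
    queue_depth: int
    workers: int
    servers: int
    duration_s: int        # short burst vs. long sustained run

# Hypothetical example values, for illustration only:
result = IopsWithContext(iops=250_000, io_size_bytes=8 * 1024,
                         read_pct=70.0, random_pct=100.0,
                         avg_latency_ms=0.8, queue_depth=32,
                         workers=4, servers=1, duration_s=600)
print(result)
```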

IMHO the most interesting or applicable metrics that matter are those relevant to your environment and application. For example, if your main application that needs SSD does about 75% reads (random) and 25% writes (sequential) with an average size of 32K, then while fun to hear about, how relevant is a million 64 byte read IOPS? Likewise, when looking at IOPS, pay attention to the latency, particularly if SSD or performance is your main concern.

Get in the habit of asking or telling vendors or their surrogates to provide some context with them metrics if you want them to matter.

So how about some context around them IOPS (or latency and bandwidth or availability, for that matter)?

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

RAID and IOPS and IO observations

There are at least two different meanings for IOPS. For those not familiar with it, the information technology (IT) and data storage meaning is Input/output Operations Per second (e.g. data movement activity). Another meaning of IOPS belongs to the International Organization for a Participatory Society (iopsociety.org) and their fundraising activity, found here.

I recently came across a piece (here and here) talking about RAID and IOPS that had some interesting points; however, some generalizations could use more comment. One of the interesting comments and assertions is that RAID writes increase with the number of drives in the parity scheme. Granted, the specific implementation and configuration could result in an "it depends" type of response.

Here are some more perspectives on the piece (here and here), as the site's comments seem to be restricted.

Keep in mind that with RAID 5 (or 6) performance, your IO size has a bearing on whether you do those extra back-end IOs. For example, say you are writing a 32KB item, accomplished by a single front-end IO from an application server, and the storage system, appliance, adapter or software implementing and performing the RAID (or erasure coding for that matter) has a chunk size of 8KB (e.g. the amount of data written to each back-end drive). Then a 5 drive R5 (e.g. 4+1) would in fact have five back-end IOPS (32KB / 8KB = 4 data chunks, + 1 8KB parity).

On the other hand, if the front-end IO were only 16KB (using whole numbers for simplicity, otherwise round up), in the case of a write there would be three back-end writes with the R5 (e.g. 2 + 1). Keep in mind that the controller/software managing the RAID would (or should) try to schedule back-end IO with cache, read-ahead, write-behind, write-back and other forms of optimization.
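
Here is a minimal sketch of that back-end write counting. It follows the simplified model above (aligned writes, counting only writes, ignoring any read-modify-write reads and cache/write-behind optimizations), so treat it as illustrative rather than a model of any specific controller.

```python
import math

def r5_backend_writes(front_end_io_bytes: int, chunk_bytes: int) -> int:
    """Back-end writes for one aligned front-end write on RAID 5:
    one write per data chunk touched, plus one parity chunk write."""
    data_chunks = math.ceil(front_end_io_bytes / chunk_bytes)
    return data_chunks + 1  # + 1 for the parity chunk

# The two worked examples from the text, with an 8KB chunk on a 4+1 R5:
print(r5_backend_writes(32 * 1024, 8 * 1024))  # 4 data + 1 parity = 5
print(r5_backend_writes(16 * 1024, 8 * 1024))  # 2 data + 1 parity = 3
```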

A good point in the piece (here and here) is that understanding and factoring in IOPS is important, as are latency or response time, bandwidth or throughput, and availability; they are all inter-related.

Also very important to keep in mind are the size of the IO, reads vs. writes, and random vs. sequential access.

RAID along with erasure coding is a balancing act between performance, availability, space capacity and economics aligned to different application needs.

RAID 0 (R0) actually has a big impact on performance, with no penalty on writes; however, it has no availability protection benefit, and in fact can be a single point of failure (e.g. the loss of an HDD or SSD impacts the entire R0 group). However, for static items, or items that are being journaled and protected on some other medium/RAID/protection scheme, R0 is used more than people realize for scratch/buffer/transient/read-cache types of applications. Keep in mind that it is a balance of performance and capacity against the exposure of no availability protection, as opposed to other approaches. Thus, do not be scared of R0; however, do not get burned or hurt by it either. Treat it with respect and it can be effective for some things.

Also mentioned in the piece was that SSD based servers will perform vastly better than SATA or SAS based ones. I am assuming that the authors meant to say better than SAS or SATA DAS based HDDs?

Keep in mind that unless you are using a PCIe NAND flash SSD card as a target, cache or RAID card, most SSD drives today are either SAS or SATA (SATA being the more common), and are moving from 3Gb SAS and SATA to 6Gb SAS and SATA.

Also, while HDDs and SSDs can do a given number of reads or writes per second, those numbers will vary based on the size of the IO and whether it is a read, write, random or sequential. However, what can have the biggest impact, and where I have seen too many people or environments get into a performance jam, is assuming that those IOPS numbers per HDD or SSD are a given. For example, assuming that 100-140 IOPS (regardless of size, type, etc.) can always be achieved per drive overlooks that the limiting factor may be the type of interface and controller/adapter being used.

I have seen fast HDDs and SSDs deliver sub-par performance, or not meet expectations, over fast interfaces such as iSCSI/SAS/SATA/FC/FCoE/IBA due to bottlenecks in the adapter card or in the storage system / appliance / controller / software. In some cases you may see more effective IOPS for reads, writes or both, while in other implementations you may see lower than expected results due to internal implementation bottlenecks or architectural designs. Hint: watch out for solutions where the vendor tries to blame poor performance on the access network (e.g. SAS, iSCSI, FC, etc.), particularly if you know those are not the bottlenecks.
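
As a hedged sketch of that point (the per-drive rate and the path limit below are illustrative assumptions, not measurements), aggregate IOPS is bounded by the slowest element in the path, not simply by drives multiplied by a per-drive number:

```python
def effective_iops(drives: int, per_drive_iops: float,
                   path_limit_iops: float) -> float:
    """Aggregate IOPS, capped by the adapter/controller/software path."""
    return min(drives * per_drive_iops, path_limit_iops)

# 24 HDDs at an assumed 140 IOPS each, behind a controller path assumed
# to top out at 2,500 IOPS: the drives are not the bottleneck.
print(effective_iops(24, 140, 2_500))  # 2,500, not 24 x 140 = 3,360
```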

Here is some related content:
Are Hard Disk Drives (HDDs) getting too big?
How can direct attached storage (DAS) make a comeback if it never left?
EMC VFCache re spinning SSD and intelligent caching
SSD and Green IT moving beyond green washing
Optimize Data Storage for Performance and Capacity Efficiency
Is SSD dead? No, however some vendors might be
RAID Relevance Revisited
Industry Trends and Perspectives: RAID Rebuild Rates
What is the best kind of IO? The one you do not have to do
More storage and IO metrics that matter
IBM buys flash solid state device (SSD) industry veteran TMS

In terms of fund-raising, if you feel so compelled, send a gift, donation, sponsorship, project, book purchase, piece of work, assignment, research project, speaking engagement, keynote, webcast, video or seminar event my way; just like professional fund-raisers (or IOPS vendors), StorageIO accepts Visa, MasterCard, American Express, PayPal, checks and traditional POs.

As for this site and comments, outside of those caught in the spam trap, courteous perspectives and discussions are welcome.

Ok, nuff said.

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio
