SSD and Storage System Performance

Jacob Gsoedl has a new article over at SearchStorage titled How to add solid-state storage to your enterprise data storage systems.

In his article, which includes some commentary from me, Jacob lays out various options for where and how to deploy solid state devices (SSD) in and with enterprise storage systems.

While many vendors have jumped on the latest SSD bandwagon, adding flash-based devices to their storage systems, where and how they implement the technology varies.

Some vendors take the simplistic approach of qualifying flash SSD devices for attachment to their storage controllers, similar to how any other Fibre Channel, SAS or SATA hard disk drive (HDD) would be qualified.

Others take a more in-depth approach, including optimizing controller software, firmware or microcode to leverage flash SSD devices, along with addressing wear leveling and read and write performance, among other capabilities.

Performance is another area where, on paper, a flash SSD device might appear fast and able to make a storage system faster.

However, systems that are not optimized for higher throughput and/or increased IOPS with lower latency may end up placing restrictions on the number of flash SSD devices or imposing other configuration constraints. Even worse is when expected performance improvements are not realized; after all, fast controllers need fast devices, and fast devices need fast controllers.

RAM and flash-based SSDs are great enabling technologies for boosting performance and productivity and for enabling a green, efficient environment; however, do your homework.

Look at how various vendors implement and support SSD, particularly flash-based products, including enhancements to storage controllers for optimal performance.

Likewise, check out the activity of the SNIA Solid State Storage Initiative (SSSI), among other industry trade group or vendor initiatives around enhancements and best practices for SSD.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

SPC and Storage Benchmarking Games


There is a post over in one of the LinkedIn discussion forums about Storage Performance Council (SPC) benchmarks being misleading, to which I just wrote a short response. Here’s the full post, as LinkedIn has a short response length limit.

While the SPC is far from perfect, it is, at least for block storage, arguably better than doing nothing.

For the most part, SPC has become a de facto standard for block storage benchmarks, independent of using Iometer or other tools or vendor-specific simulations, similar to how Microsoft ESRP is for Exchange, TPC for databases, SPEC for NFS and so forth. In fact, SPC even recently, rather quietly, rolled out a new set of what could be considered the basis for green storage benchmarks. I would argue that SPC results in themselves are not misleading, particularly if you take the time to read both the executive and full disclosures and look beyond the summary.

Some vendors have taken advantage of the SPC results by playing games with price discounting (something that is allowed under SPC rules) to make apples-to-oranges comparisons on cost per IOPS, or other ploys. This practice is nothing new to the IT industry, or other industries for that matter; hence, benchmark games.

Where the misleading SPC issue comes into play is with those who simply accept what a vendor claims without looking at the rest of the story, or who do not take the time to examine the results and make apples-to-apples comparisons instead of believing the apples-to-oranges comparison. After all, the results are there for a reason: so that those who are really interested can dig in and sift through the material; granted, not everyone wants to do that.

For example, some vendors can show a highly discounted price to get a better cost per IOPS on an apples-to-oranges basis; however, when prices are normalized, the results can be quite different. Here is the real gem for those who dig into the SPC results, including looking at the configurations: latency under workload is also reported.

The reason that latency is a gem is that, generally speaking, latency does not lie.

What this means is that if vendor A doubles the amount of cache, doubles the number of controllers, doubles the number of disk drives, plays games with application storage unit (ASU) utilization, or uses fast interfaces from 10GbE iSCSI to 8Gb FC or FCoE or SAS to get a better cost per IOPS number with discounting, look at the latency numbers. There have been some recent examples of this where vendor A has a better cost per IOPS while achieving a higher number of IOPS at a lower cost than vendor B, which is what typically gets reported in a press release or news story. (See a blog entry that also points to a CMG presentation discussing this topic here.)

Then go and look at the two results: vendor B may be at list price while vendor A is severely discounted, which is not a bad thing, as that discounted price then becomes the starting point from which customers should begin negotiations. However, to be fair, normalize the pricing for fun, look at how much more equipment vendor A may need (having to discount to offset the increased amount of hardware), and then look at latency.
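To make the normalization point concrete, here is a minimal sketch in Python, using entirely hypothetical vendors, prices and IOPS numbers (none taken from actual SPC disclosures), of how discounting skews a cost-per-IOPS comparison until both results are put back at list price:

```python
# Hypothetical SPC-style results; all figures are made up for illustration.
results = {
    "Vendor A": {"iops": 300_000, "list_price": 1_200_000, "discount": 0.45},
    "Vendor B": {"iops": 250_000, "list_price": 900_000, "discount": 0.0},
}

def cost_per_iops(entry, normalize_to_list=False):
    """Dollars per IOPS; optionally ignore discounting for an apples-to-apples view."""
    price = entry["list_price"]
    if not normalize_to_list:
        price *= (1.0 - entry["discount"])
    return price / entry["iops"]

for name, entry in results.items():
    print(f"{name}: ${cost_per_iops(entry):.2f}/IOPS as reported, "
          f"${cost_per_iops(entry, normalize_to_list=True):.2f}/IOPS at list")
```

With these made-up numbers, Vendor A looks cheaper as reported ($2.20 versus $3.60 per IOPS) yet is actually more expensive at list price ($4.00 versus $3.60), which is exactly the apples-to-oranges effect described above.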

In some of the recently reported record results, the latency numbers are actually better for vendor B than for vendor A. Why does latency matter? Beyond showing what a controller can actually do in terms of leveraging the number of disks, cache and interface ports, the big kicker is for those talking about SSD (RAM or flash), because SSD is generally about latency. To fully and effectively utilize SSD, which is a low-latency device, you want a controller that can do a decent job of handling IOPS; however, you also need a controller that can handle IOPS with low latency under heavy workload conditions.
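The relationship between latency and IOPS can be sketched with Little's Law: achievable throughput is roughly the number of outstanding I/Os divided by the average response time. The queue depth and latency figures below are hypothetical, chosen only to show why a low-latency controller matters so much for SSD:

```python
def achievable_iops(outstanding_ios: int, latency_ms: float) -> float:
    """Little's Law: throughput = concurrency / response time (in seconds)."""
    return outstanding_ios / (latency_ms / 1000.0)

# Same queue depth of 32 outstanding I/Os, very different latencies:
print(achievable_iops(32, 8.0))   # HDD-class latency (~8 ms)   -> 4000.0 IOPS
print(achievable_iops(32, 0.5))   # SSD-class latency (~0.5 ms) -> 64000.0 IOPS
```

At the same concurrency, a 16x reduction in latency yields a 16x increase in achievable IOPS, which is why a controller that bottlenecks latency under load squanders what SSD offers.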

Thus the SPC, while far from perfect, is at least useful for a thumbnail sketch and comparison and is not necessarily misleading; more often than not, it is how the results are used that is misleading. Now, in the quest of the SPC administrators to gain more members and broader industry participation, and thus secure their own future, is the SPC organization or administration opening itself up to being used more and more as a marketing tool in ways that potentially compromise its credibility? (I know, some will dispute the validity of SPC; however, that is reserved for a different discussion ;) )

There is a bit of déjà vu here for those involved with RAID and storage who recall how the RAID Advisory Board (RAB), in its quest to gain broader industry adoption and support, succumbed to marketing pressures and use (or what some would describe as misuse) and is now a member of the "Where are they now?" club!

Don’t get me wrong here; I like the SPC tests, results and format, and there is a lot of good information in the SPC. The various vendor folks who work very hard behind the scenes to make the SPC actually work and continue to evolve it all deserve a great big kudos, an "atta boy" or "atta girl", for the fine work they have been doing, work that I hope does not become lost in the quest to gain market adoption for the SPC.

Ok, so all of this should then beg the question: what is the best benchmark? Simple: the one that most closely resembles your actual applications, workload, conditions, configuration and environment.

Ok, nuff said.

Cheers gs


SSD activity continues to go virtually round and round


Solid State Disk (SSD) activities and discussions (covering both flash and RAM based technologies) continue to go round and round (pun intended), with announcements (here, here, here, here, here, and here, among others) of various improvements and evolution in technologies spanning the consumer, small office home office (SOHO), small medium business (SMB) and enterprise markets, from vendors including Intel, SanDisk, Seagate and many others.

Recent innovations look to address the write performance issues or challenges associated with flash-based SSDs, which, while better than magnetic hard disk drives (HDDs), are slower than their RAM-based counterparts.

Other activity includes extending the useful life of flash-based devices, that is, how many times a device can be rewritten or modified before problems arise or performance degrades. Yet another development is SanDisk introducing a "virtual RPM" (vRPM) metric to give consumers an indication of the relative revolutions per minute (RPM) of a non-rotating SSD device, to help with shopping comparisons. Can you say SSDs going round and round and round, at least in a virtual world? Now that should make for some interesting "virtual benchmarking" discussions!

Meanwhile, industry trade groups including the SNIA Solid State Storage Initiative (SSSI) are gathering momentum to address marketing, messaging, awareness and education, as well as metrics or benchmarks, among other things normally done around industry trade group campfires and campouts.

So, as the HDDs spin, so too does the activity in and around SSD-based technologies.

Ok, nuff said.

Cheers gs


The Many Faces of Solid State Devices/Disks (SSD)


Here’s a link to a recent article I wrote for Enterprise Storage Forum titled "Not a Flash in the PAN", providing a synopsis of the many faces, implementations and forms of SSD-based technologies, with several links to other related content.

A popular topic over the past year or so has been SSD with flash-based storage for laptops, sometimes referred to as hybrid disk drives, along with announcements late last year by companies such as Texas Memory Systems (TMS) of a flash-based storage system combining DRAM as a high-speed cache in their RAMSAN-500, and more recently EMC adding support for flash-based SSD devices in their DMX-4 systems as tier-0 storage to co-exist with tier-1 (fast FC) and tier-2 (SATA) drives.

Solid State Disks/Devices (SSD), or memory-based storage mediums, have been around for decades, and they continue to evolve using different types of memory, ranging from volatile dynamic random access memory (DRAM) to persistent or non-volatile RAM (NVRAM) and various derivatives of NAND flash, among others. Likewise, the capacity cost points, performance, reliability, packaging, interfaces and power consumption all continue to improve.

SSD in general is a technology that has been misunderstood over the decades, particularly when simply compared on a cost per capacity (e.g. dollar per GByte) basis, which is an unfair comparison. The more appropriate comparison is to look at how much work or activity, for example transactions per second, NFS operations per second, IOPS or email messages processed in a given amount of time, and then compare the amount of power and number of devices needed to achieve a desired level of performance. Granted, SSD, and in particular DRAM-based systems, cost more on a GByte or TByte basis than magnetic hard disk drives; however, it also takes more HDDs and controllers to achieve the same level of performance, not to mention more power and cooling, than a typical SSD-based device requires.
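As a rough sketch of the work-based comparison (the device IOPS, wattage and price figures below are hypothetical, chosen only to illustrate the method, not measurements of any real product), compare how many devices, how much power and how much money each option needs to reach a target performance level, rather than comparing dollars per GByte alone:

```python
import math

# Hypothetical device profiles, for illustration only.
devices = {
    "15K HDD":   {"iops": 180,    "watts": 15.0, "price": 400},
    "flash SSD": {"iops": 20_000, "watts": 6.0,  "price": 1_500},
}

def devices_for_target(profile, target_iops):
    """Devices, total watts and total cost needed to reach a target IOPS level."""
    n = math.ceil(target_iops / profile["iops"])
    return n, n * profile["watts"], n * profile["price"]

target = 50_000  # target IOPS for the workload
for name, profile in devices.items():
    n, watts, cost = devices_for_target(profile, target)
    print(f"{name}: {n} devices, {watts:.0f} W, ${cost:,}")
```

With these made-up numbers, hitting 50,000 IOPS takes hundreds of HDDs versus a handful of SSDs, so the SSD option wins on power and even on total cost at that performance level, despite losing badly on dollars per GByte.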

The many faces of SSD range from low-cost consumer-grade products based on consumer flash to high-performance DRAM-based caches and devices for enterprise storage applications. Over the past year or so, SSD has re-emerged for those who are familiar with the technology, and emerged or appeared for those new to the various implementations, leading to another upswing in the historic up-and-down cycles of SSD adoption and technology evolution in the industry.

This time around, a few things are different, and I believe that SSD in general, that is, the many different faces of SSD, will have staying power and not fade away into the shadows only to re-emerge a few years later, as has been the case in the past.

The reason I hold this opinion comes down to two basic premises: economics and ecology. Given the focus on reducing or containing costs, doing more with what you have, and environmental or ecological awareness in the race to green the data center and green storage, improving economics with more energy-efficient storage (that is, enabling your storage to do more work with less energy, as opposed to simply avoiding energy consumption) has the byproducts of improved economics (cost savings, improved resource utilization and better service delivery) and improved ecology (better or lower use of energy).

Current implementations of SSD-based solutions address energy efficiency in ways ranging from maximizing battery life to boosting performance while drawing less power. Consequently, we now see SSD being used not only to boost performance, but also as one of many tools to address power, cooling, floor space and environmental or green storage issues.

Here’s a link to a StorageIO industry trends and perspectives white paper at www.storageio.com/xreports.htm.

Here’s the bottom line: there are many faces to SSD. SSD (flash or DRAM) based solutions and devices have a place in a tiered storage environment as tier-0, or as an alternative in some laptops or other servers where appropriate. SSD complements other technologies, and SSD benefits from being paired with other technologies, including high-performance tier-1 storage and near-line or tier-2 storage implementing intelligent power management (IPM).

Cheers gs
