Seagate has shipped over 10 Million storage HHDD’s, is that a lot?

Seagate recently announced that they have shipped over 10 million Hybrid Hard Disk Drives (HHDD), also known as Solid State Hybrid Drives (SSHD), over the past few years. Disclosure: Seagate has been a StorageIO client.

I know where some of those desktop class HHDD’s, including Momentus XTs, ended up, as I bought some of the 500GB and 750GB models via Amazon and have them in various systems. Likewise I have installed the newer generation of enterprise class SSHD’s, which Seagate now refers to as Turbo models, in VMware servers as companions to my older HHDD’s.

What is a HHDD or SSHD?

The HHDD’s continue to evolve, from initially accelerating reads to now being capable of speeding up write operations across different families (desktop/mobile, workstation and enterprise). What makes a HHDD or SSHD is that, as their name implies, they are a hybrid combining a traditional spinning magnetic Hard Disk Drive (HDD) along with flash SSD storage. The flash persistent memory is in addition to the DRAM or non-persistent memory typically found on HDDs used as a cache buffer.

These HHDDs or SSHDs are self-contained in that the flash is built into the actual drive as part of its internal electronics circuit board (controller). This means that the drives should be transparent to the operating systems or hypervisors on servers or storage controllers, without the need for special adapters, controller cards or drivers. In addition, there is no extra software needed to automate tiering or movement between the flash on the HHDD or SSHD and its internal HDD; it is all self-contained, managed by the drive’s firmware (e.g. software).
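To help visualize the general idea (this is an illustrative sketch, not Seagate’s actual firmware logic), here is a small Python example of a drive that transparently promotes frequently read blocks into a small flash cache while the host simply sees normal reads:

```python
# Minimal sketch of the general idea behind an SSHD/HHDD: the drive firmware
# transparently promotes frequently read blocks (LBAs) into a small flash cache.
# This is NOT Seagate's actual caching algorithm, just an illustration.
from collections import Counter

FLASH_CACHE_BLOCKS = 4      # pretend the flash cache holds 4 blocks
PROMOTE_THRESHOLD = 3       # promote an LBA after it has been read this many times

flash_cache = set()         # LBAs currently served from flash
read_counts = Counter()     # access frequency tracked by the "firmware"

def read_block(lba):
    """Return where the read was serviced from; the host just sees a normal read."""
    read_counts[lba] += 1
    if lba in flash_cache:
        return "flash (fast)"
    # hot data gets promoted when there is room in the flash cache
    if read_counts[lba] >= PROMOTE_THRESHOLD and len(flash_cache) < FLASH_CACHE_BLOCKS:
        flash_cache.add(lba)
    return "spinning media (slower)"

workload = [10, 20, 10, 30, 10, 10, 40, 10]   # LBA 10 is "hot"
for lba in workload:
    print(f"LBA {lba}: {read_block(lba)}")
```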

Some SSHD and HHDD industry perspectives

Jim Handy over at Objective Analysis has this interesting post discussing Hybrid Drives Not Catching On. The following is an excerpt from Jim’s post.

Why were our expectations higher? 

There were a few reasons:

  • The hybrid drive can be viewed as an evolution of the DRAM cache already incorporated into nearly all HDDs today. Replacing or augmenting an expensive DRAM cache with a slower, cheaper NAND cache makes a lot of sense.
  • An SSHD performs much better than a standard HDD at a lower price than an SSD. In fact, an SSD of the same capacity as today’s average HDD would cost about an order of magnitude more than the HDD. The beauty of an SSHD is that it provides near-SSD performance at a near-HDD price. This could have been a very compelling sales proposition had it been promoted in a way that was understood and embraced by end users.
  • Some expected for Seagate to include this technology into all HDDs and not to try to continue using it as a differentiator between different Seagate product lines. The company could have taken either of two approaches: To use hybrid technology to break apart two product lines – standard HDDs and higher-margin hybrid HDDs, or to merge hybrid technology into all Seagate HDDs to differentiate Seagate HDDs from competitors’ products, allowing Seagate to take slightly higher margins on all HDDs. Seagate chose the first path.

The net result is shipments of 10 million units since its 2010 introduction, for an average of 2.5 million per year, out of total annual HDD shipments of around 500 million units, or one half of one percent.

Continue reading more of Jim’s post here.

In his post, Jim raises some good points, including that HHDD’s and SSHD’s are still a fraction of the overall HDD’s shipped on an annual basis. However IMHO the annual growth rate has not been a flat average of 2.5 million; rather it started at a lower rate and has increased year over year. For example Seagate issued a press release back in the summer of 2011 that they had shipped a million HHDD’s a year after their release. Also keep in mind that those HHDD’s were focused on desktop workstations and in particular at gamers among others.

The early HHDD’s, such as the Momentus XTs that I was using starting in June 2010, only had read acceleration, which was better than plain HDD’s however did not help out on writes. Over the past couple of years there have been enhancements to the HHDD’s, including the newer generation also known as SSHD’s or Turbo drives as Seagate now calls them. These newer drives include write acceleration as well, with models for mobile/laptop, workstation and enterprise class, including higher-performance and higher-capacity versions. Thus my estimate or analysis has the growth on an accelerating curve vs. a linear growth rate (e.g. an average of 2.5 million units per year).

Period       Units shipped per year    Running total units shipped
2010-2011    1.0 Million               1.0 Million
2011-2012    1.25 Million (est.)       2.25 Million (est.)
2012-2013    2.75 Million (est.)       5.0 Million (est.)
2013-2014    5.0 Million (est.)        10.0 Million

StorageIO estimates on HHDD/SSHD units shipped based on Seagate announcements

estimated hhdd and sshd shipments
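For what it is worth, here is a quick Python sketch showing how those estimates roll up. Keep in mind that other than the first-year announcement and the 10 million total, the per-year figures are StorageIO estimates rather than Seagate published numbers.

```python
# StorageIO estimates of HHDD/SSHD units shipped per year (millions), from the table above.
# Only the first year (Seagate's ~1M announcement) and the ~10M cumulative total are anchored
# to Seagate statements; the in-between years are estimates.
estimated = {"2010-2011": 1.0, "2011-2012": 1.25, "2012-2013": 2.75, "2013-2014": 5.0}

running = 0.0
for period, units in estimated.items():
    running += units
    print(f"{period}: {units:>5.2f}M shipped, {running:>5.2f}M cumulative")

flat_average = running / len(estimated)
print(f"A flat average would be {flat_average:.2f}M/year; the estimated curve "
      f"roughly doubles year over year instead.")
```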

However IMHO there is more to the story beyond the number of HHDD/SSHD shipped, or whether they are accelerating in deployment or growing at an average rate. Some of those perspectives are in my comments over on Jim Handy’s site, with an excerpt below.

In talking with IT professionals (e.g. what the vendors/industry calls users/customers), they are generally not aware that these devices exist, or if they are aware of them, they are only aware of what was available in the past (e.g. the consumer class read optimized versions). I do talk with some who are aware of the newer generation devices, however their comments are usually tied to lack of system integrator (SI) or vendor/OEM support, or sole-source concerns. There was also a focus on promoting the HHDD’s to “gamers” or other power users as opposed to broader marketing efforts, and most of these IT people are not aware of the newer generation of SSHD or what Seagate is now calling “Turbo” drives.

When talking with VAR’s, there is a similar reaction, which is discussion about lack of support for HHDD’s or SSHD’s from the SI/vendor OEMs, or single-source supply concerns. Another common reaction is lack of awareness around the current generation of SSHD’s (e.g. those that do write optimization, as well as enterprise class versions).

When talking with vendors/OEMs, there is a general lack of awareness of the newer enterprise class SSHD’s/HHDD’s that do write acceleration. Sometimes there is concern over how this would disrupt their “hybrid” SSD + HDD or tiering marketing stories/strategies, as well as comments about single-source suppliers. I have also heard comments to the effect of concerns about how long or how committed the drive manufacturers are going to be to SSHD/HHDD, or is this just a gap filler for now.

Not surprisingly, when I talk with industry pundits, influencers and amplifiers (e.g. analysts, media, consultants, blogalysts) there is a reflection of all the above, which is lack of awareness of what is available (not to mention lack of experience) vs. repeating what has been heard or read about in the past.

IMHO while there are some technology hurdles, the biggest issue and challenge is basic marketing and business development to generate awareness with the industry (e.g. pundits), vendors/OEMs, VAR’s, and IT customers, that is of course assuming SSHD/HHDD are here to stay and not just a passing fad…

What about SSHD and HHDD performance on reads and writes?

What about the performance of today’s HHDD’s and SSHD’s, particularly those that can accelerate writes as well as reads?

SSHD and HHDD read / write performance exchange
Enterprise Turbo SSHD read and write performance (Exchange Email)

SSHD and HHDD performance TPC-B
Enterprise Turbo SSHD read and write performance (TPC-B database)

SSHD and HHDD performance TPC-E
Enterprise Turbo SSHD read and write performance (TPC-E database)

Additional details and information about HHDD/SSHD, or as Seagate now refers to them, Turbo drives, can be found in two StorageIO Industry Trends Perspective White Papers (located here and another here).

Where to learn more

Refer to the following links to learn more about HHDD and SSHD devices.
StorageIO Momentus Hybrid Hard Disk Drive (HHDD) Moments
Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy

Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?
2011 Summer momentus hybrid hard disk drive (HHDD) moment
More Storage IO momentus HHDD and SSD moments part I
More Storage IO momentus HHDD and SSD moments part II
New Seagate Momentus XT Hybrid drive (SSD and HDD)
Another StorageIO Hybrid Momentus Moment
SSD past, present and future with Jim Handy

Closing comments and perspectives

I continue to be bullish on hybrid storage solutions from cloud, to storage systems, as well as hybrid storage devices. However like many technologies, just because something makes sense or is interesting does not mean it is a near-term or long-term winner. My main concern with SSHD and HHDD is whether the manufacturers such as Seagate and WD are serious about making them a standard feature in all drives, or simply see them as a near-term stop-gap solution.

What’s your take or experience with using HHDD and/or SSHDs?

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Dell Inspiron 660 i660, Virtual Server Diamond in the rough?

Storage I/O trends

During the 2013 post-Thanksgiving Black Friday shopping day, I did some on-line buying including a Dell Inspiron 660 i660 (5629BK) to be used as a physical machine (PM) or VMware host (among other things).

Now technically, I know this is a workstation or desktop and thus not what some would consider a server. However as another PM to add to my VMware environment (or be used as a bare metal platform), it is a good companion to my other systems.

Via Dell.com Dell 660 i660

Taking a step back, needs vs. wants

Initially my plan for this other system was to go with a larger, more expensive model with as many DDR3 DIMM (memory) and PCIe x4/x8/x16 expansion slots as possible. Some of my other criteria were PCIe Gen 3 and the latest Intel processor generation with VT (Virtualization Technology) and Extended Page Tables (EPT) for server virtualization support, without breaking my budget. Heck, I would love a Dell VRTX or some similar type of server from the likes of Cisco, HP, IBM, Lenovo or Supermicro among many others. On the other hand, I really don’t need one of those types of systems yet, unless of course somebody wants to send some to play with (excuse me, test drive, try out).

Hence needs are what I must have or need, while wants are those things that would be, well, nice to have.

Server shopping and selection

In the course of shopping around and looking at alternatives, I had previously talked with Robert Novak (aka @gallifreyan), who reminded me to think outside the box a bit, literally. Check out Robert’s blog (aka rsts11, a great blog name btw for those of us who used to work with RSTS, RSX and others) including a post he did shortly after I had a conversation with him. If you read his post and continue through this one, you should be able to connect the dots.

While I still have a need and plans for another server with more PCIe and DDR3 (maybe wait for DDR4? ;) ) slots, I found a Dell Inspiron 660.

Candidly, I would normally have skipped over this type or class of system. However what caught my eye was that while it is limited to only two DDR3 DIMM slots and a single PCIe x16 slot, there are three extra x1 slots which, while not as robust, certainly give me some options if I need to use those for older, slower things. Likewise, leveraging higher density DIMM’s, the system is already now at 16GB RAM, waiting for larger DIMM’s if needed.

VMware view of Inspiron 660

The Dell Inspiron 660-i660 I found had a price of a little over $550 (delivered) with an Intel i5-3330 processor (quad-core, quad-thread, 3GHz clock), PCIe Gen 3, one PCIe x16 and three PCIe x1 slots, 8GB DRAM (since reallocated), a GbE port and built-in WiFi, Windows 8 (since P2V’d and moved into the VMware environment), keyboard and mouse, plus a 1TB 6Gb SATA drive. At that price I could afford two, maybe three or four of these in place of a larger system (at least for now). While for some things I have a need for a single larger server, there are other things where having multiple smaller ones with enough processing performance, VT and EPT support comes in handy (if not required for some virtual servers).

Some of the enhancements that I made once the initial setup of the Windows system was complete were to do a clone and P2V of that image, and then redeploy the 1TB SATA drive to join others in the storage pool. Thus the 1TB SATA HDD has been replaced with (for now) a 500GB Momentus XT HHDD, which by the time you read this could already have changed to something else.

Another enhancement was bumping up the memory from 8GB to 16GB, and then adding a StarTech enclosure (see below) for more internal SAS / SATA storage (it supports both 2.5" SAS and SATA HDD’s as well as SSD’s). In addition to the on-board SATA drive port plus the one being used for the CD/DVD, there are two more ports for attaching to the StarTech or other large 3.5" drives that live in the drive bay. Depending on what I’m using this system for, it has different types of adapters for external expansion or networking, some of which have already included 6Gbps and 12Gbps SAS HBA’s.

What about adding more GbE ports?

As this is not a general purpose larger system with many PCIe expansion slots, that is one of the downsides you get for this cost. However depending on your needs, you have some options. For example I have some Intel PCIe x1 GbE cards to give extra networking connectivity if or when needed. Note however that as these are PCIe x1 slots they are PCIe Gen 1, so from a performance perspective exercise caution when mixing these with other newer, faster cards when performance matters (more on this in the future).

Via Amazon.com Intel PCIe x1 GbE card
Via Amazon.com Intel (Gigabit CT PCI-E Network Adapter EXPI9301CTBLK)

One of the caveats to be aware of if you are going to be using VMware vSphere/ESXi is that the Realtek GbE NIC on the Dell Inspiron 660-i660 may not play well, however there are workarounds. Check out some of the workarounds over at the Kendrick Coleman (@KendrickColeman) and Erik Bussink (@ErikBussink) sites, both of which were very helpful, and I can report that the Realtek GbE is working fine with VMware ESXi 5.5a.

Need some extra SAS and SATA internal expansion slots for HDD and SSD’s?

The StarTech 4 x 2.5″ SAS and SATA internal enclosure supports various speed SSD and HDD’s depending on what you connect the back-end connector port to. On the back of the enclosure chassis there is a connector that is a pass-thru to the SAS drive interface that also accepts SATA drives. This StarTech enclosure fits nicely into an empty 5.25″ CD/DVD expansion bay; you then attach the individual drive bays to your internal motherboard SAS or SATA ports, or to those on another adapter.

Via Amazon.com StarTech 4 port SAS / SATA enclosure
Via Amazon.com StarTech 4 x 2.5" SAS and SATA internal enclosure

So far I have used these enclosures attached to various adapters at different speeds, as well as with HDD, HHDD, SSHD and SSD’s at various SAS/SATA interface speeds up to 12Gbps. Note that unlike some other enclosures that have a SAS or SATA expander, the drive bays in the StarTech are pass-thru, hence are not regulated by an expander chip and its speed. Price for these StarTech enclosures is around $60-90 USD and they are good for internal storage expansion (hmm, need to build your own NAS or VSAN or storage server appliance? ;) ).

Via Amazon Molex power connector

Note that you will also need to get a Molex power connector to go from the back of the drive enclosure to an available power port such as for an expansion DVD/CD, which you can find at a Radio Shack, Fry’s or many other venues for a couple of dollars. Double check your specific system and cable connector leads to verify what you will need.

How is it working and performing?

So far so good. In addition to using it for some initial calibration and validation activities, the Inspiron 660 is performing very well with no buyer’s remorse. Ok, sure, I would like more PCIe Gen 3 x4/x8/x16 slots or an extra on-board Ethernet port, however all the other benefits have outweighed those pitfalls.

Speaking of which, if you think a SSD (or other fast storage device) is fast on a 6Gbps SAS or PCIe Gen 2 interface for physical or virtual servers, wait until you experience those IOPs or latencies at 12Gbps SAS and PCIe Gen 3 with a faster current generation Intel processor, just saying ;)…

Server and Storage I/O IOPS and VMware

In the above chart (slide the scroll bar to view more to the right), a Windows 7 64 bit system (VM configured with 14GB DRAM) on VMware vSphere V5.5.1 is shown running on different hardware configurations. The Windows system is running Futuremark PCMark 7 Pro (v1.0.4). From left to right: the Windows VM on the Dell Inspiron 660 with 16GB physical DRAM using a SSHD (Solid State Hybrid Drive); second from the left shows results running on a Dell T310 with an Intel X3470 processor also on a SSHD; in the middle is the workload on the Dell 660 running on a HHDD; second from the right is the workload on the Dell T310 also on a HHDD; while on the right is the same workload on an HP DC5800 with an Intel E8400. The workload results show a composite score along with system storage, simulated user productivity, lightweight processing, and compute intensive tasks.

Futuremark PCMark Windows benchmark
Futuremark PCMark

Don’t forget about the KVM (Keyboard Video Mouse)

Mention KVM to many people in and around the server, storage and virtualization world and they think of KVM as in the hypervisor, however to others it means Keyboard, Video and Mouse, aka the other KVM. As part of my recent and ongoing upgrades, it was also time to upgrade from the older, smaller KVM’s to a larger, easier to use model. The benefit: support growth while also being easier to work with. Having done some research on various options that also varied in price, I settled on the StarTech shown below.

Via Amazon.com StarTech 8 port KVM
Via Amazon.com StarTech 8 Port 1U USB KVM Switch

What’s cool about the above 8 port StarTech KVM switch is that it comes with 8 cables (there are 8 ports) that on one end look like a regular VGA monitor screen cable connector. However on the other end that attaches to your computer, there is the standard VGA connection that attaches to your video out, and a short USB tail cable that attaches to an available USB port for keyboard and mouse. Needless to say it helps to cut down on the cable clutter while coming in around $38.00 USD per server port being managed, or about a dollar a month over a little over three years.

Word of caution on make and models

Be advised that there are various makes and models of the Dell Inspiron available that differ in the processor generation and thus the feature set included. Pay attention to which make or model you are looking at as the prices can vary, hence double-check the processor make and model and then visit the Intel site to see if it is what you are expecting. For example I double-checked that the processor for the different models I looked at was the i5-3330 (view Intel specifications for that processor here).

Summary

Thanks to Robert Novak (aka @gallifreyan) for taking some time providing useful tips and ideas to help think outside the box for this, as well as some future enhancements to my server and StorageIO lab environment.

Consequently, while the Dell Inspiron 660-i660 was not the server that I wanted, it has turned out to be the system that I need now, and hence IMHO a diamond in the rough, if you get the right make and model.

Ok, nuff said

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2013 StorageIO and UnlimitedIO All Rights Reserved

As the platters spin, HDD’s for cloud, virtual and traditional storage environments

HDDs for cloud, virtual and traditional storage environments

Storage I/O trends

Updated 1/23/2018

As the platters spin is a follow-up to a recent series of posts on Hard Disk Drives (HDD’s) along with some posts about How Many IOPS HDD’s can do.

HDD and storage trends and directions include among others

HDD’s will continue to be declared dead into the next decade, just as they have been for over a decade; meanwhile they continue to be enhanced and used in evolving roles.

hdd and ssd

SSD will continue to coexist with HDD, either as separate devices or converged as HHDD’s. When, where and how they are used will also continue to evolve. High IO (IOPS) or low latency activity will continue to move to some form of nand flash SSD (with PCM around the corner), while storage capacity, including some of what has been on tape, stays on disk. Instead of more HDD capacity in a server, it moves to a SAN or NAS or to a cloud or service provider. This includes backup/restore, BC, DR, archive and online reference or what some call active archives.

The need for storage spindle speed and more

The need for faster revolutions per minute (RPM’s, e.g. platter spin speed) from drives is being replaced by SSD and more robust smaller form factor (SFF) drives. For example, some of today’s 2.5” SFF 10,000 RPM (e.g. 10K) SAS HDD’s can do as well as or better than their larger 3.5” 15K predecessors for both IOPS and bandwidth. This is also an example where the RPM speed of a drive may not be the only determination of performance as it has been in the past.


Performance comparison of four different drive types, click to view larger image.

The need for storage space capacity and areal density

In terms of storage enhancements, watch for the appearance of Shingled Magnetic Recording (SMR) enabled HDD’s to help further boost the space capacity in the same footprint. Using SMR, HDD manufacturers can put more bits (e.g. greater areal density) into the same physical space on a platter.


Traditional vs. SMR to increase storage areal density capacity

The generic idea with SMR is to increase areal density (how many bits can be safely stored per square inch) of data placed on spinning disk platter media. In the above image on the left is a representative example of how traditional magnetic disk media lays down tracks next to each other. With traditional magnetic recording approaches, the tracks are placed as close together as possible for the write heads to safely write data.

With new recording formats such as SMR along with improvements to read/write heads, the tracks can be more closely grouped together in an overlapping way. This overlapping (used in a generic sense) is like how the shingles on a roof overlap, hence Shingled Magnetic Recording. Other magnetic recording or storage enhancements in the works include Heat Assisted Magnetic Recording (HAMR) and Helium filled drives. Thus, there is still plenty of bits and bytes room for growth in HDD’s well into the next decade to co-exist and complement SSD’s.

DIF and AF (Advanced Format), or software defining the drives

Another evolving storage feature that ties into HDD’s is the Data Integrity Field (DIF), which has a couple of different types. Depending on which type of DIF (0, 1, 2, and 3) is used, there can be added data integrity checks from the application to the storage medium or drive beyond normal functionality. Here is something to keep in mind: as there are different types or levels of DIF, when somebody says they support or need DIF, ask them which type or level as well as why.

Are you familiar with Advanced Format (AF)? If not, you should be. Traditionally, outside of special formats for some operating systems or controllers, the standard open system data storage block, page or sector has been 512 bytes. This has served well in the past; however, with the advent of TByte and larger sized drives, a new mechanism is needed. The need is to support both larger average data allocation sizes from operating systems and storage systems, as well as to cut the overhead of managing all the small sectors. Operating systems and file systems have added new partitioning features such as the GUID Partition Table (GPT) to support 1TB and larger SSD, HDD and storage system LUN’s.

These enhancements are enabling larger devices to be used in place of traditional Master Boot Record (MBR) or other operating system partition and allocation schemes. The next step, however, is to teach operating systems, file systems, and hypervisors along with their associated tools or drivers how to work with 4,096 byte (4 KByte) sectors. The advantage will be to cut the overhead of tracking all of those smaller sectors or file system extents and clusters. Today many HDD’s support AF, however by default they may have 512-byte emulation mode enabled due to lack of operating system or other support.
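As a quick back of the envelope example of why fewer, larger sectors matter, here is some simple Python arithmetic (capacity shown in decimal, the way drives are marketed):

```python
# Rough arithmetic on why Advanced Format (4K sectors) cuts management overhead:
# a 4TB drive has 8x fewer sectors to track at 4,096 bytes than at 512 bytes.
drive_bytes = 4 * 1000**4          # 4TB (decimal, as drives are marketed)

for sector_size in (512, 4096):
    sectors = drive_bytes // sector_size
    print(f"{sector_size:>4} byte sectors: {sectors:,} sectors to track")
```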

Intelligent Power Management, moving beyond drive spin down

Intelligent Power Management (IPM) is a collection of techniques that can be applied to vary the amount of energy consumed by a drive, controller or processor to do its work. In the case of an HDD these include slowing the spin rate of the platters; however, keep in mind that mass in motion tends to stay in motion. This means that HDD’s, once up and spinning, do not need as much relative power as they function like a flywheel. Where their power draw comes in is during reads and writes, in part due to the movement of the read/write heads, however also for running the processors and electronics that control the device. Another big power consumer is when drives spin up, thus if they can be kept moving, however at a lower rate, along with disabling energy used by read/write heads and their electronics, you can see a drop in power consumption. Btw, a current generation 3.5” 4TB 6Gbs SATA HDD consumes about 6-7 watts of power while in active use, or less when in idle mode. Likewise a current generation high performance 2.5” 1.2TB HDD consumes about 4.8 watts of energy, a far cry from the 12-16 plus watts of energy some use as HDD fud.
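To put some of those wattage numbers into perspective, here is a rough back of the envelope energy calculation in Python; the electricity price used is an assumption for illustration only:

```python
# Back of the envelope energy math using the wattage figures mentioned above.
# The assumed electricity price is illustrative only.
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.10            # assumption: $0.10 per kWh

for label, watts in [("3.5in 4TB SATA (active)", 7.0),
                     ("2.5in 1.2TB 10K (active)", 4.8),
                     ("older drive 'fud' figure", 16.0)]:
    kwh_per_year = watts * HOURS_PER_YEAR / 1000
    cost = kwh_per_year * PRICE_PER_KWH
    print(f"{label}: {watts:>4.1f} W is about {kwh_per_year:>5.1f} kWh/yr, roughly ${cost:.2f}/yr")
```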

Hybrid Hard Disk Drives (HHDD) and Solid State Hybrid Drives (SSHD)

Hybrid HDD’s (HHDD’s), also known as Solid State Hybrid Drives (SSHD), have been around for a while and if you have read my earlier posts, you know that I have been a user and fan of them for several years. However one of the drawbacks of the HHDD’s has been lack of write acceleration (e.g. they only optimize for reads) with some models. Current and emerging HHDD’s are appearing with a mix of nand flash SLC (used in earlier versions), MLC and eMLC along with DRAM while enabling write optimization. There are also more drive options available as HHDD’s from different manufacturers, both for desktop and enterprise class scenarios.

The challenge with HHDD’s is that many vendors either do not understand how they fit and complement their tiering or storage management software tools, or simply do not see the value proposition. I have had vendors and others tell me that the HHDD’s don’t make sense as they are too simple; how can they be a fit without requiring tiering software, controllers, SSD and HDD’s to be viable?

Storage I/O trends

I also see a trend similar to when the desktop high-capacity SATA drives appeared for enterprise-class storage systems in the early 2000s. Some of the same people did not see where or how a desktop class product or technology could ever be used in an enterprise solution.

Hmm, hey wait a minute, I seem to recall similar thinking when SCSI drives appeared in the early 90s, funny how some things do not change, DejaVu anybody?

Does that mean HHDD’s will be used everywhere?

Not necessarily, however, there will be places where they make sense, others where either an HDD or SSD will be more practical.

Networking with your server and storage

Drive native interfaces near-term will remain as 6Gbs (going to 12Gbs) SAS and SATA with some FC (you might still find a parallel SCSI drive out there). Likewise, with bridges or interface cards, those drives may appear as USB or something else.

What about SCSI over PCIe, will that catch on as a drive interface? Tough to say however I am sure we can find some people who will gladly try to convince you of that. FC based drives operating at 4Gbs FC (4GFC) are still being used for some environments however most activity is shifting over to SAS and SATA. SAS and SATA are switching over from 3Gbs to 6Gbs with 12Gbs SAS on the roadmaps.

So which drive is best for you?

That depends: do you need bandwidth or IOPS, low latency or high capacity, a small low profile thin form factor, or particular feature functions? Do you need a hybrid or all SSD, or a self-encrypting device (SED), also known as Instant Secure Erase (ISE)? These are among your various options.

Disk drives

Why the storage diversity?

Simple: some are legacy, soon to be replaced and disposed of, while others are newer. I also have a collection, so to speak, that gets used for various testing, research, learning and trying things out. Click here and here to read about some of the ways I use various drives in my VMware environment including creating Raw Device Mapped (RDM) local SAS and SATA devices.

Other capabilities and functionality existing or being added to HDD’s include RAID and data copy assist, secure erase, self-encryption, and vibration dampening, among other abilities for supporting dense data environments.

Where To Learn More

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Do not judge a drive only by its interface, space capacity, cost or RPM alone. Look under the cover a bit to see what is inside in terms of functionality, performance, and reliability among other options to fit your needs. After all, in the data center or information factory not everything is the same.

From a marketing and fun to talk about new technology perspective, HDD’s might be dead for some. The reality is that they are very much alive in physical, virtual and cloud environments, granted their role is changing.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Seagate provides proof of life: Enterprise HDD enhancements

Storage I/O trends

Proof of life: Enterprise Hard Disk Drives (HDD’s) are enhanced

Last week, while hard disk drive (HDD) competitor Western Digital (WD) was announcing yet another (Velobit) in a string of acquisitions (earlier ones included Stec and Arkeia) and investments (Skyera), Seagate announced new enterprise class HDD’s for their portfolio. Note that it was only two years ago that WD acquired Hitachi Global Storage Technologies (HGST), the disk drive manufacturing business of Hitachi Ltd. (not to be confused with HDS).

Seagate

Similar to WD expanding their presence in the growing nand flash SSD market, Seagate also extended their existing enterprise class SSD portfolio in May of this year. These enhancements included new drives with a 12Gbs SAS interface, along with a partnership (and investment) with PCIe flash card startup vendor Virident. Other PCIe flash SSD card vendors (manufacturers and OEMs) include Cisco, Dell, EMC, FusionIO, HP, IBM, LSI, Micron, NetApp and Oracle among others.

These new Seagate enterprise class HDD’s are designed for use in cloud and traditional data center servers and storage systems. A month or two ago Seagate also announced new ultra-thin (5mm) client (aka desktop) class HDD’s along with a 3.5 inch 4TB video optimized HDD. The video optimized HDD’s are intended for Digital Video Recorders (DVR’s), Set Top Boxes (STB’s) or other similar applications.

What was announced?

Specifically, what Seagate announced were two enterprise class drives: one for performance (e.g. 1.2TB 10K) and the other for space capacity (e.g. 4TB).

 

                                Enterprise High Performance 10K.7     Enterprise Terascale
                                (formerly known as Savio)             (formerly known as Constellation)
Class/category                  Enterprise / High Performance         Enterprise High Capacity
Form factor                     2.5” Small Form Factor (SFF)          3.5”
Interface                       6Gbs SAS                              6Gbs SATA
Space capacity                  1,200GB (1.2TB)                       4TB
RPM speed                       10,000                                5,900
Average seek                    2.9 ms                                12 ms
DRAM cache                      64MB                                  64MB
Power idle / operating          4.8 watts                             5.49 / 6.49 watts
Intelligent Power Management    Yes – Seagate PowerChoice             Yes – Seagate PowerChoice
Warranty                        Limited 5 years                       Limited 3 years
Instant Secure Erase (ISE)      Yes                                   Optional
Other features                  RAID Rebuild assist,                  Advanced Format (AF) 4K block in
                                Self-Encrypting Device (SED)          addition to standard 512 byte sectors
Use cases                       Replace earlier generation 3.5”       Backup and data protection, replication,
                                15K SAS and Fibre Channel HDD’s       copy operations for erasure coding and
                                for higher performance applications   data dispersal, active and dormant
                                including file systems and databases  archives, unstructured NAS, big data,
                                where SSD are not a practical fit.    data warehouse, cloud and object storage.

Note the Seagate Terascale has a disk rotation speed of 5,900 RPM (5.9K), which is not a typo given the more traditional 5.4K RPM drives. This slight increase in rotational speed from 5.4K to 5.9K, when combined with other enhancements (e.g. firmware, electronics), should help boost performance for higher capacity workloads.
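For a rough sense of what seek time and rotational speed mean for random IO, here is a simple back of the envelope Python sketch using the seek and RPM numbers from the table above. It ignores caching, queuing and firmware optimizations, so treat it as an approximation rather than a spec:

```python
# Rough single-drive random IOPS estimate from the spec table above:
# service time is roughly average seek + half a platter rotation (rotational latency).
# This ignores caching, queuing and firmware tricks, so treat it only as a sketch.
def rough_iops(avg_seek_ms, rpm):
    rotational_latency_ms = 0.5 * 60_000 / rpm     # half a revolution, in milliseconds
    service_time_ms = avg_seek_ms + rotational_latency_ms
    return 1000 / service_time_ms

print(f"10K.7 (2.9 ms seek, 10,000 RPM): ~{rough_iops(2.9, 10_000):.0f} random IOPS")
print(f"Terascale (12 ms seek, 5,900 RPM): ~{rough_iops(12.0, 5_900):.0f} random IOPS")
print(f"Same seek at 5,400 RPM:            ~{rough_iops(12.0, 5_400):.0f} random IOPS")
```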

Let us watch for some performance numbers to be published by Seagate or others. Note that I have not had a chance to try these new drives yet, however I look forward to getting my hands on them (among others) sometime in the future for a test drive to add to the growing list found here (hey Seagate and WD, that’s a hint ;) ).

What this all means?

Storage I/O trends

Wait, weren’t HDD’s supposed to be dead or dying?

Some people just like new and emerging things and thus will declare anything existing, or that they have lost interest in (or that their jobs need them to), as old, boring or dead.

For example, if you listen to some, they may say nand flash SSD are also dead or dying. For what it is worth, imho nand flash-based SSDs still have a bright future in front of them even with new technologies emerging, as those will take time to mature (read more here or listen here).

However, the reality is that for at least the next decade, like them or not, HDD’s will continue to play a role that is also evolving. Thus, these and other improvements with HDD’s will be needed until current nand flash or emerging PCM (Phase Change Memory) among other forms of SSD are capable of picking up all the storage workloads in a cost-effective way.

Btw, yes, I am also a fan and user of nand flash-based SSD’s, in addition to HDD’s and see roles for both as being viable complementing each other for traditional, virtual and cloud environments.

In short, HDD’s will keep spinning (pun intended) for some time granted their roles and usage will also evolve similar to that of tape summit resources.

Storage I/O trends

This announcement by Seagate, along with other enhancements from WD, shows that the HDD will not only see its 60th birthday (and here), it will probably also easily see its 70th, and not from the comfort of a computer museum. The reason is that there is yet another wave of HDD improvements just around the corner including Shingled Magnetic Recording (SMR) (more info here) along with Heat Assisted Magnetic Recording (HAMR) among others. Watch for more on HAMR and SMR in future posts. With these and other enhancements, we should be able to see a return to the rapid density improvements with HDD’s observed during the mid to late 2000s when perpendicular recording became available.

What is up with this ISE stuff, is that the same as what Xiotech (e.g. XIO) had?

Is this the same technology that Xiotech (now Xio) referred to as ISE? The answer is no. This Seagate ISE is for fast, secure erasure of data on the disk. The benefit of Instant Secure Erase (ISE) is to cut the time required to erase a drive for secure disposal from hours or days down to seconds (or less). For those environments that already factor drive erase time as part of their overall costs, this can increase the useful time in service to help improve TCO and ROI.

Wait a minute, aren’t slower RPM’s supposed to be lower performance?

Some of you might be wondering or asking the question of wait, how can a 10,000 revolution per minute (10K RPM) HDD be considered fast vs. a 15K HDD, let alone SSD?

Storage I/O trends

There is a trend occurring with HDD’s where the old rules of IOPS or performance being tied directly to the size and rotational speed (RPM’s) of drives, along with their interfaces, no longer hold. This comes down to being careful not to judge a book, or in this case a drive, by its cover. While RPM’s do have an impact on performance, new generation drives at 10K, such as some 2.5” models, are delivering performance equal to or better than earlier generation 3.5” 15K devices.

Likewise, there are similar improvements with 5.4K devices vs. previous generation 7.2K models. As you will see in some of the results found here, not all the old rules of thumb when it comes to drive performance are still valid. Likewise, keep those metrics that matter in the proper context.


Click on above image to see various performance results

For example, as seen in the results (above), more DRAM or DDR cache on the drives has a positive impact on sequential reads, which can be good news if that is what your applications need. Thus, do your homework and avoid judging a device simply by its RPM, interface or form factor.

Other considerations, temperature and vibration

Another consideration is that with the increased density of more drives being placed in a given amount of space, some of which may not have the best climate controls, humidity and vibration are concerns. Thus the importance of drives having vibration dampening or safeguards to keep up performance. Likewise, even though drive heads and platters are sealed, humidity also needs to be taken care of in data centers or by cloud service providers in hot environments near the equator.

If this is not connecting with you, think about how close parts of Southeast Asia and the Indian subcontinent are to the equator along with the rapid growth and low-cost focus occurring there. Your data center might be temperature and humidity controlled, however others who are very focused on cost cutting may not be as concerned with normal facilities best practices.

What type of drives should be used for cloud, virtual and traditional storage?

Good question, and one where the answer should be: it depends upon what you are trying or need to do (e.g. see previous posts here or here and here (via Seagate)). For example, here are some tips for big data storage and storage making decisions in general.

Disclosure

Seagate recently invited me along with several other industry analysts to their cloud storage analyst summit in San Francisco where they covered roundtrip coach airfare, lodging, airport transfers and a nice dinner at the Epic Roast house.

hdd image

I have also received in the past a couple of Momentus XT HHDD’s (aka SSHD) from Seagate. These are in addition to those that I bought, including various Seagate and WD along with HGST, Fujitsu, Toshiba and Samsung devices (SSD and HDD’s) that I use for various things.

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Can we get a side of context with them IOPS server storage metrics?

Can we get a side of context with them server storage metrics?

What’s the best server storage I/O network metric or benchmark? It depends, as there needs to be some context with them IOPS and other server storage I/O metrics that matter.

There is an old saying that the best I/O (Input/Output) is the one that you do not have to do.

In the meantime, let’s get a side of some context with them IOPS from vendors, marketers and their pundits who are tossing them around for server, storage and IO metrics that matter.

Expanding the conversation, the need for more context

The good news is that people are beginning to discuss storage beyond space capacity and cost per GByte, TByte or PByte for both DRAM or nand flash Solid State Devices (SSD), Hard Disk Drives (HDD) along with Hybrid HDD (HHDD) and Solid State Hybrid Drive (SSHD) based solutions. This applies to traditional enterprise or SMB IT data centers with physical, virtual or cloud based infrastructures.

hdd and ssd iops

This is good because it expands the conversation beyond just cost for space capacity into other aspects including performance (IOPS, latency, bandwidth) for various workload scenarios, along with availability, energy effectiveness and management.

Adding a side of context

The catch is that IOPS, while part of the equation, are just one aspect of performance, and by themselves without context may have little meaning, or even be misleading, in some situations.

Granted it can be entertaining, fun to talk about or simply make good press copy to cite a million IOPS. IOPS vary in size depending on the type of work being done, not to mention reads or writes, random and sequential, which also have a bearing on data throughput or bandwidth (Mbytes per second) along with response time. Not to mention block, file, object or blob as well as table.

However, are those million IOP’s applicable to your environment or needs?

Likewise, what do those million or more IOPS represent about the type of work being done? For example, are they small 64 byte or large 64 Kbyte sized, random or sequential, cached reads or lazy writes (deferred or buffered), on a SSD or HDD?

How about the response time or latency for achieving them IOPS?

In other words, what is the context of those metrics and why do they matter?

storage i/o iops
Click on image to view more metrics that matter including IOP’s for HDD and SSD’s

Metrics that matter give context for example IO sizes closer to what your real needs are, reads and writes, mixed workloads, random or sequential, sustained or bursty, in other words, real world reflective.

As with any benchmark, take them with a grain (or more) of salt; the key is to use them as an indicator and then align to your needs. The tool or technology should work for you, not the other way around.

Here are some examples of context that can be added to help make IOP’s and other metrics matter:

  • What is the IOP size, are they 512 byte (or smaller) vs. 4K bytes (or larger)?
  • Are they reads, writes, random, sequential or mixed and what percentage?
  • How was the storage configured including RAID, replication, erasure or dispersal codes?
  • Then there is the latency or response time and IO queue depths for the given number of IOPS.
  • Let us not forget if the storage systems (and servers) were busy with other work or not.
  • If there is a cost per IOP, is that list price or discount (hint, if discount start negotiations from there)
  • What was the number of threads or workers, along with how many servers?
  • What tool was used, its configuration, as well as raw or cooked (aka file system) IO?
  • Was the IOP’s number with one worker or multiple workers on a single or multiple servers?
  • Did the IOP’s number come from a single storage system or total of multiple systems?
  • Fast storage needs fast servers and networks, what was their configuration?
  • Was the performance a short burst, or long sustained period?
  • What was the size of the test data used; did it all fit into cache?
  • Were short stroking for IOPS or long stroking for bandwidth techniques used?
  • Data footprint reduction (DFR) techniques (thin provisioned, compression or dedupe) used?
  • Were write data committed synchronously to storage, or deferred (aka lazy writes used)?

The above are just a sampling and not all may be relevant to your particular needs, however they help to put IOP’s into more context. Another consideration around IOPS is the configuration of the environment: are they from an actual running application using some measurement tool, or are they generated from a workload tool such as IOmeter, IOrate, VDbench among others?
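To show why that context matters, here is a simple Python sketch tying IOPS to IO size (bandwidth) and to response time (via Little’s Law, e.g. IOs in flight = IOPS x response time). The numbers plugged in are illustrative:

```python
# Putting context around an IOPS number: the same IOPS figure means very different
# bandwidth depending on IO size, and latency ties IOPS to queue depth
# (Little's Law: average IOs in flight = IOPS x response time).
def describe(iops, io_size_bytes, response_time_ms):
    bandwidth_mb_s = iops * io_size_bytes / (1024 * 1024)
    outstanding_ios = iops * (response_time_ms / 1000.0)   # average IOs in flight
    print(f"{iops:>9,} IOPS @ {io_size_bytes:>5} bytes, {response_time_ms:>6.2f} ms -> "
          f"{bandwidth_mb_s:>8.1f} MB/s, ~{outstanding_ios:.1f} IOs in flight")

describe(1_000_000, 64, 0.05)    # a "million IOPS" of tiny 64 byte reads at 50 microseconds
describe(29_736, 4096, 0.03)     # 4K sequential reads largely serviced from a drive cache
describe(100, 4096, 10.0)        # 4K random reads on a single 7.2K RPM HDD
```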

Sure, there are more contexts and information that would be interesting as well, however learning to walk before running will help prevent falling down.

Storage I/O trends

Does size or age of vendors make a difference when it comes to context?

Some vendors are doing a good job of going for out of this world record-setting marketing hero numbers.

Meanwhile other vendors are doing a good job of adding context to their IOP, response time or bandwidth figures, among other metrics that matter. There is a mix of startup and established vendors that give context with their IOP’s or other metrics; likewise size or age does not seem to matter for those who lack context.

Some vendors may not offer metrics or information publicly, so fine, go under NDA to learn more and see if the results are applicable to your environments.

Likewise, if they do not want to provide the context, then ask some tough yet fair questions to decide if their solution is applicable for your needs.

Storage I/O trends

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

What this means is let us start putting and asking for metrics that matter such as IOP’s with context.

If you have a great IOP metric and you want it to matter, then include some context such as what size (e.g. 4K, 8K, 16K, 32K, etc.), percentage of reads vs. writes, latency or response time, random or sequential.

IMHO the most interesting or applicable metrics that matter are those relevant to your environment and application. For example, if your main application that needs SSD does about 75% reads (random) and 25% writes (sequential) with an average size of 32K, while fun to hear about, how relevant is a million 64 byte read IOPS? Likewise when looking at IOPS, pay attention to the latency, particularly if SSD or performance is your main concern.

Get in the habit of asking or telling vendors or their surrogates to provide some context with them metrics if you want them to matter.

So how about some context around them IOP’s (or latency and bandwidth or availability for that matter)?

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Upgrading Lenovo X1 Windows 7 with a Samsung 840 SSD

Storage I/O trends

I recently upgraded my Lenovo X1 laptop from a Samsung 830 256GB Solid State Device (SSD) drive to a new Samsung 840 512GB SSD. The following are some perspectives, comments on my experience in using the Samsung SSD over the past year, along with what was involved in the upgrade.

Background

A little over a year ago I upgraded my then new Lenovo X1, replacing upon its arrival the factory supplied Hard Disk Drive (HDD) with a Solid State Device (SSD) drive. After setup and data migration, the 2.5” 7,200 RPM 320GB Toshiba HDD was cloned to a SATA 256GB Samsung model 830 SSD. By first setting up and configuring, copying files and applications, and going through Windows and other updates, when it came time to clone to the SSD, the HDD effectively became a backup.

Note that prior to using the Samsung SSD in my Lenovo X1, I was using Hybrid HDD (HHDD’s) as my primary storage to boost read performance and space capacity. These were in addition to other external SSD and HDD that I used along with NAS devices. Read more about my HHDD experiences in a series of post here.

Fast forward to the present and it is time to do yet another upgrade, not because there is anything wrong with the Samsung SSD, but rather because I was running low on space capacity. Sure 256GB is a lot of space, however I had also become used to having a 500GB and 750GB HHDD before downsizing to the SSD. Granted some of the data I have on the SSD is more for convenience, as a cache or buffer when not connected to the network. Not to mention if you have VMware Workstation for running various Virtual Machines (VMs) you know how those VMs can add up quickly, not to mention videos and other items.

Stack of HDD, HHDD and SSDs

Over the past year, my return on investment (ROI) and return on innovation (the new ROI) was as low as three months, or worst case about six months. That was based on the amount of time I did not have to spend waiting while saving data. Sure, I had some read and boot performance improvements, as well as being able to do more IOPs and other things. However those were not as significant as they could have been, since I had been using HHDDs, vs. if I had gone from HDD to SSD.

My productivity gain was saving 3 to 5 minutes per day when storing large files, documents, videos or other items as part of generating or working on content. Not to mention seeing snapshots and other copy functions for HA, BC and DR take less time, enabling more productivity vs. waiting.

Thus the ROI timeframe varies depending on what I value my time at for a particular project, among other things.

Sure IOPS are important; so too is simple wall clock or stop watch based timing to measure work being done or time spent waiting.
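Here is a rough payback sketch in Python based on those 3 to 5 minutes per day; the drive cost and hourly value are my assumptions for illustration, so plug in your own numbers:

```python
# Rough payback sketch for the SSD upgrade based on the 3 to 5 minutes per day saved
# mentioned above. The drive cost and hourly value are assumptions for illustration only.
SSD_COST = 400.0          # assumed cost of the 512GB SSD (USD)
HOURLY_VALUE = 100.0      # assumed value of an hour of work (USD)
WORK_DAYS_PER_MONTH = 21

for minutes_saved in (3, 5):
    value_per_month = minutes_saved / 60 * HOURLY_VALUE * WORK_DAYS_PER_MONTH
    print(f"{minutes_saved} min/day saved is roughly ${value_per_month:.0f}/month, "
          f"payback in about {SSD_COST / value_per_month:.1f} months")
```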

Upgrade Time

While this was replacing one SSD with another, the same things and steps would apply if going from an HDD to SSD.

Before upgrade
Free space and storage utilization before the upgrade

Make sure that you have a good full and consistent backup copy of your data.

If it is enabled, disable BitLocker or other items that might interfere with the clone. Here is a post if you are interested in enabling Windows BitLocker on Windows 7 64 bit.

Run a quick cleanup, registry repair or other maintenance to make sure you have a good and consistent copy before cloning it.

Install any migration or clone software; in the past I have used Seagate DiscWizard (Acronis) along with the full Acronis product. This time I used the Samsung Data Migration software powered by Clonix, which is an improvement IMHO vs. what they used to supply, which was Norton Ghost.

Shutdown Time

Attach the new drive. For this upgrade I removed the existing Samsung 830 SSD from its internal bay and replaced it with the new Samsung 840. The Samsung 830 was then attached to the Lenovo X1 laptop using a USB to SATA cable. Note that you could also do the opposite, which is to attach the new drive using the USB to SATA cable for the clone operation and then install it into the internal drive bay, which would drop the need for changing the boot sequence.


Samsung 830, Samsung 840 and Lenovo X1


Old Samsung 830 removed, new 840 being installed


Samsung 840 goes in Lenovo X1, Samsung 830 with SATA to USB cable

Since I removed the old drive and attached it to the Lenovo X1 via a SATA to USB cable, with the new drive installed internally, I also had to change the boot sequence. Remember to change this boot sequence back after the upgrade is complete. On the other hand, if you leave the original drive internal and attach the new drive via a USB to SATA, or eSATA to SATA cable for the clone, you do not need to change the boot sequence.


Changing the boot sequence; note one SSD appears as USB due to the cable being used

Before running the data migration software, I disabled my network connection to make sure the system was isolated during the upgrade, and then ran the data migration software tool.


Samsung Data Migration tool (powered by Clonix Ltd.) during clone operation

Unlike tools such as Seagate DiscWizard based on Acronis, the Samsung tool based on Clonix does not shut down or perform the upgrade off-line. There is a tradeoff here that I observed: the Acronis shutdown approach, while being offline, seemed quicker, however that is subjective. The Samsung tool seemed longer, about 2.5 hours to clone 256GB to 512GB, however I was still able to do things on the PC (such as making screen shots).
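Some rough arithmetic on that online clone (assuming roughly the full 256GB was copied, which is an approximation):

```python
# Rough arithmetic on the observed online clone time: roughly 256GB of source data
# in about 2.5 hours. Both the amount copied and the elapsed time are approximate.
gb_copied = 256
hours = 2.5
mb_per_sec = gb_copied * 1024 / (hours * 3600)
print(f"~{mb_per_sec:.0f} MB/sec effective clone rate while the system stayed usable")
```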

Even though the Clonix powered Samsung data migration tool works on-line, enabling things to be done, it is best to leave all applications shut down.

Once the data migration tool is done and it says 100 percent complete, DO NOT DO ANYTHING until you see a prompt telling you to do something.

WAIT, as there are some background things that occur after you get to 100 percent complete. When you see the prompt screen, only then is it ok to move forward.

At that point, shut down Windows, remove the old drive, change the boot sequence back if needed, and reboot to verify all is ok.

Also, remember to turn BitLocker back on if needed.

Post Mortem

How is the new SSD drive running?

So far so good, as fast if not better than the old one.


About a month after the upgrade, the space is being put to use.

How about the Samsung 830?

That drive is now being used for various things in my test lab environment, joining other SSD, HHDD and HDDs supporting various physical and virtual server activities, including some testing as part of this series (watch for more in this series soon).

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Part II: How many IOPS can a HDD HHDD SSD do with VMware?

How many IOPS can a HDD HHDD SSD do with VMware?

server storage data infrastructure i/o iop hdd ssd trends

Updated 2/10/2018

This is the second post of a two-part series looking at storage performance, specifically in the context of drive or device (e.g. media) characteristics, around how many IOPS a HDD, HHDD or SSD can do with VMware. In the first post the focus was on putting some context around drive or device performance, with this second part looking at some workload characteristics (e.g. benchmarks).

A common question is how many IOPS (IO Operations Per Second) can a storage device or system do?

The answer is or should be it depends.

Here are some examples to give you some more insight.

For example, the following shows how IOPS vary by changing the percentage of reads, writes, random and sequential for a 4K (4,096 bytes or 4 KBytes) IO size, with each test step lasting 4 minutes.
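For reference, the 16 rows in each of the tables below come from sweeping the same IO size across a matrix of sequential vs. random and read vs. write percentages; a small Python snippet that enumerates those steps looks like this:

```python
# The test steps below sweep the same 4 minute workload across a matrix of
# sequential vs. random and read vs. write mixes, which is what produces the
# 16 rows in each of the tables that follow.
io_size_kb = 4
step_minutes = 4
seq_percents = (100, 60, 30, 0)
read_percents = (100, 60, 30, 0)

for read_pct in read_percents:
    for seq_pct in seq_percents:
        print(f"{io_size_kb}KB  {seq_pct:>3}% Seq  {read_pct:>3}% Read  ({step_minutes} min step)")
```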

IO Size | Workload Pattern | Avg. Resp (R+W) ms | Avg. IOPS (R+W) | Bandwidth KB/sec (R+W)
4KB | 100% Seq 100% Read | 0.0 | 29,736 | 118,944
4KB | 60% Seq 100% Read | 4.2 | 236 | 947
4KB | 30% Seq 100% Read | 7.1 | 140 | 563
4KB | 0% Seq 100% Read | 10.0 | 100 | 400
4KB | 100% Seq 60% Read | 3.4 | 293 | 1,174
4KB | 60% Seq 60% Read | 7.2 | 138 | 554
4KB | 30% Seq 60% Read | 9.1 | 109 | 439
4KB | 0% Seq 60% Read | 10.9 | 91 | 366
4KB | 100% Seq 30% Read | 5.9 | 168 | 675
4KB | 60% Seq 30% Read | 9.1 | 109 | 439
4KB | 30% Seq 30% Read | 10.7 | 93 | 373
4KB | 0% Seq 30% Read | 11.5 | 86 | 346
4KB | 100% Seq 0% Read | 8.4 | 118 | 474
4KB | 60% Seq 0% Read | 13.0 | 76 | 307
4KB | 30% Seq 0% Read | 11.6 | 86 | 344
4KB | 0% Seq 0% Read | 12.1 | 82 | 330

Dell/Western Digital (WD) 1TB 7200 RPM SATA HDD (Raw IO) thread count 1 4K IO size

In the above example the drive is a 1TB 7200 RPM 3.5 inch Dell (Western Digital) 3Gb SATA device doing raw (non file system) IO. Note the high IOPS rate with 100 percent sequential reads and a small IO size, which might be a result of locality of reference due to drive-level cache or buffering.
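
As a quick sanity check on the table above, bandwidth should be roughly IOPS multiplied by the IO size. For example, from a shell (figures taken from the 100 percent sequential read and 100 percent random read rows):

echo $(( 29736 * 4 ))    # 29,736 IOPS x 4 KBytes = 118,944 KB/sec (sequential read row)
echo $(( 100 * 4 ))      # 100 IOPS x 4 KBytes = 400 KB/sec (random read row)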

Some drives have larger buffers than others, from a couple of MB to 16MB (or more) of DRAM that can be used for read-ahead caching. Note that this level of cache is independent of a storage system, RAID adapter or controller or other forms and levels of buffering.

Does this mean you can expect or plan on getting those levels of performance?

I would not make that assumption, and thus this serves as an example of using metrics like these in the proper context.

Building off of the previous example, the following is using the same drive however with a 16K IO size.

IO Size | Workload Pattern | Avg. Resp (R+W) ms | Avg. IOPS (R+W) | Bandwidth KB/sec (R+W)
16KB | 100% Seq 100% Read | 0.1 | 7,658 | 122,537
16KB | 60% Seq 100% Read | 4.7 | 210 | 3,370
16KB | 30% Seq 100% Read | 7.7 | 130 | 2,080
16KB | 0% Seq 100% Read | 10.1 | 98 | 1,580
16KB | 100% Seq 60% Read | 3.5 | 282 | 4,522
16KB | 60% Seq 60% Read | 7.7 | 130 | 2,090
16KB | 30% Seq 60% Read | 9.3 | 107 | 1,715
16KB | 0% Seq 60% Read | 11.1 | 90 | 1,443
16KB | 100% Seq 30% Read | 6.0 | 165 | 2,644
16KB | 60% Seq 30% Read | 9.2 | 109 | 1,745
16KB | 30% Seq 30% Read | 11.0 | 90 | 1,450
16KB | 0% Seq 30% Read | 11.7 | 85 | 1,364
16KB | 100% Seq 0% Read | 8.5 | 117 | 1,874
16KB | 60% Seq 0% Read | 10.9 | 92 | 1,472
16KB | 30% Seq 0% Read | 11.8 | 84 | 1,353
16KB | 0% Seq 0% Read | 12.2 | 81 | 1,310

Dell/Western Digital (WD) 1TB 7200 RPM SATA HDD (Raw IO) thread count 1 16K IO size

The previous two examples are excerpts of a series of workload simulation tests (ok, you can call them benchmarks) that I have done to collect information, as well as try some different things out.

The following is an example of the summary for each test output that includes the IO size, workload pattern (reads, writes, random, sequential), duration of each workload step, totals for reads and writes, along with averages including IOPS, bandwidth and latency or response time.

disk iops

Want to see more numbers, speeds and feeds? Check out the following table, which will be updated with extra results as they become available.

Device | Vendor | Make | Model | Form Factor | Capacity | Interface | RPM Speed
HDD | HGST | Desktop | HK250-160 | 2.5 | 160GB | SATA | 5.4K
HDD | Seagate | Mobile | ST2000LM003 | 2.5 | 2TB | SATA | 5.4K
HDD | Fujitsu | Desktop | MHWZ160BH | 2.5 | 160GB | SATA | 7.2K
HDD | Seagate | Momentus | ST9160823AS | 2.5 | 160GB | SATA | 7.2K
HDD | Seagate | MomentusXT | ST95005620AS | 2.5 | 500GB | SATA | 7.2K(1)
HDD | Seagate | Barracuda | ST3500320AS | 3.5 | 500GB | SATA | 7.2K
HDD | WD/Dell | Enterprise | WD1003FBYX | 3.5 | 1TB | SATA | 7.2K
HDD | Seagate | Barracuda | ST3000DM01 | 3.5 | 3TB | SATA | 7.2K
HDD | Seagate | Desktop | ST4000DM000 | 3.5 | 4TB | SATA | HDD
HDD | Seagate | Capacity | ST6000NM00 | 3.5 | 6TB | SATA | HDD
HDD | Seagate | Capacity | ST6000NM00 | 3.5 | 6TB | 12GSAS | HDD
HDD | Seagate | Savio 10K.3 | ST9300603SS | 2.5 | 300GB | SAS | 10K
HDD | Seagate | Cheetah | ST3146855SS | 3.5 | 146GB | SAS | 15K
HDD | Seagate | Savio 15K.2 | ST9146852SS | 2.5 | 146GB | SAS | 15K
HDD | Seagate | Ent. 15K | ST600MP0003 | 2.5 | 600GB | SAS | 15K
SSHD | Seagate | Ent. Turbo | ST600MX0004 | 2.5 | 600GB | SAS | SSHD
SSD | Samsung | 840 Pro | MZ-7PD256 | 2.5 | 256GB | SATA | SSD
SSD | Seagate | 600 SSD | ST480HM000 | 2.5 | 480GB | SATA | SSD
SSD | Seagate | 1200 SSD | ST400FM0073 | 2.5 | 400GB | 12GSAS | SSD

Performance characteristics 1 worker (thread count) for RAW IO (non-file system)

Note: (1) Seagate Momentus XT is a Hybrid Hard Disk Drive (HHDD) based on a 7.2K 2.5 inch HDD with SLC nand flash integrated as a read buffer in addition to the normal DRAM buffer. This model is an XT I (4GB SLC nand flash); an XT II (8GB SLC nand flash) may be added at some future time.

As a starting point, these results are raw IO, with file system based information to be added soon along with more devices. These results are for tests with one worker or thread count; other results, such as with 16 workers or thread counts, will be added to show how those differ.

The above results include all reads, all writes, a mix of reads and writes, along with all random, sequential and mixed for each IO size. IO sizes include 4K, 8K, 16K, 32K, 64K, 128K, 256K, 512K, 1024K and 2048K. As with any workload simulation, benchmark or comparison test, take these results with a grain of salt as your mileage can and will vary. For example, you will see what I consider some very high IO rates with sequential reads even without file system buffering. These results might be due to locality of reference of IOs being resolved out of the drive's DRAM cache (read ahead), which varies in size for different devices. Use the vendor model numbers in the table above to check the manufacturer's specs on drive DRAM and other attributes.

If you are used to seeing 4K or 8K and wonder why anybody would be interested in some of the larger sizes, take a look at big fast data or cloud and object storage. For some of those applications 2048K may not seem all that big. Likewise, if you are used to the larger sizes, there are still applications doing smaller sizes. Sorry for those who like 512 byte or smaller IOs, as they are not included. Note that for all of these, unless indicated, a 512 byte standard sector or drive format is used as opposed to the emerging Advanced Format (AF) 4KB sector or block size. Watch for some more drive and device types to be added to the above, along with results for more workers or thread counts, along with file system and other scenarios.

Using VMware as part of a Server, Storage and IO (aka StorageIO) test platform

vmware vexpert

The above performance results were generated on Ubuntu 12.04 (since upgraded to 14.04) which was hosted on a VMware vSphere 5.1 (since upgraded to 5.5U2) purchased version (you can get the ESXi free version here) with vCenter enabled. I also have VMware Workstation installed on some of my Windows-based laptops for doing preliminary testing of scripts and other activity prior to running them on the larger server-based VMware environment. Other VMware tools include vCenter Converter, vSphere Client and the CLI. Note that other guest virtual machines (VMs) were idle during the tests (e.g. other guest VMs were quiet). You may experience different results if you run Ubuntu natively on a physical machine or with different adapters, processors and device configurations among many other variables (that was a disclaimer btw ;) ).
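
As an aside, if you want to double-check that other guests really are idle while a test runs, a couple of commands from the ESXi shell can help (a minimal sketch; VM IDs and names will vary by environment):

vim-cmd vmsvc/getallvms              # list registered VMs and their numeric IDs
vim-cmd vmsvc/power.getstate 12      # check the power state of a given VM (12 is an example ID)
esxtop                               # interactive view of CPU, memory, disk and network activity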

Storage I/O trends

All of the devices (HDD, HHDD, SSD’s including those not shown or published yet) were Raw Device Mapped (RDM) to the Ubuntu VM bypassing VMware file system.

Example of creating an RDM for local SAS or SATA direct attached device.

vmkfstools -z /vmfs/devices/disks/naa.600605b0005f125018e923064cc17e7c /vmfs/volumes/dat1/RDM_ST1500Z110S6M5.vmdk

The above uses the drive's address (found by doing a ls -l /dev/disks via the VMware shell command line) to create a vmdk container stored in a datastore. Note that the RDM being created does not actually store data in the .vmdk, it's there for VMware management operations.
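
If you want to confirm that the resulting .vmdk really is just a mapping (pointer) file, vmkfstools can query it; a quick sketch using the example path from above:

vmkfstools -q /vmfs/volumes/dat1/RDM_ST1500Z110S6M5.vmdk    # reports the vmdk as a raw device mapping and shows the mapped device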

If you are not familiar with how to create a RDM of a local SAS or SATA device, check out this post to learn how. This is important to note in that while VMware was used as a platform to support the guest operating systems (e.g. Ubuntu or Windows), the real devices are not being mapped through VMware virtual drives.

vmware iops

The above shows examples of RDM SAS and SATA devices along with other VMware devices and datastores. The next figure shows an example of a workload being run in the test environment.

vmware iops

One of the advantages of using VMware (or another hypervisor) with RDMs is that I can quickly define via software commands where a device gets attached to different operating systems (e.g. the other aspect of software defined storage). This means that after a test run, I can simply shut down Ubuntu, remove the RDM device from that guest's settings, move the device just tested to a Windows guest if needed and restart those VMs. All of that from wherever I happen to be working, without physically changing things or dealing with multi-boot or cabling issues.

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

So how many IOPS can a device do?

That depends, however have a look at the above information and results.

Check back from time to time here to see what is new or has been added including more drives, devices and other related themes.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

How many I/O iops can flash SSD or HDD do?

How many I/O IOPS can a flash SSD or HDD do with VMware?

sddc data infrastructure Storage I/O ssd trends

Updated 2/10/2018

A common question I run across is how many I/O operations per second (IOPS) a flash SSD or HDD storage device or system can do or give.

The answer is or should be it depends.

This is the first of a two-part series looking at storage performance, specifically drive or device (e.g. medium) characteristics across the HDD, HHDD and SSD that can be found in cloud, virtual, and legacy environments. This first part focuses on putting drive or device performance in context, with the second part looking at some workload characteristics (e.g. benchmarks).

What about cloud, tape summit resources, storage systems or appliances?

Let's leave those for a different discussion at another time.

Getting started

Part of my interest in tools, metrics that matter, measurements, analysis and forecasting ties back to having been a server, storage and IO performance and capacity planning analyst when I worked in IT. Another aspect ties back to also having been a sys admin as well as a business applications developer when on the IT customer side of things. This was followed by switching over to the vendor world, involved with among other things competitive positioning, customer design configuration, validation, simulation and benchmarking of HDD and SSD based solutions (e.g. life before becoming an analyst and advisory consultant).

Btw, if you happen to be interested in learning more about server, storage and IO performance and capacity planning, check out my first book Resilient Storage Networks (Elsevier) that has a bit of information on it. There is also coverage of metrics and planning in my two other books The Green and Virtual Data Center (CRC Press) and Cloud and Virtual Data Storage Networking (CRC Press). I have some copies of Resilient Storage Networks available at a special reader or viewer rate (essentially shipping and handling). If interested, drop me a note and I can fill you in on the details.

There are many rules of thumb (RUT) when it comes to metrics that matter such as IOPS, some of which are older while others may be guesses or measured in different ways. However, the answer is that it depends on many things, ranging from whether the device is a standalone hard disk drive (HDD), Hybrid HDD (HHDD) or Solid State Device (SSD), to whether it is attached to a storage system, appliance, or RAID adapter card among others.

Taking a step back, the big picture

hdd image
Various HDD, HHDD and SSD’s

Server, storage and I/O performance and benchmark fundamentals

Even if just looking at a HDD, there are many variables, ranging from the rotational speed or Revolutions Per Minute (RPM) to the interface, including 1.5Gb, 3.0Gb, 6Gb or 12Gb SAS or SATA, or 4Gb Fibre Channel. Simply using a RUT or number based on RPM can cause issues, particularly with 2.5 vs. 3.5 inch or enterprise vs. desktop drives. For example, some current generation 10K 2.5 inch HDDs can deliver the same or better performance than an older generation 3.5 inch 15K. Other drive factors (see this link for HDD fundamentals) include physical size such as 3.5 inch or 2.5 inch small form factor (SFF), enterprise, desktop or consumer class, and the amount of drive-level cache (DRAM). Space capacity of a drive can also have an impact, such as whether all or just a portion of a large or small capacity device is used. Not to mention what the drive is attached to, ranging from an internal SAS or SATA drive bay, a USB port, an HBA or RAID adapter card, or a storage system.
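
As a rough illustration of the RPM factor, average rotational latency is about half of one revolution, and a simple rule-of-thumb random IOPS estimate adds an average seek time to that latency. A quick sketch (the 8.5 ms seek below is an assumed typical 7.2K desktop figure, not a measured value):

echo "scale=2; 60000 / (7200 * 2)" | bc     # average rotational latency in ms for a 7.2K drive (about 4.2)
echo "scale=2; 60000 / (15000 * 2)" | bc    # about 2.0 ms for a 15K drive
echo "scale=0; 1000 / (8.5 + 4.17)" | bc    # rough random IOPS estimate for a 7.2K drive, in the same ballpark as the random 4K results in part II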

disk iops
HDD fundamentals

How about benchmark and performance marketing or comparison tricks, including delayed, deferred or asynchronous writes vs. synchronous or actually committed data to devices? Let's not forget about short stroking (only using a portion of a drive for better IOPS) or even long stroking (to get better bandwidth leveraging spiral transfers) among others.

Almost forgot, there are also thick, standard, thin and ultra thin drives in 2.5 and 3.5 inch form factors. What's the difference? The number of platters and read/write heads. Look at the following image showing various thickness 2.5 inch drives that have various numbers of platters to increase space capacity in a given density. Want to take a wild guess as to which one has the most space capacity in a given footprint? Also want to guess which type I use for removable disk based archives along with for onsite disk based backup targets (complementing my offsite cloud backups)?

types of disks
Thick, thin and ultra thin devices

Beyond physical and configuration items, there are also logical configuration considerations including the type of workload, large or small IOPS, random, sequential, reads, writes or mixed (various random, sequential, read, write, large and small IO). Other considerations include file system or raw device, the number of workers or concurrent IO threads, and the size of the target storage space area, which determines the impact of any locality of reference or buffering. Some other items include how long the test or workload simulation ran for, and whether the device was new or worn in before use, among other items.

Tools and the performance toolbox

Then there are the various tools for generating IO’s or workloads along with recording metrics such as reads, writes, response time and other information. Some examples (mix of free or for fee) include Bonnie, Iometer, Iorate, IOzone, Vdbench, TPC, SPC, Microsoft ESRP, SPEC and netmist, Swifttest, Vmark, DVDstore and PCmark 7 among many others. Some are focused just on the storage system and IO path while others are application specific thus exercising servers, storage and IO paths.

performance tools
Server, storage and IO performance toolbox

Having used Iometer since the late 90s, it has its place and is popular given its ease of use. Iometer is also long in the tooth and has its limits, including not much if any new development; nevertheless, I have it in the toolbox. I also have Futuremark PCmark 7 (full version), which it turns out has some interesting abilities to do more than exercise an entire Windows PC. For example, PCmark can be pointed at a secondary drive for doing IO.

PCmark can be handy when spinning up lots of virtual Windows systems with VMware (or other tools) pointing at a NAS or other shared storage device to generate real-world type activity. Something that can be handy for testing or stressing virtual desktop infrastructures (VDI) along with other storage systems, servers and solutions. I also have Vdbench among other tools in the toolbox, including Iorate which was used to drive the workloads shown below.

What I look for in a tool is how extensible the scripting capabilities are for defining various workloads, along with the capabilities of the test engine. A nice GUI is handy, which makes Iometer popular, and yes there are script capabilities with Iometer. That is also where Iometer is long in the tooth compared to some of the newer generation of tools that place more emphasis on extensibility vs. ease-of-use interfaces. This also assumes knowing what workloads to generate vs. simply kicking off some IOPS using default settings to see what happens.

Another handy type of tool is one for recording what's going on with a running system, including IOs, reads, writes, bandwidth or transfers, random and sequential among other things. This is where, when needed, I turn to something like HiMon from HyperIO; if you have not tried it, get in touch with Tom West over at HyperIO and tell him StorageIO sent you to get a demo or trial. HiMon is what I used for start, stop and boot testing among other things, being able to see IOs at the Windows file system level (or below) including very early in the boot or shutdown phase.

Here is a link to some other things I did a while back with HiMon to profile some Windows and VDI activity.

What’s the best tool or benchmark or workload generator?

The one that meets your needs, usually your applications or something as close as possible to it.

disk iops
Various 2.5 and 3.5 inch HDD, HHDD, SSD with different performance

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

That depends, however continue reading part II of this series to see some results for various types of drives and workloads.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Some things keep going around, Seagate ships 2 Billion HDD’s

Seagate

Seagate (@Seagate) announced today that it reached a milestone of having shipped 2 Billion hard disk drives (HDD's), something round that stores data which keeps growing. As part of their announcement, Seagate has a good infographic and facts page here going back to 1979 when it was founded as Shugart Technology (read about Al Shugart here).

By coincidence, just a few years before Seagate was founded, McDonald's (who makes round things as well) announced that they had served over 20 billion hamburgers. Thus McDonald's feeds the appetites of consumers hungry for a quick meal while Seagate feeds the information demands, perhaps while stopping for a quick breakfast, lunch, coffee or dinner. Speaking of things that go around (like HDD's), check out what NAS, NASA and NASCAR have in common, all of which are also involved in big data as well as little data.

Storage I/O industry trends image

Both Seagate and McDonald's have also expanded their menus of offerings over the years, maintaining their core products while expanding into new and adjacent areas given different appetites and preferences. After all, in the cloud, virtual or physical data center, also known as an information factory, not everything is the same either.

Cloud

Granted Seagate is helping to feed or fuel the internet along with traditional hungry demand for data, not to mention people and data are living longer, as well as getting larger.

Cloud, virtual server, big data and little data storage I/O image

In the case of Seagate and the other drive manufacturers, which have consolidated down to three (Toshiba, Seagate and Western Digital), the physical devices are getting smaller, however capacities are increasing.

Storage I/O

Why the continued growth? As mentioned, data is getting larger (big data and little data) and living longer, and there is also no such thing as a data or information recession. Consequently data storage is an important pillar of cloud, virtual and traditional information services, with HDD's remaining popular alongside nand flash solid state devices (SSD).

The Seagate infographic page can be seen here and is a good walk back in time for some, perhaps a history lesson for others. It goes back to the Sony Walkman which some might remember, the launch of the PC and Apple Macintosh in the 80s, Linux and the web in the 90s, and moves forward from then to now.

HDD
A few of my HDD’s, different types for various tasks.

If you think or believe HDD's are a dead technology, take a few minutes to view the infographic to update your insight on what has been an important aspect of computing and remains popular in cloud environments. On the other hand, if you believe that HDD's are still a core piece of computing and will remain so, including in future roles, have a look to see how things have progressed, maybe with some déjà vu.

Oh, for those who are thinking that the HDD did not begin in 1979, you are absolutely correct as it dates back to the 1950s. Here is a link to something that I wrote a few years ago on the HDD's 50th birthday, and it looks like it will easily celebrate 60 and beyond.

Additional related reading:
In the data center or information factory, not everything is the same
Hard Disk Drives (HDDs) for virtual and physical environments
Happy 50th, hard drive. But will you make it to 60?
Seagate to say goodbye to Cayman Islands, Hello Ireland
More Storage IO momentus HHDD and SSD moments part II
The Human Face of Big Data, a Book Review

Congratulations to Seagate, now how long until the 3 billionth served, excuse me, shipped HDD occurs?

Disclosure: It's been almost a month since my last visit to McDonald's or buying another HDD (or SSD) from Amazon.com.

Ok, nuff said

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Putting some VMware ESX storage tips together (Part I)

Have you spent time searching the VMware documentation, on-line forums, venues and books to figure out how to make a local dedicated direct attached storage (DAS) type device (e.g. SATA or SAS) a Raw Device Mapping (RDM)? Part two of this post looks at how to make an RDM using an internal SATA HDD.

Or how about how to make a Hybrid Hard Disk Drive (HHDD), which is faster than a regular Hard Disk Drive (HDD) on reads yet has more capacity and less cost than a Solid State Device (SSD), actually appear to VMware as an SSD?

Recently I had these and some other questions and spent some time looking around, thus this post highlights some great information I have found for addressing the above VMware challenges and some others.

VMware vExpert image

The SSD solution is via a post I found on fellow VMware vExpert Duncan Epping's Yellow Bricks site; if you are into VMware or server virtualization in general, and in particular a fan of high-availability (general or virtualization specific), add Duncan's site to your reading list. Duncan also has some great books to add to your bookshelves, including VMware vSphere 5.1 Clustering Deepdive (Volume 1) and VMware vSphere 5 Clustering Technical Deepdive that you can find at Amazon.com.

VMware vSphere 5 Clustering Technical Deepdive book image

Duncan's post shows how to fool VMware into thinking that a HDD is a SSD for testing or other purposes. Since I have some Seagate Momentus XT HHDDs that combine the capacity of a traditional HDD (and its cost) with read performance closer to a SSD (without the cost or capacity penalty), I was interested in trying Duncan's tip (here is a link to his tip). Essentially, Duncan's tip shows how to use esxcli storage nmp satp and esxcli storage core commands to make a non-SSD look like a SSD.

The commands that were used from the VMware shell per Duncan’s tip:

esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device mpx.vmhba0:C0:T1:L0 --option "enable_local enable_ssd"
esxcli storage core claiming reclaim -d mpx.vmhba0:C0:T1:L0
esxcli storage core device list --device=mpx.vmhba0:C0:T1:L0
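
If the claim rule applied after the reclaim, the device list output from the last command above should report the device as an SSD; a quick check using the same example device identifier:

esxcli storage core device list --device=mpx.vmhba0:C0:T1:L0 | grep -i "Is SSD"    # should show Is SSD: true once the rule is applied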

After all, if the HHDD is actually doing some of the work to boost performance and thus fool the OS or hypervisor into thinking it is faster than a HDD, why not tell the OS or hypervisor, in this case VMware ESX, that it is a SSD? So far I have not seen, nor do I expect to notice, anything different in terms of performance, as that boost already occurred going from a 7,200 RPM (7.2K) HDD to the HHDD.

If you know how to tell what type of HDD or SSD a device is by reading its sense code and model number information, you will recognize the circled device as a Seagate Momentus XT HHDD. This particular model is a Seagate Momentus XT II 750GB with 8GB of SLC nand flash SSD memory integrated inside the 2.5-inch drive.

Normally the Seagate HHDDs appear to the host operating system or whatever they are attached to as a Momentus 7200 RPM SATA type disk drive. Since there are no special device drivers, controllers, adapters or anything else, the Momentus XT type HHDDs are essentially plug and play. After a bit of time they start learning and caching things to boost read performance (read more about boosting read performance including Windows boot testing here).

Image of VMware vSphere vClient storage devices
Screen shot showing Seagate Momentus XT appearing as a SSD

Note that the HHDD (a Seagate Momentus XT II) is a 750GB 2.5 inch SATA drive that boosts read performance with the current firmware. Seagate has hinted that there could be a future firmware version to enable write caching or optimization; however, I have been waiting for a year.

Disclosure: Seagate gave me an evaluation unit of my first HHDD a couple of years ago and I then went on to buy several more from Amazon.com. I have not had a chance to try any Western Digital (WD) HHDDs yet, however I do have some of their HDDs. Perhaps I will hear something from them sometime in the future.

For those who are SSD fans or who actually have them, yes, I know SSDs are faster all around and that is why I have some, including in my Lenovo X1. Thus for write-intensive workloads go with a full SSD today if you can afford one, as I have with my Lenovo X1, which enables me to save large files faster (less time waiting). However, if you want the best of both worlds for a lab or other system that is doing more reads than writes, and need as much capacity as possible without breaking the budget, check out the HHDDs.

Thanks for the great tip and information Duncan, in part II of this post, read how to make an RDM using an internal SATA HDD.

 

Ok, nuff said (for now)…

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Spring (May) 2012 StorageIO news letter

StorageIO News Letter Image
Spring (May) 2012 News letter

Welcome to the Spring (May) 2012 edition of the Server and StorageIO Group (StorageIO) news letter. This follows the Fall (December) 2011 edition.

You can get access to this news letter via various social media venues (some are shown below) in addition to StorageIO web sites and subscriptions.

Click on the following links to view the Spring May 2012 edition as an HTML or PDF or, to go to the news letter page to view previous editions.

You can subscribe to the news letter by clicking here.

Enjoy this edition of the StorageIO newsletter, let me know your comments and feedback.

Nuff said for now

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Are large storage arrays dead at the hands of SSD?

Storage I/O trends

An industry trends and perspective.


Are large storage arrays dead at the hands of SSD? Short answer: NO, not yet.
There is still a place for traditional storage arrays or appliances, particularly those with extensive features, functionality and reliability, availability and serviceability (RAS). In other words, there is still a place for large (and small) storage arrays or appliances, including those with SSDs.

Is there a place for newer flash SSD storage systems, appliances and architectures? Yes
Similar to how traditional midrange storage arrays or appliances have found their roles vs. traditional higher-end so-called enterprise arrays. Think as an example EMC CLARiiON/VNX or HP EVA/P6000 or HDS AMS/HUS or NetApp FAS or IBM DS5000 or IBM V7000 among others vs. EMC Symmetrix/DMX/VMAX or HP P10000/3Par or HDS VSP/USP or IBM DS8000. In addition to traditional enterprise or high-end storage systems and midrange (also known as modular), there are also specialized appliances or targets such as for backup/restore and archiving. Also do not forget the IO performance SSD appliances like those from TMS among others that have been around for a while.

Is the role of large storage systems changing or evolving? Yes
Given their scale and ability to do large amounts of work in a dense footprint, for some the role of these systems is still mission critical tier 1 application and data support. For other environments, their role continues to evolve being used for high-density tier 2 bulk or even near-line storage for on-line access at scale.

Storage I/O trends

Does this mean there is competition between the old and new systems? Yes
In some circumstances, as we have seen already with SSD solutions. Some will position them as competing or replacements while others as complementing. For example, in the PCIe flash SSD card segment, EMC VFCache is positioned as complementing Dell, EMC, HDS, HP, IBM, NetApp, Oracle or other vendors' storage, vs. FusionIO who positions as a replacement for the above and others. Another scenario is how some SSD vendors have and continue to position their all-flash SSD arrays, using either drives or PCIe cards, to complement and coexist with other storage systems in an environment (e.g. data center level tiering) vs. as a replacement. Also keep in mind SSD solutions that support a mix of flash devices and traditional HDDs for capacity and cost savings or cloud access in the same solution.

Does this mean that the industry has adopted all-SSD appliances as the state of the art?
Avoid confusing industry adoption or talk with industry and customer deployment. They are similar, however one is focused on what the industry talks about or discusses as the state of the art or the future, while the other is what customers are actually doing. Certainly some of the new flash SSD appliance and storage startups such as Solidfire, Nexgen, Violin, Whiptail or veteran TMS among others have promising futures, some of which may actually be in play with the current SSD market shakeout and consolidation.

Does that mean everybody is going SSD?
SSD customer adoption and deployment continues to grow, however so too does the deployment of high-capacity HDDs.

Storage I/O trends

Do SSDs need HDDs, do HDDs need SSDs? Yes
Granted there are environments where needs can be addressed by all of one or the other. However, at least near term, there is a very strong market for tiering and a mix of SSD, some fast HDDs and lots of high-capacity HDDs to meet various needs including performance, availability, capacity, energy and economics. After all, there is no such thing as a data or information recession, yet budgets are tight or being reduced. Likewise, people and data are living longer.

What does this mean?
If there were no such thing as a data recession and budgets were a non-issue, perhaps everything could move to all-flash SSD storage systems. However, we also know that people and data are living longer along with changing data life-cycle patterns. There is also the need for performance to close the traditional data center IO performance to space capacity gap and bottlenecks, as well as to store and keep data longer.

There will continue to be a need for a mix of high capacity and high performance. More IO will continue to gravitate towards the IO appliances, however more data will settle in for longer-term retention and continued access as data life-cycles continue to evolve. Watch for more SSD and cache in the large systems, along with higher-density SAS-NL (SAS Near Line, e.g. high capacity) type drives appearing in those systems.

If you like shiny new toys or technology (SNTs) to buy, sell or talk about, there will be plenty of those to continue industry adoption, while for those who are focused on industry deployment, there will be a mix of the new and continued evolution of what is already implemented.

Related links
Industry adoption vs. industry deployment, is there a difference?

Industry trend: People plus data are aging and living longer

No Such Thing as an Information Recession

Changing Lifecycles & Data Footprint Reduction
What is the best kind of IO? The one you do not have to do
Is SSD dead? No, however some vendors might be
Speaking of speeding up business with SSD storage
Are Hard Disk Drives (HDD’s) getting too big?
IT and storage economics 101, supply and demand
Has SSD put Hard Disk Drives (HDD’s) On Endangered Species List?
Why SSD based arrays and storage appliances can be a good idea (Part I)
Researchers and marketers don’t agree on future of nand flash SSD
EMC VFCache respinning SSD and intelligent caching (Part I)
SSD options for Virtual (and Physical) Environments Part I: Spinning up to speed on SSD

Ok, nuff said for now

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Is SSD dead? No, however some vendors might be

Storage I/O trends

Is SSD dead? No, however some vendors might be

In a recent conversation with Dave Raffo about the nand flash solid state disk (SSD) market, we talked about industry trends, perspectives and where the market is now as well as headed. One of my comments is, has been and will remain that the industry has still not reached anywhere near full potential for deployment of SSD for enterprise, SMB and other data storage needs. Granted, there is broad adoption in terms of discussion or conversation and plenty of early adopters.

SSD and in particular nand flash is anything but dead; in fact, in the big broad picture of things, it is still very early in the game. Sure, for those who cover and crave the newest, latest and greatest technology to talk about, nand flash SSD might seem old, yesterday's news, long in the tooth and time for something else. However, for those who are focused on deployment vs. adoption, such as customers, nand flash SSD in its many packaging options has still not reached its full potential.

Despite the hype and fanfare from CEOs or their evangelists along with loyal followers of startups that help drive industry adoption (e.g. what is talked about), there is still lots of upside growth in customer-driven industry deployment (actually buying, installing and using) for nand flash SSD.

What about broad customer deployments?

Sure, there are the marquee customer success stories for which you need a high-capacity SAS or SATA drive to hold all the YouTube videos, slide decks and press releases.

However, have we truly reached broad customer deployment or broad industry adoption?

Hence, I see more startups coming into the market space, and some exiting on their own, via mergers and acquisition or other means.

Will we see a feeding frenzy or IPO craze as with earlier technology hype cycles? IMHO there will be some companies that get the big deal, and some that survive as new players running as a business vs. running to be acquired or IPO. Others will survive by evolving into something else, while others will join the where-are-they-now list.

If you are an SSD startup CEO, CxO, or marketer, their PR, evangelist or loyal follower, do not worry, as the SSD market and even nand flash is far from being dead. On the other hand, if you think that it has hit its full stride, you are missing either the bigger picture, or are too busy patting yourselves on the back for a job well done. There is much more opportunity out there and not even all the low hanging fruit has been picked yet.

Check out the conversation with Dave Raffo along with comments from others here.

Related links on storage IO metrics and SSD performance
What is the best kind of IO? The one you do not have to do
Is SSD dead? No, however some vendors might be
Storage and IO metrics that matter
IO IO it is off to Storage and IO metrics we go
SSD and Storage System Performance
Speaking of speeding up business with SSD storage
Are Hard Disk Drives (HDD’s) getting too big?
Has SSD put Hard Disk Drives (HDD’s) On Endangered Species List?
Why SSD based arrays and storage appliances can be a good idea (Part I)
IT and storage economics 101, supply and demand
Researchers and marketers dont agree on future of nand flash SSD
EMC VFCache respinning SSD and intelligent caching (Part I)
SSD options for Virtual (and Physical) Environments Part I: Spinning up to speed on SSD
SSD options for Virtual (and Physical) Environments Part II: The call to duty, SSD endurance
SSD options for Virtual (and Physical) Environments Part III: What type of SSD is best for you?
SSD options for Virtual (and Physical) Environments Part IV: What type of SSD is best for your needs

Ok, nuff said for now

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

More storage and IO metrics that matter

It is great to see more conversations and coverage around storage metrics that matter beyond simply focusing on cost per GByte or TByte (e.g. space capacity). Likewise, it is also good to see conversations expanding beyond data footprint reduction (DFR) from a space capacity savings or reduction ratio to also address data movement and transfer rates. Also good to see is an increase in discussion around input/output operations per second (IOPS), tying into conversations from virtualization, VDI and cloud to Solid State Devices (SSD).

Other storage and IO metrics that matter include latency or response time, which is how fast work is done, or time spent. Latency also ties to IOPS in that as more work of various sizes (random or sequential, reads or writes) arrives to be done, queue depths are an indicator of how well work is flowing. Another storage and IO metric that matters is availability, because without it, performance or capacity can be affected. Likewise, without performance, availability can be affected.
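
One handy relationship that ties IOPS, response time and queue depth together is Little's Law: the average number of outstanding IOs is roughly IOPS multiplied by response time. A quick sketch with assumed example numbers:

echo "scale=3; 100 * 0.010" | bc     # 100 IOPS at 10ms (0.010 sec) response time = about 1 IO outstanding
echo "scale=3; 5000 * 0.002" | bc    # 5,000 IOPS at 2ms = about 10 IOs outstanding on average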

Needless to say, I am just scratching the surface here with storage and IO metrics that matter for physical, virtual and cloud environments, from servers to networks to storage.

Here is a link to a post I did called IO, IO, it is off to storage and IO metrics we go that ties in themes of performance measurements and solid-state disk (SSD) among others. Also check out this piece about why VASA (VMware storage analysis metrics) is important to have your VMware CASA along with Windows boot storage and IO performance for VDI and traditional planning purposes.

Check out this post about metrics and measurements that matter along with this conversation about IOPs, capacity, bandwidth and purchasing discussion topics.

Related links on storage IO metrics and SSD performance
What is the best kind of IO? The one you do not have to do
Is SSD dead? No, however some vendors might be
Storage and IO metrics that matter
IO IO it is off to Storage and IO metrics we go
SSD and Storage System Performance
Speaking of speeding up business with SSD storage
Are Hard Disk Drives (HDD’s) getting too big?
Has SSD put Hard Disk Drives (HDD’s) On Endangered Species List?
Why SSD based arrays and storage appliances can be a good idea (Part I)
IT and storage economics 101, supply and demand
Researchers and marketers dont agree on future of nand flash SSD
EMC VFCache respinning SSD and intelligent caching (Part I)
SSD options for Virtual (and Physical) Environments Part I: Spinning up to speed on SSD
SSD options for Virtual (and Physical) Environments Part II: The call to duty, SSD endurance
SSD options for Virtual (and Physical) Environments Part III: What type of SSD is best for you?
SSD options for Virtual (and Physical) Environments Part IV: What type of SSD is best for your needs

Ok, nuff said for now

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved