IBM Server Side Storage I/O SSD Flash Cache Software

Storage I/O trends


As I often say, the best server storage I/O or IOP is the one that you do not have to do. The second best storage I/O or IOP is the one with the least impact or that can be done in a cost-effective way. Likewise, the question is not if solid-state devices (SSD) including nand flash are in your future, rather when, where, why, with what, how much and from whom. Location also matters when it comes to SSD including nand flash, with different environments and applications leveraging different placement (locality) options, not to mention how much performance you need vs. want.

As part of their $1 billion USD (to be spent over three years, or roughly $333 million per year) Flash Ahead initiative, IBM has announced their Flash Cache Storage Accelerator (FCSA) server software. While IBM did not use the term (congratulations and thank you btw), some creative marketer might want to try calling this Software Defined Cache (SDC) or Software Defined SSD (SDSSD); if that occurs, apologies in advance ;). Keep in mind that it was about a year ago this time when IBM announced that they were acquiring SSD industry veteran Texas Memory Systems (TMS).

What was announced, introducing the Flash Cache Storage Accelerator or FCSA

With this announcement of FCSA slated for customer general availability by the end of August, IBM joins EMC and NetApp among other storage systems vendors who have developed their own, or have collaborated on, server-side IO optimization and cache software. Some of the other startup and established vendors with IO optimization, performance acceleration and caching software include DataRam (RAMDisk), FusionIO, Infinio (NFS for VMware), Pernix (block for VMware), Proximal and SANdisk (which bought FlashSoft) among others.

Read more about IBM Flash Cache Software (FCSA) including various questions and perspectives in part two of this two-part post located here.

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Part II: IBM Server Side Storage I/O SSD Flash Cache Software

Storage I/O trends

Part II IBM Server Flash Cache Storage I/O accelerator for SSD

This is the second in a two-part post series on IBM’s Flash Cache Storage Accelerator (FCSA) for Solid State Device (SSD) storage announced today. You can view part I of the IBM FCSA announcement synopsis here.

Some FCSA SSD cache questions and perspectives

What is FCSA?
FCSA is a server-side storage I/O or IOP caching software tool that makes use of local (server-side) nand flash SSD (PCIe cards or drives). As a cache tool (view the IBM flash site here) FCSA provides persistent read caching on IBM servers (xSeries, Flex and Blade x86 based systems) with write-through caching (e.g. write data is also cached for later reads) while writes go directly to block attached storage including SANs. Back-end storage can be iSCSI, SAS, FC or FCoE based block systems from IBM or others, including all SSD, hybrid SSD or traditional HDD based solutions.

How is this different from just using a dedicated PCIe nand flash SSD card?
FCSA complements those by using them as persistent storage to cache storage I/O reads and boost performance. By using PCIe nand flash cards or SSD drives, FCSA and other storage I/O cache optimization tools free up valuable server-side DRAM from having to be used as a read cache on the servers. In addition, caching tools such as FCSA keep locally cached reads closer to the applications on the servers (e.g. locality of reference), reducing the impact on back-end shared block storage systems.

What is FCSA for?
With storage I/O or IOPS and application performance in general, location matters due to locality of reference, hence the need for different approaches in various environments. IBM FCSA is storage I/O caching software that reduces the impact of applications having to do random read operations. In addition to caching reads, FCSA also has a write-through cache, which means that while data is written to back-end block storage over iSCSI, SAS, FC or FCoE (IBM or other vendors), a copy of the data is cached for later reads. Thus while the best storage I/O is the one that does not have to be done (e.g. can be resolved from cache), the second best would be writes going to a storage system that are not competing with read requests (those being handled via cache).
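
To make the read cache with write-through behavior more concrete, here is a minimal Python sketch (hypothetical class and method names, not IBM's actual implementation or API) of reads being served from local flash when possible, while writes always land on the back-end storage and leave a copy behind for later reads.

```python
class RamBackend:
    """Stand-in for a back-end block device (e.g. a SAN LUN); illustrative only."""
    def __init__(self):
        self.blocks = {}
    def read(self, lba):
        return self.blocks.get(lba, b"\x00" * 512)
    def write(self, lba, data):
        self.blocks[lba] = data


class WriteThroughReadCache:
    """Minimal sketch of a server-side read cache with write-through writes."""
    def __init__(self, backend):
        self.backend = backend
        self.flash = {}                      # stands in for local nand flash SSD

    def read(self, lba):
        if lba in self.flash:                # best I/O: resolved locally, no SAN trip
            return self.flash[lba]
        data = self.backend.read(lba)        # cache miss goes to back-end storage
        self.flash[lba] = data               # populate cache for later reads
        return data

    def write(self, lba, data):
        self.backend.write(lba, data)        # write-through: always lands on back-end
        self.flash[lba] = data               # copy kept locally for later reads


cache = WriteThroughReadCache(RamBackend())
cache.write(42, b"hello")                    # goes to back-end, cached locally
print(cache.read(42))                        # served from the local flash cache
```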

Storage I/O trends

Who else is doing this?
This is similar to what EMC initially announced and released in February 2012 with VFCache (since renamed XtremSW), along with caching and IO optimization software from others (e.g. SANdisk, Proximal and Pernix among others).

Does this replace IBM EasyTier?
The simple answer is no; one is for tiering (e.g. EasyTier), the other is for IO caching and optimization (e.g. FCSA).

Does this replace or compete with other IBM SSD technologies?
As with anything, it is possible to find a way to make or view it as competitive. However, in general FCSA complements other IBM storage I/O optimization and management software tools such as EasyTier, as well as leverages and coexists with their various SSD products (from PCIe cards to drives to drive shelves to all SSD and hybrid SSD solutions).

How does FCSA work?
The FCSA software works either in physical machine (PM) bare metal mode with Microsoft Windows operating systems (OS) such as Server 2008 and 2012 among others, or with *nix support for RedHat Linux, as well as in a VMware virtual machine (VM) environment. In a VMware environment, High Availability (HA), DRS and vMotion services and capabilities are supported. Hopefully it will be sooner vs. later that we hear IBM do a follow-up announcement (pure speculation and wishful thinking) on support for more hypervisors (e.g. Hyper-V, Xen, KVM) along with CentOS, Ubuntu or Power based systems including IBM pSeries. Read more about IBM Pure and Flex systems here.

What about server CPU and DRAM overhead?
As should be expected, a minimal amount of server DRAM (e.g. main memory) and CPU processing cycles are used to support the FCSA software and its drivers. Note the reason I say "as should be expected" is that you cannot have software running on a server doing any type of work without it needing some amount of DRAM and processing cycles. Granted, some vendors will try to spin and say that no server-side DRAM or CPU is consumed, which would only be true if they were completely external to the server (VM or PM). The important thing is to understand how much CPU and DRAM are consumed, along with the corresponding effectiveness benefit that is derived.

Storage I/O trends

Does FCSA work with NAS (NFS or CIFS) back-end storage?
No, this is a server-side, block-only cache solution. Having said that, if your applications or server are presenting shared storage to others (e.g. out the front-end) as NAS (NFS, CIFS, HDFS) using block storage on the back-end, then FCSA can cache the storage I/O going to those back-end block devices.

Is this an appliance?
The short and simple answer is no; however, I would not be surprised to hear some creative software defined marketer try to spin it as a flash cache software appliance. What this means is that FCSA is simply IO and storage optimization software that caches to boost read performance for VM and PM servers.

What does this hardware or storage agnostic stuff mean?
Simple, it means that FCSA can work with various nand flash PCIe cards or flash SSD drives installed in servers, as well as with various back-end block storage including SAN from IBM or others. This includes block storage attached via iSCSI, SAS, FC or FCoE.

What is the difference between Easytier and FCSA?
Simple, FCSA provides read acceleration via caching, which in turn should offload some reads from the storage systems so that they can focus on handling writes or read-ahead operations. EasyTier, on the other hand, is, as its name implies, for tiering or movement of data in a more deterministic fashion.

How do you get FCSA?
It is software that you buy from IBM that runs on an IBM x86 based server. It is licensed on a per-server basis including one year of service and support. IBM has also indicated that they have volume or multiple-server licensing options.

Storage I/O trends

Does this mean IBM is competing with other software based IO optimization and cache tool vendors?
IBM is focusing on selling and adding value to their server solutions. Thus while you can buy the software from IBM for their servers (e.g. no bundling required), you cannot buy the software to run on your AMD/Seamicro, Cisco (including EMC/VCE and NetApp), Dell, Fujitsu, HDS, HP, Lenovo, Oracle or SuperMicro among other vendors' servers.

Will this work on non-IBM servers?
IBM is only supporting FCSA on IBM x86 based servers; however, you can buy the software without having to buy a solution bundle (e.g. servers or storage).

What is this Cooperative Caching stuff?
Cooperative caching takes the next step from a simple read cache with write-through to also support cache coherency in a shared environment, as well as leverage tighter integration between applications or guest operating systems and storage systems. For example, applications can work with storage systems to make intelligent, predictive, informed decisions on what to pre-fetch or read ahead and cache, as well as enable cache warming on restart. Another example is that in a shared storage environment, if one server makes a change to a shared LUN or volume, the local server-side caches are also updated to prevent stale or inconsistent reads from occurring.

Can FCSA use multiple nand flash SSD devices on the same server?
Yes, IBM FCSA supports use of multiple server-side PCIe and/or drive based SSD devices.

How is cache coherency maintained including during a reboot?
While data stored in the nand flash SSD device is persistent, it's up to the server and applications working with the storage systems to decide whether there is coherent or stale data that needs to be refreshed. Likewise, since FCSA is server-side and back-end storage system or SAN agnostic, without cooperative caching it will not know if the underlying data for a storage volume changed unless notified by another server that modified it. Thus if you are using shared back-end storage including SAN, do your due diligence to make sure multi-host access to the same LUNs or volumes is coordinated with some server-side software to support cache coherency, something that applies to all vendors.
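
To illustrate the coherency point, here is a rough sketch (hypothetical names and a made-up notification hook, not any vendor's actual protocol) of the kind of invalidation handling that keeps a server-side cache from serving stale reads when another host changes a shared LUN.

```python
class SharedLunReadCache:
    """Sketch of invalidation-style coherency for a cache in front of a shared LUN."""

    def __init__(self, backend):
        self.backend = backend               # shared back-end volume (duck-typed)
        self.cache = {}                      # this host's locally cached blocks

    def read(self, lba):
        if lba not in self.cache:
            self.cache[lba] = self.backend.read(lba)
        return self.cache[lba]

    def on_peer_write(self, lbas):
        # Notification (from another host or the array) that these blocks changed.
        # Without some signal like this, a server-side cache has no way to know
        # the underlying data moved out from under it.
        for lba in lbas:
            self.cache.pop(lba, None)        # drop stale entries; next read refetches
```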

Storage I/O trends

What about cache warming or reloading of the read cache?
Some vendors have tightly integrated caching software and storage systems, something IBM refers to as cooperative caching, that have the ability to re-warm the cache. With solutions that support cache re-warming, the cache software and storage systems work together to maintain cache coherency while pre-loading data from the underlying storage system based on hot bands or other profiles and experience. As of this announcement, FCSA does not support cache warming on its own.

Does IBM have service or tools to complement FCSA?
Yes, IBM has an assessment, profiling and planning tool that is available on a free consultation services basis with a technician to check your environment. Of course, the next logical step would be for IBM to make the tool available via free download or on some other basis as well.

Do I recommend and have I tried FCSA?
On paper, or via WebEx, YouTube or other venues, FCSA looks interesting and capable, a good fit for some environments, particularly if IBM server-based. However, since my PM and VMware VM based servers are from other vendors, and FCSA only runs on IBM servers, I have not actually given it a hands-on test drive yet. Thus if you are looking at storage I/O optimization and caching software tools for your VM or PM environment, check out IBM FCSA to see if it meets your needs.

Storage I/O trends

General comments

It is great to see server and storage systems vendors add value to their solutions with I/O and performance optimization as well as caching software tools. However, I am also concerned with the growing numbers of different software tools that only work with one vendor’s servers or storage systems, or at least are supported as such.

This reminds me of a time not all that long ago (ok, for some longer than others) when we had a proliferation of different host bus adapter (HBA) drivers and pathing drivers from various vendors. The result is a hodge podge (a technical term) of software running on different operating systems, hypervisors, PMs, VMs and storage systems, all of which needs to be managed. On the other hand, for the time being perhaps the benefit will outweigh the pain of having different tools. That is where there are options, from server-side vendor centric to storage system focused to third-party software tool providers.

Another consideration is that some tools work only in VMware environments, others support multiple hypervisors, while others also support bare metal servers or PMs. Which applies to your environment will of course depend. After all, if you are an all-VMware shop, the fact that many of the caching tools tend to be VMware focused gives you more options than those who still run predominantly PM environments.

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Viking SATADIMM: Nand flash SATA SSD in DDR3 DIMM slot?

Storage I/O trends

Today computer and data storage memory vendor Viking announced that SSD vendor SolidFire has deployed Viking's SATADIMM modules in the DDR3 DIMM (e.g. Random Access Memory (RAM) main memory) slots of its SF SSD based storage solution.

solidfire ssd storage with satadimm
SolidFire SF solution with SATADIMM via Viking

Nand flash SATA SSD in a DDR3 DIMM slot?

Per Viking, SolidFire uses the SATADIMMs as boot devices and cache to complement the normal SSD drives used in their SF SSD storage grid or cluster. For those not familiar, SolidFire SF storage systems or appliances are based on industry standard servers that are populated with SSD devices, which in turn are interconnected with other nodes (servers) to create a grid or cluster of SSD performance and space capacity. Thus as nodes are added, performance, availability and capacity also increase, all of which is accessed via iSCSI. Learn more about SolidFire SF solutions on their website here.

Here is the press release that Viking put out today:

Viking Technology SATADIMM Increases SSD Capacity in SolidFire’s Storage System (Press Release)

Viking Technology’s SATADIMM enables higher total SSD capacity for SolidFire systems, offering cloud infrastructure providers an optimized and more powerful solution

FOOTHILL RANCH, Calif., August 12, 2013 – Viking Technology, an industry leading supplier of Solid State Drives (SSDs), Non-Volatile Dual In-line Memory Module (NVDIMMs), and DRAM, today announced that SolidFire has selected its SATADIMM SSD as both the cache SSD and boot volume SSD for their storage nodes. Viking Technology’s SATADIMM SSD enables SolidFire to offer enhanced products by increasing both the number and the total capacity of SSDs in their solution.

“The Viking SATADIMM gives us an additional SSD within the chassis allowing us to dedicate more drives towards storage capacity, while storing boot and metadata information securely inside the system,” says Adam Carter, Director of Product Management at SolidFire. “Viking’s SATADIMM technology is unique in the market and an important part of our hardware design.”

SATADIMM is an enterprise-class SSD in a Dual In-line Memory Module (DIMM) form factor that resides within any empty DDR3 DIMM socket. The drive enables SSD caching and boot capabilities without using a hard disk drive bay. The integration of Viking Technology’s SATADIMM not only boosts overall system performance but allows SolidFire to minimize potential human errors associated with data center management, such as accidentally removing a boot or cache drive when replacing an adjacent failed drive.

“We are excited to support SolidFire with an optimal solid state solution that delivers increased value to their customers compared to traditional SSDs,” says Adrian Proctor, VP of Marketing, Viking Technology. “SATADIMM is a solid state drive that takes advantage of existing empty DDR3 sockets and provides a valuable increase in both performance and capacity.”

SATADIMM is a 6Gb SATA SSD with capacities up to 512GB. A next generation SAS solution with capacities of 1TB & 2TB will be available early in 2014. For more information, visit our website www.vikingtechnology.com or email us at sales@vikingtechnology.com.

Sales information is available at: www.vikingtechnology.com, via email at sales@vikingtechnology.com or by calling (949) 643-7255.

About Viking Technology Viking Technology is recognized as a leader in NVDIMM technology. Supporting a broad range of memory solutions that bridge DRAM and SSD, Viking delivers solutions to OEMs in the enterprise, high-performance computing, industrial and the telecommunications markets. Viking Technology is a division of Sanmina Corporation (Nasdaq: SANM), a leading Electronics Manufacturing Services (EMS) provider. More information is available at www.vikingtechnology.com.

About SolidFire SolidFire is the market leader in high-performance data storage systems designed for large-scale public and private cloud infrastructure. Leveraging an all-flash scale-out architecture with patented volume-level quality of service (QoS) control, providers can now guarantee storage performance to thousands of applications within a shared infrastructure. In-line data reduction techniques along with system-wide automation are fueling new block-storage services and advancing the way the world uses the cloud.

What’s inside the press release

On the surface this might cause some to jump to the conclusion that the nand flash SSD is being accessed via the fast memory bus normally used for DRAM (e.g. main memory) in a server or storage system controller. For some this might even lead to the conclusion that Viking has figured out a way to read and write nand flash via a DDR3 DIMM memory slot while doing so with the Serial ATA (SATA) protocol, enabling server boot and use by any operating system or hypervisor (e.g. VMware vSphere or ESXi, Microsoft Hyper-V, Xen or KVM among others).

Note for those not familiar or needing a refresh on DRAM, DIMM and related items, here is an excerpt from Chapter 7 (Servers – Physical, Virtual and Software) from my book "The Green and Virtual Data Center" (CRC Press).

7.2.2 Memory

Computers rely on some form of memory ranging from internal registers, local on-board processor Level 1 (L1) and Level 2 (L2) caches, random accessible memory (RAM), non-volatile RAM (NVRAM) or Flash along with external disk storage. Memory, which includes external disk storage, is used for storing operating system software along with associated tools or utilities, application programs and data. Read more of the excerpt here…

Is SATADIMM memory bus nand flash SSD storage?

In short no.

Some vendors or their surrogates might be tempted to spin such a story by masking some details to allow your imagination to run wild a bit. When I saw the press release announcement I reached out to Tinh Ngo (Director Marketing Communications) over at Viking with some questions. I was expecting the usual marketing spin story, dancing around the questions with long answers or simply not responding with anything of substance (or that requires some substance to believe). What I found, however, was the opposite, and thus I want to share with you some of the questions and answers.

So what actually is SATADIMM? See for yourself in the following image (click on it to view or Viking site).

Via Viking website, click on image or here to learn more about SATADIMM

Does SATADIMM actually move data via DDR3 and memory bus? No, SATADIMM only draws power from it (yes nand flash does need power when in use contrary to a myth I was told about).

Wait, then how is data moved and how does it get to and through the SATA IO stack (hardware and software)?

Simple, there is a cable connector that attaches to the SATADIMM and in turn attaches to an internal SATA port. Or, using a different connector cable, attach the SATADIMMs (up to four) to a standard internal SAS port such as on a main board, HBA, RAID or caching adapter.

industry trend

Does that mean that Viking and whoever uses SATADIMM is not actually moving data or implementing SATA via the memory bus and DDR3 DIMM sockets? That would be correct; data movement occurs via cable connection to standard SATA or SAS ports.

Wait, why would I give up a DDR3 DIMM socket in my server that could be used for more DRAM? Great question, and the answer depends on whether you need more DRAM or more nand flash. If you are out of drive slots or PCIe card slots and have enough DRAM for your needs along with available DDR3 slots, you can stuff more nand flash into those locations assuming you have SAS or SATA connectivity.

satadimm
SATADIMM with SATA connector top right via Viking

satadimm sata connector
SATADIMM SATA connector via Viking

satadimm sas connector
SATADIMM SAS (Internal) connector via Viking

Why not just use the onboard USB ports and plug in some high-capacity USB thumb drives to cut cost? If that is your primary objective it would probably work, and I can also think of some other ways to cut cost. However those are probably not the primary tenets that people looking to deploy something like SATADIMM would be looking for.

What are the storage capacities that can be placed on the SATADIMM? They are available in different sizes up to 400GB for SLC and 480GB for MLC. Viking indicated that there are larger capacities and faster 12Gb SAS interfaces in the works which would be more of a surprise if there were not. Learn more about current product specifications here.

Good questions. Attached are three images that sort of illustrate the connector. As well, why not a USB drive? Well, there are customers that put 12 of these in a system (each with up to 480GB usable capacity), which equates to roughly an added 5.7TB inside the box without touching the drive bays (left for mass HDDs). You will then need to RAID/connect all the SATADIMMs via an HBA.
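
For what it is worth, the capacity math behind that answer checks out (assuming twelve 480GB modules, counted in base 10 as drive vendors do):

```python
# Rough capacity check for the example above (12 x 480GB SATADIMMs)
modules = 12
gb_per_module = 480

total_gb = modules * gb_per_module          # 5,760 GB
total_tb = total_gb / 1000                  # base 10 TB, as drive vendors count
print(f"{total_gb} GB, or about {total_tb:.2f} TB inside the box")  # ~5.76 TB
```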

How fast is the SATADIMM and does putting it into a DDR3 slot speed things up or slow them down? Viking has some basic performance information on their site (here). However, performance should generally be the same as or similar to a regular SAS or SATA SSD drive, although keep SSD metrics and performance in the proper context. Also keep in mind that the DDR3 DIMM slot is only being used for power and not for actual data movement.

Is the SATADIMM using 3Gbs or 6Gbs SATA? Good question, today it is 6Gb SATA (remember that SATA can attach to a SAS port, however not vice versa). Let's see if Viking responds in the comments with more, including RAID support (hardware or software) along with other insight such as UNMAP, TRIM and Advanced Format (AF) 4KByte blocks among other things.

Have I actually tried SATADIMM yet? No, not yet. However, I would like to give it a test drive and workout if one were to show up on my doorstep, and with disclosure, share the results if applicable.

industry trend

Future of nand flash in DRAM DIMM sockets

Keep in mind that someday nand flash will actually be seen on the memory bus not only in a WebEx or PowerPoint demo preso (e.g. similar to what Diablo Technologies is previewing), but also in real use, for example what Micron earlier this year predicted for flash on DDR4 (more on DDR3 vs. DDR4 here).

Is SATADIMM the best nand flash SSD approach for every solution or environment? No, however it does give some interesting options for those who are PCIe card or HDD and SSD drive slot constrained and also have available DDR3 DIMM sockets. As to price, check with Viking; I wish I could say tell them Greg from StorageIO sent you for a good value, however I am not sure what they would say or do.

Related more reading:
How much storage performance do you want vs. need?
Can RAID extend the life of nand flash SSD?
Can we get a side of context with them IOPS and other storage metrics?
SSD & Real Estate: Location, Location, Location
What is the best kind of IO? The one you do not have to do
SSD, flash and DRAM, DejaVu or something new?

Ok, nuff said (for now).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Server and Storage IO Memory: DRAM and nand flash

Storage I/O trends

DRAM, DIMM, DDR3, nand flash memory, SSD, stating what’s often assumed

Often what's assumed is not always the case. For example, in and around server, storage and IO networking circles, including virtual as well as cloud environments, terms such as nand (Negated AND or NOT AND) flash memory (aka Solid State Device or SSD), DRAM (Dynamic Random Access Memory), DDR3 (Double Data Rate 3), not to mention DIMM (Dual Inline Memory Module), get tossed around with the assumption that everybody must know what they mean.

On the other hand, I find plenty of people who are not sure what those, among other terms or things, are; sometimes they are even embarrassed to ask, particularly if they are a self-proclaimed expert.

So for those who need a refresh or primer, here you go, an excerpt from Chapter 7 (Servers – Physical, Virtual and Software) from my book "The Green and Virtual Data Center" (CRC Press) available at Amazon.com and other global venues in print and ebook formats.

7.2.2 Memory

Computers rely on some form of memory ranging from internal registers, local on-board processor Level 1 (L1) and Level 2 (L2) caches, random accessible memory (RAM), non-volatile RAM (NVRAM) or nand Flash (SSD) along with external disk storage. Memory, which includes external disk storage, is used for storing operating system software along with associated tools or utilities, application programs and data. Main memory or RAM, also known as dynamic RAM (DRAM) chips, is packaged in different ways with a common form being dual inline memory modules (DIMMs) for notebook or laptop, desktop PC and servers.

RAM main memory on a server is the fastest form of memory, second only to internal processor or chip based registers, L1, L2 or local memory. RAM and processor based memories are volatile and non-persistent in that when power is removed, the contents of memory are lost. As a result, some form of persistent memory is needed to keep programs and data when power is removed. Read only memory (ROM) and NVRAM are both persistent forms of memory in that their contents are not lost when power is removed. The amount of RAM that can be installed into a server will vary with specific architecture implementation and operating software being used. In addition to memory capacity and packaging format, the speed of memory is also important to be able to move data and programs quickly to avoid internal bottlenecks. Memory bandwidth performance increases with the width of the memory bus in bits and frequency in MHz. For example, moving 8 bytes on a 64 bit bus in parallel at the same time at 100MHz provides a theoretical 800MByte/sec speed.
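
As a quick aside from the excerpt, that bandwidth arithmetic works out as follows (theoretical peak only, ignoring protocol overhead and memory channel details):

```python
# Theoretical memory bandwidth = bus width (bytes) x transfer rate
bus_width_bits = 64
transfers_per_second = 100_000_000          # 100 MHz, one transfer per cycle

bytes_per_transfer = bus_width_bits // 8    # 8 bytes moved in parallel
bandwidth = bytes_per_transfer * transfers_per_second
print(f"{bandwidth / 1_000_000:.0f} MByte/sec")   # 800 MByte/sec theoretical peak
```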

To improve availability and increase the level of persistence, some servers include battery backed up RAM or cache to protect data in the event of a power loss. Another technique to protect memory data on some servers is memory mirroring where twice the amount of memory is installed and divided into two groups. Each group of memory has a copy of data being stored so that in the event of a memory failure beyond those correctable with standard parity and error correction code (ECC) no data is lost. In addition to being fast, RAM based memories are also more expensive and used in smaller quantities compared to external persistent memories such as magnetic hard disk drives, magnetic tape or optical based memory medias.

Memory diagram
Memory and Storage Pyramid

The above shows a tiered memory model that may look familiar as the bottom part is often expanded to show tiered storage. At the top of the memory pyramid is high-speed processor memory followed by RAM, ROM, NVRAM and FLASH along with many forms of external memory commonly called storage. More detail about tiered storage is covered in chapter 8 (Data Storage – Disk, Tape, Optical, and Memory). In addition to being slower and lower cost than RAM based memories, disk storage along with NVRAM and FLASH based memory devices are also persistent.

By being persistent, when power is removed, data is retained on the storage or memory device. Also shown in the above figure is that on a relative basis, less energy is used to power storage or memory at the bottom of the pyramid than for upper levels where performance increases. From a PCFE (Power, Cooling, Floor space, Economic) perspective, balancing memory and storage performance, availability, capacity and energy to a given function, quality of service and service level objective for a given cost needs to be kept in perspective, rather than simply considering the lowest cost for the largest amount of memory or storage. In addition to gauging memory on capacity, other metrics include percent used, operating system page faults and page read/write operations along with memory swap activity as well as memory errors.

Base 2 versus base 10 numbering systems can account for some storage capacity that appears to be "missing" when real storage is compared to what is expected to be seen. Disk drive manufacturers use base 10 (decimal) to count bytes of data while memory chip, server and operating system vendors typically use base 2 (binary) to count bytes of data. This has led to confusion when comparing a disk drive base 10 GB with a chip memory base 2 GB of memory capacity, such as 1,000,000,000 (10^9) bytes versus 1,073,741,824 (2^30) bytes. Nomenclature based on the International System of Units uses MB, GB and TB to denote million, billion and trillion bytes in base 10, while the base 2 equivalents are MiB, GiB and TiB (2^20, 2^30 and 2^40 bytes respectively). Most vendors do document how many bytes, sometimes in both base 2 and base 10, as well as the number of 512 byte sectors supported on their storage devices and storage systems, though it might be in the small print.
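
Here is a small sketch of that base 10 versus base 2 difference, showing where the seemingly missing capacity of a 1TB drive goes:

```python
# Why a "1TB" drive shows up as roughly 931 GiB to the operating system
advertised_bytes = 1 * 10**12          # drive vendors count in base 10 (1 TB)

gib = advertised_bytes / 2**30         # operating systems and memory count in base 2
tib = advertised_bytes / 2**40

print(f"1 TB (base 10) = {gib:,.0f} GiB = {tib:.3f} TiB")
# 1 TB (base 10) = 931 GiB = 0.909 TiB; nothing is missing, just different math
```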

Related more reading:
How much storage performance do you want vs. need?
Can RAID extend the life of nand flash SSD?
Can we get a side of context with them IOPS and other storage metrics?
SSD & Real Estate: Location, Location, Location
What is the best kind of IO? The one you do not have to do
SSD, flash and DRAM, DejaVu or something new?

Ok, nuff said (for now).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier).

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

How much storage performance do you want vs. need?

Storage I/O trends

How much storage I/O performance do you want vs. need?

The answer to how much storage I/O performance you need vs. want probably depends on cost, for which applications along with benefit among other things.

Storage I/O performance
View Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?

I did a piece over at 21cit titled Parsing the Need for Speed in Storage that looks at those and other related themes including metrics that matter across tiered storage.

Here is an excerpt:

Can storage speed be too fast? Or, put another way, how do you decide the return on investment or innovation from the financial resources you spend on storage and the various technologies that go into storage performance?

Think about it: Fast storage needs fast servers, IO and networking interfaces, software, firmware, hypervisors, operating systems, drivers, and a file system or database, along with applications. Then there are the other buzzword bingo technologies that are also factors, among them fast storage DRAM and flash Solid State Devices (SSD).

Some questions to ask about storage I/O performance include among others:

  • How do response time, latency, and think or wait-times affect your environment and applications (see the sketch after this list)?
  • Do you know the location of your storage or data center performance bottlenecks?
  • If you remove bottlenecks in storage systems or appliances as well as in the data path, how will your application or the CPU in the server it runs on behave?
  • If your application server is currently showing high CPU due to the system overhead of having to wait for storage I/Os, you may see a positive improvement.
  • If more real work can be done now, will all of the components be ready to support each other without creating a new bottleneck?
  • Also speaking of storage I/O performance, how about a side of context with them IOPS and other metrics that matter!
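
Here is the sketch referenced above, a rough rule-of-thumb model (not a benchmark, and the 32 outstanding I/Os figure is simply an assumption for illustration) of how per-I/O response time bounds achievable IOPS:

```python
# Rough rule of thumb: achievable IOPS is bounded by outstanding I/Os / response time
def iops_ceiling(latency_ms, outstanding_ios=1):
    """Approximate IOPS upper bound for a given per-I/O response time."""
    return outstanding_ios / (latency_ms / 1000.0)

for latency_ms in (10.0, 5.0, 0.5, 0.1):     # HDD-ish down to flash SSD-ish times
    print(f"{latency_ms} ms response time -> ~{iops_ceiling(latency_ms, 32):,.0f} IOPS "
          f"with 32 outstanding I/Os")
# Lower latency (or more concurrency) raises the ceiling, until something else
# (server CPU, the I/O path, or the storage itself) becomes the bottleneck.
```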

So how about it, how much performance, for primary, secondary, backup, cloud or virtual storage do you want vs. need?

Ok, nuff said for now.

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Cloud, Virtual, Server, Storage I/O and other technology tiering

Storage I/O trends

Tiering technology and the right data center tool for a given task

Depending on who or what your sphere of influence is, or what your sources of information and insight are, there will be different views of tiering, particularly when it comes to tiered storage and storage tiering for cloud, virtual and traditional environments.

Recently I did a piece over at 21st century IT (21cit) titled Tiered Storage Explained that looks at both tiered storage and storage tiering (e.g. movement and migration, automated or manual), which you can read here.

In the data center (or information factory) everything is not the same, as different applications have various performance, availability, capacity and economic requirements among others. Consequently there are different levels or categories of service along with associated tiers of technology to support them; more on these in a few moments.

Technology tiering is all around you

Tiering is not unique to Information Technology (IT); it is more common than you may realize, granted, not always called tiering per se. For example there are different tiers of transportation (besides public or private, shared or single use) ranging from planes and trains to bicycles and boats among others.

Tiered transportation (Bikes, Trains, Planes, Gondolas)

Storage I/O trends

Moving beyond IT (we will get back to that shortly), there are other examples of tiered technologies. For example, I live in the Stillwater / Minneapolis Minnesota area and thus have a need for different types of snow movement and management tools; after all, not all snow situations are the same.

Snow plow
Tiered snow movement technology (Different tools for various tasks)

The other part of the year, when the snow is not actually accumulating or the St. Croix river is not frozen, which on a good year can be from March to November, it's fishing time. That means having different types of fishing rods rigged for various things such as casting, trolling or jigging, not to mention big fish or little fish, something like how a golfer has different clubs. While, as with a golfer, a single fishing rod can do the task, it's not as practical; thus different tools for various tasks.

Different sizes and types of fish


Speaking of transportation and automobiles, there are also various metrics some of which have a correlation to Data Center energy use and effectiveness, not to mention EPA Energy Star for Data Centers and Data Center Storage.


Storage I/O trends

Technology tiering in and around the data center

IT data center

Now let's get back to technology tiering in the data center (or information factory), including tiered storage and storage tiering (here's a link to the Tiered Storage Explained piece I mentioned earlier). The three primary building blocks for IT services are processing or compute (e.g. servers, workstations), networking or connectivity, and storage, all of which include hardware, software, management tools and applications. These resources in turn get accessed by, yes you guessed it, different tiers or categories of devices, from mobile smart phones, tablets, laptops, workstations or terminals to browsers, applets and other presentation services.

IT building blocks, server, storage, networks

Let's focus on storage for a bit (pun intended)

Keep in mind that not everything is the same in the data center from a performance, availability, capacity and economic perspective. This means different threat risks to protect applications and data against, performance or space capacity needs among others.

data protection tiers
Avoid treating all threat risks the same, tiered data protection

Tiered data protection
Part of modernizing data protection is aligning various tools and technologies to meet different requirements including Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO) along with Service Level Agreements (SLAs) and Service Level Objectives (SLO’s).

In addition to protecting data and applications to meet various needs, there are also tiered storage mediums or media (e.g. HDD, SSD, Tape) along with storage systems.

Storage Tiers
Storage I/O trends

Excerpt, Chapter 9: Storage Services and Systems from my book Cloud and Virtual Data Storage Networking book (CRC Press) available via Amazon (also Kindle) and other venues.

9.2 Tiered Storage

Tiered storage is often referred to by the type of disk drives or media, by the price band, by the architecture or by its target use (online for files, emails and databases; near line for reference or backup; offline for archive). The intention of tiered storage is to configure various types of storage systems and media for different levels of performance, availability, capacity and energy or economics (PACE) capabilities to meet a given set of application service requirements. Other storage mediums such as HDD, SSD, magnetic tape and optical storage devices are also used in tiered storage.

Storage tiering can mean different things to different people. For some it is describing storage or storage systems tied to business, application or information services delivery functional need. Others classify storage tiers by price band or how much the solution costs. For others it’s the size or capacity or functionality. Another way to think of tiering is by where it will be used such as on-line, near-line or off-line (primary, secondary or tertiary). Price bands are a way of categorizing disk storage systems based on price to align with various markets and usage scenarios. For example consumer, small office home office (SOHO) and low-end SMB in a price band of under $5,000 USD, mid to high-end SMB in middle price bands from $50,000 to $100,000 range, and small to large enterprise systems ranging from a few hundred thousand dollars to millions of dollars.

Another method of classification is by high performance active or high-capacity inactive or idle. Storage tiering is also used in the context of different mediums such as high performance solid state devices (SSD) or 15,500 revolution per minute (15.5K RPM) SAS of Fibre Channel hard disk drives (HDD), or slower 7.2K and 10K high-capacity SAS and SATA drives or magnetic tape. Yet another category is internal dedicated, external shared, networked and cloud accessible using different protocols and interfaces. Adding to the confusion are marketing approaches that emphasize functionality as defining a tier in trying to standout and differentiate above competition. In other words, if you can’t beat someone in a given category or classification then just create a new one.

Another dimension of tiered storage is tiered access, meaning the type of storage I/O interface and protocol or access method used for storing and retrieving data. For example, high-speed 8Gb Fibre Channel (8GFC) and 10GbE Fibre Channel over Ethernet (FCoE) versus older and slower 4GFC or low-cost 1Gb Ethernet (1GbE) or high performance 10GbE based iSCSI for shared storage access or serial attached SCSI (SAS) for direct attached storage (DAS) or shared storage between a pair of clustered servers. Additional examples of tiered access include file or NAS based access of storage using network file system (NFS) or Windows-based Common Internet File system (CIFS) file sharing among others.

Different categories of storage systems, also called tiered storage systems, combine various tiered storage mediums with tiered access and tiered data protection. For example, tiered data protection includes local and remote mirroring, in different RAID levels, point-in-time (pit) copies or snapshots and other forms of securing and maintaining data integrity to meet various service level, RTO and RPO requirements. Regardless of the approach or taxonomy, ultimately, tiered servers, tiered hypervisors, tiered networks, tiered storage and tiered data protection are about and need to map back to the business and applications functionality.
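
To visualize the kind of mapping the excerpt describes, here is a toy sketch (made-up thresholds, prices and tier names, purely illustrative) of classifying an application's PACE-style requirements into a tier:

```python
# Toy classification of an application's storage needs into a tier (illustrative only)
def pick_tier(iops_needed, capacity_tb, dollars_per_tb_budget):
    """Map rough performance, capacity and economics (PACE-style) inputs to a tier."""
    if iops_needed > 50_000 and dollars_per_tb_budget > 1_000:
        return "Tier 0: all-flash SSD"
    if iops_needed > 5_000:
        return "Tier 1: 10K/15K SAS HDD or hybrid (SSD + HDD)"
    if capacity_tb > 100 and dollars_per_tb_budget < 100:
        return "Tier 3: tape, optical or cloud/object archive"
    return "Tier 2: high-capacity SATA/SAS HDD"

print(pick_tier(iops_needed=80_000, capacity_tb=5, dollars_per_tb_budget=2_000))
print(pick_tier(iops_needed=500, capacity_tb=400, dollars_per_tb_budget=50))
```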

Storage I/O trends

There is more to storage tiering which includes movement or migration of data (manually or automatically) across various types of storage devices or systems. For example EMC FAST (Fully Automated Storage Tiering), HDS Dynamic Tiering, IBM Easy Tier (and here), and NetApp Virtual Storage Tier (replaces what was known as Automated Storage Tiering) among others.

Likewise there are different types of storage systems or appliances from primary to secondary as well as for backup and archiving.

Then there are also markets or price bands (cost) for various storage systems solutions to meet different needs.

Needless to say there is plenty more to tiered storage and storage tiering for later conversations.

However for now check out the following related links:
Non Disruptive Updates, Needs vs. Wants (Requirements vs. wish lists)
Tiered Hypervisors and Microsoft Hyper-V (Different types or classes of Hypervisors for various needs)
tape summit resources (Using different types or tiers of storage)
EMC VMAX 10K, looks like high-end storage systems are still alive (Tiered storage systems)
Storage comments from the field and customers in the trenches (Various perspectives on tools and technology)
Green IT, Green Gap, Tiered Energy and Green Myths (Energy avoidance vs. energy effectiveness and tiering)
Has SSD put Hard Disk Drives (HDD’s) On Endangered Species List? (Tiered storage systems and devices)
Tiered Storage, Systems and Mediums (Storage Tiering and Tiered Storage)
Cloud, virtualization, Storage I/O trends for 2013 and beyond (Industry Trends and Perspectives)
Amazon cloud storage options enhanced with Glacier (Tiered Cloud Storage)
Garbage data in, garbage information out, big data or big garbage? (How much data are you preserving or hoarding?)
Saving Money with Green IT: Time To Invest In Information Factories
I/O Virtualization (IOV) and Tiered Storage Access (Tiered storage access)
EMC VFCache respinning SSD and intelligent caching (Storage and SSD tiering including caching
Green and SASy = Energy and Economic, Effective Storage (Tired storage devices)
EMC Evolves Enterprise Data Protection with Enhancements (Tiered data protection)
Inside the Virtual Data Center (Data Center and Technology Tiering)
Airport Parking, Tiered Storage and Latency (Travel and Technology, Cost and Latency)
Tiered Storage Strategies (Comments on Storage Tiering)
Tiered Storage: Excerpt from Cloud and Virtual Data Storage Networking (CRC Press, see more here)
Using SAS and SATA for tiered storage (SAS and SATA Storage Devices)
The Right Storage Option Is Important for Big Data Success (Big Data and Storage)
VMware vSphere v5 and Storage DRS (VMware vSphere and Storage Tiers)
Tiered Communication and Media Venues (Social and Traditional Media for IT)
Tiered Storage Explained

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

As the platters spin, HDD’s for cloud, virtual and traditional storage environments

HDDs for cloud, virtual and traditional storage environments

Storage I/O trends

Updated 1/23/2018

As the platters spin is a follow-up to a recent series of posts on Hard Disk Drives (HDD’s) along with some posts about How Many IOPS HDD’s can do.

HDD and storage trends and directions include among others

HDD's will continue to be declared dead into the next decade, just as they have been for over a decade; meanwhile they are being enhanced and continue to be used in evolving roles.

hdd and ssd

SSD will continue to coexist with HDD, either as separate devices or converged as HHDD's. When, where and how they are used will also continue to evolve. High IO (IOPS) or low latency activity will continue to move to some form of nand flash SSD (with PCM around the corner), while storage capacity, including some of which has been on tape, stays on disk. Instead of more HDD capacity in a server, it moves to a SAN or NAS, or to a cloud or service provider. This includes backup/restore, BC, DR, archive and online reference, or what some call active archives.

The need for storage spindle speed and more

The need for faster revolutions per minute (RPM) performance of drives (e.g. platter spin speed) is being replaced by SSD and more robust smaller form factor (SFF) drives. For example, some of today’s 2.5” SFF 10,000 RPM (e.g. 10K) SAS HDD’s can do as well as or better than their larger 3.5” 15K predecessors for both IOPS and bandwidth. This is also an example of how the RPM speed of a drive may not be the only determinant of performance, as it has been in the past.


Performance comparison of four different drive types.

The need for storage space capacity and areal density

In terms of storage enhancements, watch for the appearance of Shingled Magnetic Recording (SMR) enabled HDD’s to help further boost the space capacity in the same footprint. Using SMR, HDD manufacturers can put more bits (e.g. higher areal density) into the same physical space on a platter.


Traditional vs. SMR to increase storage areal density capacity

The generic idea with SMR is to increase areal density (how many bits can be safely stored per square inch) of data placed on spinning disk platter media. In the above image on the left is a representative example of how traditional magnetic disk media lays down tracks next to each other. With traditional magnetic recording approaches, the tracks are placed as close together as possible for the write heads to safely write data.

With new recording formats such as SMR, along with improvements to read/write heads, the tracks can be more closely grouped together in an overlapping way. This overlapping way (used in a generic sense) is like how the shingles on a roof overlap, hence Shingled Magnetic Recording. Other magnetic recording or storage enhancements in the works include Heat Assisted Magnetic Recording (HAMR) and helium filled drives. Thus, there is still plenty of room for bits and bytes growth in HDD’s well into the next decade to co-exist with and complement SSD’s.

DIF and AF (Advanced Format), or software defining the drives

Another evolving storage feature that ties into HDD’s is the Data Integrity Field (DIF), which has a couple of different types. Depending on which DIF type (0, 1, 2 or 3) is used, there can be added data integrity checks from the application to the storage medium or drive beyond normal functionality. Here is something to keep in mind: as there are different types or levels of DIF, when somebody says they support or need DIF, ask them which type or level as well as why.

Are you familiar with Advanced Format (AF)? If not, you should be. Traditionally, outside of special formats for some operating systems or controllers, the standard open systems data storage block, page or sector has been 512 bytes. This served well in the past; however, with the advent of TByte and larger sized drives, a new mechanism is needed, both to support larger average data allocation sizes from operating systems and storage systems, and to cut the overhead of managing all those small sectors. Operating systems and file systems have added new partitioning features such as the GUID Partition Table (GPT) to support 1TB and larger SSD, HDD and storage system LUN’s.

These enhancements enable larger devices to be used in place of traditional Master Boot Record (MBR) or other operating system partition and allocation schemes. The next step, however, is to teach operating systems, file systems and hypervisors, along with their associated tools and drivers, how to work with 4,096 byte or 4 Kbyte sectors. The advantage will be cutting the overhead of tracking all of those smaller sectors or file system extents and clusters. Today many HDD’s support AF, however by default they may have 512-byte emulation mode enabled due to lack of operating system or other support.
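
The addressing math behind why GPT and larger sectors matter can be sketched quickly (a simplified view that ignores alignment, reserved areas and partition metadata):

```python
# Why MBR-style 32-bit block addressing runs out of room, and what 4K sectors buy
def max_capacity_tib(sector_bytes, address_bits=32):
    """Largest addressable capacity for a given sector size and LBA width."""
    return (2**address_bits * sector_bytes) / 2**40

print(f"512-byte sectors, 32-bit LBA: {max_capacity_tib(512):.0f} TiB")     # 2 TiB ceiling
print(f"4,096-byte sectors, 32-bit LBA: {max_capacity_tib(4096):.0f} TiB")  # 16 TiB
# GPT sidesteps the issue with 64-bit addressing; Advanced Format's 4KB sectors
# also cut the bookkeeping overhead of tracking 8x as many small sectors.
```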

Intelligent Power Management, moving beyond drive spin down

Intelligent Power Management (IPM) is a collection of techniques that can be applied to vary the amount of energy consumed by a drive, controller or processor to do its work. In the case of an HDD, these include slowing the spin rate of the platters; however, keep in mind that mass in motion tends to stay in motion. This means that HDD’s, once up and spinning, do not need as much relative power as they function like a flywheel. Their power draw comes during reads and writes, in part due to the movement of the read/write heads, however also from running the processors and electronics that control the device. Another big power consumer is when drives spin up, thus if they can be kept moving, albeit at a lower rate, along with powering down the read/write heads and their electronics, you can see a drop in power consumption. Btw, a current generation 3.5” 4TB 6Gbs SATA HDD consumes about 6-7 watts of power while in active use, or less when in idle mode. Likewise a current generation high performance 2.5” 1.2TB HDD consumes about 4.8 watts, a far cry from the 12-16 plus watts of energy some use as HDD FUD.
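
To put those wattage numbers in perspective, here is a quick sketch of annual energy use per drive (simple constant-draw math that ignores idle or spin-down savings, cooling overhead and power distribution losses):

```python
# Rough annual energy per drive at a steady draw
hours_per_year = 24 * 365

for label, watts in (("3.5in 4TB SATA (active)", 7.0),
                     ("2.5in 1.2TB 10K SAS (active)", 4.8)):
    kwh = watts * hours_per_year / 1000
    print(f"{label}: ~{kwh:.0f} kWh/year")   # roughly 61 and 42 kWh respectively
# Intelligent Power Management trims this further by slowing platters and
# powering down read/write electronics during idle periods.
```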

Hybrid Hard Disk Drives (HHDD) and Solid State Hybrid Drives (SSHD)

Hybrid HDD’s (HHDD’s), also known as Solid State Hybrid Drives (SSHD), have been around for a while, and if you have read my earlier posts, you know that I have been a user and fan of them for several years. However, one of the drawbacks of HHDD’s has been the lack of write acceleration (e.g. they only optimize for reads) with some models. Current and emerging HHDD’s are appearing with a mix of nand flash SLC (used in earlier versions), MLC and eMLC along with DRAM, while enabling write optimization. There are also more drive options available as HHDD’s from different manufacturers, both for desktop and enterprise class scenarios.

The challenge with HHDD’s is that many vendors either do not understand how they fit with and complement their tiering or storage management software tools, or simply do not see the value proposition. I have had vendors and others tell me that HHDD’s don’t make sense as they are too simple; how can they be a fit without requiring tiering software, controllers, SSD and HDD’s to be viable?

Storage I/O trends

I also see a trend similar to when the desktop high-capacity SATA drives appeared for enterprise-class storage systems in the early 2000s. Some of the same people did not see where or how a desktop class product or technology could ever be used in an enterprise solution.

Hmm, hey wait a minute, I seem to recall similar thinking when SCSI drives appeared in the early 90s, funny how some things do not change, DejaVu anybody?

Does that mean HHDD’s will be used everywhere?

Not necessarily, however, there will be places where they make sense, others where either an HDD or SSD will be more practical.

Networking with your server and storage

Near-term, native drive interfaces will remain 6Gbs SAS and SATA (with SAS going to 12Gbs), with some FC (you might still find a parallel SCSI drive out there). Likewise, with bridges or interface cards, those drives may appear as USB or something else.

What about SCSI over PCIe, will that catch on as a drive interface? Tough to say; however, I am sure we can find some people who will gladly try to convince you of that. FC based drives operating at 4Gbs FC (4GFC) are still being used in some environments, however most activity is shifting over to SAS and SATA. SAS and SATA are switching over from 3Gbs to 6Gbs, with 12Gbs SAS on the roadmaps.

So which drive is best for you?

That depends; do you need bandwidth or IOPS, low latency or high capacity, a small low-profile thin form factor or particular features and functions? Do you need a hybrid or all SSD, or a self-encrypting device (SED), also known as Instant Secure Erase (ISE)? These are among your various options.

Disk drives

Why the storage diversity?

Simple, some are legacy soon to be replaced and disposed of while others are newer. I also have a collection so to speak that get used for various testing, research, learning and trying things out. Click here and here to read about some of the ways I use various drives in my VMware environment including creating Raw Device Mapped (RDM) local SAS and SATA devices.

Other capabilities and functionality existing or being added to HDD’s include RAID rebuild and data copy assist, secure erase, self-encryption and vibration dampening, among other abilities for supporting dense data environments.

Where To Learn More

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Do not judge a drive only by its interface, space capacity, cost or RPM alone. Look under the cover a bit to see what is inside in terms of functionality, performance, and reliability among other options to fit your needs. After all, in the data center or information factory not everything is the same.

From a marketing and fun to talk about new technology perspective, HDD’s might be dead for some. The reality is that they are very much alive in physical, virtual and cloud environments, granted their role is changing.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Seagate provides proof of life: Enterprise HDD enhancements

Storage I/O trends

Proof of life: Enterprise Hard Disk Drives (HDD’s) are enhanced

Last week while hard disk drive (HDD) competitor Western Digital (WD) was announcing yet another acquisition (Velobit) in a string of acquisitions (e.g. earlier ones included Stec and Arkeia) and investments (Skyera), Seagate announced new enterprise class HDD’s for their portfolio. Note that it was only two years ago that WD acquired Hitachi Global Storage Technologies (HGST), the disk drive manufacturing business of Hitachi Ltd. (not to be confused with HDS).

Seagate

Similar to WD expanding their presence in the growing nand flash SSD market, Seagate also in May of this year extended their existing enterprise class SSD portfolio. These enhancements included new drives with 12Gbs SAS interface, along with a partnership (and investment) with PCIe flash card startup vendor Virident. Other PCIe flash SSD card vendors (manufacturers and OEMs) include Cisco, Dell, EMC, FusionIO, HP, IBM, LSI, Micron, NetApp and Oracle among others.

These new Seagate enterprise class HDD’s are designed for use in cloud and traditional data center servers and storage systems. A month or two ago Seagate also announced new ultra-thin (5mm) client (aka desktop) class HDD’s along with a 3.5 inch 4TB video optimized HDD. The video optimized HDD’s are intended for Digital Video Recorders (DVR’s), Set Top Boxes (STB’s) or other similar applications.

What was announced?

Specifically what Seagate announced were two enterprise class drives, one for performance (e.g. 1.2TB 10K) and the other for space capacity (e.g. 4TB).

 

|   | Enterprise High Performance 10K.7 (formerly known as Savio) | Enterprise Terascale (formerly known as Constellation) |
|---|---|---|
| Class/category | Enterprise / High Performance | Enterprise High Capacity |
| Form factor | 2.5” Small Form Factor (SFF) | 3.5” |
| Interface | 6Gbs SAS | 6Gbs SATA |
| Space capacity | 1,200GB (1.2TB) | 4TB |
| RPM speed | 10,000 | 5,900 |
| Average seek | 2.9 ms | 12 ms |
| DRAM cache | 64MB | 64MB |
| Power idle / operating | 4.8 watts | 5.49 / 6.49 watts |
| Intelligent Power Management (IPM) | Yes – Seagate PowerChoice | Yes – Seagate PowerChoice |
| Warranty | Limited 5 years | Limited 3 years |
| Instant Secure Erase (ISE) | Yes | Optional |
| Other features | RAID Rebuild assist, Self-Encrypting Device (SED) | Advanced Format (AF) 4K block in addition to standard 512 byte sectors |
| Use cases | Replace earlier generation 3.5” 15K SAS and Fibre Channel HDD’s for higher performance applications including file systems and databases where SSD is not a practical fit. | Backup and data protection, replication, copy operations for erasure coding and data dispersal, active and dormant archives, unstructured NAS, big data, data warehouse, cloud and object storage. |

Note the Seagate Terascale has a disk rotation speed of 5,900 RPM (5.9K), which is not a typo given the more traditional 5.4K RPM drives. This slight increase in rotational speed over 5.4K, combined with other enhancements (e.g. firmware, electronics), should help boost performance for higher capacity workloads.
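To put rotational speed into rough perspective, below is a simple back of the envelope Python sketch (my own illustration using the average seek times from the table above, plus an assumed 8.5 ms seek for a generic 7.2K drive) that estimates average rotational latency and a theoretical single threaded random IOPS ceiling. It ignores transfer time, command queuing, cache and firmware optimizations, which is exactly why RPM alone does not tell the whole story.

# Back of the envelope model: average rotational latency is half a revolution,
# and one random IO roughly costs seek time plus rotational latency
# (ignoring transfer time, queuing and any cache hits).
def rotational_latency_ms(rpm):
    return (60.0 / rpm) / 2.0 * 1000.0  # half a revolution, in milliseconds

def random_iops_estimate(rpm, avg_seek_ms):
    service_time_ms = avg_seek_ms + rotational_latency_ms(rpm)
    return 1000.0 / service_time_ms

# Seek figures for the two new drives come from the table above; the 8.5 ms
# figure for a generic 7.2K desktop drive is an assumption for comparison.
drives = [
    ("Enterprise 10K.7 (10,000 RPM, 2.9 ms seek)", 10000, 2.9),
    ("Enterprise Terascale (5,900 RPM, 12 ms seek)", 5900, 12.0),
    ("Generic 7.2K desktop (assumed 8.5 ms seek)", 7200, 8.5),
]

for name, rpm, seek in drives:
    print(f"{name}: ~{rotational_latency_ms(rpm):.1f} ms rotational latency, "
          f"~{random_iops_estimate(rpm, seek):.0f} random IOPS (queue depth 1)")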

Let us watch for some performance numbers to be published by Seagate or others. Note that I have not had a chance to try these new drives yet, however I look forward to getting my hands on them (among others) sometime in the future for a test drive to add to the growing list found here (hey Seagate and WD, that’s a hint ;) ).

What this all means?

Storage I/O trends

Wait, weren’t HDD’s supposed to be dead or dying?

Some people just like new and emerging things and thus will declare anything existing, or that they have lost interest in (or that their jobs require them to move past), as old, boring or dead.

For example if you listen to some, they may say nand flash SSD are also dead or dying. For what it is worth, imho nand flash-based SSDs still have a bright future in front of them even with new technologies emerging as they will take time to mature (read more here or listen here).

However, the reality is that for at least the next decade, like them or not, HDD’s will continue to play a role that is also evolving. Thus, these and other improvements with HDD’s will be needed until current nand flash or emerging PCM (Phase Change Memory) among other forms of SSD are capable of picking up all the storage workloads in a cost-effective way.

Btw, yes, I am also a fan and user of nand flash-based SSD’s, in addition to HDD’s, and see viable roles for both, complementing each other in traditional, virtual and cloud environments.

In short, HDD’s will keep spinning (pun intended) for some time granted their roles and usage will also evolve similar to that of tape summit resources.

Storage I/O trends

This announcement by Seagate, along with other enhancements from WD, shows that the HDD will not only see its 60th birthday (and here), it will probably also easily see its 70th, and not from the comfort of a computer museum. The reason is that there is yet another wave of HDD improvements just around the corner, including Shingled Magnetic Recording (SMR) (more info here) along with Heat Assisted Magnetic Recording (HAMR) among others. Watch for more on HAMR and SMR in future posts. With these and other enhancements, we should be able to see a return to the rapid density improvements with HDD’s observed during the mid to late 2000s era when perpendicular recording became available.

What is up with this ISE stuff, is that the same as what Xiotech (e.g. XIO) had?

Is this the same technology that Xiotech (now Xio) referred to as ISE? The answer is no. This Seagate ISE is for fast secure erase of data on disk. The benefit of Instant Secure Erase (ISE) is to cut the time required to erase a drive for secure disposal from hours or days down to seconds (or less). For those environments that already factor drive erase time into their overall costs, this can increase the useful time in service to help improve TCO and ROI.

Wait a minute, aren’t slower RPM’s supposed to be lower performance?

Some of you might be wondering or asking: wait, how can a 10,000 revolution per minute (10K RPM) HDD be considered fast vs. a 15K HDD, let alone SSD?

Storage I/O trends

There is a trend occurring with HDD’s where the old rules of IOPS or performance being tied directly to the size, rotational speed (RPM’s) and interface of a drive no longer hold. This comes down to being careful not to judge a book, or in this case a drive, by its cover. While RPM’s do have an impact on performance, new generation drives at 10K, such as some 2.5” models, are delivering performance equal to or better than earlier generation 3.5” 15K devices.

Likewise, there are similar improvements with 5.4K devices vs. previous generation 7.2K models. As you will see in some of the results found here, not all the old rules of thumb when it comes to drive performance are still valid. Likewise, keep those metrics that matter in the proper context.


Click on above image to see various performance results

For example as seen in the results (above), more DRAM (DDR) cache on a drive has a positive impact on sequential reads, which can be good news if that is what your applications need. Thus, do your homework and avoid judging a device simply by its RPM, interface or form factor.

Other considerations, temperature and vibration

Another consideration is that with the increased density of more drives being placed in a given amount of space, some of which may not have the best climate controls, humidity and vibration are concerns. Thus drives having vibration dampening or other safeguards to maintain performance are important. Likewise, even though drive heads and platters are sealed, humidity also needs to be taken care of in data centers or cloud service provider facilities in hot environments near the equator.

If this is not connecting with you, think about how close parts of Southeast Asia and the Indian subcontinent are to the equator, along with the rapid growth and low-cost focus occurring there. Your data center might be temperature and humidity controlled, however others who are very focused on cost cutting may not be as concerned with normal facilities best practices.

What type of drives should be used for cloud, virtual and traditional storage?

Good question, and one where the answer should be it depends upon what you are trying or need to do (e.g. see previous posts here or here and here (via Seagate)). For example here are some tips for big data storage and making storage decisions in general.

Disclosure

Seagate recently invited me along with several other industry analysts to their cloud storage analyst summit in San Francisco where they covered roundtrip coach airfare, lodging, airport transfers and a nice dinner at the Epic Roast house.

hdd image

I also have received in the past a couple of Momentus XT HHDD (aka SSHD) from Seagate. These are in addition to those that I bought including various Seagate, WD along with HGST, Fujitsu, Toshiba and Samsung (SSD and HDD’s) that I use for various things.

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Can we get a side of context with them IOPS server storage metrics?

Can we get a side of context with them server storage metrics?

What’s the best server storage I/O network metric or benchmark? It depends, as there needs to be some context with them IOPS and other server storage I/O metrics that matter.

There is an old saying that the best I/O (Input/Output) is the one that you do not have to do.

In the meantime, let’s get a side of some context with them IOPS from vendors, marketers and their pundits who are tossing them around for server, storage and IO metrics that matter.

Expanding the conversation, the need for more context

The good news is that people are beginning to discuss storage beyond space capacity and cost per GByte, TByte or PByte for DRAM and nand flash Solid State Devices (SSD), Hard Disk Drives (HDD), along with Hybrid HDD (HHDD) and Solid State Hybrid Drive (SSHD) based solutions. This applies to traditional enterprise or SMB IT data centers with physical, virtual or cloud based infrastructures.

hdd and ssd iops

This is good because it expands the conversation beyond just cost for space capacity into other aspects including performance (IOPS, latency, bandwidth) for various workload scenarios along with availability, energy effectiveness and management.

Adding a side of context

The catch is that IOPS, while part of the equation, are just one aspect of performance, and by themselves without context may have little meaning, if not be misleading, in some situations.

Granted a million IOPS can be entertaining, fun to talk about or simply make good press copy. However IOPS vary in size depending on the type of work being done, not to mention reads or writes, random and sequential, which also have a bearing on data throughput or bandwidth (Mbytes per second) along with response time. Not to mention block, file, object or blob as well as table.

However, are those million IOP’s applicable to your environment or needs?

Likewise, what do those million or more IOPS represent about type of work being done? For example, are they small 64 byte or large 64 Kbyte sized, random or sequential, cached reads or lazy writes (deferred or buffered) on a SSD or HDD?
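As a quick illustration of why IO size matters (using my own hypothetical numbers, not any particular vendor claim), the following Python sketch converts an IOPS figure plus an IO size into approximate bandwidth:

# Convert an IOPS figure plus IO size into approximate bandwidth (decimal MB/sec).
def bandwidth_mb_per_sec(iops, io_size_bytes):
    return iops * io_size_bytes / (1000 * 1000)

# Hypothetical hero numbers to show the effect of IO size.
scenarios = [
    ("1,000,000 IOPS x 64 bytes", 1_000_000, 64),
    ("1,000,000 IOPS x 4 Kbytes", 1_000_000, 4096),
    ("1,000,000 IOPS x 64 Kbytes", 1_000_000, 65536),
]

for label, iops, size in scenarios:
    print(f"{label} = ~{bandwidth_mb_per_sec(iops, size):,.0f} MB/sec")

A million tiny 64 byte IOPS only moves about 64 MBytes per second, while the same IOP count at 64 Kbytes is roughly a thousand times more data, which is why the IO size along with the read and write mix has to come along for the ride.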

How about the response time or latency for achieving them IOPS?

In other words, what is the context of those metrics and why do they matter?

storage i/o iops
Click on image to view more metrics that matter including IOP’s for HDD and SSD’s

Metrics that matter give context, for example IO sizes closer to what your real needs are, reads and writes, mixed workloads, random or sequential, sustained or bursty; in other words, real world reflective.

As with any benchmark, take them with a grain (or more) of salt; the key is to use them as an indicator and then align them to your needs. The tool or technology should work for you, not the other way around.

Here are some examples of context that can be added to help make IOP’s and other metrics matter:

  • What is the IOP size, are they 512 byte (or smaller) vs. 4K bytes (or larger)?
  • Are they reads, writes, random, sequential or mixed and what percentage?
  • How was the storage configured including RAID, replication, erasure or dispersal codes?
  • Then there is the latency or response time and IO queue depths for the given number of IOPS.
  • Let us not forget if the storage systems (and servers) were busy with other work or not.
  • If there is a cost per IOP, is that list price or discount (hint, if discount start negotiations from there)
  • What was the number of threads or workers, along with how many servers?
  • What tool was used, its configuration, as well as raw or cooked (aka file system) IO?
  • Was the IOP’s number with one worker or multiple workers on a single or multiple servers?
  • Did the IOP’s number come from a single storage system or total of multiple systems?
  • Fast storage needs fast servers and networks, what was their configuration?
  • Was the performance a short burst, or long sustained period?
  • What was the size of the test data used; did it all fit into cache?
  • Were short stroking for IOPS or long stroking for bandwidth techniques used?
  • Data footprint reduction (DFR) techniques (thin provisioned, compression or dedupe) used?
  • Was write data committed synchronously to storage, or deferred (aka lazy writes used)?

The above are just a sampling and not all may be relevant to your particular needs, however they help to put IOP’s into more context. Another consideration around IOPS is how they were generated: from an actual running application using some measurement tool, or from a workload generation tool such as IOmeter, IOrate, VDbench among others.
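On the latency and queue depth point above, Little's Law (outstanding work = arrival rate x response time) is a handy sanity check. The following Python sketch (generic math with made-up claim numbers, not tied to any specific product) shows how many IOs have to be in flight to sustain a given IOPS and response time pair:

# Little's Law: outstanding (in-flight) IOs = IOPS x response time.
def outstanding_ios(iops, response_time_ms):
    return iops * (response_time_ms / 1000.0)

# Made-up example claims for illustration.
claims = [
    ("100,000 IOPS at 1 ms", 100_000, 1.0),
    ("100,000 IOPS at 0.1 ms", 100_000, 0.1),
    ("1,000,000 IOPS at 5 ms", 1_000_000, 5.0),
]

for label, iops, rt_ms in claims:
    qd = outstanding_ios(iops, rt_ms)
    print(f"{label} implies roughly {qd:,.0f} IOs in flight (workers x queue depth)")

If a hero number implies hundreds or thousands of IOs in flight, ask how many servers, workers and queue depths were involved, and whether your applications actually behave that way.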

Sure, there are more contexts and information that would be interesting as well, however learning to walk before running will help prevent falling down.

Storage I/O trends

Does size or age of vendors make a difference when it comes to context?

Some vendors are doing a good job of going for out of this world record-setting marketing hero numbers.

Meanwhile other vendors are doing a good job of adding context to their IOP, response time or bandwidth numbers among other metrics that matter. There is a mix of startup and established vendors that give context with their IOP’s or other metrics; likewise size or age does not seem to matter for those who lack context.

Some vendors may not offer metrics or information publicly, so fine, go under NDA to learn more and see if the results are applicable to your environments.

Likewise, if they do not want to provide the context, then ask some tough yet fair questions to decide if their solution is applicable for your needs.

Storage I/O trends

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

What this means is let us start putting and asking for metrics that matter such as IOP’s with context.

If you have a great IOP metric and you want it to matter, then include some context such as what size (e.g. 4K, 8K, 16K, 32K, etc.), percentage of reads vs. writes, latency or response time, and random or sequential.

IMHO the most interesting or applicable metrics that matter are those relevant to your environment and application. For example, if your main application that needs SSD does about 75% reads (random) and 25% writes (sequential) with an average size of 32K, then while fun to hear about, how relevant is a million 64 byte read IOPS? Likewise when looking at IOPS, pay attention to the latency, particularly if SSD or performance is your main concern.

Get in the habit of asking or telling vendors or their surrogates to provide some context with them metrics if you want them to matter.

So how about some context around them IOP’s (or latency and bandwidth or availability for that matter)?

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

HDS Mid Summer Storage and Converged Compute Enhancements

Storage I/O trends

Converged Compute, SSD Storage and Clouds

Hitachi Data Systems (HDS) announced today several enhancements to their data storage and unified compute portfolio as part of their Maximize I.T. initiative.

Setting the context

As part of setting the stage for this announcement, HDS has presented the following strategy as part of their vision for IT transformation and cloud computing.

https://hds.com/solutions/it-strategies/maximize-it.html?WT.ac=us_hp_flash_r11

What was announced

This announcement builds on earlier ones around HDS Unified Storage (HUS) primary storage using nand flash MLC Solid State Devices (SSD) and Hard Disk Drives (HDD’s), along with unified block and file (NAS), as well as the Unified Compute Platform (UCP), also known as converged compute, networking, storage and software. These enhancements follow recent updates to the HDS Content Platform (HCP) for object, file and content storage.

There are three main focus areas of the announcement:

  • Flash SSD storage enhancements for HUS
  • Unified with enhanced file (aka BlueArc based)
  • Enhanced unified compute (UCP)

HDS Flash SSD acceleration

The question should not be if SSD is in your future, rather when, where, with what and how much will be needed.

As part of this announcement, HDS is releasing an all flash SSD based HUS enterprise storage system. Similar to what other vendors have done, HDS is attaching flash SSD storage to their HUS systems in place of HDD’s. Hitachi has developed their own SSD module announced in 2012 (read more here). The HDS SSD modules use Multi Level Cell (MLC) nand flash chips (dies) and now support 1.6TB of storage space capacity per module. This is different from other vendors who either use nand flash SSD drive form factor devices (e.g. Intel, Micron, Samsung, SANdisk, Seagate, STEC (now WD), WD among others) or PCIe form factor cards (e.g. FusionIO, Intel, LSI, Micron, Virident among others), or attach a third-party external SSD device (e.g. IBM/TMS, Violin, Whiptail etc.).

Like some other vendors, HDS has also done more than simply attach a SSD (drive, PCIe card, or external device) to their storage systems calling it an integrated solution. What this means is that HDS has implemented software or firmware changes into their storage systems to manage durability and extend flash duty cycles caused by program erase (P/E) cycle wear. In addition HDS has implemented performance optimization in their storage systems to leverage the faster SSD modules, after all, faster storage media or devices need fast storage systems or controllers.

While the new all flash storage system can be initially bought with just SSD, similar to other hybrid storage solutions, hard disk drives (HDD’s) can also be installed. For enabling full performance at low latency, HDS is addressing both the flash SSD modules as well as the storage systems they attach to including back-end, front-end and caching in-between.

The release enables 500,000 (half a million) IOPS, although no IOP size, read vs. write mix, or random vs. sequential details were indicated. A future (non-disruptive) firmware update is claimed by HDS to enable higher performance of 1,000,000 IOPS at under a millisecond.

In addition to future performance improvements, HDS is also indicating increased storage space capacity of its MLC flash SSD modules (1.6TB today). Using trays of 12 modules (1.6TB each), up to 154TB of flash SSD can be placed in a single rack.

HDS File and Network Attached Storage (NAS)

HUS unified NAS file system and gateway (BlueArc based) enhancements include:

  • New platforms leveraging faster processors (both Intel and Field Programmable Gate Arrays (FPGA’s))
  • Common management and software tools from 3000 to new 4000 series
  • Bandwidth doubled with faster connections and more memory
  • Four 10GbE NAS serving ports (front-end)
  • Four 8Gb Fibre Channel ports (back-end)
  • FPGA leveraged for off-loading some dedupe functions (faster performance)

HDS Unified Compute Platform (UCP)

As part of this announcement, HDS is enhancing the Unified Compute Platform (UCP) offerings. HDS re-entered the compute market in 2012 joining other vendors offering unified compute, storage and networking solutions. The HDS converged data infrastructure competes with AMD (Seamicro) SM15000, Dell vStart and VRTX (for lower end market), EMC and VCE vBlock, NetApp FlexPod along with those from HP (or Moonshot micro servers), IBM Puresystems, Oracle and others.

UCP Pro for VMware vSphere

  • Turnkey converged solution (Compute, Networking, Storage, Software)
  • Includes VMware vSphere pre-installed (OEM from VMware)
  • Flexible compute blade options
  • Three storage system options (HUS, HUS VM and VSP)
  • Cisco and Brocade IP networking
  • UCP Director 3.0 with enhanced automation and orchestration software

UCP Select for Microsoft Private Cloud

  • Supports Hyper-V 3.0 server virtualization
  • Live migration with DR and resynch
  • Microsoft Fast Track certified

UCP Select for Oracle RAC

  • HDS Flash SSD storage
  • SMP x86 compute for performance
  • 2x improvement in IOPS at less than 1 millisecond
  • Common management with HiCommand suite
  • Integrated with Oracle RMAN and OVM

UCP Select for SAP HANA

  • Scale out to 8TB of memory (DRAM)
  • Tier 1 storage system certified for SAP HANA DR
  • Leverages SAP HANA SAP storage connector API

What this all means?

Storage I/O trends

With these announcements HDS is extending its storage centric hardware, software and services solution portfolio for block, file and object access across different usage tiers (systems, applications, mediums). HDS is also expanding their converged unified compute platforms to stay competitive with others including Dell, EMC, Fujitsu, HP, IBM, NEC, NetApp and Oracle among others. For environments with HDS storage looking for converged solutions to support VMware, Microsoft Hyper-V, Oracle or SAP HANA these UCP systems are worth checking out as part of evaluating vendor offerings. Likewise for those who have HDS storage exploring SSD offerings, these announcements give opportunities to enable consolidation as do the unified file (NAS) offerings.

Note that for now HDS does not have a public formalized message or story around PCIe flash cards, however they have relationships with various vendors as part of their UCP offerings.

Overall a good set of incremental enhancements for HDS to stay competitive and leverage their field proven capabilities including management software tools.

Ok, nuff said

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Upgrading Lenovo X1 Windows 7 with a Samsung 840 SSD

Storage I/O trends

I recently upgraded my Lenovo X1 laptop from a Samsung 830 256GB Solid State Device (SSD) drive to a new Samsung 840 512GB SSD. The following are some perspectives, comments on my experience in using the Samsung SSD over the past year, along with what was involved in the upgrade.

Background

A little over a year ago I upgraded my then new Lenovo X1, replacing upon its arrival the factory supplied Hard Disk Drive (HDD) with a Solid State Device (SSD) drive. After setup and data migration, the 2.5” 7,200 RPM 320GB Toshiba HDD was cloned to a SATA 256GB Samsung model 830 SSD. By first setting up and configuring the system, copying files and applications, and going through Windows and other updates on the HDD, when it came time to clone to the SSD the HDD effectively became a backup.

Note that prior to using the Samsung SSD in my Lenovo X1, I was using Hybrid HDD (HHDD’s) as my primary storage to boost read performance and space capacity. These were in addition to other external SSD and HDD that I used along with NAS devices. Read more about my HHDD experiences in a series of post here.

Fast forward to the present and it is time to do yet another upgrade, not because there is anything wrong with the Samsung SSD other than I was running low on space capacity. Sure 256GB was a lot of space, however I also had become used to having a 500GB and 750GB HHDD before downsizing to the SSD. Granted some of the data I have on the SSD is more for convenience, as a cache or buffer when not connected to the network. Not to mention if you have VMware Workstation for running various Virtual Machines (VMs) you know how those VMs can add up quickly, not to mention videos and other items.

Stack of HDD, HHDD and SSDs

Over the past year, my return on investment (ROI) and return on innovation (the new ROI) was as low as three months, or worst case about six months. That was based on the amount of time I did not have to spend waiting while saving data. Sure, I had some read and boot performance improvements, as well as being able to do more IOPs and other things. However those were not as significant due to having been using HHDDs vs. if I had gone from HDD to SSD.

My productivity gain was saving 3 to 5 minutes per day when storing large files, documents, videos or other items as part of generating or working on content. Not to mention snapshots and other copy functions for HA, BC and DR taking less time, enabling more productivity vs. waiting.

Thus the ROI timeframe varies depending on what I value my time at for a particular project, among other things.

Sure IOPS are important, so too is simple wall clock or stop watch based timing to measure work being done or time spent waiting.
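For example, something as simple as the following Python sketch (a generic example with a made-up scratch file name and size, not the exact method I used) can time how long writing and committing a large file takes before and after an upgrade:

# Simple stopwatch style test: time how long it takes to write and commit
# a large file, similar to saving a big document, video or VM image.
import os
import time

TEST_FILE = "timing_test.bin"   # hypothetical scratch file name
SIZE_MB = 1024                  # adjust to something representative of your work

start = time.perf_counter()
with open(TEST_FILE, "wb") as f:
    chunk = os.urandom(1024 * 1024)      # 1 MByte of data, written repeatedly
    for _ in range(SIZE_MB):
        f.write(chunk)
    f.flush()
    os.fsync(f.fileno())                 # make sure data is committed, not just buffered
elapsed = time.perf_counter() - start

print(f"Wrote {SIZE_MB} MB in {elapsed:.1f} seconds ({SIZE_MB / elapsed:.1f} MB/sec)")
os.remove(TEST_FILE)

Run the same thing on the old and new drive (or before and after the upgrade) and the difference in seconds is the productivity gain, no IOPS required.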

Upgrade Time

While this was replacing one SSD with another, the same things and steps would apply if going from an HDD to SSD.

Before upgrade
Free space and storage utilization before the upgrade

Make sure that you have a good full and consistent backup copy of your data.

If it is enabled, disable BitLocker or other items that might interfere with the clone. Here is a post if you are interested in enabling Windows BitLocker on Windows 7 64 bit.

Run a quick cleanup, registry repair or other maintenance to make sure you have a good and consistent copy before cloning it.

Install any migration or clone software; in the past I have used Seagate DiscWizard (Acronis) along with the full Acronis product. This time I used the Samsung Data Migration software powered by Clonix, which is an improvement IMHO vs. what they used to supply, which was Norton Ghost.

Shutdown Time

Attach the new drive. For this upgrade I removed the existing Samsung 830 SSD from its internal bay and replaced it with the new Samsung 840. The Samsung 830 was then attached to the Lenovo X1 laptop using a USB to SATA cable. Note that you could also do the opposite, which is attach the new drive using the USB to SATA cable for the clone operation, then install it into the internal drive bay, which would remove the need to change the boot sequence.


Samsung 830, Samsung 840 and Lenovo X1


Old Samsung 830 removed, new 840 being installed


Samsung 840 goes in Lenovo X1, Samsung 830 with SATA to USB cable

Since I removed the old drive and attached it to the Lenovo X1 via a SATA to USB cable, with the new drive internal, I also had to change the boot sequence. Remember to change this boot sequence back after the upgrade is complete. On the other hand, if you leave the original drive internal and attach the new drive via a USB to SATA or eSATA to SATA cable for the clone, you do not need to change the boot sequence.


Changing boot sequence; note one of the SSDs appears as USB due to the cable being used

Before running the data migration software, I disabled my network connection to make sure the system was isolated during the upgrade, and then ran the data migration software tool.


Samsung Data Migration tool (powered by Clonix Ltd.) during clone operation

Unlike tools such as Seagate DiscWizard based on Acronis, the Samsung tool based on Clonix does not shut down and perform the upgrade off-line. There is a tradeoff here that I observed: the Acronis shutdown approach, while being offline, seemed quicker, however that is subjective. The Samsung tool seemed to take longer, about 2.5 hours to clone 256GB to 512GB, however I was still able to do things on the PC (such as making screen shots).

Even though the Clonix powered Samsung data migration tool works on-line, enabling things to be done, it is best to leave all applications shut down.

Once the data migration tool is done and it says 100 percent complete, DO NOT DO ANYTHING until you see a prompt telling you to do something.

WAIT, as there are some background things that occur after you get the 100 percent complete message. Only when you see the prompt screen is it ok to move forward.

At that point, shut down Windows, remove the old drive, change any setup boot sequence back and reboot to verify all is ok.

Also, remember to turn BitLocker back on if needed.

Post Mortem

How is the new SSD drive running?

So far so good, as fast if not better than the old one.


About a month after the upgrade, the space is being put to use.

How about the Samsung 830?

That is now being used for various things in my test lab environment joining other SSD, HHDD and HDDs supporting various physical and virtual server activities including in some testing as part of this series (watch for more in this series soon).

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Web chat Thur May 30th: Hot Storage Trends for 2013 (and beyond)

Storage I/O trends

Join me on Thursday May 30, 2013 at Noon ET (9AM PT) for a live web chat at the 21st Century IT (21cit) site (click here to register, sign-up, or view earlier posts). This will be an online web chat format interactive conversation so if you are not able to attend, you can visit at your convenience to view and give your questions along with comments. I have done several of these web chats with 21cit as well as other venues that are a lot of fun and engaging (time flies by fast).

For those not familiar, 21cIT is part of the Desum/UBM family of sites including Internet Evolution, SMB Authority, and Enterprise Efficiency among others that I do article posts, videos and live chats for.


Sponsored by NetApp

I like these types of sites in that while they have a sponsor, the content is generally kept separate between that of editors and contributors like myself and the vendor supplied material. In other words I coordinate with the site editors on what topics I feel like writing (or doing videos) about that align with the given site’s focus and themes, as opposed to following an advertorial calendar script.

During this industry trends perspective web chat, one of the topics and themes planned for discussion include software defined storage (SDS). View a recent video blog post I did here about SDS. In addition to SDS, Solid State Devices (SSD) including nand flash, cloud, virtualization, object, backup and data protection, performance, management tools among others are topics that will be put out on the virtual discussion table.

Storage I/O trends

Following are some examples of recent and earlier industry trends perspectives posts that I have done over at 21cit:

Video: And Now, Software-Defined Storage!
There are many different views on what is or is not “software-defined” with products, protocols, preferences and even press releases. Check out the video and comments here.

Big Data and the Boston Marathon Investigation
How the human face of big-data will help investigators piece together all the evidence in the Boston bombing tragedy and bring those responsible to justice. Check out the post and comments here.

Don’t Use New Technologies in Old Ways
You can add new technologies to your data center infrastructure, but you won’t get the full benefit unless you update your approach with people, processes, and policies. Check out the post and comments here.

Don’t Let Clouds Scare You, Be Prepared
The idea of moving to cloud computing and cloud services can be scary, but it doesn’t have to be so if you prepare as you would for implementing any other IT tool. Check out the post and comments here.

Storage and IO trends for 2013 (& Beyond)
Efficiency, new media, data protection, and management are some of the keywords for the storage sector in 2013. Check out these and other trends, predictions along with comments here.

SSD and Real Estate: Location, Location, Location
You might be surprised how many similarities between buying real estate and buying SSDs.
Location matters and it’s not if, rather when, where, why and how you will be using SSD including nand flash in the future, read more and view comments here.

Everything Is Not Equal in the Data center, Part 3
Here are steps you can take to give the right type of backup and protection to data and solutions, depending on the risks and scenarios they face. The result? Savings and efficiencies. Read more and view comments here.

Everything Is Not Equal in the Data center, Part 2
Your data center’s operations can be affected at various levels, by multiple factors, in a number of degrees. And, therefore, each scenario requires different responses. Read more and view comments here.

Everything Is Not Equal in the Data center, Part 1
It pays to check your data center. Different components need different levels of security, storage, and availability. Read more and view comments here.

Data Protection Modernizing: More Than Buzzword Bingo
IT professionals and solution providers should put technologies such as disk based backup, dedupe, cloud, and data protection management tools as assets and resources to make sure they receive necessary funding and buy in. Read more and view comments here.

Don’t Take Your Server & Storage IO Pathing Software for Granted
Path managers are valuable resources. They will become even more useful as companies continue to carry out cloud and virtualization solutions. Read more and view comments here.

SSD Is in Your Future: Where, When & With What Are the Questions
During EMC World 2012, EMC (as have other vendors) made many announcements around flash solid-state devices (SSDs), underscoring the importance of SSDs to organizations future storage needs. Read more here about why SSD is in your future along with view comments.

Changing Life cycles and Data Footprint Reduction (DFR), Part 2
In the second part of this series, the ABCDs (Archive, Backup modernize, Compression, Dedupe and data management, storage tiering) of data footprint reduction, as well as SLOs, RTOs, and RPOs are discussed. Read more and view comments here.

Changing Life cycles and Data Footprint Reduction (DFR), Part 1
Web 2.0 and related data needs to stay online and readily accessible, creating storage challenges for many organizations that want to cut their data footprint. Read more and view comments here.

No Such Thing as an Information Recession
Data, even older information, must be protected and made accessible cost-effectively. Not to mention that people and data are living longer as well as getting larger. Read more and view comments here.

Storage I/O trends

These real-time, industry trends perspective interactive chats at 21cit are an open forum format (however be polite and civil) and free of vendor sales or marketing pitches. If you have specific questions you’d like to ask or points of view to express, click here and post them in the chat room at any time (before, during or after).

Mark your calendar for this event live Thursday, May 30, at noon ET or visit after the fact.

Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Part II: How many IOPS can a HDD HHDD SSD do with VMware?

How many IOPS can a HDD HHDD SSD do with VMware?

server storage data infrastructure i/o iop hdd ssd trends

Updated 2/10/2018

This is the second post of a two-part series looking at storage performance, specifically in the context of drive or device (e.g. mediums) characteristics of How many IOPS can a HDD HHDD SSD do with VMware. In the first post the focus was around putting some context around drive or device performance with the second part looking at some workload characteristics (e.g. benchmarks).

A common question is how many IOPS (IO Operations Per Second) can a storage device or system do?

The answer is or should be it depends.

Here are some examples to give you some more insight.

For example, the following shows how IOPS vary by changing the percent of reads, writes, random and sequential for a 4K (4,096 bytes or 4 KBytes) IO size with each test step (4 minutes each).

| IO Size for test | Workload Pattern of test | Avg. Resp (R+W) ms | Avg. IOP Sec (R+W) | Bandwidth KB Sec (R+W) |
|---|---|---|---|---|
| 4KB | 100% Seq 100% Read | 0.0 | 29,736 | 118,944 |
| 4KB | 60% Seq 100% Read | 4.2 | 236 | 947 |
| 4KB | 30% Seq 100% Read | 7.1 | 140 | 563 |
| 4KB | 0% Seq 100% Read | 10.0 | 100 | 400 |
| 4KB | 100% Seq 60% Read | 3.4 | 293 | 1,174 |
| 4KB | 60% Seq 60% Read | 7.2 | 138 | 554 |
| 4KB | 30% Seq 60% Read | 9.1 | 109 | 439 |
| 4KB | 0% Seq 60% Read | 10.9 | 91 | 366 |
| 4KB | 100% Seq 30% Read | 5.9 | 168 | 675 |
| 4KB | 60% Seq 30% Read | 9.1 | 109 | 439 |
| 4KB | 30% Seq 30% Read | 10.7 | 93 | 373 |
| 4KB | 0% Seq 30% Read | 11.5 | 86 | 346 |
| 4KB | 100% Seq 0% Read | 8.4 | 118 | 474 |
| 4KB | 60% Seq 0% Read | 13.0 | 76 | 307 |
| 4KB | 30% Seq 0% Read | 11.6 | 86 | 344 |
| 4KB | 0% Seq 0% Read | 12.1 | 82 | 330 |

Dell/Western Digital (WD) 1TB 7200 RPM SATA HDD (Raw IO) thread count 1 4K IO size

In the above example the drive is a 1TB 7200 RPM 3.5 inch Dell (Western Digital) 3Gb SATA device doing raw (non file system) IO. Note the high IOP rate with 100 percent sequential reads and a small IO size which might be a result of locality of reference due to drive level cache or buffering.

Some drives have larger buffers than others, from a couple of MBytes to 16MB (or more) of DRAM that can be used for read ahead caching. Note that this level of cache is independent of a storage system, RAID adapter or controller or other forms and levels of buffering.

Does this mean you can expect or plan on getting those levels of performance?

I would not make that assumption, and thus this serves as an example of using metrics like these in the proper context.
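One way to sanity check numbers like these is to convert the IOPS back into bandwidth and compare that against a rough sustained media transfer rate for the drive. Here is a small Python sketch doing that; the ~130 MB/sec media rate is my assumption for a 1TB 7.2K SATA drive, so check the manufacturer specs for the actual figure.

# Compare observed throughput (IOPS x IO size) against an assumed sustained
# media transfer rate to judge how close a result is to the drive's streaming limit.
ASSUMED_MEDIA_RATE_MB_S = 130.0   # rough sustained rate for a 1TB 7.2K SATA HDD (assumption)

def observed_mb_per_sec(iops, io_size_kb):
    return iops * io_size_kb / 1000.0   # KBytes/sec to decimal MBytes/sec

# Two rows taken from the 4K table above.
results = [
    ("4KB 100% Seq 100% Read", 29736, 4),
    ("4KB 0% Seq 100% Read (random)", 100, 4),
]

for label, iops, size_kb in results:
    mb_s = observed_mb_per_sec(iops, size_kb)
    ratio = mb_s / ASSUMED_MEDIA_RATE_MB_S
    print(f"{label}: ~{mb_s:.1f} MB/sec, about {ratio:.2f}x the assumed media rate")

When a small block sequential read result lands right at the drive's streaming limit with near zero response time, read ahead caching is likely helping, while the random read rows are closer to what the mechanics alone can do.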

Building off of the previous example, the following is using the same drive however with a 16K IO size.

| IO Size for test | Workload Pattern of test | Avg. Resp (R+W) ms | Avg. IOP Sec (R+W) | Bandwidth KB Sec (R+W) |
|---|---|---|---|---|
| 16KB | 100% Seq 100% Read | 0.1 | 7,658 | 122,537 |
| 16KB | 60% Seq 100% Read | 4.7 | 210 | 3,370 |
| 16KB | 30% Seq 100% Read | 7.7 | 130 | 2,080 |
| 16KB | 0% Seq 100% Read | 10.1 | 98 | 1,580 |
| 16KB | 100% Seq 60% Read | 3.5 | 282 | 4,522 |
| 16KB | 60% Seq 60% Read | 7.7 | 130 | 2,090 |
| 16KB | 30% Seq 60% Read | 9.3 | 107 | 1,715 |
| 16KB | 0% Seq 60% Read | 11.1 | 90 | 1,443 |
| 16KB | 100% Seq 30% Read | 6.0 | 165 | 2,644 |
| 16KB | 60% Seq 30% Read | 9.2 | 109 | 1,745 |
| 16KB | 30% Seq 30% Read | 11.0 | 90 | 1,450 |
| 16KB | 0% Seq 30% Read | 11.7 | 85 | 1,364 |
| 16KB | 100% Seq 0% Read | 8.5 | 117 | 1,874 |
| 16KB | 60% Seq 0% Read | 10.9 | 92 | 1,472 |
| 16KB | 30% Seq 0% Read | 11.8 | 84 | 1,353 |
| 16KB | 0% Seq 0% Read | 12.2 | 81 | 1,310 |

Dell/Western Digital (WD) 1TB 7200 RPM SATA HDD (Raw IO) thread count 1 16K IO size

The previous two examples are excerpts of a series of workload simulation tests (ok, you can call them benchmarks) that I have done to collect information, as well as try some different things out.

The following is an example of the summary for each test output that includes the IO size, workload pattern (reads, writes, random, sequential), duration for each workload step, totals for reads and writes, along with averages including IOP’s, bandwidth and latency or response time.

disk iops

Want to see more numbers, speeds and feeds, check out the following table which will be updated with extra results as they become available.

| Device | Vendor | Make | Model | Form Factor | Capacity | Interface | RPM Speed | Raw Test Result |
|---|---|---|---|---|---|---|---|---|
| HDD | HGST | Desktop | HK250-160 | 2.5 | 160GB | SATA | 5.4K | |
| HDD | Seagate | Mobile | ST2000LM003 | 2.5 | 2TB | SATA | 5.4K | |
| HDD | Fujitsu | Desktop | MHWZ160BH | 2.5 | 160GB | SATA | 7.2K | |
| HDD | Seagate | Momentus | ST9160823AS | 2.5 | 160GB | SATA | 7.2K | |
| HDD | Seagate | MomentusXT | ST95005620AS | 2.5 | 500GB | SATA | 7.2K(1) | |
| HDD | Seagate | Barracuda | ST3500320AS | 3.5 | 500GB | SATA | 7.2K | |
| HDD | WD/Dell | Enterprise | WD1003FBYX | 3.5 | 1TB | SATA | 7.2K | |
| HDD | Seagate | Barracuda | ST3000DM01 | 3.5 | 3TB | SATA | 7.2K | |
| HDD | Seagate | Desktop | ST4000DM000 | 3.5 | 4TB | SATA | HDD | |
| HDD | Seagate | Capacity | ST6000NM00 | 3.5 | 6TB | SATA | HDD | |
| HDD | Seagate | Capacity | ST6000NM00 | 3.5 | 6TB | 12GSAS | HDD | |
| HDD | Seagate | Savio 10K.3 | ST9300603SS | 2.5 | 300GB | SAS | 10K | |
| HDD | Seagate | Cheetah | ST3146855SS | 3.5 | 146GB | SAS | 15K | |
| HDD | Seagate | Savio 15K.2 | ST9146852SS | 2.5 | 146GB | SAS | 15K | |
| HDD | Seagate | Ent. 15K | ST600MP0003 | 2.5 | 600GB | SAS | 15K | |
| SSHD | Seagate | Ent. Turbo | ST600MX0004 | 2.5 | 600GB | SAS | SSHD | |
| SSD | Samsung | 840 Pro | MZ-7PD256 | 2.5 | 256GB | SATA | SSD | |
| SSD | Seagate | 600 SSD | ST480HM000 | 2.5 | 480GB | SATA | SSD | |
| SSD | Seagate | 1200 SSD | ST400FM0073 | 2.5 | 400GB | 12GSAS | SSD | |

Performance characteristics 1 worker (thread count) for RAW IO (non-file system)

Note: (1) Seagate Momentus XT is a Hybrid Hard Disk Drive (HHDD) based on a 7.2K 2.5 HDD with SLC nand flash integrated as a read buffer in addition to the normal DRAM buffer. This model is an XT I (4GB SLC nand flash); an XT II (8GB SLC nand flash) may be added at some future time.

As a starting point, these results are raw IO, with file system based information to be added soon along with more devices. These results are for tests with one worker or thread count; other results, such as with 16 workers or thread counts, will be added to show how those differ.

The above results include all reads, all writes, a mix of reads and writes, along with all random, sequential and mixed for each IO size. IO sizes include 4K, 8K, 16K, 32K, 64K, 128K, 256K, 512K, 1024K and 2048K. As with any workload simulation, benchmark or comparison test, take these results with a grain of salt as your mileage can and will vary. For example you will see what I consider to be some very high IO rates with sequential reads even without file system buffering. These results might be due to locality of reference of IO’s being resolved out of the drive’s DRAM cache (read ahead), which varies in size for different devices. Use the vendor model numbers in the table above to check the manufacturer’s specs on drive DRAM and other attributes.

If you are used to seeing 4K or 8K and wonder why anybody would be interested in some of the larger sizes take a look at big fast data or cloud and object storage. For some of those applications 2048K may not seem all that big. Likewise if you are used to the larger sizes, there are still applications doing smaller sizes. Sorry for those who like 512 byte or smaller IO’s as they are not included. Note that for all of these unless indicated a 512 byte standard sector or drive format is used as opposed to emerging Advanced Format (AF) 4KB sector or block size. Watch for some more drive and device types to be added to the above, along with results for more workers or thread counts, along with file system and other scenarios.

Using VMware as part of a Server, Storage and IO (aka StorageIO) test platform

vmware vexpert

The above performance results were generated on Ubuntu 12.04 (since upgraded to 14.04) hosted on a VMware vSphere 5.1 (upgraded to 5.5U2) purchased version (you can get the ESXi free version here) with a vCenter enabled system. I also have VMware Workstation installed on some of my Windows-based laptops for doing preliminary testing of scripts and other activity prior to running them on the larger server-based VMware environment. Other VMware tools include vCenter Converter, vSphere Client and CLI. Note that other guest virtual machines (VMs) were idle during the tests (e.g. other guest VMs were quiet). You may experience different results if you ran Ubuntu native on a physical machine or with different adapters, processors and device configurations among many other variables (that was a disclaimer btw ;) ).

Storage I/O trends

All of the devices (HDD, HHDD, SSD’s including those not shown or published yet) were Raw Device Mapped (RDM) to the Ubuntu VM bypassing VMware file system.

Example of creating an RDM for local SAS or SATA direct attached device.

vmkfstools -z /vmfs/devices/disks/naa.600605b0005f125018e923064cc17e7c /vmfs/volumes/dat1/RDM_ST1500Z110S6M5.vmdk

The above uses the drive’s address (found by doing an ls -l /dev/disks via the VMware shell command line) to create a vmdk container stored in a datastore (e.g. dat1). Note that the RDM being created does not actually store data in the .vmdk; it is there for VMware management operations.

If you are not familiar with how to create a RDM of a local SAS or SATA device, check out this post to learn how. This is important to note in that while VMware was used as a platform to support the guest operating systems (e.g. Ubuntu or Windows), the real devices are not being mapped through or via VMware virtual drives.
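Since the RDM mapping is just a command, it is also easy to script. Below is a minimal Python sketch (my own helper, with a hypothetical second device ID and the dat1 datastore name reused from above) that builds the same form of vmkfstools command line for a list of devices:

# Build vmkfstools RDM mapping commands (same form as the example above) for a
# list of local devices. Substitute the naa identifiers reported by
# "ls -l /vmfs/devices/disks"; the second entry here is a placeholder.
DATASTORE = "dat1"   # datastore that will hold the .vmdk mapping files

devices = {
    "naa.600605b0005f125018e923064cc17e7c": "RDM_ST1500Z110S6M5",
    "naa.600605b0005f1250aaaaaaaaaaaaaaaa": "RDM_EXAMPLE_DRIVE",   # hypothetical device
}

for naa, name in devices.items():
    cmd = (f"vmkfstools -z /vmfs/devices/disks/{naa} "
           f"/vmfs/volumes/{DATASTORE}/{name}.vmdk")
    print(cmd)   # run the printed commands from the ESXi shell (or via SSH)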

vmware iops

The above shows examples of RDM SAS and SATA devices along with other VMware devices and dats. In the next figure is an example of a workload being run in the test environment.

vmware iops

One of the advantages of using VMware (or another hypervisor) with RDM’s is that I can quickly define via software commands where a device gets attached to different operating systems (e.g. the other aspect of software defined storage). This means that after a test run, I can quickly and simply shut down Ubuntu, remove the RDM device from that guest’s settings, move the device just tested to a Windows guest if needed and restart those VMs. All of that from wherever I happen to be working, without physically changing things or dealing with multi-boot or cabling issues.

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

So how many IOPs can a device do?

That depends, however have a look at the above information and results.

Check back from time to time here to see what is new or has been added including more drives, devices and other related themes.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

How many I/O iops can flash SSD or HDD do?

How many i/o iops can flash ssd or hdd do with vmware?

sddc data infrastructure Storage I/O ssd trends

Updated 2/10/2018

A common question I run across is how many I/O IOPS a flash SSD or HDD storage device or system can do or give.

The answer is or should be it depends.

This is the first of a two-part series looking at storage performance, and in context specifically around drive or device (e.g. mediums) characteristics across HDD, HHDD and SSD that can be found in cloud, virtual, and legacy environments. In this first part the focus is around putting some context around drive or device performance with the second part looking at some workload characteristics (e.g. benchmarks).

What about cloud, tape summit resources, storage systems or appliances?

Let’s leave those for a different discussion at another time.

Getting started

Part of my interest in tools, metrics that matter, measurements, analysis and forecasting ties back to having been a server, storage and IO performance and capacity planning analyst when I worked in IT. Another aspect ties back to also having been a sys admin as well as a business applications developer when on the IT customer side of things. This was followed by switching over to the vendor world, involved with among other things competitive positioning, customer design configuration, validation, simulation and benchmarking of HDD and SSD based solutions (e.g. life before becoming an analyst and advisory consultant).

Btw, if you happen to be interested in learning more about server, storage and IO performance and capacity planning, check out my first book Resilient Storage Networks (Elsevier) which has a bit of information on it. There is also coverage of metrics and planning in my two other books The Green and Virtual Data Center (CRC Press) and Cloud and Virtual Data Storage Networking (CRC Press). I have some copies of Resilient Storage Networks available at a special reader or viewer rate (essentially shipping and handling). If interested drop me a note and I can fill you in on the details.

There are many rules of thumb (RUT) when it comes to metrics that matter such as IOPS, some of which are older while others may be guessed or measured in different ways. However the answer is that it depends on many things, ranging from whether it is a standalone hard disk drive (HDD), Hybrid HDD (HHDD) or Solid State Device (SSD), or whether it is attached to a storage system, appliance, or RAID adapter card, among others.

Taking a step back, the big picture

hdd image
Various HDD, HHDD and SSD’s

Server, storage and I/O performance and benchmark fundamentals

Even if just looking at a HDD, there are many variables, ranging from the rotational speed or Revolutions Per Minute (RPM) to the interface, including 1.5Gb, 3.0Gb, 6Gb or 12Gb SAS or SATA, or 4Gb Fibre Channel. Simply using a RUT or number based on RPM can cause issues, particularly with 2.5 vs. 3.5 inch or enterprise vs. desktop drives. For example, some current generation 10K 2.5 HDD can deliver the same or better performance than an older generation 3.5 15K. Other drive factors (see this link for HDD fundamentals) include physical size such as 3.5 inch or 2.5 inch small form factor (SFF), enterprise vs. desktop or consumer class, and the amount of drive level cache (DRAM). The space capacity of a drive can also have an impact, such as whether all or just a portion of a large or small capacity device is used. Not to mention what the drive is attached to, ranging from an internal SAS or SATA drive bay, a USB port, an HBA or RAID adapter card, or a storage system.

disk iops
HDD fundamentals

How about benchmark and performance marketing or comparison tricks, including delayed, deferred or asynchronous writes vs. synchronous or actually committed data to devices? Let’s not forget about short stroking (only using a portion of a drive for better IOP’s) or even long stroking (to get better bandwidth leveraging spiral transfers) among others.

Almost forgot, there are also thick, standard, thin and ultra thin drives in 2.5 and 3.5 inch form factors. What’s the difference? The number of platters and read write heads. Look at the following image showing various thickness 2.5 inch drives that have various numbers of platters to increase space capacity in a given density. Want to take a wild guess as to which one has the most space capacity in a given footprint? Also want to guess which type I use for removable disk based archives along with for onsite disk based backup targets (compliments my offsite cloud backups)?

types of disks
Thick, thin and ultra thin devices

Beyond physical and configuration items, there are logical configuration items including the type of workload: large or small IOPS, random, sequential, reads, writes or mixed (various random, sequential, read, write, large and small IO). Other considerations include file system or raw device, the number of workers or concurrent IO threads, and the size of the target storage space area (to determine the impact of any locality of reference or buffering). Some other items include how long the test or workload simulation ran for, and whether the device was new or worn in before use, among other items.
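To give an idea of what such a test plan can look like, here is a small Python sketch (my own example matrix, not the exact scripts used for the results shown in part II of this series) that enumerates workload steps across IO size, read percentage and sequential percentage for a workload generator to run:

# Enumerate a simple workload test matrix; each tuple is one step that a
# workload generator (Iometer, Iorate, Vdbench, etc.) would run for a set duration.
import itertools

io_sizes_kb  = [4, 8, 16, 32, 64]     # IO sizes to exercise
read_pcts    = [100, 60, 30, 0]       # percent reads (the rest are writes)
seq_pcts     = [100, 60, 30, 0]       # percent sequential (the rest random)
step_minutes = 4                      # duration per step

steps = list(itertools.product(io_sizes_kb, read_pcts, seq_pcts))
for size_kb, rd, seq in steps[:4]:    # show the first few steps
    print(f"{size_kb}K IO, {rd}% read, {seq}% sequential for {step_minutes} minutes")
print(f"... {len(steps)} steps total, about {len(steps) * step_minutes} minutes of run time")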

Tools and the performance toolbox

Then there are the various tools for generating IO’s or workloads along with recording metrics such as reads, writes, response time and other information. Some examples (mix of free or for fee) include Bonnie, Iometer, Iorate, IOzone, Vdbench, TPC, SPC, Microsoft ESRP, SPEC and netmist, Swifttest, Vmark, DVDstore and PCmark 7 among many others. Some are focused just on the storage system and IO path while others are application specific thus exercising servers, storage and IO paths.

performance tools
Server, storage and IO performance toolbox

Having used Iometer since the late 90s, it has its place and is popular given its ease of use. Iometer is also long in the tooth and has its limits, including not much if any new development; nevertheless, I have it in the toolbox. I also have Futuremark PCmark 7 (full version) which it turns out has some interesting abilities to do more than exercise an entire Windows PC. For example PCmark can use a secondary drive for doing IO to.

PCmark can be handy for spinning up with VMware (or other tools) lots of virtual Windows systems pointing to a NAS or other shared storage device doing real world type activity. Something that could be handy for testing or stressing virtual desktop infrastructures (VDI) along with other storage systems, servers and solutions. I also have Vdbench among others tools in the toolbox including Iorate which was used to drive the workloads shown below.

What I look for in a tool is how extensible the scripting capabilities are for defining various workloads, along with the capabilities of the test engine. A nice GUI is handy, which makes Iometer popular, and yes there are script capabilities with Iometer. That is also where Iometer is long in the tooth compared to some of the newer generation of tools that have more emphasis on extensibility vs. ease of use interfaces. This also assumes knowing what workloads to generate vs. simply kicking off some IOPs using default settings to see what happens.

Another handy type of tool is one for recording what’s going on with a running system, including IO’s, reads, writes, bandwidth or transfers, random and sequential among other things. This is where when needed I turn to something like HiMon from HyperIO; if you have not tried it, get in touch with Tom West over at HyperIO and tell him StorageIO sent you to get a demo or trial. HiMon is what I used for doing start, stop and boot among other testing, being able to see IO’s at the Windows file system level (or below) including very early in the boot or shutdown phase.

Here is a link to some other things I did awhile back with HiMon to profile some Windows and VDI activity test profiling.

What’s the best tool or benchmark or workload generator?

The one that meets your needs, usually your applications or something as close as possible to it.

disk iops
Various 2.5 and 3.5 inch HDD, HHDD, SSD with different performance

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

That depends, however continue reading part II of this series to see some results for various types of drives and workloads.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.