Seagate has shipped over 10 Million storage HHDD’s, is that a lot?

Seagate recently announced that it has shipped over 10 million Hybrid Hard Disk Drives (HHDD), also known as Solid State Hybrid Drives (SSHD), over the past few years. Disclosure: Seagate has been a StorageIO client.

I know where some of those desktop-class HHDDs, including Momentus XTs, ended up, as I bought some of the 500GB and 750GB models via Amazon and have them in various systems. Likewise, I have installed the newer generation of enterprise-class SSHDs, which Seagate now refers to as Turbo models, in VMware servers as companions to my older HHDDs.

What is an HHDD or SSHD?

HHDDs continue to evolve, from initially accelerating reads to now also speeding up write operations, across different families (desktop/mobile, workstation and enterprise). What makes an HHDD or SSHD is that, as the name implies, they are a hybrid combining a traditional spinning magnetic Hard Disk Drive (HDD) with flash SSD storage. The flash persistent memory is in addition to the DRAM (non-persistent) memory typically found on HDDs and used as a cache buffer. These HHDDs or SSHDs are self-contained in that the flash is built into the actual drive as part of its internal electronics circuit board (controller). This means the drives should be transparent to operating systems or hypervisors on servers or storage controllers, without the need for special adapters, controller cards or drivers. In addition, there is no extra software needed to automate tiering or movement between the flash on the HHDD or SSHD and its internal HDD; it is all self-contained, managed by the drive's firmware (e.g. software).
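
To make the self-contained caching idea concrete, below is a toy illustration of a flash read cache fronting magnetic media. This is a conceptual sketch only; the class and method names are made up, and the real logic lives in proprietary drive firmware:

```python
from collections import OrderedDict

class ToyHybridDrive:
    """Conceptual model of an SSHD/HHDD: a small flash read cache in front
    of slower magnetic media, all inside the drive (no host software)."""

    def __init__(self, flash_blocks=64):
        self.flash = OrderedDict()   # LBA -> data, kept in LRU order
        self.flash_blocks = flash_blocks
        self.hdd = {}                # stand-in for the magnetic platters

    def read(self, lba):
        if lba in self.flash:        # flash hit: SSD-like latency
            self.flash.move_to_end(lba)
            return self.flash[lba], "flash"
        data = self.hdd.get(lba)     # miss: pay the mechanical cost
        self._promote(lba, data)     # hot data gets cached for next time
        return data, "hdd"

    def _promote(self, lba, data):
        self.flash[lba] = data
        if len(self.flash) > self.flash_blocks:
            self.flash.popitem(last=False)   # evict least recently used
```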

Some SSHD and HHDD industry perspectives

Jim Handy over at Objective Analysis has this interesting post discussing Hybrid Drives Not Catching On. The following is an excerpt from Jim’s post.

Why were our expectations higher? 

There were a few reasons:

  • The hybrid drive can be viewed as an evolution of the DRAM cache already incorporated into nearly all HDDs today. Replacing or augmenting an expensive DRAM cache with a slower, cheaper NAND cache makes a lot of sense.
  • An SSHD performs much better than a standard HDD at a lower price than an SSD. In fact, an SSD of the same capacity as today’s average HDD would cost about an order of magnitude more than the HDD. The beauty of an SSHD is that it provides near-SSD performance at a near-HDD price. This could have been a very compelling sales proposition had it been promoted in a way that was understood and embraced by end users.
  • Some expected Seagate to include this technology in all HDDs rather than continuing to use it as a differentiator between different Seagate product lines. The company could have taken either of two approaches: to use hybrid technology to break apart two product lines – standard HDDs and higher-margin hybrid HDDs – or to merge hybrid technology into all Seagate HDDs to differentiate Seagate HDDs from competitors’ products, allowing Seagate to take slightly higher margins on all HDDs. Seagate chose the first path.

The net result is shipments of 10 million units since its 2010 introduction, for an average of 2.5 million per year, out of a total annual HDD shipments of around 500 million units, or one half of one percent.

Continue reading more of Jim’s post here.

In his post, Jim raises some good points, including that HHDDs and SSHDs are still a fraction of the overall HDDs shipped on an annual basis. However, IMHO the annual growth rate has not been a flat average of 2.5 million; rather, it started at a lower rate and has increased year over year. For example, Seagate issued a press release back in summer 2011 saying it had shipped a million HHDDs a year after their release. Also keep in mind that those HHDDs were focused on desktop workstations and aimed in particular at gamers among others.

The early HHDDs such as the Momentus XTs that I was using starting in June 2010 only had read acceleration, which was better than plain HDDs, however it did not help out on writes. Over the past couple of years there have been enhancements to the HHDDs, including the newer generation also known as SSHDs, or Turbo drives as Seagate now calls them. These newer drives include write acceleration as well, with models for mobile/laptop, workstation and enterprise class, including higher-performance and high-capacity versions. Thus my estimate or analysis has the growth on an accelerating curve vs. a linear growth rate (e.g. a flat average of 2.5 million units per year).

Period       Units shipped per year    Running total units shipped
2010-2011    1.0 Million               1.0 Million
2011-2012    1.25 Million (est.)       2.25 Million (est.)
2012-2013    2.75 Million (est.)       5.0 Million (est.)
2013-2014    5.0 Million (est.)        10.0 Million

StorageIO estimates on HHDD/SSHD units shipped based on Seagate announcements
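
Putting the estimates above into a quick calculation shows the accelerating (roughly doubling) curve vs. a flat 2.5 million per year average. A minimal sketch; the yearly figures are the StorageIO estimates from the table, not Seagate-confirmed numbers:

```python
# StorageIO estimates (millions of units) from the table above.
yearly_est = {
    "2010-2011": 1.0,
    "2011-2012": 1.25,
    "2012-2013": 2.75,
    "2013-2014": 5.0,
}

running = 0.0
for period, units in yearly_est.items():
    running += units
    print(f"{period}: {units:.2f}M shipped, {running:.2f}M cumulative")

# Jim Handy's flat-rate view: the same 10M units spread evenly over 4 years.
print(f"Flat average: {running / len(yearly_est):.2f}M per year")
```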

However, IMHO there is more to the story beyond the number of HHDDs/SSHDs shipped, or whether they are accelerating in deployment vs. growing at an average rate. Some of those perspectives are in my comments over on Jim Handy’s site, with an excerpt below.

In talking with IT professionals (e.g. what the vendors/industry call users/customers), they are generally not aware that these devices exist, or if they are, they are only aware of what was available in the past (e.g. the consumer-class read-optimized versions). I do talk with some who are aware of the newer generation devices, however their comments are usually tied to lack of system integrator (SI) or vendor/OEM support, or sole-source concerns. There was also a focus on promoting the HHDDs to “gamers” and other power users as opposed to broader marketing efforts, and most of these IT people are not aware of the newer generation of SSHDs or what Seagate is now calling “Turbo” drives.

When talking with VARs, there is a similar reaction: discussion about lack of support for HHDDs or SSHDs from the SI/vendor OEMs, or single-source supply concerns. Another common reaction is lack of awareness around the current generation of SSHDs (e.g. those that do write optimization, as well as the enterprise-class versions).

When talking with vendors/OEMs, there is a general lack of awareness of the newer enterprise-class SSHDs/HHDDs that do write acceleration. Sometimes there is concern about how these would disrupt their “hybrid” SSD + HDD or tiering marketing stories/strategies, as well as comments about single-source suppliers. I have also heard concerns about how long or how committed the drive manufacturers are going to be to SSHDs/HHDDs, or whether this is just a gap filler for now.

Not surprisingly, when I talk with industry pundits, influencers and amplifiers (e.g. analysts, media, consultants, blogalysts), there is a reflection of all the above: lack of awareness of what is available (not to mention lack of experience) vs. repeating what has been heard or read about in the past.

IMHO, while there are some technology hurdles, the biggest issue and challenge is basic marketing and business development to generate awareness with the industry (e.g. pundits), vendors/OEMs, VARs and IT customers; that is of course assuming SSHDs/HHDDs are here to stay and not just a passing fad…

What about SSHD and HHDD performance on reads and writes?

What about the performance of today’s HHDDs and SSHDs, particularly those that can accelerate writes as well as reads?

Enterprise Turbo SSHD read and write performance (Exchange Email)

Enterprise Turbo SSHD read and write performance (TPC-B database)

Enterprise Turbo SSHD read and write performance (TPC-E database)

Additional details and information about HHDDs/SSHDs, or Turbo drives as Seagate now refers to them, can be found in two StorageIO Industry Trends Perspective White Papers (located here and another here).

Where to learn more

Refer to the following links to learn more about HHDD and SSHD devices.
StorageIO Momentus Hybrid Hard Disk Drive (HHDD) Moments
Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy
Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?
2011 Summer momentus hybrid hard disk drive (HHDD) moment
More Storage IO momentus HHDD and SSD moments part I
More Storage IO momentus HHDD and SSD moments part II
New Seagate Momentus XT Hybrid drive (SSD and HDD)
Another StorageIO Hybrid Momentus Moment
SSD past, present and future with Jim Handy

Closing comments and perspectives

I continue to be bullish on hybrid storage solutions, from cloud to storage systems as well as hybrid storage devices. However, as with many technologies, just because something makes sense or is interesting does not mean it is a near-term or long-term winner. My main concern with SSHDs and HHDDs is whether the manufacturers such as Seagate and WD are serious about making them a standard feature in all drives, or simply see them as a near-term stop-gap solution.

What’s your take or experience with using HHDDs and/or SSHDs?

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy

The question to ask yourself is not if flash Solid State Device (SSD) technologies are in your future.

Instead, the questions are when, where, using what and how to configure, along with related themes. SSD, including traditional DRAM and NAND flash-based technologies, is like real estate where location matters; however, there are different types of properties to meet various needs. This means leveraging different types of NAND flash SSD technologies in different locations in a complementary and cooperative, aka hybrid, way.

Introducing Solid State Hybrid Drives (SSHD)

Solid State Hybrid Drives (SSHD) are the successors to the previous generation of Hybrid Hard Disk Drives (HHDD) that I have used for several years (you can read more about them here, and here).

While it would be nice to simply have SSD for everything, there are also economic (budget) realities to be dealt with. Keep in mind that a bit of nand flash SSD cache in the right location for a given purpose can go a long way, which is the case with SSHDs. This is also why in many environments today there is a mix of SSD and HDD of various makes, types, speeds and capacities (e.g. different tiers) to support diverse application needs (e.g. not everything in the data center is the same).

However, if you have the need for speed and can afford or benefit from the increased productivity, by all means go SSD!

Otoh, if you have budget constraints and need more space capacity yet want some performance boost, then SSHDs are an option. The big difference with today’s SSHDs, which are available for enterprise-class storage systems and servers as well as desktop environments, is that they can accelerate both reads and writes. This is different from their predecessors, which I have used for several years now, that had basic read acceleration however no write optimization.

Better Together: Where SSHDs fit in an enterprise tiered storage environment with SSD and HDDs

As their name implies, they are a hybrid between a nand flash Solid State Device (SSD) and a traditional Hard Disk Drive (HDD), meaning a best-of-both situation. SSHDs are based on a traditional spinning HDD (various models with different speeds, space capacities and interfaces) along with DRAM (which is found on most modern HDDs), nand flash for read cache, and some extra nonvolatile memory for persistent write cache, combined with a bit of software-defined storage performance optimization algorithms.

Btw, if you were paying attention to that last sentence, you would have picked up on something about nonvolatile memory being used for persistent write cache, which should prompt the question: would that help with nand flash write endurance? Yup.
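
Here is the same toy-model idea extended with a persistent write buffer, again a hypothetical sketch rather than Seagate's actual Turbo firmware. The point is that writes are acknowledged once they land in nonvolatile cache and are destaged to the platters later, and batching those destages can also reduce flash wear:

```python
class ToyWriteBackSSHD:
    """Conceptual write path for a write-accelerating SSHD."""

    def __init__(self):
        self.nv_cache = {}   # nonvolatile write buffer: survives power loss
        self.hdd = {}        # stand-in for the magnetic media

    def write(self, lba, data):
        self.nv_cache[lba] = data   # fast acknowledgment, data already persistent
        return "ack"

    def destage(self):
        # Flush lazily (idle time or cache pressure); rewriting the same hot
        # LBA in cache before destage means fewer actual media/flash writes.
        self.hdd.update(self.nv_cache)
        self.nv_cache.clear()
```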

Where and when to use SSHD?

In the StorageIO Industry Trends Perspective thought leadership white paper I recently released, compliments of Seagate (that’s a disclosure btw ;), enterprise-class Solid State Hybrid Drives (SSHD), namely the Seagate Enterprise Turbo SSHD, were looked at and test driven in the StorageIO Labs with various application workloads. These activities included running common applications in a virtual environment, including database and email messaging, using industry-standard benchmark workloads (e.g. TPC-B and TPC-E for database, JetStress for Exchange).

Conventional storage system focused workloads using iometer, iorate and vdbench were also run in the StorageIO Labs to set up baselines of reads, writes, random, sequential, and small and large I/O sizes, with IOPS, bandwidth and response time (latency) results. Some of those results can be found here (Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?) with other ongoing workloads continuing in different configurations. The various test-drive proof points were done in the StorageIO Labs comparing SSHD, SSD and different HDDs.

Data Protection (Archiving, Backup, BC, DR)

Staging cache buffer area for snapshots, replication or current copies before streaming to another storage tier, using fast read/write capabilities. Metadata, indexes and catalogs benefit from fast reads and writes for faster protection.

Big Data, DSS and Data Warehouse

Support sequential read-ahead operations and “hot-band” data caching in a cost-effective way using SSHDs vs. slower HDDs of similar capacity for data warehouse, DSS and other analytic environments.

Email, Text and Voice Messaging

Microsoft Exchange and other email journals, mailbox or object repositories can leverage faster read and write I/Os along with more space capacity.

OLTP, Database, Key Value Stores, SQL and NoSQL

Eliminate the need to short-stroke HDDs to gain performance; offer more space capacity and IOPS performance per device for tables, logs, journals, import/export and scratch or temporary (ephemeral) storage. Leverage random and sequential read acceleration to complement server-side SSD-based read and write-through caching. Utilize fast magnetic media for persistent data, reducing wear and tear on more costly flash SSD storage devices.

Server Virtualization

Fast disk storage for data stores and virtual disks supporting VMware vSphere/ESXi, Microsoft Hyper-V, KVM, Xen and others, holding virtual machine files such as VMware VMDKs along with Hyper-V and other hypervisor virtual disks. Complement virtual server read cache and I/O optimization using SSD as a cache, with writes going to fast SSHD. For example, VMware vSphere 5.5 Virtual SAN host disk groups use SSD as a read cache and can use SSHD as the magnetic disk for storing data, boosting performance without breaking the budget or adding complexity.

Speaking of virtual, as mentioned, the various proof points were run using Windows systems that were VMware guests, with the SSHD and other devices being Raw Device Mapped (RDM) SAS and SATA attached; read how to do that here.

Hint: If you know about the VMware trick for making an HDD look like an SSD to vSphere/ESXi (refer to here and here), think outside the virtual box for a moment on some things you could do with SSHDs in a VSAN environment among other things; for now, just sayin ;).

Virtual Desktop Infrastructure (VDI)

SSHDs can be used as high-performance magnetic disk for storing linked clone images, applications and data. Leverage fast reads to support read-ahead or pre-fetch to complement SSD-based read cache solutions. Utilize fast writes to quickly store data, enabling SSD-based read or write-through cache solutions to be more effective. Reduce the impact of boot, shutdown, virus scan or maintenance storms while providing more space capacity.

Table 1 Example application and workload scenarios benefiting from SSHDs

Test drive application proof points

Various workloads were run using the Seagate Enterprise Turbo SSHD in the StorageIO lab environment across different real-world-like application workload scenarios. These include general storage I/O performance profiling (e.g. reads, writes, random, sequential and various I/O sizes) to understand how these devices compare to other HDD, HHDD and SSD storage devices in terms of IOPS, bandwidth and response time (latency). In addition to basic storage I/O profiling, the Enterprise Turbo SSHD was also used with various SQL database workloads including Transaction Processing Council (TPC) workloads, along with VMware server virtualization among other use case scenarios.

Note that in the following workload proof points a single drive was used, meaning that using more drives in a server or storage system should yield better performance. This also means scaling would be bound by the constraints of a given configuration, server or storage system. These tests were conducted using 6Gbps SAS with PCIe Gen 2 based servers; ongoing testing is confirming even better results with 12Gbps SAS and faster servers with PCIe Gen 3.

Copy (read and write) 80GB and 220GB file copies (time to copy entire file)

SQL Server TPC-B batch database updates

Test configuration: 600GB 2.5” Enterprise Turbo SSHD (ST600MX) 6Gbps SAS; 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) 6Gbps SAS; 500GB 3.5” 7.2K RPM HDD 3Gbps SATA; 1TB 3.5” 7.2K RPM HDD 3Gbps SATA. Workload generator and virtual clients ran on Windows 7 Ultimate. Microsoft SQL Server 2012 database was on Windows 7 Ultimate SP1 (64 bit) with 14GB DRAM, dual CPU (Intel X3490 2.93 GHz) and LSI 9211 6Gbps SAS adapters, running TPC-B (www.tpc.org) workloads. The VM resided on a separate data store from the devices being tested. All devices being tested with the SQL MDF were Raw Device Mapped (RDM) independent persistent, with the database log file (LDF) on a separate SSD device, also persistent (no delayed writes). Tests were performed in StorageIO Lab facilities by StorageIO personnel.

SQL Server TPC-E transactional workload

Test configuration: 600GB 2.5” Enterprise Turbo SSHD (ST600MX) 6Gbps SAS; 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) 6Gbps SAS; 300GB 2.5” Savio 10K RPM HDD 6Gbps SAS; 1TB 3.5” 7.2K RPM HDD 6Gbps SATA. Workload generator and virtual clients ran on Windows 7 Ultimate. Microsoft SQL Server 2012 database was on Windows 7 Ultimate SP1 (64 bit) with 14GB DRAM, dual CPU (E8400 2.99GHz) and LSI 9211 6Gbps SAS adapters, running TPC-E (www.tpc.org) workloads. The VM resided on a separate SSD-based data store from the devices being tested (e.g., where the MDF resided). All devices being tested were Raw Device Mapped (RDM) independent persistent, with the database log file on a separate SSD device, also persistent (no delayed writes). Tests were performed in StorageIO Lab facilities by StorageIO personnel.

Microsoft Exchange workload

Test configuration: 2.5” Seagate 600 Pro 120GB (ST120FP0021) SSD 6Gbps SATA; 600GB 2.5” Enterprise Turbo SSHD (ST600MX) 6Gbps SAS; 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) 6Gbps SAS; 2.5” Savio 146GB HDD 6Gbps SAS; 3.5” Barracuda 500GB 7.2K RPM HDD 3Gbps SATA. The email server was hosted as a guest on VMware vSphere/ESXi 5.5: Microsoft Small Business Server (SBS) 2011 Service Pack 1 (64 bit), 8GB DRAM, one CPU (Intel X3490 2.93 GHz), LSI 9211 6Gbps SAS adapter, JetStress 2010 (no other active workload during test intervals). All devices being tested were Raw Device Mapped (RDM) where the EDB resided. The VM was on a separate SSD-based data store from the devices being tested. Log file IOPS were handled via a separate SSD device.

Read more about the above proof points, along with the data points and configuration information, in the associated white paper found here (no registration required).

What this all means

Similar to flash-based SSD technologies, the question is not if, rather when, where, why and how to deploy hybrid solutions such as SSHDs. If your applications and data infrastructure environment have the need for storage I/O speed without losing space capacity or breaking your budget, SSD-enabled devices like the Seagate Enterprise Turbo 600GB SSHD are in your future. You can learn more about enterprise-class SSHDs such as those from Seagate by visiting this link here.

Watch for extra workload proof points being performed including with 12Gbps SAS and faster servers using PCIe Gen 3.

Ok, nuff said.

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

March 2014 StorageIO Update Newsletter : Cisco Cloud, VMware VSAN and More

Industry Trends Perspectives: Cisco Cloud and VMware VSAN

Welcome to the March 2014 edition of the StorageIO Update newsletter, containing trends and perspectives on cloud, virtualization and data infrastructure topics. Technically it is now spring here in North America, and to say that we have had abnormally cold weather would be an understatement. However, it is March with April just around the corner, meaning plenty to do, including several upcoming events (see below).

Clouds and Cisco

Some recent industry activity includes Cisco announcing its cloud intentions (e.g. more than simply selling servers and networking hardware). So far the Cisco cloud move appears to be more about hybrid and partner ecosystems, including channels, vs. going toe to toe with Amazon Web Services (AWS). Cisco appears to be playing the hybrid theme of being a technology supplier as well as a provider or partner. Thus, it looks like for the near term the Cisco cloud target is not so much AWS as the likes of IBM (who recently added Softlayer) or HP.

Greg Schulz on break

It will also be interesting to watch where and how other Cisco partners such as EMC, Microsoft, NetApp, VCE and VMware participate. Keep in mind that some of these and other Cisco partners also have their own public, private and hybrid cloud initiatives and services, along with being suppliers to each other.

VMware VSAN Software Defined Storage

Another industry activity involving server, storage I/O, networking hardware, software and virtualization (aka software defined) was the general availability (GA) announcement by VMware of Virtual SAN (VSAN). VMware VSAN went into public beta around the VMworld 2013 timeframe, when many of us downloaded, installed and did various types of testing with it.

For those not familiar with VSAN, it is added licensed software functionality for VMware that creates a cluster to host Virtual Machines (VMs) along with its own shared, resilient storage solution (e.g. Software Defined Storage). VSAN works by using dedicated direct attached storage (DAS) devices (PCIe, SAS, SATA) that are local to the VMware host servers (physical machines or PMs). The VMware host PMs support DAS Hard Disk Drives (HDD), Solid State Devices (SSD) including PCIe cards, drives or DIMMs, along with Solid State Hybrid Drives (SSHD). This local DAS storage is served and shared among the nodes (up to 32 hosts or PMs per VSAN cluster), balancing performance, availability (and resiliency) along with space capacity to host VM objects. Note that VM objects include VMDKs (e.g. virtual disks) and are not to be confused with the other type of object storage or access such as CDMI/SWIFT/S3/HTTP/REST.

VMs (and those managing them) see datastores in the VSAN cluster that are familiar from other VMware implementations, including storage policies and other tools. Here is a link to a great piece by Patrick Schulz, a data infrastructure systems engineer in Germany (no relation, at least not that I know of yet), where he shares his experiences with a VSAN implementation.

Generic VSAN example

Instead of using an external iSCSI, Fibre Channel (FC) or FC over Ethernet (FCoE) shared SAN or NAS storage system / appliance to create the storage repository, local DAS is leveraged in disk groups spread across the hosts in the VSAN cluster (up to 32 nodes). VSAN requires a percentage of SSD capacity for each disk group on the host cluster nodes; that part is used for caching data, which is persistently stored on HDD-based media.

VSAN software is licensed by the number of active sockets (not cores) in the host servers (PMs) that are in the cluster, or by the number of VDI users (guest VMs). For example, if there are four servers, two with one socket and two with dual sockets, there would be six socket licenses. The MSRP license cost per processor socket is $2,495 USD, which also assumes core VMware licenses already exist. There is also a per-guest-VM license of $50 per VDI instance, as well as other optional license models and bundles with different features or upgrades.
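
The socket math above works out as follows; a minimal sketch using the list prices cited in this post (actual street pricing and bundles will vary):

```python
PRICE_PER_SOCKET = 2495  # USD MSRP per active socket, per the text above

sockets_per_host = [1, 1, 2, 2]   # two single-socket and two dual-socket servers
total_sockets = sum(sockets_per_host)
print(f"{total_sockets} socket licenses x ${PRICE_PER_SOCKET:,} = "
      f"${total_sockets * PRICE_PER_SOCKET:,}")
# 6 socket licenses x $2,495 = $14,970 (plus core vSphere licensing)
```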

What is different with VSAN vs. other VMware clusters is that the storage is only accessible to VMs that are in the VSAN cluster (unless a VM exports and serves it to others via NFS, iSCSI, etc., which is a different conversation for another day). Another difference is that today VSAN leverages storage inside of servers or direct attached, as opposed to using iSCSI, FC or FCoE SAN or NAS storage systems.

Btw, the current maximum LUN, volume or target storage device size is 4TB, so if you were thinking of taking a SAS-attached storage system and creating a bunch of small LUNs, you might want to review that from a cost perspective, at least for today.

There is much more to VSAN, including how it works, what it can and cannot do, and who should (and should not) use it for different apps. However, IMHO, besides lower-end, SMB, workgroup, departmental and VMware-centric environments, the number one scenario today is VDI, along with where converged solutions such as those from Nutanix, Simplivity and Tintri among others are playing.

Watch for more StorageIO posts, commentary, perspectives, presentations, webinars, tips and events on information and data infrastructure topics, themes and trends. Data Infrastructure topics include among others cloud, virtual, legacy server, storage I/O networking, data protection, hardware and software.

Check out our backup, restore, BC, DR and archiving resources (under the Resources section on StorageIO.com) for extra content.

StorageIO Industry Trends and Perspectives
Industry trends, tips, commentary, articles and blog posts: what is being seen, heard and talked about while out and about

The following is a synopsis of some StorageIOblog posts, articles and comments in different venues on various industry trends and perspectives about clouds, virtualization, and data and storage infrastructure topics, among related themes.

StorageIO in the news
Recent StorageIO comments and perspectives in the news

SearchSolidStateStorage: Comments on automated storage tiering and flash
EnterpriseStorageForum: Comments on Cloud-Storage Mergers and Acquisitions
SearchDataBackup: Comments on near-CDP nudging true CDP from landscape
EnterpriseStorageForum: Comments on Ways to Avoid Cloud Storage Pricing Surprises
SearchDataBackup: Q&A: Snapshot, replication ‘great approach’ for data protection
SearchDataBackup: Comments on LTFS-enabled products

StorageIO tips and articles
Recent StorageIO tips and articles in various venues

InformationSecurityBuzz: Dark Territories – Do You Know Where Your Information Is?
InformationSecurityBuzz: Rings Of Security For Data Protection Or For Appearance?
SearchSolidStateStorage: Q&A on automated storage tiering and flash
SpiceWorks: My copies were corrupted: The 3-2-1 data protection rule

StorageIOblog posts
Recent StorageIOblog posts and perspectives

  • Missing MH370 reminds us, do you know where your digital assets are? Click to read more
  • Old School, New School, Current and Back to School – Click to read and view poll
  • USENIX FAST (File and Storage Technologies) 2014 Proceedings – Click to read more
  • Spring 2014 StorageIO Events and Activities Update Click to view
  • Review – iVMcontrol iPhone VMware management, iTool or iToy? Click to read more
  • February 2014 Server StorageIO Update Newsletter
  • Remember to check out our objectstoragecenter.com page where you will find a growing collection of information and links on cloud and object storage themes, technologies and trends from various sources.

Server and StorageIO seminars, conferences, webcasts, events and activities
StorageIO activities (out and about)

Seminars, symposia, conferences, webinars
Live in-person and recorded recent and upcoming events

The StorageIO calendar continues to evolve; here are some recent and upcoming activities.


June 12, 2014: The Many Facets of Virtual Storage and Software Defined Storage Virtualization (Webinar, 9AM PT)
June 11, 2014: The Changing Face and Landscape of Enterprise Storage (Webinar, 9AM PT)
May 16, 2014: What you need to know about virtualization (Demystifying Virtualization), Nijkerk, Holland
May 15, 2014: Data Infrastructure Industry Trends: What’s New and Trending, Nijkerk, Holland
May 14, 2014: To be announced, Nijkerk, Holland
May 13, 2014: Data Movement and Migration: Storage Decision Making Considerations, Nijkerk, Holland
May 12, 2014: Rethinking Business Resiliency: From Disaster Recovery to Business Continuance, Nijkerk, Holland
May 5-7, 2014: EMC World, Las Vegas
April 22-23, 2014: SNIA DSI Event, presenting “The Cloud Hybrid Home Run: Life beyond the Hype”, Santa Clara, CA
April 16, 2014: Open Source and Cloud Storage: Enabling business, or a technology enabler? (Webinar, 9AM PT)
April 9, 2014: Storage Decision Making for Fast, Big and Very Big Data Environments (Webinar, 9AM PT)
April 8, 2014: NAB, National Association of Broadcasters (e.g. a very big fast data event), Las Vegas
March 27, 2014: Keynote: The 2017 Datacenter (Preparing for the 2017 Datacenter sessions), Edina, 8:00AM (Register Here)

Click here to view other upcoming and earlier event activities. Watch for more 2014 events to be added soon to the StorageIO events calendar page. Topics include data protection modernization (backup/restore, HA, BC, DR, archive), data footprint reduction (archive, compression, dedupe), storage optimization, SSD, object storage, server and storage virtualization, big data, little data, cloud and object storage, and performance and management trends, among others.

Vendors, VARs and event organizers: give us a call or send an email to discuss having us involved in your upcoming podcast, webcast, virtual seminar, conference or other events.

    Thank you to the current StorageIoblog.com site sponsor advertisers

    Druva (End Point Data Protection)
    Unitrends (Enterprise backup solution and management tools)
    Veeam (VMware and Hyper-V virtual server backup and data protection tools).

    Contact StorageIO to learn about sponsorship and other partnership opportunities.

Click here to view earlier StorageIO Update newsletters (HTML and PDF versions), and click here to subscribe to this newsletter (and pass it along). View archives of past StorageIO Update newsletters, as well as download PDF versions, at: www.storageio.com/newsletter

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Dell Inspiron 660 i660, Virtual Server Diamond in the rough?

    Storage I/O trends

During the 2013 post-Thanksgiving Black Friday shopping day, I did some online buying, including a Dell Inspiron 660 i660 (5629BK) to be used as a physical machine (PM) or VMware host (among other things).

Now technically, I know this is a workstation or desktop and thus not what some would consider a server. However, as another PM to add to my VMware environment (or to be used as a bare-metal platform), it is a good companion to my other systems.

    Via Dell.com Dell 660 i660

    Taking a step back, needs vs. wants

Initially my plan for this system was to go with a larger, more expensive model with as many DDR3 DIMM (memory) and PCIe x4/x8/x16 expansion slots as possible. Some of my other criteria were PCIe Gen 3 and the latest Intel processor generation with VT (Virtualization Technology) and Extended Page Tables (EPT) for server virtualization support, all without breaking my budget. Heck, I would love a Dell VRTX or some similar type of server from the likes of Cisco, HP, IBM, Lenovo or Supermicro among many others. On the other hand, I really don't need one of those types of systems yet, unless of course somebody wants to send some to play with (excuse me, test drive, try out).

Hence needs are what I must have, while wants are those things that would be, well, nice to have.

    Server shopping and selection

In the course of shopping around and looking at alternatives, I had previously talked with Robert Novak (aka @gallifreyan), who reminded me to think outside the box a bit, literally. Check out Robert's blog (aka rsts11, a great blog name btw for those of us who used to work with RSTS, RSX and others), including a post he did shortly after I had a conversation with him. If you read his post and continue through this one, you should be able to connect the dots.

    While I still have a need and plans for another server with more PCIe and DDR3 (maybe wait for DDR4? ;) ) slots, I found a Dell Inspiron 660.

Candidly, normally I would have skipped over this type or class of system; however, what caught my eye was that while it is limited to only two DDR3 DIMM slots and a single PCIe x16 slot, there are three extra x1 slots, which while not as robust certainly give me some options if I need them for older, slower things. Likewise, leveraging higher-density DIMMs, the system is already at 16GB RAM, waiting for larger DIMMs if needed.

VMware view of the Inspiron 660

The Dell Inspiron 660 i660 I found had a price of a little over $550 (delivered) with an Intel i5-3330 processor (quad-core, quad-thread, 3GHz clock), PCIe Gen 3, one PCIe x16 and three PCIe x1 slots, 8GB DRAM (since reallocated), a GbE port and built-in WiFi, Windows 8 (since P2V'd and moved into the VMware environment), keyboard and mouse, plus a 1TB 6Gbps SATA drive. I could afford two, maybe three or four of these in place of a larger system (at least for now). While for some things I have a need for a single larger server, there are other things where having multiple smaller ones with enough processing performance, VT and EPT support comes in handy (if not required for some virtual servers).

One of the enhancements I made once the initial setup of the Windows system was complete was to do a clone and P2V of that image, and then redeploy the 1TB SATA drive to join others in the storage pool. Thus the 1TB SATA HDD has been replaced with (for now) a 500GB Momentus XT HHDD, which by the time you read this could already have changed to something else.

Another enhancement was bumping up the memory from 8GB to 16GB, and then adding a StarTech enclosure (see below) for more internal SAS/SATA storage (it supports both 2.5" SAS and SATA HDDs as well as SSDs). In addition to the on-board SATA drive port plus one being used for the CD/DVD, there are two more ports for attaching to the StarTech or other large 3.5" drives that live in the drive bay. Depending on what I'm using this system for, it has different types of adapters for external expansion or networking, some of which have already included 6Gbps and 12Gbps SAS HBAs.

    What about adding more GbE ports?

As this is not a general-purpose larger system with many PCIe expansion slots, that is one of the downsides you get for this cost. However, depending on your needs, you have some options. For example, I have some Intel PCIe x1 GbE cards to give extra networking connectivity if or when needed. Note however that as these are PCIe x1 slots they are PCIe Gen 1, so from a performance perspective exercise caution when mixing these with other newer, faster cards when performance matters (more on this in the future).

    Via Amazon.com Intel PCIe x1 GbE card
    Via Amazon.com Intel (Gigabit CT PCI-E Network Adapter EXPI9301CTBLK)

One of the caveats to be aware of if you are going to be using VMware vSphere/ESXi is that the Realtek GbE NIC on the Dell Inspiron 660 i660 may not play well with it; however, there are workarounds. Check out some of the workarounds over at the Kendrick Coleman (@KendrickColeman) and Erik Bussink (@ErikBussink) sites, both of which were very helpful, and I can report that the Realtek GbE is working fine with VMware ESXi 5.5a.

Need some extra SAS and SATA internal expansion slots for HDDs and SSDs?

The StarTech 4 x 2.5″ SAS and SATA internal enclosure supports various-speed SSDs and HDDs depending on what you connect the back-end connector port to. On the back of the enclosure chassis there is a connector that is a pass-through to the SAS drive interface, which also accepts SATA drives. This StarTech enclosure fits nicely into an empty 5.25″ CD/DVD expansion bay; you then attach the individual drive bays to your internal motherboard SAS or SATA ports, or to those on another adapter.

    Via Amazon.com StarTech 4 port SAS / SATA enclosure
    Via Amazon.com StarTech 4 x 2.5" SAS and SATA internal enclosure

So far I have used these enclosures attached to various adapters at different speeds as well as with HDDs, HHDDs, SSHDs and SSDs at various SAS/SATA interface speeds up to 12Gbps. Note that unlike some other enclosures that have a SAS or SATA expander, the drive bays in the StarTech are pass-through, hence not regulated by an expander chip and its speed. The price for these StarTech enclosures is around $60-90 USD, and they are good for internal storage expansion (hmm, need to build your own NAS or VSAN or storage server appliance? ;) ).

    Via Amazon Molex power connector

Note that you will also need to get a Molex power connector to go from the back of the drive enclosure to an available power port, such as one for an expansion DVD/CD, which you can find at Radio Shack, Fry's or many other venues for a couple of dollars. Double-check your specific system and cable connector leads to verify what you will need.

    How is it working and performing

So far so good; in addition to using it for some initial calibration and validation activities, the Inspiron 660 is performing very well with no buyer's remorse. Ok, sure, I would like more PCIe Gen 3 x4/x8/x16 slots or an extra on-board Ethernet port, however all the other benefits have outweighed those pitfalls.

Speaking of which, if you think an SSD (or other fast storage device) is fast on a 6Gbps SAS or PCIe Gen 2 interface for physical or virtual servers, wait until you experience those IOPS or latencies at 12Gbps SAS and PCIe Gen 3 with a faster current-generation Intel processor, just saying ;)…

Server and Storage I/O IOPS and VMware

In the above chart, a Windows 7 64-bit system (VMs configured with 14GB DRAM) on VMware vSphere 5.5.1 is shown running on different hardware configurations. The Windows system is running Futuremark PCMark 7 Pro (v1.0.4). From left to right: the Windows VM on the Dell Inspiron 660 with 16GB physical DRAM using an SSHD (Solid State Hybrid Drive); second from the left, results running on a Dell T310 with an Intel X3470 processor, also on an SSHD; in the middle, the workload on the Dell 660 running on an HHDD; second from the right, the workload on the Dell T310, also on an HHDD; and on the right, the same workload on an HP DC5800 with an Intel E8400. The workload results show a composite score, system storage, simulated user productivity, lightweight processing, and compute-intensive tasks.

    Futuremark PCMark Windows benchmark
    Futuremark PCMark

    Don’t forget about the KVM (Keyboard Video Mouse)

Mention KVM to many people in and around the server, storage and virtualization world and they think KVM as in the hypervisor; however, to others it means Keyboard, Video and Mouse, aka the other KVM. As part of my recent and ongoing upgrades, it was also time to upgrade from the older, smaller KVMs to a larger, easier-to-use model. The benefit: supporting growth while also being easier to work with. Having done some research on various options that also varied in price, I settled on the StarTech shown below.

    Via Amazon.com StarTech 8 port KVM
    Via Amazon.com StarTech 8 Port 1U USB KVM Switch

What’s cool about the above 8-port StarTech KVM switch is that it comes with 8 cables (there are 8 ports) that on one end look like a regular VGA monitor cable connector. On the other end, which attaches to your computer, there is the standard VGA connection that attaches to your video out, and a short USB tail cable that attaches to an available USB port for keyboard and mouse. Needless to say it helps cut down on the cable clutter while coming in around $38.00 USD per server port being managed, or about a dollar a month over a little over three years.

    Word of caution on make and models

Be advised that there are various makes and models of the Dell Inspiron available that differ in processor generation and thus the feature set included. Pay attention to which make or model you are looking at, as the prices can vary; hence double-check the processor make and model and then visit the Intel site to see if it is what you are expecting. For example, I double-checked that the processor for the different models I looked at was the i5-3330 (view Intel specifications for that processor here).

    Summary

Thanks to Robert Novak (aka @gallifreyan) for taking some time to provide useful tips and ideas to help think outside the box for this, as well as for some future enhancements to my server and StorageIO lab environment.

Consequently, while the Dell Inspiron 660 i660 was not the server that I wanted, it has turned out to be the system that I need now, and hence IMHO a diamond in the rough, if you get the right make and model.

    Ok, nuff said

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2013 StorageIO and UnlimitedIO All Rights Reserved

    As the platters spin, HDD’s for cloud, virtual and traditional storage environments

    Storage I/O trends

    Updated 1/23/2018

As the platters spin is a follow-up to a recent series of posts on Hard Disk Drives (HDDs), along with some posts about how many IOPS HDDs can do.

HDD and storage trends and directions include, among others:

HDDs will continue to be declared dead into the next decade, just as they have been for over a decade; meanwhile they are being enhanced and continue to be used in evolving roles.

    hdd and ssd

SSDs will continue to coexist with HDDs, either as separate devices or converged as HHDDs. When, where and how they are used will also continue to evolve. High I/O (IOPS) or low-latency activity will continue to move to some form of nand flash SSD (with PCM around the corner), while storage capacity, including some of what has been on tape, stays on disk. Instead of more HDD capacity in a server, it moves to a SAN or NAS, or to a cloud or service provider. This includes backup/restore, BC, DR, archive and online reference, or what some call active archives.

    The need for storage spindle speed and more

The need for faster revolutions per minute (RPM) performance of drives (e.g. platter spin speed) is being replaced by SSDs and more robust small form factor (SFF) drives. For example, some of today’s 2.5” SFF 10,000 RPM (e.g. 10K) SAS HDDs can do as well as or better than their larger 3.5” 15K predecessors for both IOPS and bandwidth. This is also an example of where the RPM speed of a drive may not be the only determinant of performance, as it has been in the past.


Performance comparison of four different drive types
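
For context, the traditional rule of thumb estimates a single HDD's random IOPS from average seek time plus half a platter revolution. The sketch below is that rule of thumb only; the 2.9 ms seek comes from the 10K drive specifications discussed later in this post, the 15K and 7.2K seek times are representative assumptions, and the formula ignores the caching, areal density and electronics improvements that let newer SFF drives close or beat the gap:

```python
def est_iops(rpm, avg_seek_ms):
    # IOPS ~= 1 / (average seek + average rotational latency),
    # where rotational latency averages half a revolution.
    half_rotation_ms = (60_000 / rpm) / 2
    return 1000 / (avg_seek_ms + half_rotation_ms)

for name, rpm, seek_ms in [
    ('3.5" 15K', 15000, 3.4),   # representative seek, not a specific model
    ('2.5" 10K', 10000, 2.9),   # seek per the Seagate 10K.7 specs below
    ('3.5" 7.2K', 7200, 8.5),   # representative seek
]:
    print(f"{name}: ~{est_iops(rpm, seek_ms):.0f} IOPS")
```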

    The need for storage space capacity and areal density

In terms of storage enhancements, watch for the appearance of Shingled Magnetic Recording (SMR) enabled HDDs to help further boost space capacity in the same footprint. Using SMR, HDD manufacturers can put more bits (e.g. greater areal density) into the same physical space on a platter.


    Traditional vs. SMR to increase storage areal density capacity

    The generic idea with SMR is to increase areal density (how many bits can be safely stored per square inch) of data placed on spinning disk platter media. In the above image on the left is a representative example of how traditional magnetic disk media lays down tracks next to each other. With traditional magnetic recording approaches, the tracks are placed as close together as possible for the write heads to safely write data.

With new recording formats such as SMR, along with improvements to read/write heads, the tracks can be grouped more closely together in an overlapping way. This overlapping (used in a generic sense) is like how the shingles on a roof overlap, hence Shingled Magnetic Recording. Other magnetic recording or storage enhancements in the works include Heat Assisted Magnetic Recording (HAMR) and helium-filled drives. Thus, there is still plenty of room for bits-and-bytes growth in HDDs well into the next decade to co-exist with and complement SSDs.

    DIF and AF (Advanced Format), or software defining the drives

Another evolving storage feature that ties into HDDs is the Data Integrity Field (DIF), which has a couple of different types. Depending on which type of DIF (0, 1, 2 or 3) is used, there can be added data integrity checks from the application to the storage medium or drive beyond normal functionality. Here is something to keep in mind: as there are different types or levels of DIF, when somebody says they support or need DIF, ask them which type or level as well as why.
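
For illustration, T10-style protection information adds 8 bytes per sector: a guard tag (a CRC of the sector data), an application tag and a reference tag, with the DIF types differing in which tags get checked where. A hedged sketch of just the layout; the guard value below is a stand-in, since the real guard tag uses the T10 CRC16 polynomial rather than a truncated CRC32:

```python
import struct
import zlib

def protection_info(sector_data: bytes, ref_tag: int, app_tag: int = 0) -> bytes:
    """Build an 8-byte protection-information field for one sector.
    Layout sketch: 2-byte guard, 2-byte application tag, 4-byte reference tag."""
    guard = zlib.crc32(sector_data) & 0xFFFF   # stand-in for CRC16-T10
    return struct.pack(">HHI", guard, app_tag, ref_tag)

pi = protection_info(b"\x00" * 512, ref_tag=12345)
print(len(pi), pi.hex())   # 8 bytes appended to the 512-byte sector
```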

Are you familiar with Advanced Format (AF)? If not, you should be. Traditionally, outside of special formats for some operating systems or controllers, the standard open-systems data storage block, page or sector has been 512 bytes. This has served well in the past; however, with the advent of TByte and larger-sized drives, a new mechanism is needed, both to support larger average data allocation sizes from operating systems and storage systems, and to cut the overhead of managing all the small sectors. Operating systems and file systems have added new partitioning features such as the GUID Partition Table (GPT) to support 1TB and larger SSD, HDD and storage system LUNs.

These enhancements enable larger devices to be used in place of traditional Master Boot Record (MBR) or other operating system partition and allocation schemes. The next step, however, is to teach operating systems, file systems and hypervisors, along with their associated tools and drivers, how to work with 4,096-byte (4 KByte) sectors. The advantage will be to cut the overhead of tracking all of those smaller sectors or file system extents and clusters. Today many HDDs support AF, however by default they may have 512-byte emulation mode enabled due to lack of operating system or other support.
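
The bookkeeping savings are easy to see with simple arithmetic; a quick sketch comparing how many sectors must be tracked on a 1TB device (decimal bytes, as drive vendors count them):

```python
CAPACITY_BYTES = 1 * 10**12   # 1TB, decimal as drive vendors count it

for sector_size in (512, 4096):
    sectors = CAPACITY_BYTES // sector_size
    print(f"{sector_size:>4}-byte sectors: {sectors:,} to track")
# 512: 1,953,125,000 vs 4096: 244,140,625 -- an 8x reduction in entries
```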

    Intelligent Power Management, moving beyond drive spin down

Intelligent Power Management (IPM) is a collection of techniques that can be applied to vary the amount of energy consumed by a drive, controller or processor to do its work. In the case of an HDD, these include slowing the spin rate of the platters; however, keep in mind that mass in motion tends to stay in motion. This means that HDDs, once up and spinning, do not need as much relative power, as they function like a flywheel. Where their power draw comes in is during reads and writes, due in part to the movement of the read/write heads, however also from running the processors and electronics that control the device. Another big power consumer is when drives spin up; thus if they can be kept moving, however at a lower rate, along with disabling energy used by read/write heads and their electronics, you can see a drop in power consumption. Btw, a current generation 3.5” 4TB 6Gbps SATA HDD consumes about 6-7 watts of power while in active use, or less when in idle mode. Likewise a current generation high-performance 2.5” 1.2TB HDD consumes about 4.8 watts of energy, a far cry from the 12-16 plus watts of energy some use as HDD FUD.
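
Converting those wattage figures into annual energy gives a feel for why the difference matters at scale; a simple sketch using the numbers cited above (the 14W entry represents the inflated FUD claims, not a measured drive):

```python
HOURS_PER_YEAR = 24 * 365

drives_watts = {
    '3.5" 4TB SATA, active': 6.5,   # midpoint of the 6-7W cited above
    '2.5" 1.2TB, active': 4.8,
    'FUD figure': 14.0,             # low end of the 12-16W claims
}

for label, watts in drives_watts.items():
    kwh_per_year = watts * HOURS_PER_YEAR / 1000
    print(f"{label}: {watts}W ~= {kwh_per_year:.0f} kWh/year per drive")
```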

Hybrid Hard Disk Drives (HHDD) and Solid State Hybrid Drives (SSHD)

Hybrid HDDs (HHDDs), also known as Solid State Hybrid Drives (SSHD), have been around for a while, and if you have read my earlier posts, you know that I have been a user and fan of them for several years. However, one of the drawbacks of HHDDs has been lack of write acceleration (e.g. they only optimize for reads) with some models. Current and emerging HHDDs are appearing with a mix of nand flash SLC (used in earlier versions), MLC and eMLC along with DRAM, while enabling write optimization. There are also more drive options available as HHDDs from different manufacturers, for both desktop and enterprise-class scenarios.

The challenge with HHDDs is that many vendors either do not understand how they fit with and complement their tiering or storage management software tools, or simply do not see the value proposition. I have had vendors and others tell me that HHDDs don’t make sense as they are too simple; how can they be a fit without requiring tiering software, controllers, SSDs and HDDs to be viable?

    Storage I/O trends

I also see a trend similar to when desktop high-capacity SATA drives appeared in enterprise-class storage systems in the early 2000s. Some of the same people did not see where or how a desktop-class product or technology could ever be used in an enterprise solution.

    Hmm, hey wait a minute, I seem to recall similar thinking when SCSI drives appeared in the early 90s, funny how some things do not change, DejaVu anybody?

Does that mean HHDDs will be used everywhere?

Not necessarily; however, there will be places where they make sense, and others where either an HDD or SSD will be more practical.

    Networking with your server and storage

Drive native interfaces near-term will remain 6Gbps (going to 12Gbps) SAS and SATA, with some FC (you might still find a parallel SCSI drive out there). Likewise, with bridges or interface cards, those drives may appear as USB or something else.

What about SCSI over PCIe, will that catch on as a drive interface? Tough to say; however, I am sure we can find some people who will gladly try to convince you of that. FC-based drives operating at 4Gbps FC (4GFC) are still being used in some environments, however most activity is shifting over to SAS and SATA. SAS and SATA are switching over from 3Gbps to 6Gbps, with 12Gbps SAS on the roadmaps.

    So which drive is best for you?

That depends: do you need bandwidth or IOPS, low latency or high capacity, a small low-profile thin form factor, or particular feature functions? Do you need a hybrid or all-SSD drive, or a self-encrypting device (SED), also known as Instant Secure Erase (ISE)? These are among your various options.

    Disk drives

    Why the storage diversity?

Simple: some are legacy, soon to be replaced and disposed of, while others are newer. I also have a collection, so to speak, that gets used for various testing, research, learning and trying things out. Click here and here to read about some of the ways I use various drives in my VMware environment, including creating Raw Device Mapped (RDM) local SAS and SATA devices.

Other capabilities and functionality existing or being added to HDDs include RAID and data copy assist, secure erase, self-encryption, and vibration dampening, among other abilities for supporting dense data environments.

    Where To Learn More

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    Do not judge a drive only by its interface, space capacity, cost or RPM alone. Look under the cover a bit to see what is inside in terms of functionality, performance, and reliability among other options to fit your needs. After all, in the data center or information factory not everything is the same.

From a marketing and fun-to-talk-about new technology perspective, HDDs might be dead for some. The reality is that they are very much alive in physical, virtual and cloud environments, granted their role is changing.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Seagate provides proof of life: Enterprise HDD enhancements

    Storage I/O trends

Proof of life: Enterprise Hard Disk Drives (HDDs) are enhanced

Last week, while hard disk drive (HDD) competitor Western Digital (WD) was announcing yet another acquisition (Velobit) in a string of acquisitions (e.g. earlier ones included Stec and Arkeia) and investments (Skyera), Seagate announced new enterprise-class HDDs for its portfolio. Note that it was only two years ago that WD acquired Hitachi Global Storage Technologies (HGST), the disk drive manufacturing business of Hitachi Ltd. (not to be confused with HDS).

    Seagate

Similar to WD expanding its presence in the growing nand flash SSD market, Seagate in May of this year also extended its existing enterprise-class SSD portfolio. These enhancements included new drives with a 12Gbps SAS interface, along with a partnership (and investment) with PCIe flash card startup vendor Virident. Other PCIe flash SSD card vendors (manufacturers and OEMs) include Cisco, Dell, EMC, FusionIO, HP, IBM, LSI, Micron, NetApp and Oracle among others.

These new Seagate enterprise-class HDDs are designed for use in cloud and traditional data center servers and storage systems. A month or two ago Seagate also announced new ultra-thin (5mm) client (aka desktop) class HDDs along with a 3.5-inch 4TB video-optimized HDD. The video-optimized HDDs are intended for Digital Video Recorders (DVRs), Set Top Boxes (STBs) or other similar applications.

    What was announced?

Specifically, Seagate announced two enterprise-class drives, one for performance (e.g. 1.2TB 10K) and the other for space capacity (e.g. 4TB):

     

                              Enterprise High Performance 10K.7      Enterprise Terascale
                              (formerly known as Savio)              (formerly known as Constellation)

Class/category                Enterprise / High Performance          Enterprise / High Capacity
Form factor                   2.5” Small Form Factor (SFF)           3.5”
Interface                     6Gbps SAS                              6Gbps SATA
Space capacity                1,200GB (1.2TB)                        4TB
RPM speed                     10,000                                 5,900
Average seek                  2.9 ms                                 12 ms
DRAM cache                    64MB                                   64MB
Power idle / operating        4.8 watts                              5.49 / 6.49 watts
Intelligent Power Management  Yes (Seagate PowerChoice)              Yes (Seagate PowerChoice)
Warranty                      Limited 5 years                        Limited 3 years
Instant Secure Erase (ISE)    Yes                                    Optional
Other features                RAID Rebuild assist,                   Advanced Format (AF) 4K block in
                              Self-Encrypting Device (SED)           addition to standard 512 byte sectors
Use cases                     Replace earlier generation 3.5” 15K    Backup and data protection, replication,
                              SAS and Fibre Channel HDDs for         copy operations for erasure coding and
                              higher performance applications        data dispersal, active or dormant
                              including file systems and databases   archives, unstructured NAS, big data,
                              where SSDs are not a practical fit.    data warehouse, cloud and object storage.

Note the Seagate Terascale has a disk rotation speed of 5,900 RPM (5.9K), which is not a typo given the more traditional 5.4K RPM drives. This slight increase from 5.4K to 5.9K, combined with other enhancements (e.g. firmware, electronics), should help boost performance for higher capacity workloads.
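As a back-of-the-envelope check on what that RPM bump buys, average rotational latency is simply the time for half a revolution. Here is a minimal sketch (Python, standard formula, no vendor-specific numbers assumed):

```python
# Average rotational latency is the time for half a platter revolution.
def avg_rotational_latency_ms(rpm: int) -> float:
    return (60_000 / rpm) / 2  # 60,000 ms per minute, half a turn

for rpm in (5_400, 5_900, 7_200, 10_000, 15_000):
    print(f"{rpm:>6} RPM -> {avg_rotational_latency_ms(rpm):.2f} ms")
# 5,400 RPM -> 5.56 ms vs. 5,900 RPM -> 5.08 ms, a modest but real gain
```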

Let us watch for some performance numbers to be published by Seagate or others. Note that I have not had a chance to try these new drives yet, however I look forward to getting my hands on them (among others) sometime in the future for a test drive to add to the growing list found here (hey Seagate and WD, that's a hint ;) ).

What this all means


    Wait, weren’t HDD’s supposed to be dead or dying?

Some people just like new and emerging things, and thus will declare anything existing (or anything they have lost interest in, or that their jobs need them to dismiss) as old, boring or dead.

For example, if you listen to some, they may say nand flash SSDs are also dead or dying. For what it is worth, IMHO nand flash-based SSDs still have a bright future in front of them even with new technologies emerging, as those will take time to mature (read more here or listen here).

    However, the reality is that for at least the next decade, like them or not, HDD’s will continue to play a role that is also evolving. Thus, these and other improvements with HDD’s will be needed until current nand flash or emerging PCM (Phase Change Memory) among other forms of SSD are capable of picking up all the storage workloads in a cost-effective way.

Btw, yes, I am also a fan and user of nand flash-based SSD's in addition to HDD's, and see viable roles for both, complementing each other in traditional, virtual and cloud environments.

    In short, HDD’s will keep spinning (pun intended) for some time granted their roles and usage will also evolve similar to that of tape summit resources.


This announcement by Seagate, along with other enhancements from WD, shows that the HDD will not only see its 60th birthday (and here), it will probably also easily see its 70th, and not from the comfort of a computer museum. The reason is that there is yet another wave of HDD improvements just around the corner, including Shingled Magnetic Recording (SMR) (more info here) along with Heat Assisted Magnetic Recording (HAMR) among others. Watch for more on HAMR and SMR in future posts. With these and other enhancements, we should be able to see a return to the rapid density improvements with HDD's observed during the mid to late 2000s era when perpendicular recording became available.

What is up with this ISE stuff, is it the same as what Xiotech (e.g. X-IO) had?

Is this the same technology that Xiotech (now X-IO) referred to as ISE? The answer is no. The Seagate ISE is for fast secure erase of data on disk. The benefit of Instant Secure Erase (ISE) is to cut the time required to erase a drive for secure disposal from hours or days down to seconds (or less). For those environments that already factor drive erase time into their overall costs, this can increase the useful time in service to help improve TCO and ROI.
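To put that in perspective, here is a rough sketch of what a conventional overwrite erase costs in time; the sustained rate and pass count below are assumptions for illustration, not Seagate figures:

```python
# Rough time to erase a drive by overwriting, vs. ISE which is near-instant
# (ISE typically works by discarding an internal encryption key rather than
# rewriting the media).
capacity_tb = 4          # e.g. the Terascale above
rate_mb_s = 150          # assumed sustained overwrite rate
passes = 3               # multi-pass overwrite policies are common

seconds = capacity_tb * 1_000_000 / rate_mb_s * passes
print(f"{seconds / 3600:.1f} hours")  # ~22.2 hours vs. seconds with ISE
```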

Wait a minute, aren't slower RPM's supposed to mean lower performance?

Some of you might be wondering or asking: wait, how can a 10,000 revolution per minute (10K RPM) HDD be considered fast vs. a 15K HDD, let alone an SSD?


There is a trend occurring with HDD's where the old rules of IOPS or performance being tied directly to the size, rotational speed (RPM) and interface of drives no longer hold. This comes down to being careful not to judge a book, or in this case a drive, by its cover. While RPM's do have an impact on performance, new generation 10K drives such as some 2.5” models are delivering performance equal to or better than earlier generation 3.5” 15K devices.
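For a feel of why, a simple first-order model of random HDD performance adds average seek time to average rotational latency. A minimal sketch using the seek and RPM figures from the table above; it deliberately ignores caching, command queuing and transfer time, which is exactly where newer drives claw back ground:

```python
# First-order random IOPS estimate: one IO per (seek + half a revolution).
def hdd_random_iops(avg_seek_ms: float, rpm: int) -> float:
    rotational_ms = (60_000 / rpm) / 2
    return 1_000 / (avg_seek_ms + rotational_ms)

print(f"10K.7 (2.9 ms, 10K RPM):  {hdd_random_iops(2.9, 10_000):.0f} IOPS")  # ~169
print(f"Terascale (12 ms, 5.9K):  {hdd_random_iops(12.0, 5_900):.0f} IOPS")  # ~59
```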

Likewise, there are similar improvements with 5.4K devices vs. previous generation 7.2K models. As you will see in some of the results found here, not all the old rules of thumb for drive performance are still valid. Likewise, keep those metrics that matter in the proper context.


    Click on above image to see various performance results

For example, as seen in the results (above), more DRAM (DDR) cache on a drive has a positive impact on sequential reads, which can be good news if that is what your applications need. Thus, do your homework and avoid judging a device simply by its RPM, interface or form factor.
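The cache effect is easy to see with the standard weighted-average latency formula; the hit ratios and latencies below are illustrative assumptions, not measured drive numbers:

```python
# Effective read latency with a fast cache in front of slower media:
#   avg = hit_ratio * cache_latency + (1 - hit_ratio) * media_latency
def effective_latency_ms(hit_ratio: float, cache_ms: float, media_ms: float) -> float:
    return hit_ratio * cache_ms + (1 - hit_ratio) * media_ms

print(effective_latency_ms(0.9, 0.1, 12.0))  # 1.29 ms at 90% cache hits
print(effective_latency_ms(0.5, 0.1, 12.0))  # 6.05 ms at 50% cache hits
```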

Other considerations: temperature and vibration

Another consideration is that with more drives being placed in a given amount of space, some of it without the best climate controls, temperature, humidity and vibration become concerns. Thus drives having vibration dampening or other safeguards to maintain performance is important. Likewise, even though drive heads and platters are sealed, humidity needs to be taken care of in data centers or at cloud service providers in hot environments near the equator.

If this is not connecting with you, think about how close parts of Southeast Asia and the Indian subcontinent are to the equator, along with the rapid growth and low-cost focus occurring there. Your data center might be temperature and humidity controlled, however others who are very focused on cost cutting may not be as concerned with normal facilities best practices.

    What type of drives should be used for cloud, virtual and traditional storage?

Good question, and one where the answer should be: it depends upon what you are trying or need to do (e.g. see previous posts here or here and here (via Seagate)). For example, here are some tips for big data storage and making storage decisions in general.

    Disclosure

Seagate recently invited me, along with several other industry analysts, to their cloud storage analyst summit in San Francisco, where they covered roundtrip coach airfare, lodging, airport transfers and a nice dinner at the Epic Roasthouse.


I have also received in the past a couple of Momentus XT HHDD's (aka SSHD's) from Seagate. These are in addition to those that I bought, including various Seagate, WD, HGST, Fujitsu, Toshiba and Samsung drives (SSD's and HDD's) that I use for various things.

    Ok, nuff said (for now).

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Can we get a side of context with them IOPS server storage metrics?

    Can we get a side of context with them server storage metrics?


    Updated 2/10/2018

What's the best server storage I/O network metric or benchmark? It depends, as there needs to be some context with them IOPS and other server storage I/O metrics that matter.

    There is an old saying that the best I/O (Input/Output) is the one that you do not have to do.

    In the meantime, let’s get a side of some context with them IOPS from vendors, marketers and their pundits who are tossing them around for server, storage and IO metrics that matter.

    Expanding the conversation, the need for more context

The good news is that people are beginning to discuss storage beyond space capacity and cost per GByte, TByte or PByte, for DRAM and nand flash Solid State Devices (SSD), Hard Disk Drives (HDD), along with Hybrid HDD (HHDD) and Solid State Hybrid Drive (SSHD) based solutions. This applies to traditional enterprise or SMB IT data centers with physical, virtual or cloud based infrastructures.


This is good because it expands the conversation beyond just cost for space capacity into other aspects including performance (IOPS, latency, bandwidth) for various workload scenarios, along with availability, energy effectiveness and management.

    Adding a side of context

The catch is that IOPS, while part of the equation, are just one aspect of performance, and by themselves, without context, may have little meaning, if not be misleading, in some situations.

Granted, a million IOPS can be entertaining, fun to talk about, or simply make for good press copy. IOPS vary in size depending on the type of work being done, not to mention reads or writes, random or sequential, all of which have a bearing on data throughput or bandwidth (Mbytes per second) along with response time.

    However, are those million IOP’s applicable to your environment or needs?

Likewise, what do those million or more IOPS represent about the type of work being done? For example, are they small 64 byte or large 64 Kbyte sized, random or sequential, cached reads or lazy writes (deferred or buffered), on an SSD or HDD?
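IO size alone changes the picture by three orders of magnitude in that example. A quick sketch of the standard conversion (bandwidth = IOPS x IO size):

```python
# Convert an IOPS figure plus IO size into bandwidth (MBytes per second).
def bandwidth_mb_s(iops: int, io_size_bytes: int) -> float:
    return iops * io_size_bytes / 1_000_000

print(bandwidth_mb_s(1_000_000, 64))      # 1M x 64 byte IOs  ->    64 MB/s
print(bandwidth_mb_s(1_000_000, 65_536))  # 1M x 64 KByte IOs -> 65536 MB/s
```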

    How about the response time or latency for achieving them IOPS?

    In other words, what is the context of those metrics and why do they matter?

    Click on image to view more metrics that matter including IOP’s for HDD and SSD’s

Metrics that matter give context, for example IO sizes closer to what your real needs are, reads and writes, mixed workloads, random or sequential, sustained or bursty; in other words, real-world reflective.

As with any benchmark, take them with a grain (or more) of salt; the key is to use them as an indicator, then align them to your needs. The tool or technology should work for you, not the other way around.

    Here are some examples of context that can be added to help make IOP’s and other metrics matter:

• What is the IOP size, are they 512 byte (or smaller) vs. 4K bytes (or larger)?
• Are they reads, writes, random, sequential or mixed, and what percentage of each?
• How was the storage configured, including RAID, replication, erasure or dispersal codes?
• Then there is the latency or response time and IO queue depth for the given number of IOPS (see the sketch after this list).
• Let us not forget if the storage systems (and servers) were busy with other work or not.
• If there is a cost per IOP, is that list price or discounted (hint, if discounted, start negotiations from there)?
• What was the number of threads or workers, along with how many servers?
• What tool was used, its configuration, as well as raw or cooked (aka file system) IO?
• Was the IOP's number with one worker or multiple workers on a single or multiple servers?
• Did the IOP's number come from a single storage system or a total of multiple systems?
• Fast storage needs fast servers and networks, what was their configuration?
• Was the performance a short burst, or a long sustained period?
• What was the size of the test data used; did it all fit into cache?
• Were short stroking (for IOPS) or long stroking (for bandwidth) techniques used?
• Were data footprint reduction (DFR) techniques (thin provisioning, compression or dedupe) used?
• Was write data committed synchronously to storage, or deferred (aka lazy writes)?
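On the queue depth bullet above, Little's Law ties the three quantities together: outstanding IOs = IOPS x response time. A minimal sketch with assumed numbers shows how a big IOPS figure can simply reflect deep queues rather than fast individual IOs:

```python
# Little's Law: IOPS = queue depth / response time (in seconds).
def iops_from_queue(queue_depth: int, response_time_ms: float) -> float:
    return queue_depth / (response_time_ms / 1_000)

print(iops_from_queue(1, 5.0))   #  200 IOPS at queue depth 1, 5 ms each
print(iops_from_queue(32, 5.0))  # 6400 IOPS at queue depth 32, same 5 ms
```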

The above list is just a sampling and not all items may be relevant to your particular needs, however they help to put IOP's into more context. Another consideration around IOPS is how they were generated: from an actual running application using some measurement tool, or from a workload generator such as IOmeter, IOrate or VDbench among others.

    Sure, there are more contexts and information that would be interesting as well, however learning to walk before running will help prevent falling down.


    Does size or age of vendors make a difference when it comes to context?

    Some vendors are doing a good job of going for out of this world record-setting marketing hero numbers.

Meanwhile other vendors are doing a good job of adding context to their IOPS, response time, bandwidth and other metrics that matter. There is a mix of startup and established vendors that give context with their IOP's or other metrics; likewise, size or age does not seem to matter for those who lack context.

    Some vendors may not offer metrics or information publicly, so fine, go under NDA to learn more and see if the results are applicable to your environments.

    Likewise, if they do not want to provide the context, then ask some tough yet fair questions to decide if their solution is applicable for your needs.


    Where To Learn More

    View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


    What This All Means

What this means is let us start providing, and asking for, metrics that matter such as IOPS with context.

If you have a great IOPS metric and you want it to matter, then include some context such as IO size (e.g. 4K, 8K, 16K, 32K, etc.), percentage of reads vs. writes, latency or response time, and random or sequential access.

IMHO the most interesting or applicable metrics that matter are those relevant to your environment and application. For example, if your main application that needs SSD does about 75% reads (random) and 25% writes (sequential) with an average size of 32K, then while fun to hear about, how relevant is a million 64 byte read IOPS? Likewise, when looking at IOPS, pay attention to the latency, particularly if SSD or performance is your main concern.
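Putting numbers on that example workload makes the gap concrete; the total IOPS figure below is an assumption purely for illustration:

```python
# The example workload: 75% random reads, 25% sequential writes, 32K average.
total_iops = 20_000  # assumed figure, purely for illustration
io_size = 32 * 1024  # 32K bytes

read_mb_s = 0.75 * total_iops * io_size / 1_000_000
write_mb_s = 0.25 * total_iops * io_size / 1_000_000
print(f"reads: {read_mb_s:.0f} MB/s, writes: {write_mb_s:.0f} MB/s")  # 492 / 164
# Compare: a million 64 byte read IOPS moves only 64 MB/s of data.
```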

    Get in the habit of asking or telling vendors or their surrogates to provide some context with them metrics if you want them to matter.

    So how about some context around them IOP’s (or latency and bandwidth or availability for that matter)?

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.