Part II: Seagate 1200 12Gbps Enterprise SAS SSD StorageIO lab review

This is the second post of a two-part series; read the first post here.

Earlier this year I had the opportunity to test drive some Seagate 1200 12Gbps Enterprise SAS SSDs as a follow-up to some earlier activity trying their Enterprise TurboBoost Drives. Disclosure: Seagate has been a StorageIO client and was also the sponsor of this white paper and associated proof-points mentioned in this post.

The Server Storage I/O Blender Effect Bottleneck

The earlier proof-points focused on SSD as a target or storage device. In the following proof-points, the Seagate Enterprise 1200 SSD is used as a shared read cache (write-through). Using a write-through cache enables a given amount of SSD to give a performance benefit to other local and networked storage devices.
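
To illustrate the basic idea, here is a minimal write-through read cache sketch in Python (illustrative only, not the Virtunet implementation; the class and names are hypothetical). Reads are served from the SSD cache on a hit, while writes always go through to the backing HDD, so the cache never holds data that exists nowhere else:

```python
# Minimal write-through read cache sketch (hypothetical, illustrative only,
# not the Virtunet Systems implementation). Reads are served from the "SSD"
# cache on a hit; writes always go through to the backing "HDD", so the
# cache holds no dirty data and can be lost or disabled safely.

class WriteThroughReadCache:
    def __init__(self, backing_store, capacity_blocks):
        self.backing = backing_store      # dict-like: block number -> data (the HDD)
        self.capacity = capacity_blocks   # how many blocks fit in the SSD cache
        self.cache = {}                   # stands in for the SSD

    def read(self, block):
        if block in self.cache:           # cache hit: fast SSD read
            return self.cache[block]
        data = self.backing[block]        # cache miss: slow HDD read
        self._insert(block, data)         # populate cache for next time
        return data

    def write(self, block, data):
        self.backing[block] = data        # write-through: HDD always updated
        if block in self.cache:
            self.cache[block] = data      # keep any cached copy coherent

    def _insert(self, block, data):
        if len(self.cache) >= self.capacity:
            self.cache.pop(next(iter(self.cache)))  # naive FIFO-style eviction
        self.cache[block] = data

hdd = {i: f"block-{i}" for i in range(1000)}        # stand-in backing HDD
cache = WriteThroughReadCache(hdd, capacity_blocks=100)
cache.read(42)    # miss: read from HDD, then cached
cache.read(42)    # hit: served from the SSD cache
```

Because the cache is write-through, losing the cache device cannot lose data, which is part of why a modest amount of shared SSD can safely front many HDD-backed devices.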

traditional server storage I/O
Non-virtualized servers with dedicated storage and I/O paths.

Aggregation can cause aggravation in the form of I/O bottlenecks when workloads are consolidated using server virtualization. The following figure shows non-virtualized servers, each with its own dedicated physical machine (PM) and I/O resources. When those servers are virtualized onto a common host (physical machine), their various workloads compete for I/O and other resources. In addition to competing for I/O performance resources, these different servers also tend to have diverse workloads.

virtual server storage I/O blender
Virtual server storage I/O blender bottleneck (aggregation causes aggravation)

The figure above shows aggregation causing aggravation, with the result being I/O bottlenecks as various applications' performance needs converge and compete with each other. The aggregation and consolidation result is a blend of random, sequential, large, small, read and write characteristics. These different storage I/O characteristics are mixed together and need to be handled by the underlying I/O capabilities of the physical machine and hypervisor. As a result, a common deployment for SSD, in addition to being a target device for storing data, is as a cache to cut bottlenecks for traditional spinning HDDs.

The following figure shows a solution introducing I/O caching with SSD to help mitigate or cut the effects of server consolidation causing performance aggravations.

Creating a server storage I/O blender bottleneck

Addressing the VMware Server Storage I/O blender with cache

Addressing server storage I/O blender and other bottlenecks

For these proof-points, the goal was to create an I/O bottleneck resulting from multiple VMs in a virtual server environment performing application work. In this proof-point, multiple competing VMs including a SQL Server 2012 database and an Exchange server shared the same underlying storage I/O infrastructure including HDDs. The 6TB (Enterprise Capacity) HDD was configured as a VMware datastore and allocated as virtual disks to the VMs. Workloads were then run concurrently to create an I/O bottleneck for both cached and non-cached results.

Server storage I/O with virtualization proof-point configuration topology

The following figure shows two sets of proof points, cached (top) and non-cached (bottom), with three workloads. The workloads consisted of concurrent Exchange and SQL Server 2012 (TPC-B and TPC-E) running on separate virtual machines (VMs), all on the same physical machine host (SUT), with database transactions being driven by two separate servers. In these proof-points, the application data was placed onto the 6TB SAS HDD to create a bottleneck, and a portion of the SSD was used as a cache. Note that the Virtunet cache software allows you to use part of an SSD device for cache with the balance used as a regular storage target should you want to do so.

If you have paid attention to the earlier proof-points, you might notice that some of the results below are not as good as those seen in the Exchange, TPC-B and TPC-E results above. The reason is simply that the earlier proof-points were run without competing workloads, and database along with log or journal files were placed on separate drives for performance. In the following proof-points, as part of creating a server storage I/O blender bottleneck, the Exchange, TPC-B and TPC-E workloads were all running concurrently with all data on the 6TB drive (something you normally would not want to do).

storage I/O blender solved
Solving the VMware Server Storage I/O blender with cache

The cached and non-cached mixed workload results shown above demonstrate how an SSD based read-cache can help to reduce I/O bottlenecks. This is an example of addressing the aggravation caused by aggregation of different competing workloads that are consolidated with server virtualization.

For the workloads shown above, all data (database tables and logs) were placed on VMware virtual disks created from a datastore using a single 7.2K 6TB 12Gbps SAS HDD (e.g. Seagate Enterprise Capacity).

The guest VM system disks, which included paging, applications and other data files, were virtual disks using a separate datastore mapped to a single 7.2K 1TB HDD. Each workload ran for eight hours with the TPC-B and TPC-E having 50 simulated users. For the TPC-B and TPC-E workloads, two separate servers were used to drive the transaction requests to the SQL Server 2012 database.

For the cached tests, a Seagate Enterprise 1200 400GB 12Gbps SAS SSD was used as the backing store for the cache software (Virtunet Systems Virtucache) that was installed and configured on the VMware host.

During the cached tests, the physical HDD for the data files (e.g. 6TB HDD) and system volumes (1TB HDD) were read cache enabled. All caching was disabled for the non-cached workloads.

Note that this was only a read cache, which has the side benefit of off-loading read activity, enabling the HDD to focus on writes or read-ahead. Also note that the combined TPC-E, TPC-B and Exchange databases, logs and associated files represented over 600GB of data; there was also the combined space and thus cache impact of the two system volumes and their data. This simple workload and configuration is representative of how SSD caching can complement high-capacity HDDs.
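
Back of the envelope math shows why even a partial cache hit ratio helps; here is a sketch with assumed (not measured) latency numbers:

```python
# Rough effective read latency with an SSD read cache in front of a 7.2K HDD.
# The latency and hit ratio numbers below are illustrative assumptions,
# not measured results from these proof-points.
ssd_latency_ms = 0.2    # assumed SSD read latency
hdd_latency_ms = 8.0    # assumed 7.2K RPM HDD random read latency

for hit_ratio in (0.0, 0.5, 0.8, 0.95):
    effective = hit_ratio * ssd_latency_ms + (1 - hit_ratio) * hdd_latency_ms
    print(f"hit ratio {hit_ratio:.0%}: effective read latency {effective:.2f} ms")
```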

Seagate 6TB 12Gbps SAS high-capacity HDD

While the star and focus of this series of proof-points is the Seagate 1200 Enterprise 12Gbps SAS SSD, the caching software (Virtunet) and Enterprise TurboBoost drives also play key supporting and favorable roles. However, the 6TB 12Gbps SAS high-capacity drive caught my attention from a couple of different perspectives. Certainly the space capacity was interesting, along with a 12Gbps SAS interface well suited for near-line, high-capacity and dense tiered storage environments. However, for a high-capacity drive its performance is what really caught my attention, both in the standard Exchange, TPC-B and TPC-E workloads, as well as when combined with SSD and cache software.

This opens the door for a great combination of leveraging some amount of high-performance flash-based SSD (or TurboBoost drives) combined with cache software and high-capacity drives such as the 6TB device (Seagate now has larger versions available). Something else to mention is that the 6TB HDD, in addition to being available with either a 12Gbps SAS, 6Gbps SAS or 6Gbps SATA interface, also has enhanced durability with a Read Bit Error Rate of 10^15 (e.g. one read error per 10^15 bits read on average) and an AFR (annual failure rate) of 0.63% (see more speeds and feeds here). Hence if you are concerned about using large capacity HDDs and having them fail, make sure you go with those that have a high Read Bit Error Rate rating and a low AFR, which are more common with enterprise class vs. lower cost commodity or workstation drives. Note that these high-capacity enterprise HDDs are also available with Self-Encrypting Drive (SED) options.
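
To put that Read Bit Error Rate rating in perspective, the arithmetic is simple (the 10^15 rating is from the spec sheet; the rest is straightforward math):

```python
# Expected unrecoverable read errors from reading a 6TB drive end to end,
# given a rated Read Bit Error Rate of 1 error per 10^15 bits read.
drive_bytes = 6 * 10**12        # 6TB (decimal) capacity
bits_read = drive_bytes * 8     # 4.8 x 10^13 bits per full-drive read
errors_per_bit = 1 / 10**15     # the rated bit error rate

expected_errors = bits_read * errors_per_bit
print(f"{expected_errors:.3f} expected read errors per full-drive read")  # ~0.048
```

In other words, you could statistically read the entire drive about 20 times before expecting a single unrecoverable read error, which matters more and more as drive capacities grow.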

Summary

Read more in this StorageIO Industry Trends and Perspective (ITP) white paper on the Seagate 1200 12Gbps SAS SSD (compliments of Seagate) and visit the Seagate Enterprise 1200 12Gbps SAS SSD page here. Moving forward there is the notion that flash SSD will be everywhere. However, there is a difference between all data on flash SSD vs. having some amount of SSD involved in preserving, serving and protecting (storing) information.

Key themes to keep in mind include:

  • Aggregation can cause aggravation which SSD can alleviate
  • A relatively small amount of flash SSD in the right place can go a long way
  • Fast flash storage needs fast server storage I/O access hardware and software
  • Locality of reference with data close to applications is a performance enabler
  • Flash SSD everywhere does not mean everything has to be SSD based
  • Having some amount of flash in different places is important for flash everywhere
  • Different applications have various performance characteristics
  • SSD as a storage device or persistent cache can speed up IOPs and bandwidth

Flash and SSD are in your future; the questions come back to how much flash SSD you need, along with where to put it, how to use it and when.

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Seagate has shipped over 10 Million storage HHDD's, is that a lot?

Recently Seagate made an announcement that they have shipped over 10 million Hybrid Hard Disk Drives (HHDD), also known as Solid State Hybrid Drives (SSHD), over the past few years. Disclosure: Seagate has been a StorageIO client.

I know where some of those desktop class HHDDs including Momentus XTs ended up, as I bought some of the 500GB and 750GB models via Amazon and have them in various systems. Likewise I have installed in VMware servers the newer generation of enterprise class SSHDs, which Seagate now refers to as Turbo models, as companions to my older HHDDs.

What is a HHDD or SSHD?

The HHDD’s continue to evolve from initially accelerating reads to now being capable of speeding up write operations across different families (desktop/mobile, workstation and enterprise). What makes a HHDD or SSHD is that as their name implies, they are a hybrid combing a traditional spinning magnetic Hard Disk Drive (HDD) along with flash SSD storage. The flash persistent memory is in addition to the DRAM or non-persistent memory typically found on HDDs used as a cache buffer. These HHDDs or SSHDs are self-contained in that the flash are built-in to the actual drive as part of its internal electronics circuit board (controller). This means that the drives should be transparent to the operating systems or hypervisors on servers or storage controllers without need for special adapters, controller cards or drivers. In addition, there is no extra software needed to automated tiering or movement between the flash on the HHDD or SSHD and its internal HDD, its all self-contained managed by the drives firmware (e.g. software).

Some SSHD and HHDD industry perspectives

Jim Handy over at Objective Analysis has this interesting post discussing Hybrid Drives Not Catching On. The following is an excerpt from Jim’s post.

Why were our expectations higher? 

There were a few reasons:

  • The hybrid drive can be viewed as an evolution of the DRAM cache already incorporated into nearly all HDDs today.
  • Replacing or augmenting an expensive DRAM cache with a slower, cheaper NAND cache makes a lot of sense.
  • An SSHD performs much better than a standard HDD at a lower price than an SSD. In fact, an SSD of the same capacity as today’s average HDD would cost about an order of magnitude more than the HDD. The beauty of an SSHD is that it provides near-SSD performance at a near-HDD price. This could have been a very compelling sales proposition had it been promoted in a way that was understood and embraced by end users.
  • Some expected for Seagate to include this technology into all HDDs and not to try to continue using it as a differentiator between different Seagate product lines. The company could have taken either of two approaches: To use hybrid technology to break apart two product lines – standard HDDs and higher-margin hybrid HDDs, or to merge hybrid technology into all Seagate HDDs to differentiate Seagate HDDs from competitors’ products, allowing Seagate to take slightly higher margins on all HDDs. Seagate chose the first path.

The net result is shipments of 10 million units since its 2010 introduction, for an average of 2.5 million per year, out of a total annual HDD shipments of around 500 million units, or one half of one percent.

Continue reading more of Jim’s post here.

In his post, Jim raises some good points, including that HHDDs and SSHDs are still a fraction of the overall HDDs shipped on an annual basis. However, IMHO the annual growth rate has not been a flat average of 2.5 million; rather it started at a lower rate and then increased year over year. For example, Seagate issued a press release back in summer 2011 that they had shipped a million HHDDs a year after their release. Also keep in mind that those HHDDs were focused on desktop workstations and in particular aimed at gamers among others.

The early HHDD’s such as the Momentus XTs that I was using starting in June 2010 only had read acceleration which was better than HDD’s, however did not help out on writes. Over the past couple of years there have been enhancements to the HHDD’s including the newer generation also known as SSHD’s or Turbo drives as Seagate now calls them. These newer drives include write acceleration as well as with models for mobile/laptop, workstation and enterprise class including higher-performance and high-capacity versions. Thus my estimates or analysis has the growth on an accelerating curve vs. linear growth rate (e.g. average of 2.5 million units per year).

Year         Units shipped per year   Running total units shipped
2010-2011    1.0 Million              1.0 Million
2011-2012    1.25 Million (est.)      2.25 Million (est.)
2012-2013    2.75 Million (est.)      5.0 Million (est.)
2013-2014    5.0 Million (est.)       10.0 Million

StorageIO estimates on HHDD/SSHD units shipped based on Seagate announcements

estimated hhdd and sshd shipments
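
The running totals and the flat vs. accelerating growth comparison are easy to sanity check (using the estimated figures from the table above):

```python
# Sanity check of the StorageIO estimates above: yearly units shipped
# (millions) vs. the running total and a flat 2.5 million/year average.
yearly = {
    "2010-2011": 1.0,
    "2011-2012": 1.25,   # estimated
    "2012-2013": 2.75,   # estimated
    "2013-2014": 5.0,    # estimated
}

running = 0.0
for year, units in yearly.items():
    running += units
    print(f"{year}: {units:.2f}M shipped, {running:.2f}M cumulative")

print(f"Flat average: {running / len(yearly):.2f}M per year")  # 2.5M/year
```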

However IMHO there is more to the story beyond the number of HHDDs/SSHDs shipped or whether they are accelerating in deployment or growing at an average rate. Some of those perspectives are in my comments over on Jim Handy's site, with an excerpt below.

In talking with IT professionals (e.g. what the vendors/industry call users/customers), they are generally not aware that these devices exist, or if they are aware of them, they are only aware of what was available in the past (e.g. the consumer class read optimized versions). I do talk with some who are aware of the newer generation devices, however their comments are usually tied to lack of system integrator (SI) or vendor/OEM support, or sole source concerns. Also there was a focus on promoting the HHDDs to "gamers" or other power users as opposed to broader marketing efforts, and most of these IT people are not aware of the newer generation of SSHD or what Seagate is now calling "Turbo" drives.

When talking with VARs, there is a similar reaction, which is discussion about the lack of support for HHDDs or SSHDs from the SI/vendor OEMs, or single source supply concerns. Also a common reaction is lack of awareness around the current generation of SSHDs (e.g. those that do write optimization, as well as enterprise class versions).

When talking with vendors/OEMs, there is a general lack of awareness of the newer enterprise class SSHDs/HHDDs that do write acceleration. Sometimes there is concern about how this would disrupt their "hybrid" SSD + HDD or tiering marketing stories/strategies, as well as comments about single source suppliers. I have also heard comments to the effect of concerns about how long or committed the drive manufacturers are going to be focused on SSHD/HHDD, or is this just a gap filler for now.

Not surprisingly when I talk with industry pundits, influencers, amplifiers (e.g. analyst, media, consultants, blogalysts) there is a reflection of all the above which is lack of awareness of what is available (not to mention lack of experience) vs. repeating what has been heard or read about in the past.

IMHO while there are some technology hurdles, the biggest issue and challenge is that of some basic marketing and business development to generate awareness with the industry (e.g. pundits), vendors/OEMs, VARs and IT customers; that is of course assuming SSHD/HHDD are here to stay and not just a passing fad…

What about SSHD and HHDD performance on reads and writes?

What about the performance of today's HHDDs and SSHDs, particularly those that can accelerate writes as well as reads?

SSHD and HHDD read / write performance exchange
Enterprise Turbo SSHD read and write performance (Exchange Email)


SSHD and HHDD performance TPC-B
Enterprise Turbo SSHD read and write performance (TPC-B database)

SSHD and HHDD performance TPC-E
Enterprise Turbo SSHD read and write performance (TPC-E database)

Additional details and information about HHDD/SSHD, or Turbo drives as Seagate now refers to them, can be found in two StorageIO Industry Trends Perspective White Papers (located here and another here).

Where to learn more

Refer to the following links to learn more about HHDD and SSHD devices.
StorageIO Momentus Hybrid Hard Disk Drive (HHDD) Moments
Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy

Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?
2011 Summer momentus hybrid hard disk drive (HHDD) moment
More Storage IO momentus HHDD and SSD moments part I
More Storage IO momentus HHDD and SSD moments part II
New Seagate Momentus XT Hybrid drive (SSD and HDD)
Another StorageIO Hybrid Momentus Moment
SSD past, present and future with Jim Handy

Closing comments and perspectives

I continue to be bullish on hybrid storage solutions from cloud, to storage systems, as well as hybrid storage devices. However, like many technologies, just because something makes sense or is interesting does not mean it is a near-term or long-term winner. My main concern with SSHD and HHDD is whether the manufacturers such as Seagate and WD are serious about making them a standard feature in all drives, or simply a near-term stop-gap solution.

What’s your take or experience with using HHDD and/or SSHDs?

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy

The question to ask yourself is not if flash Solid State Device (SSD) technologies are in your future.

Instead the questions are when, where, using what, how to configure and related themes. SSD including traditional DRAM and NAND flash-based technologies are like real estate where location matters; however, there are different types of properties to meet various needs. This means leveraging different types of NAND flash SSD technologies in different locations in a complementary and cooperative aka hybrid way.

Introducing Solid State Hybrid Drives (SSHD)

Solid State Hybrid Drives (SSHD) are the successors to the previous generation Hybrid Hard Disk Drives (HHDD) that I have used for several years (you can read more about them here, and here).

While it would be nice to simply have SSD for everything, there are also economic budget realities to be dealt with. Keep in mind that a bit of nand flash SSD cache in the right location for a given purpose can go a long way, which is the case with SSHDs. This is also why in many environments today there is a mix of SSDs and HDDs of various makes, types, speeds and capacities (e.g. different tiers) to support diverse application needs (e.g. not everything in the data center is the same).

However, if you have the need for speed and can afford or benefit from the increased productivity, by all means go SSD!

Otoh, if you have budget constraints and need more space capacity yet want some performance boost, then SSHDs are an option. The big difference with today's SSHDs, which are available for both enterprise class storage systems and servers as well as desktop environments, is that they can accelerate both reads and writes. This is different from their predecessors that I have used for several years now, which had basic read acceleration, however no write optimizations.

SSHD storage I/O opportunity
Better Together: Where SSHDs fit in an enterprise tiered storage environment with SSD and HDDs

As their name implies, they are a hybrid between a nand flash Solid State Device (SSD) and a traditional Hard Disk Drive (HDD), meaning a best of both worlds situation. The SSHD is based on a traditional spinning HDD (various models with different speeds, space capacities and interfaces) along with DRAM (which is found on most modern HDDs), along with nand flash for read cache, and some extra nonvolatile memory for persistent write cache, combined with a bit of software defined storage performance optimization algorithms.

Btw, if you were paying attention to that last sentence you would have picked up on something about nonvolatile memory being used for persistent write cache, which should prompt the question: would that help with nand flash write endurance? Yup.

Where and when to use SSHD?

In the StorageIO Industry Trends Perspective thought leadership white paper I recently released, compliments of Seagate Enterprise Turbo SSHD (that's a disclosure btw ;), enterprise class Solid State Hybrid Drives (SSHD) were looked at and test driven in the StorageIO Labs with various application workloads. These activities included running in a virtual environment common applications including database and email messaging using industry standard benchmark workloads (e.g. TPC-B and TPC-E for database, JetStress for Exchange).

Storage I/O sshd white paper

Conventional storage system focused workloads using iometer, iorate and vdbench were also run in the StorageIO Labs to set up baseline reads, writes, random, sequential, small and large I/O sizes with IOPs, bandwidth and response time latency results. Some of those results can be found here (Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?) with other ongoing workloads continuing in different configurations. The various test drive proof points were done in the StorageIO Labs comparing SSHD, SSD and different HDDs.

Data Protection (Archiving, Backup, BC, DR)

Staging cache buffer area for snapshots, replication or current copies before streaming to other storage tier using fast read/write capabilities. Meta data, index and catalogs benefit from fast reads and writes for faster protection.

Big Data DSS
Data Warehouse

Support sequential read-ahead operations and “hot-band” data caching in a cost-effective way using SSHD vs. slower similar capacity size HDDs for Data warehouse, DSS and other analytic environments.

Email, Text and Voice Messaging

Microsoft Exchange and other email journals, mailbox or object repositories can leverage faster read and write I/Os with more space capacity.

OLTP, Database
 Key Value Stores SQL and NoSQL

Eliminate the need to short stroke HDDs to gain performance, offer more space capacity and IOP performance per device for tables, logs, journals, import/export and scratch or temporary ephemeral storage. Leverage random and sequential read acceleration to complement server-side SSD-based read and write-thru caching. Utilize fast magnetic media for persistent data, reducing wear and tear on more costly flash SSD storage devices.

Server Virtualization

Fast disk storage for data stores and virtual disks supporting VMware vSphere/ESXi, Microsoft Hyper-V, KVM, Xen and others, holding virtual machines such as VMware VMDKs, along with Hyper-V and other hypervisor virtual disks. Complement virtual server read cache and I/O optimization using SSD as a cache with writes going to fast SSHD. For example, VMware vSphere V5.5 Virtual SAN host disk groups use SSD as a read cache and can use SSHD as the magnetic disk for storing data, boosting performance without breaking the budget or adding complexity.

Speaking of virtual, as mentioned, the various proof points were run using Windows systems that were VMware guests, with the SSHD and other devices being Raw Device Mapped (RDM) SAS and SATA attached (read how to do that here).

Hint: If you know about the VMware trick for making a HDD look like a SSD to vSphere/ESXi (refer to here and here) think outside the virtual box for a moment on some things you could do with SSHD in a VSAN environment among other things, for now, just sayin ;).

Virtual Desktop Infrastructure (VDI)

SSHD can be used as high performance magnetic disk for storing linked clone images, applications and data. Leverage fast reads to support read ahead or pre-fetch to complement SSD based read cache solutions. Utilize fast writes to quickly store data, enabling SSD-based read or write-thru cache solutions to be more effective. Reduce the impact of boot, shutdown, virus scan or maintenance storms while providing more space capacity.

Table 1 Example application and workload scenarios benefiting from SSHDs

Test drive application proof points

Various workloads were run using the Seagate Enterprise Turbo SSHD in the StorageIO lab environment across different real world like application workload scenarios. These included general storage I/O performance characteristics profiling (e.g. reads, writes, random, sequential and various IOP sizes) to understand how these devices compare to other HDD, HHDD and SSD storage devices in terms of IOPS, bandwidth and response time (latency). In addition to basic storage I/O profiling, the Enterprise Turbo SSHD was also used with various SQL database workloads including Transaction Processing Council (TPC) workloads, along with VMware server virtualization among other use case scenarios.

Note that in the following workload proof points a single drive was used, meaning that using more drives in a server or storage system should yield better performance. This also means scaling would be bound by the constraints of a given configuration, server or storage system. These tests were conducted using 6Gbps SAS with PCIe Gen 2 based servers, and ongoing testing is confirming even better results with 12Gbps SAS and faster servers with PCIe Gen 3.

SSHD large file storage i/o
Copy (read and write) 80GB and 220GB file copies (time to copy entire file)

SSHD storage I/O TPCB Database performance
SQLserver TPC-B batch database updates

Test configuration: 600GB 2.5” Enterprise Turbo SSHD (ST600MX) 6 Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6 Gbps SAS, 500GB 3.5” 7.2K RPM HDD 3 Gbps SATA, 1TB 3.5” 7.2K RPM HDD 3 Gbps SATA. Workload generator and virtual clients ran on Windows 7 Ultimate. Microsoft SQL Server 2012 database was on Windows 7 Ultimate SP1 (64 bit), 14 GB DRAM, dual CPU (Intel X3490 2.93 GHz), with LSI 9211 6 Gbps SAS adapters, running TPC-B (www.tpc.org) workloads. The VM resided on a separate data store from the devices being tested. All devices being tested with the SQL MDF were Raw Device Mapped (RDM) independent persistent, with the database log file (LDF) on a separate SSD device, also persistent (no delayed writes). Tests were performed in StorageIO Lab facilities by StorageIO personnel.

SSHD storage I/O TPCE Database performance
SQLserver TPC-E transactional workload

Test configuration: 600GB 2.5” Enterprise Turbo SSHD (ST600MX) 6 Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6 Gbps SAS, 300GB 2.5” Savvio 10K RPM HDD 6 Gbps SAS, 1TB 3.5” 7.2K RPM HDD 6 Gbps SATA. Workload generator and virtual clients ran on Windows 7 Ultimate. Microsoft SQL Server 2012 database was on Windows 7 Ultimate SP1 (64 bit), 14 GB DRAM, dual CPU (E8400 2.99GHz), with LSI 9211 6 Gbps SAS adapters, running TPC-E (www.tpc.org) workloads. The VM resided on a separate SSD based data store from the devices being tested (e.g., where the MDF resided). All devices being tested were Raw Device Mapped (RDM) independent persistent, with the database log file on a separate SSD device, also persistent (no delayed writes). Tests were performed in StorageIO Lab facilities by StorageIO personnel.

SSHD storage I/O Exchange performance
Microsoft Exchange workload

Test configuration: 2.5” Seagate 600 Pro 120GB (ST120FP0021) SSD 6 Gbps SATA, 600GB 2.5” Enterprise Turbo SSHD (ST600MX) 6 Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6 Gbps SAS, 2.5” Savvio 146GB HDD 6 Gbps SAS, 3.5” Barracuda 500GB 7.2K RPM HDD 3 Gbps SATA. Email server hosted as a guest on VMware vSphere/ESXi V5.5, Microsoft Small Business Server (SBS) 2011 Service Pack 1 (64 bit), 8GB DRAM, one CPU (Intel X3490 2.93 GHz), LSI 9211 6 Gbps SAS adapter, JetStress 2010 (no other active workload during test intervals). All devices being tested were Raw Device Mapped (RDM) where the EDB resided. The VM was on a separate SSD based data store from the devices being tested. Log file IOPs were handled via a separate SSD device.

Read more about the above proof points, along with additional data points and configuration information, in the associated white paper found here (no registration required).

What this all means

Similar to flash-based SSD technologies, the question is not if, rather when, where, why and how to deploy hybrid solutions such as SSHDs. If your applications and data infrastructure environment have the need for storage I/O speed without loss of space capacity and without breaking your budget, SSD enabled devices like the Seagate Enterprise Turbo 600GB SSHD are in your future. You can learn more about enterprise class SSHDs such as those from Seagate by visiting this link here.

Watch for extra workload proof points being performed including with 12Gbps SAS and faster servers using PCIe Gen 3.

Ok, nuff said.

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Chat with Cash Coleman talking ClearDB, cloud database and Johnny Cash

In this episode from the SNIA DSI 2014 event I am joined by Cashton Coleman (@Cash_Coleman).

cash coleman cleardb

Introducing Cashton (Cash) Coleman and ClearDB

Cashton (Cash) is a software architect, product mason, family bonder, life builder, idea founder along with Founder & CEO of SuccessBricks, Inc., makers of ClearDB. ClearDB is a provider of MySQL database software tools for cloud and physical environments. In our conversation we talk about ClearDB, what they do and whom they do it with, including deployments in clouds as well as onsite. For example, if you are using some of the Microsoft Azure cloud services with MySQL, you may already be using this technology. However, there is more to the story and discussion, including how Cash got his name, and how to speed up databases for little and big data among other topics.

If you are a database person, you will want to listen to what Cash has to say about boosting performance and getting more value out of your physical hardware or cloud services. On the other hand, if you are a storage person, listen in to get some insight and ideas on how to address database performance and resiliency. For others who just like to listen to new trends, technology talk, or hear about emerging companies to keep an eye on, you won't want to miss the podcast conversation.

Topics and themes discussed:

  • Traditional and Cloud Database
  • MySQL and Database as a Service (DaaS)
  • Microsoft Azure and Amazon Web Services (AWS)
  • Little Data, Big Data and Big Data Databases
  • Boosting database performance with less hardware
  • Getting more value out of fast SSD hardware
  • Database performance and resiliency
  • What’s the Johnny Cash and Cloud Connection

Check out ClearDB and listen in to the podcast conversation with Cash here.


    Ok, nuff said.

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Part II: EMC announces XtremIO General Availability, speeds and feeds


    XtremIO flash SSD more than storage I/O speed

Following up part I of this two-part series, here are more details, insights and perspectives about EMC XtremIO and its general availability that was announced today.

    XtremIO the basics

    • All flash Solid State Device (SSD) based solution
    • Cluster of up to four X-Brick nodes today
    • X-Bricks available in 10TB increments today, 20TB in January 2014
    • 25 eMLC SSD drives per X-Brick with redundant dual processor controllers
    • Provides server-side iSCSI and Fibre Channel block attachment
    • Integrated data footprint reduction (DFR) including global dedupe and thin provisioning
    • Designed for extending duty cycle, minimizing wear of SSD
    • Removes need for dedicated hot spare drives
    • Capable of sustained performance and availability with multiple drive failure
    • Only unique data blocks are saved, others tracked via in-memory meta data pointers
    • Reduces overhead of data protection vs. traditional small RAID 5 or RAID 6 configurations
    • Eliminates overhead of back-end functions performance impact on applications
    • Deterministic  storage I/O performance (IOPs, latency, bandwidth) over life of system

    When would you use XtremIO vs. another storage system?

If you need all the enterprise like data services including thin provisioning, dedupe and resiliency, with deterministic performance on an all-flash system with raw capacity from 10-40TB (today), then XtremIO could be a good fit. On the other hand, if you need a mix of SSD based storage I/O performance (IOPS, latency or bandwidth) along with some HDD based space capacity, then a hybrid or traditional storage system could be the solution. Then there are hybrid scenarios where a hybrid storage system, array or appliance (mix of SSD and HDD) is used for most of the applications and data, with an XtremIO handling the more demanding tasks.

    How does XtremIO compare to others?

EMC with XtremIO is taking a different approach than some of their competitors, whose model is to compare their faster flash-based solutions vs. traditional mid-market and enterprise arrays, appliances or storage systems on a storage I/O IOP performance basis. With XtremIO there is improved performance measured in IOPs or database transactions among other metrics that matter. However there is also an emphasis on consistent, predictable quality of service (QoS), or what is known as deterministic storage I/O performance. This means both higher IOPs with lower latency while doing normal workload along with background data services (snapshots, data footprint reduction, etc.).

Some of the competitors focus on how many IOPs or how much work they can do, however without context or showing the impact to applications when background tasks or other data services are in use. Other differences include how cluster nodes are interconnected (for scale out solutions), such as use of Ethernet and IP-based networks vs. dedicated InfiniBand or PCIe fabrics. Host server attachment will also differ as some are only iSCSI or Fibre Channel block, or NAS file, or give a mix of different protocols and interfaces.

    An industry trend however is to expand beyond the flash SSD need for speed focus by adding context along with QoS, deterministic behavior and addition of data services including snapshots, local and remote replication, multi-tenancy, metering and metrics, security among other items.
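
To see why consistency can matter as much as a headline IOP number, compare average vs. 99th percentile latency for a workload with occasional slow outliers (a generic sketch with made-up numbers, not measurements of any particular system):

```python
# Average latency can hide inconsistent (non-deterministic) behavior:
# the numbers below are made up purely to illustrate the point.
import statistics

latencies_ms = [0.5] * 97 + [20.0] * 3   # mostly fast, a few slow outliers

avg = statistics.mean(latencies_ms)
p99 = sorted(latencies_ms)[int(len(latencies_ms) * 0.99) - 1]

print(f"average latency: {avg:.2f} ms")  # ~1.09 ms, looks fine
print(f"99th percentile: {p99:.2f} ms")  # 20 ms, what some users actually see
```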


    Who or what are XtremIO competition?

To some degree, vendors who only have PCIe flash SSD cards might place themselves as the alternative to all SSD or hybrid mixed SSD and HDD based solutions. FusionIO used to take that approach until they acquired NexGen (a storage system vendor) and have now taken a broader, more solution balanced approach of using the applicable tool for the task or application at hand.

Other competitors include the all SSD based storage array, system or appliance vendors, which includes legacy as well as startup vendors, among others IBM who bought TMS (FlashSystems), NetApp (EF540), Solidfire, Pure, Violin (who did a recent IPO) and Whiptail (bought by Cisco). Then there are the hybrids, which is a long list including Cloudbyte (software), Dell, EMC's other products, HDS, HP, IBM, NetApp, Nexenta (software), Nimble, Nutanix, Oracle, Simplivity and Tintri among others.

    What’s new with this XtremIO announcement

    10TB X-Bricks enable 10 to 40TB (physical space capacity) per cluster (available on 11/19/13). 20TB X-Bricks (larger capacity drives) will double the space capacity in January 2014. If you are doing the math, that means either a single brick (dual controller) system, or up to four bricks (nodes, each with dual controllers) configurations. Common across all system configurations are data features such as thin provisioning, inline data footprint reduction (e.g. dedupe) and XtremIO Data Protection (XDP).

    What does XtremIO look like?

    XtremIO consists of up to four nodes (today) based on what EMC calls X-Bricks.
    EMC XtremIO X-Brick
    25 SSD drive X-Brick

Each 4U X-Brick has 25 eMLC SSD drives in a standard EMC 2U DAE (disk enclosure) like those used with the VNX and VMAX for SSD and Hard Disk Drives (HDD). In addition to the 2U drive shelf, there is a pair of 1U storage processors (e.g. controllers) that give redundancy and shared access to the storage shelf.

    XtremIO Architecture
    XtremIO X-Brick block diagram

XtremIO storage processors (controllers) and drive shelf block diagram. Each X-Brick and its storage processors or controllers communicate with each other and other X-Bricks via a dedicated InfiniBand fabric using Remote Direct Memory Access (RDMA) for memory to memory data transfers. The controllers or storage processors (two per X-Brick) each have dual processors with eight cores for compute, along with 256GB of DRAM memory. Part of each controller's DRAM memory is set aside as a mirror of its partner or peer and vice versa, with access being over the InfiniBand fabric.

    XtremIO fabric
    XtremIO X-Brick four node fabric cluster or instance

    How XtremIO works

Servers access XtremIO X-Bricks using iSCSI and Fibre Channel for block access. A responding X-Brick node handles the storage I/O request and, in the case of a write, updates other nodes. For a write, the handling node or controller (aka storage processor) checks its meta data map in memory to see if the data is new and unique. If so, the data gets saved to SSD along with meta data information updated across all nodes. Note that data gets ingested and chunked or sharded into 4KB blocks. So for example if a 32KB storage I/O request from the server arrives, it is broken (e.g. chunked or sharded) into eight 4KB pieces, each with a mathematically unique fingerprint created. This fingerprint is compared to what is known in the in-memory meta data tables (this is a hexadecimal number compare, so a quick operation). Based on the comparison, if unique the data is saved and pointers created; if it already exists, then pointers are updated.

In addition to determining if data is unique, the fingerprint is also used to generate a balanced data dispersal plan across the nodes and SSD devices. Thus there is the benefit of reducing duplicate data during ingestion, while also reducing back-end IOs within the XtremIO storage system. Another byproduct is the reduction in time spent on garbage collection or other background tasks commonly associated with SSD and other storage systems.
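
A highly simplified sketch of that ingest flow (illustrative only; the actual XIOS implementation, hash function and dispersal scheme are not public) might look like the following, where one fingerprint both detects duplicates and decides placement:

```python
# Simplified sketch of fingerprint-based inline dedupe with balanced
# dispersal (illustrative only, not the actual XtremIO XIOS design).
import hashlib

CHUNK = 4096            # 4KB chunks/shards as described above
NODES = 4               # e.g. a four X-Brick cluster

fingerprints = {}       # fingerprint -> (node, reference count)

def ingest(data: bytes):
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        fp = hashlib.sha256(chunk).hexdigest()    # unique fingerprint
        if fp in fingerprints:                    # seen before: pointer update only
            node, refs = fingerprints[fp]
            fingerprints[fp] = (node, refs + 1)
        else:                                     # new unique data: place and save
            node = int(fp, 16) % NODES            # fingerprint drives dispersal
            fingerprints[fp] = (node, 1)
            # ...write the 4KB chunk to SSD on 'node' here...

ingest(b"example data " * 5000)   # e.g. a 65KB write broken into 4KB pieces
print(f"{len(fingerprints)} unique 4KB chunks tracked")
```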

Meta data is kept in memory with a persistent copy written to a reserved area on the flash SSD drives (think of it as a vault area) to support and keep system state and consistency. In between data consistency points the meta data is kept in a log journal, similar to how a database handles log writes. What's different from a typical database is that the XtremIO XIOS platform software does these consistency point writes for persistence on a granularity of seconds vs. minutes or hours.


What about the rumor that XtremIO can only do 4KB IOPs?

Does this mean that the smallest storage I/O or IOP that XtremIO can do is 4KB?

That is a rumor or some fud I have heard floated by a competitor (or two or three) that assumes if only a 4KB internal chunk or shard is being used for processing, that must mean no IOPs smaller than 4KB from a server.

XtremIO can do storage I/O IOP sizes of 512 bytes (e.g. the standard block size) as do other systems. Note that the standard server storage I/O block or IO size is 512 bytes or multiples of that, unless the new 4KB Advanced Format (AF) block size is being used, which based on my conversations with EMC, AF is not supported, yet. (Updated 11/15/13: EMC has indicated that host (front-end) 4K AF support, along with 512 byte emulation modes, are available now with XIOS). Also keep in mind that since XtremIO XIOS internally is working with 4KB chunks or shards, that is a stepping stone for being able to eventually leverage back-end AF drive support in the future should EMC decide to do so (Updated 11/15/13: Waiting for confirmation from EMC about whether back-end AF support is now enabled or not, will give more clarity as it is received).

    What else is EMC doing with XtremIO?

• VCE Vblock XtremIO systems for SAP HANA and other in-memory databases, along with VDI optimized solutions.
    • VPLEX and XtremIO for extended distance local, metro and wide area HA, BC and DR.
    • EMC PowerPath XtremIO storage I/O path optimization and resiliency.
    • Secure Remote Support (aka phone home) and auto support integration.

    Boosting your available software license minutes (ASLM) with SSD

Another use of SSD has been the opportunity to make better use of servers, stretching their usefulness or delaying purchase of new ones by improving their effective use to do more work. In the past this technique of using SSDs to delay a server or CPU upgrade was used when hardware was more expensive, or during the dot com bubble to fill surge demand gaps. This has the added benefit of stretching database and other expensive software licenses to go further or do more work. The less time servers spend waiting for IOPs means more time for doing useful work and getting value out of the software license. Otoh, the more time spent waiting is lost available software minutes, which is cost overhead.

Think of available software license minutes (ASLM) as the minutes your software spends doing useful work and providing value. On the other hand, if those minutes are not used for useful work (e.g. spent waiting or lost due to CPU, server or I/O wait), then they are lost. This is like the airlines' available seat miles (ASM) metric, where a seat left empty is a lost opportunity, however if used, there is value, not to mention if yield management is applied to price that seat differently. To make up for that loss, many organizations have to add extra servers and thus more software licensing costs.
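
Expressed as simple math (the ASLM analogy is from the text above; the wait-time numbers below are illustrative assumptions, not measurements):

```python
# Available software license minutes (ASLM), by analogy with the airlines'
# available seat miles (ASM): minutes spent waiting on storage I/O are
# license cost without value. Numbers are illustrative assumptions.
licensed_minutes_per_day = 24 * 60

scenarios = {
    "HDD only": 0.40,      # assumed fraction of time waiting on storage I/O
    "with SSD": 0.10,      # assumed wait fraction with SSD cache or target
}

for label, wait_fraction in scenarios.items():
    useful = licensed_minutes_per_day * (1 - wait_fraction)
    print(f"{label}: {useful:.0f} of {licensed_minutes_per_day} "
          f"license minutes doing useful work")
```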


    Can we get a side of context with them metrics?

EMC along with some other vendors are starting to give more context with their storage I/O performance metrics that matter than simple IOPs or Hero Marketing Metrics. However context extends beyond performance to also availability and space capacity, which means data protection overhead. As an example, EMC claims 25% for RAID 5, 20% for RAID 6, or 30% for a RAID 5/RAID 6 combo, where a 25 drive (SSD) XDP has an 8% overhead. However this assumes a 4+1 (5 drive) RAID group, not an apples to apples comparison on a space overhead basis. For example a 25 drive RAID 5 (24+1) would have around a 4% parity protection space overhead, or a RAID 6 (23+2) about 8%.

Granted, while the space protection overhead might be more apples to apples with the earlier examples vs. XDP, there are other differences. For example, solutions such as XDP can be more tolerant of multiple drive failures with faster rebuilds than some of the standard or basic RAID implementations. Thus more context and clarity would be helpful.

StorageIO would like to see vendors, including EMC along with the startups, who give data protection space overhead comparisons without context add that context (and applauds those who already provide it). This means providing the context for data protection space overhead comparisons similar to performance metrics that matter. For example, simply state with an asterisk or footnote when comparing a 4+1 RAID 5 vs. a 25 drive erasure code, forward error correction, dispersal, XDP or wide stripe RAID for that matter (e.g. can we get a side of context). Note this is in no way unique to EMC and is in fact quite common with many of the smaller startups as well as established vendors.
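
The apples to apples point is easy to see with the parity overhead arithmetic (straightforward math using the drive counts discussed above):

```python
# Parity space overhead (parity drives as a fraction of total drives in the
# group), showing why the comparison baseline matters.
def parity_overhead(data_drives, parity_drives):
    return parity_drives / (data_drives + parity_drives)

print(f"RAID 5 (4+1)  : {parity_overhead(4, 1):.0%}")   # 20% on a 5 drive group
print(f"RAID 5 (24+1) : {parity_overhead(24, 1):.0%}")  # 4% across 25 drives
print(f"RAID 6 (23+2) : {parity_overhead(23, 2):.0%}")  # 8% across 25 drives
# EMC cites about 8% overhead for a 25 drive XDP group, which is closer to
# the wide-stripe layouts than to the small 4+1 group used in the comparison.
```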

    General comments

My laundry list of items that for now would be nice to have (however for you might be need to have) would include native replication (today it leverages RecoverPoint), Advanced Format (4KB) support for servers (Updated 11/15/13: per above, EMC has confirmed that host/server-side (front-end) AF along with 512 byte emulation modes exist today) as well as for SSD based drives, DIF (Data Integrity Feature), and Microsoft ODX among others. While 12Gb SAS server to X-Brick attachment for small in-the-cabinet connectivity might be nice for some, more practical on a go forward basis would be 40GbE support.

Now let us see what EMC does with XtremIO and how it competes in the market. One indicator to watch in the industry and market for the impact or presence of EMC XtremIO is the amount of fud and mud that will be tossed around. Perhaps it is time to make a big bowl of popcorn, sit back and enjoy the show…

    Ok, nuff said (for now).

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Part II: XtremIO, XtremSW and XtremSF EMC flash ssd portfolio redefined

    Part one of this two-part post provided a summary of today’s EMC (@EMCflash) announcement around XtremIO and renaming VFCache to XtremSF and associated software as XtremSW.


    Synopsis of announcement

    • Product rollout and selective availability of the new all flash SSD array XtremIO
    • Rename server-side PCIe ssd flash cards from VFCache to XtremSF
    • New XtremSF models including enhanced multi-level cell (eMLC) with larger capacities
    • Rename VFCache caching software to XtremSW (enables cache mode vs. target mode)

Now let's take a closer look at what was announced along with what it means in terms of industry trends and perspectives.

XtremIO has been in customer beta for some time and now those, along with some other early customers, are able to acquire the product. In addition, EMC is opening up XtremIO to more prospective customers (Directed Availability) who have requirements or needs that line up with the product's target market capabilities.


What this means is that XtremIO is not simply being put out into the general product population for broad distribution. Instead, it is being put into a controlled release (Directed Availability) to help customers, partners and EMC sales decide where best to use it and thus avoid revenue prevention in other areas. The criteria or target opportunities (at least initially) are little-data applications including OLTP, server virtualization (where aggregation can cause aggravation) along with virtual desktop or VDI. In other words, many of the traditional or legacy IOP focused SSD opportunities.

In addition to XtremIO, EMC has renamed their VFCache PCIe flash SSD cards (launched February 2012) to XtremSF, along with new models with both SLC and MLC nand flash. Also as part of today's announcement, EMC is renaming the cache software for XtremSF (e.g. VFCache) to be known as XtremSW. Now if that prompted the question of whether you can buy XtremSF as a target mode only card without the cache software, the answer is yes.

    What is XtremIO?

It is a new all flash SSD storage array. XtremIO is a cluster, grid or collection of nodes called bricks, with linear performance scaling, providing block based all flash SSD storage. Data services consist of data footprint reduction (DFR) including inline global (across all nodes or bricks) dedupe on 4KB chunks, along with thin provisioning. Global dedupe is done on ingest using a combination of flash buffered meta-data (tables, index or dictionary) of what has been seen before, along with multi-threaded software to leverage multi-core processors. Using the global dedupe at ingest, only new unique data is saved, based on 4KB chunks.

Performance per EMC scales from a single node to a second node or on up to a fourth node. Note: architecturally more nodes can be added, with EMC indicating added models will be available in the future.

In addition to DFR, other data services include writable snapshots and auto-load balancing when new bricks are added. Note that in a normally running XtremIO, data is automatically spread across the nodes for both performance and resiliency. Data only needs to be moved or load-balanced in the background when new bricks are added. Instant copy snapshots are supported along with writable snapshots. Currently replication is done via external EMC products such as VPLEX or RecoverPoint, with statements of direction (SOD) for future enhancements.

    Additional attributes of XtremIO include:

• Each node or brick (X-Brick) has up to 25 SSD drives (16 in the Gen 1 hardware platform)
    • All bricks are involved in IO and storage processing
    • Positioned by EMC as Software Defined (no proprietary hardware)
    • Four x 8Gb Fibre Channel (8GFC) and four x 10Gb Ethernet (iSCSI) per brick
    • Bricks communicate with each other via a separate interconnect network or fabric
    • Bricks have redundant processors (think of as controllers) with multiple sockets and cores
    • 4KB random read IOP’s scale from 250K (one brick), 500K (two bricks) and 1 Million (four bricks). For 4K random write IOPS, the numbers are 100K, 200K and 400K across one, two and four brick configurations with low latency and all data services running (EMC supplied numbers)

In addition to 4K being a commonly used or referred to IO size, it is also the same size as the new industry standard Advanced Format (AF). Today the standard storage block, page or sector size is 512 bytes, however AF moves that to a larger 4,096 bytes (e.g. 4KB) to more closely align with larger IO sizes. Note that many HDDs and some SSDs today support AF and provide 512 byte emulation modes for compatibility.
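
For example, checking whether an I/O is aligned to the 4KB AF boundary is simple arithmetic (a generic sketch, not specific to XtremIO or any particular drive):

```python
# Generic 4KB Advanced Format (AF) alignment check: an I/O is AF-aligned
# when both its offset and length fall on 4,096 byte boundaries; unaligned
# I/Os fall back to 512 byte emulation, typically at a performance cost.
AF_SECTOR = 4096

def af_aligned(offset_bytes: int, length_bytes: int) -> bool:
    return offset_bytes % AF_SECTOR == 0 and length_bytes % AF_SECTOR == 0

print(af_aligned(8192, 4096))   # True : aligned 4KB I/O
print(af_aligned(512, 4096))    # False: 512 byte offset needs emulation
```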

    What is XtremSF?

VFCache is renamed XtremSF, with new models using eMLC as companions to existing SLC PCIe cards and blade server mezzanine cards. EMC is emphasizing performance metrics that matter, including IOPs that are relative to customer workloads such as 4K, 8K or larger, with a mix of reads and writes with low latency. In addition to IOPs with latency, size along with reads or writes for little data, EMC is also showing bandwidth or throughput numbers for big-data and big-bandwidth.

Model         Capacity   Read Transfer   Write Transfer   Random 4K Read   Random 4K Write   Random 4K Mixed   Read latency   Write latency
2200 (eMLC)   2.2 TB     2.47 GB/sec     1.1 GB/sec       343K IOPS        105K IOPS         206K IOPS         87us           30us
700 (SLC)     700 GB     2.9 GB/sec      1.8 GB/sec       712K IOPS        197K IOPS         411K IOPS         50us           13us
550 (eMLC)    550 GB     1.36 GB/sec     512 MB/s         174K IOPS        49K IOPS          96K IOPS          87us           37us
350 (SLC)     350 GB     2.9 GB/sec      756 MB/s         715K IOPS        95K IOPS          267K IOPS         50us           13us
Sampling of SLC and eMLC XtremSF PCIe SSD card performance characteristics (via EMC), including latency measured in microseconds. Note the performance differences due to some cards being based on SLC and others on eMLC.

    Additional attributes, some new and some previously announced include:

    • 8X  PCIe bandwidth lanes for performance
    • No IO impact to applications during garbage collection
    • Supports multi-core processor workloads with parallel design
    • Low CPU overhead by off-loading functions to PCIe card
    • Half-height, half-length PCIe form factor
    • Wear-leveling for nand flash program/erase (P/E) cycle duration

Other storage, server and systems vendors including Cisco, Dell, HP, IBM, NetApp and Oracle offer various PCIe nand flash SSD cards either as target, cache or mixed modes. Manufacturers or suppliers of PCIe nand flash SSD cache and target cards include among others FusionIO, Intel, LSI, Micron, OCZ and Virident (who is partnered with Seagate).

    What is XtremSW?

Server side flash software (not to be confused with FAST) for using XtremSF as a tier 0 (server-side) SSD cache or target. In target mode the XtremSF functions as high performance, persistent, local, dedicated direct attached storage (DAS). Cache mode enables frequently accessed data to be kept close to the applications, off-loading underlying storage systems so they can be more effectively used. The XtremSW complements back-end storage systems for data protection and persistence along with investment protection of those assets.


    What this all means

SSD is in your future; the question is where, when and with what.

Why not just use SSD (DRAM and/or nand flash) everywhere?

Keep in mind that in the data center (traditional, virtual or cloud), everything is not the same. Thus the simple answer is that there is not enough of it available at a low enough price point (think closer to Hard Disk Drive (HDD) costs) to fit into customers' budgets. Sure SSDs provide better performance and productivity benefits, however while there is no such thing as a data or information recession, there are budget constraints.

Another reason why SSD can't simply be used everywhere is physical (and logical) constraints, such as the amount of memory a server can directly access, current DDR3 DIMMs (this could change with DDR4 according to Micron) only being able to address and work with DRAM, PCIe bus physical slot space, and operating system and hypervisor addressing limits among others.

If SSD (DRAM and/or nand flash) were priced low enough (e.g. much closer to HDDs) and available in sufficient quantity, it would be used everywhere. SSD including both DRAM and nand flash (SLC, MLC, eMLC, TLC, etc.) along with emerging Phase Change Memory (PCM) is at the convergence of traditional memory and data storage. While some storage (or server) professionals may not agree, storage is an extension of memory and thus part of the traditional server and storage memory hierarchy shown below.

    Storage I/O and cache locality of reference

This brings up the locality of reference topic, also shown in the following figure, where the best I/O is the one that does not have to be done. The second best is the one that can be done closest to the application at a given level of service. Locality of reference, which is important for servers and storage systems including caching, refers to how close frequently accessed data is to where it is needed. For some applications this means as much DRAM main memory in a server as possible, either clustered, with battery backup or other data persistency protection including onboard HDD or SSD (e.g. towards the top of the hierarchy).
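
As a rough way to quantify the hierarchy effect (a sketch using assumed, not measured, latencies and hit fractions), effective access time is the weighted sum across tiers, which is why resolving I/O close to the application pays off:

```python
# Effective access time across a simple memory/storage hierarchy.
# Tier latencies and hit fractions are illustrative assumptions only.
tiers = [
    # (tier,              latency_ms, fraction of accesses resolved here)
    ("DRAM main memory",  0.0001,     0.90),
    ("PCIe flash cache",  0.05,       0.07),
    ("SSD/hybrid array",  0.5,        0.02),
    ("HDD",               8.0,        0.01),
]

effective_ms = sum(latency * fraction for _, latency, fraction in tiers)
print(f"effective access time: {effective_ms:.4f} ms")  # ~0.094 ms
```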

    nand flash SSD and storage I/O location options

There are other applications where localized SSD (DRAM or nand flash) is a benefit to complement main memory, or as a persistent cache and target such as PCIe cards or SAS and SATA drives. Further down the stack, and for housing larger amounts of storage with performance (reads or writes, random or sequential) along with data services, is where all-SSD and hybrid (mix of SSD and HDD) systems fit. Even further down the stack, and for a broader segment, is where cloud storage services based on SSD, such as those from Rackspace (Cloud Block Storage with SSD) and Amazon (provisioned IOPS for EBS), have a play. Let's not forget about SSD in laptops, tablets and workstations; for example, I have a Samsung model 830 in my Lenovo X1.

    Storage I/O industry trends and perspectives

    Some general industry trends include:

    • SSD is like real estate, location can matter, a little can go a long way
    • SSD media options include DRAM and nand flash (SLC, MLC, eMLC, TLC)
    • Portfolios broadening with different products for various needs
    • SSD functionality in servers, appliances, storage systems and cloud services
    • All flash SSD arrays have not killed off all traditional or hybrid storage arrays
• Focus expanding from Just a Bunch Of SSD (JBOS) to enterprise-like functionality
    • Software needs hardware, hardware needs software, the two work better together
    • Comparing meaningful metrics that matter vs. industry marketing metrics


    Storage I/O industry trends and perspectives

    Some additional thoughts and perspectives

    Does this mean traditional storage arrays are now dead?

IMHO no; there will be some cannibalization of existing storage systems by XtremIO within EMC customers or prospects if not managed, as well as of systems from other vendors. Keep in mind that EMC recently announced enhancements to their VMAX, including entry-level options for service providers. Some new opportunities will also open up where traditional all-SSD (flash or DRAM) systems have historically had success.

Traditional and newer dedicated SSD systems include those from Texas Memory Systems (TMS), bought by IBM in 2012, and the recently announced NetApp EF540 (and future FlashRay), along with startups SolidFire, Violin and Whiptail among others. There will be environments where XtremIO may take care of all storage needs for a customer, or for a specific application or piece of it. Then there will be other situations where XtremIO will co-exist with EMC or other vendors' storage solutions as part of a data infrastructure.

    Storage I/O industry trends and perspectives

    Who will EMC be competing against with XtremIO?

Certainly the startups or smaller players such as Violin, Whiptail, Pure Storage and SolidFire, along with IBM/TMS and the NetApp EF540 (and eventually FlashRay as well), among others.

There will also be some competition with other hybrid storage array vendors that have a mix of HDD and SSD. XtremIO will also compete in some situations on its own vs. other PCIe flash target and cache cards such as those from FusionIO; however, for the most part those will be up against XtremSF and XtremSW.

    Why the slow or “Directed Availability” rollout?

Why not? By taking a controlled rollout, selecting and qualifying customers for XtremIO, EMC gets to manage how the product goes out into production and control how it is used, increasing the chances of success. Unlike a startup that would be forced to try to put its new technology anywhere, EMC has the luxury of selecting where it goes, not to mention the need to avoid introducing a revenue prevention play for its other products.

    Overall, I give an Atta boy and Atta girl to the EMC crew for a Product Defined Announcement (PDA) extending their flash portfolio to complement their different customers and prospects various environment needs. Now watch EMC, NetApp and others step up their flash dance moves to see who will out flash the others in the eXtreme flash games, not to mention emerging software defined marketing moves (SDMM) ;) .

    Ok, nuff said.

    Cheers Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Ceph Day Amsterdam 2012 (Object and cloud storage)

    StorageIO industry trends cloud, virtualization and big data

    Recently while I was in Europe presenting some sessions at conferences and doing some seminars, I was invited by Ed Saipetch (@edsai) of Inktank.com to attend the first Ceph Day in Amsterdam.

    Ceph day image

As luck or fate would have it, I was in Nijkerk, which is about an hour's train ride from Amsterdam Central Station, plus I had a free day in my schedule. After a morning train ride and a nice walk from Amsterdam Central I arrived at the Tobacco Theatre (a former tobacco trading venue) where Ceph Day was underway, in time for a lunch of kroketten sandwiches.

    Attendees at Ceph Day

Let's take a quick step back and address, for those not familiar, what Ceph (short for cephalopod) is and why it was worth spending a day attending this event. Ceph is an open source, distributed, scale-out (e.g. cluster or grid) object storage software platform that runs on industry standard hardware.

Dell server supporting Ceph demo; sketch of Ceph demo configuration

Ceph is used for deploying object storage, cloud storage and managed services, general purpose storage for research, commercial, scientific, high performance computing (HPC) or high productivity computing (commercial), along with backup or data protection and archiving destinations. Other software similar in functionality or capabilities to Ceph includes OpenStack Swift, Basho Riak CS, Cleversafe, Scality and Caringo among others. There are also the tin-wrapped software (e.g. appliance or pre-packaged) solutions such as Dell DX (Caringo), DataDirect Networks (DDN) WOS, EMC ATMOS and Centera, Amplidata and HDS HCP among others. From a service standpoint, these solutions can be used to build services similar to Amazon S3 and Glacier, Rackspace Cloud Files and Cloud Block, DreamHost DreamObjects and HP Cloud storage among others.

    Ceph cloud and object storage architecture image

At the heart of Ceph is RADOS, a distributed object store that consists of peer nodes functioning as object storage devices (OSDs). Data can be accessed via REST (Amazon S3-like) APIs, libraries, CephFS and gateways, with information being spread across nodes and OSDs using the CRUSH algorithm (note Sage Weil is one of the authors of CRUSH: Controlled, Scalable, Decentralized Placement of Replicated Data). Ceph is scalable in terms of performance, availability and capacity by adding extra nodes with hard disk drives (HDDs) or solid state devices (SSDs). One of the presentations pertained to DreamHost, an early adopter of Ceph used to build their DreamObjects (cloud storage) offering.
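
For a sense of what the library access path looks like, below is a minimal sketch using the python-rados bindings to store and fetch one object. It assumes a reachable cluster, a readable /etc/ceph/ceph.conf and an existing pool named "data" (placeholder names).

    # Minimal librados sketch: write and read one object in a Ceph pool.
    # Assumes python-rados is installed, /etc/ceph/ceph.conf points at a
    # running cluster, and a pool named "data" already exists.
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()                      # authenticate and join the cluster

    ioctx = cluster.open_ioctx("data")     # I/O context bound to one pool
    ioctx.write_full("hello-object", b"Ceph Day Amsterdam 2012")
    print(ioctx.read("hello-object"))      # placed and retrieved via CRUSH

    ioctx.close()
    cluster.shutdown()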

    Ceph cloud and object storage deployment image

In addition to storage nodes, there is also an odd number of monitor nodes to coordinate and manage the Ceph cluster, along with optional gateways for file access. In the above figure (via DreamHost), load balancers sit in front of gateways that interact with the storage nodes. The storage node in this example is a physical server with 12 x 3TB HDDs, each configured as an OSD.
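
Coming back to the CRUSH point mentioned above: the key idea is that placement is computed rather than looked up, so any client can derive the same set of OSDs from an object's name without consulting a central table. The toy sketch below mimics that property with rendezvous (highest-random-weight) hashing; it is a simplification for illustration, not the actual CRUSH algorithm.

    # Toy computed-placement sketch in the spirit of CRUSH (not the real
    # algorithm): every client derives the same N OSDs from an object name.
    import hashlib

    osds = [f"osd.{i}" for i in range(12)]   # pretend cluster of 12 OSDs

    def place(obj_name, replicas=3):
        # Rendezvous hashing: score every OSD for this object and keep the
        # top N; deterministic, so no central lookup table is needed.
        def score(osd):
            digest = hashlib.sha1(f"{obj_name}:{osd}".encode()).hexdigest()
            return int(digest, 16)
        return sorted(osds, key=score, reverse=True)[:replicas]

    print(place("hello-object"))             # same 3 OSDs from any client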

    Ceph dreamhost dreamobject cloud and object storage configuration image

In the DreamHost example above, there are 90 storage nodes plus 3 management nodes; the total raw storage capacity (no RAID) is about 3PB (12 x 3TB = 36TB per node, x 90 nodes = 3.24PB). Instead of using RAID or mirroring, each object's data is replicated or copied to three (e.g. N=3) different OSDs on separate nodes, where N is adjustable for a given level of data protection, giving a usable storage capacity of about 1PB (3.24PB / 3 ≈ 1.08PB).

Note that for more usable capacity at lower availability, N could be set lower, while a larger value of N gives more durability or data protection at a higher storage capacity overhead cost. In addition to using JBOD configurations with replication, Ceph can also be configured with a combination of RAID and replication, providing more flexibility for larger environments to balance performance, availability, capacity and economics.
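
Here is that raw vs. usable arithmetic for different replica counts; the node count and drive sizes are the DreamHost figures quoted above, and the rest is simple math.

    # Raw vs. usable capacity under N-way replication (figures from above).
    nodes, drives_per_node, tb_per_drive = 90, 12, 3
    raw_tb = nodes * drives_per_node * tb_per_drive   # 3,240 TB ~= 3.24 PB

    for n in (2, 3, 4):                               # replicas per object
        usable_tb = raw_tb / n
        overhead_pct = (n - 1) / n * 100
        print(f"N={n}: usable ~{usable_tb:,.0f} TB, "
              f"capacity overhead {overhead_pct:.0f}%")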

    Ceph dreamhost and dreamobject cloud and object storage deployment image

One of the benefits of Ceph is the flexibility to configure it how you want or need for different applications. This can be a cost-effective, hardware-light configuration using JBOD or internal HDDs in small form factor, generally available servers, or high-density servers and storage enclosures with optional RAID adapters along with SSD. This flexibility differs from some cloud and object storage systems or software tools, which take a stance of not using or avoiding RAID rather than providing options and the flexibility to configure and use the technology how you see fit.

    Here are some links to presentations from Ceph Day:
    Introduction and Welcome by Wido den Hollander
    Ceph: A Unified Distributed Storage System by Sage Weil
    Ceph in the Cloud by Wido den Hollander
    DreamObjects: Cloud Object Storage with Ceph by Ross Turk
    Cluster Design and Deployment by Greg Farnum
    Notes on Librados by Sage Weil

    Presentations during ceph day

While at Ceph Day, I was able to spend a few minutes with Sage Weil, Ceph creator and founder of Inktank.com, to record a podcast (listen here) about what Ceph is and where and when to use it, along with other related topics. Also while at the event I had a chance to sit down with Curtis Preston (aka Mr. Backup), where we did a simulcast video and podcast. The simulcast involved Curtis recording this video with me as a guest discussing Ceph, cloud and object storage, backup, data protection and related themes while I recorded this podcast.

One of the interesting things I heard, or actually did not hear, at the Ceph Day event that I tend to hear at related conferences such as SNW, was a focus on where and how to use, configure and deploy Ceph along with its various configuration options and replication or copy modes, as opposed to going off on erasure codes or other tangents. In other words, instead of focusing on the data protection protocols and algorithms, or on what is wrong with the competition or other architectures, Ceph Day focused on removing cloud and object storage objections and on enablement.

    Where do you get Ceph? You can get it here, as well as via 42on.com and inktank.com.

Thanks again to Sage Weil for taking time out of his busy schedule to record a podcast talking about Ceph, as well as to 42on.com and Inktank for hosting and for the invitation to attend the first Ceph Day in Amsterdam.

    View of downtown Amsterdam on way to train station to return to Nijkerk
    Returning to Amsterdam central station after Ceph Day

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio


    Seven databases in seven weeks, a book review of NoSQL databases

    StorageIO industry trends cloud, virtualization and big data

Seven Databases in Seven Weeks (A Guide to Modern Databases and the NoSQL Movement) is a book written by Eric Redmond (@coderoshi) and Jim Wilson (@hexlib), part of The Pragmatic Programmers (@pragprog) series, that takes a look at several non-SQL-based database systems.

Cover image of the Seven Databases in Seven Weeks book

Coverage includes PostgreSQL, Riak, Apache HBase, MongoDB, Apache CouchDB, Neo4J and Redis, with plenty of code and architecture examples. Also covered are relational vs. key-value, columnar and document-based systems, among others.

    The details: Seven Databases in Seven Weeks
    Paperback: 352 pages
    Publisher: Pragmatic Bookshelf (May 18, 2012)
    Language: English
    ISBN-10: 1934356921
    ISBN-13: 978-1934356920
    Product Dimensions: 7.5 x 0.8 x 9 inches

    Buzzwords (or keywords) include availability, consistency, performance and related themes. Others include MongoDB, Cassandra, Redis, Neo4J, JSON, CouchDB, Hadoop, HBase, Amazon Dynamo, Map Reduce, Riak (Basho) and Postgres along with data models including relational, key value, columnar, document and graph along with big data, little data, cloud and object storage.

While this book is not a how-to tutorial or installation guide, it does give a deep dive into the different databases covered. The benefit is gaining an understanding of what the different databases are good for, their strengths and weaknesses, and where and when to use or choose them for various needs.
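
As a small taste of the data-model differences the book walks through, here is a sketch contrasting key-value (Redis) and document (MongoDB) access from Python. It assumes the redis and pymongo client packages plus local servers are running, so treat it as illustrative.

    # Key-value vs. document models in a few lines (assumes local Redis and
    # MongoDB servers plus the redis and pymongo packages; illustrative).
    import redis
    from pymongo import MongoClient

    # Key-value: opaque values addressed by key; very fast simple lookups.
    kv = redis.Redis(host="localhost", port=6379)
    kv.set("user:42:name", "Ada")
    print(kv.get("user:42:name"))                  # b'Ada'

    # Document: self-describing records that can be queried by field.
    db = MongoClient("localhost", 27017).example
    db.users.insert_one({"_id": 42, "name": "Ada", "langs": ["erlang", "ruby"]})
    print(db.users.find_one({"langs": "ruby"}))    # matches inside the array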

    Look inside seven databases in seven weeks book image
A look inside my copy of Seven Databases in Seven Weeks

Who should read this book includes application developers, programmers, cloud, big data and IT/ICT architects, planners and designers, along with database, server, virtualization and storage professionals. What I like about the book is that it is a great intro and overview with sufficient depth to understand what these different solutions can and cannot do, and when, where and why to use these tools for different situations, all in a quick-read format with plenty of detail.

    Would I recommend buying it: Yes, I bought a copy myself on Amazon.com, get your copy by clicking here.

    Ok, nuff said

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio


    SSD, flash and DRAM, DejaVu or something new?

    StorageIO industry trends cloud, virtualization and big data

Recently I was in Europe for a couple of weeks, including stops at Storage Networking World (SNW) Europe in Frankfurt, StorageExpo Holland, Ceph Day in Amsterdam (object and cloud storage), and Nijkerk, where I delivered two separate 2-day seminars and a 1-day seminar.

Image of Frankfurt train station; image of inside front of ICE train going from Frankfurt to Utrecht

At the recent StorageExpo Holland event in Utrecht, I gave a couple of presentations, one on cloud, virtualization and storage networking trends, the other taking a deeper look at Solid State Devices (SSDs). As in the past, StorageExpo Holland was great, in a fantastic venue with many large exhibits and great attendance, which I heard was over 6,000 people over two days (excluding exhibiting vendors, VARs, analysts, press and bloggers), several times larger than what was seen in Frankfurt at the SNW event.

Image of Ilja Coolen (twitter @iCoolen), who was session host for the SSD presentation in Utrecht; image of the StorageExpo Holland exhibit show floor in Utrecht

Both presentations were very well attended and included lively interactive discussion during and after the sessions. The theme of my second talk was SSD: the question is not if, rather what to use where, how and when, which brings us up to this post.

For those who have been around or using SSD for more than a decade, outside of cell phones, cameras, SD cards or USB thumb drives, SSD probably means DRAM-based with some form of data persistency mechanism. More recently, mention SSD and that implies nand flash-based, either MLC, eMLC or SLC. Some might also think of NVRAM or other forms of SSD, including emerging MRAM, mem-resistors (memristors) or Phase Change Memory (PCM) among others; however, let's stick to nand flash and DRAM for now.

    image of ssd technology evolution

Often in technology what is old can be new, and what is new can be seen as old. If you have seen, experienced or done something before, you will have a sense of DejaVu and it might seem evolutionary. On the other hand, if you have not seen, heard or experienced it, or it has found a new audience, then it can be revolutionary or maybe even an industry first ;).

Technology evolves, gets improved upon, matures, and often goes in cycles of adoption, deployment, refinement, retirement, and so forth. SSD in general has been an on-again, off-again cycle technology for the past several decades, except for the past six to seven years. Normally there is an up cycle tied to different events: servers not being fast enough or affordable, so SSD is used to help address performance woes, or drives and storage systems not being fast enough, and so forth.

Btw, for those of you who think that the current SSD-focused technology (nand flash) is new, it is in fact some 25 years old, still evolving, and far from reaching its full potential in terms of customer deployment opportunities.

    StorageIO industry trends cloud, virtualization and big data

Nand flash memory has helped keep SSD practical for the past several years, riding a curve similar to the one keeping alive the hard disk drives (HDDs) it was supposed to replace: improved reliability, endurance or duty cycle, better annual failure rate (AFR), larger space capacity, lower cost, and enhanced interfaces, packaging, power and functionality.

    Where SSD can be used and options

DRAM, at least for the enterprise, has historically been the main option for SSD-based solutions, combined with some form of data persistency. Data persistency options include battery backup combined with internal HDDs to de-stage information from the DRAM before power is lost. TMS (recently bought by IBM) was one of the early SSD vendors from the DRAM era that made the transition to flash, including being one of the first, many years ago, to combine DRAM as a cache layer over nand flash as a persistency or de-stage layer. If you were not familiar with TMS back then and their capabilities, you might think or believe that some more recent introductions are new and revolutionary, and perhaps they are in their own right, or with enough caveats and qualifiers.

An emerging trend, which for some will be DejaVu, is that of using more DRAM in combination with nand flash SSD.

Oracle is one example of a vendor that IMHO has rather quietly (intentionally or accidentally) done this in its 7000 series storage systems as well as ExaData-based database storage systems. Rest assured they are not alone; in fact, many of the large legacy storage vendors have also piled large amounts of DRAM-based cache into their storage systems. For example, EMC has 2TB of DRAM cache in their VMAX 40K, with similar systems from Fujitsu, HP, HDS, IBM and NetApp (including NetApp's recent acquisition of DRAM-based CacheIQ) among others. This has also prompted the question of whether SSD has been successful in traditional storage arrays, systems or appliances, as some would have you believe not; click here to learn more and cast your vote.

SSD, I/O, memory and storage hierarchy

So is the future in the past? Some would say no, some will say yes; however, IMHO there are lessons to learn and leverage from the past while looking and moving forward.

Early SSDs were essentially RAM disks: a portion of main random access memory (RAM), or what we now call DRAM, set aside as a non-persistent (unless battery backed up) cache or device. Using a device driver, applications could use the RAM disk as though it were a normal storage device. Different vendors sprang up with drivers for various platforms, then disappeared as the need for them was reduced by faster storage systems and interfaces, RAM disks supplied by platform vendors, not to mention SSD devices.
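
The concept survives today: on most Linux systems /dev/shm is a DRAM-backed tmpfs that behaves much like those early RAM disks. A quick sketch, assuming a Linux box where /dev/shm is tmpfs and /var/tmp sits on disk:

    # Quick RAM-disk feel: time a write to DRAM-backed tmpfs (/dev/shm on
    # most Linux systems) vs. a regular on-disk path. Illustrative only.
    import os, time

    payload = os.urandom(64 * 1024 * 1024)        # 64MB of data

    def timed_write(path):
        start = time.perf_counter()
        with open(path, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())                  # push past the page cache
        elapsed = time.perf_counter() - start
        os.remove(path)
        return elapsed

    print(f"tmpfs (RAM disk): {timed_write('/dev/shm/demo.bin'):.3f}s")
    print(f"disk filesystem:  {timed_write('/var/tmp/demo.bin'):.3f}s")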

Oh, for you tech trivia types, there were also database machines from the late 80s, such as those from Britton Lee, that would offload your database processing functions to a specialized appliance. Sound like Oracle ExaData I, II or III to anybody?

    Image of Oracle ExaData storage system

Ok, so we have seen this movie before. No worries: old movies or shows get remade, and unless you are nostalgic or cling to the past, sure, some of the remakes are duds, however many can be quite good.

The same goes for the remake of some of what we are seeing now. Sure, there is a generation that does not know or care about the past; it is full speed ahead, leveraging whatever will get them there.

Thus we are seeing in-memory databases again; some of you may remember the original series (pick your generation, platform, tool and technology), with each variation getting better. With 64-bit processors, 128-bit and beyond file system addressing, not to mention the ability for more DRAM to be accessed directly or via memory address extension, combined with memory data footprint reduction or compression, there is more space to put things (e.g. no such thing as a data or information recession).
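
For a trivial taste of the in-memory database idea, the SQLite bundled with Python can run a full SQL database entirely in DRAM; a toy illustration of the concept, not a stand-in for the enterprise in-memory products being remade.

    # An in-memory database in miniature: SQLite's ":memory:" target keeps
    # all tables and indexes in DRAM; nothing persists after the process.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE iops (device TEXT, reads INTEGER)")
    db.executemany("INSERT INTO iops VALUES (?, ?)",
                   [("ssd0", 50000), ("hdd0", 180)])
    print(db.execute("SELECT device FROM iops WHERE reads > 1000").fetchall())
    # [('ssd0',)] -- fast while it lasts, but the data still needs de-staging
    # to persistent storage (flash SSD or HDD) if it must survive.
    db.close()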

Let's also keep in mind that the best I/O is the I/O that you do not have to do, and that SSD, which is an extension of the memory map, plays by the same rules of real estate: location matters.

Thus, here we go again for some of you (DejaVu), while others should get ready for a new and exciting ride (new and revolutionary). We are back to the future with in-memory databases, which for a time will take some pressure off underlying I/O systems, until they once again outgrow server memory addressing limits (or IT budgets).

However, do not fall into a false sense of security; there is no such thing as a data or information recession. As sure as the sun rises in the east and sets in the west, sooner or later those I/Os that were or are being kept in memory will need to be de-staged to persistent storage: nand flash SSD, HDD, or somewhere down the road PCM, MRAM and more.

    StorageIO industry trends cloud, virtualization and big data

There is another trend: with more I/Os being cached, reads are moving to where they should be resolved, which is closer to the application, higher up in the memory and I/O pyramid or hierarchy (shown above).

Thus, we could see a shift over time to more writes and ugly I/Os being sent down to the storage systems. Keep in mind that any cache historically provides temporal relief; the question is how long that temporal relief lasts, or when the next new and revolutionary or DejaVu technology shows up.

    Ok, go have fun now, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio


Who will be the winner of Oracle's $10 million challenge?

    Oracle 10 million dollar challenge ad image

In case you missed it, Oracle has a ten million dollar challenge (here, here and here) to prove that their servers and database software technologies are 5 times faster than IBM's.

Open to up to 10 winners among U.S. Fortune 1000 companies running an Oracle 11g data warehouse on an IBM Power system. The offer expires August 31, 2012, with configuration terms applying. See this URL for the official rules: https://oracle.com/IBMchallenge

Click here to view the entry form, or click on the form below.

    Oracle 10 million dollar challenge entry form image

Taking a step back for a moment, if you forgot or had not heard, Oracle earlier this summer had its hands slapped by the US Better Business Bureau (BBB) National Advertising Division (NAD) over performance claims and ads. IBM complained to the BBB that Oracle was making unfair marketing claims about its servers and database products (read more here).

Not one to miss a beat, bit or byte of data, not to mention dollars, Oracle has run ads in newspapers and other venues for the Oracle IBM challenge, with the winner receiving $10,000,000.00 USD (details here).

    Oracle exadata servers image

This begs the question: who wins? The company or entity that actually can stand up and meet the challenge? How about Oracle: do they win if enough people see, hear and talk (or complain) about the ads and challenges? What about the cost? How will Oracle cover that, or is it simply a drop in the bucket compared to an even larger amount of dollars, potentially valued in the billions (e.g. servers, storage, software, services)?

Now for some fun, let's use an inflation calculator with 1974 dollars, as that is when the TV show The Six Million Dollar Man made its debut. If you do not know, that is a TV show where an injured government employee (Steve Austin), played by actor Lee Majors, was rebuilt using bionics in order to be faster and stronger with the then-current technology (ok, TV technology). Using the inflation calculator, the 1974 six million dollar man and machine would cost about $27,882,839.76 in 2012 USD (a 364.7% increase).

Working the other direction, today's faster, stronger (as Oracle is calling it) machine and associated staff for the $10,000,000 challenge prize award would have cost $2,151,861.17 in 1974 dollars. Note that an amount of compute processing, storage performance and capacity, networking capability and software functionality equal to what is available today would have cost far more in 1974 than the inflation calculator shows; for that, we would need something like a technology inflation (or improvement) calculator.
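
The arithmetic behind those two figures is just one inflation factor applied in both directions; the factor below is derived from the $6M-to-$27.88M conversion quoted above.

    # Inflation factor implied by $6M (1974) -> $27,882,839.76 (2012),
    # applied in reverse to Oracle's $10M prize (figures from above).
    factor = 27_882_839.76 / 6_000_000     # ~4.647, i.e. the 364.7% increase
    prize_2012 = 10_000_000

    print(f"inflation factor: {factor:.4f}")
    print(f"$10M (2012) ~= ${prize_2012 / factor:,.2f} (1974)")
    # -> about $2,151,861.17 in 1974 dollars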

Learn more about the Oracle challenge here, here and here, as well as the NAD announcement here, and the six million dollar man here.

    Ok, nuff said for now.

    Cheers Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio


    Are large storage arrays dead at the hands of SSD?

    Storage I/O trends

An industry trends and perspectives piece.

Are large storage arrays dead at the hands of SSD? Short answer: no, not yet.
There is still a place for traditional storage arrays or appliances, particularly those with extensive features, functionality and reliability, availability and serviceability (RAS). In other words, there is still a place for large (and small) storage arrays or appliances, including those with SSDs.

Is there a place for newer flash SSD storage systems, appliances and architectures? Yes
Similar to how traditional midrange storage arrays or appliances have found their roles vs. traditional higher-end, so-called enterprise arrays. Think, as examples, EMC CLARiiON/VNX, HP EVA/P6000, HDS AMS/HUS, NetApp FAS, IBM DS5000 or IBM V7000 among others vs. EMC Symmetrix/DMX/VMAX, HP P10000/3PAR, HDS VSP/USP or IBM DS8000. In addition to traditional enterprise or high-end storage systems and midrange (also known as modular) systems, there are also specialized appliances or targets, such as those for backup/restore and archiving. Also do not forget the I/O performance SSD appliances like those from TMS among others that have been around for a while.

Is the role of large storage systems changing or evolving? Yes
Given their scale and ability to do large amounts of work in a dense footprint, for some environments the role of these systems is still mission-critical tier 1 application and data support. For other environments, their role continues to evolve, being used for high-density tier 2 bulk or even near-line storage accessed online at scale.

    Storage I/O trends

Does this mean there is competition between the old and new systems? Yes
In some circumstances, as we have seen already with SSD solutions. Some position them as competitors or replacements, while others position them as complementary. For example, in the PCIe flash SSD card segment, EMC VFCache is positioned as complementing Dell, EMC, HDS, HP, IBM, NetApp, Oracle or other vendors' storage, vs. FusionIO, who positions its cards as replacements for the above and others. Another scenario is how some SSD vendors have positioned, and continue to position, their all-flash SSD arrays (using either drives or PCIe cards) to complement and coexist with other storage systems in an environment (e.g. data center level tiering) vs. as a replacement. Also keep in mind SSD solutions that support a mix of flash devices and traditional HDDs for capacity and cost savings or cloud access in the same solution.

Does this mean that the industry has adopted all-SSD appliances as the state of the art?
Avoid confusing industry adoption, or talk, with industry and customer deployment. They are similar; however, one is focused on what the industry talks about or discusses as the state of the art or the future, while the other is what customers are actually doing. Certainly some of the new flash SSD appliance and storage startups such as SolidFire, NexGen, Violin and Whiptail, or veteran TMS among others, have promising futures, some of which may actually be in play with the current SSD market shakeout and consolidation.

Does that mean everybody is going SSD?
SSD customer adoption and deployment continue to grow; however, so too does the deployment of high-capacity HDDs.

    Storage I/O trends

Do SSDs need HDDs, do HDDs need SSDs? Yes
Granted, there are environments whose needs can be addressed entirely by one or the other. However, at least near term, there is a very strong market for tiering and a mix of SSD, some fast HDDs and lots of high-capacity HDDs to meet various needs including performance, availability, capacity, energy and economics. After all, there is no such thing as a data or information recession, yet budgets are tight or being reduced. Likewise, people and data are living longer.
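
To make the tiering economics concrete, here is a toy model of a blended configuration. The per-GB prices and per-GB IOPS densities are hypothetical round numbers for illustration, not quotes from any vendor.

    # Toy blended-tier model: cost and random IOPS for an SSD + HDD mix.
    # Prices ($/GB) and IOPS densities are hypothetical round numbers.
    tiers = [
        # (name,                 capacity_gb, dollars_per_gb, iops_per_gb)
        ("flash SSD",                  5_000,           5.00,        40.0),
        ("fast 15K HDD",              20_000,           0.80,         0.05),
        ("capacity SAS-NL HDD",      200_000,           0.10,         0.005),
    ]

    total_gb = sum(gb for _, gb, _, _ in tiers)
    total_cost = sum(gb * price for _, gb, price, _ in tiers)
    total_iops = sum(gb * iops for _, gb, _, iops in tiers)

    print(f"capacity ~{total_gb / 1000:.0f} TB, cost ${total_cost:,.0f} "
          f"(~${total_cost / total_gb:.2f}/GB), ~{total_iops:,.0f} IOPS")
    # A small slice of SSD carries most of the IOPS while capacity HDDs
    # carry most of the gigabytes -- the tiering argument in one line.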

What does this mean?
If there were no such thing as a data recession and budgets were a non-issue, perhaps everything could move to all-flash SSD storage systems. However, we also know that people and data are living longer, along with changing data life-cycle patterns. There is also the need for performance to close the traditional data center gap between I/O performance and space capacity, and its bottlenecks, as well as to store and keep data longer.

There will continue to be a need for a mix of high capacity and high performance. More I/O will continue to gravitate towards the I/O appliances; however, more data will settle in for longer-term retention and continued access as data life-cycles continue to evolve. Watch for more SSD and cache in the large systems, along with higher-density SAS-NL (SAS Near Line, e.g. high-capacity) type drives appearing in those systems.

If you like shiny new toys or technologies (SNTs) to buy, sell or talk about, there will be plenty of those to continue industry adoption, while for those who are focused on industry deployment, there will be a mix of the new and continued evolution of implementations.

Related links
Industry adoption vs. industry deployment, is there a difference?
Industry trend: People plus data are aging and living longer
No Such Thing as an Information Recession
Changing Lifecycles & Data Footprint Reduction
What is the best kind of IO? The one you do not have to do
Is SSD dead? No, however some vendors might be
Speaking of speeding up business with SSD storage
Are Hard Disk Drives (HDD's) getting too big?
IT and storage economics 101, supply and demand
Has SSD put Hard Disk Drives (HDD's) On Endangered Species List?
Why SSD based arrays and storage appliances can be a good idea (Part I)
Researchers and marketers don't agree on future of nand flash SSD
EMC VFCache respinning SSD and intelligent caching (Part I)
SSD options for Virtual (and Physical) Environments Part I: Spinning up to speed on SSD

    Ok, nuff said for now

    Cheers Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio
