Can RAID extend the life of nand flash SSD?

Storage I/O trends

Can RAID extend nand flash SSD life?

IMHO, the short answer is YES, under some circumstances.

There is a myth and some FUD that RAID (Redundant Array of Independent Disks) can shorten the life of nand flash SSDs (Solid State Devices) vs. HDDs (Hard Disk Drives) due to extra write operations. The reality is that, depending on the RAID level, implementation, configuration and other factors, nand flash SSD life can actually be extended, as I discuss in this video.

Video

Nand flash SSD cells and wear

First, there is a myth that because nand flash SSDs do not have moving parts like hard disk drives (HDDs), they do not wear out or break. In reality, nand flash by its nature wears out with write usage. This is due to how it stores data in cells that have a rated number of program/erase (P/E) cycles, which varies by type of medium. For example, Single Level Cell (SLC) flash has a longer P/E life than Multi-Level Cell (MLC) and enterprise MLC (eMLC) flash, which store multiple bits per cell.

There are a number of factors that contribute to nand flash wear, also known as duty cycle or durability, tied to P/E cycles. For example, some storage systems or controllers do a better job at the lower-level flash translation layer (FTL) as well as in their controllers, firmware, DRAM-based caching and I/O optimizations such as write ordering or grouping.
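To make the P/E discussion concrete, here is a back-of-envelope sketch of how rated P/E cycles, write amplification and daily write volume combine into an endurance estimate. All names and numbers below are my own illustrative assumptions, not any vendor's datasheet figures.

```python
# Back-of-envelope SSD endurance estimate (illustrative assumptions,
# not from any specific vendor datasheet).

def ssd_lifetime_years(capacity_gb, pe_cycles, daily_writes_gb,
                       write_amplification=2.0, over_provision=0.1):
    """Estimate drive life in years from rated P/E cycles.

    Total raw writes the nand can absorb = capacity * (1 + OP) * P/E.
    Host writes are inflated by the write amplification factor (WAF)
    introduced by the FTL, garbage collection and small random writes.
    """
    raw_write_budget_gb = capacity_gb * (1 + over_provision) * pe_cycles
    nand_writes_per_day = daily_writes_gb * write_amplification
    return raw_write_budget_gb / nand_writes_per_day / 365

# SLC-class endurance (~100K P/E) vs. MLC-class (~5K P/E), same workload
print(round(ssd_lifetime_years(200, 100_000, 500), 1))  # SLC-ish drive
print(round(ssd_lifetime_years(200, 5_000, 500), 1))    # MLC-ish drive
```

The point of the sketch: the same 500 GB/day workload gives wildly different lifetimes depending on the medium's P/E rating, and anything a controller does to lower the write amplification factor directly extends drive life.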

Now what about this RAID and SSD thing?

Ok, first as a recap, keep in mind that there are many RAID levels along with variations and enhancements, and that they can be implemented in different places, ranging from software to hardware, and from adapters to controllers to storage systems.

In the case of RAID 1 or mirroring, just like replication or other one-to-one or one-to-many copy operations, a write to one device is echoed to another. In the case of RAID 5, data and parity are striped across the drives, with the parity rotated across all drives in an equal manner.

Some FUD, myths or misunderstandings come into play because not all RAID 5 implementations, as an example, are the same. Some do a better job of buffering or caching data in battery-protected mirrored DRAM until a full-stripe write can occur, or if needed, a partial write.

Another attribute is the chunk or shard size (how much data is sent to each drive member) along with the stripe width (how many drives). Some systems have narrow stripes of say 3+1, 4+1 or 5+1, while others can be 14+1, 15+1 or wider. Thus, data can be written across a larger number of drives, reducing the P/E consumption of any single drive, depending on implementation.
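As a rough illustration of the stripe-width point, here is how per-drive write load falls as the stripe widens. This assumes ideal full-stripe writes; real controllers vary, and partial-stripe writes cost more.

```python
# Illustrative RAID write-distribution math (assumes ideal full-stripe
# writes; actual controller behavior and partial-stripe writes differ).

def per_drive_writes_gb(host_writes_gb, data_drives, parity_drives):
    """Writes landing on each member drive for full-stripe writes.

    Total writes include parity overhead of (D+P)/D; with parity
    rotated evenly, every member drive absorbs an equal share.
    """
    total = host_writes_gb * (data_drives + parity_drives) / data_drives
    return total / (data_drives + parity_drives)

# 1 TB of host writes: narrow 3+1 RAID 5 vs. wide 15+1 RAID 5
print(per_drive_writes_gb(1000, 3, 1))   # ~333 GB per drive
print(per_drive_writes_gb(1000, 15, 1))  # ~67 GB per drive
```

Under these assumptions, the wider group spreads the same host workload so each drive sees roughly a fifth of the writes, and hence a fifth of the P/E consumption, of the narrow group.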

How about RAID 6 (dual parity)?

Same thing: it is a matter of how good the implementation is, how write gathering is done and so forth.

What about RAID wearing out nand flash SSD?

While it is possible that this has occurred or can occur, depending on the type of RAID implementation, lack of caching or optimization, configuration, type of SSD, RAID level and other things, in general I will say myth busted.

Want some proof?

I could go through a long technical proof point, citing lots of facts, figures and experts, leaving you all silenced and dazed similar to the students listening to Ben Stein in Ferris Bueller's Day Off (click here to see what I mean) asking "Bueller? Bueller? Anybody?"

Ben Stein via https://nostagjicmoviesandthings.blogspot.com

How about some simple SSD and storage math?

On a very conservative basis, my estimate is that around 250PB of nand flash SSD capacity has been shipped and installed on a revenue basis, attached to or in storage systems and appliances. That combines what Dell + DotHill + EMC + Fujitsu + HDS + HP + IBM (including TMS) + NEC + NetApp + Oracle among other legacy vendors ship, along with the new all-flash as well as hybrid vendors (e.g. Cloudbyte, FusionIO (via their NexGen acquisition), Kaminario, Greenbytes, Nutanix or Nimble, Purestorage, Starboard or Solidfire, Tegile or Tintri, Violin or Whiptail among others).

It is also a safe assumption, based on how customers configure and use those and other storage systems, that some form of RAID is involved. Thus, if things were as bad as some researchers, vendors and their pundits have made them out to be, wouldn't we be hearing of those issues?
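To put that installed-base argument in perspective, here is a hypothetical back-of-envelope calculation. Every number is my assumption for illustration only.

```python
# Rough "where are the bodies?" math behind the myth-busting argument.
# All figures are assumptions for illustration, not measured data.

installed_pb = 250            # estimated nand SSD capacity in arrays
avg_drive_gb = 200            # assumed average SSD capacity per drive
drives = installed_pb * 1_000_000 / avg_drive_gb

assumed_wearout_rate = 0.10   # suppose RAID truly burned out 10%/year
wearouts_per_year = drives * assumed_wearout_rate
print(f"{drives:,.0f} drives -> {wearouts_per_year:,.0f} wear-outs/year")
```

Under those assumptions, RAID-induced wear-out on anything like that scale would mean on the order of a hundred thousand drive failures a year, which would be very hard for the industry to keep quiet.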

Is it just a RAID 5 problem and that RAID 6 magically corrects the problem?

Well, that depends on apples to apples vs. apples to oranges comparisons.

For example, comparing a 14+2 (16 drive) RAID 6 to, say, a 3+1 (4 drive) RAID 5 is not a fair comparison. Granted, it is a handy one if you are a vendor that supports wider RAID groups, stripes and ranks vs. those who do not. However, also keep in mind that some legacy vendors support wide stripes and RAID groups as well.

So in some cases the magic is not in the RAID level, but rather in the implementation and how it is configured (or not).

Video

Watch this TechTarget produced video recorded live while I was at EMCworld 2013 to learn more.

Otherwise, ok, nuff said (for now).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Have SSDs been unsuccessful with storage arrays (with poll)?

Storage I/O Industry Trends and Perspectives

I hear people talking about how Solid State Devices (SSDs) have not been successful with or for vendors of storage arrays, particular legacy storage systems. Some people have also asserted that large storage arrays are dead at the hands of new purpose-built SSD appliances or storage systems (read more here).

As a reference, legacy storage systems include those from EMC (VMAX and VNX), IBM (DS8000, DCS3700, XIV, and V7000), and NetApp FAS along with those from Dell, Fujitsu, HDS, HP, NEC and Oracle among others.

Granted, EMC has launched new SSD-based solutions in addition to buying startup XtremIO (aka Project X), and IBM bought SSD industry veteran TMS. IMHO, neither of those actions signals an early retirement for their legacy storage solutions; instead they open up new markets, giving customers more options for addressing data center and I/O performance challenges. Keep in mind that the best I/O is the one that you do not have to do, with the second best being the one with the least impact on applications in a cost-effective way.

SSD, IO, memory and storage hierarchy

Sometimes I even hear people citing or using some other person or source to make their assertions sound authoritative. You know the game: according to XYZ, or ABC said blah blah blah. Of course, if you say or repeat something often enough, or hear it again and again, it can become self-convincing (e.g. industry adoption vs. customer deployments). Likewise, the more degrees of separation that exist between you and the information you get, the more it can change from what it originally was.

So what about it, has SSD not been successful for legacy storage system vendors and is the only place that SSD has had success is with startups or non-array based solutions?

While there have been some storage systems (arrays and appliances) that may not perform up to their claimed capabilities due to various internal architecture or implementation bottlenecks, for the most part the large vendors including EMC, HP, HDS, IBM, NetApp and Oracle have done very well shipping SSD drives in their solutions. Likewise, some of the clean-sheet new-design startup systems, as well as some of the startups with hybrid solutions combining HDDs and SSDs, have done well while others are still emerging.

Where SSD can be used and options

This could also be an example where myth becomes reality based on industry adoption vs. customer deployment. What this means is that the myth is that it is the startups that are having success vs. the legacy vendors from an industry adoption conversation standpoint and thus believed by some.

On the other hand, the myth is that vendors such as EMC or NetApp have not had success with their arrays and SSD, yet their customer deployments prove otherwise. There is also a myth that only PCIe-based SSD can be of value and that drive-based SSDs are not worth using, and I have a good idea where that myth comes from.

IMHO, it depends; however, it is safe to say from what I have seen directly that some vendors of storage arrays, including so-called legacy systems, have had very good success with SSD. Likewise, I have seen some startups do ok with their new clean-sheet designs, including EMC's Project X. Oh, and at least for now I am not a believer that with the all-SSD-based Project X over at EMC, the venerable VMAX (formerly known as DMX, and before that Symmetrix) has finally hit the end of the line. Rather, those systems will be positioned and play to different markets for some time yet.

Over at IBM, I don't think the DS8000, XIV, V7000 and SVC folks are winding things down now that they bought SSD vendor TMS, which has SSD appliances and PCIe cards. Rest assured, there has been success by PCIe flash card vendors, both as targets (FusionIO) and as cache or hybrid cache-and-target systems such as those from Intel, LSI, Micron, and TMS (now IBM) among others. Oh, and if you have not noticed, check out what QLogic, Emulex and some of the other traditional HBA vendors have done with and around SSD caching.

So where does the FUD that storage systems have not had success with SSD come from?

I suspect it comes from those who would rather not see or hear about those who have had success, as that takes attention away from them or their markets. In other words, using Fear, Uncertainty and Doubt (FUD) or some community peer pressure, there is a belief by some that if you hear enough times that something is dead or not of benefit, you will look at the alternatives.

Care to guess what the preferred alternative is for some? If you guessed a PCIe card or SSD based appliance from your favorite startup that would be a fair assumption.

On the other hand, my educated guess (ok, it's much more informed than a guess ;) ) is that if you ask a vendor such as EMC or NetApp, they would disagree, while at the same time articulating the benefits of different approaches and tools. Likewise, my educated guess is that if you ask some others, they will say mixed things, and of course if you talk with the pure plays, take a wild yet educated guess what they will say.

Here is my point.

SSD, DRAM, PCM and storage adoption timeline

The SSD market, including DRAM, nand flash (SLC or MLC or any other xLC), emerging PCM and future MRAM among other technologies and packaging options, is still in its relative infancy. Yes, I know there has been significant industry adoption and many early customer deployments; however, talking with IT organizations of all sizes as well as with vendors and VARs, customer deployment of SSD is far from reaching its full potential, meaning a bright future.

Simply putting an SSD, card or drive into a solution does not guarantee results.

Likewise having a new architecture does not guarantee things will be faster.

Fast storage systems need fast devices (HDDs, HHDDs and SSDs) along with fast interfaces to connect with fast servers. Put a fast HDD, HHDD or SSD into a storage system that has bottlenecks (hardware, software, architectural design) and you may not see the full potential of the technology. Likewise, put fast ports or interfaces on a storage system that has fast devices but also a bottleneck in its controller or system architecture, and you will not realize the full potential of that solution.
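That bottleneck point can be sketched in a few lines; the component names and throughput figures below are hypothetical, purely to illustrate that end-to-end speed is set by the slowest stage in the path.

```python
# A solution is only as fast as its slowest stage. Stage names and
# MB/s figures here are hypothetical examples, not real products.

def effective_throughput_mbs(stages):
    """End-to-end streaming throughput is bounded by the slowest stage."""
    return min(stages.values())

system = {
    "host_ports": 8000,   # MB/s of front-end interfaces
    "controller": 1200,   # MB/s the controller/software can move
    "ssd_backend": 6000,  # MB/s aggregate of the SSDs
}
print(effective_throughput_mbs(system))  # prints 1200: controller-bound
```

Swap fast SSDs into this hypothetical system and nothing improves until the controller stage is addressed, which is exactly the point about fast devices behind slow architectures.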

This is not unique to legacy or traditional storage systems, arrays or appliances as it is also the case with new clean sheet designs.

There are many new solutions that are or should be as fast as their touted marketing stories present; however, just because something looks impressive in a YouTube video, slide deck or WebEx does not mean it will be fast in your environment. Some of these new-design SSD-based solutions will displace some legacy storage systems or arrays, while many others will find new opportunities. Similar to how previous-generation SSD storage appliances found roles complementing traditional storage systems, so too will many of these new-generation products.

What this all means is to navigate your way through the various marketing and architecture debates, benchmark battles, claims and counterclaims to understand what fits your needs and requirements.

StorageIO industry trends cloud, virtualization and big data

What say you?

Ok, nuff said

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Are large storage arrays dead at the hands of SSD?

Storage I/O trends

An industry trends and perspective.


Are large storage arrays dead at the hands of SSD? Short answer NO not yet.
There is still a place for traditional storage arrays or appliances particular those with extensive features, functionality and reliability availability serviceability (RAS). In other words, there is still a place for large (and small) storage arrays or appliances including those with SSDs.

Is there a place for newer flash SSD storage systems, appliances and architectures? Yes
Similar to how traditional midrange storage arrays or appliances have found their roles vs. traditional higher-end, so-called enterprise arrays. Think as examples EMC CLARiiON/VNX, HP EVA/P6000, HDS AMS/HUS, NetApp FAS, IBM DS5000 or IBM V7000 among others vs. EMC Symmetrix/DMX/VMAX, HP P10000/3PAR, HDS VSP/USP or IBM DS8000. In addition to traditional enterprise or high-end storage systems and midrange (also known as modular) systems, there are also specialized appliances or targets, such as for backup/restore and archiving. Also do not forget the I/O performance SSD appliances, like those from TMS among others, that have been around for a while.

Is the role of large storage systems changing or evolving? Yes
Given their scale and ability to do large amounts of work in a dense footprint, for some the role of these systems is still mission critical tier 1 application and data support. For other environments, their role continues to evolve being used for high-density tier 2 bulk or even near-line storage for on-line access at scale.


Does this mean there is competition between the old and new systems? Yes
In some circumstances, as we have seen already with SSD solutions. Some position them as competitors or replacements, while others position them as complementary. For example, in the PCIe flash SSD card segment, EMC VFCache is positioned as complementing storage from Dell, EMC, HDS, HP, IBM, NetApp, Oracle and others, vs. FusionIO, which positions itself as a replacement for the above and others. Another scenario is how some SSD vendors have positioned and continue to position their all-flash SSD arrays, using either drives or PCIe cards, to complement and coexist with other storage systems in an environment (e.g. data center level tiering) vs. as a replacement. Also keep in mind SSD solutions that support a mix of flash devices and traditional HDDs for capacity and cost savings or cloud access in the same solution.

Does this mean that the industry has adopted all-SSD appliances as the state of the art?
Avoid confusing industry adoption or talk with industry and customer deployment. They are similar, however one is focused on what the industry talks about or discusses as state of art or the future while the other is what customers are doing. Certainly some of the new flash SSD appliance and storage startups such as Solidfire, Nexgen, Violin, Whiptail or veteran TMS among others have promising futures, some of which may actually be in play with the current SSD market shakeout and consolidation.

Does that mean everybody is going SSD?
SSD customer adoption and deployment continues to grow, however so too does the deployment of high-capacity HDDs.


Do SSDs need HDDs, do HDDs need SSDs? Yes
Granted, there are environments whose needs can be addressed by all of one or the other. However, at least near term, there is a very strong market for tiering and a mix of SSDs, some fast HDDs and lots of high-capacity HDDs to meet various needs including performance, availability, capacity, energy and economics. After all, there is no such thing as a data or information recession, yet budgets are tight or being reduced. Likewise, people and data are living longer.

What does this mean?
If there were no such thing as a data recession and budgets were a non-issue, perhaps everything could move to all-flash SSD storage systems. However, we also know that people and data are living longer, along with changing data life-cycle patterns. There is also the need for performance to close the traditional data center I/O performance to space capacity gap and bottlenecks, as well as to store and keep data longer.

There will continue to be a need for a mix of high-capacity and high performance. More IO will continue to gravitate towards the IO appliances, however more data will settle in for longer-term retention and continued access as data life-cycle continue to evolve. Watch for more SSD and cache in the large systems, along with higher density SAS-NL (SAS Near Line e.g. high capacity) type drives appearing in those systems.

If you like shiny new toys or technologies (SNTs) to buy, sell or talk about, there will be plenty of those to continue industry adoption, while for those who are focused on industry deployment, there will be a mix of new products and continued evolution of existing implementations.

Related links
Industry adoption vs. industry deployment, is there a difference?

Industry trend: People plus data are aging and living longer

No Such Thing as an Information Recession

Changing Lifecycles & Data Footprint Reduction
What is the best kind of IO? The one you do not have to do
Is SSD dead? No, however some vendors might be
Speaking of speeding up business with SSD storage
Are Hard Disk Drives (HDD’s) getting too big?
IT and storage economics 101, supply and demand
Has SSD put Hard Disk Drives (HDD’s) On Endangered Species List?
Why SSD based arrays and storage appliances can be a good idea (Part I)
Researchers and marketers don’t agree on future of nand flash SSD
EMC VFCache respinning SSD and intelligent caching (Part I)
SSD options for Virtual (and Physical) Environments Part I: Spinning up to speed on SSD

Ok, nuff said for now

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Why SSD based arrays and storage appliances can be a good idea (Part II)

This is the second of a two-part post about why storage arrays and appliances with SSD drives can be a good idea; here is a link to the first post.

So again, why would putting drive form factor SSDs into existing storage systems, arrays and appliances be a bad idea?

Benefits of SSD drive in storage systems, arrays and appliances:

  • Familiarity with customers who buy and use these devices
  • Reduces time to market enabling customers to innovate via deployment
  • Establish comfort and confidence with SSD technology for customers
  • Investment protection of currently installed technology (hardware and software)
  • Interoperability with existing interfaces, infrastructure, tools and policies
  • Reliability, availability and serviceability (RAS) depending on vendor implementation
  • Features and functionality (replicate, snapshot, policy, tiering, application integration)
  • Known entity in terms of hardware, software, firmware and microcode (good or bad)
  • Share SSD technology across more servers or accessing applications
  • Good performance assuming no controller, hardware or software bottlenecks
  • Wear leveling and other SSD flash management if implemented
  • Can end performance bottlenecks if backend (drives) are a problem
  • Can coexist with or be complemented by server-based SSD caching

Note that the mere presence of SSD drives in a storage system, array or appliance does not guarantee the above items, nor that they are enabled to their full potential. Different vendors and products implement SSD drive support to various degrees, so look beyond the feature/functionality check box. Dig in and understand how extensive and robust the SSD implementation is to meet your specific requirements.

Caveats of SSD drives in storage systems, arrays and appliances:

  • May not use full performance potential of nand flash SLC technology
  • Latency can be an issue for those who need extreme speed or performance
  • May not be the most innovative newest technology on the block
  • Fun for startup vendors, marketers and their fans to poke fun at
  • Not all vendors add value or optimization for endurance of drive SSD
  • Seen as not being technology advanced vs. legacy or mature systems

Note that different vendors will have various performance characteristics, some good for IOPs, others for bandwidth or throughput while others for latency or capacity. Look at different products to see how they will vary to meet your particular needs.

Cost comparisons are tricky. SSD in HDD form factors certainly cost more than raw flash dies, however PCIe cards and FTL (flash translation layer) controllers also cost more than flash chips by themselves. In other words, apples to apples comparisons are needed. In the future, ideally the baseboard or motherboard vendors will revise the layout to support nand flash (or its replacement) with DRAM DIMM type modules along with associated FTL and BIOS to handle the flash program/erase cycles (P/E) and wear leveling management, something that DRAM does not have to encounter. While that provides great location or locality of reference (figure 1), it is also a more complex approach that takes time and industry cooperation.
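As a hedged sketch of what an apples-to-apples comparison might look like, compare complete packaged devices rather than raw dies against finished products. All prices below are made up for illustration.

```python
# Illustrative $/GB comparison across SSD packaging options.
# Prices and capacities are fabricated examples, not market data.

def cost_per_gb(flash_cost, packaging_cost, usable_gb):
    """Total packaged cost (dies plus FTL controller, DRAM, board,
    firmware) divided by usable capacity after over-provisioning."""
    return (flash_cost + packaging_cost) / usable_gb

raw_die   = cost_per_gb(flash_cost=400, packaging_cost=0,   usable_gb=256)
sas_drive = cost_per_gb(flash_cost=400, packaging_cost=150, usable_gb=200)
pcie_card = cost_per_gb(flash_cost=400, packaging_cost=350, usable_gb=200)

for name, c in [("raw die", raw_die), ("SAS SSD", sas_drive),
                ("PCIe card", pcie_card)]:
    print(f"{name}: ${c:.2f}/GB")
```

The takeaway matches the paragraph above: raw flash dies always look cheaper per GB, so the fair comparison is between complete devices that each carry their own FTL, packaging and usable-capacity overheads.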

Locality of reference for memory and storage
Figure 1: Locality of reference for memory and storage

Certainly, for best performance, just like in realty, location matters and thus locality of reference comes into play. That is, put the data as close to the server as possible; however, when sharing is needed, a different approach or a companion technique is required.

Here are some general thoughts about SSD:

  • Some customers and organizations get the value and role of SSD
  • Some see where SSD can replace HDDs, others see where it complements them
  • Yet others are seeing the potential, however are moving cautiously
  • For many environments better than current performance is good enough
  • Environments with the need for speed need every bit of performance they can get
  • Storage systems and arrays or appliances continue to evolve including the media they use
  • Simply looking at how some storage arrays, systems and appliances have evolved, you can get an idea of how they might look in the future, which could include not only SAS as a backend or target, but also PCIe. After all, it was not that long ago that backend drive connections went from proprietary to open parallel SCSI or SSA, to Fibre Channel loop (or switched), to SAS.
  • Engineers and marketers tend to gravitate to newer products and nand technology, which is good, as we need continued innovation on that front.
  • Customers and business people tend to gravitate towards deriving greatest value out of what is there for as long as possible.
  • Of course, both of the latter two points are not always the case and can be flip flopped.
  • Ultrahigh end environments and corner case applications will continue to push the limits and are target markets for some of the newer products and vendors.
  • Likewise, enterprise, mid market and other mainstream environments (outside of their corner case scenarios) will continue to push known technology to its limits as long as they can derive some business benefit value.

While not perfect, SSDs in an HDD form factor with a SAS or SATA interface, properly integrated by vendors into storage systems (or arrays or appliances), are a good fit for many environments today. Likewise, for some environments, new from-the-ground-up SSD-based solutions that leverage flash DIMMs, daughter cards or PCIe flash cards are a fit. So too are PCIe flash cards, either as a target, or as cache to complement storage systems (arrays and appliances). Certainly, drive slots in arrays take up space for SSDs; however, so too does occupying PCIe slots, particularly in high-density servers that require every available socket and slot for compute and DRAM memory. Thus, there are pros and cons, features and benefits of various approaches, and which is best will depend on your needs and perhaps preferences, which may or may not be binary.

I agree that for some applications and solutions, non-drive form factor SSDs make sense, while in others, compatibility has its benefits. Yet in other situations, nand flash such as SLC combined with an HDD and DRAM, tightly integrated as in my Momentus XT HHDD, is good for laptops, however probably not a good fit for the enterprise yet. Thus, SSD options and placements are not binary; of course, sometimes opinions and perspectives will be.

For some situations, PCIe-based cards in servers or appliances make sense, either as a target or as cache. Likewise, for other scenarios, drive-format SSDs make sense in servers and storage systems, appliances, arrays or other solutions. Thus, while all of those approaches are used for storing binary digital data, the decision of what to use when and where often will not be binary, unless your approach is to use one tool or technique for everything.

Here are some related links to learn more about SSD, where and when to use what:
Why SSD based arrays and storage appliances can be a good idea (Part I)
IT and storage economics 101, supply and demand
Researchers and marketers dont agree on future of nand flash SSD
Speaking of speeding up business with SSD storage
EMC VFCache respinning SSD and intelligent caching (Part I)
EMC VFCache respinning SSD and intelligent caching (Part II)
SSD options for Virtual (and Physical) Environments: Part I Spinning up to speed on SSD
SSD options for Virtual (and Physical) Environments, Part II: The call to duty, SSD endurance
SSD options for Virtual (and Physical) Environments Part III: What type of SSD is best for you?

Ok, nuff said for now.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Why SSD based arrays and storage appliances can be a good idea (Part I)

This is the first of a two-part series, you can read part II here.

Robin Harris (aka @storagemojo) recently asked in a blog post whether solid state devices (SSDs) using a SAS or SATA interface in traditional hard disk drive (HDD) form factors are a bad idea in storage arrays (e.g. storage systems or appliances); he thinks they are. My opinion is that, as with many things about storing, processing or moving binary digital data (e.g. 1s and 0s), the answer is not always clear. That is, there may not be a right or wrong answer; instead it depends on the situation, use or perhaps abuse scenario. For some applications or vendors, adding SSDs packaged in HDD form factors to existing storage systems, arrays and appliances makes perfect sense; likewise for others it does not, thus it depends (more on that in a bit). While we are talking about SSD, Ed Haletky (aka @texiwill) recently asked a related question, Fix the App or Add Hardware, which could easily be morphed into a discussion of Fix the SSD, or Add Hardware. Hmmm, maybe a future post idea exists there.

Let's take a step back for a moment and look at the bigger picture of what prompts the question of what type of SSD to use where and when, as well as why various vendors want you to look at things a particular way. There are many options for using SSD that is packaged in various ways to meet diverse needs, including here and here (see figure 1).

Various SSD packaging options
Figure 1: Various packaging and deployment options for SSD

The growing number of startup and established vendors with SSD-enabled storage solutions vying to win your hearts, minds and budget is looking like the annual NCAA basketball tournament (aka March Madness and march metrics here and here). Some of the vendors have added or are adding SSDs with SAS or SATA interfaces that plug into existing enclosures (drive slots). These SSDs have the same form factor as 2.5 inch small form factor (SFF) or 3.5 inch HDDs, with a SAS or SATA interface for physical and connectivity interoperability. Other vendors have added PCIe-based SSD cards to their storage systems or appliances as a cache (read or read and write) or a target device, similar to how these cards are installed in servers.

Simply adding SSD either in a drive form factor or as a PCIe card to a storage system or appliance is only part of a solution. Sure, the hardware should be faster than a traditional spinning HDD based solution. However, what differentiates the various approaches and solutions is what is done with the storage systems or appliances software (aka operating system, storage applications, management, firmware or micro code).

So are SSD based storage systems, arrays and appliances a bad idea?

If you are a startup or established vendor able to start from scratch with a clean sheet design not having to worry about interoperability and customer investment protection (technology, people skills, software tools, etc), then you would want to do something different. For example, leverage off the shelf components such as a PCIe flash SSD card in an industry standard server combined with your software for a solution. You could also use extra DRAM memory in those servers combined with PCIe flash SSD cards perhaps even with embedded HDDs for a backing or preservation medium.

Other approaches might use a mix of DRAM and PCIe flash cards, as either a cache or target, combined with some drive form factor SSDs. In other words, there is no right or wrong approach; sure, there are different technical merits that have advantages for various applications or environments. Likewise, people have preferences, particularly those who are technology focused and tend to like one approach vs. another. Thus, we have many options to leverage, use or abuse.

In his post, Robin asks a good question: if nand flash SSD were being put into a new storage system, why not use the PCIe backplane vs. nand flash on DIMMs vs. drive formats, all of which are different packaging options (Figure 1)? Some startups have gone the all-backplane approach, some have gone with the drive form factor, some have gone with a mix, and some even use HDDs in the background. Likewise, some traditional storage system and array vendors who support a mix of SSD and HDD drive form factor devices also leverage PCIe cards, either as a server-based cache (e.g. EMC VFCache) or installed as a performance accelerator module (e.g. NetApp PAM) in their appliances.

While most vendors who put SSD drive form factor drives into their storage systems or appliances (or servers for that matter) use them as data targets for creating LUNs or file systems, others use them for internal functionality. By internal functionality I mean that instead of the SSDs appearing as another drive or target, they are used exclusively by the storage system or appliance for caching or similar purposes. On storage systems, this can be to increase the size of persistent cache, such as EMC FAST Cache on the CLARiiON and VNX. Another use is on backup or dedupe target appliances, where SSDs are used to store dictionary, index or metadata repositories as opposed to being a general data pool.

Part two of this post looks at the benefits and caveats of SSD in storage arrays.

Here are some related links to learn more about SSD, where and when to use what:
Why SSD based arrays and storage appliances can be a good idea (Part II)
IT and storage economics 101, supply and demand
Researchers and marketers don’t agree on future of nand flash SSD
Speaking of speeding up business with SSD storage
EMC VFCache respinning SSD and intelligent caching (Part I)
EMC VFCache respinning SSD and intelligent caching (Part II)
SSD options for Virtual (and Physical) Environments: Part I Spinning up to speed on SSD
SSD options for Virtual (and Physical) Environments, Part II: The call to duty, SSD endurance
SSD options for Virtual (and Physical) Environments Part III: What type of SSD is best for you?

Ok, nuff said for now, check part II.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

SAS Disk Drives Appearing in Larger Mid-Range Arrays

Storage I/O trends

2.5″ and 3.5″ 10,000RPM (10K) and 15,000RPM (15K) SAS disk drives have been available for entry-level and SMB solutions from the likes of Dell, EMC, HP, IBM, Infortrend, LSI, NetApp, Promise, Sun and Xyratex among others for some time now. HDS recently introduced a new mid-range solution, the AMS 2000, that supports up to 480 3G SAS drives in place of traditional 2G or 4G Fibre Channel disk drives.

The benefit of supporting SAS disk drives moving forward is that as volume production and adoption increase, prices will decline, making the drives more affordable. In addition, the controller and interface chip sets and internal adapters support both SAS and SATA disk drives, removing cost and complexity from storage systems. Similar to how Fibre Channel disk drives replaced parallel SCSI, SSA and other proprietary disk drives, and how SATA replaced parallel ATA (PATA) drives, dual-ported SAS disk drives continue the cycle, replacing Fibre Channel disk drives at the entry level and mid-range, and eventually finding their way into high-end and ultra-high-end storage arrays over the next couple of years as 6G SAS drives begin to appear.

Watch for a SAS drive to appear in a storage system or server near you soon.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved