Is SSD dead? No, however some vendors might be

Storage I/O trends

Is SSD dead? No, however some vendors might be

In a recent conversation with Dave Raffo about the nand flash solid state disk (SSD) market, we talked about industry trends and perspectives, where the market is now, and where it is headed. One of my comments is, has been, and will remain that the industry has still not reached anywhere near the full potential for deployment of SSD for enterprise, SMB and other data storage needs. Granted, there is broad adoption in terms of discussion or conversation, and plenty of early adopters.

SSD, and in particular nand flash, is anything but dead; in fact, in the big broad picture of things, it is still very early in the game. Sure, for those who cover and crave the newest, latest and greatest technology to talk about, nand flash SSD might seem old, yesterday's news, long in the tooth and time for something else. However, for those who are focused on deployment vs. adoption, such as customers, nand flash SSD in its many packaging options has still not reached its full potential.

Despite the hype and fanfare from CEOs, their evangelists and the loyal followers of startups that help drive industry adoption (e.g. what is talked about), there is still plenty of upside growth in customer-driven industry deployment (actually buying, installing and using) of nand flash SSD.

What about broad customer deployments?

Sure, there are the marquee customer success stories, enough of them that you would need a high-capacity SAS or SATA drive to hold all the YouTube videos, slide decks and press releases about them.

However, have we truly reached broad customer deployment or broad industry adoption?

Hence, I see more startups coming into the market space, and some exiting on their own, via mergers and acquisitions or other means.

Will we see a feeding frenzy or IPO craze as with earlier technology hype cycles? IMHO, some companies will get the big deal, some will survive as new players running as a business vs. running to be acquired or to IPO. Others will survive by evolving into something else, while still others will join the where-are-they-now list.

If you are an SSD startup CEO, CxO or marketer, or their PR, evangelist or loyal follower, do not worry, as the SSD market and even nand flash is far from dead. On the other hand, if you think that it has hit its full stride, you are either missing the bigger picture or too busy patting yourselves on the back for a job well done. There is much more opportunity out there, and not even all the low hanging fruit has been picked yet.

Check out the conversation with Dave Raffo along with comments from others here.

Related links on storage IO metrics and SSD performance
What is the best kind of IO? The one you do not have to do
Is SSD dead? No, however some vendors might be
Storage and IO metrics that matter
IO IO it is off to Storage and IO metrics we go
SSD and Storage System Performance
Speaking of speeding up business with SSD storage
Are Hard Disk Drives (HDD’s) getting too big?
Has SSD put Hard Disk Drives (HDD’s) On Endangered Species List?
Why SSD based arrays and storage appliances can be a good idea (Part I)
IT and storage economics 101, supply and demand
Researchers and marketers don't agree on future of nand flash SSD
EMC VFCache respinning SSD and intelligent caching (Part I)
SSD options for Virtual (and Physical) Environments Part I: Spinning up to speed on SSD
SSD options for Virtual (and Physical) Environments Part II: The call to duty, SSD endurance
SSD options for Virtual (and Physical) Environments Part III: What type of SSD is best for you?
SSD options for Virtual (and Physical) Environments Part IV: What type of SSD is best for your needs

Ok, nuff said for now

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

More storage and IO metrics that matter

It is great to see more conversations and coverage around storage metrics that matter beyond simply focusing on cost per GByte or TByte (e.g. space capacity). Likewise, it is good to see conversations expanding beyond data footprint reduction (DFR) as a space capacity savings or reduction ratio to also address data movement and transfer rates. Also good to see is an increase in discussion around input/output operations per second (IOPS), tying into conversations spanning virtualization, VDI and cloud to Solid State Devices (SSD).

Other storage and IO metrics that matter include latency or response time, which is how fast work is done or how much time is spent waiting. Latency also ties to IOPS in that as more work of various sizes (random or sequential, reads or writes) arrives to be done, queue depths become an indicator of how well work is flowing. Another storage and IO metric that matters is availability, because without it, performance or capacity can be affected. Likewise, without performance, availability can be affected.
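To make the relationships between these metrics concrete, here is a minimal sketch in Python; the workload numbers are assumed for illustration only, and the queue depth estimate simply applies Little's Law (outstanding IOs = arrival rate x time in system).

```python
# Illustrative only: how IOPS, IO size, bandwidth, latency and queue depth relate.
# The input values are assumed examples, not measurements from any specific system.

iops = 25_000            # IO operations per second (assumed workload)
io_size_kb = 8           # average IO size in KBytes (assumed)
latency_ms = 2.0         # average response time per IO in milliseconds (assumed)

# Bandwidth is simply IOPS times the average IO size.
bandwidth_mb_per_sec = iops * io_size_kb / 1024

# Little's Law: average number of IOs in flight (queue depth)
# equals arrival rate (IOPS) times average time in the system (latency).
avg_queue_depth = iops * (latency_ms / 1000)

print(f"Bandwidth: {bandwidth_mb_per_sec:.1f} MB/s")
print(f"Average outstanding IOs (queue depth): {avg_queue_depth:.1f}")
```

Run with the assumed values above, this works out to roughly 195 MB/s and an average of 50 outstanding IOs, which is why IOPS, IO size, latency and queue depth need to be looked at together rather than in isolation.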

Needless to say, I am just scratching the surface here with storage and IO metrics that matter for physical, virtual and cloud environments from servers to networks to storage.

Here is a link to a post I did called IO, IO, it is off to storage and IO metrics we go that ties in themes of performance measurements and solid-state disk (SSD) among others. Also check out this piece about why VASA (VMware storage analysis metrics) is important to have in your VMware CASA, along with Windows boot storage and IO performance for VDI and traditional planning purposes.

Check out this post about metrics and measurements that matter along with this conversation about IOPs, capacity, bandwidth and purchasing discussion topics.

Related links on storage IO metrics and SSD performance
What is the best kind of IO? The one you do not have to do
Is SSD dead? No, however some vendors might be
Storage and IO metrics that matter
IO IO it is off to Storage and IO metrics we go
SSD and Storage System Performance
Speaking of speeding up business with SSD storage
Are Hard Disk Drives (HDD’s) getting too big?
Has SSD put Hard Disk Drives (HDD’s) On Endangered Species List?
Why SSD based arrays and storage appliances can be a good idea (Part I)
IT and storage economics 101, supply and demand
Researchers and marketers don't agree on future of nand flash SSD
EMC VFCache respinning SSD and intelligent caching (Part I)
SSD options for Virtual (and Physical) Environments Part I: Spinning up to speed on SSD
SSD options for Virtual (and Physical) Environments Part II: The call to duty, SSD endurance
SSD options for Virtual (and Physical) Environments Part III: What type of SSD is best for you?
SSD options for Virtual (and Physical) Environments Part IV: What type of SSD is best for your needs

Ok, nuff said for now

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

What is the best kind of IO? The one you do not have to do

What is the best kind of IO? The one you do not have to do

data infrastructure server storage I/O trends

Updated 2/10/2018

What is the best kind of IO? If no IO (input/output) operation is the best IO, then the second best IO is the one that can be done as close as possible to the application and processor, with the best locality of reference. The third best IO is the one that can be done in less time, or with the least cost or impact to the requesting application, which means moving further down the memory and storage stack (figure 1).

Figure 1: Memory and storage hierarchy showing storage and I/O locality of reference

The problem with IOs is that they are the basic operations that get data into and out of a computer or processor, so they are required; however, they also have an impact on performance, response or wait time (latency). IOs require CPU or processor time and memory to set up and then process the results, as well as IO and networking resources to move data to its destination or retrieve it from where it is stored. While IOs cannot be eliminated, their impact can be greatly improved or optimized by doing fewer of them via caching and grouped reads or writes (pre-fetch, write-behind), among other techniques and technologies.

Think of it this way: instead of going on multiple errands, sometimes you can group multiple destinations together, making for a shorter, more efficient trip; however, that optimization may also take longer. Hence, sometimes it makes sense to go on a couple of quick, short, low latency trips vs. one single larger one that takes half a day yet accomplishes many things. Of course, how far you have to go on those trips (e.g. locality) makes a difference in how many you can do in a given amount of time.
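To make the errand analogy concrete, below is a minimal sketch of coalescing many small writes into fewer, larger IOs; the class name, buffer size and flush policy are made up for illustration rather than taken from any product, and as noted above the buffered data does wait a little longer before it is written.

```python
# Minimal sketch of the "group the errands" idea: coalesce small sequential
# writes into fewer, larger IOs. Buffer size and flush policy are assumptions
# for illustration, not recommendations for any particular product.

class CoalescingWriter:
    def __init__(self, device_write, buffer_limit=64 * 1024):
        self.device_write = device_write      # callable that performs the real IO
        self.buffer_limit = buffer_limit      # flush once this many bytes accumulate
        self.buffer = bytearray()

    def write(self, data: bytes):
        self.buffer.extend(data)
        if len(self.buffer) >= self.buffer_limit:
            self.flush()

    def flush(self):
        if self.buffer:
            self.device_write(bytes(self.buffer))  # one larger IO instead of many small ones
            self.buffer.clear()

ios_issued = []
w = CoalescingWriter(device_write=ios_issued.append, buffer_limit=16)
for chunk in (b"ab", b"cd", b"ef", b"gh", b"ij", b"kl", b"mn", b"op", b"qr"):
    w.write(chunk)
w.flush()
print(f"{len(ios_issued)} larger IOs issued instead of 9 small writes")
```

With these toy values the nine small writes collapse into two larger IOs, which is the trade off described above: fewer trips, each one carrying more.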

What is locality of reference?

Locality of reference refers to how close (e.g. location) data is to where it is needed (being referenced) for use. For example, the best locality of reference in a computer would be the registers in the processor core, then level 1 (L1), level 2 (L2) or level 3 (L3) onboard cache, followed by dynamic random access memory (DRAM). Then would come memory, also known as storage, on PCIe cards such as nand flash solid state devices (SSD), or storage accessible via an adapter on a direct attached storage (DAS), SAN or NAS device. In the case of a PCIe nand flash SSD card, even though physically the nand flash SSD is closer to the processor, there is still the overhead of traversing the PCIe bus and associated drivers. To help offset that impact, PCIe cards use DRAM as a cache or buffer for data along with meta or control information to further optimize and improve locality of reference. In other words, to help with cache hits, cache use and cache effectiveness vs. simply boosting cache utilization.
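To put rough numbers on the hierarchy, the sketch below computes a weighted average access time across tiers; the latencies and hit fractions are assumed, order-of-magnitude figures for illustration only, as actual values vary widely by platform and generation.

```python
# Rough, assumed order-of-magnitude access times to illustrate locality of
# reference; real numbers vary widely by platform, device and generation.
access_time_us = {
    "cpu cache (L1/L2/L3)":   0.01,    # ~tens of nanoseconds
    "DRAM":                   0.1,     # ~100 nanoseconds
    "PCIe nand flash SSD":    50.0,    # ~tens of microseconds
    "SAN/NAS storage system": 5000.0,  # ~milliseconds over the wire
}

# Assumed fraction of requests satisfied at each level (sums to 1.0).
hit_fraction = {
    "cpu cache (L1/L2/L3)":   0.50,
    "DRAM":                   0.30,
    "PCIe nand flash SSD":    0.15,
    "SAN/NAS storage system": 0.05,
}

average_us = sum(access_time_us[tier] * hit_fraction[tier] for tier in access_time_us)
print(f"Weighted average access time: {average_us:.1f} microseconds")
# Shifting even a small fraction of accesses from the bottom tier to a closer
# tier cuts the average dramatically - that is locality of reference at work.
```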

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

What can you do to cut the impact of IO?

  • Establish baseline performance and availability metrics for comparison
  • Realize that IOs are a fact of IT virtual, physical and cloud life
  • Understand what is a bad IO along with its impact
  • Identify why an IO is bad, expensive or causing an impact
  • Find and fix the problem, either with software, application or database changes
  • Throw more software caching tools, hypervisors or hardware at the problem
  • Hardware includes faster processors with more DRAM and fast internal buses
  • Leveraging local PCIe flash SSD cards for caching or as targets
  • Utilize storage systems or appliances that have intelligent caching and storage optimization capabilities (performance, availability, capacity).
  • Compare changes and improvements to baseline, quantify improvement

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Why SSD based arrays and storage appliances can be a good idea (Part II)

This is the second of a two-part post about why storage arrays and appliances with SSD drives can be a good idea; here is a link to the first post.

So again, why would putting drive form factor SSDs into existing storage systems, arrays and appliances be a bad idea?

Benefits of SSD drives in storage systems, arrays and appliances:

  • Familiarity with customers who buy and use these devices
  • Reduces time to market enabling customers to innovate via deployment
  • Establish comfort and confidence with SSD technology for customers
  • Investment protection of currently installed technology (hardware and software)
  • Interoperability with existing interfaces, infrastructure, tools and policies
  • Reliability, availability and serviceability (RAS) depending on vendor implementation
  • Features and functionality (replicate, snapshot, policy, tiering, application integration)
  • Known entity in terms of hardware, software, firmware and microcode (good or bad)
  • Share SSD technology across more servers or accessing applications
  • Good performance assuming no controller, hardware or software bottlenecks
  • Wear leveling and other SSD flash management if implemented
  • Can end performance bottlenecks if backend (drives) are a problem
  • Can coexist with or be complemented by server-based SSD caching

Note that the mere presence of SSD drives in a storage system, array or appliance will not guarantee the above items are enabled, nor that they reach their full potential. Different vendors and products will implement SSD drive support to various degrees of extensibility, so look beyond the feature or functionality check box. Dig in and understand how extensive and robust the SSD implementation is relative to your specific requirements.

Caveats of SSD drives in storage systems, arrays and appliances:

  • May not use full performance potential of nand flash SLC technology
  • Latency can be an issue for those who need extreme speed or performance
  • May not be the most innovative newest technology on the block
  • Fun for startup vendors, marketers and their fans to poke fun at
  • Not all vendors add value or optimization for drive form factor SSD endurance
  • May be seen as legacy or mature rather than technologically advanced

Note that different vendors will have various performance characteristics, some good for IOPs, others for bandwidth or throughput while others for latency or capacity. Look at different products to see how they will vary to meet your particular needs.

Cost comparisons are tricky. SSD in HDD form factors certainly cost more than raw flash dies, however PCIe cards and FTL (flash translation layer) controllers also cost more than flash chips by themselves. In other words, apples to apples comparisons are needed. In the future, ideally the baseboard or motherboard vendors will revise the layout to support nand flash (or its replacement) with DRAM DIMM type modules along with associated FTL and BIOS to handle the flash program/erase cycles (P/E) and wear leveling management, something that DRAM does not have to encounter. While that provides great location or locality of reference (figure 1), it is also a more complex approach that takes time and industry cooperation.

Figure 1: Locality of reference for memory and storage

Certainly, for best performance, just as in realty, location matters, and thus locality of reference comes into play. That is, put the data as close to the server as possible; however, when sharing is needed, a different approach or a companion technique is required.

Here are some general thoughts about SSD:

  • Some customers and organizations get the value and role of SSD
  • Some see where SSD can replace HDD, others see where it complements
  • Yet others are seeing the potential, however are moving cautiously
  • For many environments, better-than-current performance is good enough
  • Environments with the need for speed need every bit of performance they can get
  • Storage systems and arrays or appliances continue to evolve including the media they use
  • Simply looking at how some storage arrays, systems and appliances have evolved, you can get an idea of how they might look in the future, which could include not only SAS as a backend or target, but also PCIe. After all, it was not that long ago that backend drive connections went from proprietary to open parallel SCSI or SSA to Fibre Channel loop (or switched) to SAS.
  • Engineers and marketers tend to gravitate to newer products and technology, which is good, as we need continued innovation on that front.
  • Customers and business people tend to gravitate towards deriving greatest value out of what is there for as long as possible.
  • Of course, both of the latter two points are not always the case and can be flip flopped.
  • Ultrahigh end environments and corner case applications will continue to push the limits and are target markets for some of the newer products and vendors.
  • Likewise, enterprise, mid market and other mainstream environments (outside of their corner case scenarios) will continue to push known technology to its limits as long as they can derive some business benefit value.

While not perfect, SSDs in an HDD form factor with a SAS or SATA interface, properly integrated by vendors into storage systems (or arrays or appliances), are a good fit for many environments today. Likewise, for some environments, new from-the-ground-up SSD based solutions that leverage flash DIMMs, daughter cards or PCIe flash cards are a fit. So too are PCIe flash cards, either as a target, or as a cache to complement storage systems (arrays and appliances). Certainly, drive slots in arrays take up space for SSD, however so too does occupying PCIe slots, particularly in high density servers that require every available socket and slot for compute and DRAM memory. Thus, there are pros and cons, features and benefits of various approaches, and which is best will depend on your needs and perhaps preferences, which may or may not be binary.

I agree that for some applications and solutions, non drive form factor SSD makes sense, while in others, compatibility has its benefits. Yet in other situations, nand flash such as SLC tightly integrated with HDD and DRAM, such as in my Momentus XT HHDD, is good for laptops, however probably not a good fit for the enterprise yet. Thus, SSD options and placements are not binary; of course, sometimes opinions and perspectives will be.

For some situations PCIe, based cards in servers or appliances make sense, either as a target or as cache. Likewise for other scenarios drive format SSD make sense in servers and storage systems, appliances, arrays or other solutions. Thus while all of those approaches are used for storing binary digital data, the solutions of what to use when and where often will not be binary, that is unless your approach is to use one tool or technique for everything.

Here are some related links to learn more about SSD, where and when to use what:
Why SSD based arrays and storage appliances can be a good idea (Part I)
IT and storage economics 101, supply and demand
Researchers and marketers don't agree on future of nand flash SSD
Speaking of speeding up business with SSD storage
EMC VFCache respinning SSD and intelligent caching (Part I)
EMC VFCache respinning SSD and intelligent caching (Part II)
SSD options for Virtual (and Physical) Environments: Part I Spinning up to speed on SSD
SSD options for Virtual (and Physical) Environments, Part II: The call to duty, SSD endurance
SSD options for Virtual (and Physical) Environments Part III: What type of SSD is best for you?

Ok, nuff said for now.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Why SSD based arrays and storage appliances can be a good idea (Part I)

This is the first of a two-part series, you can read part II here.

Robin Harris (aka @storagemojo) recently asked a question in a blog post, arguing that solid state devices (SSDs) using a SAS or SATA interface in traditional hard disk drive (HDD) form factors are a bad idea in storage arrays (e.g. storage systems or appliances). My opinion is that, as with many things about storing, processing or moving binary digital data (e.g. 1s and 0s), the answer is not always clear. That is, there may not be a right or wrong answer; instead, it depends on the situation, use or perhaps abuse scenario. For some applications or vendors, adding SSD packaged in HDD form factors to existing storage systems, arrays and appliances makes perfect sense; likewise, for others it does not, thus it depends (more on that in a bit). While we are talking about SSD, Ed Haletky (aka @texiwill) recently asked a related question, Fix the App or Add Hardware, which could easily be morphed into a discussion of Fix the SSD, or Add Hardware. Hmmm, maybe a future post idea exists there.

Let's take a step back for a moment and look at the bigger picture of what prompts the question of what type of SSD to use where and when, as well as why various vendors want you to look at things a particular way. There are many options for using SSD that is packaged in various ways to meet diverse needs, including here and here (see figure 1).

Figure 1: Various packaging and deployment options for SSD

The growing number of startup and established vendors with SSD enabled storage solutions vying to win your hearts, minds and budget is looking like the annual NCAA basketball tournament (aka March Madness and march metrics here and here). Some of the vendors have added, or are adding, SSDs with SAS or SATA interfaces that plug into existing enclosures (drive slots). These SSDs have the same form factor as a 2.5 inch small form factor (SFF) or 3.5 inch HDD, with a SAS or SATA interface for physical and connectivity interoperability. Other vendors have added PCIe based SSD cards to their storage systems or appliances as a cache (read, or read and write) or as a target device, similar to how these cards are installed in servers.

Simply adding SSD, either in a drive form factor or as a PCIe card, to a storage system or appliance is only part of a solution. Sure, the hardware should be faster than a traditional spinning HDD based solution. However, what differentiates the various approaches and solutions is what is done with the storage system's or appliance's software (aka operating system, storage applications, management, firmware or microcode).

So are SSD based storage systems, arrays and appliances a bad idea?

If you are a startup or established vendor able to start from scratch with a clean sheet design not having to worry about interoperability and customer investment protection (technology, people skills, software tools, etc), then you would want to do something different. For example, leverage off the shelf components such as a PCIe flash SSD card in an industry standard server combined with your software for a solution. You could also use extra DRAM memory in those servers combined with PCIe flash SSD cards perhaps even with embedded HDDs for a backing or preservation medium.

Other approaches might use a mix of DRAM and PCIe flash cards, as either a cache or a target, combined with some drive form factor SSDs. In other words, there is no right or wrong approach; sure, there are different technical merits that have advantages for various applications or environments. Likewise, people have preferences, particularly those who are technology focused and tend to like one approach vs. another. Thus, we have many options to leverage, use or abuse.

In his post, Robin asks a good question: if nand flash SSD were being put into a new storage system, why not use the PCIe backplane vs. nand flash on DIMMs vs. drive formats, all of which are different packaging options (Figure 1)? Some startups have gone the all backplane approach, some have gone with the drive form factor, some have gone with a mix, and some even use HDDs in the background. Likewise, some traditional storage system and array vendors who support a mix of SSD and HDD drive form factor devices also leverage PCIe cards, either as a server-based cache (e.g. EMC VFCache) or installed as a performance accelerator module (e.g. NetApp PAM) in their appliances.

While most vendors who put SSD drive form factor drives into their storage systems or appliances (or servers for that matter) use them as data targets for creating LUNs or file systems, others use them for internal functionality. By internal functionality I mean that instead of the SSD appearing as another drive or target, it is used exclusively by the storage system or appliance for caching or similar purposes. On storage systems, this can be to increase the size of persistent cache, such as EMC does on the CLARiiON and VNX (e.g. FAST Cache). Another use is on backup or dedupe target appliances where SSDs are used to store dictionary, index or meta data repositories as opposed to being a general data pool.

Part two of this post looks at the benefits and caveats of SSD in storage arrays.

Here are some related links to learn more about SSD, where and when to use what:
Why SSD based arrays and storage appliances can be a good idea (Part II)
IT and storage economics 101, supply and demand
Researchers and marketers don’t agree on future of nand flash SSD
Speaking of speeding up business with SSD storage
EMC VFCache respinning SSD and intelligent caching (Part I)
EMC VFCache respinning SSD and intelligent caching (Part II)
SSD options for Virtual (and Physical) Environments: Part I Spinning up to speed on SSD
SSD options for Virtual (and Physical) Environments, Part II: The call to duty, SSD endurance
SSD options for Virtual (and Physical) Environments Part III: What type of SSD is best for you?

Ok, nuff said for now, check part II.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

EMC VFCache respinning SSD and intelligent caching (Part II)

This is the second of a two part series pertaining to EMC VFCache, you can read the first part here.

In this part of the series, let's look at some common questions along with comments and perspectives.

Common questions, answers, comments and perspectives:

Why would EMC not just go into the same market space and mode as FusionIO, a model that many other vendors seem eager to follow? IMHO many vendors are following or chasing FusionIO, thus most are selling in the same way, perhaps to the same customers. Some of those vendors could very easily, if they have not already, make a quick change to their playbook, adding some new moves to reach a broader audience. Another smart move here is that by taking a companion or complementary approach, EMC can continue selling existing storage systems to customers, keeping those investments, while also supporting competitors' products. In addition, for those customers who are slow to adopt SSD based techniques, this is a relatively easy and low risk way to gain confidence. Granted, the disk drive was declared dead several years (and yes, also several decades) ago, however it is and will stay alive for many years, with SSD helping to close the IO storage and performance gap.

Data center and storage IO performance capacity gap (Courtesy of Cloud and Virtual Data Storage Networking (CRC Press))

Has this been done before? There have been other vendors who have done LUN caching appliances in the past going back over a decade. Likewise there are PCIe RAID cards that support flash SSD as well as DRAM based caching. Even NetApp has had similar products and functionality with their PAM cards.

Does VFCache work with other PCIe SSD cards such as FusionIO? No, VFCache is a combination of software IO intercept and an intelligent cache driver along with a PCIe SSD flash card (which, as EMC has indicated, could be supplied by different manufacturers). Thus, for VFCache to be VFCache requires the EMC IO intercept and intelligent cache software driver.

Does VFCache work with other vendors' storage? Yes, refer to the EMC support matrix; however, the product has been architected and designed to install into and coexist with a customer's existing environment, which means supporting different EMC block storage systems as well as those from other vendors. Keep in mind that a main theme of VFCache is to complement, coexist with, enhance and protect customers' investments in storage systems to improve their effectiveness and productivity, as opposed to replacing them.

Does VFCache introduce a new point of vendor lockin or stickiness? Some will see or position this as a new form of vendor lockin; others, assuming that EMC supports different vendors' storage systems downstream, offers options for different PCIe flash cards and keeps the solution affordable, will assert it is no more lockin than other solutions. In fact, by supporting third party storage systems as opposed to replacing them, smart sales people and marketeers will position VFCache as being more open and interoperable than some other PCIe flash card vendors' approaches. Keep in mind that avoiding vendor lockin is a shared responsibility (read more here).

Does VFCache work with NAS? VFCache does not work with NAS (NFS or CIFS) attached storage.

Does VFCache work with databases? Yes, VFCache is well suited for little data (e.g. databases) and traditional OLTP or general business application processing that may not be covered or supported by other so called big data focused or optimized solutions. Refer to this EMC document (and this document here) for more information.

Does VFCache only work with little data? While VFCache is well suited for little data (e.g. databases, SharePoint, file and web servers, traditional business systems), it is also able to work with other forms of unstructured data.

Does VFCache need VMware? No. While VFCache works with VMware vSphere, including a vCenter plug-in, it does not need a hypervisor and is as practical in a physical machine (PM) as it is in a virtual machine (VM).

Does VFCache work with Microsoft Windows? Yes, refer to the EMC support matrix for specific server operating system and hypervisor version support.

Does VFCache work with other Unix platforms? Refer to the EMC support matrix for specific server operating system and hypervisor version support.

How are reads handled with VFCache? The VFCache software (driver if you prefer) intercepts IO requests to LUNs that are being cached, performing a quick lookup to see if there is a valid cache entry on the physical VFCache PCIe card. If there is a cache hit, the IO is resolved from the closer or local PCIe card cache, making for a lower latency or faster response time IO. In the case of a cache miss, the VFCache driver simply passes the IO request onto the normal SCSI or block (e.g. iSCSI, SAS, FC, FCoE) stack for processing by the downstream storage system (or appliance). Note that when the requested data is retrieved from the storage system, the VFCache driver will, based on its caching algorithms' determinations, place a copy of the data in the PCIe read cache. Thus the real power of VFCache is the software implementing the cache lookup and cache management functions to leverage the PCIe card that complements the underlying block storage systems.
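To illustrate that read path in general terms, here is a minimal read-through cache sketch. To be clear, this is not EMC's code or design; the class, names and LRU eviction policy are assumptions used only to show the hit, miss and populate flow described above.

```python
# Generic read-through cache sketch (hypothetical names, not EMC's implementation).
from collections import OrderedDict

class ReadCache:
    def __init__(self, backend_read, capacity_blocks=1024):
        self.backend_read = backend_read          # the normal downstream block IO path
        self.capacity_blocks = capacity_blocks
        self.cache = OrderedDict()                # (lun, block) -> data, kept in LRU order
        self.hits = self.misses = 0

    def read(self, lun, block):
        key = (lun, block)
        if key in self.cache:                     # cache hit: serve locally, low latency
            self.hits += 1
            self.cache.move_to_end(key)
            return self.cache[key]
        self.misses += 1                          # cache miss: pass to downstream storage
        data = self.backend_read(lun, block)
        self.cache[key] = data                    # populate the cache for future reads
        if len(self.cache) > self.capacity_blocks:
            self.cache.popitem(last=False)        # evict the least recently used entry
        return data
```

The interesting work in a real product is in the policy decisions (what to admit, what to evict, how to stay coherent), which is exactly the point made above about the software being where the value lies.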

How are writes handled with VFCache? Unless put into a write cache mode, which is not the default, the VFCache software simply passes the IO operation onto the IO stack for downstream processing by the storage system or appliance attached via a block interface (e.g. iSCSI, SAS, FC, FCoE). Note that as part of its caching algorithms, the VFCache software will make determinations of what to keep in cache based on IO activity requests, similar to how cache management results in better cache effectiveness in a storage system. Given EMC's long history of working with intelligent cache algorithms, one would expect some of that DNA exists or will be leveraged further in future versions of the software. Ironically, this is where other vendors with long cache effectiveness histories such as IBM, HDS and NetApp among others should also be scratching their collective heads saying wow, we can or should be doing that as well (or better).

Can VFCache be used as a write cache? Yes, while its default mode is to be used as a persistent read cache to complement server and application buffers in DRAM, along with enhancing the effectiveness of downstream storage system (or appliance) caches, VFCache can also be configured as a persistent write cache.

Does VFCache include FAST automated tiering between different storage systems? The first version is only a caching tool; however, think about it a bit: where the software sits, what storage systems it can work with, its ability to learn and understand IO paths and patterns, and you can get an idea of where EMC could evolve it to, similar to what they have done with RecoverPoint among other tools.

Evolving data access patterns and life cycles (more retention and reads)

Does VFCache mean an all or nothing approach with EMC? While the complete VFCache solution comes from EMC (e.g. PCIe card and software), the solution will work with other block attached storage as well as existing EMC storage systems for investment protection.

Does VFCache support NAS based storage systems? The first release of VFCache only supports block based access; however, the server that VFCache is installed in could certainly be functioning as a general purpose NAS (NFS or CIFS) server (see supported operating systems in EMC interoperability notes) in addition to being a database or other application server.

Does VFCache require that all LUNs be cached? No, you can select which LUNs are cached and which ones are not.

Does VFCache run in an active / active mode? In the first release it is active / passive; refer to EMC release notes for details.

Can VFCache be installed in multiple physical servers accessing the same shared storage system? Yes, however refer to EMC release notes on details about active / active vs. active / passive configuration rules for ensuring data integrity.

Who else is doing things like this? There are caching appliance vendors as well as others such as NetApp and IBM who have used SSD flash caching cards in their storage systems or virtualization appliances. However, keep in mind that VFCache places the caching function closer to the application that is accessing the data, thereby improving on the locality of reference (e.g. storage and IO effectiveness).

Does VFCache work with SSD drives installed in EMC or other storage systems? Check the EMC product support matrix for specific tested and certified solutions, however in general if the SSD drive is installed in a storage system that is supported as a block LUN (e.g. iSCSI, SAS, FC, FCoE) in theory it should be possible to work with VFCache. Emphasis, visit the EMC support matrix.
What type of flash is being used?

What type of nand flash SSD memory is EMC using in the PCIe card? The first release of VFCache is leveraging enterprise class SLC (Single Level Cell) nand flash, which has been used in other EMC products for its endurance and long duty cycle, to minimize or eliminate concerns of wear and tear while meeting read and write performance. EMC has indicated that they will also, as part of an industry trend, leverage MLC along with Enterprise MLC (EMLC) technologies on a go forward basis.

Doesn't nand flash SSD cache wear out? While nand flash SSD can wear out over time due to extensive write use, the VFCache approach mitigates this by being primarily a read cache, reducing the number of program / erase cycles (P/E cycles) that occur with write operations, as well as by initially leveraging longer duty cycle SLC flash. EMC also has several years of experience implementing wear leveling algorithms in their storage system controllers to increase duty cycle and reduce wear on SLC flash, which will carry forward as MLC or Enterprise MLC (EMLC) techniques are leveraged. This differs from vendors who are positioning their SLC or MLC based flash PCIe SSD cards mainly for write operations, which will cause more P/E cycles to occur at a faster rate, reducing the duty or useful life of the device.
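For a rough sense of why a mostly-read workload eases endurance concerns, here is a back-of-the-envelope lifetime estimate; every input below is an assumption for illustration and not a specification of the VFCache card or any particular flash device.

```python
# Back-of-the-envelope flash endurance estimate. All inputs are assumptions
# for illustration; real devices vary by flash type, controller and workload.

capacity_gb = 300            # card capacity
pe_cycles = 100_000          # assumed program/erase cycle rating for enterprise SLC
write_amplification = 1.2    # assumed controller write overhead
writes_gb_per_day = 500      # assumed daily write volume hitting the card

total_writes_gb = capacity_gb * pe_cycles / write_amplification
lifetime_years = total_writes_gb / writes_gb_per_day / 365
print(f"Estimated write lifetime: {lifetime_years:.0f} years")
# A mostly-read cache drives writes_gb_per_day down, stretching the same
# P/E budget over a much longer useful life.
```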

How much capacity does the VFCache PCIe card contain? The first release supports a 300GB card and EMC has indicated that added capacity and configuration options are in their plans.

Does this mean disks are dead? Contrary to popular industry folklore (or wish), the hard disk drive (HDD) has plenty of life left, part of which has been extended by being complemented by VFCache.

Various SSD locations, types, packaging and usage scenario options

Can VFCache work in blade servers? The VFCache software is transparent to blade, rack mount, tower or other types of servers. The hardware part of VFCache is a PCIe card, which means that the blade server or system will need to be able to accommodate a PCIe card to complement the PCIe based mezzanine IO card (e.g. iSCSI, SAS, FC, FCoE) used for accessing storage. What this means is that for blade systems or server vendors such as IBM, who have a PCIe expansion module for their H series blade systems (it consumes a slot normally used by a server blade), PCIe cache cards like those being initially released by IBM could work; however, check the EMC interoperability matrix, as well as your specific blade server vendor, for PCIe expansion capabilities. Given that EMC leverages Cisco UCS for their vBlocks, one would assume that those systems will also see VFCache modules. NetApp partners with Cisco using UCS in their FlexPods, so you see where that could go as well, along with potential support from other server vendors including Dell, HP, IBM and Oracle among others.

What about benchmarks? EMC has released some technical documents that show performance improvements in Oracle environments such as this here. Hopefully we will see EMC also release other workloads for different applications including Microsoft Exchange Solutions Proven (ESRP) along with SPC similar to what IBM recently did with their systems among others.

How do the first EMC supplied workload simulations compare vs. other PCIe cards? This is tough to gauge, as many SSD solutions and in particular PCIe cards are doing apples to oranges comparisons. For example, to generate a high IOPS rating for marketing purposes, most SSD solutions are stress tested at 512 bytes, or 1/2 of a KByte, which is just 1/8 of a small 4KByte IO. Note that operating systems such as Windows are moving to a 4KByte page allocation size to align with growing IO sizes, with databases moving from the old average of 4KBytes to 8KBytes and larger. What is important to consider is the average IO size and activity profile (e.g. reads vs. writes, random vs. sequential) for your applications. If your application is doing ultra small 1/2 KByte IOs, or even smaller 64 byte IOs (which should be handled by better application or file system caching in DRAM), then the smaller IO size and record setting examples will apply. However, if your applications are more mainstream or larger, then those smaller IO size tests should be taken with a grain of salt. Also keep latency in mind, as many target or opportunity applications for VFCache are response time sensitive or can benefit from the improved productivity they enable.
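A quick bit of arithmetic shows why a headline IOPS number means little without the IO size attached; the figures below are made up for illustration.

```python
# Why headline IOPS numbers need the IO size attached: bandwidth = IOPS x IO size.
# Example figures are invented for illustration only.

def bandwidth_mb_per_sec(iops, io_size_bytes):
    return iops * io_size_bytes / (1024 * 1024)

small = bandwidth_mb_per_sec(1_000_000, 512)      # marketing-style small IO test
typical = bandwidth_mb_per_sec(150_000, 8 * 1024) # more database-like IO size

print(f"1,000,000 IOPS @ 512 bytes = {small:.0f} MB/s")
print(f"  150,000 IOPS @ 8 KBytes  = {typical:.0f} MB/s")
```

The million-IOPS number works out to roughly 488 MB/s moved, while the far smaller IOPS figure at 8KBytes moves more than twice as much data, which is why the IO size and profile of your workload matters more than the record setting headline.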

What is locality of reference? Locality of reference refers to how close data is to where it is being requested or accessed from. The closer the data is to the application requesting it, the faster the response time and the quicker the work gets done. For example, in the figure below, the L1/L2/L3 onboard processor caches are the fastest, yet smallest, while closest to the application running on the server. At the other extreme, further down the stack, storage becomes larger capacity and lower cost, however lower performing.

Locality of reference for data, memory and storage

What does cache effectiveness vs. cache utilization mean? Cache utilization is an indicator of how much of the available cache capacity is being used; however, it does not indicate whether the cache is being used well or not. For example, cache could be 100 percent used, however there could be a low hit rate. Thus cache effectiveness is a gauge of how well the available cache is being used to improve performance in terms of more work being done (IOPS or bandwidth) or lower latency and response time.
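Here is a small sketch of the difference, using assumed example numbers: a cache can be 100 percent utilized (full) yet deliver a low hit rate, and the effective latency shows how little such a cache actually helps.

```python
# Cache utilization vs. cache effectiveness, with assumed example numbers.

cache_capacity_gb = 300
cache_used_gb = 300          # the cache is completely full
lookups = 1_000_000
hits = 150_000               # yet only 15% of lookups are satisfied from cache

utilization = cache_used_gb / cache_capacity_gb
hit_rate = hits / lookups

cache_latency_ms = 0.1       # assumed local cache response time
backend_latency_ms = 5.0     # assumed downstream storage response time
effective_latency_ms = hit_rate * cache_latency_ms + (1 - hit_rate) * backend_latency_ms

print(f"Utilization: {utilization:.0%}, hit rate: {hit_rate:.0%}, "
      f"effective latency: {effective_latency_ms:.2f} ms")
# 100% utilization with a low hit rate: a busy cache delivering little benefit.
```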

Isn't more cache better? More cache is not necessarily better; what matters is how the cache is being used. This is a message that I would be disappointed in HDS if they were not to bring up as a point of messaging (or rebuttal), given their history of emphasizing cache effectiveness vs. size or quantity (Hu, that is a hint btw ;).

What is the performance impact of VFCache on the host server? EMC is saying at most 5 percent CPU consumption, which they claim is several times less than the competition's worst scenario, as well as claiming 512MB to 1GB of DRAM used on the server vs. several times that for their competitors. The difference could be expected to come from more offload functionality, including the flash translation layer (FTL), wear leveling and other optimizations being handled by the PCIe card vs. being handled in the server's memory using host server CPU cycles.

How does this compare to what NetApp or IBM does? NetApp, IBM and others have done caching with SSD in their storage systems, or leveraged third party PCIe SSD cards from different vendors installed in servers as a storage target. Some vendors such as LSI have done caching on the PCIe cards (e.g. CacheCade, which in theory has a similar software caching concept to VFCache) to improve performance and effectiveness across JBOD and SAS devices.

What about stale (old or invalid) reads, how does VFCache handle or protect against those? Stale reads are handled via the VFCache management software tool or driver which leverages caching algorithms to decide what is valid or invalid data.

How much does VFCache cost? Refer to EMC announcement pricing, however EMC has indicated that they will be competitive with the market (supply and demand).

If a server shuts down or reboots, what happens to the data in the VFCache? Since the data is in non volatile SLC nand flash memory, information is not lost when the server reboots or loses power in the case of a shutdown; thus it is persistent. While exact details are not known as of this time, it is expected that the VFCache driver and software do some form of cache coherency and validity check to guard against stale reads or discard any other invalid cache entries.

Industry trends and perspectives

What will EMC do with VFCache in the future and on a larger scale, such as an appliance? EMC, via its own internal development and via acquisitions, has demonstrated the ability to use various clustered techniques, such as RapidIO for VMAX nodes and InfiniBand for connecting Isilon nodes. Given an industry trend with several startups using PCIe flash cards installed in a server that then functions as an IO storage system, it seems likely, given EMC's history and experience with different storage systems, caching and interconnects, that they could do something interesting. Perhaps Oracle Exadata III (Exadata I was HP, Exadata II was Sun/Oracle) could be an EMC based appliance (that is pure speculation btw)?

EMC has already shown how it can use SSD drives as a cache extension in VNX and CLARiiON systems (FAST Cache) in addition to as a target or storage tier combined with FAST for tiering. Given their history with caching algorithms, it would not be surprising to see other instantiations of the technology deployed in complementary ways.

Finally, EMC is showing that it can use nand flash SSD in different ways and various packaging forms to apply to diverse applications or customer environments. The companion or complementary approach EMC is currently taking contrasts with some other vendors who are taking an all or nothing, it's all SSD as disk is dead approach. Given the large installed base of disk based systems EMC as well as other vendors have in place, not to mention the investment by those customers, it makes sense to allow those customers the option of when, where and how they can leverage SSD technologies to coexist with and complement their environments. Thus with VFCache, EMC is using SSD as a cache enabler to address the decades old and growing storage IO to capacity performance gap in a force multiplier model that spreads the cost over more TBytes, PBytes or EBytes while increasing the overall benefit, in other words effectiveness and productivity.

Additional related material:
Part I: EMC VFCache respinning SSD and intelligent caching
IT and storage economics 101, supply and demand
2012 industry trends perspectives and commentary (predictions)
Speaking of speeding up business with SSD storage
New Seagate Momentus XT Hybrid drive (SSD and HDD)
Are Hard Disk Drives (HDDs) getting too big?
Unified storage systems showdown: NetApp FAS vs. EMC VNX
Industry adoption vs. industry deployment, is there a difference?
Two companies on parallel tracks moving like trains offset by time: EMC and NetApp
Data Center I/O Bottlenecks Performance Issues and Impacts
From bits to bytes: Decoding Encoding
Who is responsible for vendor lockin
EMC VPLEX: Virtual Storage Redefined or Respun?
EMC interoperability support matrix

Ok, nuff said for now, I think I see some storm clouds rolling in

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

EMC VFCache respinning SSD and intelligent caching (Part I)

This is the first part of a two part series covering EMC VFCache, you can read the second part here.

EMC formally announced VFCache (aka Project Lightning), an IO accelerator product that comprises a PCIe nand flash card (aka Solid State Device or SSD) and intelligent cache management software. In addition, EMC is also talking about the next phase of the flash business unit and project Thunder. The approach EMC is taking with VFCache should not be a surprise given their history of starting out with memory and SSD and evolving it into intelligent cache optimized storage solutions.

Data center and storage IO performance capacity gap (Courtesy of Cloud and Virtual Data Storage Networking (CRC Press))

Could we see the future of where EMC will take VFCache along with other possible solutions already being hinted at by the EMC flash business unit by looking where they have been already?

Likewise by looking at the past can we see the future or how VFCache and sibling product solutions could evolve?

After all, EMC is no stranger to caching, with both nand flash SSD (e.g. FLASH CACHE, FAST and SSD drives) along with DRAM based caching across their product portfolio, not to mention caching being a core part of their company founding products that evolved into HDDs and more recently nand flash SSDs among others.

Industry trends and perspectives

Unlike others who also offer PCIe SSD cards, such as FusionIO with a focus on eliminating SANs or other storage (read their marketing), EMC not surprisingly is marching to a different beat. The beat EMC is marching to, or perhaps leading by example for others to follow, is that of going mainstream and using PCIe SSD cards as a cache to complement their own as well as other vendors' storage systems vs. replacing them. This is similar to what EMC and other mainstream storage vendors have done in the past, such as with SSD drives being used as a flash cache extension on CLARiiON or VNX based systems as well as a target or storage tier.

Various SSD locations, types, packaging and usage scenario options

Other vendors, including IBM, NetApp and Oracle among others, have also leveraged various packaging options of Single Level Cell (SLC) or Multi Level Cell (MLC) flash as caches in the past. A different example of SSD being used as a cache is the Seagate Momentus XT, which is a desktop, workstation and consumer type device. Seagate has shipped over a million of the Momentus XT, which uses SLC flash as a cache to complement and enhance the integrated HDD's performance (a 750GB model with 8GB of SLC memory is in the laptop I'm using to type this with).

One of the premises of caching solutions such as those mentioned above is to address changing data access patterns and life cycles, shown in the figure below.

Evolving data access patterns and life cycles (more retention and reads)

Put a different way, instead of focusing on just big data or corner cases (granted, some of those are quite large) or ultra large cloud scale out solutions, EMC with VFCache is also addressing their core business, which includes little data. What will be interesting to watch and listen to is how some vendors will start to jump up and down saying that they have done or enabled what EMC is announcing for some time. In some cases those vendors will rightfully be making noise about something that they should have made noise about before.

EMC is bringing the SSD message to the mainstream business and storage marketplace, showing how it is a complement to, vs. a replacement of, existing storage systems. By doing so, they will show how to spread the cost of SSD out across a larger storage capacity footprint, boosting the effectiveness and productivity of those systems. This means that customers who install the VFCache product can accelerate the performance of both their existing EMC storage as well as storage systems from other vendors, preserving their technology along with people skills investment.

 

Key points of VFCache

  • Combines PCIe SLC nand flash card (300GB) with intelligent caching management software driver for use in virtualized and traditional servers

  • Making SSD complementary to existing installed block based disk (and or SSD) storage systems to increase their effectiveness

  • Providing investment protection while boosting productivity of existing EMC and third party storage in customer sites

  • Brings caching closer to the application where the data is accessed while leveraging larger scale direct attached and SAN block storage

  • Focusing message for SSD back on to little data as well as big data for mainstream broad customer adoption scenarios

  • Leveraging the benefit and strength of SSD as a read cache and the scalability of underlying downstream disk for data storage

  • Reducing concerns around SSD endurance or duty cycle wear and tear by using it as a read cache

  • Offloads underlying storage systems from some read requests, enabling them to do more work for other servers

Additional related material:
Part II: EMC VFCache respinning SSD and intelligent caching
IT and storage economics 101, supply and demand
2012 industry trends perspectives and commentary (predictions)
Speaking of speeding up business with SSD storage
New Seagate Momentus XT Hybrid drive (SSD and HDD)
Are Hard Disk Drives (HDDs) getting too big?
Unified storage systems showdown: NetApp FAS vs. EMC VNX
Industry adoption vs. industry deployment, is there a difference?
Two companies on parallel tracks moving like trains offset by time: EMC and NetApp
Data Center I/O Bottlenecks Performance Issues and Impacts
From bits to bytes: Decoding Encoding
Who is responsible for vendor lockin
EMC VPLEX: Virtual Storage Redefined or Respun?
EMC interoperability support matrix

Ok, nuff said for now, I think I see some storm clouds rolling in

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

From bits to bytes: Decoding Encoding

With networking, care should be taken to understand if a given speed or performance capacity is being specified in bits or bytes as well as in base 2 (binary) or base 10 (decimal).

Another consideration and potential point of confusion are line rates (GBaud) and link speeds, which can vary based on encoding and low level frame or packet size. For example, 1GbE along with 1, 2, 4 and 8Gb Fibre Channel, as well as Serial Attached SCSI (SAS), use an 8b/10b encoding scheme. This means that at the lowest physical layer, 8 bits of data are placed into 10 bits for transmission, with 2 bits being for data integrity.

With an 8Gb link using 8b/10b encoding, 2 out of every 10 bits are overhead. The actual data throughput or bandwidth, or the number of IOPS, frames or packets per second, is a function of the link speed, encoding and baud rate. For example, 1Gb FC has a 1.0625 Gb per second line rate, which is multiplied by the current generation, so 8Gb FC or 8GFC would be 8 x 1.0625 = 8.5Gb per second.

Remember to factor in that encoding overhead (e.g. 8 of every 10 bits carry data with 8b/10b): usable bandwidth on the 8GFC link is about 6.8Gb per second, or about 850 MBytes per second (6.8Gb / 8 bits). 10GbE uses 64b/66b encoding, which means that for every 64 bits of data only 2 bits are used for data integrity checks, thus less overhead.
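Here is a small sketch that reproduces the arithmetic above (and the 10GbE figure used later in this post): line rate times the data-to-total-bit ratio of the encoding gives the usable bandwidth.

```python
# Usable bandwidth after encoding overhead, matching the figures in the text.

def usable_gbps(line_rate_gb, data_bits, total_bits):
    return line_rate_gb * data_bits / total_bits

# 8Gb Fibre Channel: 8 x 1.0625 Gb/s line rate with 8b/10b encoding
fc8_line_rate = 8 * 1.0625                       # 8.5 Gb/s
fc8_usable = usable_gbps(fc8_line_rate, 8, 10)   # 6.8 Gb/s
print(f"8GFC usable: {fc8_usable:.1f} Gb/s (~{fc8_usable / 8 * 1000:.0f} MBytes/s)")

# 10GbE with 64b/66b encoding, using the same arithmetic as later in the post
ge10_usable = usable_gbps(10, 64, 66)            # ~9.7 Gb/s
print(f"10GbE usable: {ge10_usable:.1f} Gb/s")
```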

What do all of these bits and bytes have to do with clouds and virtual data storage networks?

Quite a bit, when you consider what we have talked about: the need to support more information processing, moving and storing in a denser footprint.

In order to support higher densities, faster servers, storage and networks are not enough on their own; various approaches to reducing the data footprint impact are also required.

What this means is that for fast networks to be effective, they also have to have lower overhead, so that capacity is used for productive work and data rather than moving extra protocol bits in the same amount of time.

PCIe leverages multiple serial unidirectional point to point links, known as lanes, compared to traditional PCI, which used a parallel bus based design. With traditional PCI, the bus width varied from 32 to 64 bits, while with PCIe, the number of lanes combined with the PCIe version and signaling rate determines performance. PCIe interfaces can have one, two, four, eight, sixteen or thirty two lanes for data movement depending on card or adapter format and form factor. For example, PCI and PCI-X performance can be up to 528 MBytes per second with a 64 bit, 66 MHz signaling rate.

 

                               PCIe Gen 1   PCIe Gen 2   PCIe Gen 3
Giga transfers per second      2.5          5            8
Encoding scheme                8b/10b       8b/10b       128b/130b
Data rate per lane per second  250MB        500MB        1GB
x32 lanes                      8GB          16GB         32GB

Table 1: PCIe generation comparisons

Table 1 shows performance characteristics of various PCIe generations. With PCIe Gen 3, the effective performance essentially doubles; however, the actual underlying transfer speed does not double as it has in the past. Instead, the improved performance is a combination of about 60 percent link speed increase and 40 percent efficiency improvement from switching from an 8b/10b to a 128b/130b encoding scheme, among other optimizations.
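The per-lane and x32 figures in Table 1 can be derived from the signaling rate and encoding scheme; the short sketch below approximately reproduces them (the table rounds the Gen 3 values to 1GB and 32GB).

```python
# Reproducing Table 1's per-lane and x32 figures from signaling rate and encoding.
# GT/s x (data bits / total bits) gives usable Gb/s per lane; divide by 8 for GB/s.

generations = {
    "PCIe Gen 1": (2.5, 8, 10),     # 2.5 GT/s, 8b/10b encoding
    "PCIe Gen 2": (5.0, 8, 10),     # 5 GT/s, 8b/10b encoding
    "PCIe Gen 3": (8.0, 128, 130),  # 8 GT/s, 128b/130b encoding
}

for name, (gt_per_sec, data_bits, total_bits) in generations.items():
    per_lane_gb = gt_per_sec * data_bits / total_bits / 8
    print(f"{name}: {per_lane_gb:.3f} GB/s per lane, "
          f"{per_lane_gb * 32:.1f} GB/s with x32 lanes")
```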

Serial interface            Encoding
PCIe Gen 1                  8b/10b
PCIe Gen 2                  8b/10b
PCIe Gen 3                  128b/130b
Ethernet 1Gb                8b/10b
Ethernet 10Gb               64b/66b
Fibre Channel 1/2/4/8 Gb    8b/10b
SAS 6Gb                     8b/10b

Table 2: Common encoding

Bringing this all together: in order to support cloud and virtual computing environments, data networks need to become faster as well as more efficient, otherwise you will be paying for more overhead per second vs. productive work being done. For example, with 64b/66b encoding on a 10GbE or FCoE link, about 96.97% of the overall bandwidth, or about 9.7Gb per second, is available for useful work.

By comparison, if an 8b/10b encoding were used, only 80 percent of the available bandwidth would be left for useful data movement. For throughput oriented environments this means better bandwidth, while for applications that require lower response time or latency it means more IOPS, frames or packets per second.
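To put the latency and IOPS angle in perspective, the same arithmetic can be turned into a rough upper bound on frames or packets per second. Here is a small illustrative sketch (Python, assuming a 1500 byte Ethernet payload and ignoring inter-frame gaps and protocol headers):

    # Rough upper bound on packets per second for a given usable bandwidth and packet size
    def packets_per_second(usable_gbps, packet_bytes):
        return (usable_gbps * 1e9) / (packet_bytes * 8)

    print(packets_per_second(9.7, 1500))   # ~808,000 packets/sec with 64b/66b on a 10Gb link
    print(packets_per_second(8.0, 1500))   # ~667,000 packets/sec if 8b/10b were used instead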

The above is an example of where a small change, such as the encoding scheme, can have a large benefit when applied to high volume or large environments.

Learn more in The Green and Virtual Data Center (CRC), Resilient Storage Networks (Elsevier) and, coming summer 2011, Cloud and Virtual Data Storage Networking (CRC) at https://storageio.com/books

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Industry Trends and Perspectives: Converged Networking and IO Virtualization (IOV)

This is part of an ongoing series of short industry trends and perspectives blog post briefs.

These short posts complement longer posts along with traditional industry trends and perspective white papers, research reports and solution brief content found at www.storageio.com/reports.

The trends that I am seeing with converged networking and I/O fall into a couple of categories. One is converged networking, including unified communications and FCoE/DCB along with InfiniBand based discussions, while the other is around I/O virtualization (IOV), including PCIe server based multi root I/O virtualization (MR-IOV).

As is often the case with new technologies, some say these are the next great things, so drop everything and adopt them now as they are working and ready for prime time, mission critical deployment. Then there are those who say no, stop, do not waste your time on these as they are temporary and will die and go away anyway. In between is reality, which takes a bit of balancing the old with the new: look before you leap, do your homework, and do not be scared, however have a strategy and a plan on how to achieve it.

So is FCoE a temporal or temporary technology? Well, all technologies are temporary in some scope; it is their temporal timeframe that should be of interest. Given that FCoE will probably have at least a ten to fifteen year temporal timeline, I would say in technology terms it has a relatively long life for supporting coexistence on the continued road to convergence, which appears to be Ethernet.

Related and companion material:
Video: Storage and Networking Convergence
Blog: I/O Virtualization (IOV) Revisited
Blog: I/O, I/O, Its off to Virtual Work and VMworld I Go (or went)
Blog: EMC VPLEX: Virtual Storage Redefined or Respun?

That is all for now, hope you find this ongoing series of current and emerging Industry Trends and Perspectives interesting.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

What is the Future of Servers?

Recently I provided some comments and perspectives on the future of servers in an article over at Processor.com.

In general, blade servers will become more ubiquitous; that is, they won't go away, rather they will become more commonplace with even higher density processors with more cores and performance, along with faster I/O and larger memory capacity per given footprint.

While the term blade server may fade, giving way to some new term or phrase, rest assured their capabilities and functionality will not disappear; rather, they will be further enhanced to support virtualization with VMware vSphere, Microsoft Hyper-V and Citrix/Xen, along with public and private clouds, both for consolidation and in the next wave of virtualization called life beyond consolidation.

The other trend is that not only will servers be able to support more processing and memory per footprint, they will also do so while drawing less energy and requiring less cooling, hence more GHz per watt, along with energy savings modes when less work needs to be performed.

Another trend is around convergence, both in terms of packaging and technology improvements from a server, I/O networking and storage perspective. For example, enhancements to shared PCIe with I/O virtualization, hypervisor optimization, and integration such as the recently announced EMC, Cisco, Intel and VMware VCE coalition and Vblocks.

Read more including my comments in the article here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

StorageIO aka Greg Schulz appears on Infosmack

If you are in the IT industry, and specifically have any interest in or tie to data infrastructures, from servers to storage and networking, including hardware, software and services, not to mention virtualization and clouds, InfoSmack and Storage Monkeys should be on your read or listen list.

Recently I was invited to be a guest on the InfoSmack podcast, a roughly 50 minute talk show format covering storage, networking, virtualization and related topics.

The topics discussed include Sun and Oracle from a storage standpoint and Solid State Disk (SSD), among others.

Now, a word of caution, InfoSmack is not your typical prim and proper venue, nor is it a low class trash talking production.

It's fun and informative, with hosts and attendees who are not afraid of poking fun at themselves while exploring topics and the story behind the story in a candid, non-scripted manner.

Check it out.

Cheers – gs

Greg Schulz – StorageIOblog, twitter @storageio Author “The Green and Virtual Data Center” (CRC)

Has SSD put Hard Disk Drives (HDDs) On Endangered Species List?

Storage I/O trends

Disclosure: I have been a user, vendor, author and analyst covering SSD, as well as a fan (and continue to be), for over 20 years

I have been thinking about and wanting to post on this for a while, however recently several things popped up, including moderating a panel where a vendor representative told the audience that the magnetic hard disk drive (HDD) would be dead in two, at most three years. While there were a few nods from those in the audience, the majority smiled politely, chuckled, looked at their watches or returned to doing email, tweets and texting, or simply rolled their eyes in a way that said, yeah right, we have heard this before ( ;) ).

Likewise, I have done many events including seminars, keynotes (including at a recent CMG event, the performance and capacity planning group that I have been a part of for many years), webcasts and other interactions with IT pros, vendors, VARs and media. These interactions have covered, among other topics, IT optimization, boosting server and storage efficiency, as well as the role of tiering IT resources to boost efficiency and achieve better productivity while boosting performance in a cost-effective way during tough economic times, in other words, the other green IT!

Then the other day, I received an email from Mary Jander over at Internet Evolution. You may remember Mary from her days over at Byte & Switch. Mary was looking for a point, counterpoint, perspective and sound bite for a recent blog posting on her site, and basically asked if I thought that the high performance HDD would be dead in a couple of years at the hands of FLASH SSD. Having given Mary some sound bites and perspectives, which appear in her article/blog posting, there has since been a fun and lively discourse in Mary's Internet Evolution blog comment section, which could be seen by some as the pending funeral for high performance HDDs.

There has been a lot of work taking place, including by industry trade groups such as the SNIA Solid State Storage Initiative (SSSI) among others, not to mention many claims, discussions, banter and even some trash talk about how the magnetic hard disk drive (HDD), a technology now over 50 years old, is nearing the end of the road and is about to be replaced by FLASH SSD in the next two to three years, depending on who you talk with or listen to.

That may very well be the case, however I have a strong suspicion that while the high performance 3.5" Fibre Channel 15,500 revolutions per minute (15.5K RPM) HDD is nearing the end of the line, the 2.5" small form factor (SFF) Serial Attached SCSI (SAS) 15.5K (maybe faster in the future?) high performance, larger capacity HDD will not have met its demise in that two to three year timeframe.

The reason I subscribe to this notion is the need to balance performance, availability, capacity and energy against a given power, cooling, floor space and environmental need, along with price, to meet different tiers of application and data quality of service and service level needs. Simply put, even with some of the new or emerging intelligence capabilities of storage systems, there continues to be a need for tiered media. That is, tier-0 ultra fast SSD (FLASH or RAM) in a 2.5" form factor with SATA shifting to SAS connectivity, tier-1 fast 2.5" SAS 15.5K larger capacity HDDs, tier-2 2.5" SATA and SAS high-capacity 5.4K to 10K RPM HDDs, or ultra large capacity SAS and SATA 3.5" HDDs, to meet different performance, availability, capacity, energy and economic points.

Why not just use FLASH SSD for all high performance activity, after all it excels at reads, correct? Yup, however take a closer look at write performance, which is getting better and better, even with less reliance on intelligent controllers, firmware and RAM as a buffer. There is still a need for a balance of tier-0, tier-1, tier-2, tier-3 and other mediums to address different requirements and stretch strained IT budgets to do more with greater efficiency.

Maybe I'm stuck in my ways and spend too much time talking with IT professionals including server or storage architects, as well as IT planners, purchasers and others in the trenches, and not enough time drinking the Kool-Aid and believing the evangelists and truth squads ;). However, there is certainly no denying that Solid State Devices (SSD) using either RAM or FLASH are back in the spotlight again, as SSD has been in the past, this time for many reasons and with adoption continuing to grow. I think it is safe to say that some HDDs will fade away like earlier generations have, such as the 3.5" FC HDD, however other HDDs like the high performance 2.5" SAS HDDs have some time to enjoy before their funeral or wake.

What say you?

BTW, check out this popular (and free) StorageIO Industry Trends and Perspectives white paper report that looks at various data center performance bottlenecks and how to address them in order to transition towards becoming more efficient. However, a warning: you might actually be inclined to jump on the SSD bandwagon.

Oh, and there's nothing wrong with SSD; after all, as I mentioned earlier, I'm a huge fan. However, I'm also a huge fan of spinning HDDs, having skipped SSD in my latest computer purchases in favor of fast 7.2K (or faster) HDDs with FLASH for portability (encrypted of course). After all, it's also about balancing the different tiers of storage mediums to the task at hand, that is, unless you subscribe to the notion that one tool or technique should be used to solve all problems, which is fine if that is your cup of tea.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved