Part II: XtremIO, XtremSW and XtremSF EMC flash ssd portfolio redefined

Part one of this two-part post provided a summary of today’s EMC (@EMCflash) announcement around XtremIO and renaming VFCache to XtremSF and associated software as XtremSW.


Synopsis of announcement

  • Product rollout and selective availability of the new all flash SSD array XtremIO
  • Rename server-side PCIe SSD flash cards from VFCache to XtremSF
  • New XtremSF models including enhanced multi-level cell (eMLC) with larger capacities
  • Rename VFCache caching software to XtremSW (enables cache mode vs. target mode)

Now let's take a closer look at what was announced, along with what it means in terms of Industry Trends and Perspectives.

XtremIO has been in customer beta for some time, and now those beta customers along with some other early adopters are able to acquire the product. In addition, EMC is opening up XtremIO to more prospective customers (Directed Availability) who have requirements or needs that line up with the product's target market capabilities.


What this means is that XtremIO is not simply being put out into the general product population for broad distribution. Instead, it is being put into a controlled release (Directed Availability) to help customers, partners and EMC sales decide where best to use it, and thus avoid the risk of revenue prevention in other areas. The criteria or target opportunities (at least initially) are little-data applications including OLTP, server virtualization (where aggregation can cause aggravation) along with virtual desktop or VDI. In other words, many of the traditional or legacy IOPS-focused SSD opportunities.

In addition to XtremIO, EMC has renamed their VFCache PCIe flash SSD cards (launched February 2012) to XtremSF, along with new models using both SLC and MLC nand flash. Also as part of today's announcement, EMC is renaming the caching software for XtremSF (e.g. VFCache) to XtremSW. If that prompts the question of whether you can now buy XtremSF as a target-mode-only card without the cache software, the answer is yes.

What is XtremIO?

It is a new all flash SSD storage array. XtremIO is a cluster, grid or collection of nodes called bricks, with linear performance scaling, providing block-based all flash SSD storage. Data services consist of data footprint reduction (DFR), including inline global (across all nodes or bricks) dedupe on 4KB chunks, along with thin provisioning. Global dedupe is done on ingest using a combination of flash-buffered metadata (tables, index or dictionary) of what has been seen before, along with multi-threaded software to leverage multi-core processors. Using global dedupe at ingest, only new unique data is saved, based on 4KB chunks.
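
To illustrate the general idea, here is a minimal, single-threaded Python sketch of inline dedupe on fixed 4KB chunks using an in-memory fingerprint dictionary. This is not EMC's implementation (XtremIO's fingerprinting algorithm, metadata layout and threading are not spelled out in the announcement); the class name and the choice of SHA-256 as the fingerprint are purely illustrative.

```python
import hashlib

CHUNK_SIZE = 4096  # 4KB chunks, matching the dedupe granularity described above

class InlineDedupeStore:
    """Toy inline dedupe: only never-before-seen 4KB chunks are stored."""

    def __init__(self):
        self.fingerprints = {}   # fingerprint -> chunk id (the "metadata dictionary")
        self.chunks = []         # unique chunk payloads actually written

    def ingest(self, data: bytes):
        """Split incoming data into 4KB chunks, store only unique ones,
        and return the list of chunk ids that describe the stream."""
        refs = []
        for offset in range(0, len(data), CHUNK_SIZE):
            chunk = data[offset:offset + CHUNK_SIZE]
            fp = hashlib.sha256(chunk).hexdigest()   # illustrative fingerprint choice
            if fp not in self.fingerprints:          # new unique data: save it
                self.fingerprints[fp] = len(self.chunks)
                self.chunks.append(chunk)
            refs.append(self.fingerprints[fp])       # duplicate: just reference it
        return refs

store = InlineDedupeStore()
refs = store.ingest(b"A" * 8192 + b"B" * 4096 + b"A" * 4096)
print(refs, "unique chunks stored:", len(store.chunks))  # [0, 0, 1, 0] unique chunks stored: 2
```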

Performance, per EMC, scales in a near-linear fashion from a single node (brick) to two-brick and four-brick configurations. Note: architecturally more nodes can be added, with EMC indicating additional models will be available in the future.

In addition to DFR, other data services include writable snapshots and automatic load balancing when new bricks are added. Note that in a normally running XtremIO, data is automatically spread across the nodes for both performance and resiliency. Data only needs to be moved or load-balanced in the background when new bricks are added. Instant copy snapshots are supported along with writable snapshots. Currently replication is done via external EMC products such as VPLEX or RecoverPoint, with statements of direction (SOD) for future enhancements.

Additional attributes of XtremIO include:

  • Each node or brick (X-Brick) has up to 25 SSD drives (the Gen 1 hardware platform had 16)
  • All bricks are involved in IO and storage processing
  • Positioned by EMC as Software Defined (no proprietary hardware)
  • Four x 8Gb Fibre Channel (8GFC) and four x 10Gb Ethernet (iSCSI) per brick
  • Bricks communicate with each other via a separate interconnect network or fabric
  • Bricks have redundant processors (think of as controllers) with multiple sockets and cores
  • 4KB random read IOPS scale from 250K (one brick) to 500K (two bricks) and 1 million (four bricks). For 4K random write IOPS, the numbers are 100K, 200K and 400K across one, two and four brick configurations, with low latency and all data services running (EMC supplied numbers)

In addition to 4K being a commonly used or referred to IO size, it is also the same size as the new industry standard Advanced Format (AF). Today the standard storage block, page or sector size is 512 bytes; however, AF moves that to a larger 4,096 bytes (e.g. 4KB) to more closely align with larger IO sizes. Note that many HDDs and some SSDs today support AF and provide 512-byte emulation modes for compatibility.
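
As a quick illustration of why that 4KB boundary matters, the hypothetical snippet below checks whether a partition's starting offset (given in 512-byte logical blocks, as presented by a 512e-emulated AF drive) also lands on a 4,096-byte physical sector boundary; a misaligned start turns small writes into read-modify-write operations. The function name and example LBAs are illustrative only.

```python
AF_SECTOR = 4096           # Advanced Format physical sector size
LEGACY_SECTOR = 512        # logical sector size presented in 512-byte emulation mode

def is_af_aligned(start_lba_512: int) -> bool:
    """Given a partition start in 512-byte logical blocks (LBA),
    return True if it also falls on a 4KB physical sector boundary."""
    start_bytes = start_lba_512 * LEGACY_SECTOR
    return start_bytes % AF_SECTOR == 0

print(is_af_aligned(63))    # False - classic legacy partition start, misaligned on AF drives
print(is_af_aligned(2048))  # True  - 1MB-aligned start, fine for 4KB sectors
```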

What is XtremSF?

VFCache is renamed XtremSF, with new models using eMLC as a companion to the existing SLC PCIe cards and blade server mezzanine cards. EMC is emphasizing performance metrics that matter, including IOPS that are relevant to customer workloads, such as 4K, 8K or larger IO sizes with a mix of reads and writes, along with low latency. In addition to IOPS with latency, IO size and read/write mix for little data, EMC is also showing bandwidth or throughput numbers for big data and big bandwidth.

| Model | Capacity | Read Transfer | Write Transfer | Random 4K Read (IOPS) | Random 4K Write (IOPS) | Random 4K Mixed (IOPS) | Read latency (usec) | Write latency (usec) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2200 (eMLC) | 2.2 TB | 2.47 GB/s | 1.1 GB/s | 343K | 105K | 206K | 87us | 30us |
| 700 (SLC) | 700 GB | 2.9 GB/s | 1.8 GB/s | 712K | 197K | 411K | 50us | 13us |
| 550 (eMLC) | 550 GB | 1.36 GB/s | 512 MB/s | 174K | 49K | 96K | 87us | 37us |
| 350 (SLC) | 350 GB | 2.9 GB/s | 756 MB/s | 715K | 95K | 267K | 50us | 13us |

Sampling of SLC and eMLC XtremSF PCIe SSD card performance characteristics (via EMC), including latency measured in microseconds. Note the performance differences due to some cards being based on SLC and others on eMLC.

Additional attributes, some new and some previously announced include:

  • x8 PCIe lanes of bandwidth for performance
  • No IO impact to applications during garbage collection
  • Supports multi-core processor workloads with parallel design
  • Low CPU overhead by off-loading functions to PCIe card
  • Half-height, half-length PCIe form factor
  • Wear-leveling to extend nand flash program/erase (P/E) cycle endurance

Other storage, server and systems vendors including Cisco, Dell, HP, IBM, NetApp and Oracle offer various PCIe nand flash SSD cards in either target, cache or mixed modes. Manufacturers or suppliers of PCIe nand flash SSD cache and target cards include, among others, FusionIO, Intel, LSI, Micron, OCZ and Virident (who is partnered with Seagate).

What is XtremSW?

Server-side flash software (not to be confused with FAST) for using XtremSF as a tier 0 (server-side) SSD cache or target. In target mode, XtremSF functions as a high-performance, persistent, local dedicated direct attached storage (DAS) device. Cache mode enables frequently accessed data to be kept close to the applications, off-loading underlying storage systems so they can be used more effectively. XtremSW complements back-end storage systems for data protection and persistence, along with investment protection of those assets.


What this all means

SSD is in your future; the question is where, when and with what.

Why not just use SSD (DRAM and/or nand flash) everywhere?

Keep in mind that in the data center (traditional, virtual or cloud) everything is not the same. Thus the simple answer is that there is not enough of it available at a low enough price point (think closer to Hard Disk Drive (HDD) costs) to fit into customers' budgets. Sure, SSDs provide better performance and productivity benefits; however, while there is no such thing as a data or information recession, there are budget constraints.

Another reason why SSD can't simply be used everywhere is physical (and logical) constraints, such as the amount of memory a server can directly access, the fact that current DDR3 DIMMs can only address and work with DRAM (this could change with DDR4 according to Micron), PCIe bus physical slot space, and operating system and hypervisor addressing limits, among others.

If SSD (DRAM and/or nand flash) were priced low enough (e.g. much closer to HDDs) and available in sufficient quantity, it would be used far more broadly. SSD, including both DRAM and nand flash (SLC, MLC, eMLC, TLC, etc.) along with emerging Phase Change Memory (PCM), sits at the convergence of traditional memory and data storage. While some storage (or server) professionals may not agree, storage is an extension of memory and thus part of the traditional server and storage memory hierarchy shown below.

Storage I/O and cache locality of reference

This brings up the locality of reference topic, also shown in the following figure, where the best IO is the one that does not have to be done. The second best is the one that can be done closest to the application at a given level of service. Locality of reference, which is important for servers and storage systems including caching, refers to how close frequently accessed data is to where it is needed. For some applications this means as much DRAM main memory in a server as possible, either clustered, with battery backup or other data persistency protection including onboard HDD or SSD (e.g. towards the top of the hierarchy).

nand flash SSD and storage I/O location options

There are other applications where localized SSD (DRAM or nand flash) is a benefit to complement main memory, or as a persistent cache and target such as PCIe cards or SAS and SATA drives. Further down the stack, and for housing larger amounts of storage with performance (reads or writes, random or sequential) along with data services, is where all-SSD and hybrid (mix of SSD and HDD) arrays fit. Even further down the stack, and for a broader segment, is where cloud storage services based on SSD such as those from Rackspace (Cloud Block Storage with SSD) and Amazon (provisioned IOPS for EBS) have a play. Let's not forget about SSD in laptops, tablets and workstations; for example, I have a Samsung model 830 in my Lenovo X1.


Some general industry trends include:

  • SSD is like real estate, location can matter, a little can go a long way
  • SSD media options include DRAM and nand flash (SLC, MLC, eMLC, TLC)
  • Portfolios broadening with different products for various needs
  • SSD functionality in servers, appliances, storage systems and cloud services
  • All flash SSD arrays have not killed off all traditional or hybrid storage arrays
  • Focus expanding from Just a Bunch Of SSD (JBOS) to enterprise like functionality
  • Software needs hardware, hardware needs software, the two work better together
  • Comparing meaningful metrics that matter vs. industry marketing metrics


Some additional thoughts and perspectives

Does this mean traditional storage arrays are now dead?

IMHO, no, although there will be some cannibalization of existing storage systems by XtremIO within EMC customers or prospects if not managed, as well as by those from others. Keep in mind that recently EMC announced enhancements to their VMAX, including entry-level options for service providers. Some of the new opportunities opened up will be where traditional all-SSD (flash or DRAM) systems have historically had success.

Traditional SSD and new dedicated SSD systems include Texas Memory Systems (TMS), bought by IBM in 2012, and the recently announced NetApp EF540 (and future FlashRay), along with startups Solidfire, Violin and Whiptail among others. There will be environments where XtremIO may take care of all storage needs for a customer, or a specific application or piece of it. Then there will be other situations where XtremIO will co-exist with EMC or other vendors' storage solutions as part of a data infrastructure.


Who will EMC be competing against with XtremIO?

Certainly the startups or smaller players such as Violin, Whiptail, Purestorage, Solidfire along with IBM/TMS and NetApp EF540 (eventually FlashRay as well) among others.

There will also be some competition with other hybrid storage array vendors that have a mix of HDD and SSD. XtremIO will also compete in some situations on its own vs. other PCIe flash target and cache cards such as FusionIO; however, for the most part those will be up against XtremSF and XtremSW.

Why the slow or “Directed Availability” rollout?

Why not? By taking a controlled rollout, selecting and qualifying customers for XtremIO, EMC gets to manage how the product goes out into production and control how it is used to increase the chances of success. Unlike a startup that would be forced to try to put their new technology anywhere, EMC has the luxury of selecting where it goes, not to mention needing to avoid introducing a revenue prevention play for its other products.

Overall, I give an Atta boy and Atta girl to the EMC crew for a Product Defined Announcement (PDA) extending their flash portfolio to complement their different customers and prospects various environment needs. Now watch EMC, NetApp and others step up their flash dance moves to see who will out flash the others in the eXtreme flash games, not to mention emerging software defined marketing moves (SDMM) ;) .

Ok, nuff said.

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

XtremIO, XtremSW and XtremSF EMC flash ssd portfolio redefined

EMC (@EMCflash) today announced a new, enhanced, renamed and rebranded flash solid-state device (SSD) storage portfolio around the theme of XtremIO. XtremIO was the startup company with a new all flash SSD storage array that EMC announced they were buying in May 2012. Since that announcement, Project "X" has been used when referring to the product now known as XtremIO (e.g. the new all flash storage array).

Synopsis of announcement

  • Product rollout and selective availability of the new all flash SSD array XtremIO
  • Rename server-side PCIe SSD flash cards from VFCache to XtremSF
  • New XtremSF models including enhanced multi-level cell (eMLC) with larger capacities
  • Rename VFCache caching software to XtremSW (enables cache mode vs. target mode)

What was previously announced:

  • Buying the company XtremIO
  • Productizing the new all flash array as part of Project "X"
  • It would formally announce the new product in 2013 (which is now)
  • VFCache and later enhancements during 2012.


Overall, I give an Atta boy and Atta girl to the EMC crew for a Product Defined Announcement (PDA) extending their flash portfolio to complement their different customers and prospects various environment needs. Now let us sit back and watch EMC, NetApp and others step up their flash dance moves to see who will out flash the others in the eXtreme flash games, including software defined storage, software defined data centers, software defined flash, and software defined cache.

Related items about nand flash and metrics related themes:

Read more about XtremIO, XtremSF, XtremSW and flash related items here in part II of this post.

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

EMC VFCache respinning SSD and intelligent caching (Part II)

This is the second of a two part series pertaining to EMC VFCache, you can read the first part here.

In this part of the series, let's look at some common questions along with comments and perspectives.

Common questions, answers, comments and perspectives:

Why would EMC not just go into the same market space and mode as FusionIO, a model that many other vendors seem eager to follow? IMHO many vendors are following or chasing FusionIO, thus most are selling in the same way, perhaps to the same customers. Some of those vendors could very easily (if they have not already) make a quick change to their playbook, adding some new moves to reach a broader audience. Another smart move here is that by taking a companion or complementary approach, EMC can continue selling existing storage systems to customers, keeping those investments, while also supporting competitors' products. In addition, for those customers who are slow to adopt SSD-based techniques, this is a relatively easy and low-risk way to gain confidence. Granted, the disk drive was declared dead several years (and yes, also several decades) ago; however, it is and will stay alive for many years, in part due to SSD helping to close the IO storage and performance gap.

Storage IO performance and capacity gap
Data center and storage IO performance capacity gap (Courtesy of Cloud and Virtual Data Storage Networking (CRC Press))

Has this been done before? There have been other vendors who have done LUN caching appliances in the past going back over a decade. Likewise there are PCIe RAID cards that support flash SSD as well as DRAM based caching. Even NetApp has had similar products and functionality with their PAM cards.

Does VFCache work with other PCIe SSD cards such as FusionIO? No. VFCache is a combination of software IO intercept and intelligent cache driver along with a PCIe SSD flash card (which, as EMC has indicated, could be supplied from different manufacturers). Thus, for VFCache to be VFCache, it requires the EMC IO intercept and intelligent cache software driver.

Does VFCache work with other vendors' storage? Yes. Refer to the EMC support matrix; however, the product has been architected and designed to install into and coexist with a customer's existing environment, which means supporting different EMC block storage systems as well as those from other vendors. Keep in mind that a main theme of VFCache is to complement, coexist with, enhance and protect customers' investments in storage systems to improve their effectiveness and productivity, as opposed to replacing them.

Does VFCache introduce a new point of vendor lock-in or stickiness? Some will see or position this as a new form of vendor lock-in; others, assuming that EMC supports different vendors' storage systems downstream, offers options for different PCIe flash cards and keeps the solution affordable, will assert it is no more lock-in than other solutions. In fact, by supporting third-party storage systems as opposed to replacing them, smart salespeople and marketeers will position VFCache as being more open and interoperable than some other PCIe flash card vendors' approaches. Keep in mind that avoiding vendor lock-in is a shared responsibility (read more here).

Does VFCache work with NAS? VFCache does not work with NAS (NFS or CIFS) attached storage.

Does VFCache work with databases? Yes, VFCache is well suited for little data (e.g. databases) and traditional OLTP or general business application processing that may not be covered or supported by other so-called big data focused or optimized solutions. Refer to this EMC document (and this document here) for more information.

Does VFCache only work with little data? While VFCache is well suited for little data (e.g. databases, SharePoint, file and web servers, traditional business systems), it is also able to work with other forms of unstructured data.

Does VFCache need VMware? No. While VFCache works with VMware vSphere, including a vCenter plug-in, it does not need a hypervisor and is as practical in a physical machine (PM) as it is in a virtual machine (VM).

Does VFCache work with Microsoft Windows? Yes. Refer to the EMC support matrix for specific server operating system and hypervisor version support.

Does VFCache work with other Unix platforms? Refer to the EMC support matrix for specific server operating system and hypervisor version support.

How are reads handled with VFCache? The VFCache software (driver if you prefer) intercepts IO requests to LUNs that are being cached, performing a quick lookup to see if there is a valid cache entry on the physical VFCache PCIe card. If there is a cache hit, the IO is resolved from the closer or local PCIe card cache, making for a lower latency or faster response time IO. In the case of a cache miss, the VFCache driver simply passes the IO request onto the normal SCSI or block (e.g. iSCSI, SAS, FC, FCoE) stack for processing by the downstream storage system (or appliance). Note that when the requested data is retrieved from the storage system, the VFCache driver will, based on caching algorithm determinations, place a copy of the data in the PCIe read cache. Thus the real power of VFCache is the software implementing the cache lookup and cache management functions to leverage the PCIe card that complements the underlying block storage systems.

How are writes handled with VFCache? Unless put into a write cache mode, which is not the default, the VFCache software simply passes the IO operation onto the IO stack for downstream processing by the storage system or appliance attached via a block interface (e.g. iSCSI, SAS, FC, FCoE). Note that as part of the caching algorithms, the VFCache software will make determinations of what to keep in cache based on IO activity requests, similar to how cache management results in better cache effectiveness in a storage system. Given EMC's long history of working with intelligent cache algorithms, one would expect some of that DNA exists or will be leveraged further in future versions of the software. Ironically, this is where other vendors with long cache effectiveness histories such as IBM, HDS and NetApp among others should also be scratching their collective heads saying, wow, we can or should be doing that as well (or better).
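
For readers who want the mechanics of the two answers above spelled out, below is a minimal, hypothetical Python sketch (not EMC's code; the class, method names and eviction policy are made up for illustration) of a server-side read cache in front of block storage: cache hits are served locally, misses and all writes go to the downstream storage, reads populate the cache on the way back, and writes keep any cached copy coherent.

```python
class CachingDriver:
    """Toy model of a server-side read cache in front of block storage."""

    def __init__(self, backend, capacity_blocks):
        self.backend = backend                 # downstream storage (a dict of lba -> data here)
        self.capacity = capacity_blocks
        self.cache = {}                        # lba -> data held on the local flash card

    def read(self, lba):
        if lba in self.cache:                  # cache hit: resolved locally, low latency
            return self.cache[lba]
        data = self.backend[lba]               # cache miss: pass request down the IO stack
        self._admit(lba, data)                 # populate the cache per admission policy
        return data

    def write(self, lba, data):
        self.backend[lba] = data               # default mode: write goes through to the storage system
        if lba in self.cache:
            self.cache[lba] = data             # keep any cached copy coherent (avoid stale reads)

    def _admit(self, lba, data):
        if len(self.cache) >= self.capacity:   # naive eviction stand-in for real cache management
            self.cache.pop(next(iter(self.cache)))
        self.cache[lba] = data

backend = {0: b"blockA", 1: b"blockB"}
drv = CachingDriver(backend, capacity_blocks=128)
drv.read(0); drv.write(0, b"blockA2"); print(drv.read(0))  # served from the (coherent) cache
```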

Can VFCache be used as a write cache? Yes, while its default mode is to be used as a persistent read cache to complement server and application buffers in DRAM, along with enhancing the effectiveness of downstream storage system (or appliance) caches, VFCache can also be configured as a persistent write cache.

Does VFCache include FAST automated tiering between different storage systems? The first version is only a caching tool; however, think about it a bit, where the software sits, what storage systems it can work with, its ability to learn and understand IO paths and patterns, and you can get an idea of where EMC could evolve it to, similar to what they have done with RecoverPoint among other tools.

Changing data access patterns and lifecycles
Evolving data access patterns and life cycles (more retention and reads)

Does VFCache mean an all-or-nothing approach with EMC? While the complete VFCache solution comes from EMC (e.g. PCIe card and software), the solution will work with other block attached storage as well as existing EMC storage systems for investment protection.

Does VFCache support NAS based storage systems? The first release of VFCache only supports block based access; however, the server that VFCache is installed in could certainly be functioning as a general purpose NAS (NFS or CIFS) server (see supported operating systems in EMC interoperability notes) in addition to being a database or other application server.

Does VFCache require that all LUNs be cached? No, you can select which LUNs are cached and which ones are not.

Does VFCache run in an active / active mode? In the first release it is active / passive; refer to EMC release notes for details.

Can VFCache be installed in multiple physical servers accessing the same shared storage system? Yes, however refer to EMC release notes on details about active / active vs. active / passive configuration rules for ensuring data integrity.

Who else is doing things like this? There are caching appliance vendors as well as others such as NetApp and IBM who have used SSD flash caching cards in their storage systems or virtualization appliances. However, keep in mind that VFCache is placing the caching function closer to the application that is accessing it, thereby improving on the locality of reference (e.g. storage and IO effectiveness).

Does VFCache work with SSD drives installed in EMC or other storage systems? Check the EMC product support matrix for specific tested and certified solutions; however, in general, if the SSD drive is installed in a storage system that is supported as a block LUN (e.g. iSCSI, SAS, FC, FCoE), in theory it should be possible for it to work with VFCache. Emphasis: visit the EMC support matrix.

What type of flash is being used?

What type of nand flash SSD memory is EMC using in the PCIe card? The first release of VFCache is leveraging enterprise class SLC (Single Level Cell) nand flash, which has been used in other EMC products for its endurance and long duty cycle, to minimize or eliminate concerns of wear and tear while meeting read and write performance. EMC has indicated that, as part of an industry trend, they will also leverage MLC along with Enterprise MLC (EMLC) technologies on a go forward basis.

Doesn't nand flash SSD cache wear out? While nand flash SSD can wear out over time due to extensive write use, the VFCache approach mitigates this by being primarily a read cache, reducing the number of program/erase (P/E) cycles that occur with write operations, as well as initially leveraging longer duty cycle SLC flash. EMC also has several years of experience implementing wear-leveling algorithms in their storage system controllers to increase duty cycle and reduce wear on SLC flash, which will carry forward as MLC or Enterprise MLC (EMLC) techniques are leveraged. This differs from vendors who are positioning their SLC or MLC based flash PCIe SSD cards mainly for write operations, which will cause more P/E cycles to occur at a faster rate, reducing the duty or useful life of the device.
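
As a rough back-of-the-envelope illustration of why a read-heavy workload extends flash life, wear-out lifetime can be estimated from capacity, rated P/E cycles and daily write volume. The figures below (a 300GB card, 100,000 P/E cycles, the daily write rates) are assumptions chosen for the example, not EMC specifications.

```python
def flash_lifetime_years(capacity_gb, pe_cycles, writes_gb_per_day, write_amplification=1.0):
    """Estimate wear-out lifetime: total rated write volume divided by daily writes."""
    total_writes_gb = capacity_gb * pe_cycles / write_amplification
    return total_writes_gb / (writes_gb_per_day * 365)

# Assumed figures for illustration only: a 300GB SLC card rated at 100,000 P/E cycles.
print(flash_lifetime_years(300, 100_000, writes_gb_per_day=500))    # mostly-read cache: very long life
print(flash_lifetime_years(300, 100_000, writes_gb_per_day=5_000))  # write-heavy use: ~10x shorter
```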

How much capacity does the VFCache PCIe card contain? The first release supports a 300GB card and EMC has indicated that added capacity and configuration options are in their plans.

Does this mean disks are dead? Contrary to popular industry folklore (or wish), the hard disk drive (HDD) has plenty of life left, part of which has been extended by being complemented by VFCache.

Various options and locations for SSD along with different usage scenarios
Various SSD locations, types, packaging and usage scenario options

Can VFCache work in blade servers? The VFCache software is transparent to blade, rack mount, tower or other types of servers. The hardware part of VFCache is a PCIe card, which means that the blade server or system will need to be able to accommodate a PCIe card to complement the PCIe-based mezzanine IO card (e.g. iSCSI, SAS, FC, FCoE) used for accessing storage. What this means is that for blade systems or server vendors such as IBM, who have a PCIe expansion module for their H series blade systems (it consumes a slot normally used by a server blade), PCIe cache cards like those being initially released by EMC could work; however, check with the EMC interoperability matrix, as well as your specific blade server vendor, for PCIe expansion capabilities. Given that EMC leverages Cisco UCS for their vBlocks, one would assume that those systems will also see VFCache modules. NetApp partners with Cisco using UCS in their FlexPods, so you can see where that could go as well, along with potential support from other server vendors including Dell, HP, IBM and Oracle among others.

What about benchmarks? EMC has released some technical documents that show performance improvements in Oracle environments such as this here. Hopefully we will see EMC also release other workloads for different applications including Microsoft Exchange Solutions Proven (ESRP) along with SPC similar to what IBM recently did with their systems among others.

How do the first EMC supplied workload simulations compare vs. other PCIe cards? This is tough to gauge as many SSD solutions, and in particular PCIe cards, are doing apples to oranges comparisons. For example, to generate a high IOPS rating for marketing purposes, most SSD solutions are stress performance tested at 512 bytes, or 1/2 of a KByte, which is only 1/8 of a small 4KB IO. Note that operating systems such as Windows are moving to a 4KB page allocation size to align with growing IO sizes, with databases moving from the old average of 4KB to 8KB and larger. What is important to consider is the average IO size and activity profile (e.g. reads vs. writes, random vs. sequential) for your applications. If your application is doing ultra small 1/2 KByte IOs, or even smaller 64 byte IOs (which should be handled by better application or file system caching in DRAM), then the smaller IO size and record setting examples will apply. However, if your applications are more mainstream or use larger IOs, then those smaller IO size tests should be taken with a grain of salt. Also keep latency in mind: many target or opportunity applications for VFCache are response time sensitive or can benefit from the improved productivity it enables.
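
One way to normalize those apples-to-oranges IOPS claims is to convert them into bandwidth (IOPS multiplied by IO size). The quick calculation below uses a hypothetical headline figure of one million IOPS to show how much less data is actually moved when the test is run at 512 bytes rather than at 4KB or 8KB.

```python
def iops_to_mbps(iops, io_size_bytes):
    """Convert an IOPS rate at a given IO size into MB/sec of data actually moved."""
    return iops * io_size_bytes / 1_000_000

for size in (512, 4096, 8192):   # 1/2 KByte marketing size vs. 4KB and 8KB workload sizes
    print(f"1,000,000 IOPS @ {size:>4} bytes = {iops_to_mbps(1_000_000, size):,.0f} MB/sec")
# 1M IOPS at 512 bytes is ~512 MB/sec, while the same IOPS at 8KB would be ~8,192 MB/sec,
# which is why IO size (plus read/write mix and latency) must accompany any IOPS claim.
```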

What is locality of reference? Locality of reference refers to how close data is to where it is being requested or accessed from. The closer the data is to the application requesting it, the faster the response time, or the quicker the work gets done. For example, in the figure below, L1/L2/L3 onboard processor caches are the fastest yet smallest, while closest to the application running on the server. At the other extreme, further down the stack, storage becomes larger capacity and lower cost, however lower performing.

Locality of reference data and storage memory

What does cache effectiveness vs. cache utilization mean? Cache utilization is an indicator of how much of the available cache capacity is being used; however, it does not indicate whether the cache is being used well or not. For example, a cache could be 100 percent used, yet have a low hit rate. Thus cache effectiveness is a gauge of how well the available cache is being used to improve performance in terms of more work being done (IOPS or bandwidth) or lower latency and response time.
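
To make that distinction concrete, here is a small hypothetical calculation: both caches below are "100 percent utilized" (full), yet the average response time applications see tracks the hit rate, not the utilization. The latency figures (roughly 100 microseconds from a local flash cache, 5,000 microseconds from the downstream storage system) are assumptions for illustration only.

```python
def effective_latency_usec(hit_rate, cache_latency_usec, backend_latency_usec):
    """Average response time as a blend of cache hits and misses."""
    return hit_rate * cache_latency_usec + (1 - hit_rate) * backend_latency_usec

# Assumed latencies for illustration: ~100 usec local flash cache, ~5,000 usec downstream storage.
print(effective_latency_usec(0.90, 100, 5000))   # effective cache (90% hit rate): ~590 usec average
print(effective_latency_usec(0.20, 100, 5000))   # poorly used cache (20% hit rate): ~4,020 usec average
```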

Isn't more cache better? More cache is not necessarily better; it is how the cache is being used that matters. This is a message that I would be disappointed in HDS for not bringing up as a point of messaging (or rebuttal), given their history of emphasizing cache effectiveness vs. size or quantity (Hu, that is a hint btw ;).

What is the performance impact of VFCache on the host server? EMC is saying at most 5 percent CPU consumption, which they claim is several times less than the competition's worst scenario, as well as claiming 512MB to 1GB of DRAM usage on the server vs. several times that for their competitors. The difference could be expected to come from more off-load functioning, including the flash translation layer (FTL), wear-leveling and other optimization being handled by the PCIe card vs. being handled in the server's memory and using host server CPU cycles.

How does this compare to what NetApp or IBM does? NetApp, IBM and others have done caching with SSD in their storage systems, or leveraged third-party PCIe SSD cards from different vendors installed in servers to be used as a storage target. Some vendors such as LSI have done caching on the PCIe cards themselves (e.g. CacheCade, which in theory has a similar software caching concept to VFCache) to improve performance and effectiveness across JBOD and SAS devices.

What about stale (old or invalid) reads, how does VFCache handle or protect against those? Stale reads are handled via the VFCache management software tool or driver which leverages caching algorithms to decide what is valid or invalid data.

How much does VFCache cost? Refer to EMC announcement pricing, however EMC has indicated that they will be competitive with the market (supply and demand).

If a server shuts down or reboots, what happens to the data in the VFCache? Being that the data is in non-volatile SLC nand flash memory, information is not lost when the server reboots or loses power in the case of a shutdown; thus it is persistent. While exact details are not known as of this time, it is expected that the VFCache driver and software do some form of cache coherency and validity check to guard against stale reads or discard any other invalid cache entries.

Industry trends and perspectives

What will EMC do with VFCache in the future and on a larger scale such as an appliance? EMC, via its own internal development and via acquisitions, has demonstrated the ability to use various clustered techniques, such as RapidIO for VMAX nodes and InfiniBand for connecting Isilon nodes. Given an industry trend of several startups using PCIe flash cards installed in a server that then functions as an IO storage system, it seems likely, given EMC's history and experience with different storage systems, caching and interconnects, that they could do something interesting. Perhaps Oracle Exadata III (Exadata I was HP, Exadata II was Sun/Oracle) could be an EMC based appliance (that is pure speculation btw)?

EMC has already shown how it can use SSD drives as a cache extension in VNX and CLARiiON systems (FAST Cache), in addition to as a target or storage tier combined with FAST for tiering. Given their history with caching algorithms, it would not be surprising to see other instantiations of the technology deployed in complementary ways.

Finally, EMC is showing that it can use nand flash SSD in different ways and various packaging forms to apply to diverse applications or customer environments. The companion or complementary approach EMC is currently taking contrasts with some other vendors who are taking an all-or-nothing, "disk is dead, all SSD" approach. Given the large installed base of disk based systems EMC as well as other vendors have in place, not to mention the investment by those customers, it makes sense to allow those customers the option of when, where and how they can leverage SSD technologies to coexist with and complement their environments. Thus with VFCache, EMC is using SSD as a cache enabler to address the decades old and growing storage IO to capacity performance gap in a force multiplier model that spreads the cost over more TBytes, PBytes or EBytes while increasing the overall benefit, in other words effectiveness and productivity.

Additional related material:
Part I: EMC VFCache respinning SSD and intelligent caching
IT and storage economics 101, supply and demand
2012 industry trends perspectives and commentary (predictions)
Speaking of speeding up business with SSD storage
New Seagate Momentus XT Hybrid drive (SSD and HDD)
Are Hard Disk Drives (HDDs) getting too big?
Unified storage systems showdown: NetApp FAS vs. EMC VNX
Industry adoption vs. industry deployment, is there a difference?
Two companies on parallel tracks moving like trains offset by time: EMC and NetApp
Data Center I/O Bottlenecks Performance Issues and Impacts
From bits to bytes: Decoding Encoding
Who is responsible for vendor lockin
EMC VPLEX: Virtual Storage Redefined or Respun?
EMC interoperability support matrix

Ok, nuff said for now, I think I see some storm clouds rolling in

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

EMC VFCache respinning SSD and intelligent caching (Part I)

This is the first part of a two part series covering EMC VFCache, you can read the second part here.

EMC formally announced VFCache (aka Project Lightning), an IO accelerator product that comprises a PCIe nand flash card (aka Solid State Device or SSD) and intelligent cache management software. In addition, EMC is also talking about the next phase of the flash business unit and Project Thunder. The approach EMC is taking with VFCache should not be a surprise given their history of starting out with memory and SSD, evolving it into intelligent cache-optimized storage solutions.

Storage IO performance and capacity gap
Data center and storage IO performance capacity gap (Courtesy of Cloud and Virtual Data Storage Networking (CRC Press))

Could we see the future of where EMC will take VFCache, along with other possible solutions already being hinted at by the EMC flash business unit, by looking at where they have been already?

Likewise by looking at the past can we see the future or how VFCache and sibling product solutions could evolve?

After all, EMC is no stranger to caching, with both nand flash SSD (e.g. FLASH CACHE, FAST and SSD drives) and DRAM based caching across their product portfolio, not to mention it being a core part of their company's founding products that evolved into HDDs and more recent nand flash SSDs among others.

Industry trends and perspectives

Unlike others who also offer PCIe SSD cards, such as FusionIO with a focus on eliminating SANs or other storage (read their marketing), EMC not surprisingly is marching to a different beat. The beat EMC is marching to, or perhaps leading by example for others to follow, is that of going mainstream and using PCIe SSD cards as a cache to complement their own as well as other vendors' storage systems vs. replacing them. This is similar to what EMC and other mainstream storage vendors have done in the past, such as with SSD drives being used as a flash cache extension on CLARiiON or VNX based systems as well as a target or storage tier.

Various options and locations for SSD along with different usage scenarios
Various SSD locations, types, packaging and usage scenario options

Other vendors including IBM, NetApp and Oracle among others have also leveraged various packaging options of Single Level Cell (SLC) or Multi Level Cell (MLC) flash as caches in the past. A different example of SSD being used as a cache is the Seagate Momentus XT, which is a desktop, workstation and consumer type device. Seagate has shipped over a million of the Momentus XT, which uses SLC flash as a cache to complement and enhance the integrated HDD performance (a 750GB drive with 8GB of SLC memory is in the laptop I'm using to type this).

One of the premises of solutions such as those mentioned above for caching is to address the changing data access patterns and life cycles shown in the figure below.

Changing data access patterns and lifecycles
Evolving data access patterns and life cycles (more retention and reads)

Put a different way, instead of focusing on just big data, corner cases (granted, some of those are quite large) or ultra large cloud scale-out solutions, EMC with VFCache is also addressing their core business, which includes little data. What will be interesting to watch and listen to is how some vendors will start to jump up and down saying that they have been doing or enabling what EMC is announcing for some time. In some cases those vendors will rightfully be making noise about something that they should have made noise about before.

EMC is bringing the SSD message to the mainstream business and storage marketplace, showing how it is a complement to, vs. a replacement of, existing storage systems. By doing so, they will show how to spread the cost of SSD out across a larger storage capacity footprint, boosting the effectiveness and productivity of those systems. This means that customers who install the VFCache product can accelerate the performance of both their existing EMC storage systems as well as those from other vendors, preserving their technology along with people skills investments.

 

Key points of VFCache

  • Combines PCIe SLC nand flash card (300GB) with intelligent caching management software driver for use in virtualized and traditional servers

  • Making SSD complementary to existing installed block based disk (and/or SSD) storage systems to increase their effectiveness

  • Providing investment protection while boosting productivity of existing EMC and third party storage in customer sites

  • Brings caching closer to the application where the data is accessed, while leveraging larger scale direct attached and SAN block storage

  • Focusing message for SSD back on to little data as well as big data for mainstream broad customer adoption scenarios

  • Leveraging the benefit and strength of SSD as a read cache and the scalability of underlying downstream disk for data storage

  • Reducing concerns around SSD endurance or duty cycle wear and tear by using as a read cache

  • Off-loads underlying storage systems from some read requests, enabling them to do more work for other servers

Additional related material:
Part II: EMC VFCache respinning SSD and intelligent caching
IT and storage economics 101, supply and demand
2012 industry trends perspectives and commentary (predictions)
Speaking of speeding up business with SSD storage
New Seagate Momentus XT Hybrid drive (SSD and HDD)
Are Hard Disk Drives (HDDs) getting too big?
Unified storage systems showdown: NetApp FAS vs. EMC VNX
Industry adoption vs. industry deployment, is there a difference?
Two companies on parallel tracks moving like trains offset by time: EMC and NetApp
Data Center I/O Bottlenecks Performance Issues and Impacts
From bits to bytes: Decoding Encoding
Who is responsible for vendor lockin
EMC VPLEX: Virtual Storage Redefined or Respun?
EMC interoperability support matrix

Ok, nuff said for now, I think I see some storm clouds rolling in

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved