EMC VFCache respinning SSD and intelligent caching (Part II)

This is the second of a two-part series pertaining to EMC VFCache; you can read the first part here.

In this part of the series, let's look at some common questions along with comments and perspectives.

Common questions, answers, comments and perspectives:

Why would EMC not just go into the same market space and mode as FusionIO, a model that many other vendors seem eager to follow? IMHO many vendors are following or chasing FusionIO and thus most are selling in the same way, perhaps to the same customers. Some of those vendors could easily (if they have not already) make a quick change to their playbook, adding some new moves to reach a broader audience. Another smart move here is that by taking a companion or complementary approach, EMC can continue selling existing storage systems to customers and protect those investments while also supporting competitors' products. In addition, for those customers who are slow to adopt SSD based techniques, this is a relatively easy and low risk way to gain confidence. Granted, the disk drive was declared dead several years (and yes, also several decades) ago, however it is and will stay alive for many years, with SSD helping to close the storage IO performance gap.

Storage IO performance and capacity gap
Data center and storage IO performance capacity gap (Courtesy of Cloud and Virtual Data Storage Networking (CRC Press))

Has this been done before? Other vendors have done LUN caching appliances in the past, going back over a decade. Likewise there are PCIe RAID cards that support flash SSD as well as DRAM based caching. Even NetApp has had similar products and functionality with their PAM cards.

Does VFCache work with other PCIe SSD cards such as FusionIO? No. VFCache is a combination of software (an IO intercept and intelligent cache driver) along with a PCIe SSD flash card (which, as EMC has indicated, could be supplied by different manufacturers). Thus for VFCache to be VFCache requires the EMC IO intercept and intelligent cache software driver.

Does VFCache work with other vendors' storage? Yes, refer to the EMC support matrix; the product has been architected and designed to install into and coexist with a customer's existing environment, which means supporting different EMC block storage systems as well as those from other vendors. Keep in mind that a main theme of VFCache is to complement, coexist with, enhance and protect customers' investments in storage systems to improve their effectiveness and productivity, as opposed to replacing them.

Does VFCache introduce a new point of vendor lock-in or stickiness? Some will see or position this as a new form of vendor lock-in; others, assuming that EMC supports different vendors' storage systems downstream, offers options for different PCIe flash cards, and keeps the solution affordable, will assert it is no more lock-in than other solutions. In fact, by supporting third party storage systems as opposed to replacing them, smart sales people and marketers will position VFCache as more open and interoperable than some other PCIe flash card vendors' approaches. Keep in mind that avoiding vendor lock-in is a shared responsibility (read more here).

Does VFCache work with NAS? VFCache does not work with NAS (NFS or CIFS) attached storage.

Does VFCache work with databases? Yes, VFCache is well suited for little data (e.g. databases) and traditional OLTP or general business application processing that may not be covered or supported by other so-called big data focused or optimized solutions. Refer to this EMC document (and this document here) for more information.

Does VFCache only work with little data? While VFCache is well suited for little data (e.g. databases, SharePoint, file and web servers, traditional business systems), it is also able to work with other forms of unstructured data.

Does VFCache need VMware? No. While VFCache works with VMware vSphere, including a vCenter plug-in, it does not need a hypervisor and is as practical in a physical machine (PM) as it is in a virtual machine (VM).

Does VFCache work with Microsoft Windows? Yes, refer to the EMC support matrix for specific server operating system and hypervisor version support.

Does VFCache work with other UNIX platforms? Refer to the EMC support matrix for specific server operating system and hypervisor version support.

How are reads handled with VFCache? The VFCache software (driver if you prefer) intercepts IO requests to LUNs that are being cached, performing a quick lookup to see if there is a valid cache entry on the physical VFCache PCIe card. If there is a cache hit, the IO is resolved from the closer or local PCIe card cache, making for a lower latency or faster response time IO. In the case of a cache miss, the VFCache driver simply passes the IO request on to the normal SCSI or block (e.g. iSCSI, SAS, FC, FCoE) stack for processing by the downstream storage system (or appliance). Note that when the requested data is retrieved from the storage system, the VFCache driver will, based on caching algorithm determinations, place a copy of the data in the PCIe read cache. Thus the real power of VFCache is the software implementing the cache lookup and cache management functions to leverage the PCIe card that complements the underlying block storage systems.

How are writes handled with VFCache? Unless put into a write cache mode (which is not the default), VFCache software simply passes the IO operation on to the IO stack for downstream processing by the storage system or appliance attached via a block interface (e.g. iSCSI, SAS, FC, FCoE). Note that as part of the caching algorithms, the VFCache software will make determinations of what to keep in cache based on IO activity requests, similar to how cache management results in better cache effectiveness in a storage system. Given EMC's long history of working with intelligent cache algorithms, one would expect some of that DNA exists or will be leveraged further in future versions of the software. Ironically, this is where other vendors with long cache effectiveness histories such as IBM, HDS and NetApp among others should also be scratching their collective heads saying wow, we can or should be doing that as well (or better).
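
The read and write handling described in the last two answers can be sketched as a simple model: a lookup table on the PCIe card fronting a slower block backend, populating on read misses and passing writes straight through. This is a hypothetical illustration (the LRU policy, class and method names are invented for the sketch), not EMC's actual driver logic.

```python
# Conceptual sketch of a read cache with pass-through (write-through) writes,
# loosely modeled on the VFCache behavior described above. The LRU policy
# and all names here are illustrative assumptions, not EMC's implementation.
from collections import OrderedDict

class ReadCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()  # block address (LBA) -> data, in LRU order
        self.hits = 0
        self.misses = 0

    def read(self, lba, backend):
        if lba in self.cache:               # cache hit: serve from local flash
            self.hits += 1
            self.cache.move_to_end(lba)     # refresh LRU position
            return self.cache[lba]
        self.misses += 1
        data = backend.read(lba)            # cache miss: go downstream to the array
        self._insert(lba, data)             # populate the cache on the way back
        return data

    def write(self, lba, data, backend):
        backend.write(lba, data)            # pass the write straight through
        if lba in self.cache:               # keep any cached copy from going stale
            self.cache[lba] = data
            self.cache.move_to_end(lba)

    def _insert(self, lba, data):
        self.cache[lba] = data
        self.cache.move_to_end(lba)
        if len(self.cache) > self.capacity: # evict the least recently used block
            self.cache.popitem(last=False)
```

The key point the sketch captures is that a miss costs a full downstream trip plus a cache fill, while writes never linger in the card in the default mode, which is why data integrity stays with the backend storage system.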

Can VFCache be used as a write cache? Yes, while its default mode is to be used as a persistent read cache to complement server and application buffers in DRAM, along with enhancing the effectiveness of downstream storage system (or appliance) caches, VFCache can also be configured as a persistent write cache.

Does VFCache include FAST automated tiering between different storage systems? The first version is only a caching tool. However, think about it a bit: where the software sits, what storage systems it can work with, its ability to learn and understand IO paths and patterns, and you can get an idea of where EMC could evolve it to, similar to what they have done with RecoverPoint among other tools.

Changing data access patterns and lifecycles
Evolving data access patterns and life cycles (more retention and reads)

Does VFCache mean an all or nothing approach with EMC? While the complete VFCache solution comes from EMC (e.g. PCIe card and software), the solution will work with other block attached storage as well as existing EMC storage systems for investment protection.

Does VFCache support NAS based storage systems? The first release of VFCache only supports block based access; however, the server that VFCache is installed in could certainly be functioning as a general purpose NAS (NFS or CIFS) server (see supported operating systems in EMC interoperability notes) in addition to being a database or other application server.

Does VFCache require that all LUNs be cached? No, you can select which LUNs are cached and which ones are not.

Does VFCache run in an active/active mode? In the first release it is active/passive; refer to EMC release notes for details.

Can VFCache be installed in multiple physical servers accessing the same shared storage system? Yes, however refer to EMC release notes for details about active/active vs. active/passive configuration rules for ensuring data integrity.

Who else is doing things like this? There are caching appliance vendors as well as others such as NetApp and IBM who have used SSD flash caching cards in their storage systems or virtualization appliances. However, keep in mind that VFCache places the caching function closer to the application that is accessing the data, thereby improving locality of reference (e.g. storage and IO effectiveness).

Does VFCache work with SSD drives installed in EMC or other storage systems? Check the EMC product support matrix for specific tested and certified solutions; however, in general, if the SSD drive is installed in a storage system that is supported as a block LUN (e.g. iSCSI, SAS, FC, FCoE), in theory it should be possible for it to work with VFCache. Emphasis: visit the EMC support matrix.

What type of flash is being used?

What type of NAND flash SSD memory is EMC using in the PCIe card? The first release of VFCache leverages enterprise class SLC (Single Level Cell) NAND flash, which has been used in other EMC products for its endurance and long duty cycle, to minimize or eliminate concerns of wear and tear while meeting read and write performance. EMC has indicated that, as part of an industry trend, they will also leverage MLC along with Enterprise MLC (EMLC) technologies on a go forward basis.

Doesn't NAND flash SSD cache wear out? While NAND flash SSD can wear out over time due to extensive write use, the VFCache approach mitigates this by being primarily a read cache, reducing the number of program/erase (P/E) cycles that occur with write operations, as well as by initially leveraging longer duty cycle SLC flash. EMC also has several years of experience implementing wear leveling algorithms in their storage system controllers to increase duty cycle and reduce wear on SLC flash, which will carry forward as MLC or Enterprise MLC (EMLC) techniques are leveraged. This differs from vendors who are positioning their SLC or MLC based flash PCIe SSD cards mainly for write operations, which will cause more P/E cycles to occur at a faster rate, reducing the duty or useful life of the device.
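
As a back of the envelope illustration of why a read-mostly workload eases wear concerns, consider a rough endurance estimate. The P/E cycle ratings, daily write volume and write amplification factor below are illustrative assumptions (SLC is commonly rated on the order of 100,000 cycles, MLC far fewer), not EMC specifications.

```python
# Rough flash endurance estimate: total writable bytes divided by the
# effective daily write volume. All figures are illustrative assumptions.
def years_of_life(capacity_gb, pe_cycles, writes_gb_per_day, write_amplification=1.5):
    total_writable_gb = capacity_gb * pe_cycles          # capacity x rated P/E cycles
    effective_daily_gb = writes_gb_per_day * write_amplification
    return total_writable_gb / effective_daily_gb / 365

# A hypothetical 300GB SLC card (~100,000 P/E cycles) absorbing 500GB of
# cache fills per day has an enormous life expectancy; the same card built
# from ~3,000 cycle MLC and used as a heavy write cache wears out far sooner.
slc_years = years_of_life(300, 100_000, 500)
mlc_years = years_of_life(300, 3_000, 500)
```

The point is not the exact numbers, rather that endurance scales linearly with rated cycles and inversely with write volume, which is why a read cache role plus SLC media largely sidesteps the wear question.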

How much capacity does the VFCache PCIe card contain? The first release supports a 300GB card and EMC has indicated that added capacity and configuration options are in their plans.

Does this mean disks are dead? Contrary to popular industry folklore (or wish), the hard disk drive (HDD) has plenty of life left, part of which has been increased by being complemented by VFCache.

Various options and locations for SSD along with different usage scenarios
Various SSD locations, types, packaging and usage scenario options

Can VFCache work in blade servers? The VFCache software is transparent to blade, rack mount, tower or other types of servers. The hardware part of VFCache is a PCIe card, which means that the blade server or system will need to be able to accommodate a PCIe card to complement the PCIe based mezzanine IO card (e.g. iSCSI, SAS, FC, FCoE) used for accessing storage. What this means is that for blade systems from server vendors such as IBM, who have a PCIe expansion module for their H series blade systems (it consumes a slot normally used by a server blade), PCIe cache cards like those being initially released by EMC could work; however, check with the EMC interoperability matrix as well as your specific blade server vendor for PCIe expansion capabilities. Given that EMC leverages Cisco UCS for their vBlocks, one would assume that those systems will also see VFCache modules. NetApp partners with Cisco using UCS in their FlexPods, so you can see where that could go as well, along with potential support from other server vendors including Dell, HP, IBM and Oracle among others.

What about benchmarks? EMC has released some technical documents that show performance improvements in Oracle environments such as this here. Hopefully we will see EMC also release other workloads for different applications including Microsoft Exchange Solutions Proven (ESRP) along with SPC similar to what IBM recently did with their systems among others.

How do the first EMC supplied workload simulations compare vs. other PCIe cards? This is tough to gauge as many SSD solutions, and in particular PCIe cards, are doing apples to oranges comparisons. For example, to generate a high IOPS rating for marketing purposes, most SSD solutions are stress performance tested at 512 bytes, or 1/2 of a KByte (1/8 of a small 4KByte IO). Note that operating systems such as Windows are moving to a 4KByte page allocation size to align with growing IO sizes, with databases moving from the old average of 4KBytes to 8KBytes and larger. What is important to consider is the average IO size and activity profile (e.g. reads vs. writes, random vs. sequential) for your applications. If your application is doing ultra small 1/2 KByte IOs, or even smaller 64 byte IOs (which should be handled by better application or file system caching in DRAM), then the smaller IO size and record setting examples will apply. However, if your applications are more mainstream or larger, then those smaller IO size tests should be taken with a grain of salt. Also keep latency in mind, as many target or opportunity applications for VFCache are response time sensitive or can benefit from the improved productivity they enable.
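
A quick way to see the apples to oranges problem: at a fixed device bandwidth, IOPS is inversely proportional to IO size, so tiny IOs inflate the headline number. The 500 MB/s figure below is an arbitrary illustrative bandwidth, not a measured result for any particular card.

```python
# IOPS at a fixed bandwidth for different IO sizes: the same device looks
# roughly 8x "faster" tested at 512 bytes than at 4KB, and 128x vs 64KB.
def iops(bandwidth_mb_s, io_size_kb):
    return bandwidth_mb_s * 1024 / io_size_kb

for size_kb in (0.5, 4, 8, 64):
    print(f"{size_kb:>5} KB IOs at 500 MB/s -> {iops(500, size_kb):,.0f} IOPS")
```

This is why comparing a vendor's 512 byte IOPS record against your 8KB database workload tells you little; normalize to your own IO size (and latency profile) first.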

What is locality of reference? Locality of reference refers to how close data is to where it is being requested or accessed from. The closer the data is to the application requesting it, the faster the response time, or the quicker the work gets done. For example, in the figure below, L1/L2/L3 on board processor caches are the fastest, yet smallest, and closest to the application running on the server. At the other extreme, further down the stack, storage becomes larger capacity and lower cost, however lower performing.

Locality of reference data and storage memory
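
Locality of reference can be put in rough numbers: average access time is a hit rate weighted blend of the near (fast) and far (slow) tiers. The latencies below are order of magnitude assumptions for a PCIe flash cache in front of a networked array, not measured values for any product.

```python
# Average access time for a two-tier hierarchy: the higher the fraction of
# requests satisfied close to the application, the lower the average latency.
def avg_access_time_us(hit_rate, near_latency_us, far_latency_us):
    return hit_rate * near_latency_us + (1 - hit_rate) * far_latency_us

# Assumed ~100us local PCIe flash in front of an assumed ~5,000us array trip:
for hr in (0.0, 0.5, 0.9):
    print(f"hit rate {hr:.0%}: {avg_access_time_us(hr, 100, 5000):,.0f} us average")
```

Even a modest hit rate pulls the average sharply toward the fast tier, which is the whole argument for placing cache close to the application.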

What does cache effectiveness vs. cache utilization mean? Cache utilization is an indicator of how much of the available cache capacity is being used; however, it does not indicate whether the cache is being well used or not. For example, cache could be 100 percent used, however there could be a low hit rate. Thus cache effectiveness is a gauge of how well the available cache is being used to improve performance in terms of more work being done (IOPS or bandwidth) or lower latency and response time.
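
The distinction reduces to two simple ratios: utilization measures how full the cache is, while effectiveness (hit rate) measures how useful it is. A minimal illustrative sketch:

```python
# Utilization: fraction of cache capacity in use.
# Effectiveness (hit rate): fraction of IOs resolved from the cache.
def utilization(used_gb, capacity_gb):
    return used_gb / capacity_gb

def hit_rate(hits, total_ios):
    return hits / total_ios if total_ios else 0.0

# A cache can be 100 percent utilized yet poorly used: here only 20 percent
# of IOs actually hit, so performance benefit is small despite a "full" cache.
print(utilization(300, 300))   # 1.0
print(hit_rate(2000, 10000))   # 0.2
```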

Isn't more cache better? More cache is not necessarily better; it is how the cache is being used that matters. This is a message that I would be disappointed in HDS if they were not to bring up as a point of messaging (or rebuttal), given their history of emphasizing cache effectiveness vs. size or quantity (Hu, that is a hint btw ;).

What is the performance impact of VFCache on the host server? EMC is saying 5 percent or less CPU consumption, which they claim is several times less than the competition's worst scenario, as well as claiming 512MB to 1GB of DRAM on the server vs. several times that for their competitors. The difference can be attributed to more offload functioning, including the flash translation layer (FTL), wear leveling and other optimizations being handled by the PCIe card vs. being handled in the server's memory using host server CPU cycles.

How does this compare to what NetApp or IBM does? NetApp, IBM and others have done caching with SSD in their storage systems, or leveraged third party PCIe SSD cards from different vendors installed in servers for use as a storage target. Some vendors such as LSI have done caching on the PCIe cards (e.g. CacheCade, which in theory has a similar software caching concept to VFCache) to improve performance and effectiveness across JBOD and SAS devices.

What about stale (old or invalid) reads, how does VFCache handle or protect against those? Stale reads are handled via the VFCache management software tool or driver which leverages caching algorithms to decide what is valid or invalid data.

How much does VFCache cost? Refer to EMC announcement pricing, however EMC has indicated that they will be competitive with the market (supply and demand).

If a server shuts down or reboots, what happens to the data in the VFCache? Since the data is in non-volatile SLC NAND flash memory, information is not lost when the server reboots or loses power in the case of a shutdown; thus it is persistent. While exact details are not known as of this time, it is expected that the VFCache driver and software do some form of cache coherency and validity check to guard against stale reads or discard any invalid cache entries.

Industry trends and perspectives

What will EMC do with VFCache in the future and on a larger scale, such as an appliance? EMC, via its own internal development and via acquisitions, has demonstrated the ability to use various clustered techniques such as RapidIO for VMAX nodes and InfiniBand for connecting Isilon nodes. Given an industry trend of several startups using PCIe flash cards installed in a server that then functions as an IO storage system, it seems likely, given EMC's history and experience with different storage systems, caching, and interconnects, that they could do something interesting. Perhaps Oracle Exadata III (Exadata I was HP, Exadata II was Sun/Oracle) could be an EMC based appliance (that is pure speculation btw)?

EMC has already shown how it can use SSD drives as a cache extension in VNX and CLARiiON systems (FAST Cache), in addition to as a target or storage tier combined with FAST for tiering. Given their history with caching algorithms, it would not be surprising to see other instantiations of the technology deployed in complementary ways.

Finally, EMC is showing that it can use NAND flash SSD in different ways and various packaging forms to apply to diverse applications or customer environments. The companion or complementary approach EMC is currently taking contrasts with some other vendors who are taking an all or nothing, "it's all SSD as disk is dead" approach. Given the large installed base of disk based systems EMC as well as other vendors have in place, not to mention the investment by those customers, it makes sense to allow those customers the option of when, where and how they can leverage SSD technologies to coexist with and complement their environments. Thus with VFCache, EMC is using SSD as a cache enabler to address the decades old and growing storage IO to capacity performance gap in a force multiplier model that spreads the cost over more TBytes, PBytes or EBytes while increasing the overall benefit, in other words effectiveness and productivity.

Additional related material:
Part I: EMC VFCache respinning SSD and intelligent caching
IT and storage economics 101, supply and demand
2012 industry trends perspectives and commentary (predictions)
Speaking of speeding up business with SSD storage
New Seagate Momentus XT Hybrid drive (SSD and HDD)
Are Hard Disk Drives (HDDs) getting too big?
Unified storage systems showdown: NetApp FAS vs. EMC VNX
Industry adoption vs. industry deployment, is there a difference?
Two companies on parallel tracks moving like trains offset by time: EMC and NetApp
Data Center I/O Bottlenecks Performance Issues and Impacts
From bits to bytes: Decoding Encoding
Who is responsible for vendor lockin
EMC VPLEX: Virtual Storage Redefined or Respun?
EMC interoperability support matrix

Ok, nuff said for now, I think I see some storm clouds rolling in

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

EMC VFCache respinning SSD and intelligent caching (Part I)

This is the first part of a two-part series covering EMC VFCache; you can read the second part here.

EMC formally announced VFCache (aka Project Lightning), an IO accelerator product that comprises a PCIe NAND flash card (aka Solid State Device or SSD) and intelligent cache management software. In addition, EMC is also talking about the next phase of the flash business unit and project Thunder. The approach EMC is taking with VFCache should not be a surprise given their history of starting out with memory and SSD and evolving it into intelligent cache optimized storage solutions.

Storage IO performance and capacity gap
Data center and storage IO performance capacity gap (Courtesy of Cloud and Virtual Data Storage Networking (CRC Press))

Could we see the future of where EMC will take VFCache, along with other possible solutions already being hinted at by the EMC flash business unit, by looking at where they have been already?

Likewise, by looking at the past, can we see the future, or how VFCache and sibling product solutions could evolve?

After all, EMC is no stranger to caching, with both NAND flash SSD (e.g. FLASH CACHE, FAST and SSD drives) along with DRAM based caching across their product portfolio, not to mention being a core part of their company's founding products, which evolved into HDD and more recently NAND flash SSD based offerings among others.

Industry trends and perspectives

Unlike others who also offer PCIe SSD cards, such as FusionIO with a focus on eliminating SANs or other storage (read their marketing), EMC not surprisingly is marching to a different beat. The beat EMC is marching to, or perhaps leading by example for others to follow, is that of going mainstream and using PCIe SSD cards as a cache to complement their own as well as other vendors' storage systems vs. replacing them. This is similar to what EMC and other mainstream storage vendors have done in the past, such as with SSD drives being used as a flash cache extension on CLARiiON or VNX based systems, as well as a target or storage tier.

Various options and locations for SSD along with different usage scenarios
Various SSD locations, types, packaging and usage scenario options

Other vendors including IBM, NetApp and Oracle among others have also leveraged various packaging options of Single Level Cell (SLC) or Multi Level Cell (MLC) flash as caches in the past. A different example of SSD being used as a cache is the Seagate Momentus XT, which is a desktop, workstation, consumer type device. Seagate has shipped over a million Momentus XT drives, which use SLC flash as a cache to complement and enhance the integrated HDD performance (a 750GB model with 8GB of SLC memory is in the laptop I'm using to type this).

One of the premises of caching solutions such as those mentioned above is to address changing data access patterns and life cycles, shown in the figure below.

Changing data access patterns and lifecycles
Evolving data access patterns and life cycles (more retention and reads)

Put a different way, instead of focusing on just big data or corner cases (granted some of those are quite large) or ultra large cloud scale out solutions, EMC with VFCache is also addressing their core business, which includes little data. What will be interesting to watch and listen to is how some vendors will start to jump up and down saying that they have done or enabled what EMC is announcing for some time. In some cases those vendors will rightfully be making noise about something that they should have made noise about before.

EMC is bringing the SSD message to the mainstream business and storage marketplace, showing how it is a complement to, vs. a replacement of, existing storage systems. By doing so, they will show how to spread the cost of SSD out across a larger storage capacity footprint, boosting the effectiveness and productivity of those systems. This means that customers who install the VFCache product can accelerate the performance of both their existing EMC as well as other vendors' storage systems, preserving their technology along with people skills investment.

 

Key points of VFCache

  • Combines a PCIe SLC NAND flash card (300GB) with an intelligent caching management software driver for use in virtualized and traditional servers

  • Makes SSD complementary to existing installed block based disk (and/or SSD) storage systems to increase their effectiveness

  • Provides investment protection while boosting productivity of existing EMC and third party storage in customer sites

  • Brings caching closer to the application where the data is accessed, while leveraging larger scale direct attached and SAN block storage

  • Focuses the SSD message back on little data as well as big data for mainstream, broad customer adoption scenarios

  • Leverages the benefit and strength of SSD as a read cache and the scalability of the underlying downstream disk for data storage

  • Reduces concerns around SSD endurance or duty cycle wear and tear by using it as a read cache

  • Offloads underlying storage systems from some read requests, enabling them to do more work for other servers

Additional related material:
Part II: EMC VFCache respinning SSD and intelligent caching


IT and storage economics 101, supply and demand

In my 2012 (and 2013) industry trends and perspectives predictions I mentioned that some storage systems vendors who manage their costs could benefit from the current Hard Disk Drive (HDD) shortage. Most in the industry would say that is simply repeating what they have already said, however I have an alternate scenario. My scenario is that vendors who already make good (or great) margins on their HDD sales, and who can manage their costs including inventories, stand to make even more margin. There is a popular myth that there is no money or margin in HDDs, or for those who sell them, which might be true for some.

Without going into any details, let's just say it is a popular myth, just like saying that there is no money in hardware or that all software and people services are pure profit. Ok, let's let sleeping dogs lie where they rest (at least for now).

Why will some storage vendors make more margin off of HDDs when everybody is supposed to be adopting or deploying solid state devices (SSD), or Hybrid Hard Disk Drives (HHDD) in the case of workstations, desktops or laptops? Simple: SSD adoption (and deployment) is still growing, and there are a lot of demand generator incentives available. Likewise HDD demand continues to be strong, and with supplies affected, economics 101 says that some will raise their prices, manage their expenses, and make more profits, which can be used to help fund or stimulate increased SSD or other initiatives.

Storage, IT and general Economics 101

Economics 101 introduces the concept of supply and demand along with revenue minus costs = profits or margin. If there is no demand yet a supply of a product exists, then techniques such as discounting, bundling or other forms of adding value are used to incentivize customers to make a purchase. Bundling can include offering some other product, service or offering, which could be as simple as an extended warranty, to motivate buyers. Beyond discounts, coupons, two for one deals, future buying credits, gift cards or memberships for frequent buyers (or flyers) are other forms of stimulating sales activity.
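
The revenue minus costs = margin relationship above, and how it moves under discounting vs. a supply driven price increase, can be shown with a few made up illustrative numbers:

```python
# Margin = units x (average selling price - unit cost). All numbers invented.
def margin(units, asp, unit_cost):
    return units * (asp - unit_cost)

baseline   = margin(1000, asp=100, unit_cost=70)  # steady state
discounted = margin(1200, asp=85,  unit_cost=70)  # discounting: more units shipped
shortage   = margin(900,  asp=120, unit_cost=70)  # shortage: fewer units, higher ASP
```

Note the shortage case: fewer units shipped yet higher total margin than the baseline, which is exactly the scenario described for HDD vendors who hold their costs while supply is constrained.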

Likewise, if there is a large supply of, or competition in, a given market for a product or alternative, vendors or those selling the products, including value added resellers (VARs), may sacrifice margin (profits) to meet revenue as well as units shipped goals (e.g. expanding their customer and installed base footprint).

Currently in the IT industry, and specifically around data storage, even with increased and growing adoption, demand and deployment around SSD, there is also a large supply in different categories. For example, there are several fabrication facilities (FABs) that produce the silicon dies (e.g. chips) that form NAND flash SSD memories, including Intel, Micron, the joint Intel and Micron Fab (IMF) and Samsung. Even with continued strong demand growth, the various FABs seem to have enough capacity, at least for now. Likewise manufacturers of SSD drive form factor products with SAS or SATA interfaces for attaching to existing servers, storage or appliances, including Intel, Micron, Samsung, Seagate, STEC and SANdisk among others, seem to be able to meet demand. Even PCIe SSD card vendors have come under supply and demand pressure. For example, the high flying startup FusionIO recently saw its margins affected due to competition, which includes Adaptec, LSI, Texas Memory Systems (TMS) and soon EMC among others. In the SSD appliance and storage system space there are even more vendors, with what amounts to about one every month or so coming out of stealth. Needless to say there will be some shakeout in the not so distant future.

On the other hand, if there is demand however limited supply, assuming that the market will support it, prices can be increased from where discounts had applied. Assuming that costs are kept in line, any subsequent increase in average selling price (ASP) minus costs should result in higher margins.

Another variation is if there is strong demand and a shortage of supply, such as what is occurring with hard disk drives (HDD) due to recent flooding in Thailand; not only do prices increase, there can also be changes to warranties or other services and incentives. Note that some HDD manufacturers such as Western Digital were more affected by the flooding than Seagate. Likewise the Thailand flooding was not limited to just HDDs, having also affected other electronic chip and component suppliers. Even though HDDs have been declared dead by many in the SSD camps along with their supporters, record numbers of HDDs are produced every year. Note that economics 101 also tells us that even though more devices are produced and sold, that may not show a profit based on their cost and price. Like the CPU processor chips produced by AMD, Broadcom, IBM and Intel among others, which are high volume with varying margins, the HDD and NAND flash SSD markets are also high volume with different margins.

As an example, Seagate recently announced strong profits due to a number of factors, even though enterprise drive supply and shipments were down while desktop drives were up. Given that many industry pundits have proclaimed a disaster for those involved with HDDs due to the shortage, they forgot about economics 101 (supply and demand). Sure, marketing 101 says that HDDs are dead and that if there is a shortage then more people will buy SSDs; however, that also assumes that a) people are ready to buy more SSDs (e.g. demand), b) vendors or manufacturers have supply, and c) those same vendors or manufacturers are willing to give up margin while reducing costs to boost profits.

Note that costs typically include selling, general and administrative (SGA), cost of goods, manufacturing, transportation and shipping, insurance, and research and development among others. If it has been a while since you looked at one, take a few minutes sometime to look at public companies and their quarterly Securities and Exchange Commission (SEC) financial filings. Those public filing documents are a treasure trove of information for those who sift through them, and where many reporters, analysts and researchers find information for what they are working on or speculating about. These documents show total sales, costs, profits and losses among other things. Something that vendors may not show in these public filings, which means you have to read between the lines or get the information elsewhere, is how many units were actually shipped, or the ASP, to get an idea of the amount of discounting that is occurring. Likewise sales and marketing expenses often get lumped under SGA. A fun or interesting metric is the percentage of SGA dollars spent per dollar of revenue and profit.

What I find interesting is to get an estimate of what it is costing an organization to sustain a given level of revenue and margin. For example, while some larger vendors may seem to spend more on selling and marketing, on a percentage basis they can easily be outspent by smaller startups. Granted the larger vendor may be spending more actual dollars, however those are spread out over a larger sales and revenue base.
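As a back of the envelope illustration, the SGA percentage metric mentioned above is simple division; the dollar figures below are made up for illustration only, not taken from any actual filing:

```python
def sga_percent_of_revenue(sga_dollars, revenue_dollars):
    """Percentage of revenue consumed by selling, general and
    administrative (SGA) spending."""
    return 100.0 * sga_dollars / revenue_dollars

# Hypothetical example: a large vendor spending $200M of SGA on $2B of
# revenue vs. a startup spending $15M of SGA on $50M of revenue. The
# startup spends fewer actual dollars yet a far higher percentage.
big_vendor = sga_percent_of_revenue(200e6, 2e9)   # about 10 percent
startup = sga_percent_of_revenue(15e6, 50e6)      # about 30 percent
```

On a percentage basis the hypothetical startup outspends the larger vendor three to one, even though the larger vendor spends more actual dollars, which is the point made above.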

What does this all mean?

Look at multiple metrics that have both a future trend or forecast as well as a trailing or historical perspective. Look at percentages as well as dollar amounts, and at both revenue and margin, while also keeping the number of units, devices (or copies) sold in perspective. For example it is interesting to know if a vendor's sales were down (or up) 10% quarter over quarter, versus the same quarter a year ago, or year over year. It is also interesting to keep margin in perspective along with SGA costs in addition to the cost of product acquired for sale. Also important, if sales were down yet margins are up, is how many devices or copies were sold, as a gauge of expanding footprint, which can be a sign of future annuity (follow up sales opportunities). What I'm watching over the next couple of quarters is how some vendors leverage the Thailand flooding and HDD as well as other electronic component supply shortages by managing discounts, costs and other items that contribute to enhanced margins.

Rest assured there is a lot more to IT and storage economics, including advanced topics such as Return on Investment (ROI) or Return on Innovation (the new ROI) and Total Cost of Ownership (TCO) among others, that perhaps we will discuss in the future.

Ok, nuff fun for now, lets get back to work.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

What industry pundits love and loathe about data storage

Drew Robb has a good article about what IT industry pundits, including vendors, analysts and advisors, love and loathe about data storage, including comments from me.

In the article Drew asks: What do you really love about storage and what are your pet peeves?

One of my comments and perspectives is that I like Hybrid Hard Disk Drives (HHDDs) in addition to traditional Hard Disk Drives (HDDs) along with Solid State Devices (SSDs). As much as I like HHDDs, I also believe that, as with any technology, they are not the best solution for everything; however they can be used in more ways than commonly seen. Here is the fifth installment of a series on HHDDs that I have done since June 2010 when I received my first HHDD, a Seagate Momentus XT. You can read the other installments of my momentus moments here, here, here and here.

Seagate Momentus XT
HHDD with integrated nand flash SSD photo courtesy Seagate.com

Molly Rector, VP of marketing at tape summit resources vendor Spectra Logic, mentioned that what she does not like is companies that base their business plan on patent law trolling. I would have expected something different along the lines of countering or correcting people who say tape sucks or tape is dead, or who blame tape for anything wrong with storage, thus clearing the air or putting up a fight over tape summit resources. Go figure…

Another of my comments involved clouds, of which there are plenty of conversations taking place. I do like clouds (I even recently wrote a book involving them), however I'm a fan of using them where applicable to coexist with and enhance other IT resources. Don't be scared of clouds; however be ready, do your homework, listen, learn, and do proof of concepts to decide best practices for when, where, what and how to use them.

Speaking of clouds, click here to read about who is responsible for cloud data loss and cast your vote, along with viewing what do you think about IT clouds in general here.

Mike Karp (aka twitter @storagewonk), an analyst with Ptak Noel, mentions that midrange environments don't get respect from big (or even startup) vendors.

I would take that a step further by saying that, compared to six or so years ago, SMBs are getting night and day better respect and attention from most vendors; what is lacking is respect for the SOHO sector (e.g. the lower end of SMB down to, or just above, consumer).

Granted, some who have traditionally sold into those sectors, such as server vendors Dell and HP, get it or at least see the potential, along with traditional enterprise vendor EMC via its Iomega unit. Yet I still see many vendors, including startups, generally discounting, shrugging off or sneering at the SOHO space, similar to those who dissed or did not respect the SMB space several years ago. Like the SMB space, SOHO requires different products, packaging, pricing and routes to market via channel or etail mechanisms, which means change for some vendors. Those vendors who embraced the SMB and realized what needed to change to adapt to those markets will also stand to do better with the SOHO.

Here is the reason that I think SOHO needs respect.

Simple: SOHOs grow up to become SMBs, SMBs grow up to become SMEs, SMEs grow up to become enterprises, not to mention that the amount of data being generated, moved, processed and stored continues to grow. The net result is that SMB along with SOHO storage demands will continue to grow, and those vendors who can adjust to support those markets will also stand to gain new customers that in turn can become prospects for other solution offerings.

Cloud conversations

Not surprisingly, Eran Farajun of Asigra, which was doing cloud backups decades before they were known as clouds, loves backup (and restores). However I am surprised that Eran did not jump on the it's time to modernize and re-architect data protection theme. Oh well, will have to have a chat with Eran on that sometime.

What was surprising were comments from Panzura, which has a good distributed (e.g. read also cloud) file system that can be used for various things including online reference data. Panzura has a solution that I would normally not even think about in the context of being pulled into a Datadomain or dedupe appliance type discussion (e.g. tape sucks or other similar themes). So it is odd that they are playing to the tape sucks camp and theme vs. playing to where the technology can really shine, which IMHO is the global, distributed, scale out and cloud file system space. Oh well, I guess you go with what you know, or what has worked in the past, to get some attention.

Molly Rector of Spectra also mentioned that she likes High Performance Computing; I am surprised that she did not throw in high productivity computing as well, in conjunction with big data, big bandwidth, green, dedupe, power, disk, tape and related buzzword bingo terms.

Also there are some comments from me about cost cutting.

While I see the need for organizations to cut costs during tough economic times, I'm not a fan of simply cutting cost for the sake of cost cutting, as opposed to finding and removing the complexity that in turn removes the cost of doing work. In other words, I'm a fan of finding and removing waste, becoming more effective and productive, and removing the cost of doing a particular piece of work. This in the end meets the aim of bean counters to cut costs, however it can be done in a way that does not degrade service levels or the customer service experience. For example, instead of looking to cut backup costs, do you know where the real costs of doing data protection exist (hint: swapping out media is treating the symptoms), and if so, what can be done to streamline them from the source of the problem downstream to the target (e.g. media or medium)? In other words, redesign, review and modernize how data protection is done; leverage data footprint reduction (DFR) techniques including archive, compression, consolidation, data management, dedupe and other technologies in effective and creative ways. After all, return on innovation is the new ROI.

Check out Drew's article here to read more on the above topics and themes.

Ok, nuff said for now

Cheers gs


New Seagate Momentus XT Hybrid drive (SSD and HDD)

Seagate recently announced the next generation Momentus XT Hybrid Hard Disk Drive (HHDD) with a capacity of 750GB in a 2.5 inch form factor and an MSRP of $245.00 USD, including an integrated nand flash solid state device (SSD). As a refresher, the Momentus XT is an HHDD in that it includes a nand flash SSD (4GB in the first generation) integrated with a 500GB (or larger) 7,200 RPM hard disk drive (HDD) in a single 2.5 inch package.

Seagate Momentus XT
HHDD with integrated nand flash SSD photo courtesy Seagate.com

This is the fifth installment of a series that I have done since June 2010 when I received my first HHDD a Seagate Momentus XT. You can read the other installments of my momentus moments here, here, here and here.

What's new with the new generation?
Besides extra storage capacity, up to 750GB (was 500GB), there is twice as much single level cell (SLC) nand flash memory (8GB vs. 4GB in the previous generation), along with an enhanced 6Gb per second SATA interface that supports native command queuing (NCQ) for better performance. Note that NCQ was also available on the previous generation Momentus XT, which used a 3Gb SATA interface. Other enhancements include a larger block or sector size of 4,096 bytes vs. the traditional 512 bytes on previous generation storage devices.

This bigger sector size results in less overhead when managing data blocks on large capacity storage devices. Also new are caching enhancements: FAST Factor Flash Management, FAST Factor Boot and Adaptive Memory Technology. Not to be confused with EMC Fully Automated Storage Tiering (the other FAST), Seagate FAST is technology that exists inside the storage drive itself. FAST Factor Boot enables systems to boot and be productive at speeds similar to SSD, or several times faster than traditional HDDs.
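To put the sector size change in perspective, here is a simple sketch of how many sectors a drive must address at each size, using the 750GB capacity of the new Momentus XT (decimal gigabytes, as drive vendors count them):

```python
capacity_bytes = 750 * 10**9            # 750GB drive, decimal GB

sectors_512 = capacity_bytes // 512     # legacy 512 byte sectors
sectors_4k = capacity_bytes // 4096     # Advanced Format 4,096 byte sectors

# Roughly 1.46 billion sectors shrink to about 183 million, an eightfold
# reduction in the number of blocks to be tracked, mapped and error corrected.
print(sectors_512, sectors_4k, sectors_512 // sectors_4k)
```

Fewer, larger sectors also mean proportionally less per-sector housekeeping (gaps, sync marks, error correction codes) for the same capacity, which is where the reduced overhead comes from.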

FAST Factor Flash Management provides the integrated intelligence to maximize use of the nand flash or SSD capabilities along with the spinning HDD to boost performance, while maintaining compatibility with different systems and their operating systems. In addition to performance and interoperability, data integrity and SSD flash endurance are also enhanced for investment protection. Adaptive Memory technology is a self learning algorithm that gives SSD like performance for often used applications and data, helping to close the storage capacity to performance gap that has grown along with data center bottlenecks.

Some questions and discussion comments:

When to use SSD vs. HDD vs. HHDD?
If you need the full speed of SSD to boost performance across all data access, and cost is not an issue for the capacity you need, that is where you should be focused. However if you are looking for the lowest total cost of storage capacity with no need for performance, then lower cost, high capacity HDDs should be on your shopping list. On the other hand, if you want a mix of performance and capacity at an effective price, then HHDDs should be considered.

Why the price jump compared to first generation HHDD?
IMHO, it has a lot to do with current market conditions, supply and demand.

With the recent floods in Thailand and forecast HDD and other technology shortages, the law of supply and demand applies. This means that the supply of some products may be constrained, causing demand (and prices) to rise for others. Your particular vendor or supplier may have inventory, however they will be less likely to heavily discount while there are shortages or market opportunities to keep prices high. There are already examples of this if you check around on various sites to compare prices now vs. a few months ago. Granted it is also the holiday shopping season, for both people as well as organizations spending the last of their available budgets, so there is more demand for the available supply.

What kind of performance or productivity have I seen with HHDDs?
While I have not yet tested and compared the second generation or new devices, I can attest to the performance improvements, and resulting better productivity, over the past year of using Seagate Momentus XT HHDDs compared to traditional HDDs. Here is a post that you can follow to see some boot performance comparisons as part of some virtual desktop infrastructure (VDI) sizing testing I did earlier this year that included both HHDD and HDD.

Metric               HHDD desktop 1   HDD desktop 1     HHDD desktop 2
Avg. IOPS            334              69 to 113         186 to 353
Avg. MByte/sec       5.36             1.58 to 2.13      2.76 to 5.2
Percent IOPS read    94               80 to 88          92
Percent MBs read     87               63 to 77          84
MBytes read          530              201 to 245        504
MBytes written       128              60 to 141         100
Avg. read latency    2.24ms           8.2 to 9.5ms      1.3ms
Avg. write latency   10.41ms          20.5 to 14.96ms   8.6ms
Boot duration        120 seconds      120 to 240 sec    120 seconds

Click here to read the entire post about the above table

When will I jump on the SSD bandwagon?
Great question. I have actually been on the SSD train for several decades, using them, selling them, and covering, analyzing and consulting around them along with other storage mediums including HDD, HHDD, cloud and tape. I have some SSDs and will eventually put them into my laptops, workstations and servers as primary storage when the opportunity makes sense.

Will HHDDs help backup and other data protection tasks?
Yes, in fact I initially used my Momentus XTs as backup or data protection targets along with for moving large amounts of data between systems faster than what my network could support.

Why not use a SSD?
If you need the performance and can afford the price, go SSD!

On the other hand, if you are looking to add a small 64GB, 128GB or even 256GB SSD while retaining a larger capacity, slower and lower cost HDD, an HHDD should be considered as an option. By using an HHDD instead of both an SSD and an HDD, you avoid figuring out how to install both in space constrained laptops, desktops or workstations. In addition, you avoid either manually moving data between the different devices or having to acquire software or drivers to do that for you.

How much does the new Seagate Momentus XT HHDD cost?
The Manufacturer's Suggested Retail Price (MSRP) is listed at $245 for the 750GB version.

Does the Momentus XT HHDD need any special drivers, adapters or software?
No, they are plug and play. There is no need for caching or performance acceleration drivers, utilities or other software. Likewise there is no need for tiering or data movement tools.

How do you install an HHDD into an existing system?
It is similar to installing a new HDD to replace an existing one, if you are familiar with that process. If not, it goes like this (or use your own preferred approach):

  • Attach a new HHDD to an existing system using a cable
  • Utilize a disk clone or image tool to make a copy of the existing HDD to HHDD
  • Note that the system may not be able to be used during the copy, so plan ahead.
  • After the clone or image copy is made, shutdown system, remove existing HDD and replace it with the HHDD that was connected to the system during the copy (remember to remove the copy cable).
  • Reboot the system to verify all is well, note that it will take a few reboots before the HHDD will start to learn your data and files along with how they are used.
  • Regarding your old HDD, save it, put it in a safe place and use it as a disaster recovery (DR) backup. For example if you have a safe deposit box or somewhere else safe, put it there for when you will need it in the future.
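The clone step above is normally done with a purpose built clone or image tool; conceptually, though, it is just a raw block by block copy, which can be sketched as follows (the device paths in the comment are hypothetical examples, and copying raw devices requires care and elevated privileges):

```python
def clone_blocks(src_path, dst_path, block_size=4 * 1024 * 1024):
    """Copy src to dst one block at a time, dd style; returns bytes copied."""
    copied = 0
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            block = src.read(block_size)
            if not block:            # end of source reached
                break
            dst.write(block)
            copied += len(block)
    return copied

# For a real disk clone the arguments would be raw devices, for example
# clone_blocks("/dev/sda", "/dev/sdb"), run while the system is otherwise idle.
```

This is why the system should not be in use during the copy: any writes to the source after a block has been copied would be missing from the clone.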


Seagate Momentus XT and USB to SATA cable

Can an HHDD fit into an existing slot in a laptop, workstation or server?
Yes. In fact, unlike an HDD and SSD combination, which requires multiple slots or forces one device to be external, HHDDs like the Momentus XT simply use the space where your current HDD is installed.

How do you move data to it?
Beyond the first installation described above, the HHDD appears as just another local device meaning you can move data to or from it like any other HDD, SSD or CD.

Do you need automated tiering software?
No, not unless you need it for some other reason or if you want to use an HHDD as the lower cost, larger capacity option as a companion to a smaller SSD.

Do I have any of the new or second generation HHDDs?
Not yet; maybe soon, and I will do another momentus moment post when that time arrives. For the time being, I will continue to use the first generation Momentus XT HHDDs.

Bottom line (for now): if you are considering large capacity HDDs, check out the HHDDs for an added performance boost, including faster boot times as well as quicker access to other data.

On the other hand if you want an SSD however your budget restricts you to a smaller capacity version, look into how an HHDD can be a viable option for some of your needs.

Ok, nuff said

Cheers gs


Speaking of speeding up business with SSD storage

Solid state devices (SSDs) are a popular topic, gaining both industry adoption and customer deployment to speed up storage performance. Here is a link to a recent conversation that I had with John Hillard discussing industry trends and perspectives pertaining to using SSD to boost performance and productivity for SMB and other environments.

I/O consolidation from Cloud and Virtual Data Storage Networking (CRC Press) www.storageio.com/book3.html

SSDs can be a great way for organizations to do IO consolidation to reduce costs, in place of using many hard disk drives (HDDs) grouped together to achieve a certain level of performance. By consolidating the IOs off of many HDDs that often end up being underutilized on a space capacity basis, organizations can boost performance for applications while reducing, or reusing, HDD based storage capacity for other purposes, including growth.
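A rough sizing sketch of the IO consolidation idea, using assumed rule of thumb figures rather than measurements (a 15K RPM HDD delivering on the order of 150 IOPS is an assumption for illustration, as is the SSD figure):

```python
import math

def spindles_needed(target_iops, iops_per_hdd=150):
    """Number of HDDs that must be grouped together just to reach a target
    IOPS level, regardless of how much capacity that leaves underutilized."""
    return math.ceil(target_iops / iops_per_hdd)

# Hypothetical 20,000 IOPS workload: on the order of 134 HDDs grouped for
# performance, much of whose space capacity may sit idle, vs. a handful of
# enterprise SSDs (tens of thousands of IOPS each, assumed) serving the IO
# and freeing those HDDs for capacity growth or other purposes.
hdds = spindles_needed(20000)
```

The point is not the exact numbers, which vary by drive and workload, but that spindle counts driven by IOPS rather than capacity are where SSD based IO consolidation pays off.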

Here is some related material and comments:
Has SSD put Hard Disk Drives (HDDs) On Endangered Species List?
SSD and Storage System Performance
Are Hard Disk Drives (HDDs) getting too big?
Solid state devices and the hosting industry
Achieving Energy Efficiency using FLASH SSD
Using SSD flash drives to boost performance

Four ways to use SSD storage
4 trends that shape how agencies handle storage
Giving storage its due

You can read a transcript of the conversation and listen to the pod cast here, or download the MP3 audio here.

Ok, nuff said about SSD (for now)

Cheers Gs


Congratulations to IBM for releasing XIV SPC results

Over the past several years I have done an annual post about IBM and their XIV storage system, and this is the fourth in what has become a series. You can read the first one here, the second one here, and last year's here and here after the announcement of the IBM V7000.

IBM XIV Gen3
IBM recently announced the generation 3 (Gen3) version of XIV, along with releasing for the first time public performance comparison benchmarks using the Storage Performance Council (SPC) throughput oriented SPC2 workload.

The XIV Gen3 is positioned by IBM as having up to four (4) times the performance of earlier generations of the storage system. In terms of speeds and feeds, the Gen3 XIV supports up to 180 2TB SAS hard disk drives (HDDs), providing up to 161TB of usable storage capacity. For connectivity, the Gen3 XIV supports up to 24 8Gb Fibre Channel (8GFC) ports or, for iSCSI, 22 1Gb Ethernet (1GbE) ports, with a total of up to 360GBytes of system cache. In addition to the large cache to boost performance, other enhancements include leveraging multi core processors along with an internal InfiniBand network connecting the nodes, replacing the former 1GbE interconnect. Note that InfiniBand is only used to interconnect the various nodes in the XIV cluster and is not used for attachment to application servers, which is handled via iSCSI and Fibre Channel.

IBM and SPC storage performance history
IBM has a strong history of, if not leading the industry in, benchmarking and workload simulation of their storage systems, including Storage Performance Council (SPC) results among others. The exception for IBM over the past couple of years has been the lack of SPC benchmarks for XIV. Last year when IBM released their new V7000 storage system, benchmarks including SPC were available close to if not at the product launch. I have in the past commented about IBM's lack of SPC benchmarks for XIV to confirm their marketing claims, given their history of publishing results for all of their other storage systems. Now that IBM has released SPC2 results for the XIV, it is only fitting that I compliment them for doing so.

Benchmark brouhaha
Performance workload simulation results can often lead to apples and oranges comparisons, benchmark brouhaha battles or storage performance games. For example, a few years back NetApp submitted an SPC performance result on behalf of their competitor EMC. Now to be clear on something: I'm not saying that SPC is the best or definitive benchmark or comparison tool for storage or any other purpose, as it is not. However it is representative, and most storage vendors have released some SPC results for their storage systems in addition to TPC and Microsoft ESRP among others. SPC2 is focused on streaming such as video, backup or other throughput centric applications, while SPC1 is centered on IOPS or transactional activity. The metrics for SPC2 are megabytes per second (MBps) for large file processing (LFP), large database query (LDQ) and video on demand delivery (VOD) at a given price and protection level.

What is the best benchmark?
Simple: your own application with as close to actual workload activity as possible. If that is not possible, then a simulation or workload generator that most closely resembles your needs.

Does this mean that XIV is still relevant?
Yes

Does this mean that XIV G3 should be used for every environment?
Generally speaking, no. However its performance enhancements should allow it to be considered for more applications than in the past. Plus, with the public comparisons now available, that should help to silence questions (including those from me) about what the systems can really do vs. marketing claims.

How does XIV compare to some other IBM storage systems using SPC2 comparisons?

System    SPC2 MBps   Cost per SPC2 MBps   Storage GBytes   Price tested   Discount   Protection
DS5300    5,634.17    $74.13               16,383           $417,648       0%         R5
V7000     3,132.87    $71.32               29,914           $223,422       38-39%     R5
XIV G3    7,467.99    $152.34              154,619          $1,137,641     63-64%     Mirror
DS8800    9,705.74    $270.38              71,537           $2,624,257     40-50%     R5
In the above comparisons, the DS5300 (NetApp/Engenio based) is a dual controller system (4GB of cache per controller) with 128 x 146.8GB 15K HDDs configured as RAID 5, with no discount applied to the price submitted. The V7000, which is based on the IBM SVC along with other enhancements, consists of dual controllers each with 8GB of cache and 120 x 10K 300GB HDDs configured as RAID 5, with just under a 40% discount off list price for the system tested. For the XIV Gen3 system tested, the discount off list price for the submission is about 63%, with 15 nodes, a total of 360GB of cache and 180 2TB 7.2K SAS HDDs configured as mirrors. The DS8800 with dual controllers has 256GB of cache and 768 x 146GB 15K HDDs configured as RAID 5, with a discount of between 40 and 50% off of list.

What the various metrics do not show is the benefit of various features and functionality, which should be considered relative to your particular needs. Likewise, if your applications are not centered on bandwidth or throughput, then the above performance comparisons would not be relevant. Also note that the systems above were submitted at various discounted prices, which can be a hint to a smart shopper as to where to begin negotiations. You can also do some analysis of the various systems based on their performance, configuration, physical footprint, functionality and cost; the links below take you to the complete reports with more information.
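The cost per SPC2 MBps column in the table above is simply the price of the tested configuration divided by the SPC2 MBps result, which is easy to recompute from the figures as listed:

```python
# (SPC2 MBps, price of tested configuration in USD) per the table above
results = {
    "DS5300": (5634.17, 417648),
    "V7000": (3132.87, 223422),
    "XIV G3": (7467.99, 1137641),
    "DS8800": (9705.74, 2624257),
}

for system, (mbps, price) in results.items():
    # e.g. XIV G3: 1,137,641 / 7,467.99 = about $152.34 per SPC2 MBps
    print(f"{system}: ${price / mbps:,.2f} per SPC2 MBps")
```

Recomputing the ratio yourself is also a quick sanity check when comparing submissions made at different discount levels, since the published price (not list price) drives the metric.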

DS8800 SPC2 executive summary and full disclosure report

XIV SPC2 executive summary and full disclosure report

DS5300 SPC2 executive summary and full disclosure report

V7000 SPC2 executive summary and full disclosure report

Bottom line: benchmarks and performance comparisons are just that, comparisons that may or may not be relevant to your particular needs. Consequently they should be used as a tool, combined with other information, to see how a particular solution might fit your specific needs. The best benchmark, however, is your own application running as close as possible to a realistic workload to get a representative perspective of a system's capabilities.

Ok, nuff said
Cheers gs


Trick or treat: 2011 IT Zombie technology poll

Warning: Do not be scared, however be ready for some trick and treat fun, it is after all, the Halloween season.

I like new emerging technologies and trends along with Zombie technologies, you know, those technologies that have been declared dead yet are still being enhanced, sold and used.

Zombie technologies as a name may be new to some, while others will recognize the experience from the past: technologies being declared deceased yet still alive and being used. Zombie technologies are those that have been declared dead, yet live on, enabling productivity for the customers who use them and often profits for the vendors who sell them.

Zombie technologies

Some people consider a technology or trend dead once it hits the peak of hype as that can signal a time to jump to the next bandwagon or shiny new technology (or toy).

Others will see a technology as being dead when it is on the down slope of the hype curve towards the trough of disillusionment citing that as enough cause for being deceased.

Yet others will declare something dead while it matures working its way through the trough of disillusionment evolving from market adoption to customer deployment eventually onto the plateau of productivity (or profitability).

Then there are those who see something as being dead once it finally is retired from productive use, or profitable for sale.

Of course there are also those who just like to call anything new, or anything other than what they like or that is outside their comfort zone, dead. In other words, if your focus or area of interest is tied to new products, technology trends and their promotion, rest assured you had better be where the resources are being applied and view other things as being dead; thus you are probably not a fan of Zombie technologies (at least publicly).

On the other hand, if your area of focus is on leveraging technologies and products in a productive way, including selling things that are profitable without a lot of marketing effort, your view of what is dead or not will be different. For example, if you are risk averse, letting someone else be on the leading bleeding edge (unless you have a dual redundant HA blood bank attached to your environment), your view of what is dead or not will be much different from that of those promoting the newest trend.

Funny thing about being declared dead: often it is not the technology, implementation, research and development or customer acquisition, rather simply a lack of promotion, marketing and general awareness. Take tape, for example, which has been a multi decade member of the Zombie technology list. Recently vendors banded together, investing or spending on marketing awareness to reach out and say tape is alive. Guess what: lo and behold, there was a flurry of tape activity in venues that normally might not be talking about tape. Funny how marketing resources can bring something back from the dead, including Zombie technologies, to become popular or cool to discuss again.

With the 2011 Halloween season upon us, it is time to take a look at this year's list of Zombie technologies. Keep in mind that being named a Zombie technology is actually an honor, in that it usually means someone wants to see it dead so that his or her preferred product or technology can take its place.

Here are the 2011 Zombie technologies.

Backup: Far from being dead, its focus is changing and evolving with a broader emphasis on data protection. While many technologies associated with backup have been declared dead, along with some backup software tools, the reality is that it is time to modernize how backups and data protection are performed. Thus, backup is on the Zombie technology list and will live on, like it or not, until it is exorcised from your environment and replaced with a modern, resilient and flexible protected data infrastructure.

Big Data: While not declared dead yet, it will be soon by some creative marketer trying to come up with something new. On the other hand, there are those who have done big data analytics across different Zombie platforms for decades which of course is a badge of honor. As for some of the other newer or shiny technologies, they will have to wait to join the big data Zombies.

Cloud: Granted clouds are still on the hype cycle; some argue that the hype has reached its peak and is now heading down into the trough of disillusionment, which of course some see as meaning dead. In my opinion cloud hype has peaked or is close to peaking, and real work is occurring, which means a gradual shift from industry adoption to customer deployment. Put a different way, clouds will be on the Zombie technology list for a couple of decades or more. Also, keep in mind that being on the Zombie technology list is an honor, indicating a shift towards adoption with less emphasis on promotion or awareness fanfare.

Data centers: With the advent of the cloud, data centers or habitats for technology have been declared dead, yet there is continued activity in expanding or building new ones all the time. Even the cloud relies on data centers for housing the physical resources including servers, storage, networks and other components that make up a Green and Virtual Data Center or cloud environment. Needless to say, data centers will stay on the Zombie list for some time.

Disk Drives: Hard disk drives (HDDs) have been declared dead for many years and more recently, due to the popularity of SSDs, have lost their sex appeal. Ironically, if tape is dead at the hands of HDDs, then how can HDDs be dead, unless of course they are on the Zombie technology list. What is happening is that, like tape, the role of HDDs is changing as the technology continues to evolve, and they will be around for another decade or so.

Fibre Channel (FC): This is a perennial favorite, having been declared dead on a consistent basis for some two decades now, going back to the early 90s. While there are challengers, as there have been in the past, FC is far from dead as a technology with 16Gb (16GFC) now rolling out and a transition path to Fibre Channel over Ethernet (FCoE). My take is that FC will be on the Zombie list for several more years until finally retired.

Fibre Channel over Ethernet (FCoE): This is a new entrant and one uniquely qualified for being declared dead, as it is still in its infancy. Like its peer FC, which was also declared dead a couple of decades ago, FCoE is just getting started and looks to be on the Zombie list for a couple of decades into the future.

Green IT: I have heard that Green IT is dead; after all, it was hyped before the cloud era, which has also been declared dead by some, yet there remains a Green gap or disconnect between messaging and issues, and thus missed opportunities. For a dead trend, SNIA recently released their Emerald program, which consists of various metrics and measurements (remember, Zombies like metrics to munch on) for gauging energy effectiveness for data storage. The hype cycle of Green IT and Green storage may be dead; however, Green IT in the context of a shift in focus to increased productivity using the same or less energy is underway. Thus Green IT and Green storage are on the Zombie list.

iPhone: With the advent of Droid and other smart phones, I have heard iPhones declared dead, and granted, some older versions are. However, while Apple cofounder Steve Jobs has passed on (RIP), I suspect we will be seeing and hearing more about the iPhone for a few years more if not longer.

IBM Mainframe: When it comes to information technology (IT), the king of the Zombie list is the venerable IBM mainframe aka zSeries. The IBM mainframe has been declared dead for over 30 years if not longer and will be on the Zombie list for another decade or so. After all, IBM keeps investing in the technology as people keep buying them, not to mention that IBM built a new factory to assemble them in.

NAS: Congratulations to Network Attached Storage (NAS) including Network File System (NFS) and Windows Common Internet File System (CIFS) aka Samba or SMB for making the Zombie technology list. This means of course that NAS in general is no longer considered an upstart or immature technology; rather it is being used and enhanced in many different directions.

PC: The personal computer was touted as killing off some of its fellow Zombie technology list members including the IBM mainframe. With the advent of tablets, smart phones and virtual desktop infrastructures (VDI), the PC has been declared dead. My take is that while the IBM mainframe may eventually drop off the Zombie list in another decade or two if it finds something to do in retirement, the PC will be on the list for many years to come. Granted, the PC could live on even longer in the form of a virtual server where the majority of guest virtual machines (VMs) are in support of Windows based PC systems.

Printers: How long have we heard that printers are dead? The day that printers are dead is the day that the HP board of directors should really consider selling off that division.

RAID: It has been over twenty years since the first RAID white paper and early products appeared. Back in the 90s RAID was a popular buzzword and bandwagon topic; however, people have since moved on to new things. RAID has been on the Zombie technology list for several years now while it continues to find itself being deployed from the high end of the market down into consumer products. The technology continues to evolve in both hardware and software implementations on a local and distributed basis. Look for RAID to be on the Zombie list for at least the next couple of decades while it continues to evolve; after all, there is still room for RAID 7, RAID 8 and RAID 9, not to mention moving into hexadecimal or double digit variants.

SAN: Storage Area Networks (SANs) have been declared dead and thus on the Zombie technology list before, and will be mentioned again well into the next decade. While the various technologies will continue to evolve, networking your servers to storage will also expand into different directions.

Tape (see tape summit resources): Magnetic tape has been on the Zombie technology list almost as long as the IBM mainframe, and it is hard to predict which one will last longer. My opinion is that tape will outlast the IBM mainframe, as it will be needed to retrieve the instructions on how to deinstall those Zombie monsters. Tape has seen a resurgence in vendors spending marketing resources and, to no surprise, there has been an increase in coverage about it being alive, even at Google. Rest assured, tape is very safe on the Zombie technology list for another decade or more.

Windows: Similar to the PC, Microsoft Windows has been touted in the past as causing other platforms to be dead; however, it has itself been on the Zombie list for many years now. Given that Windows is the most commonly virtualized platform or guest VM, I think we will be hearing about Windows on the Zombie list for a few decades more. There are particular versions of Windows, as with any technology, that have gone into maintenance or sustainment mode or have even been discontinued.

Poll: What are the most popular Zombie technologies?

Keep in mind that a Zombie technology is one that is still in use, being developed or enhanced, sold usually at a profit and typically used in a productive way. In some cases, a declared dead or Zombie technology may only be in its infancy, having either just climbed over the peak of hype or come out of the trough of disillusionment. In other instances, the Zombie technology has been around for a long time yet continues to be used (or abused).

Note: Zombie voting rules apply, which means vote early, vote often, and of course vote for those who cannot, including those that are dead (real or virtual).

Ok, nuff said, enough fun, let's get back to work, at least for now.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

Measuring Windows performance impact for VDI planning

Here is a link to a recent guest post that I was invited to do over at The Virtualization Practice (TVP) pertaining to measuring the impact of Windows boot performance and what that means for planning Virtual Desktop Infrastructure (VDI) initiatives.

With Virtual Desktop Infrastructure (VDI) adoption being a popular theme in cloud and dynamic infrastructure environments, a related discussion point is the impact on networks, servers and storage during boot or startup activity and how to avoid bottlenecks. VDI solution vendors include Citrix, Microsoft and VMware along with various server, storage, networking and management tools vendors.

A common storage and network related topic involving VDI is boot storms, when many workstations or desktops all start up at the same time. However, any discussion around VDI and its impact on networks, servers and storage should also be expanded beyond read centric boots to include write intensive shutdown or maintenance activity as well.
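For rough sizing, the aggregate read demand of a boot storm is simply the number of desktops multiplied by per-boot read volume, divided by the length of the boot window. A minimal sketch; the desktop count, per-boot read volume and window below are hypothetical figures for illustration, not measured values:

```python
def boot_storm_mb_s(desktops: int, boot_read_mb: float, window_minutes: float) -> float:
    """Aggregate read demand if every desktop boots within the window."""
    return desktops * boot_read_mb / (window_minutes * 60)

# Hypothetical: 500 desktops each reading ~300MB at boot,
# all starting within a 10 minute window.
demand = boot_storm_mb_s(500, 300, 10)
print(f"{demand:.0f} MB/s aggregate read demand")  # 250 MB/s
```

Even these rough numbers show why a shared storage system that is comfortable during steady state can be overwhelmed when startup (or shutdown) activity is concentrated.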

Understanding your performance requirements is important to adequately design a configuration that will meet your Quality of Service (QoS) and service level objectives (SLOs) for VDI deployment, in addition to knowing what to look for in candidate server, storage and networking technologies. For example, knowing how your different desktop applications and workloads perform on a normal basis provides a baseline to compare with during busy periods or times of trouble. Another benefit is that when shopping for storage systems and reviewing various benchmarks, knowing your actual performance and application characteristics helps align the applicable technology to your QoS and SLO needs while avoiding apples to oranges benchmark comparisons.
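As a crude starting point for such a baseline (not a substitute for a purpose-built monitoring tool such as hIOmon), here is a sketch that times a sequential write and reports throughput; the file name and sizes are arbitrary choices for illustration:

```python
import os
import time

def baseline_write_mb_per_s(path: str = "baseline.tmp",
                            total_mb: int = 64,
                            block_kb: int = 256) -> float:
    """Write total_mb of data in block_kb chunks and report MB/s.

    A rough sequential-write baseline only; real VDI planning should
    capture each application's read/write mix over time.
    """
    block = os.urandom(block_kb * 1024)
    blocks = (total_mb * 1024) // block_kb
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # push data to the device, not just the page cache
    elapsed = time.perf_counter() - start
    os.remove(path)
    return total_mb / elapsed

print(f"Sequential write baseline: {baseline_write_mb_per_s():.1f} MB/s")
```

Capturing numbers like this during normal operation gives you something concrete to compare against when users report slowdowns.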

Check out the entire piece including some test results using the hIOmon tool from hyperIO to gather actual workstation performance numbers.

Keep in mind that the best benchmark is your actual applications running as close to possible to their typical workload and usage scenarios.

Also keep in mind that fast workstations need fast networks, fast servers and fast storage.

Ok, nuff said for now.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio


Have you heard of 2DRS data protection technology?

Have you heard of 2DRS as a data storage technology?

If not, don't worry; you would probably be in a minority if you said yes.

Anyway, Phil White of ECC Tek has sent lots of material about 2DRS (two dimensional error correction code: ECC) over the past few months.

In a nutshell, if you have an interest in data integrity, low level data storage topics, RAID, SSDs or HDDs, you may want to have a look. I have no affiliation with Phil, ECC Tek or 2DRS, nor can I vouch for what ECC Tek is doing. However, as he has been persistent (in a polite way), it is time to share some info and you can decide what to do with it.

The following is from Phil:

Hello,

You may be able to start a project to develop a 2D-RS product in your company.

You may be able to write and publish an article promoting the 2D-RS ideas.

You may be able to send me e-mail addresses of others who may be interested in the 2D-RS ideas.

You could forward this e-mail to others who may be interested in the 2D-RS ideas.

I am asking you to please take the time you need to read the web pages at the end of this e-mail, and please think seriously about the ideas and ask questions if something is unclear.

After you have read the web pages and thought about the ideas, I am asking that you please do one or more of the following things…

Start a project to develop a 2D-RS product in your company.
Write and publish an article to promote the 2D-RS ideas.
Send me e-mail addresses of others who may be interested in the 2D-RS ideas.
Forward this e-mail to others.

Regards,

Phil White
President
ECC Technologies, Inc. (ECC Tek)
4750 Coventry Road East
Minnetonka, MN 55345-3909
Phone: 952-935-2885
Fax:   952-935-2491
www.ecctek.com
 
Web Pages
ECC Teks Web Site
ECC Tek Company Profile
PRS Patent

2D ECC Concepts
2D RS HDDs
2D RS HDD Products
2D RS SSDs
2D RS Storage Systems
2D RS Comments
2D RS A
2D RS Believers

Basic ECC Concepts
Finite Fields, RS Codes and RS RAID
Finite Fields with 4bit Elements

I will leave it up to you if you want to check out what Phil has to say and if or where 2D may or may not be relevant.

Ok, nuff said for now.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio


Are Hard Disk Drives (HDDs) getting too big?

Let's start out by clarifying something: in terms of context or scope, big means storage capacity as opposed to the physical packaging size of a hard disk drive (HDD), which is getting smaller.

So are HDDs in terms of storage capacity getting too big?

This question of whether HDD storage capacity is getting too big to manage comes up every few years, and it is the topic of Rick Vanover's (aka twitter @RickVanover) Episode 27 podcast: Are hard drives getting too big?

Veeam community podcast guest appearance

As I discuss in this podcast with Rick Vanover of Veeam, with 2TB and even larger future 4TB, 8 to 9TB, 18TB, 36TB and 48 to 50TB drives not many years away, sure they are getting bigger (in terms of capacity); however, we have been here before (or at least some of us have). We discuss how back in the late 90s HDDs were going from 5.25 inch to 3.5 inch (now they are going from 3.5 inch to 2.5 inch), and 9GB drives were big and seen as a scary proposition by some for doing RAID rebuilds, drive copies or backups among other things, not to mention concerns about putting too many eggs (or data) in one basket.

In some instances vendors have been able to combine various technologies, algorithms and other techniques to RAID rebuild a 1TB or 2TB drive in the same or less time than it used to take to process a 9GB HDD. However, those improvements are not enough and more will be needed, leveraging faster processors, IO busses and backplanes, HDDs with more intelligence and performance, and different algorithms and design best practices among other techniques that I discussed with Rick. After all, there is no such thing as a data recession, with more information to be generated, processed, moved, stored, preserved and served in the future.
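The rebuild concern can be put in rough numbers: a best-case rebuild is bounded by capacity divided by sustained throughput. The throughput figures below are hypothetical ballpark values, chosen only to show that capacity has grown far faster than drive speed:

```python
def rebuild_hours(capacity_gb: float, throughput_mb_s: float) -> float:
    """Best-case sequential rebuild time: capacity / sustained throughput."""
    return (capacity_gb * 1024) / throughput_mb_s / 3600

# Hypothetical sustained rates: ~5 MB/s for a late-90s 9GB drive,
# ~100 MB/s for a 2TB drive circa 2011.
print(f"9GB  @ 5 MB/s:   {rebuild_hours(9, 5):.2f} hours")
print(f"2TB  @ 100 MB/s: {rebuild_hours(2000, 100):.2f} hours")
```

Under these assumptions capacity grew a couple of hundred fold while throughput grew about twenty fold, so even a best-case rebuild takes roughly ten times longer, before accounting for other I/O competing with the rebuild.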

If you are interested in data storage, check out Rick's podcast and hear some of our other discussion points, including how SSDs will help keep the HDD alive, similar to how HDDs are offloading tape from its traditional backup role, each with its changing or expanding focus among other things.

On a related note, here is a post about RAID remaining relevant yet continuing to evolve. We also talk about Hybrid Hard Disk Drives (HHDDs), where in a single sealed HDD device there is flash and DRAM along with a spinning disk, all managed by the drive's internal processor with no external special software or hardware needed.

Listen to comments by Greg Schulz of StorageIO on HDD, HHDD, SSD, RAID and more

Put on your headphones (or not) and check out Rick's podcast here (or via the headphone image above).

Thanks again Rick, really enjoyed being a guest on your show.

What's your take: are HDDs getting too big in terms of capacity, or do we need to leverage other tools, technologies and techniques to be more effective in managing an expanding data footprint, including use of data footprint reduction (DFR) techniques?

Ok, nuff said for now.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio


StorageIO Momentus Hybrid Hard Disk Drive (HHDD) Moments

This is the third in a series of posts that I have done about Hybrid Hard Disk Drives (HHDDs) along with pieces about Hard Disk Drives (HDDs) and Solid State Devices (SSDs). Granted, the HDD received its AARP card several years ago when it turned 50 and is routinely declared dead (or read here), even though it continues to evolve alongside maturing SSDs, with both expanding into different markets as well as usage roles.

For those who have not read previous posts about Hybrid Hard Disk Drives (HHDDs) and the Seagate Momentus XT you can find them here and here.

Since my last post, I have been using the HHDDs extensively and recently installed the latest firmware. The release of new HHDD firmware by Seagate for the Momentus XT (SD25), like its predecessor SD24, cleaned up some annoyances and improved overall stability. Here is a Seagate post by Mark Wojtasiak discussing SD25 and feedback obtained via the Momentus XT forum from customers.

If you have never done an HDD firmware update, it is not as bad or intimidating as might be expected. The Seagate firmware update tools make it very easy, that is, assuming you have a recent good backup of your data (one that can be restored) and about 10 to 15 minutes of time for a couple of reboots.

Speaking of stability, the Momentus XT HHDDs have been performing well, helping to speed up access to large documents on various projects including those for my new book. Granted, an SSD would be faster across the board; however, the large capacity at the price point of the HHDD is what makes it a hybrid value proposition. As I have said in previous posts, if you have the need for speed all of the time and time is money, get an SSD. Likewise, if you need as much capacity as you can get and performance is not your primary objective, then leverage the high capacity HDDs. On the other hand, if you need a balance of some performance boost with a capacity boost at a good value, then check out the HHDDs.

Image of Momentus XT courtesy of www.Seagate.com

Let's shift gears from the product or technology to common questions that I get asked about HHDDs.

Common questions I get asked about HHDDs include:

What is a Hybrid Hard Disk Drive?

A Hybrid Hard Disk Drive combines a rotating HDD, solid state flash persistent memory and volatile dynamic random access memory (DRAM) in an integrated package or product. The value proposition and benefit is a balance of performance and capacity at a good price for those environments, systems or applications that do not need all SSD performance (and cost), yet need some performance boost in addition to large capacity.

How does the Seagate Momentus XT differ from other hybrid disks?
One approach is to take a traditional HDD and pair it with an SSD using a controller, packaged in various ways. For example, on a large scale, HDDs and SSDs coexist in the same tiered storage system, managed by the controllers, storage processors or nodes in the solution, including automated tiering and cache promotion or demotion. The main difference between such storage system tiering and pairing and HHDDs is that in the case of the Momentus XT, the HDD, SLC flash (SSD functionality) and RAM cache, along with their management, are all integrated within the disk drive enclosure.

Do I use SSDs and HDDs or just HHDDs?
I have HHDDs installed internally in my laptops. I also have HDDs installed in servers, NAS and disk to disk (D2D) backup devices and Digital Video Recorders (DVRs), along with external SSDs and Removable Hard Disk Drives (RHDDs). The RHDDs are used for archive and master or gold copy data protection that goes offsite, complementing how I also use cloud backup services as part of my data protection strategy.

What are the technical specifications of a HHDD such as the Seagate Momentus XT?
3Gb/s SATA interface, 2.5 inch 500GB 7,200 RPM HDD with 32MB RAM cache and integrated 4GByte SLC flash, all managed via the internal drive processor. Power consumption varies depending on what the device is doing, such as initial power up, idle, normal or other operating modes. You can view the Seagate Momentus XT 500GB (ST95005620AS, which is what I have) specifications here as well as the product manual here.


One of my HHDDs on a note pad (paper) and other accessories

Do you need a special controller or management software?
Generally speaking, no. The HHDDs that I have been using plugged and played into my existing laptops' internal bays, replacing the HDDs that came with those systems. No extra software was needed for Windows, and no data movement or migration tools were needed other than when initially copying from the source HDD to the new HHDD. The HHDDs do their own caching, read ahead and write behind, independent of the operating system or controller. Now, the reason I say generally speaking is that, like many devices, some operating systems or controllers may be able to leverage advanced features, so check your particular system capabilities.

How come the storage system vendors are not talking about these HHDDs?
Good question, which I assume has a lot to do with the investment (people, time, engineering, money and marketing) that they have made or are making in controller and storage system software functionality to effectively create hybrid tiered storage systems using SSDs and HDDs on different scales. There have been some packaged HHDD systems or solutions brought to market by different vendors that combine HDDs and SSDs into a single physical package, glued together with some software and controllers or processors to appear as a single system. I would not be surprised to see discrete HHDDs (where the HDD, flash SSD and RAM are all one integrated product) appear in lower end NAS or multifunction storage systems as well as in backup, dedupe or other systems that require large amounts of capacity and a performance boost now and then.

Why do I think this? Simple: say you have five HHDDs, each with 500GB of capacity, configured as a RAID 5 set resulting in 2TByte of usable capacity. Using the Momentus XT as a hypothetical example, that yields 5 x 4GByte or 20GByte of flash cache to help accelerate write operations during data dumps, backups or other updates. Granted, that is an overly simplified example and storage systems can be found with hundreds of GBytes of cache; however, think in terms of value or low cost, balancing performance and capacity to cost for different usage scenarios. For example, applications such as bulk or scale out file and object storage including cloud or big data, entertainment, server (Citrix/Xen, Microsoft/HyperV, VMware/vSphere) and desktop virtualization or VDI, disk to disk (D2D) backup, and business analytics among others. The common tenet of those applications and usage scenarios is a combination of I/O and storage consolidation in a cost effective manner, addressing the continuing storage capacity to I/O performance gap.
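The arithmetic behind that example can be sketched as follows; RAID 5 reserves one drive's worth of capacity for parity, and the 4GByte flash figure comes from the Momentus XT specification quoted earlier:

```python
def raid5_usable_gb(drives: int, drive_gb: int) -> int:
    """RAID 5 keeps one drive's worth of capacity for parity."""
    return (drives - 1) * drive_gb

drives, drive_gb, flash_gb = 5, 500, 4

print(raid5_usable_gb(drives, drive_gb))  # 2000 GB, i.e. ~2TByte usable
print(drives * flash_gb)                  # 20 GB aggregate flash cache
```

The same function makes it easy to compare other configurations, such as four 1TB drives (3TB usable) versus five 500GB drives, when weighing cost against capacity.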

Data Center and I/O Bottlenecks

Storage and I/O performance gap

Do you have to backup HHDDs?
Yes, just as you would want to back up or protect any SSD or HDD device or system.

How does data get moved between the SSD and the HDD?
Other than the initial data migration from the old HDD (or SSD) to the HHDD (unless you are starting with a new system), once your data and applications exist on the HHDD, the device's internal processor automatically manages the RAM, flash and HDD activity. Unlike in a tiered storage system, where data blocks or files may be moved between different types of storage devices, inside the HHDD all data gets written to the HDD, while the flash and RAM are used as buffers for caching depending on activity needs. If you have sat through or listened to a NetApp or HDS use of cache for tiering discussion, what the HHDDs do is similar in concept, however on a smaller scale at the device level, potentially even in a complementary mode in the future. Other functions performed inside the HHDD by its processor include reading and writing, managing the caches, bad block replacement or re-vectoring on the HDD, wear leveling of the SLC flash and other routine tasks such as integrity checks and diagnostics. Unlike paired storage solutions where data gets moved between tiers or types of devices, once data is stored in the HHDD, it is managed by the device similar to how an SSD or HDD would move blocks of data to and from the specific media, along with leveraging RAM cache as a buffer.
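Conceptually, the behavior described above resembles a small LRU read cache sitting in front of the disk. The actual Momentus XT firmware is proprietary, so the toy model below is only an illustration of the general technique (writes always land on the "HDD", repeated reads get served from "flash"):

```python
from collections import OrderedDict

class ToyHybridDrive:
    """Illustrative model only: all writes persist on the 'HDD' (a dict),
    while a small LRU 'flash' cache absorbs repeated reads, mimicking how
    an HHDD speeds up frequently accessed blocks."""

    def __init__(self, cache_blocks: int = 4):
        self.hdd = {}                 # stands in for the spinning platters
        self.cache = OrderedDict()    # stands in for the flash read cache
        self.cache_blocks = cache_blocks
        self.hdd_reads = 0            # count of slow-path reads

    def write(self, lba: int, data: bytes) -> None:
        self.hdd[lba] = data          # data always lands on the platters
        self.cache.pop(lba, None)     # invalidate any stale cached copy

    def read(self, lba: int) -> bytes:
        if lba in self.cache:         # cache hit: served from flash
            self.cache.move_to_end(lba)
            return self.cache[lba]
        self.hdd_reads += 1           # cache miss: slow path to the HDD
        data = self.hdd[lba]
        self.cache[lba] = data
        if len(self.cache) > self.cache_blocks:
            self.cache.popitem(last=False)  # evict least recently used
        return data

d = ToyHybridDrive()
d.write(7, b"doc")
d.read(7); d.read(7); d.read(7)
print(d.hdd_reads)  # 1: only the first read touched the HDD
```

This also shows why performance "tends to improve over time, particularly on the same files": the cache warms up as the same blocks are accessed repeatedly.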

Where is the controller that manages the SSD and HDD?
The HHDD itself is the controller, per se, in that the internal processor that manages the HDD also directly accesses the RAM and flash.

What type of flash is used and will it wear out?
The XT uses SLC (single level cell) flash, which with wear leveling has a good duty cycle (life span) and is what is typically found in higher end flash SSD solutions vs. lower cost MLC (multi level cell) flash.

Have I lost any data from it yet?
No, at least nothing that was not my own fault from saving the wrong file in the wrong place and having to recover from one of my recent D2D copies or the cloud. Oh, regarding what I have done with the HDDs that were replaced by the HHDDs: they are now an extra gold master backup copy as of a particular point in time and are being kept in a safe secure facility, encrypted of course.

Have you noticed a performance improvement?
Yes; performance will vary, however in many cases I have seen performance comparable to SSDs on both reads and writes as long as the HDD keeps up with the flash and RAM cache. Even as larger amounts of data are written, I have seen better performance compared to HDDs. The caveat however is that initially you may see little to marginal performance improvement; over time, particularly on the same files, performance tends to improve. Working with large, tens to hundreds of MByte documents, I noticed good performance when doing saves compared to working with them on an HDD.

What do the HHDDs cost?
Amazon.com has the 500GB model for about $100, which is about $40 to $50 less than when I bought my most recent one last fall. I have heard from other people that you can find them at even lower prices at other venues. In the spirit of disclosure, I bought one of my HHDDs from Amazon and Seagate gave me one to test.

Will I buy more HHDDs or switch to SSDs?
Where applicable I will add SSDs as well as HDDs; however, where possible and practical, I will also add HHDDs, perhaps even replacing the HDDs in my NAS system with HHDDs at some point or maybe trying them in a DVR.

What is the down side to the HHDDs?
I am generating and saving more data on the devices at a faster rate, which means that although when I installed them I wondered if I would ever fill up a 500GB drive, I now can. I still have hundreds of GBytes free or available for use; however, I am also able to carry more reference data or information than in the past. In addition to more reference data including videos, audio, images, slide decks and other content, I have also been able to keep more versions or copies of documents, which has been handy on the book project. Data that changes gets backed up D2D as well as to my cloud provider, including while traveling. Leveraging compression and dedupe, given that many chapters or other content are similar, not as much data actually gets transmitted when doing cloud backups, which has been handy when doing a backup from an airplane flying over the clouds. A wish I have for the XT type of HHDD is for vendors such as Seagate to add Self Encrypting Disk (SED) capabilities along with continued intelligent power management (IPM) enhancements.

Why do I like the HHDD?
Simple: it solves both business and technology challenges while being an enabler; it gives me a balance of performance for productivity and capacity in a cost effective manner while being transparent to the systems it works with.

Here are some related links to additional material:
Data Center I/O Bottlenecks Performance Issues and Impacts
Has SSD put Hard Disk Drives (HDDs) On Endangered Species List?
Seagate Momentus XT SD 25 firmware
Seagate Momentus XT SD25 firmware update coming this week
A Storage I/O Momentus Moment
Another StorageIO Hybrid Momentus Moment
As the Hard Disk Drive (HDD) continues to spin
Funeral for a Friend
Seagate Momentus XT product specifications
Seagate Momentus XT product manual
Technology Tiering, Servers Storage and Snow Removal
Self Encrypting Disks (SEDs)

Ok, nuff said for now

Cheers Gs

Greg Schulz – Author The Green and Virtual Data Center (CRC), Resilient Storage Networks (Elsevier) and coming summer 2011 Cloud and Virtual Data Storage Networking (CRC)
twitter @storageio


Securing data at rest: Self Encrypting Disks (SEDs)

Here is a link to a recent guest post that I was invited to do over at The Virtualization Practice (TVP) pertaining to Self Encrypting Disks (SEDs).

Based on the Trusted Computing Group (TCG) DriveTrust and OPAL disk drive security models, SEDs offload encryption to the disk drive while complementing other encryption security solutions to protect against theft or loss of storage devices. There is another benefit to SEDs, however, which is simplifying the process of decommissioning a storage device safely and quickly.

If you are not familiar with them, SEDs perform encryption within the hard disk drive (HDD) itself using the onboard processor and resident firmware. Since SEDs only protect data at rest, other forms of encryption should be combined to protect data in flight or on the move.

There is also another benefit of SEDs: for those of you concerned about how to digitally destroy, shred or erase large capacity disks in the future, you may have a new option. While intended for protecting data, a byproduct is that when a SED is removed from the system, server or controller that it has established an affinity with, its contents are effectively useless until reattached. If the encryption key for a SED is changed, the data is instantly rendered useless, at least for most environments.
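The crypto erase idea can be illustrated with a toy example: data encrypted under one key is unreadable under any other key, so changing (or discarding) the drive's key effectively shreds the data without overwriting a single sector. The cipher below is NOT real cryptography or real SED firmware, just an XOR keystream for illustration:

```python
import hashlib

def xor_keystream(data: bytes, key: bytes) -> bytes:
    """Toy stream cipher (NOT real cryptography): XOR the data with a
    hash-derived keystream, purely to show that losing the key loses the data."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

plaintext = b"sensitive record"
old_key, new_key = b"device-key-1", b"device-key-2"

on_disk = xor_keystream(plaintext, old_key)            # what the platters hold
assert xor_keystream(on_disk, old_key) == plaintext    # readable with the key
assert xor_keystream(on_disk, new_key) != plaintext    # key change = instant shred
```

A real SED does this with an AES engine in the drive and never exposes the media key, but the decommissioning logic is the same: destroy the key and the bits on the platters become noise.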

Learn more about SEDs here and via the following links:

  • Self-Encrypting Drives for IBM System x
  • Trusted Computing Group OPAL Summary
  • Storage Performance Council (SPC) SED and Non SED benchmarks
  • Seagate SED information
  • Trusted Computing Group SED information

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio


What records will EMC break in NYC January 18, 2011?


In case you have not seen or heard, EMC is doing an event next week in New York City (NYC) at the AXA Equitable Center, winter weather snow storm clouds permitting (and adequate tools or technologies for the snow removal), with a theme around breaking records. If you have yet to see any of the advertisements, blogs, tweets, Facebook, Friendfeed, Twitter, YouTube or other mediums' messages, here (and here and here) are a few links to learn more as well as register to view the event.

Click on the above image to see more

There is already speculation, along with IT industry wiki leaks, about what will be announced or talked about next week that you can google or find at some different venues.

The theme of the event is breaking records.

What might we hear?

In addition to the advisor, author, blogger and consultant hats that I wear, I am also in EMC's analyst relations program and as such under NDA; consequently, as to what the actual announcement will be next week, no comment for now. BTW, I also wear other hats including one from Boeing even though I often fly on Airbus products as well.

If it's not Boeing I'm not going, except I do also fly Airbus, Embraer and Bombardier products
Other hats I wear

However, how about some fun as to what might be covered at next week's event without getting into a wiki leak situation?

  • A no brainer would be product (hardware, software, services) related, as it is mid-January, and if you have been in the industry for more than a year or two, you might recall that EMC tends to do a mid-winter launch around this time of year along with sometimes an early summer refresh. Guess what time of the year it is.
  • I'm guessing lots of superlatives, perhaps at a record breaking pace (e.g. revolutionary first, explosive growth, exponential explosive growth, perfect storm among others that could be candidates for the Storagebrain wall of fame or shame)
  • Maybe we will even hear that EMC has set a new record for the number of members in Chad's army, aka the vspecialists focused on vSphere related topics, along with a (quietly) growing number of Microsoft Hyper-V specialists.
  • That EMC has a record number of twitter tweeps engaged in conversations (or debates) with different audiences, collectives, communities, competitors, customers, individuals, organizations, partners or venues among others.
  • Possibly that their involvement in the CDP (Carbon Disclosure Project) has resulted in enough savings to offset the impact of hosting the event, making it carbon and environment neutral. After all, we already know that EMC has been in the CDP as in Continual or Constant Data Protection as well as Complete or Comprehensive Data Protection along with Cloud Data Protection, not to mention Common Sense Data Protection (CSDP), for some time now.
  • Perhaps something around the number of acquisitions, patents, platforms, products and partners they have amassed recently.
  • For investors, wishful thinking that they will be moving their stock into record territory.
  • I'm also guessing we will hear or see a record number of tweets, posts, videos and stories.
  • To be fair and balanced, I'm also expecting a record number of counter tweets, counter posts, counter videos and counter stories coming out of the event.

Some records I would like to see EMC break, however I'm not going to hold my breath, at least for next week, include:

  • Announcement of upping the game in the performance benchmarking battles with record setting or breaking results on various SPC benchmarks submitted on their own (instead of via a competitor or here) in different categories of block storage devices, along with entries for SSD based, clustered and virtualized. Of course we would expect to hear how those benchmarks and workload simulations really do not matter, which would be fine; at least they would have broken some records.
  • Announcement of having shipped more hard disk drives (HDD) than anyone else in conjunction with shipping more storage than anyone else. Despite the HDD being continually declared dead (it's not) and SSD gaining traction, EMC would have a record breaking leg to stand on if they qualify the amount of storage shipped as external, shared or networked (SAN or NAS) as opposed to collective (e.g. HP with servers and storage among others).
  • Announcement that they are buying Cisco, or Cisco is buying them, or that they and Cisco are buying Microsoft and Oracle.
  • Announcement of being proud of the record setting season of the Patriots, devastated at losing a close and questionable game to the NY Jets, and wishing them well in the 2010 NFL Playoffs (I'm just sayin…).
  • Announcement of being the first vendor and solution provider to establish SaaS, PaaS, IaaS, DaaS and many other XaaS offerings via their out of this world new moon base (plans underway for Mars as part of a federated offering).
  • Announcement that Fenway Park will be rebranded as the house that EMC built (or rebuilt).

Disclosure: I will be in NYC on Tuesday the 18th as one of EMC's many guests for whom they have picked up airfare and lodging; thanks to Len Devanna and the EMC social media crew for reaching out and extending the invitation.

Other guests of the event will include analysts, advisors, authors, bloggers, beat writers, consultants, columnists, customers, editors, media, paparazzi, partners, press, protesters (hopefully polite ones), publishers, pundits, twitter tweeps and writers among others.

I wonder if there will also be a record number of disclosures made by others attending the event as guests of EMC?

More after (or maybe during) the event.

Ok, nuff said.

Cheers gs
