Intelligent Power Management (IPM) and second generation MAID 2.0 on the rise

Storage I/O trends

In case you missed it, today Adaptec announced that they are the first vendor “this week” to add support for Intelligent Power Management (IPM) to their storage systems. Adaptec joins a growing list of vendors who are deploying, or announcing plans for, some variation of IPM and second generation MAID 2.0 capability, including support for different types of tiered disk drives in various combinations of Fibre Channel, SAS and SATA.

As a quick refresher, Massive or Monolithic Arrays of Idle or Inactive Disks (MAID) was popularized by 1st generation MAID vendor Copan, whose systems spin down disk drives to avoid energy usage. One of the challenges with 1st generation MAID is poor performance, given that at most 25% of the disk drives can be spinning at any time to transfer data when needed.

This is a balancing act between achieving energy avoidance and its associated benefits vs. maintaining performance to move data when needed, particularly for large restorations to support BC/DR or other purposes. Granted, 1st generation MAID systems like those from Copan have been positioned on one hand as alternatives to high-performance disk storage systems to amplify potential energy savings, or on the other hand as alternatives to magnetic tape by providing random restore capability. The reality is that 1st generation MAID systems are finding their niche not as on-line primary or even on-line secondary storage, nor as a direct replacement for tape or disk based libraries to support large-scale BC/DR, but rather in a sweet spot between secondary storage and near-line disk libraries or virtual tape libraries, with a target application of very infrequently accessed data.

Second generation MAID, aka MAID 2.0, is an evolution of the general technologies and capabilities, extending functionality and flexibility while addressing quality of service (QoS), performance, availability, capacity and energy consumption using IPM, also known as Adaptive Power Management (APM) or dynamic bandwidth switching or scaling (DBS) among other names. The basic premise is to add flexibility building on 1st generation characteristics including data protection, resiliency and pro-active part or drive monitoring. Another basic premise of IPM and MAID 2.0 solutions is to allow performance and subsequent energy usage to vary, that is, to cut performance and energy usage during in-active times, yet when data needs to be accessed, to allow full performance without penalties for energy savings.
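To make that premise concrete, here is a minimal sketch (in Python, purely illustrative) of the kind of stepped power-state logic IPM implies. The states, idle timers and names below are my own assumptions for the example, not any vendor's actual firmware behavior.

```python
from enum import Enum

class PowerState(Enum):
    FULL_SPEED = 0    # full RPM, full performance
    REDUCED_RPM = 1   # heads unloaded or platters slowed, partial savings
    SPUN_DOWN = 2     # platters stopped, maximum savings

# Hypothetical idle thresholds (seconds) before stepping down a state;
# real products expose vendor-specific policies and timers.
IDLE_THRESHOLDS = {PowerState.FULL_SPEED: 300, PowerState.REDUCED_RPM: 1800}

class Drive:
    def __init__(self):
        self.state = PowerState.FULL_SPEED
        self.idle_seconds = 0

    def tick(self, seconds):
        """Advance the idle clock; step down to deeper savings on timeout."""
        self.idle_seconds += seconds
        threshold = IDLE_THRESHOLDS.get(self.state)
        if threshold is not None and self.idle_seconds >= threshold:
            self.state = PowerState(self.state.value + 1)
            self.idle_seconds = 0

    def access(self):
        """Any I/O returns the drive to full performance; no energy-saving
        penalty is imposed once data actually needs to move."""
        self.state = PowerState.FULL_SPEED
        self.idle_seconds = 0
```

The key point the sketch captures is the second premise above: energy savings accrue only while idle, and a data access always restores full performance.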

Second generation MAID solutions can be characterized by multiple power saving modes as well as flexible performance that adjusts to changing workload and application needs. Another characteristic is the ability to work across different types of disk drives, including Fibre Channel, SAS and SATA, as opposed to only the SATA drives found in 1st generation solutions, and for the IPM or MAID 2.0 functionality to exist in a standard storage system or array instead of in a purpose-built dedicated storage system. Other capabilities include support for more granular power settings, down to a RAID group or LUN level instead of across an entire array or storage system, as well as support for different RAID levels among other features; a hypothetical example of that granularity follows.
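As an illustration of that per-RAID-group granularity, a power policy table might look something like the following sketch. The group names, drive types, RAID levels and timeouts are invented for the example; real arrays express such policies through their own management interfaces.

```python
# Purely hypothetical policy table illustrating MAID 2.0 style granularity:
# power settings applied per RAID group / LUN rather than per entire array.
power_policies = {
    "raid_group_01": {  # transactional LUNs: never spin down
        "raid_level": "RAID 10", "drive_type": "FC",
        "min_state": "FULL_SPEED", "idle_timeout_s": None,
    },
    "raid_group_07": {  # backup target: allow reduced RPM after 15 minutes
        "raid_level": "RAID 5", "drive_type": "SAS",
        "min_state": "REDUCED_RPM", "idle_timeout_s": 900,
    },
    "raid_group_12": {  # archive LUNs: allow full spin-down after 1 hour
        "raid_level": "RAID 6", "drive_type": "SATA",
        "min_state": "SPUN_DOWN", "idle_timeout_s": 3600,
    },
}
```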

Examples of vendors who have either announced product or made statements of direction with regard to MAID 2.0 and IPM enabled storage systems include:

Adaptec (today), DataDirect, EMC, Fujitsu, HDS, HGST (Hitachi disk drives), NEC, Nexsan, and Xyratex among others on a growing list of solutions.

For applications and data storage needs that require good performance and QoS over a range of changing usage conditions, balancing efficient work and productivity when needed against saving or avoiding energy when little or no work needs to be done, take a look at current and emerging IPM and MAID 2.0 enabled storage systems as part of a tiered storage strategy to address power, cooling, floor-space and EHS (PCFE) related issues.

To learn more, check out the StorageIO Industry Trends and Perspective white paper Intelligent Power Management (IPM) and MAID 2.0 and visit www.thegreenandvirtualdatacenter.com as well as www.storageio.com.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Airport Parking, Tiered Storage and Latency

Storage I/O trends

Ok, so what do airport parking, tiered storage and latency have in common? Based on some recent travel experience I will assert that there is a bit in common, or at least an analogy. What got me thinking about this was that recently I could not get a parking spot at the airport's primary parking ramp next to the terminal (either a reasonable walk or short tram ride away), which offers quick access to the departure gate.

Granted, there is a premium for this ability to park or “store” my vehicle for a few days near the airport terminal; however, that premium is off-set by the time savings and fewer disruptions, enabling me a few extra minutes to get other things done while traveling.

Let me call the normal primary airport parking tier-1 (regardless of what level of the ramp you park on). Tier-0 is valet parking, where you pay a fee that might rival the cost of your airline ticket, yet your car stays in a climate controlled area, gets washed and cleaned, maybe gets an oil change, and hopefully sits in a more secure environment with even faster access to your departure gate; something for the rich and famous.

Now the primary airport parking has been full lately, not surprising given the cold weather and everyone looking to use up their carbon off-set credits to fly somewhere warm, attend business meetings or whatever it is that they are doing.

Budgeting some extra time, a couple of weeks ago I tried one of those off-site airport parking facilities where the bus picks you up in the parking lot and then whisks you off to the airport. On return, you wait for the bus to pick you up at the airport, ride to the lot, tour the lot looking at everyone else's car as they get dropped off, and 30-40 minutes later you are finally at your vehicle, faced with the challenge of how to get out of the parking lot late at night. It is such a budget operation that they have gone to lights-out, automated check-out: put your credit card in the machine and the gate opens; that is, if the credit card reader is not frozen because it is about “zero” outside and the machine won't read your card, using up more time. However heck, I saved a few dollars a day.

On another recent trip, again the main parking ramp was full; at least the airport has a parking or storage resource monitoring (aka airport SRM) tool that you can check ahead of time to see if the ramps are full or not. This time I went to another terminal, parked in the ramp there, walked a mile (it would have been a nice walk had it not been 1 above zero (F) with a 20 mile per hour wind) to the light rail train station, waited ten minutes for the 3 minute train ride to the main terminal, then walked to the tram for the 1-2 minute tram ride to the actual terminal and my departure gate. On return, the process was reversed, adding what I estimate to be about an hour to the experience, which, if you have the time, is not a bad option and certainly good exercise, even if it was freezing cold.

During this planes, trains and automobiles expedition, it dawned on me that airport parking is a lot like tiered storage: you have different types of parking with different cost points; locality of reference, or latency, meaning how much time it takes to get from your car to your plane; and different levels of protection and security among other attributes.

I likened the off-airport parking experience to off-line tier-3 tape or MAID, or at best near-line tier-2 storage, in that I saved some money at the cost of lost time and productivity. The parking at the remote airport ramp, involving a train ride and tram ride, I likened to tier-2 or near-line storage over a very slow network or I/O path: the ramp itself was pretty efficient, however the transit delays or latency were ugly. I did save some money, a couple of bucks; not as much as the off-site lot, however a few dollars less than the primary parking.

Hence I jump back to the primary ramp, the fastest, as tier-1, unless you have someone footing your parking bills and can afford tier-0. It also dawned on me that with primary or tier-1 storage, regardless of whether it is enterprise class like an EMC DMX, IBM DS8K, Fujitsu or HDS USP, mid-range like an EMC CLARiiON, HP EVA, IBM DS4K, HDS AMS, Dell, EqualLogic, 3PAR, Fujitsu or NetApp, or an entry-level product from many different vendors, people still pay for premium storage, aka tier-1 storage, in a given price band even if there are cheaper alternatives. However, like the primary airport parking, there are limits on how much primary storage or parking can be supported due to floor space, power, cooling and budget constraints.

With tiered storage the notion is to align different types and classes of storage to various usage and application categories based on service requirements (performance, availability, capacity, energy consumption) balanced with cost or other concerns. For example, there is the high cost yet ultra high performance, ultra low energy consumption and relatively small capacity of tier-0 solid state devices (SSD), using either FLASH or dynamic random access memory (DRAM), deployed as part of a storage system, as a storage device or as a caching appliance to meet I/O or activity intensive scenarios. Tier-1 is high performance, however not as high performance as tier-0; although given a large enough budget, large enough power and cooling ability and no constraints on floor space, you can make an aggregate of traditional disk drives out-perform even solid state, with a lot more capacity at the tradeoff of power, cooling, floor space and of course cost.
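As a toy illustration of that alignment exercise, consider a decision helper along the following lines. The thresholds and tier labels are simplified assumptions for the example, not a formula any product or vendor actually uses.

```python
def suggest_tier(iops_intensive: bool, capacity_tb: float,
                 performance_critical: bool) -> str:
    """Map rough workload traits to a storage tier.

    Purely illustrative: real tiering decisions weigh performance,
    availability, capacity and energy (PACE) against cost and other concerns.
    """
    if iops_intensive and capacity_tb < 1:
        return "tier-0 (SSD: FLASH/DRAM): ultra high IOPS, small capacity"
    if performance_critical:
        return "tier-1 (FC/SAS disk): fast, balanced capacity"
    if capacity_tb > 10:
        return "tier-2 (SATA near-line): big, cheap, slower"
    return "tier-3 (MAID/tape): infrequent access, lowest energy"

# Example: a small, I/O-intensive database lands on tier-0.
print(suggest_tier(iops_intensive=True, capacity_tb=0.5,
                   performance_critical=True))
```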

For most environments tier-1 storage will be the fastest storage with a reasonable amount of capacity, as tier-1 provides a good balance of performance and capacity per amount of energy consumed for active storage and data. On the other hand, lower cost, higher capacity and slower tier-2 storage, also known as near-line or secondary storage, is used in some environments as primary storage where performance is not a concern, yet it is typically reserved for non-performance intensive applications.

Again, given enough money, unlimited power, cooling and floor space, not to mention the number of enclosures, controllers and management software, you can aggregate a large bunch of low-cost SATA drives, as an example, to produce a high level of performance. However, the cost of achieving a high activity or performance level, either IOPS or bandwidth, particularly where the excess capacity is not needed, would make SSD technology look cheap on an overall cost basis.
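To see why, here is some back-of-envelope arithmetic. Every figure below is an assumed round number for illustration, not a quote of any real product's specifications or pricing.

```python
# Back-of-envelope cost-per-IOPS comparison with assumed round numbers.
target_iops = 20_000

sata_iops_per_drive = 80     # assumed ~7.2K RPM SATA, small random I/O
sata_cost_per_drive = 150    # drive only; enclosures, power, etc. are extra
ssd_iops_per_device = 10_000 # assumed solid state device
ssd_cost_per_device = 5_000

sata_drives = -(-target_iops // sata_iops_per_drive)  # ceiling division: 250
ssds = -(-target_iops // ssd_iops_per_device)         # 2

print(f"SATA: {sata_drives} drives, ~${sata_drives * sata_cost_per_drive:,}")
print(f"SSD:  {ssds} devices, ~${ssds * ssd_cost_per_device:,}")
# SATA: 250 drives, ~$37,500 (plus shelves, controllers, power, floor space)
# SSD:  2 devices, ~$10,000  (tiny capacity, tiny footprint)
```

The capacity profiles are of course wildly different, which is the point: buy SATA for capacity, buy SSD for activity, and do not pay for one to get the other.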

Likewise, replacing all of your disks with SSD, particularly for capacity centric environments, is not really practical outside of extreme corner case applications, unless you have the disposable income of a small country for your data storage and IT budget.

Another aspect of tiered storage is the common confusion between a class of storage and the class of a storage vendor, or where a product is positioned, for example by price band or target environment such as enterprise, small and medium environment, small and medium business (SMB), small office or home office (SOHO) or prosumer/consumer.

I often hear discussions that go along the lines of tier-1 storage being products for the enterprise, tier-2 being for workgroups, and tier-3 being for SMB and SOHO. I also hear confusion around tier-1 being block based, tier-2 being NAS and tier-3 being tape. “What we have here is a failure to communicate,” in that there is confusion around tiers, categories, classification, price band and product positioning and perception. To add to the confusion, there are also different tiers of access, including Fibre Channel and FICON using 8GFC (coming soon to a device near you), 4GFC, 2GFC and even 1GFC, along with 1GbE and 10GbE for iSCSI and/or NAS (NFS and/or CIFS), as well as InfiniBand for block (iSCSI or SRP) and file (NAS), offering different costs, performance, latency and other attributes to align with various application service and cost requirements.
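For a rough sense of how those access tiers compare on raw speed, the nominal per-direction data rates run roughly as follows. This is a simple sketch: the FC figures are the standard usable rates (8b/10b encoding already accounted for), the Ethernet figures are raw line rate before protocol overhead, and real-world throughput varies with protocol, latency and workload.

```python
# Nominal per-direction data rates (MB/s) for the access tiers named above.
access_tiers_mb_per_s = {
    "8GFC":  800,
    "4GFC":  400,
    "2GFC":  200,
    "1GFC":  100,
    "10GbE": 1250,  # raw line rate; effective iSCSI/NAS throughput is lower
    "1GbE":  125,   # raw line rate
}

for link, rate in sorted(access_tiers_mb_per_s.items(), key=lambda kv: -kv[1]):
    print(f"{link:>5}: ~{rate} MB/s")
```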

What this all means is that there is more to tiered storage: there is tiered access, tiered protection, tiered media, and different price bands and categories of vendors and solutions, all to be aligned with applicable usage and service requirements. On the other hand, similar to airport parking, I can choose to skip the airport parking and take a cab to the airport, which would be analogous to shifting your storage needs to a managed service provider. However, ultimately it comes down to balancing performance, availability, capacity and energy (PACE) efficiency with the level of service and specific environment or application needs.

Greg Schulz www.storageio.com and www.greendatastorage.com