RAID Relevance Revisited

Following up on some previous posts on the topic, a continued discussion point in the data storage industry is the relevance (or lack thereof) of RAID (Redundant Array of Independent Disks).

These discussions tend to revolve around claims that RAID is dead due to a real or perceived inability to continue scaling in terms of performance, availability, capacity, economics or energy efficiency, particularly when compared to newer techniques, technologies or products.

RAID Relevance

While there are many new and evolving approaches to protecting data in addition to maintaining availability or accessibility of information, RAID, despite the fanfare, is far from dead, at least on the technology front.

Sure, there are issues and challenges that require continued investment in RAID, as has been the case over the past 20 years; however, those will be addressed on a go forward basis via continued innovation and evolution, along with riding technology improvement curves.

Now from a marketing standpoint, ok, I can see where the RAID story is dead or boring and something new and shiny is needed, or at least a change of pitch to sound like something new.

Consequently, when long in the tooth and saddled with some of the aforementioned items among others, older technologies that may be boring or lack sizzle or marketing dollars can be, and often are, declared dead on the buzzword bingo circuit. After all, how long now has the industry trade group RAID Advisory Board (RAB) been missing in action, retired, spun down, archived or ILMed?

RAID remains relevant because, like other dead or zombie technologies, it has reached the plateau of productivity and profitability. That success is something emerging technologies envy as their future domain, and thus a classic marketing move is to declare the incumbent dead.

The reality is that RAID, in all of its various instances from hardware to software, standard to non-standard with extensions, is very much alive, from the largest enterprise to the SMB to the SOHO down into consumer products and all points in between.

Now candidly, like any technology that is about 20 years old if not older (after all, the disk drive is over 50 years old and has been declared dead for how long now?), RAID is in some ways long in the tooth, and there are certainly issues to be addressed, as there have been in the past. These include the overhead of rebuilding large capacity 1TB, 2TB and, in the not so distant future, even larger disk drives.

There are also issues pertaining to distributed data protection in support of cloud, virtualized or other solutions that need to be addressed. In fact, go way back to when RAID appeared commercially on the scene in the late 80s, and one of its value propositions among others was to address the reliability of emerging large capacity, multi MByte SCSI disk drives. It seems almost laughable today that a decade later, when 1GB disk drives appeared in the market in the 90s, there was renewed concern about RAID and disk drive rebuild times.
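To put the rebuild concern in perspective, here is a back-of-the-envelope sketch of best-case rebuild times. The drive capacities and sustained rebuild rates are illustrative assumptions, not any vendor's actual numbers:

```python
# Best-case rebuild time: drive capacity divided by sustained rebuild rate.
# Real rebuilds take longer under production I/O load.

def rebuild_hours(capacity_gb, rebuild_mb_per_sec):
    seconds = (capacity_gb * 1024) / rebuild_mb_per_sec
    return seconds / 3600

# A 90s era ~1GB drive at ~5 MB/sec vs a 2TB drive at ~50 MB/sec:
print(round(rebuild_hours(1, 5) * 60, 1))   # minutes for the 1GB drive
print(round(rebuild_hours(2048, 50), 1))    # hours for the 2TB drive
```

The point of the arithmetic: rebuild rates have improved, however capacity has grown far faster, which is why rebuild windows keep coming back as an issue.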

Rest assured, I think there is a need and plenty of room for continued innovation and evolution around RAID related technologies and their associated storage systems or packaging on a go forward basis.

What I find interesting is that some of the issues facing RAID today are similar to those of a decade ago, for example dealing with large capacity disk drive rebuilds, distributed data protection and availability, performance, ease of use, and so the list goes.

However, what happened was that vendors continued to innovate in terms of basic performance, accelerating rebuild rates with improved rebuild algorithms while leveraging faster processors, busses and other techniques. In addition, vendors continued to innovate in terms of new functionality, including adopting RAID 6, which for the better part of a decade, outside of a few niche vendors, languished as one of those future technologies that probably nobody would ever adopt. We know that to be different now and for the past several years: RAID 6 is one of those areas where vendors who do not have it are either adding it, enhancing it, or telling you why you do not need it or why it is no good for you.

An example of how RAID 6 is being enhanced is boosting performance on normal read and write operations, along with accelerating performance during disk rebuilds. Also tied to RAID 6 and disk drive rebuilds are improvements in controller design to detect and proactively make repairs on the fly to minimize or eliminate errors or diminish the need for drive rebuilds, similar to what was done in previous generations. Let's also not forget the improvements in disk drives themselves, boosting performance, availability, capacity and energy efficiency over time.
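For those who want the gist of what a rebuild actually does, here is a toy sketch of parity-based reconstruction. It shows RAID 5 style single (XOR) parity; RAID 6 adds a second, independently computed parity (typically Reed-Solomon based) so any two simultaneous drive failures are survivable. This is illustrative only, not a controller implementation:

```python
# Parity-based reconstruction: XOR the surviving data strips with the
# parity strip to regenerate the strip on a failed drive.

from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]      # data strips on three drives
parity = xor_blocks(data)               # parity strip on a fourth drive

# Drive holding data[1] fails: rebuild its strip from survivors plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

A rebuild is essentially this operation repeated across the whole drive, which is why rebuild time scales with capacity.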

Funny how these and other enhancements are similar to those made to RAID controller hardware and software in the early to mid 2000s, fine tuning them to support high capacity SATA disk drives that had different RAS characteristics than higher performance, lower capacity enterprise drives.

Here is my point.

RAID to some may be dead, while others continue to rely on it. Meanwhile, others are working on enhancing technologies for future generations of storage systems and application requirements. Thus, in different shapes, forms, configurations, features, functionality or packaging, the spirit of RAID is very much alive and well, remaining relevant.

Regardless of whether a solution uses two or three disk mirroring for availability; RAID 0 striping of fast SSD, SAS or FC disks for performance, with data protection via rapid restoration from some other low cost medium (perhaps RAID 6 or tape); single, dual or triple parity protection; small block, multi MByte or volume based chunklets; and whether it is hardware or software based, local or distributed, standard or non-standard, chances are there is some theme of RAID involved.
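As a rough way to compare some of those themes, here is a sketch of usable capacity across a few common RAID levels, using simplified formulas that ignore hot spares, chunk sizes and vendor specific overhead:

```python
# Simplified usable-capacity comparison across common RAID levels.
# Example figures are illustrative (eight 2TB drives).

def usable_tb(level, drives, drive_tb):
    if level == "RAID 0":          # striping only, no protection
        return drives * drive_tb
    if level == "RAID 1":          # two-way mirroring
        return drives * drive_tb / 2
    if level == "RAID 5":          # single parity, survives 1 failure
        return (drives - 1) * drive_tb
    if level == "RAID 6":          # dual parity, survives 2 failures
        return (drives - 2) * drive_tb
    raise ValueError(level)

for level in ("RAID 0", "RAID 1", "RAID 5", "RAID 6"):
    print(level, usable_tb(level, 8, 2), "TB usable")
```

The trade-off the numbers show is the usual one: capacity efficiency vs. how many drive failures can be tolerated.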

Granted, you do not have to call it RAID if you prefer!

As a closing thought, if RAID were no longer relevant, then why do the post RAID, next generation, life beyond RAID or whatever you prefer to call them technologies need to tie themselves to the themes of RAID? Simple: RAID is still relevant in some shape or form to different audiences, and it is a great way of stimulating discussion or debate in a constantly evolving industry.

BTW, I'm still waiting for the revolutionary piece of hardware that does not require software, and the software that does not require hardware, and that includes playing games with serverless servers using hypervisors :).

Provide your perspective on RAID and its relevance in the following poll.

Here are some additional related and relevant RAID links of interest:

Stay tuned for more about RAID's relevance as I don't think we have heard the last on this.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

StorageIO in the News Update V2010.1

StorageIO is regularly quoted and interviewed in various industry and vertical market venues and publications both on-line and in print on a global basis.

The following are some coverage, perspectives and commentary by StorageIO on IT industry trends including servers, storage, I/O networking, hardware, software, services, virtualization, cloud, cluster, grid, SSD, data protection, Green IT and more since the last update.

Realizing that some prefer blogs to webs to twitter to other venues, here are some recent links, among others, to media coverage and comments by me on different topics that can be found at www.storageio.com/news.html:

  • SearchSMBStorage: Comments on EMC Iomega v.Clone for PC data synchronization – Jan 2010
  • Computerworld: Comments on leveraging cloud or online backup – Jan 2010
  • ChannelProSMB: Comments on NAS vs SAN Storage for SMBs – Dec 2009
  • ChannelProSMB: Comments on Affordable SMB Storage Solutions – Dec 2009
  • SearchStorage: Comments on What to buy a geek for the holidays, 2009 edition – Dec 2009
  • SearchStorage: Comments on EMC VMAX storage and 8GFC enhancements – Dec 2009
  • SearchStorage: Comments on Data Footprint Reduction – Dec 2009
  • SearchStorage: Comments on Building a private storage cloud – Dec 2009
  • SearchStorage: Comments on SSD in storage systems – Dec 2009
  • SearchStorage: Comments on slow adoption of file virtualization – Dec 2009
  • IT World: Comments on maximizing data security investments – Nov 2009
  • SearchCIO: Comments on storage virtualization for your organisation – Nov 2009
  • Processor: Comments on how to win approval for hardware upgrades – Nov 2009
  • Processor: Comments on the Future of Servers – Nov 2009
  • SearchITChannel: Comments on Energy-efficient technology sales depend on pitch – Nov 2009
  • SearchStorage: Comments on how to get from Fibre Channel to FCoE – Nov 2009
  • Minneapolis Star Tribune: Comments on Google Wave and Clouds – Nov 2009
  • SearchStorage: Comments on EMC and Cisco alliance – Nov 2009
  • SearchStorage: Comments on HP virtualization enhancements – Nov 2009
  • SearchStorage: Comments on Apple canceling ZFS project – Oct 2009
  • Processor: Comments on EPA Energy Star for Server and Storage Ratings – Oct 2009
  • IT World Canada: Cloud computing, dot be scared, look before you leap – Oct 2009
  • IT World: Comments on stretching your data protection and security dollar – Oct 2009
  • Enterprise Storage Forum: Comments about Fragmentation and Performance? – Oct 2009
  • SearchStorage: Comments about data migration – Oct 2009
  • SearchStorage: Comments about What’s inside internal storage clouds? – Oct 2009
  • Enterprise Storage Forum: Comments about T-Mobile and Clouds? – Oct 2009
  • Storage Monkeys: Podcast comments about Sun and Oracle- Sep 2009
  • Enterprise Storage Forum: Comments on Maxiscale clustered, cloud NAS – Sep 2009
  • SearchStorage: Comments on Maxiscale clustered NAS for web hosting – Sep 2009
  • Enterprise Storage Forum: Comments on who's hot in the data storage industry – Sep 2009
  • SearchSMBStorage: Comments on SMB Fibre Channel switch options – Sep 2009
  • SearchStorage: Comments on using storage more efficiently – Sep 2009
  • SearchStorage: Comments on Data and Storage Tiering including SSD – Sep 2009
  • Enterprise IT Planet: Comments on Data Deduplication – Sep 2009
  • SearchDataCenter: Comments on Tiered Storage – Sep 2009
  • Enterprise Storage Forum: Comments on Sun-Oracle Wedding – Aug 2009
  • Processor.com: Comments on Storage Network Snags – Aug 2009
  • SearchStorageChannel: Comments on I/O virtualization (IOV) – Aug 2009
  • SearchStorage: Comments on Clustered NAS storage and virtualization – Aug 2009
  • SearchITChannel: Comments on Solid-state drive prices still hinder adoption – Aug 2009
  • Check out the Content, Tips, Tools, Videos, Podcasts plus White Papers, and News pages for additional commentary, coverage and related content or events.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio


    Recent tips, videos, articles and more update V2010.1

    Realizing that some prefer blogs to webs to twitter to other venues, here are some recent links to articles, tips, videos, webcasts and other content that have appeared in different venues since August 2009.

  • i365 Guest Interview: Experts Corner: Q&A with Greg Schulz December 2009
  • SearchCIO Midmarket: Remote-location disaster recovery risks and solutions December 2009
  • BizTech Magazine: High Availability: A Delicate Balancing Act November 2009
  • ESJ: What Comprises a Green, Efficient and Effective Virtual Data Center? November 2009
  • SearchSMBStorage: Determining what server to use for SMB November 2009
  • SearchStorage: Performance metrics: Evaluating your data storage efficiency October 2009
  • SearchStorage: Optimizing capacity and performance to reduce data footprint October 2009
  • SearchSMBStorage: How often should I conduct a disaster recovery (DR) test? October 2009
  • SearchStorage: Addressing storage performance bottlenecks in storage September 2009
  • SearchStorage AU: Is tape the right backup medium for smaller businesses? August 2009
  • ITworld: The new green data center: From energy avoidance to energy efficiency August 2009
  • Video and podcasts include:
    December 2009 Video: Green Storage: Metrics and measurement for management insight
    Discussion between Greg Schulz and Mark Lewis of TechTarget about the importance of metrics and measurement to gauge productivity and efficiency for Green IT and enabling virtual information factories. Click here to watch the Video.

    December 2009 Podcast: iSCSI SANs can be a good fit for SMB storage
    Discussion between Greg Schulz and Andrew Burton of TechTarget about iSCSI and other related technologies for SMB storage. Click here to listen to the podcast.

    December 2009 Podcast: RAID Data Protection Discussion
    Discussion between Greg Schulz and Andrew Burton of TechTarget about RAID data protection, techniques and technologies. Click here to listen to the podcast.

    December 2009 Podcast: Green IT, Efficiency and Productivity Discussion
    Discussion between Greg Schulz and Jon Flower of Adaptec about Green IT, energy efficiency, intelligent power management (IPM), also known as MAID 2.0, and other forms of optimization techniques including SSD. Click here to listen to the podcast sponsored by Adaptec.

    November 2009 Podcast: Reducing your data footprint impact
    Even though many enterprise data storage environments are coping with tightened budgets and reduced spending, overall net storage capacity is increasing. In this interview, Greg Schulz, founder and senior analyst at StorageIO Group, discusses how storage managers can reduce their data footprint. Schulz touches on the importance of managing your data footprint on both online and offline storage, as well as the various tools for doing so, including data archiving, thin provisioning and data deduplication. Click here to listen to the podcast.

    October 2009 Podcast: Enterprise data storage technologies rise from the dead
    In this interview, Greg Schulz, founder and senior analyst of the Storage I/O group, classifies popular technologies such as solid-state drives (SSDs), RAID and Fibre Channel (FC) as “zombie” technologies. Why? These are already set to become part of standard storage infrastructures, says Schulz, and are too old to be considered fresh. But while some consider these technologies to be stale, users should expect to see them in their everyday lives. Click here to listen to the podcast.

    Check out the Tips, Tools and White Papers, and News pages for additional commentary, coverage and related content or events.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio


    Is MAID Storage Dead? I Don't Think So!

    While some vendors are doing better than others, and first generation MAID (Massive or monolithic Array of Idle Disks) might be dead or about to be deceased, spun down or put into long term sleep mode, it is safe to say that second generation MAID (e.g. MAID 2.0), also known as intelligent power management (IPM), is alive and doing well.

    In fact, IPM is not unique to disk storage or disk drives, as it is also a technique found in current generations of processors, such as those from Intel (e.g. Nehalem) and others.

    Other names for IPM include adaptive voltage scaling (AVS), adaptive voltage scaling optimized (AVSO) and adaptive power management (APM) among others.

    The basic concept is to match the amount of power being used to the amount of work and service level needed at a point in time, and to do so on a granular basis.

    For example, first generation MAID or drive spin down as deployed by vendors such as Copan (which is rumored to be in the process of being spun down as a company; see the blog post by a former Copan employee) was binary. That is, a disk drive was either on or off, and the granularity was the entire storage system. In the case of Copan, the granularity was that a maximum of 25% of the disks could ever be spun up at any point in time. As a point of reference, when I ask IT customers why they don't use MAID or IPM enabled technology, they commonly cite concerns about performance, or more importantly, the perception of bad performance.

    CPU chips have been taking the lead with the ability to vary voltage and clock speed, enabling or disabling electronic circuitry to align with the amount of work needing to be done at a point in time. This more granular approach allows the CPU to run at faster rates when needed and slower rates when possible to conserve energy (here, here and here).

    A common example is a laptop with technology such as SpeedStep or battery stretch saving modes. Disk drives have been following this approach, varying their power usage by adjusting to different spin speeds along with enabling or disabling electronic circuitry.

    On a granular basis, second generation MAID with IPM enabled technology can operate on a LUN or volume group basis across different RAID levels and types of disk drives, depending on the specific vendor implementation. Some examples of vendors implementing various forms of IPM for second generation MAID include Adaptec, EMC, Fujitsu Eternus, HDS (AMS), HGST (disk drives), Nexsan and Xyratex among many others.
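To make the contrast with the binary on or off approach concrete, here is a sketch of the kind of stepped policy a second generation MAID or IPM implementation might apply per LUN or volume group. The state names, idle thresholds and relative power figures are invented for illustration, not taken from any vendor's firmware:

```python
# Stepped power states: the longer a volume group sits idle, the deeper
# the power saving state it drops into, trading recovery time for energy.

POWER_STATES = [                      # (min idle seconds, state name, relative power)
    (0,    "full_speed",     1.00),
    (300,  "reduced_rpm",    0.60),   # slower spindle speed, quick recovery
    (1800, "heads_unloaded", 0.40),
    (7200, "spun_down",      0.10),   # lowest power, longest recovery
]

def power_state(idle_seconds):
    """Return the deepest state whose idle threshold has been reached."""
    state = POWER_STATES[0]
    for entry in POWER_STATES:
        if idle_seconds >= entry[0]:
            state = entry
    return state[1]

print(power_state(60))      # recently active: full_speed
print(power_state(3600))    # an hour idle: heads_unloaded
```

The difference from first generation MAID is exactly this: multiple intermediate states per volume group rather than one all-or-nothing switch for the whole array.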

    Something else taking place in the industry is that vendors seem to be shying away from using the term MAID, as there is some stigma associated with the performance issues of some first generation products.

    This is not all that different from what took place about 15 years ago when the first purpose built monolithic RAID arrays appeared on the market, products such as Failsafe (here and here) from SF2, aka the South San Francisco Forklift company, which was bought by MTI, with its patents later sold to EMC.

    Failsafe, or what many at DEC referred to as Fail Some, was a large refrigerator sized device with 5.25” disk drives configured as RAID 5 with dedicated hot spare disk drives. Thus its performance was ok for the time doing random reads; however, writes in the pre write back cache RAID 5 days were less than spectacular.

    Failsafe and other early RAID implementations (and here) received a black eye from some due to performance, availability and other issues, until best practices and additional enhancements such as multiple RAID levels, along with cache, appeared in follow on products.

    What that trip down memory (or nightmare) lane has to do with MAID, and particularly with first generation products that did their part to help establish the new technology, is that those early RAID products also gave way to second, third, fourth, fifth, sixth and beyond generations of RAID products.

    The same can be expected here, as we are seeing with more vendors jumping in on the second generation of MAID, also known as drive spin down, with more in the wings.

    Consequently, don't judge MAID based solely on the first generation products, which could be thought of as advanced technology proof of concept solutions that paved the way for follow on future solutions.

    Just as RAID has become so ubiquitous that it has been declared dead, making it another zombie technology (dead, yet still being developed, produced, bought and put to use), follow on IPM enabled generations of technology will be more transparent. That is, similar to finding multiple RAID levels in most storage, look for IPM features including variable drive speeds, power settings and performance options on a go forward basis. These newer solutions may not carry the MAID name; however, the spirit and function of intelligent power management without performance compromise does live on.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio


    EMC Storage and Management Software Getting FAST

    EMC has announced the availability of the first phase of FAST (Fully Automated Storage Tiering) functionality for their Symmetrix VMAX, CLARiiON and Celerra storage systems.

    FAST was first previewed earlier this year (see here and here).

    Key themes of FAST are to leverage policies for enabling automation to support large scale environments, doing more with what you have along with enabling virtual data centers for traditional, private and public clouds as well as enhancing IT economics.

    This means enabling performance and capacity planning analysis along with facilitating load balancing or other infrastructure optimization activities to boost productivity, efficiency and resource usage effectiveness not to mention enabling Green IT.

    Is FAST revolutionary? That will depend on who you talk or listen to.

    Some vendors will jump up and down, similar to Donkey in Shrek wanting to be picked or noticed, claiming to have been the first to implement LUN or file movement inside of storage systems, or as operating system, file system or volume manager built-ins. Others will claim to have done it via third party information lifecycle management (ILM) software, including hierarchical storage management (HSM) tools among others. Ok, fair enough, then let their games begin (or continue), and I will leave it up to the various vendors and their followings to debate who's got what or not.

    BTW, anyone remember system managed storage on IBM mainframes, or array based movement in HP AutoRAID among others?

    Vendors have also in the past provided built in or third party add on tools for providing insight and awareness, ranging from capacity or space usage and allocation storage resource management (SRM) tools to performance advisory activity monitors or chargeback among others. For example, hot file analysis and reporting tools have been popular in the past, often operating system specific, for identifying candidate files for placement on SSD or other fast storage. Granted, the tools provided insight and awareness, but there was still the time consuming and error prone task of decision making and subsequent data movement, not to mention associated downtime.

    What is new here with FAST is the integrated approach: tools that are operating system independent, functionality in the array, availability across different product families and price bands, and optimization for improving user and IT productivity in medium to high-end enterprise scale environments.

    One of the knocks on previous technology is either the performance impact to an application when its data was moved, or the impact to other applications while data is being moved in the background. Another issue has been avoiding excessive thrashing due to data being moved at the expense of taking performance cycles from production applications. This would be similar to having too many snapshots, or an unoptimized RAID rebuild, running in the background on a storage system lacking sufficient performance capability. Another knock has been that historically, either third party host or appliance based software was needed, or solutions were designed and targeted for workgroup, departmental or small environments.

    What is FAST and how is it implemented
    FAST is technology for moving data within storage systems (and externally in the case of Celerra) for load balancing along with capacity and performance optimization, to meet quality of service (QoS) performance, availability and capacity goals along with energy and economic initiatives (figure 1) across different tiers or types of storage devices. For example, moving data from slower SATA disks where a performance bottleneck exists to faster Fibre Channel or SSD devices. Similarly, cold or infrequently accessed data on faster, more expensive storage devices can be marked as a candidate for migration to lower cost SATA devices based on customer policies.

    Figure 1: FAST big picture (Source: EMC)

    The premise is that policies are defined based on activity along with capacity to determine when data becomes a candidate for movement. All movement is performed in the background, concurrently while applications are accessing data, without disruption. This means there are no stub files, application pauses, timeouts or erratic I/O activity while data is being migrated. Another aspect of FAST data movement, which is performed in the actual storage systems by their respective controllers, is the ability of the EMC management tools to identify hot or active LUNs or volumes (files in the case of Celerra) as candidates for moving (figure 2).

    Figure 2: FAST what it does (Source: EMC)

    However, users can specify whether they want data moved on its own or under supervision, enabling a deterministic environment where the storage system and associated management tools make recommendations and suggestions for administrators to approve before migration occurs. This capability can be a safeguard as well as a learning mode, enabling organizations to become comfortable with the technology along with its recommendations while applying knowledge of current business dynamics (figure 3).

    Figure 3: The value proposition of FAST (Source: EMC)

    FAST is implemented as technology resident or embedded in the EMC VMAX (aka Symmetrix), CLARiiON and Celerra, along with external management software tools. In the case of the block (figure 4) storage systems, including the DMX/VMAX and CLARiiON families of products that support FAST, data movement is on a LUN or volume basis and within a single storage system. For NAS or file based Celerra storage systems, FAST is implemented using FMA technology, enabling movement either within the box or externally to other storage systems on a file basis.

    Figure 4: Example of FAST activity (Source: EMC)

    What this means is that data at the LUN or volume level can be moved across different tiers of storage or disk drives within a CLARiiON instance, or within a VMAX instance (e.g. amongst the nodes). For example, virtual LUNs are a building block leveraged for data movement and migration, combined with external management tools including Navisphere for the CLARiiON and the Symmetrix management console along with Ionix, all of which have been enhanced.
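As a thought experiment on what an automated tiering policy boils down to, here is a hypothetical sketch that ranks LUNs by recent activity and flags promotion or demotion candidates. The field names and thresholds are invented for illustration; EMC has not published the FAST policy engine in this form:

```python
# Toy tiering policy: hot data on a slow tier is a promotion candidate,
# cold data on a fast tier is a demotion candidate. Numbers are illustrative.

luns = [
    {"name": "lun01", "tier": "SATA", "iops": 950},
    {"name": "lun02", "tier": "FC",   "iops": 15},
    {"name": "lun03", "tier": "SSD",  "iops": 4000},
]

PROMOTE_ABOVE = 500   # busy LUN on SATA -> candidate for a faster tier
DEMOTE_BELOW = 50     # idle LUN on FC/SSD -> candidate for a cheaper tier

def candidates(luns):
    moves = []
    for lun in luns:
        if lun["tier"] == "SATA" and lun["iops"] > PROMOTE_ABOVE:
            moves.append((lun["name"], "promote"))
        elif lun["tier"] in ("FC", "SSD") and lun["iops"] < DEMOTE_BELOW:
            moves.append((lun["name"], "demote"))
    return moves

print(candidates(luns))   # [('lun01', 'promote'), ('lun02', 'demote')]
```

In the supervised mode described above, a list like this would be presented to the administrator for approval rather than acted on automatically.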

    Note however that initially data is not moved externally between different CLARiiONs or VMAX systems. For external data movement, other existing EMC tools would be deployed. In the case of Celerra, files can be moved within a specific CLARiiON as well as externally across other storage systems. External storage systems that files can be moved across using EMC FMA technology include other Celerras, Centera and ATMOS solutions, based upon defined policies.

    What do I like most and why?

    Integration of management tools providing insight, with the ability for users to set up policies as well as approve or intercede with data movement and placement as their specific philosophies dictate. This is key: for those who want to, let the system manage itself, with your supervision of course. For those who prefer to take their time, take simple steps by using the solution initially to gain insight into hot or cold spots and then to help make decisions on what changes to make. Use the solution and adapt it to your specific environment and philosophy. What a concept: a tool that works for you, vs. you working for it.

    What don't I like and why?

    There is and will remain some confusion about intra and inter box or system data movement and migration, operations that can be done by other EMC technology today for those who need it. For example, I have had questions asking if FAST is nothing more than EMC Invista or some other data mover appliance sitting in front of Symmetrix or CLARiiONs, and the answer is NO. Thus EMC will need to articulate that FAST is both an umbrella term as well as a product feature set combining the storage system along with associated management tools unique to each of the different storage systems. In addition, there will be confusion, at least at GA, over the lack of support for Symmetrix DMX vs. the supported VMAX. Of course with EMC, pricing is always a question, so let's see how this plays out in the market with customer acceptance.

    What about the others?

    Certainly some will jump up and down claiming ratification of their visions, welcoming EMC to the game while forgetting that there were others before them. However, it can also be said that EMC, like others who have had LUN and volume movement or cloning capabilities for large scale solutions, is taking the next step. Thus I would expect other vendors to continue movement in the same direction with their own unique spin and approach. For others who have in the past made automated tiering their marketing differentiation, I would suggest they come up with some new spins and stories, as those functions are about to become table stakes or common feature functionality on a go forward basis.

    When and where to use?

    In theory, anyone with a Symmetrix/VMAX, CLARiiON or Celerra that supports the new functionality should be a candidate for the capabilities, that is, at least the insight, analysis, monitoring and situational awareness capabilities. Note that this does not mean actually enabling the automated movement initially.

    While the concept is to enable automated, system managed storage (Hmmm, mainframe deja vu anyone?), for those who want to walk before they run, enabling the insight and awareness capabilities can provide valuable information about how resources are being used. The next step would then be to look at the recommendations of the tools, and if you concur with the recommendations, take remedial action by telling the system when the movement can occur at your desired time.

    For those ready to run, then let it rip and take off as FAST as you want. In either situation, look at FAST for providing insight and situational awareness of hot and cold storage, and where opportunities exist for optimizing and gaining efficiency in how resources are used, all important aspects of enabling a Green and Virtual Data Center, not to mention supporting public and private clouds.

    FYI, FTC Disclosure and FWIW

    I have done content related projects for EMC in the past (see here); they are not currently a client, nor have they sponsored, underwritten, influenced or remunerated me, utilized third party offshore Swiss, Cayman or South American unnumbered bank accounts, or provided any other reimbursement for this post. However, I did personally sign and hand to Joe Tucci a copy of my book The Green and Virtual Data Center (CRC) ;).

    Bottom line

    Do I like what EMC is doing with FAST and this approach? Yes.

    Do I think there is room for improvement and additional enhancements? Absolutely!

    What's my recommendation? Have a look, do your homework and due diligence, and see if it's applicable to your environment while asking other vendors what they will be doing (under NDA if needed).

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    The other Green Storage: Efficiency and Optimization

    Some believe that green storage is specifically designed to reduce power and cooling costs.

    The reality is that there are many ways to reduce environmental impact while enhancing the economics of data storage besides simply boosting utilization.

    These include optimizing data storage capacity as well as boosting performance to increase productivity per watt of energy used when work needs to be done.

    Some approaches require new hardware or software while others can be accomplished with changes to management including reconfiguration leveraging insight and awareness of resource needs.

    Here are some related links:

    The Other Green: Storage Efficiency and Optimization (Videocast)

    Energy efficient technology sales depend on the pitch

    Performance metrics: Evaluating your data storage efficiency

    How to reduce your Data Footprint impact (Podcast)

    Optimizing enterprise data storage capacity and performance to reduce your data footprint

    Ok, nuff said.

    Cheers gs


    Green IT and Virtual Data Centers

    Green IT and virtual data centers are no fad nor are they limited to large-scale environments.

    Paying attention to how resources are used to deliver information services in a flexible, adaptable, energy-efficient, environmentally and economically friendly way to boost efficiency and productivity is here to stay.

    Read more here in the article I did for the folks over at Enterprise Systems Journal.

    Ok, nuff said.

    Cheers gs


    How to win approval for upgrades: Link them to business benefits

    Drew Robb has another good article over at Processor.com with various tips and strategies on how to gain approval for hardware (or software) purchases, including some comments by yours truly.

    My tip quoted in the story is to link technology resources to their business impact, which may be common sense, however it is still a time-tested, effective technique.

    Instead of speaking tech talk such as performance, capacity, availability, IOPS, bandwidth, GHz, frames or packets per second, VMs per PM or dedupe ratios, map them to business speak, that is, things that finance, accountants, MBAs or other management personnel understand.

    For example, how many transactions at a given response time can be supported by a given type of server, storage or networking device.

    Or, put a different way, with a given device, how much work can be done and what is the associated monetary or business benefit.
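    The mapping above can be sketched as a quick back-of-envelope calculation; the device cost, transaction rate and useful life below are illustrative assumptions, not figures from any particular vendor:

    ```python
    # Hypothetical example of translating device specs into business terms.
    # All inputs (cost, transaction rate, service life) are assumptions.

    def cost_per_transaction(device_cost_usd, transactions_per_sec, useful_life_years):
        """Approximate hardware cost per transaction over a device's useful life."""
        seconds_in_service = useful_life_years * 365 * 24 * 3600
        total_transactions = transactions_per_sec * seconds_in_service
        return device_cost_usd / total_transactions

    # A $50,000 storage system sustaining 5,000 transactions/second for 3 years
    # works out to a tiny fraction of a cent per transaction, a number that
    # finance or management can compare across competing proposals.
    cost = cost_per_transaction(50_000, 5_000, 3)
    ```

    The point of the sketch is the translation, not the precision: a cost-per-unit-of-work figure is something a budget holder can weigh directly against the business benefit of the work.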

    Likewise, if you do not have a capacity plan for servers, storage, I/O and networking along with software and facilities covering performance, availability, capacity and energy demands now is the time to put one in place.

    More on capacity and performance planning later; however for now, if you want to learn more, check out Chapter 10 (Performance and Capacity Planning) in my book Resilient Storage Networks: Designing Flexible and Scalable Data Infrastructure (Elsevier).

    Ok, nuff said.

    Cheers gs


    Saving Money with Green IT: Time To Invest In Information Factories

    There is a good and timely article titled Green IT Can Save Money, Too over at Business Week that has a familiar topic and theme for those who read this blog or other content, articles, reports, books, white papers, videos, podcasts or in-person speaking and keynote sessions that I have done.

    I posted a short version of this over there, here is the full version that would not fit in their comment section.

    Short of calling it Green IT 2.0 or the perfect storm, there is a resurgence and more importantly IMHO a growing awareness of the many facets of Green IT along with Green in general having an economic business sustainability aspect.

    While the Green Gap and confusion still exist, that is, the difference between what people think or perceive and actual opportunities or issues, with growing awareness it will close or at least narrow. For example, when I regularly talk with IT professionals from organizations of various sizes and industry focuses across diverse geographies and ask them about having to go green, the response is in the 7-15% range (these numbers are changing), with most believing that green is only about carbon footprint.

    On the other hand, when I ask them if they have power, cooling, floor space or other footprint constraints including frozen or reduced budgets, recycling along with ewaste disposition or RoHS requirements, not to mention sustaining business growth without negatively impacting quality of service or customer experience, the response jumps up to 65-75% (these are changing) if not higher.

    That is the essence of the green gap or disconnect!

    Granted, carbon dioxide (CO2) reduction is important, along with NO2, water vapor and other related issues; however there is also the need to do more with what is available, stretching resources to be more productive in a shrinking footprint. Keep in mind that there is no such thing as an information, data or processing recession, with all indicators pointing towards the need to move, manage and store larger amounts of data on a go forward basis. Thus the need to do more within a given footprint or constraint, maximizing resources, energy, productivity and available budgets.

    Innovation is the ability to do more with less at a lower cost without compromising on quality of service or negatively impacting customer experience. Regardless of whether you are a manufacturer or a service provider, including in IT, by innovating with a diverse Green IT focus to become more efficient and optimized, the result is that your customers become more enabled and competitive.

    By shifting from an avoidance model where cost cutting or containment are the near-term tactical focus to an efficiency and productivity model via optimization, net unit costs should be lowered while overall service experience increase in a positive manner. This means treating IT as an information factory, one that needs investment in the people, processes and technologies (hardware, software, services) along with management metric indicator tools.

    The net result is that environmental or perceived green issues are addressed and self-funded via the investment in Green IT technology that boosts productivity (e.g. closing or narrowing the Green Gap). Thus, the environmental concerns that organizations have or need to address for different reasons, yet that lack funding, get addressed via funding to boost business productivity, which has tangible ROI characteristics similar to other lean manufacturing approaches.

    Here are some additional links to learn more about these and other related themes:

    Have a read over at Business Week about how Green IT Can Save Money, Too while thinking about how investing in IT infrastructure productivity (Information Factories) by becoming more efficient and optimized helps the business top and bottom line, not to mention the environment as well.

    Ok, nuff said.

    Cheers gs


    Optimize Data Storage for Performance and Capacity Efficiency

    This post builds on a recent article I did that can be read here.

    Even with tough economic times, there is no such thing as a data recession! Thus the importance of optimizing data storage efficiency addressing both performance and capacity without impacting availability in a cost effective way to do more with what you have.

    What this means is that even though budgets are tight or have been cut resulting in reduced spending, overall net storage capacity is up year over year by double digits if not higher in some environments.

    Consequently, there is continued focus on stretching available IT and storage related resources or footprints further while eliminating barriers or constraints. IT footprint constraints can be physical in a cabinet or rack as well as floorspace, power or cooling thresholds and budget among others.

    Constraints can be due to lack of performance (bandwidth, IOPS or transactions), poor response time or lack of availability for some environments. Yet for other environments, constraints can be lack of capacity, limited primary or standby power or cooling constraints. Other constraints include budget, staffing or lack of infrastructure resource management (IRM) tools and time for routine tasks.

    Look before you leap
    Before jumping into an optimization effort, gain insight if you do not already have it as to where the bottlenecks exist, along with the cause and effect of moving or reconfiguring storage resources. For example, boosting capacity use to more fully use storage resources can result in a performance issue or data center bottlenecks for other environments.

    An alternative scenario is that in the quest to boost performance, storage is seen as being under-utilized, yet when capacity use is increased, lo and behold, response time deteriorates. The result can be a vicious cycle, hence the need to address the issue as opposed to moving problems, by using tools to gain insight on resource usage, both space and activity or performance.

    Gaining insight means looking at capacity use along with performance and availability activity and how they consume power, cooling and floor-space. Consequently an important step is gaining insight and knowledge of how your resources are being used to deliver various levels of service.

    Tools include storage or system resource management (SRM) tools that report on storage space capacity usage, performance and availability with some tools now adding energy usage metrics along with storage or system resource analysis (SRA) tools.

    Cooling Off
    Power and cooling are commonly talked about as constraints, either from a cost standpoint, or availability of primary or secondary (e.g. standby) energy and cooling capacity to support growth. Electricity is essential for powering IT equipment including storage enabling devices to do their specific tasks of storing data, moving data, processing data or a combination of these attributes.

    Thus, power gets consumed, some work or effort to move and store data takes place, and the byproduct is heat that needs to be removed. In a typical IT data center, cooling on average can account for about 50% of energy used, with some sites using less.

    With cooling being a large consumer of electricity, a small percentage change in how cooling consumes energy can yield large results. Addressing cooling energy consumption can help with budget or cost issues, or free up cooling capacity to support installation of additional storage or other IT equipment.

    Keep in mind that effective cooling relies on removing heat from as close to the source as possible to avoid over cooling, which requires more energy. If you have not done so, have a facilities review or assessment performed; this can range from a quick walk around to a more in-depth review and thermal airflow analysis. A means of removing heat close to the source are techniques such as intelligent, precision or smart cooling, also known by other marketing names.

    Powering Up, or, Powering Down
    Speaking of energy or power, in addition to addressing cooling, there are a couple of ways of addressing power consumption by storage equipment (Figure 1). The most commonly discussed approach towards efficiency is energy avoidance, involving powering down storage when not in use, such as first generation MAID, at the cost of performance.

    For off-line storage, tape and other removable media give low-cost capacity per watt with low to no energy needed when not in use. Second generation (e.g. MAID 2.0) solutions with intelligent power management (IPM) capabilities have become more prevalent enabling performance or energy savings on a more granular or selective basis often as a standard feature in common storage systems.

    Figure 1:  Balancing green storage options: energy avoidance vs. energy efficiency

    Another approach to energy efficiency, seen in figure 1, is doing more work for active applications per watt of energy to boost productivity. This can be done by using the same amount of energy while doing more work, or doing the same amount of work with less energy.

    For example, instead of using larger capacity disks to improve capacity per watt metrics, active or performance sensitive storage should be looked at on an activity basis such as IOPS, transactions, videos, emails or throughput per watt. Hence, a fast disk drive doing work can be more energy-efficient in terms of productivity than a higher capacity slower disk drive for active workloads, while for idle or inactive storage the inverse should hold true.

    On a go forward basis the trend already being seen with some servers and storage systems is to do both more work, while using less energy. Thus a larger gap between useful work (for active or non idle storage) and amount of energy consumed yields a better efficiency rating, or, take the inverse if that is your preference for smaller numbers.
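    The activity-per-watt versus capacity-per-watt distinction above can be sketched numerically. The drive figures below are illustrative assumptions, not vendor data:

    ```python
    # Sketch of activity-per-watt vs capacity-per-watt metrics; pick the
    # metric that matches whether the storage is active or idle.

    def iops_per_watt(iops, watts):
        """Productivity metric for active storage."""
        return iops / watts

    def capacity_per_watt(capacity_gb, watts):
        """Space-efficiency metric for idle or inactive storage."""
        return capacity_gb / watts

    # Hypothetical drives: a fast 15K RPM drive vs a big 7200 RPM drive.
    fast_drive = {"iops": 180, "capacity_gb": 146, "watts": 15}
    big_drive = {"iops": 80, "capacity_gb": 1000, "watts": 12}

    # Active workloads: the fast drive delivers more work per watt.
    active_winner = iops_per_watt(fast_drive["iops"], fast_drive["watts"])
    # Idle/capacity-centric storage: the big drive stores more per watt.
    idle_winner = capacity_per_watt(big_drive["capacity_gb"], big_drive["watts"])
    ```

    The same device can therefore look efficient or inefficient depending on which metric you apply, which is why aligning the metric to the workload matters.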

    Reducing Data Footprint Impact
    Data footprint impact reduction tools or techniques for both on-line as well as off-line storage include archiving, data management, compression, deduplication, space-saving snapshots, thin provisioning along with different RAID levels among other approaches. From a storage access standpoint, you can also include bandwidth optimization, data replication optimization, protocol optimizers along with other network technologies including WAFS/WAAS/WADM to help improve efficiency of data movement or access.

    Thin provisioning for capacity centric environments can be used to achieve a higher effective storage use level by essentially overbooking storage, similar to how airlines oversell seats on a flight. If you have good historical information and insight into how storage capacity is used and over allocated, thin provisioning enables improved effective storage use for some applications.

    However, with thin provisioning, avoid introducing performance bottlenecks by leveraging solutions that work closely with tools that provide historical trending information (capacity and performance).
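    The airline-style overbooking can be sketched as simple bookkeeping; the capacity figures are illustrative assumptions:

    ```python
    # Minimal sketch of thin provisioning bookkeeping: allocated (promised)
    # space can exceed physical space, so trending data is needed to spot
    # when actual written data approaches the physical limit.

    def oversubscription_ratio(allocated_gb, physical_gb):
        """How heavily storage is overbooked, airline-style."""
        return allocated_gb / physical_gb

    def physical_headroom_gb(physical_gb, written_gb):
        """Real capacity left before thin volumes start failing writes."""
        return physical_gb - written_gb

    # 30TB promised to applications against 10TB of physical disk:
    ratio = oversubscription_ratio(30_000, 10_000)    # 3.0x overbooked
    headroom = physical_headroom_gb(10_000, 7_500)    # 2,500GB actually left
    ```

    Tracking headroom over time, rather than allocated space, is what tells you when the overbooking is about to catch up with you.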

    For a technology that some have tried to declare dead to prop up other new or emerging solutions, RAID remains relevant given its widespread deployment and the transparent reliance on it in organizations of all sizes. RAID also plays a role in addressing storage performance, availability, capacity and energy constraints as well as serving as a relief tool.

    The trick is to align the applicable RAID configuration to the task at hand meeting specific performance, availability, capacity or energy along with economic requirements. For some environments a one size fits all approach may be used while others may configure storage using different RAID levels along with number of drives in RAID sets to meet specific requirements.


    Figure 2:  How various RAID levels and configuration impact or benefit footprint constraints

    Figure 2 shows a summary and tradeoffs of various RAID levels. In addition to the RAID level, the number of disks can also have an impact on performance or capacity; for example, by creating a larger RAID 5 or RAID 6 group, the parity overhead can be spread out, however there are tradeoffs. Tradeoffs can be performance bottlenecks on writes or during drive rebuilds along with potential exposure to drive failures.

    All of this comes back to a balancing act to align to your specific needs as some will go with a RAID 10 stripe and mirror to avoid risks, even going so far as to do triple mirroring along with replication. On the other hand, some will go with RAID 5 or RAID 6 to meet cost or availability requirements, or, some I have talked with even run RAID 0 for data and applications that need the raw speed, yet can be restored rapidly from some other medium.
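    The capacity side of those tradeoffs can be sketched with a simplified calculation (it ignores hot spares, formatting overhead and vendor specifics):

    ```python
    # Simplified usable-capacity tradeoffs for common RAID levels.

    def usable_tb(raid_level, drives, drive_tb):
        """Usable capacity for a RAID group of identical drives."""
        if raid_level == 0:
            data_drives = drives          # stripe only, no protection
        elif raid_level == 5:
            data_drives = drives - 1      # one drive's worth of parity
        elif raid_level == 6:
            data_drives = drives - 2      # two drives' worth of parity
        elif raid_level == 10:
            data_drives = drives // 2     # mirrored pairs
        else:
            raise ValueError("unsupported RAID level")
        return data_drives * drive_tb

    # Spreading parity across a larger RAID 5 group lowers overhead:
    # 5 x 1TB yields 4TB usable (20% overhead), while 9 x 1TB yields
    # 8TB usable (~11% overhead), at the cost of longer rebuilds and
    # greater exposure to a second drive failure during a rebuild.
    small_group = usable_tb(5, 5, 1)
    large_group = usable_tb(5, 9, 1)
    ```

    The numbers make the balancing act concrete: RAID 10 gives up half the raw capacity for resilience and rebuild speed, while wide RAID 5/6 groups maximize capacity at the price of rebuild exposure.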

    Let's bring it all together with an example
    Figure 3 shows a generic example of a before and after optimization for a mixed workload environment, granted you can increase or decrease the applicable capacity and performance to meet your specific needs. In figure 3, the storage configuration consists of one storage system setup for high performance (left) and another for high-capacity secondary (right), disk to disk backup and other near-line needs, again, you can scale the approach up or down to your specific need.

    For the performance side (left), 192 x 146GB 15K RPM (28TB raw) disks provide good performance, however with low capacity use. This translates into a low capacity per watt value, however with reasonable IOPS per watt and some performance hot spots.

    On the capacity centric side (right), there are 192 x 1TB disks (192TB raw) with good space utilization, however some performance hot spots or bottlenecks, constrained growth, not to mention low IOPS per watt with reasonable capacity per watt. In the before scenario, the joint power draw (both arrays) is about 15 kW (15,000 watts), which translates to about $16,000 in annual energy costs (cooling excluded) assuming an energy cost of 12 cents per kWh.
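    The $16,000 figure is easy to sanity-check with a back-of-envelope calculation (like the example, this excludes cooling):

    ```python
    # Back-of-envelope annual energy cost from sustained power draw.

    def annual_energy_cost_usd(total_watts, usd_per_kwh):
        """Annual electricity cost for equipment running 24x365."""
        kwh_per_year = (total_watts / 1000) * 24 * 365
        return kwh_per_year * usd_per_kwh

    # 15,000 watts at 12 cents per kWh comes to $15,768 per year,
    # in line with the "about $16,000" cited in the example.
    cost = annual_energy_cost_usd(15_000, 0.12)
    ```

    The same one-liner is handy for testing sensitivity to your local energy rate, since as noted, your mileage will vary by location.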

    Note, your specific performance, availability, capacity and energy mileage will vary based on particular vendor solution, configuration along with your application characteristics.


    Figure 3: Baseline before and after storage optimization (raw hardware) example

    Building on the example in figure 3, a combination of techniques along with technologies yields a net performance, capacity and perhaps feature functionality (depending on the specific solution) increase. In addition, floor-space, power, cooling and associated footprints are also reduced. For example, the resulting solution shown (middle) comprises 4 x 250GB flash SSDs, along with 32 x 450GB 15.5K RPM and 124 x 2TB 7200RPM disks, enabling a 53TB (raw) capacity increase along with a performance boost.

    The previous examples are based on raw or baseline capacity metrics, meaning that further optimization techniques should yield additional benefits. These examples should also help address the question or myth that it costs more to power storage than to buy it, to which the answer is: it depends.

    If you can buy the above solution for, say, under $50,000 (the three-year cost to power it), or for that matter under $100,000 (the three-year cost to power and cool it), which would also be a good acquisition, then the myth that powering costs more than buying holds true. However, if a solution as described above costs more, the story changes, along with other variables including energy costs for your particular location, reinforcing the notion that your mileage will vary.

    Another tip is that more is not always better.

    That is, more disks, ports, processors, controllers or cache do not always equate to better performance. Performance is the sum of how those and other pieces work together in a demonstrable way, ideally measured with your specific application workload rather than what is on a product data sheet.

    Additional general tips include:

    • Align the applicable tool, technique or technology to task at hand
    • Look to optimize for both performance and capacity, active and idle storage
    • Consolidated applications and servers need fast servers
    • Fast servers need fast I/O and storage devices to avoid bottlenecks
    • For active storage use an activity per watt metric such as IOPS or transactions per watt
    • For inactive or idle storage, a capacity per watt per footprint metric would apply
    • Gain insight and control of how storage resources are used to meet service requirements

    It should go without saying, however sometimes what is understood needs to be restated.

    In the quest to become more efficient and optimized, avoid introducing performance, quality of service or availability issues by moving problems.

    Likewise, look beyond storage space capacity also considering performance as applicable to become efficient.

    Finally, it is all relative in that what might be applicable to one environment or application need may not apply to another.

    Ok, nuff said.

    Cheers gs


    Storage Efficiency and Optimization – The Other Green

    For those of you in the New York City area, I will be presenting live in person at Storage Decisions September 23, 2009 conference The Other Green, Storage Efficiency and Optimization.

    Throw out the "green" buzzword, and you're still left with the task of saving or maximizing use of space, power, and cooling while stretching available IT dollars to support growth and business sustainability. For some environments the solution may be consolidation, while others need to maintain quality of service response time, performance and availability, necessitating faster, energy-efficient technologies to achieve optimization objectives.

    To address these and other related issues, you can turn to the cloud, virtualization, intelligent power management, data footprint reduction and data management, not to mention various types of tiered storage and performance optimization techniques. The session will look at various techniques and strategies to optimize either on-line active or primary as well as near-line or secondary storage environments during tough economic times, as well as to position for future growth; after all, there is no such thing as a data recession!

    Topics, technologies and techniques that will be discussed include among others:

    • Energy efficiency (strategic) vs. energy avoidance (tactical), what's different between them
    • Optimization and the need for speed vs. the need for capacity, finding the right balance
    • Metrics & measurements for management insight, what the industry is doing (or not doing)
    • Tiered storage and tiered access including SSD, FC, SAS, tape, clouds and more
    • Data footprint reduction (archive, compress, dedupe) and thin provision among others
    • Best practices, financial incentives and what you can do today

    This is a free event for IT professionals, however space I hear is limited, learn more and register here.

    For those interested in broader IT data center and infrastructure optimization, check out the on-going seminar series The Infrastructure Optimization and Planning Best Practices (V2.009) – Doing more with less without sacrificing storage, system or network capabilities Seminar series continues September 22, 2009 with a stop in Chicago. This is also a free Seminar, register and learn more here or here.

    Ok, nuff said.

    Cheers gs


    Back to School and Dedupe School

    Summer is over here in the northern hemisphere and it's back to school time.

    This coming week I will be the substitute teacher filling in for my friend Mr. Backup in Minneapolis and Toronto for TechTarget's Dedupe School. If you are in either city and have not yet signed up, check out the link here to learn more.

    Hope to see you this week, or next week at Infrastructure Optimization in Chicago or Storage Decisions in NYC, where I will also be presenting, or teaching if you prefer, as well as listening and learning from the attendees what's on their minds.

    Stay current on other upcoming activities on our events page, as well as see what's new or in the news here.

    Ok, nuff said.

    Cheers gs


    Performance = Availability StorageIOblog featured ITKE guest blog

    ITKE - IT Knowledge Exchange

    Recently IT Knowledge Exchange named me and StorageIOblog their weekly featured IT blog, which I'm flattered and honored by. Consequently, I did a guest blog for them titled Performance = Availability, Availability = Performance that you can read here.

    For those not familiar with ITKE, take a few minutes and go over and check it out, there is a wealth of information there on a diversity of topics that you can read about, or, you can also get involved and participate in the questions and answers discussions.

    Speaking of ITKE, interested in “The Green and Virtual Data Center” (CRC), check out this link where you can download a free chapter of my book, along with information on how to order your own copy along with a special discount code from CRC press.

    Thank you very much to Sean Brooks of ITKE and his social media team of Michael Morisy and Jenny Mackintosh for naming me featured IT blogger, as well as for the opportunity to do a guest post. It has been fantastic working with them, particularly Jenny, who helped with all of the logistics of putting together the various pieces, including getting the post up on the web as well as into their newsletter.

    Ok, nuff said.

    Cheers gs


    Upcoming Out and About Events

    Following up on previous Out and About updates (here and here) of where I have been, here's where I'm going to be over the next couple of weeks.

    On September 15th and 16th 2009, I will be the keynote speaker along with doing a deep dive discussion around data deduplication in Minneapolis, MN and Toronto ON. Free Seminar, register and learn more here.

    The Infrastructure Optimization and Planning Best Practices (V2.009) – Doing more with less without sacrificing storage, system or network capabilities Seminar series continues September 22, 2009 with a stop in Chicago. Free Seminar, register and learn more here.

    On September 23, 2009 I will be in New York City at Storage Decisions conference participating in the Ask the Experts during the expo session as well as presenting The Other Green — Storage Efficiency and Optimization.

    Throw out the "green" buzzword, and you're still left with the task of saving or maximizing use of space, power, and cooling while stretching available IT dollars to support growth and business sustainability. For some environments the solution may be consolidation, while others need to maintain quality of service response time, performance and availability, necessitating faster, energy-efficient technologies to achieve optimization objectives. To address these and other related issues, you can turn to the cloud, virtualization, intelligent power management, data footprint reduction and data management, not to mention various types of tiered storage and performance optimization techniques. The session will look at various techniques and strategies to optimize either on-line active or primary as well as near-line or secondary storage environments during tough economic times, as well as to position for future growth; after all, there is no such thing as a data recession!

    Topics, technologies and techniques that will be discussed include among others:

    • Energy efficiency (strategic) vs. energy avoidance (tactical)
    • Optimization and the need for speed vs. the need for capacity
    • Metrics and measurements for management insight
    • Tiered storage and tiered access including SSD, FC, SAS and clouds
    • Data footprint reduction (archive, compress, dedupe) and thin provision
    • Best practices, financial incentives and what you can do today

    Free event, learn more and register here.

    Check out the events page for other upcoming events and hope to see you this fall while I'm out and about.

    Cheers – gs
