EPA Server and Storage Workshop Feb 2, 2010

EPA Energy Star

Following up on a previous post pertaining to US EPA ENERGY STAR® for Servers, Data Center Storage and Data Centers, there will be a workshop held Tuesday, February 2, 2010 in San Jose, CA.

Here is the note (Italics added by me for clarity) from the folks at EPA with information about the event and how to participate.

 

Dear ENERGY STAR® Servers and Storage Stakeholders:

Representatives from the US EPA will be in attendance at The Green Grid Technical Forum in San Jose, CA in early February, and will be hosting information sessions to provide updates on recent ENERGY STAR servers and storage specification development activities.  Given the timing of this event with respect to ongoing data collection and comment periods for both product categories, EPA intends for these meetings to be informal and informational in nature.  EPA will share details of recent progress, identify key issues that require further stakeholder input, discuss timelines for the completion, and answer questions from the stakeholder community for each specification.

The sessions will take place on February 2, 2010, from 10:00 AM to 4:00 PM PT, at the San Jose Marriott.  A conference line and Webinar will be available for participants who cannot attend the meeting in person.  The preliminary agenda is as follows:

Servers (10:00 AM to 12:30 PM)

  • Draft 1 Version 2.0 specification development overview & progress report
    • Tier 1 Rollover Criteria
    • Power & Performance Data Sheet
    • SPEC efficiency rating tool development
  • Opportunities for energy performance data disclosure

 

Storage (1:30 PM to 4:00 PM)

  • Draft 1 Version 1.0 specification development overview & progress report
  • Preliminary stakeholder feedback & lessons learned from data collection 

A more detailed agenda will be distributed in the coming weeks.  Please RSVP to storage@energystar.gov or servers@energystar.gov no later than Friday, January 22.  Indicate in your response whether you will be participating in person or via Webinar, and which of the two sessions you plan to attend.

Thank you for your continued support of ENERGY STAR.

 

End of EPA Transmission

For those attending the event, I look forward to seeing you there in person on Tuesday before flying down to San Diego where I will be presenting on Wednesday the 3rd at The Green Data Center Conference.

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

RAID Relevance Revisited

Following up on some previous posts on the topic, a continued discussion point in the data storage industry is the relevance (or lack thereof) of RAID (Redundant Array of Independent Disks).

These discussions tend to revolve around the notion that RAID is dead due to its real or perceived inability to continue scaling in terms of the performance, availability, capacity, economics or energy capabilities needed, particularly when compared to those of newer techniques, technologies or products.

RAID Relevance

While there are many new and evolving approaches to protecting data in addition to maintaining availability or accessibility of information, RAID, despite the fanfare, is far from dead, at least on the technology front.

Sure, there are issues or challenges that require continued investment in RAID, as has been the case over the past 20 years; however, those will also be addressed going forward via continued innovation and evolution, along with riding technology improvement curves.

Now from a marketing standpoint, ok, I can see where the RAID story is dead or boring and something new and shiny is needed, or at least where the pitch needs to change to sound like something new.

Consequently, when long in the tooth and saddled with some of the aforementioned items among others, older technologies that may be boring or lack sizzle or marketing dollars can be, and often are, declared dead on the buzzword bingo circuit. After all, how long now has the industry trade group RAID Advisory Board (RAB) been missing in action, retired, spun down, archived or ILMed?

RAID remains relevant because like other dead or zombie technologies it has reached the plateau of productivity and profitability. That success is also something that emerging technologies envy as their future domain and thus a classic marketing move is to declare the incumbent dead.

The reality is that RAID in all of its various instances from hardware to software, standard to non-standard with extensions is very much alive from the largest enterprise to the SMB to the SOHO down into consumer products and all points in between.

Now candidly, like any technology that is about 20 years old if not older (after all, the disk drive is over 50 years old, and how long has it been declared dead?), RAID in some ways is long in the tooth, and there are certainly issues to be addressed, as there have been in the past. Some of these include the overhead of rebuilding large capacity 1TB, 2TB and, in the not so distant future, even larger disk drives.
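To put the rebuild-overhead concern in rough numbers, here is a back-of-the-envelope sketch; the capacities and the sustained rebuild rate below are illustrative assumptions, not measured figures from any particular controller or drive:

```python
# Rough rebuild-time estimate: capacity divided by sustained rebuild rate.
# The 50 MB/s rate is an illustrative assumption only; real rebuild rates
# vary with controller load, algorithm and drive generation.

def rebuild_hours(capacity_tb: float, rate_mb_s: float) -> float:
    """Hours to sequentially rewrite a whole drive at a given MB/s."""
    capacity_mb = capacity_tb * 1_000_000  # 1 TB = 1,000,000 MB (decimal)
    return capacity_mb / rate_mb_s / 3600

for tb in (1, 2):
    print(f"{tb} TB at 50 MB/s is roughly {rebuild_hours(tb, 50):.1f} hours")
```

The point of the arithmetic: doubling drive capacity doubles the rebuild window (and thus the exposure to a second failure) unless rebuild rates improve in step, which is exactly the pressure driving the controller enhancements discussed below.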

There are also issues pertaining to distributed data protection in support of cloud, virtualized or other solutions that need to be addressed. In fact, go back to when RAID appeared commercially on the scene in the late 80s, and one of its value propositions among others was to address the reliability of emerging large capacity, multi-MByte SCSI disk drives. It seems almost laughable today that a decade later, when 1GB disk drives appeared in the market in the 90s, there was renewed concern about RAID and disk drive rebuild times.

Rest assured, I think that there is a need and plenty of room for continued innovation and evolution around RAID related technologies and their associated storage systems or packaging going forward.

What I find interesting is that some of the issues facing RAID today are similar to those of a decade ago, for example having to deal with large capacity disk drive rebuilds, distributed data protection and availability, performance, ease of use, and so the list goes.

However, what happened was that vendors continued to innovate in terms of basic performance, accelerating rebuild rates with improved rebuild algorithms while leveraging faster processors, busses and other techniques. In addition, vendors continued to innovate in terms of new functionality, including adopting RAID 6, which for the better part of a decade, outside of a few niche vendors, languished as one of those future technologies that probably nobody would ever adopt. We know that to be different now and for the past several years: RAID 6 is one of those areas where vendors who do not have it are either adding it, enhancing it, or telling you why you do not need it or why it is no good for you.

An example of how RAID 6 is being enhanced is boosting performance on normal read and write operations along with accelerating performance during disk rebuilds. Also tied to RAID 6 and disk drive rebuilds are improvements in controller design to detect and proactively make repairs on the fly to minimize or eliminate errors or diminish the need for drive rebuilds, similar to what was done in previous generations. Let's also not forget the improvements in disk drives boosting performance, availability, capacity and energy characteristics over time.
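The core idea behind parity-based protection (single parity as in RAID 5; RAID 6 adds a second, independently computed parity so two simultaneous failures survive) can be sketched with XOR. This toy example is illustrative only and ignores real controller concerns such as striping layout, caching and on-the-fly repair:

```python
from functools import reduce

def xor_parity(blocks):
    """Compute a parity block as the byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data blocks in one stripe
parity = xor_parity(data)

# Lose any single block: XOR of the survivors plus parity rebuilds it,
# because X ^ X cancels out of the running XOR.
lost = data[1]
rebuilt = xor_parity([data[0], data[2], parity])
assert rebuilt == lost
```

A rebuild is essentially this reconstruction repeated across every stripe on the failed drive, which is why rebuild time scales with drive capacity.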

Funny how these and other enhancements are similar to those made to RAID controller hardware and software in the early to mid 2000s, fine tuning them in support of high capacity SATA disk drives that had different RAS characteristics than higher performance, lower capacity enterprise drives.

Here is my point.

RAID to some may be dead, while others continue to rely on it. Meanwhile, others are working on enhancing technologies for future generations of storage systems and application requirements. Thus, in different shapes, forms, configurations, features, functionality or packaging, the spirit of RAID is very much alive and well, remaining relevant.

Regardless of whether a solution uses two or three disk mirroring for availability; RAID 0 striping of fast SSD, SAS or FC disks for performance, with data protection via rapid restoration from some other low cost medium (perhaps RAID 6 or tape); single, dual or triple parity protection; small block, multi-MByte or volume based chunklets; hardware or software based; local or distributed; standard or non standard, chances are there is some theme of RAID involved.
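The striping theme that runs through all of those variations can be sketched as a simple address mapping; the function name and the stripe-depth parameter here are hypothetical, for illustration of the idea rather than any vendor's layout:

```python
def locate(block: int, drives: int, stripe_depth: int = 1):
    """Map a logical block number to (drive, block-on-drive) for a simple
    RAID 0 stripe. stripe_depth is blocks per drive per stripe rotation."""
    stripe, offset = divmod(block, drives * stripe_depth)
    drive, within = divmod(offset, stripe_depth)
    return drive, stripe * stripe_depth + within

# With 4 drives and depth 1, consecutive logical blocks rotate across
# drives, which is where the parallel-performance benefit comes from.
print([locate(b, 4) for b in range(8)])
```

Mirroring, parity and chunklet schemes layer redundancy on top of exactly this kind of mapping, which is why they all carry "some theme of RAID" whatever the marketing name.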

Granted, you do not have to call it RAID if you prefer!

As a closing thought, if RAID were no longer relevant, then why do the post RAID, next generation, life beyond RAID, or whatever you prefer to call them, technologies need to tie themselves to the themes of RAID? Simple: RAID is still relevant in some shape or form to different audiences, and it is a great way of stimulating discussion or debate in a constantly evolving industry.

BTW, I'm still waiting for the revolutionary piece of hardware that does not require software, and the software that does not require hardware, and that includes playing games with serverless servers using hypervisors :).

Provide your perspective on RAID and its relevance in the following poll.

Here are some additional related and relevant RAID links of interest:

Stay tuned for more about RAID's relevance, as I don't think we have heard the last on this.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Poll: Networking Convergence, Ethernet, InfiniBand or both?

I just received an email in my inbox from Voltaire, along with a pile of other advertisements, advisories, alerts and announcements from other folks.

What caught my eye about the email was that it announces new survey results, which you can read here as well as below.

The question that this survey announcement prompts for me, and hence why I am posting it here, is how dominant will InfiniBand be going forward? The answer, I think, is it depends…

It depends on the target market or audience, what their applications and technology preferences are along with other service requirements.

I think that there is, and will remain, a place for InfiniBand; the questions are where, for what types of environments, and why have both InfiniBand and Ethernet, including Fibre Channel over Ethernet (FCoE), in support of unified or converged I/O and data networking.

So here is the note that I received from Voltaire:

 

Hello,

A new survey by Voltaire (NASDAQ: VOLT) reveals that IT executives plan to use InfiniBand and Ethernet technologies together as they refresh or build new data centers. They’re choosing a converged network strategy to improve fabric performance which in turn furthers their infrastructure consolidation and efficiency objectives.

The full press release is below.  Please contact me if you would like to speak with a Voltaire executive for further commentary.

Regards,
Christy

____________________________________________________________
Christy Lynch| 978.439.5407(o) |617.794.1362(m)
Director, Corporate Communications
Voltaire – The Leader in Scale-Out Data Center Fabrics
christyl@voltaire.com | www.voltaire.com
Follow us on Twitter: www.twitter.com/voltaireltd

FOR IMMEDIATE RELEASE:

IT Survey Finds Executives Planning Converged Network Strategy:
Using Both InfiniBand and Ethernet

Fabric Performance Key to Making Data Centers Operate More Efficiently

CHELMSFORD, Mass. and RA'ANANA, Israel, January 12, 2010 – A new survey by Voltaire (NASDAQ: VOLT) reveals that IT executives plan to use InfiniBand and Ethernet technologies together as they refresh or build new data centers. They’re choosing a converged network strategy to improve fabric performance which in turn furthers their infrastructure consolidation and efficiency objectives.

Voltaire queried more than 120 members of the Global CIO & Executive IT Group, which includes CIOs, senior IT executives, and others in the field that attended the 2009 MIT Sloan CIO Symposium. The survey explored their data center networking needs, their choice of interconnect technologies (fabrics) for the enterprise, and criteria for making technology purchasing decisions.

“Increasingly, InfiniBand and Ethernet share the ability to address key networking requirements of virtualized, scale-out data centers, such as performance, efficiency, and scalability,” noted Asaf Somekh, vice president of marketing, Voltaire. “By adopting a converged network strategy, IT executives can build on their pre-existing investments, and leverage the best of both technologies.”

When asked about their fabric choices, 45 percent of the respondents said they planned to implement both InfiniBand and Ethernet as they made future data center enhancements. Another 54 percent intended to rely on Ethernet alone.

Among additional survey results:

  • When asked to rank the most important characteristics for their data center fabric, the largest number (31 percent) cited high bandwidth. Twenty-two percent cited low latency, and 17 percent said scalability.
  • When asked about their top data center networking priorities for the next two years, 34 percent again cited performance. Twenty-seven percent mentioned reducing costs, and 16 percent cited improving service levels.
  • A majority (nearly 60 percent) favored a fabric/network that is supported or backed by a global server manufacturer.

InfiniBand and Ethernet interconnect technologies are widely used in today’s data centers to speed up and make the most of computing applications, and to enable faster sharing of data among storage and server networks. Voltaire’s server and storage fabric switches leverage both technologies for optimum efficiency. The company provides InfiniBand products used in supercomputers, high-performance computing, and enterprise environments, as well as its Ethernet products to help a broad array of enterprise data centers meet their performance requirements and consolidation plans.

About Voltaire
Voltaire (NASDAQ: VOLT) is a leading provider of scale-out computing fabrics for data centers, high performance computing and cloud environments. Voltaire’s family of server and storage fabric switches and advanced management software improve performance of mission-critical applications, increase efficiency and reduce costs through infrastructure consolidation and lower power consumption. Used by more than 30 percent of the Fortune 100 and other premier organizations across many industries, including many of the TOP500 supercomputers, Voltaire products are included in server and blade offerings from Bull, HP, IBM, NEC and Sun. Founded in 1997, Voltaire is headquartered in Ra’anana, Israel and Chelmsford, Massachusetts. More information is available at www.voltaire.com or by calling 1-800-865-8247.

Forward Looking Statements
Information provided in this press release may contain statements relating to current expectations, estimates, forecasts and projections about future events that are "forward-looking statements" as defined in the Private Securities Litigation Reform Act of 1995. These forward-looking statements generally relate to Voltaire’s plans, objectives and expectations for future operations and are based upon management’s current estimates and projections of future results or trends. They also include third-party projections regarding expected industry growth rates. Actual future results may differ materially from those projected as a result of certain risks and uncertainties. These factors include, but are not limited to, those discussed under the heading "Risk Factors" in Voltaire’s annual report on Form 20-F for the year ended December 31, 2008. These forward-looking statements are made only as of the date hereof, and we undertake no obligation to update or revise the forward-looking statements, whether as a result of new information, future events or otherwise.

###

All product and company names mentioned herein may be the trademarks of their respective owners.

 

End of Voltaire transmission:

I/O, storage and networking interface wars come and go, similar to other technology debates over what is best or which will reign supreme.

Some recent debates have been around Fibre Channel vs. iSCSI or iSCSI vs. Fibre Channel (depends on your perspective), SAN vs. NAS, NAS vs. SAS, SAS vs. iSCSI or Fibre Channel, Fibre Channel vs. Fibre Channel over Ethernet (FCoE) vs. iSCSI vs. InfiniBand, xWDM vs. SONET or MPLS, IP vs UDP or other IP based services, not to mention the whole LAN, SAN, MAN, WAN POTS and PAN speed games of 1G, 2G, 4G, 8G, 10G, 40G or 100G. Of course there are also the I/O virtualization (IOV) discussions including PCIe Single Root (SR) and Multi Root (MR) for attachment of SAS/SATA, Ethernet, Fibre Channel or other adapters vs. other approaches.
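For a sense of what those speed-game numbers mean in practice, here is a quick sketch of the theoretical time to move 1 TB at various line rates; this is wire speed only, and real throughput is lower once protocol overhead and storage bottlenecks are factored in:

```python
def transfer_seconds(data_tb: float, link_gbps: float) -> float:
    """Theoretical seconds to move data_tb terabytes over a link_gbps
    link at full line rate (no protocol overhead, decimal units)."""
    bits = data_tb * 8e12          # 1 TB = 8e12 bits (decimal)
    return bits / (link_gbps * 1e9)

for g in (1, 10, 100):
    print(f"1 TB over {g} Gb/s: {transfer_seconds(1, g) / 60:.1f} minutes")
```

Which is part of why the "best interface" question rarely has one answer: the raw rate is only one variable alongside latency, cost, distance and what the attached servers and storage can actually sustain.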

Thus, when I routinely get asked about what is the best, my answer usually is a qualified it depends: on what you are doing, what you are trying to accomplish, your environment and your preferences, among others. In other words, I'm not hung up on or tied to any one particular networking transport, protocol, network or interface; rather, the ones that work and are most applicable to the task at hand.

Now getting back to Voltaire and InfiniBand, which I think has a future for some environments; however, I don't see it being the be all end all it was once promoted to be. And outside of the InfiniBand faithful (there are also iSCSI, SAS, Fibre Channel, FCoE, CEE and DCE among other devotees), I suspect that the results would be mixed.

I suspect that the Voltaire survey reflects that as well; if I surveyed an Ethernet dominated environment I could take a pretty good guess at the results, likewise for a Fibre Channel or FCoE influenced environment, not to mention the composition of the environment, its focus, and the business or applications being supported. One would also expect slightly different survey results from the likes of Aprius, Broadcom, Brocade, Cisco, Emulex, Mellanox (they are also involved with InfiniBand), NextIO, Qlogic (they actually do some InfiniBand activity as well), Virtensys or Xsigo (actually, they support convergence of Fibre Channel and Ethernet via InfiniBand) among others.

Ok, so what is your take?

What's your preferred network interface for convergence?

For additional reading, here are some related links:

  • I/O Virtualization (IOV) Revisited
  • I/O, I/O, It's off to Virtual Work and VMworld I Go (or went)
  • Buzzword Bingo 1.0 – Are you ready for fall product announcements?
  • StorageIO in the News Update V2010.1
  • The Green and Virtual Data Center (Chapter 9)
  • Also check out what others including Scott Lowe have to say about IOV here or, Stuart Miniman about FCoE here, or of Greg Ferro here.
  • Oh, and for what it's worth for those concerned about FTC disclosure, Voltaire is not, nor have they been, a client of StorageIO; however, I used to work for a Fibre Channel, iSCSI, IP storage, LAN, SAN, MAN, WAN vendor and wrote a book on the topics :).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Upcoming Events and Activities Update V2010.1

The end of year Christmas and New Year's holiday season has come and gone, which means of course that 2009 is a wrap, along with the travel from being out and about.

In addition to getting some time to relax a bit (playing Wii resort, snow plowing, cooking etc.), I have also been catching up on developing some new content including articles, blogs (some yet to be posted), tips as well as podcasts, along with some custom research advisory projects.

Check out some recent tips, articles, videos and podcasts here along with perspectives and comments on industry news here.

2009 events and activities saw visits to cities including San Jose, Tucson, Cancun Mexico, Dallas, Tampa, Miami, Los Angeles, San Jose, Las Vegas, Milwaukee, Atlanta, St. Louis, Birmingham, Cincinnati, Santa Ana, Minneapolis, Boston, Dallas, Boston, Chicago, Parsippany, Raleigh, Providence, Kansas City, Denver, Chicago, Orlando, Chicago, Philadelphia, Toronto, Richmond, Columbus, Princeton, Seattle, Portland, Dallas, San Francisco, Minneapolis, Toronto, Chicago, New York, Milwaukee, Atlanta, Boston, Cleveland and Detroit among others.

This time of the year also means that the 2010 events and activities, including in person keynotes and presentations (also known as out and about), are getting underway. While the 2010 schedule of events is still being finalized, some initial events are on the calendar, my bags are about to be packed and tickets are in hand, not to mention finalizing the presentation and discussion content.

In addition to some non public events, including keynote presentations at some vendors' annual sales (kick off) meetings, the following are some of what is currently on the calendar; you can click on the links below to learn more about the venues.

February 3, 2010 Green Data Center Conference, San Diego, CA
January 21, 2010 Dinner Event keynote Speaker Dynamic IT Infrastructure, Beverly Hills, CA
January 21, 2010 Morning keynote Speaker The Green and Virtual Data Center, San Diego, CA
January 19, 2010 Dinner Event keynote Speaker Dynamic IT Infrastructure, Miami, FL

Watch for updates to the events calendar and I look forward to seeing you all while I'm out and about during 2010.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

StorageIO in the News Update V2010.1

StorageIO is regularly quoted and interviewed in various industry and vertical market venues and publications, both on-line and in print, on a global basis.

The following are some coverage, perspectives and commentary by StorageIO on IT industry trends including servers, storage, I/O networking, hardware, software, services, virtualization, cloud, cluster, grid, SSD, data protection, Green IT and more since the last update.

Realizing that some prefer blogs to webs to twitter to other venues, here are some recent links, among others, to media coverage and comments by me on different topics that can be found at www.storageio.com/news.html:

  • SearchSMBStorage: Comments on EMC Iomega v.Clone for PC data synchronization – Jan 2010
  • Computerworld: Comments on leveraging cloud or online backup – Jan 2010
  • ChannelProSMB: Comments on NAS vs SAN Storage for SMBs – Dec 2009
  • ChannelProSMB: Comments on Affordable SMB Storage Solutions – Dec 2009
  • SearchStorage: Comments on What to buy a geek for the holidays, 2009 edition – Dec 2009
  • SearchStorage: Comments on EMC VMAX storage and 8GFC enhancements – Dec 2009
  • SearchStorage: Comments on Data Footprint Reduction – Dec 2009
  • SearchStorage: Comments on Building a private storage cloud – Dec 2009
  • SearchStorage: Comments on SSD in storage systems – Dec 2009
  • SearchStorage: Comments on slow adoption of file virtualization – Dec 2009
  • IT World: Comments on maximizing data security investments – Nov 2009
  • SearchCIO: Comments on storage virtualization for your organisation – Nov 2009
  • Processor: Comments on how to win approval for hardware upgrades – Nov 2009
  • Processor: Comments on the Future of Servers – Nov 2009
  • SearchITChannel: Comments on Energy-efficient technology sales depend on pitch – Nov 2009
  • SearchStorage: Comments on how to get from Fibre Channel to FCoE – Nov 2009
  • Minneapolis Star Tribune: Comments on Google Wave and Clouds – Nov 2009
  • SearchStorage: Comments on EMC and Cisco alliance – Nov 2009
  • SearchStorage: Comments on HP virtualization enhancements – Nov 2009
  • SearchStorage: Comments on Apple canceling ZFS project – Oct 2009
  • Processor: Comments on EPA Energy Star for Server and Storage Ratings – Oct 2009
  • IT World Canada: Cloud computing, don't be scared, look before you leap – Oct 2009
  • IT World: Comments on stretching your data protection and security dollar – Oct 2009
  • Enterprise Storage Forum: Comments about Fragmentation and Performance? – Oct 2009
  • SearchStorage: Comments about data migration – Oct 2009
  • SearchStorage: Comments about What’s inside internal storage clouds? – Oct 2009
  • Enterprise Storage Forum: Comments about T-Mobile and Clouds? – Oct 2009
  • Storage Monkeys: Podcast comments about Sun and Oracle- Sep 2009
  • Enterprise Storage Forum: Comments on Maxiscale clustered, cloud NAS – Sep 2009
  • SearchStorage: Comments on Maxiscale clustered NAS for web hosting – Sep 2009
  • Enterprise Storage Forum: Comments on who's hot in the data storage industry – Sep 2009
  • SearchSMBStorage: Comments on SMB Fibre Channel switch options – Sep 2009
  • SearchStorage: Comments on using storage more efficiently – Sep 2009
  • SearchStorage: Comments on Data and Storage Tiering including SSD – Sep 2009
  • Enterprise IT Planet: Comments on Data Deduplication – Sep 2009
  • SearchDataCenter: Comments on Tiered Storage – Sep 2009
  • Enterprise Storage Forum: Comments on Sun-Oracle Wedding – Aug 2009
  • Processor.com: Comments on Storage Network Snags – Aug 2009
  • SearchStorageChannel: Comments on I/O virtualization (IOV) – Aug 2009
  • SearchStorage: Comments on Clustered NAS storage and virtualization – Aug 2009
  • SearchITChannel: Comments on Solid-state drive prices still hinder adoption – Aug 2009
  • Check out the Content, Tips, Tools, Videos, Podcasts plus White Papers, and News pages for additional commentary, coverage and related content or events.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Recent tips, videos, articles and more update V2010.1

Realizing that some prefer blogs to webs to twitter to other venues, here are some recent links to articles, tips, videos, webcasts and other content that have appeared in different venues since August 2009.

  • i365 Guest Interview: Experts Corner: Q&A with Greg Schulz December 2009
  • SearchCIO Midmarket: Remote-location disaster recovery risks and solutions December 2009
  • BizTech Magazine: High Availability: A Delicate Balancing Act November 2009
  • ESJ: What Comprises a Green, Efficient and Effective Virtual Data Center? November 2009
  • SearchSMBStorage: Determining what server to use for SMB November 2009
  • SearchStorage: Performance metrics: Evaluating your data storage efficiency October 2009
  • SearchStorage: Optimizing capacity and performance to reduce data footprint October 2009
  • SearchSMBStorage: How often should I conduct a disaster recovery (DR) test? October 2009
  • SearchStorage: Addressing storage performance bottlenecks in storage September 2009
  • SearchStorage AU: Is tape the right backup medium for smaller businesses? August 2009
  • ITworld: The new green data center: From energy avoidance to energy efficiency August 2009
  • Video and podcasts include:
    December 2009 Video: Green Storage: Metrics and measurement for management insight
    Discussion between Greg Schulz and Mark Lewis of TechTarget about the importance of metrics and measurement to gauge productivity and efficiency for Green IT and enabling virtual information factories. Click here to watch the Video.

    December 2009 Podcast: iSCSI SANs can be a good fit for SMB storage
    Discussion between Greg Schulz and Andrew Burton of TechTarget about iSCSI and other related technologies for SMB storage. Click here to listen to the podcast.

    December 2009 Podcast: RAID Data Protection Discussion
    Discussion between Greg Schulz and Andrew Burton of TechTarget about RAID data protection techniques and technologies. Click here to listen to the podcast.

    December 2009 Podcast: Green IT, Efficiency and Productivity Discussion
    Discussion between Greg Schulz and Jon Flower of Adaptec about Green IT, energy efficiency, intelligent power management (IPM) also known as MAID 2.0, and other forms of optimization techniques including SSD. Click here to listen to the podcast sponsored by Adaptec.

    November 2009 Podcast: Reducing your data footprint impact
    Even though many enterprise data storage environments are coping with tightened budgets and reduced spending, overall net storage capacity is increasing. In this interview, Greg Schulz, founder and senior analyst at StorageIO Group, discusses how storage managers can reduce their data footprint. Schulz touches on the importance of managing your data footprint on both online and offline storage, as well as the various tools for doing so, including data archiving, thin provisioning and data deduplication. Click here to listen to the podcast.

    October 2009 Podcast: Enterprise data storage technologies rise from the dead
    In this interview, Greg Schulz, founder and senior analyst of the Storage I/O group, classifies popular technologies such as solid-state drives (SSDs), RAID and Fibre Channel (FC) as “zombie” technologies. Why? These are already set to become part of standard storage infrastructures, says Schulz, and are too old to be considered fresh. But while some consider these technologies to be stale, users should expect to see them in their everyday lives. Click here to listen to the podcast.

Check out the Tips, Tools and White Papers, and News pages for additional commentary, coverage and related content or events.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    EPA Energy Star for Data Center Storage Update

    Following up on previous posts pertaining to US EPA Energy Star for Servers, Data Center Storage and Data Centers, here is a note received today with some new information. For those interested in the evolving Energy Star for Data Center, Servers and Storage, have a look at the following as well as the associated links.

    Here is the note from EPA:

    From: ENERGY STAR Storage [storage@energystar.gov]
    Sent: Monday, December 28, 2009 8:00 AM
    Subject: ENERGY STAR Data Center Storage Initial Data Collection Procedure

    EPA Energy Star

    Dear ENERGY STAR Data Center Storage Stakeholder or Other Interested Party:

    The U.S. Environmental Production Agency (EPA) would like to invite interested parties to test the energy performance of storage products that are currently being considered for inclusion in the Version 1.0 ENERGY STAR® Data Center Storage specification. Please review the attached cover letter, data collection procedure, and test data collection sheet for further information.

    Stakeholders are encouraged to submit test data via e-mail to storage@energystar.gov no later than Friday, February 12, 2010.

    Thank you for your continued support of ENERGY STAR!

    Attachment Links:

    Storage Initial Data Collection Procedure.pdf

    Storage Initial Data Collection Cover Letter.pdf

    Storage Initial Data Collection Data Sheet.xls

    For more information, visit: www.energystar.gov


    For those interested in EPA Energy Star and Green IT, including green and energy efficient storage, check out the following links:

    Watch for more news and updates pertaining to EPA Energy Star for Servers, Data Center Storage and Data centers in 2010.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio


    Poll: What was hot in 2009 and what was not, cast your vote!

    This is the time of year when people make their predictions for the next year.


    Building on some recent surveys and polls including:

    What's your take on Windows 7?

    Is IBM XIV still relevant?

    EMC and Cisco Acadia VCE, what does it mean?

    What do you think of IT clouds?

    What's Your Take on FTC Guidelines For Bloggers?

    Not to mention those over at Storage Monkeys and the customer collective among others


    Before jumping to what will be hot or a flop in 2010, what do you think were the successful as well as disappointing technologies, trends, events, products or vendors of 2009?


    Cast your vote, including adding your own nominations, in the two polls below.

    What technologies, events, products or vendors did not live up to 2009 predictions?



    What do you think were top 2009 technologies, events or vendors?

    Note:

    Feel free to vote early and often; however, be advised that you will have to be creative in doing so, as single balloting per IP is enforced and cookies are enabled to keep things on the down low.

    Check back soon to see how the results play out…


    Cheers gs

    Greg Schulz – StorageIO, Author The Green and Virtual Data Center (CRC)

    Behind the Scenes, SANta Claus Global Cloud Story

    There is a ton of discussion, stories, articles, videos, conferences and blogs about the benefits and value proposition of cloud computing. Not to mention, discussion or debates about what is or what is not a cloud or cloud product, service or architecture including some perspectives and polls from me.

    Now SANta does not really care about these and other similar debates, I have learned. However, he is concerned with who has been naughty and nice, as well as watching out for impersonators or members of his crew who misbehave.

    In the spirit of the holidays, how about a quick look at how SANta leverages cloud technologies to support his global operations.

    Many in IT think that SANta bases his operations out of the North Pole as it is convenient for him to cool all of his servers, storage, networks and telecom equipment (which it is). However, it's also centrally located (see chart) for the northern hemisphere (folks down under may get serviced via SANta's secret Antarctica base of operations). Just like ANC (Anchorage International Airport) is a popular cargo transit, transload and refueling base for cargo carriers, SANta also leverages the North and South Pole regions to his advantage.

    Great Circle Mapper
    SANta's Global Reach via Great Circle Mapper

    Now do not worry if you have never heard about SANta's dual redundant South Pole operations; it's one of his better kept secrets. Many organizations, including SANta's partners such as Microsoft, that have global mega IT operations and logistics centers have followed SANta's lead of leveraging various locations outside of the Pacific Northwest. Granted, like some of his partners and managed service providers, he does maintain a presence in the Washington Columbia River basin, which provides nice PR among other benefits.

    Likewise, many in business as well as those in IT think that SANta leverages cloud technologies for cost savings or avoidance, which is partially the case. However, he also leverages cloud, hosting, managed service provider (MSP), virtual data center, virtual operations center, XaaS, SaaS or SOA technologies, services, protocols and products that are transparent and complementary to his own in-house resources, addressing various business and service requirement needs.

    What this has to do with the holidays and clouds is that you may not realize how extensively Santa, or St. Nick if you prefer (feel free to plug in whoever you like if Santa or St. Nick does not turn your crank), relies on flexible, scalable and resilient technologies for boosting productivity in a cost-effective manner. Some of it is IT related, some of it is not. For example, from the GPS and radar along with recently added RNP and RNAV enhanced capabilities on his increasingly high tech biofuel-powered sleigh, not to mention the information technology (IT) that powers his global operations, old St. Nick has got it together when it comes to technology.

    The heart or brains of the SANta operation is his global system operations center (SOC) or network operation center (NOC) that rivals those seen at NASA among others with multiple data feeds. The SOC is a 24×365 operations function that covers all aspects from transportation, logistics, distribution, assembly or packaging, financials back office, CRM, IT and communications among other functions.

    Naturally, this recalls the Apollo moon shots, whose Grumman-built LEM lunar lander had to have 100% availability: to get off of the moon, its ascent engine only had to fire once, however it had to work 100% of the time! This thought process is said to have leveraged principles from SANta's operations guide, where he has one night a year to accomplish the impossible.

    I should mention, while I cannot disclose (due to NDA) the exact locations of the SOCs, data or logistics centers, not to mention the vendors or the technology being used, I can tell you that they are all around you! The fully redundant SOCs, data and call centers as well as logistics sites (including staff, facilities, technology) leverage different time zones for efficiency.

    SANta's staff have also found that the redundant SOCs, part of an approach across SANta's entire vast organization, have helped guard against global epidemics and pandemics, including SARS and H1N1 among others, by isolating workers while providing appropriate coverage and availability, something many large organizations have since followed.

    Carrying through on the philosophy of redundant SOCs, all other aspects of SANtas operations are distributed yet with centralized coordinated management, leveraging real-time situation awareness, event and activity correlation (what we used to call or refer to as AI), cross technology domain management, proactive monitoring and planning yet with ability for on the spot decision making.

    What this means is that the various locations have the ability to make localized decisions on the spot, however coordinated with primary operations or mission control to streamline global operations, focus on strategic activity, and handle exceptions more effectively. Thus it is not fully distributed nor fully centralized, rather a hybrid in terms of management, technologies and the way they work.

    For example, to handle the diverse applications, there are some primary large processing and data retention facilities that back up and replicate information to other peer sites, as well as smaller regional remote office branch offices close to where information services are needed. To say the environment is highly virtualized would be an understatement.

    Likewise, optimization is key, not just to keep costs low or to avoid overheating some of SANta's facilities located in the Arctic and Antarctic regions, which could melt the ice cap; operations are also optimized to keep response time as low as possible while boosting productivity.

    Thus, SANta has to rely on very robust and diverse communications networking leveraging LAN, SAN, MAN, WAN, POTS and PAN technologies among others. For example, his communications portfolio is said to involve landlines (copper and optical), RF including microwave, and other radio-based communications supporting or using 3G, 4G, MPLS, SONET/SDH, xWDM and free space optics among others.

    SANta's networking and communications elves are also said to be working with 5G and 100GbE multiplexed on 256 lambda WDM trunk circuits in non-core trunk applications. Of course, given the airborne operations, satellite and ACARS are a must to avoid overflying a destination while remaining in positive control during low visibility. Note that Santa routinely makes more CAT 3+ low visibility landings than most of the world's airlines and air freight companies combined.

    My sources also tell me that SANta has virtual desktop capability leveraging PCoIP and other optimizations on his primary and backup sleighs, enabling rapid reconfiguration for changing workload conditions. He is also fully equipped with onboard social media capabilities for updates via Twitter, Facebook and LinkedIn among others, designed by his chief social networking elf.

    Consequently, given the vast amount of information needed to support his operations from CRM, shipping, tracking not to mention historical and profiling needs, transactional volumes both on the data as well as voice and social media networks dwarf the stock market trading volume.

    Feeding SANta's vast organization are online, highly available, robust databases for transaction purposes, plus reference and unstructured data material including videos, websites and more. Some of these look hauntingly familiar, given those that are part of SANta's eWorld Helpers initiative, including: Sears, Amazon, Netflix, Target, Albertsons, Staples, EMC, Walmart, Overstock, RadioShack, Lands' End, Dell, HP, eBay, Lowes, Publix, eMusic, Rite Aid and Supervalu among others (I'm just sayin…).

    The actual size of SANta's information repository is a closely guarded secret, as are the exact topology, schema and content structure. However, it is understood that on peak days SANta's highly distributed, high performance, low latency data warehouse sees upwards of 1,225 PBytes of data added, one that is rumored to make Larry Ellison gush with excitement over its growth possibilities.

    How does SANta pull this all off? By leveraging virtualization, automation, and efficient enabling technologies that allow him and his elves (excuse me, associates or team members) to be more productive in their areas of focus, in a way that is the envy of the universe.

    Some of their efficiency is measured in terms of:

    • How many packages can be processed per elf with minimum or no mistakes
    • Number of calls, requests, inquiries per day per elf in a friendly and understandable manner
    • Knowing who has been naughty or nice in the blink of an eye including historical profiles
    • Virtual machines (VM) or physical machine (PM) servers managed per team member
    • Databases and applications, local and remote, logical and physical per team member
    • Storage in terms of PByte and Exabyte managed to given service level per team member
    • Network circuits and bandwidth with fewest dropped packets (or packages) per member
    • Fewest misdirected packages as well as aborted landings per crew
    • Fewest pounds gained from consumption of most milk and cookies per crew

    From how many packages can be processed per hour, to the number of virtual servers per person, PBytes of data managed per person, network connections and circuits per person, databases and applications per person, to takeoffs and landings (SANta tops the list for this one), they are all highly efficient and effective.

    Likewise, SANta leverages the partners in his SANta eWorld Helpers initiative network to help out where, of course, he looks for value; however, value is not just the lowest price per VM, lowest cost per TByte or cost per bandwidth. For SANta it is also very focused on performance, availability, capacity and economic efficiency, not to mention quality, with an environmentally friendly green supply chain.

    By having a green supply chain, SANta takes a responsible, global approach that also makes economic sense on where to manufacture, produce or procure products. Contrary to growing popular belief, locally produced may not always be the most environmentally or economically favorable approach. For example (read more here), instead of growing flowers and plants in western Europe where they are consumed, a process that would require more energy for heat and lights, not to mention water and other resources, SANta has bucked the trend, relying instead on the economics and environmental benefit of flowers and plants grown in warmer, sunnier climates.

    Granted, and rest assured, SANta still has an army of elves busily putting things together in his own factories, along with managing IT related activities in an economically positive manner.

    SANta has also applied this thinking to his data, information and communications networks, leveraging sites such as those in the Arctic where solar power can be used during summer months along with cooling economizers to offset the impact of batteries, with workload shifted around the world as needed. This approach is rumored to be the envy of the US EPA Energy Star for Server, Storage and Data Center crew, not to mention their followers.

    How does SANta make sure all of the data and information is protected and available? It's a combination of best practices, techniques and technologies including hardware, software, data protection management tools, disk, dedupe, compression, tape and cloud among others.

    Rest assured, if it is in the technology buzzword bingo book, it is a good bet that it has been tested in one of SANta's facilities, or partner sites, long before you hear about it, even under a strict NDA discussion with one of his elves (oops, I mean supplier partners).

    When asked of the importance of his information and data networks, resources and cloud enabled highly virtualized efficient operations SANta responded with a simple:

    Ho Ho Ho, Merry Christmas to all, and to all, a good night!

    As you sit back and relax, reflect, recreate, recoup or recharge, or whatever it is that you do this time of the year, take a moment to think about and thank all of SANta's helpers. They are the ones that work behind the scenes in SANta's facilities as well as those of his partners or suppliers, some in the clouds, some on or under the ground, to make the world's largest single event day (excuse me, night) possible! Or, is this SANta and cloud thing all just one big fantasy?

    Happy and safe holidays or whatever you want to refer to it as, best wishes and thanks!

    BTW: FTC disclosure information can be found here!

    Greg on Break

    Me on a break during a SANta site tour

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio


    What do NAS NASA NASCAR have in common?


    Updated 2/10/2018

    The other day it dawned on me: what do NAS, NASA and NASCAR have in common?

    Several things, it turns out, in addition to all starting with the letters NAS.

    For example, they all deal with round objects. NAS, or network attached storage, involves circular spinning disk drives; NASA, or the National Aeronautics and Space Administration, besides being involved with aircraft whose tires go round and round, also has airplanes circling while waiting to land.

    In the case of NASA, they are also involved with sending craft or devices to circle other planets or moons and land on, or crash into, them. Sometimes NAS, along with other storage systems, has disk drives that crash, similar to how NASCAR events see accidents.

    Cedar Lake 3M NASCAR at dirt track - Photo (C) 2008 Karen Schulz all rights reserved

    Cedar Lake dirt track 3M NASCAR night (Photo (C) 2008 Karen Schulz)

    NASCAR is also involved with vehicles that don't, or at least should not, fly; however, they do go round and round on a track, often paved though sometimes mud or dirt. Plus, high tech abounds there too, with computers and various data models, not to mention the NASCAR air force.

    In addition to being involved with round objects and activities, all three are also involved in computing: generating, processing, storing and retrieving data for analysis, not to mention high performance requirements.

    NAS based storage can also be relied upon to serve the data and informational needs of NASA and NASCAR.

    And FWIW, just for fun, look at what you get when you spell NAS, NASA or NASCAR backwards:

    SAN
    ASAN
    RACSAN

    Where To Learn More

    View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    Not much actually other than to stimulate some thought, discussion as well as perhaps have some fun with technology during the holiday season.

    I'm sure if I put some more thought to it, more similarities would come to mind.

    However, for now, that's it for a quick thought. What similarities do you see or know about with NAS, NASA and NASCAR?

    Ok, nuf fun for now, time to work on some other posts, content and projects.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Is MAID Storage Dead? I Don't Think So!

    While some vendors are doing better than others, and first generation MAID (Massive or monolithic Array of Idle Disks) might be dead or about to be deceased, spun down or put into a long term sleep mode, it is safe to say that second generation MAID (e.g. MAID 2.0), also known as intelligent power management (IPM), is alive and doing well.

    In fact, IPM is not unique to disk storage or disk drives as it is also a technique found in current generation of processors such as those from Intel (e.g. Nehalem) and others.

    Other names for IPM include adaptive voltage scaling (AVS), adaptive voltage scaling optimized (AVSO) and adaptive power management (APM) among others.

    The basic concept is to vary the amount of power being used to the amount of work and service level needed at a point in time and on a granular basis.

    For example, first generation MAID or drive spin down, as deployed by vendors such as Copan (which is rumored to be in the process of being spun down as a company; see this blog post by a former Copan employee), was binary. That is, a disk drive was either on or off, and the granularity was the entire storage system. In the case of Copan, the granularity was that a maximum of 25% of the disks could ever be spun up at any point in time. As a point of reference, when I ask IT customers why they don't use MAID or IPM enabled technology, they commonly cite concerns about performance, or more importantly, the perception of bad performance.

    CPU chips have been taking the lead with the ability to vary voltage and clock speed, enabling or disabling electronic circuitry to align with the amount of work needing to be done at a point in time. This more granular approach allows the CPU to run at faster rates when needed, and at slower rates when possible to conserve energy (here, here and here).
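    The payoff of varying both voltage and clock speed comes from dynamic CMOS power scaling with voltage squared times frequency. A rough, illustrative model (the capacitance, voltage and frequency figures below are made up for the example, not taken from any actual processor) is:

    ```python
    # Back-of-the-envelope CMOS dynamic power model: P ≈ C * V^2 * f.
    # All values below are illustrative, not measured from real hardware.

    def dynamic_power(c_farads, volts, hertz):
        """Approximate dynamic power draw (watts) of switching logic."""
        return c_farads * volts ** 2 * hertz

    full = dynamic_power(1e-9, 1.2, 3.0e9)  # full voltage and clock
    slow = dynamic_power(1e-9, 0.9, 1.6e9)  # stepped-down state
    print(f"full: {full:.2f} W, stepped down: {slow:.2f} W")
    print(f"saved: {1 - slow / full:.0%}")
    ```

    With these made-up numbers, dropping voltage by 25% and the clock roughly in half cuts dynamic power by about 70%, which is why granular scaling beats a simple on/off approach.
    
    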

    A common example is a laptop with technology such as SpeedStep or battery stretch saving modes. Disk drives have been following this approach, varying their power usage by adjusting to different spin speeds along with enabling or disabling electronic circuitry.

    On a granular basis, second generation MAID with IPM enabled technology can operate on a LUN or volume group basis, across different RAID levels and types of disk drives, depending on the specific vendor implementation. Some vendors implementing various forms of IPM for second generation MAID include, to name a few, Adaptec, EMC, Fujitsu (Eternus), HDS (AMS), HGST (disk drives), Nexsan and Xyratex among many others.
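    The contrast with first generation binary spin down can be sketched as a per-volume policy that steps through graduated power states the longer a volume group sits idle. The state names and thresholds below are hypothetical, purely to illustrate the idea, and do not reflect any vendor's actual values:

    ```python
    # Hypothetical second-generation MAID / IPM policy sketch. Unlike first
    # generation binary spin down (whole system, drives fully on or off),
    # each volume group steps through graduated power states by idle time.

    POWER_STATES = [
        (0,    "full_speed"),      # active I/O, full RPM
        (300,  "reduced_rpm"),     # idle 5+ minutes: slower spin speed
        (1800, "heads_unloaded"),  # idle 30+ minutes: heads parked
        (7200, "spun_down"),       # idle 2+ hours: platters stopped
    ]

    def power_state(idle_seconds):
        """Return the deepest power-saving state a volume qualifies for."""
        state = POWER_STATES[0][1]
        for threshold, name in POWER_STATES:
            if idle_seconds >= threshold:
                state = name
        return state

    print(power_state(60))    # -> full_speed
    print(power_state(400))   # -> reduced_rpm
    print(power_state(9000))  # -> spun_down
    ```

    The point of the intermediate states is that a volume in, say, a reduced RPM state can return to full performance far faster than one that has been fully spun down, softening the performance concerns that dogged first generation products.
    
    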

    Something else taking place in the industry is that vendors seem to be shying away from the term MAID, as there is some stigma associated with the performance issues of some first generation products.

    This is not all that different from what took place about 15 years ago, when the first purpose built monolithic RAID arrays appeared on the market: products such as the SF2 (aka South San Francisco Forklift company) product called Failsafe (here and here), which was bought by MTI, with patents later sold to EMC.

    Failsafe, or what many at DEC referred to as Fail Some, was a large refrigerator sized device with 5.25” disk drives configured as RAID 5 with dedicated hot spare disk drives. Thus its performance was ok for the time doing random reads; however, writes, in the pre write back cache RAID 5 days, were less than spectacular.

    Failsafe and other early RAID implementations (and here) received a black eye from some due to performance, availability and other issues, until best practices and additional enhancements, such as multiple RAID levels and cache, appeared in follow on products.

    What that trip down memory (or nightmare) lane has to do with MAID, and particularly with first generation products that did their part to help establish the new technology, is that those early RAID products also gave way to second, third, fourth, fifth, sixth and beyond generations of RAID products.

    The same can be expected with MAID, as we are seeing more vendors jumping in on the second generation, also known as drive spin down, with more in the wings.

    Consequently, don't judge MAID based solely on the first generation products, which could be thought of as advanced technology proof of concept solutions that paved the way for the follow on solutions to come.

    Just like RAID has become so ubiquitous that it has been declared dead, making it another zombie technology (dead, however still being developed, produced, bought and put to use), follow on IPM enabled generations of technology will be more transparent. That is, similar to finding multiple RAID levels in most storage, look for IPM features including variable drive speeds, power settings and performance options on a go forward basis. These newer solutions may not carry the MAID name; however, the spirit and function of intelligent power management without performance compromise does live on.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio


    EMC Storage and Management Software Getting FAST

    EMC has announced the availability of the first phase of FAST (Fully Automated Storage Tiering) functionality for their Symmetrix VMAX, CLARiiON and Celerra storage systems.

    FAST was first previewed earlier this year (see here and here).

    Key themes of FAST are to leverage policies for enabling automation to support large scale environments, doing more with what you have along with enabling virtual data centers for traditional, private and public clouds as well as enhancing IT economics.

    This means enabling performance and capacity planning analysis along with facilitating load balancing or other infrastructure optimization activities to boost productivity, efficiency and resource usage effectiveness not to mention enabling Green IT.

    Is FAST revolutionary? That will depend on who you talk or listen to.

    Some vendors will jump up and down, similar to Donkey in Shrek wanting to be picked or noticed, claiming to have been the first to implement LUN or file movement inside of storage systems, or built into an operating system, file system or volume manager. Others will claim to have done it via third party information lifecycle management (ILM) software, including hierarchical storage management (HSM) tools among others. Ok, fair enough; then let their games begin (or continue), and I will leave it up to the various vendors and their followings to debate who's got what or not.

    BTW, anyone remember system managed storage on IBM mainframes, or array based movement in HP AutoRAID among others?

    Vendors have also in the past provided built-in or third party add-on tools for insight and awareness, ranging from capacity or space usage and allocation storage resource management (SRM) tools to performance advisory activity monitors or chargeback among others. For example, hot file analysis and reporting tools, often operating system specific, have been popular for identifying candidate files for placement on SSD or other fast storage. Granted, the tools provided insight and awareness; there remained the time consuming and error prone task of decision making and subsequent data movement, not to mention the associated downtime.

    What is new here with FAST is the integrated approach: tools that are operating system independent, functionality in the array, availability across different product families and price bands, all optimized for improving user and IT productivity in medium to high-end enterprise scale environments.

    One of the knocks on previous technology has been either the performance impact to an application when its data was moved, or the impact to other applications while data is being moved in the background. Another issue has been avoiding excessive thrashing due to data being moved at the expense of taking performance cycles from production applications. This would be similar to having too many snapshots, or un-optimized RAID rebuilds, running in the background on a storage system lacking sufficient performance capability. Another knock has been that historically, either third party host or appliance based software was needed, or solutions were designed and targeted for workgroup, departmental or small environments.

    What is FAST and how is it implemented?
    FAST is technology for moving data within storage systems (and externally for Celerra) for load balancing, capacity and performance optimization, to meet quality of service (QoS) performance, availability and capacity along with energy and economic initiatives (figure 1) across different tiers or types of storage devices. For example, moving data from slower SATA disks where a performance bottleneck exists to faster Fibre Channel or SSD devices. Similarly, cold or infrequently accessed data on faster, more expensive storage devices can be marked as a candidate for migration to lower cost SATA devices based on customer policies.

    EMC FAST
    Figure 1 FAST big picture Source EMC

    The premise is that policies are defined based on activity along with capacity to determine when data becomes a candidate for movement. All movement is performed in the background, concurrently, while applications are accessing data without disruption. This means that there are no stub files, application pauses, timeouts or erratic I/O activity while data is being migrated. Another aspect of FAST data movement, which is performed in the actual storage systems by their respective controllers, is the ability for EMC management tools to identify hot or active LUNs or volumes (files in the case of Celerra) as candidates for moving (figure 2).

    EMC FAST
    Figure 2 FAST what it does Source EMC

    However, users specify whether they want data moved autonomously or under supervision, enabling a deterministic environment where the storage system and associated management tools make recommendations and suggestions for administrators to approve before migration occurs. This capability can serve as a safeguard as well as a learning mode, enabling organizations to become comfortable with the technology and its recommendations while applying knowledge of current business dynamics (figure 3).

    EMC FAST
    Figure 3 The Value proposition of FAST Source EMC

    FAST is implemented as technology resident or embedded in the EMC VMAX (aka Symmetrix), CLARiiON and Celerra, along with external management software tools. In the case of the block (figure 4) storage systems that support FAST, including the DMX/VMAX and CLARiiON families of products, data movement is on a LUN or volume basis and within a single storage system. For NAS or file based Celerra storage systems, FAST is implemented using FMA technology, enabling movement either within the box or externally to other storage systems on a file basis.

    EMC FAST
    Figure 4 Example of FAST activity Source EMC

    What this means is that data at the LUN or volume level can be moved across different tiers of storage or disk drives within a CLARiiON instance, or within a VMAX instance (e.g. amongst the nodes). For example, Virtual LUNs are a building block that is leveraged for data movement and migration, combined with external management tools including Navisphere for the CLARiiON and the Symmetrix management console along with Ionix, all of which have been enhanced.
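    To illustrate the general shape of policy-driven tiering of this kind (the field names, tier labels and thresholds below are hypothetical illustrations, not EMC's actual data structures or defaults), a recommendation pass over a set of LUNs might look like:

    ```python
    # Hypothetical sketch of policy-driven tiering in the spirit of FAST:
    # rank LUNs by recent activity, then recommend promotions to a faster
    # tier or demotions to a cheaper one. All names and numbers are made up.

    luns = [
        {"name": "lun01", "tier": "SATA", "iops": 4200},  # hot, on slow disk
        {"name": "lun02", "tier": "SSD",  "iops": 15},    # cold, on fast disk
        {"name": "lun03", "tier": "FC",   "iops": 800},   # fine where it is
    ]

    HOT_IOPS, COLD_IOPS = 1000, 50  # made-up policy thresholds

    def recommendations(luns):
        """Return (lun name, action) pairs for an administrator to approve."""
        recs = []
        for lun in luns:
            if lun["iops"] >= HOT_IOPS and lun["tier"] != "SSD":
                recs.append((lun["name"], "promote to SSD"))
            elif lun["iops"] <= COLD_IOPS and lun["tier"] != "SATA":
                recs.append((lun["name"], "demote to SATA"))
        return recs

    for name, action in recommendations(luns):
        print(f"{name}: {action}")
    ```

    In supervised mode a list like this would simply be surfaced for an administrator to approve; in automated mode the array would act on it in the background.
    
    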

    Note however that initially data is not moved externally between different CLARiiON or VMAX systems. For external data movement, other existing EMC tools would be deployed. In the case of Celerra, files can be moved within a specific CLARiiON as well as externally across other storage systems. External storage systems that files can be moved across using EMC FMA technology include other Celerras, Centera and ATMOS solutions, based upon defined policies.

    What do I like most and why?

    Integration of management tools provides insight along with the ability for users to set up policies as well as approve or intercede with data movement and placement as their specific philosophies dictate. This is key: for those who want to, let the system manage itself, with your supervision of course. For those who prefer to take their time, take simple steps by using the solution initially to provide insight into hot or cold spots, and then to help make decisions on what changes to make. Use the solution and adapt it to your specific environment and philosophy. What a concept: a tool that works for you, vs. you working for it.

    What don't I like and why?

    There is and will remain some confusion about intra and inter box or system data movement and migration, operations that can be done by other EMC technology today for those who need it. For example, I have had questions asking if FAST is nothing more than EMC Invista or some other data mover appliance sitting in front of Symmetrix or CLARiiON systems, and the answer is NO. Thus EMC will need to articulate that FAST is both an umbrella term and a product feature set combining the storage system along with associated management tools unique to each of the different storage systems. In addition, there will be confusion, at least at GA, over the lack of support for Symmetrix DMX vs. the supported VMAX. Of course with EMC pricing is always a question, so let's see how this plays out in the market with customer acceptance.

    What about the others?

    Certainly some will jump up and down claiming ratification of their visions while welcoming EMC to the game, forgetting that there were others before them. However, it can also be said that EMC, like others who have had LUN and volume movement or cloning capabilities for large scale solutions, is taking the next step. Thus I would expect other vendors to continue movement in the same direction with their own unique spin and approach. For those who have in the past made automated tiering their marketing differentiation, I would suggest coming up with some new spins and stories, as those functions are about to become table stakes or common feature functionality on a go forward basis.

    When and where to use?

    In theory, anyone with a Symmetrix/VMAX, CLARiiON or Celerra that supports the new functionality should be a candidate for the capabilities, that is, at least the insight, analysis, monitoring and situational awareness capabilities. Note that this does not mean actually enabling the automated movement initially.

    While the concept is to enable automated system managed storage (Hmm, mainframe deja vu anyone?), for those who want to walk before they run, enabling the insight and awareness capabilities can provide valuable information about how resources are being used. The next step would then be to look at the recommendations of the tools, and if you concur with the recommendations, take remedial action by telling the system when the movement can occur at your desired time.
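    The walk-before-you-run workflow above can be sketched as a recommend-then-approve loop: the tool proposes moves, and nothing executes until an operator signs off and picks a window. Again this is a hypothetical sketch of the idea, not an actual FAST interface; every name here is made up for illustration.

```python
# Hypothetical recommend/approve/execute workflow. Moves only run
# if explicitly approved and given a maintenance window.
from dataclasses import dataclass

@dataclass
class MoveRecommendation:
    lun: str
    current_tier: str
    target_tier: str
    approved: bool = False
    window: str = ""  # e.g. "Sat 02:00-06:00"

def approve(rec, window):
    """Operator sign-off: mark a recommendation approved for a time window."""
    rec.approved = True
    rec.window = window
    return rec

def execute_approved(recs):
    """Return only the moves an admin explicitly signed off on."""
    return [r for r in recs if r.approved]

recs = [
    MoveRecommendation("lun01", "fc_15k", "flash_ssd"),
    MoveRecommendation("lun03", "fc_15k", "sata"),
]
approve(recs[0], "Sat 02:00-06:00")
to_run = execute_approved(recs)
print([r.lun for r in to_run])  # ['lun01']
```

    The design point is that automation and oversight are not mutually exclusive: the system proposes, you dispose, until you trust it enough to let it run on its own.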

    For those ready to run, let it rip and take off as FAST as you want. In either situation, look at FAST for providing insight and situational awareness of hot and cold storage, and where opportunities exist for optimizing and gaining efficiency in how resources are used, all important aspects of enabling a Green and Virtual Data Center, not to mention supporting public and private clouds.

    FYI, FTC Disclosure and FWIW

    I have done content related projects for EMC in the past (see here); they are not currently a client, nor have they sponsored, underwritten, influenced, remunerated, utilized third party offshore Swiss, Cayman or South American unnumbered bank accounts, or provided any other reimbursement for this post. However, I did personally sign and hand Joe Tucci a copy of my book The Green and Virtual Data Center (CRC) ;).

    Bottom line

    Do I like what EMC is doing with FAST and this approach? Yes.

    Do I think there is room for improvement and additional enhancements? Absolutely!

    What's my recommendation? Have a look, do your homework and due diligence, and see if it's applicable to your environment while asking other vendors what they will be doing (under NDA if needed).

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    SSD and Storage System Performance

    Jacob Gsoedl has a new article over at SearchStorage titled How to add solid-state storage to your enterprise data storage systems.

    In his article, which includes some commentary by me, Jacob lays out various options on where and how to deploy solid state devices (SSDs) in and with enterprise storage systems.

    While many vendors have jumped on the latest SSD bandwagon adding flash based devices to storage systems, where and how they implement the technologies varies.

    Some vendors take a simplistic approach of qualifying flash SSD devices for attachment to their storage controllers similar to how any other Fibre Channel, SAS or SATA hard disk drive (HDD) would be.

    Yet others take a more in-depth approach, including optimizing controller software, firmware or microcode to leverage flash SSD devices, along with addressing wear leveling and read and write performance among other capabilities.

    Performance is another area where, on paper, a flash SSD device might appear fast and enable a storage system to be faster.

    However, systems that are not optimized for higher throughput and/or increased IOPS with lower latency may end up placing restrictions on the number of flash SSD devices, or impose other configuration constraints. Even worse is when expected performance improvements are not realized; after all, fast controllers need fast devices, and fast devices need fast controllers.
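    A quick back-of-envelope calculation shows why fast devices behind a slow controller (or vice versa) disappoint: aggregate delivered IOPS is capped by the slower of the controller and the devices behind it. The figures below are made up purely for illustration, not measurements of any real product.

```python
# Illustrative bottleneck arithmetic: delivered IOPS is bounded by
# whichever is slower, the controller or the sum of the devices.

def effective_iops(controller_iops, device_iops, num_devices):
    """Aggregate IOPS delivered to hosts, capped by the weaker side."""
    return min(controller_iops, device_iops * num_devices)

# Eight flash SSDs at 20,000 IOPS each behind a 50,000 IOPS controller:
# the devices could do 160,000 IOPS, but the controller caps it.
print(effective_iops(50_000, 20_000, 8))  # 50000 -> controller-bound

# The same controller with eight HDDs at 200 IOPS each is device-bound:
print(effective_iops(50_000, 200, 8))     # 1600
```

    In other words, bolting SSDs onto a controller that was sized for HDD workloads shifts the bottleneck rather than removing it, which is exactly why controller software and firmware optimization matters.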

    RAM and flash based SSDs are great enabling technologies for boosting performance and productivity and enabling a green, efficient environment; however, do your homework.

    Look at how various vendors implement and support SSD particularly flash based products with enhancements to storage controllers for optimal performance.

    Likewise check out the activity of the SNIA Solid State Storage Initiative (SSSI), among other industry trade group or vendor initiatives around SSD enhancements and best practices.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio


    StorageIO debuts at 79 in Technobabble top 400 analyst list

    Following on the heels of being named one of three EcoTech warriors earlier in the year, and then number 5 in the top ten independent bloggers at StorageMonkeys (plus appearing on InfoSmack), the momentum continues, more recently being named 23rd out of the top 30 influential virtualization bloggers.

    If that were not enough, I was also surprised to learn recently that I have made a debut appearance at number 79 in the Technobabble top 400 analyst and independent blogger list as well.

    To say that I'm honored and flattered would be an understatement, and I thank all of the growing number of readers and commenters on the various blogs and Twitter tweets, along with other content at the different venues and events I'm involved with.

    Thanks to all of you; have a safe, happy holiday season along with a prosperous new year, and I look forward to future conversations and discussions.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio
