StorageIO going Dutch: Seminar for Storage and I/O professionals

Data and Storage Networking Industry Trends and Technology Seminar

Greg Schulz of StorageIO, in conjunction with our Dutch partner Brouwer Storage Consultancy, will be presenting a two day seminar for storage professionals Tuesday the 24th and Wednesday the 25th of May 2011 at Ampt van Nijkerk, Netherlands.

Brouwer Storage Consultancy and The Server and StorageIO Group

This two day interactive education seminar for storage professionals will focus on current data and storage networking trends, technology and business challenges along with available technologies and solutions. During the seminar, learn what technologies and management techniques are available, how different vendors' solutions compare, and what to use when and where. This seminar digs into the various IT tools, techniques, technologies and best practices for enabling an efficient, effective, flexible, scalable and resilient data infrastructure.

The format of this two day seminar will be a mix of presentation and interactive discussion, allowing attendees plenty of time to discuss among themselves and with the seminar presenters. Attendees will gain insight into how to compare and contrast various technologies and solutions, in addition to identifying and aligning those solutions to their specific issues, challenges and requirements.

Major themes that will be discussed include:

  • Who is doing what with various storage solutions and tools
  • Is RAID still relevant for today and tomorrow?
  • Are hard disk drives and tape finally dead at the hands of SSD and clouds?
  • What am I routinely hearing, seeing or being asked to comment on?
  • Enabling storage optimization, efficiency and effectiveness (performance and capacity)
  • What do I see as opportunities for leveraging various technologies, techniques and trends
  • Supporting virtual servers including re-architecting data protection
  • How to modernize data protection (backup/restore, BC, DR, replication, snapshots)
  • Data footprint reduction (DFR) including archive, compression and dedupe
  • Clarifying cloud confusion, don’t be scared, however look before you leap

In addition, this two day seminar will look at new and improved technologies and techniques, who is doing what, along with discussion of industry and vendor activity including mergers and acquisitions. Greg will also preview the contents and themes of his new book Cloud and Virtual Data Storage Networking (CRC) for enabling efficient, optimized and effective information services delivery across cloud, virtual and traditional environments.

Buzzwords and topic themes to be discussed among others include:
E2E, FCoE and DCB, CNAs, SAS, I/O virtualization, server and storage virtualization, public and private cloud, Dynamic Infrastructures, VDI, RAID and advanced data protection options, SSD, flash, SAN, DAS and NAS, object storage, application optimized or aware storage, open storage, scale out storage solutions, federated management, metrics and measurements, performance and capacity, data movement and migration, storage tiering, data protection modernization, SRA and SRM, data footprint reduction (archive, compress, dedupe), unified and multi-protocol storage, solution bundles and stacks.

For more information or to register contact Brouwer Storage Consultancy

Brouwer Storage Consultancy
Olevoortseweg 43
3861 MH Nijkerk
The Netherlands
Telephone: +31-33-246-6825
Cell: +31-652-601-309
Fax: +31-33-245-8956
Email: info@brouwerconsultancy.com
Web: www.brouwerconsultancy.com

Learn about other events involving Greg Schulz and StorageIO at www.storageio.com/events

Ok, nuff said for now

Cheers Gs

Greg Schulz – Author The Green and Virtual Data Center (CRC), Resilient Storage Networks (Elsevier) and coming summer 2011 Cloud and Virtual Data Storage Networking (CRC)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

More Data Footprint Reduction (DFR) Material

This is part of an ongoing series of short industry trends and perspectives (ITP) blog post briefs based on what I am seeing and hearing in my conversations with IT professionals on a global basis.

These short posts complement other longer posts along with traditional industry trends and perspective white papers, research reports, videos, podcasts, webcasts as well as solution brief content found at www.storageioblog.com/reports and www.storageio.com/articles.

If you recall from previous posts including here, here or here among others, Data Footprint Reduction (DFR) is a collection of tools, technologies and best practices for addressing growing data storage management and cost impacts.

DFR encompasses many different tools, techniques and technologies across various applications ranging from active or primary storage to secondary and inactive along with backup and archive.

Some of these techniques and technologies include archiving, backup modernization, compression, data management, dedupe, space saving snapshots and thin provisioning among others.

Following are some links to various articles and commentary pertaining to DFR:

  • Using DFR including dedupe and compression to defray storage and management costs
  • Deduplicate, compress and defray costs of data storage management
  • Virtual tape libraries: Old backup technology holdover or gateway to the future?
  • As well as here, here or here

In the spirit of DFR, that is doing more with less, nuff said (for now).

Of course let me know what your thoughts and perspectives are on this and other related topics.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

What is DFR or Data Footprint Reduction?

Updated 10/9/2018

Data Footprint Reduction (DFR) is a collection of techniques, technologies, tools and best practices that are used to address data growth management challenges. Dedupe is currently the industry darling for DFR particularly in the scope or context of backup or other repetitive data.

However, DFR has a broader scope, addressing expanding data footprints and their impact across primary, secondary and offline data that ranges from high performance to inactive high capacity.

Consequently, the focus of DFR is not just on reduction ratios; it is also about meeting time or performance rates and data protection windows.

This means DFR is about using the right tool for the task at hand to effectively meet business needs and cost objectives while meeting service requirements across all applications.
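
To make the ratio versus rate distinction concrete, here is a minimal sketch in Python (the data sizes, windows and ratios are hypothetical examples, not benchmarks) showing how a reduction ratio translates into space saved, and how a protection window translates into a required throughput rate:

```python
# Minimal sketch of DFR ratios vs. rates; example numbers are hypothetical.

def space_saved_pct(reduction_ratio: float) -> float:
    """A 10:1 reduction ratio saves 90% of space, 4:1 saves 75%, etc."""
    return (1 - 1 / reduction_ratio) * 100

def required_rate_mb_per_sec(data_tb: float, window_hours: float) -> float:
    """Effective ingest rate needed to protect data_tb within the window."""
    return (data_tb * 1_000_000) / (window_hours * 3600)

print(f"10:1 ratio saves {space_saved_pct(10):.0f}% of space")
print(f"Protecting 10TB in an 8 hour window needs "
      f"{required_rate_mb_per_sec(10, 8):.0f} MBytes/sec effective")
```

A high ratio that cannot sustain the needed rate still misses the protection window, which is the point of aligning the right tool to the task.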

Examples of DFR technologies include Archiving, Compression, Dedupe, Data Management and Thin Provisioning among others.

Read more about DFR in Part I and Part II of a two part series found here and here.

Where to learn more

Learn more about data footprint reduction (DFR), data footprint overhead and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.

What this all means

That is all for now; hope you find this ongoing series of current and emerging Industry Trends and Perspectives posts of interest.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

March Metric Madness: Fun with Simple Math

It's March and besides being spring in North America, it is also tournament season, including the NCAA basketball series among others known as March Madness.

Given the office pools and other forms of playing with numbers tied to the tournaments and real or virtual money, here is a quick timeout looking at some fun with math.

The fun is showing how simple math can be used to show relative growth for IT resources such as data storage. For example, say that you have 10Tbytes of storage or data and that it is growing at only 10 percent per year; in five years (counting year one as the current amount, so four annual compounding periods), simple math yields 14.6Tbytes.

Now let's assume the growth rate is 50 percent per year; in the course of five years, instead of 14.6Tbytes, those 10Tbytes jump to 50.6Tbytes. If you have 100Tbytes today, a 50 percent growth rate would yield 506.3Tbytes, or about half a petabyte, in 5 years. If by chance you have say 1Pbyte or 1,000Tbytes today, at 25% year over year growth you would have 2.44Pbytes in 5 years.
Figure 1: Fun with simple math and projected growth rates (basic storage forecast)
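
As a minimal sketch, the projections above can be reproduced with a few lines of Python; note the assumption (matching the examples) that year one is the current amount, so five years means four annual compounding periods:

```python
# Minimal sketch of the simple growth math; year 1 is the current amount,
# so a five year projection compounds four times (10TB at 10% -> ~14.6TB).

def projected_tb(base_tb: float, annual_growth: float, years: int = 5) -> float:
    return base_tb * (1 + annual_growth) ** (years - 1)

for base, rate in [(10, 0.10), (10, 0.50), (100, 0.50), (1000, 0.25)]:
    print(f"{base}TB at {rate:.0%}/yr -> {projected_tb(base, rate):,.1f}TB in 5 years")
```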

Granted, this is simple math showing basic examples; however, the point is that depending on your growth rate and the amount of current data or storage, you might be surprised at the forecast or projected needs in only five years.

In a nutshell, these are examples of very basic, primitive capacity forecasts that would vary by other factors. For example, if the data is 10Tbytes and your policy calls for 25 percent free space, that would require even more storage than the base amount. Go with a different RAID level, add some extra space for replication, snapshots and disk to disk backups, not to mention test and development, and those numbers go up even higher, as the sketch below shows.
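
To illustrate (a sketch; the 25 percent free space and protection overhead figures below are just examples, not recommendations), folding those policies into the forecast looks like this:

```python
# Sketch: raw capacity needed once a free space policy and protection
# overhead (RAID, snapshots, replicas) are factored in; figures are examples.

def raw_needed_tb(data_tb: float, free_space: float = 0.25,
                  protection_overhead: float = 0.25) -> float:
    usable = data_tb / (1 - free_space)        # honor the free space policy
    return usable * (1 + protection_overhead)  # add protection overhead

print(f"10TB of data -> {raw_needed_tb(10):.1f}TB raw")  # about 16.7TB
```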

Sure those amounts can be offset with thin provisioning, dedupe, archiving, compression and other forms of data footprint reduction, however the point here is to realize how simple math can portray a very basic forecast and picture of growth.

Read more about performance and capacity in Chapter 10 – Performance and capacity planning for storage networks – Resilient Storage Networks (Elsevier) as well as at www.cmg.org (Computer Measurement Group).

And that is all I have to say about this for now, enjoy March madness and fun with numbers.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

The other Green Storage: Efficiency and Optimization

Some believe that green storage is specifically designed to reduce power and cooling costs.

The reality is that there are many ways to reduce environmental impact while enhancing the economics of data storage besides simply boosting utilization.

These include optimizing data storage capacity as well as boosting performance to increase productivity per watt of energy used when work needs to be done.

Some approaches require new hardware or software while others can be accomplished with changes to management including reconfiguration leveraging insight and awareness of resource needs.

Here are some related links:

The Other Green: Storage Efficiency and Optimization (Videocast)

Energy efficient technology sales depend on the pitch

Performance metrics: Evaluating your data storage efficiency

How to reduce your Data Footprint impact (Podcast)

Optimizing enterprise data storage capacity and performance to reduce your data footprint

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Optimize Data Storage for Performance and Capacity Efficiency

This post builds on a recent article I did that can be read here.

Even with tough economic times, there is no such thing as a data recession! Thus the importance of optimizing data storage efficiency, addressing both performance and capacity without impacting availability, in a cost effective way to do more with what you have.

What this means is that even though budgets are tight or have been cut resulting in reduced spending, overall net storage capacity is up year over year by double digits if not higher in some environments.

Consequently, there is continued focus on stretching available IT and storage related resources or footprints further while eliminating barriers or constraints. IT footprint constraints can be physical in a cabinet or rack as well as floorspace, power or cooling thresholds and budget among others.

Constraints can be due to lack of performance (bandwidth, IOPS or transactions), poor response time or lack of availability for some environments. Yet for other environments, constraints can be lack of capacity, limited primary or standby power or cooling constraints. Other constraints include budget, staffing or lack of infrastructure resource management (IRM) tools and time for routine tasks.

Look before you leap
Before jumping into an optimization effort, gain insight if you do not already have it as to where the bottlenecks exist, along with the cause and effect of moving or reconfiguring storage resources. For example, boosting capacity use to more fully use storage resources can result in a performance issue or data center bottlenecks for other environments.

An alternative scenario is that in the quest to boost performance, storage is seen as being under-utilized, yet when capacity use is increased, lo and behold, response time deteriorates. The result can be a vicious cycle, hence the need to address the issue, as opposed to moving problems, by using tools to gain insight on resource usage, both space and activity or performance.

Gaining insight means looking at capacity use along with performance and availability activity and how they use power, cooling and floor-space. Consequently an important step is to gain insight and knowledge of how your resources are being used to deliver various levels of service.

Tools include storage or system resource management (SRM) tools that report on storage space capacity usage, performance and availability with some tools now adding energy usage metrics along with storage or system resource analysis (SRA) tools.

Cooling Off
Power and cooling are commonly talked about as constraints, either from a cost standpoint, or availability of primary or secondary (e.g. standby) energy and cooling capacity to support growth. Electricity is essential for powering IT equipment including storage enabling devices to do their specific tasks of storing data, moving data, processing data or a combination of these attributes.

Thus, power gets consumed, some work or effort to move and store data takes place, and the byproduct is heat that needs to be removed. In a typical IT data center, cooling on average can account for about 50% of energy used, with some sites using less.

With cooling being a large consumer of electricity, a small percentage change to how cooling consumes energy can yield large results. Addressing cooling energy consumption can be to discuss budget or cost issues, or to enable cooling capacity to be freed up to support installation of extra storage or other IT equipment.
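
As a quick worked example (the site load and improvement figures are assumptions for illustration): if cooling is half of a site's draw, even a modest cooling gain frees meaningful power.

```python
# Sketch: why a small cooling improvement yields large results.
site_kw = 100.0        # assumed total site draw
cooling_share = 0.50   # cooling at roughly 50% of energy used, per above
improvement = 0.10     # hypothetical 10% cooling efficiency gain

freed_kw = site_kw * cooling_share * improvement
print(f"{freed_kw:.1f} kW freed for storage or other IT equipment")  # 5.0 kW
```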

Keep in mind that effective cooling relies on removing heat from as close to the source as possible to avoid over cooling, which requires more energy. If you have not done so, have a facilities review or assessment performed that can range from a quick walk around, to a more in-depth review and thermal airflow analysis. Among the means of removing heat close to the source are techniques such as intelligent, precision or smart cooling, also known by other marketing names.

Powering Up, or, Powering Down
Speaking of energy or power, in addition to addressing cooling, there are a couple of ways of addressing power consumption by storage equipment (Figure 1). The most commonly discussed approach towards efficiency is energy avoidance, involving powering down storage when not used, such as first generation MAID, at the cost of performance.

For off-line storage, tape and other removable media give low-cost capacity per watt with low to no energy needed when not in use. Second generation (e.g. MAID 2.0) solutions with intelligent power management (IPM) capabilities have become more prevalent enabling performance or energy savings on a more granular or selective basis often as a standard feature in common storage systems.

Figure 1: Balancing green storage options (energy avoidance versus energy efficiency)

Another approach to energy efficiency, seen in Figure 1, is doing more work for active applications per watt of energy to boost productivity. This can be done by using the same amount of energy while doing more work, or doing the same amount of work with less energy.

For example, instead of using larger capacity disks to improve capacity per watt metrics, active or performance sensitive storage should be looked at on an activity basis such as IOPS, transactions, videos, emails or throughput per watt. Hence, a fast disk drive doing work can be more energy-efficient in terms of productivity than a higher capacity slower disk drive for active workloads, while for idle or inactive data, the inverse should hold true.

On a go forward basis, the trend already being seen with some servers and storage systems is to do more work while using less energy. Thus a larger gap between useful work (for active or non idle storage) and amount of energy consumed yields a better efficiency rating, or take the inverse if that is your preference for smaller numbers.
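
Here is a minimal sketch of the two metrics side by side (the drive figures are hypothetical, not from any vendor data sheet):

```python
# Sketch: activity per watt vs. capacity per watt; figures are hypothetical.
drives = [
    ("15K RPM 146GB", 180, 146, 15.0),   # (name, IOPS, capacity GB, watts)
    ("7.2K RPM 2TB",   80, 2000, 11.0),
]

for name, iops, cap_gb, watts in drives:
    print(f"{name}: {iops / watts:.1f} IOPS/watt, {cap_gb / watts:.0f} GB/watt")
```

The fast drive wins on IOPS per watt for active workloads, while the high capacity drive wins on GB per watt for idle or inactive data, which is the point of picking the metric that matches the workload.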

Reducing Data Footprint Impact
Data footprint impact reduction tools or techniques for both on-line as well as off-line storage include archiving, data management, compression, deduplication, space-saving snapshots, thin provisioning along with different RAID levels among other approaches. From a storage access standpoint, you can also include bandwidth optimization, data replication optimization, protocol optimizers along with other network technologies including WAFS/WAAS/WADM to help improve efficiency of data movement or access.

Thin provisioning for capacity centric environments can be used to achieve a higher effective storage use level by essentially overbooking storage, similar to how airlines oversell seats on a flight. If you have good historical information and insight into how storage capacity is used and over allocated, thin provisioning enables improved effective storage use for some applications.

However, with thin provisioning, avoid introducing performance bottlenecks by leveraging solutions that work closely with tools that provide historical trending information (capacity and performance).
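
A minimal sketch of the overbooking math (the allocation and written amounts are hypothetical):

```python
# Sketch: thin provisioning overbooks capacity like airline seats.
physical_tb = 10.0
allocated_tb = [4.0, 6.0, 5.0]  # hypothetical per-application allocations
written_tb = [1.0, 2.0, 1.5]    # what each application has actually written

print(f"Overcommit: {sum(allocated_tb) / physical_tb:.1f}:1, "
      f"physical used: {sum(written_tb) / physical_tb:.0%}")
# Watch trending so writes never quietly approach the physical limit.
```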

For a technology that some have tried to declare dead to prop up other new or emerging solutions, RAID remains relevant given its widespread deployment and the transparent reliance upon it in organizations of all sizes. RAID also plays a role in storage performance, availability, capacity and energy constraints, as well as serving as a relief tool.

The trick is to align the applicable RAID configuration to the task at hand, meeting specific performance, availability, capacity or energy along with economic requirements. For some environments a one size fits all approach may be used, while others may configure storage using different RAID levels along with the number of drives in RAID sets to meet specific requirements.


Figure 2:  How various RAID levels and configuration impact or benefit footprint constraints

Figure 2 shows a summary and tradeoffs of various RAID levels. In addition to the RAID level, the number of disks can also have an impact on performance or capacity; for example, by creating a larger RAID 5 or RAID 6 group, the parity overhead can be spread out, however there is a tradeoff. Tradeoffs can be performance bottlenecks on writes or during drive rebuilds along with potential exposure to drive failures.

All of this comes back to a balancing act to align to your specific needs as some will go with a RAID 10 stripe and mirror to avoid risks, even going so far as to do triple mirroring along with replication. On the other hand, some will go with RAID 5 or RAID 6 to meet cost or availability requirements, or, some I have talked with even run RAID 0 for data and applications that need the raw speed, yet can be restored rapidly from some other medium.
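
To put rough numbers on the parity spreading tradeoff (a sketch using nominal drive counts, ignoring hot spares and vendor specifics):

```python
# Sketch: usable fraction of a single RAID group as it grows.
def usable_fraction(drives: int, parity: int) -> float:
    return (drives - parity) / drives

for n in (4, 8, 16):
    print(f"{n} drives: RAID 5 {usable_fraction(n, 1):.0%} usable, "
          f"RAID 6 {usable_fraction(n, 2):.0%} usable")
```

Larger groups amortize parity better, however rebuilds take longer and expose more drives to a second failure, which is the tradeoff noted above.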

Let's bring it all together with an example
Figure 3 shows a generic example of a before and after optimization for a mixed workload environment, granted you can increase or decrease the applicable capacity and performance to meet your specific needs. In figure 3, the storage configuration consists of one storage system setup for high performance (left) and another for high-capacity secondary (right), disk to disk backup and other near-line needs, again, you can scale the approach up or down to your specific need.

For the performance side (left), 192 x 146GB 15K RPM disks (28TB raw) provide good performance, however with low capacity use. This translates into a low capacity per watt value, however with reasonable IOPS per watt, and some performance hot spots.

On the capacity centric side (right), there are 192 x 1TB disks (192TB raw) with good space utilization, however some performance hot spots or bottlenecks and constrained growth, not to mention low IOPS per watt with reasonable capacity per watt. In the before scenario, the joint energy use (both arrays) is about 15kW (15,000 watts), which translates to about $16,000 in annual energy costs (cooling excluded) assuming an energy cost of 12 cents per kWh.
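
The $16,000 figure follows from simple math (a sketch; your utility rate and duty cycle will vary):

```python
# Sketch: annual energy cost of the before scenario.
draw_kw = 15.0        # combined draw of both arrays, per the example
rate_per_kwh = 0.12   # assumed 12 cents per kWh
annual_kwh = draw_kw * 24 * 365

print(f"${annual_kwh * rate_per_kwh:,.0f} per year")  # about $15,768
```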

Note, your specific performance, availability, capacity and energy mileage will vary based on particular vendor solution, configuration along with your application characteristics.


Figure 3: Baseline before and after storage optimization (raw hardware) example

Building on the example in figure 3, a combination of techniques along with technologies yields a net performance, capacity and perhaps feature functionality (depends on specific solution) increase. In addition, floor-space, power, cooling and associated footprints are also reduced. For example, the resulting solution shown (middle) comprises 4 x 250GB flash SSD devices, along with 32 x 450GB 15.5K RPM and 124 x 2TB 7200RPM disks, enabling a 53TB (raw) capacity increase along with a performance boost.

The previous examples are based on raw or baseline capacity metrics, meaning that further optimization techniques should yield improved benefits. These examples should also help to address the question or myth that it costs more to power storage than to buy it, to which the answer should be: it depends.

If you can buy the above solution for, say, under $50,000 (its cost to power), let alone $100,000 (power and cooling), over three years, which would also be a good acquisition, then the notion that powering is more expensive than buying holds true. However, if a solution as described above costs more, then the story changes, along with other variables including energy costs for your particular location, reinforcing the notion that your mileage will vary.

Another tip is that more is not always better.

That is, more disks, ports, processors, controllers or cache do not always equate to better performance. Performance is the sum of how those and other pieces work together in a demonstrable way, ideally with your specific application workload as opposed to what is on a product data sheet.

Additional general tips include:

  • Align the applicable tool, technique or technology to task at hand
  • Look to optimize for both performance and capacity, active and idle storage
  • Consolidated applications and servers need fast servers
  • Fast servers need fast I/O and storage devices to avoid bottlenecks
  • For active storage use an activity per watt metric such as IOPS or transactions per watt
  • For in-active or idle storage, a capacity per watt per footprint metric would apply
  • Gain insight and control of how storage resources are used to meet service requirements

It should go without saying, however sometimes what is understood needs to be restated.

In the quest to become more efficient and optimized, avoid introducing performance, quality of service or availability issues by moving problems.

Likewise, look beyond storage space capacity also considering performance as applicable to become efficient.

Finally, it is all relative in that what might be applicable to one environment or application need may not apply to another.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

March and Mileage Mania Wrap-up

Today's flight to Santa Ana (SNA) Orange County California for an 18 hour visit marks my 3rd trip to the left coast in the past four weeks, a stretch that started out with a trip to Los Angeles. The purpose of today's trip is to deliver a talk around Business Continuance (BC) and Disaster Recovery (DR) topics for virtual server and storage environments along with related data transformation topics and themes, part of a series of on-going events.

Planned flight path from MSP to SNA (note the upper midwest snow storms), courtesy of Northwest Airlines, now part of Delta

This is a short trip to southern California in that I have to be back in Minneapolis for a Wednesday afternoon meeting followed by keynoting at an IT Infrastructure Optimization Seminar in downtown Minneapolis Thursday morning. Right after the Thursday morning session, it's off to the other coast for some Friday morning and early afternoon sessions in the Boston area, the results of which I hope to be able to share with you in a not so distant future posting.

Where has March gone? It's been a busy and fun month out on the road with in-person seminars, vendor and user group events in Minneapolis, Los Angeles, Las Vegas, Milwaukee, Atlanta, St. Louis, Birmingham, Minneapolis for the CMG user group, Cincinnati and Orange County, not to mention some other meetings and consulting engagements elsewhere, including participating in a couple of webcasts and virtual conference/seminars while on the road. Coverage and discussion around my new book "The Green and Virtual Data Center" (CRC) continues to expand; read here to see what's being said.

What has made the month fun in addition to traveling around the country is the interaction with the hundreds of IT professionals from organizations of all size hearing what they are encountering, what their challenges are, what they are thinking, and in general what’s on their mind.

Some of the common themes include:

  • There’s no such thing as a data recession, however the result is doing more with less, or, with what you have
  • Confusion abounds around green hype including carbon footprints vs. core IT and business issues
  • There is life beyond consolidation for server and storage virtualization to enable business agility
  • Security and encryption remain popular topics as does heterogeneous and affordable key management
  • End to end IT resource management for virtual environments is needed that is scalable and affordable
  • Performance and quality of service can not be sacrificed in the quest to drive up storage utilization
  • Clouds, SSD (FLASH), Dedupe, FCoE and Thin Provisioning among others are on the watch list
  • Tape continues to be used complementing disks in tiered storage environments along with VTLs
  • Dedupe continues to be deployed and we are just seeing the very tip of the iceberg of opportunity
  • Software licensing cost savings or reallocation should be a next step focus for virtual environments
  • Now, for a bit of irony and humor, overheard was a server sales person talking to a storage sales person comparing notes on how they are missing their forecasts as their customers are buying fewer servers and storage now that they are consolidating with virtualization, or using disk dedupe to eliminate disk drives. Doh!!!

Now if those sales people can get their marketing folks to get them the playbook for virtualization for business agility, improving performance and enabling business growth in an optimized, transformed environment, they might be able to talk a different story with their customers for new opportunities…

What's on deck for April? More of the same, however also watch and listen for some additional web based content including interviews, quotes and perspectives on industry happenings, articles, tips and columns, reports, blogs, videos, podcasts, webcasts and twitter activity, as well as appearances at events in Boston, Chicago, New Jersey and Providence among other venues.

To all of those who came out to the various events in March, thank you very much, and I look forward to future follow-up conversations as well as seeing you at some of the upcoming events.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

HP Storage Virtualization Services Platform (SVSP)

Storage I/O trends

HP recently announced their new SAN Virtualization Services Platform (SVSP), which is an appliance with software (oh, excuse me, I mean platform) for enabling various (e.g. replication, snapshots, pooling, consolidation, migration, etc.) storage virtualization capabilities across different HP (e.g. MSA, EVA and in "theory" XP) or, in "theory" as well, 3rd party (e.g. EMC, Dell, HDS, IBM, NetApp, Sun, etc.) storage.

Sure, HP has had a similar capability via their XP series, which HP OEMs from Hitachi Ltd. (who also supplies the similar/same product to HDS, with whom HP competes). However, what's different between the XP based solution and the SVSP is that one (SVSP) is software running on an appliance while the other is implemented via software/firmware on dedicated Hitachi based hardware (e.g. the XP). One requires an investment in the XP, which for larger organizations may be practical, while the other enables smaller organizations to achieve the benefits of virtualization capabilities to enable efficient IT, not to mention help transition from different generations of HP MSAs and EVAs to newer versions of MSAs and EVAs or even to XPs. Other benefits of solutions like the HP SVSP, a category that also includes the IBM SAN Volume Controller (SVC), include cross storage system or cross storage vendor based replication, snapshots, and dynamic (e.g. thin) provisioning among other capabilities for block based storage access.

While there will be comparisons of the HP SVSP to the XP, those in many ways will be apples to oranges; the more applicable apples to apples comparison would be IBM SVC to HP SVSP, or perhaps HP SVSP to EMC Invista, Fujitsu VS900, Incipient, FalconStor or DataCore based solutions.

With the HP SVSP announcement, I'm suspecting that we will see the re-emergence of the storage virtualization in-band vs. out-of-band debate, including fast-path control-path (aka split path) approaches being adopted by HP with the SVSP, not to mention hardware vs. software and appliance based approaches, as was the case a few years ago.

This time around, as the storage virtualization discussions heat up again, we should see and hear the usual points and counter points and continued talk around consolidation and driving up utilization to save money and avoid costs. However, as part of enabling and transforming into an efficient IT organization (e.g. a "Green and Virtual Data Center") that embodies efficiency and productivity in an economical and environmentally friendly manner, virtualization discussions will also re-focus on using management transparency to enable data movement or migration for load-balancing, maintenance, upgrades and technology replacement, BC/DR and other common functions, enabling more work to be done in the same or less amount of time while supporting more data and storage processing and retention needs.

Thus, similar to servers, where not all servers have been, will be or can be consolidated, however most can be virtualized for management transparency for BC/DR and migration, the same holds true for storage. That is, not all storage can be consolidated for different quality of service reasons, however most storage can be virtualized to assist with and facilitate common management functions.

Here are some additional resources to learn more about the many faces of storage virtualization and related topics and trends:

Storage Virtualization: Myths, Realities and Other Considerations
Storage virtualization: How to deploy it
The Semantics of Storage Virtualization
Storage Virtualization: It's More Common Than You Think
Choosing a storage virtualization approach
Switch-level storage virtualization: Special report
Resilient Storage Networks (Elsevier)
The Green and Virtual Data Center (Auerbach)

Cheers – gs

Greg Schulz – Author The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier)
twitter @storageio