Industry Trends and Perspectives: RAID Rebuild Rates

This is part of an ongoing series of short industry trends and perspectives blog post briefs.

These short posts complement longer posts along with traditional industry trends and perspective white papers, research reports and solution brief content found at www.storageioblog.com/reports.

There is continued concern about how long large capacity disk drives take to rebuild in RAID sets, particularly as the shift from 1TB to 2TB drives occurs. It should not be a surprise that a disk with more capacity takes longer to rebuild or copy; likewise, with more drives in a set, the statistical likelihood of one failing increases.
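As a rough back-of-the-envelope illustration, rebuild time scales linearly with capacity for a given sustained rebuild rate. The 50MB/s rate below is an assumption for illustration only; real systems throttle rebuilds under production I/O load, so actual times vary widely by vendor and configuration.

```python
# Rough estimate of RAID rebuild time vs. drive capacity.
# The sustained rate is a hypothetical figure; real rebuilds compete
# with production I/O and are often slower.

def rebuild_hours(capacity_gb: float, rate_mb_per_sec: float) -> float:
    """Hours to read/reconstruct an entire drive at a sustained rate."""
    seconds = (capacity_gb * 1024) / rate_mb_per_sec
    return seconds / 3600

for capacity in (1000, 2000):  # 1TB and 2TB drives
    print(f"{capacity}GB at 50MB/s: {rebuild_hours(capacity, 50):.1f} hours")
```

Doubling capacity doubles the rebuild window at the same rate, which is why rebuild algorithm and firmware improvements matter as drives grow.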

Not to diminish the issue, however also to avoid saying the sky is falling: we have been here before! In the late 90s and early 2000s there was a similar concern with the then large 9GB and 18GB drives, let alone the emerging 36GB and 72GB drives. Since then there have been improvements in RAID and rebuild algorithms along with other storage system software or firmware enhancements, not to mention boosts in processor and IO bus performance.

However, not all storage systems are equal even if they use the same underlying processors, IO buses, adapters or disk drives. Some vendors have made significant improvements in their rebuild times, where each generation of software or firmware can reconstruct a failed drive faster. Yet for others, each subsequent iteration of larger capacity disk drives brings increased rebuild times.

If disk drive rebuild times are a concern, ask your vendor or solution provider what they are doing, as well as what they have done over the past several years, to boost performance. Look for signs of continued improvement in rebuild and reconstruction performance as well as a decrease in error rates or false drive rebuilds.

Related and companion material:
Blog: RAID data protection remains relevant
Blog: Optimize Data Storage for Performance and Capacity Efficiency

That is all for now, hope you find this ongoing series of current and emerging Industry Trends and Perspectives interesting.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

EMC VPLEX: Virtual Storage Redefined or Respun?

In a flurry of announcements coinciding with EMCworld, occurring in Boston this week of May 10 2010, EMC officially unveiled its Virtual Storage vision initiative (aka twitter hashtag #emcvs) and the initial VPLEX product. The Virtual Storage initiative was virtually previewed back in March (see my previous post here along with one from Stu Miniman (twitter @stu) of EMC here or here) and according to EMC the VPLEX product was made generally available (GA) back in April.

The Virtual Storage vision and associated announcements consisted of:

  • Virtual Storage vision – Big picture initiative view of what and how to enable private clouds
  • VPLEX architecture – Big picture view of federated data storage management and access
  • First VPLEX based product – Local and campus (Metro to about 100km) solutions
  • Glimpses of how the architecture will evolve with future products and enhancements


Figure 1: EMC Virtual Storage and Virtual Server Vision and Big Pictures

The Big Picture
The EMC Virtual Storage vision (Figure 1) is the foundation of a private IT cloud whose characteristics should include transparency, agility, flexibility, efficiency, always-on availability, resiliency, security, on-demand access and scalability. Think of it this way: EMC wants to enable and facilitate for storage what is being done by server virtualization hypervisor vendors including VMware (which happens to be owned by EMC), Microsoft HyperV and Citrix/Xen among others. That is, break down the physical barriers or constraints around storage, similar to how virtual servers release applications and their operating systems from being tied to a physical server.

While the current wave of desktop, server and storage virtualization has focused on consolidation and cost avoidance, the next big wave or phase is life beyond consolidation, where the emphasis expands to agility, flexibility, ease of use, transparency, and portability (Figure 2). In this next phase, which puts an emphasis on enablement and doing more with what you have while enhancing business agility, the focus extends from how much can be consolidated, or the number of virtual machines per physical machine, to using virtualization for flexibility and transparency (read more here and here or watch here).


Figure 2: Virtual Storage Big Picture

That same trend will be happening with storage where the emphasis also expands from how much data can be squeezed or consolidated onto a given device to that of enabling flexibility and agility for load balancing, BC/DR, technology upgrades, maintenance and other routine Infrastructure Resource Management (IRM) tasks.

For EMC, achieving this vision (both directly for storage, and indirectly for servers via their VMware subsidiary) is via local and distributed (metro and wide area) federation management of physical resources to support virtual data center operations. EMC building blocks for delivering this vision include VPLEX, data and storage management federation across EMC and third party products, FAST (fully automated storage tiering), SSD, data footprint reduction and data protection management products among others.

Buzzword bingo aside (e.g. LAN, SAN, MAN, WAN, Pots and Pans) along with Automation, DWDM, Asynchronous, BC, BE or Back End, Cache coherency, Cache consistency, Chargeback, Cluster, db loss, DCB, Director, Distributed, DLM or Distributed Lock Management, DR, FCoE or Fibre Channel over Ethernet, FE or Front End, Federated, FAST, Fibre Channel, Grid, HyperV, Hypervisor, IRM or Infrastructure Resource Management, I/O redirection, I/O shipping, Latency, Look aside, Metadata, Metrics, Public/Private Cloud, Read ahead, Replication, SAS, Shipping off to Boston, SRA, SRM, SSD, Stale Reads, Storage virtualization, Synchronization, Synchronous, Tiering, Virtual storage, VMware and Write through among many other possible candidates, the big picture here is about enabling flexibility, agility, ease of deployment and management along with boosting resource usage effectiveness and presumably productivity on a local, metro and future global basis.


Figure 3: EMC Storage Federation and Enabling Technology Big Picture

The VPLEX Big Picture
Some of the tenets of the VPLEX architecture (Figure 3) include a scale out cluster or grid design for local and distributed (metro and wide area) access where you can start small and evolve as needed in a predictable and deterministic manner.


Figure 4: Generic Virtual Storage (Local SAN and MAN/WAN) and where VPLEX fits

The VPLEX architecture is targeted towards enabling next generation data centers including private clouds where ease and transparency of data movement, access and agility are essential. VPLEX sits atop existing EMC and third party storage as a virtualization layer between physical or virtual servers and, in theory, other storage systems that rely on underlying block storage. For example, in theory, a NAS (NFS, CIFS, and AFS) gateway, CAS content archiving or object based storage system, or purpose specific database machine could sit between actual application servers and VPLEX, enabling multiple layers of flexibility and agility for larger environments.

At the heart of the architecture is an engine running a highly distributed data caching algorithm that uses an approach where a minimal amount of data is sent to other nodes or members in the VPLEX environment to reduce overhead and latency (in theory boosting performance). For data consistency and integrity, a distributed cache coherency model is employed to protect against stale reads and writes along with load balancing, resource sharing and failover for high availability. A VPLEX environment consists of a federated management view across multiple VPLEX clusters including the ability to create a stretch volume that is accessible across multiple VPLEX clusters (Figure 5).


Figure 5: EMC VPLEX Big Picture


Figure 6: EMC VPLEX Local with 1 to 4 Engines

Each VPLEX local cluster (Figure 6) is made up of 1 to 4 engines (Figure 7) per rack, with each engine consisting of two directors, each having 64GByte of cache, local Intel processors, and 16 Front End (FE) and 16 Back End (BE) Fibre Channel ports in a high availability (HA) configuration. Communication between the directors and engines is Fibre Channel based. Metadata is moved between the directors and engines in 4K blocks to maintain consistency and coherency. Components are fully redundant and include phone home support.


Figure 7: EMC VPLEX Engine with redundant directors

Host servers initially supported by VPLEX include VMware, Cisco UCS, Windows, Solaris, IBM AIX, HPUX and Linux, along with EMC PowerPath and Windows multipath management drivers. Local server clusters supported include Symantec VCS, Microsoft MSCS and Oracle RAC along with various volume managers. SAN fabric connectivity supported includes Brocade and Cisco as well as legacy McData based products.

VPLEX also supports cache (Figure 8) write thru to preserve underlying array based functionality and performance, with 8,000 total virtualized LUNs per system. Note that underlying LUNs can be aggregated or simply passed through the VPLEX. Storage that attaches to the BE Fibre Channel ports includes EMC Symmetrix VMAX and DMX along with CLARiiON CX and CX4. Third party storage supported includes HDS9000 and USPV/VM along with IBM DS8000 and others to be added as they are certified. In theory, given that VPLEX presents block based storage to hosts, one would also expect NAS, CAS or other object based gateways and servers that rely on underlying block storage to be supported in the future.


Figure 8: VPLEX Architecture and Distributed Cache Overview

Functionality that can be performed between the cluster nodes and engines with VPLEX includes data migration and workload movement across different physical storage systems or sites, along with shared access with read caching on a local and distributed basis. LUNs can also be pooled across different vendors' underlying storage solutions, which retain their native feature functionality via VPLEX write thru caching.

Reads from various servers can be resolved by any node or engine that checks its cache tables (Figure 8) to determine where to resolve the actual I/O operation from. Data integrity checks are also maintained to prevent stale reads or write operations from occurring. Actual metadata communication between nodes is very small, enabling statefulness while reducing overhead and maximizing performance. When a change to cached data occurs, meta information is sent to other nodes to maintain the distributed cache management index schema. Note that only pointers to where data and fresh cache entries reside are stored and communicated in the metadata via the distributed caching algorithm.
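The directory-style idea described above can be sketched in a few lines: nodes exchange only small metadata (pointers to where the fresh copy lives) rather than shipping the data itself, and a write invalidates stale peer copies. This is a toy illustration with hypothetical names, not EMC's actual implementation.

```python
# Toy sketch of directory-style distributed caching: only pointers
# (metadata) travel between nodes on a write; data moves on demand.
# Illustrative assumption only, not VPLEX's real algorithm.

class Node:
    def __init__(self, name, directory, peers):
        self.name = name
        self.cache = {}             # block_id -> locally cached data
        self.directory = directory  # shared metadata: block_id -> owning node
        self.peers = peers          # all nodes in the cluster

    def write(self, block_id, data):
        self.cache[block_id] = data
        self.directory[block_id] = self  # pointer update only, not the data
        for node in self.peers:          # stale-read protection: drop old copies
            if node is not self:
                node.cache.pop(block_id, None)

    def read(self, block_id):
        if block_id in self.cache:               # local cache hit
            return self.cache[block_id]
        owner = self.directory.get(block_id)
        if owner is not None:                    # remote hit: fetch from owner
            data = owner.cache[block_id]
            self.cache[block_id] = data          # cache locally for next time
            return data
        return None                              # miss: would go to back-end storage

directory, peers = {}, []
a, b = Node("A", directory, peers), Node("B", directory, peers)
peers.extend([a, b])

a.write("lun0:blk42", "v1")
print(b.read("lun0:blk42"))  # "v1", resolved via the shared directory
a.write("lun0:blk42", "v2")  # invalidates B's copy via metadata only
print(b.read("lun0:blk42"))  # "v2", no stale read
```

The design point being illustrated: coherency traffic is proportional to the number of pointer updates, not the amount of cached data, which is why such an approach can keep inter-node overhead and latency low.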


Figure 9: EMC VPLEX Metro Today

For metro deployments, two clusters (Figure 9) are utilized, with distances supported up to about 100km or about 5ms of latency in a synchronous manner, utilizing long distance Fibre Channel optics and transceivers including Dense Wave Division Multiplexing (DWDM) technologies (see Chapter 6: Metropolitan and Wide Area Storage Networking in Resilient Storage Networks (Elsevier) for additional details on LAN, MAN and WAN topics).
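A quick sanity check on the distance and latency figures: light in fiber propagates at roughly two-thirds of its vacuum speed, about 200 km per millisecond, so pure propagation delay is only part of a synchronous latency budget; the remainder goes to protocol round trips and equipment overhead. The numbers below are approximations for illustration.

```python
# Propagation delay sanity check for metro-distance synchronous access.
# Light in fiber travels at roughly 2/3 of c, i.e. ~200 km per millisecond;
# the rest of a latency budget covers protocol and equipment overhead.

SPEED_IN_FIBER_KM_PER_MS = 200.0  # approximate

def round_trip_ms(distance_km: float) -> float:
    """One full round trip of pure propagation delay, no equipment overhead."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

print(round_trip_ms(100))  # ~1 ms of propagation within a ~5 ms synchronous budget
```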

Initially EMC is supporting local or metro (including campus) based VPLEX deployments requiring synchronous communications; however, asynchronous (WAN) Geo and Global based solutions are planned for the future (Figure 10).


Figure 10: EMC VPLEX Future Wide Area and Global

Online Workload Migration across Systems and Sites
Online workload or data movement and migration across storage systems or sites is not new with solutions available from different vendors including Brocade, Cisco, Datacore, EMC, Fujitsu, HDS, HP, IBM, LSI and NetApp among others.

For synchronization and data mobility operations, such as a VMware Vmotion or Microsoft HyperV Live migration over distance, information is written to separate LUNs in different locations across what are known as stretch volumes to enable non disruptive workload relocation across different storage systems (arrays) from various vendors. Once synchronization is completed, the original source can be disconnected or taken offline for maintenance or other common IRM tasks. Note that at least two LUNs are required; put another way, for every stretch volume, two LUNs are subtracted from the total number of available LUNs, similar to how RAID 1 mirroring requires at least two disk drives.
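The LUN accounting above reduces to simple arithmetic, shown here against the 8,000 virtualized LUN system limit mentioned earlier (the helper name is illustrative):

```python
# Each stretch volume consumes one LUN at each site, so it takes two LUNs
# out of the pool, just as RAID 1 mirroring takes two drives per volume.

def usable_luns(total_luns: int, stretch_volumes: int) -> int:
    """LUNs left for ordinary volumes after provisioning stretch volumes."""
    return total_luns - 2 * stretch_volumes

print(usable_luns(8000, 100))  # 100 stretch volumes leave 7800 ordinary LUNs
```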

Unlike other approaches that, for coherency and performance, rely on either no cached data or extensive amounts of cached data along with the subsequent overhead of maintaining statefulness (consistency and coherency), including avoiding stale reads or writes, VPLEX relies on a combination of distributed cache lookup tables along with pass thru access to underlying storage when or where needed. Consequently, large amounts of data do not need to be cached or shipped between VPLEX devices to maintain data consistency, coherency or performance, which should also help keep costs affordable.

The approach is not unique, it is the implementation
Some storage virtualization solutions that have been software based running on an appliance or network switch as well as hardware system based have had a focus of emulating or providing competing capabilities with those of mid to high end storage systems. The premise has been to use lower cost, less feature enabled storage systems aggregated behind the appliance, switch or hardware based system to provide advanced data and storage management capabilities found in traditional higher end storage products.

VPLEX, while like any tool or technology it could be (and probably will be) made to do things other than what it is intended for, is really focused on flexibility, transparency and agility as opposed to being used as a means of replacing underlying storage system functionality. What this means is that while there are data movement and migration capabilities, including the ability to synchronize data across sites or locations, VPLEX by itself is not a replacement for the underlying functionality present in both EMC and third party (e.g. HDS, HP, IBM, NetApp, Oracle/Sun or others) storage systems.

This will make for some interesting discussions, debates and apples to oranges comparisons, in particular with those vendors whose products are focused around replacing or providing functionality not found in underlying storage system products.

In a nutshell summary, VPLEX and the Virtual Storage story (vision) are about enabling agility, resiliency, flexibility, and data and resource mobility to simplify IT Infrastructure Resource Management (IRM). One of the key themes of global storage federation is anywhere access on a local, metro, wide area and global basis across both EMC and heterogeneous third party vendor hardware.

Let's Put it Together: When and Where to use a VPLEX
While many storage virtualization solutions are focused around consolidation or pooling, similar to first wave server and desktop virtualization, the next general broad wave of virtualization is life beyond consolidation. That means expanding the focus of virtualization from consolidation, pooling or LUN aggregation to that of enabling transparency for agility, flexibility, data or system movement, technology refresh and other common time consuming IRM tasks.

Future applications or usage scenarios should include, in addition to VMware Vmotion, Microsoft HyperV and Microsoft Clustering, along with other host server clustering solutions.


Figure 11: EMC VPLEX Usage Scenarios

Thoughts and Industry Trends Perspectives:

The following are various thoughts, comments, perspectives and questions pertaining to this and storage, virtualization and IT in general.

Is this truly unique as is being claimed?

Interestingly, the message I'm hearing out of EMC is not a claim that this is unique, revolutionary or the industry's first, as is so often the case with vendors, but rather that it is their implementation and ability to deploy on a broad basis that is unique. Now granted, you will probably hear, as is often the case with any vendor or fan boy/fan girl spin, claims of it being unique, and I'm sure this will also serve up plenty of fodder for mudslinging in the blogosphere, YouTube galleries, twitter land and beyond.

What is the DejaVu factor here?

For some it will be nonexistent, yet for others there is certainly a DejaVu factor depending on your experience or what you have seen and heard in the past. In some ways this is the manifestation of many visions and initiatives from the late 90s and early 2000s, when storage virtualization or virtual storage in an open context jumped into the limelight coinciding with SAN activity. There have been products rolled out along with proof of concept technology demonstrators, some of which are still in the market; others, including entire companies, have fallen by the wayside for a variety of reasons.

Consequently if you were part of or read or listened to any of the discussions and initiatives from Brocade (Rhapsody), Cisco (SVC, VxVM and others), INRANGE (Tempest) or its successor CNT UMD not to mention IBM SVC, StorAge (now LSI), Incipient (now part of Texas Memory) or Troika among others you should have some DejaVu.

I guess that also begs the question of what VPLEX is: in band, out of band, or a hybrid fast path control path? From what I have seen, it appears to be a fast path approach combined with distributed caching, as opposed to cache centric inband approaches such as IBM SVC (either on a server or, as was tried, on the Cisco special service blade) among others.

Likewise if you are familiar with IBM Mainframe GDPS or even EMC GDDR as well as OpenVMS Local and Metro clusters with distributed lock management you should also have DejaVu. Similarly if you had looked at or are familiar with any of the YottaYotta products or presentations, this should also be familiar as EMC acquired the assets of that now defunct company.

Is this a way for EMC to sell more hardware along with software products?

By removing barriers and enabling IT staffs to support more data on more storage in a denser and more agile footprint, the answer should be yes, something that we may see other vendors emulate, or make noise about what they can do or have been doing already.

How is this virtual storage spin different from the storage virtualization story?

That all depends on your view or definition as well as belief systems and preferences for what is or is not virtual storage vs. storage virtualization. For those who believe that storage virtualization is only virtualization if and only if it involves software running on some hardware appliance or a vendor's storage system for aggregation and common functionality, you probably won't see this as virtual storage let alone storage virtualization. However, for others it will be confusing, hence EMC introducing terms such as federation and avoiding terms including grid to minimize confusion yet play off of cloud crowd commotion.

Is VPLEX a replacement for storage system based tiering and replication?

I do not believe so. Even though some vendors are making claims that tiered storage is dead, just like some vendors declared a couple of years ago that disk drives would be dead by now at the hands of SSD, neither has come to pass, so to speak, pun intended. What this means for VPLEX is that it leverages underlying automated or manual tiering found in storage systems, such as EMC FAST enabled systems or similar policy and manual functions in third party products.

What VPLEX brings to the table is the ability to transparently present a LUN or volume locally or over distance with shared access while maintaining cache and data coherency. This means that if a LUN or volume moves, the applications, file systems or volume managers expecting to access that storage will not be surprised, panic or encounter failover problems. Of course there will be plenty of details to dig into to see how it all actually works, as is the case with any new technology.

Who is this for?

I see this as being for environments that need flexibility and agility across multiple storage systems, either from one or multiple vendors, on a local, metro or wide area basis. This is for those environments that need the ability to move workloads, applications and data between different storage systems and sites for maintenance, upgrades, technology refresh, BC/DR, load balancing or other IRM functions, similar to how they would use virtual server migration such as VMotion or Live migration among others.

Do VPLEX and Virtual Storage eliminate the need for Storage System functionality?

I see some storage virtualization solutions or appliances that focus on replacing underlying storage system functionality instead of coexisting with or complementing it. A way to test for this approach is to listen for or read whether the vendor or provider says anything along the lines of eliminating vendor lock in or control of the underlying storage system. That can be a sign of the golden rule of virtualization: whoever controls the virtualization functionality (at the server hypervisor or storage) controls the gold! This is why on the server side of things we are starting to see tiered hypervisors, similar to tiered servers and storage, where mixed hypervisors are being used for different purposes. Will we see tiered storage hypervisors or virtual storage solutions? The answer could be perhaps, or it depends.

Was Invista a failure, never going into production, and is this a second attempt at virtualization?

There is a popular myth in the industry that Invista never saw the light of day outside of trade show expos or other demos; however, the reality is that there are actual customer deployments. Invista, unlike other storage virtualization products, had a different focus, which was enabling agility and flexibility for common IRM tasks, similar to the expanded focus of VPLEX. Consequently Invista has often drawn apples to oranges comparisons with other virtualization appliances whose focus is pooling along with other functions, or in some cases serving as an appliance based storage system.

The focus around Invista, and its usage by those customers who have deployed it that I have talked with, is enabling agility for maintenance, facilitating upgrades, moves or reconfiguration and other common IRM tasks vs. using it for pooling of storage for consolidation purposes. Thus I see VPLEX extending the vision of Invista in a role of complementing and leveraging underlying storage system functionality instead of trying to replace those capabilities with those of the storage virtualizer.

Is this a replacement for EMC Invista?

According to EMC the answer is no and that customers using Invista (Yes, there are customers that I have actually talked to) will continue to be supported. However I suspect that over time Invista will either become a low end entry for VPLEX, or, an entry level VPLEX solution will appear sometime in the future.

How does this stack up or compare with what others are doing?

If you are looking to compare VPLEX to cache centric platforms such as IBM's SVC, which adds extensive functionality and capabilities within the storage virtualization framework, this is an apples to oranges comparison. VPLEX provides cache pointers on a local and global basis, functioning as a complement to the underlying storage system, where SVC caches on a per cluster basis while enhancing the functionality of the underlying storage system. Rest assured there will be other apples to oranges comparisons made between these platforms.

How will this be priced?

When I asked EMC about pricing, they would not commit to a specific price prior to the announcement, other than indicating that there will be options for on demand or consumption (e.g. cloud pricing), pricing per engine capacity, and subscription models (pay as you go).

What is the overhead of VPLEX?

While EMC runs various workload simulations (including benchmarks) internally as well as some publicly (e.g. Microsoft ESRP among others), they have been opposed to some storage simulation benchmarks such as SPC. The EMC objections to simulations such as SPC have been varied; however, this could be a good and interesting opportunity for them to silence the industry (including myself) who continue to ask them (along with a couple of other vendors, including IBM with their XIV) when they will release public results.

The interesting opportunity for EMC, I think, is that they do not even have to benchmark one of their own storage systems such as a CLARiiON or VMAX; instead, they could simply show the performance of some third party product that is already tested on the SPC website and then make a submission with that product running attached to a VPLEX.

If the performance or low latency forecasts are as good as they have been described, EMC can accomplish a couple of things by:

  • Demonstrating the low latency and minimal to no overhead of VPLEX
  • Show VPLEX with a third party product comparing latency before and after
  • Provide a comparison to other virtualization platforms including IBM SVC

As for EMC submitting a VMAX or CLARiiON SPC test in general, I'm not going to hold my breath for that; instead, I will continue to look at the other public workload tests such as ESRP.

Additional related reading material and links:

Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier)
Chapter 3: Networking Your Storage
Chapter 4: Storage and IO Networking
Chapter 6: Metropolitan and Wide Area Storage Networking
Chapter 11: Storage Management
Chapter 16: Metropolitan and Wide Area Examples

The Green and Virtual Data Center (CRC)
Chapter 3: (see also here) What Defines a Next-Generation and Virtual Data Center
Chapter 4: IT Infrastructure Resource Management (IRM)
Chapter 5: Measurement, Metrics, and Management of IT Resources
Chapter 7: Server: Physical, Virtual, and Software
Chapter 9: Networking with your Servers and Storage

Also see these:

Virtual Storage and Social Media: What did EMC not Announce?
Server and Storage Virtualization – Life beyond Consolidation
Should Everything Be Virtualized?
Was today the proverbial day that he!! Froze over?
Moving Beyond the Benchmark Brouhaha

Closing comments (For now):
As with any new vision, initiative, architecture and initial product, there will be plenty of questions to ask, items to investigate, and early adopter customers or users to talk with to determine what is real, what is future, what is usable and practical, along with what is nice to have. Likewise there will be plenty of mud ball throwing and slinging between competitors, fans and foes, which for those who enjoy watching or reading such things should be well entertaining.

In general, the EMC vision and story builds on and presumably delivers on past industry hype, buzz and vision with solutions that can be put into environments as a productivity tool that works for the customer, instead of the customer working for the tool.

Remember the golden rule of virtualization which is in play here is that whoever controls the virtualization or associated management controls the gold. Likewise keep in mind that aggregation can cause aggravation. So do not be scared, however look before you leap meaning do your homework and due diligence with appropriate levels of expectations, aligning applicable technology to the task at hand.

Also, if you have seen or experienced something in the past, you are more likely to have DejaVu as opposed to seeing things as revolutionary. However, it is also important to leverage lessons learned for future success. YottaYotta was a lot of NaddaNadda; let's see if EMC can leverage their past experiences to make this a LottaLotta.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio


Seagate to say goodbye to Cayman Islands, Hello Ireland

Seagate (NASDAQ: STX) Corporation, the parent of the company many people in IT and data storage in particular know as Seagate the disk drive manufacturer, is moving its paper headquarters from the Cayman Islands, where it has been based since 2000, to Ireland.

Let me rephrase that: Seagate is not moving its Scotts Valley, California headquarters of operations or any design, manufacturing or marketing to Ireland that is not already there. Rather, Seagate as a manufacturing company is moving where it is incorporated (its paper corporate headquarters) from the Cayman Islands to the Emerald Isle of Ireland.

Confused yet?
Do not worry, it is confusing at first. I ended up having to reread the Seagate corporate material, and remembering back to the late 1990s, it all started to make sense. Seagate has over 50,000 employees located at facilities around the world including manufacturing, support, design, research and development, sales and marketing along with corporate administration among others.

Their business, while focused on data storage, is currently very much centered on magnetic disk drives, with a diversified portfolio including products obtained via their acquisition of Maxtor. The Seagate product portfolio ranges from high end enterprise class Fibre Channel and SAS 15,000 RPM (15K) high performance to high capacity SAS and SATA devices, 10K small form factor (SFF) drives for the mid market and SMB, and USB based SOHO, prosumer or consumer devices, along with portable and specialized devices among many others, including emerging SSD and hybrid devices.

However, back in the late 1990s Seagate ventured off into some other areas for a time, including owning (in part) Veritas (since divested and now part of Symantec) and Xiotech (now back on its own under venture ownership, including some tied to Seagate), among other transactions. In a series of mergers and acquisitions, divestitures, restructurings and paper corporate headquarters moves that reads like something out of a Hollywood movie, Seagate ended up moving its place of incorporation to the Cayman Islands.

Seagate as it was known had essentially become the manufacturing company owned by a paper holding company incorporated offshore for business and tax purposes. Want to learn more? Read the company's annual reports and other filings, some of which can be found here.

The Business End of the Move
Without getting into the deep details of international finance, tax law or articles of business incorporation, many companies are actually incorporated in a location different from where they actually have their headquarters. In the United States, that is often Delaware, where corporations file their paperwork (articles of incorporation) and then locate their headquarters or primary place of business elsewhere.

Seagate SEC filings outlining move
Seagate SEC filing outlining proposed move

Outside of the United States, the Cayman Islands, among other locations, have been a popular place for companies to file their paperwork and have a paper headquarters due to favorable tax rates and other business benefits. Perhaps you have even watched a movie or two where part of the plot involved some business transaction of a paper company located in the Cayman Islands as a means of sheltering business dealings. In the case of Seagate, in 2000, during a restructuring, its corporate (paper) headquarters was moved to the Caymans due to the favorable business climate, including a lower tax structure.

Dive Cayman Islands

Disclosure: While I am a certified and experienced PADI SCUBA Divemaster who has visited many different venues, the Cayman Islands is not one of them. Likewise, while I have distant relatives I have never met, I would love to visit Ireland sometime.

Why is Seagate saying goodbye to the nice warm climate of the Cayman Islands and heading off to the Emerald Isle?

Visit Ireland

Simple: a more favorable business climate that includes international business and taxation benefits, plus the fact that Ireland has not come under scrutiny as a tax haven by the U.S. and other governments as the Cayman Islands (along with other locations) have. Let me also be clear that Seagate is not new to Ireland, having had a presence there for some time (see here).

What does all of this mean?
From a technology perspective, pretty much nothing, as this appears to be mainly a business and financial move for the shareholders of Seagate. As for the impact on shareholders, other than reading through some documents if so inclined, there is probably not much impact, if any at all.

As for IT customers and their solution providers who are customers of Seagate, this probably does not mean anything at all; it should be business as usual.

What about other parties, governments, countries or entities?

Tough to say whether this is the start of a trend of companies moving their paper headquarters from the Caymans elsewhere so as to escape the spotlight of U.S. and other governments looking for additional revenue.

Perhaps it will be a boon to Ireland if more companies decide to move their paper as well as actual company operations there, as many have done over the past decades. Otherwise, for the rest of us, it can make for interesting reading, conversation, speculation, debate and discussion.

And that is all that I have to say about this for now, what say you?

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Post Holiday IT Shopping Bargains, Dell Buying Exanet?

For consumers, the time leading up to the Christmas holiday season is usually busy, including door busters as well as Black Friday among other specials for purchasing gifts and other items. However, savvy shoppers will wait until after Christmas or the holidays altogether, perhaps well into the New Year, when some good bargains become available. IT customers are no different, with budgets to use up before the end of the year and thus a flurry of acquisitions that should become evident soon as we enter earnings announcement season.

However, there are also bargains for IT organizations looking to take advantage of special vendor promotions intended to stimulate sales, not to mention for IT vendors doing some shopping of their own. Consequently, in addition to the flurry of merger and acquisition (M and A) activity from last summer through the fall, there have been several recent deals, some of which might make Monty Hall blush!

Recent acquisition activity includes, among others:

  • Dell bought Perot systems for $3.9B
  • DotHill bought Cloverleaf
  • Texas Memory Systems (TMS) bought Incipient
  • HP bought IBRIX and 3COM among others
  • LSI bought Onstor
  • VMware bought Zimbra
  • Micron bought Numonyx
  • Exar bought Neterion

Now the industry is abuzz about Dell, who is perhaps using some of the loose change left over from holiday sales, being in the process of acquiring Israeli clustered storage startup Exanet for about $12M USD. Compared to previous Dell acquisitions, including EqualLogic in 2007 for about $1.4B or last year's Perot deal in the $3.9B range, $12M is a bargain and would probably not even put a dent in the sales, marketing and advertising budget, let alone the corporate cash coffers, which as of the Q3-F10 balance sheet held about $12.795B in cash.

Who is Exanet and what is their product solution?
Exanet is a small Israeli startup providing a clustered, scale out NAS file serving storage solution (Figure 1) that began shipping in 2003. The Exanet solution (ExaStore) can be delivered either as software only, or as a packaged solution with the ExaStore software installed on standard x86 servers combined with external RAID storage arrays to form a clustered NAS file server.

Product features include a global namespace, distributed metadata, expandable file systems, virtual volumes, quotas, snapshots, file migration, replication, virus scanning and load balancing, along with NFS, CIFS and AFP access. Exanet scales up to 1 Exabyte of storage capacity and supports large files and billions of files per cluster.

The target market that Exanet pursues is large scale out NAS where performance (either small random or large sequential I/Os) along with capacity are required. Consequently, in the scale out, clustered NAS file serving space, competitors include IBM GPFS (SONAS), HP IBRIX or PolyServe, Sun Lustre and Symantec SFS, among others.

Clustered Storage Model: Source The Green and Virtual Data Center (CRC)
Figure 1: Generic clustered storage model (Courtesy The Green and Virtual Data Center (CRC))

For a turnkey solution, Exanet packaged its cluster file system software on various vendors' servers combined with 3rd party external Fibre Channel or other storage. This should play well for Dell, who can package the Exanet software on its own servers and leverage either SAS or Fibre Channel MD1000/MD3000 external RAID storage among other options (see more below).

Click here to learn more about clustered storage including clustered NAS, clustered and parallel file systems.

Dell

What's the Dell play?

  • It's an opportunity to acquire some intellectual property (IP)
  • It's an opportunity to have IP similar to EMC, HP, IBM, NetApp, Oracle and Symantec among others
  • It's an opportunity to address a market gap or need
  • It's an opportunity to sell more Dell servers, storage and services
  • It's an opportune time for doing acquisitions (bargain shopping)

Note: IBM this past week also announced its new bundled scale out clustered NAS file serving solution based on GPFS, called SONAS. HP has IBRIX in addition to its previous PolyServe acquisition, and Sun has ZFS and Lustre.

How does Exanet fit into the Dell lineup?

  • Dell sells Microsoft based NAS as the NX series
  • Dell has an OEM relationship with EMC
  • Dell was OEMing or reselling IBRIX in the past for certain applications or environments
  • Dell has needed to expand its NAS story to balance its iSCSI centric storage story as well as complement its multifunction block storage solutions (e.g. MD3000) and server solutions.

Why Exanet?
Why Exanet, and why not one of the other startups or small NAS or cloud file system vendors, including BlueArc, Isilon, Panasas, Parascale, Reldata, Open-E or Zetta, among others?

My take is that those were probably either not relevant to what Dell is looking for, lacked a seamless technology and business fit, had technology tied to non Dell hardware, lacked technology maturity, have investors still expecting a premium valuation, or some combination of the preceding.

Additional thoughts on why Exanet
I think that Dell simply saw an opportunity to acquire some intellectual property (IP), probably including a patent or two. The value of the patents could be in the form of current or future product offerings, perhaps a negotiating tool, or if nothing else a marketing tool. As a marketing tool, Dell via its EqualLogic acquisition among others has been able to demonstrate and generate awareness that it actually owns some IP vs. OEMing or reselling that of others. I also think that this is an opportunity to either fill or supplement the solution offering that IBRIX provided for high performance, bulk storage and scale out file serving needs.

NAS and file serving supporting unstructured data is a strong growth market for commercial, high performance, specialized or research as well as small business environments. Thus, where EqualLogic plays to the iSCSI block theme, Dell needs to expand its NAS and file serving solutions to provide product diversity to meet various customer application needs, similar to what it does with block based storage. For example, while the iSCSI based EqualLogic PS systems get the bulk of the marketing attention, Dell also has a robust business around the PowerVault MD1000/MD3000 (SAS/iSCSI/FC) and the Microsoft multi protocol based PowerVault NX series, not to mention its EMC CLARiiON based OEM solutions (e.g. Dell AX, Dell/EMC CX).

Thus, Dell can complement the Microsoft multi protocol (block and NAS file) NX with a packaged solution (Dell servers and MD or other affordable block storage powered by Exanet). It is also possible that Dell will find a way to package Exanet as a NAS gateway in front of the iSCSI based EqualLogic PS systems, though that would make for an expensive scale out NAS solution compared to those from other vendors.

That's it for now.

Let's see how this all plays out.

Cheers gs

Greg Schulz – Author The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier)
twitter @storageio

Technorati tags: Dell

Technology Tiering, Servers Storage and Snow Removal

Granted it is winter in the northern hemisphere and thus snow storms should not be a surprise.

However, between December 2009 and early 2010 there has been plenty of record activity, from the U.K. (or here) to the U.S. east coast including New York, Boston and Washington DC, across the Midwest and out to California. It made for a white Christmas and SANta fun, along with snow fun in general in the new year.

2010 Snow Storm via www.star-telegram.com

What does this have to do with Information Factories aka IT resources including public or private clouds, facilities, server, storage, networking along with data management let alone tiering?

What does this have to do with tiered snow removal, or even snow fun?

Simple: different tools are needed for addressing various types of snow, from wet and heavy, to light powdery dustings, to deep downfalls. Likewise, there are different types of servers, storage and data networks, along with operating systems, management tools and even hypervisors, to deal with various application needs or requirements.

First, let's look at tiered IT resources (servers, storage, networks, facilities, data protection and hypervisors) to meet various efficiency, optimization and service level needs.

Do you have tiered IT resources?

Let me rephrase that question: do you have different types of servers with various performance, availability, connectivity and software capabilities that support various applications and cost levels?

Thus the whole notion of tiered IT resources is to be able to have different resources that can be aligned to the task at hand in order to meet performance, availability, capacity and energy requirements, along with economic and service level agreement (SLA) requirements.

Computers or servers are targeted for different markets, including Small Office Home Office (SOHO), Small Medium Business (SMB), Small Medium Enterprise (SME) and ultra large scale or extreme scale environments, including high performance supercomputing. Servers are also positioned for different price bands and deployment scenarios.

General categories of tiered servers and computers include:

  • Laptops, desktops and workstations
  • Small floor standing towers or rack mounted 1U and 2U servers
  • Medium size floor standing towers or larger rack mounted servers
  • Blade Centers and Blade Servers
  • Large size floor standing servers, including mainframes
  • Specialized fault tolerant, rugged and embedded processing or real time servers

Servers are given different names, such as email server, database server, application server, web server, video or file server, network server, security server, backup server or storage server, depending on their use. In each of the previous examples, what defines the type of server is the type of software being used to deliver a type of service. Sometimes the term appliance will be used for a server; this is indicative of the type of service the combined hardware and software solution is providing. For example, the same physical server running different software could be a general purpose application server, a database server running for example Oracle, IBM, Microsoft or Teradata among other databases, an email server or a storage server.

This can lead to confusion when looking at servers, in that a server may be able to support different types of workloads, and thus whether it should be considered a server, storage, networking or application platform depends on the type of software being used on it. If, for example, storage software in the form of a clustered and parallel file system is installed on a server to create a highly scalable network attached storage (NAS) or cloud based storage service solution, then the server is a storage server. If the server has a general purpose operating system such as Microsoft Windows, Linux or UNIX and a database on it, it is a database server.

While not technically a type of server, some manufacturers use the term tin wrapped software in an attempt not to be classified as an appliance, server or hardware vendor, while still positioning their software as a turnkey solution. The idea is to avoid being perceived as a software only solution that requires integration with hardware. The approach is to use off the shelf, commercially available general purpose servers with the vendor's software technology pre integrated and installed, ready for use. Thus, tin wrapped software is a turnkey software solution with some tin, or hardware, wrapped around it.

How about the same with tiered storage?

That is, different tiers (Figure 1) of storage: fast high performance disk including RAM or flash based SSD, fast Fibre Channel or SAS disk drives, or high capacity SAS and SATA disk drives, along with magnetic tape as well as cloud based backup or archive?

Tiered Storage Resources
Figure 1: Tiered Storage resources

Tiered storage is also sometimes thought of in terms of large enterprise class solutions or midrange, entry level, primary, secondary, near line and offline. Not to be forgotten, there are also tiered networks that support various speeds, convergence, multi tenancy and other capabilities, from I/O Virtualization (IOV) to traditional LAN, SAN, MAN and WANs, including 1Gb Ethernet (1GbE) and 10GbE up to emerging 40GbE and 100GbE, not to mention various Fibre Channel speeds supporting various protocols.

The notion around tiered networks is, as with servers and storage, to enable aligning the right technology to the task at hand economically while meeting service needs.
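
To put those network tiers in perspective, here is a rough back-of-the-envelope sketch of how long moving a terabyte takes at different Ethernet speeds. The 80 percent effective utilization figure is purely an illustrative assumption; real world results vary widely with protocol, distance and workload.

```python
def transfer_hours(data_tb, link_gbps, efficiency=0.8):
    """Rough time in hours to move data_tb terabytes over a link.

    link_gbps is the nominal line rate in gigabits per second;
    efficiency is an assumed fraction of line rate actually achieved.
    """
    bits = data_tb * 1e12 * 8                     # decimal terabytes -> bits
    seconds = bits / (link_gbps * 1e9 * efficiency)
    return seconds / 3600.0

# Moving 1TB over various network tiers (assumed 80% efficiency)
for gbps in (1, 10, 40, 100):
    print(f"{gbps:>3} GbE: {transfer_hours(1, gbps):.2f} hours")
```

The point of the sketch is simply that each network tier changes data movement time by an order of magnitude, which is what makes aligning the tier to the task worthwhile.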

Two other common IT resource tiering techniques involve facilities and data protection. Tiered facilities can indicate size, availability and resiliency, among other characteristics. Likewise, tiered data protection means aligning the applicable technology to support different RTO and RPO requirements, for example using synchronous replication where applicable vs. asynchronous time delayed replication for longer distances, combined with snapshots. Other forms of tiered data protection include traditional backups to disk, tape or cloud.
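
As a rough sketch of the tiered data protection idea, one might map a recovery point objective (RPO, the amount of data loss that can be tolerated) to a protection technique. The thresholds below are purely illustrative assumptions, not recommendations; actual tier boundaries depend on application and business requirements.

```python
def protection_tier(rpo_seconds):
    """Map an RPO (seconds of tolerable data loss) to a data
    protection technique, per the tiering idea above.
    Thresholds here are illustrative, not prescriptive."""
    if rpo_seconds == 0:
        return "synchronous replication"          # no data loss tolerated
    if rpo_seconds <= 15 * 60:
        return "asynchronous replication"         # minutes behind, longer distance
    if rpo_seconds <= 4 * 3600:
        return "periodic snapshots"
    return "scheduled backup to disk, tape or cloud"

print(protection_tier(0))
print(protection_tier(300))
print(protection_tier(24 * 3600))
```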

There is a new emerging form of tiering in many IT environments: tiered virtualization, or specifically tiered server hypervisors in virtual data centers, with objectives similar to having different server, storage, network, data protection or facilities tiers. Instead of an environment running all VMware, a mix of Microsoft HyperV or Xen among other hypervisors may be deployed to meet different application service class requirements. For example, VMware may be used for premium features and functionality on some applications, while others that do not need those features, and that require lower operating costs, leverage HyperV or Xen based solutions. Taking the tiering approach a step further, one could also declare tiered databases, for example legacy Oracle vs. MySQL or Microsoft SQLserver, among other examples.

What about IT clouds? Are those different types of resources, or essentially an extension of existing IT capabilities, for example cloud storage being another tier of data storage?

There is another form of tiering, particularly during the winter months in the northern hemisphere where there is an abundance of snow this time of the year. That is, tiered snow management, removal or movement technologies.

What about tiered snow removal?

Well, let's get back to that then.

Like IT resources, there are different technologies that can be used for moving, removing, melting or managing snow.

For example, I can't do much about getting rid of snow other than pushing it all down the hill and into the river, something that would take time and lots of fuel. Or, I can manage where I put the snow piles to be prepared for the next storm, placing them where they will melt in a way that helps avoid spring flooding. Some technologies can be used for relocating snow elsewhere, kind of like archiving data onto different tiers of storage.

Regardless of whether it is a snowstorm or IT clouds (public or private), virtual, managed service provider (MSP), hosted or traditional IT data centers, all require physical servers, storage, I/O and data networks along with software, including management tools.

Granted, not all servers, storage or networking technologies, let alone software, are the same, as they address different needs. IT resources including servers, storage, networks, operating systems and even hypervisors for virtual machines are often categorized and aligned to different tiers corresponding to needs and characteristics (Figure 2).

Tiered IT Resources
Figure 2: Tiered IT resources

For example, in Figure 3 there is a lightweight plastic shovel (Shovel 1) for moving small amounts of snow in a wide stripe or pass. Then there is a narrow shovel for digging things out or breaking up snow piles (Shovel 2). Also shown is a light duty snow blower (snow thrower) capable of dealing with powdery or non wet snow, or grooming in tight corners or small areas.

Tiered Snow tools
Figure 3: Tiered Snow management and migration tools

For other light dustings, a yard leaf blower does double duty for migrating or moving snow in small or tight corners such as decks and patios, or for cleanup. Larger snowfalls, or where there is a lot of area to clear, involve heavier duty tools such as the Kawasaki Mule with a 5 foot Curtis plow. The Mule is a multifunction, multi protocol tool capable of being used for hauling, towing, pulling or recreational tasks.

When all else fails, there is a pickup truck to get out and about, not to mention to pull other vehicles out of ditches or piles of snow when they become stuck!

Snow movement
Figure 4: Sometimes the snow is light, making for fast, low latency migration

Snow movement
Figure 5: And sometimes even snow migration technology goes offline!


And that is it for now!

Enjoy the northern hemisphere winter and snow while it lasts; make the best of it with the right tools to simplify the tasks of movement and management, similar to IT resources.

Keep in mind, it's about the tools and when along with how to use them for various tasks for efficiency and effectiveness, and a bit of snow fun.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio


RAID Relevance Revisited

Following up from some previous posts on the topic, a continued discussion point in the data storage industry is the relevance (or lack thereof) of RAID (Redundant Array of Independent Disks).

These discussions tend to revolve around how RAID is dead due to its real or perceived inability to continue scaling in terms of the performance, availability, capacity, economic or energy capabilities needed, when compared to those of newer techniques, technologies or products.

RAID Relevance

While there are many new and evolving approaches to protecting data in addition to maintaining availability or accessibility of information, RAID, despite the fanfare, is far from dead, at least on the technology front.

Sure, there are issues or challenges that require continued investment in RAID, as has been the case over the past 20 years; however those will also be addressed on a go forward basis via continued innovation and evolution, along with riding technology improvement curves.

Now from a marketing standpoint, OK, I can see where the RAID story is dead, boring, and something new and shiny is needed, or at least a change in pitch to sound like something new.

Consequently, when long in the tooth and with some of the aforementioned items among others, older technologies that may be boring or lack sizzle or marketing dollars can and often are declared dead on the buzzword bingo circuit. After all, how long now has the industry trade group RAID Advisory Board (RAB) been missing in action, retired, spun down, archived or ILMed?

RAID remains relevant because, like other dead or zombie technologies, it has reached the plateau of productivity and profitability. That success is also something emerging technologies envy as their future domain, and thus a classic marketing move is to declare the incumbent dead.

The reality is that RAID, in all of its various instances from hardware to software, standard to non-standard with extensions, is very much alive, from the largest enterprise to the SMB to the SOHO, down into consumer products and all points in between.

Now candidly, like any technology that is about 20 years old if not older (after all, the disk drive is over 50 years old, and how long has it been declared dead?), RAID in some ways is long in the tooth, and there are certainly issues to be addressed, as there have been in the past. Some of these include the overhead of rebuilding large capacity 1TB and 2TB disk drives, and even larger ones in the not so distant future.
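
As a rough first order sketch of why larger drives raise rebuild concerns: best case, rebuild time is simply capacity divided by sustained rebuild rate, so doubling capacity doubles the rebuild window. The 50 MB/s rate below is an assumed, illustrative figure; real systems vary widely by vendor and slow down further under host I/O load.

```python
def rebuild_hours(capacity_gb, rate_mb_per_s):
    """Best-case rebuild time: capacity divided by sustained rebuild rate.

    Real systems rebuild slower while also servicing host I/O.
    """
    return (capacity_gb * 1000.0) / rate_mb_per_s / 3600.0

# At an assumed 50 MB/s sustained rebuild rate
for gb in (1000, 2000):
    print(f"{gb} GB drive: ~{rebuild_hours(gb, 50):.1f} hours")
```

This is also why the vendor improvements discussed here matter: raising the effective rebuild rate is the only way to keep the rebuild window in check as capacities grow.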

There are also issues pertaining to distributed data protection in support of cloud, virtualized or other solutions that need to be addressed. In fact, go way back to when RAID appeared commercially on the scene in the late 80s, and one of the value propositions, among others, was to address the reliability of emerging large capacity, multi MByte sized SCSI disk drives. It seems almost laughable today that a decade later, when 1GB disk drives appeared on the market back in the 90s, there was renewed concern about RAID and disk drive rebuild times.

Rest assured, I think there is a need and plenty of room for continued innovation and evolution around RAID related technologies and their associated storage systems or packaging on a go forward basis.

What I find interesting is that some of the issues facing RAID today are similar to those of a decade ago, for example having to deal with large capacity disk drive rebuilds, distributed data protection and availability, performance, ease of use, and so the list goes.

However, what happened was that vendors continued to innovate, both in terms of basic performance, accelerating rebuild rates with improvements to rebuild algorithms, and leveraging faster processors, busses and other techniques. In addition, vendors continued to innovate in terms of new functionality, including adopting RAID 6, which for the better part of a decade, outside of a few niche vendors, languished as one of those future technologies that probably nobody would ever adopt; however, we know that to be different now and for the past several years. RAID 6 is one of those areas where vendors who do not have it are either adding it, enhancing it, or telling you why you do not need it or why it is no good for you.

An example of how RAID 6 is being enhanced is boosting performance on normal read and write operations along with accelerating performance during disk rebuilds. Also tied to RAID 6 and disk drive rebuilds are improvements in controller design to detect and proactively make repairs on the fly to minimize or eliminate errors or diminish the need for drive rebuilds, similar to what was done in previous generations. Let's also not forget the improvements in disk drives, boosting performance, availability, capacity and energy efficiency over time.
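
For background on what the controllers are actually computing: single parity (RAID 5 style) is a bytewise XOR across the data blocks of a stripe, which is what makes reconstruction of one lost drive possible, while RAID 6 adds a second, independently computed parity so two concurrent failures can be survived. A minimal illustrative sketch of the single parity case:

```python
def xor_parity(blocks):
    """Compute the parity block as the bytewise XOR of the data blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def rebuild(surviving_blocks, parity):
    """Reconstruct one lost data block from survivors plus parity:
    the XOR of everything that remains equals the missing block."""
    return xor_parity(list(surviving_blocks) + [parity])

data = [b"AAAA", b"BBBB", b"CCCC"]   # one stripe of data blocks
p = xor_parity(data)
assert rebuild(data[1:], p) == data[0]   # lost block recovered
```

Production implementations do this in hardware or vectorized software across much larger chunks, and RAID 6's second parity uses a different (Reed-Solomon style) computation, but the reconstruction principle is the same.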

Funny how these and other enhancements are similar to those made to RAID controller hardware and software in the early to mid 2000s, fine tuning them to support high capacity SATA disk drives that had different RAS characteristics from higher performance, lower capacity enterprise drives.

Here is my point.

RAID to some may be dead, while others continue to rely on it. Meanwhile, others are working on enhancing technologies for future generations of storage systems and application requirements. Thus, in different shapes, forms, configurations, features, functionality or packaging, the spirit of RAID is very much alive and well, remaining relevant.

Regardless of whether a solution uses two or three disk mirroring for availability; RAID 0 striping of fast SSD, SAS or FC disks for performance, with data protection via rapid restoration from some other low cost medium (perhaps RAID 6 or tape); or single, dual or triple parity protection; whether it uses small blocks or multi MByte or volume based chunklets; and whether it is hardware or software based, local or distributed, standard or non standard, chances are there is some theme of RAID involved.
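
As a rough sketch of the capacity trade-offs among the mirroring and parity schemes just mentioned (simplified: it ignores hot spares, metadata and vendor-specific layouts):

```python
def usable_fraction(level, drives):
    """Fraction of raw capacity available for data at common RAID levels.

    Simplified model: ignores spares, metadata and vendor variations.
    """
    if level == 0:                     # striping, no protection
        return 1.0
    if level == 1:                     # two-way mirroring
        return 0.5
    if level == 5:                     # single parity
        return (drives - 1) / drives
    if level == 6:                     # dual parity
        return (drives - 2) / drives
    raise ValueError("unsupported level")

for level in (0, 1, 5, 6):
    n = 2 if level == 1 else 8
    print(f"RAID {level} on {n} drives: {usable_fraction(level, n):.0%} usable")
```

The usual trade applies: more protection (mirroring, dual parity) costs usable capacity, which is why different tiers of data often land on different RAID levels.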

Granted, you do not have to call it RAID if you prefer!

As a closing thought, if RAID were no longer relevant, then why do the post RAID, next generation, life beyond RAID, or whatever you prefer to call them, technologies need to tie themselves to the themes of RAID? Simple: RAID is still relevant in some shape or form to different audiences, as well as being a great way of stimulating discussion or debate in a constantly evolving industry.

BTW, I'm still waiting for the revolutionary piece of hardware that does not require software, and the software that does not require hardware, and that includes playing games with server less servers using hypervisors :) .

Provide your perspective on RAID and its relevance in the following poll.

Here are some additional related and relevant RAID links of interest:

Stay tuned for more about RAID's relevance, as I don't think we have heard the last of this.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio


Poll: Networking Convergence, Ethernet, InfiniBand or both?

I just received an email in my inbox from Voltaire along with a pile of other advertisements, advisories, alerts and announcements from other folks.

What caught my eye in the email was that it announces new survey results that you can read here as well as below.

The question that this survey announcement prompts for me, and hence why I am posting it here, is how dominant will InfiniBand be on a go forward basis? The answer, I think, is it depends…

It depends on the target market or audience, what their applications and technology preferences are, along with other service requirements.

I think that there is and will remain a place for InfiniBand; the question is where and for what types of environments, as well as why have both InfiniBand and Ethernet, including Fibre Channel over Ethernet (FCoE), in support of unified or converged I/O and data networking.

So here is the note that I received from Voltaire:

 

Hello,

A new survey by Voltaire (NASDAQ: VOLT) reveals that IT executives plan to use InfiniBand and Ethernet technologies together as they refresh or build new data centers. They’re choosing a converged network strategy to improve fabric performance which in turn furthers their infrastructure consolidation and efficiency objectives.

The full press release is below.  Please contact me if you would like to speak with a Voltaire executive for further commentary.

Regards,
Christy

____________________________________________________________
Christy Lynch| 978.439.5407(o) |617.794.1362(m)
Director, Corporate Communications
Voltaire – The Leader in Scale-Out Data Center Fabrics
christyl@voltaire.com | www.voltaire.com
Follow us on Twitter: www.twitter.com/voltaireltd

FOR IMMEDIATE RELEASE:

IT Survey Finds Executives Planning Converged Network Strategy:
Using Both InfiniBand and Ethernet

Fabric Performance Key to Making Data Centers Operate More Efficiently

CHELMSFORD, Mass. and RA'ANANA, Israel, January 12, 2010 – A new survey by Voltaire (NASDAQ: VOLT) reveals that IT executives plan to use InfiniBand and Ethernet technologies together as they refresh or build new data centers. They’re choosing a converged network strategy to improve fabric performance which in turn furthers their infrastructure consolidation and efficiency objectives.

Voltaire queried more than 120 members of the Global CIO & Executive IT Group, which includes CIOs, senior IT executives, and others in the field that attended the 2009 MIT Sloan CIO Symposium. The survey explored their data center networking needs, their choice of interconnect technologies (fabrics) for the enterprise, and criteria for making technology purchasing decisions.

“Increasingly, InfiniBand and Ethernet share the ability to address key networking requirements of virtualized, scale-out data centers, such as performance, efficiency, and scalability,” noted Asaf Somekh, vice president of marketing, Voltaire. “By adopting a converged network strategy, IT executives can build on their pre-existing investments, and leverage the best of both technologies.”

When asked about their fabric choices, 45 percent of the respondents said they planned to implement both InfiniBand with Ethernet as they made future data center enhancements. Another 54 percent intended to rely on Ethernet alone.

Among additional survey results:

  • When asked to rank the most important characteristics for their data center fabric, the largest number (31 percent) cited high bandwidth. Twenty-two percent cited low latency, and 17 percent said scalability.
  • When asked about their top data center networking priorities for the next two years, 34 percent again cited performance. Twenty-seven percent mentioned reducing costs, and 16 percent cited improving service levels.
  • A majority (nearly 60 percent) favored a fabric/network that is supported or backed by a global server manufacturer.

InfiniBand and Ethernet interconnect technologies are widely used in today’s data centers to speed up and make the most of computing applications, and to enable faster sharing of data among storage and server networks. Voltaire’s server and storage fabric switches leverage both technologies for optimum efficiency. The company provides InfiniBand products used in supercomputers, high-performance computing, and enterprise environments, as well as its Ethernet products to help a broad array of enterprise data centers meet their performance requirements and consolidation plans.

About Voltaire
Voltaire (NASDAQ: VOLT) is a leading provider of scale-out computing fabrics for data centers, high performance computing and cloud environments. Voltaire’s family of server and storage fabric switches and advanced management software improve performance of mission-critical applications, increase efficiency and reduce costs through infrastructure consolidation and lower power consumption. Used by more than 30 percent of the Fortune 100 and other premier organizations across many industries, including many of the TOP500 supercomputers, Voltaire products are included in server and blade offerings from Bull, HP, IBM, NEC and Sun. Founded in 1997, Voltaire is headquartered in Ra’anana, Israel and Chelmsford, Massachusetts. More information is available at www.voltaire.com or by calling 1-800-865-8247.

Forward Looking Statements
Information provided in this press release may contain statements relating to current expectations, estimates, forecasts and projections about future events that are "forward-looking statements" as defined in the Private Securities Litigation Reform Act of 1995. These forward-looking statements generally relate to Voltaire’s plans, objectives and expectations for future operations and are based upon management’s current estimates and projections of future results or trends. They also include third-party projections regarding expected industry growth rates. Actual future results may differ materially from those projected as a result of certain risks and uncertainties. These factors include, but are not limited to, those discussed under the heading "Risk Factors" in Voltaire’s annual report on Form 20-F for the year ended December 31, 2008. These forward-looking statements are made only as of the date hereof, and we undertake no obligation to update or revise the forward-looking statements, whether as a result of new information, future events or otherwise.

###

All product and company names mentioned herein may be the trademarks of their respective owners.


End of Voltaire transmission:

I/O, storage and networking interface wars come and go, similar to other technology debates over what is best or which will reign supreme.

Some recent debates have been around Fibre Channel vs. iSCSI or iSCSI vs. Fibre Channel (depends on your perspective), SAN vs. NAS, NAS vs. SAS, SAS vs. iSCSI or Fibre Channel, Fibre Channel vs. Fibre Channel over Ethernet (FCoE) vs. iSCSI vs. InfiniBand, xWDM vs. SONET or MPLS, IP vs UDP or other IP based services, not to mention the whole LAN, SAN, MAN, WAN POTS and PAN speed games of 1G, 2G, 4G, 8G, 10G, 40G or 100G. Of course there are also the I/O virtualization (IOV) discussions including PCIe Single Root (SR) and Multi Root (MR) for attachment of SAS/SATA, Ethernet, Fibre Channel or other adapters vs. other approaches.

Thus when I routinely get asked about which is best, my answer usually is a qualified "it depends" based on what you are doing, what you are trying to accomplish, your environment, and your preferences among other factors. In other words, I'm not hung up on or tied to any one particular networking transport, protocol, network or interface; rather, the ones that work and are most applicable to the task at hand.

Now getting back to Voltaire and InfiniBand, which I think has a future for some environments; however, I don't see it being the be-all end-all it was once promoted to be. And outside of the InfiniBand faithful (there are also iSCSI, SAS, Fibre Channel, FCoE, CEE and DCE among other devotees), I suspect that the results would be mixed.

I suspect that the Voltaire survey reflects that as well. If I surveyed an Ethernet dominated environment, I could take a pretty good guess at the results; likewise for a Fibre Channel or FCoE influenced environment. Not to mention the composition of the environment, its focus, and the business or applications being supported. One would also expect slightly different survey results from the likes of Aprius, Broadcom, Brocade, Cisco, Emulex, Mellanox (they are also involved with InfiniBand), NextIO, Qlogic (they actually do some InfiniBand activity as well), Virtensys or Xsigo (who actually support convergence of Fibre Channel and Ethernet via InfiniBand) among others.

Ok, so what is your take?

What's your preferred network interface for convergence?

For additional reading, here are some related links:

  • I/O Virtualization (IOV) Revisited
  • I/O, I/O, Its off to Virtual Work and VMworld I Go (or went)
  • Buzzword Bingo 1.0 – Are you ready for fall product announcements?
  • StorageIO in the News Update V2010.1
  • The Green and Virtual Data Center (Chapter 9)
  • Also check out what others including Scott Lowe have to say about IOV here or, Stuart Miniman about FCoE here, or of Greg Ferro here.
  • Oh, and for what it's worth for those concerned about FTC disclosure, Voltaire is not, nor have they been, a client of StorageIO; however, I used to work for a Fibre Channel, iSCSI, IP storage, LAN, SAN, MAN, WAN vendor and wrote a book on the topics :).

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    EMC Storage and Management Software Getting FAST

    EMC has announced the availability of the first phase of FAST (Fully Automated Storage Tiering) functionality for their Symmetrix VMAX, CLARiiON and Celerra storage systems.

    FAST was first previewed earlier this year (see here and here).

    Key themes of FAST are leveraging policies to enable automation in support of large scale environments, doing more with what you have, and enabling virtual data centers for traditional, private and public clouds while enhancing IT economics.

    This means enabling performance and capacity planning analysis along with facilitating load balancing or other infrastructure optimization activities to boost productivity, efficiency and resource usage effectiveness not to mention enabling Green IT.

    Is FAST revolutionary? That will depend on who you talk or listen to.

    Some vendors will jump up and down similar to Donkey in Shrek wanting to be picked or noticed, claiming to have been the first to implement LUN or file movement inside of storage systems, or as operating system, file system or volume manager built-in functionality. Others will claim to have done it via third party information lifecycle management (ILM) software including hierarchical storage management (HSM) tools among others. Ok, fair enough, then let their games begin (or continue) and I will leave it up to the various vendors and their followings to debate who's got what or not.

    BTW, anyone remember system managed storage on IBM mainframes, or array based movement in HP AutoRAID among others?

    Vendors have also in the past provided built in or third party add on tools for providing insight and awareness, ranging from capacity or space usage and allocation storage resource management (SRM) tools to performance advisory activity monitors or chargeback among others. For example, hot file analysis and reporting tools have been popular in the past, often operating system specific, for identifying candidate files for placement on SSD or other fast storage. Granted, the tools provided insight and awareness, but there was still the time consuming and error prone task of decision making and subsequent data movement, not to mention associated down time.
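As a rough illustration of the hot file analysis idea described above, a few lines of script can rank files by activity. This is a hedged sketch: the file names and access counts are made up, and real tools gather such counters from operating system specific instrumentation.

```python
# Illustrative hot file analysis: rank files by access count and flag
# the busiest ones as candidates for SSD or other fast storage.
# The counters below are hypothetical inputs, not from any real tool.

def hot_file_candidates(access_counts, top_n=3):
    """Return the top_n most frequently accessed files, busiest first."""
    ranked = sorted(access_counts.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _count in ranked[:top_n]]

counts = {"db.log": 9500, "index.dat": 7200, "archive.tar": 40,
          "report.pdf": 15, "cache.bin": 8800}
print(hot_file_candidates(counts))  # ['db.log', 'cache.bin', 'index.dat']
```

The remaining manual step, deciding whether and when to actually move those files, is exactly the time consuming part that FAST style automation aims to remove.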

    What is new here with FAST is the integrated approach: tools that are operating system independent, functionality in the array, availability across different product families and price bands, and optimization for improving user and IT productivity in medium to high-end enterprise scale environments.

    One of the knocks on previous technology is either the performance impact to an application when its data was moved, or the impact to other applications while data is being moved in the background. Another issue has been avoiding excessive thrashing due to data being moved at the expense of taking performance cycles from production applications. This would be similar to having too many snapshots or non-optimized RAID rebuilds running in the background on a storage system lacking sufficient performance capability. Another knock has been that historically, either third party host or appliance based software was needed, or solutions were designed and targeted for workgroup, departmental or small environments.

    What is FAST and how is it implemented
    FAST is technology for moving data within storage systems (and external for Celerra) for load balancing, capacity and performance optimization to meet quality of service (QoS) performance, availability and capacity along with energy and economic initiatives (figure 1) across different tiers or types of storage devices. For example, moving data from slower SATA disks where a performance bottleneck exists to faster Fibre Channel or SSD devices. Similarly, cold or infrequently accessed data on faster, more expensive storage devices can be marked as a candidate for migration to lower cost SATA devices based on customer policies.

    EMC FAST
    Figure 1 FAST big picture Source EMC

    The premise is that policies are defined based on activity along with capacity to determine when data becomes a candidate for movement. All movement is performed in the background, concurrently while applications are accessing data, without disruptions. This means that there are no stub files, application pauses or timeouts, or erratic I/O activity while data is being migrated. Another aspect of FAST data movement, which is performed in the actual storage systems by their respective controllers, is the ability for EMC management tools to identify hot or active LUNs or volumes (files in the case of Celerra) as candidates for moving (figure 2).
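To make the policy idea concrete, here is a toy sketch of activity based tiering decisions. The thresholds, tier names and LUN statistics are hypothetical assumptions for illustration; EMC's actual FAST policies and telemetry are far richer.

```python
# Toy model of policy based tiering: a LUN on slow disk with high
# activity is a promotion candidate; an idle LUN on fast disk is a
# demotion candidate. Thresholds and stats are made-up values.

def classify(luns, hot_iops=1000, cold_iops=50):
    """Return (lun, action) recommendations an administrator could
    approve before any movement occurs (supervised mode)."""
    moves = []
    for name, info in luns.items():
        if info["tier"] == "SATA" and info["iops"] >= hot_iops:
            moves.append((name, "promote"))
        elif info["tier"] in ("FC", "SSD") and info["iops"] <= cold_iops:
            moves.append((name, "demote"))
    return moves

luns = {
    "lun01": {"tier": "SATA", "iops": 4200},  # busy LUN on slow disk
    "lun02": {"tier": "FC",   "iops": 12},    # idle LUN on fast disk
    "lun03": {"tier": "SSD",  "iops": 9000},  # already well placed
}
print(classify(luns))  # [('lun01', 'promote'), ('lun02', 'demote')]
```

Returning recommendations rather than moving data directly mirrors the supervised mode: the system suggests, the administrator approves.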

    EMC FAST
    Figure 2 FAST what it does Source EMC

    However, users specify whether they want data moved on its own or under supervision, enabling a deterministic environment where the storage system and associated management tools make recommendations and suggestions for administrators to approve before migration occurs. This capability can be a safeguard as well as a learning mode, enabling organizations to become comfortable with the technology and its recommendations while applying knowledge of current business dynamics (figure 3).

    EMC FAST
    Figure 3 The Value proposition of FAST Source EMC

    FAST is implemented as technology resident or embedded in the EMC VMAX (aka Symmetrix), CLARiiON and Celerra along with external management software tools. In the case of the block (figure 4) storage systems including the DMX/VMAX and CLARiiON families of products that support FAST, data movement is on a LUN or volume basis and within a single storage system. For NAS or file based Celerra storage systems, FAST is implemented using FMA technology enabling movement either within the box or externally to other storage systems on a file basis.

    EMC FAST
    Figure 4 Example of FAST activity Source EMC

    What this means is that data at the LUN or volume level can be moved across different tiers of storage or disk drives within a CLARiiON instance, or, within a VMAX instance (e.g. amongst the nodes). For example, Virtual LUNs are a building block that is leveraged for data movement and migration combined with external management tools including Navisphere for the CLARiiON and Symmetrix management console along with Ionix all of which has been enhanced.

    Note however that initially data is not moved externally between different CLARiiONs or VMAX systems. For external data movement, other existing EMC tools would be deployed. In the case of Celerra, files can be moved within a specific CLARiiON as well as externally across other storage systems. External storage systems that files can be moved across using EMC FMA technology include other Celerras, Centera and ATMOS solutions based upon defined policies.

    What do I like most and why?

    Integration of management tools providing insight, with the ability for users to set up policies as well as approve or intercede with data movement and placement as their specific philosophies dictate. This is key: for those who want to, let the system manage itself (with your supervision of course). For those who prefer to take their time, take simple steps by using the solution initially to provide insight into hot or cold spots and then to help make decisions on what changes to make. Use the solution and adapt it to your specific environment and philosophy. What a concept: a tool that works for you vs. you working for it.

    What don't I like and why?

    There is and will remain some confusion about intra and inter box or system data movement and migration, operations that can be done by other EMC technology today for those who need it. For example, I have had questions asking if FAST is nothing more than EMC Invista or some other data mover appliance sitting in front of Symmetrix or CLARiiONs, and the answer is no. Thus EMC will need to articulate that FAST is both an umbrella term and a product feature set combining the storage system along with associated management tools unique to each of the different storage systems. In addition, there will be confusion, at least at GA, about the lack of support for Symmetrix DMX vs. the supported VMAX. Of course with EMC, pricing is always a question, so let's see how this plays out in the market with customer acceptance.

    What about the others?

    Certainly some will jump up and down claiming ratification of their visions welcoming EMC to the game while forgetting that there were others before them. However, it can also be said that EMC like others who have had LUN and volume movement or cloning capabilities for large scale solutions are taking the next step. Thus I would expect other vendors to continue movement in the same direction with their own unique spin and approach. For others who have in the past made automated tiering their marketing differentiation, I would suggest they come up with some new spins and stories as those functions are about to become table stakes or common feature functionality on a go forward basis.

    When and where to use?

    In theory, anyone with a Symmetrix/VMAX, CLARiiON or Celerra that supports the new functionality should be a candidate for the capabilities, that is, at least the insight, analysis, monitoring and situational awareness capabilities. Note that does not mean actually enabling the automated movement initially.

    While the concept is to enable automated system managed storage (Hmmm, mainframe deja vu anyone?), for those who want to walk before they run, enabling the insight and awareness capabilities can provide valuable information about how resources are being used. The next step would then be to look at the recommendations of the tools, and if you concur with the recommendations, take remedial action by telling the system when the movement can occur at your desired time.

    For those ready to run, then let it rip and take off as FAST as you want. In either situation, look at FAST for providing insight and situational awareness of hot and cold storage, where opportunities exist for optimizing and gaining efficiency in how resources are used, all important aspects for enabling a Green and Virtual Data Center, not to mention supporting public and private clouds.

    FYI, FTC Disclosure and FWIW

    I have done content related projects for EMC in the past (see here); they are not currently a client, nor have they sponsored, underwritten, influenced, remunerated, utilized third party offshore Swiss, Cayman or South American unnumbered bank accounts, or provided any other reimbursement for this post; however I did personally sign and hand Joe Tucci a copy of my book The Green and Virtual Data Center (CRC) ;).

    Bottom line

    Do I like what EMC is doing with FAST and this approach? Yes.

    Do I think there is room for improvement and additional enhancements? Absolutely!

    What's my recommendation? Have a look, do your homework and due diligence, and see if it's applicable to your environment while asking other vendors what they will be doing (under NDA if needed).

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    I/O Virtualization (IOV) Revisited

    Is I/O Virtualization (IOV) a server topic, a network topic, or a storage topic (See previous post)?

    Like server virtualization, IOV involves servers, storage, network, operating system, and other infrastructure resource management areas and disciplines. The business and technology value proposition or benefits of converged I/O networks and I/O virtualization are similar to those for server and storage virtualization.

    Additional benefits of IOV include:

      • Doing more with the resources (people and technology) that already exist while reducing costs
      • Single (or pair for high availability) interconnect for networking and storage I/O
      • Reduction of power, cooling, floor space, and other green efficiency benefits
      • Simplified cabling and reduced complexity for server network and storage interconnects
      • Boosting server performance by maximizing use of I/O or mezzanine slots
      • Reduced I/O and data center bottlenecks
      • Rapid re-deployment to meet changing workload and I/O profiles of virtual servers
      • Scaling I/O capacity to meet high-performance and clustered application needs
      • Leveraging common cabling infrastructure and physical networking facilities

    Before going further, let's take a step backwards for a few moments.

    To say that I/O and networking demands and requirements are increasing is an understatement. The amount of data being generated, copied, and retained for longer periods of time is elevating the importance of the role of data storage and infrastructure resource management (IRM). Networking and input/output (I/O) connectivity technologies (figure 1) tie together facilities, servers, storage, tools for measurement and management, and best practices on a local and wide area basis to enable an environmentally and economically friendly data center.

    TIERED ACCESS FOR SERVERS AND STORAGE
    There is an old saying that the best I/O, whether local or remote, is an I/O that does not have to occur. I/O is an essential activity for computers of all shapes, sizes, and focus to read and write data in and out of memory (including external storage) and to communicate with other computers and networking devices. This includes communicating on a local and wide area basis for access to or over Internet, cloud, XaaS, or managed services providers such as shown in figure 1.

    PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)
    Figure 1 The Big Picture: Data Center I/O and Networking

    The challenge of I/O is that some form of connectivity (logical and physical), along with associated software, is required, which introduces time delays while waiting for reads and writes to occur. I/O operations that are closest to the CPU or main processor should be the fastest and occur most frequently for access to main memory using internal local CPU to memory interconnects. In other words, fast servers or processors need fast I/O in terms of low latency, I/O operation rates (IOPS) and bandwidth capabilities.

    PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)
    Figure 2 Tiered I/O and Networking Access

    Moving out and away from the main processor, I/O remains fairly fast over short distances while becoming more flexible and cost effective. An example is the PCIe bus and I/O interconnect shown in Figure 2, which is slower than processor-to-memory interconnects but is still able to support attachment of various device adapters with very good performance in a cost effective manner.

    Farther from the main CPU or processor, various networking and I/O adapters can attach to PCIe, PCIx, or PCI interconnects for backward compatibility to support various distances, speeds, types of devices, and cost factors.

    In general, the faster a processor or server is, the more prone to a performance impact it will be when it has to wait for slower I/O operations.

    Consequently, faster servers need better-performing I/O connectivity and networks. Better performing means lower latency, more IOPS, and improved bandwidth to meet application profiles and types of operations.
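One way to see why faster servers need lower latency I/O is a bit of queuing arithmetic (Little's Law). The queue depths and latencies below are illustrative values, not measurements from any system.

```python
# Little's Law applied to storage I/O: with a fixed number of
# outstanding requests, achievable IOPS is capped by per-I/O latency.

def max_iops(queue_depth, latency_ms):
    """Upper bound on IOPS: concurrency divided by per-I/O latency."""
    return queue_depth * 1000.0 / latency_ms

# The same 32 outstanding I/Os at 1 ms vs. 10 ms latency:
print(max_iops(32, 1))   # 32000.0
print(max_iops(32, 10))  # 3200.0
```

A tenfold latency increase costs a tenfold drop in achievable IOPS, which is why a fast processor ends up stalled waiting behind slow I/O.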

    Peripheral Component Interconnect (PCI)
    Having established that computers need to perform some form of I/O to various devices, at the heart of many I/O and networking connectivity solutions is the Peripheral Component Interconnect (PCI) interface. PCI is an industry standard that specifies the chipsets used to communicate between CPUs and memory and the outside world of I/O and networking device peripherals.

    Figure 3 shows an example of multiple servers or blades, each with dedicated Fibre Channel (FC) and Ethernet adapters (there could be two or more for redundancy). Simply put, the more servers and devices to attach to, the more adapters, cabling and complexity, particularly for blade servers and dense rack mount systems.
    PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)
    Figure 3 Dedicated PCI adapters for I/O and networking devices

    Figure 4 shows an example of a PCI implementation including various components such as bridges, adapter slots, and adapter types. PCIe leverages multiple serial unidirectional point to point links, known as lanes, in contrast to traditional PCI, which used a parallel bus design.

    PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)

    Figure 4 PCI IOV Single Root Configuration Example

    In traditional PCI, bus width varied from 32 to 64 bits; in PCIe, the number of lanes combined with PCIe version and signaling rate determine performance. PCIe interfaces can have 1, 2, 4, 8, 16, or 32 lanes for data movement, depending on card or adapter format and form factor. For example, PCI and PCIx performance can be up to 528 MB per second with a 64 bit, 66 MHz signaling rate, while PCIe is capable of over 4 GB per second (e.g., 32 Gbit per second) in each direction using 16 lanes for high-end servers.
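The lane math above can be verified with a quick calculation. This generic sketch assumes PCIe 1.x signaling of 2.5 GT/s per lane with 8b/10b encoding (80 percent efficient); it is an approximation that ignores protocol overhead beyond encoding.

```python
# Approximate per-direction PCIe bandwidth from lane count, per-lane
# signaling rate (GT/s) and encoding efficiency (8b/10b = 0.8).

def pcie_gbytes_per_sec(lanes, gt_per_sec, encoding_efficiency):
    bits_per_sec = lanes * gt_per_sec * 1e9 * encoding_efficiency
    return bits_per_sec / 8 / 1e9  # gigabytes per second

# 16 lanes of PCIe 1.x: 16 * 2.5 GT/s * 0.8 = 32 Gbit/s = 4 GB/s
print(pcie_gbytes_per_sec(16, 2.5, 0.8))  # 4.0
```

The same function applied to a single lane gives 0.25 GB/s, the commonly quoted 250 MB/s per PCIe 1.x lane.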

    The importance of PCIe and its predecessors is a shift from multiple vendors’ different proprietary interconnects for attaching peripherals to servers. For the most part, vendors have shifted to supporting PCIe or early generations of PCI in some form, ranging from native internal on laptops and workstations to I/O, networking, and peripheral slots on larger servers.

    The most current version of PCI, as defined by the PCI Special Interest Group (PCISIG), is PCI Express (PCIe). Backwards compatibility exists by bridging previous generations, including PCIx and PCI, off a native PCIe bus or, in the past, bridging a PCIe bus to a PCIx native implementation. Beyond speed and bus width differences for the various generations and implementations, PCI adapters also are available in several form factors and applications.

    Traditional PCI was generally limited to a main processor or was internal to a single computer, but current generations of PCI Express (PCIe) include support for PCI Special Interest Group (PCISIG) I/O virtualization (IOV), enabling the PCI bus to be extended to distances of a few feet. Compared to local area networking, storage interconnects, and other I/O connectivity technologies, a few feet is a very short distance; but compared to the previous limit of a few inches, extended PCIe provides the ability for improved sharing of I/O and networking interconnects.

    I/O VIRTUALIZATION(IOV)
    On a traditional physical server, the operating system sees one or more instances of Fibre Channel and Ethernet adapters even if only a single physical adapter, such as an InfiniBand HCA, is installed in a PCI or PCIe slot. In the case of a virtualized server, for example Microsoft Hyper-V or VMware ESX/vSphere, the hypervisor will be able to see and share a single physical adapter, or multiple adapters for redundancy and performance, with guest operating systems. The guest systems see what appears to be a standard SAS, FC or Ethernet adapter or NIC using standard plug-and-play drivers.

    Virtual HBAs or virtual network interface cards (NICs) and switches are, as their names imply, virtual representations of a physical HBA or NIC, similar to how a virtual machine emulates a physical machine. With a virtual HBA or NIC, physical adapter resources are carved up and allocated in the same way virtual machines are, but instead of hosting a guest operating system like Windows, UNIX, or Linux, a SAS or FC HBA, FCoE converged network adapter (CNA) or Ethernet NIC is presented.

    In addition to virtual or software-based NICs, adapters, and switches found in server virtualization implementations, virtual LAN (VLAN), virtual SAN (VSAN), and virtual private network (VPN) are tools for providing abstraction and isolation or segmentation of physical resources. Using emulation and abstraction capabilities, various segments or sub networks can be physically connected yet logically isolated for management, performance, and security purposes. Some form of routing or gateway functionality enables various network segments or virtual networks to communicate with each other when appropriate security is met.

    PCI-SIG IOV
    PCI SIG IOV consists of a PCIe bridge attached to a PCI root complex along with an attachment to a separate PCI enclosure (Figure 5). Other components and facilities include address translation services (ATS), single-root IOV (SR IOV), and multi-root IOV (MR IOV). ATS enables performance to be optimized between an I/O device and a server's I/O memory management. Single-root SR IOV enables multiple guest operating systems to access a single I/O device simultaneously, without having to rely on a hypervisor for a virtual HBA or NIC.

    PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)

    Figure 5 PCI SIG IOV

    The benefit is that physical adapter cards, located in a physically separate enclosure, can be shared within a single physical server without having to incur any potential I/O overhead via virtualization software infrastructure. MR IOV is the next step, enabling a PCIe or SR IOV device to be accessed through a shared PCIe fabric across different physically separated servers and PCIe adapter enclosures. The benefit is increased sharing of physical adapters across multiple servers and operating systems, not to mention simplified cabling, reduced complexity and improved resource utilization.

    PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)
    Figure 6 PCI SIG MR IOV

    Figure 6 shows an example of a PCIe switched environment, where two physically separate servers or blade servers attach to an external PCIe enclosure or card cage for attachment to PCIe, PCIx, or PCI devices. Instead of the adapter cards physically plugging into each server, a high performance short-distance cable connects the server's PCI root complex via a PCIe bridge port to a PCIe bridge port in the enclosure device.

    In figure 6, either SR IOV or MR IOV can take place, depending on specific PCIe firmware, server hardware, operating system, devices, and associated drivers and management software. For an SR IOV example, each server has access to some number of dedicated adapters in the external card cage, for example InfiniBand, Fibre Channel, Ethernet, or Fibre Channel over Ethernet (FCoE) converged network adapters (CNAs), also known as HBAs. SR IOV implementations do not allow different physical servers to share adapter cards. MR IOV builds on SR IOV by enabling multiple physical servers to access and share PCI devices such as HBAs and NICs safely and with transparency.
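The sharing rules just described can be captured in a trivial model. This illustrates only the SR IOV vs. MR IOV distinction; the mode strings and server names are made up for the example.

```python
# Toy model: under SR IOV an adapter in the external cage is dedicated
# to one physical server (whose guests may share it); under MR IOV
# multiple physical servers may safely share the same adapter.

def can_share(mode, adapter_owner, requesting_server):
    if mode == "SR-IOV":
        return adapter_owner == requesting_server
    if mode == "MR-IOV":
        return True
    raise ValueError("unknown mode: " + mode)

print(can_share("SR-IOV", "server1", "server2"))  # False
print(can_share("MR-IOV", "server1", "server2"))  # True
```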

    The primary benefit of PCI IOV is to improve utilization of PCI devices, including adapters or mezzanine cards, as well as to enable performance and availability for slot-constrained and physical footprint or form factor-challenged servers. Caveats of PCI IOV are distance limitations and the need for hardware, firmware, operating system, and management software support to enable safe and transparent sharing of PCI devices. Examples of PCIe IOV vendors include Aprius, NextIO and Virtensys among others.

    InfiniBand IOV
    InfiniBand based IOV solutions are an alternative to Ethernet-based solutions. Essentially, InfiniBand approaches are similar, if not identical, to converged Ethernet approaches including FCoE, with the difference being InfiniBand as the network transport. InfiniBand HCAs with special firmware are installed into servers that then see a Fibre Channel HBA and Ethernet NIC from a single physical adapter. The InfiniBand HCA also attaches to a switch or director that in turn attaches to Fibre Channel SAN or Ethernet LAN networks.

    The value of InfiniBand converged networks is that they exist today and can be used for consolidation as well as to boost performance and availability. InfiniBand IOV also provides an alternative for those who choose not to deploy Ethernet.

    From a power, cooling, floor-space or footprint standpoint, converged networks can be used for consolidation to reduce the total number of adapters and the associated power and cooling. In addition to removing unneeded adapters without loss of functionality, converged networks also free up or allow a reduction in the amount of cabling, which can improve airflow for cooling, resulting in additional energy efficiency. An example of a vendor using InfiniBand as a platform for I/O virtualization is Xsigo.
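As a back-of-envelope illustration of the consolidation benefit, consider replacing dedicated FC HBAs and Ethernet NICs with converged adapters. The server counts and wattages are assumed values for illustration only, not measured figures.

```python
# Rough consolidation math: total adapter power before and after
# converging FC HBAs and Ethernet NICs onto shared adapters.

def adapter_watts(servers, adapters_per_server, watts_per_adapter):
    return servers * adapters_per_server * watts_per_adapter

before = adapter_watts(20, 4, 10)  # 2 HBAs + 2 NICs each, 10 W apiece
after = adapter_watts(20, 2, 12)   # 2 converged adapters, 12 W apiece
print(before, after, before - after)  # 800 480 320
```

Fewer adapters also means fewer cables, which is where the airflow and cooling benefit mentioned above comes from.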

    General takeaway points include the following:

    • Minimize the impact of I/O delays to applications, servers, storage, and networks
    • Do more with what you have, including improving utilization and performance
    • Consider latency, effective bandwidth, and availability in addition to cost
    • Apply the appropriate type and tiered I/O and networking to the task at hand
    • I/O operations and connectivity are being virtualized to simplify management
    • Convergence of networking transports and protocols continues to evolve
    • PCIe IOV is complementary to converged networking including FCoE

    Moving forward, a revolutionary new technology may emerge that finally eliminates the need for I/O operations. However until that time, or at least for the foreseeable future, several things can be done to minimize the impacts of I/O for local and remote networking as well as to simplify connectivity.

    PCIe Fundamentals Server Storage I/O Network Essentials

    Learn more about IOV, converged networks, LAN, SAN, MAN and WAN related topics in Chapter 9 (Networking with your servers and storage) of The Green and Virtual Data Center (CRC) as well as in Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier).

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Could Huawei buy Brocade?

    Disclosure: I have no connection to Huawei. I own no stock in Brocade, nor have I worked for Brocade as an employee; however, I did work for three years at SAN vendor INRANGE, which was acquired by CNT. I left to become an industry analyst prior to the acquisition by McData and well before Brocade bought McData. Brocade is not a current client; however, I have done speaking events pertaining to general industry trends and perspectives at various Brocade customer events in the past.

    Is Brocade for sale?

    Last week a Wall Street Journal article mentioned Brocade (BRCD) might be for sale.

    BRCD has a diverse product portfolio for Fibre Channel and Ethernet along with the emerging Fibre Channel over Ethernet (FCoE) market, and a who's who of OEM and channel partners. Why not be for sale? The timing is good for investors, and CEO Mike Klayko and his team have arguably done a good job of shifting and evolving the company.

    Generally speaking, let's keep things in perspective: everything is always for sale, and in an economy like the current one, bargains are everywhere. Many businesses are shopping; it's just a matter of how visible the shopping is for a seller or buyer, along with motivations and objectives including shareholder value.

    Consequently, the coconut wires are abuzz with talk and speculation of who will buy Brocade, or perhaps who Brocade might buy, among other merger and acquisition (M&A) chatter. For example, who might buy BRCD? Why not EMC (they spun McData off years ago via IPO), or IBM (they sold some of their networking business to Cisco years ago), or HP (currently an OEM partner of BRCD) as possible buyers?

    Last week I posted on twitter a response to a comment about who would want to buy Brocade with a response to the effect of why not a Huawei to which there was some silence except for industry luminary Steve Duplessie (have a look to see what Steve had to say).

    Part of being an analyst, IMHO, should be to actually analyze things vs. simply reporting on what others want you to report or what you have read or heard elsewhere. This also means talking about scenarios that are out of the box, or in adjacent boxes from some perspectives, or that might not be in line with traditional thinking. Sometimes this means breaking away and saying what may not be obvious or practical. Having said that, let's take a step back for a moment as to why Brocade may or may not be for sale and who might or might not be interested in them.

    IMHO, it has a lot to do with Cisco, and not just because Brocade sees no opportunity to continue competing with the 800lb gorilla of LAN/MAN networking that has moved into Brocade's stronghold of storage network SANs. Cisco is upsetting the apple cart with its server partners IBM, Dell, HP, Oracle/Sun and others by testing the waters of the server world with their UCS. So far I see this as something akin to probing the defenses of a target before launching an all-out attack.

    In other words, checking to see how the opposition responds, what defenses are put up, collecting G2 or intelligence, as well as gauging how the rest of the world or industry might respond to an all-out assault or shift of power or control. Of course, HP, IBM, Dell and Sun/Oracle will not let this move into their revenue and account control go unnoticed, with initial counter announcements having been made, some re-emphasizing relationships with Brocade along with its recent acquisition of Ethernet/IP vendor Foundry.

    Now what does this have to do with Brocade potentially being sold and why the title involving Huawei?

    Many of the recent industry acquisitions have been focused on shoring up technology or intellectual property (IP), eliminating a competitor, or simply taking advantage of market conditions. For example, Data Domain was sold to EMC in a bidding war with NetApp, HP bought IBRIX, Oracle bought or is trying to buy Sun, Oracle also bought Virtual Iron, Dell bought Perot after HP bought EDS a year or so ago, while Xerox bought ACS, and so the M&A game continues among other deals.

    Some of the deals are strategic, many are tactical. Brocade being bought I would put in the category of a strategic scenario, a bargaining chip or even a pawn, if you prefer, in a much bigger game that is about more than switches, directors, HBAs, LANs, SANs, MANs, WANs, POTS and PANs (check out my book “Resilient Storage Networks” – Elsevier)!

    So with conversations focused around Cisco expanding into servers to control the data center discussion, mindset, thinking, budgets and decision making, why wouldn't an HP, IBM or Dell, let alone a NetApp, Oracle/Sun or even EMC, want to buy Brocade as a bargaining chip in a bigger game? Why not a Ciena (they just bought some of Nortel's assets), Juniper or 3Com (more of a merger of equals to fight Cisco), Microsoft (might upset their partner Cisco) or Fujitsu (their telco group, that is) among others?

    Then why not Huawei, a company some may have heard of and others may not have?

    Who is Huawei you might ask?

    Simple: they are a very large IT solutions provider who is also a large player in China, with global operations including R&D in North America and many partnerships with U.S. vendors. By rough comparison, Cisco's most recently reported annual revenues are about $36.1B (all figures USD), BRCD about $1.5B, Juniper about $3.5B, 3Com about $1.3B and Huawei about $23B, with a year-over-year sales increase of 45%. Huawei has previous partnerships with storage vendors including Symantec and FalconStor among others. Huawei also has had a partnership with 3Com (H3C), a company that was the first of the LAN vendors to get into SANs (prematurely), beating Cisco easily by several years.
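    To put those rough revenue figures in perspective, here is a quick back-of-the-envelope comparison. The numbers are the approximations cited above (USD billions), not audited or current financials:

```python
# Quick scale comparison using the rough annual revenue figures cited above
# (USD billions, the post's approximations, not current financials).

revenue_b = {"Cisco": 36.1, "Huawei": 23.0, "Juniper": 3.5, "Brocade": 1.5, "3Com": 1.3}

# Huawei is roughly 15x Brocade's size and about 2/3 of Cisco's...
print(round(revenue_b["Huawei"] / revenue_b["Brocade"], 1))   # 15.3
print(round(revenue_b["Huawei"] / revenue_b["Cisco"], 2))     # 0.64

# ...and if the 45% year-over-year growth held, would pass today's Cisco
# in roughly two years (a simplistic straight-line projection, of course).
growth = 1.45
print(round(revenue_b["Huawei"] * growth ** 2, 1))            # 48.4
```

    In other words, a Brocade acquisition would be small change for a company of Huawei's size.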

    Sure there would be many hurdles and issues, similar to the ones CNT and INRANGE had to overcome, or McData and CNT, or Brocade and McData among others. However, in the much bigger game of IT account and thus budget control played by HP, IBM and Sun/Oracle among others, wouldn't maintaining a dual source for customers' networking needs make sense, or at least serve as a check on Cisco's expansion efforts? If nothing else, it maintains the status quo in the industry for now; or, if the rules and game are changing, wouldn't some of the bigger vendors want to get closer to the markets where Huawei is seeing rapid growth?

    Does this mean that Brocade could be bought? Sure.
    Does this mean Brocade cannot compete or is a sign of defeat? I don’t think so.
    Does this mean that Brocade could end up buying or merging with someone else? Sure, why not.
    Or, is it possible that someone like Huawei could end up buying Brocade? Why not!

    Now, if Huawei were to buy Brocade, that begs the question, just for fun: could they be renamed or spun off as a division called HuaweiCade or HuaCadeWei? Anything is possible when you look outside the box.

    Nuff said for now, food for thought.

    Cheers – gs

    Greg Schulz – StorageIO, Author “The Green and Virtual Data Center” (CRC)

    Clarifying Clustered Storage Confusion

    Clustered storage can be iSCSI, Fibre Channel block based or NAS (NFS or CIFS or proprietary file system) file system based. Clustered storage can also be found in virtual tape library (VTL) including dedupe solutions along with other storage solutions such as those for archiving, cloud, medical or other specialized grids among others.

    Recently in the IT and data storage specific industry, there has been a flurry of merger and acquisition (M&A) activity (here and here) along with new product enhancements or announcements around clustered storage. For example, HP bought clustered file system vendor IBRIX, complementing its previous acquisitions of another clustered file system vendor (PolyServe) a few years ago and of iSCSI block clustered storage software vendor LeftHand earlier this year. Another recent acquisition is that of LSI buying clustered NAS vendor ONStor, not to mention Dell buying iSCSI block clustered storage vendor EqualLogic about a year and a half ago, along with other vendor acquisitions or announcements involving storage and clustering.

    Where the confusion enters into play is with the term cluster, which means many things to different people, and even more so when clustered storage is combined with NAS or file based storage. For example, clustered NAS may imply a clustered file system when in reality a solution may only be multiple NAS filers, NAS heads, controllers or storage processors configured for availability or failover.

    What this means is that an NFS or CIFS file system may only be active on one node at a time; however, in the event of a failover, the file system shifts from one NAS hardware device (e.g. NAS head or filer) to another. On the other hand, a clustered file system enables an NFS, CIFS or other file system to be active on multiple nodes (e.g. NAS heads, controllers, etc.) concurrently. The concurrent access may be for small random reads and writes, for example supporting a popular website or file serving application, or it may be for parallel reads or writes to a large sequential file.
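    The distinction can be sketched in a few lines of code. This is an illustrative model only, with hypothetical class and method names (no vendor's actual API): a failover pair serves a file system from exactly one head at a time, while a clustered file system serves it from all nodes concurrently.

```python
# Illustrative sketch (hypothetical names, not any vendor's API) contrasting
# failover NAS with a clustered file system.

class FailoverNAS:
    """File system is active on exactly one head; the others stand by."""
    def __init__(self, heads):
        self.heads = list(heads)
        self.active = self.heads[0]          # only this head serves I/O

    def serving_heads(self):
        return [self.active]

    def fail(self, head):
        # On a failure, the file system shifts to a surviving head.
        self.heads = [h for h in self.heads if h != head]
        if self.active == head:
            self.active = self.heads[0]

class ClusteredFS:
    """File system is active on all nodes concurrently."""
    def __init__(self, nodes):
        self.nodes = list(nodes)

    def serving_heads(self):
        return list(self.nodes)              # every node serves reads/writes

    def fail(self, node):
        self.nodes.remove(node)              # survivors keep serving

nas = FailoverNAS(["headA", "headB"])
cfs = ClusteredFS(["node1", "node2", "node3"])
print(nas.serving_heads())   # one active head
print(cfs.serving_heads())   # all nodes active
nas.fail("headA")
print(nas.serving_heads())   # file system shifted to the surviving head
```

    Either design survives a node failure; the difference is how many nodes can serve the same file system before anything fails.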

    Clustered storage is no longer exclusive to the confines of high-performance sequential and parallel scientific computing or ultra large environments. Small files and I/O (read or write), including meta-data information, are also being supported by a new generation of multipurpose, flexible, clustered storage solutions that can be tailored to support different applications workloads.

    There are many different types of clustered and bulk storage systems. Clustered storage solutions may be block (iSCSI or Fibre Channel), NAS or file serving, virtual tape library (VTL), or archiving and object-or content-addressable storage. Clustered storage in general is similar to using clustered servers, providing scale beyond the limits of a single traditional system—scale for performance, scale for availability, and scale for capacity and to enable growth in a modular fashion, adding performance and intelligence capabilities along with capacity.

    For smaller environments, clustered storage enables modular pay-as-you-grow capabilities to address specific performance or capacity needs. For larger environments, clustered storage enables growth beyond the limits of a single storage system to meet performance, capacity, or availability needs.

    Applications that lend themselves to clustered and bulk storage solutions include:

    • Unstructured data files, including spreadsheets, PDFs, slide decks, and other documents
    • Email systems, including Microsoft Exchange Personal (.PST) files stored on file servers
    • Users’ home directories and online file storage for documents and multimedia
    • Web-based managed service providers for online data storage, backup, and restore
    • Rich media data delivery, hosting, and social networking Internet sites
    • Media and entertainment creation, including animation rendering and post processing
    • High-performance databases such as Oracle with NFS direct I/O
    • Financial services and telecommunications, transportation, logistics, and manufacturing
    • Project-oriented development, simulation, and energy exploration
    • Low-cost, high-performance caching for transient and look-up or reference data
    • Real-time performance including fraud detection and electronic surveillance
    • Life sciences, chemical research, and computer-aided design

    Clustered storage solutions go beyond meeting the basic requirements of supporting large sequential parallel or concurrent file access. Clustered storage systems can also support random access of small files for highly concurrent online and other applications. Scalable and flexible clustered file servers that leverage commonly deployed servers, networking, and storage technologies are well suited for new and emerging applications, including bulk storage of online unstructured data, cloud services, and multimedia, where extreme scaling of performance (IOPS or bandwidth), low latency, storage capacity, and flexibility at a low cost are needed.

    The bandwidth-intensive and parallel-access performance characteristics associated with clustered storage are generally known; what is not so commonly known is the breakthrough to support small and random IOPS associated with database, email, general-purpose file serving, home directories, and meta-data look-up (Figure 1). Note that a clustered storage system, and in particular, a clustered NAS may or may not include a clustered file system.

    Figure 1 – Generic clustered storage model (Courtesy “The Green and Virtual Data Center” (CRC))

    More nodes, ports, memory, and disks do not guarantee more performance for applications. Performance depends on how those resources are deployed and how the storage management software enables those resources to avoid bottlenecks. For some clustered NAS and storage systems, more nodes are required to compensate for overhead or performance congestion when processing diverse application workloads. Other things to consider include support for industry-standard interfaces, protocols, and technologies.

    Scalable and flexible clustered file server and storage systems provide the potential to leverage the inherent processing capabilities of constantly improving underlying hardware platforms. For example, software-based clustered storage systems that do not rely on proprietary hardware can be deployed on industry-standard high-density servers and blade centers and utilize third-party internal or external storage.

    Clustered storage is no longer exclusive to niche applications or scientific and high-performance computing environments. Organizations of all sizes can benefit from ultra scalable, flexible, clustered NAS storage that supports application performance needs from small random I/O to meta-data lookup and large-stream sequential I/O that scales with stability to grow with business and application needs.

    Additional considerations for clustered NAS storage solutions include the following.

    • Can memory, processors, and I/O devices be varied to meet application needs?
    • Is there support for large file systems supporting many small files as well as large files?
    • What is the performance for small random IOPS and bandwidth for large sequential I/O?
    • How is performance enabled across different applications in the same cluster instance?
    • Are I/O requests, including meta-data look-up, funneled through a single node?
    • How does a solution scale as the number of nodes and storage devices is increased?
    • How disruptive and time-consuming is adding new or replacing existing storage?
    • Is proprietary hardware needed, or can industry-standard servers and storage be used?
    • What data management features, including load balancing and data protection, exist?
    • What storage interface can be used: SAS, SATA, iSCSI, or Fibre Channel?
    • What types of storage devices are supported: SSD, SAS, Fibre Channel, or SATA disks?

    As with most storage systems, it is not the total number of hard disk drives (HDDs), the quantity and speed of tiered-access I/O connectivity, the types and speeds of the processors, or even the amount of cache memory that determines performance. The performance differentiator is how a manufacturer combines the various components to create a solution that delivers a given level of performance with lower power consumption.

    To avoid performance surprises, be leery of performance claims based solely on speed and quantity of HDDs or the speed and number of ports, processors and memory. How the resources are deployed and how the storage management software enables those resources to avoid bottlenecks are more important. For some clustered NAS and storage systems, more nodes are required to compensate for overhead or performance congestion.

    Learn more about clustered storage (block, file, VTL/dedupe, archive), clustered NAS, clustered file system, grids and cloud storage among other topics in the following links:

    "The Many faces of NAS – Which is appropriate for you?"

    Article: Clarifying Storage Cluster Confusion
    Presentation: Clustered Storage: “From SMB, to Scientific, to File Serving, to Commercial, Social Networking and Web 2.0”
    Video Interview: How to Scale Data Storage Systems with Clustering
    Guidelines for controlling clustering
    The benefits of clustered storage

    Along with other material on the StorageIO Tips and Tools or portfolio archive or events pages.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio


    SPC and Storage Benchmarking Games

    Storage I/O trends

    There is a post over in one of the LinkedIn discussion forums about Storage Performance Council (SPC) benchmarks being misleading, to which I just posted a short response. Here's the full post, as LinkedIn has a response length limit.

    While the SPC is far from perfect, it is, at least for block storage, arguably better than doing nothing.

    For the most part, SPC has become a de facto standard for at least block storage benchmarks, independent of using IOmeter or other tools or vendor specific simulations, similar to how MSFT ESRP is for Exchange, TPC for databases, SPEC for NFS and so forth. In fact, SPC even recently, rather quietly, rolled out a new set of what could be considered the basis for green storage benchmarks. I would argue that SPC results in themselves are not misleading, particularly if you take the time to look at both the executive and full disclosures and look beyond the summary.

    Some vendors have taken advantage of the SPC results by playing games with discounting on prices (something that's allowed under SPC rules) to make apples to oranges comparisons on cost per IOP, among other ploys. This practice is nothing new to the IT industry, or other industries for that matter; hence benchmark games.

    Where the misleading SPC issue can come into play is for those who simply look at what a vendor is claiming without looking at the rest of the story, or who do not take the time to examine the results and make apples to apples comparisons, instead believing the apples to oranges comparison. After all, the results are there for a reason. That reason is for those really interested to dig in and sift through the material; granted, not everyone wants to do that.

    For example, some vendors can show a highly discounted list price to get a better cost per IOP on an apples to oranges basis; however, when prices are normalized, the results can be quite different. And here's the real gem for those who dig into the SPC results, including looking at the configurations: latency under workload is also reported.

    The reason that latency is a gem is that generally speaking, latency does not lie.

    What this means is that if vendor A doubles the amount of cache, doubles the number of controllers, doubles the number of disk drives, plays games with actual storage utilization (ASU), or utilizes fast interfaces from 10GbE iSCSI to 8Gb FC or FCoE or SAS to get a better cost per IOP number with discounting, look at the latency numbers. There have been some recent examples of this where vendor A has a better cost per IOP while achieving a higher number of IOPS at a lower cost compared to vendor B, which is what is typically reported in a press release or news story. (See a blog entry that also points to a CMG presentation discussion around this topic here.)

    Then go and look at the two results: vendor B may be at list price while vendor A is severely discounted, which is not a bad thing, as that then becomes the starting list price from which customers should begin negotiations. However, to be fair, normalize the pricing for fun, look at how much more equipment vendor A may need while having to discount to offset the increased amount of hardware, and then look at latency.
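    A toy calculation shows how this normalization game works. All figures here are made up for illustration (they do not come from any actual SPC submission): vendor A throws more hardware at the test and discounts heavily, vendor B submits at list price with less hardware and lower latency.

```python
# Hypothetical numbers (not from any actual SPC result) showing how discounting
# can flip a cost-per-IOPS comparison, and why latency still matters.

def cost_per_iops(price, iops):
    return price / iops

# Vendor A: more hardware, 50% discount, higher IOPS, higher latency.
a_list_price, a_discount, a_iops, a_latency_ms = 1_000_000, 0.50, 200_000, 8.0
# Vendor B: less hardware, no discount, fewer IOPS, lower latency.
b_list_price, b_discount, b_iops, b_latency_ms = 600_000, 0.00, 150_000, 4.0

a_discounted = a_list_price * (1 - a_discount)
b_discounted = b_list_price * (1 - b_discount)

# Headline comparison (discounted price): vendor A looks cheaper per IOP...
print(cost_per_iops(a_discounted, a_iops))   # 2.5 per IOP
print(cost_per_iops(b_discounted, b_iops))   # 4.0 per IOP

# ...but normalized at list price, vendor B wins on cost per IOP,
# and B's latency under workload is half of A's.
print(cost_per_iops(a_list_price, a_iops))   # 5.0 per IOP
print(cost_per_iops(b_list_price, b_iops))   # 4.0 per IOP
```

    The headline number and the normalized number point in opposite directions, which is exactly the apples to oranges game described above.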

    In some of the recently reported record results, the latency results are actually better for vendor B than for vendor A. And why does latency matter? Beyond showing what a controller can actually do in terms of leveraging the number of disks, cache, interface ports and so forth, the big kicker is for those talking about SSD (RAM or flash), in that SSD is generally about latency. To fully and effectively utilize SSD, which is a low latency device, you want a controller that can do a decent job of handling IOPS; however, you also need a controller that can do a decent job of handling IOPS with low latency under heavy workload conditions.

    Thus the SPC, again while far from perfect, at least as a thumbnail sketch and comparison is not necessarily misleading; more often than not it's how the results are utilized that is misleading. Now, in the quest for the SPC administrators to gain more members and broader industry participation and thus secure their own future, is the SPC organization or administration opening itself up to being used more and more as a marketing tool in ways that potentially compromise its credibility? (I know, some will dispute the validity of SPC; however, that's reserved for a different discussion ;) )

    There is a bit of déjà vu here for those involved with RAID and storage who recall how the RAID Advisory Board (RAB), in its quest to gain broader industry adoption and support, succumbed to marketing pressures and use, or what some would describe as misuse, and is now a member of the “Where are they now” club!

    Don’t get me wrong here; I like the SPC tests/results/format, and there is a lot of good information in the SPC. The various vendor folks who work very hard behind the scenes to make the SPC actually work and continue to evolve it also all deserve a great big kudos, an “atta boy” or “atta girl” for the fine work they have been doing, work that I hope does not become lost in the quest to gain market adoption for the SPC.

    Ok, so then this should all beg the question of what is the best benchmark. Simple: the one that most closely resembles your actual applications, workload, conditions, configuration and environment.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio


    Cisco wins FCoE Pre-Season and Primaries – Now for the Main Event!

    Storage I/O trends

    Ok, unless you have turned off all of your news feeds and RSS feeds, discontinued all printed industry and trade publications, and stopped visiting blogs and other online venues, chances are you have heard that Cisco, NetApp, EMC, Emulex and QLogic have made a series of announcements signaling proof of life for emerging Fibre Channel over Ethernet (FCoE), based on Cisco Data Center Ethernet (DCE) or the emerging, more general Converged Enhanced Ethernet (CEE).

    Now, if you have not heard, check out the various industry news and information venues and blogs. Likewise, if you are a Brocadian, don't worry and do not get upset by the early poll or exit poll results from the primaries; the real and broad adoption game has not started yet. However, get your game faces on.

    At this point, given the newness of the technology and its early adopter status, it's safe to say that Cisco has won the pre-season or primaries for the first FCoE battle. However, despite the hype and proof of life activity, which can be gauged by the counter claims from the iSCSI camps, the main event of real market adoption and deployment will start ramping up in 2009, with broader adoption occurring in the 2010 to 2011 timeframes.

    This is not to say that there will not be any adoption of FCoE over the next 12-18 months; quite the opposite. There will be plenty of early adopters, test and pilot cases, as well as Cisco faithful who choose to go the FCoE route vs. another round of Fibre Channel at 8Gb, or who want to go to FCoE at 10Gb instead of iSCSI or NAS at 10GbE for whatever reasons. However, the core target market for FCoE is the higher-end, risk averse environments that shy away from bleeding edge technology unless there is an adjacent and fully redundant blood bank located next door, if not on-site.

    Consequently, similar to how Fibre Channel and FICON were slow to ramp up, taking a couple of years from first product and component availability, FCoE will continue to gain ground as the complete and open ecosystem comes into place, including adapters, switches and directors, routers, bridges and gateways, storage systems, as well as management tools and associated training and skills development.

    Watch for vendors to ratchet up discussions about how many FCoE or FCoE enabled systems are shipped, with an eye on the keyword FCoE enabled, which means that the systems may or may not actually be deployed in FCoE mode; rather, they are ready for it. Sound familiar to early iSCSI or even FC product shipments?

    Rest assured, FCoE has a very bright future (see here and here) at the mid to high-end of the market, while iSCSI will continue to grow and gain adoption from the mid-market down to the lower reaches of the SMB market. Of course there will be border skirmishes as iSCSI tries to move up market and FCoE tries to move down market, and there will be those that stay the course for another round of Fibre Channel beyond 8Gb, while NAS continues to gain ground in all market segments and SAS at the very low-end where even iSCSI is too expensive. Learn more over at the Fibre Channel Industry Association (FCIA) or FCoE Portal sites as well as at the Brocade, Cisco, EMC, Emulex, NetApp and QLogic sites among others.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio


    Why XIV is so important to IBM's storage business – It's not about the technology or product!

    Storage I/O trends

    Ok, so I know I’m not taking a popular stance on this one with either camp; the IBMers and their faithful followers as well as the growing legion of XIV followers will take exception, I’m sure.

    Likewise, the nay sayers would ask why not take a real swing and knock the ball out of the park, as if it were baseball batting practice. No, I’m going a different route, as either of those approaches would be too easy and has been pretty well covered already.

    The IBM XIV product that IBM acquired back in January 2008 is getting a lot of buzz (some good, some not so good) lately in the media and blog sphere (here and here which in turn lead to many others) as well as in various industry and customer discussions.

    How ironic that the 2008 version of storage in an election year in the U.S. pits the IBM and XIV faithful in one camp and the nay sayers and competition in the other. To hear both camps go at it with points, counter points, mud-slinging and lipstick slurs should be no surprise when it comes to vendors' points and counter points. In fact, the only thing missing from some of the discussions, or excuse me, debates, is an impromptu on-stage appearance by Senators Biden, Clinton, McCain or Obama, or Governor Palin, to weigh in on the issues; after all, it is the 2008 edition of storage in an election year here in the United States.

    I am not going to jump on the XIV bashing bandwagon, which about everyone in the industry is now doing except for the proponents, or the folks taking a step back and looking at the bigger non-partisan picture, like Steve Duplessie, the genesis billionaire founder of ESG and probably the future owner of the New England Patriots (American) football team, whose valuation may have dipped enough for Steve to buy now that their star quarterback Tom Brady is out with a leg injury that will take longer to rebuild than all the RAID 6 configured 1TByte SATA disk drives in the 3PAR, Dell, EMC, HGST, HP, IBM, NetApp, Seagate, Sun and Western Digital test labs, as well as many other vendors' test labs, combined. As for the proponents or faithful, in the spirit of providing freedom of choice and flexible options, the Kool-Aid comes in both XIV orange as well as traditional IBM XIV blue, nuff said.

    In my opinion, which is just that, an opinion, XIV is going to help, and may have already helped, IBM's storage business, not from the technical architecture or product capabilities, or even the number of units that IBM might eventually sell bundled or un-bundled. Rather, XIV is getting IBM exposure and coverage, a chance to sit at the table with some re-invigorated spirit to tell the customer what IBM is doing and, if they pay attention in-between slide decks, grab the orders for upgrades, expansions or new installs of the existing IBM storage product line, then continue on with their pitch until the customer asks to place another order, all while touching lightly on the products IBM customers continue to buy and look to upgrade, including:

    IBM disk
    IBM tape – tape and virtual tape
    DS8000 – Mainframe and open systems storage
    DS5000 – New version of DS4000 to compete with new EMC CLARiiON CX4s
    DS4000 – aka the array formerly known as the FAStT
    DS3000 – Entry level iSCSI, SAS and FC storage
    NetApp based N-Series – For NAS Windows CIFS and NFS file sharing
    DR550 archiving solution
    SAN Volume Controller-SVC

    Not to mention other niche products such as the Data Direct Networks-DDN based DCS9550 or IBM developed DS6000 or recently acquired Diligent VTL and de-duping software.

    IBM will be successful with XIV not by how many systems they sell, or give away, oh, excuse me, add value to other solutions with. How IBM should gauge XIV success is by increased sales of their other storage systems and associated software and networking technologies, including the mainframe attachable DS8000 and the new high performance midrange DS5000 that builds on the success of the DS4000, all of which should have both Brocade and Cisco salivating given the need for more 4GFC and 8GFC Fibre Channel (and FICON for DS8000) ports, switches, adapters and directors. Then there is the NetApp based N-Series for NAS and file serving to support unstructured data, including Web and social networking.

    If I were Brocade, Cisco, NetApp or any of the other many IBM suppliers, I would be putting solution bundles together certainly to ride the XIV wave, however have solution bundles ready to play to the collateral impact of all the other IBM storage products getting coverage. For example sure Brocade and Cisco will want to talk about more Fibre Channel and iSCSI switch ports for the XIV, however, also talk performance to be able to unleash the capabilities of the DS8000 and DS5000, or, file management tools for the N-Series as well as bundles around the archiving DR550 solution.

    The N-Series NAS gateway that could be used in theory to dress up XIV and actually make it usable for NAS file serving, file sharing and Web 2.0 related applications or unstructured data. There is the IBM SAN Volume Controller-SVC that virtualizes almost everything except the kitchen sink which may be in a future release. There is the DR550 archiving and compliance platform that not only provides RAID 6 protected energy-efficient storage, it also supports movement of data to tape, now if IBM could get the story out on that solution which maybe in the course of talking about XIV, IBM DR550 might get discovered as well. Of course there are all the other backup, archiving, data protection management and associated tools that will get pick-up and traction as well.

    You see, even if IBM quadruples the XIV footprint of revenue installed in production systems with 400% growth rates year over year, never mind the nay-sayers, that would only be about 1/20th or 1/50th of what Dell/EqualLogic, LeftHand via HP/Intel, or even IBM xSeries have done, not to mention all the others using IBRIX, HP/PolyServe, Isilon, 3PAR, Panasas, Permabit, NEC, and the list goes on with similar clustered solutions.

    The point is to watch for an uptick, even if only 10%, on the installed DS8000, the new DS5000, DS4000, DS3000, N series (NetApp), DR550 (the archive appliance IBM should talk more about), SVC or the TS series VTLs.

    Even a modest jump due to IBM folks getting out in front of customers and business partners adds up. A 10% jump on the installed base of somewhere around 40,000 DS8000 (and earlier ESS) systems is 4,000 new systems. On the combined DS5000/DS4000/DS3000, formerly known as FAStT, with a combined footprint of over 100,000 systems in the field, 10% would be 10,000 new systems. Take the SVC, with about 3,000 instances (or about 11,000 clustered nodes): 10% would mean another 300 new instances. Continue this sort of improvement across the rest of the line and IBM will have paid for not only XIV but also the retirement fund of Moshe (former EMCer, founder of XIV and now IBM fellow).
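    As a back-of-envelope sanity check, the arithmetic above can be sketched in a few lines of Python. The installed-base figures are the post's own rough estimates, not authoritative numbers, and the 10% uptick rate is purely illustrative:

```python
def uptick(installed_base: int, rate: float) -> int:
    """New system sales implied by a given percentage uptick on an installed base."""
    return round(installed_base * rate)

# Rough installed-base estimates cited in the post, with a hypothetical 10% uptick
ds8000_new = uptick(40_000, 0.10)    # DS8000 plus earlier ESS systems
midrange_new = uptick(100_000, 0.10) # combined DS5000/DS4000/DS3000 (formerly FAStT)
svc_new = uptick(3_000, 0.10)        # SVC instances

print(ds8000_new, midrange_new, svc_new)  # 4000 10000 300
```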

    IBM may be laughing all the way to the big blue bank, even having enough money left over to finally buy a clustered NAS file system for Web 2.0 and bulk storage, such as IBRIX, before someone else like Dell, EMC or HP gets their hands on it, all while everyone else continues to bash how badly XIV is performing. Whether this is a strategy by design or one that IBM simply falls into, it could be brilliant if well executed; however, only time will tell.

    If those who want to rip on XIV really want to inflict damage, cease and ignore XIV for what it is or is not and find something else to talk about. Rest assured, if there are other good stories, they will get covered and XIV will be ignored.

    Instead of ripping on XIV, or listening to more XIV hype, I'm going fishing and maybe will come back with a fish story to rival the XIV hype. In the meantime, I look forward to seeing success for the IBM storage business as a whole, with IBMers and their partners getting excited to go out and talk about storage, and being surprised by customers giving them orders for other IBM products. That is, unless the IBM revenue prevention department gets in the way; for example, if IBMers or their partners, in the excitement of the XIV moment, forget to sell customers what they want and will buy today, and to grab the low-hanging fruit (sales orders for upgrades and new sales) of current and recently enhanced products, while trying to reprogram and recondition customers to the XIV story.

    Congratulations to IBM and their partners as well as OEM suppliers if they can collectively pull the ruse off and actually stimulate total storage sales while XIV serves as a decoy, maybe even getting a few more installs and some revenue to help prop it up.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved