Server Storage I/O Network Benchmark Winter Olympic Games


It is time for the 2014 Winter Olympic Games in Sochi, Russia, where competitors (including some actual athletes) come together in what has become a mix of sporting and entertainment activities.

Games of inches and seconds, performance and skill

Some of these activities, including the real Olympic events, are heavier on sports appeal, some on artistry, and others are pure entertainment with a mix of beauty, brawn and maybe even a beast or two. Then there are those events that have been around since the last ice age, while others are from the post global warming era.

Hence some have been around longer than others, showing a mix of old and new in terms of the sports and athletes, not to mention the technology and the outfits.

I mean, how about some of the new snowboarding and things being done on skis; can you imagine if they brought in, as a new "X" sport, roller derby on the short-track speed skating oval sponsored by Red Bull or Bud Light? Wait, that sounds like the Red Bull Crashed Ice event (check it out if you are not familiar with it): think motocross, hockey and downhill, all on ice. How about getting some of the South African long distance sprinters to learn how to speed skate; talk about moving some gold metal (as in medals) back to the African continent! On the other hand, the current powers that be would lodge a protest, or change the benchmark or the rules to stay in power. Hmm, sound familiar with IT?

Ok, enough of the fun stuff (for now), let’s get back on track here (catch that pun?).

Metrics that matter, winners and losers

Since these are the Olympics, let's also remember that there are still awards for individual and team winners (along with second and third place); after all, if all Olympians were winners there would be no losers, and if there were no losers, how could there be a winner?

Who or what decides the winners vs. the losers involves metrics that matter, something that also applies to server, storage I/O and networking hardware, software and services.

In the case of the Olympics, some of the sports or events are based on speed or how fast (e.g. time) something is done, or how much is accumulated or done in that amount of time, while in other events the metrics that matter may be more of a mystery, based on judging that may be subjective.

The technologies to record times, scores, movements and other things that go into scoring have certainly improved, as has the ability for fans to engage and vote their choice, or opposition, via social media venues from Twitter to Facebook among others.

What about server storage I/O networking benchmarks

There could easily be an Information Technology (IT) or data infrastructure benchmarking Olympics with events such as fastest server (physical, virtual or cloud, individual or consortium team), storage, I/O and networking across hardware, software or services. Of course there would be different approaches favored by the various teams, with disputes, protests and other things sometimes seen during Olympic games. One of the challenges, however, is what would be the metrics that matter, particularly to the various marketing groups of each organization or their joint consortium?

Just like with sports, which of the various industry trade groups or consortiums would be the ruling party or voice for a particular event, specifying the competition criteria, scoring and other things? What happens when there is a breakaway group that launches its own competing approach, and when it comes time for the IT benchmarking Olympics, which of the various bodies does the Olympic committee defer to? In case you are not familiar, in sports there are various groups and sub-groups who can decide the participants for various sports, perhaps independent of an overall governing group; sound like IT?


Let the games begin

So then the fun starts; however, which of the events are relevant to your needs or interests? Sure, some are fun or entertaining while others are not practical. Some you can do yourself, while others are just fun to watch, both for the thrill of victory and the agony of defeat.

This is similar to IT industry benchmarking and specmanship competitions, some of which are more relevant than others, and then there are those that are simply entertaining.

Likewise some benchmarks or workload claims can be reproduced to confirm the results or claims, while others remain more like the results of figure skating judges.

Hence some of the benchmark games are more entertaining; however, for those who are not aware or informed, they may turn out to be misinformation or lead to poor decision-making.

Consequently, the benchmarks and metrics that matter are those that most closely align with what your environment is or will be doing.

If your environment is going to be running a particular simulation or script, then so be it; on the other hand (otoh), look for comparisons that are reflective of your actual workload.

On the other hand, if you can't find something that is applicable, then look at tools and results that have meaning along with relevance, not to mention that provide clarity and are repeatable. Being repeatable means that you can get access to the tools, scripts or scenario (preferably free) to run in your own environment.

There is a long list of benchmarks and workload simulation tools, as well as traces, available, some for free and some for fee, that apply to components, subsystems or complete application systems across server, storage I/O networking applications and hardware. These include those for email such as Microsoft Exchange, SQL databases, LoginVSI for VDI, VMmark for VMware, and Hadoop and HDFS related tools for big data, among many others (see more here).
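To make the repeatability point concrete, below is a minimal random-read measurement sketch in Python. It is purely illustrative and not a substitute for the tools named above; the test file path, block size and duration are arbitrary assumptions, and it ignores real-world factors such as the OS page cache, queue depth and write workloads. The point is simply that anyone can re-run the same script, with the same parameters, in their own environment and compare results.

```python
# Minimal, illustrative random-read test (a sketch, not a replacement for fio,
# VMmark, LoginVSI, Exchange/ESRP or the other tools mentioned above).
import os
import random
import time

PATH = "testfile.bin"   # hypothetical pre-created test file (ideally larger than RAM)
BLOCK_SIZE = 4096       # 4KB reads, a common small-block I/O size
DURATION = 10           # seconds to run

blocks = os.path.getsize(PATH) // BLOCK_SIZE
ios = 0
start = time.time()
with open(PATH, "rb") as f:
    while time.time() - start < DURATION:
        f.seek(random.randrange(blocks) * BLOCK_SIZE)
        f.read(BLOCK_SIZE)
        ios += 1
elapsed = time.time() - start
print(f"{ios} reads in {elapsed:.1f}s = {ios / elapsed:.0f} IOPS "
      f"({ios * BLOCK_SIZE / elapsed / 1e6:.1f} MB/s)")
```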

Apples to Apples vs. Apple pie vs. Orange Jello

Something else that matters is apples-to-apples vs. apples-to-oranges comparisons, or worse, apple pie to orange Jello.

This means knowing or gaining insight into the pieces as well as how they behave under different conditions, along with the entire system, to establish a baseline (e.g. normal) vs. abnormal.

Hence it's the winter server storage I/O networking benchmark games, with the first event having been earlier this week when team Brocade took on Cisco. Here is a link to a post by Tony Bourke (@tbourke) that provides some interesting perspectives and interactions, along with a link here to the Brocade sponsored report done by Evaluator Group.

In this match-up, Team Brocade (with HP servers, Brocade switches and an unnamed 16GFC SSD storage system) takes on Team Cisco and their UCS (also with an unnamed 16GFC SSD system that I wonder if Cisco even knows whose it was). Ironic that it was almost six years to the day since a similar winter benchmark wonder event when NetApp submitted an SPC result for EMC (read more about that cold day here).

The Brocade FC (using HP servers and somebody's SSD storage) vs. Cisco FCoE using UCS (and somebody else's storage) comparison is actually quite entertaining; granted, it can also be educational on what to do or not do, and what to focus on or include, among other things. The report also raises many questions that seem more like wondering why somebody won an ice figure skating event vs. the winner of a men's or women's hockey game.

Closing thoughts (for now)

So here's my last point and perspective: let's have a side of context with them IOPS, TPS, bandwidth and other metrics that matter.

Take metrics and benchmarks with a grain of salt; however, look for transparency in how they are produced and what information is provided, and most important, ask whether they matter or are relevant to your environment, or are simply entertaining.
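As a quick worked example of why that context matters, the same IOPS number translates into very different bandwidth depending on I/O size; the figures below are made-up illustrations, not measurements from any report discussed here.

```python
# Bandwidth = IOPS x I/O size; quoting one without the other hides the story.
def bandwidth_mbps(iops, io_size_bytes):
    return iops * io_size_bytes / 1e6

print(bandwidth_mbps(100_000, 512))     # 100K IOPS of 512 byte I/Os ~ 51 MB/s
print(bandwidth_mbps(100_000, 65_536))  # 100K IOPS of 64KB I/Os     ~ 6,554 MB/s
```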

Let's see what the next event in the ongoing server storage I/O networking benchmark 2014 winter Olympic Games will be.

Some more reading:
SPC and Storage Benchmarking Games
Moving Beyond the Benchmark Brouhaha
More storage and IO metrics that matter
Its US Census time, What about IT Data Centers?
March Metrics and Measuring Social Media (keep in mind that March Madness is just around the corner)
PUE, Are you Managing Power, Energy or Productivity?

How many IOPS can a HDD, HHDD or SSD do?
Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?

You can also take part in the on-going or re-emerging FC vs. FCoE hype and fud events by casting your vote here and see results below.

Note the following poll is from a previous StorageIOblog post (Where has the FCoE hype and FUD gone? (with poll)).

Disclosure: I used to work for Evaluator Group after working for a company called Inrange that competed with, then got absorbed (via CNT and McData) into Brocade, which has been a client, as has Cisco. I also do performance and functionality testing, audits, validation and proof of concept services in my own lab as well as in client labs using various industry standard tools and techniques. Otoh, not sure that I even need to disclose anything, however it's easy enough to do, so why not ;).

Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

DataDynamics StorageX 7.0 file and data management migration software


Some of you may recall back in 2006 (here and here) when Brocade bought a file management storage startup called NuView whose product was StorageX, and then in 2009 issued end of life (EOL) notice letters that the solution was being discontinued.

Fast forward to 2013 and there is a new storage startup (DataDynamics) with an existing product, just updated and re-released as StorageX 7.0.

Software Defined File Management – SDFM?

Granted, from an industry buzz focused adoption perspective you may not have heard of DataDynamics or perhaps even StorageX. However, many customers around the world across different industry sectors have, and are using the solution.

The current industry buzz is around software defined data centers (SDDC), which has led to software defined networking (SDN), software defined storage (SDS), and other software defined marketing (SDM) terms, not to mention Valueware. So for those who like software defined marketing or software defined buzzwords, you can think of StorageX as software defined file management (SDFM), however don't ask or blame them about using it as I just thought of it for them ;).

This is an example of industry adoption traction (what is being talked about) vs. industry deployment and customer adoption (what is actually in use on a revenue basis): DataDynamics is not a well-known company yet, however they have what many of the high-flying startups with industry adoption don't have, which is an installed base of revenue customers who also now have a new version 7.0 product to deploy.

StorageX 7.0 enabling intelligent file and data migration management

Thus, a common theme is adding management, including automated data movement and migration, to bring structure to unstructured NAS file data. More than a data mover or storage migration tool, Data Dynamics StorageX is a software platform for adding storage management structure around unstructured local and distributed NAS file data. This includes heterogeneous vendor support across different storage systems, protocols and tools including Windows CIFS and Unix/Linux NFS.
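To illustrate the general idea of policy based file migration (and only the general idea; this is not how StorageX is implemented or configured), here is a simplified sketch that walks a source share and copies files untouched for a year to an archive target. The paths and the age threshold are hypothetical, and a real product adds the parts that matter: CIFS/NFS semantics, permissions and attributes, throttling, verification and cutover.

```python
# Conceptual policy-based migration pass (age-based tiering), illustrative only.
import os
import shutil
import time

SOURCE = "/mnt/primary_nas/share"   # hypothetical source export
TARGET = "/mnt/archive_nas/share"   # hypothetical archive target
MAX_AGE_DAYS = 365                  # policy: migrate files not modified in a year

cutoff = time.time() - MAX_AGE_DAYS * 86400
for root, _dirs, files in os.walk(SOURCE):
    for name in files:
        src = os.path.join(root, name)
        if os.path.getmtime(src) < cutoff:              # policy match
            dst = os.path.join(TARGET, os.path.relpath(src, SOURCE))
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.copy2(src, dst)                      # copy data plus timestamps
            # a real migration verifies the copy before touching the source
```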


A few months back prior to its release, I had an opportunity to test drive StorageX 7.0 and have included some of my comments in this industry trends perspective technology solution brief (PDF). This solution brief titled Data Dynamics StorageX 7.0 Intelligent Policy Based File Data Migration is a free download with no registration required (as are others found here), however per our disclosure policy to give transparency, DataDynamics has been a StorageIO client.

If you have a need for gaining insight and management control around your unstructured file data to support migrations for upgrades, technology refresh, archiving or tiering across different vendors including EMC and NetApp, check out DataDynamics StorageX 7.0, take it for a test drive like I did and tell them StorageIO sent you.

Ok, nuff said,

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Summer 2011 StorageIO Newsletter

Summer 2011 Newsletter

Welcome to the Summer 2011 edition of the Server and StorageIO Group (StorageIO) newsletter. This follows the Spring 2011 edition.

You can get access to this newsletter via various social media venues (some are shown below) in addition to StorageIO websites and subscriptions.

 

Click on the following links to view the Summer 2011 edition as an HTML or PDF or, to go to the newsletter page to view previous editions.

Follow via Google FeedBurner here or via email subscription here.

You can also subscribe to the newsletter by simply sending an email to newsletter@storageio.com

Enjoy this edition of the StorageIO newsletter, let me know your comments and feedback.

Nuff said for now

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

Dude, is Dell going to buy Brocade?

Some IT industry buzz this week is around continued speculation (or here) of who Dell will buy next and whether it will be Brocade.

Brocade was mentioned as a possible acquisition by some in the IT industry last fall after Dell stepped back from the 3PAR bidding war with HP. Industry rumors or speculation involving Dell and Brocade are not new, some going back a year or more (or here or here).


Last fall I did a blog post commenting that I thought Dell would go on to buy someone else (it turned out to be Compellent and InSite One). Those acquisitions by Dell followed their purchases of companies including Scalent, Kace, Exanet, Perot, and Ocarina among others. In that post, I also commented that I did not think (at least at that time) that Brocade would be a likely or good fit for Dell given their different business models, go to market strategy and other factors.

Dell is clearly looking to move further up into the enterprise space which means adding more products and routes to market of which one is via networking and another involves people with associated skill sets. The networking business at Dell has been good for them along with storage to complement their traditional server and workstation business, not to mention their continued expansion into medical, life science and healthcare related solutions. All of those are key building blocks for moving to cloud, virtual and data storage networking environments.

Dell has also done some interesting acquisitions around management and service or workflow tools with Scalent and Kace not to mention their scale out NAS file system (excuse me, big data) solutions via Exanet and data footprint reduction tools with Ocarina, all of which have plays in the enterprise, cloud and traditional Dell markets.

But what about Brocade?

Is it a good fit for Dell?

Dell certainly could benefit from owning Brocade as a means of expanding their Ethernet and IP businesses beyond OEM partnerships, like HP supplementing their networking business with 3COM and IBM with BLADE Network Technologies.

However, would Dell acquiring Brocade disrupt their relationships with Cisco or other networking providers?

If Dell were to make a bid for Brocade, would Huawei (or here) sit on the sidelines and watch or jump in the game to stir things up?

Would Cisco counter with a deal Dell could not refuse to tighten their partnership at different levels perhaps even involving something with the UCS that was discussed on a recent Infosmack episode?

How would EMC, Fujitsu, HDS, HP, IBM, NetApp and Oracle among others, all of whom are partners with Brocade, respond to Dell becoming their OEM supplier for some products?

Would those OEM partnerships continue or cause some of those vendors to become closer aligned with Cisco or others?

Again the question: will Huawei sit back, or decide to enter the market on a more serious basis, or continue to quietly increase their presence around the periphery?

Brocade could be a good fit for Dell, giving them a networking solution (both Ethernet via the Foundry acquisition along with Fibre Channel and Fibre Channel over Ethernet (FCoE)), not to mention many other pieces of IP including some NAS and file management tools collecting dust on a Brocade shelf somewhere. What Dell would also get is a sales force that knows how to sell to OEMs, the channel and enterprise customers, some of whom are networking (Ethernet or Fibre Channel) focused, and some of whom have broader, more diverse backgrounds.

While it is possible that Dell could end up with Brocade, perhaps after a bidding battle (unless others just let a possible deal go as is), Dell would find itself in new and unfamiliar waters, similar to Brocade gaining its feet moving into the Ethernet and IP space after having been comfortable in the Fibre Channel storage centric space for over a decade.

While the networking products would be a good fit for Dell, assuming that they were to do such a deal, the diamond in the rough so to speak could be Brocade's channel, OEM and direct sales team of sales people, business development, systems engineers and support staff on a global basis. Keep in mind that while some of those Brocadians are network focused, many have connected servers and storage from mainframe to open systems across all vendors for years or in some cases decades. Some of those people who I know personally are even talented enough to sell ice to an Eskimo (that is a sales joke btw).

Sure the Brocadians would have to be leveraged to keep selling what they have done, a task similar to what NetApp is currently facing with their integration of Engenio.

However that DNA could help Dell establish more presence in organizations where they have not been in the past. In other words, Dell could use networking to pull the rest of their product lines into those accounts, VARs or resellers.

Hmmm, does that sound like another large California based networking company?


After all, June is a popular month for weddings; let's see what happens next week down in Orlando during the Dell Storage Forum, which some have speculated might be a launching pad for some type of deal.

Here are some related links to more material:

  • HP Buys one of the seven networking dwarfs and gets a bargain
  • Dell Will Buy Someone, However Not Brocade (At least for now)
  • While HP and Dell make counter bids, exclusive interview with 3PAR CEO David Scott
  • Acadia VCE: VMware + Cisco + EMC = Virtual Computing Environment
  • Did someone forget to tell Dell that Tape is dead?
  • Data footprint reduction (Part 1): Life beyond dedupe and changing data lifecycles
  • Data footprint reduction (Part 2): Dell, IBM, Ocarina and Storwize
  • What is DFR or Data Footprint Reduction?
  • Could Huawei buy Brocade?
  • Has FCoE entered the trough of disillusionment?
  • More on Fibre Channel over Ethernet (FCoE)
  • Dude, is Dell doing a disk deal again with Compellent?
  • Post Holiday IT Shopping Bargains, Dell Buying Exanet?
  • Back to school shopping: Dude, Dell Digests 3PAR Disk storage
  • Huawei should buy brocade
  • NetApp buying LSIs Engenio Storage Business Unit
    Ok, nuff said for now

    Cheers Gs

    Greg Schulz – Author The Green and Virtual Data Center (CRC), Resilient Storage Networks (Elsevier) and coming summer 2011 Cloud and Virtual Data Storage Networking (CRC)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

    EMC VPLEX: Virtual Storage Redefined or Respun?

    In a flurry of announcements coinciding with EMCworld, occurring in Boston this week of May 10, 2010, EMC officially unveiled its Virtual Storage vision initiative (aka the twitter hash tag #emcvs) and the initial VPLEX product. The Virtual Storage initiative was virtually previewed back in March (see my previous post here along with one from Stu Miniman (twitter @stu) of EMC here or here) and according to EMC the VPLEX product was made generally available (GA) back in April.

    The Virtual Storage vision and associated announcements consisted of:

    • Virtual Storage vision – Big picture  initiative view of what and how to enable private clouds
    • VPLEX architecture – Big picture view of federated data storage management and access
    • First VPLEX based product – Local and campus (Metro to about 100km) solutions
    • Glimpses of how the architecture will evolve with future products and enhancements


    Figure 1: EMC Virtual Storage and Virtual Server Vision and Big Pictures

    The Big Picture
    The EMC Virtual Storage vision (Figure 1) is the foundation of a private IT cloud, which should enable characteristics including transparency, agility, flexibility, efficiency, always-on availability, resiliency, security, on-demand access and scalability. Think of it this way: EMC wants to enable and facilitate for storage what is being done by server virtualization hypervisor vendors including VMware (which happens to be owned by EMC), Microsoft Hyper-V and Citrix/Xen among others. That is, break down the physical barriers or constraints around storage similar to how virtual servers release applications and their operating systems from being tied to a physical server.

    While the current phase of desktop, server and storage virtualization has been focused on consolidation and cost avoidance, the next big wave or phase is life beyond consolidation, where the emphasis expands to agility, flexibility, ease of use, transparency, and portability (Figure 2). In this next phase, which puts an emphasis on enablement and doing more with what you have while enhancing business agility, the focus extends from how much can be consolidated, or the number of virtual machines per physical machine, to using virtualization for flexibility and transparency (read more here and here or watch here).


    Figure 2: Virtual Storage Big Picture

    That same trend will be happening with storage where the emphasis also expands from how much data can be squeezed or consolidated onto a given device to that of enabling flexibility and agility for load balancing, BC/DR, technology upgrades, maintenance and other routine Infrastructure Resource Management (IRM) tasks.

    For EMC, achieving this vision (both directly for storage, and indirectly for servers via their VMware subsidiary) is via local and distributed (metro and wide area) federation management of physical resources to support virtual data center operations. EMC building blocks for delivering this vision include VPLEX, data and storage management federation across EMC and third party products, FAST (fully automated storage tiering), SSD, data footprint reduction and data protection management products among others.

    Buzzword bingo aside (e.g. LAN, SAN, MAN, WAN, Pots and Pans) along with Automation, DWDM, Asynchronous, BC, BE or Back End, Cache coherency, Cache consistency, Chargeback, Cluster, db loss, DCB, Director, Distributed, DLM or Distributed Lock Management, DR, Foe or Fibre Channel over Ethernet, FE or Front End, Federated, FAST, Fibre Channel, Grid, HyperV, Hypervisor, IRM or Infrastructure Resource Management, I/O redirection, I/O shipping, Latency, Look aside, Metadata, Metrics, Public/Private Cloud, Read ahead, Replication, SAS, Shipping off to Boston, SRA, SRM, SSD, Stale Reads, Storage virtualization, Synchronization, Synchronous, Tiering, Virtual storage, VMware and Write through among many other possible candidates the big picture here is about enabling flexibility, agility, ease of deployment and management along with boosting resource usage effectiveness and presumably productivity on a local, metro and future global basis.


    Figure 3: EMC Storage Federation and Enabling Technology Big Picture

    The VPLEX Big Picture
    Some of the tenets of the VPLEX architecture (Figure 3) include a scale out cluster or grid design for local and distributed (metro and wide area) access where you can start small and evolve as needed in a predictable and deterministic manner.


    Figure 4: Generic Virtual Storage (Local SAN and MAN/WAN) and where VPLEX fits

    The VPLEX architecture is targeted towards enabling next generation data centers including private clouds where ease and transparency of data movement, access and agility are essential. VPLEX sits atop existing EMC and third party storage as a virtualization layer between physical or virtual servers and in theory, other storage systems that rely on underlying block storage. For example in theory a NAS (NFS, CIFS, and AFS) gateway, CAS content archiving or Object based storage system or purpose specific database machine could sit between actual application servers and VPLEX enabling multiple layers of flexibility and agility for larger environments.

    At the heart of the architecture is an engine running a highly distributed data caching algorithm that uses an approach where a minimal amount of data is sent to other nodes or members in the VPLEX environment to reduce overhead and latency (in theory boosting performance). For data consistency and integrity, a distributed cache coherency model is employed to protect against stale reads and writes along with load balancing, resource sharing and failover for high availability. A VPLEX environment consists of a federated management view across multiple VPLEX clusters including the ability to create a stretch volume that is accessible across multiple VPLEX clusters (Figure 5).


    Figure 5: EMC VPLEX Big Picture


    Figure 6: EMC VPLEX Local with 1 to 4 Engines

    Each VPLEX local cluster (Figure 6) is made up of 1 to 4 engines (Figure 7) per rack, with each engine consisting of two directors, each having 64GByte of cache, local Intel compute processors, and 16 Front End (FE) and 16 Back End (BE) Fibre Channel ports configured for high availability (HA). Communications between the directors and engines is Fibre Channel based. Metadata is moved between the directors and engines in 4K blocks to maintain consistency and coherency. Components are fully redundant and include phone home support.


    Figure 7: EMC VPLEX Engine with redundant directors

    Host servers initially supported by VPLEX include VMware, Cisco UCS, Windows, Solaris, IBM AIX, HPUX and Linux, along with EMC PowerPath and Windows multipath management drivers. Local server clusters supported include Symantec VCS, Microsoft MSCS and Oracle RAC along with various volume managers. SAN fabric connectivity supported includes Brocade and Cisco as well as legacy McData based products.

    VPLEX also supports write-through caching (Figure 8) to preserve underlying array based functionality and performance, with 8,000 total virtualized LUNs per system. Note that underlying LUNs can be aggregated or simply passed through the VPLEX. Storage that attaches to the BE Fibre Channel ports includes EMC Symmetrix VMAX and DMX along with CLARiiON CX and CX4. Third party storage supported includes HDS 9000 and USP V/VM along with IBM DS8000 and others to be added as they are certified. In theory, given that VPLEX presents block based storage to hosts, one would also expect NAS, CAS or other object based gateways and servers that rely on underlying block storage to be supported in the future.
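    For readers who want a mental model of what write-through caching means here, the sketch below shows the general technique in a few lines of Python: every write is committed to the back-end before it is acknowledged, so the underlying array still sees all data and can keep applying its own replication, snapshots and other functions. This is an illustration of the concept only, not VPLEX internals.

```python
# Generic write-through cache model (concept sketch, not VPLEX code).
class WriteThroughCache:
    def __init__(self, backend):
        self.backend = backend   # stand-in for a back-end array LUN (e.g. a dict)
        self.cache = {}

    def write(self, block, data):
        self.backend[block] = data   # commit to the back-end first (write-through)
        self.cache[block] = data     # then keep a copy for fast re-reads

    def read(self, block):
        if block in self.cache:      # cache hit
            return self.cache[block]
        data = self.backend[block]   # cache miss: fetch from the back-end
        self.cache[block] = data
        return data
```

    Because the back-end copy is always current with write-through, losing the cache costs performance rather than data, which is why the underlying array functionality is preserved.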


    Figure 8: VPLEX Architecture and Distributed Cache Overview

    Functionality that can be performed between the cluster nodes and engines with VPLEX includes data migration and workload movement across different physical storage systems or sites, along with shared access with read caching on a local and distributed basis. LUNs can also be pooled across different vendors' underlying storage solutions, which retain their native feature functionality thanks to VPLEX write-through caching.

    Reads from various servers can be resolved by any node or engine that checks its cache tables (Figure 8) to determine where to resolve the actual I/O operation from. Data integrity checks are also maintained to prevent stale reads or write operations from occurring. The actual metadata communications between nodes are very small, enabling statefulness while reducing overhead and maximizing performance. When a change to cached data occurs, meta information is sent to other nodes to maintain the distributed cache management index schema. Note that only pointers to where data and fresh cache entries reside are stored and communicated in the metadata via the distributed caching algorithm.
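    The sketch below models the directory style of distributed caching described above, where nodes exchange only small ownership metadata (which node holds a fresh copy of which block) rather than the data itself. It is a simplified illustration of the general technique, not EMC's actual algorithm or data structures.

```python
# Directory-style distributed cache sketch: metadata (pointers) move, data stays put.
class CacheNode:
    def __init__(self, name, directory, backend):
        self.name = name
        self.cache = {}              # blocks cached locally on this node
        self.directory = directory   # shared metadata: block -> node holding a fresh copy
        self.backend = backend       # stand-in for the underlying (write-through) storage

    def write(self, block, data):
        self.backend[block] = data           # write-through to the back-end array
        self.cache[block] = data
        self.directory[block] = self.name    # only this small pointer update is shared

    def read(self, block, nodes):
        owner = self.directory.get(block)    # tiny metadata lookup, not a data transfer
        if owner is not None and block in nodes[owner].cache:
            return nodes[owner].cache[block] # resolve from whichever node holds a fresh copy
        return self.backend[block]           # otherwise fall back to the back-end array

# Example: two nodes sharing a directory and a back-end
backend, directory = {}, {}
nodes = {n: CacheNode(n, directory, backend) for n in ("a", "b")}
nodes["a"].write(42, b"hello")
print(nodes["b"].read(42, nodes))            # b resolves the read via a's cache entry
```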


    Figure 9: EMC VPLEX Metro Today

    For metro deployments, two clusters (Figure 9) are utilized with distances supported up to about 100km or about 5ms of latency in a synchronous manner utilizing long distance Fibre Channel optics and transceivers including Dense Wave Division Multiplexing (DWDM) technologies (See Chapter 6: Metropolitan and Wide Area Storage Networking in Resilient Storage Networking (Elsevier) for additional details on LAN, MAN and WAN topics).
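    A quick back-of-the-envelope check shows why roughly 100km fits within a synchronous budget of about 5ms: light in fiber propagates at roughly 200,000 km/s, so distance alone is a small part of the budget, with the rest consumed by switches, DWDM gear, protocol handshakes and the storage itself (those overheads are rough assumptions, not vendor specifications).

```python
# Propagation delay for the ~100km metro distance cited above (illustrative math).
SPEED_IN_FIBER_KM_PER_S = 200_000       # roughly 2/3 the speed of light in a vacuum

distance_km = 100
one_way_ms = distance_km / SPEED_IN_FIBER_KM_PER_S * 1000   # ~0.5 ms
round_trip_ms = 2 * one_way_ms                              # ~1.0 ms for propagation alone
print(f"Round-trip propagation over {distance_km}km: {round_trip_ms:.1f} ms")
```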

    Initially EMC is supporting local or Metro including Campus based VPLEX deployments requiring synchronous communications however asynchronous (WAN) Geo and Global based solutions are planned for the future (Figure 10).


    Figure 10: EMC VPLEX Future Wide Area and Global

    Online Workload Migration across Systems and Sites
    Online workload or data movement and migration across storage systems or sites is not new with solutions available from different vendors including Brocade, Cisco, Datacore, EMC, Fujitsu, HDS, HP, IBM, LSI and NetApp among others.

    For synchronization and data mobility operations such as a VMware vMotion or Microsoft Hyper-V Live Migration over distance, information is written to separate LUNs in different locations across what are known as stretch volumes to enable non-disruptive workload relocation across different storage systems (arrays) from various vendors. Once synchronization is completed, the original source can be disconnected or taken offline for maintenance or other common IRM tasks. Note that at least two LUNs are required; put another way, for every stretch volume, two LUNs are subtracted from the total number of available LUNs, similar to how RAID 1 mirroring requires at least two disk drives.
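    Putting that two-LUNs-per-stretch-volume rule together with the 8,000 virtualized LUN figure mentioned earlier gives a simple capacity-planning check; the mix below is a made-up example, not a sizing recommendation.

```python
# Rough stretch-volume accounting (illustrative numbers only).
total_virtual_luns = 8000                    # system-wide figure cited earlier
stretch_volumes = 1000                       # hypothetical number of stretched volumes
luns_used_by_stretch = stretch_volumes * 2   # one LUN per location, like RAID 1 mirroring
print(total_virtual_luns - luns_used_by_stretch)   # 6000 left for non-stretched volumes
```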

    Unlike other approaches that, for coherency and performance, rely on either no cached data or extensive amounts of cached data along with the subsequent overhead for maintaining statefulness (consistency and coherency) including avoiding stale reads or writes, VPLEX relies on a combination of distributed cache lookup tables along with pass-through access to underlying storage when or where needed. Consequently, large amounts of data do not need to be cached or shipped between VPLEX devices to maintain data consistency, coherency or performance, which should also help to keep costs affordable.

    Approach is not unique, it is the implementation
    Some storage virtualization solutions that have been software based running on an appliance or network switch as well as hardware system based have had a focus of emulating or providing competing capabilities with those of mid to high end storage systems. The premise has been to use lower cost, less feature enabled storage systems aggregated behind the appliance, switch or hardware based system to provide advanced data and storage management capabilities found in traditional higher end storage products.

    VPLEX, while like any tool or technology it could be and probably will be made to do other things than what it is intended for, is really focused on flexibility, transparency and agility as opposed to being used as a means of replacing underlying storage system functionality. What this means is that while there are data movement and migration capabilities, including the ability to synchronize data across sites or locations, VPLEX by itself is not a replacement for the underlying functionality present in both EMC and third party (e.g. HDS, HP, IBM, NetApp, Oracle/Sun or others) storage systems.

    This will make for some interesting discussions, debates and apples to oranges comparisons, in particular with those vendors whose products are focused around replacing or providing functionality not found in underlying storage system products.

    In a nutshell summary, VPLEX and the Virtual Storage story (vision) is about enabling agility, resiliency, flexibility, and data and resource mobility to simplify IT Infrastructure Resource Management (IRM). One of the key themes of global storage federation is anywhere access on a local, metro, wide area and global basis across both EMC and heterogeneous third party vendor hardware.

    Lets Put it Together: When and Where to use a VPLEX
    While many storage virtualization solutions are focused around consolidation or pooling, similar to first wave server and desktop virtualization, the next general broad wave of virtualization is life beyond consolidation. That means expanding the focus of virtualization from consolidation, pooling or LUN aggregation to that of enabling transparency for agility, flexibility, data or system movement, technology refresh and other common time consuming IRM tasks.

    Some applications or usage scenarios in the future should include, in addition to VMware vMotion, Microsoft Hyper-V and Microsoft Clustering, other host server clustering solutions.


    Figure 11: EMC VPLEX Usage Scenarios

    Thoughts and Industry Trends Perspectives:

    The following are various thoughts, comments, perspectives and questions pertaining to this and storage, virtualization and IT in general.

    Is this truly unique as is being claimed?

    Interestingly, the message I'm hearing out of EMC is not the claim that this is unique, revolutionary or the industry's first, as is so often the case with vendors, rather that it is their implementation and ability to deploy on a broad basis that is unique. Now granted you will probably hear, as is often the case with any vendor or fan boy/fan girl spin, of it being unique, and I'm sure this will also serve up plenty of fodder for mudslinging in the blogosphere, YouTube galleries, twitter land and beyond.

    What is the DejaVu factor here?

    For some it will be nonexistent, yet for others there is certainly a DejaVu depending on your experience or what you have seen and heard in the past. In some ways this is the manifestation of many vision and initiatives from the late 90s and early 2000s when storage virtualization or virtual storage in an open context jumped into the limelight coinciding with SAN activity. There have been products rolled out along with proof of concept technology demonstrators, some of which are still in the market, others including companies have fallen by the way side for a variety of reasons.

    Consequently if you were part of or read or listened to any of the discussions and initiatives from Brocade (Rhapsody), Cisco (SVC, VxVM and others), INRANGE (Tempest) or its successor CNT UMD not to mention IBM SVC, StorAge (now LSI), Incipient (now part of Texas Memory) or Troika among others you should have some DejaVu.

    I guess that also begs the question of what is VPLEX, in band, out of band or hybrid fast path control path? From what I have seen it appears to be a fast path approach combined with distributed caching as opposed to a cache centric inband approaches such as IBM SVC (either on a server or as was tried on the Cisco special service blade) among others.

    Likewise if you are familiar with IBM Mainframe GDPS or even EMC GDDR as well as OpenVMS Local and Metro clusters with distributed lock management you should also have DejaVu. Similarly if you had looked at or are familiar with any of the YottaYotta products or presentations, this should also be familiar as EMC acquired the assets of that now defunct company.

    Is this a way for EMC to sell more hardware along with software products?

    By removing barriers and enabling IT staffs to support more data on more storage in a denser and more agile footprint, the answer should be yes, something that we may see other vendors emulate, or, make noise about what they can or have been doing already.

    How is this virtual storage spin different from the storage virtualization story?

    That all depends on your view or definition as well as belief systems and preferences for what is or is not virtual storage vs. storage virtualization. For some who believe that storage virtualization is only virtualization if and only if it involves software running on some hardware appliance or a vendor's storage system for aggregation and common functionality, then you probably won't see this as virtual storage let alone storage virtualization. However for others, it will be confusing, hence EMC introducing terms such as federation and avoiding terms including grid to minimize confusion yet play off of cloud crowd commotion.

    Is VPLEX a replacement for storage system based tiering and replication?

    I do not believe so, and even though some vendors are making claims that tiered storage is dead, just like some vendors declared a couple of years ago that disk drives were going to be dead this year at the hands of SSD, neither has come to pass, so to speak (pun intended). What this means for VPLEX is that it leverages underlying automated or manual tiering found in storage systems such as EMC FAST enabled systems, or similar policy and manual functions in third party products.

    What VPLEX brings to the table is the ability to transparently present a LUN or volume locally or over distance with shared access while maintaining cache and data coherency. This means that if a LUN or volume moves the applications or file system or volume managers expecting to access that storage will not be surprised, panic or encounter failover problems. Of course there will be plenty of details to be dug into and seen how it all actually works as is the case with any new technology.

    Who is this for?

    I see this as being for environments that need flexibility and agility across multiple storage systems, either from one or multiple vendors, on a local, metro or wide area basis. This is for those environments that need the ability to move workloads, applications and data between different storage systems and sites for maintenance, upgrades, technology refresh, BC/DR, load balancing or other IRM functions, similar to how they would use virtual server migration such as vMotion or Live Migration among others.

    Do VPLEX and Virtual Storage eliminate need for Storage System functionality?

    I see some storage virtualization solutions or appliances that have a focus of replacing underlying storage system functionality instead of coexisting or complementing it. A way to test for this approach is to listen or read whether the vendor or provider says anything along the lines of eliminating vendor lock-in or control of the underlying storage system. That can be a sign of the golden rule of virtualization: whoever controls the virtualization functionality (at the server hypervisor or storage) controls the gold! This is why on the server side of things we are starting to see tiered hypervisors, similar to tiered servers and storage, where mixed hypervisors are being used for different purposes. Will we see tiered storage hypervisors or virtual storage solutions? The answer could be perhaps, or it depends.

    Was Invista a failure not going into production and this a second attempt at virtualization?

    There is a popular myth in the industry that Invista never saw the light of day outside of trade show expo or other demos; however the reality is that there are actual customer deployments. Invista, unlike other storage virtualization products, had a different focus, which was around enabling agility and flexibility for common IRM tasks, similar to the expanded focus of VPLEX. Consequently Invista has often been put in apples to oranges comparisons with other virtualization appliances that have as their focus pooling along with other functions, or in some cases serving as an appliance based storage system.

    The focus around Invista, and its usage by those customers who have deployed it that I have talked with, is around enabling agility for maintenance, facilitating upgrades, moves or reconfiguration and other common IRM tasks vs. using it for pooling of storage for consolidation purposes. Thus I see VPLEX extending the vision of Invista in a role of complementing and leveraging underlying storage system functionality instead of trying to replace those capabilities with those of the storage virtualizer.

    Is this a replacement for EMC Invista?

    According to EMC the answer is no and that customers using Invista (Yes, there are customers that I have actually talked to) will continue to be supported. However I suspect that over time Invista will either become a low end entry for VPLEX, or, an entry level VPLEX solution will appear sometime in the future.

    How does this stack up or compare with what others are doing?

    If you are looking to compare to cache centric platforms such as IBM's SVC that add extensive functionality and capabilities within the storage virtualization framework, this is an apples to oranges comparison. VPLEX provides cache pointers on a local and global basis, functioning in a model that complements the underlying storage systems, whereas SVC caches on a specific cluster basis while enhancing the functionality of the underlying storage system. Rest assured there will be other apples to oranges comparisons made between these platforms.

    How will this be priced?

    When I asked EMC about pricing, they would not commit to a specific price prior to the announcement other than indicating that there will be options for on demand or consumption (e.g. cloud pricing) as well as pricing per engine capacity as well as subscription models (pay as you go).

    What is the overhead of VPLEX?

    While EMC runs various workload simulations (including benchmarks) internally as well as some publicly (e.g. Microsoft ESRP among others), they have been opposed to some storage simulation benchmarks such as SPC. The EMC opposition to simulations such as SPC has been varied; however this could be a good and interesting opportunity for them to silence the industry (including myself) who continue to ask them (along with a couple of other vendors including IBM and their XIV) when they will release public results.

    The interesting opportunity I think for EMC is that they do not even have to benchmark one of their own storage systems such as a CLARiiON or VMAX; instead, simply show the performance of some third party product that is already tested on the SPC website and then a submission with that product running attached to a VPLEX.

    If the performance or low latency forecasts are as good as they have been described, EMC can accomplish a couple of things by:

    • Demonstrating the low latency and minimal to no overhead of VPLEX
    • Show VPLEX with a third party product comparing latency before and after
    • Provide a comparison to other virtualization platforms including IBM SVC

    As for EMC submitting a VMAX or CLARiiON SPC test in general, I'm not going to hold my breath for that; instead, I will continue to look at the other public workload tests such as ESRP.

    Additional related reading material and links:

    Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier)
    Chapter 3: Networking Your Storage
    Chapter 4: Storage and IO Networking
    Chapter 6: Metropolitan and Wide Area Storage Networking
    Chapter 11: Storage Management
    Chapter 16: Metropolitan and Wide Area Examples

    The Green and Virtual Data Center (CRC)
    Chapter 3: (see also here) What Defines a Next-Generation and Virtual Data Center
    Chapter 4: IT Infrastructure Resource Management (IRM)
    Chapter 5: Measurement, Metrics, and Management of IT Resources
    Chapter 7: Server: Physical, Virtual, and Software
    Chapter 9: Networking with your Servers and Storage

    Also see these:

    Virtual Storage and Social Media: What did EMC not Announce?
    Server and Storage Virtualization – Life beyond Consolidation
    Should Everything Be Virtualized?
    Was today the proverbial day that he!! Froze over?
    Moving Beyond the Benchmark Brouhaha

    Closing comments (For now):
    As with any new vision, initiative, architecture and initial product there will be plenty of questions to ask, items to investigate, early adopter customers or users to talk with and determine what is real, what is future, what is usable and practical along with what is nice to have. Likewise there will be plenty of mud ball throwing and slinging between competitors, fans and foes which for those who enjoy watching or reading those you should be well entertained.

    In general, the EMC vision and story builds on and presumably delivers on past industry hype, buzz and vision with solutions that can be put into environments as productivity tool that works for the customer, instead of the customer working for the tool.

    Remember the golden rule of virtualization which is in play here is that whoever controls the virtualization or associated management controls the gold. Likewise keep in mind that aggregation can cause aggravation. So do not be scared, however look before you leap meaning do your homework and due diligence with appropriate levels of expectations, aligning applicable technology to the task at hand.

    Also, if you have seen or experienced something in the past, you are more likely to have DejaVu as opposed to seeing things as revolutionary. However it is also important to leverage lessons learned for future success. YottaYotta was a lot of NaddaNadda; let's see if EMC can leverage their past experiences to make this a LottaLotta.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    HP Buys one of the seven networking dwarfs and gets a bargain

    Last week EMC and Cisco announced their VCE coalition and Acadia.

    The other day, HP continued its early holiday shopping by plunking down $2.7B USD to buy 3COM, one of the networking seven dwarfs (e.g. when compared to networking giant Cisco).

    Some of the other so called networking dwarfs when compared to Cisco include Brocade, Ciena and Juniper among others.

    Why is 3COM a bargain at $2.7B

    Sure HP paid a slight multiple premium on 3COM's trailing revenues, or a small multiple on their market cap.

    Sure HP gets to acquire one of the networking seven dwarfs at a time when Cisco is flexing its muscles to move into the server space.

    Sure HP gets to extend their networking groups capabilities including additional offerings for HPs broad SMB and lower-end SOHO and even consumer markets not to mention enterprise ROBO or workgroups.

    Sure HP gets to extend their security and Voice over IP (VoIP) via 3COM and their US Robotics brand perhaps to better compete with Cisco at the consumer, prosumer, SOHO or low-end of SMB markets.

    Sure HP gets access to H3C as a means of furthering its reach into China and the growing Asian market, perhaps even getting closer to Huawei as a future possible partner.

    Sure HP could have bought Brocade however IMHO that would have cost a few more deceased presidents (aka very large dollar bills) and assumed over a billion dollars in debt, however lets leave the Brocadians and that discussion on the back burner for a different discussion on another day.

    Sure HP gets to signal to the world that they are alive, they have a ton of money in their war chest, and last I checked, actually more cash, in the $11B range (minus about $2.7B being spent on 3COM), which exceeds the $5B USD cash position of Cisco.

    Sure HP could have done and perhaps will still do some smaller networking related deals in couple of hundreds of million dollar type range to beef up product offerings such as a Riverbed or others, or, perhaps wait for some fire sales or price shop on those shopping themselves around.

    ROI is the bargain IMHO, not to mention other pieces including H3C!

    3COM was and is a bargain for all of the above, plus given the revenues of about 1.3B, HP CEO Mark Hurd stands to reap a better return on cash investment than having it sitting in a bank account earning a few points. Plus, HP still has around 8-9B in cash leaving room for some other opportunistic holiday shopping, who knows, maybe adopt yet another networking or storage or server related dwarf!

    Stay tuned, this game is far from being over as there are plenty of days left in the 2009 holiday shopping season!

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved


    Another StorageIO Appearance on Storage Monkeys InfoSmack

    Following up from a previous appearance, I recently had another opportunity to participate in another Storage Monkeys InfoSmack podcast episode.

    In the most recent podcast, discussions centered on the recent service disruption at the Microsoft/T-Mobile Sidekick cloud service, FTC blogger disclosure guidelines, whether Brocade is up for sale and who should buy them, and SNIA and SNW among other topics.

    Here are a couple of relevant links pertaining to topics discussed in this InfoSmack session.

    If you are involved with servers, storage, I/O networking, virtualization and other related data infrastructure topics, check out Storage Monkeys and InfoSmack.

    Cheers – gs

    Greg Schulz – StorageIO, Author “The Green and Virtual Data Center” (CRC)

    Could Huawei buy Brocade?

    Disclosure: I have no connection to Huawei. I own no stock in, nor have I worked for Brocade as an employee; however I did work for three years at SAN vendor INRANGE which was acquired by CNT. However I left to become an industry analyst prior to the acquisition by McData and well before Brocade bought McData. Brocade is not a current client; however I have done speaking events pertaining to general industry trends and perspectives at various Brocade customer events for them in the past.

    Is Brocade for sale?

    Last week a Wall Street Journal article mentioned Brocade (BRCD) might be for sale.

    BRCD has a diverse product portfolio for Fibre Channel, Ethernet and the emerging Fibre Channel over Ethernet (FCoE) market, and a who's who of OEM and channel partners. Why not be for sale? The timing is good for investors, and CEO Mike Klayko and his team have arguably done a good job of shifting and evolving the company.

    Generally speaking, let's keep things in perspective: everything is always for sale, and in an economy like now, bargains are everywhere. Many businesses are shopping; it's just a matter of how visible the shopping is for a seller or buyer, along with motivations and objectives including shareholder value.

    Consequently, the coconut wires are abuzz with talk and speculation of who will buy Brocade or perhaps who Brocade might buy among other Merger and Acquisition (M and A) activity of who will buy who. For example, who might buy BRCD, why not EMC (they sold McData off years ago via IPO), or IBM (they sold some of their networking business to Cisco years ago) or HP (currently an OEM partner of BRCD) as possible buyers?

    Last week I posted on twitter a response to a comment about who would want to buy Brocade with a response to the effect of why not a Huawei to which there was some silence except for industry luminary Steve Duplessie (have a look to see what Steve had to say).

    Part of being an analyst IMHO should be to actually analyze things vs. simply reporting on what others want you to report or what you have read or heard elsewhere. This also means talking about scenarios that are out of the box, or in adjacent boxes from some perspectives, or that might not be in line with traditional thinking. Sometimes this means breaking away and thinking and saying what may not be obvious or practical. Having said that, let's take a step back for a moment as to why Brocade may or may not be for sale and who might or may not be interested in them.

    IMHO, it has a lot to do with Cisco, and not just because Brocade sees no opportunity to continue competing with the 800lb gorilla of LAN/MAN networking that has moved into Brocade's stronghold of storage network SANs. Cisco is upsetting the table or apple cart with its server partners IBM, Dell, HP, Oracle/Sun and others by testing the waters of the server world with their UCS. So far I see this as something akin to a threat testing the defenses of a target before launching a full out attack.

    In other words, checking to see how the opposition responds, what defenses are put up, collecting G2 or intelligence, as well as how the rest of the world or industry might respond to an all out assault or shift of power or control. Of course, HP, IBM, Dell and Sun/Oracle will not let this move into their revenue and account control go un-noticed, with initial counter announcements having been made, some re-emphasizing relationships with Brocade along with its recent acquisition of Ethernet/IP vendor Foundry.

    Now what does this have to do with Brocade potentially being sold and why the title involving Huawei?

    Many of the recent industry acquisitions have been focused on shoring up technology or intellectual property (IP), eliminating a competitor or simply taking advantage of market conditions. For example, Data Domain was sold to EMC in a bidding war with NetApp, HP bought IBRIX, Oracle bought or is trying to buy Sun, Oracle also bought Virtual Iron, Dell bought Perot after HP bought EDS a year or so ago, while Xerox bought ACS, and so the M and A game continues among other deals.

    Some of the deals are strategic, many being tactical; Brocade being bought I would put in the category of a strategic scenario, a bargaining chip or even a pawn if you prefer in a much bigger game that is about more than switches, directors, HBAs, LANs, SANs, MANs, WANs, POTS and PANs (check out my book "Resilient Storage Networks" (Elsevier))!

    So with conversations focused around Cisco expanding into servers to control the data center discussion, mindset, thinking, budgets and decision making, why wouldn't an HP, IBM or Dell, let alone a NetApp, Oracle/Sun or even EMC, want to buy Brocade as a bargaining chip in a bigger game? Why not a Ciena (they just bought some of Nortel's assets), Juniper or 3Com (more of a merger of equals to fight Cisco), Microsoft (might upset their partner Cisco) or Fujitsu (their Telco group, that is) among others?

    Then why not Huawei, a company some may have heard of, one that others may not have.

    Who is Huawei you might ask?

    Simple, they are a very large IT solutions provider that is also a large player in China, with global operations including R&D in North America and many partnerships with U.S. vendors. By rough comparison, Cisco's most recently reported annual revenues are about $36.1B (all figures USD), BRCD about $1.5B, Juniper about $3.5B and 3COM about $1.3B, while Huawei is at about $23B with a year over year sales increase of 45%. Huawei has previous partnerships with storage vendors including Symantec and Falconstor among others. Huawei also has had a partnership with 3Com (H3C), a company that was the first of the LAN vendors to get into SANs (prematurely), beating Cisco easily by several years.

    Sure there would be many hurdles and issues, similar to the ones CNT and INRANGE had to overcome, or McData and CNT, or Brocade and McData among others. However, in the much bigger game of IT account and thus budget control played by HP, IBM and Sun/Oracle among others, wouldn't maintaining a dual source for customers' networking needs make sense, or at least serve as a check on Cisco's expansion efforts? If nothing else, it maintains the status quo in the industry for now, or, if the rules and game are changing, wouldn't some of the bigger vendors want to get closer to the markets where Huawei is seeing rapid growth?

    Does this mean that Brocade could be bought? Sure.
    Does this mean Brocade cannot compete or is a sign of defeat? I don’t think so.
    Does this mean that Brocade could end up buying or merging with someone else? Sure, why not.
    Or, is it possible that someone like Huawei could end up buying Brocade? Why not!

    Now, if Huawei were to buy Brocade, that begs the question, just for fun: could they be renamed or spun off as a division called HuaweiCade or HuaCadeWei? Anything is possible when you look outside the box.

    Nuff said for now, food for thought.

    Cheers – gs

    Greg Schulz – StorageIO, Author “The Green and Virtual Data Center” (CRC)

    I/O, I/O, It's off to Virtual Work and VMworld I Go (or went)

    Ok, so I should have used that intro last week before heading off to VMworld in San Francisco instead of after the fact.

    Think of it as a high latency title or intro, kind of like attaching a fast SSD to a slow, high latency storage controller, or a fast server attached to a slow network, or fast network with slow storage and servers, it is what it is.

    I/O virtualization (IOV) and Virtual I/O (VIO), along with I/O and networking convergence, have been getting more and more attention lately, particularly on the convergence front. In fact one might conclude that it is all of a sudden trendy to be on the IOV, VIO and convergence bandwagon given how cloud, SOA and SaaS hype are being challenged, perhaps even turning into storm clouds?

    Let's get back on track, or in the case of the past week, get back in the car, get back in the plane, get back into the virtual office, and look at what it all has to do with virtual I/O and VMworld.

    The convergence game has at its center Brocade emanating from the data center and storage centric I/O corner challenging Cisco hailing from the MAN, WAN, LAN general networking corner.

    Granted both vendors have dabbled with success in each other's corners or areas of focus in the past. For example, Brocade has via acquisitions (McData, Nishan, CNT and INRANGE among others) a diverse and capable stable of local and long-distance SAN connectivity and channel extension for mainframe and open systems supporting data replication, remote tape and wide-area clustering. Not to mention deep bench experience with the technologies, protocols and partner solutions for LAN, MAN (xWDM), WAN (iFCP, FCIP, etc.) and even FAN (file area networking, aka NAS), along with iSCSI in addition to Fibre Channel and FICON solutions.

    Disclosure: Here’s another plug ;) Learn more about SANs, LANs, MANs, WANs, POTS and PANs and related technologies and techniques in my book “Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures” (Elsevier).

    Cisco, not to be outdone, has a background in the LAN, MAN and WAN space directly, or, similar to Brocade, via partnerships, with products, experience and depth. In fact while many of my former INRANGE and CNT associates ended up at Brocade via McData or indirectly, some ended up at Cisco. While Cisco is known for general networking, over the past several years they have gone from zero to being successful in the Fibre Channel and yes, even the FICON mainframe space, while like Brocade (HBAs) dabbling in other areas such as servers and storage, not to mention consumer products.

    What does this have to do with IOV and VIO, let alone VMworld and my virtual office? Hang on, hold that thought for a moment; let's get the convergence aspect out of the way first.

    On the I/O and networking convergence (e.g. Fibre Channel over Ethernet – FCoE) scene, both Brocade (Converged Enhanced Ethernet – CEE) and Cisco (Data Center Ethernet – DCE) along with their partners are rallying around each other's camps. This is similar to how a pair of prize fighters maneuver in advance of a match, including plenty of trash talk, hype and all that goes with it. Brocade and Cisco throwing mud balls (or spam) at each other, or having someone else do it, is nothing new; however in the past each has had their core areas of focus, coming from different camps and in some cases selling to different people in an IT environment or those in VAR and partner organizations. Brocade and Cisco are not alone, nor is the I/O networking convergence game the only one in play, as it is being complemented by the IOV and VIO technologies addressing different value propositions in IT data centers.

    Now on to the IOV and VIO aspect along with VMworld.

    For those of you that attended VMworld and managed to get outside of the session rooms, or the media/analyst briefing or reeducation rooms, or out of partner and advisory board meetings to walk the expo hall show floor, there was the usual sea of vendors and technology. There were servers (physical and virtual), storage (physical and virtual), terminals, displays and other hardware, I/O and networking, data protection, security, cloud and managed services, development and visualization tools, infrastructure resource management (IRM) software tools, manufacturers and VARs, consulting firms and even some analysts with booths selling their wares among others.

    Likewise, in the onsite physical data center to support the virtual environment, there were servers, storage, networking, cabling and associated hardware along with applicable software and tucked away in all of that, there were also some converged I/O and networking, and, IOV technologies.

    Yes, IOV, VIO and I/O networking convergence were at VMworld in force, just ask Jon Torr of Xsigo who was beaming like a proud papa wanting to tell anyone who would listen that his wares were part of the VMworld data center (Disclosure: Thanks for the T-Shirt).

    Virtensys had their wares on display with Bob Nappa more than happy to show the technology beyond a GUI demo, including how their solution includes disk drives and an LSI MegaRAID adapter to support VM boot while leveraging off-the-shelf or existing PCIe adapters (SAS, FC, FCoE, Ethernet, SATA, etc.) and allowing adapter sharing across servers; not to mention, they won the best new technology award at VMworld.

    NextIO who is involved in the IOV / VIO game was there along with convergence vendors Brocade, Cisco, Qlogic and Emulex among others. Rest assured, there are many other vendors and VARs in the VIO and IOV game either still in stealth, semi-stealth or having recently launched.

    IOV and VIO are complementary to I/O and networking convergence, with solutions like those from Aprius, Virtensys, Xsigo and NextIO among others focused on PCIe device/resource extension and sharing. While they sound similar, there is in fact confusion as to whether Fibre Channel N_Port ID Virtualization (NPIV) and VMware virtual adapters are IOV and VIO, vs. solutions that are focused on PCIe device/resource extension and sharing.

    Another point of confusion around I/O virtualization and virtual I/O are blade system or blade center connectivity solutions such as HP Virtual Connect or IBM Fabric Manager, not to mention those from Egenera, which add to the equation. Some of the buzzwords that you will be hearing and reading more about include PCIe Single Root IOV (SR-IOV) and Multi-Root IOV (MR-IOV). Think of it this way: within VMware you have virtual adapters, and Fibre Channel NPIV virtual N_Port IDs for LUN mapping/masking, zone management and other tasks.

    IOV enables localized sharing of physical adapters across different physical servers (blades or chassis) with distances measured in a few meters; after all, it's the PCIe bus that is being extended. Thus it is not a replacement for longer-distance, data-center-wide solutions such as FCoE, or even SAS for that matter; they are complementary, or at least should be considered complementary.
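    For readers who want to see what the PCIe sharing side looks like in practice, below is a minimal sketch assuming a modern Linux host: it reads the standard sysfs attributes (sriov_totalvfs, sriov_numvfs) that the kernel exposes for SR-IOV capable PCIe functions. This is a generic Linux illustration of SR-IOV discovery, not a depiction of any of the vendor products named above.

```python
# Minimal sketch: list SR-IOV capable PCIe devices on a modern Linux host via sysfs.
# Assumes a current Linux kernel that exposes the standard sriov_totalvfs and
# sriov_numvfs attributes; not specific to any vendor product discussed in this post.
from pathlib import Path

def sriov_report(pci_root: str = "/sys/bus/pci/devices") -> None:
    for dev in sorted(Path(pci_root).iterdir()):
        total = dev / "sriov_totalvfs"  # present only on SR-IOV capable physical functions
        if not total.exists():
            continue
        enabled = (dev / "sriov_numvfs").read_text().strip()
        print(f"{dev.name}: SR-IOV capable, {enabled} of "
              f"{total.read_text().strip()} virtual functions enabled")

if __name__ == "__main__":
    sriov_report()
```

    Each enabled virtual function appears as its own PCIe function that can be handed to a virtual machine, which is the local, in-the-chassis style of sharing described above, as opposed to the longer-distance FCoE style of convergence.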

    The following are some links to previous articles and related material, including an excerpt (yes, another plug ;)) from chapter 9, “Networking with your servers and storage”, of my new book “The Green and Virtual Data Center” (CRC). Speaking of virtual and physical, “The Green and Virtual Data Center” (CRC) was on sale at the physical VMworld book store this week, as well as at virtual book stores including Amazon.com.


    The Green and Virtual Data Center (CRC) on book shelves at VMworld Book Store

    Links to some IOV, VIO and I/O networking convergence pieces among others, as well as news coverage, comments and interviews can be found here and here with StorageIOblog posts that may be of interest found here and here.

    SearchSystemChannel: Comparing I/O virtualization and virtual I/O benefits – August 2009

    Enterprise Storage Forum: I/O, I/O, It’s Off to Virtual Work We Go – December 2007

    Byte and Switch: I/O, I/O, It’s Off to Virtual Work We Go (Book Chapter Excerpt) – April 2009

    Thus I went to VMworld in San Francisco this past week, as much of the work I do involves convergence, similar to my background; that is, servers, storage, I/O networking, hardware, software, virtualization, data protection, performance and capacity planning.

    As to the virtual work, well, I spent some time on airplanes this week, which, as is often the case, served as my virtual office. Granted it was real work that had to be done; however, I also had a chance to meet up with some fellow tweeters at a tweetup Tuesday evening before getting back on a plane, back in my virtual office.

    Now, I/O, I/O, it's back to real work I go at Server and StorageIO, kind of rhymes doesn't it!


    Cisco wins FCoE Pre-Season and Primaries – Now for the Main Event!

    Storage I/O trends

    Ok, unless you have turned off all of your news feeds and RSS feeds, discontinued all printed industry and trade related publications and stopped visiting blogs and other on-line venues, you have probably heard that Cisco, NetApp, EMC, Emulex and Qlogic have made a series of announcements signaling proof of life for the emerging Fibre Channel over Ethernet (FCoE) based on the Cisco Data Center Ethernet (DCE) or the emerging, more general Converged Enhanced Ethernet (CEE).

    Now if you have not heard, check out the various industry news and information venues and blogs. Likewise, if you are a Brocadian, don't worry and do not get upset by the early poll or exit poll results from the primaries; the real and broad adoption game has not started yet, however, get your game faces on.

    At this point, given the newness of the technology and early adopter status, it's safe to say that Cisco has won the pre-season or primaries for the first FCoE battle. However, despite the hype and proof of life activity, which can be gauged by the counter claims from the iSCSI camps, the main event or real market adoption and deployment will start ramping up in 2009, with broader adoption occurring in the 2010 to 2011 timeframes.

    This is not to say that there will not be any adoption of FCoE between now and the next 12-18 months, quite the opposite; there will be plenty of early adopters, test and pilot cases, as well as Cisco faithful who choose to go the FCoE route vs. another round of Fibre Channel at 8Gb, or who want to go to FCoE at 10Gb instead of iSCSI or NAS at 10GbE for whatever reasons. However the core target market for FCoE is the higher-end, risk-averse environments that shy away from bleeding edge technology unless there is an adjacent and fully redundant blood bank located next door if not on-site.

    Consequently, similar to how Fibre Channel and FICON were slow to ramp up, taking a couple of years from first product and component availability, FCoE will continue to gain ground as the complete and open ecosystem comes into place, including adapters, switches and directors, routers, bridges and gateways, storage systems, as well as management tools and associated training and skills development.

    Watch for vendors to ratchet up discussions about how many FCoE or FCoE-enabled systems are shipped, with an eye on the keyword “FCoE enabled”, which means that the systems may or may not actually be deployed in FCoE mode, rather that they are ready for it; sound familiar to early iSCSI or even FC product shipments?

    Rest assured, FCoE has a very bright future (see here and here) at the mid to high-end of the market, while iSCSI will continue to grow and gain in adoption from the mid-market down to the lower reaches of the SMB market. Of course there will be border skirmishes as iSCSI tries to move up market and FCoE tries to move down market, and of course there will be those that stay the course for another round of Fibre Channel beyond 8Gb, while NAS continues to gain ground in all market segments and SAS at the very low-end where even iSCSI is too expensive. Learn more over at the Fibre Channel Industry Association (FCIA) or FCoE Portal sites as well as at the Brocade, Cisco, EMC, Emulex, NetApp and Qlogic sites among others.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Why XIV is so important to IBM's storage business – It's Not About the Technology or Product!

    Storage I/O trends

    Ok, so I know I’m not taking a popular stance on this one with either camp; the IBMers and their faithful followers as well as the growing legion of XIV followers will take exception I’m sure.

    Likewise, the naysayers would argue why not take a real swing and knock the ball out of the park as if it were baseball batting practice. No, I’m going a different route, as either of those approaches would be too easy and have been pretty well addressed already.

    The XIV product that IBM acquired back in January 2008 is getting a lot of buzz (some good, some not so good) lately in the media and blogosphere (here and here, which in turn lead to many others) as well as in various industry and customer discussions.

    How ironic that the 2008 version of storage in an election year in the U.S. pits the IBM and XIV faithful in one camp and the naysayers and competition in the other camps. To hear both camps go at it with points, counter points, mud-slinging and lipstick slurs should be no surprise when it comes to vendors' points and counter points. In fact the only thing missing from some of the discussions, or excuse me, debates, is an impromptu appearance on stage by Senators Biden, Clinton, McCain or Obama or Governor Palin to weigh in on the issues; after all, it is the 2008 edition of storage in an election year here in the United States.

    I am not going to jump on the XIV-bashing bandwagon, which about everyone in the industry is now doing except for the proponents, or folks taking a step back and looking at the bigger non-partisan picture like Steve Duplessie, the genesis billionaire founder of ESG and probably the future owner of the New England Patriots (American) football team, whose valuation may have dropped enough for Steve to buy now that their star quarterback Tom Brady is out with a leg injury that will take longer to rebuild than all the RAID 6 configured 1 TByte SATA disk drives in the 3PAR, Dell, EMC, HGST, HP, IBM, NetApp, Seagate, Sun and Western Digital (and many other vendors') test labs combined. As for the proponents or faithful, in the spirit of providing freedom of choice and flexible options, the Kool-Aid comes in both XIV orange as well as traditional IBM XIV blue, nuff said.

    In my opinion, which is just that, an opinion, XIV is going to help and may have already helped IBM's storage business, not through the technical architecture or product capabilities or even the number of units that IBM might eventually sell bundled or un-bundled. Rather, XIV is getting IBM exposure and coverage, a reason to sit at the table with some re-invigorated spirit to tell the customer what IBM is doing and, if they pay attention in between slide decks, grab the orders for upgrades, expansion or new installs of the existing IBM storage product line, then continue on with their pitch until the customer asks to place another upgrade or expansion order, quickly grab that order, then continue on with the presentation while touching lightly on the products IBM customers continue to buy and look to upgrade, including:

    IBM disk
    IBM tape – tape and virtual tape
    DS8000 – Mainframe and open systems storage
    DS5000 – New version of DS4000 to compete with new EMC CLARiiON CX4s
    DS4000 – aka the array formerly known as the FAStT
    DS3000 – Entry level iSCSI, SAS and FC storage
    NetApp based N-Series – For NAS Windows CIFS and NFS file sharing
    DR550 archiving solution
    SAN Volume Controller-SVC

    Not to mention other niche products such as the Data Direct Networks-DDN based DCS9550 or IBM developed DS6000 or recently acquired Diligent VTL and de-duping software.

    IBM will be successful with XIV not by how many systems they sell or give away, oh, excuse me, add value to other solutions with. How IBM should be gauging XIV success is based on increased sales of their other storage systems and associated software and networking technologies, including the mainframe attachable DS8000 and the new high performance midrange DS5000 that builds on the success of the DS4000, all of which should have both Brocade and Cisco salivating given the need for more 4GFC and 8GFC Fibre Channel (and FICON for DS8000) ports, switches, adapters and directors. Then there is the NetApp based N series for NAS and file serving to support unstructured data including Web and social networking.

    If I were Brocade, Cisco, NetApp or any of the other many IBM suppliers, I would certainly be putting solution bundles together to ride the XIV wave, however I would also have solution bundles ready to play to the collateral impact of all the other IBM storage products getting coverage. For example, sure, Brocade and Cisco will want to talk about more Fibre Channel and iSCSI switch ports for the XIV; however, also talk performance to be able to unleash the capabilities of the DS8000 and DS5000, or file management tools for the N-Series, as well as bundles around the DR550 archiving solution.

    There is the N-Series NAS gateway that could in theory be used to dress up XIV and actually make it usable for NAS file serving, file sharing and Web 2.0 related applications or unstructured data. There is the IBM SAN Volume Controller (SVC) that virtualizes almost everything except the kitchen sink, which may be in a future release. There is the DR550 archiving and compliance platform that not only provides RAID 6 protected, energy-efficient storage, it also supports movement of data to tape; now if IBM could get the story out on that solution, maybe in the course of talking about XIV the DR550 might get discovered as well. Of course there are all the other backup, archiving, data protection management and associated tools that will get pick-up and traction as well.

    You see, even if IBM quadruples the XIV footprint of revenue installed in production systems with 400% year-over-year growth rates, never mind the naysayers, that would only be about 1/20th or 1/50th of what Dell/EqualLogic, LeftHand via HP/Intel, or even IBM xSeries, not to mention all the others using IBRIX, HP/PolyServe, Isilon, 3PAR, Panasas, Permabit, NEC and the list goes on with similar clustered solutions, have already done.

    The point is to watch for an up-tick, even if only 10%, on the installed DS8000 or DS5000 (new) or DS4000 or DS3000 or N-Series (NetApp) or DR550 (the archive appliance IBM should talk more about), or SVC or the TS series VTLs.

    Even a small jump due to IBM folks getting out in front of customers and business partners adds up: a 10% jump on the installed base of somewhere around 40,000 DS8000 (and earlier ESS versions) is 4,000 new systems; on the combined DS5000/DS4000/DS3000, formerly known as FAStT, with a combined footprint of over 100,000 systems in the field, 10% would be 10,000 new systems. Take the SVC, with about 3,000 instances (or about 11,000 clustered nodes); 10% would mean another 300 new instances. Continue this sort of improvement across the rest of the line and IBM will have paid for not only XIV but also Moshe's (former EMCer, founder of XIV and now IBM fellow) retirement fund.
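    For what it is worth, the 10% scenario above works out as follows; a minimal sketch using only the rough installed-base figures quoted in this post (ballpark numbers, not IBM-reported data).

```python
# Back-of-the-envelope math from the paragraph above: a 10% uplift applied to the
# rough installed-base figures quoted in this post (ballpark numbers, not IBM data).
installed_base = {
    "DS8000 (and earlier ESS)": 40_000,
    "DS5000/DS4000/DS3000 (FAStT family)": 100_000,
    "SVC instances": 3_000,
}

uplift = 0.10  # the 10% scenario discussed above
for product, installed in installed_base.items():
    new_systems = int(installed * uplift)
    print(f"{product}: {installed:,} installed -> ~{new_systems:,} new systems at {uplift:.0%}")
```

    Run it and the totals line up with the numbers above: roughly 4,000, 10,000 and 300 incremental systems respectively.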

    IBM may be laughing all the way to the big blue bank, even having enough money left over to finally buy a clustered NAS file system for Web 2.0 and bulk storage such as IBRIX before someone else like Dell, EMC or HP gets their hands on it, all while everyone else continues to bash how badly XIV is performing. Whether this is a strategy by design or one that IBM simply fell into, it could be brilliant if played out and well executed; however, only time will tell.

    If those who want to rip on XIV really want to inflict damage, cease and ignore XIV for what it is or is not and find something else to talk about; rest assured, if there are other good stories, they will get covered and XIV will be ignored.

    Instead of ripping on XIV, or listening to more XIV hype, I’m going fishing and maybe will come back with a fish story to rival the XIV hype. In the meantime, I look forward to seeing IBM succeed with their storage business as a whole due to IBMers and their partners getting excited to go talk about storage and being surprised by their customers giving them orders for other IBM products, that is, unless the IBM revenue prevention department gets in the way. That happens if, for example, IBMers or their partners, in the excitement of the XIV moment, forget to sell customers what they want and will buy today or are ready to buy, and to grab the low hanging fruit (sales orders for upgrades and new sales) of current and recently enhanced products, while trying to reprogram and re-condition customers to the XIV story.

    Congratulations to IBM and their partners as well as OEM suppliers if they can collectively pull the ruse off and actually stimulate total storage sales while XIV serves as a decoy, and maybe even gets a few more installs and some revenue to help prop it up as a decoy.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Brocade to Buy Foundry Networks – Prelude to Upcoming Converged Ethernet and FCoE Battle

    Storage I/O trends

    The emerging and maturing Fibre Channel over Ethernet (FCoE) and Converged Ethernet (aka Data Center Ethernet, Converged Enhanced Ethernet, Enterprise Ethernet among other marketing names) activity is picking up. Today Brocade took a major step to shore up its already announced FCoE and converged Ethernet story, which includes new directors and converged host bus adapters, by announcing its intention to buy Ethernet high performance switching vendor Foundry Networks in a deal valued around $3B USD and some change. Not a bad deal for Foundry; some would say an expensive deal for Brocade, perhaps paying too much, however it is in line with some of the recent storage and networking related deals. For example, IBM spending around $300M for a startup called XIV who claims to have shipped a few storage systems to a few customers, or Dell spending about $1.3B to buy EqualLogic who had a few thousand customers (could be the deal of the century for Dell compared to IBM and XIV, however time will tell), or EMC and some of its recent purchases like RSA and Avamar or bargains like WysDM, Mozy and Iomega, not to mention Cisco having not been bashful about dropping some serious coin for standalone companies like NuSpeed (where are they now) for iSCSI as well as Andiamo and more recently Nuova. Regardless of whether Mike Klayko (Brocade CEO) paid too much or not, he did what he had to do as part of his continuing activities to re-invent Brocade and leverage their core DNA and business focus of data infrastructures.

    Brocade could probably have made a nice business for a few more years, like some of the companies they have acquired (McData, CNT, INRANGE and so forth) tried to do. However the reality is that sooner or later, they too (Brocade) would probably have been acquired by someone. With the acquisition of Foundry Networks, along with previous announcements for FCoE technologies and their existing products for NAS or file based storage management and iSCSI solutions, Brocade is signaling that they want to fight for survival as opposed to circling the wagons and guarding their installed base and wheel house.

    The upcoming Converged Ethernet and FCoE battle royal is shaping up to start in about 12 to 18 months, sooner for the early adopters who like to test and kick around technology early, or for those who want to go right to 10GbE today instead of 8Gb Fibre Channel, or for those who like bleeding edge solutions. The reality, even with recent proof of life plug-fest demos and claims of being ready for primetime, is that core Brocade customers, particularly at the high-end of the market, tend to be rather risk averse and cautious with their data infrastructure, thus moving at a slower pace. For them, upgrading to 8Gb Fibre Channel may be the near term future while watching FCoE and converged Ethernet or Converged Enhanced Ethernet evolve, and then transitioning in a couple of years. For these risk-averse customers, bleeding edge technology means having a blood bank nearby and on call, as downtime and disruption are not an option.

    Rest assured, with Cisco pushing hard to stimulate the FCoE market and get people to skip 8Gb FC and switch over to 10GbE, there will be plenty more plug fests and proof of life demos, plenty of trash talking by both sides that will rival some of the best heavyweight match-ups.

    Buyers beware, do your homework, and if being an early adopter of FCoE and converged networks is right for you, with due diligence do some testing to see how everything really works in your environment, from storage systems, to adapters, to switches, to protocol converters and gateways, to management and diagnostic software. How does the whole ecosystem that matches your environment work for your scenario? If you are not comfortable with where the FCoE and converged Ethernet technologies, and more importantly the supporting ecosystem, are at, take your time and monitor the situation as it unfolds over the next year or so leading up to the big battle royal between Brocade and Cisco.

    Something that I think is interesting is that here we have Brocade and Cisco squaring off in a convergence battle between a general networking vendor (Cisco) and a storage centric networking vendor (Brocade), both of whom have been built on organic growth as well as acquisitions. What's even more interesting is that around 10 years ago, back when Brocade was just getting started and Cisco was still trying to figure out Fibre Channel and iSCSI, 3Com had the foresight to put together an alliance of storage related partners to get into the then emerging SAN market place. The alliance was to include various storage vendors, switch and HBA as well as router or gateway vendors, along with data and backup software vendors. Before the program could be officially launched, it was canceled, just as all of the promotional material was about to be distributed, due to the poor financial health of 3Com. With a few exceptions, most of the participants in that early program, which was probably a year or two ahead of its time, have either been bought or disappeared altogether. 3Com could have been a major force in a converged LAN and SAN market place instead of now watching Brocade and Cisco from the sidelines.

    For now, congratulations to Mike Klayko and crew for demonstrating that they want to put up a fight and provide their customers an alternative to Cisco, and that they are serious about being a contender in the data infrastructure solution provider fight. For Cisco, it looks like two of your competitors have now become one. Good luck and best wishes to both sides, Brocade and Cisco; I will be watching this battle from ringside as both parties line up and re-align their partner ecosystems.

    Cheers
    gs

    More on Fibre Channel over Ethernet (FCoE)

    Here’s a link to a new StorageIO Industry Trends and Perspective on the emerging “Fibre Channel over Ethernet (FCoE) technology”.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Logo-ology

    In case you did not catch it, NetApp (formerly officially known as Network Appliance), who has been known in the industry for years by the nickname “NetApp”, has joined the ranks of companies like FedEx (formerly known as Federal Express) in shortening their name and logo to their more commonly used and referred to name, which I think is great.

    NetApp is also the latest vendor, as part of their new identity makeover, to adopt a new logo. Now some logos make more sense than others do, some leave you scratching your head as to what they mean, while others, well, leave it at that.

    So in honor of NetApp's new logo, let's have a quick look and see what we can interpret, or at least leave room for pondering what the logo could mean. For example, looking at the following images, granted the Democratic logo has three legs, four feet and a tail showing, and the Republican logo has two legs and a trunk showing, so what if you transposed the blue part of both parties' logos on top of the new NetApp logo?

    NetApp Logo – Neutral and Agnostic???

    Ok, how about this: does the new NetApp logo mean agnostic between block and file, or FC and iSCSI? Or is the middle opening the door to pass into a new world, a world of enablement, or of what some might call vendor lock-in, or is it the data protection vault? Nuff with NetApp, let's have a quick look at some others.

    What about the “E” in the Dell Logo, what's up or down with that?

    Dell Logo

    Then there is the tale of two plumbing and infrastructure vendors, one for IT (Brocade) and one for building and water related applications (Moen), one sold by the International Business Machines (IBM) company and the other represented by the Internal Building Materials trade group.

    Ok, nuff fun for now, back to work.

    Cheers
    GS