CompTIA needs input for their Storage+ certification, can you help?

The CompTIA folks are looking for comments and feedback from those involved with data storage in various ways, as part of planning upcoming enhancements to the Storage+ certification exam.

As a point of disclosure, I am a member of the CompTIA Storage+ certification advisory committee (CAC); however, I do not get paid or receive any other type of remuneration for contributing my time to give them feedback and guidance, other than a thank you and an atta boy for giving back and paying it forward to help others in the IT community, similar to what my predecessors did.

I have been asked to pass this along to others (e.g. you, or whoever forwards it on to you).

Please take a few moments to follow the link here to the CompTIA Storage+ survey, and feel free to share it with others.

What they are looking for is to validate the exam blueprint generated from a recent Job Task Analysis (JTA) process.

In other words, does the certification exam have real-world relevance to what you and your associates actually do with data storage?

This is as opposed to being aligned only with those whose job it is to create test questions and who may not understand what you, the IT pro involved with storage, do or do not do.

If you have ever taken a certification exam and scratched your head wondering why questions that seem to lack real-world relevance were included while ones reflecting practical on-the-job experience were missing, here is your chance to give feedback.

Note that you will not be rewarded with an Amex or Amazon gift card, Starbucks or Dunkin Donuts certificates, a free software download or some other incentive to play and win; however, if you take the survey, let me know and I will be sure to tweet you an atta boy or atta girl! CompTIA is, however, giving away a free T-shirt to every tenth survey taker.

Btw, if you really need something for free, send me a note (I am not that difficult to find), as I have some free copies of Resilient Storage Networks (RSN): Designing Flexible Scalable Data Infrastructures (Elsevier); you simply pay shipping and handling. RSN can be used to help prepare you for various storage exams as well as other day-to-day activities.

CompTIA is looking for survey takers who have some hands-on experience with, or are otherwise involved with, data storage (e.g. if you can spell SAN, NAS, disk or SSD and work with them hands-on, then you are a candidate ;).

Welcome to the CompTIA Storage+ Certification Job Task Analysis (JTA) Survey

  • Your input will help CompTIA evaluate which test objectives are most important to include in the CompTIA Storage+ Certification Exam
  • Your responses are completely confidential.
  • The results will only be viewed in the aggregate.
  • Here is what (and whom) CompTIA is looking for feedback from:

  • Has at least 12 to 18 months of experience with storage-related technologies.
  • Makes recommendations and decisions regarding storage configuration.
  • Facilitates data security and data integrity.
  • Supports a multiplatform and multiprotocol storage environment with little assistance.
  • Has basic knowledge of cloud technologies and object storage concepts.
  • As a small token of CompTIA's appreciation for your participation, they will provide an official CompTIA T-shirt to every tenth (1 of every 10) person who completes this survey. Go here for official rules.

    Click here to complete the CompTIA Storage+ survey

    Contact CompTIA with any survey issues, research@comptia.org

What say you? Take a few minutes like I did and give some feedback; you will not be on the hook for anything, and if you do get spammed by the CompTIA folks, let me know and I in turn will spam them back for spamming you as well as me.

    Ok, nuff said

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    SNIA’s new SPDEcon conference


This is a new episode in the continuing StorageIO industry trends and perspectives podcast series (you can view more episodes or shows along with other audio and video content here), which you can also listen to via iTunes or via your preferred means using this RSS feed (https://storageio.com/StorageIO_Podcast.xml).


In this episode from SNW Spring 2013 in Orlando, Florida, Bruce Ravid (@BruceRave) and I visit with our guests, SNIA Chairman Wayne Adams (@wma01606) and SW Worth from SNIA Education. Wayne was one of our first podcast guests in the episode titled Waynes World, SNIA and SNW, which you can listen to here.


Our conversation centers on the new SNIA SPDEcon conference that will occur June 10th in Santa Clara, California.


The tagline of the event is for experts, by experts, and those who want to become experts. Listen to our conversation and check out the snia.org and snia.org/spdecon websites to sign up and take part in this new event.

    Click here (right-click to download MP3 file) or on the microphone image to listen to the conversation with Wayne and SW.



Watch (and listen) for more StorageIO industry trends and perspectives audio blog posts, podcasts and other upcoming events. Also be sure to check out other related podcasts, videos, posts, tips and industry commentary at StorageIO.com and StorageIOblog.com.

    Enjoy this episode from SNW Spring 2013 with Wayne Adams and SW Worth of SNIA to learn about the new SPDEcon conference.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Oracle, Xsigo, VMware, Nicira, SDN and IOV: IO IO its off to work they go


In case you missed it, VMware recently announced spending $1.05 billion USD to acquire startup Nicira for their virtualization and software technology that enables software defined networks (SDN). Also last week Oracle was in the news getting its hands slapped (by the BBB NAD, after an IBM complaint) for making misleading performance claims in its advertising vs. IBM.

On the heels of VMware buying Nicira for software defined networking (SDN), or what is also known as IO virtualization (IOV) and virtualized networking, Oracle is now claiming its own SDN capabilities with its announced intent to acquire Xsigo. Founded in 2004, Xsigo has a hardware platform combined with software that enables attaching servers to different Fibre Channel (SAN) and Ethernet based (LAN) networks with its version of IOV.

Now it is Oracle that has announced it will be acquiring IO, networking and virtualization hardware and software vendor Xsigo for an undisclosed amount. Xsigo has made its name in the IO virtualization (IOV) and converged networking space, along with server and storage virtualization, over the past several years, including partnerships with various vendors.

    Buzz word bingo

Technology buzzwords and buzz terms can often be a gray area, leaving plenty of room for marketers and PR folks to run with. Case in point: AaaS, Big data, Cloud, Compliance, Green, IaaS, IOV, Orchestration, PaaS and Virtualization, among other buzzword bingo or XaaS topics. Since Xsigo has been out front in messaging and industry awareness around IO networking convergence of Ethernet based Local Area Networks (LANs) and Fibre Channel (FC) based Storage Area Networks (SANs), along with embracing InfiniBand, it made sense for them to play to their strength, which is IO virtualization (aka IOV).

To me and others (here and here and here), it is interesting that Xsigo has not laid claim to being part of the software defined networking (SDN) movement or the affiliated OpenFlow networking initiatives, as happened with Nicira (and Oracle for that matter). When the Oracle marketing and PR folks put their press release out on a Monday morning, some of the media and press, trade industry, financial and general news agencies alike, took the Oracle script hook, line and sinker, running with it.

What was effective is how well many industry trade pubs and their analysts simply picked up the press release story and ran with it, in the all too common race to see who could get the news or story out first, or in some cases before it actually happens.


To be clear, not all pubs jumped, including some of those mentioned by Greg Knieriemen (aka @knieriemen) over at the Speaking in Tech highlights. I know some who took the time to call, ask around and leverage their journalistic training to dig, research and find out what this really meant vs. simply taking the script and running with it. An example of one of those calls was with Beth Pariseu (aka @pariseautt); you can read her story here and here.

Interestingly enough, the Xsigo marketers had not embraced the SDN term, sticking with the better known (at least in some circles) IOV and VIO descriptions. What is also interesting is that just last week Oracle marketing had its hands slapped by the Better Business Bureau (BBB) NAD after IBM complained about unfair performance based advertisements for ExaData.


    Hmm, I wonder if the SDN police or somebody else will lodge a similar complaint with the BBB on behalf of those doing SDN?

    Both Oracle and Xsigo along with other InfiniBand (and some Ethernet and PCIe) focused vendors are members of the Open Fabric initiative, not to be confused with the group working on OpenFlow.


    Here are some other things to think about:

Oracle has a history of doing different acquisitions without disclosing terms, as well as doing them based on earn-outs, as was the case with Pillar.

Oracle uses Ethernet in its servers and appliances and has been an adopter of InfiniBand, primarily for node to node communication, however also for server to application traffic.

Oracle is also an investor in Mellanox, the folks that make InfiniBand and Ethernet products.

    Oracle has built various stacks including ExaData (Database machine), Exalogic, Exalytics and Database Appliance in addition to their 7000 series of storage systems.

    Oracle has done earlier virtualization related acquisitions including Virtual Iron.

    Oracle has a reputation with some of their customers who love to hate them for various reasons.

Oracle has a reputation of being aggressive, even by the standards of other aggressive market leaders.

Integrated solution stacks (aka stack wars), or what some remember as bundles, continue, and Oracle has many such solutions.

What will happen to Xsigo as you know it today (besides what the press releases are saying)?

    While Xsigo was not a member of the Open Networking Forum (ONF), Oracle is.

    Xsigo is a member of the Open Fabric Alliance along with Oracle, Mellanox and others interested in servers, PCIe, InfiniBand, Ethernet, networking and storage.


    What’s my take?

While there are similarities in that both Nicira and Xsigo are involved with IO virtualization, what they are doing, how they are doing it, who they are doing it with, along with where they can play, vary.

Not sure what Oracle paid, however assuming that it was in the couple of million dollars or less range, in cash or some combination with stock, both they and the investors, as well as some of the employees, friends and families, did ok.

Oracle also gets some intellectual property that it can combine with earlier acquisitions via Sun and Virtual Iron, along with its investment in InfiniBand (and now also Ethernet) vendor Mellanox.

    Likewise, Oracle gets some extra technology that they can leverage in their various stacked or integrated (aka bundled) solutions for both virtual and physical environments.

For Xsigo customers the good news is that you now know who will be buying the company; however, there should be questions about the future beyond what is being said in the press releases.

Does this acquisition give Oracle a play in the software defined networking space like Nicira gives VMware? I would say no, given the hardware dependency; however, it does give Oracle some extra technology to play with.

Likewise, while SDN is important and a popular buzzword topic, since OpenFlow comes up in conversations, perhaps that should be more of the focus vs. whether a solution is all software or a combination of hardware and software.


I also find it entertaining how last week the Better Business Bureau (BBB) and NAD (National Advertising Division) slapped Oracle's hands after IBM complaints of misleading performance claims about Oracle ExaData vs. IBM. The reason I find it entertaining is not that Oracle had its hands slapped or that IBM complained to the BBB, rather how the Oracle marketers and PR folks came up with a spin around what could be called a proprietary SDN (hmm, pSDN?) story, fed it to the press and media, who then ran with it.

I'm not convinced that this is an all out launch of a war by Oracle vs. Cisco, let alone any of the other networking vendors, as some have speculated (it makes for good headlines though). Instead I'm seeing it as more of an opportunistic acquisition by Oracle, most likely at a good middle of summer price. Now if Oracle really wanted to go to battle with Cisco (and others), then there are others to buy such as Brocade, Juniper, etc. However there are other opportunities for Oracle to be focused (or sidetracked) on right now.

Oh, let's also see what Cisco has to say about all of this, which should be interesting.

    Additional related links:
    Data Center I/O Bottlenecks Performance Issues and Impacts
    I/O, I/O, Its off to Virtual Work and VMworld I Go (or went)
    I/O Virtualization (IOV) Revisited
    Industry Trends and Perspectives: Converged Networking and IO Virtualization (IOV)
    The function of XaaS(X) Pick a letter
    What is the best kind of IO? The one you do not have to do
    Why FC and FCoE vendors get beat up over bandwidth?


If you are interested in learning more about IOV, Xsigo, or are having trouble sleeping, click here, here, here, here, here, here, here, here, here, here, here, here, here, or here (I think that's enough links for now ;).

Ok, nuff said for now as I have probably requalified for being on the Oracle you know what list for not sticking to the story script, oops, excuse me, I mean press release message.

    Cheers Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Cloud and Virtual Data Storage Networking

For those who have read any of my previous posts, seen some of my articles, newsletters, videos, podcasts, webcasts or in person appearances, you may have heard that I have a new book coming out this summer.

Here in the northern hemisphere it's summer (well, technically the solstice is just around the corner) and in Minnesota the ice (from the winter) is off the lakes and rivers. Granted, there is some floating ice that fell out of coolers for keeping beverages cool. This means that it is also fishing (and catching) season on the Scenic St. Croix River.

Karen of Arcola catches first fish of the 2011 season, St. Croix River, striped bass. Greg showing his first catch of the 2011 season, St. Croix walleye aka Walter or Wanda.

FTC disclosures (and for fun): Karenofarcola is wearing a StorageIO baseball cap and I'm wearing a cap from a vendor marketing person who sent several, as they too enjoy fishing and boating. Funny thing about the cap: all of the river rats and fishing people think it is from the people who make rod reels instead of solutions that go around tape and disk reels. Note, if you feel compelled to send me baseball caps, send at least a pair so there is a backup, standby, spare or extra one for a guest. The Mustang survival jacket that I'm wearing with the Seadoo logo is something I bought myself. I did get a discount, however, since there was a Seadoo logo on it and I used to have Seadoo jet boats. Btw, that was some disclosure fun and humor!

Ok, enough of the fun stuff, let's get back to the main theme of this post.

My new book is the third in a series of solo projects that includes Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier) and The Green and Virtual Data Center (CRC).

    While the official launch and general availability will be later in the summer, following are some links and related content to give you advance information about the new book.

    Cloud and Virtual Data Storage Networking

Click on the above image, which will take you to the CRC Press page where you can learn more, including what the book is about, view a table of contents, see reviews and more. Also check out the video below to learn more, as well as visit my main web site where you can learn about Cloud and Virtual Data Storage Networking and my other books, and view (or listen to) related content such as white papers, solution briefs, articles, tips, webcasts and podcasts, as well as view the recent and upcoming events schedule.

I also invite you to join the Cloud and Virtual Data Storage Networking group.

You can also view the short video at dailymotion, metacafe, blip.tv, veoh, flickr, and photobucket among other venues.

If you are interested in being a reviewer, send a note to cvdsn@storageio.com with your name, blog or website and contact information including shipping address (sorry, no PO boxes) plus telephone (or Skype) number. Also indicate if you are a blogger, press/media, freelance writer, analyst, consultant, VAR, vendor, investor, IT professional or other.

Watch for more news and information as we get closer to the formal launch and release. In the meantime, you can pre-order your copy now at Amazon, CRC Press and other venues around the world.

    Ok, time to get back to work or go fishing, nuff said

    Cheers Gs

    Greg Schulz – Author The Green and Virtual Data Center (CRC), Resilient Storage Networks (Elsevier) and coming summer 2011 Cloud and Virtual Data Storage Networking (CRC)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

    EMC VPLEX: Virtual Storage Redefined or Respun?

In a flurry of announcements coinciding with EMCworld, occurring in Boston this week of May 10, 2010, EMC officially unveiled its Virtual Storage vision initiative (aka the twitter hashtag #emcvs) and the initial VPLEX product. The Virtual Storage initiative was virtually previewed back in March (see my previous post here along with one from Stu Miniman (twitter @stu) of EMC here or here), and according to EMC the VPLEX product was made generally available (GA) back in April.

    The Virtual Storage vision and associated announcements consisted of:

    • Virtual Storage vision – Big picture  initiative view of what and how to enable private clouds
    • VPLEX architecture – Big picture view of federated data storage management and access
    • First VPLEX based product – Local and campus (Metro to about 100km) solutions
    • Glimpses of how the architecture will evolve with future products and enhancements


    Figure 1: EMC Virtual Storage and Virtual Server Vision and Big Pictures

    The Big Picture
The EMC Virtual Storage vision (Figure 1) is the foundation of a private IT cloud, which should enable characteristics including transparency, agility, flexibility, efficiency, always on availability, resiliency, security, on demand access and scalability. Think of it this way: EMC wants to enable and facilitate for storage what is being done by server virtualization hypervisor vendors including VMware (which happens to be owned by EMC), Microsoft HyperV and Citrix/Xen among others. That is, break down the physical barriers or constraints around storage similar to how virtual servers release applications and their operating systems from being tied to a physical server.

While the current focus of desktop, server and storage virtualization has been on consolidation and cost avoidance, the next big wave or phase is life beyond consolidation, where the emphasis expands to agility, flexibility, ease of use, transparency and portability (Figure 2). In this next phase, which puts the emphasis on enablement and doing more with what you have while enhancing business agility, the focus extends from how much can be consolidated, or the number of virtual machines per physical machine, to using virtualization for flexibility and transparency (read more here and here, or watch here).


    Figure 2: Virtual Storage Big Picture

    That same trend will be happening with storage where the emphasis also expands from how much data can be squeezed or consolidated onto a given device to that of enabling flexibility and agility for load balancing, BC/DR, technology upgrades, maintenance and other routine Infrastructure Resource Management (IRM) tasks.

For EMC, achieving this vision (both directly for storage, and indirectly for servers via their VMware subsidiary) is via local and distributed (metro and wide area) federation management of physical resources to support virtual data center operations. EMC building blocks for delivering this vision include VPLEX, data and storage management federation across EMC and third party products, FAST (fully automated storage tiering), SSD, data footprint reduction and data protection management products, among others.

Buzzword bingo aside (e.g. LAN, SAN, MAN, WAN, Pots and Pans), along with Automation, DWDM, Asynchronous, BC, BE or Back End, Cache coherency, Cache consistency, Chargeback, Cluster, db loss, DCB, Director, Distributed, DLM or Distributed Lock Management, DR, FCoE or Fibre Channel over Ethernet, FE or Front End, Federated, FAST, Fibre Channel, Grid, HyperV, Hypervisor, IRM or Infrastructure Resource Management, I/O redirection, I/O shipping, Latency, Look aside, Metadata, Metrics, Public/Private Cloud, Read ahead, Replication, SAS, Shipping off to Boston, SRA, SRM, SSD, Stale Reads, Storage virtualization, Synchronization, Synchronous, Tiering, Virtual storage, VMware and Write through among many other possible candidates, the big picture here is about enabling flexibility, agility, ease of deployment and management, along with boosting resource usage effectiveness and presumably productivity on a local, metro and future global basis.


    Figure 3: EMC Storage Federation and Enabling Technology Big Picture

    The VPLEX Big Picture
Some of the tenets of the VPLEX architecture (Figure 3) include a scale out cluster or grid design for local and distributed (metro and wide area) access where you can start small and evolve as needed in a predictable and deterministic manner.


    Figure 4: Generic Virtual Storage (Local SAN and MAN/WAN) and where VPLEX fits

The VPLEX architecture is targeted towards enabling next generation data centers, including private clouds, where ease and transparency of data movement, access and agility are essential. VPLEX sits atop existing EMC and third party storage as a virtualization layer between physical or virtual servers and, in theory, other storage systems that rely on underlying block storage. For example, in theory a NAS (NFS, CIFS, and AFS) gateway, CAS content archiving or object based storage system, or purpose specific database machine could sit between actual application servers and VPLEX, enabling multiple layers of flexibility and agility for larger environments.

At the heart of the architecture is an engine running a highly distributed data caching algorithm that uses an approach where a minimal amount of data is sent to other nodes or members in the VPLEX environment to reduce overhead and latency (in theory boosting performance). For data consistency and integrity, a distributed cache coherency model is employed to protect against stale reads and writes, along with providing load balancing, resource sharing and failover for high availability. A VPLEX environment consists of a federated management view across multiple VPLEX clusters, including the ability to create a stretch volume that is accessible across multiple VPLEX clusters (Figure 5).


    Figure 5: EMC VPLEX Big Picture


    Figure 6: EMC VPLEX Local with 1 to 4 Engines

Each VPLEX local cluster (Figure 6) is made up of 1 to 4 engines (Figure 7) per rack, with each engine consisting of two directors, each having 64GByte of cache, localized Intel compute processors, and 16 Front End (FE) and 16 Back End (BE) Fibre Channel ports configured for high availability (HA). Communication between the directors and engines is Fibre Channel based. Metadata is moved between the directors and engines in 4K blocks to maintain consistency and coherency. Components are fully redundant and include phone home support.


    Figure 7: EMC VPLEX Engine with redundant directors
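To help picture how those pieces nest together, here is a minimal sketch in Python of the cluster, engine and director hierarchy described above; the class names and helper are my own illustration rather than any EMC data model, and the numbers simply restate the figures cited in the text.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Director:
    # Per the description above: each director has 64GB of cache plus
    # 16 front end (host facing) and 16 back end (storage facing) FC ports.
    cache_gb: int = 64
    fe_fc_ports: int = 16
    be_fc_ports: int = 16

@dataclass
class Engine:
    # Each engine consists of two redundant directors.
    directors: List[Director] = field(default_factory=lambda: [Director(), Director()])

@dataclass
class VplexLocalCluster:
    # A local cluster is 1 to 4 engines in a rack.
    engines: List[Engine] = field(default_factory=list)

    def total_cache_gb(self) -> int:
        return sum(d.cache_gb for e in self.engines for d in e.directors)

# Example: a fully populated four engine local cluster
cluster = VplexLocalCluster(engines=[Engine() for _ in range(4)])
print(cluster.total_cache_gb())  # 4 engines x 2 directors x 64GB = 512GB
```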

Host servers initially supported by VPLEX include VMware, Cisco UCS, Windows, Solaris, IBM AIX, HPUX and Linux, along with EMC PowerPath and Windows multipath management drivers. Local server clusters supported include Symantec VCS, Microsoft MSCS and Oracle RAC, along with various volume managers. SAN fabric connectivity supported includes Brocade and Cisco as well as legacy McData based products.

VPLEX also supports cache write thru (Figure 8) to preserve underlying array based functionality and performance, with 8,000 total virtualized LUNs per system. Note that underlying LUNs can be aggregated or simply passed through the VPLEX. Storage that attaches to the BE Fibre Channel ports includes EMC Symmetrix VMAX and DMX along with CLARiiON CX and CX4. Third party storage supported includes HDS9000 and USPV/VM along with IBM DS8000, with others to be added as they are certified. In theory, given that VPLEX presents block based storage to hosts, one would also expect NAS, CAS or other object based gateways and servers that rely on underlying block storage to be supported in the future.


    Figure 8: VPLEX Architecture and Distributed Cache Overview

Functionality that can be performed between the cluster nodes and engines with VPLEX includes data migration and workload movement across different physical storage systems or sites, along with shared access with read caching on a local and distributed basis. LUNs can also be pooled across different vendors' underlying storage solutions, which also retain their native feature functionality via VPLEX write thru caching.

Reads from various servers can be resolved by any node or engine, which checks its cache tables (Figure 8) to determine where to resolve the actual I/O operation from. Data integrity checks are also maintained to prevent stale read or write operations from occurring. Actual metadata communication between nodes is very small to enable statefulness while reducing overhead and maximizing performance. When a change to cached data occurs, meta information is sent to other nodes to maintain the distributed cache management index schema. Note that only pointers to where data and fresh cache entries reside are stored and communicated in the metadata via the distributed caching algorithm.
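To make the pointer based approach a little more concrete, below is a deliberately simplified sketch (in Python) of a distributed cache directory: nodes exchange only small metadata saying which member holds a fresh copy of a block, invalidate peers on writes, and write through to the underlying array. This is my own toy illustration of the general idea described above, not EMC's actual algorithm, code or interfaces.

```python
class ArrayStub:
    """Stand-in for an underlying back end storage array."""
    def __init__(self):
        self.blocks = {}
    def read(self, block):
        return self.blocks.get(block)
    def write(self, block, data):
        self.blocks[block] = data

class CacheNode:
    """Toy cache node: shares pointers (metadata), not the cached data itself."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.peers = {}        # node_id -> other CacheNode objects
        self.local_cache = {}  # block -> locally cached data
        self.directory = {}    # block -> node_id believed to hold a fresh copy

    def read(self, block, backend):
        if block in self.local_cache:                    # local cache hit
            return self.local_cache[block]
        owner = self.directory.get(block)
        data = None
        if owner is not None and owner != self.node_id:  # fresh copy held elsewhere
            data = self.peers[owner].local_cache.get(block)
        if data is None:                                 # fall through to the array
            data = backend.read(block)
        self.local_cache[block] = data
        self._advertise(block)                           # publish a pointer, not data
        return data

    def write(self, block, data, backend):
        backend.write(block, data)                       # write through to the array
        self.local_cache[block] = data
        for peer in self.peers.values():                 # invalidate stale copies
            peer.local_cache.pop(block, None)
        self._advertise(block)

    def _advertise(self, block):
        # Metadata only: a tiny record of where the fresh copy now lives.
        self.directory[block] = self.node_id
        for peer in self.peers.values():
            peer.directory[block] = self.node_id

# Wire up a trivial two node federation and exercise it
array = ArrayStub()
a, b = CacheNode("a"), CacheNode("b")
a.peers["b"], b.peers["a"] = b, a
a.write(7, "fresh data", array)
print(b.read(7, array))  # resolved via the pointer to node "a", not the array
```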


    Figure 9: EMC VPLEX Metro Today

For metro deployments, two clusters (Figure 9) are utilized, with distances supported up to about 100km or about 5ms of latency in a synchronous manner, utilizing long distance Fibre Channel optics and transceivers including Dense Wave Division Multiplexing (DWDM) technologies (see Chapter 6: Metropolitan and Wide Area Storage Networking in Resilient Storage Networks (Elsevier) for additional details on LAN, MAN and WAN topics).
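As a rough back of the envelope check on why roughly 100km lines up with a usable synchronous distance, light in fiber travels at about 5 microseconds per kilometer one way, so a synchronous write pays at least a round trip of propagation delay before any switch, protocol or array service time is added. The numbers below are generic estimates of my own, not vendor specifications:

```python
def round_trip_propagation_ms(distance_km, us_per_km=5.0):
    # ~5 microseconds per km one way in optical fiber; a synchronous write
    # must go out and come back, hence the factor of two.
    return 2 * distance_km * us_per_km / 1000.0

print(round_trip_propagation_ms(100))  # ~1.0 ms of raw propagation for 100km
# The rest of a several-millisecond synchronous budget gets consumed by
# DWDM gear, switches, protocol handshakes and array service time.
```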

Initially, EMC is supporting local or metro (including campus) based VPLEX deployments requiring synchronous communications; however, asynchronous (WAN) geo and global based solutions are planned for the future (Figure 10).


    Figure 10: EMC VPLEX Future Wide Area and Global

    Online Workload Migration across Systems and Sites
    Online workload or data movement and migration across storage systems or sites is not new with solutions available from different vendors including Brocade, Cisco, Datacore, EMC, Fujitsu, HDS, HP, IBM, LSI and NetApp among others.

For synchronization and data mobility operations such as a VMware Vmotion or Microsoft HyperV Live migration over distance, information is written to separate LUNs in different locations, across what are known as stretch volumes, to enable non-disruptive workload relocation across different storage systems (arrays) from various vendors. Once synchronization is completed, the original source can be disconnected or taken offline for maintenance or other common IRM tasks. Note that at least two LUNs are required; or put another way, for every stretch volume, two LUNs are subtracted from the total number of available LUNs, similar to how RAID 1 mirroring requires at least two disk drives.
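Conceptually, a stretch volume behaves like a RAID 1 mirror whose two legs live at different sites: writes go to both LUNs, reads can be served from either, and once the legs are in sync one of them can be detached for maintenance or migration. The sketch below is my own simplification of that behavior, not VPLEX code or its actual interfaces:

```python
class StretchVolume:
    """Toy model: one logical volume backed by two LUNs at different sites."""
    def __init__(self, lun_site_a, lun_site_b):
        self.legs = {"site_a": lun_site_a, "site_b": lun_site_b}

    def write(self, block, data):
        # Synchronous mirroring: the write completes only after both legs ack.
        for lun in self.legs.values():
            lun[block] = data

    def read(self, block, prefer="site_a"):
        # Reads can be satisfied from either site's copy.
        return self.legs[prefer].get(block)

    def detach(self, site):
        # Once synchronized, one leg can be taken offline for maintenance,
        # migration or a technology refresh without disrupting the host.
        return self.legs.pop(site)

# Example: mirror a write across sites, then retire the original array
vol = StretchVolume(lun_site_a={}, lun_site_b={})
vol.write(0, "payload")
old_leg = vol.detach("site_a")                  # workload keeps running from site_b
assert vol.read(0, prefer="site_b") == "payload"
```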

Unlike other approaches that, for coherency and performance, rely on either no cached data or extensive amounts of cached data along with the subsequent overhead of maintaining statefulness (consistency and coherency), including avoiding stale reads or writes, VPLEX relies on a combination of distributed cache lookup tables along with pass thru access to underlying storage when or where needed. Consequently, large amounts of data do not need to be cached or shipped between VPLEX devices to maintain data consistency, coherency or performance, which should also help to keep costs affordable.

    Approach is not unique, it is the implementation
Some storage virtualization solutions, whether software based running on an appliance or network switch, or hardware system based, have had a focus of emulating or providing capabilities that compete with those of mid to high end storage systems. The premise has been to use lower cost, less feature enabled storage systems aggregated behind the appliance, switch or hardware based system to provide advanced data and storage management capabilities found in traditional higher end storage products.

VPLEX, while like any tool or technology it could be and probably will be made to do things other than what it is intended for, is really focused on flexibility, transparency and agility, as opposed to being used as a means of replacing underlying storage system functionality. What this means is that while there are data movement and migration capabilities, including the ability to synchronize data across sites or locations, VPLEX by itself is not a replacement for the underlying functionality present in both EMC and third party (e.g. HDS, HP, IBM, NetApp, Oracle/Sun or other) storage systems.

This will make for some interesting discussions, debates and apples to oranges comparisons, in particular with those vendors whose products are focused around replacing or providing functionality not found in underlying storage system products.

In a nutshell summary, VPLEX and the Virtual Storage story (vision) are about enabling agility, resiliency, flexibility, and data and resource mobility to simplify IT Infrastructure Resource Management (IRM). One of the key themes of global storage federation is anywhere access on a local, metro, wide area and global basis across both EMC and heterogeneous third party vendor hardware.

Let's Put it Together: When and Where to use a VPLEX
    While many storage virtualization solutions are focused around consolidation or pooling, similar to first wave server and desktop virtualization, the next general broad wave of virtualization is life beyond consolidation. That means expanding the focus of virtualization from consolidation, pooling or LUN aggregation to that of enabling transparency for agility, flexibility, data or system movement, technology refresh and other common time consuming IRM tasks.

Some applications or usage scenarios in the future should include, in addition to VMware Vmotion, Microsoft HyperV and Microsoft clustering, other host server clustering solutions.


    Figure 11: EMC VPLEX Usage Scenarios

    Thoughts and Industry Trends Perspectives:

    The following are various thoughts, comments, perspectives and questions pertaining to this and storage, virtualization and IT in general.

    Is this truly unique as is being claimed?

Interestingly, the message I'm hearing out of EMC is not the claim that this is unique, revolutionary or the industry's first, as is so often the case with vendors, rather that it is their implementation and ability to deploy on a broad basis that is unique. Now granted, you will probably hear, as is often the case with any vendor or fan boy/fan girl, spins of it being unique, and I'm sure this will also serve up plenty of fodder for mudslinging in the blogosphere, YouTube galleries, twitter land and beyond.

    What is the DejaVu factor here?

For some it will be nonexistent, yet for others there is certainly a DejaVu depending on your experience or what you have seen and heard in the past. In some ways this is the manifestation of many visions and initiatives from the late 90s and early 2000s, when storage virtualization or virtual storage in an open context jumped into the limelight coinciding with SAN activity. There have been products rolled out along with proof of concept technology demonstrators, some of which are still in the market; others, including companies, have fallen by the wayside for a variety of reasons.

    Consequently if you were part of or read or listened to any of the discussions and initiatives from Brocade (Rhapsody), Cisco (SVC, VxVM and others), INRANGE (Tempest) or its successor CNT UMD not to mention IBM SVC, StorAge (now LSI), Incipient (now part of Texas Memory) or Troika among others you should have some DejaVu.

I guess that also begs the question of what VPLEX is: in band, out of band, or hybrid fast path control path? From what I have seen it appears to be a fast path approach combined with distributed caching, as opposed to a cache centric in-band approach such as IBM SVC (either on a server or as was tried on the Cisco special service blade) among others.

    Likewise if you are familiar with IBM Mainframe GDPS or even EMC GDDR as well as OpenVMS Local and Metro clusters with distributed lock management you should also have DejaVu. Similarly if you had looked at or are familiar with any of the YottaYotta products or presentations, this should also be familiar as EMC acquired the assets of that now defunct company.

    Is this a way for EMC to sell more hardware along with software products?

By removing barriers and enabling IT staffs to support more data on more storage in a denser and more agile footprint, the answer should be yes, something that we may see other vendors emulate, or make noise about what they can do or have been doing already.

    How is this virtual storage spin different from the storage virtualization story?

That all depends on your view or definition as well as belief systems and preferences for what is or what is not virtual storage vs. storage virtualization. If you are among those who believe that storage virtualization is virtualization if and only if it involves software running on some hardware appliance or a vendor's storage system for aggregation and common functionality, then you probably won't see this as virtual storage, let alone storage virtualization. However for others it will be confusing, hence EMC introducing terms such as federation and avoiding terms including grid, to minimize confusion yet play off of the cloud crowd commotion.

    Is VPLEX a replacement for storage system based tiering and replication?

I do not believe so, and even though some vendors are making claims that tiered storage is dead, just like some vendors declared a couple of years ago that disk drives were going to be dead this year at the hands of SSD, neither has come to pass, so to speak (pun intended). What this means for VPLEX is that it leverages the underlying automated or manual tiering found in storage systems, such as EMC FAST enabled systems or similar policy and manual functions in third party products.

What VPLEX brings to the table is the ability to transparently present a LUN or volume locally or over distance with shared access while maintaining cache and data coherency. This means that if a LUN or volume moves, the applications, file systems or volume managers expecting to access that storage will not be surprised, panic or encounter failover problems. Of course there will be plenty of details to be dug into to see how it all actually works, as is the case with any new technology.

    Who is this for?

I see this as being for environments that need flexibility and agility across multiple storage systems, either from one or multiple vendors, on a local, metro or wide area basis. This is for those environments that need the ability to move workloads, applications and data between different storage systems and sites for maintenance, upgrades, technology refresh, BC/DR, load balancing or other IRM functions, similar to how they would use virtual server migration such as VMotion or Live migration among others.

    Do VPLEX and Virtual Storage eliminate need for Storage System functionality?

I see some storage virtualization solutions or appliances that have a focus of replacing underlying storage system functionality instead of coexisting with or complementing it. A way to test for this approach is to listen for or read whether the vendor or provider says anything along the lines of eliminating vendor lock in or control of the underlying storage system. That can be a sign of the golden rule of virtualization: whoever controls the virtualization functionality (at the server hypervisor or storage) controls the gold! This is why on the server side of things we are starting to see tiered hypervisors, similar to tiered servers and storage, where mixed hypervisors are being used for different purposes. Will we see tiered storage hypervisors or virtual storage solutions? The answer could be perhaps, or it depends.

Was Invista a failure for not going into production, and is this a second attempt at virtualization?

There is a popular myth in the industry that Invista never saw the light of day outside of trade show expos or other demos; however, the reality is that there are actual customer deployments. Invista, unlike other storage virtualization products, had a different focus, which was around enabling agility and flexibility for common IRM tasks, similar to the expanded focus of VPLEX. Consequently Invista has often been in apples to oranges comparisons with other virtualization appliances that have pooling as their focus along with other functions, or in some cases serve as an appliance based storage system.

The focus around Invista, and its usage by those customers who have deployed it that I have talked with, is around enabling agility for maintenance, facilitating upgrades, moves or reconfiguration, and other common IRM tasks vs. using it for pooling of storage for consolidation purposes. Thus I see VPLEX extending the vision of Invista in a role of complementing and leveraging underlying storage system functionality instead of trying to replace those capabilities with those of the storage virtualizer.

    Is this a replacement for EMC Invista?

According to EMC the answer is no, and customers using Invista (yes, there are customers that I have actually talked to) will continue to be supported. However, I suspect that over time Invista will either become a low end entry point for VPLEX, or an entry level VPLEX solution will appear sometime in the future.

    How does this stack up or compare with what others are doing?

If you are looking to compare to cache centric platforms such as IBM's SVC, which adds extensive functionality and capabilities within the storage virtualization framework, this is an apples to oranges comparison. VPLEX provides cache pointers on a local and global basis, functioning in a model that complements the underlying storage system, whereas SVC caches on a per cluster basis and enhances the functionality of the underlying storage system. Rest assured there will be other apples to oranges comparisons made between these platforms.

    How will this be priced?

When I asked EMC about pricing, they would not commit to a specific price prior to the announcement, other than indicating that there will be options for on demand or consumption (e.g. cloud pricing), pricing per engine capacity, as well as subscription models (pay as you go).

    What is the overhead of VPLEX?

While EMC runs various workload simulations (including benchmarks) internally, as well as some publicly (e.g. Microsoft ESRP among others), they have been opposed to some storage simulation benchmarks such as SPC. The EMC opposition to simulations such as SPC has been varied; however, this could be a good and interesting opportunity for them to silence the industry (including myself) who continue to ask them (along with a couple of other vendors including IBM and their XIV) when they will release public results.

The interesting opportunity I see for EMC is that they do not even have to benchmark one of their own storage systems such as a CLARiiON or VMAX; instead, simply show the performance of some third party product that is already tested on the SPC website, and then a submission with that product running attached to a VPLEX.

    If the performance or low latency forecasts are as good as they have been described, EMC can accomplish a couple of things by:

    • Demonstrating the low latency and minimal to no overhead of VPLEX
    • Show VPLEX with a third party product comparing latency before and after
    • Provide a comparison to other virtualization platforms including IBM SVC

As for EMC submitting a VMAX or CLARiiON SPC test in general, I'm not going to hold my breath for that; instead, I will continue to look at the other public workload tests such as ESRP.

    Additional related reading material and links:

    Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier)
    Chapter 3: Networking Your Storage
    Chapter 4: Storage and IO Networking
    Chapter 6: Metropolitan and Wide Area Storage Networking
    Chapter 11: Storage Management
    Chapter 16: Metropolitan and Wide Area Examples

    The Green and Virtual Data Center (CRC)
    Chapter 3: (see also here) What Defines a Next-Generation and Virtual Data Center
    Chapter 4: IT Infrastructure Resource Management (IRM)
    Chapter 5: Measurement, Metrics, and Management of IT Resources
    Chapter 7: Server: Physical, Virtual, and Software
    Chapter 9: Networking with your Servers and Storage

    Also see these:

    Virtual Storage and Social Media: What did EMC not Announce?
    Server and Storage Virtualization – Life beyond Consolidation
    Should Everything Be Virtualized?
    Was today the proverbial day that he!! Froze over?
    Moving Beyond the Benchmark Brouhaha

    Closing comments (For now):
As with any new vision, initiative, architecture and initial product, there will be plenty of questions to ask, items to investigate, and early adopter customers or users to talk with to determine what is real, what is future, what is usable and practical, along with what is nice to have. Likewise there will be plenty of mud ball throwing and slinging between competitors, fans and foes, which, for those who enjoy watching or reading such things, should keep you well entertained.

In general, the EMC vision and story builds on and presumably delivers on past industry hype, buzz and vision with solutions that can be put into environments as a productivity tool that works for the customer, instead of the customer working for the tool.

Remember the golden rule of virtualization, which is in play here: whoever controls the virtualization or associated management controls the gold. Likewise keep in mind that aggregation can cause aggravation. So do not be scared; however, look before you leap, meaning do your homework and due diligence with appropriate levels of expectations, aligning the applicable technology to the task at hand.

Also, if you have seen or experienced something in the past, you are more likely to have DejaVu as opposed to seeing things as revolutionary. However it is also important to leverage lessons learned for future success. YottaYotta was a lot of NaddaNadda; let's see if EMC can leverage their past experiences to make this a LottaLotta.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Technology Tiering, Servers Storage and Snow Removal

    Granted it is winter in the northern hemisphere and thus snow storms should not be a surprise.

However, between December 2009 and early 2010 there has been plenty of record activity, from the U.K. (or here), to the U.S. east coast including New York, Boston and Washington DC, across the midwest and out to California. It made for a white Christmas and SANta fun, along with snow fun in general in the new year.

    2010 Snow Storm via www.star-telegram.com

    What does this have to do with Information Factories aka IT resources including public or private clouds, facilities, server, storage, networking along with data management let alone tiering?

    What does this have to do with tiered snow removal, or even snow fun?

Simple: different tools are needed for addressing various types of snow, from wet and heavy to light powdery dustings to deep downfalls. Likewise, there are different types of servers, storage and data networks, along with operating systems, management tools and even hypervisors, to deal with various application needs or requirements.

First, let's look at tiered IT resources (servers, storage, networks, facilities, data protection and hypervisors) to meet various efficiency, optimization and service level needs.

    Do you have tiered IT resources?

Let me rephrase that question: do you have different types of servers, with various performance, availability, connectivity and software, that support various applications and cost levels?

Thus the whole notion of tiered IT resources is to be able to have different resources that can be aligned to the task at hand in order to meet performance, availability, capacity, energy and economic requirements, along with service level agreement (SLA) requirements.

    Computers or servers are targeted for different markets including Small Office Home Office (SOHO), Small Medium Business (SMB), Small Medium Enterprise (SME) and ultra large scale or extreme scaling, including high performance super computing. Servers are also positioned for different price bands and deployment scenarios.

    General categories of tiered servers and computers include:

    • Laptops, desktops and workstations
    • Small floor standing towers or rack mounted 1U and 2U servers
• Medium size floor standing towers or larger rack mounted servers
    • Blade Centers and Blade Servers
    • Large size floor standing servers, including mainframes
    • Specialized fault tolerant, rugged and embedded processing or real time servers

Servers have different names (email server, database server, application server, web server, video or file server, network server, security server, backup server or storage server) associated with them depending on their use. In each of the previous examples, what defines the type of server is the type of software being used to deliver a type of service. Sometimes the term appliance will be used for a server; this is indicative of the type of service the combined hardware and software solution is providing. For example, the same physical server running different software could be a general purpose application server, a database server running, for example, Oracle, IBM, Microsoft or Teradata among other databases, an email server or a storage server.

This can lead to confusion when looking at servers, in that a server may be able to support different types of workloads, and thus whether it should be considered a server, storage, networking or application platform depends on the type of software being used on the server. If, for example, storage software in the form of a clustered and parallel file system is installed on a server to create a highly scalable network attached storage (NAS) or cloud based storage service solution, then the server is a storage server. If the server has a general purpose operating system such as Microsoft Windows, Linux or UNIX and a database on it, it is a database server.

While not technically a type of server, some manufacturers use the term tin wrapped software in an attempt to not be classified as an appliance, server or hardware vendor, yet want their software to be positioned more as a turnkey solution. The idea is to avoid being perceived as a software only solution that requires integration with hardware. The solution is to use off the shelf, commercially available general purpose servers with the vendor's software technology pre-integrated and installed, ready for use. Thus, tin wrapped software is a turnkey software solution with some tin, or hardware, wrapped around it.

    How about the same with tiered storage?

That is, different tiers (Figure 1) of fast high performance disk, including RAM or flash based SSD, fast Fibre Channel or SAS disk drives, or high capacity SAS and SATA disk drives, along with magnetic tape as well as cloud based backup or archive.

Figure 1: Tiered Storage resources

Tiered storage is also sometimes thought of in terms of large enterprise class solutions or midrange, entry level, primary, secondary, nearline and offline. Not to be forgotten, there are also tiered networks that support various speeds, convergence, multi-tenancy and other capabilities, from IO Virtualization (IOV) to traditional LAN, SAN, MAN and WANs, including 1Gb Ethernet (1GbE) and 10GbE up to emerging 40GbE and 100GbE, not to mention various Fibre Channel speeds supporting various protocols.

    The notion around tiered networks is like with servers and storage to enable aligning the right technology to be used for the task at hand economically while meeting service needs.

Two other common IT resource tiering techniques include facilities and data protection. Tiered facilities can indicate size, availability and resiliency among other characteristics. Likewise, tiered data protection aligns the applicable technology to support different RTO and RPO requirements, for example using synchronous replication where applicable vs. asynchronous, time delayed replication for longer distances, combined with snapshots. Other forms of tiered data protection include traditional backups, either to disk, tape or cloud.
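As a simple illustration of aligning protection technology to RTO and RPO, here is a hypothetical policy table and lookup; the tier names, thresholds and ordering are examples of my own for illustration, not a prescription or any particular product's behavior:

```python
# Hypothetical protection tiers, listed from least to most expensive.
# Each entry: (delivered RPO seconds, delivered RTO seconds, technique)
PROTECTION_TIERS = [
    (604800, 259200, "weekly backup to tape or cloud archive"),
    (86400,  86400,  "nightly backup to disk or cloud"),
    (300,    3600,   "asynchronous (time delayed) replication plus snapshots"),
    (0,      300,    "synchronous replication plus snapshots"),
]

def pick_protection(required_rpo_s, required_rto_s):
    """Return the least expensive tier whose delivered RPO and RTO
    fit within the application's stated requirements."""
    for delivered_rpo, delivered_rto, technique in PROTECTION_TIERS:
        if delivered_rpo <= required_rpo_s and delivered_rto <= required_rto_s:
            return technique
    return "no suitable tier: revisit the requirements or the architecture"

print(pick_protection(0, 300))        # -> synchronous replication plus snapshots
print(pick_protection(86400, 86400))  # -> nightly backup to disk or cloud
```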

There is a new, emerging form of tiering in many IT environments, and that is tiered virtualization, or specifically tiered server hypervisors in virtual data centers, with objectives similar to having different server, storage, network, data protection or facilities tiers. Instead of an environment running all VMware, a mix of Microsoft HyperV or Xen among other hypervisors may be deployed to meet different application service class requirements. For example, VMware may be used for premium features and functionality on some applications, where others that do not need those features, along with requiring lower operating costs, leverage HyperV or Xen based solutions. Taking the tiering approach a step further, one could also declare tiered databases, for example legacy Oracle vs. MySQL or Microsoft SQLserver, among other examples.

What about IT clouds: are those different types of resources, or essentially an extension of existing IT capabilities, for example cloud storage being another tier of data storage?

    There is another form of tiering, particularly during the winter months in the northern hemisphere where there is an abundance of snow this time of the year. That is, tiered snow management, removal or movement technologies.

    What about tiered snow removal?

Well, let's get back to that then.

    Like IT resources, there are different technologies that can be used for moving, removing, melting or managing snow.

For example, I can't do much about getting rid of snow other than pushing it all down the hill and into the river, something that would take time and lots of fuel; or, I can manage where I put snow piles to be prepared for the next storm, plus help put them where the piles of snow will melt and help avoid spring flooding. Some technologies can be used for relocating snow elsewhere, kind of like archiving data onto different tiers of storage.

Regardless of whether it is a snowstorm or IT clouds (public or private), virtual, managed service provider (MSP), hosted or traditional IT data centers, all require physical servers, storage, I/O and data networks, along with software including management tools.

Granted, not all servers, storage or networking technology, let alone software, are the same, as they address different needs. IT resources including servers, storage, networks, operating systems and even hypervisors for virtual machines are often categorized and aligned to different tiers corresponding to needs and characteristics (Figure 2).

Figure 2: Tiered IT resources

For example, in Figure 3 there is a lightweight plastic shovel (Shovel 1) for moving small amounts of snow in a wide stripe or pass. Then there is a narrow shovel for digging things out or breaking up snow piles (Shovel 2). Also shown is a light duty snow blower (snow thrower) capable of dealing with powdery or non wet snow, or grooming in tight corners or small areas.

Figure 3: Tiered Snow management and migration tools

For other light dustings, a yard leaf blower does double duty for migrating or moving snow in small or tight corners such as decks and patios, or for cleanup. Larger snowfalls, or where there is a lot of area to clear, involve heavier duty tools such as the Kawasaki mule with a 5 foot Curtis plow. The mule is a multifunction, multi-protocol tool capable of being used for hauling items, towing, pulling or recreational tasks.

    When all else fails, there is a pickup truck to get or go out and about, not to mention to pull other vehicles out of ditches or piles of snow when they become stuck!

Figure 4: Sometimes the snow is light, making for fast, low latency migration

Figure 5: And sometimes even snow migration technology goes offline!


    And that is it for now!

    Enjoy the northern hemisphere winter and snow while it lasts, make the best of it with the right tools to simplify the tasks of movement and management, similar to IT resources.

Keep in mind, it's about the tools and when, along with how, to use them for various tasks for efficiency and effectiveness, and a bit of snow fun.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Behind the Scenes, SANta Claus Global Cloud Story

    There is a ton of discussion, stories, articles, videos, conferences and blogs about the benefits and value proposition of cloud computing. Not to mention, discussion or debates about what is or what is not a cloud or cloud product, service or architecture including some perspectives and polls from me.

    Now SANta does not really care about these and other similar debates, I have learned. However he is concerned with who has been naughty and nice, as well as watching out for impersonators or members of his crew who misbehave.

    In the spirit of the holidays, how about a quick look at how SANta leverages cloud technologies to support his global operations.

    Many in IT think that SANta bases his operations out of the North Pole as it is convenient for him to cool all of his servers, storage, networks and telecom equipment (which it is). However it's also centrally located (see chart) for the northern hemisphere (folks down under may get serviced via SANta's secret Antarctica base of operations). Just like ANC (Anchorage International Airport) is a popular cargo transient, transload and refueling base for cargo carriers, SANta also leverages the North and South Pole regions to his advantage.

    Great Circle Mapper
    SANta's Global Reach via Great Circle Mapper

    Now do not worry if you have never heard about SANta's dual redundant South Pole operations, it's one of his better kept secrets. Many organizations, including SANta's partners such as Microsoft, that have global mega IT operations and logistics centers have followed SANta's lead of leveraging various locations outside of the Pacific Northwest. Granted, like some of his partners and managed service providers, he does maintain a presence in the Washington Columbia River basin, which provides nice PR among other benefits.

    Likewise, many in business as well as those in IT think that SANta leverages cloud technologies for cost savings or avoidance, which is partially the case. However he also leverages cloud, hosting, managed service provider (MSP), virtual data centers, virtual operations centers, XaaS, SaaS or SOA technologies, services, protocols and products that are transparent and complementary to his own in-house resources, addressing various business and service requirement needs.

    What this has to do with the holidays and clouds is that you may not realize how Santa, or St. Nick if you prefer (feel free to plug in whoever you like if Santa or St. Nick does not turn your crank), extensively relies on flexible, scalable and resilient technologies for boosting productivity in a cost effective manner. Some of it is IT related, some of it is not. For example, from the GPS and radar along with recently added RNP and RNAV enhanced capabilities of his increasingly high tech, biofuel powered sleigh, not to mention the information technology (IT) that powers his global operations, old St. Nick has got it together when it comes to technology.

    The heart or brains of the SANta operation is his global system operations center (SOC) or network operation center (NOC) that rivals those seen at NASA among others with multiple data feeds. The SOC is a 24×365 operations function that covers all aspects from transportation, logistics, distribution, assembly or packaging, financials back office, CRM, IT and communications among other functions.

    Naturally, this is like the Apollo moon shots, whose Grumman built LEM lunar lander had to have 100% availability: to get off of the moon, its engine only had to fire once, however it had to work 100% of the time! This thought process is said to have leveraged principles from SANta's operations guide, where he has one night a year to accomplish the impossible.

    I should mention, while I cannot disclose (due to NDA) the exact locations of the SOCs, data or logistics centers, not to mention the vendors or the technology being used, I can tell you that they are all around you! The fully redundant SOCs, data and call centers as well as logistics sites (including staff, facilities, technology) leverage different time zones for efficiency.

    SANta's staff have also found that the redundant SOCs, part of an approach across SANta's entire vast organization, have helped guard against global epidemics and pandemics including SARS and H1N1 among others by isolating workers while providing appropriate coverage and availability, something many large organizations have since followed.

    Carrying through on the philosophy of redundant SOCs, all other aspects of SANta's operations are distributed yet with centralized, coordinated management, leveraging real-time situation awareness, event and activity correlation (what we used to call or refer to as AI), cross technology domain management, and proactive monitoring and planning, yet with the ability for on-the-spot decision making.

    What this means is that the various locations have the ability to make localized decisions on the spot, however coordinated with primary operations or mission control so that global operations can focus on strategic activity along with exception handling and be more effective. Thus it is neither fully distributed nor fully centralized, rather a hybrid in terms of management, technologies and the way they work.

    For example, to handle the diverse applications, there are some primary large processing and data retention facilities that back up and replicate information to other peer sites, as well as smaller regional and remote office/branch office sites close to where information services are needed. To say the environment is highly virtualized would be an understatement.

    Likewise, optimization is key not just to keep costs low or avoid overheating some of SANta's facilities that are located in the Arctic and Antarctic regions, which could melt the ice cap; they are also optimized to keep response time as low as possible while boosting productivity.

    Thus, SANta has to rely on very robust and diverse communications networking leveraging LAN, SAN, MAN, WAN, POTS and PANs among other technologies. For example, his communications portfolio is said to involve landlines (copper and optical) and RF including microwave and other radio based communications, supporting or using 3G, 4G, MPLS, SONET/SDH, xWDM, microwave and free space optics among others.

    SANta's networking and communications elves are also said to be working with 5G and 100GbE multiplexed on 256 lambda WDM trunk circuits in non-core trunk applications. Of course given the airborne operations, satellite and ACARS are a must to avoid overflying a destination while remaining in positive control during low visibility. Note that Santa routinely makes more CAT 3+ low visibility landings than most of the world's airlines and air freight companies combined.

    My sources also tell me that SANta has virtual desktop capability leveraging PCoIP and other optimizations on his primary and backup sleighs, enabling rapid reconfiguration for changing workload conditions. He is also fully equipped with onboard social media capabilities for updates via Twitter, Facebook and LinkedIn among others, designed by his chief social networking elf.

    Consequently, given the vast amount of information needed to support his operations, from CRM, shipping and tracking to historical and profiling needs, transactional volumes on the data as well as voice and social media networks dwarf stock market trading volumes.

    Feeding SANta's vast organization are online, highly available, robust databases for transaction purposes, plus reference and unstructured data material including videos, websites and more. Some of these look hauntingly familiar given those that are part of SANta's eWorld Helpers initiative, including: Sears, Amazon, NetFlix, Target, Albertsons, Staples, EMC, Walmart, Overstock, RadioShack, Lands' End, Dell, HP, eBay, Lowes, Publix, eMusic, RiteAid and Supervalu among others (I'm just sayin…).

    The actual size of SANta's information repository is a closely guarded secret, as are the exact topology, schema and content structure. However it is understood that on peak days SANta's highly distributed, high performance, low latency data warehouse sees upwards of 1,225PBytes of data added, a rate rumored to make Larry Ellison gush with excitement over its growth possibilities.

    How SANta pulls this all off is by leveraging virtualization, automation, and efficient, enabling technologies that allow him and his elves (excuse me, associates or team members) to be more productive in their areas of focus, in a way that is the envy of the universe.

    Some of their efficiency is measured in terms of:

    • How many packages can be processed per elf with minimum or no mistakes
    • Number of calls, requests, inquiries per day per elf in a friendly and understandable manner
    • Knowing who has been naughty or nice in the blink of an eye including historical profiles
    • Virtual machines (VM) or physical machine (PM) servers managed per team member
    • Databases and applications, local and remote, logical and physical per team member
    • Storage in terms of PByte and Exabyte managed to given service level per team member
    • Network circuits and bandwidth with fewest dropped packets (or packages) per member
    • Fewest misdirected packages as well as aborted landings per crew
    • Fewest pounds gained from consumption of most milk and cookies per crew

    From how many packages can be processed per hour, to the number of virtual servers per person, PBytes of data managed per person, network connections and circuits per person, databases and applications per person, to takeoffs and landings (SANta tops the list for this one), they are all highly efficient and effective.

    Likewise, SANta leverages the partners in his eWorld Helpers initiative network to help out where, of course, he looks for value; however value is not just the lowest price per VM, lowest cost per TByte or cost per bandwidth. For SANta it is also very focused on performance, availability, capacity and economic efficiency, not to mention quality with an environmentally friendly green supply chain.

    By having a green supply chain, SANta takes a responsible, global approach that also makes economic sense on where to manufacture, produce or procure products. Contrary to growing popular belief, locally produced may not always be the most environmentally or economically favorable approach. For example (read more here), instead of growing flowers and plants in western Europe where they are consumed, a process that would require more energy for heat and lights, not to mention water and other resources, SANta has bucked the trend, instead relying on the economics and environmental benefit of flowers and plants grown in warmer, sunnier climates.

    Granted, and rest assured, SANta still has an army of elves busily putting things together in his own factories along with managing IT related activities in an economically positive manner.

    SANta has also applied this thinking to his data, information and communications networks, leveraging sites such as those in the Arctic where solar power can be used during summer months along with cooling economizers to offset the impact of batteries; workload is shifted around the world as needed. This approach is rumored to be the envy of the US EPA Energy Star for Server, Storage and Data Center crew, not to mention their followers.

    How does SANta make sure all of the data and information is protected and available? It's a combination of best practices, techniques and technologies including hardware, software, data protection management tools, disk, dedupe, compression, tape and cloud among others.

    Rest assured, if it is in the technology buzzword bingo book, it is a good bet that it has been tested in one of SANta's facilities, or partner sites, long before you hear about it even under a strict NDA discussion with one of his elves (oops, I mean supplier partners).

    When asked about the importance of his information and data networks, resources and cloud enabled, highly virtualized, efficient operations, SANta responded with a simple:

    Ho Ho Ho, Merry Christmas to all, and to all, a good night!

    As you sit back and relax, reflect, recreate, recoup or recharge, or whatever it is that you do this time of the year, take a moment to think about and thank all of SANta's helpers. They are the ones that work behind the scenes in SANta's facilities as well as those of his partners and suppliers, some in the clouds, some on or under ground, to make the world's largest single event day (excuse me, night) possible! Or, is this SANta and cloud thing all just one big fantasy?

    Happy and safe holidays or whatever you want to refer to it as, best wishes and thanks!

    BTW: FTC disclosure information can be found here!

    Greg on Break

    Me on a break during a SANta site tour

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    I/O Virtualization (IOV) Revisited

    Is I/O Virtualization (IOV) a server topic, a network topic, or a storage topic (See previous post)?

    Like server virtualization, IOV involves servers, storage, network, operating system, and other infrastructure resource management areas and disciplines. The business and technology value proposition or benefits of converged I/O networks and I/O virtualization are similar to those for server and storage virtualization.

    Additional benefits of IOV include:

      • Doing more with the resources (people and technology) that already exist, or reducing costs
      • Single (or pair for high availability) interconnect for networking and storage I/O
      • Reduction of power, cooling, floor space, and other green efficiency benefits
      • Simplified cabling and reduced complexity for server network and storage interconnects
      • Boosting server performance while maximizing use of I/O or mezzanine slots
      • Reducing I/O and data center bottlenecks
      • Rapid re-deployment to meet changing workload and I/O profiles of virtual servers
      • Scaling I/O capacity to meet high-performance and clustered application needs
      • Leveraging common cabling infrastructure and physical networking facilities

    Before going further, let's take a step backward for a few moments.

    To say that I/O and networking demands and requirements are increasing is an understatement. The amount of data being generated, copied, and retained for longer periods of time is elevating the importance of the role of data storage and infrastructure resource management (IRM). Networking and input/output (I/O) connectivity technologies (figure 1) tie together facilities, servers, storage, tools for measurement and management, and best practices on a local and wide area basis to enable an environmentally and economically friendly data center.

    TIERED ACCESS FOR SERVERS AND STORAGE
    There is an old saying that the best I/O, whether local or remote, is an I/O that does not have to occur. I/O is an essential activity for computers of all shapes, sizes, and focus to read and write data in and out of memory (including external storage) and to communicate with other computers and networking devices. This includes communicating on a local and wide area basis for access to or over Internet, cloud, XaaS, or managed services providers such as shown in figure 1.

    PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)
    Figure 1 The Big Picture: Data Center I/O and Networking

    The challenge of I/O is that some form of connectivity (logical and physical), along with associated software, is required, and that time delays occur while waiting for reads and writes to complete. I/O operations that are closest to the CPU or main processor should be the fastest and occur most frequently for access to main memory, using internal local CPU-to-memory interconnects. In other words, fast servers or processors need fast I/O, in terms of both low latency and I/O operations (IOPS) along with bandwidth capabilities.
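    To put some rough numbers behind that point, here is a small, hedged illustration in Python using order-of-magnitude access times. The figures are generic approximations that vary widely by technology and generation; they are only meant to show why an avoided or locally satisfied I/O beats a remote one.

    # Rough, order-of-magnitude access times for different tiers of I/O,
    # purely for illustration (actual numbers vary widely by technology
    # and generation). The point: the farther from the CPU, the more an
    # avoided I/O is worth.

    latency_ns = {
        "CPU register / L1 cache": 1,
        "main memory (DRAM)": 100,
        "PCIe attached flash / SSD": 100_000,        # ~0.1 millisecond
        "networked storage (SAN/NAS)": 1_000_000,    # ~1 millisecond and up
        "local hard disk drive": 5_000_000,          # ~5 milliseconds
        "WAN / cloud storage": 50_000_000,           # ~50 millisecond round trip
    }

    base = latency_ns["main memory (DRAM)"]
    for tier, ns in latency_ns.items():
        print(f"{tier:28s} ~{ns:>12,d} ns  ({ns / base:,.0f}x DRAM)")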

    PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)
    Figure 2 Tiered I/O and Networking Access

    Moving out and away from the main processor, I/O remains fairly fast with distance but is more flexible and cost effective. An example is the PCIe bus and I/O interconnect shown in Figure 2, which is slower than processor-to-memory interconnects but is still able to support attachment of various device adapters with very good performance in a cost effective manner.

    Farther from the main CPU or processor, various networking and I/O adapters can attach to PCIe, PCIx, or PCI interconnects for backward compatibility to support various distances, speeds, types of devices, and cost factors.

    In general, the faster a processor or server is, the more prone to a performance impact it will be when it has to wait for slower I/O operations.

    Consequently, faster servers need better-performing I/O connectivity and networks. Better performing means lower latency, more IOPS, and improved bandwidth to meet application profiles and types of operations.
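    One way to see the relationship between latency, IOPS and outstanding I/Os is Little's Law (concurrency = throughput x latency). The short Python sketch below is illustrative only; the IOPS targets and latencies used are hypothetical examples.

    # Back-of-the-envelope helpers using Little's Law to show why lower
    # latency and/or more outstanding I/Os are needed to reach a given
    # IOPS target. All numbers below are hypothetical examples.

    def required_queue_depth(target_iops: float, latency_ms: float) -> float:
        """Outstanding I/Os needed to sustain target_iops at a given latency."""
        return target_iops * (latency_ms / 1000.0)

    def achievable_iops(queue_depth: float, latency_ms: float) -> float:
        """IOPS achievable with a given queue depth and per-I/O latency."""
        return queue_depth / (latency_ms / 1000.0)

    if __name__ == "__main__":
        # e.g. 100,000 IOPS at 1 ms latency needs ~100 I/Os in flight
        print(required_queue_depth(100_000, 1.0))   # -> 100.0
        # the same queue depth at 10 ms latency delivers only 10,000 IOPS
        print(achievable_iops(100, 10.0))           # -> 10000.0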

    Peripheral Component Interconnect (PCI)
    Having established that computers need to perform some form of I/O to various devices, at the heart of many I/O and networking connectivity solutions is the Peripheral Component Interconnect (PCI) interface. PCI is an industry standard that specifies the chipsets used to communicate between CPUs and memory and the outside world of I/O and networking device peripherals.

    Figure 3 shows an example of multiple servers or blades, each with dedicated Fibre Channel (FC) and Ethernet adapters (there could be two or more of each for redundancy). Simply put, the more servers and devices to attach to, the more adapters, cabling and complexity, particularly for blade servers and dense rack-mount systems.
    PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)
    Figure 3 Dedicated PCI adapters for I/O and networking devices

    Figure 4 shows an example of a PCI implementation including various components such as bridges, adapter slots, and adapter types. PCIe leverages multiple serial unidirectional point-to-point links, known as lanes, in contrast to traditional PCI, which used a parallel bus design.

    PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)

    Figure 4 PCI IOV Single Root Configuration Example

    In traditional PCI, bus width varied from 32 to 64 bits; in PCIe, the number of lanes combined with PCIe version and signaling rate determine performance. PCIe interfaces can have 1, 2, 4, 8, 16, or 32 lanes for data movement, depending on card or adapter format and form factor. For example, PCI and PCIx performance can be up to 528 MB per second with a 64 bit, 66 MHz signaling rate, and PCIe is capable of over 4 GB (e.g., 32 Gbit) in each direction using 16 lanes for high-end servers.
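    As a quick sanity check of those figures, the arithmetic can be sketched out as follows, assuming PCIe 1.x signaling at 2.5 GT/s per lane with 8b/10b encoding (later PCIe generations use higher rates and different encoding):

    # Quick sanity check of the bandwidth figures mentioned above,
    # assuming PCIe 1.x signaling (2.5 GT/s per lane, 8b/10b encoding).

    # Parallel PCI / PCI-X: bus width x clock rate
    pci_bits = 64
    pci_clock_hz = 66_000_000
    pci_mb_per_s = pci_bits * pci_clock_hz / 8 / 1_000_000
    print(f"PCI 64-bit @ 66 MHz : ~{pci_mb_per_s:.0f} MB/s")        # ~528 MB/s

    # PCIe 1.x: per-lane signaling rate x encoding efficiency x lanes
    gen1_gt_per_s = 2.5e9          # transfers per second per lane
    encoding = 8 / 10              # 8b/10b line encoding overhead
    lanes = 16
    pcie_gb_per_s = gen1_gt_per_s * encoding * lanes / 8 / 1e9
    print(f"PCIe 1.x x16        : ~{pcie_gb_per_s:.0f} GB/s per direction")  # ~4 GB/s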

    The importance of PCIe and its predecessors is a shift from multiple vendors’ different proprietary interconnects for attaching peripherals to servers. For the most part, vendors have shifted to supporting PCIe or early generations of PCI in some form, ranging from native internal on laptops and workstations to I/O, networking, and peripheral slots on larger servers.

    The most current version of PCI, as defined by the PCI Special Interest Group (PCISIG), is PCI Express (PCIe). Backwards compatibility exists by bridging previous generations, including PCIx and PCI, off a native PCIe bus or, in the past, bridging a PCIe bus to a PCIx native implementation. Beyond speed and bus width differences for the various generations and implementations, PCI adapters also are available in several form factors and applications.

    Traditional PCI was generally limited to a main processor or was internal to a single computer, but current generations of PCI Express (PCIe) include support for PCI Special Interest Group (PCI SIG) I/O virtualization (IOV), enabling the PCI bus to be extended to distances of a few feet. Compared to local area networking, storage interconnects, and other I/O connectivity technologies, a few feet is a very short distance, but compared to the previous limit of a few inches, extended PCIe provides the ability for improved sharing of I/O and networking interconnects.

    I/O VIRTUALIZATION (IOV)
    On a traditional physical server, the operating system sees one or more instances of Fibre Channel and Ethernet adapters even if only a single physical adapter, such as an InfiniBand HCA, is installed in a PCI or PCIe slot. In the case of a virtualized server, for example Microsoft Hyper-V or VMware ESX/vSphere, the hypervisor will be able to see and share a single physical adapter, or multiple adapters for redundancy and performance, with guest operating systems. The guest systems see what appears to be a standard SAS, FC or Ethernet adapter or NIC using standard plug-and-play drivers.

    Virtual HBAs or virtual network interface cards (NICs) and switches are, as their names imply, virtual representations of a physical HBA or NIC, similar to how a virtual machine emulates a physical machine. With a virtual HBA or NIC, physical adapter resources are carved up and allocated in the same way as virtual machines, but instead of hosting a guest operating system like Windows, UNIX, or Linux, what is presented is a SAS or FC HBA, FCoE converged network adapter (CNA) or Ethernet NIC.
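    To make the carving idea a bit more concrete, here is a purely conceptual Python sketch. It is not any hypervisor's actual API; the class names, fields and limits are made up for illustration.

    # Conceptual sketch only (not a real hypervisor API): one physical
    # adapter is carved into several virtual adapters, each presented to
    # a guest as if it were a standard NIC or HBA.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class VirtualAdapter:
        guest: str
        kind: str          # "Ethernet NIC", "FC HBA", "FCoE CNA", ...
        identity: str      # MAC address or WWPN presented to the guest

    @dataclass
    class PhysicalAdapter:
        slot: str
        max_virtual: int
        virtual: List[VirtualAdapter] = field(default_factory=list)

        def carve(self, guest: str, kind: str, identity: str) -> VirtualAdapter:
            if len(self.virtual) >= self.max_virtual:
                raise RuntimeError("no virtual functions left on this adapter")
            va = VirtualAdapter(guest, kind, identity)
            self.virtual.append(va)
            return va

    pa = PhysicalAdapter(slot="PCIe slot 3", max_virtual=4)
    pa.carve("guest-web01", "Ethernet NIC", "00:50:56:aa:bb:01")
    pa.carve("guest-db01", "FC HBA", "50:01:43:80:12:34:56:78")
    for va in pa.virtual:
        print(f"{pa.slot}: {va.guest} sees a {va.kind} ({va.identity})")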

    In addition to virtual or software-based NICs, adapters, and switches found in server virtualization implementations, virtual LAN (VLAN), virtual SAN (VSAN), and virtual private network (VPN) are tools for providing abstraction and isolation or segmentation of physical resources. Using emulation and abstraction capabilities, various segments or sub networks can be physically connected yet logically isolated for management, performance, and security purposes. Some form of routing or gateway functionality enables various network segments or virtual networks to communicate with each other when appropriate security is met.

    PCI-SIG IOV
    PCI SIG IOV consists of a PCIe bridge attached to a PCI root complex along with an attachment to a separate PCI enclosure (Figure 5). Other components and facilities include address translation services (ATS), single-root IOV (SR IOV), and multi-root IOV (MR IOV). ATS enables performance to be optimized between an I/O device and a server's I/O memory management. Single-root (SR) IOV enables multiple guest operating systems to access a single I/O device simultaneously, without having to rely on a hypervisor for a virtual HBA or NIC.

    PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)

    Figure 5 PCI SIG IOV

    The benefit is that physical adapter cards, located in a physically separate enclosure, can be shared within a single physical server without incurring any potential I/O overhead via a virtualization software infrastructure. MR IOV is the next step, enabling a PCIe or SR IOV device to be accessed through a shared PCIe fabric across different physically separated servers and PCIe adapter enclosures. The benefit is increased sharing of physical adapters across multiple servers and operating systems, not to mention simplified cabling, reduced complexity and improved resource utilization.
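    As a practical aside, on reasonably recent Linux systems SR IOV capable PCI devices typically expose sriov_totalvfs and sriov_numvfs attributes under sysfs. The short Python sketch below illustrates looking for them; exact paths and attribute availability vary by kernel and driver, so treat it as illustrative rather than authoritative.

    # Illustrative sketch: list PCI devices that advertise SR IOV virtual
    # functions via Linux sysfs. Attribute availability depends on the
    # kernel version and device driver in use.

    from pathlib import Path

    def list_sriov_devices(sysfs_root: str = "/sys/bus/pci/devices"):
        for dev in sorted(Path(sysfs_root).glob("*")):
            total = dev / "sriov_totalvfs"
            if total.exists():
                num = dev / "sriov_numvfs"
                yield (dev.name,
                       int(total.read_text().strip()),
                       int(num.read_text().strip()) if num.exists() else 0)

    if __name__ == "__main__":
        for addr, total_vfs, enabled_vfs in list_sriov_devices():
            print(f"{addr}: {enabled_vfs} of {total_vfs} virtual functions enabled")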

    PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)
    Figure 6 PCI SIG MR IOV

    Figure 6 shows an example of a PCIe switched environment, where two physically separate servers or blade servers attach to an external PCIe enclosure or card cage for attachment to PCIe, PCIx, or PCI devices. Instead of the adapter cards physically plugging into each server, a high performance short-distance cable connects each server's PCI root complex via a PCIe bridge port to a PCIe bridge port in the enclosure device.

    In figure 6, either SR IOV or MR IOV can take place, depending on specific PCIe firmware, server hardware, operating system, devices, and associated drivers and management software. For a SR IOV example, each server has access to some number of dedicated adapters in the external card cage, for example, InfiniBand, Fibre Channel, Ethernet, or Fibre Channel over Ethernet (FCoE) and converged networking adapters (CNA) also known as HBAs. SR IOV implementations do not allow different physical servers to share adapter cards. MR IOV builds on SR IOV by enabling multiple physical servers to access and share PCI devices such as HBAs and NICs safely with transparency.

    The primary benefit of PCI IOV is to improve utilization of PCI devices, including adapters or mezzanine cards, as well as to enable performance and availability for slot-constrained and physical footprint or form factor-challenged servers. Caveats of PCI IOV are distance limitations and the need for hardware, firmware, operating system, and management software support to enable safe and transparent sharing of PCI devices. Examples of PCIe IOV vendors include Aprius, NextIO and Virtensys among others.

    InfiniBand IOV
    InfiniBand-based IOV solutions are an alternative to Ethernet-based solutions. Essentially, InfiniBand approaches are similar, if not identical, to converged Ethernet approaches including FCoE, with the difference being InfiniBand as the network transport. InfiniBand HCAs with special firmware are installed into servers that then see a Fibre Channel HBA and Ethernet NIC from a single physical adapter. The InfiniBand HCA also attaches to a switch or director that in turn attaches to Fibre Channel SAN or Ethernet LAN networks.

    The value of InfiniBand converged networks is that they exist today, and they can be used for consolidation as well as to boost performance and availability. InfiniBand IOV also provides an alternative for those who do not choose to deploy Ethernet.

    From a power, cooling, floor-space or footprint standpoint, converged networks can be used for consolidation to reduce the total number of adapters and the associated power and cooling. In addition to removing unneeded adapters without loss of functionality, converged networks also free up or allow a reduction in the amount of cabling, which can improve airflow for cooling, resulting in additional energy efficiency. An example of a vendor using InfiniBand as a platform for I/O virtualization is Xsigo.

    General takeaway points include the following:

    • Minimize the impact of I/O delays to applications, servers, storage, and networks
    • Do more with what you have, including improving utilization and performance
    • Consider latency, effective bandwidth, and availability in addition to cost
    • Apply the appropriate type and tiered I/O and networking to the task at hand
    • I/O operations and connectivity are being virtualized to simplify management
    • Convergence of networking transports and protocols continues to evolve
    • PCIe IOV is complementary to converged networking including FCoE

    Moving forward, a revolutionary new technology may emerge that finally eliminates the need for I/O operations. However, until that time, or at least for the foreseeable future, several things can be done to minimize the impacts of I/O for local and remote networking as well as to simplify connectivity.

    PCIe Fundamentals Server Storage I/O Network Essentials

    Learn more about IOV, converged networks, LAN, SAN, MAN and WAN related topics in Chapter 9 (Networking with your servers and storage) of The Green and Virtual Data Center (CRC) as well as in Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier).

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Could Huawei buy Brocade?

    Disclosure: I have no connection to Huawei. I own no stock in, nor have I worked for Brocade as an employee; however I did work for three years at SAN vendor INRANGE which was acquired by CNT. However I left to become an industry analyst prior to the acquisition by McData and well before Brocade bought McData. Brocade is not a current client; however I have done speaking events pertaining to general industry trends and perspectives at various Brocade customer events for them in the past.

    Is Brocade for sale?

    Last week a Wall Street Journal article mentioned Brocade (BRCD) might be for sale.

    BRCD has a diverse product portfolio for Fibre Channel and Ethernet along with the emerging Fibre Channel over Ethernet (FCoE) market, and a who's who of OEM and channel partners. Why not be for sale? The timing is good for investors, and CEO Mike Klayko and his team have arguably done a good job of shifting and evolving the company.

    Generally speaking, let's keep things in perspective: everything is always for sale, and in an economy like now, bargains are everywhere. Many businesses are shopping; it's just a matter of how visible the shopping is for a seller or buyer, along with motivations and objectives including shareholder value.

    Consequently, the coconut wires are abuzz with talk and speculation of who will buy Brocade, or perhaps who Brocade might buy, among other merger and acquisition (M and A) chatter about who will buy whom. For example, why not EMC (they sold McData off years ago via IPO), IBM (they sold some of their networking business to Cisco years ago) or HP (currently an OEM partner of BRCD) as possible buyers of BRCD?

    Last week on Twitter I responded to a comment about who would want to buy Brocade with something to the effect of "why not a Huawei", to which there was some silence except for industry luminary Steve Duplessie (have a look to see what Steve had to say).

    Part of being an analyst, IMHO, should be to actually analyze things vs. simply reporting on what others want you to report or what you have read or heard elsewhere. This also means talking about scenarios that are out of the box, or in adjacent boxes from some perspectives, or that might not be in line with traditional thinking. Sometimes this means breaking away and thinking and saying what may not be obvious or practical. Having said that, let's take a step back for a moment as to why Brocade may or may not be for sale and who might or may not be interested in them.

    IMHO, it has a lot to do with Cisco, and not just because Brocade sees no opportunity in continuing to compete with the 800lb gorilla of LAN/MAN networking that has moved into Brocade's stronghold of storage network SANs. Cisco is upsetting the table or apple cart with its server partners IBM, Dell, HP, Oracle/Sun and others by testing the waters of the server world with its UCS. So far I see this as something akin to a threat testing the defenses of a target before launching a full out attack.

    In other words, checking to see how the opposition responds, what defenses are put up, and collecting G2 or intelligence, as well as seeing how the rest of the world or industry might respond to an all out assault or shift of power or control. Of course, HP, IBM, Dell and Sun/Oracle will not let this move into their revenue and account control go unnoticed, with initial counter announcements having been made, some re-emphasizing relationships with Brocade along with its recent acquisition of Ethernet/IP vendor Foundry.

    Now what does this have to do with Brocade potentially being sold and why the title involving Huawei?

    Many of the recent industry acquisitions have been focused on shoring up technology or intellectual property (IP), eliminating a competitor or simply taking advantage of market conditions. For example, Data Domain was sold to EMC in a bidding war with NetApp, HP bought IBRIX, Oracle bought or is trying to buy Sun, Oracle also bought Virtual Iron, Dell bought Perot after HP bought EDS a year or so ago, while Xerox bought ACS, and so the M and A game continues among other deals.

    Some of the deals are strategic, many are tactical. Brocade being bought I would put in the category of a strategic scenario, a bargaining chip or even a pawn if you prefer, in a much bigger game that is about more than switches, directors, HBAs, LANs, SANs, MANs, WANs, POTS and PANs (check out my book "Resilient Storage Networks"-Elsevier)!

    So with conversations focused around Cisco expanding into servers to control the data center discussion, mindset, thinking, budgets and decision making, why wouldn't an HP, IBM or Dell, let alone a NetApp, Oracle/Sun or even EMC, want to buy Brocade as a bargaining chip in a bigger game? Why not a Ciena (they just bought some of Nortel's assets), Juniper or 3Com (more of a merger of equals to fight Cisco), Microsoft (might upset their partner Cisco) or Fujitsu (their telco group, that is) among others?

    Then why not Huawei, a company some may have heard of, one that others may not have.

    Who is Huawei you might ask?

    Simple: they are a very large IT solutions provider that is also a large player in China, with global operations including R&D in North America and many partnerships with U.S. vendors. By rough comparison, Cisco's most recently reported annual revenue is about $36.1B (all figures USD), BRCD about $1.5B, Juniper about $3.5B, 3Com about $1.3B, and Huawei about $23B with a year over year sales increase of 45%. Huawei has previous partnerships with storage vendors including Symantec and FalconStor among others. Huawei also has had a partnership with 3Com (H3C), a company that was the first of the LAN vendors to get into SANs (prematurely), beating Cisco easily by several years.

    Sure there would be many hurdles and issues, similar to the ones CNT and INRANGE had to overcome, or McData and CNT, or Brocade and McData among others. However, in the much bigger game of IT account and thus budget control played by HP, IBM and Sun/Oracle among others, wouldn't maintaining a dual source for customers' networking needs make sense, or at least serve as a check to Cisco's expansion efforts? If nothing else, it maintains the status quo in the industry for now; or, if the rules and game are changing, wouldn't some of the bigger vendors want to get closer to the markets where Huawei is seeing rapid growth?

    Does this mean that Brocade could be bought? Sure.
    Does this mean Brocade cannot compete or is a sign of defeat? I don’t think so.
    Does this mean that Brocade could end up buying or merging with someone else? Sure, why not.
    Or, is it possible that someone like Huawei could end up buying Brocade? Why not!

    Now, if Huawei were to buy Brocade, which begs the question for fun, could they be renamed or spun off as a division called HuaweiCade or HuaCadeWei? Anything is possible when you look outside the box.

    Nuff said for now, food for thought.

    Cheers – gs

    Greg Schulz – StorageIO, Author “The Green and Virtual Data Center” (CRC)

    Brocade to Buy Foundry Networks – Prelude to Upcoming Converged Ethernet and FCoE Battle

    Storage I/O trends

    The emerging and maturing Fibre Channel over Ethernet (FCoE) and Converged Ethernet (aka Data Center Ethernet, Converged Enhanced Ethernet, Enterprise Ethernet among other marketing names) activity is picking up. Today Brocade took a major step to shore up its already announced FCoE and converged Ethernet story, which includes new directors and converged host bus adapters, by announcing intentions to buy Ethernet high performance switching vendor Foundry Networks in a deal valued around $3B USD and some change. Not a bad deal for Foundry; some would say an expensive deal for Brocade, perhaps paying too much, however it is in line with some of the recent storage and networking related deals. For example, IBM spent around $300M for a startup called XIV, which claims to have shipped a few storage systems to a few customers; Dell spent about $1.3B to buy EqualLogic, which had a few thousand customers (could be the deal of the century for Dell compared to IBM and XIV, however time will tell); EMC made purchases like RSA and Avamar along with bargains like WysDM, Mozy and Iomega; not to mention Cisco has not been bashful about dropping some serious coin for standalone companies like NuSpeed (where are they now) for iSCSI as well as Andiamo and more recently Nuova. Regardless of whether Mike Klayko (Brocade CEO) paid too much or not, he did what he had to do as part of his continuing activities to re-invent Brocade and leverage their core DNA and business focus of data infrastructures.

    Brocade could probably have made a nice business for a few more years, like some of the companies they have recently acquired tried to do, including McData, CNT, INRANGE and so forth. However the reality is that sooner or later they too (Brocade) would probably have been acquired by someone. With the acquisition of Foundry Networks, along with previous announcements around FCoE technologies and their existing products for NAS or file based storage management and iSCSI solutions, Brocade is signaling that they want to fight for survival as opposed to circling the wagons and guarding their installed base and wheelhouse.

    The upcoming Converged Ethernet and FCoE battle royal is shaping up to start in about 12 to 18 months, sooner for the early adopters who like to test and kick around technology early, for those who want to go right to 10GbE today instead of 8Gb Fibre Channel, or for those who like bleeding edge solutions. The reality is that, even with recent proof of life plug-fest demos and claims of being ready for primetime, core Brocade customers, particularly at the high end of the market, tend to be rather risk averse and cautious with their data infrastructure, and thus move at a slower pace. For them, upgrading to 8Gb Fibre Channel may be the near term future while watching FCoE and converged Ethernet or converged enhanced Ethernet evolve, and beginning the transition in a couple of years. For these risk averse type customers, bleeding edge technology means having a blood bank nearby and on call, as downtime and disruption are not an option.

    Rest assured, with Cisco pushing hard to stimulate the FCoE market and get people to skip 8Gb FC and switch over to 10GbE, there will be plenty more plug-fest and proof of life demos, and plenty of trash talking by both sides that will rival some of the best heavyweight match-ups.

    Buyers beware: do your homework, and if being an early adopter of FCoE and converged networks is right for you, with due diligence do some testing to see how everything really works in your environment, from storage systems, to adapters, to switches, to protocol converters and gateways, to management and diagnostic software. How does the whole ecosystem that matches your environment work for your scenario? If you are not comfortable with where the FCoE and converged Ethernet technologies, and more importantly the supporting ecosystem, are at, take your time and monitor the situation as it unfolds over the next year or so leading up to the big battle royal between Brocade and Cisco.

    Something that I think is interesting is that here we have Brocade and Cisco squaring off in a convergence battle between a general networking vendor (Cisco) and a storage centric networking vendor (Brocade), both of whom have been built on organic growth as well as acquisitions. What's even more interesting is that around 10 years ago, back when Brocade was just getting started and Cisco was still trying to figure out Fibre Channel and iSCSI, 3Com had the foresight to put together an alliance of storage related partners to get into the then emerging SAN marketplace. The alliance was to include various storage vendors, switch and HBA as well as router or gateway vendors, along with data and backup software vendors. Before the program could be officially launched, it was canceled just as all of the promotional material was about to be distributed, due to the poor financial health of 3Com. With a few exceptions, most of the participants in that early program, which was probably a year or two ahead of its time, have either been bought or disappeared altogether. 3Com could have been a major force in a converged LAN and SAN marketplace instead of now watching Brocade and Cisco from the sidelines.

    For now, congratulations to Mike Klayko and crew for demonstrating that they want to put up a fight and provide an alternative to Cisco for their customers, and that they are serious about being a contender in the data infrastructure solution provider fight. For Cisco, it looks like two of your competitors have now become one. Good luck and best wishes to both sides, Brocade and Cisco; I will be watching this battle from ringside as both parties line up and re-align their partner ecosystems.

    Cheers
    gs