Oracle, Xsigo, VMware, Nicira, SDN and IOV: IO IO its off to work they go

StorageIO industry trends and perspectives

In case you missed it, VMware recently announced spending $1.05 billion USD to acquire startup Nicira for its virtualization and software technology that enables software defined networks (SDN). Also last week, Oracle was in the news getting its hands slapped for making misleading performance claims in advertisements vs. IBM.

On the heels of VMware buying Nicira for software defined networking (SDN), or what is also known as IO virtualization (IOV) and virtualized networking, Oracle is now claiming its own SDN capabilities with its announced intent to acquire Xsigo. Founded in 2004, Xsigo has a hardware platform combined with software that enables attaching servers to different Fibre Channel (SAN) and Ethernet based (LAN) networks with its version of IOV.

Thus it is now Oracle announcing that it will acquire IO, networking and virtualization hardware and software vendor Xsigo for an undisclosed amount. Xsigo has made its name in the IO virtualization (IOV) and converged networking along with server and storage virtualization space over the past several years, including via partnerships with various vendors.

Buzz word bingo

Technology buzzwords and buzz terms can often be a gray area, leaving plenty of room for marketers and PR folks to run with. Case in point: AaaS, Big data, Cloud, Compliance, Green, IaaS, IOV, Orchestration, PaaS and Virtualization, among other buzzword bingo or XaaS topics. Since Xsigo has been out front in messaging and industry awareness around IO networking convergence of Ethernet based Local Area Networks (LANs) and Fibre Channel (FC) based Storage Area Networks (SANs), along with embracing InfiniBand, it made sense for them to play to their strength, which is IO virtualization (aka IOV).

To me and among others (here and here and here), it is interesting that Xsigo had not laid claim to being part of the software defined networking (SDN) movement or the affiliated OpenFlow networking initiatives, as happened with Nicira (and now Oracle for that matter). When the Oracle marketing and PR folks put out their press release on a Monday morning, some of the media and press, trade industry, financial and general news agencies alike, took the Oracle script hook, line and sinker, running with it.

What was effective is how well many industry trade pubs and their analysts simply picked up the press release and ran with it in the all too common race to see who can get the news or story out first, or in some cases before it actually happens.


To be clear, not all pubs jumped, including some of those mentioned by Greg Knieriemen (aka @knieriemen) over at SpeakinginTech highlights. I know some who took the time to call, ask around and leverage their journalistic training to dig, research and find out what this really meant vs. simply taking and running with the script. An example was a call I had with Beth Pariseau (aka @pariseautt); you can read her stories here and here.

Interestingly enough, the Xsigo marketers had not embraced the SDN term, sticking with the more known (at least in some circles) IOV and VIO descriptions. What is also interesting is that just last week Oracle marketing had its hands slapped by the Better Business Bureau (BBB) NAD after IBM complained about unfair performance based advertisements for Exadata.


Hmm, I wonder if the SDN police or somebody else will lodge a similar complaint with the BBB on behalf of those doing SDN?

Both Oracle and Xsigo, along with other InfiniBand (and some Ethernet and PCIe) focused vendors, are members of the OpenFabrics Alliance, not to be confused with the group working on OpenFlow.


Here are some other things to think about:

Oracle has a history of doing acquisitions without disclosing terms, as well as structuring them as earn-outs, such as was the case with Pillar.

Oracle uses Ethernet in its servers and appliances and has also been an adopter of InfiniBand, primarily for node to node communication, however also for server to application traffic.

Oracle is also an investor in Mellanox, the folks who make InfiniBand and Ethernet products.

Oracle has built various stacks including Exadata (Database Machine), Exalogic, Exalytics and the Database Appliance, in addition to their 7000 series of storage systems.

Oracle has done earlier virtualization related acquisitions including Virtual Iron.

Oracle has a reputation among some of its customers, who love to hate the company for various reasons.

Oracle has a reputation of being aggressive, even by other market leader aggressive standards.

Integrated solution stacks (aka stack wars), or what some remember as bundles, continue, and Oracle has many such solutions.

What will happen to Xsigo as you know it today (besides what the press releases are saying)?

While Xsigo was not a member of the Open Networking Foundation (ONF), Oracle is.

Xsigo is a member of the Open Fabric Alliance along with Oracle, Mellanox and others interested in servers, PCIe, InfiniBand, Ethernet, networking and storage.


What’s my take?

While there are similarities in that both Nicira and Xsigo are involved with IO virtualization, what they are doing, how they are doing it, who they are doing it with, along with where they can play, all vary.

Not sure what Oracle paid; however, assuming that it was in the couple of million dollars or less, in cash or a combination with stock, both they and the investors, as well as some of the employees, friends and families, did ok.

Oracle also gets some intellectual property that it can combine with earlier acquisitions via Sun and Virtual Iron, along with its investment in InfiniBand (and now also Ethernet) vendor Mellanox.

Likewise, Oracle gets some extra technology that they can leverage in their various stacked or integrated (aka bundled) solutions for both virtual and physical environments.

For Xsigo customers, the good news is that you now know who will be buying the company; however, there should be questions about the future beyond what is being said in press releases.

Does this acquisition give Oracle a play in the software defined networking space like Nicira gives to VMware? I would say no, given the hardware dependency; however, it does give Oracle some extra technology to play with.

Likewise, while SDN is an important and popular buzzword topic, and OpenFlow comes up in conversations, perhaps that should be more of the focus vs. whether a solution is all software or hardware plus software.


I also find it entertaining how last week the Better Business Bureau (BBB) and its National Advertising Division (NAD) slapped Oracle's hands after IBM complained of misleading performance claims about Oracle Exadata vs. IBM. The reason I find it entertaining is not that Oracle had its hands slapped or that IBM complained to the BBB, rather how the Oracle marketers and PR folks came up with a spin around what could be called a proprietary SDN (hmm, pSDN?) story, fed it to the press and media, who then ran with it.

I'm not convinced that this is an all-out launch of a war by Oracle vs. Cisco, let alone any of the other networking vendors, as some have speculated (makes for good headlines though). Instead, I'm seeing it as more of an opportunistic acquisition by Oracle, most likely at a good middle of summer price. Now if Oracle really wanted to go to battle with Cisco (and others), then there are others to buy, such as Brocade, Juniper, etc. However, there are other opportunities for Oracle to be focused on (or sidetracked by) right now.

Oh, let's also see what Cisco has to say about all of this, which should be interesting.

Additional related links:
Data Center I/O Bottlenecks Performance Issues and Impacts
I/O, I/O, Its off to Virtual Work and VMworld I Go (or went)
I/O Virtualization (IOV) Revisited
Industry Trends and Perspectives: Converged Networking and IO Virtualization (IOV)
The function of XaaS(X) Pick a letter
What is the best kind of IO? The one you do not have to do
Why FC and FCoE vendors get beat up over bandwidth?


If you are interested in learning more about IOV, Xsigo, or are having trouble sleeping, click here, here, here, here, here, here, here, here, here, here, here, here, here, or here (I think that's enough links for now ;).

Ok, nuff said for now, as I have probably requalified for being on the Oracle you know what list for not sticking to the story script, oops, excuse me, I mean press release message.

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Dude, is Dell going to buy Brocade?

Some IT industry buzz this week is around continued speculation (or here) about who Dell will buy next and whether it will be Brocade.

Brocade was mentioned as a possible acquisition by some in the IT industry last fall after Dell stepped back from the 3PAR bidding war with HP. Industry rumors or speculation involving Dell and Brocade are not new, some going back a year or more (or here or here).


Last fall I did a blog post commenting that I thought Dell would go on to buy someone else (which turned out to be Compellent and InSite One). Those acquisitions by Dell followed their purchases of companies including Scalent, Kace, Exanet, Perot, and Ocarina among others. In that post, I also commented that I did not think (at least at that time) that Brocade would be a likely or good fit for Dell given their different business models, go to market strategies and other factors.

Dell is clearly looking to move further up into the enterprise space, which means adding more products and routes to market, one of which is networking, and another of which involves people with the associated skill sets. The networking business at Dell has been good for them, along with storage to complement their traditional server and workstation business, not to mention their continued expansion into medical, life science and healthcare related solutions. All of those are key building blocks for moving to cloud, virtual and data storage networking environments.

Dell has also done some interesting acquisitions around management and service or workflow tools with Scalent and Kace, not to mention their scale out NAS file system (excuse me, big data) solutions via Exanet and data footprint reduction tools with Ocarina, all of which have plays in the enterprise, cloud and traditional Dell markets.

But what about Brocade?

Is it a good fit for Dell?

Dell certainly could benefit from owning Brocade as a means of expanding its Ethernet and IP business beyond OEM partnerships, similar to HP supplementing its networking business with 3Com and IBM with Blade Network Technologies.

However, would Dell acquiring Brocade disrupt their relationships with Cisco or other networking providers?

If Dell were to make a bid for Brocade, would Huawei (or here) sit on the sidelines and watch or jump in the game to stir things up?

Would Cisco counter with a deal Dell could not refuse to tighten their partnership at different levels perhaps even involving something with the UCS that was discussed on a recent Infosmack episode?

How would EMC, Fujitsu, HDS, HP, IBM, NetApp and Oracle among others, all of whom are partners with Brocade, respond to Dell becoming their OEM supplier for some products?

Would those OEM partnerships continue or cause some of those vendors to become closer aligned with Cisco or others?

Again the question: will Huawei sit back, decide to enter the market on a more serious basis, or continue to quietly increase its presence around the periphery?

Brocade could be a good fit for Dell, giving them a networking solution (both Ethernet via the Foundry acquisition, along with Fibre Channel and Fibre Channel over Ethernet (FCoE)), not to mention many other pieces of IP, including some NAS and file management tools collecting dust on a Brocade shelf somewhere. What Dell would also get is a sales force that knows how to sell to OEMs, the channel and enterprise customers, some of whom are networking (Ethernet or Fibre Channel) focused, while others have broader, more diverse backgrounds.

While it is possible that Dell could end up with Brocade, perhaps after a bidding battle (unless others simply let a possible deal go uncontested), Dell would find itself in new and unfamiliar waters, similar to Brocade gaining its footing moving into the Ethernet and IP space after having been comfortable in the Fibre Channel storage centric space for over a decade.

While the networking products would be a good fit for Dell, assuming they were to do such a deal, the diamond in the rough so to speak could be Brocade's channel, OEM and direct sales team of salespeople, business development, systems engineers and support staff on a global basis. Keep in mind that while some of those Brocadians are network focused, many have connected servers and storage from mainframe to open systems across all vendors for years or in some cases decades. Some of those people, whom I know personally, are even talented enough to sell ice to an Eskimo (that is a sales joke btw).

Sure, the Brocadians would have to be leveraged to keep selling what they have been selling, a task similar to what NetApp currently faces with its integration of Engenio.

However, that DNA could help Dell establish a presence in organizations where it has not been in the past. In other words, Dell could use networking to pull the rest of its product lines into those accounts, VARs or resellers.

Hmmm, does that sound like another large California based networking company?


After all, June is a popular month for weddings; let's see what happens next week down in Orlando during the Dell Storage Forum, which some have speculated might be a launching pad for some type of deal.

Here are some related links to more material:

  • HP Buys one of the seven networking dwarfs and gets a bargain
  • Dell Will Buy Someone, However Not Brocade (At least for now)
  • While HP and Dell make counter bids, exclusive interview with 3PAR CEO David Scott
  • Acadia VCE: VMware + Cisco + EMC = Virtual Computing Environment
  • Did someone forget to tell Dell that Tape is dead?
  • Data footprint reduction (Part 1): Life beyond dedupe and changing data lifecycles
  • Data footprint reduction (Part 2): Dell, IBM, Ocarina and Storwize
  • What is DFR or Data Footprint Reduction?
  • Could Huawei buy Brocade?
  • Has FCoE entered the trough of disillusionment?
  • More on Fibre Channel over Ethernet (FCoE)
  • Dude, is Dell doing a disk deal again with Compellent?
  • Post Holiday IT Shopping Bargains, Dell Buying Exanet?
  • Back to school shopping: Dude, Dell Digests 3PAR Disk storage
  • Huawei should buy Brocade
  • NetApp buying LSI's Engenio Storage Business Unit
  Ok, nuff said for now

    Cheers Gs

    Greg Schulz – Author The Green and Virtual Data Center (CRC), Resilient Storage Networks (Elsevier) and coming summer 2011 Cloud and Virtual Data Storage Networking (CRC)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

    Has FCoE entered the trough of disillusionment?

    This is part of an ongoing series of short industry trends and perspectives blog post briefs based on what I am seeing and hearing in my conversations with IT professionals on a global basis.

    These short posts complement other longer posts along with traditional industry trends and perspective white papers, research reports, videos, podcasts, webcasts as well as solution brief content found at www.storageio.com/reports and www.storageio.com/articles.

    Has FCoE (Fibre Channel over Ethernet) entered the trough of disillusionment?

    IMHO yes, and that is not a bad thing if you like FCoE (which I do, among other technologies).

    The reason I think that it is good that FCoE is in or entering the trough is not that I do not believe in FCoE. Instead, the reason is that most if not all technologies that are more than a passing fad often go through a hype and early adopter phase before taking a breather prior to broader longer term adoption.

    Sure there are FCoE solutions available including switches, CNAs and even storage systems from various vendors. However, FCoE is still very much in its infancy and maturing.

    Based on conversations with IT customer professionals (e.g. those who are not vendors, VARs, consultants, media or analysts) and hearing their plans, I believe that FCoE has entered the proverbial trough of disillusionment, which is a good thing in that FCoE is also ramping up for deployment.

    Another common question that comes up regarding FCoE, as well as other IO networking interfaces, transports and protocols, is whether they are temporal (temporary, short life span) technologies.

    Perhaps, in the scope that all technologies are temporary; however, it is their temporal timeframe that should be of interest. Given that FCoE will probably have at least a ten to fifteen year temporal timeline, I would say in technology terms it has a relatively long life for supporting coexistence on the continued road to convergence, which appears to center around Ethernet.

    That is where I feel FCoE is at currently, taking a break from the initial hype, maturing while IT organizations begin planning for its future deployment.

    I see FCoE as having a bright future coexisting with other complementary and enabling technologies such as IO Virtualization (IOV) including PCI-SIG MR-IOV, Converged Networking, iSCSI, SAS and NAS among others.

    Keep in mind that FCoE does not have to be seen as competitive to iSCSI or NAS, as they all can coexist on a common DCB/CEE/DCE environment, enabling the best of all worlds, not to mention choice. FCoE along with DCB/CEE/DCE provides IT professionals with options (e.g. tiered I/O and networking) to align the applicable technology to the task at hand for physical or virtual environments.

    Again, the questions pertaining to FCoE for many organizations, particularly those not going to iSCSI or NAS for all or part of their needs should be when, where and how to deploy.

    This means that for those with long lead time planning and deployment cycles, now is the time to put your strategy into place for what you will be doing over the next couple of years, if not sooner.

    For those interested, here is a link (may require registration) to a good conversation taking place over on IT Toolbox regarding FCoE and other related themes that may be of interest.

    Here are some links to additional related material:

    • FCoE Infrastructure Coming Together
    • 2010 and 2011 Trends, Perspectives and Predictions: More of the same?
    • SNWSpotlight: 8G FC and FCoE, Solid State Storage
    • NetApp and Cisco roll out vSphere compatible FCoE solutions
    • Fibre Channel over Ethernet FAQs
    • Fast Fibre Channel and iSCSI switches deliver big pipes to virtualized SAN environments.
    • Poll: Networking Convergence, Ethernet, InfiniBand or both?
    • I/O Virtualization (IOV) Revisited
    • Will 6Gb SAS kill Fibre Channel?
    • Experts Corner: Q and A with Greg Schulz at StorageIO
    • Networking Convergence, Ethernet, Infiniband or both?
    • Vendors hail Fibre Channel over Ethernet spec
    • Cisco, NetApp and VMware combine for ‘end-to-end’ FCoE storage
    • FCoE: The great convergence, or not?
    • I/O virtualization and Fibre Channel over Ethernet (FCoE): How do they differ?
    • Chapter 9 – Networking with your servers and storage: The Green and Virtual Data Center (CRC)
    • Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier)

    That is all for now; hope you find this ongoing series of current or emerging Industry Trends and Perspectives posts of interest.

    Of course let me know what your thoughts and perspectives are on this and other related topics.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Industry Trends and Perspectives: Converged Networking and IO Virtualization (IOV)

    This is part of an ongoing series of short industry trends and perspectives blog post briefs.

    These short posts complement other longer posts along with traditional industry trends and perspective white papers, research reports and solution brief content found at www.storageio.com/reports.

    The trends that I am seeing with converged networking and I/O fall into a couple of categories. One being converged networking including unified communications, FCoE/DCB along with InfiniBand based discussions while the other being around I/O virtualization (IOV) including PCIe server based multi root IO virtualization (MRIOV).

    As is often the case with new technologies, some will say these are the next great things: drop everything and adopt them now, as they are working and ready for prime time mission critical deployment. Then there are those who say no, stop, do not waste your time on these, as they are temporary and will die and go away anyway. In between there is reality, which takes a bit of balancing the old with the new: look before you leap, do your homework, and do not be scared, however have a strategy and a plan on how to achieve it.

    Thus, is FCoE a temporal or temporary technology? Well, all technologies are temporary in some scope; however, it is their temporal timeframe that should be of interest. Given that FCoE will probably have at least a ten to fifteen year temporal timeline, I would say in technology terms it has a relatively long life for supporting coexistence on the continued road to convergence, which appears to be Ethernet.

    Related and companion material:
    Video: Storage and Networking Convergence
    Blog: I/O Virtualization (IOV) Revisited
    Blog: I/O, I/O, Its off to Virtual Work and VMworld I Go (or went)
    Blog: EMC VPLEX: Virtual Storage Redefined or Respun?

    That is all for now, hope you find this ongoing series of current and emerging Industry Trends and Perspectives interesting.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Upcoming Event: Industry Trends and Perspective European Seminar

    Event Seminar Announcement:

    IT Data Center, Storage and Virtualization Industry Trends and Perspective
    June 16, 2010 Nijkerk, GELDERLAND Netherlands

    Event Type: Training/Seminar
    Description: Seminar training with Greg Schulz of US based Server and StorageIO
    Sponsor: Brouwer Storage Consultancy
    Target Audience: Storage Architects, Consultants, Pre-Sales, Customer (technical) decision makers
    Keywords: Cloud, Grid, Data Protection, Disaster Recovery, Storage, Green IT, VTL, Encryption, Dedupe, SAN, NAS, Backup, BC, DR, Performance, Virtualization, FCoE
    Location and Venue: Ampt van Nijkerk, Berencamperweg, Nijkerk, GELDERLAND NL
    When: Wed. June 16, 2010, 9AM-5PM local
    Price: € 450,=
    Event URL: https://storageioblog.com/book4.html
    Contact: Gert Brouwer
    Olevoortseweg 43
    3861 MH Nijkerk
    The Netherlands
    Phone: +31-33-246-6825
    Fax: +31-33-245-8956
    Cell Phone: +31-652-601-309

    info@brouwerconsultancy.com

    Abstract: General items that will be covered include: current and emerging macro trends, issues, challenges and opportunities; common IT customer and IT trends, issues and challenges; and opportunities for leveraging various current, new and emerging technologies and techniques. The seminar will provide insight on how to address various IT and data storage management challenges, and where and how new and emerging technologies can co-exist with as well as complement installed resources for maximum investment protection and business agility. Additional themes include cost and storage resource management, optimization and efficiency approaches, along with where and how cloud, virtualization and other topics fit into existing environments.

    Buzzwords and topics to be discussed include among others:

    • FC and FCoE, SAS, SATA, iSCSI and NAS
    • I/O Virtualization (IOV) and convergence
    • SSD (Flash and RAM), RAID, Second Generation MAID and IPM, Tape
    • Performance and Capacity planning, Performance and Capacity Optimization, Metrics
    • IRM tools including DPM, E2E, SRA, SRM, as well as Federated Management
    • Data movement and migration including automation or policy enabled
    • HA and Data protection including Backup/Restore, BC/DR, Security/Encryption, VTL, CDP, Snapshots and replication for virtual and non virtual environments
    • Dynamic IT and Optimization, the new Green IT (efficiency and productivity)
    • Distributed data protection (DDP) and distributed data caching (DDC)
    • Server and Storage Virtualization along with discussion about life beyond consolidation
    • SAN, NAS, Clusters, Grids, Clouds (Public and Private), Bulk and object based Storage
    • Unified and vendor prepackaged stacked solutions (e.g. EMC VCE among others)
    • Data footprint reduction (Servers, Storage, Networks, Data Protection and Hypervisors) among others

    Learn about other events involving Greg Schulz and StorageIO at www.storageio.com/events

    2010 and 2011 Trends, Perspectives and Predictions: More of the same?

    2011 is not a typo; I figured that since I'm getting caught up on some things, why not get a jump as well.

    Since 2009 went by so fast, and I'm finally getting around to doing an obligatory 2010 predictions post, let's take a look at both 2010 and 2011.

    Actually, I'm just now getting around to doing a post here, having already done interviews and articles for others that will soon be released.

    Based on prior trends and looking at forecasts, a simple prediction is that some of the items for 2010 will apply to 2011 as well, given that some of this year's items may have been predicted by some in 2008, 2007, 2006, 2005 or, well ok, you get the picture. :)

    Predictions are fun and funny in that some take them very seriously, while others take them at best with a grain of salt, depending on where you sit. This applies both to the reader as well as to whoever is making the predictions, along with their various motives or incentives.

    Some are serious, some not so much…

    For some, predictions are a great way of touting or promoting favorite wares (hard, soft or services) or getting in yet another plug (YAP is a TLA BTW) to meet a coverage or exposure quota.

    Meanwhile for others, predictions are a chance to brush up on new terms for the upcoming season of buzzword bingo games (did you pick up on YAP).

    In honor of the Vancouver winter games, I'm expecting some cool Olympic sized buzzword bingo games, with a new slippery fast one being federation. Some buzzwords will take a break in 2010 as well as 2011, having been worked pretty hard the past few years, while others that have been on break will reappear well rested, rejuvenated and ready for duty.

    Let's also clarify something regarding predictions: they can come from at least two different perspectives. One view is the trend of what will be talked about or discussed in the industry. The other is in terms of what will actually be bought, deployed and used.

    What can be confusing is that sometimes the two perspectives are intermixed or assumed to be one and the same, and for 2010 I see that trend continuing. In other words, there is adoption in terms of customers asking about and investigating technologies vs. deployment, where they are buying, installing and using those technologies in primary situations.

    It is safe to say that there is still no such thing as an information, data or processing recession. Ok, surprise surprise; my dogs could have probably made that prediction during a nap. However what this means is more data will need to be moved, processed and stored for longer periods of time and at a lower cost without degrading performance or availability.

    This means, denser technologies that enable a lower per unit cost of service without negatively impacting performance, availability, capacity or energy efficiency will be needed. In other words, watch for an expanded virtualization discussion around life beyond consolidation for servers, storage, desktops and networks with a theme around productivity and virtualization for agility and management enablement.

    Certainly there will be continued mergers and acquisitions on both a small and a large scale, ranging from liquidation sales or bargain hunting to a mega blockbuster or two. I'm thinking in terms of outside of the box deals, the type that will have people wondering, perhaps confused, as to why such a deal would be done until the whole picture is revealed and thought out.

    In other words, outside of perhaps IBM, HP, Oracle, Intel or Microsoft among a few others, no vendor is too large not to be acquired, merged with, or even involved in a reverse merger. I'm also thinking in terms of vendors filling in niche areas as well as building out their larger portfolios and IT stacks for integrated solutions.

    Ok, let's take a look at some easy ones, lay ups or slam dunks:

    • More cluster, cloud conversations and confusion (public vs. private, service vs. product vs. architecture)
    • More server, desktop, IO and storage consolidation (excuse me, server virtualization)
    • Data footprint impact reduction ranging from deletion to archive to compress to dedupe among others
    • SSD and in particular flash continues to evolve with more conversations around PCM
    • Growing awareness of social media as yet another tool for customer relations management (CRM)
    • Security, data loss/leak prevention, digital forensics, PCI (payment card industry) and compliance
    • Focus expands from gaming/digital surveillance/security and energy to healthcare
    • Fibre Channel over Ethernet (FCoE) mainstream in discussions with some initial deployments
    • Continued confusion of Green IT and carbon reduction vs. economic and productivity (Green Gap)
    • No such thing as an information, data or processing recession, granted budgets are strained
    • Server, Storage or Systems Resource Analysis (SRA) with event correlation
    • SRA tools that provide and enable automation along with situational awareness

    The green gap of confusion will continue, with carbon or environment centric stories and messages continuing to take a back stage while people realize the other dimension of green, being productivity.

    As previously mentioned, virtualization of servers and storage continues to be popular, with an expanding focus from just consolidation to one around agility and flexibility, enabling production, high performance or other systems that do not lend themselves to consolidation to be virtualized.

    6Gb SAS interfaces as well as more SAS disk drives continue to gain popularity. I have said in the past there was a long shot that 8GFC disk drives might appear. We might very well see those in higher end systems, while SAS drives continue to pick up the high performance spinning disk role in midrange systems.

    Granted, some types of disk drives will give way over time to others; for example, high performance 3.5” 15.5K Fibre Channel disks will give way to 2.5” 15.5K SAS, boosting density and energy efficiency while maintaining performance. SSD will help to offload hot spots as it has in the past, enabling disks to be used more effectively in their applicable roles or tiers, with a net result of enhanced optimization, productivity and economics, all of which have environmental benefits (e.g. the other Green IT, closing the Green Gap).

    What I don't see occurring, at least in 2010

    • An information or data recession requiring less server, storage, I/O networking or software resources
    • OSD (object based disk storage without a gateway) at least in the context of T10
    • Mainframes, magnetic tape, disk drives, PCs, or Windows going away (at least physically)
    • Cisco cracking top 3, no wait, top 5, no make that top 10 server vendor ranking
    • More respect for growing and diverse SOHO market space
    • iSCSI taking over for all I/O connectivity, however I do see iSCSI expand its footprint
    • FCoE and flash based SSD reaching tipping point in terms of actual customer deployments
    • Large increases in IT Budgets and subsequent wild spending rivaling the dot com era
    • Backup, security, data loss prevention (DLP), data availability or protection issues going away
    • Brett Favre and the Minnesota Vikings winning the Super Bowl

    What will be predicted at the end of 2010 for 2011 (some of these will be déjà vu)

    • Many items that were predicted this year, last year, the year before that and so on…
    • Dedupe moving into primary and online active storage, rekindling of dedupe debates
    • Demise of cloud in terms of hype and confusion being replaced by federation
    • Clustered, grid, bulk and other forms of scale out storage grow in adoption
    • Disk, Tape, RAID, Mainframe, Fibre Channel, PCs, Windows being declared dead (again)
    • 2011 will be the year of Holographic storage and T10 OSD (an annual prediction by some)
    • FCoE kicks into broad and mainstream deployment adoption reaching tipping point
    • 16Gb (16GFC) Fibre Channel gets more attention stirring FCoE vs. FC vs. iSCSI debates
    • 100GbE gets more attention along with 4G adoption in order to move more data
    • Demise of iSCSI at the hands of SAS at low end, FCoE at high end and NAS from all angles

    Gaining ground in 2010 however not yet in full stride (at least from customer deployment)

    • On the connectivity front, iSCSI, 6Gb SAS, 8Gb Fibre Channel, FCoE and 100GbE
    • SSD/flash based storage continues to expand, however it is not yet everywhere
    • Dedupe everywhere including primary storage – it is still far from its full potential
    • Public and private clouds along with pNFS as well as scale out or clustered storage
    • Policy based automated storage tiering and transparent data movement or migration
    • Microsoft HyperV and Oracle based server virtualization technologies
    • Open source based technologies along with heterogeneous encryption
    • Virtualization life beyond consolidation addressing agility, flexibility and ease of management
    • Desktop virtualization using Citrix, Microsoft and VMware along with Microsoft Windows 7

    Buzzword bingo hot topics and themes (in no particular order) include:

    • 2009 and previous year carry over items including cloud, iSCSI, HyperV, Dedupe, open source
    • Federation takes over some of the work of cloud, virtualization, clusters and grids
    • E2E, End to End management preferably across different technologies
    • SAS, Serial Attached SCSI for server to storage systems and as disk to storage interface
    • SRA, E2E, event correlation and other situational awareness related IRM tools
    • Virtualization, Life beyond consolidation enabling agility, flexibility for desktop, server and storage
    • Green IT, Transitions from carbon focus to economic with efficiency enabling productivity
    • FCoE, Continues to evolve and mature with more deployments however still not at tipping point
    • SSD, Flash based mediums continue to evolve however tipping point is still over the horizon
    • IOV, I/O Virtualization for both virtual and non virtual servers
    • Other new or recycled buzzword bingo candidates include PCoIP and 4G

    RAID will again be pronounced dead and no longer relevant, yet it will be found in more diverse deployments from consumer to the enterprise. In other words, RAID may be boring and thus no longer relevant to talk about, yet it is being used everywhere and enhanced in evolutionary, perhaps for some even revolutionary, ways.

    Tape continues to be declared dead (e.g. placed on the Zombie technology list) yet it is being enhanced, purchased and utilized at higher rates, with more data stored on it than ever before. Instead of being killed off by the disk drive, tape is being kept around for traditional uses as well as taking on new roles where it is best suited, such as long term or bulk off-line storage of data in an ultra dense, energy efficient, not to mention economical manner.

    What I am seeing and hearing is that customers using tape are able to reduce the number of drives or transports; by leveraging disk buffers or caches, including VTL and dedupe devices, they can operate those drives at higher utilization, thus requiring fewer devices while storing more data on media than in the past.
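    To put some rough, purely hypothetical numbers on that effect (the drive rates and utilization figures below are illustrative, not from any customer data or survey):

```python
import math

def drives_needed(ingest_mb_s, drive_mb_s, utilization):
    """Tape drives needed to absorb a given backup ingest rate.

    Illustrative model: higher utilization (e.g. a disk/VTL buffer feeding
    the drives at full streaming speed) means fewer drives are required.
    """
    return math.ceil(ingest_mb_s / (drive_mb_s * utilization))

# Hypothetical: 1,000 MB/s of nightly backup ingest, 120 MB/s tape drives.
before = drives_needed(1000, 120, 0.30)  # drives often idle or shoe-shining
after = drives_needed(1000, 120, 0.90)   # VTL/dedupe buffer keeps them streaming
print(before, after)  # 28 vs. 10 drives for the same workload
```

    The point is not the specific numbers, rather that raising utilization of each transport is what lets fewer devices move more data.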

    Likewise, even though I have been a fan of SSD for about 20 years and am bullish on its continued adoption, I do not see SSD killing off the spinning disk drive anytime soon. In fact, disk drives are helping tape take on its new role by acting as a buffer or cache in the form of VTLs, disk based backup and bulk storage enhanced with compression, dedupe, thin provisioning and replication among other functionality.

    There you have it, my predictions, observations and perspectives for 2010 and 2011. It is a broad and diverse list; however, I also get asked about and see a lot of different technologies, techniques and trends tied to IT resources (servers, storage, I/O and networks, hardware, software and services).

    Let's see how they play out.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Poll: Networking Convergence, Ethernet, InfiniBand or both?

    I just received an email in my inbox from Voltaire along with a pile of other advertisements, advisories, alerts and announcements from other folks.

    What caught my eye in the email was that it announces new survey results that you can read here as well as below.

    The question this survey announcement prompts for me, and hence why I am posting it here, is how dominant InfiniBand will be on a going-forward basis. The answer, I think, is it depends…

    It depends on the target market or audience, what their applications and technology preferences are along with other service requirements.

    I think there is and will remain a place for InfiniBand; the question is where and for what types of environments, as well as why have both InfiniBand and Ethernet, including Fibre Channel over Ethernet (FCoE), in support of unified or converged I/O and data networking.

    So here is the note that I received from Voltaire:

     

    Hello,

    A new survey by Voltaire (NASDAQ: VOLT) reveals that IT executives plan to use InfiniBand and Ethernet technologies together as they refresh or build new data centers. They’re choosing a converged network strategy to improve fabric performance which in turn furthers their infrastructure consolidation and efficiency objectives.

    The full press release is below.  Please contact me if you would like to speak with a Voltaire executive for further commentary.

    Regards,
    Christy

    ____________________________________________________________
    Christy Lynch| 978.439.5407(o) |617.794.1362(m)
    Director, Corporate Communications
    Voltaire – The Leader in Scale-Out Data Center Fabrics
    christyl@voltaire.com | www.voltaire.com
    Follow us on Twitter: www.twitter.com/voltaireltd

    FOR IMMEDIATE RELEASE:

    IT Survey Finds Executives Planning Converged Network Strategy:
    Using Both InfiniBand and Ethernet

    Fabric Performance Key to Making Data Centers Operate More Efficiently

    CHELMSFORD, Mass. and RA'ANANA, Israel, January 12, 2010 – A new survey by Voltaire (NASDAQ: VOLT) reveals that IT executives plan to use InfiniBand and Ethernet technologies together as they refresh or build new data centers. They’re choosing a converged network strategy to improve fabric performance which in turn furthers their infrastructure consolidation and efficiency objectives.

    Voltaire queried more than 120 members of the Global CIO & Executive IT Group, which includes CIOs, senior IT executives, and others in the field that attended the 2009 MIT Sloan CIO Symposium. The survey explored their data center networking needs, their choice of interconnect technologies (fabrics) for the enterprise, and criteria for making technology purchasing decisions.

    “Increasingly, InfiniBand and Ethernet share the ability to address key networking requirements of virtualized, scale-out data centers, such as performance, efficiency, and scalability,” noted Asaf Somekh, vice president of marketing, Voltaire. “By adopting a converged network strategy, IT executives can build on their pre-existing investments, and leverage the best of both technologies.”

    When asked about their fabric choices, 45 percent of the respondents said they planned to implement both InfiniBand with Ethernet as they made future data center enhancements. Another 54 percent intended to rely on Ethernet alone.

    Among additional survey results:

    • When asked to rank the most important characteristics for their data center fabric, the largest number (31 percent) cited high bandwidth. Twenty-two percent cited low latency, and 17 percent said scalability.
    • When asked about their top data center networking priorities for the next two years, 34 percent again cited performance. Twenty-seven percent mentioned reducing costs, and 16 percent cited improving service levels.
    • A majority (nearly 60 percent) favored a fabric/network that is supported or backed by a global server manufacturer.

    InfiniBand and Ethernet interconnect technologies are widely used in today’s data centers to speed up and make the most of computing applications, and to enable faster sharing of data among storage and server networks. Voltaire’s server and storage fabric switches leverage both technologies for optimum efficiency. The company provides InfiniBand products used in supercomputers, high-performance computing, and enterprise environments, as well as its Ethernet products to help a broad array of enterprise data centers meet their performance requirements and consolidation plans.

    About Voltaire
    Voltaire (NASDAQ: VOLT) is a leading provider of scale-out computing fabrics for data centers, high performance computing and cloud environments. Voltaire’s family of server and storage fabric switches and advanced management software improve performance of mission-critical applications, increase efficiency and reduce costs through infrastructure consolidation and lower power consumption. Used by more than 30 percent of the Fortune 100 and other premier organizations across many industries, including many of the TOP500 supercomputers, Voltaire products are included in server and blade offerings from Bull, HP, IBM, NEC and Sun. Founded in 1997, Voltaire is headquartered in Ra’anana, Israel and Chelmsford, Massachusetts. More information is available at www.voltaire.com or by calling 1-800-865-8247.

    Forward Looking Statements
    Information provided in this press release may contain statements relating to current expectations, estimates, forecasts and projections about future events that are "forward-looking statements" as defined in the Private Securities Litigation Reform Act of 1995. These forward-looking statements generally relate to Voltaire’s plans, objectives and expectations for future operations and are based upon management’s current estimates and projections of future results or trends. They also include third-party projections regarding expected industry growth rates. Actual future results may differ materially from those projected as a result of certain risks and uncertainties. These factors include, but are not limited to, those discussed under the heading "Risk Factors" in Voltaire’s annual report on Form 20-F for the year ended December 31, 2008. These forward-looking statements are made only as of the date hereof, and we undertake no obligation to update or revise the forward-looking statements, whether as a result of new information, future events or otherwise.

    ###

    All product and company names mentioned herein may be the trademarks of their respective owners.

     

    End of Voltaire transmission:

    I/O, storage and networking interface wars come and go, similar to other technology debates over what is best or which will reign supreme.

    Some recent debates have been around Fibre Channel vs. iSCSI or iSCSI vs. Fibre Channel (depends on your perspective), SAN vs. NAS, NAS vs. SAS, SAS vs. iSCSI or Fibre Channel, Fibre Channel vs. Fibre Channel over Ethernet (FCoE) vs. iSCSI vs. InfiniBand, xWDM vs. SONET or MPLS, IP vs UDP or other IP based services, not to mention the whole LAN, SAN, MAN, WAN POTS and PAN speed games of 1G, 2G, 4G, 8G, 10G, 40G or 100G. Of course there are also the I/O virtualization (IOV) discussions including PCIe Single Root (SR) and Multi Root (MR) for attachment of SAS/SATA, Ethernet, Fibre Channel or other adapters vs. other approaches.

    Thus when I routinely get asked about what is best, my answer usually is a qualified it depends, based on what you are doing, what you are trying to accomplish, your environment, and your preferences among other factors. In other words, I'm not hung up on or tied to any one particular networking transport, protocol, network or interface; rather, I favor the ones that work and are most applicable to the task at hand.

    Now getting back to Voltaire and InfiniBand, which I think has a future for some environments; however, I don't see it being the be-all end-all it was once promoted to be. And outside of the InfiniBand faithful (there are also iSCSI, SAS, Fibre Channel, FCoE, CEE and DCE among other devotees), I suspect that the results would be mixed.

    I suspect the Voltaire survey reflects that as well; if I surveyed an Ethernet dominated environment I could take a pretty good guess at the results, likewise for a Fibre Channel or FCoE influenced environment. Not to mention the composition of the environment, its focus and the business or applications being supported. One would also expect slightly different survey results from the likes of Aprius, Broadcom, Brocade, Cisco, Emulex, Mellanox (they are also involved with InfiniBand), NextIO, Qlogic (they actually do some InfiniBand activity as well), Virtensys or Xsigo (who actually support convergence of Fibre Channel and Ethernet via InfiniBand) among others.

    Ok, so what is your take?

    What's your preferred network interface for convergence?

    For additional reading, here are some related links:

  • I/O Virtualization (IOV) Revisited
  • I/O, I/O, Its off to Virtual Work and VMworld I Go (or went)
  • Buzzword Bingo 1.0 – Are you ready for fall product announcements?
  • StorageIO in the News Update V2010.1
  • The Green and Virtual Data Center (Chapter 9)
  • Also check out what others including Scott Lowe have to say about IOV here, Stuart Miniman about FCoE here, or Greg Ferro here.
  • Oh, and for what it's worth for those concerned about FTC disclosure, Voltaire is not, nor have they been, a client of StorageIO; however, I did work for a Fibre Channel, iSCSI, IP storage, LAN, SAN, MAN, WAN vendor and wrote a book on the topics :).

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)


    What is the Future of Servers?

    Recently I provided some comments and perspectives on the future of servers in an article over at Processor.com.

    In general, blade servers will become more ubiquitous; that is, they won't go away even with cloud. Rather, they will become more commonplace, with even higher density processors with more cores and performance, along with faster I/O and larger memory capacity per given footprint.

    While the term blade server may fade, giving way to some new term or phrase, rest assured their capabilities and functionality will not disappear; rather, they will be further enhanced to support virtualization with VMware vSphere, Microsoft HyperV and Citrix/Xen, along with public and private clouds, both for consolidation and in the next wave of virtualization called life beyond consolidation.

    The other trend is that not only will servers be able to support more processing and memory per footprint; they will also do so while drawing less energy and requiring less cooling, hence more GHz per watt, along with energy saving modes when less work needs to be performed.

    Another trend is around convergence both in terms of packaging along with technology improvements from a server, I/O networking and storage perspective. For example, enhancements to shared PCIe with I/O virtualization, hypervisor optimization, and integration such as the recently announced EMC, Cisco, Intel and VMware VCE coalition and vblocks.

    Read more including my comments in the article here.

    Ok, nuff said.

    Cheers gs


    I/O Virtualization (IOV) Revisited

    Is I/O Virtualization (IOV) a server topic, a network topic, or a storage topic (See previous post)?

    Like server virtualization, IOV involves servers, storage, network, operating system, and other infrastructure resource management areas and disciplines. The business and technology value proposition or benefits of converged I/O networks and I/O virtualization are similar to those for server and storage virtualization.

    Additional benefits of IOV include:

      • Doing more with the resources (people and technology) that already exist, or reducing costs
      • Single (or pair for high availability) interconnect for networking and storage I/O
      • Reduction of power, cooling, floor space, and other green efficiency benefits
      • Simplified cabling and reduced complexity for server network and storage interconnects
      • Boosting server performance while maximizing I/O or mezzanine slots
      • Reducing I/O and data center bottlenecks
      • Rapid re-deployment to meet changing workload and I/O profiles of virtual servers
      • Scaling I/O capacity to meet high-performance and clustered application needs
      • Leveraging common cabling infrastructure and physical networking facilities

    Before going further, let's take a step backward for a few moments.

    To say that I/O and networking demands and requirements are increasing is an understatement. The amount of data being generated, copied, and retained for longer periods of time is elevating the importance of the role of data storage and infrastructure resource management (IRM). Networking and input/output (I/O) connectivity technologies (figure 1) tie together facilities, servers, storage, tools for measurement and management, and best practices on a local and wide area basis to enable an environmentally and economically friendly data center.

    TIERED ACCESS FOR SERVERS AND STORAGE
    There is an old saying that the best I/O, whether local or remote, is an I/O that does not have to occur. I/O is an essential activity for computers of all shapes, sizes, and focus to read and write data in and out of memory (including external storage) and to communicate with other computers and networking devices. This includes communicating on a local and wide area basis for access to or over Internet, cloud, XaaS, or managed services providers such as shown in figure 1.

    Figure 1 The Big Picture: Data Center I/O and Networking (source: The Green and Virtual Data Center, CRC Press, © 2009)

    The challenge of I/O is that some form of connectivity (logical and physical), along with associated software, is required, and time delays are incurred while waiting for reads and writes to occur. I/O operations that are closest to the CPU or main processor should be the fastest and occur most frequently, for access to main memory using internal local CPU to memory interconnects. In other words, fast servers or processors need fast I/O, in terms of low latency, I/O operations (IOPS) and bandwidth capabilities.

    Figure 2 Tiered I/O and Networking Access (source: The Green and Virtual Data Center, CRC Press, © 2009)

    Moving out and away from the main processor, I/O remains fairly fast with distance but is more flexible and cost effective. An example is the PCIe bus and I/O interconnect shown in Figure 2, which is slower than processor-to-memory interconnects but is still able to support attachment of various device adapters with very good performance in a cost effective manner.

    Farther from the main CPU or processor, various networking and I/O adapters can attach to PCIe, PCIx, or PCI interconnects for backward compatibility to support various distances, speeds, types of devices, and cost factors.

    In general, the faster a processor or server is, the more prone to a performance impact it will be when it has to wait for slower I/O operations.

    Consequently, faster servers need better-performing I/O connectivity and networks. Better performing means lower latency, more IOPS, and improved bandwidth to meet application profiles and types of operations.
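    A simple Amdahl's law style sketch makes the point; the job profile below is hypothetical, but shows why only speeding up the processor leaves a fast server waiting on its I/O:

```python
def run_time(compute_s, io_wait_s, cpu_speedup):
    """Total job time when only the compute portion benefits from a faster CPU.

    The I/O wait term is fixed: it does not shrink just because the CPU got faster.
    """
    return compute_s / cpu_speedup + io_wait_s

# Hypothetical job: 5 s of compute plus 5 s of I/O wait.
baseline = run_time(5.0, 5.0, 1)    # 10.0 s total
faster_cpu = run_time(5.0, 5.0, 2)  # 7.5 s: a 2x CPU yields only ~1.33x overall
faster_io = run_time(5.0, 2.5, 2)   # 5.0 s: halving I/O wait restores the 2x gain
```

    Doubling the processor alone buys barely a third more throughput in this sketch, which is why faster servers need lower latency, higher IOPS connectivity to keep pace.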

    Peripheral Component Interconnect (PCI)
    Having established that computers need to perform some form of I/O to various devices, at the heart of many I/O and networking connectivity solutions is the Peripheral Component Interconnect (PCI) interface. PCI is an industry standard that specifies the chipsets used to communicate between CPUs and memory and the outside world of I/O and networking device peripherals.

    Figure 3 shows an example of multiple servers or blades, each with dedicated Fibre Channel (FC) and Ethernet adapters (there could be two or more for redundancy). Simply put, the more servers and devices to attach to, the more adapters, cabling and complexity, particularly for blade servers and dense rack-mount systems.

    Figure 3 Dedicated PCI adapters for I/O and networking devices (source: The Green and Virtual Data Center, CRC Press, © 2009)

    Figure 4 shows an example of a PCI implementation including various components such as bridges, adapter slots, and adapter types. PCIe leverages multiple serial unidirectional point to point links, known as lanes, in contrast to traditional PCI, which used a parallel bus design.

    Figure 4 PCI IOV Single Root Configuration Example (source: The Green and Virtual Data Center, CRC Press, © 2009)

    In traditional PCI, bus width varied from 32 to 64 bits; in PCIe, the number of lanes combined with PCIe version and signaling rate determine performance. PCIe interfaces can have 1, 2, 4, 8, 16, or 32 lanes for data movement, depending on card or adapter format and form factor. For example, PCI and PCIx performance can be up to 528 MB per second with a 64 bit, 66 MHz signaling rate, and PCIe is capable of over 4 GB (e.g., 32 Gbit) in each direction using 16 lanes for high-end servers.
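    Those bandwidth figures fall out of simple arithmetic. The sketch below reproduces them; the PCIe Gen1/Gen2 transfer rates and 8b/10b encoding overhead are standard published values rather than something from this article:

```python
def pci_bandwidth_mb_s(bus_width_bits, clock_hz):
    """Parallel PCI/PCI-X: bus width times clock rate, in MB/s (1 MB = 1e6 bytes)."""
    return bus_width_bits / 8 * clock_hz / 1e6

def pcie_lane_mb_s(transfer_rate_gt_s, encoding_efficiency=0.8):
    """One PCIe lane, one direction. Gen1/Gen2 use 8b/10b encoding (80% efficient)."""
    return transfer_rate_gt_s * 1e9 * encoding_efficiency / 8 / 1e6

pci_x = pci_bandwidth_mb_s(64, 66e6)  # 528 MB/s: the 64-bit, 66 MHz case cited above
gen1_x16 = 16 * pcie_lane_mb_s(2.5)   # 4,000 MB/s each direction (~32 Gbit/s)
gen2_x16 = 16 * pcie_lane_mb_s(5.0)   # 8,000 MB/s each direction
```

    The lane count times the per-lane rate is what lets a x16 slot reach the 4 GB per second (32 Gbit) per direction mentioned above, with Gen2 doubling that.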

    The importance of PCIe and its predecessors is a shift from multiple vendors’ different proprietary interconnects for attaching peripherals to servers. For the most part, vendors have shifted to supporting PCIe or early generations of PCI in some form, ranging from native internal on laptops and workstations to I/O, networking, and peripheral slots on larger servers.

    The most current version of PCI, as defined by the PCI Special Interest Group (PCISIG), is PCI Express (PCIe). Backwards compatibility exists by bridging previous generations, including PCIx and PCI, off a native PCIe bus or, in the past, bridging a PCIe bus to a PCIx native implementation. Beyond speed and bus width differences for the various generations and implementations, PCI adapters also are available in several form factors and applications.

    Traditional PCI was generally limited to a main processor or was internal to a single computer, but current generations of PCI Express (PCIe) include support for PCI SIG I/O virtualization (IOV), enabling the PCI bus to be extended to distances of a few feet. Compared to local area networking, storage interconnects, and other I/O connectivity technologies, a few feet is a very short distance, but compared to the previous limit of a few inches, extended PCIe provides the ability for improved sharing of I/O and networking interconnects.

    I/O VIRTUALIZATION(IOV)
    On a traditional physical server, the operating system sees one or more instances of Fibre Channel and Ethernet adapters even if only a single physical adapter, such as an InfiniBand HCA, is installed in a PCI or PCIe slot. In the case of a virtualized server, for example Microsoft HyperV or VMware ESX/vSphere, the hypervisor will be able to see and share a single physical adapter, or multiple adapters for redundancy and performance, with guest operating systems. The guest systems see what appears to be a standard SAS, FC or Ethernet adapter or NIC using standard plug-and-play drivers.

    Virtual HBAs or virtual network interface cards (NICs) and switches are, as their names imply, virtual representations of a physical HBA or NIC, similar to how a virtual machine emulates a physical machine. With a virtual HBA or NIC, physical adapter resources are carved up and allocated like virtual machines, but instead of hosting a guest operating system such as Windows, UNIX, or Linux, what is presented is a SAS or FC HBA, FCoE converged network adapter (CNA) or Ethernet NIC.

    In addition to virtual or software-based NICs, adapters, and switches found in server virtualization implementations, virtual LAN (VLAN), virtual SAN (VSAN), and virtual private network (VPN) are tools for providing abstraction and isolation or segmentation of physical resources. Using emulation and abstraction capabilities, various segments or sub networks can be physically connected yet logically isolated for management, performance, and security purposes. Some form of routing or gateway functionality enables various network segments or virtual networks to communicate with each other when appropriate security is met.

    PCI-SIG IOV
    PCI SIG IOV consists of a PCIe bridge attached to a PCI root complex along with an attachment to a separate PCI enclosure (Figure 5). Other components and facilities include address translation service (ATS), single-root IOV (SR IOV), and multiroot IOV (MR IOV). ATS enables performance to be optimized between an I/O device and a server's I/O memory management. Single root IOV (SR IOV) enables multiple guest operating systems to access a single I/O device simultaneously, without having to rely on a hypervisor for a virtual HBA or NIC.

    Figure 5 PCI SIG IOV (source: The Green and Virtual Data Center, CRC Press, © 2009)

    The benefit is that physical adapter cards, located in a physically separate enclosure, can be shared within a single physical server without having to incur any potential I/O overhead via virtualization software infrastructure. MR IOV is the next step, enabling a PCIe or SR IOV device to be accessed through a shared PCIe fabric across different physically separated servers and PCIe adapter enclosures. The benefit is increased sharing of physical adapters across multiple servers and operating systems, not to mention simplified cabling, reduced complexity and improved resource utilization.

    Figure 6 PCI SIG MR IOV (source: The Green and Virtual Data Center, CRC Press, © 2009)

    Figure 6 shows an example of a PCIe switched environment, where two physically separate servers or blade servers attach to an external PCIe enclosure or card cage for attachment to PCIe, PCIx, or PCI devices. Instead of the adapter cards physically plugging into each server, a high performance short-distance cable connects the server's PCI root complex via a PCIe bridge port to a PCIe bridge port in the enclosure device.

    In figure 6, either SR IOV or MR IOV can take place, depending on specific PCIe firmware, server hardware, operating system, devices, and associated drivers and management software. In an SR IOV example, each server has access to some number of dedicated adapters in the external card cage, for example InfiniBand HCAs, Fibre Channel HBAs, Ethernet NICs, or Fibre Channel over Ethernet (FCoE) converged network adapters (CNAs). SR IOV implementations do not allow different physical servers to share adapter cards. MR IOV builds on SR IOV by enabling multiple physical servers to access and share PCI devices such as HBAs and NICs safely and with transparency.
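    As a side note, current Linux kernels expose this carving of an SR IOV physical function into virtual functions via sysfs. A minimal sketch, assuming an SR IOV capable adapter; the PCI address is hypothetical, actual support depends on the adapter, firmware and kernel, and nothing is written to the system here:

```python
from pathlib import Path

def sriov_paths(pci_address):
    """Build the sysfs paths that control SR-IOV for a given PCI device.

    pci_address is a hypothetical example such as "0000:3b:00.0".
    sriov_totalvfs and sriov_numvfs are the standard kernel attribute names.
    """
    dev = Path("/sys/bus/pci/devices") / pci_address
    return {
        "max_vfs": dev / "sriov_totalvfs",  # read-only: VFs the hardware supports
        "num_vfs": dev / "sriov_numvfs",    # write a count here to create VFs
    }

paths = sriov_paths("0000:3b:00.0")
# On a real system with such an adapter one might then do (requires root):
#   vfs = int(paths["max_vfs"].read_text())
#   paths["num_vfs"].write_text(str(vfs))
```

    Each virtual function created this way then shows up as its own PCI device that a hypervisor can hand directly to a guest, which is exactly the "without relying on a hypervisor virtual NIC" benefit described above.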

    The primary benefit of PCI IOV is to improve utilization of PCI devices, including adapters or mezzanine cards, as well as to enable performance and availability for slot-constrained and physical footprint or form factor-challenged servers. Caveats of PCI IOV are distance limitations and the need for hardware, firmware, operating system, and management software support to enable safe and transparent sharing of PCI devices. Examples of PCIe IOV vendors include Aprius, NextIO and Virtensys among others.

    InfiniBand IOV
    InfiniBand based IOV solutions are an alternative to Ethernet-based solutions. Essentially, InfiniBand approaches are similar, if not identical, to converged Ethernet approaches including FCoE, with the difference being InfiniBand as the network transport. InfiniBand HCAs with special firmware are installed into servers that then see a Fibre Channel HBA and Ethernet NIC from a single physical adapter. The InfiniBand HCA also attaches to a switch or director that in turn attaches to Fibre Channel SAN or Ethernet LAN networks.

    The value of InfiniBand converged networks is that they exist today and can be used for consolidation as well as to boost performance and availability. InfiniBand IOV also provides an alternative for those who do not choose to deploy Ethernet.

    From a power, cooling, floor-space or footprint standpoint, converged networks can be used for consolidation to reduce the total number of adapters and the associated power and cooling. In addition to removing unneeded adapters without loss of functionality, converged networks also free up or allow a reduction in the amount of cabling, which can improve airflow for cooling, resulting in additional energy efficiency. An example of a vendor using InfiniBand as a platform for I/O virtualization is Xsigo.

    General takeaway points include the following:

    • Minimize the impact of I/O delays to applications, servers, storage, and networks
    • Do more with what you have, including improving utilization and performance
    • Consider latency, effective bandwidth, and availability in addition to cost
    • Apply the appropriate type and tiered I/O and networking to the task at hand
    • I/O operations and connectivity are being virtualized to simplify management
    • Convergence of networking transports and protocols continues to evolve
    • PCIe IOV is complementary to converged networking including FCoE

    Moving forward, a revolutionary new technology may emerge that finally eliminates the need for I/O operations. However until that time, or at least for the foreseeable future, several things can be done to minimize the impacts of I/O for local and remote networking as well as to simplify connectivity.

    PCIe Fundamentals Server Storage I/O Network Essentials

    Learn more about IOV, converged networks, LAN, SAN, MAN and WAN related topics in Chapter 9 (Networking with your servers and storage) of The Green and Virtual Data Center (CRC) as well as in Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier).

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Should Everything Be Virtualized?

    Storage I/O trends

    Should everything, that is all servers, storage and I/O along with facilities, be virtualized?

    The answer, not surprisingly, is: it depends!

    Denny Cherry (aka Mrdenny) over at ITKE did a great recent post about applications not being virtualized, particularly databases. On some of his points or themes we are on the same or similar page, while on others we differ slightly, though not by very much.

    Unfortunately consolidation is commonly misunderstood to be the sole function or value proposition of server virtualization given its first wave focus. I agree that not all applications or servers should be consolidated (note that I did not say virtualized).

    From a consolidation standpoint, the emphasis is often on driving up resource utilization to cut physical hardware and management costs by increasing the number of virtual machines (VMs) per physical machine (PM). Ironically, while VMs using VMware, Microsoft Hyper-V, Citrix/Xen among others can leverage a common gold image for cloning or rapid provisioning, there are still separate operating system instances and applications that need to be managed for each VM.

    Sure, VM tools from the hypervisor along with third-party vendors help with these tasks, and storage vendor tools including dedupe and thin provisioning help to cut the data footprint impact of these multiple images. However, there are still multiple images to manage, providing a future opportunity for further cost and management reduction (more on that in a different post).

    Getting back on track:

    Some reasons that not all servers or applications can be consolidated include, among others:

    • Performance, response time, latency and Quality of Service (QoS)
    • Security requirements including keeping customers or applications separate
    • Vendor support of software on virtual or consolidated servers
    • Financial where different departments own hardware or software
    • Internal political or organizational barriers and turf wars

    On the other hand, for those that see virtualization as enabling agility and flexibility, that is life beyond consolidation, there are many deployment opportunities for virtualization (note that I did not say consolidation). For some environments and applications, the emphasis can be on performance, quality of service (QoS) and other service characteristics where the ratio of VMs to PMs will be much lower, if not one to one. This is where Mrdenny and I are essentially on the same page, perhaps saying it differently, with plenty of caveats and clarification needed of course.

    My view is that in life beyond consolidation, many more servers or applications can be virtualized than might be otherwise hosted by VMs (note that I did not say consolidated). For example, instead of a high number or ratio of VMs to PMs, a lower number and for some workloads or applications, even one VM to PM can be leveraged with a focus beyond basic CPU use.

    Yes, you read that correctly: I said why not configure some VMs on a one-to-one VM-to-PM basis!

    Here’s the premise: today’s wave or focus is around maximizing the number of VMs and/or reducing the number of physical machines to cut capital and operating costs for under-utilized applications and servers, thus the move to stuff as many VMs into/onto a PM as possible.

    However, for those applications that cannot be consolidated as outlined above, there is still a benefit to having a VM dedicated to a PM. For example, dedicating a PM (blade, server or perhaps core) allows performance and QoS aims to be met while still providing operational and infrastructure resource management (IRM), DCIM or ITSM flexibility and agility.

    Meanwhile during busy periods, an application such as a database server could have its own PM, yet during off-hours, some other VM could be moved onto that PM for backup or other IRM/DCIM/ITSM activities. Likewise, by having the VM under the database with a dedicated PM, the application could be moved proactively for maintenance or, in a clustered HA scenario, to support BC/DR.

    What can and should be done?
    First and foremost, decide how many VMs per PM is the right number for your environment and different applications to meet your particular requirements and business needs.

    Identify various VM to PM ratios to align with different application service requirements. For example, some applications may run on virtual environments with a higher number of VMs to PMs, others with a lower number of VMs to PMs and some with a one VM to PM allocation.
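    The tiering idea can be sketched as a small capacity-planning calculation. The tier names and VM-to-PM ratios below are hypothetical examples for illustration, not recommendations:

```python
# Hypothetical service tiers, each with a different VM-to-PM ratio.
TIERS = {
    "tier1-performance": 1,   # one VM per PM for QoS-sensitive apps (e.g., databases)
    "tier2-general": 8,       # moderate consolidation
    "tier3-utility": 20,      # aggressive consolidation for low-demand workloads
}

def physical_machines_needed(vm_counts: dict) -> int:
    """Return total PMs required given a count of VMs per tier.

    Uses ceiling division per tier so a partially filled PM still counts.
    """
    total = 0
    for tier, vms in vm_counts.items():
        ratio = TIERS[tier]
        total += -(-vms // ratio)  # ceiling division
    return total
```

For example, 4 tier-1 VMs, 30 tier-2 VMs and 45 tier-3 VMs would need 4 + 4 + 3 = 11 PMs, versus 79 PMs unvirtualized, while still giving the QoS-sensitive applications dedicated hardware.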

    Certainly there will be, for different reasons, the need to keep some applications on a dedicated PM without introducing a hypervisor and VM; however, many applications and servers can benefit from virtualization (again note, I did not say consolidation) for agility, flexibility, BC/DR, HA and ease of IRM, assuming the costs work in your favor.

    Additional general to do or action items include among others:

    • Look beyond CPU use, also factoring in memory and I/O performance
    • Keep response time or latency in perspective as part of performance
    • More and faster memory is important for VMs as well as for applications including databases
    • High utilization may not reflect high hit rates or effective resource usage
    • Fast servers need fast memory, fast I/O and fast storage systems
    • Establish tiers of virtual and physical servers to meet different service requirements
    • See efficiency and optimization as more than simply driving up utilization to cut costs
    • Productivity and improved QoS are also tenets of an efficient and optimized environment

    These are themes among others that are covered in chapters 3 (What Defines a Next-Generation and Virtual Data Center?), 4 (IT Infrastructure Resource Management), 5 (Measurement, Metrics, and Management of IT Resources), as well as 7 (Servers—Physical, Virtual, and Software) in my book “The Green and Virtual Data Center” (CRC) that you can learn more about here.

    Welcome to life beyond consolidation, the next wave of desktop, server, storage and IO virtualization along with the many new and expanded opportunities!

    Ok, nuff said.

    Cheers gs


    Could Huawei buy Brocade?

    Disclosure: I have no connection to Huawei. I own no stock in, nor have I worked for Brocade as an employee; however I did work for three years at SAN vendor INRANGE which was acquired by CNT. However I left to become an industry analyst prior to the acquisition by McData and well before Brocade bought McData. Brocade is not a current client; however I have done speaking events pertaining to general industry trends and perspectives at various Brocade customer events for them in the past.

    Is Brocade for sale?

    Last week a Wall Street Journal article mentioned Brocade (BRCD) might be for sale.

    BRCD has a diverse product portfolio for Fibre Channel, Ethernet along with the emerging Fibre Channel over Ethernet (FCoE) market and a who’s who of OEM and channel partners. Why not be for sale? The timing is good for investors, and CEO Mike Klayko and his team have arguably done a good job of shifting and evolving the company.

    Generally speaking, let’s keep in perspective that everything is always for sale, and in an economy like now, bargains are everywhere. Many businesses are shopping; it’s just a matter of how visible the shopping is for a seller or buyer, along with motivations and objectives including shareholder value.

    Consequently, the coconut wires are abuzz with talk and speculation of who will buy Brocade or perhaps who Brocade might buy among other Merger and Acquisition (M and A) activity of who will buy who. For example, who might buy BRCD, why not EMC (they sold McData off years ago via IPO), or IBM (they sold some of their networking business to Cisco years ago) or HP (currently an OEM partner of BRCD) as possible buyers?

    Last week I posted on twitter a response to a comment about who would want to buy Brocade with a response to the effect of why not a Huawei to which there was some silence except for industry luminary Steve Duplessie (have a look to see what Steve had to say).

    Part of being an analyst IMHO should be to actually analyze things vs. simply reporting on what others want you to report or what you have read or heard elsewhere. This also means talking about scenarios that are out of the box, or in adjacent boxes from some perspectives, or that might not be in line with traditional thinking. Sometimes this means breaking away and thinking and saying what may not be obvious or practical. Having said that, let’s take a step back for a moment as to why Brocade may or may not be for sale and who might or may not be interested in them.

    IMHO, it has a lot to do with Cisco, and not just because Brocade sees no opportunity to continue competing with the 800 lb gorilla of LAN/MAN networking that has moved into Brocade’s stronghold of storage network SANs. Cisco is upsetting the table or apple cart with its server partners IBM, Dell, HP, Oracle/Sun and others by testing the waters of the server world with their UCS. So far I see this as something akin to a threat testing the defenses of a target before launching a full-out attack.

    In other words, checking to see how the opposition responds, what defenses are put up, and collecting G2 or intelligence, as well as seeing how the rest of the world or industry might respond to an all-out assault or shift of power or control. Of course, HP, IBM, Dell and Sun/Oracle will not let this move into their revenue and account control go unnoticed, with initial counter announcements having been made, some re-emphasizing their relationship with Brocade along with Brocade’s recent acquisition of Ethernet/IP vendor Foundry.

    Now what does this have to do with Brocade potentially being sold and why the title involving Huawei?

    Many of the recent industry acquisitions have been focused on shoring up technology or intellectual property (IP), eliminating a competitor or simply taking advantage of market conditions. For example, Data Domain was sold to EMC in a bidding war with NetApp, HP bought IBRIX, Oracle bought or is trying to buy Sun, Oracle also bought Virtual Iron, Dell bought Perot after HP bought EDS a year or so ago, while Xerox bought ACS, and so the M and A game continues among other deals.

    Some of the deals are strategic, many being tactical. Brocade being bought I would put in the category of a strategic scenario, a bargaining chip or even a pawn if you prefer in a much bigger game that is about more than switches, directors, HBAs, LANs, SANs, MANs, WANs, POTS and PANs (check out my book “Resilient Storage Networks” (Elsevier))!

    So with conversations focused around Cisco expanding into servers to control the data center discussion, mindset, thinking, budgets and decision making, why wouldn’t an HP, IBM or Dell, let alone a NetApp, Oracle/Sun or even EMC, want to buy Brocade as a bargaining chip in a bigger game? Why not a Ciena (they just bought some of Nortel’s assets), Juniper or 3Com (more of a merger of equals to fight Cisco), Microsoft (might upset their partner Cisco) or Fujitsu (their Telco group, that is) among others?

    Then why not Huawei, a company some may have heard of and others may not have.

    Who is Huawei you might ask?

    Simple, they are a very large IT solutions provider who is also a large player in China, with global operations including R&D in North America and many partnerships with U.S. vendors. By rough comparison (all figures USD), Cisco’s most recently reported annual revenues are about $36.1B, BRCD about $1.5B, Juniper about $3.5B, 3Com about $1.3B and Huawei about $23B with a year-over-year sales increase of 45%. Huawei has previous partnerships with storage vendors including Symantec and FalconStor among others. Huawei also has had a partnership with 3Com (H3C), a company that was the first of the LAN vendors to get into SANs (prematurely), beating Cisco easily by several years.

    Sure there would be many hurdles and issues, similar to the ones CNT and INRANGE had to overcome, or McData and CNT, or Brocade and McData among others. However, in the much bigger game of IT account and thus budget control played by HP, IBM, and Sun/Oracle among others, wouldn’t maintaining a dual source for customers’ networking needs make sense, or at least serve as a check to Cisco’s expansion efforts? If nothing else, it maintains the status quo in the industry for now; or, if the rules and game are changing, wouldn’t some of the bigger vendors want to get closer to the markets where Huawei is seeing rapid growth?

    Does this mean that Brocade could be bought? Sure.
    Does this mean Brocade cannot compete or is a sign of defeat? I don’t think so.
    Does this mean that Brocade could end up buying or merging with someone else? Sure, why not.
    Or, is it possible that someone like Huawei could end up buying Brocade? Why not!

    Now, if Huawei were to buy Brocade, that begs the fun question: could they be renamed or spun off as a division called HuaweiCade or HuaCadeWei? Anything is possible when you look outside the box.

    Nuff said for now, food for thought.

    Cheers – gs

    Greg Schulz – StorageIO, Author “The Green and Virtual Data Center” (CRC)

    I/O, I/O, Its off to Virtual Work and VMworld I Go (or went)

    Ok, so I should have used that intro last week before heading off to VMworld in San Francisco instead of after the fact.

    Think of it as a high latency title or intro, kind of like attaching a fast SSD to a slow, high latency storage controller, or a fast server attached to a slow network, or fast network with slow storage and servers, it is what it is.

    I/O virtualization (IOV) and Virtual I/O (VIO), along with I/O and networking convergence, have been getting more and more attention lately, particularly on the convergence front. In fact, one might conclude that it is suddenly trendy to be on the IOV, VIO and convergence bandwagon given how cloud, SOA and SaaS hype are being challenged, perhaps even turning to storm clouds?

    Let’s get back on track, or in the case of the past week, get back in the car, get back in the plane, get back into the virtual office, and cover what it all has to do with Virtual I/O and VMworld.

    The convergence game has at its center Brocade emanating from the data center and storage centric I/O corner challenging Cisco hailing from the MAN, WAN, LAN general networking corner.

    Granted, both vendors have dabbled with success in each other’s corners or areas of focus in the past. For example, Brocade has, via acquisitions (McData+Nishan+CNT+INRANGE among others), a diverse and capable stable of local and long distance SAN connectivity and channel extension for mainframe and open systems supporting data replication, remote tape and wide area clustering. Not to mention deep bench experience with the technologies, protocols and partner solutions for LAN, MAN (xWDM), WAN (iFCP, FCIP, etc.) and even FAN (file area networking, aka NAS), along with iSCSI in addition to Fibre Channel and FICON solutions.

    Disclosure: Here’s another plug ;) Learn more about SANs, LANs, MANs, WANs, POTs, PANs and related technologies and techniques in my book “Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures” (Elsevier).

    Cisco, not to be outdone, has a background in the LAN, MAN, WAN space directly, or, similar to Brocade, via partnerships, with product experience and depth. In fact, while many of my former INRANGE and CNT associates ended up at Brocade via McData or indirectly, some ended up at Cisco. While Cisco is known for general networking, over the past several years they have gone from zero to being successful in the Fibre Channel and yes, even the FICON mainframe space, while like Brocade (HBAs) dabbling in other areas like servers and storage, not to mention consumer products.

    What does this have to do with IOV and VIO, let alone VMworld and my virtual office? Hang on, hold that thought for a moment; let’s get the convergence aspect out of the way first.

    On the I/O and networking convergence (e.g. Fibre Channel over Ethernet – FCoE) scene, both Brocade (Converged Enhanced Ethernet – CEE) and Cisco (Data Center Ethernet – DCE) along with their partners are rallying around each other’s camps. This is similar to how a pair of prize fighters maneuver in advance of a match, including plenty of trash talk, hype and all that goes with it. Brocade and Cisco throwing mud balls (or spam) at each other, or having someone else do it, is nothing new; however, in the past each has had its core areas of focus, coming from different tenets and in some cases selling to different people in an IT environment or in VAR and partner organizations. Brocade and Cisco are not alone, nor is the I/O networking convergence game the only one in play, as it is being complemented by the IOV and VIO technologies addressing different value propositions in IT data centers.

    Now on to the IOV and VIO aspect along with VMworld.

    For those of you that attended VMworld and managed to get outside of session rooms, or media/analyst briefing or reeducation rooms, or out of partner and advisory board meetings to walk the expo hall show floor, there was the usual sea of vendors and technology. There were the servers (physical and virtual), storage (physical and virtual), terminals, displays and other hardware, I/O and networking, data protection, security, cloud and managed services, development and visualization tools, infrastructure resource management (IRM) software tools, manufacturers and VARs, consulting firms and even some analysts with booths selling their wares among others.

    Likewise, in the onsite physical data center to support the virtual environment, there were servers, storage, networking, cabling and associated hardware along with applicable software and tucked away in all of that, there were also some converged I/O and networking, and, IOV technologies.

    Yes, IOV, VIO and I/O networking convergence were at VMworld in force, just ask Jon Torr of Xsigo who was beaming like a proud papa wanting to tell anyone who would listen that his wares were part of the VMworld data center (Disclosure: Thanks for the T-Shirt).

    Virtensys had their wares on display, with Bob Nappa more than happy to show the technology beyond a GUI demo, including how their solution includes disk drives and an LSI MegaRAID adapter to support VM boot while leveraging off-the-shelf or existing PCIe adapters (SAS, FC, FCoE, Ethernet, SATA, etc.) and allowing adapter sharing across servers. Not to mention, they won the best new technology award at VMworld.

    NextIO who is involved in the IOV / VIO game was there along with convergence vendors Brocade, Cisco, Qlogic and Emulex among others. Rest assured, there are many other vendors and VARs in the VIO and IOV game either still in stealth, semi-stealth or having recently launched.

    IOV and VIO, as seen in solutions from Aprius, Virtensys, Xsigo, NextIO and others, are complementary to I/O and networking convergence. While they sound similar, there is in fact confusion as to whether Fibre Channel N_Port ID Virtualization (NPIV) and VMware virtual adapters are IOV and VIO, vs. solutions that are focused on PCIe device/resource extension and sharing.

    Another point of confusion around I/O virtualization and virtual I/O is blade system or blade center connectivity solutions such as HP Virtual Connect or IBM Fabric Manager, not to mention those from Egenera. Some of the buzzwords that you will be hearing and reading more about include PCIe Single Root IOV (SR-IOV) and Multi-Root IOV (MR-IOV). Think of it this way: within VMware you have virtual adapters, and Fibre Channel virtualized N_Port IDs for LUN mapping/masking, zone management and other tasks.

    IOV enables localized sharing of physical adapters across different physical servers (blades or chassis) with distances measured in a few meters; after all, it’s the PCIe bus that is being extended. Thus, it is not a replacement for longer distance in-the-data-center solutions such as FCoE or even SAS for that matter; rather they are complementary, or at least should be considered complementary.

    The following are some links to previous articles and related material, including an excerpt (yes, another plug ;)) from chapter 9 “Networking with your servers and storage” of my new book “The Green and Virtual Data Center” (CRC). Speaking of virtual and physical, “The Green and Virtual Data Center” (CRC) was on sale at the physical VMworld book store this week, as well as at virtual book stores including Amazon.com

    The Green and Virtual Data Center

    The Green and Virtual Data Center (CRC) on book shelves at VMworld Book Store

    Links to some IOV, VIO and I/O networking convergence pieces among others, as well as news coverage, comments and interviews can be found here and here with StorageIOblog posts that may be of interest found here and here.

    SearchSystemChannel: Comparing I/O virtualization and virtual I/O benefits – August 2009

    Enterprise Storage Forum: I/O, I/O, It’s Off to Virtual Work We Go – December 2007

    Byte and Switch: I/O, I/O, It’s Off to Virtual Work We Go (Book Chapter Excerpt) – April 2009

    Thus I went to VMworld in San Francisco this past week as much of the work I do is involved with convergence similar to my background, that is, servers, storage, I/O networking, hardware, software, virtualization, data protection, performance and capacity planning.

    As to the virtual work, well, I spent some time on airplanes this week, which as is often the case served as my virtual office. Granted, it was real work that had to be done; however, I also had a chance to meet up with some fellow tweeters at a tweetup Tuesday evening before getting back in a plane, my virtual office.

    Now, I/O, I/O, it’s back to real work I go at Server and StorageIO; kind of rhymes, doesn’t it!

    Will 6Gb SAS kill Fibre Channel?

    Storage I/O trends

    With the advent of 6Gb SAS (Serial Attached SCSI), which doubles the speed from the earlier 3Gb along with other enhancements including longer cable distances up to 10m, does this mean that Fibre Channel will be threatened? Well, I’m sure some conspiracy theorists or iSCSI die-hards might jump up and down and say yes, finally, even though some of the FCoE cheering section has already arranged a funeral or wake for FC while Converged Enhanced Ethernet based Fibre Channel over Ethernet (FCoE) and its ecosystem are still evolving.

    Needless to say, SAS will be in your future. It may not be as a host server to storage system interconnect; however, look for SAS high performance drives to appear sometime in the not so distant future. While over time Fibre Channel based high performance disk drives can be expected to give way to SAS based disks, similar to how parallel SCSI or even IBM SSA drives gave way to FC disks, SAS as a server to storage system interconnect will, at least for the foreseeable future, be more for smaller configurations: direct connect storage for blade centers, two server clusters, and extremely cost sensitive environments that do not need or cannot afford a more expensive iSCSI or NAS, let alone an FC or FCoE based solution.
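    To put the 6Gb number in perspective, a quick back-of-the-envelope calculation: SAS at these speeds, like Fibre Channel, uses 8b/10b encoding, so 10 bits on the wire carry 8 bits of data. That makes a 6 Gb/s lane good for roughly 600 MB/s of data in each direction, and a 4-lane wide port about 2,400 MB/s. A minimal sketch:

```python
def sas_effective_mb_per_s(line_rate_gbps: float, lanes: int = 1) -> float:
    """Effective one-direction data throughput for a SAS link.

    SAS uses 8b/10b encoding at these speeds, so only 8 of every
    10 line bits are payload data.
    """
    data_bits_per_s = line_rate_gbps * 1e9 * 8 / 10  # strip encoding overhead
    return data_bits_per_s / 8 / 1e6 * lanes         # bits -> bytes -> MB/s

# 6Gb SAS: 600 MB/s per lane; a 4-lane wide port: 2400 MB/s
```

By the same arithmetic, 3Gb SAS works out to 300 MB/s per lane, which is why the 6Gb generation is described as a doubling.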

    So while larger storage systems over time can be expected to support high performance 3.5″ and 2.5″ SAS disks to replace FC disks, those systems will be accessed via FCoE, FC, iSCSI or NAS while mid-range and entry-level systems as they do today will see a mix of SAS, iSCSI, FC, NAS and in the future, some FCoE as well not to mention some InfiniBand based NAS or SRP for block access.

    From a server I/O and I/O virtualization (IOV) standpoint, keep an eye on what’s taking place with the PCI SIG around Single Root IOV and Multi-Root IOV.

    Ok, nuff said.

    Cheers gs


    Links to Upcoming and Recent Webcasts and Videocasts

    Here are links to several recent and upcoming webcasts and videocasts covering a wide range of topics. Some of these free webcasts and videocasts may require registration.

    Industry Trends & Perspectives – Data Protection for Virtual Server Environments

    Next Generation Data Centers Today: What’s New with Storage and Networking

    Hot Storage Trends for 2008

    Expanding your Channel Business with Performance and Capacity Planning

    Top Ten I/O Strategies for the Green and Virtual Data Center

    Cheers
    Greg Schulz – StorageIO