EMC VFCache respinning SSD and intelligent caching (Part I)

This is the first part of a two-part series covering EMC VFCache; you can read the second part here.

EMC formally announced VFCache (aka Project Lightning), an IO accelerator product that combines a PCIe nand flash card (aka Solid State Device or SSD) with intelligent cache management software. In addition, EMC is also talking about the next phase of its flash business unit and project Thunder. The approach EMC is taking with VFCache should not be a surprise given their history of starting out with memory and SSD and evolving it into intelligent cache optimized storage solutions.

Storage IO performance and capacity gap
Data center and storage IO performance capacity gap (Courtesy of Cloud and Virtual Data Storage Networking (CRC Press))

Could we see the future of where EMC will take VFCache, along with other possible solutions already being hinted at by the EMC flash business unit, by looking at where they have been already?

Likewise, by looking at the past, can we see how VFCache and sibling product solutions could evolve?

After all, EMC is no stranger to caching, with both nand flash SSD (e.g. FLASH CACHE, FAST and SSD drives) and DRAM-based caches across their product portfolio, not to mention that memory was a core part of the company's founding products, which evolved into HDDs and, more recently, nand flash SSDs among others.

Industry trends and perspectives

Unlike others who also offer PCIe SSD cards, such as FusionIO with its focus on eliminating SANs or other storage (read their marketing), EMC not surprisingly is marching to a different beat. The beat EMC is marching to, or perhaps leading by example for others to follow, is that of going mainstream and using PCIe SSD cards as a cache to complement their own as well as other vendors' storage systems vs. replacing them. This is similar to what EMC and other mainstream storage vendors have done in the past, such as with SSD drives being used as a flash cache extension on CLARiiON or VNX based systems as well as a target or storage tier.

Various options and locations for SSD along with different usage scenarios
Various SSD locations, types, packaging and usage scenario options

Other vendors including IBM, NetApp and Oracle among others have also leveraged various packaging options of Single Level Cell (SLC) or Multi Level Cell (MLC) flash as caches in the past. A different example of SSD being used as a cache is the Seagate Momentus XT, a desktop, workstation and consumer type device. Seagate has shipped over a million Momentus XT drives, which use SLC flash as a cache to complement and enhance the performance of the integrated HDD (a 750GB model with 8GB of SLC flash is in the laptop I'm using to type this).

One of the premises behind caching solutions such as those mentioned above is the changing data access patterns and life cycles shown in the figure below.

Changing data access patterns and lifecycles
Evolving data access patterns and life cycles (more retention and reads)

Put a different way, instead of focusing on just big data, corner cases (granted, some of those are quite large) or ultra large cloud scale out solutions, EMC with VFCache is also addressing their core business, which includes little data. What will be interesting to watch and listen to is how some vendors will start jumping up and down saying that they have been doing or enabling what EMC is announcing for some time. In some cases those vendors will rightfully be making noise about something they should have made noise about before.

EMC is bringing the SSD message to the mainstream business and storage marketplace, showing how it is a complement to, vs. a replacement of, existing storage systems. By doing so, they will show how to spread the cost of SSD out across a larger storage capacity footprint, boosting the effectiveness and productivity of those systems. This means that customers who install the VFCache product can accelerate the performance of both their existing EMC and other vendors' storage systems, preserving their technology along with their people skills investment.

 

Key points of VFCache

  • Combines PCIe SLC nand flash card (300GB) with intelligent caching management software driver for use in virtualized and traditional servers

  • Making SSD complementary to existing installed block-based disk (and/or SSD) storage systems to increase their effectiveness

  • Providing investment protection while boosting productivity of existing EMC and third party storage in customer sites

  • Brings caching closer to the application where the data is accessed while leveraging larger scale direct attached and SAN block storage

  • Focusing the SSD message back on little data as well as big data for mainstream, broad customer adoption scenarios

  • Leveraging the benefits and strengths of SSD as a read cache and the scalability of underlying downstream disk for data storage

  • Reducing concerns around SSD endurance or duty cycle wear and tear by using it as a read cache

  • Off-loads underlying storage systems from some read requests, enabling them to do more work for other servers
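The read-cache behavior in the bullets above can be sketched in a few lines of code. This is purely an illustrative model, not EMC's implementation; the class and names are hypothetical. It shows a small write-through LRU read cache sitting in front of a slower backing store: hot reads are absorbed by the cache, while writes always go downstream, which is why a read cache sidesteps most flash wear concerns.

```python
from collections import OrderedDict

class ReadCache:
    """Write-through read cache: reads are served from 'flash' when
    possible; writes always go to the backing store."""

    def __init__(self, backing_store, capacity=4):
        self.backing = backing_store   # e.g. a dict standing in for a SAN LUN
        self.capacity = capacity       # number of blocks the "flash card" holds
        self.cache = OrderedDict()     # LRU order: oldest entry first
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)    # refresh LRU position
            self.hits += 1
            return self.cache[block]
        self.misses += 1                     # go downstream to the array
        data = self.backing[block]
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used
        return data

    def write(self, block, data):
        self.backing[block] = data           # write-through to the array
        if block in self.cache:
            self.cache[block] = data         # keep the cache coherent

# Hot blocks get absorbed by the cache, off-loading the array:
lun = {n: f"data{n}" for n in range(100)}
c = ReadCache(lun, capacity=4)
for _ in range(10):
    c.read(7)            # 1 miss, then 9 hits served from "flash"
print(c.hits, c.misses)  # → 9 1
```

Because every write lands on the backing store, a cache failure loses no data, and the flash absorbs mostly reads rather than write cycles.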

Additional related material:
Part II: EMC VFCache respinning SSD and intelligent caching
IT and storage economics 101, supply and demand
2012 industry trends perspectives and commentary (predictions)
Speaking of speeding up business with SSD storage
New Seagate Momentus XT Hybrid drive (SSD and HDD)
Are Hard Disk Drives (HDDs) getting too big?
Unified storage systems showdown: NetApp FAS vs. EMC VNX
Industry adoption vs. industry deployment, is there a difference?
Two companies on parallel tracks moving like trains offset by time: EMC and NetApp
Data Center I/O Bottlenecks Performance Issues and Impacts
From bits to bytes: Decoding Encoding
Who is responsible for vendor lockin
EMC VPLEX: Virtual Storage Redefined or Respun?
EMC interoperability support matrix

Ok, nuff said for now, I think I see some storm clouds rolling in

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

IT and technology turkeys

Now that Halloween and talk of zombies has passed (at least for now), next up on the social or holiday calendar in the U.S. is Thanksgiving, which means turkey themes.

With turkey themes in mind, how about a look at some past, current and maybe future technology flops, or a where-are-they-now?

A technology turkey can be a product, trend, technique or theme that was touted (or hyped) and then flopped for various reasons, failing to fly up to, or meet, its expectations. That means a technology turkey may have had industry adoption however lacked customer deployment.

Let's try a few: how about holographic storage, or is that still a future technology?

Were the NeXT computer and the Apple Newton turkeys?

Disclosure: I have a Newton that has not been used since the mid-90s.

Is ATA over Ethernet (AoE) a future turkey candidate along with FCoE aka Fibre Channel over Ethernet (or here or here), or is that just some people's wishful thinking regarding FCoE being a turkey?

Speaking of AoE, whatever happened to Zetera (aka Hammer storage), the iSCSI alternative of a few years ago?

To be fair, how about IPFC, not to be confused with FCIP (Fibre Channel frames mapped to IP for distance), iFCP, FCoE or iSCSI. IPFC mapped IP as an upper level protocol (ULP) onto Fibre Channel, coexisting with FCP and FICON. There were only a few adopters of IPFC, who used it as a low latency channel to channel (CTC) mechanism for open systems before InfiniBand and other technologies matured.

I'm guessing that someone will step up to defend the honor of Microsoft Windows Vista; however, until then, IMHO it is or was a turkey. While on the topic of operating systems, anyone have an opinion on IBM's OS/2? Speaking of PCs, how about the DEC Rainbow and its sibling the Robin? Remember when IBM was in the PC business before selling it off to Lenovo; how about the IBM PCjr, turkey candidate or not?

HP should be on the turkey list with their now ex-CEO Leo Apotheker, whom they put out to pasture. On the technology front, anybody remember AutoRAID?

How about the Britton Lee database machine, which today would be referred to as a storage appliance or application optimized storage system, such as the Oracle Exadata II (or Exadata I based on HP hardware) among others. Note that I'm not saying Exadata I or Exadata II are turkeys, as that will be left to your own determination. Both are cool from a technology standpoint; however, there is more to moving from announcement to industry adoption to customer deployment than having neat or interesting technology, something Oracle has been having some success with.

Speaking of Oracle, remember when Sun bought the Encore storage system and renamed it the A7000 (not to be confused with the A5000, aka Photon) in an attempt to compete against the EMC Symmetrix? The Encore folks went on after Sun to their next project, known today as DataCore. Meanwhile Sun discontinued the A7000 after a period of time, similar to what they did with other acquisitions such as Pirus, which became the 6920 that was end-of-lifed as part of a deal where Sun increased their resell activity of HDS, which too has since been archived. Hmmm, that begs the question of what happens with Oracle acquiring Pillar under an earn-out scheme: if there is revenue there is a payout; if there is no revenue, there is a tax write-off.

What about big data: will it become a turkey, following in the footsteps of other former high flyers such as cloud, virtualization, data classification, CDP, Green IT and SOA among many others? IMHO that depends on your view, definition and expectations of big data as a buzzword bingo topic. Your view will determine whether the above join others that fade away from the limelight, shifting into productive modes for customers and profitable activity for vendors.

Want to read what others have to say about technology turkeys or flops?

Here is what IBTimes has to say about technology flops (aka turkeys), with InfoWorld's lineup here and Computerworld's list here. Meanwhile, a couple from Mashable here and here, CNET weighs in here, another list over at InvestorPlace is found here, and check out the list at Money here, with the Telegraph represented here. Of course you could Google to find more; however, you would probably also stumble upon Google's own flops or technology turkeys, including Wave.

What is your take as to other technology turkeys past, present or future?

Ok, nuff said for now

Cheers gs


IBM's Storwize or wise Storage, the V7000 and DFR

A few months ago IBM bought a Data Footprint Reduction (DFR) technology company called Storwize (read more about DFR and Storwize Real time Compression here, here, here, here and here).

A couple of weeks ago IBM renamed the Storwize real time compression technology to, surprise surprise, IBM Real time Compression (wow, wonder how lively that marketing focus group research study discussion was).

Subsequently IBM recycled the Storwize name in time to be used for the V7000 launch.

Now to be clear right up front, currently the V7000 does not include real time compression capabilities, however I would look for that and other forms of DFR techniques to appear on an increasing basis in IBM products in the future.

IBM has a diverse storage portfolio of good products, some with longer legs than others to compete in the market. By long legs, I mean both the technology and the marketability that enable their direct sales teams as well as partners, including distributors and VARs, to effectively compete with other vendors' offerings.

The enablement capability of the V7000 is to give IBM and their business partners a product that they will want to go tell and sell to customers, competing with Cisco, Dell, EMC, Fujitsu, HDS, HP, NEC, NetApp and Oracle among others.

What about XIV?

For those interested in XIV, regardless of whether you are a fan, naysayer or simply an observer, here, here and here are some related posts to view (as well as comment on).

Back to the V7000

A couple of common themes about the IBM V7000 are:

  • It appears to be a good product based on the SVC platform with many enhancements
  • Expanding the industry scope and focus awareness around Data Footprint Reduction (DFR)
  • Branding the Storwize acquisition as real-time compression as part of their DFR portfolio
  • Confusion about using the Storwize name for a storage virtualization solution
  • Lack of Data Footprint Reduction (DFR) particularly real-time compression (aka Storwize)
  • Yet another IBM storage product adding to confusion around product positioning

Common questions that I'm being asked about the IBM V7000 include, among others:

  • Is the V7000 based on LSI, NetApp or other third party OEM technology?

    No, it is based on the IBM SVC code base along with an XIV like GUI and features from other IBM products.

  • Is the V7000 based on XIV?

    No, as mentioned above, the V7000 is based on the IBM SVC code base along with an XIV like GUI and features from other IBM products.

  • Does the V7000 have DFR such as dedupe or compression?

    No, not at this time other than what was previously available with the SVC.

  • Does this mean there will be a change or defocusing on or of other IBM storage products?

    IMHO I do not think so, other than perhaps around XIV. If anything, I would expect IBM to start pushing the V7000 as well as the entire storage product portfolio more aggressively. Now there could be some defocusing on XIV or, put a different way, a putting of all products on the same equal footing, letting the customer determine what they want based on effective solution selling from IBM and their business partners.

  • What does this mean for XIV is that product no longer the featured or marquee product?

    IMHO XIV remains relevant for the time being. However, I also expect it to be put on equal footing with other IBM products or, if you prefer, for other IBM products, particularly the V7000, to be unleashed to compete with other external vendors' solutions such as those from Cisco, Dell, EMC, Fujitsu, HDS, HP, NEC, NetApp and Oracle among others. Read more here, here and here about XIV remaining relevant.

  • Why would I not just buy an SVC and add storage to it?

    That is an option and a strength of the SVC: sitting in front of different IBM storage products as well as those of third party competitors. However, with the V7000, customers now have a turnkey storage solution rather than a virtualization appliance.

  • Is this a reaction to EMC VPLEX, HDS VSP, HP SVSP or 3PAR, Oracle/Sun 7000?

    Perhaps, or perhaps it is a reaction to XIV, or perhaps a realization that IBM has a lot of IP that could be combined into a solution responding to a market need, among many other scenarios. However, IBM has had a virtualization platform with a decent installed base in the form of the SVC, which happens to be at the heart of the V7000.

  • Does this mean IBM is jumping on the bandwagon of using off the shelf servers instead of purpose built hardware for storage systems, like Oracle, HP and others are doing?

    If you are new to storage or IBM it might appear that way; however, IBM has been shipping storage systems based on general purpose servers for a couple of decades now. Granted, some of those products are based on the IBM PowerPC (e.g. Power platform) also used in their pSeries, formerly known as the RS/6000. For example, the DS8000 series, like its predecessors the ESS (aka Shark) and the VSS before that, has been based on the Power platform. Likewise, SVC has been based on general purpose processors since its inception.

    Likewise, while generally deployed only in two node pairs, the DS8000 is architected to scale into many more nodes than what has shipped, meaning that IBM has had clustered storage for some time; granted, some of their competitors will dispute that.

  • How does the V7000 stack up from a performance standpoint?

    Interestingly, IBM has traditionally been very good, if not out front, at running public benchmarks and workload simulations ranging from SPC to TPC to SPEC to Microsoft ESRP among others for all of their storage systems except one (e.g. XIV). True to traditional IBM systems and storage practices, just a couple of weeks after the V7000 launch, IBM released the first wave of performance comparisons, including SPC results for the V7000, which can be seen here to compare with others.

  • What do I think of the V7000?

    Like other products, both in the IBM storage portfolio and from other vendors, the V7000 has its place, and in that place, which needs to be further articulated by IBM, it has a bright future. I think that for many environments, particularly those that were looking at XIV, the V7000 will be a good IBM based solution as well as a competitor to solutions from Dell, EMC, HDS, HP, NetApp and Oracle as well as some smaller startup providers.

Comments, thoughts and perspectives:

IBM is part of a growing industry trend realizing that the data footprint reduction (DFR) focus should expand in scope beyond backup and dedupe to span an entire organization using many different tools, techniques and best practices. These include archiving of databases, email and file systems for both compliance and non-compliance purposes; backup/restore modernization or redesign; compression (real-time for online, as well as post processing); consolidation of storage capacity and performance (e.g. fast 15K SAS, caching or SSD); data management (including some data deletion where practical); data dedupe; space saving snapshots such as copy on write or redirect on write; thin provisioning; as well as virtualization for both consolidation and enabling agility.
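To make the dedupe and compression pieces of DFR concrete, here is a minimal sketch (illustrative only; the function names are made up, and real products chunk and index far more cleverly) of content-hash based deduplication combined with per-chunk compression:

```python
import hashlib
import zlib

def dedupe_store(chunks):
    """Store only unique chunks, keyed by content hash, compressing each
    unique chunk; return the chunk store plus a recipe (list of hashes)
    that can rebuild the original stream."""
    store, recipe = {}, []
    for chunk in chunks:
        key = hashlib.sha256(chunk).hexdigest()
        if key not in store:                  # duplicate chunks cost nothing extra
            store[key] = zlib.compress(chunk)
        recipe.append(key)
    return store, recipe

def rebuild(store, recipe):
    """Reassemble the original byte stream from the recipe."""
    return b"".join(zlib.decompress(store[k]) for k in recipe)

# Ten logical chunks, but only two unique patterns:
data = [b"A" * 4096, b"B" * 4096] * 5
store, recipe = dedupe_store(data)
assert rebuild(store, recipe) == b"".join(data)
print(len(recipe), "logical chunks,", len(store), "unique stored")
# → 10 logical chunks, 2 unique stored
```

The point is that dedupe and compression attack redundancy at different granularities, which is why a broad DFR strategy uses both alongside archiving, snapshots and thin provisioning.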

IBM has some great products; however, too often with such a diverse product portfolio, better navigation and messaging of what to use when, where and why is needed, not to mention clearing up the confusion over the current product du jour.

As has been the case for the past couple of years, let's see how this all plays out a year or so from now. Meanwhile, cast your vote or see the results of others as to whether XIV remains relevant. Likewise, join in on the new poll below as to whether the V7000 is now relevant or not.

Note: As with the ongoing is XIV relevant polling (above), for the new is the V7000 relevant polling (below) you are free to vote early, vote often, vote for those who cannot or that care not to vote.

Here are some links to read more about this and related topics:

Ok, nuff said.

Cheers gs


Revisiting whether IBM XIV is still relevant with the V7000

Over the past couple of years I have routinely been asked what I think of XIV by fans as well as foes, in addition to many curious or neutral onlookers including XIV competitors, other analysts, media, bloggers, consultants as well as IBM customers, prospects, VARs and business partners. Consequently I have done some blog posts about my thoughts and perspectives.

It's time again for what has turned out to be the third annual perspective on IBM XIV and whether it is still relevant as a result of the recent IBM V7000 (excuse me, I meant to say IBM Storwize V7000) storage system launch.

For those wanting to take a step back in time, here is an initial thought perspective about IBM and XIV storage from 2008, as well as the 2009 revisiting of XIV relevance post and the latest V7000 companion post found here.

What is the IBM V7000?

Here is a link to a companion post pertaining to the IBM V7000 that you will want to have a look at.

In a nutshell, the V7000 is a new storage system with built in storage virtualization (or virtual storage if you prefer) that leverages IBM developed software from its SAN Volume Controller (SVC), DS8000 enterprise system and others.

Unlike the SVC, which is a gateway or appliance head that virtualizes various IBM and third party storage systems, providing data movement, migration, copy, replication, snapshot and other agility or abstraction capabilities, the V7000 is a turnkey integrated solution.

By being a turnkey solution, the V7000 combines the functionality of the SVC as a basis for adding other IBM technologies including a GUI management tool similar to that found on XIV along with dedicated attached storage (e.g. SAS disk drives including fast, high capacity as well as SSD).
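The storage virtualization idea underneath the SVC and V7000, presenting one virtual volume whose extents may live on different backing systems, can be sketched as a toy model. The names and extent size here are hypothetical for illustration, not IBM's actual design:

```python
EXTENT = 4  # blocks per extent; tiny on purpose for illustration

class VirtualVolume:
    """Maps a contiguous virtual block address range onto extents that
    may live on different backing arrays, so data can be placed (or
    later moved) without the host seeing anything but one volume."""

    def __init__(self, extent_map):
        # virtual extent number -> (backing array, extent number on that array)
        self.extent_map = extent_map

    def read(self, lba):
        array, phys_extent = self.extent_map[lba // EXTENT]
        return array[phys_extent * EXTENT + lba % EXTENT]

# Two backing "arrays", standing in for systems from different vendors:
array_a = {n: f"A{n}" for n in range(16)}
array_b = {n: f"B{n}" for n in range(16)}

# One virtual volume spanning both:
vol = VirtualVolume({0: (array_a, 0), 1: (array_b, 2)})
print(vol.read(2))   # virtual extent 0 -> array_a block 2 → A2
print(vol.read(5))   # virtual extent 1 -> array_b block 9 → B9
```

Because the host only ever addresses the virtual volume, the mapping layer is free to place extents on fast or slow media, or on third party storage, which is exactly the abstraction the SVC provides as a gateway and the V7000 packages as a turnkey system.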

In other words, for those customers or prospects who liked XIV because of its management GUI interface, you may like the V7000.

For those who liked the functionality capabilities of the SVC however needed it to be a turnkey solution, you might like the V7000.

For those of you who did not like or competed with the SVC in the past, well, you know what to do.

BTW, for those who knew of Storwize, the Data Footprint Reduction (DFR) vendor with real time compression that IBM recently acquired and renamed IBM Real time Compression: the V7000 does not contain any real time compression (yet).

What are my thoughts and perspectives?

In addition to the comments in the companion post found here, right now I'm of the mindset that XIV does not fade away quietly into the sunset or take a timeout at the IBM technology rest and recuperation resort located on the beautiful someday isle.

The reason I think XIV will remain somewhat relevant for some time (time to be determined, of course) is that IBM has expended significant resources over the past two and a half years to promote it. Those resources have included marketing time and messaging space, in some instances perhaps inadvertently at the expense of other IBM storage solutions. Similarly, a lot of time, money and effort have gone into business partner outreach to establish and keep XIV relevant with those communities, who in turn have gone to tell and sell the XIV story to customers, some of whom have bought it.

Consequently or as a result of all of that investment, I would be surprised if IBM were simply to walk away from XIV at least near term.

What I do see happening, including some early indicators, is that the V7000 (along with other IBM products) will now be getting equal billing, resources and promotional support. Whether this means the XIV division finally being assimilated into the mainstream IBM fold and put on equal footing with other IBM products or, instead, other IBM products being brought up to the elevated position of XIV is subject to interpretation and your own perception.

I expect to continue to see IBM teams and subsequently their distributors, VARs and other business partners get more excited talking about the V7000 along with other IBM solutions. For example, SONAS for bulk, clustered and scale out NAS, the DS8000 for the high end, the GMAS and Information Archive platforms, as well as the N series and DS3K/DS4K/DS5K, not to mention the TS/TL backup and archive target platforms along with associated Tivoli software. Also, let's not forget about SVC among other IBM solutions, including of course XIV.

I would also not be surprised if some of the diehard XIV loyalists (e.g. sales and marketing reps who were faithful members of the army of Moshe Yanai, who appears to be MIA at IBM) pack up their bags and leave the IBM storage SANdbox in virtual protest. That is, refusing to be assimilated into the general IBM storage pool and thus leaving for greener IT pastures elsewhere. Some will stick around, discovering the opportunities associated with selling a broader, more diverse product portfolio into target accounts where they have spent time and resources to establish relationships or get their proverbial foot in the door.

Consequently, I think XIV remains somewhat relevant for now given all of the resources that IBM poured into it and relationships that their partner ecosystem also spent on establishing with the installed customer base.

However, I do think that the V7000, despite some confusion (here and here) around its recycled Storwize name, has some legs, being built around the field proven SVC and other IBM technology. Those legs are both technological and a means to get the entire IBM systems and storage group energized to go out and compete with their primary nemeses (e.g. Dell, EMC, HP, HDS, NetApp and Oracle among others).

As has been the case for the past couple of years, let's see how this all plays out a year or so from now. Meanwhile, cast your vote or see the results of others as to whether XIV remains relevant. Likewise, join in on the new poll below as to whether the V7000 is now relevant or not.

Note: As with the ongoing is XIV relevant polling (above), for the new is the V7000 relevant polling (below) you are free to vote early, vote often, vote for those who cannot or that care not to vote.

Here are some links to read more about this and related topics:

Ok, nuff said.

Cheers gs


Kudos to HP CEO Mark Hurd for dignity to step down from his post

Yesterday (Friday) late afternoon, HP announced (or read here) that their CEO Mark Hurd was resigning due to improprieties uncovered during an internal investigation.

HP is far from being alone in the corporate world involving investigations, lawsuits by governments or allegations of bribes and impropriety.

However what stands out is that of the CEO stepping down.

While not unique; after all, remember former CA CEO Sanjay Kumar, who was locked up, or former Brocade CEO Greg Reyes, now stepping into new government provided accommodations due to illegal activities, not to mention those from Enron among others. Granted, in those situations there were legal ramifications outside the companies, prompting the courts to get involved, something that for now does not appear to be the case at HP. However, having the courts involved in corporate activity is almost becoming a pattern of how business is done. For example, there is a who's who list (e.g. Cisco, Dell, EMC, IBM, Intel or Oracle among others) of IT companies involved in (or having recently settled) various government or financial dealing cases associated with bribes, kickbacks or other business improprieties, reminiscent of Rodney Dangerfield's character Thornton Melon explaining how business is conducted in the real world during Dr. Phillip Barbay's business class in Back to School.

Let's get back to and focus on the individual, that is, Mr Hurd, and what I think is something rare these days: a CEO or leader of a company or organization seriously taking responsibility for their actions, or those they are responsible for, instead of lip service and spin doctoring.

I do not know whether Mr Hurd decided on his own or it was suggested to him that he step down from his post. However, what I do know, simply based on the story that has been put out by HP, is that Mr Hurd either has taken, or is being portrayed as taking, the high road of stepping down. That is, as the head of the HP organization, he is taking responsibility for actions, not looking for special status or exceptions, and stepping down from his post instead of trying to sweep the dust or dirt under the rug. Thus, kudos to Mr Hurd for taking responsibility, not hiding, spinning or throwing someone else under the proverbial corporate politics bus to save his own hide. As the CEO of a major corporation, the buck stops with him, and he should not be above the law or the policies of his own organization that other employees would be expected to follow.

Too often today we hear stories of company, organization or government leaders getting or expecting special treatment, in some cases not taking full and complete responsibility for their actions other than for a photo opportunity.

On a different yet related note, perhaps my thinking will change as more comes out on the story, as well as the story behind the story; however, this is an interesting example of how crisis management can be handled. Sure, the story was released on a Friday afternoon, which is typically when bad news is put out, after the financial markets have closed. On the other hand, given that HP is a tech company and with web, blogs, Twitter, Facebook and other social media, the chatter was significant for a late Friday afternoon.

Let's see how this plays out and whether HP along with their PR crisis team played the right cards by getting the story out and having CEO Mark Hurd step down to avoid prolonging the situation, as well as how Wall Street reacts short term and over the long haul.

This leaves me with a closing thought: if politicians from all sides (or across both sides of the aisle or parties) did what HP CEO Mark Hurd did (resign) due to impropriety, we would have fewer elected officials. Thus I do not think Mr Hurd has a future in government politics, though not because of what he did that caused his stepping down at HP.

No, rather because, either on his own or under the advice of others, he decided not to seek special favor or a cover-up of what was done, and tried not to spin the story, thus saving both himself and his company (HP) over the long term.

Nuff said for now.

Cheers gs


EMC VPLEX: Virtual Storage Redefined or Respun?

In a flurry of announcements coinciding with EMC World, occurring in Boston the week of May 10, 2010, EMC officially unveiled its Virtual Storage vision initiative (aka twitter hash tag #emcvs) and the initial VPLEX product. The Virtual Storage initiative was virtually previewed back in March (see my previous post here along with one from Stu Miniman (twitter @stu) of EMC here or here), and according to EMC the VPLEX product was made generally available (GA) back in April.

The Virtual Storage vision and associated announcements consisted of:

  • Virtual Storage vision – Big picture initiative view of what and how to enable private clouds
  • VPLEX architecture – Big picture view of federated data storage management and access
  • First VPLEX based product – Local and campus (Metro to about 100km) solutions
  • Glimpses of how the architecture will evolve with future products and enhancements


Figure 1: EMC Virtual Storage and Virtual Server Vision and Big Pictures

The Big Picture
The EMC Virtual Storage vision (Figure 1) is the foundation of a private IT cloud, which should enable characteristics including transparency, agility, flexibility, efficiency, always on availability, resiliency, security, on demand access and scalability. Think of it this way: EMC wants to enable and facilitate for storage what is being done by server virtualization hypervisor vendors including VMware (which happens to be owned by EMC), Microsoft Hyper-V and Citrix/Xen among others. That is, break down the physical barriers or constraints around storage, similar to how virtual servers release applications and their operating systems from being tied to a physical server.

While the current wave of desktop, server and storage virtualization has been focused on consolidation and cost avoidance, the next big wave or phase is life beyond consolidation, where the emphasis expands to agility, flexibility, ease of use, transparency and portability (Figure 2). In this next phase, which puts an emphasis on enablement and doing more with what you have while enhancing business agility, the focus extends from how much can be consolidated, or the number of virtual machines per physical machine, to that of using virtualization for flexibility and transparency (read more here and here or watch here).


Figure 2: Virtual Storage Big Picture

That same trend will be happening with storage where the emphasis also expands from how much data can be squeezed or consolidated onto a given device to that of enabling flexibility and agility for load balancing, BC/DR, technology upgrades, maintenance and other routine Infrastructure Resource Management (IRM) tasks.

For EMC, achieving this vision (both directly for storage, and indirectly for servers via their VMware subsidiary) is via local and distributed (metro and wide area) federated management of physical resources to support virtual data center operations. EMC building blocks for delivering this vision include VPLEX, data and storage management federation across EMC and third party products, FAST (fully automated storage tiering), SSD, data footprint reduction and data protection management products among others.

Buzzword bingo aside (e.g. LAN, SAN, MAN, WAN, Pots and Pans) along with Automation, DWDM, Asynchronous, BC, BE or Back End, Cache coherency, Cache consistency, Chargeback, Cluster, db loss, DCB, Director, Distributed, DLM or Distributed Lock Management, DR, FCoE or Fibre Channel over Ethernet, FE or Front End, Federated, FAST, Fibre Channel, Grid, Hyper-V, Hypervisor, IRM or Infrastructure Resource Management, I/O redirection, I/O shipping, Latency, Look aside, Metadata, Metrics, Public/Private Cloud, Read ahead, Replication, SAS, Shipping off to Boston, SRA, SRM, SSD, Stale Reads, Storage virtualization, Synchronization, Synchronous, Tiering, Virtual storage, VMware and Write through among many other possible candidates, the big picture here is about enabling flexibility, agility, ease of deployment and management along with boosting resource usage effectiveness and presumably productivity on a local, metro and future global basis.


Figure 3: EMC Storage Federation and Enabling Technology Big Picture

The VPLEX Big Picture
Some of the tenets of the VPLEX architecture (Figure 3) include a scale-out cluster or grid design for local and distributed (metro and wide area) access, where you can start small and evolve as needed in a predictable and deterministic manner.


Figure 4: Generic Virtual Storage (Local SAN and MAN/WAN) and where VPLEX fits

The VPLEX architecture is targeted towards enabling next generation data centers, including private clouds, where ease and transparency of data movement, access and agility are essential. VPLEX sits atop existing EMC and third party storage as a virtualization layer between physical or virtual servers and, in theory, other storage systems that rely on underlying block storage. For example, in theory a NAS (NFS, CIFS and AFS) gateway, a CAS content archiving or object-based storage system, or a purpose-specific database machine could sit between actual application servers and VPLEX, enabling multiple layers of flexibility and agility for larger environments.

At the heart of the architecture is an engine running a highly distributed data caching algorithm that uses an approach where a minimal amount of data is sent to other nodes or members in the VPLEX environment to reduce overhead and latency (in theory boosting performance). For data consistency and integrity, a distributed cache coherency model is employed to protect against stale reads and writes along with load balancing, resource sharing and failover for high availability. A VPLEX environment consists of a federated management view across multiple VPLEX clusters including the ability to create a stretch volume that is accessible across multiple VPLEX clusters (Figure 5).


Figure 5: EMC VPLEX Big Picture


Figure 6: EMC VPLEX Local with 1 to 4 Engines

Each VPLEX Local cluster (Figure 6) is made up of 1 to 4 engines (Figure 7) per rack, with each engine consisting of two directors, each having 64GByte of cache, local Intel compute processors, and 16 Front End (FE) and 16 Back End (BE) Fibre Channel ports configured in a high availability (HA) pair. Communications between the directors and engines are Fibre Channel based. Metadata is moved between the directors and engines in 4K blocks to maintain consistency and coherency. Components are fully redundant and include phone home support.
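Based on the per-engine figures above, a quick back-of-the-envelope sketch shows how aggregate cache and port counts scale as a VPLEX Local cluster grows from one to four engines. This simply multiplies out the numbers cited above (two directors per engine, 64GB of cache and 16 FE plus 16 BE Fibre Channel ports per director); it is an illustration, not an EMC sizing tool.

```python
# Back-of-the-envelope VPLEX Local sizing from the figures cited above.
# Assumes: 2 directors/engine, 64 GB cache and 16 FE + 16 BE FC ports per director.
DIRECTORS_PER_ENGINE = 2
CACHE_GB_PER_DIRECTOR = 64
FE_PORTS_PER_DIRECTOR = 16
BE_PORTS_PER_DIRECTOR = 16

def cluster_resources(engines: int) -> dict:
    """Aggregate cache and port counts for a 1-4 engine VPLEX Local cluster."""
    if not 1 <= engines <= 4:
        raise ValueError("VPLEX Local supports 1 to 4 engines per rack")
    directors = engines * DIRECTORS_PER_ENGINE
    return {
        "directors": directors,
        "cache_gb": directors * CACHE_GB_PER_DIRECTOR,
        "fe_ports": directors * FE_PORTS_PER_DIRECTOR,
        "be_ports": directors * BE_PORTS_PER_DIRECTOR,
    }

for n in range(1, 5):
    print(n, cluster_resources(n))
```

So a fully configured four-engine rack works out to 8 directors, 512GB of aggregate cache and 128 FE plus 128 BE ports, per the stated per-director figures.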


Figure 7: EMC VPLEX Engine with redundant directors

Host servers initially supported by VPLEX include VMware, Cisco UCS, Windows, Solaris, IBM AIX, HPUX and Linux, along with EMC PowerPath and Windows multipath management drivers. Local server clusters supported include Symantec VCS, Microsoft MSCS and Oracle RAC, along with various volume managers. SAN fabric connectivity supported includes Brocade and Cisco as well as legacy McData based products.

VPLEX also supports cache (Figure 8) write-through to preserve underlying array based functionality and performance, with 8,000 total virtualized LUNs per system. Note that underlying LUNs can be aggregated or simply passed through the VPLEX. Storage that attaches to the BE Fibre Channel ports includes EMC Symmetrix VMAX and DMX along with CLARiiON CX and CX4. Third party storage supported includes HDS9000 and USPV/VM along with IBM DS8000 and others to be added as they are certified. In theory, given that VPLEX presents block based storage to hosts, one would also expect NAS, CAS or other object based gateways and servers that rely on underlying block storage to be supported in the future.
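To illustrate the write-through behavior just described: on a write, the data goes to the underlying array before being acknowledged, so the array keeps its native functionality and data protection, while the local cache is updated for subsequent reads. The following is a minimal sketch under assumed semantics, not EMC's actual implementation; the class and variable names are illustrative.

```python
class WriteThroughCache:
    """Minimal write-through cache sketch: every write is persisted to the
    backing store before acknowledgment; reads are served from cache when
    possible, otherwise passed through to the backing store."""

    def __init__(self, backing_store: dict):
        self.backing = backing_store   # stands in for the underlying array
        self.cache = {}

    def write(self, block: int, data: bytes) -> None:
        self.backing[block] = data     # persist to the array first (write-through)
        self.cache[block] = data       # then update the local cache

    def read(self, block: int) -> bytes:
        if block in self.cache:        # cache hit: no trip to the array
            return self.cache[block]
        data = self.backing[block]     # cache miss: pass through to the array
        self.cache[block] = data
        return data

array = {}                             # hypothetical underlying array
vplex = WriteThroughCache(array)
vplex.write(7, b"payload")
assert array[7] == b"payload"          # array already holds the data
assert vplex.read(7) == b"payload"     # subsequent read served from cache
```

The point of the design is that because every write lands on the array immediately, array-side features such as replication or tiering keep working unchanged underneath the virtualization layer.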


Figure 8: VPLEX Architecture and Distributed Cache Overview

Functionality that can be performed between the cluster nodes and engines with VPLEX includes data migration and workload movement across different physical storage systems or sites, along with shared access with read caching on a local and distributed basis. LUNs can also be pooled across different vendors' underlying storage solutions, which retain their native feature functionality via VPLEX write-through caching.

Reads from various servers can be resolved by any node or engine, which checks its cache tables (Figure 8) to determine where to resolve the actual I/O operation from. Data integrity checks are also maintained to prevent stale reads or write operations from occurring. Actual metadata communications between nodes are very small, enabling statefulness while reducing overhead and maximizing performance. When a change to cached data occurs, meta information is sent to other nodes to maintain the distributed cache management index schema. Note that only pointers to where data and fresh cache entries reside are stored and communicated in the metadata via the distributed caching algorithm.
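The pointer-only metadata scheme described above can be sketched roughly as a distributed directory: each node holds a small table mapping a block to the node that owns the fresh copy, and only that pointer, never the data itself, is shipped to peers on a change. This is a simplified illustration under assumed semantics, not EMC's actual algorithm, and all names here are hypothetical.

```python
class CacheDirectoryNode:
    """Simplified distributed cache directory sketch: nodes exchange only
    small pointer metadata (block -> owning node) rather than cached data."""

    def __init__(self, name: str):
        self.name = name
        self.peers = []               # other nodes in the (local or metro) cluster
        self.directory = {}           # block id -> name of node with fresh copy
        self.local_data = {}          # blocks this node actually caches

    def write(self, block: int, data: bytes) -> None:
        self.local_data[block] = data
        self.directory[block] = self.name
        for peer in self.peers:       # ship only the pointer update, not the data
            peer.directory[block] = self.name

    def read(self, block: int, fetch_from_backend) -> bytes:
        owner = self.directory.get(block)
        if owner == self.name:        # fresh copy is local
            return self.local_data[block]
        if owner is not None:         # resolve from the owning peer (avoids a stale read)
            peer = next(p for p in self.peers if p.name == owner)
            return peer.local_data[block]
        return fetch_from_backend(block)  # no cached copy anywhere: pass through

# Hypothetical two-node example: B resolves A's fresh copy via the pointer alone.
a, b = CacheDirectoryNode("A"), CacheDirectoryNode("B")
a.peers, b.peers = [b], [a]
a.write(42, b"fresh")
backend = {41: b"cold"}.get            # stands in for the underlying array
assert b.read(42, backend) == b"fresh" # resolved from node A via directory pointer
assert b.read(41, backend) == b"cold"  # no pointer anywhere: pass-through to backend
```

Note how a write ships only a tiny directory update to peers; the data payload stays put until some node actually asks for it, which matches the "minimal metadata shipped between nodes" point made above.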


Figure 9: EMC VPLEX Metro Today

For metro deployments, two clusters (Figure 9) are utilized, with distances supported up to about 100km or about 5ms of latency in a synchronous manner, utilizing long distance Fibre Channel optics and transceivers including Dense Wave Division Multiplexing (DWDM) technologies (see Chapter 6: Metropolitan and Wide Area Storage Networking in Resilient Storage Networks (Elsevier) for additional details on LAN, MAN and WAN topics).
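As a rough sanity check on those distance and latency figures, assuming light propagates in optical fiber at very roughly 200,000 km/s (about 5 microseconds per kilometer each way), and ignoring switch, DWDM and protocol overhead entirely:

```python
# Rough propagation-delay sanity check for synchronous metro distances.
# Assumption: ~5 us of one-way propagation per km of fiber (~200,000 km/s);
# real-world latency adds switch, DWDM and protocol overhead on top of this.
FIBER_US_PER_KM = 5  # approximate one-way microseconds per kilometer

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds over fiber."""
    return 2 * distance_km * FIBER_US_PER_KM / 1000.0

# A 100 km metro distance costs ~1 ms of round-trip propagation alone,
# comfortably inside the ~5 ms synchronous budget cited above.
print(round_trip_ms(100))  # -> 1.0
```

In other words, the ~5ms budget leaves headroom beyond raw fiber propagation for the optics, switches and protocol handshakes that a real synchronous metro link incurs.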

Initially EMC is supporting local or metro (including campus) based VPLEX deployments requiring synchronous communications; however, asynchronous (WAN) Geo and Global based solutions are planned for the future (Figure 10).


Figure 10: EMC VPLEX Future Wide Area and Global

Online Workload Migration across Systems and Sites
Online workload or data movement and migration across storage systems or sites is not new with solutions available from different vendors including Brocade, Cisco, Datacore, EMC, Fujitsu, HDS, HP, IBM, LSI and NetApp among others.

For synchronization and data mobility operations, such as a VMware vMotion or Microsoft Hyper-V Live Migration over distance, information is written to separate LUNs in different locations across what are known as stretch volumes, to enable non-disruptive workload relocation across different storage systems (arrays) from various vendors. Once synchronization is completed, the original source can be disconnected or taken offline for maintenance or other common IRM tasks. Note that at least two LUNs are required; or put another way, for every stretch volume, two LUNs are subtracted from the total number of available LUNs, similar to how RAID 1 mirroring requires at least two disk drives.
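The LUN accounting above works like RAID 1 mirroring: each stretch volume consumes one LUN at each location, so two come out of the pool. A trivial sketch of that arithmetic (the 8,000 LUN per-system limit is the figure cited earlier in this piece; the function name is illustrative):

```python
# Each stretch volume consumes two underlying LUNs (one per location),
# analogous to RAID 1 mirroring needing two drives per mirror.
TOTAL_LUNS = 8000  # per-system virtualized LUN limit cited above

def remaining_luns(stretch_volumes: int, total: int = TOTAL_LUNS) -> int:
    """LUNs left in the pool after provisioning the given stretch volumes."""
    consumed = 2 * stretch_volumes
    if consumed > total:
        raise ValueError("not enough LUNs for that many stretch volumes")
    return total - consumed

print(remaining_luns(100))  # 100 stretch volumes leave 7800 LUNs
```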

Unlike other approaches that, for coherency and performance, rely on either no cached data or extensive amounts of cached data along with the subsequent overhead of maintaining statefulness (consistency and coherency), including avoiding stale reads or writes, VPLEX relies on a combination of distributed cache lookup tables along with pass-through access to underlying storage when or where needed. Consequently, large amounts of data do not need to be cached or shipped between VPLEX devices to maintain data consistency, coherency or performance, which should also help keep costs affordable.

The approach is not unique, it is the implementation
Some storage virtualization solutions that have been software based running on an appliance or network switch as well as hardware system based have had a focus of emulating or providing competing capabilities with those of mid to high end storage systems. The premise has been to use lower cost, less feature enabled storage systems aggregated behind the appliance, switch or hardware based system to provide advanced data and storage management capabilities found in traditional higher end storage products.

While VPLEX, like any tool or technology, could be and probably will be made to do things other than what it is intended for, it is really focused on flexibility, transparency and agility, as opposed to being a means of replacing underlying storage system functionality. What this means is that while there are data movement and migration capabilities, including the ability to synchronize data across sites or locations, VPLEX by itself is not a replacement for the underlying functionality present in both EMC and third party (e.g. HDS, HP, IBM, NetApp, Oracle/Sun or other) storage systems.

This will make for some interesting discussions, debates and apples to oranges comparisons, in particular with those vendors whose products are focused around replacing or providing functionality not found in underlying storage system products.

In a nutshell, VPLEX and the Virtual Storage story (vision) are about enabling agility, resiliency, flexibility, and data and resource mobility to simplify IT Infrastructure Resource Management (IRM). One of the key themes of global storage federation is anywhere access on a local, metro, wide area and global basis across both EMC and heterogeneous third party vendor hardware.

Let's Put it Together: When and Where to use a VPLEX
While many storage virtualization solutions are focused around consolidation or pooling, similar to first wave server and desktop virtualization, the next general broad wave of virtualization is life beyond consolidation. That means expanding the focus of virtualization from consolidation, pooling or LUN aggregation to that of enabling transparency for agility, flexibility, data or system movement, technology refresh and other common time consuming IRM tasks.

Future applications or usage scenarios should include, in addition to VMware vMotion, Microsoft Hyper-V and Microsoft Clustering, other host server clustering solutions.


Figure 11: EMC VPLEX Usage Scenarios

Thoughts and Industry Trends Perspectives:

The following are various thoughts, comments, perspectives and questions pertaining to this and storage, virtualization and IT in general.

Is this truly unique as is being claimed?

Interestingly, the message I'm hearing out of EMC is not the claim that this is unique, revolutionary or the industry's first, as is so often the case with vendors, but rather that it is their implementation and ability to deploy on a broad basis that is unique. Now granted, you will probably hear, as is often the case with any vendor or fan boy/fan girl spin, claims of it being unique, and I'm sure this will also serve up plenty of fodder for mudslinging in the blogosphere, YouTube galleries, Twitter land and beyond.

What is the DejaVu factor here?

For some it will be nonexistent, yet for others there is certainly a DejaVu factor, depending on your experience or what you have seen and heard in the past. In some ways this is the manifestation of many visions and initiatives from the late 90s and early 2000s, when storage virtualization or virtual storage in an open context jumped into the limelight, coinciding with SAN activity. There have been products rolled out along with proof of concept technology demonstrators, some of which are still in the market; other products, and companies, have fallen by the wayside for a variety of reasons.

Consequently, if you were part of, read or listened to any of the discussions and initiatives from Brocade (Rhapsody), Cisco (SVC, VxVM and others), INRANGE (Tempest) or its successor CNT UMD, not to mention IBM SVC, StorAge (now LSI), Incipient (now part of Texas Memory) or Troika among others, you should have some DejaVu.

I guess that also begs the question of what VPLEX is: in band, out of band, or a hybrid fast path/control path approach? From what I have seen, it appears to be a fast path approach combined with distributed caching, as opposed to cache-centric in-band approaches such as IBM SVC (either on a server or, as was tried, on the Cisco special service blade) among others.

Likewise, if you are familiar with IBM Mainframe GDPS or even EMC GDDR, as well as OpenVMS local and metro clusters with distributed lock management, you should also have DejaVu. Similarly, if you have looked at or are familiar with any of the YottaYotta products or presentations, this should also be familiar, as EMC acquired the assets of that now defunct company.

Is this a way for EMC to sell more hardware along with software products?

By removing barriers and enabling IT staffs to support more data on more storage in a denser and more agile footprint, the answer should be yes, something that we may see other vendors emulate, or make noise about what they can do or have been doing already.

How is this virtual storage spin different from the storage virtualization story?

That all depends on your view or definition, as well as your belief systems and preferences for what is or is not virtual storage vs. storage virtualization. If you believe that storage virtualization is virtualization if and only if it involves software running on some hardware appliance or a vendor's storage system for aggregation and common functionality, then you probably won't see this as virtual storage, let alone storage virtualization. However, for others it will be confusing, hence EMC introducing terms such as federation and avoiding terms including grid, to minimize confusion yet play off of cloud crowd commotion.

Is VPLEX a replacement for storage system based tiering and replication?

I do not believe so. Even though some vendors are making claims that tiered storage is dead, just as some vendors declared a couple of years ago that disk drives would be dead this year at the hands of SSD, neither has come to life, so to speak (pun intended). What this means for VPLEX is that it leverages underlying automated or manual tiering found in storage systems, such as EMC FAST-enabled or similar policy and manual functions in third party products.

What VPLEX brings to the table is the ability to transparently present a LUN or volume, locally or over distance, with shared access while maintaining cache and data coherency. This means that if a LUN or volume moves, the applications, file systems or volume managers expecting to access that storage will not be surprised, panic or encounter failover problems. Of course there will be plenty of details to dig into to see how it all actually works, as is the case with any new technology.

Who is this for?

I see this as being for environments that need flexibility and agility across multiple storage systems, from one or multiple vendors, on a local, metro or wide area basis. This is for environments that need the ability to move workloads, applications and data between different storage systems and sites for maintenance, upgrades, technology refresh, BC/DR, load balancing or other IRM functions, similar to how they would use virtual server migration such as vMotion or Live Migration among others.

Do VPLEX and Virtual Storage eliminate need for Storage System functionality?

I see some storage virtualization solutions or appliances that have a focus of replacing underlying storage system functionality instead of coexisting with or complementing it. A way to test for this approach is to listen or read whether the vendor or provider says anything along the lines of eliminating vendor lock-in or control of the underlying storage system. That can be a sign of the golden rule of virtualization: whoever controls the virtualization functionality (at the server hypervisor or storage) controls the gold! This is why on the server side of things we are starting to see tiered hypervisors, similar to tiered servers and storage, where mixed hypervisors are being used for different purposes. Will we see tiered storage hypervisors or virtual storage solutions? The answer could be perhaps, or it depends.

Was Invista a failure not going into production and this a second attempt at virtualization?

There is a popular myth in the industry that Invista never saw the light of day outside of trade show expos or other demos; however, the reality is that there are actual customer deployments. Invista, unlike other storage virtualization products, had a different focus, which was around enabling agility and flexibility for common IRM tasks, similar to the expanded focus of VPLEX. Consequently Invista has often been in apples to oranges comparisons with other virtualization appliances whose focus is pooling along with other functions, or in some cases serving as an appliance based storage system.

The focus around Invista, and its usage by those customers who have deployed it that I have talked with, is around enabling agility for maintenance, facilitating upgrades, moves or reconfiguration and other common IRM tasks, vs. using it for pooling of storage for consolidation purposes. Thus I see VPLEX extending the vision of Invista in a role of complementing and leveraging underlying storage system functionality instead of trying to replace those capabilities with those of the storage virtualizer.

Is this a replacement for EMC Invista?

According to EMC the answer is no and that customers using Invista (Yes, there are customers that I have actually talked to) will continue to be supported. However I suspect that over time Invista will either become a low end entry for VPLEX, or, an entry level VPLEX solution will appear sometime in the future.

How does this stack up or compare with what others are doing?

If you are looking to compare this to cache-centric platforms such as IBM's SVC, which adds extensive functionality and capabilities within the storage virtualization framework, that is an apples to oranges comparison. VPLEX provides cache pointers on a local and global basis, functioning in a complement-to-underlying-storage-system model, whereas SVC caches on a specific cluster basis, enhancing the functionality of the underlying storage system. Rest assured there will be other apples to oranges comparisons made between these platforms.

How will this be priced?

When I asked EMC about pricing, they would not commit to a specific price prior to the announcement, other than indicating that there will be options for on demand or consumption (e.g. cloud pricing), pricing per engine capacity, and subscription models (pay as you go).

What is the overhead of VPLEX?

While EMC runs various workload simulations (including benchmarks) internally as well as some publicly (e.g. Microsoft ESRP among others), they have been opposed to some storage simulation benchmarks such as SPC. The EMC opposition to simulations such as SPC has been varied; however, this could be a good and interesting opportunity for them to silence the industry (including myself), who continue to ask them (along with a couple of other vendors including IBM and their XIV) when they will release public results.

The interesting opportunity for EMC, I think, is that they do not even have to benchmark one of their own storage systems such as a CLARiiON or VMAX; instead, simply show the performance of some third party product already tested on the SPC website, and then a submission with that product running attached to a VPLEX.

If the performance or low latency forecasts are as good as they have been described, EMC can accomplish a couple of things by:

  • Demonstrating the low latency and minimal to no overhead of VPLEX
  • Show VPLEX with a third party product comparing latency before and after
  • Provide a comparison to other virtualization platforms including IBM SVC

As for EMC submitting a VMAX or CLARiiON SPC test in general, I'm not going to hold my breath for that; instead, I will continue to look at the other public workload tests such as ESRP.

Additional related reading material and links:

Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier)
Chapter 3: Networking Your Storage
Chapter 4: Storage and IO Networking
Chapter 6: Metropolitan and Wide Area Storage Networking
Chapter 11: Storage Management
Chapter 16: Metropolitan and Wide Area Examples

The Green and Virtual Data Center (CRC)
Chapter 3: (see also here) What Defines a Next-Generation and Virtual Data Center
Chapter 4: IT Infrastructure Resource Management (IRM)
Chapter 5: Measurement, Metrics, and Management of IT Resources
Chapter 7: Server: Physical, Virtual, and Software
Chapter 9: Networking with your Servers and Storage

Also see these:

Virtual Storage and Social Media: What did EMC not Announce?
Server and Storage Virtualization – Life beyond Consolidation
Should Everything Be Virtualized?
Was today the proverbial day that he!! Froze over?
Moving Beyond the Benchmark Brouhaha

Closing comments (For now):
As with any new vision, initiative, architecture and initial product, there will be plenty of questions to ask, items to investigate, and early adopter customers or users to talk with to determine what is real, what is future, what is usable and practical, along with what is nice to have. Likewise there will be plenty of mud ball throwing and slinging between competitors, fans and foes, which, for those who enjoy watching or reading such things, should keep you well entertained.

In general, the EMC vision and story builds on, and presumably delivers on, past industry hype, buzz and vision, with solutions that can be put into environments as a productivity tool that works for the customer, instead of the customer working for the tool.

Remember, the golden rule of virtualization in play here is that whoever controls the virtualization or associated management controls the gold. Likewise keep in mind that aggregation can cause aggravation. So do not be scared; however, look before you leap, meaning do your homework and due diligence with appropriate levels of expectations, aligning applicable technology to the task at hand.

Also, if you have seen or experienced something in the past, you are more likely to have DejaVu as opposed to seeing things as revolutionary. However it is also important to leverage lessons learned for future success. YottaYotta was a lot of NaddaNadda; let's see if EMC can leverage their past experiences to make this a LottaLotta.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Infosmack Episode 34, VMware, Microsoft and More

Following on the heels of several guest appearances late in 2009 (here, here, here and here) on the Storage Monkeys Infosmack weekly podcast, I was recently asked to join them again for the inaugural 2010 show (Episode 34).

Along with VMguru Rich Brambley and hosts Greg Knieriemen and Marc Farley, we discussed several recent industry topics in this first show of the year, which can be accessed here or on iTunes.

Here's a link to the podcast, where you can listen to the discussion including VMware Go, VMware buying Zimbra, vendor alliances such as HP and Microsoft Hyper-V and EMC+Cisco+VMware, along with data protection issues and options (or opportunities) for virtual servers, among other topics.

I have included the following links that pertain to some of the items we discussed during the show.

Enjoy the show.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio


Could Huawei buy Brocade?

Disclosure: I have no connection to Huawei. I own no stock in, nor have I worked for Brocade as an employee; however I did work for three years at SAN vendor INRANGE which was acquired by CNT. However I left to become an industry analyst prior to the acquisition by McData and well before Brocade bought McData. Brocade is not a current client; however I have done speaking events pertaining to general industry trends and perspectives at various Brocade customer events for them in the past.

Is Brocade for sale?

Last week a Wall Street Journal article mentioned Brocade (BRCD) might be for sale.

BRCD has a diverse product portfolio for Fibre Channel, Ethernet and the emerging Fibre Channel over Ethernet (FCoE) market, and a who's who of OEM and channel partners. Why not be for sale? The timing is good for investors, and CEO Mike Klayko and his team have arguably done a good job of shifting and evolving the company.

Generally speaking, let's keep in perspective that everything is always for sale, and in an economy like now, bargains are everywhere. Many businesses are shopping; it's just a matter of how visible the shopping is for a seller or buyer, along with motivations and objectives including shareholder value.

Consequently, the coconut wires are abuzz with talk and speculation of who will buy Brocade or perhaps who Brocade might buy among other Merger and Acquisition (M and A) activity of who will buy who. For example, who might buy BRCD, why not EMC (they sold McData off years ago via IPO), or IBM (they sold some of their networking business to Cisco years ago) or HP (currently an OEM partner of BRCD) as possible buyers?

Last week I posted on twitter a response to a comment about who would want to buy Brocade with a response to the effect of why not a Huawei to which there was some silence except for industry luminary Steve Duplessie (have a look to see what Steve had to say).

Part of being an analyst, IMHO, should be to actually analyze things vs. simply reporting on what others want you to report or what you have read or heard elsewhere. This also means talking about scenarios that are out of the box, or in adjacent boxes from some perspectives, or that might not be in line with traditional thinking. Sometimes this means breaking away and thinking and saying what may not be obvious or practical. Having said that, let's take a step back for a moment as to why Brocade may or may not be for sale and who might or may not be interested in them.

IMHO, it has a lot to do with Cisco, and not just because Brocade sees no opportunity to continue competing with the 800lb gorilla of LAN/MAN networking that has moved into Brocade's stronghold of storage network SANs. Cisco is upsetting the table or apple cart with its server partners IBM, Dell, HP, Oracle/Sun and others by testing the waters of the server world with their UCS. So far I see this as something akin to a threat testing the defenses of a target before actually fully attacking.

In other words, checking to see how the opposition responds, what defenses are put up, and collecting G2 or intelligence, as well as seeing how the rest of the world or industry might respond to an all out assault or shift of power or control. Of course, HP, IBM, Dell and Sun/Oracle will not let this move into their revenue and account control go unnoticed, with initial counter announcements having been made, some re-emphasizing their relationships with Brocade, which recently acquired Ethernet/IP vendor Foundry.

Now what does this have to do with Brocade potentially being sold and why the title involving Huawei?

Many of the recent industry acquisitions have been focused on shoring up technology or intellectual property (IP), eliminating a competitor, or simply taking advantage of market conditions. For example, Data Domain was sold to EMC in a bidding war with NetApp, HP bought IBRIX, Oracle bought or is trying to buy Sun, Oracle also bought Virtual Iron, Dell bought Perot after HP bought EDS a year or so ago, while Xerox bought ACS; and so the M and A game continues among other deals.

Some of the deals are strategic, many are tactical. Brocade being bought I would put in the category of a strategic scenario, a bargaining chip or even pawn if you prefer, in a much bigger game that is about more than switches, directors, HBAs, LANs, SANs, MANs, WANs, POTS and PANs (check out my book "Resilient Storage Networks" (Elsevier))!

So with conversations focused around Cisco expanding into servers to control the data center discussion, mindset, thinking, budgets and decision making, why wouldn't an HP, IBM or Dell, let alone a NetApp, Oracle/Sun or even EMC, want to buy Brocade as a bargaining chip in a bigger game? Why not a Ciena (they just bought some of Nortel's assets), Juniper or 3Com (more of a merger of equals to fight Cisco), Microsoft (might upset their partner Cisco) or Fujitsu (their telco group, that is) among others?

Then why not Huawei, a company some may have heard of and others may not have.

Who is Huawei you might ask?

Simple: they are a very large IT solutions provider and a large player in China, with global operations including R&D in North America and many partnerships with U.S. vendors. By rough comparison, Cisco's most recently reported annual revenues are about $36.1B (all figures USD), BRCD about $1.5B, Juniper about $3.5B, 3Com about $1.3B, and Huawei about $23B with a year over year sales increase of 45%. Huawei has previous partnerships with storage vendors including Symantec and Falconstor among others. Huawei also has had a partnership with 3Com (H3C), a company that was the first of the LAN vendors to get into SANs (prematurely), beating Cisco easily by several years.

Sure there would be many hurdles and issues, similar to the ones CNT and INRANGE had to overcome, or McData and CNT, or Brocade and McData among others. However, in the much bigger game of IT account and thus budget control played by HP, IBM and Sun/Oracle among others, wouldn't maintaining a dual source for customers' networking needs make sense, or at least serve as a check to Cisco expansion efforts? If nothing else, it maintains the status quo in the industry for now; or, if the rules and game are changing, wouldn't some of the bigger vendors want to get closer to the markets where Huawei is seeing rapid growth?

Does this mean that Brocade could be bought? Sure.
Does this mean Brocade cannot compete or is a sign of defeat? I don’t think so.
Does this mean that Brocade could end up buying or merging with someone else? Sure, why not.
Or, is it possible that someone like Huawei could end up buying Brocade? Why not!

Now, if Huawei were to buy Brocade, a question just for fun: could the result be renamed or spun off as a division called HuaweiCade or HuaCadeWei? Anything is possible when you look outside the box.

Nuff said for now, food for thought.

Cheers – gs

Greg Schulz – StorageIO, Author “The Green and Virtual Data Center” (CRC)

StorageIO aka Greg Schulz appears on Infosmack

If you are in the IT industry, and specifically have any interest in or tie to data infrastructures from servers to storage and networking, including hardware, software and services, not to mention virtualization and clouds, InfoSmack and Storage Monkeys should be on your read or listen list.

Recently I was invited to be a guest on the InfoSmack podcast, a roughly 50-minute talk-show format around storage, networking, virtualization and related topics.

The topics discussed include Sun and Oracle from a storage standpoint and Solid State Disk (SSD), among others.

Now, a word of caution: InfoSmack is not your typical prim and proper venue, nor is it a low-class trash-talking production.

It's fun and informative, where the hosts and attendees are not afraid of poking fun at themselves while exploring topics and the story behind the story in a candid, unscripted manner.

Check it out.

Cheers – gs

Greg Schulz – StorageIOblog, twitter @storageio Author “The Green and Virtual Data Center” (CRC)

SPC and Storage Benchmarking Games

Storage I/O trends

There is a post over in one of the LinkedIn discussion forums about Storage Performance Council (SPC) benchmarks being misleading, to which I just posted a short response. Here's the full post, as LinkedIn has a response length limit.

While the SPC is far from perfect, it is, at least for block storage, arguably better than doing nothing.

For the most part, SPC has become a de facto standard for at least block storage benchmarks, independent of using IOmeter or other tools or vendor-specific simulations, similar to how MSFT ESRP is for Exchange, TPC for databases, SPEC for NFS and so forth. In fact, SPC even recently, rather quietly, rolled out a new set of what could be considered the basis for green storage benchmarks. I would argue that SPC results in themselves are not misleading, particularly if you take the time to look at both the executive and full disclosures and look beyond the summary.

Some vendors have taken advantage of the SPC results by playing games with discounting on prices (something that is allowed under SPC rules) to make apples-to-oranges comparisons on cost per IOP, among other ploys. This practice is nothing new to the IT industry, or to other industries for that matter; hence, benchmark games.

Where the misleading SPC issue can come into play is for those who simply look at what a vendor is claiming without looking at the rest of the story, or without taking the time to review the results and make apples-to-apples comparisons instead of believing the apples-to-oranges ones. After all, the results are there for a reason: so that those really interested can dig in and sift through the material, granted not everyone wants to do that.

For example, some vendors can show a highly discounted list price to get a better cost per IOP on an apples-to-oranges basis; however, when prices are normalized, the results can be quite different. Here's the real gem for those who dig into the SPC results, including looking at the configurations: latency under workload is also reported.

The reason that latency is a gem is that generally speaking, latency does not lie.

What this means is that if vendor A doubles the amount of cache, doubles the number of controllers, doubles the number of disk drives, plays games with actual storage utilization (ASU), or utilizes fast interfaces from 10GbE iSCSI to 8Gb FC or FCoE or SAS to get a better cost per IOP number with discounting, look at the latency numbers. There have been some recent examples of this where vendor A has a better cost per IOP while achieving a higher number of IOPS at a lower cost compared to vendor B, which is what is typically reported in a press release or news story. (See a blog entry that also points to a CMG presentation discussion around this topic here.)

Then go and look at the two results: vendor B may be at list price while vendor A is severely discounted, which is not a bad thing, as that then becomes the starting price at which customers should begin negotiations. However, to be fair, normalize the pricing for fun, look at how much more equipment vendor A may need while discounting to offset the increased amount of hardware, then look at latency.
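To make the normalization idea concrete, here is a minimal sketch of the arithmetic. All of the prices, discounts and IOPS numbers below are hypothetical, purely for illustration and not taken from any actual SPC filing: vendor A reports a big, discounted configuration while vendor B reports a leaner one at list price.

```python
# Hypothetical figures for illustration only -- not from any real SPC result.
def cost_per_iops(total_price_usd, spc1_iops):
    """Dollars per IOPS, the headline metric vendors like to quote."""
    return total_price_usd / spc1_iops

# Vendor A: larger config, heavily discounted price, higher IOPS.
a_list_price, a_discount, a_iops = 1_200_000, 0.45, 300_000
# Vendor B: leaner config, list price, fewer IOPS.
b_list_price, b_discount, b_iops = 800_000, 0.00, 220_000

# As-reported comparison uses the discounted totals.
a_reported = cost_per_iops(a_list_price * (1 - a_discount), a_iops)  # $2.20/IOPS
b_reported = cost_per_iops(b_list_price * (1 - b_discount), b_iops)  # ~$3.64/IOPS

# Normalized comparison puts both vendors back at list price.
a_normalized = cost_per_iops(a_list_price, a_iops)  # $4.00/IOPS
b_normalized = cost_per_iops(b_list_price, b_iops)  # ~$3.64/IOPS
```

With these made-up numbers, vendor A wins the headline cost per IOPS as reported, yet loses it once both configurations are normalized to list price, which is exactly the apples-to-oranges effect described above.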

In some of the recently reported record results, the latency numbers are actually better for vendor B than for vendor A. And why does latency matter? Beyond showing what a controller can actually do in terms of leveraging the number of disks, cache, interface ports and so forth, the big kicker is for those talking about SSD (RAM or FLASH), in that SSD is generally about latency. To effectively utilize SSD, a low-latency device, you need a controller that can not only handle a decent number of IOPS, but do so with low latency under heavy workload conditions.
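The link between latency and deliverable IOPS can be sketched with Little's Law (throughput equals concurrency divided by latency). The queue depth and latency figures below are illustrative assumptions, not measurements, but they show why a controller that adds latency can squander an SSD's potential:

```python
# Little's Law sketch: at a fixed number of outstanding IOs, achievable
# throughput is bounded by average response time. Figures are illustrative.
def achievable_iops(outstanding_ios, avg_latency_seconds):
    """Throughput = concurrency / latency (Little's Law)."""
    return outstanding_ios / avg_latency_seconds

QUEUE_DEPTH = 32  # assumed outstanding IOs

hdd = achievable_iops(QUEUE_DEPTH, 0.005)        # ~5 ms spinning disk: 6,400 IOPS
ssd_low_lat = achievable_iops(QUEUE_DEPTH, 0.0005)  # 0.5 ms via a lean controller: 64,000 IOPS
ssd_high_lat = achievable_iops(QUEUE_DEPTH, 0.003)  # same SSD behind a controller
                                                    # adding ~2.5 ms: ~10,667 IOPS
```

Under these assumptions, the same SSD delivers roughly six times the IOPS behind the low-latency controller, which is the point of looking past the headline IOPS number to the reported latency under workload.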

Thus the SPC, again while far from perfect, is at least for a thumbnail sketch and comparison not necessarily misleading; more often than not it is how the results are utilized that is misleading. Now, in the quest of the SPC administrators to gain more members and broader industry participation and thus secure their own future, is the SPC organization or administration opening itself up to being used more and more as a marketing tool in ways that potentially compromise its credibility (I know, some will dispute the validity of SPC, however that's reserved for a different discussion ;) )?

There is a bit of déjà vu here for those involved with RAID and storage who recall how the RAID Advisory Board (RAB), in its quest to gain broader industry adoption and support, succumbed to marketing pressures and use, or what some would describe as misuse, and is now a member of the "Where are they now" club!

Don't get me wrong here; I like the SPC tests/results/format, and there is a lot of good information in the SPC. The various vendor folks who work very hard behind the scenes to make the SPC actually work and continue to evolve also all deserve a great big kudos, an "atta boy" or "atta girl," for the fine work they have been doing, work that I hope does not become lost in the quest to gain market adoption for the SPC.

Ok, so then this should all beg the question of what is the best benchmark. Simple: the one that most closely resembles your actual applications, workload, conditions, configuration and environment.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

IBM Out, Oracle In as Buyer of Sun

Following on the heels of IBM's talks with Sun that broke down a week or so ago, today's news is that Oracle has agreed to buy Sun, extending Larry Ellison's software empire as well as boosting his hardware empire from fast sport platforms to server, storage and other IT data center hardware.

What's the real play and story here is certainly open to discussion and debate; whether it is good or bad, and who the winners and losers are, will be determined as the dust settles, along with responses from across the industry and new product announcements and enhancements slated by some for as early as this week. What if any role does Cisco's push to get into servers and maybe storage play? Does Oracle want to make sure they remain at the big table?

Regarding discussions of this deal and what it means, the Twitter world has been abuzz already this morning; click here to see and follow some of the conversations, perspectives and insights being exchanged.

Nuff said for now; it's time to get ready to head off to the airport, as I'm doing several speaking events and keynote sessions this week on the right coast while the left coast is abuzz with the Sun and Oracle activity.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio
