Did HP respond to EMC and Cisco VCE with Microsoft HyperV bundle?

Last week EMC and Cisco, along with Intel and VMware, created the VCE coalition along with Acadia, a consumption model based services joint venture.

In other activity last week, HP made several announcements including:

  • Improvements in sensing technologies
  • StorageWorks enhancements (SVSP, IBRIX, EVA and HyperV, X9000 and others)

EMC and Cisco were relatively quiet this week on the announcement front; however, HP unleashed another round of announcements that among others included:

  • Quarterly financial results
  • SMB server, storage, network and virtualization enhancements (here, here, here and here)
  • Acquisition of 3COM (see related blog post here)

The reason I bring up all of this HP activity is not to simply recap the news and announcements, which you can find on many other blogs or news sites; rather, it is because I see a trend.

That trend appears to be one of a company on the move, not one ready to rest on its laurels, but rather a company that continues to innovate in-house and via acquisitions.

Some of those acquisitions, including IBRIX, were relatively small; others, like EDS last year and 3COM this week, would be seen by some as large and by others as medium sized. Either way, HP has been busy expanding its portfolio of technology solutions and services offerings along with its comprehensive IT stack.

Cisco, EMC and HP are examples of companies looking to expand their IT stacks and footprints, both diversifying their current product focus and reach and extending into new or further into existing customer and market sectors. Last week's EMC and Cisco announcement signaled two large players combining their resources to make virtualization and private clouds easy to acquire and deploy for mid to large size environments, with a theme around VMware.

This week, buried in all of the HP announcements, was one that caught my eye: a virtualization solution bundle designed for small business (that is, something smaller than a vblock0). That was something missing from the Cisco and EMC news of last week, however one I'm sure will be addressed sooner versus later.

In the case of HP, the other notable thing about their virtualization bundle was the focus on the mid to small businesses that fall into the broad and diverse SMB category, not to mention the inclusion of Microsoft.

Yes, that is right: while a VMware based solution from HP would be a no-brainer given all of the activity the two companies are involved in as joint partners, Microsoft HyperV was front and center.

Is this a reaction to last week's Cisco and EMC salvo?

Perhaps, and some will jump to that conclusion. However, I will also offer this alternative scenario: 85 to 90 percent of servers consolidated into virtual machines (VMs) on VMware or other hypervisors, including Microsoft HyperV, are Windows based.

Likewise, as one of the largest if not the largest server vendors (pick your favorite server category or price band), one that also happens to be one of the largest Microsoft Windows partners, I would have been more surprised if HP had not done a HyperV bundle.

While Cisco and EMC may stay the course, or at least talk the talk with a VMware affinity in the Acadia and VCE coalition for the time being, I would expect HP to flex its wings a bit and show diverse support for multiple hypervisors and operating systems across its various server, network, storage and services platforms.

I would not be surprised to see some VMware based bundles appear over time, building on previously announced HP BladeSystem Matrix solution bundles.

Welcome back my friends to the show that never ends, that is the on-going server, storage, networking, virtualization, hardware, software and services solutions game for enabling the adaptive, dynamic, flexible, scalable, resilient, service oriented, public or private cloud, infrastructure as a service green and virtual data center.

Stay tuned, there is much more to come!

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

HP Buys one of the seven networking dwarfs and gets a bargain

Last week EMC and Cisco announced their VCE coalition and Acadia.

The other day, HP continued its early holiday shopping by plunking down $2.7B USD to buy 3COM, one of the networking seven dwarfs (at least when compared to networking giant Cisco).

Other so called networking dwarfs, when compared to Cisco, include Brocade, Ciena and Juniper among others.

Why is 3COM a bargain at $2.7B?

Sure, HP paid a slight premium multiple on 3COM's trailing revenues, or a small multiple on their market cap.

Sure HP gets to acquire one of the networking seven dwarfs at a time when Cisco is flexing its muscles to move into the server space.

Sure HP gets to extend their networking group's capabilities, including additional offerings for HP's broad SMB and lower-end SOHO and even consumer markets, not to mention enterprise ROBO or workgroups.

Sure HP gets to extend their security and Voice over IP (VoIP) via 3COM and their US Robotics brand perhaps to better compete with Cisco at the consumer, prosumer, SOHO or low-end of SMB markets.

Sure HP gets access to H3C as a means of furthering its reach into China and the growing Asian market, perhaps even getting closer to Huawei as a possible future partner.

Sure HP could have bought Brocade, however IMHO that would have cost a few more deceased presidents (aka very large dollar bills) and meant assuming over a billion dollars in debt; however, let's leave the Brocadians and that discussion on the back burner for another day.

Sure HP gets to signal to the world that they are alive and have a ton of money in their war chest; last I checked, actually more cash, in the $11B range (minus the roughly $2.7B being spent on 3COM), which exceeds the $5B USD cash position of Cisco.

Sure HP could have done, and perhaps will still do, some smaller networking related deals in the couple-hundred-million-dollar range to beef up product offerings, such as a Riverbed or others, or perhaps wait for some fire sales or price shop among those shopping themselves around.

ROI is the bargain IMHO, not to mention other pieces including H3C!

3COM was and is a bargain for all of the above; plus, given revenues of about $1.3B, HP CEO Mark Hurd stands to reap a better return on that cash than having it sit in a bank account earning a few points. HP also still has around $8-9B in cash, leaving room for some other opportunistic holiday shopping; who knows, maybe adopting yet another networking, storage or server related dwarf!

Stay tuned, this game is far from being over as there are plenty of days left in the 2009 holiday shopping season!

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Acadia VCE: VMware + Cisco + EMC = Virtual Computing Environment

Was today the day the music died? (click here or here if you are not familiar with the expression)

Add another three letter acronym (TLA) to your IT vocabulary if you are involved with server, storage, networking, virtualization, security and related infrastructure resource management (IRM) topics.

That new TLA is Virtual Computing Environment (VCE), a coalition formed by EMC and Cisco along with partner Intel, announced today together with a joint venture called Acadia. Of course EMC, who also happens to own VMware for virtualization and RSA for security software tools, brings those to the coalition (read the press release here).

For some quick fun, twittervile and the blogosphere have come up with other meanings such as:

VCE = Virtualization Communications Endpoint
VCE = VMware Cisco EMC
VCE = Very Cash Efficient
VCE = VMware Controls Everything
VCE = Virtualization Causes Enthusiasm
VCE = VMware Cisco Exclusive

Ok, so much for some fun, at least for now.

With Cisco, EMC and VMware announcing their new VCE coalition, has this signaled the end of servers, storage, networking, hardware and software for physical, virtual and cloud computing as we know it?

Does this mean all other vendors not in this announcement should pack it up, game over and go home?

The answer in my perspective is NO!

No, the music did not end today!

NO, servers, storage and networking for virtual or cloud environments have not ended.

Also, NO, other vendors do not have to go home today, the game is not over!

However, a new game is on: one that some have seen before, while for others it is something new and exciting, perhaps revolutionary or an industry first.

What was announced?
Figure 1 shows a general vision or positioning from the three major players involved along with four tenets or topic areas of focus. Here is a link to a press release where you can read more.

Figure 1: VCE coalition vision and positioning (Source: Cisco, EMC, VMware)

General points include:

  • A new coalition (e.g. VCE) focused on virtual compute for cloud and non cloud environments
  • A new company Acadia owned by EMC and Cisco (1/3 each) along with Intel and VMware
  • A new go to market pre-sales, service and support cross technology domain skill set team
  • Solution bundles or vblocks with technology from Cisco, EMC, Intel and VMware

What are the vblocks and components?
Pre-configured (see this link for a 3D model), tested, and supported with a single throat to choke model for streamlined end to end management and acquisition. There are three vblocks or virtual building blocks that include server, storage, I/O networking, and virtualization hypervisor software along with associated IRM software tools.

Cisco is bringing to the game their Unified Computing System (UCS) servers along with Nexus 1000v and Multilayer Director (MDS) switches; EMC is bringing storage (Symmetrix VMax, CLARiiON and unified storage) along with their RSA security and Ionix IRM tools; VMware is providing their vSphere hypervisors running on Intel based servers (via Cisco).

The components include:

  • EMC Ionix management tools and framework – The IRM tools
  • EMC RSA security framework software – The security tools
  • EMC VMware vSphere hypervisor virtualization software – The virtualization layer
  • EMC VMax, CLARiiON and unified storage systems – The storage
  • Cisco Nexus 1000v and MDS switches – The Network and connectivity
  • Cisco Unified Computing System (UCS) – The physical servers
  • Services and support – Cross technology domain presales, delivery and professional services

Figure 2: Cisco vblock (Server, Storage, Networking and Virtualization Software). Source: Cisco

The three vblock models are:
Vblock0: entry level system due out in 2010 supporting 300 to 800 VMs for initial customer consolidation, private clouds or other diverse applications in small or medium sized business. You can think of this as a SAN in a CAN or Data Center in a box with Cisco UCS and Nexus 1000v, EMC unified storage secured by RSA and VMware vSphere.

Vblock1: mid sized building block supporting 800 to 3000 VMs for consolidation and other optimization initiatives using Cisco UCS, Nexus and MDS switches along with EMC CLARiiON storage secured with RSA software hosting VMware hypervisors.

Vblock2: high end system supporting 3000 to 6000 VMs for large scale data center transformation or new virtualization efforts, combining Cisco Unified Computing System (UCS), Nexus 1000v and MDS switches, and EMC Symmetrix VMax storage with RSA security software, hosting the VMware vSphere hypervisor.
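For readers who think in code, here is a minimal Python sketch of how those published VM ranges line up as sizing tiers. The function name and the handling of boundary or out-of-range counts are my own illustrative assumptions, not anything from the announcement:

```python
# Illustrative only: maps a target VM count to the vblock tiers described
# above (Vblock0: 300-800, Vblock1: 800-3000, Vblock2: 3000-6000 VMs).
# Boundary handling (e.g., exactly 800 VMs matching two tiers) is an assumption.

def candidate_vblocks(vm_count: int) -> list[str]:
    """Return the vblock models whose published VM ranges cover vm_count."""
    ranges = {
        "Vblock0": (300, 800),
        "Vblock1": (800, 3000),
        "Vblock2": (3000, 6000),
    }
    return [model for model, (low, high) in ranges.items() if low <= vm_count <= high]

if __name__ == "__main__":
    for vms in (500, 800, 2500, 6500):
        print(vms, "VMs ->", candidate_vblocks(vms) or ["outside published ranges"])
```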

What does this all mean?
With this move, for some it will add fuel to the campfire talk that Cisco is moving closer to EMC and/or VMware, with a pre-nuptial via Acadia. For others, it will be seen as fragmentation for virtualization, particularly if other vendors such as Dell, Fujitsu, HP, IBM and Microsoft among others are kept out of the game, not to mention their channels of VARs or their IT customers.

Acadia is a new company or more precisely, a joint venture being created by major backers EMC and Cisco with minority backers being VMware and Intel.

Like other joint ventures, consider for example those commonly seen in the airline industry (e.g. a transportation utility) where carriers pool resources, such as SkyTeam, whose members include Delta, which has a JV with Air France (owner of KLM), which in turn had an antitrust immunity JV with Northwest (now being digested by Delta).

These joint ventures can range from simple marketing alliances, like you see with EMC programs such as their Select program, to more formal OEM relationships, to outright ownership as is the case with VMware and RSA, to this new model with Acadia.

An airline analogy may not be the most appropriate, yet there are some interesting similarities, not least of which is that air carriers rely on information systems and technologies provided by members of this coalition among others. There is also a correlation in that joint ventures are about streamlining and creating a seamless end to end customer experience. That is, give customers enough choice and options, keep them happy, take out the complexities and hopefully some cost, and with customer control comes revenue and margin or profit.

Certainly there are opportunities to streamline and not simply cut corners; perhaps that is another analogy with the airlines, where there is a current focus on cost cutting and nickel and diming for services. Hopefully Acadia and VCE are not just another example of vendors getting together around the campfire to sing Kumbaya in the name of increasing customer adoption, cost cutting or putting a marketing spin on how to sell more to customers for account control.

Now, with all due respect to the individual companies and personnel, at least in this iteration it is not so much about the technology or packaging. Likewise, while bundling, integration and testing are important, it is not just about those either, as we have seen similar solutions before.

Rather, I think this has the potential for changing the way server, storage and networking hardware along with IRM and virtualization software are sold into organizations, for the better or worse.

What I'm watching is how Acadia and their principal backers can navigate the channel maze and ultimately the customer maze to sell a cross technology domain solution. For example, will a sales call require six to fourteen legs (e.g. one person is a two legged call, for those not up on sales or vendor lingo) with a storage, server, networking, VMware, RSA, Ionix and services representative?

Or, can a model to drive down the number of people or product specialist involved in a given sales call be achieved leveraging people with cross technology domain skills (e.g. someone who can speak server and storage hardware and software along with networking)?

Assuming Acadia and VCE vblocks address product integration issues, I see the bigger issue as streamlining the sales process (including compensation plans) along with how partners, not to mention customers, are dealt with.

How will the sales pitch be made to the Cisco network people at VARs or customer sites, or to the storage, server or VMware teams, or all of the above?

What about the others?
Cisco has relationships with Dell, HP, IBM, Microsoft and Oracle/Sun among others, and with this move they will be stepping even harder on partner toes than when they launched the UCS earlier this year. EMC for its part is fairly diversified and not as subservient to IBM, however it has a history of partnering with Dell, Oracle and Microsoft among others.

VMware has a smaller investment and thus waits more in the wings, as does Intel, given that both have large partnerships with Dell, HP, IBM and Microsoft. Microsoft is of interest here because, on one front, the bulk of all servers virtualized into VMware VMs are Windows based.

On the other hand, Microsoft has their own virtualization hypervisor, HyperV, which depending upon how you look at it could be a competitor of VMware or simply a nuisance. I'm of the mindset that it is still too early to judge this game on the first round, which VMware has won. Keep in mind history, such as the desktop and browser wars that Microsoft lost in the first round only to come back strong later. This move could very well invigorate Microsoft, or perhaps Oracle or Citrix among others.

Now this is far from the first time that we have seen alliances, coalitions, marketing or sales promotion cross technology vendor clubs in the industry let alone from the specific vendors involved in this announcement.

One that comes to mind was 3COM's failed attempt in the late 90s to become the first traditional networking vendor to get into SANs, many years before Cisco could spell SAN, let alone before their Andiamo startup was incubated. The 3COM initiative, which was cancelled due to financial issues literally on the eve of rollout, was to include the likes of STK (pre-Sun), Qlogic, Anchor (people were still learning how to spell Brocade), Crossroads (FC to SCSI routers for tape), Legato (pre-EMC), DG CLARiiON (pre-EMC) and MTI (sold their patents to EMC, became a reseller, now defunct), along with some others slated to jump on the bandwagon.

Let's also not forget that among the traditional networking market vendors, Cisco is the $32B giant and all of the others, including 3Com, Brocade, Broadcom, Ciena, Emulex, Juniper and Qlogic, are the seven plus dwarfs. However, keep in mind the $23B USD networking vendor Huawei, which is growing at a 45% annual rate.

I would keep an eye on AMD, Brocade, Citrix, Dell, Fujitsu, HP, Huawei, Juniper, Microsoft, NetApp, Oracle/Sun, Rackable and Symantec among many others for similar joint venture or marketing alliances.

Some of these have already surfaced with Brocade and Oracle sharing hugs and chugs (another sales term referring to alliance meetings over beers or shots).

Also keep in mind that VMware has a large software (customer business) footprint deployed on HP with Intel (and AMD) servers.

Oh, and those VMware based VMs running on HP servers also just happen to be hosting in the neighborhood of 80% or more Windows based guest operating systems; I would say it is game on time.

When I say it is game on time, I don't think VMware is brash enough to cut HP (or others) off, forcing them to move to Microsoft for virtualization. However, the game is about control: control of technology stacks and partnerships, control of VARs, integrators and the channel, as well as control of customers.

If you cannot tell, I find this topic fun and interesting.

Those who only know me from servers often ask when I learned about networking, to which I say check out one of my books (Resilient Storage Networks, Elsevier). Meanwhile, others who know me from storage ask when I learned about or got into servers, to which I respond about 28 years ago, when I worked in IT as the customer.

Bottom line on Acadia, vblocks and VCE for now, I like the idea of a unified and bundled solution as long as they are open and flexible.

On the other hand, I have many questions and am even skeptical in some areas, including how this plays out for Cisco and EMC in terms of whether it becomes a unifier or a polarizer causing market fragmentation.

For some this is or will be déjà vu, back to the future; for others it is a new, exciting and revolutionary approach; while for yet others it will be new fodder for smack talk!

More to follow soon.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Optimize Data Storage for Performance and Capacity Efficiency

This post builds on a recent article I did that can be read here.

Even in tough economic times, there is no such thing as a data recession! Thus the importance of optimizing data storage efficiency, addressing both performance and capacity without impacting availability, in a cost effective way to do more with what you have.

What this means is that even though budgets are tight or have been cut resulting in reduced spending, overall net storage capacity is up year over year by double digits if not higher in some environments.

Consequently, there is continued focus on stretching available IT and storage related resources or footprints further while eliminating barriers or constraints. IT footprint constraints can be physical in a cabinet or rack as well as floorspace, power or cooling thresholds and budget among others.

Constraints can be due to lack of performance (bandwidth, IOPS or transactions), poor response time or lack of availability for some environments. Yet for other environments, constraints can be lack of capacity, limited primary or standby power or cooling constraints. Other constraints include budget, staffing or lack of infrastructure resource management (IRM) tools and time for routine tasks.

Look before you leap
Before jumping into an optimization effort, gain insight if you do not already have it as to where the bottlenecks exist, along with the cause and effect of moving or reconfiguring storage resources. For example, boosting capacity use to more fully use storage resources can result in a performance issue or data center bottlenecks for other environments.

An alternative scenario is that, in the quest to boost performance, storage is seen as being under-utilized, yet when capacity use is increased, lo and behold, response time deteriorates. The result can be a vicious cycle, hence the need to address the issue, as opposed to simply moving problems around, by using tools to gain insight into resource usage, both space and activity or performance.

Gaining insight means looking at capacity use along with performance and availability activity, and how they consume power, cooling and floor space. Consequently, an important step is gaining insight and knowledge into how your resources are being used to deliver various levels of service.

Tools include storage or system resource management (SRM) tools that report on storage space capacity usage, performance and availability, with some tools now adding energy usage metrics, along with storage or system resource analysis (SRA) tools.

Cooling Off
Power and cooling are commonly talked about as constraints, either from a cost standpoint or in terms of the availability of primary or secondary (e.g. standby) energy and cooling capacity to support growth. Electricity is essential for powering IT equipment, including storage, enabling devices to do their specific tasks of storing data, moving data, processing data or a combination of these.

Thus, power gets consumed, some work or effort to move and store data takes place, and the byproduct is heat that needs to be removed. In a typical IT data center, cooling on average can account for about 50% of energy used, with some sites using less.

With cooling being a large consumer of electricity, a small percentage change in how cooling consumes energy can yield large results. Addressing cooling energy consumption can help with budget or cost issues, or free up cooling capacity to support the installation of additional storage or other IT equipment.
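As a back of the envelope sketch of why a small cooling improvement matters, assuming the roughly 50 percent cooling share mentioned above (the facility size and improvement percentage below are hypothetical):

```python
# Hypothetical example: if cooling is ~50% of total data center energy,
# a modest improvement in cooling efficiency moves the total noticeably.

total_kw = 100.0            # assumed total facility draw (IT + cooling)
cooling_share = 0.50        # cooling at ~50% of energy used (see text above)
cooling_improvement = 0.10  # assume a 10% reduction in cooling energy

cooling_kw = total_kw * cooling_share
saved_kw = cooling_kw * cooling_improvement
print(f"Cooling load: {cooling_kw:.1f} kW")
print(f"Savings: {saved_kw:.1f} kW, or {saved_kw / total_kw:.0%} of total facility energy")
# -> a 10% cooling improvement frees about 5% of total energy (or cooling capacity)
```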

Keep in mind that effective cooling relies on removing heat from as close to the source as possible to avoid over cooling, which requires more energy. If you have not done so, have a facilities review or assessment performed; that can range from a quick walk around to a more in-depth review and thermal airflow analysis. Among the means of removing heat close to the source are techniques such as intelligent, precision or smart cooling, also known by other marketing names.

Powering Up, or, Powering Down
Speaking of energy or power, in addition to addressing cooling, there are a couple of ways of addressing power consumption by storage equipment (Figure 1). The most popularly discussed approach toward efficiency is energy avoidance, involving powering down storage when not in use, such as first generation MAID, at the cost of performance.

For off-line storage, tape and other removable media give low-cost capacity per watt with low to no energy needed when not in use. Second generation (e.g. MAID 2.0) solutions with intelligent power management (IPM) capabilities have become more prevalent enabling performance or energy savings on a more granular or selective basis often as a standard feature in common storage systems.

Figure 1: Balancing storage energy options (energy avoidance versus energy efficiency)

Another approach to energy efficiency, also seen in figure 1, is doing more work for active applications per watt of energy to boost productivity. This can be done by using the same amount of energy while doing more work, or by doing the same amount of work with less energy.

For example, instead of using larger capacity disks to improve capacity per watt metrics, active or performance sensitive storage should be looked at on an activity basis, such as IOPS, transactions, videos, emails or throughput per watt. Hence, a fast disk drive doing work can be more energy efficient in terms of productivity than a higher capacity, slower disk drive for active workloads, while for idle or inactive data the inverse should hold true.

On a go forward basis, the trend already being seen with some servers and storage systems is to do more work while using less energy. Thus a larger ratio of useful work (for active or non idle storage) to the amount of energy consumed yields a better efficiency rating, or take the inverse if your preference is for smaller numbers.
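To make the activity versus capacity metrics concrete, here is a small Python sketch; the drive wattages and IOPS figures are hypothetical placeholders rather than vendor specifications:

```python
# Hypothetical drives: a fast, lower-capacity disk vs. a slower, high-capacity disk.
# The point is the metric, not the specific numbers.
drives = {
    "15K RPM 146GB": {"iops": 180, "capacity_gb": 146, "watts": 15.0},
    "7.2K RPM 1TB":  {"iops": 80,  "capacity_gb": 1000, "watts": 12.0},
}

for name, d in drives.items():
    iops_per_watt = d["iops"] / d["watts"]
    gb_per_watt = d["capacity_gb"] / d["watts"]
    print(f"{name}: {iops_per_watt:.1f} IOPS/watt, {gb_per_watt:.1f} GB/watt")

# For active workloads compare IOPS (or transactions) per watt;
# for idle or inactive data compare capacity per watt.
```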

Reducing Data Footprint Impact
Data footprint impact reduction tools or techniques for both on-line as well as off-line storage include archiving, data management, compression, deduplication, space-saving snapshots, thin provisioning along with different RAID levels among other approaches. From a storage access standpoint, you can also include bandwidth optimization, data replication optimization, protocol optimizers along with other network technologies including WAFS/WAAS/WADM to help improve efficiency of data movement or access.

Thin provisioning for capacity centric environments can be used to achieve a higher effective storage use level by essentially overbooking storage, similar to how airlines oversell seats on a flight. If you have good historical information and insight into how storage capacity is used and over allocated, thin provisioning enables improved effective storage use for some applications.

However, with thin provisioning, avoid introducing performance bottlenecks by leveraging solutions that work closely with tools that provide historical trending information (capacity and performance).
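As a simple sketch of the overbooking idea (the capacity numbers are hypothetical; real solutions surface these figures through their own reporting tools):

```python
# Hypothetical thin provisioning view: capacity promised (allocated) to hosts
# versus physical capacity installed versus what is actually written.
physical_tb = 100.0   # usable physical capacity
allocated_tb = 250.0  # capacity presented/promised to applications
written_tb = 80.0     # capacity actually consumed

overcommit_ratio = allocated_tb / physical_tb
physical_used = written_tb / physical_tb

print(f"Overcommit (oversubscription) ratio: {overcommit_ratio:.1f}:1")
print(f"Physical capacity consumed: {physical_used:.0%}")

# Watch the trend of written_tb: if growth outpaces physical_tb, you either
# add capacity or risk an out-of-space (and performance) event.
```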

For a technology that some have tried to declare dead in order to prop up other new or emerging solutions, RAID remains relevant given its widespread deployment and the transparent reliance on it in organizations of all sizes. RAID also plays a role in storage performance, availability, capacity and energy, both as a constraint and as a relief tool.

The trick is to align the applicable RAID configuration to the task at hand, meeting specific performance, availability, capacity or energy along with economic requirements. For some environments a one size fits all approach may be used, while others may configure storage using different RAID levels, along with varying numbers of drives in RAID sets, to meet specific requirements.


Figure 2: How various RAID levels and configuration impact or benefit footprint constraints

Figure 2 shows a summary and tradeoffs of various RAID levels. In addition to the RAID level, the number of disks can also have an impact on performance or capacity; for example, by creating a larger RAID 5 or RAID 6 group, the parity overhead can be spread out, however there is a tradeoff. Tradeoffs can include performance bottlenecks on writes or during drive rebuilds along with potential exposure to drive failures.

All of this comes back to a balancing act to align with your specific needs: some will go with a RAID 10 stripe and mirror to avoid risk, even going so far as to do triple mirroring along with replication. On the other hand, some will go with RAID 5 or RAID 6 to meet cost or availability requirements, and some I have talked with even run RAID 0 for data and applications that need the raw speed yet can be restored rapidly from some other medium.
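Here is a minimal sketch of how RAID level and group size change usable capacity and parity overhead (simplified arithmetic that ignores vendor specific formatting, spares and other overheads):

```python
# Simplified usable-capacity math for common RAID levels.
# n = drives in the RAID group, size_tb = raw capacity per drive.

def usable_tb(level: str, n: int, size_tb: float) -> float:
    if level == "RAID 0":   # striping, no protection
        return n * size_tb
    if level == "RAID 10":  # mirrored pairs, half the raw capacity
        return (n // 2) * size_tb
    if level == "RAID 5":   # one drive's worth of parity
        return (n - 1) * size_tb
    if level == "RAID 6":   # two drives' worth of parity
        return (n - 2) * size_tb
    raise ValueError(level)

for n in (5, 10, 16):
    for level in ("RAID 5", "RAID 6"):
        u = usable_tb(level, n, 2.0)
        overhead = 1 - u / (n * 2.0)
        print(f"{level} with {n} x 2TB drives: {u:.0f}TB usable, {overhead:.0%} parity overhead")

# Larger groups spread the parity overhead thinner, at the cost of longer
# rebuilds and greater exposure to additional drive failures.
```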

Let's bring it all together with an example
Figure 3 shows a generic before and after optimization example for a mixed workload environment; granted, you can increase or decrease the applicable capacity and performance to meet your specific needs. In figure 3, the storage configuration consists of one storage system set up for high performance (left) and another for high capacity secondary storage (right) for disk to disk backup and other near-line needs; again, you can scale the approach up or down to your specific needs.

For the performance side (left), 192 x 146GB 15K RPM disks (28TB raw) provide good performance, however with low capacity use. This translates into a low capacity per watt value, however with reasonable IOPS per watt, and there are some performance hot spots.

On the capacity centric side (right), there are 192 x 1TB disks (192TB raw) with good space utilization, however with some performance hot spots or bottlenecks and constrained growth, not to mention low IOPS per watt with reasonable capacity per watt. In the before scenario, the joint energy use (both arrays) is about 15 kW or 15,000 watts, which translates to about $16,000 in annual energy costs (cooling excluded), assuming an energy cost of 12 cents per kWh.
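The annual energy cost arithmetic behind that estimate is straightforward; here is a quick sketch using the figures from the example (cooling excluded, as noted):

```python
# Reproduce the ballpark annual energy cost from the example above.
power_kw = 15.0           # combined draw of both arrays (~15,000 watts)
hours_per_year = 24 * 365
cost_per_kwh = 0.12       # 12 cents per kWh, as assumed in the text

annual_kwh = power_kw * hours_per_year
annual_cost = annual_kwh * cost_per_kwh
print(f"{annual_kwh:,.0f} kWh/year -> ${annual_cost:,.0f} per year (cooling excluded)")
# -> 131,400 kWh/year -> about $15,768, i.e., roughly $16,000 per year
```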

Note, your specific performance, availability, capacity and energy mileage will vary based on particular vendor solution, configuration along with your application characteristics.


Figure 3: Baseline before and after storage optimization (raw hardware) example

Building on the example in figure 3, a combination of techniques along with technologies yields a net increase in performance, capacity and perhaps feature functionality (depending on the specific solution). In addition, floor space, power, cooling and associated footprints are also reduced. For example, the resulting solution shown (middle) comprises 4 x 250GB flash SSD devices along with 32 x 450GB 15.5K RPM and 124 x 2TB 7200RPM disks, enabling a 53TB (raw) capacity increase along with a performance boost.

The previous examples are based on raw or baseline capacity metrics, meaning that further optimization techniques should yield additional benefits. These examples should also help address the question or myth that it costs more to power storage than to buy it, to which the answer should be: it depends.

If you can buy the above solution for, say, under $50,000 (the three year cost to power it), let alone under $100,000 (three years of power and cooling), which would also be a good acquisition, then the myth that it costs more to power storage than to buy it holds true. However, if a solution as described above costs more, then the story changes, along with other variables including energy costs for your particular location, reinforcing the notion that your mileage will vary.

Another tip is that more is not always better.

That is, more disks, ports, processors, controllers or cache do not always equate to better performance. Performance is the sum of how those and other pieces work together in a demonstrable way, ideally measured with your specific application workload rather than against what is on a product data sheet.

Additional general tips include:

  • Align the applicable tool, technique or technology to task at hand
  • Look to optimize for both performance and capacity, active and idle storage
  • Consolidated applications and servers need fast servers
  • Fast servers need fast I/O and storage devices to avoid bottlenecks
  • For active storage use an activity per watt metric such as IOP or transaction per watt
  • For in-active or idle storage, a capacity per watt per footprint metric would apply
  • Gain insight and control of how storage resources are used to meet service requirements

It should go without saying, however sometimes what is understood needs to be restated.

In the quest to become more efficient and optimized, avoid introducing performance, quality of service or availability issues by moving problems.

Likewise, look beyond storage space capacity also considering performance as applicable to become efficient.

Finally, it is all relative in that what might be applicable to one environment or application need may not apply to another.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

I/O Virtualization (IOV) Revisited

Is I/O Virtualization (IOV) a server topic, a network topic, or a storage topic (See previous post)?

Like server virtualization, IOV involves servers, storage, network, operating system, and other infrastructure resource management areas and disciplines. The business and technology value proposition or benefits of converged I/O networks and I/O virtualization are similar to those for server and storage virtualization.

Additional benefits of IOV include:

    • Doing more with the resources (people and technology) that already exist, or reducing costs
    • Single (or pair for high availability) interconnect for networking and storage I/O
    • Reduction of power, cooling, floor space, and other green efficiency benefits
    • Simplified cabling and reduced complexity for server network and storage interconnects
    • Boosting server performance while maximizing the use of I/O or mezzanine slots
    • Reducing I/O and data center bottlenecks
    • Rapid re-deployment to meet changing workload and I/O profiles of virtual servers
    • Scaling I/O capacity to meet high-performance and clustered application needs
    • Leveraging common cabling infrastructure and physical networking facilities

Before going further, let's take a step backward for a few moments.

To say that I/O and networking demands and requirements are increasing is an understatement. The amount of data being generated, copied, and retained for longer periods of time is elevating the importance of the role of data storage and infrastructure resource management (IRM). Networking and input/output (I/O) connectivity technologies (figure 1) tie together facilities, servers, storage, tools for measurement and management, and best practices on a local and wide area basis to enable an environmentally and economically friendly data center.

TIERED ACCESS FOR SERVERS AND STORAGE
There is an old saying that the best I/O, whether local or remote, is an I/O that does not have to occur. I/O is an essential activity for computers of all shapes, sizes, and focus to read and write data in and out of memory (including external storage) and to communicate with other computers and networking devices. This includes communicating on a local and wide area basis for access to or over the Internet, cloud, XaaS, or managed service providers, as shown in figure 1.

Figure 1: The Big Picture: Data Center I/O and Networking (Source: The Green and Virtual Data Center (CRC), (C) 2009)

The challenge of I/O is that some form of connectivity (logical and physical), along with associated software, is required, and there are time delays while waiting for reads and writes to occur. I/O operations that are closest to the CPU or main processor should be the fastest and occur most frequently for access to main memory, using internal local CPU to memory interconnects. In other words, fast servers or processors need fast I/O, in terms of both low latency I/O operations and bandwidth capabilities.

Figure 2: Tiered I/O and Networking Access (Source: The Green and Virtual Data Center (CRC), (C) 2009)

Moving out and away from the main processor, I/O remains fairly fast with distance but is more flexible and cost effective. An example is the PCIe bus and I/O interconnect shown in Figure 2, which is slower than processor-to-memory interconnects but is still able to support attachment of various device adapters with very good performance in a cost effective manner.

Farther from the main CPU or processor, various networking and I/O adapters can attach to PCIe, PCIx, or PCI interconnects for backward compatibility to support various distances, speeds, types of devices, and cost factors.

In general, the faster a processor or server is, the more prone to a performance impact it will be when it has to wait for slower I/O operations.

Consequently, faster servers need better-performing I/O connectivity and networks. Better performing means lower latency, more IOPS, and improved bandwidth to meet application profiles and types of operations.
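A quick sketch of how those metrics relate: setting latency aside, bandwidth is simply IOPS multiplied by the I/O size (the workload numbers below are hypothetical):

```python
# Relating IOPS, I/O size and bandwidth for a couple of hypothetical workloads.
workloads = {
    "OLTP (8KB random)":         {"iops": 20000, "io_kb": 8},
    "Backup (256KB sequential)": {"iops": 2000,  "io_kb": 256},
}

for name, w in workloads.items():
    mb_per_sec = w["iops"] * w["io_kb"] / 1024
    print(f"{name}: {w['iops']} IOPS x {w['io_kb']}KB = {mb_per_sec:.0f} MB/sec")

# Small-block workloads tend to be IOPS (and latency) bound; large-block workloads
# tend to be bandwidth bound, hence matching the I/O path to the application profile.
```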

Peripheral Component Interconnect (PCI)
Having established that computers need to perform some form of I/O to various devices, at the heart of many I/O and networking connectivity solutions is the Peripheral Component Interconnect (PCI) interface. PCI is an industry standard that specifies the chipsets used to communicate between CPUs and memory and the outside world of I/O and networking device peripherals.

Figure 3 shows an example of multiple servers or blades, each with dedicated Fibre Channel (FC) and Ethernet adapters (there could be two or more of each for redundancy). Simply put, the more servers and devices to attach to, the more adapters, cabling and complexity, particularly for blade servers and dense rack mount systems.
Figure 3: Dedicated PCI adapters for I/O and networking devices (Source: The Green and Virtual Data Center (CRC), (C) 2009)

Figure 4 shows an example of a PCI implementation including various components such as bridges, adapter slots, and adapter types. PCIe leverages multiple serial unidirectional point to point links, known as lanes, in contrast to traditional PCI, which used a parallel bus design.

Figure 4: PCI IOV Single Root Configuration Example (Source: The Green and Virtual Data Center (CRC), (C) 2009)

In traditional PCI, bus width varied from 32 to 64 bits; in PCIe, the number of lanes combined with PCIe version and signaling rate determine performance. PCIe interfaces can have 1, 2, 4, 8, 16, or 32 lanes for data movement, depending on card or adapter format and form factor. For example, PCI and PCIx performance can be up to 528 MB per second with a 64 bit, 66 MHz signaling rate, and PCIe is capable of over 4 GB (e.g., 32 Gbit) in each direction using 16 lanes for high-end servers.
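For a rough sense of where those numbers come from, here is a sketch of the per-lane math for the PCIe generations current at the time (Gen1 at 2.5 GT/s and Gen2 at 5 GT/s, both with 8b/10b encoding); treat it as an approximation rather than a spec reference:

```python
# Approximate PCIe bandwidth per direction.
# Gen1: 2.5 GT/s per lane, Gen2: 5.0 GT/s per lane, both using 8b/10b encoding
# (10 bits on the wire per 8 bits of data), so multiply by 0.8.

def pcie_gbytes_per_sec(gen: int, lanes: int) -> float:
    gt_per_s = {1: 2.5, 2: 5.0}[gen]  # giga-transfers per second per lane
    usable_gbit = gt_per_s * 0.8      # remove 8b/10b encoding overhead
    return usable_gbit * lanes / 8    # bits -> bytes

for gen in (1, 2):
    for lanes in (1, 4, 8, 16):
        print(f"PCIe Gen{gen} x{lanes}: ~{pcie_gbytes_per_sec(gen, lanes):.1f} GB/sec per direction")

# A Gen1 x16 slot works out to ~4 GB/sec each direction, matching the figure above;
# by comparison, 64-bit/66MHz PCI peaks at 64 bits x 66MHz = ~528 MB/sec, shared.
```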

The importance of PCIe and its predecessors is a shift from multiple vendors’ different proprietary interconnects for attaching peripherals to servers. For the most part, vendors have shifted to supporting PCIe or early generations of PCI in some form, ranging from native internal on laptops and workstations to I/O, networking, and peripheral slots on larger servers.

The most current version of PCI, as defined by the PCI Special Interest Group (PCISIG), is PCI Express (PCIe). Backwards compatibility exists by bridging previous generations, including PCIx and PCI, off a native PCIe bus or, in the past, bridging a PCIe bus to a PCIx native implementation. Beyond speed and bus width differences for the various generations and implementations, PCI adapters also are available in several form factors and applications.

Traditional PCI was generally limited to a main processor or was internal to a single computer, but current generations of PCI Express (PCIe) include support for PCI Special Interest Group (PCISIG) I/O virtualization (IOV), enabling the PCI bus to be extended to distances of a few feet. Compared to local area networking, storage interconnects, and other I/O connectivity technologies, a few feet is a very short distance, but compared to the previous limit of a few inches, extended PCIe provides the ability for improved sharing of I/O and networking interconnects.

I/O VIRTUALIZATION (IOV)
On a traditional physical server, the operating system sees one or more instances of Fibre Channel and Ethernet adapters even if only a single physical adapter, such as an InfiniBand HCA, is installed in a PCI or PCIe slot. In the case of a virtualized server, for example Microsoft HyperV or VMware ESX/vSphere, the hypervisor is able to see and share a single physical adapter, or multiple adapters for redundancy and performance, with guest operating systems. The guest systems see what appears to be a standard SAS, FC or Ethernet adapter or NIC using standard plug-and-play drivers.

Virtual HBAs or virtual network interface cards (NICs) and switches are, as their names imply, virtual representations of a physical HBA or NIC, similar to how a virtual machine emulates a physical machine with a virtual server. With a virtual HBA or NIC, physical adapter resources are carved up and allocated much like virtual machines, but instead of hosting a guest operating system like Windows, UNIX, or Linux, a SAS or FC HBA, FCoE converged network adapter (CNA) or Ethernet NIC is presented.

In addition to virtual or software-based NICs, adapters, and switches found in server virtualization implementations, virtual LAN (VLAN), virtual SAN (VSAN), and virtual private network (VPN) are tools for providing abstraction and isolation or segmentation of physical resources. Using emulation and abstraction capabilities, various segments or sub networks can be physically connected yet logically isolated for management, performance, and security purposes. Some form of routing or gateway functionality enables various network segments or virtual networks to communicate with each other when appropriate security is met.

PCI-SIG IOV
PCI SIG IOV consists of a PCIe bridge attached to a PCI root complex along with an attachment to a separate PCI enclosure (Figure 5). Other components and facilities include address translation service (ATS), single-root IOV (SR IOV), and multiroot IOV (MR IOV). ATS enables performance to be optimized between an I/O device and a server's I/O memory management. Single root IOV (SR IOV) enables multiple guest operating systems to access a single I/O device simultaneously, without having to rely on a hypervisor for a virtual HBA or NIC.

Figure 5: PCI SIG IOV (Source: The Green and Virtual Data Center (CRC), (C) 2009)

The benefit is that physical adapter cards, located in a physically separate enclosure, can be shared within a single physical server without having to incur any potential I/O overhead via a virtualization software infrastructure. MR IOV is the next step, enabling a PCIe or SR IOV device to be accessed through a shared PCIe fabric across different physically separated servers and PCIe adapter enclosures. The benefit is increased sharing of physical adapters across multiple servers and operating systems, not to mention simplified cabling, reduced complexity and improved resource utilization.

Figure 6: PCI SIG MR IOV (Source: The Green and Virtual Data Center (CRC), (C) 2009)

Figure 6 shows an example of a PCIe switched environment, where two physically separate servers or blade servers attach to an external PCIe enclosure or card cage for attachment to PCIe, PCIx, or PCI devices. Instead of the adapter cards physically plugging into each server, a high performance short-distance cable connects each server's PCI root complex via a PCIe bridge port to a PCIe bridge port in the enclosure device.

In figure 6, either SR IOV or MR IOV can take place, depending on specific PCIe firmware, server hardware, operating system, devices, and associated drivers and management software. For a SR IOV example, each server has access to some number of dedicated adapters in the external card cage, for example, InfiniBand, Fibre Channel, Ethernet, or Fibre Channel over Ethernet (FCoE) and converged networking adapters (CNA) also known as HBAs. SR IOV implementations do not allow different physical servers to share adapter cards. MR IOV builds on SR IOV by enabling multiple physical servers to access and share PCI devices such as HBAs and NICs safely with transparency.

The primary benefit of PCI IOV is to improve utilization of PCI devices, including adapters or mezzanine cards, as well as to enable performance and availability for slot-constrained and physical footprint or form factor-challenged servers. Caveats of PCI IOV are distance limitations and the need for hardware, firmware, operating system, and management software support to enable safe and transparent sharing of PCI devices. Examples of PCIe IOV vendors include Aprius, NextIO and Virtensys among others.

InfiniBand IOV
InfiniBand based IOV solutions are an alternative to Ethernet-based solutions. Essentially, InfiniBand approaches are similar, if not identical, to converged Ethernet approaches including FCoE, with the difference being InfiniBand as the network transport. InfiniBand HCAs with special firmware are installed into servers that then see a Fibre Channel HBA and Ethernet NIC from a single physical adapter. The InfiniBand HCA also attaches to a switch or director that in turn attaches to Fibre Channel SAN or Ethernet LAN networks.

The value of InfiniBand converged networks is that they exist today, and they can be used for consolidation as well as to boost performance and availability. InfiniBand IOV also provides an alternative for those who choose not to deploy Ethernet.

From a power, cooling, floor-space or footprint standpoint, converged networks can be used for consolidation to reduce the total number of adapters and the associated power and cooling. In addition to removing unneeded adapters without loss of functionality, converged networks also free up or allow a reduction in the amount of cabling, which can improve airflow for cooling, resulting in additional energy efficiency. An example of a vendor using InfiniBand as a platform for I/O virtualization is Xsigo.

General takeaway points include the following:

  • Minimize the impact of I/O delays to applications, servers, storage, and networks
  • Do more with what you have, including improving utilization and performance
  • Consider latency, effective bandwidth, and availability in addition to cost
  • Apply the appropriate type and tiered I/O and networking to the task at hand
  • I/O operations and connectivity are being virtualized to simplify management
  • Convergence of networking transports and protocols continues to evolve
  • PCIe IOV is complementary to converged networking including FCoE

Moving forward, a revolutionary new technology may emerge that finally eliminates the need for I/O operations. However until that time, or at least for the foreseeable future, several things can be done to minimize the impacts of I/O for local and remote networking as well as to simplify connectivity.

PCIe Fundamentals Server Storage I/O Network Essentials

Learn more about IOV, converged networks, LAN, SAN, MAN and WAN related topics in Chapter 9 (Networking with your servers and storage) of The Green and Virtual Data Center (CRC) as well as in Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier).

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Should Everything Be Virtualized?

Should everything, that is all servers, storage and I/O along with facilities, be virtualized?

The answer not surprisingly should be it depends!

Denny Cherry (aka Mrdenny) over at ITKE did a great recent post about applications not being virtualized, particularly databases. In general, on some of the points or themes we are on the same or a similar page, while on others we differ slightly, though not by much.

Unfortunately consolidation is commonly misunderstood to be the sole function or value proposition of server virtualization given its first wave focus. I agree that not all applications or servers should be consolidated (note that I did not say virtualized).

From a consolidation standpoint, the emphasis is often on boosting resource use to cut physical hardware and management costs by boosting the number of virtual machines (VMs) per physical machine (PM). Ironically, while VMs using VMware, Microsoft HyperV, Citrix/Xen among others can leverage a common gold image for cloning or rapid provisioning, there are still separate operating system instances and applications that need to be managed for each VM.

Sure, VM tools from the hypervisor vendors along with 3rd party vendors help with these tasks, and storage vendor tools including dedupe and thin provisioning help to cut the data footprint impact of these multiple images. However, there are still multiple images to manage, providing a future opportunity for further cost and management reduction (more on that in a different post).

Getting back on track:

Some reasons that all servers or applications cannot be consolidated include among others:

  • Performance, response time, latency and Quality of Service (QoS)
  • Security requirements including keeping customers or applications separate
  • Vendor support of software on virtual or consolidated servers
  • Financial where different departments own hardware or software
  • Internal political or organizational barriers and turf wars

On the other hand, for those who see virtualization as enabling agility and flexibility, that is, life beyond consolidation, there are many deployment opportunities for virtualization (note that I did not say consolidation). For some environments and applications, the emphasis can be on performance, quality of service (QoS) and other service characteristics where the ratio of VMs to PMs will be much lower, if not one to one. This is where Mrdenny and I are essentially on the same page, perhaps saying it differently, with plenty of caveats and clarifications needed of course.

My view is that in life beyond consolidation, many more servers or applications can be virtualized than might otherwise be hosted by VMs (note that I did not say consolidated). For example, instead of a high number or ratio of VMs to PMs, a lower number, and for some workloads or applications even one VM per PM, can be leveraged with a focus beyond basic CPU use.

Yes you read that correctly, I said why not configure some VMs on a one to one PM basis!

Here's the premise: today's current wave or focus is around maximizing the number of VMs and/or reducing the number of physical machines to cut capital and operating costs for under-utilized applications and servers, thus the move to stuff as many VMs onto a PM as possible.

However, for those applications that cannot be consolidated as outlined above, there is still a benefit to having a VM dedicated to a PM. For example, dedicating a PM (blade, server or perhaps core) allows performance and QoS aims to be met while still providing operational and infrastructure resource management (IRM), DCIM or ITSM flexibility and agility.

Meanwhile, during busy periods an application such as a database server could have its own PM, yet during off-hours some other VM could be moved onto that PM for backup or other IRM/DCIM/ITSM activities. Likewise, by having the VM under the database on a dedicated PM, the application could be moved proactively for maintenance, or, in a clustered HA scenario, to support BC/DR.

What can and should be done?
First and foremost, decide how many VMs per PM is the right number for your environment and different applications to meet your particular requirements and business needs.

Identify various VM to PM ratios to align with different application service requirements. For example, some applications may run on virtual environments with a higher number of VMs to PMs, others with a lower number of VMs to PMs and some with a one VM to PM allocation.
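As a simple sketch of aligning VM to PM ratios with service tiers (the tier names, ratios and VM counts below are hypothetical examples, not recommendations):

```python
# Hypothetical service tiers with different VM-to-PM consolidation ratios.
import math

tiers = {
    "Tier 1 (performance/QoS critical)": {"vms": 10,  "vms_per_pm": 1},
    "Tier 2 (general production)":       {"vms": 60,  "vms_per_pm": 8},
    "Tier 3 (test/dev, low duty cycle)": {"vms": 200, "vms_per_pm": 25},
}

total_pms = 0
for name, t in tiers.items():
    pms = math.ceil(t["vms"] / t["vms_per_pm"])
    total_pms += pms
    print(f"{name}: {t['vms']} VMs at {t['vms_per_pm']}:1 -> {pms} PMs")
print(f"Total physical machines required: {total_pms}")
```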

Certainly there will be, for different reasons, the need to keep some applications on a direct PM without introducing a hypervisor and VM; however, many applications and servers can benefit from virtualization (again, note that I did not say consolidation) for agility, flexibility, BC/DR, HA and ease of IRM, assuming the costs work in your favor.

Additional general to do or action items include among others:

  • Look beyond CPU use also factoring in memory and I/O performance
  • Keep response time or latency in perspective as part of performance
  • More and faster memory is important for VMs as well as for applications including databases
  • High utilization may not show high hit rates or effectiveness of resource usage
  • Fast servers need fast memory, fast I/O and fast storage systems
  • Establish tiers of virtual and physical servers to meet different service requirements
  • See efficiency and optimization as more than simply driving up utilization to cut costs
  • Productivity and improved QoS are also tenets of an efficient and optimized environment

These are themes among others that are covered in chapters 3 (What Defines a Next-Generation and Virtual Data Center?), 4 (IT Infrastructure Resource Management), 5 (Measurement, Metrics, and Management of IT Resources), as well as 7 (Servers—Physical, Virtual, and Software) in my book The Green and Virtual Data Center (CRC), which you can learn more about here.

Welcome to life beyond consolidation, the next wave of desktop, server, storage and IO virtualization along with the many new and expanded opportunities!

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Could Huawei buy Brocade?

Disclosure: I have no connection to Huawei. I own no stock in Brocade, nor have I worked for them as an employee; however, I did work for three years at SAN vendor INRANGE, which was acquired by CNT, though I left to become an industry analyst prior to the subsequent acquisition by McData and well before Brocade bought McData. Brocade is not a current client; however, I have done speaking events pertaining to general industry trends and perspectives at various Brocade customer events for them in the past.

Is Brocade for sale?

Last week a Wall Street Journal article mentioned Brocade (BRCD) might be for sale.

BRCD has a diverse product portfolio for Fibre Channel and Ethernet along with the emerging Fibre Channel over Ethernet (FCoE) market, and a who's who of OEM and channel partners. Why not be for sale? The timing is good for investors, and CEO Mike Klayko and his team have arguably done a good job of shifting and evolving the company.

Generally speaking, let's keep in perspective that everything is always for sale, and in an economy like now, bargains are everywhere. Many businesses are shopping; it is just a matter of how visible the shopping is for a seller or buyer, along with motivations and objectives including shareholder value.

Consequently, the coconut wires are abuzz with talk and speculation of who will buy Brocade, or perhaps who Brocade might buy, among other merger and acquisition (M and A) chatter of who will buy whom. For example, who might buy BRCD? Why not EMC (they sold McData off years ago via IPO), IBM (they sold some of their networking business to Cisco years ago) or HP (currently an OEM partner of BRCD) as possible buyers?

Last week on Twitter I responded to a comment about who would want to buy Brocade with a reply to the effect of: why not a Huawei? That was met with some silence, except from industry luminary Steve Duplessie (have a look to see what Steve had to say).

Part of being an analyst, IMHO, should be to actually analyze things vs. simply reporting on what others want you to report or what you have read or heard elsewhere. That also means talking about scenarios that are out of the box, or in adjacent boxes from some perspectives, or that might not be in line with traditional thinking. Sometimes this means breaking away and saying what may not be obvious or practical. Having said that, let's take a step back for a moment as to why Brocade may or may not be for sale and who might or may not be interested in them.

IMHO, it has a lot to do with Cisco, and not just because Brocade sees no opportunity to continue competing with the 800lb gorilla of LAN/MAN networking that has moved into Brocade's stronghold of storage network SANs. Cisco is upsetting the table or apple cart with its server partners IBM, Dell, HP, Oracle/Sun and others by testing the waters of the server world with UCS. So far I see this as something akin to a probe testing the defenses of a target before an all-out attack.

In other words, checking to see how the opposition responds, what defenses are put up, collecting G2 or intelligence, as well as gauging how the rest of the world or industry might respond to an all-out assault or a shift of power or control. Of course, HP, IBM, Dell and Sun/Oracle will not let this move into their revenue and account control go unnoticed; initial counter announcements have already been made, some re-emphasizing relationships with Brocade along with its recent acquisition of Ethernet/IP vendor Foundry.

Now what does this have to do with Brocade potentially being sold and why the title involving Huawei?

Many of the recent industry acquisitions have been focused on shoring up technology or intellectual property (IP), eliminating a competitor, or simply taking advantage of market conditions. For example, Data Domain was sold to EMC in a bidding war with NetApp, HP bought IBRIX, Oracle bought (or is trying to buy) Sun, Oracle also bought Virtual Iron, Dell bought Perot after HP bought EDS a year or so ago, and Xerox bought ACS; and so the M and A game continues, among other deals.

Some of the deals are strategic, many are tactical. Brocade being bought I would put in the category of a strategic scenario, a bargaining chip or even a pawn if you prefer in a much bigger game that is about more than switches, directors, HBAs, LANs, SANs, MANs, WANs, POTS and PANs (check out my book “Resilient Storage Networks”, Elsevier)!

So with conversations focused on Cisco expanding into servers to control the data center discussion, mindset, thinking, budgets and decision making, why wouldn't an HP, IBM or Dell, let alone a NetApp, Oracle/Sun or even EMC, want to buy Brocade as a bargaining chip in a bigger game? Why not a Ciena (they just bought some of Nortel's assets), Juniper, 3Com (more of a merger of equals to fight Cisco), Microsoft (might upset their partner Cisco) or Fujitsu (their telco group, that is), among others?

Then why not Huawei, a company some may have heard of and others may not have?

Who is Huawei you might ask?

Simple: they are a very large IT solutions provider and a major player in China, with global operations including R&D in North America and many partnerships with U.S. vendors. By rough comparison, Cisco's most recently reported annual revenue is about $36.1B (all figures USD), BRCD about $1.5B, Juniper about $3.5B, 3COM about $1.3B, and Huawei about $23B with a year over year sales increase of 45%. Huawei has had partnerships with storage vendors including Symantec and Falconstor, among others. Huawei also has had a partnership with 3Com (H3C), a company that was the first of the LAN vendors to get into SANs (prematurely), beating Cisco by several years.

Sure there would be many hurdles and issues, similar to the ones CNT and INRANGE had to overcome, or McData and CNT, or Brocade and McData, among others. However, in the much bigger game of IT account and thus budget control played by HP, IBM and Sun/Oracle among others, wouldn't maintaining a dual source for customers' networking needs make sense, or at least serve as a check on Cisco's expansion efforts? If nothing else it maintains the status quo in the industry for now; or, if the rules and game are changing, wouldn't some of the bigger vendors want to get closer to the markets where Huawei is seeing rapid growth?

Does this mean that Brocade could be bought? Sure.
Does this mean Brocade cannot compete or is a sign of defeat? I don’t think so.
Does this mean that Brocade could end up buying or merging with someone else? Sure, why not.
Or, is it possible that someone like Huawei could end up buying Brocade? Why not!

Now, if Huawei were to buy Brocade, it begs the question, just for fun: could they be renamed or spun off as a division called HuaweiCade or HuaCadeWei? Anything is possible when you look outside the box.

Nuff said for now, food for thought.

Cheers – gs

Greg Schulz – StorageIO, Author “The Green and Virtual Data Center” (CRC)

The Many Faces of Solid State Devices/Disks (SSD)


Here’s a link to a recent article I wrote for Enterprise Storage Forum titled “Not a Flash in the PAN” providing a synopsis of the many faces, implementations and forms of SSD based technologies that includes several links to other related content.

A popular topic over the past year or so has been SSD with FLASH based storage for laptops, sometimes referred to as hybrid disk drives. There were also announcements late last year by companies such as Texas Memory Systems (TMS) of a FLASH based storage system combining DRAM as a high speed cache in their RAMSAN-500, and more recently EMC added support for FLASH based SSD devices in their DMX4 systems as a tier-0 to co-exist with other tier-1 (fast FC) and tier-2 (SATA) drives.

Solid State Disks/Devices (SSD), or memory based storage mediums, have been around for decades. They continue to evolve using different types of memory, ranging from volatile dynamic random access memory (DRAM) to persistent or non-volatile RAM (NVRAM) and various derivatives of NAND FLASH, among others. Likewise, the capacity cost points, performance, reliability, packaging, interfaces and power consumption all continue to improve.

SSD in general is a technology that has been misunderstood over the decades, particularly when simply compared on a cost per capacity (e.g. dollar per GByte) basis, which is an unfair comparison. The more appropriate comparison is to look at how much work or activity, for example transactions per second, NFS operations per second, IOPS or email messages, can be processed in a given amount of time, and then compare the amount of power and number of devices required to achieve a desired level of performance. Granted, SSD, and in particular DRAM based systems, cost more on a GByte or TByte basis than magnetic hard disk drives (HDDs); however, it also takes more HDDs and controllers to achieve the same level of performance, not to mention more power and cooling, compared to a typical SSD based device.
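
To make that cost-per-work versus cost-per-capacity argument concrete, here is a small sketch of the arithmetic. Every price, capacity, IOPS and wattage figure below is a made-up placeholder for illustration only, not a measurement or quote for any actual drive or product.

```python
# Minimal sketch: dollars per GByte vs. dollars per IOP vs. IOPS per watt.
# All figures are hypothetical placeholders to show the shape of the math.
import math

hdd = dict(price_usd=300, capacity_gb=450, iops=180, watts=12)    # hypothetical HDD
ssd = dict(price_usd=1500, capacity_gb=150, iops=10000, watts=6)  # hypothetical FLASH SSD

def per_unit_costs(name, price_usd, capacity_gb, iops, watts):
    print(f"{name}: ${price_usd / capacity_gb:6.2f}/GB  "
          f"${price_usd / iops:6.3f}/IOP  {iops / watts:6.0f} IOPS/watt")

per_unit_costs("HDD", **hdd)
per_unit_costs("SSD", **ssd)

# HDDs needed to match the SSD on IOPS alone, before the extra enclosures,
# controllers, power and cooling are even counted:
print("HDDs to match SSD IOPS:", math.ceil(ssd["iops"] / hdd["iops"]))
```

With these illustrative numbers the HDD wins handily on dollars per GByte while the SSD wins by orders of magnitude on dollars per IOP and IOPS per watt, which is exactly why the metric you choose determines which technology looks "expensive".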

The many faces of SSD range from low cost consumer grade products based on consumer FLASH to high performance DRAM based caches and devices for enterprise storage applications. Over the past year or so, SSD has re-emerged for those who are familiar with the technology, and emerged or appeared for those new to the various implementations and technologies, leading to another upswing in the historic up and down cycles of SSD adoption and technology evolution in the industry.

This time around a few things are different, and I believe that SSD in general, that is, the many different faces of SSD, will have staying power and not fade away into the shadows only to re-emerge a few years later as has been the case in the past.

The reason I have this opinion is based on two basic premises: economics and ecology. Given the focus on reducing or containing costs, doing more with what you have, and environmental or ecological awareness in the race to green the data center and green storage, more energy efficient storage, that is, storage able to do more work with less energy as opposed to simply avoiding energy consumption, has the by-product of improved economics (cost savings, improved resource utilization and better service delivery) along with ecological benefits (better use of energy, or less energy used).

Current SSD based solutions address energy efficiency in ways ranging from maximizing battery life to boosting performance while drawing less power. Consequently, we are now seeing SSD used not only for boosting performance, but also as one of many different tools to address power, cooling, floor space and environmental or green storage issues.

Here’s a link to a StorageIO industry trends and perspectives white paper at www.storageio.com/xreports.htm.

Here's the bottom line: there are many faces to SSD. SSD (FLASH or DRAM) based solutions and devices have a place in a tiered storage environment as a tier-0, or as an alternative in some laptops or other servers where appropriate. SSD complements other technologies, and SSD benefits from being paired with other technologies, including high performance storage for tier-1 and near-line or tier-2 storage implementing intelligent power management (IPM).
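
To show what that tiering idea can look like in practice, here is a minimal placement sketch. The function name, thresholds and tier labels are my own illustrative assumptions for this post, not any vendor's actual policy engine or a recommendation.

```python
# Minimal sketch: place a workload on tier-0 (SSD), tier-1 (fast FC HDD) or
# tier-2 (SATA/near-line with intelligent power management) based on its
# response time and activity needs. Thresholds are illustrative placeholders.
def place_workload(latency_target_ms: float, iops_needed: int) -> str:
    if latency_target_ms < 1 or iops_needed > 5000:
        return "tier-0: FLASH/DRAM SSD"
    if latency_target_ms < 10 or iops_needed > 500:
        return "tier-1: fast FC HDD"
    return "tier-2: SATA/near-line with IPM"

print(place_workload(latency_target_ms=0.5, iops_needed=20000))  # e.g. database logs/indexes
print(place_workload(latency_target_ms=8, iops_needed=800))      # e.g. general file serving
print(place_workload(latency_target_ms=50, iops_needed=50))      # e.g. archive or backup target
```

The point is not the specific cutoffs; it is that placement decisions should be driven by service requirements (latency and activity) rather than by capacity cost alone.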

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved