PUE, Are you Managing Power, Energy or Productivity?

With a renewed focus on Green IT, including energy efficiency and optimization of servers, storage, networks and facilities, is your focus on managing power, energy or productivity?

For example, do you use, or are you interested in, metrics such as the Green Grid PUE or 80 Plus efficient power supplies, along with initiatives such as EPA Energy Star for servers and the emerging Energy Star for Data Centers and for Storage, in terms of energy usage?

Or are you interested in productivity, such as the amount of work or activity that can be done in a given amount of time, or how much information can be stored in a given footprint (power, cooling, floor space, budget, management)?

For many organizations, there tends to be a focus on both managing power and managing productivity. The two are, or should be, interrelated; however, there are some disconnects in emphasis and metrics. For example, the Green Grid PUE is a macro, facilities-centric metric that does not show the productivity, quality or measure of services being delivered by a data center or information factory. Instead, PUE provides a gauge of how efficient the habitat, that is, the building, power distribution and cooling, is with respect to the total energy consumption of the IT equipment.

As a refresher, PUE is a macro metric that is essentially a ratio of how much total power or energy goes into a facility vs. the amount of energy used by the IT equipment. For example, if 12kW (smaller room or site) or 12MW (larger site) are required to power a data center or computer room, and of that load 6kW or 6MW goes to the IT equipment, the PUE would be 2. A PUE of 2 is an indicator that 50% of the energy going to power a facility or computer room goes towards IT equipment (servers, storage, networks, telecom and related equipment), with the balance going towards running the facility or environment, of which the highest percentage has typically been HVAC/cooling.
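To make the arithmetic concrete, here is a minimal sketch in Python (my own illustration, not an official Green Grid calculator); the function and variable names are assumptions made for this example.

```python
# Minimal illustrative sketch (not an official Green Grid tool): computing PUE
# from the example above. Assumes total facility power and IT equipment power
# are measured in the same units (kW or MW).

def pue(total_facility_power: float, it_equipment_power: float) -> float:
    """PUE = total facility power / IT equipment power."""
    if it_equipment_power <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_power / it_equipment_power

if __name__ == "__main__":
    # From the text: 12kW (or 12MW) total, with 6kW (or 6MW) going to IT gear.
    ratio = pue(total_facility_power=12.0, it_equipment_power=6.0)
    it_share = 1.0 / ratio  # fraction of facility energy reaching IT equipment
    print(f"PUE = {ratio:.2f}, IT share of total energy = {it_share:.0%}")  # PUE = 2.00, 50%
```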

In the case of EPA Energy Star for Data Centers, which initially is focused on the habitat or facility efficiency, the answer is measuring and managing energy use and facility efficiency as opposed to productivity or useful work. The metric for EPA Energy Star for Data Centers initially will be Energy Usage Effectiveness (EUE), which will be used to calculate a rating for a data center facility. Those data centers in the top 25th percentile will qualify for Energy Star certification.

Note the word energy and not power, which means that the data center macro metric based on the Green Grid PUE rating looks at all sources of energy used by a data center and not just electrical power. What this means is that macro and holistic facilities energy consumption could be a combination of electrical power, diesel, propane, natural gas or other fuel sources used to generate or create power for IT equipment, HVAC/cooling and other needs. By using a metric that factors in all energy sources, a facility that uses solar, radiant, heat pumps, economizers or other techniques to reduce demands on energy will earn a better rating.

By using a macro metric such as EUE or PUE (ratio = Total_Power_Used / IT_Power_Needs), a starting point is available for deciding and comparing the efficiency and cost to power or energize a facility or room, also known as a habitat for technology.

Managing Productivity of Information Factories (E.g. Data Centers)
What EUE and PUE do not reflect or indicate is how much data is processed, moved and stored by the servers, storage and networks within a facility. At the other extreme from macro metrics are micro or component-level metrics that gauge energy usage on an individual device basis. Some of these micro metrics have activity or productivity measurements associated with them; some don't. Where these leave a big gap, and an opportunity, is in the span between the macro and the micro.

This is where work is being done by various industry groups, including SNIA GSI, SPC and SPEC among others, along with EPA Energy Star, to move beyond macro PUE indicators to more granular effectiveness and efficiency metrics that reflect productivity. Ultimately, productivity is important for gauging the return on investment and business value of how much data can be processed by servers, moved via networks or stored on storage devices in a given energy footprint or at a given cost.

Figure 1 shows four basic approaches (in addition to doing nothing) to energy efficiency. One approach is to avoid energy usage, similar to following a rationing model, but this approach will affect the amount of work that can be accomplished. Other approaches are to do more work using the same amount of energy, boosting energy efficiency, or to do the same amount of work (or store the same data) using less energy.

Figure 1: The Many Faces of Energy Efficiency. Source: The Green and Virtual Data Center (CRC)

The energy efficiency gap is the difference between the amount of work accomplished or information stored in a given footprint and the energy consumed. In other words, the bigger the energy efficiency gap, the better, as seen in the fourth scenario: doing more work or storing more information in a smaller footprint using less energy. Click here to read more about shifting from energy avoidance to energy efficiency.

Watch for new metrics looking at productivity and activity for servers, storage and networks, ranging from MHz or GHz per watt, transactions or IOPS per watt, bandwidth, frames or packets processed per watt, to capacity stored per watt in a given footprint. One of the confusing metrics is Gbytes or Tbytes per watt, in that it can mean either storage capacity or bandwidth, so understand the context of the metric. Likewise, watch for metrics that reflect energy usage for active along with inactive (idle or dormant) storage common with archives, backup or fixed-content data.
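As a rough illustration of what such activity-per-watt metrics look like, here is a minimal Python sketch; the numbers and names below are made up for illustration and are not taken from SNIA GSI, SPC, SPEC or any other benchmark.

```python
# Minimal illustrative sketch of activity-per-watt style metrics mentioned above.
# The figures are fabricated examples; the function name is my own.

def per_watt(activity: float, watts: float) -> float:
    """Generic activity-per-watt metric (IOPS/watt, GB/sec per watt, TB stored per watt)."""
    return activity / watts

if __name__ == "__main__":
    print("IOPS per watt:      ", per_watt(activity=25_000, watts=500))  # transactional work context
    print("GB/sec per watt:    ", per_watt(activity=2.0, watts=500))     # bandwidth context
    print("TB stored per watt: ", per_watt(activity=120.0, watts=500))   # capacity context
```

Note how the same "per watt" form can describe work done, data moved or data stored, which is exactly why the context of a Gbytes- or Tbytes-per-watt figure matters.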

What this all means is that work continues on developing usable and relevant metrics and measurements, not only for macro energy usage but also to gauge the effectiveness of delivering IT services. The business value proposition of driving efficiency and optimization, including increased productivity along with storing more information in a given footprint, is to support density and business sustainability.

 

Additional resources and places to learn more, in addition to those mentioned above, include:

EPA Energy Star for Data Center Storage

Storage Efficiency and Optimization – The Other Green

Performance = Availability StorageIOblog featured ITKE guest blog

SPC and Storage Benchmarking Games

Shifting from energy avoidance to energy efficiency

Green IT Confusion Continues, Opportunities Missed!

Green Power and Cooling Tools and Calculators

Determining Computer or Server Energy Use

Examples of Green Metrics

Green IT, Power, Energy and Related Tools or Calculators

Chapter 10 (Performance and Capacity Planning)
Resilient Storage Networks (Elsevier)

Chapter 5 (Measurement, Metrics and Management of IT Resources)
The Green and Virtual Data Center (CRC)

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Clouds and Data Loss: Time for CDP (Commonsense Data Protection)?

Today SNIA released a press release pertaining to cloud storage timed to coincide with SNW where we can only presume vendors are talking about their cloud storage stories.

Yet the chatter on the coconut wire along with various news (here and here and here) and social media sites is how could cloud storage and information service providers T-Mobile/Microsoft/Sidekick lose customers' data?

Data loss is a dangerous phrase; after all, your data may still be intact somewhere, however if you cannot get to it when needed, that may seem like data loss to you.

There are many types of data loss, including loss of accessibility or availability along with flat-out loss. Let me clarify: loss of data availability or accessibility means that somewhere, your data is still intact, perhaps off-line on a removable disk, optical media or tape, or at another site on-line, near-line or off-line; it's just that you cannot get to it yet. There is also real data loss, where both your primary copy and backup as well as archive data are lost, stolen, corrupted or never actually protected.

Clouds or managed service providers in general are getting beat up due to some loss of access, availability or actual data loss; however, before jumping on that bandwagon and pointing fingers at the service, how about a step back for a minute. Granted, given all of the cloud hype and proliferation of managed service offerings on the web (excuse me, cloud), there is a bit of a lightning-rod backlash or "see, I told you so" reaction.

What's different about this story compared to prior disruptions with Amazon, Google and Blackberry among others is that, unlike cases where access to information or services ranging from calendar, email, contacts or other documents is disrupted for a period of time, it sounds as though data may have been lost.

Lost data, you say? How can you lose data? After all, there are copies of copies of data that have been snapshot, replicated and deduplicated across different tiers of storage, right?

Certainly anyone involved in data management or data protection is asking the question: why not go back to a snapshot copy, replicated volume, or backup copy on disk or tape?

Needless to say, finger-pointing aerobics are or will be in full swing. Instead, let's ask the question: is it time for CDP as in Commonsense Data Protection?

However, rather than pointing blame or spouting off about how bad clouds are, or arguing that they are getting an unfair shake and undue coverage, keep in mind that just because there might be a few bad ones, not all clouds are bad, particularly in light of recent outages.

I can think of many ways to actually lose data; however, totally losing data does not require a technology failure. It can be something much simpler, and it is equally applicable to cloud, virtual and physical data centers and storage environments, from the largest to the smallest to the consumer. It's simple: common-sense best practices, making copies of all data and keeping extra copies around somewhere, with more frequently used or recent data having copies readily available.

Some trends I'm seeing include, among others:

  • Low cost craze leveraging free or near free services and products
  • Cloud hype and cloud bashing, and the need to discuss the wide area in between those extremes
  • Renewed need for basic data protection including BC/DR, HA, backup and security
  • Opportunity to re-architect data protection in conjunction with other initiatives
  • Lack of adequate funding for continued and proactive data protection

Just to be safe, let's revisit some common data protection best practices:

  • Learn from mistakes, preferably during testing, with the aim of not repeating them
  • Most disasters in IT and elsewhere are the result of a chain of events not being contained
  • RAID is not a replacement for backup; it simply provides availability or accessibility
  • Likewise, mirroring or replication by themselves are not a replacement for backup
  • Use point-in-time, RPO-based data protection such as snapshots or backup with replication
  • Maintain a master backup or gold copy that can be used to restore to a given point in time
  • Keep backups on another medium, and also protect the backup catalog or other configuration data
  • If using deduplication, make sure that indexes/dictionaries or metadata are also protected
  • Moving your data into the cloud is not a replacement for a data protection strategy
  • Test restoration of backed-up data both locally as well as from cloud services (see the sketch after this list)
  • Employ data protection management (DPM) tools for event correlation and analysis
  • Data stored in clouds needs to be part of a BC/DR and overall data protection strategy
  • Have an extra copy of data placed in clouds kept in an alternate location as part of BC/DR
  • Ask yourself, what will you do when your cloud data goes away (note it's not if, it's when)
  • Combine multiple layers or rings of defense and assume what can break will break
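As a minimal illustration of the test-restore practice above (my own sketch, not a feature of any particular backup product; the file paths are hypothetical examples), comparing checksums of an original and a restored copy is one simple sanity check:

```python
# Minimal illustrative sketch of one best practice above: verifying that a restored
# copy actually matches the original. Not a replacement for a real data protection tool.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large backups don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(original: Path, restored: Path) -> bool:
    """Return True if the restored copy is byte-for-byte identical to the original."""
    return sha256_of(original) == sha256_of(restored)

if __name__ == "__main__":
    # Hypothetical paths for illustration only.
    ok = verify_restore(Path("data/orders.db"), Path("/restore_test/orders.db"))
    print("restore verified" if ok else "restore MISMATCH - investigate before you need it")
```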

Clouds should not be scary, and clouds do not magically solve all IT or consumer issues. However, they can be an effective tool, when of high caliber, as part of a total data protection strategy.

Perhaps this will be a wake-up call, a reminder that it is time to think beyond cost savings and shift back to basic data protection best practices. What good is the best or most advanced technology if you have less than adequate practices or policies? Bottom line: time for Commonsense Data Protection (CDP).

Ok, nuff said for now, I need to go and make sure I have a good removable backup in case my other local copies fail or I'm not able to get to my cloud copies!

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Could Huawei buy Brocade?

Disclosure: I have no connection to Huawei. I own no stock in, nor have I worked for Brocade as an employee; however I did work for three years at SAN vendor INRANGE which was acquired by CNT. However I left to become an industry analyst prior to the acquisition by McData and well before Brocade bought McData. Brocade is not a current client; however I have done speaking events pertaining to general industry trends and perspectives at various Brocade customer events for them in the past.

Is Brocade for sale?

Last week a Wall Street Journal article mentioned Brocade (BRCD) might be for sale.

BRCD has a diverse product portfolio for Fibre Channel, Ethernet and the emerging Fibre Channel over Ethernet (FCoE) market, along with a who's who of OEM and channel partners. Why not be for sale? The timing is good for investors, and CEO Mike Klayko and his team have arguably done a good job of shifting and evolving the company.

Generally speaking, let's keep things in perspective: everything is always for sale, and in an economy like the current one, bargains are everywhere. Many businesses are shopping; it's just a matter of how visible the shopping is for a seller or buyer, along with motivations and objectives including shareholder value.

Consequently, the coconut wires are abuzz with talk and speculation of who will buy Brocade, or perhaps who Brocade might buy, among other merger and acquisition (M and A) activity of who will buy whom. For example, who might buy BRCD? Why not EMC (they sold McData off years ago via IPO), or IBM (they sold some of their networking business to Cisco years ago), or HP (currently an OEM partner of BRCD) as possible buyers?

Last week on Twitter I responded to a comment about who would want to buy Brocade with something to the effect of why not a Huawei, to which there was some silence except from industry luminary Steve Duplessie (have a look to see what Steve had to say).

Part of being an analyst, IMHO, should be to actually analyze things vs. simply reporting on what others want you to report or what you have read or heard elsewhere. This also means talking about scenarios that are out of the box, or in adjacent boxes from some perspectives, or that might not be in line with traditional thinking. Sometimes this means breaking away and thinking and saying what may not be obvious or practical. Having said that, let's take a step back for a moment as to why Brocade may or may not be for sale and who might or may not be interested in them.

IMHO, it has a lot to do with Cisco, and not just because Brocade sees no opportunity to continue competing with the 800lb gorilla of LAN/MAN networking that has moved into Brocade's stronghold of storage network SANs. Cisco is upsetting the table or apple cart with its server partners IBM, Dell, HP, Oracle/Sun and others by testing the waters of the server world with its UCS. So far I see this as something akin to a threat testing the defenses of a target before an actual full-out attack.

In other words, checking to see how the opposition responds, what defenses are put up, and collecting G2 or intelligence, as well as seeing how the rest of the world or industry might respond to an all-out assault or shift of power or control. Of course, HP, IBM, Dell and Sun/Oracle will not let this move into their revenue and account control go unnoticed, with initial counter-announcements having been made, some re-emphasizing relationships with Brocade along with its recent acquisition of Ethernet/IP vendor Foundry.

Now what does this have to do with Brocade potentially being sold and why the title involving Huawei?

Many of the recent industry acquisitions have been focused on shoring up technology or intellectual property (IP), eliminating a competitor or simply taking advantage of market conditions. For example, Data Domain was sold to EMC in a bidding war with NetApp, HP bought IBRIX, Oracle bought or is trying to buy Sun, Oracle also bought Virtual Iron, Dell bought Perot after HP bought EDS a year or so ago, while Xerox bought ACS, and so the M and A game continues among other deals.

Some of the deals are strategic, many are tactical. Brocade being bought I would put in the category of a strategic scenario, a bargaining chip or even a pawn, if you prefer, in a much bigger game that is about more than switches, directors, HBAs, LANs, SANs, MANs, WANs, POTS and PANs (check out my book "Resilient Storage Networks" - Elsevier)!

So with conversations focused around Cisco expanding into servers to control the data center discussion, mindset, thinking, budgets and decision making, why wouldn't an HP, IBM or Dell, let alone a NetApp, Oracle/Sun or even EMC, want to buy Brocade as a bargaining chip in a bigger game? Why not a Ciena (they just bought some of Nortel's assets), Juniper or 3Com (more of a merger of equals to fight Cisco), Microsoft (might upset their partner Cisco) or Fujitsu (their telco group, that is), among others?

Then why not Huawei, a company some may have heard of, one that others may not have.

Who is Huawei you might ask?

Simple: they are a very large IT solutions provider that is also a large player in China, with global operations including R&D in North America and many partnerships with U.S. vendors. By rough comparison, Cisco's most recently reported annual revenue is about $36.1B (all figures USD), BRCD about $1.5B, Juniper about $3.5B, 3Com about $1.3B, and Huawei about $23B with a year-over-year sales increase of 45%. Huawei has previous partnerships with storage vendors including Symantec and Falconstor among others. Huawei also has had a partnership with 3Com (H3C), a company that was the first of the LAN vendors to get into SANs (prematurely), beating Cisco easily by several years.

Sure, there would be many hurdles and issues, similar to the ones CNT and INRANGE had to overcome, or McData and CNT, or Brocade and McData among others. However, in the much bigger game of IT account and thus budget control played by HP, IBM and Sun/Oracle among others, wouldn't maintaining a dual source for customers' networking needs make sense, or at least serve as a check on Cisco's expansion efforts? If nothing else, it maintains the status quo in the industry for now; or, if the rules and game are changing, wouldn't some of the bigger vendors want to get closer to the markets where Huawei is seeing rapid growth?

Does this mean that Brocade could be bought? Sure.
Does this mean Brocade cannot compete or is a sign of defeat? I don’t think so.
Does this mean that Brocade could end up buying or merging with someone else? Sure, why not.
Or, is it possible that someone like Huawei could end up buying Brocade? Why not!

Now, if Huawei were to buy Brocade, it begs the question, just for fun: could they be renamed or spun off as a division called HuaweiCade or HuaCadeWei? Anything is possible when you look outside the box.

Nuff said for now, food for thought.

Cheers – gs

Greg Schulz – StorageIO, Author “The Green and Virtual Data Center” (CRC)

Poll: What Do You Think of IT Clouds?

Clouds

IT clouds (compute, applications, storage and services) are a popular topic for discussion, with some people being entirely sold on them as the way of the future, while others totally dismiss them; meanwhile, there are plenty of thoughts in between.

I recently shared some of my thoughts in this blog post about IT clouds; now what's your take (your identity will remain confidential)?

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

The function of XaaS(X) – Pick a letter

Remember the xSP era, where X was I for ISP (Internet Service Provider), M for Managed Service Provider (MSP) or S for Storage Service Provider (SSP), all part of buzzword bingo?

That was similar to the xLM craze, where X could have been I for Information Lifecycle Management (ILM), D for Data Lifecycle Management (DLM) and so forth, where someone even tried to register the term ILM and failed instead of grabbing something like xLM, but I digress.

Fast forward to today. Given the widespread use of SaaS among other XaaS terms, let's have a quick and perhaps fun look at some of the different usages of the function XaaS(X) in the IT industry today.

By no means is this an exhaustive list; feel free to comment with others, the more the merrier. Using the basic English alphabet without numbers or extended character sets, here are some possibilities among others (some are and continue to be used in the industry):

A – Analyst, Application, Archive, Audit or Authentication
B – Backup or Blogger
C – Cloud, Compiler, Compute or Connectivity
D – Data management, Data warehouse, DBA, Dedupe, Development, Disk or Document management
E – Email, Encryption or Evangelist
F – Files or Freeware
G – Grid or Google
H – Help, Hotline or Hype
I – ILM, Information, Infrastructure, IO or IT
J – Jobs
K – Kbytes
L – Library or Linkedin
M – Mainframe, Marketing, Manufacturing, Media, Memory or Middleware
N – NAS, Networking or Notification
O – Office, Oracle, Optical or Optimization
P – Performance, Petabytes, Platform, Policy, Police, Print or PR
Q – Quality
R – RAID, Replication, Reporter, Research or Rights management
S – SAN, Search, Security, Server, Software, Storage or Support
T – Tape, Technology, Testing, Trade group, Trends or Twittering
U – Unfollow
V – VAR, Virtualization or Vendor
W – Web
X – Xray
Y – Youtube
Z – zSeries or zilla

Feel free to comment with others for the list, and likewise, feel free to share the list.

Cheers gs

Greg Schulz – StorageIO, Author “The Green and Virtual Data Center” (CRC)

Clouds are like Electricity: Don't be Scared

Clouds

IT clouds (compute, applications, storage and services) are like electricity in that they can be scary or confusing to some while being enabling or a necessity to others, not to mention being a polarizing force depending on where you sit or how you view them.

As a polarizing force, if you are a cloud crowd cheerleader or evangelist, you might view someone who does not subscribe to or share your excitement, views or interpretations as a cynic.

On the other hand, if you are a skeptic, or perhaps scared or even a cynic, you might view anyone who talks about cloud in general or not specific terms as a cheerleader.

I have seen and experienced this electrifying polarization first hand, having been told by cloud crowd cheerleaders or evangelists that I don't like clouds and that I'm a cynic who does not know anything about clouds.

As a funny aside (at least I thought it was funny), I recently asked someone who gave me an earful while trying to convert me into a cloud believer if they had read any of the chapters in my new book The Green and Virtual Data Center (CRC). The response was no, and I said, in effect, too bad, as in the book I talk about how clouds can be complementary to existing IT resources, being another tier of servers, storage, applications, facilities and IT services.

On the other hand, and this might be funny for some of the cloud crowd, when I bring up tiered IT resources including servers, storage, applications and facilities, as well as where or how clouds can fit to complement IT, I have been told by cynics or naysayers that I'm a cloud cheerleader.

Wow, talk about polarized sides!

Now, what about all those who are somewhere in the middle, the skeptics who might see value in IT clouds for different scenarios and may in fact already be using clouds (depending upon someone's definition)?

For those in the middle, whether they are vendors, VARs, media, press, analysts, consultants, IT professionals, investors or others, they can easily be misunderstood, misrepresented and seen as a missed opportunity, perhaps even lamented by those at either of the two extremes (e.g. cloud crowd cheerleaders or true naysayers).

Time for some education: don't be scared, however do be careful!

When I worked for an electric power generating and transmission utility, an important lesson was not to be scared of electricity, but to be educated about what to do and what not to do in different situations, including in the actual power plant or substation. I was taught, when in the plant or at a substation that I visited in support of the applications and systems I was developing or maintaining, to do certain things. For example, number one, don't touch certain things; number two, if you fall, don't grab anything. The fall may or may not hurt you, let alone the sudden stop wherever you land; however, if you grab something, that might kill you, and you may not be able to let go, further injuring yourself. This was a challenging thought, as we are taught to grab onto something when falling.

What does this have to do with clouds?

Don’t grab and hang-on if you don’t know what you are grabbing on to if you don’t have to.

The cloud crowd can be polarizing, in some ways acting as a lightning rod, drawing scorn, cynicism, skepticism and lambasting, or being poked fun at, given some of the over-the-top hype around clouds today. Now granted, not all cloud evangelists, vendors or cheerleaders deserve to be the brunt of this backlash within the industry; however, it comes with the territory.

I'm in the middle, as I pointed out above, when I talk with vendors, VARs, media, investors and IT customers. Some I talk with are using clouds (perhaps not in compliance with some of the definitions). Some are looking at clouds to move problems or mask issues; others are curious yet skeptical, looking to see where or how they could use clouds to complement their environments. Still others are scared, however maybe in the future they will be more open-minded as they become educated and see technologies evolve or shift beyond a fashionable trend.

So it's time for disclosure: I see IT clouds as complementary, able to co-exist with other IT resources (servers, storage, software). In essence, my view is that clouds are just another tier of IT resources to be used when and where applicable, as opposed to being a complete replacement, or simply ignored.

My point is that cloud computing is another tier of traditional computing or servers, providing different performance, availability, capacity, economic and management attributes compared to other traditional technology delivery vehicles. The same applies to storage, and to data centers or hosting sites in general. This also applies to application services, in that cloud web, email, expense, sales, CRM, ERP, office or other applications are a tier of the same implementations that may exist in a traditional environment. After all, legacy, physical, virtual, grid and cloud IT data centers all have something in common: they rely on physical servers, storage, networks, software, metrics and management involving people, processes and best practices.

Now back to disclosure: I like clouds, however I'm not a cloud cheerleader. I'm skeptical at times of some of the over-the-top hype, yet I also personally use some cloud services and technologies, as well as advise others to leverage cloud services when or where applicable to complement, co-exist with and help enable a green and virtual data center and information factory.

To the cloud crowd cheerleaders: too bad if I don't line up with all of your belief systems, or if you perceive me as raining on your parade by being a skeptic, or what you might think of as a cynic and nonbeliever, even though I use clouds myself.

Likewise, to the true cynics (not skeptics) or naysayers: ease up, I'm not drinking the Kool-Aid of the cheerleaders and evangelists, or at least not in large excessive binge doses. I agree that clouds are not the solution to every IT issue, regardless of what your definition of a cloud happens to be.

To everyone else, regardless of whether you are in the minority or the majority out there who do not fall into one of the two groups above, I have this to say.

Don't be afraid, don't be scared of clouds; learn to navigate your way around and through the various technologies, techniques, products and services and identify where they might complement and enable a flexible, scalable and resilient IT infrastructure.

Take some time to listen and learn, becoming educated on the different types of clouds (public, private, services, products, architectures or marketecture), their attributes (compute, storage, applications, services, cost, availability, performance, protocols, functionality) and their value propositions.

Look into how cloud technologies and techniques might complement your existing environment to meet specific business objectives. You might find there are fits, you might find there are not; however, have a look and do some research so that you can at least hold your ground if storm clouds roll in.

After all, clouds are just another tier of IT resources to add to your tool box, enabling more efficient and effective IT services delivery. Clouds do not have to be the all-or-nothing value proposition that often ends up in discussions due to polarized extreme views and definitions or past experiences.

Look at it this way: IT relies on electricity, however electricity needs to be understood and respected, not to mention used in effective ways. You can be scared of electricity, you can be cavalier around it, or it can be part of your environment and an enabler, as long as you know when, where and how to use it, not to mention when not to use it.

So next time you see a cloud crowd cheerleader, give them a hug, give them a pat on the back, an atta boy or atta girl as they are just doing their jobs, perhaps even following their beliefs and in the line of duty taking a lot of heat from the industry in the pursuit of their work.

On the other hand, as to the cynics and naysayers, they may in fact be using clouds already, perhaps not under the strict definition of some of the chieftains of the cloud crowd.

To everyone else: don't worry, don't be scared about the clouds. Instead, focus on your business and your IT issues, and look at various tiers of technologies that can serve as an enabler in a cost-effective manner.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

StorageIO aka Greg Schulz appears on Infosmack

If you are in the IT industry, and specifically have any interest or tie to data infrastructures from servers to storage and networking, including hardware, software and services, not to mention virtualization and clouds, InfoSmack and Storage Monkeys should be on your read or listen list.

Recently I was invited to be a guest on the InfoSmack podcast, which is a roughly 50-minute talk-show format around storage, networking, virtualization and related topics.

The topics discussed include Sun and Oracle from a storage standpoint and Solid State Disk (SSD), among others.

Now, a word of caution, InfoSmack is not your typical prim and proper venue, nor is it a low class trash talking production.

It's fun and informative, with hosts and attendees who are not afraid of poking fun at themselves while exploring topics and the story behind the story in a candid, unscripted manner.

Check it out.

Cheers – gs

Greg Schulz – StorageIOblog, twitter @storageio Author “The Green and Virtual Data Center” (CRC)

Blame IT on the UN in NYC this week

This week is UN week in NYC, that annual fall event that results in traffic jams that make normal traffic seem like a breeze.

What with the security lockdowns, sudden road closures, re-routes, news crews, security details and the like, it's a wonder anything gets done. I was in NYC for about 26 hours this week at the Storage Decisions event, where I presented on optimizing for performance and capacity to enable efficient and green storage as well as recorded a video on cloud storage, and saw or experienced the delays first hand.

This is not going to be one of those complain-about-how-I-was-inconvenienced rants, rather a bit of fun.

Consequently, should you have or had any issues this past week, do like others and blame the UN. For example, late for a meeting, presentation, conference call, coffee break or lunch, getting home or to the ballpark, blame it on the UN. Other potential items that you can feel free to blame on the UN in NYC this week include:

  • RAID rebuilds on those large disk drives taking too long
  • Server, workstation, desktop, laptop or iPhone reboots taking too long
  • Database consistency checks or virus scans taking too long, you know who you can blame!
  • Cannot get a cell phone, landline or wireless connection inside, outside or anywhere?
  • VMotion taking too long to migrate a server, failover not as fast, you know the drill
  • IT budget scrapped, yet you have to do more, guess who's to blame this week
  • Regulatory compliance, BC/DR, data security have you locked up, yup, that's right!
  • Can't download, upload or access WebEx, FedEx or backup to the cloud, yup, blame it on the UN
  • Can't get a loan or venture capital financing for your startup, it's the UN's fault, right?
  • Your Kindle broke and Amazon took away the books you bought and downloaded?
  • Missed your flight, train or car pool ride in another city, you know the story
  • Interoperability and vendor finger pointing got you in a bind, yup, it's the UN in NYC that's the issue
  • Forest fires or dust storms in Australia, ice cap melting at the North Pole, yup, the UN in NYC this week

Look, I was stuck in traffic and made the best of it, listening to Infosmack #20 while doing some emails and a few calls instead of getting all twisted up about it. I actually like visiting NYC, lots to see and do, however it is also nice to move on. For those who have never experienced NYC during UN week, give it a try sometime.

Cheers – gs

Greg Schulz – StorageIOblog, twitter @storageio Author “The Green and Virtual Data Center” (CRC)

Technorati tags: NYC, UN

Storage Efficiency and Optimization – The Other Green

For those of you in the New York City area, I will be presenting live in person at Storage Decisions September 23, 2009 conference The Other Green, Storage Efficiency and Optimization.

Throw out the "green" buzzword, and you're still left with the task of saving or maximizing use of space, power and cooling while stretching available IT dollars to support growth and business sustainability. For some environments the solution may be consolidation, while others need to maintain quality of service, response time, performance and availability, necessitating faster, energy-efficient technologies to achieve optimization objectives.

To address these and other related issues, you can turn to the cloud, virtualization, intelligent power management, data footprint reduction and data management, not to mention various types of tiered storage and performance optimization techniques. The session will look at various techniques and strategies to optimize on-line active or primary as well as near-line or secondary storage environments during tough economic times, as well as to position for future growth; after all, there is no such thing as a data recession!

Topics, technologies and techniques that will be discussed include among others:

  • Energy efficiency (strategic) vs. energy avoidance (tactical), and what's different between them
  • Optimization and the need for speed vs. the need for capacity, finding the right balance
  • Metrics & measurements for management insight, what the industry is doing (or not doing)
  • Tiered storage and tiered access including SSD, FC, SAS, tape, clouds and more
  • Data footprint reduction (archive, compress, dedupe) and thin provision among others
  • Best practices, financial incentives and what you can do today

This is a free event for IT professionals, however space I hear is limited, learn more and register here.

For those interested in broader IT data center and infrastructure optimization, check out the ongoing seminar series The Infrastructure Optimization and Planning Best Practices (V2.009) – Doing more with less without sacrificing storage, system or network capabilities, which continues September 22, 2009 with a stop in Chicago. This is also a free seminar; register and learn more here or here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Back to School and Dedupe School

Summer is over here in the northern hemisphere and it's back-to-school time.

This coming week I will be the substitute teacher, filling in for my friend Mr. Backup in Minneapolis and Toronto for TechTarget's Dedupe School. If you are in either city and have not yet signed up, check out the link here to learn more.

Hope to see you this week, or next week at Infrastructure Optimization in Chicago or Storage Decisions in NYC, where I will also be presenting, or teaching if you prefer, as well as listening and learning from the attendees about what's on their minds.

Stay current on other upcoming activities on our events page, as well as see what's new or in the news here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Performance = Availability StorageIOblog featured ITKE guest blog

ITKE - IT Knowledge Exchange

Recently IT Knowledge Exchange named me and StorageIOblog as their weekly featured IT blog, by which I'm flattered and honored. Consequently, I did a guest blog for them titled Performance = Availability, Availability = Performance that you can read about here.

For those not familiar with ITKE, take a few minutes to go over and check it out; there is a wealth of information there on a diversity of topics that you can read about, or you can also get involved and participate in the question-and-answer discussions.

Speaking of ITKE, interested in "The Green and Virtual Data Center" (CRC)? Check out this link where you can download a free chapter of my book, along with information on how to order your own copy and a special discount code from CRC Press.

Thank you very much to Sean Brooks of ITKE and his social media team of Michael Morisy and Jenny Mackintosh for naming me featured IT blogger, as well as for the opportunity to do a guest post for them. It has been fantastic working with them, and particularly with Jenny, who helped with all of the logistics in putting together the various pieces, including getting the post up on the web as well as into their newsletter.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Data Center I/O Bottlenecks Performance Issues and Impacts

This is an excerpted blog version of the popular Server and StorageIO Group white paper "IT Data Center and Data Storage Bottlenecks," originally published in August 2006, which is as much if not more relevant today than it was in the past.

Most Information Technology (IT) data centers have bottleneck areas that impact application performance and service delivery to IT customers and users. Possible bottleneck locations, shown in Figure-1, include servers (application, web, file, email and database), networks, application software, and storage systems. For example, users of IT services can encounter delays and lost productivity due to seasonal workload surges or Internet and other network bottlenecks. Network congestion or dropped packets, resulting in wasteful and delayed retransmission of data, can be the result of network component failure, poor configuration or lack of available low latency bandwidth.

Server bottlenecks due to lack of CPU processing power, memory or undersized I/O interfaces can result in poor performance or, in worst-case scenarios, application instability. Application bottlenecks, including database system bottlenecks due to excessive locking, poor query design, data contention and deadlock conditions, result in poor user response time. Storage and I/O performance bottlenecks can occur at the host server due to lack of I/O interconnect bandwidth, such as an overloaded PCI interconnect, storage device contention, and lack of available storage system I/O capacity.

These performance bottlenecks impact most applications and are not unique to large enterprise or scientific high-performance compute (HPC) environments. The direct impacts of data center I/O performance issues include a general slowing of systems and applications, causing lost productivity for users of IT services. Indirect impacts of data center I/O performance bottlenecks include additional work for IT staff to troubleshoot, analyze, re-configure and react to application delays and service disruptions.


Figure-1: Data center performance bottleneck locations

Data center performance bottleneck impacts (see Figure-1) include:

  • Underutilization of disk storage capacity to compensate for lack of I/O performance capability
  • Poor Quality of Service (QoS) causing Service Level Agreements (SLA) objectives to be missed
  • Premature infrastructure upgrades combined with increased management and operating costs
  • Inability to meet peak and seasonal workload demands resulting in lost business opportunity

I/O bottleneck impacts
It should come as no surprise that businesses continue to consume and rely upon larger amounts of disk storage. Disk storage and I/O performance fuel the hungry needs of applications in order to meet SLAs and QoS objectives. The Server and StorageIO Group sees that, even with efforts to reduce storage capacity or improve capacity utilization with information lifecycle management (ILM) and Infrastructure Resource Management (IRM) enabled infrastructures, applications leveraging rich content will continue to consume more storage capacity and require additional I/O performance. Similarly, at least for the next few years, the current trend of making and keeping additional copies of data for regulatory compliance and business continuance is expected to continue. These demands all add up to a need for more I/O performance capability to keep up with server processor performance improvements.


Figure-2: Processing and I/O performance gap

Server and I/O performance gap
The continued need to access more storage capacity results in an alarming trend: the expanding gap between server processing power and the available I/O performance of disk storage (Figure-2). This server-to-I/O performance gap has existed for several decades and continues to widen instead of improving. The net impact is that bottlenecks associated with the server-to-I/O performance gap result in lost productivity for IT personnel and customers who must wait for transactions, queries, and data access requests to be resolved.

Application symptoms of I/O bottlenecks
There are many applications across different industries that are sensitive to timely data access and impacted by common I/O performance bottlenecks. For example, as more users access a popular file, database table, or other stored data item, resource contention will increase. One way resource contention manifests itself is in the form of database “deadlock” which translates into slower response time and lost productivity. 

Given the rise and popularity of internet search engines, search engine optimization (SEO) and on-line price shopping, some businesses have been forced to create expensive read-only copies of databases. These read-only copies are used to support more queries and to keep bottlenecks from impacting time-sensitive transaction databases.

In addition to increased application workload, the IT operational procedures used to manage and protect data also contribute to performance bottlenecks. Data center operational procedures result in additional file I/O and scans for virus checking, database purges and maintenance, data backup, classification, replication, data migration for maintenance and upgrades, as well as data archiving. The net result is that essential data center management procedures contribute to performance challenges and impact business productivity.

Poor response time and increased latency
Generally speaking, as additional activity or application workload, including transactions or file accesses, is performed, I/O bottlenecks result in increased response time or latency (shown in Figure-3). With most performance metrics more is better; however, in the case of response time or latency, less is better. Figure-3 shows the impact as more work is performed (dotted curve) and the resulting I/O bottlenecks have a negative impact by increasing response time (solid curve) above acceptable levels. The specific acceptable response time threshold will vary by application and SLA requirements. The acceptable threshold level, based on performance plans, testing, SLAs and other factors including experience, serves as a guideline between acceptable and poor application performance.

As more workload is added to a system with existing I/O issues, response time will correspondingly increase, as seen in Figure-3. The more severe the bottleneck, the faster response time will deteriorate (e.g. increase) beyond acceptable levels. The elimination of bottlenecks enables more work to be performed while maintaining response time below acceptable service level threshold limits.


Figure-3: I/O response time performance impact
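To make the shape of the curves in Figure-3 concrete, below is a simple illustrative sketch (not from the original white paper) using a basic M/M/1 queueing approximation; the service time and threshold values are made-up examples.

```python
# Illustrative sketch only: a simple M/M/1 queueing approximation showing how
# response time climbs as workload pushes utilization toward 100%, the behavior
# sketched in Figure-3. Service time and SLA threshold are hypothetical values.

def response_time(service_time_ms: float, utilization: float) -> float:
    """M/M/1 approximation: R = S / (1 - U), valid for 0 <= U < 1."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time_ms / (1.0 - utilization)

if __name__ == "__main__":
    service_time_ms = 5.0   # time to complete one I/O with no queuing
    threshold_ms = 25.0     # hypothetical acceptable response time (SLA)
    for u in (0.50, 0.70, 0.80, 0.90, 0.95):
        r = response_time(service_time_ms, u)
        flag = "OK" if r <= threshold_ms else "over threshold"
        print(f"utilization {u:.0%}: response time {r:5.1f} ms  ({flag})")
```

Even in this simplified model, the last few percentage points of added workload do far more damage to response time than the first, which is why removing the bottleneck, rather than masking it, pays off.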

Seasonal and peak workload I/O bottlenecks
Another common challenge and cause of I/O bottlenecks is seasonal and/or unplanned workload increases that result in application delays and frustrated customers. In Figure-4 a workload representing an eCommerce transaction-based system is shown with seasonal spikes in activity (dotted curve). The resulting impact on response time (solid curve) is shown in relation to a threshold line of acceptable response time performance. For example, peaks due to holiday shopping exchanges appear in January, then drop off before increasing near Mother's Day in May; back-to-school shopping in August results in increased activity, as does holiday shopping starting in late November.


Figure-4: I/O bottleneck impact from surge workload activity

Compensating for lack of performance
Besides impacting user productivity due to poor performance, I/O bottlenecks can result in system instability or unplanned application downtime. One only needs to recall recent electric power grid outages that were due to instability and insufficient capacity as a result of increased peak user demand.

Approaches to addressing I/O bottlenecks have been to do nothing (incur and deal with the service disruptions) or to over-configure by throwing more hardware and software at the problem. To compensate for lack of I/O performance and counter the resulting negative impact on IT users, a common approach is to add more hardware to mask or move the problem.

However, this often leads to extra storage capacity being added to make up for a shortfall in I/O performance. By over-configuring to support peak workloads and prevent loss of business revenue, excess storage capacity must be managed throughout the non-peak periods, adding to data center and management costs. The resulting ripple effect is that now more storage needs to be managed, including allocating storage network ports, configuring, tuning, and backing up data. This can and does result in environments that have storage utilization well below 50% of their useful storage capacity. The solution is to address the problem rather than moving and hiding the bottleneck elsewhere (rather like sweeping dust under the rug).

Business value of improved performance
Putting a value on the performance of applications and their importance to your business is a necessary step in the process of deciding where and what to focus on for improvement. For example, what is the value of reducing application response time and the associated business benefit of allowing more transactions, reservations or sales to be made? Likewise, what is the value of improving the productivity of a designer or animator to meet tight deadlines and market schedules? What is the business benefit of enabling a customer to search faster for an item, place an order, access media-rich content, or in general improve their productivity?

Server and I/O performance gap as a data center bottleneck
I/O performance bottlenecks are a widespread issue across most data centers, affecting many applications and industries. Applications impacted by data center I/O bottlenecks that will be looked at in more depth are electronic design automation (EDA), entertainment and media, database online transaction processing (OLTP) and business intelligence. These application categories represent transactional processing, shared file access for collaborative work, and processing of shared, time-sensitive data.

Electronic design
Computer aided design (CAD), computer assisted engineering (CAE), electronic design automation (EDA) and other design tools are used for a wide variety of engineering and design functions. These design tools require fast access to shared, secured and protected data. The objective of using EDA and other tools is to enable faster product development with better quality and improved worker productivity. Electronic components manufactured for the commercial, consumer and specialized markets rely on design tools to speed the time-to-market of new products as well as to improve engineer productivity.

EDA tools, including those from Cadence, Synopsys, Mentor Graphics and others, are used to develop expensive and time-sensitive electronic chips, along with circuit boards and other components, to meet market windows and supplier deadlines. An example of this is a chip vendor being able to simulate, develop, test, produce and deliver a new chip in time for manufacturers to release their new products based on those chips. Another example is aerospace and automotive engineering firms leveraging design tools, including CATIA and UGS, on a global basis, relying on their supplier networks to do the same in a real-time, collaborative manner to improve productivity and time-to-market. This results in contention for shared file and data access and, as a workaround, more copies of data kept as local buffers.

I/O performance impacts and challenges for EDA, CAE and CAD systems include:

  • Delays in drawing and file access resulting in lost productivity and project delays
  • Complex configurations to support computer farms (server grids) for I/O and storage performance
  • Proliferation of dedicated storage on individual servers and workstations to improve performance

Entertainment and media
While some applications are characterized by high bandwidth or throughput, such as streaming video and digital intermediate (DI) processing of 2K (2048 pixels per line) and 4K (4096 pixels per line) video and film, there are many other applications that are also impacted by I/O performance time delays. Even bandwidth-intensive applications for video production and other uses are time sensitive and vulnerable to I/O bottleneck delays. For example, cell phone ring tones, instant messaging, small MP3 audio, and voicemail and e-mail are impacted by congestion and resource contention.

Prepress production and publishing, which require assimilation of many small documents, files and images while undergoing revisions, can also suffer. News and information websites need to look up breaking stories, and entertainment sites need to serve and download popular music along with still images and other rich content; all of this can be negatively impacted by even small bottlenecks. Even with streaming video and audio, access to those objects requires accessing some form of high-speed index to locate where the data files are stored for retrieval. These indexes or databases can become bottlenecks preventing high performance storage and I/O systems from being fully leveraged.

Index files and databases must be searched to determine the location where images and objects, including streaming media, are stored. Consequently, these indices can become points of contention, resulting in bottlenecks that delay processing of streaming media objects. When a cell phone picture is taken and sent to someone, chances are that the resulting image will be stored on network attached storage (NAS) as a file, with a corresponding index entry in a database at some service provider's location. Think about what happens to those servers and storage systems when several people all send photos at the same time.

I/O performance impacts and challenges for entertainment and media systems include:

  • Delays in image and file access resulting in lost productivity
  • Redundant files and storage on local servers to improve performance
  • Contention for resources causing further bottlenecks during peak workload surges

OLTP and business intelligence
Surges in peak workloads result in performance bottlenecks on database and file servers, impacting time-sensitive OLTP systems unless they are over-configured for peak demand. For example, workload spikes due to holiday and back-to-school shopping, spring break and summer vacation travel reservations, Valentine's or Mother's Day gift shopping, and clearance and settlement on peak stock market trading days strain fragile systems. For database systems, maintaining performance for key objects, including transaction logs and journals, is important in order to eliminate performance issues as well as maintain transaction and data integrity.

An example tied to eCommerce is business intelligence systems (not to be confused with back office marketing and analytics systems for research). Online business intelligence systems are popular with online shopping and services vendors who track customer interests and previous purchases to tailor search results, views and make suggestions to influence shopping habits.

Business intelligence systems need to be fast and support rapid lookup of history and other information to provide purchase histories and offer timely suggestions. The relative performance improvements of processors shift the application bottlenecks from the server to the storage access network. These applications have, in some cases, resulted in an exponential increase in query or read operations beyond the capabilities of single database and storage instances, resulting in database deadlock and performance problems or the proliferation of multiple data copies and dedicated storage on application servers.

A more recent contribution to performance challenges, driven by the increased availability of online shopping and price comparison search tools, is the low cost craze (LCC), or price shopping. LCC has created a dramatic increase in the number of read or search queries taking place, further impacting database and file system performance. For example, an airline reservation system that supports price shopping while protecting its time-sensitive transactional reservation systems might create multiple read-only copies of the reservations database for searches. The result is that more copies of data must be maintained across more servers and storage systems, increasing cost and complexity. While expensive, the alternative of doing nothing results in lost business and market share.
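As an illustration of that read-replica approach, here is a minimal sketch in Python; the QueryRouter class, connection objects and SQL statements are hypothetical placeholders for illustration, not taken from any actual airline or reservation system.

    # Illustrative sketch of the read-replica pattern: search/price queries fan
    # out across read-only copies while reservations (writes) stay on the primary.
    import itertools

    class QueryRouter:
        def __init__(self, primary, replicas):
            self.primary = primary                     # handles all writes
            self.replicas = itertools.cycle(replicas)  # round-robin read-only copies

        def execute(self, sql, params=()):
            is_read = sql.lstrip().upper().startswith("SELECT")
            target = next(self.replicas) if is_read else self.primary
            return target.execute(sql, params)

    # Usage (hypothetical connections): price-shopping searches hit replicas,
    # while booking updates go to the primary.
    # router = QueryRouter(primary_conn, [replica1, replica2, replica3])
    # router.execute("SELECT fare FROM flights WHERE origin=? AND dest=?", ("MSP", "JFK"))
    # router.execute("UPDATE seats SET status='booked' WHERE seat_id=?", (42,))

The trade-off shown here is exactly the one described above: reads scale out, but every added replica is another copy of data to synchronize, power and manage.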

I/O performance impacts and challenges for OLTP and business intelligence systems include:

  • Application and database contention, including deadlock conditions, due to slow transactions
  • Disruption to application servers to install special monitoring, load-balancing or I/O driver software
  • Increased management time required to support additional storage needed as an I/O workaround

Summary/Conclusion
It is vital to understand the value of performance, including response time or latency and the number of I/O operations, for each environment and application. While the cost per raw TByte may seem relatively inexpensive, the cost of I/O response time performance also needs to be addressed and put into proper context as part of the data center QoS cost structure.

There are many approaches to addressing data center I/O performance bottlenecks, most centered on adding more hardware or on bandwidth and throughput issues. Time-sensitive applications depend on low response time even as workloads and throughput increase, so latency cannot be ignored. The key to removing data center I/O bottlenecks is to find and address the problem instead of simply moving or hiding it with more hardware and/or software. Adding fast devices such as SSDs may provide relief; however, if the SSDs are attached to high-latency storage controllers, the full benefit may not be realized. Thus, identify and gain insight into data center I/O bottleneck paths, eliminating issues and problems to boost productivity and efficiency.
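To put the SSD-behind-a-slow-controller point in rough numbers, here is a back-of-the-envelope sketch; the latency figures are assumptions chosen only for illustration, not measurements of any particular product.

    # Back-of-the-envelope illustration (numbers are assumptions, not measurements):
    # a fast SSD behind a high-latency controller is limited by the slowest hop.
    ssd_media_latency_ms   = 0.10   # assumed flash media service time
    controller_overhead_ms = 1.00   # assumed legacy controller/firmware overhead

    total_latency_ms = ssd_media_latency_ms + controller_overhead_ms
    iops_per_outstanding_io = 1000.0 / total_latency_ms   # one I/O at a time

    print(f"End-to-end latency: {total_latency_ms:.2f} ms")
    print(f"Controller share of latency: {controller_overhead_ms / total_latency_ms:.0%}")
    print(f"IOPS per outstanding I/O: {iops_per_outstanding_io:.0f}")

Under these assumed numbers the controller contributes roughly 90 percent of the end-to-end response time, which is why buying faster media without examining the full I/O path can disappoint.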

Where to Learn More
Additional information about IT data center, server, storage and I/O networking bottlenecks, along with solutions, can be found at the Server and StorageIO website in the tips, tools and white papers sections, as well as in the news, books and events pages. If you are in the New York area on September 23, 2009, check out my presentation The Other Green – Storage Optimization and Efficiency, which will touch on the above and other related topics. Download your copy of "IT Data Center and Storage Bottlenecks" by clicking here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio


Upcoming Out and About Events

Following up on previous Out and About updates (here and here) about where I have been, here's where I'm going to be over the next couple of weeks.

On September 15th and 16th, 2009, I will be the keynote speaker and will lead a deep dive discussion on data deduplication in Minneapolis, MN and Toronto, ON. Free seminar; register and learn more here.

The Infrastructure Optimization and Planning Best Practices (V2.009) – Doing more with less without sacrificing storage, system or network capabilities seminar series continues September 22, 2009 with a stop in Chicago. Free seminar; register and learn more here.

On September 23, 2009 I will be in New York City at Storage Decisions conference participating in the Ask the Experts during the expo session as well as presenting The Other Green — Storage Efficiency and Optimization.

Throw out the "green" buzzword and you're still left with the task of saving or maximizing use of space, power, and cooling while stretching available IT dollars to support growth and business sustainability. For some environments the solution may be consolidation, while others need to maintain quality of service, response time, performance and availability, necessitating faster, energy-efficient technologies to achieve optimization objectives. To address these and other related issues, you can turn to the cloud, virtualization, intelligent power management, data footprint reduction and data management, not to mention various types of tiered storage and performance optimization techniques. The session will look at various techniques and strategies to optimize both online active or primary storage and near-line or secondary storage environments during tough economic times, as well as to position for future growth; after all, there is no such thing as a data recession!

Topics, technologies and techniques that will be discussed include among others:

  • Energy efficiency (strategic) vs. energy avoidance (tactical)
  • Optimization and the need for speed vs. the need for capacity
  • Metrics and measurements for management insight
  • Tiered storage and tiered access including SSD, FC, SAS and clouds
  • Data footprint reduction (archive, compress, dedupe) and thin provision
  • Best practices, financial incentives and what you can do today

Free event, learn more and register here.

Check out the events page for other upcoming events; I hope to see you this fall while I'm out and about.

Cheers – gs

Greg Schulz – StorageIOblog, twitter @storageio Author “The Green and Virtual Data Center” (CRC)

I/O, I/O, It's off to Virtual Work and VMworld I Go (or went)

Ok, so I should have used that intro last week before heading off to VMworld in San Francisco instead of after the fact.

Think of it as a high-latency title or intro, kind of like attaching a fast SSD to a slow, high-latency storage controller, or a fast server attached to a slow network, or a fast network with slow storage and servers; it is what it is.

I/O virtualization (IOV) and virtual I/O (VIO), along with I/O and networking convergence, have been getting more and more attention lately, particularly on the convergence front. In fact, one might conclude that it is suddenly trendy to be on the IOV, VIO and convergence bandwagon given how cloud, SOA and SaaS hype is being challenged, perhaps even turning to storm clouds?

Let's get back on track, or in the case of the past week, back in the car, back on the plane and back into the virtual office, and look at what it all has to do with virtual I/O and VMworld.

The convergence game has at its center Brocade, coming from the data center and storage-centric I/O corner, challenging Cisco, hailing from the MAN, WAN and LAN general networking corner.

Granted, both vendors have dabbled with success in each other's corners or areas of focus in the past. For example, Brocade has, via acquisitions (McData, Nishan, CNT and INRANGE among others), a diverse and capable stable of local and long-distance SAN connectivity and channel extension for mainframe and open systems, supporting data replication, remote tape and wide-area clustering. Not to mention deep bench experience with the technologies, protocols and partner solutions for LAN, MAN (xWDM), WAN (iFCP, FCIP, etc.) and even FAN (file area networking, aka NAS), along with iSCSI in addition to Fibre Channel and FICON solutions.

Disclosure: Here's another plug ;) Learn more about SANs, LANs, MANs, WANs, POTs, PANs and related technologies and techniques in my book "Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures" (Elsevier).

Cisco, not to be outdone, has a background in the LAN, MAN and WAN space directly or, similar to Brocade, via partnerships, with product depth and experience. In fact, while many of my former INRANGE and CNT associates ended up at Brocade via McData or indirectly, some ended up at Cisco. While Cisco is known for general networking, over the past several years they have gone from zero to being successful in the Fibre Channel and, yes, even the FICON mainframe space, while, like Brocade (HBAs), dabbling in other areas such as servers and storage, not to mention consumer products.

What does this have to do with IOV and VIO, let alone VMworld and my virtual office? Hang on, hold that thought for a moment; let's get the convergence aspect out of the way first.

On the I/O and networking convergence (e.g., Fibre Channel over Ethernet – FCoE) scene, both Brocade (Converged Enhanced Ethernet – CEE) and Cisco (Data Center Ethernet – DCE), along with their partners, are rallying around each other's camps. This is similar to how a pair of prize fighters maneuvers in advance of a match, including plenty of trash talk, hype and all that goes with it. Brocade and Cisco throwing mud balls (or spam) at each other, or having someone else do it, is nothing new; however, in the past each has had its own core areas of focus, coming from different tenets and in some cases selling to different people in an IT environment or in VAR and partner organizations. Brocade and Cisco are not alone, nor is the I/O networking convergence game the only one in play, as it is being complemented by IOV and VIO technologies addressing different value propositions in IT data centers.

Now on to the IOV and VIO aspect along with VMworld.

For those of you who attended VMworld and managed to get outside the session rooms, the media/analyst briefing or reeducation rooms, or the partner and advisory board meetings to walk the expo hall show floor, there was the usual sea of vendors and technology. There were servers (physical and virtual), storage (physical and virtual), terminals, displays and other hardware, I/O and networking, data protection, security, cloud and managed services, development and visualization tools, infrastructure resource management (IRM) software tools, manufacturers and VARs, consulting firms and even some analysts with booths selling their wares, among others.

Likewise, in the onsite physical data center supporting the virtual environment, there were servers, storage, networking, cabling and associated hardware along with applicable software, and tucked away in all of that were some converged I/O and networking and IOV technologies.

Yes, IOV, VIO and I/O networking convergence were at VMworld in force, just ask Jon Torr of Xsigo who was beaming like a proud papa wanting to tell anyone who would listen that his wares were part of the VMworld data center (Disclosure: Thanks for the T-Shirt).

Virtensys had their wares on display, with Bob Nappa more than happy to show the technology beyond a GUI demo, including how their solution uses disk drives and an LSI MegaRAID adapter to support VM boot while leveraging off-the-shelf or existing PCIe adapters (SAS, FC, FCoE, Ethernet, SATA, etc.) and allowing adapter sharing across servers; not to mention, they won the best new technology award at VMworld.

NextIO, which is involved in the IOV/VIO game, was there along with convergence vendors Brocade, Cisco, Qlogic and Emulex, among others. Rest assured, there are many other vendors and VARs in the VIO and IOV game, either still in stealth or semi-stealth or having recently launched.

IOV and VIO solutions, such as those from Aprius, Virtensys, Xsigo and NextIO among others, are complementary to I/O and networking convergence. While they sound similar, there is in fact confusion as to whether Fibre Channel N_Port ID virtualization (NPIV) and VMware virtual adapters constitute IOV and VIO, versus solutions focused on PCIe device and resource extension and sharing.

Another point of confusion around I/O virtualization and virtual I/O is blade system or blade center connectivity solutions such as HP Virtual Connect, IBM Fabric Manager and offerings from Egenera, which add further confusion to the equation. Some of the buzzwords that you will be hearing and reading more about include PCIe Single Root IOV (SR-IOV) and Multi-Root IOV (MR-IOV). Think of it this way: within VMware you have virtual adapters, and with Fibre Channel you have virtualized N_Port IDs for LUN mapping/masking, zone management and other tasks.

IOV enables localized sharing of physical adapters across different physical servers (blades or chassis) at distances measured in a few meters; after all, it is the PCIe bus that is being extended. Thus, it is not a replacement for longer-distance data center solutions such as FCoE, or even SAS for that matter; rather, they are, or at least should be considered, complementary.
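For readers who want to see whether SR-IOV shows up on their own hardware, here is a hedged sketch; it assumes a reasonably current Linux kernel that exposes the sriov_totalvfs and sriov_numvfs attributes in sysfs for SR-IOV capable physical functions, and it simply enumerates what it finds. Availability depends on your kernel, drivers and adapters.

    # Hedged sketch (assumes a reasonably current Linux kernel): SR-IOV capable
    # PCIe physical functions expose sriov_totalvfs / sriov_numvfs in sysfs.
    # This only lists them; it does not configure any virtual functions.
    from pathlib import Path

    def list_sriov_capable_devices() -> None:
        for dev in sorted(Path("/sys/bus/pci/devices").glob("*")):
            total_attr = dev / "sriov_totalvfs"
            if not total_attr.exists():
                continue  # device (or its driver) does not advertise SR-IOV
            total = total_attr.read_text().strip()
            configured = (dev / "sriov_numvfs").read_text().strip()
            print(f"{dev.name}: {configured} of {total} virtual functions configured")

    if __name__ == "__main__":
        list_sriov_capable_devices()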

The following are some links to previous articles and related material, including an excerpt (yes, another plug ;)) from Chapter 9, "Networking with your servers and storage," of my new book "The Green and Virtual Data Center" (CRC). Speaking of virtual and physical, "The Green and Virtual Data Center" (CRC) was on sale at the physical VMworld bookstore this week, as well as at virtual bookstores including Amazon.com.

The Green and Virtual Data Center (CRC) on book shelves at the VMworld bookstore

Links to some IOV, VIO and I/O networking convergence pieces, among others, as well as news coverage, comments and interviews, can be found here and here, with StorageIOblog posts that may be of interest here and here.

SearchSystemChannel: Comparing I/O virtualization and virtual I/O benefits – August 2009

Enterprise Storage Forum: I/O, I/O, It’s Off to Virtual Work We Go – December 2007

Byte and Switch: I/O, I/O, It’s Off to Virtual Work We Go (Book Chapter Excerpt) – April 2009

Thus I went to VMworld in San Francisco this past week, as much of the work I do involves convergence, in line with my background: servers, storage, I/O networking, hardware, software, virtualization, data protection, and performance and capacity planning.

As to the virtual work, well, I spent some time on airplanes this week, which, as is often the case, served as my virtual office. Granted, it was real work that had to be done; however, I also had a chance to meet up with some fellow tweeters at a tweetup Tuesday evening before getting back on a plane, back in my virtual office.

Now, I/O, I/O, it's back to real work I go at Server and StorageIO; kind of rhymes, doesn't it!
