Hard product vs. soft product

In the IT industry, and in data storage, computers, and servers particularly, mention hard product or soft product and what comes to mind?

How about physical vs. virtual servers or storage, hardware vs. software solutions, products vs. services?

By contrast, in the aviation and airline industry among others, mention hard vs. soft product and there is a slight variation: the difference between one provider's service delivery experience and another's.

For example, two or more different airlines or carriers may fly the same aircraft perhaps even with the same engines, instrumentation, navigation electronics and base features, all part of the hard product.

However, their hard product could vary by type of seats, spacing or pitch along with width, overhead luggage room, Video on Demand (VoD) or In Flight Entertainment (IFE) as well as different cabin treatments (carpeting, wall coverings) and galley configurations. Even in scenarios where carriers have the same equipment and hard product, their soft product can differ.

Example of a soft product: service (or lack thereof) being delivered

The soft product is the service delivery experience, including that delivered by the cabin crew (flight attendants and pursers), the food (or lack of it), beverages, presentation, and so forth. Also part of the soft product is how seats are allocated or available for selection, the boarding process, and other items that contribute to the overall customer experience.

This all got me thinking on a recent flight where the hard product (e.g., aircraft) of a particular carrier was identical; however, given transitions taking place, the soft product still differed, as it was not yet fully integrated or merged. What the experience got me thinking about is that in IT, customers or solution providers can buy the same technology or hard product (hardware, software, services) from the same suppliers yet present different soft products or service experiences to their customers.

Example IT hard product (hardware and software) delivering soft product services

IT equipment being used for delivery of different soft products

I'm sure that some of the cloud crowd cheerleaders might even jump up and down and claim that this is the benefit of using managed service providers or similar services to obtain a different soft product. And while that may be true in some instances, it is also true that different traditional IT organizations are able to craft and deploy various types of soft products to their customers to meet different service requirements and cost or economic objectives using the same technology used by others.

A different example of hard vs. soft product is a site I have visited that has mainframes, Windows, and open systems servers, whose business requires a soft product that is highly available, reliable, flexible, fast, and affordable. Needless to say, in that environment, some of the open systems platforms, including Windows, can have reliability close to if not equal to the mainframes.


What is even more amazing is that no special or different hard products (e.g., servers, storage, networks, or software) are being used to achieve those service objectives. Rather, it is the soft product that achieves the results, in terms of how the technologies are used and managed. Likewise, I have heard of other environments with mixed mainframe and open systems, using the same common hard products as other organizations, yet whose soft product is not as robust or reliable. If using the same hard product, that is, the same software, hardware, networks, and services, how could the soft product be any less robust?

The answer is that good and reliable technology is important; however, the technology is only as good as how it is managed, configured, monitored, and deployed, centering on processes, procedures, and best practices.

Next time you are on an airplane, or using some other service that leverages common technologies (hardware, software, or networks), take a moment to look around at the soft product and how the service experience of a common hard product can vary. That is, using common technology, observe how various best practices, policies, and operating principles differ to meet diverse service requirements as well as economic objectives.

What is your take and experience on different hard vs soft products in or around IT?

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Virtual Storage and Social Media: What did EMC not Announce?

Synopsis: EMC made a vision statement in a recent multimedia briefing that has a social networking angle as well as storage virtualization, virtual storage, public and private clouds.

Basically, EMC provided a preview, in a social media friendly manner, of a vision initially referred to as EMC Virtual Storage (aka twitter hash tag #emcvs), which of course sounds similar to a pharmacy chain.

The vision includes stirring up the industry with a new discussion around virtual storage compared to the decade old coverage of storage virtualization.

The underlying theme of this vision is similar to that of virtual servers vs. server virtualization: just as there is the ability to move servers around, so too should there be the ability to move data around more freely on a local or global basis and in real or near real time. In other words, breaking the decades long affinity that has existed between data storage and the data that exists on it (Figure 1). Buzzword bingo themes include federated storage, virtual storage, public and private cloud along with global cache coherency among others.


Figure 1: EMC Virtual Storage (EMCVS) Vision

The rest of the story

On Thursday March 11th 2010 Pat Gelsinger (EMC President and COO, Information Infrastructure Products) held an interactive briefing with the global analyst community pertaining to future EMC trajectory or visions. One of the interesting things about this session was that it was not unique to industry analysts nor was it under NDA.

For example, here is a link that if still active, should provide access to the briefing material.

The vision being talked about includes themes that EMC has discussed in the past, such as virtualized data centers, or, putting a spin on the phrase, data center virtualization, along with public and private clouds as well as infrastructure resource management virtualization (Figure 2):


Figure 2: Public and Private Clouds along with Virtual Data Centers

Figure 2 is a fairly common slide used in many EMC discussions positioning public and private clouds along with virtualized data centers.


Figure 3: Tenets of the EMC Virtual Storage (EMCVS) vision


Figure 4: Enabling mobile data, breaking data and storage affinity


Figure 5: Enabling teleporting and virtual storage

This sets up the story for the need and benefit of distributed cache coherency, similar to the distributed lock management (DLM) used on local and wide area clustered file systems for maintaining data integrity.


Figure 6: Leveraging distributed cache coherency

This discussion around distributed cache coherency should evoke déjà vu of IBM GDPS (Geographically Dispersed Parallel Sysplex) for mainframes, OpenVMS distributed lock management for VAX and Alpha clusters, Oracle RAC, or other parallel and clustered file systems. Likewise, for those familiar with technology from Yotta Yotta, this should also ring familiar.

However, while many are jumping on the Yotta Yotta familiarity bandwagon given comments made by Pat Gelsinger, something that came to mind is: what about EMC GDDR? Do not worry if that is an acronym or product you are not up on as an EMC follower; it stands for EMC Geographically Dispersed Disaster Restart (GDDR), a solution that is an alternative to IBM's proprietary GDPS. Perhaps there is no connection, perhaps there is some; however, what role, if any, including lessons learned, will come from EMC's experience with GDDR, not to mention other clustered file systems?


Figure 7: The EMC vision as presented

One of the interesting things about the vision announcement, and perhaps part of floating it out for discussion, was a comment made by Pat Gelsinger about enabling the wild Wild West for IT, something that perhaps one generation might enjoy, however a notion another would soon forget. I'm sure the EMC marketing team, including their new chief marketing officer (CMO) Jeremy Burton, can fine tune it with time.
 

More on the social networking and non NDA angle

As is often the case with many other vendors, these types of customer, partner, analyst, or media briefings (either online or in person) are under some form of NDA or embargo, as they contain forward looking, yet to be announced products, solutions, technologies, or other business initiatives. Note that these NDA discussions are not typically the same as media embargoes, which portray or pretend to be NDAs in order to sound more important a few days before an announcement that has already been leaked to get extra coverage.

After some amount of time, the information covered in advance briefings is usually formally made public, along with additional details. Sometimes material covered under NDA is shared in advance so that third parties can prepare reports, deep dive analysis or assessments, and other content that is made available at announcement or shortly thereafter. The material is often prepared by partners, vars, media, analysts, consultants, customers, or others outside of the announcing company via different venues ranging from print, online columns, blogs, and tweets to videos and more.

Lately there has been some confusion in the broader IT industry, as well as other industries, as to where and how to classify bloggers, tweeters, or other social media practitioners. After all, is a blogger an analyst, journalist, freelance writer, advisor, vendor, consultant, customer, var, investor, hobbyist, or competitor, not to mention how does information get fed to them?

Likewise, NDAs and embargoes have joined the list of fodder topics that some do not like for various reasons, yet others like to complain about. There is a time and place for real NDAs that cover and address material, discussions, and other information that should not be shared. However, all too often NDAs get watered down, particularly in the press release games where a vendor or public relations (PR) firm will dangle an announcement briefing a couple of days or perhaps a week or two prior to an announcement under the guise that it not be disclosed prior to formal announcement.

Where these NDAs get tricky is that often they are honored by some and ignored by others; thus, those who honor the agreement get left behind by those who break the story. Personally, I do not mind a real NDA that is tied to truly confidential material, discussions, or other information that needs to be kept under wraps for various reasons. However, the value or issues of NDAs are a whole different discussion; for now, let's get back to what EMC did not announce in their recent non-NDA briefing.

Different organizations are addressing social media in various ways, some ignoring it, others embracing it regardless of what it is. EMC is an example of a vendor who has embraced social networking and social media along with traditional means of developing and maintaining relations with the media (media or press relations), customers, partners, vars, consultants, investors (e.g. investor relations) as well as analysts (analyst relations).

For example, EMC works with analysts in traditional ways, as they do with the media and other groups; however, they also recognize that while some analysts (or media or investors or partners or customers or vars, etc.) blog and tweet (among other social networking mediums), not all do (as is also the case with media, customers, vars, and so forth). Likewise, from a social media and networking perspective, EMC does not appear to define audiences based on the medium or tool that they use, but rather takes a matrix or multidimensional approach.

That is, an analyst with a blog is a blogger, a var or independent consultant with a blog is a blogger, and a media person, including freelance writers, journalists, reporters, or publishers, with a blog is a blogger, as are vars, advisors, partners, and competitors with blogs.



Some of the 2009 EMC Bloggers Lounge Visitors

Thus, at their EMCworld event, admission to the bloggers lounge is as simple and nonexclusive as having a blog, regardless of your role or how you use it. On the other hand, information is communicated via different channels: for traditional press via public relations folks, investors through investor relations, analysts via analyst relations, partners and customers through their venues, and so forth.

When you think about it, this makes sense; after all, EMC sells and attaches storage to mainframes, open systems Windows, UNIX, and Linux, as well as virtual servers that use different tools, protocols, languages, and points of interest. Thus it should not be surprising that their approach to communicating with different audiences leverages various mediums for diverse messages at multiple points in time.

 

What does all of this social media discussion have to do with the March 11 EMC event?

In my opinion, this was an experiment of sorts by EMC to test the waters by floating a new vision to their traditional pre-brief audience in advance of talking with media prior to an actual announcement.

That is, EMC did not announce a new product, technology, initiative, business alliance or customer event, rather a vision and trajectory or signaling what they may be doing in the future.

How this ties to social media and networking is that rather than being an event only for those media, bloggers, tweeters, customers, consultants, vars, free lancers, partners or others who agreed to do so under NDA, EMC used the venue as an advance sounding board of sorts.

That is, by sticking to broad vision vs. proprietary, confidential, or sensitive topics, the discussion has been put out in advance in the open to stimulate discussion in traditional reports, articles, columns, or related venues, as well as in near real time via twitter, blogs, and beyond.

Does this mean EMC will be moving away from NDAs anytime soon? I do not think so, as there is still very much a need for advance (and not a couple of weeks prior to announcement) discussions around sensitive information. For example, with the trajectory or visionary discussion last week by EMC, the short presentation and limited slides prompted more questions than they addressed.

Perhaps what we are seeing is a new approach or technique of how organizations can use and bring social networking mediums into the mainstream business process as opposed to being perceived as niche or experimental mediums.

The reason I think it was an experiment is that EMC practices both traditional analyst/media relations along with emerging social media networking relations that include practitioners who span both audiences. For some, the social media bloggers and tweeters are a different audience than traditional media, writers, consultants, or analysts; that is, they are a separate and unique audience.

Thus, in my opinion, and like human knees, elbows, feet, hands, and ears (well, you get the picture), I think there are many different views, thoughts, and interpretations of social media, social networking, blogging, analysts, consultants, advisors, media or press, customers, partners, and so on, with diverse roles, functions, and needs.

Where this comes back to the topic of last week's discussion is that of storage virtualization vs. virtual storage. Rest assured, in the time since the EMC briefing, and certainly in the weeks or months to come, there will be plenty of knees, elbows, hands, and other body parts flying and signaling a particular view or definition of storage virtualization vs. virtual storage.

Of course, some of these will be more entertaining than others ranging from well rehearsed, in some cases over the past decade or more to new and perhaps even revolutionary ones of what is and what is not storage virtualization vs. virtual storage, let alone cloud vs. cluster vs. grid vs. federated and beyond.

 

Additional Comments and thoughts

In general, I like the trajectory vision EMC is rolling out, even if it causes confusion between what is virtual storage vs. storage virtualization; after all, we have been hearing about storage virtualization for over a decade now if not longer. Likewise, there has been plenty of talk about public clouds, so it is refreshing to see less cloud ware or cloud marketecture and more discussion of how to actually leverage what you have to adopt private cloud practices.

I suspect that as the EMC competition starts to hear or piece together what they think this vision is or is not, we should also start to hear some interesting stories, spins, counter pitches, debates, twitter fights, blog slams and YouTube videos, all of which also happen to consume more storage.

I also like what EMC is doing with social media and networking as a means or medium for building and maintaining relationships as well as for information exchange, complementing traditional means and mediums.

In other words, EMC is succeeding with social networking by not using it just as another megaphone to talk at or over people, but rather as a means to engage, to get to know, to challenge, and to exchange, regardless of whether you are a so-called independent blogger, tweeter, analyst, media person, consultant, customer, var, investor, or partner, among others.

If you are not already doing so, here are some EMC folks who actively participate in two way dialogues across different areas with @lendevanna helping to facilitate and leverage the masses of various people and subject matter experts including @chuckhollis @c_weil @cxi @davegraham @gminks @mike_fishman @stevetodd @storageanarchy @storagezilla @Stu and @vcto among many others.

Note for you non twitter types: the previous are twitter handles (names or addresses) that can be accessed by putting https://twitter.com/ in place of the @ sign. For example, @storageio = https://twitter.com/storageio

 

Additional Comments and thoughts:

Here are some twitter comments that I posted last week during the event with hash tag #emcvs:

Is what was presented on the #emcvs #it #storage #virtualization call NDA material = Negative
Is what was presented on the #emcvs #it #storage #virtualization call a product announcement = NOpe
Is what was presented on the #emcvs #it #storage #virtualization call a statement of direction = Kind of
Is what was presented on the #emcvs #it #storage #virtualization call a hint of future functionality = probably
Is what was presented on the #emcvs #it #storage #virtualization call going to be shared with general public = R U reading this?
Is what was presented on the #emcvs #it #storage #virtualization call going to be discussed further = Yup
Is what was presented on the #emcvs #it #storage #virtualization call going to confuse the industry = Maybe
Is what was presented on the #emcvs #it #storage #virtualization call going to confuse customers = Depends on story teller
Is what was presented on the #emcvs #it #storage #virtualization call going to confuse competition = probably
Is what was presented on the #emcvs #it #storage #virtualization call going to provide fodder/fuel for bloggers = Yup
Anything else to add about #emcvs #it #storage #virtualization call today = Stay tuned, watch and listen for more!

Some additional questions and my perspectives on those include:

  • What did EMC announce? Nothing, it was not an announcement; it was a statement of vision.
  • Why did EMC hold a briefing without an NDA and yet announce nothing? It is my opinion that EMC wanted to float an idea or direction, sharing a vision to get discussions going without actually announcing a specific product or technology.
  • Is this going to be a repackaged version of the Invista storage virtualization platform? I do not believe so.
  • Is this going to be a repackaged version of the intellectual property (IP) assets that EMC picked up from the defunct startup called Yotta Yotta? Given some of the references made, along with the themes the discussions centered on, my guess is that some Yotta Yotta IP, along with other technologies, may be part of any future solution.
  • Who or what is YottaYotta? They were a late dot com startup founded in 2000 that went through various incarnations and value propositions with some solutions that shipped. Some of the late era IP included distributed cache coherency and distance enablement of large scale federated storage on a global basis.
  • Can the Yotta Yotta (or here) technology really scale? That remains to be seen, Yotta Yotta had some interesting demos, proof of concept, early adopters and big plans, however they also amounted to Nada Nada, perhaps EMC can make a Lotta Lotta out of it!

 

Other questions are still waiting for answers including among others:

  • Will EMC Virtual Storage (aka emcvs) become a common cure for typical IT infrastructure ailments?
  • Will this restart the debate around the golden rule of virtualization, that whoever controls the virtualization controls the gold, and thus vendor lock-in?
  • Will this be a members only vision where only certain partners can participate?
  • What will other competitors respond with: technology, marketecture, FUD, or something else?
  • What are the specific details of when, where and how the vision is implemented?
  • What will all of this cost, will it work with existing products or is a forklift upgrade needed?
  • Has EMC bitten off more than they can chew or deliver on or is Pat Gelsinger and his crew racing down a mountain and out in front of their skis, or, is this brilliance beyond what we mere mortals can yet comprehend?
  • Can global data cache coherency really be deployed with data integrity on a global and large scale without negatively impacting performance?
  • Can EMC make Lotta Lotta with this vision?

 

Here is what some of the EMC bloggers have had to say so far:

Chuck Hollis aka @chuckhollis had this to say

Stuart Miniman aka @stu had this to say

 

Summing it up for now

Let's see how the rest of the industry responds to this as the vision rolls out and perhaps sooner vs. later becomes technology that gets deployed and used.

I'm skeptical until more details are understood; however, I also like it and am intrigued by whether it can actually jump from Yotta Yotta slide ware to Lotta Lotta deployments.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Green IT, Green Gap, Tiered Energy and Green Myths

There are many different aspects of Green IT along with several myths or misperceptions not to mention missed opportunities.

There is a Green Gap or disconnect between environmentally aware, focused messaging and core IT data center issues. For example, when I ask IT professionals whether they have or are under direction to implement green IT initiatives, the number averages in the 10-15% range.

However, when I ask the same audiences who has or sees power, cooling, floor space, supporting growth, or addressing environmental health and safety (EHS) related issues, the average is 75 to 90%. What this means is a disconnect between what is perceived as being green and opportunities for IT organizations to make improvements from an economic and efficiency standpoint including boosting productivity.

 

Some IT Data Center Green Myths
Is “green IT” a convenient or inconvenient truth or a legend?

When it comes to green and virtual environments, there are plenty of myths and realities, some of which vary depending on market or industry focus, price band, and other factors.

For example, there are lines of thinking that only ultra large data centers are subject to PCFE-related issues, or that all data centers need to be built along the Columbia River basin in Washington State, or that virtualization eliminates vendor lock-in, or that hardware is more expensive to power and cool than it is to buy.

The following are some myths and realities as of today, some of which may be subject to change from reality to myth or from myth to reality as time progresses.

Myth: Green and PCFE issues are applicable only to large environments.

Reality: I commonly hear that green IT applies only to the largest of companies. The reality is that PCFE issues or green topics are relevant to environments of all sizes, from the largest of enterprises to the small/medium business, to the remote office branch office, to the small office/home office or “virtual office,” all the way to the digital home and consumer.

 

Myth: All computer storage is the same, and powering disks off solves PCFE issues.

Reality: There are many different types of computer storage, with various performance, capacity, power consumption, and cost attributes. Although some storage can be powered off, other storage that is needed for online access does not lend itself to being powered off and on. For storage that needs to be always online and accessible, energy efficiency is achieved by doing more with less—that is, boosting performance and storing more data in a smaller footprint using less power.
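The "doing more with less" idea can be expressed as simple work-per-watt metrics. Here is a minimal sketch; the function names and figures are illustrative assumptions, not from any vendor or product:

```python
# Hedged sketch: expressing storage energy efficiency as useful work per watt.
# All names and numbers here are illustrative assumptions, not vendor data.

def iops_per_watt(iops: float, watts: float) -> float:
    """Activity-based efficiency for storage that must stay online."""
    return iops / watts

def gb_per_watt(capacity_gb: float, watts: float) -> float:
    """Capacity-based efficiency for idle or archive storage."""
    return capacity_gb / watts

# An upgraded array can draw more total power yet be more efficient:
old_array = iops_per_watt(iops=50_000, watts=2_500)
new_array = iops_per_watt(iops=120_000, watts=3_000)
print(f"old: {old_array:.0f} IOPS/W, new: {new_array:.0f} IOPS/W")
```

By these measures, a device that consumes more absolute power can still be the greener choice if it does proportionally more work or stores more data in the same footprint.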

 

Myth: Servers are the main consumer of electrical power in IT data centers.

Reality: In the typical IT data center, on average, 50% of electrical power is consumed by cooling, with the balance used for servers, storage, networking, and other aspects. However, in many environments, particularly processing or computation intensive environments, servers in total (including power for cooling and to power the equipment) can be a major power draw.

 

Myth: IT data centers produce 2 to 8% of all global Carbon Dioxide (CO2) and carbon emissions.

Reality: This might perhaps be true, given some creative accounting and marketing math used to help build a justification case or to scare you into doing something. However, the reality is that in the United States, for example, IT data centers consume around 2 to 4% of electrical power (depending on when you read this), and less than 80% of all U.S. CO2 emissions are from electrical power generation, so the math does not quite add up. The reality is this: if no action is taken to improve IT data center energy efficiency, continued demand growth will shift IT power-related emissions from myth to reality, not to mention cause constraints on IT and business sustainability from an economic and productivity standpoint.
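The back-of-the-envelope math behind that reality can be sketched as follows, using only the U.S. upper-bound figures cited above, purely for illustration:

```python
# Sanity check of the "IT = 2 to 8% of CO2" claim using the U.S. numbers above.
# Both inputs are the upper bounds quoted in the text, not measured data.
dc_share_of_electricity = 0.04  # data centers consume ~2 to 4% of U.S. electricity
power_share_of_co2 = 0.80       # <80% of U.S. CO2 comes from power generation

dc_share_of_co2 = dc_share_of_electricity * power_share_of_co2
print(f"Data center share of U.S. CO2: at most ~{dc_share_of_co2:.1%}")
```

Even at the upper bounds, the product is about 3.2%, well short of the 8% top of the claimed range, which is the point of the myth.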

Myth: Server consolidation with virtualization is a silver bullet to address PCFE issues.

Reality: Server virtualization for consolidation is only part of an overall solution that should be combined with other techniques, including lower power, faster and more energy efficient servers, and improved data and storage management techniques.

 

Myth: Hardware costs more to power than to purchase.

Reality: Currently, for some low-cost servers, standalone disk storage, or entry level networking switches and desktops, this may be true, particularly where energy costs are excessively high and the devices are kept and used continually for three to five years. A general rule of thumb is that the actual cost of most IT hardware will be a fraction of the price of the associated management and software tool costs plus facilities and cooling costs. For the most part, at least as of this writing, only small standalone hard disk drives or small entry level volume servers used for three to five years in locations with very high electrical costs can cost more to power than to buy.

 

Regarding this last myth, for the more commonly deployed external storage systems across all price bands and categories, generally speaking, except for extremely inefficient and hot running legacy equipment, the reality is that it is still cheaper to power the equipment than to buy it. Having said that, there are some qualifiers that should also be used as key indicators to keep the equation balanced. These qualifiers include the acquisition cost, if any, for new, expanded, or remodeled habitats or space to house the equipment; the price of energy in a given region, including surcharges as well as cooling; and the length of time and continuous time the device will be used.

For larger businesses, IT equipment in general still costs more to purchase than to power, particularly with newer, more energy efficient devices. However, given rising energy prices, or the need to build new facilities, this could change moving forward, particularly if a move toward energy efficiency is not undertaken.

There are many variables when purchasing hardware, including acquisition cost, the energy efficiency of the device, power and cooling costs for a given location and habitat, and facilities costs. For example, if a new storage solution is purchased for $100,000, yet new habitat or facilities must be built for three to five times the cost of the equipment, those costs must be figured into the purchase cost.

Likewise, if the price of a storage solution decreases dramatically, but the device consumes a lot of electrical power and needs a large cooling capacity while operating in a region with expensive electricity costs, that, too, will change the equation and the potential reality of the myth.
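As a rough illustration of the purchase-vs-power comparison, here is a sketch in which every input is a hypothetical assumption; substitute your own price, power draw, electric rate, and term:

```python
# Hedged sketch comparing acquisition cost vs. energy cost over the service life.
# All figures are assumptions for illustration, not from any vendor or survey.

def energy_cost_usd(watts: float, usd_per_kwh: float, years: float) -> float:
    """Electricity cost to run a device continuously for the given period."""
    hours = years * 365 * 24
    return (watts / 1000.0) * hours * usd_per_kwh

purchase_price = 100_000.0  # storage system price from the example above
draw_watts = 5_000.0        # assumed draw including a cooling overhead share
cost_to_power = energy_cost_usd(draw_watts, usd_per_kwh=0.12, years=5)
print(f"5-year energy: ${cost_to_power:,.0f} vs purchase ${purchase_price:,.0f}")
```

With these assumptions the five-year energy bill comes to roughly a quarter of the purchase price, consistent with the reality stated above; a much cheaper device, higher electric rates, or new facility construction can flip the comparison.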

 

Tiered Energy Sources
Given that IT resources and facilities require energy to power equipment as well as keep it cool, electricity and energy efficiency are popular topics associated with Green IT and economics, with lots of metrics and numbers tossed around. With that in mind, the U.S. national average CO2 emission is 1.34 lb/kWh of electrical power. Granted, this number will vary depending on the region of the country and the source of fuel for the power-generating station or power plant.
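That emission factor makes quick estimates easy. For example, using the 1.34 lb/kWh average cited above (the 500 W device draw below is an assumption for illustration):

```python
# Sketch: annual CO2 for a continuously running device, using the 1.34 lb/kWh
# U.S. average cited above. The 500 W draw is an illustrative assumption.
CO2_LB_PER_KWH = 1.34

def annual_co2_lb(avg_watts: float) -> float:
    """Estimated pounds of CO2 per year for a device running 24x7."""
    kwh_per_year = (avg_watts / 1000.0) * 365 * 24
    return kwh_per_year * CO2_LB_PER_KWH

print(f"500 W device: ~{annual_co2_lb(500.0):,.0f} lb CO2 per year")
```

A regional emission factor (coal-heavy vs. hydro-heavy grid) can move this result substantially in either direction.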

Like tiered IT resources (servers, storage, I/O networks, virtual machines, and facilities), of which there are various tiers or types of technologies to meet various needs, there are also multiple types of energy sources. Different tiers of energy sources vary by their cost, availability, and environmental characteristics, among others. For example, in the US there are different types of coal, and not all coal is as dirty, when combined with emissions air scrubbers, as you might be led to believe; however, there are other energy sources to consider as well.

Coal continues to be a dominant fuel source for electrical power generation both in the United States and abroad, with other fuel sources including oil, natural gas, liquid propane gas (LPG or propane), nuclear, hydro, thermal or steam, wind, and solar. Within a category of fuel, for example coal, there are different emissions per ton of fuel burned. Eastern U.S. coal is higher in CO2 emissions per kilowatt hour than western U.S. lignite coal. However, eastern coal has more British thermal units (Btu) of energy per ton of coal, enabling less coal to be burned in smaller physical power plants.

If you have ever noticed that coal power plants in the United States seem to be smaller in the eastern states than in the Midwest and western states, it’s not an optical illusion. Because eastern coal burns hotter, producing more Btu, smaller boilers and stockpiles of coal are needed, making for smaller power plant footprints. On the other hand, as you move into the Midwest and western states of the United States, coal power plants are physically larger, because more coal is needed to generate 1 kWh, resulting in bigger boilers and vent stacks along with larger coal stockpiles.

On average, a gallon of gasoline produces about 20 lb of CO2, depending on usage and efficiency of the engine as well as the nature of the fuel in terms of octane or amount of Btu. Aviation fuel and diesel fuel differ from gasoline, as does natural gas or various types of coal commonly used in the generation of electricity. For example, natural gas is less expensive than LPG but also provides fewer Btu per gallon or pound of fuel. This means that more natural gas is needed as a fuel to generate a given amount of power.

Recently, while researching small 10 to 12 kW standby generators for my office, I learned about some of the differences between propane and natural gas. What I found was that with natural gas as fuel, a given generator produced about 10.5 kW, whereas the same unit attached to an LPG or propane fuel source produced 12 kW. The trade off was that to get as much power as possible out of the generator, the higher cost LPG was the better choice. To use lower cost fuel but get less power out of the device, the choice would be natural gas. If more power was needed, then a larger generator could be deployed to use natural gas, with the trade off of requiring a larger physical footprint.
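That trade off can be sketched as a small decision: the output ratings come from the comparison above (10.5 kW on natural gas vs. 12 kW on LPG), while the relative fuel costs below are purely hypothetical placeholders, not quotes:

```python
# Standby generator fuel trade off sketch. Output ratings are from the
# text above; the relative fuel costs are hypothetical placeholders.
OUTPUT_KW = {"natural gas": 10.5, "propane (LPG)": 12.0}
RELATIVE_COST = {"natural gas": 1.0, "propane (LPG)": 1.6}  # assumed ratio

def fuel_options(required_kw):
    """Fuels that meet the required output, lowest assumed cost first."""
    ok = [f for f, kw in OUTPUT_KW.items() if kw >= required_kw]
    return sorted(ok, key=lambda f: RELATIVE_COST[f])

print(fuel_options(10.0))  # ['natural gas', 'propane (LPG)']
print(fuel_options(11.5))  # ['propane (LPG)'] -- only LPG reaches 11.5 kW
```

When no fuel meets the requirement, the answer is the larger-generator (and larger footprint) option from the text.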

Oil and gas are not used as much as fuel sources for electrical power generation in the United States as in other countries such as the United Kingdom. Gasoline, diesel and other petroleum-based fuels are used for some power plants in the United States, including standby or peaking plants. In the electrical power generation and transmission (G&T) industry, as in IT, where different tiers of servers and storage are used for different applications, there are different tiers of power plants using different fuels with various costs. Peaking and standby plants are brought online when there is heavy demand for electrical power, during disruptions when a lower-cost or more environmentally friendly plant goes offline for planned maintenance, or in the event of a trip or unplanned outage.

CO2 is commonly discussed with respect to green initiatives and associated emissions; however, there are other so-called greenhouse gases, including nitrous oxide (N2O) and water vapor among others. Carbon makes up only a fraction of CO2. To be specific, only about 27% of a pound of CO2 is carbon; the balance is oxygen. Consequently, carbon emissions tax or trading schemes (ETS), as opposed to CO2 tax schemes, need to account for the amount of carbon per ton of CO2 being put into the atmosphere. In some parts of the world, including the EU and the UK, ETS are either already in place or in initial pilot phases, to provide incentives to improve energy efficiency and use.
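The 27% figure falls out of simple atomic mass arithmetic:

```python
# Mass fraction of carbon in CO2: one carbon atom (~12 g/mol) plus two
# oxygen atoms (~16 g/mol each) per molecule.
C_MASS = 12.011
O_MASS = 15.999
carbon_fraction = C_MASS / (C_MASS + 2 * O_MASS)

print(round(carbon_fraction * 100, 1))  # 27.3 percent carbon by mass
# So a carbon (rather than CO2) tax would meter roughly 0.27 tons of
# carbon for every ton of CO2 emitted.
```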

Meanwhile, in the United States there are voluntary programs for buying carbon offset credits along with initiatives such as the Carbon Disclosure Project (www.cdproject.net), a not-for-profit organization that facilitates the flow of information pertaining to emissions by organizations, so that investors can make informed decisions and business assessments from an economic and environmental perspective. Another voluntary program is the U.S. EPA Climate Leaders initiative, where organizations commit to reduce their GHG emissions to a given level over a specific period of time.

Regardless of your stance or perception on green issues, the reality is that for business and IT sustainability, a focus on ecological and, in particular, the corresponding economic aspects cannot be ignored. There are business benefits to aligning the most energy-efficient and low-power IT solutions, combined with best practices, to meet different data and application requirements in an economic and ecologically friendly manner.

Green initiatives need to be seen in a different light: as business enablers as opposed to ecological cost centers. For example, many local utilities and state energy or environmentally concerned organizations are providing funding, grants, loans or other incentives to improve energy efficiency. Some of these programs can help offset the costs of doing business and going green. Instead of being seen as the cost to go green, by addressing efficiency, the byproducts are economic as well as ecological.

Put a different way, a company can spend carbon credits to offset its environmental impact, similar to paying a fine for noncompliance, or it can achieve efficiency and obtain incentives. There are many solutions and approaches to address these different issues, which will be looked at in the coming chapters.

What does this all mean?
There are real things that can be done today that can be effective toward achieving a balance of performance, availability, capacity, and energy effectiveness to meet particular application and service needs.

Sustaining for economic and ecological purposes can be achieved by balancing performance, availability, capacity and energy to applicable application service levels and physical floor-space constraints, along with intelligent power management. Energy economics should be considered as much a strategic resource of IT data centers as are servers, storage, networks, software and personnel.

The bottom line is that without electrical power, IT data centers come to a halt. Rising fuel prices, strained generating and transmission facilities for electrical power, and a growing awareness of environmental issues are forcing businesses to look at PCFE issues. IT data centers to support and sustain business growth, including storing and processing more data, need to leverage energy efficiency as a means of addressing PCFE issues. By adopting effective solutions, economic value can be achieved with positive ecological results while sustaining business growth.

Want to learn or read more?

Check out Chapter 1 (Green IT and the Green Gap, Real or Virtual?) in my book “The Green and Virtual Data Center” (CRC) here or here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Post Holiday IT Shopping Bargains, Dell Buying Exanet?

For consumers, the time leading up to the Christmas holiday season is usually busy, including door busters as well as Black Friday among other specials for purchasing gifts and other items. However, savvy shoppers will wait until after Christmas or the holidays altogether, perhaps well into the new year, when some good bargains can become available. IT customers are no different, with budgets to use up before the end of the year, thus a flurry of acquisitions that should become evident soon as we enter earnings announcement season.

However, there are also bargains for IT organizations looking to take advantage of special vendor promotions intended to stimulate sales, not to mention for IT vendors to do some shopping of their own. Consequently, in addition to the flurry of merger and acquisition (M&A) activity from last summer through the fall, there have been several recent deals, some of which might make Monty Hall blush!

Some recent acquisition activity includes, among others:

  • Dell bought Perot systems for $3.9B
  • DotHill bought Cloverleaf
  • Texas Memory Systems (TMS) bought Incipient
  • HP bought IBRIX and 3COM among others
  • LSI bought Onstor
  • VMware bought Zimbra
  • Micron bought Numonyx
  • Exar bought Neterion

Now the industry is abuzz about Dell, who is perhaps using some of the loose change left over from holiday sales, as being in the process of acquiring Israeli clustered storage startup Exanet for about $12M USD. Compared to previous Dell acquisitions, including EqualLogic in 2007 for about $1.4B or last year's Perot deal in the $3.9B range, $12M is a bargain and would probably not even put a dent in the sales and marketing advertising budget, let alone corporate cash coffers, which as of their Q3-F10 balance sheet show about $12.795B in cash.

Who is Exanet and what is their product solution?
Exanet is a small Israeli startup providing a clustered, scale out NAS file serving storage solution (Figure 1) that began shipping in 2003. The Exanet solution (ExaStore) can be deployed either as software, or as a packaged solution with ExaStore software installed on standard x86 servers combined with external RAID storage arrays to form a clustered NAS file server.

Product features include global namespace, distributed metadata, expandable file systems, virtual volumes, quotas, snapshots, file migration, replication, virus scanning, load balancing, and NFS, CIFS and AFP protocol support. Exanet scales up to 1 exabyte of storage capacity along with supporting large files and billions of files per cluster.

The target market that Exanet pursues is large scale out NAS where performance (either small random or large sequential I/Os) along with capacity are required. Consequently, in the scale out, clustered NAS file serving space, competitors include IBM GPFS (SONAS), HP IBRIX or PolyServe, Sun Lustre and Symantec SFS among others.

Clustered Storage Model: Source The Green and Virtual Data Center (CRC)
Figure 1: Generic clustered storage model (Courtesy The Green and Virtual Data Center (CRC))

For a turnkey solution, Exanet packaged their cluster file system software on various vendors' servers combined with 3rd party external Fibre Channel or other storage. This should play well for Dell, who can package the Exanet software on its own servers as well as leverage either SAS or Fibre Channel MD1000/MD3000 external RAID storage among other options (see more below).

Click here to learn more about clustered storage including clustered NAS, clustered and parallel file systems.

Dell

What's the Dell play?

  • It's an opportunity to acquire some intellectual property (IP)
  • It's an opportunity to own IP similar to EMC, HP, IBM, NetApp, Oracle and Symantec among others
  • It's an opportunity to address a market gap or need
  • It's an opportunity to sell more Dell servers, storage and services
  • It's an opportune time for doing acquisitions (bargain shopping)

Note: IBM also this past week announced their new bundled scale out clustered NAS file serving solution based on GPFS called SONAS. HP has IBRIX in addition to their previous PolyServe acquisition, Sun has ZFS and Lustre.

How does Exanet fit into the Dell lineup?

  • Dell sells Microsoft based NAS as NX series
  • Dell has an OEM relationship with EMC
  • Dell was OEMing or reselling IBRIX in the past for certain applications or environments
  • Dell has needed to expand its NAS story to balance its iSCSI-centric storage story as well as complement its multifunction block storage solutions (e.g. MD3000) and server solutions.

Why Exanet?
Why Exanet, why not one of the other startups or small NAS or cloud file system vendors including BlueArc, Isilon, Panasas, Parascale, Reldata, OpenE or Zetta among others?

My take is that this is probably because those were either not relevant to what Dell is looking for, lacked a seamless technology and business fit, had technology tied to non-Dell hardware or technology maturity issues, their investors are still expecting a premium valuation, or some combination of the preceding.

Additional thoughts on why Exanet
I think that Dell simply saw an opportunity to acquire some intellectual property (IP), probably including a patent or two. The value of the patents could be in the form of current or future product offerings, perhaps a negotiating tool, or, if nothing else, a marketing tool. As a marketing tool, Dell via their EqualLogic acquisition among others has been able to demonstrate and generate awareness that they actually own some IP vs. OEMing or reselling from others. I also think that this is an opportunity to either fill or supplement the solution offering that IBRIX provided for high performance, bulk storage and scale out file serving needs.

NAS and file serving supporting unstructured data is a strong growth market for commercial, high performance, specialized or research as well as small business environments. Thus, where EqualLogic plays to the iSCSI block theme, Dell needs to expand their NAS and file serving solutions to provide product diversity to meet various customer application needs, similar to what they do with block-based storage. For example, while iSCSI-based EqualLogic PS systems get the bulk of the marketing attention, Dell also has a robust business around the PowerVault MD1000/MD3000 (SAS/iSCSI/FC) and Microsoft multiprotocol-based PowerVault NX series, not to mention their EMC CLARiiON-based OEM solutions (e.g. Dell AX, Dell/EMC CX).

Thus, Dell can complement the Microsoft multiprotocol (block and NAS file) NX with a packaged solution (Dell servers and MD or other affordable block storage powered by Exanet). It is also possible that Dell will find a way to package Exanet as a NAS gateway in front of the iSCSI-based EqualLogic PS systems, though that would make for an expensive scale out NAS solution compared to those from other vendors.

That's it for now.

Let's see how this all plays out.

Cheers gs

Greg Schulz – Author The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier)
twitter @storageio

Technorati tags: Dell

Does IBM Power7 processor announcement signal storage upgrades?

IBM recently announced the Power7 as the latest generation of processors that the company uses in some of its mid range and high end compute servers including the iSeries and pSeries.


IBM Power7 processor wafers (chips)


What is the Power7 processor?
The Power7 is the latest generation of IBM processors (chips) that are used as the CPUs in IBM mid range and high end open systems (pSeries) for Unix (AIX) and Linux as well as for the iSeries (aka the AS400 successor). Building on previous Power series processors, the Power7 increases the performance per core (CPU) along with the number of cores per socket (chip) footprint. For example, each Power7 chip that plugs into a socket on a processor card in a server can have up to 8 cores or CPUs. Note that cores are also sometimes known as micro CPUs, not to be confused with virtual CPUs presented via hypervisor abstraction.

Sometimes you may also hear the term or phrase 2 way, 4 way (not to be confused with a Cincinnati style 4 way chili) or 8 way among others, which refers to the number of cores on a chip. Hence, a dual 2 way would be a pair of processor chips each with 2 cores, while a quad 8 way would be 4 processor chips each with 8 cores, and so on.
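The naming boils down to one line of arithmetic, chips times cores per chip:

```python
# "N-way" arithmetic: total cores = number of chips (sockets) x cores per chip.
def total_cores(chips, way):
    return chips * way

print(total_cores(2, 2))  # dual 2 way -> 4 cores
print(total_cores(4, 8))  # quad 8 way -> 32 cores
```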


IBM Power7 with up to eight cores per processor (chip)

In addition to faster and more cores in a denser footprint, there are also energy efficiency enhancements, including Energy Star for enterprise servers qualification along with an intelligent power management (IPM, also see here) implementation. IPM is implemented in what IBM refers to as Intelligent Energy technology for turning on or off various parts of the system along with varying processor clock speeds. The benefit is that when there is work to be done, it gets done quickly, and when there is less work, some cores can be turned off or clock speeds slowed down. This is similar to what other industry leaders, including Intel, have deployed with their Nehalem series of processors that also support IPM.

Additional features of the Power7 include (varies by system solutions):

  • Energy Star for servers qualification, providing enhanced performance and efficiency.
  • IBM Systems Director Express, Standard and Enterprise Editions for simplified management including virtualization capabilities across pools of Power servers as a single entity.
  • PowerVM (Hypervisor) virtualization for AIX, iSeries and Linux operating systems.
  • ActiveMemory enables effective memory capacity to be larger than physical memory, similar to how virtual memory works within many operating systems. The benefit is to enable a partition to have access to more memory which is important for virtual machines along with the ability to support more partitions in a given physical memory footprint.
  • TurboCore and Intelligent Threads enable workload optimization by selecting the applicable mode for the work to be done. For example, single thread per core along with simultaneous threads (2 or 4) modes per core. The trade off is to have more threads per core for concurrent processing, or, fewer threads to boost single stream performance.

IBM has announced several Power7 enabled or based server system models with various numbers of processors and cores along with standalone and clustered configurations including:

IBM Power7 family of server systems

  • Power 750 Express, 4U server with one to four socket server supporting up to 32 cores (3.0 to 3.5 GHz) and 128 threads (4 threads per core), PowerVM (Hypervisor) along with main memory capacity of 512GB or 1TByte of virtual memory using Active Memory Expansion.
  • Power 755, 32 3.3GHz Power7 cores (8 cores per processor) with memory up to 256GB along with AltiVec and VSX SIMD instruction set support. Up to 64 755 nodes, each with 32 cores, can be clustered together for high performance applications.
  • Power 770, Up to 64 Power7 cores providing more performance while consuming less energy per core compared to previous Power6 generations. Support for up to 2TB of main memory or RAM using 32GB DIMM when available later in 2010.
  • Power 780, 64 Power7 cores with TurboCore workload optimization providing performance boost per core. With TurboCore, 64 cores can operate at 3.8 GHz, or, enable up to 32 cores at 4.1 GHz and twice the amount of cache when more speed per thread is needed. Support for up to 2TB of main memory or RAM using 32GB DIMM when available later in 2010.

Additional Power7 specifications and details can be found here.


What is the DS8000?
The DS8000 is the latest generation of a family of high end enterprise class storage systems supporting IBM mainframe (zSeries), Open systems along with mixed workloads. Being high end open systems or mainframe, the DS8000 competes with similar systems from EMC (Symmetrix/DMX/VMAX), Fujitsu (Eternus DX8000), HDS (Hitachi) and HP (XP series OEM from Hitachi). Previous generations of the DS8000 (aka predecessors) include the ESS (Enterprise Storage System) Model 2105 (aka Shark) and VSS (Versatile Storage Server). Current generation family members include the Power5 based DS8100 and DS8300 along with the Power6 based DS8700.

IBM DS8000 Storage System

Learn more about the DS8000 here, here, here and here.


What is the association between the Power7 and DS8000?
Disclosure: Before I go any further, let's be clear on something: what I am about to post is based entirely on researching, analyzing and correlating (connecting the dots) what is publicly and freely available from IBM on the Web (e.g. there is no NDA material being disclosed here that I am aware of), along with prior trends and tendencies of IBM and their solutions. In other words, you can call it speculation, a prediction, an industry analysis perspective, looking into the proverbial crystal ball or an educated guess, and thus it should not be taken as an indicator of what IBM may actually do or be working on. As to what may actually be done or not done, for that you will need to contact one of the IBM truth squad members.

As to what is the linkage between Power7 and the DS8000?

The linkage between the Power7 and the DS8000 is just that, the Power processors!

At the heart of the DS8000 are Power series processors coupled or clustered together in pairs for performance and availability that run IBM developed storage systems software. While the spin doctors may not agree, essentially the DS8000 and its predecessors are based on and around Power series processors clustered together with a high speed interconnect that combine to host an operating system and IBM developed storage system application software.

Thus IBM has been able, for over a decade, to leverage technology improvement curve advantages with faster processors, increased memory and I/O connectivity in denser footprints while enhancing their storage system application software.

Given that the current DS8000 family members utilize 2 way (2 core) or 4 way (4 core) Power5 and Power6 processors, similar to how their predecessors utilized previous generation Power4, Power3 and so forth processors, it only makes sense that IBM might possibly use a Power7 processor in a future DS8000 (or derivative perhaps even with a different name or model number). Again, this is just based all on historical trends and patterns of IBM storage systems group leveraging the latest generation of Power processors; after all, they are a large customer of the Power systems group.

Consequently it would make sense for IBM storage folks to leverage the new Power7 processors and features similar to how EMC is leveraging Intel processor enhances along with what other vendors are doing.

There is certainly room in the DS8000 architecture for growth in terms of supporting additional nodes or complexes or controllers (or whatever your term of choice is for describing a server), each equipped with multiple processors (chips or sockets) that have multiple cores. While IBM has only commercially released two complex or dual server versions of the DS8000 with various numbers of cores per server, they have come nowhere close to their architectural limit on nodes. In fact, with this release of Power7, as an example, the model 755 can be clustered via InfiniBand with up to 64 nodes, with each node having 4 sockets (e.g. 4 way) with up to 8 cores each. That means, on paper, 64 x 4 x 8 = 2048 cores, and each core could have up to 4 threads for concurrency, or half as many cores for more cache performance. Now will IBM ever come out with a 64 node DS8000 on steroids?
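That on-paper cluster math can be checked in a couple of lines:

```python
# Back of the envelope Power 755 cluster math from the text: 64 nodes,
# 4 sockets per node, up to 8 cores per socket, 4 SMT threads per core.
nodes, sockets_per_node, cores_per_socket, threads_per_core = 64, 4, 8, 4

cores = nodes * sockets_per_node * cores_per_socket
print(cores)                     # 2048 cores on paper
print(cores * threads_per_core)  # 8192 hardware threads at 4 threads per core
```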

Tough to say; maybe, possibly, some day, to play specmanship vs. the EMC VMAX 256 node architectural limit; however, I'm not holding my breath just yet. Thus, with more and faster cores per processor, the ability to increase the number of processors per server or node, along with architectural capabilities to boost the number of nodes in an instance or cluster, on paper alone there is lots of headroom for the DS8000 or a future derivative.

What about software and functionality? Sure, IBM could in theory simply turn the crank and use a new hardware platform that is faster, has more capacity, is denser and offers better energy efficiency, however what about new features?

Can IBM enhance its storage systems application software that it evolved from the ESS with new features to leverage underlying hardware capabilities including TurboCore, PowerVM, device and I/O sharing, Intelligent Energy efficiency along with threads enhancements?

Can IBM leverage those and other features to support not only scaling of performance, availability, capacity and energy efficiency in an economical manner, but also add features for advanced automated tiering or data movement plus other popular industry buzzword functionality?


Additional thoughts and perspectives
One of the things I find interesting is that some IBM folks along with their channel partners will go to great lengths to explain why and how the DS8000 is not just a pair of Power based servers tightly coupled together. Yet, on the other hand, some of those folks will go to great lengths touting the advantages of leveraging off the shelf commercially available servers based on Intel or AMD systems, such as IBM's own XIV storage solution.

I can understand this in the past, when the likes of EMC, Hitachi and Fujitsu were all competing with IBM building bigger and more function-rich monolithic systems; however, that trend is shifting. The trend now, as is being seen with EMC and VMAX, is to decouple and leverage more off the shelf commercially available technology combined with custom ASICs where and when needed.

Thus at a time where more attention and discussion is around clustered, grid, scalable storage systems, will we see or hear the IBM folks change their tune about the architectural scale up and out capabilities of the Power enabled DS8000 family?

There had been some industry speculation that the DS8000 would be the end of the line if the Power7 had not been released. That speculation will now (assuming that IBM leverages the Power7 for storage) shift to whether there will be a Power8 or Power9 and so forth.

From a storage perspective, is the DS8K still relevant?

I say yes, given its installed base and the need for IBM to have an enterprise solution (sorry, IMHO XIV does not fit that bill just yet) of their own, lest they cut an OEM deal with the likes of Hitachi or Fujitsu, which, while possible, I do not see as likely near term. Another soft point on its relevance is to gauge reaction from their competitors, including EMC and HDS.

From a server perspective, what is the benefit of the new Power7 enabled servers from IBM?

Simple, increase scale of performance for single thread as well as concurrent or parallel application workloads.

In other words, supporting more web sites, partitions for virtual machines and guest operating system instances, databases, compute and other applications that demand performance and economy of scale.

This also means that IBM has a platform to aggressively go after Sun Solaris server customers with a lifeline during the Oracle transition, not to mention being a platform for running Oracle in addition to its own UDB/DB2 database. In addition to being a platform for Unix AIX as well as Linux, the Power7 series is also at the heart of the current generation iSeries (the server formerly known as the AS400).


Closing comments (for now):
Given IBM's history of following a Power chip enhancement with a new upgraded version of the DS8000 (or its ESS/2105, aka Shark/VSS, predecessors) within a reasonable amount of time, I would be surprised if we do not see a new DS8000 (perhaps even renamed or renumbered) within the year.

This is similar to how other vendors leverage new processor chip technology evolution to pace their systems upgrades, for example how many vendors who leverage Intel processors have done announcements over the past year since the Nehalem series rolled out, including EMC among others.

Let's see what the IBM truth squads have to say, or, not have to say :)

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio


Technology Tiering, Servers Storage and Snow Removal

Granted it is winter in the northern hemisphere and thus snow storms should not be a surprise.

However, between December 2009 and early 2010 there has been plenty of record activity, from the U.K. (or here) to the U.S. east coast including New York, Boston and Washington DC, across the midwest and out to California. It made for a white Christmas and SANta fun, along with snow fun in general in the new year.

2010 Snow Storm via www.star-telegram.com

What does this have to do with Information Factories aka IT resources including public or private clouds, facilities, server, storage, networking along with data management let alone tiering?

What does this have to do with tiered snow removal, or even snow fun?

Simple: different tools are needed for addressing various types of snow, from wet and heavy to light powdery dustings to deep downfalls. Likewise, there are different types of servers, storage and data networks, along with operating systems, management tools and even hypervisors, to deal with various application needs or requirements.

First, let's look at tiered IT resources (servers, storage, networks, facilities, data protection and hypervisors) to meet various efficiency, optimization and service level needs.

Do you have tiered IT resources?

Let me rephrase that question: do you have different types of servers with various performance, availability, connectivity and software that support various applications and cost levels?

Thus, the whole notion of tiered IT resources is to be able to have different resources that can be aligned to the task at hand in order to meet performance, availability, capacity and energy requirements along with economic and service level agreement (SLA) requirements.

Computers or servers are targeted for different markets including Small Office Home Office (SOHO), Small Medium Business (SMB), Small Medium Enterprise (SME) and ultra large scale or extreme scaling, including high performance super computing. Servers are also positioned for different price bands and deployment scenarios.

General categories of tiered servers and computers include:

  • Laptops, desktops and workstations
  • Small floor standing towers or rack mounted 1U and 2U servers
  • Medium size floor standing towers or larger rack mounted servers
  • Blade Centers and Blade Servers
  • Large size floor standing servers, including mainframes
  • Specialized fault tolerant, rugged and embedded processing or real time servers

Servers have different names: email server, database server, application server, web server, video or file server, network server, security server, backup server or storage server, depending on their use. In each of the previous examples, what defines the type of server is the type of software being used to deliver a type of service. Sometimes the term appliance will be used for a server; this is indicative of the type of service the combined hardware and software solution is providing. For example, the same physical server running different software could be a general purpose application server, a database server running for example Oracle, IBM, Microsoft or Teradata among other databases, an email server or a storage server.

This can lead to confusion when looking at servers, in that a server may be able to support different types of workloads; thus, whether it should be considered a server, storage, networking or application platform depends on the type of software being used on the server. If, for example, storage software in the form of a clustered and parallel file system is installed on a server to create a highly scalable network attached storage (NAS) or cloud based storage service solution, then the server is a storage server. If the server has a general purpose operating system such as Microsoft Windows, Linux or UNIX and a database on it, it is a database server.

While not technically a type of server, some manufacturers use the term tin wrapped software in an attempt to not be classified as an appliance, server or hardware vendor but want their software to be positioned more as a turnkey solution. The idea is to avoid being perceived as a software only solution that requires integration with hardware. The solution is to use off the shelf commercially available general purpose servers with the vendors software technology pre integrated and installed ready for use. Thus, tin wrapped software is a turnkey software solution with some tin, or hardware, wrapped around it.

How about the same with tiered storage?

That is, different tiers (Figure 1) of storage: fast, high performance disk including RAM or flash based SSD, fast Fibre Channel or SAS disk drives, and high capacity SAS and SATA disk drives, along with magnetic tape as well as cloud based backup or archive?

Tiered Storage Resources
Figure 1: Tiered Storage resources

Tiered storage is also sometimes thought of in terms of large enterprise class solutions or midrange, entry level, primary, secondary, near line and offline. Not to be forgotten, there are also tiered networks that support various speeds, convergence, multi-tenancy and other capabilities, from IO Virtualization (IOV) to traditional LAN, SAN, MAN and WANs, including 1Gb Ethernet (1GbE) and 10GbE up to emerging 40GbE and 100GbE, not to mention the various Fibre Channel speeds supporting different protocols.
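To make the tier alignment idea concrete, here is a minimal hypothetical sketch; the tier names echo Figure 1, but the activity thresholds are my own assumptions, not from any specific product or policy:

```python
# Illustrative only: thresholds are assumed for the sketch, not vendor
# guidance. Pick a storage tier from how active and how old the data is.
def pick_storage_tier(accesses_per_day, days_since_modified):
    if accesses_per_day > 1000:
        return "tier 0: RAM or flash based SSD"
    if accesses_per_day > 100:
        return "tier 1: fast Fibre Channel or SAS disk"
    if days_since_modified < 365:
        return "tier 2: high capacity SAS/SATA disk"
    return "tier 3: tape or cloud based archive"

print(pick_storage_tier(5000, 1))   # hot data lands on SSD
print(pick_storage_tier(0, 1200))   # cold data goes to tape or cloud
```

The design choice is the same one the text describes: align the cost and performance of the tier to the activity of the data rather than putting everything on one class of storage.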

The notion behind tiered networks, as with servers and storage, is to align the right technology to the task at hand economically while meeting service needs.

Two other common IT resource tiering techniques involve facilities and data protection. Tiered facilities can indicate size, availability and resiliency among other characteristics. Likewise, tiered data protection aligns the applicable technology to different RTO and RPO requirements, for example using synchronous replication where applicable vs. asynchronous, time delayed replication for longer distances, combined with snapshots. Other forms of tiered data protection include traditional backups to disk, tape or cloud.
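As a hypothetical policy sketch of aligning protection technology to RTO/RPO requirements, consider the following; the thresholds and distance cutoff are assumptions for illustration only:

```python
# Hypothetical sketch: choose a data protection technique from the recovery
# point objective (RPO, seconds of tolerable data loss) and the distance to
# the recovery site. All thresholds are illustrative assumptions.
def pick_protection(rpo_seconds, distance_km):
    if rpo_seconds == 0 and distance_km <= 100:
        # Zero data loss needs synchronous replication, which in turn
        # needs short distances to keep write latency acceptable.
        return "synchronous replication"
    if rpo_seconds < 300:
        # Longer distances or small RPOs: time delayed async plus snapshots.
        return "asynchronous replication with snapshots"
    if rpo_seconds < 86400:
        return "periodic snapshots"
    return "backup to disk, tape or cloud"

print(pick_protection(0, 50))        # synchronous replication
print(pick_protection(60, 500))      # asynchronous replication with snapshots
```

The sketch mirrors the text's point: synchronous where applicable, asynchronous time delayed for longer distances, and traditional backups for the least demanding tiers.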

There is a new, emerging form of tiering in many IT environments: tiered virtualization, or specifically tiered server hypervisors in virtual data centers, with objectives similar to having different server, storage, network, data protection or facilities tiers. Instead of an environment running all VMware, a mix of Microsoft Hyper-V or Xen among other hypervisors may be deployed to meet different application service class requirements. For example, VMware may be used where premium features and functionality are needed by some applications, while others that do not need those features, and that require lower operating costs, leverage Hyper-V or Xen based solutions. Taking the tiering approach a step further, one could also declare tiered databases, for example legacy Oracle vs. MySQL or Microsoft SQL Server among other examples.
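The tiered hypervisor idea can be sketched the same way; the service class names and hypervisor assignments below are my own assumptions, echoing the premium-vs-lower-cost split described above rather than any vendor's guidance:

```python
# Illustrative sketch of tiered hypervisors: map an application's service
# class to a hypervisor tier. Class names and choices are assumptions.
HYPERVISOR_TIERS = {
    "premium": "VMware (premium features and functionality)",
    "standard": "Hyper-V (lower operating cost)",
    "basic": "Xen (lower operating cost)",
}

def pick_hypervisor(service_class):
    # Default to a lower cost tier for unclassified applications.
    return HYPERVISOR_TIERS.get(service_class, "Hyper-V (lower operating cost)")

print(pick_hypervisor("premium"))  # feature-rich tier for demanding apps
print(pick_hypervisor("basic"))    # lower cost tier for everything else
```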

What about IT clouds? Are those different types of resources, or essentially an extension of existing IT capabilities, for example cloud storage being another tier of data storage?

There is another form of tiering, particularly during the winter months in the northern hemisphere where there is an abundance of snow this time of year: tiered snow management, removal or movement technologies.

What about tiered snow removal?

Well, let's get back to that then.

Like IT resources, there are different technologies that can be used for moving, removing, melting or managing snow.

For example, I can't do much about getting rid of snow other than pushing it all down the hill and into the river, something that would take time and lots of fuel. Or, I can manage where I put snow piles to be prepared for the next storm, placing them where they will melt in a way that helps avoid spring flooding. Some technologies can be used for relocating snow elsewhere, kind of like archiving data onto different tiers of storage.

Whether dealing with snowstorms or IT clouds (public or private), virtual, managed service provider (MSP), hosted or traditional IT data centers, all require physical servers, storage, I/O and data networks along with software, including management tools.

Granted, not all servers, storage or networking technology, let alone software, are the same, as they address different needs. IT resources including servers, storage, networks, operating systems and even hypervisors for virtual machines are often categorized and aligned to different tiers corresponding to needs and characteristics (Figure 2).

Tiered IT Resources
Figure 2: Tiered IT resources

For example, in Figure 3 there is a lightweight plastic shovel (Shovel 1) for moving small amounts of snow in a wide stripe or pass. Then there is a narrow shovel for digging things out or breaking up snow piles (Shovel 2). Also shown is a light duty snow blower (snow thrower) capable of dealing with powdery or non-wet snow and grooming tight corners or small areas.

Tiered Snow tools
Figure 3: Tiered Snow management and migration tools

For other light dustings, a yard leaf blower does double duty for migrating or moving snow in small or tight corners such as decks and patios, or for cleanup. Larger snowfalls, or a lot of area to clear, involve heavier duty tools such as the Kawasaki Mule with a 5 foot Curtis plow. The Mule is a multifunction, multiprotocol tool capable of being used for hauling, towing, pulling or recreational tasks.

When all else fails, there is a pickup truck to get out and about, not to mention to pull other vehicles out of ditches or piles of snow when they become stuck!

Snow movement
Figure 4: Sometimes the snow is light, making for fast, low latency migration

Snow movement
Figure 5: And sometimes even snow migration technology goes off line!

And that is it for now!

Enjoy the northern hemisphere winter and snow while it lasts; make the best of it with the right tools to simplify the tasks of movement and management, similar to IT resources.

Keep in mind, it's about the tools and knowing when and how to use them for various tasks with efficiency and effectiveness, along with a bit of snow fun.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Infosmack Episode 34, VMware, Microsoft and More

Following on the heels of several guest appearances late in 2009 (here, here, here and here) on the Storage Monkeys Infosmack weekly podcast, I was recently asked to join them again for the inaugural 2010 show (Episode 34).

Along with VMguru Rich Brambley and hosts Greg Knieriemen and Marc Farley, we discussed several recent industry topics in this first show of the year, which can be accessed here or on iTunes.

Here's a link to the podcast where you can listen to the discussion, including VMware Go, VMware buying Zimbra, vendor alliances such as HP and Microsoft Hyper-V and EMC+Cisco+VMware, along with data protection issues, options (or opportunities) for virtual servers, among other topics.

I have included the following links that pertain to some of the items we discussed during the show.

Enjoy the show.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio


2010 and 2011 Trends, Perspectives and Predictions: More of the same?

2011 is not a typo; I figured that since I'm getting caught up on some things, why not get a jump as well.

Since 2009 went by so fast, and I'm finally getting around to doing an obligatory 2010 predictions post, let's take a look at both 2010 and 2011.

Actually, I'm only now getting around to doing a post here, having already done interviews and articles for others soon to be released.

Based on prior trends and looking at forecasts, a simple prediction is that some of the items for 2010 will apply for 2011 as well, given that some of this year's items may have been predicted by some in 2008, 2007, 2006, 2005 or, well ok, you get the picture. :)

Predictions are fun and funny in that some take them very seriously, while others take them with a grain of salt at best, depending on where you sit. This applies both to the reader and to whoever is making the predictions, along with their various motives or incentives.

Some are serious, some not so much…

For some, predictions are a great way of touting or promoting favorite wares (hard, soft or services) or getting yet another plug (YAP is a TLA BTW) in to meet a coverage or exposure quota.

Meanwhile for others, predictions are a chance to brush up on new terms for the upcoming season of buzzword bingo games (did you pick up on YAP).

In honor of the Vancouver winter games, I'm expecting some cool Olympic sized buzzword bingo games, with a new slippery fast one being federation. Some buzzwords will take a break in 2010 as well as 2011, having been worked pretty hard the past few years, while others that have been on break will reappear well rested, rejuvenated and ready for duty.

Let's also clarify something regarding predictions: they can come from at least two different perspectives. One view is the trend of what will be talked about or discussed in the industry. The other is in terms of what will actually be bought, deployed and used.

What can be confusing is that sometimes the two perspectives are intermixed or assumed to be one and the same, and for 2010 I see that trend continuing. In other words, there is adoption in terms of customers asking about and investigating technologies vs. deployment where they are buying, installing and using those technologies in primary situations.

It is safe to say that there is still no such thing as an information, data or processing recession. Ok, surprise surprise; my dogs could probably have made that prediction during a nap. However, what this means is that more data will need to be moved, processed and stored for longer periods of time and at a lower cost without degrading performance or availability.

This means, denser technologies that enable a lower per unit cost of service without negatively impacting performance, availability, capacity or energy efficiency will be needed. In other words, watch for an expanded virtualization discussion around life beyond consolidation for servers, storage, desktops and networks with a theme around productivity and virtualization for agility and management enablement.

Certainly there will be continued mergers and acquisitions on both a small and large scale, ranging from liquidation sales or bargain hunting to a mega blockbuster or two. I'm thinking in terms of outside the box deals, the type that will have people wondering, perhaps confused, as to why such a deal would be done until the whole picture is revealed and thought out.

In other words, outside of perhaps IBM, HP, Oracle, Intel or Microsoft among a few others, no vendor is too large to be acquired, merged with, or even involved in a reverse merger. I'm also thinking in terms of vendors filling in niche areas as well as building out their larger portfolios and IT stacks for integrated solutions.

Ok, let's take a look at some easy ones, layups or slam dunks:

  • More cluster, cloud conversations and confusion (public vs. private, service vs. product vs. architecture)
  • More server, desktop, IO and storage consolidation (excuse me, server virtualization)
  • Data footprint impact reduction ranging from deletion to archive to compress to dedupe among others
  • SSD and in particular flash continues to evolve with more conversations around PCM
  • Growing awareness of social media as yet another tool for customer relations management (CRM)
  • Security, data loss/leak prevention, digital forensics, PCI (payment card industry) and compliance
  • Focus expands from gaming/digital surveillance/security and energy to healthcare
  • Fibre Channel over Ethernet (FCoE) mainstream in discussions with some initial deployments
  • Continued confusion of Green IT and carbon reduction vs. economic and productivity (Green Gap)
  • No such thing as an information, data or processing recession, granted budgets are strained
  • Server, Storage or Systems Resource Analysis (SRA) with event correlation
  • SRA tools that provide and enable automation along with situational awareness

The green gap of confusion will continue, with carbon or environment centric stories and messages continuing to take a back seat as people realize the other dimension of green: productivity.

As previously mentioned, virtualization of servers and storage continues to be popular, with the focus expanding from consolidation alone to agility, flexibility and enabling production, high performance or other systems that do not lend themselves to consolidation to be virtualized.

6Gb SAS interfaces as well as more SAS disk drives continue to gain popularity. I have said in the past it was a long shot that 8GFC disk drives might appear. We may very well see those in higher end systems, while SAS drives continue to pick up the high performance spinning disk role in midrange systems.

Granted, some types of disk drives will give way over time to others; for example, high performance 3.5" 15.5K Fibre Channel disks will give way to 2.5" 15.5K SAS, boosting density and energy efficiency while maintaining performance. SSD will help to offload hot spots as it has in the past, enabling disks to be used more effectively in their applicable roles or tiers, with a net result of enhanced optimization, productivity and economics, all of which have environmental benefits (e.g. the other Green IT, closing the Green Gap).

What I don't see occurring, at least in 2010:

  • An information or data recession requiring less server, storage, I/O networking or software resources
  • OSD (object based disk storage without a gateway) at least in the context of T10
  • Mainframes, magnetic tape, disk drives, PCs, or Windows going away (at least physically)
  • Cisco cracking top 3, no wait, top 5, no make that top 10 server vendor ranking
  • More respect for growing and diverse SOHO market space
  • iSCSI taking over for all I/O connectivity, however I do see iSCSI expand its footprint
  • FCoE and flash based SSD reaching tipping point in terms of actual customer deployments
  • Large increases in IT Budgets and subsequent wild spending rivaling the dot com era
  • Backup, security, data loss prevention (DLP), data availability or protection issues going away
  • Brett Favre and the Minnesota Vikings winning the Super Bowl

What will be predicted at the end of 2010 for 2011 (some of these will be déjà vu):

  • Many items that were predicted this year, last year, the year before that and so on…
  • Dedupe moving into primary and online active storage, rekindling of dedupe debates
  • Demise of cloud in terms of hype and confusion being replaced by federation
  • Clustered, grid, bulk and other forms of scale out storage grow in adoption
  • Disk, Tape, RAID, Mainframe, Fibre Channel, PCs, Windows being declared dead (again)
  • 2011 will be the year of Holographic storage and T10 OSD (an annual prediction by some)
  • FCoE kicks into broad and mainstream deployment adoption reaching tipping point
  • 16Gb (16GFC) Fibre Channel gets more attention stirring FCoE vs. FC vs. iSCSI debates
  • 100GbE gets more attention along with 4G adoption in order to move more data
  • Demise of iSCSI at the hands of SAS at low end, FCoE at high end and NAS from all angles

Gaining ground in 2010 however not yet in full stride (at least from customer deployment)

  • On the connectivity front, iSCSI, 6Gb SAS, 8Gb Fibre Channel, FCoE and 100GbE
  • SSD/flash based storage everywhere, however continued expansion
  • Dedupe everywhere including primary storage – it's still far from its full potential
  • Public and private clouds along with pNFS as well as scale out or clustered storage
  • Policy based automated storage tiering and transparent data movement or migration
  • Microsoft HyperV and Oracle based server virtualization technologies
  • Open source based technologies along with heterogeneous encryption
  • Virtualization life beyond consolidation addressing agility, flexibility and ease of management
  • Desktop virtualization using Citrix, Microsoft and VMware along with Microsoft Windows 7

Buzzword bingo hot topics and themes (in no particular order) include:

  • 2009 and previous year carry over items including cloud, iSCSI, HyperV, Dedupe, open source
  • Federation takes over some of the work of cloud, virtualization, clusters and grids
  • E2E, End to End management preferably across different technologies
  • SAS, Serial Attached SCSI for server to storage systems and as disk to storage interface
  • SRA, E2E, event correlation and other situational awareness related IRM tools
  • Virtualization, Life beyond consolidation enabling agility, flexibility for desktop, server and storage
  • Green IT, Transitions from carbon focus to economic with efficiency enabling productivity
  • FCoE, Continues to evolve and mature with more deployments however still not at tipping point
  • SSD, Flash based mediums continue to evolve however tipping point is still over the horizon
  • IOV, I/O Virtualization for both virtual and non virtual servers
  • Other new or recycled buzzword bingo candidates include PCoIP, 4G,

RAID will again be pronounced dead or no longer relevant, yet it is being found in more diverse deployments from consumer to enterprise. In other words, RAID may be boring and thus no longer relevant to talk about, yet it is being used everywhere and enhanced in evolutionary, perhaps for some even revolutionary, ways.

Tape remains declared dead (e.g. on the Zombie technology list) yet is being enhanced, purchased and utilized at higher rates, with more data stored than at any point in the past. Instead of being killed off by the disk drive, tape is being kept around both for traditional uses and for new roles where it is best suited, such as long term or bulk off-line storage of data in an ultra dense, energy efficient, not to mention economical, manner.

What I am seeing and hearing is that customers using tape are able to reduce the number of drives or transports, yet by leveraging disk buffers or caches, including VTL and dedupe devices, they are able to operate those devices at higher utilization, requiring fewer devices with more data stored on media than in the past.

Likewise, even though I have been a fan of SSD for about 20 years and am bullish on its continued adoption, I do not see SSD killing off the spinning disk drive anytime soon. Disk drives are helping tape take on its new role by acting as a buffer or cache in the form of VTLs, disk based backup and bulk storage enhanced with compression, dedupe, thin provisioning and replication among other functionality.

There you have it: my predictions, observations and perspectives for 2010 and 2011. It is a broad and diverse list; however, I also get asked about and see a lot of different technologies, techniques and trends tied to IT resources (servers, storage, I/O and networks, hardware, software and services).

Let's see how they play out.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio


EPA Server and Storage Workshop Feb 2, 2010

EPA Energy Star

Following up on a recent post pertaining to US EPA Energy Star® for Servers, Data Center Storage and Data Centers, there will be a workshop held Tuesday, February 2, 2010 in San Jose, CA.

Here is the note (italics added by me for clarity) from the folks at EPA with information about the event and how to participate.

 

Dear ENERGY STAR® Servers and Storage Stakeholders:

Representatives from the US EPA will be in attendance at The Green Grid Technical Forum in San Jose, CA in early February, and will be hosting information sessions to provide updates on recent ENERGY STAR servers and storage specification development activities.  Given the timing of this event with respect to ongoing data collection and comment periods for both product categories, EPA intends for these meetings to be informal and informational in nature.  EPA will share details of recent progress, identify key issues that require further stakeholder input, discuss timelines for the completion, and answer questions from the stakeholder community for each specification.

The sessions will take place on February 2, 2010, from 10:00 AM to 4:00 PM PT, at the San Jose Marriott.  A conference line and Webinar will be available for participants who cannot attend the meeting in person.  The preliminary agenda is as follows:

Servers (10:00 AM to 12:30 PM)

  • Draft 1 Version 2.0 specification development overview & progress report
    • Tier 1 Rollover Criteria
    • Power & Performance Data Sheet
    • SPEC efficiency rating tool development
  • Opportunities for energy performance data disclosure

 

Storage (1:30 PM to 4:00 PM)

  • Draft 1 Version 1.0 specification development overview & progress report
  • Preliminary stakeholder feedback & lessons learned from data collection 

A more detailed agenda will be distributed in the coming weeks.  Please RSVP to storage@energystar.gov or servers@energystar.gov no later than Friday, January 22.  Indicate in your response whether you will be participating in person or via Webinar, and which of the two sessions you plan to attend.

Thank you for your continued support of ENERGY STAR.

 

End of EPA Transmission

For those attending the event, I look forward to seeing you there in person on Tuesday before flying down to San Diego where I will be presenting on Wednesday the 3rd at The Green Data Center Conference.

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio


Poll: Networking Convergence, Ethernet, InfiniBand or both?

I just received an email in my inbox from Voltaire along with a pile of other advertisements, advisories, alerts and announcements from other folks.

What caught my eye about the email was that it announces new survey results that you can read here as well as below.

The question this survey announcement prompts for me, and hence why I am posting it here, is how dominant will InfiniBand be on a go forward basis? The answer, I think, is it depends…

It depends on the target market or audience, what their applications and technology preferences are along with other service requirements.

I think that there is and will remain a place for InfiniBand; the question is where and for what types of environments, as well as why have both InfiniBand and Ethernet, including Fibre Channel over Ethernet (FCoE), in support of unified or converged I/O and data networking.

So here is the note that I received from Voltaire:

 

Hello,

A new survey by Voltaire (NASDAQ: VOLT) reveals that IT executives plan to use InfiniBand and Ethernet technologies together as they refresh or build new data centers. They’re choosing a converged network strategy to improve fabric performance which in turn furthers their infrastructure consolidation and efficiency objectives.

The full press release is below.  Please contact me if you would like to speak with a Voltaire executive for further commentary.

Regards,
Christy

____________________________________________________________
Christy Lynch| 978.439.5407(o) |617.794.1362(m)
Director, Corporate Communications
Voltaire – The Leader in Scale-Out Data Center Fabrics
christyl@voltaire.com | www.voltaire.com
Follow us on Twitter: www.twitter.com/voltaireltd

FOR IMMEDIATE RELEASE:

IT Survey Finds Executives Planning Converged Network Strategy:
Using Both InfiniBand and Ethernet

Fabric Performance Key to Making Data Centers Operate More Efficiently

CHELMSFORD, Mass. and RA'ANANA, Israel, January 12, 2010 – A new survey by Voltaire (NASDAQ: VOLT) reveals that IT executives plan to use InfiniBand and Ethernet technologies together as they refresh or build new data centers. They’re choosing a converged network strategy to improve fabric performance which in turn furthers their infrastructure consolidation and efficiency objectives.

Voltaire queried more than 120 members of the Global CIO & Executive IT Group, which includes CIOs, senior IT executives, and others in the field that attended the 2009 MIT Sloan CIO Symposium. The survey explored their data center networking needs, their choice of interconnect technologies (fabrics) for the enterprise, and criteria for making technology purchasing decisions.

“Increasingly, InfiniBand and Ethernet share the ability to address key networking requirements of virtualized, scale-out data centers, such as performance, efficiency, and scalability,” noted Asaf Somekh, vice president of marketing, Voltaire. “By adopting a converged network strategy, IT executives can build on their pre-existing investments, and leverage the best of both technologies.”

When asked about their fabric choices, 45 percent of the respondents said they planned to implement both InfiniBand with Ethernet as they made future data center enhancements. Another 54 percent intended to rely on Ethernet alone.

Among additional survey results:

  • When asked to rank the most important characteristics for their data center fabric, the largest number (31 percent) cited high bandwidth. Twenty-two percent cited low latency, and 17 percent said scalability.
  • When asked about their top data center networking priorities for the next two years, 34 percent again cited performance. Twenty-seven percent mentioned reducing costs, and 16 percent cited improving service levels.
  • A majority (nearly 60 percent) favored a fabric/network that is supported or backed by a global server manufacturer.

InfiniBand and Ethernet interconnect technologies are widely used in today’s data centers to speed up and make the most of computing applications, and to enable faster sharing of data among storage and server networks. Voltaire’s server and storage fabric switches leverage both technologies for optimum efficiency. The company provides InfiniBand products used in supercomputers, high-performance computing, and enterprise environments, as well as its Ethernet products to help a broad array of enterprise data centers meet their performance requirements and consolidation plans.

About Voltaire
Voltaire (NASDAQ: VOLT) is a leading provider of scale-out computing fabrics for data centers, high performance computing and cloud environments. Voltaire’s family of server and storage fabric switches and advanced management software improve performance of mission-critical applications, increase efficiency and reduce costs through infrastructure consolidation and lower power consumption. Used by more than 30 percent of the Fortune 100 and other premier organizations across many industries, including many of the TOP500 supercomputers, Voltaire products are included in server and blade offerings from Bull, HP, IBM, NEC and Sun. Founded in 1997, Voltaire is headquartered in Ra’anana, Israel and Chelmsford, Massachusetts. More information is available at www.voltaire.com or by calling 1-800-865-8247.

Forward Looking Statements
Information provided in this press release may contain statements relating to current expectations, estimates, forecasts and projections about future events that are "forward-looking statements" as defined in the Private Securities Litigation Reform Act of 1995. These forward-looking statements generally relate to Voltaire’s plans, objectives and expectations for future operations and are based upon management’s current estimates and projections of future results or trends. They also include third-party projections regarding expected industry growth rates. Actual future results may differ materially from those projected as a result of certain risks and uncertainties. These factors include, but are not limited to, those discussed under the heading "Risk Factors" in Voltaire’s annual report on Form 20-F for the year ended December 31, 2008. These forward-looking statements are made only as of the date hereof, and we undertake no obligation to update or revise the forward-looking statements, whether as a result of new information, future events or otherwise.

###

All product and company names mentioned herein may be the trademarks of their respective owners.

 

End of Voltaire transmission:

I/O, storage and networking interface wars come and go, similar to other technology debates over what is best or will reign supreme.

Some recent debates have been around Fibre Channel vs. iSCSI or iSCSI vs. Fibre Channel (depends on your perspective), SAN vs. NAS, NAS vs. SAS, SAS vs. iSCSI or Fibre Channel, Fibre Channel vs. Fibre Channel over Ethernet (FCoE) vs. iSCSI vs. InfiniBand, xWDM vs. SONET or MPLS, IP vs UDP or other IP based services, not to mention the whole LAN, SAN, MAN, WAN POTS and PAN speed games of 1G, 2G, 4G, 8G, 10G, 40G or 100G. Of course there are also the I/O virtualization (IOV) discussions including PCIe Single Root (SR) and Multi Root (MR) for attachment of SAS/SATA, Ethernet, Fibre Channel or other adapters vs. other approaches.

Thus, when I routinely get asked about what is best, my answer is usually a qualified "it depends" based on what you are doing, what you are trying to accomplish, your environment and your preferences among other factors. In other words, I'm not hung up on or tied to any one particular networking transport, protocol, network or interface; rather, I favor the ones that work and are most applicable to the task at hand.

Now getting back to Voltaire and InfiniBand, which I think has a future for some environments; however, I don't see it being the be all, end all it was once promoted to be. And outside of the InfiniBand faithful (there are also iSCSI, SAS, Fibre Channel, FCoE, CEE and DCE among other devotees), I suspect that the results would be mixed.

I suspect that the Voltaire survey reflects its audience; if I surveyed an Ethernet dominated environment I could take a pretty good guess at the results, likewise for a Fibre Channel or FCoE influenced environment. Not to mention the composition of the environment, its focus and the business or applications being supported. One would also expect slightly different survey results from the likes of Aprius, Broadcom, Brocade, Cisco, Emulex, Mellanox (they are also involved with InfiniBand), NextIO, QLogic (they actually do some InfiniBand activity as well), Virtensys or Xsigo (actually, they support convergence of Fibre Channel and Ethernet via InfiniBand) among others.

Ok, so what is your take?

What's your preferred network interface for convergence?

For additional reading, here are some related links:

  • I/O Virtualization (IOV) Revisited
  • I/O, I/O, Its off to Virtual Work and VMworld I Go (or went)
  • Buzzword Bingo 1.0 – Are you ready for fall product announcements?
  • StorageIO in the News Update V2010.1
  • The Green and Virtual Data Center (Chapter 9)
  • Also check out what others including Scott Lowe have to say about IOV here, Stuart Miniman about FCoE here, or Greg Ferro here.
  • Oh, and for what it's worth for those concerned about FTC disclosure, Voltaire is not, nor have they been, a client of StorageIO; however, I used to work for a Fibre Channel, iSCSI, IP storage, LAN, SAN, MAN, WAN vendor and wrote a book on the topics :).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    StorageIO in the News Update V2010.1

    StorageIO is regularly quoted and interviewed in various industry and vertical market venues and publications both on-line and in print on a global basis.

    The following are some coverage, perspectives and commentary by StorageIO on IT industry trends including servers, storage, I/O networking, hardware, software, services, virtualization, cloud, cluster, grid, SSD, data protection, Green IT and more since the last update.

    Realizing that some prefer blogs to webs to twitter to other venues, here are some recent links, among others, to media coverage and comments by me on different topics that can be found at www.storageio.com/news.html:

  • SearchSMBStorage: Comments on EMC Iomega v.Clone for PC data synchronization – Jan 2010
  • Computerworld: Comments on leveraging cloud or online backup – Jan 2010
  • ChannelProSMB: Comments on NAS vs SAN Storage for SMBs – Dec 2009
  • ChannelProSMB: Comments on Affordable SMB Storage Solutions – Dec 2009
  • SearchStorage: Comments on What to buy a geek for the holidays, 2009 edition – Dec 2009
  • SearchStorage: Comments on EMC VMAX storage and 8GFC enhancements – Dec 2009
  • SearchStorage: Comments on Data Footprint Reduction – Dec 2009
  • SearchStorage: Comments on Building a private storage cloud – Dec 2009
  • SearchStorage: Comments on SSD in storage systems – Dec 2009
  • SearchStorage: Comments on slow adoption of file virtualization – Dec 2009
  • IT World: Comments on maximizing data security investments – Nov 2009
  • SearchCIO: Comments on storage virtualization for your organisation – Nov 2009
  • Processor: Comments on how to win approval for hardware upgrades – Nov 2009
  • Processor: Comments on the Future of Servers – Nov 2009
  • SearchITChannel: Comments on Energy-efficient technology sales depend on pitch – Nov 2009
  • SearchStorage: Comments on how to get from Fibre Channel to FCoE – Nov 2009
  • Minneapolis Star Tribune: Comments on Google Wave and Clouds – Nov 2009
  • SearchStorage: Comments on EMC and Cisco alliance – Nov 2009
  • SearchStorage: Comments on HP virtualization enhancements – Nov 2009
  • SearchStorage: Comments on Apple canceling ZFS project – Oct 2009
  • Processor: Comments on EPA Energy Star for Server and Storage Ratings – Oct 2009
  • IT World Canada: Cloud computing, don't be scared, look before you leap – Oct 2009
  • IT World: Comments on stretching your data protection and security dollar – Oct 2009
  • Enterprise Storage Forum: Comments about Fragmentation and Performance? – Oct 2009
  • SearchStorage: Comments about data migration – Oct 2009
  • SearchStorage: Comments about What’s inside internal storage clouds? – Oct 2009
  • Enterprise Storage Forum: Comments about T-Mobile and Clouds? – Oct 2009
  • Storage Monkeys: Podcast comments about Sun and Oracle- Sep 2009
  • Enterprise Storage Forum: Comments on Maxiscale clustered, cloud NAS – Sep 2009
  • SearchStorage: Comments on Maxiscale clustered NAS for web hosting – Sep 2009
  • Enterprise Storage Forum: Comments on who's hot in data storage industry – Sep 2009
  • SearchSMBStorage: Comments on SMB Fibre Channel switch options – Sep 2009
  • SearchStorage: Comments on using storage more efficiently – Sep 2009
  • SearchStorage: Comments on Data and Storage Tiering including SSD – Sep 2009
  • Enterprise IT Planet: Comments on Data Deduplication – Sep 2009
  • SearchDataCenter: Comments on Tiered Storage – Sep 2009
  • Enterprise Storage Forum: Comments on Sun-Oracle Wedding – Aug 2009
  • Processor.com: Comments on Storage Network Snags – Aug 2009
  • SearchStorageChannel: Comments on I/O virtualization (IOV) – Aug 2009
  • SearchStorage: Comments on Clustered NAS storage and virtualization – Aug 2009
  • SearchITChannel: Comments on Solid-state drive prices still hinder adoption – Aug 2009
  • Check out the Content, Tips, Tools, Videos, Podcasts plus White Papers, and News pages for additional commentary, coverage and related content or events.

    Ok, nuff said.

    Cheers gs


    Recent tips, videos, articles and more update V2010.1

    Realizing that some prefer blogs to webs to twitter to other venues, here are some recent links to articles, tips, videos, webcasts and other content that have appeared in different venues since August 2009.

  • i365 Guest Interview: Experts Corner: Q&A with Greg Schulz December 2009
  • SearchCIO Midmarket: Remote-location disaster recovery risks and solutions December 2009
  • BizTech Magazine: High Availability: A Delicate Balancing Act November 2009
  • ESJ: What Comprises a Green, Efficient and Effective Virtual Data Center? November 2009
  • SearchSMBStorage: Determining what server to use for SMB November 2009
  • SearchStorage: Performance metrics: Evaluating your data storage efficiency October 2009
  • SearchStorage: Optimizing capacity and performance to reduce data footprint October 2009
  • SearchSMBStorage: How often should I conduct a disaster recovery (DR) test? October 2009
  • SearchStorage: Addressing storage performance bottlenecks in storage September 2009
  • SearchStorage AU: Is tape the right backup medium for smaller businesses? August 2009
  • ITworld: The new green data center: From energy avoidance to energy efficiency August 2009
  • Video and podcasts include:
    December 2009 Video: Green Storage: Metrics and measurement for management insight
    Discussion between Greg Schulz and Mark Lewis of TechTarget about the importance of metrics and measurement to gauge productivity and efficiency for Green IT and enabling virtual information factories. Click here to watch the Video.

    December 2009 Podcast: iSCSI SANs can be a good fit for SMB storage
    Discussion between Greg Schulz and Andrew Burton of TechTarget about iSCSI and other related technologies for SMB storage. Click here to listen to the podcast.

    December 2009 Podcast: RAID Data Protection Discussion
    Discussion between Greg Schulz and Andrew Burton of TechTarget about RAID data protection, techniques and technologies. Click here to listen to the podcast.

    December 2009 Podcast: Green IT, Efficiency and Productivity Discussion
    Discussion between Greg Schulz and Jon Flower of Adaptec about Green IT, energy efficiency, intelligent power management (IPM), also known as MAID 2.0, and other forms of optimization techniques including SSD. Click here to listen to the podcast sponsored by Adaptec.

    November 2009 Podcast: Reducing your data footprint impact
    Even though many enterprise data storage environments are coping with tightened budgets and reduced spending, overall net storage capacity is increasing. In this interview, Greg Schulz, founder and senior analyst at StorageIO Group, discusses how storage managers can reduce their data footprint. Schulz touches on the importance of managing your data footprint on both online and offline storage, as well as the various tools for doing so, including data archiving, thin provisioning and data deduplication. Click here to listen to the podcast.

    October 2009 Podcast: Enterprise data storage technologies rise from the dead
    In this interview, Greg Schulz, founder and senior analyst of the Storage I/O group, classifies popular technologies such as solid-state drives (SSDs), RAID and Fibre Channel (FC) as “zombie” technologies. Why? These are already set to become part of standard storage infrastructures, says Schulz, and are too old to be considered fresh. But while some consider these technologies to be stale, users should expect to see them in their everyday lives. Click here to listen to the podcast.

    Check out the Tips, Tools and White Papers, and News pages for additional commentary, coverage and related content or events.

    Ok, nuff said.

    Cheers gs


    Poll: What was hot in 2009 and what was not, cast your vote!

    This is the time of year when people make their predictions for the next year.


    Building on some recent surveys and polls including:

    What's your take on Windows 7

    Is IBM XIV still relevant

    EMC and Cisco Acadia VCE, what does it mean?

    What do you think of IT clouds

    What's Your Take on FTC Guidelines For Bloggers?

    Not to mention those over at Storage Monkeys and the customer collective among others.


    Before jumping to what will be hot or a flop in 2010, what do you think were the successful as well as disappointing technologies, trends, events, products or vendors of 2009?


    Cast your vote, including adding your own nominations, in the two polls below.

    What technologies, events, products or vendors did not live up to 2009 predictions?



    What do you think were top 2009 technologies, events or vendors?

    Note:

    Feel free to vote early and often; however, be advised that you will have to be creative in doing so, as single balloting per IP and cookies are enabled to keep things on the down low.

    Check back soon to see how the results play out…


    Cheers gs

    Greg Schulz – StorageIO, Author The Green and Virtual Data Center (CRC)

    Is MAID Storage Dead? I Don't Think So!

    While some vendors are doing better than others, and first generation MAID (Massive or monolithic Array of Idle Disks) might be dead or about to be deceased, spun down or put into a long-term sleep mode, it is safe to say that second generation MAID (e.g. MAID 2.0), also known as intelligent power management (IPM), is alive and doing well.

    In fact, IPM is not unique to disk storage or disk drives; it is also a technique found in current generations of processors such as those from Intel (e.g. Nehalem) and others.

    Other names for IPM include adaptive voltage scaling (AVS), adaptive voltage scaling optimized (AVSO) and adaptive power management (APM) among others.

    The basic concept is to vary the amount of power being used to match the amount of work and service level needed at a point in time, on a granular basis.

    For example, first generation MAID or drive spin down as deployed by vendors such as Copan (which is rumored to be in the process of being spun down as a company; see this blog post by a former Copan employee) was binary. That is, a disk drive was either on or off, and the granularity was the entire storage system. In the case of Copan, the granularity was that a maximum of 25% of the disks could ever be spun up at any point in time. As a point of reference, when I ask IT customers why they don't use MAID or IPM enabled technology, they commonly cite concerns about performance, or more importantly, the perception of bad performance.
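    To make that binary limitation concrete, here is a minimal Python sketch (class and method names are hypothetical, not any vendor's actual software) of a first generation MAID shelf where drives are strictly on or off and no more than 25% may be spun up at once:

```python
class FirstGenMaid:
    """Toy model of a first generation MAID shelf: drives are binary
    (spun up or spun down) and at most 25% may be active at once."""

    def __init__(self, total_drives: int, max_active_fraction: float = 0.25):
        self.total = total_drives
        self.max_active = int(total_drives * max_active_fraction)
        self.active: set[int] = set()

    def request_io(self, drive: int) -> bool:
        """Spin up a drive for I/O; refuse if the 25% cap is already reached."""
        if drive in self.active:
            return True
        if len(self.active) >= self.max_active:
            return False  # caller must wait, or spin another drive down first
        self.active.add(drive)
        return True

    def spin_down(self, drive: int) -> None:
        self.active.discard(drive)


shelf = FirstGenMaid(total_drives=8)   # cap = 2 drives active at once
assert shelf.request_io(0) and shelf.request_io(1)
assert not shelf.request_io(2)         # third drive refused: cap reached
shelf.spin_down(0)
assert shelf.request_io(2)             # room again after a spin down
```

    The wait-for-spin-up path in the refused case is exactly where the performance perception problem comes from.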

    CPU chips have been taking the lead with the ability to vary voltage and clock speed, enabling or disabling electronic circuitry to align with the amount of work needing to be done at a point in time. This more granular approach allows the CPU to run at faster rates when needed and slower rates when possible to conserve energy (here, here and here).

    A common example is a laptop with technology such as SpeedStep or battery stretch saving modes. Disk drives have been following this approach by being able to vary their power usage, adjusting to different spin speeds along with enabling or disabling electronic circuitry.
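    The granular approach described above can be sketched as a simple governor that maps recent utilization to one of several power states, rather than a binary on/off. The states and thresholds below are illustrative only, not any vendor's actual values:

```python
# Illustrative power states: (name, relative power draw, relative performance)
POWER_STATES = [
    ("sleep",      0.05, 0.0),   # heads unloaded, platters stopped
    ("slow_spin",  0.40, 0.3),   # reduced RPM, higher latency
    ("full_speed", 1.00, 1.0),   # normal operation
]

def select_state(utilization: float) -> str:
    """Pick a power state from recent utilization (0.0 to 1.0).
    Thresholds are made up for illustration."""
    if utilization < 0.05:
        return "sleep"
    if utilization < 0.50:
        return "slow_spin"
    return "full_speed"

assert select_state(0.0) == "sleep"
assert select_state(0.2) == "slow_spin"
assert select_state(0.9) == "full_speed"
```

    The point of the intermediate state is that a drive can keep serving I/O (at higher latency) while drawing a fraction of full power, instead of forcing the all-or-nothing choice of first generation MAID.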

    On a granular basis, second generation MAID with IPM enabled technology can operate on a LUN or volume group basis, across different RAID levels and types of disk drives, depending on the specific vendor implementation. Some examples of vendors implementing various forms of IPM for second generation MAID include Adaptec, EMC, Fujitsu Eternus, HDS (AMS), HGST (disk drives), Nexsan and Xyratex among many others.

    Something else taking place in the industry is that vendors seem to be shying away from the term MAID, as there is some stigma associated with the performance issues of some first generation products.

    This is not all that different from what took place about 15 years ago when the first purpose built monolithic RAID arrays appeared on the market, products such as the one from SF2 (aka the South San Francisco Forklift company) called Failsafe (here and here), which was bought by MTI with patents later sold to EMC.

    Failsafe, or what many at DEC referred to as Fail Some, was a large refrigerator sized device with 5.25” disk drives configured as RAID 5 with dedicated hot spare disk drives. Thus its performance was ok for the time doing random reads; however, writes in the pre write-back cache RAID 5 days were less than spectacular.
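    The RAID 5 small-write penalty behind that less than spectacular write performance is easy to show. Without write-back cache, each small write costs four disk I/Os (read old data, read old parity, write new data, write new parity), with the new parity computed by XOR. A short sketch (the function name is mine, for illustration):

```python
def raid5_small_write(old_data: int, old_parity: int, new_data: int) -> int:
    """Read-modify-write parity update for a RAID 5 small write.
    New parity = old parity XOR old data XOR new data.
    Total cost: 2 reads + 2 writes = 4 disk I/Os per small write."""
    return old_parity ^ old_data ^ new_data

# A stripe of three data blocks plus parity (values are arbitrary bytes).
data = [0b1010, 0b0110, 0b0001]
parity = data[0] ^ data[1] ^ data[2]

# Overwrite block 1 and update parity incrementally...
new_block = 0b1111
parity = raid5_small_write(data[1], parity, new_block)
data[1] = new_block

# ...and the incremental parity matches a full recompute of the stripe.
assert parity == data[0] ^ data[1] ^ data[2]
```

    Write-back cache hid those four I/Os from the host, which is a big part of why later RAID products shed the early black eye.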

    Failsafe and other early RAID (and here) implementations received a black eye from some due to performance, availability and other issues, until best practices and additional enhancements such as multiple RAID levels and cache appeared in follow on products.

    What that trip down memory (or nightmare) lane has to do with MAID, and particularly with first generation products that did their part to help establish a new technology, is that those early RAID products also gave way to second, third, fourth, fifth, sixth and beyond generations of RAID products.

    The same can be expected with MAID, as we are seeing more vendors jumping in on the second generation, also known as drive spin down, with more in the wings.

    Consequently, don't judge MAID based solely on the first generation products, which could be thought of as advanced technology proof of concept solutions that paved the way for future solutions.

    Just like RAID has become so ubiquitous that it has been declared dead, making it another zombie technology (dead, however still being developed, produced, bought and put to use), follow on IPM enabled generations of technology will be more transparent. That is, similar to finding multiple RAID levels in most storage, look for IPM features including variable drive speeds, power settings and performance options on a go forward basis. These newer solutions may not carry the MAID name; however, the spirit and function of intelligent power management without performance compromise does live on.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved