Green IT, Green Gap, Tiered Energy and Green Myths

There are many different aspects of Green IT, along with several myths or misperceptions, not to mention missed opportunities.

There is a Green Gap or disconnect between environmentally aware, focused messaging and core IT data center issues. For example, when I ask IT professionals whether they have or are under direction to implement green IT initiatives, the number averages in the 10-15% range.

However, when I ask the same audiences who has, or sees, power, cooling, floor space, supporting growth, or environmental health and safety (EHS) related issues, the average is 75 to 90%. What this means is that there is a disconnect between what is perceived as being green and the opportunities for IT organizations to make improvements from an economic and efficiency standpoint, including boosting productivity.

 

Some IT Data Center Green Myths
Is “green IT” a convenient or inconvenient truth or a legend?

When it comes to green and virtual environments, there are plenty of myths and realities, some of which vary depending on market or industry focus, price band, and other factors.

For example, there are lines of thinking that only ultra large data centers are subject to PCFE-related issues, or that all data centers need to be built along the Columbia River basin in Washington State, or that virtualization eliminates vendor lock-in, or that hardware is more expensive to power and cool than it is to buy.

The following are some myths and realities as of today, some of which may be subject to change from reality to myth or from myth to reality as time progresses.

Myth: Green and PCFE issues are applicable only to large environments.

Reality: I commonly hear that green IT applies only to the largest of companies. The reality is that PCFE issues or green topics are relevant to environments of all sizes, from the largest of enterprises to the small/medium business, to the remote office branch office, to the small office/home office or “virtual office,” all the way to the digital home and consumer.

 

Myth: All computer storage is the same, and powering disks off solves PCFE issues.

Reality: There are many different types of computer storage, with various performance, capacity, power consumption, and cost attributes. Although some storage can be powered off, other storage that is needed for online access does not lend itself to being powered off and on. For storage that needs to be always online and accessible, energy efficiency is achieved by doing more with less—that is, boosting performance and storing more data in a smaller footprint using less power.

 

Myth: Servers are the main consumer of electrical power in IT data centers.

Reality: In the typical IT data center, on average, 50% of electrical power is consumed by cooling, with the balance used for servers, storage, networking, and other aspects. However, in many environments, particularly processing or computation intensive environments, servers in total (including power for cooling and to power the equipment) can be a major power draw.
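As a back-of-the-envelope illustration of what that rough 50% cooling split implies, here is a quick sketch in Python using hypothetical numbers rather than measurements from any particular facility:

    # Rough sketch with assumed numbers: split of facility power between
    # cooling and IT equipment, per the ~50% cooling figure above.
    facility_kw = 1000.0          # total facility draw (hypothetical)
    cooling_fraction = 0.50       # share consumed by cooling (from the text)
    cooling_kw = facility_kw * cooling_fraction
    it_kw = facility_kw - cooling_kw   # servers, storage, networking, other
    print(f"Cooling: {cooling_kw:.0f} kW, IT and other gear: {it_kw:.0f} kW")
    # With a 50/50 split, each watt trimmed from IT gear avoids roughly
    # another watt of cooling.
    print(f"Facility power avoided per 10 kW of IT load removed: "
          f"{10 / (1 - cooling_fraction):.0f} kW")

The point being that reducing the IT equipment load also reduces the cooling load that goes along with it.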

 

Myth: IT data centers produce 2 to 8% of all global Carbon Dioxide (CO2) and carbon emissions.

Reality: This might perhaps be true, given some creative accounting and marketing math used to help build a justification case or to scare you into doing something. However, the reality is that in the United States, for example, IT data centers consume around 2 to 4% of electrical power (depending on when you read this), and less than 80% of all U.S. CO2 emissions are from electrical power generation, so the math does not quite add up. The reality is that if no action is taken to improve IT data center energy efficiency, continued demand growth will shift IT power-related emissions from myth to reality, not to mention constrain IT and business sustainability from an economic and productivity standpoint.

Myth: Server consolidation with virtualization is a silver bullet to address PCFE issues.

Reality: Server virtualization for consolidation is only part of an overall solution that should be combined with other techniques, including lower power, faster and more energy efficient servers, and improved data and storage management techniques.

 

Myth: Hardware costs more to power than to purchase.

Reality: Currently, for some low-cost servers, standalone disk storage, or entry-level networking switches and desktops, this may be true, particularly where energy costs are excessively high and the devices are kept in continuous use for three to five years. A general rule of thumb is that the actual cost of most IT hardware will be a fraction of the associated management and software tool costs plus facilities and cooling costs. For the most part, at least as of this writing, it is mainly small standalone hard disk drives or entry-level volume servers, bought and then used continuously for three to five years in locations with very high electrical costs, for which the power bill can approach the purchase price.

 

Regarding this last myth, for the more commonly deployed external storage systems across all price bands and categories, generally speaking, and except for extremely inefficient and hot-running legacy equipment, the reality is that it is still cheaper to power the equipment than to buy it. Having said that, there are some qualifiers that should also be used as key indicators to keep the equation balanced. These qualifiers include the acquisition cost, if any, of new, expanded, or remodeled habitats or space to house the equipment; the price of energy in a given region, including surcharges; cooling; and the length of time and how continuously the device will be used.

For larger businesses, IT equipment in general still costs more to purchase than to power, particularly with newer, more energy efficient devices. However, given rising energy prices, or the need to build new facilities, this could change moving forward, particularly if a move toward energy efficiency is not undertaken.

There are many variables when purchasing hardware, including acquisition cost, the energy efficiency of the device, power and cooling costs for a given location and habitat, and facilities costs. For example, if a new storage solution is purchased for $100,000, yet new habitat or facilities must be built for three to five times the cost of the equipment, those costs must be figured into the purchase cost.

Likewise, if the price of a storage solution decreases dramatically, but the device consumes a lot of electrical power and needs a large cooling capacity while operating in a region with expensive electricity costs, that, too, will change the equation and the potential reality of the myth.
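To make the comparison concrete, here is a rough sketch in Python using hypothetical prices, power draws, cooling overhead, and energy rates (not any specific product's numbers) that weighs acquisition cost against the energy and cooling bill over a three- to five-year service life:

    purchase_price = 100_000.00   # acquisition cost in dollars (hypothetical)
    device_kw = 5.0               # average electrical draw in kW (hypothetical)
    cooling_overhead = 1.0        # extra kW of cooling per kW of gear (assumed)
    energy_cost_kwh = 0.12        # $/kWh including surcharges (varies by region)
    years_in_service = 4          # within the three- to five-year window
    hours = years_in_service * 365 * 24
    energy_cost = device_kw * (1 + cooling_overhead) * hours * energy_cost_kwh
    print(f"Energy and cooling over {years_in_service} years: ${energy_cost:,.0f}")
    print("Costs more to power than to buy" if energy_cost > purchase_price
          else "Still costs more to buy than to power")

Plugging in different electricity prices, power draws, or facilities costs is how the equation tips one way or the other.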

 

Tiered Energy Sources
Given that IT resources and facilities require energy to power equipment as well as to keep it cool, electricity is a popular topic associated with Green IT, economics, and efficiency, with lots of metrics and numbers tossed around. With that in mind, the U.S. national average CO2 emission is 1.34 lb per kWh of electrical power. Granted, this number will vary depending on the region of the country and the source of fuel for the power-generating station or power plant.
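For example, a quick sketch (the device draw is hypothetical; the emission factor is the national average cited above and will vary by region and fuel mix) of turning a device's power draw into annual CO2:

    co2_lb_per_kwh = 1.34             # U.S. national average cited above
    device_kw = 1.5                   # assumed average draw of a storage shelf
    annual_kwh = device_kw * 24 * 365
    annual_co2_lb = annual_kwh * co2_lb_per_kwh
    print(f"{annual_kwh:,.0f} kWh/yr -> {annual_co2_lb:,.0f} lb CO2/yr "
          f"({annual_co2_lb / 2000:.1f} short tons)")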

Like tiered IT resources (servers, storage, I/O networks, virtual machines, and facilities), of which there are various tiers or types of technologies to meet various needs, there are also multiple types of energy sources. Different tiers of energy sources vary by cost, availability, and environmental characteristics, among other factors. For example, in the U.S. there are different types of coal, and not all coal is as dirty, when combined with emissions air scrubbers, as you might be led to believe; however, there are other energy sources to consider as well.

Coal continues to be a dominant fuel source for electrical power generation both in the United States and abroad; other fuel sources include oil, gas, natural gas, liquid propane gas (LPG or propane), nuclear, hydro, thermal or steam, wind, and solar. Within a category of fuel, for example coal, there are different emissions per ton of fuel burned. Eastern U.S. coal is higher in CO2 emissions per kilowatt-hour than western U.S. lignite coal. However, eastern coal has more British thermal units (Btu) of energy per ton, enabling less coal to be burned in smaller physical power plants.

If you have ever noticed that coal power plants in the United States seem to be smaller in the eastern states than in the Midwest and western states, it’s not an optical illusion. Because eastern coal burns hotter, producing more Btu, smaller boilers and stockpiles of coal are needed, making for smaller power plant footprints. On the other hand, as you move into the Midwest and western states of the United States, coal power plants are physically larger, because more coal is needed to generate 1 kWh, resulting in bigger boilers and vent stacks along with larger coal stockpiles.

On average, a gallon of gasoline produces about 20 lb of CO2, depending on usage and efficiency of the engine as well as the nature of the fuel in terms of octane or amount of Btu. Aviation fuel and diesel fuel differ from gasoline, as does natural gas or various types of coal commonly used in the generation of electricity. For example, natural gas is less expensive than LPG but also provides fewer Btu per gallon or pound of fuel. This means that more natural gas is needed as a fuel to generate a given amount of power.

Recently, while researching small 10 to 12 kW standby generators for my office, I learned about some of the differences between propane and natural gas. What I found was that with natural gas as fuel, a given generator produced about 10.5 kW, whereas the same unit attached to an LPG or propane fuel source produced 12 kW. The trade-off was that to get as much power as possible out of the generator, the higher-cost LPG was the better choice. To use lower-cost fuel but get less power out of the device, the choice would be natural gas. If more power were needed, then a larger generator could be deployed to use natural gas, with the trade-off of requiring a larger physical footprint.

Oil and gas are not used as much for electrical power generation in the United States as in other countries such as the United Kingdom. Gasoline, diesel, and other petroleum-based fuels are used for some power plants in the United States, including standby or peaking plants. In the electrical power generation and transmission (G&T) industry, as in IT where different tiers of servers and storage are used for different applications, there are different tiers of power plants using different fuels with various costs. Peaking and standby plants are brought online when there is heavy demand for electrical power, during disruptions when a lower-cost or more environmentally friendly plant goes offline for planned maintenance, or in the event of a trip or unplanned outage.

CO2 is commonly discussed with respect to green and associated emissions; however, there are other so-called greenhouse gases (GHG), including nitrogen dioxide (NO2) and water vapor, among others. Carbon makes up only a fraction of CO2. To be specific, only about 27% of a pound of CO2 is carbon; the balance is not. Consequently, carbon emissions tax or trading schemes (ETS), as opposed to CO2 tax schemes, need to account for the amount of carbon per ton of CO2 being put into the atmosphere. In some parts of the world, including the EU and the UK, ETS are either already in place or in initial pilot phases, to provide incentives to improve energy efficiency and use.
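As a simple worked example (using rounded atomic masses; the emissions figure is hypothetical), the roughly 27% carbon fraction falls out of the chemistry:

    carbon_mass = 12.011                       # atomic mass of carbon
    co2_mass = carbon_mass + 2 * 15.999        # one carbon plus two oxygen atoms
    carbon_fraction = carbon_mass / co2_mass   # roughly 0.27
    co2_tons = 100.0                           # hypothetical annual CO2 emissions
    print(f"Carbon fraction of CO2: {carbon_fraction:.1%}")
    print(f"{co2_tons:.0f} tons of CO2 contain about "
          f"{co2_tons * carbon_fraction:.1f} tons of carbon")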

Meanwhile, in the United States there are voluntary programs for buying carbon offset credits along with initiatives such as the Carbon Disclosure Project. The Carbon Disclosure Project (www.cdproject.net) is a not-for-profit organization that facilitates the flow of information pertaining to organizations' emissions so that investors can make informed decisions and business assessments from an economic and environmental perspective. Another voluntary program is the United States EPA Climate Leaders initiative, in which organizations commit to reduce their GHG emissions to a given level or over a specific period of time.

Regardless of your stance or perception on green issues, the reality is that for business and IT sustainability, a focus on ecological and, in particular, the corresponding economic aspects cannot be ignored. There are business benefits to aligning the most energy efficient and low power IT solutions combined with best practices to meet different data and application requirements in an economic and ecologically friendly manner.

Green initiatives need to be seen in a different light: as business enablers as opposed to ecological cost centers. For example, many local utilities and state energy or environmentally concerned organizations are providing funding, grants, loans, or other incentives to improve energy efficiency. Some of these programs can help offset the costs of doing business and going green. Instead of being seen as the cost of going green, addressing efficiency yields by-products that are economic as well as ecological.

Put a different way, a company can buy carbon credits to offset its environmental impact, similar to paying a fine for noncompliance, or it can achieve efficiency and obtain incentives. There are many solutions and approaches to address these different issues, which will be looked at in the coming chapters.

What does this all mean?
There are real things that can be done today that can be effective toward achieving a balance of performance, availability, capacity, and energy effectiveness to meet particular application and service needs.

Sustainability for economic and ecological purposes can be achieved by balancing performance, availability, capacity, and energy against applicable application service levels and physical floor space constraints, along with intelligent power management. Energy economics should be considered as much a strategic IT data center resource as servers, storage, networks, software, and personnel.

The bottom line is that without electrical power, IT data centers come to a halt. Rising fuel prices, strained electrical generating and transmission facilities, and a growing awareness of environmental issues are forcing businesses to look at PCFE issues. To support and sustain business growth, including storing and processing more data, IT data centers need to leverage energy efficiency as a means of addressing PCFE issues. By adopting effective solutions, economic value can be achieved with positive ecological results while sustaining business growth.


Want to learn or read more?

Check out Chapter 1 (Green IT and the Green Gap, Real or Virtual?) in my book “The Green and Virtual Data Center” (CRC) here or here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Does IBM Power7 processor announcement signal storage upgrades?

IBM recently announced the Power7 as the latest generation of processors that the company uses in some of its mid range and high end compute servers including the iSeries and pSeries.


IBM Power7 processor wafers (chips)

 

What is the Power7 processor?
The Power7 is the latest generation of IBM processors (chips) used as the CPUs in IBM mid-range and high-end open systems (pSeries) for Unix (AIX) and Linux as well as in the iSeries (aka the AS400 successor). Building on previous Power series processors, the Power7 increases the performance per core (CPU) along with the number of cores per socket (chip) footprint. For example, each Power7 chip that plugs into a socket on a processor card in a server can have up to 8 cores or CPUs. Note that cores are sometimes also known as micro CPUs or virtual CPUs, not to be confused with the virtual CPUs presented via hypervisor abstraction.

Sometimes you may also hear the term or phrase 2 way, 4 way (not to be confused with a Cincinnati style 4 way chili), or 8 way, among others, which refers to the number of cores on a chip. Hence, a dual 2 way would be a pair of processor chips each with 2 cores, while a quad 8 way would be 4 processor chips each with 8 cores, and so on.


IBM Power7 with up to eight cores per processor (chip)

In addition to faster and more cores in a denser footprint, there are also energy efficiency enhancements, including Energy Star for enterprise servers qualification along with an intelligent power management (IPM, also see here) implementation. IPM is implemented in what IBM refers to as Intelligent Energy technology for turning various parts of the system on or off along with varying processor clock speeds. The benefit is that when there is work to be done, it gets done quickly, and when there is less work, some cores can be turned off or clock speeds slowed down. This is similar to what other industry leaders, including Intel, have deployed with their Nehalem series of processors, which also support IPM.

Additional features of the Power7 include (varies by system solutions):

  • Energy Star for servers qualification, providing enhanced performance and efficiency.
  • IBM Systems Director Express, Standard and Enterprise Editions for simplified management including virtualization capabilities across pools of Power servers as a single entity.
  • PowerVM (Hypervisor) virtualization for AIX, iSeries and Linux operating systems.
  • ActiveMemory enables effective memory capacity to be larger than physical memory, similar to how virtual memory works within many operating systems. The benefit is to enable a partition to have access to more memory which is important for virtual machines along with the ability to support more partitions in a given physical memory footprint.
  • TurboCore and Intelligent Threads enable workload optimization by selecting the applicable mode for the work to be done. For example, single thread per core along with simultaneous threads (2 or 4) modes per core. The trade off is to have more threads per core for concurrent processing, or, fewer threads to boost single stream performance.

IBM has announced several Power7 enabled or based server system models with various numbers of processors and cores along with standalone and clustered configurations including:

IBM Power7 family of server systems

  • Power 750 Express, a 4U, one- to four-socket server supporting up to 32 cores (3.0 to 3.5 GHz) and 128 threads (4 threads per core), PowerVM (hypervisor), along with a main memory capacity of 512GB, or 1TByte of virtual memory using Active Memory Expansion.
  • Power 755, 32 3.3 GHz Power7 cores (8 cores per processor) with memory up to 256GB, along with AltiVec and VSX SIMD instruction set support. Up to 64 755 nodes, each with 32 cores, can be clustered together for high performance applications.
  • Power 770, up to 64 Power7 cores providing more performance while consuming less energy per core compared to the previous Power6 generation. Support for up to 2TB of main memory or RAM using 32GB DIMMs when available later in 2010.
  • Power 780, 64 Power7 cores with TurboCore workload optimization providing a performance boost per core. With TurboCore, 64 cores can operate at 3.8 GHz, or up to 32 cores at 4.1 GHz with twice the amount of cache when more speed per thread is needed. Support for up to 2TB of main memory or RAM using 32GB DIMMs when available later in 2010.

Additional Power7 specifications and details can be found here.

 

What is the DS8000?
The DS8000 is the latest generation of a family of high-end enterprise class storage systems supporting IBM mainframe (zSeries) and open systems along with mixed workloads. Being high-end open systems and mainframe storage, the DS8000 competes with similar systems from EMC (Symmetrix/DMX/VMAX), Fujitsu (Eternus DX8000), HDS (Hitachi) and HP (XP series, OEMed from Hitachi). Previous generations of the DS8000 (aka predecessors) include the ESS (Enterprise Storage System) Model 2105 (aka Shark) and VSS (Versatile Storage Server). Current generation family members include the Power5 based DS8100 and DS8300 along with the Power6 based DS8700.

IBM DS8000 Storage System

Learn more about the DS8000 here, here, here and here.

 

What is the association between the Power7 and DS8000?
Disclosure: Before I go any further, let's be clear on something. What I am about to post is based entirely on researching, analyzing, and correlating (connecting the dots) what is publicly and freely available from IBM on the Web (e.g., there is no NDA material being disclosed here that I am aware of), along with prior trends and tendencies of IBM and its solutions. In other words, you can call it speculation, a prediction, an industry analyst perspective, looking into the proverbial crystal ball, or an educated guess, and thus it should not be taken as an indicator of what IBM may actually do or be working on. As to what may actually be done or not done, for that you will need to contact one of the IBM truth squad members.

As to what is the linkage between Power7 and the DS8000?

The linkage between the Power7 and the DS8000 is just that, the Power processors!

At the heart of the DS8000 are Power series processors coupled or clustered together in pairs for performance and availability that run IBM developed storage systems software. While the spin doctors may not agree, essentially the DS8000 and its predecessors are based on and around Power series processors clustered together with a high speed interconnect that combine to host an operating system and IBM developed storage system application software.

Thus, for over a decade, IBM has been able to leverage technology improvement curve advantages with faster processors, increased memory, and I/O connectivity in denser footprints while enhancing its storage system application software.

Given that the current DS8000 family members utilize 2 way (2 core) or 4 way (4 core) Power5 and Power6 processors, similar to how their predecessors utilized previous generation Power4, Power3 and so forth processors, it only makes sense that IBM might possibly use a Power7 processor in a future DS8000 (or derivative perhaps even with a different name or model number). Again, this is just based all on historical trends and patterns of IBM storage systems group leveraging the latest generation of Power processors; after all, they are a large customer of the Power systems group.

Consequently, it would make sense for the IBM storage folks to leverage the new Power7 processors and features, similar to how EMC is leveraging Intel processor enhancements, along with what other vendors are doing.

There is certainly room in the DS8000 architecture for growth in terms of supporting additional nodes or complexes or controllers (or whatever your term of choice is for describing a server), each equipped with multiple processors (chips or sockets) that have multiple cores. While IBM has only commercially released two-complex or dual-server versions of the DS8000 with various numbers of cores per server, it has come nowhere close to the architecture's limit on nodes. In fact, with this release of Power7, as an example, the model 755 can be clustered via InfiniBand with up to 64 nodes, each node having 4 sockets (e.g., 4 way) with up to 8 cores each. That means, on paper, 64 x 4 x 8 = 2048 cores, and each core could have up to 4 threads for concurrency, or half as many cores for more cache performance. Now, will IBM ever come out with a 64 node DS8000 on steroids?

Tough to say; maybe possibly some day to play specsmanship vs. the EMC VMAX 256 node architectural limit, however I'm not holding my breath just yet. Thus, with more and faster cores per processor, the ability to increase the number of processors per server or node, along with architectural capabilities to boost the number of nodes in an instance or cluster, on paper alone there is lots of headroom for the DS8000 or a future derivative.
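For what it is worth, the "on paper" math above is easy to reproduce; the numbers below are the architectural ceilings mentioned, not a shipping or announced configuration:

    nodes = 64                # InfiniBand cluster limit mentioned for the model 755
    sockets_per_node = 4      # 4 way nodes
    cores_per_socket = 8      # up to 8 cores per Power7 chip
    threads_per_core = 4      # up to 4 threads per core
    cores = nodes * sockets_per_node * cores_per_socket
    print(f"Cores on paper: {cores}")                        # 64 x 4 x 8 = 2048
    print(f"Threads on paper: {cores * threads_per_core}")   # 8192 with 4 threads per core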

What about software and functionality? Sure, IBM could in theory simply turn the crank and use a new hardware platform that is faster, has more capacity, is denser, and has better energy efficiency, however what about new features?

Can IBM enhance its storage systems application software that it evolved from the ESS with new features to leverage underlying hardware capabilities including TurboCore, PowerVM, device and I/O sharing, Intelligent Energy efficiency along with threads enhancements?

Can IBM leverage those and other features to support not only scaling of performance, availability, capacity and energy efficiency in an economical manner, but also add features for advanced automated tiering or data movement plus other popular industry buzzword functionality?

 

Additional thoughts and perspectives
One of the things I find interesting is that some IBM folks along with their channel partners will go to great lengths to explain why and how the DS8000 is not just a pair of Power-based servers tightly coupled together. Yet, on the other hand, some of those same folks will go to great lengths touting the advantages of leveraging off-the-shelf or commercial servers based on Intel or AMD, such as IBM's own XIV storage solution.

I can understand in the past when the likes of EMC, Hitachi and Fujitsu were all competing with IBM building bigger and more function rich monolithic systems, however that trend is shifting. The trend now as is being seen with EMC and VMAX is to decouple and leverage more off the shelf commercially available technology combined with custom ASICs where and when needed.

Thus, at a time when more attention and discussion is focused on clustered, grid, and scalable storage systems, will we see or hear the IBM folks change their tune about the architectural scale-up and scale-out capabilities of the Power-enabled DS8000 family?

There had been some industry speculation that the DS8000 would be the end of the line if the Power7 had not been released; that speculation will now (assuming IBM leverages the Power7 for storage) shift to whether there will be a Power8 or Power9 and so forth.

From a storage perspective, is the DS8K still relevant?

I say yes, given its installed base and the need for IBM to have an enterprise solution of its own (sorry, IMHO XIV does not fit that bill just yet), lest it cut an OEM deal with the likes of Hitachi or Fujitsu, which, while possible, I do not see as likely near term. Another soft point on its relevance is to gauge the reaction from competitors, including EMC and HDS.

From a server perspective, what is the benefit of the new Power7 enabled servers from IBM?

Simple, increase scale of performance for single thread as well as concurrent or parallel application workloads.

In other words, supporting more web sites, partitions for virtual machines and guest operating system instances, databases, compute and other applications that demand performance and economy of scale.

This also means that IBM has a platform to aggressively go after Sun Solaris server customers with a lifeline during the Oracle transition, not to mention a platform for running Oracle in addition to its own UDB/DB2 database. In addition to being a platform for Unix (AIX) as well as Linux, the Power7 series is also at the heart of the current generation iSeries (the server formerly known as the AS400).


Closing comments (for now):
Given IBM's history of following a Power chip enhancement with a new upgraded version of the DS8000 (or its ESS/2105 aka Shark/VSS predecessors) within a reasonable amount of time, I would be surprised if we do not see a new DS8000 (perhaps even renamed or renumbered) within the year.

This is similar to how other vendors leverage new processor chip technology evolution to pace their systems upgrades; for example, many vendors who leverage Intel processors have made announcements over the past year since the Nehalem series rolled out, including EMC among others.

Let's see what the IBM truth squads have to say, or, not have to say :)

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

California Center for Sustainable Energy (CCSE)



CCSE Facility and Seminar Series

This past week I had the honor of delivering a keynote presentation in San Diego at the California Center for Sustainable Energy (CCSE) as part of their continuing community outreach, education, workshop and seminar series. The theme of the well-attended event was Next Generation Data Center Solutions, and my talk centered around leveraging Green and Virtual Data Centers for enabling efficiency and effectiveness. In addition to my keynote, the event included a panel discussion that I moderated with representatives of the event's sponsor Compucom, along with their special guests APC, HP, Intel and VMware.

The CCSE has a focus around Climate Change, Energy Efficiency, Green Buildings, Renewable Energy, Transportation, Home and Business. Their services and focus include awareness and outreach, education programs, a library and tools, and consulting and associated services. Speaking of their library, there is even a signed copy of my book The Green and Virtual Data Center (CRC) now at the CCSE library that can be checked out along with their other resources.

The CCSE staff and facilities were fantastic with hosts Mike Bigelow (an energy engineer) and Marlene King (program manager) orchestrating a great event.

If you are in the San Diego area, check out the CCSE located at 8690 Balboa Ave., Suite 100. They have a great library, cool demonstrations and tools that you can check out to assist with optimizing IT data centers from an energy efficiency standpoint. Learn more about the CCSE here.

Following are some relevant links to the keynote along with panel discussion from the CCSE event:

Follow these links to view additional videos or podcasts, tips, articles, books, reports and events.

Cheers
gs

Greg Schulz – Author The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier)
twitter @storageio


EPA Server and Storage Workshop Feb 2, 2010

EPA Energy Star

Following up on a recent post pertaining to US EPA Energy Star® for Servers, Data Center Storage and Data Centers, there will be a workshop held Tuesday, February 2, 2010 in San Jose, CA.

Here is the note (Italics added by me for clarity) from the folks at EPA with information about the event and how to participate.

 

Dear ENERGY STAR® Servers and Storage Stakeholders:

Representatives from the US EPA will be in attendance at The Green Grid Technical Forum in San Jose, CA in early February, and will be hosting information sessions to provide updates on recent ENERGY STAR servers and storage specification development activities.  Given the timing of this event with respect to ongoing data collection and comment periods for both product categories, EPA intends for these meetings to be informal and informational in nature.  EPA will share details of recent progress, identify key issues that require further stakeholder input, discuss timelines for the completion, and answer questions from the stakeholder community for each specification.

The sessions will take place on February 2, 2010, from 10:00 AM to 4:00 PM PT, at the San Jose Marriott.  A conference line and Webinar will be available for participants who cannot attend the meeting in person.  The preliminary agenda is as follows:

Servers (10:00 AM to 12:30 PM)

  • Draft 1 Version 2.0 specification development overview & progress report
    • Tier 1 Rollover Criteria
    • Power & Performance Data Sheet
    • SPEC efficiency rating tool development
  • Opportunities for energy performance data disclosure

 

Storage (1:30 PM to 4:00 PM)

  • Draft 1 Version 1.0 specification development overview & progress report
  • Preliminary stakeholder feedback & lessons learned from data collection 

A more detailed agenda will be distributed in the coming weeks.  Please RSVP to storage@energystar.gov or servers@energystar.gov no later than Friday, January 22.  Indicate in your response whether you will be participating in person or via Webinar, and which of the two sessions you plan to attend.

Thank you for your continued support of ENERGY STAR.

 

End of EPA Transmission

For those attending the event, I look forward to seeing you there in person on Tuesday before flying down to San Diego where I will be presenting on Wednesday the 3rd at The Green Data Center Conference.

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

EPA Energy Star for Data Center Storage Update

Following up on previous posts pertaining to US EPA Energy Star for Servers, Data Center Storage and Data Centers, here is a note received today with some new information. For those interested in the evolving Energy Star for Data Center, Servers and Storage, have a look at the following as well as the associated links.

Here is the note from EPA:

From: ENERGY STAR Storage [storage@energystar.gov]
Sent: Monday, December 28, 2009 8:00 AM
Subject: ENERGY STAR Data Center Storage Initial Data Collection Procedure

EPA Energy Star

Dear ENERGY STAR Data Center Storage Stakeholder or Other Interested Party:

The U.S. Environmental Protection Agency (EPA) would like to invite interested parties to test the energy performance of storage products that are currently being considered for inclusion in the Version 1.0 ENERGY STAR® Data Center Storage specification. Please review the attached cover letter, data collection procedure, and test data collection sheet for further information.

Stakeholders are encouraged to submit test data via e-mail to storage@energystar.gov no later than Friday, February 12, 2010.

Thank you for your continued support of ENERGY STAR!

Attachment Links:

Storage Initial Data Collection Procedure.pdf

Storage Initial Data Collection Cover Letter.pdf

Storage Initial Data Collection Data Sheet.xls

For more information, visit: www.energystar.gov

 

For those interested in EPA Energy Star, Green IT including Green and energy efficient storage, check out these following links:

Watch for more news and updates pertaining to EPA Energy Star for Servers, Data Center Storage and Data centers in 2010.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Is MAID Storage Dead? I Dont Think So!

While some vendors are doing better than others, and first generation MAID (Massive or monolithic Array of Idle Disks) might be dead or about to be deceased, spun down or put into a long term sleep mode, it is safe to say that second generation MAID (e.g. MAID 2.0), also known as intelligent power management (IPM), is alive and doing well.

In fact, IPM is not unique to disk storage or disk drives as it is also a technique found in current generation of processors such as those from Intel (e.g. Nehalem) and others.

Other names for IPM include adaptive voltage scaling (AVS), adaptive voltage scaling optimized (AVSO) and adaptive power management (APM) among others.

The basic concept is to vary the amount of power being used to the amount of work and service level needed at a point in time and on a granular basis.

For example, first generation MAID or drive spin down as deployed by vendors such as Copan, which is rumored to be in the process of being spun down as a company (see blog post by a former Copan employee), was binary. That is, a disk drive was either on or off, and the granularity was the entire storage system. In the case of Copan, the granularity was that a maximum of 25% of the disks could ever be spun up at any point in time. As a point of reference, when I ask IT customers why they don't use MAID or IPM enabled technology, they commonly cite concerns about performance, or more importantly, the perception of bad performance.

CPU chips have been taking the lead with the ability to vary the voltage and clock speed, enabling or disabling electronic circuitry to align with the amount of work needing to be done at a point in time. This more granular approach allows the CPU to run at faster rates when needed, and at slower rates when possible to conserve energy (here, here and here).

A common example is a laptop with technology such as speed step, or battery stretch saving modes. Disk drives have been following this approach by being able to vary their power usage by adjusting to different spin speeds along with enabling or disabling electronic circuitry.

On a granular basis, second generation MAID with IPM enabled technology can be done on a LUN or volume group basis across different RAID levels and types of disk drives depending on specific vendor implementation. Some examples of vendors implementing various forms of IPM for second generation MAID to name a few include Adaptec, EMC, Fujitsu Eternus, HDS (AMS), HGST (disk drives), Nexsan and Xyratex among many others.
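To illustrate the idea (a generic sketch only, not any particular vendor's algorithm), a second generation MAID or IPM style policy might step a LUN or volume group through power states based on idle time and service level, rather than a binary on or off:

    IPM_STATES = [                 # (minutes idle before entering state, state name)
        (0,   "full_speed"),       # normal operation
        (15,  "reduced_rpm"),      # slower spin speed
        (60,  "heads_parked"),     # portions of electronics powered down
        (240, "spun_down"),        # platters stopped, wake on access
    ]

    def power_state(idle_minutes, always_on=False):
        """Pick a power state for a LUN or volume group based on idle time."""
        if always_on:              # service level requires full performance
            return "full_speed"
        state = IPM_STATES[0][1]
        for threshold, name in IPM_STATES:
            if idle_minutes >= threshold:
                state = name
        return state

    print(power_state(5))                      # full_speed
    print(power_state(90))                     # heads_parked
    print(power_state(300))                    # spun_down
    print(power_state(300, always_on=True))    # full_speed for an active tier

The point is the granularity: some volumes stay at full speed to meet service levels while others ratchet down through intermediate states instead of an all-or-nothing system-wide spin down.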

Something else that is taking place in the industry seems to be vendors shying away from using the term MAID as there is some stigma associated with performance issues of some first generation products.

This is not all that different from what took place about 15 years ago when the first purpose-built monolithic RAID arrays appeared on the market, such as the SF2, aka the South San Francisco Forklift company product called Failsafe (here and here), which was bought by MTI, with patents later sold to EMC.

Failsafe, or what many at DEC referred to as Fail Some, was a large refrigerator-sized device with 5.25” disk drives configured as RAID5 with dedicated hot spare disk drives. Thus its performance was ok for the time doing random reads, however writes, in the pre write-back cache RAID5 days, were less than spectacular.

Failsafe and other early RAID (and here) implementations received a black eye from some due to performance, availability and other issues until best practices and additional enhancements such as multiple RAID levels appeared along with cache in follow on products.

What that trip down memory (or nightmare) lane has to do with MAID and particularly first generation products that did their part to help establish new technology is that they also gave way to second, third, fourth, fifth, sixth and beyond generations of RAID products.

The same can be expected as we are seeing with more vendors jumping in on the second generation of MAID also known as drive spin down with more in the wings.

Consequently, don't judge MAID based solely on the first generation products, which could be thought of as advanced technology production proof of concept solutions that paved the way for follow-on future solutions.

Just like RAID has become so ubiquitous that it has been declared dead, making it another zombie technology (dead, however still being developed, produced, bought and put to use), follow-on IPM enabled generations of technology will be more transparent. That is, similar to finding multiple RAID levels in most storage, look for IPM features including variable drive speeds, power settings and performance options on a go forward basis. These newer solutions may not carry the MAID name, however the spirit and function of intelligent power management without performance compromise does live on.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

The other Green Storage: Efficiency and Optimization

Some believe that green storage is specifically designed to reduce power and cooling costs.

The reality is that there are many ways to reduce environmental impact while enhancing the economics of data storage besides simply boosting utilization.

These include optimizing data storage capacity as well as boosting performance to increase productivity per watt of energy used when work needs to be done.

Some approaches require new hardware or software while others can be accomplished with changes to management including reconfiguration leveraging insight and awareness of resource needs.

Here are some related links:

The Other Green: Storage Efficiency and Optimization (Videocast)

Energy efficient technology sales depend on the pitch

Performance metrics: Evaluating your data storage efficiency

How to reduce your Data Footprint impact (Podcast)

Optimizing enterprise data storage capacity and performance to reduce your data footprint

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Green IT and Virtual Data Centers

Green IT and virtual data centers are no fad nor are they limited to large-scale environments.

Paying attention to how resources are used to deliver information services in a flexible, adaptable, energy-efficient, environmentally and economically friendly way to boost efficiency and productivity is here to stay.

Read more here in the article I did for the folks over at Enterprise Systems Journal.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

What is the Future of Servers?

Recently I provided some comments and perspectives on the future of servers in an article over at Processor.com.

In general, blade servers will become more ubiquitous; that is, they won't go away even with cloud, rather they will become more commonplace with even higher density processors with more cores and performance, along with faster I/O and larger memory capacity per given footprint.

While the term blade server may fade, giving way to some new term or phrase, rest assured their capabilities and functionality will not disappear, rather they will be further enhanced to support virtualization with VMware vSphere, Microsoft Hyper-V, and Citrix/Xen, along with public and private clouds, both for consolidation and in the next wave of virtualization called life beyond consolidation.

The other trend is that not only will servers be able to support more processing and memory per footprint; they will also do so drawing less energy and requiring lower cooling demands, hence more GHz per watt along with energy savings modes when less work needs to be performed.

Another trend is around convergence both in terms of packaging along with technology improvements from a server, I/O networking and storage perspective. For example, enhancements to shared PCIe with I/O virtualization, hypervisor optimization, and integration such as the recently announced EMC, Cisco, Intel and VMware VCE coalition and vblocks.

Read more including my comments in the article here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Justifying Green IT and Home Hardware Upgrades with EnergyStar

Energy Star

Have you seen the TV commercials or print advertisements where an energy star washer is mentioned as so efficient that the savings from reduced power consumption are enough to pay for the dryer? If not, check out the EPA Energy Star website for information about various programs, savings, and efficiency options to learn more.

What does this have to do with servers, storage, networking, data centers or other IT equipment?

Simple: if you are not aware, Energy Star for Servers now exists and is being enhanced, while good progress is being made on the Energy Star for storage program.

The Energy Star program for household appliances has been around a bit longer and is more refined, something that I anticipate the server and storage programs will follow suit with over time.

What really caught my eye with the commercial is the focus on closing the green gap; that is, instead of the green environmental impact savings of an appliance that uses less power and the subsequent carbon footprint benefits, the message goes to the economic hot button. That is, switch to more energy efficient technology that allows more work to be done at a lower overall cost, and the savings can help self-fund the enhancements.

For example, a more energy efficient server that can do more work or GHz per watt of energy when needed, or, to go into lower power modes (intelligent power management: IPM). Low power modes do not necessarily mean turning completely off, rather, drawing less energy and subsequently lower cooling demands during slow periods such as with new Intel Nehalem and other processors.

From a disk storage perspective, energy efficiency is often thought of as avoidance, that is, turning disk drives off, boosting capacity, and squeezing data footprints.

However, energy efficiency and savings can also be achieved by slowing a disk drive down or turning off some of the electronics to reduce energy consumption and heat generation.

Other forms of energy savings include thin provisioning and deduplication however another form of energy efficiency for storage is boosting performance. That is, doing more work per watt of energy for active or time sensitive applications or usage scenarios.

Thus there is another Green IT, one that provides both economic and environmental benefits!

Here are some related links:

Saving Money with Green IT: Time To Invest In Information Factories

EPA Energy Star for Data Center Storage Update

Green Storage is Alive and Well: ENERGY STAR Enterprise Storage Stakeholder Meeting Details

Shifting from energy avoidance to energy efficiency

U.S. EPA Energy Star for Server Update

U.S. EPA Looking for Industry Input on Energy Star for Storage

Update: EnergyStar for Server Workshop

US EPA EnergyStar for Servers Wants To Hear From YOU!

Optimize Data Storage for Performance and Capacity Efficiency

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Saving Money with Green IT: Time To Invest In Information Factories

There is a good and timely article titled Green IT Can Save Money, Too over at Business Week that has a familiar topic and theme for those who read this blog or other content, articles, reports, books, white papers, videos, podcasts or in-person speaking and keynote sessions that I have done.

I posted a short version of this over there, here is the full version that would not fit in their comment section.

Short of calling it Green IT 2.0 or the perfect storm, there is a resurgence and more importantly IMHO a growing awareness of the many facets of Green IT along with Green in general having an economic business sustainability aspect.

While the Green Gap and confusion still exist, that is, the difference between what people think or perceive and actual opportunities or issues, with growing awareness it will close or at least narrow. For example, when I regularly talk with IT professionals from various sized organizations and differently focused industries across the globe in diverse geographies and ask them about having to go green, the response is in the 7-15% range (these numbers are changing), with most believing that Green is only about carbon footprint.

On the other hand, when I ask them if they have power, cooling, floor space or other footprint constraints, including frozen or reduced budgets, recycling along with ewaste disposition or RoHS requirements, not to mention sustaining business growth without negatively impacting quality of service or customer experience, the response jumps up to 65-75% (these numbers are changing), if not higher.

That is the essence of the green gap or disconnect!

Granted, carbon dioxide or CO2 reduction is important, along with NO2, water vapors and other related issues, however there is also the need to do more with what is available, stretching resources and footprints to be more productive in a shrinking footprint. Keep in mind that there is no such thing as an information, data or processing recession, with all indicators pointing towards the need to move, manage and store larger amounts of data on a go forward basis. Thus, the need to do more in a given footprint or constraint, maximizing resources, energy, productivity and available budgets.

Innovation is the ability to do more with less at a lower cost without compromise on quality of service or negatively impacting customer experience. Regardless of if you are a manufacturer, or a service provider including in IT, by innovating with a diverse Green IT focus to become more efficient and optimized, the result is that your customers become more enabled and competitive.

By shifting from an avoidance model, where cost cutting or containment are the near-term tactical focus, to an efficiency and productivity model via optimization, net unit costs should be lowered while the overall service experience increases in a positive manner. This means treating IT as an information factory, one that needs investment in the people, processes and technologies (hardware, software, services) along with management metric indicator tools.

The net result is that environmental or perceived Green issues are addressed and self-funded via the investment in Green IT technology that boosts productivity (e.g. closing or narrowing the Green Gap). Thus, the environmental concerns that organizations have or need to address for different reasons, yet that lack funding, get addressed via funding to boost business productivity, which has tangible ROI characteristics similar to other lean manufacturing approaches.

Here are some additional links to learn more about these and other related themes:

Have a read over at Business Week about how Green IT Can Save Money, Too while thinking about how investing in IT infrastructure productivity (Information Factories) by becoming more efficient and optimized helps the business top and bottom line, not to mention the environment as well.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

EPA Energy Star for Data Center Storage Update

EPA Energy Star

Following up on a recent post about Green IT, energy efficiency and optimization for servers, storage and more, here are some additional thoughts and perspectives along with industry activity around the U.S. Environmental Protection Agency (EPA) Energy Star for Servers, Data Center Storage and Data Centers.

First a quick update, Energy Star for Servers is in place with work now underway on expanding and extending beyond the first specification. Second is that Energy Star for Data Center storage definition is well underway including a recent workshop to refine the initial specification along with discussion for follow-on drafts.

Energy Star for Data Centers is also currently undergoing definition which is focused more on macro or facility energy (notice I did not say electricity) efficiency as opposed to productivity or effectiveness, items that the Server and Storage specifications are working towards.

Among all of the different industry trade or special interests groups, at least on the storage front the Storage Networking Industry Association (SNIA) Green Storage Initiative (GSI) and their Technical Work Groups (TWG) have been busily working for the past couple of years on taxonomies, metrics and other items in support of EPA Energy Star for Data Center Storage.

A challenge for SNIA along with others working on related material pertaining to storage and efficiency is the multi-role functionality of storage. That is, some storage simply stores data with little to no performance requirements while other storage is actively used for reading and writing. In addition, there are various categories, architectures not to mention hardware and software feature functionality or vendors with different product focus and interests.

Unlike servers that are either on and doing work, or, off or in low power mode, storage is either doing active work (e.g. moving data), storing in-active or idle data, or a combination of both. Hence for some, energy efficiency is about how much data can be stored in a given footprint with the least amount of power known as in-active or idle measurement.

On the other hand, storage efficiency is also about using the least amount of energy to produce the most amount of work or activity, for example IOPS or bandwidth per watt per footprint.

Thus the challenge and need for at least a two-dimensional model, one that looks at and reflects different types or categories of storage aligned for active or in-active (e.g. storing) data, enabling apples-to-apples rather than apples-to-oranges comparison.

This is not all that different from how EPA looks at motor vehicle categories of economy cars, sport utility, work or heavy utility among others when doing different types of work, or, in idle.

What does this have to do with servers and storage?

Simple, when a server powers down where does its data go? That’s right, to a storage system using disk, ssd (RAM or flash), tape or optical for persistency. Likewise, when there is work to be done, where does the data get read into computer memory from, or written to? That’s right, a storage system. Hence the need to look at storage in a multi-tenant manner.

The storage industry is diverse with some vendors or products focused on performance or activity, while others on long term, low cost persistent storage for archive, backup, not to mention some doing a bit of both. Hence the nomenclature of herding cats towards a common goal when different parties have various interests that may conflict yet support needs of various customer storage usage requirements.

Figure 1 shows a simplified, streamlined storage taxonomy that has been put together by SNIA representing various types, categories and functions of data center storage. The green shaded areas are a good step in the right direction to simplify yet move towards realistic and achievable benefits for storage consumers.


Figure 1 Source: EPA Energy Star for Data Center Storage web site document

The importance of the streamlined SNIA taxonomy is that it helps differentiate or characterize various types and tiers of storage products (Figure 2), facilitating apples-to-apples comparison instead of apples-to-oranges. For example, for on-line primary storage, efficiency needs to be looked at in terms of how much work or activity is delivered per energy footprint.


Figure 2: Tiered Storage Example

On the other hand, storage for retaining large amounts of data that is in-active or idle for long periods of time should be looked at on a capacity per energy footprint basis. While final metrics are still being fleshed out, some examples could be active storage gauged by IOPS or work or bandwidth per watt of energy per footprint, while other storage for idle or inactive data could be looked at on a capacity per energy footprint basis.
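As a simple illustration of those two views (made-up numbers only, not proposed Energy Star metrics or any vendor's results), the math might look like:

    def iops_per_watt(iops, watts):
        """Activity focused efficiency for online or active storage."""
        return iops / watts

    def gbytes_per_watt(usable_tb, watts):
        """Capacity focused efficiency for idle or inactive (stored) data."""
        return usable_tb * 1000 / watts

    # Hypothetical systems: a performance oriented array and an archive system.
    print(f"Active tier:  {iops_per_watt(50_000, 2_500):.0f} IOPS per watt")
    print(f"Archive tier: {gbytes_per_watt(500, 1_200):.0f} GB per watt")

The same watt of power is judged by very different yardsticks depending on whether the storage is doing work or simply retaining data.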

Which benchmarks or workloads will be used for simulating or measuring work or activity is still being discussed, with proposals coming from various sources. For example, the SNIA GSI TWG is developing measurements and discussing metrics, as have the Storage Performance Council (SPC) and SPEC, among others, including the use of benchmark and simulation tools such as IOmeter, VMware VMmark, TPC, Bonnie, or perhaps even Microsoft ESRP.

Tenets of Energy Star for Data Center Storage over time hopefully will include:

  • Reflective of different types, categories, price bands and storage usage scenarios
  • Measure storage efficiency for active work along with inactive or idle usage
  • Provide insight into both storage performance efficiency and effective capacity
  • Baseline or raw storage capacity along with effective (enhanced or optimized) capacity, as sketched after this list
  • Easy-to-use metrics with more in-depth background or disclosure information
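The raw versus effective capacity item above can be made concrete with a small, hedged sketch. It assumes hypothetical deduplication and compression ratios (illustrative values, not measured or vendor figures) to show how effective capacity per watt can differ from the raw, baseline figure:

```python
# Hypothetical illustration of raw versus effective (optimized) capacity per watt.
# The reduction ratios below are illustrative assumptions, not measured values.

raw_capacity_tb = 100.0      # baseline usable capacity
power_watts = 900.0          # power draw of the storage system
dedupe_ratio = 3.0           # assumed deduplication ratio (3:1)
compression_ratio = 1.5      # assumed compression ratio (1.5:1)

effective_capacity_tb = raw_capacity_tb * dedupe_ratio * compression_ratio

print(f"Raw capacity per watt:       {raw_capacity_tb / power_watts:.3f} TB/watt")
print(f"Effective capacity per watt: {effective_capacity_tb / power_watts:.3f} TB/watt")
```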

Ultimately the specification should help IT storage buyers and decision makers compare and contrast different storage systems and determine which are best suited to their usage scenarios.

This means measuring work or activity per energy footprint at a given capacity and data protection level to meet service requirements, along with behavior during inactive or idle periods. It also means characterizing capacity-focused storage in terms of how much data can be stored in a given energy footprint.

One thing that will be tricky, however, is differentiating GBytes per watt when it refers to capacity from GBytes per watt when it refers to performance and bandwidth.
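As a purely illustrative sketch (hypothetical numbers, not from any specification), the same system can yield two very different "GBytes per watt" figures depending on whether the term refers to stored capacity or to bandwidth:

```python
# Two readings of "GBytes per watt" for the same hypothetical system.
# All numbers are illustrative only.

power_watts = 1_000.0
stored_capacity_gb = 50_000.0    # capacity reading: how much data is stored
bandwidth_gb_per_sec = 2.5       # performance reading: throughput being delivered

print(f"Capacity reading:    {stored_capacity_gb / power_watts:.1f} GB per watt")
print(f"Performance reading: {bandwidth_gb_per_sec / power_watts:.4f} GB/s per watt")
```

Without labeling which reading is meant, the same unit could describe a dense but slow archive as favorably as a fast but smaller online system.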

Here are some links to learn more:

Stay tuned for more on Energy Star for Data Centers, Servers and Data Center Storage.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Upcoming Out and About Events

Following up on previous Out and About updates (here and here) of where I have been, here's where I'll be over the next couple of weeks.

On September 15th and 16th, 2009, I will be the keynote speaker, along with leading a deep-dive discussion on data deduplication, in Minneapolis, MN and Toronto, ON. Free seminar; register and learn more here.

The Infrastructure Optimization and Planning Best Practices (V2.009) – Doing More with Less Without Sacrificing Storage, System or Network Capabilities seminar series continues September 22, 2009 with a stop in Chicago. Free seminar; register and learn more here.

On September 23, 2009, I will be in New York City at the Storage Decisions conference, participating in Ask the Experts during the expo session as well as presenting The Other Green – Storage Efficiency and Optimization.

Throw out the "green" buzzword, and you are still left with the task of saving or maximizing the use of space, power, and cooling while stretching available IT dollars to support growth and business sustainability. For some environments the solution may be consolidation, while others need to maintain quality of service, response time, performance and availability, necessitating faster, energy-efficient technologies to achieve optimization objectives. To address these and other related issues, you can turn to the cloud, virtualization, intelligent power management, data footprint reduction and data management, not to mention various types of tiered storage and performance optimization techniques. The session will look at various techniques and strategies to optimize both online active or primary and near-line or secondary storage environments during tough economic times, as well as to position for future growth; after all, there is no such thing as a data recession!

Topics, technologies and techniques that will be discussed include among others:

  • Energy efficiency (strategic) vs. energy avoidance (tactical)
  • Optimization and the need for speed vs. the need for capacity
  • Metrics and measurements for management insight
  • Tiered storage and tiered access including SSD, FC, SAS and clouds
  • Data footprint reduction (archive, compress, dedupe) and thin provision
  • Best practices, financial incentives and what you can do today

Free event, learn more and register here.

Check out the events page for other upcoming events; I hope to see you this fall while I'm out and about.

Cheers – gs

Greg Schulz – StorageIOblog, twitter @storageio Author “The Green and Virtual Data Center” (CRC)

Recent tips, videos, articles and more

It's been a busy year so far and there is still plenty more to do. Taking advantage of a short summer break, I'm getting caught up on some items, including putting up links to some of the recent articles, tips, reports, webcasts, videos and more that I have alluded to in recent posts. Realizing that some prefer blogs to web pages to tweets to other venues, here are links to recent articles, tips, videos, podcasts, webcasts, white papers and more that can be found on the StorageIO Tips, Tools and White Papers pages.

Recent articles, columns, tips, white papers and reports:

  • ITworld: The new green data center: From energy avoidance to energy efficiency August 2009
  • SearchSystemsChannel: Comparing I/O virtualization and virtual I/O benefits July 2009
  • SearchDisasterRecovery: Top server virtualization myths in DR and BC July 2009
  • Enterprise Storage Forum: Saving Money with Green Data Storage Technology July 2009
  • SearchSMB ATE Tips: SMB Tips and ATE by Greg Schulz
  • SearchSMB ATE Tip: Tape library storage July 2009
  • SearchSMB ATE Tip: Server-based operating systems vs. PC-based operating systems June 2009
  • SearchSMB ATE Tip: Pros/cons of block/variable block dedupe June 2009
  • FedTech At the Ready: High-availability storage hinges on being ready for a system failure May 2009
  • Byte & Switch Part XI – Key Elements For A Green and Virtual Data Center May 2009
  • Byte & Switch Part X – Basic Steps For Building a Green and Virtual Data Center May 2009
  • InfoStor Technology Options for Green Storage: April 2009
  • Byte & Switch Part IX – I/O, I/O, Its off to Virtual Work We Go: Networks role in Virtual Data Centers April 2009
  • Byte & Switch Part VIII – Data Storage Can Become Green: There are many steps you can take April 2009
  • Byte & Switch Part VII – Server Virtualization Can Save Costs April 2009
  • Byte & Switch Part VI – Building a Habitat for Technology April 2009
  • Byte & Switch Part V – Data Center Measurement, Metrics & Capacity Planning April 2009
  • zJournal Storage & Data Management: Tips for Enabling Green and Virtual Efficient Data Management March 2009
  • Serial Storage Wire (STA): Green and SASy = Energy and Economic, Effective Storage March 2009
  • SearchSystemsChannel: FAQs: Green IT strategies for solutions providers March 2009
  • Computer Technology Review: Recent Comments on The Green and Virtual Data Center March 2009
  • Byte & Switch Part IV – Virtual Data Centers Can Promote Business Growth March 2009
  • Byte & Switch Part III – The Challenge of IT Infrastructure Resource Management March 2009
  • Byte & Switch Part II – Building an Efficient & Ecologically Friendly Data Center March 2009
  • Byte & Switch Part I – The Green Gap – Addressing Environmental & Economic Sustainability March 2009
  • Byte & Switch Green IT and the Green Gap February 2009
  • GreenerComputing: Enabling a Green and Virtual Data Center February 2009
Some recent videos and podcasts include:

  • bmighty.com The dark side of SMB virtualization July 2009
  • bmighty.com SMBs Are Now Virtualization’s “Sweet Spot” July 2009
  • eWeek.com Green IT is not dead, its new focus is about efficiency July 2009
  • SearchSystemsChannel FAQ: Using cloud computing services opportunities to get more business July 2009
  • SearchStorage FAQ guide – How Fibre Channel over Ethernet can combine networks July 2009
  • SearchDataCenter Business Benefits of Boosting Web hosting Efficiency June 2009
  • SearchStorageChannel Disaster recovery services for solution providers June 2009
  • The Serverside The Changing Dynamic of the Data Center April 2009
  • TechTarget Virtualization and Consolidation for Agility: Intels Xeon Processor 5500 series May 2009
  • Intel Reduce Energy Usage while Increasing Business Productivity in the Data Center May 2009
  • WSRadio Closing the green gap and shifting towards an IT efficiency and productivity April 2009
Check out the Tips, Tools and White Papers, and News pages for more commentary, coverage and related content or events.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved