Upcoming Event: Industry Trends and Perspective European Seminar

Event Seminar Announcement:

IT Data Center, Storage and Virtualization Industry Trends and Perspective
June 16, 2010 Nijkerk, GELDERLAND Netherlands

Event Type: Training/Seminar with Greg Schulz of US-based Server and StorageIO
Sponsor: Brouwer Storage Consultancy
Target Audience: Storage Architects, Consultants, Pre-Sales, Customer (technical) decision makers
Keywords: Cloud, Grid, Data Protection, Disaster Recovery, Storage, Green IT, VTL, Encryption, Dedupe, SAN, NAS, Backup, BC, DR, Performance, Virtualization, FCoE
Location and Venue: Ampt van Nijkerk, Berencamperweg
Nijkerk, GELDERLAND NL
When: Wed. June 16, 2010, 9AM-5PM Local
Price: € 450,=
Event URL: https://storageioblog.com/book4.html
Contact: Gert Brouwer
Olevoortseweg 43
3861 MH Nijkerk
The Netherlands
Phone: +31-33-246-6825
Fax: +31-33-245-8956
Cell Phone: +31-652-601-309

info@brouwerconsultancy.com

Abstract: General items that will be covered include: current and emerging macro trends, issues, challenges, and opportunities; common IT customer trends, issues, and challenges; and opportunities for leveraging various current, new, and emerging technologies and techniques. The seminar will provide insight on how to address various IT and data storage management challenges, and on where and how new and emerging technologies can co-exist with, as well as complement, installed resources for maximum investment protection and business agility. Additional themes include cost and storage resource management, optimization and efficiency approaches, and where and how cloud, virtualization, and other topics fit into existing environments.

Buzzwords and topics to be discussed include, among others: FC and FCoE; SAS, SATA, iSCSI, and NAS; I/O Virtualization (IOV) and convergence; SSD (Flash and RAM); RAID, second-generation MAID, and IPM; tape; performance and capacity planning; performance and capacity optimization; metrics; IRM tools including DPM, E2E, SRA, and SRM, as well as federated management; data movement and migration, including automation or policy-enabled; HA and data protection, including backup/restore, BC/DR, security/encryption, VTL, CDP, snapshots, and replication for virtual and non-virtual environments; dynamic IT and optimization; the new Green IT (efficiency and productivity); distributed data protection (DDP) and distributed data caching (DDC); server and storage virtualization, along with discussion of life beyond consolidation; SAN, NAS, clusters, grids, clouds (public and private), bulk and object-based storage; unified and vendor-prepackaged stacked solutions (e.g., EMC VCE among others); and data footprint reduction (servers, storage, networks, data protection, and hypervisors among others).

Learn about other events involving Greg Schulz and StorageIO at www.storageio.com/events

Green IT, Green Gap, Tiered Energy and Green Myths

There are many different aspects of Green IT along with several myths or misperceptions not to mention missed opportunities.

There is a Green Gap or disconnect between environmentally aware, focused messaging and core IT data center issues. For example, when I ask IT professionals whether they have or are under direction to implement green IT initiatives, the number averages in the 10-15% range.

However, when I ask the same audiences who has or sees power, cooling, floor space, supporting growth, or addressing environmental health and safety (EHS) related issues, the average is 75 to 90%. What this means is that there is a disconnect between what is perceived as being green and the opportunities for IT organizations to make improvements from an economic and efficiency standpoint, including boosting productivity.

 

Some IT Data Center Green Myths
Is “green IT” a convenient or inconvenient truth or a legend?

When it comes to green and virtual environments, there are plenty of myths and realities, some of which vary depending on market or industry focus, price band, and other factors.

For example, there are lines of thinking that only ultra large data centers are subject to PCFE-related issues, or that all data centers need to be built along the Columbia River basin in Washington State, or that virtualization eliminates vendor lock-in, or that hardware is more expensive to power and cool than it is to buy.

The following are some myths and realities as of today, some of which may be subject to change from reality to myth or from myth to reality as time progresses.

Myth: Green and PCFE issues are applicable only to large environments.

Reality: I commonly hear that green IT applies only to the largest of companies. The reality is that PCFE issues or green topics are relevant to environments of all sizes, from the largest of enterprises to the small/medium business, to the remote office branch office, to the small office/home office or “virtual office,” all the way to the digital home and consumer.

 

Myth: All computer storage is the same, and powering disks off solves PCFE issues.

Reality: There are many different types of computer storage, with various performance, capacity, power consumption, and cost attributes. Although some storage can be powered off, other storage that is needed for online access does not lend itself to being powered off and on. For storage that needs to be always online and accessible, energy efficiency is achieved by doing more with less—that is, boosting performance and storing more data in a smaller footprint using less power.

 

Myth: Servers are the main consumer of electrical power in IT data centers.

Reality: In the typical IT data center, on average, 50% of electrical power is consumed by cooling, with the balance used for servers, storage, networking, and other aspects. However, in many environments, particularly processing or computation intensive environments, servers in total (including power for cooling and to power the equipment) can be a major power draw.

 

Myth: IT data centers produce 2 to 8% of all global Carbon Dioxide (CO2) and carbon emissions.

Reality: This might perhaps be true, given some creative accounting and marketing math used to help build a justification case or to scare you into doing something. However, the reality is that in the United States, for example, IT data centers consume around 2 to 4% of electrical power (depending on when you read this), and less than 80% of all U.S. CO2 emissions come from electrical power generation, so the math does not quite add up. The reality is this: if no action is taken to improve IT data center energy efficiency, continued demand growth will shift IT power-related emissions from myth to reality, not to mention cause constraints on IT and business sustainability from an economic and productivity standpoint.
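
The back-of-the-envelope math in the paragraph above can be sketched in a few lines; the percentages are the figures cited in the text, used here only as illustrative upper bounds:

```python
# Sanity-check the "2 to 8% of global CO2" claim using the U.S. figures
# cited above (illustrative arithmetic only, taking the upper bounds).
dc_share_of_electricity = 0.04   # data centers: ~2-4% of electrical power
power_gen_share_of_co2 = 0.80    # electrical generation: <80% of U.S. CO2

# Even with both upper bounds, data centers' implied share of CO2:
dc_share_of_co2 = dc_share_of_electricity * power_gen_share_of_co2
print(f"Implied share: {dc_share_of_co2:.1%}")  # 3.2%, well under 8%
```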

Myth: Server consolidation with virtualization is a silver bullet to address PCFE issues.

Reality: Server virtualization for consolidation is only part of an overall solution that should be combined with other techniques, including lower power, faster and more energy efficient servers, and improved data and storage management techniques.

 

Myth: Hardware costs more to power than to purchase.

Reality: Currently, for some low-cost servers, standalone disk storage, or entry-level networking switches and desktops, this may be true, particularly where energy costs are excessively high and the devices are kept and used continually for three to five years. A general rule of thumb is that the actual cost of most IT hardware will be a fraction of the price of associated management and software tool costs plus facilities and cooling costs. For the most part, at least as of this writing, small standalone individual hard disk drives or small entry-level volume servers can cost less to buy than to power over a three- to five-year time frame in locations that have very high electrical costs.

 

Regarding this last myth, for the more commonly deployed external storage systems across all price bands and categories, generally speaking, except for extremely inefficient and hot-running legacy equipment, the reality is that it is still cheaper to power the equipment than to buy it. Having said that, there are some qualifiers that should also be used as key indicators to keep the equation balanced. These qualifiers include the acquisition cost, if any, of new, expanded, or remodeled habitats or space to house the equipment; the price of energy in a given region, including surcharges; as well as cooling, length of time, and continuous time the device will be used.

For larger businesses, IT equipment in general still costs more to purchase than to power, particularly with newer, more energy efficient devices. However, given rising energy prices, or the need to build new facilities, this could change moving forward, particularly if a move toward energy efficiency is not undertaken.

There are many variables when purchasing hardware, including acquisition cost, the energy efficiency of the device, power and cooling costs for a given location and habitat, and facilities costs. For example, if a new storage solution is purchased for $100,000, yet new habitat or facilities must be built for three to five times the cost of the equipment, those costs must be figured into the purchase cost.

Likewise, if the price of a storage solution decreases dramatically, but the device consumes a lot of electrical power and needs a large cooling capacity while operating in a region with expensive electricity costs, that, too, will change the equation and the potential reality of the myth.
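
The power-versus-purchase comparison described above can be sketched as follows; the wattages, electricity price, and the 2x cooling multiplier are hypothetical assumptions for illustration, not vendor figures:

```python
def lifetime_energy_cost(watts, price_per_kwh, years, cooling_factor=2.0):
    """Estimated electricity cost over a device's service life.
    cooling_factor reflects the rough rule that cooling can draw as much
    power as the equipment itself (an assumption; tune per facility)."""
    kwh = watts / 1000 * years * 365 * 24 * cooling_factor
    return kwh * price_per_kwh

# Hypothetical $2,000 entry-level server in a high-cost energy region:
cost = lifetime_energy_cost(watts=300, price_per_kwh=0.25, years=5)
print(f"5-year power + cooling: ${cost:,.0f}")  # ~$6,570, more than purchase
```

By contrast, a hypothetical $100,000 storage array drawing 1,500 W under the same assumptions would cost about $32,850 to power and cool over five years, still far less than its purchase price, which is the point of the qualifiers above.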

 

Tiered Energy Sources
Given that IT resources and facilities require energy to power equipment as well as to keep it cool, electricity and its costs are popular topics associated with Green IT, economics, and efficiency, with lots of metrics and numbers tossed around. With that in mind, the U.S. national average CO2 emission is 1.34 lb/kWh of electrical power. Granted, this number will vary depending on the region of the country and the source of fuel for the power-generating station or power plant.
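
Using the 1.34 lb/kWh national average cited above, the emissions attributable to a given device can be estimated; the 500 W continuous draw below is a hypothetical example:

```python
US_AVG_CO2_LB_PER_KWH = 1.34  # U.S. national average cited in the text

def annual_co2_lb(avg_watts, lb_per_kwh=US_AVG_CO2_LB_PER_KWH):
    """Approximate pounds of CO2 per year for a continuously running load."""
    kwh_per_year = avg_watts / 1000 * 8760  # 8,760 hours in a year
    return kwh_per_year * lb_per_kwh

# Hypothetical server averaging 500 W around the clock:
print(f"{annual_co2_lb(500):,.0f} lb CO2 per year")  # ~5,869 lb
```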

Like tiered IT resources (servers, storage, I/O networks, virtual machines, and facilities), of which there are various tiers or types of technologies to meet various needs, there are also multiple types of energy sources. Different tiers of energy sources vary in their cost, availability, and environmental characteristics, among others. For example, in the US there are different types of coal, and not all coal is as dirty, when combined with emissions air scrubbers, as you might be led to believe; however, there are other energy sources to consider as well.

Coal continues to be a dominant fuel source for electrical power generation both in the United States and abroad, with other fuel sources including oil, natural gas, liquid propane gas (LPG or propane), nuclear, hydro, thermal or steam, wind, and solar. Within a category of fuel, for example coal, there are different emissions per ton of fuel burned. Eastern U.S. coal is higher in CO2 emissions per kilowatt hour than western U.S. lignite coal. However, eastern coal has more British thermal units (Btu) of energy per ton of coal, enabling less coal to be burned in smaller physical power plants.

If you have ever noticed that coal power plants in the United States seem to be smaller in the eastern states than in the Midwest and western states, it’s not an optical illusion. Because eastern coal burns hotter, producing more Btu, smaller boilers and stockpiles of coal are needed, making for smaller power plant footprints. On the other hand, as you move into the Midwest and western states of the United States, coal power plants are physically larger, because more coal is needed to generate 1 kWh, resulting in bigger boilers and vent stacks along with larger coal stockpiles.

On average, a gallon of gasoline produces about 20 lb of CO2, depending on usage and efficiency of the engine as well as the nature of the fuel in terms of octane or amount of Btu. Aviation fuel and diesel fuel differ from gasoline, as does natural gas or various types of coal commonly used in the generation of electricity. For example, natural gas is less expensive than LPG but also provides fewer Btu per gallon or pound of fuel. This means that more natural gas is needed as a fuel to generate a given amount of power.

Recently, while researching small, 10 to 12 kW standby generators for my office, I learned about some of the differences between propane and natural gas. What I found was that with natural gas as fuel, a given generator produced about 10.5 kW, whereas the same unit attached to an LPG or propane fuel source produced 12 kW. The trade-off was that to get as much power as possible out of the generator, the higher-cost LPG was the better choice. To use lower-cost fuel but get less power out of the device, the choice would be natural gas. If more power were needed, then a larger generator could be deployed to use natural gas, with the trade-off of requiring a larger physical footprint.
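
The generator trade-off can be framed as cost per kWh delivered; the power outputs are the figures from the text, while the per-hour fuel costs are made-up placeholders just to show the comparison:

```python
# Same generator, two fuels: outputs from the text, fuel prices assumed.
options = {
    "natural gas": {"output_kw": 10.5, "fuel_cost_per_hr": 1.80},  # assumed
    "LPG/propane": {"output_kw": 12.0, "fuel_cost_per_hr": 2.60},  # assumed
}

for fuel, o in options.items():
    # Lower $/kWh favors natural gas; higher peak output favors LPG.
    cost_per_kwh = o["fuel_cost_per_hr"] / o["output_kw"]
    print(f"{fuel}: {o['output_kw']} kW for ${cost_per_kwh:.3f}/kWh")
```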

Oil and gas are not used as much as fuel sources for electrical power generation in the United States as in other countries such as the United Kingdom. Gasoline, diesel, and other petroleum-based fuels are used for some power plants in the United States, including standby or peaking plants. In the electrical power generation and transmission (G&T) industry, as in IT, where different tiers of servers and storage are used for different applications, there are different tiers of power plants using different fuels with various costs. Peaking and standby plants are brought online when there is heavy demand for electrical power, during disruptions when a lower-cost or more environmentally friendly plant goes offline for planned maintenance, or in the event of a trip or unplanned outage.

CO2 is commonly discussed with respect to green and associated emissions; however, there are other so-called greenhouse gases, including Nitrogen Dioxide (NO2) and water vapor, among others. Carbon makes up only a fraction of CO2. To be specific, only about 27% of a pound of CO2 is carbon; the balance is oxygen. Consequently, carbon emissions tax schemes (ETS), as opposed to CO2 tax schemes, need to account for the amount of carbon per ton of CO2 being put into the atmosphere. In some parts of the world, including the EU and the UK, ETS are either already in place or in initial pilot phases, to provide incentives to improve energy efficiency and use.
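
The "about 27%" figure follows directly from atomic masses and can be checked in a couple of lines:

```python
# Why only ~27% of a pound of CO2 is carbon: ratio of atomic masses.
C = 12.011             # atomic mass of carbon, g/mol
O = 15.999             # atomic mass of oxygen, g/mol
co2_mass = C + 2 * O   # one carbon atom plus two oxygen atoms
carbon_fraction = C / co2_mass
print(f"Carbon fraction of CO2: {carbon_fraction:.1%}")  # 27.3%
```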

Meanwhile, in the United States there are voluntary programs for buying carbon offset credits, along with initiatives such as the Carbon Disclosure Project (www.cdproject.net), a not-for-profit organization that facilitates the flow of information pertaining to emissions by organizations, enabling investors to make informed decisions and business assessments from an economic and environmental perspective. Another voluntary program is the U.S. EPA Climate Leaders initiative, where organizations commit to reduce their GHG emissions to a given level over a specific period of time.

Regardless of your stance or perception on green issues, the reality is that for business and IT sustainability, a focus on ecological and, in particular, the corresponding economic aspects cannot be ignored. There are business benefits to aligning the most energy efficient and low power IT solutions combined with best practices to meet different data and application requirements in an economic and ecologically friendly manner.

Green initiatives need to be seen in a different light: as business enablers as opposed to ecological cost centers. For example, many local utilities and state energy or environmentally concerned organizations are providing funding, grants, loans, or other incentives to improve energy efficiency. Some of these programs can help offset the costs of doing business and going green. Instead of being seen as the cost to go green, by addressing efficiency, the by-products are economic as well as ecological.

Put a different way, a company can spend carbon credits to offset its environmental impact, similar to paying a fine for noncompliance, or it can achieve efficiency and obtain incentives. There are many solutions and approaches to address these different issues, which will be looked at in the coming chapters.

What does this all mean?
There are real things that can be done today that can be effective toward achieving a balance of performance, availability, capacity, and energy effectiveness to meet particular application and service needs.

Sustainability for economic and ecological purposes can be achieved by balancing performance, availability, capacity, and energy to applicable application service levels and physical floor space constraints, along with intelligent power management. Energy economics should be considered as much a strategic resource of IT data centers as are servers, storage, networks, software, and personnel.

The bottom line is that without electrical power, IT data centers come to a halt. Rising fuel prices, strained generating and transmission facilities for electrical power, and a growing awareness of environmental issues are forcing businesses to look at PCFE issues. To support and sustain business growth, including storing and processing more data, IT data centers need to leverage energy efficiency as a means of addressing PCFE issues. By adopting effective solutions, economic value can be achieved with positive ecological results while sustaining business growth.

Some additional links include:

Want to learn or read more?

Check out Chapter 1 (Green IT and the Green Gap, Real or Virtual?) in my book "The Green and Virtual Data Center" (CRC Press).

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

EPA Energy Star for Data Center Storage Update

Following up on previous posts pertaining to US EPA Energy Star for Servers, Data Center Storage and Data Centers, here is a note received today with some new information. For those interested in the evolving Energy Star for Data Center, Servers and Storage, have a look at the following as well as the associated links.

Here is the note from EPA:

From: ENERGY STAR Storage [storage@energystar.gov]
Sent: Monday, December 28, 2009 8:00 AM
Subject: ENERGY STAR Data Center Storage Initial Data Collection Procedure

EPA Energy Star

Dear ENERGY STAR Data Center Storage Stakeholder or Other Interested Party:

The U.S. Environmental Protection Agency (EPA) would like to invite interested parties to test the energy performance of storage products that are currently being considered for inclusion in the Version 1.0 ENERGY STAR® Data Center Storage specification. Please review the attached cover letter, data collection procedure, and test data collection sheet for further information.

Stakeholders are encouraged to submit test data via e-mail to storage@energystar.gov no later than Friday, February 12, 2010.

Thank you for your continued support of ENERGY STAR!

Attachment Links:

Storage Initial Data Collection Procedure.pdf

Storage Initial Data Collection Cover Letter.pdf

Storage Initial Data Collection Data Sheet.xls

For more information, visit: www.energystar.gov

 

For those interested in EPA Energy Star and Green IT, including green and energy-efficient storage, check out the following links:

Watch for more news and updates pertaining to EPA Energy Star for Servers, Data Center Storage and Data centers in 2010.


The other Green Storage: Efficiency and Optimization

Some believe that green storage is specifically designed to reduce power and cooling costs.

The reality is that there are many ways to reduce environmental impact while enhancing the economics of data storage besides simply boosting utilization.

These include optimizing data storage capacity as well as boosting performance to increase productivity per watt of energy used when work needs to be done.

Some approaches require new hardware or software while others can be accomplished with changes to management including reconfiguration leveraging insight and awareness of resource needs.

Here are some related links:

The Other Green: Storage Efficiency and Optimization (Videocast)

Energy efficient technology sales depend on the pitch

Performance metrics: Evaluating your data storage efficiency

How to reduce your Data Footprint impact (Podcast)

Optimizing enterprise data storage capacity and performance to reduce your data footprint


Green IT and Virtual Data Centers

Green IT and virtual data centers are no fad nor are they limited to large-scale environments.

Paying attention to how resources are used to deliver information services in a flexible, adaptable, energy-efficient, and environmentally and economically friendly way to boost efficiency and productivity is here to stay.

Read more here in the article I did for the folks over at Enterprise Systems Journal.


Justifying Green IT and Home Hardware Upgrades with EnergyStar

Energy Star

Have you seen the TV commercials or print advertisements where an Energy Star washer is mentioned as being so efficient that the savings from reduced power consumption are enough to pay for the dryer? If not, check out the EPA Energy Star website for information about various programs, savings, and efficiency options to learn more.

What does this have to do with servers, storage, networking, data centers or other IT equipment?

Simple: if you are not aware, Energy Star for Servers now exists and is being enhanced, while good progress is being made on the Energy Star for storage program.

Energy Star for household appliances has been around a bit longer and is more refined, something that I anticipate the server and storage programs will follow suit with over time.

What really caught my eye with the commercial is the focus on closing the green gap; that is, instead of the green environmental impact savings of an appliance that uses less power and the subsequent carbon footprint benefits, the message targets the economic hot button. That is, switch to more energy-efficient technology that allows more work to be done at a lower overall cost, and the savings can help self-fund the enhancements.

For example, a more energy-efficient server can do more work or GHz per watt of energy when needed, or go into lower-power modes (intelligent power management, or IPM). Low-power modes do not necessarily mean turning completely off; rather, they mean drawing less energy, and subsequently lowering cooling demands, during slow periods, as with new Intel Nehalem and other processors.

From a disk storage perspective, energy efficiency is often thought of as avoidance, that is, turning disk drives off, boosting capacity, and squeezing data footprints.

However, energy efficiency and savings can also be achieved by slowing a disk drive down or turning off some of its electronics to reduce energy consumption and heat generation.

Other forms of energy savings include thin provisioning and deduplication; however, another form of energy efficiency for storage is boosting performance. That is, doing more work per watt of energy for active or time-sensitive applications or usage scenarios.

Thus there is another Green IT, one that provides both economic and environmental benefits!

Here are some related links:

Saving Money with Green IT: Time To Invest In Information Factories

EPA Energy Star for Data Center Storage Update

Green Storage is Alive and Well: ENERGY STAR Enterprise Storage Stakeholder Meeting Details

Shifting from energy avoidance to energy efficiency

U.S. EPA Energy Star for Server Update

U.S. EPA Looking for Industry Input on Energy Star for Storage

Update: EnergyStar for Server Workshop

US EPA EnergyStar for Servers Wants To Hear From YOU!

Optimize Data Storage for Performance and Capacity Efficiency


Saving Money with Green IT: Time To Invest In Information Factories

There is a good and timely article titled Green IT Can Save Money, Too over at Business Week that has a familiar topic and theme for those who read this blog or other content, articles, reports, books, white papers, videos, podcasts, or in-person speaking and keynote sessions that I have done.

I posted a short version of this over there, here is the full version that would not fit in their comment section.

Short of calling it Green IT 2.0 or the perfect storm, there is a resurgence and, more importantly, IMHO, a growing awareness of the many facets of Green IT, along with green in general having an economic, business sustainability aspect.

While the Green Gap and confusion still exist, that is, the difference between what people think or perceive and actual opportunities or issues, with growing awareness it will close or at least narrow. For example, when I regularly talk with IT professionals from organizations of various sizes and with different focuses, across industries and diverse geographies around the globe, and ask them about having to go green, the response is in the 7-15% range (these numbers are changing), with most believing that green is only about carbon footprint.

On the other hand, when I ask them if they have power, cooling, floor space, or other footprint constraints, including frozen or reduced budgets, recycling along with ewaste disposition or RoHS requirements, not to mention sustaining business growth without negatively impacting quality of service or customer experience, the response jumps to 65-75% (these numbers are changing), if not higher.

That is the essence of the green gap or disconnect!

Granted, carbon dioxide (CO2) reduction is important, along with NO2, water vapor, and other related issues; however, there is also the need to do more with what is available, stretching resources and footprints to be more productive in a shrinking footprint. Keep in mind that there is no such thing as an information, data, or processing recession, with all indicators pointing toward the need to move, manage, and store larger amounts of data on a go-forward basis. Thus the need to do more in a given footprint or constraint, maximizing resources, energy, productivity, and available budgets.

Innovation is the ability to do more with less at a lower cost without compromising quality of service or negatively impacting customer experience. Regardless of whether you are a manufacturer or a service provider, including in IT, by innovating with a diverse Green IT focus to become more efficient and optimized, the result is that your customers become more enabled and competitive.

By shifting from an avoidance model, where cost cutting or containment is the near-term tactical focus, to an efficiency and productivity model via optimization, net unit costs should be lowered while the overall service experience increases in a positive manner. This means treating IT as an information factory, one that needs investment in people, processes, and technologies (hardware, software, services), along with management metric indicator tools.

The net result is that environmental or perceived green issues are addressed and self-funded via the investment in Green IT technology that boosts productivity (e.g., closing or narrowing the Green Gap). Thus, the environmental concerns that organizations have or need to address for different reasons, yet that lack funding, get addressed via funding to boost business productivity, which has tangible ROI characteristics similar to other lean manufacturing approaches.

Here are some additional links to learn more about these and other related themes:

Have a read over at Business Week about how Green IT Can Save Money, Too while thinking about how investing in IT infrastructure productivity (Information Factories) by becoming more efficient and optimized helps the business top and bottom line, not to mention the environment as well.


EPA Energy Star for Data Center Storage Update

EPA Energy Star

Following up on a recent post about Green IT, energy efficiency, and optimization for servers, storage, and more, here are some additional thoughts and perspectives, along with industry activity around the U.S. Environmental Protection Agency (EPA) Energy Star for Servers, Data Center Storage, and Data Centers.

First, a quick update: Energy Star for Servers is in place, with work now underway on expanding and extending beyond the first specification. Second, the Energy Star for Data Center Storage definition is well underway, including a recent workshop to refine the initial specification along with discussion of follow-on drafts.

Energy Star for Data Centers is also currently undergoing definition, which is focused more on macro or facility energy (notice I did not say electricity) efficiency as opposed to productivity or effectiveness, items that the Server and Storage specifications are working toward.

Among all of the different industry trade or special interests groups, at least on the storage front the Storage Networking Industry Association (SNIA) Green Storage Initiative (GSI) and their Technical Work Groups (TWG) have been busily working for the past couple of years on taxonomies, metrics and other items in support of EPA Energy Star for Data Center Storage.

A challenge for SNIA along with others working on related material pertaining to storage and efficiency is the multi-role functionality of storage. That is, some storage simply stores data with little to no performance requirements while other storage is actively used for reading and writing. In addition, there are various categories, architectures not to mention hardware and software feature functionality or vendors with different product focus and interests.

Unlike servers, which are either on and doing work or off or in a low-power mode, storage is either doing active work (e.g., moving data), storing in-active or idle data, or a combination of both. Hence, for some, energy efficiency is about how much data can be stored in a given footprint with the least amount of power, known as the in-active or idle measurement.

On the other hand, storage efficiency is also about using the least amount of energy to produce the most amount of work or activity, for example IOPS or bandwidth per watt per footprint.

Thus the challenge, and the need for at least a two dimensional model that looks at and reflects different types or categories of storage, aligned for active or inactive (e.g. storing) data, enabling apples to apples rather than apples to oranges comparisons.

This is not all that different from how the EPA looks at motor vehicle categories of economy cars, sport utility, work or heavy utility among others when doing different types of work, or when idle.
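The two dimensional model above can be sketched in a few lines: active storage is compared on work per watt, inactive storage on capacity per watt, and comparisons only make sense within a category. All device figures below are hypothetical, for illustration only.

```python
# Sketch of a two dimensional storage efficiency comparison.
# Active storage is measured on activity per watt, idle/inactive
# storage on capacity per watt. All numbers are hypothetical.

def activity_efficiency(iops, watts):
    """Active storage metric: work per unit energy (IOPS per watt)."""
    return iops / watts

def capacity_efficiency(usable_tb, watts):
    """Idle/inactive storage metric: capacity per unit energy (TB per watt)."""
    return usable_tb / watts

# Hypothetical online/primary storage system, judged on activity
primary = {"iops": 200_000, "watts": 2_500, "usable_tb": 50}
# Hypothetical near-line/archive storage system, judged on capacity
archive = {"iops": 5_000, "watts": 800, "usable_tb": 400}

print(f"Primary: {activity_efficiency(primary['iops'], primary['watts']):.1f} IOPS/W")
print(f"Archive: {capacity_efficiency(archive['usable_tb'], archive['watts']):.3f} TB/W")
```

Comparing the archive system on IOPS per watt (or the primary system on TB per watt) would be the apples to oranges comparison the taxonomy is meant to avoid.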

What does this have to do with servers and storage?

Simple: when a server powers down, where does its data go? That’s right, to a storage system using disk, SSD (RAM or flash), tape or optical for persistency. Likewise, when there is work to be done, where does the data get read into computer memory from, or written to? That’s right, a storage system. Hence the need to look at storage in a multi-tenant manner.

The storage industry is diverse, with some vendors or products focused on performance or activity, others on long term, low cost persistent storage for archive and backup, not to mention some doing a bit of both. Hence the notion of herding cats towards a common goal when different parties have various interests that may conflict yet support the needs of various customer storage usage requirements.

Figure 1 shows a simplified, streamlined storage taxonomy that has been put together by SNIA representing various types, categories and functions of data center storage. The green shaded areas are a good step in the right direction to simplify yet move towards realistic and achievable benefits for storage consumers.


Figure 1 Source: EPA Energy Star for Data Center Storage web site document

The importance of the streamlined SNIA taxonomy is to help differentiate or characterize various types and tiers of storage (Figure 2) products, facilitating apples to apples comparisons instead of apples to oranges. For example, for on-line primary storage, how much work or activity per energy footprint determines efficiency.


Figure 2: Tiered Storage Example

On the other hand, storage for retaining large amounts of data that is inactive or idle for long periods of time should be looked at on a capacity per energy footprint basis. While final metrics are still being fleshed out, some examples could be active storage gauged by IOPS, work or bandwidth per watt of energy per footprint, while other storage for idle or inactive data could be looked at on a capacity per energy footprint basis.

Which benchmarks or workloads will be used for simulating or measuring work or activity is still being discussed, with proposals coming from various sources. For example, the SNIA GSI TWG is developing measurements and discussing metrics, as have the Storage Performance Council (SPC) and SPEC among others, including the use of simulation tools such as IOmeter, VMware VMmark, TPC, Bonnie, or perhaps even Microsoft ESRP.

Tenets of Energy Star for Data Center Storage over time hopefully will include:

  • Reflective of different types, categories, price-bands and storage usage scenarios
  • Measure storage efficiency for active work along with in-active or idle usage
  • Provide insight for both storage performance efficiency and effective capacity
  • Baseline or raw storage capacity along with effective enhanced optimized capacity
  • Easy to use metrics with more in-depth background or disclosure information

Ultimately the specification should help IT storage buyers and decision makers to compare and contrast different storage systems that are best suited and applicable to their usage scenarios.

This means measuring work or activity per energy footprint at a given capacity and data protection level to meet service requirements along with during in-active or idle periods. This also means showing storage that is capacity focused in terms of how much data can be stored in a given energy footprint.

One thing that will be tricky however will be differentiating GBytes per watt in terms of capacity, or, in terms of performance and bandwidth.


Stay tuned for more on Energy Star for Data Centers, Servers and Data Center Storage.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Data Center I/O Bottlenecks Performance Issues and Impacts

This is an excerpt blog version of the popular Server and StorageIO Group white paper "IT Data Center and Data Storage Bottlenecks", originally published August of 2006, that is as relevant today as it was in the past, if not more so.

Most Information Technology (IT) data centers have bottleneck areas that impact application performance and service delivery to IT customers and users. Possible bottleneck locations shown in Figure-1 include servers (application, web, file, email and database), networks, application software, and storage systems. For example, users of IT services can encounter delays and lost productivity due to seasonal workload surges or Internet and other network bottlenecks. Network congestion or dropped packets, resulting in wasteful and delayed retransmission of data, can be the result of network component failure, poor configuration or lack of available low latency bandwidth.

Server bottlenecks due to lack of CPU processing power, memory or undersized I/O interfaces can result in poor performance or, in worst case scenarios, application instability. Application bottlenecks, including database systems, due to excessive locking, poor query design, data contention and deadlock conditions result in poor user response time. Storage and I/O performance bottlenecks can occur at the host server due to lack of I/O interconnect bandwidth such as an overloaded PCI interconnect, storage device contention, and lack of available storage system I/O capacity.
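One hedged way to spot a server-side storage bottleneck of the kind described above is to look at how much CPU time is spent waiting on I/O. On Linux this shows up as the iowait field in /proc/stat; the sketch below parses a sample line (field layout per the proc(5) man page), with illustrative counter values.

```python
# Sketch: spotting a server-side I/O bottleneck from Linux /proc/stat
# by computing the fraction of CPU time spent waiting on I/O (iowait).
# Field layout per proc(5): user nice system idle iowait irq softirq ...
# The sample counter values below are illustrative only.

def iowait_fraction(stat_line):
    """Return iowait time as a fraction of total CPU time."""
    fields = [int(x) for x in stat_line.split()[1:]]
    total = sum(fields)
    return fields[4] / total if total else 0.0

# On a live Linux system this line would come from open("/proc/stat")
sample = "cpu  4705 150 1120 16250 1290 0 15 0 0 0"
print(f"iowait: {iowait_fraction(sample):.1%}")
```

A sustained high iowait fraction suggests the server is stalled on storage rather than short of CPU, pointing the investigation toward the I/O path instead of the processor.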

These performance bottlenecks impact most applications and are not unique to large enterprise or high performance computing (HPC) environments. The direct impact of data center I/O performance issues includes general slowing of the systems and applications, causing lost productivity time for users of IT services. Indirect impacts of data center I/O performance bottlenecks include additional management by IT staff to troubleshoot, analyze, re-configure and react to application delays and service disruptions.


Figure-1: Data center performance bottleneck locations

Data center performance bottleneck impacts (see Figure-1) include:

  • Under utilization of disk storage capacity to compensate for lack of I/O performance capability
  • Poor Quality of Service (QoS) causing Service Level Agreements (SLA) objectives to be missed
  • Premature infrastructure upgrades combined with increased management and operating costs
  • Inability to meet peak and seasonal workload demands resulting in lost business opportunity

I/O bottleneck impacts
It should come as no surprise that businesses continue to consume and rely upon larger amounts of disk storage. Disk storage and I/O performance fuel the hungry needs of applications in order to meet SLAs and QoS objectives. The Server and StorageIO Group sees that, even with efforts to reduce storage capacity or improve capacity utilization with information lifecycle management (ILM) and Infrastructure Resource Management (IRM) enabled infrastructures, applications leveraging rich content will continue to consume more storage capacity and require additional I/O performance. Similarly, at least for the next few years, the current trend of making and keeping additional copies of data for regulatory compliance and business continuance is expected to continue. These demands all add up to a need for more I/O performance capabilities to keep up with server processor performance improvements.


Figure-2: Processing and I/O performance gap

Server and I/O performance gap
The continued need for accessing more storage capacity results in an alarming trend: the expanding gap between server processing power and available I/O performance of disk storage (Figure-2). This server to I/O performance gap has existed for several decades and continues to widen instead of improving. The net impact is that bottlenecks associated with the server to I/O performance gap result in lost productivity for IT personnel and customers who must wait for transactions, queries, and data access requests to be resolved.

Application symptoms of I/O bottlenecks
There are many applications across different industries that are sensitive to timely data access and impacted by common I/O performance bottlenecks. For example, as more users access a popular file, database table, or other stored data item, resource contention will increase. One way resource contention manifests itself is in the form of database “deadlock” which translates into slower response time and lost productivity. 

Given the rise and popularity of internet search engines, search engine optimization (SEO) and on-line price shopping, some businesses have been forced to create expensive read-only copies of databases. These read-only copies are used to support additional queries and keep bottlenecks from impacting time sensitive transaction databases.

In addition to increased application workload, IT operational procedures to manage and protect data contribute to performance bottlenecks. Data center operational procedures result in additional file I/O scans for virus checking, database purge and maintenance, data backup, classification, replication, data migration for maintenance and upgrades as well as data archiving. The net result is that essential data center management procedures contribute to performance challenges and impact business productivity.

Poor response time and increased latency
Generally speaking, as additional activity or application workload including transactions or file accesses are performed, I/O bottlenecks result in increased response time or latency (shown in Figure-3). With most performance metrics more is better; however, in the case of response time or latency, less is better. Figure-3 shows the impact as more work is performed (dotted curve) and resulting I/O bottlenecks have a negative impact by increasing response time (solid curve) above acceptable levels. The specific acceptable response time threshold will vary by application and SLA requirements. The acceptable threshold level, based on performance plans, testing, SLAs and other factors including experience, serves as a guideline between acceptable and poor application performance.

As more workload is added to a system with existing I/O issues, response time will correspondingly increase, as seen in Figure-3. The more severe the bottleneck, the faster response time will deteriorate (e.g. increase) from acceptable levels. The elimination of bottlenecks enables more work to be performed while maintaining response time below acceptable service level threshold limits.
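The non-linear curve described above can be approximated with a simple M/M/1 queuing relation, R = S / (1 - U), where S is the per-I/O service time and U is utilization. This is an illustrative model choice, not a metric from the original white paper, and the numbers are hypothetical.

```python
# Illustrating a Figure-3 style response time curve with a simple
# M/M/1 queuing approximation: R = S / (1 - U). As utilization U
# approaches 1, response time R grows without bound.

def response_time(service_time_ms, utilization):
    """Average response time under M/M/1 assumptions (ms)."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time_ms / (1.0 - utilization)

service_ms = 5.0  # hypothetical per-I/O service time
for u in (0.25, 0.50, 0.75, 0.90, 0.95):
    print(f"utilization {u:.0%}: response {response_time(service_ms, u):.1f} ms")
```

Even this simplified model shows why a system running near saturation deteriorates so quickly: going from 90% to 95% utilization doubles the response time.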


Figure-3: I/O response time performance impact

Seasonal and peak workload I/O bottlenecks
Another common challenge and cause of I/O bottlenecks is seasonal and/or unplanned workload increases that result in application delays and frustrated customers. In Figure-4 a workload representing an eCommerce transaction based system is shown with seasonal spikes in activity (dotted curve). The resulting impact to response time (solid curve) is shown in relation to a threshold line of acceptable response time performance. For example, peaks due to holiday shopping exchanges appear in January and then drop off; activity increases again near Mother’s Day in May, back to school shopping in August brings increased activity, as does holiday shopping starting in late November.


Figure-4: I/O bottleneck impact from surge workload activity

Compensating for lack of performance
Besides impacting user productivity due to poor performance, I/O bottlenecks can result in system instability or unplanned application downtime. One only needs to recall recent electric power grid outages that were due to instability and insufficient capacity bottlenecks resulting from increased peak user demand.

Common approaches to address I/O bottlenecks have been to do nothing (incur and deal with the service disruptions) or to over configure by throwing more hardware and software at the problem. To compensate for lack of I/O performance and counter the resulting negative impact to IT users, a common approach is to add more hardware to mask or move the problem.

However, this often leads to extra storage capacity being added to make up for a shortfall in I/O performance. By over configuring to support peak workloads and prevent loss of business revenue, excess storage capacity must be managed throughout the non-peak periods, adding to data center and management costs. The resulting ripple effect is that now more storage needs to be managed, including allocating storage network ports, configuring, tuning, and backing up of data. This can and does result in environments that have storage utilization well below 50% of their useful storage capacity. The solution is to address the problem rather than moving and hiding the bottleneck elsewhere (rather like sweeping dust under the rug).
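A small back-of-the-envelope sketch of this over-configuration effect: when spindles are added to meet an IOPS target rather than a capacity target, utilization of the resulting capacity can fall well below 50%. All device figures below are hypothetical.

```python
# Sketch: sizing a disk array by IOPS vs by capacity, and the resulting
# capacity utilization. All per-drive figures are hypothetical.
import math

def drives_needed(required_iops, iops_per_drive, required_tb, tb_per_drive):
    """Drives needed to satisfy both the performance and capacity targets."""
    for_performance = math.ceil(required_iops / iops_per_drive)
    for_capacity = math.ceil(required_tb / tb_per_drive)
    return max(for_performance, for_capacity)

# Hypothetical workload: 30,000 IOPS and 40 TB, on 150 IOPS / 2 TB drives
n = drives_needed(required_iops=30_000, iops_per_drive=150,
                  required_tb=40, tb_per_drive=2)
utilization = 40 / (n * 2)
print(f"{n} drives required, capacity utilization {utilization:.0%}")
```

In this example the IOPS requirement, not the capacity requirement, dictates the drive count, leaving the bulk of the purchased capacity unused yet still needing to be powered, cooled and managed.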

Business value of improved performance
Putting a value on the performance of applications and their importance to your business is a necessary step in the process of deciding where and what to focus on for improvement. For example, what is the value of reducing application response time and the associated business benefit of allowing more transactions, reservations or sales to be made? Likewise, what is the value of improving the productivity of a designer or animator to meet tight deadlines and market schedules? What is the business benefit of enabling a customer to search faster for an item, place an order, access media rich content, or in general improve their productivity?

Server and I/O performance gap as a data center bottleneck
I/O performance bottlenecks are a widespread issue across most data centers, affecting many applications and industries. Applications impacted by data center I/O bottlenecks to be looked at in more depth are electronic design automation (EDA), entertainment and media, database online transaction processing (OLTP) and business intelligence. These application categories represent transactional processing, shared file access for collaborative work, and processing of shared, time sensitive data.

Electronic design
Computer aided design (CAD), computer assisted engineering (CAE), electronic design automation (EDA) and other design tools are used for a wide variety of engineering and design functions. These design tools require fast access to shared, secured and protected data. The objective of using EDA and other tools is to enable faster product development with better quality and improved worker productivity. Electronic components manufactured for the commercial, consumer and specialized markets rely on design tools to speed the time-to-market of new products as well as to improve engineer productivity.

EDA tools, including those from Cadence, Synopsys, Mentor Graphics and others, are used to develop expensive and time sensitive electronic chips, along with circuit boards and other components, to meet market windows and supplier deadlines. An example of this is a chip vendor being able to simulate, develop, test, produce and deliver a new chip in time for manufacturers to release their new products based on those chips. Another example is aerospace and automotive engineering firms leveraging design tools, including CATIA and UGS, on a global basis, relying on their supplier networks to do the same in a real-time, collaborative manner to improve productivity and time-to-market. This results in contention for shared file and data access and, as a work-around, more copies of data being kept as local buffers.

I/O performance impacts and challenges for EDA, CAE and CAD systems include:

  • Delays in drawing and file access resulting in lost productivity and project delays
  • Complex configurations to support computer farms (server grids) for I/O and storage performance
  • Proliferation of dedicated storage on individual servers and workstations to improve performance

Entertainment and media
While some applications are characterized by high bandwidth or throughput, such as streaming video and digital intermediate (DI) processing of 2K (2048 pixels per line) and 4K (4096 pixels per line) video and film, there are many other applications that are also impacted by I/O performance time delays. Even bandwidth intensive applications for video production and other applications are time sensitive and vulnerable to I/O bottleneck delays. For example, cell phone ring tone, instant messaging, small MP3 audio, and voice- and e-mail are impacted by congestion and resource contention.

Prepress production and publishing requiring assimilation of many small documents, files and images while undergoing revisions can also suffer. News and information websites need to look up breaking stories, entertainment sites need to view and download popular music, along with still images and other rich content; all of this can be negatively impacted by even small bottlenecks.  Even with streaming video and audio, access to those objects requires accessing some form of a high speed index to locate where the data files are stored for retrieval. These indexes or databases can become bottlenecks preventing high performance storage and I/O systems from being fully leveraged.

Index files and databases must be searched to determine the location where images and objects, including streaming media, are stored. Consequently, these indices can become points of contention resulting in bottlenecks that delay processing of streaming media objects. When a cell phone picture is taken and sent to someone, chances are that the resulting image will be stored on network attached storage (NAS) as a file with a corresponding index entry in a database at some service provider location. Think about what happens to those servers and storage systems when several people all send photos at the same time.

I/O performance impacts and challenges for entertainment and media systems include:

  • Delays in image and file access resulting in lost productivity
  • Redundant files and storage on local servers to improve performance
  • Contention for resources causing further bottlenecks during peak workload surges

OLTP and business intelligence
Surges in peak workloads result in performance bottlenecks on database and file servers, impacting time sensitive OLTP systems unless they are over configured for peak demand. For example, workload spikes due to holiday and back-to-school shopping, spring break and summer vacation travel reservations, Valentine’s or Mother’s Day gift shopping, and clearance and settlement on peak stock market trading days strain fragile systems. For database systems, maintaining performance for key objects, including transaction logs and journals, is important to eliminate performance issues as well as maintain transaction and data integrity.

An example tied to eCommerce is business intelligence systems (not to be confused with back office marketing and analytics systems for research). Online business intelligence systems are popular with online shopping and services vendors who track customer interests and previous purchases to tailor search results, views and make suggestions to influence shopping habits.

Business intelligence systems need to be fast and support rapid lookup of history and other information to provide purchase histories and offer timely suggestions. The relative performance improvements of processors shift the application bottlenecks from the server to the storage access network. These applications have, in some cases, resulted in an exponential increase in query or read operations beyond the capabilities of single database and storage instances, resulting in database deadlock and performance problems or the proliferation of multiple data copies and dedicated storage on application servers.

A more recent contribution to performance challenges, caused by the increased availability of on-line shopping and price shopping search tools, is low cost craze (LCC) or price shopping. LCC has created a dramatic increase in the number of read or search queries taking place, further impacting database and file systems performance. For example, an airline reservation system that supports price shopping while preventing impact to time sensitive transactional reservation systems would create multiple read-only copies of reservations databases for searches. The result is that more copies of data must be maintained across more servers and storage systems thus increasing costs and complexity. While expensive, the alternative of doing nothing results in lost business and market share.
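Creating read-only database copies as described above usually goes hand in hand with a routing layer that sends search queries to the replicas while reservations and other writes stay on the primary. A minimal sketch, with hypothetical endpoint names and a deliberately naive read-detection rule:

```python
# Sketch of routing read-only (price shopping) queries to read replicas
# while reservations/writes go to the primary database.
# Endpoint names are hypothetical; real systems would use connection
# pools and a more robust read/write classification than this.
import itertools

class QueryRouter:
    def __init__(self, primary, replicas):
        self.primary = primary
        self._replicas = itertools.cycle(replicas)  # round-robin reads

    def route(self, query):
        # Naive rule: SELECT statements are reads, everything else writes
        if query.strip().lower().startswith("select"):
            return next(self._replicas)
        return self.primary

router = QueryRouter("db-primary", ["db-replica1", "db-replica2"])
print(router.route("SELECT fare FROM flights WHERE dest = 'MSP'"))  # a replica
print(router.route("UPDATE reservations SET seat = '12A'"))         # the primary
```

This illustrates the trade-off in the text: the replicas absorb the search load, but every replica is another copy of the data that must be provisioned, synchronized, backed up and managed.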

I/O performance impacts and challenges for OLTP and business intelligence systems include:

  • Application and database contention, including deadlock conditions, due to slow transactions
  • Disruption to application servers to install special monitoring, load balance or I/O driver software
  • Increased management time required to support additional storage needed as an I/O workaround

Summary/Conclusion
It is vital to understand the value of performance, including response time or latency, and the number of I/O operations for each environment and particular application. While the cost per raw TByte may seem relatively inexpensive, the cost of I/O response time performance also needs to be effectively addressed and put into the proper context as part of the data center QoS cost structure.

There are many approaches to address data center I/O performance bottlenecks, with most centered on adding more hardware or addressing bandwidth or throughput issues. Time sensitive applications depend on low response time as workloads, including throughput, increase; thus latency cannot be ignored. The key to removing data center I/O bottlenecks is to find and address the problem instead of simply moving or hiding it with more hardware and/or software. Simply adding fast devices such as SSD may provide relief; however, if the SSDs are attached to high latency storage controllers, the full benefit may not be realized. Thus, identify and gain insight into data center I/O bottleneck paths, eliminating issues and problems to boost productivity and efficiency.

Where to Learn More
Additional information about IT data center, server, storage as well as I/O networking bottlenecks along with solutions can be found at the Server and StorageIO website in the tips, tools and white papers, as well as news, books, and activity on the events pages. If you are in the New York area on September 23, 2009, check out my presentation on The Other Green – Storage Optimization and Efficiency that will touch on the above and other related topics. Download your copy of "IT Data Center and Storage Bottlenecks" by clicking here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Out and About Update

As part of the continuing on the road theme and series, this post is being done while traveling for this week’s adventures and events, including stops in Atlanta, St. Louis and wrapping up the week in Minneapolis at the local CMG quarterly meeting event. At both last week’s events in Las Vegas and Milwaukee as well as this week’s events, talking with IT professionals from various organizations, a consistent theme is that there is no data or I/O recession; however, there is the need to do more with less while enabling business sustainability.

While VMware remains the dominant server virtualization platform, I’m hearing of more organizations using Citrix or other XenSource based technologies, along with some Microsoft Hyper-V adopters, in part to leverage lower cost of ownership compared to VMware in instances where not all of the feature functionality of the robust VMware technology is needed. This will be an interesting scenario to keep an eye on in the weeks and months to come, to see if there are any shifting patterns on the server virtualization front while trying to stretch IT dollars further to do more.

On the Merger & Acquisition (M&A) scene, coverage of the on again, off again and recently rekindled rumors of IBM buying Sun is rampant from the Wall Street Journal to twitter and most points in between. There have been many storm clouds around Sun the past several years from a business and technology perspective, and perhaps the best thing is for Sun and IBM to combine forces and resources, bridging the gap between old physical worlds and new virtual cloud enabled worlds so to speak. Personally, I like the idea for many different reasons and think that some shape or form of an IBM and Sun deal, either in its entirety or in pieces, is far more likely to occur, and sooner, than seeing funds returned from either AIG or Bernard Madoff, the other top news items this week; nuff said for now about IBM and Sun.

Also this week, other activity included Cisco announcing that they are testing the waters to enter into the server market space to help jumpstart the converged networking space with some of my initial comments here and here. Check out StorageIO in the news page here for other comments on various IT industry trends, technologies and related activities including a recent piece by Drew Robb about The State of the Data Storage Job Market.

Let’s see how this plays out, with more to say later. Thanks again to everyone who came out for last week’s as well as this week’s events; I look forward to seeing and talking with you again soon.

Cheers – gs

Technorati tags: Recession, Sustainability, Wall Street Journal, Data Center Bottlenecks, Performance, Capacity, Networking, Telephone, Data Center, Consolidation, Virtualization, VMware, Server, Storage, Software, Sun, IBM, Las Vegas, Milwaukee, St. Louis, Atlanta, CMG, AIG, Bernard Madoff, Cisco

Server Storage I/O Network Virtualization What’s Next?

Updated 9/28/18

There are many faces and thus functionalities of virtualization beyond the one most commonly discussed, which is consolidation or aggregation. Other common forms of virtualization include emulation (which is part of enabling consolidation), which can take the form of a virtual tape library for storage, bridging new disk technology to old software technology, processes, procedures and skill sets. Other forms of virtualization functionality for life beyond consolidation include abstraction for transparent movement of applications or operating systems on servers, or data on storage, to support planned and unplanned maintenance, upgrades, BC/DR and other activities.

So the gist is that there are many forms of virtualization technologies and techniques for servers, storage and even I/O networks to address different issues including life beyond consolidation. However the next wave of consolidation could and should be that of reducing the number of logical images, or, the impact of the multiple operating systems and application images, along with their associated management costs.

This may be easier said than done; however, for those looking to cut costs even further than what can be realized by reducing physical footprints (e.g. going from 10 to 1 or from 250 to 25 physical servers), there could be upside, but it will come at a cost. The cost is like that of reducing data and storage footprint impacts with techniques such as data management and archiving.

Savings can be realized by archiving and deleting data via data management; however, that is easier said than done given the cost in terms of people time and the ability to decide what to archive, even for non-compliance data, along with associated business rules and policies to be defined (for automation), plus the hardware, software and services involved (managed services, consulting and/or cloud and SaaS).

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

IT Belt Tightening and Strategies for IT Economic Sustainment

Storage I/O trends

There’s been and will continue to be lots of talk around tightening IT budgets and spending. Here’s a link to a piece by Chris Preimesberger over at eWeek titled “How the ‘Down’ Macroeconomy Will Impact the Data Storage Sector”.

Here’s another related piece by Marty Foltyn over on Internet News that looks at “Storage Technologies for a Slowing Economy”.

A macro trend that I’m seeing and hearing more often is that with the demise, or should I say falling out of favor, of Green hype and Green washing, there is a growing realization of the practical aspects of boosting efficiency and productivity with a by-product of being environmentally friendly while addressing economic issues.

The so called Green Gap (see here, here, here, here, and here) exists between green hype and rhetoric, which is now destined for the endangered species or extinction list, and the core issues that need to be addressed, including power, cooling, floor space/footprint and environmental health & safety (EH&S), known as PCFE, which are in fact covered under the broader green umbrella.

With the growing realization that efficiency gains which boost IT productivity to sustain and enable business and economic growth also help the environment, there is awareness that initiatives and activities addressing PCFE and related issues are in fact green and part of many organizations’ agendas and budgets. Consequently, while going green for the sake of going green may be falling out of favor or off of budgets for many, addressing PCFE related issues to sustain business and economic growth is gaining in popularity, though not always associated with being green.

Thus in tough economic times, where dollars and productivity become important, the new green will be that of efficiency and productivity, with a by-product benefit of helping the environment; these are some of the themes in my book, The Green and Virtual Data Center (Auerbach), which you can pre-order now at Amazon.com and other venues around the world.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved