Inaugural StorageIO Newsletter

Welcome to the winter 2010 edition of the Server and StorageIO (StorageIO) newsletter. This inaugural edition coincides with our fifth year in business along with recent website and blog enhancements.

In an age of social media including Facebook, Twitter, blogs and video, some might ask why a newsletter; after all, is that not old school or non-social media?

For those who are immersed in Twitter, blogs, Facebook, feeds and other Web 2.0 means of communication, a traditional newsletter might not be in vogue.

StorageIO Winter 2010 Newsletter (Inaugural Edition)

However, a large percentage of the population, which also means a vast number of visitors and guests of StorageIO websites and blogs as well as readers of articles and other content, does not use Twitter, Facebook, LinkedIn or RSS feeds, so I realize there is still a role for a newsletter.

Thus, it makes sense to bring information to those of you who prefer a traditional newsletter format via email or other subscription; the newsletter is available in both HTML and PDF formats.

You can access this newsletter via various social media venues (some are shown below) in addition to StorageIO websites and subscriptions. Click on the following links to view the inaugural newsletter as HTML or PDF, or to go to the newsletter page.

Follow via Google Feedburner here or via email subscription here.

You can also subscribe to the newsletter by simply sending an email to newsletter@storageio.com

Enjoy this inaugural edition of the StorageIO newsletter, and let me know your comments and feedback.

Also, a very big thank you to everyone who has helped make StorageIO a success!

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Green IT, Green Gap, Tiered Energy and Green Myths

There are many different aspects of Green IT, along with several myths or misperceptions, not to mention missed opportunities.

There is a Green Gap or disconnect between environmentally aware, focused messaging and core IT data center issues. For example, when I ask IT professionals whether they have or are under direction to implement green IT initiatives, the number averages in the 10-15% range.

However, when I ask the same audiences who has or sees issues with power, cooling, floor space, supporting growth, or environmental health and safety (EHS), the average is 75 to 90%. What this means is that there is a disconnect between what is perceived as being green and the opportunities for IT organizations to make improvements from an economic and efficiency standpoint, including boosting productivity.

 

Some IT Data Center Green Myths
Is “green IT” a convenient or inconvenient truth or a legend?

When it comes to green and virtual environments, there are plenty of myths and realities, some of which vary depending on market or industry focus, price band, and other factors.

For example, there are lines of thinking that only ultra large data centers are subject to PCFE-related issues, or that all data centers need to be built along the Columbia River basin in Washington State, or that virtualization eliminates vendor lock-in, or that hardware is more expensive to power and cool than it is to buy.

The following are some myths and realities as of today, some of which may be subject to change from reality to myth or from myth to reality as time progresses.

Myth: Green and PCFE issues are applicable only to large environments.

Reality: I commonly hear that green IT applies only to the largest of companies. The reality is that PCFE issues or green topics are relevant to environments of all sizes, from the largest of enterprises to the small/medium business, to the remote office branch office, to the small office/home office or “virtual office,” all the way to the digital home and consumer.

 

Myth: All computer storage is the same, and powering disks off solves PCFE issues.

Reality: There are many different types of computer storage, with various performance, capacity, power consumption, and cost attributes. Although some storage can be powered off, other storage that is needed for online access does not lend itself to being powered off and on. For storage that needs to be always online and accessible, energy efficiency is achieved by doing more with less—that is, boosting performance and storing more data in a smaller footprint using less power.

 

Myth: Servers are the main consumer of electrical power in IT data centers.

Reality: In the typical IT data center, on average, 50% of electrical power is consumed by cooling, with the balance used for servers, storage, networking, and other aspects. However, in many environments, particularly processing or computation intensive environments, servers in total (including power for cooling and to power the equipment) can be a major power draw.
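As a rough sketch of what that 50% figure implies, here is a simple calculation; the 1 MW facility size is an assumed example for illustration, not a measurement from any particular site:

```python
# Rough illustration: if about half of data center power goes to cooling,
# then every watt of IT equipment load carries roughly another watt of overhead.
facility_power_kw = 1000      # assumed total facility draw (a 1 MW example)
cooling_share = 0.50          # per the text, ~50% goes to cooling on average

cooling_kw = facility_power_kw * cooling_share
it_and_other_kw = facility_power_kw - cooling_kw   # servers, storage, networking, other

print(f"Cooling: {cooling_kw:.0f} kW, IT equipment and other: {it_and_other_kw:.0f} kW")
# Saving 1 kW at the server or storage level also avoids roughly 1 kW of cooling,
# which is why equipment level efficiency pays off twice.
```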

 

Myth: IT data centers produce 2 to 8% of all global Carbon Dioxide (CO2) and carbon emissions.

Reality: This might perhaps be true, given some creative accounting and marketing math used to help build a justification case or to scare you into doing something. However, the reality is that in the United States, for example, IT data centers consume around 2 to 4% of electrical power (depending on when you read this), and less than 80% of all U.S. CO2 emissions are from electrical power generation, so the math does not quite add up. The reality is this: if no action is taken to improve IT data center energy efficiency, continued demand growth will shift IT power-related emissions from myth to reality, not to mention cause constraints on IT and business sustainability from an economic and productivity standpoint.
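As a back of the envelope sketch of why the math does not add up, using only the figures cited above (these are the article's numbers, not an official estimate):

```python
# Back-of-the-envelope check using only the figures cited above.
dc_share_of_electricity = 0.04   # upper end of the 2 to 4% range for U.S. data centers
power_gen_share_of_co2 = 0.80    # upper bound on power generation's share of U.S. CO2, per the text

dc_share_of_co2 = dc_share_of_electricity * power_gen_share_of_co2
print(f"Implied data center share of U.S. CO2: about {dc_share_of_co2:.1%}")
# Roughly 3.2% even with generous assumptions, below the 8% upper end of the claim.
```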

Myth: Server consolidation with virtualization is a silver bullet to address PCFE issues.

Reality: Server virtualization for consolidation is only part of an overall solution that should be combined with other techniques, including lower power, faster and more energy efficient servers, and improved data and storage management techniques.

 

Myth: Hardware costs more to power than to purchase.

Reality: Currently, for some low-cost servers, standalone disk storage, or entry level networking switches and desktops, this may be true, particularly where energy costs are excessively high and the devices are kept and used continually for three to five years. A general rule of thumb is that the actual cost of most IT hardware will be a fraction of the price of associated management and software tool costs plus facilities and cooling costs. For the most part, at least as of this writing, it is only small standalone hard disk drives or small entry level volume servers, bought and then used for three to five years in locations that have very high electrical costs, where powering the device can rival or exceed its purchase price.

 

Regarding this last myth, for the more commonly deployed external storage systems across all price bands and categories, generally speaking, except for extremely inefficient and hot running legacy equipment, the reality is that it is still cheaper to power the equipment than to buy it. Having said that, there are some qualifiers that should also be used as key indicators to keep the equation balanced. These qualifiers include the acquisition cost, if any, for new, expanded, or remodeled habitats or space to house the equipment; the price of energy in a given region, including surcharges; as well as cooling, the length of time, and how continuously the device will be used.

For larger businesses, IT equipment in general still costs more to purchase than to power, particularly with newer, more energy efficient devices. However, given rising energy prices, or the need to build new facilities, this could change moving forward, particularly if a move toward energy efficiency is not undertaken.

There are many variables when purchasing hardware, including acquisition cost, the energy efficiency of the device, power and cooling costs for a given location and habitat, and facilities costs. For example, if a new storage solution is purchased for $100,000, yet new habitat or facilities must be built for three to five times the cost of the equipment, those costs must be figured into the purchase cost.

Likewise, if the price of a storage solution decreases dramatically, but the device consumes a lot of electrical power and needs a large cooling capacity while operating in a region with expensive electricity costs, that, too, will change the equation and the potential reality of the myth.
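To make that equation concrete, here is a minimal sketch; the wattages, electricity rates and purchase prices are hypothetical placeholders used only to show how the comparison can tip either way, not quotes for any real product:

```python
# Hypothetical comparison of purchase price vs. power and cooling cost over time.
def energy_cost(watts, years, price_per_kwh, cooling_overhead=1.0):
    """Electricity cost for a device running 24x7; cooling_overhead=1.0 assumes
    roughly one watt of cooling for every watt of equipment power."""
    kwh = watts / 1000 * 24 * 365 * years
    return kwh * price_per_kwh * (1 + cooling_overhead)

# A low cost entry server in a very high cost energy region
print(round(energy_cost(watts=400, years=5, price_per_kwh=0.20)))   # ~7000, vs. a ~3000 purchase price
# A larger external storage system in an average cost region
print(round(energy_cost(watts=2500, years=5, price_per_kwh=0.12)))  # ~26000, vs. a ~100000 purchase price
```

With these assumed numbers, the inexpensive server costs more to power and cool than to buy, while the larger storage system is still cheaper to power than to purchase, which is consistent with the qualifiers above.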

 

Tiered Energy Sources
Given that IT resources and facilities require energy to power equipment as well as keep it cool, electricity and energy are popular topics associated with Green IT, economics and efficiency, with lots of metrics and numbers tossed around. With that in mind, the U.S. national average CO2 emission is 1.34 lb per kWh of electrical power. Granted, this number will vary depending on the region of the country and the source of fuel for the power generating station or power plant.
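As a quick worked example using that 1.34 lb/kWh average (the 500 watt device is an assumed figure for illustration):

```python
# Convert a device's continuous power draw into annual energy use and CO2,
# using the 1.34 lb CO2 per kWh U.S. national average cited above.
CO2_LB_PER_KWH = 1.34

def annual_co2_lb(watts):
    kwh_per_year = watts / 1000 * 24 * 365   # assumes 24x7 operation
    return kwh_per_year * CO2_LB_PER_KWH

# A 500 watt device: about 4,380 kWh and roughly 5,900 lb of CO2 per year
print(round(annual_co2_lb(500)))
```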

Like tiered IT resources (servers, storage, I/O networks, virtual machines and facilities), where there are various tiers or types of technologies to meet various needs, there are also multiple types of energy sources. Different tiers of energy sources vary by cost, availability and environmental characteristics, among other factors. For example, in the US there are different types of coal, and not all coal is as dirty as you might be led to believe when combined with emissions air scrubbers; however, there are other energy sources to consider as well.

Coal continues to be a dominant fuel source for electrical power generation both in the United States and abroad, with other fuel sources, including oil, gas, natural gas, liquid propane gas (LPG or propane), nuclear, hydro, thermo or steam, wind and solar. Within a category of fuel, for example, coal, there are different emissions per ton of fuel burned. Eastern U.S. coal is higher in CO2 emissions per kilowatt hour than western U.S. lignite coal. However, eastern coal has more British thermal units (Btu) of energy per ton of coal, enabling less coal to be burned in smaller physical power plants.

If you have ever noticed that coal power plants in the United States seem to be smaller in the eastern states than in the Midwest and western states, it’s not an optical illusion. Because eastern coal burns hotter, producing more Btu, smaller boilers and stockpiles of coal are needed, making for smaller power plant footprints. On the other hand, as you move into the Midwest and western states of the United States, coal power plants are physically larger, because more coal is needed to generate 1 kWh, resulting in bigger boilers and vent stacks along with larger coal stockpiles.

On average, a gallon of gasoline produces about 20 lb of CO2, depending on usage and efficiency of the engine as well as the nature of the fuel in terms of octane or amount of Btu. Aviation fuel and diesel fuel differ from gasoline, as does natural gas or various types of coal commonly used in the generation of electricity. For example, natural gas is less expensive than LPG but also provides fewer Btu per gallon or pound of fuel. This means that more natural gas is needed as a fuel to generate a given amount of power.

Recently, while researching small 10 to 12 kW standby generators for my office, I learned about some of the differences between propane and natural gas. What I found was that with natural gas as fuel, a given generator produced about 10.5 kW, whereas the same unit attached to an LPG or propane fuel source produced 12 kW. The trade-off was that to get as much power as possible out of the generator, the higher cost LPG was the better choice. To use lower cost fuel but get less power out of the device, the choice would be natural gas. If more power was needed, then a larger generator could be deployed to use natural gas, with the trade-off of requiring a larger physical footprint.
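Here is a minimal sketch of that trade-off using the output figures above; the hourly fuel costs are hypothetical placeholders and will vary by region and fuel contract:

```python
# Compare natural gas vs. propane (LPG) for the same standby generator.
# Output ratings come from the example above; the hourly fuel costs are assumptions.
options = {
    "natural gas": {"output_kw": 10.5, "fuel_cost_per_hour": 2.50},  # hypothetical cost
    "propane":     {"output_kw": 12.0, "fuel_cost_per_hour": 4.00},  # hypothetical cost
}

for fuel, o in options.items():
    cost_per_kwh = o["fuel_cost_per_hour"] / o["output_kw"]
    print(f"{fuel}: {o['output_kw']} kW output, about ${cost_per_kwh:.2f} of fuel per kWh")
# Propane buys more peak output from the same unit; natural gas trades peak output
# for lower fuel cost, or requires a larger generator with a bigger footprint.
```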

Oil and gas are not used as much as fuel sources for electrical power generation in the United States as in other countries such as the United Kingdom. Gasoline, diesel, and other petroleum based fuels are used for some power plants in the United States, including standby or peaking plants. In the electrical power generation and transmission (G and T) industry, as in IT where different tiers of servers and storage are used for different applications, there are different tiers of power plants using different fuels with various costs. Peaking and standby plants are brought online when there is heavy demand for electrical power, during disruptions when a lower cost or more environmentally friendly plant goes offline for planned maintenance, or in the event of a trip or unplanned outage.

CO2 is commonly discussed with respect to green and associated emissions; however, there are other so-called greenhouse gases, including nitrogen dioxide (NO2) and water vapor, among others. Carbon makes up only a fraction of CO2. To be specific, only about 27% of a pound of CO2 is carbon; the balance is oxygen. Consequently, carbon emissions tax schemes (ETS), as opposed to CO2 tax schemes, need to account for the amount of carbon per ton of CO2 being put into the atmosphere. In some parts of the world, including the EU and the UK, ETS are either already in place or in initial pilot phases to provide incentives to improve energy efficiency and use.
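That 27% figure follows directly from the molecular weights, as this quick check shows:

```python
# Carbon's share of CO2 by mass, from approximate atomic weights (C = 12, O = 16).
carbon = 12.011
oxygen = 15.999
carbon_fraction = carbon / (carbon + 2 * oxygen)
print(f"About {carbon_fraction:.1%} of a pound of CO2 is carbon")   # ~27.3%
```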

Meanwhile, in the United States there are voluntary programs for buying carbon offset credits along with initiatives such as the Carbon Disclosure Project. The Carbon Disclosure Project (www.cdproject.net) is a not-for-profit organization that facilitates the flow of information pertaining to emissions by organizations, so that investors can make informed decisions and business assessments from an economic and environmental perspective. Another voluntary program is the United States EPA Climate Leaders initiative, where organizations commit to reduce their GHG emissions to a given level or over a specific period of time.

Regardless of your stance or perception on green issues, the reality is that for business and IT sustainability, a focus on ecological and, in particular, the corresponding economic aspects cannot be ignored. There are business benefits to aligning the most energy efficient and low power IT solutions combined with best practices to meet different data and application requirements in an economic and ecologically friendly manner.

Green initiatives need to be seen in a different light, as business enablers as opposed to ecological cost centers. For example, many local utilities and state energy or environmentally concerned organizations are providing funding, grants, loans, or other incentives to improve energy efficiency. Some of these programs can help offset the costs of doing business and going green. Instead of being seen as the cost of going green, addressing efficiency yields by-products that are economic as well as ecological.

Put a different way, a company can spend carbon credits to offset its environmental impact, similar to paying a fine for noncompliance, or it can achieve efficiency and obtain incentives. There are many solutions and approaches to address these different issues, which will be looked at in the coming chapters.

What does this all mean?
There are real things that can be done today that can be effective toward achieving a balance of performance, availability, capacity, and energy effectiveness to meet particular application and service needs.

Sustaining for economic and ecological purposes can be achieved by balancing performance, availability, capacity, and energy against applicable application service levels and physical floor space constraints, along with intelligent power management. Energy economics should be considered as much a strategic resource of IT data centers as servers, storage, networks, software, and personnel.

The bottom line is that without electrical power, IT data centers come to a halt. Rising fuel prices, strained generating and transmission facilities for electrical power, and a growing awareness of environmental issues are forcing businesses to look at PCFE issues. To support and sustain business growth, including storing and processing more data, IT data centers need to leverage energy efficiency as a means of addressing PCFE issues. By adopting effective solutions, economic value can be achieved with positive ecological results while sustaining business growth.


Want to learn or read more?

Check out Chapter 1 (Green IT and the Green Gap, Real or Virtual?) in my book “The Green and Virtual Data Center” (CRC) here or here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Post Holiday IT Shopping Bargains, Dell Buying Exanet?

For consumers, the time leading up to the Christmas holiday season is usually busy, including door busters and Black Friday among other specials for purchasing gifts and other items. However, savvy shoppers will wait until after Christmas or the holidays altogether, perhaps well into the New Year, when some good bargains become available. IT customers are no different, with budgets to use up before the end of the year and thus a flurry of acquisitions that should become evident soon as we enter earnings announcement season.

However, there are also bargains for IT organizations looking to take advantage of special vendor promotions meant to stimulate sales, not to mention opportunities for IT vendors to do some shopping of their own. Consequently, in addition to the flurry of merger and acquisition (M and A) activity from last summer through the fall, there have been several recent deals, some of which might make Monty Hall blush!

Some recent acquisition activity includes, among others:

  • Dell bought Perot Systems for $3.9B
  • DotHill bought Cloverleaf
  • Texas Memory Systems (TMS) bought Incipient
  • HP bought IBRIX and 3COM among others
  • LSI bought Onstor
  • VMware bought Zimbra
  • Micron bought Numonyx
  • Exar bought Neterion

Now the industry is abuzz about Dell, which, perhaps using some of the loose change left over from holiday sales, is reported to be in the process of acquiring Israeli clustered storage startup Exanet for about $12M USD. Compared to previous Dell acquisitions, including EqualLogic in 2007 for about $1.4B or last year's Perot deal in the $3.9B range, $12M is a bargain and would probably not even put a dent in the sales and marketing advertising budget, let alone the corporate cash coffers, which as of the Q3-F10 balance sheet show about $12.795B in cash.

Who is Exanet and what is their product solution?
Exanet is a small Israeli startup providing a clustered, scale out NAS file serving storage solution (Figure 1) that began shipping in 2003. The Exanet solution (ExaStore) can be either software based or a packaged solution, with the ExaStore software installed on standard x86 servers and external RAID storage arrays, combining as a clustered NAS file server.

Product features include global namespace, distributed metadata, expandable file systems, virtual volumes, quotas, snapshots, file migration, replication, virus scanning, load balancing, and NFS, CIFS and AFP protocol support. Exanet scales up to 1 Exabyte of storage capacity along with supporting large files and billions of files per cluster.

The target market that Exanet pursues is large scale out NAS where performance (either small random or large sequential I/Os) along with capacity are required. Consequently, in the scale out, clustered NAS file serving space, competitors include IBM GPFS (SONAS), HP IBRIX or PolyServe, Sun Lustre and Symantec SFS among others.

Figure 1: Generic clustered storage model (Courtesy of The Green and Virtual Data Center (CRC))

For a turnkey solution, Exanet packaged its cluster file system software on various vendors' x86 servers combined with third party external Fibre Channel or other storage. This should play well for Dell, which can package the Exanet software on its own servers as well as leverage either SAS or Fibre Channel MD1000/MD3000 external RAID storage among other options (see more below).

Click here to learn more about clustered storage including clustered NAS, clustered and parallel file systems.

Dell

What's the Dell play?

  • It's an opportunity to acquire some intellectual property (IP)
  • It's an opportunity to have IP similar to EMC, HP, IBM, NetApp, Oracle and Symantec among others
  • It's an opportunity to address a market gap or need
  • It's an opportunity to sell more Dell servers, storage and services
  • It's an opportune time for doing acquisitions (bargain shopping)

Note: IBM also announced this past week its new bundled scale out clustered NAS file serving solution based on GPFS, called SONAS. HP has IBRIX in addition to its previous PolyServe acquisition, and Sun has ZFS and Lustre.

How does Exanet fit into the Dell lineup?

  • Dell sells Microsoft based NAS as NX series
  • Dell has an OEM relationship with EMC
  • Dell was OEMing or reselling IBRIX in the past for certain applications or environments
  • Dell has needed to expand its NAS story to balance its iSCSI centric storage story as well as complement its multifunction block storage solutions (e.g. MD3000) and server solutions.

Why Exanet?
Why Exanet, why not one of the other startups or small NAS or cloud file system vendors including BlueArc, Isilon, Panasas, Parascale, Reldata, OpenE or Zetta among others?

My take is that the others were probably either not relevant to what Dell is looking for, lacked a seamless technology and business fit, had technology tied to non-Dell hardware, lacked technology maturity, had investors still expecting a premium valuation, or some combination of the preceding.

Additional thoughts on why Exanet
I think that Dell simply saw an opportunity to acquire some intellectual property (IP), probably including a patent or two. The value of the patents could be in the form of current or future product offerings, perhaps a negotiating tool, or if nothing else a marketing tool. As a marketing tool, Dell, via its EqualLogic acquisition among others, has been able to demonstrate and generate awareness that it actually owns some IP versus OEMing or reselling technology from others. I also think that this is an opportunity to either fill or supplement the solution offering that IBRIX previously provided for high performance, bulk storage and scale out file serving needs.

NAS and file serving supporting unstructured data are a strong growth market for commercial, high performance, specialized or research as well as small business environments. Thus, where EqualLogic plays to the iSCSI block theme, Dell needs to expand its NAS and file serving solutions to provide product diversity to meet various customer application needs, similar to what it does with block based storage. For example, while the iSCSI based EqualLogic PS systems get the bulk of the marketing attention, Dell also has a robust business around the PowerVault MD1000/MD3000 (SAS/iSCSI/FC) and the Microsoft multi protocol based PowerVault NX series, not to mention its EMC CLARiiON based OEM solutions (e.g. Dell AX, Dell/EMC CX).

Thus, Dell can complement the Microsoft multi protocol (block and NAS file) NX series with a packaged solution of Dell servers and MD (or other affordable block) storage powered by Exanet. It is also possible that Dell will find a way to package Exanet as a NAS gateway in front of the iSCSI based EqualLogic PS systems, although that would make for an expensive scale out NAS solution compared to those from other vendors.

That's it for now.

Let's see how this all plays out.

Cheers gs

Greg Schulz – Author The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier)
twitter @storageio


Does IBM Power7 processor announcement signal storage upgrades?

IBM recently announced the Power7 as the latest generation of processors that the company uses in some of its mid range and high end compute servers including the iSeries and pSeries.


IBM Power7 processor wafers (chips)

 

What is the Power7 processor?
The Power7 is the latest generation of IBM processors (chips) that are used as the CPUs in IBM mid range and high end open systems (pSeries) for Unix (AIX) and Linux as well as for the iSeries (aka the AS400 successor). Building on previous Power series processors, the Power7 increases the performance per core (CPU) along with the number of cores per socket (chip) footprint. For example, each Power7 chip that plugs into a socket on a processor card in a server can have up to 8 cores or CPUs. Note that cores are also sometimes known as micro CPUs or virtual CPUs, not to be confused with the virtual CPUs presented via hypervisor abstraction.

Sometimes you may also hear the term or phrase 2 way, 4 way (not to be confused with a Cincinnati style 4 way chili) or 8 way among others, which refers to the number of cores on a chip. Hence, a dual 2 way would be a pair of processor chips each with 2 cores, while a quad 8 way would be 4 processor chips each with 8 cores, and so on.
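Here is the simple arithmetic behind those terms; the 8 core and 4 thread per core figures come from the Power7 material in this post, and the function itself is just an illustration, not an IBM tool:

```python
# Total cores and hardware threads for a given server configuration.
def totals(sockets, cores_per_socket, threads_per_core):
    cores = sockets * cores_per_socket
    return cores, cores * threads_per_core

print(totals(sockets=2, cores_per_socket=2, threads_per_core=1))  # a "dual 2 way": 4 cores, 4 threads
print(totals(sockets=4, cores_per_socket=8, threads_per_core=4))  # a "quad 8 way" Power7: 32 cores, 128 threads
```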


IBM Power7 with up to eight cores per processor (chip)

In addition to faster and more cores in a denser footprint, there are also energy efficiency enhancements, including Energy Star for enterprise servers qualification along with an intelligent power management (IPM, also see here) implementation. IPM is implemented in what IBM refers to as Intelligent Energy technology for turning various parts of the system on or off along with varying processor clock speeds. The benefit is that when there is work to be done, it gets done quickly, and when there is less work, some cores are turned off or clock speeds are slowed down. This is similar to what other industry leaders, including Intel, have deployed with their Nehalem series of processors, which also support IPM.

Additional features of the Power7 include (varies by system solutions):

  • Energy Star for servers qualification, providing enhanced performance and efficiency.
  • IBM Systems Director Express, Standard and Enterprise Editions for simplified management including virtualization capabilities across pools of Power servers as a single entity.
  • PowerVM (Hypervisor) virtualization for AIX, iSeries and Linux operating systems.
  • ActiveMemory enables effective memory capacity to be larger than physical memory, similar to how virtual memory works within many operating systems. The benefit is to enable a partition to have access to more memory which is important for virtual machines along with the ability to support more partitions in a given physical memory footprint.
  • TurboCore and Intelligent Threads enable workload optimization by selecting the applicable mode for the work to be done. For example, single thread per core along with simultaneous threads (2 or 4) modes per core. The trade off is to have more threads per core for concurrent processing, or, fewer threads to boost single stream performance.

IBM has announced several Power7 enabled or based server system models with various numbers of processors and cores along with standalone and clustered configurations including:

IBM Power7 family of server systems

  • Power 750 Express, a 4U server with one to four sockets supporting up to 32 cores (3.0 to 3.5 GHz) and 128 threads (4 threads per core), PowerVM (hypervisor), along with a main memory capacity of 512GB, or 1TB of effective memory using Active Memory Expansion.
  • Power 755, 32 3.3GHz Power7 cores (8 cores per processor) with memory up to 256GB along with AltiVec and VSX SIMD instruction set support. Up to 64 755 nodes, each with 32 cores, can be clustered together for high performance applications.
  • Power 770, up to 64 Power7 cores providing more performance while consuming less energy per core compared to previous Power6 generations. Support for up to 2TB of main memory or RAM using 32GB DIMMs when available later in 2010.
  • Power 780, 64 Power7 cores with TurboCore workload optimization providing a performance boost per core. With TurboCore, 64 cores can operate at 3.8 GHz, or up to 32 cores at 4.1 GHz with twice the amount of cache when more speed per thread is needed. Support for up to 2TB of main memory or RAM using 32GB DIMMs when available later in 2010.

Additional Power7 specifications and details can be found here.

 

What is the DS8000?
The DS8000 is the latest generation of a family of high end enterprise class storage systems supporting IBM mainframe (zSeries) and open systems along with mixed workloads. Being a high end open systems and mainframe platform, the DS8000 competes with similar systems from EMC (Symmetrix/DMX/VMAX), Fujitsu (Eternus DX8000), HDS (Hitachi) and HP (XP series, OEMed from Hitachi). Previous generations (aka predecessors) of the DS8000 include the ESS (Enterprise Storage System) Model 2105 (aka Shark) and the VSS (Versatile Storage Server). Current generation family members include the Power5 based DS8100 and DS8300 along with the Power6 based DS8700.

IBM DS8000 Storage System

Learn more about the DS8000 here, here, here and here.

 

What is the association between the Power7 and DS8000?
Disclosure: Before I go any further, let's be clear on something: what I am about to post is based entirely on researching, analyzing and correlating (connecting the dots) what is publicly and freely available from IBM on the Web (e.g. there is no NDA material being disclosed here that I am aware of), along with prior trends and tendencies of IBM and its solutions. In other words, you can call it speculation, a prediction, an industry analysis perspective, looking into the proverbial crystal ball or an educated guess, and thus it should not be taken as an indicator of what IBM may actually do or be working on. As to what may actually be done or not done, you will need to contact one of the IBM truth squad members.

As to what is the linkage between Power7 and the DS8000?

The linkage between the Power7 and the DS8000 is just that, the Power processors!

At the heart of the DS8000 are Power series processors coupled or clustered together in pairs for performance and availability that run IBM developed storage systems software. While the spin doctors may not agree, essentially the DS8000 and its predecessors are based on and around Power series processors clustered together with a high speed interconnect that combine to host an operating system and IBM developed storage system application software.

Thus, for over a decade, IBM has been able to leverage technology improvement curve advantages, with faster processors, increased memory and I/O connectivity in denser footprints, while enhancing its storage system application software.

Given that the current DS8000 family members utilize 2 way (2 core) or 4 way (4 core) Power5 and Power6 processors, similar to how their predecessors utilized previous generation Power4, Power3 and so forth processors, it only makes sense that IBM might possibly use a Power7 processor in a future DS8000 (or a derivative, perhaps even with a different name or model number). Again, this is all just based on the historical trends and patterns of the IBM storage systems group leveraging the latest generation of Power processors; after all, they are a large customer of the Power systems group.

Consequently it would make sense for IBM storage folks to leverage the new Power7 processors and features, similar to how EMC is leveraging Intel processor enhancements, along with what other vendors are doing.

There is certainly room in the DS8000 architecture for growth in terms of supporting additional nodes or complexes or controllers (or whatever your term of choice is for describing a server), each equipped with multiple processors (chips or sockets) that have multiple cores. While IBM has only commercially released two complex or dual server versions of the DS8000 with various numbers of cores per server, they have come nowhere close to their architectural limit of nodes. In fact, with this release of Power7, as an example, the model 755 can be clustered via InfiniBand with up to 64 nodes, each node having 4 sockets (e.g. 4 way) with up to 8 cores each. That means, on paper, 64 x 4 x 8 = 2048 cores, and each core could have up to 4 threads for concurrency, or half as many cores for more cache performance. Now will IBM ever come out with a 64 node DS8000 on steroids?

Tough to say; maybe possibly some day, to play specmanship vs. the EMC VMAX 256 node architectural limit, however I'm not holding my breath just yet. Thus, with more and faster cores per processor, the ability to increase the number of processors per server or node, along with architectural capabilities to boost the number of nodes in an instance or cluster, on paper alone there is lots of headroom for the DS8000 or a future derivative.

What about software and functionality? Sure, IBM could in theory simply turn the crank and use a new hardware platform that is faster, denser, higher capacity and more energy efficient, however what about new features?

Can IBM enhance its storage systems application software that it evolved from the ESS with new features to leverage underlying hardware capabilities including TurboCore, PowerVM, device and I/O sharing, Intelligent Energy efficiency along with threads enhancements?

Can IBM leverage those and other features to support not only scaling of performance, availability, capacity and energy efficiency in an economical manner, but also add features for advanced automated tiering or data movement plus other popular industry buzzword functionality?

 

Additional thoughts and perspectives
One of the things I find interesting is that some IBM folks, along with their channel partners, will go to great lengths to explain why and how the DS8000 is not just a pair of Power based servers tightly coupled together. Yet, on the other hand, some of those folks will go to great lengths touting the advantages of leveraging off the shelf, commercially available Intel or AMD based servers, such as in IBM's own XIV storage solution.

I can understand in the past when the likes of EMC, Hitachi and Fujitsu were all competing with IBM building bigger and more function rich monolithic systems, however that trend is shifting. The trend now as is being seen with EMC and VMAX is to decouple and leverage more off the shelf commercially available technology combined with custom ASICs where and when needed.

Thus at a time where more attention and discussion is around clustered, grid, scalable storage systems, will we see or hear the IBM folks change their tune about the architectural scale up and out capabilities of the Power enabled DS8000 family?

There had been some industry speculation that the DS8000 would be the end of the line if the Power7 had not been released; that speculation will now (assuming that IBM leverages the Power7 for storage) shift to whether there will be a Power8 or Power9 and so forth.

From a storage perspective, is the DS8K still relevant?

I say yes, given its installed base and the need for IBM to have an enterprise solution of its own (sorry, IMHO XIV does not fit that bill just yet), unless it cuts an OEM deal with the likes of Hitachi or Fujitsu, which, while possible, I do not see as likely near term. Another soft point on its relevance is to gauge the reaction from competitors including EMC and HDS.

From a server perspective, what is the benefit of the new Power7 enabled servers from IBM?

Simple: increased scale of performance for single thread as well as concurrent or parallel application workloads.

In other words, supporting more web sites, partitions for virtual machines and guest operating system instances, databases, compute and other applications that demand performance and economy of scale.

This also means that IBM has a platform to aggressively go after Sun Solaris server customers with a lifeline during the Oracle transition, not to mention being a platform for running Oracle in addition to its own UDB/DB2 database. In addition to being a platform for Unix AIX as well as Linux, the Power7 series also are at the heart of current generation iSeries (the server formerly known as the AS400).


Closing comments (for now):
Given IBM's history of following a Power chip enhancement with a new upgraded version of the DS8000 (or its predecessors, the ESS/2105 aka Shark/VSS) within a reasonable amount of time, I would be surprised if we do not see a new DS8000 (perhaps even renamed or renumbered) within the year.

This is similar to how other vendors leverage new processor chip technology evolution to pace their systems upgrades; for example, many vendors who leverage Intel processors, including EMC among others, have made announcements over the past year since the Nehalem series rolled out.

Let's see what the IBM truth squads have to say, or not say :)

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Technology Tiering, Servers, Storage and Snow Removal

Granted it is winter in the northern hemisphere and thus snow storms should not be a surprise.

However, between December 2009 and early 2010 there has been plenty of record activity, from the U.K. (or here) to the U.S. east coast including New York, Boston and Washington DC, across the midwest and out to California, making for a white Christmas and SANta fun along with snow fun in general in the new year.

2010 Snow Storm via www.star-telegram.com

What does this have to do with Information Factories aka IT resources, including public or private clouds, facilities, servers, storage and networking along with data management, let alone tiering?

What does this have to do with tiered snow removal, or even snow fun?

Simple: different tools are needed for addressing various types of snow, from wet and heavy to light powdery dustings to deep downfalls. Likewise, there are different types of servers, storage and data networks along with operating systems, management tools and even hypervisors to deal with various application needs or requirements.

First, let's look at tiered IT resources (servers, storage, networks, facilities, data protection and hypervisors) to meet various efficiency, optimization and service level needs.

Do you have tiered IT resources?

Let me rephrase that question: do you have different types of servers with various performance, availability, connectivity and software that support various applications and cost levels?

Thus the whole notion of tiered IT resources is to be able to have different resources that can be aligned to the task at hand in order to meet performance, availability, capacity, and energy as well as economic and service level agreement (SLA) requirements.

Computers or servers are targeted for different markets including Small Office Home Office (SOHO), Small Medium Business (SMB), Small Medium Enterprise (SME) and ultra large scale or extreme scaling, including high performance super computing. Servers are also positioned for different price bands and deployment scenarios.

General categories of tiered servers and computers include:

  • Laptops, desktops and workstations
  • Small floor standing towers or rack mounted 1U and 2U servers
  • Medium sized floor standing towers or larger rack mounted servers
  • Blade Centers and Blade Servers
  • Large size floor standing servers, including mainframes
  • Specialized fault tolerant, rugged and embedded processing or real time servers

Servers have different names (email server, database server, application server, web server, video or file server, network server, security server, backup server or storage server) associated with them depending on their use. In each of the previous examples, what defines the type of server is the type of software being used to deliver a type of service. Sometimes the term appliance will be used for a server; this is indicative of the type of service the combined hardware and software solution is providing. For example, the same physical server running different software could be a general purpose application server, a database server running for example Oracle, IBM, Microsoft or Teradata among other databases, an email server or a storage server.

This can lead to confusion when looking at servers, in that a server may be able to support different types of workloads, so whether it should be considered a server, storage, networking or application platform depends on the type of software being used on it. If, for example, storage software in the form of a clustered and parallel file system is installed on a server to create a highly scalable network attached storage (NAS) or cloud based storage service solution, then the server is a storage server. If the server has a general purpose operating system such as Microsoft Windows, Linux or UNIX and a database on it, it is a database server.

While not technically a type of server, some manufacturers use the term tin wrapped software in an attempt to not be classified as an appliance, server or hardware vendor but want their software to be positioned more as a turnkey solution. The idea is to avoid being perceived as a software only solution that requires integration with hardware. The solution is to use off the shelf commercially available general purpose servers with the vendors software technology pre integrated and installed ready for use. Thus, tin wrapped software is a turnkey software solution with some tin, or hardware, wrapped around it.

How about the same with tiered storage?

That is, different tiers (Figure 1) of fast, high performance disk including RAM or flash based SSD, fast Fibre Channel or SAS disk drives, or high capacity SAS and SATA disk drives, along with magnetic tape as well as cloud based backup or archive?

Tiered Storage Resources
Figure 1: Tiered Storage resources

Tiered storage is also sometimes thought of in terms of large enterprise class solutions or midrange, entry level, primary, secondary, near line and offline. Not to be forgotten, there are also tiered networks that support various speeds, convergence, multi tenancy and other capabilities, from IO Virtualization (IOV) to traditional LAN, SAN, MAN and WANs, including 1Gb Ethernet (1GbE) and 10GbE up to emerging 40GbE and 100GbE, not to mention various Fibre Channel speeds supporting various protocols.

The notion around tiered networks is, as with servers and storage, to enable aligning the right technology to the task at hand economically while meeting service needs.

Two other common IT resource tiering techniques involve facilities and data protection. Tiered facilities can indicate size, availability and resiliency among other characteristics. Likewise, tiered data protection aligns the applicable technology to support different RTO and RPO requirements, for example using synchronous replication where applicable vs. asynchronous time delayed replication for longer distances combined with snapshots. Other forms of tiered data protection include traditional backups to disk, tape or cloud.
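As an illustration of aligning protection technology to recovery objectives, here is a minimal sketch; the tier names and threshold values are assumptions for the example, not recommendations:

```python
# Illustrative mapping from recovery point objective (RPO) to a protection tier.
# The tiers and thresholds are examples only; real requirements vary by application.
def protection_tier(rpo_minutes):
    if rpo_minutes == 0:
        return "synchronous replication"                  # no data loss, distance limited
    if rpo_minutes <= 15:
        return "asynchronous replication plus snapshots"  # time delayed, longer distance
    if rpo_minutes <= 24 * 60:
        return "scheduled snapshots plus disk backup"
    return "backup to disk, tape or cloud"

for rpo in (0, 10, 240, 10080):
    print(rpo, "minutes ->", protection_tier(rpo))
```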

There is a new, emerging form of tiering in many IT environments: tiered virtualization, or specifically tiered server hypervisors in virtual data centers, with objectives similar to having different server, storage, network, data protection or facilities tiers. Instead of an environment running all VMware, a mix of Microsoft HyperV or Xen among other hypervisors may be deployed to meet different application service class requirements. For example, VMware may be used for premium features and functionality on some applications, while others that do not need those features and require lower operating costs leverage HyperV or Xen based solutions. Taking the tiering approach a step further, one could also declare tiered databases, for example legacy Oracle vs. MySQL or Microsoft SQL Server among other examples.

What about IT clouds: are those different types of resources, or essentially an extension of existing IT capabilities, for example cloud storage being another tier of data storage?

There is another form of tiering, particularly during the winter months in the northern hemisphere where there is an abundance of snow this time of the year. That is, tiered snow management, removal or movement technologies.

What about tiered snow removal?

Well, let's get back to that then.

Like IT resources, there are different technologies that can be used for moving, removing, melting or managing snow.

For example, I can't do much about getting rid of snow other than pushing it all down the hill and into the river, something that would take time and lots of fuel. Or, I can manage where I put snow piles to be prepared for the next storm, placing them where they will melt and help avoid spring flooding. Some technologies can be used for relocating snow elsewhere, kind of like archiving data onto different tiers of storage.

Regardless of whether it is a snowstorm or IT clouds (public or private), virtual, managed service provider (MSP), hosted or traditional IT data centers, all require physical servers, storage, I/O and data networks along with software, including management tools.

Granted, not all servers, storage or networking technology, let alone software, are the same, as they address different needs. IT resources including servers, storage, networks, operating systems and even hypervisors for virtual machines are often categorized and aligned to different tiers corresponding to needs and characteristics (Figure 2).

Tiered IT Resources
Figure 2: Tiered IT resources

For example, in Figure 3 there is a lightweight plastic shovel (Shovel 1) for moving small amounts of snow in a wide stripe or pass. Then there is a narrow shovel for digging things out or breaking up snow piles (Shovel 2). Also shown is a light duty snow blower (snow thrower) capable of dealing with powdery or non wet snow and grooming in tight corners or small areas.

Tiered Snow tools
Figure 3: Tiered Snow management and migration tools

For other light dustings, a yard leaf blower does double duty for migrating or moving snow in small or tight corners such as decks and patios, or for cleanup. Larger snowfalls, or situations where there is a lot of area to clear, involve heavier duty tools such as the Kawasaki Mule with a 5 foot Curtis plow. The Mule is a multifunction, multi protocol tool capable of being used for hauling items, towing, pulling or recreational tasks.

When all else fails, there is a pickup truck to get or go out and about, not to mention to pull other vehicles out of ditches or piles of snow when they become stuck!

Figure 4: Sometimes the snow is light, making for fast, low latency migration

Figure 5: And sometimes even snow migration technology goes offline!

And that is it for now!

Enjoy the northern hemisphere winter and snow while it lasts, make the best of it with the right tools to simplify the tasks of movement and management, similar to IT resources.

Keep in mind, it's about the tools and when, along with how, to use them for various tasks for efficiency and effectiveness, and a bit of snow fun.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

California Center for Sustainable Energy (CCSE)



CCSE Facility and Seminar Series

This past week I had the honor of delivering a keynote presentation in San Diego at the California Center for Sustainable Energy (CCSE) as part of their continuing education and community outreach workshop and seminar series. The theme of the well attended event was Next Generation Data Center Solutions, and my talk centered on leveraging Green and Virtual Data Centers for enabling efficiency and effectiveness. In addition to my keynote, the event included a panel discussion that I moderated with representatives of the event's sponsor, Compucom, along with their special guests APC, HP, Intel and VMware.

The CCSE has a focus around Climate Change, Energy Efficiency, Green Buildings, Renewable Energy, Transportation, Home and Business. Their services and focus include awareness and outreach, education programs, a library and tools, and consulting and associated services. Speaking of their library, there is even a signed copy of my book The Green and Virtual Data Center (CRC) now at the CCSE library that can be checked out along with their other resources.

The CCSE staff and facilities were fantastic with hosts Mike Bigelow (an energy engineer) and Marlene King (program manager) orchestrating a great event.

If you are in the San Diego area, check out the CCSE, located at 8690 Balboa Ave., Suite 100. They have a great library, cool demonstrations and tools that you can check out to assist with optimizing IT data centers from an energy efficiency standpoint. Learn more about the CCSE here.


Follow these links to view additional videos or podcasts, tips, articles, books, reports and events.

Cheers
gs

Greg Schulz – Author The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier)
twitter @storageio


Infosmack Episode 34, VMware, Microsoft and More

Following on the heels of several guest appearances late in 2009 (here, here, here and here) on the Storage Monkeys Infosmack weekly podcast, I was recently asked to join them again for the inaugural 2010 show (Episode 34).

Along with VMguru Rich Brambley and hosts Greg Knieriemen and Marc Farley, we discussed several recent industry topics in this first show of the year, which can be accessed here or on iTunes.

Here's a link to the podcast where you can listen to the discussion, including VMware Go, VMware buying Zimbra, vendor alliances such as HP and Microsoft HyperV and EMC+Cisco+VMware, along with data protection issues and options (or opportunities) for virtual servers, among other topics.

I have included the following links that pertain to some of the items we discussed during the show.

Enjoy the show.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

2010 and 2011 Trends, Perspectives and Predictions: More of the same?

2011 is not a typo; I figured that since I'm getting caught up on some things, why not get a jump on next year as well.

Since 2009 went by so fast, and I'm finally getting around to doing an obligatory 2010 predictions post, let's take a look at both 2010 and 2011.

Actually, I'm getting around to doing a post here having already done interviews and articles for others that will soon be released.

Based on prior trends and looking at forecasts, a simple prediction is that some of the items for 2010 will apply for 2011 as well, given that some of this year's items may have been predicted by some in 2008, 2007, 2006, 2005 or, well, ok, you get the picture. :)

Predictions are fun and funny in that for some, they are taken very seriously, while for others, at best they are taken with a grain of salt depending on where you sit. This applies both to the reader as well as to whoever is making the predictions, along with their various motives or incentives.

Some are serious, some not so much…

For some, predictions are a great way of touting or promoting favorite wares (hard, soft or services) or getting yet another plug (YAP is a TLA BTW) in to meet coverage or exposure quota.

Meanwhile for others, predictions are a chance to brush up on new terms for the upcoming season of buzzword bingo games (did you pick up on YAP).

In honor of the Vancouver winter games, I'm expecting some cool Olympic sized buzzword bingo games, with a new slippery fast one being federation. Some buzzwords will take a break in 2010 as well as 2011, having been worked pretty hard the past few years, while others that have been on break will reappear well rested, rejuvenated, and ready for duty.

Let's also clarify something regarding predictions: they can be from at least two different perspectives. One view is a trend of what will be talked about or discussed in the industry. The other is in terms of what will actually be bought, deployed and used.

What can be confusing is that sometimes the two perspectives are intermixed or assumed to be one and the same, and for 2010 I see that trend continuing. In other words, there is adoption, in terms of customers asking about and investigating technologies, vs. deployment, where they are buying, installing and using those technologies in primary situations.

It is safe to say that there is still no such thing as an information, data or processing recession. Ok, surprise surprise; my dogs could have probably made that prediction during a nap. However, what this means is that more data will need to be moved, processed and stored for longer periods of time and at a lower cost without degrading performance or availability.

This means denser technologies that enable a lower per unit cost of service without negatively impacting performance, availability, capacity or energy efficiency will be needed. In other words, watch for an expanded virtualization discussion around life beyond consolidation for servers, storage, desktops and networks, with a theme around productivity and virtualization for agility and management enablement.

Certainly there will be continued mergers and acquisitions on both a small and a large scale, ranging from liquidation sales or bargain hunting to a large or mega blockbuster deal or two. I'm thinking in terms of outside the box deals, the type that will have people wondering, perhaps confused, as to why such a deal would be done until the whole picture is revealed and thought out.

In other words, outside of perhaps IBM, HP, Oracle, Intel or Microsoft among a few others, no vendor is too large not to be acquired, merged with, or even involved in a reverse merger. I'm also thinking in terms of vendors filling in niche areas as well as building out their larger portfolios and IT stacks for integrated solutions.

Ok, let's take a look at some easy ones, layups or slam dunks:

  • More cluster, cloud conversations and confusion (public vs. private, service vs. product vs. architecture)
  • More server, desktop, IO and storage consolidation (excuse me, server virtualization)
  • Data footprint impact reduction ranging from deletion to archive to compress to dedupe among others
  • SSD and in particular flash continues to evolve with more conversations around PCM
  • Growing awareness of social media as yet another tool for customer relations management (CRM)
  • Security, data loss/leak prevention, digital forensics, PCI (payment card industry) and compliance
  • Focus expands from gaming/digital surveillance/security and energy to healthcare
  • Fibre Channel over Ethernet (FCoE) mainstream in discussions with some initial deployments
  • Continued confusion of Green IT and carbon reduction vs. economic and productivity (Green Gap)
  • No such thing as an information, data or processing recession, granted budgets are strained
  • Server, Storage or Systems Resource Analysis (SRA) with event correlation
  • SRA tools that provide and enable automation along with situational awareness

The green gap of confusion will continue, with carbon or environment centric stories and messages continuing to take a back stage as people realize the other dimension of green: productivity.

As previously mentioned, virtualization of servers and storage continues to be popular, with the focus expanding from just consolidation to one around agility and flexibility, enabling production, high performance or other systems that do not lend themselves to consolidation to be virtualized.

6Gb SAS interfaces as well as more SAS disk drives continue to gain popularity. I have said in the past that there was a long shot that 8GFC disk drives might appear. We might very well see those in higher end systems, while SAS drives continue to pick up the high performance spinning disk role in midrange systems.

Granted, some types of disk drives will give way over time to others; for example, high performance 3.5" 15K RPM Fibre Channel disks will give way to 2.5" 15K RPM SAS, boosting density and energy efficiency while maintaining performance. SSD will help to offload hot spots as it has in the past, enabling disks to be more effectively used in their applicable roles or tiers, with a net result of enhanced optimization, productivity and economics, all of which have environmental benefits (e.g. the other Green IT, closing the Green Gap).
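
To put the hot spot offload point into rough numbers, here is a back-of-the-envelope sketch; the IOPS-per-drive figure and the I/O skew are illustrative assumptions only.

# Hypothetical illustration of SSD hot spot offload reducing spindle count.

import math

def hdd_count(iops, iops_per_drive=180):
    """Spindles needed when drive count is driven by IOPS rather than capacity
    (assumes a hypothetical 180 IOPS per 15K RPM drive)."""
    return math.ceil(iops / iops_per_drive)

total_iops = 50_000
hot_io_share = 0.70        # assumed share of I/O hitting the hottest data
before = hdd_count(total_iops)
after = hdd_count(total_iops * (1 - hot_io_share))   # SSD absorbs the hot I/O
print(f"drives for performance without SSD offload: {before}")
print(f"drives once SSD absorbs {hot_io_share:.0%} of the I/O: {after}")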

What I don't see occurring, at least not in 2010

  • An information or data recession requiring less server, storage, I/O networking or software resources
  • OSD (object based disk storage without a gateway) at least in the context of T10
  • Mainframes, magnetic tape, disk drives, PCs, or Windows going away (at least physically)
  • Cisco cracking top 3, no wait, top 5, no make that top 10 server vendor ranking
  • More respect for growing and diverse SOHO market space
  • iSCSI taking over for all I/O connectivity, however I do see iSCSI expanding its footprint
  • FCoE and flash based SSD reaching tipping point in terms of actual customer deployments
  • Large increases in IT Budgets and subsequent wild spending rivaling the dot com era
  • Backup, security, data loss prevention (DLP), data availability or protection issues going away
  • Brett Favre and the Minnesota Vikings winning the Super Bowl

What will be predicted at the end of 2010 for 2011 (some of these will be déjà vu)

  • Many items that were predicted this year, last year, the year before that and so on…
  • Dedupe moving into primary and online active storage, rekindling of dedupe debates
  • Demise of cloud in terms of hype and confusion being replaced by federation
  • Clustered, grid, bulk and other forms of scale out storage grow in adoption
  • Disk, Tape, RAID, Mainframe, Fibre Channel, PCs, Windows being declared dead (again)
  • 2011 will be the year of Holographic storage and T10 OSD (an annual prediction by some)
  • FCoE kicks into broad and mainstream deployment adoption reaching tipping point
  • 16Gb (16GFC) Fibre Channel gets more attention stirring FCoE vs. FC vs. iSCSI debates
  • 100GbE gets more attention along with 4G adoption in order to move more data
  • Demise of iSCSI at the hands of SAS at low end, FCoE at high end and NAS from all angles

Gaining ground in 2010, however not yet in full stride (at least in terms of customer deployment)

  • On the connectivity front, iSCSI, 6Gb SAS, 8Gb Fibre Channel, FCoE and 100GbE
  • SSD/flash based storage everywhere – continued expansion, however still not ubiquitous
  • Dedupe everywhere including primary storage – it's still far from its full potential
  • Public and private clouds along with pNFS as well as scale out or clustered storage
  • Policy based automated storage tiering and transparent data movement or migration (a simple sketch of the policy logic follows this list)
  • Microsoft Hyper-V and Oracle based server virtualization technologies
  • Open source based technologies along with heterogeneous encryption
  • Virtualization life beyond consolidation addressing agility, flexibility and ease of management
  • Desktop virtualization using Citrix, Microsoft and VMware along with Microsoft Windows 7
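
Since policy based automated tiering shows up in the list above, here is a minimal sketch of what the decision logic conceptually looks like; the thresholds, chunk granularity and tier names are hypothetical, and real arrays implement this internally with far more sophistication.

# Minimal, hypothetical sketch of policy driven tiering: promote chunks whose
# recent access count exceeds a threshold, demote cold ones.

from dataclasses import dataclass

@dataclass
class Chunk:
    chunk_id: int
    tier: str         # "ssd" or "hdd"
    accesses: int     # accesses observed in the last sampling window

PROMOTE_THRESHOLD = 100   # accesses per window to earn a spot on SSD
DEMOTE_THRESHOLD = 10     # below this, move back to HDD

def plan_moves(chunks):
    """Return (chunk_id, from_tier, to_tier) migration actions."""
    moves = []
    for c in chunks:
        if c.tier == "hdd" and c.accesses >= PROMOTE_THRESHOLD:
            moves.append((c.chunk_id, "hdd", "ssd"))
        elif c.tier == "ssd" and c.accesses < DEMOTE_THRESHOLD:
            moves.append((c.chunk_id, "ssd", "hdd"))
    return moves

chunks = [Chunk(1, "hdd", 250), Chunk(2, "ssd", 3), Chunk(3, "hdd", 40)]
print(plan_moves(chunks))   # [(1, 'hdd', 'ssd'), (2, 'ssd', 'hdd')]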

Buzzword bingo hot topics and themes (in no particular order) include:

  • 2009 and previous year carry-over items including cloud, iSCSI, Hyper-V, dedupe and open source
  • Federation takes over some of the work of cloud, virtualization, clusters and grids
  • E2E, End to End management preferably across different technologies
  • SAS, Serial Attached SCSI for server to storage systems and as disk to storage interface
  • SRA, E2E, event correlation and other situational awareness related IRM tools
  • Virtualization, Life beyond consolidation enabling agility, flexibility for desktop, server and storage
  • Green IT, Transitions from carbon focus to economic with efficiency enabling productivity
  • FCoE, Continues to evolve and mature with more deployments however still not at tipping point
  • SSD, Flash based mediums continue to evolve however tipping point is still over the horizon
  • IOV, I/O Virtualization for both virtual and non virtual servers
  • Other new or recycled buzzword bingo candidates including PCoIP and 4G, among others

RAID will again be pronounced dead or no longer relevant, yet it will be found in more diverse deployments from the consumer to the enterprise. In other words, RAID may be boring and thus no longer relevant to talk about, yet it is being used everywhere and enhanced in evolutionary ways, perhaps for some even revolutionary ways.

Tape continues to be declared dead (e.g. it is on the zombie technology list), yet it is being enhanced, purchased and utilized at higher rates, with more data stored on it than at any point in the past. Instead of being killed off by the disk drive, tape is being kept around both for traditional uses and to take on new roles where it is best suited, such as long term or bulk off-line storage of data in an ultra dense, energy efficient and economical manner.

What I am seeing and hearing is that customers using tape are able to reduce the number of drives or transports; by leveraging disk buffers or caches, including VTL and dedupe devices, they are able to operate their remaining drives at higher utilization, thus requiring fewer devices while storing more data on media than in the past.
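
Here is the arithmetic behind that observation; the drive speed, backup window and utilization figures are illustrative assumptions, not benchmarks.

# Hypothetical illustration: higher tape drive utilization behind a disk
# buffer or VTL means fewer drives are needed to move the same data.

import math

def drives_needed(data_tb, window_hours, drive_mb_s, utilization):
    """Tape drives required to move data_tb within window_hours when each
    drive streams at drive_mb_s and is kept busy a given fraction of the time."""
    effective_mb_s = drive_mb_s * utilization
    tb_per_drive = effective_mb_s * 3600 * window_hours / 1_000_000
    return math.ceil(data_tb / tb_per_drive)

# Direct-to-tape jobs that stop and start might keep drives busy 30% of the
# time; staging through a disk buffer or VTL lets them stream at, say, 85%.
print(drives_needed(data_tb=20, window_hours=8, drive_mb_s=120, utilization=0.30))
print(drives_needed(data_tb=20, window_hours=8, drive_mb_s=120, utilization=0.85))

The gap between those two numbers is essentially the value of the disk buffer.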

Likewise, even though I have been a fan of SSD for about 20 years and am bullish on its continued adoption, I do not see SSD killing off the spinning disk drive anytime soon. Disk drives are helping tape take on this new role by being a buffer or cache in the form of VTLs, disk based backup and bulk storage enhanced with compression, dedupe, thin provisioning and replication among other functionality.

There you have it, my predictions, observations and perspectives for 2010 and 2011. It is a broad and diverse list; however, I also get asked about and see a lot of different technologies, techniques and trends tied to IT resources (servers, storage, I/O and networking, hardware, software and services).

Let's see how they play out.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

EPA Server and Storage Workshop Feb 2, 2010

EPA Energy Star

Following up on a recent post pertaining to US EPA Energy Star® for servers, data center storage and data centers, there will be a workshop held on Tuesday, February 2, 2010 in San Jose, CA.

Here is the note (Italics added by me for clarity) from the folks at EPA with information about the event and how to participate.

 

Dear ENERGY STAR® Servers and Storage Stakeholders:

Representatives from the US EPA will be in attendance at The Green Grid Technical Forum in San Jose, CA in early February, and will be hosting information sessions to provide updates on recent ENERGY STAR servers and storage specification development activities.  Given the timing of this event with respect to ongoing data collection and comment periods for both product categories, EPA intends for these meetings to be informal and informational in nature.  EPA will share details of recent progress, identify key issues that require further stakeholder input, discuss timelines for the completion, and answer questions from the stakeholder community for each specification.

The sessions will take place on February 2, 2010, from 10:00 AM to 4:00 PM PT, at the San Jose Marriott.  A conference line and Webinar will be available for participants who cannot attend the meeting in person.  The preliminary agenda is as follows:

Servers (10:00 AM to 12:30 PM)

  • Draft 1 Version 2.0 specification development overview & progress report
    • Tier 1 Rollover Criteria
    • Power & Performance Data Sheet
    • SPEC efficiency rating tool development
  • Opportunities for energy performance data disclosure

 

Storage (1:30 PM to 4:00 PM)

  • Draft 1 Version 1.0 specification development overview & progress report
  • Preliminary stakeholder feedback & lessons learned from data collection 

A more detailed agenda will be distributed in the coming weeks.  Please RSVP to storage@energystar.gov or servers@energystar.gov no later than Friday, January 22.  Indicate in your response whether you will be participating in person or via Webinar, and which of the two sessions you plan to attend.

Thank you for your continued support of ENERGY STAR.

 

End of EPA Transmission

For those attending the event, I look forward to seeing you there in person on Tuesday before flying down to San Diego where I will be presenting on Wednesday the 3rd at The Green Data Center Conference.

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

RAID Relevance Revisited

Following up from some previous posts on the topic, a continued discussion point in the data storage industry is the relevance (or lack thereof) of RAID (Redundant Array of Independent Disks).

These discussions tend to revolve around how RAID is dead due to its real or perceived inability to continue scaling in terms of the performance, availability, capacity, economic or energy capabilities needed, or when compared to those of newer techniques, technologies or products.

RAID Relevance

While there are many new and evolving approaches to protecting data in addition to maintaining availability or accessibility of information, RAID, despite the fanfare, is far from dead, at least on the technology front.

Sure, there are issues or challenges that require continued investment in RAID, as has been the case over the past 20 years; however, those will be addressed on a go-forward basis via continued innovation and evolution along with riding technology improvement curves.

Now, from a marketing standpoint, ok, I can see where the RAID story is dead or boring and something new and shiny is needed, or at least the pitch needs to change to sound like something new.

Consequently, being long in the tooth and given some of the aforementioned items among others, older technologies that may be boring or lack sizzle or marketing dollars can be, and often are, declared dead on the buzzword bingo circuit. After all, how long now has the industry trade group the RAID Advisory Board (RAB) been missing in action, retired, spun down, archived or ILMed?

RAID remains relevant because, like other dead or zombie technologies, it has reached the plateau of productivity and profitability. That success is also something that emerging technologies envy as their future domain, and thus a classic marketing move is to declare the incumbent dead.

The reality is that RAID in all of its various instances from hardware to software, standard to non-standard with extensions is very much alive from the largest enterprise to the SMB to the SOHO down into consumer products and all points in between.

Now candidly, like any technology that is about 20 years old if not older (after all, the disk drive is over 50 years old and how long has it been declared dead now?), RAID in some ways is long in the tooth, and there are certainly issues to be addressed, just as others have been taken care of in the past. Some of these include the overhead of rebuilding large capacity 1TB and 2TB disk drives, and even larger ones in the not so distant future.
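
A quick bit of arithmetic shows why rebuild overhead keeps coming up; the sustained rebuild rate below is a hypothetical best case, and real rebuilds run slower while also servicing host I/O.

# Best-case single-drive rebuild time at an assumed sustained rebuild rate.

def rebuild_hours(capacity_tb, rebuild_mb_s):
    capacity_mb = capacity_tb * 1_000_000
    return capacity_mb / rebuild_mb_s / 3600

for tb in (1, 2, 4):
    print(f"{tb} TB drive at 50 MB/s: ~{rebuild_hours(tb, 50):.1f} hours")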

There are also issues pertaining to distributed data protection in support of cloud, virtualized or other solutions that need to be addressed. In fact, go way back to when RAID appeared commercially on the scene in the late 80s, and one of the value propositions among others was to address the reliability of the emerging large capacity, multi-MByte sized SCSI disk drives. It seems almost laughable today that a decade later, when 1GB disk drives appeared on the market back in the 90s, there was renewed concern about RAID and disk drive rebuild times.

Rest assured, I think that there is a need and plenty of room for continued innovative evolution around RAID related technologies and their associated storage systems or packaging on a go-forward basis.

What I find interesting is that some of the issues facing RAID today are similar to those of a decade ago, for example having to deal with large capacity disk drive rebuilds, distributed data protection and availability, performance, ease of use, and so the list goes.

However, what happened was that vendors continued to innovate in terms of basic performance, accelerating rebuild rates with improvements to rebuild algorithms and by leveraging faster processors, busses and other techniques. In addition, vendors continued to innovate in terms of new functionality, including adopting RAID 6, which for the better part of a decade, outside of a few niche vendors, languished as one of those future technologies that probably nobody would ever adopt; we know that to be different now and for the past several years. RAID 6 is one of those areas where vendors who do not have it are either adding it, enhancing it, or telling you why you do not need it or why it is no good for you.

An example of how RAID 6 is being enhanced is boosting performance on normal read and write operations, along with accelerating performance during disk rebuilds. Also tied to RAID 6 and disk drive rebuilds are improvements in controller design to detect and proactively make repairs on the fly, minimizing or eliminating errors and diminishing the need for drive rebuilds, similar to what was done in previous generations. Let's also not forget the improvements in disk drives boosting performance, availability, capacity and energy characteristics over time.
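
For readers curious what dual parity actually computes, here is a minimal, illustrative sketch of the common P+Q approach, where P is a plain XOR across the data drives and Q is a weighted sum over the Galois field GF(2^8); this is textbook math shown for illustration, not the implementation of any particular controller.

# Minimal RAID 6 P+Q parity sketch over GF(2^8), reduction polynomial 0x11D.

def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) modulo x^8 + x^4 + x^3 + x^2 + 1."""
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D
        b >>= 1
    return result

def pq_parity(data_bytes):
    """Compute P (XOR) and Q (weighted GF sum) for one byte position
    across the data drives."""
    p, q, g = 0, 0, 1          # g walks through powers of the generator 2
    for d in data_bytes:
        p ^= d
        q ^= gf_mul(g, d)
        g = gf_mul(g, 2)
    return p, q

data = [0x11, 0x22, 0x33, 0x44]    # one byte from each of four data drives
p, q = pq_parity(data)

# Any single lost data byte can be recovered from P alone by XORing the rest.
missing = 2
recovered = p
for i, d in enumerate(data):
    if i != missing:
        recovered ^= d
assert recovered == data[missing]
print(f"P=0x{p:02X} Q=0x{q:02X}, recovered byte 0x{recovered:02X}")

Losing two drives at once is where Q and the GF(2^8) math earn their keep.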

Funny how these and other enhancements are similar to those made to RAID controller hardware and software in the early to mid 2000s, fine tuning them to support high capacity SATA disk drives that had different RAS characteristics than the higher performance, lower capacity enterprise drives.

Here is my point.

RAID to some may be dead, while others continue to rely on it. Meanwhile, others are working on enhancing the technology for future generations of storage systems and application requirements. Thus, in different shapes, forms, configurations, features, functionality or packaging, the spirit of RAID is very much alive, well and relevant.

Regardless of whether a solution uses two or three way disk mirroring for availability, or RAID 0 striping of fast SSD, SAS or FC disks for performance with data protection via rapid restoration from some other low cost medium (perhaps RAID 6 or tape), or single, dual or triple parity protection, or whether it uses small block, multi-MByte or volume based chunklets, let alone whether it is hardware or software based, local or distributed, standard or non-standard, chances are there is some theme of RAID involved. A quick comparison of the capacity overhead of those options follows.
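
Here is that comparison in idealized form; spares, metadata and vendor-specific overhead are ignored.

# Idealized usable-capacity fraction for the protection schemes named above.

def usable_fraction(drives, scheme):
    if scheme == "mirror-2":          # two-way mirroring
        return 0.5
    if scheme == "mirror-3":          # three-way mirroring
        return 1 / 3
    if scheme == "raid5":             # single parity
        return (drives - 1) / drives
    if scheme == "raid6":             # dual parity
        return (drives - 2) / drives
    if scheme == "triple-parity":
        return (drives - 3) / drives
    raise ValueError(scheme)

for scheme in ("mirror-2", "mirror-3", "raid5", "raid6", "triple-parity"):
    print(f"{scheme:>13}: {usable_fraction(8, scheme):.0%} usable of an 8-drive set")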

Granted, you do not have to call it RAID if you prefer!

As a closing thought, if RAID were no longer relevant, then why do the post-RAID, next generation, life beyond RAID or whatever you prefer to call them technologies need to tie themselves to the themes of RAID? Simple: RAID is still relevant in some shape or form to different audiences, and it is a great way of stimulating discussion or debate in a constantly evolving industry.

BTW, I'm still waiting for the revolutionary piece of hardware that does not require software, and the software that does not require hardware, and that includes playing games with server-less servers using hypervisors :).

Provide your perspective on RAID and its relevance in the reader poll.

Stay tuned for more about RAID's relevance, as I don't think we have heard the last of this.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Poll: Networking Convergence, Ethernet, InfiniBand or both?

I just received an email in my inbox from Voltaire along with a pile of other advertisements, advisories, alerts and announcements from other folks.

What caught my eye in the email was that it announces new survey results, which you can read here as well as below.

The question that this survey announcement prompts for me, and hence why I am posting it here, is how dominant will InfiniBand be on a go-forward basis? The answer, I think, is it depends…

It depends on the target market or audience, what their applications and technology preferences are along with other service requirements.

I think that there is and will remain a place for InfiniBand; the questions are where and for what types of environments, as well as why have both InfiniBand and Ethernet, including Fibre Channel over Ethernet (FCoE), in support of unified or converged I/O and data networking.

So here is the note that I received from Voltaire:

 

Hello,

A new survey by Voltaire (NASDAQ: VOLT) reveals that IT executives plan to use InfiniBand and Ethernet technologies together as they refresh or build new data centers. They’re choosing a converged network strategy to improve fabric performance which in turn furthers their infrastructure consolidation and efficiency objectives.

The full press release is below.  Please contact me if you would like to speak with a Voltaire executive for further commentary.

Regards,
Christy

____________________________________________________________
Christy Lynch| 978.439.5407(o) |617.794.1362(m)
Director, Corporate Communications
Voltaire – The Leader in Scale-Out Data Center Fabrics
christyl@voltaire.com | www.voltaire.com
Follow us on Twitter: www.twitter.com/voltaireltd

FOR IMMEDIATE RELEASE:

IT Survey Finds Executives Planning Converged Network Strategy:
Using Both InfiniBand and Ethernet

Fabric Performance Key to Making Data Centers Operate More Efficiently

CHELMSFORD, Mass. and RA'ANANA, Israel, January 12, 2010 – A new survey by Voltaire (NASDAQ: VOLT) reveals that IT executives plan to use InfiniBand and Ethernet technologies together as they refresh or build new data centers. They’re choosing a converged network strategy to improve fabric performance which in turn furthers their infrastructure consolidation and efficiency objectives.

Voltaire queried more than 120 members of the Global CIO & Executive IT Group, which includes CIOs, senior IT executives, and others in the field that attended the 2009 MIT Sloan CIO Symposium. The survey explored their data center networking needs, their choice of interconnect technologies (fabrics) for the enterprise, and criteria for making technology purchasing decisions.

“Increasingly, InfiniBand and Ethernet share the ability to address key networking requirements of virtualized, scale-out data centers, such as performance, efficiency, and scalability,” noted Asaf Somekh, vice president of marketing, Voltaire. “By adopting a converged network strategy, IT executives can build on their pre-existing investments, and leverage the best of both technologies.”

When asked about their fabric choices, 45 percent of the respondents said they planned to implement both InfiniBand and Ethernet as they made future data center enhancements. Another 54 percent intended to rely on Ethernet alone.

Among additional survey results:

  • When asked to rank the most important characteristics for their data center fabric, the largest number (31 percent) cited high bandwidth. Twenty-two percent cited low latency, and 17 percent said scalability.
  • When asked about their top data center networking priorities for the next two years, 34 percent again cited performance. Twenty-seven percent mentioned reducing costs, and 16 percent cited improving service levels.
  • A majority (nearly 60 percent) favored a fabric/network that is supported or backed by a global server manufacturer.

InfiniBand and Ethernet interconnect technologies are widely used in today’s data centers to speed up and make the most of computing applications, and to enable faster sharing of data among storage and server networks. Voltaire’s server and storage fabric switches leverage both technologies for optimum efficiency. The company provides InfiniBand products used in supercomputers, high-performance computing, and enterprise environments, as well as its Ethernet products to help a broad array of enterprise data centers meet their performance requirements and consolidation plans.

About Voltaire
Voltaire (NASDAQ: VOLT) is a leading provider of scale-out computing fabrics for data centers, high performance computing and cloud environments. Voltaire’s family of server and storage fabric switches and advanced management software improve performance of mission-critical applications, increase efficiency and reduce costs through infrastructure consolidation and lower power consumption. Used by more than 30 percent of the Fortune 100 and other premier organizations across many industries, including many of the TOP500 supercomputers, Voltaire products are included in server and blade offerings from Bull, HP, IBM, NEC and Sun. Founded in 1997, Voltaire is headquartered in Ra’anana, Israel and Chelmsford, Massachusetts. More information is available at www.voltaire.com or by calling 1-800-865-8247.

Forward Looking Statements
Information provided in this press release may contain statements relating to current expectations, estimates, forecasts and projections about future events that are "forward-looking statements" as defined in the Private Securities Litigation Reform Act of 1995. These forward-looking statements generally relate to Voltaire’s plans, objectives and expectations for future operations and are based upon management’s current estimates and projections of future results or trends. They also include third-party projections regarding expected industry growth rates. Actual future results may differ materially from those projected as a result of certain risks and uncertainties. These factors include, but are not limited to, those discussed under the heading "Risk Factors" in Voltaire’s annual report on Form 20-F for the year ended December 31, 2008. These forward-looking statements are made only as of the date hereof, and we undertake no obligation to update or revise the forward-looking statements, whether as a result of new information, future events or otherwise.

###

All product and company names mentioned herein may be the trademarks of their respective owners.

 

End of Voltaire transmission:

I/O, storage and networking interface wars come and go, similar to other technology debates over what is the best or which will reign supreme.

Some recent debates have been around Fibre Channel vs. iSCSI or iSCSI vs. Fibre Channel (depends on your perspective), SAN vs. NAS, NAS vs. SAS, SAS vs. iSCSI or Fibre Channel, Fibre Channel vs. Fibre Channel over Ethernet (FCoE) vs. iSCSI vs. InfiniBand, xWDM vs. SONET or MPLS, IP vs UDP or other IP based services, not to mention the whole LAN, SAN, MAN, WAN, POTS and PAN speed games of 1G, 2G, 4G, 8G, 10G, 40G or 100G. Of course there are also the I/O virtualization (IOV) discussions including PCIe Single Root (SR) and Multi Root (MR) for attachment of SAS/SATA, Ethernet, Fibre Channel or other adapters vs. other approaches.

Thus, when I routinely get asked about what is best, my answer usually is a qualified "it depends" based on what you are doing, what you are trying to accomplish, your environment, and your preferences among other factors. In other words, I'm not hung up on or tied to any one particular networking transport, protocol, network or interface; rather, the ones that work and are most applicable to the task at hand.

Now getting back to Voltaire and InfiniBand, which I think has a future in some environments; however, I don't see it being the be-all end-all it was once promoted to be. And outside of the InfiniBand faithful (there are also iSCSI, SAS, Fibre Channel, FCoE, CEE and DCE devotees among others), I suspect that the results would be mixed.

I suspect that the Voltaire survey reflects that as well; if I surveyed an Ethernet dominated environment I could take a pretty good guess at the results, likewise for a Fibre Channel or FCoE influenced environment, not to mention the composition of the environment, its focus and the business or applications being supported. One would also expect slightly different survey results from the likes of Aprius, Broadcom, Brocade, Cisco, Emulex, Mellanox (they are also involved with InfiniBand), NextIO, Qlogic (they actually do some InfiniBand activity as well), Virtensys or Xsigo (actually, they support convergence of Fibre Channel and Ethernet via InfiniBand) among others.

Ok, so what is your take?

What's your preferred network interface for convergence?

For additional reading, here are some related links:

  • I/O Virtualization (IOV) Revisited
  • I/O, I/O, Its off to Virtual Work and VMworld I Go (or went)
  • Buzzword Bingo 1.0 – Are you ready for fall product announcements?
  • StorageIO in the News Update V2010.1
  • The Green and Virtual Data Center (Chapter 9)
  • Also check out what others have to say: Scott Lowe about IOV here, Stuart Miniman about FCoE here, or Greg Ferro here.
  • Oh, and for what it's worth for those concerned about FTC disclosure, Voltaire is not, nor have they been, a client of StorageIO; however, I did use to work for a Fibre Channel, iSCSI, IP storage, LAN, SAN, MAN and WAN vendor and wrote a book on the topics :).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Upcoming Events and Activities Update V2010.1

The end-of-year Christmas and New Year's holiday season has come and gone, which means of course that 2009 is a wrap, along with the travel from being out and about.

In addition to getting some time to relax a bit (playing Wii Resort, snow plowing, cooking, etc.), I have also been catching up on developing some new content including articles, blogs (some yet to be posted), tips as well as podcasts, along with some custom research advisory projects.

Check out some recent tips, articles, videos and podcasts here along with perspectives and comments on industry news here.

2009 events and activities saw visits to cities including San Jose, Tucson, Cancun, Mexico, Dallas, Tampa, Miami, Los Angeles, San Jose, Las Vegas, Milwaukee, Atlanta, St. Louis, Birmingham, Cincinnati, Santa Ana, Minneapolis, Boston, Dallas, Boston, Chicago, Parsippany, Raleigh, Providence, Kansas City, Denver, Chicago, Orlando, Chicago, Philadelphia, Toronto, Richmond, Columbus, Princeton, Seattle, Portland, Dallas, San Francisco, Minneapolis, Toronto, Chicago, New York, Milwaukee, Atlanta, Boston, Cleveland and Detroit among others.

This time of the year also means that the 2010 events and activities, including in-person keynotes and presentations (also known as out and about), are getting underway. While the 2010 schedule of events is still being finalized, some initial events are on the calendar, my bags are about to be packed and tickets are in hand, not to mention finalizing the presentation and discussion content.

In addition to some non-public events, including keynote presentations at some vendors' annual sales (kick off) meetings, the following are some of the events currently on the calendar; click on the links below to learn more about the venues.

February 3, 2010 Green Data Center Conference, San Diego, CA
January 21, 2010 Dinner Event keynote Speaker Dynamic IT Infrastructure, Beverly Hills, CA
January 21, 2010 Morning keynote Speaker The Green and Virtual Data Center, San Diego, CA
January 19, 2010 Dinner Event keynote Speaker Dynamic IT Infrastructure, Miami, FL

Watch for updates to the events calendar and I look forward to seeing you all while I'm out and about during 2010.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

StorageIO in the News Update V2010.1

StorageIO is regularly quoted and interviewed in various industry and vertical market venues and publications, both on-line and in print, on a global basis.

The following are some coverage, perspectives and commentary by StorageIO on IT industry trends including servers, storage, I/O networking, hardware, software, services, virtualization, cloud, cluster, grid, SSD, data protection, Green IT and more since the last update.

Realizing that some prefer blogs to webs to twitter to other venues, here are some recent links to media coverage and comments by me on different topics, which among others can be found at www.storageio.com/news.html:

  • SearchSMBStorage: Comments on EMC Iomega v.Clone for PC data synchronization – Jan 2010
  • Computerworld: Comments on leveraging cloud or online backup – Jan 2010
  • ChannelProSMB: Comments on NAS vs SAN Storage for SMBs – Dec 2009
  • ChannelProSMB: Comments on Affordable SMB Storage Solutions – Dec 2009
  • SearchStorage: Comments on What to buy a geek for the holidays, 2009 edition – Dec 2009
  • SearchStorage: Comments on EMC VMAX storage and 8GFC enhancements – Dec 2009
  • SearchStorage: Comments on Data Footprint Reduction – Dec 2009
  • SearchStorage: Comments on Building a private storage cloud – Dec 2009
  • SearchStorage: Comments on SSD in storage systems – Dec 2009
  • SearchStorage: Comments on slow adoption of file virtualization – Dec 2009
  • IT World: Comments on maximizing data security investments – Nov 2009
  • SearchCIO: Comments on storage virtualization for your organisation – Nov 2009
  • Processor: Comments on how to win approval for hardware upgrades – Nov 2009
  • Processor: Comments on the Future of Servers – Nov 2009
  • SearchITChannel: Comments on Energy-efficient technology sales depend on pitch – Nov 2009
  • SearchStorage: Comments on how to get from Fibre Channel to FCoE – Nov 2009
  • Minneapolis Star Tribune: Comments on Google Wave and Clouds – Nov 2009
  • SearchStorage: Comments on EMC and Cisco alliance – Nov 2009
  • SearchStorage: Comments on HP virtualization enhancements – Nov 2009
  • SearchStorage: Comments on Apple canceling ZFS project – Oct 2009
  • Processor: Comments on EPA Energy Star for Server and Storage Ratings – Oct 2009
  • IT World Canada: Cloud computing, don't be scared, look before you leap – Oct 2009
  • IT World: Comments on stretching your data protection and security dollar – Oct 2009
  • Enterprise Storage Forum: Comments about Fragmentation and Performance? – Oct 2009
  • SearchStorage: Comments about data migration – Oct 2009
  • SearchStorage: Comments about What’s inside internal storage clouds? – Oct 2009
  • Enterprise Storage Forum: Comments about T-Mobile and Clouds? – Oct 2009
  • Storage Monkeys: Podcast comments about Sun and Oracle – Sep 2009
  • Enterprise Storage Forum: Comments on Maxiscale clustered, cloud NAS – Sep 2009
  • SearchStorage: Comments on Maxiscale clustered NAS for web hosting – Sep 2009
  • Enterprise Storage Forum: Comments on who's hot in the data storage industry – Sep 2009
  • SearchSMBStorage: Comments on SMB Fibre Channel switch options – Sep 2009
  • SearchStorage: Comments on using storage more efficiently – Sep 2009
  • SearchStorage: Comments on Data and Storage Tiering including SSD – Sep 2009
  • Enterprise IT Planet: Comments on Data Deduplication – Sep 2009
  • SearchDataCenter: Comments on Tiered Storage – Sep 2009
  • Enterprise Storage Forum: Comments on Sun-Oracle Wedding – Aug 2009
  • Processor.com: Comments on Storage Network Snags – Aug 2009
  • SearchStorageChannel: Comments on I/O virtualization (IOV) – Aug 2009
  • SearchStorage: Comments on Clustered NAS storage and virtualization – Aug 2009
  • SearchITChannel: Comments on Solid-state drive prices still hinder adoption – Aug 2009
  • Check out the Content, Tips, Tools, Videos, Podcasts plus White Papers, and News pages for additional commentary, coverage and related content or events.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Recent tips, videos, articles and more update V2010.1

Realizing that some prefer blogs to webs to twitter to other venues, here are some recent links to articles, tips, videos, webcasts and other content that have appeared in different venues since August 2009.

  • i365 Guest Interview: Experts Corner: Q&A with Greg Schulz December 2009
  • SearchCIO Midmarket: Remote-location disaster recovery risks and solutions December 2009
  • BizTech Magazine: High Availability: A Delicate Balancing Act November 2009
  • ESJ: What Comprises a Green, Efficient and Effective Virtual Data Center? November 2009
  • SearchSMBStorage: Determining what server to use for SMB November 2009
  • SearchStorage: Performance metrics: Evaluating your data storage efficiency October 2009
  • SearchStorage: Optimizing capacity and performance to reduce data footprint October 2009
  • SearchSMBStorage: How often should I conduct a disaster recovery (DR) test? October 2009
  • SearchStorage: Addressing storage performance bottlenecks in storage September 2009
  • SearchStorage AU: Is tape the right backup medium for smaller businesses? August 2009
  • ITworld: The new green data center: From energy avoidance to energy efficiency August 2009
  • Video and podcasts include:
December 2009 Video: Green Storage: Metrics and measurement for management insight
Discussion between Greg Schulz and Mark Lewis of TechTarget about the importance of metrics and measurement to gauge productivity and efficiency for Green IT and enabling virtual information factories. Click here to watch the video.

December 2009 Podcast: iSCSI SANs can be a good fit for SMB storage
Discussion between Greg Schulz and Andrew Burton of TechTarget about iSCSI and other related technologies for SMB storage. Click here to listen to the podcast.

December 2009 Podcast: RAID Data Protection Discussion
Discussion between Greg Schulz and Andrew Burton of TechTarget about RAID data protection techniques and technologies. Click here to listen to the podcast.

December 2009 Podcast: Green IT, Efficiency and Productivity Discussion
Discussion between Greg Schulz and Jon Flower of Adaptec about Green IT, energy efficiency, intelligent power management (IPM), also known as MAID 2.0, and other forms of optimization techniques including SSD. Click here to listen to the podcast sponsored by Adaptec.

November 2009 Podcast: Reducing your data footprint impact
Even though many enterprise data storage environments are coping with tightened budgets and reduced spending, overall net storage capacity is increasing. In this interview, Greg Schulz, founder and senior analyst at StorageIO Group, discusses how storage managers can reduce their data footprint. Schulz touches on the importance of managing your data footprint on both online and offline storage, as well as the various tools for doing so, including data archiving, thin provisioning and data deduplication. Click here to listen to the podcast.

October 2009 Podcast: Enterprise data storage technologies rise from the dead
In this interview, Greg Schulz, founder and senior analyst of the Storage I/O group, classifies popular technologies such as solid-state drives (SSDs), RAID and Fibre Channel (FC) as "zombie" technologies. Why? These are already set to become part of standard storage infrastructures, says Schulz, and are too old to be considered fresh. But while some consider these technologies to be stale, users should expect to see them in their everyday lives. Click here to listen to the podcast.

Check out the Tips, Tools and White Papers, and News pages for additional commentary, coverage and related content or events.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved