StorageIO Spring Keynote and Speaking Tour V2.008

Several new keynote and speaking engagements of mine have been added to the StorageIO events page, including among others:

April 8th, 2008 – SNW Orlando FL
Beyond Green-Wash:
IT Data Center Power, Cooling, Floor Space and Environmental (PCFE) Topics and Trends V2.008

This talk moves past the issues and reasons for going green and gets right to the point: what you can do today, leveraging various technologies, techniques and best practices, to address PCFE and green environmental issues including EHS, low power and economic sustainment in an environmentally friendly manner, as well as what to include in a long-term green strategy for your data center.

May 13th-15th, 2008 – Storage Decisions, Chicago IL
Clustered Storage:
From SMB, to Scientific, to Social Networking and Web 2.0

The growth of structured and unstructured data continues at an explosive rate in most environments, resulting in a constantly expanding data footprint that requires data and storage management resources. Similarly, the relative ease of use of NFS and Windows CIFS file sharing based storage, also known as Network Attached Storage (NAS), has led to a proliferation of NAS and Windows file servers, not all that different from how the ease of use of personal computers (PCs) resulted in desktop and server sprawl. With the focus of many IT organizations today on doing more with less, or doing more with what you have, clustered storage and clustered file serving have become a popular option to support modular, scalable and flexible growth. Clustered storage, including clustered file serving, grid and Web 2.0 based storage solutions, is no longer confined to the specific high performance scientific applications with which it is commonly associated. Clustered storage is commonly being deployed to support a wide diversity of applications including commercial, entertainment or media, Web 2.0 and social networking along with grid, cloud and traditional scientific needs.

This session takes a look at, among other topics:
* What different clustered storage vendors are claiming and how their solutions differ
* Fact vs. fiction, myths and realities of clustered storage:
  o Grid vs. clusters, clusters vs. grid: what's the difference?
  o Clustered storage is only for ultra-large environments like Google
  o Clustered file serving is only for high performance computing (HPC) environments
  o SMBs and bulk storage applications cannot benefit from clustered storage
* What are the caveats to be aware of when deploying clustered storage?
* What are some emerging trends and solutions to keep an eye on for clustered storage?
* What are some questions that some vendors do not want you to ask about their solutions?

Green and Environmentally Friendly Storage:
Practical Ways to Achieve Energy Efficiency

Green is in, and every storage vendor out there has a green story to tell. Despite the vendor and industry hyperbole about the environmental benefits of their products, there are still no standard metrics by which to measure and compare power consumption or energy efficiency claims. The challenge is sorting out and closing the gap between vendor green messaging and IT data center issues including power, cooling, floor space and other environmental topics such as RoHS and e-waste disposal. This session looks at several practical techniques and technologies that you can leverage today to achieve an energy-efficient data center and sustain business growth in an economically and ecologically friendly manner.

Topics that will be covered include, among others:
* How truthful are vendor claims and what is "green wash"?
* Facts and fiction, myths and realities:
  o Storage is cheaper to buy than to power
  o Power avoidance vs. energy efficiency
  o Are Solid State Devices (SSD) the silver bullet?
  o Dedupe vs. archive vs. compression vs. consolidation
* What's real and achievable today, and what are your options?
* Measuring and determining energy efficiency with emerging metrics
* How to do more with what you have and avoid forklift upgrades
* Who is the "greenest of them all" and where to learn more

I will also be keynoting at several TechTarget seminar series events around the U.S.; details on these and other events can be found on the StorageIO events page located here.

Cheers
GS

Chargeback for storage

TechTarget SearchStorage recently put out a piece on chargeback for storage that includes some commentary from me on the topic, including some common myths about what chargeback is and how it is done.

A common misperception is that chargeback requires actual invoicing and monetizing of IT resource use, including servers, storage and networks. In some cases, chargeback is not as much about generating invoices as it is about accounting and resource usage tracking.

Granted, if you are in a services-oriented environment, rest assured there is monetization that needs to take place; however, informational chargeback initiatives are also useful for budgeting, planning and awareness of IT services and usage. Have a read here.
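
For illustration, here is a minimal Python sketch of the informational (showback) flavor of chargeback, aggregating resource usage into a per-department report for awareness rather than invoicing. The departments, usage figures and tier rates below are made-up assumptions, not actual pricing:

# Minimal informational chargeback ("showback") sketch.
# Departments, usage figures and rates are hypothetical illustrations.
TIER_RATES_PER_GB_MONTH = {"tier-1": 0.50, "tier-2": 0.20, "tier-3": 0.05}

usage_gb = {
    "engineering": {"tier-1": 800, "tier-2": 4000},
    "finance":     {"tier-1": 200, "tier-3": 10000},
}

def showback_report(usage, rates):
    # Sum cost per department for awareness and budgeting, not invoicing.
    return {dept: sum(gb * rates[tier] for tier, gb in tiers.items())
            for dept, tiers in usage.items()}

for dept, cost in showback_report(usage_gb, TIER_RATES_PER_GB_MONTH).items():
    print(f"{dept}: ${cost:,.2f}/month")

The same numbers could just as easily feed an actual invoice; the point is that the accounting and usage tracking come first, and the monetization is optional.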

Cheers
GS

Trains Going “Green”, ah, well, maybe “Blue”…

With all of the global focus on going green, saving the planet, green IT and green storage (all of which are good when, like so many other things, done in moderation), the focus, especially around IT and even aviation, tends to be on greenhouse gases (GHG), CO2/carbon offsets and so forth. When I read this article about what India is doing to make their trains more green, it got me thinking: perhaps a more moderate or balanced approach to green messaging should include more looking down at the ground than up at the sky.

That is, focus on some additional issues, including e-waste with rules such as RoHS and WEEE among others, in addition to the focus on reducing carbon and GHGs. By spending some time looking down as opposed to just looking up, so to speak, you can reduce the chances of stepping in something undesirable, as well as see real issues, including power, cooling, floor space and environmental (PCFE) related items, that can be addressed now and on a go-forward basis.

Interested in green IT and storage along with associated PCFE topics? I will be presenting at SNW in Orlando in April as well as Storage Decisions in Chicago in May on those and other related topics; learn more at the StorageIO events page. If you will be at either of those venues, stop by and say hello.

Cheers
GS

IBM speed of light energy saving or speed of light green marketing?

IBM researchers have announced a new technique for data transmission at speeds of up to 8Tbps using optics, which at first blush seemed about as surprising as saying magnetic disk drive capacities will continue to increase or CPUs will get faster. However, upon closer look, actually reading past the headlines and clearing my eyes a bit to make sure that it was 8 terabits and not 8 gigabits (in other words, roughly 1,000 times faster than where Fibre Channel currently is, or 800 times faster than 10Gb Ethernet), what really got my attention was the power or energy required: about 100 watts, or that of a light bulb, to achieve that level of performance over what appears to be a relatively short distance (say 100 meters or so; however, details are still slim).

For example, and granted, miles per gallon or distance and speed per watt of energy will vary by vendor and implementation, vendors like Finisar have shown 10Gbps optic transceivers with a reach of about 220 meters using only 1 watt of power. Keep in mind, however, that the 1 watt is just for the optic transceiver; you would need other components to actually drive it. Still, if what IBM is claiming is true, and if they can achieve the large jump in performance while cutting energy consumption to a fraction, then the story gets rather interesting for chip and board fab vendors.

Now upon closer look, it turns out that the link speed of what IBM refers to as "Optochip" (to enable "Optocards") is not 8Tbps, rather more like 10Gbps per link (e.g. similar to current high speed networking) over 32 links, for 320Gbps aggregate, which while not 8Tbps is still pretty impressive, particularly when used for linking chips and other components in close proximity while reducing energy draw and heat dissipation. Also in the announcement is what IBM refers to as the next evolution of the technology, where 24 bi-directional links (separate send and receive) each operate at 12.5Gbps for an aggregate 600Gbps; still not quite 8Tbps, however still pretty darn fast with low power consumption for use as an interconnect inside servers and other devices (read: this is not, at least today, a replacement for PCIe, InfiniBand, Fibre Channel, SAS/SATA or Ethernet) in a computer or digital device.

I'm still not sure where IBM is getting the 8Tbps reference, unless somewhere in their announcement they intend that the Optochips are actually operating at 10GBytes per second per link instead of 10Gbits per second per link, and even then the numbers are off a bit. However, given rounding, conversion of bits to bytes, number system conversions (e.g. base2 vs. base10) and so on, and given the slack we give startups for big claims and virtual announcements, I think we can cut IBM researchers a few bits of slack. I am looking forward to more information on what the real link rates are, the math behind the 8Tbits, actual distances and so forth, to determine whether this is really a technology breakthrough considering the power used, or a green marketing ploy qualifying for greenwash.
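
For what it is worth, here is the back-of-the-envelope arithmetic as a short Python sketch, using the link counts and rates described above; the GBytes-vs-Gbits reading at the end is purely my own speculation:

# Back-of-the-envelope math on the announced Optochip link rates.
links, gbps_per_link = 32, 10  # announced: 32 links at 10 Gbits/sec each
print(links * gbps_per_link)   # 320 Gbps aggregate, well short of 8 Tbps

# Next evolution: 24 bi-directional links (send + receive) at 12.5 Gbps each.
print(24 * 12.5 * 2)           # 600 Gbps aggregate

# Speculative reading: what if "10G" meant GBytes/sec rather than Gbits/sec?
print(links * 10 * 8)          # 2,560 Gbps: closer, yet still not 8 Tbps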

Cheers
GS

Sherwood becomes Atrato

If you had heard of Sherwood, based in the Denver area, and saw the PR about Atrato without connecting the dots, don't worry: they are one and the same with a new name. Sherwood, I mean Atrato, is from a hardware and geeky perspective kind of cool, using I/O accelerators to boost performance to magnitudes better than traditional spinning magnetic hard disk drives.

However, with the price of SSD (both flash and RAM) based technologies falling as fast as Britney Spears' musical career, and the usable capacity and performance per footprint of SSD rising as fast as Eli Manning's endorsement deals, one has to wonder what the real value proposition will end up being for Atrato, and whether they can leverage their hardware-centric lineup of management talent and investors to secure a major OEM deal. Will this be a niche product for extreme corner-case markets, or can Atrato play to a more mainstream audience in the face of growing competition from the many faces of SSD based technologies?

Time will tell.

Cheers
GS

Airport Parking, Tiered Storage and Latency

Ok, so what do airport parking, tiered storage and latency have in common? Based on some recent travel experience, I will assert that there is a bit in common, or at least an analogy. What got me thinking about this was that recently I could not get a parking spot at the airport's primary parking ramp next to the terminal (either a reasonable walk or a short tram ride away), which offers quick access to the departure gate.

Granted, there is a premium for this ability to park or "store" my vehicle for a few days near the airport terminal; however, that premium is offset by the time savings and fewer disruptions, giving me a few extra minutes to get other things done while traveling.

Let me call the normal primary airport parking tier-1 (regardless of what level of the ramp you park on), with tier-0 being valet parking, where you pay a fee that might rival the cost of your airline ticket, yet your car stays in a climate controlled area, gets washed and cleaned, maybe gets an oil change, and hopefully sits in a more secure environment with even faster access to your departure gate; something for the rich and famous.

Now the primary airport parking has been full lately, not surprising given the cold weather and everyone looking to use up their carbon offset credits to fly somewhere warm, attend business meetings or whatever it is that they are doing.

Budgeting some extra time, a couple of weeks ago I tried one of those off-site airport parking facilities where the bus picks you up in the parking lot and then whisks you off to the airport. On return, you wait for the bus to pick you up at the airport, ride to the lot, and then tour the lot looking at everyone's car as they get dropped off, until 30-40 minutes later you are finally at your vehicle, faced with the challenge of how to get out of the parking lot late at night. It is such a budget operation that they have gone to lights-out, automated check-out: put your credit card in the machine and the gate opens. That is, unless the credit card reader is frozen because it is about zero (F) outside and the machine won't read your card, using up more time. However, heck, I saved a few dollars a day.

On another recent trip, the main parking ramp was again full; at least the airport has a parking or storage resource monitoring (aka airport SRM) tool that you can check ahead of time to see whether the ramps are full. This time I went to another terminal, parked in the ramp there, walked a mile (it would have been a nice walk had it not been 1 above zero (F) with a 20 mile per hour wind) to the light rail train station, waited ten minutes for the 3 minute train ride to the main terminal, then walked to the tram for the 1-2 minute tram ride to the real terminal and on to my departure gate. On return, the process was reversed, adding what I estimate to be about an hour to the experience, which, if you have the time, is not a bad option and certainly good exercise, even if it was freezing cold.

During this planes, trains and automobiles expedition, it dawned on me that airport parking is a lot like tiered storage in that you have different types of parking with different cost points, locality of reference (latency, or how much time it takes to get from your car to your plane) and different levels of protection and security, among other attributes.

I likened the off-airport parking experience to off-line tier-3 tape or MAID, or at best near-line tier-2 storage, in that I saved some money at the cost of lost time and productivity. The parking at the remote airport ramp, involving a train ride and a tram ride, I likened to tier-2 or near-line storage over a very slow network or I/O path: the ramp itself was pretty efficient, however the transit delays or latency were ugly. I did save some money, a couple of bucks; not as much as the off-site lot, however a few dollars less than the primary parking.

Hence I jump back to the primary ramp as the fastest, tier-1, unless you have someone footing your parking bills and can afford tier-0. It also dawned on me that, like primary or tier-1 storage, whether it is enterprise class like an EMC DMX, IBM DS8K, Fujitsu or HDS USP, mid-range like an EMC CLARiiON, HP EVA, IBM DS4K, HDS AMS, Dell, EqualLogic, 3PAR, Fujitsu or NetApp, or an entry-level product from many different vendors, people still pay for premium storage, aka tier-1 storage, within a given price band even when there are cheaper alternatives. However, like the primary airport parking, there are limits on how much primary storage or parking can be supported due to floor space, power, cooling and budget constraints.

With tiered storage, the notion is to align different types and classes of storage with various usage and application categories based on service requirements (performance, availability, capacity, energy consumption) balanced with cost and other concerns. For example, there is the high cost yet ultra-high performance, ultra-low energy consumption and relatively small capacity of tier-0 solid state devices (SSD), using either FLASH or dynamic random access memory (DRAM), deployed as part of a storage system, as a storage device or as a caching appliance to meet I/O or activity-intensive scenarios. Tier-1 is high performance, however not as high performance as tier-0; although, given a large enough budget, enough power and cooling and no constraints on floor space, you can aggregate enough traditional disk drives to out-perform even solid state, with a lot more capacity, at the tradeoff of power, cooling, floor space and of course cost.

For most environments, tier-1 storage will be the fastest storage with a reasonable amount of capacity, as tier-1 provides a good balance of performance and capacity per amount of energy consumed for active storage and data. On the other hand, lower cost, higher capacity and slower tier-2 storage, also known as near-line or secondary storage, is used in some environments as primary storage where performance is not a concern, and is typically reserved for non-performance-intensive applications.

Again, given enough money, unlimited power, cooling and floor space, not to mention enclosures, controllers and management software, you can combine a large number of low-cost SATA drives to produce a high level of performance. However, the cost of reaching a high activity or performance level (either IOPS or bandwidth) that way, particularly where the excess capacity is not needed, would make SSD technology look cheap on an overall cost basis.
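
To put some rough numbers behind that, here is a minimal Python sketch estimating how many capacity-oriented SATA drives it would take to match a target IOPS level versus a single SSD device; all per-device IOPS and wattage figures below are assumed round numbers for illustration, not measured values:

import math

# Illustrative assumptions, not measured values.
target_iops = 50_000
sata_iops_per_drive, sata_watts_per_drive = 80, 12  # assumed 7.2K RPM SATA drive
ssd_watts_per_device = 15                           # assumed SSD doing 50K IOPS

drives = math.ceil(target_iops / sata_iops_per_drive)  # 625 drives
print(f"SATA: {drives} drives, ~{drives * sata_watts_per_drive:,} watts")
print(f"SSD:  1 device, ~{ssd_watts_per_device} watts")

Under these assumptions, the spinning-disk approach burns hundreds of drives and kilowatts of power to reach an activity level that one SSD device can deliver, which is the point of the comparison above.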

Likewise, replacing all of your disk with SSD, particularly for capacity-based environments, is not really practical outside of extreme corner-case applications, unless you have the disposable income of a small country for your data storage and IT budget.

Another aspect of tiered storage is the common confusion between a class of storage and the class of a storage vendor, or where a product is positioned, for example by price band or target environment such as enterprise, small-medium environment, small-medium business (SMB), small office or home office (SOHO), or prosumer/consumer.

I often hear discussions that go along the lines of tier-1 storage being products for the enterprise, tier-2 being for workgroups and tier-3 being for SMB and SOHO. I also hear confusion around tier-1 being block based, tier-2 being NAS and tier-3 being tape. "What we have here is a failure to communicate," in that there is confusion around tiers, categories, classifications, price bands and product positioning and perception. Adding to the confusion, there are also different tiers of access, including Fibre Channel and FICON using 8GFC (coming soon to a device near you), 4GFC, 2GFC and even 1GFC, along with 1GbE and 10GbE for iSCSI and/or NAS (NFS and/or CIFS), as well as InfiniBand for block (iSCSI or SRP) and file (NAS), offering different costs, performance, latency and other attributes for aligning to various application service and cost requirements.

What this all means is that there is more to tiered storage: there is also tiered access, tiered protection, tiered media, and different price bands and categories of vendors and solutions, all to be aligned to applicable usage and service requirements. On the other hand, similar to airport parking, I can choose to skip the airport parking and take a cab to the airport, which would be analogous to shifting your storage needs to a managed service provider. Ultimately it comes down to balancing performance, availability, capacity and energy (PACE) efficiency with the level of service and the specific environment or application needs.

Greg Schulz www.storageio.com and www.greendatastorage.com

Was today the proverbial day that he!! froze over?

The upper Midwest, well, make that the Midwest in general, was hit by high winds and a nasty cold front today; nothing all that unusual for January, especially in the Minneapolis area, where the temperature yesterday was 40 to 45 with light rain, and by about noon today it was around zero (F) without the wind chill. So in the face of old man winter and the cold, I had a chuckle today reading the announcement that an SPC (Storage Performance Council) SPC-1 (IOPS) benchmark had finally been published for an EMC CLARiiON CX3-40.

Now for those of you who cover, track, cheerlead or dread EMC, you probably know their position on benchmarks, or at least that of some of their bloggers, particularly regarding SPC; if not, read here for some perspective. My position has always been that the best benchmark is your actual application in a real and applicable workload scenario; however, realizing that not everybody can simulate or test their applications, there is a need for point-of-reference comparison benchmarks such as SPC, Microsoft ESRP, TPC and SPEC among others.

Here's the caveat: take these benchmarks with a grain of salt and use them as a gauge along with other tools, as they are an indicator of a particular workload. I am a fan of benchmarks that make sense, can be reproduced consistently and are realistic representations, not substitutes for your actual applications. They are tools to help you make a better-informed decision; however, that is all they are, a relative comparison.

Nuff rambling for now on that. Why did I chuckle this morning and think that he!! had perhaps finally frozen over? And don't get me wrong, Minneapolis, and the Midwest for that matter, is far from being he!!, granted it's cold as crap during the winter months. The reason I chuckled is that EMC did not in fact submit the SPC-1 benchmark for the CLARiiON CX3-40; instead, one of their competitors, namely Network Appliance (aka NetApp), did the honors for EMC along with a submission for their own FAS3040.

So besides the fact that there is plenty of wiggle and debate room in the test, for example NetApp using RAID6 (e.g. RAID-DP) versus mirroring on the EMC (I'm sure we will hear EMC cry foul), EMC can keep their hands clean on their party line about not submitting an SPC-1 result, or at least that's a card they could choose to play. I would like to see the DMX4, particularly with the new FLASH based SSDs, in a future SPC test submission; however, I'm not going to hold my breath, at least yet.

It is ironic, however, that EMC has in fact submitted other benchmark test scenarios in the past, including for Microsoft ESRP among others. Speaking of SPC submissions, TMS (Texas Memory Systems) also posted some new SPC results the other day. So maybe he!! did not really freeze over today with an EMC SPC test finally being submitted; however, it made for a good warming chuckle on a cold morning.

Now, even though EMC has not officially submitted the SPC-1 result, even though it is posted on the SPC website, that leaves only one major storage vendor yet to have their midrange open storage systems represented in the SPC results, and that would be HDS with their AMS series. Maybe he!! will still freeze over…

Cheers

Greg Schulz – www.storageio.com

StorageIO Outlines Intelligent Power Management and MAID 2.0 Storage Techniques, Advocates New Technologies to Address Modern Data Center Energy Concerns

Marketwire – January 23, 2008 – StorageIO Outlines Intelligent Power Management and MAID 2.0 Storage Techniques, Advocates New Technologies to Address Modern Data Center Energy Concerns. Intelligent Power Management and MAID 2.0 Equal Energy Efficiency Without Compromising Performance.

The StorageIO Group explores these issues in detail in two new Industry Trends and Perspectives white papers entitled "MAID 2.0: Energy Savings without Performance Compromises" and "The Many Faces of MAID Storage Technology." These and other Industry Trends and Perspectives white papers addressing power, cooling, floor space and green storage related topics, including "Business Benefits of Data Footprint Reduction" and "Achieving Energy Efficiency using FLASH SSD," are available for download at www.storageio.com and www.greendatastorage.com.

The Many Faces of Solid State Devices/Disks (SSD)

Here's a link to a recent article I wrote for Enterprise Storage Forum titled "Not a Flash in the PAN," providing a synopsis of the many faces, implementations and forms of SSD based technologies, with several links to other related content.

A popular topic over the past year or so has been SSD with FLASH based storage for laptops, sometimes referred to as hybrid disk drives, along with announcements late last year by companies such as Texas Memory Systems (TMS) of a FLASH based storage system combining DRAM as a high speed cache in their RAMSAN-500, and more recently EMC adding support for FLASH based SSD devices in their DMX4 systems as a tier-0 to co-exist with tier-1 (fast FC) and tier-2 (SATA) drives.

Solid State Disks/Devices (SSD), or memory based storage mediums, have been around for decades, and they continue to evolve using different types of memory, ranging from volatile dynamic random access memory (DRAM) to persistent or non-volatile RAM (NVRAM) and various derivatives of NAND FLASH, among others. Likewise, the capacity cost points, performance, reliability, packaging, interfaces and power consumption all continue to improve.

SSD in general is a technology that has been misunderstood over the decades, particularly when simply compared on a cost per capacity (e.g. dollar per GByte) basis, which is an unfair comparison. The more appropriate comparison is to look at how much work or activity, for example transactions per second, NFS operations per second, IOPS or email messages, can be processed in a given amount of time, and then compare the amount of power and the number of devices needed to achieve a desired level of performance. Granted, SSD, and in particular DRAM based systems, cost more on a GByte or TByte basis than magnetic hard disk drives; however, it also takes more HDDs and controllers to achieve the same level of performance, not to mention more power and cooling, than a typical SSD based device.
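
As a simple illustration of measuring work rather than just capacity, the following Python sketch contrasts dollars per GByte with dollars per IOPS and IOPS per watt for a hypothetical HDD and SSD; all prices, capacities, IOPS and wattage figures are made-up round numbers for illustration only:

# Hypothetical device figures: (price $, capacity GB, IOPS, watts).
devices = {
    "HDD": (300, 1000, 150, 12),
    "SSD": (3000, 100, 30000, 8),
}

for name, (price, cap_gb, iops, watts) in devices.items():
    print(f"{name}: ${price / cap_gb:.2f}/GB, "
          f"${price / iops:.3f}/IOPS, {iops / watts:.0f} IOPS/watt")

# On $/GB the HDD wins; on $/IOPS and IOPS/watt the SSD wins,
# which is why cost per unit of work is the fairer comparison.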

The many faces of SSD range from low cost consumer grade products based on consumer FLASH to high performance DRAM based caches and devices for enterprise storage applications. Over the past year or so, SSD has re-emerged for those who are familiar with the technology, and emerged or appeared for those new to the various implementations and technologies, leading to another upswing in the historic up and down cycles of SSD adoption and technology evolution in the industry.

This time around, a few things are different, and I believe that SSD in general, that is, the many different faces of SSD, will have staying power and not fade away into the shadows only to re-emerge a few years later, as has been the case in the past.

The reason I hold this opinion is based on two basic premises: economics and ecology. Given the focus on reducing or containing costs, doing more with what you have, and environmental or ecological awareness in the race to green the data center and green storage, improving economics with more energy-efficient storage, that is, enabling your storage to do more work with less energy as opposed to simply avoiding energy consumption, has the by-products of improved economics (cost savings, improved resource utilization and better service delivery) and ecological benefit (better use of energy, or less use of energy).

Current implementations of SSD based solutions are addressing energy efficiency in ways ranging from maximizing battery life to boosting performance while drawing less power. Consequently, we are now seeing SSD in general being used not only for boosting performance, but also as one of many different tools to address power, cooling, floor space and environmental or green storage issues.

Here’s a link to a StorageIO industry trends and perspectives white paper at www.storageio.com/xreports.htm.

Here's the bottom line: there are many faces to SSD. SSD (FLASH or DRAM) based solutions and devices have a place in a tiered storage environment as a tier-0, or as an alternative in some laptops or other servers where appropriate. SSD complements other technologies, and SSD benefits from being paired with other technologies, including high performance storage for tier-1 and near-line or tier-2 storage implementing intelligent power management (IPM).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Green Data Storage and Server I/O Topics

Are you currently encountering, or do you foresee in the future, a problem regarding "green" environmental or power and cooling issues pertaining to IT data and storage infrastructures? To learn more about "green IT" and/or server and storage I/O related topics, including power, cooling, energy emissions, asset disposal and related items, check out the website www.thegreenandvirtualdatacenter.com. GS