Many Different Implementations of RAID

Processor Magazine has a new article looking at different flavors and implementations of RAID technology called Moving Toward Software-Based RAID: Tapping In To Fast Multicore CPUs For Performance & Flexibility. The exact number of categories is up for debate: you could keep it as simple as hardware versus software RAID, much like the many variations of RAID levels and hybrid configurations. You could further break software-based RAID down by where it is implemented, such as in a volume manager or file system, in the operating system, in a software driver stack, as a standalone software stack that transforms a general purpose processor into a RAID controller, or as software that leverages RAID off-load capabilities in hardware.
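
As a small illustration of the software side, here is a toy sketch (in Python) of the kind of parity math a software RAID stack performs on a general purpose CPU. It is a RAID 5 style XOR example for illustration only, not how any particular product or driver implements it.

```python
# Toy illustration of RAID 5 style parity computed in software on a general purpose CPU.
# Real implementations work on striped blocks with optimized (often SIMD) XOR; this is just the idea.

def xor_parity(data_blocks: list[bytes]) -> bytes:
    """Compute the parity block as the byte-wise XOR of the data blocks."""
    parity = bytearray(len(data_blocks[0]))
    for block in data_blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def rebuild_missing(surviving_blocks: list[bytes], parity: bytes) -> bytes:
    """Reconstruct a single missing data block from the survivors plus parity."""
    return xor_parity(surviving_blocks + [parity])

stripe = [b"AAAA", b"BBBB", b"CCCC"]      # three data blocks in one stripe
p = xor_parity(stripe)                    # parity written to the fourth drive
print(rebuild_missing([stripe[0], stripe[2]], p))   # recovers b"BBBB" after a drive loss
```

The same XOR happens whether the cycles come from a host CPU, a ROC, or an external controller; the packaging differences discussed below are about where that work runs and what off-load help it gets.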

Speaking of hardware, one could debate that some off-the-shelf processor chips have some amount of RAID capabilities or primitives, while other specialized off-load chips do more. There are dedicated RAID on a Chip (ROC) devices that may include RAID 6 or other special purpose functionality, or that rely on external chips for parity rebuilds and other functions. ROCs can be embedded on a processor motherboard in the form of RAID on Motherboard (aka ROMB), or found on RAID adapter cards that plug into a PCI, PCI-X or PCIe I/O slot with a SCSI, SAS or SATA I/O port for disk attachment. Then there are RAID controllers external to a computer that reside in external storage systems of various sizes and shapes.

What this all means is that there are many different implementations and vendor-packaged solutions and approaches to delivering RAID technologies to different market and price band segments; some are software based, some are external hardware based, and some combine the two. The bottom line is that RAID, after 20 years, is still relevant enough to warrant discussion of new and varying implementation schemes and packaging approaches. See additional links to articles, tips, presentations, webcasts and commentary pertaining to RAID and related topics on the portfolio and portfolio archive pages on the StorageIO website.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

IBM speed of light energy saving or speed of light green marketing?

IBM researchers have announced a new technique for data transmission with speeds of up to 8Tbps using optics, which at first blush seemed about as surprising as saying magnetic disk drive capacities will continue to increase or CPUs will get faster. However, upon closer look, actually reading past the headlines and clearing my eyes a bit to make sure that it was 8 terabits and not 8 gigabits, in other words roughly 1,000 times faster than where Fibre Channel is currently at, or 800 times faster than 10Gb Ethernet, what really got my attention was the power or energy required: about 100 watts, or that of a light bulb, to achieve that level of performance over what appears to be a relatively short distance (say 100 meters or so, however details are still slim).

For example, and granted miles per gallon or distance and speed per watt of energy will vary by vendor and implementation, vendors like Finisar have shown 10Gbps optical transceivers that have a reach of about 220 meters using only 1 watt of power; keep in mind, however, that 1 watt is just for the optical transceiver and you would need other components to actually drive it. However, if what IBM is claiming is true, and if they can achieve that large a jump in performance while cutting energy consumption to a fraction, then the story gets rather interesting for chip and board fab vendors.

Now upon closer look, it turns out that the link speed of what IBM is referring to as "Optochips" to enable "Optocards" is not 8Tbps, rather more like 10Gbps (i.e. similar link speed to current high speed networking) over 32 links (i.e. 320Gbps), which while not 8Tbps is still pretty impressive, particularly when used for linking chips and other components in close proximity while reducing energy draw and heat dissipation. Also in the announcement is what IBM is referring to as a next evolution of the technology, where 24 bi-directional links (separate send and receive) each operate at 12.5Gbps for an aggregate of 600Gbps; still not quite 8Tbps, however still pretty darn fast with low power consumption for use as an interconnect inside servers and other digital devices (read: this is not a replacement, at least today, for PCIe, InfiniBand, Fibre Channel, SAS/SATA or Ethernet).

I'm still not sure where IBM is getting the 8Tbps reference, unless somewhere in their announcement they intend that the Optochips are actually operating at 10GBytes per second per link instead of 10Gbits per second per link, and even then the numbers are off a bit. However, given rounding, converting bits to bytes, number system conversions (e.g. base2 vs base10) and so on, and given the slack we give startups for big claims and virtual announcements, I think we can cut IBM researchers a few bits of slack. I am looking forward to more information on what the real link rates are, the math behind the 8Tbits, actual distances and so forth, to determine if this is really a technology breakthrough when considering the power used, or whether this is a green marketing ploy qualifying for greenwash.
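
Here is a quick back-of-the-envelope sketch (in Python) of the link-rate math as described above; the per-link rates and link counts are the ones summarized from the announcement, and the rest is just arithmetic showing why the 8Tbps headline is hard to reconcile.

```python
# Back-of-the-envelope math on the announced link rates (numbers as described above).

gbps_per_link_now = 10.0          # ~10 Gbit/s per optical link in the current demo
links_now = 32
aggregate_now = gbps_per_link_now * links_now                    # 320 Gbit/s

gbps_per_link_next = 12.5         # next evolution, per direction
links_next = 24
directions = 2                    # separate send and receive counted together
aggregate_next = gbps_per_link_next * links_next * directions    # 600 Gbit/s

headline_gbps = 8 * 1000.0        # the 8 Tbit/s headline, expressed in Gbit/s

print(f"Current demo aggregate:   {aggregate_now:.0f} Gbps")
print(f"Next evolution aggregate: {aggregate_next:.0f} Gbps")
print(f"Links at 12.5 Gbps needed to hit 8 Tbps: {headline_gbps / gbps_per_link_next:.0f}")
# Even reading the links as 10 GBytes/s (80 Gbit/s) each, 32 links is ~2.56 Tbps,
# so the 8 Tbps figure presumably reflects something beyond what was described.
```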

Cheers
GS

Mainframe, CMG, Virtualization, Storage and “Zombie Technologies”

In case you missed the news, IBM today announced a new mainframe, the z10. Yes, that's right, the mother of all "Zombie Technologies" (and I say this with all due respect), you know, technologies that are declared dead yet are viable, working and still living, thus purchasable. Some other examples of Zombie technologies include magnetic tape, which was declared dead at least ten years ago if not longer yet continues to evolve, granted a bit more slowly and with fewer vendors; the technology is still economically viable when paired up with disk-to-disk based backup as an off-line (read: no power required) medium for preserving idle and inactive archive data. Another example of a Zombie technology is the printer; as you may recall, we were supposed to be completely paperless by now, yeah right.

Then there is Fibre Channel, which was declared stillborn over a decade ago yet shows plenty of signs of life in the form of 8GFC (8Gb Fibre Channel) and emerging FCoE, even with iSCSI waging another assault to kill the FC beast. Even the venerable 50 year old magnetic disk drive has been declared a dead technology with the re-emergence of SSD in the form of DRAM and FLASH, yet magnetic hard disk drives (HDD) continue to be manufactured and shipped in record numbers, making them members of the Zombie technology club, a club that has some pretty esteemed members and more on the way.

I forget how many decades ago it was now that the IBM mainframe was declared dead, and granted we have seen the exodus of Hitachi, Amdahl, NEC and others from the active marketplace of developing and selling IBM plug compatible mainframes (PCMs). However the venerable mainframe from IBM, like the Energizer bunny, keeps ticking and finding new roles, including leveraging its built-in logical partition (LPAR) capabilities, available for at least a few decades, to support virtual machines and serve as a consolidation platform for not only legacy zOS (aka revamped MVS) and zVM (not to be confused with VMware) but also native Linux.

Shifting gears a bit, last week I had the pleasure of keynoting at a local Computer Measurement Group (CMG) event, a group I have been involved with, presenting at their events around the world, for over a decade. Last week's theme at the CMG event was Is Storage and I/O Still Important in a Virtual World? If you are not familiar with the CMG folks and you have an interest in servers, storage, I/O and networking along with performance, capacity planning and virtualization, these are some people to get to know. Granted the organization has its roots back in the mainframe era and thus is a gazillion years old, yet younger than the mainframe, however not by much. Over the past several years, the advent of lower cost hardware and the attitude of "hardware is cheap, just throw more hardware at the problem" has in part led to the decline of what CMG once was as an organization.

However, given the collective knowledge base and broad background, skills and areas of interest spanning servers, storage, I/O, networking, virtualization, hardware, software, mainframe and open systems among others, and given the current focus on addressing IT data center power, cooling, floor space and associated ecological and economic topics (see www.greendatastorage.com), CMG has an opportunity to revive itself even more than it has over the past few years. That is, assuming its leaders and members can recognize the opportunity, CMG can take a lead role in tying together the skills and knowledge to implement cross-technology-domain IT infrastructure resource management, including working with facilities personnel to ensure adequate resources (servers, storage, networking, power, cooling and floor space) are available when and where needed moving forward, not to mention helping shape and guide the server, storage and networking industry groups and forums on applicable metrics and measurement usage. If you are not familiar with CMG, check them out; it's a good group with a lot of good subject matter expertise to tap into.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

IBM Mainframe part Deux!

A couple more thoughts and comments regarding today's earlier post on the mainframe, and that has to do with mainframe experience and skills, which are drying up as more of the baby boomers retire. In some circles there are concerns about what happens when the last mainframe person retires: who will take care of the mainframe beasts? Part of the answer is in a group of programs being sponsored, in part and not surprisingly, by IBM to help stimulate and educate the next generation of mainframe-skilled workers. If you are a current or former IBM mainframe professional, consider getting involved in one of these groups to help transfer knowledge and skill sets to the next generation.

Here are some links to learn more about these programs.

Check them out.

Cheers
GS

Continuing Education and Refresher Time (RAID and LUNs)


TechTarget is currently running a refresher and primer series on RAID technology at www.searchstorage.com, combining several tips pertaining to RAID basics, what RAID level to use for different applications and more. RAID is around 20 years old, depending on how you view and define it, and some are now decrying RAID as being dead, which for some implementations may be the case; however the underlying premises still hold, it's just time for some new implementations and evolutionary changes. In the meantime, back on planet reality and in today-land, check it out for a primer, a refresh, or to see if RAID is even still relevant.

Another primer and refresher series appearing on TechTarget is about LUNs. For those of you new to data storage, or those who need a refresh, these are a good source of primer or background material. Check out the LUN series here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Updated Look and Feel

This blog is a work in progress and consequently there will be changes, hopefully for the better, as this journey continues, including an enhanced look and feel that I hope you find cleaner and better organized. More enhancements are in the works; however for now, it's back to generating some content and providing counsel and advisory services, which are my day job at StorageIO (www.storageio.com).

Cheers
GS

Sherwood becomes Atrato

If you had heard of Sherwood, based in the Denver area, in the past and saw the PR about Atrato but have not connected the dots, don't worry, they are one and the same with a new name. Sherwood, I mean Atrato, from a hardware and geeky side is kind of cool, using I/O accelerators to boost performance to magnitudes better than traditional spinning magnetic hard disk drives.

However, with the price of SSD (both flash and RAM) based technologies falling as fast as Britney Spears' musical career, and the usable capacity and performance per footprint of SSD rising as fast as Eli Manning's endorsement deals, one has to wonder what the real value proposition will end up being with and for Atrato, and whether they can leverage their hardware-centric lineup of management talent and investors to secure a major OEM deal. Will this be a niche product for niche extreme corner case markets, or can Atrato play to a more mainstream audience in the face of growing competition from the many faces of SSD based technologies?

Time will tell.

Cheers
GS

Director Dinner Discussions of the SAN kind

The other night while traveling I had the opportunity to meet up for a quick dinner with some folks I used to work with when I wore a vendor hat. Directions to the venue were pretty straightforward, as I knew where I was going even without GPS, "ever lost" or another electronic guidance system. During dinner (I had the crab cakes, which were great), and not surprisingly given the backgrounds and current vocations of the people I was dining with, the tour of technology and industry trends came around to what is and is not a director vs. a switch, and whether new ultra large scale directors like those recently announced by both Brocade and Cisco should be referred to as directors or something different.

I'm sure the switching folks from the various camps all have their different views, and if you follow marketing fundamentals, as a vendor you want to position yourself as something new and different to own a market or category. Hence when the discussion turned to why these new platforms were or were not directors and how they were something more, it reminded me of where we were in the industry about seven years ago when the director vs. switch debates raged. So with that in mind, when I was asked what I thought a director was vs. a switch, my answer was this:

Switches can be highly available with redundant power supplies, hot swap components, non-disruptive code load and activation, multiple physical interfaces (e.g. Fibre Channel and Ethernet) with multiple protocols (e.g. FCP, FICON, FCIP, iSCSI, etc.), and even intelligence and so forth. Switches can also be modular, such as those that enable blocks or groups of ports to be turned on or added via mezzanine or daughter cards, or even stackable switches. Generally speaking, and with all due respect, switches tend to be smaller; however if you go by the standard of what some consider a director vs. a switch, the ultra large scale QLogic switches make for an interesting quandary if you set size aside. Likewise, if you use the test that a director supports FICON for the mainframe, then the Brocade/McData switches make for another quandary as some of those support mixed mode.

IMHO a director is this: a switching device on a large scale, with the modularity to leverage different types of blades supporting different physical interfaces (Ethernet, Fibre Channel, InfiniBand, WAN) and protocols (FICON, FCP, iSCSI, FCIP, iFCP, etc.), with scalable performance, interoperability, no single point of failure (outside of the chassis, backplane or firmware), and generally much larger than a traditional switch or collection of stackable switches. Put another way, think "big iron" like a mainframe vs. small iron like a small server: director = big iron, switch = smaller and volume oriented like volume servers, and there are places for both.

So, to avoid confusing the marketplace, and in particular those who buy these technologies: if you want to think of a large switch as a big switch, fine; if you want to think of a brand new ultra large platform for converged networks and unified data center fabrics as a director, be my guest, as you are not alone; if you want to call it a backbone switch, great. After all, IMHO it's not so much what they are called, it's being able to speak the language of your audience and address what they are looking for instead of teaching them a new language and vocabulary; it's the functionality and how the different solutions address your various needs and plug into your environments.

Cheers
GS

Snow Birds

Don't ask me why; maybe it was flying through Detroit or Chicago, weaving between snowstorms the other day en route to Florida, a place where many northern "snow birds" flock during the winter months to escape harsh weather like what most of the northern U.S. is facing this week. Maybe it was the audience and atmosphere of the inaugural Florida Linux Show in Jacksonville, Florida, where I did a keynote talk the other day to an audience who adore the penguin.

I'm thinking it was a combination of too much iPod time en route to a conference where penguin fans (and not Pittsburgh hockey fans) convene, in a state many escape to this time of year, and how cold it has been, that put a tune in my head: "We come from the land of the ice and snow, from the midnight sun where hot springs blow…" which, if you are up on your Zeppelin, you will recognize.

Rod Sharp and Don (aka "The Linux Guy") Corbet put together a great inaugural event the other day at the University of North Florida, complete with a ribbon cutting ceremony, several presentations and keynotes, along with an exhibit area. What I found interesting about the event was the diversity of the attendees, who represented a cross section of interests, focus areas and types of organizations from enterprise to prosumer and pretty much everything in between, not to mention their enthusiasm and interest in engaging and sharing ideas and information; it made for a fun Monday morning before returning to the cold and frozen tundra.

Thanks Rod and Don for the opportunity to participate and be the keynote at your event this week.

Cheers
GS

Politics and Storage, or, storage in an election year V2.008

One of the disadvantages of travel, particularly air travel today, is time spent waiting. However, instead of worrying about lost time, waiting can also be time to think, ponder, watch CNN, FOX, CNBC, BBC, Sky or whatever your preference, and then think and ponder some more.

Lately, with the coverage of the U.S. elections this year, the debates, candidate mudslinging and FUD, claims, counterclaims, statistics and studies to cite, different pundits and analysts as well as media coverage and perspectives (and it's only the primaries), it occurred to me that the IT industry, and storage or storage networking in general, has a lot more in common with politics than might be noticed. After all, take a look at data storage or storage networking or however you want to classify it: there are opposing views, special interests and lobbies (e.g. trade groups), left vs. right, conservative vs. liberal, independent vs. establishment and so forth. For example, on the storage and I/O networking front, you have Ethernet vs. Fibre Channel, iSCSI vs. Fibre Channel, Fibre Channel vs. NAS, NAS vs. iSCSI vs. DAS, or iSCSI vs. FCoE vs. InfiniBand. (Check out I/O I/O it's off to virtual work we go.)

Then there are the tape vs. disk, disk vs. MAID, or disk vs. SSD debates, or managed service provider vs. do-it-yourself, not to mention the platform and vendor debates, or what to call a product, such as a switch or director or platform or fabric. Another is the dedupe debate of when to dedupe or not, and where and how to dedupe. Or how about greening IT and storage, or addressing power and cooling issues? Like many politicians on all sides of the table, there is a disconnect between issues and messaging, such as the growing green gap where vendors' green messages do not align with IT customers' pain points of power, cooling, floor space, recycling and energy efficiency; if you step back, both sides have common intentions, it's just that the message is not connecting.

Rest assured, we are only in early February and already things are heating up on both the political front and the IT and data storage or data infrastructure fronts, which should make for an interesting show and conference season leading up to the fall elections and storage selections. That is, unless the vendors, unlike the politicians, simply show up to sit around the campfire singing kumbaya together, only to leave and return to their debates on the battlefield.

Cheers
Greg Schulz – www.storageio.com

Airport Parking, Tiered Storage and Latency


Ok, so what do airport parking, tiered storage and latency have in common? Based on some recent travel experience I will assert that there is a bit in common, or at least an analogy. What got me thinking about this was that recently I could not get a parking spot at the airport's primary parking ramp next to the terminal (either a reasonable walk or short tram ride away), which offers quick access to the departure gate.

Granted there is a premium for the ability to park or "store" my vehicle for a few days near the airport terminal; however that premium is offset by the time savings and fewer disruptions, giving me a few extra minutes to get other things done while traveling.

Let me call the normal primary airport parking tier-1 (regardless of what level of the ramp you park on), with tier-0 being valet parking, where you pay a fee that might rival the cost of your airline ticket, yet your car stays in a climate controlled area, gets washed and cleaned, maybe gets an oil change, and hopefully sits in a more secure environment with even faster access to your departure gate; something for the rich and famous.

Now the primary airport parking has been full lately, not surprising given the cold weather and everyone looking to use up their carbon offset credits to fly somewhere warm or attend business meetings or whatever it is that they are doing.

Budgeting some extra time, a couple of weeks ago I tried one of those off-site airport parking facilities where the bus picks you up in the parking lot and whisks you off to the airport. Then, on return, you wait for the bus to pick you up at the airport, ride to the lot, and tour the lot looking at everyone's car as they get dropped off, and 30-40 minutes later you are finally at your vehicle, faced with the challenge of how to get out of the parking lot late at night. As it is such a budget operation, they have gone to lights-out, automated check-out: put your credit card in the machine and the gate opens. That is, if the credit card reader is not frozen because it is about "zero" outside and the machine won't read your card, using up more time. However, heck, I saved a few dollars a day.

On another recent trip, again the main parking ramp was full; at least the airport has a parking or storage resource monitoring (aka Airport SRM) tool that you can check ahead of time to see if the ramps are full or not. This time I went to another terminal, parked in the ramp there, walked a mile (it would have been a nice walk had it not been 1 above zero (F) with a 20 mile per hour wind) to the light rail station, waited ten minutes for the 3 minute train ride to the main terminal, then walked to the tram for the 1-2 minute ride to the real terminal to go to my departure gate. On return, the process was reversed, adding what I estimate to be about an hour to the experience, which, if you have the time, is not a bad option and certainly good exercise, even if it was freezing cold.

During the planes, trains and automobiles expedition, it dawned on me that airport parking is a lot like tiered storage, in that you have different types of parking with different cost points, locality of reference or latency (how much time it takes to get from your car to your plane), and levels of protection and security, among other attributes.

I likened the off-airport parking experience to off-line tier-3 tape or MAID, or at best near-line tier-2 storage, in that I saved some money at the cost of lost time and productivity. The parking at the remote airport ramp involving a train ride and tram ride I likened to tier-2 or near-line storage over a very slow network or I/O path, in that the ramp itself was pretty efficient; however the transit delays or latency were ugly. I did save some money, a couple of bucks, not as much as the off-site lot, however a few dollars less than the primary parking.

Hence I jump back to the primary ramp as being the fastest, the tier-1, unless you have someone footing your parking bills and can afford tier-0. It also dawned on me that with primary or tier-1 storage, regardless of whether it is enterprise class like an EMC DMX, IBM DS8K, Fujitsu or HDS USP, mid-range like an EMC CLARiiON, HP EVA, IBM DS4K, HDS AMS, Dell or EqualLogic, 3PAR, Fujitsu or NetApp, or an entry-level product from many different vendors, people still pay for the premium storage, aka tier-1 storage in a given price band, even if there are cheaper alternatives. However, like primary airport parking, there are limits on how much primary storage or parking can be supported due to floor space, power, cooling and budget constraints.

With tiered storage the notion is to align different types and classes of storage to various usage and application categories based on service requirements (performance, availability, capacity, energy consumption) balanced with cost or other concerns. For example, there is the high cost yet ultra high performance, very low energy consumption and relatively small capacity of tier-0 solid state devices (SSD), using either FLASH or dynamic random access memory (DRAM) as part of a storage system, as a storage device or as a caching appliance, to meet I/O or activity intensive scenarios. Tier-1 is high performance, however not as high performance as tier-0; although given a large enough budget, enough power and cooling ability and no constraints on floor space, you can make an aggregate of traditional disk drives outperform even solid state, with a lot more capacity, at the tradeoff of power, cooling, floor space and of course cost.

For most environments tier-1 storage will be the fastest storage with a reasonable amount of capacity, as tier-1 provides a good balance of performance and capacity per amount of energy consumed for active storage and data. On the other hand, lower cost, higher capacity and slower tier-2 storage, also known as near-line or secondary storage, is used in some environments as primary storage where performance is not a concern, yet is typically intended for non-performance-intensive applications.

Again, given enough money, unlimited power, cooling and floor space, not to mention the required enclosures, controllers and management software, you can aggregate a large bunch of low-cost SATA drives, as an example, to produce a high level of performance. However, the cost of achieving a high activity or performance level that way, whether IOPS or bandwidth, particularly where the excess capacity is not needed, would make SSD technology look cheap from an overall cost basis perspective.
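
To make that tradeoff concrete, here is a rough sketch (in Python) of how many devices it takes to reach a given IOPS target and what the power draw looks like. The drive, SSD and workload numbers are purely illustrative assumptions, not figures from any particular vendor or product.

```python
# Rough, illustrative comparison: devices needed to hit an IOPS target.
# All numbers below are assumptions for the sake of the example, not vendor specs.

target_iops = 50_000           # hypothetical workload that needs activity, not capacity

sata_iops_per_drive = 80       # assumed small-block random IOPS for a 7.2K SATA drive
sata_watts_per_drive = 12      # assumed active power per drive

ssd_iops_per_device = 10_000   # assumed IOPS for one (2008-era) FLASH SSD
ssd_watts_per_device = 8       # assumed active power per SSD

sata_drives = -(-target_iops // sata_iops_per_drive)   # ceiling division
ssd_devices = -(-target_iops // ssd_iops_per_device)

print(f"SATA drives needed: {sata_drives}, drawing ~{sata_drives * sata_watts_per_drive} W")
print(f"SSDs needed:        {ssd_devices}, drawing ~{ssd_devices * ssd_watts_per_device} W")
# With these assumptions: ~625 SATA drives (~7.5 kW) vs. 5 SSDs (~40 W) for the same IOPS,
# which is the point above: if you need the activity and not the capacity,
# throwing spindles at the problem gets expensive fast.
```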

Likewise, replacing all of your disk with SSD, particularly for capacity-based environments, is not really practical outside of extreme corner case applications, unless you have the disposable income of a small country for your data storage and IT budget.

Another aspect of tiered storage is the common confusion between a class of storage and the class of a storage vendor, or where a product is positioned, for example by price band or target environment such as enterprise, small/medium environment, small/medium business (SMB), small office or home office (SOHO), or prosumer/consumer.

I often hear discussions that go along the lines of tier-1 storage being products for the enterprise, tier-2 being for workgroups and tier-3 being for SMB and SOHO. I also hear confusion around tier-1 being block based, tier-2 being NAS and tier-3 being tape. "What we have here is a failure to communicate," in that there is confusion around tiers, categories, classification, price bands, and product positioning and perception. Adding to the confusion, there are also different tiers of access, including Fibre Channel and FICON using 8GFC (coming soon to a device near you), 4GFC, 2GFC and even 1GFC, along with 1GbE and 10GbE for iSCSI and/or NAS (NFS and/or CIFS), as well as InfiniBand for block (iSCSI or SRP) and file (NAS), each offering different cost, performance, latency and other attributes for aligning to various application service and cost requirements.
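
As a rough frame of reference for those access tiers, here is a small sketch (in Python) converting nominal link rates into approximate payload bandwidth; the encoding assumptions are the commonly cited ones (8b/10b for these Fibre Channel generations and GbE, 64/66b for 10GbE), and the results are ballpark figures, not measured throughput.

```python
# Approximate payload bandwidth for common access tiers (ballpark figures only;
# real-world throughput also depends on protocol overhead, hosts and devices).

access_tiers = {
    # name: approximate data rate in Gbit/s after line encoding
    "1GFC":  0.85,    # 1.0625 Gbaud with 8b/10b encoding
    "2GFC":  1.7,
    "4GFC":  3.4,
    "8GFC":  6.8,     # 8.5 Gbaud with 8b/10b encoding
    "1GbE":  1.0,
    "10GbE": 10.0,    # 64/66b encoding, ~10 Gbit/s of data
}

for name, data_gbps in access_tiers.items():
    print(f"{name:>6}: ~{data_gbps * 1000 / 8:,.0f} MB/s of payload, nominal")
```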

What this all means is that there is more to tiered storage: there is tiered access, tiered protection, tiered media, and different price bands and categories of vendors and solutions to be aligned with applicable usage and service requirements. On the other hand, similar to airport parking, I can choose to skip the airport parking and take a cab to the airport, which would be analogous to shifting your storage needs to a managed service provider. Ultimately, however, it will come down to balancing performance, availability, capacity and energy (PACE) efficiency against the level of service and specific environment or application needs.
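
For what it's worth, here is a toy sketch (in Python) of what aligning applications to tiers on PACE-style attributes might look like; the tier attributes, thresholds and cost figures are made-up placeholders to illustrate the idea, not a recommendation or anyone's actual classification.

```python
# Toy tier-alignment sketch: pick the cheapest tier that meets an application's
# latency and availability needs. All attributes below are illustrative placeholders.

tiers = [
    # (name, max_latency_ms, availability, relative_cost_per_tb)
    ("tier-0 SSD",        1,      0.99999, 40),
    ("tier-1 FC disk",    8,      0.9999,  10),
    ("tier-2 SATA disk",  20,     0.999,    4),
    ("tier-3 tape/MAID",  60_000, 0.999,    1),
]

def pick_tier(required_latency_ms: float, required_availability: float) -> str:
    """Return the lowest-cost tier that satisfies the latency and availability needs."""
    candidates = [
        t for t in tiers
        if t[1] <= required_latency_ms and t[2] >= required_availability
    ]
    if not candidates:
        return "no tier meets the requirements"
    return min(candidates, key=lambda t: t[3])[0]

print(pick_tier(required_latency_ms=2, required_availability=0.9999))       # -> tier-0 SSD
print(pick_tier(required_latency_ms=25, required_availability=0.999))       # -> tier-2 SATA disk
print(pick_tier(required_latency_ms=120_000, required_availability=0.99))   # -> tier-3 tape/MAID
```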

Greg Schulz www.storageio.com and www.greendatastorage.com

Was today the proverbial day that he!! froze over?


The upper Midwest, well, make that the Midwest in general, was hit by high winds and a nasty cold front today, nothing all that unusual for January, especially in the Minneapolis area, where the temperature yesterday was 40 to 45 with light rain and by about noon today was about zero (F) without the wind chill. So in the face of old man winter and the cold, I had a chuckle today reading the announcement that an SPC (Storage Performance Council) SPC-1 (IOPS) benchmark had finally been published for an EMC CLARiiON CX3-40.

Now for those of you who cover or track or cheerlead or dread EMC, you probably know their position, or at least that of some of their bloggers, on benchmarks and in particular SPC; if not, read here for some perspective. My position has always been that the best benchmark is your actual application in a real and applicable workload scenario; however, also realizing that not everybody can simulate or test their applications, there is a need for point-of-reference comparison benchmarks such as SPC, Microsoft ESRP, TPC and SPEC, among others.

Heres’ the caveat, take these benchmarks with a grain of salt; use them as a gauge along with other tools as they are an indicator of a particular workload. I am a fan of benchmarks that make sense, can be reproduced consistently and that are realistic representations, not a substitute for your actual applications. They are tools to help you make a better informed decision however that is all they are, a relative comparison.

Nuff rambling for now on that. Why did I chuckle this morning and think that He!! had perhaps finally frozen over? And don't get me wrong, Minneapolis, and the Midwest for that matter, is far from being He!!, granted it's cold as crap during the winter months. The reason I chuckled is that EMC did not in fact submit the SPC-1 benchmark for the CLARiiON CX3-40; instead, one of their competitors, namely Network Appliance (aka NetApp), did the honors for EMC along with a submission for their own FAS3040.

So besides the fact that there is plenty of wiggle and debate room in the test, for example NetApp using RAID 6 (e.g. RAID-DP) and mirroring on the EMC (I'm sure we will hear EMC cry foul), EMC can keep their hands clean on their party line about not submitting an SPC-1 result, or at least that's a card they could choose to play. I would like to see the DMX4, particularly with the new FLASH based SSDs, in a future SPC test submission; however, I'm not going to hold my breath, at least yet.

However, it is ironic that EMC has in fact submitted other benchmark test scenarios in the past, including for Microsoft ESRP among others. Speaking of SPC submissions, TMS (Texas Memory Systems) also posted some new SPC results the other day. So maybe he!! did not really freeze over today with EMC finally submitting an SPC test, however it made for a good warming chuckle on a cold morning.

Now, even though EMC has not officially submitted the SPC-1 result, even though it is posted on the SPC website, that leaves only one major storage vendor yet to have their midrange open storage systems represented in the SPC results, and that would be HDS with their AMS series; maybe He!! will still freeze over…

Cheers

Greg Schulz – www.storageio.com

StorageIO Outlines Intelligent Power Management and MAID 2.0 Storage Techniques, Advocates New Technologies to Address Modern Data Center Energy Concerns


Marketwire – January 23, 2008 – StorageIO Outlines Intelligent Power Management and MAID 2.0 Storage Techniques, Advocates New Technologies to Address Modern Data Center Energy Concerns. Intelligent Power Management and MAID 2.0 Equal Energy Efficiency Without Compromising Performance.

The StorageIO Group explores these issues in detail in two new Industry Trends and Perspectives white papers entitled "MAID 2.0: Energy Savings without Performance Compromises" and "The Many Faces of MAID Storage Technology." These and other Industry Trends and Perspectives white papers addressing power, cooling, floor space and green storage related topics, including "Business Benefits of Data Footprint Reduction" and "Achieving Energy Efficiency using FLASH SSD," are available for download at www.storageio.com and www.greendatastorage.com.
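
For readers new to the MAID idea, here is a minimal sketch (in Python) of the kind of idle-based power-state policy the term implies, with MAID 2.0 generally referring to stepping drives through intermediate power states rather than a simple on/off. The state names, idle thresholds and wattages below are illustrative assumptions, not anything taken from the white papers above.

```python
# Minimal sketch of a MAID-style idle policy: step an idle drive down through
# progressively lower power states. States, thresholds and wattages are assumptions.

IDLE_POLICY = [
    # (idle seconds before entering state, state name, approx watts)
    (0,    "active",         12),
    (120,  "heads_unloaded",  9),
    (600,  "reduced_rpm",     6),
    (1800, "spun_down",       1),
]

def power_state(idle_seconds: float) -> tuple[str, int]:
    """Return the deepest power state this drive qualifies for, given its idle time."""
    current = IDLE_POLICY[0]
    for threshold, name, watts in IDLE_POLICY:
        if idle_seconds >= threshold:
            current = (threshold, name, watts)
    return current[1], current[2]

for idle in (30, 300, 3600):
    name, watts = power_state(idle)
    print(f"idle {idle:>5}s -> {name} (~{watts} W)")
# The performance tradeoff is the wake-up latency from the deeper states,
# which is why the announcement stresses energy savings without compromising performance.
```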