Spring (May) 2012 StorageIO newsletter

StorageIO Newsletter Image
Spring (May) 2012 Newsletter

Welcome to the Spring (May) 2012 edition of the Server and StorageIO Group (StorageIO) newsletter. This follows the Fall (December) 2011 edition.

You can get access to this newsletter via various social media venues (some are shown below) in addition to the StorageIO web sites and subscriptions.

Click on the following links to view the Spring (May) 2012 edition as HTML or PDF, or to go to the newsletter page to view previous editions.

You can subscribe to the newsletter by clicking here.

Enjoy this edition of the StorageIO newsletter, and let me know your comments and feedback.

Nuff said for now

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

StorageIO Momentus Hybrid Hard Disk Drive (HHDD) Moments

This is the third in a series of posts that I have done about Hybrid Hard Disk Drives (HHDDs), along with pieces about Hard Disk Drives (HDDs) and Solid State Devices (SSDs). Granted, the HDD received its AARP card several years ago when it turned 50 and is routinely declared dead (or read here), even though it continues to evolve alongside maturing SSDs, with both expanding into different markets as well as usage roles.

For those who have not read previous posts about Hybrid Hard Disk Drives (HHDDs) and the Seagate Momentus XT you can find them here and here.

Since my last post, I have been using the HHDDs extensively and recently installed the latest firmware. The new HHDD firmware released by Seagate for the Momentus XT (SD25), like its predecessor SD24, cleaned up some annoyances and improved overall stability. Here is a Seagate post by Mark Wojtasiak discussing SD25 and the feedback obtained from customers via the Momentus XT forum.

If you have never done an HDD firmware update, it's not as bad or intimidating as might be expected. The Seagate firmware update tools make it very easy, assuming you have a recent good backup of your data (one that can be restored) and about 10 to 15 minutes of time for a couple of reboots.

Speaking of stability, the Momentus XT HHDDs have been performing well, helping to speed up access to large documents on various projects, including those for my new book. Granted, an SSD would be faster across the board; however, the large capacity at the price point of the HHDD is what makes it a hybrid value proposition. As I have said in previous posts, if you have the need for speed all of the time and time is money, get an SSD. Likewise, if you need as much capacity as you can get and performance is not your primary objective, then leverage the high capacity HDDs. On the other hand, if you need a balance of some performance boost with a capacity boost at a good value, then check out the HHDDs.

Image of Momentus XT courtesy of www.Seagate.com

Let's shift gears from the product or technology to the common questions that I get asked about HHDDs.

Common questions I get asked about HHDDs include:

What is a Hybrid Hard Disk Drive?

A Hybrid Hard Disk Drive combines a rotating HDD, persistent solid state flash memory and volatile dynamic random access memory (DRAM) in an integrated package or product. The value proposition and benefit is a balance of performance and capacity at a good price for those environments, systems or applications that do not need all SSD performance (and cost), yet need some performance in addition to large capacity.

How does the Seagate Momentus XT differ from other hybrid disks?
One approach is to take a traditional HDD and pair it with an SSD using a controller, packaged in various ways. For example, on a large scale, HDDs and SSDs coexist in the same tiered storage system, managed by the controllers, storage processors or nodes in the solution, including automated tiering and cache promotion or demotion. The main difference between those paired or tiered storage systems and HHDDs is that in the case of the Momentus XT, the HDD, SLC flash (providing SSD functionality) and RAM cache, along with their management, are all integrated within the disk drive enclosure.

Do I use SSDs and HDDs or just HHDDs?
I have HHDDs installed internally in my laptops. I also have HDDs installed in servers, NAS and disk to disk (D2D) backup devices and Digital Video Recorders (DVRs), along with external SSDs and Removable Hard Disk Drives (RHDDs). The RHDDs are used for archive and master or gold copy data protection that goes offsite, complementing how I also use cloud backup services as part of my data protection strategy.

What are the technical specifications of an HHDD such as the Seagate Momentus XT?
A 3Gbs SATA interface, 2.5 inch 500GB 7,200 RPM HDD with a 32MB RAM cache and 4GBytes of integrated SLC flash, all managed by an internal drive processor. Power consumption varies depending on what the device is doing, such as initial power up, idle, normal or other operating modes. You can view the Seagate Momentus XT 500GB (ST95005620AS, which is what I have) specifications here as well as the product manual here.


One of my HHDDs on a note pad (paper) and other accessories

Do you need a special controller or management software?
Generally speaking, no; the HHDDs that I have been using plugged and played into my existing laptops' internal bays, replacing the HDDs that came with those systems. No extra software was needed for Windows, and no data movement or migration tools were needed other than when initially copying from the source HDD to the new HHDD. The HHDDs do their own caching, read ahead and write behind, independent of the operating system or controller. The reason I say generally speaking is that, as with many devices, some operating systems or controllers may be able to leverage advanced features, so check your particular system's capabilities.

How come the storage system vendors are not talking about these HHDDs?
Good question; I assume it has a lot to do with the investment (people, time, engineering, money and marketing) that they have made or are making in controller and storage system software functionality to effectively create hybrid tiered storage systems using SSDs and HDDs on different scales. There have been some packaged HHDD systems or solutions brought to market by different vendors that combine HDDs and SSDs into a single physical package, glued together with some software and controllers or processors to appear as a single system. I would not be surprised to see discrete HHDDs (where the HDD, flash SSD and RAM are all one integrated product) appear in lower end NAS or multifunction storage systems, as well as in backup, dedupe or other systems that require large amounts of capacity along with a performance boost now and then.

Why do I think this? Simple: say you have five HHDDs, each with 500GB of capacity, configured as a RAID5 set resulting in 2TByte of usable capacity. Using the Momentus XT as a hypothetical example, that set also yields 5 x 4GByte, or 20GByte, of flash cache to help accelerate write operations during data dumps, backups or other updates. Granted, that is an overly simplified example, and storage systems can be found with hundreds of GBytes of cache; however, think in terms of value, balancing performance and capacity against cost for different usage scenarios. Examples include applications such as bulk or scale out file and object storage including cloud or big data, entertainment, server (Citrix/Xen, Microsoft/Hyper-V, VMware/vSphere) and desktop virtualization or VDI, disk to disk (D2D) backup, and business analytics, among others. The common tenet of those applications and usage scenarios is a combination of I/O and storage consolidation in a cost effective manner, addressing the continuing storage capacity to I/O performance gap.
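The arithmetic behind that hypothetical can be sketched quickly; the drive count, capacity and flash sizes below are simply the example numbers from above:

```python
# Hypothetical five drive RAID5 set built from Momentus XT class HHDDs,
# using the example numbers from the text above.
drives = 5
capacity_gb = 500        # per drive capacity
flash_gb = 4             # embedded SLC flash per drive

# RAID5 sets aside one drive's worth of capacity for parity.
usable_gb = (drives - 1) * capacity_gb    # 2000GB, i.e. 2TByte usable
aggregate_flash_gb = drives * flash_gb    # 20GByte of flash cache in total

print(usable_gb, aggregate_flash_gb)
```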

Data Center and I/O Bottlenecks

Storage and I/O performance gap

Do you have to backup HHDDs?
Yes, just as you would want to backup or protect any SSD or HDD based device or system.

How does data get moved between the SSD and the HDD?
Other than the initial data migration from the old HDD (or SSD) to the HHDD (unless you are starting with a new system), once your data and applications exist on the HHDD, the device automatically manages RAM, flash and HDD activity via its internal processor. Unlike a tiered storage system where data blocks or files may be moved between different types of storage devices, inside the HHDD all data gets written to the HDD, while the flash and RAM are used as buffers for caching depending on activity needs. If you have sat through or listened to a NetApp or HDS discussion of using cache for tiering, what the HHDDs do is similar in concept, however on a smaller scale at the device level, potentially even in a complementary mode in the future. Other functions performed inside the HHDD by its processor include reading and writing, managing the caches, bad block replacement or revectoring on the HDD, wear leveling of the SLC flash, and other routine tasks such as integrity checks and diagnostics. In short, once data is stored on the HHDD, it is managed by the device itself, similar to how an SSD or HDD moves blocks of data to and from the media while leveraging RAM cache as a buffer.
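As a loose conceptual sketch only (Seagate's actual Adaptive Memory caching logic is internal to the drive and not public in this detail), the behavior described above, where writes always land on the HDD while a small amount of fast memory caches hot blocks for re-reads, can be modeled like this:

```python
from collections import OrderedDict

class HybridDriveSketch:
    """Toy model of an HHDD: all writes land on the 'HDD'; a small
    flash-like LRU cache holds recently read blocks to speed re-reads."""

    def __init__(self, cache_blocks=4):
        self.hdd = {}                # block address -> data (always authoritative)
        self.cache = OrderedDict()   # small, fast read cache
        self.cache_blocks = cache_blocks

    def write(self, addr, data):
        self.hdd[addr] = data        # data is always persisted to the HDD
        self.cache.pop(addr, None)   # invalidate any stale cached copy

    def read(self, addr):
        if addr in self.cache:       # cache hit: fast path
            self.cache.move_to_end(addr)
            return self.cache[addr], "cache"
        data = self.hdd[addr]        # cache miss: read from spinning media
        self.cache[addr] = data      # promote the hot block into the cache
        if len(self.cache) > self.cache_blocks:
            self.cache.popitem(last=False)  # evict least recently used block
        return data, "hdd"
```

The first read of a block comes from the "hdd"; a repeat read of the same block is served from the "cache", which is roughly why performance on frequently used files tends to improve over time.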

Where is the controller that manages the SSD and HDD?
The HHDD itself is the controller, per se, in that the internal processor that manages the HDD also directly accesses the RAM and flash.

What type of flash is used and will it wear out?
The XT uses SLC (single level cell) flash, which with wear leveling has a good duty cycle (life span) and is what is typically found in higher end flash SSD solutions vs. lower cost MLC (multi level cell) flash.

Have I lost any data from it yet?
No, at least nothing that was not my own fault, such as saving the wrong file in the wrong place and having to recover from one of my recent D2D copies or the cloud. As for what I have done with the HDDs that were replaced by the HHDDs, they are now an extra gold master backup copy as of a particular point in time and are being kept in a safe, secure facility, encrypted of course.

Have you noticed a performance improvement?
Yes; performance will vary, however in many cases I have seen performance comparable to SSD on both reads and writes, as long as the HDD keeps up with the flash and RAM cache. Even as larger amounts of data are written, I have seen better performance than with HDDs. The caveat is that initially you may see little to marginal performance improvement; over time, however, particularly on the same files, performance tends to improve. Working on documents tens to hundreds of MBytes in size, I noticed good performance when doing saves compared to working with them on an HDD.

What do the HHDDs cost?
Amazon.com has the 500GB model for about $100, which is about $40 to $50 less than when I bought my most recent one last fall. I have heard from other people that you can find them at even lower prices at other venues. In the interest of disclosure, I bought one of my HHDDs from Amazon and Seagate gave me one to test.

Will I buy more HHDDs or switch to SSDs?
Where applicable I will add SSDs as well as HDDs; however, where possible and practical, I will also add HHDDs, perhaps even replacing the HDDs in my NAS system with HHDDs at some point, or maybe trying them in a DVR.

What is the down side to the HHDDs?
I'm generating and saving more data on the devices at a faster rate. When I installed them I wondered if I would ever fill up a 500GB drive; I still have hundreds of GBytes free or available for use, however I am also able to carry more reference data or information than in the past. In addition to more reference data including videos, audio, images, slide decks and other content, I have also been able to keep more versions or copies of documents, which has been handy on the book project. Data that changes gets backed up D2D as well as to my cloud provider, including while traveling. Leveraging compression and dedupe, given that many chapters or other content are similar, not as much data actually gets transmitted when doing cloud backups, which has been handy when doing a backup from an airplane flying over the clouds. A wish for the XT type of HHDD that I have is for vendors such as Seagate to add Self Encrypting Disk (SED) capabilities, along with applying continued intelligent power management (IPM) enhancements.
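The reason similar content moves so little data over the wire can be illustrated with a simple chunk hashing sketch (the fixed chunk size and function names here are hypothetical, not any particular backup product's format): only chunks the backup target has not already stored need to be transmitted.

```python
import hashlib

def chunks(data, size=8):
    """Split data into fixed size chunks (real products use smarter chunking)."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def chunks_to_send(data, stored_hashes):
    """Return only the chunks whose hashes the backup target lacks."""
    send = []
    for chunk in chunks(data):
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in stored_hashes:
            send.append(chunk)
            stored_hashes.add(digest)
    return send

stored = set()
v1 = b"chapter one text" * 4            # first version of a document
first = chunks_to_send(v1, stored)      # initial backup sends the unique chunks
v2 = v1 + b"a small edit!"              # mostly unchanged second version
delta = chunks_to_send(v2, stored)      # only the new tail chunks get sent
```

The second backup transmits only the few chunks covering the edit, a small fraction of the file, which is the effect that makes backups over a slow in-flight connection workable.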

Why do I like the HHDD?
Simple: it solves both business and technology challenges while being an enabler. It gives me a balance of performance for productivity and capacity in a cost effective manner while being transparent to the systems it works with.

Here are some related links to additional material:
Data Center I/O Bottlenecks Performance Issues and Impacts
Has SSD put Hard Disk Drives (HDDs) On Endangered Species List?
Seagate Momentus XT SD 25 firmware
Seagate Momentus XT SD25 firmware update coming this week
A Storage I/O Momentus Moment
Another StorageIO Hybrid Momentus Moment
As the Hard Disk Drive (HDD) continues to spin
Funeral for a Friend
Seagate Momentus XT product specifications
Seagate Momentus XT product manual
Technology Tiering, Servers Storage and Snow Removal
Self Encrypting Disks (SEDs)

Ok, nuff said for now

Cheers Gs

Greg Schulz – Author The Green and Virtual Data Center (CRC), Resilient Storage Networks (Elsevier) and coming summer 2011 Cloud and Virtual Data Storage Networking (CRC)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

As the Hard Disk Drive (HDD) continues to spin

Updated 2/10/2018

Despite having been repeatedly declared dead at the hands of some new emerging technology over the past several decades, the Hard Disk Drive (HDD) continues to spin and evolve as it moves towards its 60th birthday.

More recently, HDDs have been declared dead due to flash SSDs which, according to some predictions, should have caused the HDD to be extinct by now.

Meanwhile, having not yet died, and having qualified for its AARP membership a few years ago, the HDD continues to evolve with capacity, smaller form factor, performance, reliability and density improvements, along with cost reductions.

Back in 2006 I did an article titled Happy 50th, hard drive, but will you make it to 60?

IMHO it is safe to say that the HDD will be around for at least a few more years if not another decade (or more).

This is not to say that the HDD has outlived its usefulness or that there are not other tiered storage mediums to do specific jobs or tasks better (there are).

Instead, the HDD continues to evolve and is complemented by flash SSDs in the way that HDDs are complementing magnetic tape (another declared dead technology), with each finding new roles to support more data being stored for longer periods of time.

After all, there is no such thing as a data or information recession!

The importance of this is technology tiering and resource alignment, matching the applicable technology to the task at hand.

Technology tiering (servers, storage, networking, snow removal) is about aligning the resource best suited to a particular need in a cost effective as well as productive manner. The HDD remains a viable tiered storage medium that continues to evolve while taking on new roles, coexisting with SSD and tape along with cloud resources. These and other technologies have their place, which ideally is finding or expanding into new markets instead of simply trying to cannibalize each other for market share.

Here is a link to a good story by Lucas Mearian on the history and evolution of the hard disk drive (HDD), including how a 1TB device that costs about $60 today would have cost about a trillion dollars back in the 1950s. FWIW, IMHO the 1 trillion dollars is low and should be more like 2 to 5 trillion for the one TByte if you apply common costs for management, people, care and feeding, power, cooling, backup, BC, DR and other functions.

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

IMHO, it is safe to say that the HDD is here to stay for at least a few more years (if not decades) or at least until someone decides to try a new creative marketing approach by declaring it dead (again).

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Another StorageIO Hybrid Momentus Moment

It's been a few months since my last post (read it here) about Hybrid Hard Disk Drives (HHDDs) such as the Seagate Momentus XT that I have been using.

The Momentus XT HHDD I have been using is a 500GB 7,200RPM 2.5 inch SATA Hard Disk Drive (HDD) with 4GB of embedded FLASH (aka SSD) and 32MB of DRAM memory for buffering hence the hybrid name.

I have been using the XT HHDD mainly for transferring large multi GByte files between computers and for doing some disk to disk (D2D) backups while becoming more comfortable with it. While not as fast as my 64GB all flash SSD, the XT HHDD is as fast as my 7,200RPM 160GB Momentus HDD and in some cases faster on burst reads or writes. The notion of an affordable 500GB HDD to support D2D was attractive; however, the ability to get a performance boost now and then via the embedded 4GB of flash opens many different possibilities, particularly when combined with compression.

Recently I switched the role of the Momentus XT HHDD from being a utility drive to being the main disk in one of my laptops. Despite many forums or bulletin boards reporting issues with the Seagate Momentus XT causing system hangs or the Windows Blue Screen of Death (BSoD), I continued on with the next phase of testing.

Making the switch to XT HHDD as a primary disk

I took a few precautions, including eating some of my own dog food that I routinely talk about. For example, I made sure that the Lenovo T61 where the Momentus XT was going to be installed was backed up. In addition, I synced my traveling laptop and made it the primary so that I could continue working during the conversion, not to mention having an extra copy in addition to normal onsite and offsite backups.

Ok, let's get back to the conversion or migration from a regular HDD to the HHDD.

Once I knew I had a good backup, I used the Seagate Discwizard (e.g. Acronis based) tool for imaging the existing T61 HDD to the Momentus XT HHDD. Using Discwizard (you could use other tools as well), I configured it to initialize the HHDD, which was attached via a Seagate Goflex USB to SATA cable kit, and to image or copy the contents of the T61 HDD partitions to the Momentus XT. During the several hours it took to copy and create a new bootable disk image on the HHDD, I continued working on my travel or standby laptop.

After the image copy was completed and verified, it was time to reboot and see how Windows (XP SP3) liked the HHDD; all seemed to be normal. Some parts of the boot seemed a bit faster, however not conclusively so. The next step was to shut down the laptop, physically swap the old internal HDD with the HHDD and reboot. The subsequent boot did seem faster, and programs accessing large files also seemed to run a bit faster.

Keep in mind that the HHDD is still a spinning 7,200RPM disk drive, so comparisons to a full time SSD would be apples to oranges, as would the cost and capacity difference between those devices. However, for what I wanted to see and use, the limited 4GB of flash does seem to provide a performance boost, and if I needed full time super fast performance, I could buy a larger capacity SSD and install it. I'm going to hold off on buying any more large capacity flash SSDs for the time being, however.

Do I see HHDDs appearing in SMB, SME or enterprise storage systems anytime soon? Probably not, at least not in primary storage systems. However, perhaps in some D2D backup, archive or dedupe and VTL devices or other appliances.

Momentus XT Speed Bumps

Now, to be fair, there have been some bumps in the road!

The first couple of days were smooth sailing, other than hearing the mystery chirp the HHDD makes a couple of times a day. Lo and behold, after a couple of days, just as many forums had indicated, a mystery system hang occurred (and no, not as Windows might normally do, for the Microsoft cynics). Other than the inconvenience of a reboot, no data was lost, as files being updated had been saved or backed up, and after the reboot everything was intact anyway. So far just an inconvenience, or so I thought.

Almost 24 hours later, the same thing happened, except this time I got to see the BSoD, which candidly I very rarely see despite hearing stories from others. Ok, this was annoying; however, as long as I did not lose any data, other than the time lost to a reboot, let's chalk this up to a learning experience and see where it goes. Now guess what: about 12 hours later, the system froze up once again, and this time I was in the middle of a document edit. This time I did lose about 8 minutes of typing that had not been auto saved (I have since changed my auto save interval from 10 minutes to 5 minutes).

With this BSoD incident, I took some notes and, using the X61s, started checking some web sites and verified that the BIOS firmware on the T61 was up to date. However, I noticed that the Seagate Momentus XT HHDD was at firmware 22 while there was a 23 version available. Reading through some web sites and forums, I was on the fence about trying firmware 23 given that it appeared a newer firmware version for the HHDD was in the works. I decided to forge forward with the experiment; after all, no real data loss had occurred, and I still had the X61s, not to mention the original T61 HDD, to fall back on in the worst case.

Going to the Seagate web site, I downloaded the firmware 23 install kit and ran it per their instructions, which was a breeze, and then did the reboot.

It has not quite been a week yet; however, knocking on wood, while I keep expecting to see one, no BSoD or system freezes have occurred. That said, and still knocking on wood, I'm also making sure things are backed up, protected and ready if needed. Likewise, if I start to see a rash of BSoDs, my plan is to fall back to the original T61 HDD, bring it up to date and use it until a newer HHDD firmware version is available to resume testing.

What is next for my Seagate Momentus XT HHDD?

I'm going to wait to see if the BSoD and mystery system hangs disappear, as well as for the arrival of the new firmware, followed by some more testing. However, when I'm confident with it, the next step is to put the XT HHDD into the X61s, which is used primarily for travel purposes.

Why wait? Simple: while I can tolerate a reboot, crash, data loss or disruption while in the office, given access to copies as well as standby or backup systems to work from, when traveling the options are more limited. Sure, if there is data loss, I can go to my cloud provider and rapidly recall a file or multiple ones as needed, or for critical data, recover from a portable encrypted USB device. Consequently, I want more confidence in the XT HHDD before deploying it for travel mode, which is probably safe to do as of now; however, I want to see how stable it is in the office before taking it on the road.

What does this all mean?

  • Simple, have a backup of your data and systems
  • Test and verify those backups or standby systems periodically
  • Have a fall back plan for when trying new things
  • Keep productivity in mind, at some point you may have to fall back
  • If something is important enough to protect, have multiple copies
  • Be ready to eat your own dog food or what you talk about
  • Do not be scared, however be prepared, look before you leap

How about you, are you using an HHDD yet, and if so, what are your experiences? I am curious to hear if anyone has tried using an HHDD in their VMware lab environment yet in place of a regular HDD, or before spending a boat load of money on a similar sized SSD.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

What is DFR or Data Footprint Reduction?

Updated 10/9/2018

Data Footprint Reduction (DFR) is a collection of techniques, technologies, tools and best practices that are used to address data growth management challenges. Dedupe is currently the industry darling for DFR particularly in the scope or context of backup or other repetitive data.

However, DFR expands beyond that scope to address expanding data footprints and their impact across primary, secondary and offline data, ranging from high performance to inactive high capacity.

Consequently, the focus of DFR is not just on reduction ratios; it is also about meeting time or performance rates and data protection windows.

This means DFR is about using the right tool for the task at hand to effectively meet business needs and cost objectives while meeting service requirements across all applications.

Examples of DFR technologies include Archiving, Compression, Dedupe, Data Management and Thin Provisioning among others.
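As a quick, hedged illustration of one of those techniques, compression: the repetitive data that DFR often targets (logs, similar document versions, backup streams) shrinks dramatically, while the exact ratio always depends on the data itself.

```python
import zlib

# Highly repetitive sample data; real world ratios vary with content.
data = b"backup backup backup " * 500
compressed = zlib.compress(data)

ratio = len(data) / len(compressed)
print(f"{len(data)} bytes -> {len(compressed)} bytes, about {ratio:.0f}:1")
```

Random-like or already compressed data (media files, encrypted content) would see little to no reduction, which is one reason DFR is about picking the right tool for the task rather than a single technique.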

Read more about DFR in Part I and Part II of a two part series found here and here.

Where to learn more

Learn more about data footprint reduction (DFR), data footprint overhead and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

That is all for now, hope you find these ongoing series of current or emerging Industry Trends and Perspectives posts of interest.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Industry Trends and Perspectives: Tiered Storage, Systems and Mediums

This is part of an ongoing series of short industry trends and perspectives blog post briefs.

These short posts complement other longer posts along with traditional industry trends and perspective white papers, research reports and solution brief content found at www.storageioblog.com/reports.

Two years ago we read about how the magnetic disk drive would be dead in a couple of years at the hands of flash SSDs. Guess what: it is a couple of years later and the magnetic disk drive is far from dead. Granted, high performance Fibre Channel disks will continue to be replaced by high performance, small form factor 2.5" SAS drives, along with continued adoption of high capacity SAS and SATA devices.

Likewise, SSD or flash drives continue to be deployed; however, outside of the iPhone, iPod and other consumer or low end devices, nowhere near the projected or perhaps hoped for level. Rest assured, the trend I'm seeing and hearing from IT customers is that while some will continue to look for places to strategically deploy SSDs where possible, practical and affordable, there will continue to be a role for disk and even tape devices on a go forward basis.

Also watch for more coverage and discussion around the emergence of the Hybrid Hard Disk Drive (HHDD) that was discussed about four to five years ago. The HHDD made an appearance and then quietly went away for some time, perhaps for more R&D time in the labs while flash SSDs garnered the spotlight.

There could be a good opportunity for HHDD technology to leverage the best of both worlds: continued price decreases for disk with larger capacity, combined with smaller yet more affordable amounts of flash, in a solution that is transparent to the server or storage controller, making for easier integration.

Related and companion material:
Blog: ILM = Has It Lost its Meaning
Blog: SSD and Storage System Performance
Blog: Has SSD put Hard Disk Drives (HDDs) On Endangered Species List
Blog: Optimize Data Storage for Performance and Capacity Efficiency

That is all for now, hope you find this ongoing series of current and emerging Industry Trends and Perspectives interesting.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Industry Trends and Perspectives: Tiered Hypervisors and Microsoft Hyper-V

Storage I/O trends
This is part of an ongoing series of short industry trends and perspectives blog post briefs.

These short posts complement other longer posts along with traditional industry trends and perspective white papers, research reports, solution brief content found at www.storageioblog.com/reports.

Multiple Tiered Hypervisors for Server Virtualization

The topic of this post is a trend that I am seeing and hearing about during discussions with IT professionals: the use of two or more server virtualization hypervisors, or what are known as tiered hypervisors.

Server Virtualization Hypervisor Trends

A trend tied to server virtualization that I am seeing more of is IT organizations increasingly deploying or using two or more different hypervisors (e.g. Citrix/Xen, Microsoft/Hyper-V, VMware vSphere) in their environment (on separate physical servers or blades).

Tiered hypervisors is a concept similar to what many IT organizations already have in terms of different types of servers for various use cases, multiple operating systems as well as several kinds of storage mediums or devices.

What I'm seeing is that IT pros are using different hypervisors to meet various cost, management and vendor control goals, aligning the applicable technology to the business or application service category.

Tiered Virtualization Hypervisor Management

Of course this brings up the discussion of how to manage multiple hypervisors; thus the real battle is, or will be, not about hypervisors but rather about end to end (E2E) management.

A question that I often ask VARs and IT customers is whether they see Microsoft on the offensive or defensive with Hyper-V vs. VMware, and vice versa, whether VMware is on the defense or offense against Microsoft.

Not surprisingly, the VMware and Microsoft faithful will each say that the other is clearly on the defensive.

Meanwhile from other people the feelings are rather mixed, with many feeling that Microsoft is increasingly on the offensive, and VMware seen by some as playing a strong defense with a ferocious offense.

Learn more

Related and companion material:
Video: Beyond Virtualization Basics (Free: May require registration)
Blog: Server and Storage Virtualization: Life beyond Consolidation
Blog: Should Everything Be Virtualized?

That is all for now, hope you find this ongoing series of current and emerging Industry Trends and Perspectives interesting.

Ok, nuff said.

Cheers gs


Technology Tiering, Servers Storage and Snow Removal

Granted it is winter in the northern hemisphere and thus snow storms should not be a surprise.

However between December 2009 and early 2010 there was plenty of record activity, from the U.K. (or here) to the U.S. east coast including New York, Boston and Washington DC, across the midwest and out to California. It made for a white Christmas and SANta fun, along with snow fun in general in the new year.

2010 Snow Storm via www.star-telegram.com

What does this have to do with Information Factories, aka IT resources including public or private clouds, facilities, servers, storage, networking and data management, let alone tiering?

What does this have to do with tiered snow removal, or even snow fun?

Simple: different tools are needed for addressing various types of snow, from wet and heavy to light powdery dustings to deep downfalls. Likewise, there are different types of servers, storage and data networks, along with operating systems, management tools and even hypervisors, to deal with various application needs or requirements.

First, let's look at tiered IT resources (servers, storage, networks, facilities, data protection and hypervisors) to meet various efficiency, optimization and service level needs.

Do you have tiered IT resources?

Let me rephrase that question: do you have different types of servers with various performance, availability, connectivity and software that support various applications and cost levels?

Thus the whole notion of tiered IT resources is to be able to have different resources that can be aligned to the task at hand in order to meet performance, availability, capacity, energy and economic requirements along with service level agreement (SLA) objectives.
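
To make that alignment idea concrete, here is a small hypothetical sketch in Python. The tier names, IOPS ceilings, availability figures and cost-per-GB numbers are all invented for illustration, not drawn from any vendor or product:

```python
# Hypothetical sketch: align a workload to a resource tier based on
# performance, availability and cost requirements. All tier names and
# thresholds below are illustrative assumptions only.

def align_tier(iops_needed, availability_needed, budget_per_gb):
    """Pick the lowest-cost tier that still meets the service requirements."""
    tiers = [  # (name, max_iops, availability, cost_per_gb) - cheapest first
        ("tier-3 offline/tape", 1, 0.99, 0.05),
        ("tier-2 nearline SATA", 200, 0.999, 0.50),
        ("tier-1 FC/SAS disk", 2000, 0.9999, 2.00),
        ("tier-0 SSD", 100000, 0.9999, 20.00),
    ]
    for name, max_iops, avail, cost in tiers:
        if (iops_needed <= max_iops and availability_needed <= avail
                and cost <= budget_per_gb):
            return name
    return "no suitable tier: revisit requirements or budget"

print(align_tier(150, 0.999, 1.00))    # a nearline tier meets this workload
print(align_tier(5000, 0.9999, 25.0))  # only the SSD tier delivers this IOPS level
```

The point of the sketch is the decision shape, not the numbers: walk the tiers from cheapest to most capable and stop at the first one that satisfies all of the service requirements.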

Computers or servers are targeted for different markets including Small Office Home Office (SOHO), Small Medium Business (SMB), Small Medium Enterprise (SME) and ultra large scale or extreme scaling, including high performance super computing. Servers are also positioned for different price bands and deployment scenarios.

General categories of tiered servers and computers include:

  • Laptops, desktops and workstations
  • Small floor standing towers or rack mounted 1U and 2U servers
  • Medium size floor standing towers or larger rack mounted servers
  • Blade Centers and Blade Servers
  • Large size floor standing servers, including mainframes
  • Specialized fault tolerant, rugged and embedded processing or real time servers

Servers take on different names (email server, database server, application server, web server, video or file server, network server, security server, backup server or storage server) depending on their use. In each of these examples, what defines the type of server is the type of software being used to deliver a type of service. Sometimes the term appliance will be used for a server; this is indicative of the type of service the combined hardware and software solution is providing. For example, the same physical server running different software could be a general purpose application server, a database server running for example Oracle, IBM, Microsoft or Teradata among other databases, an email server or a storage server.

This can lead to confusion in that a server may be able to support different types of workloads; whether it should be considered a server, storage, networking or application platform depends on the type of software being used on it. If, for example, storage software in the form of a clustered and parallel file system is installed on a server to create a highly scalable network attached storage (NAS) or cloud based storage service solution, then the server is a storage server. If the server has a general purpose operating system such as Microsoft Windows, Linux or UNIX and a database on it, it is a database server.

While not technically a type of server, some manufacturers use the term tin wrapped software in an attempt to not be classified as an appliance, server or hardware vendor, while still positioning their software as a turnkey solution. The idea is to avoid being perceived as a software only solution that requires integration with hardware. The approach is to use off the shelf, commercially available general purpose servers with the vendor's software technology pre-integrated and installed, ready for use. Thus, tin wrapped software is a turnkey software solution with some tin, or hardware, wrapped around it.

How about the same with tiered storage?

That is, different tiers (Figure 1) of storage: fast high performance disk including RAM or flash based SSD, fast Fibre Channel or SAS disk drives, and high capacity SAS and SATA disk drives, along with magnetic tape as well as cloud based backup or archive?

Tiered Storage Resources
Figure 1: Tiered Storage resources

Tiered storage is also sometimes thought of in terms of large enterprise class solutions or midrange, entry level, primary, secondary, near line and offline. Not to be forgotten, there are also tiered networks that support various speeds, convergence, multi tenancy and other capabilities, from I/O Virtualization (IOV) to traditional LAN, SAN, MAN and WANs, including 1Gb Ethernet (1GbE) and 10GbE up to emerging 40GbE and 100GbE, not to mention various Fibre Channel speeds supporting various protocols.

The notion of tiered networks, as with servers and storage, is to enable aligning the right technology to the task at hand economically while meeting service needs.

Two other common IT resource tiering techniques involve facilities and data protection. Tiered facilities can indicate size, availability and resiliency among other characteristics. Likewise, tiered data protection means aligning the applicable technology to support different RTO and RPO requirements, for example using synchronous replication where applicable vs. asynchronous time delayed replication for longer distances, combined with snapshots. Other forms of tiered data protection include traditional backups, whether to disk, tape or cloud.
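
As a hypothetical illustration of aligning protection technology to RTO/RPO requirements, the mapping might be sketched like this; the RPO cutoffs and method names are invented for the example, not recommendations:

```python
# Hypothetical sketch mapping a recovery point objective (RPO) target, in
# minutes, to a tiered data protection method. Cutoffs are illustrative only.

def protection_tier(rpo_minutes):
    """Return an illustrative protection method for a given RPO target."""
    if rpo_minutes == 0:
        return "synchronous replication"              # zero data loss tolerated
    if rpo_minutes <= 15:
        return "asynchronous replication + snapshots" # minutes of exposure ok
    if rpo_minutes <= 24 * 60:
        return "scheduled backup to disk"             # up to a day of exposure
    return "backup to tape or cloud"                  # longest RPO, lowest cost

print(protection_tier(0))             # synchronous replication
print(protection_tier(60))            # scheduled backup to disk
print(protection_tier(7 * 24 * 60))   # backup to tape or cloud
```

In practice the RTO side would factor in as well (how fast the copy can be restored), but the shape is the same: tighter objectives map to more expensive protection tiers.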

There is a new emerging form of tiering in many IT environments: tiered virtualization, or specifically tiered server hypervisors in virtual data centers, with objectives similar to having different server, storage, network, data protection or facilities tiers. Instead of an environment running all VMware, for example, Microsoft Hyper-V or Xen among other hypervisors may be deployed to meet different application service class requirements. For example, VMware may be used for premium features and functionality on some applications, while others that do not need those features, and require lower operating costs, leverage Hyper-V or Xen based solutions. Taking the tiering approach a step further, one could also declare tiered databases, for example legacy Oracle vs. MySQL or Microsoft SQL Server among other examples.

What about IT clouds? Are those different types of resources, or essentially an extension of existing IT capabilities, for example cloud storage being another tier of data storage?

There is another form of tiering, particularly during the winter months in the northern hemisphere where there is an abundance of snow this time of the year. That is, tiered snow management, removal or movement technologies.

What about tiered snow removal?

Well, let's get back to that then.

Like IT resources, there are different technologies that can be used for moving, removing, melting or managing snow.

For example, I can't do much about getting rid of snow other than pushing it all down the hill and into the river, something that would take time and lots of fuel. Or, I can manage where I put snow piles to be prepared for the next storm, placing them where they will melt and help avoid spring flooding. Some technologies can be used for relocating snow elsewhere, kind of like archiving data onto different tiers of storage.

Regardless of whether it is a snowstorm or IT clouds (public or private), virtual, managed service provider (MSP), hosted or traditional IT data centers, all require physical servers, storage, I/O and data networks along with software, including management tools.

Granted not all servers, storage or networking technology, let alone software, are the same, as they address different needs. IT resources including servers, storage, networks, operating systems and even hypervisors for virtual machines are often categorized and aligned to different tiers corresponding to needs and characteristics (Figure 2).

Tiered IT Resources
Figure 2: Tiered IT resources

For example, in Figure 3 there is a lightweight plastic shovel (Shovel 1) for moving small amounts of snow in a wide stripe or pass. Then there is a narrow shovel for digging things out or breaking up snow piles (Shovel 2). Also shown is a light duty snow blower (snow thrower) capable of dealing with powdery or non wet snow, grooming in tight corners or small areas.

Tiered Snow tools
Figure 3: Tiered Snow management and migration tools

For other light dustings, a yard leaf blower does double duty for migrating or moving snow in small or tight corners such as decks, patios or for cleanup. Larger snowfalls, or where there is a lot of area to clear, involve heavier duty tools such as the Kawasaki mule with a 5 foot Curtis plow. The mule is a multifunction, multiprotocol tool capable of being used for hauling, towing, pulling or recreational tasks.

When all else fails, there is a pickup truck to get out and about, not to mention to pull other vehicles out of ditches or snow piles when they become stuck!

Snow movement
Figure 4: Sometimes the snow is light, making for fast, low latency migration

Snow movement
Figure 5: And sometimes even snow migration technology goes off line!


And that is it for now!

Enjoy the northern hemisphere winter and snow while it lasts, make the best of it with the right tools to simplify the tasks of movement and management, similar to IT resources.

Keep in mind, it's about the tools and when along with how to use them for various tasks, for efficiency and effectiveness, and a bit of snow fun.

Ok, nuff said.

Cheers gs


Storage Efficiency and Optimization – The Other Green

For those of you in the New York City area, I will be presenting live in person at the Storage Decisions September 23, 2009 conference: The Other Green, Storage Efficiency and Optimization.

Throw out the "green" buzzword, and you're still left with the task of saving or maximizing use of space, power and cooling while stretching available IT dollars to support growth and business sustainability. For some environments the solution may be consolidation, while others need to maintain quality of service response time, performance and availability, necessitating faster, energy efficient technologies to achieve optimization objectives.

To address these and other related issues, you can turn to the cloud, virtualization, intelligent power management, data footprint reduction and data management, not to mention various types of tiered storage and performance optimization techniques. The session will look at various techniques and strategies to optimize both on-line active or primary as well as near-line or secondary storage environments during tough economic times, and to position for future growth; after all, there is no such thing as a data recession!

Topics, technologies and techniques that will be discussed include among others:

  • Energy efficiency (strategic) vs. energy avoidance (tactical), what's different between them
  • Optimization and the need for speed vs. the need for capacity, finding the right balance
  • Metrics & measurements for management insight, what the industry is doing (or not doing)
  • Tiered storage and tiered access including SSD, FC, SAS, tape, clouds and more
  • Data footprint reduction (archive, compress, dedupe) and thin provision among others
  • Best practices, financial incentives and what you can do today
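
To illustrate one of the data footprint reduction techniques on the list, here is a minimal sketch of hash based de-duplication in Python: identical blocks are stored only once and referenced by fingerprint. This is a teaching sketch, not how any particular product implements dedupe:

```python
# Minimal sketch of hash based de-duplication: identical data blocks are
# stored once in a content-addressed store and referenced by fingerprint.
import hashlib

def dedupe(blocks):
    store = {}   # fingerprint -> unique block data (stored once)
    refs = []    # logical view: one fingerprint reference per input block
    for block in blocks:
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)
        refs.append(fp)
    return store, refs

blocks = [b"AAAA", b"BBBB", b"AAAA", b"AAAA"]
store, refs = dedupe(blocks)
print(f"{len(refs)} logical blocks stored as {len(store)} unique blocks")
# 4 logical blocks stored as 2 unique blocks
```

Real systems add chunking strategies, collision handling and reference counting for deletion, but the core space saving comes from exactly this fingerprint lookup.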

This is a free event for IT professionals; however, I hear space is limited, so learn more and register here.

For those interested in broader IT data center and infrastructure optimization, check out the ongoing seminar series The Infrastructure Optimization and Planning Best Practices (V2.009) – Doing more with less without sacrificing storage, system or network capabilities. The series continues September 22, 2009 with a stop in Chicago. This is also a free seminar; register and learn more here or here.

Ok, nuff said.

Cheers gs


Storage Optimization: Performance, Availability, Capacity, Effectiveness

Storage I/O trends

With the IT and storage industry shying away from green hype, green washing and other green noise, there is also a growing realization that the new green is about effectively boosting efficiency to improve productivity and profitability or to sustain business and IT growth during tough economic times.

This past week I did some presentations (I'll post a link soon to the downloads) at the 2008 San Francisco installment of the Storage Decisions event focused on storage professionals, as well as a keynote talk at the storage strategies event for value added reseller (VAR) channel professionals. A common theme was boosting productivity, improving efficiency, stretching budgets and enabling existing personnel and resources to do more with the same or less.

During these and other presentations, keynotes, sessions and seminars, both here in the U.S. as well as recently in Europe, a common theme has been boosting efficiency along with closing the green gap. That is, the gap between industry and marketing rhetoric around green hype, green noise and green washing, covering issues that either do not resonate with or cannot be funded by IT organizations, versus where many IT organizations' real issues exist: power, cooling, floor space or footprint, EH&S (environmental health and safety) and economics.

The green gap (here, and here, and here) is that, due to green hype around carbon footprints and related themes, many IT organizations around the world have not realized that boosting energy efficiency for active and on-line applications, data and workloads (e.g. doing more I/O operations per second (IOPS), transactions, files or messages processed per watt of energy) to address power, cooling and floor space is in fact a form of addressing green issues, both economic and environmental.

Likewise for inactive or idle data there is a bit more of a linkage, in that green can mean powering things off. However there is also a disconnect in that many perceive that green storage, for example, is only green if the storage can be powered off, which while true for inactive or idle data and applications, is not true for all data and application types.

As mentioned already, for active workloads green means doing more with the same or less power, cooling and floor space impact; that is, doing more work per unit of energy. In that theme, for active workloads a slow, large capacity disk may in fact not be energy efficient if it impedes productivity and results in more energy being used to get the same amount of work done. For example, larger capacity SATA disk drives are often positioned as being the most green or energy efficient, which can be true for idle, inactive or non performance (time) sensitive applications where more data is stored in a denser footprint.

However for active workload, lower capacity 15.5K RPM 300GB and 400GB Fibre Channel (FC) and SAS disk drives that deliver more IOPS or bandwidth per watt of energy can get more work done in the same amount of time.

There is also a perception that FC and SAS disk drives use more power than SATA disk drives which in some cases can be true, however current generations of high performance 10K RPM and 15.5K RPM drives have very similar power draw on a raw spindle or device basis. What differs is the amount of capacity per watt for idle or inactive applications, or, the number of IOPS or amount of performance for active configurations.

On the other hand, solutions not normally perceived as being green compared to tape or intelligent power management (IPM) and MAID (first generation and MAID 2.0), such as SSD (flash and RAM), not to mention fast SAS and FC disks or tiered storage systems, are in fact green and energy efficient for getting work done when they deliver more IOPS or bandwidth per watt of energy. Thus, there are two sides to optimizing storage for energy efficiency: optimizing for when doing work (e.g. more miles per gallon per amount of work done), and how little energy is used when not doing work.
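
A back of the envelope sketch makes the two metrics concrete. The drive figures below are illustrative assumptions for the comparison, not measured vendor specifications:

```python
# Sketch of the work-per-watt point: two drives with similar power draw can
# differ sharply in IOPS per watt (active work) vs GB per watt (idle data).
# All drive figures are illustrative assumptions, not vendor specifications.

def iops_per_watt(iops, watts):
    return iops / watts       # active-workload efficiency

def gb_per_watt(capacity_gb, watts):
    return capacity_gb / watts  # idle-capacity efficiency

fc_15k = {"iops": 180, "capacity_gb": 300, "watts": 15}   # fast FC/SAS drive
sata7k = {"iops": 80, "capacity_gb": 1000, "watts": 12}   # capacity SATA drive

print("FC 15K IOPS/watt:", iops_per_watt(fc_15k["iops"], fc_15k["watts"]))
print("SATA   IOPS/watt:", iops_per_watt(sata7k["iops"], sata7k["watts"]))
print("FC 15K GB/watt:  ", gb_per_watt(fc_15k["capacity_gb"], fc_15k["watts"]))
print("SATA   GB/watt:  ", gb_per_watt(sata7k["capacity_gb"], sata7k["watts"]))
```

With these assumed numbers the fast drive wins on IOPS per watt while the SATA drive wins on GB per watt, which is exactly why the "greenest" choice depends on whether the data is active or idle.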

Thus, a new form of being green to sustain business growth while boosting productivity is Gaining Realistic Economic Efficiency Now, which as a by-product helps both business bottom lines and the environment by doing more with less. These are themes addressed in my new book "The Green and Virtual Data Center" (Auerbach), which will be formally launched and released for general availability just after the 1st of the year (hopefully sooner); however you can beat the rush and order your copy now at Amazon and other fine venues around the world.

Ok, nuff said.

Cheers gs


HP Storage Virtualization Services Platform (SVSP)

Storage I/O trends

HP recently announced their new SAN Virtualization Services Platform (SVSP), which is an appliance with software (oh, excuse me, I mean platform) for enabling various storage virtualization capabilities (e.g. replication, snapshots, pooling, consolidation, migration, etc.) across different HP (e.g. MSA, EVA and in "theory" XP) or, in "theory" as well, 3rd party (e.g. EMC, Dell, HDS, IBM, NetApp, Sun, etc.) storage.

Sure, HP has had a similar capability via their XP series, which HP OEMs from Hitachi Ltd. (who also supplies the similar/same product to HDS, with whom HP competes). However, what's different between the XP based solution and the SVSP is that one (SVSP) is software running on an appliance while the other is implemented via software/firmware on dedicated Hitachi based hardware (e.g. the XP). One requires an investment in the XP, which for larger organizations may be practical, while the other enables smaller organizations to achieve the benefits of virtualization capabilities for efficient IT, not to mention helping transition from different generations of HP MSAs and EVAs to newer versions, or even to XPs. Other benefits of solutions like the HP SVSP, which also include the IBM SAN Volume Controller (SVC), include cross storage system or cross storage vendor based replication, snapshots and dynamic (e.g. thin) provisioning among other capabilities for block based storage access.

While there will be comparisons of the HP SVSP to the XP, those in many ways will be apples to oranges; the more applicable apples to apples comparison would be IBM SVC to HP SVSP, or perhaps HP SVSP to EMC Invista, Fujitsu VS900, Incipient, FalconStor or DataCore based solutions.

With the HP SVSP announcement, I'm suspecting that we will see the re-emergence of the storage virtualization in-band vs. out-of-band debate, including the fast-path control-path (aka split path) approach being adopted by HP with the SVSP, not to mention hardware vs. software and appliance based approaches, as was the case a few years ago.

This time around, as the storage virtualization discussions heat up again, we should see and hear the usual points and counterpoints along with continued talk around consolidation and driving up utilization to save money and avoid costs. However, as part of enabling and transforming into an efficient IT organization (e.g. a Green and Virtual Data Center) that embodies efficiency and productivity in an economical and environmentally friendly manner, virtualization discussions will also re-focus on using management transparency to enable data movement or migration for load balancing, maintenance, upgrades and technology replacement, BC/DR and other common functions, enabling more work to be done in the same or less amount of time while supporting more data and storage processing and retention needs.

Thus, similar to servers, where not all servers have been, will be or can be consolidated but most can be virtualized for management transparency, BC/DR and migration, the same holds true for storage: not all storage can be consolidated for different quality of service reasons, however most storage can be virtualized to assist with and facilitate common management functions.

Here are some additional resources to learn more about the many faces of storage virtualization and related topics and trends:

Storage Virtualization: Myths, Realities and Other Considerations
Storage virtualization: How to deploy it
The Semantics of Storage Virtualization
Storage Virtualization: It’s More Common Than You Think
Choosing a storage virtualization approach
Switch-level storage virtualization: Special report
Resilient Storage Networks (Elsevier)
The Green and Virtual Data Center (Auerbach)

Cheers – gs


Tape Talk – Changing Role of Tape

Storage I/O trends

Here’s a link to a new article over at Enterprise Storage Forum titled “The Changing Role of Tape” for those of you who still use or care to admit to using magnetic tape as part of your data protection (backup, BC, DR) and data preservation (e.g. archiving and compliance) strategies.

Disk based solutions continue to grow in adoption for data protection; however tape remains relevant, taking on different roles, similar to how disk drives are taking on different roles as flash and RAM based SSDs continue to evolve and grow in customer deployment and adoption. Consequently, despite the continued hype that tape is dead, the reality is that tape remains one of, if not the, most energy-efficient or green storage mediums for inactive, off-line data on a given footprint and cost basis.

Tape is still being used in many environments, particularly larger environments, with the focus shifting towards supporting ultra-dense large full backups that have been copied from disk based backups, as well as archives.

Disk based data protection, particularly with virtual tape libraries (VTLs) that combine data footprint reduction techniques such as compression and de-dupe with replication and migration-to-tape capabilities, continues to gain in popularity as a convenient way to move from tape based backups to disk based backup while preserving investment in existing people skills, policies, rules and software.

Ok, nuff said.

Cheers gs


Airport Parking, Tiered Storage and Latency

Storage I/O trends

Ok, so what do airport parking, tiered storage and latency have in common? Based on some recent travel experience I will assert that there is a bit in common, or at least an analogy. What got me thinking about this was that recently I could not get a parking spot at the airport's primary parking ramp next to the terminal (either a reasonable walk or short tram ride) which offers quick access to the departure gate.

Granted there is a premium for this ability to park or "store" my vehicle for a few days near the airport terminal; however that premium is offset by the time savings and fewer disruptions, giving me a few extra minutes to get other things done while traveling.

Let me call the normal primary airport parking tier-1 (regardless of what level of the ramp you park on), with tier-0 being valet parking, where you pay a fee that might rival the cost of your airline ticket, yet your car stays in a climate controlled area, gets washed and cleaned, maybe an oil change, and hopefully sits in a more secure environment with even faster access to your departure gate; something for the rich and famous.

Now the primary airport parking has been full lately, not surprising given the cold weather and everyone looking to use up their carbon offset credits to fly somewhere warm, attend business meetings or whatever it is that they are doing.

Budgeting some extra time, a couple of weeks ago I tried one of those off-site airport parking facilities where a bus picks you up in the parking lot and whisks you off to the airport. On return you wait for the bus to pick you up at the airport, ride to the lot, and tour the lot looking at everyone's car as they get dropped off; 30-40 minutes later you are finally at your vehicle, faced with the challenge of how to get out of the parking lot late at night. It is such a budget operation that they have gone to lights-out, automated check-out. That is, put your credit card in the machine and the gate opens; that is, if the credit card reader is not frozen, because it is about zero outside and the machine won't read your card, using up more time. However, heck, I saved a few dollars a day.

On another recent trip, again the main parking ramp was full; at least the airport has a parking or storage resource monitoring (aka airport SRM) tool that you can check ahead of time to see if the ramps are full or not. This time I went to another terminal, parked in the ramp there, walked a mile (it would have been a nice walk had it not been 1 above zero (F) with a 20 mile per hour wind) to the light rail train station, waited ten minutes for the 3 minute train ride to the main terminal, then walked to the tram for the 1-2 minute tram ride to the real terminal and my departure gate. On return the process was reversed, adding what I estimate to be about an hour to the experience; which, if you have the time, is not a bad option and certainly good exercise, even if it was freezing cold.

During this planes, trains and automobiles expedition, it dawned on me that airport parking is a lot like tiered storage: you have different types of parking with different cost points; locality of reference, that is, the latency of how much time it takes to get from your car to your plane; and different levels of protection and security, among others.

I likened the off-airport parking experience to off-line tier-3 tape or MAID, or at best near-line tier-2 storage, in that I saved some money at the cost of lost time and productivity. The parking at the remote airport ramp, involving a train ride and a tram ride, I likened to tier-2 or near-line storage over a very slow network or I/O path, in that the ramp itself was pretty efficient but the transit delays or latency were ugly. However, I did save some money: a couple of bucks, not as much as the off-site option, yet a few less than the primary parking.

Hence I jump back to the primary ramp being the fastest, as tier-1, unless you have someone footing your parking bills and can afford tier-0. It also dawned on me that, like primary or tier-1 storage, regardless of whether it is enterprise class like an EMC DMX, IBM DS8K, Fujitsu or HDS USP, mid-range like an EMC CLARiiON, HP EVA, IBM DS4K, HDS AMS, Dell EqualLogic, 3PAR, Fujitsu or NetApp, or entry-level products from many different vendors, people still pay for premium storage, aka tier-1 storage in a given price band, even if there are cheaper alternatives. However, like the primary airport parking, there are limits on how much primary storage or parking can be supported due to floor space, power, cooling and budget constraints.

With tiered storage the notion is to align different types and classes of storage to various usage and application categories based on service requirements (performance, availability, capacity, energy consumption) balanced with cost or other concerns. For example, there is the high cost yet ultra high performance, ultra low energy consumption and relatively small capacity of tier-0 solid state devices (SSD), using either flash or dynamic random access memory (DRAM), deployed as part of a storage system, as a storage device or as a caching appliance to meet I/O or activity intensive scenarios. Tier-1 is high performance, though not as high performance as tier-0; although given a large enough budget, enough power and cooling ability and no constraints on floor space, you can make an array of traditional disk drives outperform even solid state, with a lot more capacity, at the tradeoff of power, cooling, floor space and of course cost.

For most environments tier-1 storage will be the fastest storage with a reasonable amount of capacity, as tier-1 provides a good balance of performance and capacity per amount of energy consumed for active storage and data. On the other hand, lower cost, higher capacity and slower tier-2 storage also known as near-line or secondary storage is used in some environments for primary storage where performance is not a concern, yet is typically more for non-performance intensive applications.

Again, given enough money, unlimited power, cooling and floor space, not to mention enough enclosures, controllers and management software, you can aggregate a large number of low-cost SATA drives to produce a high level of performance. However, the cost to reach a given activity or performance level (IOPS or bandwidth) that way, particularly where the excess capacity is not needed, would make SSD technology look cheap on an overall cost basis.
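To see why, here is a back-of-the-envelope sketch of that comparison. All of the prices and IOPS figures below are illustrative assumptions (not vendor quotes or benchmark results); plug in your own numbers, but the shape of the math holds: matching SSD random I/O performance with spinning disk takes a lot of spindles.

```python
# Illustrative only: hypothetical prices and IOPS figures, not vendor data.
SSD_IOPS = 20_000   # assumed random read IOPS for one enterprise flash SSD
SSD_COST = 2_000    # assumed cost per SSD (USD)
SATA_IOPS = 80      # assumed random IOPS for one 7.2K RPM SATA HDD
SATA_COST = 150     # assumed cost per SATA HDD (USD)

# How many SATA drives, in aggregate, are needed to match one SSD's IOPS
drives_needed = -(-SSD_IOPS // SATA_IOPS)  # ceiling division
hdd_total_cost = drives_needed * SATA_COST

print(f"SATA drives to match one SSD on IOPS: {drives_needed}")
print(f"Aggregate HDD cost: ${hdd_total_cost:,} vs SSD cost: ${SSD_COST:,}")
print(f"Cost per IOPS: HDD ${SATA_COST / SATA_IOPS:.2f} "
      f"vs SSD ${SSD_COST / SSD_IOPS:.2f}")
```

With these assumed figures you would need 250 SATA drives (before counting enclosures, controllers, power and cooling) to match one SSD on IOPS, which is exactly the scenario where SSD looks cheap on a cost-per-IOPS basis, even while the HDDs win decisively on cost per gigabyte.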

Likewise, replacing all of your disk storage with SSD, particularly in capacity-centric environments, is not practical outside of extreme corner case applications unless you have the disposable income of a small country for your data storage and IT budget.

Another aspect of tiered storage is the common confusion between a class of storage and the class of a storage vendor, or where a product is positioned, for example by price band or target environment such as enterprise, small medium environment, small medium business (SMB), small office or home office (SOHO) or prosumer/consumer.

I often hear discussions along the lines of tier-1 storage being products for the enterprise, tier-2 being for workgroups and tier-3 being for SMB and SOHO. I also hear confusion around tier-1 being block based, tier-2 being NAS and tier-3 being tape. "What we have here is a failure to communicate," in that tiers, categories, classifications, price bands and product positioning and perception get mixed together. Adding to the confusion, there are also different tiers of access, including Fibre Channel and FICON using 8GFC (coming soon to a device near you), 4GFC, 2GFC and even 1GFC, along with 1GbE and 10GbE for iSCSI and/or NAS (NFS and/or CIFS), as well as InfiniBand for block (iSCSI or SRP) and file (NAS), each offering different cost, performance, latency and other attributes to align with various application service and cost requirements.

What this all means is that there is more to tiered storage: there is tiered access, tiered protection, tiered media, and different price bands and categories of vendors and solutions, all to be aligned with applicable usage and service requirements. On the other hand, similar to airport parking, I can choose to skip the airport parking and take a cab to the airport, which would be analogous to shifting your storage needs to a managed service provider. Ultimately it comes down to balancing performance, availability, capacity and energy (PACE) efficiency against the level of service and the needs of a specific environment or application.

Greg Schulz www.storageio.com and www.greendatastorage.com