If March 31st is backup day, don't be fooled by restores on April 1st

With March 31st being World Backup Day, hopefully some will also keep recovery and restoration in mind so as not to be fooled on April 1st.

Lost data

When it comes to protecting data, the threat may not be a headline news disaster such as an earthquake, fire, flood, hurricane or act of man; it can be something as simple as accidentally overwriting a file, not to mention a virus or other more commonplace problems. Depending on whom you ask, some will say that backup or saving data is more important, while others will stand by recovery and restoration as what matters. Without one the other is not practical; they need each other, and both need to be done as well as tested to make sure they work.

Just the other day I needed to restore a file that I accidentally overwrote, and as luck would have it, my bad local copy had also just overwritten my local backup. However, I was able to pull an earlier version from my cloud provider, which gave me a good opportunity to test and try some different things. In the course of testing, I found some things that have since been updated, as well as some things to optimize for the future.

Destroyed data

My opinion is that if not used properly, including ignoring best practices, any form of data storage medium or media, as well as software, can result in or be blamed for data loss. Some people have lost data as a result of using cloud storage services, just as other people have lost data, or access to it, on other storage mediums and solutions. For example, data has been lost on cloud, tape, Hard Disk Drives (HDD), Solid State Devices (SSD), Hybrid HDDs (HHDD), RAID and non-RAID, local and remote, and even optical based storage systems large and small. In some cases there have been errors or problems with the medium or media; in other cases storage systems have lost access to, or lost, data due to hardware, firmware, software, or configuration problems, including human error, among other issues.

Now is the time to start thinking about modernizing data protection, and that means more than simply swapping out media. Data protection modernization over the past several years has focused on treating the symptoms of downstream problems at the target or destination. This has involved swapping out or moving media around and applying data footprint reduction (DFR) techniques downstream to give near-term tactical relief, as has been the case with backup, restore, BC and DR for many years. The focus is starting to expand to addressing the source of the problem, which is an expanding data footprint upstream, using different data footprint reduction tools and techniques. This also means using different metrics, including keeping performance and response time in perspective as part of reduction rates vs. ratios, while leveraging different techniques and tools from the data footprint reduction tool box. In other words, it's time to stop swapping out media like changing tires that keep going flat on a car; find and fix the problem, and change the way (and when) data is protected to cut the impact downstream.
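To make the rates vs. ratios point concrete, here is a minimal sketch with purely hypothetical numbers and product names, showing why a bigger reduction ratio is not automatically the better deal if the reduction rate cannot keep up with the protection window:

```python
def reduction_ratio(original_bytes: float, reduced_bytes: float) -> float:
    """Reduction ratio, e.g. 10.0 means 10:1."""
    return original_bytes / reduced_bytes

def percent_saved(ratio: float) -> float:
    """Capacity saved as a percentage, e.g. 10:1 -> 90 percent."""
    return (1.0 - 1.0 / ratio) * 100.0

def hours_to_protect(data_tb: float, ingest_tb_per_hour: float) -> float:
    """The rate side of the equation: how long the protection window takes."""
    return data_tb / ingest_tb_per_hour

# Hypothetical products: a higher ratio is not automatically better
# if its ingest rate blows the backup window.
for name, ratio, rate in [("product A", 10.0, 2.0), ("product B", 4.0, 8.0)]:
    print(f"{name}: {ratio:.0f}:1 ratio ({percent_saved(ratio):.0f}% saved), "
          f"50 TBytes takes {hours_to_protect(50, rate):.1f} hours")
```

Product A saves more capacity (90% vs. 75%), yet product B finishes the same 50 TByte job in 6.2 hours instead of 25, which is the performance and response time side of the trade-off.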

Here is a link to a free download of chapter 5 (Data Protection: Backup/Restore and Business Continuance / Disaster Recovery) from my new book Cloud and Virtual Data Storage Networking (CRC Press).

Cloud and Virtual Data Storage Networking – Intel Recommended Reading List

Additional related links to read more and sources of information:

Choosing the Right Local/Cloud Hybrid Backup for SMBs
E2E Awareness and insight for IT environments
Poll: What Do You Think of IT Clouds?
Convergence: People, Processes, Policies and Products
What do VARs and Clouds as well as MSPs have in common?
Industry adoption vs. industry deployment, is there a difference?
Cloud conversations: Loss of data access vs. data loss
Clouds and Data Loss: Time for CDP (Commonsense Data Protection)?
Clouds are like Electricity: Don't be scared
Wit and wisdom for BC and DR
Criteria for choosing the right business continuity or disaster recovery consultant
Local and Cloud Hybrid Backup for SMBs
Is cloud disaster recovery appropriate for SMBs?
Laptop data protection: A major headache with many cures
Disaster recovery in the cloud explained
Backup in the cloud: Large enterprises wary, others climbing on board
Cloud and Virtual Data Storage Networking (CRC Press, 2011)
Enterprise Systems Backup and Recovery: A Corporate Insurance Policy

Take a few minutes out of your busy schedule and check to see if your backups and data protection are working, and make sure to test restoration and recovery to avoid an April Fools' type surprise. One last thing: you might want to check out the data storage prayer while you are at it.
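If you want a quick, scriptable sanity check to go with that, here is a minimal sketch (the paths and file names are hypothetical; point them at wherever your backup tool restores to) that verifies a test restore by comparing checksums of the source file and the restored copy:

```python
import hashlib
import pathlib

def sha256_of(path: pathlib.Path) -> str:
    """Hash a file in chunks so large files do not exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical locations: a source file and its freshly restored copy.
source = pathlib.Path("/data/projects/plan.doc")
restored = pathlib.Path("/tmp/restore-test/plan.doc")

if sha256_of(source) == sha256_of(restored):
    print("Restore verified: checksums match")
else:
    print("Restore FAILED verification: investigate before you need it")
```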

Ok, nuff said for now.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Is 14.4TBytes of data storage for $52,503 a good deal? It depends!

A news story about the school board in Marshall, Missouri approving data storage plans, in addition to getting good news on health insurance rates, just came into my inbox.

I do not live in or anywhere near Marshall, Missouri; I live about 420 miles north, in the Stillwater, Minnesota area.

What caught my eye about the story is the dollar amount ($52,503) and capacity amount (14.4TByte) for the new Marshall school district data storage solution, which replaces their old, almost full 4.8TByte system.

That prompted me to wonder whether the school district is getting a really good deal (if so, congratulations), paying too much, or paying about the right amount.
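For a rough frame of reference, here is the back-of-the-envelope math. This is a sketch only: it assumes the 14.4TByte figure is raw capacity and a hypothetical RAID layout, and it ignores everything that actually differentiates solutions (more on that below):

```python
price_usd = 52_503
raw_capacity_tb = 14.4

cost_per_tb = price_usd / raw_capacity_tb
print(f"~${cost_per_tb:,.0f} per raw TByte")  # ~$3,646 per raw TByte

# Usable capacity is lower once protection overhead is taken out; for
# example, a hypothetical RAID 6 (12+2) layout keeps 12/14 of raw capacity:
usable_tb = raw_capacity_tb * (12 / 14)
print(f"~${price_usd / usable_tb:,.0f} per usable TByte")  # ~$4,254
```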

Industry Trends and Perspectives

Not knowing what type of storage system they are getting, it is difficult to know what type of value the Marshall school district is getting with their new solution. For example, what type of performance and availability comes with the capacity? What type of system, and which features: snapshots, replication, data footprint reduction aka DFR capabilities (archive, compression, dedupe, thin provisioning), backup, cloud access, redundancy for availability, application agents or integration, virtualization support, tiering? Is the 14.4TByte total (raw) or usable storage capacity, and does it include two storage systems for replication? What type of drives (SSD, fast SAS HDDs, or high-capacity SAS or SATA HDDs), block (iSCSI, SAS or FC), NAS (CIFS and NFS) or unified access, and what management software and reporting tools, not to mention service and warranty?

Sure, there are less expensive solutions that might work; however, since I do not know what their needs and wants are, saying they paid too much would not be responsible. Likewise, not knowing their needs vs. wants, requirements, growth and application concerns, and given that there are solutions that cost a lot more with extensive capabilities, saying that they got the deal of the century would also not be fair. Maybe somewhere down the road we will hear some vendor and VAR make a press release announcement about their win in taking out a competitor from the Marshall school district, or perhaps that they upgraded a system they previously sold, so we can all learn more.

With school districts across the country trying to stretch their budgets further while supporting growth, it would be interesting to hear more about what type of value the Marshall school district is getting from their new storage solution. Likewise, it would be interesting to hear what alternatives they looked at that were more expensive, as well as cheaper ones with less functionality. I'm guessing some of the cloud crowd cheerleaders will also want to know why the school district is going the route they are vs. going to the cloud.

IMHO value is not the same thing as less or lower cost or cheaper; instead, it's the benefit derived vs. what you pay. This means that something might cost more than something cheaper; however, if I get more benefit from what might be more expensive, then it has more value.

Industry Trends and Perspectives

If you are a school district of similar size, what criteria or requirements would you want as opposed to need, and then what would you do or have you done?

What if you are a commercial or SMB environment? Again, not knowing the feature, functionality and benefit being obtained, what requirements would you have, separating want to have (e.g. nice to have) from must have (e.g. what you are willing to pay more for), and what would you do or have done?

How about if you were a cloud or managed service provider (MSP), or a VAR representing one of the many services: what would your pitch and approach be beyond simply competing on a cost per TByte basis?

Or if you are a vendor or VAR facing a similar opportunity, again not knowing the requirements, what would you recommend a school district or SMB environment do, why, and how would you cost-justify it?

What this all means to me is the importance of looking beyond lowest cost or cost per capacity (e.g. cost per GByte or TByte), and also factoring in value: the feature, functionality and benefit obtained.

Ok, nuff said for now, I need to get my homework assignments done.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Why SSD based arrays and storage appliances can be a good idea (Part II)

This is the second of a two-part post about why storage arrays and appliances with SSD drives can be a good idea, here is link to the first post.

So again, why would putting drive form factor SSDs into existing storage systems, arrays and appliances be a bad idea?

Benefits of SSD drives in storage systems, arrays and appliances:

  • Familiarity with customers who buy and use these devices
  • Reduces time to market, enabling customers to innovate via deployment
  • Establishes comfort and confidence with SSD technology for customers
  • Investment protection of currently installed technology (hardware and software)
  • Interoperability with existing interfaces, infrastructure, tools and policies
  • Reliability, availability and serviceability (RAS), depending on vendor implementation
  • Features and functionality (replication, snapshots, policy, tiering, application integration)
  • Known entity in terms of hardware, software, firmware and microcode (good or bad)
  • Shares SSD technology across more servers or accessing applications
  • Good performance, assuming no controller, hardware or software bottlenecks
  • Wear leveling and other SSD flash management, if implemented
  • Can end performance bottlenecks if the backend (drives) is the problem
  • Can coexist with or be complemented by server-based SSD caching

Note that the mere presence of SSD drives in a storage system, array or appliance does not guarantee the above items, nor that they are enabled to their full potential. Different vendors and products implement SSD drive support to various degrees of extensibility, so look beyond the feature checkbox. Dig in and understand how extensive and robust the SSD implementation is to meet your specific requirements.

Caveats of SSD drives in storage systems, arrays and appliances:

  • May not use the full performance potential of nand flash SLC technology
  • Latency can be an issue for those who need extreme speed or performance
  • May not be the most innovative or newest technology on the block
  • Fun for startup vendors, marketers and their fans to poke fun at
  • Not all vendors add value or optimize for endurance of drive form factor SSDs
  • May be perceived as legacy or mature systems rather than technologically advanced

Note that different vendors will have various performance characteristics: some good for IOPS, others for bandwidth or throughput, and others for latency or capacity. Look at different products to see how they vary in meeting your particular needs.

Cost comparisons are tricky. SSDs in HDD form factors certainly cost more than raw flash dies; however, PCIe cards and FTL (flash translation layer) controllers also cost more than flash chips by themselves. In other words, apples to apples comparisons are needed. In the future, ideally the baseboard or motherboard vendors will revise the layout to support nand flash (or its replacement) with DRAM DIMM type modules, along with the associated FTL and BIOS to handle the flash program/erase (P/E) cycles and wear leveling management, something that DRAM does not have to contend with. While that provides great location or locality of reference (figure 1), it is also a more complex approach that takes time and industry cooperation.

Locality of reference for memory and storage
Figure 1: Locality of reference for memory and storage
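Since P/E cycles and wear leveling keep coming up in these discussions, here is a minimal toy sketch of the idea: a greedy least-worn allocator, not any vendor's actual FTL. The point is simply that the FTL steers each program/erase cycle to the block with the lowest erase count so that no single block wears out early:

```python
from dataclasses import dataclass, field

@dataclass
class ToyFTL:
    """Toy flash translation layer tracking per-block erase counts."""
    num_blocks: int
    erase_counts: list = field(default_factory=list)

    def __post_init__(self):
        self.erase_counts = [0] * self.num_blocks

    def pick_block_for_write(self) -> int:
        """Steer the next program/erase (P/E) cycle to the least-worn block."""
        block = min(range(self.num_blocks), key=lambda b: self.erase_counts[b])
        self.erase_counts[block] += 1
        return block

ftl = ToyFTL(num_blocks=8)
for _ in range(100):
    ftl.pick_block_for_write()
print(ftl.erase_counts)  # wear spread evenly: [13, 13, 13, 13, 12, 12, 12, 12]
```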

Certainly, for best performance, just as in realty, location matters, and thus locality of reference comes into play. That is, put the data as close to the server as possible; however, when sharing is needed, a different approach or a companion technique is required.

Here are some general thoughts about SSD:

  • Some customers and organizations get the value and role of SSD
  • Some see where SSD can replace HDD, others see where it complements
  • Yet others are seeing the potential, however are moving cautiously
  • For many environments better than current performance is good enough
  • Environments with the need for speed need every bit of performance they can get
  • Storage systems and arrays or appliances continue to evolve including the media they use
  • Simply by looking at how some storage arrays, systems and appliances have evolved, you can get an idea of how they might look in the future, which could include not only SAS as a backend or target, but also PCIe. After all, it was not that long ago that backend drive connections went from proprietary to open parallel SCSI or SSA, to Fibre Channel loop (or switched), to SAS.
  • Engineers and marketers tend to gravitate to newer products and nand technology, which is good, as we need continued innovation on that front.
  • Customers and business people tend to gravitate towards deriving greatest value out of what is there for as long as possible.
  • Of course, both of the latter two points are not always the case and can be flip flopped.
  • Ultrahigh end environments and corner case applications will continue to push the limits and are target markets for some of the newer products and vendors.
  • Likewise, enterprise, mid market and other mainstream environments (outside of their corner case scenarios) will continue to push known technology to its limits as long as they can derive some business benefit value.

While not perfect, SSDs in an HDD form factor with a SAS or SATA interface, properly integrated by vendors into storage systems (or arrays or appliances), are a good fit for many environments today. Likewise, for some environments, new from the ground up SSD based solutions that leverage flash DIMMs, daughter cards or PCIe flash cards are a fit. So too are PCIe flash cards, either as a target, or as cache to complement storage systems (arrays and appliances). Certainly, drive slots in arrays take up space for SSD; however, so too does occupying PCIe slots, particularly in high density servers that require every available socket and slot for compute and DRAM memory. Thus, there are pros and cons, features and benefits of various approaches, and which is best will depend on your needs and perhaps preferences, which may or may not be binary.

I agree that for some applications and solutions, non drive form factor SSDs make sense, while in others, compatibility has its benefits. Yet in other situations nand flash such as SLC, tightly integrated with HDD and DRAM such as in my Momentus XT HHDD, is good for laptops, however probably not a good fit for the enterprise yet. Thus, SSD options and placements are not binary; of course, sometimes opinions and perspectives will be.

For some situations PCIe-based cards in servers or appliances make sense, either as a target or as cache. Likewise, for other scenarios drive format SSDs make sense in servers and storage systems, appliances, arrays or other solutions. Thus, while all of those approaches are used for storing binary digital data, the solutions of what to use when and where often will not be binary; that is, unless your approach is to use one tool or technique for everything.

Here are some related links to learn more about SSD, where and when to use what:
Why SSD based arrays and storage appliances can be a good idea (Part I)
IT and storage economics 101, supply and demand
Researchers and marketers don't agree on future of nand flash SSD
Speaking of speeding up business with SSD storage
EMC VFCache respinning SSD and intelligent caching (Part I)
EMC VFCache respinning SSD and intelligent caching (Part II)
SSD options for Virtual (and Physical) Environments: Part I Spinning up to speed on SSD
SSD options for Virtual (and Physical) Environments, Part II: The call to duty, SSD endurance
SSD options for Virtual (and Physical) Environments Part III: What type of SSD is best for you?

Ok, nuff said for now.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Why SSD based arrays and storage appliances can be a good idea (Part I)

This is the first of a two-part series, you can read part II here.

Robin Harris (aka @storagemojo) recently asked a question in a blog post, arguing that solid state devices (SSDs) using SAS or SATA interfaces in traditional hard disk drive (HDD) form factors are a bad idea in storage arrays (e.g. storage systems or appliances). My opinion is that, as with many things about storing, processing or moving binary digital data (e.g. 1s and 0s), the answer is not always clear. That is, there may not be a right or wrong answer; instead it depends on the situation, use or perhaps abuse scenario. For some applications or vendors, adding SSDs packaged in HDD form factors to existing storage systems, arrays and appliances makes perfect sense; likewise for others it does not, thus it depends (more on that in a bit). While we are talking about SSD, Ed Haletky (aka @texiwill) recently asked a related question, Fix the App or Add Hardware, which could easily be morphed into a discussion of Fix the SSD or Add Hardware. Hmmm, maybe a future post idea exists there.

Let's take a step back for a moment and look at the bigger picture of what prompts the question of what type of SSD to use where and when, along with why various vendors want you to look at things a particular way. There are many options for using SSD packaged in various ways to meet diverse needs, including here and here (see figure 1).

Various SSD packaging options
Figure 1: Various packaging and deployment options for SSD

The growing number of startup and established vendors with SSD enabled storage solutions vying to win your hearts, minds and budget is looking like the annual NCAA basketball tournament (aka March Madness, and march metrics here and here). Some of the vendors have added or are adding SSDs with SAS or SATA interfaces that plug into existing enclosures (drive slots). These SSDs have the same form factor as 2.5 inch small form factor (SFF) or 3.5 inch HDDs, with a SAS or SATA interface for physical and connectivity interoperability. Other vendors have added PCIe based SSD cards to their storage systems or appliances as a cache (read, or read and write) or a target device, similar to how these cards are installed in servers.

Simply adding SSD, either in a drive form factor or as a PCIe card, to a storage system or appliance is only part of a solution. Sure, the hardware should be faster than a traditional spinning HDD based solution. However, what differentiates the various approaches and solutions is what is done with the storage system's or appliance's software (aka operating system, storage applications, management, firmware or microcode).

So are SSD based storage systems, arrays and appliances a bad idea?

If you are a startup or established vendor able to start from scratch with a clean sheet design, not having to worry about interoperability and customer investment protection (technology, people skills, software tools, etc.), then you would want to do something different. For example, leverage off the shelf components such as a PCIe flash SSD card in an industry standard server combined with your software for a solution. You could also use extra DRAM memory in those servers combined with PCIe flash SSD cards, perhaps even with embedded HDDs as a backing or preservation medium.

Other approaches might use a mix of DRAM and PCIe flash cards, as either a cache or target, combined with some drive form factor SSDs. In other words, there is no right or wrong approach; sure, there are different technical merits that have advantages for various applications or environments. Likewise, people have preferences, particularly the technology focused, who tend to like one approach vs. another. Thus, we have many options to leverage, use or abuse.

In his post, Robin asks a good question: if nand flash SSD were being put into a new storage system, why not use the PCIe backplane vs. using nand flash on DIMMs vs. using drive formats, all of which are different packaging options (Figure 1)? Some startups have gone the all backplane approach, some have gone with the drive form factor, some have gone with a mix, and some even use HDDs in the background. Likewise some traditional storage system and array vendors who support a mix of SSD and HDD drive form factor devices also leverage PCIe cards, either as a server-based cache (e.g. EMC VFCache) or installed as a performance accelerator module (e.g. NetApp PAM) in their appliances.

While most vendors who put SSD drive form factor drives into their storage systems or appliances (or servers for that matter) use them as data targets for creating LUNs or file systems, others use them for internal functionality. By internal functionality I mean that instead of the SSD appearing as another drive or target, it is used exclusively by the storage system or appliance for caching or similar purposes. On storage systems, this can be to increase the size of persistent cache, such as EMC does on the CLARiiON and VNX (e.g. FAST Cache). Another use is on backup or dedupe target appliances, where SSDs store dictionary, index or metadata repositories as opposed to being a general data pool.
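To illustrate that last use case, here is a toy sketch (not any vendor's implementation) of why a dedupe index benefits from low latency media: every incoming chunk triggers a random fingerprint lookup, so the index wants to live on SSD while the bulk chunk store can stay on slower, cheaper media:

```python
import hashlib

index = {}   # chunk fingerprint -> chunk location (the part kept on SSD)
store = []   # deduplicated chunk store (the bulk pool, HDD in practice)

def ingest(chunk: bytes) -> int:
    """Store a chunk once; duplicates resolve to the existing location."""
    fp = hashlib.sha1(chunk).hexdigest()
    if fp in index:               # duplicate: index hit, no data written
        return index[fp]
    store.append(chunk)           # new chunk: write once, remember location
    index[fp] = len(store) - 1
    return index[fp]

print(ingest(b"hello"), ingest(b"world"), ingest(b"hello"))  # 0 1 0
```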

Part two of this post looks at the benefits and caveats of SSD in storage arrays.

Here are some related links to learn more about SSD, where and when to use what:
Why SSD based arrays and storage appliances can be a good idea (Part II)
IT and storage economics 101, supply and demand
Researchers and marketers don’t agree on future of nand flash SSD
Speaking of speeding up business with SSD storage
EMC VFCache respinning SSD and intelligent caching (Part I)
EMC VFCache respinning SSD and intelligent caching (Part II)
SSD options for Virtual (and Physical) Environments: Part I Spinning up to speed on SSD
SSD options for Virtual (and Physical) Environments, Part II: The call to duty, SSD endurance
SSD options for Virtual (and Physical) Environments Part III: What type of SSD is best for you?

Ok, nuff said for now, check part II.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

StorageIO books by Greg Schulz added to Intel Recommended Reading Lists

My two most recent books, The Green and Virtual Data Center and Cloud and Virtual Data Storage Networking, both published by CRC Press/Taylor and Francis, have been added to the Intel Recommended Reading List for Developers.

Intel Recommended Reading

If you are not familiar with the Intel Recommended Reading List for Developers, it is a leading comprehensive list of different books across various technology domains covering hardware, software, servers, storage, networking, facilities, management, development and more.

Cloud and Virtual Data Storage Networking – Intel Recommended Reading List

So what are you waiting for? Check out the Intel Recommended Reading List for Developers, where you can find a diverse lineup of different books, of which I'm honored to have two of mine join the esteemed list. Here is a link to a free chapter download from Cloud and Virtual Data Storage Networking.

Ok, nuff said for now.

cheers
gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Researchers and marketers don't agree on future of nand flash SSD

Marketers, particularly those involved with anything resembling Solid State Devices (SSD), will tell you SSD is the future, as will some researchers along with their fans and pundits. Some will tell you that the future only has room for SSD, with the current flavor du jour being nand flash (both Single Level Cell aka SLC and Multi Level Cell aka MLC), and that any other form of storage medium (e.g. Hard Disk Drives or HDD and tape summit resources) is dead and you should avoid wasting your money on it.

Of course others, and their fans or supporters who do not have an SSD play or product, will tell you to forget about them, that they are not ready yet.

Then there are those who take no sides per se, simply providing comments and perspectives along with things to be considered, which also get used by others to spin stories for or against.

For the record, I have been a fan and user of various forms of SSD, along with other variations of tiered storage mediums, using them where they fit best for several decades as a customer in IT, as a vendor, and as an analyst and advisory consultant. Thus my perspective and opinion is that SSDs do in fact have a very bright future. However, I also believe that other storage mediums are not dead yet, although their roles are evolving while their technologies continue to be developed. In other words, use the right technology and tool, packaged and deployed in the best, most effective way for the task at hand.

Memory and tiered storage hierarchy

Consequently, while some SSD vendors, their fans, supporters, pundits and others might be put off by some recent UCSD research that does not paint SSD, and in particular nand flash, in the best long-term light, it caught my attention and here is why. First, I have already seen in different venues where some are using the research as a tool, club or weapon against SSD and in particular nand flash, which should be no surprise. Second, I have also seen those who do not agree with the research at best dismiss the findings. Others are using it as a conversation or topic piece for their columns or other venues such as here.

The reason the UCSD research caught my eye was that it appeared to be looking at how nand SSD technology will evolve from where it is today to where it will be in ten years or so.

While ten years may seem like a long time, just look back at how fast things evolved over the past decade. Granted, the UCSD research is open to discussion, debate and dismissal, as is clear in the comments of this article here. However, the research does give a counterpoint or perspective to some of the hype, which can mean that somewhere between the two extremes exists reality and where things are headed, or at least need to be discussed. While I do not agree with all the observations or opinions of the research, it does give stimulus for discussing things, including best practices around deployment vs. simply talking about adoption.

It has taken many decades for people to become comfortable or familiar with the pros and cons of HDD or tape for that matter.

Likewise some are familiar (good or bad) with DRAM-based SSDs of earlier generations. On the other hand, while many people use various forms of nand flash SSD, ranging from what is inside their cell phone or SD cards for cameras to USB thumb drives to SSDs as drives, on PCIe cards or in storage systems and appliances, there is still an evolving comfort and confidence level for business and enterprise storage use. Some have embraced, some have dismissed, and many if not most are intrigued, wanting to know more, using nand flash SSD in some shape or form while gaining confidence.

Part of gaining confidence is moving beyond the industry hype, looking at and understanding the pros and cons and how to leverage or work around the constraints. A long time ago a wise person told me that it is better to know the good, bad and ugly about a product, service or technology, so that you can leverage the best while configuring, planning and managing around the bad to avoid or minimize the ugly. Based on that philosophy, I find many IT customers and even some VARs and vendors wanting to know the good, the bad and the ugly, not to hang a vendor or their technology and products out to dry, rather so that they can be comfortable in knowing when, where, why and how to use them most effectively.

Industry Trends and Perspectives

Granted, getting some of the not-so-good information may require an NDA (Non Disclosure Agreement) or other confidential discussions; after all, what vendor or solution provider wants to let anything less than favorable out into the blogosphere, twittersphere, googleplus, tabloids, news sphere or other competitive landscape venues.

Ok, let's bring this back to the UCSD research report titled The Bleak Future of NAND Flash Memory.

UCSD research report: The Bleak Future of NAND Flash Memory
Click here or on the above image to read the UCSD research report

I’m not concerned that the UCSD research was less than favorable, as some others might be; after all, it is looking out into the future and, if a concern, provides a glimpse of what to keep an eye on.

Likewise, looking back, the research report could be taken as simply a barometer of what could happen if no improvements or new technologies evolve. For example, the HDD would have hit the proverbial brick wall, also known as the superparamagnetic barrier, many years ago if new recording methods and materials had not been deployed, including a shift to perpendicular recording, something that was recently added to tape.

Tomorrow's SSDs and storage mediums will still be based on nand flash, including SLC, MLC and eMLC along with other variants, not to mention phase change memory (PCM) and other possible contenders.

Today's SSDs have shifted from being DRAM based, with HDD or even flash-based persistent backing storage, to nand flash-based, both SLC and MLC, with enhanced or enterprise MLC appearing. Likewise the density of SSDs continues to increase, meaning more data packed into the same die or footprint, and more dies stacked in a chip package to boost capacity while decreasing cost. However, what is also happening behind the scenes is a big differentiator with SSDs, and that is the quality of the firmware and low-level page management at the flash translation layer (FTL). Hence the saying that anybody with a soldering iron and the ability to pull together off the shelf FTLs and packaging can create some form of an SSD. How effective a product will be is based on the intelligence and robustness of the combination of the dies, FTL, controller and associated firmware and device drivers, along with other packaging options, plus the testing, validation and verification they undergo.
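For a sense of what the P/E cycle and endurance discussion boils down to, here is a back-of-the-envelope sketch; every number in it is hypothetical, and real devices vary widely with write amplification, over-provisioning and workload:

```python
# Hypothetical endurance estimate; real devices and workloads vary widely.
capacity_gb = 200
pe_cycles = 30_000          # e.g. a hypothetical SLC rating; MLC is far lower
write_amplification = 2.0   # extra internal writes the FTL performs per host write
host_writes_gb_per_day = 500

total_writable_gb = capacity_gb * pe_cycles / write_amplification
lifetime_days = total_writable_gb / host_writes_gb_per_day
print(f"~{lifetime_days / 365:.0f} years at {host_writes_gb_per_day}GB/day")
```

With these made-up numbers the device lasts around 16 years; swap in an MLC-class P/E rating or a heavier write workload and the estimate shrinks dramatically, which is exactly why the FTL and wear management quality matters.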

Various packaging options and where SSD can be deployed
Various SSD locations, types, packaging and usage scenario options

Good SSD vendors and solution providers, I believe, will be able to discuss your concerns around endurance, duty cycles, data integrity and other related topics to establish confidence around current and future issues; granted, you may have to go under NDA to gain that insight. On the other hand, those who feel threatened, or are not able or interested in addressing or demonstrating confidence for the long haul, will be more likely to dismiss studies, research, reports, opinions or discussions that dig deeper into creating confidence via understanding of how things work, so that customers can more fully leverage those technologies.

Some will view and use reports such as the one from UCSD as a club or weapon against SSD, and in particular against nand flash, to help their cause or campaign, while others will use it to stimulate controversy and page hit views. My reason for bringing up the topic and discussion is to stimulate thinking and help increase awareness of and confidence in technologies such as SSD, near and long-term. Regardless of whether your view is that SSD will replace HDD, or that they will continue to coexist as tiered storage mediums into the future, gaining confidence in the technologies, along with when, where and how to use them, are important steps in shifting from industry adoption to customer deployment.

What say you?

Is SSD the best thing, and are you dumb or foolish if you do not embrace it totally (the fan or pundit cheerleader view)?

Or is SSD great when and where used in the right place so embrace it?

How will SSD continue to evolve including nand and other types of memories?

Are you comfortable with SSD as a long term data storage medium, or for today, is it simply a good way to address performance bottlenecks?

On the other hand, is SSD interesting, however you are not comfortable with or do not have confidence in the technology, yet you want to learn more; in other words, a skeptic's view?

Or perhaps the true cynic view, which is that SSDs are nothing but the latest buzzword bandwagon fad technology?

Ok, nuff said for now, other than here is some extra related SSD material:
SSD options for Virtual (and Physical) Environments: Part I Spinning up to speed on SSD
SSD options for Virtual (and Physical) Environments, Part II: The call to duty, SSD endurance
Part I: EMC VFCache respinning SSD and intelligent caching
Part II: EMC VFCache respinning SSD and intelligent caching
IT and storage economics 101, supply and demand
2012 industry trends perspectives and commentary (predictions)
Speaking of speeding up business with SSD storage
New Seagate Momentus XT Hybrid drive (SSD and HDD)
Are Hard Disk Drives (HDDs) getting too big?
Industry adoption vs. industry deployment, is there a difference?
Data Center I/O Bottlenecks Performance Issues and Impacts
EMC VPLEX: Virtual Storage Redefined or Respun?
EMC interoperability support matrix

Cheers
gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

EMC VFCache respinning SSD and intelligent caching (Part II)

This is the second of a two part series pertaining to EMC VFCache, you can read the first part here.

In this part of the series, let's look at some common questions along with comments and perspectives.

Common questions, answers, comments and perspectives:

Why would EMC not just go into the same market space and mode as FusionIO, a model that many other vendors seem eager to follow? IMHO many vendors are following or chasing FusionIO, thus most are selling in the same way, perhaps to the same customers. Some of those vendors could very easily, if they are not already, make a quick change to their playbook, adding some new moves to reach a broader audience.

Another smart move here is that by taking a companion or complementary approach, EMC can continue selling existing storage systems to customers, keeping those investments, while also supporting competitors' products. In addition, for those customers who are slow to adopt SSD based techniques, this is a relatively easy and low risk way to gain confidence. Granted, the disk drive was declared dead several years (and yes, also several decades) ago; however, it is and will stay alive for many years, due in part to SSD helping to close the IO storage and performance gap.

Storage IO performance and capacity gap
Data center and storage IO performance capacity gap (Courtesy of Cloud and Virtual Data Storage Networking (CRC Press))

Has this been done before? There have been other vendors who have done LUN caching appliances in the past, going back over a decade. Likewise there are PCIe RAID cards that support flash SSD as well as DRAM based caching. Even NetApp has had similar products and functionality with their PAM cards.

Does VFCache work with other PCIe SSD cards such as FusionIO? No. VFCache is a combination of software IO intercept and intelligent cache driver along with a PCIe SSD flash card (which, as EMC has indicated, could be supplied from different manufacturers). Thus for VFCache to be VFCache requires the EMC IO intercept and intelligent cache software driver.

Does VFCache work with other vendors' storage? Yes. Refer to the EMC support matrix; however, the product has been architected and designed to install into and coexist with a customer's existing environment, which means supporting different EMC block storage systems as well as those from other vendors. Keep in mind that a main theme of VFCache is to complement, coexist with, enhance and protect customers' investments in storage systems, improving their effectiveness and productivity, as opposed to replacing them.

Does VFCache introduce a new point of vendor lockin or stickiness? Some will see or place this as a new form of vendor lockin; others, assuming that EMC supports different vendors' storage systems downstream, offers options for different PCIe flash cards and keeps the solution affordable, will assert it is no more lockin than other solutions. In fact, by supporting third party storage systems as opposed to replacing them, smart sales people and marketeers will place VFCache as being more open and interoperable than some other PCIe flash card vendors' approaches. Keep in mind that avoiding vendor lockin is a shared responsibility (read more here).

Does VFCache work with NAS? VFCache does not work with NAS (NFS or CIFS) attached storage.

Does VFCache work with databases? Yes, VFCache is well suited for little data (e.g. databases) and traditional OLTP or general business application processing that may not be covered or supported by other so-called big data focused or optimized solutions. Refer to this EMC document (and this document here) for more information.

Does VFCache only work with little data? While VFCache is well suited for little data (e.g. databases, SharePoint, file and web servers, traditional business systems), it is also able to work with other forms of unstructured data.

Does VFCache need VMware? No. While VFCache works with VMware vSphere, including a vCenter plug-in, it does not need a hypervisor and is as practical in a physical machine (PM) as it is in a virtual machine (VM).

Does VFCache work with Microsoft Windows? Yes. Refer to the EMC support matrix for specific server operating system and hypervisor version support.

Does VFCache work with other Unix platforms? Refer to the EMC support matrix for specific server operating system and hypervisor version support.

How are reads handled with VFCache? The VFCache software (driver if you prefer) intercepts IO requests to LUNs that are being cached, performing a quick lookup to see if there is a valid cache entry on the physical VFCache PCIe card. If there is a cache hit, the IO is resolved from the closer or local PCIe card cache, making for a lower latency or faster response time IO. In the case of a cache miss, the VFCache driver simply passes the IO request onto the normal SCSI or block (e.g. iSCSI, SAS, FC, FCoE) stack for processing by the downstream storage system (or appliance). Note that when the requested data is retrieved from the storage system, the VFCache driver will, based on caching algorithm determinations, place a copy of the data in the PCIe read cache. Thus the real power of VFCache is the software implementing the cache lookup and cache management functions to leverage the PCIe card that complements the underlying block storage systems.

How are writes handled with VFCache? Unless put into a write cache mode, which is not the default, the VFCache software simply passes the IO operation onto the IO stack for downstream processing by the storage system or appliance attached via a block interface (e.g. iSCSI, SAS, FC, FCoE). Note that as part of the caching algorithms, the VFCache software will make determinations of what to keep in cache based on IO activity requests, similar to how cache management results in better cache effectiveness in a storage system. Given EMC's long history of working with intelligent cache algorithms, one would expect some of that DNA exists or will be leveraged further in future versions of the software. Ironically, this is where other vendors with long cache effectiveness histories, such as IBM, HDS and NetApp among others, should also be scratching their collective heads saying wow, we can or should be doing that as well (or better).
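Putting those two answers together, here is a minimal sketch of the general intercept-and-cache flow described above. The class and method names are hypothetical and this is not EMC's actual driver; it is simply the read-through, write-around pattern in miniature:

```python
class ReadCacheDriver:
    """Toy IO intercept: read-through cache, writes passed downstream."""

    def __init__(self, backend, capacity: int):
        self.backend = backend      # downstream block storage (the LUN)
        self.capacity = capacity    # number of blocks the local cache holds
        self.cache = {}             # lba -> data, stand-in for flash pages

    def read(self, lba: int) -> bytes:
        if lba in self.cache:
            return self.cache[lba]      # cache hit: served locally, low latency
        data = self.backend.read(lba)   # cache miss: normal IO stack path
        self._admit(lba, data)          # populate cache for future reads
        return data

    def write(self, lba: int, data: bytes) -> None:
        self.backend.write(lba, data)   # default mode: pass writes downstream
        if lba in self.cache:
            self.cache[lba] = data      # keep any cached copy from going stale

    def _admit(self, lba: int, data: bytes) -> None:
        if len(self.cache) >= self.capacity:
            self.cache.pop(next(iter(self.cache)))  # naive eviction for brevity
        self.cache[lba] = data
```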

Can VFCache be used as a write cache? Yes. While its default mode is to act as a persistent read cache, complementing server and application buffers in DRAM and enhancing the effectiveness of downstream storage system (or appliance) caches, VFCache can also be configured as a persistent write cache.

Does VFCache include FAST automated tiering between different storage systems? The first version is only a caching tool; however, think about it a bit: where the software sits, what storage systems it can work with, and its ability to learn and understand IO paths and patterns, and you get an idea of where EMC could evolve it, similar to what they have done with RecoverPoint among other tools.

Changing data access patterns and lifecycles
Evolving data access patterns and life cycles (more retention and reads)

Does VFCache mean an all or nothing approach with EMC? While the complete VFCache solution comes from EMC (e.g. PCIe card and software), the solution will work with other block attached storage as well as existing EMC storage systems for investment protection.

Does VFCache support NAS based storage systems? The first release of VFCache only supports block based access; however, the server that VFCache is installed in could certainly be functioning as a general purpose NAS (NFS or CIFS) server (see supported operating systems in the EMC interoperability notes) in addition to being a database or other application server.

Does VFCache require that all LUNs be cached? No, you can select which LUNs are cached and which ones are not.

Does VFCache run in an active / active mode? In the first release it is active / passive; refer to EMC release notes for details.

Can VFCache be installed in multiple physical servers accessing the same shared storage system? Yes, however refer to EMC release notes on details about active / active vs. active / passive configuration rules for ensuring data integrity.

Who else is doing things like this? There are caching appliance vendors, as well as others such as NetApp and IBM who have used SSD flash caching cards in their storage systems or virtualization appliances. However, keep in mind that VFCache places the caching function closer to the application that is accessing the data, thereby improving on the locality of reference (e.g. storage and IO effectiveness).

Does VFCache work with SSD drives installed in EMC or other storage systems? Check the EMC product support matrix for specific tested and certified solutions; however, in general, if the SSD drive is installed in a storage system that is supported as a block LUN (e.g. iSCSI, SAS, FC, FCoE), in theory it should be possible for it to work with VFCache. Emphasis: visit the EMC support matrix.

What type of nand flash SSD memory is EMC using in the PCIe card? The first release of VFCache leverages enterprise class SLC (Single Level Cell) nand flash, which has been used in other EMC products for its endurance and long duty cycle, to minimize or eliminate concerns of wear and tear while meeting read and write performance. EMC has indicated that, as part of an industry trend, they will also leverage MLC along with Enterprise MLC (EMLC) technologies on a go forward basis.

Doesn't nand flash SSD cache wear out? While nand flash SSD can wear out over time due to extensive write use, the VFCache approach mitigates this by being primarily a read cache, reducing the number of program/erase cycles (P/E cycles) that occur with write operations, as well as by initially leveraging longer duty cycle SLC flash. EMC also has several years' experience implementing wear leveling algorithms in their storage system controllers to increase duty cycle and reduce wear on SLC flash, which will carry forward as MLC or Enterprise MLC (EMLC) techniques are leveraged. This differs from vendors who position their SLC or MLC based flash PCIe SSD cards mainly for write operations, which causes more P/E cycles to occur at a faster rate, reducing the duty or useful life of the device.

How much capacity does the VFCache PCIe card contain? The first release supports a 300GB card and EMC has indicated that added capacity and configuration options are in their plans.

Does this mean disks are dead? Contrary to popular industry folklore (or wish), the hard disk drive (HDD) has plenty of life left, part of which has been extended by being complemented by VFCache.

Various options and locations for SSD along with different usage scenarios
Various SSD locations, types, packaging and usage scenario options

Can VFCache work in blade servers? The VFCache software is transparent to blade, rack mount, tower or other types of servers. The hardware part of VFCache is a PCIe card, which means that the blade server or system will need to be able to accommodate a PCIe card to complement the PCIe based mezzanine IO card (e.g. iSCSI, SAS, FC, FCoE) used for accessing storage. What this means is that for blade systems or server vendors such as IBM, who have a PCIe expansion module for their H series blade systems (it consumes a slot normally used by a server blade), PCIe cache cards like those being initially released by IBM could work; however, check the EMC interoperability matrix, as well as your specific blade server vendor, for PCIe expansion capabilities. Given that EMC leverages Cisco UCS for their vBlocks, one would assume that those systems will also see VFCache modules. NetApp partners with Cisco using UCS in their FlexPods, so you can see where that could go as well, along with potential support from other server vendors including Dell, HP, IBM and Oracle among others.

What about benchmarks? EMC has released some technical documents that show performance improvements in Oracle environments, such as this here. Hopefully we will see EMC also release other workloads for different applications, including Microsoft Exchange Solution Reviewed Program (ESRP) along with SPC, similar to what IBM recently did with their systems among others.

How do the first EMC supplied workload simulations compare vs. other PCIe cards? This is tough to gauge, as many SSD solutions, and in particular PCIe cards, are doing apples to oranges comparisons. For example, to generate a high IOPS rating for marketing purposes, most SSD solutions are stress performance tested at 512 bytes, or 1/2 of a KByte, which is only 1/8 of a small 4KByte IO. Note that operating systems such as Windows are moving to a 4KByte page allocation size to align with growing IO sizes, with databases moving from the old average of 4KBytes to 8KBytes and larger. What is important to consider is the average IO size and activity profile (e.g. reads vs. writes, random vs. sequential) for your applications. If your application is doing ultra small 1/2 KByte IOs, or even smaller 64 byte IOs (which should be handled by better application or file system caching in DRAM), then the smaller IO size and record setting examples will apply. However, if your applications are more mainstream or larger, then those smaller IO size tests should be taken with a grain of salt. Also keep latency in mind, in that many target or opportunity applications for VFCache are response time sensitive, or can benefit from the improved productivity they enable.
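Here is the quick arithmetic behind the apples to oranges caution; the headline number is illustrative, not any specific vendor's benchmark:

```python
# The same headline IOPS number means very different bandwidth at
# different IO sizes, so always ask what IO size was actually tested.
def bandwidth_mb_per_sec(iops: float, io_size_bytes: int) -> float:
    return iops * io_size_bytes / 1_000_000

marketing_iops = 100_000
for size in (512, 4_096, 8_192):
    print(f"{marketing_iops:,} IOPS at {size} bytes = "
          f"{bandwidth_mb_per_sec(marketing_iops, size):,.0f} MB/sec")
# 512 bytes -> ~51 MB/sec, 4KBytes -> ~410 MB/sec, 8KBytes -> ~819 MB/sec
```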

What is locality of reference? Locality of reference refers to how close data is to where it is being requested or accessed from. The closer the data is to the application requesting it, the faster the response time, or the quicker the work gets done. For example, in the figure below, L1/L2/L3 onboard processor caches are the fastest, yet smallest, while closest to the application running on the server. At the other extreme, further down the stack, storage becomes larger capacity and lower cost, however lower performing.

Locality of reference data and storage memory

What does cache effectiveness vs. cache utilization mean? Cache utilization is an indicator of how much of the available cache capacity is being used; however, it does not indicate whether the cache is being used well or not. For example, cache could be 100 percent used, yet there could be a low hit rate. Thus cache effectiveness is a gauge of how well the available cache is being used to improve performance, in terms of more work being done (IOPS or bandwidth) or lower latency and response time.
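A quick sketch makes the distinction concrete; the latencies are hypothetical round numbers, not measured values for any product:

```python
# Average access time as a function of hit rate: a 100 percent utilized
# cache with a poor hit rate still leaves most IOs paying backend latency.
def avg_access_time_us(hit_rate: float, cache_us: float, backend_us: float) -> float:
    return hit_rate * cache_us + (1.0 - hit_rate) * backend_us

for hit_rate in (0.2, 0.9):
    t = avg_access_time_us(hit_rate, cache_us=100, backend_us=5_000)
    print(f"hit rate {hit_rate:.0%}: average access ~{t:,.0f} microseconds")
# 20% hit rate -> ~4,020 microseconds; 90% hit rate -> ~590 microseconds
```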

Isn't more cache better? More cache is not automatically better; it is how the cache is being used that matters. This is a message I would be disappointed in HDS not to bring up as a point of messaging (or rebuttal), given their history of emphasizing cache effectiveness vs. size or quantity (Hu, that is a hint btw ;).

What is the performance impact of VFCache on the host server? EMC is saying 5 percent or less CPU consumption, which they claim is several times less than the competition's worst scenario, as well as claiming 512MB to 1GB of DRAM on the server vs. several times that for their competitors. The difference could be expected to come from more offload functioning, including the flash translation layer (FTL), wear leveling and other optimizations being handled by the PCIe card vs. being handled in the server's memory using host server CPU cycles.

How does this compare to what NetApp or IBM does? NetApp, IBM and others have done caching with SSD in their storage systems, or leveraged third party PCIe SSD cards from different vendors installed in servers as a storage target. Some vendors such as LSI have done caching on the PCIe cards themselves (e.g. CacheCade, which in theory has a similar software caching concept to VFCache) to improve performance and effectiveness across JBOD and SAS devices.

What about stale (old or invalid) reads; how does VFCache handle or protect against those? Stale reads are handled via the VFCache management software tool or driver, which leverages caching algorithms to decide what is valid or invalid data.

How much does VFCache cost? Refer to EMC announcement pricing; however, EMC has indicated that they will be competitive with the market (supply and demand).

If a server shuts down or reboots, what happens to the data in the VFCache? Being that the data is in non volatile SLC nand flash memory, information is not lost when the server reboots or loses power in the case of a shutdown; thus it is persistent. While exact details are not known as of this time, it is expected that the VFCache driver and software do some form of cache coherency and validity check to guard against stale reads or discard any other invalid cache entries.

Industry trends and perspectives

What will EMC do with VFCache in the future and on a larger scale, such as an appliance? EMC, via its own internal development and via acquisitions, has demonstrated the ability to use various clustered techniques, such as RapidIO for VMAX nodes and InfiniBand for connecting Isilon nodes. Given an industry trend of several startups using PCIe flash cards installed in a server that then functions as an IO storage system, it seems likely, given EMC's history and experience with different storage systems, caching and interconnects, that they could do something interesting. Perhaps Oracle Exadata III (Exadata I was HP, Exadata II was Sun/Oracle) could be an EMC based appliance (that is pure speculation btw)?

EMC has already shown how it can use SSD drives as a cache extension in VNX and CLARiiON systems (FAST Cache), in addition to as a target or storage tier combined with FAST for tiering. Given their history with caching algorithms, it would not be surprising to see other instantiations of the technology deployed in complementary ways.

Finally, EMC is showing that it can use nand flash SSD in different ways and various packaging forms to apply to diverse applications or customer environments. The companion or complementary approach EMC is currently taking contrasts with some other vendors who are taking an all or nothing, it's all SSD as disk is dead approach. Given the large installed base of disk based systems EMC as well as other vendors have in place, not to mention the investment by those customers, it makes sense to allow those customers the option of when, where and how they can leverage SSD technologies to coexist with and complement their environments. Thus with VFCache, EMC is using SSD as a cache enabler to address the decades old and growing storage IO to capacity performance gap, in a force multiplier model that spreads the cost over more TBytes, PBytes or EBytes while increasing the overall benefit; in other words, effectiveness and productivity.

Additional related material:
Part I: EMC VFCache respinning SSD and intelligent caching
IT and storage economics 101, supply and demand
2012 industry trends perspectives and commentary (predictions)
Speaking of speeding up business with SSD storage
New Seagate Momentus XT Hybrid drive (SSD and HDD)
Are Hard Disk Drives (HDDs) getting too big?
Unified storage systems showdown: NetApp FAS vs. EMC VNX
Industry adoption vs. industry deployment, is there a difference?
Two companies on parallel tracks moving like trains offset by time: EMC and NetApp
Data Center I/O Bottlenecks Performance Issues and Impacts
From bits to bytes: Decoding Encoding
Who is responsible for vendor lockin
EMC VPLEX: Virtual Storage Redefined or Respun?
EMC interoperability support matrix

Ok, nuff said for now, I think I see some storm clouds rolling in.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

EMC VFCache respinning SSD and intelligent caching (Part I)

This is the first part of a two part series covering EMC VFCache, you can read the second part here.

EMC formally announced VFCache (aka Project Lightning), an IO accelerator product that comprises a PCIe nand flash card (aka Solid State Device or SSD) and intelligent cache management software. In addition, EMC is also talking about the next phase of its flash business unit and Project Thunder. The approach EMC is taking with VFCache should not be a surprise, given their history of starting out with memory and SSD and evolving it into intelligent cache optimized storage solutions.

Storage IO performance and capacity gap
Data center and storage IO performance capacity gap (Courtesy of Cloud and Virtual Data Storage Networking (CRC Press))

Could we see the future of where EMC will take VFCache, along with other possible solutions already being hinted at by the EMC flash business unit, by looking at where they have been already?

Likewise, by looking at the past, can we see the future, or how VFCache and sibling product solutions could evolve?

After all, EMC is no stranger to caching, with both nand flash SSD (e.g. FAST Cache, FAST and SSD drives) and DRAM based caching across their product portfolio, not to mention that memory was a core part of the products the company was founded on, which evolved into HDD and more recently nand flash SSD based solutions among others.

Industry trends and perspectives

Unlike others who also offer PCIe SSD cards, such as FusionIO with its focus on eliminating SANs or other storage (read their marketing), EMC not surprisingly is marching to a different beat. The beat EMC is marching to, or perhaps leading by example for others to follow, is that of going mainstream and using PCIe SSD cards as a cache to complement their own as well as other vendors' storage systems vs. replacing them. This is similar to what EMC and other mainstream storage vendors have done in the past, such as with SSD drives being used as a flash cache extension on CLARiiON or VNX based systems as well as a target or storage tier.

Various options and locations for SSD along with different usage scenarios
Various SSD locations, types, packaging and usage scenario options

Other vendors including IBM, NetApp and Oracle have also leveraged various packaging options of Single Level Cell (SLC) or Multi Level Cell (MLC) flash as caches in the past. A different example of SSD being used as a cache is the Seagate Momentus XT, which is a desktop, workstation and consumer type device. Seagate has shipped over a million of the Momentus XT, which uses SLC flash as a cache to complement and enhance the integrated HDD performance (a 750GB model with 8GB of SLC memory is in the laptop I'm using to type this).

One of the premises of solutions such as those mentioned above for caching is to address changing data access patterns and life cycles, shown in the figure below.

Changing data access patterns and lifecycles
Evolving data access patterns and life cycles (more retention and reads)

Put a different way, instead of focusing on just big data or corner cases (granted some of those are quite large) or ultra large cloud scale out solutions, EMC with VFCache is also addressing their core business, which includes little data. What will be interesting to watch and listen to is how some vendors will start to jump up and down saying that they have been doing or enabling what EMC is announcing for some time. In some cases those vendors will rightfully be making noise about something they should have made noise about before.

EMC is bringing the SSD message to the mainstream business and storage marketplace, showing how it is a complement to, vs. a replacement of, existing storage systems. By doing so, they will show how to spread the cost of SSD out across a larger storage capacity footprint, boosting the effectiveness and productivity of those systems. This means that customers who install the VFCache product can accelerate the performance of both their existing EMC storage as well as storage systems from other vendors, preserving their technology along with people skills investments.


Key points of VFCache

  • Combining a PCIe SLC nand flash card (300GB) with an intelligent caching management software driver for use in virtualized and traditional servers

  • Making SSD complementary to existing installed block based disk (and/or SSD) storage systems to increase their effectiveness

  • Providing investment protection while boosting productivity of existing EMC and third party storage in customer sites

  • Bringing caching closer to the application where the data is accessed, while leveraging larger scale direct attached and SAN block storage

  • Focusing the SSD message back on little data as well as big data for mainstream, broad customer adoption scenarios

  • Leveraging the benefit and strength of SSD as a read cache and the scale of the underlying downstream disk for data storage

  • Reducing concerns around SSD endurance or duty cycle wear and tear by using it as a read cache (see the sketch after this list)

  • Off-loading underlying storage systems from some read requests, enabling them to do more work for other servers
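
To illustrate the read cache behavior called out in the last few points, here is a minimal sketch of an LRU read cache sitting in front of slower backing storage. This is a toy model for illustration only, assuming a simple LRU policy; it does not represent EMC's actual VFCache caching algorithms.

```python
from collections import OrderedDict

class ReadCache:
    """Toy LRU read cache: reads are cached, writes pass through to
    backing storage. Write-through means a lost cache loses no data,
    and caching only reads limits flash write wear."""

    def __init__(self, backing_store, capacity=4):
        self.backing = backing_store      # e.g. a dict standing in for a LUN
        self.capacity = capacity
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block) # refresh LRU position
            self.hits += 1
            return self.cache[block]
        self.misses += 1                  # miss: go to slower backing store
        data = self.backing[block]
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return data

    def write(self, block, data):
        self.backing[block] = data        # write-through to backing storage
        self.cache.pop(block, None)       # invalidate any stale cached copy

# Repeat reads of hot blocks are served at cache speed,
# off-loading the backing storage for other work.
lun = {n: f"data-{n}" for n in range(100)}
c = ReadCache(lun, capacity=4)
for n in [1, 2, 1, 1, 3, 2, 1]:
    c.read(n)
print(f"hits={c.hits} misses={c.misses}")  # hits=4 misses=3
```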

Additional related material:
Part II: EMC VFCache respinning SSD and intelligent caching
IT and storage economics 101, supply and demand
2012 industry trends perspectives and commentary (predictions)
Speaking of speeding up business with SSD storage
New Seagate Momentus XT Hybrid drive (SSD and HDD)
Are Hard Disk Drives (HDDs) getting too big?
Unified storage systems showdown: NetApp FAS vs. EMC VNX
Industry adoption vs. industry deployment, is there a difference?
Two companies on parallel tracks moving like trains offset by time: EMC and NetApp
Data Center I/O Bottlenecks Performance Issues and Impacts
From bits to bytes: Decoding Encoding
Who is responsible for vendor lockin
EMC VPLEX: Virtual Storage Redefined or Respun?
EMC interoperability support matrix

Ok, nuff said for now, I think I see some storm clouds rolling in

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

IT and storage economics 101, supply and demand

In my 2012 (and 2013) industry trends and perspectives predictions I mentioned that some storage systems vendors who managed their costs could benefit from the current Hard Disk Drive (HDD) shortage. Most in the industry would say that simply echoes what they have said; however, I have an alternate scenario. My scenario is that vendors who already manage good (or great) margins on their HDD sales, and who can manage their costs including inventories, stand to make even more margin. There is a popular myth that there is no money or margin in HDDs or for those who sell them, which might be true for some.

Without going into any details, let's just say it is a popular myth, just like saying that there is no money in hardware or that all software and people services are pure profit. Ok, let's let sleeping dogs lie where they rest (at least for now).

Why will some storage vendors make more margin off of HDDs when everybody is supposed to be adopting or deploying solid state devices (SSD), or Hybrid Hard Disk Drives (HHDD) in the case of workstations, desktops or laptops? Simple: SSD adoption (and deployment) is still growing, and there are a lot of demand generation incentives available. Likewise HDD demand continues to be strong, and with supplies affected, economics 101 says that some will raise their prices, manage their expenses, and make more profits, which can be used to help fund or stimulate increased SSD or other initiatives.

Storage, IT and general Economics 101

Economics 101 or basics introduces the concept of supply and demand along with revenue minus costs = profit or margin. If there is no demand yet a supply of a product exists, then techniques such as discounting, bundling or other forms of adding value are used to incentivize customers to make a purchase. Bundling can include offering some other product, service or offering, which could be as simple as an extended warranty, to motivate buyers. Beyond discounts, coupons, two for one offers, future buying credits, gift cards or memberships for frequent buyers (or flyers) are other forms of stimulating sales activity.
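
As a simple worked example of revenue minus costs = profit, and of how discounting to move excess supply eats into margin, consider the following sketch; every number in it is invented for illustration.

```python
# Revenue minus costs = profit (margin), with a discount applied to
# stimulate demand. All figures are invented for illustration.

unit_price = 100.0   # list price per unit (hypothetical)
unit_cost = 70.0     # cost of goods plus allocated expenses (hypothetical)
units = 1_000

for discount in (0.00, 0.10, 0.20):
    revenue = unit_price * (1 - discount) * units
    profit = revenue - unit_cost * units
    print(f"{discount:.0%} discount: revenue ${revenue:,.0f}, "
          f"profit ${profit:,.0f} ({profit / revenue:.1%} margin)")
```

A 20% discount here cuts the margin from 30% to 12.5%, which is why supply, demand and discounting show up so directly in vendors' profits.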

Likewise if there is a supply of, or competition for, a given market of a product or alternative, vendors or those selling the products, including value added resellers (VARs), may sacrifice margin (profits) to meet revenue as well as units shipped (e.g. expanding their customer and installed base footprint) goals.

Currently in the IT industry, and specifically around data storage, even with increased and growing adoption, demand and deployment around SSD, there is also a large supply in different categories. For example, there are several fabrication facilities (fabs) that produce the silicon dies (e.g. chips) that form nand flash SSD memories, including Intel, Micron, the joint Intel and Micron fab (IM Flash Technologies) and Samsung. Even with continued strong demand growth, the various fabs seem to have enough capacity, at least for now. Likewise manufacturers of SSD drive form factor products with SAS or SATA interfaces for attaching to existing servers, storage or appliances, including Intel, Micron, Samsung, Seagate, STEC and SanDisk among others, seem to be able to meet demand. Even PCIe SSD card vendors have come under the pressure of supply and demand. For example the high flying startup FusionIO recently saw its margins affected due to competition, which includes Adaptec, LSI, Texas Memory Systems (TMS) and soon EMC among others. In the SSD appliance and storage system space there are even more vendors, with what amounts to about one every month or so coming out of stealth. Needless to say there will be some shakeout in the not so distant future.

On the other hand, if there is demand however limited supply, assuming that the market will support it, prices can be raised back from previously discounted levels. Assuming that costs are kept in line, any subsequent increase in average selling price (ASP) minus costs should result in higher margins.

Another variation is if there is strong demand and a shortage of supply, such as what is occurring with hard disk drives (HDD) due to the recent flooding in Thailand; not only do prices increase, there can also be changes to warranties or other services and incentives. Note that some HDD manufacturers such as Western Digital were more affected by the flooding than Seagate. Likewise the Thailand flooding was not limited to just HDDs, having also affected other electronic chip and component suppliers. Even though HDDs have been declared dead by many in the SSD camps along with their supporters, record numbers of HDDs are produced every year. Note that economics 101 also tells us that even though more devices are produced and sold, that may not show a profit based on their cost and price. Like the CPU processor chips produced by AMD, Broadcom, IBM and Intel among others that are high volume with varying margins, the HDD and nand flash SSD markets are also high volume with different margins.

As an example, Seagate recently announced strong profits due to a number of factors, even though enterprise drive supply and shipments were down while desktop drives were up. Given that many industry pundits have proclaimed a disaster for those involved with HDDs due to the shortage, they forgot about economics 101 (supply and demand). Sure, marketing 101 says that HDDs are dead and if there is a shortage then more people will buy SSDs; however that also assumes that a) people are ready to buy more SSDs (e.g. demand), b) vendors or manufacturers have supply, and c) those same vendors or manufacturers are willing to give up margin while reducing costs to boost profits.

Note that costs typically include selling, general and administrative costs, cost of goods, manufacturing, transportation and shipping, insurance, and research and development among others. If it has been a while since you looked at one, take a few minutes sometime to look at public companies and their quarterly Securities and Exchange Commission (SEC) financial filings. Those public filing documents are a treasure trove of information for those who sift through them, and where many reporters, analysts and researchers find information for what they are working or speculating on. These documents show total sales, costs, profits and losses among other things. Something that vendors may not show in these public filings, which means you have to read between the lines or get the information elsewhere, is how many units were actually shipped or the ASP, to get an idea of the amount of discounting that is occurring. Likewise sales and marketing expenses often get lumped into or under selling, general and administrative (SG&A) costs. A fun or interesting metric is to look at the percentage of SG&A dollars spent per revenue and profits.

What I find interesting is to get an estimate of what it is costing an organization to do or sustain a given level of revenue and margin. For example, while some larger vendors may seem to spend more on selling and marketing, on a percentage basis they can easily be out spent by smaller startups, as the sketch below shows. Granted the larger vendor may be spending more actual dollars, however those are spread out over a larger sales and revenue base.
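
Here is a quick sketch of that SG&A-as-a-percentage-of-revenue comparison; the company names and figures are purely illustrative.

```python
# SG&A as a percentage of revenue: big vendor vs. startup.
# Names and figures are invented for illustration.

companies = {
    "BigVendor": {"revenue": 5_000_000_000, "sga": 600_000_000},
    "Startup":   {"revenue":    20_000_000, "sga":   8_000_000},
}

for name, fin in companies.items():
    print(f"{name}: SG&A ${fin['sga']:,} on revenue ${fin['revenue']:,} "
          f"= {fin['sga'] / fin['revenue']:.0%} of revenue")

# BigVendor spends 75x more SG&A dollars, yet at 12% of revenue it is
# out spent on a percentage basis by the startup at 40%.
```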

What does this all mean?

Look at multiple metrics that have both a future trend or forecast as well as a trailing or historical perspective view. Look at both percentages as well as dollar amounts, and at both revenue and margin, while also keeping units or numbers of devices (or copies) sold in perspective. For example it's interesting to know if a vendor's sales were down 10% (or up) quarter over quarter, versus the same quarter a year ago, or year over year. It is also interesting to keep the margin in perspective along with SG&A costs in addition to the cost of product acquired for sale. Also important, if sales were down yet margins are up, is how many devices or copies were sold, to get a gauge on expanding footprint, which could also be a sign of future annuity (follow up sales opportunities). What I'm watching over the next couple of quarters is how some vendors leverage the Thailand flooding and HDD as well as other electronic component supply shortages to meet demand by managing discounts, costs and other items that contribute to enhanced margins.

Rest assured there is a lot more to IT and storage economics, including advanced topics such as Return on Investment (ROI), or Return on Innovation (the new ROI), and Total Cost of Ownership (TCO) among others, that maybe we will discuss in the future.

Ok, nuff fun for now, let's get back to work.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

AWS (Amazon) storage gateway, first, second and third impressions

Amazon Web Services (AWS) today announced the beta of their new storage gateway functionality that enables access to Amazon S3 (Simple Storage Service) from your applications using an appliance installed in your data center. With this beta launch, Amazon joins other startup vendors who provide standalone gateway appliance products (e.g. Nasuni) along with those who have disappeared from the market (e.g. Cirtas). In addition to gateway vendors, there are also those with cloud access added to their software tools, such as Jungle Disk (which accesses both Rackspace and Amazon S3) along with the Commvault Simpana Cloud Connector among others. There are also vendors that have added cloud access gateway functionality to their storage systems, such as TwinStrata among others. Even EMC (and here) has gotten into the game, adding qualified cloud access support to some of their products.

What is a cloud storage gateway?

Before going further, let's take a step back and address what for some may be a fundamental question: what is a cloud storage gateway?

Cloud services such as storage are accessed via some type of network, either the public Internet or a private connection. The type of cloud service being accessed (Figure 1) determines what is needed. For example, some services can be accessed using a standard Web browser, while others require plug-in or add-on modules. Some cloud services may require downloading an application, agent, or other tool for accessing the cloud service or resources, while others provide an on-site or on-premises appliance or gateway.

Generic cloud access example via Cloud and Virtual Data Storage Networking (CRC Press)
Figure 1: Accessing and using clouds (From Cloud and Virtual Data Storage Networking (CRC Press))

Cloud access software and gateways or appliances are used for making cloud storage accessible to local applications. The gateways, as well as enabling cloud access, provide replication, snapshots, and other storage services functionality. Cloud access gateways or server-based software include tools from BAE, Citrix, Gladinet, Mezeo, Nasuni, OpenStack, TwinStrata among others. In addition to cloud gateway appliances or cloud points of presence (cpops), access to public services is also supported via various software tools. Many data protection tools including backup/restore, archiving, replication, and other applications have added (or are planning to add) support for access to various public services such as Amazon, Google, Iron Mountain, Microsoft, Nirvanix, or Rackspace among several others.

Some of the tools have added native support for one or more of the cloud services, leveraging various application programming interfaces (APIs), while other tools or applications rely on third-party access gateway appliances or a combination of native support and appliances. Another option for accessing cloud resources is to use tools (Figure 2) supplied by the service provider, which may be their own, from a third-party partner, or open source, as well as using their APIs to customize your own tools.

Generic cloud access example via Cloud and Virtual Data Storage Networking (CRC Press)
Figure 2: Cloud access tools (From Cloud and Virtual Data Storage Networking (CRC Press))

For example, I can use my Amazon S3 or Rackspace storage accounts using their web and other provided tools for basic functionality. However, for doing backups and restores, I use the tools provided by the service provider, which then deal with two different cloud storage services. The tool presents an interface for defining what to back up, protect, and restore, as well as enabling shared (public or private) storage devices and network drives. In addition to providing an interface (Figure 2), the tool also speaks the specific APIs and protocols of the different services, including PUT (create or update a container), POST (update header or metadata), LIST (retrieve information), HEAD (metadata information access), GET (retrieve data from a container), and DELETE (remove container) functions. Note that the real behavior and API functionality will vary by service provider. The importance of mentioning the above example is that when you look at some cloud storage services providers, you will see mention of PUT, POST, LIST, HEAD, GET, and DELETE operations as well as services such as capacity and availability. Some services will include an unlimited number of operations, while others will have fees for doing updates, listing, or retrieving your data in addition to basic storage fees. By being aware of cloud primitive functions such as PUT or POST and GET or LIST, you can have a better idea of what they are used for as well as how they play into evaluating different services, pricing, and service plans.
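
As a rough illustration of those primitives, the sketch below exercises PUT, HEAD, GET, LIST and DELETE style operations using Python's requests library against a hypothetical S3-style endpoint (storage.example.com). Real services require authentication such as signed request headers and differ in exact semantics, so treat this as illustrative only.

```python
# Generic cloud storage primitives against a hypothetical endpoint.
# Real providers require authentication and vary in exact behavior.
import requests

BASE = "https://storage.example.com/mybucket"  # hypothetical endpoint

# PUT: create or update an object in a container
requests.put(f"{BASE}/backups/file1.dat", data=b"hello world")

# HEAD: fetch only the metadata (e.g. size) without the data itself
meta = requests.head(f"{BASE}/backups/file1.dat")
print(meta.headers.get("Content-Length"))

# GET on an object: retrieve the stored data
data = requests.get(f"{BASE}/backups/file1.dat").content

# GET on the container (a LIST operation): enumerate stored objects
listing = requests.get(BASE, params={"prefix": "backups/"})

# DELETE: remove the object
requests.delete(f"{BASE}/backups/file1.dat")
```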

Depending on the type of cloud service, various protocols or interfaces may be used, including iSCSI, NAS (NFS), HTTP or HTTPS, FTP, REST, SOAP, and BitTorrent, plus APIs and PaaS mechanisms including .NET or SQL database commands, in addition to XML, JSON, or other formatted data. VMs can be moved to a cloud service using file transfer tools or upload capabilities of the provider. For example, a VM such as a VMDK or VHD is prepared locally in your environment and then uploaded to a cloud provider for execution. Cloud services may provide an access program or utility that allows you to configure when, where, and how data will be protected, similar to other backup or archive tools.

Some traditional backup or archive tools have added support, directly or via third parties, for accessing IaaS cloud storage services such as Amazon, Rackspace, and others. Third-party access appliances or gateways enable existing tools to read and write data to a cloud environment by presenting a standard interface such as NAS (NFS and/or CIFS) or iSCSI (block) that gets mapped to the back-end cloud service format. For example, if you subscribe to Amazon S3, storage is allocated as objects and accessed using various tools. The cloud access software or appliance understands how to communicate with the IaaS storage APIs and abstracts those from how they are used. Access software tools or gateways, in addition to translating or mapping between cloud APIs, provide functionality for your applications including security with encryption, bandwidth optimization, and data footprint reduction such as compression and de-duplication. Other functionality includes reporting and management tools that support various interfaces, protocols and standards including SNMP, the SNIA Storage Management Initiative Specification (SMI-S), and the Cloud Data Management Interface (CDMI).

First impression: Interesting, good move Amazon, I was ready to install and start testing it today

The good news here is that Amazon is taking steps to make it easier for your existing applications and IT environments to use and leverage clouds for private and hybrid adoption models, with Amazon branded and managed services, technology and associated tools.

This means leveraging your existing Amazon accounts to simplify procurement, management and ongoing billing, as well as leveraging their infrastructure. As a standalone gateway appliance (e.g. it does not have to be bundled as part of a specific backup, archive, replication or other data management tool), the idea is that you can insert the technology into your existing data center between your servers and storage to begin sending a copy of data off to Amazon S3. In addition to sending data to S3, the integrated functionality with other AWS services should make it easier to integrate with Elastic Compute Cloud (EC2) and Elastic Block Store (EBS) capabilities, including snapshots for data protection.

Thus my first impression of the AWS storage gateway at a high level is good and interesting, which led to looking a bit deeper, resulting in a second impression.

Second impression: Hmm, what does it really do and require? Time to slow down and do more homework

Digging deeper and going through the various publicly available material (note that I can only comment on or discuss what is announced or publicly available) results in a second impression of wanting and needing to dig deeper based on some of the caveats. Now granted, and in fairness to Amazon, this is of course a beta release, and on first impression it can be easy to miss the notice that it is in fact a beta, so keep in mind things can and hopefully will change.

Pricing aside, which means as with any cloud or managed storage service you will want to do a cost analysis model just as you would for procuring physical storage, look into the cost of the monthly gateway fee along with the associated physical server running the VMware ESXi configuration that you will need to supply. Chances are that if you are an average sized SMB, you have a physical machine (PM) lying around that you can throw a copy of ESXi onto, if you don't already have room for some more VMs on an existing one.

You will also need to assess the costs for using the S3 storage including space capacity charges, access and other fees as well as charges for doing snapshots or using other functionality. Again these are not unique to Amazon or their cloud gateway and should be best practices for any service or solution that you are considering. Amazon makes it easy by the way to see their base pricing for different tiers of availability, geographic locations and optional fees.

Speaking of accessing the cloud, and cloud conversations, you will also want to keep in mind what your network bandwidth service requirements will be to move data to Amazon if you are not already doing so.

Another thing to consider with the AWS storage gateway is that it does not replace your local storage (that is, unless you move your applications to Amazon EC2 and EBS); rather it makes a copy of whatever you save locally to a remote Amazon S3 storage pool. This can be good for high availability (HA), business continuance (BC), disaster recovery (DR) and compliance among other data management needs. However, in your cost model you also need to keep in mind that you are not replacing your local storage, you are adding to it via the cloud, which should be seen as complementing and enhancing your private, now to be hybrid, environment.


Walking the cloud data protection talk

FWIW, I leverage a similar model where I use a service (Jungle Disk) to which critical copies of my data get sent, which in turn places copies at Rackspace (Jungle Disk's parent) and Amazon S3. What data goes where depends on different policies that I have established. I also have local backup copies as well as a master gold disaster copy stored in a secure offsite location. The idea is that when needed, I can get a good copy restored from my cloud providers quickly regardless of where I am, if the local copy is not good. On the other hand, experience has already demonstrated that without sufficient network bandwidth services, if I need to bring back 100s of GBytes or TBytes of data quickly, I'm going to be better off bringing my master gold copy back onsite, then applying the fewer, smaller updates from the cloud service. In other words, the technologies complement each other.

By the way, a lesson learned here is that once my first copy is made with data footprint reduction (DFR) techniques applied (e.g. compression, de-dupe, optimization, etc.), later copies occur very fast. However subsequent restores of those large files or volumes also take longer to retrieve from the cloud vs. sending up changed versions. Thus be aware of backup vs. restore times, something that will apply to any cloud provider and can be mitigated by appliances that do local caching. However also keep in mind that if a disaster occurs, your local appliance may be affected and its cache rendered useless.
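
To see why shipping the gold copy back onsite can beat pulling everything down over the wire, here is the simple transfer time arithmetic; the link rates are nominal examples, and real throughput will be lower due to protocol overhead.

```python
# Time to move data over a network link: size in decimal GBytes,
# link rate in Mbps. Nominal rates; real throughput will be lower.

def transfer_hours(size_gb, mbps):
    bits = size_gb * 8 * 1_000_000_000
    return bits / (mbps * 1_000_000) / 3600

for size_gb in (100, 1000):        # a 100 GByte vs a 1 TByte restore
    for mbps in (10, 50, 100):
        print(f"{size_gb:>5} GB over {mbps:>3} Mbps: "
              f"{transfer_hours(size_gb, mbps):6.1f} hours")
```

A 1TByte restore over a 10Mbps link is on the order of nine days, which is why a courier-delivered gold copy plus smaller cloud-based updates can be the faster recovery path.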

Getting back to the AWS storage gateway, my second impression is that at first it sounded great.

However, then I realized it only supports iSCSI, and FWIW there is nothing wrong with iSCSI; I like it and recommend using it where applicable, even though I'm not using it here. I would like to have seen NAS (NFS and/or CIFS) support for the gateway, making it easier in my scenario for different applications, servers and systems to use and leverage the AWS services, something that I can do with my other gateways provided via different software tools. Granted, for those environments that already use iSCSI for the servers that will be using the AWS storage gateway this is a non issue, while for others it is a consideration, including the cost (time) to prepare your environment for using the capability.

Depending on the amount of storage you have in your environment, the next item that caught my eye may or may not be an issue: the iSCSI gateway supports volumes of up to 1TB and up to 12 of them, hence a maximum capacity of 12TB under management. This can be worked around by using multiple gateways; however, the increased complexity balanced against the benefit of the functionality is something to consider.

Third impression: Dig deeper, learn more, address various questions

This leads to my third impression: the need to dig deeper into what the AWS storage gateway can and cannot do for various environments. I can see where it can be a fit for some environments, while for others, at least in its beta version, it will be a non starter. In the meantime, do your homework and look around at other options; ironically, Amazon launching a gateway service may reinvigorate the marketplace of some of the standalone or embedded cloud gateway solution providers.

What is needed for using AWS storage gateway

In addition to having an S3 account, you will need to acquire, for a monthly fee, the storage gateway appliance, which is software installed into a VMware ESXi hypervisor virtual machine (VM). The requirements are the VMware ESXi hypervisor (v4.1) on a physical machine (PM), with at least 7.5GB of RAM and four (4) virtual processors assigned to the appliance VM, along with 75GB of disk space for the Open Virtual Appliance (OVA) image installation and data. You will also need a properly sized network connection to Amazon, and iSCSI initiators on either Windows Server 2008, Windows 7 or Red Hat Enterprise Linux.

Note that the AWS storage gateway beta is optimized for block write sizes greater than 4KBytes, and Amazon warns that smaller IO sizes can cause overhead resulting in lost storage space. This is a consideration for systems that have not yet changed their file systems and volumes to use the larger allocation sizes.

Some closing thoughts, tips and comments:

  • Congratulations to Amazon for introducing and launching an AWS branded storage gateway.
  • Amazon brings the value of trust to a cloud relationship.
  • Initially I was excited about the idea of a gateway that any of my systems could use to reach my S3 storage pools, vs. using gateway access functions that are part of different tools such as my backup software or Amazon web tools. Likewise I was excited by the idea of having an easy to install and use gateway that would allow me to grow in a cost effective way.
  • Keep in mind that this solution, at least in its beta version, DOES NOT replace your existing iSCSI based storage; instead it complements what you already have.
  • I hope Amazon listens carefully to what their customers and prospects want vs. need to evolve the functionality.
  • This announcement should reinvigorate some of the cloud appliance vendors as well as those who have embedded functionality for accessing Amazon and other providers.
  • Keep bandwidth services and optimization in mind, both for sending data as well as for retrieving it during a disaster or a small file restore.
  • In concept, the AWS storage gateway is not all that different from appliances that do snapshots and other local and remote data protection, such as those from Actifio, EMC (RecoverPoint) or FalconStor, or dedicated gateways such as those from Nasuni among others.
  • Here is a link to additional AWS storage gateway frequently asked questions (FAQs).
  • If the AWS gateway were available with a NAS interface, I would probably be activating it this afternoon, even with some of their other requirements, cost aside.
  • I'm still formulating my fourth impression, which is going to take some time; perhaps if I can get Amazon to help sell more of my books so that I can afford to test the entire solution leveraging my existing S3, EC2 and EBS accounts I might do so in the future, otherwise for now I will continue to research.
  • To learn more about the AWS storage gateway beta, check out this free Amazon webcast on February 23, 2012.

To learn more about cloud based data protection, data footprint reduction, cloud gateways, access and management, check out my book Cloud and Virtual Data Storage Networking (CRC Press), which is of course available on Amazon Kindle as well as in hard cover print at Amazon.com.

Ok, nuff said for now, I need to get back to some other things while thinking about this all some more.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Can I ask for your support? Please vote for my blog

No, I'm not running for any elected office in a political or other organizational capacity; more on the voting stuff in a moment.

Let me start out by saying thank you to all of you who have read and continue to read these posts from wherever that happens to be.

I also want to thank all of the sites and venues that pick up my blog feeds, making it easier for readers to view the content, as well as say thanks for all of the great comments and discussions.

Doing some recent end of year clean up and preparation for 2012, I was going back looking at some blog history and realized that StorageIOblog was launched back in late fall of 2006. For those not aware, my full blog feed is https://storageioblog.com/RSSfull.xml and there is also a brief feed at https://storageioblog.com/RSS.xml and the full archives going back to 2006 can be found at https://storageioblog.com/RSSfullArchive.xml.

Ok, now back to the voting stuff.

It is that time of the year to cast your vote over at Eric Siebert's (aka @ericsiebert) vsphere-land site, where my StorageIOblog is among around 180 different IT technology blogs nominated for inclusion and balloting, many of whose authors are also fellow vExperts. The blogs over at vsphere-land cover diverse topics, technologies, trends and themes including servers, storage, networking, cloud, virtualization, security and related topic themes.

Here is the announcement for the 2012 vsphere-land voting.

Some of the blogs have been around for many years, while there is also a category for new blogs less than a year old. In this year's voting anyone can vote, however there is only one ballot per person; for the top ten you can pick up to ten different blogs and then rank them.

There are categories for virtualization, cloud and storage focused blogs, as well as for independent bloggers (e.g. non vendors) and for news and media venues. The blogs that are part of the balloting were all added via open nomination, and if yours or your favorite blog is not on the list, go easy on Eric, as he made multiple attempts via different venues to make the process known (hint: make sure Eric knows of your site, and also follow him and his sites for the future).

The voting is up and running until February 7, 2012 at this site here.

Check out the voting, balloting and polling process where you can select my StorageIOblog as one of ten overall selections, as well as rank it within those ten, then select StorageIOblog in the storage category as well as in the independent blogger categories if you are so inclined (thanks in advance).

Also, check out Eric's great books Maximum vSphere along with VMware VI3 Implementation at Amazon.com among other venues.

Ok, nuff said for now; please get out and vote, and thanks in advance for your interest and support.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

My Server and Storage IO holiday break projects

Happy New Year!

Following up on a flurry of posts in the closing days of 2011, including industry trends perspective predictions for 2012 and 2013, top blog posts from 2011, top all time posts, along with a couple of other items here and here, it's time to get back to 2012 activity. Also, if you missed it, here is the Fall (December) 2011 StorageIO newsletter.

Actually I have been busy working on some other projects the past several weeks, most of which are under NDA so not much else can be said about them; however there are some other things I'm working on that will show themselves in the weeks and months to come. Here is a link to a webinar and live chat that I did the first week of January on CDP (Continuous Data Protection) and how it can be applied to many different environments.

But let's take a step back for a moment and let me share with you some of the things I did or started during the holiday break between Christmas and New Year's.

Like many others, I found time to relax and get away from normal work activities during the recent holiday season.

However, like many of you who may also be techies or geeks or wannabe geeks at heart, I could not get away from server, storage, IO, networking, data protection, video and other things completely. I used some time to work on a few projects that I had wanted to do or that I had started before the holidays; here is a synopsis.

Increased storage capacity on a DVR by about 5x. In order to get this to work, I modified a 3.5" enclosure with a power supply to accept a 2.5" 1.5TB SATA HDD with an eSATA connection; the easy part was then attaching it to the external eSATA port on my DVR. The hard part was then waiting for the DVR to reconfigure and start recording information again. Also part of upgrading the external storage on the DVR was getting the media share option to do more than basic things, leveraging audio and video real-time transcoding using the Tversity software along with various codecs on a media server.

Upgraded a 500GB HHDD to a 750GB HHDD and did some testing. Shortly before the holidays I received a new 750GB Seagate Momentus XT II HHDD to compare to my existing 500GB previous generation model. I have been using the 750GB HHDD for over a month now, and it is amazing to see so much space in a laptop that also has good performance. Some follow-up activities are to go back and analyze some performance data that I collected before and after the upgrade. This includes both workload simulation of reads and writes, random and sequential, of different IO sizes, as well as comparing Windows startup and shutdown speed and impact, to build on what I did last summer (see this post). More on these in the not so distant future.

Speaking of clouds, I had a chance to do some more testing with my Amazon EC2 and EBS accounts, in addition to cleaning up my S3 pool as well as my other cloud backup and storage provider accounts. This also involved refining some data protection backup/restore and archive frequency and retention settings. In addition to refinements for cloud based backup, I'm also in the process of transitioning from Imation Odyssey Removable Hard Disk Drives (RHDD) to much larger capacity 2.5" portable RHDDs that are used for offsite bulk copies. Part of the migration included seeing that end of year master or gold backups and archives were made and safely secured elsewhere, in addition to having data sent to the cloud.

Another project involved doing some more testing and simulations with my SSD, along with more Windows boot and shutdown tests mentioned above. More on those results in a future post.

Some time (actually not very much) was also spent adding some new shares to my Iomega IX4 NAS, which is filling up, so I also did some more research on what I will upgrade or replace it with. While the Iomega has been great (knock on wood), Synology is also looking interesting as a future solution; however I'm keeping my options open for now. Right now I'm leaning towards keeping the IX4 and adding another NAS filer, using the two for different purposes.

Some other server, storage and IO projects included upgrading some networking components and finishing the decommissioning of old drives, securing them for safe disposal when the time comes.

I was also able to spend time on non tech items, including being outside enjoying the nice weather, cutting up some fallen trees and roasting them on a bonfire among other things.

Tree cleanup; on break

Roasting logs; walking on frozen water

Ok, nuff said for now, time to get back to work.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

A conversation from SNW 2011 with Jenny Hamel

Here (.qt) and here (.wmv) is a video of an interview that I did with Jenny Hamel (@jennyhamelsd6) during the Fall 2011 SNW event in Orlando, Florida.


Topics covered during the discussion include:

  • Importance of metrics that matter for gaining and maintaining IT situational awareness
  • The continued journey of IT to improve customer service delivery in a cost-effective manner
  • Reducing cost and complexity without negatively impacting customer service experience
  • Participating in SNW and SNIA for over ten years on three different continents

Industry Trends and Perspectives

  • Industry trends, buzzword bingo (SSD, cloud, big data, virtualization), adoption vs. deployment
  • Increasing efficiency along with effectiveness and productivity
  • Stretching budgets to do more without degrading performance or availability
  • How customers can navigate their way around various options, products and services
  • Importance of networking at events such as SNW along with information exchange and learning
  • Why data footprint reduction is similar to packing smartly when going on a journey
  • Cloud and Virtual Data Storage Networking (now available on Kindle and other epub formats)

View the video from SNW fall 2011 here (.qt) or here (.wmv).


Check out other videos and podcasts here or at StorageioTV.com

Speaking of industry trends, check out the top 25 new posts from 2011, along with the top 25 all time posts and my comments (predictions) for 2012 and 2013.

Ok, nuff said for now

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Top storageio cloud virtualization networking and data protection posts

I'm in the process of wrapping up 2011 and getting ready for 2012. Here is a list of the top 25 all time posts from StorageIOblog covering cloud, virtualization, servers, storage, green IT, networking and data protection. Looking back, here are 2010 and 2011 industry trends, thoughts and perspective predictions, and looking forward, a 2012 preview here.

Top 25 all time posts about storage, cloud, virtualization, networking, green IT and data protection

Check out the companion post to this, which is the top 25 2011 posts located here, as well as the 2012 and 2013 predictions preview here.

Ok, nuff said for now

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved