Welcome to the Data Protection Diaries

Updated 1/10/2018

This is a series of posts about data protection, which includes security (logical and physical), backup/restore, business continuance (BC), disaster recovery (DR), business resiliency (BR) along with high availability (HA), archiving and related themes, technologies and trends.

Think of data protection as protect, preserve and serve: information across cloud, virtual and physical environments, spanning traditional servers and storage I/O networking along with mobile (ok, some IoT as well), and from SOHO/SMB to enterprise.

Getting started, taking a step back

Recently I have done a series of webinars and Google+ hangouts as part of the BackupU initiative brought to you by Dell Software (that’s a disclosure btw ;) ) that are vendor and technology neutral. Instead of the usual vendor product or technology focused seminars and events, these are about getting back to the roots, the fundamentals of what to protect when and why, and then deciding your options as well as different approaches (e.g. what tools to use when).

In addition over the past year (ok, years) I have also been doing other data protection related events, seminars, workshops, articles, tips, posts across cloud, virtual and physical from SOHO/SMB to enterprise. These are in addition to the other data infrastructure server and storage I/O stuff (e.g. SSD, object storage, software defined, big data, little data, buzzword bingo and others).

Keep in mind that in the data center or information factory everything is not the same, as there are different applications, threat risk scenarios, availability and durability among other considerations. In this series, like the cloud conversations among others, I’m going to be pulling various data protection themes together, hopefully making it easier for others to find them, as well as so I know where to find them myself.

Some notes for an upcoming post in this series, taken using my Livescribe, about data protection

Data protection topics, trends, technologies and related themes

Here are some more posts to check out pertaining to data protection trends, technologies and perspectives:

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Can we get a side of context with them IOPS server storage metrics?

What’s the best server storage I/O network metric or benchmark? It depends, as there needs to be some context with them IOPS and other server storage I/O metrics that matter.

There is an old saying that the best I/O (Input/Output) is the one that you do not have to do.

In the meantime, let’s get a side of some context with them IOPS from vendors, marketers and their pundits who are tossing them around for server, storage and IO metrics that matter.

Expanding the conversation, the need for more context

The good news is that people are beginning to discuss storage beyond space capacity and cost per GByte, TByte or PByte for DRAM and NAND flash Solid State Devices (SSD), Hard Disk Drives (HDD), along with Hybrid HDD (HHDD) and Solid State Hybrid Drive (SSHD) based solutions. This applies to traditional enterprise or SMB IT data centers with physical, virtual or cloud based infrastructures.


This is good because it expands the conversation beyond just the cost of space capacity into other aspects, including performance (IOPS, latency, bandwidth) for various workload scenarios, along with availability, energy effectiveness and management.

Adding a side of context

The catch is that IOPS, while part of the equation, are just one aspect of performance, and by themselves, without context, may have little meaning if not be misleading in some situations.

Granted it can be entertaining, fun to talk about, or simply make good press copy to cite a million IOPS. IOPS vary in size depending on the type of work being done, not to mention reads or writes, random or sequential, all of which also have a bearing on data throughput or bandwidth (MBytes per second) along with response time. Not to mention block, file, object or blob as well as table access.

However, are those million IOPS applicable to your environment or needs?

Likewise, what do those million or more IOPS represent about the type of work being done? For example, are they small 64 byte or large 64 KByte sized, random or sequential, cached reads or lazy writes (deferred or buffered), on a SSD or HDD?

How about the response time or latency for achieving them IOPS?

In other words, what is the context of those metrics and why do they matter?

More metrics that matter, including IOPS for HDDs and SSDs

Metrics that matter give context: for example, IO sizes closer to what your real needs are, reads and writes, mixed workloads, random or sequential, sustained or bursty; in other words, reflective of the real world.

As with any benchmark, take them with a grain (or more) of salt; the key is to use them as an indicator, then align them to your needs. The tool or technology should work for you, not the other way around.

Here are some examples of context that can be added to help make IOPS and other metrics matter:

  • What is the IOP size, are they 512 byte (or smaller) vs. 4K bytes (or larger)?
  • Are they reads, writes, random, sequential or mixed and what percentage?
  • How was the storage configured including RAID, replication, erasure or dispersal codes?
  • Then there is the latency or response time and IO queue depths for the given number of IOPS.
  • Let us not forget if the storage systems (and servers) were busy with other work or not.
  • If there is a cost per IOP, is that list price or discounted (hint: if discounted, start negotiations from there)?
  • What was the number of threads or workers, along with how many servers?
  • What tool was used, its configuration, as well as raw or cooked (aka file system) IO?
  • Was the IOPS number achieved with one worker or multiple workers on a single or multiple servers?
  • Did the IOPS number come from a single storage system or a total across multiple systems?
  • Fast storage needs fast servers and networks; what was their configuration?
  • Was the performance a short burst, or long sustained period?
  • What was the size of the test data used; did it all fit into cache?
  • Were short stroking (for IOPS) or long stroking (for bandwidth) techniques used?
  • Were data footprint reduction (DFR) techniques (thin provisioning, compression or dedupe) used?
  • Was write data committed synchronously to storage, or deferred (aka lazy writes)?

The above are just a sampling, and not all may be relevant to your particular needs; however, they help to put IOPS into more context. Another consideration around IOPS is the configuration of the environment: were they measured from an actual running application using some measurement tool, or were they generated from a workload tool such as IOmeter, IOrate or VDbench among others?
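To put a couple of those items together, here is a minimal sketch (Python, with illustrative numbers only, not from any particular vendor or benchmark) of two handy identities: bandwidth equals IOPS times I/O size, and, per Little's Law, the average number of I/Os in flight (queue depth) equals IOPS times response time.

```python
# Minimal sketch of putting context around an IOPS number.
# All numbers below are illustrative only; plug in your own measurements.

def bandwidth_mbps(iops: float, io_size_bytes: float) -> float:
    """Bandwidth (MBytes/sec) = IOPS x I/O size."""
    return iops * io_size_bytes / 1_000_000

def avg_outstanding_ios(iops: float, latency_ms: float) -> float:
    """Little's Law: average I/Os in flight = IOPS x response time (sec)."""
    return iops * latency_ms / 1000

# A million tiny 64 byte reads is only about 64 MBytes/sec...
print(bandwidth_mbps(1_000_000, 64))       # 64.0
# ...while 20,000 IOPS of 32 KByte I/Os moves roughly ten times the data.
print(bandwidth_mbps(20_000, 32_768))      # 655.36
# A million IOPS at 2 ms response time implies about 2,000 I/Os in flight.
print(avg_outstanding_ios(1_000_000, 2))   # 2000.0
```

In other words, without the I/O size, read/write mix and latency, a headline IOPS number by itself says little about how much data is actually being moved or what the user experience would be.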

Sure, there are more contexts and more information that would be interesting as well; however, learning to walk before running will help prevent falling down.


Does size or age of vendors make a difference when it comes to context?

Some vendors are doing a good job of going for out of this world record-setting marketing hero numbers.

Meanwhile other vendors are doing a good job of adding context to their IOPS, response time, bandwidth and other metrics that matter. There is a mix of startup and established vendors that give context with their IOPS; likewise, size or age does not seem to matter among those who lack context.

Some vendors may not offer metrics or information publicly, so fine, go under NDA to learn more and see if the results are applicable to your environment.

Likewise, if they do not want to provide the context, then ask some tough yet fair questions to decide if their solution is applicable for your needs.


Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

What this means is let us start putting out, and asking for, metrics that matter, such as IOPS with context.

If you have a great IOPS metric and you want it to matter, then include some context, such as the I/O size (e.g. 4K, 8K, 16K, 32K, etc.), percentage of reads vs. writes, latency or response time, and random or sequential access.

IMHO the most interesting or applicable metrics that matter are those relevant to your environment and application. For example, if your main application that needs SSD does about 75% reads (random) and 25% writes (sequential) with an average size of 32K, then while fun to hear about, how relevant is a million 64 byte read IOPS? Likewise, when looking at IOPS, pay attention to the latency, particularly if SSD or performance is your main concern.

Get in the habit of asking or telling vendors or their surrogates to provide some context with them metrics if you want them to matter.

So how about some context around them IOPS (or latency and bandwidth or availability for that matter)?

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Part II: EMC Evolves Enterprise Data Protection with Enhancements


This is the second part of a two-part series on recent EMC backup and data protection announcements. Read part I here.

What about the products, what’s new?

In addition to articulating their strategy for modernizing data protection (covered in part I here), EMC announced enhancements to Avamar, Data Domain, Mozy and Networker.

Data protection storage systems (e.g. Data Domain)

Building off of previously announced Backup Recovery Solutions (BRS) including Data Domain operating system storage software enhancements, EMC is adding more application and software integration along with new platform (systems) support.

Data Domain (e.g. Protection Storage) enhancements include:

  • Application integration with Oracle, SAP HANA for big data backup and archiving
  • New Data Domain protection storage system models
  • Data in place upgrades of storage controllers
  • Extended Retention now available on added models
  • SAP HANA Studio backup integration via NFS
  • Boost for Oracle RMAN, native SAP tools and replication integration
  • Support for backing up and protecting Oracle Exadata
  • SAP (non-HANA) support, both on SAP and Oracle

Data in place upgrades of controllers are available for 4200 series models on up (previously available on some larger models). This means that controllers can be upgraded with data remaining in place, as opposed to requiring a lengthy data migration.

Extended Retention facility is a zero cost license that enables more disk drive shelves to be attached to supported Data Domain systems. Thus there is not a license fee; however, you do pay for the storage shelves and drives that increase the available storage capacity. Note that this feature increases storage capacity by adding more disk drives and does not increase the performance of the Data Domain system. Extended Retention has been available in the past; however, it is now supported on more platform models. The extra storage capacity is essentially placed into a different tier that an archive policy can then migrate data into.
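As a loose illustration of that last point (hypothetical names, fields and threshold for illustration only, not the actual Data Domain policy engine or its syntax), an age-based archive policy that migrates data into an extended retention tier might look conceptually like this:

```python
# Hypothetical sketch of an age-based archive policy that selects backup data
# for migration into an extended retention (capacity) tier. Names, fields and
# the 90 day threshold are made up for illustration.
from datetime import datetime, timedelta

ARCHIVE_AFTER = timedelta(days=90)  # assumed policy threshold

def select_for_extended_retention(items, now):
    """Return items older than the archive policy threshold."""
    return [i for i in items if now - i["written"] > ARCHIVE_AFTER]

backups = [
    {"name": "weekly-2013-05-01", "written": datetime(2013, 5, 1)},
    {"name": "weekly-2013-08-01", "written": datetime(2013, 8, 1)},
]
for item in select_for_extended_retention(backups, now=datetime(2013, 9, 1)):
    print("migrate to extended retention tier:", item["name"])  # weekly-2013-05-01
```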

Boost for accelerating data movement to and from Data Domain systems is only available using Fibre Channel. When asked about Fibre Channel over Ethernet (FCoE) or iSCSI, EMC indicated its customers are not asking for this ability yet. This has me wondering whether the current customer focus is around FC, whether those customers are not yet ready for iSCSI or FCoE, or whether, if there were iSCSI or FCoE support, more customers would ask for it.

With the new Data Domain protection storage systems EMC is claiming up to:

  • 4x faster performance than earlier models
  • 10x more scalable and 3x more backup/archive streams
  • 38 percent lower cost per GB based on holding price points and applying improvements


EMC Data Domain data protection storage platform family


Data Domain supporting both backup and archive

Expanding Data Domain from backup to archive

EMC continues to evolve the Data Domain platform from being just a backup target platform with dedupe and replication to a multi-function, multi-role solution. In other words, one platform with many uses. This is an example of using one tool or technology for different purposes, such as backup and archiving, however with separate policies. Here is a link to a video where I discuss using common tools for backup and archiving with separate policies. In the above figure EMC Data Domain is shown being used for backup along with storage tiering and archiving (file, email, SharePoint, content management and databases among other workloads).


EMC Data Domain supporting different functions and workloads

Also shown are various tools from other vendors, such as CommVault Simpana, that can be used as both a backup and an archiving tool with Data Domain as a target. Likewise, Dell products acquired via the Quest acquisition are shown, along with those from IBM (e.g. Tivoli) and FileTek among others. Note that if you are a competitor of EMC, or simply a fan of other technology, you might come to the conclusion that the above is no different from others. Then again, others who are not articulating their version or vision of something like the above figure should probably also be stating the obvious, vs. arguing over who did it first.

Data source integration (aka data protection software tools)

It seems like just yesterday that EMC acquired Avamar (2006) and NetWorker aka Legato (2003), not to mention Mozy (2007) or Dantz (Retrospect, since divested) in 2004. With the exception of Dantz (Retrospect), which is now back in the hands of its original developers, EMC continues to enhance and evolve Avamar, Mozy and NetWorker, including with this announcement.

General Avamar 7 and Networker 8.1 enhancements include:

  • Deeper integration with primary storage and protection storage tiers
  • Optimization for VMware vSphere virtual server environments
  • Improved visibility and control for data protection of enterprise applications

Additional Avamar 7 enhancements include:

  • More Data Domain integration and leveraging as a repository (since Avamar 6)
  • NAS file systems with NDMP accelerator access (EMC Isilon & Celerra, NetApp)
  • Data Domain Boost enhancements for faster backup / recovery
  • Application integration with IBM (DB2 and Notes), Microsoft (Exchange, Hyper-V images, SharePoint, SQL Server), Oracle, SAP, Sybase and VMware images

Note that the Avamar Data Store is still used, mainly for ROBO and desktop or laptop type backup scenarios that do not yet support Data Domain (also see the Mozy enhancements below).

Avamar supports VMware vSphere virtual server environments using granular changed block tracking (CBT) technology as well as image level backup and recovery with vSphere plugins. This includes Instant Access recovery when images are stored on Data Domain storage.

Instant Access enables a VM that has been protected using Avamar image level technology on Data Domain to be booted via an NFS VMware datastore. VMware sees the VM and is able to power it on and boot directly from the Data Domain via the NFS datastore. Once the VM is active, it can be Storage vMotioned to a production VMware datastore while active (e.g. running), for recovery on the fly capabilities.


Instant Access to a VM on Data Domain storage
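To summarize the flow just described, here is a hedged sketch of the Instant Access sequence; the functions are illustrative stand-ins for the corresponding Avamar, Data Domain and vSphere operations, not actual EMC or VMware APIs.

```python
# Illustrative stand-ins for the Instant Access recovery flow; these are not
# EMC Avamar, Data Domain or VMware API calls.

def mount_nfs_datastore(esx_host: str, export: str) -> str:
    """Stand-in: expose the Data Domain NFS export to the ESXi host."""
    print(f"mount {export} on {esx_host}")
    return f"temp-datastore({export})"

def boot_vm(datastore: str, vm_name: str) -> str:
    """Stand-in: register the protected image and power the VM on."""
    print(f"power on {vm_name} directly from {datastore}")
    return vm_name

def storage_vmotion(vm: str, target_datastore: str) -> None:
    """Stand-in: migrate the running VM's storage to production."""
    print(f"Storage vMotion {vm} -> {target_datastore}")

# Recovery on the fly: boot from protection storage, then relocate while live.
temp_ds = mount_nfs_datastore("esx01", "datadomain:/avamar/vm-images")
vm = boot_vm(temp_ds, "app-server-01")
storage_vmotion(vm, "production-datastore")
```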

EMC NetWorker 8.1 enhancements include:

  • Enhanced visibility and control for owners of data
  • Collaborative protection for Oracle environments
  • Synchronized backup and data protection between DBAs and backup admins
  • Oracle DBAs use native tools (e.g. RMAN)
  • Backup admin implements the organization's SLAs (e.g. using NetWorker)
  • Deeper integration with EMC primary storage (e.g. VMAX, VNX, etc)
  • Isilon integration support
  • Snapshot management (VMAX, VNX, RecoverPoint)
  • Automation and wizards for integration, discovery, simplified management
  • Policy-based management, fast recovery from snapshots
  • Integrating snapshots into, and as part of, the data protection strategy. Note that this is more than basic snapshot management, as there is also the ability to roll a snapshot over into a Data Domain protection storage tier.
  • Deeper integration with Data Domain protection storage tier
  • Data Domain Boost over Fibre Channel for faster backups and restores
  • Data Domain Virtual Synthetics to cut impact of full backups
  • Integration with Avamar for managing image level backup recovery (Avamar services embedded as part of NetWorker)
  • vSphere Web Client enabling self-service recovery of VMware images
  • Newly created VMs inherit backup policies automatically

Mozy is being positioned for enterprise remote office branch office (ROBO) or distributed private cloud scenarios where Avamar, NetWorker or Data Domain solutions are not as applicable. EMC has mentioned that they have over 800 enterprises using Mozy for desktop, laptop, ROBO and mobile data protection. Note that this is a different target market than the consumer focused Mozy product, which also addresses smaller SMBs and SOHOs (Small Office Home Offices).

EMC Mozy enhancements to be more enterprise grade:

  • Simplified management services and integration
  • Active Directory (AD) for Microsoft environments
  • New storage pools (multiple types of pools) vs. dedicated storage per client
  • Keyless activation for faster provisioning of backup clients

Note that earlier this year EMC enhanced Data Protection Advisor (DPA) with version 6.0.

What does this all mean?


Data protection and backup discussions often focus around tape summit resources or cloud arguments, although this is changing. What is changing is growing awareness and discussion around how data protection storage mediums, systems and services are used along with the associated software management tools.

Some will say backup is broken, often pointing a finger at a media or medium (e.g. tape and disk) as what is wrong. Granted, in some environments the target medium (or media) destination is an easy culprit to point a finger at as the problem (e.g. the usual tape sucks or is dead mantra). However, for many environments, while there can be issues, more often than not it is not the media, medium, device or target storage system that is broken, instead it is how they are being used or abused.

This means revisiting how tools are used, along with how media or storage systems are allocated, used and retained with respect to different threat risk scenarios. After all, not everything is the same in the data center or information factory.

Thus modernizing data protection is more than swapping media or mediums, including types of storage systems, from one to another. It is also more than swapping out one backup or data protection tool for another. Modernizing data protection means rethinking what different applications and data need to be protected against various threat risks.


What this has to do with today's announcement is that EMC is among others in the industry moving towards a holistic data protection modernization thought model.

In my opinion what you are seeing out of EMC and some others is taking that step back and expanding the data protection conversation to revisit, rethink why, how, where, when and by whom applications and information get protected.

This announcement also ties into finding and removing costs vs. simply cutting cost at the cost of something elsewhere (e.g. service levels, performance, availability). In other words, finding and removing complexities or overhead associated with data protection while making it more effective.

Some closing points, thoughts and more links:

  • There is no such thing as a data or information recession
  • People and data are living longer while getting larger
  • Not everything is the same in the data center or information factory
  • Rethink data protection including when, why, how, where, with what and by whom
  • There is little data, big data, very big data and big fast data
  • Data protection modernization is more than playing buzzword bingo
  • Avoid using new technology in old ways
  • Data footprint reduction (DFR) can help counter changing data life-cycle patterns
  • EMC continues to leverage Avamar while keeping NetWorker relevant
  • Data Domain evolving for both backup and archiving as an example of one tool for multiple uses

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

EMC Evolves Enterprise Data Protection with Enhancements (Part I)


A couple of months ago at EMCworld there were announcements around ViPR and Pivotal, along with trust and clouds among other topics. During the recent EMCworld event there were questions among attendees about backup and data protection announcements (or the lack thereof).

Modernizing Data Protection

Today EMC announced enhancements to its Backup Recovery Solutions (BRS) portfolio (@EMCBackup) that continue to enable modernizing information and application data protection, spanning Avamar, Data Domain, Mozy and NetWorker.

Keep in mind you can’t go forward if you can’t go back, which means if you do not have good data protection to go to, you can’t go forward with your information.

EMC Modern Data Protection Announcements

As part of their Backup to the Future event, EMC announced the following:

  • New generation of data protection products and technologies
  • Data Domain systems: enhanced application integration for backup and archive
  • Data protection suite tools Avamar 7 and Networker 8.1
  • Enhanced Cloud backup capabilities for the Mozy service
  • Paradigm shift as part of data protection modernization, including revisiting why, when, where, how, with what and by whom data protection is accomplished

What did EMC announce for data protection modernization?

While much of the EMC data protection announcement is around product, there is also the aspect of rethinking data protection. This means looking at data protection modernization beyond swapping out media (e.g. tape for disk, disk for cloud) or one backup software tool for another. Instead, it means revisiting why data protection needs to be accomplished and by whom, how to remove complexity and cost, and how to enable agility and flexibility. This also means enabling data protection to be used or consumed as a service in traditional, virtual and private or hybrid cloud environments.

EMC uses as an example (what they refer to as the Accidental Architecture) the way there are different groups and areas of focus, along with silos, associated with data protection. These groups span virtual, applications, database, server and storage among others.

The result is silos that need to be transformed, in part using new technology in new ways, as well as by addressing a barrier to IT convergence (people and processes). The theme behind the EMC data protection strategy is to enable the needs and requirements of various groups (servers, applications, database, compliance, storage, BC and DR) while removing complexity.

Moving from Silos of data protection to a converged service enabled model

Three data protection and backup focus areas

This sets the stage for the three components for enabling a converged data protection model that can be consumed or used as a service in traditional, virtual and private cloud environments.


EMC three components of modernized data protection (EMC Future Backup)

The three main components (and their associated solutions) of EMC BRS strategy are:

  • Data management services: Policy and storage management, SLA, SLO, monitoring, discovery and analysis. This is where tools such as EMC Data Protection Advisor (aka via the WysDM acquisition) fit, among others, for coordination or orchestration, setting and managing policies along with other activities.
  • Data source integration: Applications, Database, File systems, Operating System, Hypervisors and primary storage systems. This is where data movement tools such as Avamar and Networker among others fit along with interfaces to application tools such as Oracle RMAN.
  • Protection storage: Targets, destination storage systems with media or mediums optimized for protecting and preserving data along with enabling data footprint reduction (DFR). DFR includes functionality such as compression and dedupe among others. An example of data protection storage is EMC Data Domain.

Read more about product items announced and what this all means here in the second of this two-part series.

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Tape is still alive, or at least in conversations and discussions


Depending on whom you talk to or ask, you will get different views and opinions, some stronger than others, on whether magnetic tape is dead or alive as a data storage medium. However, one aspect of tape that is very much alive is the discussion among those who are for it, against it, or who simply see it as one of many data storage mediums and technologies whose role is changing.

Here is a link to an ongoing discussion over in one of the LinkedIn group forums (Backup & Recovery Professionals) titled About Tape and disk drives. Rest assured, there is plenty of FUD and hype on both sides of the tape is dead (or alive) argument, not very different from the disk is dead vs. SSD or cloud arguments. After all, not everything is the same in data centers, clouds and information factories.

Fwiw, I removed tape from my environment about 8 years ago, or I should say directly, as some of my cloud providers may in fact be using tape in various ways that I do not see; nor do I care one way or the other, as long as my data is safe, secure and protected and SLAs are met. Likewise, I consult and advise for organizations where tape still exists yet its role is changing, same with those using disk and cloud.


I am not ready to adopt the singular view that tape is dead yet, as I know too many environments that are still using it; however I agree that its role is changing, thus I am not part of the tape cheerleading camp.

On the other hand, I am a fan of using disk based data protection along with cloud in new and creative ways (including for my own use) as part of modernizing data protection. Although I see disk as having a very bright and important future beyond what it is being used for now, at least today I am not ready to join the chants of tape is dead either.


Does that mean I can’t decide or don’t want to pick a side? NO

It means that I do not have to, nor should anyone have to, choose a side; instead, look at your options and what you are trying to do, and consider how you can leverage different techniques and tools to maximize your return on innovation. If that means tape is being phased out of your organization, good for you. If that means there is a new or different role for tape in your organization, co-existing with disk, then good for you.

If somebody tells you that tape sucks and that you are dumb and stupid for using it, without giving any informed basis for those comments, then call them dumb and stupid, requesting they come back when they can learn more about your environment, needs and requirements, ready to have an informed discussion on how to move forward.

Likewise, if you can make an informed value proposition on why and how to migrate to new ways of modernizing data protection without having to stoop to the tape is dead argument, or can cite some research or whatever, good for you; start telling others about it.


Otoh, if you need to use FUD and hype on why tape is dead, why it sucks or why it is bad, at least come up with some new and relevant facts, third-party research, arguments or value propositions.

You can read more about tape and its changing role at tapeisalive.com or Tapesummit.com.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Spring (May) 2012 StorageIO newsletter

Spring (May) 2012 newsletter

Welcome to the Spring (May) 2012 edition of the Server and StorageIO Group (StorageIO) newsletter. This follows the Fall (December) 2011 edition.

You can get access to this newsletter via various social media venues (some are shown below) in addition to StorageIO web sites and subscriptions.

Click on the following links to view the Spring (May) 2012 edition as HTML or PDF, or go to the newsletter page to view previous editions.

You can subscribe to the newsletter by clicking here.

Enjoy this edition of the StorageIO newsletter, and let me know your comments and feedback.

Nuff said for now

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

If March 31st is backup day, don't be fooled by restore on April 1st

With March 31st as World Backup Day, hopefully some will keep recovery and restoration in mind so as not to be fooled on April 1st.


When it comes to protecting data, it may not be a headline news disaster such as an earthquake, fire, flood, hurricane or act of man; rather it can be something as simple as accidentally overwriting a file, not to mention a virus or other more likely to occur problems. Depending upon who you ask, some will say backup or saving data is more important, while others will stand by the view that it is recovery or restoration that matters. Without one the other is not practical; they need each other, and both need to be done as well as tested to make sure they work.

Just the other day I needed to restore a file that I accidentally overwrote, and as luck would have it, my bad local copy had also just overwritten my local backup. However, I was able to pull an earlier version from my cloud provider, which gave a good opportunity to test and try some different things. In the course of testing, I did find some things that have since been updated, as well as some things to optimize for the future.


My opinion is that if not used properly, including ignoring best practices, any form of data storage medium or media, as well as software, could result in or be blamed for data loss. Some people have lost data as a result of using cloud storage services, just as other people have lost data or access to information on other storage mediums and solutions. For example, data has been lost on cloud, tape, Hard Disk Drives (HDD), Solid State Devices (SSD), Hybrid HDDs (HHDD), RAID and non-RAID, local and remote, and even optical based storage systems large and small. In some cases there have been errors or problems with the medium or media; in other cases storage systems have lost access to, or lost, data due to hardware, firmware, software or configuration problems, including human error, among other issues.

Now is the time to start thinking about modernizing data protection, and that means more than simply swapping out media. Data protection modernization the past several years has focused on treating the symptoms of downstream problems at the target or destination. This has involved swapping out or moving media around and applying data footprint reduction (DFR) techniques downstream to give near term tactical relief, as has been the case with backup, restore, BC and DR for many years. The focus is starting to expand to addressing the source of the problem, which is an expanding data footprint upstream, using different data footprint reduction tools and techniques. This also means using different metrics, including keeping performance and response time in perspective as part of reduction rates vs. ratios, while leveraging different techniques and tools from the data footprint reduction tool box. In other words, it is time to stop swapping out media like changing tires that keep going flat on a car; find and fix the problem, and change the way data is protected (and when) to cut the impact downstream.
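Since reduction ratios and rates are easy to conflate when comparing DFR techniques, here is a quick sketch of the distinction (example numbers only): a ratio compares original to reduced size, while the rate expresses the same change as a percentage of the original.

```python
# Quick sketch: data footprint reduction (DFR) ratio vs. savings rate.
# Example byte counts are illustrative only.

def dfr_ratio(original: float, reduced: float) -> float:
    """Reduction ratio, e.g. 4.0 means 4:1."""
    return original / reduced

def dfr_savings_pct(original: float, reduced: float) -> float:
    """Percent of the original footprint eliminated."""
    return 100 * (1 - reduced / original)

print(dfr_ratio(1000, 250))        # 4.0  -> a 4:1 ratio...
print(dfr_savings_pct(1000, 250))  # 75.0 -> ...is 75% savings
print(dfr_savings_pct(1000, 100))  # 90.0 -> while 10:1 is 90% savings
```

Note how the incremental savings shrink as ratios climb, which is one reason to keep performance and response time in the picture rather than chasing ratios alone.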

Here is a link to a free download of chapter 5 (Data Protection: Backup/Restore and Business Continuance / Disaster Recovery) from my new book Cloud and Virtual Data Storage Networking (CRC Press).

Cloud and Virtual Data Storage Networking: Intel Recommended Reading List

Additional related links to read more and sources of information:

Choosing the Right Local/Cloud Hybrid Backup for SMBs
E2E Awareness and insight for IT environments
Poll: What Do You Think of IT Clouds?
Convergence: People, Processes, Policies and Products
What do VARs and Clouds as well as MSPs have in common?
Industry adoption vs. industry deployment, is there a difference?
Cloud conversations: Loss of data access vs. data loss
Clouds and Data Loss: Time for CDP (Commonsense Data Protection)?
Clouds are like Electricity: Dont be scared
Wit and wisdom for BC and DR
Criteria for choosing the right business continuity or disaster recovery consultant
Local and Cloud Hybrid Backup for SMBs
Is cloud disaster recovery appropriate for SMBs?
Laptop data protection: A major headache with many cures
Disaster recovery in the cloud explained
Backup in the cloud: Large enterprises wary, others climbing on board
Cloud and Virtual Data Storage Networking (CRC Press, 2011)
Enterprise Systems Backup and Recovery: A Corporate Insurance Policy

Take a few minutes out of your busy schedule and check to see if your backups and data protection are working, and make sure to test restoration and recovery to avoid an April Fools' type surprise. One last thing: you might want to check out the data storage prayer while you are at it.

Ok, nuff said for now.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

What industry pundits love and loathe about data storage

Drew Robb has a good article about what IT industry pundits, including vendors, analysts and advisors, love and loathe about storage, including comments from myself.

In the article Drew asks: What do you really love about storage and what are your pet peeves?

One of my comments and perspectives is that I like Hybrid Hard Disk Drives (HHDD) in addition to traditional Hard Disk Drives (HDD) along with Solid State Devices (SSD). As much as I like HHDDs, I also believe that, as with any technology, they are not the best solution for everything; however they can be used in more ways than are being seen. Here is the fifth installment of a series on HHDDs that I have done since June 2010, when I received my first HHDD, a Seagate Momentus XT. You can read the other installments of my momentus moments here, here, here and here.

Seagate Momentus XT
HHDD with integrated nand flash SSD photo courtesy Seagate.com

Molly Rector, VP of marketing at tape summit resources vendor Spectra Logic, mentioned that what she does not like is companies that base their business plan on patent law trolling. I would have expected something different, along the lines of countering or correcting people who say tape sucks, tape is dead, or that tape is the cause of anything wrong with storage, thus clearing the air or putting up a fight for tape summit resources. Go figure…

Another of my comments involved clouds, of which there are plenty of conversations taking place. I do like clouds (I even recently wrote a book involving them); however, I'm a fan of using them where applicable to coexist with and enhance other IT resources. Don't be scared of clouds; however be ready, do your homework, listen, learn, and do proof of concepts to decide best practices along with when, where, what and how to use them.

Speaking of clouds, click here to read about who is responsible for cloud data loss and cast your vote, along with viewing the what do you think about IT clouds poll here.

Mike Karp (aka twitter @storagewonk), an analyst with Ptak Noel, mentions that midrange environments don't get respect from big (or even startup) vendors.

I would take that a step further by saying that, compared to six or so years ago, SMBs are getting night and day better respect along with attention from most vendors; however what is lacking is respect for the SOHO sector (e.g. the lower end of SMB down to or just above consumer).

Granted, some vendors that have traditionally sold into those sectors, including server vendors Dell and HP, get it or at least see the potential, along with traditional enterprise vendor EMC via its Iomega unit. Yet I still see many vendors, including startups, in general discounting, shrugging off or sneering at the SOHO space, similar to those who dissed or did not respect the SMB space several years ago. Like the SMB space, SOHO requires different products, packaging, pricing and routes to market via channel or etail mechanisms, which means change for some vendors. Those vendors who embraced the SMB space and realized what needed to change to adapt to those markets will also stand to do better with the SOHO.

Here is the reason that I think SOHO needs respect.

Simple: SOHOs grow up to become SMBs, SMBs grow up to become SMEs, and SMEs grow up to become enterprises, not to mention that the amount of data being generated, moved, processed and stored continues to grow. The net result is that SMB along with SOHO storage demands will continue to grow, and those vendors who can adjust to support those markets will also stand to gain new customers that in turn can become prospects for other solution offerings.

Cloud conversations

Not surprisingly, Eran Farajun of Asigra, which was doing cloud backups decades before they were known as clouds, loves backup (and restores). However I am surprised that Eran did not jump on the it's time to modernize and re-architect data protection theme. Oh well, I will have to have a chat with Eran about that sometime.

What was surprising were comments from Panzura, which has a good distributed (e.g. read also cloud) file system that can be used for various things, including online reference data. Panzura has a solution that normally I would not even think about in the context of being pulled into a Data Domain or dedupe appliance type discussion (e.g. tape sucks or other similar themes). So it is odd that they are playing to the tape sucks camp and theme vs. playing to where the technology can really shine, which IMHO is in the global, distributed, scale out and cloud file system space. Oh well, I guess you go with what you know, or with what has worked in the past to get some attention.

Molly Rector of Spectra also mentioned that she likes High Performance Computing; I am surprised that she did not throw in high productivity computing as well, in conjunction with big data, big bandwidth, green, dedupe, power, disk, tape and related buzzword bingo terms.

Also there are some comments from myself about cost cutting.

While I see the need for organizations to cut costs during tough economic times, I'm not a fan of simply cutting cost for the sake of cost cutting, as opposed to finding and removing the complexity that in turn removes the cost of doing work. In other words, I'm a fan of finding and removing waste, becoming more effective and productive, along with removing the cost of doing a particular piece of work. This in the end meets the bean counters' aim of cutting costs, however it can be done in a way that does not degrade service levels or the customer service experience. For example, instead of looking to cut backup costs, do you know where the real costs of doing data protection exist (hint: swapping out media is treating the symptoms), and if so, what can be done to streamline them from the source of the problem downstream to the target (e.g. media or medium)? In other words, redesign, review and modernize how data protection is done; leverage data footprint reduction (DFR) techniques including archive, compression, consolidation, data management, dedupe and other technologies in effective and creative ways. After all, return on innovation is the new ROI.

Check out Drew's article here to read more on the above topics and themes.

Ok, nuff said for now

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

The blame game: Does cloud storage result in data loss?

I recently came across a piece by Carl Brooks over at IT Tech News Daily that caught my eye, titled Cloud Storage Often Results in Data Loss. The piece has an effective title (good for search engine optimization, aka SEO), as it stood out from many others I saw on that particular day.

Industry Trend: Cloud storage

What caught my eye about Carl's piece is that it reads as if the facts, based on a quick survey, point to clouds resulting in data loss, as opposed to being an opinion that some cloud usage can result in data loss.


My opinion is that if not used properly, including ignoring best practices, any form of data storage medium or media could result in, or be blamed for, data loss. Some people have lost data as a result of using cloud storage services, just as other people have lost data or access to information on other storage mediums and solutions. For example, data has been lost on tape, Hard Disk Drives (HDD), Solid State Devices (SSD), Hybrid HDDs (HHDD), RAID and non-RAID, local and remote, and even optical based storage systems large and small. In some cases there have been errors or problems with the medium or media; in other cases storage systems have lost access to, or lost, data due to hardware, firmware, software or configuration problems, including human error, among other issues.


Technology failure: Not if, rather when and how to decrease impact
Any technology, regardless of what it is or who it is from, along with its architecture design and implementation, can fail. It is not if, rather when and how gracefully a failure occurs, along with what safeguards exist to decrease the impact, in addition to containing or isolating faults, that differentiates various products or solutions. How they automatically repair and self-heal to keep running, or support accessibility and maintain data integrity, is important, as is how those options are used. Granted, a failure may not be technology related per se, rather something associated with human intervention, configuration, change management (or lack thereof), along with accidental or intentional activities.

Walking the talk
I have used public cloud storage services for several years, including SaaS and AaaS as well as IaaS (see more XaaS here), and knock on wood, have not lost any data yet; loss of access, sure, however no data has been lost.

I follow my advice and best practices when selecting cloud providers looking for good value, service level agreements (SLAs) and service level objectives (SLOs) over low cost or for free services.

In the several years of using cloud based storage and services there has been some loss of access, however no loss of data. Those service disruptions, or losses of access to data and services, ranged from a few minutes to a little over an hour. In those scenarios, if I could not have waited for the cloud storage to become accessible, I could have accessed a local copy if it were available.

Had a major disruption occurred where it would have been several days before I could regain access to that information, or if it were actually lost, I have a data insurance policy. That data insurance policy is part of my business continuance (BC) and disaster recovery (DR) strategy. My BC and DR strategy is a multi-layered approach combining local, offline and offsite, along with online cloud data protection and archiving.

Assuming my cloud storage service could get data back to a given point (RPO) in a given amount of time (RTO), I have some options. One option is to wait for the service or information to become available again, assuming a local copy is no longer valid or available. Another option is to start restoration from a master gold copy and then roll forward changes from the cloud service as that information becomes available. In other words, I am using cloud storage as another resource that both protects what is local and complements how I locally protect things.
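Here is a simplified sketch of that decision tree; the copy states and values are illustrative stand-ins rather than any particular vendor's tooling.

```python
# Simplified sketch of the recovery options described above; values are
# illustrative stand-ins, not a particular product's API.
from datetime import datetime

local_copy = {"valid": False, "as_of": datetime(2011, 11, 1)}
gold_copy = {"valid": True, "as_of": datetime(2011, 10, 1)}  # offline master
cloud_rollforward_available = True  # newer changes held by the cloud service

if local_copy["valid"]:
    print("restore from local copy (fastest RTO)")
elif gold_copy["valid"]:
    print("restore gold copy as of", gold_copy["as_of"].date())
    if cloud_rollforward_available:
        print("roll forward changes from the cloud service (better RPO)")
else:
    print("wait for the cloud service to become accessible again")
```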

Minimize or cut data loss or loss of access
Anything important should be protected locally and remotely, meaning leveraging cloud along with a master or gold backup copy.

To cut the cost of protecting information, I also leverage archives, which means not all data gets protected the same. Important data is protected more often, reducing RPO exposure and speeding up RTO during restoration. Other data that is not as important is still protected, however on a different frequency with other retention cycles; in other words, tiered data protection. By implementing tiered data protection, best practices, and various technologies including data footprint reduction (DFR) such as archive, compression and dedupe, in addition to local disk to disk (D2D), disk to disk to cloud (D2D2C), along with routine copies to offline media (removable HDDs or RHDDs) that go offsite, I'm able to stretch my data protection budget further. Not only is my data protection budget stretched further, I have more options to speed up RTO with better granularity for recovery and enhanced RPOs.
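As a rough sketch of what tiered protection can look like in practice (tier names, frequencies and retentions are illustrative assumptions, not a recommendation):

```python
# Rough sketch of tiered data protection: not everything gets protected the
# same way. Tier names, frequencies and retentions are illustrative only.

protection_tiers = {
    "critical":  {"every_hours": 4,   "retain_days": 90,
                  "copies": ["local D2D", "cloud (D2D2C)", "offsite RHDD"]},
    "important": {"every_hours": 24,  "retain_days": 30,
                  "copies": ["local D2D", "cloud (D2D2C)"]},
    "reference": {"every_hours": 168, "retain_days": 365,
                  "copies": ["cloud archive", "offline RHDD"]},
}

for tier, p in protection_tiers.items():
    print(f"{tier}: protect every {p['every_hours']}h, "
          f"retain {p['retain_days']} days, copies: {', '.join(p['copies'])}")
```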

If you are looking to avoid losing data, or losing access to it, it is a simple equation, in no particular order:

  • Strategy and design
  • Best practices and processes
  • Various technologies
  • Quality products
  • Robust service delivery
  • Configuration and implementation
  • SLO and SLA management metrics
  • People skill set and knowledge
  • Usage guidelines or terms of service (ToS)

Unfortunately, clouds, like other technologies or solutions, get a bad reputation or get blamed when something goes wrong. Sometimes it is the technology or service that fails; other times it is a combination of errors that results in loss of access or lost data. With clouds, as has been the case with other storage mediums and systems in the past, when something goes wrong, and if it has been hyped, chances are it will become a target for blame or finger pointing vs. determining what went wrong so that it does not occur again. For example, cloud storage has been hyped as easy to use: don't worry, just put your data there, you can get out of the business of managing storage as the cloud will do that magically for you behind the scenes.

The reality is that while cloud storage solutions can offload functions, someone is still responsible for making decisions about usage and configuration that impact availability. What separates various providers is their ability to design in best practices, isolate and contain faults quickly, and integrate resiliency as part of a solution, along with various SLAs aligned to the service level you are expecting, all in an easy to use manner.

Does that mean the more you pay the more reliable and resilient a solution should be?
No, not necessarily, as there can still be risks including how the solution is used.

Does that mean low cost or for free solutions have the most risk?
No, not necessarily as it comes down to how you use or design around those options. In other words, while cloud storage services remove or mask complexity, it still comes down to how you are going to use a given service.

Shared responsibility for cloud (and non cloud) storage data protection
Anything important enough that you cannot afford to lose, or need quick access to, should be protected in different locations and on various mediums. In other words, balance your risk. The cloud storage service provider needs to take responsibility for meeting service expectations for a given SLA and the SLOs that you agree to pay for (unless free).

As the customer, you have the responsibility of following best practices supplied by the service provider, including reading the ToS. Part of the responsibility as a customer or consumer is to understand what the ToS, SLA and SLOs are for a given level of service that you are using. As a customer or consumer, this means doing your homework to be ready as a smart, educated buyer or consumer of cloud storage services.

If you are a vendor or value added reseller (VAR), your opportunity is to help customers with the acquisition process to make informed decisions. For VARs and solution providers, this can mean up-selling customers to a higher level of service by making them aware of the risk and reward benefits, as opposed to focusing on cost. After all, if an order taker at McDonald's can ask would you like to supersize your order, why can't you as a vendor or solution provider also have a value oriented up-sell message?

Additional related links to read more and sources of information:

Choosing the Right Local/Cloud Hybrid Backup for SMBs
E2E Awareness and insight for IT environments
Poll: What Do You Think of IT Clouds?
Convergence: People, Processes, Policies and Products
What do VARs and Clouds as well as MSPs have in common?
Industry adoption vs. industry deployment, is there a difference?
Cloud conversations: Loss of data access vs. data loss
Clouds and Data Loss: Time for CDP (Commonsense Data Protection)?
Clouds are like Electricity: Dont be scared
Wit and wisdom for BC and DR
Criteria for choosing the right business continuity or disaster recovery consultant
Local and Cloud Hybrid Backup for SMBs
Is cloud disaster recovery appropriate for SMBs?
Laptop data protection: A major headache with many cures
Disaster recovery in the cloud explained
Backup in the cloud: Large enterprises wary, others climbing on board
Cloud and Virtual Data Storage Networking (CRC Press, 2011)
Enterprise Systems Backup and Recovery: A Corporate Insurance Policy

Poll:  Who is responsible for cloud storage data loss?

Taking action, what you should (or should not) do
Don't be scared of clouds; however, do your homework, be ready, look before you leap and follow best practices. Look into the service level agreements (SLAs) associated with a given cloud storage product or service. Follow best practices for how you or someone else will protect whatever data is put into the cloud.

For critical data or information, consider having a copy of that data in the cloud as well as at or in another place, which could be a different cloud, local, or offsite and offline. Keep in mind that the theme for critical information and data is not if, rather when something will happen, so focus on what can be done to decrease the risk or impact; in other words, be ready.

Data put into the cloud can be lost, or loss of access to it can occur for some amount of time, just as happens with non-cloud storage such as tape, disk or SSD. What minimizes your risk when using traditional local or remote as well as cloud storage are best practices: how things are configured, protected, secured and managed. The type and quality of the storage product or cloud service can also have a big impact. Sure, a quality product or service can fail; however, you can also design and configure to decrease those impacts.

Wrap up
Bottom line: do not be scared of cloud storage; however, be ready, do your homework, review best practices, and understand the benefits and caveats, risk and reward. For those who want to learn more about cloud storage (public, private and hybrid) along with data protection, data management and data footprint reduction among other related topics and best practices, I happen to know of some good resources. Those resources, in addition to the links provided above, include Cloud and Virtual Data Storage Networking (CRC Press), which you can learn more about here as well as find at Amazon among other venues. Also check out Enterprise Systems Backup and Recovery: A Corporate Insurance Policy by Preston De Guise (aka twitter @backupbear), which is a great resource for protecting data.

Ok, nuff said for now

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

Industry trend: People plus data are aging and living longer

Let's face it, people and information are living longer, and thus there are more of each, along with a strong interdependency between the two.

People living longer and data being retained longer should not be a surprise; take a step back and look at the bigger picture. There is no such thing as an information recession, with more data being generated, processed, moved and stored for longer periods of time, not to mention that data objects are also getting larger.


By data objects getting larger, think about a digital photo taken on a typical camera ten years ago, whose resolution was lower and thus whose file size would have been measured in kilobytes (thousands of bytes). Today megapixel resolutions are common on cell phones, smart phones and PDAs, and even larger with more robust digital and high definition (HD) still and video cameras. This means that a photo of the same object that resulted in a file of hundreds of KBytes ten years ago would be measured in MBytes today. With three dimensional (3D) cameras appearing along with higher resolutions, you do not need to be a rocket scientist or industry pundit to figure out what that growth trend trajectory looks like.
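As a rough back-of-the-envelope (assuming roughly 24 bit color and a nominal 10:1 JPEG compression factor; real file sizes vary widely), the growth is easy to quantify:

```python
# Back-of-the-envelope photo file size estimate. Assumes ~3 bytes per pixel
# (24 bit color) and a nominal 10:1 JPEG compression; illustrative only.

def approx_photo_size_mb(megapixels: float, bytes_per_pixel: float = 3,
                         compression: float = 10) -> float:
    return megapixels * 1_000_000 * bytes_per_pixel / compression / 1_000_000

print(approx_photo_size_mb(0.3))  # ~0.09 MB (90 KBytes), a VGA-class camera
print(approx_photo_size_mb(8))    # ~2.4 MB, a common newer camera or phone
```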

However it is not just the size of the data that is getting larger; there are also more instances along with copies of those files, photos, videos and other objects being created, stored and retained. Similar to data, there are more people now than ten years ago, and some of those have also grown larger, or at least around the waistline. This means that more people are creating and relying on larger amounts of information being available or accessible when and where needed. As people grow older, the amount of data that they generate will naturally increase, as will the information that they consume and rely upon.

Where things get interesting is that looking back in history, that is more than ten or even a hundred years, the trend is that there are more people, they are living longer, and they are generating larger amounts of data that is taking on new value or meaning. Heck, you can even go back hundreds to thousands of years and see early forms of data archiving and storage with drawings on the walls of caves and other venues. I wonder: had the cost (and ease of use) of storing and keeping data been lower back then, would more information have been saved? Or was it a case of the then state of the art data and information storage medium being too difficult to use, combined with limited capacity, so they simply ran out of storage and retention media (e.g. walls and ceilings)?

Let's come back to the present for a moment and another trend: data that in the past would have been kept offline, or at best near-line, due to cost and other limits or constraints is finding its way online, either in public or private venues (or clouds if you prefer).

Thus the trend is one of expanding data life cycles, with some types of data being kept online or readily accessible as its value is discovered.

Evolving data life cycle and access patterns

Here is an easy test: think of something that you googled or searched for a year or two ago that either could not be found or was very difficult to find. Now run that same search or topic query and see if anything appears, and if it does, how many instances of it appear. Then make a note to repeat the test in six months or a year and compare the results.

Now back to the future, however with an eye to the past, and things get even more interesting: some researchers are saying that in centuries to come we should expect to see more people living not only into their hundreds, but even longer. This follows the trend of average life expectancy continuing to increase over decades and centuries.

What if people start to live hundreds of years or even longer? What about the information they will generate and rely upon, and its later life cycle or span?

More information and data

Here is a link to a post where a researcher sees that, very far down the road, people could live to be a thousand years old, which brings up the question: what about all the data they will generate and rely upon during their lifetimes?

Ok, now back to the 21st century, where it is safe to say that there will be more data and information to process, move, store and keep for longer periods of time in a cost effective way. This means applying data footprint reduction (DFR) such as archiving, backup and data protection modernization, compression, consolidation where possible, dedupe, and data management including deletion where applicable, along with other techniques, technologies and best practices.

Will you outlive your data, or will your data survive you?

These are among other things to ponder while you enjoy your summer (northern hemisphere) vacation, sitting beach side or pool side enjoying a cool beverage, perhaps gazing at the passing clouds and reflecting on all things great and small.

Clouds: Don't be scared, however look before you leap and be prepared

Ok, nuff said for now.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

SMB, SOHO and low end NAS gaining enterprise features

Here is a link to an interview that I did providing industry trends, perspectives and commentary on how Network Attached Storage (NAS), aka file and data sharing, for the Small Medium Business (SMB), Small Office Home Office (SOHO) and consumer or low end markets is gaining features and functionality traditionally associated with larger enterprise offerings, however without the large price. In addition, here is a link to some tips for small business NAS storage, and another to a perspective on how choosing an SMB NAS is getting easier (and here for comments on unified storage).

Click on the image below to listen to a podcast that I did with comments and perspectives involving SMB, SOHO, ROBO and low end NAS.

Listen to comments by Greg Schulz of StorageIO on SMB, SOHO, ROBO and low end NAS

If your favorite or preferred product or vendor was not mentioned in the above links, don't worry; as with many media interviews there is a limited amount of time or a narrow scope, so those mentioned were simply among others in the space.

Speaking of others, there are many players in the broad and diverse SMB, SOHO, ROBO and consumer NAS and unified storage space. For example there are QNAP, SMC, Huawei, Synology and Starwind, along with Buffalo Technology, Cisco, Dlink, Dell, Data Robotics (Drobo), EMC (Iomega), Hewlett-Packard (HP), Intel, Microsoft, Overland Storage (Snap Server), Seagate (BlackArmor) and Western Digital among many others. There is a lot of diversity in this NAS space, and some of these vendors are household names that you would expect to see in the upper SMB and mid sized environments, and even into the enterprise.

For those who have other favorites or want to add another vendor to those already mentioned above, feel free to respond with a polite comment below. Oh, and for disclosure, I bought my SMB or low end NAS, an Iomega IX4, from Amazon.com.

Ok, nuff said for now.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

Are Hard Disk Drives (HDDs) getting too big?

Let's start out by clarifying something: in terms of context or scope, big means storage capacity, as opposed to the physical packaging size of a hard disk drive (HDD), which is getting smaller.

So, in terms of storage capacity, are HDDs getting too big?

This question of whether HDD storage capacity is getting too big to manage comes up every few years, and it is the topic of Rick Vanover's (aka twitter @RickVanover) Episode 27 podcast: Are hard drives getting too big?

Veeam community podcast guest appearance

As I discuss in this podcast with Rick Vanover of Veeam, with 2TB drives shipping and even larger 4TB, 8 to 9TB, 18TB, 36TB and 48 to 50TB drives not many years away, sure, they are getting bigger (in terms of capacity); however, we have been here before (or at least some of us have). We discuss how back in the late 90s HDDs were going from 5.25 inch to 3.5 inch (now they are going from 3.5 inch to 2.5 inch), and 9GB drives were big, seen by some as a scary proposition for doing RAID rebuilds, drive copies or backups among other things, not to mention putting too many eggs (or data) in one basket.

In some instances vendors have been able to combine various technologies, algorithms and other techniques to RAID rebuild a 1TB or 2TB drive in the same or less time than it used to take to process a 9GB HDD. However, those improvements are not enough; more will be needed, leveraging faster processors, I/O busses and backplanes, HDDs with more intelligence and performance, and different algorithms and design best practices among other techniques that I discussed with Rick (the sketch below shows why rebuild rates must keep scaling). After all, there is no such thing as a data recession, with more information to be generated, processed, moved, stored, preserved and served in the future.
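To put the rebuild math in perspective, here is a simple back of the envelope sketch (the rebuild rates below are assumptions for illustration, not vendor specifications; real rebuild times depend on RAID level, drive count, controller and competing workload):

```python
# Naive full-drive rebuild time = capacity / effective rebuild rate,
# so holding rebuild time constant requires the rate to grow in
# proportion to capacity. All rates below are illustrative assumptions.

def rebuild_hours(capacity_gb, rate_mb_per_sec):
    return capacity_gb * 1000 / rate_mb_per_sec / 3600

def rate_needed_mb_per_sec(capacity_gb, target_hours):
    return capacity_gb * 1000 / (target_hours * 3600)

print(f"9 GB at 5 MB/s   -> {rebuild_hours(9, 5):.1f} hours")      # late 90s class drive
print(f"2 TB at 100 MB/s -> {rebuild_hours(2000, 100):.1f} hours") # circa 2011 class drive
print(f"2 TB rebuilt as fast as that 9 GB drive needs "
      f"~{rate_needed_mb_per_sec(2000, 0.5):.0f} MB/s")
```

In other words, a roughly 200x jump in capacity paired with only a 20x jump in effective rebuild rate still leaves rebuilds about 10x longer, which is why smarter rebuild algorithms and faster back ends matter as drives keep growing.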

If you are interested in data storage, check out Rick's podcast and hear some of our other discussion points, including how SSDs will help keep the HDD alive, similar to how HDDs are offloading tape from its traditional backup role, each with its changing or expanding focus among other things.

On a related note, here is a post about RAID remaining relevant yet continuing to evolve. We also talk about Hybrid Hard Disk Drives (HHDD), where a single sealed HDD device contains flash and DRAM along with a spinning disk, all managed by the drive's internal processor with no special external software or hardware needed.

Listen to comments by Greg Schulz of StorageIO on HDD, HHDD, SSD, RAID and more

Put on your headphones (or not) and check out Rick's podcast here (or via the headphone image above).

Thanks again Rick, really enjoyed being a guest on your show.

What's your take: are HDDs getting too big in terms of capacity, or do we need to leverage other tools, technologies and techniques to be more effective in managing an expanding data footprint, including use of data footprint reduction (DFR) techniques?

Ok, nuff said for now.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

Using Removable Hard Disk Drives (RHDDs)

Removable hard disk drives (RHDD) are a form of removable media (a category that also includes magnetic tape) addressing many common use cases. Usage scenarios include enabling bulk data portability for larger environments, or disk to disk (D2D) backup where the media needs to be physically moved offsite for small and mid sized environments. RHDDs include, among others, those from Imation such as the Odyssey (which is what I use) and the ProStor RDX (OEMed by Imation and others). RHDDs and tape, along with other forms of portable media including flash based devices, being removable and portable, presumably should have some extra packaging protection to safeguard against static shock, in addition to supporting encryption capabilities.

Compared to disks including RHDDs, tape for most and particularly larger environments should have an overall lower media cost for parking, preserving and, when needed, serving inactive or archived data (e.g. the changing role of tape from day to day backup to archive); a rough sketch of that math follows below. Of course your real costs will vary by usage, in addition to how the media is combined with data footprint reduction and other technologies.
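As a simple hedged illustration (the prices below are assumptions for the 2011 timeframe, not quotes; actual street prices vary by vendor, capacity point and volume), the media cost gap looks something like this:

```python
# Hypothetical media prices for illustration only.

tape_cartridge_cost, tape_cartridge_tb = 40.0, 1.5  # an LTO-5 class cartridge (assumed price)
rhdd_cost, rhdd_tb = 160.0, 1.0                     # a removable HDD cartridge (assumed price)

print(f"Tape : ${tape_cartridge_cost / tape_cartridge_tb:,.0f} per TByte of media")
print(f"RHDD : ${rhdd_cost / rhdd_tb:,.0f} per TByte of media")

# Caveat: the tape drive itself can cost thousands of dollars while an
# RHDD dock is inexpensive, so for small environments with only a few
# TBytes the total cost picture can flip in favor of RHDDs.
```

This drive vs. dock caveat is also why, as noted below, the smaller the environment, the more affordable RHDDs become relative to tape.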

A big benefit of RHDDs is that they are random access, meaning data can be searched and found quickly, vs. tape media, which has great sequential or streaming capabilities if you have a system that can sustain that ability. The other benefit of RHDDs is that, depending on their implementation, they should plug and play with your systems, appearing as disk without any extra drivers, configuration or software tools, making for ease of use. Being removable, they can be used for portability, such as sending data to a cloud or MSP as part of an initial bulk copy, sending data offsite or taking it home as part of an offsite backup, data protection or BC/DR strategy, as well as for archiving. The caveat with RHDDs is that their cost per TByte will generally be higher than tape, and they require a docking station or specific drive interface depending on the product and configuration.

RHDDs are a great complement to traditional fixed or non removable disk, Hybrid Hard Disk Drive (HHDD) and Solid State Device (SSD) based storage, and they coexist with cloud or MSP backup and archive solutions. The smaller the environment, the more affordable using RHDDs becomes vs. tape for backup and archive operations or when portability is required. Even if using a cloud or managed service provider (MSP) for backup, network bandwidth costs, availability or performance may limit the amount of data that can be moved in a cost effective way. For example, place an archive and a gold or master copy of your static data on an RHDD kept in a safe or offsite place, then send the data that routinely changes to the cloud or MSP provider (think full copies local and offsite, plus partial full and incremental copies in the cloud).

By leveraging archiving and data footprint reduction (DFR) techniques including dedupe and compression, you can stretch your budget by sending less data to cloud or MSP services while using removable media for data protection; a quick sketch of the arithmetic follows below. You would be surprised how many TBytes of data can be kept in a safe deposit box. For my own business, I have used RHDDs for several years to keep gold master copies as well as archives offsite as part of a disk to disk (D2D) or D2D2RHDD strategy. That data protection strategy is also complemented by sending active data to a cloud backup MSP (encrypted of course). It might be belt and suspenders; however, it is also eating my own dog food, practicing what I talk about, and the approach has proven itself a few times.
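Here is that sketch (the change rate, DFR ratio and bandwidth are all assumptions for illustration; real reduction ratios and effective upload rates vary widely):

```python
# Hypothetical hybrid protection scenario: gold/master copy on RHDD,
# changed data sent to a cloud MSP after dedupe and compression.

total_tb = 2.0          # protected data kept locally on RHDD (assumed)
daily_change_gb = 20.0  # data changing per day that goes to the cloud (assumed)
dfr_ratio = 4.0         # combined dedupe + compression ratio (assumed)
upload_mbps = 10.0      # effective upload bandwidth (assumed)

reduced_gb = daily_change_gb / dfr_ratio
daily_hours = reduced_gb * 8000 / upload_mbps / 3600  # GBytes -> megabits -> hours
print(f"Daily upload after DFR: {reduced_gb:.1f} GBytes, ~{daily_hours:.1f} hours")

# Seeding the full data set over the same link would take weeks, which is
# why the initial bulk copy travels to the cloud or MSP on an RHDD instead.
seed_days = total_tb * 1000 * 8000 / upload_mbps / 3600 / 24
print(f"Seeding {total_tb:.0f} TBytes over the wire: ~{seed_days:.0f} days")
```

With these assumed numbers, the daily changed data fits comfortably in an overnight window, while the initial full copy clearly belongs on removable media.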

Here are some related links to more material:
Removable disk drives vs. tape storage for small businesses
The pros and cons of removable disk storage for small businesses
Removable storage media appealing to SMBs, but with caveats
StorageIO Momentus Hybrid Hard Disk Drive (HHDD) Moments

Ok, nuff said for now

Cheers Gs

Greg Schulz – Author The Green and Virtual Data Center (CRC), Resilient Storage Networks (Elsevier) and coming summer 2011 Cloud and Virtual Data Storage Networking (CRC)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

Spring 2011 Server and StorageIO News Letter

Spring 2011 Newsletter

Welcome to the Spring 2011 edition of the Server and StorageIO Group (StorageIO) newsletter. This follows the Winter 2011 edition.

You can access this newsletter via various social media venues (some are shown below) in addition to the StorageIO web sites and subscriptions.

Click on the following links to view the Spring 2011 edition as HTML or PDF, or go to the newsletter page to view previous editions.

Follow via Google Feedburner here or via email subscription here.

You can also subscribe to the newsletter by simply sending an email to newsletter@storageio.com

Enjoy this edition of the StorageIO newsletter, and let me know your comments and feedback.

Ok, nuff said for now

Cheers gs

Greg Schulz – Author The Green and Virtual Data Center (CRC), Resilient Storage Networks (Elsevier) and coming summer 2011 Cloud and Virtual Data Storage Networking (CRC)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved