Clouds and Data Loss: Time for CDP (Commonsense Data Protection)?

Today SNIA issued a press release pertaining to cloud storage, timed to coincide with SNW, where we can only presume vendors are telling their cloud storage stories.

Yet the chatter on the coconut wire, along with various news (here and here and here) and social media sites, is: how could cloud storage and information service provider T-Mobile/Microsoft/Sidekick lose customers' data?

Data loss is a dangerous phrase; after all, your data may still be intact somewhere. However, if you cannot get to it when needed, it may as well be lost as far as you are concerned.

There are many types of data loss, including loss of accessibility or availability as well as outright loss. Let me clarify: loss of data availability or accessibility means that your data is still intact somewhere, perhaps off-line on a removable disk, optical media or tape, or at another site on-line, near-line or off-line; you just cannot get to it yet. Then there is real data loss, where your primary copy as well as your backup and archive copies are lost, stolen, corrupted or were never actually protected.

Clouds, and managed service providers in general, are getting beat up over loss of access, loss of availability or actual data loss. However, before jumping on that bandwagon and pointing fingers at the service, how about stepping back for a minute? Granted, given all of the cloud hype and the proliferation of managed service offerings on the web (excuse me, cloud), there is a bit of a lightning-rod backlash or "see, I told you so" reaction.

What's different about this story compared to prior disruptions at Amazon, Google, Blackberry and others is that instead of access to information or services (calendar, email, contacts or other documents) being disrupted for a period of time, it sounds as though data may actually have been lost.

Lost data, you say? How can you lose data? After all, there are copies of copies of data that have been snapshotted, replicated and deduplicated across different tiers of storage, right?

Certainly anyone involved in data management or data protection is asking the question: why not go back to a snapshot copy, a replicated volume, or a backup copy on disk or tape?

Needless to say, finger-pointing aerobics are, or soon will be, in full swing. Instead, let's ask the question: is it time for CDP, as in Commonsense Data Protection?

However, rather than pointing blame, spouting off about how bad clouds are, or arguing that they are getting an unfair shake and undue coverage, let's remember that just because there might be a few bad ones, or a few recent outages, not all clouds are bad.

I can think of many ways to actually lose data. However, totally losing data does not require a technology failure; it can result from something much simpler, and it applies equally to cloud, virtual and physical data centers and storage environments, from the largest to the smallest, down to the consumer. The remedy is simple common sense and best practice: make copies of all data and keep extra copies around somewhere, with copies of more frequently used or recent data readily available.
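The "copies of copies" idea above is often summarized as the 3-2-1 rule: keep at least three copies of your data, on at least two different media, with at least one off-site. As a minimal sketch (the `meets_3_2_1` helper and the copy-inventory dictionaries are hypothetical, purely for illustration):

```python
def meets_3_2_1(copies):
    """Check an inventory of data copies against the 3-2-1 rule:
    at least 3 copies, on at least 2 distinct media, at least 1 off-site.

    Each copy is described as a dict, e.g. {"medium": "disk", "offsite": False}.
    """
    media = {c["medium"] for c in copies}          # distinct media types
    offsite = any(c["offsite"] for c in copies)    # at least one copy off-site
    return len(copies) >= 3 and len(media) >= 2 and offsite

# Example: primary disk, local tape, and a cloud copy satisfy the rule;
# three copies on the same local disk array do not.
```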

Some trends I'm seeing include, among others:

  • Low cost craze leveraging free or near free services and products
  • Cloud hype and cloud bashing, and the need to discuss the wide area between those extremes
  • Renewed need for basic data protection including BC/DR, HA, backup and security
  • Opportunity to re-architect data protection in conjunction with other initiatives
  • Lack of adequate funding for continued and proactive data protection

Just to be safe, let's revisit some common data protection best practices:

  • Learn from mistakes, preferably during testing, with the aim of not repeating them
  • Most disasters in IT and elsewhere are the result of a chain of events not being contained
  • RAID is not a replacement for backup; it simply provides availability or accessibility
  • Likewise, mirroring or replication by itself is not a replacement for backup
  • Use point in time RPO based data protection such as snapshots or backup with replication
  • Maintain a master backup or gold copy that can be used to restore to a given point in time
  • Keep backup on another medium, also protect backup catalog or other configuration data
  • If using deduplication, make sure that the indexes, dictionary or other metadata are also protected
  • Moving your data into the cloud is not a replacement for a data protection strategy
  • Test restoration of backed-up data, both locally and from cloud services
  • Employ data protection management (DPM) tools for event correlation and analysis
  • Data stored in clouds needs to be part of a BC/DR and overall data protection strategy
  • Keep an extra copy of data placed in clouds at an alternate location as part of BC/DR
  • Ask yourself what you will do when your cloud data goes away (note: it is not if, it is when)
  • Combine multiple layers or rings of defense, and assume that what can break will break
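The "test restoration" advice above can be automated at a basic level. Here is a minimal sketch of a restore check that compares restored files against the source by checksum; `verify_restore` and the directory layout are hypothetical illustrations, not any vendor's tool:

```python
import hashlib
import os

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(source_dir, restored_dir):
    """Compare every file under source_dir against its restored copy.

    Returns a list of relative paths that are missing or differ;
    an empty list means the test restore matched the source.
    """
    mismatches = []
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            src = os.path.join(root, name)
            rel = os.path.relpath(src, source_dir)
            restored = os.path.join(restored_dir, rel)
            if not os.path.exists(restored) or sha256_of(src) != sha256_of(restored):
                mismatches.append(rel)
    return mismatches
```

Run periodically against a scratch restore area, a check like this catches silently corrupted or never-protected files before you need them, which is the whole point of testing restores rather than just backups.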

Clouds should not be scary; clouds do not magically solve all IT or consumer issues. However, when of high caliber, they can be an effective tool as part of a total data protection strategy.

Perhaps this will be a wake-up call, a reminder that it is time to think beyond cost savings and shift back to basic data protection best practices. What good is the best or most advanced technology if you have less than adequate practices or policies? Bottom line: it is time for Commonsense Data Protection (CDP).

Ok, nuff said for now. I need to go make sure I have a good removable backup in case my other local copies fail or I'm not able to get to my cloud copies!

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Comments

  • We totally agree on the need for testing. But how can you test every backup, every day?

    You can't. It's too expensive.

    So (for Tivoli Storage Manager users anyway) we developed an automated random-sample restore testing system. It installs easily, touches every computer you back up, and shows you the root cause.

    One of the cooler things it finds is junk storage: backups that should never have been made, and are wasting space.

    See our 3-minute concept videos: search YouTube for TSMworks.

    Hope this helps.