Preparing For World Backup Day 2017 Are You Prepared

In case you have forgotten, or were not aware, this coming Friday March 31 is World Backup Day 2017 (and recovery day). The annual day is a reminder to make sure you are protecting your applications, data, information, and configuration settings as well as data infrastructures. While the emphasis is on backup, that also means recovery as well as testing to make sure everything is working properly as part of on-prem and cloud data protection.

What the Vendors Have To Say

Today I received the following from Kylle over at TOUCHDOWNPR on behalf of their clients providing their perspectives on what World Backup Day means, or how to be prepared. Keep in mind these are not Server StorageIO clients (granted some have been in the past, or I know them, that is a disclosure btw), and this is in no way an endorsement of what they are saying, or advocating. Instead, this is simply passing along to you what was given to me.

Not included in this list? No worries, add your perspectives (politely) to the comments, or, drop me a note, and perhaps I will do a follow-up or addition to this.

Kylle O’Sullivan
TOUCHDOWNPR
Email: Kosullivan@touchdownpr.com
Mobile: 508-826-4482
Skype: Kylle.OSullivan

“Data loss and disruption happens far too often in the enterprise. Research by Ponemon in 2016 estimates the average cost of an unplanned outage has spiralled to nearly $9,000 a minute, causing crippling downtime as well as financial and reputational damage. Legacy backups simply aren’t equipped to provide seamless operations, with zero Recovery Point Objectives (RPO) should a disaster strike. In order to guarantee the availability of applications, synchronous replication with real-time analytics needs to be simple to setup, monitor and manage for application owners and economical to the organization. That way, making zero data loss attainable suddenly becomes a reality.” – Chuck Dubuque, VP Product Marketing, Tintri

“With today’s “always-on” business environment, data loss can destroy a company’s brand and customer trust. A multiple software-based strategy with software-defined and hyperconverged storage infrastructure is the most effective route for a flexible backup plan.  With this tactic, snapshots, replication and stretched clusters can help protect data, whether in a local data center cluster, across data centers or across the cloud. IT teams rely on these software-based policies as the backbone of their disaster recovery implementations as the human element is removed. This is possible as the software-based strategy dictates that all virtual machines are accurately, automatically and consistently replicated to the DR sites. Through this automatic and transparent approach, no administrator action is required, saving employees time, money and providing peace of mind that business can carry on despite any outage.” – Patrick Brennan, Senior Product Marketing Manager, Atlantis Computing

“It’s only a matter of time before your datacenter experiences a significant outage, if it hasn’t already, due to a wide range of causes, from something as simple as human error or power failure to criminal activity like ransomware and cyberattacks, or even more catastrophic events like hurricanes. Shifting thinking to ‘when’ as opposed to ‘if’ something like this happens is crucial; crucial to building a more flexible and resilient IT infrastructure that can withstand any kind of disruption resulting in negative impact on business performance. World Backup Day reminds us of the importance of both having a backup plan in place and as well as conducting regular reviews of current and new technology to do everything possible to keep business running without interruption. Organizations today are highly aware that they are heavily dependent on data and critical applications, and that losing even just an hour of data can greatly harm revenues and brand reputation, sometimes beyond repair. Savvy businesses are taking an all-inclusive approach to this problem that incorporates cloud-based technologies into their disaster recovery plans. And with consistent testing and automation, they are ensuring that those plans are extremely simple to execute against in even the most challenging of situations, a key element of successfully avoiding damaging downtime.” Rob Strechay, VP Product, Zerto

“Data is one of the most valuable business assets and when it comes to data protection chief among its IT challenges is the ever-growing rate of data and the associated vulnerability. Backup needs to be reliable, fast and cost efficient. Organizations are on the defensive after a disaster and being able to recover critical data within minutes is crucial. Breakthroughs in disk technologies and pricing have led to very dense arrays that are power, cost and performance efficient. Backup has been revolutionized and organizations need to ensure they are safeguarding their most valuable commodity – not just now but for the long term. Secure archive platforms are complementary and create a complete recovery strategy.”  – Geoff Barrall, COO, Nexsan

Consider the DR Options that Object Storage Adds
“Data backup and disaster recovery used to be treated as separate processes, which added complexity. But with object storage as a backup target you now have multiple options to bring backup and DR together in a single flow. You can configure a hybrid cloud and tier a portion of your data to the public cloud, or you can locate object storage nodes at different locations and use replication to provide geographic separation. So, this World Backup Day, consider how object storage has increased your options for meeting this critical need.” – Jon Toor, Cloudian CMO
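
To make the object storage backup target idea above a bit more concrete, here is a minimal sketch of my own (not from Cloudian or any other vendor quoted here) using the Python boto3 SDK, which speaks the S3 API that many object stores support. The endpoint, bucket and key names are hypothetical placeholders.

```python
# Hedged sketch: writing a backup archive to an S3-compatible object store.
# Endpoint, bucket and key names are hypothetical placeholders.
import boto3

# Many object storage systems expose the S3 API; point the client at yours.
s3 = boto3.client("s3", endpoint_url="https://objects.example.com")

# Upload the backup as an object. Bucket-level replication or cloud tiering,
# configured on the storage side, can then provide the geographic separation
# and DR options described above.
with open("backup-2017-03-31.tar.gz", "rb") as archive:
    s3.put_object(
        Bucket="backups",
        Key="fileserver/backup-2017-03-31.tar.gz",
        Body=archive,
    )
```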

What's In Your Data Protection Toolbox

What tools and technologies do you have in your data protection toolbox? Do you only have a hammer, and thus the answer to every situation is that it looks like a nail? Or do you have multiple tools and technologies, combined with your various tradecraft experiences, to apply different techniques?

storageio data protection toolbox

Where To Learn More

Follow these links to additional related material about backup, restore, availability, data protection, BC, BR, DR along with associated topics, trends, tools, technologies as well as techniques.

Time to restore from backup: Do you know where your data is?
February 2017 Server StorageIO Update Newsletter
Data Infrastructure Server Storage I/O Tradecraft Trends
Data Infrastructure Server Storage I/O related Tradecraft Overview
Data Infrastructure Primer and Overview (Its Whats Inside The Data Center)
What’s a data infrastructure?
Ensure your data infrastructure remains available and resilient
Part III Until the focus expands to data protection – Taking action
Welcome to the Data Protection Diaries
Backup, Big data, Big Data Protection, CMG & More with Tom Becchetti Podcast
Six plus data center software defined management dashboards
Cloud Storage Concerns, Considerations and Trends
Software Defined, Cloud, Bulk and Object Storage Fundamentals (www.objectstoragecenter.com)

Data Infrastructure Overview, Its Whats Inside of Data Centers
All You Need To Know about Remote Office/Branch Office Data Protection Backup (free webinar with registration)
Software Defined, Converged Infrastructure (CI), Hyper-Converged Infrastructure (HCI) resources
The SSD Place (SSD, NVM, PM, SCM, Flash, NVMe, 3D XPoint, MRAM and related topics)
The NVMe Place (NVMe related topics, trends, tools, technologies, tip resources)
Data Protection Diaries (Archive, Backup/Restore, BC, BR, DR, HA, RAID/EC/LRC, Replication, Security)
Software Defined Data Infrastructure Essentials (CRC Press 2017) including SDDC, Cloud, Container and more
Various Data Infrastructure related events, webinars and other activities

Additional learning experiences, along with common questions (and answers) as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Backup of data is important, so too is recovery, which also means testing. Testing means more than just whether you can read the tape, disk, SSD, USB, cloud or other medium (or location). Go a step further and verify not only that you can read the data from the medium, but also that your applications or software are able to use it. Have you protected your applications (e.g. not just the data), security keys, encryption, access, dedupe and other certificates along with metadata as well as other settings? Do you have a backup or protection copy of your protection environment, including recovery tools? What granularity of protection and recovery do you have in place, and when did you last test or try it? In other words, what this all means is be prepared, find and fix issues, and in the course of testing, don't cause a disaster.
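
As a minimal sketch of that verify-beyond-the-medium point (file names and paths here are hypothetical examples), a test restore can be checked against checksums recorded at backup time before handing the data to the applications that consume it:

```python
# Minimal sketch: verify a test restore against checksums recorded when the
# protection copy was made. File names and paths are hypothetical examples.
import hashlib

def sha256_of(path: str) -> str:
    """Checksum a file in chunks so large files need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# 1) Recorded at backup time.
manifest = {name: sha256_of(f"source/{name}") for name in ["report.docx"]}

# 2) After a test restore, confirm the restored bytes match what was protected.
for name, digest in manifest.items():
    if sha256_of(f"restored/{name}") != digest:
        raise RuntimeError(f"Restore verification failed for {name}")

# 3) Matching bytes still are not proof of usable data: go further and open
#    the restored files with the application (or library) that consumes them.
```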

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Backup, Big data, Big Data Protection, CMG & More with Tom Becchetti Podcast

server storage I/O trends

In this Server StorageIO podcast episode, I am joined by Tom Becchetti (@tbecchetti) for a Friday afternoon conversation recorded live at Meisters in Scandia Minnesota (thanks to the Meisters crew!).

Tom Becchetti

For those of you who may not know Tom, he has been in the IT, data center, data infrastructure, server and storage (as well as data protection) industry for many years (ok, decades) as a customer and vendor in various roles. Not surprisingly, our data infrastructure discussion involves server, software, storage, big data, backup, data protection, big data protection, CMG (Computer Measurement Group @mspcmg), copy data management, cloud, containers and fundamental tradecraft skills among other related topics.

Check out Tom on twitter @tbecchetti and @mspcmg as well as his new website www.storagegodfather.com. Listen to the podcast discussion here (42 minutes) as well as on iTunes.

Ok, nuff said for now…

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book Software-Defined Data Infrastructure Essentials (CRC Press).

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

Data Infrastructure Primer Overview (Its Whats Inside The Data Center)

Data Infrastructure Primer Overview

Updated 1/17/2018

Data Infrastructure Primer Overview looks at the resources that combine to support business, cloud and information technology (IT) among other applications that transform data into information or services. The fundamental role of data infrastructures is to provide a platform environment for applications and data that is resilient, flexible, scalable, agile, efficient as well as cost-effective. Put another way, data infrastructures exist to protect, preserve, process, move, secure and serve data as well as their applications for information services delivery. Technologies that make up data infrastructures include hardware, software, cloud or managed services, servers, storage, I/O and networking, along with people, processes and policies, plus various tools spanning legacy, software-defined, virtual, container and cloud environments.

Various Types and Layers of Infrastructures

Depending on your role or focus, you may have a different view than somebody else of what is infrastructure, or what an infrastructure is. Generally speaking, people tend to refer to infrastructure as those things that support what they are doing at work, at home, or in other aspects of their lives. For example, the roads and bridges that carry you over rivers or valleys when traveling in a vehicle are referred to as infrastructure.

Similarly, the system of pipes, valves, meters, lifts, and pumps that bring fresh water to you, and the sewer system that takes away waste water, are called infrastructure. The telecommunications networks, both wired and wireless such as cell phone networks, along with electrical generation and transmission networks, are considered infrastructure. Even the airplanes, trains, boats, and buses that transport us locally or globally are considered part of the transportation infrastructure. Anything that is below what you do, or that supports what you do, is considered infrastructure.

Software Defined Data Infrastructure overview

Figure 1 Business, IT Information, Data and other Infrastructures

This is also the situation with IT systems and services where, depending on where you sit or use various services, anything below what you do may be considered infrastructure. However, that also causes a context issue in that infrastructure can mean different things. For example in figure 1, the user, customer, client, or consumer who is accessing some service or application may view IT in general as infrastructure, or perhaps as business infrastructure.

Those who develop, service, and support the business infrastructure and its users or clients may view anything below them as infrastructure, from desktop to database, servers to storage, network to security, data protection to physical facilities. Moving down a layer (lower altitude) in figure 1 is the information infrastructure which, depending on your view, may also include servers, storage, and I/O hardware and software.

To help make a point, let’s think of the information infrastructure as the collection of databases, key-value stores, repositories, and applications along with development tools that support the business infrastructure. This is where you may find developers who maintain and create real business applications for the business infrastructure. Those in the information infrastructure usually refer to what’s below them as infrastructure. Meanwhile, those lower in the stack shown in figure 1 may refer to what’s above them as the customer, user, or application, even if the real user is up another layer or two.

Whats inside a data infrastructure
Context matters in the discussion of infrastructure. So for our look at server storage I/O fundamentals, data infrastructures support the databases and applications developers as well as things above, while existing above the physical facilities infrastructure, leveraging the power, cooling, and communication network infrastructures below.

SDDI and Data Infrastructure building blocks

Figure 2 Data Infrastructure fundamental building blocks (hardware, software, services).

Figure 2 shows the fundamental pillars or building blocks for a data infrastructure, including servers for computer processing, I/O networks for connectivity, and storage for storing data. These resources include both hardware and software as well as services and tools. The size of the environment, organization, or application needs will determine how large or small the data infrastructure is or can be.

For example, at one extreme you can have a single high-performance laptop with a hypervisor running OpenStack, along with various operating systems and their applications, leveraging flash SSD and high-performance wired or wireless networks to power a home lab or test environment. At the other extreme, you can have a scenario with tens of thousands (or more) servers, networking devices, and hundreds of petabytes (PBs) of storage (or more).

In figure 2 the primary data infrastructure components or pillars (server, storage, and I/O) of hardware and software resources are packaged and defined to meet various needs. Software-defined storage management includes configuring the server, storage, and I/O hardware and software as well as services for use, implementing data protection and security, provisioning, diagnostics, troubleshooting, performance analysis, and other activities. Server, storage, and I/O hardware and software can be individual components, prepackaged as bundles or application suites, or converged, among other options.

Figure 3 shows a deeper look into the data infrastructure shown at a high level in figure 2. The lower left of figure 3 shows the common-to-all-environments hardware, software, people, processes, and practices that include tradecraft (experiences, skills, techniques) and “valueware”. Valueware is how you define the hardware and software, along with any customization, to create a resulting service that adds value to what you are doing or supporting. Also shown in figure 3 are common application and services attributes including performance, availability, capacity, and economics (PACE), which vary with different applications or usage scenarios.

Data Infrastructure components

Figure 3 Data Infrastructure server storage I/O hardware and software components.

Applications are what transform data into information. Figure 4 shows how applications, which are software defined by people and software, consist of algorithms, policies, procedures, and rules that are put into some code to tell the server processor (CPU) what to do.

SDDI and SDDC server storage I/O

Figure 4 How data infrastructure resources transform data into information.

Application programs include data structures (not to be confused with infrastructures) that define what data looks like and how to organize and access it using the “rules of the road” (the algorithms). The program algorithms along with data structures are stored in memory, together with some of the data being worked on (i.e., the active working set). Additional data is stored in some form of extended memory storage devices such as Non-Volatile Memory (NVM) solid-state devices (SSD), hard disk drives (HDD), or tape, among others, either locally or remotely. Also shown in figure 4 are various devices that do input/output (I/O) with the applications and server, including mobile devices as well as other application servers.
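
As a toy example of that distinction (all names here are invented for illustration), a program's data structure defines what a record looks like, the active working set stays in memory, and colder data is serialized out to extended storage:

```python
# Toy illustration: a data structure (not an infrastructure) defines what the
# data looks like; the active working set lives in memory while other data is
# persisted to storage (SSD, HDD, cloud, etc.). All names are invented.
import json
from dataclasses import dataclass, asdict

@dataclass
class SensorReading:  # the "rules of the road" for organizing this data
    device: str
    timestamp: float
    value: float

# Active working set held in memory alongside the program's algorithms.
working_set = [SensorReading("dev1", 1490918400.0, 21.5)]

# Colder data gets written out to an extended memory/storage device or service.
with open("readings.json", "w") as f:
    json.dump([asdict(r) for r in working_set], f)
```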

Bringing IT All Together (for now)

Software Defined Data Infrastructure overview

Figure 5 Data Infrastructure fundamentals “big picture”

A fundamental theme is that servers process data using various applications programs to create information; I/O networks provide connectivity to access servers and storage; storage is where data gets stored, protected, preserved, and served from; and all of this needs to be managed. There are also many technologies involved, including hardware, software, and services as well as various techniques that make up a server, storage, and I/O enabled data infrastructure.

Server storage I/O and data infrastructure fundamental focus areas include:

  • Organizations: Markets and industry focus, organizational size
  • Applications: What’s using, creating, and resulting in server storage I/O demands
  • Technologies: Tools and hard products (hardware, software, services, packaging)
  • Tradecraft: Techniques, skills, best practices, how managed, decision making
  • Management: Configuration, monitoring, reporting, troubleshooting, performance, availability, data protection and security, access, and capacity planning

Where To Learn More

View additional Data Infrastructure and related topics via the following links.

Additional learning experiences, along with common questions (and answers) as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Whether you realize it or not, you may already be using, relying upon, affiliated with, supporting or otherwise involved with data infrastructures. Granted, what you or others generically refer to as infrastructure or the data center may, in fact, be the data infrastructure. Watch for more discussions and content about data infrastructures as well as related technologies, tools, trends, techniques and tradecraft in future posts as well as other venues, some of which involve legacy, others software-defined, cloud, virtual, container and hybrid.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Until the focus expands to data protection, backup is staying alive!

Storage I/O trends

This is the first of a three-part series discussing how and why vendors are keeping backup alive, read part two here.

Some vendors, Value Added Resellers (VARs), pundits (consultants, analysts, media, bloggers) and their followers want backup to not only be declared dead, they also want to attend (or send flowers to) the wake and funeral, not to mention see proof of burial, so to speak.

Yet many of these same vendors, VARs and their pundits are also helping or causing backup to stay alive.

Sure, there are plenty of discussions, including around industry adoption and customer deployment, about modernizing backup and data protection tied to disaster recovery (DR), business continuance (BC), high availability (HA) and business resiliency (BR).

On the other hand, the usual themes are around product or technology deployment to modernize backup by simply swapping out hardware (e.g. disk for tape, cloud for disk), applying data footprint reduction (DFR) including archiving, compression and dedupe or, another common scenario, switching from one vendor's tool to another.

How are vendors helping backup stay alive?

One of the routine things I hear from vendors among others is that backup needs to move out of the 70s, 80s and 90s, the era when the John Travolta movie Saturday Night Fever and the Bee Gees song "Stayin Alive" appeared, and into the current era (click here to hear the song via Amazon).

Stayin Alive Image via Amazon.com

Some vendors keep talking about and using the term backup instead of expanding the conversation to data protection, which includes backup/restore, business continuance (BC), disaster recovery (DR) along with archiving and security. Now let's be clear: we can not expect something like backup to be removed from the vocabulary overnight, as it has been around for decades, hence it will take time.

IMHO: The biggest barrier to moving away from backup is the industry, including vendors, their pundits, press/media, VARs and customers, who continue to insist on using or referring to backup vs. expanding the conversation to data protection. – GS @StorageIO

Until there's a broad focus on shifting to and using the term data protection, including backup, BC, DR and archiving, people will simply keep referring to what they know, read or hear (e.g. backup). On the other hand, if the industry starts putting more focus on using data protection with backup, people will start following suit using the two, and over time backup as a term can fade away.

Taking a step back to move forward

Some of the modernizing backup discussion is actually focused on taking a step back to reconsider why, when, where, how and with what different applications, systems and data get protected. Certainly there are the various industry trends, challenges and opportunities, some of which are shown below, including more data to protect, preserve and serve for longer periods of time.

Likewise there are various threat risks or scenarios to protect information assets from or against, not all of which are headline news making events.

data protection threat risk scenarios

Not all threat risks are headline news making events

There is an old saying in and around backup/restore, BC, DR, BR and HA about never letting a disaster go to waste. What this means, if you have never noticed, is that there is usually a flurry of marketing and awareness activity, including conversations about why you should do something about BC, DR and other data protection activities, right around or shortly after a disaster scenario. However, not all disasters or incidents are headline news making events, hence there should be more awareness every day vs. just during disaster season or situations. In addition, this also means expanding the focus to other situations that are likely to occur, including among others those in the following figure.

data protection headline news and beyond

Continue reading part two of this series here to see what can be done about shifting the conversation about modernizing data protection. Also check out conversations about trends, themes, technologies, techniques and perspectives in my ongoing data protection diaries discussions (e.g. www.storageioblog.com/data-protection-diaries-main/).

Ok, nuff said

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Data Protection Diaries – My data protection needs and wants

Storage I/O trends

Update 1/10/18

Rather than talking about what others should do or consider for their data protection needs, for this post I wrote down some notes using my Livescribe about what I need and want for my environment. As part of walking the talk, in future posts I'm going to expand a bit more on what I'm doing, as well as what I am considering for enhancements to my environment for data protection, which consists of cloud, virtual and physical.

Why and what am I Protecting?

live scribe example
Livescribe notes that I used for creating the following content

What is my environment

Server StorageIO (aka StorageIO) is a small business focused in and around data infrastructures, which includes data protection. As a result, there is lots of data including videos, audio, images, presentations, reports and research, as well as file serving and back-office applications. Then there are websites, blog, email and related applications, some of which are cloud based, that are also part of my environment and have different availability, durability, and accessibility requirements.

My environment includes local on-site physical as well as virtual systems and mobile devices, along with off-site resources including a dedicated private server (DPS) at a service provider. On one hand, as a small business, I could easily move most if not everything into the cloud using an as-a-service model. However, I also have a lab and research environment for doing various things involving data infrastructure, including data protection, so why not leverage those for other things.

Why do I need to protect my information and data infrastructure?

  • Protect and preserve the business along with associated information as well as assets
  • Compliance (self and client based, PCI and other)
  • Security (logical and physical) and privacy to guard against theft, loss, intrusions
  • Logical (corruption, virus, accidental deletion) and physical damage to systems, devices, applications and data
  • Isolate and contain faults of hardware, software, networks, people actions from spreading to disasters
  • Guard against on-site or off-site incidents, acts of man or nature, head-line news and non head-line news
  • Address previous experience, incidents and situations, preventing future issues or problems
  • Support growth while enabling agility, flexibility
  • Walk the talk: research and learning to increase experience

My wants – What I would like to have

  • Somebody else pay for it all, or exist in world where there are no threat risks to information (yeh right ;) )
  • Cost effective and value (not necessarily the cheapest, I also want it to work)
  • High availability and durability to protect against different threat risks (including myself)
  • Automated, magically taking care of everything, enabled by unicorns and pixie dust ;).

My requirements – What I need (vs. want):

  • Support mix of physical, virtual and cloud applications, systems and data
  • Different applications and data, local and some that are mobile
  • Various operating environments including Windows and Linux
  • NOT have to change my environment to meet limits of a particular solution or approach
  • Need a solution(s) that fits my needs and can scale, evolve and enable change when my environment does
  • Also leverage what I have while supporting new things

Data protection topics, trends, technologies and related themes

Wrap and summary (for now)

Taking a step back to look at a high level at what my data protection needs are involves looking at business requirements along with various threat risks, not to mention technical considerations. In a future post I will outline what I am doing as well as considering for enhancements or other changes, along with different tools and technologies used in hybrid ways. Watch for more posts in this ongoing series of the data protection diaries via www.storageioblog.com/data-protection-diaries-main/.

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Virtual, Cloud and IT Availability, it's a shared responsibility and common sense

In case you missed it, recently the State of Oregon had a data center computer problem (ok, storage and application outage) that resulted in unemployment benefits not being provided. Tony Knotzer over at Network Computing did a story Oregon Storage Debacle Highlights Need To Plan For Failure and asked me for some perspectives that you can read here.

Data center

The reason I bring this incident up is not to join in the feeding frenzy that usually occurs when something like this happens; instead, it is to touch on what should be common sense. What is lacking at times (or needed more) is common sense when it comes to designing and managing flexible, scalable data infrastructures.

“Fundamental IT 101 is that all technology will fail, despite what the vendors tell you,” Schulz said. And the most likely time technology will fail, he notes, is when people are involved — doing configurations, making changes or updates, or performing upgrades. – Via Network Computing

Note that while any technology can fail or has failed at some point, how it fails, along with fault containment via design best practices and vendor resolution, is important.

Good vendors learn and correct things so that they don't happen again, as well as work with customers on best practices to isolate and contain faults from expanding into disasters. Thus when a sales or marketing person tries to tell me that they have never had a failure, I wonder if a: they are making something up, b: they have not actually shipped to a customer in production, c: they are not aware of other deployments, d: they are toeing the company line, e: it is too good to be true, or f: all the above.

People talking

On the other hand, when a vendor tells me how they have resiliency in their product as well as processes, best practices and can even tell me (public or under NDA) how they have addressed issues, then they have my attention.

A common challenge today is cost cutting along with focus on the newest technology from servers to storage, networking to cloud, virtualization and software defined among other buzzword bingo themes and trends.

buzzword bingo

What also gets overlooked as mentioned above is common sense.

Perhaps if somebody could package and launch a good public relations campaign profiling common sense such as Software Defined Common Sense (SDCS) that might help?

On the other hand, similar to public service announcements (PSA) that may seem like common sense to some, there is a reason they are being done. That is to pass on the information to others who may not know about it thus lack what is perceived as common sense.

Let's get back to the State of Oregon's computer systems issues and the blame game.

You know the blame game? That is when something happens, or does not happen as you want it to, and you simply find somebody else to blame, or pivot and point a finger elsewhere.

the blame game

While perhaps good for CYA, the blame game usually does not help to prevent something from happening again, or in the first place.

Hence in my comments about the State of Oregon computer storage system problems, I took the tone that is common these days of no fault, shared responsibility and blame.

In other words, it does not matter who did what first or did not do something; both sides could have prevented it.

For some this might resonate: it does not matter who misbehaved in the sandbox or play room, everybody gets a time out.

This is not to say that one side or the other has to assume or take on more blame or responsibility than the other, rather there is a shared responsibility to look out for each other.

Storage I/O trends

Just like when you drive a car, the education focus is on defensive, safe driving: watching out for what the other person might do or not do (e.g. not using turn signals, or being too busy talking or texting while driving to look in a mirror, among other things). The goal is to prevent accidents by watching out for those who are not taking responsibility for themselves, not to mention learning from others' mishaps.

teamwork
Working together vs. the blame game

Different views of customer vs. vendor

Having been a customer, as well as a vendor in the past not surprisingly I have some different views on this.

Sure, the customer or client is always right; however, sometimes there need to be unpleasant conversations to help customers help themselves, or keep themselves out of trouble.

Likewise a vendor may also take the blame when something does go wrong, even if it was not their fault at all, just to stay in good graces with the customer or get that next deal.

Sometimes a vendor deserves to get beat up when something goes wrong, or at least to tell their story, including, if needed, behind closed doors or under NDA. Likewise, to have a meaningful relationship or partnership with the vendor, supplier or VAR, there needs to be trust and confidence, which means not everything gets put out for media or blog venues to feed on.

Sure, there is explaining what happened without spin; however, there is also learning from mistakes to prevent them from happening again, which should be common sense. If part of that sharing of blame and responsibility requires staying out of the public eye, that's fine, as long as enough information about what happened is conveyed to clarify concerns and create confidence.

With vendor lock-in, when I was a customer some taught that it's the vendor's fault (or, for CYA, blame them); as a vendor, the thinking was enforced that the customer is always right and it's the competition who causes lock-in.

As an analyst and advisory consultant, my thinking, not surprisingly, is that of shared responsibility.

This means only you can allow vendor lock-in, not to mention decide if lock-in is bad or not.

Likewise only you can prevent data loss in cloud, virtual or traditional environments which also includes loss of access.

Granted, somebody higher up the organization structure may override you; however, ask yourself if you did what was needed.

Likewise, consider if a vendor is going to be doing some maintenance work in the middle of the week and there is a risk of something happening, even if they have told (or sold) you there is no single point of failure (NSPOF), or that upgrades are non-disruptive.

Anytime there is a person involved, regardless of whether it is hardware, cables, software, firmware, configurations or physical environments, something can happen. If the vendor drops the ball, or a cable, or a card, or something else and causes an outage or downtime, it is their responsibility to discuss those issues. However, it is also the customer's responsibility to discuss why they let the vendor do something during that time without taking adequate precautions. Likewise, if the storage system was a single point of failure for an important system, then there is the responsibility to discuss the cost cutting concerns of others and have them justify why a redundant solution is not needed (that's CYA 101 btw).

Some other common sense tips

For some these might be familiar and if so, are they being done, and for others, perhaps they are new or revolutionary.

In the race to jump to a new technology or vendor, what are the unknowns? For example, you may know what the issues or flaws are in an existing system, solution, product, service or vendor, however what about the new one? Will you be the production beta customer, and if so, how can you mitigate any risk?

Ask vendors tough yet fair questions that are relevant to your needs and requirements, including how they handle updates, upgrades and other tasks. Don't be afraid to go under NDA if needed to get a better view of where they are at, have been, and are going, to avoid surprises.

If this is not common IT sense, then take the responsibility to learn.

On the other hand, if this is common sense, take the responsibility to share and help others learn what it is that you know.

Also understand your availability needs and wants as well as balance those with costs along with risks. If something can go wrong it will if people are involved, thus design for resiliency including maintenance to offset applicable threat risks. Remember in the data center not everything is the same.

Storage I/O trends

Here is my point.

There is enough blame as well as accolades to go around, however take some shared responsibility and use it wisely.

Likewise in the race to cut cost, watch out for causing problems that compromise your information systems or services.

Look into removing complexity and costs without compromise which has long-term benefits vs. simply cutting costs.

Here are some related links and perspectives:
Don’t Let Clouds Scare You Be Prepared
Cloud conversation, Thanks Gartner for saying what has been said
Cloud conversations: Gaining cloud confidence from insights into AWS outages (Part II)
Make Your Company Ready for the Cloud
What do you do when your service provider drops the ball
People, Not Tech, Prevent IT Convergence
Pulling Together a Converged Team
Speaking of lockin, does software eliminate or move the location of vendor lock-in?

Ok, nuff said for now, what say you?

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Non Disruptive Updates, Needs vs. Wants

Storage I/O trends

Do you want non disruptive updates or do you need non disruptive upgrades?

First there is a bit of play on words going on here with needs vs. wants, as well as what is meant by non disruptive.

Regarding needs vs. wants: they are often used interchangeably, particularly in IT, when discussing requirements or what the customer would like to have. The key differentiator is that a need is something that is required and somehow cost justified, hopefully more easily than a want item. A want or like-to-have item is simply that: it's not a need; however, it could add value as a benefit, although it may be seen as discretionary.

There is also a bit of play on words with non disruptive updates or upgrades, which can take on different meanings or assumptions. For example, my Windows 7 laptop has automatic Microsoft updates enabled, some of which can be applied while I work. On the other hand, some of those updates may be applied while I work, however they may not take effect until I reboot or exit and restart an application.

This is not unique to Windows, as my Ubuntu and Centos Linux systems can also apply updates, and in some cases a reboot might be required; same with my VMware environment. Let's not forget about applying new firmware to a server, workstation, laptop or other device, along with networking routers, switches and related devices. Storage is also not immune, as new software or firmware can be applied to a HDD or SSD (traditional or NVMe), either by your workstation, laptop, server or storage system. Speaking of storage systems, they too have new software or firmware that gets updated.

Storage I/O trends

The common theme here, though, is whether the code (e.g. software, firmware, microcode, flash update, etc.) can be applied non-disruptively, something known as non disruptive code load, followed by activation. With activation, the code may have been applied while the device or software was in use; however, it may need a reboot or restart to take effect. With non disruptive code activation, there should not be a disruption to what is being done when the new software takes effect.

This means that if a device supports non disruptive code load (NDCL) updates along with non disruptive code activation (NDCA), the upgrade can occur without disruption or having to wait for a reboot.
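
To make the NDCL vs. NDCA distinction concrete with one real-world signal, here is a small sketch assuming an Ubuntu or Debian system, which writes a marker file when applied updates still await a reboot:

```python
# Sketch: detect "code loaded but not yet activated" on Ubuntu/Debian systems,
# which write this marker file when applied updates still require a reboot.
import os

REBOOT_MARKER = "/var/run/reboot-required"

def activation_pending() -> bool:
    """True when updates were applied (NDCL) but activation awaits a reboot."""
    return os.path.exists(REBOOT_MARKER)

if activation_pending():
    print("Code load done, activation pending: a reboot is still required.")
else:
    print("No pending activation: applied updates (if any) are in effect.")
```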

Which is better?

That depends, I want NDCA, however for many things I only need NDCL.

On the other hand, depending on what you need, perhaps it is both NDCL and NDCA, however also keep in mind needs vs. wants.

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Tape is still alive, or at least in conversations and discussions

StorageIO Industry trends and perspectives image

Depending on whom you talk to or ask, you will get different views and opinions, some of them stronger than others, on whether magnetic tape is dead or alive as a data storage medium. However, one aspect of tape that is alive is the discussion among those for it, against it, or who simply see it as one of many data storage mediums and technologies whose role is changing.

Here is a link to an ongoing discussion over in one of the LinkedIn group forums (Backup & Recovery Professionals) titled About Tape and disk drives. Rest assured, there is plenty of FUD and hype on both sides of the tape is dead (or alive) argument, not very different from the disk is dead vs. SSD or cloud arguments. After all, not everything is the same in data centers, clouds and information factories.

Fwiw, I removed tape from my environment about 8 years ago, or I should say directly, as some of my cloud providers may in fact be using tape in various ways that I do not see; nor do I care one way or the other as long as my data is safe, secure, protected and SLAs are met. Likewise, I consult and advise for organizations where tape still exists yet its role is changing, same with those using disk and cloud.

Storage I/O data center image

I am not ready to adopt the singular view that tape is dead yet, as I know too many environments that are still using it; however, I agree that its role is changing, thus I am not part of the tape cheerleading camp.

On the other hand, I am a fan of using disk based data protection along with cloud in new and creative ways (including for my own use) as part of modernizing data protection. Although I see disk as having a very bright and important future beyond what it is being used for now, at least today, I am not ready to join the chants of tape is dead either.

StorageIO Industry trends and perspectives image

Does that mean I can’t decide or don’t want to pick a side? NO

It means that I do not have to, nor should anyone have to, choose a side. Instead, look at your options, what you are trying to do, and how you can leverage different things, techniques and tools to maximize your return on innovation. If that means tape is being phased out of your organization, good for you. If that means there is a new or different role for tape in your organization co-existing with disk, then good for you.

If somebody tells you that tape sucks and that you are dumb and stupid for using it, without giving any informed basis for those comments, then call them dumb and stupid, requesting they come back when they can learn more about your environment, needs, and requirements, ready to have an informed discussion on how to move forward.

Likewise, if you can make an informed value proposition on why and how to migrate to new ways of modernizing data protection without having to stoop to the tape is dead argument, or cite some research or whatever, good for you and start telling others about it.

StorageIO Industry trends and perspectives image

Otoh, if you need to use FUD and hype on why tape is dead, why it sucks or is bad, at least come up with some new and relevant facts, third-party research, arguments or value propositions.

You can read more about tape and its changing role at tapeisalive.com or Tapesummit.com.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Cloud conversations: Gaining cloud confidence from insights into AWS outages (Part II)

StorageIO industry trends cloud, virtualization and big data

This is the second in a two-part industry trends and perspective looking at learning from cloud incidents, view part I here.

There is good information, insight and lessons to be learned from cloud outages and other incidents.

Sorry, cynics, no, that does not mean an end to clouds, as they are here to stay. However, when and where to use them, along with what best practices to apply and how to be ready and configure them for use, are part of the discussion. This means that clouds may not be for everybody or all applications, or at least not today. For those who are into clouds for the long haul (either all in or partially), including current skeptics, there are many lessons to be learned and leveraged.

In order to gain confidence in clouds, a question that I am routinely asked is: are clouds more or less reliable than what you are doing? That depends on what you are doing and how you will be using the cloud services. If you are applying HA and other BC or resiliency best practices, you may be able to configure for and isolate against the more common situations. On the other hand, if you are simply using the cloud services as a low-cost alternative, selecting the lowest price and service class (SLAs and SLOs), you might get what you paid for. Thus, clouds are a shared responsibility: the service provider has things they need to do, and the user or person designing how the service will be used has some decision making responsibilities.
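
One small example of the user's side of that shared responsibility, sketched in Python with boto3 (the bucket names and regions are hypothetical, and the replication between them is assumed to be configured separately): read critical objects from a primary location and fail over to a replicated copy rather than depending on a single endpoint.

```python
# Hedged sketch of user-side resiliency: try the primary copy, fall back to a
# replica elsewhere. Buckets and regions are hypothetical; replication between
# them is assumed to be configured separately.
import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

COPIES = [("us-east-1", "app-data-primary"),
          ("us-west-2", "app-data-replica")]

def fetch(key: str) -> bytes:
    for region, bucket in COPIES:
        try:
            s3 = boto3.client("s3", region_name=region)
            return s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        except (ClientError, EndpointConnectionError):
            continue  # isolate the fault and try the next copy
    raise RuntimeError(f"All copies of {key} were unavailable")
```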

Keep in mind that high availability (HA), resiliency, business continuance (BC) along with disaster recovery (DR) are the sum of several pieces. This includes people, best practices, processes including change management, good design that eliminates points of failure and isolates or contains faults, along with the components or technology used (e.g. hardware, software, networks, services, tools). Good technology used in good ways can be part of a highly resilient, flexible and scalable data infrastructure. Good technology used in the wrong ways may not leverage the solutions to their full potential.

While it is easy to focus on the physical technologies (servers, storage, networks, software, facilities), many of the cloud services incidents or outages have involved people, process and best practices so those need to be considered.

These incidents or outages bring awareness, a level set, that this is still early in the cloud evolution lifecycle, and that it is time to move beyond seeing clouds as just a way to cut cost and instead see the importance and value of HA, resiliency, BC and DR. Learning from mistakes, taking action to correct or fix errors, and finding and removing points of failure are all part of a technology, or the use of it, maturing. These all tie into having services with service level agreements (SLAs) and service level objectives (SLOs) for availability, reliability, durability, accessibility, performance and security, among others, to protect against mayhem or other things that can and do happen.

Images licensed for use by StorageIO via
Atomazul / Shutterstock.com

The reason I mentioned earlier that AWS had another incident is that, like their peers or competitors who have had incidents in the past, AWS appears to be going through some growing, maturing, evolution related activities. During summer 2012 there was an AWS incident that affected Netflix (read more here: AWS and the Netflix Fix?). It should also be noted that there were earlier AWS outages where Netflix (read about the Netflix architecture here) leveraged resiliency designs to try to prevent mayhem when others were impacted.

Is AWS a lightning rod for things to happen, a point of attraction for Mayhem and others?

Granted, given their size, scope of services and how they are being used on a global basis, AWS is blazing new territory and experiences, similar to what other information services delivery platforms did in the past. What I mean is that, while taken for granted today, open systems Unix, Linux and Windows-based along with client-server, midrange or distributed systems, not to mention mainframe hardware, software, networks, processes, procedures and best practices, all went through growing pains.

There are a couple of interesting threads going on over in various LinkedIn groups based on some reporters' stories, including speculation on what happened, followed by some good discussions of what actually happened and how to prevent recurrences in the future.

Over in the Cloud Computing, SaaS & Virtualization group forum, this thread is based on a Forbes article (Amazon AWS Takes Down Netflix on Christmas Eve) and involves conversations about SLAs, best practices, HA and related themes. Have a look at the story the thread is based on and some of the assertions being made, and ensuing discussions.

Also over at LinkedIn, in the Cloud Hosting & Service Providers group forum, this thread is based on a story titled Why Netflix’ Christmas Eve Crash Was Its Own Fault with a good discussion on clouds, HA, BC, DR, resiliency and related themes.

Over at the Virtualization Practice, there is a piece titled Is Amazon Ruining Public Cloud Computing? with comments from me and Adrian Cockcroft (@Adrianco) a Netflix Architect (you can read his blog here). You can also view some presentations about the Netflix architecture here.

What this all means

Saying you get what you pay for would be too easy and perhaps not applicable.

There are good services free, or low-cost, just like good free content and other things, however vice versa, just because something costs more, does not make it better.

Otoh, there are services that charge a premium however may have no better if not worse reliability, same with content for fee or perceived value that is no better than what you get free.

Additional related material

Some closing thoughts:

  • Clouds are real and can be used safely; however, they are a shared responsibility.
  • Only you can prevent cloud data loss, which means do your homework, be ready.
  • If something can go wrong, it probably will, particularly if humans are involved.
  • Prepare for the unexpected and clarify assumptions vs. realities of service capabilities.
  • Leverage fault isolation and containment to prevent rolling or spreading disasters.
  • Look at cloud services beyond lowest cost or for cost avoidance.
  • What is your organizations culture for learning from mistakes vs. fixing blame?
  • Ask yourself if you, your applications and organization are ready for clouds.
  • Ask your cloud providers if they are ready for you and your applications.
  • Identify what your cloud concerns are to decide what can be done about them.
  • Do a proof of concept to decide what types of clouds and services are best for you.

Do not be scared of clouds, however be ready, do your homework, learn from the mistakes, misfortune and errors of others. Establish and leverage known best practices while creating new ones. Look at the past for guidance to the future, however avoid clinging to, and bringing the baggage of the past to the future. Use new technologies, tools and techniques in new ways vs. using them in old ways.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Cloud conversations: Gaining cloud confidence from insights into AWS outages

StorageIO industry trends cloud, virtualization and big data

This is the first of a two-part industry trends and perspectives series looking at how to learn from cloud outages (read part II here).

In case you missed it, there were some public cloud outages during the recent Christmas 2012 holiday season. One incident involved Microsoft Xbox (view the Microsoft Azure status dashboard here), where users were impacted, and the other was another Amazon Web Services (AWS) incident. Microsoft and AWS are not alone; most if not all cloud services have had some type of incident and have gone on to improve from those outages. Google has had issues with different applications and services, including some in December 2012, along with a Gmail incident that received coverage back in 2011.

For those interested, here is a link to the AWS status dashboard and a link to the AWS December 24 2012 incident postmortem. In the case of the recent AWS incident, which affected users such as Netflix, the incident (read the AWS postmortem and Netflix postmortem) was tied to a human error. This is not to say AWS has more outages or incidents vs. others including Microsoft; it just seems that we hear more about AWS when things happen compared to others. That could be due to AWS size and arguably market leading status, diversity of services, and the scale at which some of their clients are using them.

Btw, if you were not aware, Microsoft Azure is more than just about supporting SQL Server, Exchange, SharePoint or Office; it is also an IaaS layer for running virtual machines such as Hyper-V, as well as a storage target for storing data. You can use Microsoft Azure storage services as a target for backing up or archiving, or as general storage, similar to using AWS S3, Rackspace Cloud Files or other services. Some backup and archiving AaaS and SaaS providers, including Evault, partner with Microsoft Azure as a storage repository target.
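
As a hedged sketch of that storage target usage with the current azure-storage-blob Python SDK (which is newer than this post; the connection string, container and blob names are placeholders):

```python
# Hypothetical sketch: Azure Blob storage as a backup target, using the
# azure-storage-blob SDK. The connection string and names are placeholders.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
blob = service.get_blob_client(container="backups",
                               blob="fileserver/backup-2012-12.tar.gz")

# Upload the backup archive; lifecycle or tiering policies on the account can
# then move older backups to cooler, lower-cost tiers.
with open("backup-2012-12.tar.gz", "rb") as archive:
    blob.upload_blob(archive, overwrite=True)
```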

When reading some of the coverage of these recent cloud incidents, I am not sure if I am more amazed by some of the marketing cloud washing, or by the cloud bashing and uninformed reporting that lacks research and insight. Then again, if someone repeats a myth often enough for others to hear and repeat, as it gets amplified the myth may assume the status of reality. After all, you may know the expression that if it is on the internet then it must be true?

Images licensed for use by StorageIO via Atomazul / Shutterstock.com

Have AWS and public cloud services become a lightning rod for when things go wrong?

Here is some coverage of various cloud incidents:

The above are a small sampling of different stories, articles, columns, blogs and perspectives about cloud services outages or other incidents. Assuming the services are available, you can Google or Bing many others, along with reading postmortems, to gain insight into what happened, the cause and effect, and how to prevent a recurrence.

Do these recent incidents show a trend of increased cloud outages? Alternatively, do they say that the cloud services are being used more and on a larger basis, thus the impacts become more known?

Perhaps it is a mix of the above; like when a magnetic storage tape gets lost or stolen, it makes for good news copy, something to write about. Granted, fewer tapes are actually lost than in the past, and far fewer vs. lost or stolen laptops and other devices with data on them. There are probably other reasons, such as the lightning rod effect: given how much industry hype surrounds clouds, when something does happen, the cynics or foes come out in force, sometimes with FUD.

Similar to traditional hardware or software product vendors, some service providers have even tried to convince me that they have never had an incident, never lost, corrupted or compromised any data; yeah, right. Candidly, I put more credibility and confidence in a vendor or solution provider who tells me that they have had incidents and have taken steps to prevent them from recurring. Granted, some of those steps might be made public while others stay under NDA; at least they are learning and implementing improvements.

As part of gaining insights, here are some links to AWS, Google, Microsoft Azure and other service status dashboards where you can view current and past situations.

What is your take on IT clouds? Click here to cast your vote and see what others are thinking about clouds.

Ok, nuff said for now (check out part II here )

Disclosure: I am a customer of AWS for EC2, EBS, S3 and Glacier as well as a customer of Bluehost for hosting and Rackspace for backups. Other than Amazon being a seller of my books (and my blog via Kindle) along with running ads on my sites and being an Amazon Associates member (Google also has ads), none of those mentioned are or have been StorageIO clients.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Data protection modernization, more than swapping out media

backup, restore, BC, DR and archiving

Have you modernized your data protection strategy and environment?

If not, are you thinking about updating your strategy and environment?

Why modernize your data protection strategy and environment, including backup/restore, business continuance (BC), high availability (HA) and disaster recovery (DR)?

Is it to leverage new technology such as disk to disk (D2D) backups, cloud, virtualization, data footprint reduction (DFR) including compression or dedupe?

Perhaps you have pursued, or are considering, data protection modernization because somebody told you to, or because you read about it or watched a video or webcast? Or perhaps your backup and restore are broken, so it is time to change media or try something different.

Let's take a step back for a moment and ask: what is your view of data protection modernization?

Perhaps it is modernizing backup by replacing tape with disk, or disk with clouds?

Maybe it is leveraging data footprint reduction (DFR) techniques including compression and dedupe?
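To make DFR a bit less of a buzzword, here is a minimal sketch (my illustration, not any particular product's engine) of both techniques: data is split into fixed-size chunks, each chunk is identified by its SHA-256 hash so duplicate chunks are stored only once (dedupe), and unique chunks are gzip compressed.

```python
# Minimal DFR sketch: fixed-size chunking + hash-based dedupe + gzip compression.
# Illustrative only; real dedupe engines use variable-size chunking and indexes.
import gzip
import hashlib

CHUNK_SIZE = 4096
store = {}  # chunk hash -> compressed chunk (the dedupe index)

def ingest(data: bytes):
    """Store data, keeping only one compressed copy of each unique chunk."""
    recipe = []  # ordered list of chunk hashes needed to rebuild the data
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:          # duplicate chunks cost nothing extra
            store[digest] = gzip.compress(chunk)
        recipe.append(digest)
    return recipe

def restore(recipe):
    """Rebuild the original data from its chunk recipe."""
    return b"".join(gzip.decompress(store[d]) for d in recipe)

data = b"backup " * 10_000            # highly redundant sample data
recipe = ingest(data)
assert restore(recipe) == data
print(f"{len(data)} bytes in, {sum(len(c) for c in store.values())} bytes stored")
```

Production dedupe engines typically use variable-size, content-defined chunking and persistent indexes; the footprint math is the same, though: duplicates cost almost nothing, and compression shrinks what remains.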

How about instead of swapping out media, changing backup software?

Or what about virtualizing servers, moving from physical machines to virtual machines?

On the other hand maybe your view of modernizing data protection is around using a different product ranging from backup software to a data protection appliance, or snapshots and replication.

The above and others certainly fall under the broad group of backup, restore, BC, DR and archiving; however, there is another area that is not so much technology based as it is techniques, best practices, processes and procedures based. That is, revisiting why data and applications are being protected, and against what applicable threat risks and associated business risks.

This means reviewing service needs and wants, including backup, restore, BC, DR and archiving, which in turn drive what data and applications to protect, how often, how many copies to keep, where those copies are located, and how long they will be retained.
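As an illustration of what such a review might produce (a sketch of mine; the tier names, objectives and locations are hypothetical), the result can be captured as per-tier policies that then drive tooling and schedules:

```python
# Hypothetical per-tier data protection policies derived from a service review.
from dataclasses import dataclass

@dataclass
class ProtectionPolicy:
    rpo_hours: float      # how much data you can afford to lose
    rto_hours: float      # how long you can wait to get access back
    copies: int           # how many copies to keep
    locations: tuple      # where those copies live
    retention_days: int   # how long copies are kept

POLICIES = {
    "mission-critical": ProtectionPolicy(0.25, 1, 3, ("onsite", "dr-site", "cloud"), 2555),
    "business-important": ProtectionPolicy(4, 8, 2, ("onsite", "cloud"), 365),
    "low-priority": ProtectionPolicy(24, 72, 1, ("cloud",), 90),
}

for tier, p in POLICIES.items():
    print(f"{tier}: RPO {p.rpo_hours}h, RTO {p.rto_hours}h, "
          f"{p.copies} copies in {p.locations}, kept {p.retention_days} days")
```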

Modernizing data protection is more than simply swapping out old or broken media like flat tires on a vehicle.

To be effective, data protection modernization involves taking a step back from the technology, tools and buzzword bingo topics to review what is being protected and why. It also means revisiting service level expectations and clarifying wants vs. needs; that is, what would be nice to have if it were free vs. what is actually required when it carries a cost.

Certainly technologies and tools play a role; however, simply using new tools and techniques without revisiting data protection challenges at the source will result in new problems that resemble old problems.

Hence, to support growth with a constrained or shrinking budget while maintaining or enhancing service levels, the trick is to remove complexity and costs.

This means not treating all data and applications the same; stretching your available resources to be more effective without compromising service is the mantra of modernizing data protection.

Ok, nuff said for now, plenty more to discuss later.

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

StorageIO going Dutch: Seminar for Storage and I/O professionals

Data and Storage Networking Industry Trends and Technology Seminar

Greg Schulz of StorageIO, in conjunction with our Dutch partner Brouwer Storage Consultancy, will be presenting a two day seminar for storage professionals on Tuesday the 24th and Wednesday the 25th of May 2011 at Ampt van Nijkerk, the Netherlands.

Brouwer Storage Consultancy | The Server and StorageIO Group

This two day interactive education seminar for storage professionals will focus on current data and storage networking trends, technology and business challenges along with available technologies and solutions. During the seminar, learn what technologies and management techniques are available, how different vendors' solutions compare, and what to use when and where. This seminar digs into the various IT tools, techniques, technologies and best practices for enabling an efficient, effective, flexible, scalable and resilient data infrastructure.

The format of this two day seminar will be a mix of presentation and interactive discussion, allowing attendees plenty of time to discuss among themselves and with the seminar presenters. Attendees will gain insight into how to compare and contrast various technologies and solutions, in addition to identifying and aligning those solutions to their specific issues, challenges and requirements.

Major themes that will be discussed include:

  • Who is doing what with various storage solutions and tools
  • Is RAID still relevant for today and tomorrow
  • Are hard disk drives and tape finally dead at the hands of SSD and clouds
  • What am I routinely hearing, seeing or being asked to comment on
  • Enabling storage optimization, efficiency and effectiveness (performance and capacity)
  • What do I see as opportunities for leveraging various technologies, techniques, trends
  • Supporting virtual servers including re-architecting data protection
  • How to modernize data protection (backup/restore, BC, DR, replication, snapshots)
  • Data footprint reduction (DFR) including archive, compression and dedupe
  • Clarifying cloud confusion, don’t be scared, however look before you leap

In addition, this two day seminar will look at new and improved technologies and techniques and who is doing what, along with discussions around industry and vendor activity including mergers and acquisitions. Greg will also preview the contents and themes of his new book Cloud and Virtual Data Storage Networking (CRC) for enabling efficient, optimized and effective information services delivery across cloud, virtual and traditional environments.

Buzzwords and topic themes to be discussed among others include:
E2E, FCoE and DCB, CNAs, SAS, I/O virtualization, server and storage virtualization, public and private cloud, Dynamic Infrastructures, VDI, RAID and advanced data protection options, SSD, flash, SAN, DAS and NAS, object storage, application optimized or aware storage, open storage, scale out storage solutions, federated management, metrics and measurements, performance and capacity, data movement and migration, storage tiering, data protection modernization, SRA and SRM, data footprint reduction (archive, compress, dedupe), unified and multi-protocol storage, solution bundle and stacks.

For more information or to register contact Brouwer Storage Consultancy

Brouwer Storage Consultancy
Olevoortseweg 43
3861 MH Nijkerk
The Netherlands
Telephone: +31-33-246-6825
Cell: +31-652-601-309
Fax: +31-33-245-8956
Email: info@brouwerconsultancy.com
Web: www.brouwerconsultancy.com

Learn about other events involving Greg Schulz and StorageIO at www.storageio.com/events

Ok, nuff said for now

Cheers Gs

Greg Schulz – Author The Green and Virtual Data Center (CRC), Resilient Storage Networks (Elsevier) and coming summer 2011 Cloud and Virtual Data Storage Networking (CRC)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

Cloud conversations: Loss of data access vs. data loss

Have you hugged your cloud or MSP lately?

Why give a cloud a hug and what does it have to do with loss of data access vs. loss of data?

First there is a difference between actually losing data and losing access to it.

Losing data means that you have no backup or copy of the information, thus it is gone. There are no good, valid backups, snapshots, copies or archives that can be used to restore or recover the information.

Losing access to data means that there is a copy of it somewhere; however, it will take time to make that copy usable (no data was actually lost). How long you have to wait until the data is restored or recovered will vary, and during that time it may seem as though the data was lost.
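As a toy illustration of that distinction (mine, not the author's), think of each copy of the data as having two properties, whether it is valid and whether it is reachable right now:

```python
# Toy sketch: the difference between data loss and loss of access.
from dataclasses import dataclass

@dataclass
class Copy:
    valid: bool      # is this a good, restorable copy?
    reachable: bool  # can we get to it right now?

def assess(copies):
    if not any(c.valid for c in copies):
        return "data loss: no good copy exists anywhere"
    if any(c.valid and c.reachable for c in copies):
        return "no incident: a good copy is accessible"
    return "loss of access: a good copy exists, but the RTO clock is running"

# Cloud outage: the offsite copy is fine, just unreachable for a while.
print(assess([Copy(valid=True, reachable=False)]))
```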

Second, industry hype for and against clouds serves as a lightning rod for when things happen.

Lightning recently struck (at least virtually) with some outages (see links below), including at Google Gmail.

Cloud crowd cheerleaders may need a hug to feel good while they or their technology get tossed about a bit. Google announced that they recently had a service disruption; however, data was not lost, only access to it for a period of time.

Let's take a step back before going forward.

With the Google Gmail disruption, following on previous incidents, true cynics and naysayers will probably jump on the anti-cloud FUD feeding frenzy. The true cloud cynics will tell the skeptics all about cloud challenges, perhaps without ever having actually used any such service or technology themselves.

Cloud crowd cheerleaders are generally a happy go lucky bunch with virtual beliefs and physical or real emotions. Cloud crowd cheerleaders have a strong passion for their technology or paradigm, taking it quite seriously and in some instances perceiving attacks or FUD against clouds as an attack on them or their beliefs. Some cheerleaders will see this post as snarky or cynical (ok, get over it already).


Ongoing poll at StorageIOblog.com, click on the image to cast your vote.

Then there are the skeptics or interested audience who are not complete cynics or cheerleaders (those in the middle 80 percent of the above chart).

Generally speaking, they want to learn more, understand issues to work around or take appropriate steps, and institute best practices. They see a place for MSP or cloud services to complement what they are currently doing for some things, and they tend to be the majority of audiences outside of special interest, vendor or industry trade groups.

Some additional thoughts, comments and perspectives:

  • Loss of data means you cannot recover it any closer than a specific RPO (Recovery Point Objective, i.e., how much data you can afford to lose). Loss of access to data means that you cannot get to your data until a specific RTO (Recovery Time Objective) has been met; see the sketch after this list.


Tiered data protection: aligning RTO and RPO techniques and technology to SLO needs

  • RAID and replication provide accessibility to data, not data protection. The good news with RAID and replication or mirroring is that if you make a change to the data, the change is copied. The bad news is that if data is deleted or corrupted, that error or problem is also replicated.
  • Backup, snapshots, CDP or other time interval based techniques protect data against loss; however, they may require time to restore, recover or refresh from. A combination of data availability and accessibility along with time interval based protection is needed (e.g., the two previous items should be combined). CDP should also mean complete, consistent, coherent or comprehensive data protection, including data in application or VM buffers.
  • Any technology will fail, either on its own or via human intervention or lack of configuration. It is not if, rather when a failure occurs, as well as how gracefully it occurs and how fault isolation and remediation (correction) happen. Generally speaking, there is no such thing as a bad technology, rather poor or inappropriate use, configuration or deployment of it.
  • Protect onsite data with offsite mediums, including MSP or cloud backup services, while keeping a local onsite copy. Why keep an onsite local copy when using a cloud? Simple: if you lose access to the cloud or MSP for an extended period of time, you still have a copy of the data to work with (assuming it is still valid). On the other hand, important data that is onsite also needs to be kept offsite. Hence cloud and MSP services should complement what is done for data protection, and vice versa. That's what I do; is that what you do?
  • The technology golden rule, which applies to cloud and virtualization, is that whoever controls the management of the technology controls the gold. Leverage CDP, which here also stands for Commonsense Data Protection or Cloud Data Protection. Hops are great in beer (as well as in some other foods); however, network hops add latency. Aggregation can cause aggravation: not everything can be consolidated, however much can be virtualized.
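As a back-of-the-envelope illustration of those two objectives (my sketch, with hypothetical numbers), the backup interval bounds the worst-case RPO, while data size divided by restore throughput puts a floor under the RTO:

```python
# Back-of-the-envelope RPO/RTO math for a time-interval protection scheme.
# All numbers are hypothetical; plug in your own.

backup_interval_hours = 6        # snapshots/backups taken every 6 hours
data_set_gb = 2000               # size of the protected data set
restore_throughput_gbph = 500    # GB/hour the restore path can sustain

# Worst case, a failure right before the next backup loses a full interval.
worst_case_rpo_hours = backup_interval_hours

# Time to copy the data back is a floor on the RTO (excludes detect/decide time).
minimum_rto_hours = data_set_gb / restore_throughput_gbph

print(f"Worst-case data loss window (RPO): {worst_case_rpo_hours} hours")
print(f"Restore-time floor (RTO):          {minimum_rto_hours:.1f} hours")
```

Shortening the interval shrinks the worst-case RPO; only a faster restore path, or restoring less data, moves the RTO floor.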

Here are some related blog posts:

Additional links to related articles and commentary:

Closing thoughts and comments (for now) regarding clouds.

It's not if, rather when, where, why, how and with what you will leverage cloud or MSP technologies, products, services, solutions or architectures to complement your environment.

How will the cloud or MSP work for you vs. you working for it (unless you actually do work for one of them)?

Don't be scared of clouds or virtualization; however, look before you leap!

BTW, for those in the Minneapolis St. Paul area (aka the other MSP), check out this event on March 15, 2011, where I have been invited to talk about optimizing your data storage and virtual environments and being prepared to take advantage of cloud computing opportunities as they mature.

Nuff said for now

Cheers gs

Greg Schulz – Author The Green and Virtual Data Center (CRC), Resilient Storage Networks (Elsevier) and coming summer 2011 Cloud and Virtual Data Storage Networking (CRC) at https://storageio.com/books
twitter @storageio

End to End (E2E) Systems Resource Analysis (SRA) for Cloud and Virtual Environments

A new StorageIO Industry Trends and Perspective (ITP) white paper titled “End to End (E2E) Systems Resource Analysis (SRA) for Cloud, Virtual and Abstracted Environments” is now available at www.storageioblog.com/reports compliments of SANpulse technologies.

End to End (E2E) Systems Resource Analysis (SRA) for Virtual, Cloud and abstracted environments: Importance of Situational Awareness for Virtual and Abstracted Environments

Abstract:
Many organizations are in the planning phase of, or already executing, initiatives to move their IT applications and data to abstracted, cloud (public or private), virtualized or other forms of efficient, effective dynamic operating environments. Others are exploring where, when, why and how to use various forms of abstraction techniques and technologies to address their issues, including opportunities to leverage virtualization and abstraction techniques that enable IT agility, flexibility, resiliency and scalability in a cost effective yet productive manner.

An important need when moving to a cloud or virtualized dynamic environment is to have situational awareness of IT resources. This means having insight into how IT resources are being deployed to support business applications and to meet service objectives in a cost effective manner.

Awareness of IT resource usage provides the insight necessary for both tactical and strategic planning as well as decision making. Effective management requires insight not only into what resources are at hand but also into how they are being used, in order to decide where different applications and data should be placed to effectively meet business requirements.
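As a minimal sketch of the first step toward that situational awareness (mine, assuming the third-party psutil library; the 80 percent threshold is hypothetical), capture a point-in-time view of how a host's resources are being used:

```python
# Minimal sketch of gathering resource usage for situational awareness.
# Uses the third-party psutil library (pip install psutil); hypothetical threshold.
import psutil

def snapshot():
    """Capture a point-in-time view of how this host's resources are used."""
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),
        "memory_percent": psutil.virtual_memory().percent,
        "disk_percent": psutil.disk_usage("/").percent,
    }

usage = snapshot()
for resource, pct in usage.items():
    flag = "review placement" if pct > 80 else "ok"
    print(f"{resource}: {pct:.0f}% ({flag})")
```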

Learn more about the importance and opportunities associated with gaining situational awareness using E2E SRA for virtual, cloud and abstracted environments in this StorageIO Industry Trends and Perspective (ITP) white paper compliments of SANpulse technologies by clicking here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved