IT Availability: it’s a shared responsibility and common sense

In case you missed it, the State of Oregon recently had a data center computer problem (ok, a storage and application outage) that resulted in unemployment benefits not being provided. Tony Knotzer over at Network Computing did a story, Oregon Storage Debacle Highlights Need To Plan For Failure, and asked me for some perspectives that you can read here.

The reason I bring this incident up is not to join in the feeding frenzy that usually occurs when something like this happens, but instead to touch on what should be common, yet at times is lacking (or more needed): common sense when it comes to designing and managing flexible, scalable data infrastructures.

“Fundamental IT 101 is that all technology will fail, despite what the vendors tell you,” Schulz said. And the most likely time technology will fail, he notes, is when people are involved — doing configurations, making changes or updates, or performing upgrades. – Via Network Computing

Note that while any technology can fail (or has failed) at some point, how it fails, along with fault containment via design best practices and vendor resolution, is important.

Good vendors learn and correct things so that they don’t happen again, as well as work with customers on best practices to isolate and contain faults from expanding into disasters. Thus when a sales or marketing person tries to tell me that they have never had a failure, I wonder if a: they are making something up, b: they have not actually shipped to a customer in production, c: they are not aware of other deployments, d: they are toeing the company line, e: it is too good to be true, or f: all of the above.

On the other hand, when a vendor tells me how they have resiliency in their product as well as in their processes and best practices, and can even tell me (in public or under NDA) how they have addressed issues, then they have my attention.

A common challenge today is cost cutting, along with a focus on the newest technology, from servers to storage, networking to cloud, virtualization and software-defined, among other buzzword-bingo themes and trends.

What also gets overlooked as mentioned above is common sense.

Perhaps if somebody could package and launch a good public relations campaign profiling common sense such as Software Defined Common Sense (SDCS) that might help?

On the other hand, similar to public service announcements (PSAs) that may seem like common sense to some, there is a reason they are being done: to pass on information to others who may not know it and thus lack what is perceived as common sense.

Let’s get back to the State of Oregon’s computer systems issues and the blame game.

You know the blame game? That is when something happens (or does not happen) the way you want, and you simply find somebody else to blame, or pivot and point a finger elsewhere.

While perhaps good for CYA, the blame game usually does not help to prevent something from happening again, or in the first place.

Hence in my comments about the State of Oregon computer storage system problems, I took the tone, common these days, of no fault and of shared responsibility and blame.

In other words, it does not matter who did (or did not do) what first; both sides could have prevented it.

For some this might resonate with the idea that it does not matter who misbehaved in the sandbox or playroom: everybody gets a time-out.

This is not to say that one side or the other has to assume or take on more blame or responsibility than the other; rather, there is a shared responsibility to look out for each other.

Just as when you drive a car, the educational focus is on defensive, safe driving: watching out for what the other person might do or not do (e.g. not using turn signals, or being too busy talking or texting while driving to look in a mirror, among other things). The goal is to prevent accidents by watching out for those who are not taking responsibility for themselves, not to mention learning from others’ mishaps.


Working together vs. the blame game

Different views of customer vs. vendor

Having been both a customer and a vendor in the past, not surprisingly I have some different views on this.

Sure, the customer or client is always right; however, sometimes there need to be unpleasant conversations to help customers help themselves, or keep them out of trouble.

Likewise a vendor may also take the blame when something does go wrong, even if it was not their fault at all, just to stay in good graces with the customer or to get that next deal.

Sometimes a vendor deserves to get beat up when something goes wrong, or at least to have to tell their story, including, if needed, behind closed doors or under NDA. Likewise, to have a meaningful relationship or partnership with a vendor, supplier or VAR, there needs to be trust and confidence, which means not everything gets put out for media or blog venues to feed on.

Sure, there is explaining what happened without spin; however, there is also learning from mistakes to prevent them from happening again, which should be common sense. If part of that sharing of blame and responsibility requires staying out of the public eye, that’s fine, as long as enough information about what happened is conveyed to clarify concerns and create confidence.

With vendor lock-in, when I was a customer some taught that it’s the vendor’s fault (or, for CYA, blame them); as a vendor, the thinking that was reinforced was that the customer is always right and it’s the competition who causes lock-in.

As an analyst and advisory consultant, my thinking, not surprisingly, is that of shared responsibility.

This means only you can allow vendor lock-in, not to mention decide whether lock-in is bad or not.

Likewise only you can prevent data loss in cloud, virtual or traditional environments, which also includes loss of access.

Granted, somebody higher up the organizational structure may override you; however, ask yourself if you did what was needed.

Likewise, if a vendor is going to be doing maintenance work in the middle of the week, there is a risk of something happening, even if they have told (or sold) you that there is no single point of failure (NSPOF) and that upgrades are non-disruptive.

Anytime a person is involved, regardless of whether it is hardware, cables, software, firmware, configurations or physical environments, something can happen. If the vendor drops the ball (or a cable, or a card, or something else) and causes an outage or downtime, it is their responsibility to discuss those issues. However it is also the customer’s responsibility to discuss why they let the vendor do something during that time without taking adequate precautions. Likewise, if the storage system was a single point of failure for an important system, then there is the responsibility to raise the cost-cutting concerns with others and have them justify why a redundant solution is not needed (that’s CYA 101, btw).
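
To put rough numbers behind that redundancy conversation, here is a minimal sketch in Python; the availability figures are my own illustrative assumptions (not from any vendor or from the Oregon incident), and it assumes the two units of a pair fail independently:

    # Minimal sketch with assumed, illustrative numbers: a single point of
    # failure vs. a redundant pair, assuming independent failures.

    HOURS_PER_YEAR = 24 * 365

    def pair_availability(a: float) -> float:
        # Either unit can carry the load, so the pair is down only when
        # both are down at the same time: 1 - (1 - a)^2.
        return 1 - (1 - a) ** 2

    single = 0.999                    # one storage system at "three nines" (assumed)
    pair = pair_availability(single)  # the same system deployed as a redundant pair

    for label, a in (("single (SPOF)", single), ("redundant pair", pair)):
        downtime_hours = (1 - a) * HOURS_PER_YEAR
        print(f"{label}: availability {a:.6f}, ~{downtime_hours:.2f} hours downtime/year")

Run as-is, that works out to roughly 8.76 hours of expected downtime per year for the single system vs. under a minute for the pair, the kind of difference that helps frame (or justify) the redundancy vs. cost-cutting conversation.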

Some other common sense tips

For some these might be familiar, and if so, ask whether they are being done; for others, perhaps they are new or revolutionary.

In the race to jump to a new technology or vendor, what are the unknowns? For example, you may know what the issues or flaws are in an existing system, solution, product, service or vendor; however, what about the new one? Will you be the production beta customer, and if so, how can you mitigate that risk?

Ask vendors tough yet fair questions that are relevant to your needs and requirements, including how they handle updates, upgrades and other tasks. Don’t be afraid to go under NDA if needed to get a better view of where they are, where they have been and where they are going, to avoid surprises.

If this is not common IT sense, then take the responsibility to learn.

On the other hand, if this is common sense, take the responsibility to share and help others learn what it is that you know.

Also understand your availability needs and wants, and balance those with costs as well as risks. If something can go wrong, it will when people are involved; thus design for resiliency, including maintenance, to offset applicable threat risks. Remember that in the data center, not everything is the same.
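
Speaking of balancing availability wants with costs, it can help to translate availability targets into the downtime they actually allow. Here is a quick back-of-the-envelope sketch, again in Python, using generic “nines” rather than anything specific to a given environment:

    # Rough arithmetic: convert availability targets ("nines") into the
    # downtime per year they allow, to weigh against what each nine costs.

    MINUTES_PER_YEAR = 60 * 24 * 365

    for nines in range(2, 6):  # 99% through 99.999%
        availability = 1 - 10 ** -nines
        downtime_minutes = (1 - availability) * MINUTES_PER_YEAR
        print(f"{availability:.3%} -> ~{downtime_minutes:,.0f} minutes/year of allowed downtime")

Each additional nine cuts the allowable downtime roughly tenfold (from about 5,256 minutes a year at 99% down to about 5 minutes at 99.999%), a handy sanity check when deciding whether the next nine is worth its cost for a given application, since not everything is the same.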

Here is my point.

There is enough blame, as well as accolades, to go around; however, take some shared responsibility and use it wisely.

Likewise in the race to cut cost, watch out for causing problems that compromise your information systems or services.

Look into removing complexity and costs without compromise, which has long-term benefits, vs. simply cutting costs.

Here are some related links and perspectives:
Don’t Let Clouds Scare You Be Prepared
Cloud conversation, Thanks Gartner for saying what has been said
Cloud conversations: Gaining cloud confidence from insights into AWS outages (Part II)
Make Your Company Ready for the Cloud
What do you do when your service provider drops the ball
People, Not Tech, Prevent IT Convergence
Pulling Together a Converged Team
Speaking of lockin, does software eliminate or move the location of vendor lock-in?

Ok, nuff said for now, what say you?

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved


