What industry pundits love and loathe about data storage

Drew Robb has a good article about what IT industry pundits, including vendors, analysts and advisors, love and loathe about storage, including comments from me.

In the article Drew asks: What do you really love about storage and what are your pet peeves?

One of my comments and perspectives is that I like Hybrid Hard Disk Drives (HHDDs) in addition to traditional Hard Disk Drives (HDDs) along with Solid State Devices (SSDs). As much as I like HHDDs, I also believe that, as with any technology, they are not the best solution for everything; however, they can be used in more ways than are commonly seen. Here is the fifth installment of a series on HHDDs that I have done since June 2010, when I received my first HHDD, a Seagate Momentus XT. You can read the other installments of my Momentus moments here, here, here and here.

Seagate Momentus XT
HHDD with integrated NAND flash SSD, photo courtesy of Seagate.com

Molly Rector, VP of marketing at tape vendor Spectra Logic, mentioned that what she does not like is companies that base their business plan on patent law trolling. I would have expected something different along the lines of countering or correcting people who say tape sucks, tape is dead, or that tape is the cause of anything wrong with storage, thus clearing the air or putting up a fight for tape. Go figure…

Another of my comments involved clouds, of which there are plenty of conversations taking place. I do like clouds (I even recently wrote a book involving them), however I'm a fan of using them where applicable to coexist with and enhance other IT resources. Don't be scared of clouds, however be ready: do your homework, listen, learn, and do proof of concepts to decide best practices along with when, where, what and how to use them.

Speaking of clouds, click here to read about who is responsible for cloud data loss and cast your vote, along with viewing the poll on what you think about IT clouds in general here.

Mike Karp (aka twitter @storagewonk ), an analyst with Ptak Noel, mentions that midrange environments don't get respect from big (or even startup) vendors.

I would take that a step further by saying that compared to six or so years ago, SMBs are getting night and day better respect along with attention from most vendors; however, what is lacking is respect for the SOHO sector (e.g. the lower end of SMB down to or just above consumer).

Granted, some that have traditionally sold into those sectors, such as server vendors including Dell and HP, get it or at least see the potential, along with traditional enterprise vendor EMC via its Iomega unit. Yet I still see many vendors, including startups, discounting, shrugging off or sneering at the SOHO space, similar to those who dissed or did not respect the SMB space several years ago. Similar to the SMB space, SOHO requires different products, packaging, pricing and routes to market via channel or etail mechanisms, which means change for some vendors. Those vendors who embraced the SMB space and realized what needed to change to adapt to those markets will also stand to do better with SOHO.

Here is the reason that I think SOHO needs respect.

Simple: SOHOs grow up to become SMBs, SMBs grow up to become SMEs, and SMEs grow up to become enterprises, not to mention that the amount of data being generated, moved, processed and stored continues to grow. The net result is that SMB along with SOHO storage demands will continue to grow, and those vendors who can adjust to support those markets will also stand to gain new customers that in turn become candidates for other solution offerings.

Cloud conversations

Not surprisingly, Eran Farajun of Asigra, which was doing cloud backups decades before they were known as clouds, loves backup (and restores). However, I am surprised that Eran did not jump on the it's time to modernize and re-architect data protection theme. Oh well, I will have to have a chat with Eran about that sometime.

What was surprising were comments from Panzura, which has a good distributed (read also: cloud) file system that can be used for various things including online reference data. Panzura has a solution that normally I would not even think about in the context of being pulled into a Data Domain or dedupe appliance type discussion (e.g. tape sucks or other similar themes). So it is odd that they are playing to the tape sucks camp and theme vs. playing to where the technology can really shine, which IMHO is in the global, distributed, scale out and cloud file system space. Oh well, I guess you go with what you know or what has worked in the past to get some attention.

Molly Rector of Spectra also mentioned that she likes High Performance Computing; I am surprised that she did not throw in high productivity computing as well, in conjunction with big data, big bandwidth, green, dedupe, power, disk, tape and related buzzword bingo terms.

Also there are some comments from me about cost cutting.

While I see the need for organizations to cut costs during tough economic times, I'm not a fan of simply cutting cost for the sake of cost cutting, as opposed to finding and removing complexity that in turn removes the costs of doing work. In other words, I'm a fan of finding and removing waste, becoming more effective and productive, and removing the cost of doing a particular piece of work. This in the end meets the aim of bean counters to cut costs, however it can be done in a way that does not degrade service levels or the customer service experience. For example, instead of looking to cut backup costs, do you know where the real costs of doing data protection exist (hint: swapping out media is treating the symptoms), and if so, what can be done to streamline those from the source of the problem downstream to the target (e.g. media or medium)? In other words, redesign, review and modernize how data protection is done, leveraging data footprint reduction (DFR) techniques including archive, compression, consolidation, data management, dedupe and other technologies in effective and creative ways. After all, return on innovation is the new ROI.

Check out Drew's article here to read more on the above topics and themes.

Ok, nuff said for now

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

The blame game: Does cloud storage result in data loss?

I recently came across a piece by Carl Brooks over at IT Tech News Daily that caught my eye, titled Cloud Storage Often Results in Data Loss. The piece has an effective title (good for search engine optimization: SEO) as it stood out from many others I saw on that particular day.

Industry Trend: Cloud storage

What caught my eye about Carl's piece is that it reads as if the facts, based on a quick survey, point to clouds resulting in data loss, as opposed to being an opinion that some cloud usage can result in data loss.

Data loss

My opinion is that if not used properly, including ignoring best practices, any form of data storage medium or media could result in, or be blamed for, data loss. Some people have lost data as a result of using cloud storage services, just as other people have lost data or access to information on other storage mediums and solutions. For example, data has been lost on tape, Hard Disk Drives (HDDs), Solid State Devices (SSDs), Hybrid HDDs (HHDDs), RAID and non RAID, local and remote and even optical based storage systems large and small. In some cases, there have been errors or problems with the medium or media; in other cases storage systems have lost access to, or lost, data due to hardware, firmware, software, or configuration issues, including human error among others.


Technology failure: Not if, rather when and how to decrease impact
Any technology, regardless of what it is or who it is from, along with its architecture, design and implementation, can fail. It is not if, rather when; how gracefully a failure is handled, what safeguards decrease the impact, and how faults are contained or isolated are what differentiate various products or solutions. How they automatically repair and self heal to keep running or support accessibility and maintain data integrity are important, as is how those options are used. Granted, a failure may not be technology related per se, rather something associated with human intervention, configuration, change management (or lack thereof) along with accidental or intentional activities.

Walking the talk
I have used public cloud storage services for several years including SaaS and AaaS as well as IaaS (see more XaaS here) and, knock on wood, have not lost any data yet. Loss of access, sure; however, no data has been lost.

I follow my advice and best practices when selecting cloud providers looking for good value, service level agreements (SLAs) and service level objectives (SLOs) over low cost or for free services.

In the several years of using cloud based storage and services there has been some loss of access, however no loss of data. Those service disruptions or loss of access to data and services ranged from a few minutes to a little over an hour. In those scenarios, if I could not have waited for cloud storage to become accessible, I could have accessed a local copy if it were available.

Had a major disruption occurred where it would have been several days before I could gain access to that information, or if it were actually lost, I have a data insurance policy. That data insurance policy is part of my business continuance (BC) and disaster recovery (DR) strategy. My BC and DR strategy is a multi layered approach combining local, offline and offsite along with online cloud data protection and archiving.

Assuming my cloud storage service could get data back to a given point (RPO) in a given amount of time (RTO), I have some options. One option is to wait for the service or information to become available again, assuming a local copy is no longer valid or available. Another option is to start restoration from a master gold copy and then roll forward changes from the cloud services as that information becomes available. In other words, I am using cloud storage as another resource that both protects what is local and complements how I locally protect things.
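
To make those options concrete, here is a minimal sketch in Python of that decision logic. The function name, inputs and thresholds are hypothetical illustrations, not any particular provider's API.

```python
from datetime import timedelta

def choose_recovery_path(local_copy_valid: bool,
                         estimated_cloud_outage: timedelta,
                         tolerable_wait: timedelta) -> str:
    """Illustrative decision logic for recovering data when a cloud
    storage service is disrupted. Thresholds are examples only."""
    if local_copy_valid:
        # Fastest RTO: serve or restore from the local copy.
        return "use local copy"
    if estimated_cloud_outage <= tolerable_wait:
        # The outage is short enough to simply wait out.
        return "wait for the cloud service to become available"
    # Otherwise restore the gold/master copy, then roll forward
    # changes from the cloud service as it becomes available.
    return "restore gold copy, then roll forward cloud changes"

# Example: local copy is stale, outage estimated at 4 hours,
# but the business can only tolerate 1 hour of waiting.
print(choose_recovery_path(False, timedelta(hours=4), timedelta(hours=1)))
```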

Minimize or cut data loss or loss of access
Anything important should be protected locally and remotely, meaning leveraging cloud along with a master or gold backup copy.

To cut the cost of protecting information, I also leverage archives, which means not all data gets protected the same. Important data is protected more often, reducing RPO exposure and speeding up RTO during restoration. Other data that is not as important is still protected, however on a different frequency with other retention cycles; in other words, tiered data protection. By implementing tiered data protection, best practices, and various technologies including data footprint reduction (DFR) such as archive, compression and dedupe, in addition to local disk to disk (D2D), disk to disk to cloud (D2D2C), along with routine copies to offline media (removable HDDs or RHDDs) that go offsite, I'm able to stretch my data protection budget further. Not only is my data protection budget stretched further, I have more options to speed up RTO and better detail for recovery and enhanced RPOs.
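
For illustration, here is a minimal sketch of what such a tiered protection policy might look like expressed as data. The tier names, frequencies, retentions and targets are example assumptions; your own RPO and RTO requirements should drive the actual values.

```python
# Hypothetical tiers and schedules for illustration only.
protection_tiers = {
    "important": {
        "frequency": "every 4 hours",   # tighter RPO exposure
        "retention": "90 days",
        "targets": ["local D2D", "cloud (D2D2C)", "offsite RHDD"],
        "dfr": ["compression", "dedupe"],
    },
    "general": {
        "frequency": "daily",
        "retention": "30 days",
        "targets": ["local D2D", "cloud (D2D2C)"],
        "dfr": ["compression"],
    },
    "static/archive": {
        "frequency": "on change, then periodic verify",
        "retention": "multi-year",
        "targets": ["archive", "offsite RHDD"],
        "dfr": ["archive", "compression"],
    },
}

for tier, policy in protection_tiers.items():
    print(f"{tier}: protect {policy['frequency']}, keep "
          f"{policy['retention']}, to {', '.join(policy['targets'])}")
```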

If you are looking to avoid losing data, or loss of access, it is a simple equation in no particular order:

  • Strategy and design
  • Best practices and processes
  • Various technologies
  • Quality products
  • Robust service delivery
  • Configuration and implementation
  • SLO and SLA management metrics
  • People skill set and knowledge
  • Usage guidelines or terms of service (ToS)

Unfortunately, clouds like other technologies or solutions get a bad reputation or get blamed when something goes wrong. Sometimes it is the technology or service that fails; other times it is a combination of errors that resulted in loss of access or lost data. With clouds, as has been the case with other storage mediums and systems in the past, when something goes wrong and it has been hyped, chances are it will become a target for blame or finger pointing vs. determining what went wrong so that it does not occur again. For example, cloud storage has been hyped as easy to use: don't worry, just put your data there, you can get out of the business of managing storage as the cloud will do that magically for you behind the scenes.

The reality is that while cloud storage solutions can offload functions, someone is still responsible for making decisions on their usage and configuration that impact availability. What separates various providers is their ability to design in best practices, isolate and contain faults quickly, and have resiliency integrated as part of a solution, along with various SLAs aligned to the service level you are expecting, in an easy to use manner.

Does that mean the more you pay the more reliable and resilient a solution should be?
No, not necessarily, as there can still be risks including how the solution is used.

Does that mean low cost or for free solutions have the most risk?
No, not necessarily as it comes down to how you use or design around those options. In other words, while cloud storage services remove or mask complexity, it still comes down to how you are going to use a given service.

Shared responsibility for cloud (and non cloud) storage data protection
Anything important enough that you cannot afford to lose, or need quick access to, should be protected in different locations and on various mediums. In other words, balance your risk. Cloud storage service providers need to take responsibility for meeting service expectations for a given SLA and SLOs that you agree to pay for (unless free).

As the customer, you have the responsibility of following best practices supplied by the service provider, including reading the ToS. Part of the responsibility as a customer or consumer is to understand what the ToS, SLA and SLOs are for a given level of service that you are using. As a customer or consumer, this means doing your homework to be ready as a smart, educated buyer or consumer of cloud storage services.

If you are a vendor or value added reseller (VAR), your opportunity is to help customers with the acquisition process to make informed decisions. For VARs and solution providers, this can mean up selling customers to a higher level of service by making them aware of the risk and reward benefits, as opposed to focusing on cost. After all, if an order taker at McDonalds can ask Would you like to super size your order?, why can't you as a vendor or solution provider also have a value oriented up sell message?

Additional related links to read more and sources of information:

Choosing the Right Local/Cloud Hybrid Backup for SMBs
E2E Awareness and insight for IT environments
Poll: What Do You Think of IT Clouds?
Convergence: People, Processes, Policies and Products
What do VARs and Clouds as well as MSPs have in common?
Industry adoption vs. industry deployment, is there a difference?
Cloud conversations: Loss of data access vs. data loss
Clouds and Data Loss: Time for CDP (Commonsense Data Protection)?
Clouds are like Electricity: Don't be scared
Wit and wisdom for BC and DR
Criteria for choosing the right business continuity or disaster recovery consultant
Local and Cloud Hybrid Backup for SMBs
Is cloud disaster recovery appropriate for SMBs?
Laptop data protection: A major headache with many cures
Disaster recovery in the cloud explained
Backup in the cloud: Large enterprises wary, others climbing on board
Cloud and Virtual Data Storage Networking (CRC Press, 2011)
Enterprise Systems Backup and Recovery: A Corporate Insurance Policy

Poll:  Who is responsible for cloud storage data loss?

Taking action, what you should (or not) do
Don't be scared of clouds, however do your homework, be ready, look before you leap and follow best practices. Look into the service level agreements (SLAs) associated with a given cloud storage product or service. Follow best practices for how you or someone else will protect whatever data is put into the cloud.

For critical data or information, consider having a copy of that data in the cloud as well as at or in another place, which could be a different cloud, local, or offsite and offline. Keep in mind that the theme for critical information and data is not if, rather when, so consider what can be done to decrease the risk or impact of something happening; in other words, be ready.

Data put into the cloud can be lost, or loss of access to it can occur for some amount of time, just as happens with non cloud storage such as tape, disk or SSD. What impacts or minimizes your risk of using traditional local or remote as well as cloud storage are best practices and how it is configured, protected, secured and managed. Another consideration is that the type and quality of the storage product or cloud service can have a big impact. Sure, a quality product or service can fail; however, you can also design and configure to decrease those impacts.

Wrap up
Bottom line, do not be scared of cloud storage, however be ready: do your homework, review best practices, and understand the benefits and caveats, risk and reward. For those who want to learn more about cloud storage (public, private and hybrid) along with data protection, data management, data footprint reduction among other related topics and best practices, I happen to know of some good resources. Those resources, in addition to the links provided above, include Cloud and Virtual Data Storage Networking (CRC Press), which you can learn more about here as well as find at Amazon among other venues. Also, check out Enterprise Systems Backup and Recovery: A Corporate Insurance Policy by Preston De Guise (aka twitter @backupbear ), which is a great resource for protecting data.

Ok, nuff said for now

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

Trick or treat: 2011 IT Zombie technology poll

Warning: Do not be scared, however be ready for some trick or treat fun; it is, after all, the Halloween season.

I like new emerging technologies and trends along with Zombie technologies, you know, those technologies that have been declared dead yet are still being enhanced, sold and used.

Zombie technologies as a name may be new for some, while others will recognize the experience from the past. Zombie technologies are those that have been declared dead, yet are still alive, enabling productivity for the customers who use them and often profits for the vendors who sell them.

Zombie technologies

Some people consider a technology or trend dead once it hits the peak of hype as that can signal a time to jump to the next bandwagon or shiny new technology (or toy).

Others will see a technology as being dead when it is on the down slope of the hype curve towards the trough of disillusionment citing that as enough cause for being deceased.

Yet others will declare something dead while it matures, working its way through the trough of disillusionment, evolving from market adoption to customer deployment and eventually onto the plateau of productivity (or profitability).

Then there are those who see something as being dead once it finally is retired from productive use, or profitable for sale.

Of course, then there are those who just like to call anything new, or other than what they like, or that is outside of their comfort zone, dead. In other words, if your focus or area of interest is tied to new products, technology trends and their promotion, rest assured you had better be where the resources are being applied, viewing other things as dead, and thus you are probably not a fan of Zombie technologies (at least publicly).

On the other hand, if your area of focus is on leveraging technologies and products in a productive way, including selling things that are profitable without a lot of marketing effort, your view of what is dead or not will be different. For example, if you are risk averse, letting someone else be on the leading bleeding edge (unless you have a dual redundant HA blood bank attached to your environment), your view of what is dead or not will be much different from those promoting the newest trend.

Funny thing about being declared dead: often it is not the technology, implementation, research and development or customer acquisitions, rather simply a lack of promotion, marketing and general awareness. Take tape, for example, which has been a multi decade member of the Zombie technology list. Recently vendors banded together investing or spending on marketing awareness, reaching out to say tape is alive. Guess what, lo and behold, there was a flurry of tape activity in venues that normally might not be talking about tape. Funny how marketing resources can bring something back from the dead, including Zombie technologies becoming popular or cool to discuss again.

With the 2011 Halloween season upon us, it is time to take a look at this year's list of Zombie technologies. Keep in mind that being named a Zombie technology is actually an honor, in that it usually means someone wants to see it dead so that his or her preferred product or technology can take its place.

Here are the 2011 Zombie technologies.

Backup: Far from being dead, its focus is changing and evolving with a broader emphasis on data protection. While many technologies associated with backup have been declared dead along with some backup software tools, the reality is that it is time to modernize how backups and data protection are performed. Thus, backup is on the Zombie technology list and will live on, like it or not, until it is exorcised from your environment, replaced with a modern, resilient and flexible protected data infrastructure.

Big Data: While not declared dead yet, it will be soon by some creative marketer trying to come up with something new. On the other hand, there are those who have done big data analytics across different Zombie platforms for decades, which of course is a badge of honor. As for some of the other newer or shiny technologies, they will have to wait to join the big data Zombies.

Cloud: Granted, clouds are still on the hype cycle; some argue that cloud has reached its peak in terms of hype and is now heading down into the trough of disillusionment, which of course some see as meaning dead. In my opinion, cloud hype has peaked or is close to peaking, and real work is occurring, which means a gradual shift from industry adoption to customer deployment. Put a different way, clouds will be on the Zombie technology list for a couple of decades or more. Also, keep in mind that being on the Zombie technology list is an honor, indicating a shift toward adoption and less reliance on promotion or awareness fanfare.

Data centers: With the advent of the cloud, data centers or habitats for technology have been declared dead, yet there is continued activity in expanding or building new ones all the time. Even the cloud relies on data centers for housing the physical resources including servers, storage, networks and other components that make up a Green and Virtual Data Center or Cloud environment. Needless to say, data centers will stay on the Zombie list for some time.

Disk Drives: Hard disk drives (HDDs) have been declared dead for many years and more recently, due to the popularity of SSDs, have lost their sex appeal. Ironically, if tape is dead at the hands of HDDs, then how can HDDs be dead, unless of course they are on the Zombie technology list. What is happening is that, like tape, HDDs' role is changing as the technology continues to evolve, and they will be around for another decade or so.

Fibre Channel (FC): This is a perennial favorite, having been declared dead on a consistent basis for about two decades now, going back to the early 90s. While there are challengers, as there have been in the past, FC is far from dead as a technology, with 16 Gb (16GFC) now rolling out and a transition path to Fibre Channel over Ethernet (FCoE). My take is that FC will be on the Zombie list for several more years until finally retired.

Fibre Channel over Ethernet (FCoE): This is a new entrant and one uniquely qualified for being declared dead, as it is still in its infancy. Like its peer FC, which was also declared dead a couple of decades ago, FCoE is just getting started and looks to be on the Zombie list for a couple of decades into the future.

Green IT: I have heard that Green IT is dead; after all, it was hyped before the cloud era, which has also been declared dead by some, yet there remains a Green gap, or disconnect, between messaging and actual issues, and thus missed opportunities. For a dead trend, SNIA recently released their Emerald program, which consists of various metrics and measurements (remember, zombies like metrics to munch on) for gauging energy effectiveness of data storage. The hype cycle of Green IT and Green storage may be dead; however, Green IT in the context of a shift in focus to increased productivity using the same or less energy is underway. Thus Green IT and Green storage are on the Zombie list.

iPhone: With the advent of Droid and other smart phones, I have heard iPhones declared dead; granted, some older versions are. However, while Apple cofounder Steve Jobs has passed on (RIP), I suspect we will be seeing and hearing more about the iPhone for a few years more if not longer.

IBM Mainframe: When it comes to information technology (IT), the king of the Zombie list is the venerable IBM mainframe, aka zSeries. The IBM mainframe has been declared dead for over 30 years if not longer and will be on the Zombie list for another decade or so. After all, IBM keeps investing in the technology as people keep buying them, not to mention that IBM built a new factory to assemble them in.

NAS: Congratulations to Network Attached Storage (NAS) including Network File System (NFS) and Windows Common Internet File System (CIFS) aka Samba or SMB for making the Zombie technology list. This means of course that NAS in general is no longer considered an upstart or immature technology; rather it is being used and enhanced in many different directions.

PC: The personal computer was touted as killing off some of its fellow Zombie technology list members, including the IBM mainframe. With the advent of tablets, smart phones and virtual desktop infrastructures (VDI), the PC has been declared dead. My take is that while the IBM mainframe may eventually drop off the Zombie list in another decade or two if it finds something to do in retirement, the PC will be on the list for many years to come. Granted, the PC could live on even longer in the form of a virtual server, where the majority of guest virtual machines (VMs) are in support of Windows based PC systems.

Printers: How long have we heard that printers are dead? The day that printers are dead is the day that the HP board of directors should really consider selling off that division.

RAID: It's been over twenty years since the first RAID white paper and early products appeared. Back in the 90s, RAID was a popular buzzword and bandwagon topic; however, people have moved on to new things. RAID has been on the Zombie technology list for several years now while it continues to find itself being deployed from the high end of the market down into consumer products. The technology continues to evolve in both hardware as well as software implementations on a local and distributed basis. Look for RAID to be on the Zombie list for at least the next couple of decades while it continues to evolve; after all, there is still room for RAID 7, RAID 8 and RAID 9, not to mention moving into hexadecimal or double digit variants.

SAN: Storage Area Networks (SANs) have been declared dead and thus on the Zombie technology list before, and will be mentioned again well into the next decade. While the various technologies will continue to evolve, networking your servers to storage will also expand into different directions.

Tape: Magnetic tape has been on the Zombie technology list almost as long as the IBM mainframe, and it is hard to predict which one will last longer. My opinion is that tape will outlast the IBM mainframe, as it will be needed to retrieve the instructions on how to de-install those Zombie monsters. Tape has seen a resurgence of vendors spending some marketing resources, and to no surprise, there has been an increase in coverage about it being alive, even at Google. Rest assured, tape is very safe on the Zombie technology list for another decade or more.

Windows: Similar to the PC, Microsoft Windows has been touted in the past as causing other platforms to be dead; however, it has been on the Zombie list for many years now. Given that Windows is the most commonly virtualized platform or guest VM, I think we will be hearing about Windows on the Zombie list for a few decades more. There are particular versions of Windows, as with any technology, that have gone into maintenance or sustainment mode or even been discontinued.

Poll: What are the most popular Zombie technologies?

Keep in mind that a Zombie technology is one that is still in use, being developed or enhanced, sold usually at a profit and used typically in a productive way. In some cases, a declared dead or Zombie technology may only be in its infancy, having either just climbed over the peak of hype or come out of the trough of disillusionment. In other instances, the Zombie technology has been around for a long time yet continues to be used (or abused).

Note: Zombie voting rules apply which means vote early, vote often, and of course vote for those who cannot include those that are dead (real or virtual).

Ok, nuff said, enough fun, let's get back to work, at least for now

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

Industry trend: People plus data are aging and living longer

Let's face it, people and information are living longer, and thus there are more of each along with a strong interdependency between both.

People living and data being retained longer should not be a surprise if you take a step back and look at the bigger picture. There is no such thing as an information recession: more data is being generated, processed, moved and stored for longer periods of time, not to mention that data objects are also getting larger.

Industry trend and performance

By data objects getting larger, think about a digital photo taken on a typical camera ten years ago, whose resolution was lower and thus whose file size would have been measured in kilobytes (thousands of bytes). Today, megapixel resolutions are common on cell phones, smart phones and PDAs, and even larger with more robust digital and high definition (HD) still and video cameras. This means that a photo of the same object that resulted in a file of hundreds of Kbytes ten years ago would be measured in Megabytes today. With three dimensional (3D) cameras appearing along with higher resolutions, you do not need to be a rocket scientist or industry pundit to figure out what that growth trend trajectory looks like.
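
As a rough back of the envelope illustration, here is a small Python calculation of approximate JPEG file sizes from megapixel counts. The bytes per pixel and compression ratio are assumed round numbers for illustration; real cameras vary widely.

```python
def jpeg_size_mb(megapixels: float, bytes_per_pixel: float = 3,
                 compression_ratio: float = 10) -> float:
    """Rough JPEG file size: raw RGB bytes divided by an assumed
    compression ratio. Illustrative only; real results vary."""
    raw_bytes = megapixels * 1_000_000 * bytes_per_pixel
    return raw_bytes / compression_ratio / 1_000_000  # MB

# A circa-2001 1.3 megapixel camera vs. a circa-2011 12 megapixel one:
print(f"1.3 MP ~ {jpeg_size_mb(1.3):.2f} MB")  # a few hundred Kbytes
print(f"12 MP  ~ {jpeg_size_mb(12):.2f} MB")   # several Megabytes
```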

However it is not just the size of the data that is getting larger, there are also more instances along with copies of those files, photos, videos and other objects being created, stored and retained. Similar to data, there are more people now than ten years ago and some of those have also grown larger, or at least around the waistline. This means that more people are creating and relying on larger amounts of information being available or accessible when and where needed. As people grow older, the amount of data that they generate will naturally increase as will the information that they consume and rely upon.

Where things get interesting is that looking back in history, that is more than ten or even a hundred years, the trend is that there are more people, they are living longer, and they are generating larger amounts of data that is taking on new value or meaning. Heck, you can even go back from hundreds to thousands of years and see early forms of data archiving and storage with drawings on the walls of caves or other venues. I wonder, had the cost (and ease of use) of storing and keeping data been lower back then, would there have been more information saved? Or was it a case of being too difficult to use the then state of the art data and information storage medium, combined with limited capacities, so they simply ran out of storage and retention mediums (e.g. walls and ceilings)?

Let's come back to the present for a moment and another trend: data that in the past would have been kept offline, or best case near line, due to cost and other limits or constraints is finding its way online, either in public or private venues (or clouds if you prefer).

Thus the trend of expanding data life cycles with some types of data being kept online or readily accessible as its value is discovered.

Evolving data life cycle and access patterns

Here is an easy test, think of something that you may have googled or searched for a year or two ago that either could not be found or was very difficult to find. Now take that same search or topic query and see if anything appears and if it does, how many instances of it appear. Now make a note to do the same test again in a year or even six months and compare the results.

Now back to the future, however with an eye to the past, and things get even more interesting in that some researchers are saying that in centuries to come, we should expect to see more people not only living into their hundreds, but even longer. This follows the trend of average life expectancy continuing to increase over decades and centuries.

What if people start to live hundreds of years or even longer, what about the information they will generate and rely upon and its later life cycle or span?

More information and data

Here is a link to a post where a researcher sees that very far down the road, people could live to be a thousand years old, which brings up the question: what about all the data they generate and rely upon during their lifetime?

Ok, now back to the 21st century and it is safe to say that there will be more data and information to process, move, store and keep for longer periods of time in a cost effective way. This means applying data footprint reduction (DFR) such as archiving, backup and data protection modernization, compression, consolidation where possible, dedupe and data management including deletion where applicable along with other techniques and technologies combined with best practices.

Will you out live your data, or will your data survive you?

These are among other things to ponder while you enjoy your summer (northern hemisphere) vacation sitting on a beach or pool side enjoying a cool beverage perhaps gazing at the passing clouds reflecting on all things great and small.

Clouds: Don't be scared, however look before you leap and be prepared

Ok, nuff said for now.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

Are Hard Disk Drives (HDDs) getting too big?

Let's start out by clarifying something in terms of context or scope: big means storage capacity, as opposed to the physical packaging size of a hard disk drive (HDD), which is getting smaller.

So are HDDs in terms of storage capacity getting too big?

This question of whether HDD storage capacity is getting too big to manage comes up every few years, and it is the topic of Rick Vanover's (aka twitter @RickVanover ) Episode 27 podcast: Are hard drives getting too big?

Veeam community podcast guest appearance

As I discuss in this podcast with Rick Vanover of Veeam, with 2TB and even larger future 4TB, 8 to 9TB, 18TB, 36TB and 48 to 50TB drives not many years away, sure they are getting bigger (in terms of capacity); however, we have been here before (or at least some of us have). We discuss how back in the late 90s HDDs were going from 5.25 inch to 3.5 inch (now they are going from 3.5 inch to 2.5 inch), and 9GB was big and seen as a scary proposition by some for doing RAID rebuilds, drive copies or backups among other things, not to mention putting too many eggs (or data) in one basket.

In some instances vendors have been able to combine various technologies, algorithms and other techniques to do a RAID rebuild of a 1TB or 2TB drive in the same or less amount of time than it used to take to process a 9GB HDD. However, those improvements are not enough, and more will be needed, leveraging faster processors, IO buses and backplanes, HDDs with more intelligence and performance, and different algorithms and design best practices among other techniques that I discussed with Rick. After all, there is no such thing as a data recession, with more information to be generated, processed, moved, stored, preserved and served in the future.
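
As a simple worked example of why bigger drives strain rebuilds, here is a best case rebuild time calculation: capacity divided by a sustained rebuild rate. The rates are illustrative assumptions, not vendor specifications, and real rebuilds under host I/O load take longer.

```python
def rebuild_hours(capacity_gb: float, rate_mb_per_sec: float) -> float:
    """Best-case rebuild time: capacity divided by a sustained
    rebuild rate. Numbers are illustrative assumptions."""
    seconds = (capacity_gb * 1000) / rate_mb_per_sec
    return seconds / 3600

# A late-90s 9GB drive at ~10 MB/sec vs. a 2TB drive at ~100 MB/sec:
print(f"9 GB @ 10 MB/s  ~ {rebuild_hours(9, 10):.2f} hours")
print(f"2 TB @ 100 MB/s ~ {rebuild_hours(2000, 100):.2f} hours")
# The 2TB drive holds ~220x more data yet streams only ~10x faster,
# which is why better algorithms and designs also matter.
```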

If you are interested in data storage, check out Rick's podcast and hear some of our other discussion points, including how SSDs will help keep the HDD alive, similar to how HDDs are offloading tape from its traditional backup role, each with its changing or expanding focus among other things.

On a related note, here is a post about RAID remaining relevant yet continuing to evolve. We also talk about Hybrid Hard Disk Drives (HHDDs), where in a single sealed HDD device there is flash and DRAM along with a spinning disk, all managed by the drive's internal processor with no external special software or hardware needed.

Listen to comments by Greg Schulz of StorageIO on HDD, HHDD, SSD, RAID and more

Put on your headphones (or not) and check out Rick's podcast here (or via the headphone image above).

Thanks again Rick, really enjoyed being a guest on your show.

What's your take: are HDDs getting too big in terms of capacity, or do we need to leverage other tools, technologies and techniques to be more effective in managing an expanding data footprint, including use of data footprint reduction (DFR) techniques?

Ok, nuff said for now.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

Using Removable Hard Disk Drives (RHDDs)

Removable hard disk drives (RHDDs) are a form of removable media, which also includes magnetic tape, that addresses many common use cases. Usage scenarios include enabling bulk data portability for larger environments, or D2D backup where the media needs to be physically moved offsite for small and mid sized environments. RHDDs include, among others, those from Imation such as the Odyssey (which is what I use) and the ProStor RDX (OEMed by Imation and others). RHDDs and tape, along with other forms of portable media including those that use flash, being removable and portable, presumably should have some extra packaging protection to safeguard against static shock, in addition to supporting encryption capabilities.

Compared to disks including RHDDs, tape for most and particularly larger environments should have an overall lower media cost for parking, preserving and, when needed, serving inactive or archived data (e.g. the changing role of tape from day to day backup to archive). Of course your real costs will vary by use, in addition to how it is combined with data footprint reduction and other technologies.

A big benefit of RHDDs is that they are random access, meaning data can be searched and found quickly vs. tape media, which has great sequential or streaming capabilities if you have a system that can support that ability. The other benefit of RHDDs is that, depending on their implementation, they should plug and play with your systems, appearing as disk without any extra drivers, configuration or software tools, making for ease of use. Being removable, they can be used for portability, such as sending data to a cloud or MSP as part of an initial bulk copy, or sending data offsite or taking it home as part of an offsite backup, data protection or BC/DR strategy, as well as being used for archiving. The caveat with RHDDs is that their cost per TByte will generally be higher compared to tape, as well as requiring a docking station or specific drive interface depending on the specific product and configuration.

RHDDs are a great complement to traditional fixed or non removable disk, Hybrid Hard Disk Drive (HHDD) and Solid State Device (SSD) based storage, and they coexist with cloud or MSP backup and archive solutions. The smaller the environment, the more affordable RHDDs become vs. tape for backup and archive operations or when portability is required. Even if using a cloud or managed service provider (MSP) backup provider, network bandwidth costs, availability or performance may limit the amount of data that can be moved in a cost effective way. For example, place an archive and gold or master copy of your static data on an RHDD that may be kept in a safe onsite or offsite place, and then send data that routinely changes to the cloud or MSP provider (think full local and offsite, plus partial full and incremental in the cloud).

By leveraging archiving and data footprint reduction (DFR) techniques including dedupe and compression, you can stretch your budget by sending less data to cloud or MSP services while using removable media for data protection. You would be surprised how many TBytes of data can be kept in a safe deposit box. For my own business, I have used RHDDs for several years to keep gold master copies as well as archives offsite as part of a disk to disk (D2D) or D2D2RHDD strategy. The data protection strategy is also complemented by sending active data to a cloud backup MSP (encrypted of course). It might be belt and suspenders, however it is also eating my own dog food, practicing what I talk about, and the approach has proven itself a few times.
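
For illustration, here is a minimal sketch in Python of such a D2D2RHDD routine. The paths are hypothetical, the cloud step is a placeholder for whatever encrypted backup client you use, and this is a sketch of the approach rather than production code.

```python
import shutil
import subprocess
from pathlib import Path

# Hypothetical paths for illustration; adjust for your environment.
SOURCE = Path("/data")        # working data
D2D = Path("/backup/d2d")     # local disk-to-disk target
RHDD = Path("/mnt/rhdd")      # docked removable HDD (gold/archive copy)

def protect():
    """Sketch of a D2D2RHDD routine: copy changed files to a local
    D2D target, mirror the gold copy to a docked RHDD, then hand
    active data to a cloud backup tool (encrypted by that tool)."""
    for src in SOURCE.rglob("*"):
        if not src.is_file():
            continue
        dst = D2D / src.relative_to(SOURCE)
        if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime:
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)  # D2D copy of new or changed files
    if RHDD.exists() and D2D.exists():  # only when the RHDD is docked
        shutil.copytree(D2D, RHDD / "gold", dirs_exist_ok=True)
    # Placeholder for your encrypted cloud/MSP backup client.
    subprocess.run(["echo", "run cloud backup client here"], check=True)

if __name__ == "__main__":
    protect()
```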

Here are some related links to more material:
Removable disk drives vs. tape storage for small businesses
The pros and cons of removable disk storage for small businesses
Removable storage media appealing to SMBs, but with caveats
StorageIO Momentus Hybrid Hard Disk Drive (HHDD) Moments

Ok, nuff said for now

Cheers gs

Greg Schulz – Author The Green and Virtual Data Center (CRC), Resilient Storage Networks (Elsevier) and coming summer 2011 Cloud and Virtual Data Storage Networking (CRC)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

Tape talk time (tape summit and tape is alive, for some)

Welcome to the tape summit resources micro site, with links for those who are interested in magnetic tape for backup, archive, BC, DR, big and little data.

For being a declared dead or zombie technology (here, here or here), tape remains very much alive; however, its role is changing. There is no disputing that hard disk drives (HDDs) are continuing to expand their role in data protection, including backup/restore, BC and DR, where tape has been used for decades.

What is also occurring is that tape's role is changing from day to day backup to that of longer term data preservation, including archiving, with more data stored on tape today than at any point in the past, at a lower cost. In fact, the continued reduced cost per tape and improved capacity as well as utilization has worked against tape from a marketing competitive standpoint. For example, if you look at a chart showing tape (media and drive) revenues, you see a decline, similar to what was seen a couple of years ago for HDDs.

What is not shown on some charts is how many units (drives or media) shipped with more capacity for a given price (again, what was reported for HDDs a few years ago) even as net capacity shipped had increased. Vendors of tape technology have also had a rather low profile, particularly those with other technologies that have received more marketing resources (people, time, money). After all, if a product is on a plateau of productivity and profitability, why spend time or effort on extensive marketing or promotion vs. directing resources to get new items into the market.

As a result, those looking to make a case that tape is on the decline based on revenues, in order to convince customers to move away from the technology, have a marketing freebie. Recently Oracle announced a new large capacity tape drive and media, following on previous announcements of an enhanced LTO roadmap and future 35TByte tape capabilities announced in January 2010 by Fujifilm and IBM.

For those who are interested, following are some links to various topics including how SSD, HDD and tape can coexist, complementing each other in different roles or functions. As for those who do not like tape, feel free to read on if you like, as there is also material on SSD, HDD, dedupe, cloud, data protection and other topics.

Some previous blog posts:

Here are some additional articles, commentary and reports pertaining to tape related topics:

Something tells me we will be hearing, reading or watching more about tape being alive in the months to come.

Nuff said for now

Cheers gs


Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Have VTLs or VxLs become Zombies, declared dead yet still alive?

Have you heard or read the reports and speculation that VTLs (Virtual Tape Libraries) are dead?

It seems that in IT the all too popular trend is to declare something dead so that your new product or technology can have a chance of making it into the market, or perhaps be seen in a better light.

Sometimes this approach works to temporarily freeze the market until common sense and clarity return, or until something else fun to talk about comes along; in other cases, the messages can fall on deaf ears.

The approach of declaring something dead tends to play well for those who like shiny new toys (SNT) or new shiny toys (NST) and being on the popular, cool, trendy bandwagon.

Not surprisingly, while some actual IT customers can fall into the SNT or NST syndrome, it's often the broader industry, including media, bloggers, analysts, consultants and other self proclaimed or anointed pundits as well as vendors, who latch on to the declare it dead movement. After all, who wants to talk about something that is old, boring and already being sold to paying customers who are using it. Now this is not a bad thing, as we need a balance of up and coming challengers to keep the status quo challenged; likewise we need a balance of the new to avoid death grips on the old and what is working.

Likewise, many IT customers, particularly larger ones, tend to be very risk averse and conservative with their budgets, protecting their investments; thus they may only go leading bleeding edge if there is a dual redundant blood bank with a backup on hot standby (that's some HA humor BTW).

Another reason for declaring items dead in support of SNT and NST is that while many of the commonly declared dead items are on the proverbial plateau of productivity for IT customers, that can also mean they are on the plateau of profitability for the vendors.

However, not all good things last, and at some time there is the need to transition from the old to the new. This is where things like virtualization, including virtual tape libraries, virtual disk libraries, virtual storage libraries or whatever you want to call a VxL (more on what a VxL is in a moment), can come into play.

I realize that for some, particularly those who like to grasp on to SNT and NST and ride the dead pool bandwagons, this will probably appear snarky or cynical, which is fine; after all, some of you should be laughing all the way to the bank, and if not, you may in fact be missing out on an opportunity for playing in the dead pool marketing game.

Now back to VxL.

In the case of VTLs, for some it is the T word that bothers them, you know, T as in Tape, which is not an SNT or NST in an age where SSD has supposedly killed the disk drive, which allegedly terminated tape (yeah right). Sure, tape is not being used as much for backup as it has been in the past, with its role shifting to that of longer term retention, something that it is well suited for.

For tape fans (or cynics) you can read more here, here and here. However, there is still a large amount of backup/restore along with other data protection or preservation (e.g. archiving) processing (software tools, processes, procedures, skill sets, management tools) that expects to see tape.

Hence this is where VTLs or VxLs come into play, leveraging virtualization in a Life Beyond Consolidation (and here) scenario, providing abstraction, transparency, agility and emulation, and IMHO they are still very much alive and evolving.

Ok, for those who do not like tape or do not believe in its continued existence and evolving role, substitute the T (tape) with an X and you get a VxL. That is, plug in whatever X word makes you happy or marketable, or a Shiny New TLA. For example: Virtual Disk Library, Virtual Storage Library, Virtual Backup Library, Virtual Compression Library, Virtual Dedupe Library, Virtual ILM Library, Virtual Archive Library, Virtual Cloud Library and so forth. Granted, some VxLs only emulate tape and hence are VTLs, while others support NAS and other protocols (or personalities), not to mention functionality ranging from replication to DFR as well as automated policy management.

However, keep in mind that whether your preference is VTL, VxL or whatever other buzzword bingo name you want to use or come up with, look at how virtualization in the form of abstraction, transparency and emulation can bridge the gap between the new (disk based data protection combined with DFR, or Data Footprint Reduction) and the old (existing backup/restore, archive or other management tools and processes).

Here are some additional links pertaining to VTLs (excuse me, VxLs):

  • Virtual tape libraries: Old backup technology holdover or gateway to the future?
  • Not to mention here, here, here, here or here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Data footprint reduction (Part 2): Dell, IBM, Ocarina and Storwize

Over the past couple of weeks there has been a flurry of IT industry activity around data footprint impact reduction, with Dell buying Ocarina and IBM acquiring Storwize. For those who want the quick (compacted, reduced) synopsis of what Dell buying Ocarina as well as IBM acquiring Storwize means, read the first post in this two part series as well as some of my comments here and here.

This piece and its companion in part I of this two part series are about expanding the discussion to the much larger opportunity for vendors or VARs in overall data footprint impact reduction, beyond where they are currently focused. Likewise, this is about IT customers realizing that there are more opportunities to address data and storage optimization across their entire organization using various techniques, instead of just focusing on backup or VMware virtual servers.

Who are Ocarina and Storwize?
Ocarina is a data and storage management software startup focused on data footprint reduction using a variety of approaches, techniques and algorithms. They differ from the traditional data dedupers (e.g. Asigra, Bakbone, Commvault, EMC Avamar, Data Domain and Networker, Exagrid, Falconstor, HP, IBM Protectier and TSM, Quantum, Sepaton and Symantec among others) by looking at data footprint reduction beyond just backup.

This means looking at how to reduce the data footprint across different types of data, including videos, images as well as text based documents among others. As a result, the market sweet spot for Ocarina is general data footprint reduction, including static along with active data spanning entertainment, video surveillance or gaming, reference data, web 2.0 and other bulk storage application data needs (this should complement Dell's recent Exanet acquisition).

What this means is that Ocarina is very well suited to address the rapidly growing amount of unstructured data that may not otherwise be handled as efficiently by dedupe alone.

Storwize is a data and storage management startup focused on data footprint reduction using inline compression, with an emphasis on maintaining performance for reads as well as writes of unstructured as well as structured database data. Consequently, the market sweet spot for Storwize is boosting the capacity of existing NAS storage systems from different vendors without negatively impacting performance. The trade off of the Storwize approach is that you do not get the spectacular data reduction ratios associated with backup centric or focused dedupe; however, you maintain the performance associated with online storage that some dedupers dream of.
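
A quick bit of arithmetic shows the trade off. The ratios below are assumed illustrative figures (inline compression on active data is often cited around 2:1, while backup centric dedupe can reach 10:1 or more on repetitive backup streams), not measured results for either product.

```python
def effective_tb(raw_tb: float, reduction_ratio: float) -> float:
    """Effective capacity = raw capacity x reduction ratio."""
    return raw_tb * reduction_ratio

raw = 10  # TB of usable NAS capacity (illustrative)
print(f"2:1 compression : {effective_tb(raw, 2):.0f} TB effective")
print(f"10:1 dedupe     : {effective_tb(raw, 10):.0f} TB effective")
# The catch: the 10:1 figure applies to repetitive backup data, not
# active primary data, and heavy dedupe processing can cost performance.
```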

Both Dell and IBM have existing dedupe solutions for general purpose as well as backup, along with other data footprint impact reduction tools (either owned or via partners). Now they are both expanding their focus and reach, similar to what others such as EMC, HP, NetApp, Oracle and Symantec are doing. What this means is that someone at Dell and IBM sees that there is much more to data footprint impact reduction than just a focus on dedupe for backup.

Wait, what does all of this discussion (or read here for background issues, challenges and opportunities) about unstructured data and changing access lifecycles have to do with dedupe, Ocarina and Storwize?

Continue reading, as this is about the expanding opportunity for data footprint reduction across entire organizations. That is, more data is being kept online, and the expanding data footprint impact needs to be addressed to meet business objectives using various techniques balancing performance, availability, capacity and energy or economics (PACE).

What does all of this have to do with IBM buying Storwize and Dell acquiring Ocarina?
If you have not pieced this together yet, let me net it out.

This is about the opportunity to address the organization wide expanding data footprint impact across all applications, types of data as well as tiers of storage to support business growth (more data to store) while maintaining QoS yet reduce per unit costs including management.

This is about expanding the story to the broader data footprint impact reduction opportunity from the more narrowly focused backup and dedupe discussion, which is still in its infancy on a relative basis compared to its full market potential (read more here).

Now are you seeing where this is going and fits?

Does this mean IBM and Dell will defocus on their existing dedupe product lines or partners?
I do not believe so, at least as long as their respective revenue prevention departments are kept on the sidelines and off the field of play. What I mean by this is that the challenge for IBM and Dell is similar to what others such as EMC are faced with, having diverse portfolios or technology toolboxes. The challenge is messaging to the bigger issues, then aligning the right tool to the task at hand to address given issues and opportunities, instead of being singularly focused on a specific product and causing revenue prevention elsewhere.

As an example, for backup I would expect Dell to continue to work with its existing dedupe backup centric partners and technologies while finding new opportunities to leverage the Ocarina solution. Likewise, I would expect IBM to continue to show customers where Tivoli software based dedupe, ProtecTIER (aka the deduper formerly known as Diligent) or other target based dedupe fits, and to expand into other data footprint impact areas with Storwize.

Does this change the playing field?
IMHO these moves, as well as some previous moves by the likes of EMC and NetApp among others, are examples of expanding the scope and dimension of the playing field. That is, the focus is much more than just dedupe for backup or for virtual machines (e.g. VMware vSphere or Microsoft Hyper-V).

This signals a growing awareness of the much larger and broader opportunity around organization-wide data footprint impact reduction. In the broader context, some applications or data get compressed in application software such as databases, file systems, operating systems or even hypervisors, as well as in networks using protocol or bandwidth optimizers, whether via inline compression or post-processing techniques, as has been the case with streaming tape devices for some time.

This also means that while the primary dedupe focus or marketing angle up until recently has been reduction ratios, data transfer rates also become important to meet the needs of time or performance sensitive applications.

Hence the role of policy-based data footprint reduction, where the right tool or technique is applied to meet specific service requirements. For those vendors with a diverse data footprint impact reduction toolkit including archive, compression, dedupe and thin provisioning among other techniques, I would expect to hear expanded messaging around the theme of applying the right tool to the task at hand.
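
As a rough sketch of what such policy-based selection could look like in Python (the profiles, names and rules here are hypothetical, not any vendor's product logic):

from dataclasses import dataclass

@dataclass
class DataProfile:
    active: bool             # online/primary data vs. backup or archive copies
    latency_sensitive: bool  # does the application need consistent response time?
    redundancy: str          # "high" (repeated fulls) or "low" (mostly unique data)

def choose_technique(p: DataProfile) -> str:
    """Map a service requirement to a data footprint reduction technique."""
    if not p.active and p.redundancy == "high":
        return "dedupe"              # backup streams: reduction ratio matters most
    if p.active and p.latency_sensitive:
        return "inline compression"  # keep read/write performance, accept a modest ratio
    if not p.active:
        return "archive + compress"  # cold, mostly unique data: move it and shrink it
    return "thin provisioning"       # growth headroom without touching the data

print(choose_technique(DataProfile(active=False, latency_sensitive=False, redundancy="high")))
print(choose_technique(DataProfile(active=True, latency_sensitive=True, redundancy="low")))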

Does this mean Dell bought Ocarina to accessorize EqualLogic?
Perhaps, however that would then raise the question of why EqualLogic needs accessorizing. Granted, there are many EqualLogic along with other Dell sold storage systems attached to Dell and other vendors' servers operating as NFS or Windows CIFS file servers that are candidates for Ocarina. However there are also many environments that do not yet include Dell EqualLogic solutions, where Ocarina is a means for Dell to extend its reach, enabling those organizations to do more with what they have while supporting growth.

In other words, Ocarina can be used to accessorize, or it can be used to generate and create pull through for various Dell products. I also see a very strong affinity and opportunity for Dell to combine its recent Exanet NAS storage clustering software with Dell servers and storage to create bulk or scale-out solutions similar to what HP and other vendors have done. Of course what Dell does with the Ocarina software over time, where it integrates it into Dell products as well as OEMs it to others, should be interesting to watch or speculate upon.

Does this mean IBM bought Storwize to accessorize XIV?
Well, I guess if you put a gateway (or software on a server, which is the same thing) in front of XIV to transform it into a NAS system, then sure, Storwize could be used to increase the net usable capacity of the XIV installed base. However that is a lot of work and cost for what is, on a relative basis, a small footprint, yet it is a viable option nevertheless.

IMHO IBM has much more of a play, perhaps a home run by walking before running: placing Storwize in front of its existing large installed base of NetApp N series (not to mention targeting NetApp's own installed base) as well as complementing its SONAS solutions. From there, as IBM gets its legs and mojo, it could go on the attack against other vendors' NAS solutions with an efficiency story, similar to how IBM server groups target other vendors' server business for takeout opportunities, except in a complementary manner.

Longer term I would not be surprised to see IBM continue development of the block based (as well as file based) IP in the Storwize product for deployment in solutions ranging from SVC to its own or OEM based products, along with articulating a comprehensive data footprint reduction solution portfolio. What will be important is for IBM to articulate which solution to use when, where, why and how without confusing its customers, partners and the rest of the industry (something that Dell will also have to do).

Some links for additional reading on the above and related topics

Wrap up (for now)

Organizations of all shapes and sizes are encountering some form of growing data footprint impact that currently, or soon will, need to be addressed. Given that different applications and types of data, along with their associated storage mediums or tiers, have various performance, availability, capacity, energy and economic characteristics, multiple data footprint impact reduction tools or techniques are needed. What this all means is that the focus of data footprint reduction is expanding beyond just dedupe for backup or other early deployment scenarios.

Note that this means dedupe has an even brighter future beyond where it is currently focused, which is still only scratching the surface of potential market adoption, as was discussed in part 1 of this series.

However, this also means that dedupe is not the only solution for all data footprint reduction scenarios. Other techniques including archiving, compression, data management, thin provisioning, data deletion, tiered storage and consolidation will start to gain respect, coverage, discussion and debate.

Bottom line: use the most applicable technologies or combinations, along with best practices, for the task and activity at hand.

For some applications, reduction ratios are the important focus, so the tools or modes of operation that achieve those ratios apply.

Likewise, for other applications where the focus is on performance with some data reduction benefit, tools are optimized for performance first and reduction second.
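
A quick back-of-the-envelope comparison (hypothetical numbers, sketched in Python) shows why both metrics matter: for a 1 TB nightly backup, the higher ratio tool wins on capacity while the faster tool wins on the backup window.

# Hypothetical numbers only: capacity vs. time for reducing a 1 TB backup.
size_gb = 1000
for name, ratio, mb_per_s in [("backup-centric dedupe", 10, 100),
                              ("performance-first compression", 2, 500)]:
    stored_gb = size_gb / ratio
    hours = size_gb * 1000 / mb_per_s / 3600
    print(f"{name}: stores {stored_gb:.0f} GB, ingests in {hours:.1f} hours")
# -> dedupe stores 100 GB but takes ~2.8 hours; compression stores 500 GB in ~0.6 hours.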

Thus I expect messaging from some vendors to adjust (expand) to the capabilities they have in their toolboxes (product portfolios).

Consequently, IMHO some of the backup centric dedupe solutions may find themselves in niche roles in the future unless they can diversify. Vendors with multiple data footprint reduction tools will also do better than those with only a single function or focused tool.

However, for those who have only a single tool or perhaps a couple, well, guess what the approach and messaging will be. After all, if all you have is a hammer, everything looks like a nail; if all you have is a screwdriver, well, you get the picture.

On the other hand, if you are still not clear on what all this means, send me a note, give me a call, post a comment or a tweet, and I will be happy to discuss it with you.

Oh, FWIW, if interested, disclosure: Storwize was a client a couple of years ago.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Industry Trends and Perspectives: Tape, Disk and Dedupe Coexistence

This is part of an ongoing series of short industry trends and perspectives blog post briefs.

These short posts complement other longer posts along with traditional industry trends and perspective white papers, research reports and solution brief content found at www.storageio.com/reports.

The topic of this post is a trend that I am seeing and hearing about during discussions with IT professionals pertaining to how tape is still alive despite common industry FUD.

Not only is tape still very much alive, with recent enhancements including LTO5 and an extended roadmap, it is also finding new roles. In addition to being deployed in new roles, tape is coexisting with and complementing dedupe or other disk based backup and data protection approaches, and vice versa.

Hearing that tape is alive in the same sentence as dedupe deployments continuing may sound counterintuitive if you only listen to some vendor pitches.

However, if you talk with IT customers, particularly those in larger environments, or with VARs that focus on providing complete solution offerings, you will hear a different tune than tape is dead and dedupe rules. Tape is still alive; however, its role is changing. Watch for more on this and related topics.

That is all for now, hope you find this ongoing series of current and emerging Industry Trends and Perspectives interesting.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Recent tips, videos, articles and more update V2010.1

Realizing that some prefer blogs to websites to twitter to other venues, here are some recent links to articles, tips, videos, webcasts and other content that have appeared in different venues since August 2009.

  • i365 Guest Interview: Experts Corner: Q&A with Greg Schulz December 2009
  • SearchCIO Midmarket: Remote-location disaster recovery risks and solutions December 2009
  • BizTech Magazine: High Availability: A Delicate Balancing Act November 2009
  • ESJ: What Comprises a Green, Efficient and Effective Virtual Data Center? November 2009
  • SearchSMBStorage: Determining what server to use for SMB November 2009
  • SearchStorage: Performance metrics: Evaluating your data storage efficiency October 2009
  • SearchStorage: Optimizing capacity and performance to reduce data footprint October 2009
  • SearchSMBStorage: How often should I conduct a disaster recovery (DR) test? October 2009
  • SearchStorage: Addressing storage performance bottlenecks in storage September 2009
  • SearchStorage AU: Is tape the right backup medium for smaller businesses? August 2009
  • ITworld: The new green data center: From energy avoidance to energy efficiency August 2009
  • Video and podcasts include:
    December 2009 Video: Green Storage: Metrics and measurement for management insight
    Discussion between Greg Schulz and Mark Lewis of TechTarget about the importance of metrics and measurement to gauge productivity and efficiency for Green IT and enabling virtual information factories. Click here to watch the video.

    December 2009 Podcast: iSCSI SANs can be a good fit for SMB storage
    Discussion between Greg Schulz and Andrew Burton of TechTarget about iSCSI and other related technologies for SMB storage. Click here to listen to the podcast.

    December 2009 Podcast: RAID Data Protection Discussion
    Discussion between Greg Schulz and Andrew Burton of TechTarget about RAID data protection, techniques and technologies. Click here to listen to the podcast.

    December 2009 Podcast: Green IT, Efficiency and Productivity Discussion
    Discussion between Greg Schulz and Jon Flower of Adaptec about Green IT, energy efficiency, intelligent power management (IPM) also known as MAID 2.0, and other forms of optimization techniques including SSD. Click here to listen to the podcast sponsored by Adaptec.

    November 2009 Podcast: Reducing your data footprint impact
    Even though many enterprise data storage environments are coping with tightened budgets and reduced spending, overall net storage capacity is increasing. In this interview, Greg Schulz, founder and senior analyst at StorageIO Group, discusses how storage managers can reduce their data footprint. Schulz touches on the importance of managing your data footprint on both online and offline storage, as well as the various tools for doing so, including data archiving, thin provisioning and data deduplication. Click here to listen to the podcast.

    October 2009 Podcast: Enterprise data storage technologies rise from the dead
    In this interview, Greg Schulz, founder and senior analyst of the Storage I/O group, classifies popular technologies such as solid-state drives (SSDs), RAID and Fibre Channel (FC) as “zombie” technologies. Why? These are already set to become part of standard storage infrastructures, says Schulz, and are too old to be considered fresh. But while some consider these technologies to be stale, users should expect to see them in their everyday lives. Click here to listen to the podcast.

    Check out the Tips, Tools and White Papers, and News pages for additional commentary, coverage and related content or events.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Storage Efficiency and Optimization – The Other Green

    For those of you in the New York City area, I will be presenting live in person at the Storage Decisions conference September 23, 2009: The Other Green, Storage Efficiency and Optimization.

    Throw out the "green" buzzword, and you're still left with the task of saving or maximizing use of space, power, and cooling while stretching available IT dollars to support growth and business sustainability. For some environments the solution may be consolidation, while others need to maintain quality of service response time, performance and availability, necessitating faster, energy efficient technologies to achieve optimization objectives.

    To address these and other related issues, you can turn to the cloud, virtualization, intelligent power management, data footprint reduction and data management, not to mention various types of tiered storage and performance optimization techniques. The session will look at various techniques and strategies to optimize online active or primary as well as near-line or secondary storage environments during tough economic times, as well as to position for future growth; after all, there is no such thing as a data recession!

    Topics, technologies and techniques that will be discussed include among others:

    • Energy efficiency (strategic) vs. energy avoidance (tactical), what's different between them
    • Optimization and the need for speed vs. the need for capacity, finding the right balance
    • Metrics & measurements for management insight, what the industry is doing (or not doing)
    • Tiered storage and tiered access including SSD, FC, SAS, tape, clouds and more
    • Data footprint reduction (archive, compress, dedupe) and thin provision among others
    • Best practices, financial incentives and what you can do today

    This is a free event for IT professionals; however, I hear space is limited, so learn more and register here.

    For those interested in broader IT data center and infrastructure optimization, check out the ongoing seminar series The Infrastructure Optimization and Planning Best Practices (V2.009) – Doing more with less without sacrificing storage, system or network capabilities, which continues September 22, 2009 with a stop in Chicago. This is also a free seminar; register and learn more here or here.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Upcoming Out and About Events

    Following up on previous Out and About updates (here and here) of where I have been, here's where I'm going to be over the next couple of weeks.

    On September 15th and 16th, 2009, I will be the keynote speaker, along with leading a deep dive discussion around data deduplication, in Minneapolis, MN and Toronto, ON. Free seminar; register and learn more here.

    The Infrastructure Optimization and Planning Best Practices (V2.009) – Doing more with less without sacrificing storage, system or network capabilities seminar series continues September 22, 2009 with a stop in Chicago. Free seminar; register and learn more here.

    On September 23, 2009 I will be in New York City at the Storage Decisions conference, participating in Ask the Experts during the expo session as well as presenting The Other Green: Storage Efficiency and Optimization.

    Throw out the "green" buzzword, and you're still left with the task of saving or maximizing use of space, power, and cooling while stretching available IT dollars to support growth and business sustainability. For some environments the solution may be consolidation, while others need to maintain quality of service response time, performance and availability, necessitating faster, energy efficient technologies to achieve optimization objectives. To address these and other related issues, you can turn to the cloud, virtualization, intelligent power management, data footprint reduction and data management, not to mention various types of tiered storage and performance optimization techniques. The session will look at various techniques and strategies to optimize online active or primary as well as near-line or secondary storage environments during tough economic times, as well as to position for future growth; after all, there is no such thing as a data recession!

    Topics, technologies and techniques that will be discussed include among others:

    • Energy efficiency (strategic) vs. energy avoidance (tactical)
    • Optimization and the need for speed vs. the need for capacity
    • Metrics and measurements for management insight
    • Tiered storage and tiered access including SSD, FC, SAS and clouds
    • Data footprint reduction (archive, compress, dedupe) and thin provision
    • Best practices, financial incentives and what you can do today

    Free event, learn more and register here.

    Check out the events page for other upcoming events, and I hope to see you this fall while I'm out and about.

    Cheers – gs

    Greg Schulz – StorageIOblog, twitter @storageio Author “The Green and Virtual Data Center” (CRC)

    IBM Out, Oracle In as Buyer of Sun

    Following on the heels of IBM's talks with Sun that broke down a week or so ago, today's news is that Oracle has agreed to buy Sun, extending Larry Ellison's software empire as well as boosting his hardware empire from fast sport platforms to server, storage and other IT data center hardware.

    What’s the real play and story here is certainly open to discussion and debate, is it good, is it bad, who are the winners and losers will be determined as the dust settles, not to mention as responses from across the industry, not to mention new product announcements and enhances slated by some for as early as this week. What if any role does Cisco wanting to get into servers and maybe storage play, does Oracle want to make sure they remain at the big table?

    Regarding discussions of this deal and what it means, the twitter world has been abuzz this morning; click here to see and follow some of the conversations, perspectives and insights being exchanged.

    Nuff said for now; it's time to get ready to head off to the airport, as I'm doing several speaking events and keynote sessions this week on the right coast while the left coast is abuzz with the Sun and Oracle activity.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved