Who is responsible for vendor lockin?

Updated 1/21/2018

Is vendor lockin caused by vendors, their partners or by customers?

In my opinion vendor lockin can be from any or all of the above.

What is vendor lockin?

Vendor lockin is a situation where a customer becomes dependent or locked in by choice or other circumstances to a particular supplier or technology.

What is the difference between vendor lockin, account control and stickiness?

I'm sure some marketing wiz or sales type will be happy to explain the subtle differences. Generally speaking, lockin, stickiness and account control are essentially the same, or at least strive for similar results. For example, vendor lockin carries a negative stigma for some, while vendor stickiness may be a newer term, perhaps even sounding cool, and thus not a concern. Remember the Mary Poppins song, a spoonful of sugar makes the medicine go down? In other words, sometimes using a different term such as sticky instead of vendor lockin helps make the situation taste better.

Is vendor lockin or stickiness a bad thing?

No, not necessarily, particularly if you the customer are aware and still in control of your environment.

I have had different views of vendor lockin over the years.

These views varied from my time as a customer working in IT organizations, to being a vendor, and later an advisory analyst and consultant. Even as a customer, my view of lockin varied depending upon the situation. In some cases lockin was the result of upper management having a favorite vendor, which meant that when a change occurred further up the ranks, the vendor lockin would sometimes shift as well. On the other hand, I also worked in IT environments where we used multiple vendors for different technologies to maintain competition across suppliers.

As a vendor, I was involved with customer sites that were best of breed while others were aligned around a single vendor or a few vendors. Some were aligned around technologies from the vendors I worked for and others were aligned with someone else's technology. In some cases as a vendor we were locked out of an account until there was a change of management or mandates at those sites. In other cases where lockout occurred, once our product was OEMed or resold by an incumbent vendor, the lockout ended.

Some vendors do a better job of establishing lockin, account management, account control or stickiness than others. Some vendors may try to lock a customer in, and thus there is a perception that vendors lock customers in. Likewise, there is a perception that vendor lockin only occurs with the largest vendors; however, I have also seen it occur with smaller or niche vendors who gain control of their customers, keeping larger or other vendors out.

Sweet, sticky Sue Bee Honey

Vendor lockin or stickiness is not always the result of the vendor, VAR, consultant or service provider pushing a particular technology, product or service. Customers can allow or enable vendor lockin as well, either intentionally via alliances to drive some business initiative, or accidentally by giving up account management control. Consequently vendor lockin is not a bad thing if it brings mutual benefit to the supplier and consumer.

On the other hand, if lockin causes hardship for the consumer while only benefiting the supplier, then it can be a bad thing for the customer.

Do some technologies lend themselves more to vendor lockin vs others?

Yes, some technologies lend themselves more to stickiness or lockin than others. For example, big ticket or expensive hardware is often seen as vulnerable to vendor lockin; however, software is where I have seen the most stickiness or lockin.

What about virtualization solutions? After all, the golden rule of virtualization is that whoever controls the virtualization (hardware, software or services) controls the gold. This means that vendor lockin could form around a particular hypervisor or its associated management tools.

How about bundled solutions, or what are now called integrated vendor technology stacks, including PODs (here or here) or vBlocks among others? How about databases, do they enable or facilitate vendor lockin? Perhaps; just like virtualization, operating systems, networking technology, storage systems, data protection or other solutions, if you let the technology or vendor manage you, then you enable vendor lockin.

Where can vendor lockin or stickiness occur?

Application software, databases, data or information tools, messaging or collaboration, and infrastructure resource management (IRM) tools ranging from security to backup to hypervisors, operating systems and email. Let's not forget about hardware, which has become more interoperable, from servers, storage and networks to integrated marketing or alliance stacks.

Another opportunity for lockin or stickiness can be in the form of drivers, agents or software shims, where you become hooked on a feature or functionality that then drives future decisions. In other words, lockin can occur in different locations, in traditional IT as well as in managed services, virtualization or cloud environments, if you let it occur.
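
One practical defense against that kind of hook is to keep application code behind a thin, vendor neutral abstraction, so that swapping suppliers means writing a new adapter rather than rewriting applications. A minimal Python sketch of the idea (the interface and vendor names here are hypothetical, purely for illustration):

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """Vendor-neutral interface: application code depends on this,
    not on any one vendor's SDK, driver or shim."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class VendorABackend(StorageBackend):
    """Hypothetical vendor A adapter; in practice this would wrap
    that vendor's proprietary API, here simulated with a dict."""
    def __init__(self):
        self._store = {}
    def put(self, key: str, data: bytes) -> None:
        self._store[key] = data
    def get(self, key: str) -> bytes:
        return self._store[key]

def save_report(backend: StorageBackend, name: str, body: bytes) -> None:
    # Application logic stays vendor-agnostic; only the adapter
    # knows which supplier is actually behind the interface.
    backend.put(name, body)

backend = VendorABackend()
save_report(backend, "q3.txt", b"quarterly numbers")
print(backend.get("q3.txt"))  # b'quarterly numbers'
```

The same adapter pattern applies to hypervisors, backup targets or cloud APIs; any remaining lockin then lives in one small module instead of everywhere.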

 

Keep these thoughts in mind:

  • Customers need to manage their resources and suppliers
  • Technology and its providers should work for you the customer, not the other way around
  • Technology providers, conversely, need to get closer to influence customer thinking
  • There can be a cost to single vendor or technology sourcing due to loss of competition
  • There can be a cost associated with best of breed or functioning as your own integrator
  • There is a cost to switching vendors and/or their technologies to keep in mind
  • Managing your vendors or suppliers may be easier than managing your upper management
  • Vendor sales teams remove barriers so they can sell while setting barriers for others
  • Virtualization and cloud can be both a source of lockin as well as a tool to help prevent it
  • As a customer, if lockin provides benefits, then it can be a good thing for all involved

Additional learning experiences along with common questions (and answers), as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Ultimately, it's up to the customer to manage their environment and thus have a say in whether they will allow vendor lockin. Granted, upper management may be the source of the lockin, and not surprisingly that is where some vendors will focus their attention, directly or via the influence of high level management consultants.

So while a vendor's solution may appear to be a locked in solution, it does not become a lockin issue or problem until a customer lets or allows it to be a lockin or sticky situation.

What is your take on vendor lockin? Cast your vote and see results in the following polls.

Is vendor lockin a good or bad thing?

Who is responsible for managing vendor lockin?

Where is the most common form of or concern about vendor lockin?

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Another StorageIO Hybrid Momentus Moment

It's been a few months since my last post (read it here) about Hybrid Hard Disk Drives (HHDD) such as the Seagate Momentus XT that I have been using.

The Momentus XT HHDD I have been using is a 500GB 7,200RPM 2.5 inch SATA Hard Disk Drive (HDD) with 4GB of embedded flash (aka SSD) and 32MB of DRAM for buffering, hence the hybrid name.

I have been using the XT HHDD mainly for transferring large multi-GByte files between computers and for doing some disk to disk (D2D) backups while becoming more comfortable with it. While not as fast as my 64GB all flash SSD, the XT HHDD is as fast as my 7,200RPM 160GB Momentus HDD and in some cases faster on burst reads or writes. The notion of having a 500GB HDD affordable enough to support D2D was attractive; however, the ability to get a performance boost now and then via the embedded 4GB of flash opens many different possibilities, particularly when combined with compression.

Recently I switched the role of the Momentus XT HHDD from that of a utility drive to being the main disk in one of my laptops. Despite many forums or bulletin boards touting issues or problems with the Seagate Momentus XT causing system hangs or the Windows Blue Screen of Death (BSoD), I continued on with the next phase of testing.

Making the switch to XT HHDD as a primary disk

I took a few precautions, including eating some of my own dog food that I routinely talk about. For example, I made sure that the Lenovo T61 where the Momentus XT was going to be installed was backed up. In addition, I synced my traveling laptop so it could serve as the primary and I could continue working during the conversion, not to mention having an extra copy in addition to normal on and offsite backups.

Ok, let's get back to the conversion or migration from a regular HDD to the HHDD.

Once I knew I had a good backup, I used the Seagate DiscWizard (e.g. Acronis based) tool to image the existing T61 HDD to the Momentus XT HHDD. Using DiscWizard (you could use other tools as well), I configured it to initialize the HHDD, which was attached via a Seagate GoFlex USB to SATA cable kit, and to image or copy the contents of the T61 HDD partitions to the Momentus XT. During the several hours it took to copy and create a new bootable disk image on the HHDD, I continued working on my travel or standby laptop.
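
On the verification point: an image copy can also be double checked independently of the copy tool by comparing checksums of the source and destination. A minimal Python sketch of that idea (illustrative only; the scratch files below stand in for real disk images or raw devices):

```python
import hashlib
import os
import tempfile

def checksum(path, chunk_size=1 << 20):
    """Stream a file (or raw disk device) in 1MB chunks and return its
    SHA-256, so large images can be compared without loading them into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Two small scratch files stand in for the source HDD image and its clone.
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "source.img")
    dst = os.path.join(d, "clone.img")
    data = b"pretend this is a disk image" * 1000
    for p in (src, dst):
        with open(p, "wb") as f:
            f.write(data)
    print(checksum(src) == checksum(dst))  # True
```

Streaming in chunks matters here: a multi-hundred-GB image would never fit in memory, but the hash digest compares the whole thing byte for byte.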

After the image copy was completed and verified, it was time to reboot and see how Windows (XP SP3) liked the HHDD; all seemed to be normal. Some parts of the boot seemed a bit faster, however not 100 percent conclusively so. The next step was to shut down the laptop, physically swap the old internal HDD with the HHDD, and reboot. The subsequent boot did seem faster, and programs accessing large files also seemed to run a bit faster.

Keep in mind that the HHDD is still a spinning 7,200RPM disk drive, so comparisons to a full time SSD would be apples to oranges, as would the cost and capacity difference between those devices. However, for what I wanted to see and use, the limited 4GB of flash does seem to provide a performance boost, and if I needed full time super fast performance, I could buy a larger capacity SSD and install it. I'm going to hold off on buying any larger capacity flash SSDs for the time being, however.

Do I see HHDDs appearing in SMB, SME or enterprise storage systems anytime soon? Probably not, at least not in primary storage systems. However, perhaps in some D2D backup, archive, dedupe or VTL devices or other appliances.

Momentus XT Speed Bumps

Now, to be fair, there have been some bumps in the road!

The first couple of days were smooth sailing, other than hearing the mystery chirp the HHDD makes a couple of times a day. Lo and behold, after a couple of days, just as many forums had indicated, a mystery system hang occurred (and no, not like Windows might normally do, for those Microsoft cynics). Other than the inconvenience of a reboot, no data was lost, as files being updated had been saved or backed up, not to mention that after the reboot everything was intact anyway. So far just an inconvenience, or so I thought.

Almost 24 hours later, the same thing happened, except this time I got to see the BSoD, which candidly I very rarely see despite hearing stories from others. Ok, this was annoying; however, as long as I did not lose any data, other than the time lost to a reboot, let's chalk it up to a learning experience and see where it goes. Now guess what: about 12 hours later, the system froze up once again, and this time I was in the middle of a document edit. This time I did lose about 8 minutes of typing that had not been auto saved (I have since changed my auto save interval from 10 minutes to 5 minutes).

With this BSoD incident, I took some notes and, using the X61s, started checking some web sites and verified that the BIOS firmware on the T61 was up to date. However, I noticed that the Seagate Momentus XT HHDD was at firmware 22 while a version 23 was available. Reading through some web sites and forums, I was on the fence about trying firmware 23, given that an even newer firmware version for the HHDD appears to be in the works. I decided to forge forward with the experiment; after all, no real data loss had occurred, and I still had the X61s, not to mention the original T61 HDD, to fall back on in the worst case.

Going to the Seagate web site, I downloaded the firmware 23 install kit and ran it per their instructions, which was a breeze, and then did the reboot.

It has not been quite a week yet; however, knocking on wood, while I keep expecting to see one, no BSoD or system freezes have occurred. Having said that, and still knocking on wood, I'm also making sure things are backed up, protected and ready if needed. Likewise, if I start to see a rash of BSoDs, my plan is to fall back to the original T61 HDD, bring it up to date and use it until a newer HHDD firmware version is available to resume testing.

What is next for my Seagate Momentus XT HHDD?

I'm going to wait to see if the BSoD and mystery system hangs disappear, as well as for the arrival of the new firmware, followed by some more testing. However, when I'm confident with it, the next step is to put the XT HHDD into the X61s, which is used primarily for travel purposes.

Why wait? Simple: while I can tolerate a reboot, crash, data loss or disruption while in the office, given access to copies as well as standby or backup systems to work from, when traveling my options are more limited. Sure, if there is data loss I can go to my cloud provider and rapidly recall a file or multiple ones as needed, or for critical data, recover from a portable encrypted USB device. Consequently I want more confidence in the XT HHDD before deploying it for travel mode. It is probably safe to do so now; however, I want to see how stable it is in the office before taking it on the road.

What does this all mean?

  • Simple, have a backup of your data and systems
  • Test and verify those backups or standby systems periodically
  • Have a fall back plan for when trying new things
  • Keep productivity in mind, at some point you may have to fall back
  • If something is important enough to protect, have multiple copies
  • Be ready to eat your own dog food, that is, practice what you talk about
  • Do not be scared, however be prepared, look before you leap

How about you, are you using an HHDD yet, and if so, what are your experiences? I am curious to hear if anyone has tried using an HHDD in their VMware lab environment yet in place of a regular HDD, before spending a boatload of money on a similar sized SSD.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Revisiting if IBM XIV is still relevant with V7000

Over the past couple of years I have routinely been asked what I think of XIV by fans as well as foes, in addition to many curious or neutral onlookers including XIV competitors, other analysts, media, bloggers, consultants, as well as IBM customers, prospects, VARs and business partners. Consequently I have done some blog posts about my thoughts and perspectives.

It's time again for what has turned out to be the third annual perspective on IBM XIV and whether it is still relevant, prompted by the recent IBM V7000 (excuse me, I meant to say IBM Storwize V7000) storage system launch.

For those wanting to take a step back in time, here is an initial thought perspective about IBM and XIV storage from 2008, as well as the 2009 revisiting of XIV relevance post and the latest V7000 companion post found here.

What is the IBM V7000?

Here is a link to a companion post pertaining to the IBM V7000 that you will want to have a look at.

In a nutshell, the V7000 is a new storage system with built in storage virtualization (or virtual storage if you prefer) that leverages IBM developed software from its SAN Volume Controller (SVC), the DS8000 enterprise system and others.

Unlike the SVC, which is a gateway or appliance head that virtualizes various IBM and third party storage systems, providing data movement, migration, copy, replication, snapshot and other agility or abstraction capabilities, the V7000 is a turnkey integrated solution.

As a turnkey solution, the V7000 combines the functionality of the SVC as a basis, adding other IBM technologies including a GUI management tool similar to that found on XIV, along with dedicated attached storage (e.g. SAS disk drives, including fast, high capacity as well as SSD).

In other words, for those customers or prospects who liked XIV because of its management GUI, you may like the V7000.

For those who liked the functionality capabilities of the SVC however needed it to be a turnkey solution, you might like the V7000.

For those of you who did not like or competed with the SVC in the past, well, you know what to do.

BTW, for those who knew Storwize as the Data Footprint Reduction (DFR) vendor with real time compression that IBM recently acquired and renamed IBM Real-time Compression, the V7000 does not contain any real time compression (yet).

What are my thoughts and perspectives?

In addition to the comments in the companion post found here, right now I'm of the mindset that XIV will not fade away quietly into the sunset or take a timeout at the IBM technology rest and recuperation resort located on the beautiful someday isle.

The reason I think XIV will remain somewhat relevant for some time (time to be determined, of course) is that IBM has expended significant resources over the past two and a half years to promote it. Those resources have included marketing time and messaging space, in some instances perhaps inadvertently at the expense of other IBM storage solutions. Similarly, a lot of time, money and effort have gone into business partner outreach to establish and keep XIV relevant with those communities, who in turn have gone out to tell and sell the XIV story, and some customers have bought it.

Consequently, as a result of all that investment, I would be surprised if IBM were simply to walk away from XIV, at least near term.

What I do see happening, including some early indicators, is that the V7000 (along with other IBM products) will now get equal billing, resources and promotional support. Whether this means the XIV division is finally being assimilated into the mainstream IBM fold and placed on equal footing with other IBM products, or that other IBM products are being brought up to the elevated position of XIV, is subject to interpretation and your own perception.

I expect to continue to see IBM teams, and subsequently their distributors, VARs and other business partners, get more excited talking about the V7000 along with other IBM solutions. For example, SONAS for bulk, clustered and scale out NAS, the DS8000 for the high end, the GMAS and Information Archive platforms, as well as the N series and DS3K/DS4K/DS5K, not to mention the TS/TL backup and archive target platforms along with associated Tivoli software. Also, let's not forget about the SVC among other IBM solutions, including of course XIV.

I would also not be surprised if some of the diehard XIV loyalists (e.g. sales and marketing reps who were faithful members of Moshe Yanai's army, Yanai himself appearing to be MIA at IBM) pack up their bags and leave the IBM storage SANdbox in virtual protest. That is, refusing to be assimilated into the general IBM storage pool and thus leaving for greener IT pastures elsewhere. Others will stick around, discovering the opportunities associated with selling a broader, more diverse product portfolio into target accounts where they have spent time and resources establishing relationships or getting their proverbial foot in the door.

Consequently, I think XIV remains somewhat relevant for now, given all the resources that IBM poured into it and the relationships their partner ecosystem has established with the installed customer base.

However, I do think that the V7000, despite some confusion (here and here) around its recycled Storwize name, is built around the field proven SVC and other IBM technology and has some legs. Those legs come both from a technology standpoint and as a means to get the entire IBM systems and storage group energized to go out and compete with their primary nemeses (e.g. Dell, EMC, HP, HDS, NetApp and Oracle among others).

As has been the case for the past couple of years, let's see how this all plays out in a year or so. Meanwhile, cast your vote or see the results of others as to whether XIV remains relevant. Likewise, join in on the new poll below as to whether the V7000 is now relevant.

Note: As with the ongoing is XIV relevant polling (above), for the new is the V7000 relevant polling (below) you are free to vote early, vote often, and vote for those who cannot or care not to vote.

Here are some links to read more about this and related topics:

Ok, nuff said.

Cheers gs


What do you do when your service provider drops the ball?

Do you have a web, internet, backup or other IT cloud service provider of some type?

Do you pay for it, or is it a free service?

Do you take your service provider for granted?

Does your service provider take you or your data for granted?

Does your provider offer some form of service level objectives (SLO)?

For example, Recovery Time Objectives (RTO), Recovery Point Objectives (RPO), Quality of Service (QoS), or, for a backup service, alternate forms of recovery among others?
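
To put such SLO numbers in perspective, an availability percentage translates directly into a downtime budget. A quick back of the envelope Python sketch (my own illustration, not any provider's published formula):

```python
def downtime_budget_minutes(availability_pct: float, days: int = 365) -> float:
    """Minutes of allowed downtime per period for a stated availability SLO."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - availability_pct / 100)

# "Two nines" allows days of downtime a year; "four nines" under an hour.
for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% available -> {downtime_budget_minutes(pct):.1f} minutes/year")
```

By that math, even a roughly one hour outage like the one described below fits inside a 99.9 percent annual budget, which is worth knowing before threatening to switch providers.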

So what happens when there is a service disruption? Do you threaten to leave the provider, and if so, how much does that (or would it) cost you to move?

A couple of weeks ago I was on a Delta Airlines flight from LAX to MSP, returning from a west coast speaking engagement.

During the late evening three hour flight, I was using the Gogo inflight wifi service to get caught up on some emails, blog items and other work, in addition to doing a few twitter tweets while flying high over the real clouds from my virtual office.

During that time, I saw a tweet from Devang Panchigar (@storageNerve) commenting that his hosting service provider Bluehost was down or offline. This caught my attention, as Bluehost is also my service provider, and a quick check verified that my sites and services were still working. I subsequently sent a tweet to Devang indicating that Bluehost, or at least my sites and services, were still functioning, or at least for the time being, as I was about to find out. Long story short, about 20 to 25 minutes later I noticed that I could no longer get to any of my sites; lo and behold, my Bluehost services were now offline as well.

Bluehost

Overall, I have been pleased with Bluehost as a service provider, including finding their call support staff very accommodating and easy to work with when I have questions or need something taken care of. Normally I would have simply called Bluehost to see what was going on; however, being at about 38,000 feet above the clouds, a quick conversation was not going to be possible. Instead, I checked some forums, which revealed that Bluehost was experiencing electrical power issues with their data center (I believe in Utah). Looking at some of the forums as well as various twitter comments, I also decided to check whether Bluehost CEO Matt Heaton's blog was functioning (it was).

It would have been too easy to do one of those irate customer posts telling them how bad they were, how I was dropping them like a hot potato, and then doing a blog post telling everyone to never use them again, or along those lines that are far too common and often get deleted as spam.

Instead, I took a different approach (you could have read it here; however, I just checked and it has since been deleted). My comment on Matt's blog post took a week or so to be moderated. Essentially, rather than going off on the usual customer tirade, my post took the opposite approach, commenting on how ironic it was that the hosting service for my web site, which contains content about resilient data infrastructure themes, was offline.

Now I realize that I am not paying for a high end, no downtime, always available hosting service; however, I am paying for a more premium package vs. a basic subscription or a free service. While I was not happy about the one hour of downtime around midnight, it was comforting to know that no data was lost and my sites were only offline for a short period of time.

What does all of this mean?

There have been some widely publicized and discussed internet and cloud service related disruptions.

I hope Bluehost continues to improve their services to stay out of the news for a major disruption, as well as to minimize or eliminate downtime for their fee-based services.

I also hope that Bluehost CEO Matt Heaton continues to listen to what his customers have to say while improving his services to keep us as customers instead of taking us for granted as some providers or vendors do.

Thanks again to Devang for the tip that there was a service disruption, after all, sometimes we take services for granted and in other situations some service providers take their customers for granted.

Ok, nuff said.

Cheers gs


End to End (E2E) Systems Resource Analysis (SRA) for Cloud and Virtual Environments

A new StorageIO Industry Trends and Perspective (ITP) white paper titled “End to End (E2E) Systems Resource Analysis (SRA) for Cloud, Virtual and Abstracted Environments” is now available at www.storageioblog.com/reports compliments of SANpulse technologies.

End to End (E2E) Systems Resource Analysis (SRA) for Virtual, Cloud and abstracted environments: Importance of Situational Awareness for Virtual and Abstracted Environments

Abstract:
Many organizations are in the planning phase or already executing initiatives moving their IT applications and data to abstracted, cloud (public or private), virtualized or other forms of efficient, effective dynamic operating environments. Others are in the process of exploring where, when, why and how to use various forms of abstraction techniques and technologies to address various issues. Issues include opportunities to leverage virtualization and abstraction techniques that enable IT agility, flexibility, resiliency and scalability in a cost effective yet productive manner.

An important need when moving to a cloud or virtualized dynamic environment is to have situational awareness of IT resources. This means having insight into how IT resources are being deployed to support business applications and to meet service objectives in a cost effective manner.

Awareness of IT resource usage provides insight necessary for both tactical and strategic planning as well as decision making. Effective management requires insight into not only what resources are at hand but also how they are being used to decide where different applications and data should be placed to effectively meet business requirements.

Learn more about the importance and opportunities associated with gaining situational awareness using E2E SRA for virtual, cloud and abstracted environments in this StorageIO Industry Trends and Perspective (ITP) white paper compliments of SANpulse technologies by clicking here.

Ok, nuff said.

Cheers gs


Is the new HDS VSP really the MVSP?

Today HDS announced the VSP (successor to the previous USP V and USP VM) with much fanfare and what must have been a million dollar launch budget.

I'm also thinking that the HDS VSP (not to be confused with the HP SVSP that HP OEMs via LSI) could also be called the HDS MVSP.

Now if you are part of the HDS SAN, LAN, MAN, WAN or FAN bandwagon, MVSP could mean Most Valuable Storage Platform or Most Virtualized Storage Product. MVSP might also be called More Virtualized Storage Products by others.

Yet OTOH, MVSP could mean More Virtual Story Points (e.g. talking points) for HDS, building upon and comparing to their previous products.

For example among others:

More cache to drive cash movement (e.g. cash velocity or revenue)
More claims and counter claims of industry uniques or firsts
More cloud material or discussion topics
More cross points
More data mobility
More density
More FUD and MUD throwing by competitors
More functionality
More packets of information to move, manage and store
More pages in the media
More partitioning of resources
More partners to sell through or to
More PBytes
More performance and bandwidths
More platforms virtualized
More platters
More points of resiliency
More ports to connect to or through
More posts from bloggers
More power management, Eco and Green talking points
More press releases
More processors
More products to sell
More profits to be made
More protocols (Fibre Channel, FICON, FCoE, NAS) supported
More pundits praises
More SAS, SATA and SSD (flash drives) devices supported
More scale up, scale out, and scale within
More security
More single (Virtual and Physical) pane of glass managements
More software to sell and be licensed by customers
More use of virtualization, 3D and other TLAs
More videos to watch or be stored

I'm sure more points can be thought of; however, that is a good start for now, including some to have a bit of fun with.

Read more about HDS new announcement here, here, here and here:

Ok, nuff said.

Cheers gs


What is DFR or Data Footprint Reduction?

Updated 10/9/2018

Data Footprint Reduction (DFR) is a collection of techniques, technologies, tools and best practices that are used to address data growth management challenges. Dedupe is currently the industry darling for DFR particularly in the scope or context of backup or other repetitive data.

However, DFR expands the scope to address expanding data footprints and their impact across primary, secondary and offline data, ranging from high performance to inactive high capacity.

Consequently the focus of DFR is not just on reduction ratios; it is also about meeting time or performance rates and data protection windows.

This means DFR is about using the right tool for the task at hand to effectively meet business needs, and cost objectives while meeting service requirements across all applications.

Examples of DFR technologies include Archiving, Compression, Dedupe, Data Management and Thin Provisioning among others.
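To make the ratio vs rate distinction above concrete, here is a minimal back-of-the-envelope sketch. All numbers are hypothetical and for illustration only, not from any vendor or benchmark:

```python
# Hypothetical illustration: DFR is about reduction ratios (how much
# smaller the footprint gets) AND rates (how fast data must be processed
# to meet a protection window), not ratios alone.

def reduction_ratio(original_gb, reduced_gb):
    """Return the reduction ratio, e.g. 10.0 for a 10:1 reduction."""
    return original_gb / reduced_gb

def required_rate_gb_per_hour(original_gb, window_hours):
    """Rate needed to process the original data within the window."""
    return original_gb / window_hours

# Example: 10 TB of repetitive backup data deduped to 1 TB,
# with an 8 hour backup (protection) window.
original_gb, reduced_gb, window_hours = 10_000, 1_000, 8

print(f"{reduction_ratio(original_gb, reduced_gb):.0f}:1 reduction")
print(f"{required_rate_gb_per_hour(original_gb, window_hours):.0f} GB/hour needed")
```

A 10:1 ratio is of little help if the ingest rate cannot keep the backup inside its window, which is why the right DFR tool depends on the task at hand.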

Read more about DFR in Part I and Part II of a two part series found here and here.

Where to learn more

Learn more about data footprint reduction (DFR), data footprint overhead and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What this all means

That is all for now, hope you find these ongoing series of current or emerging Industry Trends and Perspectives posts of interest.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Has FCoE entered the trough of disillusionment?

This is part of an ongoing series of short industry trends and perspectives blog post briefs based on what I am seeing and hearing in my conversations with IT professionals on a global basis.

These short posts complement other longer posts along with traditional industry trends and perspective white papers, research reports, videos, podcasts, webcasts as well as solution brief content found at www.storageioblog.com/reports and www.storageio.com/articles.

Has FCoE (Fibre Channel over Ethernet) entered the trough of disillusionment?

IMHO yes, and that is not a bad thing if you like FCoE (which I do, among other technologies).

The reason I think that it is good that FCoE is in or entering the trough is not that I do not believe in FCoE. Instead, the reason is that most if not all technologies that are more than a passing fad often go through a hype and early adopter phase before taking a breather prior to broader longer term adoption.

Sure there are FCoE solutions available including switches, CNAs and even storage systems from various vendors. However, FCoE is still very much in its infancy and maturing.

Based on conversations with IT customer professionals (e.g. those that are not vendors, VARs, consultants, media or analysts) and hearing their plans, I believe that FCoE has entered the proverbial trough of disillusionment, which is a good thing in that FCoE is also ramping up for deployment.

Another common question that comes up regarding FCoE as well as other IO networking interfaces, transports and protocols is if they are temporal (temporary short life span) technologies.

Perhaps, in the sense that all technologies are temporary; however, it is their temporal timeframe that should be of interest. Given that FCoE will probably have at least a ten to fifteen year temporal timeline, I would say in technology terms it has a relatively long life for supporting coexistence on the continued road to convergence, which appears to be around Ethernet.

That is where I feel FCoE is at currently, taking a break from the initial hype, maturing while IT organizations begin planning for its future deployment.

I see FCoE as having a bright future coexisting with other complementary and enabling technologies such as I/O Virtualization (IOV) including PCI SIG MR-IOV, Converged Networking, iSCSI, SAS and NAS among others.

Keep in mind that FCoE does not have to be seen as competitive to iSCSI or NAS, as they all can coexist on a common DCB/CEE/DCE environment, enabling the best of all worlds not to mention choice. FCoE along with DCB/CEE/DCE provides IT professionals with options (e.g. tiered I/O and networking) to align the applicable technology to the task at hand for physical or virtual environments.

Again, the questions pertaining to FCoE for many organizations, particularly those not going to iSCSI or NAS for all or part of their needs, should be when, where and how to deploy.

This means that for those with long lead time planning and deployment cycles, now is the time to put your strategy into place for what you will be doing over the next couple of years if not sooner.

For those interested, here is a link (may require registration) to a good conversation taking place over on IT Toolbox regarding FCoE and other related themes that may be of interest.

Here are some links to additional related material:

  • FCoE Infrastructure Coming Together
  • 2010 and 2011 Trends, Perspectives and Predictions: More of the same?
  • SNWSpotlight: 8G FC and FCoE, Solid State Storage
  • NetApp and Cisco roll out vSphere compatible FCoE solutions
  • Fibre Channel over Ethernet FAQs
  • Fast Fibre Channel and iSCSI switches deliver big pipes to virtualized SAN environments.
  • Poll: Networking Convergence, Ethernet, InfiniBand or both?
  • I/O Virtualization (IOV) Revisited
  • Will 6Gb SAS kill Fibre Channel?
  • Experts Corner: Q and A with Greg Schulz at StorageIO
  • Networking Convergence, Ethernet, Infiniband or both?
  • Vendors hail Fibre Channel over Ethernet spec
  • Cisco, NetApp and VMware combine for ‘end-to-end’ FCoE storage
  • FCoE: The great convergence, or not?
  • I/O virtualization and Fibre Channel over Ethernet (FCoE): How do they differ?
  • Chapter 9 – Networking with your servers and storage: The Green and Virtual Data Center (CRC)
  • Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier)

That is all for now, hope you find these ongoing series of current or emerging Industry Trends and Perspectives posts of interest.

Of course let me know what your thoughts and perspectives are on this and other related topics.

Ok, nuff said.

Cheers gs


VMworld 2010 virtual roads, clouds and INXS Devil Inside

This past week I spent a few days in San Francisco attending the VMworld 2010 event which included a Wednesday evening concert with the Australian band INXS.

Despite some long lines (or queues) waiting to get into sessions, keynotes or lunch resulting in delays reminiscent of trying to put too many virtual machines (VMs) onto a given number of physical machines (PMs) in the quest to drive up utilization, the overall event was fantastic.

While at the event, I had a chance to meet up with fellow vExpert Eric Siebert, whose new book Maximum vSphere made its debut. I was honored when asked by Eric to help out with his chapter on storage; learn more about Eric's new book here.

Eric was just one of many people I was able to catch up with or in some cases meet for the first time face to face. Among the many fellow twitter tweeps included @3parfarley @aebarrett @charleshood @cxi @edsai @ericsiebert @hpstorageguy @iben @jmichelmetz @jtroyer @keithnorbie @KendrickColeman @MesabiGroup @PariseauTT @RayLucchesi @RickVanover @rodos @rogerlund @rootwyrm @sakacc @scott_lowe @ServerVirt_TT @SiliconValleyPR @ssauer @ssharwood @StorageOlogist @stu @Texiwill and @vmworld not to mention many others who are not on twitter.

Big thanks to @rogerlund for organizing a very impromptu ad hoc lunch discussion with a couple of other IT pros representing very different as well as diverse spectrums of public, private, small, large and ultra large environments. I was only at the event for two days and thus there were many others I was looking for at their booths, in the hallways (I saw @ekhnaser among others that I could not call out to in time) or in the meeting rooms as well as in the lunch hall. I look forward to seeing you all at some future event or venue.

On the food scene, while I did not have a chance to dine at one of my local favorites Brandy Hos, I did have a fantastic lunch at Henrys House of Pain (aka Henrys House of Hunan on Sansome). I also had a great outdoor dinner at the alleyway based Cafe Tiramisu, where I enjoyed their signature dish, essentially a fruits de mer (fruit of the sea) over linguine, covered with a thin pizza crust and baked. It was fantastic and brings a whole new dimension to the theme of a classic pot pie meets fruits de mer; give it a try!

On an even lighter or fun note, following are photos and links to some videos of the INXS event courtesy of Karen (aka Mrs Schulz). In addition to being an award winning photographer, Karen's daytime job is that of an applications development analyst (e.g. an IT Geekette) at a large Minnesota based Mining and Manufacturing company that is also involved in many different sticky and abrasive among other products.

Karen

Karen (Photo Courtesy Karen Schulz)

Karen took the following photos (and videos) with her Canon PowerShot S5 digital camera.

Greg going to INXS

Me heading to INXS show at VMworld 2010 (Photo Courtesy Karen Schulz)

Greg On Virtual Road

Me sitting in the middle of the virtual highway (Photo Courtesy Karen Schulz)

INXS at VMworld 2010
INXS at VMworld 2010 (Photo Courtesy Karen Schulz)

JD Fortune of INXS at VMworld

JD Fortune of INXS at VMworld (Photo Courtesy Karen Schulz)

Kirk Pengilly and JD Fortune of INXS at VMworld

Kirk Pengilly and JD Fortune of INXS at VMworld 2010 (Photo Courtesy Karen Schulz)

Tim Farriss of INXS at VMworld

Tim Farriss of INXS (Photo Courtesy Karen Schulz)

Here are links to some videos that Karen captured from up front near the stage during the INXS show at VMworld 2010.

Devil Inside (not to be confused with the devil is in the details of clouds, virtualization and other IT topics)

By My Side (Where a vendor or solution partner should be during and after the sale for their customers)

Disappear (What should not happen to your data or virtual machines in physical, virtual or cloud environments)

Never Tear Us Apart (What should not happen between your servers, storage, applications and data)

Need You Tonight (The call that many system admins get during their off hours)

New Sensation (What many are experiencing with virtualization and clouds)

Don't Change (Ironic final encore song at a conference with a theme of change)

A big tip of the hat along with thanks goes out to John Troyer of VMware as well as Sarah Shvil of the VMware Analyst Relations team for helping make it possible for me to attend as an independent IT industry analyst instead of on the coat tails of a vendor's exhibit hall pass (disclosure: I paid for my own travel, lodging and dining expenses).

Greg Hitching a Ride to VMworld
Me hitching a ride on the virtual highway to the clouds and VMworld (Photo Courtesy Karen Schulz)

Hopefully with some luck, I will be able to hitch a ride and attend VMworld again next year in Las Vegas, perhaps even as a repeat vExpert as well as IT Industry Analyst.

That's a wrap for now.

Ok, nuff said.

Cheers gs


August 2010 StorageIO Newsletter


Welcome to the August Summer Wrap Up 2010 edition of the Server and StorageIO Group (StorageIO) newsletter. This follows the June 2010 edition, building on the great feedback received from recipients.
Items that are new in this expanded edition include:

  • Out and About Update
  • Industry Trends and Perspectives (ITP)
  • Featured Article

You can access this newsletter via various social media venues (some are shown below) in addition to StorageIO websites and subscriptions. Click on the following links to view the August 2010 edition as HTML or PDF, or go to the newsletter page to view previous editions.

Follow via Google FeedBurner here or via email subscription here.

You can also subscribe to the newsletter by simply sending an email to newsletter@storageio.com

Enjoy this edition of the StorageIO newsletter, let me know your comments and feedback.

Cheers gs

Greg Schulz – Author The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier)
twitter @storageio

Back to school shopping: Dude, Dell Digests 3PAR Disk storage

Dell

No sooner has the dust settled from Dell's other recent acquisitions, it's back to school shopping time and the latest bargain for the Round Rock, Texas folks is Bay Area (San Francisco) storage vendor 3PAR for $1.15B. As a refresher, Dell's recent acquisitions include $1.4B for EqualLogic a few years ago and $3.9B for Perot Systems, not to mention Exanet, KACE and Ocarina earlier this year. For those interested, as of April 2010 reporting figures found here, Dell showed about $10B USD in cash, and here is financial information on publicly held 3PAR (PAR).

Who is 3PAR
3PAR is a publicly traded company (PAR) that makes a scalable or clustered storage system with many built in advanced features typically associated with high end EMC DMX and VMAX as well as CLARiiON, in addition to Hitachi, HP or IBM enterprise class solutions. The InServ (3PAR's storage solution) combines hardware and software, providing a very scalable solution that can be configured for smaller environments or larger enterprises by varying the number of controllers or processing nodes, connectivity (server attachment) ports, cache and disk drives.

Unlike EqualLogic, which is more of a mid market iSCSI only storage system, the 3PAR InServ is capable of going head to head with the EMC CLARiiON as well as DMX or VMAX systems, supporting a mix of iSCSI and Fibre Channel, or NAS via gateway or appliances. Thus while there were occasional competitive situations between 3PAR and Dell EqualLogic, they were for the most part targeted at different market sectors or customer deployment scenarios.

What does Dell get with 3PAR?

  • A good deal if not a bargain on one of the last new storage startup pure plays
  • A public company that is actually generating revenue with a large and growing installed base
  • A seasoned sales force who knows how to sell into the enterprise storage space against EMC, HP, IBM, Oracle/Sun, NetApp and others
  • A solution that can scale in terms of functionality, connectivity, performance, availability, capacity and energy efficiency (PACE)
  • Potential route to new markets where 3PAR has had success, or to bridge gaps where both have played and competed in the past
  • Did I say a company with an established footprint of installed 3PAR InServ storage systems and a good list of marquee customers?
  • Ability to sell a solution for which they own the intellectual property (IP), instead of that of partner EMC
  • Plenty of IP that can be leveraged within other Dell solutions, not to mention combine 3PAR with other recently acquired technologies or companies.

On a lighter note, Dell once again picks up Marc Farley, who was with them briefly after the EqualLogic acquisition and then departed to 3PAR, where he became director of social media, including the launch of Infosmack on Storage Monkeys with co-host Greg Knieriemen (@Knieriemen). Of course the twitter world and traditional coconut wires are now speculating where Farley will go next, that is, whom Dell may end up buying in the future.

What does this mean for Dell and their data storage portfolio?
While in no way all inclusive or comprehensive, table 1 provides a rough framework of different price bands, categories, tiers and market or application segments requiring various types of storage solutions into which Dell can sell.

 

Servers
  • HP: Blade systems, rack mount, towers to desktop
  • Dell: Blade systems, rack mount, towers to desktop
  • EMC: Virtual servers with VMware, servers via vBlock, servers via Cisco
  • IBM: Blade systems, rack mount, towers to desktop
  • Oracle/Sun: Blade systems, rack mount, towers to desktop

Services
  • HP: Managed services, consulting and hosting supplemented by EDS acquisition
  • Dell: Bought Perot Systems (an EDS spin off/out)
  • EMC: Partnered with various organizations and services
  • IBM: Has been doing smaller acquisitions adding tools and capabilities to IBM Global Services
  • Oracle/Sun: Large internal consulting and services as well as Software as a Service (SaaS) hosting, partnered with others

Enterprise storage
  • HP: XP (FC, iSCSI, FICON for mainframe and NAS with gateway), OEMed from Hitachi Japan, parent of HDS
  • Dell: 3PAR (FC and iSCSI, or NAS with gateway); replaces EMC CLARiiON or perhaps rare DMX/VMAX at high end?
  • EMC: DMX and VMAX
  • IBM: DS8000
  • Oracle/Sun: Sun resold the HDS version of the XP/USP, however Oracle has since dropped it from the lineup

Data footprint impact reduction
  • HP: Dedupe on VTL via Sepaton plus HP developed technology or OEMed products
  • Dell: Dedupe in OEM or partner software or hardware solutions; recently acquired Ocarina
  • EMC: Dedupe in Avamar, Data Domain, NetWorker, Celerra, Centera, Atmos; CLARiiON and Celerra compression
  • IBM: Dedupe in various hardware and software solutions, source and target; compression with Storwize
  • Oracle/Sun: Dedupe via OEM VTLs and other Sun solutions

Data preservation
  • HP: Database and other archive tools, archive storage
  • Dell: OEM solutions from EMC and others
  • EMC: Centera and other solutions
  • IBM: Various hardware and software solutions
  • Oracle/Sun: Various hardware and software solutions

General data protection (excluding logical or physical security and DLP)
  • HP: Internal Data Protector software plus OEM, partners with other software, various VTL, TL and target solutions as well as services
  • Dell: OEM and resell partner tools as well as Dell target devices and those of partners. Could this be a future acquisition target area?
  • EMC: NetWorker and Avamar software, Data Domain and other targets, DPA management tools and Mozy services
  • IBM: Tivoli suite of software and various hardware targets, management tools and cloud services
  • Oracle/Sun: Various software and partner tools, tape libraries, VTLs and online storage solutions

Scale out, bulk, or clustered NAS
  • HP: eXtreme scale out, bulk and clustered storage for unstructured data applications
  • Dell: Exanet on Dell servers with shared SAS, iSCSI or FC storage
  • EMC: Celerra and Atmos
  • IBM: SONAS or N series (OEM from NetApp)
  • Oracle/Sun: ZFS based solutions including the 7000 series

General purpose NAS
  • HP: Various gateways for EVA, MSA or XP; HP IBRIX or PolyServe based as well as Microsoft WSS solutions
  • Dell: EMC Celerra, Dell Exanet, Microsoft WSS based. Acquisition or partner target area?
  • EMC: Celerra
  • IBM: N series OEMed from NetApp as well as growing awareness of SONAS
  • Oracle/Sun: ZFS based solutions. Whatever happened to Procom?

Mid market multi protocol block
  • HP: EVA (FC with iSCSI or NAS gateways), LeftHand (P series iSCSI) for the lower end of this market
  • Dell: 3PAR (FC and iSCSI, NAS with gateway) for the mid to upper end of this market, EqualLogic (iSCSI) for the lower end; some residual EMC CX activity phases out over time?
  • EMC: CLARiiON (FC and iSCSI with NAS via gateway), some smaller DMX or VMAX configurations for the mid to upper end of this market
  • IBM: DS5000 and DS4000 (FC and iSCSI with NAS via a gateway), both OEMed from LSI; XIV and N series (NetApp)
  • Oracle/Sun: 7000 series (ZFS and Sun storage software running on Sun servers with internal storage, optional external storage), 6000 series

Scalable SMB iSCSI
  • HP: LeftHand (P series)
  • Dell: EqualLogic
  • EMC: Celerra NX, CLARiiON AX/CX
  • IBM: XIV, DS3000, N series
  • Oracle/Sun: 2000 and 7000 series

Entry level shared block
  • HP: MSA2000 (iSCSI, FC, SAS)
  • Dell: MD3000 (iSCSI, FC, SAS)
  • EMC: AX (iSCSI, FC)
  • IBM: DS3000 (iSCSI, FC, SAS), N series (iSCSI, FC, NAS)
  • Oracle/Sun: 2000 and 7000 series

Entry level unified multi function
  • HP: X series (not to be confused with the eXtreme series) HP servers with Windows Storage Software
  • Dell: Dell servers with Windows Storage Software or EMC Celerra
  • EMC: Celerra NX, Iomega
  • IBM: xSeries servers with Microsoft or other software installed
  • Oracle/Sun: ZFS based solutions running on Sun servers

Low end SOHO
  • HP: X series (not to be confused with the eXtreme series) HP servers with Windows Storage Software
  • Dell: Dell servers with storage and Windows Storage Software. Future acquisition area perhaps?
  • EMC: Iomega
  • IBM: (not listed)
  • Oracle/Sun: (not listed)

Table 1: Sampling of various tiers, architectures, functionality and storage solution options

Clarifying some of the above categories in table 1:

Servers: Application servers or computers running Windows, Linux, Hyper-V, VMware or other applications, operating systems and hypervisors.

Services: Professional and consulting services, installation, break fix repair, call center, hosting, managed services or cloud solutions

Enterprise storage: Large scale (hundreds to thousands of drives), many front end as well as back end ports, multiple controllers or storage processing engines (nodes), large amounts of cache with equally strong performance, feature rich functionality, resilient and scalable.

Data footprint impact reduction: Archive, data management, compression, dedupe, thin provision among other techniques. Read more here and here.

Data preservation: Archiving for compliance and non regulatory applications or data including software, hardware, services.

General data protection: Excluding physical or logical data security (firewalls, DLP, etc.), this would be backup/restore with encryption, replication, snapshots, hardware and software to support BC, DR and normal business operations. Read more about data protection options for virtual and physical storage here.

Scale out NAS: Clustered NAS, bulk unstructured storage, cloud storage system or file system. Read more about clustered storage here. HP has their eXtreme X series of scale out and bulk storage systems as well as gateways; these leverage IBRIX and PolyServe, which HP bought, and are available as software or as a solution (HP servers, storage and software). Dell now has Exanet, which they bought recently, as software or as a solution running on Dell servers with SAS, iSCSI or FC back end storage, plus optional data footprint reduction software such as Ocarina. IBM has GPFS as a software solution running on IBM or other vendors' servers with attached storage, or as a solution such as SONAS with IBM servers running software with IBM DS mid range storage. IBM also OEMs NetApp as the N series.

General purpose NAS: NAS (NFS and CIFS or optional AFP and pNFS) for everyday enterprise (or SME/SMB) file serving and sharing

Mid market multi protocol block: For SMB to SME environments that need scalable shared (SAN) scalable block storage using iSCSI, FC or FCoE

Scalable SMB iSCSI: For SMB to SME environments that need scalable iSCSI storage with feature rich functionality including built in virtualization

Entry level shared block: Block storage with flexibility to support iSCSI, SAS or Fibre Channel with optional NAS support built in or available via a gateway. For example, external SAS RAID shared storage between 2 or more servers configured in a Hyper-V or VMware cluster that do not need or cannot afford the higher cost of iSCSI. Another example would be shared SAS (or iSCSI or Fibre Channel) storage attached to a server running storage software such as a clustered file system (e.g. Exanet), or VTL, dedupe, backup, archiving or data footprint reduction tools, or perhaps database software, where the higher cost or complexity of an iSCSI or Fibre Channel SAN is not needed. Read more about external shared SAS here.

Entry level unified multifunction: This is storage that can do block and file yet is scaled down to meet ease of acquisition, ease of sale, channel friendly, simplified deployment and installation yet affordable for SMBs or larger SOHOs as well as ROBOs.

Low end SOHO: Storage that can scale down to consumer, prosumer or lower end of SMB (e.g. SOHO) providing mix of block and file, yet priced and positioned below higher price multifunction systems.

Wait a minute, are there too many different categories or types of storage?

Perhaps; however, it also enables multiple tools (tiers of technologies) to be in a vendor's tool box, or in an IT professional's tool bin, to address different challenges. Let's come back to this in a few moments.

 

Some Industry trends and perspectives (ITP) thoughts:

How can Dell with 3PAR be an enterprise play without IBM mainframe FICON support?
Some would say forget about it, mainframes are dead, thus not a Dell objective, even though EMC, HDS and IBM sell a ton of storage into those environments. Fair enough argument, and one that 3PAR has faced for years while competing with EMC, HDS, HP, IBM and Fujitsu, thus they are versed in how to handle that discussion. The 3PAR teams can help the Dell folks determine where to hunt and farm for business, something that many of the Dell folks already know how to do. After all, today they have to flip that business to EMC or worse.

If truly pressured and in need, Dell could continue reference sales with EMC for DMX and VMAX. Likewise they could also go to Bustech and/or Luminex, who have open systems to mainframe gateways (including VTL support), under a custom or special solution sale. Ironically, EMC has in the past OEMed Bustech to transform their high end storage into mainframe VTLs (not to be confused with FalconStor or Quantum for open systems), and Data Domain partnered with Luminex.

BTW, did you know that Dell has had for several years a group or team that handles specialized storage solutions addressing needs outside the usual product portfolio?

Thus IMHO Dell's enterprise class focus will be on open systems large scale out, where they will compete with EMC DMX and VMAX, HDS USP or their soon to be announced enhancements, HP and their Hitachi Japan OEMed XP, IBM and the DS8000, as well as the seldom heard about yet equally scalable Fujitsu Eternus systems.

 

Why only $1.15B, after all they paid $1.4B for EqualLogic?
IMHO, had this deal occurred a couple of years ago when some valuations were still flying higher than today, and had 3PAR been at their current sales run rate and customer deployments, it is possible the amount would have been higher. Either way, this is still a great value for both Dell and 3PAR investors, customers, employees and partners.

 

Does this mean Dell dumps EMC?
Near term I do not think Dell dumps the EMC dudes (or dudettes) as there is still plenty of business in the mid market for the two companies. However, over time, I would expect that Dell will unleash the 3PAR folks into the space where normally a CLARiiON CX would have been positioned such as deals just above where EqualLogic plays, or where Fibre Channel is preferred. Likewise, I would expect Dell to empower the 3PAR team to go after additional higher end deals where a DMX or VMAX would have been the previous option not to mention where 3PAR has had success.

This would also mean extending into sales against HP EVA and XP, IBM DS5000 and DS8000 as well as XIV, and Oracle/Sun 6000 and 7000s to name a few. In other words there will be some spin around coopetition; however, longer term you can read the writing on the wall. Oh, btw, lest you forget, Dell is first and foremost a server company who is now getting into storage in a much bigger way, and EMC is first and foremost a storage company who is getting into servers via VMware as well as their Cisco partnerships.

Are shots being fired across each other's bows? I will leave that up to you to speculate.

 

Does this mean Dell MD1000/MD3000 iSCSI, SAS and FC disappears?
I do not think so, as they have had a specific role at the entry level below where the EqualLogic iSCSI only solution fits, providing mixed iSCSI, SAS and Fibre Channel capabilities to compete with the HP MSA2000 (OEMed from Dot Hill) and IBM DS3000 (OEMed from LSI). While 3PAR could be taken down into some of these markets, doing so would potentially dilute the brand and thus the premium margin of those solutions.

Likewise, there is a play with server vendors to attach shared SAS external storage to small 2 and 4 node clusters for VMware, Hyper-V, Exchange, SQL, SharePoint and other applications where iSCSI or Fibre Channel are too expensive or not needed, or where NAS is not a fit. Another play for shared external SAS is attaching low cost storage to scale out clustered NAS or bulk storage where software such as Exanet runs on a Dell server. Take a closer look at how HP is supporting their scale out, as well as IBM and Oracle among others. Sure, you can find iSCSI or Fibre Channel or even NAS back ends to file servers; however, there is a growing trend of using shared SAS.

 

Does Dell now have too many different storage systems and solutions in their portfolio?
Possibly, depending upon how you look at it, and certainly the potential is there for revenue prevention teams to get in the way of each other instead of competing with external competitors. However, if you compare the Dell lineup with those of EMC, HP, IBM and Oracle/Sun among others, it is not all that different. Note that HP, IBM and Oracle also have something in common with Dell in that they are general IT resource providers (servers, storage, networks, services, hardware and software) as compared to other traditional storage vendors.

Consequently if you look at these vendors in terms of their different markets from consumer to prosumer to SOHO at the low end of the SMB to SME that sits between SMB and enterprise, they have diverse customer needs. Likewise, if you look at these vendors server offerings, they too are diverse ranging from desktops to floor standing towers to racks, high density racks and blade servers that also need various tiers, architectures, price bands and purposed storage functionality.

 

What will be key for Dell to make this all work?
The key for Dell will be similar to that of their competitors which is to clearly communicate the value proposition of the various products or solutions, where, who and what their target markets are and then execute on those plans. There will be overlap and conflict despite the best spin as is always the case with diverse portfolios by vendors.

However, if Dell can keep their teams focused on expanding their customer footprint at the expense of external competition vs. cannibalizing their own internal product lines, they can also create or extend into new markets or applications. Dell now has many tools in their tool box and thus needs to educate their solution teams on what to use or sell when, where, why and how, instead of just having one tool or a singular focus. In other words, while a great solution, Dell no longer has to respond as if the answer to everything is iSCSI based EqualLogic.

Likewise, Dell can leverage the same emotion and momentum behind the EqualLogic teams to invigorate and unleash the 3PAR teams and solutions into the higher end of the SMB, SME and enterprise environments.

I'm still thinking that Exanet is a diamond in the rough for Dell: they can install the clustered scalable NAS software onto their servers and use either lower end shared SAS RAID (e.g. MD3000), iSCSI (MD3000, EqualLogic or 3PAR) or higher end Fibre Channel (3PAR) for scale out, cloud and other bulk solutions competing with HP, Oracle and IBM. Dell also still has the Windows based storage server for entry level multi protocol block and file capabilities, as well as what they OEM from EMC.

 

Is Dell done shopping?
IMHO I do not think so, as there are still areas where Dell can extend their portfolio, and not just in storage. Likewise there are still some opportunities, or perhaps bargains, out there for fall and beyond.

 

Does this mean that Dell is not happy with EqualLogic and iSCSI?
Simply put, from my perspective of talking with Dell customers, prospects and partners and seeing them all in action, nothing could be further from the truth. Look at this as a way to extend the Dell story and capabilities into new markets; granted, the EqualLogic folks now have a new sibling to compete with for internal marketing and management love and attention.

 

Isn't Dell just an iSCSI focused company?
A couple of years ago I was quoted in one of the financial analysis reports as saying that Dell needed to remain open to various forms of storage instead of becoming singularly focused on just iSCSI as a result of the EqualLogic deal. I stand by that statement: to be a strong enterprise contender, Dell needs a balanced portfolio across different price or market bands, from block to file, and from shared SAS to iSCSI to Fibre Channel and emerging FCoE.

This also means supporting traditional NAS across those different price bands or market sectors, as well as supporting the emerging and fast growing unstructured data markets where there is a need for scale out and bulk storage. Thus it is great to see Dell remaining open minded and not becoming singularly focused on just iSCSI, instead providing the right solution to meet their diverse customer as well as prospect needs or opportunities.

While EqualLogic was and is a very successful iSCSI focused storage solution, not to mention one that Dell continues to leverage, Dell is more than just iSCSI. Take a look at Dell's current storage lineup in table 1 and there is a lot of existing diversity. Granted, some of that current diversity is via partners, which the 3PAR deal helps to address. What this means is that iSCSI continues to grow in popularity; however, there are other needs where shared SAS, Fibre Channel or FCoE will be required, opening new markets to Dell.

 

Bottom line and wrap up (for now)
This is a great move for Dell (as well as 3PAR) to move up market in the storage space with less reliance on EMC. Assuming Dell can communicate what to use when, where, why and how to their internal teams, partners, the industry and customers, not to mention then execute on it, they should have themselves a winner.

Will this deal end up being an even better bargain than when Dell paid $1.4B for EqualLogic?

Not sure yet. It certainly has the potential if Dell can execute on their plans without losing momentum in any of their other areas (products).

What's your take?

Cheers gs

Greg Schulz – Author The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier)
twitter @storageio

Here are some related links to read more

Data footprint reduction (Part 1): Life beyond dedupe and changing data lifecycles

Over the past couple of weeks there has been a flurry of IT industry activity around data footprint impact reduction with Dell buying Ocarina and IBM acquiring Storwize. For those who want the quick (compacted, reduced) synopsis of what Dell buying Ocarina as well as IBM acquiring Storwize means read this post here along with some of my comments here and here.

Now, before any Drs or Divas of Dedupe get concerned and feel the need to debate dedupe's expanding role, success or applicability, relax, take a deep breath, then read on, and take another breath before responding if so inclined.

The reason I mention this is that some may mistake this as a piece against, or not in favor of, dedupe because it talks about life beyond dedupe, which could be misread as indicating a diminished role for dedupe. That is not the case (read ahead and see figure 5 for the bigger picture).

Likewise, since this piece talks about archiving for compliance and non regulatory situations along with compression, data management and other forms of data footprint reduction, some might feel compelled to defend dedupe's honor and future role.

Again, relax, take a deep breath and read on, this is not about the death of dedupe.

Now for others, you might wonder about the dedupe tongue in cheek humor mentioned above (which is what it is), and the answer is quite simple. The industry in general is drunk on dedupe and in some cases has numbed its senses, not to mention blurred its vision of the even bigger business benefits of data footprint reduction beyond today's backup centric or VMware server virtualization dedupe discussions.

Likewise, it is time for the industry to wake (or sober) up and, instead of trying to stuff everything under or into the narrowly focused dedupe bottle, realize that there is a broader umbrella called data footprint impact reduction. This includes, among other techniques, dedupe, archive, compression, data management, data deletion and thin provisioning across all types of data and applications. What this means is a broader opportunity or market than what exists or is being discussed today, leveraging different techniques, technologies and best practices.

Consequently, this piece is about expanding the discussion to the larger opportunity: for vendors or VARs to extend their focus to the bigger world of overall data footprint impact reduction beyond where they are currently focused, and for IT customers to realize that there are more opportunities to address data and storage optimization across the entire organization using various techniques instead of just focusing on backup.

In other words, there is a very bright future for dedupe as well as for the other techniques and technologies that fall under the data footprint reduction umbrella, spanning data stored online, offline, near line, primary, secondary, tertiary, virtual and in a public or private cloud.

Before going further, however, let's take a step back and look at some business along with IT issues, challenges and opportunities.

What is the business and IT issue or challenge?
Given that there is no such thing as a data or information recession (shown in figure 1), IT organizations of all sizes are faced with the constant demand to store more data, including multiple copies of the same or similar data, for longer periods of time.


Figure 1: IT resource demand growth continues

The result is an expanding data footprint and increased IT expenses, both capital and operational, due to the additional Infrastructure Resource Management (IRM) activities needed to sustain given levels of application Quality of Service (QoS) delivery, shown in figure 2.

Some common IT costs associated with supporting an increased data footprint include among others:

  • Data storage hardware and management software tools acquisition
  • Associated networking or IO connectivity hardware, software and services
  • Recurring maintenance and software renewal fees
  • Facilities fees for floor space, power and cooling along with IT staffing
  • Physical and logical security for data and IT resources
  • Data protection for HA, BC or DR including backup, replication and archiving


Figure 2: IT Resources and cost balancing conflicts and opportunities

Figure 2 shows the result: IT organizations of all sizes are faced with having to do more with what they have, or with less, including maximizing available resources. In addition, IT organizations often have to overcome common footprint constraints (available power, cooling, floor space, server, storage and networking resources, management, budgets, and IT staffing) while supporting business growth.

Figure 2 also shows that to support demand, more resources are needed (real or virtual) in a denser footprint, while maintaining or enhancing QoS plus lowering per unit resource cost. The trick is improving on available resources while maintaining QoS in a cost effective manner. By comparison, traditionally if costs are reduced, one of the other curves (amount of resources or QoS) is often negatively impacted, and vice versa. Meanwhile in other situations the result can be moving problems around, only to have them resurface elsewhere later. Instead, find, identify, diagnose and prescribe the applicable treatment, whether a form of data footprint reduction or another IT IRM technology, technique or best practice, to cure the ailment.

What is driving the expanding data footprint?
Granted, more data can be stored in the same or smaller physical footprint than in the past, thus requiring less power and cooling per GByte, TByte or PByte. However, the data growth rates necessary to sustain business activity, enhance IT service delivery and enable new applications are placing continued demands to move, protect, preserve, store and serve data for longer periods of time.

The popularity of rich media and Internet based applications has resulted in explosive growth of unstructured file data, requiring new and more scalable storage solutions. Unstructured data includes spreadsheets, PowerPoint slide decks, Adobe PDF and Word documents, web pages, along with video and audio (JPEG, MP3 and MP4) files. This trend toward increasing data storage requirements does not appear to be slowing anytime soon for organizations of any size.

After all, there is no such thing as a data or information recession!

Changing data access lifecycles
Many strategies or marketing stories are built around the premise that shortly after data is created, it is seldom, if ever, accessed again. The traditional transactional model lends itself to what has become known as information lifecycle management (ILM), where data can and should be archived or moved to lower cost, lower performing, high density storage, or even deleted where possible.

Figure 3 shows, on the left side of the diagram, an example of the traditional transactional data lifecycle, with data being created and then going dormant. The amount of dormant data will vary by the type and size of an organization along with its application mix.


Figure 3: Changing access and data lifecycle patterns

However, unlike the transactional data lifecycle models where data can be removed after a period of time, Web 2.0 and related data needs to remain online and readily accessible. Unlike traditional data lifecycles where data goes dormant after a period of time, on the right side of figure 3, data is created and then accessed on an intermittent basis with variable frequency. The frequency between periods of inactivity could be hours, days, weeks or months and, in some cases, there may be sustained periods of activity.

A common example is a video or some other content that gets created and posted to a web site or social networking site such as Facebook, LinkedIn or YouTube among others. Once the content is discovered and discussed, while the content itself may not change, additional comment and collaborative data gets wrapped around it as additional viewers discover and comment on it. Solution approaches for this new category and data lifecycle model include low cost, relatively good performing, high capacity storage such as clustered bulk storage, as well as leveraging different forms of data footprint reduction techniques.

Given that a large (and growing) percentage of new data is unstructured, NAS based storage solutions, including clustered, bulk, cloud and managed service offerings with file based access, are gaining in popularity. To reduce cost while supporting increased business demands (figure 2), a growing trend is to utilize clustered, scale out and bulk NAS file systems that support NFS and CIFS for concurrent large and small IOs, as well as optionally pNFS for large parallel access of files. These solutions are also increasingly being deployed with either built in or add on data footprint reduction techniques including archiving, policy management, dedupe and compression among others.

What is your data footprint impact?
Your data footprint impact is the total data storage needed to support your various business application and information needs. Your data footprint may be larger than the amount of actual data you have, as seen in figure 4: an example organization has 20TBytes of storage space allocated and being used for databases, email, home directories, shared documents, engineering documents, financial and other data in different formats (structured and unstructured), not to mention with varying access patterns.


Figure 4: Expanding data footprint due to data proliferation and copies being retained

Of the 20TBytes of data allocated and used, it is very likely that the consumed storage space is not 100 percent utilized. Database tables may be sparsely (empty or not fully) allocated, and there is likely duplicate data in email and other shared documents or folders. Additionally, of the 20TBytes, 10TBytes are duplicated to three different areas on a regular basis for application testing, training, business analysis and reporting purposes.

The overall data footprint is the total amount of data including all copies plus the additional storage required for supporting that data such as extra disks for Redundant Array of Independent Disks (RAID) protection or remote mirroring.

In this overly simplified example, the data footprint and subsequent storage requirements are several times the 20TBytes of data. Consequently, the larger the data footprint, the more data storage capacity and performance bandwidth is needed, not to mention managed, protected and housed (powered, cooled, situated in a rack or cabinet on a floor somewhere).
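As a back of the envelope sketch, the example above can be modeled in a few lines of Python. The RAID and mirroring overhead factors here are illustrative assumptions, not measurements from any specific environment:

```python
# Rough model of the expanding data footprint example above.
# The overhead factors are illustrative assumptions only.
primary_tb = 20          # allocated and used for databases, email, etc.
copy_tb = 10 * 3         # 10TBytes duplicated to three areas for test/training/reporting
raid_overhead = 1.25     # e.g. parity RAID adding roughly 25 percent (assumption)
mirror_factor = 2        # remote mirroring doubling protected capacity (assumption)

footprint_tb = (primary_tb + copy_tb) * raid_overhead * mirror_factor
print(footprint_tb)      # 125.0: several times the 20TBytes of primary data
```

Even with modest overhead assumptions, the footprint lands at several multiples of the primary data, which is the point of the example.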

Data footprint reduction techniques
While data storage capacity has become less expensive on a relative basis, as data footprints continue to expand to support business requirements, more IT resources will need to be made available in a cost effective yet QoS satisfying manner (again, refer back to figure 2). What this means is that more IT resources, including server, storage and networking capacity, management tools along with associated software licensing and IT staff time, will be required to protect, preserve and serve information.

By more effectively managing the data footprint across different applications and tiers of storage, it is possible to enhance application service delivery and responsiveness as well as facilitate more timely data protection to meet compliance and business objectives. To realize the full benefits of data footprint reduction, look beyond backup and offline data improvements to include online and active data using various techniques such as those in table 1 among others.

There are several methods (shown in table 1) that can be used to address data footprint proliferation without compromising data protection or negatively impacting application and business service levels. These approaches include archiving of structured (database), semi structured (email) and unstructured (general files and documents) data, data compression (real time and offline) and data deduplication.

 

Archiving

  When to use: Structured (database), email and unstructured data
  Characteristics: Software to identify and remove unused data from active storage devices
  Examples: Database, email and unstructured file solutions with archive storage
  Caveats: Requires time and knowledge to know what and when to archive and delete; data and application aware

Compression

  When to use: Online (database, email, file sharing), backup or archive
  Characteristics: Reduces the amount of data to be moved (transmitted) or stored on disk or tape
  Examples: Host software; disk, tape and network router based compression; compression appliances; some primary storage system solutions
  Caveats: Software based solutions consume host CPU cycles, impacting application performance

Deduplication

  When to use: Backup or archiving of recurring and similar data
  Characteristics: Eliminates duplicate files or file content observed over a period of time to reduce data footprint
  Examples: Backup and archive target devices, Virtual Tape Libraries (VTLs) and specialized appliances
  Caveats: Works well in background mode for backup data to avoid performance impact during data ingestion

Table 1: Data footprint reduction approaches and techniques

Archiving for compliance and general data retention
Data archiving is often perceived as a solution for compliance; however, archiving can be used for many other non compliance purposes. These include general data footprint reduction, boosting performance and enhancing routine data maintenance and data protection. Archiving can be applied to structured database data, semi structured email data and attachments, and unstructured file data.

A key to deploying an archiving solution is having insight into what data exists along with applicable rules and policies to determine what can be archived, for how long, how many copies and how data ultimately may be finally retired or deleted. Archiving requires a combination of hardware, software and people to implement business rules.

A challenge with archiving is having the time and tools available to identify what data should be archived and what data can be securely destroyed when no longer needed. Further complicating archiving is that knowledge of the data value is also needed; this may well include legal issues as to who is responsible for making decisions on what data to keep or discard.

If a business can invest in the time and software tools, as well as identify which data to archive to support an effective archive strategy, the returns can be very positive towards reducing the data footprint without limiting the amount of information available for use.

Data compression (real time and offline)
Data compression is a commonly used technique for reducing the size of data being stored or transmitted to improve network performance or reduce the amount of storage capacity needed for storing data. If you have used a traditional or TCP/IP based telephone or cell phone, watched either a DVD or HDTV, listened to an MP3, transferred data over the internet or used email you have most likely relied on some form of compression technology that is transparent to you. Some forms of compression are time delayed, such as using PKZIP to zip files, while others are real time or on the fly based such as when using a network, cell phone or listening to an MP3.

Two different approaches to data compression, which vary in time delay or impact on application performance along with the amount of compression and loss of data, are lossless (no data loss) and lossy (some data loss for a higher compression ratio). In addition to these approaches, there are also different implementations, including real time with no performance impact to applications and time delayed where there is a performance impact to applications.
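The lossless side of this tradeoff can be sketched with Python's standard zlib module, using compression levels as a rough stand-in for real time (fast, lower effort) versus time delayed (slower, higher effort) modes; actual storage products implement these modes very differently:

```python
import zlib

data = b"Compression works by removing redundancy in data. " * 2000

offline = zlib.compress(data, level=9)   # time delayed style: maximum effort
realtime = zlib.compress(data, level=1)  # real time style: minimal effort

# Lossless means an exact round trip, unlike lossy audio or video codecs.
assert zlib.decompress(offline) == data
assert zlib.decompress(realtime) == data

print(len(data), len(realtime), len(offline))  # both far smaller than the original
```

On repetitive data like this, even the fast level reduces the footprint dramatically; the difference between levels shows up more on harder-to-compress data and in CPU time spent.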

In contrast to traditional ZIP or offline, time delayed compression approaches that require complete decompression of data prior to modification, online compression allows for reading from, or writing to, any location within a compressed file without full file decompression and resulting application or time delay. Real time appliance or target based compression capabilities are well suited for supporting online applications including databases, OLTP, email, home directories, web sites and video streaming among others without consuming host server CPU or memory resources or degrading storage system performance.

Note that with the increase of CPU server processing performance along with multiple cores, server based compression running in applications such as database, email, file systems or operating systems can be a viable option for some environments.

A scenario for using real time data compression is time sensitive applications that require large amounts of data, such as online databases, video and audio media servers, and web and analytic tools. For example, databases such as Oracle support NFSv3 Direct IO (DIO) and Concurrent IO (CIO) capabilities to enable random and direct addressing of data within an NFS based file. This differs from traditional NFS operations where a file would be sequentially read or written.

Another example of using real time compression is to combine a NAS file server configured with 300GB or 600GB high performance 15.5K RPM Fibre Channel or SAS HDDs, in addition to flash based SSDs, to boost the effective storage capacity of active data without introducing the performance bottleneck associated with using larger capacity HDDs. Of course, compression results will vary with the type of solution being deployed and the type of data being stored, just as dedupe ratios will differ depending on the algorithm along with whether the data is text, video or object based among other factors.

Deduplication (Dedupe)
Data deduplication (also known as single instance storage, commonality factoring, data differencing or normalization) is a data footprint reduction technique that eliminates the occurrence of the same data. Deduplication works by normalizing the data being backed up or stored, eliminating recurring or duplicate copies of files or data blocks depending on the implementation.

Some data deduplication solutions boast spectacular ratios for data reduction given specific scenarios, such as backup of repetitive and similar files, while providing little value over a broader range of applications.

This is in contrast with traditional data compression approaches that provide lower, yet more predictable and consistent data reduction ratios over more types of data and applications, including online and primary storage scenarios. For example, in environments where there are few common or repetitive data files, data deduplication will have little to no impact, while data compression generally will yield some amount of data footprint reduction across almost all types of data.
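This difference is easy to demonstrate with a toy sketch: naive fixed size chunk dedupe (a simplification of real variable chunk implementations) versus zlib compression, applied to repetitive and to non repetitive data:

```python
import hashlib
import zlib

def deduped_size(data, chunk=4096):
    """Naive fixed-chunk dedupe: count each unique chunk once."""
    unique = {hashlib.sha256(data[i:i + chunk]).digest()
              for i in range(0, len(data), chunk)}
    return len(unique) * chunk

# Highly repetitive data, like repeated full backups of the same files
repetitive = b"backup of the same file content " * 40000

# Unique but text-like data: every line differs, so chunks rarely repeat
varied = b"".join(b"log entry %d: status ok\n" % i for i in range(40000))

for name, data in (("repetitive", repetitive), ("varied", varied)):
    print(name, len(data), deduped_size(data), len(zlib.compress(data)))
```

Dedupe collapses the repetitive stream to a handful of unique chunks while offering essentially no savings on the varied stream; compression reduces both, though far less spectacularly on the unique lines, which matches the predictable-but-lower ratio behavior described above.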

Some data deduplication solution providers have either already added, or have announced plans to add, compression techniques to complement and increase the data footprint reduction effectiveness of their solutions across a broader range of applications and storage scenarios, attesting to the value and importance of data compression in reducing data footprint.

When looking at deduplication solutions, determine whether the solution is designed to scale in terms of performance, capacity and availability over a large amount of data, along with how restoration of data will be impacted by scaling for growth. Other items to consider include how data is deduplicated, such as real time (inline) or some form of time delayed post processing, and the ability to select the mode of operation.

For example, a dedupe solution may be able to process data inline at a specific ingest rate until a certain threshold is hit, at which point processing reverts to post processing so as not to cause performance degradation to the application writing data to the deduplication solution. The downside of post processing is that more storage is needed as a buffer. It can, however, also enable solutions to scale without becoming a bottleneck during data ingestion.
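A minimal sketch of that hybrid inline/post-process behavior might look as follows; the threshold, chunk handling and rate inputs are all hypothetical simplifications, not how any particular product works:

```python
import hashlib

class HybridDedupeTarget:
    """Toy dedupe target: fingerprints chunks inline while ingest is below a
    threshold rate; above it, chunks are buffered for later post processing
    (trading buffer space for ingest performance)."""

    def __init__(self, inline_threshold_mbps=100):
        self.inline_threshold_mbps = inline_threshold_mbps
        self.store = {}      # fingerprint -> unique chunk
        self.deferred = []   # raw chunks buffered for post processing

    def ingest(self, chunk, current_rate_mbps):
        if current_rate_mbps <= self.inline_threshold_mbps:
            # Inline mode: dedupe immediately during ingestion
            self.store.setdefault(hashlib.sha256(chunk).digest(), chunk)
        else:
            # Threshold exceeded: defer to avoid slowing the backup stream
            self.deferred.append(chunk)

    def post_process(self):
        # Run later (e.g. off peak) to dedupe the buffered chunks
        while self.deferred:
            chunk = self.deferred.pop()
            self.store.setdefault(hashlib.sha256(chunk).digest(), chunk)

target = HybridDedupeTarget()
for _ in range(5):
    target.ingest(b"same block", current_rate_mbps=50)   # deduped inline
for _ in range(5):
    target.ingest(b"same block", current_rate_mbps=500)  # buffered instead
print(len(target.store), len(target.deferred))           # 1 unique, 5 buffered
target.post_process()
print(len(target.store), len(target.deferred))           # 1 unique, 0 buffered
```

The buffer is exactly the extra storage the paragraph above describes: it costs capacity while ingest is fast, then gets reclaimed once post processing catches up.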

However, there is life beyond dedupe, which in no way diminishes dedupe or its very strong and bright future. Having talked with hundreds of IT professionals (e.g. the customers), I am increasingly convinced that only the surface is being scratched for dedupe, not to mention the larger data footprint impact opportunity seen in figure 5.


Figure 5: Dedupe adoption and deployment waves over time

While dedupe is a popular technology from a discussion standpoint and has good deployment traction, it is far from reaching mass customer adoption or even broad coverage in environments where it is being used. StorageIO research shows the broadest adoption of dedupe centered around backup in smaller or SMB environments (dedupe deployment wave one in figure 5), with some deployment in Remote Office Branch Office (ROBO), work group and departmental environments.

StorageIO research also shows that adoption in many of those SMB, ROBO, work group or smaller environments has yet to reach 100 percent. This means there remains a large population that has yet to deploy dedupe, as well as further opportunities to increase the level of dedupe deployment among those already doing so.

There has also been some early adoption in larger core IT environments where dedupe coexists with and complements existing data protection and preservation practices. Another current deployment scenario for dedupe has been supporting core edge deployments in larger environments that provide backup and data protection for ROBO, work group and departmental systems.

Note that figure 5 simply shows the general types of environments in which dedupe is being adopted and is not any sort of indicator as to the degree of deployment by a given customer or IT environment.

What to do about your expanding data footprint impact?
Develop an overall data footprint reduction strategy that leverages different techniques and technologies, addressing online primary, secondary and offline data. Assess and discover what data exists and how it is used in order to effectively manage storage needs.

Determine policies and rules for retention and deletion of data combining archiving, compression (online and offline) and dedupe in a comprehensive data footprint strategy. The benefit of a broader, more holistic, data footprint reduction strategy is the ability to address the overall environment, including all applications that generate and use data as well as IRM or overhead functions that compound and impact the data footprint.

Data footprint reduction: life beyond (and complementing) dedupe
The good news is that the Drs. and Divas of dedupe marketing (the ones who are also good at the disco dedupe dance debates) have targeted backup as an initial market sweet (and success) spot, shown in figure 5, given the high degree of duplicate data.


Figure 6: Leverage multiple data footprint reduction techniques and technologies

However, that same good news is bad news in that there is now a stigma that dedupe is only for backup, similar to how archive was hijacked by the compliance marketing folks in the post Y2K era. There are several techniques that can be used individually to address specific data footprint reduction issues, or in combination, as seen in figure 7, to implement a more cohesive and effective data footprint reduction strategy.


Figure 7: How various data footprint reduction techniques are complementary

What this means is that archive, dedupe and other forms of data footprint reduction can and should be used beyond where they have been target marketed, applying the applicable tool for the task at hand. For example, a common industry rule of thumb is that on average, ten percent of data changes per day (your mileage and rate of change will certainly vary given applications, environment and other factors).

Now assume that you have 100TB (feel free to subtract a zero or two, or add as many as needed) of data (note I did not say storage capacity or percent utilized); a ten percent change rate would be 10TB that needs to be backed up, replicated and so forth. Basic 2 to 1 streaming tape compression (2.5 to 1 in upcoming LTO enhancements) would reduce the daily backup footprint from 10TB to 5TB.

Using dedupe at 10 to 1 would get that 10TB down to 1TB, or about the size of a large capacity disk drive. At 20 to 1, that cuts the daily backup down to 500GB, and so forth. The net effect is that more daily backups can be stored in the same footprint, which in turn helps expedite individual file recovery by having more options to choose from in the disk based cache, buffer or storage pool.

On the other hand, if your objective is to reduce and eliminate storage capacity, then the same number of backups can be stored on less disk, freeing up resources. Now take the savings times the number of days in your backup retention window and you should see the numbers start to add up.
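The arithmetic above is simple enough to sketch; all figures are the article's illustrative rule of thumb numbers, and the retention window is a hypothetical addition:

```python
data_tb = 100
daily_change_tb = data_tb * 0.10   # ten percent rule of thumb -> 10TB per day

# Daily backup footprint under different reduction ratios
for label, ratio in (("2:1 tape compression", 2),
                     ("10:1 dedupe", 10),
                     ("20:1 dedupe", 20)):
    print(label, daily_change_tb / ratio, "TB per day")

# Multiply by retention to see the savings add up (30 days is hypothetical)
retention_days = 30
print("30 day footprint at 10:1:", daily_change_tb / 10 * retention_days, "TB")
```

Scaling the daily savings across a full retention window is where the capacity (or extra restore point) benefit becomes obvious.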

Now what about the other 90 percent of the data that may not have changed, or that did change and exists on higher performance storage?

Can its footprint impact be reduced?

The answer should be perhaps, or it depends, which prompts the question of what tool would be best. There is a popular tendency, as is often the case with industry buzzwords or technologies, to use it everywhere. After all, goes the thinking, if it is a good thing, why not use and deploy more of it everywhere?

Keep in mind that dedupe trades time (to perform analysis and apply intelligence to further reduce data) in exchange for space capacity. Trading time for space capacity can have a negative impact on applications that need lower response time and higher performance, where the focus is on rates vs. ratios. For example, the other 90 to 100 percent of the data in the above example may have to be on a mix of high and medium performance storage to meet QoS or service level agreement (SLA) objectives. While it would be fun, or perhaps cool, to try to achieve a high data reduction ratio on the entire 100TB of active data with dedupe (e.g. trying to achieve primary dedupe), the performance impacts could be severe.

The option is to apply a mix of different data footprint reduction techniques across the entire 100TB. That is, use dedupe where applicable and where higher reduction ratios can be achieved while balancing performance; use compression for streaming data to tape for retention or archive, as well as in databases or other application software, not to mention in networks. Likewise, use real time compression, or what some refer to as primary dedupe, for online active changing data along with online static read only data.

Deploy a comprehensive data footprint reduction strategy combining various techniques and technologies to address point solution needs as well as the overall environment, including online, near line for backup, and offline for archive data.

Let's not forget about archiving, thin provisioning, space saving snapshots and commonsense data management among other techniques across the entire environment. In other words, if your focus is just on dedupe for backup to achieve an optimized and efficient storage environment, you are missing out on a larger opportunity. However, this also means having multiple tools or technologies in your IT IRM toolbox, as well as understanding what to use when, where and why.

Data transfer rates are a key metric for performance (time) optimization, such as meeting backup, restore or other data protection windows. Data reduction ratios are a key metric for capacity (space) optimization, where the focus is on storing as much data as possible in a given footprint.

Some additional take away points:

  • Develop a data footprint reduction strategy for online and offline data
  • Energy avoidance can be accomplished by powering down storage
  • Energy efficiency can be accomplished by using tiered storage to meet different needs
  • Measure and compare storage based on idle and active workload conditions
  • Storage efficiency metrics include IOPS or bandwidth per watt for active data
  • Storage capacity per watt per footprint and cost is a measure for inactive data
  • Small percentage reductions on a large scale have big benefits
  • Align the applicable form of virtualization for the given task at hand
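The efficiency metrics in the list above reduce to simple ratios. This Python sketch, using made-up device figures purely for illustration, contrasts an active workload metric (IOPS per watt) with an inactive data metric (capacity per watt):

```python
def iops_per_watt(iops, watts):
    """Activity metric: compare storage that is doing useful work."""
    return iops / watts

def gb_per_watt(capacity_gb, watts):
    """Capacity metric: compare storage holding inactive or idle data."""
    return capacity_gb / watts

# Hypothetical tiers for illustration only
fast_tier = iops_per_watt(iops=20_000, watts=250)      # performance tier
bulk_tier = gb_per_watt(capacity_gb=48_000, watts=400)  # capacity tier

print(f"Fast tier: {fast_tier:.0f} IOPS per watt")
print(f"Bulk tier: {bulk_tier:.0f} GB per watt")
```

Measuring each tier with the metric that matches its role (active vs. inactive data) is what makes tiered storage comparisons meaningful, rather than judging everything on a single number.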


Wrap up (for now, read part II here)

For some applications, reduction ratio is the important focus, so the tools or modes of operation are chosen to achieve those results.

Likewise, for other applications where the focus is on performance with some data reduction benefit, tools are optimized for performance first and reduction second.

Thus I expect messaging from some vendors to adjust (expand) to match the capabilities they have in their toolboxes (product portfolios).

Consequently, IMHO some of the backup centric dedupe solutions may find themselves in niche roles in the future unless they can diversify. Vendors with multiple data footprint reduction tools will also do better than those with only a single-function or narrowly focused tool.

However for those who only have a single or perhaps a couple of tools, well, guess what the approach and messaging will be.

After all, if all you have is a hammer, everything looks like a nail; if all you have is a screwdriver, well, you get the picture.

On the other hand, if you are still not clear on what all this means, send me a note, give me a call, post a comment or a tweet and I will be happy to discuss it with you.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Availability or lack thereof: Lessons From Our Frail & Aging Infrastructure

I have a new blog post over at Enterprise Efficiency about aging infrastructures, including those involved with IT, telecom and related fields.

As a society, we face growing problems repairing and maintaining the vital infrastructure we once took for granted.

Most of these incidents involve aging, worn-out physical infrastructure desperately in need of repair or replacement. But infrastructure doesn’t have to be old or even physical to cause problems when it fails.

The IT systems and applications all around us form a digital infrastructure that most enterprises take for granted until it’s not there.

Bottom line, there really isn’t much choice.

You can either pay up front now to update aging infrastructures, or wait and pay more later. Either way, there will be a price to pay, and you cannot realize cost savings until you actually embark on that endeavor.

Here is the link to the full blog post over at Enterprise Efficiency.

Ok, nuff said.

Cheers gs


A Storage I/O Momentus Moment

I recently asked for and received from Seagate (see the recent post about them moving their paper headquarters to Ireland here) a Momentus XT 500GB 7200 RPM 2.5 inch Hybrid Hard Disk Drive (HHDD) to use in an upcoming project. That project is not to test a bunch of different Hard Disk Drives (HDDs), HHDDs, Removable HDDs (RHDDs) or Solid State Devices (read more about SSDs here and here, or storage optimization here) in order to produce results for someone for a fee or some other consideration.

Do not worry, I am not jumping on the bandwagon of calling my office collection of computers, storage, networks and software the StorageIO Independent hands-on test lab. Instead, my objective is to actually use the Momentus XT in conjunction with other storage I/O devices ranging from notebook or laptop, desktop or server, NAS and cloud based storage as part of regular projects that I'm working on, both in the office as well as while traveling to various out and about activities.

More often than not these days, the common perception is that if anybody is talking about a product or technology it must be a paid-for activity; after all, why would anyone write or talk about something without getting or expecting something in exchange (granted, there are some exceptions)? Given this era of transparency talk, let's walk the talk. Here is my disclosure, which, for those who have read my content before, hopefully shows that disclosures should be simple, straightforward, easy, fun and common-sense based instead of dancing around or hiding what may be going on.

Disclosure moment:
This is not a paid-for or sponsored blog (read my disclosure statement here) and in fact is in no way connected with, endorsed, sanctioned or approved by Seagate, nor have they been or are they currently a client. I did, however, ask them for, and they offered to send me, a single 500GB Momentus XT Hybrid Hard Disk Drive (HHDD) with no enclosure, accessories, adapter, cables, software or other packaging, to be used for a project I am working on. I did buy from Amazon.com a Seagate GoFlex USB 3.0 to SATA 3 connection cable kit that I had been eyeing for some other projects. Nuff said about that.

What am I doing with a Seagate Momentus XT
As to the project I am working on, it has nothing to do with Seagate or any other vendors or clients for that matter as it is a new book that I will tell you more about in future posts. What I can share with you for now is that it is a follow on to my most previous books ( The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier) ). The new book will also be published by CRC Taylor and Francis.

Now for those who are interested in why I would request a Momentus XT Hybrid Hard Disk Drive (HHDD) from Seagate while turning down other offers of free hardware, software, services, trips and the like, the reasons are several. First, I already own some Momentus HDDs (as perhaps you do without realizing it), so I thought it would be fun and relatively straightforward to make some general comparisons. Second, I needed some additional storage and I/O improvements to complement and coexist with what I already have.

Does this mean that the book is going to be about flash Solid State Devices (SSD) since I am using a Momentus XT HHDD? The short answer is no; it will be much more broadly focused. Certainly various types of storage and I/O, public and private clouds, management, gaining control, networking, virtualization as well as other hardware, software and services techniques and technologies will be discussed, building on my two previous books.

In addition, I want to see how compatible and useful the HHDDs are in everyday activities, as opposed to running a couple of standard Iometer or other so-called lab bench tests. After all, when you buy storage or any IT solution, do you buy it to run tests in your lab, or do you buy it to do actual day to day tasks?

I also have been a fan of the HHDD as well as flash and DRAM based SSDs for many years (make that decades for SSDs) and see the opportunity to increase how I am actually using HDDs, HHDDs, SSDs as well as Removable Hard Disk Drives (RHDD) in conjunction with NAS, DAS and other storage to support my book writing as well as other projects that I have bought in the past.

What is the Seagate Momentus XT
The Seagate Momentus series of HDDs is positioned as desktop, notebook and laptop devices that vary in rotational speed (RPM), physical form factor, storage capacity and price. The XT is a Hybrid Hard Disk Drive (HHDD), essentially a best-of-breed (hence hybrid) device incorporating the high capacity and low cost of a traditional 2.5 inch 7200 RPM HDD with the performance boost of flash SSD memory. For example, some initial testing with very large files has found that the XT can in some instances be as fast as an SSD while holding 10x the capacity at a favorable price.

In other words, an effective balance of cost per GByte of capacity, cost per IOP and energy efficiency per IOP. This does not mean, however, that an XT should be used everywhere or as a replacement for DRAM or flash SSD; quite the contrary, as those devices are good tools for specific needs or applications. Instead, the XT provides a good balance of performance and capacity to bridge the gap between a traditional spinning HDD's price per capacity and an SSD's performance per cost. (For those interested, here is a link to what Seagate is doing with SSD, e.g. Pulsar, in addition to HHDD and HDD.)
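To sketch that balance with numbers, here is a small Python example comparing cost per GByte and cost per IOP across three device classes. The prices, capacities and IOPS figures are hypothetical, round numbers chosen for illustration — they are not Seagate's specifications or measured results:

```python
# Hypothetical price, capacity and IOPS figures for illustration only.
devices = {
    "7200 RPM HDD": {"price": 60.0,  "gb": 500, "iops": 120},
    "Momentus XT":  {"price": 100.0, "gb": 500, "iops": 400},
    "flash SSD":    {"price": 700.0, "gb": 64,  "iops": 5000},
}

def cost_per_gb(d):
    """Capacity economics: dollars per GByte stored."""
    return d["price"] / d["gb"]

def cost_per_iop(d):
    """Performance economics: dollars per I/O per second."""
    return d["price"] / d["iops"]

for name, d in devices.items():
    print(f"{name:12s}  ${cost_per_gb(d):7.3f}/GB   ${cost_per_iop(d):6.3f}/IOP")
```

Even with made-up numbers, the pattern holds: the SSD wins on cost per IOP, the HDD wins on cost per GByte, and the HHDD sits in between on both, which is exactly the gap it is meant to bridge.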

Value proposition and business (or consumer) benefits moment
What is the benefit, why not just go all flash?

Simple: price, unless your specific needs fit into the capacity space of an SSD and you need both the higher performance and lower energy draw (with correspondingly less heat generation). Note that I did not say heat elimination; during a recent quick test of copying 6GB of data to a flash based SSD, it was warm, just as the XT device was, though both were a bit cooler than a comparable 7200 RPM 2.5 inch drive. If you can afford a full flash or DRAM based SSD device and it fits your needs and compatibility, go for it. However, also make sure that you will see the full expected benefit of adding an SSD to your specific solution, as not all implementations are the same (e.g. do your homework).

Why not just go all HDD?

Simple: economics and performance, which is why, as I said back in 2005, HHDDs have a very bright future and will IMHO drive a wedge between the traditional HDD and emerging flash based SSD markets, at least for non-consumer devices on a near term basis, given their compatibility.

In other words, you could think of it as a compromise, or as a best of breed. For example, I can see where, for compatibility, not to mention cost and customer comfort with a known entity, the HHDD will gain some popularity in desktops, laptops, notebooks as well as other devices where a performance boost is needed, however not at the expense of throwing out capacity or blowing tight economic budgets.

I can also see some interesting scenarios for hosting virtual machines (VMs) to support server virtualization with VMware, Hyper-V or Xen based solutions among others. Another scenario is bulk storage or archive and backup solutions, where the HHDD with its extended cache in the form of flash can help boost performance of read or write operations on VTLs and dedupe devices, archive platforms, backup or other similar functions. Sure, the Momentus XT is positioned as a desktop, notebook type device; however, has that ever stopped vendors or solution providers from using those types of devices in roles other than what they were designed for? I am just sayin.

Speeds, feeds and buzzword bingo moment
Seagate has many different types of disk drives that can be found here. In general, the Momentus XT is a 2.5 inch small form factor (SFF) Hybrid Hard Disk Drive (HHDD) available in 500GB, 320GB and 250GB capacities (I have the 500GB model ST95005620AS) with 4GB of SLC NAND (flash) SSD memory, 32MB of drive level cache and an underlying 7200 RPM disk drive with a SATA 3Gb/s interface as well as Native Command Queuing (NCQ). Now if you want to say that the XT implements tiered storage in a single device (DRAM, flash and HDD), go ahead. Following are a couple of links where you can learn more.

Seagate Seatools disk drive diagnostic software (free here)

Seagate FreeAgent Goflex Upgrade Cable (USB 3.0 to SATA 3 STAE104) (Seagate site and Amazon)

Seagate Momentus XT site with general information, product overview and data sheets as well as on Amazon

What does a Momentus XT have to do with writing a book?
If you have ever written a book, or for that matter done a large development project of any type, then this should be a bit familiar. These types of projects include the need to keep organized, as well as protected, multiple copies of documents (a deduper's dream) including text, graphics or figures and spreadsheets, not to mention project tracking material among others. Likewise, as is the case with other authors who work for a living, much of these books is written, edited, proofed or thought about while traveling to different events, client sites, conferences, meetings or on vacation for that matter. Hence the need to have multiple copies of data on different devices to help guard against when something happens (note that I did not say if).

This is nothing new, as with each of my last two solo book projects, as well as when I was a coauthor contributing content to other books including The Resilient Enterprise (Veritas/Symantec), much of the content was created while traveling, relying on portable storage and backup while on the road. Something someone pointed out to me recently is that this is an example of eating your own dog food, or eliminating the shoemaker's children syndrome (where the shoemaker makes shoes for others but not for his own children).

Initial moments and general observations
From time to time I will post some notes and observations about how the Momentus XT is performing or behaving. If all goes as planned, and so far it has, it should coexist very transparently with some of my Removable Hard Disk Drives (RHDD), such as the Imation Odyssey, which I bought several years ago for offsite bulk removable storage of data that goes to a secure vault somewhere.

Initial deployment, other than a stupid mistake on my part, has been smooth. What was the stupid mistake, you ask? Simple: when I attached the drive via the USB 3.0 to SATA 3 connector cable to one of my XP SP3 systems, Windows saw the device, however it did not show up in the list of available drives. Ok, I know, I know, it was late in the evening; however, that is no excuse for not realizing that the disk had not yet been initialized, let alone formatted. A quick check using Seatools (free here) showed all was well. I then launched Windows Disk Manager, did the initialize, followed by the format, and all was good from that point on. Wow, wonder how much credibility I will lose over that gaffe with the techno elite (that is a joke and a bit of humor btw).

I have already done some initial familiarization and compatibility testing with some of my other drives, including a 2.5 inch 64GB SATA flash SSD as well as a 2.5 inch 7200 RPM HDD, both of which I use for bulk data movement activities. At some point I also plan on attaching the XT to my Iomega IX4 NAS to try various things, as I have done with other external devices in the past.

Granted, these were not ideal conditions, as I was in a hurry and wanted to get some quick info. The configuration was probably less than ideal: the format after the HDD was first initialized took about an hour using a FAT32 plug and play configuration. With NTFS and other optimizations I assume it can be faster; however, this was again just to get an initial glimpse of the device in use.

Given that it is an HHDD that uses flash as a big buffer, with a 500GB HDD plus 32MB of cache as a backing store, it was interesting attaching it to the computer, waiting a few minutes, then launching a file copy. Where a normal HDD would start slightly vibrating due to rotation, it was a few moments before any vibration or noise was detected on the Momentus XT, which should be no surprise, as the flash was doing its job acting as a buffer until the HDD spun up for work.

I did some initial file copying back and forth between different computers while the LAN and NAS were busy doing other things, including backups to the Mozy cloud. No discrete time or performance benchmarks to talk about yet; however, overall the XT, not surprisingly, does seem to be a bit faster than another external 7200 RPM 2.5 inch drive I use for bulk data moves, on both reads and writes. Likewise, given that it is a hybrid HDD leveraging flash as an extended cache with an underlying HDD plus 32MB of cache, it may not always be as fast as my external 2.5 inch 64GB flash SSD; however, that is also a common apples to oranges comparison mistake (more on that in a future post).

For example, copying over 6GBytes of data (5 large files of various sizes) from a 7200 RPM 2.5 inch 160GB Momentus drive in a laptop to the HHDD XT and to a flash SSD both took about 8 to 9 minutes, whereas the normal copy to a 2.5 inch 5400 RPM HDD takes at least 14 to 15 minutes, if not longer. Note that these are very rough and far from accurate or reflective comparisons; rather, a quick gauge of benefits (e.g. getting data moved faster). When I get around to it, I will do some more accurate comparisons and put them into a follow up post. However, I can already see where the XT has performance similar to the SSD, however with almost 10x the capacity, which means it could have an interesting role in supporting disk to disk (D2D) backups, which I will give a try.
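Rough copy times like those reduce to simple effective-throughput arithmetic. This Python sketch uses the approximate figures above (midpoints of the stated ranges, so indicative only, not a benchmark) to show why the difference is noticeable:

```python
def effective_mb_s(size_mb, minutes):
    """Effective copy throughput given total size and elapsed time."""
    return size_mb / (minutes * 60.0)

size_mb = 6 * 1024  # roughly 6GB of files

xt_or_ssd = effective_mb_s(size_mb, 8.5)    # ~8 to 9 minute copy
hdd_5400 = effective_mb_s(size_mb, 14.5)    # ~14 to 15 minute copy

print(f"XT or SSD copy: {xt_or_ssd:.1f} MB/s")
print(f"5400 RPM HDD  : {hdd_5400:.1f} MB/s")
print(f"Speedup       : {xt_or_ssd / hdd_5400:.2f}x")
```

Even at this coarse granularity, the XT's flash cache delivers roughly the same effective rate as the SSD at a fraction of the cost per GByte, which is the D2D backup angle mentioned above.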

Eventually I will be removing the USB connector kit and actually installing the Momentus into a computer or two (not at the same time); however, I am currently walking before running. I'm still up in the air as to whether I should install the XT into a computer with Windows XP SP3, or simply do a new install of Windows 7 on it, so I'm open to thoughts, comments, feedback or applicable suggestions (besides switching to a Macbook or iPad).

Wrap up and fun moment

In the above photo, there is the Seagate Momentus XT (ST95005620AS), a GoFlex USB 3.0 to SATA conversion attachment cable (docking device), a fortune cookie, a couple of US quarters and Canadian two dollar coins (see out and about update), paper clips and a fishing bobber on a note pad. Why the coins? To show relative size, and diversity across different geographies, as this device will be traveling (it missed out on the recent European trip to Holland).

Why the paper clips? Simple, why not, you never know when you will need one for something such as a MacGyver moment, or for pushing the tiny reset button on a device among other activities.

How about the fortune cookie? For good luck, and I might need a quick snack while having a cup of coffee; not to mention that Chinese, and Asian cuisine in general, is one of my favorites to prepare or cook, not to mention eat.

Oh, what about the fishing bobber? Why not, it was just lying around, and you could also say that I'm fishing for information to see how the device fits into normal use, or that it is there for fun, or to add color to the photo.

Oh, and the note pad? Hmm, well, if you cannot figure that one out besides it being a backdrop, let's just say that the Momentus line in general, as well as the XT specifically, are targeted at notebook, desktop, laptop or other deployment scenarios. If you still don't see the connection, ok fine, feel free to post a comment and I will happily clarify it for you.

That is all for the moment, however I will be following up with more soon.

In the meantime, enjoy your summer if in the northern hemisphere (or winter if in the south).

Take lots of photos, videos and audio recordings to fill up those USB flash thumb drives (consumer SSD), SD memory cards, computer hard drives, cloud and online web hosting sites so that you have something to remember your special out and about moments by.

Ok, nuff said.

Cheers gs
