Taking a break from industry trends and perspectives, here is a bit of technology showing how, even in the snow and cold, kids of all ages who are bold won't grow old, as they enjoy snow sled sliding in the scenic St. Croix River Valley.
Here is a video (.MPG) (and a smaller .wmv version) that I put together this afternoon of the neighbor kids and their crew enjoying the snow during the first sled runs of the year.
For those enjoying the Thanksgiving holidays in the U.S., try not to eat too much and thus avoid the sleep-too-much syndrome. For everyone else, enjoy, have fun and best wishes.
Welcome to the Fall 2010 edition of the Server and StorageIO Group (StorageIO) newsletter. This follows the August 2010 edition building on the great feedback received from recipients.
You can access this newsletter via various social media venues (some are shown below) in addition to the StorageIO web sites and subscriptions. Click on the following links to view the Fall 2010 edition as HTML or PDF, or go to the newsletter page to view previous editions.
Is vendor lock-in caused by vendors, their partners or by customers?
In my opinion, vendor lock-in can come from any or all of the above.
What is vendor lock-in?
Vendor lock-in is a situation where a customer becomes dependent on, or locked in to, a particular supplier or technology, whether by choice or other circumstances.
What is the difference between vendor lock-in, account control and stickiness?
I'm sure some marketing wiz or sales type will be happy to explain the subtle differences. Generally speaking, lock-in, stickiness and account control are essentially the same, or at least strive for similar results. For example, vendor lock-in carries a negative stigma for some. However, vendor stickiness may be a new term, perhaps even sounding cool, and thus not a concern. Remember the Mary Poppins song, a spoonful of sugar makes the medicine go down? In other words, sometimes changing to a different term such as sticky vs. vendor lock-in helps make the situation taste better.
Is vendor lock-in or stickiness a bad thing?
No, not necessarily, particularly if you, the customer, are aware and still in control of your environment.
I have had different views of vendor lock-in over the years.
These have varied from when I was a customer working in IT organizations, to being a vendor, and later an advisory analyst consultant. Even as a customer, I had different views of lock-in depending upon the situation. In some cases lock-in was the result of upper management having their favorite vendor, which meant that when a change occurred further up the ranks, sometimes the vendor lock-in would shift as well. On the other hand, I also worked in IT environments where we had multiple vendors for different technologies to maintain competition across suppliers.
As a vendor, I was involved with customer sites that were best of breed while others were aligned around a single vendor or a few vendors. Some were aligned around technologies from the vendors I worked for and others were aligned with someone else's technology. In some cases as a vendor we were locked out of an account until there was a change of management or mandates at those sites. In other cases where lockout occurred, once our product was OEMed or resold by an incumbent vendor, the lockout ended.
Some vendors do a better job of establishing lock-in, account management, account control or stickiness than others. Some vendors may try to lock a customer in, hence the perception that vendors lock customers in. Likewise, there is a perception that vendor lock-in only occurs with the largest vendors; however, I have also seen it occur with smaller or niche vendors who gain control of their customers, keeping larger or other vendors out.
Sweet, sticky Sue Bee Honey
Vendor lock-in or stickiness is not always the result of the vendor, VAR, consultant or service provider pushing a particular technology, product or service. Customers can allow or enable vendor lock-in as well, either intentionally via alliances to drive some business initiative, or accidentally by giving up account control management. Consequently vendor lock-in is not a bad thing if it brings mutual benefit to the supplier and consumer.
On the other hand, if lock-in causes hardship for the consumer while only benefiting the supplier, then it can be a bad thing for the customer.
Do some technologies lend themselves more to vendor lock-in than others?
Yes, some technologies lend themselves more to stickiness or lock-in than others. For example, big-ticket or expensive hardware is often seen as vulnerable to vendor lock-in along with other hardware items; however, software is where I have seen a lot of stickiness or lock-in.
What about virtualization solutions? After all, the golden rule of virtualization is that whoever controls the virtualization (hardware, software or services) controls the gold. This means that vendor lock-in could form around a particular hypervisor or its associated management tools.
How about bundled solutions, or what are now called integrated vendor technology stacks, including PODs (here or here) or vBlocks among others? How about databases, do they enable or facilitate vendor lock-in? Perhaps, just like virtualization, operating systems, networking technology, storage systems, data protection or other solutions: if you let the technology or vendor manage you, then you enable vendor lock-in.
Where can vendor lock-in or stickiness occur?
Application software, databases, data or information tools, messaging or collaboration, and infrastructure resource management (IRM) tools ranging from security to backup to hypervisors and operating systems to email. Let's not forget about hardware, which has become more interoperable, from servers, storage and networks to integrated marketing or alliance stacks.
Another opportunity for lock-in or stickiness can be in the form of drivers, agents or software shims, where you become hooked on a feature or functionality that then drives future decisions. In other words, lock-in can occur in different locations, in traditional IT as well as via managed services, virtualization or cloud environments, if you let it occur.
Keep these thoughts in mind:
Customers need to manage their resources and suppliers
Technology and its providers should work for you the customer, not the other way around
Technology providers, conversely, need to get closer to influence customer thinking
There can be a cost to single vendor or technology sourcing due to loss of competition
There can be a cost associated with best of breed or functioning as your own integrator
There is a cost to switching vendors and/or their technology to keep in mind
Managing your vendors or suppliers may be easier than managing your upper management
Vendor sales teams remove barriers so they can sell while setting barriers for others
Virtualization and cloud can be both a source of lock-in as well as a tool to help prevent it
As a customer, if lock-in provides benefits then it can be a good thing for all involved
Ultimately, it's up to the customer to manage their environment and thus have a say in whether they will allow vendor lock-in. Granted, upper management may be the source of the lock-in and, not surprisingly, that is where some vendors will want to focus their attention, directly or via the influence of high-level management consultants.
So while a vendor's solution may appear to be a locked-in solution, it does not become a lock-in issue or problem until a customer lets or allows it to be a lock-in or sticky situation.
What is your take on vendor lock-in? Cast your vote and see results in the following polls.
Is vendor lock-in a good or bad thing?
Who is responsible for managing vendor lock-in?
Where is the most common form of, or concern about, vendor lock-in?
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.
It's been a few months since my last post (read it here) about Hybrid Hard Disk Drives (HHDD) such as the Seagate Momentus XT that I have been using.
The Momentus XT HHDD I have been using is a 500GB 7,200RPM 2.5 inch SATA Hard Disk Drive (HDD) with 4GB of embedded FLASH (aka SSD) and 32MB of DRAM memory for buffering, hence the hybrid name.
I have been using the XT HHDD mainly for transferring large multi-GByte files between computers and for doing some disk to disk (D2D) backups while becoming more comfortable with it. While not as fast as my 64GB all-flash SSD, the XT HHDD is as fast as my 7,200RPM 160GB Momentus HDD and in some cases faster on burst reads or writes. The notion of having a 500GB HDD that was affordable to support D2D was attractive; however, the ability to get a performance boost now and then via the embedded 4GB FLASH opens many different possibilities, particularly when combined with compression.
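For anyone curious how I get a rough feel for drives during these large file transfers, nothing fancy is needed; a simple timed sequential write and read gives a ballpark MB/sec figure. Here is a minimal Python sketch along those lines (the file path and sizes are placeholders, and results will be skewed by OS caching, so treat it as a quick sanity check rather than a benchmark):

```python
import os
import time

def measure_sequential_io(path, size_mb=256, block_kb=1024):
    """Write then read a test file sequentially; return (write, read) MB/sec."""
    block = os.urandom(block_kb * 1024)
    blocks = (size_mb * 1024) // block_kb

    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())            # push the data out of the OS buffers
    write_mbs = size_mb / (time.perf_counter() - start)

    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block_kb * 1024):  # sequential read back
            pass
    read_mbs = size_mb / (time.perf_counter() - start)

    os.remove(path)                     # clean up the scratch file
    return write_mbs, read_mbs
```

Pointing this at a folder on each drive gives a quick apples-to-apples feel for sequential throughput, though a real benchmark tool is the way to go for anything serious.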
Recently I switched the role of the Momentus XT HHDD from that of being a utility drive to becoming the main disk in one of my laptops. Despite many forums or bulletin boards touting issues or problems with the Seagate Momentus XT causing system hangs or Windows Blue Screen of Death (BSoD), I continued on with the next phase of testing.
Making the switch to XT HHDD as a primary disk
I took a few precautions, including eating some of my own dog food that I routinely talk about. For example, I made sure that the Lenovo T61 where the Momentus XT was going to be installed was backed up. In addition, I synced my traveling laptop to make it the primary so that I could continue working during the conversion, not to mention having an extra copy in addition to normal on and offsite backups.
OK, let's get back to the conversion or migration from a regular HDD to the HHDD.
Once I knew I had a good backup, I used the Seagate Discwizard (e.g. Acronis based) tool for imaging the existing T61 HDD to the Momentus XT HHDD. Using Discwizard (you could use other tools as well) I configured it to initialize the HHDD which was attached via a Seagate Goflex USB to SATA cable kit as well as image or copy the contents of the T61 HDD partitions to the Momentus XT. During the several hours it took to copy and create a new bootable disk image on the HHDD I continued working on my travel or standby laptop.
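Discwizard handles its own verification; however, for those who like belt and suspenders, the same sanity check can be done by comparing checksums of the source and the copy. Here is a rough Python sketch of that idea (the paths are hypothetical placeholders; reading raw devices this way typically requires administrator rights, and the drives must be idle for the digests to be meaningful):

```python
import hashlib

def sha256_of(path, chunk_size=1024 * 1024):
    """Stream a file (or raw device) in chunks and return its SHA-256 digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            digest.update(chunk)
    return digest.hexdigest()

def verify_clone(source, clone):
    """True if source and clone contain byte-for-byte identical data."""
    return sha256_of(source) == sha256_of(clone)
```

The same helper doubles as a periodic check that D2D backup copies still match their originals.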
After the image copy was completed and verified, it was time to reboot and see how Windows (XP SP3) liked the HHDD which all seemed to be normal. There were some parts of the boot that seemed a bit faster, however not 100 percent conclusive. The next step was to shutdown the laptop and physically swap the old internal HDD with the HHDD and reboot. The subsequent boot did seem faster and programs accessing large files also seemed to run a bit faster.
Keep in mind that the HHDD is still a spinning 7,200RPM disk drive, so comparisons to a full-time SSD would be apples to oranges, as would the cost and capacity difference between those devices. However, for what I wanted to see and use, the limited 4GB of flash does seem to provide a performance boost, and if I needed full-time super fast performance, I could buy a larger capacity SSD and install it. I'm going to hold off on buying any larger capacity flash SSDs for the time being however.
Do I see HHDDs appearing in SMB, SME or enterprise storage systems anytime soon? Probably not, at least not in primary storage systems. However, perhaps in some D2D backup, archive or dedupe and VTL devices or other appliances.
Momentus XT Speed Bumps
Now, to be fair, there have been some bumps in the road!
The first couple of days were smooth sailing other than hearing the mystery chirp the HHDD makes a couple of times a day. Lo and behold, after a couple of days, just as many forums had indicated, a mystery system hang occurred (and no, not like Windows might normally do, for those Microsoft cynics). Other than the inconvenience of a reboot, no data was lost, as files being updated were saved or had been backed up, not to mention that after the reboot everything was intact anyway. So far just an inconvenience, or so I thought.
Almost 24 hours later, same thing, except this time I got to see the BSoD which, candidly, I very rarely see despite hearing stories from others. OK, this was annoying; however, as long as I did not lose any data, other than the time lost to a reboot, let's chalk this up to a learning experience and see where it goes. Now guess what, about 12 hours later, once again the system froze up, and this time I was in the middle of a document edit. This time I did lose about 8 minutes of typing that had not been auto saved (I have since changed my auto save interval from 10 minutes to 5 minutes).
With this BSoD incident, I took some notes and, using the X61s, started checking some web sites and verified that the BIOS firmware on the T61 was up to date. However, I noticed that the Seagate Momentus XT HHDD was at firmware 22 while a 23 version was available. Reading through some web sites and forums, I was on the fence about trying firmware 23 given that an even newer firmware version for the HHDD appears to be in the works. I decided to forge forward with the experiment; after all, no real data loss had occurred, and I still had the X61s, not to mention the original T61 HDD to fall back to in the worst case.
Going to the Seagate web site, I downloaded the firmware 23 install kit and ran it per their instructions, which was a breeze, and then did the reboot.
It has not been quite a week yet; however, knocking on wood, while I keep expecting to see one, no BSoD or system freezes have occurred. Having said that, and knocking on wood, I'm also making sure things are backed up, protected and ready if needed. Likewise, if I start to see a rash of BSoDs, my plan is to fall back to the original T61 HDD, bring it up to date and use it until a newer HHDD firmware version is available to resume testing.
What is next for my Seagate Momentus XT HHDD?
I'm going to wait to see if the BSoD and mystery system hangs disappear, as well as for the arrival of the new firmware, followed by some more testing. However, when I'm confident with it, the next step is to put the XT HHDD into the X61s, which is used primarily for travel purposes.
Why wait? Simple: while I can tolerate a reboot, crash, data loss or disruption while in the office given access to copies as well as standby or backup systems to work from, when traveling, options are more limited. Sure, if there is data loss, I can go to my cloud provider and rapidly recall a file or multiple ones as needed, or for critical data, recover from a portable encrypted USB device. Consequently I want more confidence in the XT HHDD before deploying it for travel mode, which is probably safe to do as of now; however, I want to see how stable it is in the office before taking it on the road.
What does this all mean?
Simple, have a backup of your data and systems
Test and verify those backups or standby systems periodically
Have a fall back plan for when trying new things
Keep productivity in mind, at some point you may have to fall back
If something is important enough to protect, have multiple copies
Be ready to eat your own dog food or what you talk about
Do not be scared, however be prepared, look before you leap
How about you, are you using an HHDD yet, and if so, what are your experiences? I am curious to hear if anyone has tried using an HHDD in their VMware lab environments yet in place of a regular HDD, or before spending a boatload of money on a similar sized SSD.
A few months ago IBM bought a Data Footprint Reduction (DFR) technology company called Storwize (read more about DFR and Storwize Real time Compression here, here, here, here and here).
A couple of weeks ago IBM renamed the Storwize real time compression technology to surprise surprise, IBM real time compression (wow, wonder how lively that market focus research group study discussion was).
Subsequently IBM recycled the Storwize name in time to be used for the V7000 launch.
Now to be clear right up front, currently the V7000 does not include real time compression capabilities, however I would look for that and other forms of DFR techniques to appear on an increasing basis in IBM products in the future.
IBM has a diverse storage portfolio with good products, some with longer legs than others to compete in the market. By long legs, I mean both technology and marketability, enabling their direct sales teams as well as partners including distributors or VARs to effectively compete with other vendors' offerings.
The enablement capability of the V7000 is to give IBM and their business partners a product that they will want to go tell and sell to customers, competing with Cisco, Dell, EMC, Fujitsu, HDS, HP, NEC, NetApp and Oracle among others.
What about XIV?
For those interested in XIV, regardless of whether you are a fan, naysayer or simply an observer, here, here and here are some related posts to view if you like (as well as comment on).
Back to the V7000
A couple of common themes about the IBM V7000 are:
It appears to be a good product based on the SVC platform with many enhancements
Branding of the Storwize acquisition as real-time compression as part of the IBM DFR portfolio
Confusion about using the Storwize name for a storage virtualization solution
Lack of Data Footprint Reduction (DFR), particularly real-time compression (aka Storwize)
Yet another IBM storage product, adding to confusion around product positioning
Common questions that I'm being asked about the IBM V7000 include, among others:
Is the V7000 based on LSI, NetApp or other third party OEM technology?
No, it is based on the IBM SVC code base along with an XIV like GUI and features from other IBM products.
Is the V7000 based on XIV?
No, as mentioned above, the V7000 is based on the IBM SVC code base along with an XIV like GUI and features from other IBM products.
Does the V7000 have DFR such as dedupe or compression?
No, not at this time other than what was previously available with the SVC.
Does this mean there will be a change or defocusing on or of other IBM storage products?
IMHO I do not think so, other than perhaps around XIV. If anything, I would expect IBM to start pushing the V7000 as well as the entire storage product portfolio more aggressively. Now, there could be some defocusing on XIV or, put a different way, putting all products on the same equal footing and letting the customer determine what they want based on effective solution selling from IBM and their business partners.
What does this mean for XIV, is that product no longer the featured or marquee product?
IMHO XIV remains relevant for the time being. However, I also expect it to be put on equal footing with other IBM products or, if you prefer, other IBM products, particularly the V7000, to be unleashed to compete with other external vendors' solutions such as those from Cisco, Dell, EMC, Fujitsu, HDS, HP, NEC, NetApp and Oracle among others. Read more here, here and here about XIV remaining relevant.
Why would I not just buy an SVC and add storage to it?
That is an option and a strength of the SVC: sitting in front of different IBM storage products as well as those of third party competitors. However, with the V7000, customers now have a turnkey storage solution instead of a virtualization appliance.
Is this a reaction to EMC VPLEX, HDS VSP, HP SVSP or 3PAR, Oracle/Sun 7000?
Perhaps it is, perhaps it is a reaction to XIV, and perhaps it is a realization that IBM has a lot of IP that could be combined into a solution to respond to a market need among many other scenarios. However, IBM has had a virtualization platform with a decent installed base in the form of SVC which happens to be at the heart of the V7000.
Does this mean IBM is jumping on the bandwagon of using off-the-shelf servers instead of purpose-built hardware for storage systems, like Oracle, HP and others are doing?
If you are new to storage or IBM it might appear that way; however, IBM has been shipping storage systems based on general purpose servers for a couple of decades now. Granted, some of those products are based on the IBM Power PC (e.g. Power platform) also used in their pSeries, formerly known as the RS6000s. For example, the DS8000 series, similar to its predecessors the ESS (aka Shark) and VSS before that, has been based on the Power platform. Likewise, SVC has been based on general purpose processors since its inception.
Likewise, while generally deployed only in two node pairs, the DS8000 is architected to scale into many more nodes than what has been shipped, meaning that IBM has had clustered storage for some time; granted, some of their competitors will dispute that.
How does the V7000 stack up from a performance standpoint?
Interestingly, IBM has traditionally been very good at, if not out front, running public benchmarks and workload simulations ranging from SPC to TPC to SPEC to Microsoft ESRP among others for all of their storage systems except one (e.g. XIV). True to traditional IBM systems and storage practices, just a couple of weeks after the V7000 launch, IBM released the first wave of performance comparisons including SPC for the V7000, which can be seen here to compare with others.
What do I think of the V7000?
Like other products both in the IBM storage portfolio and from other vendors, the V7000 has its place, and in that place, which needs to be further articulated by IBM, it has a bright future. I think that for many environments, particularly those that were looking at XIV, the V7000 will be a good IBM based solution as well as a competitor to other solutions from Dell, EMC, HDS, HP, NetApp and Oracle, as well as some smaller startup providers.
Comments, thoughts and perspectives:
IBM is part of a growing industry trend realizing that the data footprint reduction (DFR) focus should expand in scope beyond backup and dedupe to span an entire organization using many different tools, techniques and best practices. These include archiving of databases, email and file systems for both compliance and non-compliance purposes, backup/restore modernization or redesign, and compression (real-time for online, plus post processing). In addition, DFR includes consolidation of storage capacity and performance (e.g. fast 15K SAS, caching or SSD), data management (including some data deletion where practical), data dedupe, space saving snapshots such as copy on write or redirect on write, and thin provisioning, as well as virtualization for both consolidation and enabling agility.
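To make the compression piece of DFR concrete, a quick sketch using Python's zlib shows why the achievable reduction depends heavily on the data itself: repetitive content such as logs or database pages reduces well, while random-like or already compressed data barely reduces at all. This is purely an illustration of the principle, not any vendor's compression engine:

```python
import os
import zlib

def compression_ratio(data: bytes, level: int = 6) -> float:
    """Return original size / compressed size (higher means more reduction)."""
    return len(data) / len(zlib.compress(data, level))

# Repetitive content (think logs or database pages) reduces very well...
repetitive = b"the same record over and over " * 1000

# ...while random-like or already-compressed data does not reduce at all.
random_like = os.urandom(30000)
```

On data like the repetitive sample above the ratio comes out well over 10:1, while the random-like sample hovers around 1:1, which is why vendors talk about effective capacity in ranges rather than a single number.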
IBM has some great products; however, with such a diverse product portfolio, better navigation and messaging of what to use when, where and why is too often needed, not to mention the confusion over the current product du jour.
As has been the case for the past couple of years, let's see how this all plays out in a year or so. Meanwhile, cast your vote or see the results of others as to whether XIV remains relevant. Likewise, join in on the new poll below as to whether the V7000 is now relevant or not.
Note: As with the ongoing is XIV relevant polling (above), for the new is the V7000 relevant polling (below) you are free to vote early, vote often, vote for those who cannot or that care not to vote.
Here are some links to read more about this and related topics:
Over the past couple of years I have routinely been asked what I think of XIV by fans as well as foes, in addition to many curious or neutral onlookers including XIV competitors, other analysts, media, bloggers, consultants as well as IBM customers, prospects, VARs and business partners. Consequently I have done some blog posts about my thoughts and perspectives.
It's time again for what has turned out to be the third annual perspective or thoughts around IBM XIV and whether it is still relevant as a result of the recent IBM V7000 (excuse me, I meant to say IBM Storwize V7000) storage system launch.
In a nutshell, the V7000 is a new storage system with built-in storage virtualization, or virtual storage if you prefer, that leverages IBM developed software from its SAN Volume Controller (SVC), DS8000 enterprise system and others.
Unlike the SVC which is a gateway or appliance head that virtualizes various IBM and third party storage systems providing data movement, migration, copy, replication, snapshot and other agility or abstraction capabilities, the V7000 is a turnkey integrated solution.
By being a turnkey solution, the V7000 combines the functionality of the SVC as a basis for adding other IBM technologies including a GUI management tool similar to that found on XIV along with dedicated attached storage (e.g. SAS disk drives including fast, high capacity as well as SSD).
In other words, for those customers or prospects who liked XIV because of its management GUI interface, you may like the V7000.
For those who liked the functional capabilities of the SVC but needed it to be a turnkey solution, you might like the V7000.
For those of you who did not like or competed with the SVC in the past, well, you know what to do.
BTW, for those who knew of Storwize, the Data Footprint Reduction (DFR) vendor with real-time compression that IBM recently acquired and renamed IBM Real-time Compression, the V7000 does not contain any real-time compression (yet).
What are my thoughts and perspectives?
In addition to the comments in the companion post found here, right now I'm of the mindset that XIV does not fade away quietly into the sunset or take a timeout at the IBM technology rest and recuperation resort located on the beautiful Someday Isle.
The reason I think XIV will remain somewhat relevant for some time (time to be determined, of course) is that IBM has expended significant resources over the past two and a half years to promote it. Those resources have included marketing time and messaging space, in some instances perhaps inadvertently at the expense of other IBM storage solutions. Similarly, a lot of time, money and effort have gone into business partner outreach to establish and keep XIV relevant with those communities, who in turn have gone to their customers to tell and sell the XIV story, and some customers have bought it.
Consequently or as a result of all of that investment, I would be surprised if IBM were simply to walk away from XIV at least near term.
What I do see happening, including some early indicators, is that the V7000 (along with other IBM products) will now be getting equal billing, resources and promotional support. Whether this means the XIV division is finally being assimilated into the mainstream IBM fold and put on equal footing with other IBM products, or that other IBM products are being brought up to the elevated position of XIV, is subject to interpretation and your own perception.
I expect to continue to see IBM teams, and subsequently their distributors, VARs and other business partners, get more excited talking about the V7000 along with other IBM solutions. For example, SONAS for bulk, clustered and scale out NAS, DS8000 for the high end, GMAS and Information Archive platforms as well as the N series and DS3K/DS4K/DS5K, not to mention the TS/TL backup and archive target platforms along with associated Tivoli software. Also, let's not forget about SVC among other IBM solutions including, of course, XIV.
I would also not be surprised if some of the diehard XIV loyalists (e.g. sales and marketing reps who were faithful members of Moshe Yanai's army, who appears to be MIA at IBM) pack up their bags and leave the IBM storage SANdbox in virtual protest. That is, refusing to be assimilated into the general IBM storage pool and thus leaving for greener IT pastures elsewhere. Some will stick around, discovering the opportunities associated with selling a broader, more diverse product portfolio into their target accounts where they have spent time and resources to establish relationships or get their proverbial foot in the door.
Consequently, I think XIV remains somewhat relevant for now given all of the resources that IBM poured into it and relationships that their partner ecosystem also spent on establishing with the installed customer base.
However, I do think that the V7000, despite some confusion (here and here) around its recycled Storwize name, is built around the field proven SVC and other IBM technology and has some legs. Those legs are both from a technology standpoint as well as a means to get the entire IBM systems and storage group energized to go out and compete with their primary nemeses (e.g. Dell, EMC, HP, HDS, NetApp and Oracle among others).
As has been the case for the past couple of years, let's see how this all plays out in a year or so. Meanwhile, cast your vote or see the results of others as to whether XIV remains relevant. Likewise, join in on the new poll below as to whether the V7000 is now relevant or not.
Note: As with the ongoing is XIV relevant polling (above), for the new is the V7000 relevant polling (below) you are free to vote early, vote often, vote for those who cannot or that care not to vote.
Here are some links to read more about this and related topics:
Have you heard or read the reports and speculation that VTLs (Virtual Tape Libraries) are dead?
It seems that in IT the all-too-popular trend is to declare something dead so that your new product or technology can have a chance of making it into the market, or perhaps be seen in a better light.
Sometimes this approach works to temporarily freeze the market until common sense and clarity return, or until something else fun to talk about comes along; in other cases, the messages can fall on deaf ears.
The approach of declaring something dead tends to play well for those who like shiny new toys (SNT) or new shiny toys (NST) and being on the popular, cool trendy bandwagon.
Not surprisingly, while some actual IT customers can fall into the SNT or NST syndrome, it's often the broader industry, including media, bloggers, analysts, consultants and other self proclaimed or anointed pundits as well as vendors, who latch on to the declare-it-dead movement. After all, who wants to talk about something that is old, boring and already being sold to paying customers who are using it? Now this is not a bad thing, as we need a balance of up and coming challengers to keep the status quo challenged; likewise we need a balance of the new to avoid death grips on the old and what is working.
Likewise, many IT customers, particularly larger ones, tend to be very risk averse and conservative with their budgets, protecting their investments; thus they may only go leading bleeding edge if there is a dual redundant blood bank with a backup on hot standby (that's some HA humor BTW).
Another reason for declaring items dead in support of SNTs and NSTs is that while many of the commonly declared dead items are on the proverbial plateau of productivity for IT customers, that also means they are on the plateau of profitability for the vendors.
However, not all good things last, and at some point there is the need to transition from the old to the new. This is where things like virtualization, including virtual tape libraries or virtual disk libraries or virtual storage libraries, or whatever you want to call a VxL (more on what a VxL is in a moment), can come into play.
I realize that for some, particularly those who like to grasp on to SNT and NST and ride the dead pool bandwagons, this will probably appear snarky or cynical, which is fine; after all, some of you should be laughing all the way to the bank, and if not, you may in fact be missing out on an opportunity to play in the dead pool marketing game.
Now back to VxL.
In the case of VTLs, for some it is the T word that bothers them, you know, T as in Tape, which is neither an SNT nor an NST in an age where SSD has supposedly killed the disk drive, which allegedly terminated tape (yeah right). Sure, tape is not being used as much for backup as it has been in the past, with its role shifting to that of longer term retention, something it is well suited for.
For tape fans (or cynics) you can read more here, here and here. However, there is still a large amount of backup/restore along with other data protection or preservation (e.g. archiving) processing (software tools, processes, procedures, skill sets, management tools) that expects to see tape.
Hence this is where VTLs or VxLs come into play, leveraging virtualization in a Life Beyond Consolidation (and here) scenario, providing abstraction, transparency, agility and emulation, and IMHO they are still very much alive and evolving.
Ok, for those who do not like tape or do not believe in its continued existence and evolving role, substitute the T (tape) with X and you get a VxL. That is, plug in whatever X word makes you happy, or marketable, or a Shiny New TLA. For example: Virtual Disk Library, Virtual Storage Library, Virtual Backup Library, Virtual Compression Library, Virtual Dedupe Library, Virtual ILM Library, Virtual Archive Library, Virtual Cloud Library and so forth. Granted, some VxLs only emulate tape and hence are VTLs, while others support NAS and other protocols (or personalities), not to mention functionality ranging from replication to DFR as well as automated policy management.
However, whether your preference is VTL, VxL or whatever other buzzword bingo name you want to use or come up with, keep in mind how virtualization in the form of abstraction, transparency and emulation can bridge the gap between the new (disk based data protection combined with DFR, or Data Footprint Reduction) and the old (existing backup/restore, archive or other management tools and processes).
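To illustrate that abstraction and emulation idea, here is a minimal, hypothetical sketch (the class and method names are made up for illustration, not taken from any actual VTL product) of presenting a tape-like interface while actually writing to disk:

```python
import os


class VirtualTapeLibrary:
    """Presents a tape-style load/write/eject interface while storing
    data as plain files on disk, so backup processes that expect tape
    can keep working unchanged against a disk (or dedupe) back end."""

    def __init__(self, path: str):
        self.path = path
        self.cartridge = None
        os.makedirs(path, exist_ok=True)

    def load(self, barcode: str) -> None:
        # "Mount" a virtual cartridge; really just open a disk file.
        self.cartridge = open(os.path.join(self.path, barcode), "ab")

    def write(self, block: bytes) -> None:
        self.cartridge.write(block)

    def eject(self) -> None:
        self.cartridge.close()
        self.cartridge = None


vtl = VirtualTapeLibrary("/tmp/vxl_demo")
vtl.load("A00001")
vtl.write(b"backup stream data")
vtl.eject()
```

The point is not the code itself but the pattern: the caller sees tape semantics, while the back end is free to be disk, dedupe or cloud.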
Here are some additional links pertaining to VTLs (excuse me, VxLs):
Virtual tape libraries: Old backup technology holdover or gateway to the future?
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
This is part of an ongoing series of short industry trends and perspectives (ITP) blog post briefs based on what I am seeing and hearing in my conversations with IT professionals on a global basis.
These short posts complement other longer posts along with traditional industry trends and perspective white papers, research reports, videos, podcasts, webcasts as well as solution brief content found at www.storageioblog.com/reports and www.storageio.com/articles.
If you recall from previous posts including here, here or here among others, Data Footprint Reduction (DFR) is a collection of tools, technologies and best practices for addressing growing data storage management and cost impacts.
DFR encompasses many different tools, techniques and technologies across various applications ranging from active or primary storage to secondary and inactive along with backup and archive.
Some of the tools, techniques and technologies include archiving, backup modernization, compression, data management, dedupe, space saving snapshots and thin provisioning among others.
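As a rough, hypothetical illustration of how two of those techniques combine, here is a minimal sketch of fixed-block dedupe followed by compression of the unique chunks (the 4 KB chunk size and the sample data are arbitrary example values, not a recommendation):

```python
import hashlib
import zlib


def data_footprint(data: bytes, chunk_size: int = 4096):
    """Return (original_size, reduced_size) after simple fixed-block
    dedupe (keep one copy of each unique chunk) plus compression."""
    unique = {}
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        unique[hashlib.sha256(chunk).digest()] = chunk
    reduced = sum(len(zlib.compress(c)) for c in unique.values())
    return len(data), reduced


# Repetitive data (e.g. recurring backups) reduces well; unique or
# already compressed data reduces far less.
original, reduced = data_footprint(b"backup block " * 10000)
print(f"reduction ratio {original / reduced:.1f}:1")
```

Real products are of course far more sophisticated (variable chunking, inline vs. post-process, global indexes), but the ratio arithmetic is the same.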
Following are some links to various articles and commentary pertaining to DFR:
Using DFR including dedupe and compression to defray storage and management costs
Deduplicate, compress and defray costs of data storage management
Virtual tape libraries: Old backup technology holdover or gateway to the future?
Do you have a web, internet, backup or other IT cloud service provider of some type?
Do you pay for it, or is it a free service?
Do you take your service provider for granted?
Does your service provider take you or your data for granted?
Does your provider offer some form of service level objectives (SLO)?
For example, Recovery Time Objectives (RTO), Recovery Point Objectives (RPO), Quality of Service (QoS) or, if a backup service, alternate forms of recovery among others?
So what happens when there is a service disruption, do you threaten to leave the provider and if so, how much does that (or would it) cost you to move?
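As a quick illustration of the RPO question above, here is a minimal sketch (the timestamps and the six hour RPO are made-up example values) of checking whether your newest recovery point still satisfies an agreed objective:

```python
from datetime import datetime, timedelta


def rpo_met(last_recovery_point: datetime, rpo: timedelta,
            now: datetime) -> bool:
    """True if a failure right now would lose no more data than the
    Recovery Point Objective allows."""
    return now - last_recovery_point <= rpo


now = datetime(2010, 11, 1, 12, 0)
# Backup from 4 hours ago against a 6 hour RPO: objective met.
print(rpo_met(datetime(2010, 11, 1, 8, 0), timedelta(hours=6), now))
# Backup from 10 hours ago against the same RPO: objective missed.
print(rpo_met(datetime(2010, 11, 1, 2, 0), timedelta(hours=6), now))
```

The same one-line comparison is what any monitoring of a provider's SLO boils down to, whether you run it yourself or trust them to.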
A couple of weeks ago I was on a Delta Airlines flight from LAX to MSP returning from a west coast speaking engagement event.
During the late evening three-hour flight, I used the Gogo inflight wifi service to get caught up on some emails, blog items and other work, in addition to doing a few Twitter tweets while flying high over the real clouds from my virtual office.
During that time, I saw a tweet from Devang Panchigar (@storageNerve) commenting that his hosting service provider Bluehost was down or offline. This caught my attention as Bluehost is also my service provider, and a quick check verified that my sites and services were still working. I subsequently sent a tweet to Devang indicating that Bluehost, or at least my sites and services, appeared to still be functioning, or at least for the time being, as I was about to find out. Long story short, about 20 to 25 minutes later I noticed that I could no longer get to any of my sites; lo and behold, my Bluehost services were also now offline.
Overall, I have been pleased with Bluehost as a service provider, including finding their call support staff very accommodating and easy to work with when I have questions or need something taken care of. Normally I would have simply called Bluehost to see what was going on; however, being at about 38,000 feet above the clouds, a quick conversation was not going to be possible. Instead, I checked some forums that revealed Bluehost was experiencing electrical power issues with their data center (I believe in Utah). Looking at some of the forums as well as various Twitter comments, I also decided to check whether Bluehost CEO Matt Heaton's blog was functioning (it was).
It would have been too easy to do one of those irate customer type posts telling them how bad they were, how I was dropping them like a hot potato, and then doing a blog post telling everyone to never use them again, along the lines of those that are far too common and often get deleted as spam.
Instead, I took a different approach (you could have read it here, however I just checked and it has been deleted). My comment on Matt's blog post took a week or so to be moderated (and has since been deleted). Essentially, rather than going off on the usual customer tirade, my post took the opposite approach, commenting on how ironic it was that a hosting service for my web site, which contains content about resilient data infrastructure themes, was offline.
Now I realize that I am not paying for a high-end, no-downtime, always-available hosting service; however, I also realize that I am paying for a more premium package vs. a basic subscription or even a free service. While I was not happy about the one hour of downtime around midnight, it was comforting to know that no data was lost and my sites were only offline for a short period of time.
I hope Bluehost continues to improve on their services to stay out of the news for a major disruption as well as to minimize or eliminate downtime for their fee-based services.
I also hope that Bluehost CEO Matt Heaton continues to listen to what his customers have to say while improving his services to keep us as customers instead of taking us for granted as some providers or vendors do.
Thanks again to Devang for the tip that there was a service disruption, after all, sometimes we take services for granted and in other situations some service providers take their customers for granted.
A new StorageIO Industry Trends and Perspective (ITP) white paper titled “End to End (E2E) Systems Resource Analysis (SRA) for Cloud, Virtual and Abstracted Environments” is now available at www.storageioblog.com/reports compliments of SANpulse technologies.
Abstract: Many organizations are in the planning phase or already executing initiatives moving their IT applications and data to abstracted, cloud (public or private), virtualized or other forms of efficient, effective dynamic operating environments. Others are in the process of exploring where, when, why and how to use various forms of abstraction techniques and technologies to address various issues. Issues include opportunities to leverage virtualization and abstraction techniques that enable IT agility, flexibility, resiliency and scalability in a cost effective yet productive manner.
An important need when moving to a cloud or virtualized dynamic environment is to have situational awareness of IT resources. This means having insight into how IT resources are being deployed to support business applications and to meet service objectives in a cost effective manner.
Awareness of IT resource usage provides insight necessary for both tactical and strategic planning as well as decision making. Effective management requires insight into not only what resources are at hand but also how they are being used to decide where different applications and data should be placed to effectively meet business requirements.
Learn more about the importance and opportunities associated with gaining situational awareness using E2E SRA for virtual, cloud and abstracted environments in this StorageIO Industry Trends and Perspective (ITP) white paper compliments of SANpulse technologies by clicking here.
Today HDS announced, with much fanfare from what must have been a million dollar launch budget, the VSP (successor to the previous USPV and USPVM).
I'm also thinking that the HDS VSP (not to be confused with the HP SVSP that HP OEMs via LSI) could also be called the HDS MVSP.
Now if you are part of the HDS SAN, LAN, MAN, WAN or FAN bandwagon, MVSP could mean Most Valuable Storage Platform or Most Virtualized Storage Product. MVSP might be also called More Virtualized Storage Products by others.
Yet OTOH, MVSP could be More Virtual Story Points (e.g. talking points) for HDS building upon and when comparing to their previous products.
For example among others:
More cache to drive cash movement (e.g. cash velocity or revenue)
More claims and counter claims of industry uniques or firsts
More cloud material or discussion topics
More cross points
More data mobility
More density
More FUD and MUD throwing by competitors
More functionality
More packets of information to move, manage and store
More pages in the media
More partitioning of resources
More partners to sell through or to
More PBytes
More performance and bandwidth
More platforms virtualized
More platters
More points of resiliency
More ports to connect to or through
More posts from bloggers
More power management, Eco and Green talking points
More press releases
More processors
More products to sell
More profits to be made
More protocols (Fibre Channel, FICON, FCoE, NAS) supported
More pundit praises
More SAS, SATA and SSD (flash drives) devices supported
More scale up, scale out, and scale within
More security
More single (virtual and physical) pane of glass managements
More software to sell and be licensed by customers
More use of virtualization, 3D and other TLAs
More videos to watch or be stored
I'm sure more points can be thought of; however, that is a good start for now, including some to have a bit of fun with.
Read more about HDS new announcement here, here, here and here:
Data Footprint Reduction (DFR) is a collection of techniques, technologies, tools and best practices that are used to address data growth management challenges. Dedupe is currently the industry darling for DFR particularly in the scope or context of backup or other repetitive data.
However, DFR addresses the expanding scope of data footprints and their impact across primary and secondary along with offline data, ranging from high performance to inactive high capacity.
Consequently the focus of DFR is not just on reduction ratios; it is also about meeting time or performance rates and data protection windows.
This means DFR is about using the right tool for the task at hand to effectively meet business needs, and cost objectives while meeting service requirements across all applications.
Examples of DFR technologies include Archiving, Compression, Dedupe, Data Management and Thin Provisioning among others.
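To put the ratios vs. rates point in perspective, here is a hypothetical back of the envelope sketch (the 50 TB, 10:1 and 500 MB/sec figures are made-up example values; it assumes reduction happens before the data moves, so only the reduced footprint travels at the stated rate):

```python
def backup_window_hours(data_tb: float, reduction_ratio: float,
                        effective_mb_per_sec: float) -> float:
    """Hours to move a data set to a DFR-enabled target, assuming
    source-side reduction so only the reduced footprint travels at
    the given effective rate (1 TB treated as 1,000,000 MB)."""
    reduced_mb = data_tb * 1_000_000 / reduction_ratio
    return reduced_mb / effective_mb_per_sec / 3600


# A 10:1 reduction turns 50 TB into 5 TB of movement; at 500 MB/sec
# that fits inside roughly a 3 hour protection window.
print(f"{backup_window_hours(50, 10, 500):.1f} hours")
```

Swap in your own ratios and rates: a great reduction ratio that cannot sustain the required rate still blows the protection window, which is exactly the right tool for the task point above.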
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.
Has FCoE (Fibre Channel over Ethernet) entered the trough of disillusionment?
IMHO Yes and that is not a bad thing if you like FCoE (which I do among other technologies).
The reason I think that it is good that FCoE is in or entering the trough is not that I do not believe in FCoE. Instead, the reason is that most if not all technologies that are more than a passing fad often go through a hype and early adopter phase before taking a breather prior to broader longer term adoption.
Sure there are FCoE solutions available including switches, CNAs and even storage systems from various vendors. However, FCoE is still very much in its infancy and maturing.
Based on conversations with IT customer professionals (e.g. those who are not vendors, VARs, consultants, media or analysts) and hearing their plans, I believe that FCoE has entered the proverbial trough of disillusionment, which is a good thing in that FCoE is also ramping up for deployment.
Another common question that comes up regarding FCoE, as well as other I/O networking interfaces, transports and protocols, is whether they are temporal (temporary, short life span) technologies.
Perhaps, in the scope that all technologies are temporary; however, it is their temporal timeframe that should be of interest. Given that FCoE will probably have at least a ten to fifteen year temporal timeline, I would say in technology terms it has a relatively long life for supporting coexistence on the continued road to convergence, which appears to be around Ethernet.
That is where I feel FCoE is at currently, taking a break from the initial hype, maturing while IT organizations begin planning for its future deployment.
I see FCoE as having a bright future coexisting with other complementary and enabling technologies such as I/O Virtualization (IOV) including PCI SIG MR-IOV, Converged Networking, iSCSI, SAS and NAS among others.
Keep in mind that FCoE does not have to be seen as competitive to iSCSI or NAS, as they all can coexist on a common DCB/CEE/DCE environment, enabling the best of all worlds not to mention choice. FCoE along with DCB/CEE/DCE provides IT professionals with choice options (e.g. tiered I/O and networking) to align the applicable technology to the task at hand for physical or virtual environments.
Again, the question pertaining to FCoE for many organizations, particularly those not going to iSCSI or NAS for all or part of their needs, should be when, where and how to deploy.
This means that for those with long lead time planning and deployment cycles, now is the time to put your strategy into place for what you will be doing over the next couple of years, if not sooner.
For those interested, here is a link (may require registration) to a good conversation taking place over on IT Toolbox regarding FCoE and other related themes that may be of interest.
Here are some links to additional related material:
This past week I spent a few days in San Francisco attending the VMworld 2010 event which included a Wednesday evening concert with the Australian band INXS.
Despite some long lines (or queues) waiting to get into sessions, keynotes or lunch resulting in delays reminiscent of trying to put too many virtual machines (VMs) onto a given number of physical machines (PMs) in the quest to drive up utilization, the overall event was fantastic.
While at the event, I had a chance to meet up with fellow vExpert Eric Siebert, whose new book Maximum vSphere made its debut. I was honored when asked by Eric to help out with his chapter on storage; learn more about Eric's new book here.
Big thanks to @rogerlund for organizing a very impromptu ad hoc lunch discussion with a couple of other IT pros representing very different as well as diverse spectrums of public, private, small, large and ultra large environments. I was only at the event for two days, and thus there were many others that I was looking for at their booths, in the hallways (I saw @ekhnaser among others that I could not call out to in time), in the meeting rooms as well as in the lunch hall. I look forward to seeing you all at some future event or venue.
On the food scene, while I did not have a chance to dine at one of my local favorites, Brandy Ho's, I did have a fantastic lunch at Henry's House of Pain (aka Henry's House of Hunan on Sansome). I also had a great outdoor dinner at the alleyway based Cafe Tiramisu, where I enjoyed their signature dish: essentially a fruits de mer (fruits of the sea) over linguine, covered with a thin pizza crust and baked. It was fantastic and brings a whole new dimension to the theme of a classic pot pie meets fruits de mer; give it a try!
On an even lighter or fun note, following are photos and links to some videos of the INXS event courtesy of Karen (aka Mrs Schulz). In addition to being an award winning photographer, Karen's day time job is that of an applications development analyst (e.g. an IT Geekette) at a large Minnesota based mining and manufacturing company that is also involved in many different sticky and abrasive among other products.
Karen (Photo Courtesy Karen Schulz)
Karen took the following photos (and videos) with her Canon PowerShot S5 digital camera.
Me heading to INXS show at VMworld 2010 (Photo Courtesy Karen Schulz)
Me sitting in the middle of the virtual highway (Photo Courtesy Karen Schulz)
INXS at VMworld 2010 (Photo Courtesy Karen Schulz)
JD Fortune of INXS at VMworld (Photo Courtesy Karen Schulz)
Kirk Pengilly and JD Fortune of INXS at VMworld 2010 (Photo Courtesy Karen Schulz)
Tim Farriss of INXS (Photo Courtesy Karen Schulz)
Here are links to some videos that Karen captured from up front near the stage during the INXS show at VMworld 2010.
Devil Inside (not to be confused with the devil is in the details of clouds, virtualization and other IT topics)
By My Side (Where a vendor or solution partner should be during and after the sale for their customers)
Disappear (What should not happen to your data or virtual machines in physical, virtual or cloud environments)
Never Tear Us Apart (What should not happen between your servers, storage, applications and data)
Need You Tonight (The call that many system admins get during their off hours)
New Sensation (What many are experiencing with virtualization and clouds)
Don't Change (Ironic final encore song of a concert at a conference with a theme of change)
A big tip of the hat along with thanks goes out to John Troyer of VMware as well as Sarah Shvil of the VMware Analyst Relations team for helping make it possible for me to attend as an independent IT industry analyst instead of on the coat tails of a vendor's exhibit hall pass (disclosure: I paid for my own travel, lodging and dining expenses).
Me hitching a ride on the virtual highway to the clouds and VMworld (Photo Courtesy Karen Schulz)
Hopefully with some luck, I will be able to hitch a ride and attend VMworld again next year in Las Vegas, perhaps even as a repeat vExpert as well as IT Industry Analyst.