StorageIO V20.11 (2011) events, seminars and webcasts schedule

The V20.11 (e.g. 2011, following on from V20.10) Server and StorageIO (StorageIO) out and about events schedule continues to evolve.

In the meantime, here are a few (actually a couple dozen) seminars and webcasts currently on the event calendar for 2011 that I will be speaking or presenting at. Topics and themes include Server and Storage Optimization, Clouds, Virtualization, Data Protection Modernization (HA, BC, DR, Backup/restore) along with Data Footprint Reduction (DFR including archive, compression, dedupe), End to End (E2E) Management, and efficient IT data centers (and storage) among other related items.

Later this summer watch for the release of my new book Cloud and Virtual Data Storage Networking (CRC), and keep an eye on the StorageIO events page for additional events or details to appear. Also check out the news page for commentary on industry activities, announcements, trends or related topics, in addition to the tips (or articles) page. You can also view videos, webinars and podcasts along with newsletters containing links from 2011 out and about activities (or from past events).

When – Event – Location

Nov 10, 2011 – Data Center Summit, BC and DR track keynote: Protect, preserve and serve your organization's essential applications and information services in an affordable manner – Los Angeles, CA
Nov 8, 2011 – Data Center Summit, BC and DR track keynote: Protect, preserve and serve your organization's essential applications and information services in an affordable manner – Seattle, WA
Nov 3, 2011 – Data Center Summit, BC and DR track keynote: Protect, preserve and serve your organization's essential applications and information services in an affordable manner – Denver, CO
Nov 1, 2011 – Data Center Summit, BC and DR track keynote: Protect, preserve and serve your organization's essential applications and information services in an affordable manner – Chicago, IL
Sept 29, 2011 – Data Center Summit, BC and DR track keynote: Protect, preserve and serve your organization's essential applications and information services in an affordable manner – Minneapolis, MN
Aug 4, 2011 – Data Center Summit, BC and DR track keynote: Protect, preserve and serve your organization's essential applications and information services in an affordable manner – San Francisco, CA
Jul 28, 2011 – Data Center Summit, BC and DR track keynote: Protect, preserve and serve your organization's essential applications and information services in an affordable manner – Houston, TX
Jul 21, 2011 – Event keynote speaker: Data Center Summit: Virtualization, Business Continuity and Cloud Computing – Raleigh, NC
Jun 28, 2011 – Data Center Summit, BC and DR track keynote: Protect, preserve and serve your organization's essential applications and information services in an affordable manner – Boston, MA
June 23, 2011 – Data Center Summit, BC and DR track keynote: Protect, preserve and serve your organization's essential applications and information services in an affordable manner – New York City
June 21, 2011 – Event keynote speaker: Data Center Summit: Virtualization, Business Continuity and Cloud Computing – Tampa, FL
May 18, 2011 – Keynote: 2011 Virtualization Best Practices – Irvine, CA
May 12, 2011 – Keynote: 2011 Virtualization Best Practices – Chicago, IL
May 10, 2011 – Keynote: 2011 Virtualization Best Practices – Dallas, TX
May 5, 2011 – Keynote: 2011 Virtualization Best Practices – New York, NY
May 3, 2011 – Keynote: 2011 Virtualization Best Practices – Boston, MA
Apr 28, 2011 – Data Center Summit, BC and DR track keynote: Protect, preserve and serve your organization's essential applications and information services in an affordable manner – Dallas, TX
Apr 12, 2011 – Event keynote speaker: Data Center Summit: Virtualization, Business Continuity and Cloud Computing – St. Louis, MO
Mar 29, 2011 – Webcast: Cloud and Virtual BC/DR
Mar 24, 2011 – Webcast: Tape's Evolving Data Storage Role
Mar 15, 2011 – Wildfire Grille – Keynote: Virtualization, storage and the enterprise cloud – Eden Prairie, MN
Feb 10, 2011 – Guest participant: Enabling safe and secure SaaS – On demand eSeminar
Jan 31, 2011 – Century College – Cloud and Virtual Data Storage Networking: Industry Trends – Mahtomedi, MN
Jan 12, 2011 – Presenter: E2E Awareness and insight for cloud, virtualized and legacy IT environments – Virtual event (see more here, including viewing the webcast and more information)

Watch here for more event updates and information, and sign up for the free StorageIO newsletter here.

Nuff said for now. I look forward to seeing as well as hearing from you while out and about during 2011.

Cheers gs

Greg Schulz – Author The Green and Virtual Data Center (CRC), Resilient Storage Networks (Elsevier) and coming summer 2011 Cloud and Virtual Data Storage Networking (CRC)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

NetApp buying LSIs Engenio Storage Business Unit

Storage I/O trends

This has been a busy week, as on Monday Western Digital (WD) announced that they were buying the disk drive business from Hitachi Ltd. (e.g. HGST) for about $4.3 billion USD. The deal includes about $3.5B in cash and 25 million WD common shares (e.g. $750M USD), which will give Hitachi Ltd. about ten (10) percent ownership in WD along with adding two Hitachi representatives to the WD board of directors. WD now moves into the number one hard disk drive (HDD) spot above Seagate (note Hitachi is not selling HDS), in addition to gaining competitive positioning in both the enterprise HDD and emerging SSD markets.

Today NetApp announced that they have agreed to purchase portions of the LSI storage business known as Engenio for $480M USD.

The business and technology that LSI is selling to NetApp (aka Engenio) is the external storage system business that accounted for about $705M of their approximately $900M+ storage business in 2010. This piece of the business represents external (outside of the server) shared RAID storage systems that support Serial Attached SCSI (SAS), iSCSI, Fibre Channel (FC) and emerging FCoE (Fibre Channel over Ethernet) with SSD, SAS and FC high performance HDDs as well as high capacity HDDs. NetApp has block storage, however their strong suit (sorry NetApp guys) is file, while Engenio's strong suit is block storage that attaches to gateways from NetApp as well as others, in addition to servers for scale out NAS and cloud.

What NetApp is getting from LSI is the business that sells storage systems or their components to OEMs including Dell, IBM (here and here), Oracle, SGI and Teradata (a former NCR spin off) among others.

What LSI is retaining are their custom storage silicon, ICs, PCI RAID adapter and host bus adapter (HBA) cards including MegaRAID and 3ware, along with SAS chips, SAS switches, a PCIe SSD card and the Onstor NAS product they acquired about a year ago. Other parts of the LSI business, which make chips for storage, networking and communications vendors, are also not affected by this deal.

In other words, the sign in front of the Wichita LSI facility that used to say NCR will now probably include a NetApp logo once the deal closes.

For those not familiar, Tom Georgens, current CEO of NetApp, is very familiar with Engenio and LSI as he used to work there (after leaving a career at EMC). In fact Mr. Georgens was part of the most recent attempt to spin the external storage business out of LSI back in the mid 2000s, when it received the Engenio name and branding. In addition to Tom Georgens, Vic Mahadevan, the current NetApp Chief Strategy Officer, recently worked at LSI and before that at BMC, Compaq and Maxxan among others.

What do I mean by the most recent attempt to spin the storage business out of LSI? Simple: the Engenio storage business traces its lineage back to NCR and what became known as Symbios Logic, which LSI acquired as part of some other acquisitions.

Going back to the late 90s, there was word on the street that the then LSI management was not sure what to do with storage business as their core business was and still is making high volume chips and related technologies. Current LSI CEO Abhi Talwalkar is a chip guy (nothing wrong with that) who honed his skills at Intel. Thus it should not be a surprise that there is a focus on the LSI core business model of making their own as well as producing silicon (not the implant stuff) for IT and consumer electronics (read their annual report).

As part of the acquisition, LSI has already indicated that they will use all or some of the cash to buy back their stock. However I also wonder if this does not open the door for Abhi and his team to do some other acquisitions more synergistic with their core business.

What does NetApp get:

  • Expanded OEM and channel distribution capabilities
  • Block based products to coexist with their NAS gateways
  • Business with an established revenue base
  • Footprint into new or different markets
  • Opportunity to sell different product set to existing customers

NetApp gets an OEM channel distribution model to complement what they already have (mainly IBM) in addition to their mainly direct sales and with VARs. Note that Engenio went to an all OEM/distribution model several years ago maintaining direct touch support for their partners.

Note that NetApp is providing financial guidance that the deal could add $750M to FY12 revenue, which is based on retaining some portion of the existing OEM business while moving into new markets as well as increasing product diversity with existing direct customers, VARs or channel partners.

NetApp also gets to address storage market fragmentation and enable OEM as well as channel diversification including selling to other server vendors besides IBM. The Engenio model in addition to supporting Dell, IBM, Oracle, SGI and other server vendors also involves working with vertical solution integrator OEMs in the video, entertainment, High Performance Compute (HPC), cloud and MSP markets. This means that NetApp can enter new markets where bandwidth performance is needed including scale out NAS (beyond what NetApp has been doing). This also means that NetApp gets a product to sell into markets where back end storage for big data, bulk storage, media and entertainment, cloud and MSP as well as other applications leverage SAS, iSCSI or FC and FCoE beyond what their current lineup offers. Who sells into those spaces? Dell, HP, IBM, Oracle, SGI and Supermicro among others.

What does LSI get:

  • $480M USD cash and buy back some stock to keep investors happy
  • Streamline their business or open door for new ones
  • Perhaps increase OEM sales to other new or existing customers
  • Perhaps do some acquisitions or be acquired

What does Engenio get:
A new parent that hopefully invests in the technology and marketing of the solution sets, as well as leverages or takes care of the installed base of customers.

What do the combined Engenio and NetApp OEMs and partners get:
With the combination of the organizations, hopefully come streamlined support, service and marketing, along with product enhancements to address new or different needs. Possibly also comfort in knowing that Engenio now has a home and its future is somewhat known.

What about the Engenio employees?
The reason I bring this up is to wonder what happens to those who have many years invested along with their LSI stock, which I presume they keep hoping that the sale gives them a future return on their investment or efforts. Having been in similar acquisitions in the past, it can be a rough go; however, if the acquirer has a bright future, then enough said.

Some random thoughts:

Is this one of those industry trendy, sexy, cool everybody drooling type deals with new and upcoming technology and marketing buzz?
No

Is this one of those industry deals that has good upside potential if executed upon and leveraged?
Yes

Netapp already has a storage offering why do they need Engenio?
No offense to NetApp, however they have needed a robust block storage offering to complement their NAS file serving and extensive software functionality in order to move into different markets. This is not all that different from what EMC needed to do in the late 90s, extending their capabilities beyond their sole cash cow platform Symmetrix by acquiring DG to gain a mid range offering.

NetApp is risking $480M on a business with technologies that some see or say is on the decline, so why would they do such a thing?
Ok, let's set the technology topics aside and, from a pure numbers perspective, take two scenarios (I'm not a financial person, so go easy on me please). What some financial people have told me with other deals is that it is sometimes about getting a return on cash vs. it not doing anything. So with that and other things in mind, say NetApp just lets $480M sit in the bank; can they get 12 percent or better interest? Probably not, and if they can, I want the name of that bank. What that means is that for a five year period, if they could get that rate of return (12 percent compounded annually), they would only make about $846M-$480M=$366M on the investment (I know, there are tax and other financial considerations, however let's keep it simple). Now let's take another scenario and assume that NetApp simply rides a decline of the business at, say, a 20 percent per year rate (how many businesses, in storage or otherwise, are declining at 20 percent per year?) for five years. That works out to about a $1.4B net yield. Let's take a different scenario and assume that NetApp can simply maintain an annual run rate of $700-750M for those five years; that works out to around $3.66B-$480M=$3.1B revenue or return on investment. In other words, even with some decline, over a five year period the OEM business pays for the deal alone and perhaps helps fund investment in technology improvement, with the business balance being positive upside.
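The back-of-envelope math in those scenarios can be sketched as follows. This is my own arithmetic under simplified assumptions (annual compounding, no taxes, all figures in millions of USD), so exact results will vary with different compounding or decline assumptions:

```python
# Rough sketch of the three scenarios: cash in the bank, a declining
# business, and a flat run rate. Illustration only, not financial advice.

PRICE = 480.0  # what NetApp is paying for Engenio, in $M

def bank_gain(principal=PRICE, rate=0.12, years=5):
    """Gain from letting the cash sit at a given annual compound rate."""
    return principal * (1 + rate) ** years - principal

def declining_revenue(start=705.0, decline=0.20, years=5):
    """Cumulative revenue if the business shrinks by `decline` each year."""
    return sum(start * (1 - decline) ** t for t in range(1, years + 1))

def flat_revenue(run_rate=732.0, years=5):
    """Cumulative revenue at a steady annual run rate."""
    return run_rate * years

print(f"Bank at 12%: gain ~${bank_gain():.0f}M")                    # ~$366M
print(f"20%/yr decline: ~${declining_revenue() - PRICE:.0f}M net")  # ~$1.4B net
print(f"Flat run rate: ~${flat_revenue() - PRICE:.0f}M net")        # ~$3.2B net
```

Even the pessimistic 20 percent per year decline scenario returns roughly three times what parking the cash would, which is the heart of the argument above.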

Now both of those are extreme scenarios, so let's take something more likely, such as NetApp being able to simply maintain a $700-750M run rate by keeping some of the OEM business, finding new markets for channel and OEM as well as direct sales, and expanding footprint into those markets. Now that math gets even more interesting. Having said all of that, NetApp needs to keep investing in the business and products to get those returns, which might help explain the relatively low price to run rate.

Is this a good deal for NetApp?
IMHO yes, as long as NetApp does not screw it up. If NetApp can manage the business, invest in it, grow into new markets instead of simple cannibalization, they will have made a good deal similar to what EMC did with DG back in the late 90s. However NetApp needs to execute, leverage what they are buying, invest in it and pick up new business to make up for the declining business with some of the OEMs.

With several hundred thousand systems or controllers having been sold over the years (granted, how many are actually running, your guess is as good as mine), NetApp has a footprint to leverage with their other products. For example, should IBM, Dell or Oracle completely walk away from those installed footprints, NetApp can move in with firmware or other upgrades to support, plus up sell with their NAS gateways to add value with compression, dedupe, etc.

What about NetApps acquisition track record?
Fair question, although I'm sure the NetApp faithful won't like it. NetApp has had their ups and downs with acquisitions (Topio, Decru, Spinnaker, Onaro, etc); perhaps with this one, like EMC in the late 90s who bought DG to overcome some rough up and down acquisitions, they can also get their mojo on (see this post). While we are on the topic of acquisitions, NetApp recently bought Akorri and last year Bycast, which they now call StorageGrid and which has been OEMed in the past by IBM. Guess what storage was commonly used under the IBM servers running the Bycast software? If you guessed XIV you might want to take a mulligan or a do over. Btw, HP has also OEMed the Bycast software. If you are not familiar with Bycast and are interested in automated movement, tiering, policy management, objects and other buzzwords, ping your favorite NetApp person, as it is a diamond in the rough if leveraged beyond healthcare capabilities.

What does this mean for Xyratex and Dot Hill, who are NetApp partners?
My guess is that for now, the general purpose enclosures would stay the same (e.g. Xyratex) until there is a business case to do something different. For the high density enclosures, that could be a different scenario. As for others, we will have to wait and see.

Will NetApp port OnTap into Engenio?
The easiest and fastest thing is to do what NetApp and Engenio OEM customers have already been doing, that is, place the Engenio arrays behind the NetApp FAS vFiler. Note that Engenio has storage systems that speak SAS to HDDs and SSDs, as well as speak SAS, iSCSI and FC to hosts or gateways. NetApp has also embraced SAS for back end storage; maybe we will see them leverage a SAS connection out of their filers in the future to SAS storage systems or shelves instead of FC loop?

Speaking of SAS host or server attached storage, guess what many cloud, MSP, high performance and other environment are using for storage on the back end of their clusters or scale out NAS systems?
Yup, SAS.

Guess what gap NetApp gets to fill, joining Dell, HP, IBM and Oracle, who can now give a choice of SAS, iSCSI or FC in addition to NAS?
Yup, SAS.

Care to guess what storage vendor we can expect to hear downplay SAS as a storage system to server or gateway technology?
Hmm

Is this all about SAS?
No

Will this move scare EMC?
No, EMC does not get scared, or at least that is what they tell me.

Will LSI buy Fusion-io, which has filed or is filing their documents to IPO, or someone else?
Your guess or speculation is better than mine. However LSI already has and is retaining their own PCIe SSD card.

Why only $480M for a business that did $705M in 2010?
Good question. There is risk in that if NetApp does not invest in the product, marketing and relationships, they will not see the previous annual run rate, so it is not a straight annuity. Consequently NetApp is taking risk with the business and thus they should get the reward if they can run with it. Another reason is that there probably were not any investment bankers or brokers running up the price.

Why didn't Dell buy Engenio for $480M?
Good question; if they had the chance, they should have. However it probably would not have been a good fit, as Dell needs direct sales vs. OEM sales.

Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Cloud conversations: Loss of data access vs. data loss

Have you hugged your cloud or MSP lately?

Why give a cloud a hug and what does it have to do with loss of data access vs. loss of data?

First there is a difference between actually losing data and losing access to it.

Losing data means that you have no backup or copy of the information thus it is gone. This means there are no good valid backups, snapshots, copies or archives that can be used to restore or recover the information.

Losing access to data means that there is a copy of it somewhere however it will take time to make it usable (no data was actually lost). How long you have to wait until the data is restored or recovered will vary and during that time it may seem like data was lost.

Second, industry hype for and against clouds serves as a lightning rod for when things happen.

Lightning recently struck (at least virtually) with some outages (see links below), including at Google Gmail.

Cloud crowd cheerleaders may need a hug to feel good while they or their technology get tossed about a bit. Google announced that they had a service disruption recently; however, data was not lost, only access to it for a period of time.

Let's take a step back before going forward.

With the Google Gmail disruption, following on previous incidents, true cynics and naysayers will probably jump on the anti cloud FUD feeding frenzy. The true cloud cynics will tell the skeptics all about cloud challenges, perhaps without ever having actually used any cloud service or technology themselves.

Cloud crowd cheerleaders are generally a happy go lucky bunch with virtual beliefs and physical or real emotions. Cloud crowd cheerleaders have a strong passion for their technology or paradigm, taking it quite seriously and in some instances perceiving attacks or FUD against clouds as an attack on them or their beliefs. Some cheerleaders will see this post as snarky or cynical (ok, get over it already).


Ongoing poll at StorageIOblog.com, click on the image to cast your vote.

Then there are the skeptics or interested audience who are not complete cynics or cheerleaders (those in the middle 80 percent of the above chart).

Generally speaking they want to learn more, understand issues to work around or take appropriate steps, and institute best practices. They see a place for MSP or cloud services for some things to complement what they are currently doing, and they tend to be the majority of audiences outside of special interest, vendor or industry trade groups.

Some additional thoughts, comments and perspectives:

  • Loss of data means you cannot get it back to a specific RPO (Recovery Point Objective, or how much data you can afford to lose). Loss of access to data means that you cannot get to your data until it is restored or recovered within a specific RTO (Recovery Time Objective).


Tiered data protection, RTO and RPOs, align technique and technology to SLO needs


  • RAID and replication provide accessibility to data, not data protection. The good news with RAID and replication or mirroring is that if you make a change to the data, it is copied or protected. The bad news is that if data is deleted or corrupted, that error or problem is also replicated.
  • Backup, snapshots, CDP or other time interval based techniques protect data against loss; however, they may require time to restore, recover or refresh from. A combination of data availability and accessibility along with time interval based protection is needed (e.g. the two previous items should be combined). CDP should also mean complete, consistent, coherent or comprehensive data protection, including data in application or VM buffers.
  • Any technology will fail, either on its own or via human intervention or lack of configuration. It is not if, rather when, as well as how gracefully a failure along with fault isolation occurs and is remediated (corrected). There is, generally speaking, no such thing as a bad technology, rather poor or inappropriate use, configuration or deployment of it.
  • Protect onsite data with offsite mediums including MSP or cloud backup services, while keeping a local onsite copy. Why keep an onsite local copy when using a cloud? Simple: when you lose access to the cloud or MSP for extended periods of time, if needed you have a copy of data to work with (assuming it is still valid). On the other hand, important data that is onsite needs to be kept offsite. Hence cloud and MSP should complement what is done for data protection and vice versa. That's what I do; is it what you do?
  • The technology golden rule, which applies to cloud and virtualization, is that whoever controls the management of the technology controls the gold. Leverage CDP, which is Commonsense Data Protection or Cloud Data Protection. Hops are great in beer (as well as some other foods), however they add latency, including with networks. Aggregation can cause aggravation; not everything can be consolidated, however much can be virtualized.
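The loss of data vs. loss of access distinction maps directly onto RPO and RTO, and can be illustrated with a minimal sketch (the objectives and timestamps below are hypothetical, purely for illustration):

```python
from datetime import datetime, timedelta

# Hypothetical protection objectives for a given application tier.
RPO = timedelta(hours=4)   # max tolerable data loss (time since last good copy)
RTO = timedelta(hours=8)   # max tolerable time without access to the data

def assess(last_good_copy, failure, restored):
    """Distinguish loss of data (an RPO question) from loss of access (RTO)."""
    data_lost = failure - last_good_copy   # changes exposed since last copy
    access_lost = restored - failure       # downtime until data is usable again
    return {
        "data_loss_window": data_lost,
        "meets_rpo": data_lost <= RPO,
        "access_loss_window": access_lost,
        "meets_rto": access_lost <= RTO,
    }

result = assess(
    last_good_copy=datetime(2011, 3, 1, 0, 0),  # last snapshot or backup
    failure=datetime(2011, 3, 1, 3, 0),         # outage begins
    restored=datetime(2011, 3, 1, 9, 0),        # access restored
)
# 3 hours of changes exposed (within the 4 hour RPO) and 6 hours of
# downtime (within the 8 hour RTO): access was lost, data was not.
print(result)
```

In the Gmail style of incident the data loss window is effectively zero (copies existed), while the access loss window is what users actually experienced.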


Closing thoughts and comments (for now) regarding clouds.

It's not if, rather when, where, why, how and with what you will leverage cloud or MSP technologies, products, services, solutions or architectures to complement your environment.

How will the cloud or MSP work for you vs. you working for it (unless you actually do work for one of them)?

Don't be scared of clouds or virtualization, however look before you leap!

BTW, for those in the Minneapolis St. Paul area (aka the other MSP), check out this event on March 15, 2011. I have been invited to talk about optimizing your data storage and virtual environments and being prepared to take advantage of cloud computing opportunities as they mature.

Nuff said for now

Cheers gs

Greg Schulz – Author The Green and Virtual Data Center (CRC), Resilient Storage Networks (Elsevier) and coming summer 2011 Cloud and Virtual Data Storage Networking (CRC) at https://storageio.com/books
twitter @storageio

What do you need when its time to buy a new server?

You have been told by someone or determined on your own that it is time for a new server, however what to get?

A blade server, rack mount, floor model, physical or virtual perhaps cloud?

How about one that is fully configured and accessorized to meet your specific environment's needs?

There are several considerations involving what type of server or computer is needed to meet your specific needs or application requirements. Options include price, packaging, vendor preferences, blade center, freestanding, 1U rack mount, virtual and cloud support, with or without storage and networking, performance as well as power and cooling among other considerations.

Here is a link (PDF version here, may require registration) to an article that I put together to help determine your needs as well as consider various options for your next server.

Hope you find the information useful!

Nuff said for now

Cheers gs

Greg Schulz – Author The Green and Virtual Data Center (CRC), Resilient Storage Networks (Elsevier) and coming summer 2011 Cloud and Virtual Data Storage Networking (CRC) at https://storageio.com/books
twitter @storageio

From bits to bytes: Decoding Encoding

With networking, care should be taken to understand if a given speed or performance capacity is being specified in bits or bytes as well as in base 2 (binary) or base 10 (decimal).

Another consideration and potential point of confusion are line rates (GBaud) and link speeds, which can vary based on encoding and low level frame or packet size. For example, 1GbE along with 1, 2, 4 and 8Gb Fibre Channel as well as Serial Attached SCSI (SAS) use an 8b/10b encoding scheme. This means that at the lowest physical layer, 8 bits of data are placed into 10 bits for transmission, with the 2 extra bits used for signal integrity (DC balance and clock recovery) and error detection.

With an 8Gb link using 8b/10b encoding, 2 out of every 10 bits are overhead. The actual data throughput for bandwidth, or the number of IOPS, frames or packets per second, is a function of the link speed, encoding and baud rate. For example, 1Gb FC has a 1.0625Gb per second line rate, which is multiplied by the generation, so 8Gb FC or 8GFC would be 8 x 1.0625 = 8.5Gb per second.

Remember to factor in that encoding overhead (e.g. 8 of 10 bits are for data with 8b/10b) and usable bandwidth on the 8GFC link is about 6.8Gb per second or about 850Mbytes (6.8Gb / 8 bits) per second. 10GbE uses 64b/66b encoding which means that for every 64 bits of data, only 2 bits are used for data integrity checks thus less overhead.
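The arithmetic above generalizes: usable bandwidth is simply the line rate times the ratio of data bits to total bits. A small sketch (following the text's convention of applying the encoding ratio to the nominal link rate):

```python
def usable_gbps(line_rate_gbaud, data_bits, total_bits):
    """Usable bandwidth after encoding overhead, in Gbit/s."""
    return line_rate_gbaud * data_bits / total_bits

def usable_mbytes(line_rate_gbaud, data_bits, total_bits):
    """Same, expressed in MBytes/s (8 bits per byte, 1Gb = 1000Mb)."""
    return usable_gbps(line_rate_gbaud, data_bits, total_bits) * 1000 / 8

# 8Gb Fibre Channel: 8 x 1.0625 = 8.5 Gbaud line rate, 8b/10b encoding
print(round(usable_gbps(8.5, 8, 10), 1))     # 6.8 Gb/s usable
print(round(usable_mbytes(8.5, 8, 10)))      # 850 MB/s usable

# 10GbE with 64b/66b encoding applied to the nominal 10Gb rate
print(round(usable_gbps(10, 64, 66), 1))     # ~9.7 Gb/s usable
```

The same two-line calculation works for any of the interfaces discussed here; only the line rate and encoding ratio change.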

What do all of these bits and bytes have to do with clouds and virtual data storage networks?

Quite a bit, when you consider what we have talked about regarding the need to support more information processing, moving and storing in a denser footprint.

In order to support higher densities, faster servers, storage and networks are not enough; various approaches to reducing the data footprint impact are also required.

What this means is that for fast networks to be effective, they also have to have lower overhead, to avoid spending transmission capacity on extra bits in the same amount of time and instead use that capacity for productive work and data.

PCIe leverages multiple serial unidirectional point to point links, known as lanes, compared to traditional PCI, which used a parallel bus based design. With traditional PCI, the bus width varied from 32 to 64 bits, while with PCIe, the number of lanes combined with PCIe version and signaling rate determines performance. PCIe interfaces can have one, two, four, eight, sixteen or thirty two lanes for data movement, depending on card or adapter format and form factor. For example, PCI and PCI-X performance can be up to 528 MBytes per second with a 64 bit, 66 MHz signaling rate.

                            PCIe Gen 1    PCIe Gen 2    PCIe Gen 3
Giga transfers per second   2.5           5             8
Encoding scheme             8b/10b        8b/10b        128b/130b
Data rate per lane          250MB/s       500MB/s       1GB/s
x32 lanes                   8GB/s         16GB/s        32GB/s

Table 1: PCIe generation comparisons

Table 1 shows performance characteristics of the various PCIe generations. With PCIe Gen 3, the effective performance essentially doubles; however, the actual underlying transfer speed does not double as it has in the past. Instead, the improved performance is a combination of about 60 percent link speed improvement plus efficiency improvements from switching from an 8b/10b to a 128b/130b encoding scheme, among other optimizations.
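Table 1's per-lane figures fall out of the same arithmetic as before: transfer rate (GT/s) times encoding efficiency, divided by 8 bits per byte. A sketch:

```python
# Per-lane and x32 PCIe throughput derived from transfer rate and
# encoding, matching Table 1 (values approximate; protocol overhead
# such as packet headers is ignored).

GENS = {
    "Gen 1": (2.5, 8, 10),     # GT/s, data bits, total bits (8b/10b)
    "Gen 2": (5.0, 8, 10),
    "Gen 3": (8.0, 128, 130),  # 128b/130b encoding
}

def per_lane_mbps(gts, data_bits, total_bits):
    """Usable MB/s per lane: GT/s x encoding efficiency / 8 bits per byte."""
    return gts * data_bits / total_bits * 1000 / 8

for gen, (gts, data, total) in GENS.items():
    lane = per_lane_mbps(gts, data, total)
    print(f"{gen}: {lane:.0f} MB/s per lane, {lane * 32 / 1000:.1f} GB/s x32")
```

Note that Gen 3 works out to about 985 MB/s per lane, which the table rounds to 1GB/s; the change of encoding is what lets an 8 GT/s link nearly double the throughput of a 5 GT/s one.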

Serial interface            Encoding
PCIe Gen 1                  8b/10b
PCIe Gen 2                  8b/10b
PCIe Gen 3                  128b/130b
Ethernet 1Gb                8b/10b
Ethernet 10Gb               64b/66b
Fibre Channel 1/2/4/8 Gb    8b/10b
SAS 6Gb                     8b/10b

Table 2: Common encoding schemes

Bringing this all together: in order to support cloud and virtual computing environments, data networks need to become faster as well as more efficient, otherwise you will be paying for more overhead per second vs. productive work being done. For example, with 64b/66b encoding on a 10GbE or FCoE link, about 97 percent of the overall bandwidth, or about 9.7Gb per second, is available for useful work.

By comparison, if 8b/10b encoding were used, only 80 percent of the available bandwidth would be usable for data movement. For bandwidth oriented environments this means better throughput, while for applications that require lower response time or latency it means more IOPS, frames or packets per second.

The above is an example of where a small change such as the encoding scheme can have large benefit when applied to high volume or large environments.

Learn more in The Green and Virtual Data Center (CRC), Resilient Storage Networks (Elsevier) and coming summer 2011 Cloud and Virtual Data Storage Networking (CRC) at https://storageio.com/books

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Winter 2011 Server and StorageIO News Letter

StorageIO News Letter Image
Winter 2011 Newsletter

Welcome to the Winter 2011 edition of the Server and StorageIO Group (StorageIO) newsletter. This follows the Fall 2010 edition.

You can access this newsletter via various social media venues (some are shown below) in addition to StorageIO web sites and subscriptions. Click on the following links to view the Winter 2011 edition as HTML or PDF, or go to the newsletter page to view previous editions.

Follow via Google Feedburner here or via email subscription here.

You can also subscribe to the newsletter by simply sending an email to newsletter@storageio.com

Enjoy this edition of the StorageIO newsletter, let me know your comments and feedback.

Nuff said for now

Cheers gs

Greg Schulz – Author The Green and Virtual Data Center (CRC), Resilient Storage Networks (Elsevier) and coming summer 2011 Cloud and Virtual Data Storage Networking (CRC)
twitter @storageio

Securing data at rest: Self Encrypting Disks (SEDs)

Here is a link to a recent guest post that I was invited to do over at The Virtualization Practice (TVP) pertaining to Self Encrypting Disks (SEDs).

Based on the Trusted Computing Group (TCG) DriveTrust and OPAL disk drive security models, SEDs offload encryption to the disk drive while complementing other encryption security solutions to protect against theft or loss of storage devices. SEDs have another benefit, however: simplifying the process of decommissioning a storage device safely and quickly.

If you are not familiar with them, SEDs perform encryption within the hard disk drive (HDD) itself using the onboard processor and resident firmware. Since SEDs only protect data at rest, other forms of encryption should be combined to protect data in flight or on the move.

There is also another benefit of SEDs: for those of you concerned about how to digitally destroy, shred or erase large capacity disks in the future, you may have a new option. While intended for protecting data, a byproduct is that when a SED is removed from the system, server or controller that it has established an affinity with, its contents are effectively useless until reattached. If the encryption key for a SED is changed, the data is instantly rendered useless, at least for most environments.
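As a toy illustration (not real drive firmware or production cryptography), the crypto-erase effect can be sketched like this: data encrypted under one key becomes unreadable noise once the key changes.

```python
# Toy illustration of why changing a SED's key acts as an instant erase:
# ciphertext without the original key is effectively random noise.
# This uses a simple SHA-256-based keystream, NOT production crypto.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a deterministic keystream from a key (counter mode over SHA-256)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Encrypt or decrypt by XOR with the keystream (symmetric)."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

plaintext = b"sensitive data at rest"
old_key, new_key = b"drive-key-v1", b"drive-key-v2"

stored = xor_cipher(old_key, plaintext)          # what lands on the platters
assert xor_cipher(old_key, stored) == plaintext  # readable with original key
assert xor_cipher(new_key, stored) != plaintext  # key change: instant gibberish
```

The drive-resident key never leaves the HDD in a real SED; discarding or regenerating it makes every sector on the media unrecoverable without physically shredding anything.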

Learn more about SEDs here and via the following links:

  • Self-Encrypting Drives for IBM System x
  • Trusted Computing Group OPAL Summary
  • Storage Performance Council (SPC) SED and Non SED benchmarks
  • Seagate SED information
  • Trusted Computing Group SED information

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio


What have I been doing this winter?

It has been almost a month since my last post, so I want to say hello and let you know what I have been doing.

What I have been doing is:

  • Accumulating a long list of ideas for upcoming blog posts, articles, tips, webinars and other content.
  • Recording some podcasts and web casts, doing interviews and commentary, along with a few articles here and there.
  • Working with some new venues where, if all comes together, you should be seeing material or commentary appearing soon.
  • Filling some dates for the 2011 out and about events and activities page.
  • Doing research in several different areas as well as working with clients on various project activities, many of which are under NDA.
  • Getting some recently finished content ready to appear on the main web site as well as in the blog and other venues.
  • Attending vendor events and briefing sessions on solutions, some of which are yet to be announced.
  • Enjoying the cold and snowy winter as best I can (see some videos here) while trying to avoid cold and flu season.

In addition to the above, I have been trying to stay very focused on getting my new book, titled Cloud and Virtual Data Storage Networking (CRC), wrapped up for a summer 2011 release. This is my third solo book project, in addition to co-writing or contributing to several other book projects.

Cloud and Virtual Data Storage Networking

I'm doing the project the old fashioned way, which means writing it myself (as opposed to using ghost writers) with a traditional publishing house (CRC, same as my last book), all of which takes a bit more time. For anyone who has done a project like this, you know what is involved. For those who have not, it includes research, writing, editing, working with editors and copyeditors, subject matter experts doing initial reviews, illustrations and page layouts, markups, more edits and proofs. Then there are the general project management activities along with marketing and rollout plans, and companion presentation material, working with the publisher and others.

Anyway, hope you are all doing well, look forward to sharing more with you soon, now it is time to get back to work…

Nuff said for now

Cheers gs

Greg Schulz – Author The Green and Virtual Data Center (CRC), Resilient Storage Networks (Elsevier) and coming summer 2011 Cloud and Virtual Data Storage Networking (CRC)
twitter @storageio

Are you on the StorageIO IT Data Infrastructure industry links page?

Hey IT data infrastructure vendors, VARs or service providers, are you on the Server and StorageIO IT industry interesting links page?

Don't worry, it's free and there is no obligation!

There are no hidden charges or fees. You will not be obligated to pay a fee or subscribe to a service, or be called or contacted by a sales or account manager to buy something. Nor will you be required to sign up for an annual or short term retainer, make a donation, honorarium, endowment, contribution or subsidy, remunerate or sponsor in any other manner, directly or via indirect means including second, third, fourth or other virtual or physical means. This also means via other organizations, venues, institutes, associations, communities, events or causes. (Btw, that is some industry humor; some will get it, however for others who feel it is poking fun at their livelihoods, too bad!)

Your contact information will be kept confidential and will not be sold, bartered, traded, borrowed or abused, nor will you be called or bothered (contact me if somebody does reach out to you). However, you may get an occasional Server and StorageIO newsletter sent to you via email (privacy and disclosure statement can be found here).

There is however one small caveat: no spamming, and submissions must be made directly on your own or your company's behalf. If you are a public relations firm, feel free to submit on behalf of your own organization; however, have your clients submit on their own (or use their identity when doing so on their behalf).

Why do I make this links page and list available for free to those who read it, as well as to those who are on it?

Simple: I use it myself to keep track of companies, firms or organizations involved with data infrastructures (servers, storage, I/O and networking, hardware, software, services) that I have come across and feel are worth sharing with others.

Of course, if you feel compelled, you can always contact Server and StorageIO to discuss other services or simply buy one of my books including Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier), The Green and Virtual Data Center (CRC) and coming summer 2011 Cloud and Virtual Data Storage Networking (CRC) at Amazon or one of the other many fine global venues.

 

Still interested, all you need to do is the following:

No SPAM submission please

Please do not submit via web or blog page unless you want your contact information known to others.

Send an email to links at storageio dot com that includes the following:

1. Your company name
2. Your company URL
3. Your company contact person (you or someone else) including:
Name
Title or position
Phone or Skype
Email
Optional twitter

4. A brief description of 40 characters or less of what you do, or your solution categories (tip: avoid superlatives; see the links page for ideas)

5. Optionally, indicate DND (Do Not Disturb) if you prefer not to receive email newsletters, coverage or mentions.

Again, please, No Spam!

It's that simple.

Now it's up to you to decide if you want to be included or not.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio


E2E Awareness and insight for IT environments

I recently did a couple of Industry Trends and Perspectives webcast events around the topic and themes of End to End (E2E) awareness and cross domain (or technology) management insight for cloud, virtual and other abstracted as well as physical IT environments.

The importance of E2E awareness of IT resources across different technology domains (or focus areas) is that you cannot effectively manage what you do not have timely access or visibility into. Hence the session theme: You cannot effectively manage what you do not know about in a timely manner.
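As a rough, hypothetical sketch of that idea (the domains and metric values below are invented for illustration), a management tool might flag blind spots as any resource domain lacking a timely utilization sample:

```python
# Hypothetical sketch of cross-domain E2E awareness: resources with no
# timely metrics are blind spots you cannot effectively manage.
from datetime import datetime, timedelta

def find_blind_spots(metrics: dict, now: datetime, max_age: timedelta) -> list:
    """Return domains whose utilization samples are missing or stale."""
    blind = []
    for domain, sample in metrics.items():
        if sample is None or now - sample["collected"] > max_age:
            blind.append(domain)
    return blind

now = datetime(2011, 2, 1, 12, 0)
metrics = {
    "server":  {"util": 0.62, "collected": now - timedelta(minutes=1)},
    "storage": {"util": 0.81, "collected": now - timedelta(hours=3)},  # stale sample
    "network": None,                                                   # no visibility at all
}

print(find_blind_spots(metrics, now, timedelta(minutes=15)))
# -> ['storage', 'network']
```

Real E2E tools pull such samples from many collectors across servers, storage, I/O and networks; the point of the sketch is simply that stale or absent data is itself a managed condition.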

Here is the abstract for the webcast:

Virtualization, clouds and other forms of abstraction help IT organizations enable flexible and scalable services delivery. While abstraction of underlying resources simplifies services delivery from an IT customer's perspective, additional layers of technology along with their interdependencies still need to be tracked as well as managed. A key enabler for IT organizations is having end to end (E2E) situational awareness of available resources and how they are being used. By having timely situational awareness across various technology domains, IT organizations gain insight into how resources can be more effectively deployed in an efficient manner.

Join independent IT industry analyst, author and blogger Greg Schulz as he looks at common challenges as well as opportunities for leveraging E2E situational awareness to remove blind spots from efficient, effective IT services delivery. Greg will look at several scenarios including, among others, cost reduction, maximizing resource usage, and shrinking migration and data consolidation times for cloud, virtual and traditional IT environments while maintaining or enhancing IT services delivery.

If you are interested in IT Infrastructure Resource Management (IRM) of servers, storage, IO networking, virtualization, cloud, backup or restore, optimization as well as cloud or legacy environments and metrics, I invite you to view the following web cast.

E2E cross domain awareness webcast

Click on the above image to access the BrightTalk web cast from their recent Virtualization Summit series (may require registration)

If you are interested, here is a link to a previous post I did on E2E management, SRA (systems or storage resource analysis) and management insight along with a recent related white paper sponsored by SANpulse that you can access here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio


Dude, is Dell doing a disk deal again with Compellent?

Over in Eden Prairie (Minneapolis Minnesota suburb) where data storage vendor Compellent (CML) is based, they must be singing in the hallways today that it is beginning to feel a lot like Christmas.

Sure we had another dusting of snow this morning here in the Minneapolis area and the temp is actually up in the balmy 20F temperature range (was around 0F yesterday) and holiday shopping is in full swing.

The other reason I think that the Compellent folks are thinking that it feels a lot like Christmas are the reports that Dell is in exclusive talks to buy them at about $29 per share or about $876 million USD.

Dell is no stranger to holiday or shopping sprees, check these posts out as examples:

Dell Will Buy Someone, However Not Brocade (At least for now)

Back to school shopping: Dude, Dell Digests 3PAR Disk storage (we now know Dell was outbid)

Data footprint reduction (Part 2): Dell, IBM, Ocarina and Storwize

Data footprint reduction (Part 1): Life beyond dedupe and changing data lifecycles

Post Holiday IT Shopping Bargains, Dell Buying Exanet?

Did someone forget to tell Dell that Tape is dead?

Now some Compellent fans are not going to be happy with a price of only about $29 a share, or about $876 million USD, given the recent stock run up into the $30-plus range. Likewise, some Compellent fans may be hoping for or expecting a bidding war to drive the stock back up into the $30 range; however, keep in mind that it was earlier this year when the stock adjusted itself down into the mid teens.

In the case of 3PAR and the HP Dell bidding war, that was a different product and company focused on a different space than where Compellent has a good fit.

Sure, both 3PAR and Compellent do Fibre Channel (FC) where Dell's EqualLogic only does iSCSI; however, a valuation based just on FC would be like saying Dell has all the storage capabilities they need with their MD3000 series that can do SAS, iSCSI and FC.

In other words, there are different storage products for different markets or price bands and customer application needs. Kind of like winter here in Minnesota: sure, one type of shovel will work for moving snow, or you can leverage different technologies and techniques (tiering) to get the job done effectively; the same holds for storage solutions.

Compellent has a good Cadillac product that is a good fit for some SMB environments. However the SMB space is also where Dell has several storage products some of which they own (e.g. EqualLogic), some they OEM (MD3000 series and NX) as well as resell (e.g. EMC CLARiiON).

Can the Compellent product replace the lower end CLARiiON business that Dell itself has been shifting to its flagship EqualLogic product?

Sure, however at the risk of revenue cannibalization or, worse, the introduction of revenue prevention teams.

Can the Compellent product then be positioned lower down under the EqualLogic product?

Sure, however why hold it back, not to mention force a higher priced product down into that market segment?

Can the Compellent product be taken up market to compete above the EqualLogic head to head with the larger CLARiiON systems from EMC or comparable solutions from other vendors?

Sure, however I can hear choruses of "it's sounding a lot like Christmas" from New England, the bay area and Tucson among others.

Does this mean that Dell is being overly generous and that this is not a good deal?

No, not at all.

Sure, it is the holiday season and Dell has several billion dollars of cash lying around; however, that in itself does not guarantee a large handout or government sized bailout (excuse me, infusion). At $30 or more, the deal would be overly generous simply based on where the technology fits as well as aligns with market realities. Consequently, at $29, this is a great deal for Compellent and also for Dell.

Why is it a good deal for Dell?

I think that it is as much about Dell getting a good deal (ok, paying a premium) to acquire a competitor that they can use to fill some product gaps where they have common VARs. However I also think that this is very much about the channel and the VAR as much if not more than it is just about a storage product. Servers are part of the game here which in turn supports storage, networking, management tools, backup/recovery, archiving and services.

Sure, Dell can maybe take some cost out of the Compellent solution by replacing the Supermicro PCs that are the hardware platform for their storage controllers with Dell servers. However, the bigger play is around further developing its channel and VAR ecosystems, some of whom were with EqualLogic before Dell bought them. This can also be seen as a means of Dell getting that partner ecosystem to sell, overall, more Dell products and solutions instead of those from Apple, EMC, Fujitsu, HP, IBM, Oracle and many others.

Likewise, I doubt that Mr. Dell is paying a premium simply to make the Compellent shareholders and fans happy, or to create monetary velocity to stimulate holiday shopping and economic stimulus. However, for the fans: sure, while drowning your sorrows in the eggnog of holiday cheer because you are not getting $30 or higher, buy a round for your mates and toast Dell for your holiday gift.

The real reason I think this is a good deal for Dell is that from a business and financial perspective, assuming they stick to the $29 range, it is a good bargain for both parties. Dell gets a company that has been competing with their EqualLogic product, in some cases with the same VARs or resellers. Sure, it gets a Fibre Channel based product; however, Dell already has that with the MD3000 series which, I realize, is less function laden than Compellent or EqualLogic, however it is also more affordable for a different market.

If Dell can close the deal sticking to its offer (which it has the upper hand on), execute by rolling out a strategy and product positioning plan, and then educate its own teams as well as VARs and customers on which products fit where and when, in a manner that avoids revenue prevention (e.g. one product or team blocking the other) or cannibalization and instead expands markets, they can do well.

While Compellent gets a huge price multiple based on their revenue (about $125M USD), if Dell can get the product revenue up from the $125 to $150 million plateau to around $250 to $300 million without cannibalizing other Dell products, the deal pays for itself in many ways.
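For a rough sense of the numbers cited above (all figures approximate and taken from the post itself), the implied share count and revenue multiple work out as follows:

```python
# Rough arithmetic using the figures cited in the post (illustrative only).
deal_value = 876e6        # reported deal value, USD
price_per_share = 29.0    # reported offer per share, USD
revenue = 125e6           # approximate Compellent annual revenue, USD

implied_shares = deal_value / price_per_share   # shares outstanding implied by the offer
revenue_multiple = deal_value / revenue         # price-to-revenue multiple

print(f"Implied shares outstanding: ~{implied_shares / 1e6:.1f} million")
print(f"Price-to-revenue multiple : ~{revenue_multiple:.1f}x")
# -> roughly 30.2 million shares and a ~7x revenue multiple
```

A roughly 7x revenue multiple is rich, which is why the deal pays for itself only if Dell can grow that revenue substantially without cannibalizing its other storage lines.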

Keep in mind that a large pile of cash sitting in the bank these days is not exactly yielding the best returns on investment.

For the Compellent fans and shareholders, congratulations!

You have gotten, or perhaps are about to get, a good holiday gift, so knock off the complaining that you should be getting more. The alternative is that instead of $28 per share, you could be getting 28 lumps of coal in your Christmas stocking.

For the Dell folks, assuming the deal is done on their terms and that they can quickly rationalize the product overlap, convey and then execute on a strategy while keeping the revenue prevention teams on the sidelines, you too have a holiday gift to work with (some assembly will be required, however). This is also good for Dell outside of storage, which may turn out to be one of the gems of the deal: keeping or expanding VARs selling Dell based servers and associated technologies.

For EMC, who was slapped in the face earlier this year when Dell took a run at 3PAR, sure there will be more erosion on the lower end CLARiiON as has been occurring with EqualLogic. However, Dell still needs a solution to effectively compete with EMC and others at the higher end of the SMB or lower end of the enterprise market.

Sure the EqualLogic or Compellent products could be deployed into such scenarios; however those solutions are then playing on a different field and out of their market sweet spots.

Let's see what happens, shall we?

In the meantime, what say you?

Is this a good deal for Dell, who is the deal good for assuming it goes through and at the terms mentioned, what is your take?

Who benefits from this proposed deal?

Note that in the holiday gift giving spirit, Chicago style voting or polling will be enabled.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio


Fall 2010 StorageIO News Letter

StorageIO News Letter Image
Fall 2010 Newsletter

Welcome to the Fall 2010 edition of the Server and StorageIO Group (StorageIO) newsletter. This follows the August 2010 edition, building on the great feedback received from recipients.

You can access this newsletter via various social media venues (some are shown below) in addition to the StorageIO web sites and subscriptions. Click on the following links to view the Fall 2010 edition as HTML or PDF, or go to the newsletter page to view previous editions.

Follow via Google Feedburner here or via email subscription here.

You can also subscribe to the newsletter by simply sending an email to newsletter@storageio.com

Enjoy this edition of the StorageIO newsletter, let me know your comments and feedback.

Nuff said for now

Cheers gs

Greg Schulz – Author The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier)
twitter @storageio

Who is responsible for vendor lockin?

Updated 1/21/2018

Is vendor lockin caused by vendors, their partners or by customers?

In my opinion vendor lockin can be from any or all of the above.

What is vendor lockin

Vendor lockin is a situation where a customer becomes dependent on, or locked in to, a particular supplier or technology, by choice or other circumstances.

What is the difference between vendor lockin, account control and stickiness?

I'm sure some marketing wiz or sales type will be happy to explain the subtle differences. Generally speaking, lockin, stickiness and account control are essentially the same, or at least strive to obtain similar results. For example, vendor lockin to some has a negative stigma. However, vendor stickiness may be a new term, perhaps even sounding cool, and thus not a concern. Remember the Mary Poppins song, a spoonful of sugar helps the medicine go down? In other words, sometimes changing to a different term such as sticky vs vendor lockin helps make the situation taste better.

Is vendor lockin or stickiness a bad thing?

No, not necessarily, particularly if you the customer are aware and still in control of your environment.

I have had different views of vendor lockin over the years.

These have varied from when I was a customer working in IT organizations or being a vendor and later as an advisory analyst consultant. Even as a customer, I had different views of lockin which varied depending upon the situation. In some cases lockin was a result of upper management having their favorite vendor which meant when a change occurred further up the ranks, sometimes vendor lockin would shift as well. On the other hand, I also worked in IT environments where we had multiple vendors for different technologies to maintain competition across suppliers.

As a vendor, I was involved with customer sites that were best of breed while others were aligned around a single or few vendors. Some were aligned around technologies from the vendors I worked for and others were aligned with someone elses technology. In some cases as a vendor we were locked out of an account until there was a change of management or mandates at those sites. In other cases where lock out occurred, once our product was OEMd or resold by an incumbent vendor, the lockout ended.

Some vendors do a better job of establishing lockin, account management, account control or stickiness than others. Some vendors may try to lock a customer in, and thus there is a perception that vendors lock customers in. Likewise, there is a perception that vendor lockin only occurs with the largest vendors; however, I have seen it also occur with smaller or niche vendors who gain control of their customers, keeping larger or other vendors out.

Sweet, sticky Sue Bee Honey

Vendor lockin or stickiness is not always the result of the vendor, VAR, consultant or service provider pushing a particular technology, product or service. Customers can allow or enable vendor lockin as well, either intentionally via alliances to drive some business initiative, or accidentally by giving up account control management. Consequently, vendor lockin is not a bad thing if it brings mutual benefit to the supplier and consumer.

On the other hand, if lockin causes hardship for the consumer while only benefiting the supplier, then it can be a bad thing for the customer.

Do some technologies lend themselves more to vendor lockin vs others?

Yes, some technologies lend themselves more to stickiness or lockin than others. For example, big ticket or expensive hardware is often seen as being vulnerable to vendor lockin along with other hardware items; however, software is where I have seen a lot of stickiness or lockin.

However, what about virtualization solutions? After all, the golden rule of virtualization is: whoever controls the virtualization (hardware, software or services) controls the gold. This means that vendor lockin could be around a particular hypervisor or associated management tools.

How about bundled solutions or what are now called integrated vendor technology stacks including PODs (here or here) or vBlocks among others? How about databases, do they enable or facilitate vendor lockin? Perhaps, just like virtualization or operating systems or networking technology, storage system, data protection or other solutions, if you let the technology or vendor manage you, then you enable vendor lockin.

Where can vendor lockin or stickiness occur?

Application software, databases, data or information tools, messaging or collaboration, and infrastructure resource management (IRM) tools ranging from security to backup to hypervisors and operating systems to email. Let's not forget about hardware, which has become more interoperable, from servers, storage and networks to integrated marketing or alliance stacks.

Another opportunity for lockin or stickiness can be in the form of drivers, agents or software shims, where you become hooked on a feature or functionality that then drives future decisions. In other words, lockin can occur in different locations, both in traditional IT as well as via managed services, virtualization or cloud environments, if you let it occur.

 

Keep these thoughts in mind:

  • Customers need to manage their resources and suppliers
  • Technology and their providers should work for you the customer, not the other way around
  • Technology providers conversely need to get closer to influence customer thinking
  • There can be cost with single vendor or technology sourcing due to loss of competition
  • There can be a cost associated with best of breed or functioning as your own integrator
  • There is a cost switching from vendors and or their technology to keep in mind
  • Managing your vendors or suppliers may be easier than managing your upper management
  • Vendor sales teams remove barriers so they can sell, while setting barriers for others
  • Virtualization and cloud can be both a source for lockin as well as a tool to help prevent it
  • As a customer, if lockin provides benefits then it can be a good thing for all involved
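One way customers keep that balance in their own hands, sketched below with hypothetical vendor names and APIs, is a thin abstraction layer that keeps switching costs, and thus lockin, under the customer's control:

```python
# Hypothetical sketch: a thin abstraction layer is one way customers keep
# switching costs (and thus lockin) under their own control.
# The backend names and APIs below are invented for illustration.

class StorageBackend:
    """Provider-agnostic interface the application codes against."""
    def put(self, key: str, data: bytes) -> None:
        raise NotImplementedError
    def get(self, key: str) -> bytes:
        raise NotImplementedError

class VendorABackend(StorageBackend):
    def __init__(self):
        self._store = {}  # stand-in for vendor A's proprietary API
    def put(self, key, data):
        self._store[key] = data
    def get(self, key):
        return self._store[key]

class VendorBBackend(StorageBackend):
    def __init__(self):
        self._store = {}  # stand-in for vendor B's proprietary API
    def put(self, key, data):
        self._store[key] = data
    def get(self, key):
        return self._store[key]

def application(backend: StorageBackend) -> bytes:
    """Application logic depends only on the interface, not the vendor."""
    backend.put("config", b"app data")
    return backend.get("config")

# Swapping suppliers changes one line, not the application:
assert application(VendorABackend()) == application(VendorBBackend())
```

The abstraction itself carries a cost (the best-of-breed integrator cost noted above), so this is a design choice to weigh, not a free lunch.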

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Ultimately, it's up to the customer to manage their environment and thus have a say in whether they will allow vendor lockin. Granted, upper management may be the source of the lockin, and not surprisingly that is where some vendors will want to focus their attention, directly or via the influence of high level management consultants.

So while a vendor's solution may appear to be a locked-in solution, it does not become a lockin issue or problem until a customer lets or allows it to be a lockin or sticky situation.

What is your take on vendor lockin? Cast your vote and see results in the following polls.

Is vendor lockin a good or bad thing?

Who is responsible for managing vendor lockin?

Where is the most common form of or concern about vendor lockin?

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

IBM's Storwize or wise Storage, the V7000 and DFR

A few months ago IBM bought a Data Footprint Reduction (DFR) technology company called Storwize (read more about DFR and Storwize Real time Compression here, here, here, here and here).

A couple of weeks ago IBM renamed the Storwize real time compression technology to, surprise surprise, IBM Real Time Compression (wow, wonder how lively that market focus research group study discussion was).

Subsequently IBM recycled the Storwize name in time to be used for the V7000 launch.

Now to be clear right up front, currently the V7000 does not include real time compression capabilities, however I would look for that and other forms of DFR techniques to appear on an increasing basis in IBM products in the future.
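As a simple illustration of the DFR idea (compression ratios depend entirely on the data; the repetitive sample below is deliberately contrived), a footprint reduction ratio can be computed like this:

```python
# Illustrative data footprint reduction (DFR) via lossless compression.
# Real-world ratios depend entirely on the data being stored; this
# deliberately repetitive sample compresses far better than typical data.
import zlib

data = b"customer_record,2010-11-01,status=active;" * 500

compressed = zlib.compress(data, level=6)
ratio = len(data) / len(compressed)

print(f"Original : {len(data):,} bytes")
print(f"Reduced  : {len(compressed):,} bytes")
print(f"DFR ratio: {ratio:.1f}:1")
```

Real time compression as in the Storwize technology applies this kind of reduction inline, as data is written and read, rather than as a post-process; the footprint math, however, is the same.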

IBM has a diverse storage portfolio with good products, some with longer legs than others to compete in the market. By long legs, I mean both technology and marketability, enabling their direct sales as well as partners, including distributors or VARs, to effectively compete with other vendors' offerings.

The enablement capability of the V7000 is to give IBM and their business partners a product that they will want to go tell and sell to customers, competing with Cisco, Dell, EMC, Fujitsu, HDS, HP, NEC, NetApp and Oracle among others.

What about XIV?

For those interested in XIV regardless of if you are a fan, nay sayer or simply an observer, here, here and here are some related posts to view if you like (as well as comment on).

Back to the V7000

A couple of common themes about the IBM V7000 are:

  • It appears to be a good product based on the SVC platform with many enhancements
  • Expanding the industry scope and focus awareness around Data Footprint Reduction (DFR)
  • Branding the Storwize acquisition as real-time compression as part of their DFR portfolio
  • Confusion about using the Storwize name for a storage virtualization solution
  • Lack of Data Footprint Reduction (DFR) particularly real-time compression (aka Storwize)
  • Yet another IBM storage product adding to confusion around product positioning

Common questions that Im being asked about the IBM V7000 include among others:

  • Is the V7000 based on LSI, NetApp or other third party OEM technology?

    No, it is based on the IBM SVC code base along with an XIV like GUI and features from other IBM products.

  • Is the V7000 based on XIV?

    No, as mentioned above, the V7000 is based on the IBM SVC code base along with an XIV like GUI and features from other IBM products.

  • Does the V7000 have DFR such as dedupe or compression?

    No, not at this time other than what was previously available with the SVC.

  • Does this mean there will be a change or defocusing on or of other IBM storage products?

    IMHO I do not think so, other than perhaps around XIV. If anything, I would expect IBM to start pushing the V7000 as well as the entire storage product portfolio more aggressively. Now there could be some defocusing on XIV or, put a different way, putting all products on equal footing and letting the customer determine what they want based on effective solution selling from IBM and their business partners.

  • What does this mean for XIV; is that product no longer the featured or marquee product?

    IMHO XIV remains relevant for the time being. However, I also expect it to be put on equal footing with other IBM products or, if you prefer, other IBM products, particularly the V7000, to be unleashed to compete with external vendors' solutions such as those from Cisco, Dell, EMC, Fujitsu, HDS, HP, NEC, NetApp and Oracle among others. Read more here, here and here about XIV remaining relevant.

  • Why would I not just buy an SVC and add storage to it?

    That is an option and a strength of SVC: it can sit in front of different IBM storage products as well as those of third party competitors. However, with the V7000, customers now have a turnkey storage solution instead of just a virtualization appliance.

  • Is this a reaction to EMC VPLEX, HDS VSP, HP SVSP or 3PAR, Oracle/Sun 7000?

    Perhaps it is, perhaps it is a reaction to XIV, and perhaps it is a realization that IBM has a lot of IP that could be combined into a solution to respond to a market need among many other scenarios. However, IBM has had a virtualization platform with a decent installed base in the form of SVC which happens to be at the heart of the V7000.

  • Does this mean IBM is jumping on the bandwagon of using off the shelf servers instead of purpose built hardware for storage systems, like Oracle, HP and others are doing?

    If you are new to storage or IBM, it might appear that way; however, IBM has been shipping storage systems based on general purpose servers for a couple of decades now. Granted, some of those products are based on IBM PowerPC (e.g. the Power platform) also used in their pSeries, formerly known as the RS/6000. For example, the DS8000 series, like its predecessors the ESS (aka Shark) and VSS before that, has been based on the Power platform. Likewise, SVC has been based on general purpose processors since its inception.

    Likewise, while generally deployed only in two node pairs, the DS8000 is architected to scale into many more nodes than what has shipped, meaning that IBM has had clustered storage for some time; granted, some of their competitors will dispute that.

  • How does the V7000 stack up from a performance standpoint?

    Interestingly, IBM has traditionally been very good, if not out front, at running public benchmarks and workload simulations ranging from SPC to TPC to SPEC to Microsoft ESRP among others for all of their storage systems except one (e.g. XIV). However, true to traditional IBM systems and storage practices, just a couple of weeks after the V7000 launch, IBM released the first wave of performance comparisons, including SPC results for the V7000, which can be seen here to compare with others.

  • What do I think of the V7000?

    Like other products, both in the IBM storage portfolio and from other vendors, the V7000 has its place, and in that place, which needs to be further articulated by IBM, it has a bright future. I think the V7000 will be a good IBM based solution for many environments, particularly those that were looking at XIV, as well as a competitor to solutions from Dell, EMC, HDS, HP, NetApp and Oracle along with some smaller startup providers.

Comments, thoughts and perspectives:

IBM is part of a growing industry trend realizing that the data footprint reduction (DFR) focus should expand beyond backup and dedupe to span an entire organization using many different tools, techniques and best practices. These include archiving of databases, email and file systems for both compliance and non-compliance purposes; backup/restore modernization or redesign; and compression (real-time for online data, post processing for other data). In addition, DFR includes consolidation of storage capacity and performance (e.g. fast 15K SAS, caching or SSD), data management (including some data deletion where practical), data dedupe, space saving snapshots such as copy on write or redirect on write, and thin provisioning, as well as virtualization for both consolidation and enabling agility.
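To make a couple of those DFR techniques concrete, here is a minimal, hypothetical sketch (not any vendor's implementation) showing how chunk-level dedupe plus compression shrink a physical footprint relative to the logical data stored. The `chunk_size` and sample data are assumptions for illustration only; real systems such as backup appliances use variable-size chunking and far more sophisticated indexing.

```python
import hashlib
import zlib

def data_footprint(blobs, chunk_size=4096):
    """Toy DFR illustration: fixed-size chunk dedupe followed by
    compression of each unique chunk. Returns (logical, physical) bytes."""
    unique = {}  # chunk digest -> compressed chunk stored once
    logical = 0
    for blob in blobs:
        logical += len(blob)
        for i in range(0, len(blob), chunk_size):
            chunk = blob[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in unique:  # store each distinct chunk only once
                unique[digest] = zlib.compress(chunk)
    physical = sum(len(c) for c in unique.values())
    return logical, physical

# Two "files" that share most of their content, as backup copies often do
file_a = b"A" * 8192 + b"unique tail one"
file_b = b"A" * 8192 + b"unique tail two"
logical, physical = data_footprint([file_a, file_b])
print(logical, physical)  # physical footprint is a small fraction of logical
```

The shared 8 KB prefix dedupes to a single stored chunk, and the highly repetitive data compresses well on top of that, which is why combining multiple DFR techniques typically reduces the footprint more than any single technique alone.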

IBM has some great products; however, with such a diverse product portfolio, better navigation and messaging around what to use when, where and why is needed, not to mention clearing up the confusion over the current product du jour.

As has been the case for the past couple of years, let's see how this all plays out in a year or so. Meanwhile, cast your vote or see the results of others as to whether XIV remains relevant. Likewise, join in on the new poll below as to whether the V7000 is now relevant or not.

Note: As with the ongoing is XIV relevant polling (above), for the new is the V7000 relevant polling (below) you are free to vote early, vote often, and vote for those who cannot or care not to vote.

Here are some links to read more about this and related topics:

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved