Amazon cloud storage options enhanced with Glacier


In case you missed it, Amazon Web Services (AWS) has enhanced its cloud services (Elastic Compute Cloud or EC2) along with its storage offerings. These include Relational Database Service (RDS), DynamoDB, Elastic Block Store (EBS), and Simple Storage Service (S3). Enhancements include new functionality along with improved availability and reliability in the wake of recent events (outages or service disruptions). Earlier this year AWS announced their Cloud Storage Gateway solution, an analysis of which you can read here. More recently AWS announced provisioned IOPS among other enhancements (see the AWS what's new page here).

Amazon Web Services logo

Before announcing Glacier, options for Amazon storage services relied on general purpose S3, or EBS along with other Amazon services. S3 has given users the ability to select different regions (i.e., geographic areas where data is stored) along with a level of reliability at different price points for the applications or services being offered.

Note that AWS S3 flexibility lends itself to individuals or organizations using it for various purposes, ranging from storing backup or file sharing data to serving as a target for other cloud services. S3 pricing varies depending on which region you select as well as whether you choose standard or reduced redundancy. As its name implies, reduced redundancy trades a lower level of redundancy, and with it a longer recovery time objective (RTO), in exchange for a lower cost per given amount of space capacity.

AWS has now announced a new class or tier of storage service called Glacier, which as its name implies moves very slowly while supporting large amounts of data. In other words, it targets inactive or seldom accessed data where the emphasis is on ultra-low cost in exchange for a longer RTO. In exchange for an RTO that AWS states can be measured in hours, your monthly storage cost can be as low as 1 cent per GByte, or about 12 cents per GByte per year, plus any extra fees (see here).
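As a quick sanity check on that math, here is a back-of-the-envelope sketch of storage-only cost at the quoted penny per GByte per month list price (retrieval and request fees, which vary, are deliberately left out):

```python
# Back-of-the-envelope Glacier storage cost, using the quoted
# $0.01 USD per GB per month list price. Extra request and
# retrieval fees are excluded from this sketch.
PRICE_PER_GB_MONTH = 0.01  # USD, quoted 2012 list price

def yearly_storage_cost(gigabytes: float) -> float:
    """Annual storage-only cost in USD for a given capacity."""
    return gigabytes * PRICE_PER_GB_MONTH * 12

print(yearly_storage_cost(1))      # 1 GB parked for a year
print(yearly_storage_cost(1000))   # ~1 TB archive for a year
```

At these rates a terabyte-scale archive runs on the order of a hundred dollars a year, which is what makes the slow access tradeoff interesting.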

Here is a note that I received from the Amazon Web Services (AWS) team:

Dear Amazon Web Services Customer,
We are excited to announce the immediate availability of Amazon Glacier – a secure, reliable and extremely low cost storage service designed for data archiving and backup. Amazon Glacier is designed for data that is infrequently accessed, yet still important to keep for future reference. Examples include digital media archives, financial and healthcare records, raw genomic sequence data, long-term database backups, and data that must be retained for regulatory compliance. With Amazon Glacier, customers can reliably and durably store large or small amounts of data for as little as $0.01/GB/month. As with all Amazon Web Services, you pay only for what you use, and there are no up-front expenses or long-term commitments.

Amazon Glacier is:

  • Low cost – Amazon Glacier is an extremely low-cost, pay-as-you-go storage service that can cost as little as $0.01 per gigabyte per month, irrespective of how much data you store.
  • Secure – Amazon Glacier supports secure transfer of your data over Secure Sockets Layer (SSL) and automatically stores data encrypted at rest using Advanced Encryption Standard (AES) 256, a secure symmetric-key encryption standard using 256-bit encryption keys.
  • Durable – Amazon Glacier is designed to give average annual durability of 99.999999999% for each item stored.
  • Flexible – Amazon Glacier scales to meet your growing and often unpredictable storage requirements. There is no limit to the amount of data you can store in the service.
  • Simple – Amazon Glacier allows you to offload the administrative burdens of operating and scaling archival storage to AWS, and makes long term data archiving especially simple. You no longer need to worry about capacity planning, hardware provisioning, data replication, hardware failure detection and repair, or time-consuming hardware migrations.
  • Designed for use with other Amazon Web Services – You can use AWS Import/Export to accelerate moving large amounts of data into Amazon Glacier using portable storage devices for transport. In the coming months, Amazon Simple Storage Service (Amazon S3) plans to introduce an option that will allow you to seamlessly move data between Amazon S3 and Amazon Glacier using data lifecycle policies.

Amazon Glacier is currently available in the US-East (N. Virginia), US-West (N. California), US-West (Oregon), EU-West (Ireland), and Asia Pacific (Japan) Regions.

A few clicks in the AWS Management Console are all it takes to set up Amazon Glacier. You can learn more by visiting the Amazon Glacier detail page, reading Jeff Barr's blog post, or joining our September 19th webinar.
Sincerely,
The Amazon Web Services Team


What is AWS Glacier?

Glacier is low-cost, lower-performance (i.e., slower access time) storage suited to applications such as archiving and other inactive or idle data that you are not in a hurry to retrieve. Pricing is pay as you go and can be as low as $0.01 USD per GByte per month (other optional fees may apply, see here) depending on the availability zone. Availability zones or regions include US West coast (Oregon or Northern California), US East Coast (Northern Virginia), Europe (Ireland) and Asia (Tokyo).


Now, what is understood should not have to be discussed; however, just to be safe, pity the fool who complains about signing up for AWS Glacier due to its penny per month per GByte cost and it being too slow for their iTunes or videos, as you know it's going to happen. Likewise, you know that some creative vendor or their surrogate is going to try to show a mismatch of AWS Glacier vs. their faster service that caters to a different usage model; it is just a matter of time.


Let's be clear: Glacier is designed for low-cost, high-capacity, slow access of infrequently accessed data such as an archive or other items. This means that you will be more than disappointed if you try to stream a video, or access a document or photo from Glacier as you would from S3, EBS or any other cloud service. The reason is that Glacier is designed with the premise of low-cost, high-capacity, high availability at the cost of slow access time or performance. How slow? AWS states that you may have to wait several hours to reach your data when needed; however, that is the tradeoff. If you need faster access, pay more, or find a different class and tier of storage service to meet that need; perhaps for those with the real need for speed, AWS SSD capabilities ;).
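The class-of-service decision above can be sketched as a simple picker keyed on the RTO you can tolerate. This is purely illustrative (the function, tier names and thresholds are my own, not an AWS API), but it captures the tradeoff:

```python
# Illustrative mapping (not an AWS API) of required recovery time
# to a class of storage service: Glacier for hours-scale RTOs,
# S3 for near-online access, EBS or SSD-backed storage when
# speed matters most. Thresholds are assumptions for the sketch.
def pick_storage_class(rto_hours: float) -> str:
    """Pick a storage class given the tolerable RTO in hours."""
    if rto_hours >= 4:        # not in a hurry: archive tier
        return "glacier"
    if rto_hours >= 0.25:     # minutes to an hour: general purpose
        return "s3"
    return "ebs-or-ssd"       # need it now: pay for performance

print(pick_storage_class(24))    # overnight archive restore
print(pick_storage_class(0.01))  # streaming or interactive access
```

The point is not the exact thresholds, rather that the RTO requirement, not the price tag, should drive which tier a given application lands on.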

Here is a link to a good post over at Planforcloud.com comparing Glacier vs. S3, which is like comparing apples and oranges; however, it helps to put things into context.


In terms of functionality, Glacier security includes Secure Sockets Layer (SSL) for data in transit, Advanced Encryption Standard (AES) 256 (256-bit encryption keys) encryption for data at rest, along with AWS Identity and Access Management (IAM) policies.

Glacier is persistent storage designed for 99.999999999% durability, with data automatically placed in different facilities on multiple devices for redundancy when data is ingested or uploaded. Self-healing is accomplished with automatic background data integrity checks and repair.
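Glacier's actual internals are not public, but the self-healing idea described above can be illustrated in miniature: keep a checksum for each stored object, recompute it periodically, and repair from a redundant copy on mismatch. A minimal sketch, assuming SHA-256 as the integrity check:

```python
import hashlib

# Miniature version of background integrity check and repair:
# detect corruption via a stored checksum, heal from a replica.
# (Illustrates the general technique only, not Glacier's design.)
def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

store = {"archive-001": b"important records"}
replica = {"archive-001": b"important records"}
manifest = {k: checksum(v) for k, v in store.items()}

store["archive-001"] = b"important recorbs"  # simulate bit rot

for key, expected in manifest.items():
    if checksum(store[key]) != expected:   # detect the corruption...
        store[key] = replica[key]          # ...and repair from replica

print(checksum(store["archive-001"]) == manifest["archive-001"])
```

Run continuously in the background across facilities and devices, this detect-and-repair loop is what turns redundancy into the many-nines durability figure quoted above.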

Scale and flexibility are bound by the size of your budget or credit card spending limit along with which availability zones and other options you choose. There is integration with other AWS services, including Import/Export, where you can ship large amounts of data to Amazon using physical media. Note that AWS has also made a statement of direction (SOD) that S3 will be enhanced to seamlessly move data in and out of Glacier using data policies.
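Since the S3-to-Glacier transition was only a statement of direction at the time of writing, here is a sketch of what such a policy rule might look like expressed as data. The field names are my assumptions, modeled loosely on the shape of existing S3 lifecycle rules, not a released API:

```python
# Hypothetical policy-driven tiering rule: after an object under
# the "backups/" prefix ages 90 days, move it from S3 to Glacier.
# Field names are assumptions for illustration, not a shipped API.
lifecycle_rule = {
    "ID": "archive-old-backups",
    "Prefix": "backups/",           # apply to objects with this prefix
    "Status": "Enabled",
    "Transition": {
        "Days": 90,                 # age threshold before tiering down
        "StorageClass": "GLACIER",  # destination class of service
    },
}

print(lifecycle_rule["Transition"]["StorageClass"])
```

The attraction of expressing tiering as declarative rules like this is that data moves to the cheap tier automatically, with no per-object administration.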

Part of stretching budgets for organizations of all sizes is to avoid treating all data and applications the same (a key theme of data protection modernization). This means classifying and addressing how and where different applications and data are placed on various types of servers and storage, along with revisiting and modernizing data protection.

While the low cost of Amazon Glacier is an attention getter, I am looking for more than just the lowest cost, which means I am also looking for reliability and security, among other things, to gain and keep confidence in my cloud storage services providers. As an example, a few years ago I switched from one cloud backup provider to another, not based on cost, rather on functionality and the ability to leverage the service more extensively. In fact, I could switch back to the other provider and save money on the monthly bills; however, I would end up paying more in lost time, productivity and other costs.


What do I see as the barrier to AWS Glacier adoption?

Simple: getting vendors and other service providers to enhance their products or services to leverage the new AWS Glacier storage category. This means backup/restore, BC and DR vendors ranging from Amazon itself (e.g. releasing S3 to Glacier automated policy based migration), to Commvault, Dell (via their acquisitions of AppAssure and Quest), EMC (Avamar, Networker and other tools), HP, IBM/Tivoli, Jungledisk/Rackspace, NetApp, Symantec and others, not to mention cloud gateway providers, all of whom will need to add support for these new capabilities.

As an Amazon EC2 and S3 customer, it is great to see Amazon continue to expand their cloud compute, storage, networking and application service offerings. I look forward to actually trying out Amazon Glacier for storing encrypted archive or inactive data to complement what I am doing. Since I am not using the Amazon Cloud Storage Gateway, I am looking into how I can use Rackspace Jungledisk to manage an Amazon Glacier repository similar to how it manages my S3 stores.

Some more related reading:
Only you can prevent cloud data loss
Data protection modernization, more than swapping out media
Amazon Web Services (AWS) and the NetFlix Fix?
AWS (Amazon) storage gateway, first, second and third impressions

As of now, it looks like I will have to wait until either Jungledisk adds native support for Glacier, as it has today for managing my S3 storage pool, or the automated policy based movement between S3 and Glacier is transparently enabled.

Ok, nuff said for now

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

IBM buys flash solid state device (SSD) industry veteran TMS

How much flash (or DRAM) based Solid State Device (SSD) do you want or need?

IBM recently took a flash step, announcing that it wants and needs more SSD capabilities in different packaging and functionality to meet the demands and opportunities of customers, business partners and prospects by acquiring Texas Memory Systems (TMS).

IBM buys SSD flash vendor TMS

Unlike most of the current generation of SSD vendors, which (aside from those actually making the dies, chips or semiconductors, or the SSD drives themselves) are startups or relatively new companies, TMS is an industry veteran. Where most current SSD vendors' experience (as companies) is measured in months or at best years, TMS has seen several generations and SSD adoption cycles during its multi-decade existence.


What this means is that TMS has been around during past dynamic random access memory (DRAM) based SSD cycles or eras, as well as being an early adopter and player in the current NAND flash SSD era or cycle.

Granted, some in the industry do not consider the previous DRAM based generation of products to be SSD, and vice versa, some DRAM era SSD aficionados do not consider NAND flash to be real SSD. Needless to say, there are many faces or facets to SSD, ranging across media (DRAM and NAND flash among others) along with packaging for different use cases and functionality.

IBM, along with some other vendors, recognizes that the best type of IO is the one that you do not have to do. However, the reality is that some input/output (IO) operations need to be done with computer systems. Hence the second best type of IO is the one that can be done with the least impact to applications in a cost-effective way that meets specific service level objective (SLO) requirements. This includes leveraging main memory or DRAM as cache or buffers; server-based PCIe SSD flash cards as cache or target devices; internal SSD drives; external SSD drives; and SSD drives and flash cards in traditional storage systems or appliances as well as purpose-built SSD storage systems.
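The "best IO is the one you do not have to do" idea is just caching, and it can be shown in a few lines. A minimal sketch (class and names are my own, for illustration): a cache sits in front of a slow backing store and counts how many backing-store reads actually happen.

```python
# "The best IO is the one you do not have to do": a minimal cache
# in front of a slow backing store. The second read of the same
# block is served from memory, so no backing-store IO occurs.
class CachedStore:
    def __init__(self, backing: dict):
        self.backing = backing
        self.cache = {}
        self.backing_reads = 0   # count the IOs we actually did

    def read(self, key):
        if key not in self.cache:        # miss: do the slow IO once
            self.backing_reads += 1
            self.cache[key] = self.backing[key]
        return self.cache[key]           # hit: no IO at all

store = CachedStore({"block-7": b"data"})
store.read("block-7")
store.read("block-7")
print(store.backing_reads)   # 1 — the second read cost nothing
```

Whether the cache is DRAM in the server, a PCIe flash card, or an SSD tier in a storage system, the economics are the same: every hit is an IO that never reaches the slower, busier device behind it.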

While TMS does not build the actual NAND flash single level cell (SLC) or multi-level cell (MLC) SSD drives (like those built by Intel, Micron, Samsung, SanDisk, Seagate, STEC and Western Digital (WD) among others), TMS does incorporate NAND flash chips or components that are also used by others who make NAND flash PCIe cards and storage systems.


IMHO this is a good move for both TMS and IBM, both of whom have been StorageIO clients in the past (here, here and here; that was a disclosure, btw ;), as it gives TMS, their partners and customers a clear path and a large organization able to invest in the technologies and solutions on a go forward basis. In other words, TMS, which had been looking to be bought, gets certainty about its future, as do its clients.

IBM, which has used SSD based components such as PCIe flash SSD cards and SSD based drives from various suppliers, gets a PCIe SSD card of its own, along with purpose-built, mature SSD storage systems that have lineages in both DRAM and NAND flash-based experiences. Thus IBM controls some of its own SSD intellectual property (IP) for PCIe cards that can in theory go into its servers, as well as into storage systems and appliances that use Intel based (e.g. xSeries from IBM) and IBM Power processor based servers as a platform. Examples include the DS8000 (Power processor based), along with the Intel based XIV, SONAS, V7000, SVC and ProtecTier, and PureSystems (some of which are Power based).

In addition, IBM also gets a field proven, purpose-built, all SSD storage system to compete with those from startups (Kaminario, Pure Storage, SolidFire, Violin and Whiptail among others), as well as those being announced by competitors such as EMC (e.g. Project X and Project Thunder), in addition to SSD drives that can go into servers and storage systems.

The question should not be if SSD is in your future, rather where you will be using it: in the server or a storage system, as a cache or a target, as a PCIe target or cache card, as a drive, or as a storage system. This also raises the questions of how much SSD you need along with what type (flash or DRAM), for what applications and how configured, among other topics.

Storage and Memory Hierarchy diagram where SSD fits

What this means is that there are many locations and places where SSD fits; one type of product or model does not fit or meet all requirements. Thus IBM, with their acquisition of TMS along with presumed partnerships with other SSD component suppliers, will be able to offer a diverse SSD portfolio.


The industry trend involves vendors such as Cisco, Dell, EMC, IBM, HP, NetApp, Oracle and others, all of whom are either physical server and storage vendors or, in the case of EMC, a virtual server player partnered with Cisco (vBlock and VCE) and Lenovo for physical servers.

Different types and locations for SSD

Thus it only makes sense for those vendors to offer diverse SSD products and solutions to meet different customer and application needs vs. having a single solution that users adapt to. In other words, if all you have is a hammer, everything needs to look like a nail; however, if you have a tool box of various technologies, then it comes down to being able to leverage them, including articulating what to use when, where, why and how for different situations.

I think this is a good move for both IBM and TMS. Now let's watch how IBM and TMS go beyond the press release, slide decks and webex briefings covering why it is a good move, and see the results of what is actually accomplished near and long-term.

Read additional industry trends and perspective commentary about IBM buying TMS here and here, as well as check out these related posts and content:

How much SSD do you need vs. want?
What is the best kind of IO? The one you do not have to do
Is SSD dead? No, however some vendors might be
Has SSD put Hard Disk Drives (HDDs) On Endangered Species List?
Why SSD based arrays and storage appliances can be a good idea (Part I)
EMC VFCache respinning SSD and intelligent caching (Part I)
SSD options for Virtual (and Physical) Environments: Part I Spinning up to speed on SSD
Speaking of speeding up business with SSD storage
Part I: PureSystems, something old, something new, something from big blue
The Many Faces of Solid State Devices/Disks (SSD)
SSD and Green IT moving beyond green washing

Meanwhile, congratulations to both IBM and TMS, ok, nuff said (for now).

Cheers Gs


Open Data Center Alliance (ODCA) publishes two new cloud usage models

The Open Data Center Alliance (ODCA) has announced and published more documents for data center customers of cloud services. These new cloud usage models address customer demands for interoperability of various clouds and services. Earlier usage models covered Infrastructure as a Service (IaaS) among other topics; these are now joined by new usage models for Software as a Service (SaaS) and Platform as a Service (PaaS), along with a foundational document for cloud interoperability.

Unlike most industry trade groups or alliances that are vendor driven or centric, ODCA is a consortium of global IT leaders (e.g. customers) that is vendor independent and comprises a 12 member steering committee from member companies (e.g. customers); learn more about ODCA here.

Disclosure note: StorageIO is an ODCA member; visit here to become an ODCA member.

From the ODCA announcement of the new documents:

The documents detail expectations for market delivery against the organization's mission of open, industry standard cloud solution adoption, and discussions have already begun with providers to help accelerate delivery of solutions based on these new requirements. This suite of requirements was joined by a Best Practices document from National Australia Bank (NAB) outlining carbon footprint reductions in cloud computing. NAB's paper illustrates their leadership in innovative methods to report carbon emissions in the cloud and aligns their best practices to underlying Alliance requirements. All of these documents are available in the ODCA Documents Library.

The PaaS interoperability usage model outlines requirements for rapid application deployment, application scalability, application migration and business continuity. The SaaS interoperability usage model makes applications available on demand, and encourages consistent mechanisms, enabling cloud subscribers to efficiently consume SaaS via standard interactions. In concert with these usage models, the Alliance published the ODCA Guide to Interoperability, which describes proposed requirements for interoperability, portability and interconnectivity. The documents are designed to ensure that companies are able to move workloads across clouds.

It is great to see IT customer driven or centric groups step up and actually deliver content and material to help their peers, or in some cases competitors, that complements information provided by vendors and vendor driven trade groups.

As with technologies, tools and services that are often seen as competitive, it would be a mistake to view ODCA as being in competition with other industry trade groups and organizations, or vice versa. Rather, IT organizations and vendors can and should leverage the different content from the various sources. This is an opportunity, for example, for vendors to learn more about what customers are thinking or concerned about, as opposed to telling IT organizations what to be looking at, and vice versa.

Granted, some marketing organizations or even trade groups may not like that and may view groups such as ODCA as giving away control of who decides what is best for them. Smart vendors, vars, business partners, consultants and advisors are and will be leveraging material and resources such as ODCA's, and likewise, groups like ODCA are open to a diverse membership, unlike some pay to play industry vendor centric trade groups. If you are a vendor, var or business partner, don't look at ODCA as a threat; instead, explore how your customers or prospects may be involved with, or using, ODCA material and leverage that as a differentiator between you and your competitor.

Likewise, don't be scared of vendor centric industry trade groups, alliances or consortiums; even the pay to play ones can have some value, although some have more value than others. For example, from a storage and storage networking perspective, there is the Storage Networking Industry Association (SNIA) along with its various groups focused on Green and Energy along with Cloud Data Management Interface (CDMI) related topics among others. There is also the SCSI Trade Association (STA) along with the Open Virtualization Alliance (OVA), not to mention the OpenFabrics Alliance (OFA), Open Networking Foundation (ONF) and Computer Measurement Group (CMG), among many others that do good work and offer value with diverse content and offerings, some of which are free, including to non members.

Learn more about the ODCA here, and access various documents, including the usage models, in the ODCA document library here.

While you are at it, why not join StorageIO and other members by signing up to become a part of the ODCA here.

Ok, nuff said for now.

Cheers Gs


Over 1,000 entries now on the StorageIO industry links page


Is your company, organization, or one that you are a fan of or represent, listed on the StorageIO industry links page? (Click here to learn more about it.)

The StorageIO industry links page has been updated with over a thousand different industry related companies, vendors, vars, trade groups, part and solution suppliers, along with cloud and managed service providers. The common theme with these industry links is information and data infrastructures, which means servers, storage, IO and networking, hardware, software, applications and tools, services, products and related items for traditional, virtual and cloud environments.

StorageIO server storage IO networking cloud and virtualization links

The industry links page is accessed from the StorageIO main web page via the Tools and Links menu tab, or via the URL https://storageio.com/links. An example of the StorageIO industry links page is shown below with six different menu tabs in alphabetical order.


Know of a company, service or organization that is not listed on the links page? If so, send an email note to info at storageio.com. If your company or organization is listed, contact StorageIO to discuss how to expand your presence on the links page and other related options.

Visit the updated StorageIO industry links page, watch for more updates, and click here to learn more about the links page.

Ok, nuff said for now.

Cheers Gs


Two companies on parallel tracks moving like trains offset by time: EMC and NetApp

View from VIA Rail Canada taken using Greg's iFlip

I see some similarities and parallels between two competing companies. Those companies happen to be in the same sector (e.g. IT data storage), however offset by time (about a decade or so), subject to continued execution by both.

Those two companies are EMC and NetApp.

Some people might assert that these two companies are complete opposites, perhaps claiming that one is on the up swing while the other is on the down path (I have heard claims and counter claims of both being on the other path). I will leave the discussion or debate of which is on the up path and which is on the down path to the Twitterville and blogosphere ultimate tag team mud wrestling arena or YouTube video rooms.

I see EMC and NetApp a bit differently, which you can take for what it is, simply an opinion or perspective, having been both a competitor and a partner of both when I was on the vendor side of the table and later covering the two as an industry analyst.

Without going too far down the memory lane route, in a nutshell, I recall when EMC was still a fledgling startup who wanted to sell me (I was on the customer side then) rebranded Fujitsu disk drives to attach to my VAX/VMS systems, and memory for our mainframes. Come to think of it, Emulex was also selling disk drives back then before reinventing themselves later as an HBA and hub vendor.

Later, as a vendor, around late 94 or early 95, it was the up and coming little bay area NAS filer appliance vendor (e.g. the toaster era) that we partnered with, including a very brief OEM deal involving repackaging their product, which was NetApp, or Network Appliance as they were formerly known. Once that ended after a year or so, NetApp became a competitor, as was EMC, who at the time had the Symmetrix as the main act and was about to do the EPOCH backup and McData acquisitions as well as landing the HP OEM deal for open systems.

Ironically, NetApp set out to knock off Auspex, which happened fairly quickly, while EMC was struggling to get its NAS act together with the early DART behemoth while successfully knocking out IBM and other entrenched high-end solutions. In a twist of fate, the company I was working for ended up selling off its RAID patents (initially a few, then later all of them) to EMC for some cash and later transitioned out of the hardware business, becoming simply a VAR of EMC (that was MTI).

While I was at INRANGE, which later merged into CNT before being acquired by McData (I left before that) and then Brocade, both EMC and NetApp were partners across different product lines.

What they have in common

Ok, enough of the memory lane stuff; let's get back to where the similarities exist.

Back in the mid 90s, EMC was essentially a one trick pony with a very software feature function rich large storage system that sold for a premium, generating lots of cash from its use of cache. Likewise, NetApp, while it has many product offerings and has made some acquisitions, still relies very much on its flagship NAS storage systems, which are also feature function (e.g. software) rich and leverage cache to generate cash.

Both companies are growing in terms of revenues, installed base, partners/OEMs and product diversity. Likewise each company needs to continue expansion into those as well as other adjacent areas.

Can NetApp catch EMC? Maybe, maybe not; however, IMHO the question should be whether there are other areas that NetApp can extend its reach into, causing EMC to react to those, much like how EMC took advantage of opportunities causing IBM and others to react.

Here are some other similarities I see of and for EMC and NetApp:

  • Both have great outreach programs where information is provided without having to ask or dig in a proactive way, yet when something is needed, they give it without fanfare
  • Both are engaging at multiple levels, from customer, to financial and investors, to var, to partner, trade groups, to trade and other media, to analysts to social networking and beyond
  • Both are passionate about their companies, cultures, products, solutions and customers
  • Both can walk the talk, however both also like to talk and see the other balk
  • Both lead by example and not afraid to tell you what they think about something
  • Both embrace social media in connection with traditional mediums for communication with people as opposed to a giant megaphone for talking at or spamming people (when will other vendors figure that out?)
  • Both also are willing to hear what you have to say even if they do not agree with it
  • Neither is scared of the other (or at least not in public)
  • Both cause the other to play and execute a stronger game
  • Both are not above throwing a mud ball or fire cracker at the other
  • Both are not above burying the hatchet and getting along when or where needed
  • Both compete vigorously on some fronts, yet partner (publicly or privately) on other fronts
  • Both have been direct focused with some vars and some OEMs
  • Both started somewhere else and are now going and moving to different places, in some ways returning to their roots, or at least making sure they are not forgotten
  • Both are synonymous with their core focus products and background
  • One comes from an open systems focus working to prove itself in the enterprise
  • One comes from the enterprise establishing itself in SOHO, SMB and other spaces
  • Both have many solutions, some would say long in the tooth, others would say revolutionary
  • Both are growing via organic growth as well as acquisition and partnering
  • Both have celebrity leaders and team role players to support and back them up
  • Both also have deep benches and technical folks in the trenches to get things done
  • Both have developed leadership along with rank and file employees internally
  • Both have gone outside and brought in leadership and skilled players to expand their employee ranks
  • Both are very much involved with server virtualization (Microsoft and VMware)
  • Both are very much involved in storage virtualization and associated management
  • Both are involved with cloud solutions for enabling public or private storage
  • Both are independent storage vendors not part of a larger server organization
  • Both have interoperability programs with other vendors servers and software and networks
  • Both also get beat up about their pricing models for extensive software feature function portfolios associated with respective storage solutions
  • Both get criticized by customers or the industry as is often the case of market leaders

What I see EMC needing to do

  • Articulate where their multiple products and services fit and play into their different target market opportunities while worrying less about the color hue of logos or video backgrounds
  • Avoiding competing with itself or becoming its own major or main competitor
  • Clarify (public and private) cloud confusion, transitioning it into cloud cash and opportunity
  • Minimize or cut channel contention and confusion internally and across partners
  • Remember where they came from and their core competencies, however avoid a death grip on them
  • Look to the future, leverage lessons learned that helped EMC succeed where others failed
  • EMC needs NetApp as a strong NAS competitor as each plays stronger when against the other. This is like watching world-class athletes, artists or musicians that step up their games or works when paired with another

What I see NetApp needing to do

  • Doing an acquisition in an adjacent space, perhaps even a reverse merger of sorts to move up and out into a broader space that complements their core offerings. For example, something outside of the normal comfort zone; arguably Data Domain would have been close to their comfort zone. Likewise, acquiring a software player such as Commvault would be similar to EMC having acquired Legato, Documentum and so forth; that is, NetApp would have to do a series of those. So why not something really big, like a reverse merger or partial acquisition of, say, Symantec's data protection and management group (aka the old Veritas suite including backup, management tools, clustered file server software, volume managers, etc.).
  • In addition to adjacent acquisitions, opportunistic plays such as the recent Bycast move make sense; however, those then need to be integrated and rolled out similar to what EMC has done with so many of their purchases.
  • Minimize or cut channel contention and confusion both internal across products and with partners.
  • NetApp started at the lower end SMB, grew into the SME and now enterprise space; however, they tried with the StorVault and backed out of that market, leaving it to EMC Iomega, Cisco, HP, Dell and others. Maybe they do not need a low-end play; however, I rather liked the low-end StorVault story as well as where it was going. Oh well, needless to say I ended up buying an EMC Iomega IX4 as the StorVault left the market. Hmm, does that mean NetApp should acquire SNAP or Drobo or some other low-end SOHO play? Only if the price is right and there is an existing customer base and channel in place; otherwise it would be a distraction from the core business. BTW, did I mention EMC Legato, oh excuse me, Networker, came from the desktop and SMB environment and grew to the enterprise (yes I know, that is debatable), however is now difficult to put into SOHO environments.
  • Does NetApp need a stronger block storage play, perhaps a 3PAR acquisition? Maybe, perhaps not, depending on whether they are competing for today's market or tomorrow's.
  • Does NetApp need to be acquired? I think they can stay independent; however they need to expand their presence and footprint from a product, partner and customer perspective.
  • NetApp needs a strong NAS competitor in the likes of an EMC, as the competition IMHO makes each stronger, which should play well for customers. Not to mention the back and forth mud ball and fire cracker tossing can be entertaining for some.

What is your take?

Are EMC and NetApp two companies on parallel tracks offset by time and perhaps execution?

Cast your vote and see what others have indicated in the following poll.


Ok, nuff said.

Cheers gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, vSAN and VMware vExpert. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved