Does software eliminate or move location of vendor lockin?

data infrastructure server storage I/O vendor lockin

Updated 1/21/2018

I’m always interested when I hear or read a software vendor, or their value added reseller (VAR) or business partner, claim that their solution eliminates vendor lock-in.

More often than not, I end up amazed if not amused by the claims, which usually should be rephrased as eliminating hardware vendor lock-in.

What is also amazing or amusing is that while some vendors claim to eliminate (hardware) vendor lock-in, there is also some misdirection taking place. While a solution may be architected to cut hardware vendor lock-in, how it is sold or packaged can force a certain vendor's technology into your solution. For example, the EMC Centera software is, in theory and architecture, hardware vendor independent; however, it is sold as a solution (hardware and software). Similarly, Dell sells the DX, which uses software from Caringo and, you guessed it, Dell hardware, among many other comparable scenarios from other vendors.

How about virtualization or other abstraction software tools, along with cloud, object storage, clustered file systems and related tools?

Keep in mind the golden rule of management software and tools, which includes virtualization, cloud stacks and clustered file systems among other similar tools. The golden rule is simply that whoever controls the software and management controls the gold (e.g. your budget). In the case of storage software tools such as virtualization, cloud or object storage, cluster or NAS systems among others, while the claim of eliminating hardware vendor lock-in can be correct depending on how the solution is packaged and sold, the lock-in also moves.

The lock-in moves from the hardware to the software. Even though a particular solution may be architected to use industry standard components, a vendor often packages the solution with hardware to make acquisition easy. In other words, sure, the vendor unlocked you from one vendor's hardware with their software, only to lock you into theirs or somebody else's.

Now granted, it may not be a hard lock (pun intended), rather a soft marketing and deployment packaging decision. However, there are some solutions that pride themselves, at least in their marketing, on hardware independence, only to force you into buying their tin-wrapped software (e.g. an appliance) with their choice of disk drives, network components and other items.

So when a software or solution vendor claims to cut vendor lock-in, ask them if that means hardware vendor lock-in, and whether they are moving or shifting the point of vendor lock-in. Keep in mind that vendor lock-in does not have to be a bad thing if it provides you, the customer, with value. Also keep in mind that only you can prevent vendor lock-in, which is like only you can prevent cloud data loss (actually it's a shared responsibility ;) ).

Additional learning experiences along with common questions (and answers), as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.

What This All Means

Here is my point: so what if a vendor chooses to wrap their software in an appliance to make it easy for you to buy and deploy? However, unless they are willing to work with you on what that hardware will be, perhaps they should think about going a bit easier on the vendor lock-in theme.

In the quest to race away from hardware vendor lock-in, keep your ears and eyes wide open to make sure that you are not fleeing from one point of lock-in to another. In other words, make sure that the cure for your vendor lock-in challenge is not going to be more painful than your current ailment.

What is your take on vendor lock-in? Cast your vote and see results in the following polls.

Is vendor lock-in a good or bad thing?

Who is responsible for managing vendor lock-in?

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Who will be the winner of Oracle's $10 million challenge?

Oracle 10 million dollar challenge ad image

In case you missed it, Oracle has a ten million dollar challenge (here, here and here) to prove that their servers and database software technologies are 5 times faster than IBM's.

The challenge is open to U.S. Fortune 1000 companies running an Oracle 11g data warehouse on an IBM Power system, with up to 10 winners. The offer expires August 31, 2012 and comes with configuration terms. See this URL for official rules: https://oracle.com/IBMchallenge

Click here to view entry form or click on form below.

Oracle 10 million dollar challenge entry form image

Taking a step back for a moment, if you forgot or had not heard, earlier this summer Oracle had their hands slapped by the Better Business Bureau (BBB) National Advertising Division (NAD) over performance claims and ads. IBM had complained to the BBB that Oracle was making unfair marketing claims about their servers and database products (read more here).

Not one to miss a beat or bit or byte of data, not to mention dollars, Oracle has run ads in newspapers and other venues for the Oracle IBM challenge with the winner receiving $10,000,000.00 USD (details here).

Oracle exadata servers image

This begs the question: who wins? The company or entity that can actually stand up and meet the challenge? How about Oracle: do they win if enough people see, hear, talk (or complain) about the ads and challenges? What about the cost? How will Oracle cover that, or is it simply a drop in the bucket of an even larger amount of dollars, potentially valued in the billions (e.g. servers, storage, software, services)?

Now for some fun, using an inflation calculator with 1974 dollars, as that is when the TV show The Six Million Dollar Man made its debut. If you do not know, that is a TV show in which an injured government employee (Steve Austin), played by actor Lee Majors, was rebuilt using bionics in order to be faster and stronger with the then current technology (ok, TV technology). Using the inflation calculator, the 1974 six million dollar man and machine would cost about $27,882,839.76 in 2012 USD (a 364.7% increase).

Going the other way, today's faster, stronger machine (as Oracle calls it) and associated staff, the $10,000,000 challenge prize award, would have cost $2,151,861.17 in 1974 dollars. Note that an amount of compute processing, storage performance and capacity, networking capability and software ability equal to what is available today would have cost far more in 1974 than what the inflation calculator shows. For that, we would need something like a technology inflation (or improvement) calculator.
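
The conversion above is just multiplication by a cumulative inflation factor. Here is a quick sketch, where the factor is derived from the figures already quoted above (the helper names are mine, not from any actual inflation calculator):

```python
# Derive the cumulative 1974-to-2012 inflation factor from the quoted numbers:
# $6,000,000 in 1974 is stated to equal about $27,882,839.76 in 2012.
FACTOR_1974_TO_2012 = 27_882_839.76 / 6_000_000  # ~4.647, a 364.7% increase

def to_2012_dollars(amount_1974: float) -> float:
    """Convert 1974 dollars into 2012 dollars."""
    return amount_1974 * FACTOR_1974_TO_2012

def to_1974_dollars(amount_2012: float) -> float:
    """Convert 2012 dollars back into 1974 dollars."""
    return amount_2012 / FACTOR_1974_TO_2012

six_million_dollar_man = to_2012_dollars(6_000_000)   # ~$27.9 million in 2012
oracle_prize_in_1974 = to_1974_dollars(10_000_000)    # ~$2.15 million in 1974
```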

Learn more about the Oracle challenge here, here and here, as well as the NAD announcement here, and the six million dollar man here.

Ok, nuff said for now.

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Cloud conversations: AWS Government Cloud (GovCloud)

Following earlier cloud conversations posts, cloud computing means many things, from products to services and functionality, positioned at different layers of service delivery or capability (e.g. SaaS, AaaS, PaaS, IaaS and XaaS).

Consequently, it is no surprise to hear different people's opinions, beliefs or perceptions of what is or is not a cloud, their confidence or concerns, or how to use and abuse clouds, among other related themes.

A common theme I hear talking with IT professionals on a global basis centers on confidence in clouds, including reliability, security, privacy, compliance and confidentiality for where data is protected and preserved. This includes data being stored in different geographic locations, ranging from states or regions to countries and continents. What I also often hear are discussions around concerns over data from countries outside the US being stored in the US, or vice versa, given information privacy laws.

Cost is also coming up in many conversations, which is interesting in that many early value propositions presented cloud as being cheaper. As with many things, it depends: some services and usage models can be cheaper on a relative basis, just as some can be more expensive. Think of it this way: for some people, a lease on an automobile can be cheaper in terms of monthly cash flow vs. buying or making loan payments. On the other hand, a purchase or loan payment can have a lower overall cost than a lease, depending on different factors.
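
The lease-vs-buy analogy comes down to simple arithmetic. Here is a sketch with made-up numbers (nothing below comes from an actual price list) showing how the cheaper monthly option can still cost more overall:

```python
# Illustrative numbers only: a lease has a lower monthly payment, but buying
# leaves you with an asset at the end, so the overall cost can come out lower.
def total_cost(monthly_payment: float, months: int, residual_value: float = 0.0) -> float:
    """Total cash out over the term, net of any value retained at the end."""
    return monthly_payment * months - residual_value

lease_total = total_cost(monthly_payment=400, months=36)                     # nothing kept at end
buy_total = total_cost(monthly_payment=550, months=36, residual_value=8000)  # vehicle retains value

# Lease wins on monthly cash flow (400 < 550); buying wins on overall cost here.
cheaper_overall = "buy" if buy_total < lease_total else "lease"
```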

As with many cloud conversations, cost and return on investment (ROI) will vary, just as how the cloud is used to impact your return on innovation (the new ROI) will also vary.

This brings me to something else I hear during my travels and in other conversations with IT practitioners (e.g. customers and users as well as industry pundits): a belief that governments cannot use clouds. Again, it depends on the type of government, the applications, and the sensitivity of the data, among other factors.

Some FUD (fear, uncertainty and doubt) I hear includes blanket statements such as that governments cannot use cloud services, or that cloud services do not exist for governments. Again, it comes down to digging deeper into the conversation: what type of cloud, applications, government function, security and sensitivity, among other factors.

Keep in mind that there are services including those from Amazon Web Services (AWS) such as their Government Cloud (GovCloud) region. Granted, GovCloud is not applicable to all government cloud needs or types of applications or data or security clearances among other concerns.

Needless to say, AWS GovCloud is not the only solution out there on a public (government focused), private or hybrid basis. There are probably even some super double secret ultra-private or hybrid fortified government clouds that most in the government, including experts, are not aware of. However, if those do exist, talking about them is probably also off-limits, even for the experts.

Amazon Web Services logo

Speaking of AWS, here is a link to an analysis of their cloud storage for archiving and inactive big data called Glacier, along with analysis of AWS Cloud Storage Gateway. Also, keep in mind that protecting data in the cloud is a shared responsibility meaning there are things both you as the user or consumer as well as the provider need to do.

Btw, what is your take on clouds? Click here to cast your vote and see what others are thinking about clouds.

Ok, nuff said for now.

Cheers Gs

Amazon cloud storage options enhanced with Glacier

StorageIO industry trend for storage IO

In case you missed it, Amazon Web Services (AWS) has enhanced their cloud services (Elastic Cloud Compute or EC2) along with storage offerings. These include Relational Database Service (RDS), DynamoDB, Elastic Block Store (EBS), and Simple Storage Service (S3). Enhancements include new functionality along with availability and reliability improvements in the wake of recent events (outages or service disruptions). Earlier this year AWS announced their Cloud Storage Gateway solution, an analysis of which you can read here. More recently AWS announced provisioned IOPS among other enhancements (see the AWS what's new page here).

Before Glacier was announced, options for Amazon storage services relied on general purpose S3, or EBS with other Amazon services. S3 has provided users the ability to select different availability zones (e.g. geographical regions where data is stored) along with levels of reliability at different price points for their applications or services being offered.

Note that AWS S3's flexibility lends itself to individuals or organizations using it for various purposes. This ranges from storing backup or file sharing data to being used as a target for other cloud services. S3 pricing options vary depending on which availability zones you select, as well as whether you choose standard or reduced redundancy. As its name implies, reduced redundancy trades lower availability and recovery time objective (RTO) in exchange for a lower cost per given amount of space capacity.

AWS has now announced a new class or tier of storage service called Glacier, which, as its name implies, moves very slowly and is capable of supporting large amounts of data. In other words, it targets inactive or seldom accessed data, where the emphasis is on ultra-low cost in exchange for a longer RTO. In exchange for an RTO that AWS states can be measured in hours, your monthly storage cost can be as low as 1 cent per GByte, or about 12 cents per year per GByte, plus any extra fees (see here).
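
At the quoted rate, the arithmetic is straightforward; here is a quick sketch (storage cost only, since the extra request and retrieval fees mentioned above are deliberately left out):

```python
# Storage-only cost at the quoted Glacier rate of $0.01 per GByte per month.
# Request and retrieval fees are ignored in this sketch; see AWS pricing for those.
RATE_PER_GB_MONTH = 0.01  # USD, as quoted

def monthly_cost(gigabytes: float) -> float:
    return gigabytes * RATE_PER_GB_MONTH

def annual_cost(gigabytes: float) -> float:
    return monthly_cost(gigabytes) * 12

gb_per_year = annual_cost(1)        # ~$0.12 per GByte per year
tb_per_month = monthly_cost(1000)   # ~$10 per month for roughly a TByte
```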

Here is a note that I received from the Amazon Web Services (AWS) team:

Dear Amazon Web Services Customer,
We are excited to announce the immediate availability of Amazon Glacier – a secure, reliable and extremely low cost storage service designed for data archiving and backup. Amazon Glacier is designed for data that is infrequently accessed, yet still important to keep for future reference. Examples include digital media archives, financial and healthcare records, raw genomic sequence data, long-term database backups, and data that must be retained for regulatory compliance. With Amazon Glacier, customers can reliably and durably store large or small amounts of data for as little as $0.01/GB/month. As with all Amazon Web Services, you pay only for what you use, and there are no up-front expenses or long-term commitments.

Amazon Glacier is:

  • Low cost – Amazon Glacier is an extremely low-cost, pay-as-you-go storage service that can cost as little as $0.01 per gigabyte per month, irrespective of how much data you store.
  • Secure – Amazon Glacier supports secure transfer of your data over Secure Sockets Layer (SSL) and automatically stores data encrypted at rest using Advanced Encryption Standard (AES) 256, a secure symmetric-key encryption standard using 256-bit encryption keys.
  • Durable – Amazon Glacier is designed to give average annual durability of 99.999999999% for each item stored.
  • Flexible – Amazon Glacier scales to meet your growing and often unpredictable storage requirements. There is no limit to the amount of data you can store in the service.
  • Simple – Amazon Glacier allows you to offload the administrative burdens of operating and scaling archival storage to AWS, and makes long term data archiving especially simple. You no longer need to worry about capacity planning, hardware provisioning, data replication, hardware failure detection and repair, or time-consuming hardware migrations.
  • Designed for use with other Amazon Web Services – You can use AWS Import/Export to accelerate moving large amounts of data into Amazon Glacier using portable storage devices for transport. In the coming months, Amazon Simple Storage Service (Amazon S3) plans to introduce an option that will allow you to seamlessly move data between Amazon S3 and Amazon Glacier using data lifecycle policies.

Amazon Glacier is currently available in the US-East (N. Virginia), US-West (N. California), US-West (Oregon), EU-West (Ireland), and Asia Pacific (Japan) Regions.

A few clicks in the AWS Management Console are all it takes to set up Amazon Glacier. You can learn more by visiting the Amazon Glacier detail page, reading Jeff Barr's blog post, or joining our September 19th webinar.
Sincerely,
The Amazon Web Services Team

What is AWS Glacier?

Glacier is low-cost, lower performance (e.g. access time) storage suited to applications including archiving and inactive or idle data that you are not in a hurry to retrieve. Pricing is pay as you go and can be as low as $0.01 USD per GByte per month (other optional fees may apply, see here) depending on availability zone. Availability zones or regions include US West coast (Oregon or Northern California), US East coast (Northern Virginia), Europe (Ireland) and Asia (Tokyo).

Now, what is understood should not have to be discussed; however, just to be safe: pity the fool who signs up for AWS Glacier because of its penny per month per GByte cost and then complains about it being too slow for their iTunes or videos, as you know it is going to happen. Likewise, you know that some creative vendor or their surrogate is going to try to show a mismatched comparison of AWS Glacier vs. their faster service that caters to a different usage model; it is just a matter of time.

Let's be clear: Glacier is designed for low-cost, high-capacity, slow access of infrequently accessed data such as archives or other items. This means that you will be more than disappointed if you try to stream a video, or access a document or photo, from Glacier as you would from S3 or EBS or any other cloud service. The reason is that Glacier is designed with the premise of low cost, high capacity and high availability at the cost of slow access time or performance. How slow? AWS states that you may have to wait several hours to reach your data when needed; however, that is the tradeoff. If you need faster access, pay more, or find a different class and tier of storage service to meet that need. Perhaps for those with a real need for speed, there are the AWS SSD capabilities ;).

Here is a link to a good post over at Planforcloud.com comparing Glacier vs. S3, which is like comparing apples and oranges; however, it helps to put things into context.

In terms of functionality, Glacier security includes secure socket layer (SSL), advanced encryption standard (AES) 256 (256-bit encryption keys) data at rest encryption, along with AWS identity and access management (IAM) policies.

Persistent storage is designed for 99.999999999% durability, with data automatically placed in different facilities on multiple devices for redundancy when it is ingested or uploaded. Self-healing is accomplished with automatic background data integrity checks and repair.
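
To put eleven nines in perspective, here is a back-of-envelope sketch (my own arithmetic, not an AWS guarantee) of what that durability figure implies:

```python
# 99.999999999% average annual durability means the chance of losing a given
# object in a year is about 1e-11; expected losses scale linearly with count.
DURABILITY = 0.99999999999  # "eleven nines", as stated for Glacier

def expected_annual_losses(object_count: int) -> float:
    return object_count * (1 - DURABILITY)

# Even with ten billion objects stored, the expected loss is on the order of
# a tenth of an object per year.
losses = expected_annual_losses(10_000_000_000)
```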

Scale and flexibility are bound by the size of your budget or credit card spending limit, along with what availability zones and other options you choose. There is integration with other AWS services including Import/Export, where you can ship large amounts of data to Amazon using different media and mediums. Note that AWS has also made a statement of direction (SOD) that S3 will be enhanced to seamlessly move data in and out of Glacier using data policies.
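
For the curious, that S3-to-Glacier statement of direction eventually took the shape of S3 lifecycle rules. Sketched below is what such a rule looks like as a plain data structure; the rule name, prefix and day count are placeholder values of mine, not anything AWS publishes:

```python
# Sketch of an S3 lifecycle rule that transitions objects to Glacier.
# All names and numbers here are placeholders for illustration.
lifecycle_rule = {
    "ID": "archive-old-backups",          # placeholder rule name
    "Filter": {"Prefix": "backups/"},     # only objects under this prefix
    "Status": "Enabled",
    "Transitions": [
        {"Days": 30, "StorageClass": "GLACIER"},  # move to Glacier after 30 days
    ],
}

# With a modern SDK such as boto3, a rule like this would be applied roughly as:
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="my-bucket",
#       LifecycleConfiguration={"Rules": [lifecycle_rule]},
#   )
```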

Part of stretching budgets for organizations of all sizes is to avoid treating all data and applications the same (a key theme of data protection modernization). This means classifying and addressing how and where different applications and data are placed on various types of servers and storage, along with revisiting and modernizing data protection.

While the low cost of Amazon Glacier is an attention getter, I am looking for more than just the lowest cost; I am also looking for reliability and security among other things to gain and keep confidence in my cloud storage service providers. As an example, a few years ago I switched from one cloud backup provider to another, not based on cost, rather on functionality and the ability to leverage the service more extensively. In fact, I could switch back to the other provider and save money on the monthly bills; however, I would end up paying more in lost time, productivity and other costs.

What do I see as the barrier to AWS Glacier adoption?

Simple: getting vendors and other service providers to enhance their products or services to leverage the new AWS Glacier storage category. This means backup/restore, BC and DR vendors ranging from Amazon itself (e.g. releasing S3 to Glacier automated policy based migration), to Commvault, Dell (via their acquisitions of AppAssure and Quest), EMC (Avamar, Networker and other tools), HP, IBM/Tivoli, Jungledisk/Rackspace, NetApp, Symantec and others, not to mention cloud gateway providers, will need to add support for these new capabilities.

As an Amazon EC2 and S3 customer, it is great to see Amazon continue to expand their cloud compute, storage, networking and application service offerings. I look forward to actually trying out Amazon Glacier for storing encrypted archive or inactive data to complement what I am doing. Since I am not using the Amazon Cloud Storage Gateway, I am looking into how I can use Rackspace Jungledisk to manage an Amazon Glacier repository, similar to how it manages my S3 stores.

Some more related reading:
Only you can prevent cloud data loss
Data protection modernization, more than swapping out media
Amazon Web Services (AWS) and the NetFlix Fix?
AWS (Amazon) storage gateway, first, second and third impressions

As of now, it looks like I will have to wait until either Jungledisk adds native support for Glacier, as it has today for managing my S3 storage pool, or automated policy based movement between S3 and Glacier is transparently enabled.

Ok, nuff said for now

Cheers Gs

IBM buys flash solid state device (SSD) industry veteran TMS

How much flash (or DRAM) based Solid State Device (SSD) do you want or need?

IBM recently took a flash step, announcing that it wants and needs more SSD capability in different packaging and functionality to meet the demands and opportunities of customers, business partners and prospects, by acquiring Texas Memory Systems (TMS).

IBM buys SSD flash vendor TMS

Unlike most of the current generation of SSD vendors, which are startups or relatively new companies (aside from those actually making the dies, chips or semiconductors, or the SSD drives themselves), TMS is an industry veteran. Where most current SSD vendors' experience as companies is measured in months or at best years, TMS has seen several generations and SSD adoption cycles during its multi-decade existence.

IBM buys SSD vendor Texas Memory Systems TMS

What this means is that TMS has been around during past dynamic random access memory (DRAM) based SSD cycles or eras, as well as being an early adopter and player in the current nand flash SSD era or cycle.

Granted, some in the industry do not consider the previous DRAM based generation of products as being SSD, and vice versa, some DRAM era SSD aficionados do not consider nand flash as being real SSD. Needless to say that there are many faces or facets to SSD ranging in media (DRAM, and nand flash among others) along with packaging for different use cases and functionality.

IBM, along with some other vendors, recognizes that the best type of IO is the one that you do not have to do. However, the reality is that some input/output (IO) operations need to be done with computer systems. Hence the second best type of IO is the one that can be done with the least impact to applications, in a cost-effective way, to meet specific service level objective (SLO) requirements. This includes leveraging main memory or DRAM as cache or buffers, along with server-based PCIe SSD flash cards as cache or target devices, internal SSD drives, external SSD drives, and SSD drives and flash cards in traditional storage systems or appliances, as well as purpose-built SSD storage systems.
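
Every cache layer in that hierarchy exploits the same principle: the best IO is the one you do not have to do. A toy Python sketch (the slow read is simulated, and the names are mine):

```python
from functools import lru_cache

io_operations = 0  # counts how many times the "backing store" is actually hit

@lru_cache(maxsize=128)
def read_block(block_id: int) -> bytes:
    """Simulated slow read; the cache in front of it absorbs repeat requests."""
    global io_operations
    io_operations += 1          # stand-in for an expensive disk or network read
    return b"data-%d" % block_id

for _ in range(100):
    read_block(7)               # 100 requests for the same block...

# ...yet only one real IO happened; the other 99 never left the cache.
```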

While TMS does not build the actual nand flash single level cell (SLC) or multi-level cell (MLC) SSD drives (like those built by Intel, Micron, Samsung, SanDisk, Seagate, STEC and Western Digital (WD) among others), TMS does incorporate nand flash chips or components that are also used by others who make nand flash PCIe cards and storage systems.

IMHO this is a good move for both TMS and IBM, both of whom have been StorageIO clients in the past (here, here and here; that was a disclosure btw ;) ), as it gives TMS, their partners and customers a clear path and a large organization able to invest in the technologies and solutions on a go forward basis. In other words, TMS, who had been looking to be bought, gets certainty about their future, as do their clients.

IBM, who has used SSD components such as PCIe flash SSD cards and SSD drives from various suppliers, gets a PCIe SSD card of their own, along with purpose-built, mature SSD storage systems that have lineage in both DRAM and nand flash experience. Thus IBM controls some of its own SSD intellectual property (e.g. IP) for PCIe cards that can in theory go into their servers, as well as storage systems and appliances that use Intel based (e.g. xSeries from IBM) and IBM Power processor based servers as a platform. For example DS8000 (Power processor), and Intel based XIV, SONAS, V7000, SVC, ProtecTier and PureSystems (some of which are Power based).

In addition, IBM also gets a field proven, purpose-built all SSD storage system to compete with those from startups (Kaminario, Pure Storage, SolidFire, Violin and Whiptail among others), as well as those being announced by competitors such as EMC (e.g. Project X and Project Thunder), in addition to SSD drives that can go into servers and storage systems.

The question should not be if SSD is in your future, rather where you will be using it: in the server or a storage system, as a cache or a target, as a PCIe target or cache card, or as a drive or a storage system. This also raises the questions of how much SSD you need, along with what type (flash or DRAM), for what applications, and how configured, among other topics.

Storage and Memory Hierarchy diagram showing where SSD fits

What this means is that there are many locations and places where SSD fits; one type of product or model does not meet all requirements, and thus IBM, with their acquisition of TMS along with presumed partnerships with other SSD component suppliers, will be able to offer a diverse SSD portfolio.

The industry trend involves vendors such as Cisco, Dell, EMC, IBM, HP, NetApp, Oracle and others, all of whom are either physical server and storage vendors or, in the case of EMC, a virtual server player partnered with Cisco (vBlock and VCE) and Lenovo for physical servers.

Different types and locations for SSD

Thus it only makes sense for those vendors to offer diverse SSD products and solutions to meet different customer and application needs, vs. having a single solution that users must adapt to. In other words, if all you have is a hammer, everything needs to look like a nail; however, if you have a toolbox of various technologies, then it comes down to being able to leverage them, including articulating what to use when, where, why and how for different situations.

I think this is a good move for both IBM and TMS. Now let's watch how IBM and TMS go beyond the press release, slide decks and webex briefings covering why it is a good move, to see the results of what is actually accomplished near and long-term.

Read added industry trends and perspective commentary about IBM buying TMS here and here, as well as check out these related posts and content:

How much SSD do you need vs. want?
What is the best kind of IO? The one you do not have to do
Is SSD dead? No, however some vendors might be
Has SSD put Hard Disk Drives (HDDs) On Endangered Species List?
Why SSD based arrays and storage appliances can be a good idea (Part I)
EMC VFCache respinning SSD and intelligent caching (Part I)
SSD options for Virtual (and Physical) Environments: Part I Spinning up to speed on SSD
Speaking of speeding up business with SSD storage
Part I: PureSystems, something old, something new, something from big blue
The Many Faces of Solid State Devices/Disks (SSD)
SSD and Green IT moving beyond green washing

Meanwhile, congratulations to both IBM and TMS, ok, nuff said (for now).

Cheers Gs

Open Data Center Alliance (ODCA) publishes two new cloud usage models

The Open Data Center Alliance (ODCA) has announced and published more documents for data center customers of cloud usage. These new cloud usage models address customer demands for interoperability of various clouds and services. They follow earlier documents covering Infrastructure as a Service (IaaS) among other topics, which are now joined by new Software as a Service (SaaS) and Platform as a Service (PaaS) usage models, along with a foundational document for cloud interoperability.

Unlike most industry trade groups or alliances that are vendor driven or centric, ODCA is a consortium of global IT leaders (e.g. customers) that is vendor independent and comprises a 12 member steering committee from member companies (e.g. customers); learn more about ODCA here.

Disclosure note, StorageIO is an ODCA member, visit here to become an ODCA member.

From the ODCA announcement of the new documents:

The documents detail expectations for market delivery toward the organization's mission of open, industry standard cloud solution adoption, and discussions have already begun with providers to help accelerate delivery of solutions based on these new requirements. This suite of requirements was joined by a Best Practices document from National Australia Bank (NAB) outlining carbon footprint reductions in cloud computing. NAB's paper illustrates their leadership in innovative methods to report carbon emissions in the cloud and aligns their best practices to underlying Alliance requirements. All of these documents are available in the ODCA Documents Library.

The PaaS interoperability usage model outlines requirements for rapid application deployment, application scalability, application migration and business continuity. The SaaS interoperability usage model makes applications available on demand, and encourages consistent mechanisms, enabling cloud subscribers to efficiently consume SaaS via standard interactions. In concert with these usage models, the Alliance published the ODCA Guide to Interoperability, which describes proposed requirements for interoperability, portability and interconnectivity. The documents are designed to ensure that companies are able to move workloads across clouds.

It is great to see IT customer driven or centric groups step up and actually deliver content and material to help their peers, or in some cases competitors, that complements information provided by vendors and vendor driven trade groups.

As with technologies, tools and services that often are seen as competitive, it would be a mistake to view ODCA as being in competition with other industry trade groups and organizations, or vice versa. Rather, IT organizations and vendors can and should leverage the different content from the various sources. This is an opportunity, for example, for vendors to learn more about what customers are thinking or concerned about, as opposed to telling IT organizations what to look at, and vice versa.

Granted, some marketing organizations or even trade groups may not like that and view groups such as ODCA as giving away control of who decides what is best for them. Smart vendors, VARs, business partners, consultants and advisors are and will be leveraging material and resources such as ODCA, and likewise, groups like ODCA are open to a diverse membership unlike some pay to play vendor centric industry trade groups. If you are a vendor, VAR or business partner, don’t look at ODCA as a threat; instead, explore how your customers or prospects may be involved with, or using, ODCA material and leverage that as a differentiator between you and your competitor.

Likewise don’t be scared of vendor centric industry trade groups, alliances or consortiums; even the pay to play ones can have some value, although some have more value than others. For example, from a storage and storage networking perspective there is the Storage Networking Industry Association (SNIA) along with its various groups focused on Green and Energy along with Cloud Data Management Interface (CDMI) related topics among others. There is also the SCSI Trade Association (STA) along with the Open Virtualization Alliance (OVA), not to mention the Open Fabric Alliance (OFA), Open Networking Foundation (ONF) and Computer Measurement Group (CMG) among many others that do good work and offer value with diverse content and offerings, some of which are free, including to non members.

Learn more about the ODCA here, along with access various documents including usage models in the ODCA document library here.

While you are at it, why not join StorageIO and other members by signing up to become a part of the ODCA here.

Ok, nuff said for now.

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Over 1,000 entries now on the StorageIO industry links page

Industry trends and perspective data protection modernization

Is your company or organization, or one that you are a fan of or represent, listed on the StorageIO industry links page (click here to learn more about it)?

The StorageIO industry links page has been updated with over a thousand different industry related companies, vendors, VARs, trade groups, part and solution suppliers along with cloud and managed service providers. The common theme with these industry links is information and data infrastructures, which means servers, storage, IO and networking, hardware, software, applications and tools, services, products and related items for traditional, virtual and cloud environments.

StorageIO server storage IO networking cloud and virtualization links

The industry links page is accessed from the StorageIO main web page via the Tools and Links menu tab, or via the URL https://storageio.com/links. An example of the StorageIO industry links page is shown below with six different menu tabs in alphabetical order.

StorageIO server storage IO networking cloud and virtualization links

Know of a company, service or organization that is not listed on the links page? If so, send an email note to info at storageio.com. If your company or organization is listed, contact StorageIO to discuss how to expand your presence on the links page and other related options.

Visit the updated StorageIO industry links page and watch for more updates, and click here to learn more about the links page.

Ok, nuff said for now.

Cheers Gs


What does new EMC and Lenovo partnership mean?

EMC and EMCworld

The past several weeks have been busy with various merger, acquisition and collaboration activity in the IT and data storage world. Summer time often brings new relationships and even summer marriages. The most recent is EMC and Lenovo announcing a new partnership that includes OEM sourcing of technology, market expansion and other initiatives. Hmm, does anybody remember who EMC’s former desktop and server partner was, or who put Lenovo out for adoption several years ago?

Here is the press release from EMC and Lenovo that you can read yourself vs. me simply paraphrasing it:

Lenovo and EMC Team Up In Strategic Worldwide Partnership
A Solid Step in Lenovo’s Aspiration to Be a Player in Industry Standard Servers and Networked Storage with EMC’s Leading Technology; EMC Further Strengthens Ability to Serve Customers’ Storage Solutions Needs in China and Other Emerging Markets; Companies Agree to Form SMB-Focused Storage Joint Venture
BEIJING, China – August 1, 2012
Lenovo (HKSE: 992) (ADR: LNVGY) and EMC Corporation (NYSE: EMC) today announced a broad partnership that enhances Lenovo’s position in industry standard servers and networked storage solutions, while significantly expanding EMC’s reach in China and other key, high-growth markets. The new partnership is expected to spark innovation and additional R&D in the server and storage markets by maximizing the product development talents and resources at both companies, while driving scale and efficiency in the partners’ respective supply chains.
The partnership is a strong strategic fit, leveraging the two leading companies’ respective strengths, across three main areas:

  • First, Lenovo and EMC have formed a server technology development program that will accelerate and extend Lenovo’s capabilities in the x86 industry-standard server segment. These servers will be brought to market by Lenovo and embedded into selected EMC storage systems over time.
  • Second, the companies have forged an OEM and reseller relationship in which Lenovo will provide EMC’s industry-leading networked storage solutions to its customers, initially in China and expanding into other global markets in step with the ongoing development of its server business.
  • Finally, EMC and Lenovo plan to bring certain assets and resources from EMC’s Iomega business into a new joint venture which will provide Network Attached Storage (NAS) systems to small/medium businesses (SMB) and distributed enterprise sites.

“Today’s announcement with industry leader EMC is another solid step in our journey to build on our foundation in PCs and become a leader in the new PC-plus era,” said Yuanqing Yang, Lenovo chairman and CEO. “This partnership will help us fully deliver on our PC-plus strategy by giving us strong back-end capabilities and business foundation in servers and storage, in addition to our already strong position in devices. EMC is the perfect partner to help us fully realize the PC-plus opportunity in the long term.”
Joe Tucci, chairman and CEO of EMC, said, “The relationship with Lenovo represents a powerful opportunity for EMC to significantly expand our presence in China, a vibrant and very important market, and extend it to other parts of the world over time. Lenovo has clearly demonstrated its ability to apply its considerable resources and expertise not only to enter, but to lead major market segments. We’re excited to partner with Lenovo as we focus our combined energies serving a broader range of customers with industry-leading storage and server solutions.”
In the joint venture, Lenovo will contribute cash, while EMC will contribute certain assets and resources of Iomega. Upon closing, Lenovo will hold a majority interest in the new joint venture. During and after the transition from independent operations to the joint venture, customers will experience continuity of service, product delivery and warranty fulfillment. The joint venture is subject to customary closing procedures including regulatory approvals and is expected to close by the end of 2012.
The partnership described here is not considered material to either company’s current fiscal year earnings.
About Lenovo
Lenovo (HKSE: 992) (ADR: LNVGY) is a $US30 billion personal technology company and the world’s second largest PC company, serving customers in more than 160 countries. Dedicated to building exceptionally engineered PCs and mobile internet devices, Lenovo’s business is built on product innovation, a highly efficient global supply chain and strong strategic execution. Formed by Lenovo Group’s acquisition of the former IBM Personal Computing Division, the Company develops, manufactures and markets reliable, high-quality, secure and easy-to-use technology products and services. Its product lines include legendary Think-branded commercial PCs and Idea-branded consumer PCs, as well as servers, workstations, and a family of mobile internet devices, including tablets and smart phones. Lenovo has major research centers in Yamato, Japan; Beijing, Shanghai and Shenzhen, China; and Raleigh, North Carolina. For more information, see www.lenovo.com.
About EMC
EMC Corporation is a global leader in enabling businesses and service providers to transform their operations and deliver IT as a service. Fundamental to this transformation is cloud computing. Through innovative products and services, EMC accelerates the journey to cloud computing, helping IT departments to store, manage, protect and analyze their most valuable asset — information — in a more agile, trusted and cost-efficient way. Additional information about EMC can be found at www.EMC.com.

StorageIO industry trends and perspectives

What is my take?

Disclosures
I have been buying and using Lenovo desktop and laptop products for over a decade and currently typing this post from my X1 ThinkPad equipped with a Samsung SSD. Likewise I bought an Iomega IX4 NAS a couple of years ago (so I am a customer), am a Retrospect customer (EMC bought and then sold them off), used to be a Mozy user (now a former customer) and EMC has been a client of StorageIO in the past.

Lenovo Thinkpad
Some of my Lenovo(s) and EMC Iomega IX4

Let us take a step back for a moment. Lenovo was the spinout and sale from IBM and has a US base in Raleigh, North Carolina. While IBM still partners with Lenovo for desktops, IBM over the past decade or so has been more strategically focused on big enterprise environments, software and services. Note that IBM has continued enhancing its own Intel based servers (e.g. xSeries), proprietary Power processor series, storage and technology solutions (here, here, here and here among others). However, for the most part, IBM has moved away from catering to the Consumer, SOHO and SMB server, storage, desktop and related technology environments.

EMC on the other hand started out in the data center, growing up to challenge IBM’s dominance of data storage in big environments, to now being a major storage player for big and little data, from enterprise to cloud to desktop to server, consumer to data center. EMC also was partnered with Dell, which competes directly with Lenovo, until that relationship ended a few years ago. EMC for its part has been on a growth and expansion strategy, adding technologies, companies, DNA and ability along with staff in the desktop, server and other spaces from a data, information and storage perspective, not to mention VMware (virtualization and cloud) and RSA (security) among others such as Mozy for cloud backup. EMC is also using more servers in its solutions, ranging from Iomega based NAS to VNX unified storage systems, Greenplum big data to Centera archiving, ATMOS and various data protection solutions among other products.

StorageIO industry trends and perspectives

Note that this is an industry wide trend of leveraging Intel Architecture (IA) along with AMD, Broadcom, and IBM Power among other general-purpose processors and servers as platforms for running storage and data applications or appliances.

Overall, I think that this is a good move for both EMC and Lenovo to expand their reach into adjacent markets, leveraging and complementing each other’s strengths.

Ok, let’s see who is involved in the next IT summer relationship, nuff said for now.

Cheers Gs


Oracle, Xsigo, VMware, Nicira, SDN and IOV: IO IO its off to work they go

StorageIO industry trends and perspectives

In case you missed it, VMware recently announced spending $1.05 billion USD to acquire startup Nicira for its virtualization and software technology that enables software defined networks (SDN). Also last week Oracle was in the news, getting its hands slapped for making misleading advertised performance claims vs. IBM.

On the heels of VMware buying Nicira for software defined networking (SDN), also known as IO virtualization (IOV) and virtualized networking, Oracle is now claiming its own SDN capabilities with the announcement of its intent to acquire Xsigo. Founded in 2004, Xsigo has a hardware platform combined with software that enables attachment of servers to different Fibre Channel (SAN) and Ethernet based (LAN) networks with its version of IOV.

Oracle will be acquiring IO, networking and virtualization hardware and software vendor Xsigo for an undisclosed amount. Xsigo has made its name in the IO virtualization (IOV) and converged networking along with server and storage virtualization space over the past several years, including partnerships with various vendors.

Buzz word bingo

Technology buzzwords and buzz terms can often be a gray area, leaving plenty of room for marketers and PR folks to run with. Case in point: AaaS, Big data, Cloud, Compliance, Green, IaaS, IOV, Orchestration, PaaS and Virtualization among other buzzword bingo or XaaS topics. Since Xsigo has been out front in messaging and industry awareness around IO networking convergence of Ethernet based Local Area Networks (LANs) and Fibre Channel (FC) based Storage Area Networks (SANs), along with embracing InfiniBand, it made sense for them to play to their strength, which is IO virtualization (aka IOV).

To me, among others (here and here and here), it is interesting that Xsigo had not laid claim to being part of the software defined networking (SDN) movement or the affiliated OpenFlow networking initiatives, as happened with Nicira (and Oracle for that matter). When the Oracle marketing and PR folks put out their press release on a Monday morning, some of the media and press, trade industry, financial and general news agencies alike, took the Oracle script hook, line and sinker, running with it.

What was effective is how well many industry trade pubs and their analysts simply picked up the press release story and ran with it in the all too common race to see who can get the news or story out first, or before it actually happens in some cases.

Image of media, news papers

To be clear, not all pubs jumped in, including some of those mentioned by Greg Knieriemen (aka @knieriemen) over at SpeakinginTech highlights. I know some who took the time to call, ask around and leverage their journalistic training to dig, research and find out what this really meant vs. simply taking and running with the script. An example of one of those calls was with Beth Pariseau (aka @pariseautt); you can read her story here and here.

Interestingly enough, the Xsigo marketers had not embraced the SDN term, sticking with the more known (at least in some circles) IOV and virtual IO descriptions. What is also interesting is that just last week Oracle marketing had its hands slapped by the Better Business Bureau (BBB) NAD after IBM complained about unfair performance based advertisements for ExaData.

Oracle Exadata

Hmm, I wonder if the SDN police or somebody else will lodge a similar complaint with the BBB on behalf of those doing SDN?

Both Oracle and Xsigo along with other InfiniBand (and some Ethernet and PCIe) focused vendors are members of the Open Fabric initiative, not to be confused with the group working on OpenFlow.

StorageIO industry trends and perspectives

Here are some other things to think about:

Oracle has a history of doing acquisitions without disclosing terms, as well as doing them based on earn-outs, as was the case with Pillar.

Oracle uses Ethernet in its servers and appliances and has been an adopter of InfiniBand, primarily for node to node communication, however also for server to application connectivity.

Oracle is also an investor in Mellanox, the folks who make InfiniBand and Ethernet products.

Oracle has built various stacks including ExaData (Database machine), Exalogic, Exalytics and Database Appliance in addition to their 7000 series of storage systems.

Oracle has done earlier virtualization related acquisitions including Virtual Iron.

Oracle has a reputation with some of their customers who love to hate them for various reasons.

Oracle has a reputation of being aggressive, even by other market leader aggressive standards.

Integrated solution stacks (aka stack wars) or what some remember as bundles continues and Oracle has many solutions.

What will happen to Xsigo as you know it today (besides what the press releases are saying)?

While Xsigo was not a member of the Open Networking Foundation (ONF), Oracle is.

Xsigo is a member of the Open Fabric Alliance along with Oracle, Mellanox and others interested in servers, PCIe, InfiniBand, Ethernet, networking and storage.

StorageIO industry trends and perspectives

What’s my take?

While there are similarities in that both Nicira and Xsigo are involved with IO Virtualization, what they are doing, how they are doing it, who they are doing it with along with where they can play vary.

Not sure what Oracle paid; however, assuming that it was in the couple of million dollars or less, in cash or a combination with stock, both they and the investors, as well as some of the employees, friends and families, did ok.

Oracle also gets some intellectual property that it can combine with other earlier acquisitions via Sun and Virtual Iron, along with its investment in InfiniBand (also now Ethernet) vendor Mellanox.

Likewise, Oracle gets some extra technology that they can leverage in their various stacked or integrated (aka bundled) solutions for both virtual and physical environments.

For Xsigo customers the good news is that you now know who will be buying the company; however, there should be questions about the future beyond what is being said in press releases.

Does this acquisition give Oracle a play in the software defined networking space like Nicira gives VMware? I would say no, given the hardware dependency; however, it does give Oracle some extra technology to play with.

Likewise, while SDN is an important and popular buzzword topic, since OpenFlow comes up in conversations, perhaps that should be more of the focus vs. whether a solution is all software or hardware and software.

StorageIO industry trends and perspectives

I also find it entertaining how last week the Better Business Bureau (BBB) National Advertising Division (NAD) slapped Oracle’s hands after IBM’s complaint of misleading performance claims about Oracle ExaData. The reason I find it entertaining is not that Oracle had its hands slapped or that IBM complained to the BBB, rather how the Oracle marketers and PR folks came up with a spin around what could be called a proprietary SDN (hmm, pSDN?) story, fed it to the press and media, who then ran with it.

I’m not convinced that this is an all-out launch of a war by Oracle vs. Cisco, let alone any of the other networking vendors, as some have speculated (it makes for good headlines though). Instead I’m seeing it as more of an opportunistic acquisition by Oracle, most likely at a good middle-of-summer price. Now if Oracle really wanted to go to battle with Cisco (and others), then there are others to buy such as Brocade or Juniper. However, there are other opportunities for Oracle to be focused on (or sidetracked by) right now.

Oh, let’s also see what Cisco has to say about all of this, which should be interesting.

Additional related links:
Data Center I/O Bottlenecks Performance Issues and Impacts
I/O, I/O, Its off to Virtual Work and VMworld I Go (or went)
I/O Virtualization (IOV) Revisited
Industry Trends and Perspectives: Converged Networking and IO Virtualization (IOV)
The function of XaaS(X) Pick a letter
What is the best kind of IO? The one you do not have to do
Why FC and FCoE vendors get beat up over bandwidth?

StorageIO industry trends and perspectives

If you are interested in learning more about IOV, Xsigo, or are having trouble sleeping, click here, here, here, here, here, here, here, here, here, here, here, here, here, or here (I think that’s enough links for now ;).

Ok, nuff said for now, as I have probably requalified for being on the Oracle you know what list for not sticking to the story script, oops, excuse me, I mean press release message.

Cheers Gs


Data protection modernization, more than swapping out media

backup, restore, BC, DR and archiving

Have you modernized your data protection strategy and environment?

If not, are you thinking about updating your strategy and environment?

Why modernize your data protection including backup restore, business continuance (BC), high availability (HA) and disaster recovery (DR) strategy and environment?

backup, restore, BC, DR and archiving

Is it to leverage new technology such as disk to disk (D2D) backups, cloud, virtualization, data footprint reduction (DFR) including compression or dedupe?

Perhaps you have or are considering data protection modernization because somebody told you to, or you read about it or watched a video or web cast? Or perhaps your backup and restore are broken, so it’s time to change media or try something different.

Let’s take a step back for a moment and ask the question: what is your view of data protection modernization?

Perhaps it is modernizing backup by replacing tape with disk, or disk with clouds?

Maybe it is leveraging data footprint reduction (DFR) techniques including compression and dedupe?

Data protection, data footprint reduction, dfr, dedupe, compress
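Dedupe and compression savings are commonly expressed as a reduction ratio. As a rough illustrative sketch (the function name and the 10 TB figure are made up for this example, not from any particular product):

```python
# Hypothetical illustration of a data footprint reduction (DFR) ratio
# from before/after capacity numbers. Names and figures are examples only.

def dfr_ratio(logical_bytes: float, stored_bytes: float) -> float:
    """Return the reduction ratio, e.g. 5.0 means 5:1 (logical vs. actually stored)."""
    if stored_bytes <= 0:
        raise ValueError("stored_bytes must be positive")
    return logical_bytes / stored_bytes

# 10 TB of backup data deduped and compressed down to 2 TB is a 5:1 reduction
ratio = dfr_ratio(10e12, 2e12)
print(f"{ratio:.1f}:1 reduction, {100 * (1 - 1 / ratio):.0f}% less capacity consumed")
```

Keep in mind that the achievable ratio varies widely with the data type and how often similar data repeats, which is part of why DFR alone is not a modernization strategy.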

How about instead of swapping out media, changing backup software?

Or what about virtualizing servers moving from physical machines to virtual machines?

On the other hand maybe your view of modernizing data protection is around using a different product ranging from backup software to a data protection appliance, or snapshots and replication.

The above and others certainly fall under the broad group of backup, restore, BC, DR and archiving, however there is another area which is not as much technology as it is techniques, best practices, processes and procedure based. That is, revisit why data and applications are being protected against what applicable threat risks and associated business risks.

backup, restore, BC, DR and archiving

This means reviewing service needs and wants including backup, restore, BC, DR and archiving that in turn drive what data and applications to protect, how often, how many copies and where those are located, along with how long they will be retained.
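One way to make that review concrete is to capture the answers (what to protect, how often, how many copies, where they live, how long they are retained) per application rather than one-size-fits-all. A minimal sketch, with hypothetical application names and values chosen purely for illustration:

```python
# Hypothetical sketch: per-application protection policies capturing what is
# protected, how often, how many copies, where, and for how long.
# All application names and values below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ProtectionPolicy:
    app: str
    frequency_hours: int     # how often a protection copy is made
    copies: int              # how many copies are kept
    locations: tuple         # where copies live (e.g. local disk, offsite, cloud)
    retention_days: int      # how long copies are retained

policies = [
    ProtectionPolicy("email", 24, 2, ("d2d", "offsite-tape"), 90),
    ProtectionPolicy("oltp-db", 1, 3, ("snapshot", "d2d", "cloud"), 35),
    ProtectionPolicy("file-share", 24, 2, ("d2d", "cloud"), 365),
]

# Not all data is treated the same: each app gets its own schedule and retention
for p in policies:
    print(f"{p.app}: every {p.frequency_hours}h, {p.copies} copies, keep {p.retention_days}d")
```

The point of the structure is the conversation it forces, not the code: each field maps back to a service level question that should be answered before any tool is chosen.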

backup, restore, BC, DR and archiving

Modernizing data protection is more than simply swapping out old or broken media like flat tires on a vehicle.

To be effective, data protection modernization involves taking a step back from the technology, tools and buzzword bingo topics to review what is being protected and why. It also means revisiting service level expectations and clarifying wants vs. needs: if it were free, what is wanted; however, at a cost, what is actually required.

backup, restore, BC, DR and archiving

Certainly technologies and tools play a role, however simply using new tools and techniques without revisiting data protection challenges at the source will result in new problems that resemble old problems.

backup, restore, BC, DR and archiving

Hence to support growth with a constrained or shrinking budget while maintaining or enhancing service levels, the trick is to remove complexity and costs.

backup, restore, BC, DR and archiving

This means not treating all data and applications the same; stretching your available resources to be more effective without compromising service is the mantra of modernizing data protection.

Ok, nuff said for now, plenty more to discuss later.

Cheers Gs


Modernizing data protection with certainty

Speaking of and about modernizing data protection, back in June I was invited to be a keynote presenter on industry trends and perspectives at a series of five dinner events (Boston, Chicago, Palo Alto, Houston and New York City) sponsored by Quantum (that is a disclosure btw).

backup, restore, BC, DR and archiving

The theme of the dinner events was an engaging discussion around modernizing data protection with certainty along with clouds, virtualization and related topics. Quantum and one of their business partner resellers started each event with introductions, followed by an interactive discussion led by myself, followed by David Chapa (@davidchapa), who tied the various themes to what Quantum is doing along with some of their customer success stories.

Themes and examples for these events build on my book Cloud and Virtual Data Storage Networking including:

  • Rethinking how, when, where and why data is being protected
  • Big data, little data and big backup issues and techniques
  • Archive, backup modernization, compression, dedupe and storage tiering
  • Service level agreements (SLA) and service level objectives (SLO)
  • Recovery time objective (RTO) and recovery point objective (RPO)
  • Service alignment and balancing needs vs. wants, cost vs. risk
  • Protecting virtual, cloud and physical environments
  • Stretching your available budget to do more without compromise
  • People, processes, products and procedures
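On the RTO/RPO point in the list above, the underlying arithmetic is simple: the worst-case data loss window is bounded by the interval between protection copies, so a schedule can only meet an RPO if copies happen at least that often. A minimal sketch (the function name is mine, not from any product or the events):

```python
# Hypothetical sketch of the recovery point objective (RPO) check: a backup
# or replication schedule meets an RPO only if the interval between
# protection copies is no longer than the RPO itself.

def meets_rpo(copy_interval_hours: float, rpo_hours: float) -> bool:
    """True if the protection copy interval satisfies the RPO objective."""
    return copy_interval_hours <= rpo_hours

print(meets_rpo(24, 4))   # False: nightly backups cannot meet a 4 hour RPO
print(meets_rpo(1, 4))    # True: hourly snapshots can
```

RTO is the separate question of how long the restore itself takes, which depends on the media and technique chosen; the two objectives together drive which tier of protection each application needs.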

Quantum is among the industry leaders with multiple technology and solution offerings for addressing different aspects of data footprint reduction and data protection modernization. These include offerings for physical, virtual and cloud environments along with traditional tape, disk based, compression, dedupe, archive, big data, hardware, software and management tools. A diverse group of attendees has been at the different events, including enterprise and SMB, public, private and government across different sectors.

Following are links to some blog posts that covered first series of events along with some of the specific themes and discussion points from different cities:

Via ITKE: The New Realities of Data Protection
Via ITKE: Looking For Certainty In The Cloud
Via ITKE: Success Stories in Data Protection: Cloud virtualization
Via ITKE: Practical Solutions for Data Protection Challenges
Via David Chapa’s blog

If you missed attending any of the above events, more dates are being added in August and September including stops in Cleveland, Raleigh, Atlanta, Washington DC, San Diego, Connecticut and Philadelphia with more details here.

Ok, nuff said for now, hope to see you at one of the upcoming events.

Cheers
Gs


NAD recommends Oracle discontinue certain Exadata performance claims

I received the following press release in my inbox today from the National Advertising Division (NAD) recommending that Oracle stop making certain performance claims about Exadata after a complaint from IBM.

Oracle Exadata

In case you are not familiar with ExaData, it is a database machine or storage appliance that only supports Oracle database systems (learn more here). Having bought Sun Microsystems a few years back, Oracle moved from being a software vendor whose products competed with other vendors’ software, including IBM’s, while running on hardware from Dell, HP and IBM among others. Now that Oracle is in the hardware business, while you will still find Oracle software products running on its competitors’ hardware (servers and storage), Oracle is also more aggressively competing with those same partners, particularly IBM.

Hmm, to quote Scooby Doo: Rut Roh!

Looks like IBM complained to the Better Business Bureau (BBB) National Advertising Division (NAD) that resulted in the Advertising Self-Regulatory Council (ASRC) making their recommendation below (more about NAD and ASRC can be found here). Based on a billboard sign that I saw while riding from JFK airport into New York City last week, I would not be surprised if a company with two initials that start with an H and end with a P were to file a similar complaint.

I wonder if the large wall size Oracle advertisement that used to be in the entryway to the White Plains (IATA: HPN) airport (e.g. in IBM’s backyard), welcoming you to the terminal as you get off the airplanes, is still there?

The following is the press release that I received:

For Immediate Release
Contact: Linda Bean
212.705.0129

NAD Finds Oracle Took Necessary Action in Discontinuing Comparative Performance Claims for Exadata; Oracle to Appeal NAD Decision

New York, NY – July 24, 2012 – The National Advertising Division has recommended that Oracle Corporation discontinue certain comparative product-performance claims for the company’s Exadata database machines, following a challenge by International Business Machines Corporation. Oracle said it would voluntarily discontinue the challenged claims, but noted that it would appeal NAD’s decision to the National Advertising Review Board.

The advertising claims at issue appeared in a full-page advertisement in the Wall Street Journal and included the following:

  • “Exadata 20x Faster … Replaces IBM Again”
  • “Giant European Retailer Moves Databases from IBM Power to Exadata … Runs 20 Times Faster”

NAD also considered whether the advertising implied that all Oracle Exadata systems are twenty times faster than all IBM Power systems.

The advertisement featured the image of an Oracle Exadata system, along with the statement: “Giant European Retailer Moves Databases from IBM Power to Exadata Runs 20 Times Faster.” The advertisement also offered a link to the Oracle website: “For more details oracle.com/EuroRetailer.” 

IBM argued that the “20x Faster” claim makes overly broad references to “Exadata” and “IBM Power,” resulting in a misleading claim, which the advertiser’s evidence does not support.  In particular, the challenger argued that by referring to the brand name “IBM Power” without qualification, Oracle was making a broad claim about the entire IBM Power systems line of products. 

The advertiser, on the other hand, argued that the advertisement represented a case study, not a line claim, and noted that the sophisticated target audience would understand that the advertisement is based on the experience of one customer – the “Giant European Retailer” referenced in the advertisement.

In a NAD proceeding, the advertiser is obligated to support all reasonable interpretations of its advertising claims, not just the message it intended to convey. In the absence of reliable consumer perception evidence, NAD uses its experienced judgment to determine what implied messages, if any, are conveyed by an advertisement. When evaluating the message communicated by an advertising claim, NAD will examine the claims at issue in the context of the entire advertisement in which they appear.

In this case, NAD concluded that while the advertiser may have intended to convey the message that in one case study a particular Exadata system was up to 20 times faster when performing two particular functions than a particular IBM Power system, Oracle’s general references to “Exadata” and “IBM Power,” along with the bold unqualified headline “Exadata 20x Faster Replaces IBM Again,” conveyed a much broader message.

NAD determined that at least one reasonable interpretation of the challenged advertisement is that all – or a vast majority – of Exadata systems consistently perform 20 times faster in all or many respects than all – or a vast majority – of IBM Power systems. NAD found that the message was not supported by the evidence in the record, which consisted of one particular comparison of one customer’s specific IBM Power system to a specific Exadata system.

NAD further determined that the disclosure provided on the advertiser’s website was not sufficient to limit the broad message conveyed by the “20x Faster” claim. More importantly, NAD noted that even if Oracle’s website disclosure was acceptable – and had appeared clearly and conspicuously in the challenged advertisement – it would still be insufficient because an advertiser cannot use a disclosure to cure an otherwise false claim.

NAD noted that Oracle’s decision to permanently discontinue the claims at issue was necessary and proper.

Oracle, in its advertiser’s statement, said it was “disappointed with the NAD’s decision in this matter, which it believes is unduly broad and will severely limit the ability to run truthful comparative advertising, not only for Oracle but for others in the commercial hardware and software industry.”

Oracle noted that it would appeal all of NAD’s findings in the matter.

 

###

NAD’s inquiry was conducted under NAD/CARU/NARB Procedures for the Voluntary Self-Regulation of National Advertising.  Details of the initial inquiry, NAD’s decision, and the advertiser’s response will be included in the next NAD/CARU Case Report.

About Advertising Industry Self-Regulation:  The Advertising Self-Regulatory Council establishes the policies and procedures for advertising industry self-regulation, including the National Advertising Division (NAD), Children’s Advertising Review Unit (CARU), National Advertising Review Board (NARB), Electronic Retailing Self-Regulation Program (ERSP) and Online Interest-Based Advertising Accountability Program (Accountability Program.) The self-regulatory system is administered by the Council of Better Business Bureaus.

Self-regulation is good for consumers. The self-regulatory system monitors the marketplace, holds advertisers responsible for their claims and practices and tracks emerging issues and trends. Self-regulation is good for advertisers. Rigorous review serves to encourage consumer trust; the self-regulatory system offers an expert, cost-efficient, meaningful alternative to litigation and provides a framework for the development of self-regulatory responses to emerging issues.

To learn more about supporting advertising industry self-regulation, please visit us at: www.asrcreviews.org.

 

 

Linda Bean l Director, Communications,
Advertising Self-Regulatory Council

Tel: 212.705.0129
Cell: 908.812.8175
lbean@asrc.bbb.org

112 Madison Ave.
3rd Fl.
New York, NY
10016

 

Ok, Oracle is no stranger to benchmark and performance claims controversy, having amassed several decades of experience. Anybody remember the silver bullet database test from the late 80s/early 90s, when Oracle set a record performance result except that they never committed the writes to disk?
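Why does skipping the commit matter so much for a benchmark number? A write that is only buffered in memory returns almost instantly, while a durable write has to wait for the disk to acknowledge it. The toy sketch below (my own illustration, not Oracle's actual test) times both paths so you can see the gap that an "uncommitted" benchmark quietly pockets:

```python
import os
import tempfile
import time


def timed_writes(fsync_each: bool, count: int = 200, size: int = 4096) -> float:
    """Time a series of small writes, optionally forcing each one to disk."""
    payload = b"x" * size
    fd, path = tempfile.mkstemp()
    start = time.perf_counter()
    try:
        for _ in range(count):
            os.write(fd, payload)
            if fsync_each:
                # Durable commit: wait for the storage device to acknowledge,
                # the way a real transaction commit is supposed to behave.
                os.fsync(fd)
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
        os.remove(path)
    return elapsed


buffered = timed_writes(fsync_each=False)  # writes land in the OS page cache
durable = timed_writes(fsync_each=True)    # writes are actually on disk
print(f"buffered: {buffered:.4f}s  durable: {durable:.4f}s")
```

On rotating disks of that era the durable path was often orders of magnitude slower, which is exactly why a record set without committing writes was not a meaningful record.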

Something tells me that Oracle and Uncle Larry (e.g. Larry Ellison, who is not really my uncle) will treat this on the theory that any kind of press or media coverage is good, and will probably issue something like "IBM must be worried if they have to go to the BBB."

Will a complaint, which I’m sure is not the first to be lodged with the BBB against Oracle, deter customers, or be of more use to IBM sales and their partners in deals vs. Oracle?

What’s your take?

Is this much ado about nothing, a filler for a slow news or discussion day, a break from talking about the VMware acquisition of Nicira or VMware CEO management changes? Perhaps this is an alternative to talking about the CEO of SSD vendor STEC being charged with insider trading, or something other than Larry Ellison buying a Hawaiian island (IMHO he could have gotten a better deal buying Greece), or is this something that Oracle will need to take seriously?

Ok, nuff said for now

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Give HP storage some love and short strokin

Server and StorageIO industry trends and perspective DAS

Following up from my last post over at InfoStor about metrics that matter, here is a link to a new piece that I did on storage vendor benchmarking and related topics. This new post looked at a Storage Performance Council (SPC-1) benchmark that HP did with their P10000 (e.g. 3PAR) storage system, amid assertions by some in the industry that they were short stroking to achieve better performance.
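For those not familiar with the term, short stroking means placing data on only a fraction of a hard disk drive's tracks (typically the outer ones) so the heads travel shorter distances, trading away usable capacity for lower seek times and higher IOPS. Here is a back-of-envelope sketch of that trade-off; the drive numbers and the assumption that average seek time scales with the fraction of the stroke used are my own illustrative simplifications, not HP's configuration:

```python
def effective_iops(avg_seek_ms: float, rotational_latency_ms: float,
                   stroke_fraction: float) -> float:
    """Rough random-read IOPS for an HDD when only part of the stroke is used.

    Simplifying assumption: average seek time shrinks roughly in proportion
    to the fraction of the platter surface actually holding data.
    """
    seek_ms = avg_seek_ms * stroke_fraction
    service_time_ms = seek_ms + rotational_latency_ms
    return 1000.0 / service_time_ms


# Hypothetical 7.2K RPM-class drive: 8 ms average seek, ~4.2 ms rotational latency.
full = effective_iops(avg_seek_ms=8.0, rotational_latency_ms=4.2, stroke_fraction=1.0)
short = effective_iops(avg_seek_ms=8.0, rotational_latency_ms=4.2, stroke_fraction=0.2)
print(f"full stroke: {full:.0f} IOPS  short stroke (20% of capacity): {short:.0f} IOPS")
```

With these assumed numbers, using only 20 percent of each drive roughly doubles per-drive IOPS, which is why unused capacity in a benchmark configuration draws scrutiny.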


I’m surprised some creative technical marketer, blogger or prankster has yet to rework Clarence Carter’s (e.g. Dr. CC) iconic song into something about storage performance and capacity short strokin.


Ok, nuff said before I get a visit from the HP truth squads, in the meantime, give HP a hug and some love if so inclined.

Cheers Gs


Announcing SAS SANs for Dummies book, LSI edition

There is a new (free) book that I’m a co-author of, along with Bruce Grieshaber and Larry Jacob (both of LSI), with a foreword by Harry Mason of LSI, President of the SCSI Trade Association, titled SAS SANs for Dummies, compliments of LSI.

SAS SANs for Dummies, LSI Edition

This new book (ebook and print hard copy) looks at Serial Attached SCSI (SAS) and how it can be used beyond traditional direct attached storage (DAS) configurations to support various types of storage media including SSD, HDD and tape. These configuration options include an entry-level SAN with SAS switches for small clusters or server virtualization, shared DAS, as well as a scale-out back-end solution for NAS, object, cloud and big data storage solutions.

Here is the table of contents (TOC) of SAS SANs for Dummies

Chapter 1: Data storage challenges

  • Storage Growth Demand Drivers
  • Recognizing Challenges
  • Solutions and Opportunities

Chapter 2: Storage Area Networks

  • Introducing Storage Area Networks
  • Moving from Dedicated Internal to Shared Storage

Chapter 3: SAS Basics

  • Introducing the Basics of SAS
  • How SAS Functions
  • Components of SAS
  • SAS Target Devices
  • SAS for SANs

Chapter 4: SAS Usage Scenarios

  • Understanding SAS SANs Usage
  • Shared SAS SANs Scenarios including:
    • SAS in HPC environments
    • Big data and big bandwidth
    • Database, e-mail, back-office
    • NAS and object storage servers
    • Cloud, web and high-density
    • Server virtualization

Chapter 5: Advanced SAS Topics

  • The SAS Physical Layer
  • Choosing SAS Cabling
  • Using SAS Switch Zoning
  • SAS HBA Target Mode

Chapter 6: Nine Common Questions

  • Can You Interconnect Switches?
  • What Is SAS Cable Distance?
  • How Many Servers Can Be In a SAS SAN?
  • How Do You Manage SAS Zones?
  • How Do You Configure SAS for HA?
  • How Does SAS Zoning Compare to LUN Mapping?
  • Who Has SAS Solutions?
  • How Do SAS SANs Compare?
  • Where Can You Learn More?

Chapter 7: Next Steps

  • SAS Going Forward
  • Next Steps
  • Great Takeaways

Regardless of whether you are looking to use SAS as a primary SAN interface, leverage it for DAS, or implement back-end storage for big data, NAS, object, cloud or other types of scalable storage solutions, check out and get your free copy of SAS SANs for Dummies here, compliments of LSI.

SAS SANs for Dummies, LSI Edition

Click here to ask for your free copy of SAS SANs for Dummies, compliments of LSI, tell them Greg from StorageIO sent you and enjoy the book.

Ok, nuff said.

Cheers Gs
