Part II: EMC Evolves Enterprise Data Protection with Enhancements

This is the second part of a two-part series on recent EMC backup and data protection announcements. Read part I here.

What about the products, what’s new?

In addition to articulating their strategy for modernizing data protection (covered in part I here), EMC announced enhancements to Avamar, Data Domain, Mozy and NetWorker.

Data protection storage systems (e.g. Data Domain)

Building on previously announced Backup Recovery Solutions (BRS) enhancements, including Data Domain operating system storage software, EMC is adding more application and software integration along with new platform (systems) support.

Data Domain (e.g. Protection Storage) enhancements include:

  • Application integration with Oracle and SAP HANA for big data backup and archiving
  • New Data Domain protection storage system models
  • Data in place upgrades of storage controllers
  • Extended Retention now available on added models
  • SAP HANA Studio backup integration via NFS
  • Boost for Oracle RMAN, native SAP tools and replication integration
  • Support for backing up and protecting Oracle Exadata
  • SAP (non-HANA) support both on SAP and Oracle databases

Data in place upgrades of controllers are now supported for 4200 series models and up (previously available only on some larger models). This means that controllers can be upgraded with data remaining in place, as opposed to a lengthy data migration.

Extended Retention is a zero cost license that enables more disk drive shelves to be attached to supported Data Domain systems. Thus there is not a license fee; however, you do pay for the storage shelves and drives to increase the available storage capacity. Note that this feature increases the storage capacity by adding more disk drives and does not increase the performance of the Data Domain system. Extended Retention has been available in the past, however it is now supported on more platform models. The extra storage capacity is essentially placed into a different tier that an archive policy can then migrate data into.
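To make the archive tier concept concrete, here is a minimal Python sketch of an age-based policy that migrates items from the active tier into an extended retention tier. The tier names, catalog layout and threshold are hypothetical illustrations, not EMC's actual policy engine.

```python
from datetime import datetime, timedelta

# Hypothetical age-based policy: items not accessed within RETAIN_ACTIVE
# days migrate from the active tier to the extended retention tier.
RETAIN_ACTIVE = timedelta(days=90)

def apply_archive_policy(catalog, now=None):
    """catalog: list of dicts with 'name', 'last_access' (datetime) and 'tier'."""
    now = now or datetime.utcnow()
    for item in catalog:
        if item["tier"] == "active" and now - item["last_access"] > RETAIN_ACTIVE:
            item["tier"] = "extended_retention"  # data lands on the added shelves
    return catalog

catalog = [
    {"name": "backup_2013_01.img", "last_access": datetime(2013, 1, 5), "tier": "active"},
    {"name": "backup_2013_06.img", "last_access": datetime(2013, 6, 20), "tier": "active"},
]
apply_archive_policy(catalog, now=datetime(2013, 7, 1))
print([(i["name"], i["tier"]) for i in catalog])
# [('backup_2013_01.img', 'extended_retention'), ('backup_2013_06.img', 'active')]
```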

Boost for accelerating data movement to and from Data Domain systems over a storage network is only available using Fibre Channel. When asked about FC over Ethernet (FCoE) or iSCSI, EMC indicated its customers are not asking for this ability yet. This has me wondering whether the current customer focus is simply around FC, whether those customers are not yet ready for iSCSI or FCoE, or whether, if there were iSCSI or FCoE support, more customers would ask for it.

With the new Data Domain protection storage systems EMC is claiming up to:

  • 4x faster performance than earlier models
  • 10x more scalable and 3x more backup/archive streams
  • 38 percent lower cost per GB based on holding price points and applying improvements (see the arithmetic sketch below)
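As a sanity check on what holding price points while applying improvements implies, here is the simple arithmetic with illustrative numbers (not EMC's actual pricing): if the price is held constant while usable capacity grows roughly 1.6x, cost per GB drops about 38 percent.

```python
# Illustrative only: if the price point is held constant while usable
# capacity per system increases, cost per GB falls proportionally.
price = 100_000.0          # hypothetical system price, held constant
old_capacity_gb = 100_000  # hypothetical prior-generation capacity
new_capacity_gb = 161_000  # ~1.61x capacity at the same price point

old_cost_per_gb = price / old_capacity_gb
new_cost_per_gb = price / new_capacity_gb
reduction = 1 - new_cost_per_gb / old_cost_per_gb
print(f"{reduction:.0%} lower cost per GB")  # -> 38% lower cost per GB
```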


EMC Data Domain data protection storage platform family


Data Domain supporting both backup and archive

Expanding Data Domain from backup to archive

EMC continues to evolve the Data Domain platform from just being a backup target platform with dedupe and replication to a multi-function, multi-role solution. In other words, one platform with many uses. This is an example of using one tool or technology for different purposes such as backup and archiving, however with separate policies (here is a link to a video where I discuss this). In the above figure EMC Data Domain is shown being used for backup along with storage tiering and archiving (file, email, SharePoint, content management and databases among other workloads).


EMC Data Domain supporting different functions and workloads

Also shown are various tools from other vendors such as Commvault Simpana that can be used as both a backup and archiving tool with Data Domain as a target. Likewise, Dell products acquired via the Quest acquisition are shown, along with those from IBM (e.g. Tivoli) and FileTek among others. Note that if you are a competitor of EMC, or simply a fan of other technology, you might conclude that the above is not so different from what others do. Then again, others who are not articulating their version or vision of something like the above figure should probably also be stating the obvious rather than arguing over who did it first.

Data source integration (aka data protection software tools)

It seems like just yesterday that EMC acquired Avamar (2006) and NetWorker aka Legato (2003), not to mention Mozy (2007) or Dantz (Retrospect, since divested) in 2004. With the exception of Dantz (Retrospect) which is now back in the hands of its original developers, EMC continues to enhance and evolve Avamar, Mozy and NetWorker including with this announcement.

General Avamar 7 and NetWorker 8.1 enhancements include:

  • Deeper integration with primary storage and protection storage tiers
  • Optimization for VMware vSphere virtual server environments
  • Improved visibility and control for data protection of enterprise applications

Additional Avamar 7 enhancements include:

  • More Data Domain integration and leveraging as a repository (since Avamar 6)
  • NAS file systems with NDMP accelerator access (EMC Isilon & Celerra, NetApp)
  • Data Domain Boost enhancements for faster backup / recovery
  • Application integration with IBM (DB2 and Notes), Microsoft (Exchange, Hyper-V images, SharePoint, SQL Server), Oracle, SAP, Sybase, VMware images

Note that the Avamar Data Store is still used mainly for ROBO and desktop or laptop backup scenarios that do not yet support Data Domain (also see the Mozy enhancements below).

Avamar supports VMware vSphere virtual server environments using granular change block tracking (CBT) technology as well as image level backup and recovery with vSphere plugins. This includes an Instant Access recovery when images are stored on Data Domain storage.

Instant Access enables a VM that has been protected using Avamar image level technology on Data Domain to be booted via an NFS VMware datastore. VMware sees the VM and is able to power it on and boot directly from the Data Domain via the NFS datastore. Once the VM is active, it can be Storage vMotioned to a production VMware datastore while active (e.g. running), providing recovery on the fly capabilities.


Instant Access to a VM on Data Domain storage
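EMC's tooling automates the Instant Access workflow end to end. Purely to make the moving parts concrete, here is a rough pyVmomi (the VMware vSphere Python SDK) sketch of the same sequence: mount the Data Domain NFS export as a datastore, register and power on the VM, then Storage vMotion it to production storage. The hostnames, export paths and VM names are placeholders, and this is a sketch of the underlying vSphere calls, not EMC's actual integration.

```python
from pyVmomi import vim

def instant_access_recover(host, pool, vm_folder, prod_datastore):
    """Sketch of the Instant Access flow. Arguments are live pyVmomi
    managed objects: host (vim.HostSystem), pool (vim.ResourcePool),
    vm_folder (vim.Folder) and prod_datastore (vim.Datastore).
    Task-completion waits are omitted for brevity."""
    # 1. Mount the Data Domain NFS export as a datastore on the ESXi host
    nas_spec = vim.host.NasVolume.Specification(
        remoteHost="datadomain.example.com",  # placeholder Data Domain address
        remotePath="/avamar/instant_access",  # placeholder export path
        localPath="dd_instant_access",
        accessMode="readWrite")
    host.configManager.datastoreSystem.CreateNasDatastore(nas_spec)

    # 2. Register the protected VM directly from the Data Domain datastore
    task = vm_folder.RegisterVM_Task(
        path="[dd_instant_access] recovered_vm/recovered_vm.vmx",
        asTemplate=False, pool=pool, host=host)
    vm = task.info.result  # vim.VirtualMachine once the task completes

    # 3. Power it on: the VM boots from Data Domain over NFS
    vm.PowerOnVM_Task()

    # 4. While the VM runs, Storage vMotion it to a production datastore
    relocate_spec = vim.vm.RelocateSpec(datastore=prod_datastore)
    return vm.RelocateVM_Task(relocate_spec)
```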

EMC NetWorker 8.1 enhancements include:

  • Enhanced visibility and control for owners of data
  • Collaborative protection for Oracle environments
  • Synchronize backup and data protection between DBAs and backup admins
  • Oracle DBAs use native tools (e.g. RMAN)
  • Backup admins implement the organization's SLAs (e.g. using NetWorker)
  • Deeper integration with EMC primary storage (e.g. VMAX, VNX, etc)
  • Isilon integration support
  • Snapshot management (VMAX, VNX, RecoverPoint)
  • Automation and wizards for integration, discovery, simplified management
  • Policy-based management, fast recovery from snapshots
  • Integrating snapshots into and as part of data protection strategy. Note that this is more than basic snapshot management as there is also the ability to roll over a snapshot into a Data Domain protection storage tier.
  • Deeper integration with Data Domain protection storage tier
  • Data Domain Boost over Fibre Channel for faster backups and restores
  • Data Domain Virtual Synthetics to cut the impact of full backups (see the conceptual sketch after this list)
  • Integration with Avamar for managing image level backup recovery (Avamar services embedded as part of NetWorker)
  • vSphere Web Client enabling self-service recovery of VMware images
  • Newly created VMs inherit backup policies automatically
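To illustrate conceptually what the Virtual Synthetics item above refers to, here is a toy Python sketch: a new full backup is synthesized by overlaying incremental changes onto the previous full by reference, without re-reading all of the client data. This is a simplification for illustration, not EMC's implementation.

```python
# Toy model: a "backup" maps file paths to content references (not data).
# A virtual synthetic full is built by layering incrementals over the
# previous full without moving or re-reading the underlying client data.
def synthesize_full(previous_full, incrementals):
    synthetic = dict(previous_full)            # start from the last full, by reference
    for incr in incrementals:                  # apply incrementals oldest -> newest
        for path, ref in incr.get("changed", {}).items():
            synthetic[path] = ref              # updated or new files
        for path in incr.get("deleted", []):
            synthetic.pop(path, None)          # removed files
    return synthetic

full_monday = {"/etc/hosts": "seg_01", "/var/db.dat": "seg_02"}
incr_tuesday = {"changed": {"/var/db.dat": "seg_07"}, "deleted": []}
incr_wednesday = {"changed": {"/home/new.txt": "seg_09"}, "deleted": ["/etc/hosts"]}

print(synthesize_full(full_monday, [incr_tuesday, incr_wednesday]))
# {'/var/db.dat': 'seg_07', '/home/new.txt': 'seg_09'}
```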

Mozy is being positioned for enterprise remote office branch office (ROBO) or distributed private cloud scenarios where Avamar, NetWorker or Data Domain solutions are not as applicable. EMC has mentioned that they have over 800 enterprises using Mozy for desktop, laptop, ROBO and mobile data protection. Note that this is a different target market than the consumer focused Mozy product, which also addresses smaller SMBs and SOHOs (Small Office Home Offices).

EMC Mozy enhancements to make it more enterprise grade include:

  • Simplified management services and integration
  • Active Directory (AD) for Microsoft environments
  • New storage pools (multiple types of pools) vs. dedicated storage per client
  • Keyless activation for faster provisioning of backup clients

Note that earlier this year EMC enhanced Data Protection Advisor (DPA) with version 6.0.

What does this all mean?

Data protection and backup discussions often focus around tape summit resources or cloud arguments, although this is changing. What is changing is growing awareness and discussion around how data protection storage mediums, systems and services are used along with the associated software management tools.

Some will say backup is broken, often pointing a finger at a medium (e.g. tape or disk) as what is wrong. Granted, in some environments the target medium (or media) destination is an easy culprit to point a finger at as the problem (e.g. the usual tape sucks or is dead mantra). However, for many environments, while there can be issues, it is more often than not how the media, medium, device or target storage system is being used or abused that is broken, rather than the technology itself.

This means revisiting how tools are used along with media or storage systems allocated, used and retained with respect to different threat risk scenarios. After all, not everything is the same in the data center or information factory.

Thus modernizing data protection is more than swapping media or mediums including types of storage system from one to another. It is also more than swapping out one backup or data protection tool for another. Modernizing data protection means rethinking what different applications and data need to be protected against various threat risks.

What this has to do with today's announcement is that EMC, among others in the industry, is moving toward a holistic data protection modernization model.

In my opinion what you are seeing out of EMC and some others is taking that step back and expanding the data protection conversation to revisit, rethink why, how, where, when and by whom applications and information get protected.

This announcement also ties into finding and removing costs vs. simply cutting cost at the cost of something elsewhere (e.g. service levels, performance, availability). In other words, finding and removing complexities or overhead associated with data protection while making it more effective.

Some closing points, thoughts and more links:

  • There is no such thing as a data or information recession
  • People and data are living longer while getting larger
  • Not everything is the same in the data center or information factory
  • Rethink data protection including when, why, how, where, with what and by whom
  • There is little data, big data, very big data and big fast data
  • Data protection modernization is more than playing buzzword bingo
  • Avoid using new technology in old ways
  • Data footprint reduction (DFR) can help counter changing data life-cycle patterns
  • EMC continues to leverage Avamar while keeping NetWorker relevant
  • Data Domain evolving for both backup and archiving as an example of one tool for multiple uses

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

EMC Evolves Enterprise Data Protection with Enhancements (Part I)

A couple of months ago at EMCworld there were announcements around ViPR, Pivotal, trust and clouds among other topics. During the recent EMCworld event there were questions among attendees about backup and data protection announcements (or the lack thereof).

Modernizing Data Protection

Today EMC announced enhancements to its Backup Recovery Solutions (BRS) portfolio (@EMCBackup) that continue to enable modernizing of information and application data protection, spanning Avamar, Data Domain, Mozy and NetWorker.

Keep in mind you can’t go forward if you can’t go back, which means if you do not have good data protection to go to, you can’t go forward with your information.

EMC Modern Data Protection Announcements

As part of their Backup to the Future event, EMC announced the following:

  • New generation of data protection products and technologies
  • Data Domain systems: enhanced application integration for backup and archive
  • Data protection suite tools Avamar 7 and NetWorker 8.1
  • Enhanced Cloud backup capabilities for the Mozy service
  • Paradigm shift as part of data protection modernizing including revisiting why, when, where, how, with what and by whom data protection is accomplished.

What did EMC announce for data protection modernization?

While much of the EMC data protection announcement is around product, there is also the aspect of rethinking data protection. This means looking at data protection modernization beyond swapping out media (e.g. tape for disk, disk for cloud) or one backup software tool for another. Instead, it means revisiting why data protection needs to be accomplished and by whom, along with how to remove complexity and cost and enable agility and flexibility. This also means enabling data protection to be used or consumed as a service in traditional, virtual and private or hybrid cloud environments.

EMC uses as an example (what they refer to as Accidental Architecture) how there are different groups and areas of focus, along with silos, associated with data protection. These groups span virtual, applications, database, server and storage among others.

The results are silos that need to be transformed in part using new technology in new ways, as well as addressing a barrier to IT convergence (people and processes). The theme behind EMC data protection strategy is to enable the needs and requirements of various groups (servers, applications, database, compliance, storage, BC and DR) while removing complexity.

Moving from Silos of data protection to a converged service enabled model

Three data protection and backup focus areas

This sets the stage for the three components for enabling a converged data protection model that can be consumed or used as a service in traditional, virtual and private cloud environments.


EMC three components of modernized data protection (EMC Future Backup)

The three main components (and their associated solutions) of EMC BRS strategy are:

  • Data management services: Policy and storage management, SLA, SLO, monitoring, discovery and analysis. This is where tools such as EMC Data Protection Advisor (acquired via WysDM) fit among others for coordination or orchestration, setting and managing policies along with other activities.
  • Data source integration: Applications, Database, File systems, Operating System, Hypervisors and primary storage systems. This is where data movement tools such as Avamar and NetWorker among others fit, along with interfaces to application tools such as Oracle RMAN.
  • Protection storage: Targets, destination storage system with media or mediums optimized for protecting and preserving data along with enabling data footprint reduction (DFR). DFR includes functionality such as compression and dedupe among others. Example of data protection storage is EMC Data Domain.

Read more about product items announced and what this all means here in the second of this two-part series.

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

HDS Mid Summer Storage and Converged Compute Enhancements

Converged Compute, SSD Storage and Clouds

Hitachi Data Systems (HDS) announced today several enhancements to their data storage and unified compute portfolio as part of their Maximize I.T. initiative.

Setting the context

As part of setting the stage for this announcement, HDS has presented the following as part of their vision for IT transformation and cloud computing.

https://hds.com/solutions/it-strategies/maximize-it.html?WT.ac=us_hp_flash_r11

What was announced

This announcement builds on earlier ones around HDS Unified Storage (HUS) primary storage using nand flash MLC Solid State Devices (SSD) and Hard Disk Drives (HDDs), along with unified block and file (NAS), as well as the Unified Compute Platform (UCP), also known as converged compute, networking, storage and software. These enhancements follow recent updates to the HDS Content Platform (HCP) for object, file and content storage.

There are three main focus areas of the announcement:

  • Flash SSD storage enhancements for HUS
  • Unified with enhanced file (aka BlueArc based)
  • Enhanced unified compute (UCP)

HDS Flash SSD acceleration

The question should not be if SSD is in your future, rather when, where, with what and how much will be needed.

As part of this announcement, HDS is releasing an all flash SSD based HUS enterprise storage system. Similar to what other vendors have done, HDS is attaching flash SSD storage to their HUS systems in place of HDDs. Hitachi has developed their own SSD module, announced in 2012 (read more here). The HDS SSD modules use Multi Level Cell (MLC) nand flash chips (dies) and now support 1.6TB of storage space capacity per unit. This is different from other vendors who either use nand flash SSD drive form factor devices (e.g. Intel, Micron, Samsung, SanDisk, Seagate, STEC (now WD), WD among others), or PCIe form factor cards (e.g. FusionIO, Intel, LSI, Micron, Virident among others), or attach a third-party external SSD device (e.g. IBM/TMS, Violin, Whiptail etc.).

Like some other vendors, HDS has also done more than simply attach an SSD (drive, PCIe card, or external device) to their storage systems and call it an integrated solution. What this means is that HDS has implemented software or firmware changes in their storage systems to manage durability and extend flash duty cycles in the face of program/erase (P/E) cycle wear. In addition, HDS has implemented performance optimizations in their storage systems to leverage the faster SSD modules; after all, faster storage media or devices need fast storage systems or controllers.

While the new all flash storage system can be initially bought with just SSD, similar to other hybrid storage solutions, hard disk drives (HDD’s) can also be installed. For enabling full performance at low latency, HDS is addressing both the flash SSD modules as well as the storage systems they attach to including back-end, front-end and caching in-between.

The release enables 500,000 (half a million) IOPS; no IOP size, read or write mix, or random or sequential characteristics were indicated. A future (non-disruptive) firmware update is claimed by HDS to enable higher performance of 1,000,000 IOPS at under a millisecond.

In addition to future performance improvements, HDS is also indicating increased storage space capacity of its MLC flash SSD modules (1.6TB today). Using groups of 12 modules (1.6TB each), up to 154TB of flash SSD can be placed in a single rack.

HDS File and Network Attached Storage (NAS)

HUS unified NAS file system and gateway (BlueArc based) enhancements include:

  • New platforms leveraging faster processors (both Intel and Field Programmable Gate Arrays (FPGA’s))
  • Common management and software tools from 3000 to new 4000 series
  • Bandwidth doubled with faster connections and more memory
  • Four 10GbE NAS serving ports (front-end)
  • Four 8Gb Fibre Channel ports (back-end)
  • FPGA leveraged for off-loading some dedupe functions (faster performance)

HDS Unified Compute Platform (UCP)

As part of this announcement, HDS is enhancing the Unified Compute Platform (UCP) offerings. HDS re-entered the compute market in 2012, joining other vendors offering unified compute, storage and networking solutions. The HDS converged data infrastructure competes with the AMD (Seamicro) SM15000, Dell vStart and VRTX (for the lower end market), EMC and VCE vBlock, NetApp FlexPod, along with those from HP (including Moonshot micro servers), IBM PureSystems, Oracle and others.

UCP Pro for VMware vSphere

  • Turnkey converged solution (Compute, Networking, Storage, Software)
  • Includes VMware vSphere pre-installed (OEM from VMware)
  • Flexible compute blade options
  • Three storage system options (HUS, HUS VM and VSP)
  • Cisco and Brocade IP networking
  • UCP Director 3.0 with enhanced automation and orchestration software

UCP Select for Microsoft Private Cloud

  • Supports Hyper-V 3.0 server virtualization
  • Live migration with DR and resynch
  • Microsoft Fast Track certified

UCP Select for Oracle RAC

  • HDS Flash SSD storage
  • SMP x86 compute for performance
  • 2x IOPS improvement at under 1 millisecond
  • Common management with HiCommand suite
  • Integrated with Oracle RMAN and OVM

UCP Select for SAP HANA

  • Scale out to 8TB of memory (DRAM)
  • Tier 1 storage system certified for SAP HANA DR
  • Leverages SAP HANA SAP storage connector API

What does this all mean?

With these announcements HDS is extending its storage centric hardware, software and services solution portfolio for block, file and object access across different usage tiers (systems, applications, mediums). HDS is also expanding their converged unified compute platforms to stay competitive with others including Dell, EMC, Fujitsu, HP, IBM, NEC, NetApp and Oracle among others. For environments with HDS storage looking for converged solutions to support VMware, Microsoft Hyper-V, Oracle or SAP HANA these UCP systems are worth checking out as part of evaluating vendor offerings. Likewise for those who have HDS storage exploring SSD offerings, these announcements give opportunities to enable consolidation as do the unified file (NAS) offerings.

Note that HDS does not currently have a publicly formalized message or story around PCIe flash cards; however, they have relationships with various vendors as part of their UCP offerings.

Overall a good set of incremental enhancements for HDS to stay competitive and leverage their field proven capabilities including management software tools.

Ok, nuff said

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Talking with Tony DiCenzo at SNW Spring 2013

This is a new episode in the continuing StorageIO industry trends and perspectives podcast series (you can view more episodes or shows along with other audio and video content here), also available via iTunes or via your preferred means using this RSS feed (https://storageio.com/StorageIO_Podcast.xml).

In this episode from SNW Spring 2013 in Orlando Florida, while Greg is in the process of boarding a flight home, Bruce Ravid (@BruceRave) catches up and talks with long time storage industry insider Tony DiCenzo of SNIA and Oracle. Their conversation covers industry trends, observations of SNW past and present along with other related topics.

Click here (right-click to download MP3 file) or on the microphone image to listen to the conversation with Bruce and Tony.

Watch (and listen) for more StorageIO industry trends and perspectives audio blog posts (podcasts) and other upcoming events. Also be sure to check out other related podcasts, videos, posts, tips and industry commentary at StorageIO.com and StorageIOblog.com.

Enjoy this episode from SNW Spring 2013 with Tony DiCenzo of Oracle and SNIA.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Part II: XtremIO, XtremSW and XtremSF EMC flash ssd portfolio redefined

Part one of this two-part post provided a summary of today’s EMC (@EMCflash) announcement around XtremIO and renaming VFCache to XtremSF and associated software as XtremSW.

Synopsis of announcement

  • Product rollout and selective availability of the new all flash SSD array XtremIO
  • Rename server-side PCIe ssd flash cards from VFCache to XtremSF
  • New XtremSF models including enhanced multi-level cell (eMLC) with larger capacities
  • Rename VFCache caching software to XtremSW (enables cache mode vs. target mode)

Now let's take a closer look at what was announced along with what it means in terms of Industry Trends and Perspectives.

XtremIO has been in customer beta for some time, and now those beta customers along with some other early adopters are able to acquire the product. In addition, EMC is opening up XtremIO to more prospective customers (Directed Availability) who have requirements or needs that line up with the product's target market capabilities.

What this means is that XtremIO is not simply being put out into the general product population for broad distribution. Instead, it is being put into a controlled release (Directed Availability) to help customers, partners and EMC sales decide where best to use it, and thus avoid a revenue prevention play in other areas. The criteria or target opportunities (at least initially) are little-data applications including OLTP, server virtualization (where aggregation can cause aggravation) along with virtual desktop or VDI. In other words, many of the traditional or legacy IOP focused SSD opportunities.

In addition to XtremIO, EMC has renamed their VFCache PCIe flash SSD cards (launched February 2012) to XtremSF, along with adding new models with both SLC and MLC nand flash. Also as part of today's announcement, EMC is renaming the cache software for XtremSF (e.g. VFCache) to XtremSW. If that prompts the question of whether you can now buy XtremSF as a target mode only card without the cache software, the answer is yes.

What is XtremIO?

It is a new all flash SSD storage array. XtremIO is a cluster, grid or collection of nodes called bricks, with linear performance scaling, providing block based all flash SSD storage. Data services consist of data footprint reduction (DFR) including inline global (across all nodes or bricks) dedupe on 4Kbyte chunks, along with thin provisioning. Global dedupe is done on ingest using a combination of flash buffered meta-data (tables, index or dictionary) of what has been seen before, along with multi-threaded software to leverage multi-core processors. Using global dedupe at ingest, only new unique data is saved based on 4Kbyte chunks.
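To make the inline dedupe mechanics above concrete, here is a toy Python sketch of fingerprinting 4Kbyte chunks against a global index so that only unique chunks get stored. XtremIO's actual metadata structures, hashing and multi-threading are of course far more sophisticated; this is illustration only.

```python
import hashlib

CHUNK = 4096  # dedupe granularity: 4Kbyte chunks

def ingest(data: bytes, index: dict, store: list):
    """Toy inline dedupe: only chunks not already in the global index are stored."""
    refs = []
    for off in range(0, len(data), CHUNK):
        chunk = data[off:off + CHUNK]
        fp = hashlib.sha1(chunk).digest()   # fingerprint (illustrative hash choice)
        if fp not in index:                 # new unique data: save it
            index[fp] = len(store)
            store.append(chunk)
        refs.append(index[fp])              # metadata reference either way
    return refs

index, store = {}, []
ingest(b"A" * 8192, index, store)               # two identical 4K chunks -> 1 stored
ingest(b"A" * 4096 + b"B" * 4096, index, store)  # "A" dedupes, "B" is new
print(len(store), "unique chunks stored")        # -> 2 unique chunks stored
```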

Performance per EMC scales linearly from a single brick to a second or a fourth brick. Note: architecturally more nodes can be added, with EMC indicating additional models will be available in the future.

In addition to DFR, other data services include writable snapshots and auto-load balancing when new bricks are added. Note that in a normally running XtremIO, data is automatically spread across the nodes for both performance and resiliency; data only needs to be moved or load-balanced in the background when new bricks are added. Instant copy snapshots are supported along with writable snapshots. Currently replication is done via external EMC products such as VPLEX or RecoverPoint, with statements of direction (SOD) for future enhancements.

Additional attributes of XtremIO include:

  • Each node or brick (X-Brick) has up to 25 SSD drives (16 on the Gen 1 hardware platform)
  • All bricks are involved in IO and storage processing
  • Positioned by EMC as Software Defined (no proprietary hardware)
  • Four x 8Gb Fibre Channel (8GFC) and four x 10Gb Ethernet (iSCSI) per brick
  • Bricks communicate with each other via a separate interconnect network or fabric
  • Bricks have redundant processors (think of as controllers) with multiple sockets and cores
  • 4KB random read IOP’s scale from 250K (one brick), 500K (two bricks) and 1 Million (four bricks). For 4K random write IOPS, the numbers are 100K, 200K and 400K across one, two and four brick configurations with low latency and all data services running (EMC supplied numbers)

In addition to 4K being a commonly used or referred to IO size, it is also the same size as the new industry standard Advanced Format (AF). Today the standard storage block, page or sector size is 512 bytes; however, AF moves that to a larger 4,096 bytes (e.g. 4KB) to align more closely with larger IO sizes. Note that many HDDs and some SSDs today support AF and provide 512 byte emulation (512e) modes for compatibility.
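Since a misaligned IO on a 512 byte emulated AF drive triggers read-modify-write cycles, alignment is simple modulo arithmetic, as this small Python snippet illustrates (the example offsets are the classic legacy LBA 63 partition start vs. the modern 1MiB alignment):

```python
SECTOR_PHYSICAL = 4096  # Advanced Format physical sector size

def is_aligned(offset_bytes: int) -> bool:
    """True if an offset (e.g. a partition start) falls on a 4KB boundary."""
    return offset_bytes % SECTOR_PHYSICAL == 0

# Legacy partitions often started at LBA 63 (63 * 512 = 32,256 bytes): misaligned.
print(is_aligned(63 * 512))    # False -> read-modify-write penalty on 512e drives
# Modern tools align at 1MiB (2048 * 512 bytes): aligned.
print(is_aligned(2048 * 512))  # True
```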

What is XtremSF?

VFCache is renamed XtremSF, with new models using eMLC as a companion to existing SLC PCIe cards and blade server mezzanine cards. EMC is emphasizing performance metrics that matter, including IOPs relative to customer workloads such as 4K, 8K or larger, with a mix of reads and writes at low latency. In addition to IOPs with latency, size and reads or writes for little data, EMC is also showing bandwidth or throughput numbers for big-data and big-bandwidth.

| Model | Capacity | Read Transfer | Write Transfer | Random 4K Read (IOPS) | Random 4K Write (IOPS) | Random 4K Mixed (IOPS) | Read latency (usec) | Write latency (usec) |
|-------|----------|---------------|----------------|-----------------------|------------------------|------------------------|---------------------|----------------------|
| 2200 (eMLC) | 2.2 TB | 2.47 GB/s | 1.1 GB/s | 343K | 105K | 206K | 87us | 30us |
| 700 (SLC) | 700 GB | 2.9 GB/s | 1.8 GB/s | 712K | 197K | 411K | 50us | 13us |
| 550 (eMLC) | 550 GB | 1.36 GB/s | 512 MB/s | 174K | 49K | 96K | 87us | 37us |
| 350 (SLC) | 350 GB | 2.9 GB/s | 756 MB/s | 715K | 95K | 267K | 50us | 13us |

Sampling of SLC and eMLC XtremSF PCIe SSD card performance characteristics (via EMC), including latency measured in microseconds (usec). Note the performance differences due to some cards being based on SLC and others on eMLC.

Additional attributes, some new and some previously announced include:

  • x8 PCIe bandwidth lanes for performance
  • No IO impact to applications during garbage collection
  • Supports multi-core processor workloads with parallel design
  • Low CPU overhead by off-loading functions to PCIe card
  • Half-height, half-length PCIe form factor
  • Wear-leveling to manage nand flash program/erase (P/E) cycle wear

Other storage, server and systems vendors including Cisco, Dell, HP, IBM, NetApp and Oracle offer various PCIe nand flash SSD cards in either target, cache or mixed modes. Manufacturers or suppliers of PCIe nand flash SSD cache and target cards include among others FusionIO, Intel, LSI, Micron, OCZ and Virident (who is partnered with Seagate).

What is XtremSW?

Server side flash software (not to be confused with FAST) for using XtremSF as a tier 0 (server-side) SSD cache or target. In target mode the XtremSF functions as a high performance, persistent, local dedicated direct attached storage (DAS) device. Cache mode enables frequently accessed data to be kept close to the applications, off-loading underlying storage systems so they can be used more effectively. The XtremSW complements back-end storage systems for data protection and persistence, along with investment protection of those assets.
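As a conceptual illustration of cache mode (with no relation to actual XtremSW internals), here is a minimal Python LRU read cache sketch: hot blocks are served from the near tier while misses fall through to a slower backing store.

```python
from collections import OrderedDict

class ReadCache:
    """Toy LRU read cache: hot blocks are served locally (think server-side
    flash); misses fall through to the slower backing storage system."""
    def __init__(self, backend, capacity=1024):
        self.backend = backend          # callable: block_id -> data
        self.capacity = capacity
        self.cache = OrderedDict()

    def read(self, block_id):
        if block_id in self.cache:              # hit: serve from near tier
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = self.backend(block_id)           # miss: go to back-end storage
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:     # evict least recently used
            self.cache.popitem(last=False)
        return data

cache = ReadCache(backend=lambda b: f"<data for block {b}>", capacity=2)
cache.read(1); cache.read(2); cache.read(1); cache.read(3)  # block 2 is evicted
print(list(cache.cache))  # [1, 3]
```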

What this all means

SSD is in your future; the question is where, when and with what.

Why not just use SSD (DRAM and/or nand flash) everywhere?

Keep in mind that in the data center (traditional, virtual or cloud) everything is not the same. Thus the simple answer is that there is not enough of it available at a low enough price point (think closer to Hard Disk Drive (HDD) costs) to fit into customers' budgets. Sure, SSDs provide better performance and productivity benefits; however, while there is no such thing as a data or information recession, there are budget constraints.

Another reason why SSD can't simply be used everywhere is physical (and logical) constraints, such as the amount of memory a server can directly access, current DDR3 DIMMs only being able to address and work with DRAM (this could change with DDR4 according to Micron), PCIe bus physical slot space, and operating system and hypervisor addressing limits among others.

SSD, including both DRAM and nand flash (SLC, MLC, eMLC, TLC, etc.) along with emerging Phase Change Memory (PCM), sits at the convergence of traditional memory and data storage, and would be used even more broadly if priced low enough (e.g. much closer to HDDs). While some storage (or server) professionals may not agree, storage is an extension of memory and thus part of the traditional server and storage memory hierarchy shown below.

Storage I/O and cache locality of reference

This brings up the locality of reference topic, also shown in the following figure, where the best IO is the one that does not have to be done. The second best is the one that can be done closest to the application at a given level of service. Locality of reference, which is important for servers and storage systems including caching, refers to how close frequently accessed data is to where it is needed. For some applications this means as much DRAM main memory in a server as possible, either clustered, with battery backup or other data persistency protection including onboard HDD or SSD (e.g. towards the top of the hierarchy).
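The value of locality can be reduced to one line of arithmetic: effective access time is the hit-rate weighted average of the near and far tiers. A quick illustration with made-up latencies shows why a little fast memory close by goes a long way:

```python
def effective_latency(hit_rate, near_us, far_us):
    """Average access time given a cache hit rate and per-tier latencies."""
    return hit_rate * near_us + (1 - hit_rate) * far_us

# Illustrative numbers: 0.1us near-tier (DRAM) vs 100us far-tier access
for h in (0.50, 0.90, 0.99):
    print(f"hit rate {h:.0%}: {effective_latency(h, 0.1, 100):.1f} us")
# hit rate 50%: 50.1 us / 90%: 10.1 us / 99%: 1.1 us
```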

nand flash SSD and storage I/O location options

There are other applications where localized SSD (DRAM or nand flash) is a benefit to complement main memory, or as a persistent cache and target such as PCIe cards or SAS and SATA drives. Further down the stack, and for housing larger amounts of storage with performance (reads or writes, random or sequential) along with data services, is where all-SSD and hybrid (mix of SSD and HDD) systems fit. Even further down the stack, and for a broader segment, is where cloud storage services based on SSD, such as those from Rackspace (Cloud Block Storage with SSD) and Amazon (provisioned IOPS for EBS), have a play. Let's not forget about SSD in laptops, tablets and workstations; for example I have a Samsung model 830 in my Lenovo X1.

Some general industry trends include:

  • SSD is like real estate, location can matter, a little can go a long way
  • SSD media options include DRAM and nand flash (SLC, MLC, eMLC, TLC)
  • Portfolios broadening with different products for various needs
  • SSD functionality in servers, appliances, storage systems and cloud services
  • All flash SSD arrays have not killed off all traditional or hybrid storage arrays
  • Focus expanding from Just a Bunch Of SSD (JBOS) to enterprise like functionality
  • Software needs hardware, hardware needs software, the two work better together
  • Comparing meaningful metrics that matter vs. industry marketing metrics


Some additional thoughts and perspectives

Does this mean traditional storage arrays are now dead?

IMHO, no; there will be some cannibalization of existing storage systems by XtremIO within EMC customers or prospects if not managed, as well as of those from others. Keep in mind that EMC recently announced enhancements to their VMAX including entry-level options for service providers. Some of the new opportunities opened up will be where traditional all-SSD (flash or DRAM) systems have historically had success.

Traditional and new dedicated SSD systems include those of Texas Memory Systems (TMS), bought by IBM in 2012, and the recently announced NetApp EF540 (and future FlashRay), along with startups Solidfire, Violin and Whiptail among others. There will be environments where XtremIO may take care of all storage needs for a customer, or a specific application or piece of it. Then there will be other situations where XtremIO will co-exist with EMC or other vendors' storage solutions as part of a data infrastructure.

Who will EMC be competing against with XtremIO?

Certainly the startups or smaller players such as Violin, Whiptail, Pure Storage and Solidfire, along with IBM/TMS and the NetApp EF540 (eventually FlashRay as well) among others.

There will also be some competition with other hybrid storage array vendors that have a mix of HDD and SSD. XtremIO will also compete in some situations on its own vs. other PCIe flash target and cache cards such as FusionIO; however, for the most part those will be up against XtremSF and XtremSW.

Why the slow or “Directed Availability” rollout?

Why not? By taking a controlled rollout selecting and qualifying customers for XtremIO, EMC gets to manage how the product goes out into production and control how it is used to increase chances of success. Unlike a startup that would be forced to try to put their new technology anywhere, EMC has the luxury of selecting where it goes, not to mention needing to avoid introducing a revenue prevention play for its other products.

Overall, I give an Atta boy and Atta girl to the EMC crew for a Product Defined Announcement (PDA) extending their flash portfolio to complement their different customers and prospects various environment needs. Now watch EMC, NetApp and others step up their flash dance moves to see who will out flash the others in the eXtreme flash games, not to mention emerging software defined marketing moves (SDMM) ;) .

Ok, nuff said.

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

EMC VMAX 10K, looks like high-end storage systems are still alive (part III)

This is the third in a multi-part series of posts (read first post here and second post here) looking at what else EMC announced today in addition to an enhanced VMAX 10K and dispelling the myth that large storage arrays are dead (or at least for now).

In addition to the VMAX 10K specific updates, EMC also announced the release of a new version of their Enginuity storage software (firmware, storage operating system). Enginuity is supported across all VMAX platforms and features the following:

  • Replication enhancements include TimeFinder clone refresh, restore and four site SRDF for the VMAX 10K, along with thick or thin support. This capability enables functionality across VMAX 10K, 20K or 40K using synchronous or asynchronous replication, and extends earlier 3 site support to 4 site and mixed modes. Note that larger VMAX systems had the extended replication feature support previously, with the VMAX 10K now on par with those. Note that the VMAX can be enhanced with VPLEX in front of storage systems (local or wide area, in region HA and out of region DR) and RecoverPoint behind the systems supporting bi-synchronous (two-way), synchronous and asynchronous data protection (CDP, replication, snapshots).
  • Unisphere for VMAX 1.5 manages DMX, along with VMware VAAI UNMAP and space reclamation, block zero and hardware clone enhancements, IPv6, Microsoft Windows Server 2012 support and VFCache 1.5.
  • Support for mix of 2.5 inch and 3.5 inch DAEs (disk array enclosures) along with new SAS drive support (high-performance and high-capacity, and various flash-based SSD or EFD).
  • The addition of a fourth dynamic tier within FAST for supporting third-party virtualized storage, along with compression of in-active, cold or stale data (manual or automatic) with a 2 to 1 data footprint reduction (DFR) ratio. Note that EMC was one of the early vendors to put compression into its storage systems on a block LUN basis in the CLARiiON (now VNX), along with NetApp and IBM (via their Storwize acquisition). The new fourth tier also means that third-party storage does not have to be the lowest tier in terms of performance or functionality.
  • Federated Tiered Storage (FTS) is now available on all EMC block storage systems including those with third-party storage attached in virtualization mode (e.g. VMAX). In addition to supporting tiering across its own products, and those of other vendors that have been virtualized when attached to a VMAX, ANSI T10 Data Integrity Field (DIF) is also supported. Read more about T10 DIF here, and here.
  • Front-end performance enhancements with host I/O limits (Quality of Service or QoS) for multi-tenant and cloud environments to balance or prioritize IO across ports and users. This feature can balance based on thresholds for IOPS, bandwidth or both from the VMAX. Note that this feature is independent of any operating system based tool, utility, pathing driver or feature such as VMware DRS and Storage I/O control. Storage groups are created and mapped to specific host ports on the VMAX, with the QoS performance thresholds applied to meet specific service level requirements or objectives (a conceptual sketch follows below).
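To illustrate the host I/O limit concept (and not the Enginuity implementation), here is a small Python token bucket sketch that caps a storage group at a configured IOPS threshold; the group name and limit are hypothetical.

```python
import time

class IopsLimiter:
    """Toy token bucket: admits at most `limit_iops` IOs per second for a
    storage group; excess IOs are deferred until tokens refill."""
    def __init__(self, limit_iops):
        self.limit = limit_iops
        self.tokens = float(limit_iops)
        self.last = time.monotonic()

    def admit(self):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at the limit
        self.tokens = min(self.limit, self.tokens + (now - self.last) * self.limit)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True      # IO proceeds
        return False         # IO throttled to honor the QoS threshold

gold_group = IopsLimiter(limit_iops=5000)  # hypothetical storage group limit
if gold_group.admit():
    pass  # issue the IO
```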

For discussion (or entertainment) purposes, how about the question of whether Enginuity qualifies or can be considered a storage hypervisor (or storage virtualization, or virtual storage)? After all, the VMAX is now capable of having third-party storage from other vendors attached to it, something that HDS has done for many years. For those who feel a storage hypervisor, virtual storage or storage virtualization requires software running on Intel or other commodity based processors, guess what the VMAX uses for CPU processors (granted, you can't simply download Enginuity software and run it on a Dell, HP, IBM, Oracle or SuperMicro server).

I am guessing some of EMC's competitors and their surrogates, or others who like to play the storage hypervisor card game, will be quick to tell you it is not, based on various reasons or product comparisons; however, you be the judge.

 

Back to the question of whether traditional high-end storage arrays are dead or dying (from part one in this series).

IMHO as mentioned not yet.

Granted, like other technologies that have been declared dead or dying yet remain in use (technology zombies), they continue to be enhanced, find new customers, or see existing customers use them in new ways; their roles are evolving, thus they are still alive.

For some environments as has been the case over the past decade or so, there will be a continued migration from large legacy enterprise class storage systems to midrange or modular storage arrays with a mix of SSD and HDD. Thus, watch out for having a death grip not letting go of the past, while being careful about flying blind into the future. Do not be scared, be ready, do your homework with clouds, virtualization and traditional physical resources.

Likewise, there will be the continued migration for some from traditional mid-range class storage arrays to all flash-based appliances. Yet others will continue to leverage all the above in different roles aligned to where their specific features best serve the applications and needs of an organization.

In the case of high-end storage systems such as EMC VMAX (aka formerly known as DMX and Symmetrix before that) based on its Enginuity software, the hardware platforms will continue to evolve as will the software functionality. This means that these systems will evolve to handling more workloads, as well as moving into new environments from service providers to mid-range organizations where the systems were before out of their reach.

Smaller environments have grown larger as have their needs for storage systems while higher end solutions have scaled down to meet needs in different markets. What this means is a convergence of where smaller environments have bigger data storage needs and can afford the capabilities of scaled down or Right-sized storage systems such as the VMAX 10K.

Thus while some of the high-end systems may fade away faster than others, for those that continue to evolve being able to move into different adjacent markets or usage scenarios, they will be around for some time, at least in some environments.

Avoid confusing what is new and cool falling under industry adoption vs. what is productive and practical for customer deployment. Systems like the VMAX 10K are not for all environments or applications; however, for those who are open to exploring alternative solutions and approaches, it could open new opportunities.

If there is a high-end storage system platform (e.g. Enginuity) that continues to evolve, re-invent itself in terms of moving into or finding new uses and markets the EMC VMAX would be at or near the top of such list. For the other vendors of high-end storage system that are also evolving, you can have an Atta boy or Atta girl as well to make you feel better, loved and not left out or off of such list. ;)

Ok, nuff said for now.

Disclosure: EMC is not a StorageIO client; however, they have been in the past, directly and via acquisitions that they have done. I am however a customer of EMC via my Iomega IX4 NAS (I never did get the IX2 that I supposedly won at EMCworld ;) ) that I bought on Amazon.com, and indirectly via VMware products that I have; oh, and they did send me a copy of the new book Human Face of Big Data (read more here).

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

SSD, flash and DRAM, DejaVu or something new?

Recently I was in Europe for a couple of weeks, including stops at Storage Networking World (SNW) Europe in Frankfurt, StorageExpo Holland, Ceph Day in Amsterdam (object and cloud storage), and Nijkerk where I delivered two separate 2-day seminars and a single 1-day seminar.

Frankfurt train station; inside the front of an ICE train going from Frankfurt to Utrecht

At the recent StorageExpo Holland event in Utrecht, I gave a couple of presentations: one on cloud, virtualization and storage networking trends, the other taking a deeper look at Solid State Devices (SSDs). As in the past, StorageExpo Holland was great, in a fantastic venue with many large exhibits and great attendance, which I heard was over 6,000 people over two days (excluding exhibitor vendors, vars, analysts, press and bloggers), several times larger than what was seen in Frankfurt at the SNW event.

Ilja Coolen (twitter @iCoolen), session host for the SSD presentation in Utrecht; StorageExpo Holland exhibit show floor in Utrecht

Both presentations were very well attended and included lively interactive discussion during and after the sessions. The theme of my second talk was SSD: the question is not if, rather what to use where, how and when, which brings us to this post.

For those who have been around or using SSD for more than a decade outside of cell phones, cameras, SD cards or USB thumb drives, SSD probably means DRAM based with some form of data persistency mechanism. More recently, mention SSD and that implies nand flash based, either MLC, eMLC or SLC. Some might even think of NVRAM or other emerging forms of SSD including MRAM, PCM or mem-resistors among others; however, let's stick to nand flash and DRAM for now.

SSD technology evolution

Often in technology what is old can be new, and what is new can be seen as old. If you have seen, experienced or done something before you will have a sense of DejaVu and it might be evolutionary. On the other hand, if you have not seen, heard or experienced it, or it has found a new audience, then it can be revolutionary or maybe even an industry first ;).

Technology evolves, gets improved on, matures, and can often go in cycles of adoption, deployment, refinement, retirement, and so forth. SSD in general has been an on again, off again type cycle technology for the past several decades, except for the past six to seven years. Normally there is an up cycle tied to different events: servers not being fast enough or affordable (so use SSD to help address performance woes), or drives and storage systems not being fast enough, and so forth.

Btw, for those of you who think that the current SSD focused technology (nand flash) is new, it is in fact 25 years old and still evolving and far from reaching its full potential in terms of customer deployment opportunities.

Nand flash memory has helped keep SSD practical for the past several years, riding a curve similar to the one keeping the hard disk drives (HDDs) they were supposed to replace alive: improved reliability, endurance or duty cycle, better annual failure rate (AFR), larger space capacity, lower cost, and enhanced interfaces, packaging, power and functionality.

Where SSD can be used and options

DRAM, historically at least for enterprise, has been the main option for SSD based solutions using some form of data persistency. Data persistency options include battery backup combined with internal HDDs to de-stage information from the DRAM before power was lost. TMS (recently bought by IBM) was one of the early SSD vendors from the DRAM era that made the transition to flash, including being one of the first many years ago to combine DRAM as a cache layer over nand flash as a persistency or de-stage layer. This is an example of how, if you were not familiar with TMS back then and their capabilities, you might think or believe that some more recent introductions are new and revolutionary, and perhaps they are in their own right or with enough caveats and qualifiers.

An emerging trend, which for some will be Dejavu, is that of using more DRAM in combination with nand flash SSD.

Oracle is one example of a vendor who IMHO rather quietly (intentionally or accidentally) has done this in the 7000 series storage systems as well as ExaData based database storage systems. Rest assured they are not alone; in fact many of the legacy large storage vendors have also piled up large amounts of DRAM based cache in their storage systems. For example EMC with 2TByte of DRAM cache in their VMAX 40K, or similar systems from Fujitsu, HP, HDS, IBM and NetApp (including the recent acquisition of DRAM based CacheIQ) among others. This has also prompted the question of whether SSD has been successful in traditional storage arrays, systems or appliances, as some would have you believe it has not; click here to learn more and cast your vote.

SSD, IO, memory and storage hierarchy

So is the future in the past? Some would say no, some will say yes, however IMHO there are lessons to learn and leverage from the past while looking and moving forward.

Early SSDs were essentially RAM disks, that is, a portion of main random access memory (RAM), or what we now call DRAM, set aside as a non-persistent (unless battery backed up) cache or device. Using a device driver, applications could use the RAM disk as though it were a normal storage system. Different vendors sprang up with drivers for various platforms, and then disappeared as the need for them was reduced by faster storage systems and interfaces, RAM disk drivers supplied by platform vendors, not to mention SSD devices.

Oh, for you tech trivia types, there were also database machines from the late 80's such as Britton Lee that would offload your database processing functions to a specialized appliance. Sound like Oracle ExaData I, II or III to anybody?

Oracle ExaData storage system

Ok, so we have seen this movie before, no worries, old movies or shows get remade, and unless you are nostalgic or cling to the past, sure some of the remakes are duds, however many can be quite good.

Same goes with the remake of some of what we are seeing now. Sure there is a generation that does not know nor care about the past; it's full speed ahead, leveraging whatever will get them there.

Thus we are seeing in-memory databases again; some of you may remember the original series (pick your generation, platform, tool and technology), with each variation getting better. With 64-bit processors, 128-bit and beyond file systems and addressing, not to mention the ability for more DRAM to be accessed directly or via memory address extension, combined with memory data footprint reduction or compression, there is more space to put things (e.g. no such thing as a data or information recession).

Let's also keep in mind that the best IO is the IO that you do not have to do, and that SSD, which is an extension of the memory map, plays by the same rules as real estate: location matters.

Thus, here we go again for some of you (DejaVu), while for others get ready for a new and exciting ride (new and revolutionary). We are back to the future with in-memory databases, which for a time will take some pressure off underlying IO systems, until they once again outgrow server memory addressing limits (or IT budgets).

However for those who do not fall into a false sense of security, no fear, as there is no such thing as a data or information recession. Sure as the sun rises in the east and sets in the west, sooner or later those IO’s that were or are being kept in memory will need to be de-staged to persistent storage, either nand flash SSD, HDD or somewhere down the road PCM, mram and more.

There is another trend: with more IOs being cached, reads are moving to where they should be resolved, which is closer to the application, higher up in the memory and IO pyramid or hierarchy (shown above).

Thus, we could see a shift over time to more writes and ugly IOs being sent down to the storage systems. Keep in mind that any cache historically provides temporal relief; the question is how long that temporal relief lasts until the next new and revolutionary, or DejaVu, technology shows up.

Ok, go have fun now, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

IBM vs. Oracle, NAD intervenes, again

With HP announcing that they were sold a bogus deal with Autonomy (read here, here and here among others) and the multi-billion dollar write off (loss), or the speculation of who will be named the new CEO of Intel in 2013, don't worry if you missed the latest in the ongoing IBM vs. Oracle campaign. The other day the NAD (National Advertising Division), part of the Better Business Bureau (BBB), issued yet another statement about IBM and Oracle (read here and posted below).

In case you had not heard, earlier this year, Oracle launched an advertising promotion touting how much faster their solutions are vs. IBM. Perhaps you even saw the advertising billboards along highways or in airports making the Oracle claims.

Big Blue (e.g. IBM), being the giant that they are, was not going to take the Oracle challenge sitting down, and stepped up and complained to the Better Business Bureau (BBB). As a result, the NAD issued a decision for Oracle to stop the ads (read more here). Oracle at $37.1B (May 2012 annual earnings) is about a third the size of IBM at $106.9B (2011 earnings), thus neither is exactly a small business.

Let's get back to the topic at hand: the NAD issued yet another directive. In the latest spat, after the first ads, Oracle launched the 10M challenge (you can read about that here).

Oracle 10 million dollar challenge ad image

Once again the BBB and the NAD weighs in for IBM and issued the following statement (mentioned above):

For Immediate Release
Contact: Linda Bean
212.705.0129

NAD Determines Oracle Acted Properly in Discontinuing Performance Claim Couched in ‘Contest’ Language

New York, NY – Nov. 20, 2012 – The National Advertising Division has determined that Oracle Corporation took necessary action in discontinuing advertising that stated its Exadata server is “5x Faster Than IBM … Or you win $10,000,000.”

The claim, which appeared in print advertising in the Wall Street Journal and other major newspapers, was challenged before NAD by International Business Machines Corporation.

NAD is an investigative unit of the advertising industry system of self-regulation and is administered by the Council of Better Business Bureaus.

As an initial matter, NAD considered whether or not Oracle’s advertisement conveyed a comparative performance claim – or whether the advertisement simply described a contest.

In an NAD proceeding, the advertiser is obligated to support all reasonable interpretations of its advertising claims, not just the message it intended to convey. In the absence of reliable consumer perception evidence, NAD uses its judgment to determine what implied messages, if any, are conveyed by an advertisement.

Here, NAD found that, even accounting for a sophisticated target audience, a consumer would be reasonable to take away the message that all Oracle Exadata systems run five times as fast as all IBM’s Power computer products. NAD noted in its decision that the fact that the claim was made in the context of a contest announcement did not excuse the advertiser from its obligation to provide substantiation.

The advertiser did not provide any speed performance tests, examples of comparative system speed superiority or any other data to substantiate the message that its Exadata computer systems run data warehouses five times as fast as IBM Power computer systems.

Accordingly, NAD determined that the advertiser’s decision to permanently discontinue this advertisement was necessary and appropriate. Further, to the extent that Oracle reserves the right to publish similar advertisements in the future, NAD cautioned that such performance claims require evidentiary support whether or not the claims are couched in a contest announcement.

Oracle, in its advertiser’s statement, said it disagreed with NAD’s findings, but would take “NAD’s concerns into account should it disseminate similar advertising in the future.”

###

NAD’s inquiry was conducted under NAD/CARU/NARB Procedures for the Voluntary Self-Regulation of National Advertising. Details of the initial inquiry, NAD’s decision, and the advertiser’s response will be included in the next NAD/CARU Case Report.

About Advertising Industry Self-Regulation: The Advertising Self-Regulatory Council establishes the policies and procedures for advertising industry self-regulation, including the National Advertising Division (NAD), Children’s Advertising Review Unit (CARU), National Advertising Review Board (NARB), Electronic Retailing Self-Regulation Program (ERSP) and Online Interest-Based Advertising Accountability Program (Accountability Program.) The self-regulatory system is administered by the Council of Better Business Bureaus.

Self-regulation is good for consumers. The self-regulatory system monitors the marketplace, holds advertisers responsible for their claims and practices and tracks emerging issues and trends. Self-regulation is good for advertisers. Rigorous review serves to encourage consumer trust; the self-regulatory system offers an expert, cost-efficient, meaningful alternative to litigation and provides a framework for the development of self-regulatory responses to emerging issues.

To learn more about supporting advertising industry self-regulation, please visit us at: www.asrcreviews.org.

Linda Bean Director, Communications,
Advertising Self-Regulatory Council

Tel: 212.705.0129
Cell: 908.812.8175
lbean@asrc.bbb.org

112 Madison Ave.
3rd Fl.
New York, NY
10016

Not surprisingly, IBM sent the following email to highlight their latest news:

Greg,

For the third time in eight months Oracle has agreed to kill a misleading advertisement targeting IBM after scrutiny from the Better Business Bureau’s National Advertising Division.

Oracle's '$10 Million Challenge' ad claimed that its Exadata server was 'Five Times Faster than IBM Power or You Win $10,000,000.' The advertising council just issued a press release announcing that the claim was not supported by the evidence in the record, and that Oracle has agreed to stop making the claim. '[Oracle] did not provide speed performance tests, examples of comparative systems speed superiority or any other data to substantiate its message,' the BBB says in the release. The ads ran in The Wall Street Journal, The Economist, Chief Executive Magazine, trade publications and online.

The National Advertising Division reached similar judgments against Oracle advertising on two previous occasions this year. Lofty and unsubstantiated claims about Oracle systems being ‘Twenty Times Faster than IBM’ and ‘Twice as Fast Running Java’ were both deemed to be unsubstantiated and misleading. Oracle quietly shelved both campaigns.

If you follow Oracle’s history of claims, you won’t be surprised that the company issues misleading ads until they’re called out in public and forced to kill the campaign. As far back as 2001, Oracle’s favorite tactic has been to launch unsubstantiated attacks on competitors in ads while promising prize money to anyone who can disprove the bluff. Not surprisingly, no prize money is ever paid as the campaigns wither under scrutiny. They are designed to generate publicity for Oracle, nothing more. You may be familiar with their presentation, ‘Ridding the Market of Competition,’ which they issued to the Society of Competitive Intelligence Professionals laying out their strategy.

The repeated rulings by the BBB even caused analyst Rob Enderle to comment that, ‘there have been significant forced retractions and it is also apparent that increasingly the only people who could cite these false Oracle performance advantages with a straight face were Oracle’s own executives, who either were too dumb to know they were false or too dishonest to care.’

Let me know if you’re interested in following up on this news. You won’t hear anything about it from Oracle.

Best,

Chris

Christopher Rubsamen
Worldwide Communications for PureSystems and Cloud Computing
IBM Systems & Technology Group
aim: crubsamen
twitter: @crubsamen

Wow, I never knew there was a Society of Competitive Intelligence Professionals, however I should not be surprised that one exists.

Now Oracle is what they are: aggressive, with a history of being creative or innovative (e.g. stepping out-of-bounds) in sales and marketing campaigns, benchmarking and other activities. On the other hand, has IBM been victimized at the hands of Oracle, thus having to resort to the BBB and NAD as a new sales and marketing tool to counter Oracle?

Does anybody think that the above will cause Oracle to retreat, repent, and tone down how they compete on the field of sales and marketing of servers, storage, database and related IT, ICT, big and little data, clouds?

Anyone else have a visual of a group of IBMers sitting around a table at an exclusive country club enjoying a fine cigar along with a glass of cognac, toasting each other on their recent success in having the BBB and NAD issue another ruling against Oracle? Meanwhile, perhaps at some left coast yacht club, the Oracle crew are high fiving, congratulating each other on their commission checks while spraying champagne all over the place like they just won the America's Cup race?

How about it, Oracle: IBM says I'm not going to hear anything from you, is that true?

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Little data, big data and very big data (VBD) or big BS?

StorageIO industry trends cloud, virtualization and big data

This is an industry trends and perspective piece about big data and little data, industry adoption and customer deployment.

If you are in any way associated with information technology (IT), business, scientific, media and entertainment computing or related areas, you may have heard big data mentioned. Big data has been a popular buzzword bingo topic and term for a couple of years now. Big data is being used to describe new and emerging, along with existing, types of applications and information processing tools and techniques.

I routinely hear from different people or groups trying to define what is or is not big data, and all too often those definitions are based on a particular product, technology, service or application focus. Thus it should be no surprise that those trying to police what is or is not big data will often do so based on their interests, sphere of influence, knowledge or experience, and what their jobs depend on.

Traveling and big data images

Not long ago while out traveling I ran into a person who told me that big data is new data that did not exist just a few years ago. Turns out this person was involved in geology, so I was surprised that somebody in that field was not aware of or working with geophysical, mapping, seismic and other legacy or traditional big data. Turns out this person was basing his statements on what he knew, heard or was told about, or on his sphere of influence around a particular technology, tool or approach.

Fwiw, if you have not figured it out already, like cloud, virtualization and other technology enabling tools and techniques, I tend to take a pragmatic approach vs. becoming latched on to a particular bandwagon (for or against) per se.

Not surprisingly there is confusion and debate about what is or is not big data, including whether it only applies to new vs. existing and old data. As with any new technology, technique or buzzword bingo topic theme, various parties will try to place what is or is not under the definition to align with their needs, goals and preferences. This is the case with big data, where you can routinely find proponents of Hadoop and MapReduce positioning big data as aligning with the capabilities and usage scenarios of those related technologies for business and other forms of analytics.

SAS software for big data

Not surprisingly, the granddaddy of all business analytics, data science and statistical analysis number crunching is the Statistical Analysis System (SAS) from the SAS Institute. If these types of technology solutions and their peers define what big data is, then SAS (not to be confused with Serial Attached SCSI, which can be found on the back-end of big data storage solutions) can be considered first generation big data analytics or Big Data 1.0 (BD1 ;) ). That means Hadoop MapReduce is Big Data 2.0 (BD2 ;) ;) ) if you like, or dislike for that matter.

Funny thing about some fans and proponents or surrogates of BD2: they may have heard of BD1 tools like SAS, yet have a limited understanding of what they are or how they are or can be used. When I worked in IT as a performance and capacity planning analyst focused on servers, storage, network hardware, software and applications, I used SAS to crunch various streams of event, activity and other data from diverse sources. This involved correlating data and running various analytic algorithms on it to determine response times, availability, usage and other things in support of modeling, forecasting, tuning and troubleshooting. Hmm, sound like first generation big data analytics or Data Center Infrastructure Management (DCIM) and IT Service Management (ITSM) to anybody?

Now to be fair, comparing SAS, SPSS or any number of other BD1 generation tools to Hadoop and MapReduce or BD2 second generation tools is like comparing apples to oranges, or apples to pears (see the sketch below for a taste of the BD2 style).
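To give a flavor of what the BD2 style of processing looks like, here is a minimal word-count sketch of the map/reduce pattern in Python. This is a single-node illustration only; real Hadoop MapReduce shards the map and reduce phases across a cluster, and the function names here are invented for this example.

```python
from collections import defaultdict

def map_phase(records):
    # map step: emit (key, value) pairs from each input record
    for record in records:
        for word in record.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    # shuffle/reduce step: group the pairs by key and aggregate values
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

logs = ["big data little data", "big bandwidth big data"]
print(reduce_phase(map_phase(logs)))
# {'big': 3, 'data': 3, 'little': 1, 'bandwidth': 1}
```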

Let's move on, as there is much more to big data than simply a focus on SAS or Hadoop.

StorageIO industry trends cloud, virtualization and big data

Another type of big data is the information generated, processed, stored and used by applications that results in large files, data sets or objects. Large files, objects or data sets include low resolution and high-definition photos, videos, audio, security and surveillance, geophysical mapping and seismic exploration among others. Then there are data warehouses, where transactional data from databases gets moved for analysis in systems such as those from Oracle, Teradata, Vertica or FX among others. Some of those tools even play (or work) in both the traditional (e.g. BD1) and new or emerging (BD2) worlds.

This is where some interesting discussions, debates or disagreements can occur between those who latch onto or want to keep big data associated with being something new and usually focused around their preferred tool or technology. What results from these types of debates or disagreements is a missed opportunity for organizations to realize that they might already be doing or using a form of big data and thus have a familiarity and comfort zone with it.

By having a familiarity or comfort zone vs. seeing big data as something new, different, hype or full of FUD (or BS), an organization can become comfortable with the term big data. Often, after taking a step back and looking at big data beyond the hype or FUD, the reaction is along the lines of: oh yeah, now we get it, sure, we are already doing something like that, so let's take a look at some of the new tools and techniques to see how we can extend what we are doing.

Likewise many organizations are doing big bandwidth already and may not realize it, thinking that is only what media and entertainment, government, technical or scientific computing, high performance computing or high productivity computing (HPC) does. I'm assuming that some of the big data and big bandwidth pundits will disagree, however if in your environment you are doing many large backups, archives, content distribution, or copying large amounts of data for different purposes, then you are consuming big bandwidth and need big bandwidth solutions.

Yes I know, that’s apples to oranges and perhaps stretching the limits of what is or can be called big bandwidth based on somebody’s definition, taxonomy or preference. Hopefully you get the point that there is diversity across various environments as well as types of data and applications, technologies, tools and techniques.

StorageIO industry trends cloud, virtualization and big data

What about little data then?

I often say that if big data is getting all the marketing dollars to generate industry adoption, then little data is generating all the revenue (and profit or margin) dollars via customer deployment. While tools and technologies related to Hadoop (or Haydoop if you are from HDS) are getting the industry adoption attention (e.g. marketing dollars being spent), revenues from actual customer deployments are growing elsewhere.

Where big data revenues are strongest for most vendors today is centered around solutions for hosting, storing, managing and protecting big files and big objects. These include scale out NAS solutions for large unstructured data like those from Amplidata, Cray, Dell, Data Direct Networks (DDN), EMC (e.g. Isilon), HP X9000 (IBRIX), IBM SONAS, NetApp, Oracle and Xyratex among others. Then there are flexible converged compute storage platforms optimized for analytics and running different software tools, such as those from EMC (Greenplum), IBM (Netezza), NetApp (via partnerships) or Oracle among others, that can be used for different purposes in addition to supporting Hadoop and MapReduce.

If little data is databases and things not generally lumped into the big data bucket, and if you think or perceive big data only to be Hadoop MapReduce based data, does that mean all the large unstructured non little data is very big data or VBD?

StorageIO industry trends cloud, virtualization and big data

Of course the virtualization folks might want to (if they have not already) corner the V for Virtual Big Data. In that case, instead of Very Big Data, how about very very Big Data (vvBD)? How about Ultra-Large Big Data (ULBD), or High-Revenue Big Data (HRBD)? Granted, the HR might cause some to think it stands for Health Records or Human Resources, both of which btw leverage different forms of big data regardless of what you see or think big data is.

Does that then mean we should really be calling videos, audio, PACS, seismic, security surveillance video and related data VBD? Would this further confuse the market or the industry, or help elevate it to a grander status in terms of size (data file or object capacity, bandwidth, market size and application usage, market revenue and so forth)?

Do we need various industry consortiums, lobbyists or trade groups to go off and create models, taxonomies, standards and dictionaries based on their constituents' needs, and would those align with the needs of customers? After all, there are big dollars flowing around big data industry adoption (marketing).

StorageIO industry trends cloud, virtualization and big data

What does this all mean?

Is Big Data BS?

First let me be clear: big data is not BS, however there is a lot of marketing BS by some, along with hype and FUD, adding to the confusion and chaos, perhaps even causing missed opportunities. Keep in mind that in chaos and confusion there can be opportunity for some.

IMHO big data is real.

There are different variations, use cases and types of products, technologies and services that fall under the big data umbrella. That does not mean everything can or should fall under the big data umbrella as there is also little data.

What this all means is that there are different types of applications for various industries that have big and little data, virtual and very big data from videos, photos, images, audio, documents and more.

Big data is a big buzzword bingo term these days, with big vendor marketing dollars being applied, so no surprise there is buzz, hype, FUD and more.

Ok, nuff said, for now.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Cloud, virtualization, storage and networking in an election year

My how time flies; seems like just yesterday (back in 2008) that I did a piece titled Politics and Storage, or, storage in an election year V2.008, and if you are not aware, it is 2012 and thus an election year in the U.S. as well as in many other parts of the world. Being an election year, it's not just about politicians, their supporters, pundits, surrogates, donors and voters; it's also a technology decision-making and acquisition year (as are most years) for many environments.

Similar to politics, some technology decisions will be major while others will be minor or renewals so to speak. Major decisions will revolve around strategies, architectures, visions, implementation plans and technology selections, including products, protocols, processes, people, vendors or suppliers and services for traditional, virtual and cloud data infrastructure environments.

Vendors, suppliers, service providers and their associated industry forums or alliances and trade groups are in various sales and marketing awareness campaigns. These campaigns will help decide who gets chosen by customers or prospects for technology acquisitions ranging from hardware, software and services including servers, storage, IO and networking, desktops, power, cooling, facilities, management tools, virtualization and cloud products and services along with related items.

The politics of data infrastructures, including servers, storage, networking, hardware, software and services spanning physical, cloud and virtual environments, have similarities to other political races. These include, in many organizations, interdepartmental rivalry over budgets or funding, service levels, decision-making, turf wars and technology ownership, not to mention the usual vendor vs. vendor, VAR vs. VAR, service provider vs. service provider or other match ups.

On the other hand, data and storage are also being used to support political campaigns in many ways across physical, virtual and cloud deployment scenarios.

StorageIO industry trends cloud, virtualization and big data

Let us not forget about the conventions, or what are more commonly known as shows, conferences and user group events in the IT world. For example, EMCworld earlier this year, Dell Storage Forum, or the recent VMworld (or click here to view video from a past VMworld party with INXS) and Oracle Open World, along with many vendor analyst, partner, press and media or blogger days.

Here are some 2012 politics of data infrastructure and storage campaign match-ups:

Speaking of networks vs. server and storage or software and convergence, how about Brocade vs. Cisco, Qlogic vs. Emulex, Broadcom vs. Mellanox, Juniper vs. HP and Dell (Force10) or Arista vs. others in the race for SAN LAN MAN WAN POTS and PANs.

Then there are the claims, counter claims, pundits, media, bloggers, trade groups or lobbyist, marketing alliance or pacs, paid for ads and posts, tweets and videos along with supporting metrics for traditional and social media.

Let's also not forget about polls, and more polls.

Certainly, there are vendors vs. vendors relying on their campaign teams (sales, marketing, engineering, financing and external surrogates), similar to what you would find with a politician; of course, scope, size and complexity vary.

Surrogates include analysts, bloggers, consultants, business partners, community organizers, editors, VARs, influencers, press, public relations and publications among others. Some claim to be objective and free of vendor influence while leveraging simple to complex schemes for remuneration (e.g. getting paid), while others simply state what they are doing and with whom.

Likewise, some point fingers at others who are misbehaving while deflecting away from what they are actually doing. Hmm, sounds like the pundit or surrogate two-step (as opposed to the Potomac two-step), which prompts the question of who is checking the fact checkers and making disclosures (disclosure: this piece is being sponsored by StorageIO ;) ).

StorageIO industry trends cloud, virtualization and big data

What does this all mean?

Use your brain, use your eyes and ears, and use your nose, all of which have dual paths to your senses.

In other words, if something sounds or looks too good to be true, it probably isn’t.

Likewise if something smells funny or does not feel right to your senses or common sense, it probably is not or at least requires a closer look or analysis.

Be an informed decision maker balancing needs vs. wants to make effective selections regardless of if for a major or minor item, technology, trend, product, process, protocol or service. Informed decisions also mean looking at both current and evolving or future trends, challenges and needs which for data infrastructures including servers, storage, networking, IO fabrics, cloud and virtualization means factoring in changing data and information life cycles and access or usage patterns. After all, while there are tough economic times on a global basis, there is no such thing as a data or information recession.

StorageIO and uncle sam want you for cloud virtualization and data storage networking

This also means gaining insight and awareness of issues and challenges, plus balancing awareness and knowledge (G2) vs. looks, appearances and campaign sales pitches (GQ) for your particular environment, priorities and preferences.

Keep in mind, in the spirit of legendary Chicago style voting, when it comes to storage and data infrastructure topics, technologies and decisions: spend early, spend often, and spend for those who cannot, to keep the vendors and their ecosystem of partners happy.

Note that this post is neither supported, influenced, endorsed nor paid for by any vendors, VARs, service providers, trade groups, political action committees or Picture Archive Communication systems (e.g. PACs, both of which deal with and in big data), along with industry consortiums, their partners, customers or surrogates, and they probably would not approve of it anyway.

With that being said, I am Greg Schulz of StorageIO and am not running for or from anything this year and I do endorse the above post ;).

Ok, nuff said for now

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Who will be the winner with the Oracle $10 million dollar challenge?

Oracle 10 million dollar challenge ad image

In case you missed it, Oracle has a ten million dollar challenge (here, here and here) to prove that their servers and database software technologies are 5 times faster than IBM.

The challenge is open to U.S. Fortune 1000 companies running an Oracle 11g data warehouse on an IBM Power system, with up to 10 winners. The offer expires August 31, 2012 and comes with configuration terms. See this URL for official rules: https://oracle.com/IBMchallenge

Click here to view entry form or click on form below.

Oracle 10 million dollar challenge entry form image

Taking a step back for a moment: if you forgot or had not heard, Oracle earlier this summer had its hands slapped by the US Better Business Bureau (BBB) National Advertising Division (NAD) over performance claims and ads. IBM complained to the BBB about unfair marketing claims being made by Oracle regarding their servers and database products (read more here).

Not one to miss a beat or bit or byte of data, not to mention dollars, Oracle has run ads in newspapers and other venues for the Oracle IBM challenge with the winner receiving $10,000,000.00 USD (details here).

Oracle exadata servers image

This begs the question: who wins? The company or entity that actually can stand up and meet the challenge? How about Oracle, do they win if enough people see, hear, talk (or complain) about the ads and challenges? What about the cost, how will Oracle cover that, or is it simply a drop in the bucket compared to an even larger amount of dollars potentially valued in the billions (e.g. servers, storage, software, services)?

Now for some fun, let's use an inflation calculator with 1974 dollars, as that is when the TV show The Six Million Dollar Man made its debut. If you do not know, that is a TV show where an injured government employee (Steve Austin), played by actor Lee Majors, was rebuilt using bionics in order to be faster and stronger with the then current technology (ok, TV technology). Using the inflation calculator, the 1974 six million dollar man and machine would cost about $27,882,839.76 in 2012 USD (a 364.7% increase).

Going the other way, today's $10,000,000 challenge prize for what Oracle is calling a faster, stronger machine (and associated staff) would have been worth about $2,151,861.17 in 1974 dollars (quick math below). Note that the equivalent amount of compute processing, storage performance and capacity, networking capability and software abilities in 1974 would have cost far more than what the inflation calculator shows. For that, we would need something like a technology inflation (or improvement) calculator.
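For those who want to check the math, here is a quick Python sketch using the ratio implied by the figures above; the inflation calculator's exact CPI series is not reproduced here, only the 1974 to 2012 ratio derived from the quoted numbers.

```python
# Ratio implied by the six million dollar man figures quoted above.
FACTOR_1974_TO_2012 = 27_882_839.76 / 6_000_000.00   # ~4.65x

def to_2012_dollars(usd_1974):
    return usd_1974 * FACTOR_1974_TO_2012

def to_1974_dollars(usd_2012):
    return usd_2012 / FACTOR_1974_TO_2012

print(f"${to_2012_dollars(6_000_000):,.2f}")    # ≈ $27,882,839.76 (as cited above)
print(f"${to_1974_dollars(10_000_000):,.2f}")   # ≈ $2,151,861.17 (as cited above)
```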

Learn more about the Oracle challenge here, here and here, as well as the NAD announcement here, and the six million dollar man here.

Ok, nuff said for now.

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Oracle, Xsigo, VMware, Nicira, SDN and IOV: IO IO it's off to work they go

StorageIO industry trends and perspectives

In case you missed it, VMware recently announced spending $1.05 billion USD to acquire startup Nicira for its virtualization and software technology that enables software defined networks (SDN). Also last week, Oracle was in the news getting its hands slapped by the BBB's NAD for making misleading advertising performance claims vs. IBM.

On the heels of VMware buying Nicira for software defined networking (SDN), or what is also known as IO virtualization (IOV) and virtualized networking, Oracle is now claiming its own SDN capabilities with the announcement of its intent to acquire Xsigo for an undisclosed amount. Founded in 2004, Xsigo has a hardware platform combined with software that enables attaching servers to different Fibre Channel (SAN) and Ethernet based (LAN) networks with its version of IOV.

Xsigo has made its name in the IO virtualization (IOV) and converged networking space, along with server and storage virtualization, over the past several years, including partnerships with various vendors.

Buzz word bingo

Technology buzzwords and buzz terms can often be a gray area, leaving plenty of room for marketers and PR folks to run with them. Case in point: AaaS, Big data, Cloud, Compliance, Green, IaaS, IOV, Orchestration, PaaS and Virtualization among other buzzword bingo or XaaS topics. Since Xsigo has been out front in messaging and industry awareness around IO networking convergence of Ethernet based Local Area Networks (LANs) and Fibre Channel (FC) based Storage Area Networks (SANs), along with embracing InfiniBand, it made sense for them to play to their strength, which is IO virtualization (aka IOV).

To me and among others (here and here and here) it is interesting that Xsigo had not laid claim to being part of the software defined networking (SDN) movement or the affiliated OpenFlow networking initiatives, as happened with Nicira (and Oracle for that matter). When the Oracle marketing and PR folks put out their press release on a Monday morning, some of the media and press (trade industry, financial and general news agencies alike) took the Oracle script hook, line and sinker, running with it.

What was effective is how well many industry trade pubs and their analysts simply picked up the press release story and ran with it in the all too common race to see who could get the news or story out first, or in some cases before it actually happens.

Image of media, news papers

To be clear, not all pubs jumped, including some of those mentioned by Greg Knieriemen (aka @knieriemen) over at SpeakinginTech highlights. I know some who took the time to call, ask around and leverage their journalistic training to dig, research and find out what this really meant vs. simply taking and running with the script. An example of one of those calls was with Beth Pariseau (aka @pariseautt); you can read her story here and here.

Interestingly enough, the Xsigo marketers had not embraced the SDN term, sticking with the better known (at least in some circles) IOV and virtual IO descriptions. What is also interesting is that just last week Oracle marketing had their hands slapped by the Better Business Bureau (BBB) NAD after IBM complained about unfair performance based advertisements for Exadata.

Oracle Exadata

Hmm, I wonder if the SDN police or somebody else will lodge a similar complaint with the BBB on behalf of those doing SDN?

Both Oracle and Xsigo along with other InfiniBand (and some Ethernet and PCIe) focused vendors are members of the Open Fabric initiative, not to be confused with the group working on OpenFlow.

StorageIO industry trends and perspectives

Here are some other things to think about:

Oracle has a history of doing different acquisitions without disclosing terms, as well as doing them based on earn outs such as was the case with Pillar.

Oracle uses Ethernet in its servers and appliances and has been an adopter of InfiniBand, primarily for node-to-node communication, however also for server-to-application traffic.

Oracle is also an investor in Mellanox, the folks that make InfiniBand and Ethernet products.

Oracle has built various stacks including ExaData (Database machine), Exalogic, Exalytics and Database Appliance in addition to their 7000 series of storage systems.

Oracle has done earlier virtualization related acquisitions including Virtual Iron.

Oracle has a reputation with some of their customers who love to hate them for various reasons.

Oracle has a reputation of being aggressive, even by other market leader aggressive standards.

Integrated solution stacks (aka stack wars), or what some remember as bundles, continue, and Oracle has many solutions.

What will happen to Xsigo as you know it today (besides what the press releases are saying)?

While Xsigo was not a member of the Open Networking Forum (ONF), Oracle is.

Xsigo is a member of the Open Fabric Alliance along with Oracle, Mellanox and others interested in servers, PCIe, InfiniBand, Ethernet, networking and storage.

StorageIO industry trends and perspectives

What’s my take?

While there are similarities in that both Nicira and Xsigo are involved with IO virtualization, what they are doing, how they are doing it, with whom they are doing it, along with where they can play, all vary.

Not sure what Oracle paid, however assuming that it was in the couple of million dollars or less, in cash or a combination of stock, both they and the investors, as well as some of the employees, friends and families, did ok.

Oracle also gets some intellectual property that it can combine with other earlier acquisitions via Sun and Virtual Iron, along with its investment in InfiniBand (and now also Ethernet) vendor Mellanox.

Likewise, Oracle gets some extra technology that they can leverage in their various stacked or integrated (aka bundled) solutions for both virtual and physical environments.

For Xsigo customers the good news is that you now know who will be buying the company; however, there are and should be questions about the future beyond what is being said in press releases.

Does this acquisition give Oracle a play in the software defined networking space like Nicira gives VMware? I would say no, given Xsigo's hardware dependency; however, it does give Oracle some extra technology to play with.

Likewise, while SDN is important and a popular buzzword topic, since OpenFlow comes up in conversations, perhaps that should be more of the focus vs. whether a solution is all software or hardware plus software.

StorageIO industry trends and perspectives

I also find it entertaining how last week the Better Business Bureau (BBB) and NAD (National Advertising Division) slapped Oracle's hands after IBM complained of misleading performance claims about Oracle Exadata vs. IBM. The reason I find it entertaining is not that Oracle had its hands slapped or that IBM complained to the BBB, rather how the Oracle marketers and PR folks came up with a spin around what could be called a proprietary SDN (hmm, pSDN?) story and fed it to the press and media, who then ran with it.

I'm not convinced that this is an all-out launch of a war by Oracle vs. Cisco, let alone any of the other networking vendors, as some have speculated (makes for good headlines though). Instead I'm seeing it as more of an opportunistic acquisition by Oracle, most likely at a good middle of summer price. Now if Oracle really wanted to go to battle with Cisco (and others), then there are others to buy such as Brocade or Juniper, etc. However, there are other opportunities for Oracle to be focused on (or sidetracked by) right now.

Oh, let's also see what Cisco has to say about all of this, which should be interesting.

Additional related links:
Data Center I/O Bottlenecks Performance Issues and Impacts
I/O, I/O, Its off to Virtual Work and VMworld I Go (or went)
I/O Virtualization (IOV) Revisited
Industry Trends and Perspectives: Converged Networking and IO Virtualization (IOV)
The function of XaaS(X) Pick a letter
What is the best kind of IO? The one you do not have to do
Why FC and FCoE vendors get beat up over bandwidth?

StorageIO industry trends and perspectives

If you are interested in learning more about IOV or Xsigo, or are having trouble sleeping, click here, here, here, here, here, here, here, here, here, here, here, here, here, or here (I think that's enough links for now ;).

Ok, nuff said for now, as I have probably requalified for being on the Oracle you-know-what list for not sticking to the story script, oops, excuse me, I mean press release message.

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

NAD recommends Oracle discontinue certain Exadata performance claims

I received the following press release in my inbox today from the National Advertising Division (NAD) recommending that Oracle stop making certain performance claims about Exadata after a complaint from IBM.

Oracle Exadata

In case you are not familiar with Exadata, it is a database machine or storage appliance that only supports Oracle database systems (learn more here). Oracle, having bought Sun Microsystems a few years back, moved from being a software vendor that competed with other vendors' software solutions, including those from IBM, while running on hardware from Dell, HP and IBM among others. Now that Oracle is in the hardware business, while you will still find Oracle software products running on competitors' hardware (servers and storage), Oracle is also more aggressively competing with those same partners, particularly IBM.

Hmm, to quote Scooby Doo: Rut Roh!

Looks like IBM complained to the Better Business Bureau (BBB) National Advertising Division (NAD), which resulted in the Advertising Self-Regulatory Council (ASRC) making its recommendation below (more about NAD and ASRC can be found here). Based on a billboard that I saw while riding from JFK airport into New York City last week, I would not be surprised if a company whose two initials start with an H and end with a P were to file a similar complaint.

I wonder if the large wall-size Oracle advertisement that used to be in the entryway of the White Plains (IATA: HPN) airport (e.g. in IBM's backyard), welcoming you to the terminal as you get off the airplanes, is still there.

The following is the press release that I received:

For Immediate Release
Contact: Linda Bean
212.705.0129

NAD Finds Oracle Took Necessary Action in Discontinuing Comparative Performance Claims for Exadata; Oracle to Appeal NAD Decision

New York, NY – July 24, 2012 – The National Advertising Division has recommended that Oracle Corporation discontinue certain comparative product-performance claims for the company's Exadata database machines, following a challenge by International Business Machines Corporation. Oracle said it would voluntarily discontinue the challenged claims, but noted that it would appeal NAD's decision to the National Advertising Review Board.

The advertising claims at issue appeared in a full-page advertisement in the Wall Street Journal and included the following:

  • “Exadata 20x Faster … Replaces IBM Again”
  • “Giant European Retailer Moves Databases from IBM Power to Exadata … Runs 20 Times Faster”

NAD also considered whether the advertising implied that all Oracle Exadata systems are twenty times faster than all IBM Power systems.

The advertisement featured the image of an Oracle Exadata system, along with the statement: “Giant European Retailer Moves Databases from IBM Power to Exadata Runs 20 Times Faster.” The advertisement also offered a link to the Oracle website: “For more details oracle.com/EuroRetailer.” 

IBM argued that the “20x Faster” claim makes overly broad references to “Exadata” and “IBM Power,” resulting in a misleading claim, which the advertiser’s evidence does not support.  In particular, the challenger argued that by referring to the brand name “IBM Power” without qualification, Oracle was making a broad claim about the entire IBM Power systems line of products. 

The advertiser, on the other hand, argued that the advertisement represented a case study, not a line claim, and noted that the sophisticated target audience would understand that the advertisement is based on the experience of one customer – the “Giant European Retailer” referenced in the advertisement.

In a NAD proceeding, the advertiser is obligated to support all reasonable interpretations of its advertising claims, not just the message it intended to convey.   In the absence of reliable consumer perception evidence, NAD uses its experienced judgment to determine what implied messages, if any, are conveyed by an advertisement.   When evaluating the message communicated by an advertising claim, NAD will examine the claims at issue in the context of the entire advertisement in which they appear.

In this case, NAD concluded that while the advertiser may have intended to convey the message that in one case study a particular Exadata system was up to 20 times faster when performing two particular functions than a particular IBM Power system, Oracle’s general references to “Exadata” and “IBM Power,” along with the bold unqualified headline “Exadata 20x Faster Replaces IBM Again,” conveyed a much broader message.

NAD determined that at least one reasonable interpretation of the challenged advertisement is that all – or a vast majority – of Exadata systems consistently perform 20 times faster in all or many respects than all – or a vast majority – of IBM Power systems. NAD found that the message was not supported by the evidence in the record, which consisted of one particular comparison of one consumer's specific IBM Power system to a specific Exadata system.

NAD further determined that the disclosure provided on the advertiser’s website was not sufficient to limit the broad message conveyed by the “20x Faster” claim. More importantly, NAD noted that even if Oracle’s website disclosure was acceptable – and had appeared clearly and conspicuously in the challenged advertisement – it would still be insufficient because an advertiser cannot use a disclosure to cure an otherwise false claim.

NAD noted that Oracle’s decision to permanently discontinue the claims at issue was necessary and proper.

Oracle, in its advertiser’s statement, said it was “disappointed with the NAD’s decision in this matter, which it believes is unduly broad and will severely limit the ability to run truthful comparative advertising, not only for Oracle but for others in the commercial hardware and software industry.”

Oracle noted that it would appeal all of NAD’s findings in the matter.


###

NAD’s inquiry was conducted under NAD/CARU/NARB Procedures for the Voluntary Self-Regulation of National Advertising.  Details of the initial inquiry, NAD’s decision, and the advertiser’s response will be included in the next NAD/CARU Case Report.

About Advertising Industry Self-Regulation:  The Advertising Self-Regulatory Council establishes the policies and procedures for advertising industry self-regulation, including the National Advertising Division (NAD), Children’s Advertising Review Unit (CARU), National Advertising Review Board (NARB), Electronic Retailing Self-Regulation Program (ERSP) and Online Interest-Based Advertising Accountability Program (Accountability Program.) The self-regulatory system is administered by the Council of Better Business Bureaus.

Self-regulation is good for consumers. The self-regulatory system monitors the marketplace, holds advertisers responsible for their claims and practices and tracks emerging issues and trends. Self-regulation is good for advertisers. Rigorous review serves to encourage consumer trust; the self-regulatory system offers an expert, cost-efficient, meaningful alternative to litigation and provides a framework for the development of self-regulatory responses to emerging issues.

To learn more about supporting advertising industry self-regulation, please visit us at: www.asrcreviews.org.

Linda Bean, Director, Communications,
Advertising Self-Regulatory Council

Tel: 212.705.0129
Cell: 908.812.8175
lbean@asrc.bbb.org

112 Madison Ave.
3rd Fl.
New York, NY
10016


Ok, Oracle is no stranger to benchmark and performance claims controversy, having amassed several decades of experience. Anybody remember the silver bullet database test of the late 80s and early 90s, when Oracle set a performance record, except that they never committed the writes to disk?

Something tells me that Oracle and Uncle Larry (e.g. Larry Ellison, who is not really my uncle) will treat this as proof that any kind of press or media coverage is good, and will probably issue something like: IBM must be worried if they have to go to the BBB.

Will a complaint, which I'm sure is not the first to be lodged with the BBB against Oracle, deter customers, or be of more use to IBM sales and their partners in deals vs. Oracle?

What’s your take?

Is this much ado about nothing, a filler for a slow news or discussion day, a break from talking about the VMware acquisition of Nicira or VMware CEO management changes? Perhaps this is an alternative to talking about the CEO of SSD vendor STEC being charged with insider trading, or something other than Larry Ellison buying a Hawaiian island (IMHO he could have gotten a better deal buying Greece), or is this something that Oracle will need to take seriously?

Ok, nuff said for now

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

EMC VFCache respinning SSD and intelligent caching (Part II)

This is the second of a two part series pertaining to EMC VFCache, you can read the first part here.

In this part of the series, let's look at some common questions along with comments and perspectives.

Common questions, answers, comments and perspectives:

Why would EMC not just go into the same market space and mode as FusionIO, a model that many other vendors seem eager to follow? IMHO many vendors are following or chasing FusionIO, and thus most are selling in the same way, perhaps to the same customers. Some of those vendors could very easily (if they are not already) make a quick change to their playbook, adding some new moves to reach a broader audience.

Another smart move here is that by taking a companion or complementary approach, EMC can continue selling existing storage systems to customers, preserving those investments, while also supporting competitors' products. In addition, for those customers who are slow to adopt SSD based techniques, this is a relatively easy and low risk way to gain confidence. Granted, the disk drive was declared dead several years (and yes, also several decades) ago, however it is and will stay alive for many years, with SSD helping to close the IO storage and performance gap.

Storage IO performance and capacity gap
Data center and storage IO performance capacity gap (Courtesy of Cloud and Virtual Data Storage Networking (CRC Press))

Has this been done before? There have been other vendors who have done LUN caching appliances in the past going back over a decade. Likewise there are PCIe RAID cards that support flash SSD as well as DRAM based caching. Even NetApp has had similar products and functionality with their PAM cards.

Does VFCache work with other PCIe SSD cards such as FusionIO? No, VFCache is a combination of software IO intercept and intelligent cache driver along with a PCIe SSD flash card (which, as EMC has indicated, could be supplied by different manufacturers). Thus, for VFCache to be VFCache it requires the EMC IO intercept and intelligent cache software driver.

Does VFCache work with other vendors' storage? Yes; refer to the EMC support matrix, however the product has been architected and designed to install and coexist in a customer's existing environment, which means supporting different EMC block storage systems as well as those from other vendors. Keep in mind that a main theme of VFCache is to complement, coexist with, enhance and protect customers' investments in storage systems to improve their effectiveness and productivity, as opposed to replacing them.

Does VFCache introduce a new point of vendor lock-in or stickiness? Some will see or position this as a new form of vendor lock-in; others, assuming that EMC supports different vendors' storage systems downstream, offers options for different PCIe flash cards and keeps the solution affordable, will assert it is no more lock-in than other solutions. In fact, by supporting third party storage systems as opposed to replacing them, smart sales people and marketeers will position VFCache as being more open and interoperable than some other PCIe flash card vendors' approaches. Keep in mind that avoiding vendor lock-in is a shared responsibility (read more here).

Does VFCache work with NAS? VFCache does not work with NAS (NFS or CIFS) attached storage.

Does VFCache work with databases? Yes, VFCache is well suited for little data (e.g. databases) and traditional OLTP or general business application processing that may not be covered or supported by other so called big data focused or optimized solutions. Refer to this EMC document (and this document here) for more information.

Does VFCache only work with little data? While VFCache is well suited for little data (e.g. databases, SharePoint, file and web servers, traditional business systems), it is also able to work with other forms of unstructured data.

Does VFCache need VMware? No. While VFCache works with VMware vSphere, including a vCenter plug-in, it does not need a hypervisor and is as practical in a physical machine (PM) as it is in a virtual machine (VM).

Does VFCache work with Microsoft Windows? Yes; refer to the EMC support matrix for specific server operating system and hypervisor version support.

Does VFCache work with other Unix platforms? Refer to the EMC support matrix for specific server operating system and hypervisor version support.

How are reads handled with VFCache? The VFCache software (driver if you prefer) intercepts IO requests to LUNs that are being cached, performing a quick lookup to see if there is a valid cache entry on the physical VFCache PCIe card. If there is a cache hit, the IO is resolved from the closer or local PCIe card cache, making for a lower latency or faster response time IO. In the case of a cache miss, the VFCache driver simply passes the IO request onto the normal SCSI or block (e.g. iSCSI, SAS, FC, FCoE) stack for processing by the downstream storage system (or appliance). Note that when the requested data is retrieved from the storage system, the VFCache driver will, based on caching algorithm determinations, place a copy of the data in the PCIe read cache. Thus the real power of VFCache is the software implementing the cache lookup and cache management functions to leverage the PCIe card that complements the underlying block storage systems.

How are writes handled with VFCache? Unless put into a write cache mode, which is not the default, the VFCache software simply passes the IO operation onto the IO stack for downstream processing by the storage system or appliance attached via a block interface (e.g. iSCSI, SAS, FC, FCoE). Note that as part of the caching algorithms, the VFCache software will make determinations of what to keep in cache based on IO activity requests, similar to how cache management results in better cache effectiveness in a storage system. Given EMC's long history of working with intelligent cache algorithms, one would expect some of that DNA exists or will be leveraged further in future versions of the software. Ironically, this is where other vendors with long cache effectiveness histories, such as IBM, HDS and NetApp among others, should also be scratching their collective heads saying wow, we can or should be doing that as well (or better).

Can VFCache be used as a write cache? Yes; while its default mode is to be used as a persistent read cache to complement server and application buffers in DRAM, along with enhancing the effectiveness of downstream storage system (or appliance) caches, VFCache can also be configured as a persistent write cache (the sketch below pulls these read and write paths together).
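Pulling the read, write and write cache answers above together, here is a conceptual Python sketch of the intercept flow. The class and method names are invented for this illustration and this is in no way EMC's actual driver code, just a model of the behavior described.

```python
class CacheIntercept:
    """Conceptual model of an IO intercept with a local PCIe flash cache."""
    def __init__(self, storage_system, write_cache_mode=False):
        self.pcie_cache = {}                  # local PCIe flash card stand-in
        self.storage = storage_system         # downstream block storage stand-in
        self.write_cache_mode = write_cache_mode  # default: read cache only

    def read(self, lun, lba):
        key = (lun, lba)
        if key in self.pcie_cache:            # cache hit: resolved locally, low latency
            return self.pcie_cache[key]
        data = self.storage[key]              # cache miss: normal block stack path
        self.pcie_cache[key] = data           # caching algorithm may keep a copy
        return data

    def write(self, lun, lba, data):
        key = (lun, lba)
        self.pcie_cache[key] = data           # keep the cached copy coherent
        if not self.write_cache_mode:         # default mode passes writes downstream
            self.storage[key] = data
        # in the (non-default) write cache mode, de-staging happens later
```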

Does VFCache include FAST automated tiering between different storage systems? The first version is only a caching tool, however think about it a bit: where the software sits, what storage systems it can work with, its ability to learn and understand IO paths and patterns, and you can get an idea of where EMC could evolve it, similar to what they have done with RecoverPoint among other tools.

Changing data access patterns and lifecycles
Evolving data access patterns and life cycles (more retention and reads)

Does VFCache mean an all or nothing approach with EMC? While the complete VFCache solution comes from EMC (e.g. PCIe card and software), the solution will work with other block attached storage as well as existing EMC storage systems for investment protection.

Does VFCache support NAS based storage systems? The first release of VFCache only supports block based access, however the server that VFCache is installed in could certainly be functioning as a general purpose NAS (NFS or CIFS) server (see supported operating systems in the EMC interoperability notes) in addition to being a database or other application server.

Does VFCache require that all LUNs be cached? No, you can select which LUNs are cached and which ones are not.

Does VFCache run in an active/active mode? In the first release it is active/passive; refer to the EMC release notes for details.

Can VFCache be installed in multiple physical servers accessing the same shared storage system? Yes, however refer to EMC release notes on details about active / active vs. active / passive configuration rules for ensuring data integrity.

Who else is doing things like this? There are caching appliance vendors as well as others, such as NetApp and IBM, who have used SSD flash caching cards in their storage systems or virtualization appliances. However, keep in mind that VFCache places the caching function closer to the application that is accessing it, thereby improving locality of reference (e.g. storage and IO effectiveness).

Does VFCache work with SSD drives installed in EMC or other storage systems? Check the EMC product support matrix for specific tested and certified solutions, however in general if the SSD drive is installed in a storage system that is supported as a block LUN (e.g. iSCSI, SAS, FC, FCoE) in theory it should be possible to work with VFCache. Emphasis, visit the EMC support matrix.

What type of nand flash SSD memory is EMC using in the PCIe card? The first release of VFCache leverages enterprise class SLC (Single Level Cell) nand flash, which has been used in other EMC products for its endurance and long duty cycle, to minimize or eliminate concerns of wear and tear while meeting read and write performance needs. EMC has indicated that, as part of an industry trend, they will also leverage MLC along with Enterprise MLC (EMLC) technologies on a go-forward basis.

Doesn't nand flash SSD cache wear out? While nand flash SSD can wear out over time due to extensive write use, the VFCache approach mitigates this by being primarily a read cache, reducing the number of program/erase cycles (P/E cycles) that occur with write operations, as well as by initially leveraging longer duty cycle SLC flash (see the rough math below). EMC also has several years of experience implementing wear leveling algorithms in their storage system controllers to increase duty cycle and reduce wear on SLC flash, which will carry forward as MLC or Enterprise MLC (EMLC) techniques are leveraged. This differs from vendors who are positioning their SLC or MLC based flash PCIe SSD cards mainly for write operations, which causes more P/E cycles to occur at a faster rate, reducing the duty or useful life of the device.
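To put the wear discussion in rough numbers, here is a back-of-envelope sketch. The 300GB capacity matches the card size mentioned in the next answer, while the P/E cycle counts, write amplification factor and daily write volume are illustrative assumptions only, not EMC specifications.

```python
CARD_GB = 300
PE_CYCLES_SLC = 100_000     # assumed order of magnitude for SLC endurance
PE_CYCLES_MLC = 3_000       # assumed order of magnitude for MLC endurance
DAILY_WRITES_GB = 500       # assumed write volume reaching the card

def lifetime_years(capacity_gb, pe_cycles, daily_writes_gb, write_amp=1.5):
    # total data writable before wear-out, derated by write amplification
    total_writes_gb = capacity_gb * pe_cycles / write_amp
    return total_writes_gb / daily_writes_gb / 365

print(f"SLC: ~{lifetime_years(CARD_GB, PE_CYCLES_SLC, DAILY_WRITES_GB):,.0f} years")
print(f"MLC: ~{lifetime_years(CARD_GB, PE_CYCLES_MLC, DAILY_WRITES_GB):,.1f} years")
```

Under these assumptions the SLC card outlives any realistic deployment window, while the MLC math shows why being primarily a read cache (fewer P/E cycles) matters so much more once MLC or EMLC comes into play.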

How much capacity does the VFCache PCIe card contain? The first release supports a 300GB card and EMC has indicated that added capacity and configuration options are in their plans.

Does this mean disks are dead? Contrary to popular industry folklore (or wishful thinking), the hard disk drive (HDD) has plenty of life left, part of which has been extended by being complemented by VFCache.

Various SSD locations, types, packaging and usage scenario options

Can VFCache work in blade servers? The VFCache software is transparent to blade, rack mount, tower or other types of servers. The hardware part of VFCache is a PCIe card, which means that the blade server or system will need to be able to accommodate a PCIe card to complement the PCIe based mezzanine IO card (e.g. iSCSI, SAS, FC, FCoE) used for accessing storage. What this means is that for blade systems from server vendors such as IBM, who have a PCIe expansion module for their H series blade systems (it consumes a slot normally used by a server blade), PCIe cache cards like those being initially released by EMC could work; however, check the EMC interoperability matrix as well as with your specific blade server vendor for PCIe expansion capabilities. Given that EMC leverages Cisco UCS for their vBlocks, one would assume those systems will also see VFCache modules. NetApp partners with Cisco using UCS in their FlexPods, so you can see where that could go as well, along with potential support from other server vendors including Dell, HP, IBM and Oracle among others.

What about benchmarks? EMC has released some technical documents that show performance improvements in Oracle environments such as this here. Hopefully we will see EMC also release results for other workloads and applications, including Microsoft Exchange Solution Reviewed Program (ESRP) along with SPC, similar to what IBM recently did with their systems among others.

How do the first EMC supplied workload simulations compare vs. other PCIe cards? This is tough to gauge, as many SSD solutions, and in particular PCIe cards, are doing apples to oranges comparisons. For example, to generate a high IOPS rating for marketing purposes, most SSD solutions are stress tested at 512 bytes, or 1/2 of a KByte, which is only 1/8 of a small 4KByte IO. Note that operating systems such as Windows are moving to a 4KByte page allocation size to align with growing IO sizes, with databases moving from the old average of 4KBytes to 8KBytes and larger. What is important to consider is the average IO size and activity profile (e.g. reads vs. writes, random vs. sequential) of your applications. If your application is doing ultra small 1/2 KByte IOs, or even smaller 64 byte IOs (which should be handled by better application or file system caching in DRAM), then the smaller IO size record setting examples will apply. However, if your applications are more mainstream or use larger IOs, then those smaller IO size tests should be taken with a grain of salt. Also keep latency in mind, as many target or opportunity applications for VFCache are response time sensitive or can benefit from the improved productivity they enable.
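
To see why, it helps to convert an IOPS claim into bandwidth at a given IO size. A quick Python sketch (the one million IOPS figure is an arbitrary marketing style example, not any specific vendor result):

```python
def bandwidth_mb_per_sec(iops, io_size_bytes):
    """Convert an IOPS rate at a given IO size into sustained bandwidth (MB/sec)."""
    return iops * io_size_bytes / 1000000.0

# The same headline IOPS number implies very different amounts of data movement:
print(bandwidth_mb_per_sec(1000000, 512))   # ~512 MB/sec at 512 byte IOs
print(bandwidth_mb_per_sec(1000000, 8192))  # ~8192 MB/sec at 8KByte database IOs
```

A card that posts one million IOPS at 512 bytes would need to move 16 times more data per second to sustain that rate at 8KBytes, which is why small IO records rarely translate to larger, more realistic IO sizes.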

What is locality of reference? Locality of reference refers to how close data is to where it is being requested or accessed from. The closer the data is to the application requesting it, the faster the response time and the quicker the work gets done. For example, in the figure below, the L1/L2/L3 on board processor caches are the fastest, yet smallest, while closest to the application running on the server. At the other extreme, further down the stack, storage becomes larger capacity and lower cost, however also lower performing.

Locality of reference data and storage memory
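
To put rough numbers on the figure, the Python sketch below computes a weighted average (effective) access time across tiers. The per-tier latencies and hit rates are illustrative assumptions on my part, not measurements:

```python
# Illustrative per-tier latencies in microseconds (assumptions, not measurements).
tiers = [
    ("CPU L1/L2/L3 cache", 0.01),
    ("Server DRAM",        0.1),
    ("PCIe flash cache",   50.0),
    ("Networked SSD LUN",  200.0),
    ("HDD based LUN",      5000.0),
]

def effective_latency(hit_rates):
    """Weighted average latency given the fraction of IOs served by each tier."""
    return sum(rate * latency for (_, latency), rate in zip(tiers, hit_rates))

# Moving reads from networked storage into a server-side flash cache:
print(effective_latency([0.0, 0.0, 0.0, 0.2, 0.8]))  # mostly HDD: ~4040 usec
print(effective_latency([0.0, 0.0, 0.8, 0.1, 0.1]))  # mostly PCIe flash: ~560 usec
```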

What does cache effectiveness vs. cache utilization mean? Cache utilization is an indicator of how much of the available cache capacity is being used; however, it does not indicate whether the cache is being used well. For example, a cache could be 100 percent utilized yet have a low hit rate. Cache effectiveness, by contrast, is a gauge of how well the available cache is being used to improve performance in terms of more work being done (IOPS or bandwidth) or lower latency and response time.
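
A short Python sketch of the distinction (the numbers are made up purely for illustration):

```python
def cache_metrics(hits, misses, used_gb, capacity_gb):
    """Separate how full a cache is (utilization) from how useful it is (hit rate)."""
    utilization = used_gb / float(capacity_gb)  # fraction of capacity consumed
    hit_rate = hits / float(hits + misses)      # fraction of IOs the cache absorbs
    return utilization, hit_rate

# A completely full cache can still be ineffective, and vice versa:
print(cache_metrics(hits=1000, misses=9000, used_gb=300, capacity_gb=300))  # (1.0, 0.1)
print(cache_metrics(hits=9000, misses=1000, used_gb=150, capacity_gb=300))  # (0.5, 0.9)
```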

Isn't more cache better? More cache is not necessarily better; what matters is how the cache is being used. This is a message I would be disappointed in HDS not to bring up as a point of messaging (or rebuttal), given their history of emphasizing cache effectiveness vs. size or quantity (Hu, that is a hint btw ;).

What is the performance impact of VFCache on the host server? EMC is saying at most 5 percent CPU consumption, which they claim is several times less than the competition's worst scenario, as well as claiming 512MB to 1GB of DRAM used on the server vs. several times that for their competitors. The difference can be attributed to more offload functionality, including the flash translation layer (FTL), wear leveling and other optimizations being handled by the PCIe card vs. being handled in the server's memory using host server CPU cycles.

How does this compare to what NetApp or IBM does? NetApp, IBM and others have done caching with SSD in their storage systems, or leveraged third party PCIe SSD cards from different vendors installed in servers as a storage target. Some vendors such as LSI have done caching on the PCIe card itself (e.g. CacheCade, which in theory has a similar software caching concept to VFCache) to improve performance and effectiveness across JBOD and SAS devices.

What about stale (old or invalid) reads, how does VFCache handle or protect against those? Stale reads are handled via the VFCache management software tool or driver, which leverages caching algorithms to decide what data is valid or invalid.
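
For a mental model, invalidate-on-write is one common generic technique for guarding against stale reads. The Python sketch below illustrates the concept, not EMC's actual algorithm:

```python
# Invalidate-on-write sketch: one generic way to avoid stale reads,
# illustrating the concept rather than the actual VFCache implementation.
cache = {}  # block -> data

def write_block(block, data, backend_write):
    backend_write(block, data)  # persist the new data to the storage system first
    cache.pop(block, None)      # drop any cached copy so the next read refetches

def read_block(block, backend_read):
    if block not in cache:
        cache[block] = backend_read(block)  # repopulate from the authoritative copy
    return cache[block]
```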

How much does VFCache cost? Refer to EMC announcement pricing, however EMC has indicated that they will be competitive with the market (supply and demand).

If a server shuts down or reboots, what happens to the data in the VFCache? Being that the data is in non volatile SLC nand flash memory, information is not lost when the server reboots or loses power, thus it is persistent. While exact details are not known as of this time, it is expected that the VFCache driver and software do some form of cache coherency and validity check to guard against stale reads, or discard any invalid cache entries.

Industry trends and perspectives

What will EMC do with VFCache in the future and on a larger scale, such as an appliance? EMC, via its own internal development and via acquisitions, has demonstrated the ability to use various clustered techniques, such as RapidIO for connecting VMAX nodes and InfiniBand for connecting Isilon nodes. Given an industry trend with several startups using PCIe flash cards installed in servers that then function as an IO storage system, it seems likely, given EMC's history and experience with different storage systems, caching and interconnects, that they could do something interesting. Perhaps Oracle Exadata III (Exadata I was HP, Exadata II was Sun/Oracle) could be an EMC based appliance (that is pure speculation btw)?

EMC has already shown how it can use SSD drives as a cache extension in VNX and CLARiiON storage systems (FAST Cache), in addition to as a target or storage tier combined with FAST for tiering. Given their history with caching algorithms, it would not be surprising to see other instantiations of the technology deployed in complementary ways.

Finally, EMC is showing that it can use nand flash SSD in different ways and various packaging forms to apply to diverse applications or customer environments. The companion or complementary approach EMC is currently taking contrasts with some other vendors who are taking an all or nothing, all SSD as disk is dead approach. Given the large installed base of disk based systems EMC as well as other vendors have in place, not to mention the investment by those customers, it makes sense to allow those customers the option of when, where and how they can leverage SSD technologies to coexist with and complement their environments. Thus with VFCache, EMC is using SSD as a cache enabler to address the decades old and growing storage IO to capacity performance gap, in a force multiplier model that spreads the cost over more TBytes, PBytes or EBytes while increasing the overall benefit, in other words effectiveness and productivity.

Additional related material:
Part I: EMC VFCache respinning SSD and intelligent caching
IT and storage economics 101, supply and demand
2012 industry trends perspectives and commentary (predictions)
Speaking of speeding up business with SSD storage
New Seagate Momentus XT Hybrid drive (SSD and HDD)
Are Hard Disk Drives (HDDs) getting too big?
Unified storage systems showdown: NetApp FAS vs. EMC VNX
Industry adoption vs. industry deployment, is there a difference?
Two companies on parallel tracks moving like trains offset by time: EMC and NetApp
Data Center I/O Bottlenecks Performance Issues and Impacts
From bits to bytes: Decoding Encoding
Who is responsible for vendor lockin
EMC VPLEX: Virtual Storage Redefined or Respun?
EMC interoperability support matrix

Ok, nuff said for now, I think I see some storm clouds rolling in

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved