EMC New VNX MCx doing more storage I/O work vs. just being more

Storage I/O trends

It’s not how much you have, it’s how storage I/O work gets done that matters

Following last week’s VMworld event in San Francisco, where announcements included this one around Virtual SAN (VSAN) along with Software Defined Storage (SDS), EMC today made several announcements of its own.

Today’s EMC announcements include:

  • The new VNX MCx (Multi Core optimized) family of storage systems
  • VSPEX proven infrastructure portfolio enhancements
  • Availability of ViPR Software Defined Storage (SDS) platform (read more from earlier posts here, here and here)
  • Statement of direction preview of Project Nile for elastic cloud storage platform
  • XtremSW server cache software version 2.0 with enhanced management and support for VMware, AIX and Oracle RAC


A summary of the new EMC VNX MCx storage systems includes:

  • More processor cores, PCIe Gen 3 (faster bus), front-end and back-end IO ports, DRAM and flash cache (as well as drives)
  • More 6Gb/s SAS back-end ports to use more storage devices (SAS and SATA flash SSD, fast HDD and high-capacity HDD)
  • MCx – Multi-core optimized with software rewritten to make use of threads and resources vs. simply using more sockets and cores at higher clock rates
  • Data Footprint Reduction (DFR) capabilities including block compression and dedupe, file dedupe and thin provisioning
  • Virtual storage pools that include flash SSD, fast HDD and high-capacity HDD
  • Block (iSCSI, FC and FCoE) and NAS file (NFS, pNFS, CIFS) front-end access with object access via Atmos Virtual Edition (VE) and ViPR
  • Entry-level pricing starting below $10,000 USD

EMC VNX MCx systems

What is this MCx stuff, is it just more hardware?

While there is more hardware that can be used in different configurations, the key or core (pun intended) point about MCx is that EMC has taken the time and invested in reworking the internal software of the VNX, which has roots going back to the Data General CLARiiON that EMC acquired. This is similar to an effort EMC made a few years back when it overhauled what is now known as the VMAX, evolving from the Symmetrix into the DMX and then the VMAX. That effort expanded from a platform or processor port into re-architecting and software optimization (rewriting portions) to leverage new and emerging hardware capabilities more effectively.

EMC VNX MCx

With MCx EMC is doing something similar, in that core portions of the VNX software have been re-architected and rewritten to take advantage of the additional threads and cores available to do work more effectively. This is not all that different from what occurs (or should occur) with upper-level applications that eventually get rewritten to leverage new underlying capabilities to do more work faster and use technologies in a more cost-effective way. MCx also leverages flash as a primary medium, with data then being moved (in 256MB chunks) down into lower tiers of storage (SSD and HDD drives).
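To make the tiering idea concrete, here is a minimal Python sketch of how an engine might shuffle 256MB chunks between tiers based on observed activity. This is a hypothetical illustration of the general technique, not EMC's actual MCx or FAST VP code; the thresholds and tier names are invented:

```python
# Hypothetical tiering sketch: new data lands on flash, then cold chunks
# are demoted and hot chunks promoted on a periodic rebalance pass.
CHUNK_SIZE_MB = 256  # tiering granularity, per the article

class Chunk:
    def __init__(self, chunk_id):
        self.chunk_id = chunk_id
        self.io_count = 0    # accesses observed in the current interval
        self.tier = "flash"  # new writes land on flash first

def rebalance(chunks, demote_threshold=10, promote_threshold=100):
    """Move chunks between tiers based on I/O activity (invented thresholds)."""
    for c in chunks:
        if c.tier == "flash" and c.io_count < demote_threshold:
            c.tier = "fast_hdd"   # cold data moves down a tier
        elif c.tier != "flash" and c.io_count >= promote_threshold:
            c.tier = "flash"      # hot data moves back up
        c.io_count = 0            # reset counters for the next interval
    return chunks
```

A real implementation would track many more tiers and statistics, but the demote/promote loop above captures the basic mechanism of activity-based chunk movement.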


EMC VNX has in the past had FAST Cache, which enables SSD drives to be used as an extension of main cache as well as being used as drive targets. Thus while MCx can and does leverage more and faster cores, as would most any software, it is also able to leverage those cores and threads in a more effective way. After all, it’s not just how many processors, sockets, cores, threads, L1/L2 cache, DRAM, flash SSD and other resources you have, it’s how effectively you use them. Also keep in mind that a bit of flash in the right place, used effectively, can go a long way vs. having a lot of cache in the wrong place or not used optimally, which will end up costing a lot of cash.
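The point about a bit of flash in the right place can be put in simple numbers. A minimal sketch, where the hit ratios and latencies are illustrative assumptions rather than VNX measurements:

```python
def effective_latency(hit_ratio, cache_latency_us, backend_latency_us):
    """Average I/O latency from a simple weighted cache model."""
    return hit_ratio * cache_latency_us + (1 - hit_ratio) * backend_latency_us

# A small, well-used cache (90% hits) vs a large, poorly used one (40% hits),
# assuming illustrative latencies of 200us for flash and 8000us (8ms) for HDD.
small_well_used = effective_latency(0.90, 200, 8000)    # ~980 us average
large_poorly_used = effective_latency(0.40, 200, 8000)  # ~4880 us average
```

Here the smaller but well-placed cache delivers roughly five times better average latency than the larger, poorly used one, which is the cache-vs-cash point in numbers.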

Moving forward, this means that EMC should be able to further refine and optimize other portions of the VNX software not yet updated, to gain further benefit from new hardware platforms and capabilities.

Does this mean EMC is catching up with newer vendors?

Similar to how more of something is not always better (it’s how those items are used that matters), just because something is new does not mean it’s better or faster. That will manifest itself when systems are demonstrated and performance results are shown. The key, however, is showing performance across different workloads that are relevant to your needs, conveyed with metrics that matter, in context.


Context matters, including the type and size of work being done, number of transactions, IOPS, files or videos served, pages processed or items rendered per unit of time, or response time and latency (aka wait or think time), among others. Thus some newer systems may be faster on paper, in PowerPoint, on WebEx or YouTube, or via some benchmarks; however, what is the context, and how do they compare to others on an apples-to-apples basis?
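As a simple example of why metrics need context, Little's Law ties throughput, concurrency and response time together; two systems can post identical IOPS numbers with very different latency profiles (the figures below are illustrative assumptions):

```python
def iops(outstanding_ios, response_time_s):
    """Little's Law: throughput = concurrency / response time."""
    return outstanding_ios / response_time_s

# Identical throughput, very different user experience:
a = iops(32, 0.002)   # 32 outstanding I/Os at 2ms each  -> ~16,000 IOPS
b = iops(256, 0.016)  # 256 outstanding I/Os at 16ms each -> ~16,000 IOPS
```

Quoting only the 16,000 IOPS figure hides that the second system makes each I/O wait eight times longer, which is exactly the kind of context that matters.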

What are some other enhancements or features?

  • Leveraging of FAST VP (Fully Automated Storage Tiering for Virtual Pools) with the improved MCx software
  • Increased effectiveness of available hardware resources (processors, cores, DRAM, flash, drives, ports)
  • Active-active LUNs accessible by both controllers, as well as legacy ALUA support

Data sheets and other material for the new VNX MCx storage systems can be found here, with software options and bundles here, and general speeds and feeds here.

Learn more here at the EMC VNX MCx storage system landing page and compare VNX systems here.

What does the new VNX MCx family look like?

EMC VNX MCx family image

Is VNX MCx all about supporting VMware?

Interestingly, if you read between the lines, listen closely to the conversations and ask the right questions, you will realize that while VMware is an important workload or environment to support, it is not the only one targeted for VNX. Likewise, if you listen and look beyond what is normally amplified in various conversations, you will find that systems such as VNX are being deployed as back-end storage in cloud (public, private, hybrid) environments for use with technologies such as OpenStack or object based solutions (visit www.objectstoragecenter.com for more on object storage systems and access).

There is a common myth that cloud and service providers all use white box commodity hardware, including JBOD, for their systems. Some do, however some are also using systems such as VNX among others. In some of these scenarios the VNX type systems are, or will be, deployed in large numbers, essentially consolidating the functions of what had been done by an even larger number of JBOD based systems. This is where some of you will have a déjà vu or back-to-the-future moment from the mid-90s, when there was an industry movement to consolidate DAS and JBOD into larger storage systems. Don’t worry if you are not yet reading about this trend in your favorite industry rag or analyst briefing notes; ask or look around and you might be surprised at what is occurring, granted it might be another year or two before you read about it (just saying ;).


What this means is that VNX MCx is also well positioned for working with ViPR or Atmos Virtual Edition, among other cloud and object storage stacks. VNX MCx is likewise well positioned, with its new low cost of entry, for general purpose workloads and applications ranging from file sharing, email, web and database to demanding high-performance, low-latency applications using large amounts of flash SSD. In addition to being used for general purpose storage, VNX MCx will also complement data protection solutions for backup/restore, BC, DR and archiving, such as Data Domain, Avamar and Networker among others. Speaking of server virtualization, EMC also has tools for working with Hyper-V, Xen and KVM in addition to VMware.

If there is an all flash VNX MCx doesn’t that compete with XtremIO?

Yes, there are all-flash VNX MCx configurations, just as there have been all-flash VNX systems before; however, these will be positioned for different use case scenarios by EMC and its partners to avoid competing head to head with XtremIO. Thus EMC will need to be diligent and very clear with its own sales and marketing forces, as well as those of partners and customers, about what to use when, where, why and how.

General thoughts and closing comments

The VNX MCx is a good set of enhancements by EMC and an example of how it’s not as important how much you have as how effectively you use it.

Ok, nuff said (for now).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

EMC VMAX 10K, looks like high-end storage systems are still alive (part II)


This is the second in a multi-part series of posts (read the first post here) looking at whether large enterprise and legacy storage systems are dead, along with what today’s EMC VMAX 10K updates mean.

Thus on January 14, 2013 it is time for a new EMC Virtual Matrix (VMAX) model 10,000 (10K) storage system. EMC has been promoting its January 14 live virtual event for a while now. The significance of January is that it (along with May or June) is when many new systems, solutions or upgrades are announced on a staggered basis.

Historically speaking, January and February, along with May and June, are when you have seen many of the larger announcements from EMC. Case in point: back in February of 2012 VFCache was released, then in May 2012 in Las Vegas at EMCworld there were 42 announcements made, with others later in the year.

Click here to see images of the car stuffing or click here to watch a video.

Let’s not forget, going back to January 2011, the record-setting event in New York City complete with 26 people being compressed, deduped, single-instanced, optimized, stacked and tiered into a Mini Cooper automobile (read and view more here).

Now back to the VMAX 10K enhancements

As an example of a company, product family and specific storage system model still being alive, there is the VMAX 10K. Although this announcement by EMC is VMAX 10K centric, there is also a new version of the Enginuity software (firmware, storage operating system, valueware) that runs across all VMAX based systems, including the VMAX 20K and VMAX 40K. Read here, here, here and here to learn more about VMAX and Enginuity systems in general.

Some main themes of this announcement include Tier 1 reliability, availability and serviceability (RAS) storage system functionality at Tier 2 pricing for traditional, virtual and cloud data centers.

Some other themes of this announcement by EMC:

  • Flexible, scalable and resilient with performance to meet dynamic needs
  • Support private, public and hybrid cloud along with federated storage models
  • Simplified decision-making, acquisition, installation and ongoing management
  • Enable traditional, virtual and cloud workloads
  • Complement its siblings VMAX 40K, 20K and SP (Service Provider) models

Note that the VMAX SP is a model configured and optimized for easy self-service and private cloud, storage as a service (SaaS), IT as a Service (ITaaS) and public cloud service providers needing multi-tenant capabilities with service catalogs and associated tools.

So what is new with the VMAX 10K?

It is twice as fast (per EMC performance results) as the earlier VMAX 10K, leveraging faster 2.8GHz Intel Westmere processors vs. the earlier 2.5GHz Westmere processors. In addition to faster cores, there are more of them: from 4 to 6 per director, and from 8 to 12 per VMAX 10K engine. The PCIe (Gen 2) I/O busses remain unchanged, as does the RapidIO interconnect. RapidIO is used for connecting nodes and engines, while PCIe is used for adapter and device connectivity. Memory stays the same at up to 128GB of global DRAM cache, along with dual virtual matrix interfaces (how the nodes are connected). Note that there is no increase in the amount of DRAM-based cache memory in this new VMAX 10K model.
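A quick back-of-envelope calculation (my arithmetic on the figures above, not an EMC benchmark) suggests how much of the claimed doubling comes from raw hardware alone:

```python
# Combine the clock-speed and core-count gains cited above.
clock_gain = 2.8 / 2.5            # 1.12x from the faster Westmere clock
core_gain = 12 / 8                # 1.5x more cores per VMAX 10K engine
raw_compute_gain = clock_gain * core_gain
print(round(raw_compute_gain, 2))  # ~1.68x raw compute
```

If raw hardware accounts for roughly 1.68x, the remainder of the claimed 2x presumably comes from software improvements in how Enginuity uses those cores, which raises an interesting question.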

This should prompt the question: for traditional cache-centric (or cache-dependent) storage systems such as VMAX, how much do the CPUs and their associated L1/L2 caches now determine performance? Also, how much has the Enginuity code under the covers been enhanced to leverage multiple cores and threads, shifting from being cache-memory dependent to processor effective?

Also new with the updated VMAX 10K include:

  • Support for dense 2.5 inch drives, along with mixed 2.5 inch and 3.5 inch form factor devices with a maximum of 1,560 HDDs. This means support for 2.5 inch 1TB 7,200 RPM SAS HDDs, along with fast SAS HDDs, SLC/MLC and eMLC solid state devices (SSD) also known as electronic flash devices (EFD). Note that with higher density storage configurations, good disk enclosures become more important to counter or prevent the effects of drive vibration, something that leading vendors are paying attention to and so should customers.
  • EMC is also with the VMAX 10K adding support for certain 3rd party racks or cabinets to be used for mounting the product. This means being able to mount the VMAX main system and DAE components into selected cabinets or racks to meet specific customer, colo or other environment needs for increased flexibility.
  • For security, VMAX 10K also supports Data at Rest Encryption (D@RE), which is implemented within the VMAX platform. All data is encrypted on every drive, for every drive type (drive independent), within the VMAX platform to avoid performance impacts. This is AES-256 fixed-block encryption with FIPS 140-2 validation (#1610) using embedded or external key management, including RSA Key Manager. Note that since the storage-system based encryption is done within the VMAX platform or controller, not only is the encrypt/decrypt off-loaded from servers, it also means that any device from SSD to HDD to third-party storage arrays can be encrypted. This is in contrast to drive-based approaches such as self-encrypting drives (SED) or other full-drive encryption approaches. With embedded key management, encryption keys are kept and managed within the VMAX system, while external mode leverages RSA key management as part of a broader security solution approach.
  • In terms of addressing ease of decision-making and acquisition, EMC has bundled core Enginuity software suite (virtual provisioning, FTS and FLM, DCP (dynamic cache partitioning), host I/O limits, Optimizer/virtual LUN and integrated RecoverPoint splitter). In addition are bundles for optimization (FAST VP, EMC Unisphere for VMAX with heat map and dashboards), availability (TimeFinder for VMAX 10K) and migration (Symmetrix migration suite, Open Replicator, Open Migrator, SRDF/DM, Federated Live Migration). Additional optional software include RecoverPoint CDP, CRR and CLR, Replication Manager, PowerPath, SRDF/S, SRDF/A and SRDF/DM, Storage Configuration Advisor, Open Replicator with Dynamic Mobility and ControlCenter/ProSphere package.
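To illustrate why fixed-block encryption preserves random block access, here is a toy Python sketch. It is purely illustrative: the hash-based keystream below is NOT AES, NOT FIPS-validated, and has nothing to do with EMC's actual D@RE implementation; it only shows the idea that each fixed-size block can be encrypted and decrypted independently of its neighbors.

```python
import hashlib

BLOCK = 512  # bytes; toy fixed-block size

def keystream(key: bytes, block_no: int, n: int) -> bytes:
    """Derive a per-block keystream (toy construction, not real AES)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + block_no.to_bytes(8, "big")
                              + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xcrypt_block(key: bytes, block_no: int, data: bytes) -> bytes:
    """Encrypt or decrypt one block independently (XOR is symmetric)."""
    ks = keystream(key, block_no, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))
```

Because each block's transformation depends only on the key and the block number, any single block can be read or rewritten without touching its neighbors, which is why fixed-block encryption suits random-access storage.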

Who needs a VMAX 10K or where can it be used?

As the entry-level model of the VMAX family, growing organizations looking for an alternative to traditional mid-range storage systems should be a primary opportunity. Assuming the VMAX 10K can sell at tier-2 prices with a focus on tier-1 reliability, feature functionality and simplification, while allowing channel partners to make some money, then EMC can have success with this product. The challenge, however, will be helping its direct and channel partner sales organizations avoid competing with its own products (e.g. high-end VNX) vs. those of others.

Consolidation of servers with virtualization, along with storage system consolidation to remove complexity and cost from management, should be another opportunity, given the ability to virtualize third-party storage. I would expect EMC and its channel partners to position the VMAX 10K, with its storage virtualization of third-party storage, as an alternative to the HDS VSP (aka USP/USP V) and the HP XP P9000 (Hitachi based) products, or for block storage needs the NetApp V-Series among others. There could be some scenarios where the VMAX 10K is positioned as an alternative to the IBM V7000 (SVC based) for virtualizing third-party storage, or, for larger environments, to some of the software based appliances where there are concerns about scaling with stability (performance, availability, capacity, ease of management, feature functionality).

Another area where the VMAX 10K could see action, which will fly in the face of some industry thinking, is deployment in new and growing managed service providers (MSP), public clouds, and community clouds (private consortiums) looking for an alternative to open source based or traditional mid-range solutions. On the other hand, I can’t wait to hear somebody think outside of both the old and new boxes about how a VMAX 10K could be used beyond traditional applications or functionality. For example, filling it up with a few SSDs, then balancing with 1TB 2.5 inch SAS HDDs and 3.5 inch 3TB (or larger when available) HDDs as an active archive target leveraging the built-in data compression.

How about if EMC were to support cloud optimized HDDs such as the Seagate Constellation Cloud Storage (CS) HDDs that were announced late in 2012 as well as the newer enterprise class HDDs for opening up new markets? Also keep in mind that some of the new 2.5 inch SAS 10,000 (10K) HDDs have the same performance capabilities as traditional 3.5 inch 15,000 (15K) RPM drives in a smaller footprint to help drive and support increased density of performance and capacity with improved energy effectiveness.
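The density-of-performance point can be sketched with some rough arithmetic. The drive counts per enclosure and per-drive IOPS figures below are my illustrative assumptions, not vendor specifications:

```python
def iops_per_u(drives_per_enclosure, rack_units, iops_per_drive):
    """Density of performance: aggregate drive IOPS per rack unit."""
    return drives_per_enclosure * iops_per_drive / rack_units

# Assumed: 25 x 2.5" 10K RPM drives in a 2U enclosure at ~140 IOPS each,
# vs 15 x 3.5" 15K RPM drives in a 3U enclosure at ~180 IOPS each.
small_10k = iops_per_u(25, 2, 140)  # 1750 IOPS per U
large_15k = iops_per_u(15, 3, 180)  # 900 IOPS per U
```

Under these assumptions, the denser 2.5 inch configuration delivers nearly twice the IOPS per rack unit even though each individual drive is slower, which is the density-of-performance argument in numbers.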

How about attaching a VMAX 10K with the right type of cost-effective (aligned to a given scenario) SSDs, HDDs or third-party storage to a cluster or grid of servers running OpenStack including Swift, CloudStack, Basho Riak CS, Cleversafe, Scality, Caringo, Ceph or even EMC’s own Atmos (which supports external storage) for cloud or object based storage solutions? Granted, that would be thinking outside of the current (or new box) thinking of moving away from RAID based systems in favor of low-cost JBOD storage in servers, however what the heck, let’s think in pragmatic ways.

Will EMC be able to open new markets and opportunities by making the VMAX and its Enginuity software platform and functionality more accessible and affordable via the VMAX 10K as well as the VMAX SP? Time will tell; after all, I recall back in the mid to late 90s, and then again several times during the 2000s, similar questions and conversations, not to mention predictions of the demise of large traditional storage systems.

Continue reading about what else EMC announced on January 14, 2013 in addition to the VMAX 10K updates here in the next post in this series. Also check out Chuck’s EMC blog to see what he has to say.

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio
