Following last week's VMworld event in San Francisco, where among other announcements there was this one around Virtual SAN (VSAN) along with Software Defined Storage (SDS), EMC today made several announcements of its own.
Today’s EMC announcements include:
A summary of the new EMC VNX MCx storage systems includes:
While there is more hardware that can be used in different configurations, the key or core (pun intended) point around MCx is that EMC has taken the time and invested in reworking the internal software of the VNX, which has roots going back to the Data General CLARiiON that EMC acquired. This is similar to an effort EMC made a few years back when it overhauled what is now known as the VMAX, evolving the Symmetrix into the DMX. That effort went beyond a platform or processor port to re-architecting and software optimizing (rewriting portions) to leverage new and emerging hardware capabilities more effectively.
With MCx EMC is doing something similar, in that core portions of the VNX software have been re-architected and rewritten to take advantage of the additional threads and cores now available to do work more effectively. This is not all that different from what occurs (or should occur) with upper-level applications that eventually get rewritten to leverage underlying new capabilities to do more work faster and use technologies in a more cost-effective way. MCx also leverages flash as a primary medium, with data then being moved (in 256MB chunks) down into lower tiers of storage (SSD and HDD drives).
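To make the tiering idea above concrete, here is a minimal sketch (not EMC code) of how sub-LUN tiering in the spirit of FAST VP might decide placement, moving data in 256MB slices between tiers based on access frequency. The tier names, thresholds, and data structures are illustrative assumptions, not the actual VNX implementation.

```python
# Illustrative sketch only: sub-LUN tiering moves fixed-size slices
# (256 MB in VNX terms) between tiers based on how hot they are.
# Thresholds and tier names here are assumptions for demonstration.

SLICE_MB = 256

def plan_moves(slices, hot_threshold=100, cold_threshold=10):
    """Return (slice_id, target_tier) moves for a list of slice stats.

    Each slice is a dict: {"id": int, "tier": str, "io_count": int}.
    Hot slices are promoted toward flash; cold slices are demoted to HDD.
    """
    moves = []
    for s in slices:
        if s["io_count"] >= hot_threshold and s["tier"] != "flash":
            moves.append((s["id"], "flash"))       # promote hot slice
        elif s["io_count"] <= cold_threshold and s["tier"] != "hdd":
            moves.append((s["id"], "hdd"))         # demote cold slice
    return moves

slices = [
    {"id": 0, "tier": "hdd", "io_count": 250},     # hot: promote
    {"id": 1, "tier": "flash", "io_count": 3},     # cold: demote
    {"id": 2, "tier": "ssd", "io_count": 50},      # warm: leave in place
]
print(plan_moves(slices))  # [(0, 'flash'), (1, 'hdd')]
```

The point of the sketch is simply that tiering decisions are made per slice, not per LUN, so a small amount of flash in the right place can serve the hot fraction of a much larger dataset.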
EMC VNX has in the past had FAST Cache, which enables SSD drives to be used as an extension of main cache as well as being used as drive targets. Thus while MCx can and does leverage more and faster cores, as would most any software, it is also able to leverage those cores and threads in a more effective way. After all, it's not just how many processors, sockets, cores, threads, how much L1/L2 cache, DRAM, flash SSD and other resources you have, it's how effectively you use them. Also keep in mind that a bit of flash in the right place, used effectively, can go a long way vs. having a lot of cache in the wrong place or not used optimally, which will end up costing a lot of cash.
Moving forward this means that EMC should be able to further refine and optimize other portions of the VNX software not yet updated, to take further advantage of new hardware platforms and capabilities.
Just as more of something is not always better, it's how those items are used that matters; likewise, just because something is new does not mean it's better or faster. That will manifest itself when the systems are demonstrated and performance results shown. The key, however, is showing performance across different workloads that have relevance to your needs, conveyed via metrics that matter with context.
Context matters, including the type and size of work being done: number of transactions, IOPS, files or videos served, pages processed or items rendered per unit of time, or response time and latency (aka wait or think time), among others. Thus some newer systems may be faster on paper, PowerPoint, WebEx, YouTube or via some benchmarks; however, what is the context, and how do they compare to others on an apples-to-apples basis?
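As a quick worked example of why context matters, consider two hypothetical systems compared only on headline IOPS. The numbers below are invented for illustration; the point is that I/O size changes which system actually moves more data.

```python
# Hypothetical numbers for illustration: two systems with headline IOPS
# figures can rank very differently once I/O size (context) is considered.

def throughput_mbps(iops, io_size_kb):
    """Bandwidth implied by an IOPS figure at a given I/O size."""
    return iops * io_size_kb / 1024.0

# System A: big headline IOPS, but tiny 512-byte I/Os
a = throughput_mbps(iops=500_000, io_size_kb=0.5)
# System B: one-tenth the IOPS, but 64 KB I/Os
b = throughput_mbps(iops=50_000, io_size_kb=64)

print(f"A: {a:.0f} MB/s, B: {b:.0f} MB/s")  # A: 244 MB/s, B: 3125 MB/s
```

System A "wins" on IOPS yet moves a fraction of the data per second, which is why a benchmark number without the workload context behind it tells you very little.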
- Leveraging of FAST VP (Fully Automated Storage Tiering for Virtual Pools) with the improved MCx software
- Increased effectiveness of available hardware resources (processors, cores, DRAM, flash, drives, ports)
- Active-active LUNs accessible by both controllers, as well as legacy ALUA support
Data sheets and other material for the new VNX MCx storage systems can be found here, with software options and bundles here, and general speeds and feeds here.
Learn more here at the EMC VNX MCx storage system landing page and compare VNX systems here.
Interestingly, if you read between the lines, listen closely to the conversations, and ask the right questions, you will realize that while VMware is an important workload or environment to support, it is not the only one targeted for VNX. Likewise, if you listen and look beyond what is normally amplified in various conversations, you will find that systems such as VNX are being deployed as back-end storage in cloud (public, private, hybrid) environments for use with technologies such as OpenStack or object-based solutions (visit www.objectstoragecenter.com for more on object storage systems and access).
There is a common myth that cloud and service providers all use white-box commodity hardware including JBOD for their systems. Some do; however, some are also using systems such as VNX among others. In some of these scenarios the VNX type systems are or will be deployed in large numbers, essentially consolidating the functions of what had been done by an even larger number of JBOD-based systems. This is where some of you will have a déjà vu or back-to-the-future moment from the mid '90s, when there was an industry movement to combine all the DAS and JBOD into larger storage systems. Don't worry if you are not yet reading about this trend in your favorite industry rag or analyst briefing notes; ask or look around and you might be surprised at what is occurring, granted it might be another year or two before you read about it (just saying ;).
What that means is that VNX MCx is also well positioned for working with ViPR or Atmos Virtual Edition among other cloud and object storage stacks. VNX MCx is likewise well positioned, given its new lower cost of entry, for general-purpose workloads and applications ranging from file sharing, email, web and database to demanding high-performance, low-latency uses with large amounts of flash SSD. In addition to being used for general-purpose storage, VNX MCx will also complement data protection solutions for backup/restore, BC, DR and archiving, such as Data Domain, Avamar and Networker among others. Speaking of server virtualization, EMC also has tools for working with Hyper-V, Xen and KVM in addition to VMware.
Yes, there are all-flash VNX MCx models just as there have been all-flash VNX before; however, these will be positioned for different use case scenarios by EMC and their partners to avoid competing head to head with XtremIO. Thus EMC will need to be diligent in being very clear to its own sales and marketing forces, as well as to those of partners and customers, about what to use when, where, why and how.
The VNX MCx is a good set of enhancements by EMC and an example of how it's not how much you have that matters, rather how you use it to be more effective.
Ok, nuff said (for now).
Cheers
Gs
Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved
I wonder why people buy this 20 year old architecture. 3PAR is a lot better in many ways.
Hello Peldon, and thanks for your comment. Can you say more about why you have the perspective that 3PAR is better, or is it just that it is newer as an architecture vs. MCx, whose software was just re-written?
Thanks Peldon for your comments/perspectives.
I'm adding your comment received via Disqus below, as all that showed up via the blog is your above "...fefwefwefwef.." comment.
> Via Peldon:
>
> Hi,
>
> I have worked with the VNX and Clariion systems on a technical level for several years.
> EMC is very good at marketing and sales, but the products do not always live up to the hype.
> Clariion/VNX is built on a 20-year-old design and EMC has been flicking on this product for years.
> Why does EMC have FAST Cache? Simple answer: without it, the VNX would not perform. Thin LUNs give
> bad performance, pools give less performance (no good wide striping). The cabling on their
> unified systems looks like a cutting nest, and configuring the NAS is even worse, with tons of best
> practice documents and limitations. EMC thinks their RecoverPoint solution is very good, but it's
> another product which is a nightmare to set up and administer.
> Some months ago I did an install of two large VNX systems in a mixed environment + async mirroring.
> We ended up with 10 different RAID groups, 4 disk pools + 15K disk drives for logging
> of async data. To load balance the 2 x SP (storage processors), we ended up spending
> lots of hours analyzing performance data and moving LUNs. I have also worked with VPLEX.
> Very expensive and again, not very user friendly. Hopefully the new VNX generation has improved,
> but I have learned not to believe what EMC marketing tells me. 3PAR is not the answer to everything,
> but in many ways it's a lot better than the VNX. Proper thin provisioning, reclaim of space, true active/active,
> wide striping etc.
>