EMC New VNX MCx doing more storage I/O work vs. just being more

Storage I/O trends

It’s not how much you have, it’s how storage I/O work gets done that matters

Following last week’s VMworld event in San Francisco, where announcements included this one around Virtual SAN (VSAN) along with Software Defined Storage (SDS), EMC today made several announcements of its own.

Today’s EMC announcements include:

  • The new VNX MCx (Multi Core optimized) family of storage systems
  • VSPEX proven infrastructure portfolio enhancements
  • Availability of ViPR Software Defined Storage (SDS) platform (read more from earlier posts here, here and here)
  • Statement of direction preview of Project Nile for elastic cloud storage platform
  • XtremSW server cache software version 2.0 with enhanced management and support for VMware, AIX and Oracle RAC

EMC ViPR and EMC XtremSW cache software

Summary of the new EMC VNX MCx storage systems include:

  • More processor cores, PCIe Gen 3 (faster bus), front-end and back-end IO ports, DRAM and flash cache (as well as drives)
  • More 6Gb/s SAS back-end ports to use more storage devices (SAS and SATA flash SSD, fast HDD and high-capacity HDD)
  • MCx – Multi-core optimized with software rewritten to make use of threads and resources vs. simply using more sockets and cores at higher clock rates
  • Data Footprint Reduction (DFR) capabilities including block compression and dedupe, file dedupe and thin provisioning (see the capacity sketch after this list)
  • Virtual storage pools that include flash SSD, fast HDD and high-capacity HDD
  • Block (iSCSI, FC and FCoE) and NAS file (NFS, pNFS, CIFS) front-end access with object access via Atmos Virtual Edition (VE) and ViPR
  • Entry level pricing starting at below $10,000 USD
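
As a rough, hypothetical illustration of what DFR features such as dedupe and compression can mean in practice (the ratios below are invented for the example, not EMC-published numbers for VNX MCx), the capacity math works out along these lines:

```python
# Hypothetical data footprint reduction (DFR) math; the ratios are examples only,
# not EMC-published figures for VNX MCx.
def effective_capacity(raw_tb, dedupe_ratio=2.0, compression_ratio=1.5):
    """Usable logical capacity after dedupe and compression are applied."""
    return raw_tb * dedupe_ratio * compression_ratio

raw_tb = 100  # raw usable capacity in TB
print(f"{raw_tb} TB raw -> ~{effective_capacity(raw_tb):.0f} TB effective")
# 100 TB raw -> ~300 TB effective (2:1 dedupe x 1.5:1 compression)
```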

EMC VNX MCx systems

What is this MCx stuff, is it just more hardware?

While there is more hardware that can be used in different configurations, the key or core (pun intended) around MCx is that EMC has taken the time and invested in reworking the internal software of the VNX, which has its roots going back to the Data General CLARiiON that EMC acquired. This is similar to an effort EMC made a few years back when it overhauled the Symmetrix and DMX into what is now known as the VMAX. That effort went beyond a simple platform or processor port to re-architecting and optimizing (rewriting portions of) the software to leverage new and emerging hardware capabilities more effectively.

EMC VNX MCx

With MCx, EMC is doing something similar in that core portions of the VNX software have been re-architected and rewritten to take advantage of the additional threads and cores available so work gets done more effectively. This is not all that different from what occurs (or should occur) with upper-level applications that eventually get rewritten to leverage underlying new capabilities to do more work faster and use technologies in a more cost-effective way. MCx also leverages flash as a primary medium, with data then being moved (in 256MB chunks) down into lower tiers of storage (SSD and HDD drives).
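
As a generic sketch of that idea (this is not EMC’s MCx code, just an illustration of spreading the same work across available cores instead of leaving it all on one thread):

```python
# Generic illustration of fanning the same work out across cores vs. one thread;
# a concept sketch only, not EMC MCx code.
from concurrent.futures import ProcessPoolExecutor
import os

def service_io(request_id: int) -> int:
    # Stand-in for per-request work such as checksums, cache lookups or RAID math
    return sum(i * i for i in range(50_000)) + request_id

if __name__ == "__main__":
    requests = list(range(1_000))

    # Serial: one core does everything while the others sit idle
    serial = [service_io(r) for r in requests]

    # Multi-core: the same requests spread across all available cores
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        parallel = list(pool.map(service_io, requests, chunksize=50))

    assert serial == parallel  # same results, but the work is spread across cores
```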

Storage I/O trends

EMC VNX has had FAST Cache in the past, which enables SSD drives to be used as an extension of main cache as well as being used as drive targets. Thus, while MCx can and does leverage more and faster cores as would most any software, it is also able to leverage those cores and threads in a more effective way. After all, it’s not just how many processors, sockets, cores, threads, L1/L2 cache, DRAM, flash SSD and other resources you have, it’s how effectively you use them. Also keep in mind that a bit of flash in the right place used effectively can go a long way, vs. having a lot of cache in the wrong place or not used optimally, which will end up costing a lot of cash.

Moving forward, this means that EMC should be able to further refine and optimize other portions of the VNX software not yet updated, to gain further benefit from new hardware platforms and capabilities.

Does this mean EMC is catching up with newer vendors?

Similar to how more of something is not always better (it’s how those items are used that matters), just because something is new does not mean it’s better or faster. That will manifest itself when systems are demonstrated and performance results are shown. The key, however, is showing performance across different workloads that have relevance to your needs and conveying metrics that matter, with context.

Storage I/O trends

Context matters, including the type and size of work being done, number of transactions, IOPS, files or videos served, pages processed or items rendered per unit of time, or response time and latency (aka wait or think time), among others. Thus some newer systems may be faster on paper, PowerPoint, WebEx, YouTube or via some benchmarks; however, what is the context and how do they compare to others on an apples-to-apples basis?

What are some other enhancements or features?

  • Leveraging of FAST VP (Fully Automated Storage Tiering for Virtual Pools) with the improved MCx software
  • Increased effectiveness of available hardware resources (processors, cores, DRAM, flash, drives, ports)
  • Active-active LUNs accessible by both controllers, as well as legacy ALUA (asymmetric logical unit access) support
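
As a loose, hypothetical sketch of what slice-based tiering policy can look like (the 256MB slice size comes from the MCx discussion above; the thresholds and scoring here are invented for illustration and are not EMC’s FAST VP algorithm):

```python
# Hypothetical tiering sketch: promote hot 256MB slices to flash, demote cold ones.
# Thresholds and scoring are invented for illustration, not EMC's FAST VP logic.
SLICE_MB = 256
TIERS = ["flash_ssd", "fast_hdd", "capacity_hdd"]

def pick_tier(io_per_hour: float) -> str:
    """Map a slice's recent access rate to a storage tier."""
    if io_per_hour > 1_000:
        return "flash_ssd"
    if io_per_hour > 50:
        return "fast_hdd"
    return "capacity_hdd"

slices = {"slice_0001": 4_200, "slice_0002": 120, "slice_0003": 3}
placement = {name: pick_tier(rate) for name, rate in slices.items()}
print(placement)
# {'slice_0001': 'flash_ssd', 'slice_0002': 'fast_hdd', 'slice_0003': 'capacity_hdd'}
```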

Data sheets and other material for the new VNX MCx storage systems can be found here, with software options and bundles here, and general speeds and feeds here.

Learn more here at the EMC VNX MCx storage system landing page and compare VNX systems here.

What does the new VNX MCx family look like?

EMC VNX MCx family image

Is VNX MCx all about supporting VMware?

Interestingly, if you read between the lines, listen closely to the conversations and ask the right questions, you will realize that while VMware is an important workload or environment to support, it is not the only one targeted for VNX. Likewise, if you listen and look beyond what is normally amplified in various conversations, you will find that systems such as the VNX are being deployed as back-end storage in cloud (public, private, hybrid) environments for use with technologies such as OpenStack or object-based solutions (visit www.objectstoragecenter.com for more on object storage systems and access).

There is a common myth that cloud and service providers all use white-box commodity hardware including JBOD for their systems, which some do; however, some are also using systems such as the VNX among others. In some of these scenarios the VNX type systems are, or will be, deployed in large numbers, essentially consolidating the functions of what had been done by an even larger number of JBOD-based systems. This is where some of you will have a déjà vu or back-to-the-future moment from the mid-90s, when there was an industry movement to combine all the DAS and JBOD into larger storage systems. Don’t worry if you are not yet reading about this trend in your favorite industry rag or analyst briefing notes; ask or look around and you might be surprised at what is occurring, granted it might be another year or two before you read about it (just saying ;).

Storage I/O trends

What that means is that VNX MCx is also well positioned for working with ViPR or Atmos Virtual Edition among other cloud and object storage stacks. VNX MCx is also well positioned, with its new low cost of entry, for general-purpose workloads and applications ranging from file sharing, email, web and database to demanding high-performance, low-latency workloads using large amounts of flash SSD. In addition to being used for general-purpose storage, VNX MCx will also complement data protection solutions for backup/restore, BC, DR and archiving such as Data Domain, Avamar and Networker among others. Speaking of server virtualization, EMC also has tools for working with Hyper-V, Xen and KVM in addition to VMware.

If there is an all-flash VNX MCx, doesn’t that compete with XtremIO?

Yes, there are all-flash VNX MCx configurations, just as there have been all-flash VNX systems before; however, these will be positioned for different use-case scenarios by EMC and its partners to avoid competing head to head with XtremIO. Thus EMC will need to be diligent and very clear with its own sales and marketing forces, as well as those of partners and customers, about what to use when, where, why and how.

General thoughts and closing comments

The VNX MCx is a good set of enhancements by EMC and an example of how it’s not how much more you have that matters, rather how effectively you can use it.

Ok, nuff said (for now).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Is more of something always better? Depends on what you are doing

Storage I/O trends

Is more always better? Depends on what you are doing

As with many things it depends, however how about some of these?

Is more better for example (among others):

  • Facebook likes
  • Twitter followers or tweets (I’m @storageio btw)
  • Google+ likes, follows and hangouts
  • More smart phone apps
  • LinkedIn connections
  • People in your circle or community
  • Photos or images per post or article
  • People working with or for you
  • Partners vs. doing more with those you have
  • People you are working for or with
  • Posts or longer posts with more in them
  • IOPs or SSD and storage performance
  • Domains under management and supported
  • GB/TB/PB/EB supported or under management
  • Part-time jobs or a better full-time opportunity
  • Metrics vs. those that matter with context
  • Programmers to get job done (aka mythical man month)
  • Lines of code per cost vs. more reliable and tested code per cost
  • For free items and time spent managing them vs. more productivity for a nominal fee
  • Meetings for planning on what to do vs. streamline and being more productive
  • More sponsors or advertisers or underwriters vs. fewer yet more effective ones
  • Space in your booth or stand at a trade show or conference vs. using what you have more effectively
  • Copies of the same data vs. fewer yet more unique (not full though) copies of information
  • Patents in your portfolio vs. more technology and solutions being delivered
  • Processors, sockets, cores, threads vs. using them more effectively
  • Ports and protocols vs. using them more effectively

Storage I/O trends

Thus, is it more resources that matter, or making more effective use of them?

For example, more ports, protocols, processors, cores, sockets, threads, memory, cache, drives, bandwidth or people, among other things, is not always better, particularly if those resources are not being used effectively.

Likewise, don’t confuse effective with efficient, which is often assumed to simply mean utilized.

For example, a cache or memory may be 100% utilized (what some call efficient) yet only provide a 35% effective benefit (cache hits), with the rest being cache turn (misses, etc.).
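
A small worked example of that distinction, using made-up numbers that match the 35% figure above:

```python
# Made-up numbers illustrating utilization (efficiency) vs. hit rate (effectiveness).
cache_size_gb = 64
cache_used_gb = 64            # 100% utilized, i.e. "efficient" on paper
lookups = 1_000_000
hits = 350_000                # only 35% of lookups are actually served from cache

utilization = cache_used_gb / cache_size_gb
hit_rate = hits / lookups

print(f"Utilization: {utilization:.0%}")                 # Utilization: 100%
print(f"Effective benefit (hit rate): {hit_rate:.0%}")   # Effective benefit: 35%
```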

Throwing more processing power at a problem, in terms of clock speed or cores, is one thing; it’s kind of like throwing more server blades at a software problem vs. using those cores, sockets and threads more effectively.

Good software will run better on fast hardware while enabling more to be done with the same or less.

Thus with better software or tools, more work can be done in an effective way leveraging those resources vs. simply throwing or applying more at the situation.

Hopefully you get the point, so there is no need to do more with this post (for now); if not, stay tuned and pay more attention to what is around you.

Ok, nuff said, I need to go get more work done now.

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Does IBM Power7 processor announcement signal storage upgrades?

IBM recently announced the Power7 as the latest generation of processors that the company uses in some of its mid-range and high-end compute servers, including the iSeries and pSeries.


IBM Power7 processor wafers (chips)

 

What is the Power7 processor?
The Power7 is the latest generation of IBM processors (chips) that are used as the CPUs in IBM mid-range and high-end open systems (pSeries) for Unix (AIX) and Linux, as well as for the iSeries (aka the AS400 successor). Building on previous Power series processors, the Power7 increases the performance per core (CPU) along with the number of cores per socket (chip) footprint. For example, each Power7 chip that plugs into a socket on a processor card in a server can have up to 8 cores or CPUs. Note that sometimes cores are also known as micro CPUs or virtual CPUs, not to be confused with those presented via hypervisor abstraction.

Sometimes you may also hear the term or phrase 2-way, 4-way (not to be confused with Cincinnati-style 4-way chili) or 8-way, among others, which refers to the number of cores on a chip. Hence, a dual 2-way would be a pair of processor chips each with 2 cores, while a quad 8-way would be 4 processor chips each with 8 cores, and so on.
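
To make the naming concrete, here is the simple arithmetic (hypothetical configurations, shown only to illustrate the terminology):

```python
# Illustrative "N-way" math: total cores = sockets x cores per chip.
def total_cores(sockets: int, cores_per_chip: int) -> int:
    return sockets * cores_per_chip

print(total_cores(2, 2))  # dual 2-way  -> 4 cores
print(total_cores(4, 8))  # quad 8-way  -> 32 cores (e.g. a fully populated Power 755)
```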


IBM Power7 with up to eight cores per processor (chip)

In addition to faster and more cores in a denser footprint, there are also energy efficiency enhancements, including Energy Star for enterprise servers qualification along with an intelligent power management (IPM, also see here) implementation. IPM is implemented in what IBM refers to as Intelligent Energy technology for turning on or off various parts of the system along with varying processor clock speeds. The benefit is that when there is work to be done, it gets done quickly, and when there is less work, some cores can be turned off or clock speeds slowed down. This is similar to what other industry leaders, including Intel, have deployed with their Nehalem series of processors that also support IPM.

Additional features of the Power7 include (varies by system solutions):

  • Energy Star for servers qualification, providing enhanced performance and efficiency.
  • IBM Systems Director Express, Standard and Enterprise Editions for simplified management including virtualization capabilities across pools of Power servers as a single entity.
  • PowerVM (Hypervisor) virtualization for AIX, iSeries and Linux operating systems.
  • ActiveMemory enables effective memory capacity to be larger than physical memory, similar to how virtual memory works within many operating systems. The benefit is to enable a partition to have access to more memory which is important for virtual machines along with the ability to support more partitions in a given physical memory footprint.
  • TurboCore and Intelligent Threads enable workload optimization by selecting the applicable mode for the work to be done. For example, single thread per core along with simultaneous threads (2 or 4) modes per core. The trade off is to have more threads per core for concurrent processing, or, fewer threads to boost single stream performance.

IBM has announced several Power7 enabled or based server system models with various numbers of processors and cores along with standalone and clustered configurations including:

IBM Power7 family of server systems

  • Power 750 Express, 4U server with one to four sockets supporting up to 32 cores (3.0 to 3.5 GHz) and 128 threads (4 threads per core), PowerVM (Hypervisor), along with main memory capacity of 512GB, or 1TByte of virtual memory using Active Memory Expansion.
  • Power 755, 32 3.3GHz Power7 cores (8 cores per processor) with memory up to 256GB, along with AltiVec and VSX SIMD instruction set support. Up to 64 Power 755 nodes, each with 32 cores, can be clustered together for high-performance applications.
  • Power 770, Up to 64 Power7 cores providing more performance while consuming less energy per core compared to previous Power6 generations. Support for up to 2TB of main memory or RAM using 32GB DIMM when available later in 2010.
  • Power 780, 64 Power7 cores with TurboCore workload optimization providing performance boost per core. With TurboCore, 64 cores can operate at 3.8 GHz, or, enable up to 32 cores at 4.1 GHz and twice the amount of cache when more speed per thread is needed. Support for up to 2TB of main memory or RAM using 32GB DIMM when available later in 2010.
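
Using the Power 780 figures above, the TurboCore trade-off can be summarized with a quick back-of-the-envelope comparison (aggregate clock is a crude proxy for throughput, not a real benchmark):

```python
# Aggregate vs. per-core speed for the Power 780 TurboCore trade-off (figures from above).
cores_standard, ghz_standard = 64, 3.8   # all cores active
cores_turbo, ghz_turbo = 32, 4.1         # TurboCore: fewer cores, more cache and clock

print(f"Standard: {cores_standard * ghz_standard:.1f} aggregate GHz across {cores_standard} cores")
print(f"TurboCore: {cores_turbo * ghz_turbo:.1f} aggregate GHz across {cores_turbo} cores, "
      f"but ~{(ghz_turbo / ghz_standard - 1):.0%} more clock (plus 2x cache) per thread")
```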

Additional Power7 specifications and details can be found here.

 

What is the DS8000?
The DS8000 is the latest generation of a family of high-end enterprise class storage systems supporting IBM mainframe (zSeries) and open systems along with mixed workloads. Being a high-end open systems and mainframe platform, the DS8000 competes with similar systems from EMC (Symmetrix/DMX/VMAX), Fujitsu (Eternus DX8000), HDS (Hitachi) and HP (XP series, OEMed from Hitachi). Previous generations of the DS8000 (aka predecessors) include the ESS (Enterprise Storage System) Model 2105 (aka Shark) and the VSS (Versatile Storage Server). Current generation family members include the Power5-based DS8100 and DS8300 along with the Power6-based DS8700.

IBM DS8000 Storage System

Learn more about the DS8000 here, here, here and here.

 

What is the association between the Power7 and DS8000?
Disclosure: Before I go any further, let’s be clear on something: what I am about to post is based entirely on researching, analyzing and correlating (connecting the dots) what is publicly and freely available from IBM on the Web (e.g. there is no NDA material being disclosed here that I am aware of), along with prior trends and tendencies of IBM and their solutions. In other words, you can call it speculation, a prediction, an industry analysis perspective, looking into the proverbial crystal ball or an educated guess, and thus it should not be taken as an indicator of what IBM may actually do or be working on. As to what may actually be done or not done, for that you will need to contact one of the IBM truth squad members.

So what is the linkage between the Power7 and the DS8000?

The linkage between the Power7 and the DS8000 is just that, the Power processors!

At the heart of the DS8000 are Power series processors coupled or clustered together in pairs for performance and availability that run IBM developed storage systems software. While the spin doctors may not agree, essentially the DS8000 and its predecessors are based on and around Power series processors clustered together with a high speed interconnect that combine to host an operating system and IBM developed storage system application software.

Thus, for over a decade, IBM has been able to leverage technology improvement curve advantages with faster processors, increased memory and I/O connectivity in denser footprints, while enhancing its storage system application software.

Given that the current DS8000 family members utilize 2-way (2 core) or 4-way (4 core) Power5 and Power6 processors, similar to how their predecessors utilized previous generation Power4, Power3 and so forth processors, it only makes sense that IBM might possibly use a Power7 processor in a future DS8000 (or a derivative, perhaps even with a different name or model number). Again, this is all based on historical trends and patterns of the IBM storage systems group leveraging the latest generation of Power processors; after all, they are a large customer of the Power systems group.

Consequently it would make sense for the IBM storage folks to leverage the new Power7 processors and features, similar to how EMC is leveraging Intel processor enhancements along with what other vendors are doing.

There is certainly room in the DS8000 architecture for growth in terms of supporting additional nodes, complexes or controllers (or whatever your preferred term is for describing a server), each equipped with multiple processors (chips or sockets) that have multiple cores. While IBM has only commercially released two-complex or dual-server versions of the DS8000 with various numbers of cores per server, it has come nowhere close to the architectural limit on nodes. In fact, with this release of Power7 as an example, the model 755 can be clustered via InfiniBand with up to 64 nodes, each node having 4 sockets (e.g. 4-way) with up to 8 cores each. That means on paper 64 x 4 x 8 = 2048 cores, and each core could have up to 4 threads for concurrency, or half as many cores could be used for more cache and performance per core. Now will IBM ever come out with a 64-node DS8000 on steroids?
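
Doing that arithmetic explicitly (a back-of-the-envelope calculation using the figures above, not a configuration IBM has announced or shipped):

```python
# Back-of-the-envelope core/thread math for a hypothetical 64-node Power 755 cluster.
nodes = 64
sockets_per_node = 4       # 4-way
cores_per_socket = 8
threads_per_core = 4       # SMT, or trade threads for more cache/clock per core

cores = nodes * sockets_per_node * cores_per_socket
threads = cores * threads_per_core
print(cores, threads)      # 2048 cores, 8192 hardware threads on paper
```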

Tough to say; maybe possibly some day to play specmanship vs. the EMC VMAX 256-node architectural limit, however I’m not holding my breath just yet. Thus, with more and faster cores per processor, the ability to increase the number of processors per server or node, along with architectural capabilities to boost the number of nodes in an instance or cluster, on paper alone there is lots of headroom for the DS8000 or a future derivative.

What about software and functionality? Sure, IBM could in theory simply turn the crank and use a new hardware platform that is faster, has more capacity, is denser and is more energy efficient; however, what about new features?

Can IBM enhance the storage systems application software that it evolved from the ESS with new features to leverage underlying hardware capabilities including TurboCore, PowerVM, device and I/O sharing, Intelligent Energy efficiency and threading enhancements?

Can IBM leverage those and other features to support not only scaling of performance, availability, capacity and energy efficiency in an economical manner, but also add features for advanced automated tiering or data movement plus other popular industry buzzword functionality?

 

Additional thoughts and perspectives
One of the things I find interesting is that some IBM folks, along with their channel partners, will go to great lengths to explain why and how the DS8000 is not just a pair of Power-based servers tightly coupled together. Yet, on the other hand, some of those folks will go to great lengths touting the advantages of leveraging off-the-shelf or commercially available Intel or AMD based servers, as in IBM’s own XIV storage solution.

I can understand that in the past, when the likes of EMC, Hitachi and Fujitsu were all competing with IBM by building bigger and more function-rich monolithic systems; however, that trend is shifting. The trend now, as is being seen with the EMC VMAX, is to decouple and leverage more off-the-shelf, commercially available technology combined with custom ASICs where and when needed.

Thus, at a time when more attention and discussion is around clustered, grid and scalable storage systems, will we see or hear the IBM folks change their tune about the architectural scale-up and scale-out capabilities of the Power-enabled DS8000 family?

There had been some industry speculation that the DS8000 would be the end of the line if the Power7 had not been released; that speculation will now (assuming IBM leverages the Power7 for storage) shift to whether there will be a Power8 or Power9 and so forth.

From a storage perspective, is the DS8K still relevant?

I say yes, given its installed base and the need for IBM to have an enterprise solution of its own (sorry, IMHO XIV does not fit that bill just yet), unless it cuts an OEM deal with the likes of Hitachi or Fujitsu, which while possible I do not see as likely near term. Another soft point on its relevance is to gauge the reaction from competitors including EMC and HDS.

From a server perspective, what is the benefit of the new Power7 enabled servers from IBM?

Simple, increase scale of performance for single thread as well as concurrent or parallel application workloads.

In other words, supporting more web sites, partitions for virtual machines and guest operating system instances, databases, compute and other applications that demand performance and economy of scale.

This also means that IBM has a platform to aggressively go after Sun Solaris server customers with a lifeline during the Oracle transition, not to mention being a platform for running Oracle in addition to its own UDB/DB2 database. In addition to being a platform for Unix (AIX) as well as Linux, the Power7 series is also at the heart of the current generation iSeries (the server formerly known as the AS400).


Closing comments (for now):
Given IBM’s history of following a Power chip enhancement with a new upgraded version of the DS8000 (or the ESS/2105 aka Shark/VSS) and its predecessors within a reasonable amount of time, I would be surprised if we do not see a new DS8000 (perhaps even renamed or renumbered) within the year.

This is similar to how other vendors leverage new processor chip technology to pace their system upgrades; for example, consider how many vendors who leverage Intel processors, EMC among others, have made announcements over the past year since the Nehalem series rolled out.

Let’s see what the IBM truth squads have to say, or, not have to say :)

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

How to win approval for upgrades: Link them to business benefits

Drew Robb has another good article over at Processor.com about various tips and strategies on how to gain approval for hardware (or software) purchases, with some comments by yours truly.

My tips and advice quoted in the story include linking technology resources to business needs and impact, which may be common sense, however it is still a time-tested, effective technique.

Instead of speaking tech talk such as performance, capacity, availability, IOPS, bandwidth, GHz, frames or packets per second, VM-to-PM ratios or dedupe ratios, map them to business speak, that is, things that finance, accountants, MBAs or other management personnel understand.

For example, how many transactions at a given response time can be supported by a given type of server, storage or networking device.

Or, put a different way, with a given device, how much work can be done and what is the associated monetary or business benefit.
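
As a simple, hypothetical way to frame that translation (all figures below are invented for illustration, not from the article):

```python
# Hypothetical translation of tech specs into business terms; all figures invented.
device_cost = 25_000             # purchase price in dollars
useful_life_years = 3
transactions_per_sec = 400       # sustained at the required response time
peak_hours_per_day = 8

lifetime_transactions = (transactions_per_sec * 3600 * peak_hours_per_day
                         * 365 * useful_life_years)
cost_per_million_txn = device_cost / (lifetime_transactions / 1_000_000)

print(f"{lifetime_transactions:,} transactions over {useful_life_years} years")
print(f"~${cost_per_million_txn:.2f} per million transactions")
```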

Likewise, if you do not have a capacity plan for servers, storage, I/O and networking along with software and facilities covering performance, availability, capacity and energy demands now is the time to put one in place.

More on capacity and performance planning later; however for now, if you want to learn more, check Chapter 10 (Performance and Capacity Planning) in my book Resilient Storage Networks: Designing Flexible and Scalable Data Infrastructure (Elsevier).

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Recent StorageIO Media Coverage and Comments

Bizwire | eChannel Line | Enterprise Storage Forum | MSNBC | Processor | SearchStorage | FedTech | Computer Weekly

Realizing that some prefer blogs to web pages, tweets or other venues, here are some recent links to media coverage and comments by me on different topics, which among others can be found at www.storageio.com/news.html.

  • Business Wire: Comments on The Green and Virtual Data Center Book – Jan 09
  • Search Storage: Comments on Open Source Storage – Jan 09
  • Search Storage: Comments on Clustered Storage – Jan 09
  • Storage Magazine: Comments on DR/BC Sites – Jan 09
  • SearchStorage: Comments on Fujitsu Eternus Storage – Jan 09
  • Enterprise Storage Forum: Comments on Quest buying Monosphere – Jan 09
  • Processor: Comments on Reducing Storage Costs – Jan 09
  • Enterprise Storage Forum: Comments on Apple Mac storage enhancements – Jan 09
  • Enterprise Storage Forum: Comments on EMC buying Sourcelabs & Opensource – Jan 09
  • SearchStorage Oz/NZ: Comments on Hot Technologies and Hype – Jan 09
  • CNBC: Comments on Storing Digital Documents – Dec 08
  • Enterprise Storage Forum: Comments on pNFS and Data Storage Trends – Dec 08
  • Enterprise Storage Forum: Comments on Symantec shifting hardware spending – Dec 08
  • Search Storage: Comments on DAS being more common than perceived – Dec 08
  • IT World Canada: Comments on Sun seeing lack of Storage Industry Innovation – Dec 08
  • Search Storage: Comments on Data Movement and Migration – Dec 08
  • eChannel Line: Comments on EMC and Dell renewing their vows – Dec 08
  • eChannel Line: Comments on Adaptec and SAS/SATA adapters – Dec 08
  • eChannel Line: Comments on Dell data de-duplication strategy – Nov 08
  • Server Watch: Comments on Server Virtualization Brings Fresh Life to DAS – Nov 08
  • Tech News World: Comments on Samsung Jumbo SSD drives – Nov 08
  • Enterprise Planet: Comments on EMC Cloud Storage (ATMOS) – Nov 08
  • eChannel Line: Comments on HPs new USVP virtualization platform – Nov 08
  • Search Storage: Comments on EMCs cloud and policy based storage – Nov 08
  • Tech News World: Comments on SANdisk SSD – Nov 08
  • Enterprise Storage Forum: Comments on HP adding storage virtualizaiton – Nov 08
  • Mainframe Executive: Comments on Green and Efficient Storage – Nov 08
  • Internet News: Comments – Symantec Trims Enterprise Vault Nov 08
  • Enterprise Storage Forum: Comments on DAS remaining relevant – Nov 08
  • SearchSMBStorage: Comments – NAS attraction for SMBs Nov 08
  • See more at www.storageio.com/news.html

Cheers gs