EMC DSSD D5 Rack Scale Direct Attached Shared SSD All Flash Array Part I

server storage I/O trends

This is the first post in a two-part series pertaining to the EMC DSSD D5 announcement; you can read part two here.

EMC announced today the general availability of their DSSD D5 Shared Direct Attached SSD (DAS) flash storage system (e.g. All Flash Array or AFA), which is a rack-scale solution. If you recall, EMC acquired DSSD back in 2014, which you can read more about here. EMC announced four configurations that include 36TB, 72TB and 144TB of raw flash SSD capacity with support for up to 48 dual-ported host client servers.

Via EMC Pulse Blog

What Is DSSD D5

At a high level, EMC DSSD D5 is a PCIe direct attached SSD flash storage solution that enables aggregation of disparate SSD card functionality typically found in separate servers into a shared system, without causing aggravation. DSSD D5 helps to alleviate server side I/O bottlenecks or aggravation issues that can be the result of aggregation of workloads or data. Think of DSSD D5 as a shared application server storage I/O accelerator for up to 48 servers to access up to 144TB of raw flash SSD to support various applications that have the need for speed.

Applications that have the need for speed, or that can benefit from spending less time waiting for results where time is money, or from boosting productivity, can enable high-profitability computing. This includes legacy as well as emerging applications and workloads spanning little data, big data, and fast structured and unstructured data. From Oracle to SAS to HBase and Hadoop among others, perhaps even Alluxio.

Some examples include:

  • Clusters and scale-out grids
  • High Performance Computing (HPC)
  • Parallel file systems
  • Forecasting and image processing
  • Fraud detection and prevention
  • Research and analytics
  • E-commerce and retail
  • Search and advertising
  • Legacy applications
  • Emerging applications
  • Structured database and key-value repositories
  • Unstructured file systems, HDFS and other data
  • Large undefined work sets
  • From batch stream to real-time
  • Reduces run times from days to hours

Where to learn more

Continue reading with the following links about NVMe, flash SSD and EMC DSSD.

  • Part one of this series here and part two here.
  • Performance Redefined! Introducing DSSD D5 Rack-Scale Flash Solution (EMC Pulse Blog)
  • EMC Unveils DSSD D5: A Quantum Leap In Flash Storage (EMC Press Release)
  • EMC Declares 2016 The “Year of All-Flash” For Primary Storage (EMC Press Release)
  • EMC DSSD D5 Rack-Scale Flash (EMC PDF Overview)
  • EMC DSSD and Cloudera Evolve Hadoop (EMC White Paper Overview)
  • Software Aspects of The EMC DSSD D5 Rack-Scale Flash Storage Platform (EMC PDF White Paper)
  • EMC DSSD D5 (EMC PDF Architecture and Product Specification)
  • EMC VFCache respinning SSD and intelligent caching (Part II)
  • EMC To Acquire DSSD, Inc., Extends Flash Storage Leadership
  • Part II: XtremIO, XtremSW and XtremSF EMC flash ssd portfolio redefined
  • XtremIO, XtremSW and XtremSF EMC flash ssd portfolio redefined
  • Learn more about flash SSD here and NVMe here at thenvmeplace.com

What this all means

    Today's legacy and emerging applications have the need for speed, and where the applications themselves may not need speed, the users as well as the Internet of Things (IoT) devices that depend upon or feed those applications do need things to move faster. Fast applications need fast software and hardware to get the same amount of work done faster with fewer wait delays, as well as to process larger amounts of structured and unstructured little data, big data and very fast big data.

    Different applications, along with the data infrastructures they rely upon including servers, storage, I/O hardware and software, need to adapt to various environments; a one-size, one-approach model does not fit all scenarios. What this means is that some applications and data infrastructures will benefit from shared direct attached SSD storage such as rack-scale solutions using EMC DSSD D5, while other applications will benefit from AFA or hybrid storage systems along with other approaches used in various ways.

    Continue reading part two of this series here including how EMC DSSD D5 works and more perspectives.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    Intel Micron 3D XPoint server storage NVM SCM PM SSD

    3D XPoint server storage class memory SCM


    Storage I/O trends

    Updated 1/31/2018


    This is the second of a three-part series on the recent Intel and Micron 3D XPoint server storage memory announcement. Read Part I here and Part III here.

    Is this 3D XPoint marketing, manufacturing or material technology?

    You can't have a successful manufactured material technology without some marketing; likewise, marketing without some manufactured material would be manufactured marketing. In the case of the 3D XPoint announcement launch, there was real technology shown, granted it was only a wafer and dies as opposed to an actual DDR4 DIMM, PCIe Add In Card (AIC) or drive form factor Solid State Device (SSD) product. On the other hand, on a relative comparison basis, even though there is marketing collateral available to learn more from, this was far from an over-the-big-top, made-for-TV or web circus event, which can be a good thing.


    Wafer unveiled containing 3D XPoint 128 Gb dies

    Who will get access to 3D XPoint?

    Initially, 3D XPoint production capacity will be used by the two companies to offer early samples to their customers later this year, with general production slated for 2016, meaning early customer-deployed products starting sometime in 2016.

    Is it NAND or NOT?

    3D XPoint is not NAND flash; it is also not NVRAM or DRAM. It's a new class of NVM that can be used for server class main memory with persistence, or as persistent data storage among other uses (cell phones, automobiles, appliances and other electronics). In addition, 3D XPoint is more durable, with a longer useful life for writing and storing data vs. NAND flash.

    Why is 3D XPoint important?

    As mentioned during the Intel and Micron announcement, there have only been seven major memory technologies introduced since the transistor back in 1947, granted there have been many variations along with generational enhancements of those. Thus 3D XPoint is being positioned by Intel and Micron as the eighth memory class joining its predecessors many of which continue to be used today in various roles.


    Major memory classes or categories timeline

    In addition to the above memory classes or categories timeline, the following shows in more detail various memory categories (click on the image below to get access to the Intel interactive infographic).

    Intel History of Memory Infographic
    Via: https://intelsalestraining.com/memory timeline/ (Click on image to view)

    What capacity size is 3D XPoint?

    Initially the 3D XPoint technology is available as a two-layer, 128 Gbit (cell) per die capacity. Keep in mind that there are 8 bits to a byte, resulting in 16 GByte of capacity per chip initially. With density improvements, as well as increased stacking of layers, the number of cells or bits per die (e.g. what makes up a chip) should improve, and most implementations will have multiple chips in some type of configuration.

    What will 3D XPoint cost?

    During the 3D XPoint launch webinar, Intel and Micron hinted that initial pricing will be between current DRAM and NAND flash on a per cell or bit basis; however, real pricing and costs will vary depending on how it is packaged for use. For example, whether it is placed on a DDR4 or different type of DIMM, on a PCIe Add In Card (AIC), or packaged as a drive form factor SSD, among other options, will affect the real price. Likewise, as with other memories and storage mediums, as production yields and volumes increase, along with denser designs, the cost per usable cell or bit can be expected to further improve.

    Where to read, watch and learn more

    Storage I/O trends

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    DRAM, which has been around for some time, has plenty of life left for many applications, as does NAND flash including new 3D NAND, vNAND and other variations. For the next several years, there will be co-existence between new and old NVM and DRAM among other memory technologies, including 3D XPoint. Read more in this series including Part I here and Part III here.

    Disclosure: Micron and Intel have been direct and/or indirect clients in the past via third-parties and partners, also I have bought and use some of their technologies direct and/or in-direct via their partners.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    3D XPoint nvm pm scm storage class memory

    Part III – 3D XPoint server storage class memory SCM


    Storage I/O trends

    Updated 1/31/2018


    This is the third of a three-part series on the recent Intel and Micron 3D XPoint server storage memory announcement. Read Part I here and Part II here.

    What is 3D XPoint and how does it work?

    3D XPoint is a new class of memory (view other categories of memory here) that provides performance for reads and writes closer to that of DRAM with about 10x the capacity density. In addition to speed closer to DRAM vs. slower NAND flash, 3D XPoint is also non-volatile memory (NVM) like NAND flash, NVRAM and others. What this means is that 3D XPoint can be used as persistent, higher density, fast server memory (or main memory for other computers and electronics). Besides being fast persistent main memory, 3D XPoint will also be a faster medium for solid state devices (SSDs) including PCIe Add In Cards (AIC), M.2 cards and drive form factor 8637/8639 NVM Express (NVMe) accessed devices that also have better endurance or life span compared to NAND flash.


    3D XPoint architecture and attributes

    The initial die or basic chip building block 3D XPoint implementation is a two-layer, 128 Gbit device which, at 8 bits per byte, yields 16 GBytes raw. Over time, increased densities should become available as the bit density improves with more cells and further scaling of the technology, combined with packaging. For example, while a current die could hold up to 16 GBytes of data, multiple dies could be packaged together to create a 32GB, 64GB, 128GB or larger actual product. Think about not only where packaged flash based SSD capacities are today, but also where DDR3 and DDR4 DIMMs are at, such as 4GB, 8GB, 16GB and 32GB densities.
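    To make the capacity arithmetic above concrete, here is a minimal sketch; the 128 Gbit die density is the figure discussed above, while the die counts are illustrative assumptions rather than announced product configurations.

```python
# Rough capacity math for packaging 3D XPoint dies into a product.
# 128 Gbit per die is the figure discussed above; die counts are illustrative only.

GBIT_PER_DIE = 128
BITS_PER_BYTE = 8

def package_capacity_gbytes(num_dies: int, gbit_per_die: int = GBIT_PER_DIE) -> float:
    """Raw capacity in GBytes for a package built from num_dies dies."""
    return num_dies * gbit_per_die / BITS_PER_BYTE

for dies in (1, 2, 4, 8):
    print(f"{dies} die(s): {package_capacity_gbytes(dies):.0f} GB raw")
# 1 die -> 16 GB, 2 -> 32 GB, 4 -> 64 GB, 8 -> 128 GB raw
```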

    The 3D aspect comes from the memory being in a matrix, initially two layers high, with multiple rows and columns that intersect; where those intersections occur there is a microscopic material based switch for accessing a particular memory cell. Unlike NAND flash, where an individual cell or bit is accessed as part of a larger block or page comprising several thousand bytes at once, 3D XPoint cells or bits can be individually accessed to speed up reads and writes in a more granular fashion. It is this more granular access, along with performance, that will enable 3D XPoint to be used in lower latency scenarios where DRAM would normally be used.

    Instead of trapping electrons in a cell to create a bit of capacity (e.g. on or off) like NAND flash, 3D XPoint leverages the underlying physical material properties to store a bit as a phase change, enabling use of all cells. In other words, instead of being electron based, it is material based. While Intel and Micron did not specify the actual chemistry and physical materials used in 3D XPoint, they did discuss some of the characteristics. If you want to go deep, check out how Dailytech makes an interesting educated speculation or thesis on the underlying technology.

    Watch the following video to get a better idea and visually see how 3D XPoint works.



    3D XPoint YouTube Video

    What are these chips, cells, wafers and dies?

    Left: many dies on a wafer; right: a closer look at the dies cut from the wafer

    Dies (here and here) are the basic building block of what goes into chips, which in turn are the components used for creating DDR DIMMs for main computer memory, as well as for creating SD and MicroSD cards, USB thumb drives, PCIe AICs and drive form factor SSDs, custom modules on motherboards, or consumption at the bare die and wafer level (e.g. where you are doing really custom things at volume, beyond soldering iron scale).

    Storage I/O trends

    Have Intel and Micron cornered the NVM and memory market?

    We have heard proclamations, speculation and statements of the demise of DRAM, NAND flash and other volatile and NVM memories for years, if not decades now. Each year there is the usual "this will be the year of x," where "x" can include, among others: Resistive RAM (aka ReRAM or RRAM, aka the memristor) that HP earlier announced it was going to bring to market before canceling those plans earlier this year, while Crossbar continues to pursue RRAM; MRAM or Magnetoresistive RAM; Phase Change Memory (aka CRAM, PCM or PRAM); and FRAM (aka FeRAM or Ferroelectric RAM).

    flash SSD and NVM trends

    Expanding persistent memory and SSD storage markets

    Keep in mind that there are many steps, taking time measured in years or decades, to go from a research and development lab idea to a prototype that can then be produced at production volumes with economic yields. As a point of reference, there is still plenty of life in both DRAM as well as NAND flash, the latter having appeared around 1989.

    Industry vs. Customer Adoption and deployment timeline

    Technology industry adoption precedes customer adoption and deployment

    There is a difference between industry adoption and deployment vs. customer adoption and deployment, they are related, yet separated by time as shown in the above figure. What this means is that there can be several years from the time a new technology is initially introduced and when it becomes generally available. Keep in mind that NAND flash has yet to reach its full market potential despite having made significant inroads the past few years since it was introduced in 1989.

    This begs the question of whether 3D XPoint is a variation of phase change, RRAM, MRAM or something else. Over at Dailytech they lay out a line of thinking (or educated speculation) that 3D XPoint is some derivative or variation of phase change; time will tell what it really is.

    What’s the difference between 3D NAND flash and 3D XPoint?

    3D NAND is a form of NAND flash NVM, while 3D XPoint is a completely new and different type of NVM (e.g. it is not NAND).

    3D NAND is a variation of traditional flash, with the difference being vertical stacking vs. horizontal to improve density, also known as vertical NAND or V-NAND. Vertical stacking is like building up to house more tenants or occupants in a dense environment (scaling up), vs. scaling out by using more space where density is not an issue. Note that magnetic HDDs shifted to perpendicular (e.g. vertical) recording about ten years ago to break through the superparamagnetic barrier, and more recently magnetic tape has also adopted perpendicular recording. Also keep in mind that 3D XPoint and the earlier announced Intel and Micron 3D NAND flash are two separate classes of memory that both just happen to have 3D in their marketing names.

    Where to read, watch and learn more

    Storage I/O trends

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    First, keep in mind that this is very early in the 3D XPoint technology evolution life-cycle, and both DRAM and NAND flash will not be dead, at least near term. Keep in mind that NAND flash appeared back in 1989 and only over the past several years has finally hit its mainstream adoption stride, with plenty of market upside left. Same with DRAM, which has been around for some time; it too still has plenty of life left for many applications. However, other applications that need improved speed over NAND flash, or persistence and density vs. DRAM, will be some of the first to leverage new NVM technologies such as 3D XPoint. Thus, at least for the next several years, there will be co-existence between new and old NVM and DRAM among other memory technologies. Bottom line: 3D XPoint is a new class of NVM that can be used for persistent main server memory or for persistent fast storage memory. If you have not done so, check out Part I here and Part II here of this three-part series on Intel and Micron 3D XPoint.

    Disclosure: Micron and Intel have been direct and/or indirect clients in the past via third-parties and partners, also I have bought and use some of their technologies direct and/or in-direct via their partners.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    May and June 2015 Server StorageIO Update Newsletter

    Volume 15, Issue V & VI

    Hello and welcome to this joint May and June 2015 Server StorageIO update newsletter. Here in the northern hemisphere it's summer, which means holiday vacations among other things.

    There has been a lot going on this spring and so far this summer with more in the wings. Summer can also be a time to get caught up on some things, preparing for others while hopefully being able to enjoy some time off as well.

    In terms of what I have been working on (or with): clouds (OpenStack, vCloud Air, AWS, Azure, GCS among others), virtual machines and containers, flash SSD devices (drives, cards), software defining, content servers, NVMe, databases, data protection items, servers, cache and micro-tiering among other things.

    Speaking of getting caught up, back in early May, among many other conferences (Cisco, Docker, HP, IBM, OpenStack, Red Hat and other events), was EMCworld. EMC covered my hotel and registration costs to attend the event in Las Vegas (thanks EMC, that's a disclosure btw ;). View a summary StorageIOblog post covering EMCworld 2015 here, along with recent EMC announcements including the acquisition of cloud services vendor Virtustream for $1.2B, and ECS 2.0.

    Server and Storage I/O Wrappings

    This month's newsletter has a focus on software and storage wrappings, that is, how your storage or software is packaged, delivered or deployed. For example: traditional physical storage systems, software defined storage as shrink-wrap or download, tin-wrapped software as an appliance, virtual wrapped such as a virtual storage appliance, or cloud wrapped, among others.

    OpenStack software defined cloud

    OpenStack (the organization, community, event and software) continues to gain momentum. The latest release, known as Kilo (more Kilo info here), came out in early April, followed by the OpenStack summit in May.

    Some of you might be more involved with OpenStack vs. others, perhaps having already deployed into your production environment. Perhaps you, like myself have OpenStack running in a lab for proof of concept, research, development or learning among other things.

    You might even be using the services of a public cloud or managed service provider that is powered by OpenStack. On the other hand, you might be familiar with OpenStack from reading up on it, watching videos, listening to podcasts or attending events to figure out what it is, where it fits, as well as what your organization can use it for.

    Drew Robb (@Robbdrew) has a good overview piece about OpenStack and storage over at Enterprise Storage Forum (here). OpenStack is a collection of tools or bundles for building private, hybrid and public clouds. These various open source projects within the OpenStack umbrella include compute (Nova) and virtual machine images (Glance). Other components include dashboard management (Horizon), security and identity control (Keystone), network (Neutron), object storage (Swift), block storage (Cinder) and file-based storage (Manila) among others.

    It's up to the user to decide which pieces to add. For example, you can use Swift without having virtual machines and vice versa. Read Drew's complete article here.

    Btw, if you missed it, not only has OpenStack added file support (e.g. Manila), Amazon Web Services (AWS) also recently added the Elastic File System (EFS), complementing its Elastic Block Store (EBS).

    Focus on Storage Wrappings

    Software exists and gets deployed in various places as shown in the following examples.

    software wrapped storage

    • Cloud wrapped software – software that can be deployed in a cloud instance
    • Container wrapped software – software deployed in a Docker or other container
    • Firmware wrapped software – software that gets packaged and deployed as firmware in a server, storage, network device or adapter
    • Shrink wrapped software – software that can be downloaded and deployed where you want
    • Tin wrapped software – software that is packaged or bundled with hardware (e.g. tin) such as an appliance or storage system
    • Virtual wrapped software – software deployed as a virtual appliance, such as a virtual storage appliance (VSA)

    server storage software wrapping

    StorageIOblog posts

    Data Protection Diaries

    Modernizing Data Protection
    Using new and old things in new ways

    This is part of an ongoing series of posts (part of www.storageioblog.com/data-protection-diaries-main/) on data protection, including archiving, backup/restore, business continuance (BC), business resiliency (BR), data footprint reduction (DFR), disaster recovery (DR) and High Availability (HA), along with related themes, tools, technologies, techniques, trends and strategies.
    world backup day (and test your restore) image licensed from Shutterstock by StorageIO

    Data protection is a broad topic that spans from logical and physical security to HA, BC, BR, DR and archiving (including life beyond compliance), along with various tools, technologies and techniques. Key is aligning those to the needs of the business or organization for today's as well as tomorrow's requirements. Instead of doing things the way they have been done in the past, which may have been based on what was known or possible given technology capabilities at the time, why not start using new and old things in new ways?

    Let’s start using all the tools in the data protection toolbox regardless of if they are new or old, cloud, virtual, physical, software defined product or service in new ways while keeping the requirements of the business in focus. Read more from this post here.

    In case you missed it:

    View other recent as well as past blog posts here

    In This Issue


  • Industry Trends Perspectives News
  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Events and Webinars
  • Recommended Reading List
  • Server StorageIO Lab reports
  • Resources and Links
  • Industry News and Activity

    Recent Industry news and activity

    AWS adds new M4 virtual machine instances
    Cisco provides FCoE proof of life

    Google new cloud storage pricing
    HP announces new data center services
    HDS announces new products & services
    IBM enhances storage portfolio

    IBTA announces RoCE initiative
    InfiniteIO announces network/cloud cache
    Intel buying FPGA specialist Altera
    NetApp – Changes CEO

    View other recent and upcoming events here

    StorageIO Commentary in the news

    StorageIO news (image licensed for use from Shutterstock by StorageIO)
    Recent Server StorageIO commentary and industry trends perspectives about news, activities and announcements.

    BizTechMagazine: Comments on how to simplify your data center with virtualization
    EnterpriseStorageForum: Comments on OpenStack and Clouds
    EnterpriseStorageForum: Comments on Top Ten Software Defined Storage Tips, Gotchas and Cautions
    EdTech: Comments on Harness Power with New Processors

    Processor: Comments on Protecting Your Servers & Networking equipment

    Processor: Comments on Improve Remote Server Management including KVM
    CyberTrend: Comments on Software Defined Data Center and virtualization
    BizTechMagazine: Businesses Prepare as End-of-Life for Windows Server 2003 Nears
    InformationWeek: Top 10 sessions from Interop Las Vegas 2015

    View more trends comments here

    Vendors you may not have heard of

    This is a new section starting in this issue where various new or existing vendors as well as service providers you may not have heard about will be listed.

    CloudHQ – Cloud management tools
    EMCcode Rex-Ray – Container management
    Enmotus FUZE – Flash leveraged micro tiering
    Rubrik – Data protection management
    Sureline – Data protection management
    Virtunet systems – VMware flash cache software
    InfiniteIO – Cloud and NAS cache appliance
    Servers Direct – Server and storage platforms

    Check out more vendors you may know, have heard of, or that are perhaps new to you on the Server StorageIO Industry Links page here. There are over 1,000 vendor entries (and growing) on the links page.

    StorageIO Tips and Articles

    So you have a new storage device or system. How will you test or find its performance? Check out this quick-read tip on storage benchmark and testing fundamentals over at BizTech.

    Check out these resources and links on server storage I/O performance and benchmarking tools. View more tips and articles here

    Webinars

    BrightTalk Webinar – June 23 2015 9AM PT
    Server Storage I/O Innovation v2.015: Protect Preserve & Serve Your Information

    Videos and Podcasts

    VMware vCloud Air Server StorageIO Lab Test Drive Ride along videos.

    VMware vCloud Air test drive videos Part I & II

    StorageIO podcasts are also available at StorageIO.tv

    Various Industry Events

     

    VMworld August 30-September 3 2015

    Flash Memory Summit August 11-13

    Interop – April 29 2015 Las Vegas (Voted one of top ten sessions at Interop, more here)
    Smart Shopping for Your Storage Strategy

    View other recent and upcoming events here

    From StorageIO Labs

    Research, Reviews and Reports

    VMware vCloud Air Test Drive
    VMware vCloud Air
    local and distributed NAS (NFS, CIFS, DFS) file data. Read more here.

    VMware vCloud Air

    VMware vCloud Air provides a platform similar to those just mentioned among others for your applications and their underlying resource needs (compute, memory, storage, networking) to be fulfilled. In addition, it should not be a surprise that VMware vCloud Air shares many common themes, philosophies and user experiences with the traditional on-premises based VMware solutions you might be familiar with.

    View other StorageIO lab review reports here

    Resources and Links

    Check out these useful links and pages:
    storageio.com/links
    objectstoragecenter.com
    storageioblog.com/data-protection-diaries-main/

    storageperformance.us
    thessdplace.com
    storageio.com/raid
    storageio.com/ssd

    Enjoy this edition of the Server StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts along with in the news commentary appearing soon.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    EMCworld 2015 How Do You Want Your Storage Wrapped?

    Server Storage I/O trends

    EMCworld 2015 How Do You Want Your Storage Wrapped?

    Back in early May I was invited by EMC to attend EMCworld 2015 which included both the public sessions, as well as several NDA based discussions. Keep in mind that there is the known, there is the unknown (or assumed or speculated) and in between there are NDA’s, nuff said on that. EMC covered my hotel and registration costs to attend the event in Las Vegas (thanks EMC, that’s a disclosure btw ;) and here is a synopsis of various EMCworld 2015 announcements.

    What EMC announced

    • VMAX3 enhancements to the EMC enterprise flagship storage platform to keep it relevant for traditional legacy workloads as well as for converged, scale-out, cloud, virtual and software defined environments.
    • VNX 3200 entry-level All Flash Array (AFA) flash SSD system starting at $25,000 USD for a 3TB unified platform with full data services found in other VNX products.
    • vVNX aka Virtual VNX aka "project liberty", which is a community (e.g. free) software version of the VNX. vVNX is a Virtual Storage Appliance (VSA) that you download and run on a VMware platform. Learn more and download here. Note the install will do a CPU type check, so forget about trying to run it on an Intel NUC or similar; I tried just because I could, and the install will protect you from doing such things.
    • Various data protection related items including new Datadomain platforms as well as software updates and integration with other EMC platforms (storage systems).
    • All Flash Array (AFA) XtremIO 4.0 enhancements including larger clusters, larger nodes to boost performance, capacity and availability, along with copy service updates among other improvements.
    • Preview of DSSD, a shared (inside a rack) external flash Solid State Device (SSD) solution, including more details. While much of DSSD is still under NDA, EMC did provide more public details at EMCworld. Between what was displayed and announced publicly at EMCworld, as well as what can be found via Google (or other searches), you can piece together more of the DSSD story. What is known publicly today is that DSSD leverages the new Non-Volatile Memory Express (NVMe) access protocol built upon underlying PCIe technology. More on DSSD in future discussions; if you have not done so, get an NDA deep dive briefing on it from EMC.
    • ScaleIO is now available via a free download here including both Windows and Linux clients as well as instructions for those operating systems as well as VMware.
    • ViPR can also be downloaded for free (it has been previously available) from here, and it has been placed into open source by EMC.

    What EMC announced since EMCworld 2015

    • Acquisition of cloud services (and software tools) vendor Virtustream for $1.2B adding to the federation cloud services portfolio (companion to VMware vCloud Air).
    • Release of ECS 2.0 including a free download here. This new version of ECS (Elastic Cloud Storage) can be used independent of the ViPR controller, or in conjunction with ViPR. In addition, ECS now has about 80% of the functionality of the Centera object storage platform. The remaining 20% of functionality (mainly regulatory compliance governance) of Centera will be added to ECS in the future, providing a migration path for Centera customers. In case you are wondering what EMC does with Centera, Atmos, ViPR and now ECS: first, ECS can work with or without ViPR; second, the functionality of Centera and Atmos is being rolled into ECS. ECS, as a refresher, is software that transforms general purpose industry standard servers with direct storage into a scale-out HDFS and object storage solution (see the object access sketch after this list).
    • Check out EMCcode including S3motion, which I use and have reviewed here. Also check out EMCcode Rex-Ray which, if you are into Docker containers, should be of interest; I know I'm interested in it.
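
    Since ECS presents itself as scale-out object storage, a common way to consume it is via an S3-compatible interface (which is also what tools such as S3motion speak). The following is a minimal sketch only; the endpoint URL, credentials and bucket name are hypothetical placeholders, not values from the announcement.

```python
# Illustrative access to an S3-compatible object endpoint such as ECS exposes.
# Endpoint, credentials and bucket below are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://ecs.example.local:9021",   # placeholder endpoint
    aws_access_key_id="DEMO_ACCESS_KEY",
    aws_secret_access_key="DEMO_SECRET_KEY",
)

s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello object storage")
obj = s3.get_object(Bucket="demo-bucket", Key="hello.txt")
print(obj["Body"].read())
```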

    Server Storage I/O trends

    What this all means and wrap-up

    There was no single major explosive announcement; however, the sum of all the announcements together should not be overshadowed by the made-for-TV (or web) big tent productions and entertainment. What EMC announced was effectively about how you would like, want and need your storage and associated data services along with management wrapped.

    tin wrapped software

    By being wrapped: do you want your software defined storage management and storage wrapped in a legacy turnkey solution such as VMAX3, VNX or Isilon? Do you want or need it to be hybrid or all flash, converged and unified, block, file or object?

    software wrapped storage

    Or do you need or want the software defined storage management and storage to be "shrink wrapped" as a download so you can deploy on your own hardware "tin wrapped" or as a VSA "virtual wrapped" or cloud wrapped? Do you need or want the software defined storage management and storage to leverage anybody’s hardware while being open source?

    server storage software wrapping

    How do you need or want your storage to be wrapped to fit your specific needs? That, IMHO, was the essence of what EMC announced at EMCworld 2015; granted, the motorcycles and other production entertainment were engaging as well as educational.

    Ok, nuff said for now

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    How to test your HDD SSD or all flash array (AFA) storage fundamentals

    How to test your HDD SSD AFA Hybrid or cloud storage

    server storage data infrastructure i/o hdd ssd all flash array afa fundamentals

    Updated 2/14/2018

    Over at BizTech Magazine I have a new article, 4 Ways to Performance Test Your New HDD or SSD, that provides a quick guide to verifying or learning what the speed characteristics of your new storage device are.

    An out-take from the article used by BizTech as a "tease" is:

    These four steps will help you evaluate new storage drives. And … psst … we included the metrics that matter.

    Building off the basics, server storage I/O benchmark fundamentals

    The four basic steps in the article are:

    • Plan what and how you are going to test (what’s applicable for you)
    • Decide on a benchmarking tool (learn about various tools here)
    • Test the test (find bugs and errors before a long-running test; see the sketch below)
    • Focus on metrics that matter (what’s important for your environment)
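
    As a way to put the "test the test" step into practice, here is a minimal sketch in Python that issues small random reads against a file and reports a rough IOPS figure. It is not from the BizTech article; the target file name, block size and duration are illustrative assumptions, and a short run like this is only meant to catch setup mistakes before committing to a long benchmark.

```python
# Minimal random-read smoke test (Linux/Unix): sanity check a test target before a
# long benchmark run. Target path, block size and duration are illustrative assumptions.
import os
import random
import time

TARGET = "testfile.bin"   # hypothetical pre-created test file on the device under test
BLOCK_SIZE = 4096         # 4 KB reads
DURATION = 10             # seconds; keep short for a "test the test" run

size = os.path.getsize(TARGET)
blocks = size // BLOCK_SIZE
fd = os.open(TARGET, os.O_RDONLY)

ios = 0
start = time.time()
while time.time() - start < DURATION:
    offset = random.randrange(blocks) * BLOCK_SIZE
    os.pread(fd, BLOCK_SIZE, offset)   # one small random read
    ios += 1
os.close(fd)

elapsed = time.time() - start
print(f"{ios} reads in {elapsed:.1f}s, roughly {ios / elapsed:.0f} IOPS")
```

    Note that without direct I/O, or a target much larger than system memory, the operating system page cache will inflate the numbers; surfacing that kind of issue early is exactly why you test the test.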

    Server Storage I/O performance

    Where To Learn More

    View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    To some, the above (read the full article here) may seem like common sense tips and things everybody should know. On the other hand, there are many people who are new to server, storage, I/O and networking hardware and software, cloud and virtual environments, along with various applications, not to mention different tools.

    Thus the above is a refresher for some (e.g. déjà vu), while for others it might be new and revolutionary, or simply helpful. If you are interested in HDDs and SSDs, as well as other server storage I/O performance along with benchmarking tools, techniques and trends, check out the collection of links here (Server and Storage I/O Benchmarking and Performance Resources).

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    How many I/O iops can flash SSD or HDD do?

    How many i/o iops can flash ssd or hdd do with vmware?

    sddc data infrastructure Storage I/O ssd trends

    Updated 2/10/2018

    A common question I run across is how many I/O operations per second (IOPS) a flash SSD or HDD storage device or system can do or give.

    The answer is or should be it depends.

    This is the first of a two-part series looking at storage performance, and in context specifically around drive or device (e.g. mediums) characteristics across HDD, HHDD and SSD that can be found in cloud, virtual, and legacy environments. In this first part the focus is around putting some context around drive or device performance with the second part looking at some workload characteristics (e.g. benchmarks).

    What about cloud, tape summit resources, storage systems or appliance?

    Let's leave those for a different discussion at another time.

    Getting started

    Part of my interest in tools, metrics that matter, measurements, analysis and forecasting ties back to having been a server, storage and I/O performance and capacity planning analyst when I worked in IT. Another aspect ties back to also having been a sys admin as well as a business applications developer when on the IT customer side of things. This was followed by switching over to the vendor world, involved with among other things competitive positioning, customer design configuration, validation, simulation and benchmarking of HDD and SSD based solutions (e.g. life before becoming an analyst and advisory consultant).

    Btw, if you happen to be interested in learning more about server, storage and I/O performance and capacity planning, check out my first book Resilient Storage Networks (Elsevier), which has a bit of information on it. There is also coverage of metrics and planning in my two other books, The Green and Virtual Data Center (CRC Press) and Cloud and Virtual Data Storage Networking (CRC Press). I have some copies of Resilient Storage Networks available at a special reader or viewer rate (essentially shipping and handling). If interested, drop me a note and I can fill you in on the details.

    There are many rules of thumb (RUT) when it comes to metrics that matter such as IOPS, some of which are older, while others may be guesses or measured in different ways. However, the answer is that it depends on many things, ranging from whether it is a standalone hard disk drive (HDD), Hybrid HDD (HHDD) or Solid State Device (SSD), or whether it is attached to a storage system, appliance, or RAID adapter card, among others.

    Taking a step back, the big picture

    hdd image
    Various HDD, HHDD and SSD’s

    Server, storage and I/O performance and benchmark fundamentals

    Even if just looking at an HDD, there are many variables, ranging from the rotational speed or Revolutions Per Minute (RPM) to the interface, including 1.5Gb, 3.0Gb, 6Gb or 12Gb SAS or SATA, or 4Gb Fibre Channel. Simply using a RUT or number based on RPM can cause issues, particularly with 2.5 vs. 3.5 inch, or enterprise vs. desktop drives. For example, some current generation 10K 2.5 inch HDDs can deliver the same or better performance than an older generation 3.5 inch 15K. Other drive factors (see this link for HDD fundamentals) include physical size such as 3.5 inch or 2.5 inch small form factor (SFF), enterprise or desktop or consumer class, and the amount of drive level cache (DRAM). Space capacity of a drive can also have an impact, such as whether all or just a portion of a large or small capacity device is used. Not to mention what the drive is attached to, ranging from an internal SAS or SATA drive bay, a USB port, an HBA or RAID adapter card, or a storage system.
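
    To illustrate why an RPM-only rule of thumb can mislead, here is a small sketch of the classic back-of-the-envelope estimate, where average service time is roughly average seek time plus half a rotation. The seek-time values are illustrative assumptions, not specifications for any particular drive.

```python
# Classic rule-of-thumb estimate for small random I/O on a single HDD:
# average service time ~= average seek time + half a rotation (rotational latency).
# Seek times below are illustrative assumptions, not vendor specifications.

def estimate_hdd_iops(rpm: int, avg_seek_ms: float) -> float:
    rotational_latency_ms = (60_000 / rpm) / 2   # half a revolution, in milliseconds
    service_time_ms = avg_seek_ms + rotational_latency_ms
    return 1000 / service_time_ms                # I/Os per second

examples = [
    ("7.2K desktop", 7200, 8.5),
    ("10K 2.5in enterprise", 10_000, 3.8),
    ("15K 3.5in enterprise", 15_000, 3.4),
]
for label, rpm, seek in examples:
    print(f"{label}: roughly {estimate_hdd_iops(rpm, seek):.0f} IOPS")
```

    With those assumed numbers, a newer 10K drive with a quick seek lands in the same neighborhood as an older 15K drive, and none of this accounts for on-drive cache, interface, queue depth or workload mix.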

    disk iops
    HDD fundamentals

    How about benchmark and performance marketing or comparison tricks, including delayed, deferred or asynchronous writes vs. synchronous or actually committed data to devices? Let's not forget about short stroking (only using a portion of a drive for better IOPS) or even long stroking (to get better bandwidth leveraging spiral transfers), among others.

    Almost forgot, there are also thick, standard, thin and ultra thin drives in 2.5 and 3.5 inch form factors. What’s the difference? The number of platters and read write heads. Look at the following image showing various thickness 2.5 inch drives that have various numbers of platters to increase space capacity in a given density. Want to take a wild guess as to which one has the most space capacity in a given footprint? Also want to guess which type I use for removable disk based archives along with for onsite disk based backup targets (compliments my offsite cloud backups)?

    types of disks
    Thick, thin and ultra thin devices

    Beyond physical and configuration items, there are logical configurations including the type of workload: large or small I/Os, random, sequential, reads, writes or mixed (various combinations of random, sequential, read, write, large and small I/O). Other considerations include file system or raw device, the number of workers or concurrent I/O threads, and the size of the target storage space area, which determines the impact of any locality of reference or buffering. Some other items include how long the test or workload simulation ran for, and whether the device was new or worn in before use, among other items. The sketch below shows how these knobs might be captured in a simple workload description.
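
    To make the list of workload variables concrete, here is a minimal sketch of how such a workload definition might be captured before handing it to whatever tool you use. The field names and values are illustrative assumptions, not the input syntax of Iometer, Vdbench or any other tool.

```python
# Illustrative workload descriptor capturing the variables discussed above.
# Field names and values are examples only, not the syntax of any particular tool.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    io_size_bytes: int    # e.g. 4 KB small random vs. 256 KB large sequential
    read_pct: int         # percent reads; the remainder are writes
    random_pct: int       # percent random; the remainder sequential
    threads: int          # concurrent workers or outstanding I/Os
    target_size_gb: int   # working set size; small sets may stay in cache
    duration_s: int       # run long enough to get past warm-up effects

oltp_like = Workload("oltp-like", 4096, 70, 100, 16, 200, 600)
backup_like = Workload("backup-like", 262144, 10, 0, 4, 500, 600)
print(oltp_like)
print(backup_like)
```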

    Tools and the performance toolbox

    Then there are the various tools for generating IO’s or workloads along with recording metrics such as reads, writes, response time and other information. Some examples (mix of free or for fee) include Bonnie, Iometer, Iorate, IOzone, Vdbench, TPC, SPC, Microsoft ESRP, SPEC and netmist, Swifttest, Vmark, DVDstore and PCmark 7 among many others. Some are focused just on the storage system and IO path while others are application specific thus exercising servers, storage and IO paths.

    performance tools
    Server, storage and IO performance toolbox

    Having used Iometer since the late 90s, I can say it has its place and is popular given its ease of use. Iometer is also long in the tooth and has its limits, including not much if any new development; nevertheless, I have it in the toolbox. I also have Futuremark PCMark 7 (full version), which it turns out has some interesting abilities to do more than exercise an entire Windows PC. For example, PCMark can use a secondary drive as a target for doing I/O to.

    PCmark can be handy for spinning up with VMware (or other tools) lots of virtual Windows systems pointing to a NAS or other shared storage device doing real world type activity. Something that could be handy for testing or stressing virtual desktop infrastructures (VDI) along with other storage systems, servers and solutions. I also have Vdbench among others tools in the toolbox including Iorate which was used to drive the workloads shown below.

    What I look for in a tool is how extensible the scripting capabilities are for defining various workloads, along with the capabilities of the test engine. A nice GUI is handy, which makes Iometer popular, and yes, there are scripting capabilities with Iometer. That is also where Iometer is long in the tooth compared to some of the newer generation of tools that put more emphasis on extensibility vs. ease-of-use interfaces. This also assumes knowing what workloads to generate vs. simply kicking off some IOPS using default settings to see what happens.

    Another handy type of tool is one for recording what's going on with a running system, including I/Os, reads, writes, bandwidth or transfers, random and sequential, among other things. This is where, when needed, I turn to something like HiMon from HyperIO; if you have not tried it, get in touch with Tom West over at HyperIO and tell him StorageIO sent you to get a demo or trial. HiMon is what I used for doing start, stop and boot testing among other things, being able to see I/Os at the Windows file system level (or below), including very early in the boot or shutdown phase.

    Here is a link to some other things I did a while back with HiMon to profile some Windows and VDI activity.

    What’s the best tool or benchmark or workload generator?

    The one that meets your needs, usually your applications or something as close as possible to it.

    disk iops
    Various 2.5 and 3.5 inch HDD, HHDD, SSD with different performance

    Where To Learn More

    View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    That depends; however, continue reading part II of this series to see some results for various types of drives and workloads.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Two companies on parallel tracks moving like trains offset by time: EMC and NetApp

    View from VIA Rail Canada taken using Gregs iFlip

    I see some similarities and parallels between two competing companies. Those companies happen to be in the same sector (e.g. IT data storage), however offset by time (about a decade or so), subject to continued execution by both.

    Those two companies are EMC and NetApp.

    Some people might assert that these two companies are complete opposites, perhaps claiming that one is on the upswing while the other is on the down path (I have heard claims and counterclaims of both being on either path). I will leave the discussion or debate of which is on the up path and which is on the down path to the Twitterville and blogosphere ultimate tag team mud wrestling arena or YouTube video rooms.

    I see EMC and NetApp a bit differently, which you can take for what it is: simply an opinion or perspective, having been a competitor and partner of both when I was on the vendor side of the table, and later covering the two as an industry analyst.

    Without going too far down the memory lane route, in a nutshell, I recall when EMC was still a fledgling startup that wanted to sell me (I was on the customer side then) rebranded Fujitsu disk drives to attach to my VAX/VMS systems and memory for our mainframes. Come to think of it, Emulex was also selling disk drives back then before reinventing themselves later as an HBA and hub vendor.

    Later, as a vendor, around late 1994 or early 1995, it was the up-and-coming little Bay Area NAS filer appliance vendor (e.g. the toaster era) that we partnered with, including a very brief OEM deal involving repackaging their product; that vendor was NetApp, or Network Appliance as they were formerly known. Once that ended after a year or so, NetApp became a competitor, as was EMC, who at the time had the Symmetrix as its main act and was about to do the EPOCH backup and McData acquisitions as well as land the HP OEM deal for open systems.

    Ironically, NetApp was out to knock off Auspex, which happened fairly quickly, while EMC was struggling to get its NAS act together with the early DART behemoth while successfully knocking out IBM and other entrenched high-end solutions. In a twist of fate, the company I was working for ended up selling off its RAID patents (initially a few, then later all of them) to EMC for some cash, and later transitioned out of the hardware business, becoming simply a VAR of EMC (that was MTI).

    While I was at INRANGE, which later merged into CNT before being acquired by McData (I left before that) and then Brocade, both EMC and NetApp were partners across different product lines.

    What they have in common

    Ok, enough of the memory lane stuff; let's get back to where the similarities exist.

    Back in the mid 90s, EMC was essentially a one-trick pony with a very software feature-function rich large storage system that sold for a premium, generating lots of cash from its use of cache. Likewise, NetApp is a vendor that, while it has many product offerings and has made some acquisitions, still relies very much on its flagship NAS storage systems, which are also feature-function (e.g. software) rich and leverage cache to generate cash.

    Both companies are growing in terms of revenues, installed base, partners/OEMs and product diversity. Likewise each company needs to continue expansion into those as well as other adjacent areas.

    Can NetApp catch EMC? Maybe, maybe not; however, IMHO the question should be whether there are other areas that NetApp can extend its reach into, causing EMC to react to those, much like how EMC took advantage of opportunities causing IBM and others to react.

    Here are some other similarities I see of and for EMC and NetApp:

    • Both have great outreach programs where information is provided without having to ask or dig in a proactive way, yet when something is needed, they give it without fanfare
    • Both are engaging at multiple levels, from customer, to financial and investors, to var, to partner, trade groups, to trade and other media, to analysts to social networking and beyond
    • Both are passionate about their companies, cultures, products, solutions and customers
    • Both can walk the talk, however both also like to talk and see the other balk
    • Both lead by example and not afraid to tell you what they think about something
    • Both embrace social media in connection with traditional mediums for communication with people as opposed to a giant megaphone for talking at or spamming people (when will other vendors figure that out?)
    • Both also are willing to hear what you have to say even if they do not agree with it
    • Neither is scared of the other (or at least not in public)
    • Both cause the other to play and execute a stronger game
    • Both are not above throwing a mud ball or fire cracker at the other
    • Both are not above burying the hatchet and getting along when or where needed
    • Both compete vigorously on some fronts, yet partner (publicly or privately) on other fronts
    • Both have been direct focused with some vars and some OEMs
    • Both started somewhere else and are now going and moving to different places, in some ways returning to their roots, or at least making sure those roots are not forgotten
    • Both are synonymous with their core focus products and background
    • One comes from an open systems focus working to prove itself in the enterprise
    • One comes from the enterprise establishing itself in SOHO, SMB and other spaces
    • Both have many solutions, some would say long in the tooth, others would say revolutionary
    • Both are growing via organic growth as well as acquisition and partnering
    • Both have celebrity leaders and team role players to support and back them up
    • Both also have deep benches and technical folks in the trenches to get things done
    • Both have developed leadership along with rank and file employees internally
    • Both have gone outside and brought in leadership and skilled players to expand their employee ranks
    • Both are very much involved with server virtualization (Microsoft and VMware)
    • Both are very much involved in storage virtualization and associated management
    • Both are involved with cloud solutions for enabling public or private storage
    • Both are independent storage vendors not part of a larger server organization
    • Both have interoperability programs with other vendors servers and software and networks
    • Both also get beat up about their pricing models for extensive software feature function portfolios associated with respective storage solutions
    • Both get criticized by customers or the industry as is often the case of market leaders

    What I see EMC needing to do

    • Articulate where their multiple products and services fit and play into their different target market opportunities while worrying less about the color hue of logos or video backgrounds
    • Avoid competing with itself or becoming its own major or main competitor
    • Clarify (public and private) cloud confusion, transitioning it into cloud cash and opportunity
    • Minimize or cut channel contention and confusion internally and across partners
    • Remember where they came from and core competences however avoid a death grip on them
    • Look to the future, leverage lessons learned that helped EMC succeed where others failed
    • EMC needs NetApp as a strong NAS competitor as each plays stronger when against the other. This is like watching world-class athletes, artists or musicians that step up their games or works when paired with another

    What I see NetApp needing to do

    • Doing an acquisition in an adjacent space, perhaps even a reverse merger of sorts, to move up and out into a broader space that complements their core offerings. For example, something outside of the normal comfort zone (arguably Datadomain would have been close to their comfort zone). Likewise, acquiring a software player such as Commvault would be similar to EMC having acquired Legato, Documentum and so forth. That is, NetApp would have to do a series of those. So why not something really big, like a reverse merger or partial acquisition of, say, Symantec's data protection and management group (aka the old Veritas suite including backup, management tools, clustered file server software, volume managers etc).
    • In addition to adjacent acquisitions, opportunistic plays such as the recent Bycast move make sense; however, those then need to be integrated and rolled out, similar to what EMC has done with so many of its purchases.
    • Minimize or cut channel contention and confusion both internal across products and with partners.
    • NetApp started at the lower end with SMB, grew into the SME and now the enterprise space; however, they tried with StorVault and then backed out of that market, leaving it to EMC Iomega, Cisco, HP, Dell and others. Maybe they do not need a low-end play; however, I rather liked the low-end StorVault story as well as where it was going. Oh well, needless to say I ended up buying an EMC Iomega IX4 as StorVault left the market. Hmm, does that mean NetApp should acquire SNAP or Drobo or some other low-end SOHO play? Only if the price is right and there is an existing customer base and channel in place; otherwise it would be a distraction from the core business. BTW, did I mention EMC Legato, oh excuse me, NetWorker, came from the desktop and SMB environment yet grew to the enterprise (yes I know, that is debatable) and is now difficult to put into SOHO environments.
    • Does NetApp need a stronger block storage play, perhaps a 3PAR acquisition? Maybe, perhaps not, depending on whether they are competing for today's market or tomorrow's.
    • Does NetApp need to be acquired? I think they can stay independent; however they need to expand their presence and footprint from a product, partner and customer perspective.
    • NetApp needs a strong NAS competitor in the likes of EMC, as the competition IMHO makes each stronger, which should also play well for customers. Not to mention the back and forth mud ball and fire cracker tossing can be entertaining for some.

    What is your take?

    Are EMC and NetApp two companies on parallel tracks offset by time and perhaps execution?

    Cast your vote and see what others have indicated in the following poll.

    View from VIA Rail Canada taken using Gregs iFlip

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, vSAN and VMware vExpert. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved