EMC DSSD D5 Rack Scale Direct Attached Shared SSD All Flash Array Part I

EMC DSSD D5 Rack Scale Direct Attached Shared SSD All Flash Array Part I

server storage I/O trends

This is the first post in a two-part series pertaining to the EMC DSSD D5 announcement; you can read part two here.

EMC announced today the general availability of their DSSD D5 Shared Direct Attached SSD (DAS) flash storage system (e.g. All Flash Array or AFA), which is a rack-scale solution. If you recall, EMC acquired DSSD back in 2014, which you can read more about here. EMC announced configurations that include 36TB, 72TB and 144TB of raw flash SSD capacity with support for up to 48 dual-ported host client servers.

Via EMC Pulse Blog

What Is DSSD D5

At a high level, EMC DSSD D5 is a PCIe direct attached SSD flash storage solution that enables aggregation of disparate SSD card functionality typically found in separate servers into a shared system without causing aggravation. DSSD D5 helps to alleviate server-side I/O bottlenecks or aggravation issues that can result from the aggregation of workloads or data. Think of DSSD D5 as a shared application server storage I/O accelerator for up to 48 servers to access up to 144TB of raw flash SSD to support various applications that have the need for speed.

Applications that have the need for speed, or that can benefit from spending less time waiting for results (where time is money), can boost productivity and enable high-profitability computing. This includes legacy as well as emerging applications and workloads spanning little data, big data, and big fast structured and unstructured data. From Oracle to SAS to HBase and Hadoop among others, perhaps even Alluxio.

Some examples include:

  • Clusters and scale-out grids
• High Performance Compute (HPC)
  • Parallel file systems
  • Forecasting and image processing
  • Fraud detection and prevention
  • Research and analytics
  • E-commerce and retail
  • Search and advertising
  • Legacy applications
  • Emerging applications
  • Structured database and key-value repositories
  • Unstructured file systems, HDFS and other data
  • Large undefined work sets
  • From batch stream to real-time
  • Reduces run times from days to hours

Where to learn more

Continue reading with the following links about NVMe, flash SSD and EMC DSSD.

  • Part one of this series here and part two here.
  • Performance Redefined! Introducing DSSD D5 Rack-Scale Flash Solution (EMC Pulse Blog)
  • EMC Unveils DSSD D5: A Quantum Leap In Flash Storage (EMC Press Release)
  • EMC Declares 2016 The “Year of All-Flash” For Primary Storage (EMC Press Release)
  • EMC DSSD D5 Rack-Scale Flash (EMC PDF Overview)
  • EMC DSSD and Cloudera Evolve Hadoop (EMC White Paper Overview)
  • Software Aspects of The EMC DSSD D5 Rack-Scale Flash Storage Platform (EMC PDF White Paper)
  • EMC DSSD D5 (EMC PDF Architecture and Product Specification)
  • EMC VFCache respinning SSD and intelligent caching (Part II)
  • EMC To Acquire DSSD, Inc., Extends Flash Storage Leadership
  • Part II: XtremIO, XtremSW and XtremSF EMC flash ssd portfolio redefined
  • XtremIO, XtremSW and XtremSF EMC flash ssd portfolio redefined
  • Learn more about flash SSD here and NVMe here at thenvmeplace.com
What this all means

Today's legacy and emerging applications have the need for speed, and where the applications themselves may not need speed, the users as well as the Internet of Things (IoT) devices that depend upon or feed those applications do need things to move faster. Fast applications need fast software and hardware to get the same amount of work done faster with fewer delays, as well as to process larger amounts of structured and unstructured little data, big data and very fast big data.

Different applications, along with the data infrastructures they rely upon including servers, storage, and I/O hardware and software, need to adapt to various environments; a one-size, one-approach model does not fit all scenarios. What this means is that some applications and data infrastructures will benefit from shared direct attached SSD storage such as rack-scale solutions using EMC DSSD D5, while other applications will benefit from AFA or hybrid storage systems along with other approaches used in various ways.

    Continue reading part two of this series here including how EMC DSSD D5 works and more perspectives.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    Big Files Lots of Little File Processing Benchmarking with Vdbench

    Big Files Lots of Little File Processing Benchmarking with Vdbench


    server storage data infrastructure i/o File Processing Benchmarking with Vdbench

    Updated 2/10/2018

Need to test a server, storage I/O networking, hardware, software, services, cloud, virtual, physical or other environment that is doing some form of file processing, or do you simply want to have some extra workload running in the background for whatever reason? An option is file processing benchmarking with Vdbench.

    I/O performance

    Getting Started


Here's a quick and relatively easy way to do it with Vdbench (free from Oracle). Granted, there are other tools, both for free and for fee, that can do similar things; however, we will leave those for another day and post. Here's the con to this approach: there is no GUI like what you have available with some other tools. Here's the pro to this approach: it's free, flexible and limited only by your creativity, amount of storage space, server memory and I/O capacity.

    If you need a background on Vdbench and benchmarking, check out the series of related posts here (e.g. www.storageio.com/performance).

    Get and Install the Vdbench Bits and Bytes


If you do not already have Vdbench installed, get a copy from the Oracle or SourceForge site (the latter now points to Oracle here).

Vdbench is free; you simply sign up, accept the license, then select the version and download the bits (it is a single, common distribution for all operating systems) as well as the documentation.

Installation, particularly on Windows, is really easy: basically follow the instructions in the documentation by copying the contents of the download folder to a specified directory, set up any environment variables, and make sure that you have Java installed.

Here is a hint and tip for Windows servers: if you get an error message about counters, open a command prompt with Administrator rights and type the command:

    $ lodctr /r


The above command will reset your I/O counters. Note, however, that the command will also overwrite existing counter settings, so only use it if you have to.

Likewise, the *nix install is also easy: copy the files, make sure to copy the applicable *nix shell script (they are in the download folder), and verify Java is installed and working.

You can do a vdbench -t (Windows) or ./vdbench -t (*nix) to verify that it is working.
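For example, a quick sanity check on a Windows server might look like the following; the install path is just a placeholder for wherever you copied the Vdbench bits (on *nix, run ./vdbench -t from the install directory instead).

$ cd C:\vdbench
$ vdbench -t

The -t option runs a brief built-in sample workload against a small temporary file, which is a handy way to confirm that Java and the Vdbench scripts are set up correctly before building your own parameter files.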

    Vdbench File Processing

There are many options with Vdbench as it has a very robust command and scripting language, including the ability to set up for loops among other things (see the sketch below). We are only going to touch the surface here using its file processing capabilities. Likewise, Vdbench can run from a single server accessing multiple storage systems or file systems, as well as running from multiple servers to a single file system. For simplicity, we will stick with the basics in the following examples to exercise a local file system. The number of files and file sizes are limited by server memory and storage space.
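As an illustration of the for loop capability, a run definition (rd) such as the one used later in this post can carry loop overrides like forthreads and forxfersize. The following is only a minimal sketch (not one of the runs shown later in this post) and assumes the forthreads and forxfersize parameters behave as described in the Vdbench documentation for file system workloads, so verify against the docs for your version.

rd=rd_loops,fwd=fwd1,fwdrate=max,format=yes,elapsed=10m,interval=30,forthreads=(16,32,64),forxfersize=(4k,64k,128k)

A single rd like this cycles through each combination of thread count and transfer size, producing a separate set of interval and total results for every pass.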

You can specify the number and depth of directories to put files into for processing. One of the parameters is the anchor point for the file processing; in the following examples S:\SIOTEMP\FS1 is used as the anchor point. Other parameters include the I/O size, percent reads, number of threads, run time and sample interval, as well as the output folder name for the result files. Note that unlike some tools, Vdbench does not create a single file of results, rather a folder with several files including summary, totals, parameters, histograms and CSV among others.


    Simple Vdbench File Processing Commands

    For flexibility and ease of use I put the following three Vdbench commands into a simple text file that is then called with parameters on the command line.
    fsd=fsd1,anchor=!fanchor,depth=!dirdep,width=!dirwid,files=!numfiles,size=!filesize

    fwd=fwd1,fsd=fsd1,rdpct=!filrdpct,xfersize=!fxfersize,fileselect=random,fileio=random,threads=!thrds

    rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=!etime,interval=!itime

    Simple Vdbench script

    # SIO_vdbench_filesystest.txt
    #
    # Example Vdbench script for file processing
    #
    # fanchor = file system place where directories and files will be created
    # dirwid = how wide should the directories be (e.g. how many directories wide)
    # numfiles = how many files per directory
# filesize = size in k, m, g e.g. 16k = 16KBytes
    # fxfersize = file I/O transfer size in kbytes
    # thrds = how many threads or workers
    # etime = how long to run in minutes (m) or hours (h)
    # itime = interval sample time e.g. 30 seconds
    # dirdep = how deep the directory tree
    # filrdpct = percent of reads e.g. 90 = 90 percent reads
# -p processnumber = optionally specify a process number, only needed if running multiple Vdbench instances at the same time, number should be unique
# -o output file that describes what is being done and some config info
    #
    # Sample command line shown for Windows, for *nix add ./
    #
    # The real Vdbench script with command line parameters indicated by !=
    #

    fsd=fsd1,anchor=!fanchor,depth=!dirdep,width=!dirwid,files=!numfiles,size=!filesize

    fwd=fwd1,fsd=fsd1,rdpct=!filrdpct,xfersize=!fxfersize,fileselect=random,fileio=random,threads=!thrds

    rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=!etime,interval=!itime

    Big Files Processing Script


    With the above script file defined, for Big Files I specify a command line such as the following.
    $ vdbench -f SIO_vdbench_filesystest.txt fanchor=S:\SIOTemp\FS1 dirwid=1 numfiles=60 filesize=5G fxfersize=128k thrds=64 etime=10h itime=30 numdir=1 dirdep=1 filrdpct=90 -p 5576 -o SIOWS2012R220_NOFUZE_5Gx60_BigFiles_64TH_STX1200_020116

    Big Files Processing Example Results


    The following is one of the result files from the folder of results created via the above command for Big File processing showing totals.


    Run totals

    21:09:36.001 Starting RD=format_for_rd1

    Feb 01, 2016 .Interval. .ReqstdOps.. ...cpu%... read ....read.... ...write.... ..mb/sec... mb/sec .xfer.. ...mkdir... ...rmdir... ..create... ...open.... ...close... ..delete...
    rate resp total sys pct rate resp rate resp read write total size rate resp rate resp rate resp rate resp rate resp rate resp
    21:23:34.101 avg_2-28 2848.2 2.70 8.8 8.32 0.0 0.0 0.00 2848.2 2.70 0.00 356.0 356.02 131071 0.0 0.00 0.0 0.00 0.1 109176 0.1 0.55 0.1 2006 0.0 0.00

    21:23:35.009 Starting RD=rd1; elapsed=36000; fwdrate=max. For loops: None

    07:23:35.000 avg_2-1200 4939.5 1.62 18.5 17.3 90.0 4445.8 1.79 493.7 0.07 555.7 61.72 617.44 131071 0.0 0.00 0.0 0.00 0.0 0.00 0.1 0.03 0.1 2.95 0.0 0.00


    Lots of Little Files Processing Script


    For lots of little files, the following is used.


    $ vdbench -f SIO_vdbench_filesystest.txt fanchor=S:\SIOTEMP\FS1 dirwid=64 numfiles=25600 filesize=16k fxfersize=1k thrds=64 etime=10h itime=30 dirdep=1 filrdpct=90 -p 5576 -o SIOWS2012R220_NOFUZE_SmallFiles_64TH_STX1200_020116

    Lots of Little Files Processing Example Results


The following is one of the result files from the folder of results created via the above command for lots of little files processing, showing totals.
    Run totals

    09:17:38.001 Starting RD=format_for_rd1

    Feb 02, 2016 .Interval. .ReqstdOps.. ...cpu%... read ....read.... ...write.... ..mb/sec... mb/sec .xfer.. ...mkdir... ...rmdir... ..create... ...open.... ...close... ..delete...
    rate resp total sys pct rate resp rate resp read write total size rate resp rate resp rate resp rate resp rate resp rate resp
    09:19:48.016 avg_2-5 10138 0.14 75.7 64.6 0.0 0.0 0.00 10138 0.14 0.00 158.4 158.42 16384 0.0 0.00 0.0 0.00 10138 0.65 10138 0.43 10138 0.05 0.0 0.00

    09:19:49.000 Starting RD=rd1; elapsed=36000; fwdrate=max. For loops: None

    19:19:49.001 avg_2-1200 113049 0.41 67.0 55.0 90.0 101747 0.19 11302 2.42 99.36 11.04 110.40 1023 0.0 0.00 0.0 0.00 0.0 0.00 7065 0.85 7065 1.60 0.0 0.00
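As an aside, the -p and -o parameters used above make it possible to run the big file and little file workloads at the same time from the same server. The following sketch assumes each command is started from its own command prompt window (or backgrounded with & on *nix), that S:\SIOTEMP\FS2 is a hypothetical second anchor folder, and that each instance gets a unique -p process (port) number along with its own -o output folder so the two runs do not collide.

$ vdbench -f SIO_vdbench_filesystest.txt fanchor=S:\SIOTEMP\FS1 dirwid=1 dirdep=1 numfiles=60 filesize=5G fxfersize=128k thrds=64 etime=10h itime=30 filrdpct=90 -p 5576 -o Concurrent_BigFiles

$ vdbench -f SIO_vdbench_filesystest.txt fanchor=S:\SIOTEMP\FS2 dirwid=64 dirdep=1 numfiles=25600 filesize=16k fxfersize=1k thrds=64 etime=10h itime=30 filrdpct=90 -p 5577 -o Concurrent_SmallFiles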


    Where To Learn More

    View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

The above examples can easily be modified to do different things, particularly if you read the Vdbench documentation on how to set up multi-host, multi-storage system and multiple job stream runs to do different types of processing (a sketch follows below). This means you can benchmark a storage system, server, or converged and hyper-converged platform, or simply put a workload on it as part of other testing. There are even options for handling data footprint reduction such as compression and dedupe.
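As a pointer toward the multi-host capability, host definition (hd) lines can be added at the top of the parameter file so a single Vdbench master drives workers on several servers over ssh. The following is only a minimal sketch using assumed host names, IP addresses, paths and the hd/host parameters described in the Vdbench documentation; adjust and verify against the docs for your version.

# hypothetical two-host file system run driven from a single master
hd=default,vdbench=/opt/vdbench,user=testuser,shell=ssh
hd=host1,system=192.168.1.101
hd=host2,system=192.168.1.102
fsd=fsd1,anchor=/mnt/testfs,depth=1,width=1,files=60,size=5G
fwd=fwd1,fsd=fsd1,host=host*,rdpct=90,xfersize=128k,fileselect=random,fileio=random,threads=16
rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=10m,interval=30

Each host needs Java, the Vdbench files in the vdbench= path, and access to the /mnt/testfs anchor (shared or locally mounted); the master consolidates results from all hosts into a single output folder.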

    Ok, nuff said, for now.

    Gs

    Greg Schulz - Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    NVMe Place NVM Non Volatile Memory Express Resources

    Updated 8/31/19
    NVMe place server Storage I/O data infrastructure trends

Welcome to The NVMe Place, a collection of NVM Non Volatile Memory Express resources. The NVMe Place is about Non Volatile Memory (NVM) Express (NVMe), with industry trends perspectives, tips, tools, techniques, technologies, news and other information.

    Disclaimer

    Please note that this NVMe place resources site is independent of the industry trade and promoters group NVM Express, Inc. (e.g. www.nvmexpress.org). NVM Express, Inc. is the sole owner of the NVM Express specifications and trademarks.

    NVM Express Organization
    Image used with permission of NVM Express, Inc.

    Visit the NVM Express industry promoters site here to learn more about their members, news, events, product information, software driver downloads, and other useful NVMe resources content.

     

    The NVMe Place resources and NVM including SCM, PMEM, Flash

Non Volatile Memory (NVM) including NAND flash, storage class memories (SCM) and persistent memories (PM) are storage memory mediums, while NVM Express (NVMe) is an interface and protocol for accessing NVM. This NVMe resources page is a companion to The SSD Place, which has a broader Non Volatile Memory (NVM) focus including flash among other SSD topics. NVMe is a new server storage I/O access method and protocol for fast access to NVM based storage and memory technologies. NVMe is an alternative to existing block based server storage I/O access protocols such as AHCI/SATA and SCSI/SAS commonly used for accessing Hard Disk Drives (HDD) along with SSDs among other things.

    Server Storage I/O NVMe PCIe SAS SATA AHCI
    Comparing AHCI/SATA, SCSI/SAS and NVMe all of which can coexist to address different needs.

Leveraging the standard PCIe hardware interface, NVMe based devices (that have an NVMe controller) can be accessed via various operating systems (and hypervisors such as VMware ESXi) with either in-the-box drivers or optional third-party device drivers. Devices that support NVMe can be packaged in a 2.5″ drive form factor that uses a converged 8637/8639 connector (e.g. PCIe x4), coexisting with SAS and SATA devices, as well as being add-in card (AIC) PCIe cards supporting x4, x8 and other implementations. Initially, NVMe is being positioned as a back-end (to servers or storage systems) interface for accessing fast flash and other NVM based devices.
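For example, on a recent Linux distribution with in-box NVMe drivers, a quick sanity check that an NVMe device is present and visible might look like the following; the nvme command comes from the optional nvme-cli package, and device names will vary by system.

$ lspci | grep -i "non-volatile memory"   # show PCIe NVMe controllers
$ lsblk | grep nvme                       # NVMe namespaces appear as /dev/nvme0n1 and so on
$ nvme list                               # per-device model, capacity and firmware via nvme-cli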

    NVMe as back-end storage
    NVMe as a “back-end” I/O interface for NVM storage media

    NVMe as front-end server storage I/O interface
    NVMe as a “front-end” interface for servers or storage systems/appliances

NVMe has also been shown to work over low latency, high-speed RDMA based network interfaces including RoCE (RDMA over Converged Ethernet) and InfiniBand (read more here, here and here involving Mangstor, Mellanox and PMC among others). What this means is that, like SCSI based SAS which can be both a back-end drive (HDD, SSD, etc.) access protocol and interface, NVMe can, in addition to being used on the back-end, also be used as a front-end server-to-storage interface, similar to how Fibre Channel SCSI_Protocol (aka FCP), SCSI based iSCSI, and SCSI RDMA Protocol via InfiniBand (among others) are used.
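As a rough illustration of that front-end, fabric-attached use, current Linux nvme-cli releases include fabrics subcommands; the following sketch assumes an NVMe over Fabrics (RDMA) target is already configured on the network, that the nvme-rdma module is available, and that the address and NQN shown are placeholders only.

$ modprobe nvme-rdma
$ nvme discover -t rdma -a 192.168.100.8 -s 4420   # list subsystems exported by the target
$ nvme connect -t rdma -n nqn.2016-06.io.example:subsys1 -a 192.168.100.8 -s 4420
$ lsblk | grep nvme                                # the remote namespace now appears as a local NVMe device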

    NVMe features

    Main features of NVMe include among others:

• Lower latency due to improved drivers and increased queues (and queue sizes)
• Lower CPU usage to handle larger numbers of I/Os (more CPU available for useful work)
• Higher I/O activity rates (IOPs) to boost productivity and unlock the value of fast flash and NVM
    • Bandwidth improvements leveraging various fast PCIe interface and available lanes
    • Dual-pathing of devices like what is available with dual-path SAS devices
    • Unlock the value of more cores per processor socket and software threads (productivity)
    • Various packaging options, deployment scenarios and configuration options
    • Appears as a standard storage device on most operating systems
• Plug-and-play with in-box drivers on many popular operating systems and hypervisors

    Shared external PCIe using NVMe
    NVMe and shared PCIe (e.g. shared PCIe flash DAS)

    NVMe related content and links

    The following are some of my tips, articles, blog posts, presentations and other content, along with material from others pertaining to NVMe. Keep in mind that the question should not be if NVMe is in your future, rather when, where, with what, from whom and how much of it will be used as well as how it will be used.

    • How to Prepare for the NVMe Server Storage I/O Wave (Via Micron.com)
    • Why NVMe Should Be in Your Data Center (Via Micron.com)
    • NVMe U2 (8639) vs. M2 interfaces (Via Gamersnexus)
    • Enmotus FuzeDrive MicroTiering (StorageIO Lab Report)
    • EMC DSSD D5 Rack Scale Direct Attached Shared SSD All Flash Array Part I (Via StorageIOBlog)
    • Part II – EMC DSSD D5 Direct Attached Shared AFA (Via StorageIOBlog)
    • NAND, DRAM, SAS/SCSI & SATA/AHCI: Not Dead, Yet! (Via EnterpriseStorageForum)
    • Non Volatile Memory (NVM), NVMe, Flash Memory Summit and SSD updates (Via StorageIOblog)
    • Microsoft and Intel showcase Storage Spaces Direct with NVM Express at IDF ’15 (Via TechNet)
• NVM Express solutions (Via SuperMicro)
    • Gaining Server Storage I/O Insight into Microsoft Windows Server 2016 (Via StorageIOblog)
    • PMC-Sierra Scales Storage with PCIe, NVMe (Via EEtimes)
    • RoCE updates among other items (Via InfiniBand Trade Association (IBTA) December Newsletter)
    • NVMe: The Golden Ticket for Faster Flash Storage? (Via EnterpriseStorageForum)
    • What should I consider when using SSD cloud? (Via SearchCloudStorage)
    • MSP CMG, Sept. 2014 Presentation (Flash back to reality – Myths and Realities – Flash and SSD Industry trends perspectives plus benchmarking tips)– PDF
    • Selecting Storage: Start With Requirements (Via NetworkComputing)
    • PMC Announces Flashtec NVMe SSD NVMe2106, NVMe2032 Controllers With LDPC (Via TomsITpro)
    • Exclusive: If Intel and Micron’s “Xpoint” is 3D Phase Change Memory, Boy Did They Patent It (Via Dailytech)
    • Intel & Micron 3D XPoint memory — is it just CBRAM hyped up? Curation of various posts (Via Computerworld)
    • How many IOPS can a HDD, HHDD or SSD do (Part I)?
    • How many IOPS can a HDD, HHDD or SSD do with VMware? (Part II)
    • I/O Performance Issues and Impacts on Time-Sensitive Applications (Via CMG)
    • Via EnterpriseStorageForum: 5 Hot Storage Technologies to Watch
    • Via EnterpriseStorageForum: 10-Year Review of Data Storage

Non-Volatile Memory (NVM) Express (NVMe) continues to evolve as a technology for enabling and improving server storage I/O for NVM including NAND flash SSD storage. NVMe streamlines performance, enabling more work to be done (e.g. IOPs) and more data to be moved (bandwidth) at lower response times while using less CPU.

    NVMe and SATA flash SSD performance

The above figure is a quick look comparing NAND flash SSD being accessed via SATA III (6Gbps) on the left and NVMe (x4) on the right. As with any server storage I/O performance comparison, there are many variables, so take the results with a grain of salt. While IOPs and bandwidth are often discussed, keep in mind that the new protocol, drivers and device controllers with NVMe streamline I/O so that less CPU is needed.

    Additional NVMe Resources

    Also check out the Server StorageIO companion micro sites landing pages including thessdplace.com (SSD focus), data protection diaries (backup, BC/DR/HA and related topics), cloud and object storage, and server storage I/O performance and benchmarking here.

If you are into the real bits and bytes details, such as device driver level content, check out the Linux NVMe reflector forum. The linux-nvme list is a good source if you are a developer and want to stay up on what is happening in and around device drivers and associated topics.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    Disclaimer

    Disclaimer: Please note that this site is independent of the industry trade and promoters group NVM Express, Inc. (e.g. www.nvmexpress.org). NVM Express, Inc. is the sole owner of the NVM Express specifications and trademarks. Check out the NVM Express industry promoters site here to learn more about their members, news, events, product information, software driver downloads, and other useful NVMe resources content.

    NVM Express Organization
    Image used with permission of NVM Express, Inc.

    Wrap Up

    Watch for updates with more content, links and NVMe resources to be added here soon.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Dude, Dell is Getting (Buying) an EMC and VMware Deal

    Storage I/O trends

    Dude, Dell is Getting (Buying) an EMC and VMware Deal

    Some of you might remember the marketing campaign "Dude you’re getting a Dell" to show somebody buying a Dell computer.

Today, Dell (as in Michael Dell and his corporation) along with investment partner Silver Lake announced a $67B USD deal to acquire EMC along with its stake in VMware, which will stay an independently traded public company. Dell brings strength in the small and medium/mid market and as a supplier to cloud and other managed service providers, plus Dell financing, which combines with EMC's strength and enterprise portfolio. This deal also reunites the two parties, who previously had a strong storage joint venture with Dell OEMing EMC storage for about a decade before going their separate ways in the late 2000s.

    Dell buying EMC

    Key points

    • Privately held Dell is acquiring EMC and its various business units
• VMware will stay an independent public company with Dell as major owner
• EMC, based in Hopkinton, Massachusetts, will be headquarters for the new Dell Systems Business Unit
• The Dell Systems Business Unit will also be headquarters for Dell servers
• The new Dell Systems Business Unit, joined with EMC, is expected to be a $30B USD plus sized entity
• Dell sees revenue synergies of about 3x vs. 1x cost synergies for the combined entities
• Dell sees the ability to generate cash to service debt coming from increased revenue growth
    • EMC global support, professional services, consulting to complement Dell capabilities
    • Ability for both partners to leverage their best of strengths from SMB to enterprise to cloud

    What this means big picture

Basically EMC has gone private under the Dell umbrella while VMware remains an independent publicly traded company, granted with EMC (and now Dell) being the primary shareholder of that entity. Dell went private back in 2013 with its founder Michael Dell along with Silver Lake Partners as key investors. EMC has been under pressure from activist investors to sell off its investment in VMware to increase shareholder value and was rumored to have been in acquisition discussions with other organizations such as HP. Now EMC (e.g. the non-VMware part) is effectively a privately held company as the Dell Systems Business Unit, to be initially headquartered in Hopkinton, Massachusetts (EMC headquarters), while Dell corporate headquarters will remain in Austin, Texas.

The server business will be based in Hopkinton and will be targeted at around a $30B USD business. It is ironic that Massachusetts used to be a focus for server vendors such as DEC (acquired by Compaq and then HP), Wang and DG (acquired by EMC) among others. This transaction puts Massachusetts back on the map, as the Dell Systems Business Unit will also now be home to Dell servers. As of the announcement, there is an expectation that the Hopkinton headquarters will grow vs. shrink. Granted, some consolidation can be expected.

    Some questions that exist (among many others)

    What about Pivotal?

One of the questions I have is that during the announcement discussions, not much if anything was said about Pivotal and its future role, or how it will be folded in, set up as a tracking stock, or handled via similar activity. Also something to keep in mind as food for thought, or speculation: GE is an investor in Pivotal, and GE has made noise about becoming a more prominent player in software, just saying. In the meantime, let's wait and see what happens with Pivotal.

    What about Lenovo relationship?

After the last Dell breakup, EMC established a partnership and initiative with Lenovo to jointly produce servers that had previously been sourced from Dell or others, as well as EMC moving its Iomega SMB storage business into the Lenovo initiative. Note that about a year ago Lenovo bought the former IBM x86 server business. What will become of that partnership for servers, as well as for Iomega, moving forward?

    How will product rationalization occur?

There is some product overlap in the storage business, as well as in backup/data protection among some other areas. However, looking at the bigger picture, there is not much if any overlap. Where there is overlap, one near-term approach that might (this is speculation) occur is to segment potentially competing products into the Enterprise and Systems business vs. SMB or entry-level. This could occur for storage products such as Dell Compellent, Exanet based Fluid NAS, EqualLogic and MD (OEM from NetApp) vs. those from EMC such as VMAX, VNX, Isilon, XtremIO and Data Domain among others. Likewise, there will need to be some rationalization for backup and data protection products such as EMC NetWorker and Avamar vs. Dell AppAssure, vRanger and NetVault, as well as their OEM partners Commvault and Symantec among others.

    VCE gets leveraged as part of go to market?

EMC took over ownership of VCE in 2014 with Cisco still involved; in fact, if a product has Vblock in its name, it will use Cisco servers and networking. However, look for other VCE solutions to appear as well, such as the VxRack announced earlier this year. I would expect new converged infrastructure (CI), hyper-converged infrastructure (HCI) and Cluster-in-Box (CiB) solutions from VCE that would include Dell servers in the future, leveraging different software (VMware among others).

    How will Dell OEM business drive things?

    Dell has had a server OEM business that has supplied technology to others, including in the past EMC. This business moves in under the new System Business Unit as part of what is or was EMC. Beyond servers, it will be interesting to see how that business unit can also move other technologies into the OEM or high volume market including to cloud and managed service providers who buy in bulk.

Will this cause Cisco, an EMC partner, to buy another storage vendor?

    Maybe, that depends on what Cisco wants to do moving forward in addition to remaining a partner with EMC. Of course, if Cisco were to go storage shopping, who would that be? Perhaps DDN, Nimble or NetApp?

    With Michael Dell now having done one of, if not the largest tech deals in history, how will Larry Ellison of Oracle react?

It has been said that the difference between God and Larry Ellison is that God is not interested in becoming Larry Ellison. However, is Larry Ellison still interested in industry bragging rights, meaning will he want to do a big blockbuster deal involving Oracle to get some headlines, or will he enjoy his semi-retirement, perhaps buying a bankrupt country or something?

    Where to read, watch and learn more

    Storage I/O trends

    What this all means and wrap up

    Certainly there are many more questions about server, storage, I/O networking, cloud, virtual, software, hardware, security and management tools along with service and support that will get addressed in follow-up discussions.

Near term, the combined entity needs to get out front and convince customers, partners and prospects that EMC is not going away, and that Dell is not going to get in the way of existing business. The two need to run as is, pursuing and closing each other's respective business, making sure that competitors do not create barriers to deals closing or disrupt revenue. In other words, neither Dell nor EMC can afford to foster a revenue prevention department now, nor can either afford to allow any other competitor to become a revenue prevention department as a service (e.g. costing either EMC or Dell revenue).

    Overall this deal has some interesting upside synergies and potential, granted, we will need to see how things unfold.

    Disclosure: Dell and EMC have been Server StorageIO clients, and StorageIO uses Dell as well as Lenovo servers among others technologies including VMware.

    Ok, nuff said, for now…

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    Fall 2015 Server Storage I/O Cloud Virtual Seminars Going Dutch

    Storage I/O trends

    Fall 2015 Server Storage I/O Cloud Virtual Seminars Going Dutch

    StorageIO events, object storage, ssd cloud, virtualization and big data

It's that time of the year again when the fall 2015 events and activities are underway, which also includes a week of sessions in Holland October 13-16. I will be participating in four days of workshop seminars organized by Brouwer Storage Consultancy in Nijkerk covering server storage decision-making, converged and bulk storage options, software defined storage management, data center infrastructure management and data protection, along with industry trends and update sessions.

Brouwer Storage Consultancy

    October 13th: Symposium – Software Defined Storage Management

    09:00 -17:00

    DOWNLOAD FLYER (Dutch)

    REGISTER HERE

    FREE Session! Access for end-users only, through invitation or contacting BSC.

    Event Location: Hotel & Gasterij De Roode Schuur, Oude Barneveldseweg 98, 3862PS Nijkerk – www.deroodeschuur.nl

    Brouwer Storage Making Decision Seminar Workshops

    October 14th: Server Storage I/O Fundamental Trends V2.015 – What’s New, What’s the buzz, what you need to know about.

    09:00 -17:00

    DOWNLOAD Abstract/Agenda

    REGISTER HERE

Event Location: Golden Tulip Ampt van Nijkerk Hotel, Berencamperweg 4, 3861MC, Nijkerk – www.goldentulipamptvannijkerk.com/en

    Brouwer Storage Making Decision Seminar Workshops

    October 15th: Symposium – Data Center Infrastructure Management

    09:00 -17:00

    DOWNLOAD Abstract / Agenda

    REGISTER Here

    FREE Session! Access, through invitation or contacting BSC.

    Event Location: Hotel & Gasterij De Roode Schuur, Oude Barneveldseweg 98, 3862PS Nijkerk – www.deroodeschuur.nl

    Going Dutch Storage Seminars

    October 16th: "Converged Day" Server and Storage Decision making – How do you want or need your storage packaged?

    09:00 -17:00

    DOWNLOAD Abstract / Agenda

    REGISTER HERE

Event Location: Golden Tulip Ampt van Nijkerk Hotel, Berencamperweg 4, 3861MC, Nijkerk – www.goldentulipamptvannijkerk.com/en

    Going Dutch Server Storage I/O

Brouwer Storage Consultancy

    Learn more at the Brouwer Storage Consultancy site here, or getting in touch with them to reserve your seat at these events.

    Office: Olevoortseweg 43
    3861 MH Nijkerk
    The Netherlands

    T +31-33-246-6825
    C +31-652-601-309
    F +31-33-245-8956
    E info@brouwerconsultancy.com

    Where to read, watch and learn more

    Watch for more events, seminars, live video, webinars and virtual trade shows by visiting the StorageIO events page.

    StorageIO events, object storage, ssd cloud, virtualization and big data

    What this all means and wrap up

Smart server and storage decisions for cloud, virtual and physical or legacy environments start with being informed, knowing your requirements and options, and having insight into industry trends that are applicable to your environment. These sessions are vendor and technology neutral, held off-site at hotel venues in Nijkerk, Netherlands, so no need to worry about sales teams coming in to sell you something during the breaks or lunch, which are provided. There are also opportunities throughout the workshops for engagement, discussion and interaction with other attendees, including your peers from various commercial, government and service provider organizations among others. Hope to see you in Nijkerk to discuss server storage I/O, cloud, virtual and other industry trends, technologies and techniques in October.

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    August Server StorageIO Update Newsletter – NVM and Flash SSD Focus

    Volume 15, Issue VIII

Hello and welcome to this August 2015 Server StorageIO update newsletter. Summer is wrapping up here in the northern hemisphere, which means the fall conference season has started, holidays are in progress, and back-to-school time is getting under way. I have been spending my summer working on various things involving servers, storage, I/O networking hardware, software and services from cloud to containers, virtual and physical. This includes OpenStack, VMware vCloud Air, AWS, Microsoft Azure and GCS among others, as well as new versions of Microsoft Windows and Servers, Non Volatile Memory (NVM) including flash SSD, NVM Express (NVMe), databases, data protection, software defined, cache, micro-tiering and benchmarking using various tools among other things (some are still under wraps).

Enjoy this edition of the Server StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts, along with in-the-news commentary appearing soon.

    Cheers GS

    In This Issue

  • Feature Topic
  • Industry Trends News
  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Videos and Podcasts
  • Events and Webinars
  • Recommended Reading List
  • Industry Activity Trends
  • Server StorageIO Lab reports
  • New and Old Vendor Update
  • Resources and Links
Feature Topic – Non Volatile Memory including NAND flash SSD

    Via Intel History of Memory
    Via Intel: Click above image to view history of memory

This month's feature topic theme is Non Volatile Memory (NVM), which includes technologies such as NAND flash commonly used in Solid State Devices (SSDs) storage today, as well as in USB thumb drives, mobile and hand-held devices among many other uses. NVM spans servers, storage, I/O devices along with mobile and handheld among many other technologies. In addition to NAND flash, other forms of NVM include Non Volatile Random Access Memory (NVRAM) and Read Only Memory (ROM), along with some emerging new technologies including the recently announced Intel and Micron 3D XPoint among others.

    • NVMe: The Golden Ticket for Faster Flash Storage? (Via EnterpriseStorageForum)
    • What should I consider when using SSD cloud? (Via SearchCloudStorage)
• MSP CMG, Sept. 2014 Presentation (Flash back to reality – Myths and Realities – Flash and SSD Industry trends perspectives plus benchmarking tips) – PDF
    • Selecting Storage: Start With Requirements (Via NetworkComputing)
    • Spot The Newest & Best Server Trends (Via Processor)
    • Market ripe for embedded flash storage as prices drop (Via Powermore (Dell))

    Continue reading more about NVM, NVMe, NAND flash, SSD Server and storage I/O related topics at www.thessdplace.com as well as about I/O performance, monitoring and benchmarking tools at www.storageperformance.us.

     

    StorageIOblog Posts

    Recent and popular Server StorageIOblog posts include:

    View other recent as well as past blog posts here

    Server Storage I/O Industry Activity Trends (Cloud, Virtual, Physical)

    StorageIO news (image licensed for use from Shutterstock by StorageIO)

    • PMC Announces NVMe SSD Controllers (Via TomsITpro)
    • New SATA SSD powers elastic cloud agility for CSPs (Via Cbronline)
    • Toshiba Solid-State Drive Family Features PCIe Technology (Via Eweek)
    • SanDisk aims CloudSpeed Ultra SSD at cloud providers (Via ITwire)
    • Everspin & Aupera reveal MRAM Module M.2 Form Factor (Via BusinessWire)
    • PMC-Sierra Scales Storage with PCIe, NVMe (Via EEtimes)
    • Seagate Grows Its Nytro Enterprise Flash Storage Line (Via InfoStor)
    • New SAS Solid State Drive From Seagate Micron Alliance (Via Seagate)
    • Samsung ups the SSD ante with faster, higher capacity drives (Via ITworld)

    View other recent news and industry trends here

    StorageIO Commentary in the news

    StorageIO news (image licensed for use from Shutterstock by StorageIO)
Recent Server StorageIO commentary and industry trends perspectives about news, activities, tips and announcements.

    • Processor: Comments on Spot The Newest & Best Server Trends
    • Processor: Comments on A Snapshot Strategy For Backups & Data Recovery
    • EnterpriseStorageForum: Comments on Defining the Future of DR Storage
    • EnterpriseStorageForum: Comments on Top Ten Tips for DR as a Service
    • EnterpriseStorageForum: Comments on NVMe: Golden Ticket for Faster Storage

    View more Server, Storage and I/O hardware as well as software trends comments here

    Vendors you may not have heard of

    Various vendors (and service providers) you may not know or heard about recently.

    • Scala – Scale out storage management software tools
    • Reduxio – Enterprise hybrid storage with data services
    • Jam TreeSize Pro – Data discovery and storage resource analysis and reporting

    Check out more vendors you may know, have heard of, or that are perhaps new on the Server StorageIO Industry Links page here (over 1,000 entries and growing).

    StorageIO Tips and Articles

    Recent Server StorageIO articles appearing in different venues include:

    • IronMountain:  Information Lifecycle Management: Which Data Types Have Value?
      It’s important to keep in mind that on a fundamental level, there are three types of data: information that has value, information that does not have value and information that has unknown value. Data value can be measured along performance, availability, capacity and economic attributes, which define how the data gets managed across different tiers of storage. In general data can have value, unknown value or no value. Read more here.
    • EnterpriseStorageForum:  Is Future Storage Converging Around Hyper-Converged?
Depending on who you talk to or listen to, hyper-converged storage is either the future of storage, or it is a hyped niche market that is not for everybody, particularly not larger environments. How converged is the hyper-converged market? There are many environments that can leverage CI along with HCI, CiB or other bundled solutions. Granted, not all of those environments will converge around the same CI, CiB and HCI or pod solution bundles, as everything is not the same in most IT environments and data centers. Not all markets, environments or solutions are the same. Read more here.

    Check out these resources and links technology, techniques, trends as well as tools. View more tips and articles here

    StorageIO Videos and Podcasts

StorageIO podcasts are also available at StorageIO.tv

    StorageIO Webinars and Industry Events

    Server Storage I/O Workshop Seminars
    Nijkerk Netherlands October 13-16 2015

    VMworld August 30-September 3 2015

    See additional webinars and other activities on the Server StorageIO Events page here.

    From StorageIO Labs

    Research, Reviews and Reports

    Enmotus FuzeDrive (Server based Micro-Tiering)
    Enmotus FuzeDrive
• Micro-tiering of reads and writes
    • FuzeDrive for transparent tiering
    • Dynamic tiering with selectable options
    • Monitoring and diagnostics tools
    • Transparent to operating systems
    • Hardware transparent (HDD and SSD)
    • Server I/O interface agnostic
    • Optional RAM cache and file pinning
    • Maximize NVM flash SSD investment
• Complement other SDS solutions
    • Use for servers or workstations

Enmotus FuzeDrive provides micro-tiering that boosts performance (reads and writes) of storage attached to physical bare metal servers, virtual and cloud instances, including Windows and Linux operating systems, across various applications. In the simple example above, five separate SQL Server databases (260GB each) were placed on a single 6TB HDD. A TPCC workload was run concurrently against all databases with various numbers of users. One workload used a single 6TB HDD (blue) while the other used a FuzeDrive (green) comprised of a 6TB HDD and a 400GB SSD, showing basic micro-tiering improvements.

    View other StorageIO lab review reports here

    Server StorageIO Recommended Reading List

    The following are various recommended reading including books, blogs and videos. If you have not done so recently, also check out the Intel Recommended Reading List (here) where you will also find a couple of my books.

Get What's Yours via Amazon.com
While not a technology book, you do not have to be at or near retirement age to be planning for retirement. Some of you may already be at or near retirement age; for others, it's time to start planning or refining your plans. A friend recommended this book and I'm recommending it to others. It's pretty straightforward and you might be surprised how much money people may be leaving on the table! Check it out here at Amazon.com.

    Server StorageIO Industry Resources and Links

    Check out these useful links and pages:

    storageio.com/links
    objectstoragecenter.com
    storageioblog.com/data-protection-diaries-main/
    storageperformance.us
thenvmeplace.com
    thessdplace.com
    storageio.com/raid
    storageio.com/ssd

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Non Volatile Memory (NVM), NVMe, Flash Memory Summit and SSD updates

    Storage I/O trends

    Non Volatile Memory (NVM), NVMe, Flash Memory Summit and SSD updates

I attended the Flash Memory Summit in Santa Clara, CA last week and, not surprisingly, there were many announcements about Non-Volatile Memory (NVM) along with related enabling technologies. Some of these announcements were component based, intended for original equipment manufacturers (OEMs) ranging from startup to established, systems integrators (SIs) and value added resellers (VARs), while others were more customer solution focused. From a customer solution focus, some of the technologies were consumer oriented while others were for business and some for cloud scale service providers.

    Recent NVM, NVMe and Flash SSD news

    A sampling of some recent NVM, NVMe and Flash related news includes among others:

    • PMC Announces Flashtec NVMe SSD NVMe2106, NVMe2032 Controllers (Via TomsITpro)
    • New SATA SSD powers elastic cloud agility for CSPs (Via Cbronline)
    • Toshiba Solid-State Drive Family Features PCIe Technology (Via Eweek)
    • SanDisk aims CloudSpeed Ultra SSD at cloud providers (Via ITwire)
    • Everspin & Aupera show all-MRAM Storage Module in M.2 Form Factor (Via BusinessWire)
    • Intel and Micron unveil new 3D XPoint Non Volatile Memory (NVM) for servers and storage (part I, part II and part III)
    • PMC-Sierra Scales Storage with PCIe, NVMe (Via EEtimes)
    • Seagate Grows Its Nytro Enterprise Flash Storage Line (Via InfoStor)
    • New SAS Solid State Drive First Product From Seagate Micron Alliance (Via Seagate)
    • Wow, Samsung’s New 16 Terabyte SSD Is the World’s Largest Hard Drive (Via Gizmodo)
    • Samsung ups the SSD ante with faster, higher capacity drives (Via ITworld)

    NVMe primer

    Via Intel History of Memory
    Via Intel: Click above image to view history of memory via Intel site

NVM includes technologies such as NAND flash commonly used in Solid State Devices (SSDs) storage today, as well as in USB thumb drives, mobile and hand-held devices among many other uses. NVM spans servers, storage, I/O devices along with mobile and handheld among many other technologies. In addition to NAND flash, other forms of NVM include Non Volatile Random Access Memory (NVRAM) and Read Only Memory (ROM), along with some emerging new technologies including the recently announced Intel and Micron 3D XPoint among others.

    Server Storage I/O access and NVM
    Server Storage I/O memory (and storage) hierarchy

Keep in mind that memory is storage and storage is persistent memory, and that there are different classes, categories and tiers of memory and storage as shown above to meet various performance, availability, capacity and economic requirements. Besides NVM ranging from flash to NVRAM to the emerging 3D XPoint among others, another popular topic that is gaining momentum is NVM Express (NVMe). NVMe (more material here at www.thenvmeplace.com) is a new server storage I/O access method and protocol for fast access to NVM based products. NVMe is an alternative to existing block based server storage I/O access protocols such as AHCI/SATA and SCSI/SAS commonly used for accessing Hard Disk Drives (HDD) along with SSDs among other things.

    Server Storage I/O NVMe PCIe SAS SATA AHCI
    Comparing AHCI/SATA, SCSI/SAS and NVMe all of which can coexist to address different needs.

Leveraging the common PCIe hardware interface, NVMe based devices (that have an NVMe controller) can be accessed via various operating systems (and hypervisors such as VMware ESXi) with either in-the-box drivers or optional third-party device drivers. Devices that support NVMe can be packaged in a 2.5" drive form factor that uses a converged 8637/8639 connector (e.g. PCIe x4), coexisting with SAS and SATA devices, as well as being add-in card (AIC) PCIe cards supporting x4, x8 and other implementations. Initially, NVMe is being positioned as a back-end (to servers or storage systems) interface for accessing fast flash and other NVM based devices.

    NVMe as back-end storage
    NVMe as a "back-end" I/O interface in a server or storage system accessing NVM storage/media devices

    NVMe as front-end server storage I/O interface
    NVMe as a “front-end” interface for servers (or storage systems/appliances) to use NVMe based storage systems

NVMe has also been shown to work over low latency, high-speed RDMA based network interfaces including RoCE (RDMA over Converged Ethernet) and InfiniBand (read more here, here and here involving Mangstor, Mellanox and PMC among others). What this means is that, like SCSI based SAS which can be both a back-end drive (HDD, SSD, etc.) access protocol and interface, NVMe can, in addition to being used on the back-end, also be used as a front-end server-to-storage interface, similar to how Fibre Channel SCSI_Protocol (aka FCP), SCSI based iSCSI, and SCSI RDMA Protocol via InfiniBand (among others) are used.

    Shared external PCIe using NVMe
    NVMe and shared PCIe

    NVMe features

    Main features of NVMe include among others:

• Lower latency due to improved drivers and increased queues (and queue sizes)
• Lower CPU usage to handle larger numbers of I/Os (more CPU available for useful work)
• Higher I/O activity rates (IOPs) to boost productivity and unlock the value of fast flash and NVM
    • Bandwidth improvements leveraging various fast PCIe interface and available lanes
    • Dual-pathing of devices like what is available with dual-path SAS devices
    • Unlock the value of more cores per processor socket and software threads (productivity)
    • Various packaging options, deployment scenarios and configuration options
    • Appears as a standard storage device on most operating systems
• Plug-and-play with in-box drivers on many popular operating systems and hypervisors

    Watch for more about NVMe as it continues to gain in both industry adoption and deployment as well as customer adoption and deployment.

    Where to read, watch and learn more

    • NVMe: The Golden Ticket for Faster Flash Storage? (Via EnterpriseStorageForum)
    • What should I consider when using SSD cloud? (Via SearchCloudStorage)
    • MSP CMG, September 2014 Presentation (Flash back to reality – Myths and Realities Flash and SSD Industry trends perspectives plus benchmarking tips) – PDF
    • Selecting Storage: Start With Requirements (Via NetworkComputing)
    • Spot The Newest & Best Server Trends (Via Processor)
    • Intel and Micron unveil new 3D XPoint Non Volatile Memory (NVM) for servers and storage (part I, part II and part III)
    • Market ripe for embedded flash storage as prices drop (Via Powermore (Dell))
    • Continue reading more about NVM, NVMe, NAND flash, SSD Server and storage I/O related topics at www.thessdplace.com as well as about I/O performance, monitoring and benchmarking tools at www.storageperformance.us.

    Storage I/O trends

    What this all means and wrap up

The question is not if NVM is in your future, it is! Instead the questions are: what type of NVM, including NAND flash among other mediums, will be deployed where; using what type of packaging or solutions (drives, cards, systems, appliances, cloud); for what role (storage, primary memory, persistent cache); along with how much, among others. For some environments the solution is already, or will be, All NVM Arrays (ANA), All Flash Arrays (AFA) or All SSD Arrays (ASA), while for others the home run will be hybrid based solutions that work for you, fitting in and adapting to your environment as it changes.

Also keep in mind that a little bit of fast memory, including NVM based flash among others, in the right place can have a big benefit. My experience using flash enabled NVMe devices on Windows and Linux systems is that you can see lower response times at higher IOPs, along with lower CPU consumption, particularly when compared to 6Gbps SATA. Likewise, bandwidth can easily be pushed to the limits of the NVMe device as well as the PCIe interface being used, such as x4 or x8, depending on implementation. That is also a warning and something to watch out for when comparing apples to oranges: while NVMe uses PCIe, understand when looking at different results whether those are for x4, x8 or faster PCIe, as the mere presence of PCIe does not mean you are running at full potential.
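One way to check that on Linux is to compare the negotiated PCIe link (LnkSta) against what the NVMe controller is capable of (LnkCap); a hedged sketch follows, where the PCI address is a placeholder you would take from the first command's output.

$ lspci | grep -i "non-volatile memory"     # find the NVMe controller address, e.g. 02:00.0
$ sudo lspci -vv -s 02:00.0 | grep -i lnk   # LnkCap = supported speed/width, LnkSta = negotiated speed/width

If LnkSta shows a narrower width or lower speed than LnkCap, the device is not running at its full potential, which is exactly the apples-to-oranges trap mentioned above.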

Keep an eye on NVMe as a new high-speed, low-latency server storage I/O access protocol for unlocking the full performance capabilities of fast NVM based storage as well as leveraging the multiple cores in today's fast processors. Does this mean AHCI/SATA or SCSI/SAS are now dead? Some will claim that; however, at least near-term for the next few years (if not longer), those interfaces will continue to be used where they make sense, as well as where they can save dollars, specifically for cost sensitive, high-capacity environments that do not need the full performance of NVMe just yet.

As for the Flash Memory Summit event in Santa Clara, it was a good day with time well spent in briefings, meetings, demos and ad hoc discussions on the expo floor.

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Supermicro CSE-M14TQC Use your media bay to add 12 Gbps SAS SSD drives to your server

    Storage I/O trends

    Supermicro CSE-M14TQC Use your media bay to add 12 Gbps SAS SSD drives to your server

Do you have a computer server, workstation or mini-tower PC that needs to have more 2.5" form factor hard disk drives (HDD), solid state devices (SSD) or hybrid flash drives added, yet has no expansion space?

    Do you also want or need the HDD or SSD drive expansion slots to be hot swappable, 6 Gbps SATA3 along with up to 12 Gbps SAS devices?

Do you have an available 5.25" media bay slot (e.g. where you can add an optional CD or DVD drive), or can you remove your existing CD or DVD drive and rely on USB for software loading?

    Do you need to carry out the above without swapping out your existing server or workstation on a reasonable budget, say around $100 USD plus tax, handling, shipping (your prices may vary)?

If you need to implement the above, then here is a possible solution, or in my case, a real solution.

    Via StorageIOblog Supermicro 4 x 2.5 12Gbps SAS enclosure CSE-M14TQC
    Supermicro CSE-M14TQC with hot swap canister before installing in one of my servers

In the past I have used a solution from StarTech that supports up to 4 x 2.5" 6 Gbps SAS and SATA drives in a 5.25" media bay form factor, installing these in my various HP, Dell and Lenovo servers to increase the number of internal storage bays (slots).

    Via Amazon.com StarTech SAS and SATA expansion
    Via Amazon.com StarTech 4 x 2.5" SAS and SATA internal enclosure

I still use the StarTech device shown above (read earlier reviews and experiences here, here and here) in some of my servers, and it continues to be great for 6Gbps SAS and SATA 2.5" HDDs and SSDs. However for 12 Gbps SAS devices, I have used other approaches including external 12 Gbps SAS enclosures.

Recently while talking with the folks over at Servers Direct, I mentioned how I was using the StarTech 4 x 2.5" 6Gbps SAS/SATA media bay enclosure as a means of boosting the number of internal drives that could be put into some smaller servers. The Servers Direct folks told me about the Supermicro CSE-M14TQC which, after doing some research, I decided to buy to complement the StarTech 6Gbps enclosures, as well as external 12 Gbps SAS enclosures and other internal options.

    What is the Supermicro CSE-M14TQC?

The CSE-M14TQC is a 5.25" form factor enclosure that enables four (4) 2.5" hot swappable (if your adapter and OS support hot swap) 12 Gbps SAS or 6 Gbps SATA devices (HDD and SSD) to fit into the media bay slot normally used by CD/DVD devices in servers or workstations. There is a single Molex male power connector on the rear of the enclosure that can be used to attach to your server's available power using applicable connector adapters. In addition there are four separate drive connectors (e.g. SATA type connectors) that support up to 12 Gbps SAS per drive, which you can attach to your server's motherboard (note SAS devices need a SAS controller), HBA or RAID adapter internal ports.

Cooling is provided via a rear mounted 12,500 RPM 16 cubic feet per minute fan. Each of the four drives is hot swappable (requires operating system or hypervisor support) and is contained in a small canister (provided with the enclosure). Drives easily mount to the canister via screws that are also supplied as part of the enclosure kit. There is also a drive activity and failure notification LED for the devices. If you do not have any available SAS or SATA ports on your server's motherboard, you can use an available PCIe slot and add an HBA or RAID card for attaching the CSE-M14TQC drives. For example, a 12 Gbps SAS (6 Gbps SATA) Avago/LSI RAID card, or a 6 Gbps SAS/SATA RAID card.

    Via Supermicro CSE-M14TQC rear details (4 x SATA and 1 Molex power connector)

    Via StorageIOblog Supermicro 4 x 2.5 rear view CSE-M14TQC 12Gbps SAS enclosure
CSE-M14TQC rear view before installation

    Via StorageIOblog Supermicro CSE-M14TQC 12Gbps SAS enclosure cabling
    CSE-M14TQC ready for installation with 4 x SATA (12 Gbps SAS) drive connectors and Molex power connector

    Tip: In the case of the Lenovo TS140 that I initially installed the CSE-M14TQC into, there is not a lot of space for installing the drive connectors or Molex power connector to the enclosure. Instead, attach the cables to the CSE-M14TQC as shown above before installing the enclosure into the media bay slot. Simply attach the connectors as shown and feed them through the media bay opening as you install the CSE-M14TQC enclosure. Then attach the drive connectors to your HBA, RAID card or server motherboard and the power connector to your power source inside the server.

Note and disclaimer: pay attention to your server manufacturer's power loading and specifications along with how much power will be used by the HDDs or SSDs to be installed, to avoid electrical power or fire issues due to overloading!

    Via StorageIOblog Supermicro CSE-M14TQC enclosure Lenovo TS140
    CSE-M14TQC installed into Lenovo TS140 empty media bay

    Via StorageIOblog Supermicro CSE-M14TQC drive enclosure Lenovo TS140

CSE-M14TQC with front face plate installed on Lenovo TS140

    Where to read, watch and learn more

    Storage I/O trends

    What this all means and wrap up

If you have a server that simply needs some extra storage capacity by adding some 2.5" HDDs, or a performance boost with fast SSDs, yet does not have any more internal drive slots or expansion bays, leverage your media bay. This applies to smaller environments where you might have one or two servers, as well as to environments where you want or need to create a scale-out software defined storage or hyper-converged platform using your own hardware. Another option is that if you have a lab or test environment for VMware vSphere ESXi, Windows, Linux, OpenStack or other things, this can be a cost-effective approach to adding both storage space capacity and performance while leveraging newer 12Gbps SAS technologies.

For example, create a VMware VSAN cluster using smaller servers such as the Lenovo TS140 or equivalent, where you can install a couple of 6TB or 8TB higher capacity 3.5" drives in the internal drive bays, then add a couple of 12 Gbps SAS SSDs and a couple of 2.5" 2TB (or larger) HDDs along with a RAID card and a high-speed networking card. If VMware VSAN is not your thing, how about setting up a Windows Server 2012 R2 failover cluster including Scale Out File Server (SOFS) with Hyper-V, or perhaps OpenStack or one of many other virtual storage appliances (VSA) or software defined storage, networking or other solutions? Perhaps you need to deploy more storage for a big data Hadoop based analytics system, or a cloud or object storage solution? On the other hand, if you simply need to add some storage to your storage, media, gaming or general purpose server, the CSE-M14TQC can be an option along with other external solutions.

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Breaking the VMware ESXi 5.5 ACPI boot loop on Lenovo TD350

    Storage I/O trends

    Breaking the VMware ESXi 5.5 ACPI boot loop on Lenovo TD350

Do you have a Lenovo TD350 (or for that matter many other servers) that, when trying to load or run VMware vSphere ESXi 5.5 U2 (or other versions), runs into a boot loop at the "Initializing ACPI" point?

    Lenovo TD350 server

    VMware ACPI boot loop

The symptoms are that you see ESXi start its boot process, loading drivers and modules (e.g. black screen), then you see the yellow boot screen with Timer and Scheduler initialized, and at the "Initializing ACPI" point, ka boom, a boot loop starts (e.g. the above process repeats after the system boots).

The fix is actually pretty quick and simple; finding it, however, took a bit of time, trial and error.

There were of course the usual suspects such as:

• Checking the BIOS and firmware version of the motherboard on the Lenovo TD350 (checked this, however did not upgrade)
• Making sure that the proper VMware ESXi patches and updates were installed (they were, this was a pre-built image from another working server)
• Having the latest installation media if this were a new install (tried this as part of troubleshooting to make sure the pre-built image was ok)
• Removing any conflicting devices (small diversion hint: make sure if you have cloned a working VMware image to an internal drive that it is removed to avoid same file system UUID errors)
• Booting into the BIOS to make sure that VT is enabled for the processor, that AHCI (as opposed to IDE or RAID) is enabled for any SATA drives, and that boot is set to Legacy vs. Auto (e.g. disable UEFI support), as well as verifying the boot order. Having been in auto mode for UEFI support for some other activity, this was easy to change, however it was not the magic silver bullet I was looking for.

    Breaking the VMware ACPI boot loop on Lenovo TD350

After doing some searching and coming up with some interesting and false leads, as well as trying several boots, BIOS configuration changes, and even cloning the good VMware ESXi boot image to an internal drive in case there was a USB boot issue, the solution was rather simple once found (or remembered).

    Lenovo TD350 Basic BIOS settings
    Lenovo TD350 BIOS basic settings

    Lenovo TD350 processor BIOS settings
    Lenovo TD350 processor settings

Make sure that in your BIOS setup under PCIE you disable "Above 4GB decoding".

    Turns out that I had enabled "Above 4GB decoding" for some other things I had done.

Lenovo TD350 fix VMware ACPI error
    Lenovo TD350 disabling above 4GB decoding on PCIE under advanced settings

Once I made the above change and pressed F10 to save the BIOS settings and reboot, VMware ESXi had no issues getting past the ACPI initialization and the boot loop was broken.

    Where to read, watch and learn more

    • Lenovo TS140 Server and Storage I/O lab Review
    • Lenovo ThinkServer TD340 Server and StorageIO lab Review
    • Part II: Lenovo TS140 Server and Storage I/O lab Review
    • Software defined storage on a budget with Lenovo TS140

    Storage I/O trends

    What this all means and wrap up

    In this day and age of software defined focus, remember to double-check how your hardware BIOS (e.g. software) is defined for supporting various software defined server, storage, I/O and networking software for cloud, virtual, container and legacy environments. Watch for future posts with my experiences using the Lenovo TD350 including with Windows 2012 R2 (bare metal and virtual), Ubuntu (bare metal and virtual) with various application workloads among other things.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Intel Micron 3D XPoint server storage NVM SCM PM SSD

    3D XPoint server storage class memory SCM


    Storage I/O trends

    Updated 1/31/2018

    Intel Micron 3D XPoint server storage NVM SCM PM SSD.

    This is the second of a three-part series on the recent Intel and Micron 3D XPoint server storage memory announcement. Read Part I here and Part III here.

    Is this 3D XPoint marketing, manufacturing or material technology?

You can't have a successful manufactured material technology without some marketing; likewise marketing without some manufactured material would be manufactured marketing. In the case of 3D XPoint and its announcement launch, there was real technology shown, granted it was only wafers and dies as opposed to an actual DDR4 DIMM, PCIe Add In Card (AIC) or drive form factor Solid State Device (SSD) product. On the other hand, on a relative comparison basis, even though there is marketing collateral available to learn more from, this was far from an over-the-big-top, made-for-TV-or-web circus event, which can be a good thing.


    Wafer unveiled containing 3D XPoint 128 Gb dies

    Who will get access to 3D XPoint?

Initially, 3D XPoint production capacity will go toward the two companies offering early samples to their customers later this year, with general production slated for 2016, meaning early real customer-deployed products starting sometime in 2016.

    Is it NAND or NOT?

    3D XPoint is not NAND flash, it is also not NVRAM or DRAM, it’s a new class of NVM that can be used for server class main memory with persistency, or as persistent data storage among other uses (cell phones, automobiles, appliances and other electronics). In addition, 3D XPoint is more durable with a longer useful life for writing and storing data vs. NAND flash.

    Why is 3D XPoint important?

    As mentioned during the Intel and Micron announcement, there have only been seven major memory technologies introduced since the transistor back in 1947, granted there have been many variations along with generational enhancements of those. Thus 3D XPoint is being positioned by Intel and Micron as the eighth memory class joining its predecessors many of which continue to be used today in various roles.


    Major memory classes or categories timeline

    In addition to the above memory classes or categories timeline, the following shows in more detail various memory categories (click on the image below to get access to the Intel interactive infographic).

    Intel History of Memory Infographic
    Via: https://intelsalestraining.com/memory timeline/ (Click on image to view)

    What capacity size is 3D XPoint?

Initially the 3D XPoint technology is available in a 2 layer, 128 Gbit per die capacity. Keep in mind that there are 8 bits to a byte, resulting in 16 GBytes of capacity per chip initially. With density improvements, as well as increased stacking of layers, the number of cells or bits per die (e.g. what makes up a chip) should improve, and most implementations will have multiple chips in some type of configuration.
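
    As a quick sanity check on that math (using the 128 Gbit die size mentioned above):

    $$128\ \text{Gbit per die} \div 8\ \text{bits/byte} = 16\ \text{GByte (raw) per die}$$

    A packaged product combining multiple such dies would scale accordingly; for example, eight dies would be on the order of 128 GBytes raw. That figure is simply illustrative arithmetic, not an announced product.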

    What will 3D XPoint cost?

During the 3D XPoint launch webinar Intel and Micron hinted that initial pricing will be between current DRAM and NAND flash on a per cell or bit basis; however, real pricing and costs will vary depending on how it is packaged for use. For example, whether it is placed on a DDR4 or different type of DIMM, on a PCIe Add In Card (AIC), or offered as a drive form factor SSD among other options will vary the real price. Likewise as with other memories and storage mediums, as production yields and volumes increase, along with denser designs, the cost per usable cell or bit can be expected to further improve.

    Where to read, watch and learn more

    Storage I/O trends

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

DRAM, which has been around for some time, has plenty of life left for many applications, as does NAND flash including new 3D NAND, vNAND and other variations. For the next several years, there will be a co-existence between new and old NVM and DRAM among other memory technologies including 3D XPoint. Read more in this series including Part I here and Part III here.

    Disclosure: Micron and Intel have been direct and/or indirect clients in the past via third-parties and partners, also I have bought and use some of their technologies direct and/or in-direct via their partners.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    3D XPoint nvm pm scm storage class memory

    Part III – 3D XPoint server storage class memory SCM


    Storage I/O trends

    Updated 1/31/2018

    3D XPoint nvm pm scm storage class memory.

    This is the third of a three-part series on the recent Intel and Micron 3D XPoint server storage memory announcement. Read Part I here and Part II here.

    What is 3D XPoint and how does it work?

3D XPoint is a new class or category of memory (view other categories of memory here) that provides read and write performance closer to that of DRAM with about 10x the capacity density. In addition to speed closer to DRAM vs. slower NAND flash, 3D XPoint is also non-volatile memory (NVM) like NAND flash, NVRAM and others. What this means is that 3D XPoint can be used as persistent, higher density, fast server memory (or main memory for other computers and electronics). Besides being fast persistent main memory, 3D XPoint will also be a faster medium for solid state devices (SSDs) including PCIe Add In Cards (AIC), M.2 cards and drive form factor 8637/8639 NVM Express (NVMe) accessed devices, which also have better endurance or life span compared to NAND flash.


    3D XPoint architecture and attributes

The initial die or basic chip building block 3D XPoint implementation is a two layer, 128 Gbit device which, at 8 bits per byte, would yield 16 GBytes raw. Over time increased densities should become available as the bit density improves with more cells and further scaling of the technology, combined with packaging. For example, while a current die could hold up to 16 GBytes of data, multiple dies could be packaged together to create a 32GB, 64GB, 128GB or larger actual product. Think about not only where packaged flash based SSD capacities are today, but also where DDR3 and DDR4 DIMMs are at, such as 4GB, 8GB, 16GB and 32GB densities.

The 3D aspect comes from the memory being arranged in a matrix, initially two layers high, with multiple rows and columns that intersect; at each of those intersections is a microscopic material based switch for accessing a particular memory cell. Unlike NAND flash, where an individual cell or bit is accessed as part of a larger block or page comprising several thousand bytes at once, 3D XPoint cells or bits can be individually accessed to speed up reads and writes in a more granular fashion. It is this more granular access along with performance that will enable 3D XPoint to be used in lower latency scenarios where DRAM would normally be used.

Instead of trapping electrons in a cell to create a bit of capacity (e.g. on or off) like NAND flash, 3D XPoint leverages the underlying physical material properties to store a bit as a phase change, enabling use of all cells. In other words, instead of being electron based, it is material based. While Intel and Micron did not specify the actual chemistry and physical materials used in 3D XPoint, they did discuss some of the characteristics. If you want to go deep, check out how Dailytech makes an interesting educated speculation or thesis on the underlying technology.

    Watch the following video to get a better idea and visually see how 3D XPoint works.



    3D XPoint YouTube Video

    What are these chips, cells, wafers and dies?

Left: many dies on a wafer; right: a closer look at the dies cut from the wafer

Dies (here and here) are the basic building blocks of what goes into the chips that in turn are the components used for creating DDR DIMMs for main computer memory, as well as for creating SD and MicroSD cards, USB thumb drives, PCIe AIC and drive form factor SSDs, custom modules on motherboards, or for consumption at the bare die and wafer level (e.g. where you are doing really custom things at volume, beyond soldering iron scale).

    Storage I/O trends

Have Intel and Micron cornered the NVM and memory market?

We have heard proclamations, speculation and statements of the demise of DRAM, NAND flash and other volatile and NVM memories for years, if not decades now. Each year there is the usual "this will be the year of x", where "x" can include among others: Resistive RAM aka ReRAM or RRAM (aka the memristor that HP earlier announced they were going to bring to market, then earlier this year canceled those plans, while Crossbar continues to pursue RRAM); MRAM or Magnetoresistive RAM; Phase Change Memory aka CRAM or PCM and PRAM; and FRAM aka FeRAM or Ferroelectric RAM among others.

    flash SSD and NVM trends

    Expanding persistent memory and SSD storage markets

Keep in mind that there are many steps, taking time measured in years or decades, to go from a research and development lab idea to a prototype that can then be produced at production volumes with economic yields. As a reference point, there is still plenty of life in both DRAM as well as NAND flash, the latter having appeared around 1989.

    Industry vs. Customer Adoption and deployment timeline

    Technology industry adoption precedes customer adoption and deployment

There is a difference between industry adoption and deployment vs. customer adoption and deployment; they are related, yet separated by time as shown in the above figure. What this means is that there can be several years from the time a new technology is initially introduced to when it becomes generally available. Keep in mind that NAND flash has yet to reach its full market potential despite having made significant inroads these past few years since it was introduced in 1989.

This begs the question of whether 3D XPoint is a variation of phase change, RRAM, MRAM or something else. Over at Dailytech they lay out a line of thinking (or educated speculation) that 3D XPoint is some derivative or variation of phase change; time will tell what it really is.

    What’s the difference between 3D NAND flash and 3D XPoint?

3D NAND is a form of NAND flash NVM, while 3D XPoint is a completely new and different type of NVM (e.g. it is not NAND).

3D NAND is a variation of traditional flash, with the difference being vertical stacking vs. horizontal to improve density, also known as vertical NAND or V-NAND. Vertical stacking is like building up to house more tenants or occupants in a dense environment (scaling up), vs. scaling out by using up more space where density is not an issue. Note that magnetic HDDs shifted to perpendicular (e.g. vertical) recording about ten years ago to break through the superparamagnetic barrier, and more recently magnetic tape has also adopted perpendicular recording. Also keep in mind that 3D XPoint and the earlier announced Intel and Micron 3D NAND flash are two separate classes of memory that both just happen to have 3D in their marketing names.

    Where to read, watch and learn more

    Storage I/O trends

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

First, keep in mind that this is very early in the 3D XPoint technology evolution life-cycle and both DRAM and NAND flash will not be dead, at least near term. Keep in mind that NAND flash appeared back in 1989 and only over the past several years has finally hit its mainstream adoption stride, with plenty of market upside left. The same goes for DRAM, which has been around for some time and still has plenty of life left for many applications. However other applications that need improved speed over NAND flash, or persistency and density vs. DRAM, will be some of the first to leverage new NVM technologies such as 3D XPoint. Thus at least for the next several years, there will be a co-existence between new and old NVM and DRAM among other memory technologies. Bottom line, 3D XPoint is a new class of NVM memory that can be used for persistent main server memory or for persistent fast storage memory. If you have not done so, check out Part I here and Part II here of this three-part series on Intel and Micron 3D XPoint.

    Disclosure: Micron and Intel have been direct and/or indirect clients in the past via third-parties and partners, also I have bought and use some of their technologies direct and/or in-direct via their partners.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Intel Micron unveil new 3D XPoint Non Volatile Memory NVM for server storage

    3D XPoint NVM persistent memory PM storage class memory SCM


    Storage I/O trends

    Updated 1/31/2018

This is the first of a three-part series on the Intel and Micron unveiling of new 3D XPoint Non Volatile Memory (NVM) for servers and storage. Read Part II here and Part III here.

In a webcast the other day, Intel and Micron announced new 3D XPoint non-volatile memory (NVM) that can be used both for primary main memory (e.g. what's in computers, servers, laptops, tablets and many other things) in place of Dynamic Random Access Memory (DRAM), and for persistent storage faster than today's NAND flash-based solid state devices (SSD), not to mention future hybrid usage scenarios. Note that this announcement, while having the common term 3D in it, is different from the earlier Intel and Micron announcement about 3D NAND flash (read more about that here).

    Twitter hash tag #3DXpoint

    The big picture, why this type of NVM technology is needed

    Server and Storage I/O trends

    • Memory is storage and storage is persistent memory
• No such thing as a data or information recession, more data being created, processed and stored
    • Increased demand is also driving density along with convergence across server storage I/O resources
    • Larger amounts of data needing to be processed faster (large amounts of little data and big fast data)
    • Fast applications need more and faster processors, memory along with I/O interfaces
    • The best server or storage I/O is the one you do not need to do
    • The second best I/O is one with least impact or overhead
    • Data needs to be close to processing, processing needs to be close to the data (locality of reference)


    Server Storage I/O memory hardware and software hierarchy along with technology tiers

    What did Intel and Micron announce?

Intel SVP and General Manager of the Non-Volatile Memory solutions group Robert Crooke (left) and Micron CEO D. Mark Durcan did the joint announcement presentation of 3D XPoint (webinar here). What was announced is the 3D XPoint technology, jointly developed and manufactured by Intel and Micron, which is a new form or category of NVM that can be used both for primary memory in servers, laptops and other computers among other uses, as well as for persistent data storage.


    Robert Crooke (Left) and Mark Durcan (Right)

    Summary of 3D XPoint announcement

    • New category of NVM memory for servers and storage
    • Joint development and manufacturing by Intel and Micron in Utah
    • Non volatile so can be used for storage or persistent server main memory
    • Allows NVM to scale with data, storage and processors performance
    • Leverages capabilities of both Intel and Micron who have collaborated in the past
• Performance: Intel and Micron claim up to 1000x faster vs. NAND flash
• Availability: persistent NVM (unlike DRAM), with better durability (life span) vs. NAND flash
• Capacity: densities about 10x better vs. traditional DRAM
• Economics: cost per bit between DRAM and NAND (depending on packaging of resulting products)

    What applications and products is 3D XPoint suited for?

In general, 3D XPoint should be able to be used for many of the same applications and associated products that current DRAM and NAND flash-based storage memories are used for. These range from IT, cloud and managed service provider data center applications and services, to consumer focused uses among many others.


    3D XPoint enabling various applications

In general, applications or usage scenarios along with supporting products that can benefit from 3D XPoint include among others: applications that need larger amounts of main memory in a denser footprint such as in-memory databases, little and big data analytics, gaming, wave form analysis for security, copyright or other detection analysis, life sciences, high performance compute and high-productivity compute, energy, and video and content serving among many others.

In addition, there are applications that need persistent main memory for resiliency, or to cut the delays and impacts of planned or un-planned maintenance, or of having to wait for memories and caches to be warmed or re-populated after a server boot (or re-boot). 3D XPoint will also be useful for those applications that need faster read and write performance compared to current generation NAND flash for data storage. This means both existing and emerging applications, as well as some that do not yet exist, will benefit from 3D XPoint over time, similar to how today's applications and others have benefited from DRAM used in Dual Inline Memory Modules (DIMM) and NAND flash advances over the past several decades.

    Where to read, watch and learn more

    Storage I/O trends

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    First, keep in mind that this is very early in the 3D XPoint technology evolution life-cycle and both DRAM and NAND flash will not be dead at least near term. Keep in mind that NAND flash appeared back in 1989 and only over the past several years has finally hit its mainstream adoption stride with plenty of market upside left. Continue reading Part II here and Part III here of this three-part series on Intel and Micron 3D XPoint along with more analysis and commentary.

    Disclosure: Micron and Intel have been direct and/or indirect clients in the past via third-parties and partners, also I have bought and use some of their technologies direct and/or in-direct via their partners.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Collecting Transaction Per Minute from SQL Server and HammerDB

    Storage I/O trends

    Collecting Transaction Per Minute from SQL Server and HammerDB

    When using benchmark or workload generation tools such as HammerDB I needed a way to capture and log performance activity metrics such as transactions per minute. For example using HammerDB to simulate an application making database requests performing various transactions as part of testing an overall system solution including server and storage I/O activity. This post takes a look at the problem or challenge I was looking to address, as well as creating a solution after spending time searching for one (still searching btw).

    The Problem, Issue, Challenge, Opportunity and Need

    The challenge is to collect application performance such as transactions per minute from a workload using a database. The workload or benchmark tool (in this case HammerDB) is the System Test Initiator (STI) that drives the activity (e.g. database requests) to a System Under Test (SUT). In this example the SUT is a Microsoft SQL Server running on a Windows 2012 R2 server. What I need is to collect and log into a file for later analysis the transaction rate per minute while the STI is generating a particular workload.

    Server Storage I/O performance

    Understanding the challenge and designing a strategy

If you have ever used benchmark or workload generation tools such as Quest Benchmark Factory (part of the Toad tools collection), you might be spoiled with how it can be used to not only generate the workload, but also collect, process, present and even store the results for database workloads such as TPC simulations. In this situation, Transaction Processing Council (TPC) like workloads need to be run and metrics on performance collected. Let's leave Benchmark Factory for a future discussion and focus instead on a free tool called HammerDB, and more specifically on how to collect transactions per minute metrics from Microsoft SQL Server. While the focus is SQL Server, you can easily adapt the approach for MySQL among others, not to mention there are tools such as Sysbench and Aerospike among other tools.

The following image (created using my Livescribe Echo digital pen) outlines the problem, as well as sketches out a possible solution design. In the following figure, for my solution I show how to grab, every minute for a given amount of time, the count of transactions that have occurred. Later in the post processing (you could also do this in the SQL script) I take the new transaction count (which is cumulative) and subtract the earlier interval, which yields the transactions per minute (see examples later in this post).

    collect TPM metrics from SQL Server with hammerdb
    The problem and challenge, a way to collect Transactions Per Minute (TPM)

    Finding a solution

HammerDB displays results via its GUI, and perhaps there is a way or some trick to get it to log results to a file or some other means, however after searching the web I found that it was quicker to come up with my own solution. That solution was to decide how to collect and report the transactions per minute (or you could do it by second or another interval) from Microsoft SQL Server. The approach was to find what performance counters and metrics are available from SQL Server, how to collect those, and how to log them to a file for processing. What this means is that a SQL Server script file would need to be created that runs in a loop, collecting for a given amount of time at a specified interval, for example once a minute for several hours.

    Taking action

The following is a script that I came up with that is far from optimal; however, it gets the job done and is a starting point for adding more capabilities or optimizations.

In the following example, set loopcount to the number of minutes to collect samples for. Note however that if you are running a workload test for eight (8) hours with a 30 minute ramp-up time, you would want to use a loopcount (e.g. number of minutes to collect for) of 480 + 30 + 10. The extra 10 minutes is to allow for some samples before the ramp and start of the workload, as well as to give a pronounced end-of-test set of samples. Add or subtract however many minutes to collect for as needed, however keep this in mind: better to collect a few extra minutes vs. not having them and wishing you did.

    -- Note and disclaimer:
    -- 
    -- Use of this code sample is at your own risk with Server StorageIO and UnlimitedIO LLC
    -- assuming no responsibility for its use or consequences. You are free to use this as is
    -- for non-commercial scenarios with no warranty implied. However feel free to enhance and
    -- share those enhancements with others e.g. pay it forward.
    -- 
    DECLARE @cntr_value bigint;
DECLARE @loopcount bigint; -- how many minutes to take samples for
    
    set @loopcount = 240
    
    SELECT @cntr_value = cntr_value
     FROM sys.dm_os_performance_counters
     WHERE counter_name = 'transactions/sec'
     AND object_name = 'MSSQL$DBIO:Databases'
     AND instance_name = 'tpcc' ; print @cntr_value;
     WAITFOR DELAY '00:00:01'
    -- 
    -- Start loop to collect TPM every minute
    -- 
    
    while @loopcount <> 0
    begin
    SELECT @cntr_value = cntr_value
     FROM sys.dm_os_performance_counters
     WHERE counter_name = 'transactions/sec'
     AND object_name = 'MSSQL$DBIO:Databases'
     AND instance_name = 'tpcc' ; print @cntr_value;
     WAITFOR DELAY '00:01:00'
     set @loopcount = @loopcount - 1
    end
    -- 
    -- All done with loop, write out the last value
    -- 
    SELECT @cntr_value = cntr_value
     FROM sys.dm_os_performance_counters
     WHERE counter_name = 'transactions/sec'
     AND object_name = 'MSSQL$DBIO:Databases'
     AND instance_name = 'tpcc' ; print @cntr_value;
    -- 
    -- End of script
    -- 

The above example has loopcount set to 240 for a 200 minute test with a 30 minute ramp and 10 extra minutes of samples. I use a couple of the minutes to make sure that the system test initiator (STI), such as HammerDB, is configured and ready to start executing transactions. You could also put this along with your HammerDB items into a script file for further automation, however I will leave that exercise up to you.

For those of you familiar with SQL and SQL Server, you probably already see some things to improve or stylize, or will simply apply your own preferences, which is great; go for it. Also note that I'm only selecting a certain variable from the performance counters; there are many others which you can easily discover with a couple of SQL commands (e.g. select and specify the database instance and object name). Also note that the key is accessing the items in sys.dm_os_performance_counters of your SQL Server database instance.
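
    As a minimal sketch of that discovery step (assuming the same tpcc database and the DBIO named SQL Server instance used in the script above; adjust the names to match your environment), here is one way to browse what counters are available:

    -- Browse the counters exposed for a given object and database instance
    -- (adjust object_name and instance_name for your own server and database)
    SELECT object_name, counter_name, instance_name, cntr_value, cntr_type
     FROM sys.dm_os_performance_counters
     WHERE object_name = 'MSSQL$DBIO:Databases'
     AND instance_name = 'tpcc'
     ORDER BY counter_name;
    -- 
    -- Or list every object and counter the instance exposes, then narrow down
    SELECT DISTINCT object_name, counter_name
     FROM sys.dm_os_performance_counters
     ORDER BY object_name, counter_name;

    Running the first query is also a quick way to confirm the exact object_name string for your instance before plugging it into the collection script.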

    The results

The output from the above is a list of cumulative numbers, as shown below, which you will need to post process (or add a calculation to the above script). Note that part of running the script is specifying an output file, which I show later.

    785
    785
    785
    785
    37142
    1259026
    2453479
    3635138
    

    Implementing the solution

You can set up the above script to run as part of a larger automation shell or batch script, however for simplicity I'm showing it here using Microsoft SQL Server Studio.

    SQL Server script to collect TPM
    Microsoft SQL Server Studio with script to collect Transaction Per Minute (TPM)

    The following image shows how to specify an output file for the results to be logged to when using Microsoft SQL Studio to run the TPM collection script.

    Specify SQL Server tpm output file
    Microsoft SQL Server Studio specify output file

With the SQL Server script running to collect results, and the HammerDB workload running to generate activity, the following shows Quest Spotlight on Windows (SoW) displaying Windows Server 2012 R2 operating system level performance including CPU, memory, paging and other activity. Note that this example had both the system test initiator (STI), which is HammerDB, and the system under test (SUT), which is Microsoft SQL Server, on the same server.

    Spotlight on Windows while SQL Server doing tpc
    Quest Spotlight on Windows showing Windows Server performance activity

    Results and post-processing

As part of post processing, simply use your favorite tool or script; what I often do is pull the numbers into an Excel spreadsheet and create a new column of numbers that computes and shows the difference between each step (see below). While in Excel I then plot the numbers as needed, which can also be done via a shell script and other plotting tools such as R.

In the following example, the results are imported into Excel (or your favorite tool or script) where I then add a column (B) that simply computes the difference between the current and earlier counter. For example, cell B2 = A2-A1, B3 = A3-A2 and so forth for the rest of the numbers in column A. I then plot the numbers in column B to show the transaction rates over time, which can then be used for various things.
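
    If you prefer to skip the spreadsheet step, the same difference calculation can be folded into the collection loop itself, as mentioned earlier. The following is one possible variation of the loop from the script above (a sketch I have not run through a full test cycle) that keeps the previous sample and prints the per-minute delta instead of the cumulative value:

    -- Variation of the collection loop that prints the per-minute delta
    -- (current cumulative counter minus the previous sample) rather than
    -- the raw cumulative value
    DECLARE @cntr_value bigint;
    DECLARE @prev_value bigint;
    DECLARE @delta bigint;
    DECLARE @loopcount bigint; -- how many minutes to take samples for
    
    set @loopcount = 240
    
    -- Take an initial sample to seed the previous value
    SELECT @prev_value = cntr_value
     FROM sys.dm_os_performance_counters
     WHERE counter_name = 'transactions/sec'
     AND object_name = 'MSSQL$DBIO:Databases'
     AND instance_name = 'tpcc' ;
    
    while @loopcount <> 0
    begin
     WAITFOR DELAY '00:01:00'
     SELECT @cntr_value = cntr_value
      FROM sys.dm_os_performance_counters
      WHERE counter_name = 'transactions/sec'
      AND object_name = 'MSSQL$DBIO:Databases'
      AND instance_name = 'tpcc' ;
     set @delta = @cntr_value - @prev_value
     print @delta -- transactions during the last minute
     set @prev_value = @cntr_value
     set @loopcount = @loopcount - 1
    end

    The tradeoff is that you lose the raw cumulative values in the output file, so keep whichever form (or both) suits your post-processing.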

    Hammerdb TPM results from SQL Server processed in Excel
    Results processed in Excel and plotted

Note that while the above results might seem too good to be true, they are: these were cached results intended to show the tools and data collection process as opposed to the real work being done, at least for now…

    Where to learn more

    Here are some extra links to have a look at:

    How to test your HDD, SSD or all flash array (AFA) storage fundamentals
    Server and Storage I/O Benchmarking 101 for Smarties
    Server and Storage I/O Benchmark Tools: Microsoft Diskspd (Part I)
    The SSD Place (collection of flash and SSD resources)
    Server and Storage I/O Benchmarking and Performance Resources
    I/O, I/O how well do you know about good or bad server and storage I/Os?

    What this all means and wrap-up

There are probably many ways to fine tune and optimize the above script; likewise there may even be some existing tool, plug-in, add-on module, or configuration setting that allows HammerDB to log the transaction activity rates to a file vs. simply showing them on a screen. However for now, this is a workaround that I have found for when I need to collect transaction activity performance data with HammerDB and SQL Server.

    Ok, nuff said, for now…

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Top vblog voting V2.015 (It's IT award season, cast your votes)

Top vblog voting V2.015 (It's IT award season, cast your votes)

    Storage I/O trends

    It’s that time of the year again for award season:

    • The motion picture association Academy awards (e.g. the Oscars)
    • The Grammys and other entertainment awards
• As well as Eric Siebert's (aka @ericsiebert) vsphere-land.com top vblog

    Vsphere-land.com top vblog

For several years now, Eric has run an annual top VMware, virtualization, storage and related blogs vote, with voting now taking place until March 16th 2015 (click on the image below). You will find a nice mix of new school, old school and a few current or future school themed blogs represented, with some being more VMware specific. However there are also many blogs at the vpad site that have cloud, virtual, server, storage, networking, software defined, development and other related themes.

    top vblog voting
    Click on the above image to cast your vote for favorite:

    • Ten blogs (e.g. select up to ten and then rank 1 through 10)
    • Storage blog
    • Scripting blog
    • VDI blog
    • New Blogger
    • Independent Blogger (e.g. non-vendor)
    • News/Information Web site
    • Podcast

    Call to action, take a moment to cast your vote

    My StorageIOblog.com has been on the vLaunchPad site for several years now as well as having syndicated content that also appears via some of the other venues listed there.

    Six time VMware vExpert

    In addition to my StorageIOblog and podcast, you will also find many of my fellow VMware vExperts among others at the vLaunchpad site so check them out as well.

    What this means

This is a people's choice process (yes, it is a popularity process of sorts as well), however it is also a way of rewarding or thanking those who take time to create and share content with you and others. If you take time to read various blogs, listen to podcasts and consume other content, please take a few moments and cast your vote here (thank you in advance), which I hope includes StorageIOblog.com as part of the top ten, as well as in the Storage, Podcast and Independent blogger categories where it is nominated.

    Ok, nuff said, for now…

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved