Part II – EMC DSSD D5 Direct Attached Shared AFA

server storage I/O trends

This is the second post in a two-part series on the EMC DSSD D5 announcement; you can read part one here.

Let's take a closer look at how EMC DSSD D5 works, its hardware and software components, how it compares, and other considerations.

How Does DSSD D5 Work

Up to 48 Linux servers attach via dual port PCIe Gen 3 x8 cards that are stateless. Stateless simply means the cards do not contain any flash and are not used as storage cards; rather, they are essentially just NVMe adapter cards. With the first release, block, HDFS file, object and API access are available for Linux systems. These drivers enable the shared NVMe storage to be accessed by applications using streamlined server and storage I/O driver software stacks that cut latency. DSSD D5 is meant to be a rack-scale solution, so distance is measured as inside a rack (e.g. a couple of meters).

The 5U tall DSSD D5 supports 48 servers via a pair of I/O Modules (IOM), each with 48 ports, that in turn attach to the data plane and on to the Flash Modules (FM). Also attached to the data plane are a pair of active/active controllers that perform management tasks, however they do not sit in the data path. This means that host clients directly access the FMs without having to go through a controller, which is the case in traditional storage systems and AFAs. The controllers only get involved when there is setup, configuration or other management activity; otherwise they get out of the way, kind of like how management should function. There when you need them to help, then out of the way so productive work can be done.

EMC DSSD shared ssd das
Pardon the following hand-drawn sketches; you can see some nicer diagrams, videos and other content via the EMC Pulse Blog as well as elsewhere.

Note that the host client servers take on the responsibility for managing and coordinating data consistency, meaning data can be shared between servers assuming applicable software is used for implementing integrity. This means that clustering and other software that can support shared storage are able to support low latency, high performance read and write activity with the DSSD D5, as opposed to relying on the underlying storage system for handling the shared storage coordination such as in a NAS. Another note is that the DSSD D5 is optimized for concurrent multi-threaded and asynchronous I/O operations, along with atomic writes for data integrity, that enable the multiple cores in today's faster processors to be more effectively leveraged.

The data plane is a mesh, switch or expander based back plane enabling any of the north bound (host client-server) 96 (2 x 48) PCIe Gen 3 x4 ports to reach the up to 36 (or as few as 18) FMs that are also dual pathed. Note that the host client-server PCIe dual port cards are Gen 3 x8 while the DSSD D5 ports are Gen 3 x4. Simple math should tell you that if you are going to have 2 x PCIe Gen 3 x4 ports running at full speed, you want to have a Gen 3 x8 connection inside the server to get full performance, as the quick sketch below illustrates.
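To put rough numbers on that lane math, here is a small back-of-the-envelope sketch (my own illustration, not an EMC specification), using the commonly cited ~985 MB/s of usable bandwidth per PCIe Gen 3 lane after 128b/130b encoding:

```python
# Rough PCIe Gen 3 bandwidth math for the DSSD D5 host connection.
# Illustrative only; real throughput depends on protocol overhead and workload.
GEN3_LANE_MBS = 985  # ~8 GT/s per lane with 128b/130b encoding, usable payload

def usable_gbs(lanes: int) -> float:
    """Approximate usable bandwidth in GB/s for a PCIe Gen 3 link with N lanes."""
    return lanes * GEN3_LANE_MBS / 1000

d5_port = usable_gbs(4)       # each DSSD D5 port is PCIe Gen 3 x4
two_ports = 2 * d5_port       # both x4 ports of the dual port card at full speed
host_card = usable_gbs(8)     # the host client card is PCIe Gen 3 x8

print(f"One D5 x4 port  ~{d5_port:.1f} GB/s")
print(f"Two D5 x4 ports ~{two_ports:.1f} GB/s")
print(f"Host x8 card    ~{host_card:.1f} GB/s")  # roughly matches two x4 ports
```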

Think of the data plane similar to how a SAS expander works in an enclosure or a SAS switch, the difference being it is PCIe and not SAS or other protocol. Note that even though the terms mesh, fabric, switch, network are used, these are NOT attached to traditional LAN, SAN, NAS or other networks. Instead, this is a private “networked back plane” between the server and storage devices (e.g. FM).

EMC DSSD D5 details

The dual controllers (e.g. control plane) oversee the flash management, including garbage collection among other tasks; storage is also thin provisioned.

Dual Controllers (active/active) are connected to each other (e.g. the control plane) as well as to the data plane; however, they do not sit in the data path. Thus this is a fast path / control path approach, meaning the controllers can get involved to do management functions when needed, and get out of the way of work when not needed. The controllers are hot-swap and provide global management functions including setting up and tearing down host client/server I/O paths, mappings and affinities. Controllers also support the management of CUBIC RAID data protection functions performed by the Flash Modules (FM).

Other functions the controllers implement, leveraging their CPUs and DRAM, include flash translation layer (FTL) functions normally handled by SSD cards, drives or other devices. These FTL functions include wear-leveling for durability, garbage collection and voltage power management among other tasks. The result is that the flash modules are able to spend more of their time and resources handling I/O operations rather than management tasks, compared to traditional off-the-shelf SSD drives, cards or devices.

The FMs insert from the front and come in two sizes of 2TB and 4TB of raw NAND capacity. What's different about the FMs vs. some other vendors' approaches is that these are not your traditional PCIe flash cards; instead they are custom cards with a proprietary ASIC and raw NAND dies. DRAM is used in the FM as a buffer to hold data for write optimization as well as to enhance wear-leveling to increase flash endurance.

The result is up to thousands of NAND dies spread over up to 36 FMs and, more important, more performance being derived out of those resources. The increased performance comes from DSSD implementing its own flash translation layer, garbage collection and power voltage management among other techniques to derive more useful work per watt of energy consumed.

EMC DSSD performance claims:

  • 100 microsecond latency for small IOs
  • 100GB/sec bandwidth for large IOs
  • 10 Million small IO IOPs
  • Up to 144TB raw capacity
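As a quick sanity check of how those claims relate to each other, the following sketch (my own arithmetic, assuming a 4KB small I/O size, which is not stated in the claims) multiplies the IOP number by an I/O size and tallies the raw Flash Module capacity:

```python
# Back-of-the-envelope check of the DSSD D5 claims listed above.
# Assumes 4KB small I/Os; the actual I/O size behind the IOP claim is not stated.
small_io_bytes = 4 * 1024
claimed_iops = 10_000_000

small_io_gbs = claimed_iops * small_io_bytes / 1e9   # ~41 GB/s at 4KB
raw_capacity_tb = 36 * 4                             # 36 Flash Modules x 4TB each

print(f"10M IOPs x 4KB ≈ {small_io_gbs:.0f} GB/s of small I/O traffic")
print(f"Large I/O claim: 100 GB/s")
print(f"36 x 4TB FMs   = {raw_capacity_tb} TB raw capacity")
```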

How Does It Compare To Other AFA and SSD solutions

There will be many apples to oranges comparisons as is often the case with new technologies or at least until others arrive in the market.

Some general comparisons that may be apples to oranges as opposed to apples to apples include:

  • Shared and dense fast nand flash (eMLC) SSD storage
  • disaggregated flash SSD storage from server while enabling high performance, low latency
  • Eliminate pools or ponds of dedicated SSD storage capacity and performance
  • Not a SAN yet more than server-side flash or flash SSD JBOD
  • Underlying Flash Translation Layer (FTL) is disaggregated from SSD devices
  • Optimized hardware and software data path
  • Requires special server-side stateless adapter for accessing shared storage

Some other comparisons include:

  • Hybrid and AFA shared via some server storage I/O network (good sharing, feature rich, resilient, slower performance and higher latency due to hardware, network and server I/O software stacks). For example EMC VMAX, VNX, XtremIO among others.
  • Server attached flash SSD aka server SAN (flash SSD creates islands of technology, lower resource sharing, data shuffling between servers, limited or no data services, management complexity). For example PCIe flash SSD stateful (persistent) cards where data is stored or used as a cache, along with associated management tools and drivers.
  • DSSD D5 is a rack-scale hybrid approach combining direct attached shared flash with lower latency and higher performance vs. traditional AFA or hybrid storage arrays, and better resource usage, sharing, management and performance vs. traditional dedicated server flash. It complements server-side data infrastructure and applications scale-out software. Server applications can reach NVMe storage via user space with block, HDFS, Flood and other APIs.

Using EMC DSSD D5 in possible hybrid ways

What Happened to Server PCIe cards and Server SANs

If you recall, a few years ago the industry rage was flash SSD PCIe server cards from vendors such as EMC, FusionIO (now part of SanDisk), Intel (still Intel), LSI (now part of Seagate), Micron (still Micron) and STEC (now part of Western Digital) among others. Server-side flash SSD PCIe cards are still popular, particularly newer NVMe controller based models that use the NVMe protocol stack instead of AHCI/SATA or others.

However as is often the case, things evolve, and while there is still a place for server-side stateful PCIe flash cards either for data or as cache, there is also the need to combine and simplify management, as well as streamline the software I/O stacks, which is where EMC DSSD D5 comes into play. It enables consolidation of server-side SSD cards into a shared 5U chassis, enabling up to 48 dual pathed servers to access the flash pools while using streamlined server software stacks and drivers that leverage NVMe over PCIe.

Where to learn more

Continue reading with the following links about NVMe, flash SSD and EMC DSSD.

  • Part one of this series here and part two here.
  • Performance Redefined! Introducing DSSD D5 Rack-Scale Flash Solution (EMC Pulse Blog)
  • EMC Unveils DSSD D5: A Quantum Leap In Flash Storage (EMC Press Release)
  • EMC Declares 2016 The “Year of All-Flash” For Primary Storage (EMC Press Release)
  • EMC DSSD D5 Rack-Scale Flash (EMC PDF Overview)
  • EMC DSSD and Cloudera Evolve Hadoop (EMC White Paper Overview)
  • Software Aspects of The EMC DSSD D5 Rack-Scale Flash Storage Platform (EMC PDF White Paper)
  • EMC DSSD D5 (EMC PDF Architecture and Product Specification)
  • EMC VFCache respinning SSD and intelligent caching (Part II)
  • EMC To Acquire DSSD, Inc., Extends Flash Storage Leadership
  • Part II: XtremIO, XtremSW and XtremSF EMC flash ssd portfolio redefined
  • XtremIO, XtremSW and XtremSF EMC flash ssd portfolio redefined
  • Learn more about flash SSD here and NVMe here at thenvmeplace.com
What this all means

EMC with DSSD D5 now has another solution to offer clients; granted, their challenge, as it has been over the past couple of decades, will be to educate and compensate their sales force and partners on which technology solution to position for different needs.

On one hand, life could be simpler for EMC if they only had one platform solution that would then be the answer to every problem, a situation some other vendors and startups face. Likewise, if all you have is one solution, then while you can try to make that solution fit different environments, or get the environment to adapt to the solution, having options is a good thing if those options can remove complexity along with cost while boosting productivity.

I would like to see support for other operating systems such as Windows, particularly with the future Windows Server 2016 based Nano, as well as hypervisors including VMware and Hyper-V among others. On the other hand I also would like to see a Sharp AQUOS Quattron 80" 1080p 240Hz 3D TV on my wall to watch HD videos from my DJI Phantom Drone. For now focusing on Linux makes sense, however it would be nice to see some more platforms supported.

Keep an eye on the NVMe space as we are seeing NVMe solutions appearing inside servers and storage systems, external dedicated and shared, as well as some other emerging things including NVMe over Fabric. Learn more about EMC DSSD D5 here.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    EMC DSSD D5 Rack Scale Direct Attached Shared SSD All Flash Array Part I

    server storage I/O trends

This is the first post in a two-part series pertaining to the EMC DSSD D5 announcement; you can read part two here.

EMC announced today the general availability of their DSSD D5 Shared Direct Attached SSD (DAS) flash storage system (e.g. All Flash Array or AFA), which is a rack-scale solution. If you recall, EMC acquired DSSD back in 2014, which you can read more about here. EMC announced configurations that include 36TB, 72TB and 144TB of raw flash SSD capacity with support for up to 48 dual-ported host client servers.

    Via EMC Pulse Blog

    What Is DSSD D5

At a high level, EMC DSSD D5 is a PCIe direct attached SSD flash storage solution that enables aggregation of disparate SSD card functionality typically found in separate servers into a shared system, without causing aggravation. DSSD D5 helps to alleviate server-side I/O bottlenecks or aggravation issues that can result from aggregation of workloads or data. Think of DSSD D5 as a shared application server storage I/O accelerator for up to 48 servers to access up to 144TB of raw flash SSD to support various applications that have the need for speed.
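The raw capacity points announced line up with the 2TB and 4TB Flash Modules (FM) described in part two of this series; the sketch below is my own guess at how the configurations could map onto FM counts and sizes, not an EMC configuration guide:

```python
# Hypothetical mapping of announced raw capacity points onto Flash Module (FM)
# counts and sizes (18-36 FMs of 2TB or 4TB, per part two of this series).
# The actual supported combinations may differ from this illustration.
fm_options = [(18, 2), (36, 2), (18, 4), (36, 4)]  # (FM count, TB per FM)

for count, size_tb in fm_options:
    print(f"{count} FMs x {size_tb}TB = {count * size_tb}TB raw")
# 36TB, 72TB and 144TB match the announced capacity points; 72TB can be
# reached two ways in this simple model.
```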

Applications that have the need for speed, or that can benefit from spending less time waiting for results, are those where time is money and where boosting productivity enables high-profitability computing. This includes legacy as well as emerging applications and workloads spanning little data, big data, and big fast structured and unstructured data. From Oracle to SAS to HBase and Hadoop among others, perhaps even Alluxio.

    Some examples include:

    • Clusters and scale-out grids
• High Performance Compute (HPC)
    • Parallel file systems
    • Forecasting and image processing
    • Fraud detection and prevention
    • Research and analytics
    • E-commerce and retail
    • Search and advertising
    • Legacy applications
    • Emerging applications
    • Structured database and key-value repositories
    • Unstructured file systems, HDFS and other data
    • Large undefined work sets
    • From batch stream to real-time
    • Reduces run times from days to hours

    Where to learn more

    Continue reading with the following links about NVMe, flash SSD and EMC DSSD.

  • Part one of this series here and part two here.
  • Performance Redefined! Introducing DSSD D5 Rack-Scale Flash Solution (EMC Pulse Blog)
  • EMC Unveils DSSD D5: A Quantum Leap In Flash Storage (EMC Press Release)
  • EMC Declares 2016 The “Year of All-Flash” For Primary Storage (EMC Press Release)
  • EMC DSSD D5 Rack-Scale Flash (EMC PDF Overview)
  • EMC DSSD and Cloudera Evolve Hadoop (EMC White Paper Overview)
  • Software Aspects of The EMC DSSD D5 Rack-Scale Flash Storage Platform (EMC PDF White Paper)
  • EMC DSSD D5 (EMC PDF Architecture and Product Specification)
  • EMC VFCache respinning SSD and intelligent caching (Part II)
  • EMC To Acquire DSSD, Inc., Extends Flash Storage Leadership
  • Part II: XtremIO, XtremSW and XtremSF EMC flash ssd portfolio redefined
  • XtremIO, XtremSW and XtremSF EMC flash ssd portfolio redefined
  • Learn more about flash SSD here and NVMe here at thenvmeplace.com
What this all means

Today's legacy and emerging applications have the need for speed, and where the applications themselves may not need speed, the users as well as the Internet of Things (IoT) devices that depend upon or feed those applications do need things to move faster. Fast applications need fast software and hardware to get the same amount of work done faster with fewer wait delays, as well as to process larger amounts of structured and unstructured little data, big data and very fast big data.

    Different applications along with the data infrastructures they rely upon including servers, storage, I/O hardware and software need to adapt to various environments, one size, one approach model does not fit all scenarios. What this means is that some applications and data infrastructures will benefit from shared direct attached SSD storage such as rack scale solutions using EMC DSSD D5. Meanwhile other applications will benefit from AFA or hybrid storage systems along with other approaches used in various ways.

    Continue reading part two of this series here including how EMC DSSD D5 works and more perspectives.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    Various Hardware (SAS, SATA, NVM, M2) and Software (VHD) Defined Odd’s and Ends

    server storage I/O trends

Ever need to add another GbE port to a small server, workstation or perhaps Intel NUC, however no PCIe slots are available? How about attaching an M2 form factor flash SSD card to a server or device that does not have an M2 port, or mirroring two M2 cards together with a RAID adapter? Looking for a tool to convert a Windows system to a Virtual Hard Disk (VHD) while it is running? The following are a collection of odds and ends devices and tools for hardware and software defining your environment.

    Adding GbE Ports Without PCIe Ports

    Adding Ethernet ports or NICs is relatively easy with larger servers, assuming you have available PCIe slots.

However, what about when you are limited or out of PCIe slots? One option is to use USB (preferably USB 3) to GbE connectors. Another option, if you have an available mSATA card slot (such as on a server or workstation with a WiFi card you no longer need), is to get an mSATA to GbE kit (shown below). Granted, you might have to get creative with the PCIe bracket depending on what you are going to put one of these into.

    mSATA to GbE and USB to GbE
    Left mSATA to GbE port, Right USB 3 (Blue) to GbE connector

Tip: Some hypervisors may not like the USB to GbE adapter, or may not have drivers for the mSATA to GbE connector; likewise some operating systems do not have in-box drivers. Start by loading GbE drivers such as those needed for RealTek NICs and you may end up with plug and play.

    SAS to SATA Interposer and M2 to SATA docking card

In the following figure on the left is a SAS to SATA interposer, which enables a SAS HDD or SSD to connect to a SATA connector (power and data). Keep in mind that SATA devices can attach to SAS ports, however the usual rule of thumb is that SAS devices cannot attach to a SATA port or controller. To prevent that from occurring, the SAS and SATA connectors have different notches that prevent a SAS device from plugging into a SATA connector.

Where the SAS to SATA interposers come into play is that some servers or systems have SAS controllers, however their drive bays have SATA power and data connectors. Note that the key here is that there is a SAS controller, however instead of a SAS connector to the drive bay, a SATA connector is used. To get around this, interposers such as the one above allow the SAS device to attach to the SATA connector, which in turn attaches to the SAS controller.

    SAS SATA interposer and M2 to SATA docking card
    Left SAS to SATA interposer, Right M2 to SATA docking card

In the above figure on the right is an M2 NVM NAND flash SSD card attached to an M2 to SATA docking card. This enables M2 cards that have SATA protocol controllers (as opposed to M2 NVMe) to be attached to a SATA port on an adapter or RAID card. Some of these docking cards can also be mounted in server or storage system 2.5" (or larger) drive bays. You can find both of the above at Amazon.com as well as many other venues.

    P2V and Creating VHD and VHDX

    I like and use various Physical to Virtual (P2V) as well as Virtual to Virtual (V2V) and even Virtual to Physical (V2P) along with Virtual to Cloud (V2C) tools including those from VMware (vCenter Converter), Microsoft (e.g. Microsoft Virtual Machine Converter) among others. Likewise Clonezilla, Acronis and many other tools are in the toolbox. One of those other tools that is handy for relatively quickly making a VHD or VHDX out of a running Windows server is disk2vhd.

    disk2vhd

    Now you should ask, why not just use the Microsoft Migration tool or VMware converter?

Simple: if you use those or other tools and run into issues with GPT vs MBR or BIOS vs UEFI settings among others, disk2vhd is a handy workaround. Simply install it, tell it where to create the VHD or VHDX (preferably on another device), start the creation, and when done, move the VHDX or VHD to where needed and go from there.
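If you prefer scripting the capture rather than clicking through the GUI, disk2vhd can also be driven from a command line. The sketch below wraps such a call from Python; the argument syntax shown is an assumption based on my recollection of the Sysinternals usage text, so check `disk2vhd /?` on your system before relying on it.

```python
# Hedged sketch: call disk2vhd from a script to capture the C: volume to a VHDX.
# The command-line syntax is assumed (disk2vhd <drive:> <vhdfile>); verify it
# with "disk2vhd /?" before use, and write the VHDX to another device.
import subprocess

def capture_to_vhdx(drive: str, target: str) -> int:
    """Run disk2vhd for one drive letter and return the process exit code."""
    cmd = ["disk2vhd.exe", drive, target]
    result = subprocess.run(cmd, check=False)
    return result.returncode

if __name__ == "__main__":
    rc = capture_to_vhdx("C:", r"D:\backups\myserver.vhdx")  # hypothetical paths
    print("disk2vhd exit code:", rc)
```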

    Where do you get disk2vhd and how much does it cost?

Get it here from the Microsoft Technet Windows Sysinternals page, and it's free.

    Where to learn more

    Continue reading about the above and other related topics with these links.

  • Server storage I/O Intel NUC nick knack notes – Second impressions
  • Some Windows Server Storage I/O related commands
  • Server Storage I/O Cables Connectors Chargers & other Geek Gifts
  • The NVM (Non Volatile Memory) and NVMe Place (Non Volatile Memory Express)
  • Nand flash SSD and NVM server storage I/O memory conversations
  • Cloud Storage for Camera Data?
  • Via @EmergencyMgtMag Cloud Storage for Camera Data?

  • Software Defined Storage Virtual Hard Disk (VHD) Algorithms + Data Structures
  • Part II 2014 Server Storage I/O Geek Gift ideas
What this all means

While the above odds and ends tips, tricks, tools and technology may not be applicable for your production environment, perhaps they will be useful for your test or home lab environment needs. On the other hand, the above may not be practically useful for anything, yet simply entertaining; the rest is up to you as to whether there is any return on investment, or perhaps return on innovation, from using these or other odds and ends tips and tricks that might be outside of the traditional box so to speak.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    NVMe Place NVM Non Volatile Memory Express Resources

    Updated 8/31/19
    NVMe place server Storage I/O data infrastructure trends

    Welcome to NVMe place NVM Non Volatile Memory Express Resources. NVMe place is about Non Volatile Memory (NVM) Express (NVMe) with Industry Trends Perspectives, Tips, Tools, Techniques, Technologies, News and other information.

    Disclaimer

    Please note that this NVMe place resources site is independent of the industry trade and promoters group NVM Express, Inc. (e.g. www.nvmexpress.org). NVM Express, Inc. is the sole owner of the NVM Express specifications and trademarks.

    NVM Express Organization
    Image used with permission of NVM Express, Inc.

    Visit the NVM Express industry promoters site here to learn more about their members, news, events, product information, software driver downloads, and other useful NVMe resources content.

     

    The NVMe Place resources and NVM including SCM, PMEM, Flash

NVMe place covers Non Volatile Memory (NVM) including NAND flash, storage class memories (SCM) and persistent memories (PM), which are storage memory mediums, while NVM Express (NVMe) is an interface for accessing NVM. This NVMe resources page is a companion to The SSD Place, which has a broader Non Volatile Memory (NVM) focus including flash among other SSD topics. NVMe is a new server storage I/O access method and protocol for fast access to NVM based storage and memory technologies. NVMe is an alternative to existing block based server storage I/O access protocols such as AHCI/SATA and SCSI/SAS commonly used for accessing Hard Disk Drives (HDD) along with SSD among other things.

    Server Storage I/O NVMe PCIe SAS SATA AHCI
    Comparing AHCI/SATA, SCSI/SAS and NVMe all of which can coexist to address different needs.

Leveraging the standard PCIe hardware interface, NVMe based devices (that have an NVMe controller) can be accessed via various operating systems (and hypervisors such as VMware ESXi) with either in-the-box drivers or optional third-party device drivers. Devices that support NVMe can be packaged in a 2.5″ drive format that uses a converged 8637/8639 connector (e.g. PCIe x4), coexisting with SAS and SATA devices, as well as add-in card (AIC) PCIe cards supporting x4, x8 and other implementations. Initially, NVMe is being positioned as a back-end interface for servers (or storage systems) to access fast flash and other NVM based devices.

    NVMe as back-end storage
    NVMe as a “back-end” I/O interface for NVM storage media

    NVMe as front-end server storage I/O interface
    NVMe as a “front-end” interface for servers or storage systems/appliances

NVMe has also been shown to work over low latency, high-speed RDMA based network interfaces including RoCE (RDMA over Converged Ethernet) and InfiniBand (read more here, here and here involving Mangstor, Mellanox and PMC among others). What this means is that, like SCSI based SAS which can be both a back-end drive (HDD, SSD, etc.) access protocol and interface, NVMe can be used as a back-end and also as a front-end server-to-storage interface, similar to how Fibre Channel SCSI_Protocol (aka FCP), SCSI based iSCSI and SCSI RDMA Protocol via InfiniBand (among others) are used.

    NVMe features

    Main features of NVMe include among others:

• Lower latency due to improved drivers and increased queues (and queue sizes); see the queue model sketch after this list
• Lower CPU usage to handle larger numbers of I/Os (more CPU available for useful work)
• Higher I/O activity rates (IOPs) to boost productivity and unlock the value of fast flash and NVM
• Bandwidth improvements leveraging various fast PCIe interfaces and available lanes
• Dual-pathing of devices like what is available with dual-path SAS devices
• Unlocking the value of more cores per processor socket and software threads (productivity)
• Various packaging options, deployment scenarios and configuration options
• Appears as a standard storage device on most operating systems
• Plug-and-play with in-box drivers on many popular operating systems and hypervisors
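To make the queue point concrete, here is a small comparison of the queueing models as commonly described for AHCI/SATA versus NVMe; these are specification-level maximums, not what every device or driver actually exposes:

```python
# Commonly cited protocol-level queue limits for AHCI/SATA vs NVMe.
# Specification maximums only; real devices and drivers expose fewer queues.
queue_models = {
    "AHCI/SATA": {"queues": 1,      "commands_per_queue": 32},
    "NVMe":      {"queues": 65_535, "commands_per_queue": 65_536},
}

for name, q in queue_models.items():
    outstanding = q["queues"] * q["commands_per_queue"]
    print(f"{name:10s}: {q['queues']:>6,} queue(s) x {q['commands_per_queue']:>6,} "
          f"commands = {outstanding:,} outstanding commands max")
```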

    Shared external PCIe using NVMe
    NVMe and shared PCIe (e.g. shared PCIe flash DAS)

    NVMe related content and links

    The following are some of my tips, articles, blog posts, presentations and other content, along with material from others pertaining to NVMe. Keep in mind that the question should not be if NVMe is in your future, rather when, where, with what, from whom and how much of it will be used as well as how it will be used.

    • How to Prepare for the NVMe Server Storage I/O Wave (Via Micron.com)
    • Why NVMe Should Be in Your Data Center (Via Micron.com)
    • NVMe U2 (8639) vs. M2 interfaces (Via Gamersnexus)
    • Enmotus FuzeDrive MicroTiering (StorageIO Lab Report)
    • EMC DSSD D5 Rack Scale Direct Attached Shared SSD All Flash Array Part I (Via StorageIOBlog)
    • Part II – EMC DSSD D5 Direct Attached Shared AFA (Via StorageIOBlog)
    • NAND, DRAM, SAS/SCSI & SATA/AHCI: Not Dead, Yet! (Via EnterpriseStorageForum)
    • Non Volatile Memory (NVM), NVMe, Flash Memory Summit and SSD updates (Via StorageIOblog)
    • Microsoft and Intel showcase Storage Spaces Direct with NVM Express at IDF ’15 (Via TechNet)
    • MNVM Express solutions (Via SuperMicro)
    • Gaining Server Storage I/O Insight into Microsoft Windows Server 2016 (Via StorageIOblog)
    • PMC-Sierra Scales Storage with PCIe, NVMe (Via EEtimes)
    • RoCE updates among other items (Via InfiniBand Trade Association (IBTA) December Newsletter)
    • NVMe: The Golden Ticket for Faster Flash Storage? (Via EnterpriseStorageForum)
    • What should I consider when using SSD cloud? (Via SearchCloudStorage)
    • MSP CMG, Sept. 2014 Presentation (Flash back to reality – Myths and Realities – Flash and SSD Industry trends perspectives plus benchmarking tips)– PDF
    • Selecting Storage: Start With Requirements (Via NetworkComputing)
    • PMC Announces Flashtec NVMe SSD NVMe2106, NVMe2032 Controllers With LDPC (Via TomsITpro)
    • Exclusive: If Intel and Micron’s “Xpoint” is 3D Phase Change Memory, Boy Did They Patent It (Via Dailytech)
    • Intel & Micron 3D XPoint memory — is it just CBRAM hyped up? Curation of various posts (Via Computerworld)
    • How many IOPS can a HDD, HHDD or SSD do (Part I)?
    • How many IOPS can a HDD, HHDD or SSD do with VMware? (Part II)
    • I/O Performance Issues and Impacts on Time-Sensitive Applications (Via CMG)
    • Via EnterpriseStorageForum: 5 Hot Storage Technologies to Watch
    • Via EnterpriseStorageForum: 10-Year Review of Data Storage

Non-Volatile Memory (NVM) Express (NVMe) continues to evolve as a technology for enabling and improving server storage I/O for NVM, including NAND flash SSD storage. NVMe streamlines performance, enabling more work to be done (e.g. IOPs) and more data to be moved (bandwidth) at a lower response time while using less CPU.

    NVMe and SATA flash SSD performance

The above figure is a quick look comparing NAND flash SSD being accessed via SATA III (6Gbps) on the left and NVMe (x4) on the right. As with any server storage I/O performance comparison there are many variables, so take the results with a grain of salt. While IOPs and bandwidth are often discussed, keep in mind that the new protocol, drivers and device controllers with NVMe streamline I/O so that less CPU is needed.
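For context on why the two sides of that comparison differ, here is a rough sketch of the interface ceilings involved; these are theoretical link rates after encoding overhead, not measured drive results:

```python
# Theoretical usable link bandwidth: SATA III vs NVMe over PCIe Gen 3 x4.
# SATA uses 8b/10b encoding, PCIe Gen 3 uses 128b/130b encoding.
sata3_mbs = 6_000 * (8 / 10) / 8              # 6 Gbps  -> ~600 MB/s usable
gen3_lane_mbs = 8_000 * (128 / 130) / 8       # 8 GT/s  -> ~985 MB/s per lane
nvme_x4_mbs = 4 * gen3_lane_mbs

print(f"SATA III     ~{sata3_mbs:.0f} MB/s")
print(f"NVMe Gen3 x4 ~{nvme_x4_mbs:.0f} MB/s")
# The protocol and driver path also matter: NVMe needs fewer CPU cycles per I/O.
```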

    Additional NVMe Resources

    Also check out the Server StorageIO companion micro sites landing pages including thessdplace.com (SSD focus), data protection diaries (backup, BC/DR/HA and related topics), cloud and object storage, and server storage I/O performance and benchmarking here.

If you are into the real bits and bytes details, such as device driver level content, check out the Linux NVMe reflector forum. The linux-nvme forum is a good source, if you are a developer, to stay up on what is happening in and around device drivers and associated topics.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    Disclaimer

    Disclaimer: Please note that this site is independent of the industry trade and promoters group NVM Express, Inc. (e.g. www.nvmexpress.org). NVM Express, Inc. is the sole owner of the NVM Express specifications and trademarks. Check out the NVM Express industry promoters site here to learn more about their members, news, events, product information, software driver downloads, and other useful NVMe resources content.

    NVM Express Organization
    Image used with permission of NVM Express, Inc.

    Wrap Up

    Watch for updates with more content, links and NVMe resources to be added here soon.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Non Volatile Memory (NVM), NVMe, Flash Memory Summit and SSD updates

    Storage I/O trends

I attended the Flash Memory Summit in Santa Clara CA last week and, not surprisingly, there were many announcements about Non-Volatile Memory (NVM) along with related enabling technologies. Some of these announcements were component based, intended for original equipment manufacturers (OEMs) ranging from startups to established vendors, systems integrators (SIs) and value added resellers (VARs), while others were more customer solution focused. From a customer solution focus, some of the technologies were consumer oriented while others were for business and some for cloud scale service providers.

    Recent NVM, NVMe and Flash SSD news

    A sampling of some recent NVM, NVMe and Flash related news includes among others:

    • PMC Announces Flashtec NVMe SSD NVMe2106, NVMe2032 Controllers (Via TomsITpro)
    • New SATA SSD powers elastic cloud agility for CSPs (Via Cbronline)
    • Toshiba Solid-State Drive Family Features PCIe Technology (Via Eweek)
    • SanDisk aims CloudSpeed Ultra SSD at cloud providers (Via ITwire)
    • Everspin & Aupera show all-MRAM Storage Module in M.2 Form Factor (Via BusinessWire)
    • Intel and Micron unveil new 3D XPoint Non Volatile Memory (NVM) for servers and storage (part I, part II and part III)
    • PMC-Sierra Scales Storage with PCIe, NVMe (Via EEtimes)
    • Seagate Grows Its Nytro Enterprise Flash Storage Line (Via InfoStor)
    • New SAS Solid State Drive First Product From Seagate Micron Alliance (Via Seagate)
    • Wow, Samsung’s New 16 Terabyte SSD Is the World’s Largest Hard Drive (Via Gizmodo)
    • Samsung ups the SSD ante with faster, higher capacity drives (Via ITworld)

    NVMe primer

    Via Intel History of Memory
    Via Intel: Click above image to view history of memory via Intel site

NVM includes technologies such as NAND flash commonly used in Solid State Devices (SSDs) storage today, as well as in USB thumb drives, mobile and hand-held devices among many other uses. NVM spans servers, storage, I/O devices along with mobile and handheld among many other technologies. In addition to NAND flash, other forms of NVM include Non Volatile Random Access Memory (NVRAM), Read Only Memory (ROM) along with some emerging new technologies including the recently announced Intel and Micron 3D XPoint among others.

    Server Storage I/O access and NVM
    Server Storage I/O memory (and storage) hierarchy

Keep in mind that memory is storage and storage is persistent memory, and that there are different classes, categories and tiers of memory and storage, as shown above, to meet various performance, availability, capacity and economic requirements. Besides NVM ranging from flash to NVRAM to emerging 3D XPoint among others, another popular topic that is gaining momentum is NVM Express (NVMe). NVMe (more material here at www.thenvmeplace.com) is a new server storage I/O access method and protocol for fast access to NVM based products. NVMe is an alternative to existing block based server storage I/O access protocols such as AHCI/SATA and SCSI/SAS commonly used for accessing Hard Disk Drives (HDD) along with SSD among other things.
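As a rough illustration of that hierarchy, the sketch below lists order-of-magnitude access times for the tiers discussed here; these are ballpark figures for illustration only, not vendor specifications:

```python
# Ballpark access-time orders of magnitude for common memory/storage tiers.
# Illustrative only; actual numbers vary widely by device, interface and workload.
tiers_ns = {
    "DRAM":                100,           # ~100 nanoseconds
    "NVMe NAND flash SSD": 100_000,       # ~100 microseconds
    "SATA NAND flash SSD": 200_000,       # ~hundreds of microseconds
    "HDD":                 10_000_000,    # ~10 milliseconds
}

for tier, ns in tiers_ns.items():
    print(f"{tier:22s} ~{ns / 1_000:>10,.1f} microseconds")
```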

    Server Storage I/O NVMe PCIe SAS SATA AHCI
    Comparing AHCI/SATA, SCSI/SAS and NVMe all of which can coexist to address different needs.

Leveraging the common PCIe hardware interface, NVMe based devices (that have an NVMe controller) can be accessed via various operating systems (and hypervisors such as VMware ESXi) with either in-the-box drivers or optional third-party device drivers. Devices that support NVMe can be packaged in a 2.5" drive format that uses a converged 8637/8639 connector (e.g. PCIe x4), coexisting with SAS and SATA devices, as well as add-in card (AIC) PCIe cards supporting x4, x8 and other implementations. Initially, NVMe is being positioned as a back-end interface for servers (or storage systems) to access fast flash and other NVM based devices.

    NVMe as back-end storage
    NVMe as a "back-end" I/O interface in a server or storage system accessing NVM storage/media devices

    NVMe as front-end server storage I/O interface
    NVMe as a “front-end” interface for servers (or storage systems/appliances) to use NVMe based storage systems

NVMe has also been shown to work over low latency, high-speed RDMA based network interfaces including RoCE (RDMA over Converged Ethernet) and InfiniBand (read more here, here and here involving Mangstor, Mellanox and PMC among others). What this means is that, like SCSI based SAS which can be both a back-end drive (HDD, SSD, etc.) access protocol and interface, NVMe, in addition to being used for the back-end, can also be used as a front-end server-to-storage interface, similar to how Fibre Channel SCSI_Protocol (aka FCP), SCSI based iSCSI and SCSI RDMA Protocol via InfiniBand (among others) are used.

    Shared external PCIe using NVMe
    NVMe and shared PCIe

    NVMe features

    Main features of NVMe include among others:

• Lower latency due to improved drivers and increased queues (and queue sizes)
• Lower CPU usage to handle larger numbers of I/Os (more CPU available for useful work)
• Higher I/O activity rates (IOPs) to boost productivity and unlock the value of fast flash and NVM
• Bandwidth improvements leveraging various fast PCIe interfaces and available lanes
• Dual-pathing of devices like what is available with dual-path SAS devices
• Unlocking the value of more cores per processor socket and software threads (productivity)
• Various packaging options, deployment scenarios and configuration options
• Appears as a standard storage device on most operating systems
• Plug-and-play with in-box drivers on many popular operating systems and hypervisors

    Watch for more about NVMe as it continues to gain in both industry adoption and deployment as well as customer adoption and deployment.

    Where to read, watch and learn more

    • NVMe: The Golden Ticket for Faster Flash Storage? (Via EnterpriseStorageForum)
    • What should I consider when using SSD cloud? (Via SearchCloudStorage)
    • MSP CMG, September 2014 Presentation (Flash back to reality – Myths and Realities Flash and SSD Industry trends perspectives plus benchmarking tips) – PDF
    • Selecting Storage: Start With Requirements (Via NetworkComputing)
    • Spot The Newest & Best Server Trends (Via Processor)
    • Intel and Micron unveil new 3D XPoint Non Volatile Memory (NVM) for servers and storage (part I, part II and part III)
    • Market ripe for embedded flash storage as prices drop (Via Powermore (Dell))
    • Continue reading more about NVM, NVMe, NAND flash, SSD Server and storage I/O related topics at www.thessdplace.com as well as about I/O performance, monitoring and benchmarking tools at www.storageperformance.us.

    Storage I/O trends

    What this all means and wrap up

    The question is not if NVM is in your future, it is! Instead the questions are what type of NVM including NAND flash among other mediums will be deployed where, using what type of packaging or solutions (drives, cards, systems, appliances, cloud) for what role (as storage, primary memory, persistent cache) along with how much among others. For some environments the solution is already, or will be All NVM Arrays (ANA) or All Flash Arrays (AFA) or All SSD Arrays (ASA) while for others the home run will be hybrid based solutions that work for you, fitting in and adapting to your environment as it changes.

Also keep in mind that a little bit of fast memory, including NVM based flash among others, in the right place can have a big benefit. My experience using flash-enabled NVMe devices on Windows and Linux systems is that you can see lower response times at higher IOPs, along with lower CPU consumption, particularly when compared to 6Gbps SATA. Likewise bandwidth can easily be pushed to the limits of the NVMe device as well as the PCIe interface being used, such as x4 or x8, depending on implementation. That is also a warning and something to watch out for when comparing apples to oranges: while NVMe uses PCIe, understand when looking at different results whether those are for x4, x8 or faster PCIe, as the mere presence of PCIe does not mean you are running at full potential.

Keep an eye on NVMe as a new high-speed, low-latency server storage I/O access protocol for unlocking the full performance capabilities of fast NVM based storage as well as leveraging the multiple cores in today's fast processors. Does this mean AHCI/SATA or SCSI/SAS are now dead? Some will claim that, however at least near-term for the next few years (if not longer), those interfaces will continue to be used where they make sense, as well as where they can save dollars, specifically for cost sensitive, high-capacity environments that do not need the full performance of NVMe just yet.

As for the Flash Memory Summit event in Santa Clara, that was a good day with time well spent in briefings, meetings, demos and ad hoc discussions on the expo floor.

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Intel Micron 3D XPoint server storage NVM SCM PM SSD

    3D XPoint server storage class memory SCM


    Storage I/O trends

    Updated 1/31/2018

    This is the second of a three-part series on the recent Intel and Micron 3D XPoint server storage memory announcement. Read Part I here and Part III here.

    Is this 3D XPoint marketing, manufacturing or material technology?

You can't have a successful manufactured material technology without some marketing; likewise, marketing without some manufactured material would be manufactured marketing. In the case of the 3D XPoint announcement launch, there was real technology shown, granted it was only a wafer and dies as opposed to an actual DDR4 DIMM, PCIe Add In Card (AIC) or drive form factor Solid State Device (SSD) product. On the other hand, on a relative comparison basis, even though there is marketing collateral available to learn more from, this was far from an over-the-big-top, made-for-TV or web circus event, which can be a good thing.


    Wafer unveiled containing 3D XPoint 128 Gb dies

    Who will get access to 3D XPoint?

Initially, 3D XPoint production capacity will be used by the two companies to offer early samples to their customers later this year, with general production slated for 2016, meaning early real customer deployed products starting sometime in 2016.

    Is it NAND or NOT?

    3D XPoint is not NAND flash, it is also not NVRAM or DRAM, it’s a new class of NVM that can be used for server class main memory with persistency, or as persistent data storage among other uses (cell phones, automobiles, appliances and other electronics). In addition, 3D XPoint is more durable with a longer useful life for writing and storing data vs. NAND flash.

    Why is 3D XPoint important?

    As mentioned during the Intel and Micron announcement, there have only been seven major memory technologies introduced since the transistor back in 1947, granted there have been many variations along with generational enhancements of those. Thus 3D XPoint is being positioned by Intel and Micron as the eighth memory class joining its predecessors many of which continue to be used today in various roles.


    Major memory classes or categories timeline

    In addition to the above memory classes or categories timeline, the following shows in more detail various memory categories (click on the image below to get access to the Intel interactive infographic).

    Intel History of Memory Infographic
    Via: https://intelsalestraining.com/memory timeline/ (Click on image to view)

    What capacity size is 3D XPoint?

Initially the 3D XPoint technology is available as a 2-layer, 128 Gbit (cell) per die capacity. Keep in mind that there are 8 bits to a byte, resulting in 16 GBytes of capacity per chip initially. With density improvements, as well as increased stacking of layers, the number of cells or bits per die (e.g. what makes up a chip) should improve, and most implementations will have multiple chips in some type of configuration.
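The die-to-chip arithmetic in that paragraph is a simple unit conversion, shown below using the 128 Gb per die figure from the announcement:

```python
# 3D XPoint initial die capacity: 128 Gbit per die, 8 bits per byte.
bits_per_die = 128 * 10**9          # 128 Gbit (decimal giga, as typically marketed)
bytes_per_die = bits_per_die // 8

print(f"128 Gbit per die = {bytes_per_die / 10**9:.0f} GBytes per chip")  # 16 GB
```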

    What will 3D XPoint cost?

During the 3D XPoint launch webinar Intel and Micron hinted that initial pricing will be between current DRAM and NAND flash on a per cell or bit basis; however, real pricing and costs will vary depending on how it is packaged for use. For example, whether it is placed on a DDR4 (or different type of) DIMM, on a PCIe Add In Card (AIC) or in a drive form factor SSD, among other options, will affect the real price. Likewise, as with other memories and storage mediums, as production yields and volumes increase, along with denser designs, the cost per usable cell or bit can be expected to further improve.

    Where to read, watch and learn more

    Storage I/O trends

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

DRAM, which has been around for some time, has plenty of life left for many applications, as does NAND flash including new 3D NAND, vNAND and other variations. For the next several years there will be co-existence between new and old NVM and DRAM among other memory technologies, including 3D XPoint. Read more in this series including Part I here and Part III here.

    Disclosure: Micron and Intel have been direct and/or indirect clients in the past via third-parties and partners, also I have bought and use some of their technologies direct and/or in-direct via their partners.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Intel Micron unveil new 3D XPoint Non Volatile Memory NVM for servers storage

    3D XPoint NVM persistent memory PM storage class memory SCM


    Storage I/O trends

    Updated 1/31/2018

This is the first of a three-part series on the Intel and Micron unveiling of new 3D XPoint Non Volatile Memory (NVM) for servers and storage. Read Part II here and Part III here.

In a webcast the other day, Intel and Micron announced new 3D XPoint non-volatile memory (NVM) that can be used for primary main memory (e.g. what's in computers, servers, laptops, tablets and many other things) in place of Dynamic Random Access Memory (DRAM), for persistent storage faster than today's NAND flash-based solid state devices (SSD), not to mention future hybrid usage scenarios. Note that this announcement, while having the common term 3D in it, is different from the earlier Intel and Micron announcement about 3D NAND flash (read more about that here).

    Twitter hash tag #3DXpoint

    The big picture, why this type of NVM technology is needed

    Server and Storage I/O trends

    • Memory is storage and storage is persistent memory
• No such thing as a data or information recession; more data being created, processed and stored
    • Increased demand is also driving density along with convergence across server storage I/O resources
    • Larger amounts of data needing to be processed faster (large amounts of little data and big fast data)
    • Fast applications need more and faster processors, memory along with I/O interfaces
    • The best server or storage I/O is the one you do not need to do
    • The second best I/O is one with least impact or overhead
    • Data needs to be close to processing, processing needs to be close to the data (locality of reference)


    Server Storage I/O memory hardware and software hierarchy along with technology tiers

    What did Intel and Micron announce?

    Intel SVP and General Manager Non-Volatile Memory solutions group Robert Crooke (Left) and Micron CEO D. Mark Durcan did the joint announcement presentation of 3D XPoint (webinar here). What was announced is the 3D XPoint technology jointly developed and manufactured by Intel and Micron which is a new form or category of NVM that can be used for both primary memory in servers, laptops, other computers among other uses, as well as for persistent data storage.


    Robert Crooke (Left) and Mark Durcan (Right)

    Summary of 3D XPoint announcement

    • New category of NVM memory for servers and storage
    • Joint development and manufacturing by Intel and Micron in Utah
    • Non volatile so can be used for storage or persistent server main memory
    • Allows NVM to scale with data, storage and processors performance
    • Leverages capabilities of both Intel and Micron who have collaborated in the past
• Performance: Intel and Micron claim up to 1000x faster vs. NAND flash (see the quick illustration after this list)
• Availability: persistent NVM compared to DRAM, with better durability (life span) vs. NAND flash
• Capacity: densities about 10x better vs. traditional DRAM
• Economics: cost per bit between DRAM and NAND (depending on packaging of resulting products)
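To put the performance claim in rough context, NAND flash media access times are commonly discussed in the tens to hundreds of microseconds; the sketch below simply applies the "1000x" claim to such a figure to show the order of magnitude implied. This is my own illustration of the claim, not a measured result:

```python
# Order-of-magnitude illustration of the "up to 1000x faster vs. NAND flash" claim.
# Assumes a ~100 microsecond NAND media access time; not a measured result.
nand_access_us = 100.0
claimed_speedup = 1000

implied_ns = nand_access_us * 1_000 / claimed_speedup
print(f"~{nand_access_us:.0f} us NAND / {claimed_speedup}x ≈ {implied_ns:.0f} ns")
# That lands in DRAM-like territory, which is why 3D XPoint is positioned
# between DRAM and NAND flash in the memory and storage hierarchy.
```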

    What applications and products is 3D XPoint suited for?

    In general, 3D XPoint should be able to be used for many of the same applications and associated products that current DRAM and NAND flash-based storage memories are used for. These range from IT and cloud or managed service provider data centers based applications and services, as well as consumer focused among many others.


    3D XPoint enabling various applications

In general, applications or usage scenarios along with supporting products that can benefit from 3D XPoint include, among others, applications that need larger amounts of main memory in a denser footprint, such as in-memory databases, little and big data analytics, gaming, wave form analysis for security, copyright or other detection analysis, life sciences, high performance compute and high-productivity compute, energy, video and content serving among many others.

    In addition, applications that need persistent main memory for resiliency, or to cut delays and impacts for planned or un-planned maintenance or having to wait for memories and caches to be warmed or re-populated after a server boot (or re-boot). 3D XPoint will also be useful for those applications that need faster read and write performance compared to current generations NAND flash for data storage. This means both existing and emerging applications as well as some that do not yet exist will benefit from 3D XPoint over time, like how today’s applications and others have benefited from DRAM used in Dual Inline Memory Module (DIMM) and NAND flash advances over the past several decades.

    Where to read, watch and learn more

    Storage I/O trends

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    First, keep in mind that this is very early in the 3D XPoint technology evolution life-cycle and both DRAM and NAND flash will not be dead at least near term. Keep in mind that NAND flash appeared back in 1989 and only over the past several years has finally hit its mainstream adoption stride with plenty of market upside left. Continue reading Part II here and Part III here of this three-part series on Intel and Micron 3D XPoint along with more analysis and commentary.

    Disclosure: Micron and Intel have been direct and/or indirect clients in the past via third-parties and partners, also I have bought and use some of their technologies direct and/or in-direct via their partners.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    EMCworld 2015 How Do You Want Your Storage Wrapped?

    Server Storage I/O trends

    EMCworld 2015 How Do You Want Your Storage Wrapped?

    Back in early May I was invited by EMC to attend EMCworld 2015 which included both the public sessions, as well as several NDA based discussions. Keep in mind that there is the known, there is the unknown (or assumed or speculated) and in between there are NDA’s, nuff said on that. EMC covered my hotel and registration costs to attend the event in Las Vegas (thanks EMC, that’s a disclosure btw ;) and here is a synopsis of various EMCworld 2015 announcements.

    What EMC announced

• VMAX3 enhancements to the EMC enterprise flagship storage platform to keep it relevant for traditional legacy workloads as well as for converged, scale-out, cloud, virtual and software defined environments.
    • VNX 3200 entry-level All Flash Array (AFA) flash SSD system starting at $25,000 USD for a 3TB unified platform with full data services found in other VNX products.
• vVNX aka Virtual VNX aka "project liberty", which is a community (e.g. free) software version of the VNX. vVNX is a Virtual Storage Appliance (VSA) that you download and run on a VMware platform. Learn more and download here. Note the install will do a CPU type check, so forget about trying to run it on an Intel NUC or similar; I tried just because I could, and the install will protect you from doing such things.
    • Various data protection related items including new Datadomain platforms as well as software updates and integration with other EMC platforms (storage systems).
    • All Flash Array (AFA) XtremIO 4.0 enhancements including larger clusters, larger nodes to boost performance, capacity and availability, along with copy service updates among others improvements.
    • Preview of DSSD shared (inside a rack) external flash Solid State Device (SSD) including more details. While much of DSSD is still under NDA, EMC did provide more public details at EMCworld. Between what was displayed and announced publicly at EMCworld as well as what can be found via Google (or other searches) you can piece together more of the DSSD story. What is known publicly today is that DSSD leverages the new Non-Volatile Memory express (NVMe) access protocol built upon underlying PCIe technology. More on DSSD in future discussions,if you have not done so, get an NDA deep dive briefing on it from EMC.
    • ScaleIO is now available via a free download here including both Windows and Linux clients as well as instructions for those operating systems as well as VMware.
    • ViPR can also be downloaded here for free (has been previously available) from here as well as it has been placed into open source by EMC.

    What EMC announced since EMCworld 2015

    • Acquisition of cloud services (and software tools) vendor Virtustream for $1.2B adding to the federation cloud services portfolio (companion to VMware vCloud Air).
• Release of ECS 2.0 including a free download here. This new version of ECS (Elastic Cloud Storage) can be used independent of the ViPR controller, or in conjunction with ViPR. In addition, ECS now has about 80% of the functionality of the Centera object storage platform. The remaining 20% (mainly regulatory compliance governance) of Centera functionality will be added to ECS in the future, providing a migration path for Centera customers. In case you are wondering what EMC does with Centera, Atmos, ViPR and now ECS, the answer is that ECS can work with or without ViPR, and the functionality of Centera and Atmos is being rolled into ECS. ECS, as a refresher, is software that transforms general purpose industry standard servers with direct storage into a scale-out HDFS and object storage solution.
• Check out EMCcode including S3motion, which I use and have reviewed here. Also check out EMCcode Rex-Ray which, if you are into Docker containers, should be of interest; I know I’m interested in it.

    Server Storage I/O trends

    What this all means and wrap-up

There was no single major explosive announcement; however, the sum of all the announcements together should not be overshadowed by the big tent, made-for-TV (or web) productions and entertainment. What EMC announced effectively asks: how would you like, want and need your storage and associated data services, along with management, wrapped?

    tin wrapped software

Speaking of wrapping, do you want your software-defined storage management and storage wrapped in a legacy turnkey solution such as VMAX3, VNX or Isilon? Do you want or need it to be hybrid or all-flash, converged and unified, block, file or object?

    software wrapped storage

    Or do you need or want the software defined storage management and storage to be "shrink wrapped" as a download so you can deploy on your own hardware "tin wrapped" or as a VSA "virtual wrapped" or cloud wrapped? Do you need or want the software defined storage management and storage to leverage anybody’s hardware while being open source?

    server storage software wrapping

How do you need or want your storage to be wrapped to fit your specific needs? That, IMHO, was the essence of what EMC announced at EMCworld 2015; granted, the motorcycles and other production entertainment were engaging as well as educational.

    Ok, nuff said for now

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review

Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review

    This is the first post of a two part series, read the second post here.

    Earlier this year I had the opportunity to test drive some Seagate 1200 12Gbs Enterprise SAS SSD’s as a follow-up to some earlier activity trying their Enterprise TurboBoost Drives. Disclosure: Seagate has been a StorageIO client and was also the sponsor of this white paper and associated proof-points mentioned in this post.

The question to ask yourself is not if flash Solid State Device (SSD) technologies are in your future. Instead, the questions are when, where, using what, how to configure and related themes. SSD, including traditional DRAM and NAND flash-based technologies, are like real estate where location matters; however, there are different types of properties to meet various needs. This means leveraging different types of NAND flash SSD technologies in different locations in a complementary and cooperative, aka hybrid, way. For example, NAND flash SSD as part of an enterprise tiered storage strategy can be implemented server-side using PCIe cards, SAS and SATA drives as targets or as cache along with software, as well as leveraging SSD devices in storage systems or appliances.

    Seagate 1200 SSD
    Seagate 1200 Enterprise SAS 12Gbs SSD Image via Seagate.com

Another place where NAND flash can be found, complementing SSD devices, is in so-called Solid State Hybrid Drives (SSHD) or Hybrid Hard Disk Drives (HHDD), including a new generation that accelerates writes as well as reads, such as those Seagate refers to as Enterprise TurboBoost drives (view the companion StorageIO Lab TurboBoost review white paper here). Read more about TurboBoost here and here.

    The best server and storage I/O is the one you do not have to do

Keep in mind that the best server or storage I/O is the one that you do not have to do, with the second best being the one with the least overhead, resolved as close to the processor (compute) as possible or practical. The following figure shows that the best place to resolve server and storage I/O is as close to the compute processor as possible; however, only a finite amount of memory and storage can be located there. This is where the server memory and storage I/O hierarchy comes into play, which is also often thought of in the context of tiered storage balancing performance and availability with cost and architectural limits.

Also shown is locality of reference, which refers to how close data is to where it is being used and includes cache effectiveness or buffering. Hence a small amount of flash and DRAM cache in the right location can have a large benefit. If you can afford it, install as much DRAM along with flash storage as possible; however, if you are like most organizations with a finite budget yet real server and storage I/O challenges, then deploy a tiered flash storage strategy.
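
To put some rough numbers on cache effectiveness and locality of reference, below is a minimal Python sketch (using assumed, not measured, latencies for DRAM, flash SSD and a 7.2K HDD) showing how the hit rate of a relatively small cache in the right location drives the effective access time seen by applications.

    # Hypothetical latencies (assumptions for illustration, not measured values)
    LATENCY_US = {
        "dram": 0.1,         # assume ~100ns DRAM access
        "flash_ssd": 100.0,  # assume ~100us flash SSD random read
        "hdd_7_2k": 8000.0,  # assume ~8ms 7.2K RPM HDD random read
    }

    def effective_latency_us(hit_rate, cache_tier, backing_tier):
        """Weighted average access time for a read cache in front of a backing device."""
        return hit_rate * LATENCY_US[cache_tier] + (1.0 - hit_rate) * LATENCY_US[backing_tier]

    for hit_rate in (0.0, 0.5, 0.8, 0.9, 0.95):
        eff = effective_latency_us(hit_rate, "flash_ssd", "hdd_7_2k")
        print("flash cache hit rate {:4.0%} -> effective read latency {:8.1f} us".format(hit_rate, eff))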

    flash cache locality of reference
    Server memory storage I/O hierarchy, locality of reference

    Seagate 1200 12Gbs Enterprise SAS SSD’s

Back to the Seagate 1200 12Gbs Enterprise SAS SSD, which is covered in this StorageIO Industry Trends Perspective thought leadership white paper. The focus of the white paper is to look at how the Seagate 1200 Enterprise-class SSDs and 12Gbps SAS address current and next-generation tiered storage for virtual, cloud, traditional, Little and Big Data infrastructure environments.

Seagate 1200 Enterprise SSD

    This includes providing proof points running various workloads including Database TPC-B, TPC-E and Microsoft Exchange in the StorageIO Labs along with cache software comparing SSD, SSHD and different HDD’s including 12Gbs SAS 6TB near-line high-capacity drives.

    Seagate 1200 Enterprise SSD Proof Points

The proof points in this white paper are from an application focus perspective, representing more of an end-to-end real-world situation. While they are not included in this white paper, StorageIO has run traditional storage building-block focused workloads, which can be found at StorageIOblog (Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?). These include tools such as Iometer, iorate and vdbench among others, for various I/O sizes, mixed, random, sequential, reads and writes, along with “hot-band” across different numbers of threads (concurrent users). “Hot-band” is part of the SNIA Emerald energy effectiveness metrics for looking at sustained storage performance using tools such as vdbench. Read more about other various server and storage I/O benchmarking tools and techniques here.
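
The proof-points themselves used the tools mentioned above; however, as a hedged illustration of the building-block idea (issue random reads of a given size against a file or device for a fixed interval, then report IOPS and average latency), here is a minimal single-threaded Python sketch. The file path, I/O size and duration are assumptions, and without platform-specific O_DIRECT handling the OS page cache will inflate the results, so treat it as a concept demo rather than a benchmark tool.

    import os
    import random
    import time

    def random_read_sketch(path, io_size=8192, seconds=10.0):
        """Illustrative single-threaded random-read loop (Unix-style os.pread)."""
        fd = os.open(path, os.O_RDONLY)
        try:
            device_size = os.lseek(fd, 0, os.SEEK_END)
            blocks = max(1, device_size // io_size)
            deadline = time.time() + seconds
            ios, total_latency = 0, 0.0
            while time.time() < deadline:
                offset = random.randrange(blocks) * io_size
                start = time.perf_counter()
                os.pread(fd, io_size, offset)  # one random read of io_size bytes
                total_latency += time.perf_counter() - start
                ios += 1
            print("{:.0f} IOPS, {:.3f} ms average latency".format(
                ios / seconds, (total_latency / ios) * 1000.0))
        finally:
            os.close(fd)

    # Example call with a hypothetical test file:
    # random_read_sketch("testfile.bin", io_size=8192, seconds=10)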

For the following series of proof-points (TPC-B, TPC-E and Exchange), a system under test (SUT) consisted of a physical server (described with the proof-points) configured with VMware ESXi along with guest virtual machines (VMs) configured to do the storage I/O workload. Other servers were used in the case of the TPC workloads as application transaction requesters to drive the SQL Server database and resulting server storage I/O workload. VMware was used in the proof-points to reflect a common industry trend of using virtual server infrastructures (VSI) supporting applications including database, email among others. For the proof-point scenarios, the SUT along with the storage system device under test were dedicated to that scenario (e.g. no other workload running) unless otherwise noted.

    Server Storage I/O config
    Server Storage I/O configuration for proof-points

    Microsoft Exchange Email proof-point configuration

For this proof-point, Microsoft Jetstress Exchange performance workloads were run with the Exchange Database (EDB file) placed on each of the different devices under test, with various metrics shown including activity rates and response times for reads as well as writes. For the Exchange testing, the EDB was placed on the device being tested while its log files were placed on a separate Seagate 400GB Enterprise 12Gbps SAS SSD.

Test configuration: Seagate 400GB 12000 2.5” SSD (ST400FM0073) 12Gbps SAS, 600GB 2.5” Enterprise 15K with TurboBoost™ (ST600MX) 6Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) 6Gbps SAS, Seagate Enterprise Capacity Nearline (ST6000NM0014) 6TB 3.5” 7.2K RPM HDD 12Gbps SAS, and a 3TB 7.2K SATA HDD. The email server was hosted as a guest on VMware vSphere/ESXi V5.5 running Microsoft SBS2011 Service Pack 1 64-bit. The guest VM resided on a separate SSD-based datastore from the devices being tested; the physical machine (host) had 14GB DRAM, a quad-core (4 x 3.192GHz) Intel E3-1225 v3, and LSI 9300 series 12Gbps SAS adapters in a PCIe Gen 3 slot, running Jetstress 2010. All devices being tested were Raw Device Mapped (RDM) and held the EDB. Log file IOPs were handled via a separate SSD device, also persistent (no delayed writes). The EDB was 300GB and the workload ran for 8 hours.

    Microsoft Exchange VMware SSD performance
    Microsoft Exchange proof-points comparing various storage devices

    TPC-B (Database, Data Warehouse, Batch updates) proof-point configuration

SSDs are a good fit for transactional database activity with reads and writes, as well as for query-based decision support systems (DSS), data warehouse and big data analytics. The following are proof points of SSD capabilities for database activity. In addition to supporting database table files and objects, along with transaction journal logs, other uses include meta-data, import/export or other high-I/O and write-intensive scenarios. Two database workload profiles were tested: batch update (write-intensive) and transactional. Activity involved running Transaction Performance Council (TPC) workloads TPC-B (batch update) and TPC-E (transactional/OLTP, simulating a financial trading system) against Microsoft SQL Server 2012 databases. Each test simulation had the SQL Server database (MDF) on a different device with the transaction log file (LDF) on a separate SSD. TPC-B results for a single device are shown below.

The TPC-B (write-intensive) results below show how the TPS work being done (blue) increases from left to right (more is better) for various numbers of simulated users. Also shown on the same line for each amount of TPS work being done is the average latency in seconds (right to left), where lower is better. Results are shown from top to bottom for each group of users (100, 50, 20 and 1) for the different drives being tested. Note how the SSD device does more work at a lower response time vs. the traditional HDDs.
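
As a quick sanity check on how simulated users, TPS and response time relate in results like these, Little's Law (concurrency = throughput x response time) can be applied. The numbers in the Python sketch below are hypothetical placeholders for illustration, not the measured proof-point values.

    def avg_response_time_sec(users, tps):
        """Little's Law rearranged: response time = concurrency (users) / throughput (TPS)."""
        return users / tps

    # Hypothetical (assumed) user count and TPS pairs for illustration only
    for users, tps in ((1, 150.0), (20, 1800.0), (50, 2600.0), (100, 2900.0)):
        ms = avg_response_time_sec(users, tps) * 1000.0
        print("{:3d} users at {:7.1f} TPS -> about {:6.1f} ms average response".format(users, tps, ms))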

Test configuration: Seagate 400GB 12000 2.5” SSD (ST400FM0073) 12Gbps SAS, 600GB 2.5” Enterprise 15K with TurboBoost™ (ST600MX) 6Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) 6Gbps SAS, Seagate Enterprise Capacity Nearline (ST6000NM0014) 6TB 3.5” 7.2K RPM HDD 12Gbps SAS, and a 3TB Seagate 7.2K SATA HDD. The workload generator and virtual clients ran Windows 7 Ultimate 64-bit. The Microsoft SQL Server 2012 database was on a Windows 7 guest. The guest VM (VMware vSphere 5.5) had a dedicated 14GB DRAM, a quad-core (4 x 3.192GHz) Intel E3-1225 v3, and LSI 9300 series 12Gbps SAS adapters in a PCIe Gen 3 slot, along with TPC-B (www.tpc.org) workloads.

    VM with guest OS along with SQL tempdb and masterdb resided on separate SSD based data store from devices being tested (e.g., where MDF (main database tables) and LDF (log file) resided). All devices being tested were Raw Device Mapped (RDM) independent persistent with database log file on a separate SSD device also persistent (no delayed writes) using VMware PVSCSI driver. MDF and LDF file sizes were 142GB and 26GB with scale factor of 10000, with each step running for one hour (10-minute preamble). Note that these proof-points DO NOT use VMware or any other third-party cache software or I/O acceleration tool technologies as those are covered later in a separate proof-point.

    TPC-B sql server database SSD performance
    TPC-B SQL Server database proof-points comparing various storage devices

    TPC-E (Database, Financial Trading) proof-point configuration

The following shows results from the TPC-E test (OLTP/transactional workload) simulating a financial trading system. TPC-E is an industry standard workload that performs a mix of read and write database queries. Proof-points were performed with various numbers of users (10, 20, 50 and 100) to determine Transactions Per Second (TPS, aka I/O rate) and response time in seconds. The TPC-E transactional results are shown for each device being tested across the different user workloads. The results show how TPC-E TPS work (blue) increases from left to right (more is better) for larger numbers of users, along with the corresponding latency (green) that goes from right to left (less is better). The Seagate Enterprise 1200 SSD is shown on the top in the figure below with a red box around its results. Note how the SSD has a lower latency while doing more work compared to the traditional HDDs.

Test configuration: Seagate 400GB 12000 2.5” SSD (ST400FM0073) 12Gbps SAS, 600GB 2.5” Enterprise 15K with TurboBoost™ (ST600MX) 6Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) 6Gbps SAS, Seagate Enterprise Capacity Nearline (ST6000NM0014) 6TB 3.5” 7.2K RPM HDD 12Gbps SAS, and a 3TB Seagate 7.2K SATA HDD. The workload generator and virtual clients ran Windows 7 Ultimate 64-bit. The Microsoft SQL Server 2012 database was on a Windows 7 guest. The guest VM (VMware vSphere 5.5) had a dedicated 14GB DRAM, a quad-core (4 x 3.192GHz) Intel E3-1225 v3, and LSI 9300 series 12Gbps SAS adapters in a PCIe Gen 3 slot, along with TPC-E (www.tpc.org) workloads.

    VM with guest OS along with SQL tempdb and masterdb resided on separate SSD based data store from devices being tested (e.g., where MDF (main database tables) and LDF (log file) resided). All devices being tested were Raw Device Mapped (RDM) independent persistent with database log file on a separate SSD device also persistent (no delayed writes) using VMware PVSCSI driver. MDF and LDF file sizes were 142GB and 26GB with scale factor of 10000, with each step running for one hour (10-minute preamble). Note that these proof-points DO NOT use VMware or any other third-party cache software or I/O acceleration tool technologies as those are covered later in a separate proof-point.

    TPC-E sql server database SSD performance
    TPC-E (Financial trading) SQL Server database proof-points comparing various storage devices

    Continue reading part-two of this two-part series here including the virtual server storage I/O blender effect and solution.

    Ok, nuff said (for now).

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Part II: Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review

Part II: Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review

    This is the second post of a two part series, read the first post here.

    Earlier this year I had the opportunity to test drive some Seagate 1200 12Gbs Enterprise SAS SSD’s as a follow-up to some earlier activity trying their Enterprise TurboBoost Drives. Disclosure: Seagate has been a StorageIO client and was also the sponsor of this white paper and associated proof-points mentioned in this post.

    The Server Storage I/O Blender Effect Bottleneck

    The earlier proof-points focused on SSD as a target or storage device. In the following proof-points, the Seagate Enterprise 1200 SSD is used as a shared read cache (write-through). Using a write-through cache enables a given amount of SSD to give a performance benefit to other local and networked storage devices.
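
As a concept refresher (this is a generic sketch, not the Virtunet implementation), a write-through read cache sends every write straight to the backing storage while also updating or populating the cache, so the SSD only ever holds a copy of data that is already safe on the back-end and can accelerate subsequent reads. A minimal Python sketch:

    from collections import OrderedDict

    class WriteThroughReadCache:
        """Illustrative LRU read cache with write-through semantics (not a real product)."""

        def __init__(self, backing, capacity=4):
            self.backing = backing      # stands in for the HDD / networked storage
            self.cache = OrderedDict()  # stands in for the SSD read cache
            self.capacity = capacity

        def read(self, key):
            if key in self.cache:       # cache hit served from the "SSD"
                self.cache.move_to_end(key)
                return self.cache[key]
            value = self.backing[key]   # cache miss goes to the backing store
            self._insert(key, value)
            return value

        def write(self, key, value):
            self.backing[key] = value   # write-through: backing store updated first
            self._insert(key, value)    # cache updated so later reads hit

        def _insert(self, key, value):
            self.cache[key] = value
            self.cache.move_to_end(key)
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)  # evict least recently used entry

    # Usage sketch with a hypothetical backing store
    backing_store = {"block{}".format(i): "data{}".format(i) for i in range(10)}
    cache = WriteThroughReadCache(backing_store)
    cache.write("block3", "updated")
    print(cache.read("block3"), cache.read("block7"))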

    traditional server storage I/O
    Non-virtualized servers with dedicated storage and I/O paths.

    Aggregation causes aggravation with I/O bottlenecks because of consolidation using server virtualization. The following figure shows non-virtualized servers with their own dedicated physical machine (PM) and I/O resources. When various servers are virtualized and hosted by a common host (physical machine), their various workloads compete for I/O and other resources. In addition to competing for I/O performance resources, these different servers also tend to have diverse workloads.

    virtual server storage I/O blender
    Virtual server storage I/O blender bottleneck (aggregation causes aggravation)

    The figure above shows aggregation causing aggravation with the result being I/O bottlenecks as various applications performance needs converge and compete with each other. The aggregation and consolidation result is a blend of random, sequential, large, small, read and write characteristics. These different storage I/O characteristics are mixed up and need to be handled by the underlying I/O capabilities of the physical machine and hypervisor. As a result, a common deployment for SSD in addition to as a target device for storing data is as a cache to cut bottlenecks for traditional spinning HDD.
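
To visualize the blender effect, the short Python sketch below (hypothetical LBAs and VM counts) interleaves what are individually sequential I/O streams from several VMs; at the shared physical device the combined stream looks effectively random.

    def sequential_stream(start_lba, io_count, stride=8):
        """One VM's workload: perfectly sequential block offsets."""
        return [start_lba + i * stride for i in range(io_count)]

    # Three hypothetical VMs, each sequential within its own virtual disk region
    vm_streams = [sequential_stream(start, io_count=5) for start in (0, 100000, 200000)]

    # What the shared device sees once the hypervisor interleaves the streams
    blended = [lba for group in zip(*vm_streams) for lba in group]
    print("Per-VM (sequential):", vm_streams)
    print("Device view (blended):", blended)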

In the following figure a solution is shown that introduces I/O caching with SSD to help mitigate or cut the effects of server consolidation causing performance aggravation.

    Creating a server storage I/O blender bottleneck

    Addressing the VMware Server Storage I/O blender with cache

    Addressing server storage I/O blender and other bottlenecks

For these proof-points, the goal was to create an I/O bottleneck resulting from multiple VMs in a virtual server environment performing application work. In this proof-point, multiple competing VMs, including a SQL Server 2012 database and an Exchange server, shared the same underlying storage I/O infrastructure including HDDs. The 6TB (Enterprise Capacity) HDD was configured as a VMware datastore and allocated as virtual disks to the VMs. Workloads were then run concurrently to create an I/O bottleneck for both cached and non-cached results.

Server storage I/O with virtualization proof-point configuration topology

The following figure shows two sets of proof points, cached (top) and non-cached (bottom), with three workloads. The workloads consisted of concurrent Exchange and SQL Server 2012 (TPC-B and TPC-E) running on separate virtual machines (VMs), all on the same physical machine host (SUT), with database transactions being driven by two separate servers. In these proof-points, the application data was placed onto the 6TB SAS HDD to create a bottleneck, and a portion of the SSD was used as a cache. Note that the Virtunet cache software allows you to use part of an SSD device for cache with the balance used as a regular storage target should you want to do so.

If you have paid attention to the earlier proof-points, you might notice that some of the results below are not as good as those seen in the Exchange, TPC-B and TPC-E results above. The reason is simply that the earlier proof-points were run without competing workloads, and the database along with log or journal files were placed on separate drives for performance. In the following proof-point, as part of creating a server storage I/O blender bottleneck, the Exchange, TPC-B and TPC-E workloads were all running concurrently with all data on the 6TB drive (something you normally would not want to do).

    storage I/O blender solved
    Solving the VMware Server Storage I/O blender with cache

The cached and non-cached mixed workloads shown above show how an SSD-based read cache can help to reduce I/O bottlenecks. This is an example of addressing the aggravation caused by aggregation of different competing workloads that are consolidated with server virtualization.

For the workloads shown above, all data (database tables and logs) were placed on VMware virtual disks created from a datastore using a single 7.2K 6TB 12Gbps SAS HDD (e.g. Seagate Enterprise Capacity).

The guest VM system disks, which included paging, applications and other data files, were virtual disks using a separate datastore mapped to a single 7.2K 1TB HDD. Each workload ran for eight hours, with TPC-B and TPC-E having 50 simulated users. For the TPC-B and TPC-E workloads, two separate servers were used to drive the transaction requests to the SQL Server 2012 database.

    For the cached tests, a Seagate Enterprise 1200 400GB 12Gbps SAS SSD was used as the backing store for the cache software (Virtunet Systems Virtucache) that was installed and configured on the VMware host.

    During the cached tests, the physical HDD for the data files (e.g. 6TB HDD) and system volumes (1TB HDD) were read cache enabled. All caching was disabled for the non-cached workloads.

Note that this was only a read cache, which has the side benefit of off-loading read activity, enabling the HDD to focus on writes or read-ahead. Also note that the combined TPC-E, TPC-B and Exchange databases, logs and associated files represented over 600GB of data; there was also the combined space, and thus cache impact, of the two system volumes and their data. This simple workload and configuration is representative of how SSD caching can complement high-capacity HDDs.
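
To put the cache size versus working set relationship into rough numbers (assumed values loosely based on the configuration above, not measured results), the Python sketch below shows the baseline read hit rate you would expect if every block were accessed uniformly; skewed real-world access (hot bands) is what lets a 400GB cache do better than this baseline against a larger working set.

    def uniform_access_hit_rate(cache_gb, working_set_gb):
        """Baseline steady-state hit rate if all blocks were equally likely to be read."""
        return min(1.0, cache_gb / working_set_gb)

    # Assumed sizes for illustration: 400GB SSD cache, ~650GB combined data and system volumes
    cache_gb, working_set_gb = 400.0, 650.0
    print("Uniform-access baseline hit rate: {:.0%}".format(
        uniform_access_hit_rate(cache_gb, working_set_gb)))
    # Skewed (hot band) access patterns typically push the observed hit rate well above
    # this baseline for the active portion of the data.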

    Seagate 6TB 12Gbs SAS high-capacity HDD

While the star and focus of this series of proof-points is the Seagate 1200 Enterprise 12Gbs SAS SSD, the caching software (Virtunet) and the Enterprise TurboBoost drives also play key supporting and favorable roles. However, the 6TB 12Gbs SAS high-capacity drive caught my attention from a couple of different perspectives. Certainly the space capacity was interesting, along with a 12Gbs SAS interface well suited for near-line, high-capacity and dense tiered storage environments. However, for a high-capacity drive its performance is what really caught my attention, both in the standard Exchange, TPC-B and TPC-E workloads, as well as when combined with SSD and cache software.

This opens the door for a great combination of leveraging some amount of high-performance flash-based SSD (or TurboBoost drives) combined with cache software and high-capacity drives such as the 6TB device (Seagate now has larger versions available). Something else to mention is that the 6TB HDD, in addition to being available with either a 12Gbs SAS, 6Gbs SAS or 6Gbs SATA interface, also has enhanced durability with a Read Bit Error Rate of 10^15 (e.g. on average one unrecoverable read error per 10^15 bits read) and an AFR (annual failure rate) of 0.63% (see more speeds and feeds here). Hence if you are concerned about using large-capacity HDDs and having them fail, make sure you go with those that have a better (higher exponent) Read Bit Error Rate and a low AFR, which are more common with enterprise-class vs. lower-cost commodity or workstation drives. Note that these high-capacity enterprise HDDs are also available with Self-Encrypting Drive (SED) options.
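
For a rough feel of what those reliability numbers mean, here is a small worked example in Python using the 6TB capacity, the 1-in-10^15 read bit error rate and the 0.63% AFR cited above (the 100-drive fleet size is an assumption for illustration).

    DRIVE_TB = 6.0
    BITS_PER_TB = 8.0e12          # decimal terabytes, as drive vendors specify capacity
    UBER = 1.0e-15                # unrecoverable read errors per bit read (1 in 10^15)
    AFR = 0.0063                  # annual failure rate of 0.63%

    bits_per_full_read = DRIVE_TB * BITS_PER_TB
    expected_errors = bits_per_full_read * UBER
    print("Expected unrecoverable errors per full 6TB read: {:.3f}".format(expected_errors))
    print("Roughly one unrecoverable read error per {:.0f} full-drive reads".format(1.0 / expected_errors))

    drives = 100                  # hypothetical fleet size
    print("Expected drive failures per year across {} drives: {:.2f}".format(drives, drives * AFR))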

    Summary

Read more in this StorageIO Industry Trends and Perspective (ITP) white paper on the Seagate 1200 12Gbs SAS SSDs (compliments of Seagate), and visit the Seagate Enterprise 1200 12Gbs SAS SSD page here. Moving forward there is the notion that flash SSD will be everywhere. There is a difference between all data being on flash SSD vs. having some amount of SSD involved in preserving, serving and protecting (storing) information.

    Key themes to keep in mind include:

    • Aggregation can cause aggravation which SSD can alleviate
• A relatively small amount of flash SSD in the right place can go a long way
    • Fast flash storage needs fast server storage I/O access hardware and software
    • Locality of reference with data close to applications is a performance enabler
    • Flash SSD everywhere does not mean everything has to be SSD based
    • Having some amount of flash in different places is important for flash everywhere
    • Different applications have various performance characteristics
    • SSD as a storage device or persistent cache can speed up IOPs and bandwidth

Flash and SSD are in your future; the questions come back to how much flash SSD you need, along with where to put it, how to use it, and when.

    Ok, nuff said (for now).

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Lenovo ThinkServer TD340 StorageIO lab Review

    Storage I/O trends

    Lenovo ThinkServer TD340 Server and StorageIO lab Review

Earlier this year I did a review of the Lenovo ThinkServer TS140 in the StorageIO Labs (see the review here); in fact I ended up buying a TS140 after the review, and a few months back picked up another one. This StorageIOlab review looks at the Lenovo ThinkServer TD340 Tower Server which, besides having a larger model number than the TS140, also has a lot more capabilities (server compute, memory, I/O slots and internal hot-swap storage bays). Pricing varies depending on specific configuration options; however, at the time of this post Lenovo was advertising a starting price of $1,509 USD for a specific configuration here. You will need to select different options to determine your specific cost.

The TD340 is one of the servers that Lenovo had prior to its acquisition of the IBM x86 server business that you can read about here. Note that the Lenovo acquisition of the IBM xSeries business group began in early October 2014 and is expected to be completed across different countries in early 2015. Read more about the IBM xSeries business unit here, here and here.

    The Lenovo TD340 Experience

Let's start with the overall experience, which was very easy other than deciding which make and model to try. This included first answering some questions to get the process moving, and agreeing to keep the equipment safe, secure and insured as well as not damaging anything. Part of the process also involved answering some configuration-related questions, and shortly thereafter a large box from Lenovo arrived.

    TD340 is ready for use
TD340 with Keyboard and Mouse (Monitor not included)

One of the reasons I have a photo of the TD340 on a desk is that I initially put it in an office environment, similar to what I did with the TS140, as Lenovo claimed it would be quiet enough to do so. I was not surprised, and indeed the TD340 is quiet enough to be used where you would normally find a workstation or mini-tower. By being so quiet, the TD340 is a good fit for situations where a server has to go into an office environment as opposed to a server or networking room.

    Welcome to the TD340
    Lenovo ThinkServer Setup

    TD340 Setup
    Lenovo TD340 as tested in BIOS setup, note the dual Intel Xeon E5-2420 v2 processors

    TD340 as tested

    TD340 Selfie of whats inside
    TD340 "Selfie" with 4 x 8GB DDR3 DIMM (32GB) and PCIe slots (empty)

    TD340 disk drive bays
    TD340 internal drive hot-swap bays

    Speeds and Feeds

    The TD340 that I tested was a Machine type 7087 model 002RUX which included 4 x 16GB DIMMs and both processor sockets occupied.

    You can view the Lenovo TD340 data sheet with more speeds and feeds here, however the following is a summary.

    • Operating systems support include various Windows Servers (2008-2012 R2), SUSE, RHEL, Citrix XenServer and VMware ESXi
    • Form factor is 5U tower with weight starting at 62 pounds depending on how configured
    • Processors include support for up to two (2) Intel E5-2400 v2 series
• Memory includes 12 DDR3 DRAM DIMM slots (LV RDIMM and UDIMM) for up to 192GB.
• Expansion slots vary depending on whether one or two CPU sockets are populated. With a single CPU socket installed there is 1 x PCIe Gen3 FH/HL x8 mechanical, x4 electrical, 1 x PCIe Gen3 FH/HL x16 mechanical, x16 electrical, and a single PCI 32-bit/33 MHz FH/HL slot.
• With two CPU sockets installed, extra PCIe slots are enabled. These include one PCIe Gen3 FH/HL x8 mechanical, x4 electrical, one PCIe Gen3 FH/HL x16 mechanical, x16 electrical, three PCIe Gen3 FH/HL x8 mechanical, x8 electrical, and a single PCI 5V 32-bit/33 MHz FH/HL slot.
    • Two 5.25” media bays for CD or DVDs or other devices
    • Integrated ThinkServer RAID (0/1/10/5) with optional RAID adapter models
    • Internal storage varies depending on model including up to eight (8) x 3.5” hot swap drives or 16 x 2.5” hot swap drives (HDD’s or SSDs).
    • Storage space capacity varies by the type and size of the drives being used.
    • Networking interfaces include two (2) x GbE
    • Power supply options include single 625 watt or 800 watt, or 1+1 redundant hot-swap 800 watt, five fixed fans.
    • Management tools include ThinkServer Management Module and diagnostics

    What Did I do with the TD340

    After initial check out in an office type environment, I moved the TD340 into the lab area where it joined other servers to be used for various things.

Some of those activities included using Windows Server 2012 Essentials along with associated admin activities, as well as installing VMware ESXi 5.5.

    TD340 is ready for use
TD340 with Keyboard and Mouse (Monitor not included)

    What I liked

Unbelievably quiet, which may not seem like a big deal; however, if you are looking to deploy a server or system into a small office workspace, this becomes an important consideration. Otoh, if you are a power user and want a robust server that can be installed into a home media entertainment system, well, this might be a nice to have consideration ;). Speaking of I/O slots, naturally I’m interested in Server Storage I/O, so having multiple slots is a must have, along with a processor that is multi-core (pretty much standard these days) with VT and EP for supporting VMware (these were disabled in the BIOS; however, that was an easy fix).

    What I did not like

The only thing I did not like was that I ran into a compatibility issue trying to use an LSI 9300 series 12Gb SAS HBA, which Lenovo is aware of and perhaps has even addressed by now. What I ran into is that the adapters work, however I was not able to get the full performance out of them compared to other systems, including my slower Lenovo TS140s.

    Summary

Overall I give Lenovo and the TD340 a "B+", which would have been an "A" had I not gotten myself into a BIOS situation and had I been able to run the 12Gbps SAS PCIe Gen 3 cards at full speed. Likewise the Lenovo service and support also helped to improve the experience. Otoh, if you are simply going to use the TD340 in a normal out-of-the-box mode without customizing it to add your own adapters or install your own operating system or hypervisors (beyond those that are supplied as part of the install setup tool kit), you may have an "A" or "A+" experience with the TD340.

    Would I recommend the TD340 to others? Yes for those who need this type and class of server for Windows, *nix, Hyper-V or VMware environments.

    Would I buy a TD340 for myself? Maybe if that is the size and type of system I need, however I have my eye on something bigger. On the other hand for those who need a good value server for a SMB or ROBO environment with room to grow, the TD340 should be on your shopping list to compare with other solutions.

    Disclosure: Thanks to the folks at Lenovo for sending and making the TD340 available for review and a hands on test experience including covering the cost of shipping both ways (the unit should now be back in your possession). Thus this is not a sponsored post as Lenovo is not paying for this (they did loan the server and covered two-way shipping), nor am I paying them, however I have bought some of their servers in the past for the StorageIOLab environment that are companions to some Dell and HP servers that I have also purchased.

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Lenovo TS140 Server and Storage I/O Review

    Storage I/O trends

    Lenovo TS140 Server and Storage I/O Review

This is a review that looks at my recent hands-on experiences in using a TS140 (Model MT-M 70A4 – 001RUS) pedestal (aka tower) server that the Lenovo folks sent to me to use for a month or so. The TS140 is one of the servers that Lenovo had prior to its acquisition of the IBM x86 server business that you can read about here.

    The Lenovo TS140 Experience

Let's start with the overall experience, which was very easy and good. This included initially answering some questions to get the process moving, and agreeing to keep the equipment safe, secure and insured as well as not damaging anything (this was not a tear-down-and-rip-it-apart-into-pieces trial).

Part of the process also involved answering some configuration-related questions, and shortly thereafter a large box from Lenovo arrived. It turned out to be a box (server hardware) inside of a Lenovo box, which was inside a slightly larger unmarked shipping box (see the larger box in the background).

    TS140 Evaluation Arrives

    TS140 shipment undergoing initial security screen scan and sniff (all was ok)

    TS140 with Windows 2012
    TS140 with Keyboard and Mouse (Monitor not included)

One of the reasons I have a photo of the TS140 on a desk is that I initially put it in an office environment, as Lenovo claimed it would be quiet enough to do so. I was not surprised, and indeed the TS140 is quiet enough to be used where you would normally find a workstation or mini-tower. By being so quiet, the TS140 is a good fit for situations where a small or starter server has to go into an office environment as opposed to a server or networking room. For those who are into mounting servers, there is the option of placing the TS140 on its side in a cabinet or rack.

    Windows 2012 on TS140
    TS140 with Windows Server 2012 Essentials

    TS140 as tested

    TS140 Selfie of whats inside
    TS140 "Selfie" with 4 x 4GB DDR3 DIMM (16GB) and PCIe slots (empty)

    16GB RAM (4 x 4GB DDR3 UDIMM, larger DIMMs are supported)
    Windows Server 2012 Essentials
    Intel Xeon E3-1225 v3 @3.2 Ghz quad (C226 chipset and TPM 1.2) vPRO/VT/EP capable
    Intel GbE 1217-LM Network connection
    280 watt power supply
    Keyboard and mouse (no monitor)
Two 7.2K SATA HDDs (WD) configured as RAID 1 (100GB LUN)
    Slot 1 PCIe G3 x16
    Slot 2 PCIe G2 x1
    Slot 3 PCIe G2 x16 (x4 electrical signal)
    Slot 4 PCI (legacy)
Onboard 6Gb SATA RAID 0/1/10/5
Onboard SATA 3.0 (6Gbps) connectors (0-4), USB 3.0 and USB 2.0

    Read more about what I did with the Lenovo TS140 in part II of my review along with what I liked, did not like and general comments here.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Part II: What I did with Lenovo TS140 in my Server and Storage I/O Review

    Storage I/O trends

    Part II: Lenovo TS140 Server and Storage I/O Review


    This is the second of a two-part post series on my recent experiences with a Lenovo TS140 Server, you can read part I here.

    What Did I do with the TS140

    After initial check out in an office type environment, I moved the TS140 into the lab area where it joined other servers to be used for various things.

Some of those activities included using Windows Server 2012 Essentials along with associated admin activities. I also installed VMware ESXi 5.5 and ran into a few surprises. One of those was that I needed to apply a VMware driver update to support the onboard Intel NIC, as well as enable the VT and EP virtualization assist modes via the BIOS. The biggest surprise was that I discovered I could not install VMware onto an internal drive attached via one of the internal SATA ports, which turns out to be a BIOS firmware issue.

    Lenovo confirmed this when I brought it to their attention, and the workaround is to use USB to install VMware onto a USB flash SSD thumb drive, or other USB attached drive or to use external storage via an adapter. As of this time Lenovo is aware of the VMware issue, however, no date for new BIOS or firmware is available. Speaking of BIOS, I did notice that there was some newer BIOS and firmware available (FBKT70AUS December 2013) than what was installed (FB48A August of 2013). So I went ahead and did this upgrade which was a smooth, quick and easy process. The process included going to the Lenovo site (see resource links below), selecting the applicable download, and then installing it following the directions.

Since I was going to install various PCIe SAS adapters into the TS140 attached to external SAS and SATA storage, this was not a big issue, more of an inconvenience. Likewise, for using storage mounted internally, the workaround is to use a SAS or SATA adapter with internal ports (or a cable). Speaking of USB workarounds, if you have a HDD, HHDD, SSHD or SSD that is a SATA device and need to attach it to USB, then get one of these cables. Note that there are USB 3.0 and USB 2.0 cables (see below) available, so choose wisely.

    USB to SATA adapter cable

In addition to running various VMware-based workloads with different guest VMs, I also ran Futuremark PCMark (btw, if you do not have this in your server storage I/O toolbox, it should be there) to gauge the system's performance. As mentioned, the TS140 is quiet; however, it also has good performance depending on which processor you select. Note that while the TS140 has a list price, as of the time of this post, under $400 USD, that will change depending on which processor, amount of memory, software and other options you choose.

    Futuremark PCMark
    PCmark

PCmark test: Result
Composite score: 2274
Compute: 11530
System Storage: 2429
Secondary Storage: 2428
Productivity: 1682
Lightweight: 2137

    PCmark results are shown above for the Windows Server 2012 system (non-virtualized) configured as shipped and received from Lenovo.

    What I liked

Unbelievably quiet, which may not seem like a big deal; however, if you are looking to deploy a server or system into a small office workspace, this becomes an important consideration. Otoh, if you are a power user and want a robust server that can be installed into a home media entertainment system, well, this might be a nice to have consideration ;).

    Something else that I liked is that the TS140 with the E3-1220 v3 family of processor supports PCIe G3 adapters which are useful if you are going to be using 10GbE cards or 12Gbs SAS and faster cards to move lots of data, support more IOPs or reduce response time latency.

In addition, while only 4 DIMM slots is not very much, it's more than what some other similarly focused systems have; plus, with large-capacity DIMMs, you can still get a nice system, or two, or three or four for a cluster, at a good price or value (Hmm, VSAN anybody?). Also, while not a big item, the TS140 did not require ordering an HDD or SSD (if you are not also ordering software), so you can get the system diskless and use drives you already have.

Speaking of I/O slots, naturally I’m interested in Server Storage I/O, so having multiple slots is a must have, along with a processor that is quad-core (pretty much standard these days) with VT and EP for supporting VMware (these were disabled in the BIOS; however, that was an easy fix).

    Then there is the price as of this posting starting at $379 USD which is a bare bones system (e.g. minimal memory, basic processor, no software) whose price increases as you add more items. What I like about this price is that it has the PCIe G3 slot as well as other PCIe G2 slots for expansion meaning I can install 12Gbps (or 6Gbps) SAS storage I/O adapters, or other PCIe cards including SSD, RAID, 10GbE CNA or other cards to meet various needs including software defined storage.

    What I did not like

    I would like to have had at least six vs. four DIMM slots, however keeping in mind the price point of where this system is positioned, not to mention what you could do with it thinking outside of the box, I’m fine with only 4 x DIMM. Space for more internal storage would be nice, however, if that is what you need, then there are the larger Lenovo models to look at. By the way, thinking outside of the box, could you do something like a Hadoop, OpenStack, Object Storage, VMware VSAN or other cluster with these in addition to using as a Windows Server?

    Yup.

    Granted you won’t have as much internal storage, as the TS140 only has two fixed drive slots (for more storage there is the model TD340 among others).

However it is not that difficult to add more (not Lenovo endorsed) by adding a StarTech enclosure like I did with my other systems (see here). Oh, and those extra PCIe slots: that's where a 12Gbs (or 6Gbps) adapter comes into play, while leaving room for GbE cards and PCIe SSD cards. Btw, not sure what to do with that PCIe x1 slot? That's a good place for a dual GbE NIC to add more networking ports, or a SATA adapter for attaching to larger-capacity, slower drives.

    StarTech 2.5" SAS and SATA drive enclosure on Amazon.com
    StarTech 2.5″ SAS SATA drive enclosure via Amazon.com

    If VMware is not a requirement, and you need a good entry level server for a large SOHO or small SMB environment, or, if you are looking to add a flexible server to a lab or for other things the TS140 is good (see disclosure below) and quiet.

    Otoh as mentioned, there is a current issue with the BIOS/firmware with the TS140 involving VMware (tried ESXi 5 & 5.5).

    However I did find a workaround which is that the current TS140 BIOS/Firmware does work with VMware if you install onto a USB drive, and then use external SAS, SATA or other accessible storage which is how I ended up using it.

    Lenovo TS140 resources include

  • TS140 Lenovo ordering website
  • TS140 Data and Spec Sheet (PDF here)
  • Lenovo ThinkServer TS140 Manual (PDF here)
  • Intel E3-1200 v3 processors capabilities (Web page here)
  • Lenovo Drivers and Software (Web page here)
  • Lenovo BIOS and Drivers (Web page here)
  • Enabling Virtualization Technology (VT) in TS140 BIOS (Press F1) (Read here)
  • Enabling Intel NIC (82579LM) GbE with VMware (Link to user forum and a blog site here)
  • My experience from a couple years ago dealing with Lenovo support for a laptop issue
Summary

    Disclosure: Lenovo loaned the TS140 to me for just under two months including covering shipping costs at no charge (to them or to me) hence this is not a sponsored post or review. On the other hand I have placed an order for a new TS140 similar to the one tested that I bought on-line from Lenovo.

    This new TS140 server that I bought joins the Dell Inspiron I added late last year (read more about that here) as well as other HP and Dell systems.

Overall I give the Lenovo TS140 a provisional "A", which would be a solid "A" once the BIOS/firmware issue mentioned above is resolved for VMware. Otoh, if you are not concerned about using the TS140 for VMware (or can do a workaround), then consider it an "A".

    As mentioned above, I liked it so much I actually bought one to add to my collection.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, vSAN and VMware vExpert. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    Nand flash SSD NVM SCM server storage I/O memory conversations

    Updated 8/31/19
    Server Storage I/O storageioblog SDDC SDDI Data Infrastructure trends

    The SSD Place NVM, SCM, PMEM, Flash, Optane, 3D XPoint, MRAM, NVMe Server, Storage, I/O Topics

    Now and then somebody asks me if I’m familiar with flash or nand flash Solid State Devices (SSD) along with other non-volatile memory (NVM) technologies and trends including NVM Express (NVMe).

Having been involved with various types of SSD technology, products and solutions since the late 80s, initially as a customer in IT (including as a launch customer for DEC’s ESE20 SSDs), then later as a vendor selling SSD solutions, as well as an analyst and advisory consultant covering the technologies, I tell the person asking: well, yes, of course.

That gave me the idea to help keep track of some of the content and make it easy to find by putting it here in this post (which will be updated now and then).

    Thus this is a collection of articles, tips, posts, presentations, blog posts and other content on SSD including nand flash drives, PCIe cards, DIMMs, NVM Express (NVMe), hybrid and other storage solutions along with related themes.

    Also if you can’t find it here, you can always do a Google search like this or this to find some more material (some of which is on this page).

SSD, SSHD, HHDD and HDD

    Flash SSD Articles, posts and presentations

The following are some of my tips, articles, blog posts, presentations and other content on SSD. Keep in mind that the question should not be if SSDs are in your future, rather when, where, with what, from whom and how much. Also keep in mind that a bit of SSD as storage or cache in the right place can go a long way, while a lot of SSD will give you a benefit, however it will also cost a lot of cash.

    • How to Prepare for the NVMe Server Storage I/O Wave (Via Micron.com)
    • Why NVMe Should Be in Your Data Center (Via Micron.com)
    • NVMe U2 (8639) vs. M2 interfaces (Via Gamersnexus)
    • Enmotus FuzeDrive MicroTiering (StorageIO Lab Report)
    • EMC DSSD D5 Rack Scale Direct Attached Shared SSD All Flash Array Part I (Via StorageIOBlog)
    • Part II – EMC DSSD D5 Direct Attached Shared AFA (Via StorageIOBlog)
    • NAND, DRAM, SAS/SCSI & SATA/AHCI: Not Dead, Yet! (Via EnterpriseStorageForum)
    • Non Volatile Memory (NVM), NVMe, Flash Memory Summit and SSD updates (Via StorageIOblog)
    • Microsoft and Intel showcase Storage Spaces Direct with NVM Express at IDF ’15 (Via TechNet)
• NVM Express solutions (Via SuperMicro)
    • Gaining Server Storage I/O Insight into Microsoft Windows Server 2016 (Via StorageIOblog)
    • PMC-Sierra Scales Storage with PCIe, NVMe (Via EEtimes)
    • RoCE updates among other items (Via InfiniBand Trade Association (IBTA) December Newsletter)
    • NVMe: The Golden Ticket for Faster Flash Storage? (Via EnterpriseStorageForum)
    • What should I consider when using SSD cloud? (Via SearchCloudStorage)
    • MSP CMG, Sept. 2014 Presentation (Flash back to reality – Myths and Realities – Flash and SSD Industry trends perspectives plus benchmarking tips)– PDF
    • Selecting Storage: Start With Requirements (Via NetworkComputing)
    • PMC Announces Flashtec NVMe SSD NVMe2106, NVMe2032 Controllers With LDPC (Via TomsITpro)
    • Exclusive: If Intel and Micron’s “Xpoint” is 3D Phase Change Memory, Boy Did They Patent It (Via Dailytech)
    • Intel & Micron 3D XPoint memory — is it just CBRAM hyped up? Curation of various posts (Via Computerworld)
    • How many IOPS can a HDD, HHDD or SSD do (Part I)?
    • How many IOPS can a HDD, HHDD or SSD do with VMware? (Part II)
    • I/O Performance Issues and Impacts on Time-Sensitive Applications (Via CMG)
    • Via EnterpriseStorageForum: 5 Hot Storage Technologies to Watch
    • Via EnterpriseStorageForum: 10-Year Review of Data Storage
    • Via CustomPCreview: Samsung SM961 PCIe NVMe SSD Shows Up for Pre-Order
    • StorageIO Industry Trends Perspective White Paper: Seagate 1200 Enterprise SSD (12Gbps SAS) with proof points (e.g. Lab test results)
• Companion: Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review (blog post part I and Part II)
• NewEggBusiness: Are NVMe m.2 drives ready for the limelight?
    • Google (Research White Paper): Disks for Data Centers (vs. just SSD)
    • CMU (PDF White Paper): A Large-Scale Study of Flash Memory Failures in the Field
    • Via ZDnet: Google doubles Cloud Compute local SSD capacity: Now it’s 3TB per VM
    • Here’s why Western Digital is buying SanDisk (Via ComputerWorld)
    • HP, SanDisk partner to bring storage-class memory to market (Via ComputerWorld)
    • Seagate Grows Its Nytro Enterprise Flash Storage Line (Via InfoStor)
    • New SAS Solid State Drive First Product From Seagate Micron Alliance (Via Seagate)
    • Wow, Samsung’s New 16 Terabyte SSD Is the World’s Largest Hard Drive (Via Gizmodo)
    • Samsung ups the SSD ante with faster, higher capacity drives (Via ITworld)
    • New SATA SSD powers elastic cloud agility for CSPs (Via Cbronline)
    • Toshiba Solid-State Drive Family Features PCIe Technology (Via Eweek)
    • SanDisk aims CloudSpeed Ultra SSD at cloud providers (Via ITwire)
    • Everspin & Aupera reveal all-MRAM Storage Module in M.2 Form Factor (Via BusinessWire)
    • Intel, Micron Launch “Bulk-Switching” ReRAM (Via EEtimes)

server I/O hierarchy

    • Spot The Newest & Best Server Trends (Via Processor)
    • Market ripe for embedded flash storage as prices drop (Via Powermore (Dell))
    • 2015 Tech Preview: SSD and SMBs (Via ChannelProNetworks )
    • How to test your HDD, SSD or all flash array (AFA) storage fundamentals (Via StorageIOBlog)
    • Processor: Comments on What Abandoned Data Is Costing Your Company
    • Processor: Comments on Match Application Needs & Infrastructure Capabilities
    • Processor: Comments on Explore The Argument For Flash-Based Storage
    • Processor: Comments on Understand The True Cost Of Acquiring More Storage
    • Processor: Comments on What Resilient & Highly Available Mean
    • Processor: Explore The Argument For Flash-Based Storage
    • SearchCloudStorage What should I consider when using SSD cloud?
    • StorageSearch.com: (not to be confused with TechTarget, good site with lots of SSD related content)
    • StorageSearch.com: What kind of SSD world… 2015?
    • StorageSearch.com: Various links about SSD
    • FlashStorage.com: (Various flash links curated by Tegile and analyst firm Actual Tech Media [Scott D. Lowe])
    • StorageSearch.com: How fast can your SSD run backwards?
    • Seagate has shipped over 10 Million storage HHDD’s (SSHDs), is that a lot?
    • Are large storage arrays dead at the hands of SSD?
    • Can we get a side of context with them IOPS and other storage metrics?
    • Cisco buys Whiptail continuing the SSD storage I/O flash cash cache dash
    • EMC VFCache respinning SSD and intelligent caching (Part I)
    • Flash Data Storage: Myth vs. Reality (Via InfoStor)
    • Have SSDs been unsuccessful with storage arrays (with poll)?

server storage I/O memory hierarchy

    • Spiceworks SSD and related conversation here and here, profiling IOPs here, and SSD endurance here.
    • SSD is in your future, How, when, with what and where you will be using it (PDF Presentation)
    • SSD for Virtual (and Physical) Environments: Part I Spinning up to speed on SSD (Via TheVirtualizationPractice), Part II, The call to duty, SSD endurance, Part III What SSD is best for you?, and Part IV what’s best for your needs.
    • IT and storage economics 101, supply and demand
    • SSD, flash and DRAM, DejaVu or something new?
    • The Many Faces of Solid State Devices/Disks (SSD)
    • The Nand Flash Cache SSD Cash Dance (Via InfoStor)
    • The Right Storage Option Is Important for Big Data Success (Via FedTech)

    server storage i/o nand flash ssd options

    • Viking SATADIMM: Nand flash SATA SSD in DDR3 DIMM slot?
    • WD buys nand flash SSD storage I/O cache vendor Virident (Via VMware Communities)
    • What is the best kind of IO? The one you do not have to do
    • When and Where to Use NAND Flash SSD for Virtual Servers (Via TheVirtualizationPractice)
    • Why SSD based arrays and storage appliances can be a good idea (Part I)
    • Why SSD based arrays and storage appliances can be a good idea (Part II)
    • Q&A on Access data more efficiently with automated storage tiering and flash (Via SearchSolidStateStorage)
    • InfoStor: Flash Data Storage: Myth vs. Reality (Via InfoStor)
    • Enterprise Storage Forum: Not Just a Flash in the Pan (Via EnterpriseStorageForum)

    SSD Storage I/O and related technologies comments in the news

    The following are some of my commentary and industry trend perspectives that appear in various global venues.

    Storage I/O ssd news

    • Comments on using Flash Drives To Boost Performance (Via Processor)
    • Comments on selecting the Right Type, Amount & Location of Flash Storage (Via Toms It Pro)
    • Comments Google vs. AWS SSD: Which is the better deal? (Via SearchAWS)
• Tech News World: SanDisk SSD comments and perspectives.
    • Tech News World: Samsung Jumbo SSD drives perspectives
    • Comments on Why Degaussing Isn’t Always Effective (Via StateTech Magazine)
    • Processor: SSD (FLASH and RAM)
    • SearchStorage: FLASH and SSD Storage
    • Internet News: Steve Wozniak joining SSD startup
• Internet News: SanDisk sale to Toshiba
    • SearchSMBStorage: Comments on SanDisk and wireless storage product
    • StorageAcceleration: Comments on When VDI Hits a Storage Roadblock and SSD
    • Statetechmagazine: Boosting performance with SSD
• Edtechmagazine: Driving toward SSDs
    • SearchStorage: Seagate SLC and MLC flash SSD
    • SearchWindowServer: Making the move to SSD in a SAN/NAS
    • SearchSolidStateStorage: Comments SSD marketplace
    • InfoStor: Comments on SSD approaches and opportunities
    • SearchSMBStorage: Solid State Devices (SSD) benefits
    • SearchSolidState: Comments on Fusion-IO flash SSD and API’s
• SearchSolidStateStorage: Comments on SSD industry activity and OCZ bankruptcy
    • Processor: Comments on Plan Your Storage Future including SSD
• Processor: Comments on Incorporate SSDs Into Your Storage Plan
    • Digistor: Comments on SSD and flash storage
    • ITbusinessEdge: Comments on flash SSD and hybrid storage environments
    • SearchStorage: Perspectives on Cisco buying SSD storage vendor Whiptail
    • StateTechMagazine: Comments on all flash SSD storage arrays
    • Processor: Comments on choosing SSDs for your data center needs
    • Searchsolidstatestorage: Comments on how to add solid state devices (SSD) to your storage system
    • Networkcomputing: Comments on SSD/Hard Disk Hybrids Bridge Storage Divide
    • Internet Evolution: Comments on IBM buying flash SSD vendor TMS
• ITKE: Comments on IBM buying flash SSD vendor TMS
    • Searchsolidstatestorage: SSD, Green IT and economic benefits
    • IT World Canada: Cloud computing, dot be scared, look before you leap
    • SearchStorage: SSD in storage systems
    • SearchStorage: SAS SSD
    • SearchSolidStateStorage: Comments on Access data more efficiently with automated storage tiering and flash
    • InfoStor: Comments on EMC’s Light to Speed: Flash, VNX, and Software-Defined
    • EnterpriseStorageForum: Cloud Storage Mergers and Acquisitions: What’s Going On?

Check out the Server StorageIO NVM Express (NVMe) focus page aka www.thenvmeplace.com for additional related content. Interested in data protection? Check out the data protection diaries series of posts here, or cloud and object storage here, and server storage I/O performance benchmarking here. Also check out the StorageIO events and activities page here, as well as tips and articles here, news commentary here, along with our newsletter here.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved