NVMe overview primer

server storage I/O trends
Updated 2/2/2018

This is the first in a five-part mini-series providing a primer and overview of NVMe. View companion posts and more material at www.thenvmeplace.com.

What is NVM Express (NVMe)

Non-Volatile Memory (NVM) includes persistent memory such as NAND flash and other forms of Solid State Devices (SSD). NVM Express (NVMe) is a new server storage I/O protocol and an alternative to AHCI/SATA and the SCSI protocol used by Serial Attached SCSI (SAS). Note that the name NVMe is owned and managed by the NVM Express industry trade group (www.nvmexpress.org).

The key question with NVMe is not if, but rather when, where, why, how and with what it will appear in your data center or server storage I/O data infrastructure. This is a companion to material on my micro site www.thenvmeplace.com, which provides an overview of NVMe as well as helps to address some of the common questions about it.

Main features of NVMe include among others:

  • Lower latency due to improved drivers and increased queues (and queue sizes)
  • Lower CPU usage to handle larger numbers of I/Os (more CPU available for useful work)
  • Higher I/O activity rates (IOPs) to boost productivity and unlock the value of fast flash and NVM
  • Bandwidth improvements leveraging various fast PCIe interfaces and available lanes
  • Dual-pathing of devices similar to what is available with dual-path SAS devices
  • Unlocking the value of more cores per processor socket and software threads (productivity)
  • Various packaging options, deployment scenarios and configuration options
  • Appears as a standard storage device on most operating systems
  • Plug-and-play with in-box drivers on many popular operating systems and hypervisors

Why NVMe for Server Storage I/O?
NVMe has been designed from the ground up for accessing fast storage, including flash SSD, leveraging PCI Express (PCIe). The benefits include lower latency, improved concurrency, increased performance and the ability to unleash a lot more of the potential of modern multi-core processors.

NVMe Server Storage I/O
Figure 1 shows common server I/O connectivity including PCIe, SAS, SATA and NVMe.

NVMe, leveraging PCIe, enables modern applications to reach their full potential. NVMe is one of those rare, generational protocol upgrades that comes around every couple of decades to help unlock the full performance value of servers and storage. NVMe does need new drivers, but once in place, it plugs and plays seamlessly with existing tools, software and user experiences. Likewise, many of those drivers are now in the box (e.g. they ship with popular operating systems and hypervisors).

While SATA and SAS provided enough bandwidth for HDDs and some SSD uses, more performance is needed. Near-term, NVMe does not replace SAS or SATA; they can and will coexist for years to come, enabling different tiers of server storage I/O performance.

NVMe unlocks the potential of flash-based storage by allowing up to 65,536 (64K) queues, each with up to 64K commands per queue. SATA allowed for only a single command queue capable of holding 32 commands, while SAS supports a single queue with 64K command entries. As a result, the storage I/O capabilities of flash can be fed across PCIe much faster, enabling modern multi-core processors to complete more useful work in less time.
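As a quick sketch of how to see those queue capabilities on a live system (assuming a Linux host with an NVMe device and the open source nvme-cli utility installed; the device name /dev/nvme0 is an example, adjust for your system):

$ nvme list                          # enumerate NVMe devices present
$ nvme get-feature /dev/nvme0 -f 7   # feature 7 reports the number of I/O queues allocated
$ nvme id-ctrl /dev/nvme0            # controller details, including queue entry sizes (sqes, cqes)

The reported queue counts vary by device and driver; the point is that the protocol allows vastly more outstanding work than AHCI/SATA's single 32-command queue.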

Where To Learn More

View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Continue reading about NVMe with Part II (Different NVMe configurations) in this five-part series, or jump to Part III, Part IV or Part V.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Server StorageIO February 2016 Update Newsletter

Volume 16, Issue II

Hello and welcome to the February 2016 Server StorageIO update newsletter.

Even with an extra day during the month of February, there was a lot going on in a short amount of time. This included industry activity from servers to storage and I/O networking, hardware, software, services, mergers and acquisitions for cloud, virtual, containers and legacy environments. Check out the sampling of some of the various industry activities below.

Meanwhile, it's now time for March Madness, which also means metrics that matter and getting ready for World Backup Day on March 31st. Speaking of World Backup Day, check out the StorageIO events and activities page for a webinar on March 31st involving data protection as part of smart backups.

While your focus for March may be around brackets and other related themes, check out the Carnegie Mellon University (CMU) white paper listed below that looks at NAND flash SSD failures at Facebook. Some of the takeaways involve the importance of cooling and thermal management for flash, as well as wear management and the role of flash translation layer firmware along with controllers.

Also see the links to the Google White Paper on their request to the industry for a new type of Hard Disk Drive (HDD) to store capacity data while SSDs handle the IOPs. The takeaway is that while Google uses a lot of flash SSD for high performance, low latency workloads, they also need a lot of high-capacity bulk storage that is more affordable on a cost per capacity basis. Google also makes several proposals and suggestions to the industry on what should and can be done on a go-forward basis.

Backblaze also has a new report out on their 2015 HDD reliability and failure analysis, which makes for an interesting read. One of the takeaways is that while there are newer, larger capacity 6TB and 8TB drives, Backblaze is leveraging the lower cost per capacity of 4TB drives that are also available in volume quantity.

Enjoy this edition of the Server StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts along with in-the-news commentary appearing soon.

Cheers GS

In This Issue

  • StorageIOblog posts
  • Industry Activity Trends
  • New and Old Vendor Update
  • Events and Webinars
    StorageIOblog Posts

    Recent and popular Server StorageIOblog posts include:

    • EMC DSSD D5 Rack Scale Direct Attached Shared SSD All Flash Array Part I
      and Part II – EMC DSSD D5 Direct Attached Shared AFA
      EMC announced the general availability of their DSSD D5 Shared Direct Attached SSD (DAS) flash storage system (e.g. All Flash Array or AFA) which is a rack-scale solution. If you recall, EMC acquired DSSD back in 2014 which you can read more about here. EMC announced four configurations that include 36TB, 72TB and 144TB raw flash SSD capacity with support for up to 48 dual-ported host client servers.
    • Various Hardware (SAS, SATA, NVM, M2) and Software (VHD) Defined Odds and Ends
      Ever need to add another GbE port to a small server, workstation or perhaps an Intel NUC when no PCIe slots are available? How about attaching an M2 form factor flash SSD card to a server or device that does not have an M2 port, or mirroring two M2 cards together with a RAID adapter? Looking for a tool to convert a Windows system to a Virtual Hard Disk (VHD) while it is running? The following is a collection of odds-and-ends devices and tools for hardware and software defining your environment.
    • Software Defined Storage Virtual Hard Disk (VHD) Algorithms + Data Structures
      For those who are into, or simply like to talk about, software defined storage (SDS), APIs, Windows, Virtual Hard Disks (VHD) or VHDX, or Hyper-V among other related themes, have you ever actually looked at the specification for VHDX? If not, here is the link to the open specification that Microsoft published (this one dates back to 2012).
    • Big Files and Lots of Little File Processing and Benchmarking with Vdbench
      Need to test a server, storage I/O networking, hardware, software, services, cloud, virtual, physical or other environment that is either doing some form of file processing, or where you simply want some extra workload running in the background for whatever reason?

    View other recent as well as past blog posts here

    Server Storage I/O Industry Activity Trends (Cloud, Virtual, Physical)

    StorageIO news (image licensed for use from Shutterstock by StorageIO)

    Some new Products Technology Services Announcements (PTSA) include:

  • Tegile – IntelliFlash HD Now Available To Enterprises Worldwide
  • Via Forbes – Competitors and Cash Bleed Put Pressure on Pure Storage
  • Via HealthCareBusiness – Philips and Amazon team up on cloud-based health record storage
  • Via Zacks – IBM Advances Hybrid Cloud Object Based Storage
  • DataON Storage expands Microsoft Hyper-Converged Infrastructure platforms
  • Via ITBusinessEdge – Nimble updates All Flash Array (AFA) storage
  • Carnegie Mellon University – A Large-Scale Study of Flash Memory Failures
  • Cisco Buys Cliqr Cloud Orchestration
  • Backblaze – 2015 Hard Drive Reliability Reports and Analysis
  • Via BusinessCloudNews – Verizon Closing Down Its Public Cloud
  • Via BusinessInsider – US Government Approves Dell and EMC Deal
  • EMC and VMware announce new VCE VxRAIL Converged Solutions
  • EMC announces new IBM zSeries Mainframe enhancements for VMAX
  • EMC announces new DSSD D5 AFA and VMAX AFA enhancements
  • HPE announces enhancements to StoreEasy 1650 storage
  • Seagate now shipping world's slimmest and fastest 2TB mobile HDD
  • Via VMblog – Oracle Scoops Up Ravello to Boost Its Public Cloud Offerings
  • Via Investors – SSD and Chinese Investments in Western Digital
  • ATTO announces 32G (e.g. Gen 6) Fibre Channel adapters
  • Google to disk vendors: Make hard drives like this, even if they lose more data
  • Google Disk for Data Centers White Paper (PDF Here)
  • View other recent news and industry trends here

    Vendors you may not have heard of

    Various vendors (and service providers) you may not know or may not have heard about recently.

    StorageIO news (image licensed for use from Shutterstock by StorageIO)

    • SkySync – Enterprise File Sync and Share
    • SANblaze – Storage protocol emulation tools
    • OpenIT – DCIM and Data Infrastructure Management Tools
    • Infinit.sh – Decentralized Software Based File Storage Platform
    • Alluxio – Open Source Software Defined Storage Abstraction Layer
    • Genie9 – Backup and Data Protection Tools
    • E8 Storage – Software Defined Stealth Storage Startup

    Check out more vendors you may know, have heard of, or that are perhaps new on the Server StorageIO Industry Links page here (over 1,000 entries and growing).

    StorageIO Webinars and Industry Events

    EMCworld (Las Vegas) May 2-4, 2016

    Interop (Las Vegas) May 4-6 2016

    NAB (Las Vegas) April 19-20, 2016

    March 31, 2016 Webinar (1PM ET) – Smart Backup and World Backup Day

    February 25, 2016 Webinar (11AM PT) – Migrating to Hyper-V including from VMware

    February 24, 2016 Webinar (11AM ET) – How To Become a Data Protection Hero

    February 23, 2016 Webinar (11AM PT) – Rethinking Data Protection

    January 19, 2016 Webinar (9AM PT) – Solve Virtualization Performance Issues Like a Pro

    See more webinars and other activities on the Server StorageIO Events page here.

    Server StorageIO Industry Resources and Links

    Check out these useful links and pages:

    storageio.com/links,
    objectstoragecenter.com, storageioblog.com/data-protection-diaries-main/,
    thenvmeplace.com, thessdplace.com and storageio.com/performance among others.

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Part II – EMC DSSD D5 Direct Attached Shared AFA

    server storage I/O trends

    This is the second post in a two-part series on the EMC DSSD D5 announcement; you can read part one here.

    Let's take a closer look at how EMC DSSD D5 works, its hardware and software components, how it compares, and other considerations.

    How Does DSSD D5 Work

    Up to 48 Linux servers attach via dual-port PCIe Gen 3 x8 cards that are stateless. Stateless simply means they do not have any flash and are not being used as storage cards; rather, they are essentially just an NVMe adapter card. With the first release, block and HDFS file access along with object and other APIs are available for Linux systems. These drivers enable the shared NVMe storage to be accessed by applications using different streamlined server and storage I/O driver software stacks to cut latency. DSSD D5 is meant to be a rack-scale solution, so distance is measured as inside a rack (e.g. a couple of meters).

    The 5U tall DSSD D5 supports 48 servers via a pair of I/O Modules (IOM), each with 48 ports, that in turn attach to the data plane and on to the Flash Modules (FM). Also attached to the data plane are a pair of controllers that are active/active for performing management tasks; however, they do not sit in the data path. This means that host clients directly access the FMs without having to go through a controller, which is the case in traditional storage systems and AFAs. The controllers only get involved when there is some setup, configuration or other management activity; otherwise they stay out of the way, kind of like how management should function: there when you need them to help, then out of the way so productive work can be done.

    EMC DSSD shared ssd das
    Pardon the following hand-drawn sketches; you can see some nice pretty diagrams, videos and other content via the EMC Pulse Blog as well as elsewhere.

    Note that the host client servers take on the responsibility for managing and coordinating data consistency, meaning data can be shared between servers assuming applicable software is used for implementing integrity. This means that clustering and other software that can support shared storage are able to perform low latency, high performance read and write activity to the DSSD D5, as opposed to relying on the underlying storage system to handle shared storage coordination as a NAS does. Another note is that the DSSD D5 is optimized for concurrent multi-threaded and asynchronous I/O operations, along with atomic writes for data integrity, which enable the multiple cores in today's faster processors to be more effectively leveraged.

    The data plane is a mesh, switch or expander based backplane enabling any of the northbound (host client-server) 96 (2 x 48) PCIe Gen 3 x4 ports to reach up to 36 (or as few as 18) FMs that are also dual pathed. Note that the host client-server PCIe dual-port cards are Gen 3 x8 while the DSSD D5 ports are Gen 3 x4. Simple math should tell you that if you are going to have 2 x PCIe Gen 3 x4 ports running at full speed, you want a Gen 3 x8 connection inside the server to get full performance.
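    To put rough numbers on that (using the nominal PCIe Gen 3 rate of roughly 985 MB/sec per lane after 128b/130b encoding):

    1 x Gen 3 x4 port  = 4 lanes x ~985 MB/sec = ~3.9 GB/sec
    2 x Gen 3 x4 ports = ~7.9 GB/sec combined, which is what a single Gen 3 x8 (8 lane) connection inside the server provides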

    Think of the data plane as similar to how a SAS expander works in an enclosure, or a SAS switch, the difference being it is PCIe and not SAS or another protocol. Note that even though the terms mesh, fabric, switch and network are used, these are NOT attached to traditional LAN, SAN, NAS or other networks. Instead, this is a private "networked backplane" between the server and storage devices (e.g. FM).

    EMC DSSD D5 details

    The dual controllers (e.g. the control plane) oversee flash management including garbage collection among other tasks; storage is also thin provisioned.

    Dual controllers (active/active) are connected to each other (e.g. the control plane) as well as to the data plane; however, they do not sit in the data path. Thus this is a fast-path, control-path approach, meaning the controllers can get involved to do management functions when needed, and get out of the way of work when not needed. The controllers are hot-swap and provide global management functions including setting up and tearing down host client/server I/O paths, mappings and affinities. Controllers also support the management of CUBIC RAID data protection functions performed by the Flash Modules (FM).

    Other functions the controllers implement, leveraging their CPUs and DRAM, include flash translation layer (FTL) functions normally handled by SSD cards, drives or other devices. These FTL functions include wear-leveling for durability, garbage collection and voltage power management among other tasks. The result is that the flash modules are able to spend more of their time and resources handling I/O operations rather than management tasks, unlike traditional off-the-shelf SSD drives, cards or devices.

    The FMs insert from the front and come in two sizes of 2TB and 4TB of raw NAND capacity. What's different about the FMs vs. some other vendors' approaches is that these are not your traditional PCIe flash cards; instead they are custom cards with a proprietary ASIC and raw NAND dies. DRAM is used in the FM as a buffer to hold data for write optimization as well as to enhance wear-leveling to increase flash endurance.

    The result is up to thousands of NAND dies spread over up to 36 FMs; however, more important is the additional performance being derived out of those resources. The increased performance comes from DSSD implementing its own flash translation layer, garbage collection and power voltage management among other techniques to derive more useful work per watt of energy consumed.

    EMC DSSD performance claims:

    • 100 microsecond latency for small I/Os
    • 100GB per second bandwidth for large I/Os
    • 10 million small I/O IOPs
    • Up to 144TB raw capacity

    How Does It Compare To Other AFA and SSD solutions

    There will be many apples-to-oranges comparisons, as is often the case with new technologies, at least until others arrive in the market.

    Some general comparisons that may be apples to oranges as opposed to apples to apples include:

    • Shared and dense fast NAND flash (eMLC) SSD storage
    • Disaggregated flash SSD storage from servers while enabling high performance, low latency
    • Eliminate pools or ponds of dedicated SSD storage capacity and performance
    • Not a SAN yet more than server-side flash or flash SSD JBOD
    • Underlying Flash Translation Layer (FTL) is disaggregated from SSD devices
    • Optimized hardware and software data path
    • Requires special server-side stateless adapter for accessing shared storage

    Some other comparisons include:

    • Hybrid and AFA shared via some server storage I/O network (good sharing, feature rich, resilient, slower performance and higher latency due to hardware, network and server I/O software stacks). For example EMC VMAX, VNX, XtremIO among others.
    • Server attached flash SSD aka server SAN (flash SSD creates islands of technology, lower resource sharing, data shuffling between servers, limited or no data services, management complexity). For example, PCIe flash SSD stateful (persistent) cards where data is stored or used as a cache, along with associated management tools and drivers.
    • DSSD D5 is a rack-scale hybrid approach combining direct attached shared flash with lower latency and higher performance vs. a traditional AFA or hybrid storage array, and better resource usage, sharing, management and performance vs. traditional dedicated server flash. It complements server-side data infrastructure and applications scale-out software. Server applications can reach NVMe storage via user space with block, HDFS, Flood and other APIs.

    Using EMC DSSD D5 in possible hybrid ways

    What Happened to Server PCIe cards and Server SANs

    If you recall, a few years ago the industry rage was flash SSD PCIe server cards from vendors such as EMC, FusionIO (now part of SanDisk), Intel (still Intel), LSI (now part of Seagate), Micron (still Micron) and STEC (now part of Western Digital) among others. Server-side flash SSD PCIe cards are still popular, particularly the newer NVMe controller based models that use the NVMe protocol stack instead of AHCI/SATA or others.

    However, as is often the case, things evolve, and while there is still a place for server-side stateful PCIe flash cards, either for data or as cache, there is also a need to combine and simplify management, as well as streamline the software I/O stacks, which is where EMC DSSD D5 comes into play. It enables consolidation of server-side SSD cards into a shared 5U chassis, giving up to 48 dual-pathed servers access to the flash pools while using streamlined server software stacks and drivers that leverage NVMe over PCIe.

    Where to learn more

    Continue reading with the following links about NVMe, flash SSD and EMC DSSD.

  • Part one of this series here and part two here.
  • Performance Redefined! Introducing DSSD D5 Rack-Scale Flash Solution (EMC Pulse Blog)
  • EMC Unveils DSSD D5: A Quantum Leap In Flash Storage (EMC Press Release)
  • EMC Declares 2016 The “Year of All-Flash” For Primary Storage (EMC Press Release)
  • EMC DSSD D5 Rack-Scale Flash (EMC PDF Overview)
  • EMC DSSD and Cloudera Evolve Hadoop (EMC White Paper Overview)
  • Software Aspects of The EMC DSSD D5 Rack-Scale Flash Storage Platform (EMC PDF White Paper)
  • EMC DSSD D5 (EMC PDF Architecture and Product Specification)
  • EMC VFCache respinning SSD and intelligent caching (Part II)
  • EMC To Acquire DSSD, Inc., Extends Flash Storage Leadership
  • Part II: XtremIO, XtremSW and XtremSF EMC flash ssd portfolio redefined
  • XtremIO, XtremSW and XtremSF EMC flash ssd portfolio redefined
  • Learn more about flash SSD here and NVMe here at thenvmeplace.com
    What this all means

    EMC with DSSD D5 now has another solution to offer clients; granted, their challenge, as it has been over the past couple of decades, will be to educate and compensate their sales force and partners on which technology solution to put forward for different needs.

    On one hand, life could be simpler for EMC if they only had one platform solution that would then be the answer to every problem, something that some other vendors and startups face. Likewise, if all you have is one solution, you can try to make that solution fit different environments, or get the environment to adapt to the solution. Having options is a good thing if those options can remove complexity along with cost while boosting productivity.

    I would like to see support for other operating systems such as Windows, particularly with the future Windows Server 2016 based Nano, as well as hypervisors including VMware and Hyper-V among others. On the other hand, I also would like to see a Sharp Aquos Quattron 80" 1080p 240Hz 3D TV on my wall to watch HD videos from my DJI Phantom drone. For now, focusing on Linux makes sense; however, it would be nice to see some more platforms supported.

    Keep an eye on the NVMe space as we are seeing NVMe solutions appearing inside servers and storage systems, external dedicated and shared, as well as some other emerging things including NVMe over Fabric. Learn more about EMC DSSD D5 here.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    EMC DSSD D5 Rack Scale Direct Attached Shared SSD All Flash Array Part I

    server storage I/O trends

    This is the first post in a two-part series pertaining to the EMC DSSD D5 announcement; you can read part two here.

    EMC announced today the general availability of their DSSD D5 Shared Direct Attached SSD (DAS) flash storage system (e.g. All Flash Array or AFA) which is a rack-scale solution. If you recall, EMC acquired DSSD back in 2014 which you can read more about here. EMC announced four configurations that include 36TB, 72TB and 144TB raw flash SSD capacity with support for up to 48 dual-ported host client servers.

    Via EMC Pulse Blog

    What Is DSSD D5

    At a high level, EMC DSSD D5 is a PCIe direct attached SSD flash storage solution that enables aggregation of the disparate SSD card functionality typically found in separate servers into a shared system, without causing aggravation. DSSD D5 helps to alleviate server-side I/O bottleneck or aggravation issues that can result from the aggregation of workloads or data. Think of DSSD D5 as a shared application server storage I/O accelerator enabling up to 48 servers to access up to 144TB of raw flash SSD to support various applications that have the need for speed.

    Applications that have the need for speed, or that can benefit from less time waiting for results where time is money, or from boosted productivity, enable high-profitability computing. This includes legacy as well as emerging applications and workloads spanning little data, big data and big fast structured and unstructured data; from Oracle to SAS to HBASE and Hadoop among others, perhaps even Alluxio.

    Some examples include:

    • Clusters and scale-out grids
    • High Performance Computing (HPC)
    • Parallel file systems
    • Forecasting and image processing
    • Fraud detection and prevention
    • Research and analytics
    • E-commerce and retail
    • Search and advertising
    • Legacy applications
    • Emerging applications
    • Structured database and key-value repositories
    • Unstructured file systems, HDFS and other data
    • Large undefined work sets
    • From batch stream to real-time
    • Reduces run times from days to hours

    Where to learn more

    Continue reading with the following links about NVMe, flash SSD and EMC DSSD.

  • Part one of this series here and part two here.
  • Performance Redefined! Introducing DSSD D5 Rack-Scale Flash Solution (EMC Pulse Blog)
  • EMC Unveils DSSD D5: A Quantum Leap In Flash Storage (EMC Press Release)
  • EMC Declares 2016 The “Year of All-Flash” For Primary Storage (EMC Press Release)
  • EMC DSSD D5 Rack-Scale Flash (EMC PDF Overview)
  • EMC DSSD and Cloudera Evolve Hadoop (EMC White Paper Overview)
  • Software Aspects of The EMC DSSD D5 Rack-Scale Flash Storage Platform (EMC PDF White Paper)
  • EMC DSSD D5 (EMC PDF Architecture and Product Specification)
  • EMC VFCache respinning SSD and intelligent caching (Part II)
  • EMC To Acquire DSSD, Inc., Extends Flash Storage Leadership
  • Part II: XtremIO, XtremSW and XtremSF EMC flash ssd portfolio redefined
  • XtremIO, XtremSW and XtremSF EMC flash ssd portfolio redefined
  • Learn more about flash SSD here and NVMe here at thenvmeplace.com
    What this all means

    Today's legacy and emerging applications have the need for speed, and where the applications may not need speed, the users as well as the Internet of Things (IoT) devices that depend upon or feed those applications do need things to move faster. Fast applications need fast software and hardware to get the same amount of work done faster with fewer wait delays, as well as to process larger amounts of structured and unstructured little data, big data and very fast big data.

    Different applications, along with the data infrastructures they rely upon including servers, storage, and I/O hardware and software, need to adapt to various environments; a one-size, one-approach model does not fit all scenarios. What this means is that some applications and data infrastructures will benefit from shared direct attached SSD storage such as rack-scale solutions using EMC DSSD D5, while other applications will benefit from AFA or hybrid storage systems along with other approaches used in various ways.

    Continue reading part two of this series here including how EMC DSSD D5 works and more perspectives.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    Various Hardware (SAS, SATA, NVM, M2) and Software (VHD) Defined Odds and Ends

    server storage I/O trends

    Ever need to add another GbE port to a small server, workstation or perhaps an Intel NUC when no PCIe slots are available? How about attaching an M2 form factor flash SSD card to a server or device that does not have an M2 port, or mirroring two M2 cards together with a RAID adapter? Looking for a tool to convert a Windows system to a Virtual Hard Disk (VHD) while it is running? The following is a collection of odds-and-ends devices and tools for hardware and software defining your environment.

    Adding GbE Ports Without PCIe Slots

    Adding Ethernet ports or NICs is relatively easy with larger servers, assuming you have available PCIe slots.

    However, what about when you are limited or out of PCIe slots? One option is to use a USB (preferably USB 3) to GbE connector. Another option, if you have an available mSATA card slot such as on a server or workstation with a WiFi card you no longer need, is to get an mSATA to GbE kit (shown below). Granted, you might have to get creative with the PCIe bracket depending on what you are going to put one of these into.

    mSATA to GbE and USB to GbE
    Left mSATA to GbE port, Right USB 3 (Blue) to GbE connector

    Tip: Some hypervisors may not like the USB to GbE, or may not have drivers for the mSATA to GbE connector; likewise some operating systems do not have in-box drivers. Start by loading GbE drivers such as those needed for RealTek NICs and you may end up with plug and play.

    SAS to SATA Interposer and M2 to SATA docking card

    In the following figure on the left is a SAS to SATA interposer, which enables a SAS HDD or SSD to connect to a SATA connector (power and data). Keep in mind that SATA devices can attach to SAS ports; however, the usual rule of thumb is that SAS devices cannot attach to a SATA port or controller. To prevent that from occurring, the SAS and SATA connectors have different notches that prevent a SAS device from plugging into a SATA connector.

    Where the SAS to SATA interposers come into play is that some servers or systems have SAS controllers; however, their drive bays have SATA power and data connectors. Note that the key here is that there is a SAS controller, however instead of a SAS connector to the drive bay, a SATA connector is used. To get around this, interposers such as the one above allow the SAS device to attach to the SATA connector, which in turn attaches to the SAS controller.

    SAS SATA interposer and M2 to SATA docking card
    Left SAS to SATA interposer, Right M2 to SATA docking card

    In the above figure on the right is an M2 NVM NAND flash SSD card attached to an M2 to SATA docking card. This enables M2 cards that have SATA protocol controllers (as opposed to M2 NVMe) to be attached to a SATA port on an adapter or RAID card. Some of these docking cards can also be mounted in server or storage system 2.5" (or larger) drive bays. You can find both of the above at Amazon.com as well as many other venues.

    P2V and Creating VHD and VHDX

    I like and use various Physical to Virtual (P2V), Virtual to Virtual (V2V), Virtual to Physical (V2P) and Virtual to Cloud (V2C) tools, including those from VMware (vCenter Converter) and Microsoft (e.g. Microsoft Virtual Machine Converter) among others. Likewise Clonezilla, Acronis and many other tools are in the toolbox. One of those other tools, handy for relatively quickly making a VHD or VHDX out of a running Windows server, is disk2vhd.

    disk2vhd

    Now you should ask, why not just use the Microsoft Migration tool or VMware converter?

    Simple: if you use those or other tools and run into issues with GPT vs. MBR or BIOS vs. UEFI settings among others, disk2vhd is a handy workaround. Simply install it, tell it where to create the VHD or VHDX (preferably on another device), start the creation, and when done, move the VHDX or VHD to where needed and go from there.
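    As a sketch of what that can look like, disk2vhd can also be invoked from a command line (the volume wildcard, output path and file name below are examples, adjust for your environment):

    $ disk2vhd * E:\vhds\mysystem.vhdx

    The * captures all volumes of the running system into the specified VHDX; you can also list specific drive letters instead.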

    Where do you get disk2vhd and how much does it cost?

    Get it here from the Microsoft Technet Windows Sysinternals page; it's free.

    Where to learn more

    Continue reading about the above and other related topics with these links.

  • Server storage I/O Intel NUC nick knack notes – Second impressions
  • Some Windows Server Storage I/O related commands
  • Server Storage I/O Cables Connectors Chargers & other Geek Gifts
  • The NVM (Non Volatile Memory) and NVMe Place (Non Volatile Memory Express)
  • Nand flash SSD and NVM server storage I/O memory conversations
  • Via @EmergencyMgtMag Cloud Storage for Camera Data?

  • Software Defined Storage Virtual Hard Disk (VHD) Algorithms + Data Structures
  • Part II 2014 Server Storage I/O Geek Gift ideas
    What this all means

    While the above odds-and-ends tips, tricks, tools and technology may not be applicable for your production environment, perhaps they will be useful for your test or home lab environment needs. On the other hand, the above may not be practically useful for anything, yet simply entertaining; the rest is up to you as to whether there is any return on investment, or perhaps return on innovation, from using these or other odds-and-ends tips and tricks that might be outside of the traditional box, so to speak.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    Software Defined Storage Virtual Hard Disk (VHD) Algorithms + Data Structures

    server storage I/O trends

    For those who are into, or simply like to talk about software defined storage (SDS), APIs, Windows, Virtual Hard Disks (VHD) or VHDX, or Hyper-V among other related themes, have you ever actually looked at the specification for VHDX? If not, here is the link to the open specification that Microsoft published (this one dates back to 2012).

    Microsoft VHDX specification document
    Click on above image to download the VHDX specification from Microsoft.com

    How about Algorithms + Data Structures = Programs by Niklaus Wirth? Some of you might remember that from the past; if not, it's a timeless piece of work with many fundamental concepts for understanding software defined anything. I came across Algorithms + Data Structures = Programs back in graduate school when I was getting my master's degree in Software Engineering at night, while working during the day in an IT environment on servers, storage, and I/O networking hardware and software.


    Algorithms + Data Structures = Programs on Amazon.com

    In addition to the Amazon.com link above, here is a link to a free (legitimate PDF) copy.

    The reason I mention Software Defined, Virtual Hard Disk and Algorithms + Data Structures = Programs is that they are all directly related, or at a minimum can help demystify things.

    Inside a VHD and VHDX

    The following is an excerpt from the Microsoft VHDX specification document mentioned above that shows a logical view of how a VHDX is defined as a data structure, as well as how algorithms should use and access them.

    Microsoft VHDX specification

    Keep in mind that anything software defined is a collection of data structures that describe how bits, bytes, blocks, blobs or other entities are organized, which are then accessed by algorithms that define how those data structures are used. Thus the connection to Algorithms + Data Structures = Programs mentioned above.

    In the case of a Virtual Hard Disk (VHD) or VHDX, they are the data structures defined (see the specification here) and then used by various programs (applications or algorithms) such as Windows or other operating systems, hypervisors or utilities.

    A VHDX (or VMDK or VVOL or qcow or other virtual disk for that matter) is a file whose contents (e.g. the data structures) are organized per a given specification (here).

    The VHDX can then be moved around like any other file and used for booting some operating systems, as well as simply mounted and used like any other disk or device.

    This also means that you can nest a VHDX inside of a VHDX and so forth.
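    As a minimal sketch of treating a VHDX as just another file and device (assuming a Windows system with the Hyper-V PowerShell module available; the path and size below are examples):

    # Create a dynamically expanding 10GB VHDX, mount it, then detach when done
    New-VHD -Path C:\Temp\demo.vhdx -SizeBytes 10GB -Dynamic
    Mount-VHD -Path C:\Temp\demo.vhdx
    Dismount-VHD -Path C:\Temp\demo.vhdx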

    Where to learn more

    Continue reading with the following links about Virtual Hard Disks pertaining to Microsoft Windows, Hyper-V, VMware among others.

  • Algorithms + Data Structures = Programs on Amazon.com
  • Microsoft Technet Virtual Hard Disk Sharing Overview
  • Download the VHDX specification from Microsoft.com
  • Microsoft Technet Hyper-V Virtual Hard Disk (VHD) Format Overview
  • Microsoft Technet Online Virtual Hard Disk Resizing Overview
  • VMware Developer Resource Center (VDDK for vSphere 6.0)
  • VMware VVOLs and storage I/O fundamentals (Part 1)
    What this all means

    Applications and utilities, or basically anything that is algorithms working with data structures, is a program. Software Defined Storage, or Software Defined anything, involves defining the data structures that describe various entities, along with the algorithms that work with and use those data structures.

    Sharpen, refresh or expand your software defined data center, software defined network, software defined storage or software defined storage management (as well as software defined marketing) game by digging a bit deeper into the bits and bytes. Who knows, you might just go from talking the talk to walking the talk; if nothing else, talking the talk better.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    Big Files Lots of Little File Processing Benchmarking with Vdbench


    server storage I/O trends

    Updated 2/10/2018

    Need to test a server, storage I/O networking, hardware, software, services, cloud, virtual, physical or other environment that is either doing some form of file processing, or where you simply want some extra workload running in the background for whatever reason? An option is File Processing Benchmarking with Vdbench.

    I/O performance

    Getting Started


    Here's a quick and relatively easy way to do it with Vdbench (free from Oracle). Granted, there are other tools, both free and for-fee, that can do similar things; however, we will leave those for another day and post. Here's the con to this approach: there is no GUI like what you have available with some other tools. Here's the pro: it's free, flexible and limited only by your creativity, amount of storage space, server memory and I/O capacity.

    If you need a background on Vdbench and benchmarking, check out the series of related posts here (e.g. www.storageio.com/performance).

    Get and Install the Vdbench Bits and Bytes


    If you do not already have Vdbench installed, get a copy from the Oracle or SourceForge site (now points to Oracle here).

    Vdbench is free; you simply sign up and accept the free license, select the version, then download the bits (it is a single, common distribution for all OSs) as well as the documentation.

    Installation, particularly on Windows, is really easy: basically follow the instructions in the documentation by copying the contents of the download folder to a specified directory, set up any environment variables, and make sure that you have Java installed.

    Here is a hint and tip for Windows Servers, if you get an error message about counters, open a command prompt with Administrator rights, and type the command:

    $ lodctr /r


    The above command will reset your I/O counters. Note however that the command will also overwrite existing counter settings, so only use it if you have to.

    Likewise the *nix install is also easy: copy the files, make sure to copy the applicable *nix shell script (they are in the download folder), and verify Java is installed and working.

    You can do a vdbench -t (windows) or ./vdbench -t (*nix) to verify that it is working.

    Vdbench File Processing

    There are many options with Vdbench as it has a very robust command and scripting language, including the ability to set up for-loops among other things. We are only going to touch the surface here using its file processing capabilities. Likewise, Vdbench can run from a single server accessing multiple storage systems or file systems, as well as from multiple servers to a single file system. For simplicity, we will stick with the basics in the following examples to exercise a local file system. The number of files and file sizes are limited by server memory and storage space.

    You can specify the number and depth of directories to put files into for processing. One of the parameters is the anchor point for the file processing; in the following examples S:\SIOTEMP\FS1 is used as the anchor point. Other parameters include the I/O size, percent reads, number of threads, run time and sample interval, as well as the output folder name for the result files. Note that unlike some tools, Vdbench does not create a single file of results, rather a folder with several files including summary, totals, parameters, histograms and CSV among others.


    Simple Vdbench File Processing Commands

    For flexibility and ease of use I put the following three Vdbench commands into a simple text file that is then called with parameters on the command line.
    fsd=fsd1,anchor=!fanchor,depth=!dirdep,width=!dirwid,files=!numfiles,size=!filesize

    fwd=fwd1,fsd=fsd1,rdpct=!filrdpct,xfersize=!fxfersize,fileselect=random,fileio=random,threads=!thrds

    rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=!etime,interval=!itime
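    For context, the three statements map to Vdbench's file system terminology: fsd is a File System Definition (what directories and files to create), fwd is a Filesystem Workload Definition (how those files get accessed) and rd is a Run Definition (how fast, how long and how often to sample).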

    Simple Vdbench script

    # SIO_vdbench_filesystest.txt
    #
    # Example Vdbench script for file processing
    #
    # fanchor = file system place where directories and files will be created
    # dirwid = how wide should the directories be (e.g. how many directories wide)
    # numfiles = how many files per directory
    # filesize = size in k, m, g e.g. 16k = 16KBytes
    # fxfersize = file I/O transfer size in kbytes
    # thrds = how many threads or workers
    # etime = how long to run in minutes (m) or hours (h)
    # itime = interval sample time e.g. 30 seconds
    # dirdep = how deep the directory tree
    # filrdpct = percent of reads e.g. 90 = 90 percent reads
    # -p processnumber = optionally specify a process number, only needed if running multiple Vdbench instances at the same time; the number should be unique
    # -o output folder name that describes what is being done and some config info
    #
    # Sample command line shown for Windows, for *nix add ./
    #
    # The real Vdbench script with command line parameters indicated by !=
    #

    fsd=fsd1,anchor=!fanchor,depth=!dirdep,width=!dirwid,files=!numfiles,size=!filesize

    fwd=fwd1,fsd=fsd1,rdpct=!filrdpct,xfersize=!fxfersize,fileselect=random,fileio=random,threads=!thrds

    rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=!etime,interval=!itime

    Big Files Processing Script


    With the above script file defined, for Big Files I specify a command line such as the following.
    $ vdbench -f SIO_vdbench_filesystest.txt fanchor=S:\SIOTemp\FS1 dirwid=1 numfiles=60 filesize=5G fxfersize=128k thrds=64 etime=10h itime=30 numdir=1 dirdep=1 filrdpct=90 -p 5576 -o SIOWS2012R220_NOFUZE_5Gx60_BigFiles_64TH_STX1200_020116
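    For a sense of the storage footprint those parameters create, the simple arithmetic is: 1 directory x 60 files x 5GB per file = 300GB of test data under the anchor point, exercised by 64 threads using 128KB I/Os at 90 percent reads for 10 hours.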

    Big Files Processing Example Results


    The following is one of the result files from the folder of results created via the above command for Big Files processing, showing totals.


    Run totals

    21:09:36.001 Starting RD=format_for_rd1

    Feb 01, 2016 .Interval. .ReqstdOps.. ...cpu%... read ....read.... ...write.... ..mb/sec... mb/sec .xfer.. ...mkdir... ...rmdir... ..create... ...open.... ...close... ..delete...
    rate resp total sys pct rate resp rate resp read write total size rate resp rate resp rate resp rate resp rate resp rate resp
    21:23:34.101 avg_2-28 2848.2 2.70 8.8 8.32 0.0 0.0 0.00 2848.2 2.70 0.00 356.0 356.02 131071 0.0 0.00 0.0 0.00 0.1 109176 0.1 0.55 0.1 2006 0.0 0.00

    21:23:35.009 Starting RD=rd1; elapsed=36000; fwdrate=max. For loops: None

    07:23:35.000 avg_2-1200 4939.5 1.62 18.5 17.3 90.0 4445.8 1.79 493.7 0.07 555.7 61.72 617.44 131071 0.0 0.00 0.0 0.00 0.0 0.00 0.1 0.03 0.1 2.95 0.0 0.00


    Lots of Little Files Processing Script


    For lots of little files, the following is used.


    $ vdbench -f SIO_vdbench_filesystest.txt fanchor=S:\SIOTEMP\FS1 dirwid=64 numfiles=25600 filesize=16k fxfersize=1k thrds=64 etime=10h itime=30 dirdep=1 filrdpct=90 -p 5576 -o SIOWS2012R220_NOFUZE_SmallFiles_64TH_STX1200_020116
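    As a sanity check on scale, those parameters work out to: 64 directories x 25,600 files x 16KB per file = 1,638,400 files and roughly 25GB of test data, accessed with 1KB I/Os at 90 percent reads.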

    Lots of Little Files Processing Example Results


    The following is one of the result files from the folder of results created via the above command for Lots of Little Files processing, showing totals.
    Run totals

    09:17:38.001 Starting RD=format_for_rd1

    Feb 02, 2016 .Interval. .ReqstdOps.. ...cpu%... read ....read.... ...write.... ..mb/sec... mb/sec .xfer.. ...mkdir... ...rmdir... ..create... ...open.... ...close... ..delete...
    rate resp total sys pct rate resp rate resp read write total size rate resp rate resp rate resp rate resp rate resp rate resp
    09:19:48.016 avg_2-5 10138 0.14 75.7 64.6 0.0 0.0 0.00 10138 0.14 0.00 158.4 158.42 16384 0.0 0.00 0.0 0.00 10138 0.65 10138 0.43 10138 0.05 0.0 0.00

    09:19:49.000 Starting RD=rd1; elapsed=36000; fwdrate=max. For loops: None

    19:19:49.001 avg_2-1200 113049 0.41 67.0 55.0 90.0 101747 0.19 11302 2.42 99.36 11.04 110.40 1023 0.0 0.00 0.0 0.00 0.0 0.00 7065 0.85 7065 1.60 0.0 0.00


    Where To Learn More

    View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    The above examples can easily be modified to do different things, particularly if you read the Vdbench documentation on how to set up multi-host, multi-storage system, multiple job streams to do different types of processing. This means you can benchmark a storage system, server, or converged and hyper-converged platform, or simply put a workload on it as part of other testing. There are even options for handling data footprint reduction such as compression and dedupe.

    Ok, nuff said, for now.

    Gs

    Greg Schulz - Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Server StorageIO January 2016 Update Newsletter

    Volume 16, Issue I – beginning of Year (BoY) Edition

    Hello and welcome to the January 2016 Server StorageIO update newsletter.

    Is it just me, or did January disappear in a flash like data stored in non-persistent volatile DRAM memory when the power is turned off? It seems like just the other day that it was the first day of the new year and now we are about to welcome in February. Needless to say, like many of you I have been busy with various projects, many of which are behind the scenes, some of which will start appearing publicly sooner while others later.

    In terms of what I have been working on, it includes the usual performance, availability, capacity and economics (e.g. PACE) topics related to servers, storage, I/O networks, hardware, software, cloud, virtual and containers. This includes NVM as well as NVMe based SSDs, HDDs, cache and tiering technologies, as well as data protection among other things with Hyper-V, VMware and various cloud services.

    Enjoy this edition of the Server StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts along with in-the-news commentary appearing soon.

    Cheers GS

    In This Issue

  • Feature Topic
  • Industry Trends News
  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Videos and Podcasts
  • Events and Webinars
  • Recommended Reading List
  • Industry Activity Trends
  • Server StorageIO Lab reports
  • New and Old Vendor Update
  • Resources and Links
    Feature Topic – Microsoft Nano, Server 2016 TP4 and VMware

    This month's feature topic is virtual servers and software defined storage, including those from VMware and Microsoft. Back in November I mentioned the Windows Server 2016 Technical Preview 4 (e.g. TP4) along with Storage Spaces Direct and Nano. As a reminder, you can download your free trial copy of Windows Server 2016 TP4 from this Microsoft site here.

    Three good Microsoft Blog posts about storage spaces to check out include:

    • Storage Spaces Direct in Technical Preview 4 (here)
    • Hardware options for evaluating Storage Spaces Direct in Technical Preview 4 (here)
    • Storage Spaces Direct – Under the hood with the Software Storage Bus (here)

    As for Microsoft Nano, for those not familiar, it's not a new tablet or mobile device; instead, it is a very lightweight, streamlined version of Windows Server 2016. How streamlined? Much more so than the earlier Windows Server versions that simply disabled the GUI and desktop interfaces. Nano is smaller from a memory and disk storage space perspective, meaning it uses less RAM, boots faster, and has fewer moving parts (e.g. software modules) to break (or need patching).

    Specifically, Nano removes 32-bit support (e.g. WoW64 is gone) and anything related to the desktop and GUI interfaces, as well as the console interface. That's right, no console or virtual console to log into; access is via PowerShell or Windows Management Instrumentation (WMI) tools from remote systems. How small is it? I have a Nano instance built on a VHDX that is under a GB in size; granted, it's only for testing. The goal of Nano is to have a very lightweight, streamlined version of Windows Server that can run hundreds (or more) of VMs in a small memory footprint, not to mention support lots of containers. Nano is part of Windows TP4; learn more about Nano here in this Microsoft post, including how to get started using it.
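    As a quick sketch of that headless access (assuming PowerShell remoting is enabled and a Nano instance named nano01; the host name and credential prompt are examples):

    # From a remote admin workstation, manage Nano over PowerShell remoting
    Set-Item WSMan:\localhost\Client\TrustedHosts -Value "nano01"
    Enter-PSSession -ComputerName nano01 -Credential (Get-Credential)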

    Speaking of VMware, if you have not received an invite yet to their Digital Enterprise February 6, 2016 announcement event, click here to register.

    StorageIOblog Posts

    Recent and popular Server StorageIOblog posts include:

    View other recent as well as past blog posts here

    Server Storage I/O Industry Activity Trends (Cloud, Virtual, Physical)

    StorageIO news (image licensed for use from Shutterstock by StorageIO)

    Some new Products Technology Services Announcements (PTSA) include:

    • EMC announced Elastic Cloud Storage (ECS) V2.2. A main theme of V2.2 is that besides being the 3rd generation of EMC object storage (dating back to Centera, then Atmos), ECS is also where the functionality of Centera, Atmos and other products converges. ECS provides object storage access along with HDFS (Hadoop and Hortonworks certified) and traditional NFS file access.

      Object storage access includes Amazon S3, OpenStack Swift, Atmos and CAS (Centera). In addition to the access methods, Centera functionality for regulatory compliance has been folded into the ECS software stack. For example, ECS is now compatible with SEC 17a-4(f) and CFTC 1.31(b)-(c) regulations, protecting data from being overwritten or erased for a specified retention period. Other enhancements besides scalability, resiliency and ease of use include metadata and search capabilities. You can download and try ECS for non-production workloads, with no capacity or functionality limitations, from EMC here.

    View other recent news and industry trends here

    StorageIO Commentary in the news

    StorageIO news (image licensed for use from Shutterstock by StorageIO)
    Recent Server StorageIO commentary and industry trends perspectives about news, activities tips, and announcements. In case you missed them from last month:

    • TheFibreChannel.com: Industry Analyst Interview: Greg Schulz, StorageIO
    • EnterpriseStorageForum: Comments Handling Virtual Storage Challenges
    • PowerMore (Dell): Q&A: When to implement ultra-dense storage

    View more Server, Storage and I/O hardware as well as software trends comments here

    Vendors you may not have heard of

    Various vendors (and service providers) you may not know or may not have heard about recently.

    • Datrium – DVX and NetShelf server software defined flash storage and converged infrastructure
    • DataDynamics – StorageX is a software solution for enabling intelligent data migration, including from NetApp OnTap 7 to Clustered OnTap, as well as to and from EMC among other NAS file serving solutions.
    • Paxata – Little and Big Data management solutions

    Check out more vendors you may know, have heard of, or that are perhaps new on the Server StorageIO Industry Links page here (over 1,000 entries and growing).

    StorageIO Tips and Articles

    Recent Server StorageIO articles appearing in different venues include:

    • InfoStor:  Data Protection Gaps, Some Good, Some Not So Good

    And in case you missed them from last month

    • IronMountain:  5 Noteworthy Data Privacy Trends From 2015
    • Virtual Blocks (VMware Blogs):  Part III EVO:RAIL – When And Where To Use It?
    • InfoStor:  Object Storage Is In Your Future
    • InfoStor:  Water, Data and Storage Analogy

    Check out these resources and links on technology, techniques, trends as well as tools. View more tips and articles here.

    StorageIO Videos and Podcasts

    StorageIO podcasts are also available at StorageIO.tv

    StorageIO Webinars and Industry Events

    EMCworld (Las Vegas) May 2-4, 2016

    Interop (Las Vegas) May 4-6 2016

    NAB (Las Vegas) April 19-20, 2016

    TBA – March 31, 2016

    Redmond Magazine Gridstore (How to Migrate from VMware to Hyper-V) February 25, 2016 Webinar (11AM PT)

    TBA – February 23, 2016

    Redmond Magazine and Dell Foglight – Manage and Solve Virtualization Performance Issues Like a Pro (Webinar 9AM PT) – January 19, 2016

    See more webinars and other activities on the Server StorageIO Events page here.

    From StorageIO Labs

    Research, Reviews and Reports

    Quick Look: What’s the Best Enterprise HDD for a Content Server?

    Insight for Effective Server Storage I/O decision-making
This StorageIO® Industry Trends Perspectives Solution Brief and Lab Review (compliments of Seagate and Servers Direct) looks at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate (www.seagate.com) Enterprise Hard Disk Drives (HDD).

I was given the opportunity to do some hands-on testing, running different application workloads on a 2U content solution platform with various Seagate Enterprise 2.5” HDDs, including Seagate’s Enterprise Performance HDDs with the enhanced caching feature.

Read more in this Server StorageIO Industry Trends Perspective white paper and lab review.

    Looking for NVM including SSD information? Visit the Server StorageIO www.thessdplace.com and www.thenvmeplace.com micro sites. View other StorageIO lab review and test drive reports here.

    Server StorageIO Recommended Reading List

The following are various recommended readings including books, blogs and videos. If you have not done so recently, also check out the Intel Recommended Reading List (here) where you will also find a couple of mine as well as books from others. For this month's recommended reading, it's a blog site. If you have not visited Duncan Epping's (@DuncanYB) Yellow-Bricks site, you should, particularly if you are interested in virtualization, high availability and related themes.

Seven Databases in Seven Weeks, a guide to NoSQL, via Amazon.com

Granted, Duncan, being a member of the VMware CTO office, covers a lot of VMware related themes; however, as the author of several books, he also covers non-VMware topics. Duncan recently did a really good and simple post about rebuilding a failed disk in a VMware VSAN vs. in a legacy RAID or erasure code based storage solution.

One of the things that struck me as important in what Duncan wrote is avoiding apples to oranges comparisons. What I mean by this is that it is easy to compare traditional parity based or mirror type solutions that chunk or shard data on a KByte basis spread over disks vs. data that is chunked or sharded on a GByte (or larger) basis over multiple servers and their disks. Anyway, check out Duncan's site and recent post by clicking here.

    Server StorageIO Industry Resources and Links

    Check out these useful links and pages:

    storageio.com/links
    objectstoragecenter.com
    storageioblog.com/data-protection-diaries-main/
    storageperformance.us
thenvmeplace.com
    thessdplace.com
storageio.com/performance
    storageio.com/raid
    storageio.com/ssd

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    RIP Windows SIS (Single Instance Storage), or at least in Server 2016


    server storage I/O trends

As a Microsoft MVP, I received a partner communication from Microsoft today with a heads up, to pass on to others, that Single Instance Storage (SIS) has been removed from Windows Server 2016 (read the Microsoft announcement here, or below). Windows SIS is part of Microsoft's portfolio of tools and technology for implementing Data Footprint Reduction (DFR).

Granted, Windows Server 2016 has not been released yet; however, you can download and try out the latest release, Technical Preview 4 (TP4); get the bits from Microsoft here. Learn more about some of the server and storage I/O enhancements in TP4, including Storage Spaces Direct, here.

    Partner Communication from Microsoft

    Partner Communication
    Please relay or forward this notification to ISVs and hardware partners that have used Single Instance Storage (SIS) or implemented the SIS backup API.

    Single Instance Storage (SIS) has been removed from Windows Server 2016
Summary: Single Instance Storage (SIS), a file system filter driver used for NTFS file deduplication, has been removed from Windows Server. In December 2015, the SIS feature was completely removed from Windows Server and Windows Storage Server editions. SIS was officially deprecated in Windows Server 2012 R2 in this announcement and will be removed from future Windows Server Technical Preview releases.

    Call to action:
    Storage vendors that have any application dependencies on legacy SIS functions or SIS backup and restore APIs should verify that their applications behave as expected on Windows Server 2016 and Windows Storage Server 2016. Windows Server 2012 included Microsoft’s next generation of deduplication technology that uses variable-sized chunking and hashing and offers far superior deduplication rates. Users and backup vendors have already moved to support the latest Microsoft deduplication technology and should continue to do so.

    Background:
SIS had been developed and used in Windows Server since 2000, when it was part of Remote Installation Services. SIS became a general purpose file system filter driver in Windows Storage Server 2003 and the SIS groveler (the deduplication engine) was included in Windows Storage Server. In Windows Storage Server 2008, the SIS legacy read/write filter driver was upgraded to a mini-filter and it shipped in Windows Server 2008, Windows Server 2012 and Windows Server 2012 R2 editions. Creating SIS-controlled volumes could only occur on Windows Storage Server; however, all editions of Windows Server could read and write to volumes that were under SIS control and could back up and restore volumes that had SIS applied.

    Volumes using SIS that are restored or plugged into Windows Server 2016 will only be able to read data that was not deduplicated. Prior to migrating or restoring a volume, users must remove SIS from the volume by copying it to another location or removing SIS using SISadmin commands.

    The SIS components and features:

    • SIS Groveler. The SIS Groveler searched for files that were identical on the NTFS file system volume. It then reported those files to the SIS filter driver.
    • SIS Storage Filter. The SIS Storage Filter was a file system filter that managed duplicate copies of files on logical volumes. This filter copied one instance of the duplicate file into the Common Store. The duplicate copies were replaced with a link to the Common Store to improve disk space utilization.
    • SIS Link. SIS links were pointers within the file system, maintaining both application and user experience (including attributes such as file size and directory path) while I/O was transparently redirected to the actual duplicate file located within the SIS Common Store.
    • SIS Common Store. The SIS Common Store served as the repository for each file identified as having duplicates. Each SIS-maintained volume contained one SIS Common Store, which contained all of the merged duplicate files that exist on that volume.
    • SIS Administrative Interface. The SIS Administrative Interface gave network administrators easy access to all SIS controls to simplify management.
    • SIS Backup API. The SIS Backup API (Sisbkup.dll) helped OEMs create SIS-aware backup and restoration solutions.

    References:
    https://msdn.microsoft.com/en-us/library/windows/desktop/aa362538(v=vs.85).aspx
    https://msdn.microsoft.com/en-us/library/windows/desktop/aa362512(v=vs.85).aspx
    https://msdn.microsoft.com/en-us/library/dexter.functioncatall.sis(v=vs.90).aspx
    https://blogs.technet.com/b/filecab/archive/2012/05/21/introduction-to-data-deduplication-in-windows-server-2012.aspx
    https://blogs.technet.com/b/filecab/archive/2006/02/03/single-instance-store-sis-in-windows-storage-server-r2.aspx
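
Speaking of verifying behavior before migrating, the following is a rough, minimal Python sketch (my own illustration, not from Microsoft) that inventories files on a volume carrying the reparse point attribute, since SIS links are implemented as reparse points. Note it flags all reparse points (symlinks, dedup and others), not only SIS links, and the volume path is a hypothetical example; run it on Windows.

    import os
    import stat

    def find_reparse_points(root):
        # Walk the volume; SIS links (like other reparse points) carry
        # FILE_ATTRIBUTE_REPARSE_POINT in their Windows file attributes.
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    attrs = os.stat(path, follow_symlinks=False).st_file_attributes
                except OSError:
                    continue  # skip files we cannot stat
                if attrs & stat.FILE_ATTRIBUTE_REPARSE_POINT:
                    yield path

    # Hypothetical SIS-managed volume to inventory before migration
    for path in find_reparse_points("D:\\"):
        print(path)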

    What this all means

Like it or not, SIS is being removed from Windows Server 2016, replaced by the newer Microsoft deduplication or data footprint reduction (DFR) technology.
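
For perspective on why the newer approach wins, here is a minimal, simplified Python sketch of variable-sized (content-defined) chunking with hashing, the general technique behind the newer Windows Server deduplication. This is my illustration of the concept, not Microsoft's implementation; production systems use true rolling hashes (e.g. Rabin fingerprints) over a sliding window plus compression and other optimizations.

    import hashlib

    def chunks(data, mask=0x0FFF, min_size=2048, max_size=16384):
        # Declare a chunk boundary where a running hash of the bytes matches
        # a bit mask (roughly one boundary per 4 KB here), bounded by minimum
        # and maximum chunk sizes.
        start, h = 0, 0
        for i, byte in enumerate(data):
            h = (h * 31 + byte) & 0xFFFFFFFF  # simple running hash, not a true rolling hash
            size = i - start + 1
            if (size >= min_size and (h & mask) == mask) or size >= max_size:
                yield data[start:i + 1]
                start, h = i + 1, 0
        if start < len(data):
            yield data[start:]

    # Deduplicate: keep one copy per unique chunk, keyed by its SHA-256 hash
    store = {}
    data = open("example.bin", "rb").read()  # hypothetical input file
    for c in chunks(data):
        store.setdefault(hashlib.sha256(c).hexdigest(), c)
    print(len(data), "bytes stored as", len(store), "unique chunks")

Because boundaries depend on content rather than fixed offsets, inserting a few bytes near the front of a file only changes the chunks around the edit, so the rest still deduplicate, which is where the superior rates come from.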

    You have been advised…

    RIP Windows SIS

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    Server StorageIO December 2015 Update Newsletter


    Server and StorageIO Update Newsletter

    Volume 15, Issue XII – End of Year (EOY) Edition

    Hello and welcome to this December 2015 Server StorageIO update newsletter.

Season's Greetings and Happy New Year.

Winter has arrived here in the northern hemisphere and it is also the last day of 2015 (e.g. End Of Year or EOY). For some this means relaxing and having fun after a busy year; for others, it's the last day of the most important quarter of the most important year ever, particularly if you are involved in sales or spending.

This is also that time of year when predictions for 2016 start streaming out as well as reflections looking back at 2015 appear (more on these in January). Another EOY activity is planning for 2016 as well as getting items ready for roll-out or launch in the new year. Overall 2015 has been a very good year with many things in the works, both public facing as well as several behind the scenes, some of which will start to appear throughout 2016.

Enjoy this abbreviated edition of the Server StorageIO update newsletter and watch for new tips, articles, predictions, StorageIO lab report reviews, blog posts, videos and podcasts along with in the news commentary appearing soon.

    Thank you for enabling a successful 2015 and wishing you all a prosperous new year in 2016.

    Cheers GS

    In This Issue

  • Tips and Articles
  • Events and Webinars
  • Resources and Links

StorageIO Tips and Articles

    Recent Server StorageIO articles appearing in different venues include:

    • IronMountain:  5 Noteworthy Data Privacy Trends From 2015
    • Virtual Blocks (VMware Blogs):  Part III EVO:RAIL – When And Where To Use It?
    • InfoStor:  Object Storage Is In Your Future
    • InfoStor:  Water, Data and Storage Analogy

Check out these resources and links on technology, techniques, trends as well as tools. View more tips and articles here.

    StorageIO Webinars and Industry Events

    EMCworld (Las Vegas) May 2-4, 2016

Interop (Las Vegas) May 4-6, 2016

    NAB (Las Vegas) April 19-20, 2016

    Redmond Magazine Gridstore (How to Migrate from VMware to Hyper-V) February 25, 2016 Webinar (11AM PT)

    See more webinars and other activities on the Server StorageIO Events page here.

    Server StorageIO Industry Resources and Links

    Check out these useful links and pages:

    storageio.com/links
    objectstoragecenter.com
    storageioblog.com/data-protection-diaries-main/
    storageperformance.us
thenvmeplace.com
    thessdplace.com
    storageio.com/raid
    storageio.com/ssd

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Server StorageIO November 2015 Update Newsletter


    Server and StorageIO Update Newsletter

    Volume 15, Issue XI – November 2015

Hello and welcome to this November 2015 Server StorageIO update newsletter. Winter has arrived here in the northern hemisphere, although technically it's still fall until the winter solstice in December. Regardless of whether it is summer or winter in your hemisphere, 2015 is about to wrap up, meaning end of year (EOY) activities.

EOY activities can mean final shopping or acquisitions for technology and services, or simply for home and fun. This is also that time of year when predictions for 2016 start streaming out as well as reflections looking back at 2015 appear (let's save those for December ;). Another EOY activity is planning for 2016 as well as getting items ready for roll-out or launch in the new year. Needless to say there is a lot going on, so with that, enjoy this edition of the Server StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts along with in the news commentary appearing soon.

    Cheers GS

    In This Issue

  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Events and Webinars
  • Recommended Reading List
  • Resources and Links

StorageIOblog Posts

    Recent and popular Server StorageIOblog posts include:

    View other recent as well as past blog posts here

    StorageIO Commentary in the news

Recent Server StorageIO commentary and industry trends perspectives about news, activities, tips, and announcements.

    • TheFibreChannel.com: Industry Analyst Interview: Greg Schulz, StorageIO
    • EnterpriseStorageForum: Comments Handling Virtual Storage Challenges
    • PowerMore (Dell): Q&A: When to implement ultra-dense storage

    View more Server, Storage and I/O hardware as well as software trends comments here


    StorageIO Tips and Articles

    Recent Server StorageIO articles appearing in different venues include:

    • Virtual Blocks (VMware Blogs):  EVO:RAIL Part II – Why And When To Use It?
This is the second of a multi-part series looking at Converged Infrastructures (CI), Hyper-Converged Infrastructures (HCI), Cluster in Box (CiB) and other unified solution bundles. There is a trend of industry adoption talking about CI, HCI, CiB and other bundled solutions, along with growing IT customer adoption and deployment. Different sized organizations are looking at various types of CI solutions to meet various application and workload needs. Read more here and part I here.
    • TheFibreChannel.com:  Industry Analyst Interview: Greg Schulz, StorageIO
      In part one of a two part article series, Frank Berry, storage industry analyst and Founder of IT Brand Pulse and editor of TheFibreChannel.com, recently spoke with StorageIO Founder Greg Schulz about Fibre Channel SAN integration with OpenStack, why Rackspace is using Fibre Channel and more. Read more here
    • CloudComputingAdmin.com:  Cloud Storage Decision Making – Using Microsoft Azure for cloud storage
      Let’s say that you have been tasked with, or decided that it is time to use (or try) public cloud storage such as Microsoft Azure. Ok, now what do you do and what decisions need to be made? Keep in mind that Microsoft Azure like many other popular public clouds provides many different services available for fee (subscription) along with free trials. These services include applications, compute, networking, storage along with development and management platform tools. Read more here.

Check out these resources and links on technology, techniques, trends as well as tools. View more tips and articles here.

    StorageIO Videos and Podcasts

StorageIO podcasts are also available at StorageIO.tv

    StorageIO Webinars and Industry Events

    Deltaware Emerging Technology Summit November 10, 2015

    Dell Data Protection Summit Nov 4, 2015 7AM PT

    Microsoft MVP Summit Nov 2-5, 2015

    See more webinars and other activities on the Server StorageIO Events page here.

    Server StorageIO Recommended Reading List

    The following are various recommended reading including books, blogs and videos. If you have not done so recently, also check out the Intel Recommended Reading List (here) where you will also find a couple of my books.

In case you had not heard, Microsoft recently released the bits (e.g. software download) for Windows Server 2016 Technical Preview 4 (TP4). TP4 is the successor to Technical Preview 3 (TP3), which was released this past August, and is the most recent public preview version of the next Windows Server. TP4 adds a new tiering capability where Windows and Storage Spaces can cache and migrate data between Hard Disk Drives (HDD) and Non-Volatile Memory (NVM) including flash SSD. The new tiering feature supports a mix of HDD and NVM with flash SSD (including NVM Express or NVMe), as well as an all-NVM scenario. Yes, that is correct, tiering with all NVM is not a typo; instead it enables using lower latency, faster NVM along with lower cost, higher capacity flash SSD. Learn more about what's in TP4 from a server and storage I/O perspective in this Microsoft post, as well as more about S2D in these Microsoft Technet posts here and here. You can get the Windows Server 2016 TP4 bits here, which are already running in the Server StorageIO lab.

    Server StorageIO Industry Resources and Links

    Check out these useful links and pages:

    storageio.com/links
    objectstoragecenter.com
    storageioblog.com/data-protection-diaries-main/
    storageperformance.us
    thenvmeplace.com
    thessdplace.com
    storageio.com/raid
    storageio.com/ssd

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Water, Data and Storage Analogy


    server storage I/O trends

Recently I did a piece over at InfoStor titled "Water, Data and Storage Analogy". Besides being taken for granted, with all of us dependent on them, several other similarities exist between water, data, and storage. In addition to linking to that piece, this is a companion with some different images to help show the similarities between water, data and storage, if for no other reason than to have a few moments of fun. Read the entire piece here.

    Water, Data and Storage Similarities

Water can get cold and freeze; data can also go cold, becoming dormant and a candidate for archiving or cold cloud storage.

Like data and storage, water can be frozen

Various types of storage drives (HDD & SSD)

Different types and tiers of frozen water storage containers

    Data, like water, can move or be dormant, can be warm and active, or cold, frozen and inactive. Water, data and storage can also be used for work or fun.

Fishing on water vs. phishing for data on storage

Eagle fly fishing on water over the St. Croix river

Data can be transformed into 3D images and video; water transformed into snow can also be made into various virtual images or things.

Data on storage can be transformed like water (e.g. snow)

Data, like water, can exist in clouds, resulting in storms that, if not properly prepared for, can cause problems.

Data and storage can be damaged, including by water; water can in turn be damaged by putting things into it or into the environment.

Water can destroy things; data and storage can be destroyed

    There are data lakes, data pools, data ponds, oceans of storage and seas of data as well as data centers.

Rows of servers and storage in a data center

An indoor water lake (e.g. not an indoor data lake)

As water flows downstream it tends to increase in volume as tributaries or streams add to the volume in lakes, reservoirs, rivers and streams. Another similarity is that water will tend to flow and seek its level, filling up space, while data can involve a seek on an HDD in addition to filling up space.

Flood of water vs. flood of data (e.g. need for Data Protection)

    There are also hybrid uses (or types) of water, just like hybrid technologies for supporting data infrastructures.

Hybrid automobile (an Amphicar) on water

    What this all means

    We might take water, data and storage for granted, yet they each need to be managed, protected, preserved and served. Servers utilize storage to support applications for managing water; water is used for cooling and powering storage, not to mention for making coffee for those who take care of IT resources.

    When you hear about data lakes, ponds or pools, keep in mind that there are also data streams, all of which need to be managed to prevent the flood of data from overwhelming you.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    NVMe Place NVM Non Volatile Memory Express Resources

    Updated 8/31/19
    NVMe place server Storage I/O data infrastructure trends

    Welcome to NVMe place NVM Non Volatile Memory Express Resources. NVMe place is about Non Volatile Memory (NVM) Express (NVMe) with Industry Trends Perspectives, Tips, Tools, Techniques, Technologies, News and other information.

    Disclaimer

    Please note that this NVMe place resources site is independent of the industry trade and promoters group NVM Express, Inc. (e.g. www.nvmexpress.org). NVM Express, Inc. is the sole owner of the NVM Express specifications and trademarks.

    NVM Express Organization
    Image used with permission of NVM Express, Inc.

    Visit the NVM Express industry promoters site here to learn more about their members, news, events, product information, software driver downloads, and other useful NVMe resources content.


    The NVMe Place resources and NVM including SCM, PMEM, Flash

The NVMe Place covers Non Volatile Memory (NVM) including NAND flash, storage class memories (SCM) and persistent memories (PM), which are storage memory mediums, while NVM Express (NVMe) is an interface for accessing NVM. This NVMe resources page is a companion to The SSD Place, which has a broader Non Volatile Memory (NVM) focus including flash among other SSD topics. NVMe is a new server storage I/O access method and protocol for fast access to NVM based storage and memory technologies. NVMe is an alternative to existing block based server storage I/O access protocols such as AHCI/SATA and SCSI/SAS commonly used for accessing Hard Disk Drives (HDD) along with SSD among other things.

    Comparing AHCI/SATA, SCSI/SAS and NVMe all of which can coexist to address different needs.

Leveraging the standard PCIe hardware interface, NVMe based devices (that have an NVMe controller) can be accessed via various operating systems (and hypervisors such as VMware ESXi) with either in the box drivers or optional third-party device drivers. Devices that support NVMe can be packaged in a 2.5″ drive format that uses a converged 8637/8639 connector (e.g. PCIe x4), coexisting with SAS and SATA devices, or as add-in card (AIC) PCIe cards supporting x4, x8 and other implementations. Initially, NVMe is being positioned as a back-end to servers (or storage systems) interface for accessing fast flash and other NVM based devices.

    NVMe as a “back-end” I/O interface for NVM storage media

    NVMe as a “front-end” interface for servers or storage systems/appliances
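
For the hands-on inclined, here is a minimal Python sketch enumerating NVMe controllers on a Linux host via sysfs, as exposed by the in-box driver. The /sys/class/nvme layout and attribute names (model, serial, firmware_rev) are as exposed by recent Linux kernels; treat them as assumptions and adjust for your distribution.

    from pathlib import Path

    def read_attr(ctrl, name):
        # Read a sysfs attribute if present (e.g. model, serial, firmware_rev)
        p = ctrl / name
        return p.read_text().strip() if p.exists() else "n/a"

    # Each nvme* entry under /sys/class/nvme is an NVMe controller
    for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
        print(ctrl.name, read_attr(ctrl, "model"),
              read_attr(ctrl, "serial"), read_attr(ctrl, "firmware_rev"))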

NVMe has also been shown to work over low latency, high-speed RDMA based network interfaces including RoCE (RDMA over Converged Ethernet) and InfiniBand (read more here, here and here involving Mangstor, Mellanox and PMC among others). What this means is that, like SCSI based SAS, which can be both a back-end drive (HDD, SSD, etc.) access protocol and interface, NVMe can be used on the back-end as well as on the front-end as a server-to-storage interface, similar to how Fibre Channel SCSI_Protocol (aka FCP), SCSI based iSCSI, and SCSI RDMA Protocol via InfiniBand (among others) are used.

    NVMe features

    Main features of NVMe include among others:

• Lower latency due to improved drivers and increased queues (and queue sizes)
• Lower CPU used to handle larger numbers of I/Os (more CPU available for useful work)
• Higher I/O activity rates (IOPs) to boost productivity and unlock the value of fast flash and NVM
    • Bandwidth improvements leveraging various fast PCIe interface and available lanes
    • Dual-pathing of devices like what is available with dual-path SAS devices
    • Unlock the value of more cores per processor socket and software threads (productivity)
    • Various packaging options, deployment scenarios and configuration options
    • Appears as a standard storage device on most operating systems
    • Plug-play with in-box drivers on many popular operating systems and hypervisors

    NVMe and shared PCIe (e.g. shared PCIe flash DAS)

    NVMe related content and links

    The following are some of my tips, articles, blog posts, presentations and other content, along with material from others pertaining to NVMe. Keep in mind that the question should not be if NVMe is in your future, rather when, where, with what, from whom and how much of it will be used as well as how it will be used.

    • How to Prepare for the NVMe Server Storage I/O Wave (Via Micron.com)
    • Why NVMe Should Be in Your Data Center (Via Micron.com)
    • NVMe U2 (8639) vs. M2 interfaces (Via Gamersnexus)
    • Enmotus FuzeDrive MicroTiering (StorageIO Lab Report)
    • EMC DSSD D5 Rack Scale Direct Attached Shared SSD All Flash Array Part I (Via StorageIOBlog)
    • Part II – EMC DSSD D5 Direct Attached Shared AFA (Via StorageIOBlog)
    • NAND, DRAM, SAS/SCSI & SATA/AHCI: Not Dead, Yet! (Via EnterpriseStorageForum)
    • Non Volatile Memory (NVM), NVMe, Flash Memory Summit and SSD updates (Via StorageIOblog)
    • Microsoft and Intel showcase Storage Spaces Direct with NVM Express at IDF ’15 (Via TechNet)
• NVM Express solutions (Via SuperMicro)
    • Gaining Server Storage I/O Insight into Microsoft Windows Server 2016 (Via StorageIOblog)
    • PMC-Sierra Scales Storage with PCIe, NVMe (Via EEtimes)
    • RoCE updates among other items (Via InfiniBand Trade Association (IBTA) December Newsletter)
    • NVMe: The Golden Ticket for Faster Flash Storage? (Via EnterpriseStorageForum)
    • What should I consider when using SSD cloud? (Via SearchCloudStorage)
    • MSP CMG, Sept. 2014 Presentation (Flash back to reality – Myths and Realities – Flash and SSD Industry trends perspectives plus benchmarking tips)– PDF
    • Selecting Storage: Start With Requirements (Via NetworkComputing)
    • PMC Announces Flashtec NVMe SSD NVMe2106, NVMe2032 Controllers With LDPC (Via TomsITpro)
    • Exclusive: If Intel and Micron’s “Xpoint” is 3D Phase Change Memory, Boy Did They Patent It (Via Dailytech)
    • Intel & Micron 3D XPoint memory — is it just CBRAM hyped up? Curation of various posts (Via Computerworld)
    • How many IOPS can a HDD, HHDD or SSD do (Part I)?
    • How many IOPS can a HDD, HHDD or SSD do with VMware? (Part II)
    • I/O Performance Issues and Impacts on Time-Sensitive Applications (Via CMG)
    • Via EnterpriseStorageForum: 5 Hot Storage Technologies to Watch
    • Via EnterpriseStorageForum: 10-Year Review of Data Storage

Non-Volatile Memory (NVM) Express (NVMe) continues to evolve as a technology for enabling and improving server storage I/O for NVM including NAND flash SSD storage. NVMe streamlines performance, enabling more work to be done (e.g. IOPs) and data to be moved (bandwidth) at a lower response time using less CPU.

    NVMe and SATA flash SSD performance

The above figure is a quick look comparing NAND flash SSD being accessed via SATA III (6Gbps) on the left and NVMe (x4) on the right. As with any server storage I/O performance comparison there are many variables, so take the results with a grain of salt. While IOPs and bandwidth are often discussed, keep in mind that NVMe's new protocol, drivers and device controllers streamline I/O so that less CPU is needed.
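
To put your own numbers behind such comparisons, below is a minimal Python sketch of a random read latency probe. It is illustrative only (not the tool used for the above figure); the device path is a hypothetical placeholder, root privileges are typically required for raw devices, and results will include page cache effects unless O_DIRECT or similar is used.

    import os
    import random
    import time

    def random_read_latency_us(path, io_size=4096, iterations=1000):
        # Issue random positioned reads, return average latency in microseconds
        fd = os.open(path, os.O_RDONLY)
        size = os.lseek(fd, 0, os.SEEK_END)
        start = time.perf_counter()
        for _ in range(iterations):
            offset = random.randrange(0, max(size - io_size, 1))
            os.pread(fd, io_size, offset)
        elapsed = time.perf_counter() - start
        os.close(fd)
        return elapsed / iterations * 1e6

    # Hypothetical device paths; compare a SATA SSD against an NVMe SSD
    print("nvme:", random_read_latency_us("/dev/nvme0n1"), "us")
    print("sata:", random_read_latency_us("/dev/sda"), "us")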

    Additional NVMe Resources

    Also check out the Server StorageIO companion micro sites landing pages including thessdplace.com (SSD focus), data protection diaries (backup, BC/DR/HA and related topics), cloud and object storage, and server storage I/O performance and benchmarking here.

If you are into the real bits and bytes details, such as device driver level content, check out the Linux NVMe reflector forum. The linux-nvme list is a good source for developers to stay up on what is happening in and around device drivers and associated topics.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.



    Wrap Up

    Watch for updates with more content, links and NVMe resources to be added here soon.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Server StorageIO October 2015 Update Newsletter


    Server and StorageIO Update Newsletter

    Volume 15, Issue X – Industry Trends, M&A, PTSA

Hello and welcome to this October 2015 Server StorageIO update newsletter. Fall has arrived here in the northern hemisphere, which means it's spring in the southern hemisphere, and getting colder here. While fall means cooler outdoor temperatures with winter just around the corner, in the IT/ITC industry, particularly the data infrastructure sector (server, storage, I/O networking, hardware, software, cloud, physical, software defined virtual), things are very hot. Sure signs include the various industry and vendor focused conferences, road shows and mini-events with associated new Products, Technology or Services Announcements (PTSA). There are also the various mergers and acquisitions (M&A) that have occurred throughout the year, including the recent Dell buying EMC, and Western Digital (WD) buying SanDisk among others.

This edition of the Server StorageIO update newsletter has a focus on industry trends perspectives including recent M&A and PTSA activity. In addition to fall M&A and PTSA activity, there are also plenty of conferences, seminars, workshops, webinars and other events, some of which you can see here on the Server StorageIO events page.

    On a slightly different note, for those interested and not aware of the European Union (EU) ruling earlier this month on data privacy (e.g. Safe Harbor), here and here are a couple of links to stories discussing the new ruling changes between the EU and US (among other countries). The EU data privacy rulings involve personal data being moved out of EU countries to US data centers such as cloud and application services firms.

Enjoy this edition of the Server StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts along with in the news commentary appearing soon.

    Cheers GS

    In This Issue

  • Feature Topic
  • Industry Trends News
  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Videos and Podcasts
  • Events and Webinars
  • Recommended Reading List
  • Industry Activity Trends
  • Server StorageIO Lab reports
  • New and Old Vendor Update
  • Resources and Links

Feature Topic

This month's feature topic is industry trends perspectives including M&A activity.

    Some M&A, IPO and divestiture activity includes:

Continue reading more about NVM, NVMe, NAND flash, SSD, server and storage I/O related topics at www.thessdplace.com as well as about I/O performance, monitoring and benchmarking tools at www.storageperformance.us.


    StorageIOblog Posts

    Recent and popular Server StorageIOblog posts include:

    View other recent as well as past blog posts here

    Server Storage I/O Industry Activity Trends (Cloud, Virtual, Physical)


    Some new Products Technology Services Announcements (PTSA) include:

• Amazon Web Service (AWS) Simple Storage Service (S3) Infrequent Access (IA) storage class for inactive data needing immediate access, vs. Glacier for cold or frozen (dormant) data with slow or time delayed access. AWS also announced the Snowball bulk data import/export 50TB appliance service in addition to their earlier offered capabilities (a minimal sketch of using the IA storage class appears after this list).
    • EMC Rexray (part of EMCcode) and Mesosphere (for Mesos data center operating system) have joined to enable persistent Docker volumes for Mesos (e.g. data center operating system platform).
• Microsoft Azure recent enhancements include file access of cloud storage (on-premises and within Azure cloud) leveraging SMB interfaces. Here is a primer on Azure cloud storage service offerings. View other recent Azure Cloud Storage, Compute, Database and Data Analytics service offerings here. In addition to Microsoft Azure cloud offerings or the Windows 10 desktop operating system, you can also download Windows Server 2016 Technical Preview 3 (TP3) and see what's new here. Some of the features include Storage Spaces Direct (e.g. DAS storage) and replication among other features.
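
As a quick illustration of the new IA storage class mentioned in the AWS item above, here is a minimal Python boto3 sketch writing an object with the STANDARD_IA storage class; bucket and key names are hypothetical placeholders and AWS credentials are assumed to be configured.

    import boto3

    s3 = boto3.client("s3")  # assumes AWS credentials are configured

    # Upload an object directly into the Infrequent Access (IA) storage class
    with open("2015-q3.csv", "rb") as body:
        s3.put_object(
            Bucket="example-archive-bucket",  # hypothetical bucket
            Key="reports/2015-q3.csv",
            Body=body,
            StorageClass="STANDARD_IA",       # immediate access, lower storage cost
        )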

    View other recent news and industry trends here

    StorageIO Commentary in the news

Recent Server StorageIO commentary and industry trends perspectives about news, activities, tips, and announcements.

    • NetworkComputing: Dell buying EMC: The Storage Ramifications
    • EnterpriseTech: VMware Targets Synergies in Dell EMC Deal 
    • HPCwire: Dell to Buy EMC for $67B
    • EnterpriseStorageForum: Data Storage: Do We Really Need to Store Everything?
    • EnterpriseStorageForum: Why Hard Drives Are Here to Stay (For Now)
    • EnterpriseStorageForum: Top Ten Ways to Use OpenStack for Storage
    • EnterpriseStorageForum: Are We Heading for Storage Armageddon?

    View more Server, Storage and I/O hardware as well as software trends comments here

    Vendors you may not have heard of

Various vendors (and service providers) you may not know of or have heard about recently.

    • Hedvig – Converged server storage software management tools
    • Infinidat – Another Moshe Yanai Storage System Startup
    • Mesosphere – Mesos Data Center Operating System management tools
    • Plexxi – Networking startup with former EMC executive Rich Napolitano as CEO
    • ScaleMP – Scale-out server aggregation management tools

    Check out more vendors you may know, have heard of, or that are perhaps new on the Server StorageIO Industry Links page here (over 1,000 entries and growing).

    StorageIO Tips and Articles

    Recent Server StorageIO articles appearing in different venues include:

    • Virtual Blocks (VMware Blogs):  EVO:RAIL – What Is It And Why Does It Matter?
This is the first of a multi-part series looking at Converged Infrastructures (CI), Hyper-Converged Infrastructures (HCI), Cluster in Box (CiB) and other unified solution bundles. There is a trend of industry adoption talking about CI, HCI, CiB and other bundled solutions, along with growing IT customer adoption and deployment. Different sized organizations are looking at various types of CI solutions to meet various application and workload needs. Read more here.
    • WServerNews.com:  Cloud (Microsoft Azure) storage considerations
Let's say that you have been tasked with, or decided that it is time to use (or try) public cloud storage such as Microsoft Azure. Ok, now what do you do and what decisions need to be made? Keep in mind that Microsoft Azure, like many other popular public clouds, provides many different services available for fee (subscription) along with free trials. These services include applications, compute, networking, storage along with development and management platform tools. Read more here.
    • NetworkComputing:  Selecting Storage: Buzzword Bingo
      The storage industry is rife with buzzwords. Here are some of the popular ones storage buyers need to navigate carefully to find storage products that truly meet their needs. Read more here.

    • InfoStor:  What’s The Best Storage Benchmark? It Depends…
    • EnterpriseStorageForum:  NAND, DRAM, SAS/SCSI & SATA/AHCI: Not Dead, Yet!

Check out these resources and links on technology, techniques, trends as well as tools. View more tips and articles here.

    StorageIO Videos and Podcasts

StorageIO podcasts are also available at StorageIO.tv

    StorageIO Webinars and Industry Events

    Deltaware Emerging Technology Summit November 10, 2015

    Dell Data Protection Summit Nov 4, 2015 7AM PT

    Microsoft MVP Summit Nov 2-5, 2015

    Server Storage I/O Dutch Workshop Seminar Series
    Nijkerk Netherlands October 13-16 2015

    October 13 – Symposium: Software Defined Storage Management
    October 14 – Server Storage I/O Fundamental Trends
    October 15 – Symposium – Data Center Infrastructure Management (DCIM)
    October 16 – “Converged Day” Server and Storage Decision making

    Learn more and register at the Brouwer Consultancy website here.

    September 23 – Webinar Redmond Magazine & Dell Data Protection
    The New World Order of Data Protection – Focus on Recovery
    Learn more about the 9Rs of data protection and recovery

    See more webinars and other activities on the Server StorageIO Events page here.

    From StorageIO Labs

    Research, Reviews and Reports

    Quick Look: SATA and NVMe Flash SSD Performance

Non-Volatile Memory (NVM) Express (NVMe) continues to evolve as a technology for enabling and improving server storage I/O for NVM including NAND flash SSD storage. NVMe streamlines performance, enabling more work to be done (e.g. IOPs) and data to be moved (bandwidth) at a lower response time using less CPU. The above figure is a quick look comparing NAND flash SSD being accessed via SATA III (6Gbps) on the left and NVMe (x4) on the right. As with any server storage I/O performance comparison there are many variables, so take the results with a grain of salt. While IOPs and bandwidth are often discussed, keep in mind that NVMe's new protocol, drivers and device controllers streamline I/O so that less CPU is needed. Learn more about NVM, NVMe, flash, SSD and related topics at www.thessdplace.com.

    View other StorageIO lab review reports here

    Server StorageIO Recommended Reading List

    The following are various recommended reading including books, blogs and videos. If you have not done so recently, also check out the Intel Recommended Reading List (here) where you will also find a couple of my books.

Seven Databases in Seven Weeks, a guide to NoSQL, via Amazon.com

The Human Face of Big Data book review. To say this is a big book would be an understatement; then again, big data is a big topic with a lot of diversity, which you will see once you open the pages. This is physically a big book (11 x 14 inches) with lots of pictures, text, stories, factoids and thought stimulating information on the many facets and dimensions of big data across 224 pages. The Human Face of Big Data is more than a coffee table or picture book, as it is full of information, factoids and perspectives on how information and data surround us every day. Open up a copy of The Human Face of Big Data and you will see examples of how data and information are all around us, and our dependence upon them. Read more here.

    Server StorageIO Industry Resources and Links

    Check out these useful links and pages:

    storageio.com/links
    objectstoragecenter.com
    storageioblog.com/data-protection-diaries-main/
    storageperformance.us
thenvmeplace.com
    thessdplace.com
    storageio.com/raid
    storageio.com/ssd

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved