Part II – Some server storage I/O workload scripts and results

Updated 1/28/2018

This is the second in a two-part series of posts about using some common server storage I/O workload benchmark tools and scripts. View part I here, which includes an overview, background and information about the tools used along with related topics.

Various NVM flash SSD including NVMe devices

Following are some server I/O benchmark workload scripts to exercise various storage devices such as Non-Volatile Memory (NVM) flash Solid State Devices (SSDs) or Hard Disk Drives (HDDs) among others.

The Workloads

Besides changing the I/O size, read/write mix and random vs. sequential mix, other factors that impact workload performance results include the number of threads, workers and jobs. Note that in the workload steps, the larger 1MB and sequential scenarios use fewer threads and workers than the smaller, IOP or activity focused workloads. Too many threads or workers can cause overhead and you will reach a point of diminishing returns; likewise, too few will not drive the system under test (SUT) or device under test (DUT) to its full potential. If you are not sure how many threads or workers to use, run some short calibration tests to see the results before doing a larger, longer test.
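
For example, a quick calibration pass can reuse the same Diskspd parameters shown later in this post with a shorter duration. The following is a minimal sketch (the drive letter, target file, thread count and output file name are placeholders to adjust for your environment):

diskspd.exe -c300G -o160 -t160 -b4K -w30 -W30 -d60 -h -fr N:iobw.tst -L > DiskSPD_Calibration_4KRan70Read_60sec.txt

Vary the -t (threads) and -o (outstanding I/Os) values across a few of these short runs and compare the results to see where adding more concurrency stops helping.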

Keep in mind that the best benchmark or workload is your own application running with a load similar to what you would see in the real world, along with applicable features, configuration and functionality enabled. The second best are those that closely resemble your workload characteristics and that are relevant.

The following workloads involved a system test initiator (STI) server driving workload using the different tools as well as the scripts shown. The STI sends the workload to a SUT or DUT that can be a single drive, card or multiple devices, storage system or appliance. Warning: The following workload tests do both reads and writes, which can be destructive to your device under test. Exercise caution with the device and file name specified to avoid causing a problem that might result in you testing your backup / recovery process. Likewise no warranty is given, implied or made for these scripts or their use or results; they are simply supplied as is for your reference.

The four free tools that I’m using with this set of scripts are:

  • Microsoft Diskspd (free), get the tool and bits here or here (open source), learn more about Diskspd here.
  • FIO.exe (free), get the tool and bits here or here among other venues.
  • Vdbench (free with registration), get the tool and bits here or here among other venues.
  • Iometer (free), get the tool and bits here among other venues.

Notice: While best effort has been made to verify the above links, they may change over time, and you are responsible for verifying the safety of links and your downloads.

Microsoft Diskspd workloads

Note that a 300GB file named iobw.tst on device N: is used as the target for read and write I/Os. There are 160 threads, with I/O sizes of 4KB and 8KB varying from 100% read (0% write), to 70% read (30% write), to 0% read (100% write), using random access (seeks) and no hardware or software caching. Also specified are latency statistics collection, a 30 second warm-up (ramp-up) time, and a quick 5 minute duration (test time). Five minutes is fine for a quick calibration test to verify your environment, however it is relatively short for a real test, which should run for hours or more depending on your needs.

Note that the output results are put into a file with a name describing the test tool, workload and other useful information such as date and time. You may also want to specify a different directory where output files are placed.
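
For example, the first command below could place its results in a separate folder simply by pointing the output redirect at it (the folder name is an illustrative assumption; create it before running):

diskspd.exe -c300G -o160 -t160 -b4K -w0 -W30 -d300 -h -fr  N:iobw.tst -L  > C:\Results\DiskSPD_300G_4KRan100Read_160x160_072416_8AM.txt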

diskspd.exe -c300G -o160 -t160 -b4K -w0 -W30 -d300 -h -fr  N:iobw.tst -L  > DiskSPD_300G_4KRan100Read_160x160_072416_8AM.txt
diskspd.exe -c300G -o160 -t160 -b4K -w30 -W30 -d300 -h -fr  N:iobw.tst -L  > DiskSPD_300G_4KRan70Read_160x160_072416_8AM.txt
diskspd.exe -c300G -o160 -t160 -b4K -w100 -W30 -d300 -h -fr  N:iobw.tst -L  > DiskSPD_300G_4KRan0Read_160x160_072416_8AM.txt
diskspd.exe -c300G -o160 -t160 -b8K -w0 -W30 -d300 -h -fr  N:iobw.tst -L  > DiskSPD_300G_8KRan100Read_160x160_072416_8AM.txt
diskspd.exe -c300G -o160 -t160 -b8K -w30 -W30 -d300 -h -fr  N:iobw.tst -L  > DiskSPD_300G_8KRan70Read_160x160_072416_8AM.txt
diskspd.exe -c300G -o160 -t160 -b8K -w100 -W30 -d300 -h -fr  N:iobw.tst -L  > DiskSPD_300G_8KRan0Read_160x160_072416_8AM.txt
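
If you prefer not to maintain a separate line per read/write mix, the same six random scenarios can be generated with a small wrapper. The following Windows batch sketch loops over block sizes and write percentages using the same flags as above (the wrapper itself, its output naming and the target file are illustrative assumptions, not part of the original scripts):

@echo off
rem Sketch: sweep 4K and 8K random workloads at 0, 30 and 100 percent writes
for %%B in (4K 8K) do (
  for %%W in (0 30 100) do (
    diskspd.exe -c300G -o160 -t160 -b%%B -w%%W -W30 -d300 -h -fr N:iobw.tst -L > DiskSPD_300G_%%BRan_w%%W_160x160.txt
  )
)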

The following Diskspd tests use settings similar to those above, however instead of random, sequential access is specified, threads and outstanding I/Os are reduced, and the I/O size is set to 1MB, then 8KB, with 100% read and 100% write scenarios. The -t option specifies the number of threads and -o the number of outstanding I/Os per thread.

diskspd.exe -c300G -o32 -t32 -b1M -w0 -W30 -d300 -h -si  N:iobw.tst -L  > DiskSPD_300G_1MSeq100Read_32x32_072416_8AM.txt
diskspd.exe -c300G -o32 -t32 -b1M -w100 -W30 -d300 -h -si  N:iobw.tst -L  > DiskSPD_300G_1MSeq0Read_32x32_072416_8AM.txt
diskspd.exe -c300G -o160 -t160 -b8K -w0 -W30 -d300 -h -si  N:iobw.tst -L  > DiskSPD_300G_8KSeq100Read_32x32_072416_8AM.txt
diskspd.exe -c300G -o160 -t160 -b8K -w100 -W30 -d300 -h -si  N:iobw.tst -L  > DiskSPD_300G_8KSeq0Read_32x32_072416_8AM.txt

Fio.exe workloads

Next are the fio workloads, similar to those run using Diskspd except that the sequential scenarios are skipped.

fio --filename=N\:\iobw.tst --filesize=300000M --direct=1  --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=windowsaio  --ba=4k --bs=4k --rwmixread=100 --iodepth=32 --numjobs=5 --exitall --time_based  --ramp_time=30 --runtime=300 --group_reporting --name=xxx  --output=FIO_300000M_4KRan100Read_5x32_072416_8AM.txt
fio --filename=N\:\iobw.tst --filesize=300000M --direct=1  --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=windowsaio  --ba=4k --bs=4k --rwmixread=70 --iodepth=32 --numjobs=5 --exitall --time_based  --ramp_time=30 --runtime=300 --group_reporting --name=xxx  --output=FIO_300000M_4KRan70Read_5x32_072416_8AM.txt
fio --filename=N\:\iobw.tst --filesize=300000M --direct=1  --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=windowsaio  --ba=4k --bs=4k --rwmixread=0 --iodepth=32 --numjobs=5 --exitall --time_based  --ramp_time=30 --runtime=300 --group_reporting --name=xxx  --output=FIO_300000M_4KRan0Read_5x32_072416_8AM.txt
fio --filename=N\:\iobw.tst --filesize=300000M --direct=1  --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=windowsaio  --ba=8k --bs=8k --rwmixread=100 --iodepth=32 --numjobs=5 --exitall --time_based  --ramp_time=30 --runtime=300 --group_reporting --name=xxx  --output=FIO_300000M_8KRan100Read_5x32_072416_8AM.txt
fio --filename=N\:\iobw.tst --filesize=300000M --direct=1  --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=windowsaio  --ba=8k --bs=8k --rwmixread=70 --iodepth=32 --numjobs=5 --exitall --time_based  --ramp_time=30 --runtime=300 --group_reporting --name=xxx  --output=FIO_300000M_8KRan70Read_5x32_072416_8AM.txt
fio --filename=N\:\iobw.tst --filesize=300000M --direct=1  --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=windowsaio  --ba=8k --bs=8k --rwmixread=0 --iodepth=32 --numjobs=5 --exitall --time_based  --ramp_time=30 --runtime=300 --group_reporting --name=xxx  --output=FIO_300000M_8KRan0Read_5x32_072416_8AM.txt
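
For readability, the same fio options can also be kept in a job file rather than one long command line. Below is a minimal sketch of a job file roughly equivalent to the 4K random 70% read run above (the file name fio_4kran70read.fio and the job section name are illustrative assumptions):

# 4K random, 70% read, 5 jobs x 32 queue depth against N:\iobw.tst
[global]
filename=N\:\iobw.tst
filesize=300000M
direct=1
ioengine=windowsaio
rw=randrw
norandommap
randrepeat=0
refill_buffers
ba=4k
bs=4k
iodepth=32
time_based
ramp_time=30
runtime=300
group_reporting
exitall

[4kran70read]
rwmixread=70
numjobs=5

It would then be invoked with something like: fio fio_4kran70read.fio --output=FIO_300000M_4KRan70Read_5x32_072416_8AM.txt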

Vdbench workloads

Next are the Vdbench workloads similar to those used with the Microsoft Diskspd scenarios. In addition to making sure Vdbench is installed and working, you will need to create a text file called seqrxx.txt containing the following:

hd=localhost,jvms=!jvmn
sd=sd1,lun=!drivename,openflags=directio,size=!dsize
wd=mix,sd=sd1
rd=!jobname,wd=mix,elapsed=!etime,interval=!itime,iorate=max,forthreads=(!tthreads),forxfersize=(!worktbd),forseekpct=(!workseek),forrdpct=(!workread),openflags=directio
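
For reference, the !variables in seqrxx.txt are filled in from the command line at run time. With the first command shown below, the parameter file effectively resolves to the following (shown only to illustrate the substitution; it is not another file you need to create):

hd=localhost,jvms=64
sd=sd1,lun=\\.\N:\iobw.tst,openflags=directio,size=300G
wd=mix,sd=sd1
rd=NVME,wd=mix,elapsed=300,interval=30,iorate=max,forthreads=(160),forxfersize=(4k),forseekpct=(100),forrdpct=(100),openflags=directio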

The following are the commands that call the Vdbench script file. Note that Vdbench puts its output files (yes, plural, there are many results) in an output folder.

vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=4k workseek=100 workread=100 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o  vdbench_NNVMe_300GB_64JVM_160TH_4K100Ran100Read_0726166AM
vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=4k workseek=100 workread=70 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_4K100Ran70Read_072416_8AM
vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=4k workseek=100 workread=0 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_4K100Ran0Read_072416_8AM
vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=8k workseek=100 workread=100 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_8K100Ran100Read_072416_8AM
vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=8k workseek=100 workread=70 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_8K100Ran70Read_072416_8AM
vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=8k workseek=100 workread=0 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_8K100Ran0Read_072416_8AM
vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=8k workseek=0 workread=100 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_8K100Seq100Read_072416_8AM
vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=8k workseek=0 workread=70 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_8K100Seq70Read_072416_8AM
vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=8k workseek=0 workread=0 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_8K100Seq0Read_072416_8AM
vdbench -f seqrxx.txt dsize=300G  tthreads=32 jvmn=64 worktbd=1M workseek=0 workread=100 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_32TH_1M100Seq100Read_072416_8AM
vdbench -f seqrxx.txt dsize=300G  tthreads=32 jvmn=64 worktbd=1M workseek=0 workread=0 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_32TH_1M100Seq0Read_072416_8AM

Iometer workloads

Last however not least, let's do an Iometer run. The following command calls an Iometer input file (icf) that you can find here. In that file you will need to make a few changes, including the name of the server where Iometer is running, the description, and the device under test address. For example, in the icf file change SIOSERVER to the name of the server where you will be running Iometer. Also change the address for the DUT, for example N:, to whatever address, drive or mount point you are using. Also update the description accordingly (e.g. "NVME" to "Your test example").

Here is the command line to run Iometer specifying an icf and where to put the results in a CSV file that can be imported into Excel or other tools.

iometer /c  iometer_5work32q_intel_Profile.icf /r iometer_nvmetest_5work32q_072416_8AM.csv

server storage I/O SCM NVM SSD performance

What About The Results?

For context, the following results were run on a Lenovo TS140 (32GB RAM), single socket quad core (3.2GHz) Intel E3-1225 v3 with an Intel NVMe 750 PCIe AiC (Intel SSDPEDMW40). Out of the box Microsoft Windows NVMe drive and controller drivers were used (e.g. 6.3.9600.18203 and 6.3.9600.16421). Operating system is Windows 2012 R2 (bare metal) with NVMe PCIe card formatted with ReFS file system. Workload generator and benchmark driver tools included Microsoft Diskspd version 2.012, Fio.exe version 2.2.3, Vdbench 50403 and Iometer 1.1.0. Note that there are newer versions of the various workload generation tools.

Example results are located here.

Where To Learn More

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Remember, everything is not the same in the data center or with data infrastructures that support different applications.

While some tools are more robust or better than others for different things, ultimately it is usually not the tool that results in a bad benchmark or comparison; it is the configuration, or the use of workload settings that are not relevant or applicable. The best benchmark, workload or simulation is your own application. Second best is one that closely resembles your application workload characteristics. A bad benchmark is one that has no relevance to your environment or application use scenario. Take and treat all benchmark or workload simulation results with a grain of salt, as something to compare, contrast or reference in the proper context.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

NetApp Announces ONTAP 9 software defined storage management

NetApp has announced ONTAP 9 the latest version or generation of their storage software that defines and powers ONTAP storage systems and appliances (e.g. those known by some as FAS among others).

The major theme of ONTAP 9 is simple anywhere, alluding to the fact that the software runs on storage system appliances (e.g. "tin wrapped" or hardware platform based), virtual storage (e.g. what has been known as "edge" in the past), as well as cloud versions (cDOT). The other part of simple, beyond where the software gets deployed and how the resources along with functionality are consumed, ties to management.

This means simple from standalone systems to clusters: ONTAP 9 is focused on consolidation and management across different storage media (HDD and SSD), platforms (engineered, e.g. FAS, to white box), protocols (block, file, object) as well as consumption (deployed on hardware or as software, including cloud).

As part of the announcement NetApp will continue with its engineered hardware platform solutions (e.g. appliances or storage systems) as well as ONTAP Select (third-party storage) and Flex using white box server platforms (e.g. a software defined storage option). This capability provides customers with flexibility on where and how to buy as well as deployment options.

Another dimension to the ONTAP 9 simple theme is support for known workloads such as Oracle RAC, Microsoft SQL Server and VMware among others. ONTAP 9 provides tools for rapid provisioning of storage resources to support those and other application workloads.

Data services feature enhancements include support for new high-capacity read optimized SSDs, along with inline data compaction on 4K boundaries (data chunks) including data reduction guarantees of 4:1. For data durability, triple parity RAID has been implemented, and SnapLock is also present in ONTAP 9.

Another aspect of the simple theme for ONTAP 9 is an easy transition from third-party storage systems, as well as from ONTAP 8.3 and ONTAP 7 modes, using new tools and processes. These include copy-free transitions where existing storage is detached from an older generation ONTAP controller, attached to the new version, and an automatic update occurs.

Where To Learn More

ONTAP 9 Data Sheet (PDF)
NetApp FlashAdvantage 3-4-5 Makes the All-Flash Data Center a Reality
NetApp ONTAP 9 Software Simplifies Transition to Hybrid Cloud, Next-Generation Data Center

What This All Means

ONTAP 9 is a welcome set of enhancements for NetApp's flagship storage platforms that are based on ONTAP. With these enhancements, existing or new customers gain flexibility and deployment choices for how the ONTAP software gets deployed, from physical NetApp based storage systems, to white box hardware, software defined and cloud editions. In an era where there is a focus on converged, hyper-converged, object, all flash arrays and software defined virtual as well as cloud, ONTAP 9 provides options for customers who simply want or still need a traditional multi-protocol storage system that can run in all flash or hybrid (with disk) modes.

Ok, nuff said, for now…

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

Server StorageIO May 2016 Update Newsletter

Volume 16, Issue V

Hello and welcome to this May 2016 Server StorageIO update newsletter.

In This Issue

  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Events and Webinars
  • Industry Activity Trends
  • Resources and Links

    Enjoy this shortened edition of the Server StorageIO update newsletter, and watch for more tips, articles, lab report test drive reviews, blog posts, videos and podcasts as well as in the news commentary appearing soon.

    Cheers GS

    StorageIOblog Posts

    Recent and popular Server StorageIOblog posts include:

    View other recent as well as past blog posts here

     

    StorageIO Commentary in the news

    Recent Server StorageIO industry trends perspectives commentary in the news.

    Cloud and Virtual Data Storage Networking: Various comments and discussions

    StorageIOblog: Additional comments and perspectives

    SearchCloudStorage: Comments on OpenIO joins object storage cloud scrum

    SearchCloudStorage: Comments on EMC VxRack Neutrino Nodes and OpenStack

    View more Server, Storage and I/O hardware as well as software trends comments here

     

    StorageIO Tips and Articles

    Recent Server StorageIO articles appearing in different venues include:

    Via Micron Blog (Guest Post): What’s next for NVMe and your Data Center – Preparing for Tomorrow Today

    Check out these resources techniques, trends as well as tools. View more tips and articles here

    StorageIO Webinars and Industry Events

    Brouwer Storage (Nijkerk Holland) June 10-15, 2016 – Various in person seminar workshops

    June 15: Software Defined Data center with Greg Schulz and Fujitsu International

    June 14: Round table with Greg Schulz and John Williams (General manager of Reduxio) and Gert Brouwer. Discussion about new technologies with Reduxio as an example.

    June 10: Hyper-converged, converged and related subjects, presented by Greg Schulz

    Simplify and Streamline Your Virtual Infrastructure – May 17 webinar

    Is Hyper-Converged Infrastructure Right for Your Business? May 11 webinar

    EMCworld (Las Vegas) May 2-4, 2016

    Interop (Las Vegas) May 4-6 2016

    Making the Cloud Work for You: Rapid Recovery April 27, 2016 webinar

    See more webinars and other activities on the Server StorageIO Events page here.

     

    Server StorageIO Industry Resources and Links

    Check out these useful links and pages:

    storageio.com/links – Various industry links (over 1,000 with more to be added soon)
    objectstoragecenter.com – Cloud and object storage topics, tips and news items
    storageioblog.com/data-protection-diaries-main/ – Various data protection items and topics
    thenvmeplace.com – Focus on NVMe trends and technologies
    thessdplace.com – NVM and Solid State Disk topics, tips and techniques
    storageio.com/performance – Various server, storage and I/O performance and benchmarking

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    2016 Going Dutch Cloud Virtualization Server Storage I/O Seminars

    In June 2016 Brouwer Storage Consultancy is organizing their yearly spring seminar workshops in Nijkerk Holland (south of Amsterdam, near Utrecht and Amersfoort) with myself among others presenting.

    Brouwer Consultancy

    Cloud Virtualization Server Storage I/O Seminars

    For this series of seminar workshops, there are four sessions, two being presented by myself, and two others in conjunction with Reduxio as well as Fujitsu & SJ Solutions.

    Brouwer and Server StorageIO Seminar Sessions

    Agenda, How To Register and Where To Learn More

    The vendor sponsored sessions will consist of about 50% independent content presented by myself and Gert Brouwer, with the balance presented by the event sponsors as well as their partners. All presentations and associated content including handouts will be in English.

    There will be 4 seminar workshop sessions; two of those are paid sessions dedicated to Greg Schulz, and the other two are free (sponsored) sessions where 50% of the content is sponsored (Reduxio, Fujitsu & SJ Solutions) and the other 50% is independent (Greg Schulz & Gert Brouwer).

    Thursday June 9th – Server StorageIO Trends and Updates

    Server Storage I/O Fundamental Trends V2.016 and Updates. What’s New, What’s the buzz, what you need to know about. From Speeds and Feeds, Slots and Watts to Who’s doing what. Event Location: Golden Tulip Ampt van Nijkerk Hotel, Berencamperweg 4, 3861MC, Nijkerk. Learn more here (PDF abstract and topics to be covered).

    Friday June 10th – Converged Day

    Converged Day – Moving beyond Hyper-Converged Hype and Server Storage I/O Decision Making Strategies. Event Location: Golden Tulip Ampt van Nijkerk Hotel, Berencamperweg 4, 3861MC, Nijkerk. Learn more here (PDF abstract and topics to be covered).

    Brouwer and Server StorageIO Seminar Sessions De Roode Schuur

    Tuesday June 14th – Round Table Vendor Session with Reduxio

    Symposium Workshop – Round Table Vendor Session with Reduxio – Are some solutions really ‘a Paradigm shift’ or ‘new and revolutionary” as they claim to be, or is it just more of the same (e.g. evolutionary)? – Presentations and discussions led by Greg Schulz (StorageIO), Reduxio and Brouwer Storage Consultancy. (Free, sponsored Session, Access for end-users only). Event Location: Hotel & Gasterij De Roode Schuur, Oude Barneveldseweg 98, 3862PS Nijkerk. Learn more here (PDF abstract and topics to be covered).

    Wednesday June 15th – Software Defined Data Center Symposium Workshop

    Software Defined Data Center Symposium Workshop – Round Table Vendor Session with Fujitsu & SJ Solutions
    With subjects like OpenStack, Ceph, distributed object storage, big data, Hyper-Converged Infrastructure (HCI), Converged Infrastructure (CI), software defined storage (SDS) and networking (SDN and NFV), this round table format workshop seminar explores these and other related topics including what to use when, where, why and how. Presentations by Greg Schulz (StorageIO), SJ Solutions & Fujitsu and Brouwer Storage Consultancy. Event Location: Hotel & Gasterij De Roode Schuur, Oude Barneveldseweg 98, 3862PS Nijkerk. Learn more here (PDF abstract and topics to be covered).

    For more information, abstracts/agenda, registration and the specific locations for all the above events click here.

    Brouwer and Server StorageIO Sessions Ampt van Nijkerk

    What This All Means

    There are a lot of things occurring in the IT industry, from physical to software defined clouds, containers and virtualization, and nonvolatile memory (NVM) including flash SSD among others. This series of interactive educational workshop seminars converges on Nijkerk Holland, combining content and discussions from strategy, planning and decision making, to what's new (and old) that can be used in new ways, as well as trends, speeds and feeds along with practicality for your environment.

    Brouwer Consultancy

    I look forward to seeing you in Nijkerk and Europe during June 2016. In the meantime, contact Brouwer Storage Consultancy for more information on the above sessions as well as to arrange private discussions or meetings.

    Ok, nuff said, for now…

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    Participate in Top vBlog 2016 Voting Now

    It's that time of the year when Eric Siebert (@ericsiebert) hosts his annual top virtualization blog (vBlog) voting via his great vsphere-land site (check it out if you are not familiar with it). The voting is now open until May 27th, after which the results will be tabulated and announced.

    While the focus is virtualization, rest assured there are other categories including scripting, storage, independent, new, video and podcast among others. For example my blog is listed under StorageIO (Greg Schulz) and included in the storage and independent categories among some others.

    Granted it is an election year here in the US, and hopefully those participating in the top vBlog 2016 voting process are doing so based on content vs. simply popularity or what their virtual Popularity Action Committees (vPAC) tell them to do, that is, if vPACs actually exist or if they are simply vUrban Myths ;). In other words I'm not going to tell you who to vote for, or who I voted for, other than that it is based on how useful I found those sites and their content contributions.

    Who Is Eligible To Vote

    Anybody can vote, granted you can only vote once. Of course you can get your friends, family, co-workers, sales and marketing department, community or club, customers and clients to vote; basically anything with an IP address and email address, in theory including IoT and IoD, could vote. However that would be like buying twitter followers, Facebook likes, click-for-view or pay-for-view results to game the system; if that is your game, so be it.

    How Did People Get On The List (Ballot)

    Eric puts out a call (tweets, posts here, here and here) that gets amplified for people to submit new blogs to be included, as well as to self-nominate their site and the applicable categories. If people do not take the initiative to get on the list, they don't get included. If the list is important enough to be included on, then it should be important enough to know or remember to self-nominate to be included.

    I know this from experience in that a few years ago I forgot to nominate my blog in the storage and independent categories and thus was not included in the voting for those categories. However since I had previously notified Eric to include my blog, it was in the general category and thus included. Note to bloggers: if it is important for you to be included, then notify Eric that you should be added to his lists, as well as take the time to nominate yourself to be included in the future. Simply help others help you.

    What Is The Voting Criteria

    Eric, for this year's top vBlog voting, has culled the list to those who, besides self-nominating in different categories, also had at least 50 posts in the past year.

    In addition, Eric suggests focus on the content, creative and contribution (Longevity, Length, Frequency, Quality) vs. simply being a popularity contest or driven by virtual Popularity Action Committees (e.g. vPAC).

    Following is my paraphrase:

    • Longevity – How long has the blog existed and continued to be maintained vs. one started a long time ago and had not been updated in months or years.
    • Length – Are there lots of very short, basically expanded micro twitter posts, recopied press releases or curation of other news, or is there real content and analysis that requires some thought along with creativity? These could be short, long or a series of short to medium size posts.
    • Frequency – How often do posts appear: daily, weekly, monthly, yearly? There's a balance between frequency, length and content along with the time and effort to create something.
    • Quality – Some content can be rehashed with more perspectives, inputs, hints and tips along with analysis, insight or experiences of existing or new items. The key is what value is added to the topic, theme or conversation vs. simply reposting or amplifying what's already out there. In other words, is there new or unique content, perspectives, thoughtful analysis, insight and experiences, or does it simply repeat and amplify those of others?

    Call To Action, Get Out and Vote

    Simple, get out and vote and thanks in advance by using this link to Eric’s site.

    Where To Learn More

    • Voting now open for Top vBlog 2016
    • Link to actual voting page

    What This All Means

    Support and say thanks, give an "atta boy" or "atta girl" to those who take the time to create content to share with you on various virtualization related topics from servers, storage, I/O networking, scripting, tools, techniques, clouds, containers and more via blogs, podcasts and webinars. This includes both the independents like myself and others, as well as the vendors, press and media who provide the content you consume.

    So take a few moments to jump on over to Eric's site and cast your vote. If you have found my content to be useful, I humbly appreciate your vote and say thank you for your support, as well as for that of others.

    Ok, nuff said and thank you for supporting StorageIOblog.

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    Ubuntu 16.04 LTS (aka Xenial Xerus) What’s In The Bits and Bytes?

    Ubuntu 16.04 LTS (aka Xenial Xerus) was recently released (you can get the bits or software download here). Ubuntu is available in various distributions including as a server, workstation or desktop among others that can run bare metal on a physical machine (PM), virtual machine (VM) or as a cloud instance via services such as Amazon Web Services (AWS) as well as Microsoft Azure among others.

    Refresh, What is Ubuntu

    For those not familiar or who need a refresher, Ubuntu is an open source Linux distribution from the company Canonical. The Ubuntu software is a Debian based Linux distribution with the Unity user interface. Ubuntu is available across different platform architectures, from industry standard Intel and AMD x86 32 bit and 64 bit, to ARM processors and even the venerable IBM zSeries (aka zed) mainframe as part of LinuxONE.

    As a desktop, some see or use Ubuntu as an open source alternative to desktop interfaces based on those from Microsoft such as Windows or Apple.

    As a server Ubuntu can be deployed from traditional applications to cloud, converged and many others including as a docker container, Ceph or OpenStack deployment platform. Speaking of Microsoft and Windows, if you are a *nix bash type person yet need (or have) to work with Windows, bash (and more) are coming to Windows 10. Ubuntu desktop GUI or User Interface options include Unity along with tools such as Compiz and LibreOffice (an alternative to Microsoft Office).

    What’s New In the Bits and Bytes (e.g. Software)

    Ubuntu 16.04 LTS is based on the Linux 4.4 kernel and also includes Python 3, Ceph Jewel (block, file and object storage) and OpenStack Mitaka among other enhancements. These and other fixes as well as enhancements include:

    • Libvirt 1.3.1
    • Qemu 2.5
    • Open vSwitch 2.5.0
    • LXD 2.0
    • Docker 1.10
    • PHP 7.0
    • MySQL 5.7
    • Juju 2.0
    • Golang 1.6 toolchain
    • OpenSSH 7.2p2 with legacy support along with cipher improvements, including 1024 bit diffie-hellman-group1-sha1 key exchange, ssh-dss, ssh-dss-cert
    • GNU toolchain
    • Apt 1.2

    What About Ubuntu for IBM zSeries Mainframe

    Ubuntu runs on the 64 bit zSeries architecture with about 95% binary compatibility. If you look at the release notes, there are still a few things being worked out among the known issues. However (read the release notes), Ubuntu 16.04 LTS has OpenStack and Ceph, which means those capabilities could be deployed on a zSeries.

    Now some of you might think wait, how can Linux and Ceph among others work on a FICON based mainframe?

    No worries; keep in mind that FICON, the IBM zSeries server storage I/O protocol that co-exists on Fibre Channel along with SCSI_FCP (e.g. FCP), aka what most Open Systems people simply refer to as Fibre Channel (FC), works with z/OS and other operating systems. In the case of native Linux on zSeries, those systems can in fact use SCSI mode for accessing shared storage. In addition to the IBM LinuxONE site, you can learn more about Ubuntu running native on zSeries here on the Ubuntu site.

    Where To Learn More

    What This All Means

    Ubuntu as a Linux distribution continues to evolve and increase in deployment across different environments. Some still view Ubuntu as the low-end Linux for home, hobbyist or those looking for an alternative desktop to Microsoft Windows among others. However Ubuntu is also increasingly being used in roles where other Linux distributions such as Red Hat Enterprise Linux (RHEL), SUSE and CentOS among others have gained prior popularity.

    In some ways you can view RHEL as the first generation Linux distribution that gained popularity in the enterprise with early adopters, followed by a second wave or generation of those who favored CentOS among others, such as the cloud crowd. Then there is the Ubuntu wave, which is expanding in many areas along with others such as CoreOS. Granted, with some people the preference for one Linux distribution vs. another can be as polarizing as Linux vs. Windows, or Open Systems vs. mainframe vs. cloud among others.

    Having various Ubuntu distributions installed across different servers (in addition to CentOS, SUSE and others), I found the install and new capabilities of Ubuntu 16.04 LTS interesting, and I continue to explore the many new features while upgrading some of my older systems.

    Get the Ubuntu 16.04 LTS bits here to give a try or upgrade your existing systems.
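
    If you want to upgrade an existing system rather than do a clean install, a minimal sketch of the usual LTS upgrade path (assuming an Ubuntu 14.04 LTS or 15.10 system with network access, and that you have current backups) is:

    sudo apt-get update && sudo apt-get dist-upgrade
    sudo do-release-upgrade

    Note that upgrades from one LTS release to the next are typically offered once the first point release (e.g. 16.04.1) is available; the -d flag of do-release-upgrade can be used to upgrade sooner.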

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    Which Enterprise HDD to use for a Content Server Platform

    Updated 1/23/2018

    Which enterprise HDD to use with a content server platform?

    Insight for effective server storage I/O decision making
    Server StorageIO Lab Review

    Which enterprise HDD to use for content servers

    This post is the first in a multi-part series based on a white paper hands-on lab report I did compliments of Equus Computer Systems and Seagate that you can read in PDF form here. The focus is looking at the Equus Computer Systems (www.equuscs.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drives (HDDs). I was given the opportunity to do some hands-on testing, running different application workloads with a 2U content solution platform along with various Seagate Enterprise 2.5" HDDs to see how they handle different application workloads. This includes Seagate's Enterprise Performance HDDs with the enhanced caching feature.

    Issues And Challenges

    Even though Non-Volatile Memory (NVM) including NAND flash solid state devices (SSDs) has become popular storage for use internal as well as external to servers, there remains a need for HDDs. Like many of you who need to make informed server, storage, I/O hardware, software and configuration selection decisions, time is often in short supply.

    A common industry trend is to use SSD and HDD based storage mediums together in hybrid configurations. Another industry trend is that HDDs continue to be enhanced with larger space capacity in the same or smaller footprint, as well as with performance improvements. Thus, a common challenge is what type of HDD to use for various content and application workloads, balancing performance, availability, capacity and economics.

    Content Applications and Servers

    Fast Content Needs Fast Solutions

    An industry and customer trend is that information and data are getting larger, living longer, and there is more of it. This ties to the fundamental theme that applications and their underlying hardware platforms exist to process, move, protect, preserve and serve information.

    Content solutions span from video (4K, HD, SD and legacy streaming video, pre-/post-production, and editing), audio, imaging (photo, seismic, energy, healthcare, etc.) to security surveillance (including Intelligent Video Surveillance [ISV] as well as Intelligence Surveillance and Reconnaissance [ISR]). In addition to big fast data, other content solution applications include content distribution network (CDN) and caching, network function virtualization (NFV) and software-defined network (SDN), to cloud and other rich unstructured big fast media data, analytics along with little data (e.g. SQL and NoSQL database, key-value stores, repositories and meta-data) among others.

    Content Solutions And HDD Opportunities

    A common theme with content solutions is that they get defined with some amount of hardware (compute, memory and storage, I/O networking connectivity) as well as some type of content software. Fast content applications need fast software, multi-core processors (compute), large memory (DRAM, NAND flash, SSD and HDD’s) along with fast server storage I/O network connectivity. Content-based applications benefit from having frequently accessed data as close as possible to the application (e.g. locality of reference).

    Content solution and application servers need flexibility regarding compute options (number of sockets, cores, threads), main memory (DRAM DIMMs), PCIe expansion slots, storage slots and other connectivity. An industry trend is leveraging platforms with multi-socket processors, dozens of cores and threads (e.g. logical processors) to support parallel or high-concurrent content applications. These servers have large amounts of local storage space capacity (NAND flash SSD and HDD) and associated I/O performance (PCIe, NVMe, 40 GbE, 10 GbE, 12 Gbps SAS etc.) in addition to using external shared storage (local and cloud).

    Where To Learn More

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    Fast content applications need fast content and flexible content solution platforms such as those from Equus Computer Systems with HDDs from Seagate. Key to a successful content application deployment is having the flexibility to hardware define and software define the platform to meet your needs. Just as there are many different types of content applications along with diverse environments, content solution platforms need to be flexible, scalable and robust, not to mention cost effective.

    Continue reading part two of this multi-part series here where we look at how and what to test as well as project planning.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Cloud Constellation SpaceBelt – Out Of This World Cloud Data Centers?

    A new startup called Cloud Constellation (aka SpaceBelt) has announced and proposes to converge space-based satellite technology with IT information and cloud related data infrastructure technologies including NVM (e.g. SSD) and storage class memory (SCM). While announcing their Series A funding and proposed value proposition (below), Cloud Constellation did not say how much funding, who the investors are, or who is on the management team, leading to some, well, rather cloudy information.

    Cloud Constellation’s SpaceBelt transforms cybersecurity for enterprise and government operations moving high-value data around the world by:

    • insulating it completely from the Internet and terrestrial leased lines
    • liberating it from cyberattacks and surreptitious activities
    • protecting it from natural disasters and force majeure events
    • addressing all jurisdictional complexities and constraints
    • avoiding risks of violating privacy regulations

    Truly secure data transfer: Enterprises and governments will finally be enabled to bypass use of leaky networks and compromised servers interconnecting their sites around the world.

    New option for cloud service providers: The service will be a key market differentiator for cloud service providers to offer a transformative, ultra-high degree of network security to clients reliant on moving sensitive, mission-critical data around the world each day.

    What is SpaceBelt Cloud Constellation?

    From their website www.cloudconstellation.com you will see following.

    Cloud Constellation Space Belt
    www.cloudconstellation.com

    Keeping in mind that today is April 1st which means April Fools day 2016, my motto for the day is trust yet verify. So just for fun, check out this new company that I had a briefing with earlier this week that also announced their Series A funding earlier in March 2016.

    The question you have to ask yourself today is whether this is an out of this world April Fools prank, or an out of this world idea that will eclipse current cloud services such as Amazon Web Services (AWS), Google, IBM Softlayer, Microsoft Azure and Rackspace among others.

    Or, will SpaceBelt go the way of earlier cloud high flyers HP Cloud, Nirvanix among others.

    Btw, keep in mind that only you can prevent cloud data loss, however cloud and virtual data availability is also a shared responsibility.

    Some Questions and Things To Ponder

    • Is this an April Fools Joke?
    • How much Non Volatile Memory (NVM) such as NAND, 3D Nand, 3D XPoint or other Storage Class Memory (SCM) can be physically placed on each bird (e.g. Satellite)
    • What will the solar panels look like to power the birds, plus the batteries for heating and cooling the NVM (contrary to popular myth, NVMs do get warm if not hot)?
    • What is the availability, accessibility and durability model, how will data be replicated, mirrored or an out of this world LRC/Erasure Code Advanced Parity model be used?
    • How will the storage be accessed, what will the end-points look like, iSCSI, NDB, FUSE, NFS, CIFS, HDFS, Torrent, JSON, ODBC, REST/HTTP, FTP or something else?
    • Security will be a concern as well as geo placement, after all, its one thing to move data across some borders, how about if the data is hundreds of miles above those borders?
    • Cost will be an interesting model to follow, as well as whether competitors from SpaceX, Amazon, Boeing, GE, NSA, Google, Facebook or others emerge.
    • What will the uplink and download speeds be, not to mention the latency of moving and accessing data from the satellites? For those who have DirecTV or other satellite service, you know the pros and cons associated with that. Speaking of which, perhaps you have experienced a thunderstorm with DirecTV or Dish, or perhaps a cloud storm due to a cloud provider service or site failure; think about what happens to your cloud data if the satellite dish is disrupted during an upload or download.
    • I also wonder how the various industry trade groups will wrap their head around this one, what kind of new standards, initiatives and out of this world marketing promotions will we see or hear about? You know that some creative marketer will declare surface clouds as dead, just saying.

    Where To Learn More

    What This All Means

    The folks over at Cloud Constellation say their space belt, made up of a constellation (e.g. an in-orbit cluster) of satellites, will be circling the globe around 2019. I wonder if they will be ready to do a proof of concept (POC) technology demonstrator of their IP using TCP based networking and server storage I/O protocols leveraging a hot air balloon or weather balloon near term; if not, it would be a great marketing ploy.

    If nothing else, putting their data infrastructure technology on a hot air balloon could be a fun marketing ploy to say their cloud rises above the hot air of other cloud marketing. Or if they do a POC using a weather balloon, they could show and say their cloud rises above traditional cloud storms, oh the fun…

    Check out Cloud Constellation and their Spacebelt, see for yourself and then you decide what is going on!

    Remember, its April Fools day today, trust, yet verify.

    What say you, is this an April Fools Joke or the next big thing?

    Ok, nuff said (for now), time to listen to Pink Floyd Dark Side of the Moon ;)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    Server StorageIO March 2016 Update Newsletter

    Volume 16, Issue III

    Hello and welcome to the March 2016 Server StorageIO update newsletter.

    Here in the northern hemisphere spring has officially arrived as of the March 20th equinox, along with warmer weather, more hours and minutes of daylight, and plenty of things to do. In addition to the official arrival of spring here (fall in the southern hemisphere), it also means in the U.S. that March Madness and college basketball tournament playoff brackets and office (betting) pools are in full swing.

    In This Issue

  • Feature Topic and Themes
  • Industry Trends News
  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Videos and Podcast’s
  • Events and Webinars
  • Recommended Reading List
  • Industry Activity Trends
  • Server StorageIO Lab reports
  • New and Old Vendor Update
  • Resources and Links

    A couple of other things associated with spring are moving clocks forward, which occurred recently here in the U.S. Spring is also a good time to check your smoke and dangerous gas detectors or other alarms. This means replacing batteries and cleaning the detectors.

    Besides smoke and gas detectors, spring is also a good time do preventive maintenance on your battery backup uninterruptible power supplies (UPS), as well as generators and other standby power devices. For my part, I had a service tech out to do a tune up on my Kohler generator, as well as replaced some batteries in APC UPS devices.

    Besides smoke and CO2 detectors, generators and UPS standby power systems, and March Madness basketball and other sports tournaments, something else occurs on March 31st (besides being the day before April 1st and April Fools Day). March 31st is World Backup (and Restore) Day, meaning awareness around making sure your data, applications, settings, configurations, keys, software and systems are backed up and can be recovered.

    Hopefully none of you are in the situation where data, applications, systems, computers, laptops, tablets, smart phones or other devices only get backed up or protected once a year, however maybe you know somebody who does.

    March also marks the 10th anniversary of Amazon Web Services (AWS) cloud services (more here), happy birthday AWS.

    March wraps up on the 31st with World Backup Day, which is intended to draw attention to the importance of data protection and your ability to recover applications and data. While backups are important, so too is testing to make sure you can actually use and recover from what was protected. Keep in mind that while some claim backup is dead, data protection is alive, and as long as vendors and others keep referring to data protection as backup, backup will stay alive.

    Join me and folks from HP Enterprise (HPE) on March 31st at 1PM ET for a free webinar compliments of HPE with a theme of Backup with Brains, emphasis on awareness and analytics to enable smart data protection. Click here to learn more and register.

    Enjoy this edition of the Server StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcast’s along with in the news commentary appearing soon.

    Cheers GS

    Feature Topic and Theme

    This month's feature theme and topics include backup (and restore) as part of data protection, plus more on clouds (public, private and hybrid), including how some providers such as DropBox are moving out of public cloud providers such as AWS and building their own data centers.

    Building off of the February newsletter, there is more on Google, including their use of Non-Volatile Memory (NVM) aka NAND flash Solid State Devices (SSDs) and some of their research. In addition to Google's use of SSDs, check out the posts and industry activity on NVMe as well as other news and updates, including new converged platforms from Cisco and HPE among others.

    StorageIOblog Posts

    Recent and popular Server StorageIOblog posts include:

    View other recent as well as past blog posts here

    Server Storage I/O Industry Activity Trends (Cloud, Virtual, Physical)

    StorageIO news (image licensed for use from Shutterstock by StorageIO)

    Some new Products Technology Services Announcements (PTSA) include:

  • Via Redmondmag: AWS Cloud Storage Service Turns 10 years old in March, happy birthday AWS (read more here at the AWS site).
  • Cisco announced new flexible HyperFlex converged compute server platforms for hybrid cloud and other deployments. Also announced were NetApp All Flash Array (AFA) FlexPod converged solutions powered by Cisco UCS servers and networking technology. In other activity, Cisco unveiled a Digital Network Architecture to enable customer digital data transformation. Cisco also announced its intent to acquire CliQr for management of hybrid clouds.

  • Data Direct Networks (DDN) expands NAS offerings with new GS14K platform via PRnewswire.

  • Via Computerworld: DropBox quits Amazon cloud, takes back 500 PB of data. DropBox has created their own cloud to host videos, images, files, folders, objects, blobs and other storage items that used to be stored within AWS S3. In this DropBox post, you can read about the why they decided to create their own cloud, as well as how they used a hybrid approach with metadata kept local, actual data stored in AWS S3. Now the data and the metadata are in DropBox data centers. However, DropBox is still keeping some data in AWS particular in different geographies.

  • Web site hosting company GoDaddy has extended their capabilities similar to other service providers by adding an OpenStack powered cloud service. This is a trend that others such as Bluehost (where my sites are located on a DPS) have evolved from simple shared hosting, to dedicated private servers (DPS), virtual private servers (VPS) along with other cloud related services. Think of a VPS as a virtual machine or cloud instance. Likewise some of the cloud service providers such as AWS are moving into dedicated private servers.

  • Following up from the February 2016 Server StorageIO Update Newsletter, which included Google's message to disk vendors (make hard drives like this, even if they lose more data) and the Google Disk for Data Centers white paper (PDF here), read about Google's experiences with SSDs.

    This PDF white paper, presented at the recent Usenix 2016 conference, outlines Google's experiences with different types (SLC, MLC, eMLC) and generations of NAND flash SSD media across various vendors. Some of the takeaways include that context matters when looking at SSD metrics on endurance, durability and errors. While some in the industry focus on Unrecoverable Bit Error Rates (UBER), there needs to be awareness around Raw Bit Error Rate (RBER) among other metrics and usage. Read more about Google's experiences here.


  • Hewlett Packard Enterprise (HPE) announced Hyper-Converged systems Via Marketwired including HC 380 based on ProLiant DL380 technology providing all in one (AiO) converged compute, storage and virtualization software with simplified management. The HC 380 is targeted for mid-market aka small medium business (SMB), remote office branch office (ROBO) and workgroups. HPE also announced all flash array (AFA) enhancements for 3PAR storage (Via Businesswire).

  • Microsoft has announced that it will be releasing a version of its SQL Server database on Linux. What this means is that as well as being able to use SQL Server and associated tools on Windows and Azure platforms, you will also in the not so distant future be able to deploy on Linux. By making SQL Server available on Linux opens up some interesting scenarios and solution alternatives vs. Oracle along with MySQL and associated MySQL derivatives, as well as NoSQL offerings (Read more about NoSQL Databases here). Read more about Microsoft’s SQL Server for Linux here.

    In addition to SQL Server for Linux, Microsoft has also announced enhancements for easing docker container migrations to clouds. In other Microsoft activity, they announced enhancements to Storsimple and Azure. Keep an eye out for Windows Server 2016 Tech Preview 5 (e.g. TP5) which will be the next release of the upcoming new version of the popular operating systems.


  • MSDI, Rockland IT Solutions and Source Support Services Merge to Form Congruity with CEO Todd Gresham, along with Mike Stolz and Mark Shirman (formerly of Glasshouse) among others you may know.

  • Via Businesswire: PrimaryIO announces server-based flash acceleration for VMware systems, while Riverbed extends Remote Office Branch Office (ROBO) cloud connectivity Via Businesswire.

  • Via Computerworld: Samsung ships 12Gbs SAS 15TB 2.5" 3D NAND Flash SSD (Hey Samsung, send me a device or two and will give them a test drive in the Server StorageIO lab ;). Not to be out done, Via Forbes: Seagate announces fast SSD card, as well as for the High Performance Compute (HPC) and Super Compute (SC) markets, Via HPCwire: Seagate Sets Sights on Broader HPC Market with their scale-out clustered Lustre based systems.

  • Servers Direct is now offering the HGST 4U x 60 drive enclosures while Via PRnewswire: SMIC announces RRAM partnership.

  • ATTO Technology has enhanced support for RAID arrays behind its FibreBridge 7500, while Oracle announced mainframe virtual tape library (VTL) cloud support Via Searchdatabackup. In other updates for this month, VMware has released and made generally available (GA) VSAN 6.2 and Via Businesswire: Wave and Centeris Launch Transpacific Broadband Data and Fiber Hub.
  • The above is a sampling of some of the various industry news, announcements and updates for this March. Watch for more news and updates in April coming out of NAB and OpenStack Summit among other events.

    View other recent news and industry trends here.

    StorageIO Commentary in the news

    View more Server, Storage and I/O hardware as well as software trends comments here

    Vendors you may not have heard of

    Various vendors (and service providers) you may not know about or have heard of recently.

    • Continuum – R1Soft Server Backup Manager
    • HyperIO – HiMon and HyperIO server storage I/O monitoring software tools
    • Runecast – VMware automation and management software tools
    • Opvizor – VMware health management software tools
    • Asigra – Cloud, Managed Service and distributed backup/data protection tools
    • Datera – Software defined storage management startup
    • E8 Storage – Software Defined Stealth Storage Startup
    • Venyu – Cloud and data center data protection tools
    • StorPool – Distributed software defined storage management tools
    • ExaBlox – Scale out storage solutions

    Check out more vendors you may know, have heard of, or that are perhaps new on the Server StorageIO Industry Links page here (over 1,000 entries and growing).

    StorageIO Tips and Articles

    Recent Server StorageIO articles appearing in different venues include:

    • InfoStor:  Data Protection Gaps, Some Good, Some Not So Good
    • Virtual Blocks (VMware Blogs):  Part III EVO:RAIL – When And Where To Use It?
    • InfoStor:  Object Storage Is In Your Future

    Check out these resources and links on technology, techniques, trends as well as tools. View more tips and articles here

    StorageIO Videos and Podcasts

    Check out this video (Via YouTube) of a Google Data Center tour.

    In the IoT and IoD era of little and big data, how about this video I did with my DJI Phantom drone and an HD GoPro (e.g. 1K vs. 2.7K or 4K in newer cameras)? This generates about a GByte of raw data per 10 minutes of flight, which then means another GB copied to a staging area, then to protected copies, then production versions and so forth. Thus a 2 minute clip in 1080p resulted in plenty of storage including produced and uploaded versions along with backup copies in archives spread across YouTube, Dropbox and elsewhere.

    StorageIO podcasts are also available at StorageIO.tv

    StorageIO Webinars and Industry Events

    EMCworld (Las Vegas) May 2-4, 2016

    Interop (Las Vegas) May 4-6 2016

    TBA – April 27, 2016 webinar

    NAB (Las Vegas) April 19-20, 2016

    Backup with Brains – March 31, 2016 free webinar (1PM ET)

    See more webinars and other activities on the Server StorageIO Events page here.

    From StorageIO Labs

    Research, Reviews and Reports

    NVMe is in your future, resources to start preparing today for tomorrow

    NVM and NVMe corner (Via and Compliments of Micron.com)

    View more NVMe related items at microsite thenvmeplace.com.

    Read more in this Server StorageIO industry Trends Perspective white paper and lab review.

    Server StorageIO Recommended Reading List

    The following are various recommended reading including books, blogs and videos. If you have not done so recently, also check out the Intel Recommended Reading List (here) where you will also find a couple of mine as well as books from others.

    For this month's recommended reading, it’s a blog site. If you have not visited Eric Siebert’s (@ericsiebert) site vSphere-land and its companion resources pages including top blogs, do so now.

    Granted there is a heavy VMware server virtualization focus, however there is a good balance of other data infrastructure topics spanning servers, storage, I/O networking, data protection and more.

    Server StorageIO Industry Resources and Links

    Check out these useful links and pages:

    storageio.com/links – Various industry links (over 1,000 with more to be added soon)
    objectstoragecenter.com – Cloud and object storage topics, tips and news items
    storageioblog.com/data-protection-diaries-main/ – Various data protection items and topics
    thenvmeplace.com – Focus on NVMe trends and technologies
    thessdplace.com – NVM and Solid State Disk topics, tips and techniques
    storageio.com/performance – Various server, storage and I/O performance and benchmarking

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    The Future of Ethernet – 2016 Roadmap released by Ethernet Alliance

    The Future of Ethernet – 2016 Roadmap released by Ethernet Alliance

    server storage I/O trends

    The Future of Ethernet – 2016 Roadmap released by Ethernet Alliance

    Ethernet Alliance Roadmap

    The Ethernet Alliance has announced their 2016 roadmap of enhancements for Ethernet.

    Ethernet enhancements include new speeds and connectivity interfaces that span needs from consumer and enterprise to cloud and managed service providers.

    Highlights of Ethernet Roadmap

    • FlexEthernet (FlexE)
    • QSFP-DD, microQSFP and OBO interfaces
    • Speeds from 10Mbps to 400GbE.
    • 4 Pair Power over Ethernet (PoE)
    • Power over Data Line (PoDL)

    Ethernet Alliance 2016 Roadmap Image
    Images via EthernetAlliance.org

    Who is the Ethernet Alliance

    The Ethernet Alliance (@ethernetallianc) is an industry trade and marketing consortium focused on the advancement and success of Ethernet related technologies.

    Where to learn more

    The Ethernet Alliance has also made available via their web site two presentations part one here and part two here (or click on the following images).

    Ethernet Alliance 2016 roadmap presentation #1 Ethernet Alliance 2016 roadmap presentation #2

    Also visit www.ethernetalliance.org/roadmap

    What this all means

    Ethernet technologies continue to be enhanced, from consumer focused, Internet of Things (IoT) and Internet of Devices (IoD) to enterprise, data centers, IT and non-IT usage, as well as cloud and managed service providers. At the lower end where there is broad adoption, the continued evolution of easier to use, lower cost, interoperable technologies and interfaces expands Ethernet's adoption footprint, while at the higher end all of those IoT, IoD, consumer and other devices aggregate (consolidate) into cloud and other services that need speeds of 10GbE, 40GbE, 100GbE and 400GbE.

    With the 2016 Roadmap the Ethernet Alliance has provided good direction as to where Ethernet fits today and tomorrow.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    Part V – NVMe overview primer (Where to learn more, what this all means)

    This is the fifth in a five-part mini-series providing a NVMe primer overview.

    View Part I, Part II, Part III, Part IV, Part V as well as companion posts and more NVMe primer material at www.thenvmeplace.com.

    There are many different facets of NVMe, including the protocol itself, which can be deployed on PCIe (AiC, U.2/8639 drives, M.2) for local direct attached, dedicated or shared use on the front-end or back-end of storage systems. NVMe direct attach is also found in servers and laptops using M.2 NGFF mini cards (e.g. “gum sticks”). In addition to direct attached, dedicated and shared, NVMe is also deployed on fabrics, including over Fibre Channel (FC-NVMe) as well as NVMe over Fabrics (NVMeoF) leveraging RDMA based networks (e.g. iWARP, RoCE among others).
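
    For the fabric-attached case, the following is a minimal sketch of discovering and connecting to an NVMe over Fabrics target from a Linux host using the nvme-cli tool. The transport, IP address, port and NQN shown are placeholder assumptions for illustration, not a specific product configuration.

        # Discover NVMe-oF subsystems exposed by a target at 192.168.1.10 over an RDMA transport (e.g. RoCE or iWARP)
        nvme discover -t rdma -a 192.168.1.10 -s 4420

        # Connect to a discovered subsystem by its NQN; it then appears as a local /dev/nvmeXnY namespace
        nvme connect -t rdma -n nqn.2016-06.io.example:subsystem1 -a 192.168.1.10 -s 4420

        # Verify the new namespace is visible
        nvme list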

    The storage I/O capabilities of flash can now be fed across PCIe faster to enable modern multi-core processors to complete more useful work in less time, resulting in greater application productivity. NVMe has been designed from the ground up with more and deeper queues, supporting a larger number of commands in those queues. This in turn enables the SSD to better optimize command execution for much higher concurrent IOPS. NVMe will coexist along with SAS, SATA and other server storage I/O technologies for some time to come. But NVMe will be at the top-tier of storage as it takes full advantage of the inherent speed and low latency of flash while complementing the potential of multi-core processors that can support the latest applications.

    With NVMe, the capabilities of the underlying NVM and storage memories are further realized. Devices used include a PCIe x4 NVMe AiC SSD, a 12 Gbps SAS SSD and a 6 Gbps SATA SSD. These and other improvements with NVMe enable concurrency while reducing latency to remove server storage I/O traffic congestion. The result is that applications demanding more concurrent I/O activity along with lower latency will gravitate toward NVMe for accessing fast storage.

    Like the robust PCIe physical server storage I/O interface it leverages, NVMe provides both flexibility and compatibility. It removes complexity, overhead and latency while allowing far more concurrent I/O work to be accomplished. Those on the cutting edge will embrace NVMe rapidly. Others may prefer a phased approach.

    Some environments will initially focus on NVMe for local server storage I/O performance and capacity available today. Other environments will phase in emerging external NVMe flash-based shared storage systems over time.

    Planning is an essential ingredient for any enterprise. Because NVMe spans servers, storage, I/O hardware and software, those intending to adopt NVMe need to take into account all ramifications. Decisions made today will have a big impact on future data and information infrastructures.

    Key questions should be, how much speed do your applications need now, and how do growth plans affect those requirements? How and where can you maximize your financial return on investment (ROI) when deploying NVMe and how will that success be measured?

    Several vendors are working on, or have already introduced NVMe related technologies or initiatives. Keep an eye on among others including AWS, Broadcom (Avago, Brocade), Cisco (Servers), Dell EMC, Excelero, HPE, Intel (Servers, Drives and Cards), Lenovo, Micron, Microsoft (Azure, Drivers, Operating Systems, Storage Spaces), Mellanox, NetApp, OCZ, Oracle, PMC, Samsung, Seagate, Supermicro, VMware, Western Digital (acquisition of SANdisk and HGST) among others.

    Where To Learn More

    View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What this all means

    NVMe is in your future if not already there, so if NVMe is the answer, what are the questions?

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Where, How to use NVMe overview primer

    server storage I/O trends
    Updated 1/12/2018

    This is the fourth in a five-part miniseries providing a primer and overview of NVMe. View companion posts and more material at www.thenvmeplace.com.

    Where and how to use NVMe

    As mentioned and shown in the second post of this series, initially NVMe is being deployed inside servers as “back-end,” fast, low latency storage using PCIe Add-In-Cards (AIC) and flash drives. Similar to SAS NVM SSDs and HDDs that support dual-paths, NVMe has a primary path and an alternate path. If one path fails, traffic keeps flowing without causing slowdowns. This feature is an advantage to those already familiar with the dual-path capabilities of SAS, enabling them to design and configure resilient solutions.

    NVMe devices, including NVM flash AiC, will also find their way into storage systems and appliances as back-end storage, co-existing with SAS or SATA devices. Another emerging deployment configuration scenario is shared NVMe direct attached storage (DAS) with multiple server access via PCIe external storage with dual paths for resiliency.

    Even though NVMe is a new protocol, it leverages existing skill sets. Anyone familiar with SAS/SCSI and AHCI/SATA storage devices will need little or no training to set up and manage NVMe. Since NVMe-enabled storage appears to a host server or storage appliance as a LUN or volume, existing Windows, Linux and other OS or hypervisor tools can be used. On Windows, for example, other than going to the device manager to see what the device is and which controller it is attached to, it is no different from installing and using any other storage device. The experience on Linux is similar, particularly when using in-the-box drivers that ship with the OS. One minor Linux difference of note is that instead of seeing a /dev/sda device as an example, you might see a device name like /dev/nvme0n1 or /dev/nvme0n1p1 (with a partition).
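
    As a quick sanity check on Linux, the following minimal sketch lists NVMe devices and controllers; it assumes the nvme-cli package is installed, and the device names are examples only.

        # List block devices; NVMe namespaces appear as nvme0n1, nvme0n1p1 and so on
        lsblk

        # List NVMe controllers and namespaces with model, capacity and firmware (requires nvme-cli)
        nvme list

        # Show controller identify data (model, serial, firmware, capabilities) for the first NVMe controller
        nvme id-ctrl /dev/nvme0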

    Keep in mind that NVMe, like SAS, can be used for “back-end” access from servers (or storage systems) to a storage device or system, for example JBOD SSD drives (e.g. 8639), PCIe AiC or M.2 devices. NVMe, like SAS, can also be used as a “front-end” on storage systems or appliances in place of, or in addition to, other access such as GbE based iSCSI, Fibre Channel, FCoE, InfiniBand, NAS or Object.

    What this means is that NVMe can be implemented in a storage system or appliance on both the “front-end” (e.g. server or host side) as well as on the “back-end” (e.g. device or drive side), just like SAS. Another similarity to SAS is that NVMe supports dual-pathing of devices, permitting system architects to design resiliency into their solutions. When the primary path fails, access to the storage device can be maintained with failover so that fast I/O operations can continue, whether using SAS or NVMe.

    NVM connectivity options including NVMe
    Various NVM NAND flash SSD devices and their connectivity including NVMe, M.2, SATA and 12 Gbps SAS are shown in figure 6.

    Various NVM SSD interfaces including NVMe and M2
    Figure 6 Various NVM flash SSDs (Via StorageIO Labs)

    On the left in figure 6 is a NAND flash NVMe PCIe AiC, top center is a USB thumb drive that has been opened up showing a NAND die (chip), middle center is an mSATA card, bottom center is an M.2 card, next on the right is a 2.5” 6 Gbps SATA device, and far right is a 12 Gbps SAS device. Note that an M.2 card can be either a SATA or NVMe device depending on its internal controller, which determines which host or server protocol device driver to use.

    The role of PCIe has evolved over the years, as have its performance and packaging form factors. In addition to add-in card (AiC) slots, PCIe form factors also include the M.2 small form factor (aka Next Generation Form Factor or NGFF), which replaces legacy mini-PCIe cards and, like other devices, can be either an NVMe or SATA device.

    Like NGFF, the 8639 (or possibly 8637) connector (figure 7) can be used to support SATA as well as NVMe depending on the device installed and host server driver support. There are various M.2 NGFF form factors including 2230, 2242, 2260 and 2280. There are also M.2 to regular physical SATA converter or adapter cards available, enabling M.2 devices to attach to legacy SAS/SATA RAID adapters or HBAs.

    NVMe 8637 and 8639 interface backplane slots; NVMe 8637 and 8639 interface
    Figure 7 PCIe NVMe 8639 Drive (Via StorageIO Labs)

    On the left of figure 7 is a view toward the backplane of a storage enclosure in a server that supports SAS, SATA, and NVMe (e.g. 8639). On the right of figure 7 is the connector end of an 8639 NVM SSD showing the additional pin connectors compared to a SAS or SATA device. Those extra pins provide PCIe x4 connectivity to the NVMe devices. The 8639 drive connectors enable a device such as an NVM, or NAND flash, SSD to share a common physical storage enclosure with SAS and SATA devices, including optional dual-pathing.

    Where To Learn More

    View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    Be careful judging a device or component by its physical packaging or interface connection as to what it is or is not. In figure 7 the device has SAS/SATA along with PCIe physical connections, yet it’s what’s inside (e.g. its controller) that determines if it is a SAS, SATA or NVMe enabled device. This also applies to HDDs and PCIe AiC devices, as well as I/O networking cards and adapters that may use common physical connectors, yet implement different protocols. For example, the SFF-8643 HD-Mini SAS internal connector is used for 12 Gbps SAS attachment as well as PCIe to devices such as 8639 drives.

    Depending on the type of device inserted, access can be via NVMe over PCIe x4, SAS (12 Gbps or 6Gb) or SATA. 8639 connector based enclosures have a physical connection with their backplanes to the individual drive connectors, as well as to PCIe, SAS, and SATA cards or connectors on the server motherboard or via PCIe riser slots.

    While PCIe devices including AiC slot based, M.2 or 8639 can have common physical interfaces and lower level signaling, it’s the protocols, controllers, and drivers that determine how they get software defined and used. Keep in mind that it’s not just the physical connector or interface that determines what a device is or how it is used; it’s also the protocol, command set, controller and device drivers.

    Continue reading about NVMe with Part V (Where to learn more, what this all means) in this five-part series, or jump to Part I, Part II or Part III.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    NVMe Need for Performance Speed Performance

    server storage I/O trends
    Updated 1/12/2018

    This is the third in a five-part mini-series providing a primer and overview of NVMe. View companion posts and more material at www.thenvmeplace.com.

    How fast is NVMe?

    It depends! Generally speaking NVMe is fast!

    However fast interfaces and protocols also need fast storage devices, adapters, drivers, servers, operating systems and hypervisors as well as applications that drive or benefit from the increased speed.

    A server storage I/O example is in figure 5 where a 6 Gbps SATA NVM flash SSD (left) is shown with an NVMe 8639 (x4) drive, both directly attached to a server. The workload is 8 Kbyte sized random writes with 128 threads (workers), showing results for IOPs (solid bar) along with response time (dotted line). Not surprisingly the NVMe device has a lower response time and a higher number of IOPs. However, also note how the amount of CPU time used per IOP is lower on the right with the NVMe drive.

    NVMe storage I/O performance
    Figure 5 6 Gbps SATA NVM flash SSD vs. NVMe flash SSD
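
    As a rough illustration, a workload similar to the one shown in figure 5 could be approximated with fio (one of the free tools covered in part I). The device name, Linux libaio engine and run time below are assumptions for illustration rather than the exact configuration used to generate these results. Warning: writing to a raw device is destructive, so double check the target before running.

        # Hypothetical fio job: 8KB random writes, 128 workers, direct I/O against a raw NVMe device
        fio --name=8k-randwrite --filename=/dev/nvme0n1 \
            --rw=randwrite --bs=8k --numjobs=128 --iodepth=1 \
            --ioengine=libaio --direct=1 --runtime=300 --time_based \
            --group_reporting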

    While many people are aware of or learning about the IOP and bandwidth improvements as well as the decrease in latency with NVMe, something that gets overlooked is how much less CPU is used. If a server is spending time in wait modes, that can result in lost productivity; by finding and removing those barriers, more work can be done on a given server, perhaps even delaying a server upgrade.

    In figure 5 notice the lower amount of CPU used per work activity being done (e.g. I/O or IOP) which translates to more effective resource use of your server. What that means is either doing more work with what you have, or potentially delaying a CPU server upgrade, or, using those extra CPU cycles to power software defined storage management stacks including erasure coding or advanced parity RAID, replication and other functions.

    Table 1 shows the relative server I/O performance of some NVM flash SSD devices across various workloads. As with any performance comparison, take these and the following results with a grain of salt, as your speed will vary.

    NAND flash SSD            --------------- 8KB I/O Size ---------------     --------------- 1MB I/O Size ---------------
    (100% workloads)          Seq. Read   Seq. Write   Ran. Read   Ran. Write  Seq. Read   Seq. Write   Ran. Read   Ran. Write

    NVMe PCIe AiC
      IOPs                    41829.19    33349.36     112353.6    28520.82    1437.26     889.36       1336.94     496.74
      Bandwidth (MBps)        326.79      260.54       877.76      222.82      1437.26     889.36       1336.94     496.74
      Resp. (ms)              3.23        3.90         1.30        4.56        178.11      287.83       191.27      515.17
      CPU / IOP               0.001571    0.002003     0.000689    0.002342    0.007793    0.011244     0.009798    0.015098

    12Gb SAS
      IOPs                    34792.91    34863.42     29373.5     27069.56    427.19      439.42       416.68      385.9
      Bandwidth (MBps)        271.82      272.37       229.48      211.48      427.19      429.42       416.68      385.9
      Resp. (ms)              3.76        3.77         4.56        5.71        599.26      582.66       614.22      663.21
      CPU / IOP               0.001857    0.00189      0.002267    0.00229     0.011236    0.011834     0.01416     0.015548

    6Gb SATA
      IOPs                    33861.29    9228.49      28677.12    6974.32     363.25      65.58        356.06      55.86
      Bandwidth (MBps)        264.54      72.1         224.04      54.49       363.25      65.58        356.06      55.86
      Resp. (ms)              4.05        26.34        4.67        35.65       704.70      3838.59      718.81      4535.63
      CPU / IOP               0.001899    0.002546     0.002298    0.003269    0.012113    0.032022     0.015166    0.046545

    Table 1 Relative performance of various protocols and interfaces

    The workload results in table 1 were generated using a vdbench script running on a Windows 2012 R2 based server and are intended to be a relative indicator of different protocols and interfaces; your performance mileage will vary. The results compare the number of IOPs (activity rate) for reads, writes, random and sequential across small 8KB and large 1MB sized I/Os.

    Also shown in table 1 are bandwidth or throughput (e.g. amount of data moved), response time and the amount of CPU used per IOP. Note in table 1 how NVMe can do more IOPs with a lower CPU cost per IOP, or, using a similar amount of CPU, do more work at a lower latency. SSDs have been used for decades to help reduce CPU bottlenecks or defer server upgrades by removing I/O wait times and reducing CPU consumption (e.g. wait or lost time).
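
    For reference, the following is a minimal sketch of a vdbench parameter file in the spirit of the 8KB 100% random write test above. The device path, thread count and run time are assumptions for illustration and are not the exact script used to produce the table 1 results. As with the other workloads, this does writes and is destructive to the device under test.

        * nvme_8k_randwrite.txt - hypothetical vdbench parameter file (illustrative only)
        * Raw device under test; on Windows this might be \\.\PhysicalDrive1, on Linux something like /dev/nvme0n1
        sd=sd1,lun=\\.\PhysicalDrive1,threads=32
        * 8KB transfers, 0% reads (100% writes), 100% random seeks
        wd=wd1,sd=sd1,xfersize=8k,rdpct=0,seekpct=100
        * Run at maximum I/O rate for 5 minutes, reporting every 30 seconds
        rd=rd1,wd=wd1,iorate=max,elapsed=300,interval=30

    Run it with something like vdbench.bat -f nvme_8k_randwrite.txt -o 8k_randwrite_results on Windows (or ./vdbench on Linux), then review the generated summary and flat files for IOPs, bandwidth, response time and CPU metrics.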

    Can NVMe solutions run faster than those shown above? Absolutely!

    Where To Learn More

    View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    Continue reading about NVMe with Part IV (Where and How to use NVMe) in this five-part series, or jump to Part I, Part II or Part V.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.