Server storage I/O performance benchmark workload scripts Part I

Update 1/28/2018

This is part one of a two-part series of posts about server storage I/O performance benchmark workload tools and scripts. View part II here, which includes the workload scripts and where to view sample results.

There are various tools and workloads for server I/O benchmark testing, validation and exercising different storage devices (or systems and appliances) such as Non-Volatile Memory (NVM) flash Solid State Devices (SSDs) or Hard Disk Drives (HDD) among others.

Various NVM flash SSD including NVMe devices

For example, let's say you have an SSD such as an Intel 750 (here, here, and here) or some other vendor's NVMe PCIe Add in Card (AiC) installed in a Microsoft Windows server and would like to see how it compares with expected results. The following scripts allow you to compare your system with those of others running the same workload, granted of course your mileage (performance) may vary.

server storage I/O SCM NVM SSD performance

Why Your Performance May Vary

Reasons your performance may vary include, among others:

  • Speed (GHz) of your server processors, number of sockets and cores
  • Amount of main DRAM memory
  • Number, type and speed of PCIe slots
  • Speed of storage device and any adapters
  • Device drivers and firmware of storage devices and adapters
  • Server power mode setting (e.g. low or balanced power vs. high-performance)
  • Other workload running on system and device under test
  • Solar flares (kp-index) among other urban (or real) myths and issues
  • Typos or misconfiguration of workload test scripts
  • Test server, storage, I/O device, software and workload configuration
  • Versions of test software tools among others

Windows Power (and performance) Settings

Some things are assumed or taken for granted that everybody knows and does, however sometimes the obvious needs to be stated or re-stated. An example is remembering to check your server power management settings to see if they are in an energy efficient power savings mode or in high-performance mode. If your focus is on getting the best possible performance for effective productivity, then you want to be in high-performance mode. On the other hand, if performance is not your main concern and the focus is instead on energy avoidance, then use low power mode, or perhaps balanced.

For Microsoft Windows servers, desktop workstations, laptops and tablets you can adjust power settings via the Control Panel GUI as well as from the command line or PowerShell. From a command line (privileged or administrator), the following set balanced or high-performance power settings.

Balanced

powercfg.exe /setactive 381b4222-f694-41f0-9685-ff5bb260df2e

High Performance

powercfg.exe /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c

From PowerShell, the following set balanced or high-performance.

Balanced
PowerCfg -SetActive "381b4222-f694-41f0-9685-ff5bb260df2e"

High Performance
PowerCfg -SetActive "8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c"

Note that you can list Windows power management settings using powercfg -LIST and powercfg -QUERY.
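
For example, to list the available plans and then confirm which one is active after making a change (the GUID displayed should match the one you just set):

powercfg /LIST
powercfg /GETACTIVESCHEME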

server storage I/O power management

Btw, if you have not already done so, enable the Windows disk (HDD and SSD) performance counters so that they appear via Task Manager by entering the following from a command prompt:

diskperf -y
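
Should you later want to turn those counters back off, the companion command is:

diskperf -n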

Workload (Benchmark) Simulation Test Tools Used

There are many tools (see storageio.com/performance) that can be used for creating and running workloads, just as there are various application server I/O characteristics. Different server I/O and application performance attributes include read vs. write, random vs. sequential, large vs. small, long vs. short stride, burst vs. sustained, cache and non-cache friendly, and activity vs. data movement vs. latency vs. CPU usage, among others. Likewise the number of workers, jobs, threads, and outstanding or overlapped I/Os, among other configuration settings, can have an impact on the workload and results.

The four free tools that I’m using with this set of scripts are:

  • Microsoft Diskspd (free), get the tool and bits here or here (open source), learn more about Diskspd here.
  • FIO.exe (free), get the tool and bits here or here among other venues.
  • Vdbench (free with registration), get the tool and bits here or here among other venues.
  • Iometer (free), get the tool and bits here among other venues.

Notice: While best effort has been made to verify the above links, they may change over time and you are responsible for verifying the safety of links and your downloads.

Where To Learn More

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Remember, everything is not the same in the data center or with data infrastructures that support different applications.

While some tools are more robust or better than others for different things, ultimately it's usually not the tool that results in a bad benchmark or comparison; it's the configuration, or the workload settings used (or omitted), that make results not relevant or applicable. The best benchmark, workload or simulation is your own application. Second best is one that closely resembles your application workload characteristics. A bad benchmark is one that has no relevance to your environment or application usage scenario. Take and treat all benchmark or workload simulation results with a grain of salt, as something to compare, contrast or reference in the proper context. Read part two of this post series to view the test tool workload scripts along with sample results.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Part II – Some server storage I/O workload scripts and results

server storage I/O trends

Updated 1/28/2018

This is the second in a two-part series of posts pertaining to using some common server storage I/O workload benchmark tools and scripts. View part I here, which includes the overview, background and information about the tools used and related topics.

Various NVM flash SSD including NVMe devices

Following are some server I/O benchmark workload scripts to exercise various storage devices such as Non-Volatile Memory (NVM) flash Solid State Devices (SSDs) or Hard Disk Drives (HDD) among others.

The Workloads

Besides changing the I/O size, read/write mix and random vs. sequential mix, the number of threads, workers and jobs can also impact workload performance results. Note that in the workload steps, the larger 1MB and sequential scenarios have fewer threads and workers vs. the smaller IOP or activity focused workloads. Too many threads or workers can cause overhead, and at some point you will reach diminishing returns. Likewise with too few you will not drive the system under test (SUT) or device under test (DUT) to its full potential. If you are not sure how many threads or workers to use, run some short calibration tests and review the results before doing a larger, longer test, as shown in the sketch below.
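
For example, here is a minimal calibration sketch as a Windows batch (.cmd) file, assuming diskspd.exe is in the PATH, N: is the device under test, and the thread counts and short 60 second runs shown are just starting points to adjust for your environment:

@echo off
rem Calibration sketch: sweep thread counts with short runs to see where
rem results stop improving. Output file names here are hypothetical.
rem -r requests random I/O; -w30 is a 70% read / 30% write mix.
for %%T in (16 32 64 128 160) do (
    diskspd.exe -c300G -o32 -t%%T -b4K -w30 -W10 -d60 -h -r N:\iobw.tst -L > Calibrate_4KRan70Read_%%Tthreads.txt
)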

Keep in mind that the best benchmark or workload is your own application running with a load similar to what you would see in the real world, along with applicable features, configuration and functionality enabled. The second best would be those that closely resemble your workload characteristics and that are relevant.

The following workloads involved a system test initiator (STI) server driving workload using the different tools and scripts shown. The STI sends the workload to a SUT or DUT that can be a single drive, card or multiple devices, storage system or appliance. Warning: The following workload tests do both reads and writes, which can be destructive to your device under test. Exercise caution with the device and file name specified to avoid causing a problem that might result in you testing your backup / recovery process. Likewise no warranty is given, implied or made for these scripts or their use or results; they are simply supplied as is for your reference.
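
As a simple precaution, a guard like the following sketch (the drive letter and file name are the examples used below; adjust for your environment) can go at the top of a batch script so a typo does not send writes to the wrong target:

@echo off
rem Guard sketch: abort unless the expected scratch file already exists on the DUT.
if not exist N:\iobw.tst (
    echo Test file N:\iobw.tst not found - aborting to avoid writing to the wrong target.
    exit /b 1
)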

The four free tools that I’m using with this set of scripts are:

  • Microsoft Diskspd (free), get the tool and bits here or here (open source), learn more about Diskspd here.
  • FIO.exe (free), get the tool and bits here or here among other venues.
  • Vdbench (free with registration), get the tool and bits here or here among other venues.
  • Iometer (free), get the tool and bits here among other venues.

Notice: While best effort has been made to verify the above links, they may change over time and you are responsible for verifying the safety of links and your downloads

Microsoft Diskspd workloads

Note that a 300GB file named iobw.tst on device N: is used for performing read and write I/Os. There are 160 threads, with I/O sizes of 4KB and 8KB varying from 100% Read (0% write) and 70% Read (30% write) to 0% Read (100% write), with random (seek) access and no hardware or software cache. Also specified are collecting latency statistics, a 30 second warm up (ramp up) time, and a quick 5 minute duration (test time). Five minutes is fine as a quick test to calibrate and verify your environment, however it is relatively short for a real test, which should run for hours or more depending on your needs.

Note that the output results are put into a file with a name describing the test tool, workload and other useful information such as date and time. You may also want to specify a different directory where output files are placed.
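
If you prefer the date and time stamp be generated rather than typed by hand, a minimal batch sketch (assumes PowerShell is available for locale independent date formatting; the C:\results directory is a hypothetical example):

@echo off
rem Sketch: build a stamp like 072416_8AM and use it in the output file name.
for /f %%i in ('powershell -NoProfile -Command "Get-Date -Format MMddyy_htt"') do set STAMP=%%i
set OUTDIR=C:\results
if not exist %OUTDIR% mkdir %OUTDIR%
diskspd.exe -c300G -o160 -t160 -b4K -w0 -W30 -d300 -h -fr N:iobw.tst -L > %OUTDIR%\DiskSPD_300G_4KRan100Read_160x160_%STAMP%.txt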

diskspd.exe -c300G -o160 -t160 -b4K -w0 -W30 -d300 -h -fr  N:iobw.tst -L  > DiskSPD_300G_4KRan100Read_160x160_072416_8AM.txt
diskspd.exe -c300G -o160 -t160 -b4K -w30 -W30 -d300 -h -fr  N:iobw.tst -L  > DiskSPD_300G_4KRan70Read_160x160_072416_8AM.txt
diskspd.exe -c300G -o160 -t160 -b4K -w100 -W30 -d300 -h -fr  N:iobw.tst -L  > DiskSPD_300G_4KRan0Read_160x160_072416_8AM.txt
diskspd.exe -c300G -o160 -t160 -b8K -w0 -W30 -d300 -h -fr  N:iobw.tst -L  > DiskSPD_300G_8KRan100Read_160x160_072416_8AM.txt
diskspd.exe -c300G -o160 -t160 -b8K -w30 -W30 -d300 -h -fr  N:iobw.tst -L  > DiskSPD_300G_8KRan70Read_160x160_072416_8AM.txt
diskspd.exe -c300G -o160 -t160 -b8K -w100 -W30 -d300 -h -fr  N:iobw.tst -L  > DiskSPD_300G_8KRan0Read_160x160_072416_8AM.txt

The following Diskspd tests use settings similar to those above, however sequential is specified instead of random, threads and outstanding I/Os are reduced, and the I/O size is set to 1MB, then 8KB, with 100% read and 100% write scenarios. The -t parameter specifies the number of threads and -o the number of outstanding I/Os per thread.

diskspd.exe -c300G -o32 -t32 -b1M -w0 -W30 -d300 -h -si  N:iobw.tst -L  > DiskSPD_300G_1MSeq100Read_32x32_072416_8AM.txt
diskspd.exe -c300G -o32 -t32 -b1M -w100 -W30 -d300 -h -si  N:iobw.tst -L  > DiskSPD_300G_1MSeq0Read_32x32_072416_8AM.txt
diskspd.exe -c300G -o32 -t32 -b8K -w0 -W30 -d300 -h -si  N:iobw.tst -L  > DiskSPD_300G_8KSeq100Read_32x32_072416_8AM.txt
diskspd.exe -c300G -o32 -t32 -b8K -w100 -W30 -d300 -h -si  N:iobw.tst -L  > DiskSPD_300G_8KSeq0Read_32x32_072416_8AM.txt

Fio.exe workloads

Next are the fio workloads, similar to those run using Diskspd, except that the sequential scenarios are skipped.

fio --filename=N\:\iobw.tst --filesize=300000M --direct=1  --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=windowsaio  --ba=4k --bs=4k --rwmixread=100 --iodepth=32 --numjobs=5 --exitall --time_based  --ramp_time=30 --runtime=300 --group_reporting --name=xxx  --output=FIO_300000M_4KRan100Read_5x32_072416_8AM.txt
fio --filename=N\:\iobw.tst --filesize=300000M --direct=1  --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=windowsaio  --ba=4k --bs=4k --rwmixread=70 --iodepth=32 --numjobs=5 --exitall --time_based  --ramp_time=30 --runtime=300 --group_reporting --name=xxx  --output=FIO_300000M_4KRan70Read_5x32_072416_8AM.txt
fio --filename=N\:\iobw.tst --filesize=300000M --direct=1  --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=windowsaio  --ba=4k --bs=4k --rwmixread=0 --iodepth=32 --numjobs=5 --exitall --time_based  --ramp_time=30 --runtime=300 --group_reporting --name=xxx  --output=FIO_300000M_4KRan0Read_5x32_072416_8AM.txt
fio --filename=N\:\iobw.tst --filesize=300000M --direct=1  --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=windowsaio  --ba=8k --bs=8k --rwmixread=100 --iodepth=32 --numjobs=5 --exitall --time_based  --ramp_time=30 --runtime=300 --group_reporting --name=xxx  --output=FIO_300000M_8KRan100Read_5x32_072416_8AM.txt
fio --filename=N\:\iobw.tst --filesize=300000M --direct=1  --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=windowsaio  --ba=8k --bs=8k --rwmixread=70 --iodepth=32 --numjobs=5 --exitall --time_based  --ramp_time=30 --runtime=300 --group_reporting --name=xxx  --output=FIO_300000M_8KRan70Read_5x32_072416_8AM.txt
fio --filename=N\:\iobw.tst --filesize=300000M --direct=1  --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=windowsaio  --ba=8k --bs=8k --rwmixread=0 --iodepth=32 --numjobs=5 --exitall --time_based  --ramp_time=30 --runtime=300 --group_reporting --name=xxx  --output=FIO_300000M_8KRan0Read_5x32_072416_8AM.txt
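
Since the six fio command lines above differ only in block size and read percentage, they can also be driven from a single loop; a batch sketch using the same options (the --name value and output file names are arbitrary labels):

@echo off
rem Sketch: loop block sizes and read percentages instead of six separate lines.
for %%B in (4k 8k) do for %%R in (100 70 0) do (
    fio --filename=N\:\iobw.tst --filesize=300000M --direct=1 --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=windowsaio --ba=%%B --bs=%%B --rwmixread=%%R --iodepth=32 --numjobs=5 --exitall --time_based --ramp_time=30 --runtime=300 --group_reporting --name=xxx --output=FIO_300000M_%%BRan%%RRead_5x32.txt
)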

Vdbench workloads

Next are the Vdbench workloads similar to those used with the Microsoft Diskspd scenarios. In addition to making sure Vdbench is installed and working, you will need to create a text file called seqrxx.txt containing the following:

hd=localhost,jvms=!jvmn
sd=sd1,lun=!drivename,openflags=directio,size=!dsize
wd=mix,sd=sd1
rd=!jobname,wd=mix,elapsed=!etime,interval=!itime,iorate=max,forthreads=(!tthreads),forxfersize=(!worktbd),forseekpct=(!workseek),forrdpct=(!workread),openflags=directio
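
For reference, the names prefixed with ! are Vdbench substitution variables that get replaced by the name=value pairs supplied on the command line. As an illustration, with the values from the first 4K scenario below (tthreads=160, worktbd=4k, workseek=100, workread=100, etime=300, itime=30, jobname=NVME), the rd= line effectively expands to:

rd=NVME,wd=mix,elapsed=300,interval=30,iorate=max,forthreads=(160),forxfersize=(4k),forseekpct=(100),forrdpct=(100),openflags=directio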

The following are the commands that call the Vdbench script file. Note that Vdbench puts its output files (yes, plural, there are many results) in an output folder.

vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=4k workseek=100 workread=100 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_4K100Ran100Read_072416_8AM
vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=4k workseek=100 workread=70 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_4K100Ran70Read_072416_8AM
vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=4k workseek=100 workread=0 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_4K100Ran0Read_072416_8AM
vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=8k workseek=100 workread=100 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_8K100Ran100Read_072416_8AM
vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=8k workseek=100 workread=70 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_8K100Ran70Read_072416_8AM
vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=8k workseek=100 workread=0 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_8K100Ran0Read_072416_8AM
vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=8k workseek=0 workread=100 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_8K100Seq100Read_072416_8AM
vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=8k workseek=0 workread=70 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_8K100Seq70Read_072416_8AM
vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=8k workseek=0 workread=0 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_8K100Seq0Read_072416_8AM
vdbench -f seqrxx.txt dsize=300G  tthreads=32 jvmn=64 worktbd=1M workseek=0 workread=100 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_32TH_1M100Seq100Read_072416_8AM
vdbench -f seqrxx.txt dsize=300G  tthreads=32 jvmn=64 worktbd=1M workseek=0 workread=0 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_32TH_1M100Seq0Read_072416_8AM

Iometer workloads

Last however not least, let's do an Iometer run. The following command calls an Iometer input file (icf) that you can find here. In that file you will need to make a few changes, including the name of the server where Iometer is running, the description, and the device under test address. For example, in the icf file change SIOSERVER to the name of the server where you will be running Iometer from. Also change the address for the DUT, for example N:, to whatever address, drive or mount point you are using. Also update the description accordingly (e.g. "NVME" to "Your test example").
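
Because the icf is a plain text file, those edits can also be scripted; for example, a one-line PowerShell sketch (file name from the example above) that swaps SIOSERVER for the local computer name:

powershell -NoProfile -Command "(Get-Content iometer_5work32q_intel_Profile.icf) -replace 'SIOSERVER', $env:COMPUTERNAME | Set-Content iometer_5work32q_intel_Profile.icf"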

Here is the command line to run Iometer specifying an icf and where to put the results in a CSV file that can be imported into Excel or other tools.

iometer /c  iometer_5work32q_intel_Profile.icf /r iometer_nvmetest_5work32q_072416_8AM.csv
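
To run the entire series unattended, the per tool command sets above can be chained from a simple wrapper; a sketch where the run_*.cmd names are hypothetical placeholders for wherever you saved each tool's commands:

@echo off
rem Hypothetical wrapper: run each tool's scenarios back to back, Iometer last.
call run_diskspd.cmd
call run_fio.cmd
call run_vdbench.cmd
iometer /c iometer_5work32q_intel_Profile.icf /r iometer_nvmetest_5work32q.csv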

server storage I/O SCM NVM SSD performance

What About The Results?

For context, the following results were run on a Lenovo TS140 (32GB RAM), single socket quad core (3.2GHz) Intel E3-1225 v3 with an Intel NVMe 750 PCIe AiC (Intel SSDPEDMW40). Out of the box Microsoft Windows NVMe drive and controller drivers were used (e.g. 6.3.9600.18203 and 6.3.9600.16421). The operating system is Windows 2012 R2 (bare metal), with the NVMe PCIe card formatted with the ReFS file system. Workload generator and benchmark driver tools included Microsoft Diskspd version 2.0.12, Fio.exe version 2.2.3, Vdbench 50403 and Iometer 1.1.0. Note that there are newer versions of the various workload generation tools.

Example results are located here.

Where To Learn More

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Remember, everything is not the same in the data center or with data infrastructures that support different applications.

While some tools are more robust or better than others for different things, ultimately it's usually not the tool that results in a bad benchmark or comparison; it's the configuration, or the workload settings used (or omitted), that make results not relevant or applicable. The best benchmark, workload or simulation is your own application. Second best is one that closely resembles your application workload characteristics. A bad benchmark is one that has no relevance to your environment or application usage scenario. Take and treat all benchmark or workload simulation results with a grain of salt, as something to compare, contrast or reference in the proper context.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

12Gb SAS SSD Enabling Server Storage I/O Performance and Effectiveness Webinar

server storage I/O trends

Non-Volatile Memory (NVM) Solid State Devices (SSDs) including NAND flash, DRAM as well as emerging PCM and 3D XPoint as part of Storage Class Memories (SCMs) are in your future. The questions are where, when, for what and how much, as well as which form factor packaging along with which server storage I/O interface are applicable for your different applications and data infrastructures.

server storage I/O SCM NVM SSD performance

Server storage I/O physical interfaces for accessing NVM SSDs include PCIe Add in Cards (AiC), M.2 as well as the emerging SFF 8639 (e.g. NVMe U.2 drive form factor) along with mSATA (e.g. mini PCIe card), in addition to SAS, SATA and USB among others. Protocols include NVM Express (NVMe), SAS and SATA, as well as general server storage I/O access of shared storage systems that leverage NVM SSD and SCM technologies.

To help address the question of which server storage I/O interface is applicable for different environments, I invite you to a webinar on June 22, 2016 at 1PM ET, hosted by and compliments of Micron.

During the webinar, Rob Peglar (@peglarr) of Micron and I will discuss and answer questions about how 12Gb SAS remains a viable option for attaching NVM SSD storage to servers, as well as via storage systems, today and into the future. Today's 12Gb SAS SSDs enable you to leverage your existing knowledge, skill sets, as well as technology to maximize your data infrastructure investments. For servers or storage systems that are PCIe slot constrained, 12Gb SAS enables more SSDs, including 2.5" form factor multi-TByte capacity devices, to be used to boost performance and capacity in a cost as well as energy effective way.

server storage I/O nvm ssd options

In addition to Rob Peglar, we will also be joined by Doug Rollins of Micron (@GreyHairStorage), who will share some technical speeds, feeds, slots and watts information about Micron 12Gb SAS SSDs that can scale into the TBs in capacity per device.

Here’s the synopsis from the Micron information page for this webinar.

Don’t let old, slow SAS HDDs drag down your data center

Modernize it by upgrading your storage from SAS HDDs to SAS SSDs. It’s an easy upgrade that provides a significant boost in performance, longer lasting endurance and nearly 4X the capacity. Flash storage changes how you do business and keeps you competitive.

We invite you to join Rob Peglar, Greg Schulz, along with Doug Rollins, from Micron’s technical marketing team to learn:

  • Simple solutions to solving the challenges with today’s ever-growing data demands
  • Why SAS—how it continues to fuel the data center
  • HDDs versus SSDs—before and after stories from your peers, including upfront cost savings

We will also have a live Q&A session so you can talk with the experts. Please register today! If you’re unable to attend the live webinar, we encourage you to register anyway to receive a link to the recorded session, as well as a copy of the presentation.

Where To Learn More

What This All Means

Remember, everything is not the same in the data center or with data infrastructures that support different applications, just as there are various NVM SSD options as well as interfaces.

Join us for this webinar; you can view more information as well as register for the event here.

Ok, nuff said, for now…

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

NetApp Announces ONTAP 9 software defined storage management

server storage I/O trends

NetApp has announced ONTAP 9, the latest version or generation of the storage software that defines and powers their ONTAP storage systems and appliances (e.g. those known by some as FAS among others).

The major theme of ONTAP 9 is simple anywhere, alluding to the software running on storage system appliances (e.g. “tin wrapped” or hardware platform based), virtual storage (e.g. what has been known as “edge” in the past), as well as cloud versions (cDOT). The other part of simple, beyond where the software gets deployed and how the resources along with functionality are consumed, ties to management.

This means simple from standalone systems to clusters: ONTAP 9 is focused on consolidation and management across different storage media (HDD and SSD), platforms (engineered, e.g. FAS, to white box), protocols (block, file, objects) as well as consumption (deployed on hardware or as software, including cloud).

As part of the announcement NetApp will continue with its engineered hardware platform solutions (e.g. appliances or storage systems) as well as ONTAP Select (third-party storage) and Flex using white box server platforms (e.g. a software defined storage option). This capability provides customers with flexibility on where and how to buy as well as deployment options.

Another dimension to the ONTAP 9 simple theme is support for known workloads such as Oracle RAC, Microsoft SQL Server and VMware among others. ONTAP 9 provides tools for rapid provisioning of storage resources to support those and other application workloads.

Data services feature enhancements include support for new high-capacity read optimized SSDs, along with inline data compaction on 4K boundaries (data chunks) including data reduction guarantees of 4:1. For data durability, triple parity RAID has been implemented, and SnapLock is also present in ONTAP 9.

Another aspect of the simple theme for ONTAP 9 is an easy transition from third-party storage systems, as well as from ONTAP 8.3 and 7-Mode, with new tools and processes. These include copy free transitions where existing storage is detached from an older generation ONTAP controller, attached to a newer version, and an auto update occurs.

Where To Learn More

ONTAP 9 Data Sheet (PDF)
NetApp FlashAdvantage 3-4-5 Makes the All-Flash Data Center a Reality
NetApp ONTAP 9 Software Simplifies Transition to Hybrid Cloud, Next-Generation Data Center

What This All Means

ONTAP 9 is a welcome set of enhancements for NetApp's flagship storage platforms that are based on ONTAP. With these enhancements, existing or new customers gain flexibility and deployment choices for how the ONTAP software gets deployed, from physical NetApp based storage systems to white box hardware, software defined and cloud editions. In an era where there is a focus on converged, hyper-converged, object, all flash arrays and software defined virtual as well as cloud, ONTAP 9 provides options for customers who simply want or still need a traditional multi-protocol storage system that can run in all flash or hybrid (with disk) modes.

Ok, nuff said, for now…

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

Server StorageIO May 2016 Update Newsletter

Volume 16, Issue V

Hello and welcome to this May 2016 Server StorageIO update newsletter.

In This Issue

  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Events and Webinars
  • Industry Activity Trends
  • Resources and Links

Enjoy this shortened edition of the Server StorageIO update newsletter; watch for more tips, articles, lab report test drive reviews, blog posts, videos and podcasts, plus in the news commentary appearing soon.

    Cheers GS

    StorageIOblog Posts

    Recent and popular Server StorageIOblog posts include:

    View other recent as well as past blog posts here

     

    StorageIO Commentary in the news

    Recent Server StorageIO industry trends perspectives commentary in the news.

    Cloud and Virtual Data Storage Networking: Various comments and discussions

    StorageIOblog: Additional comments and perspectives

    SearchCloudStorage: Comments on OpenIO joins object storage cloud scrum

    SearchCloudStorage: Comments on EMC VxRack Neutrino Nodes and OpenStack

    View more Server, Storage and I/O hardware as well as software trends comments here

     

    StorageIO Tips and Articles

    Recent Server StorageIO articles appearing in different venues include:

    Via Micron Blog (Guest Post): What’s next for NVMe and your Data Center – Preparing for Tomorrow Today

    Check out these resources techniques, trends as well as tools. View more tips and articles here

    StorageIO Webinars and Industry Events

    Brouwer Storage (Nijkerk Holland) June 10-15, 2016 – Various in person seminar workshops

    June 15: Software Defined Data center with Greg Schulz and Fujitsu International

    June 14: Round table with Greg Schulz and John Williams (General manager of Reduxio) and Gert Brouwer. Discussion about new technologies with Reduxio as an example.

    June 10: Hyper-converged, converged and related subjects presented by Greg Schulz

    Simplify and Streamline Your Virtual Infrastructure – May 17 webinar

    Is Hyper-Converged Infrastructure Right for Your Business? May 11 webinar

    EMCworld (Las Vegas) May 2-4, 2016

    Interop (Las Vegas) May 4-6 2016

    Making the Cloud Work for You: Rapid Recovery April 27, 2016 webinar

    See more webinars and other activities on the Server StorageIO Events page here.

     

    Server StorageIO Industry Resources and Links

    Check out these useful links and pages:

    storageio.com/links – Various industry links (over 1,000 with more to be added soon)
    objectstoragecenter.com – Cloud and object storage topics, tips and news items
    storageioblog.com/data-protection-diaries-main/ – Various data protection items and topics
    thenvmeplace.com – Focus on NVMe trends and technologies
    thessdplace.com – NVM and Solid State Disk topics, tips and techniques
    storageio.com/performance – Various server, storage and I/O performance and benchmarking

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Dell Updates Storage Center Operating System 7 (SCOS 7)

    server storage I/O trends

    In case you missed it, Dell recently announced Storage Center Operating System 7 (SCOS 7) with several enhancements for their SC series storage systems (e.g. Compellent). For those who are under maintenance agreements, the new features are no charge upgrades. Most of the SCOS 7 features should be generally available now (or soon) for the SC9000 series with other platform support phased in over time.

    Summary of Dell SCOS 7 enhancements and features

    • Block level dedupe in addition to previous file dedupe
    • Enhanced compression as a companion to dedupe of HDD and SSD data
    • Single volumes or LUN can span across HDD and SSD tiers
    • Live migration and volume management with load balancing
    • Ability to move volumes between arrays
    • Quality of Service (QoS) for performance across volumes and volume groups
    • QoS set by IOPs, bandwidth and latency of standard volumes and VMware VVOL
    • New Storage Manager replaces SC Enterprise Manager for unified management
    • Delivers on promise of a unified SC and FS NAS management
    • Ability to replicate data between PS (EqualLogic) and SC (Compellent)
    • HTML 5 interface and non-disruptive implementation

    Where To Learn More

    Learn more about Dell SC Series enhancements here and here.

    What This All Means

    Dell is following through on its previous commitments to both PS (e.g. EqualLogic) and SC (e.g. Compellent) customers with enhancements that increase functionality and simplify management. These features will become more important for adding continued value to the SC and PS platforms independently of the impending Dell acquisition of EMC (e.g. Dell EMC). The elephant in the room discussion is, with the impending Dell acquisition of EMC and the new Dell EMC division (e.g. essentially the existing EMC plus the Dell server group), what happens with the midrange storage products from both parties.

    Dell has the SC and PS as well as the lower end direct attached storage (DAS) based PowerVault series, along with the Exanet based Fluid File System among other technologies, including Ocarina based data footprint reduction (DFR). If you recall, the Ocarina technology acquired by Dell enables not only dedupe, but also compression and other DFR (here and here) capabilities. Meanwhile EMC has the VNX and Unity (announced in May 2016) among other offerings.

    Both Dell and EMC will need to continue to articulate the value of their midrange solutions prior to the acquisition closing. Likewise once the deal closes, the joint entities need to be crystal clear on where the different technologies fit for various markets or customer segments, as well as their future.

    Overall a good set of enhancements for the Dell SC (and PS) series.

    Ok, nuff said, for now…

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    2016 Going Dutch Cloud Virtualization Server Storage I/O Seminars

    server storage I/O trends

    In June 2016 Brouwer Storage Consultancy is organizing their yearly spring seminar workshops in Nijkerk Holland (south of Amsterdam, near Utrecht and Amersfoort) with myself among others presenting.

    Brouwer Consultancy

    Cloud Virtualization Server Storage I/O Seminars

    For this series of seminar workshops, there are four sessions, two being presented by myself, and two others in conjunction with Reduxio as well as Fujitsu & SJ Solutions.

    Brouwer and Server StorageIO Seminar Sessions

    Agenda, How To Register and Where To Learn More

    The vendor sponsored sessions will consist of about 50% independent content presented by myself and Gert Brouwer, with the balance from the event sponsors as well as their partners. All presentations and associated content including handouts will be in English.

    There will be four seminar workshop sessions: two are paid sessions dedicated to Greg Schulz, and the other two are free (sponsored) sessions where 50% of the content is sponsored (Reduxio, Fujitsu & SJ Solutions) and the other 50% is independent (Greg Schulz & Gert Brouwer).

    Thursday June 9th – Server StorageIO Trends and Updates

    Server Storage I/O Fundamental Trends V2.016 and Updates. What’s New, What’s the buzz, what you need to know about. From Speeds and Feeds, Slots and Watts to Who’s doing what. Event Location: Golden Tulip Ampt van Nijkerk Hotel, Berencamperweg 4, 3861MC, Nijkerk. Learn more here (PDF abstract and topics to be covered).

    Thursday June 10th – Converged Day

    Converged Day – Moving beyond Hyper-Converged Hype and Server Storage I/O Decision Making Strategies. Event Location: Golden Tulip Ampt van Nijkerk Hotel, Berencamperweg 4, 3861MC, Nijkerk. Learn more here (PDF abstract and topics to be covered).

    Brouwer and Server StorageIO Seminar Sessions De Roode Schuur

    Tuesday June 14th – Round Table Vendor Session with Reduxio

    Symposium Workshop – Round Table Vendor Session with Reduxio – Are some solutions really ‘a Paradigm shift’ or ‘new and revolutionary” as they claim to be, or is it just more of the same (e.g. evolutionary)? – Presentations and discussions led by Greg Schulz (StorageIO), Reduxio and Brouwer Storage Consultancy. (Free, sponsored Session, Access for end-users only). Event Location: Hotel & Gasterij De Roode Schuur, Oude Barneveldseweg 98, 3862PS Nijkerk. Learn more here (PDF abstract and topics to be covered).

    Wednesday June 15th – Software Defined Data Center Symposium Workshop

    Software Defined Data Center Symposium Workshop – Round Table Vendor Session with Fujitsu & SJ Solutions
    With subjects like Openstack, Ceph, distributed object storage, Bigdata, Hyper-Converged Infrastructure (HCI), Converged Infrastructure (CI), Software defined storage (SDS) and Network (SDN and NFV), this round table format workshop seminars explores these and other related topics including what to use when, where, why and how. Presentations by Greg Schulz (StorageIO), SJ Solutions & Fujitsu and Brouwer Storage Consultancy. Event Location: Hotel & Gasterij De Roode Schuur, Oude Barneveldseweg 98, 3862PS Nijkerk. Learn more here (PDF abstract and topics to be covered).

    For more information, abstracts/agenda, registration and the specific locations for all the above events click here.

    Brouwer and Server StorageIO Sessions Ampt van Nijkerk

    What This All Means

    There are a lot of things occurring in the IT industry, from physical to software defined clouds, containers and virtualization, and nonvolatile memory (NVM) including flash SSD among others. This series of interactive educational workshop seminars converges on Nijkerk Holland, combining content discussions from strategy, planning and decision making to what's new (and old) that can be used in new ways, as well as some trends, speeds and feeds along with practicality for your environment.

    Brouwer Consultancy

    I look forward to seeing you in Nijkerk and Europe during June 2016. In the meantime, contact Brouwer Storage Consultancy for more information on the above sessions as well as to arrange private discussions or meetings.

    Ok, nuff said, for now…

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    Participate in Top vBlog 2016 Voting Now

    server storage I/O trends

    It's that time of the year when Eric Siebert (@ericsiebert) hosts his annual top virtualization blog (vBlog) voting via his great vSphere-land site (check it out if you are not familiar). The voting is now open until May 27th, after which the results will be tabulated and announced.

    While the focus is virtualization, rest assured there are other categories including scripting, storage, independent, new, video and podcast among others. For example my blog is listed under StorageIO (Greg Schulz) and included in storage and independent among some other categories.

    Granted it is an election year here in the US, and hopefully those participating in the top vBlog 2016 voting process are doing so based on content vs. simply popularity or what their virtual Popularity Action Committees (vPAC) tell them to do, that is if vPACs actually exist or if they are simply vUrban Myths ;). In other words I'm not going to tell you who to vote for, or who I voted for, other than that it was based on how useful I found those sites and their content contributions.

    Who Is Eligible To Vote

    Anybody can vote, granted you can only vote once. Of course you can get your friends, family, co-workers, sales and marketing department, community or club, customers and clients to vote; in theory basically anything with an IP address and an email address, including IoT and IoD, could vote. However that would be like buying twitter followers, Facebook likes, click for view or pay for view results to game the system, which if that is your game, so be it.

    How Did People Get On The List (Ballot)

    Eric puts out a call (tweets, posts here, here and here) that gets amplified for people to submit new blogs to be included, as well as to self-nominate their site and the categories for it. If people do not take the initiative to get on the list, they don't get included. If the list is important enough to be included on, then it should be important enough to know or remember to self-nominate to be included.

    I know this from experience in that a few years ago I forgot to nominate my blog in the categories of storage, independent thus was not included in the voting for those categories. However since I had previously notified Eric to include my blog, it was in the general category and thus included. Note to bloggers, if it is important for you to be included, then notify Eric that you should be added to his lists, as well as take the time to nominate yourself to be included in the future. Simply help others help you.

    What Is The Voting Criteria

    Eric, for this year's top vBlog voting, has culled the list to those who, besides self-nominating in different categories, also had at least 50 posts in the past year.

    In addition, Eric suggests focus on the content, creative and contribution (Longevity, Length, Frequency, Quality) vs. simply being a popularity contest or driven by virtual Popularity Action Committees (e.g. vPAC).

    Following is my paraphrase:

    • Longevity – How long has the blog existed and continued to be maintained vs. one started a long time ago that has not been updated in months or years.
    • Length – Are there lots of very short posts (basically expanded micro twitter posts), recopied press releases or curation of other news, or is there real content and analysis that requires some thought along with creativity? Posts could be short, long or a series of short to medium size posts.
    • Frequency – How often do posts appear: daily, weekly, monthly, yearly? There is a balance between frequency, length and content, along with the time and effort to create something.
    • Quality – Some topics can be rehashed with more perspectives, inputs, hints and tips along with analysis, insight or experiences of existing or new items. The key is what value is added to the topic, theme or conversation vs. simply reposting or amplifying what's already out there. In other words, is there new or unique content, perspective, thoughtful analysis, insight or experience, or simply a repeat and amplification of others?

    Call To Action, Get Out and Vote

    Simple, get out and vote and thanks in advance by using this link to Eric’s site.

    Where To Learn More

    • Voting now open for Top vBlog 2016
    • Link to actual voting page

    What This All Means

    Support and say thanks, give an "atta boy" or "atta girl" to those who take time to create content to share with you on various virtualization related topics from servers, storage, I/O networking, scripting, tools, techniques, clouds, containers and more via blogs, podcasts and webinars. This includes both the independents like myself and others, as well as the vendors, press and media who provide the content you consume.

    So take a few moments to jump on over to Eric’s site and cast your vote and if you have found my content to be useful, I humbly appreciate your vote and say thank you for your support, as well as that for others.

    Ok, nuff said and thank you for supporting StorageIOblog.

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    EMCworld 2016 EMC Hybrid and Converged Clouds Your Way

    server storage I/O trends

    This is a quick post looking at a high-level view of today’s EMCworld 2016 announcements.

    Following up from yesterday's post covering the first set of announcements, today's theme is around hybrid, converged and clouds your way. In addition to the morning announcements, EMC yesterday afternoon also announced InfoArchive 4.0 and EMC LEAP cloud native content applications for Enterprise Content Management (ECM). However let's focus on today's announcements around modernizing, transforming and automating your data center.

    Today’s announcements include:

    • Cloud solution portfolio enhancements with the Native Hybrid Cloud (NHC) turnkey developer platform for cloud native application development. NHC editions include those for VMware vSphere, OpenStack and the VMware Photon Platform. Read more here.

    • VCE VxRack System 1000 with new Neutrino Nodes which are software defined hyper-converged rack scale solutions to support turnkey cloud (public, private, hybrid) implementations. Read more about VxRack System 1000 with links here.

    • NVMe based DSSD D5 flash SSD system enhancements include ability to stripe two systems together in a single rack to double the IOPs, bandwidth and capacity. Also new is a VCE VxRack system with DSSD. Read more about DSSD D5 enhancements here.

    Some Hardware That Gets Software Defined

    Rear view of EMC Neutrino node

    Where To Learn More

    • Session Streaming For video of keynotes, general sessions, backstage sessions, and EMC TV coverage, click here
    • Social: Follow @EMCWorld,  @EMCCorp, @EMC_News and @EMCStorage, and join conversations with  #EMCWORLD, and like EMC on Facebook
    • Photos: Access event photos via  Flickr and EMC Pulse Blog or visit the special EMC World News microsite here
    • Reflections: Read Core Technologies President, Guy Churchward’s Reflections post on today’s announcements here
    • Visit the EMC Store, the EMC Community Network Site and The Core Blog

    What This All Means

    For those of you who have installed OpenStack, either from scratch or using one of the appliances, you understand what's involved with doing so. The point is that for those who are in the business of, or whose jobs are based on, installing, configuring or software defining the software and cloud configurations, turnkey solutions may not be a fit, at least yet. On the other hand, if your focus is doing other things and you are looking to boost productivity, then turnkey solutions are a way of fast tracking deployment. Likewise for those who need more speed in bandwidth or IOPs, the DSSD D5 enhancements will help in those environments.

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    EMCworld 2016 Getting Started on Dell EMC announcements

    server storage I/O trends

    It's the first morning of EMCworld 2016 here in Las Vegas with some items already announced today, and more in the wings. One of the underlying themes and discussions, besides what's new or who's doing what, is that this is for all practical purposes the last EMCworld given the upcoming Dell acquisition. What's not clear is whether there will be a renamed and repackaged Dell/EMCworld.

    With current EMC President Jeremy Burton, who used to be the Chief Marketing Officer (CMO) at EMC, slated to become the CMO across all of Dell, my bet is that there will be some type of new event picking up where EMCworld and Dellworld have been and moving to a new level. More on the future of EMC and Dell in future posts; however for now, let's see what has unfolded so far today.

    Today's EMCworld theme is modernize the data center, which means a mix of hardware, software and services announcements spanning physical, virtual and cloud among others (e.g. how do you want your servers, storage and data infrastructure wrapped). The themes are still EMC, as the Dell acquisition has yet to be completed, however there is a Dell presence, including Michael Dell here in person (more on Dell later).

    The first wave of announcements include:

    • Unity All Flash Array (AFA) for small, entry-level environments
    • EMC Enterprise Copy Data Management software tools portfolio
    • ViPR Version 3.0 Controller
    • Virtustream global hyper-scale Storage Cloud for data protection and cloud native object
    • MyService360

    • Data Domain virtual edition and long-term archive

    What About The Dell Deal

    Michael Dell, who is here at EMCworld, announced on the main stage that Dell Technologies will be the name of the family of businesses.

    This family of businesses includes the joint Dell, EMC, VMware, Pivotal, Secureworks, RSA and Virtustream. The Dell client focused business will be called Dell, leveraging that brand, while the new joint Dell and EMC enterprise business will be called Dell EMC, leveraging both of those brands. As a reminder, the Dell servers business unit will be moving into the existing EMC business as part of the enterprise business unit.

    Let's move on to the technology announcements from today.

    Unity AFA (and Hybrid)

    The new Unity all flash array (AFA) is a dual controller storage system optimized for Non-Volatile Memory (NVM) flash SSD, with unified (block and file) access. EMC is positioning Unity as an entry-level AFA starting around $18K USD for a 2U solution (how much capacity that includes is not yet known; more on that in a future post). As well as having a low entry cost, EMC is positioning Unity for a broad, mass market, volume distribution that can be leveraged by their partners, including Dell. More on Unity in future posts. While Unity is new and modern, it comes from the same group who created the VNXe, leveraging that knowledge and skills base.

    Note that Unity is positioned for small, mid-sized, remote office branch office (ROBO), departmental and specialized AFA situations, where EMC NVMe based DSSD D5 is positioned for higher-end shared direct attached server flash, while XtremIO and VMAX also positioned for higher-end, higher performance and workload consolidation scenarios.

    • Simple, flexible, easy to use in a 2U packaging that scale up to 80TB of NVM flash SSD storage
    • Scalable up to 3PB of storage for larger expanded configurations
    • Affordable ($18K USD starting price, $10K entry-level hybrid)
    • Modern AFA storage for entry, small, mid-sized, workgroup, departments and specialized environments
    • Unified file, block, and VMware VVOL support for storage access
    • Also available in hybrid, as well as software defined virtual and converged configurations
    • Higher performance (EMC indicates 300,000 IOPs) for given entry-level systems
    • Available in all-flash array, hybrid array, software-defined and converged configurations
    • Native controller based encryption with synchronous and asynchronous replication
    • VMware VASA 2.0, VAAI, VVols and VMware integration
    • Tight integration with EMC Data Protection portfolio tools

    Read more about Unity here.

    Copy Data Management

    Enterprise Copy Data Management (eCDM) spans data copies from data protection including backup, BC, DR as well as for operational, analytics, test, dev, devops among other uses. Another term is Enterprise Copy Data Analytics (eCDA) which includes monitoring and management along with insight, awareness and of course analytics. These new offerings and initiatives tie together various capabilities across storage platforms and software defined storage management. Watch for more activity in and around eCDM and general copy data management. Read more here.

    ViPR Controller 3.0

    ViPR controller enhancements build on previous announcements and include automation as well as failover with native replication to a standby ViPR controller. Note that there can actually be two standby controllers that are synchronized asynchronously with software built into ViPR. This means there is no need for RecoverPoint or other products to replicate the ViPR controllers. To be clear, this is for high availability of the ViPR controllers themselves and not a replacement for HA or replication of upper layer applications, storage servers or underlying storage services. Also note that ViPR is available via open source (CoprHD via Github here). Read more here.

    MyService360

    MyService360 is a cloud based dashboard and data infrastructure monitoring management platform. Read more here.

    Virtustream Storage Cloud

    Virtustream cloud services and software tools complement EMC (and other) storage systems as a back-end for cool, cold or other bulk data storage needs. The focus is to sell primary storage to customers, then leverage back-end public cloud services for backup, archive, copy data management and other applications. This also means that the Virtustream storage cloud is not just for data protection such as archiving, backup, BC and DR; it's also for other big fast data including cloud and object native applications. Does this mean Virtustream is an alternative to other cloud and object storage services such as AWS S3 and Google GCS among others? Yup. Read more here.

    Where To Learn More

    • Session Streaming For video of keynotes, general sessions, backstage sessions, and EMC TV coverage, click here
    • Social: Follow @EMCWorld,  @EMCCorp, @EMC_News and @EMCStorage, and join conversations with  #EMCWORLD, and like EMC on Facebook
    • Photos: Access event photos via  Flickr and EMC Pulse Blog or visit the special EMC World News microsite here
    • Reflections: Read Core Technologies President, Guy Churchward’s Reflections post on today’s announcements here
    • Visit the EMC Store, the EMC Community Network Site and The Core Blog

    What This All Means

    With the announcement of Unity and the impending Dell deal, some of you might (or should) have a déjà vu moment of over a decade or so ago when Dell and EMC entered into an OEM agreement around the then CLARiiON midrange storage arrays (e.g. predecessors of VNX and VNXe). Unity is designed as a high performance, easy to use, flexible, scalable, cost-effective storage solution for a broad, high-volume sales and distribution channel market.

    What does Unity mean for EMC VNX and VNXe as well as XtremIO? Unity will position near where the VNXe has been positioned, along with some of the competing solutions from Dell among others. There might be some overlap with other EMC solutions, however if executed properly, Unity should open up some new markets, perhaps at the hands of some of the newer popular startups that only offer AFA vs. hybrids. Likewise I would expect Unity to appear in future converged solutions such as those via the EMC Converged business unit (e.g. VCE).

    Even with the upcoming Dell acquisition and integration, EMC continues to evolve and innovate in many areas.

    Watch for more announcements later today and throughout the week

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    Ubuntu 16.04 LTS (aka Xenial Xerus) What’s In The Bits and Bytes?

    server storage I/O trends

    Ubuntu 16.04 LTS (aka Xenial Xerus) was recently released (you can get the bits or software download here). Ubuntu is available in various distributions including as a server, workstation or desktop among others that can run bare metal on a physical machine (PM), virtual machine (VM) or as a cloud instance via services such as Amazon Web Services (AWS) as well as Microsoft Azure among others.

    Refresh, What is Ubuntu

For those not familiar or who need a refresher, Ubuntu is an open source Linux distribution with the company behind it called Canonical. The Ubuntu software is a Debian-based Linux distribution with the Unity user interface. Ubuntu is available across different platform architectures, from industry standard Intel and AMD x86 32-bit and 64-bit to ARM processors and even the venerable IBM zSeries (aka zed) mainframe as part of LinuxOne.

As a desktop, some see or use Ubuntu as an open source alternative to the desktop environments from Microsoft (e.g. Windows) or Apple.

As a server, Ubuntu can be deployed for traditional applications to cloud, converged and many other roles, including as a Docker container, Ceph or OpenStack deployment platform. Speaking of Microsoft and Windows, if you are a *nix bash type person yet need (or have) to work with Windows, bash (and more) are coming to Windows 10. Ubuntu desktop GUI or user interface options include Unity along with tools such as Compiz and LibreOffice (an alternative to Microsoft Office).
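For example (a minimal sketch, assuming Docker is already installed on your host), you can pull and run Ubuntu 16.04 as a container to kick the tires:

docker pull ubuntu:16.04
docker run -it --rm ubuntu:16.04 /bin/bash
cat /etc/lsb-release     # run inside the container to confirm the release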

    What’s New In the Bits and Bytes (e.g. Software)

Ubuntu 16.04 LTS is based on the Linux 4.4 kernel and also includes Python 3, Ceph Jewel (block, file and object storage) and OpenStack Mitaka among other enhancements. These and other fixes as well as enhancements include the following (a quick way to verify versions follows the list):

    • Libvirt 1.3.1
    • Qemu 2.5
    • Open vSwitch 2.5.0
• LXD 2.0
    • Docker 1.10
• PHP 7.0
• MySQL 5.7
    • Juju 2.0
    • Golang 1.6 toolchain
• OpenSSH 7.2p2 with cipher improvements (legacy items such as the 1024-bit diffie-hellman-group1-sha1 key exchange, ssh-dss and ssh-dss-cert are now disabled by default)
    • GNU toolchain
    • Apt 1.2
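
As referenced above, here is a quick way to confirm what your particular 16.04 system actually has (a sketch assuming the corresponding packages are installed; your versions may vary):

lsb_release -a       # distribution release information
uname -r             # kernel version (should report 4.4.x)
php -v               # PHP version
mysql --version      # MySQL client version
apt --version        # Apt version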

    What About Ubuntu for IBM zSeries Mainframe

Ubuntu runs on the 64-bit zSeries architecture with about 95% binary compatibility. If you look at the release notes, there are still a few things being worked out among known issues. However (read the release notes), Ubuntu 16.04 LTS includes OpenStack and Ceph, which means those capabilities could be deployed on a zSeries.

Now some of you might think, wait, how can Linux and Ceph among others work on a FICON-based mainframe?

No worries. Keep in mind that FICON, the IBM zSeries server storage I/O protocol, co-exists on Fibre Channel along with SCSI_FCP (e.g. FCP), aka what most Open Systems people simply refer to as Fibre Channel (FC), and works with z/OS and other operating systems. In the case of native Linux on zSeries, those systems can in fact use SCSI (FCP) mode for accessing shared storage. In addition to the IBM LinuxOne site, you can learn more about Ubuntu running native on zSeries here on the Ubuntu site.

    Where To Learn More

    What This All Means

Ubuntu as a Linux distribution continues to evolve and increase in deployment across different environments. Some still view Ubuntu as the low-end Linux for home, hobbyist or those looking for an alternative desktop to Microsoft Windows among others. However Ubuntu is also increasingly being used in roles where other Linux distributions such as Red Hat Enterprise Linux (RHEL), SUSE and CentOS among others have gained prior popularity.

In some ways you can view RHEL as the first generation Linux distribution that gained popularity in the enterprise with early adopters, followed by a second wave or generation of those who favored CentOS among others such as the cloud crowd. Then there is the Ubuntu wave, which is expanding in many areas along with others such as CoreOS. Granted, with some people the preference for one Linux distribution vs. another can be as polarizing as Linux vs. Windows, or Open Systems vs. Mainframe vs. Cloud among others.

Having various Ubuntu distributions installed across different servers (in addition to CentOS, SUSE and others), I found the install and new capabilities of Ubuntu 16.04 LTS interesting and continue to explore the many new features while upgrading some of my older systems.

Get the Ubuntu 16.04 LTS bits here to give it a try or upgrade your existing systems.
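
For existing systems, the usual in-place LTS upgrade path applies (a minimal sketch; as always, back up first and check the release notes):

sudo apt-get update
sudo apt-get install update-manager-core   # provides the do-release-upgrade tool
sudo do-release-upgrade                    # upgrade to the next LTS release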

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    Which Enterprise HDD for Content Server Platform

    Which Enterprise HDD to use for a Content Server Platform

    data infrastructure HDD server storage I/O trends

    Updated 1/23/2018

    Which enterprise HDD to use with a content server platform?

    Insight for effective server storage I/O decision making
    Server StorageIO Lab Review

    Which enterprise HDD to use for content servers

This post is the first in a multi-part series based on a white paper hands-on lab report I did compliments of Equus Computer Systems and Seagate that you can read in PDF form here. The focus is looking at the Equus Computer Systems (www.equuscs.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drives (HDD's). I was given the opportunity to do some hands-on testing, running different application workloads on a 2U content solution platform to see how various Seagate Enterprise 2.5" HDD's handle different application workloads. This includes Seagate's Enterprise Performance HDD's with the enhanced caching feature.

    Issues And Challenges

Even though Non-Volatile Memory (NVM) including NAND flash solid state devices (SSDs) have become popular storage for use internal as well as external to servers, there remains a need for HDD's. Like many of you who need to make informed server, storage, I/O hardware, software and configuration selection decisions, time is often in short supply.

A common industry trend is to use SSD and HDD storage media together in hybrid configurations. Another industry trend is that HDD's continue to be enhanced with larger space capacity in the same or smaller footprint, as well as with performance improvements. Thus, a common challenge is what type of HDD to use for various content and application workloads, balancing performance, availability, capacity and economics.
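
As a generic illustration of the hybrid SSD plus HDD trend (a minimal Linux lvmcache sketch; vg0, /dev/sdb and /dev/sdc are hypothetical names, and this is not the configuration used in the lab tests):

sudo lvcreate -n data  -L 900G vg0 /dev/sdb    # large HDD-backed data volume
sudo lvcreate -n cache -L 100G vg0 /dev/sdc    # small SSD-backed cache volume
sudo lvconvert --type cache-pool vg0/cache     # turn the SSD volume into a cache pool
sudo lvconvert --type cache --cachepool vg0/cache vg0/data   # attach the cache to the data volume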

    Content Applications and Servers

    Fast Content Needs Fast Solutions

An industry and customer trend is that information and data are getting larger and living longer, and there is more of it. This ties to the fundamental theme that applications and their underlying hardware platforms exist to process, move, protect, preserve and serve information.

Content solutions span from video (4K, HD, SD and legacy streaming video, pre-/post-production, and editing), audio, imaging (photo, seismic, energy, healthcare, etc.) to security surveillance (including Intelligent Video Surveillance [IVS] as well as Intelligence Surveillance and Reconnaissance [ISR]). In addition to big fast data, other content solution applications include content distribution network (CDN) and caching, network function virtualization (NFV) and software-defined network (SDN), to cloud and other rich unstructured big fast media data, analytics along with little data (e.g. SQL and NoSQL database, key-value stores, repositories and meta-data) among others.

    Content Solutions And HDD Opportunities

    A common theme with content solutions is that they get defined with some amount of hardware (compute, memory and storage, I/O networking connectivity) as well as some type of content software. Fast content applications need fast software, multi-core processors (compute), large memory (DRAM, NAND flash, SSD and HDD’s) along with fast server storage I/O network connectivity. Content-based applications benefit from having frequently accessed data as close as possible to the application (e.g. locality of reference).

    Content solution and application servers need flexibility regarding compute options (number of sockets, cores, threads), main memory (DRAM DIMMs), PCIe expansion slots, storage slots and other connectivity. An industry trend is leveraging platforms with multi-socket processors, dozens of cores and threads (e.g. logical processors) to support parallel or high-concurrent content applications. These servers have large amounts of local storage space capacity (NAND flash SSD and HDD) and associated I/O performance (PCIe, NVMe, 40 GbE, 10 GbE, 12 Gbps SAS etc.) in addition to using external shared storage (local and cloud).
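
On a Linux-based content server, a few common commands give a quick inventory of those compute, memory and I/O resources (a sketch; output will vary by system):

lscpu       # sockets, cores and threads (logical processors)
free -h     # main memory (DRAM)
lsblk       # local HDD, SSD and NVMe storage devices
lspci | grep -i -e nvme -e ethernet -e sas   # PCIe I/O connectivity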

    Where To Learn More

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

Fast content applications need fast content and flexible content solution platforms such as those from Equus Computer Systems with HDD's from Seagate. Key to a successful content application deployment is having the flexibility to hardware define and software define the platform to meet your needs. Just as there are many different types of content applications along with diverse environments, content solution platforms need to be flexible, scalable and robust, not to mention cost-effective.

    Continue reading part two of this multi-part series here where we look at how and what to test as well as project planning.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Part 2 – Which HDD for Content Applications – HDD Testing

    Part 2 – Which HDD for Content Applications – HDD Testing

    HDD testing server storage I/O trends

    Updated 1/23/2018

    Which enterprise HDD to use with a content server, hdd testing, how and what to do

    Insight for effective server storage I/O decision making
    Server StorageIO Lab Review

    Which enterprise HDD to use for content servers

    This is the second in a multi-part series (read part one here) based on a white paper hands-on lab report I did compliments of Servers Direct and Seagate that you can read in PDF form here. The focus is looking at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drive (HDD’s). In this post we look at some decisions and configuration choices to make for testing content applications servers as well as project planning.

    Content Solution Test Objectives

In a short period of time, collect performance and other server storage I/O decision-making information on various HDD's running different content workloads.

Working with the Servers Direct staff, a suitable content solution platform test configuration was created. In addition to providing two Intel-based content servers, Servers Direct worked with their partner Seagate to arrange for various enterprise-class HDD's to be evaluated. For this series of content application tests, being short on time, I chose to run some simple workloads including database, basic file (large and small) processing and general performance characterization.

    Content Solution Decision Making

Knowing how Non-Volatile Memory (NVM) NAND flash SSD (1) devices (drives and PCIe cards) perform, what would be the best HDD-based storage option for my given set of applications? Different applications have various performance, capacity and budget considerations. Different types of Seagate Enterprise class 2.5" Small Form Factor (SFF) HDD's were tested.

While revolutions per minute (RPM) still plays a role in HDD performance, there are other factors including internal processing capabilities, software or firmware algorithm optimization, and caching. Most HDD's today have some amount of DRAM for read caching and other operations. Seagate Enterprise Performance HDD's with the enhanced caching feature (2) are examples of devices that accelerate storage I/O speed vs. traditional 10K and 15K RPM drives.
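
On a Linux system, one quick (if rough) way to see a drive's cache details and baseline read behavior is hdparm (a sketch; substitute your actual device for the /dev/sdX placeholder, and treat the numbers as sanity checks rather than benchmarks):

sudo hdparm -I /dev/sdX | grep -i cache   # report drive cache/buffer details
sudo hdparm -tT /dev/sdX                  # cached vs. buffered read timings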

    Project Planning And Preparation

    Workload to be tested included:

    • Database read/writes
    • Large file processing
    • Small file processing
    • General I/O profile

    Project testing consisted of five phases, some of which overlapped with others:

    Phase 1 – Plan
Identify candidate workloads that could be run in the given amount of time, determine time schedules and resource availability, and create a project plan.

    Phase 2 – Define
    Hardware define and software define the test platform.

    Phase 3 – Setup
Initial setup and configuration of hardware and software, installation of additional devices along with software configuration, troubleshooting and learning as applicable. The objective was to assess plug-and-play capability of the server, storage and I/O networking hardware with a Linux OS before moving on to the reported workloads in the next phase. This phase consisted of using Ubuntu Linux 14.04 server as the operating system (OS) along with MySQL 5.6 as a database server during initial hands-on experience (a sketch of that setup follows).
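
For reference, installing the MySQL 5.6 database server on Ubuntu 14.04 looked similar to the following (a sketch assuming the stock Ubuntu 14.04 repositories, not necessarily the exact steps used):

sudo apt-get update
sudo apt-get install mysql-server-5.6   # MySQL 5.6 from the Ubuntu 14.04 repositories
mysql --version                         # confirm the installed client version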

    Phase 4 – Execute
    This consisted of using Windows 2012 R2 server as the OS along with Microsoft SQL Server on the system under test (SUT) to support various workloads. Results of this phase are reported below.

Phase 5 – Analyze
Results from the workloads run in phase 4 were analyzed and summarized into this document.

    (Note 1) Refer to Seagate 1200 12 Gbps Enterprise SAS SSD StorageIO lab review

    (Note 2) Refer to Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy

    Planning And Preparing The Tests

    As with most any project there were constraints to contend with and work around.

    Test constraints included:

    • Short-time window
    • Hardware availability
    • Amount of hardware
    • Software availability

    Three most important constraints and considerations for this project were:

• Time – This was a project with a very short time "runway", something common in most customer environments where there is a need to make knowledgeable server storage I/O decisions.
• Amount of hardware – Limited amount of DRAM main memory and sixteen 2.5" internal hot-swap storage slots for HDD's as well as SSDs. Note that for a production content solution platform, additional DRAM can easily be added, along with extra external storage enclosures to scale memory and storage capacity to fit your needs.
    • Software availability – Utilize common software and management tools publicly available so anybody could leverage those in their own environment and tests.

The following content application workloads were profiled (a sample synthetic workload sketch follows the list):

    • Database reads/writes – Updates, inserts, read queries for a content environment
• Large file processing – Streaming of large video, images or other content objects
    • Small file processing – Processing of many small files found in some content applications
    • General I/O profile – IOP, bandwidth and response time relevant to content applications
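
For example, a general I/O profile similar to the last item above can be approximated with a synthetic tool such as fio (a hypothetical sketch with illustrative parameters, not the exact workloads or settings used in these tests; /data/fio.test is a placeholder path):

fio --name=general --filename=/data/fio.test --size=10G --direct=1 \
    --rw=randrw --rwmixread=70 --bs=8k --iodepth=16 --numjobs=4 \
    --runtime=300 --time_based --group_reporting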

    Where To Learn More

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    There are many different types of content applications ranging from little data databases to big data analytics as well as very big fast data such as for video. Likewise there are various workloads and characteristics to test. The best test and metrics are those that apply to your environment and application needs.

    Continue reading part three of this multi-part series here looking at how the systems and HDD’s were configured and tested.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.