The Value of Infrastructure Insight – Enabling Informed Decision Making

Join me and Virtual Instruments CTO John Gentry on October 27, 2016 for a free webinar (registration required) titled The Value of Infrastructure Insight – Enabling Informed Decision Making with Virtual Instruments. In this webinar, John and I will discuss the value of data center infrastructure insight as both a technology and a business and IT imperative.

Software Defined Data Infrastructure
Various Infrastructures – Business, Information, Data and Physical (or cloud)

Leveraging infrastructure performance analytics is key to assuring the performance, availability and cost-effectiveness of your infrastructure, especially as you transform to a hybrid data center over the coming years. By utilizing real-time and historical infrastructure insight from your servers, storage and networking, you can avoid flying blind and gain situational awareness for proactive decision-making. The result is faster problem resolution, problem avoidance, higher utilization and the elimination of performance slowdowns and outages.

View the companion Server StorageIO Industry Trends Report available here (free, no registration required) at the Virtual Instruments web page resource center.

The above Server StorageIO Industry Trends Perspective Report (click here to download PDF) looks at the value of data center infrastructure insight both as a technology as well as a business productivity enabler. Besides productivity, having insight into how data infrastructure resources (servers, storage, networks, system software) are used enables informed analysis, troubleshooting, planning, forecasting as well as cost-effective decision-making.

In other words, data center infrastructure insight, based on infrastructure performance analytics, enables you to avoid flying blind, giving you situational awareness for proactive Information Technology (IT) management. Your return on innovation is increased, and leveraging insight awareness along with metrics that matter drives return on investment (ROI) along with enhanced service delivery.

Where To Learn More

  • Free Server StorageIO Industry Trends Report The Value of Infrastructure Insight – Enabling Informed Decision Making (PDF)
  • Register for the free webinar on October 27, 2016 1PM ET here.
  • View other upcoming and recent events at the Server StorageIO activities page here.

What This All Means

What this all means is that the key to making smart, informed decisions involving data infrastructure, servers, storage and I/O across different applications is having insight and awareness. See for yourself how you can gain insight into your existing information factory environment, performing analysis as well as comparing and simulating your application workloads for informed decision-making.

Having insight and awareness (e.g. instruments) allows you to avoid flying blind, enabling smart, safe and informed decisions in different conditions impacting your data infrastructure. How is your investment in hardware, software, services and tools being leveraged to meet given levels of services? Is your information factory (data center and data infrastructure) performing at its peak effectiveness?

How are you positioned to support growth, improve productivity, remove complexity and costs while evolving from a legacy to a next generation software-defined, cloud, virtual, converged or hyper-converged environment with new application needs?

Data infrastructure insight benefits and takeaways:

  • Informed performance-related decision-making
  • Support growth, agility, flexibility and availability
  • Maximize resource investment and utilization
  • Find, fix and remove I/O bottlenecks
  • Puts you in control, in the driver’s seat

Remember to register for and attend the October 27 webinar; you can register here.

Btw, Virtual Instruments has been a client of Server StorageIO and that fwiw is a disclosure.

Ok, nuff said, for now…

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

Back To Software Defined Data Infrastructure School, Webinar and Fall 2016 events

It's September and that means back to school time, and not just for the kids. Here is the preliminary Server StorageIO fall 2016 back to school, webinar and event activities covering software defined data center, data infrastructure, virtual, cloud, containers, converged, hyper-converged server, storage, I/O network, performance and data protection among other topics.

December 7, 2016 – Webinar 11AM PT – BrightTalk
Hyper-Converged Infrastructure Decision Making

Are Converged Infrastructure (CI), Hyper-Converged Infrastructure (HCI), Cluster-in-Box or Cloud-in-Box (CiB) solutions for you? The answer is, it depends on your needs, requirements and applications among other criteria. In addition, are you focused on a particular technology solution or architecture approach, or looking for something that adapts to your needs? Join us in this discussion exploring your options for different scenarios as we look beyond the hype, including the next wave of hyper-scale converged, along with applicable decision-making criteria. Topics include:

– Data Infrastructures exist to support applications and their underlying resource needs
– What are your application and environment needs along with other objectives
– Explore various approaches for hyper-small and hyper-large environments
– What are you converging, hardware, hypervisors, management or something else?
– Does HCI mean hyper-vendor-lock-in, if so, is that a bad thing?
– When, where, why and how to use different scenarios

November 23, 2016 – Webinar 10AM PT BrightTalk
BCDR and Cloud Backup Software Defined Data Infrastructures (SDDI) and Data Protection

The answer is BCDR and Cloud Backup, however what was the question? Besides how to protect, preserve and secure your data, applications and data infrastructures against various threats and risks, what are some other common questions? For example, how to modernize, rethink, re-architect, and use new and old things in new ways. These and other topics, techniques, trends and tools have a common theme of BCDR and Cloud Backup. Join us in this discussion exploring your options for protecting data, applications and your data infrastructures spanning legacy, software-defined virtual and cloud environments. Topics include:

– Data Infrastructures exist to support applications and their underlying resource needs
– Various cloud storage options to meet different application PACE needs
– Do clouds need to be backed-up or protected?
– How to leverage clouds for various data protection objectives
– When, where, why and how to use different scenarios

November 23, 2016 – Webinar 9AM PT – BrightTalk
Cloud Storage – Hybrid and Software Defined Data Infrastructures (SDDI)

You have been told, or determined, that you need (or want) to use cloud storage. Ok, now what? What type of cloud storage do you need or want, or do you simply want cloud storage? What are your options as well as application requirements, including Performance, Availability, Capacity and Economics (PACE) along with access or interfaces? Where are your applications and where will they be located? What are your objectives for using cloud storage, or is it simply that you have heard or been told it's cheaper? Join us in this discussion exploring your options and considerations for cloud storage decision-making. Topics include:

– Data Infrastructures exist to support applications and their underlying resource needs
– Various cloud storage options to meet different application PACE needs
– Storage for primary, secondary, performance, availability, capacity, backup, archiving
– Public, private and hybrid cloud storage options from block, file, object to application service
– When, where, why and how to use cloud storage for different scenarios

November 22, 2016 – Webinar 10AM PT – BrightTalk
Cloud Infrastructure Hybrid and Software Defined Data Infrastructures (SDDI)

At the core of cloud (public, private, hybrid) next generation data centers are software defined data infrastructures that exist to protect, preserve and serve applications, data and their resulting information services. Software defined data infrastructure core components include hardware, software, servers and storage configured (defined) to provide various services enabling application Performance, Availability, Capacity and Economics (PACE). Just as there are different types of environments, applications and workloads, various options, technologies and techniques exist for cloud services (and underlying data infrastructures). Join us in this session to discuss trends, technologies, tools, techniques and service options for cloud infrastructures. Topics include:

– Data Infrastructures exist to support applications and their underlying resource needs
– Software Defined Infrastructures (SDDI) are what enable Software Defined Data Centers and clouds
– Various types of clouds along with cloud services that determine how resources get defined
– When, where, why and how to use cloud Infrastructures along with associated resources

October 27, 2016 – Webinar 10AM PT – Virtual Instruments
The Value of Infrastructure Insight

This webinar looks at the value of data center infrastructure insight both as a technology as well as a business productivity enabler. Besides productivity, having insight into how data infrastructure resources (servers, storage, networks, system software) are used enables informed analysis, troubleshooting, planning, forecasting as well as cost-effective decision-making. In other words, data center infrastructure insight, based on infrastructure performance analytics, enables you to avoid flying blind, giving you situational awareness for proactive Information Technology (IT) management. Your return on innovation is increased, and leveraging insight awareness along with metrics that matter drives return on investment (ROI) along with enhanced service delivery.

October 20, 2016 – Webinar 9AM PT – BrightTalk
Next-Gen Data Centers Software Defined Data Infrastructures (SDDI) including Servers, Storage and Virtualizations

At the core of next generation data centers are software defined data infrastructures that enable, protect, preserve and serve applications, data and their resulting information services. Software defined data infrastructure core components include hardware, software, servers and storage configured (defined) to provide various services enabling application Performance, Availability, Capacity and Economics (PACE). Just as there are different types of environments, applications and workloads, various options, technologies and techniques exist for virtual servers and storage. Join us in this session to discuss trends, technologies, tools, techniques and services around storage and virtualization for today, tomorrow, and the years to come. Topics include:

– Data Infrastructures exist to support applications and their underlying resource needs
– Software Defined Infrastructures (SDDI) are what enable Software Defined Data Centers
– Server and Storage Virtualization better together, with and without CI/HCI
– Many different facets (types) of Server virtualization and virtual storage
– When, where, why and how to use storage virtualization and virtual storage

September 20, 2016 – Webinar 8AM PT – BrightTalk
Software Defined Data Infrastructures (SDDI) Enabling Software Defined Data Centers – Part of Software-Defined Storage summit

Data Infrastructures exist to support applications and their underlying resource needs. Software-Defined Infrastructures (SDI) are what enable Software-Defined Data Centers, and at the heart of an SDI is storage that is software-defined. This spans cloud, virtual and physical storage and is a focal point today. Join us in this session to discuss trends, technologies, tools, techniques and services around SDI and SDDC – today, tomorrow, and in the years to come.

September 13, 2016 – Webinar 11AM PT – Redmond Magazine and
Dell Software
Windows Server 2016 and Active Directory
What's New and How to Plan for Migration

Windows Server 2016 is expected to GA this fall and is a modernized version of the Microsoft operating system that includes new capabilities such as Active Directory (AD) enhancements. AD is critical to organizational operations, providing control and secure access to data, networks, servers, storage and more from physical, virtual and cloud (public and hybrid) environments. But over time, organizations along with their associated IT infrastructures have evolved due to mergers, acquisitions, restructuring and general growth. As a result, yesterday's AD deployments may look like they did in the past while using new technology in old ways. Now is the time to start planning how you will optimize your AD environment using new tools and technologies, such as those in Windows Server 2016, in new ways. Optimizing AD means having a new design, performing cleanup and restructuring prior to migration vs. simply moving what you have. Join us for this interactive webinar to begin planning your journey to Windows Server 2016 and a new optimized AD deployment that is flexible, scalable and elastic, and enables resilient infrastructures. You will learn:

  • What’s new in Windows Server 2016 and how it impacts your AD
  • Why an optimized AD is critical for IT environments moving forward
  • How to gain insight into your current AD environment
  • AD restructuring planning considerations

September 8, 2016 – Webinar 11AM PT (Watch on Demand) – Redmond Magazine, Acronis and Unitrends
Data Protection for Modern Microsoft Environments

Your organization’s business depends on modern Microsoft® environments — Microsoft Azure and new versions of Windows Server 2016, Microsoft Hyper-V with RCT, and business applications — and you need a data protection solution that keeps pace with Microsoft technologies. If you lose mission-critical data, it can cost you $100,000 or more for a single hour of downtime. Join our webinar and learn how different data protection solutions can protect your Microsoft environment, whether you store data on company premises, at remote locations, in private and public clouds, and on mobile devices.

What This All Means

It's back to school and learning time; join me on these and other upcoming event activities.

Ok, nuff said, for now…

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

Server StorageIO August 2016 Update Newsletter

Volume 16, Issue VIII

Hello and welcome to this August 2016 Server StorageIO update newsletter.

In This Issue

  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Events and Webinars
  • Industry Activity Trends
  • Resources and Links
Enjoy this shortened summer edition of the Server StorageIO update newsletter.

    Cheers GS

    Industry Activity Trends

    With VMworld coming up this week, rest assured, there will be plenty to talk about and discuss in the following weeks. However for now, here are a few things from this past month.

    At Flash Memory Summit (FMS), which is more of a component, vendor-to-vendor industry type event, there was buzz about analytics, however what was shown as analytics tended to be Iometer. Hmmm, more on that in a future post. However, something else at FMS besides variations of Non-Volatile Memory (NVM), including SSD, NAND, Flash, and Storage Class Memory (SCM) such as 3D XPoint (among its new marketing names) along with NVM Express (NVMe), was NVMe over Fabric.

    This includes NVMe over RoCE (RDMA over Converged Ethernet), which can be implemented on some 10 Gb (and faster) Ethernet adapters as well as some InfiniBand adapters from Mellanox among others. Another variation is Fibre Channel NVMe (FC-NVMe), where the NVMe protocol command set is transported as an Upper Level Protocol (ULP) over FC. This is similar to how the SCSI command set is implemented on FC (e.g. SCSI_FCP or FCP), which means NVMe can be seen as a competing protocol to FCP (which it will or could be). Naturally not to be left out, some of the marketers have already started with Persistent Memory over Fabric among other variations of Non-Ephemeral Memory over Fabrics. More on NVM, NVMe and fabrics in future posts, commentary and newsletters.

    Some other buzzword topics regaining mention (or perhaps appearing for the first time for some) include FedRAMP, Authority To Operate (ATO) clouds for government entities, and FISMA among others. Many service providers, cloud and hosting providers, from the large AWS and Azure to the smaller Blackmesh, have added FedRAMP and other options in addition to traditional and DevOps offerings.

    Some of you may recall me mentioning ScaleMP in the past, which is a technology for aggregating multiple compute servers, including processors and memory, into a converged resource pool. Think the opposite of a hypervisor that divides up resources to support consolidation. In other words, it is for where you need to scale up without the complexity of clustering, or to avoid having to change and partition your software applications. In addition to ScaleMP, a newer hardware-agnostic startup to check out is TidalScale.

    On the merger and acquisition front, the Dell / EMC deal is moving forward and expected to close soon, perhaps by the time you read this. In other news, HPE announced that it is buying SGI to gain access to a larger part of the traditional legacy big data Super Compute and High Performance Compute (HPC) market. One of the SGI diamonds in the rough that you may not be aware of is DMF for data management. HPE and Dropbox also announced a partnership deal earlier this summer.

    That’s all for now, time to pack my bags and off to Las Vegas for VMworld 2016.

    Ok, nuff said, for now…

     

    StorageIOblog Posts

    Recent and popular Server StorageIOblog posts include:

    View other recent as well as past StorageIOblog posts here

     

    StorageIO Commentary in the news

    Recent Server StorageIO industry trends perspectives commentary in the news.

    Via FutureReadyOEM Q&A: when to implement ultra-dense storage
    Via EnterpriseStorageForum Comments on Top 10 Enterprise SSD Market Trends
    Via SearchStorage Comments on NAS system buying decisions
    Via EnterpriseStorageForum Comments on Cloud Storage Pros and Cons

    View more Server, Storage and I/O hardware as well as software trends comments here

     

    StorageIO Tips and Articles

    Recent and past Server StorageIO articles appearing in different venues include:

    Via Iron Mountain Preventing Unexpected Disasters: IT and Data Infrastructure
    Via FutureReadyOEM Q&A: When to implement ultra-dense storage servers
    Via Micron Blog What's next for NVMe and your Data Center – Preparing for Tomorrow
    Redmond Magazine: Trends – Evolving from Data Protection to Data Resiliency
    IronMountain: 5 Noteworthy Data Privacy Trends From 2015
    InfoStor: Data Protection Gaps, Some Good, Some Not So Good
    Virtual Blocks (VMware Blogs): EVO:RAIL – When And Where To Use It?

    Check out these resources, techniques, trends as well as tools. View more tips and articles here

    StorageIO Webinars and Industry Events

    December 7: BrightTalk Webinar – Hyper-Converged Infrastructure (HCI) Webinar 11AM PT

    November 23: BrightTalk Webinar – BCDR and Cloud Backup – Software Defined Data Infrastructures (SDDI) and Data Protection – 10AM PT

    November 23: BrightTalk Webinar – Cloud Storage – Hybrid and Software Defined Data Infrastructures (SDDI) – 9AM PT

    November 22: BrightTalk Webinar – Cloud Infrastructure – Hybrid and Software Defined Data Infrastructures (SDDI) – 10AM PT

    October 20: BrightTalk Webinar – Next-Gen Data Centers – Software Defined Data Infrastructures (SDDI) including Servers, Storage and Virtualizations – 9AM PT

    September 29: TBA Webinar – 10AM PT

    September 27-28 – NetApp – Las Vegas

    September 20: BrightTalk Webinar – Software Defined Data Infrastructures (SDDI) Enabling Software Defined Data Centers – Part of Software-Defined Storage summit – 8AM PT

    September 13: Redmond Webinar – Windows Server 2016 and Active Directory What’s New and How to Plan for Migration – 11AM PT

    September 8: Redmond Webinar – Data Protection for Modern Microsoft Environments – 11AM PT

    August 29-31: VMworld Las Vegas

    August 25 – MSP CMG – The Answer is Software Defined – What was the question?

    August 16: BrightTalk Webinar Software Defined Data Centers (SDDC) are in your future (if not already) – Part of Enterprise Software and Infrastructure summit 8AM PT

    August 10-11 Flash Memory Summit (Panel discussion August 11th) – NVMe over Fabric

    See more webinars and other activities on the Server StorageIO Events page here.

     

    Server StorageIO Industry Resources and Links

    Check out these useful links and pages:

    storageio.com/links – Various industry links (over 1,000 with more to be added soon)
    objectstoragecenter.com – Cloud and object storage topics, tips and news items
    storageioblog.com/data-protection-diaries-main/ – Various data protection items and topics
    thenvmeplace.com – Focus on NVMe trends and technologies
    thessdplace.com – NVM and Solid State Disk topics, tips and techniques
    storageio.com/performance – Various server, storage and I/O performance and benchmarking

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Server storage I/O performance benchmark workload scripts Part I

    Update 1/28/2018

    This is part one of a two-part series of posts about Server storage I/O performance benchmark workload tools and scripts. View part II here which includes the workload scripts and where to view sample results.

    There are various tools and workloads for server I/O benchmark testing, validation and exercising different storage devices (or systems and appliances) such as Non-Volatile Memory (NVM) flash Solid State Devices (SSDs) or Hard Disk Drives (HDD) among others.

    Various NVM flash SSD including NVMe devices

    For example, let's say you have an SSD such as an Intel 750 (here, here, and here) or some other vendor's NVMe PCIe Add-in Card (AiC) installed in a Microsoft Windows server and would like to see how it compares with expected results. The following scripts allow you to validate your system against those of others running the same workload, granted of course your mileage (performance) may vary.

    Why Your Performance May Vary

    Reasons your performance may vary include, among others:

    • GHz Speed of your server, number of sockets, cores
    • Amount of main DRAM memory
    • Number, type and speed of PCIe slots
    • Speed of storage device and any adapters
    • Device drivers and firmware of storage devices and adapters
    • Server power mode setting (e.g. low or balanced power vs. high-performance)
    • Other workload running on system and device under test
    • Solar flares (kp-index) among other urban (or real) myths and issues
    • Typos or misconfiguration of workload test scripts
    • Test server, storage, I/O device, software and workload configuration
    • Versions of test software tools among others

    Windows Power (and performance) Settings

    Some things are assumed or taken for granted that everybody knows and does, however sometimes the obvious needs to be stated or re-stated. An example is remembering to check your server power management settings to see if they are in energy-efficient power savings mode, or in high-performance mode. Note that if your focus is on getting the best possible performance for effective productivity, then you want to be in high-performance mode. On the other hand, if performance is not your main concern and your focus is on energy savings, then use low power or perhaps balanced mode.

    For Microsoft Windows servers, desktop workstations, laptops and tablets, you can adjust power settings via the control panel GUI as well as from the command line or PowerShell. From the command line (privileged or administrator), the following are used for setting balanced or high-performance power settings.

    Balanced

    powercfg.exe /setactive 381b4222-f694-41f0-9685-ff5bb260df2e

    High Performance

    powercfg.exe /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c

    From PowerShell the following set balanced or high-performance.

    Balanced
    PowerCfg -SetActive "381b4222-f694-41f0-9685-ff5bb260df2e"

    High Performance
    PowerCfg -SetActive "8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c"

    Note that you can list Windows power management settings using powercfg -LIST and powercfg -QUERY.

    Btw, if you have not already done so, enable Windows disk (HDD and SSD) performance counters so that they appear via Task Manager by entering from a command prompt:

    diskperf -y

    Workload (Benchmark) Simulation Test Tools Used

    There are many tools (see storageio.com/performance) that can be used for creating and running workloads, just as there are various application server I/O characteristics. Different server I/O and application performance attributes include read vs. write, random vs. sequential, large vs. small, long vs. short stride, burst vs. sustained, cache and non-cache friendly, and activity vs. data movement vs. latency vs. CPU usage among others. Likewise, the number of workers, jobs, threads, and outstanding and overlapped I/Os, among other configuration settings, can have an impact on workload and results.
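
To keep those attributes straight when moving between tools, they can be captured in a small profile structure. The sketch below is my own illustration (the `Workload` class is not part of any tool); it renders fio's documented option names (`rw`, `rwmixread`, `bs`, `numjobs`, `iodepth`) from a declarative description:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    read_pct: int      # 100 = all reads, 0 = all writes
    random: bool       # random vs. sequential access
    block_kb: int      # I/O size in KB
    jobs: int = 1      # worker processes
    iodepth: int = 32  # outstanding I/Os per worker

    def fio_args(self):
        # Map the profile onto fio's rw / rwmixread / bs / numjobs / iodepth
        if self.read_pct == 100:
            rw = "randread" if self.random else "read"
        elif self.read_pct == 0:
            rw = "randwrite" if self.random else "write"
        else:
            rw = "randrw" if self.random else "rw"
        args = [f"--rw={rw}", f"--bs={self.block_kb}k",
                f"--numjobs={self.jobs}", f"--iodepth={self.iodepth}"]
        if 0 < self.read_pct < 100:
            args.append(f"--rwmixread={self.read_pct}")
        return args
```

For example, `Workload(70, True, 4).fio_args()` describes the 70% read, random, 4KB scenario used in the scripts later in this series.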

    The four free tools that I’m using with this set of scripts are:

    • Microsoft Diskspd (free), get the tool and bits here or here (open source), learn more about Diskspd here.
    • FIO.exe (free), get the tool and bits here or here among other venues.
    • Vdbench (free with registration), get the tool and bits here or here among other venues.
    • Iometer (free), get the tool and bits here among other venues.

    Notice: While best effort has been made to verify the above links, they may change over time and you are responsible for verifying the safety of links and your downloads.

    Where To Learn More

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    What This All Means

    Remember, everything is not the same in the data center or with data infrastructures that support different applications.

    While some tools are more robust or better than others for different things, ultimately it's usually not the tool that results in a bad benchmark or comparison, it's the configuration, or the inclusion of workload settings that are not relevant or applicable. The best benchmark, workload or simulation is your own application. Second best is one that closely resembles your application workload characteristics. A bad benchmark is one that has no relevance to your environment or application use scenario. Take and treat all benchmark or workload simulation results with a grain of salt, as something to compare, contrast or make reference to in the proper context. Read part two of this post series to view test tool workload scripts along with sample results.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Part II – Some server storage I/O workload scripts and results

    Updated 1/28/2018

    This is the second in a two-part series of posts pertaining to using some common server storage I/O workload benchmark tools and scripts. View part I here, which includes an overview, background and information about the tools used and related topics.

    Various NVM flash SSD including NVMe devices

    Following are some server I/O benchmark workload scripts to exercise various storage devices such as Non-Volatile Memory (NVM) flash Solid State Devices (SSDs) or Hard Disk Drives (HDD) among others.

    The Workloads

    Some ways that can impact workload performance results, besides changing the I/O size, read/write mix and random/sequential mix, are the number of threads, workers and jobs. Note that in the workload steps, the larger 1MB and sequential scenarios have fewer threads and workers vs. the smaller IOP or activity focused workloads. Too many threads or workers can cause overhead, and you will reach a point of diminishing returns. Likewise, with too few you will not drive the system under test (SUT) or device under test (DUT) to its full potential. If you are not sure how many threads or workers to use, run some short calibration tests and review the results before doing a larger, longer test.
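
One way to eyeball those calibration runs is to scan the thread-count results for the point where adding workers stops paying off. This is a hypothetical helper of my own (not from any benchmark tool), shown as a sketch:

```python
def knee_point(results, min_gain=0.05):
    """Given [(threads, iops), ...] sorted by thread count, return the
    thread count after which adding more workers gains less than
    min_gain (default 5%) -- a rough point of diminishing returns."""
    for (t_prev, iops_prev), (_t, iops) in zip(results, results[1:]):
        if iops_prev and (iops - iops_prev) / iops_prev < min_gain:
            return t_prev
    # IOPS kept scaling through the last run; the sweep should go higher
    return results[-1][0]
```

For example, calibration results of `[(1, 100), (2, 190), (4, 360), (8, 380), (16, 385)]` suggest stopping around 8 threads, since 16 threads adds little.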

    Keep in mind that the best benchmark or workload is your own application running with similar load to what you would see in real world, along with applicable features, configuration and functionality enabled. The second best would be those that closely resemble your workload characteristics and that are relevant.

    The following workloads involved a system test initiator (STI) server driving workload using the different tools and scripts shown. The STI sends the workload to a SUT or DUT that can be a single drive, card or multiple devices, storage system or appliance. Warning: The following workload tests do both reads and writes, which can be destructive to your device under test. Exercise caution with the device and file name specified to avoid causing a problem that might result in you testing your backup / recovery process. Likewise, no warranty is given, implied or made for these scripts or their use or results; they are simply supplied as-is for your reference.
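
Since these tests overwrite their target, a cheap guard before launching a destructive run can help avoid pointing the scripts at the wrong device. The check below is my own convention, not a feature of any of the tools; it assumes the dedicated test file is named iobw.tst on a non-system drive letter (adjust the pattern for your environment):

```python
import re

def safe_target(path, allowed=r"^[D-Zd-z]:\\iobw\.tst$"):
    """Return True only if path looks like the dedicated benchmark test
    file on a non-C: drive; refuse anything else (e.g. system paths or
    production data) before a destructive read/write run."""
    return re.match(allowed, path) is not None
```

A wrapper script would call this and abort (rather than run the workload) when it returns False.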

    The four free tools that I’m using with this set of scripts are:

    • Microsoft Diskspd (free), get the tool and bits here or here (open source), learn more about Diskspd here.
    • FIO.exe (free), get the tool and bits here or here among other venues.
    • Vdbench (free with registration), get the tool and bits here or here among other venues.
    • Iometer (free), get the tool and bits here among other venues.

    Notice: While best effort has been made to verify the above links, they may change over time, and you are responsible for verifying the safety of links and your downloads.

    Microsoft Diskspd workloads

    Note that a 300GB file named iobw.tst on device N: is used as the target for read and write I/Os. There are 160 threads, I/O sizes of 4KB and 8KB, varying from 100% read (0% write) to 70% read (30% write) and 0% read (100% write), with random access (seeks) and no hardware or software caching. Also specified are latency statistics collection, a 30 second warm-up (ramp-up) time, and a quick 5 minute duration (test time). Five minutes is fine for a quick calibration test to verify your environment; however, it is relatively short for a real test, which should run for hours or more depending on your needs.

    Note that the output results are put into a file with a name describing the test tool, workload and other useful information such as date and time. You may also want to specify a different directory where output files are placed.

    diskspd.exe -c300G -o160 -t160 -b4K -w0 -W30 -d300 -h -fr  N:iobw.tst -L  > DiskSPD_300G_4KRan100Read_160x160_072416_8AM.txt
    diskspd.exe -c300G -o160 -t160 -b4K -w30 -W30 -d300 -h -fr  N:iobw.tst -L  > DiskSPD_300G_4KRan70Read_160x160_072416_8AM.txt
    diskspd.exe -c300G -o160 -t160 -b4K -w100 -W30 -d300 -h -fr  N:iobw.tst -L  > DiskSPD_300G_4KRan0Read_160x160_072416_8AM.txt
    diskspd.exe -c300G -o160 -t160 -b8K -w0 -W30 -d300 -h -fr  N:iobw.tst -L  > DiskSPD_300G_8KRan100Read_160x160_072416_8AM.txt
    diskspd.exe -c300G -o160 -t160 -b8K -w30 -W30 -d300 -h -fr  N:iobw.tst -L  > DiskSPD_300G_8KRan70Read_160x160_072416_8AM.txt
    diskspd.exe -c300G -o160 -t160 -b8K -w100 -W30 -d300 -h -fr  N:iobw.tst -L  > DiskSPD_300G_8KRan0Read_160x160_072416_8AM.txt
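    Since the six command lines above differ only in block size and write percentage, they can be generated from a small parameter matrix rather than copied by hand. A minimal Python sketch (my own convenience, assuming the flags and file-name convention shown above):

```python
# Hedged sketch: build the six random-I/O Diskspd command lines from a
# block-size x write-percentage matrix, mirroring the examples above.
def diskspd_random_matrix(device="N:iobw.tst", stamp="072416_8AM"):
    cmds = []
    for bs in ("4K", "8K"):
        for write_pct in (0, 30, 100):
            read_pct = 100 - write_pct
            out = f"DiskSPD_300G_{bs}Ran{read_pct}Read_160x160_{stamp}.txt"
            cmds.append(
                f"diskspd.exe -c300G -o160 -t160 -b{bs} -w{write_pct} "
                f"-W30 -d300 -h -fr {device} -L > {out}")
    return cmds
```

The same idea extends to the sequential scenarios or to other thread/queue settings by adding another loop dimension.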
    

    The following Diskspd tests use settings similar to those above; however, instead of random, sequential access is specified, threads and outstanding I/Os are reduced, and the I/O size is set to 1MB, then 8KB, with 100% read and 100% write scenarios. The -t option specifies the number of threads and -o the number of outstanding I/Os per thread.

    diskspd.exe -c300G -o32 -t32 -b1M -w0 -W30 -d300 -h -si  N:iobw.tst -L  > DiskSPD_300G_1MSeq100Read_32x32_072416_8AM.txt
    diskspd.exe -c300G -o32 -t32 -b1M -w100 -W30 -d300 -h -si  N:iobw.tst -L  > DiskSPD_300G_1MSeq0Read_32x32_072416_8AM.txt
    diskspd.exe -c300G -o32 -t32 -b8K -w0 -W30 -d300 -h -si  N:iobw.tst -L  > DiskSPD_300G_8KSeq100Read_32x32_072416_8AM.txt
    diskspd.exe -c300G -o32 -t32 -b8K -w100 -W30 -d300 -h -si  N:iobw.tst -L  > DiskSPD_300G_8KSeq0Read_32x32_072416_8AM.txt
    

    Fio.exe workloads

    Next are fio workloads similar to those run using Diskspd, except that the sequential scenarios are skipped.

    fio --filename=N\:\iobw.tst --filesize=300000M --direct=1  --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=windowsaio  --ba=4k --bs=4k --rwmixread=100 --iodepth=32 --numjobs=5 --exitall --time_based  --ramp_time=30 --runtime=300 --group_reporting --name=xxx  --output=FIO_300000M_4KRan100Read_5x32_072416_8AM.txt
    fio --filename=N\:\iobw.tst --filesize=300000M --direct=1  --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=windowsaio  --ba=4k --bs=4k --rwmixread=70 --iodepth=32 --numjobs=5 --exitall --time_based  --ramp_time=30 --runtime=300 --group_reporting --name=xxx  --output=FIO_300000M_4KRan70Read_5x32_072416_8AM.txt
    fio --filename=N\:\iobw.tst --filesize=300000M --direct=1  --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=windowsaio  --ba=4k --bs=4k --rwmixread=0 --iodepth=32 --numjobs=5 --exitall --time_based  --ramp_time=30 --runtime=300 --group_reporting --name=xxx  --output=FIO_300000M_4KRan0Read_5x32_072416_8AM.txt
    fio --filename=N\:\iobw.tst --filesize=300000M --direct=1  --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=windowsaio  --ba=8k --bs=8k --rwmixread=100 --iodepth=32 --numjobs=5 --exitall --time_based  --ramp_time=30 --runtime=300 --group_reporting --name=xxx  --output=FIO_300000M_8KRan100Read_5x32_072416_8AM.txt
    fio --filename=N\:\iobw.tst --filesize=300000M --direct=1  --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=windowsaio  --ba=8k --bs=8k --rwmixread=70 --iodepth=32 --numjobs=5 --exitall --time_based  --ramp_time=30 --runtime=300 --group_reporting --name=xxx  --output=FIO_300000M_8KRan70Read_5x32_072416_8AM.txt
    fio --filename=N\:\iobw.tst --filesize=300000M --direct=1  --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=windowsaio  --ba=8k --bs=8k --rwmixread=0 --iodepth=32 --numjobs=5 --exitall --time_based  --ramp_time=30 --runtime=300 --group_reporting --name=xxx  --output=FIO_300000M_8KRan0Read_5x32_072416_8AM.txt
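    The output file names used throughout this post encode the tool, file size, block size, access pattern, read percentage, worker/queue settings and a timestamp. Here is a small Python sketch (my own addition; the regex assumes the exact naming pattern shown in these examples) that parses the convention back out, so results from many runs can be tabulated:

```python
# Hedged sketch: parse the result-file naming convention used in this post,
# e.g. FIO_300000M_4KRan70Read_5x32_072416_8AM.txt, into its fields.
import re

PATTERN = re.compile(
    r"(?P<tool>\w+?)_(?P<size>\w+?)_"
    r"(?P<bs>\d+[KM])(?P<access>Ran|Seq)(?P<read>\d+)Read_"
    r"(?P<workers>\d+)x(?P<depth>\d+)_(?P<stamp>.+)\.txt")

def parse_result_name(name):
    m = PATTERN.match(name)
    return m.groupdict() if m else None

info = parse_result_name("FIO_300000M_4KRan70Read_5x32_072416_8AM.txt")
print(info)
```

With the fields extracted, a spreadsheet or a few more lines of Python can group results by block size, access pattern and read mix.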
    

    Vdbench workloads

    Next are the Vdbench workloads similar to those used with the Microsoft Diskspd scenarios. In addition to making sure Vdbench is installed and working, you will need to create a text file called seqrxx.txt containing the following:

    hd=localhost,jvms=!jvmn
    sd=sd1,lun=!drivename,openflags=directio,size=!dsize
    wd=mix,sd=sd1
    rd=!jobname,wd=mix,elapsed=!etime,interval=!itime,iorate=max,forthreads=(!tthreads),forxfersize=(!worktbd),forseekpct=(!workseek),forrdpct=(!workread),openflags=directio
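    Vdbench resolves the !name tokens in the parameter file above from the name=value pairs supplied on its command line. Here is a small Python sketch (my own illustration of that substitution, not Vdbench code) you can use to preview a resolved seqrxx.txt line before launching a run:

```python
# Hedged sketch: mimic Vdbench's !name token substitution so a parameter
# file line can be previewed with actual values filled in. Unknown tokens
# are left as-is, matching what you would want to spot before a run.
import re

def resolve_vdbench(template, **params):
    return re.sub(r"!(\w+)",
                  lambda m: str(params.get(m.group(1), m.group(0))),
                  template)

line = "sd=sd1,lun=!drivename,openflags=directio,size=!dsize"
print(resolve_vdbench(line, drivename=r"\\.\N:\iobw.tst", dsize="300G"))
```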

    The following commands call the Vdbench script file. Note that Vdbench puts its output files (yes, plural, there are many results) in an output folder.

    vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=4k workseek=100 workread=100 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_4K100Ran100Read_072416_8AM
    vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=4k workseek=100 workread=70 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_4K100Ran70Read_072416_8AM
    vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=4k workseek=100 workread=0 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_4K100Ran0Read_072416_8AM
    vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=8k workseek=100 workread=100 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_8K100Ran100Read_072416_8AM
    vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=8k workseek=100 workread=70 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_8K100Ran70Read_072416_8AM
    vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=8k workseek=100 workread=0 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_8K100Ran0Read_072416_8AM
    vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=8k workseek=0 workread=100 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_8K100Seq100Read_072416_8AM
    vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=8k workseek=0 workread=70 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_8K100Seq70Read_072416_8AM
    vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=8k workseek=0 workread=0 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_8K100Seq0Read_072416_8AM
    vdbench -f seqrxx.txt dsize=300G  tthreads=32 jvmn=64 worktbd=1M workseek=0 workread=100 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_32TH_1M100Seq100Read_072416_8AM
    vdbench -f seqrxx.txt dsize=300G  tthreads=32 jvmn=64 worktbd=1M workseek=0 workread=0 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_32TH_1M100Seq0Read_072416_8AM
    

    Iometer workloads

    Last however not least, let's do an Iometer run. The following command calls an Iometer input file (icf) that you can find here. In that file you will need to make a few changes, including the name of the server where Iometer is running, the description, and the device under test address. For example, in the icf file change SIOSERVER to the name of the server you will be running Iometer from. Also change the address of the DUT, for example N:, to whatever address, drive or mount point you are using, and update the description accordingly (e.g. "NVME" to "Your test example").
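    Those icf edits can also be scripted with simple string replacement. A minimal Python sketch (my own convenience; the placeholder tokens SIOSERVER, N: and NVME are from the example icf described above, so verify them against your own file first):

```python
# Hedged sketch: substitute the placeholder server name, target address
# and description in an Iometer icf file's text. Naive string replacement
# is assumed to be safe here only because the tokens are distinctive;
# check your own icf before relying on it.
def customize_icf(icf_text, server, target, description):
    for old, new in (("SIOSERVER", server),
                     ("N:", target),
                     ("NVME", description)):
        icf_text = icf_text.replace(old, new)
    return icf_text

sample = "'Manager name\nSIOSERVER\n'Target\nN:\n'Description\nNVME run"
print(customize_icf(sample, "MYSERVER", "D:", "HDD test"))
```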

    Here is the command line to run Iometer specifying an icf and where to put the results in a CSV file that can be imported into Excel or other tools.

    iometer /c  iometer_5work32q_intel_Profile.icf /r iometer_nvmetest_5work32q_072416_8AM.csv
    


    What About The Results?

    For context, the following results were run on a Lenovo TS140 (32GB RAM), single socket quad core (3.2GHz) Intel E3-1225 v3 with an Intel NVMe 750 PCIe AiC (Intel SSDPEDMW40). Out of the box Microsoft Windows NVMe drive and controller drivers were used (e.g. 6.3.9600.18203 and 6.3.9600.16421). Operating system is Windows 2012 R2 (bare metal) with NVMe PCIe card formatted with ReFS file system. Workload generator and benchmark driver tools included Microsoft Diskspd version 2.012, Fio.exe version 2.2.3, Vdbench 50403 and Iometer 1.1.0. Note that there are newer versions of the various workload generation tools.

    Example results are located here.

    Where To Learn More

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    Remember, everything is not the same in the data center or with data infrastructures that support different applications.

    While some tools are more robust or better than others for different things, ultimately it is usually not the tool that results in a bad benchmark or comparison; it is the configuration, or the inclusion of workload settings that are not relevant or applicable. The best benchmark, workload or simulation is your own application. Second best is one that closely resembles your application workload characteristics. A bad benchmark is one that has no relevance to your environment or application use scenario. Take all benchmark or workload simulation results with a grain of salt, as something to compare, contrast or make reference to in the proper context.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Which Enterprise HDD to use for a Content Server Platform


    Updated 1/23/2018

    Which enterprise HDD to use with a content server platform?

    Insight for effective server storage I/O decision making
    Server StorageIO Lab Review

    Which enterprise HDD to use for content servers

    This post is the first in a multi-part series based on a hands-on lab report white paper I did compliments of Equus Computer Systems and Seagate that you can read in PDF form here. The focus is the Equus Computer Systems (www.equuscs.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drives (HDDs). I was given the opportunity to do some hands-on testing, running different application workloads on a 2U content solution platform to see how various Seagate Enterprise 2.5” HDDs handle different application workloads. This includes Seagate’s Enterprise Performance HDDs with the enhanced caching feature.

    Issues And Challenges

    Even though Non-Volatile Memory (NVM), including NAND flash solid state devices (SSDs), has become popular storage for use internal as well as external to servers, there remains a need for HDDs. For many of you who need to make informed server, storage, I/O hardware, software and configuration selection decisions, time is often in short supply.

    A common industry trend is to use SSD and HDD storage media together in hybrid configurations. Another industry trend is that HDDs continue to be enhanced with larger space capacity in the same or smaller footprint, as well as with performance improvements. Thus, a common challenge is what type of HDD to use for various content and application workloads, balancing performance, availability, capacity and economics.

    Content Applications and Servers

    Fast Content Needs Fast Solutions

    An industry and customer trend is that information and data are getting larger, living longer, and there is more of it. This ties to the fundamental theme that applications and their underlying hardware platforms exist to process, move, protect, preserve and serve information.

    Content solutions span from video (4K, HD, SD and legacy streaming video, pre-/post-production, and editing), audio, imaging (photo, seismic, energy, healthcare, etc.) to security surveillance (including Intelligent Video Surveillance [IVS] as well as Intelligence Surveillance and Reconnaissance [ISR]). In addition to big fast data, other content solution applications include content distribution network (CDN) and caching, network function virtualization (NFV) and software-defined network (SDN), to cloud and other rich unstructured big fast media data, analytics along with little data (e.g. SQL and NoSQL database, key-value stores, repositories and meta-data) among others.

    Content Solutions And HDD Opportunities

    A common theme with content solutions is that they get defined with some amount of hardware (compute, memory and storage, I/O networking connectivity) as well as some type of content software. Fast content applications need fast software, multi-core processors (compute), large memory (DRAM, NAND flash, SSD and HDD’s) along with fast server storage I/O network connectivity. Content-based applications benefit from having frequently accessed data as close as possible to the application (e.g. locality of reference).

    Content solution and application servers need flexibility regarding compute options (number of sockets, cores, threads), main memory (DRAM DIMMs), PCIe expansion slots, storage slots and other connectivity. An industry trend is leveraging platforms with multi-socket processors, dozens of cores and threads (e.g. logical processors) to support parallel or high-concurrent content applications. These servers have large amounts of local storage space capacity (NAND flash SSD and HDD) and associated I/O performance (PCIe, NVMe, 40 GbE, 10 GbE, 12 Gbps SAS etc.) in addition to using external shared storage (local and cloud).


    What This All Means

    Fast content applications need fast content and flexible content solution platforms such as those from Equus Computer Systems and HDDs from Seagate. Key to a successful content application deployment is having the flexibility to hardware define and software define the platform to meet your needs. Just as there are many different types of content applications along with diverse environments, content solution platforms need to be flexible, scalable and robust, not to mention cost effective.

    Continue reading part two of this multi-part series here where we look at how and what to test as well as project planning.

    Ok, nuff said, for now.

    Gs


    Part 2 – Which HDD for Content Applications – HDD Testing



    Updated 1/23/2018



    This is the second in a multi-part series (read part one here) based on a hands-on lab report white paper I did compliments of Servers Direct and Seagate that you can read in PDF form here. The focus is the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drives (HDDs). In this post we look at some decisions and configuration choices to make for testing content application servers, as well as project planning.

    Content Solution Test Objectives

    In a short period, collect performance and other server and storage I/O decision-making information on various HDDs running different content workloads.

    Working with the Servers Direct staff, a suitable content solution platform test configuration was created. In addition to providing two Intel-based content servers, Servers Direct worked with their partner Seagate to arrange for various enterprise-class HDDs to be evaluated. For this series of content application tests, being short on time, I chose to run some simple workloads including database, basic file (large and small) processing, and general performance characterization.

    Content Solution Decision Making

    Knowing how Non-Volatile Memory (NVM) NAND flash SSD (1) devices (drives and PCIe cards) perform, what would be the best HDD-based storage option for my given set of applications? Different applications have various performance, capacity and budget considerations. Different types of Seagate Enterprise class 2.5” Small Form Factor (SFF) HDDs were tested.

    While revolutions per minute (RPM) still plays a role in HDD performance, there are other factors, including internal processing capabilities, software or firmware algorithm optimization, and caching. Most HDDs today have some amount of DRAM for read caching and other operations. Seagate Enterprise Performance HDDs with the enhanced caching feature (2) are examples of devices that accelerate storage I/O speed vs. traditional 10K and 15K RPM drives.

    Project Planning And Preparation

    Workloads to be tested included:

    • Database read/writes
    • Large file processing
    • Small file processing
    • General I/O profile

    Project testing consisted of five phases, some of which overlapped with others:

    Phase 1 – Plan
    Identify candidate workloads that could be run in the given amount of time, determine time schedules and resource availability, create a project plan.

    Phase 2 – Define
    Hardware define and software define the test platform.

    Phase 3 – Setup
    Initial setup and configuration of hardware and software, installation of additional devices along with software configuration, troubleshooting, and learning as applicable. The objective was to assess the plug-and-play capability of the server, storage and I/O networking hardware with a Linux OS before moving on to the reported workloads in the next phase. This phase used Ubuntu Linux 14.04 server as the operating system (OS) along with MySQL 5.6 as a database server for initial hands-on experience.

    Phase 4 – Execute
    This consisted of using Windows 2012 R2 server as the OS along with Microsoft SQL Server on the system under test (SUT) to support various workloads. Results of this phase are reported below.

    Phase 5 – Analyze
    Results from the workloads run in phase 4 were analyzed and summarized into this document.

    (Note 1) Refer to Seagate 1200 12 Gbps Enterprise SAS SSD StorageIO lab review

    (Note 2) Refer to Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy

    Planning And Preparing The Tests

    As with most any project there were constraints to contend with and work around.

    Test constraints included:

    • Short-time window
    • Hardware availability
    • Amount of hardware
    • Software availability

    The three most important constraints and considerations for this project were:

    • Time – This was a project with a very short time “runway”, something common in most customer environments that are looking to make knowledgeable server and storage I/O decisions.
    • Amount of hardware – Limited amount of DRAM main memory and sixteen 2.5” internal hot-swap storage slots for HDDs as well as SSDs. Note that for a production content solution platform, additional DRAM can easily be added, along with extra external storage enclosures, to scale memory and storage capacity to fit your needs.
    • Software availability – Utilize common software and management tools publicly available so anybody could leverage those in their own environment and tests.

    The following content application workloads were profiled:

    • Database reads/writes – Updates, inserts, read queries for a content environment
    • Large file processing – Streaming of large video, images or other content objects.
    • Small file processing – Processing of many small files found in some content applications
    • General I/O profile – IOP, bandwidth and response time relevant to content applications


    What This All Means

    There are many different types of content applications ranging from little data databases to big data analytics as well as very big fast data such as for video. Likewise there are various workloads and characteristics to test. The best test and metrics are those that apply to your environment and application needs.

    Continue reading part three of this multi-part series here looking at how the systems and HDD’s were configured and tested.

    Ok, nuff said, for now.

    Gs


    Part 3 – Which HDD for Content Applications – HDD Test Configuration


    Updated 1/23/2018



    This is the third in a multi-part series (read part two here) based on a hands-on lab report white paper I did compliments of Servers Direct and Seagate that you can read in PDF form here. The focus is the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drives (HDDs). In this post the focus expands to hardware and software defining, as well as configuring the test environments along with application workloads.

    Defining Hardware Software Environment

    Servers Direct content platforms are software defined and hardware defined to your specific solution needs. For my test-drive, I used a pair of 2U Content Solution platforms, one as a client System Test Initiator (STI) (3) and the other as the server SUT, shown in figure-1. With the STI configured and the SUT set up, Seagate Enterprise class 2.5” 12Gbps SAS HDDs were added to the configuration.

    (Note 3) The System Test Initiator (STI) was hardware defined with dual Intel Xeon E5-2695 v3 (2.30 GHz) processors and 32GB RAM, running Windows Server 2012 R2 with two network connections to the SUT. Network connections from the STI to the SUT included an Intel GbE X540-AT2 as well as an Intel XL710 Q2 40 GbE Converged Network Adapter (CNA). In addition to software defining the STI with Windows Server 2012 R2, Dell Benchmark Factory (V7.1 64-bit 496), part of the Database Administrators (DBA) Toad Tools (including free versions), was also used. For those familiar with HammerDB or Sysbench among others, Benchmark Factory is an alternative that supports various workloads and database connections with robust reporting, scripting and automation. Other installed tools included Spotlight on Windows, Iperf 2.0.5 for generating network traffic and reporting results, as well as Vdbench with various scripts.

    SUT setup (4) included four Enterprise 10K and two 15K Performance drives with the enhanced performance caching feature enabled, along with two Enterprise Capacity 2TB HDDs, all attached to an internal 12Gbps SAS RAID controller.

    (Note 4) The System Under Test (SUT) had dual Intel Xeon E5-2697 v3 (2.60 GHz) processors providing 56 logical processors, 64GB of RAM (expandable to 768GB with 32GB DIMMs, or 3TB with 128GB DIMMs) and two network connections. Network connections from the STI to the SUT consisted of an Intel 1 GbE X540-AT2 as well as an Intel XL710 Q2 40 GbE CNA. The GbE LAN connection was used for management purposes while the 40 GbE was used for data traffic. The system disk was a 6Gbps SATA flash SSD. Seagate Enterprise class HDDs were installed in the 16 available 2.5” small form factor (SFF) drive slots. The eight left-most drive slots were connected to an Intel RMS3CC080 12 Gbps SAS RAID internal controller. The “Blue” drives in the middle were connected to both an NVMe PCIe card and the motherboard 6 Gbps SATA controller using an SFF-8637 connector. The four right-most drives were also connected to the motherboard 6 Gbps SATA controller.

    System Test Configuration
    Figure-1 STI and SUT hardware as well as software defined test configuration

    Five 6 Gbps SATA Enterprise Capacity 2TB HDDs were set up using Microsoft Windows as a spanned volume. The system disk was a 6Gbps SATA flash SSD, and an NVMe flash SSD was used for database temp space.

    What About NVM Flash SSD?

    NAND flash and other Non-Volatile Memory (NVM) and SSDs complement content solutions. A little bit of flash SSD in the right place can have a big impact. The focus for these tests is HDDs; however, some flash SSDs were used as system boot and database temp (e.g. tempdb) space. Refer to StorageIO Lab reviews and visit www.thessdplace.com.

    Seagate Enterprise HDD’s Used During Testing

    Various Seagate Enterprise HDD specifications used in the testing are shown below in table-1.

     

    Qty | Seagate HDD | Capacity | RPM | Interface | Size | Model | Servers Direct Price Each | Configuration
    4 | Enterprise 10K Performance | 1.8TB | 10K with cache | 12 Gbps SAS | 2.5” | ST1800MM0128 with enhanced cache | $875.00 USD | HW(5) RAID 10 and RAID 1
    2 | Enterprise Capacity 7.2K | 2TB | 7.2K | 12 Gbps SAS | 2.5” | ST2000NX0273 | $399.00 USD | HW RAID 1
    2 | Enterprise 15K Performance | 600GB | 15K with cache | 12 Gbps SAS | 2.5” | ST600MX0082 with enhanced cache | $595.00 USD | HW RAID 1
    5 | Enterprise Capacity 7.2K | 2TB | 7.2K | 6 Gbps SATA | 2.5” | ST2000NX0273 | $399.00 USD | SW(6) RAID Span Volume

    Table-1 Seagate Enterprise HDD specification and Servers Direct pricing
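    For a rough comparison, here is a quick Python sketch (my own arithmetic from the Table-1 quantities and Servers Direct prices) computing total drive cost and raw capacity. Note this is raw capacity only; the RAID levels shown reduce usable space.

```python
# Hedged sketch: total cost and raw capacity from Table-1. Quantities,
# capacities and prices are taken from the table above.
drives = [
    # (qty, model, capacity_tb, unit_price_usd)
    (4, "ST1800MM0128", 1.8, 875.00),
    (2, "ST2000NX0273 (SAS)", 2.0, 399.00),
    (2, "ST600MX0082", 0.6, 595.00),
    (5, "ST2000NX0273 (SATA)", 2.0, 399.00),
]

total_cost = sum(q * p for q, _, _, p in drives)
raw_tb = sum(q * c for q, _, c, _ in drives)
print(f"${total_cost:,.2f} for {raw_tb:.1f} TB raw "
      f"(${total_cost / raw_tb:,.2f}/TB)")
```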

    URLs for additional Servers Direct content platform information:
    https://serversdirect.com/solutions/content-solutions
    https://serversdirect.com/solutions/content-solutions/video-streaming
    https://www.serversdirect.com/File%20Library/Data%20Sheets/Intel-SDR-2P16D-001-ds2.pdf

    URLs for additional Seagate Enterprise HDD information:
    https://serversdirect.com/Components/Drives/id-HD1558/Seagate_ST2000NX0273_2TB_Hard_Drive

    https://serversdirect.com/Components/Drives/id-HD1559/Seagate_ST600MX0082_SSHD

    Seagate Performance Enhanced Cache Feature

    The Enterprise 10K and 15K Performance HDD’s tested had the enhanced cache feature enabled. This feature provides a “turbo” boost like acceleration for both reads and write I/O operations. HDD’s with enhanced cache feature leverage the fact that some NVM such as flash in the right place can have a big impact on performance (7).

    In addition to the performance benefit of combining a best-of or hybrid storage model (flash with HDDs, along with software-defined cache algorithms), these devices are “plug-and-play”. Being “plug-and-play” means no extra special adapters, controllers, device drivers, tiering or cache management software tools are required.

    (Note 5) Hardware (HW) RAID using Intel server on-board LSI based 12 Gbps SAS RAID card, RAID 1 with two (2) drives, RAID 10 with four (4) drives. RAID configured in write-through mode with default stripe / chunk size.

    (Note 6) Software (SW) RAID using Microsoft Windows Server 2012 R2 (span). Hardware RAID used write-through cache (e.g. no buffering) with read-ahead enabled and a default 256KB stripe/chunk size.

    (Note 7) Refer to Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy

    The Seagate Enterprise Performance 10K and 15K with enhanced cache feature are a good example of how there is more to performance in today’s HDDs than simply comparing RPM, drive form factor or interface.

    Where To Learn More

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    Careful and practical planning are key steps for testing various resources, as well as for aligning the applicable tools and configurations to meet your needs.

    Continue reading part four of this multi-part series here where the focus expands to database application workloads.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Part 4 – Which HDD for Content Applications – Database Workloads

    data base server storage I/O trends

    Updated 1/23/2018
    Which enterprise HDD to use with a content server platform for database workloads

    Insight for effective server storage I/O decision making
    Server StorageIO Lab Review

    Which enterprise HDD to use for content servers

    This is the fourth in a multi-part series (read part three here) based on a white paper hands-on lab report I did compliments of Servers Direct and Seagate that you can read in PDF form here. The focus is looking at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drives (HDDs). In this post the focus expands to the database application workloads that were run to test the various HDDs.

    Database Reads/Writes

    Transaction Processing Performance Council (TPC) TPC-C like workloads were run against the SUT from the STI. These workloads simulated transactional, content management, metadata and key-value processing. Microsoft SQL Server 2012 was configured and used, with databases (each 470GB, e.g. scale 6000) created and the workload generated by virtual users via Dell Benchmark Factory (running on the STI under Windows 2012 R2).

    A single SQL Server database instance (8) was used on the SUT, however unique databases were created for each HDD set being tested. Both the main database file (.mdf) and the log file (.ldf) were placed on the same drive set being tested; keep in mind the constraints mentioned above. As time was a constraint, database workloads were run concurrently (9) with each other, except for the Enterprise 10K RAID 1 and RAID 10: one workload was run with two 10K HDDs in a RAID 1 configuration, then another with a four-drive RAID 10. In a production environment, the .mdf and .ldf would ideally be placed on separate HDDs and SSDs.

    To improve cache buffering the SQL Server database instance memory could be increased from 16GB to a larger number that would yield higher TPS numbers. Keep in mind the objective was not to see how fast I could make the databases run, rather how the different drives handled the workload.

    (Note 8) The SQL Server Tempdb was placed on a separate NVMe flash SSD, also the database instance memory size was set to 16GB which was shared by all databases and virtual users accessing it.

    (Note 9) Each user step was run for 90 minutes with a 30 minute warm-up preamble to measure steady-state operation.
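    As note 8 describes, the instance memory was capped at 16GB; in SQL Server that cap is typically set via sp_configure, roughly as follows (a sketch, not the exact script used in the lab):

```sql
-- Cap the SQL Server instance (buffer pool etc.) at 16GB (16384 MB)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 16384;
RECONFIGURE;
```

    Raising this value is the simplest way to improve cache buffering (and TPS) as mentioned above, at the cost of masking differences between the underlying drives.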

    | Config | Users | TPC-C Like TPS | Single Drive Cost per TPS | Drive Cost per TPS | Single Drive Cost per GB Raw Cap. | Cost per GB Usable (Protected) Cap. | Drive Cost (Multiple Drives) | Protect Space Overhead | Cost per Usable GB per TPS | Resp. Time (Sec.) |
    |---|---|---|---|---|---|---|---|---|---|---|
    | ENT 15K R1 | 1 | 23.9 | $24.94 | $49.89 | $0.99 | $0.99 | $1,190 | 100% | $49.89 | 0.01 |
    | ENT 10K R1 | 1 | 23.4 | $37.38 | $74.77 | $0.49 | $0.49 | $1,750 | 100% | $74.77 | 0.01 |
    | ENT CAP R1 | 1 | 16.4 | $24.26 | $48.52 | $0.20 | $0.20 | $798 | 100% | $48.52 | 0.03 |
    | ENT 10K R10 | 1 | 23.2 | $37.70 | $150.78 | $0.49 | $0.97 | $3,500 | 100% | $150.78 | 0.07 |
    | ENT CAP SWR5 | 1 | 17.0 | $23.45 | $117.24 | $0.20 | $0.25 | $1,995 | 20% | $117.24 | 0.02 |
    | ENT 15K R1 | 20 | 362.3 | $1.64 | $3.28 | $0.99 | $0.99 | $1,190 | 100% | $3.28 | 0.02 |
    | ENT 10K R1 | 20 | 339.3 | $2.58 | $5.16 | $0.49 | $0.49 | $1,750 | 100% | $5.16 | 0.01 |
    | ENT CAP R1 | 20 | 213.4 | $1.87 | $3.74 | $0.20 | $0.20 | $798 | 100% | $3.74 | 0.06 |
    | ENT 10K R10 | 20 | 389.0 | $2.25 | $9.00 | $0.49 | $0.97 | $3,500 | 100% | $9.00 | 0.02 |
    | ENT CAP SWR5 | 20 | 216.8 | $1.84 | $9.20 | $0.20 | $0.25 | $1,995 | 20% | $9.20 | 0.06 |
    | ENT 15K R1 | 50 | 417.3 | $1.43 | $2.85 | $0.99 | $0.99 | $1,190 | 100% | $2.85 | 0.08 |
    | ENT 10K R1 | 50 | 385.8 | $2.27 | $4.54 | $0.49 | $0.49 | $1,750 | 100% | $4.54 | 0.09 |
    | ENT CAP R1 | 50 | 103.5 | $3.85 | $7.71 | $0.20 | $0.20 | $798 | 100% | $7.71 | 0.45 |
    | ENT 10K R10 | 50 | 778.3 | $1.12 | $4.50 | $0.49 | $0.97 | $3,500 | 100% | $4.50 | 0.03 |
    | ENT CAP SWR5 | 50 | 109.3 | $3.65 | $18.26 | $0.20 | $0.25 | $1,995 | 20% | $18.26 | 0.42 |
    | ENT 15K R1 | 100 | 190.7 | $3.12 | $6.24 | $0.99 | $0.99 | $1,190 | 100% | $6.24 | 0.49 |
    | ENT 10K R1 | 100 | 175.9 | $4.98 | $9.95 | $0.49 | $0.49 | $1,750 | 100% | $9.95 | 0.53 |
    | ENT CAP R1 | 100 | 59.1 | $6.76 | $13.51 | $0.20 | $0.20 | $798 | 100% | $13.51 | 1.66 |
    | ENT 10K R10 | 100 | 560.6 | $1.56 | $6.24 | $0.49 | $0.97 | $3,500 | 100% | $6.24 | 0.14 |
    | ENT CAP SWR5 | 100 | 62.2 | $6.42 | $32.10 | $0.20 | $0.25 | $1,995 | 20% | $32.10 | 1.57 |

    Table-2 TPC-C like workload results for various numbers of users across different drive configurations
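    The cost columns in Table-2 are simple arithmetic on drive price, drive count, measured TPS and capacity; a sketch of the derivation follows (the prices and capacities here are illustrative, and small differences vs. the published table come from rounding of the measured TPS):

```python
def cost_metrics(drive_price, num_drives, tps, raw_gb_per_drive, usable_gb):
    """Derive Table-2 style cost columns from price, drive count and measured TPS.

    drive_price:      single drive price in USD
    num_drives:       drives in the RAID set
    tps:              measured transactions per second
    raw_gb_per_drive: raw capacity of one drive in GB
    usable_gb:        post-protection (usable) capacity of the set in GB
    """
    set_cost = drive_price * num_drives  # "Drive Cost (Multiple Drives)" column
    return {
        "single_drive_cost_per_tps": round(drive_price / tps, 2),
        "drive_cost_per_tps": round(set_cost / tps, 2),
        "single_drive_cost_per_raw_gb": round(drive_price / raw_gb_per_drive, 2),
        "cost_per_usable_gb": round(set_cost / usable_gb, 2),
    }

# Example (illustrative): two 600GB 15K drives at $595 each in RAID 1,
# measured at 23.9 TPS with a single user.
m = cost_metrics(drive_price=595, num_drives=2, tps=23.9,
                 raw_gb_per_drive=600, usable_gb=600)
```

    The same arithmetic applied per row is what makes the cost-per-TPS columns drop so sharply as user counts (and TPS) increase.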

    Figure-2 shows TPC-C TPS (red dashed line) workload scaling over various numbers of users (1, 20, 50, and 100) with peak TPS per drive shown. Also shown is the used space capacity (in green), with total raw storage capacity in blue cross-hatch. Looking at the multiple metrics in context shows that the 600GB Enterprise 15K HDD with performance enhanced cache is a premium option as an alternative, or complement, to flash SSD solutions.

    database TPCC transactional workloads
    Figure-2 472GB Database TPS scaling along with cost per TPS and storage space used

    In figure-2, the 1.8TB Enterprise 10K HDD with performance enhanced cache, while not as fast as the 15K, provides a good balance of performance, space capacity and cost effectiveness. A good use for the 10K drives is where some amount of performance is needed, as well as a large amount of storage space for less frequently accessed content.

    A low-cost, lower-performance option would be the 2TB Enterprise Capacity HDDs, which have a good cost per capacity, however lack the performance of the 15K and 10K drives with enhanced performance cache. A four-drive RAID 10 along with a five-drive software volume (Microsoft Windows) are also shown. For an apples-to-apples comparison, look at costs vs. capacity, including the number of drives needed for a given level of performance.

    Figure-3 is a variation of figure-2 showing TPC-C TPS (blue bar) and response time (red dashed line) scaling across 1, 20, 50 and 100 users. Once again the Enterprise 15K with the enhanced performance cache feature enabled has good performance in an apples-to-apples RAID 1 comparison.

    Note that the best performance was with the four-drive RAID 10 using 10K HDDs; given the popularity of that configuration, it was the one tested with the 10K drives. Not surprisingly, the four 10K drives performed better than the RAID 1 15Ks. Also note that using five drives in a software spanned volume provides a large amount of storage capacity and good performance, however with a larger drive footprint.

    database TPCC transactional workloads scaling
    Figure-3 472GB Database TPS scaling along with response time (latency)

    From a cost per space capacity perspective, the Enterprise Capacity drives have a good cost per GB. For environments that do not need ultra-high performance, a hybrid solution would be to pair a small amount of flash SSD (10) (drives or PCIe cards), as well as the 10K and 15K performance enhanced drives, with the Enterprise Capacity HDDs (11), along with cache or tiering software.

    (Note 10) Refer to Seagate 1200 12 Gbps Enterprise SAS SSD StorageIO lab review

    (Note 11) Refer to Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy

    Where To Learn More

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    If your environment is using applications that rely on databases, then test resources such as servers, storage and devices using tools that represent your environment. This means moving up the software and technology stack from basic storage I/O benchmark or workload generator tools such as Iometer, among others, to either your own application, or tools that can replay or generate various workloads that represent your environment.

    Continue reading part five in this multi-part series here where the focus shifts to large and small file I/O processing workloads.

    Ok, nuff said, for now.

    Gs


    Which Enterprise HDD for Content Applications – General I/O Performance

    hdd general i/o performance server storage I/O trends

    Updated 1/23/2018

    Which enterprise HDD to use with a content server platform for general I/O performance

    Insight for effective server storage I/O decision making
    Server StorageIO Lab Review

    Which enterprise HDD to use for content servers

    This is the sixth in a multi-part series (read part five here) based on a white paper hands-on lab report I did compliments of Servers Direct and Seagate that you can read in PDF form here. The focus is looking at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drives (HDDs). In this post the focus is on general I/O performance, including 8KB and 128KB I/O sizes.

    General I/O Performance

    In addition to running database and file (large and small) processing workloads, Vdbench was also used to run basic small (8KB) and large (128KB) sized I/O operations. These consisted of random and sequential reads as well as writes, with the results shown below. In addition to Vdbench, other tools that could be used include Microsoft Diskspd, fio, iorate and Iometer, among many others.

    These workloads used Vdbench configured (13) to do direct I/O to a Windows file system mounted device, using as much of the available disk space as possible. All workloads used 16 threads and were run concurrently, similar to the database and file processing tests.

    (Note 13) Sample vdbench configuration for general I/O, note different settings were used for various tests
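    For reference, a minimal sketch of what such a Vdbench parameter file might look like for an 8KB random 75% read test (the device path, duration and sample interval here are illustrative assumptions, not the exact settings used in the lab):

```text
# Vdbench parameter file sketch (illustrative, not the exact file used)
sd=sd1,lun=\\.\E:,threads=16                        # storage definition: device under test, 16 threads
wd=wd1,sd=sd1,xfersize=8k,rdpct=75,seekpct=100      # workload: 8KB transfers, 75% read, random
rd=run1,wd=wd1,iorate=max,elapsed=600,interval=30   # run: max rate, 10 minutes, 30-second samples
```

    Changing xfersize, rdpct and seekpct is all that is needed to cover the 8KB/128KB and random/sequential mixes reported below.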

    Table-7 shows workload results for 8KB random I/Os with 75% and 25% read mixes, including IOPs, bandwidth and response time.

     

    | Config | Read Mix | I/O Rate (IOPs) | MB/sec | Resp. Time (ms) |
    |---|---|---|---|---|
    | ENT 15K RAID1 | 75% read | 597.11 | 4.7 | 25.9 |
    | ENT 15K RAID1 | 25% read | 559.26 | 4.4 | 27.6 |
    | ENT 10K RAID1 | 75% read | 514 | 4.0 | 30.2 |
    | ENT 10K RAID1 | 25% read | 475 | 3.7 | 32.7 |
    | ENT CAP RAID1 | 75% read | 285 | 2.2 | 55.5 |
    | ENT CAP RAID1 | 25% read | 293 | 2.3 | 53.7 |
    | ENT 10K R10 (4 drives) | 75% read | 979 | 7.7 | 16.3 |
    | ENT 10K R10 (4 drives) | 25% read | 984 | 7.7 | 16.3 |
    | ECAP SW RAID (5 drives) | 75% read | 491 | 3.8 | 32.6 |
    | ECAP SW RAID (5 drives) | 25% read | 644 | 5.0 | 24.8 |

    Table-7 8KB sized random I/O workload results

    Figure-6 shows small (8KB) random I/O (75% read and 25% read) across different HDD configurations. Performance including activity rates (e.g. IOPs), bandwidth and response time for mixed reads / writes are shown. Note how response time increases with the Enterprise Capacity configurations vs. other performance optimized drives.
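    A quick way to sanity-check response times against activity rates in these tables is Little's law (concurrency = throughput × latency); a small sketch, assuming the 16 outstanding threads noted earlier are kept busy:

```python
def littles_law_latency_ms(threads, iops):
    """Average per-I/O latency implied by Little's law.

    outstanding I/Os = IOPs x latency, so latency = outstanding / IOPs.
    """
    return threads / iops * 1000.0  # convert seconds to milliseconds

# ~16 outstanding I/Os at ~597 IOPs implies roughly 27 ms per I/O,
# in line with the response times reported for the 8KB random runs.
```

    The same check works in reverse: if a drive must deliver a target latency at a given queue depth, Little's law bounds the IOPs it needs to sustain.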

    general 8K random IO
    Figure-6 8KB random reads and write showing IOP activity, bandwidth and response time

    Table-8 below shows workload results for 8KB sized I/Os, 100% sequential, with 75% and 25% read mixes, including IOPs, MB/sec and response time.

    | Config | Read Mix | I/O Rate (IOPs) | MB/sec | Resp. Time (ms) |
    |---|---|---|---|---|
    | ENT 15K RAID1 | 75% read | 3,778 | 29.5 | 2.2 |
    | ENT 15K RAID1 | 25% read | 3,414 | 26.7 | 3.1 |
    | ENT 10K RAID1 | 75% read | 3,761 | 29.4 | 2.3 |
    | ENT 10K RAID1 | 25% read | 3,986 | 31.1 | 2.4 |
    | ENT CAP RAID1 | 75% read | 3,379 | 26.4 | 2.7 |
    | ENT CAP RAID1 | 25% read | 1,274 | 10.0 | 10.9 |
    | ENT 10K R10 (4 drives) | 75% read | 11,840 | 92.5 | 1.3 |
    | ENT 10K R10 (4 drives) | 25% read | 8,368 | 65.4 | 1.9 |
    | ECAP SW RAID (5 drives) | 75% read | 2,891 | 22.6 | 5.5 |
    | ECAP SW RAID (5 drives) | 25% read | 1,146 | 9.0 | 14.0 |

    Table-8 8KB sized sequential workload results

    Figure-7 shows small 8KB sequential mixed reads and writes (75% and 25% read mixes). While the Enterprise Capacity 2TB HDD has a large amount of space capacity, its performance in a RAID 1 vs. other similarly configured drives is slower.

    8KB Sequential
    Figure-7 8KB sequential 75% and 25% read mixes showing bandwidth activity

    Table-9 shows workload results for 100% sequential, 100% read and 100% write 128KB sized I/Os including IOPs, bandwidth and response time.

    | Config | Operation | I/O Rate (IOPs) | MB/sec | Resp. Time (ms) |
    |---|---|---|---|---|
    | ENT 15K RAID1 | Read | 1,798 | 224.7 | 8.9 |
    | ENT 15K RAID1 | Write | 1,771 | 221.3 | 9.0 |
    | ENT 10K RAID1 | Read | 1,716 | 214.5 | 9.3 |
    | ENT 10K RAID1 | Write | 1,688 | 210.9 | 9.5 |
    | ENT CAP RAID1 | Read | 921 | 115.2 | 17.4 |
    | ENT CAP RAID1 | Write | 912 | 114.0 | 17.5 |
    | ENT 10K R10 (4 drives) | Read | 3,552 | 444.0 | 4.5 |
    | ENT 10K R10 (4 drives) | Write | 3,486 | 435.8 | 4.6 |
    | ECAP SW RAID (5 drives) | Read | 780 | 97.4 | 19.3 |
    | ECAP SW RAID (5 drives) | Write | 721 | 90.1 | 20.2 |

    Table-9 128KB sized sequential workload results

    Figure-8 shows sequential or streaming operations with larger (128KB) request sizes (100% read and 100% write) that would be found with large content applications. Figure-8 highlights the relationship between lower response time and increased IOPs as well as bandwidth.

    128K Sequential
    Figure-8 128KB sequential reads and write showing IOP activity, bandwidth and response time

    Where To Learn More

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    Some content applications do small random I/Os for databases, key-value stores or repositories, as well as metadata processing, while others do large sequential I/O. 128KB may be a large I/O size for your environment; on the other hand, with an increasing number of applications, file systems, software defined storage management tools and others, 1MB to 10MB or even larger I/O sizes are becoming common. The key is selecting I/O sizes, read/write and random/sequential mixes, along with I/O or queue depths, that align with your environment.

    Continue reading part seven, the final post in this multi-part series, here, where the focus is on how HDDs continue to evolve, including performance beyond traditional RPM-based expectations, along with a wrap-up.

    Ok, nuff said, for now.

    Gs


    HDDs evolve for Content Application servers

    hdds evolve server storage I/O trends

    Updated 1/23/2018

    Enterprise HDDs evolve for content server platform

    Insight for effective server storage I/O decision making
    Server StorageIO Lab Review

    Which enterprise HDD to use for content servers

    This is the seventh and final post in this multi-part series (read part six here) based on a white paper hands-on lab report I did compliments of Servers Direct and Seagate that you can read in PDF form here. The focus is looking at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drives (HDDs). The focus of this post is comparing how HDDs continue to evolve over various generations, boosting performance as well as capacity and reliability. It also looks at how there is more to HDD performance than the traditional focus on Revolutions Per Minute (RPM) as a speed indicator.

    Comparing Different Enterprise 10K And 15K HDD Generations

    There is more to HDD performance than the RPM speed of the device. RPM plays an important role, however there are other things that impact HDD performance. A common myth is that HDDs have not improved in performance over the past several years with each successive generation. Table-10 shows a sampling of various generations of enterprise 10K and 15K HDDs (14), including different form factors, and how their performance continues to improve.

    different 10K and 15K HDDs
    Figure-9 10K and 15K HDD performance improvements

    Figure-9 shows how performance continues to improve with 10K and 15K HDD’s with each new generation including those with enhanced cache features. The result is that with improvements in cache software within the drives, along with enhanced persistent non-volatile memory (NVM) and incremental mechanical drive improvements, both read and write performance continues to be enhanced.

    Figure-9 puts into perspective the continued performance enhancements of HDDs, comparing various enterprise 10K and 15K devices. The workload is the same TPC-C test used earlier, run on a comparable setup (14) with no RAID. Figure-9 shows 100 simulated users accessing a database on each of the different drives, all running concurrently. The older 15K 3.5” Cheetah and 2.5” Savio drives had a capacity of 146GB and used a database scale factor of 1500 (about 134GB); all other drives used a scale factor of 3000 (about 276GB). Figure-9 also highlights the improvements in both TPS performance and lower response time with newer HDDs, including those with the performance enhanced cache feature.

    The TPC-C activity used Benchmark Factory with a similar setup and configuration to the earlier tests, including a multi-socket, multi-core Windows 2012 R2 server running Microsoft SQL Server 2012 with a separate database for each drive type.

    | Drive | TPS @ 1 user | TPS @ 20 | TPS @ 50 | TPS @ 100 | Resp. (Sec.) @ 1 | @ 20 | @ 50 | @ 100 |
    |---|---|---|---|---|---|---|---|---|
    | ENT 10K V3 2.5” | 14.8 | 50.9 | 30.3 | 39.9 | 0.0 | 0.4 | 1.6 | 1.7 |
    | ENT (Cheetah) 15K 3.5” | 14.6 | 51.3 | 27.1 | 39.3 | 0.0 | 0.3 | 1.8 | 2.1 |
    | ENT 10K 2.5” (with cache) | 19.2 | 146.3 | 72.6 | 71.0 | 0.0 | 0.1 | 0.7 | 0.0 |
    | ENT (Savio) 15K 2.5” | 15.8 | 59.1 | 40.2 | 53.6 | 0.0 | 0.3 | 1.2 | 1.2 |
    | ENT 15K V4 2.5” | 19.7 | 119.8 | 75.3 | 69.2 | 0.0 | 0.1 | 0.6 | 1.0 |
    | ENT 15K (enhanced cache) 2.5” | 20.1 | 184.1 | 113.7 | 122.1 | 0.0 | 0.1 | 0.4 | 0.2 |

    Table-10 Continued Enterprise 10K and 15K HDD performance improvements (TPS = TPC-C like transactions per second)

    (Note 14) 10K and 15K generational comparisons were run on a separate, comparable server to the one used for other test workloads. Workload configuration settings were the same as other database workloads, including using Microsoft SQL Server 2012 on a Windows 2012 R2 system with Benchmark Factory driving the workload. Database memory size was reduced, however, to only 8GB vs. the 16GB used in other tests.

    Where To Learn More

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    A little bit of flash in the right place with applicable algorithms goes a long way; an example is the Seagate Enterprise HDDs with the enhanced cache feature. Likewise, HDDs are very much alive, complementing SSDs and vice versa. For high-performance content application workloads, flash SSDs, including NVMe, 12 Gbps SAS and 6 Gbps SATA devices, are cost-effective solutions. HDDs continue to be cost-effective data storage devices for capacity, as well as for environments that do not need the performance of flash SSD.

    For some environments using a combination of flash and HDD’s complementing each other along with cache software can be a cost-effective solution. The previous workload examples provide insight for making cost-effective informed storage decisions.

    Evaluate today’s HDDs on their effective performance running workloads as similar as possible to your own, or, better yet, actually try them out with your applications. Today there is more to HDD performance than just RPM speed, particularly with the Seagate Enterprise Performance 10K and 15K HDDs with the enhanced caching feature.

    The Enterprise Performance 10K with enhanced cache feature, however, provides a good balance of capacity and performance while being cost-effective. If you are using older 3.5” 15K or even previous generation 2.5” 15K RPM and “non-performance enhanced” HDDs, take a look at how the newer generation HDDs perform, looking beyond the RPM of the device.

    Fast content applications need fast content and flexible content solution platforms such as those from Servers Direct, along with HDDs from Seagate. Key to a successful content application deployment is having the flexibility to hardware-define and software-define the platform to meet your needs. Just as there are many different types of content applications and diverse environments, content solution platforms need to be flexible, scalable and robust, not to mention cost-effective.

    Ok, nuff said, for now.

    Gs


    Server StorageIO March 2016 Update Newsletter

    Volume 16, Issue III

    Hello and welcome to the March 2016 Server StorageIO update newsletter.

    Here in the northern hemisphere spring has officially arrived as of the March 20th equinox, along with warmer weather, more daylight, and plenty of things to do. In addition to the official arrival of spring here (fall in the southern hemisphere), it also means in the U.S. that March Madness college basketball tournament playoff brackets and office (betting) pools are in full swing.

    In This Issue

  • Feature Topic and Themes
  • Industry Trends News
  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Videos and Podcast’s
  • Events and Webinars
  • Recommended Reading List
  • Industry Activity Trends
  • Server StorageIO Lab reports
  • New and Old Vendor Update
  • Resources and Links
    A couple of other things associated with spring: moving clocks forward, which occurred recently here in the U.S., and checking your smoke and dangerous gas detectors or other alarms. This means replacing batteries and cleaning the detectors.

    Besides smoke and gas detectors, spring is also a good time to do preventive maintenance on your battery backup uninterruptible power supplies (UPS), as well as generators and other standby power devices. For my part, I had a service tech out to do a tune-up on my Kohler generator, as well as replaced some batteries in APC UPS devices.

    Besides smoke and CO2 detectors, generators and UPS standby power systems, and March Madness basketball and other sports tournaments, something else occurs on March 31st (besides being the day before April 1st and April Fools’ Day). March 31st is World Backup (and Restore) Day, meant to raise awareness about making sure your data, applications, settings, configurations, keys, software and systems are backed up and can be recovered.

    Hopefully none of you are in the situation where data, applications, systems, computers, laptops, tablets, smart phones or other devices only get backed up or protected once a year, however maybe you know somebody who does.

    March also marks the 10th anniversary of Amazon Web Services (AWS) cloud services (more here), happy birthday AWS.

    March wraps up on the 31st with World Backup Day, which is intended to draw attention to the importance of data protection and your ability to recover applications and data. While backups are important, so too is testing to make sure you can actually use and recover what was protected. Keep in mind that while some claim backup is dead, data protection is alive, and as long as vendors and others keep referring to data protection as backup, backup will stay alive.

    Join me and folks from HP Enterprise (HPE) on March 31st at 1PM ET for a free webinar compliments of HPE with a theme of Backup with Brains, emphasis on awareness and analytics to enable smart data protection. Click here to learn more and register.

    Enjoy this edition of the Server StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcast’s along with in the news commentary appearing soon.

    Cheers GS

    Feature Topic and Theme

    This month’s feature theme and topics include backup (and restore) as part of data protection, plus more on clouds (public, private and hybrid), including how some providers such as DropBox are moving out of public clouds such as AWS and building their own data centers.

    Building off the February newsletter, there is more on Google, including their use of Non-Volatile Memory (NVM), aka NAND flash Solid State Devices (SSD), and some of their research. In addition to Google’s use of SSD, check out the posts and industry activity on NVMe as well as other news and updates, including new converged platforms from Cisco and HPE among others.

    StorageIOblog Posts

    Recent and popular Server StorageIOblog posts include:

    View other recent as well as past blog posts here

    Server Storage I/O Industry Activity Trends (Cloud, Virtual, Physical)

    StorageIO news (image licensed for use from Shutterstock by StorageIO)

    Some new Products Technology Services Announcements (PTSA) include:

  • Via Redmondmag: AWS Cloud Storage Service Turns 10 years old in March, happy birthday AWS (read more here at the AWS site).
  • Cisco announced new flexible HyperFlex converged compute server platforms for hybrid cloud and other deployments. Also announced were NetApp All Flash Array (AFA) FlexPod converged solutions powered by Cisco UCS servers and networking technology. In other activity, Cisco unveiled a Digital Network Architecture to enable customer digital data transformation. Cisco also announced its intent to acquire CliQr for management of hybrid clouds.

  • Data Direct Networks (DDN) expands NAS offerings with new GS14K platform via PRnewswire.

  • Via Computerworld: DropBox quits Amazon cloud, takes back 500 PB of data. DropBox has created their own cloud to host videos, images, files, folders, objects, blobs and other storage items that used to be stored within AWS S3. In this DropBox post, you can read about why they decided to create their own cloud, as well as how they used a hybrid approach with metadata kept local and actual data stored in AWS S3. Now both the data and the metadata are in DropBox data centers. However, DropBox is still keeping some data in AWS, particularly in certain geographies.

  • Web site hosting company GoDaddy has extended their capabilities, similar to other service providers, by adding an OpenStack powered cloud service. This is a trend in which others, such as Bluehost (where my sites are located on a DPS), have evolved from simple shared hosting to dedicated private servers (DPS) and virtual private servers (VPS), along with other cloud related services. Think of a VPS as a virtual machine or cloud instance. Likewise, some cloud service providers such as AWS are moving into dedicated private servers.

  • Following up from the February 2016 Server StorageIO Update Newsletter that included Google’s message to disk vendors: Make hard drives like this, even if they lose more data and Google Disk for Data Centers White Paper (PDF Here), read about Google’s experiences with SSDs.

    This PDF white paper, presented at the recent Usenix 2016 conference, outlines Google’s experiences with different types (SLC, MLC, eMLC) and generations of NAND flash SSD media across various vendors. Some of the takeaways include that context matters when looking at SSD metrics on endurance, durability and errors. While some in the industry focus on Unrecoverable Bit Error Rates (UBER), there also needs to be awareness of Raw Bit Error Rate (RBER) among other metrics and usage. Read more about Google’s experiences here.
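    The RBER vs. UBER distinction can be sketched in a few lines of Python. Note this is an illustration using the commonly cited definitions of the two metrics, with made-up counter values, not figures from the Google paper:

    ```python
    # Sketch: Raw Bit Error Rate (RBER) vs. Uncorrectable Bit Error Rate
    # (UBER). RBER counts all bit errors seen before ECC correction; UBER
    # counts only the errors the drive's ECC could not correct, per bit read.

    def rber(raw_bit_errors: int, bits_read: int) -> float:
        """Raw bit errors (pre-ECC) divided by total bits read."""
        return raw_bit_errors / bits_read

    def uber(uncorrectable_bit_errors: int, bits_read: int) -> float:
        """Bit errors remaining after ECC, divided by total bits read."""
        return uncorrectable_bit_errors / bits_read

    # Hypothetical counters for a drive that has read 1 PB (8e15 bits)
    bits_read = 8 * 10**15
    raw_errors = 8 * 10**6     # many raw errors, silently corrected by ECC
    uncorrectable = 2          # very few survive ECC

    print(f"RBER: {rber(raw_errors, bits_read):.1e}")    # ~1.0e-09
    print(f"UBER: {uber(uncorrectable, bits_read):.1e}")  # ~2.5e-16
    ```

    The point of the sketch is the gap between the two numbers: a drive can have a high (and rising) RBER while still reporting an excellent UBER, which is why looking at UBER alone can hide media wear.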


  • Hewlett Packard Enterprise (HPE) announced (Via Marketwired) Hyper-Converged systems including the HC 380, based on ProLiant DL380 technology, providing all in one (AiO) converged compute, storage and virtualization software with simplified management. The HC 380 is targeted at the mid-market aka small medium business (SMB), remote office branch office (ROBO) and workgroups. HPE also announced all flash array (AFA) enhancements for 3PAR storage (Via Businesswire).

  • Microsoft has announced that it will be releasing a version of its SQL Server database on Linux. What this means is that as well as being able to use SQL Server and associated tools on Windows and Azure platforms, you will also, in the not so distant future, be able to deploy on Linux. Making SQL Server available on Linux opens up some interesting scenarios and solution alternatives vs. Oracle along with MySQL and associated MySQL derivatives, as well as NoSQL offerings (Read more about NoSQL Databases here). Read more about Microsoft’s SQL Server for Linux here.

    In addition to SQL Server for Linux, Microsoft has also announced enhancements for easing Docker container migrations to clouds. In other Microsoft activity, they announced enhancements to StorSimple and Azure. Keep an eye out for Windows Server 2016 Tech Preview 5 (e.g. TP5), which will be the next release of the upcoming new version of the popular operating system.


  • MSDI, Rockland IT Solutions and Source Support Services Merge to Form Congruity with CEO Todd Gresham, along with Mike Stolz and Mark Shirman (formerly of Glasshouse) among others you may know.

  • Via Businesswire: PrimaryIO announces server-based flash acceleration for VMware systems, while Riverbed extends Remote Office Branch Office (ROBO) cloud connectivity Via Businesswire.

  • Via Computerworld: Samsung ships 12 Gbps SAS 15TB 2.5" 3D NAND Flash SSD (Hey Samsung, send me a device or two and I will give them a test drive in the Server StorageIO lab ;). Not to be outdone, Via Forbes: Seagate announces a fast SSD card, as well as, for the High Performance Compute (HPC) and Super Compute (SC) markets, Via HPCwire: Seagate Sets Sights on Broader HPC Market with their scale-out clustered Lustre based systems.

  • Servers Direct is now offering the HGST 4U x 60 drive enclosures while Via PRnewswire: SMIC announces RRAM partnership.

  • ATTO Technology has enhanced their RAID Arrays Behind FibreBridge 7500, while Oracle announced mainframe virtual tape library (VTL) cloud support Via Searchdatabackup. In other updates for this month, VMware has released and made generally available (GA) VSAN 6.2 and Via Businesswire: Wave and Centeris Launch Transpacific Broadband Data and Fiber Hub.
    The above is a sampling of some of the various industry news, announcements and updates for this March. Watch for more news and updates in April coming out of NAB and OpenStack Summit among other events.

    View other recent news and industry trends here.

    StorageIO Commentary in the news

    View more Server, Storage and I/O hardware as well as software trends comments here

    Vendors you may not have heard of

    Various vendors (and service providers) you may not know of, or may not have heard about recently.

    • Continum – R1Soft Server Backup Manager
    • HyperIO – HiMon and HyperIO server storage I/O monitoring software tools
    • Runcast – VMware automation and management software tools
    • Opvizor – VMware health management software tools
    • Asigra – Cloud, Managed Service and distributed backup/data protection tools
    • Datera – Software defined storage management startup
    • E8 Storage – Software Defined Stealth Storage Startup
    • Venyu – Cloud and data center data protection tools
    • StorPool – Distributed software defined storage management tools
    • ExaBlox – Scale out storage solutions

    Check out more vendors you may know, have heard of, or that are perhaps new on the Server StorageIO Industry Links page here (over 1,000 entries and growing).

    StorageIO Tips and Articles

    Recent Server StorageIO articles appearing in different venues include:

    • InfoStor:  Data Protection Gaps, Some Good, Some Not So Good
    • Virtual Blocks (VMware Blogs):  Part III EVO:RAIL – When And Where To Use It?
    • InfoStor:  Object Storage Is In Your Future

    Check out these resources and links on technology, techniques, trends as well as tools. View more tips and articles here

    StorageIO Videos and Podcasts

    Check out this video (Via YouTube) of a Google Data Center tour.

    In the IoT and IoD era of little and big data, how about this video I did with my Phantom DJI drone and an HD GoPro (e.g. 1K vs. 2.7K or 4K in newer cameras). This generates about a GByte of raw data per 10 minutes of flight, which then means another GB copied to a staging area, then to protected copies, then production versions and so forth. Thus a 2 minute clip in 1080p resulted in plenty of storage including produced and uploaded versions along with backup copies in archives spread across YouTube, Dropbox and elsewhere.
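    The fan-out of copies described above can be put into rough numbers with a quick sketch. The per-minute rate is based on the approximately 1 GB per 10 minutes figure above; the list of copy stages is illustrative:

    ```python
    # Sketch: how a single raw video capture fans out into many stored copies.
    # The capture rate is based on the rough figure of 1 GB of raw footage
    # per 10 minutes of flight; the copy stages are illustrative.

    raw_gb_per_minute = 1.0 / 10   # ~1 GB per 10 minutes of capture
    flight_minutes = 10

    raw_gb = raw_gb_per_minute * flight_minutes

    # Each stage adds another copy of roughly the same size
    copies = ["raw", "staging", "protected", "produced", "uploaded"]
    total_gb = raw_gb * len(copies)

    print(f"Raw capture: {raw_gb:.1f} GB")
    print(f"Total footprint across {len(copies)} copies: {total_gb:.1f} GB")
    ```

    Even before counting archives spread across YouTube, Dropbox and elsewhere, a single short flight multiplies into several times its raw size once staging, protection and produced versions are included.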

    StorageIO podcasts are also available at StorageIO.tv

    StorageIO Webinars and Industry Events

    EMCworld (Las Vegas) May 2-4, 2016

    Interop (Las Vegas) May 4-6 2016

    TBA – April 27, 2016 webinar

    NAB (Las Vegas) April 19-20, 2016

    Backup with Brains – March 31, 2016 free webinar (1PM ET)

    See more webinars and other activities on the Server StorageIO Events page here.

    From StorageIO Labs

    Research, Reviews and Reports

    NVMe is in your future, resources to start preparing today for tomorrow

    NVM and NVMe corner (Via and Compliments of Micron.com)

    View more NVMe related items at microsite thenvmeplace.com.

    Read more in this Server StorageIO industry Trends Perspective white paper and lab review.

    Server StorageIO Recommended Reading List

    The following is various recommended reading, including books, blogs and videos. If you have not done so recently, also check out the Intel Recommended Reading List (here), where you will find a couple of mine as well as books from others.

    For this month’s recommended reading, it’s a blog site. If you have not visited Eric Siebert’s (@ericsiebert) site vSphere-land and its companion resource pages, including top blogs, do so now.

    Granted there is a heavy VMware server virtualization focus, however there is a good balance of other data infrastructure topics spanning servers, storage, I/O networking, data protection and more.

    Server StorageIO Industry Resources and Links

    Check out these useful links and pages:

    storageio.com/links – Various industry links (over 1,000 with more to be added soon)
    objectstoragecenter.com – Cloud and object storage topics, tips and news items
    storageioblog.com/data-protection-diaries-main/ – Various data protection items and topics
    thenvmeplace.com – Focus on NVMe trends and technologies
    thessdplace.com – NVM and Solid State Disk topics, tips and techniques
    storageio.com/performance – Various server, storage and I/O performance and benchmarking

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Part V – NVMe overview primer (Where to learn more, what this all means)

    This is the fifth in a five-part mini-series providing an NVMe primer overview.

    View Part I, Part II, Part III, Part IV, Part V as well as companion posts and more NVMe primer material at www.thenvmeplace.com.

    There are many different facets of NVMe, including the protocol itself, which can be deployed on PCIe (AiC, U.2/8639 drives, M.2) for local direct attached, dedicated or shared use on the front-end or back-end of storage systems. NVMe direct attach is also found in servers and laptops using M.2 NGFF mini cards (e.g. “gum sticks”). In addition to direct attached, dedicated and shared, NVMe is also deployed on fabrics, including over Fibre Channel (FC-NVMe) as well as NVMe over Fabrics (NVMeoF) leveraging RDMA based networks (e.g. iWARP, RoCE among others).

    The storage I/O capabilities of flash can now be fed across PCIe faster to enable modern multi-core processors to complete more useful work in less time, resulting in greater application productivity. NVMe has been designed from the ground up with more and deeper queues, supporting a larger number of commands in those queues. This in turn enables the SSD to better optimize command execution for much higher concurrent IOPS. NVMe will coexist along with SAS, SATA and other server storage I/O technologies for some time to come. But NVMe will be at the top-tier of storage as it takes full advantage of the inherent speed and low latency of flash while complementing the potential of multi-core processors that can support the latest applications.

    With NVMe, the capabilities of underlying NVM and storage memories are further realized. Devices used include a PCIe x4 NVMe AiC SSD, a 12 Gbps SAS SSD and a 6 Gbps SATA SSD. These and other improvements with NVMe enable concurrency while reducing latency to remove server storage I/O traffic congestion. The result is that applications demanding more concurrent I/O activity along with lower latency will gravitate towards NVMe for accessing fast storage.
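    For rough context on why NVMe over PCIe has headroom that SAS and SATA lack, here is a back-of-envelope comparison of the three interfaces mentioned above. The line rates and encoding schemes are from the public interface specs; treat the results as theoretical ceilings, not measured throughput:

    ```python
    # Back-of-envelope theoretical bandwidth for the interfaces above.
    # Real-world throughput will be lower than these encoding-adjusted ceilings.

    def effective_gbps(lanes: int, gbps_per_lane: float, encoding: float) -> float:
        """Usable Gbit/s after encoding overhead, across all lanes."""
        return lanes * gbps_per_lane * encoding

    # PCIe Gen3: 8 GT/s per lane, 128b/130b encoding
    pcie_x4 = effective_gbps(4, 8.0, 128 / 130)   # ~31.5 Gbps -> ~3.9 GB/s
    # SAS 3.0: 12 Gbps per lane, 8b/10b encoding
    sas_12g = effective_gbps(1, 12.0, 8 / 10)     # ~9.6 Gbps -> ~1.2 GB/s
    # SATA 3.x: 6 Gbps, 8b/10b encoding
    sata_6g = effective_gbps(1, 6.0, 8 / 10)      # ~4.8 Gbps -> ~0.6 GB/s

    for name, gbps in [("PCIe x4 NVMe", pcie_x4),
                       ("12G SAS", sas_12g),
                       ("6G SATA", sata_6g)]:
        print(f"{name}: ~{gbps:.1f} Gbps (~{gbps / 8:.1f} GB/s)")
    ```

    A PCIe x4 attachment offers roughly three times the raw pipe of a single 12G SAS lane and over six times that of SATA, which is part of why NVMe sits at the top tier described above.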

    Like the robust PCIe physical server storage I/O interface it leverages, NVMe provides both flexibility and compatibility. It removes complexity, overhead and latency while allowing far more concurrent I/O work to be accomplished. Those on the cutting edge will embrace NVMe rapidly. Others may prefer a phased approach.

    Some environments will initially focus on NVMe for local server storage I/O performance and capacity available today. Other environments will phase in emerging external NVMe flash-based shared storage systems over time.

    Planning is an essential ingredient for any enterprise. Because NVMe spans servers, storage, I/O hardware and software, those intending to adopt NVMe need to take into account all ramifications. Decisions made today will have a big impact on future data and information infrastructures.

    Key questions should be, how much speed do your applications need now, and how do growth plans affect those requirements? How and where can you maximize your financial return on investment (ROI) when deploying NVMe and how will that success be measured?

    Several vendors are working on, or have already introduced NVMe related technologies or initiatives. Keep an eye on among others including AWS, Broadcom (Avago, Brocade), Cisco (Servers), Dell EMC, Excelero, HPE, Intel (Servers, Drives and Cards), Lenovo, Micron, Microsoft (Azure, Drivers, Operating Systems, Storage Spaces), Mellanox, NetApp, OCZ, Oracle, PMC, Samsung, Seagate, Supermicro, VMware, Western Digital (acquisition of SANdisk and HGST) among others.

    Where To Learn More

    View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What this all means

    NVMe is in your future, if not already your present. So if NVMe is the answer, what are the questions?

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.