If NVMe is the answer, what are the questions?

If NVMe is the answer, then what are the questions that should be asked?

Some common questions that NVMe is the answer to include: What is the difference between NVM and NVMe? Is NVMe only for servers? Does NVMe require fabrics? What benefit does NVMe provide beyond more IOPS?

Let's take a look at some of these common NVMe conversations and other questions.

Main Features and Benefits of NVMe

Some of the main features and benefits of NVMe include:

    • Lower latency due to improved drivers and increased queues (and queue sizes)
    • Lower CPU usage to handle a larger number of I/Os (more CPU available for useful work)
    • Higher I/O activity rates (IOPS) that boost productivity and unlock the value of fast flash and NVM
    • Bandwidth improvements leveraging fast PCIe interfaces and available lanes
    • Dual-pathing of devices, similar to what is available with dual-path SAS devices
    • Unlock the value of more cores per processor socket and software threads (productivity)
    • Various packaging options, deployment scenarios and configuration options
    • Appears as a standard storage device on most operating systems
    • Plug-and-play with in-box drivers on many popular operating systems and hypervisors (see the example after this list)
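
As a quick illustration of those last two points, here is a minimal PowerShell sketch; it is an assumption-laden example that presumes a newer Windows server with the in-box Storage module, and the BusType value reported for NVMe devices can vary by Windows release and driver.

# List disks surfaced as standard storage devices with an NVMe bus type
Get-PhysicalDisk | Where-Object { $_.BusType -eq "NVMe" } |
    Select-Object FriendlyName, BusType, MediaType, Size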

NVM and Memory Media Matters

What's the difference between NVM and NVMe? Non-Volatile Memory (NVM), as its name implies, is a persistent electronic memory medium where data is stored. Today you commonly know NVM as the NAND flash used in Solid State Devices (SSD), along with NVRAM and other emerging storage class memories (SCM).

Emerging SCM such as 3D XPoint among other mediums (or media if you prefer) have the promise of boosting both read and write performance beyond traditional NAND flash, closer to DRAM, while also having durability closer to DRAM. For now let's set the media and mediums aside and get back to how they are (or will be) accessed as well as used.

Server and Storage I/O Media access matters

NVM Express (e.g. NVMe) is a standard industry protocol for accessing NVM media (SSD and flash devices, storage systems, appliances). If NVMe is the answer, then depending on your point of view, NVMe can be (or is) a replacement (today or in the future) for AHCI/SATA and Serial Attached SCSI (SAS). What this means is that NVMe can coexist with or replace other block SCSI protocol implementations (e.g. Fibre Channel Protocol aka FCP, iSCSI, SRP) as well as NBD (among others).

Similar to the SCSI command set that is implemented on different networks (e.g. iSCSI (IP), FCP (Fibre Channel), SRP (InfiniBand), SAS), NVMe as a protocol is now implemented using PCIe with form factors of add-in cards (AiC), M.2 (e.g. gum sticks aka next-gen form factor or NGFF) as well as U.2 aka 8639 drive form factors. There are also the emerging NVMe over Fabrics variants, including FC-NVMe (e.g. the NVMe protocol over Fibre Channel), which is an alternative to SCSI_FCP (e.g. SCSI on Fibre Channel). An example of a PCIe AiC that I have is the Intel 750 400GB NVMe (among others). You should be able to find the Intel among other NVMe devices from your preferred vendor as well as different venues including Amazon.com.

Left PCIe AiC x4 NVMe SSD, lower center M.2 NGFF, right SAS and SATA SSD

The following image shows an NVMe U.2 (e.g. 8639) drive form factor device that from a distance looks like a SAS device and connector. However, looking closer, there are some extra pins or connectors that present a PCIe Gen 3 x4 (4 PCIe lanes) connection from the server or enclosure backplane to the device. These U.2 devices plug into 8639 slots (right) that look like a SAS slot that can also accommodate SATA. Remember, SATA can plug into SAS, however not the other way around.

Left NVMe U.2 drive showing PCIe x4 connectors, right, NVMe U.2 8639 connector

What NVMe U.2 means is that the 8639 slots can be used for 12Gbps SAS, 6Gbps SATA or x4 PCIe-based NVMe. Those devices in turn attach to their respective controllers (or adapters) and device driver software protocol stacks. Several servers have U.2 (8639) drive slots in either 2.5” or 1.8” form factors; sometimes these are also known as “blue” drives (or slots). The color coding simply helps to keep track of which slots can be used for different things.

Navigating your various NVMe options

If NVMe is the answer, then some device and component options are as follows.

NVMe device components and options include:

    • Enclosures and connector port slots
    • Adapters and controllers
    • U.2, PCIe AIC and M.2 devices
    • Shared storage system or appliances
    • PCIe and NVMe switches

If NVMe is the answer, what to use when, where and why?

Why use a U.2 or 8639 slot when you could use a PCIe AiC? Simple, your server or storage system may be PCIe slot constrained, yet have more available U.2 slots. There are U.2 drives from various vendors including Intel and Micron, as well as servers from Dell, Intel and Lenovo among many others.

Why and when would you use an NVMe M.2 device? As a local read/write cache, or perhaps a boot and system device on servers or appliances that have M.2 slots. Many servers and smaller workstations including Intel NUC support M.2. Likewise, there are M.2 devices from many different vendors including Micron, Samsung among others.

Where and why would you use NVMe PCIe AiC? Whenever you can and if you have enough PCIe slots of the proper form factor, mechanical and electrical (e.g. x1, x4, x8, x16) to support a particular card.

Can you mix and match different types of NVMe devices on the same server or appliance? As long as the physical server and its software (BIOS/UEFI, operating system, hypervisors, drivers) support it, yes. Most server and appliance vendors support PCIe NVMe AiCs; however, pay attention to whether they are x4 or x8, both mechanically and electrically. Also, verify operating system and hypervisor device driver support. PCIe NVMe AiCs are available from Dell, Intel, Micron and many other vendors.

Networking with your Server and NVMe Storage

Keep in mind that context is important when discussing NVMe as there are devices for attaching as the back-end to servers, storage systems or appliances, as well as for front-end attachment (e.g. for attaching storage systems to servers). NVMe devices can also be internal to a server or storage system and appliance, or, accessible over a network. Think of NVMe as an upper-level command set protocol like SCSI that gets implemented on different networks (e.g. iSCSI, FCP, SRP).

How can NVMe use PCIe as a transport to reach devices that are outside of a server? Different vendors have PCIe adapter cards that support longer distances (a few meters) to attach to devices. For example, Dell EMC DSSD has special dual-port (two x4 ports) PCIe x8 cards for attachment to the DSSD shared SSD devices.

Note that there are also PCIe switches, similar to SAS and InfiniBand among other switches. However, just because these are switches does not mean they are regular off-the-shelf network switches that your networking folks will know what to do with (or want to manage).

The following example shows a shared storage system or appliance being accessed by servers using traditional block, NAS file or object protocols. In this example, the storage system or appliance has implemented NVMe devices (PCIe AiC, M.2, U.2) as part of their back-end storage. The back-end storage might be all NVMe, or a mix of NVMe, SAS or SATA SSD and perhaps some high-capacity HDD.

Servers accessing shared storage with NVMe back-end devices

NVMe PCIe attached (via front-end) storage with various back-end devices

In addition to shared PCIe-attached storage such as Dell EMC DSSD similar to what is shown above, there are also other NVMe options. For example, there are industry initiatives to support the NVMe protocol for shared storage over fabric networks. There are different fabric networks; they range from RDMA over Converged Ethernet (RoCE) based fabrics to NVMe over Fibre Channel (e.g. FC-NVMe) among others.

An option that on the surface may not seem like a natural fit, or may not leverage NVMe to its fullest, is simply adding NVMe devices as back-end media to existing arrays and appliances. For example, adding NVMe devices as the back-end to iSCSI, SAS, FC, FCoE or other block-based, NAS file or object systems.

NVMe over a fabric network (via front-end) with various back-end devices

A common argument against using legacy storage access in front of shared NVMe is along the lines of: why would you want to put a slow network or controller in front of a fast NVM device? You might not want to do that, or your vendor may tell you many reasons why you don't want to do it, particularly if they do not support it. On the other hand, just like other fast NVM SSD storage on shared systems, it may not be all about 100% full performance. Rather, for some environments, it might be about maximizing connectivity over many interfaces to faster NVM devices for several servers.

NVMe and server storage I/O performance

Is NVMe all about boosting the number of IOPS? NVMe can increase the number of IOPS, as well as support more bandwidth. However, it also reduces response time latency, as would be expected with an SSD or NVM type of solution. The following image shows, not surprisingly, an example of an NVMe PCIe AiC x4 SSD outperforming (more IOPS, lower response time) a 6Gb SATA SSD (an apples-to-oranges comparison). Also keep in mind that the best benchmark or workload tool is your own application, and your performance mileage will vary.

SATA SSD vs. NVMe PCIe AiC SSD IOPS, Latency and CPU per IOP

The above image shows the lower amount of CPU per IOP given the newer, more streamlined driver and I/O software protocol of NVMe. With NVMe there is less overhead due to the new design, more queues and the ability to unlock value not only in SSDs but also in servers with more sockets, cores and threads.

What this means is that NVMe and SSD can boost performance for activity (TPS, IOPS, gets, puts, reads, writes). NVMe can also lower response time latency while enabling higher throughput bandwidth. In other words, you get more work out of your server's CPU (and memory). Granted, SSDs have been used for decades to boost server performance and, in many cases, delay an upgrade to a newer, faster system by getting more work out of the existing one (e.g. SSD marketing 202).
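
If you want a rough sense of this on your own Windows server, the following is a hedged PowerShell sketch (the counter paths and sampling interval are assumptions and may differ on localized systems) that samples total disk IOPS and CPU utilization, then derives an approximate CPU-per-IOP figure.

# Sample total disk transfers/sec (IOPS) and total CPU for 30 one-second intervals
$s = Get-Counter -Counter "\PhysicalDisk(_Total)\Disk Transfers/sec",
                          "\Processor(_Total)\% Processor Time" -SampleInterval 1 -MaxSamples 30
$iops = ($s.CounterSamples | Where-Object Path -like "*disk transfers*" |
         Measure-Object CookedValue -Average).Average
$cpu  = ($s.CounterSamples | Where-Object Path -like "*processor time*" |
         Measure-Object CookedValue -Average).Average
# Rough CPU cost per unit of work (skip the math on an idle system)
if ($iops -gt 0) { "Avg IOPS: {0:N0}  Avg CPU: {1:N1}%  CPU% per 1K IOPS: {2:N3}" -f $iops, $cpu, ($cpu / ($iops / 1000)) }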

NVMe maximizing your software license investments

What may not be so obvious (e.g. SSD marketing 404) is that by getting more work activity done in a given amount of time, you can also stretch your software licenses further. What this means is that you can get more out of your IBM, Microsoft, Oracle, SAP, VMware and other software licenses by increasing their effective productivity. You might already be using virtualization to increase server hardware efficiency and utilization to cut costs. Why not go further and boost productivity to increase the effectiveness of your software licenses (as well as servers) by using NVMe and SSDs?

Note that fast applications need fast software, servers, drivers, I/O protocols and devices.

Also, just because you have NVMe or PCIe present does not mean full performance, similar to how some vendors put SSDs behind their slow controllers and saw, well, slow performance. On the other hand, vendors who had or have fast controllers (software, firmware, hardware) that were HDD (or even SSD) performance constrained can see a performance boost.

Additional NVMe and related tips

If you have a Windows server and have not already done so, check your power plan to make sure it is not improperly set to balanced instead of high performance. For example, using PowerShell, issue the following command to select the high-performance plan:

PowerCfg -SetActive "8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c"

Another Windows-related tip, if you have not done so already, is to enable Task Manager disk stats by issuing "diskperf -y" from a command line. Then display Task Manager's performance view to see drive performance.
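
To verify the power plan change actually took effect, you can also list the active scheme; powercfg /getactivescheme is a standard Windows command, run from an elevated prompt.

# Confirm the active plan; expect the high performance GUID (8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c)
powercfg /getactivescheme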

Need to benchmark, validate, compare or test an NVMe, SSD (or even HDD) device or system? There are various tools and workloads for different scenarios. Likewise, those tools can be configured for different activity to reflect your needs (and application workloads). For example, Microsoft Diskspd, fio.exe, Iometer and vdbench sample scripts are shown here (along with results) as a starting point for comparison or validation testing.

Does M.2 mean you have NVMe? That depends: some systems implement M.2 with SATA, while others support NVMe. Read the fine print or ask for clarification.

Do all NVMe devices using PCIe run at the same speed? Not necessarily, as some might be PCIe x1, x4 or x8. Likewise, some NVMe PCIe cards might be x8 (mechanical and electrical) yet split out into a pair of x4 ports. Also keep in mind that, similar to a dual-port HDD, NVMe U.2 drives can have two paths to a server, storage system controller or adapter; however, both might not be active at the same time. You might also have a fast NVMe device attached to a slow server, storage system or adapter.
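
For a rough sense of what those lane counts mean, here is a small back-of-the-envelope PowerShell calculation, assuming PCIe Gen 3 at 8 GT/s per lane with 128b/130b encoding and ignoring protocol overhead.

# Approximate usable PCIe Gen 3 link bandwidth: 8 GT/s per lane with 128b/130b encoding
$perLaneMBps = 8e9 * (128 / 130) / 8 / 1e6   # roughly 985 MB/s per lane
foreach ($lanes in 1, 4, 8) {
    "x{0}: ~{1:N1} GB/s" -f $lanes, ($perLaneMBps * $lanes / 1000)
}

That works out to roughly 1 GB/s for x1, 3.9 GB/s for x4 and 7.9 GB/s for x8 of raw link bandwidth, which is why an x4 U.2 or M.2 device has a lower ceiling than an x8 AiC regardless of how fast the NVM behind it is.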

Who to watch and keep an eye on in the NVMe ecosystem? Besides those mentioned above, others to keep an eye on include Broadcom, E8, Enmotus Fuzedrive (micro-tiering software), Excelero, Magnotics, Mellanox, Microsemi (e.g. PMC Sierra), Microsoft (Windows Server 2016 S2D + ReFS + Storage Tiering), NVM Express trade group, Seagate, VMware (Virtual NVMe driver part of vSphere ESXi in addition to previous driver support) and WD/Sandisk among many others.

Where To Learn More

Additional learning experiences along with common questions (and answers), as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.

What This All Means

NVMe is in your future; that was the answer. However, there are the when, where, how and with what, among other questions, still to be addressed. One of the great things IMHO about NVMe is that you can have it your way, where and when you need it, as a replacement or companion to what you have. Granted, that will vary based on your preferred vendors as well as what they support today or in the future.

If NVMe is the answer, ask your vendor when they will support NVMe as a back-end for their storage systems, as well as on the front-end. Also determine when your servers (hardware, operating systems, hypervisors) will support NVMe and in what variation. Learn more about why NVMe is the answer and related topics at www.thenvmeplace.com.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

SSD, flash, Non-volatile memory (NVM) storage Trends, Tips & Topics

Updated 2/2/2018

Will 2017 be the year of solid state device (SSD), all flash, or all Non-volatile memory (NVM) based storage data centers and data infrastructures?

Recently I did a piece over at InfoStor looking at SSD trends, tips and related topics. SSDs of some type, shape and form are in your future, if they are not already. In my InfoStor piece, I look at some non-volatile memory (NVM) and SSD trends, technologies, tools and tips that you can leverage today to help prepare for tomorrow. This also includes NVM Express (NVMe) based components and solutions.

By way of background, SSD can refer to solid state drive or solid state device (e.g. more generic). The latter is what I am using in this post. NVM refers to different types of persistent memories, including NAND flash and its variants most commonly used today in SSDs. Other NVM mediums include NVRAM along with storage class memories (SCMs) such as 3D XPoint and phase change memory (PCM) among others. Let’s focus on NAND flash as that is what is primarily available and shipping for production enterprise environments today.

Continue reading about SSD, flash, NVM and related trends, topics and tips over at InfoStor by clicking here.

What This All Means

Will 2017 finally be the year of all flash, all SSD and all NVM including emerging storage class memories (SCM)? Or, as we have seen over the past decade, will there simply be increasing adoption as well as deployment in most environments, some of which have gone all SSD or NVM? In the meantime, it is safe to say that NVMe, NVM, SSD, flash and other related technologies are in your future in some shape or form as well as quantity. Check out my piece over at InfoStor on SSD trends, tips and related topics.

What say you, are you going all flash, SSD or NVM in 2017, if not, what are your concerns or constraints and plans?

Ok, nuff said, for now…

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, vSAN and VMware vExpert. Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book “Software-Defined Data Infrastructure Essentials” (CRC Press).

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

Updated Software Defined Data Infrastructure Webinars and Fall 2016 events

Here are the updated Server StorageIO fall 2016 webinar and event activities covering software defined data center, data infrastructure, virtual, cloud, containers, converged, hyper-converged server, storage, I/O network, performance and data protection among other topics.

December 7, 2016 – Webinar 11AM PT – BrightTalk
Hyper-Converged Infrastructure Decision Making

Hyper-Converged Infrastructure, HCI and CI Decision Making

Are Converged Infrastructures (CI), Hyper-Converged Infrastructures (HCI), Cluster in Box or Cloud in Box (CiB) solutions for you? The answer is, it depends on your needs, requirements and applications among other criteria. In addition, are you focused on a particular technology solution or architecture approach, or looking for something that adapts to your needs? Join us in this discussion exploring your options for different scenarios as we look beyond the hype, including the next wave of hyper-scale converged, along with applicable decision-making criteria. Topics include:

– Data Infrastructures exist to support applications and their underlying resource needs
– What are your application and environment needs along with other objectives
– Explore various approaches for hyper-small and hyper-large environments
– What are you converging, hardware, hypervisors, management or something else?
– Does HCI mean hyper-vendor-lock-in, if so, is that a bad thing?
– When, where, why and how to use different scenarios

November 29-30, 2016 (New) – Converged & Hyper-Converged Decision Making
Is Converged Infrastructure Right For You?
Workshop Seminar – Nijkerk The Netherlands

Converged and server storage I/O data infrastructure trends
Agenda and topics to be covered include:

  • When you should decide to evaluate CI/HCI vs. a traditional approach
  • What are the decision and evaluation criteria for apples-to-apples vs. apples-to-pears comparisons
  • What are the costs, benefits, and caveats of the different approaches
  • How different applications such as VDI or VSI or database have different needs
  • What are the network, storage, software license and training cost implications
  • Different comparison criteria for smaller environments and remote offices vs. larger enterprises
  • How will you protect and secure a CI, HCI environment (HA, BC, BR, DR, Backup)
  • What is the risk and benefit of startups and companies with limited portfolios vs. big vendors
  • Do it yourself (DiY) vs. turnkey software vs. bundled tin-wrapped software solutions
  • We will also look at associated trends including software-defined, NVM/SSD, NVMe, VMware, Microsoft, KVM, Citrix/Xen, Docker, OpenStack among others.

Organized by:
Brouwer Storage Consultancy

November 28, 2016 (New) – Server Storage I/O Fundamental Trends V2.1116
What's new, what's the buzz, what you need to know about and who's doing what
Workshop Seminar – Nijkerk The Netherlands

Converged and server storage I/O data infrastructure trends
Agenda and topics that will be covered include:

  • Who’s doing what, who are the new emerging vendors, solutions and technologies to watch
  • Non-Volatile Memory (NVM), flash solid state device (SSD), Storage Class Memory (SCM)
  • Networking with your servers and storage including NVMe, NVMeoF and RoCE
  • Cloud, Object and Bulk storage for data protection, archiving, near-line, scale-out
  • Data protection and software defined storage management (backup, BC, BR, DR, archive)
  • Microsoft Windows Server 2016, Nano, S2D and Hyper-V
  • VMware, OpenStack, Ceph, Docker and Containers, CI and HCI
  • EMC is gone, now there is Dell EMC and what that means
  • Various vendors and solutions from legacy to new and emerging
  • Recommendations, usage or deployment scenarios and tips
  • Some examples of who's doing what include AWS, Brocade, Cisco, Dell EMC, Enmotus, Fujitsu, Google, HDS, HP and Huawei, IBM, Intel, Lenovo, Mellanox, Micron, Microsoft, NetApp, Nutanix, Oracle, Pure, Quantum, Qumulo, Reduxio, Rubrik, Samsung, SanDisk, Seagate, Simplivity and Tintri, Veeam, Veritas, VMware and WD among others.

Organized by:
Brouwer Storage Consultancy

November 23, 2016 – Webinar 10AM PT BrightTalk
BCDR and Cloud Backup Software Defined Data Infrastructures (SDDI) and Data Protection

BC DR Cloud Backup and Data Protection

The answer is BCDR and Cloud Backup, however what was the question? Besides how to protect, preserve and secure your data, applications and data infrastructures against various threat risks, what are some other common questions? For example, how to modernize, rethink, re-architect, and use new and old things in new ways; these and other topics, techniques, trends and tools have a common theme of BCDR and Cloud Backup. Join us in this discussion exploring your options for protecting data, applications and your data infrastructures spanning legacy, software-defined, virtual and cloud environments. Topics include:

– Data Infrastructures exist to support applications and their underlying resource needs
– Various cloud storage options to meet different application PACE needs
– Do clouds need to be backed-up or protected?
– How to leverage clouds for various data protection objectives
– When, where, why and how to use different scenarios

November 23, 2016 – Webinar 9AM PT – BrightTalk
Cloud Storage – Hybrid and Software Defined Data Infrastructures (SDDI)

Cloud Storage Decision Making

You have been told, or determined, that you need (or want) to use cloud storage. Ok, now what? What type of cloud storage do you need or want, or do you simply want cloud storage? However, what are your options as well as application requirements including Performance, Availability, Capacity and Economics (PACE) along with access or interfaces? Where are your applications and where will they be located? What are your objectives for using cloud storage, or is it simply that you have heard or been told it's cheaper? Join us in this discussion exploring your options and considerations for cloud storage decision-making. Topics include:

– Data Infrastructures exist to support applications and their underlying resource needs
– Various cloud storage options to meet different application PACE needs
– Storage for primary, secondary, performance, availability, capacity, backup, archiving
– Public, private and hybrid cloud storage options from block, file, object to application service
– When, where, why and how to use cloud storage for different scenarios

November 22, 2016 – Webinar 10AM PT – BrightTalk
Cloud Infrastructure Hybrid and Software Defined Data Infrastructures (SDDI)

Cloud Infrastructure and Hybrid Software Defined

At the core of cloud (public, private, hybrid) next generation data centers are software defined data infrastructures that exist to protect, preserve and serve applications, data along with their resulting information services. Software defined data infrastructure core components include hardware, software servers and storage configured (defined) to provide various services enabling application Performance Availability Capacity and Economics (PACE). Just as there are different types of environments, applications along with workloads various options, technologies as well as techniques exist for cloud services (and underlying data infrastructures). Join us in this session to discuss trends, technologies, tools, techniques and services options for cloud infrastructures. Topics include:

– Data Infrastructures exist to support applications and their underlying resource needs
– Software Defined Infrastructures (SDDI) are what enable Software Defined Data Centers and clouds
– Various types of clouds along with cloud services that determine how resources get defined
– When, where, why and how to use cloud Infrastructures along with associated resources

November 15, 2016 (New) – 11AM PT Webinar – Redmond Magazine and Solarwinds
The O.A.R. of Virtualization Scaling
A journey of optimization, automation, and reporting

Your journey to a flexible, scalable and secure IT universe begins now. Join Microsoft MVP and VMware vSAN and vExpert Greg Schulz of Server StorageIO along with VMware vExpert, Cisco Champion and Head Geek of Virtualization and Cloud Practice Kong Yang of SolarWinds for an interactive discussion empowering you to become the master of your software defined and virtual data center. Topics will include:

  • Trust your instruments and automation, however, verify they are working properly
  • Insight into how your environment, as well as automation tools, are working
  • Leverage automation to handle recurring tasks so you can focus on more productive activities
  • Capture, retain and transfer knowledge and tradecraft experiences into automation policies
  • Automated system management is only as good as the policies and data they rely upon
  • Optimize via automation that relies on reporting for insight, awareness and analytics 

November 3, 2016 (New) – Webinar 11AM PT – Redmond Magazine and
Dell Software
Tailor Your Backup Data Repositories to
Fit Your Security and Management Needs

Does data protection storage have you working overtime to take care of it? Do you have the flexibility to protect, preserve, secure and serve different workgroups or customers in a shared environment? Is your environment looking to expand with new applications and remote offices, yet your data protection is slowing you down? 

In this webinar we will look at current and emerging trends along with issues including how different threat risk challenges impact your evolving environment, as well as opportunities to address them. It’s time to deploy technology that works for you and your environment instead of you working for the solution. 

Attend and learn about:

  • Data protection trends, issues, regulatory compliance, challenges and opportunities
  • How to utilize purpose built appliances to protect and defend your systems, applications and data from various threat risks
  • Importance of timely insight and situational awareness into your data protection infrastructure
  • Protecting centralized and distributed remote office branch offices (ROBO) workgroups
  • What you can do today to optimize your environment

October 27, 2016 (New) – Webinar 10AM PT – Virtual Instruments
The Value of Infrastructure Insight

This webinar looks at the value of data center infrastructure insight both as a technology as well as a business productivity enabler. Besides productivity, having insight into how data infrastructure resources (servers, storage, networks, system software) are used, enables informed analysis, troubleshooting, planning, forecasting as well as cost-effective decision-making. In other words, data center infrastructure insight, based on infrastructure performance analytics, enables you to avoid flying blind, having situational awareness for proactive Information Technology (IT) management. Your return on innovation is increased, and leveraging insight awareness along with metrics that matter drives return on investment (ROI) along with enhanced service delivery.

October 20, 2016 – Webinar 9AM PT – BrightTalk
Next-Gen Data Centers Software Defined Data Infrastructures (SDDI) including Servers, Storage and Virtualization

At the core of next generation data centers are software defined data infrastructures that enable, protect, preserve and serve applications, data along with their resulting information services. Software defined data infrastructure core components include hardware, software servers and storage configured (defined) to provide various services enabling application Performance Availability Capacity and Economics (PACE). Just as there are different types of environments, applications along with workloads various options, technologies as well as techniques exist for virtual servers and storage. Join us in this session to discuss trends, technologies, tools, techniques and services around storage and virtualization for today, tomorrow, and in the years to come. Topics include:

– Data Infrastructures exist to support applications and their underlying resource needs
– Software Defined Infrastructures (SDDI) are what enable Software Defined Data Centers
– Server and Storage Virtualization better together, with and without CI/HCI
– Many different facets (types) of Server virtualization and virtual storage
– When, where, why and how to use storage virtualization and virtual storage

September 20, 2016 – Webinar 8AM PT – BrightTalk
Software Defined Data Infrastructures (SDDI) Enabling Software Defined Data Centers – Part of Software-Defined Storage summit

Data Infrastructures exist to support applications and their underlying resource needs. Software-Defined Infrastructures (SDI) are what enable Software-Defined Data Centers, and at the heart of a SDI is storage that is software-defined. This spans cloud, virtual and physical storage and is at the focal point of today. Join us in this session to discuss trends, technologies, tools, techniques and services around SDI and SDDC- today, tomorrow, and in the years to come.

September 13, 2016 – Webinar 11AM PT – Redmond Magazine and
Dell Software
Windows Server 2016 and Active Directory
Whats New and How to Plan for Migration

Windows Server 2016 is expected to GA this fall and is a modernized version of the Microsoft operating system that includes new capabilities such as Active Directory (AD) enhancements. AD is critical to organizational operations providing control and secure access to data, networks, servers, storage and more from physical, virtual and cloud (public and hybrid). But over time, organizations along with their associated IT infrastructures have evolved due to mergers, acquisitions, restructuring and general growth. As a result, yesterday’s AD deployments may look like they did in the past while using new technology (e.g. in old ways). Now is the time to start planning for how you will optimize your AD environment using new tools and technologies such as those in Windows Server 2016 and AD in new ways. Optimizing AD means having a new design, performing cleanup and restructuring prior to migration vs. simply moving what you have. Join us for this interactive webinar to begin planning your journey to Windows Server 2016 and a new optimized AD deployment that is flexible, scalable and elastic, and enables resilient infrastructures. You will learn:

  • What’s new in Windows Server 2016 and how it impacts your AD
  • Why an optimized AD is critical for IT environments moving forward
  • How to gain insight into your current AD environment
  • AD restructuring planning considerations

September 8, 2016 – Webinar 11AM PT (Watch on Demand) – Redmond Magazine, Acronis and Unitrends
Data Protection for Modern Microsoft Environments

Your organization’s business depends on modern Microsoft® environments — Microsoft Azure and new versions of Windows Server 2016, Microsoft Hyper-V with RCT, and business applications — and you need a data protection solution that keeps pace with Microsoft technologies. If you lose mission-critical data, it can cost you $100,000 or more for a single hour of downtime. Join our webinar and learn how different data protection solutions can protect your Microsoft environment, whether you store data on company premises, at remote locations, in private and public clouds, and on mobile devices.

What This All Means

It's fall, back-to-school and learning time; join me at these and other upcoming event activities.

Ok, nuff said, for now…

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, vSAN and VMware vExpert. Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier) and twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

Server StorageIO August 2016 Update Newsletter

Volume 16, Issue VIII

Hello and welcome to this August 2016 Server StorageIO update newsletter.

In This Issue

  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Events and Webinars
  • Industry Activity Trends
  • Resources and Links

Enjoy this shortened summer edition of the Server StorageIO update newsletter.

    Cheers GS

    Industry Activity Trends

    With VMworld coming up this week, rest assured, there will be plenty to talk about and discuss in the following weeks. However for now, here are a few things from this past month.

    At Flash Memory Summit (FMS), which is more of a component, vendor-to-vendor industry type event, there was buzz about analytics, however what was shown as analytics tended to be Iometer. Hmmm, more on that in a future post. However, something else at FMS, besides variations of Non-Volatile Memory (NVM) including SSD, NAND flash, and Storage Class Memory (SCM) such as 3D XPoint (among its new marketing names), along with NVM Express (NVMe), was NVMe over Fabric.

    This includes NVMe over RoCE (RDMA over Converged Ethernet), which can be implemented on some 10 Gb (and faster) Ethernet adapters as well as some InfiniBand adapters from Mellanox among others. Another variation is Fibre Channel NVMe (FC-NVMe), where the NVMe protocol command set is transported as an Upper Level Protocol (ULP) over FC. This is similar to how the SCSI command set is implemented on FC (e.g. SCSI_FCP or FCP), which means NVMe can be seen as a competing protocol to FCP (which it will or could be). Naturally, not to be left out, some of the marketers have already started with Persistent Memory over Fabric among other variations of Non-Ephemeral Memory over Fabrics. More on NVM, NVMe and fabrics in future posts, commentary and newsletters.

    Some other buzzword topics regaining mention (or perhaps being mentioned for the first time for some) include FedRAMP, Authority To Operate (ATO) clouds for government entities, and FISMA among others. Many service providers, cloud and hosting providers, from large AWS and Azure to smaller Blackmesh, have added FedRAMP and other options in addition to traditional and DevOps offerings.

    Some of you may recall me mentioning ScaleMP in the past which is a technology for aggregating multiple compute servers including processors and memory into a converged resource pool. Think the opposite of a hypervisor that divides up resources to support consolidation. In other words, where you need to scale up without complexity of clustering or to avoid having to change and partition your software applications. In addition to ScaleMP, a newer hardware agnostic startup to check out is Tidal Scale.

    On the merger and acquisition front, the Dell / EMC deal is moving forward and expected to close soon, perhaps by the time or before you read this. In other news, HPE announced that it is buying SGI to gain access to a larger part of the traditional legacy big data Super Compute and High Performance Compute (HPC) market. One of the SGI diamonds in the rough, in case you are not aware, is DMF for data management. HPE and Dropbox also announced a partnership deal earlier this summer.

    That’s all for now, time to pack my bags and off to Las Vegas for VMworld 2016.

    Ok, nuff said, for now…

     

    StorageIOblog Posts

    Recent and popular Server StorageIOblog posts include:

    View other recent as well as past StorageIOblog posts here

     

    StorageIO Commentary in the news

    Recent Server StorageIO industry trends perspectives commentary in the news.

    Via FutureReadyOEM Q&A: when to implement ultra-dense storage
    Via EnterpriseStorageForum Comments on Top 10 Enterprise SSD Market Trends
    Via SearchStorage Comments on NAS system buying decisions
    Via EnterpriseStorageForum Comments on Cloud Storage Pros and Cons

    View more Server, Storage and I/O hardware as well as software trends comments here

     

    StorageIO Tips and Articles

    Recent and past Server StorageIO articles appearing in different venues include:

    Via Iron Mountain Preventing Unexpected Disasters: IT and Data Infrastructure
    Via FutureReadyOEM Q&A: When to implement ultra-dense storage servers
    Via Micron Blog What's next for NVMe and your Data Center – Preparing for Tomorrow
    Redmond Magazine: Trends – Evolving from Data Protection to Data Resiliency
    IronMountain: 5 Noteworthy Data Privacy Trends From 2015
    InfoStor: Data Protection Gaps, Some Good, Some Not So Good
    Virtual Blocks (VMware Blogs): EVO:RAIL – When And Where To Use It?

    Check out these resources techniques, trends as well as tools. View more tips and articles here

    StorageIO Webinars and Industry Events

    December 7: BrightTalk Webinar – Hyper-Converged Infrastructure (HCI) Webinar 11AM PT

    November 23: BrightTalk Webinar – BCDR and Cloud Backup – Software Defined Data Infrastructures (SDDI) and Data Protection – 10AM PT

    November 23: BrightTalk Webinar – Cloud Storage – Hybrid and Software Defined Data Infrastructures (SDDI) – 9AM PT

    November 22: BrightTalk Webinar – Cloud Infrastructure – Hybrid and Software Defined Data Infrastructures (SDDI) – 10AM PT

    October 20: BrightTalk Webinar – Next-Gen Data Centers – Software Defined Data Infrastructures (SDDI) including Servers, Storage and Virtualizations – 9AM PT

    September 29: TBA Webinar – 10AM PT

    September 27-28 – NetApp – Las Vegas

    September 20: BrightTalk Webinar – Software Defined Data Infrastructures (SDDI) Enabling Software Defined Data Centers – Part of Software-Defined Storage summit – 8AM PT

    September 13: Redmond Webinar – Windows Server 2016 and Active Directory What’s New and How to Plan for Migration – 11AM PT

    September 8: Redmond Webinar – Data Protection for Modern Microsoft Environments – 11AM PT

    August 29-31: VMworld Las Vegas

    August 25 – MSP CMG – The Answer is Software Defined – What was the question?

    August 16: BrightTalk Webinar Software Defined Data Centers (SDDC) are in your future (if not already) – Part of Enterprise Software and Infrastructure summit 8AM PT

    August 10-11 Flash Memory Summit (Panel discussion August 11th) – NVMe over Fabric

    See more webinars and other activities on the Server StorageIO Events page here.

     

    Server StorageIO Industry Resources and Links

    Check out these useful links and pages:

    storageio.com/links – Various industry links (over 1,000 with more to be added soon)
    objectstoragecenter.com – Cloud and object storage topics, tips and news items
    storageioblog.com/data-protection-diaries-main/ – Various data protection items and topics
    thenvmeplace.com – Focus on NVMe trends and technologies
    thessdplace.com – NVM and Solid State Disk topics, tips and techniques
    storageio.com/performance – Various server, storage and I/O performance and benchmarking

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Server storage I/O performance benchmark workload scripts Part I

    Update 1/28/2018

    This is part one of a two-part series of posts about Server storage I/O performance benchmark workload tools and scripts. View part II here which includes the workload scripts and where to view sample results.

    There are various tools and workloads for server I/O benchmark testing, validation and exercising different storage devices (or systems and appliances) such as Non-Volatile Memory (NVM) flash Solid State Devices (SSDs) or Hard Disk Drives (HDD) among others.

    Various NVM flash SSD including NVMe devices

    For example, let's say you have an SSD such as an Intel 750 (here, here, and here) or some other vendor's NVMe PCIe Add-in Card (AiC) installed in a Microsoft Windows server and would like to see how it compares with expected results. The following scripts allow you to validate your system against those of others running the same workload, granted of course your mileage (performance) may vary.

    Why Your Performance May Vary

    Reasons your performance may vary include, among others:

    • GHz Speed of your server, number of sockets, cores
    • Amount of main DRAM memory
    • Number, type and speed of PCIe slots
    • Speed of storage device and any adapters
    • Device drivers and firmware of storage devices and adapters
    • Server power mode setting (e.g. low or balanced power vs. high-performance)
    • Other workload running on system and device under test
    • Solar flares (kp-index) among other urban (or real) myths and issues
    • Typos or misconfiguration of workload test scripts
    • Test server, storage, I/O device, software and workload configuration
    • Versions of test software tools among others

    Windows Power (and performance) Settings

    Some things are assumed or taken for granted that everybody knows and does, however sometimes the obvious needs to be stated or re-stated. An example is remembering to check your server power management settings to see if they are in energy efficiency power savings mode, or in high-performance mode. Note that if your focus is on getting the best possible performance for effective productivity, then you want to be in high-performance mode. On the other hand, if performance is not your main concern and your focus is instead on energy avoidance, then use low power mode, or perhaps balanced.

    For Microsoft Windows Servers, Desktop Workstations, Laptops and Tablets you can adjust power settings via control panel and GUI as well as command line or Powershell. From command line (privileged or administrator) the following are used for setting balanced or high-performance power settings.

    Balanced

    powercfg.exe /setactive 381b4222-f694-41f0-9685-ff5bb260df2e

    High Performance

    powercfg.exe /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c

    From Powershell the following set balanced or high-performance.

    Balanced
    PowerCfg -SetActive "381b4222-f694-41f0-9685-ff5bb260df2e"

    High Performance
    PowerCfg -SetActive "8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c"

    Note that you can list Windows power management settings using powercfg -LIST and powercfg -QUERY

    Btw, if you have not already done so, enable Windows disk (HDD and SSD) performance counters so that they appear via Task Manager by entering from a command prompt:

    diskperf -y

    Workload (Benchmark) Simulation Test Tools Used

    There are many tools (see storageio.com/performance) that can be used for creating and running workloads, just as there are various application server I/O characteristics. Different server I/O and application performance attributes include read vs. write, random vs. sequential, large vs. small, long vs. short stride, burst vs. sustain, cache and non-cache friendly, and activity vs. data movement vs. latency vs. CPU usage among others. Likewise, the number of workers, jobs, threads, outstanding and overlapped I/Os among other configuration settings can have an impact on workload and results.

    The four free tools that I’m using with this set of scripts are:

    • Microsoft Diskspd (free), get the tool and bits here or here (open source), learn more about Diskspd here.
    • FIO.exe (free), get the tool and bits here or here among other venues.
    • Vdbench (free with registration), get the tool and bits here or here among other venues.
    • Iometer (free), get the tool and bits here among other venues.

    Notice: While best effort has been made to verify the above links, they may change over time and you are responsible for verifying the safety of links and your downloads.

    Where To Learn More

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    What This All Means

    Remember, everything is not the same in the data center or with data infrastructures that support different applications.

    While some tools are more robust or better than others for different things, ultimately it's usually not the tool that results in a bad benchmark or comparison; it's the configuration, or the use of workload settings that are not relevant or applicable. The best benchmark, workload or simulation is your own application. Second best is one that closely resembles your application workload characteristics. A bad benchmark is one that has no relevance to your environment or application use scenario. Take and treat all benchmark or workload simulation results with a grain of salt, as something to compare, contrast or make reference to in the proper context. Read part two of this post series to view test tool workload scripts along with sample results.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Part II – Some server storage I/O workload scripts and results

    Updated 1/28/2018

    This is the second in a two part series of posts pertaining to using some common server storage I/O workload benchmark tools and scripts. View part I here which includes overview, background and information about the tools used and related topics.

    Various NVM flash SSD including NVMe devices

    Following are some server I/O benchmark workload scripts to exercise various storage devices such as Non-Volatile Memory (NVM) flash Solid State Devices (SSDs) or Hard Disk Drives (HDD) among others.

    The Workloads

    Some things that can impact workload performance results, besides changing the I/O size, read/write mix and random/sequential mix, are the number of threads, workers and jobs. Note that in the workload steps, the larger 1MB and sequential scenarios have fewer threads and workers than the smaller IOP or activity focused workloads. Too many threads or workers can cause overhead, and you will reach a point of diminishing returns. Likewise, with too few you will not drive the system under test (SUT) or device under test (DUT) to its full potential. If you are not sure how many threads or workers to use, run some short calibration tests to see the results before doing a larger, longer test (see the sketch below).
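
    Here is a hypothetical calibration sweep sketch in PowerShell; it assumes diskspd.exe is on the path and that N:\iobw.tst is the same test file used by the scripts below, and note that it performs writes, so point it only at a device you can safely overwrite.

    # Short calibration runs: 60 seconds each, 4KB random 70/30 read/write, varying thread counts
    foreach ($t in 1, 2, 4, 8, 16, 32, 64) {
        diskspd.exe -c300G -t$t -o32 -b4K -r -w30 -W10 -d60 -h -L N:\iobw.tst |
            Out-File ("DiskSPD_calib_{0}threads.txt" -f $t)
    }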

    Keep in mind that the best benchmark or workload is your own application running with similar load to what you would see in real world, along with applicable features, configuration and functionality enabled. The second best would be those that closely resemble your workload characteristics and that are relevant.

    The following workloads involved a system test initiator (STI) server driving workload using the different tools as well as scripts shown. The STI sends the workload to a SUT or DUT that can be a single drive, card or multiple devices, storage system or appliance. Warning: The following workload tests do both reads and writes, which can be destructive to your device under test. Exercise caution on the device and file name specified to avoid causing a problem that might result in you testing your backup / recovery process. Likewise, no warranty is given, implied or made for these scripts or their use or results; they are simply supplied as-is for your reference.

    The four free tools that I’m using with this set of scripts are:

    • Microsoft Diskspd (free), get the tool and bits here or here (open source), learn more about Diskspd here.
    • FIO.exe (free), get the tool and bits here or here among other venues.
    • Vdbench (free with registration), get the tool and bits here or here among other venues.
    • Iometer (free), get the tool and bits here among other venues.

    Notice: While best effort has been made to verify the above links, they may change over time and you are responsible for verifying the safety of links and your downloads.

    Microsoft Diskspd workloads

    Note that a 300GB file named iobw.tst on device N: is used as the target for read and write I/Os. There are 160 threads, I/O sizes of 4KB and 8KB, varying from 100% read (0% write), 70% read (30% write) to 0% read (100% write), with random (seek) access and no hardware or software cache. Also specified are latency statistics collection, a 30 second warm-up (ramp-up) time, and a quick 5 minute duration (test time). Five minutes is a quick test to calibrate and verify your environment, however it is relatively short for a real test, which should run for hours or more depending on your needs.

    Note that the output results are put into a file with a name describing the test tool, workload and other useful information such as date and time. You may also want to specify a different directory where output files are placed.

    diskspd.exe -c300G -o160 -t160 -b4K -w0 -W30 -d300 -h -fr  N:iobw.tst -L  > DiskSPD_300G_4KRan100Read_160x160_072416_8AM.txt
    diskspd.exe -c300G -o160 -t160 -b4K -w30 -W30 -d300 -h -fr  N:iobw.tst -L  > DiskSPD_300G_4KRan70Read_160x160_072416_8AM.txt
    diskspd.exe -c300G -o160 -t160 -b4K -w100 -W30 -d300 -h -fr  N:iobw.tst -L  > DiskSPD_300G_4KRan0Read_160x160_072416_8AM.txt
    diskspd.exe -c300G -o160 -t160 -b8K -w0 -W30 -d300 -h -fr  N:iobw.tst -L  > DiskSPD_300G_8KRan100Read_160x160_072416_8AM.txt
    diskspd.exe -c300G -o160 -t160 -b8K -w30 -W30 -d300 -h -fr  N:iobw.tst -L  > DiskSPD_300G_8KRan70Read_160x160_072416_8AM.txt
    diskspd.exe -c300G -o160 -t160 -b8K -w100 -W30 -d300 -h -fr  N:iobw.tst -L  > DiskSPD_300G_8KRan0Read_160x160_072416_8AM.txt
    

    The following Diskspd tests use similar settings as above, however instead of random, sequential is specified, threads and outstanding I/Os are reduced while I/O size is set to 1MB, then 8KB, with 100% read and 100% write scenarios. The -t specifies the number of threads and -o number of outstanding I/Os per thread.

    diskspd.exe -c300G -o32 -t132 -b1M -w0 -W30 -d300 -h -si  N:iobw.tst -L  > DiskSPD_300G_1MSeq100Read_32x32_072416_8AM.txt
    diskspd.exe -c300G -o32 -t132 -b1M -w100 -W30 -d300 -h -si  N:iobw.tst -L  > DiskSPD_300G_1MSeq0Read_32x32_072416_8AM.txt
    diskspd.exe -c300G -o160 -t160 -b8K -w0 -W30 -d300 -h -si  N:iobw.tst -L  > DiskSPD_300G_8KSeq100Read_32x32_072416_8AM.txt
    diskspd.exe -c300G -o160 -t160 -b8K -w100 -W30 -d300 -h -si  N:iobw.tst -L  > DiskSPD_300G_8KSeq0Read_32x32_072416_8AM.txt
    

    Fio.exe workloads

    Next are the fio workloads similar to those run using Diskspd except the sequential scenarios are skipped.

    fio --filename=N\:\iobw.tst --filesize=300000M --direct=1  --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=windowsaio  --ba=4k --bs=4k --rwmixread=100 --iodepth=32 --numjobs=5 --exitall --time_based  --ramp_time=30 --runtime=300 --group_reporting --name=xxx  --output=FIO_300000M_4KRan100Read_5x32_072416_8AM.txt
    fio --filename=N\:\iobw.tst --filesize=300000M --direct=1  --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=windowsaio  --ba=4k --bs=4k --rwmixread=70 --iodepth=32 --numjobs=5 --exitall --time_based  --ramp_time=30 --runtime=300 --group_reporting --name=xxx  --output=FIO_300000M_4KRan70Read_5x32_072416_8AM.txt
    fio --filename=N\:\iobw.tst --filesize=300000M --direct=1  --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=windowsaio  --ba=4k --bs=4k --rwmixread=0 --iodepth=32 --numjobs=5 --exitall --time_based  --ramp_time=30 --runtime=300 --group_reporting --name=xxx  --output=FIO_300000M_4KRan0Read_5x32_072416_8AM.txt
    fio --filename=N\:\iobw.tst --filesize=300000M --direct=1  --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=windowsaio  --ba=8k --bs=8k --rwmixread=100 --iodepth=32 --numjobs=5 --exitall --time_based  --ramp_time=30 --runtime=300 --group_reporting --name=xxx  --output=FIO_300000M_8KRan100Read_5x32_072416_8AM.txt
    fio --filename=N\:\iobw.tst --filesize=300000M --direct=1  --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=windowsaio  --ba=8k --bs=8k --rwmixread=70 --iodepth=32 --numjobs=5 --exitall --time_based  --ramp_time=30 --runtime=300 --group_reporting --name=xxx  --output=FIO_300000M_8KRan70Read_5x32_072416_8AM.txt
    fio --filename=N\:\iobw.tst --filesize=300000M --direct=1  --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=windowsaio  --ba=8k --bs=8k --rwmixread=0 --iodepth=32 --numjobs=5 --exitall --time_based  --ramp_time=30 --runtime=300 --group_reporting --name=xxx  --output=FIO_300000M_8KRan0Read_5x32_072416_8AM.txt
    

    Vdbench workloads

    Next are the Vdbench workloads similar to those used with the Microsoft Diskspd scenarios. In addition to making sure Vdbench is installed and working, you will need to create a text file called seqrxx.txt containing the following:

    hd=localhost,jvms=!jvmn
    sd=sd1,lun=!drivename,openflags=directio,size=!dsize
    wd=mix,sd=sd1
    rd=!jobname,wd=mix,elapsed=!etime,interval=!itime,iorate=max,forthreads=(!tthreads),forxfersize=(!worktbd),forseekpct=(!workseek),forrdpct=(!workread),openflags=directio

    The following are the commands that call the Vdbench script file. Note that Vdbench puts its output files (yes, plural, there are many results) in an output folder.

    vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=4k workseek=100 workread=100 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o  vdbench_NNVMe_300GB_64JVM_160TH_4K100Ran100Read_0726166AM
    vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=4k workseek=100 workread=70 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_4K100Ran70Read_072416_8AM
    vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=4k workseek=100 workread=0 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_4K100Ran0Read_072416_8AM
    vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=8k workseek=100 workread=100 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_8K100Ran100Read_072416_8AM
    vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=8k workseek=100 workread=70 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_8K100Ran70Read_072416_8AM
    vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=8k workseek=100 workread=0 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_8K100Ran0Read_072416_8AM
    vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=8k workseek=0 workread=100 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_8K100Seq100Read_072416_8AM
    vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=8k workseek=0 workread=70 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_8K100Seq70Read_072416_8AM
    vdbench -f seqrxx.txt dsize=300G  tthreads=160 jvmn=64 worktbd=8k workseek=0 workread=0 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_160TH_8K100Seq0Read_072416_8AM
    vdbench -f seqrxx.txt dsize=300G  tthreads=32 jvmn=64 worktbd=1M workseek=0 workread=100 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_32TH_1M100Seq100Read_072416_8AM
    vdbench -f seqrxx.txt dsize=300G  tthreads=32 jvmn=64 worktbd=1M workseek=0 workread=0 jobname=NVME etime=300 itime=30 drivename="\\.\N:\iobw.tst" -o vdbench_NNVMe_300GB_64JVM_32TH_1M100Seq0Read_072416_8AM
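
    To show how the command line values map into the seqrxx.txt parameter file, here is my reading of what the first command above resolves to after Vdbench substitutes the !variables (check the Vdbench documentation to confirm the substitution behavior for your version):

    # seqrxx.txt with the first command line's values substituted
    hd=localhost,jvms=64
    sd=sd1,lun=\\.\N:\iobw.tst,openflags=directio,size=300G
    wd=mix,sd=sd1
    rd=NVME,wd=mix,elapsed=300,interval=30,iorate=max,forthreads=(160),forxfersize=(4k),forseekpct=(100),forrdpct=(100),openflags=directio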
    

    Iometer workloads

    Last however not least, let's do an Iometer run. The following command calls an Iometer input file (icf) that you can find here. In that file you will need to make a few changes, including the name of the server where Iometer is running, the description, and the device under test (DUT) address. For example, in the icf file change SIOSERVER to the name of the server you will be running Iometer from. Also change the address for the DUT, for example N:, to whatever address, drive or mount point you are using, and update the description accordingly (e.g. "NVME" to "Your test example").

    Here is the command line to run Iometer specifying an icf and where to put the results in a CSV file that can be imported into Excel or other tools.

    iometer /c  iometer_5work32q_intel_Profile.icf /r iometer_nvmetest_5work32q_072416_8AM.csv
    

    server storage I/O SCM NVM SSD performance

    What About The Results?

    For context, the following results were run on a Lenovo TS140 (32GB RAM), single socket quad core (3.2GHz) Intel E3-1225 v3 with an Intel NVMe 750 PCIe AiC (Intel SSDPEDMW40). Out-of-the-box Microsoft Windows NVMe drive and controller drivers were used (e.g. 6.3.9600.18203 and 6.3.9600.16421). The operating system was Windows 2012 R2 (bare metal) with the NVMe PCIe card formatted with the ReFS file system. Workload generator and benchmark driver tools included Microsoft Diskspd version 2.012, Fio.exe version 2.2.3, Vdbench 50403 and Iometer 1.1.0. Note that there are newer versions of the various workload generation tools.

    Example results are located here.

    Where To Learn More

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    Remember, everything is not the same in the data center or with data infrastructures that support different applications.

    While some tools are more robust or better than others for different things, ultimately it's usually not the tool that results in a bad benchmark or comparison, it's the configuration or the use of workload settings that are not relevant or applicable. The best benchmark, workload or simulation is your own application. Second best is one that closely resembles your application workload characteristics. A bad benchmark is one that has no relevance to your environment or application use scenario. Take and treat all benchmark or workload simulation results with a grain of salt, as something to compare, contrast or make reference to in the proper context.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    12Gb SAS SSD Enabling Server Storage I/O Performance and Effectiveness Webinar

    12Gb SAS SSD Enabling Server Storage I/O Performance and Effectiveness Webinar

    server storage I/O trends

    Non-Volatile Memory (NVM) Solid State Devices (SSDs) including NAND flash and DRAM, as well as emerging PCM and 3D XPoint as part of Storage Class Memories (SCMs), are in your future. The questions are where, when, for what and how much, as well as which form factor packaging and server storage I/O interface are applicable for your different applications and data infrastructures.

    server storage I/O SCM NVM SSD performance

    Server storage I/O physical interfaces for accessing NVM SSDs include PCIe Add-in Cards (AiC), M.2 and the emerging SFF 8639 (e.g. NVMe U.2 drive form factor), along with mSATA (e.g. mini PCIe card), in addition to SAS, SATA and USB among others. Protocols include NVM Express (NVMe), SAS and SATA, as well as general server storage I/O access of shared storage systems that leverage NVM SSD and SCM technologies.

    To help address the question of which server storage I/O interface is applicable for different environments, I invite you to a webinar on June 22, 2016 at 1PM ET hosted by and compliments of Micron.

    During the webinar, Rob Peglar (@peglarr) of Micron and I will discuss and answer questions about how 12Gb SAS remains a viable option for attaching NVM SSD storage to servers, as well as via storage systems, today and into the future. Today’s 12Gb SAS SSDs enable you to leverage your existing knowledge, skill sets and technology to maximize your data infrastructure investments. For servers or storage systems that are PCIe slot constrained, 12Gb SAS enables more SSDs, including 2.5" form factor multi-TByte capacity devices, to be used to boost performance and capacity in a cost as well as energy effective way.

    server storage I/O nvm ssd options

    In addition to Rob Peglar, we will also be joined by Doug Rollins of Micron (@GreyHairStorage) who will share some technical speeds, feeds, slots and watts information about Micron 12Gb SAS SSDs that can scale into the TBs in capacity per device.

    Here’s the synopsis from the Micron information page for this webinar.

    Don’t let old, slow SAS HDDs drag down your data center

    Modernize it by upgrading your storage from SAS HDDs to SAS SSDs. It’s an easy upgrade that provides a significant boost in performance, longer lasting endurance and nearly 4X the capacity. Flash storage changes how you do business and keeps you competitive.

    We invite you to join Rob Peglar, Greg Schulz, along with Doug Rollins, from Micron’s technical marketing team to learn:

    • Simple solutions to solving the challenges with today’s ever-growing data demands
    • Why SAS—how it continues to fuel the data center
    • HDDs versus SSDs—before and after stories from your peers, including upfront cost savings

    We will also have a live Q&A session so you can talk with the experts. Please register today! If you’re unable to attend the live webinar, we encourage you to register anyway to receive a link to the recorded session, as well as a copy of the presentation.

    Where To Learn More

    What This All Means

    Remember, everything is not the same in the data center or with data infrastructures that support different applications, just as there are various NVM SSD options as well as interfaces.

    Join us for this webinar; you can view more information here, as well as register for the event.

    Ok, nuff said, for now…

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    2016 Going Dutch Cloud Virtualization Server Storage I/O Seminars

    2016 Going Dutch Cloud Virtualization Server Storage I/O Seminars

    server storage I/O trends

    In June 2016 Brouwer Storage Consultancy is organizing its yearly spring seminar workshops in Nijkerk, Holland (south of Amsterdam, near Utrecht and Amersfoort), with myself among others presenting.

    Brouwer Consultancy

    Cloud Virtualization Server Storage I/O Seminars

    For this series of seminar workshops, there are four sessions, two being presented by myself, and two others in conjunction with Reduxio as well as Fujitsu & SJ Solutions.

    Brouwer and Server StorageIO Seminar Sessions

    Agenda, How To Register and Where To Learn More

    The vendor sponsored sessions will consist of about 50% independent content presented by Gert Brouwer and myself, with the balance presented by the event sponsors as well as their partners. All presentations and associated content including handouts will be in English.

    There will be four seminar workshop sessions; two of those are paid sessions dedicated to Greg Schulz, and the other two are free (sponsored) sessions where 50% of the content is sponsored (Reduxio, Fujitsu & SJ Solutions) and the other 50% is independent (Greg Schulz & Gert Brouwer).

    Thursday June 9th – Server StorageIO Trends and Updates

    Server Storage I/O Fundamental Trends V2.016 and Updates. What’s New, What’s the buzz, what you need to know about. From Speeds and Feeds, Slots and Watts to Who’s doing what. Event Location: Golden Tulip Ampt van Nijkerk Hotel, Berencamperweg 4, 3861MC, Nijkerk. Learn more here (PDF abstract and topics to be covered).

    Friday June 10th – Converged Day

    Converged Day – Moving beyond Hyper-Converged Hype and Server Storage I/O Decision Making Strategies. Event Location: Golden Tulip Ampt van Nijkerk Hotel, Berencamperweg 4, 3861MC, Nijkerk. Learn more here (PDF abstract and topics to be covered).

    Brouwer and Server StorageIO Seminar Sessions De Roode Schuur

    Tuesday June 14th – Round Table Vendor Session with Reduxio

    Symposium Workshop – Round Table Vendor Session with Reduxio – Are some solutions really 'a paradigm shift' or 'new and revolutionary' as they claim to be, or is it just more of the same (e.g. evolutionary)? – Presentations and discussions led by Greg Schulz (StorageIO), Reduxio and Brouwer Storage Consultancy. (Free, sponsored session, access for end-users only). Event Location: Hotel & Gasterij De Roode Schuur, Oude Barneveldseweg 98, 3862PS Nijkerk. Learn more here (PDF abstract and topics to be covered).

    Wednesday June 15th – Software Defined Data Center Symposium Workshop

    Software Defined Data Center Symposium Workshop – Round Table Vendor Session with Fujitsu & SJ Solutions
    With subjects like OpenStack, Ceph, distributed object storage, big data, Hyper-Converged Infrastructure (HCI), Converged Infrastructure (CI), Software Defined Storage (SDS) and Networking (SDN and NFV), this round table format workshop seminar explores these and other related topics, including what to use when, where, why and how. Presentations by Greg Schulz (StorageIO), SJ Solutions & Fujitsu and Brouwer Storage Consultancy. Event Location: Hotel & Gasterij De Roode Schuur, Oude Barneveldseweg 98, 3862PS Nijkerk. Learn more here (PDF abstract and topics to be covered).

    For more information, abstracts/agenda, registration and the specific locations for all the above events click here.

    Brouwer and Server StorageIO Sessions Ampt van Nijkerk

    What This All Means

    There are a lot of things occurring in the IT industry, from physical to software defined clouds, containers and virtualization, and nonvolatile memory (NVM) including flash SSD among others. This series of interactive educational workshop seminars converges on Nijkerk, Holland, combining content and discussions spanning strategy, planning and decision making, to what’s new (and old) that can be used in new ways, as well as trends, speeds and feeds along with practicality for your environment.

    Brouwer Consultancy

    I look forward to seeing you in Nijkerk and Europe during June 2016. In the meantime, contact Brouwer Storage Consultancy for more information on the above sessions as well as to arrange private discussions or meetings.

    Ok, nuff said, for now…

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    Which Enterprise HDD for Content Applications Different File Size Impact

    Which HDD for Content Applications Different File Size Impact

    Different File Size Impact server storage I/O trends

    Updated 1/23/2018

    Which enterprise HDD to use with a content server platform: different file size impact.

    Insight for effective server storage I/O decision making
    Server StorageIO Lab Review

    Which enterprise HDD to use for content servers

    This is the fifth in a multi-part series (read part four here) based on a white paper hands-on lab report I did compliments of Servers Direct and Seagate that you can read in PDF form here. The focus is looking at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drives (HDDs). In this post the focus is on large and small file I/O processing.

    File Performance Activity

    Tip: content solutions use files in various ways. Use the following to gain perspective on how various HDDs handle workloads similar to your specific needs.

    Two separate file processing workloads were run (note 12), one with a relatively small number of large files, and another with a large number of small files. For the large file processing (table-3), 5 GByte sized files were created and then accessed via 128 Kbyte (128KB) sized I/Os over a 10 hour period with 90% reads using 64 threads (workers). The large file workload simulates what might be seen with higher definition video, image or other content streaming.

    (Note 12) File processing workloads were run using Vdbench 5.04 and file anchors with sample script configuration below. Instead of vdbench you could also use other tools such as sysbench or fio among others.

    VdbenchFSBigTest.txt
    # Sample script for big files testing
    fsd=fsd1,anchor=H:,depth=1,width=5,files=20,size=5G
    fwd=fwd1,fsd=fsd1,rdpct=90,xfersize=128k,fileselect=random,fileio=random,threads=64
    rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=10h,interval=30

    vdbench -f VdbenchFSBigTest.txt -m 16 -o Results_FSbig_H_060615

    VdbenchFSSmallTest.txt
    # Sample script for small files testing
    fsd=fsd1,anchor=H:,depth=1,width=64,files=25600,size=16k
    fwd=fwd1,fsd=fsd1,rdpct=90,xfersize=1k,fileselect=random,fileio=random,threads=64
    rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=10h,interval=30

    vdbench -f VdbenchFSSmallTest.txt -m 16 -o Results_FSsmall_H_060615
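
    For those not familiar with Vdbench file system workloads, here is my annotated reading of the parameters used in the two scripts above (a sketch of my interpretation; check the Vdbench documentation for your version):

    # fsd = file system definition: where and how many files to create
    #   anchor=H:      directory (anchor) where the file structure is created
    #   depth, width   directory tree depth and number of directories per level
    #   files          number of files per lowest level directory; size = size of each file
    # fwd = file system workload definition: how the files are accessed
    #   rdpct=90       90% reads, 10% writes; xfersize = I/O transfer size
    #   fileselect/fileio=random   pick files and offsets at random; threads = concurrent workers
    # rd = run definition: fwdrate=max runs as fast as possible, format=yes creates the files first,
    #   elapsed = run time, interval = reporting interval in seconds
    fsd=fsd1,anchor=H:,depth=1,width=5,files=20,size=5G
    fwd=fwd1,fsd=fsd1,rdpct=90,xfersize=128k,fileselect=random,fileio=random,threads=64
    rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=10h,interval=30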

    The 10% writes are intended to reflect some update activity for new content or other changes to content. Note that 128 Kbytes per second translates to roughly 1 Mbps (128KB x 8 bits), on the order of a single compressed streaming video feed; higher definition and 4K video (not optimized) would require higher rates as well as resulting in larger file sizes. Table-3 shows the performance during the large file access period, including average read/write rates and response time, bandwidth (MBps), and average open and close rates with response time.

    Metric                       ENT 15K R1   ENT 10K R1   ENT CAP R1   ENT 10K R10
    Avg. File Read Rate          580.7        455.4        285.5        690.9
    Avg. Read Resp. Time (ms)    107.9        135.5        221.9        87.21
    Avg. File Write Rate         64.5         50.6         31.8         76.8
    Avg. Write Resp. Time (ms)   19.7         44.6         19.0         48.6
    Avg. CPU % Total             52.2         34.0         43.9         35.0
    Avg. CPU % System            35.5         22.7         28.3         21.8
    Avg. MBps Read               72.6         56.9         37.7         86.4
    Avg. MBps Write              8.1          6.3          4.0          9.6

    Table-3 Performance summary for large file access operations (90% read)

    Table-3 shows that for the two-drive RAID 1 configurations the Enterprise 15K is the fastest; however, a RAID 10 using four 10K HDDs with enhanced cache features provides a good price, performance and space capacity option. Software RAID was used in this workload test.

    Figure-4 shows the relative performance of various HDD options handling large files. Keep in mind that for the response time line lower is better, while for the activity rate higher is better.

    large file processing
    Figure-4 Large file processing 90% read, 10% write rate and response time

    In figure-4 you can see the performance in terms of response time (reads are the larger dashed line, writes the smaller dotted line) along with the number of file read operations per second (reads are the solid blue column bars, writes the green column bars). Reminder that lower response times and higher activity rates are better. Performance declines moving from left to right, from 15K to 10K Enterprise Performance with enhanced cache feature to Enterprise Capacity (7.2K), all of which were hardware RAID 1. Also shown is a hardware RAID 10 (four x 10K HDDs).

    Results in figure-4 above and table-4 below show how various drives can be configured to balance their performance, capacity and costs to meet different needs. Table-4 below shows an analysis looking at average file reads per second (RPS) performance vs. HDD costs, usable capacity and protection level.

    Table-4 is an example of looking at multiple metrics to make informed decisions as to which HDD would be best suited to your specific needs. For example RAID 10 using four 10K drives provides good performance and protection along with large usable space, however that also comes at a budget cost (e.g. price).
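
    As a rough worked example of how the table-4 metrics relate, using the ENT 15K R1 row (the roughly $595 per drive and 600GB per drive figures are my back-of-the-envelope inferences from the table, not quoted prices):

    Multi-drive cost per RPS  = $1,190 (two drives) / 580.7 RPS = about $2.05
    Single drive cost per RPS = $595 (one drive) / 580.7 RPS    = about $1.02
    Single drive cost per GB  = $595 / roughly 600GB            = about $0.99, which with RAID 1 mirroring is also the usable (protected) cost per GB, hence the 100% protection overhead shown.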

    Metric                                  ENT 15K R1   ENT 10K R1   ENT CAP R1   ENT 10K R10
    Avg. File Reads Per Sec. (RPS)          580.7        455.5        285.5        690.9
    Single Drive Cost per RPS               $1.02        $1.92        $1.40        $1.27
    Multi-Drive Cost per RPS                $2.05        $3.84        $2.80        $5.07
    Single Drive Cost per GB Capacity       $0.99        $0.49        $0.20        $0.49
    Cost per GB Usable (Protected) Cap.     $0.99        $0.49        $0.20        $0.97
    Drive Cost (Multiple Drives)            $1,190       $1,750       $798         $3,500
    Protection Overhead (RAID Space Cap.)   100%         100%         100%         100%
    Cost per Usable GB per RPS              $2.1         $3.8         $2.8         $5.1
    Avg. File Read Resp. (ms)               107.9        135.5        271.9        87.2

    Table-4 Performance, capacity and cost analysis for big file processing

    Small File Size Processing

    To simulate a general file sharing environment, or content streaming with many smaller objects, 1,638,400 16KB sized files were created on each device being tested (table-5). These files were spread across 64 directories (25,600 files each) and accessed via 64 threads (workers) doing 90% reads with a 1KB I/O size over a ten hour time frame. Like the large file test, and database activity, all workloads were run at the same time (e.g. test devices were concurrently busy).

    Metric                       ENT 15K R1   ENT 10K R1   ENT CAP R1   ENT 10K R10
    Avg. File Read Rate          3,415.7      2,203.4      1,063.1      4,590.5
    Avg. Read Resp. Time (ms)    1.5          2.9          12.7         0.7
    Avg. File Write Rate         379.4        244.7        118.1        509.9
    Avg. Write Resp. Time (ms)   132.2        172.8        303.3        101.7
    Avg. CPU % Total             24.9         24.7         24.6         27.7
    Avg. CPU % System            19.5         19.3         19.2         22.1
    Avg. MBps Read               3.3          2.2          1.1          4.5
    Avg. MBps Write              0.4          0.2          0.1          0.5

    Table-5 Performance summary for small sized (16KB) file access operations (90% read)

    Figure-5 shows the relative performance of various HDD options handling small files. Keep in mind that for the response time line lower is better, while for the activity rate higher is better.

    small file processing
    Figure-5 Small file processing 90% read, 10% write rate and response time

    In figure-5 you can see the performance in terms of response time (reads larger dashed line, writes smaller dotted line) along with number of file read operations per second (reads solid blue column bar, writes green column bar). Reminder that lower response time, and higher activity rates are better. Performance declines moving from left to right, from 15K to 10K Enterprise Performance with enhanced cache feature to Enterprise Capacity (7.2K RPM), all of which were hardware RAID 1. Also shown is a hardware RAID 10 (four x 10K RPM HDD’s) that has higher performance and capacity along with costs (table-5).

    Results in figure-5 and table-5 above show how various drives can be configured to balance their performance, capacity and costs to meet different needs. Table-6 below shows an analysis looking at average file reads per second (RPS) performance vs. HDD costs, usable capacity and protection level.

    Table-6 is an example of looking at multiple metrics to make informed decisions as to which HDD would be best suited to your specific needs. For example RAID 10 using four 10K drives provides good performance and protection along with large usable space, however that also comes at a budget cost (e.g. price).

    Metric                                  ENT 15K R1   ENT 10K R1   ENT CAP R1   ENT 10K R10
    Avg. File Reads Per Sec. (RPS)          3,415.7      2,203.4      1,063.1      4,590.5
    Single Drive Cost per RPS               $0.17        $0.40        $0.38        $0.19
    Multi-Drive Cost per RPS                $0.35        $0.79        $0.75        $0.76
    Single Drive Cost per GB Capacity       $0.99        $0.49        $0.20        $0.49
    Cost per GB Usable (Protected) Cap.     $0.99        $0.49        $0.20        $0.97
    Drive Cost (Multiple Drives)            $1,190       $1,750       $798         $3,500
    Protection Overhead (RAID Space Cap.)   100%         100%         100%         100%
    Cost per Usable GB per RPS              $0.35        $0.79        $0.75        $0.76
    Avg. File Read Resp. (ms)               1.51         2.90         12.70        0.70

    Table-6 Performance, capacity and cost analysis for small file processing

    Looking at the small file processing analysis in tables 5 and 6 shows that the 15K HDDs, on an apples-to-apples basis (e.g. same RAID level and number of drives), provide the best performance. However, when also factoring in space capacity, performance, different RAID levels or other protection schemes along with cost, there are other considerations. On the other hand, the Enterprise Capacity 2TB HDDs have a low cost per capacity, however they do not have the performance of the other options, assuming your applications need more performance.

    Thus the right HDD for one application may not be the best one for a different scenario, and multiple metrics, as shown in the tables above, need to be included in an informed storage decision-making process.

    Where To Learn More

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    File processing is a common content application task, with some files being small, others large or mixed, and with both reads and writes. Even if your content environment is using object storage, chances are that unless it is a new application or a gateway exists, you may be using NAS or file-based access. Thus, if your applications are doing file-based processing, it is important to either run your own applications or use tools that can simulate as closely as possible what your environment is doing.

    Continue reading part six in this multi-part series here where the focus is around general I/O including 8KB and 128KB sized IOPs along with associated metrics.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Which Enterprise HDD for Content Applications General I/O Performance

    Which HDD for Content Applications general I/O Performance

    hdd general i/o performance server storage I/O trends

    Updated 1/23/2018

    Which enterprise HDD to use with a content server platform: general I/O performance.
    Insight for effective server storage I/O decision making
    Server StorageIO Lab Review

    Which enterprise HDD to use for content servers

    This is the sixth in a multi-part series (read part five here) based on a white paper hands-on lab report I did compliments of Servers Direct and Seagate that you can read in PDF form here. The focus is looking at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drives (HDDs). In this post the focus is on general I/O performance including 8KB and 128KB I/O sizes.

    General I/O Performance

    In addition to running database and file (large and small) processing workloads, Vdbench was also used to collect basic small (8KB) and large (128KB) sized I/O operations. This consisted of random and sequential reads as well as writes with the results shown below. In addition to using vdbench, other tools that could be used include Microsoft Diskspd, fio, iorate and iometer among many others.

    These workloads used Vdbench configured (13) to do direct I/O to a Windows file system mounted device using as much of the available disk space as possible. All workloads used 16 threads and were run concurrently similar to database and file processing tests.

    (Note 13) Sample vdbench configuration for general I/O, note different settings were used for various tests
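
    The note 13 configuration is not reproduced in this post; the following is a minimal hypothetical sketch of what such a Vdbench raw (block) I/O parameter file could look like (device name, size, transfer size, read and seek percentages are placeholders to adjust for your environment; verify parameter names against the Vdbench documentation for your version):

    # Hypothetical Vdbench raw I/O example: 8KB, 75% read, 100% random (seekpct=100), 16 threads
    sd=sd1,lun=\\.\N:\iobw.tst,openflags=directio,size=300G
    wd=wd1,sd=sd1,xfersize=8k,rdpct=75,seekpct=100
    rd=rd1,wd=wd1,iorate=max,elapsed=300,interval=30,forthreads=(16)

    vdbench -f general_io_example.txt -o Results_8KRan75Read_example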

    Table-7 shows workload results for 8KB random I/Os with 75% read and 25% read mixes (e.g. 25% and 75% writes), including IOPs, bandwidth and response time.

     

    Metric             ENT 15K RAID1        ENT 10K RAID1        ENT CAP RAID1        ENT 10K R10 (4 Drives)   ECAP SW RAID (5 Drives)
                       75% Read   25% Read  75% Read   25% Read  75% Read   25% Read  75% Read   25% Read      75% Read   25% Read
    I/O Rate (IOPs)    597.11     559.26    514        475       285        293       979        984           491        644
    MB/sec             4.7        4.4       4.0        3.7       2.2        2.3       7.7        7.7           3.8        5.0
    Resp. Time (ms)    25.9       27.6      30.2       32.7      55.5       53.7      16.3       16.3          32.6       24.8

    Table-7 8KB sized random IOPs workload results

    Figure-6 shows small (8KB) random I/O (75% read and 25% read) across different HDD configurations. Performance including activity rates (e.g. IOPs), bandwidth and response time for mixed reads / writes are shown. Note how response time increases with the Enterprise Capacity configurations vs. other performance optimized drives.

    general 8K random IO
    Figure-6 8KB random reads and write showing IOP activity, bandwidth and response time

    Table-8 below shows workload results for 8KB sized I/Os, 100% sequential, with 75% read and 25% read mixes, including IOPs, MB/sec and response time.

    Metric             ENT 15K RAID1        ENT 10K RAID1        ENT CAP RAID1        ENT 10K R10 (4 Drives)   ECAP SW RAID (5 Drives)
                       75% Read   25% Read  75% Read   25% Read  75% Read   25% Read  75% Read   25% Read      75% Read   25% Read
    I/O Rate (IOPs)    3,778      3,414     3,761      3,986     3,379      1,274     11,840     8,368         2,891      1,146
    MB/sec             29.5       26.7      29.4       31.1      26.4       10.0      92.5       65.4          22.6       9.0
    Resp. Time (ms)    2.2        3.1       2.3        2.4       2.7        10.9      1.3        1.9           5.5        14.0

    Table-8 8KB sized sequential workload results

    Figure-7 shows small 8KB sequential mixed reads and writes (75% read and 25% read). While the Enterprise Capacity 2TB HDD has a large amount of space capacity, its performance in a RAID 1 vs. other similarly configured drives is slower.

    8KB Sequential
    Figure-7 8KB sequential 75% and 25% read mixes showing bandwidth activity

    Table-9 shows workload results for 100% sequential, 100% read and 100% write 128KB sized I/Os including IOPs, bandwidth and response time.

    Metric             ENT 15K RAID1    ENT 10K RAID1    ENT CAP RAID1    ENT 10K R10 (4 Drives)   ECAP SW RAID (5 Drives)
                       Read    Write    Read    Write    Read    Write    Read     Write           Read    Write
    I/O Rate (IOPs)    1,798   1,771    1,716   1,688    921     912      3,552    3,486           780     721
    MB/sec             224.7   221.3    214.5   210.9    115.2   114.0    444.0    435.8           97.4    90.1
    Resp. Time (ms)    8.9     9.0      9.3     9.5      17.4    17.5     4.5      4.6             19.3    20.2

    Table-9 128KB sized sequential workload results

    Figure-8 shows sequential or streaming operations of larger I/O (100% read and 100% write) requests sizes (128KB) that would be found with large content applications. Figure-8 highlights the relationship between lower response time and increased IOPs as well as bandwidth.

    128K Sequential
    Figure-8 128KB sequential reads and write showing IOP activity, bandwidth and response time

    Where To Learn More

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    Some content applications are doing small random I/Os for databases, key value stores or repositories as well as metadata processing, while others are doing large sequential I/O. 128KB sized I/O may be large for your environment; on the other hand, with an increasing number of applications, file systems and software defined storage management tools among others, 1 to 10MB or even larger I/O sizes are becoming common. The key is selecting I/O sizes, read/write mixes and random vs. sequential patterns, along with I/O or queue depths, that align with your environment, as shown in the example below.
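
    For example, here is a hypothetical fio command along the lines of those shown earlier in this series, adjusted for a larger 1MB 100% sequential read stream (file name, size, queue depth and runtime are placeholders for your environment):

    fio --filename=N\:\iobw.tst --filesize=300000M --direct=1 --rw=read --bs=1M --ioengine=windowsaio --iodepth=8 --numjobs=1 --time_based --ramp_time=30 --runtime=300 --group_reporting --name=seq1Mread --output=FIO_1MSeq100Read_example.txt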

    Continue reading part seven, the final post in this multi-part series, here, where the focus is on how HDDs continue to evolve, including performance beyond traditional RPM based expectations, along with a wrap up.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Part V – NVMe overview primer (Where to learn more, what this all means)

    This is the fifth in a five-part mini-series providing a NVMe primer overview.

    View Part I, Part II, Part III, Part IV, Part V as well as companion posts and more NVMe primer material at www.thenvmeplace.com.

    There are many different facets of NVMe, including the protocol itself, which can be deployed on PCIe (AiC, U.2/8639 drives, M.2) for local direct attached, dedicated or shared use on the front-end or back-end of storage systems. NVMe direct attach is also found in servers and laptops using M.2 NGFF mini cards (e.g. “gum sticks”). In addition to direct attached, dedicated and shared, NVMe is also deployed on fabrics including over Fibre Channel (FC-NVMe) as well as NVMe over Fabrics (NVMeoF) leveraging RDMA based networks (e.g. iWARP, RoCE among others).

    The storage I/O capabilities of flash can now be fed across PCIe faster to enable modern multi-core processors to complete more useful work in less time, resulting in greater application productivity. NVMe has been designed from the ground up with more and deeper queues, supporting a larger number of commands in those queues. This in turn enables the SSD to better optimize command execution for much higher concurrent IOPS. NVMe will coexist along with SAS, SATA and other server storage I/O technologies for some time to come. But NVMe will be at the top-tier of storage as it takes full advantage of the inherent speed and low latency of flash while complementing the potential of multi-core processors that can support the latest applications.

    With NVMe, the capabilities of underlying NVM and storage memories are further realized. Devices used in this series include a PCIe x4 NVMe AiC SSD, a 12 Gbps SAS SSD and a 6 Gbps SATA SSD. These and other improvements with NVMe enable concurrency while reducing latency to remove server storage I/O traffic congestion. The result is that applications demanding more concurrent I/O activity along with lower latency will gravitate toward NVMe for accessing fast storage.

    Like the robust PCIe physical server storage I/O interface it leverages, NVMe provides both flexibility and compatibility. It removes complexity, overhead and latency while allowing far more concurrent I/O work to be accomplished. Those on the cutting edge will embrace NVMe rapidly. Others may prefer a phased approach.

    Some environments will initially focus on NVMe for local server storage I/O performance and capacity available today. Other environments will phase in emerging external NVMe flash-based shared storage systems over time.

    Planning is an essential ingredient for any enterprise. Because NVMe spans servers, storage, I/O hardware and software, those intending to adopt NVMe need to take into account all ramifications. Decisions made today will have a big impact on future data and information infrastructures.

    Key questions should be: how much speed do your applications need now, and how do growth plans affect those requirements? How and where can you maximize your financial return on investment (ROI) when deploying NVMe, and how will that success be measured?

    Several vendors are working on, or have already introduced, NVMe related technologies or initiatives. Keep an eye on, among others, AWS, Broadcom (Avago, Brocade), Cisco (Servers), Dell EMC, Excelero, HPE, Intel (Servers, Drives and Cards), Lenovo, Micron, Microsoft (Azure, Drivers, Operating Systems, Storage Spaces), Mellanox, NetApp, OCZ, Oracle, PMC, Samsung, Seagate, Supermicro, VMware and Western Digital (acquisition of SanDisk and HGST).

    Where To Learn More

    View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What this all means

    NVMe is in your future if not already there, so if NVMe is the answer, what are the questions?

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Where, How to use NVMe overview primer

    server storage I/O trends
    Updated 1/12/2018

    This is the fourth in a five-part miniseries providing a primer and overview of NVMe. View companion posts and more material at www.thenvmeplace.com.

    Where and how to use NVMe

    As mentioned and shown in the second post of this series, initially NVMe is being deployed inside servers as “back-end,” fast, low latency storage using PCIe Add-In Cards (AiC) and flash drives. Similar to SAS NVM SSDs and HDDs that support dual-paths, NVMe has a primary path and an alternate path. If one path fails, traffic keeps flowing without causing slowdowns. This feature is an advantage to those already familiar with the dual-path capabilities of SAS, enabling them to design and configure resilient solutions.

    NVMe devices, including NVM flash AiC SSDs, will also find their way into storage systems and appliances as back-end storage, co-existing with SAS or SATA devices. Another emerging deployment configuration scenario is shared NVMe direct attached storage (DAS) with multiple server access via PCIe external storage with dual paths for resiliency.

    Even though NVMe is a new protocol, it leverages existing skill sets. Anyone familiar with SAS/SCSI and AHCI/SATA storage devices will need little or no training to deploy and manage NVMe. Since NVMe-enabled storage appears to a host server or storage appliance as a LUN or volume, existing Windows, Linux and other OS or hypervisor tools can be used. On Windows, for example, other than going to the device manager to see what the device is and which controller it is attached to, it is no different from installing and using any other storage device. The experience on Linux is similar, particularly when using in-the-box drivers that ship with the OS. One minor Linux difference of note is that instead of seeing a /dev/sda device as an example, you might see a device name like /dev/nvme0n1 or /dev/nvme0n1p1 (with a partition).
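
    For example, on a Linux system you could verify that an NVMe device is visible using standard tools; a quick sketch (the nvme list command assumes the optional nvme-cli package is installed, and device names will vary):

    # list block devices; NVMe namespaces show up as nvme0n1, nvme0n1p1, etc.
    lsblk
    # with the nvme-cli package installed, list NVMe controllers, namespaces and models
    sudo nvme list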

    Keep in mind that NVMe, like SAS, can be used for “back-end” access from servers (or storage systems) to a storage device or system, for example JBOD SSD drives (e.g. 8639), PCIe AiC or M.2 devices. NVMe, also like SAS, can be used as a “front-end” on storage systems or appliances in place of, or in addition to, other access such as GbE based iSCSI, Fibre Channel, FCoE, InfiniBand, NAS or Object.

    What this means is that NVMe can be implemented in a storage system or appliance on both the “front-end” (e.g. server or host side) as well as on the “back-end” (e.g. device or drive side), much like SAS. Another similarity to SAS is that NVMe supports dual-pathing of devices, permitting system architects to design resiliency into their solutions. When the primary path fails, access to the storage device can be maintained with failover so that fast I/O operations can continue, as with SAS.

    NVM connectivity options including NVMe
    Various NVM NAND flash SSD devices and their connectivity including NVMe, M2, SATA and 12 Gbps SAS are shown in figure 6.

    Various NVM SSD interfaces including NVMe and M2
    Figure 6 Various NVM flash SSDs (Via StorageIO Labs)

    On the left in figure 6 is a NAND flash NVMe PCIe AiC, top center is a USB thumb drive that has been opened up showing a NAND die (chip), middle center is an mSATA card, bottom center is an M.2 card, next on the right is a 2.5” 6 Gbps SATA device, and far right is a 12 Gbps SAS device. Note that an M.2 card can be either a SATA or NVMe device, depending on its internal controller, which determines which host or server protocol device driver to use.

    The role of PCIe has evolved over the years, as have its performance and packaging form factors. In addition to add-in card (AiC) slots, PCIe form factors also include the M.2 small form factor (aka Next Generation Form Factor or NGFF) that replaces legacy mini-PCIe cards and, like other devices, can be either an NVMe or a SATA device.

    The 8639 (or possibly 8637) connector (figure 7) can be used to support SATA as well as NVMe depending on the drive device installed and host server driver support. There are various M.2 NGFF form factors including 2230, 2242, 2260 and 2280. There are also M.2 to regular physical SATA converter or adapter cards available, enabling M.2 devices to attach to legacy SAS/SATA RAID adapters or HBAs.

    NVMe 8637 and 8639 interface backplane slotsNVMe 8637 and 8639 interface
    Figure 7 PCIe NVMe 8639 Drive (Via StorageIO Labs)

    On the left of figure 7 is a view towards the backplane of a storage enclosure in a server that supports SAS, SATA, and NVMe (e.g. 8639). On the right of figure 7 is the connector end of an 8639 NVM SSD showing additional pin connectors compared to a SAS or SATA device. Those extra pins give PCIe x4 connectivity to the NVMe devices. The 8639 drive connectors enable a device such as an NVM, or NAND flash, SSD to share a common physical storage enclosure with SAS and SATA devices, including optional dual-pathing.

    Where To Learn More

    View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    Be careful judging a device or component by its physical packaging or interface connection as to what it is or is not. In figure 7 the device has SAS/SATA along with PCIe physical connections, yet it’s what’s inside (e.g. its controller) that determines if it is a SAS, SATA or NVMe enabled device. This also applies to HDDs and PCIe AiC devices, as well as I/O networking cards and adapters that may use common physical connectors, yet implement different protocols. For example, the SFF-8643 HD-Mini SAS internal connector is used for 12 Gbps SAS attachment as well as PCIe to devices such as 8639.

    Depending on the type of device inserted, access can be via NVMe over PCIe x4, SAS (12 Gbps or 6Gb) or SATA. 8639 connector based enclosures have a physical connection with their backplanes to the individual drive connectors, as well as to PCIe, SAS, and SATA cards or connectors on the server motherboard or via PCIe riser slots.

    While PCIe devices including AiC slot based, M.2 or 8639 can have common physical interfaces and lower level signaling, it’s the protocols, controllers, and drivers that determine how they get software defined and used. Keep in mind that it’s not just the physical connector or interface that determines what a device is or how it is used, it’s also the protocol, command set, controller and device drivers.

    Continue reading about NVMe with Part V (Where to learn more, what this all means) in this five-part series, or jump to Part I, Part II or Part III.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    NVMe Need for Performance Speed Performance

    server storage I/O trends
    Updated 1/12/2018

    This is the third in a five-part mini-series providing a primer and overview of NVMe. View companion posts and more material at www.thenvmeplace.com.

    How fast is NVMe?

    It depends! Generally speaking NVMe is fast!

    However fast interfaces and protocols also need fast storage devices, adapters, drivers, servers, operating systems and hypervisors as well as applications that drive or benefit from the increased speed.

    A server storage I/O example is shown in figure 5, where a 6 Gbps SATA NVM flash SSD (left) is compared with an NVMe 8639 (x4) drive (right), both directly attached to a server. The workload is 8 Kbyte sized random writes with 128 threads (workers), showing results for IOPs (solid bar) along with response time (dotted line). Not surprisingly the NVMe device has a lower response time and a higher number of IOPs. However, also note how the amount of CPU time used per IOP is lower on the right with the NVMe drive.

    NVMe storage I/O performance
    Figure 5 6 Gbps SATA NVM flash SSD vs. NVMe flash SSD

    While many people are aware of or learning about the IOP and bandwidth improvements as well as the decrease in latency with NVMe, something that gets overlooked is how much less CPU is used. If a server is spending time in wait modes, that can result in lost productivity; by finding and removing the barriers, more work can be done on a given server, perhaps even delaying a server upgrade.

    In figure 5 notice the lower amount of CPU used per work activity being done (e.g. I/O or IOP) which translates to more effective resource use of your server. What that means is either doing more work with what you have, or potentially delaying a CPU server upgrade, or, using those extra CPU cycles to power software defined storage management stacks including erasure coding or advanced parity RAID, replication and other functions.

    Table 1 shows relative server I/O performance of some NVM flash SSD devices across various workloads. As with any performance comparison, take these, and the following, with a grain of salt, as your speed will vary.

    NAND flash SSD        8KB I/O size                                                1MB I/O size
                          100% Seq    100% Seq    100% Ran    100% Ran                100% Seq    100% Seq    100% Ran    100% Ran
                          Read        Write       Read        Write                   Read        Write       Read        Write
    NVMe PCIe AiC
      IOPs                41829.19    33349.36    112353.6    28520.82                1437.26     889.36      1336.94     496.74
      Bandwidth (MBps)    326.79      260.54      877.76      222.82                  1437.26     889.36      1336.94     496.74
      Resp. (ms)          3.23        3.90        1.30        4.56                    178.11      287.83      191.27      515.17
      CPU / IOP           0.001571    0.002003    0.000689    0.002342                0.007793    0.011244    0.009798    0.015098
    12Gb SAS
      IOPs                34792.91    34863.42    29373.5     27069.56                427.19      439.42      416.68      385.9
      Bandwidth (MBps)    271.82      272.37      229.48      211.48                  427.19      429.42      416.68      385.9
      Resp. (ms)          3.76        3.77        4.56        5.71                    599.26      582.66      614.22      663.21
      CPU / IOP           0.001857    0.00189     0.002267    0.00229                 0.011236    0.011834    0.01416     0.015548
    6Gb SATA
      IOPs                33861.29    9228.49     28677.12    6974.32                 363.25      65.58       356.06      55.86
      Bandwidth (MBps)    264.54      72.1        224.04      54.49                   363.25      65.58       356.06      55.86
      Resp. (ms)          4.05        26.34       4.67        35.65                   704.70      3838.59     718.81      4535.63
      CPU / IOP           0.001899    0.002546    0.002298    0.003269                0.012113    0.032022    0.015166    0.046545

    Table 1 Relative performance of various protocols and interfaces

    The workload results in table 1 were generated using a vdbench script running on a Windows 2012 R2 based server and are intended to be a relative indicator of different protocols and interfaces; your performance mileage will vary. The results shown above compare the number of IOPs (activity rate) for reads, writes, random and sequential across small 8KB and large 1MB sized I/Os.

    Also shown in table 1 are bandwidth or throughput (e.g. amount of data moved), response time and the amount of CPU used per IOP. Note in table 1 how NVMe can do higher IOPs with a lower CPU per IOP, or, using a similar amount of CPU, do more work at a lower latency. SSDs have been used for decades to help reduce CPU bottlenecks or defer server upgrades by removing I/O wait times and reducing CPU consumption (e.g. wait or lost time).
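
    For example, comparing the 8KB 100% random read column of table 1:

    NVMe PCIe AiC : 112,353.6 IOPs at 0.000689 CPU per IOP
    6Gb SATA      : 28,677.12 IOPs at 0.002298 CPU per IOP

    That is roughly 3.9 times the IOPs while using about 30 percent of the CPU per I/O.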

    Can NVMe solutions run faster than those shown above? Absolutely!

    Where To Learn More

    View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    Continue reading about NVMe with Part IV (Where and How to use NVMe) in this five-part series, or jump to Part I, Part II or Part V.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.