Solving Application Server Storage I/O Performance Bottlenecks Webinar

The best I/O is the one you do not have to do; the second best is the one with the least server I/O and storage overhead, and the least application performance bottleneck impact.

Fast applications need fast servers, storage, I/O networking hardware, and software. Merely throwing more hardware (such as cache) at application performance bottlenecks can help; however, it can also cost a lot of cash. On the other hand, a little fast memory and storage in the right place, combined with robust software performance acceleration, can deliver significant application productivity benefits. Fast hardware also needs fast software to help boost application and user productivity.

As application workload activity increases, implementing server software performance acceleration along with additional fast memory and storage, including flash, Storage Class Memories (SCM), other SSD and NVMe accessed devices, enables even more work to be done, boosting productivity while reducing cost.

Join me on June 27 at 1 PM Pacific Time (PT) when I host a free webinar (registration required) sponsored by DataCore and produced by Redmond Magazine/1105 Media as we discuss Solving Application Server Storage I/O Performance Bottlenecks including what you can do today.

I will be joined by guest presenters Augie Gonzalez, Director Technical Product Marketing and Tim Warden, Director Engineering Product Management both from DataCore. During the interactive webinar discussions, we invite you to participate with your questions, as we look at issues, challenges, various approaches, and what you can do today to boost different application performance and productivity.

This webinar is for those whose applications have the need for speed including database, VDI, SharePoint, Exchange, AI, ML and other I/O intensive workloads. Topics that we will be discussing in addition to your questions include:

  • Boosting application performance without breaking the bank
  • Improving application productivity and reducing user wait time
  • Gaining insight and awareness into bottlenecks and what to do
  • Unlocking value in your existing hardware and software licenses
  • What you can do today, literally right after or even during this webinar

Where to learn more

Learn more about server storage I/O performance and related topics via the following resources:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

The best I/O is the one you do not have to do; the second best is the one that has the least impact on your applications while boosting user productivity. There are many different approaches to addressing various server storage I/O performance bottlenecks across applications. Join me on June 27, 2018 at 1 PM PT for the free webinar Solving Application Server Storage I/O Performance Bottlenecks and learn what you can do today to boost your users' productivity.

 

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Broadcom aka Avago aka LSI announces SAS SATA NVMe Adapters with RAID

server storage I/O trends

In case you missed it, Broadcom, formerly known as Avago which bought the LSI adapter and RAID card business, announced that it is shipping new SAS, SATA and NVMe adapters.

While SAS and SATA are well established and continue to be deployed for both HDD and flash SSD, NVMe continues to evolve with a bright future. Likewise, while there is a focus on software-defined storage (SDS), software-defined data centers (SDDC) and software-defined data infrastructures (SDDI), along with advanced parity RAID including erasure codes, object storage and other technologies, there is still a need for adapter cards including traditional RAID.

Keep in mind that while probably not meeting the definition of some software-defined aficionados, the many different variations, permutations and derivatives of RAID, from mirroring and replication to basic parity to advanced erasure codes (some based on Reed-Solomon, aka RAID 2), rely on software. Granted, some of that software runs on regular primary server processors, while some is packaged in silicon via ASICs or FPGAs, Systems on Chip (SoC), RAID on Chip (RoC), as well as BIOS, firmware, drivers and management tools.
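To make that software point concrete, here is a minimal, hypothetical Python sketch (not any vendor's implementation) of single-parity protection in the RAID 5 style: XOR the data blocks to produce parity, then XOR the surviving blocks with the parity to rebuild a lost block. Real RAID stacks add striping, rotation, write caching and much more; this is only an illustration of the underlying math.

```python
# Minimal illustration of single-parity (RAID 5 style) protection in software.
# Hypothetical sketch only; real RAID stacks add striping, rotation, caching, etc.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Three equal-sized data blocks on three "drives"
d0 = b"hello world....."
d1 = b"storage i/o data"
d2 = b"raid is software"

parity = xor_blocks([d0, d1, d2])          # written to a fourth "drive"

# Simulate losing drive 1 and rebuilding it from the survivors plus parity
rebuilt_d1 = xor_blocks([d0, d2, parity])
assert rebuilt_d1 == d1
print("rebuilt:", rebuilt_d1)
```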

SAS, SATA and NVMe adapters

For some environments, cards such as those announced by Broadcom are used in passthru mode, effectively as adapters for attaching SAS, SATA and NVMe storage devices to servers. Those servers may be deployed as converged infrastructures (CI), hyper-converged infrastructures (HCI), Cluster or Cloud in Box (CiB) among other variations. To name names, you might find the above (now or in the not so distant future) in VMware vSAN or regular vSphere based environments, Microsoft Windows Server, Storage Spaces Direct (S2D) or Azure Stack, and OpenStack among other deployments (check your vendors' Hardware Compatibility Lists aka HCLs). In some cases these cards may be used as adapters in passthru mode, or using their RAID capabilities (support varies by software stack). Meanwhile in other environments, the more traditional RAID features are still used, spanning Windows to Linux among others.

Who Is Broadcom?

Some of you may know of Broadcom as having been around for many years with a focus on networking related technologies. However, some may not realize that Avago bought Broadcom and changed their name to Broadcom. Here is a history that includes more recent acquisitions such as Brocade, PLX and Emulex, as well as LSI. Some of you may recall Avago buying the LSI business (the SAS, SATA, PCIe HBA, RAID and components) that was not sold to NetApp as part of Engenio. Also recall that Avago sold the LSI flash SSD business unit to Seagate a couple of years ago as part of its streamlining. That is how we get to where we are today, with Broadcom, formerly known as Avago which bought the LSI adapter and RAID business, announcing new SAS, SATA and NVMe cards.

What Was Announced?

Broadcom has announced cards that are multi-protocol supporting Serial Attached SCSI (SAS), SATA/AHCI as well as NVM Express (NVMe) as basic adapters for attaching storage (HDD, SSD, storage systems) along with optional RAID as well as cache support. These cards can be used in application servers for traditional, as well as virtualized SDDC environments, as well as storage systems or appliances for software-defined storage among other uses. The basic functionality of these cards is to provide high performance (IOPs and other activity, as well as bandwidth) along with low latency combined with data protection as well as dense connectivity.

Specific features include:

  • Broadcom’s Tri-Mode SerDes Technology enables the operation of NVMe, SAS or SATA devices in a single drive bay, allowing for endless design flexibility.
  • Management software including LSI Storage Authority (LSA), StorCLI, HII (UEFI)
  • Optional CacheVault(R) flash cache protection
  • Physical dimension Low Profile 6.127” x 2.712”
  • Host bus type x8 lane PCI Express (PCIe) 3.1
  • Data transfer rates SAS-3 12 Gb/s; NVMe up to 8 GT/s PCIe Gen 3
  • Various OS and hypervisors host platform support
  • Warranty 3 yrs, free 5×8 phone support, advanced replacement option
  • RAID levels 0, 1, 5, 6, 10, 50, and 60

Note that some of the specific feature functionality may be available at a later date; check with your preferred vendor's HCL.

Specification | 9480 8i8e | 9440 8i | 9460 8i | 9460 16i
Internal Ports | 8 | | 8 | 16
Internal Connectors | 2 x Mini-SAS HD x4 SFF-8643 | 2 x Mini-SAS HD x4 SFF-8643 | 2 x Mini-SAS HD x4 SFF-8643 | 4 Mini-SAS HD x4 SFF-8643
External Ports | 8 | | |
External Connectors | 2 x Mini-SAS HD SFF8644 | | |
Cache Protection | CacheVault CVPM05 | | CacheVault CVPM05 | CacheVault CVPM05
Cache Memory | 2GB 2133 MHz DDR4 SDRAM | | 2GB 2133 MHz DDR4 SDRAM | 4GB 2133 MHz DDR4 SDRAM
Devices Supported | SAS/SATA: 255, NVMe: 4 x4, up to 24 x2 or x4* | SAS/SATA: 63, NVMe: 4 x4, up to 24 x2 or x4* | SAS/SATA: 255, NVMe: 4 x4, up to 24 x2 or x4* | SAS/SATA: 255, NVMe: 4 x4, up to 24 x2 or x4*
I/O Processor (SAS Controller) | SAS3516 dual-core RAID-on-Chip (ROC) | SAS3408 I/O controller (IOC) | SAS3508 dual-core RAID-on-Chip (ROC) | SAS3516 dual-core RAID-on-Chip (ROC)

In case you need a refresher on SFF cable types, click on the following two images which take you to Amazon.com where you can learn more, as well as order various cable options. PC Pit Stop has a good selection of cables (See other SFF types), connectors and other accessories that I have used, along with those from Amazon.com and others.

Available via Amazon.com sff 8644 8643 sas mini hd cable
Left: SFF 8644 Mini SAS HD (External), Right SFF-8643 Mini SAS HD (internal) Image via Amazon.com

Available via Amazon.com sff 8644 8642 sas mini hd cable
Left: SFF 8643 Mini SAS HD (Internal), Right SFF-8642 SATA with power (internal) Image via Amazon.com

Wait, Doesn't NVMe Use PCIe?

For those who are not familiar with NVMe, and in particular U.2 aka SFF-8639 based devices, physically they look (almost) the same as a SAS device connector. The slight variation is that if you look at a SAS drive, there is a small tab to prevent plugging it into a SATA port (recall you can plug SATA into SAS). For SAS drives that tab is blank; however, on the NVMe 8639 aka U.2 drives (below left) that tab has several connectors which are PCIe x4 (single or dual path).

What this means is that the PCIe x4 bus electrical signals are transferred via a connector to the backplane chassis, to the 8639 drive slot, and on to the drive. Those same 8639 drive slots can also have a SAS SATA connection using their traditional connectors, enabling a converged or hybrid drive slot so to speak. Learn more about NVMe here (If the Answer is NVMe, then what were and are the questions?) as well as at www.thenvmeplace.com.
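As a rough way to see the SAS/SATA vs. NVMe split on a given server, here is a small Python sketch that walks Linux sysfs. It assumes a Linux host where /sys/block and the per-device queue/rotational attribute exist (as on typical distributions), and it relies on device naming conventions rather than anything in the NVMe specification itself.

```python
# Rough sketch (assumes Linux with sysfs): list block devices and show which
# are NVMe (PCIe attached) versus SAS/SATA, and which report rotational media.
import os

SYS_BLOCK = "/sys/block"

for dev in sorted(os.listdir(SYS_BLOCK)):
    if dev.startswith(("loop", "ram", "dm-")):
        continue                                   # skip virtual devices
    transport = "NVMe (PCIe)" if dev.startswith("nvme") else "SAS/SATA/other"
    rotational_path = os.path.join(SYS_BLOCK, dev, "queue", "rotational")
    try:
        with open(rotational_path) as f:
            media = "HDD (rotational)" if f.read().strip() == "1" else "SSD/flash"
    except OSError:
        media = "unknown"
    print(f"{dev:12s} {transport:16s} {media}")
```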

NVMe U.2 8639 driveNVMe U.2 8639 sas sata nvme drive
Left NVMe U.2 drive showing PCIe x4 connectors, right, NVMe U.2 8639 connector

Who Is This For?

These cards are applicable for general purpose IT and other data infrastructure environments in traditional servers among other uses. They are also applicable for systems builders, integrators and OEMs from whom you may be buying your current systems, or future ones.

Where to Learn More

The following are additional resources to learn more about these adapters along with server storage I/O and related technologies.

What this all means

Even as the industry continues to talk about and move towards a more software-defined focus, even for environments that are serverless, there is still a need for hardware somewhere. These adapters are a good sign of the continued maturing cycle of NVMe, well positioned for the next decade and beyond while also being relevant today. Likewise, even though the future involves NVMe, there is still a place for SAS along with SATA to coexist in many environments. For some environments there is a need for traditional RAID, while for others there is simply the need for attachment of SAS, SATA and NVMe devices. Overall, a good set of updates, enhancements and new technology for today and tomorrow. Now, when do I get some to play with? ;)

Ok, nuff said (for now…).

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book "Software-Defined Data Infrastructure Essentials" (CRC Press).

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

VMware vSAN V6.6 Part IV (HCI scaling ROBO and data centers today)

server storage I/O trends

In case you missed it, VMware announced vSAN v6.6, its hyper-converged infrastructure (HCI) software defined data infrastructure solution. This is the fourth of a five-part series about VMware vSAN V6.6. View Part I here, Part II (just the speeds and feeds please) here, Part III (reducing cost and complexity) here, as well as Part V (VMware vSAN evolution, where to learn more and summary) here.

VMware vSAN 6.6
Image via VMware

For those who are not aware, vSAN is VMware's software-defined virtual Storage Area Network, part of a software-defined data infrastructure (SDDI) and software-defined data center (SDDC). Besides being software-defined, vSAN is an HCI solution combining compute (server), I/O networking and storage (space and I/O) along with hypervisors, management, and other tools.

Scaling HCI for ROBO and data centers today and for tomorrow

Scaling with stability for today and tomorrow includes addressing your applications' Performance, Availability, Capacity and Economics (PACE) workload requirements now and into the future. Scaling with stability means boosting performance, availability (data protection, security, resiliency, durability, FTT) and effective capacity without one of those attributes compromising another.

VMware vSAN data center scaling
Image via VMware

Scaling today for tomorrow also means adapting to today's needs while staying flexible enough to evolve with new application workloads, hardware, as well as cloud (public, private, hybrid, inter- and intra-cloud). As part of continued performance improvements, there are enhancements to optimize for higher performance flash SSD including NVMe based devices.

VMware vSAN cloud analytics
Image via VMware

Part of scaling with stability means enhancing performance (as well as productivity), or the effectiveness of a solution. Keep in mind that efficiency is often associated with storage (or server or network) space capacity savings or reductions. In that context, effectiveness means performance and productivity, or how much work can be done with the least overhead impact. With vSAN V6.6, performance enhancements include reduced checksum overhead, enhanced compression and deduplication, along with destaging optimizations.
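For readers who want a feel for what checksums and deduplication involve conceptually, here is a generic Python sketch (emphatically not VMware's implementation): hash fixed-size chunks, store each unique chunk once, and keep an ordered list of digests that both reconstructs the data and doubles as a per-chunk checksum. The chunk size and data are made up for illustration.

```python
# Generic illustration of block checksums plus deduplication; not any vendor's
# implementation. Hash fixed-size chunks, keep unique chunks once, reference
# duplicates by digest.
import hashlib

CHUNK_SIZE = 4096          # assumed chunk size for illustration

def dedupe(data: bytes):
    store = {}             # digest -> chunk (unique chunks only)
    refs = []              # ordered digests that reconstruct the data
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()   # doubles as a checksum
        store.setdefault(digest, chunk)
        refs.append(digest)
    return store, refs

data = (b"A" * CHUNK_SIZE) * 3 + (b"B" * CHUNK_SIZE)   # 4 chunks, 2 unique
store, refs = dedupe(data)
print(len(refs), "logical chunks,", len(store), "unique chunks stored")
assert b"".join(store[d] for d in refs) == data        # rebuild and verify
```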

Other enhancements that collectively contribute to vSAN performance improvements include VMware object handling (not to be confused with cloud or object storage S3 or Swift objects) as well as faster iSCSI for vSAN. Also improved are more accurate, refined cache sizing guidelines. Keep in mind that a little bit of NAND flash SSD or SCM in the right place can have a significant benefit, while a lot of flash cache costs much cash.

Part of enabling and leveraging new technology today includes support for larger capacity 1.6TB flash SSD drives for cache, as well as lower read latency with 3D XPoint and NVMe drives such as those from Intel among others. Refer to the VMware vSAN HCL for currently supported devices, which continue to evolve along with the partner ecosystem. Future proofing is also enabled where you can grow from today to tomorrow as new storage class memories (SCM), flash SSD and NVMe enhanced storage among other technologies are introduced into the market as well as the VMware vSAN HCL.

VMware vSAN and data center class applications
Image via VMware

Traditional CI and in particular many HCI solutions have been optimized for or focused on smaller application workloads including VDI, resulting in the perception that HCI in general is only for smaller environments, or for larger environments' non-mission critical workloads. With vSAN V6.6, VMware is addressing and enabling larger environment mission critical applications including InterSystems Caché medical health management software among others. Other application workload extensions include support for higher performance demanding Hadoop big data analytics, as well as extending virtual desktop infrastructure (VDI) workspaces with XenDesktop/XenApp, along with Photon 1.1 container support.

What About VMware vSAN 6.6 Packaging and License Options?

As part of vSAN 6.6, VMware offers several solution bundle packaging options for the data center as well as smaller ROBO environments. Contact your VMware representative or partner to learn more about specific details.

VMware vSAN cloud analytics
Image via VMware


Where to Learn More

The following are additional resources to find out more about vSAN and related technologies.

What this all means

Continue reading more about VMware vSAN 6.6 in Part I here, Part II (just the speeds and feeds please) here, Part III (reducing cost and complexity) here, as well as Part V (VMware vSAN evolution, where to learn more and summary) here.

Ok, nuff said (for now…).

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the Spring 2017 release of his new book “Software-Defined Data Infrastructure Essentials” (CRC Press).

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

Part 2 – Which HDD for Content Applications – HDD Testing


HDD testing server storage I/O trends

Updated 1/23/2018

Which enterprise HDD to use with a content server: HDD testing, how and what to do

Insight for effective server storage I/O decision making
Server StorageIO Lab Review

Which enterprise HDD to use for content servers

This is the second in a multi-part series (read part one here) based on a hands-on lab report white paper I did compliments of Servers Direct and Seagate, which you can read in PDF form here. The focus is looking at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drives (HDDs). In this post we look at some decisions and configuration choices to make when testing content application servers, as well as project planning.

Content Solution Test Objectives

In a short period of time, collect performance and other server storage I/O decision-making information on various HDDs running different content workloads.

Working with the Servers Direct staff, a suitable content solution platform test configuration was created. In addition to providing two Intel-based content servers, Servers Direct worked with their partner Seagate to arrange for various enterprise-class HDDs to be evaluated. For this series of content application tests, being short on time, I chose to run some simple workloads including database, basic file (large and small) processing and general performance characterization.

Content Solution Decision Making

Knowing how Non-Volatile Memory (NVM) NAND flash SSD (1) devices (drives and PCIe cards) perform, what would be the best HDD based storage option for my given set of applications? Different applications have various performance, capacity and budget considerations. Different types of Seagate Enterprise class 2.5” Small Form Factor (SFF) HDDs were tested.

While revolutions per minute (RPM) still plays a role in HDD performance, there are other factors including internal processing capabilities, software or firmware algorithm optimization, and caching. Most HDDs today have some amount of DRAM for read caching and other operations. Seagate Enterprise Performance HDDs with the enhanced caching feature (2) are examples of devices that accelerate storage I/O speed vs. traditional 10K and 15K RPM drives.
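As a back-of-the-envelope illustration of why RPM matters yet is not the whole story, a common approximation for random HDD IOPS is 1 / (average seek time + average rotational latency). The Python sketch below uses illustrative seek times, not measured values for any specific drive, and ignores the caching and firmware optimizations discussed above.

```python
# Back-of-the-envelope random IOPS estimate for an HDD, using the common
# approximation IOPS ~= 1 / (average seek time + average rotational latency).
# Seek times below are illustrative, not measured values for any specific drive.

def hdd_iops_estimate(rpm: int, avg_seek_ms: float) -> float:
    avg_rotational_ms = (60_000 / rpm) / 2       # half a revolution on average
    return 1000.0 / (avg_seek_ms + avg_rotational_ms)

for rpm, seek in [(7200, 8.5), (10_000, 4.4), (15_000, 3.4)]:
    print(f"{rpm:>6} RPM, {seek} ms seek -> ~{hdd_iops_estimate(rpm, seek):.0f} IOPS")
```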

Project Planning And Preparation

Workloads to be tested included:

  • Database read/writes
  • Large file processing
  • Small file processing
  • General I/O profile

Project testing consisted of five phases, some of which overlapped with others:

Phase 1 – Plan
Identify candidate workloads that could be run in the given amount of time, determine time schedules and resource availability, create a project plan.

Phase 2 – Define
Hardware define and software define the test platform.

Phase 3 – Setup
The objective was to assess plug-and-play capability of the server, storage and I/O networking hardware with a Linux OS before moving on to the reported workloads in the next phase. This phase covered initial setup and configuration of hardware and software, installation of additional devices along with software configuration, troubleshooting, and learning as applicable. It used Ubuntu Linux 14.04 server as the operating system (OS) along with MySQL 5.6 as a database server during initial hands-on experience.

Phase 4 – Execute
This consisted of using Windows 2012 R2 server as the OS along with Microsoft SQL Server on the system under test (SUT) to support various workloads. Results of this phase are reported below.
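For those who want to see the general shape of the database read/write workload, here is a hypothetical stand-in written in Python. The lab itself used MySQL and Microsoft SQL Server; this sketch substitutes the standard library sqlite3 module purely so it runs anywhere, and the table name, row counts and payload size are made up for illustration.

```python
# Hypothetical stand-in for a database read/write workload; the lab used MySQL
# and SQL Server, sqlite3 is used here only so the sketch runs anywhere.
import random
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE content (id INTEGER PRIMARY KEY, payload TEXT)")

start = time.perf_counter()
for _ in range(10_000):                                  # inserts (writes)
    conn.execute("INSERT INTO content (payload) VALUES (?)", ("x" * 256,))
conn.commit()
for _ in range(10_000):                                  # random point reads
    row_id = random.randint(1, 10_000)
    conn.execute("SELECT payload FROM content WHERE id = ?", (row_id,)).fetchone()
elapsed = time.perf_counter() - start
print(f"20,000 operations in {elapsed:.2f}s (~{20_000/elapsed:.0f} ops/sec)")
```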

Phase 5 – Analyze
Results from the workloads run in phase 4 were analyzed and summarized into this document.

(Note 1) Refer to Seagate 1200 12 Gbps Enterprise SAS SSD StorageIO lab review

(Note 2) Refer to Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy

Planning And Preparing The Tests

As with most any project there were constraints to contend with and work around.

Test constraints included:

  • Short-time window
  • Hardware availability
  • Amount of hardware
  • Software availability

Three most important constraints and considerations for this project were:

  • Time – This was a project with a very short time “runway”, something common in most customer environments that are looking to make knowledgeable server storage I/O decisions.
  • Amount of hardware – Limited amount of DRAM main memory, and sixteen 2.5” internal hot-swap storage slots for HDDs as well as SSDs. Note that for a production content solution platform, additional DRAM can easily be added, along with extra external storage enclosures to scale memory and storage capacity to fit your needs.
  • Software availability – Utilize common software and management tools publicly available so anybody could leverage those in their own environment and tests.

The following content application workloads were profiled:

  • Database reads/writes – Updates, inserts, read queries for a content environment
  • Large file processing – Streaming of large video, images or other content objects.
  • Small file processing – Processing of many small files found in some content applications
  • General I/O profile – IOP, bandwidth and response time relevant to content applications

Where To Learn More

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

There are many different types of content applications ranging from little data databases to big data analytics, as well as very big fast data such as video. Likewise there are various workloads and characteristics to test. The best tests and metrics are those that apply to your environment and application needs.

Continue reading part three of this multi-part series here looking at how the systems and HDD’s were configured and tested.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

NVMe Place NVM Non Volatile Memory Express Resources

Updated 8/31/19
NVMe place server Storage I/O data infrastructure trends

Welcome to NVMe place NVM Non Volatile Memory Express Resources. NVMe place is about Non Volatile Memory (NVM) Express (NVMe) with Industry Trends Perspectives, Tips, Tools, Techniques, Technologies, News and other information.

Disclaimer

Please note that this NVMe place resources site is independent of the industry trade and promoters group NVM Express, Inc. (e.g. www.nvmexpress.org). NVM Express, Inc. is the sole owner of the NVM Express specifications and trademarks.

NVM Express Organization
Image used with permission of NVM Express, Inc.

Visit the NVM Express industry promoters site here to learn more about their members, news, events, product information, software driver downloads, and other useful NVMe resources content.

 

The NVMe Place resources and NVM including SCM, PMEM, Flash

NVMe place covers Non Volatile Memory (NVM): NAND flash, storage class memories (SCM) and persistent memories (PM) are storage memory mediums, while NVM Express (NVMe) is an interface for accessing NVM. This NVMe resources page is a companion to The SSD Place, which has a broader Non Volatile Memory (NVM) focus including flash among other SSD topics. NVMe is a newer server storage I/O access method and protocol for fast access to NVM based storage and memory technologies. NVMe is an alternative to existing block based server storage I/O access protocols such as AHCI/SATA and SCSI/SAS, commonly used for accessing Hard Disk Drives (HDD) along with SSD among other devices.

Server Storage I/O NVMe PCIe SAS SATA AHCI
Comparing AHCI/SATA, SCSI/SAS and NVMe all of which can coexist to address different needs.

Leveraging the standard PCIe hardware interface, NVMe based devices (that have an NVMe controller) can be accessed via various operating systems (and hypervisors such as VMware ESXi) with either in-box drivers or optional third-party device drivers. Devices that support NVMe can be packaged in a 2.5″ drive form factor that uses a converged 8637/8639 connector (e.g. PCIe x4), coexisting with SAS and SATA devices, as well as add-in card (AIC) PCIe cards supporting x4, x8 and other implementations. Initially, NVMe is being positioned as a back-end interface for servers (or storage systems) to access fast flash and other NVM based devices.

NVMe as back-end storage
NVMe as a “back-end” I/O interface for NVM storage media

NVMe as front-end server storage I/O interface
NVMe as a “front-end” interface for servers or storage systems/appliances

NVMe has also been shown to work over low latency, high-speed RDMA based network interfaces including RoCE (RDMA over Converged Ethernet) and InfiniBand (read more here, here and here involving Mangstor, Mellanox and PMC among others). What this means is that, like SCSI based SAS which can be both a back-end drive (HDD, SSD, etc.) access protocol and interface, NVMe can be used on the back-end as well as a front-end server-to-storage interface, similar to how Fibre Channel SCSI Protocol (aka FCP), SCSI based iSCSI, and SCSI RDMA Protocol via InfiniBand (among others) are used.

NVMe features

Main features of NVMe include among others:

  • Lower latency due to improved drivers and increased queues (and queue sizes)
  • Lower CPU overhead to handle a larger number of I/Os (more CPU available for useful work)
  • Higher I/O activity rates (IOPs) to boost productivity and unlock the value of fast flash and NVM
  • Bandwidth improvements leveraging various fast PCIe interfaces and available lanes
  • Dual-pathing of devices like what is available with dual-path SAS devices
  • Unlocking the value of more cores per processor socket and software threads (productivity)
  • Various packaging options, deployment scenarios and configuration options
  • Appears as a standard storage device on most operating systems
  • Plug-and-play with in-box drivers on many popular operating systems and hypervisors

Shared external PCIe using NVMe
NVMe and shared PCIe (e.g. shared PCIe flash DAS)

NVMe related content and links

The following are some of my tips, articles, blog posts, presentations and other content, along with material from others pertaining to NVMe. Keep in mind that the question should not be if NVMe is in your future, rather when, where, with what, from whom and how much of it will be used as well as how it will be used.

  • How to Prepare for the NVMe Server Storage I/O Wave (Via Micron.com)
  • Why NVMe Should Be in Your Data Center (Via Micron.com)
  • NVMe U2 (8639) vs. M2 interfaces (Via Gamersnexus)
  • Enmotus FuzeDrive MicroTiering (StorageIO Lab Report)
  • EMC DSSD D5 Rack Scale Direct Attached Shared SSD All Flash Array Part I (Via StorageIOBlog)
  • Part II – EMC DSSD D5 Direct Attached Shared AFA (Via StorageIOBlog)
  • NAND, DRAM, SAS/SCSI & SATA/AHCI: Not Dead, Yet! (Via EnterpriseStorageForum)
  • Non Volatile Memory (NVM), NVMe, Flash Memory Summit and SSD updates (Via StorageIOblog)
  • Microsoft and Intel showcase Storage Spaces Direct with NVM Express at IDF ’15 (Via TechNet)
  • MNVM Express solutions (Via SuperMicro)
  • Gaining Server Storage I/O Insight into Microsoft Windows Server 2016 (Via StorageIOblog)
  • PMC-Sierra Scales Storage with PCIe, NVMe (Via EEtimes)
  • RoCE updates among other items (Via InfiniBand Trade Association (IBTA) December Newsletter)
  • NVMe: The Golden Ticket for Faster Flash Storage? (Via EnterpriseStorageForum)
  • What should I consider when using SSD cloud? (Via SearchCloudStorage)
  • MSP CMG, Sept. 2014 Presentation (Flash back to reality – Myths and Realities – Flash and SSD Industry trends perspectives plus benchmarking tips)– PDF
  • Selecting Storage: Start With Requirements (Via NetworkComputing)
  • PMC Announces Flashtec NVMe SSD NVMe2106, NVMe2032 Controllers With LDPC (Via TomsITpro)
  • Exclusive: If Intel and Micron’s “Xpoint” is 3D Phase Change Memory, Boy Did They Patent It (Via Dailytech)
  • Intel & Micron 3D XPoint memory — is it just CBRAM hyped up? Curation of various posts (Via Computerworld)
  • How many IOPS can a HDD, HHDD or SSD do (Part I)?
  • How many IOPS can a HDD, HHDD or SSD do with VMware? (Part II)
  • I/O Performance Issues and Impacts on Time-Sensitive Applications (Via CMG)
  • Via EnterpriseStorageForum: 5 Hot Storage Technologies to Watch
  • Via EnterpriseStorageForum: 10-Year Review of Data Storage

Non-Volatile Memory (NVM) Express (NVMe) continues to evolve as a technology for enabling and improving server storage I/O for NVM including NAND flash SSD storage. NVMe streamlines performance, enabling more work to be done (e.g. IOPs) and more data to be moved (bandwidth) at lower response times using less CPU.
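One way to see why deeper queues plus lower per-I/O latency translate into more work done is Little's Law: concurrency equals arrival rate times response time, so achievable IOPS is roughly outstanding I/Os divided by average response time, assuming the device and software stack can actually sustain that concurrency. The quick Python sketch below uses illustrative queue depths and latencies only.

```python
# Rough sketch of Little's Law applied to storage I/O: concurrency = rate x
# response time, so IOPS ~= outstanding I/Os / average response time, assuming
# the device and software stack can actually sustain that concurrency.
# Queue depths and latencies below are illustrative only.

def iops_estimate(outstanding_ios: int, avg_latency_ms: float) -> float:
    return outstanding_ios / (avg_latency_ms / 1000.0)

print(f"{iops_estimate(32, 0.5):,.0f} IOPS")    # shallow queue, higher latency
print(f"{iops_estimate(128, 0.2):,.0f} IOPS")   # deeper queues, lower latency
```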

NVMe and SATA flash SSD performance

The above figure is a quick look comparing NAND flash SSD being accessed via SATA III (6Gbps) on the left and NVMe (x4) on the right. As with any server storage I/O performance comparison there are many variables, so take the results with a grain of salt. While IOPs and bandwidth are often discussed, keep in mind that with the new protocol, drivers and device controllers, NVMe streamlines I/O so that less CPU is needed.

Additional NVMe Resources

Also check out the Server StorageIO companion micro sites landing pages including thessdplace.com (SSD focus), data protection diaries (backup, BC/DR/HA and related topics), cloud and object storage, and server storage I/O performance and benchmarking here.

If you are into the real bits and bytes details, such as device driver level content, check out the Linux NVMe reflector forum. The linux-nvme forum is a good source for developers to stay up on what is happening in and around device drivers and associated topics.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC


Wrap Up

Watch for updates with more content, links and NVMe resources to be added here soon.

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Server Storage I/O Benchmark Performance Resource Tools

server storage I/O trends

Updated 1/23/2018

This page collects server storage I/O benchmark performance resource tools, along with various articles and tips. These include tools for legacy, virtual, cloud and software defined environments.

benchmark performance resource tools server storage I/O performance

The best server and storage I/O (input/output operation) is the one that you do not have to do, the second best is the one with the least impact.

server storage I/O locality of reference

This is where the idea of locality of reference (e.g. how close the data is to where your application is running) comes into play, which is implemented via tiered memory, storage and caching as shown in the figure above.

Cloud virtual software defined storage I/O

Server storage I/O performance applies to cloud, virtual, software defined and legacy environments

What this has to do with server storage I/O (and networking) performance benchmarking is keeping the ideas of locality of reference, context and the application workload in perspective, regardless of whether it is a cloud, virtual, software defined or legacy physical environment.
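As a small illustration of locality of reference at work, here is a minimal Python LRU cache sketch placed in front of a stand-in "backend" read function. The names, capacity and workload are made up for illustration, but the effect is the point: when the working set fits in the cache, repeated accesses become fast hits instead of slower back-end I/Os.

```python
# Minimal sketch of locality of reference: a small LRU cache in front of a slow
# "backend" turns repeated (local) accesses into fast hits. Names and sizes are
# made up for illustration.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()
        self.hits = 0
        self.misses = 0

    def get(self, key, fetch_from_backend):
        if key in self.data:
            self.data.move_to_end(key)         # mark as most recently used
            self.hits += 1
            return self.data[key]
        self.misses += 1
        value = fetch_from_backend(key)        # the "slow" I/O you want to avoid
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)      # evict least recently used
        return value

cache = LRUCache(capacity=100)
backend = lambda key: f"block-{key}"           # stand-in for a disk read
for key in [k % 80 for k in range(10_000)]:    # a workload with re-use (locality)
    cache.get(key, backend)
print(f"hits={cache.hits} misses={cache.misses}")
```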

StorageIOblog: I/O, I/O how well do you know about good or bad server and storage I/Os?
StorageIOblog: Server and Storage I/O benchmarking 101 for smarties
StorageIOblog: Which Enterprise HDDs to use for a Content Server Platform (7 part series with using benchmark tools)
StorageIO.com: Enmotus FuzeDrive MicroTiering lab test using various tools
StorageIOblog: Some server storage I/O benchmark tools, workload scripts and examples (Part I) and (Part II)
StorageIOblog: Get in the NVMe SSD game (if you are not already)
Doridmen.com: Transcend SSD360S Review with tips on using ATTO and Crystal benchmark tools
ComputerWeekly: Storage performance metrics: How suppliers spin performance specifications

Via StorageIO Podcast: Kevin Closson discusses SLOB Server CPU I/O Database Performance benchmarks
Via @KevinClosson: SLOB Use Cases By Industry Vendors. Learn SLOB, Speak The Experts’ Language
Via BeyondTheBlocks (Reduxio): 8 Useful Tools for Storage I/O Benchmarking
Via CCSIObench: Cold-cache Sequential I/O Benchmark
CISJournal: Benchmarking the Performance of Microsoft Hyper-V server, VMware ESXi and Xen Hypervisors (PDF)
Microsoft TechNet:Windows Server 2016 Hyper-V large-scale VM performance for in-memory transaction processing
InfoStor: What’s The Best Storage Benchmark?
StorageIOblog: How to test your HDD, SSD or all flash array (AFA) storage fundamentals
Via ATTO: Atto V3.05 free storage test tool available
Via StorageIOblog: Big Files and Lots of Little File Processing and Benchmarking with Vdbench

Via StorageIO.com: Which Enterprise Hard Disk Drives (HDDs) to use with a Content Server Platform (White Paper)
Via VMware Blogs: A Free Storage Performance Testing Tool For Hyperconverged
Microsoft Technet: Test Storage Spaces Performance Using Synthetic Workloads in Windows Server
Microsoft Technet: Microsoft Windows Server Storage Spaces – Designing for Performance
BizTech: 4 Ways to Performance-Test Your New HDD or SSD
EnterpriseStorageForum: Data Storage Benchmarking Guide
StorageSearch.com: How fast can your SSD run backwards?
OpenStack: How to calculate IOPS for Cinder Storage ?
StorageAcceleration: Tips for Measuring Your Storage Acceleration

server storage I/O STI and SUT

Spiceworks: Determining HDD SSD SSHD IOP Performance
Spiceworks: Calculating IOPS from Perfmon data
Spiceworks: profiling IOPs

vdbench server storage I/O benchmark
Vdbench example via StorageIOblog.com

StorageIOblog: What does server storage I/O scaling mean to you?
StorageIOblog: What is the best kind of IO? The one you do not have to do
Testmyworkload.com: Collect and report various OS workloads
Whoishostingthis: Various SQL resources
StorageAcceleration: What, When, Why & How to Accelerate Storage
Filesystems.org: Various tools and links
StorageIOblog: Can we get a side of context with them IOPS and other storage metrics?

flash ssd and hdd

BrightTalk Webinar: Data Center Monitoring – Metrics that Matter for Effective Management
StorageIOblog: Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy
StorageIOblog: Has SSD put Hard Disk Drives (HDD’s) On Endangered Species List?

server storage I/O bottlenecks and I/O blender

Microsoft TechNet: Measuring Disk Latency with Windows Performance Monitor (Perfmon)
Via Scalegrid.io: How to benchmark MongoDB with YCSB? (Perfmon)
Microsoft MSDN: List of Perfmon counters for sql server
Microsoft TechNet: Taking Your Server’s Pulse
StorageIOblog: Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?
CMG: I/O Performance Issues and Impacts on Time-Sensitive Applications

flash ssd and hdd

Virtualization Practice: IO IO it is off to Storage and IO metrics we go
InfoStor: Is HP Short Stroking for Performance and Capacity Gains?
StorageIOblog: Is Computer Data Storage Complex? It Depends
StorageIOblog: More storage and IO metrics that matter
StorageIOblog: Moving Beyond the Benchmark Brouhaha
Yellow-Bricks: VSAN VDI Benchmarking and Beta refresh!

server storage I/O benchmark example

YellowBricks: VSAN performance: many SAS low capacity VS some SATA high capacity?
StorageIOblog: Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review
StorageIOblog: Part II: Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review
StorageIOblog: Server Storage I/O Network Benchmark Winter Olympic Games

flash ssd and hdd

VMware VDImark aka View Planner (also here, here and here) as well as VMmark here
StorageIOblog: SPC and Storage Benchmarking Games
StorageIOblog: Speaking of speeding up business with SSD storage
StorageIOblog: SSD and Storage System Performance

Hadoop server storage I/O performance
Various Server Storage I/O tools in a hadoop environment

Michael-noll.com: Benchmarking and Stress Testing an Hadoop Cluster With TeraSort, TestDFSIO
Virtualization Practice: SSD options for Virtual (and Physical) Environments Part I: Spinning up to speed on SSD
StorageIOblog: Storage and IO metrics that matter
InfoStor: Storage Metrics and Measurements That Matter: Getting Started
SilvertonConsulting: Storage throughput vs. IO response time and why it matters
Splunk: The percentage of Read / Write utilization to get to 800 IOPS?

flash ssd and hdd
Various server storage I/O benchmarking tools

Spiceworks: What is the best IO IOPs testing tool out there
StorageIOblog: How many IOPS can a HDD, HHDD or SSD do?
StorageIOblog: Some Windows Server Storage I/O related commands
Openmaniak: Iperf overview and Iperf.fr: Iperf overview
StorageIOblog: Server and Storage I/O Benchmark Tools: Microsoft Diskspd (Part I and Part II)
Quest: SQL Server Perfmon Poster (PDF)
Server and Storage I/O Networking Performance Management (webinar)
Data Center Monitoring – Metrics that Matter for Effective Management (webinar)
Flash back to reality – Flash SSD Myths and Realities (Industry trends & benchmarking tips), (MSP CMG presentation)
DBAstackexchange: How can I determine how many IOPs I need for my AWS RDS database?
ITToolbox: Benchmarking the Performance of SANs

server storage IO labs

StorageIOblog: Dell Inspiron 660 i660, Virtual Server Diamond in the rough (Server review)
StorageIOblog: Part II: Lenovo TS140 Server and Storage I/O Review (Server review)
StorageIOblog: DIY converged server software defined storage on a budget using Lenovo TS140
StorageIOblog: Server storage I/O Intel NUC nick knack notes First impressions (Server review)
StorageIOblog & ITKE: Storage performance needs availability, availability needs performance
StorageIOblog: Why SSD based arrays and storage appliances can be a good idea (Part I)
StorageIOblog: Revisiting RAID storage remains relevant and resources

Interested in cloud and object storage? Visit our objectstoragecenter.com page; for flash SSD check out the storageio.com/ssd page, along with data protection, RAID, various industry links and more here.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Watch for additional links to be added above in addition to those that appear via comments.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

EMC New VNX MCx doing more storage I/O work vs. just being more

Storage I/O trends

It's not how much you have, it's how storage I/O work gets done that matters

Following last week's VMworld event in San Francisco, where among other announcements there was this one around Virtual SAN (VSAN) along with Software Defined Storage (SDS), EMC today made several announcements.

Today’s EMC announcements include:

  • The new VNX MCx (Multi Core optimized) family of storage systems
  • VSPEX proven infrastructure portfolio enhancements
  • Availability of ViPR Software Defined Storage (SDS) platform (read more from earlier posts here, here and here)
  • Statement of direction preview of Project Nile for elastic cloud storage platform
  • XtremSW server cache software version 2.0 with enhanced management and support for VMware, AIX and Oracle RAC

EMC ViPREMC XtremSW cache software

Summary of the new EMC VNX MCx storage systems include:

  • More processor cores, PCIe Gen 3 (faster bus), front-end and back-end IO ports, DRAM and flash cache (as well as drives)
  • More 6Gb/s SAS back-end ports to use more storage devices (SAS and SATA flash SSD, fast HDD and high-capacity HDD)
  • MCx – Multi-core optimized with software rewritten to make use of threads and resources vs. simply using more sockets and cores at higher clock rates
  • Data Footprint Reduction (DFR) capabilities including block compression and dedupe, file dedupe and thin provisioning
  • Virtual storage pools that include flash SSD, fast HDD and high-capacity HDD
  • Block (iSCSI, FC and FCoE) and NAS file (NFS, pNFS, CIFS) front-end access with object access via Atmos Virtual Edition (VE) and ViPR
  • Entry level pricing starting at below $10,000 USD

EMC VNX MCx systems

What is this MCx stuff, is it just more hardware?

While there is more hardware that can be used in different configurations, the key or core (pun intended) around MCx is that EMC has taken the time and invested in reworking the internal software of the VNX, which has its roots going back to the Data General CLARiiON that EMC acquired. This is similar to an effort EMC made a few years back when it overhauled what is now known as the VMAX, from the Symmetrix into the DMX. That effort expanded from a platform or processor port to re-architecting and software optimizing (rewriting portions) to leverage new and emerging hardware capabilities more effectively.

EMC VNX MCx

With MCx, EMC is doing something similar in that core portions of the VNX software have been re-architected and rewritten to take advantage of the additional threads and cores available to do work more effectively. This is not all that different from what occurs (or should occur) with upper level applications that eventually get rewritten to leverage underlying new capabilities to do more work faster and leverage technologies in a more cost-effective way. MCx also leverages flash as a primary medium, with data then being moved (in 256MB chunks) down into lower tiers of storage (SSD and HDD drives).

Storage I/O trends

EMC VNX has had FAST Cache in the past, which enables SSD drives to be used as an extension of main cache as well as being used as drive targets. Thus while MCx can and does leverage more and faster cores as would most any software, it is also able to leverage those cores and threads in a more effective way. After all, it's not just how many processors, sockets, cores, threads, L1/L2 cache, DRAM, flash SSD and other resources you have, it's how effectively you use them. Also keep in mind that a bit of flash in the right place used effectively can go a long way, whereas a lot of cache in the wrong place or not used optimally will end up costing a lot of cash.

Moving forward this means that EMC should be able to further refine and optimize other portions of the VNX software not yet updated to make further benefit of new hardware platforms and capabilities.

Does this mean EMC is catching up with newer vendors?

Just as more of something is not always better, it's how those items are used that matters; likewise, just because something is new does not mean it's better or faster. That will manifest itself when systems are demonstrated and performance results are shown. However, the key is showing performance across different workloads that have relevance to your needs and that convey metrics that matter with context.

Storage I/O trends

Context matters, including the type and size of work being done, number of transactions, IOPs, files or videos served, pages processed or items rendered per unit of time, or response time and latency (aka wait or think time), among others. Thus some newer systems may be faster on paper, PowerPoint, WebEx, YouTube or via some benchmarks; however, what is the context and how do they compare to others on an apples-to-apples basis?

What are some other enhancements or features?

  • Leveraging of FAST VP (Fully Automated Storage Tiering for Virtual Pools) with improved MCx software
  • Increased effectiveness of available hardware resources (processors, cores, DRAM, flash, drives, ports)
  • Active-active LUNs accessible by both controllers, as well as legacy ALUA support

Data sheets and other material for the new VNX MCx storage systems can be found here, with software options and bundles here, and general speeds and feeds here.

Learn more here at the EMC VNX MCx storage system landing page and compare VNX systems here.

What does then new VNX MCx family look like?

EMC VNX MCx family image

Is VNX MCx all about supporting VMware?

Interestingly, if you read between the lines, listen closely to the conversations, and ask the right questions, you will realize that while VMware is an important workload or environment to support, it is not the only one targeted for VNX. Likewise, if you listen and look beyond what is normally amplified in various conversations, you will find that systems such as VNX are being deployed as back-end storage in cloud (public, private, hybrid) environments for use with technologies such as OpenStack or object based solutions (visit www.objectstoragecenter.com for more on object storage systems and access).

There is a common myth that cloud and service providers all use white box commodity hardware including JBOD for their systems, which some do; however, some are also using systems such as VNX among others. In some of these scenarios the VNX type systems are or will be deployed in large numbers, essentially consolidating the functions of what had been done by an even larger number of JBOD based systems. This is where some of you will have a déjà vu or back-to-the-future moment from the mid 90s, when there was an industry movement to combine all the DAS and JBOD into larger storage systems. Don't worry if you are not yet reading about this trend in your favorite industry rag or analyst briefing notes; ask or look around and you might be surprised at what is occurring, granted it might be another year or two before you read about it (just saying ;).

Storage I/O trends

What that means is that VNX MCx is also well positioned for working with ViPR or Atmos Virtual Edition among other cloud and object storage stacks. VNX MCx is also well positioned, with its new low cost of entry, for general purpose workloads and applications ranging from file sharing, email, web and database, to those demanding high performance and low latency with large amounts of flash SSD. In addition to being used for general purpose storage, VNX MCx will also complement data protection solutions for backup/restore, BC, DR and archiving such as Data Domain, Avamar and Networker among others. Speaking of server virtualization, EMC also has tools for working with Hyper-V, Xen and KVM in addition to VMware.

If there is an all flash VNX MCx doesn’t that compete with XtremIO?

Yes, there are all flash VNX MCx systems just as there have been all flash VNX systems before; however, these will be positioned for different use case scenarios by EMC and their partners to avoid competing head to head with XtremIO. Thus EMC will need to be diligent in being very clear to its own sales and marketing forces, as well as those of partners and customers, about what to use when, where, why and how.

General thoughts and closing comments

The VNX MCx is a good set of enhancements by EMC and an example of how it's not as important how much you have, rather how you use it to be more effective.

Ok, nuff said (for now).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

VMworld 2013 Vmware, server, storage I/O and networking update (Day 1)

Storage I/O trends

Congratulations to VMware on 10 years of VMworld!

With the largest installment yet of a VMworld in terms of attendance, there were also many announcements today (e.g. Monday) and many more slated throughout the week. Here is a synopsis of some of those announcements.

Software Defined Data Center (SDDC) and Software Defined Networks (SDN)

VMware made a series of announcements today that set the stage for many others. Not surprisingly, these involved SDDC, SDN, SDS, vSphere 5.5 and other management tool enhancements, or the other SDM (Software Defined Management).

VMworld image

Here is a synopsis of what was announced by VMware.

  • VMware NSX (SDN) combines Nicira NVP(TM) along with vCloud Network and Security
  • VMware Virtual SAN (VSAN) not to be confused with virtual storage appliances (VSAs)
  • VMware vCloud Suite 5.5
  • VMware vSphere 5.5 (includes support for new Intel Xeon and Atom processors)
  • VMware vSphere App HA
  • VMware vSphere Flash Read Cache software
  • VMware vSphere Big Data Extensions
  • VMware vCloud Automation Center
  • VMware vCloud

Note that while these were announced today, some will be in public beta soon, with general availability over the next few months or quarters (learn more here including pricing and availability). More on these and other enhancements in future posts. However, for now check out what Duncan Epping (@DuncanYB) of VMware has to say over at his Yellow-Bricks site here, here and here.

buzzword bingo
Buzzword Bingo

Additional VMworld Software Defined Announcements

Dell made some announcements as well for cloud and virtual environments in support of VMware, spanning networking to servers, hardware and software. With all the recent acquisitions by Dell, including Quest where they picked up Foglight management tools, along with vRanger, BakBone and others, Dell has amassed an interesting portfolio. On the hardware front, check out the VRTX shared server infrastructure; I want one for my VMware environment, now I just need to justify one (to myself). Speaking of Dell, if you are at VMworld on Tuesday August 27 around 1:30PM, stop by the Dell booth where I will be presenting, including announcing some new things (stay tuned for more on that soon).

HP had some announcements today as well. HP jumped into the SDDC and SDN with some Software Defined Marketing (SDM) and Software Defined Announcements (SDA), in addition to using the Unified Data Center theme. Today's announcements by HP were focused more around SDN and VMware NSX, along with the HP Virtual Application Networks SDN Controller and VMware networking.

NetApp (Booth #1417) announced more integration between their Data ONTAP based solutions and VMware vSphere, Horizon Suite, vCenter, vCloud Automation Center and vCenter Log Insight under the theme of SDDC and SDS. As part of the enhancements, NetApp announced Virtual Storage Console (VSC 5.0) for end-to-end storage management and software in VMware environments, in addition to integration with VMware vCenter Server 5.5. Not to be left out of the SSD flash dash, NetApp also released a new V1.2 of their FlashAccel software for vSphere 5.0 and 5.1.

Storage I/O trends

Cloud, Virtualization and DCIM

Here is one that you probably have not seen or heard much about elsewhere: Nlyte's announcement of their V1.5 Virtualization Connector for Data Center Infrastructure Management (DCIM). Keep in mind that DCIM covers more than facilities, power, and cooling related themes, particularly in virtual data centers. Thus, some of the DCIM vendors, as well as others, are moving into the converged DCIM space that spans server, storage, networking, hardware, software and facilities topics.

Interested in or want to know more about DCIM? Then check out these items:
  • Data Center Infrastructure Management (DCIM) and Infrastructure Resource Management (IRM)
  • Data Center Tools Can Streamline Computing Resources
  • Considerations for Asset Tracking and DCIM

Data Protection including Backup/Restore, BC, DR and Archiving

Quantum announced that Commvault has added support to use the Lattus object storage based solution as an archive target platform. You can learn more about object storage (access and architectures) here at www.objectstoragecenter.com .

PHD Virtual did a couple of data protection (backup/restore, BC, DR) related announcements (here and here). Speaking of backup/restore and data protection, if you are at VMworld on Tuesday August 27th around 1:30PM, stop by the Dell booth where I will be presenting, and stay tuned for more info on some things we are going to announce at that time.

In case you missed it, Imation, who bought Nexsan earlier this year, last week announced their new unified NST6000 series of storage systems. The NST6000 storage solutions support Fibre Channel (FC) and iSCSI for block access along with NFS, CIFS/SMB and FTP for file access from virtual and physical servers.

Emulex announced some new 16Gb Fibre Channel (e.g. 16GFC), aka what Brocade wants you to refer to as Gen 5, converged and multi-port adapters. I wonder how many still remember (or would rather forget) how many ASIC and adapter generations from various vendors occurred just at 1Gb Fibre Channel?

Storage I/O trends

Caching and flash SSD

Proximal Data announced AutoCache 2.0 with role based administration, multi-hypervisor support (a growing trend beyond just a VMware focus) and more vCenter/vSphere integration. This is on the heels of last week’s FusionIO powered IBM Flash Cache Storage Accelerator (FCSA) announcement, along with others such as EMC, Infinio, Intel, NetApp, Pernix and SanDisk (Flashsoft) to name a few.

Mellanox (VMworld booth #2005), you know, the InfiniBand folks who also have some Ethernet (which also includes Fibre Channel over Ethernet) technology, did a series of announcements today with various PCIe nand flash SSD card vendors. The common theme with the various vendors, including Micron (Booth #1635) and LSI, is support of VMware virtual servers using iSER, or iSCSI over RDMA (Remote Direct Memory Access). RDMA, or server to server direct memory access (what some of you might know as remote memory mapped IO or channel to channel, C2C), enables very fast, low latency server to server data movement such as in a VMware cluster. Check out Mellanox and their 40Gb Ethernet along with InfiniBand among other solutions if you are into server, storage I/O and general networking, along with their partners. Need or want to learn more about networking with your servers and storage? Check out Cloud and Virtual Data Storage Networking and Resilient Storage Networks.

Rest assured there are many more announcements and updates to come this week, and in the weeks to follow…

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

IBM Server Side Storage I/O SSD Flash Cache Software

Storage I/O trends


As I often say, the best server storage I/O or IOP is the one that you do not have to do. The second best storage I/O or IOP is the one with the least impact or that can be done in a cost-effective way. Likewise, the question is not if solid-state devices (SSD) including nand flash are in your future, rather when, where, why, with what, how much, along with from whom. Location also matters when it comes to SSD including nand flash, with different environments and applications leveraging different placement (locality) options, not to mention the question of how much performance you need vs. want.

As part of their $1 billion USD (to be spent over three years, or roughly $333 million per year) Flash Ahead initiative, IBM has announced their Flash Cache Storage Accelerator (FCSA) server software. While IBM did not use the term (congratulations and thank you btw), some creative marketer might want to try calling this Software Defined Cache (SDC) or Software Defined SSD (SDSSD); if that occurs, apologies in advance ;). Keep in mind that it was about a year ago this time when IBM announced that they were acquiring SSD industry veteran Texas Memory Systems (TMS).

What was announced: introducing Flash Cache Storage Accelerator (FCSA)

With this announcement of FCSA, slated for customer general availability by the end of August, IBM joins EMC and NetApp among other storage systems vendors who have developed their own, or collaborated on, server-side IO optimization and cache software. Some of the other startup and established vendors who have IO optimization, performance acceleration and caching software include DataRam (RAMDisk), FusionIO, Infinio (NFS for VMware), Pernix (block for VMware), Proximal and SanDisk (bought FlashSoft) among others.

Read more about IBM Flash Cache Software (FCSA) including various questions and perspectives in part two of this two-part post located here.

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Is SSD dead? No, however some vendors might be

Storage I/O trends


In a recent conversation with Dave Raffo about the nand flash solid state disk (SSD) market, we talked about industry trends, perspectives and where the market is now as well as headed. One of my comments is, has been and will remain that the industry has still not reached anywhere near full potential for deployment of SSD for enterprise, SMB and other data storage needs. Granted, there is broad adoption in terms of discussion or conversation and plenty of early adopters.

SSD and in particular nand flash is anything but dead; in fact in the big broad picture of things, it is still very early in the game. Sure, for those who cover and crave the newest, latest and greatest technology to talk about, nand flash SSD might seem old, yesterday's news, long in the tooth and time for something else. However, for those who are focused on deployment vs. adoption, such as customers in general, nand flash SSD in its many packaging options has still not yet reached its full potential.

Despite the hype and fanfare from CEOs or their evangelists, along with loyal followers of startups that help drive industry adoption (e.g. what is talked about), there is still lots of upside growth in customer driven industry deployment (actually buying, installing and using) of nand flash SSD.

What about broad customer deployments?

Sure, there are the marquee customer success stories, the kind for which you need a high-capacity SAS or SATA drive just to hold all the YouTube videos, slide decks and press releases.

However, have we truly, reached broad customer deployment or broad industry adoption?

Hence, I see more startups coming into the market space, and some exiting on their own, via mergers and acquisition or other means.

Will we see a feeding frenzy or IPO craze as with earlier technology hype cycles? IMHO there will be some companies that get the big deal, and some will survive as new players running as a business vs. running to be acquired or go IPO. Others will survive by evolving into something else, while others will join the where are they now list.

If you are an SSD startup CEO, CxO or marketer, their PR person, evangelist or loyal follower, do not worry, as the SSD market and even nand flash is far from dead. On the other hand, if you think that it has hit its full stride, you are either missing the bigger picture, or too busy patting yourselves on the back for a job well done. There is much more opportunity out there, and not even all the low hanging fruit has been picked yet.

Check out the conversation with Dave Raffo along with comments from others here.

Related links on storage IO metrics and SSD performance
What is the best kind of IO? The one you do not have to do
Is SSD dead? No, however some vendors might be
Storage and IO metrics that matter
IO IO it is off to Storage and IO metrics we go
SSD and Storage System Performance
Speaking of speeding up business with SSD storage
Are Hard Disk Drives (HDD’s) getting too big?
Has SSD put Hard Disk Drives (HDD’s) On Endangered Species List?
Why SSD based arrays and storage appliances can be a good idea (Part I)
IT and storage economics 101, supply and demand
Researchers and marketers dont agree on future of nand flash SSD
EMC VFCache respinning SSD and intelligent caching (Part I)
SSD options for Virtual (and Physical) Environments Part I: Spinning up to speed on SSD
SSD options for Virtual (and Physical) Environments Part II: The call to duty, SSD endurance
SSD options for Virtual (and Physical) Environments Part III: What type of SSD is best for you?
SSD options for Virtual (and Physical) Environments Part IV: What type of SSD is best for your needs

Ok, nuff said for now

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

More storage and IO metrics that matter

It is great to see more conversations and coverage around storage metrics that matter beyond simply focusing on cost per GByte or TByte (e.g. space capacity). Likewise, it is also good to see conversations expanding beyond data footprint reduction (DFR) from a space capacity savings or reduction ratio to also address data movement and transfer rates. Also good to see is an increase in discussion around input/output operations per second (IOPS), tying into conversations from virtualization, VDI and cloud to Solid State Devices (SSD).

Other storage and IO metrics that matter include latency or response time, which is how fast work is done or how much time is spent waiting. Latency also ties to IOPS in that as more work arrives to be done (IOPS) of various sizes, random or sequential, reads or writes, queue depths are an indicator of how well work is flowing. Another storage and IO metric that matters is availability, because without it, performance or capacity can be affected. Likewise, without performance, availability can be affected.
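
As a rough back of the envelope illustration (and only that), the basic relationships between IOPS, IO size, bandwidth, latency and queue depth can be sketched in a few lines of Python; the workload numbers below are made up assumptions for discussion, not measurements from any tool, product or vendor.

    # Hypothetical example numbers for discussion, not measured results
    iops = 20000             # IO operations per second the workload is doing
    io_size_kb = 8           # average IO size in KBytes (e.g. a database doing 8K IOs)
    avg_latency_ms = 2.0     # average response time per IO in milliseconds

    # Bandwidth (data movement or transfer rate) is simply IOPS times IO size
    bandwidth_mb_sec = iops * io_size_kb / 1024.0

    # Little's Law: average outstanding IOs (queue depth) = arrival rate x response time
    avg_queue_depth = iops * (avg_latency_ms / 1000.0)

    print("Bandwidth: %.1f MBps" % bandwidth_mb_sec)      # about 156.2 MBps
    print("Average queue depth: %.1f" % avg_queue_depth)  # about 40 outstanding IOs

In other words, an IOPS number by itself only tells part of the story; IO size, latency and queue depth need to come along for the ride.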

Needless to say that I am just scratching the surface here with storage and IO metrics that matter for physical, virtual and cloud environments from servers to networks to storage.

Here is a link to a post I did called IO, IO, it is off to storage and IO metrics we go that ties in themes of performance measurements and solid-state disk (SSD) among others. Also check out this piece about why VASA (vSphere APIs for Storage Awareness) is important to have your VMware CASA, along with Windows boot storage and IO performance for VDI and traditional planning purposes.

Check out this post about metrics and measurements that matter along with this conversation about IOPs, capacity, bandwidth and purchasing discussion topics.

Related links on storage IO metrics and SSD performance
What is the best kind of IO? The one you do not have to do
Is SSD dead? No, however some vendors might be
Storage and IO metrics that matter
IO IO it is off to Storage and IO metrics we go
SSD and Storage System Performance
Speaking of speeding up business with SSD storage
Are Hard Disk Drives (HDD’s) getting too big?
Has SSD put Hard Disk Drives (HDD’s) On Endangered Species List?
Why SSD based arrays and storage appliances can be a good idea (Part I)
IT and storage economics 101, supply and demand
Researchers and marketers dont agree on future of nand flash SSD
EMC VFCache respinning SSD and intelligent caching (Part I)
SSD options for Virtual (and Physical) Environments Part I: Spinning up to speed on SSD
SSD options for Virtual (and Physical) Environments Part II: The call to duty, SSD endurance
SSD options for Virtual (and Physical) Environments Part III: What type of SSD is best for you?
SSD options for Virtual (and Physical) Environments Part IV: What type of SSD is best for your needs

Ok, nuff said for now

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

What is the best kind of IO? The one you do not have to do


data infrastructure server storage I/O trends

Updated 2/10/2018

What is the best kind of IO? If no IO (input/output) operation is the best IO, then the second best IO is the one that can be done as close to the application and processor as possible, with the best locality of reference. The third best IO is the one that can be done in less time, or with the least cost or impact to the requesting application, which means moving further down the memory and storage stack (figure 1).

Figure 1: Memory and storage hierarchy

The problem with IOs is that they are a basic operation to get data into and out of a computer or processor, so they are required; however, they also have an impact on performance, response or wait time (latency). IOs require CPU or processor time and memory to set up and then process the results, as well as IO and networking resources to move data to their destination or retrieve it from where it is stored. While IOs cannot be eliminated, their impact can be greatly improved or optimized by doing fewer of them via caching and grouped reads or writes (pre-fetch, write behind), among other techniques and technologies.
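
To put a rough number on why doing fewer (or closer) IOs matters, here is a minimal Python sketch of effective response time when some portion of reads get resolved from a local cache; the hit ratio and latencies are assumed example values for discussion, not measurements of any particular product.

    # Assumed example values, not measurements of any specific product
    cache_hit_ratio = 0.80       # portion of reads resolved from a local (e.g. PCIe flash) cache
    cache_latency_ms = 0.1       # response time for a local cache hit
    backend_latency_ms = 5.0     # downstream storage system response time on a miss

    # Effective (average) latency is the weighted mix of hits and misses
    effective_latency_ms = (cache_hit_ratio * cache_latency_ms +
                            (1.0 - cache_hit_ratio) * backend_latency_ms)

    print("Effective latency: %.2f ms vs. %.2f ms with no cache"
          % (effective_latency_ms, backend_latency_ms))  # 1.08 ms vs. 5.00 ms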

Think of it this way, instead of going on multiple errands, sometimes you can group multiple destinations together making for a shorter, more efficient trip; however, that optimization may also take longer. Hence sometimes it makes sense to go on a couple of quick, short low latency trips vs. one single larger one that takes half a day however accomplishes many things. Of course, how far you have to go on those trips (e.g. locality) makes a difference of how many you can do in a given amount of time.

What is locality of reference?

Locality of reference refers to how close (e.g. location) data exists to where it is needed (being referenced) for use. For example, the best locality of reference in a computer would be registers in the processor core, then level 1 (L1), level 2 (L2) or level 3 (L3) onboard cache, followed by dynamic random access memory (DRAM). Then would come memory, also known as storage, on PCIe cards such as nand flash solid state devices (SSD), or storage accessible via an adapter on a direct attached storage (DAS), SAN or NAS device. In the case of a PCIe nand flash SSD card, even though physically the nand flash SSD is closer to the processor, there is still the overhead of traversing the PCIe bus and associated drivers. To help offset that impact, PCIe cards use DRAM as cache or buffers for data along with meta or control information to further optimize and improve locality of reference. In other words, this helps with cache hits, cache use and cache effectiveness vs. simply boosting cache utilization.
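
To put locality of reference into rough numbers, here is a small Python sketch that walks a simplified version of the figure 1 hierarchy; the access times are generic order of magnitude assumptions for discussion, not vendor specifications.

    # Order of magnitude example access times in milliseconds (assumptions, not vendor specs)
    hierarchy = [
        ("L1/L2/L3 processor cache", 0.000001),   # about 1 nanosecond
        ("DRAM main memory",         0.0001),     # about 100 nanoseconds
        ("PCIe nand flash SSD",      0.1),        # about 100 microseconds
        ("SAN/NAS storage system",   5.0),        # about 5 milliseconds
    ]

    for location, latency_ms in hierarchy:
        # How many back to back (serial) operations fit in one second at this distance
        ops_per_second = 1000.0 / latency_ms
        print("%-26s %11.6f ms  (~%.0f serial ops/sec)"
              % (location, latency_ms, ops_per_second))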

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

What can you do to cut the impact of IO?

  • Establish baseline performance and availability metrics for comparison
  • Realize that IOs are a fact of IT virtual, physical and cloud life
  • Understand what is a bad IO along with its impact
  • Identify why an IO is bad, expensive or causing an impact
  • Find and fix the problem, either with software, application or database changes
  • Throw more software caching tools, hypervisors or hardware at the problem
  • Hardware includes faster processors with more DRAM and fast internal busses
  • Leveraging local PCIe flash SSD cards for caching or as targets
  • Utilize storage systems or appliances that have intelligent caching and storage optimization capabilities (performance, availability, capacity).
  • Compare changes and improvements to baseline, quantify improvement (see the simple sketch below)
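
Tying the first and last bullets above together, a minimal sketch of quantifying improvement against a baseline might look like the following Python; the metric names and values are illustrative assumptions, not output from any particular monitoring tool.

    # Illustrative baseline vs. post-change metrics (assumed values, not real tool output)
    baseline = {"iops": 12000, "avg_latency_ms": 6.5, "cpu_pct": 70}
    after    = {"iops": 18000, "avg_latency_ms": 2.1, "cpu_pct": 55}

    for metric in baseline:
        before_val, after_val = baseline[metric], after[metric]
        change_pct = (after_val - before_val) / float(before_val) * 100.0
        print("%-16s before=%-8s after=%-8s change=%+.1f%%"
              % (metric, before_val, after_val, change_pct))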

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

EMC VFCache respinning SSD and intelligent caching (Part II)

This is the second of a two part series pertaining to EMC VFCache, you can read the first part here.

In this part of the series, let's look at some common questions along with comments and perspectives.

Common questions, answers, comments and perspectives:

Why would EMC not just go into the same market space and mode as FusionIO, a model that many other vendors seem eager to follow? IMHO many vendors are following or chasing FusionIO, thus most are selling in the same way, perhaps to the same customers. Some of those vendors could very easily, if they have not already, make a quick change to their playbook, adding some new moves to reach a broader audience.

Another smart move here is that by taking a companion or complementary approach, EMC can continue selling existing storage systems to customers and keep those investments in place, while also supporting competitors' products. In addition, for those customers who are slow to adopt SSD based techniques, this is a relatively easy and low risk way to gain confidence. Granted, the disk drive was declared dead several years (and yes, also several decades) ago; however, it is and will stay alive for many years, with SSD helping to close the IO storage and performance gap.

Data center and storage IO performance capacity gap (Courtesy of Cloud and Virtual Data Storage Networking (CRC Press))

Has this been done before? There have been other vendors who have done LUN caching appliances in the past going back over a decade. Likewise there are PCIe RAID cards that support flash SSD as well as DRAM based caching. Even NetApp has had similar products and functionality with their PAM cards.

Does VFCache work with other PCIe SSD cards such as FusionIO? No. VFCache is a combination of software IO intercept and intelligent cache driver along with a PCIe SSD flash card (which, as EMC has indicated, could be supplied from different manufacturers). Thus for VFCache to be VFCache requires the EMC IO intercept and intelligent cache software driver.

Does VFCache work with other vendors' storage? Yes. Refer to the EMC support matrix; however, the product has been architected and designed to install and coexist in a customer's existing environment, which means supporting different EMC block storage systems as well as those from other vendors. Keep in mind that a main theme of VFCache is to complement, coexist with, enhance and protect customers' investments in storage systems to improve their effectiveness and productivity, as opposed to replacing them.

Does VFCache introduce a new point of vendor lock-in or stickiness? Some will see or position this as a new form of vendor lock-in. Others, assuming that EMC supports different vendors' storage systems downstream as well as offering options for different PCIe flash cards and keeping the solution affordable, will assert it is no more lock-in than other solutions. In fact, by supporting third party storage systems as opposed to replacing them, smart sales people and marketeers will position VFCache as being more open and interoperable than some other PCIe flash card vendors' approaches. Keep in mind that avoiding vendor lock-in is a shared responsibility (read more here).

Does VFCache work with NAS? VFCache does not work with NAS (NFS or CIFS) attached storage.

Does VFCache work with databases? Yes, VFCache is well suited for little data (e.g. database) and traditional OLTP or general business application processing that may not be covered or supported by other so-called big data focused or optimized solutions. Refer to this EMC document (and this document here) for more information.

Does VFCache only work with little data? While VFCache is well suited for little data (e.g. databases, SharePoint, file and web servers, traditional business systems), it is also able to work with other forms of unstructured data.

Does VFCache need VMware? No. While VFCache works with VMware vSphere, including a vCenter plug-in, it does not need a hypervisor and is as practical in a physical machine (PM) as it is in a virtual machine (VM).

Does VFCache work with Microsoft Windows? Yes, Refer to the EMC support matrix for specific server operating systems and hypervisor version support.

Does VFCache work with other unix platforms? Refer to the EMC support matrix for specific server operating systems and hypervisor version support.

How are reads handled with VFCache? The VFCache software (driver if you prefer) intercepts IO requests to LUNs that are being cached, performing a quick lookup to see if there is a valid cache entry on the physical VFCache PCIe card. If there is a cache hit, the IO is resolved from the closer or local PCIe card cache, making for a lower latency or faster response time IO. In the case of a cache miss, the VFCache driver simply passes the IO request onto the normal SCSI or block (e.g. iSCSI, SAS, FC, FCoE) stack for processing by the downstream storage system (or appliance). Note that when the requested data is retrieved from the storage system, the VFCache driver will, based on caching algorithm determinations, place a copy of the data in the PCIe read cache. Thus the real power of VFCache is the software implementing the cache lookup and cache management functions to leverage the PCIe card that complements the underlying block storage systems.
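
For discussion purposes only, here is a highly simplified Python sketch of the read through (and default write through) flow described above and in the next question: intercept, lookup, hit or miss, populate. It is a conceptual model under my own assumptions (the storage_system object is a hypothetical stand-in with read and write methods), not EMC's actual VFCache driver logic or API.

    # Conceptual read-through cache sketch, not actual VFCache driver code
    cache = {}  # stand-in for the PCIe flash card: maps (lun, block) -> data

    def read_block(lun, block, storage_system):
        key = (lun, block)
        if key in cache:                         # cache hit: resolve locally, lower latency
            return cache[key]
        data = storage_system.read(lun, block)   # cache miss: pass IO down the block stack
        cache[key] = data                        # populate cache per the caching algorithms
        return data

    def write_block(lun, block, data, storage_system):
        storage_system.write(lun, block, data)   # default mode: writes pass through downstream
        cache[(lun, block)] = data               # keep the cached copy coherent with latest data

    # A real implementation would also bound the cache size, evict entries per its algorithms
    # and handle coherency across reboots, which this sketch ignores.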

How are writes handled with VFCache? Unless put into a write cache mode, which is not the default, the VFCache software simply passes the IO operation onto the IO stack for downstream processing by the storage system or appliance attached via a block interface (e.g. iSCSI, SAS, FC, FCoE). Note that as part of the caching algorithms, the VFCache software will make determinations of what to keep in cache based on IO activity requests, similar to how cache management results in better cache effectiveness in a storage system. Given EMC's long history of working with intelligent cache algorithms, one would expect some of that DNA exists or will be leveraged further in future versions of the software. Ironically this is where other vendors with long cache effectiveness histories such as IBM, HDS and NetApp among others should also be scratching their collective heads saying wow, we can or should be doing that as well (or better).

Can VFCache be used as a write cache? Yes, while its default mode is to be used as a persistent read cache to complement server and application buffers in DRAM, along with enhancing the effectiveness of downstream storage system (or appliance) caches, VFCache can also be configured as a persistent write cache.

Does VFCache include FAST automated tiering between different storage systems? The first version is only a caching tool. However, think about it a bit: where the software sits, what storage systems it can work with, the ability to learn and understand IO paths and patterns, and you can get an idea of where EMC could evolve it to, similar to what they have done with RecoverPoint among other tools.

Evolving data access patterns and life cycles (more retention and reads)

Does VFCache mean all or nothing approach with EMC? While the complete VFCache solution comes from EMC (e.g. PCIe card and software), the solution will work with other block attached storage as well as existing EMC storage systems for investment protection.

Does VFCache support NAS based storage systems? The first release of VFCache only supports block based access; however, the server that VFCache is installed in could certainly be functioning as a general purpose NAS (NFS or CIFS) server (see supported operating systems in EMC interoperability notes) in addition to being a database or other application server.

Does VFCache require that all LUNs be cached? No, you can select which LUNs are cached and which ones are not.

Does VFCache run in an active / active mode? In the first release it is active passive, refer to EMC release notes for details.

Can VFCache be installed in multiple physical servers accessing the same shared storage system? Yes, however refer to EMC release notes on details about active / active vs. active / passive configuration rules for ensuring data integrity.

Who else is doing things like this? There are caching appliance vendors, as well as others such as NetApp and IBM who have used SSD flash caching cards in their storage systems or virtualization appliances. However keep in mind that VFCache is placing the caching function closer to the application that is accessing it, thereby improving on the locality of reference (e.g. storage and IO effectiveness).

Does VFCache work with SSD drives installed in EMC or other storage systems? Check the EMC product support matrix for specific tested and certified solutions, however in general if the SSD drive is installed in a storage system that is supported as a block LUN (e.g. iSCSI, SAS, FC, FCoE) in theory it should be possible to work with VFCache. Emphasis, visit the EMC support matrix.
What type of flash is being used?

What type of nand flash SSD memory is EMC using in the PCIe card? The first release of VFCache is leveraging enterprise class SLC (Single Level Cell) nand flash, which has been used in other EMC products for its endurance and long duty cycle to minimize or eliminate concerns of wear and tear while meeting read and write performance. EMC has indicated that they will also, as part of an industry trend, leverage MLC along with Enterprise MLC (EMLC) technologies on a go forward basis.

Doesn't nand flash SSD cache wear out? While nand flash SSD can wear out over time due to extensive write use, the VFCache approach mitigates this by being primarily a read cache, reducing the number of program/erase cycles (P/E cycles) that occur with write operations, as well as initially leveraging longer duty cycle SLC flash. EMC also has several years of experience implementing wear leveling algorithms in their storage system controllers to increase duty cycle and reduce wear on SLC flash, which will carry forward as MLC or Enterprise MLC (EMLC) techniques are leveraged. This differs from vendors who are positioning their SLC or MLC based flash PCIe SSD cards mainly for write operations, which will cause more P/E cycles to occur at a faster rate, reducing the duty or useful life of the device.
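
As a rough, hedged endurance arithmetic example (the P/E cycle rating, write amplification and write rate below are generic assumptions, not EMC or any other vendor's specifications), you can estimate how long a flash device might last under a given write workload and see why a read mostly cache helps:

    # Generic assumptions for illustration only, not vendor specifications
    capacity_gb = 300            # usable flash capacity
    pe_cycles = 100000           # assumed SLC program/erase cycle rating
    write_amplification = 1.2    # assumed controller and wear leveling overhead
    writes_gb_per_day = 500      # assumed host writes per day (a read cache writes far less)

    total_writes_gb = capacity_gb * pe_cycles / write_amplification
    endurance_days = total_writes_gb / writes_gb_per_day

    print("Estimated endurance: %.0f days (~%.0f years)"
          % (endurance_days, endurance_days / 365.0))

The point is not the specific numbers, rather that write volume (and write amplification) is what consumes P/E cycles, which is why a primarily read cache stretches the useful life of the flash.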

How much capacity does the VFCache PCIe card contain? The first release supports a 300GB card and EMC has indicated that added capacity and configuration options are in their plans.

Does this mean disks are dead? Contrary to popular industry folklore (or wish), the hard disk drive (HDD) has plenty of life left, part of which has been extended by being complemented by VFCache.

Various SSD locations, types, packaging and usage scenario options

Can VFCache work in blade servers? The VFCache software is transparent to blade, rack mount, tower or other types of servers. The hardware part of VFCache is a PCIe card, which means that the blade server or system will need to be able to accommodate a PCIe card to complement the PCIe based mezzanine IO card (e.g. iSCSI, SAS, FC, FCOE) used for accessing storage. What this means is that for blade systems or server vendors such as IBM, who have a PCIe expansion module for their H series blade systems (it consumes a slot normally used by a server blade), PCIe cache cards like those being initially released by IBM could work; however, check with the EMC interoperability matrix, as well as your specific blade server vendor, for PCIe expansion capabilities. Given that EMC leverages Cisco UCS for their vBlocks, one would assume that those systems will also see VFCache modules in them. NetApp partners with Cisco using UCS in their FlexPods, so you can see where that could go as well, along with potential other server vendor support including Dell, HP, IBM and Oracle among others.

What about benchmarks? EMC has released some technical documents that show performance improvements in Oracle environments such as this here. Hopefully we will see EMC also release other workloads for different applications including Microsoft Exchange Solution Reviewed Program (ESRP) along with SPC, similar to what IBM recently did with their systems among others.

How do the first EMC supplied workload simulations compare vs. other PCIe cards? This is tough to gauge as many SSD solutions, and in particular PCIe cards, are doing apples to oranges comparisons. For example, to generate a high IOPS rating for marketing purposes, most SSD solutions are stress performance tested at 512 bytes (1/2 of a KByte), or 1/8 of even a small 4KByte IO. Note that operating systems such as Windows are moving to a 4KByte page allocation size to align with growing IO sizes, with databases moving from the old average of 4KBytes to 8KBytes and larger. What is important to consider is the average IO size and activity profile (e.g. reads vs. writes, random vs. sequential) for your applications. If your application is doing ultra small 1/2 KByte IOs, or even smaller 64 byte IOs (which should be handled by better application or file system caching in DRAM), then the smaller IO size, record setting examples will apply. However, if your applications are more mainstream or larger, then those smaller IO size tests should be taken with a grain of salt. Also keep latency in mind, as many target or opportunity applications for VFCache are response time sensitive or can benefit from the improved productivity they enable.
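
To make the apples to oranges point concrete, here is a quick Python comparison of what the same device level bandwidth looks like when quoted as IOPS at different IO sizes; the bandwidth figure is an arbitrary example for illustration, not a measured VFCache or competitor result.

    # Arbitrary example bandwidth, not a measured result for any product
    device_bandwidth_mb_sec = 500.0

    for io_size_bytes in (512, 4096, 8192, 65536):
        iops = device_bandwidth_mb_sec * 1024 * 1024 / io_size_bytes
        print("IO size %6d bytes -> %9.0f IOPS at the same %.0f MBps"
              % (io_size_bytes, iops, device_bandwidth_mb_sec))

Same device, same bandwidth, yet the 512 byte IOPS number is 16 times larger than the 8KByte number, which is why IO size (and latency) needs to accompany any IOPS claim.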

What is locality of reference? Locality of reference refers to how close data is to where it is being requested or accessed from. The closer the data to the application requesting it, the faster the response time or the quicker the work gets done. For example in the figure below, L1/L2/L3 onboard processor caches are the fastest, yet smallest, while closest to the application running on the server. At the other extreme further down the stack, storage becomes larger capacity and lower cost, however lower performing.

Locality of reference data and storage memory

What does cache effectiveness vs. cache utilization mean? Cache utilization is an indicator of how much of the available cache capacity is being used, however it does not indicate whether the cache is being used well or not. For example, cache could be 100 percent used, however there could be a low hit rate. Thus cache effectiveness is a gauge of how well the available cache is being used to improve performance in terms of more work being done (IOPS or bandwidth) or lower latency and response time.
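
Here is a minimal sketch of that distinction using made up counters (not from any specific cache or tool): utilization looks at how full the cache is, while effectiveness looks at how often the cache actually resolves IOs.

    # Made-up counters for illustration, not from any specific cache or tool
    cache_capacity_gb = 300
    cache_used_gb = 295          # the cache is nearly full...
    reads_total = 1000000
    reads_hit_in_cache = 150000  # ...yet only a small portion of reads are hits

    utilization_pct = cache_used_gb / float(cache_capacity_gb) * 100.0
    hit_rate_pct = reads_hit_in_cache / float(reads_total) * 100.0

    print("Cache utilization: %.0f%%  Cache hit rate (effectiveness): %.0f%%"
          % (utilization_pct, hit_rate_pct))  # about 98% utilized, only 15% effective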

Isn't more cache better? More cache is not necessarily better; it is how the cache is being used that matters. This is a message that I would be disappointed in HDS if they were not to bring up as a point of messaging (or rebuttal), given their history of emphasizing cache effectiveness vs. size or quantity (Hu, that is a hint btw ;).

What is the performance impact of VFCache on the host server? EMC is saying at most 5 percent (or less) CPU consumption, which they claim is several times less than the competition's worst scenario, as well as claiming 512MB to 1GB of DRAM used on the server vs. several times that for their competitors. The difference could be expected to come from more offload functioning, including the flash translation layer (FTL), wear leveling and other optimization being handled by the PCIe card vs. being handled in the server's memory using host server CPU cycles.

How does this compare to what NetApp or IBM does? NetApp, IBM and others have done caching with SSD in their storage systems, or leveraged third party PCIe SSD cards from different vendors to be installed in servers and used as a storage target. Some vendors such as LSI have done caching on the PCIe cards (e.g. CacheCade, which in theory has a similar software caching concept to VFCache) to improve performance and effectiveness across JBOD and SAS devices.

What about stale (old or invalid) reads, how does VFCache handle or protect against those? Stale reads are handled via the VFCache management software tool or driver which leverages caching algorithms to decide what is valid or invalid data.

How much does VFCache cost? Refer to EMC announcement pricing, however EMC has indicated that they will be competitive with the market (supply and demand).

If a server shuts down or reboots, what happens to the data in the VFCache? Being that the data is in non volatile SLC nand flash memory, information is not lost when the server reboots or loses power in the case of a shutdown, thus it is persistent. While exact details are not known as of this time, it is expected that the VFCache driver and software do some form of cache coherency and validity checking to guard against stale reads or to discard any other invalid cache entries.

Industry trends and perspectives

What will EMC do with VFCache in the future and on a larger scale such as an appliance? EMC, via its own internal development and via acquisitions, has demonstrated the ability to use various clustered techniques such as RapidIO for VMAX nodes and InfiniBand for connecting Isilon nodes. Given an industry trend with several startups using PCIe flash cards installed in a server that then functions as an IO storage system, it seems likely given EMC's history and experience with different storage systems, caching, and interconnects that they could do something interesting. Perhaps Oracle Exadata III (Exadata I was HP, Exadata II was Sun/Oracle) could be an EMC based appliance (that is pure speculation btw)?

EMC has already shown how it can use SSD drives as a cache extension in VNX and CLARiiON systems (FAST Cache), in addition to as a target or storage tier combined with FAST for tiering. Given their history with caching algorithms, it would not be surprising to see other instantiations of the technology deployed in complementary ways.

Finally, EMC is showing that it can use nand flash SSD in different ways and various packaging forms to apply to diverse applications or customer environments. The companion or complementary approach EMC is currently taking contrasts with some other vendors who are taking an all or nothing, it's all SSD as disk is dead approach. Given the large installed base of disk based systems EMC as well as other vendors have in place, not to mention the investment by those customers, it makes sense to allow those customers the option of when, where and how they can leverage SSD technologies to coexist with and complement their environments. Thus with VFCache, EMC is using SSD as a cache enabler to address the decades old and growing storage IO to capacity performance gap in a force multiplier model that spreads the cost over more TBytes, PBytes or EBytes while increasing the overall benefit, in other words effectiveness and productivity.

Additional related material:
Part I: EMC VFCache respinning SSD and intelligent caching
IT and storage economics 101, supply and demand
2012 industry trends perspectives and commentary (predictions)
Speaking of speeding up business with SSD storage
New Seagate Momentus XT Hybrid drive (SSD and HDD)
Are Hard Disk Drives (HDDs) getting too big?
Unified storage systems showdown: NetApp FAS vs. EMC VNX
Industry adoption vs. industry deployment, is there a difference?
Two companies on parallel tracks moving like trains offset by time: EMC and NetApp
Data Center I/O Bottlenecks Performance Issues and Impacts
From bits to bytes: Decoding Encoding
Who is responsible for vendor lockin
EMC VPLEX: Virtual Storage Redefined or Respun?
EMC interoperability support matrix

Ok, nuff said for now, I think I see some storm clouds rolling in

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

EMC VFCache respinning SSD and intelligent caching (Part I)

This is the first part of a two part series covering EMC VFCache, you can read the second part here.

EMC formally announced VFCache (aka Project Lightning), an IO accelerator product that comprises a PCIe nand flash card (aka Solid State Device or SSD) and intelligent cache management software. In addition, EMC is also talking about the next phase of the flash business unit and Project Thunder. The approach EMC is taking with VFCache should not be a surprise given their history of starting out with memory and SSD and evolving it into an intelligent cache optimized storage solution.

Data center and storage IO performance capacity gap (Courtesy of Cloud and Virtual Data Storage Networking (CRC Press))

Could we see the future of where EMC will take VFCache along with other possible solutions already being hinted at by the EMC flash business unit by looking where they have been already?

Likewise by looking at the past can we see the future or how VFCache and sibling product solutions could evolve?

After all, EMC is no stranger to caching, with both nand flash SSD (e.g. FLASH CACHE, FAST and SSD drives) along with DRAM based caching across their product portfolio, not to mention caching being a core part of their company founding products that evolved into HDDs and more recently nand flash SSDs among others.

Industry trends and perspectives

Unlike others who also offer PCIe SSD cards, such as FusionIO with a focus on eliminating SANs or other storage (read their marketing), EMC not surprisingly is marching to a different beat. The beat EMC is marching to, or perhaps leading by example for others to follow, is that of going mainstream and using PCIe SSD cards as a cache to complement their own as well as other vendors' storage systems vs. replacing them. This is similar to what EMC and other mainstream storage vendors have done in the past, such as with SSD drives being used as a flash cache extension on CLARiiON or VNX based systems, as well as a target or storage tier.

Various SSD locations, types, packaging and usage scenario options

Other vendors including IBM, NetApp and Oracle among others have also leveraged various packaging options of Single Level Cell (SLC) or Multi Level Cell (MLC) flash as caches in the past. A different example of SSD being used as a cache is the Seagate Momentus XT, which is a desktop, workstation and consumer type device. Seagate has shipped over a million of the Momentus XT, which uses SLC flash as a cache to complement and enhance the integrated HDD performance (a 750GB model with 8GB of SLC memory is in the laptop I'm using to type this).

One of the premises of solutions such as those mentioned above for caching is the changing data access patterns and life cycles shown in the figure below.

Evolving data access patterns and life cycles (more retention and reads)

Put a different way, instead of focusing on just big data, corner cases (granted some of those are quite large) or ultra large cloud scale out solutions, EMC with VFCache is also addressing their core business, which includes little data. What will be interesting to watch and listen to is how some vendors will start to jump up and down saying that they have done or enabled what EMC is announcing for some time. In some cases those vendors will rightfully be making noise about something that they should have made noise about before.

EMC is bringing the SSD message to the mainstream business and storage marketplace, showing how it is a complement to, vs. a replacement of, existing storage systems. By doing so, they will show how to spread the cost of SSD out across a larger storage capacity footprint, boosting the effectiveness and productivity of those systems. This means that customers who install the VFCache product can accelerate the performance of both their existing EMC storage as well as storage systems from other vendors, preserving their technology along with people skills investment.

 

Key points of VFCache

  • Combines PCIe SLC nand flash card (300GB) with intelligent caching management software driver for use in virtualized and traditional servers

  • Making SSD complementary to existing installed block based disk (and or SSD) storage systems to increase their effectiveness

  • Providing investment protection while boosting productivity of existing EMC and third party storage in customer sites

  • Brings caching closer to the application where the data is accessed, while leveraging larger scale direct attached and SAN block storage

  • Focusing message for SSD back on to little data as well as big data for mainstream broad customer adoption scenarios

  • Leveraging the benefit and strength of SSD as a read cache and the scalability of underlying downstream disk for data storage

  • Reducing concerns around SSD endurance or duty cycle wear and tear by using as a read cache

  • Off loads underlying storage systems from some read requests enabling them to do more work for other servers

Additional related material:
Part II: EMC VFCache respinning SSD and intelligent caching
IT and storage economics 101, supply and demand
2012 industry trends perspectives and commentary (predictions)
Speaking of speeding up business with SSD storage
New Seagate Momentus XT Hybrid drive (SSD and HDD)
Are Hard Disk Drives (HDDs) getting too big?
Unified storage systems showdown: NetApp FAS vs. EMC VNX
Industry adoption vs. industry deployment, is there a difference?
Two companies on parallel tracks moving like trains offset by time: EMC and NetApp
Data Center I/O Bottlenecks Performance Issues and Impacts
From bits to bytes: Decoding Encoding
Who is responsible for vendor lockin
EMC VPLEX: Virtual Storage Redefined or Respun?
EMC interoperability support matrix

Ok, nuff said for now, I think I see some storm clouds rolling in

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved