Which Enterprise HDD for Content Server Platform



Updated 1/23/2018


Insight for effective server storage I/O decision making
Server StorageIO Lab Review

Which enterprise HDD to use for content servers

This post is the first in a multi-part series based on a hands-on lab report white paper I did compliments of Equus Computer Systems and Seagate that you can read in PDF form here. The focus is looking at the Equus Computer Systems (www.equuscs.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drives (HDD’s). I was given the opportunity to do hands-on testing, running different application workloads on a 2U content solution platform to see how various Seagate Enterprise 2.5” HDD’s handle them. This includes Seagate’s Enterprise Performance HDD’s with the enhanced caching feature.

Issues And Challenges

Even though Non-Volatile Memory (NVM), including NAND flash solid state devices (SSDs), has become popular storage for use internal as well as external to servers, there remains a need for HDD’s. Like many of you who need to make informed server, storage, I/O hardware, software and configuration selection decisions, I often find time is in short supply.

A common industry trend is to use SSD and HDD based storage media together in hybrid configurations. Another industry trend is that HDD’s continue to be enhanced with larger space capacity in the same or smaller footprint, as well as with performance improvements. Thus, a common challenge is what type of HDD to use for various content and application workloads, balancing performance, availability, capacity and economics.
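To make that balancing act concrete, here is a minimal sketch (not from the white paper) that scores drive options on cost per GB and cost per IOP. The prices match the Servers Direct pricing cited later in this series; the IOP figures are illustrative assumptions, not measured or vendor-specified values.

```python
# Minimal sketch: comparing HDD options on capacity economics vs.
# performance economics. IOP numbers below are assumed placeholders.
def cost_metrics(price_usd, capacity_gb, iops):
    """Return (cost per GB, cost per IOP) for a single drive."""
    return price_usd / capacity_gb, price_usd / iops

drives = {
    "Enterprise 15K Performance": (595.00, 600, 400),   # IOPs assumed
    "Enterprise 10K Performance": (875.00, 1800, 350),  # IOPs assumed
    "Enterprise Capacity 7.2K":   (399.00, 2000, 150),  # IOPs assumed
}

for name, (price, gb, iops) in drives.items():
    per_gb, per_iop = cost_metrics(price, gb, iops)
    print(f"{name}: ${per_gb:.2f}/GB, ${per_iop:.2f}/IOP")
```

A performance drive wins on cost per IOP while a capacity drive wins on cost per GB, which is why hybrid configurations pair them.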

Content Applications and Servers

Fast Content Needs Fast Solutions

An industry and customer trend is that information and data are getting larger, living longer, and there is more of it. This ties to the fundamental theme that applications and their underlying hardware platforms exist to process, move, protect, preserve and serve information.

Content solutions span from video (4K, HD, SD and legacy streaming video, pre-/post-production, and editing), audio, imaging (photo, seismic, energy, healthcare, etc.) to security surveillance (including Intelligent Video Surveillance [IVS] as well as Intelligence Surveillance and Reconnaissance [ISR]). In addition to big fast data, other content solution applications include content distribution network (CDN) and caching, network function virtualization (NFV) and software-defined network (SDN), to cloud and other rich unstructured big fast media data, analytics along with little data (e.g. SQL and NoSQL database, key-value stores, repositories and meta-data) among others.

Content Solutions And HDD Opportunities

A common theme with content solutions is that they get defined with some amount of hardware (compute, memory and storage, I/O networking connectivity) as well as some type of content software. Fast content applications need fast software, multi-core processors (compute), large memory (DRAM, NAND flash, SSD and HDD’s) along with fast server storage I/O network connectivity. Content-based applications benefit from having frequently accessed data as close as possible to the application (e.g. locality of reference).

Content solution and application servers need flexibility regarding compute options (number of sockets, cores, threads), main memory (DRAM DIMMs), PCIe expansion slots, storage slots and other connectivity. An industry trend is leveraging platforms with multi-socket processors, dozens of cores and threads (e.g. logical processors) to support parallel or high-concurrent content applications. These servers have large amounts of local storage space capacity (NAND flash SSD and HDD) and associated I/O performance (PCIe, NVMe, 40 GbE, 10 GbE, 12 Gbps SAS etc.) in addition to using external shared storage (local and cloud).

Where To Learn More

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What This All Means

Fast content applications need fast content and flexible content solution platforms such as those from Equus Computer Systems with HDD’s from Seagate. Key to a successful content application deployment is having the flexibility to hardware define and software define the platform to meet your needs. Just as there are many different types of content applications along with diverse environments, content solution platforms need to be flexible, scalable and robust, not to mention cost effective.

Continue reading part two of this multi-part series here where we look at how and what to test as well as project planning.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Part 2 – Which HDD for Content Applications – HDD Testing



Updated 1/23/2018


Insight for effective server storage I/O decision making
Server StorageIO Lab Review

Which enterprise HDD to use for content servers

This is the second in a multi-part series (read part one here) based on a white paper hands-on lab report I did compliments of Servers Direct and Seagate that you can read in PDF form here. The focus is looking at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drive (HDD’s). In this post we look at some decisions and configuration choices to make for testing content applications servers as well as project planning.

Content Solution Test Objectives

In a short period of time, collect performance and other server storage I/O decision-making information on various HDD’s running different content workloads.

Working with the Servers Direct staff, a suitable content solution platform test configuration was created. In addition to providing two Intel-based content servers, Servers Direct worked with their partner Seagate to arrange for various enterprise-class HDD’s to be evaluated. For this series of content application tests, being short on time, I chose to run some simple workloads including database, basic file (large and small) processing and general performance characterization.

Content Solution Decision Making

Knowing how Non-Volatile Memory (NVM) NAND flash SSD (1) devices (drives and PCIe cards) perform, what would be the best HDD based storage option for my given set of applications? Different applications have various performance, capacity and budget considerations. Different types of Seagate Enterprise class 2.5” Small Form Factor (SFF) HDD’s were tested.

While revolutions per minute (RPM) still plays a role in HDD performance, there are other factors including internal processing capabilities, software or firmware algorithm optimization, and caching. Most HDD’s today have some amount of DRAM for read caching and other operations. Seagate Enterprise Performance HDD’s with the enhanced caching feature (2) are examples of devices that accelerate storage I/O speed vs. traditional 10K and 15K RPM drives.
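To illustrate why a little cache in front of slower media helps, here is a minimal sketch of a least recently used (LRU) read cache. This demonstrates the general caching principle only; it is not Seagate’s actual firmware algorithm, and the workload is an illustrative skewed access pattern.

```python
# Minimal LRU read-cache sketch: frequently accessed blocks are served
# at cache speed while cold blocks fall through to the disk media.
from collections import OrderedDict

def run_workload(accesses, cache_size):
    """Return the cache hit rate for a sequence of block accesses."""
    cache, hits = OrderedDict(), 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)       # LRU: refresh recency on a hit
        else:
            cache[block] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict the least recently used
    return hits / len(accesses)

# Skewed workload: a small hot set receives most of the accesses
accesses = [0, 1, 2, 0, 1, 0, 3, 0, 1, 4] * 100
print(run_workload(accesses, cache_size=4))
```

Even a cache far smaller than the working set captures most accesses when the workload is skewed, which is the bet the enhanced cache feature makes.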

Project Planning And Preparation

Workloads to be tested included:

  • Database read/writes
  • Large file processing
  • Small file processing
  • General I/O profile

Project testing consisted of five phases, some of which overlapped with others:

Phase 1 – Plan
Identify candidate workloads that could be run in the given amount of time, determine time schedules and resource availability, create a project plan.

Phase 2 – Define
Hardware define and software define the test platform.

Phase 3 – Setup
The objective was to assess plug-play capability of the server, storage and I/O networking hardware with a Linux OS before moving on to the reported workloads in the next phase. Initial setup and configuration of hardware and software, installation of additional devices along with software configuration, troubleshooting, and learning as applicable. This phase consisted of using Ubuntu Linux 14.04 server as the operating system (OS) along with MySQL 5.6 as a database server during initial hands-on experience.

Phase 4 – Execute
This consisted of using Windows 2012 R2 server as the OS along with Microsoft SQL Server on the system under test (SUT) to support various workloads. Results of this phase are reported below.

Phase 5 – Analyze
Results from the workloads run in phase 4 were analyzed and summarized into this document.

(Note 1) Refer to Seagate 1200 12 Gbps Enterprise SAS SSD StorageIO lab review

(Note 2) Refer to Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy

Planning And Preparing The Tests

As with most any project there were constraints to contend with and work around.

Test constraints included:

  • Short-time window
  • Hardware availability
  • Amount of hardware
  • Software availability

Three most important constraints and considerations for this project were:

  • Time – This was a project with a very short time “runway”, something common in most customer environments that are looking to make knowledgeable server storage I/O decisions.
  • Amount of hardware – Limited amount of DRAM main memory, sixteen 2.5” internal hot-swap storage slots for HDD’s as well as SSDs. Note that for a production content solution platform, additional DRAM can easily be added, along with extra external storage enclosures to scale memory and storage capacity to fit your needs.
  • Software availability – Utilize common software and management tools publicly available so anybody could leverage those in their own environment and tests.

The following content application workloads were profiled:

  • Database reads/writes – Updates, inserts, read queries for a content environment
  • Large file processing – Streaming of large video, images or other content objects.
  • Small file processing – Processing of many small files found in some content applications
  • General I/O profile – IOP, bandwidth and response time relevant to content applications

Where To Learn More

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What This All Means

There are many different types of content applications ranging from little data databases to big data analytics as well as very big fast data such as for video. Likewise there are various workloads and characteristics to test. The best test and metrics are those that apply to your environment and application needs.

Continue reading part three of this multi-part series here looking at how the systems and HDD’s were configured and tested.

Ok, nuff said, for now.

Gs



Part 3 – Which HDD for Content Applications – HDD Test Configuration


Updated 1/23/2018


Insight for effective server storage I/O decision making
Server StorageIO Lab Review

Which enterprise HDD to use for content servers

This is the third in a multi-part series (read part two here) based on a white paper hands-on lab report I did compliments of Servers Direct and Seagate that you can read in PDF form here. The focus is looking at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drives (HDD’s). In this post the focus expands to hardware and software defining as well as configuring the test environments along with application workloads.

Defining Hardware Software Environment

Servers Direct content platforms are software defined and hardware defined to your specific solution needs. For my test-drive, I used a pair of 2U Content Solution platforms, one as a client System Test Initiator (STI) (3), the other as the server System Under Test (SUT) shown in figure-1. With the STI configured and the SUT set up, Seagate Enterprise class 2.5” 12Gbps SAS HDD’s were added to the configuration.

(Note 3) The System Test Initiator (STI) was hardware defined with dual Intel Xeon E5-2695 v3 (2.30 GHz) processors and 32GB RAM running Windows Server 2012 R2 with two network connections to the SUT. Network connections from the STI to the SUT included an Intel GbE X540-AT2 as well as an Intel XL710 Q2 40 GbE Converged Network Adapter (CNA). In addition to software defining the STI with Windows Server 2012 R2, Dell Benchmark Factory (V7.1 64-bit build 496), part of the Database Administrators (DBA) Toad Tools (including free versions), was also used. For those familiar with HammerDB or Sysbench among others, Benchmark Factory is an alternative that supports various workloads and database connections with robust reporting, scripting and automation. Other installed tools included Spotlight on Windows, Iperf 2.0.5 for generating network traffic and reporting results, as well as Vdbench with various scripts.

The SUT setup (4) included four Enterprise 10K and two 15K Performance drives with the enhanced performance caching feature enabled, along with two Enterprise Capacity 2TB HDD’s, all attached to an internal 12Gbps SAS RAID controller.

(Note 4) The System Under Test (SUT) had dual Intel Xeon E5-2697 v3 (2.60 GHz) processors providing 56 logical processors, 64GB of RAM (expandable to 768GB with 32GB DIMMs, or 3TB with 128GB DIMMs) and two network connections. Network connections from the STI to the SUT consisted of an Intel GbE X540-AT2 as well as an Intel XL710 Q2 40 GbE CNA. The GbE LAN connection was used for management purposes while the 40 GbE was used for data traffic. The system disk was a 6Gbps SATA flash SSD. Seagate Enterprise class HDD’s were installed into the 16 available 2.5” small form factor (SFF) drive slots. The eight leftmost drive slots were connected to an Intel RMS3CC080 12 Gbps SAS RAID internal controller. The “Blue” drives in the middle were connected to both an NVMe PCIe card and the motherboard 6 Gbps SATA controller using an SFF-8639 connector. The four rightmost drives were also connected to the motherboard 6 Gbps SATA controller.

Figure-1 STI and SUT hardware as well as software defined test configuration

This included four Enterprise 10K and two 15K Performance drives with the enhanced performance caching feature enabled, along with two Enterprise Capacity 2TB HDD’s, all attached to an internal 12Gbps SAS RAID controller. Five 6 Gbps SATA Enterprise Capacity 2TB HDD’s were set up using Microsoft Windows as a spanned volume. The system disk was a 6Gbps flash SSD and an NVMe flash SSD drive was used for database temp space.

What About NVM Flash SSD?

NAND flash and other Non-Volatile Memory (NVM) and SSD complement content solutions. A little bit of flash SSD in the right place can have a big impact. The focus for these tests is HDD’s, however some flash SSDs were used as system boot and database temp (e.g. tempdb) space. Refer to StorageIO Lab reviews and visit www.thessdplace.com.

Seagate Enterprise HDD’s Used During Testing

Various Seagate Enterprise HDD specifications used in the testing are shown below in table-1.

 

| Qty | Seagate HDD’s | Capacity | RPM | Interface | Size | Model | Servers Direct Price Each | Configuration |
|-----|---------------|----------|-----|-----------|------|-------|---------------------------|---------------|
| 4 | Enterprise 10K Performance | 1.8TB | 10K with cache | 12 Gbps SAS | 2.5” | ST1800MM0128 with enhanced cache | $875.00 USD | HW(5) RAID 10 and RAID 1 |
| 2 | Enterprise Capacity 7.2K | 2TB | 7.2K | 12 Gbps SAS | 2.5” | ST2000NX0273 | $399.00 USD | HW RAID 1 |
| 2 | Enterprise 15K Performance | 600GB | 15K with cache | 12 Gbps SAS | 2.5” | ST600MX0082 with enhanced cache | $595.00 USD | HW RAID 1 |
| 5 | Enterprise Capacity 7.2K | 2TB | 7.2K | 6 Gbps SATA | 2.5” | ST2000NX0273 | $399.00 USD | SW(6) RAID Span Volume |

Table-1 Seagate Enterprise HDD specifications and Servers Direct pricing

URLs for additional Servers Direct content platform information:
https://serversdirect.com/solutions/content-solutions
https://serversdirect.com/solutions/content-solutions/video-streaming
https://www.serversdirect.com/File%20Library/Data%20Sheets/Intel-SDR-2P16D-001-ds2.pdf

URLs for additional Seagate Enterprise HDD information:
https://serversdirect.com/Components/Drives/id-HD1558/Seagate_ST2000NX0273_2TB_Hard_Drive

https://serversdirect.com/Components/Drives/id-HD1559/Seagate_ST600MX0082_SSHD

Seagate Performance Enhanced Cache Feature

The Enterprise 10K and 15K Performance HDD’s tested had the enhanced cache feature enabled. This feature provides “turbo” boost-like acceleration for both read and write I/O operations. HDD’s with the enhanced cache feature leverage the fact that some NVM such as flash in the right place can have a big impact on performance (7).

In addition to their performance benefit from combining a hybrid storage model (flash with HDD’s along with software defined cache algorithms), these devices are “plug-and-play”. Being “plug-and-play”, no extra special adapters, controllers, device drivers, tiering or cache management software tools are required.

(Note 5) Hardware (HW) RAID using Intel server on-board LSI based 12 Gbps SAS RAID card, RAID 1 with two (2) drives, RAID 10 with four (4) drives. RAID configured in write-through mode with default stripe / chunk size.

(Note 6) Software (SW) RAID using Microsoft Windows Server 2012 R2 (span). Hardware RAID used write-through cache (e.g. no buffering) with read-ahead enabled and a default 256KB stripe/chunk size.

(Note 7) Refer to Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy

The Seagate Enterprise Performance 10K and 15K with enhanced cache feature are a good example of how there is more to performance in today’s HDD’s than simply comparing RPM’s, drive form factor or interface.

Where To Learn More

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What This All Means

Careful and practical planning are key steps for testing various resources as well as aligning the applicable tools, configuration to meet your needs.

Continue reading part four of this multi-part series here where the focus expands to database application workloads.

Ok, nuff said, for now.

Gs



Part 4 – Which HDD for Content Applications – Database Workloads



Updated 1/23/2018

Insight for effective server storage I/O decision making
Server StorageIO Lab Review

Which enterprise HDD to use for content servers

This is the fourth in a multi-part series (read part three here) based on a white paper hands-on lab report I did compliments of Servers Direct and Seagate that you can read in PDF form here. The focus is looking at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drive (HDD’s). In this post the focus expands to database application workloads that were run to test various HDD’s.

Database Reads/Writes

Transaction Processing Performance Council (TPC) TPC-C like workloads were run against the SUT from the STI. These workloads simulated transactional, content management, meta-data and key-value processing. Microsoft SQL Server 2012 was configured and used with databases (each 470GB, e.g. scale 6000) created, with the workload generated by virtual users via Dell Benchmark Factory (running on the STI under Windows 2012 R2).

A single SQL Server database instance (8) was used on the SUT, however unique databases were created for each HDD set being tested. Both the main database file (.mdf) and the log file (.ldf) were placed on the same drive set being tested; keep in mind the constraints mentioned above. As time was a constraint, database workloads were run concurrently (9) with each other, except for the Enterprise 10K RAID 1 and RAID 10. One workload was run with two 10K HDD’s in a RAID 1 configuration, then another workload was run with a four drive RAID 10. In a production environment, ideally the .mdf and .ldf would be placed on separate HDD’s and SSDs.

To improve cache buffering, the SQL Server database instance memory could be increased from 16GB to a larger number, which would yield higher TPS numbers. Keep in mind the objective was not to see how fast I could make the databases run, rather how the different drives handled the workload.

(Note 8) The SQL Server Tempdb was placed on a separate NVMe flash SSD, also the database instance memory size was set to 16GB which was shared by all databases and virtual users accessing it.

(Note 9) Each user step was run for 90 minutes with a 30 minute warm-up preamble to measure steady-state operation.
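The measurement approach in note 9 (a warm-up preamble discarded before averaging the steady-state interval) can be sketched as follows. The per-minute sample numbers are illustrative, not measured results from these tests.

```python
# Sketch: compute a steady-state average by discarding warm-up samples,
# mirroring the 30-minute preamble before each 90-minute measurement.
def steady_state_avg(samples, warmup):
    """Average only the samples collected after the warm-up period."""
    measured = samples[warmup:]
    return sum(measured) / len(measured)

# Illustrative per-minute TPS samples: ramp-up, then steady operation
tps_per_minute = [50, 120, 200, 340, 360, 358, 362, 361]
print(steady_state_avg(tps_per_minute, warmup=4))
```

Averaging over the whole run would understate the drives, since the ramp-up reflects cache warming and population rather than sustained capability.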

| Config | Users | TPCC Like TPS | Single Drive Cost per TPS | Drive Cost per TPS | Single Drive Cost per GB Raw Cap. | Cost per GB Usable (Protected) Cap. | Drive Cost (Multiple Drives) | Protection Space Overhead | Cost per Usable GB per TPS | Resp. Time (Sec.) |
|--------|-------|---------------|---------------------------|--------------------|-----------------------------------|-------------------------------------|------------------------------|---------------------------|----------------------------|-------------------|
| ENT 15K R1 | 1 | 23.9 | $24.94 | $49.89 | $0.99 | $0.99 | $1,190 | 100% | $49.89 | 0.01 |
| ENT 10K R1 | 1 | 23.4 | $37.38 | $74.77 | $0.49 | $0.49 | $1,750 | 100% | $74.77 | 0.01 |
| ENT CAP R1 | 1 | 16.4 | $24.26 | $48.52 | $0.20 | $0.20 | $798 | 100% | $48.52 | 0.03 |
| ENT 10K R10 | 1 | 23.2 | $37.70 | $150.78 | $0.49 | $0.97 | $3,500 | 100% | $150.78 | 0.07 |
| ENT CAP SWR5 | 1 | 17.0 | $23.45 | $117.24 | $0.20 | $0.25 | $1,995 | 20% | $117.24 | 0.02 |
| ENT 15K R1 | 20 | 362.3 | $1.64 | $3.28 | $0.99 | $0.99 | $1,190 | 100% | $3.28 | 0.02 |
| ENT 10K R1 | 20 | 339.3 | $2.58 | $5.16 | $0.49 | $0.49 | $1,750 | 100% | $5.16 | 0.01 |
| ENT CAP R1 | 20 | 213.4 | $1.87 | $3.74 | $0.20 | $0.20 | $798 | 100% | $3.74 | 0.06 |
| ENT 10K R10 | 20 | 389.0 | $2.25 | $9.00 | $0.49 | $0.97 | $3,500 | 100% | $9.00 | 0.02 |
| ENT CAP SWR5 | 20 | 216.8 | $1.84 | $9.20 | $0.20 | $0.25 | $1,995 | 20% | $9.20 | 0.06 |
| ENT 15K R1 | 50 | 417.3 | $1.43 | $2.85 | $0.99 | $0.99 | $1,190 | 100% | $2.85 | 0.08 |
| ENT 10K R1 | 50 | 385.8 | $2.27 | $4.54 | $0.49 | $0.49 | $1,750 | 100% | $4.54 | 0.09 |
| ENT CAP R1 | 50 | 103.5 | $3.85 | $7.71 | $0.20 | $0.20 | $798 | 100% | $7.71 | 0.45 |
| ENT 10K R10 | 50 | 778.3 | $1.12 | $4.50 | $0.49 | $0.97 | $3,500 | 100% | $4.50 | 0.03 |
| ENT CAP SWR5 | 50 | 109.3 | $3.65 | $18.26 | $0.20 | $0.25 | $1,995 | 20% | $18.26 | 0.42 |
| ENT 15K R1 | 100 | 190.7 | $3.12 | $6.24 | $0.99 | $0.99 | $1,190 | 100% | $6.24 | 0.49 |
| ENT 10K R1 | 100 | 175.9 | $4.98 | $9.95 | $0.49 | $0.49 | $1,750 | 100% | $9.95 | 0.53 |
| ENT CAP R1 | 100 | 59.1 | $6.76 | $13.51 | $0.20 | $0.20 | $798 | 100% | $13.51 | 1.66 |
| ENT 10K R10 | 100 | 560.6 | $1.56 | $6.24 | $0.49 | $0.97 | $3,500 | 100% | $6.24 | 0.14 |
| ENT CAP SWR5 | 100 | 62.2 | $6.42 | $32.10 | $0.20 | $0.25 | $1,995 | 20% | $32.10 | 1.57 |

Table-2 TPC-C like workload results for various numbers of users across different drive configurations
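As a cross-check, the derived metrics in Table-2 can be reproduced from their raw inputs. Here is a short sketch using the four drive Enterprise 10K RAID 10 row at 20 users, with drive pricing taken from Table-1.

```python
# Reproduce two derived metrics from Table-2: drive cost per TPS and
# cost per usable (protected) GB for the Enterprise 10K RAID 10 set.
drive_price = 875.00      # one Enterprise 10K 1.8TB HDD (Table-1 pricing)
drive_count = 4           # RAID 10 (striped mirrors)
raw_gb_per_drive = 1800
tps = 389.0               # measured TPC-C like TPS at 20 users

total_cost = drive_price * drive_count           # cost of the drive set
cost_per_tps = total_cost / tps                  # drive cost per TPS
usable_gb = raw_gb_per_drive * drive_count / 2   # mirroring halves capacity
cost_per_usable_gb = total_cost / usable_gb

print(f"${cost_per_tps:.2f} per TPS, ${cost_per_usable_gb:.2f} per usable GB")
```

The same arithmetic applies to the other rows; only the drive count, price, capacity and protection overhead change per configuration.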

Figure-2 shows TPC-C TPS (red dashed line) workload scaling over various numbers of users (1, 20, 50, and 100) with peak TPS per drive shown. Also shown is the used space capacity (in green), with total raw storage capacity in blue cross hatch. Looking at the multiple metrics in context shows that the 600GB Enterprise 15K HDD with performance enhanced cache is a premium option as an alternative to, or a complement for, flash SSD solutions.

Figure-2 472GB Database TPS scaling along with cost per TPS and storage space used

In figure-2, the 1.8TB Enterprise 10K HDD with performance enhanced cache, while not as fast as the 15K, provides a good balance of performance, space capacity and cost effectiveness. A good use for the 10K drives is where some amount of performance is needed as well as a large amount of storage space for less frequently accessed content.

A low cost, lower performance option would be the 2TB Enterprise Capacity HDD’s, which have a good cost per capacity, however they lack the performance of the 15K and 10K drives with enhanced performance cache. A four drive RAID 10 along with a five drive software volume (Microsoft Windows) are also shown. For an apples to apples comparison, look at costs vs. capacity including the number of drives needed for a given level of performance.

Figure-3 is a variation of figure-2 showing TPC-C TPS (blue bar) and response time (red-dashed line) scaling across 1, 20, 50 and 100 users. Once again the Enterprise 15K with enhanced performance cache feature enabled has good performance in an apples to apples RAID 1 comparison.

Note that the best performance was with the four drive RAID 10 using 10K HDD’s. Given their popularity, a four drive RAID 10 configuration with the 10K drives was used. Not surprisingly, the four 10K drives performed better than the RAID 1 15Ks. Also note that using five drives in a software spanned volume provides a large amount of storage capacity and good performance, however with a larger drive footprint.

Figure-3 472GB Database TPS scaling along with response time (latency)

From a cost per space capacity perspective, the Enterprise Capacity drives have a good cost per GB. A hybrid solution for environments that do not need ultra-high performance would be to pair a small amount of flash SSD (10) (drives or PCIe cards), as well as the 10K and 15K performance enhanced drives, with the Enterprise Capacity HDD’s (11) along with cache or tiering software.

(Note 10) Refer to Seagate 1200 12 Gbps Enterprise SAS SSD StorageIO lab review

(Note 11) Refer to Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy

Where To Learn More

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What This All Means

If your environment is using applications that rely on databases, then test resources such as servers, storage and devices using tools that represent your environment. This means moving up the software and technology stack from basic storage I/O benchmark or workload generator tools such as Iometer, instead using either your own application or tools that can replay or generate various workloads that represent your environment.

Continue reading part five in this multi-part series here where the focus shifts to large and small file I/O processing workloads.

Ok, nuff said, for now.

Gs



Which Enterprise HDD for Content Applications Different File Size Impact



Updated 1/23/2018


Insight for effective server storage I/O decision making
Server StorageIO Lab Review

Which enterprise HDD to use for content servers

This is the fifth in a multi-part series (read part four here) based on a white paper hands-on lab report I did compliments of Servers Direct and Seagate that you can read in PDF form here. The focus is looking at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drive (HDD’s). In this post the focus looks at large and small file I/O processing.

File Performance Activity

Tip: Content solutions use files in various ways. Use the following to gain perspective on how various HDD’s handle workloads similar to your specific needs.

Two separate file processing workloads were run (12), one with a relatively small number of large files, and another with a large number of small files. For the large file processing (table-3), 5 GByte sized files were created and then accessed via 128 Kbyte (128KB) sized I/O over a 10 hour period with 90% reads using 64 threads (workers). The large file workload simulates what might be seen with higher definition video, image or other content streaming.
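As a rough sanity check on the streaming bandwidth these I/O sizes imply (assuming decimal units and 8 bits per byte), sustaining on the order of 1,000 of these 128KB I/Os per second works out to about 128MB per second, or just over 1 Gbps:

```python
# Convert a sustained MB-per-second rate to Gbps (decimal units, 8 bits/byte).
def mbps_to_gbps(mb_per_sec):
    return mb_per_sec * 8 / 1000

io_size_kb = 128     # per-I/O transfer size used in the large file workload
ios_per_sec = 1000   # assumed sustained rate for illustration
mb_per_sec = io_size_kb * ios_per_sec / 1000
print(mbps_to_gbps(mb_per_sec))
```

This is why the tables below report both activity rates and MBps: either alone can hide whether a drive set keeps up with a target streaming rate.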

(Note 12) File processing workloads were run using Vdbench 5.04 and file anchors with sample script configuration below. Instead of vdbench you could also use other tools such as sysbench or fio among others.

VdbenchFSBigTest.txt
# Sample script for big files testing
fsd=fsd1,anchor=H:,depth=1,width=5,files=20,size=5G
fwd=fwd1,fsd=fsd1,rdpct=90,xfersize=128k,fileselect=random,fileio=random,threads=64
rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=10h,interval=30

vdbench -f VdbenchFSBigTest.txt -m 16 -o Results_FSbig_H_060615

VdbenchFSSmallTest.txt
# Sample script for small files testing
fsd=fsd1,anchor=H:,depth=1,width=64,files=25600,size=16k
fwd=fwd1,fsd=fsd1,rdpct=90,xfersize=1k,fileselect=random,fileio=random,threads=64
rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=10h,interval=30

vdbench -f VdbenchFSSmallTest.txt -m 16 -o Results_FSsmall_H_060615
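Note 12 mentions fio as an alternative tool. For perspective, here is a rough fio analog of the big-file vdbench workload above. This is an illustrative sketch, not the configuration used in the lab; the directory path, job name and job-to-thread mapping are assumptions you would adjust for your environment.

```ini
; Rough fio analog of the vdbench big-file workload (illustrative only)
[global]
directory=/mnt/test        ; stand-in for the H: anchor
rw=randrw
rwmixread=90               ; 90% reads, 10% writes
runtime=36000              ; 10 hours
time_based=1
group_reporting=1

[bigfiles]
bs=128k                    ; 128KB transfer size
nrfiles=100                ; 5 dirs x 20 files in the vdbench script
filesize=5g
numjobs=64                 ; approximates 64 vdbench workers
```

Note that fio's `numjobs` forks separate processes rather than vdbench-style threads against a shared anchor, so results are comparable only directionally.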

The 10% writes are intended to reflect some update activity for new content or other changes to content. Note that 128 MBps translates to roughly 1 Gbps of streaming content such as higher definition video; however 4K video (not optimized) would require higher speeds as well as result in larger file sizes. Table-3 shows the performance during the large file access period, showing average read/write rates and response times along with CPU usage and bandwidth (MBps).
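As a sanity check on the streaming bandwidth math, a short illustrative snippet (not part of the original lab work) converting a rate of 128KB-sized I/Os into line-rate bandwidth:

```python
# Sanity-check streaming math: 1,000 x 128KB I/Os per second is 128 MBps,
# which is roughly a 1 Gbps line rate (decimal units as used for networks).
IO_SIZE_KB = 128

def mbps_to_gbps(mbps: float) -> float:
    """Convert MBytes/sec to Gbits/sec (decimal units)."""
    return mbps * 8 / 1000

ios_per_sec = 1000
mbps = ios_per_sec * IO_SIZE_KB / 1000  # KB/sec -> MB/sec
print(mbps, mbps_to_gbps(mbps))  # 128.0 MBps, 1.024 Gbps
```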

| Drive Config | Avg. File Read Rate | Avg. Read Resp. Time (Sec.) | Avg. File Write Rate | Avg. Write Resp. Time (Sec.) | Avg. CPU % Total | Avg. CPU % System | Avg. MBps Read | Avg. MBps Write |
|---|---|---|---|---|---|---|---|---|
| ENT 15K R1 | 580.7 | 107.9 | 64.5 | 19.7 | 52.2 | 35.5 | 72.6 | 8.1 |
| ENT 10K R1 | 455.4 | 135.5 | 50.6 | 44.6 | 34.0 | 22.7 | 56.9 | 6.3 |
| ENT CAP R1 | 285.5 | 221.9 | 31.8 | 19.0 | 43.9 | 28.3 | 37.7 | 4.0 |
| ENT 10K R10 | 690.9 | 87.21 | 76.8 | 48.6 | 35.0 | 21.8 | 86.4 | 9.6 |

Table-3 Performance summary for large file access operations (90% read)

Table-3 shows that for the two-drive RAID 1 configurations, the Enterprise 15K provides the best performance; however, a RAID 10 with four 10K HDDs with the enhanced cache feature provides a good price, performance and space capacity option. Software RAID was used in this workload test.

Figure-4 shows the relative performance of various HDD options handling large files. Keep in mind that for the response time line lower is better, while for the activity rate higher is better.

Figure-4 Large file processing 90% read, 10% write rate and response time

In figure-4 you can see the performance in terms of response time (reads larger dashed line, writes smaller dotted line) along with the number of file read operations per second (reads solid blue column bar, writes green column bar). As a reminder, lower response times and higher activity rates are better. Performance declines moving from left to right, from 15K to 10K Enterprise Performance with enhanced cache feature to Enterprise Capacity (7.2K), all of which were hardware RAID 1. Also shown is a hardware RAID 10 (four 10K HDDs).

Results in figure-4 above and table-4 below show how various drives can be configured to balance their performance, capacity and costs to meet different needs. Table-4 below shows an analysis looking at average file reads per second (RPS) performance vs. HDD costs, usable capacity and protection level.

Table-4 is an example of looking at multiple metrics to make informed decisions as to which HDD would be best suited to your specific needs. For example, RAID 10 using four 10K drives provides good performance and protection along with large usable space; however, that also comes at a budget cost (e.g. price).

| Drive Config | Avg. File Reads Per Sec. (RPS) | Single Drive Cost per RPS | Multi-Drive Cost per RPS | Single Drive Cost per GB Capacity | Cost per GB Usable (Protected) Cap. | Drive Cost (Multiple Drives) | Protection Overhead (Space Capacity for RAID) | Cost per Usable GB per RPS | Avg. File Read Resp. (Sec.) |
|---|---|---|---|---|---|---|---|---|---|
| ENT 15K R1 | 580.7 | $1.02 | $2.05 | $0.99 | $0.99 | $1,190 | 100% | $2.1 | 107.9 |
| ENT 10K R1 | 455.5 | $1.92 | $3.84 | $0.49 | $0.49 | $1,750 | 100% | $3.8 | 135.5 |
| ENT CAP R1 | 285.5 | $1.40 | $2.80 | $0.20 | $0.20 | $798 | 100% | $2.8 | 271.9 |
| ENT 10K R10 | 690.9 | $1.27 | $5.07 | $0.49 | $0.97 | $3,500 | 100% | $5.1 | 87.2 |

Table-4 Performance, capacity and cost analysis for big file processing
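For perspective, the cost-per-RPS columns in Table-4 can be derived from the drive costs and measured RPS. A minimal sketch, where the per-drive price is inferred from the table rather than stated in the post (e.g. the $1,190 two-drive ENT 15K RAID 1 pair implies $595 per drive):

```python
# Derive the cost-per-performance columns of Table-4.
# Per-drive prices are inferred from the table's multi-drive costs
# (a two-drive RAID 1 pair at $1,190 implies $595 per drive).
def cost_per_rps(drive_cost: float, drives: int, rps: float):
    """Return (single-drive, multi-drive) cost per file read per second."""
    return drive_cost / rps, drive_cost * drives / rps

single, multi = cost_per_rps(595.0, 2, 580.7)  # ENT 15K RAID 1 row
print(f"${single:.2f} single, ${multi:.2f} multi-drive")  # $1.02, $2.05
```

The same arithmetic reproduces the other rows once the per-drive price and drive count for each configuration are plugged in.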

Small File Size Processing

To simulate a general file sharing environment, or content streaming with many smaller objects, 1,638,400 16KB sized files were created on each device being tested (table-5). These files were spread across 64 directories (25,600 files each) and accessed via 64 threads (workers) doing 90% reads with a 1KB I/O size over a 10-hour time frame. As with the large file test and database activity, all workloads were run at the same time (e.g. test devices were concurrently busy).
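The footprint implied by the small-file vdbench script can be sanity-checked with a little arithmetic; a minimal sketch:

```python
# Footprint of the small-file workload implied by the vdbench script
# (width=64 directories, files=25600 per directory, size=16k each).
dirs, files_per_dir, file_kb = 64, 25_600, 16
total_files = dirs * files_per_dir
total_gb = total_files * file_kb / (1024 * 1024)  # KB -> GB (binary)
print(total_files, total_gb)  # 1638400 files, 25.0 GB per device
```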

| Drive Config | Avg. File Read Rate | Avg. Read Resp. Time (Sec.) | Avg. File Write Rate | Avg. Write Resp. Time (Sec.) | Avg. CPU % Total | Avg. CPU % System | Avg. MBps Read | Avg. MBps Write |
|---|---|---|---|---|---|---|---|---|
| ENT 15K R1 | 3,415.7 | 1.5 | 379.4 | 132.2 | 24.9 | 19.5 | 3.3 | 0.4 |
| ENT 10K R1 | 2,203.4 | 2.9 | 244.7 | 172.8 | 24.7 | 19.3 | 2.2 | 0.2 |
| ENT CAP R1 | 1,063.1 | 12.7 | 118.1 | 303.3 | 24.6 | 19.2 | 1.1 | 0.1 |
| ENT 10K R10 | 4,590.5 | 0.7 | 509.9 | 101.7 | 27.7 | 22.1 | 4.5 | 0.5 |

Table-5 Performance summary for small sized (16KB) file access operations (90% read)

Figure-5 shows the relative performance of various HDD options handling small files. Keep in mind that for the response time line lower is better, while for the activity rate higher is better.

Figure-5 Small file processing 90% read, 10% write rate and response time

In figure-5 you can see the performance in terms of response time (reads larger dashed line, writes smaller dotted line) along with the number of file read operations per second (reads solid blue column bar, writes green column bar). As a reminder, lower response times and higher activity rates are better. Performance declines moving from left to right, from 15K to 10K Enterprise Performance with enhanced cache feature to Enterprise Capacity (7.2K RPM), all of which were hardware RAID 1. Also shown is a hardware RAID 10 (four 10K RPM HDDs) that has higher performance and capacity along with higher cost (table-5).

Results in figure-5 and table-5 above show how various drives can be configured to balance their performance, capacity and costs to meet different needs. Table-6 below shows an analysis looking at average file reads per second (RPS) performance vs. HDD costs, usable capacity and protection level.

Table-6 is an example of looking at multiple metrics to make informed decisions as to which HDD would be best suited to your specific needs. For example, RAID 10 using four 10K drives provides good performance and protection along with large usable space; however, that also comes at a budget cost (e.g. price).

| Drive Config | Avg. File Reads Per Sec. (RPS) | Single Drive Cost per RPS | Multi-Drive Cost per RPS | Single Drive Cost per GB Capacity | Cost per GB Usable (Protected) Cap. | Drive Cost (Multiple Drives) | Protection Overhead (Space Capacity for RAID) | Cost per Usable GB per RPS | Avg. File Read Resp. (Sec.) |
|---|---|---|---|---|---|---|---|---|---|
| ENT 15K R1 | 3,415.7 | $0.17 | $0.35 | $0.99 | $0.99 | $1,190 | 100% | $0.35 | 1.51 |
| ENT 10K R1 | 2,203.4 | $0.40 | $0.79 | $0.49 | $0.49 | $1,750 | 100% | $0.79 | 2.90 |
| ENT CAP R1 | 1,063.1 | $0.38 | $0.75 | $0.20 | $0.20 | $798 | 100% | $0.75 | 12.70 |
| ENT 10K R10 | 4,590.5 | $0.19 | $0.76 | $0.49 | $0.97 | $3,500 | 100% | $0.76 | 0.70 |

Table-6 Performance, capacity and cost analysis for small file processing

Looking at the small file processing analysis in table-6 shows that the 15K HDDs on an apples-to-apples basis (e.g. same RAID level and number of drives) provide the best performance. However, when also factoring in space capacity, performance, different RAID levels or other protection schemes along with cost, there are other considerations. On the other hand, the Enterprise Capacity 2TB HDDs have a low cost per capacity, however they do not have the performance of the other options, assuming your applications need more performance.

Thus the right HDD for one application may not be the best one for a different scenario, and multiple metrics such as those shown in table-6 need to be included in an informed storage decision making process.

Where To Learn More

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What This All Means

File processing is a common content application task, with some files being small, others large or mixed, along with reads and writes. Even if your content environment is using object storage, chances are that unless it is a new application or a gateway exists, you may be using NAS or file based access. Thus, if your applications are doing file based processing, either run your own applications or use tools that can simulate as closely as possible what your environment is doing.

Continue reading part six in this multi-part series here where the focus is on general I/O, including 8KB and 128KB I/O sizes along with associated metrics.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Which Enterprise HDD for Content Applications General I/O Performance



Updated 1/23/2018

Which enterprise HDD to use with a content server platform: general I/O performance.
Insight for effective server storage I/O decision making
Server StorageIO Lab Review

Which enterprise HDD to use for content servers

This is the sixth in a multi-part series (read part five here) based on a white paper hands-on lab report I did compliments of Servers Direct and Seagate that you can read in PDF form here. The focus is looking at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drives (HDDs). In this post the focus is on general I/O performance including 8KB and 128KB I/O sizes.

General I/O Performance

In addition to running database and file (large and small) processing workloads, Vdbench was also used to collect basic small (8KB) and large (128KB) sized I/O operations. This consisted of random and sequential reads as well as writes with the results shown below. In addition to using vdbench, other tools that could be used include Microsoft Diskspd, fio, iorate and iometer among many others.

These workloads used Vdbench configured (13) to do direct I/O to a Windows file system mounted device using as much of the available disk space as possible. All workloads used 16 threads and were run concurrently similar to database and file processing tests.

(Note 13) Sample vdbench configuration for general I/O, note different settings were used for various tests
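The referenced sample configuration was not included in this post. Below is a hypothetical vdbench sketch (sd/wd/rd style) of what the 8KB random 75% read test could have looked like; the target file name, size and elapsed time are assumptions, and as note 13 states, the actual settings varied per test.

```
# Hypothetical vdbench sketch for the 8KB random 75% read test (note 13);
# actual settings varied per test and were not published
sd=sd1,lun=H:\vdbench_testfile,size=100g,threads=16
wd=wd1,sd=sd1,rdpct=75,seekpct=100,xfersize=8k
rd=rd1,wd=wd1,iorate=max,elapsed=10m,interval=30
```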

Table-7 shows workload results for 8KB random I/Os with two mixes, 75% read and 25% read, including IOPs, bandwidth and response time.

 

| Configuration | Workload Mix | I/O Rate (IOPs) | MB/sec | Resp. Time (Sec.) |
|---|---|---|---|---|
| ENT 15K RAID1 | 75% Read | 597.11 | 4.7 | 25.9 |
| ENT 15K RAID1 | 25% Read | 559.26 | 4.4 | 27.6 |
| ENT 10K RAID1 | 75% Read | 514 | 4.0 | 30.2 |
| ENT 10K RAID1 | 25% Read | 475 | 3.7 | 32.7 |
| ENT CAP RAID1 | 75% Read | 285 | 2.2 | 55.5 |
| ENT CAP RAID1 | 25% Read | 293 | 2.3 | 53.7 |
| ENT 10K R10 (4 drives) | 75% Read | 979 | 7.7 | 16.3 |
| ENT 10K R10 (4 drives) | 25% Read | 984 | 7.7 | 16.3 |
| ECAP SW RAID (5 drives) | 75% Read | 491 | 3.8 | 32.6 |
| ECAP SW RAID (5 drives) | 25% Read | 644 | 5.0 | 24.8 |

Table-7 8KB sized random IOPs workload results

Figure-6 shows small (8KB) random I/O (75% read and 25% read) across different HDD configurations. Performance including activity rates (e.g. IOPs), bandwidth and response time for mixed reads / writes are shown. Note how response time increases with the Enterprise Capacity configurations vs. other performance optimized drives.

Figure-6 8KB random reads and writes showing IOP activity, bandwidth and response time

Table-8 below shows workload results for 8KB sized I/Os, 100% sequential, with 75% read and 25% read mixes, including IOPs, MB/sec and response time in seconds.

| Configuration | Workload Mix | I/O Rate (IOPs) | MB/sec | Resp. Time (Sec.) |
|---|---|---|---|---|
| ENT 15K RAID1 | 75% Read | 3,778 | 29.5 | 2.2 |
| ENT 15K RAID1 | 25% Read | 3,414 | 26.7 | 3.1 |
| ENT 10K RAID1 | 75% Read | 3,761 | 29.4 | 2.3 |
| ENT 10K RAID1 | 25% Read | 3,986 | 31.1 | 2.4 |
| ENT CAP RAID1 | 75% Read | 3,379 | 26.4 | 2.7 |
| ENT CAP RAID1 | 25% Read | 1,274 | 10.0 | 10.9 |
| ENT 10K R10 (4 drives) | 75% Read | 11,840 | 92.5 | 1.3 |
| ENT 10K R10 (4 drives) | 25% Read | 8,368 | 65.4 | 1.9 |
| ECAP SW RAID (5 drives) | 75% Read | 2,891 | 22.6 | 5.5 |
| ECAP SW RAID (5 drives) | 25% Read | 1,146 | 9.0 | 14.0 |

Table-8 8KB sized sequential workload results

Figure-7 shows small 8KB sequential mixed reads and writes (75% and 25% read mixes). While the Enterprise Capacity 2TB HDD has a large amount of space capacity, its performance in a RAID 1 vs. other similarly configured drives is slower.

Figure-7 8KB sequential 75% and 25% read mixes showing bandwidth activity

Table-9 shows workload results for 100% sequential, 100% read and 100% write 128KB sized I/Os including IOPs, bandwidth and response time.

| Configuration | Workload | I/O Rate (IOPs) | MB/sec | Resp. Time (Sec.) |
|---|---|---|---|---|
| ENT 15K RAID1 | Read | 1,798 | 224.7 | 8.9 |
| ENT 15K RAID1 | Write | 1,771 | 221.3 | 9.0 |
| ENT 10K RAID1 | Read | 1,716 | 214.5 | 9.3 |
| ENT 10K RAID1 | Write | 1,688 | 210.9 | 9.5 |
| ENT CAP RAID1 | Read | 921 | 115.2 | 17.4 |
| ENT CAP RAID1 | Write | 912 | 114.0 | 17.5 |
| ENT 10K R10 (4 drives) | Read | 3,552 | 444.0 | 4.5 |
| ENT 10K R10 (4 drives) | Write | 3,486 | 435.8 | 4.6 |
| ECAP SW RAID (5 drives) | Read | 780 | 97.4 | 19.3 |
| ECAP SW RAID (5 drives) | Write | 721 | 90.1 | 20.2 |

Table-9 128KB sized sequential workload results

Figure-8 shows sequential or streaming operations with larger (128KB) request sizes (100% read and 100% write) that would be found with large content applications. Figure-8 highlights the relationship between lower response time and increased IOPs as well as bandwidth.

Figure-8 128KB sequential reads and writes showing IOP activity, bandwidth and response time

Where To Learn More

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What This All Means

Some content applications do small random I/Os for databases, key value stores or repositories as well as metadata processing, while others do large sequential I/O. 128KB sized I/O may be large for your environment; on the other hand, with an increasing number of applications, file systems and software defined storage management tools, 1 to 10MB or even larger I/O sizes are becoming common. The key is selecting I/O sizes, read/write and random/sequential mixes, along with I/O or queue depths, that align with your environment.

Continue reading part seven, the final post in this multi-part series, here where the focus is on how HDDs continue to evolve, including performance beyond traditional RPM based expectations, along with a wrap up.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

HDDs evolve for Content Application servers



Updated 1/23/2018

Enterprise HDDs evolve for content server platform

Insight for effective server storage I/O decision making
Server StorageIO Lab Review

Which enterprise HDD to use for content servers

This is the seventh and final post in this multi-part series (read part six here) based on a white paper hands-on lab report I did compliments of Servers Direct and Seagate that you can read in PDF form here. The focus is looking at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drives (HDDs). The focus of this post is comparing how HDDs continue to evolve over various generations, boosting performance as well as capacity and reliability. It also looks at how there is more to HDD performance than the traditional focus on Revolutions Per Minute (RPM) as a speed indicator.

Comparing Different Enterprise 10K And 15K HDD Generations

There is more to HDD performance than the RPM speed of the device. RPM plays an important role, however there are other things that impact HDD performance. A common myth is that HDDs have not improved in performance over the past several years with each successive generation. Table-10 shows a sampling of various generations of enterprise 10K and 15K HDDs (14), including different form factors, and how their performance continues to improve.

Figure-9 10K and 15K HDD performance improvements

Figure-9 shows how performance continues to improve with 10K and 15K HDDs with each new generation, including those with enhanced cache features. The result is that with improvements in cache software within the drives, along with enhanced persistent non-volatile memory (NVM) and incremental mechanical drive improvements, both read and write performance continue to be enhanced.

Figure-9 puts into perspective the continued performance enhancements of HDDs, comparing various enterprise 10K and 15K devices. The workload is the same TPC-C test used earlier, run on a similar server (14) (with no RAID). 100 simulated users are shown in figure-9 accessing a database on each of the different drives, all running concurrently. The older 15K 3.5” Cheetah and 2.5” Savio drives had a capacity of 146GB and used a database scale factor of 1500, or 134GB. All other drives used a scale factor of 3000, or 276GB. Figure-9 also highlights the improvements in both TPS performance as well as lower response time with newer HDDs, including those with the performance enhanced cache feature.

The workloads run are the same as the TPC-C ones shown earlier; however these drives were not configured with any RAID. The TPC-C activity used Benchmark Factory with similar setup and configuration to those used earlier, including a multi-socket, multi-core Windows 2012 R2 server running Microsoft SQL Server 2012 with a database for each drive type.

| Drive | TPS 1 User | TPS 20 Users | TPS 50 Users | TPS 100 Users | Resp. (Sec.) 1 User | Resp. 20 Users | Resp. 50 Users | Resp. 100 Users |
|---|---|---|---|---|---|---|---|---|
| ENT 10K V3 2.5" | 14.8 | 50.9 | 30.3 | 39.9 | 0.0 | 0.4 | 1.6 | 1.7 |
| ENT (Cheetah) 15K 3.5" | 14.6 | 51.3 | 27.1 | 39.3 | 0.0 | 0.3 | 1.8 | 2.1 |
| ENT 10K 2.5" (with cache) | 19.2 | 146.3 | 72.6 | 71.0 | 0.0 | 0.1 | 0.7 | 0.0 |
| ENT (Savio) 15K 2.5" | 15.8 | 59.1 | 40.2 | 53.6 | 0.0 | 0.3 | 1.2 | 1.2 |
| ENT 15K V4 2.5" | 19.7 | 119.8 | 75.3 | 69.2 | 0.0 | 0.1 | 0.6 | 1.0 |
| ENT 15K (enhanced cache) 2.5" | 20.1 | 184.1 | 113.7 | 122.1 | 0.0 | 0.1 | 0.4 | 0.2 |

Table-10 Continued Enterprise 10K and 15K HDD performance improvements

(Note 14) 10K and 15K generational comparisons were run on a separate but comparable server to what was used for the other test workloads. Workload configuration settings were the same as the other database workloads, including using Microsoft SQL Server 2012 on a Windows 2012 R2 system with Benchmark Factory driving the workload. Database memory size was reduced however to only 8GB vs. 16GB used in other tests.

Where To Learn More

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What This All Means

A little bit of flash in the right place with applicable algorithms goes a long way; an example being the Seagate Enterprise HDDs with the enhanced cache feature. Likewise, HDDs are very much alive, complementing SSD and vice versa. For high-performance content application workloads, flash SSD solutions including NVMe, 12Gbps SAS and 6Gbps SATA devices are cost effective solutions. HDDs continue to be cost-effective data storage devices for capacity, as well as for environments that do not need the performance of flash SSD.

For some environments using a combination of flash and HDD’s complementing each other along with cache software can be a cost-effective solution. The previous workload examples provide insight for making cost-effective informed storage decisions.

Evaluate today’s HDDs on their effective performance running workloads as close as possible to your own, or actually try them out with your applications. Today there is more to HDD performance than just RPM speed, particularly with the Seagate Enterprise Performance 10K and 15K HDDs with the enhanced caching feature.

The Enterprise Performance 10K with enhanced cache feature in particular provides a good balance of capacity and performance while being cost-effective. If you are using older 3.5” 15K, or even previous generation 2.5” 15K RPM and “non-performance enhanced” HDDs, take a look at how the newer generation HDDs perform, looking beyond the RPM of the device.

Fast content applications need fast content and flexible content solution platforms such as those from Servers Direct with HDDs from Seagate. Key to a successful content application deployment is having the flexibility to hardware define and software define the platform to meet your needs. Just as there are many different types of content applications along with diverse environments, content solution platforms need to be flexible, scalable and robust, not to mention cost effective.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Happy Earth Day 2016 Eliminating Digital and Data e-Waste


With Earth Day 2016 on April 22, here are some thoughts about electronic waste (e-waste).

For those involved in data management or data infrastructures, the following are six tips to help cut the overhead and resulting impact of digital e-waste and later physical e-waste. Most conversations involving e-waste focus on the physical aspects of disposing of electronics along with the later impacts. While physical e-waste is an important topic, let’s expand the conversation to include other variations of e-waste, including digital. By digital e-waste I’m referring to the use of physical items that end up contributing to traditional e-waste.


Digital e-waste ranges from the overhead of keeping extra copies of data that result in an expanding data footprint, which in turn requires extra physical resources with their own impact. Addressing physical e-waste also means keeping digital waste (not the physical items), including data waste, in perspective.

Also note that digital or data waste may in fact not be waste per se if it exists as a by-product of making sure applications, data and resulting information are protected, preserved, secured and served for when needed. The question is what can be done to make sure there are good, useful, effective and efficient data copies with a relatively low data footprint overhead; more on this later.

Here are six themes to consider to cut the impact of e-waste (physical, digital, data) without costing or compromising your organization.

1. Understand Digital e-waste

You might be familiar with the term e-waste (electronic waste), you know, those physical items that get discarded from supporting your digital lifestyle. The reason awareness around e-waste is important is because of the environmental impacts of discarding all those devices. The more known about the issue, impacts, causes and effects helps to drive awareness as well as insight into what can be done to mitigate those items.


Devices range from smart and dumb cell phones, personal digital assistants (PDAs), tablets, notebook and workstation computers, MP3 devices, cameras, video display monitors along with larger servers, storage and networking technology, not to mention all the other Internet of Things (IoT) and Internet of Device (IoD) items. What’s important to know about physical e-waste is the impact of the various components. You can learn more about physical e-waste impact in general with a web search such as Google e-waste impact.

2. Reuse, Repurpose, Redeploy, Reconfigure, Re-Tool, Recycle

Reconfigure and retool where possible by re-driving, that is, installing newer, more energy-efficient, higher-capacity or more performance-effective drives. Besides replacing Hard Disk Drives (HDDs), Solid State Devices (SSDs), magnetic tape and other media, look at the pros and cons of replacing CPU processor sockets, upgrading memory and PCIe I/O cards for networking or storage among other enhancements.

Pros include being able to use the chassis longer, reducing the amount of physical e-waste; however at some point it can be more cost-effective to do a total replacement. Still, the longer you can use an asset or device, the greater the benefit in cutting e-waste.

Repurpose, reuse and redeploy assets such as servers, storage and networking devices in a hand me down approach assuming there is a value or benefit in doing so.

Recycle when done: dispose of the technology properly, including, for storage, secure erase of digital media before later physical handling.

3. Responsible Recycle and Disposition of technology (including secure digital destruction)

What are you doing with, and how are you disposing of, physical items ranging from laptops, workstations, tablets, phones, MP3 players, TVs and monitors, to servers, network and storage devices, when they are no longer needed?

Are you securely erasing your digital data on HDDs as well as SSDs or even tape and optical devices before they are disposed of? If not, you should be. For example if you are not yet using or looking at Self Encrypting Drives (SED) including HDDs and SSDs for securing your data, start investigating them. Sure they have a security value proposition for when lost or stolen, however they can also cut the time to secure erase to a given standard from days or hours to minutes or seconds.

These will become e-waste

  • Smart shopping up front: know what you want, what you need, and how long you can leverage it; spend more up front to get something that can last 3-5 years vs. discarding it in 1.5-3 years.
  • Smart management with insight: know your costs and impacts, not just for PR purposes, but for profit and practicality.

4. Plan acquisitions with disposition in mind

Redesign and design for replacement, maximizing what you have or will acquire, using it for a longer time to cut costs and improve productivity (and profitability) while reducing the e-waste footprint.

For example, do you need or want to have the latest in new technology, replacing that phone, tablet, watch or other IoT or IoD item as soon as something newer comes along? No worries, if you are also doing something responsible with what was new and is now old, such as donating or giving it to somebody else who might be able to get a few more years’ worth of use out of it before it becomes e-waste.

On the other hand, if you are acquiring technology with a 2-3 year useful life plan, what would it take to upgrade that item to a larger or more robust version and use it for 3-5 years? Granted, you might not use it in its primary role for the longer duration, however can it be repurposed for some other use? Also from a technology acquisition perspective, have a forecast and plan that can help you make smart, informed decisions up front, knowing when upgrades or extra resources will be needed to prolong the usefulness of the item.

Of course you can also simply move everything to the cloud and out-source your e-waste footprint to the vendor, MSP or cloud provider.

5. Understand Changing Data Value

Keep in mind that data has either no value, some value or unknown value, all of which can change over time. For example some data has value for seconds, minutes or hours and can then be discarded. Other data has some value, which can be low or high, which determines how as well as when and where to protect, preserve, secure and serve it when needed. Then there is data that has an unknown value; however, that too can change over time.

Different and Changing Data Value

Over time your data may end up having no value, meaning it can be discarded, or it might have some value (low or high), meaning a change in how it should be protected, preserved, secured and served. Then there is data that may stay in limbo or unknown status indefinitely, or until somebody, some software, or some other means decides whether it has value or not.

The point is that to cut digital e-waste, discard data with no value as soon as possible, and protect, preserve, secure and serve data with value appropriately. Likewise, for all of that growing data with an unknown value, rethink how it is protected and stored, all of which has an impact on both physical as well as digital e-waste.

This means having insight and awareness into your environment, applications, data, settings, configuration and metadata, not only the space being used or when it was last updated. Also, look beyond when data was last modified or changed; look at when it was last read or accessed to decide how it should be protected and secured, including virus and other scans.

6. Data Footprint Reduction (DFR)

Implement data footprint reduction (DFR) to lower overhead impact, not only at the target or downstream destination using compression, dedupe and other techniques; also move upstream to the source where the problem starts and address it there. Addressing at the source leverages various techniques from archiving, backup/data protection modernization (rethinking what is saved, when, how often, etc.), cleanup, compression and consolidation, data management, deletion and dedupe, along with storage tiering, RAID/parity/mirroring/replication/erasure code and advanced parity/LRC/forward error correction among other technologies.

For example, if you have 10TB of data, how many copies do you have and why, how are those copies protected, and what is their overhead? The issue should not be primarily how many copies; rather, if those copies add or give value, then what can you do to keep them while reducing their overhead impact, besides simply trying to compress or dedupe everything? Hint: start exploring copy management as well as revisiting what you protect, when, where, why and how often, along with options for implementing DFR as close to the data source as possible, as well as downstream.
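Copy management can start with simply knowing where your duplicate copies are. As a minimal, hypothetical sketch (not from the original post), the following Python groups files by content hash; anything appearing more than once is a candidate for copy data management rather than blind dedupe of everything:

```python
# Minimal sketch: find candidate duplicate copies by SHA-256 content hash.
# A first step toward copy management and data footprint reduction (DFR).
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(root: str) -> dict[str, list[Path]]:
    """Group files under root by SHA-256 digest; return only groups > 1."""
    groups: dict[str, list[Path]] = defaultdict(list)
    for p in Path(root).rglob("*"):
        if p.is_file():
            digest = hashlib.sha256(p.read_bytes()).hexdigest()
            groups[digest].append(p)
    return {h: paths for h, paths in groups.items() if len(paths) > 1}
```

Before deleting anything a tool like this surfaces, first decide whether those copies exist for protection or preservation reasons, per the data value discussion above.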

Where To Learn More

What This All Means

Gain insight and awareness into what is occurring with physical and digital e-waste, side stepping the greenwashing and other activity. Small steps implemented by many will have a big impact. Every bit, byte, block, blob, bucket, file or object, along with their copies, has an impact and hopefully also a benefit. A question is how you can reduce the overhead while increasing your return on innovation, cutting costs and complexity while enhancing organization capabilities. There are many techniques, technologies, tools and approaches to apply to various environments; after all, everything is not the same, yet there are similarities. Happy Earth Day 2016 and happy spring to those of you in the northern hemisphere (as well as elsewhere).

Ok, nuff said, for now

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

vSphere Software Defined Beta, Something for free from VMware

vSphere Beta, Something free from VMware (other than your time)

server storage I/O trends

Something free from VMware (other than time)

VMware is looking for candidate beta test sites and environments for an upcoming vSphere release. The target audience is those who have deployed vSphere 5.5 or 6.0 in their environment and are looking to test the new software (e.g. bits).

What VMware is looking for

For this private community vSphere beta, VMware is looking for participants with expectations including:

  • Online acceptance of the Master Software Beta Test Agreement will be required prior to visiting the Private Beta Community
  • Install beta software within 3 days of receiving access to the beta product
  • Provide feedback within the first 4 weeks of the beta program
  • Submit Support Requests for bugs, issues and feature requests
  • Complete surveys and beta test assignments
  • Participate in the private beta discussion forum and conference calls

How to get involved and test the bits?

To get involved (and get the bits), simply fill out the VMware form found here (no credit card or money required, just some of your time).

The VMware vSphere team will grant access to the program to selected candidates in stages. This vSphere Beta Program leverages a private Beta community to download software and share information. VMware will provide discussion forums, webinars, and service requests to enable you to share your opinion with them.

VMware cites the following reasons to participate in this vSphere beta opportunity:

  • Receive early access to the vSphere Beta products
  • Interact with the vSphere Beta team consisting of Product Managers, Engineers, Technical Support, and Technical Writers
  • Provide direct input on product functionality, configurability, usability, and performance
  • Provide feedback influencing future products, training, documentation, and services
  • Collaborate with other participants, learn about their use cases, and share advice and learnings

Where To Learn More

What This All Means

Having been involved in earlier vSphere betas, I can say this is a great way to get an early glimpse of, and hands-on, behind-the-wheel, real-world experience with, new technology, as well as to test how things will work in your environment or in a VMware hosted one. You are free to use and test the bits (e.g. software) in your environment (or VMware hosted) however you like in a free-form, real-world way. In addition to hands-on time, you also get exposure to and a chance to interact with the VMware folks.

This experience can be useful for planning on how to use new feature functionality, as well as strategy planning for deployment once the production bits get released down the road.

Remember to sign up if interested here, see you in the beta.

Ok, nuff said, for now

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

Cloud Constellation SpaceBelt – Out Of This World Cloud Data Centers?

server storage I/O trends

A new startup called Cloud Constellation (aka SpaceBelt) has announced itself and proposes to converge space-based satellite technology with IT information and cloud related data infrastructure technologies, including NVM (e.g. SSD) and storage class memory (SCM). While announcing their Series A funding and proposed value proposition (below), Cloud Constellation did not say how much funding was raised, who the investors are, or who is on the management team, leading to some, well, rather cloudy information.

Cloud Constellation’s SpaceBelt transforms cybersecurity for enterprise and government operations moving high-value data around the world by:

  • insulating it completely from the Internet and terrestrial leased lines
  • liberating it from cyberattacks and surreptitious activities
  • protecting it from natural disasters and force majeure events
  • addressing all jurisdictional complexities and constraints
  • avoiding risks of violating privacy regulations

Truly secure data transfer: Enterprises and governments will finally be enabled to bypass use of leaky networks and compromised servers interconnecting their sites around the world.

New option for cloud service providers: The service will be a key market differentiator for cloud service providers to offer a transformative, ultra-high degree of network security to clients reliant on moving sensitive, mission-critical data around the world each day.

What is SpaceBelt Cloud Constellation?

From their website www.cloudconstellation.com you will see the following.

Cloud Constellation Space Belt
www.cloudconstellation.com

Keeping in mind that today is April 1st which means April Fools day 2016, my motto for the day is trust yet verify. So just for fun, check out this new company that I had a briefing with earlier this week that also announced their Series A funding earlier in March 2016.

The question you have to ask yourself today is whether this is an out of this world April Fools prank, or an out of this world idea that will eclipse current cloud services such as Amazon Web Services (AWS), Google, IBM Softlayer, Microsoft Azure and Rackspace among others?

Or, will SpaceBelt go the way of earlier cloud high flyers such as HP Cloud and Nirvanix, among others?

Btw, keep in mind that only you can prevent cloud data loss, however cloud and virtual data availability is also a shared responsibility.

Some Questions and Things To Ponder

  • Is this an April Fools Joke?
  • How much Non-Volatile Memory (NVM) such as NAND, 3D NAND, 3D XPoint or other Storage Class Memory (SCM) can be physically placed on each bird (e.g. satellite)?
  • What will the solar panels look like to power the birds, plus batteries for heating and cooling the NVM (contrary to popular myth, NVMs do get warm if not hot)?
  • What is the availability, accessibility and durability model? How will data be replicated or mirrored, or will an out of this world LRC/erasure code advanced parity model be used?
  • How will the storage be accessed and what will the end-points look like: iSCSI, NBD, FUSE, NFS, CIFS, HDFS, Torrent, JSON, ODBC, REST/HTTP, FTP or something else?
  • Security will be a concern, as will geo placement; after all, it's one thing to move data across some borders, but what about when the data is hundreds of miles above those borders?
  • Cost will be an interesting model to follow, as will watching whether competitors from SpaceX, Amazon, Boeing, GE, NSA, Google, Facebook or others emerge.
  • What will the uplink and download speeds be, not to mention the latency of moving and accessing data from the satellites? For those who have DirecTV or other satellite service, you know the pros and cons associated with that. Speaking of which, perhaps you have experienced a thunder-storm with DirecTV or Dish, or a cloud storm due to a cloud provider service or site failure; think about what happens to your cloud data if the satellite dish is disrupted during an upload or download.
  • I also wonder how the various industry trade groups will wrap their heads around this one; what kind of new standards, initiatives and out of this world marketing promotions will we see or hear about? You know that some creative marketer will declare surface clouds as dead, just saying.
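On the replication versus erasure code question above, the raw-capacity overhead difference can be sketched with simple arithmetic. The scheme parameters here (3-way replication, a hypothetical EC 10+4 layout) are illustrative assumptions, not anything SpaceBelt has disclosed:

```python
# Protection overhead as a raw-to-usable capacity ratio. The schemes
# below (3-way replication, EC 10+4) are illustrative assumptions.

def replication_overhead(copies):
    """N-way replication stores N full copies: 3 copies -> 3.0x raw."""
    return float(copies)

def erasure_code_overhead(data_shards, parity_shards):
    """EC(k+m) stores k data shards plus m parity: 10+4 -> 1.4x raw."""
    return (data_shards + parity_shards) / data_shards

print(replication_overhead(3))        # 3.0
print(erasure_code_overhead(10, 4))   # 1.4
```

The trade-off matters even more in orbit, where every unit of raw capacity carries launch weight and power cost.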

Where To Learn More

What This All Means

The folks over at Cloud Constellation say their SpaceBelt, made up of a constellation (e.g. in-orbit cluster) of satellites, will be circling the globe around 2019. I wonder if they will be ready near term to do a proof of concept (POC) technology demonstration of their IP using TCP-based networking and server storage I/O protocols on a hot air balloon or weather balloon; if nothing else, it would be a great marketing ploy.

If nothing else, putting their data infrastructure technology on a hot air balloon could be a fun marketing ploy to say their cloud rises above the hot air of other cloud marketing. Or if they do a POC using a weather balloon, they could show and say their cloud rises above traditional cloud storms, oh the fun…

Check out Cloud Constellation and their Spacebelt, see for yourself and then you decide what is going on!

Remember, it's April Fools day today; trust, yet verify.

What say you, is this an April Fools Joke or the next big thing?

Ok, nuff said (for now), time to listen to Pink Floyd Dark Side of the Moon ;)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

Server StorageIO March 2016 Update Newsletter

Volume 16, Issue III

Hello and welcome to the March 2016 Server StorageIO update newsletter.

Here in the northern hemisphere, spring officially arrived with the March 20th equinox, along with warmer weather, more hours and minutes of daylight, and plenty of things to do. In addition to the official arrival of spring here (fall in the southern hemisphere), it also means in the U.S. that March Madness college basketball tournament playoff brackets and office (betting) pools are in full swing.

In This Issue

  • Feature Topic and Themes
  • Industry Trends News
  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Videos and Podcast’s
  • Events and Webinars
  • Recommended Reading List
  • Industry Activity Trends
  • Server StorageIO Lab reports
  • New and Old Vendor Update
  • Resources and Links
    A couple of other things associated with spring are moving clocks forward, which occurred recently here in the U.S., and checking your smoke and dangerous gas detectors or other alarms. This means replacing batteries and cleaning the detectors.

    Besides smoke and gas detectors, spring is also a good time do preventive maintenance on your battery backup uninterruptible power supplies (UPS), as well as generators and other standby power devices. For my part, I had a service tech out to do a tune up on my Kohler generator, as well as replaced some batteries in APC UPS devices.

    Besides smoke and CO2 detectors, generators and UPS standby power systems, and March Madness basketball and other sports tournaments, something else occurs on March 31st (besides being the day before April 1st and April Fools day). March 31st is World Backup (and Restore) Day, meant to raise awareness about making sure your data, applications, settings, configurations, keys, software and systems are backed up, and can be recovered.

    Hopefully none of you are in the situation where data, applications, systems, computers, laptops, tablets, smart phones or other devices only get backed up or protected once a year, however maybe you know somebody who does.

    March also marks the 10th anniversary of Amazon Web Services (AWS) cloud services (more here), happy birthday AWS.

    March wraps up on the 31st with World Backup Day, which is intended to draw attention to the importance of data protection and your ability to recover applications and data. While backups are important, so too is testing to make sure you can actually use and recover from what was protected. Keep in mind that while some claim backup is dead, data protection is alive, and as long as vendors and others keep referring to data protection as backup, backup will stay alive.

    Join me and folks from HP Enterprise (HPE) on March 31st at 1PM ET for a free webinar compliments of HPE with a theme of Backup with Brains, emphasis on awareness and analytics to enable smart data protection. Click here to learn more and register.

    Enjoy this edition of the Server StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcast’s along with in the news commentary appearing soon.

    Cheers GS

    Feature Topic and Theme

    This month's feature theme and topics include backup (and restore) as part of data protection, and more on clouds (public, private and hybrid), including how some providers such as DropBox are moving out of public clouds such as AWS and building their own data centers.

    Building off of the February newsletter, there is more on Google, including their use of Non-Volatile Memory (NVM), aka NAND flash Solid State Devices (SSD), and some of their research. In addition to Google's use of SSD, check out the posts and industry activity on NVMe, as well as other news and updates including new converged platforms from Cisco and HPE among others.

    StorageIOblog Posts

    Recent and popular Server StorageIOblog posts include:

    View other recent as well as past blog posts here

    Server Storage I/O Industry Activity Trends (Cloud, Virtual, Physical)

    StorageIO news (image licensed for use from Shutterstock by StorageIO)

    Some new Products Technology Services Announcements (PTSA) include:

  • Via Redmondmag: AWS Cloud Storage Service Turns 10 years old in March, happy birthday AWS (read more here at the AWS site).
  • Cisco announced new flexible HyperFlex converged compute server platforms for hybrid cloud and other deployments. Also announced were NetApp All Flash Array (AFA) FlexPod converged solutions powered by Cisco UCS servers and networking technology. In other activity, Cisco unveiled a Digital Network Architecture to enable customer digital data transformation. Cisco also announced its intent to acquire CliQr for management of hybrid clouds.

  • Data Direct Networks (DDN) expands NAS offerings with new GS14K platform via PRnewswire.

  • Via Computerworld: DropBox quits Amazon cloud, takes back 500 PB of data. DropBox has created their own cloud to host videos, images, files, folders, objects, blobs and other storage items that used to be stored within AWS S3. In this DropBox post, you can read about why they decided to create their own cloud, as well as how they used a hybrid approach with metadata kept local and actual data stored in AWS S3. Now the data and the metadata are in DropBox data centers. However, DropBox is still keeping some data in AWS, particularly in different geographies.

  • Web site hosting company GoDaddy has extended their capabilities, similar to other service providers, by adding an OpenStack powered cloud service. This continues a trend in which others such as Bluehost (where my sites are located on a DPS) have evolved from simple shared hosting to dedicated private servers (DPS) and virtual private servers (VPS), along with other cloud related services. Think of a VPS as a virtual machine or cloud instance. Likewise, some of the cloud service providers such as AWS are moving into dedicated private servers.

  • Following up from the February 2016 Server StorageIO Update Newsletter that included Google’s message to disk vendors: Make hard drives like this, even if they lose more data and Google Disk for Data Centers White Paper (PDF Here), read about Google's experiences with SSD.

    This PDF white paper, presented at the recent Usenix 2016 conference, outlines Google's experiences with different types (SLC, MLC, eMLC) and generations of NAND flash SSD media across various vendors. Some of the takeaways include that context matters when looking at SSD metrics on endurance, durability and errors. While some in the industry focus on Unrecoverable Bit Error Rates (UBER), there needs to be awareness around Raw Bit Error Rate (RBER) among other metrics and usage. Read more about Google's experiences here.


  • Hewlett Packard Enterprise (HPE) announced Hyper-Converged systems Via Marketwired including HC 380 based on ProLiant DL380 technology providing all in one (AiO) converged compute, storage and virtualization software with simplified management. The HC 380 is targeted for mid-market aka small medium business (SMB), remote office branch office (ROBO) and workgroups. HPE also announced all flash array (AFA) enhancements for 3PAR storage (Via Businesswire).

  • Microsoft has announced that it will be releasing a version of its SQL Server database on Linux. What this means is that as well as being able to use SQL Server and associated tools on Windows and Azure platforms, you will also in the not so distant future be able to deploy on Linux. Making SQL Server available on Linux opens up some interesting scenarios and solution alternatives vs. Oracle along with MySQL and associated MySQL derivatives, as well as NoSQL offerings (Read more about NoSQL Databases here). Read more about Microsoft’s SQL Server for Linux here.

    In addition to SQL Server for Linux, Microsoft has also announced enhancements for easing docker container migrations to clouds. In other Microsoft activity, they announced enhancements to Storsimple and Azure. Keep an eye out for Windows Server 2016 Tech Preview 5 (e.g. TP5), which will be the next release of the upcoming new version of the popular operating system.


  • MSDI, Rockland IT Solutions and Source Support Services Merge to Form Congruity with CEO Todd Gresham, along with Mike Stolz and Mark Shirman (formerly of Glasshouse) among others you may know.

  • Via Businesswire: PrimaryIO announces server-based flash acceleration for VMware systems, while Riverbed extends Remote Office Branch Office (ROBO) cloud connectivity Via Businesswire.

  • Via Computerworld: Samsung ships 12Gbps SAS 15TB 2.5" 3D NAND flash SSD (hey Samsung, send me a device or two and I will give them a test drive in the Server StorageIO lab ;). Not to be outdone, Via Forbes: Seagate announces a fast SSD card, as well as, for the High Performance Compute (HPC) and Super Compute (SC) markets, Via HPCwire: Seagate Sets Sights on Broader HPC Market with their scale-out clustered Lustre based systems.

  • Servers Direct is now offering the HGST 4U x 60 drive enclosures while Via PRnewswire: SMIC announces RRAM partnership.

  • ATTO Technology has enhanced their RAID Arrays Behind FibreBridge 7500, while Oracle announced mainframe virtual tape library (VTL) cloud support Via Searchdatabackup. In other updates for this month, VMware has released and made generally available (GA) VSAN 6.2 and Via Businesswire: Wave and Centeris Launch Transpacific Broadband Data and Fiber Hub.
    The above is a sampling of some of the various industry news, announcements and updates for this March. Watch for more news and updates in April coming out of NAB and OpenStack Summit among other events.

    View other recent news and industry trends here.

    StorageIO Commentary in the news

    View more Server, Storage and I/O hardware as well as software trends comments here

    Vendors you may not have heard of

    Various vendors (and service providers) you may not know or heard about recently.

    • Continum – R1Soft Server Backup Manager
    • HyperIO – HiMon and HyperIO server storage I/O monitoring software tools
    • Runcast – VMware automation and management software tools
    • Opvizor – VMware health management software tools
    • Asigra – Cloud, Managed Service and distributed backup/data protection tools
    • Datera – Software defined storage management startup
    • E8 Storage – Software Defined Stealth Storage Startup
    • Venyu – Cloud and data center data protection tools
    • StorPool – Distributed software defined storage management tools
    • ExaBlox – Scale out storage solutions

    Check out more vendors you may know, have heard of, or that are perhaps new on the Server StorageIO Industry Links page here (over 1,000 entries and growing).

    StorageIO Tips and Articles

    Recent Server StorageIO articles appearing in different venues include:

    • InfoStor:  Data Protection Gaps, Some Good, Some Not So Good
    • Virtual Blocks (VMware Blogs):  Part III EVO:RAIL – When And Where To Use It?
    • InfoStor:  Object Storage Is In Your Future

    Check out these resources and links on technology, techniques, trends as well as tools. View more tips and articles here

    StorageIO Videos and Podcasts

    Check out this video (Via YouTube) of a Google Data Center tour.

    In the IoT and IoD era of little and big data, how about this video I did with my Phantom DJI drone and an HD GoPro (e.g. 1K vs. 2.7K or 4K in newer cameras). This generates about a GByte of raw data per 10 minutes of flight, which then means another GB copied to a staging area, then to protected copies, then production versions and so forth. Thus a 2 minute clip in 1080p resulted in plenty of storage, including produced and uploaded versions along with backup copies in archives spread across YouTube, Dropbox and elsewhere.

    StorageIO podcasts are also available at StorageIO.tv

    StorageIO Webinars and Industry Events

    EMCworld (Las Vegas) May 2-4, 2016

    Interop (Las Vegas) May 4-6 2016

    TBA – April 27, 2016 webinar

    NAB (Las Vegas) April 19-20, 2016

    Backup with Brains – March 31, 2016 free webinar (1PM ET)

    See more webinars and other activities on the Server StorageIO Events page here.

    From StorageIO Labs

    Research, Reviews and Reports

    NVMe is in your future, resources to start preparing today for tomorrow

    NVM and NVMe corner (Via and Compliments of Micron.com)

    View more NVMe related items at microsite thenvmeplace.com.

    Read more in this Server StorageIO industry Trends Perspective white paper and lab review.

    Server StorageIO Recommended Reading List

    The following are various recommended reading including books, blogs and videos. If you have not done so recently, also check out the Intel Recommended Reading List (here) where you will also find a couple of mine as well as books from others.

    For this month's recommended reading, it's a blog site. If you have not visited Eric Siebert's (@ericsiebert) site vSphere-land and its companion resources pages, including top blogs, do so now.

    Granted there is a heavy VMware server virtualization focus, however there is a good balance of other data infrastructure topics spanning servers, storage, I/O networking, data protection and more.

    Server StorageIO Industry Resources and Links

    Check out these useful links and pages:

    storageio.com/links – Various industry links (over 1,000 with more to be added soon)
    objectstoragecenter.com – Cloud and object storage topics, tips and news items
    storageioblog.com/data-protection-diaries-main/ – Various data protection items and topics
    thenvmeplace.com – Focus on NVMe trends and technologies
    thessdplace.com – NVM and Solid State Disk topics, tips and techniques
    storageio.com/performance – Various server, storage and I/O performance and benchmarking

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    The Future of Ethernet – 2016 Roadmap released by Ethernet Alliance

    server storage I/O trends


    Ethernet Alliance Roadmap

    The Ethernet Alliance has announced their 2016 roadmap of enhancements for Ethernet.

    Ethernet enhancements include speeds, connectivity interfaces that span needs from consumer, enterprise, to cloud and managed service providers.

    Highlights of Ethernet Roadmap

    • FlexEthernet (FlexE)
    • QSFP-DD, microQSFP and OBO interfaces
    • Speeds from 10Mbps to 400GbE
    • 4 Pair Power over Ethernet (PoE)
    • Power over Data Line (PoDL)

    Ethernet Alliance 2016 Roadmap Image
    Images via EthernetAlliance.org

    Who is the Ethernet Alliance

    The Ethernet Alliance (@ethernetallianc) is an industry trade and marketing consortium focused on the advancement and success of Ethernet related technologies.

    Where to learn more

    The Ethernet Alliance has also made available via their web site two presentations part one here and part two here (or click on the following images).

    Ethernet Alliance 2016 roadmap presentation #1 Ethernet Alliance 2016 roadmap presentation #2

    Also visit www.ethernetalliance.org/roadmap

    What this all means

    Ethernet technologies continue to be enhanced, spanning consumer, Internet of Things (IoT) and Internet of Devices (IoD) uses to enterprise, data center, IT and non-IT usage, as well as cloud and managed service providers. At the lower end, where there is broad adoption, the continued evolution of easier to use, lower cost, interoperable technologies and interfaces expands the Ethernet adoption footprint. At the higher end, all of those IoT, IoD, consumer and other devices aggregate (consolidate) into cloud and other services that need speeds of 10GbE, 40GbE, 100GbE and 400GbE.

    With the 2016 Roadmap the Ethernet Alliance has provided good direction as to where Ethernet fits today and tomorrow.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    Part V – NVMe overview primer (Where to learn more, what this all means)

    This is the fifth in a five-part mini-series providing a NVMe primer overview.

    View Part I, Part II, Part III, Part IV, Part V as well as companion posts and more NVMe primer material at www.thenvmeplace.com.

    There are many different facets of NVMe, including the protocol itself, which can be deployed on PCIe (AiC, U.2/8639 drives, M.2) for local direct attached, dedicated or shared use on the front-end or back-end of storage systems. NVMe direct attach is also found in servers and laptops using M.2 NGFF mini cards (e.g. “gum sticks”). In addition to direct attached, dedicated and shared, NVMe is also deployed on fabrics, including over Fibre Channel (FC-NVMe) as well as NVMe over Fabrics (NVMeoF) leveraging RDMA based networks (e.g. iWARP and RoCE among others).

    The storage I/O capabilities of flash can now be fed across PCIe faster to enable modern multi-core processors to complete more useful work in less time, resulting in greater application productivity. NVMe has been designed from the ground up with more and deeper queues, supporting a larger number of commands in those queues. This in turn enables the SSD to better optimize command execution for much higher concurrent IOPS. NVMe will coexist along with SAS, SATA and other server storage I/O technologies for some time to come. But NVMe will be at the top-tier of storage as it takes full advantage of the inherent speed and low latency of flash while complementing the potential of multi-core processors that can support the latest applications.
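One way to see why deeper queues matter is Little's Law: the IOPS a single queue can sustain is roughly the number of outstanding I/Os divided by per-I/O latency. A minimal sketch, with illustrative latency and queue-depth figures (assumptions for illustration, not measured results):

```python
# Little's Law applied to one I/O queue: sustainable IOPS is roughly
# outstanding I/Os divided by per-I/O latency. Latency and queue-depth
# figures below are illustrative assumptions, not measurements.

def max_iops(queue_depth, latency_us):
    """Theoretical IOPS ceiling for one queue at a given device latency."""
    return queue_depth / (latency_us / 1_000_000)

print(max_iops(1, 100))   # one outstanding I/O at 100us: roughly 10,000 IOPS
print(max_iops(32, 100))  # a 32-deep queue at the same latency: roughly 320,000 IOPS
```

Since NVMe allows far more and deeper queues than AHCI/SATA's single 32-command queue, devices have many such ceilings to fill concurrently, which is where the IOPS gains come from.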

    With NVMe, the capabilities of underlying NVM and storage memories are further realized. Devices used in the lab testing include a PCIe x4 NVMe AiC SSD, a 12 Gbps SAS SSD and a 6 Gbps SATA SSD. These and other improvements with NVMe enable concurrency while reducing latency to remove server storage I/O traffic congestion. The result is that applications demanding more concurrent I/O activity along with lower latency will gravitate towards NVMe for accessing fast storage.

    Like the robust PCIe physical server storage I/O interface it leverages, NVMe provides both flexibility and compatibility. It removes complexity, overhead and latency while allowing far more concurrent I/O work to be accomplished. Those on the cutting edge will embrace NVMe rapidly. Others may prefer a phased approach.

    Some environments will initially focus on NVMe for local server storage I/O performance and capacity available today. Other environments will phase in emerging external NVMe flash-based shared storage systems over time.

    Planning is an essential ingredient for any enterprise. Because NVMe spans servers, storage, I/O hardware and software, those intending to adopt NVMe need to take into account all ramifications. Decisions made today will have a big impact on future data and information infrastructures.

    Key questions should be, how much speed do your applications need now, and how do growth plans affect those requirements? How and where can you maximize your financial return on investment (ROI) when deploying NVMe and how will that success be measured?

    Several vendors are working on, or have already introduced NVMe related technologies or initiatives. Keep an eye on among others including AWS, Broadcom (Avago, Brocade), Cisco (Servers), Dell EMC, Excelero, HPE, Intel (Servers, Drives and Cards), Lenovo, Micron, Microsoft (Azure, Drivers, Operating Systems, Storage Spaces), Mellanox, NetApp, OCZ, Oracle, PMC, Samsung, Seagate, Supermicro, VMware, Western Digital (acquisition of SANdisk and HGST) among others.

    Where To Learn More

    View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What this all means

    NVMe is in your future if not already, so if NVMe is the answer, what are the questions?

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Where, How to use NVMe overview primer

    server storage I/O trends
    Updated 1/12/2018

    This is the fourth in a five-part miniseries providing a primer and overview of NVMe. View companion posts and more material at www.thenvmeplace.com.

    Where and how to use NVMe

    As mentioned and shown in the second post of this series, NVMe is initially being deployed inside servers as “back-end,” fast, low latency storage using PCIe Add-In-Cards (AIC) and flash drives. Similar to SAS NVM SSDs and HDDs that support dual-paths, NVMe has a primary path and an alternate path. If one path fails, traffic keeps flowing without causing slowdowns. This feature is an advantage to those already familiar with the dual-path capabilities of SAS, enabling them to design and configure resilient solutions.

    NVMe devices, including NVM flash AICs, will also find their way into storage systems and appliances as back-end storage, co-existing with SAS or SATA devices. Another emerging deployment scenario is shared NVMe direct attached storage (DAS) with multiple-server access via PCIe external storage, with dual paths for resiliency.

    Even though NVMe is a new protocol, it leverages existing skill sets. Anyone familiar with SAS/SCSI and AHCI/SATA storage devices will need little or no training to deploy and manage NVMe. Since NVMe-enabled storage appears to a host server or storage appliance as a LUN or volume, existing Windows, Linux and other OS or hypervisor tools can be used. On Windows, for example, other than going to Device Manager to see what the device is and which controller it is attached to, it is no different from installing and using any other storage device. The experience on Linux is similar, particularly when using in-the-box drivers that ship with the OS. One minor Linux difference of note is that instead of seeing a /dev/sda device, you might see a device name such as /dev/nvme0n1 or /dev/nvme0n1p1 (with a partition).
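    The Linux device names mentioned above follow a regular pattern: nvme&lt;controller&gt;n&lt;namespace&gt;, with an optional p&lt;partition&gt; suffix. As a quick illustration (a sketch, not part of any NVMe tooling), the following Python helper decodes such a name into its parts:

    ```python
    import re

    # Linux NVMe block devices are named nvme<ctrl>n<namespace>[p<partition>],
    # e.g. nvme0n1 is namespace 1 on controller 0, and nvme0n1p1 is its first
    # partition. This helper (a hypothetical example) splits a name into parts.
    NVME_DEV = re.compile(r"^nvme(?P<ctrl>\d+)n(?P<ns>\d+)(?:p(?P<part>\d+))?$")

    def parse_nvme_name(name: str):
        """Return (controller, namespace, partition); partition is None for a
        whole-namespace device such as nvme0n1."""
        m = NVME_DEV.match(name)
        if m is None:
            raise ValueError(f"not an NVMe block device name: {name!r}")
        part = m.group("part")
        return int(m.group("ctrl")), int(m.group("ns")), (int(part) if part else None)

    print(parse_nvme_name("nvme0n1"))    # (0, 1, None)
    print(parse_nvme_name("nvme0n1p1"))  # (0, 1, 1)
    ```

    Compare this with SAS/SATA naming (/dev/sda, /dev/sda1), where the device letter says nothing about which controller or namespace is behind it.
    
    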

    Keep in mind that NVMe, like SAS, can be used for "back-end" access from servers (or storage systems) to a storage device or system, for example JBOD SSD drives (e.g. 8639), PCIe AiC or M.2 devices. Also like SAS, NVMe can be used on the "front-end" of storage systems or appliances in place of, or in addition to, other access such as GbE-based iSCSI, Fibre Channel, FCoE, InfiniBand, NAS or Object.

    What this means is that NVMe can be implemented in a storage system or appliance on both the "front-end" (e.g. server or host side) as well as on the "back-end" (e.g. device or drive side), just like SAS. Another similarity to SAS is that NVMe supports dual-pathing of devices, permitting system architects to design resiliency into their solutions. When the primary path fails, access to the storage device can be maintained with failover, so that fast I/O operations can continue, as with SAS.

    NVM connectivity options including NVMe
    Various NVM NAND flash SSD devices and their connectivity, including NVMe, M.2, SATA and 12 Gbps SAS, are shown in figure 6.

    Figure 6 Various NVM flash SSDs (Via StorageIO Labs)

    On the left in figure 6 is a NAND flash NVMe PCIe AiC; top center is a USB thumb drive that has been opened up showing a NAND die (chip); middle center is an mSATA card; bottom center is an M.2 card; next on the right is a 2.5” 6 Gbps SATA device; and far right is a 12 Gbps SAS device. Note that an M.2 card can be either a SATA or an NVMe device, depending on its internal controller, which determines which host or server protocol device driver to use.

    The role of PCIe has evolved over the years, as have its performance and packaging form factors. In addition to add-in card (AiC) slots, PCIe form factors include the M.2 small form factor (aka Next Generation Form Factor or NGFF), which replaces legacy mini-PCIe cards and, like other devices, can be an NVMe or SATA device.

    The SFF-8639 (and related 8637) connector (figure 7) can be used to support SAS and SATA as well as NVMe, depending on the drive installed and host server driver support. There are various M.2 NGFF form factors, including 2230, 2242, 2260 and 2280. There are also M.2-to-SATA converter or adapter cards available, enabling M.2 devices to attach to legacy SAS/SATA RAID adapters or HBAs.
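    Those M.2 size codes are simply dimensions: the first two digits are the card width in millimeters and the remaining digits the length, so a 2280 module is 22 mm wide and 80 mm long. A small sketch (hypothetical helper, not from any standard library) decoding the codes listed above:

    ```python
    # M.2 (NGFF) size codes encode physical dimensions: first two digits are
    # the card width in mm, remaining digits are the length in mm. This is a
    # hypothetical helper for illustration.
    def m2_dimensions(code: str):
        """Return (width_mm, length_mm) for an M.2 size code such as '2280'."""
        if not code.isdigit() or len(code) < 4:
            raise ValueError(f"unrecognized M.2 size code: {code!r}")
        return int(code[:2]), int(code[2:])

    for code in ("2230", "2242", "2260", "2280"):
        width, length = m2_dimensions(code)
        print(f"M.2 {code}: {width} mm wide, {length} mm long")
    ```

    Longer length matters for SSDs because it leaves room for more NAND packages on the card.
    
    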

    Figure 7 PCIe NVMe 8639 Drive (Via StorageIO Labs)

    On the left of figure 7 is a view toward the backplane of a storage enclosure in a server that supports SAS, SATA, and NVMe (e.g. 8639). On the right of figure 7 is the connector end of an 8639 NVM SSD, showing additional pin connectors compared to a SAS or SATA device. Those extra pins provide PCIe x4 connectivity to the NVMe devices. The 8639 drive connector enables a device such as an NVM or NAND flash SSD to share a common physical storage enclosure with SAS and SATA devices, including optional dual-pathing.
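    That PCIe x4 connectivity is where the NVMe performance headroom comes from. As a rough back-of-the-envelope sketch (assuming PCIe Gen 3, as commonly used by 8639 NVMe SSDs; actual throughput is lower due to protocol overhead):

    ```python
    # Approximate theoretical bandwidth of a PCIe Gen 3 link. Gen 3 signals at
    # 8 GT/s per lane with 128b/130b line encoding, so usable bits per lane per
    # second are 8e9 * 128/130. Assumes Gen 3; real-world throughput is lower.
    def pcie_gen3_bandwidth_gbps(lanes: int) -> float:
        """Approximate usable bandwidth in gigabytes per second (decimal GB)."""
        raw_transfers = 8e9                   # 8 GT/s per lane
        usable_bits = raw_transfers * 128 / 130  # after 128b/130b encoding
        return lanes * usable_bits / 8 / 1e9     # bits -> bytes -> GB

    print(round(pcie_gen3_bandwidth_gbps(1), 2))  # ~0.98 GB/s per lane
    print(round(pcie_gen3_bandwidth_gbps(4), 2))  # ~3.94 GB/s for x4
    ```

    For comparison, a 12 Gbps SAS lane tops out at roughly 1.2 GB/s before overhead, which is why an x4 PCIe attachment gives NVMe SSDs noticeably more headroom.
    
    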

    Where To Learn More

    View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    What This All Means

    Be careful about judging a device or component by its physical packaging or interface connection. In figure 7 the device has SAS/SATA along with PCIe physical connections, yet it’s what’s inside (e.g. its controller) that determines whether it is a SAS, SATA or NVMe enabled device. This also applies to HDDs and PCIe AiC devices, as well as I/O networking cards and adapters that may use common physical connectors yet implement different protocols. For example, the SFF-8643 HD-Mini SAS internal connector is used for 12 Gbps SAS attachment as well as for PCIe to devices such as 8639.

    Depending on the type of device inserted, access can be via NVMe over PCIe x4, SAS (12 Gbps or 6 Gbps) or SATA. Enclosures based on the 8639 connector have physical connections from their backplanes to the individual drive connectors, as well as to PCIe, SAS, and SATA cards or connectors on the server motherboard or via PCIe riser slots.

    While PCIe devices, whether AiC slot based, M.2 or 8639, can have common physical interfaces and lower-level signaling, it’s the protocols, controllers, and drivers that determine how they get software defined and used. Keep in mind that it’s not just the physical connector or interface that determines what a device is or how it is used; it’s also the protocol, command set, controller and device drivers.

    Continue reading about NVMe with Part V (Where to learn more, what this all means) in this five-part series, or jump to Part I, Part II or Part III.

    Ok, nuff said, for now.

    Gs
