Part 3 – Which HDD for Content Applications – Test Configuration


Updated 1/23/2018

Which enterprise HDD to use with a content server platform – HDD test configuration

Insight for effective server storage I/O decision making
Server StorageIO Lab Review

Which enterprise HDD to use for content servers

This is the third in a multi-part series (read part two here) based on a white paper hands-on lab report I did compliments of Servers Direct and Seagate that you can read in PDF form here. The focus is looking at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drive (HDD’s). In this post the focus expands to hardware and software defining as well as configuring the test environments along with application workloads.

Defining Hardware Software Environment

Servers Direct content platforms are software defined and hardware defined to your specific solution needs. For my test-drive, I used a pair of 2U Content Solution platforms, one as a client System Test Initiator (STI) (3), the other as the System Under Test (SUT) server, shown in figure-1 (below). With the STI configured and the SUT set up, Seagate Enterprise class 2.5” 12Gbps SAS HDD’s were added to the configuration.

(Note 3) System Test Initiator (STI) was hardware defined with dual Intel Xeon E5-2695 v3 (2.30 GHz) processors, 32GB RAM running Windows Server 2012 R2 with two network connections to the SUT. Network connections from the STI to SUT included an Intel GbE X540-AT2 as well as an Intel XL710 Q2 40 GbE Converged Network Adapter (CNA). In addition to software defining the STI with Windows Server 2012 R2, Dell Benchmark Factory (V7.1 64b bit 496) part of the Database Administrators (DBA) Toad Tools (including free versions) was also used. For those familiar with HammerDB, Sysbench among others, Benchmark Factory is an alternative that supports various workloads and database connections with robust reporting, scripting and automation. Other installed tools included Spotlight on Windows, Iperf 2.0.5 for generating network traffic and reporting results, as well as Vdbench with various scripts.

The SUT setup (4) included four Enterprise 10K and two 15K Performance drives with the enhanced performance caching feature enabled, along with two Enterprise Capacity 2TB HDD’s, all attached to an internal 12Gbps SAS RAID controller.

(Note 4) System Under Test (SUT) with dual Intel Xeon E5-2697 v3 (2.60 GHz) processors providing 54 logical processors, 64GB of RAM (expandable to 768GB with 32GB DIMMs, or 3TB with 128GB DIMMs) and two network connections. Network connections from the STI to SUT consisted of an Intel 1 GbE X540-AT2 as well as an Intel XL710 Q2 40 GbE CNA. The GbE LAN connection was used for management purposes while the 40 GbE was used for data traffic. The system disk was a 6Gbps SATA flash SSD. Seagate Enterprise class HDD’s were installed into the 16 available 2.5” small form factor (SFF) drive slots. The eight (left most) drive slots were connected to an Intel RMS3CC080 12 Gbps SAS RAID internal controller. The “Blue” drives in the middle were connected to both an NVMe PCIe card and the motherboard 6 Gbps SATA controller using an SFF-8637 connector. The four right most drives were also connected to the motherboard 6 Gbps SATA controller.

System Test Configuration
Figure-1 STI and SUT hardware as well as software defined test configuration

As shown in figure-1, the four Enterprise 10K and two 15K Performance drives (with the enhanced performance caching feature enabled) along with two Enterprise Capacity 2TB HDD’s were attached to an internal 12Gbps SAS RAID controller. Five 6 Gbps SATA Enterprise Capacity 2TB HDD’s were set up using Microsoft Windows as a spanned volume. The system disk was a 6Gbps flash SSD and an NVMe flash SSD drive was used for database temp space.

What About NVM Flash SSD?

NAND flash and other Non-Volatile Memory (NVM) based SSDs complement content solutions. A little bit of flash SSD in the right place can have a big impact. The focus for these tests is HDD’s, however some flash SSDs were used as system boot and database temp (e.g. tempdb) space. Refer to StorageIO Lab reviews and visit www.thessdplace.com

Seagate Enterprise HDD’s Used During Testing

Various Seagate Enterprise HDD specifications used in the testing are shown below in table-1.

 

| Qty | Seagate HDD’s | Capacity | RPM | Interface | Size | Model | Servers Direct Price Each | Configuration |
|---|---|---|---|---|---|---|---|---|
| 4 | Enterprise 10K Performance | 1.8TB | 10K with cache | 12 Gbps SAS | 2.5” | ST1800MM0128 with enhanced cache | $875.00 USD | HW(5) RAID 10 and RAID 1 |
| 2 | Enterprise Capacity 7.2K | 2TB | 7.2K | 12 Gbps SAS | 2.5” | ST2000NX0273 | $399.00 USD | HW RAID 1 |
| 2 | Enterprise 15K Performance | 600GB | 15K with cache | 12 Gbps SAS | 2.5” | ST600MX0082 with enhanced cache | $595.00 USD | HW RAID 1 |
| 5 | Enterprise Capacity 7.2K | 2TB | 7.2K | 6 Gbps SATA | 2.5” | ST2000NX0273 | $399.00 USD | SW(6) RAID Span Volume |

Table-1 Seagate Enterprise HDD specification and Servers Direct pricing

URLs for additional Servers Direct content platform information:
https://serversdirect.com/solutions/content-solutions
https://serversdirect.com/solutions/content-solutions/video-streaming
https://www.serversdirect.com/File%20Library/Data%20Sheets/Intel-SDR-2P16D-001-ds2.pdf

URLs for additional Seagate Enterprise HDD information:
https://serversdirect.com/Components/Drives/id-HD1558/Seagate_ST2000NX0273_2TB_Hard_Drive

https://serversdirect.com/Components/Drives/id-HD1559/Seagate_ST600MX0082_SSHD

Seagate Performance Enhanced Cache Feature

The Enterprise 10K and 15K Performance HDD’s tested had the enhanced cache feature enabled. This feature provides a “turbo” boost like acceleration for both read and write I/O operations. HDD’s with the enhanced cache feature leverage the fact that some NVM such as flash in the right place can have a big impact on performance (7).

In addition to their performance benefit from combining a best-of or hybrid storage model (flash with HDD’s along with software defined cache algorithms), these devices are “plug-and-play”. Being “plug-and-play” means no extra special adapters, controllers, device drivers, tiering or cache management software tools are required.

(Note 5) Hardware (HW) RAID using Intel server on-board LSI based 12 Gbps SAS RAID card, RAID 1 with two (2) drives, RAID 10 with four (4) drives. RAID configured in write-through mode with default stripe / chunk size.

(Note 6) Software (SW) RAID using Microsoft Windows Server 2012 R2 (span). Hardware RAID used write-through cache (e.g. no buffering) with read-ahead enabled and a default 256KB stripe/chunk size.
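
To put notes 5 and 6 in context, here is a minimal sketch (mine, not part of the original lab setup) of how usable capacity falls out of the RAID layouts used; the drive counts and raw capacities come from table-1, while the function and layout names are just illustrative.

```python
# Minimal sketch (illustrative only): rough usable capacity for the drive
# groups described in notes 5 and 6. Drive counts and raw capacities are
# from table-1; the helper name and layout labels are hypothetical.

def usable_gb(drives: int, capacity_gb: int, layout: str) -> float:
    if layout == "RAID1":       # two-drive mirror keeps one drive's capacity
        return float(capacity_gb)
    if layout == "RAID10":      # striped mirrors keep half the raw capacity
        return drives * capacity_gb / 2
    if layout == "SPAN":        # simple spanned volume, no data protection
        return float(drives * capacity_gb)
    raise ValueError(f"unknown layout: {layout}")

print(usable_gb(2, 600, "RAID1"))    # 2 x 600GB 15K HW RAID 1  -> 600.0 GB
print(usable_gb(4, 1800, "RAID10"))  # 4 x 1.8TB 10K HW RAID 10 -> 3600.0 GB
print(usable_gb(5, 2000, "SPAN"))    # 5 x 2TB SATA SW span     -> 10000.0 GB
```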

(Note 7) Refer to Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy

The Seagate Enterprise Performance 10K and 15K with enhanced cache feature are a good example of how there is more to performance in today’s HDD’s than simply comparing RPM’s, drive form factor or interface.

Where To Learn More

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What This All Means

Careful and practical planning are key steps for testing various resources, as is aligning the applicable tools and configurations to meet your needs.

Continue reading part four of this multi-part series here where the focus expands to database application workloads.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Part 4 – Which HDD for Content Applications – Database Workloads



Updated 1/23/2018
Which enterprise HDD to use with a content server platform for database workloads

Insight for effective server storage I/O decision making
Server StorageIO Lab Review

Which enterprise HDD to use for content servers

This is the fourth in a multi-part series (read part three here) based on a white paper hands-on lab report I did compliments of Servers Direct and Seagate that you can read in PDF form here. The focus is looking at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drive (HDD’s). In this post the focus expands to database application workloads that were run to test various HDD’s.

Database Reads/Writes

Transaction Processing Performance Council (TPC) TPC-C like workloads were run against the SUT from the STI. These workloads simulated transactional, content management, meta-data and key-value processing. Microsoft SQL Server 2012 was configured and used with databases (each 470GB e.g. scale 6000) created and workload generated by virtual users via Dell Benchmark Factory (running on the STI with Windows 2012 R2).

A single SQL Server database instance (8) was used on the SUT, however unique databases were created for each HDD set being tested. Both the main database file (.mdf) and the log file (.ldf) were placed on the same drive set being tested (keeping in mind the constraints mentioned above). As time was a constraint, database workloads were run concurrently (9) with each other, except for the Enterprise 10K RAID 1 and RAID 10: one workload was run with two 10K HDD’s in a RAID 1 configuration, then another workload was run with a four drive RAID 10. In a production environment, ideally the .mdf and .ldf would be placed on separate HDD’s and SSDs.

To improve cache buffering, the SQL Server database instance memory could be increased from 16GB to a larger value, which would yield higher TPS numbers. Keep in mind the objective was not to see how fast I could make the databases run, rather how the different drives handled the workload.

(Note 8) The SQL Server Tempdb was placed on a separate NVMe flash SSD, also the database instance memory size was set to 16GB which was shared by all databases and virtual users accessing it.

(Note 9) Each user step was run for 90 minutes with a 30 minute warm-up preamble to measure steady-state operation.

| Config | Users | TPC-C Like TPS | Single Drive Cost per TPS | Drive Cost per TPS | Single Drive Cost per GB Raw Cap. | Cost per GB Usable (Protected) Cap. | Drive Cost (Multiple Drives) | Protect Space Overhead | Cost per Usable GB per TPS | Resp. Time (Sec.) |
|---|---|---|---|---|---|---|---|---|---|---|
| ENT 15K R1 | 1 | 23.9 | $24.94 | $49.89 | $0.99 | $0.99 | $1,190 | 100% | $49.89 | 0.01 |
| ENT 10K R1 | 1 | 23.4 | $37.38 | $74.77 | $0.49 | $0.49 | $1,750 | 100% | $74.77 | 0.01 |
| ENT CAP R1 | 1 | 16.4 | $24.26 | $48.52 | $0.20 | $0.20 | $798 | 100% | $48.52 | 0.03 |
| ENT 10K R10 | 1 | 23.2 | $37.70 | $150.78 | $0.49 | $0.97 | $3,500 | 100% | $150.78 | 0.07 |
| ENT CAP SWR5 | 1 | 17.0 | $23.45 | $117.24 | $0.20 | $0.25 | $1,995 | 20% | $117.24 | 0.02 |
| ENT 15K R1 | 20 | 362.3 | $1.64 | $3.28 | $0.99 | $0.99 | $1,190 | 100% | $3.28 | 0.02 |
| ENT 10K R1 | 20 | 339.3 | $2.58 | $5.16 | $0.49 | $0.49 | $1,750 | 100% | $5.16 | 0.01 |
| ENT CAP R1 | 20 | 213.4 | $1.87 | $3.74 | $0.20 | $0.20 | $798 | 100% | $3.74 | 0.06 |
| ENT 10K R10 | 20 | 389.0 | $2.25 | $9.00 | $0.49 | $0.97 | $3,500 | 100% | $9.00 | 0.02 |
| ENT CAP SWR5 | 20 | 216.8 | $1.84 | $9.20 | $0.20 | $0.25 | $1,995 | 20% | $9.20 | 0.06 |
| ENT 15K R1 | 50 | 417.3 | $1.43 | $2.85 | $0.99 | $0.99 | $1,190 | 100% | $2.85 | 0.08 |
| ENT 10K R1 | 50 | 385.8 | $2.27 | $4.54 | $0.49 | $0.49 | $1,750 | 100% | $4.54 | 0.09 |
| ENT CAP R1 | 50 | 103.5 | $3.85 | $7.71 | $0.20 | $0.20 | $798 | 100% | $7.71 | 0.45 |
| ENT 10K R10 | 50 | 778.3 | $1.12 | $4.50 | $0.49 | $0.97 | $3,500 | 100% | $4.50 | 0.03 |
| ENT CAP SWR5 | 50 | 109.3 | $3.65 | $18.26 | $0.20 | $0.25 | $1,995 | 20% | $18.26 | 0.42 |
| ENT 15K R1 | 100 | 190.7 | $3.12 | $6.24 | $0.99 | $0.99 | $1,190 | 100% | $6.24 | 0.49 |
| ENT 10K R1 | 100 | 175.9 | $4.98 | $9.95 | $0.49 | $0.49 | $1,750 | 100% | $9.95 | 0.53 |
| ENT CAP R1 | 100 | 59.1 | $6.76 | $13.51 | $0.20 | $0.20 | $798 | 100% | $13.51 | 1.66 |
| ENT 10K R10 | 100 | 560.6 | $1.56 | $6.24 | $0.49 | $0.97 | $3,500 | 100% | $6.24 | 0.14 |
| ENT CAP SWR5 | 100 | 62.2 | $6.42 | $32.10 | $0.20 | $0.25 | $1,995 | 20% | $32.10 | 1.57 |

Table-2 TPC-C workload results for various numbers of users across different drive configurations

Figure-2 shows TPC-C TPS (red dashed line) workload scaling over various numbers of users (1, 20, 50, and 100) with peak TPS per drive shown. Also shown is the used space capacity (in green), with total raw storage capacity in blue cross hatch. Looking at the multiple metrics in context shows that the 600GB Enterprise 15K HDD with performance enhanced cache is a premium option as an alternative to, or complement of, flash SSD solutions.

Figure-2 472GB Database TPS scaling along with cost per TPS and storage space used

In figure-2, the 1.8TB Enterprise 10K HDD with performance enhanced cache, while not as fast as the 15K, provides a good balance of performance, space capacity and cost effectiveness. A good use for the 10K drives is where some amount of performance is needed as well as a large amount of storage space for less frequently accessed content.

A low cost, low performance option would be the 2TB Enterprise Capacity HDD’s that have a good cost per capacity, however they lack the performance of the 15K and 10K drives with enhanced performance cache. A four drive RAID 10 along with a five drive software volume (Microsoft Windows) are also shown. For an apples to apples comparison, look at costs vs. capacity including the number of drives needed for a given level of performance.
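
To show where the cost columns in table-2 come from, the short sketch below (mine, not from the original report) recomputes the single user Enterprise 15K RAID 1 row using the per drive price from table-1; the small differences vs. table-2 come from rounding of the published TPS value.

```python
# Illustrative sketch: deriving the table-2 cost columns from table-1 pricing
# and the measured TPS for the two drive Enterprise 15K RAID 1, single user run.
drive_price = 595.00    # ENT 15K Performance, per drive (table-1)
drives      = 2         # RAID 1 pair
tps_1_user  = 23.9      # TPC-C like TPS at 1 user (table-2)

single_drive_cost_per_tps = drive_price / tps_1_user             # ~ $24.90
drive_set_cost_per_tps    = (drive_price * drives) / tps_1_user  # ~ $49.79
cost_per_raw_gb           = drive_price / 600                    # ~ $0.99 per GB

print(single_drive_cost_per_tps, drive_set_cost_per_tps, cost_per_raw_gb)
# Table-2 lists $24.94 and $49.89 for this row; the small delta reflects
# rounding of the published TPS figure.
```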

Figure-3 is a variation of figure-2 showing TPC-C TPS (blue bar) and response time (red-dashed line) scaling across 1, 20, 50 and 100 users. Once again the Enterprise 15K with enhanced performance cache feature enabled has good performance in an apples to apples RAID 1 comparison.

Note that the best performance was with the four drive RAID 10 using 10K HDD’s. Given its popularity, a four drive RAID 10 configuration with the 10K drives was used. Not surprisingly, the four 10K drives performed better than the RAID 1 15Ks. Also note that using five drives in a software spanned volume provides a large amount of storage capacity and good performance, however with a larger drive footprint.

Figure-3 472GB Database TPS scaling along with response time (latency)

From a cost per space capacity perspective, the Enterprise Capacity drives have a good cost per GB. A hybrid solution for environments that do not need ultra-high performance would be to pair a small amount of flash SSD (10) (drives or PCIe cards) as well as the 10K and 15K performance enhanced drives with the Enterprise Capacity HDD’s (11), along with cache or tiering software.

(Note 10) Refer to Seagate 1200 12 Gbps Enterprise SAS SSD StorageIO lab review

(Note 11) Refer to Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy

Where To Learn More

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What This All Means

If your environment is running applications that rely on databases, then test resources such as servers, storage and devices using tools that represent your environment. This means moving up the software and technology stack from basic storage I/O benchmark or workload generator tools such as Iometer among others, and instead using either your own applications, or tools that can replay or generate various workloads that represent your environment.

Continue reading part five in this multi-part series here where the focus shifts to large and small file I/O processing workloads.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Which HDD for Content Applications – General I/O Performance


Updated 1/23/2018

Which enterprise HDD to use with a content server platform for general I/O performance
Insight for effective server storage I/O decision making
Server StorageIO Lab Review

Which enterprise HDD to use for content servers

This is the sixth in a multi-part series (read part five here) based on a white paper hands-on lab report I did compliments of Servers Direct and Seagate that you can read in PDF form here. The focus is looking at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drive (HDD’s). In this post the focus is around general I/O performance including 8KB and 128KB I/O sizes.

General I/O Performance

In addition to running database and file (large and small) processing workloads, Vdbench was also used to collect basic small (8KB) and large (128KB) sized I/O operations. This consisted of random and sequential reads as well as writes with the results shown below. In addition to using vdbench, other tools that could be used include Microsoft Diskspd, fio, iorate and iometer among many others.

These workloads used Vdbench configured (13) to do direct I/O to a Windows file system mounted device using as much of the available disk space as possible. All workloads used 16 threads and were run concurrently similar to database and file processing tests.

(Note 13) Sample vdbench configuration for general I/O, note different settings were used for various tests
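
The actual vdbench parameter file used in the lab was not published with this post; as a rough idea of what a comparable setup could look like, the sketch below generates a parameter file for an 8KB random, 75% read, 16 thread, direct I/O test against a Windows file system device. The lun path, file size, run length and file names are placeholders, not the lab’s settings.

```python
# Hypothetical sketch only: write out a vdbench-style parameter file similar
# in spirit to the general I/O tests described (8KB random, 75% read,
# 16 threads, direct I/O). Paths, sizes and durations are placeholders.
params = "\n".join([
    # storage definition: a file on the mounted volume, direct I/O, 16 threads
    r"sd=sd1,lun=F:\vdbench_test.dat,size=1500g,openflags=directio,threads=16",
    # workload definition: 8KB transfers, 75% reads, 100% random seeks
    "wd=wd_8k_rand,sd=sd1,xfersize=8k,rdpct=75,seekpct=100",
    # run definition: uncapped I/O rate, 30 minute run, 30 second reporting
    "rd=rd_8k_rand,wd=wd_8k_rand,iorate=max,elapsed=1800,interval=30",
])

with open("general_io_8k.txt", "w") as f:
    f.write(params + "\n")
```

The resulting file would then be pointed at by vdbench (e.g. vdbench -f general_io_8k.txt), with the transfer size, read percentage and seek percentage varied for the other tests.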

Table-7 shows workload results for 8KB random I/Os with 75% read and 75% write (e.g. 25% read) mixes, including IOPs, bandwidth and response time.

 

| Metric | ENT 15K RAID1 75% Read | ENT 15K RAID1 25% Read | ENT 10K RAID1 75% Read | ENT 10K RAID1 25% Read | ENT CAP RAID1 75% Read | ENT CAP RAID1 25% Read | ENT 10K R10 (4 Drives) 75% Read | ENT 10K R10 (4 Drives) 25% Read | ECAP SW RAID (5 Drives) 75% Read | ECAP SW RAID (5 Drives) 25% Read |
|---|---|---|---|---|---|---|---|---|---|---|
| I/O Rate (IOPs) | 597.11 | 559.26 | 514 | 475 | 285 | 293 | 979 | 984 | 491 | 644 |
| MB/sec | 4.7 | 4.4 | 4.0 | 3.7 | 2.2 | 2.3 | 7.7 | 7.7 | 3.8 | 5.0 |
| Resp. Time (Sec.) | 25.9 | 27.6 | 30.2 | 32.7 | 55.5 | 53.7 | 16.3 | 16.3 | 32.6 | 24.8 |

Table-7 8KB sized random IOPs workload results

Figure-6 shows small (8KB) random I/O (75% read and 25% read) across different HDD configurations. Performance including activity rates (e.g. IOPs), bandwidth and response time for mixed reads / writes are shown. Note how response time increases with the Enterprise Capacity configurations vs. other performance optimized drives.

Figure-6 8KB random reads and writes showing IOP activity, bandwidth and response time

Table-8 below shows workload results for 8KB sized I/Os, 100% sequential, with 75% read and 75% write (e.g. 25% read) mixes, including IOPs, MB/sec and response time in seconds.

| Metric | ENT 15K RAID1 75% Read | ENT 15K RAID1 25% Read | ENT 10K RAID1 75% Read | ENT 10K RAID1 25% Read | ENT CAP RAID1 75% Read | ENT CAP RAID1 25% Read | ENT 10K R10 (4 Drives) 75% Read | ENT 10K R10 (4 Drives) 25% Read | ECAP SW RAID (5 Drives) 75% Read | ECAP SW RAID (5 Drives) 25% Read |
|---|---|---|---|---|---|---|---|---|---|---|
| I/O Rate (IOPs) | 3,778 | 3,414 | 3,761 | 3,986 | 3,379 | 1,274 | 11,840 | 8,368 | 2,891 | 1,146 |
| MB/sec | 29.5 | 26.7 | 29.4 | 31.1 | 26.4 | 10.0 | 92.5 | 65.4 | 22.6 | 9.0 |
| Resp. Time (Sec.) | 2.2 | 3.1 | 2.3 | 2.4 | 2.7 | 10.9 | 1.3 | 1.9 | 5.5 | 14.0 |

Table-8 8KB sized sequential workload results

Figure-7 shows small 8KB sequential mixed reads and writes (75% read and 75% write). While the Enterprise Capacity 2TB HDD has a large amount of space capacity, its performance in a RAID 1 vs. other similarly configured drives is slower.

Figure-7 8KB sequential 75% read and 75% write mixes showing bandwidth activity

Table-9 shows workload results for 100% sequential, 100% read and 100% write 128KB sized I/Os including IOPs, bandwidth and response time.

| Metric | ENT 15K RAID1 Read | ENT 15K RAID1 Write | ENT 10K RAID1 Read | ENT 10K RAID1 Write | ENT CAP RAID1 Read | ENT CAP RAID1 Write | ENT 10K R10 (4 Drives) Read | ENT 10K R10 (4 Drives) Write | ECAP SW RAID (5 Drives) Read | ECAP SW RAID (5 Drives) Write |
|---|---|---|---|---|---|---|---|---|---|---|
| I/O Rate (IOPs) | 1,798 | 1,771 | 1,716 | 1,688 | 921 | 912 | 3,552 | 3,486 | 780 | 721 |
| MB/sec | 224.7 | 221.3 | 214.5 | 210.9 | 115.2 | 114.0 | 444.0 | 435.8 | 97.4 | 90.1 |
| Resp. Time (Sec.) | 8.9 | 9.0 | 9.3 | 9.5 | 17.4 | 17.5 | 4.5 | 4.6 | 19.3 | 20.2 |

Table-9 128KB sized sequential workload results

Figure-8 shows sequential or streaming operations with larger (128KB) I/O request sizes (100% read and 100% write) that would be found with large content applications. Figure-8 highlights the relationship between lower response time and increased IOPs as well as bandwidth.

Figure-8 128KB sequential reads and writes showing IOP activity, bandwidth and response time

Where To Learn More

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What This All Means

Some content applications are doing small random I/Os for databases, key value stores or repositories as well as meta-data processing, while others are doing large sequential I/O. 128KB sized I/O may be large for your environment; on the other hand, with an increasing number of applications, file systems and software defined storage management tools among others, 1 to 10MB or even larger I/O sizes are becoming common. The key is selecting I/O sizes, read/write mixes, random vs. sequential access, and I/O or queue depths that align with your environment.
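
As a rough illustration of lining up I/O sizes, read/write mixes and random vs. sequential access with your environment, the hypothetical sketch below enumerates a small test matrix of vdbench-style workload definitions; the parameter values are examples only, not the settings used in the lab.

```python
# Hypothetical sketch: enumerate workload definitions across I/O sizes, read
# percentages and random vs. sequential access so a test matrix can reflect
# your own environment. Parameter names follow vdbench conventions; values
# are examples, not the lab's settings.
from itertools import product

io_sizes  = ["8k", "128k", "1m"]   # small, large and very large I/Os
read_pcts = [100, 75, 0]           # read-only, read-heavy and write-only mixes
seek_pcts = [100, 0]               # 100 = random, 0 = sequential

for i, (xfer, rdpct, seekpct) in enumerate(product(io_sizes, read_pcts, seek_pcts), 1):
    print(f"wd=wd{i},sd=sd1,xfersize={xfer},rdpct={rdpct},seekpct={seekpct}")
```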

Continue reading part seven, the final post in this multi-part series, here where the focus is on how HDD’s continue to evolve, including performance beyond traditional RPM based expectations, along with a wrap up.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

HDDs evolve for Content Application servers



Updated 1/23/2018

Enterprise HDDs evolve for content server platform

Insight for effective server storage I/O decision making
Server StorageIO Lab Review

Which enterprise HDD to use for content servers

This is the seventh and final post in this multi-part series (read part six here) based on a white paper hands-on lab report I did compliments of Servers Direct and Seagate that you can read in PDF form here. The focus is looking at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drive (HDD’s). The focus of this post is comparing how HDD’s continue to evolve over various generations, boosting performance as well as capacity and reliability. This also looks at how there is more to HDD performance than the traditional focus on Revolutions Per Minute (RPM) as a speed indicator.

Comparing Different Enterprise 10K And 15K HDD Generations

There is more to HDD performance than RPM speed of the device. RPM plays an important role, however there are other things that impact HDD performance. A common myth is that HDD’s have not improved on performance over the past several years with each successive generation. Table-10 shows a sampling of various generations of enterprise 10K and 15K HDD’s (14) including different form factors and how their performance continues to improve.

Figure-9 10K and 15K HDD performance improvements

Figure-9 shows how performance continues to improve with 10K and 15K HDD’s with each new generation including those with enhanced cache features. The result is that with improvements in cache software within the drives, along with enhanced persistent non-volatile memory (NVM) and incremental mechanical drive improvements, both read and write performance continues to be enhanced.

Figure-9 puts into perspective the continued performance enhancements of HDD’s comparing various enterprise 10K and 15K devices. The workload is the same TPC-C test used earlier on a similar configuration (14), with no RAID. 100 simulated users are shown in figure-9 accessing a database on each of the different drives, all running concurrently. The older 15K 3.5” Cheetah and 2.5” Savio drives used had a capacity of 146GB, which used a database scale factor of 1500 or 134GB. All other drives used a scale factor of 3000 or 276GB. Figure-9 also highlights the improvements in both TPS performance as well as lower response time with newer HDD’s, including those with the performance enhanced cache feature.

The workloads run are the same as the TPC-C ones shown earlier; however, these drives were not configured with any RAID. The TPC-C activity used Benchmark Factory with similar setup and configuration to those used earlier, including a multi-socket, multi-core Windows 2012 R2 server supporting a Microsoft SQL Server 2012 database with a database for each drive type.

| Drive | TPS (TPC-C) 1 User | TPS 20 Users | TPS 50 Users | TPS 100 Users | Resp. Time (Sec.) 1 User | Resp. 20 Users | Resp. 50 Users | Resp. 100 Users |
|---|---|---|---|---|---|---|---|---|
| ENT 10K V3 2.5” | 14.8 | 50.9 | 30.3 | 39.9 | 0.0 | 0.4 | 1.6 | 1.7 |
| ENT (Cheetah) 15K 3.5” | 14.6 | 51.3 | 27.1 | 39.3 | 0.0 | 0.3 | 1.8 | 2.1 |
| ENT 10K 2.5” (with cache) | 19.2 | 146.3 | 72.6 | 71.0 | 0.0 | 0.1 | 0.7 | 0.0 |
| ENT (Savio) 15K 2.5” | 15.8 | 59.1 | 40.2 | 53.6 | 0.0 | 0.3 | 1.2 | 1.2 |
| ENT 15K V4 2.5” | 19.7 | 119.8 | 75.3 | 69.2 | 0.0 | 0.1 | 0.6 | 1.0 |
| ENT 15K (enhanced cache) 2.5” | 20.1 | 184.1 | 113.7 | 122.1 | 0.0 | 0.1 | 0.4 | 0.2 |

Table-10 Continued Enterprise 10K and 15K HDD performance improvements

(Note 14) 10K and 15K generational comparisons were run on a separate comparable server to what was used for other test workloads. Workload configuration settings were the same as other database workloads including using Microsoft SQL Server 2012 on a Windows 2012 R2 system with Benchmark Factory driving the workload. Database memory size was reduced, however, to only 8GB vs. 16GB used in other tests.

Where To Learn More

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


What This All Means

A little bit of flash in the right place with applicable algorithms goes a long way, an example being the Seagate Enterprise HDD’s with the enhanced cache feature. Likewise, HDD’s are very much alive complementing SSD and vice versa. For high-performance content application workloads, flash SSDs including NVMe, 12Gbps SAS and 6Gbps SATA devices are cost-effective solutions. HDD’s continue to be cost-effective data storage devices for both capacity, as well as environments that do not need the performance of flash SSD.

For some environments using a combination of flash and HDD’s complementing each other along with cache software can be a cost-effective solution. The previous workload examples provide insight for making cost-effective informed storage decisions.

Evaluate today’s HDD’s on their effective performance running workloads as similar as possible to your own, or actually try them out with your applications. Today there is more to HDD performance than just RPM speed, particularly with the Seagate Enterprise Performance 10K and 15K HDD’s with the enhanced caching feature.

However, the Enterprise Performance 10K with enhanced cache feature provides a good balance of capacity and performance while being cost-effective. If you are using older 3.5” 15K or even previous generation 2.5” 15K RPM and “non-performance enhanced” HDD’s, take a look at how the newer generation HDD’s perform, looking beyond the RPM of the device.

Fast content applications need fast content and flexible content solution platforms such as those from Servers Direct and HDD’s from Seagate. Key to a successful content application deployment is having the flexibility to hardware define and software define the platform to meet your needs. Just as there are many different types of content applications along with diverse environments, content solution platforms need to be flexible, scalable and robust, not to mention cost effective.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Cloud Constellation SpaceBelt – Out Of This World Cloud Data Centers?



A new startup called Cloud Constellation (aka SpaceBelt) has announced plans to converge space-based satellite technology with IT information and cloud related data infrastructure technologies including NVM (e.g. SSD) and storage class memory (SCM). While announcing their Series A funding and proposed value proposition (below), Cloud Constellation did not say how much funding, who the investors are, or who the management team is, leading to some, well, rather cloudy information.

Cloud Constellation’s SpaceBelt transforms cybersecurity for enterprise and government operations moving high-value data around the world by:

  • insulating it completely from the Internet and terrestrial leased lines
  • liberating it from cyberattacks and surreptitious activities
  • protecting it from natural disasters and force majeure events
  • addressing all jurisdictional complexities and constraints
  • avoiding risks of violating privacy regulations

Truly secure data transfer: Enterprises and governments will finally be enabled to bypass use of leaky networks and compromised servers interconnecting their sites around the world.

New option for cloud service providers: The service will be a key market differentiator for cloud service providers to offer a transformative, ultra-high degree of network security to clients reliant on moving sensitive, mission-critical data around the world each day.

What is SpaceBelt Cloud Constellation?

From their website www.cloudconstellation.com you will see following.

Cloud Constellation Space Belt
www.cloudconstellation.com

Keeping in mind that today is April 1st which means April Fools day 2016, my motto for the day is trust yet verify. So just for fun, check out this new company that I had a briefing with earlier this week that also announced their Series A funding earlier in March 2016.

The question you have to ask yourself today is whether this is an out of this world April Fools prank, or an out of this world idea that will eclipse current cloud services such as Amazon Web Services (AWS), Google, IBM Softlayer, Microsoft Azure and Rackspace among others?

Or, will SpaceBelt go the way of earlier cloud high flyers HP Cloud, Nirvanix among others.

Btw, keep in mind that only you can prevent cloud data loss, however cloud and virtual data availability is also a shared responsibility.

Some Questions and Things To Ponder

  • Is this an April Fools Joke?
  • How much Non Volatile Memory (NVM) such as NAND, 3D Nand, 3D XPoint or other Storage Class Memory (SCM) can be physically placed on each bird (e.g. Satellite)
  • What will the solar panels look like to power the birds, plus batteries for heating and cooling the NVM (contrary to popular myth, NVMs do get warm if not hot)
  • What is the availability, accessibility and durability model, how will data be replicated, mirrored or an out of this world LRC/Erasure Code Advanced Parity model be used?
  • How will the storage be accessed, what will the end-points look like, iSCSI, NDB, FUSE, NFS, CIFS, HDFS, Torrent, JSON, ODBC, REST/HTTP, FTP or something else?
  • Security will be a concern as well as geo placement, after all, its one thing to move data across some borders, how about if the data is hundreds of miles above those borders?
  • Cost will be an interesting model to follow, as well as competitors from SpaceX, Amazon, Boeing, GE, NSA, Google, Facebook or others emerge?
  • What will the uplink and download speeds be, not to mention latency of moving and accessing data from the satellites. For those who have DirecTV or other satellite service you know the pros and cons associated with that. Speaking of which, perhaps you have experienced a thunderstorm with DirecTV or Dish, or perhaps a cloud storm due to a cloud provider service or site failure; think about what happens to your cloud data if the satellite dish is disrupted during an upload or download.
  • I also wonder how the various industry trade groups will wrap their head around this one, what kind of new standards, initiatives and out of this world marketing promotions will we see or hear about? You know that some creative marketer will declare surface clouds as dead, just saying.

Where To Learn More

What This All Means

The folks over at Cloud Constellation say their SpaceBelt, made up of a constellation (e.g. in-orbit cluster) of satellites, will be circling the globe around 2019. I wonder if they will be ready to do a proof of concept (POC) technology demonstrator of their IP using TCP based networking, server and storage I/O protocols leveraging a hot air balloon or weather balloon near term; if not, it would be a great marketing ploy.

If nothing else, putting their data infrastructure technology on a hot air balloon could be a fun marketing ploy to say their cloud rises above the hot air of other cloud marketing. Or if they do a POC using a weather balloon, they could show and say their cloud rises above traditional cloud storms, oh the fun…

Check out Cloud Constellation and their Spacebelt, see for yourself and then you decide what is going on!

Remember, it's April Fools day today, trust, yet verify.

What say you, is this an April Fools Joke or the next big thing?

Ok, nuff said (for now), time to listen to Pink Floyd Dark Side of the Moon ;)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

Server StorageIO March 2016 Update Newsletter

Volume 16, Issue III

Hello and welcome to the March 2016 Server StorageIO update newsletter.

Here in the northern hemisphere spring has officially arrived as of the March 20th equinox along with warmer weather, more hours and minutes of daylight, and plenty of things to do. In addition to the official arrival of spring here (fall in the southern hemisphere), it also means in the U.S. that March Madness and college basketball tournament playoff brackets and office (betting) pools are in full swing.

In This Issue

  • Feature Topic and Themes
  • Industry Trends News
  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Videos and Podcast’s
  • Events and Webinars
  • Recommended Reading List
  • Industry Activity Trends
  • Server StorageIO Lab reports
  • New and Old Vendor Update
  • Resources and Links
A couple of other things are associated with spring, including moving clocks forward, which occurred recently here in the U.S. Spring is also a good time to check your smoke and dangerous gas detectors or other alarms. This means replacing batteries and cleaning the detectors.

    Besides smoke and gas detectors, spring is also a good time do preventive maintenance on your battery backup uninterruptible power supplies (UPS), as well as generators and other standby power devices. For my part, I had a service tech out to do a tune up on my Kohler generator, as well as replaced some batteries in APC UPS devices.

    Besides smoke and CO2 detectors, generators and UPS standby power systems, and March madness basketball and other sports tournaments, something else occurs on March 31st (besides being the day before April 1st and April Fools day). March 31st is World Backup (and Restore) Day, meaning awareness around making sure your data, applications, settings, configurations, keys, software and systems are backed up, and can be recovered.

    Hopefully none of you are in the situation where data, applications, systems, computers, laptops, tablets, smart phones or other devices only get backed up or protected once a year, however maybe you know somebody who does.

    March also marks the 10th anniversary of Amazon Web Services (AWS) cloud services (more here), happy birthday AWS.

    March wraps up on the 31st with World Backup Day which is intended to draw attention to the importance of data protection and your ability to recover applications and data. While backups are important, so too is testing to make sure you can actually use and recover what was protected. Keep in mind that while some claim backup is dead, data protection is alive, and as long as vendors and others keep referring to data protection as backup, backup will stay alive.

    Join me and folks from HP Enterprise (HPE) on March 31st at 1PM ET for a free webinar compliments of HPE with a theme of Backup with Brains, emphasis on awareness and analytics to enable smart data protection. Click here to learn more and register.

    Enjoy this edition of the Server StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcast’s along with in the news commentary appearing soon.

    Cheers GS

    Feature Topic and Theme

    This month's feature theme and topics include backup (and restore) as part of data protection, more on clouds (public, private and hybrid) including how some providers such as DropBox are moving out of public cloud providers such as AWS and building their own data centers.

    Building off of the February newsletter there is more on Google, including their use of Non-Volatile Memory (NVM) aka NAND Flash Solid State Devices (SSD) and some of their research. In addition to Google's use of SSD, check out the posts and industry activity on NVMe as well as other news and updates including new converged platforms from Cisco and HPE among others.

    StorageIOblog Posts

    Recent and popular Server StorageIOblog posts include:

    View other recent as well as past blog posts here

    Server Storage I/O Industry Activity Trends (Cloud, Virtual, Physical)

    StorageIO news (image licensed for use from Shutterstock by StorageIO)

    Some new Products Technology Services Announcements (PTSA) include:

  • Via Redmondmag: AWS Cloud Storage Service Turns 10 years old in March, happy birthday AWS (read more here at the AWS site).
  • Cisco announced new flexible HyperFlex converged compute server platforms for hybrid cloud and other deployments. Also announced were NetApp All Flash Array (AFA) FlexPod converged solutions powered by Cisco UCS servers and networking technology. In other activity, Cisco unveiled a Digital Network Architecture to enable customer digital data transformation. Cisco also announced its intent to acquire CliQr for management of hybrid clouds.

  • Data Direct Networks (DDN) expands NAS offerings with new GS14K platform via PRnewswire.

  • Via Computerworld: DropBox quits Amazon cloud, takes back 500 PB of data. DropBox has created their own cloud to host videos, images, files, folders, objects, blobs and other storage items that used to be stored within AWS S3. In this DropBox post, you can read about why they decided to create their own cloud, as well as how they used a hybrid approach with metadata kept local, actual data stored in AWS S3. Now the data and the metadata are in DropBox data centers. However, DropBox is still keeping some data in AWS, particularly in different geographies.

  • Web site hosting company GoDaddy has extended their capabilities similar to other service providers by adding an OpenStack powered cloud service. This continues a trend where others such as Bluehost (where my sites are located on a DPS) have evolved from simple shared hosting to dedicated private servers (DPS) and virtual private servers (VPS), along with other cloud related services. Think of a VPS as a virtual machine or cloud instance. Likewise some of the cloud service providers such as AWS are moving into dedicated private servers.

  • Following up from the February 2016 Server StorageIO Update Newsletter that included Google's message to disk vendors: Make hard drives like this, even if they lose more data and Google Disk for Data Centers White Paper (PDF Here), read about Google's experiences with SSD.

    This PDF white paper, presented at the recent Usenix 2016 conference, outlines Google's experiences with different types (SLC, MLC, eMLC) and generations of NAND flash SSD media across various vendors. Some of the takeaways include that context matters when looking at SSD metrics on endurance, durability and errors. While some in the industry focus on Unrecoverable Bit Error Rates (UBER), there needs to be awareness around Raw Bit Error Rate (RBER) among other metrics and usage. Read more about Google's experiences here.


  • Hewlett Packard Enterprise (HPE) announced Hyper-Converged systems Via Marketwired including HC 380 based on ProLiant DL380 technology providing all in one (AiO) converged compute, storage and virtualization software with simplified management. The HC 380 is targeted for mid-market aka small medium business (SMB), remote office branch office (ROBO) and workgroups. HPE also announced all flash array (AFA) enhancements for 3PAR storage (Via Businesswire).

  • Microsoft has announced that it will be releasing a version of its SQL Server database on Linux. What this means is that as well as being able to use SQL Server and associated tools on Windows and Azure platforms, you will also in the not so distant future be able to deploy on Linux. By making SQL Server available on Linux opens up some interesting scenarios and solution alternatives vs. Oracle along with MySQL and associated MySQL derivatives, as well as NoSQL offerings (Read more about NoSQL Databases here). Read more about Microsoft’s SQL Server for Linux here.

    In addition to SQL Server for Linux, Microsoft has also announced enhancements for easing docker container migrations to clouds. In other Microsoft activity, they announced enhancements to Storsimple and Azure. Keep an eye out for Windows Server 2016 Tech Preview 5 (e.g. TP5) which will be the next release of the upcoming new version of the popular operating systems.


  • MSDI, Rockland IT Solutions and Source Support Services Merge to Form Congruity with CEO Todd Gresham, along with Mike Stolz and Mark Shirman (formerly of Glasshouse) among others you may know.

  • Via Businesswire: PrimaryIO announces server-based flash acceleration for VMware systems, while Riverbed extends Remote Office Branch Office (ROBO) cloud connectivity Via Businesswire.

  • Via Computerworld: Samsung ships 12Gbs SAS 15TB 2.5" 3D NAND Flash SSD (Hey Samsung, send me a device or two and will give them a test drive in the Server StorageIO lab ;). Not to be outdone, Via Forbes: Seagate announces fast SSD card, as well as for the High Performance Compute (HPC) and Super Compute (SC) markets, Via HPCwire: Seagate Sets Sights on Broader HPC Market with their scale-out clustered Lustre based systems.

  • Servers Direct is now offering the HGST 4U x 60 drive enclosures while Via PRnewswire: SMIC announces RRAM partnership.

  • ATTO Technology has enhanced their RAID Arrays Behind FibreBridge 7500, while Oracle announced mainframe virtual tape library (VTL) cloud support Via Searchdatabackup. In other updates for this month, VMware has released and made generally available (GA) VSAN 6.2 and Via Businesswire: Wave and Centeris Launch Transpacific Broadband Data and Fiber Hub.
    The above is a sampling of some of the various industry news, announcements and updates for this March. Watch for more news and updates in April coming out of NAB and OpenStack Summit among other events.

    View other recent news and industry trends here.

    StorageIO Commentary in the news

    View more Server, Storage and I/O hardware as well as software trends comments here

    Vendors you may not have heard of

    Various vendors (and service providers) you may not know or heard about recently.

    • Continum – R1Soft Server Backup Manager
    • HyperIO – HiMon and HyperIO server storage I/O monitoring software tools
    • Runcast – VMware automation and management software tools
    • Opvizor – VMware health management software tools
    • Asigra – Cloud, Managed Service and distributed backup/data protection tools
    • Datera – Software defined storage management startup
    • E8 Storage – Software Defined Stealth Storage Startup
    • Venyu – Cloud and data center data protection tools
    • StorPool – Distributed software defined storage management tools
    • ExaBlox – Scale out storage solutions

    Check out more vendors you may know, have heard of, or that are perhaps new on the Server StorageIO Industry Links page here (over 1,000 entries and growing).

    StorageIO Tips and Articles

    Recent Server StorageIO articles appearing in different venues include:

    • InfoStor:  Data Protection Gaps, Some Good, Some Not So Good
    • Virtual Blocks (VMware Blogs):  Part III EVO:RAIL – When And Where To Use It?
    • InfoStor:  Object Storage Is In Your Future

    Check out these resources and links technology, techniques, trends as well as tools. View more tips and articles here

    StorageIO Videos and Podcasts

    Check out this video (Via YouTube) of a Google Data Center tour.

    In the IoT and IoD era of little and big data, how about this video I did with my Phantom DJI drone and an HD GoPro (e.g. 1K vs. 2.7K or 4K in newer cameras). This generates about a GByte of raw data per 10 minutes of flight, which then means another GB copied to a staging area, then protected copies, then production versions and so forth. Thus a 2 minute clip in 1080p resulted in plenty of storage including produced and uploaded versions along with backup copies in archives spread across YouTube, Dropbox and elsewhere.

    StorageIO podcasts are also available via and at StorageIO.tv

    StorageIO Webinars and Industry Events

    EMCworld (Las Vegas) May 2-4, 2016

    Interop (Las Vegas) May 4-6 2016

    TBA – April 27, 2016 webinar

    NAB (Las Vegas) April 19-20, 2016

    Backup with Brains – March 31, 2016 free webinar (1PM ET)

    See more webinars and other activities on the Server StorageIO Events page here.

    From StorageIO Labs

    Research, Reviews and Reports

    NVMe is in your future, resources to start preparing today for tomorrow

    NVM and NVMe corner (Via and Compliments of Micron.com)

    View more NVMe related items at microsite thenvmeplace.com.

    Read more in this Server StorageIO industry Trends Perspective white paper and lab review.

    Server StorageIO Recommended Reading List

    The following are various recommended reading including books, blogs and videos. If you have not done so recently, also check out the Intel Recommended Reading List (here) where you will also find a couple of mine as well as books from others.

    For this month's recommended reading, it's a blog site. If you have not visited Eric Siebert's (@ericsiebert) site vSphere-land and its companion resources pages including top blogs, do so now.

    Granted there is a heavy VMware server virtualization focus, however there is a good balance of other data infrastructure topics spanning servers, storage, I/O networking, data protection and more.

    Server StorageIO Industry Resources and Links

    Check out these useful links and pages:

    storageio.com/links – Various industry links (over 1,000 with more to be added soon)
    objectstoragecenter.com – Cloud and object storage topics, tips and news items
    storageioblog.com/data-protection-diaries-main/ – Various data protection items and topics
    thenvmeplace.com – Focus on NVMe trends and technologies
    thessdplace.com – NVM and Solid State Disk topics, tips and techniques
    storageio.com/performance.com – Various server, storage and I/O performance and benchmarking

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Part V – NVMe overview primer (Where to learn more, what this all means)

    Updated 1/12/2018
    This is the fifth in a five-part mini-series providing a NVMe primer overview.

    View Part I, Part II, Part III, Part IV, Part V as well as companion posts and more NVMe primer material at www.thenvmeplace.com.

    There are many different facets of NVMe including protocol that can be deployed on PCIe (AiC, U.2/8639 drives, M.2) for local direct attached, dedicated or shared for front-end or back-end of storage systems. NVMe direct attach is also found in servers and laptops using M.2 NGFF mini cards (e.g. "gum sticks"). In addition to direct attached, dedicated and shared, NVMe is also deployed on fabrics including over Fibre Channel (FC-NVMe) as well as NVMe over Fabrics (NVMeoF) leveraging RDMA based networks (e.g. iWARP, RoCE among others).

    The storage I/O capabilities of flash can now be fed across PCIe faster to enable modern multi-core processors to complete more useful work in less time, resulting in greater application productivity. NVMe has been designed from the ground up with more and deeper queues, supporting a larger number of commands in those queues. This in turn enables the SSD to better optimize command execution for much higher concurrent IOPS. NVMe will coexist along with SAS, SATA and other server storage I/O technologies for some time to come. But NVMe will be at the top-tier of storage as it takes full advantage of the inherent speed and low latency of flash while complementing the potential of multi-core processors that can support the latest applications.

    With NVMe, the capabilities of underlying NVM and storage memories are further realized. Devices used included a PCIe x4 NVMe AiC SSD, a 12 Gbps SAS SSD and a 6 Gbps SATA SSD. These and other improvements with NVMe enable concurrency while reducing latency to remove server storage I/O traffic congestion. The result is that applications demanding more concurrent I/O activity along with lower latency will gravitate towards NVMe for accessing fast storage.

    Like the robust PCIe physical server storage I/O interface it leverages, NVMe provides both flexibility and compatibility. It removes complexity, overhead and latency while allowing far more concurrent I/O work to be accomplished. Those on the cutting edge will embrace NVMe rapidly. Others may prefer a phased approach.

    Some environments will initially focus on NVMe for local server storage I/O performance and capacity available today. Other environments will phase in emerging external NVMe flash-based shared storage systems over time.

    Planning is an essential ingredient for any enterprise. Because NVMe spans servers, storage, I/O hardware and software, those intending to adopt NVMe need to take into account all ramifications. Decisions made today will have a big impact on future data and information infrastructures.

    Key questions should be, how much speed do your applications need now, and how do growth plans affect those requirements? How and where can you maximize your financial return on investment (ROI) when deploying NVMe and how will that success be measured?

    Several vendors are working on, or have already introduced NVMe related technologies or initiatives. Keep an eye on among others including AWS, Broadcom (Avago, Brocade), Cisco (Servers), Dell EMC, Excelero, HPE, Intel (Servers, Drives and Cards), Lenovo, Micron, Microsoft (Azure, Drivers, Operating Systems, Storage Spaces), Mellanox, NetApp, OCZ, Oracle, PMC, Samsung, Seagate, Supermicro, VMware, Western Digital (acquisition of SANdisk and HGST) among others.

    Where To Learn More

    View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


    What this all means

    NVMe is in your future if not already, so If NVMe is the answer, what are the questions?

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    NVMe Need for Performance Speed

    Updated 1/12/2018

    This is the third in a five-part mini-series providing a primer and overview of NVMe. View companion posts and more material at www.thenvmeplace.com.

    How fast is NVMe?

    It depends! Generally speaking NVMe is fast!

    However fast interfaces and protocols also need fast storage devices, adapters, drivers, servers, operating systems and hypervisors as well as applications that drive or benefit from the increased speed.

    A server storage I/O example is in figure 5 where a 6 Gbps SATA NVM flash SSD (left) is shown with an NVMe 8639 (x4) drive, both of which were directly attached to a server. The workload is 8 Kbyte sized random writes with 128 threads (workers), showing results for IOPs (solid bar) along with response time (dotted line). Not surprisingly the NVMe device has a lower response time and a higher number of IOPs. However also note how the amount of CPU time used per IOP is lower on the right with the NVMe drive.

    NVMe storage I/O performance
    Figure 5 6 Gbps SATA NVM flash SSD vs. NVMe flash SSD

    While many people are aware of, or are learning about, the IOP and bandwidth improvements as well as the decrease in latency with NVMe, something that gets overlooked is how much less CPU is used. If a server is spending time in I/O wait modes, the result is lost productivity; by finding and removing those barriers, more work can be done on a given server, perhaps even delaying a server upgrade.

    In figure 5, notice the lower amount of CPU used per unit of work being done (e.g. per I/O or IOP), which translates to more effective use of your server's resources. That means either doing more work with what you have, potentially delaying a CPU or server upgrade, or using those freed-up CPU cycles to power software defined storage management stacks including erasure coding or advanced parity RAID, replication and other functions.

    Table 1 shows relative server I/O performance of some NVM flash SSD devices across various workloads. As with any performance comparison, take these and the following results with a grain of salt, as your speed will vary.

    NAND flash SSD              100% Seq.   100% Seq.   100% Ran.   100% Ran.   100% Seq.   100% Seq.   100% Ran.   100% Ran.
                                Read 8KB    Write 8KB   Read 8KB    Write 8KB   Read 1MB    Write 1MB   Read 1MB    Write 1MB

    NVMe    IOPs                41829.19    33349.36    112353.6    28520.82    1437.26     889.36      1336.94     496.74
    PCIe    Bandwidth (MBps)    326.79      260.54      877.76      222.82      1437.26     889.36      1336.94     496.74
    AiC     Resp. (ms)          3.23        3.90        1.30        4.56        178.11      287.83      191.27      515.17
            CPU / IOP           0.001571    0.002003    0.000689    0.002342    0.007793    0.011244    0.009798    0.015098

    12Gb    IOPs                34792.91    34863.42    29373.5     27069.56    427.19      439.42      416.68      385.9
    SAS     Bandwidth (MBps)    271.82      272.37      229.48      211.48      427.19      429.42      416.68      385.9
            Resp. (ms)          3.76        3.77        4.56        5.71        599.26      582.66      614.22      663.21
            CPU / IOP           0.001857    0.00189     0.002267    0.00229     0.011236    0.011834    0.01416     0.015548

    6Gb     IOPs                33861.29    9228.49     28677.12    6974.32     363.25      65.58       356.06      55.86
    SATA    Bandwidth (MBps)    264.54      72.1        224.04      54.49       363.25      65.58       356.06      55.86
            Resp. (ms)          4.05        26.34       4.67        35.65       704.70      3838.59     718.81      4535.63
            CPU / IOP           0.001899    0.002546    0.002298    0.003269    0.012113    0.032022    0.015166    0.046545

    Table 1 Relative performance of various protocols and interfaces
    (NVMe PCIe AiC = NVMe PCIe Add-in Card; all workloads are 100% of the stated pattern)

    The workload results in table 1 were generated using a vdbench script running on a Windows 2012 R2 based server and are intended to be a relative indicator of different protocols and interfaces; your performance mileage will vary. The results compare the number of IOPs (activity rate) for reads and writes, random and sequential, across small 8KB and large 1MB sized I/Os.
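
    For readers who want to approximate this kind of test matrix, the following Python sketch emits vdbench-style storage, workload and run definitions covering the eight combinations in table 1 (8KB and 1MB, sequential and random, reads and writes). It is a minimal illustration only; the device path, thread count, run duration and names are assumptions, not the actual lab script.

        # Sketch: generate vdbench-style parameters for the table 1 workload matrix.
        # Device path, threads, elapsed time and naming are illustrative assumptions.
        xfersizes = ["8k", "1m"]
        patterns = [
            ("seqread", 100, 0),    # (name, rdpct, seekpct): 100% sequential read
            ("seqwrite", 0, 0),     # 100% sequential write
            ("ranread", 100, 100),  # 100% random read
            ("ranwrite", 0, 100),   # 100% random write
        ]

        lines = [r"sd=sd1,lun=\\.\PhysicalDrive1,threads=128"]  # assumed raw Windows device
        for size in xfersizes:
            for name, rdpct, seekpct in patterns:
                wd = f"wd_{size}_{name}"
                lines.append(f"wd={wd},sd=sd1,xfersize={size},rdpct={rdpct},seekpct={seekpct}")
                lines.append(f"rd=rd_{size}_{name},wd={wd},iorate=max,elapsed=600,interval=30")

        print("\n".join(lines))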

    Also shown in table 1 are bandwidth or throughput (e.g. amount of data moved), response time and the amount of CPU used per IOP. Note in table 1 how NVMe can do higher IOPs with a lower CPU per IOP, or, using a similar amount of CPU, do more work at a lower latency. SSD has been used for decades to help reduce CPU bottlenecks or defer server upgrades by removing I/O wait times and reducing CPU consumption (e.g. wait or lost time).
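
    As a rough way to see what the CPU per IOP difference means, the minimal Python sketch below uses the 8KB random read CPU / IOP figures from table 1 to estimate how many IOPs a fixed CPU budget could drive. The budget of 10 CPU-seconds per second (roughly ten cores' worth) is an assumption for illustration only, and real results depend on the rest of the hardware and software stack.

        # Translate "CPU used per IOP" (table 1, 8KB random reads) into work per CPU budget.
        cpu_per_iop = {"NVMe (PCIe AiC)": 0.000689, "12Gb SAS": 0.002267, "6Gb SATA": 0.002298}
        cpu_budget = 10.0  # assumed CPU-seconds available per second (about ten cores)

        for interface, cost in cpu_per_iop.items():
            iops = cpu_budget / cost
            print(f"{interface}: ~{iops:,.0f} IOPs possible within the CPU budget")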

    Can NVMe solutions run faster than those shown above? Absolutely!

    Where To Learn More

    View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    Continue reading about NVMe with Part IV (Where and How to use NVMe) in this five-part series, or jump to Part I, Part II or Part V.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Different NVMe Configurations

    server storage I/O trends
    Updated 1/12/2018

    This is the second in a five-part mini-series providing a primer and overview of NVMe. View companion posts and more material at www.thenvmeplace.com.

    The many different faces or facets of NVMe configurations

    NVMe can be deployed and used in many ways, the following are some examples to show you its flexibility today as well as where it may be headed in the future. An initial deployment scenario is NVMe devices (e.g. PCIe cards, M2 or 8639 drives) installed as storage in servers or as back-end storage in storage systems. Figure 2 below shows a networked storage system or appliance that uses traditional server storage I/O interfaces and protocols for front-end access, however with back-end storage being all NVMe, or a hybrid of NVMe, SAS and SATA devices.
    NVMe as back-end server storage I/O interface to NVM
    Figure 2 NVMe as back-end server storage I/O interface to NVM storage

    A variation of the above is using NVMe for shared direct attached storage (DAS) such as the EMC DSSD D5. In the following scenario (figure 3), multiple servers in a rack or cabinet configuration have an extended PCIe connection that attaches to a shared all flash storage array using NVMe on the front-end. Read more about this approach and EMC DSSD D5 here or click on the image below.

    EMC DSSD D5 NVMe
    Figure 3 Shared DAS All Flash NVM Storage using NVMe (e.g. EMC DSSD D5)

    Next up, in figure 4, is a variation of the previous example, except NVMe is implemented over an RDMA (Remote Direct Memory Access) based fabric network, using either InfiniBand or converged 10GbE/40GbE Ethernet, the latter known as RoCE (RDMA over Converged Ethernet, pronounced "Rocky").

    NVMe over Fabric RoCE
    Figure 4 NVMe as a “front-end” interface for servers or storage systems/appliances

    Where To Learn More

    View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    Watch for more topology and configuration options as NVMe along with associated hardware, software and I/O networking tools and technologies emerge over time.

    Continue reading about NVMe with Part III (Need for Performance Speed) in this five-part series, or jump to Part I, Part IV or Part V.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Server StorageIO February 2016 Update Newsletter

    Volume 16, Issue II

    Hello and welcome to the February 2016 Server StorageIO update newsletter.

    Even with an extra day during the month of February, there was a lot going on in a short amount of time. This included industry activity from servers to storage and I/O networking, hardware, software, services, mergers and acquisitions for cloud, virtual, containers and legacy environments. Check out the sampling of some of the various industry activities below.

    Meanwhile, it's now time for March Madness, which also means metrics that matter and getting ready for World Backup Day on March 31st. Speaking of World Backup Day, check out the StorageIO events and activities page for a webinar on March 31st involving data protection as part of smart backups.

    While your focus for March may be around brackets and other related themes, check out the Carnegie Mellon University (CMU) white paper listed below that looks at NAND flash SSD failures at Facebook. Some of the takeaways involve the importance of cooling and thermal management for flash, as well as wear management and role of flash translation layer firmware along with controllers.

    Also see the links to the Google white paper on their request to the industry for a new type of Hard Disk Drive (HDD) to store capacity data while SSDs handle the IOPs. The takeaway is that while Google uses a lot of flash SSD for high performance, low latency workloads, they also need a lot of high-capacity bulk storage that is more affordable on a cost per capacity basis. Google also makes several proposals and suggestions to the industry on what should and can be done on a go forward basis.

    Backblaze also has a new report out on their 2015 HDD reliability and failure analysis which makes for an interesting read. One of the takeaways is that while there are newer, larger capacity 6TB and 8TB drives, Backblaze is leveraging the lower cost per capacity of 4TB drives that are also available in volume quantities.

    Enjoy this edition of the Server StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts, along with in the news commentary, appearing soon.

    Cheers GS

    In This Issue

  • StorageIOblog posts
  • Industry Activity Trends
  • New and Old Vendor Update
  • Events and Webinars
  • StorageIOblog Posts

    Recent and popular Server StorageIOblog posts include:

    • EMC DSSD D5 Rack Scale Direct Attached Shared SSD All Flash Array Part I
      and Part II – EMC DSSD D5 Direct Attached Shared AFA
      EMC announced the general availability of their DSSD D5 Shared Direct Attached SSD (DAS) flash storage system (e.g. All Flash Array or AFA) which is a rack-scale solution. If you recall, EMC acquired DSSD back in 2014 which you can read more about here. EMC announced four configurations that include 36TB, 72TB and 144TB raw flash SSD capacity with support for up to 48 dual-ported host client servers.
    • Various Hardware (SAS, SATA, NVM, M2) and Software (VHD) Defined Odd’s and Ends
      Ever need to add another GbE port to a small server, workstation or perhaps an Intel NUC when no PCIe slots are available? How about attaching an M2 form factor flash SSD card to a server or device that does not have an M2 port, or mirroring two M2 cards together with a RAID adapter? Looking for a tool to convert a Windows system to a Virtual Hard Disk (VHD) while it is running? The following are a collection of odds and ends devices and tools for hardware and software defining your environment.
    • Software Defined Storage Virtual Hard Disk (VHD) Algorithms + Data Structures
      For those who are into, or simply like to talk about software defined storage (SDS), API’s, Windows, Virtual Hard Disks (VHD) or VHDX, or Hyper-V among other related themes, have you ever actually looked at the specification for VHDX? If not, here is the link to the open specification that Microsoft published (this one dates back to 2012).
    • Big Files and Lots of Little File Processing and Benchmarking with Vdbench
      Need to test a server, storage I/O networking, hardware, software, services, cloud, virtual, physical or other environment that is either doing some form of file processing, or that you simply want to have some extra workload running in the background for whatever reason?

    View other recent as well as past blog posts here

    Server Storage I/O Industry Activity Trends (Cloud, Virtual, Physical)

    StorageIO news (image licensed for use from Shutterstock by StorageIO)

    Some new Products Technology Services Announcements (PTSA) include:

  • Tegile – IntelliFlash HD Now Available To Enterprises Worldwide
  • Via Forbes – Competitors and Cash Bleed Put Pressure on Pure Storage
  • Via HealthCareBusiness – Philips and Amazon team up on cloud-based health record storage
  • Via Zacks – IBM Advances Hybrid Cloud Object Based Storage
  • DataONstorage expands Microsoft Hyper Converged Infrastructure platforms
  • Via ITBusinessEdge – Nimble updates All Flash Array (AFA) storage
  • Carnegie Mellon University – A Large-Scale Study of Flash Memory Failures
  • Cisco Buys Cliqr Cloud Orchestration
  • Backblaze – 2015 Hard Drive Reliability Reports and Analysis
  • Via BusinessCloudNews – Verizon Closing Down Its Public Cloud
  • Via BusinessInsider – US Government Approves Dell and EMC Deal
  • EMC and VMware announce new VCE VxRAIL Converged Solutions
  • EMC announces new IBM zSeries Mainframe enhancements for VMAX
  • EMC announces new DSSD D5 AFA and VMAX AFA enhancements
  • HPE announces enhancements to StoreEasy 1650 storage
  • Seagate now shipping world's slimmest and fastest 2TB mobile HDD
  • Via VMblog – Oracle Scoops Up Ravello to Boosts Its Public Cloud Offerings
  • Via Investors – SSD and Chinese Investments in Western Digital
  • ATTO announces 32G (e.g. Gen 6) Fibre Channel adapters
  • Google to disk vendors: Make hard drives like this, even if they lose more data
  • Google Disk for Data Centers White Paper (PDF Here)
  • View other recent news and industry trends here

    Vendors you may not have heard of

    Various vendors (and service providers) you may not know or heard about recently.

    StorageIO news (image licensed for use from Shutterstock by StorageIO)

    • SkySync – Enterprise File Sync and Share
    • SANblaze – Storage protocol emulation tools
    • OpenIT – DCIM and Data Infrastructure Management Tools
    • Infinit.sh – Decentralized Software Based File Storage Platform
    • Alluxio – Open Source Software Defined Storage Abstraction Layer
    • Genie9 – Backup and Data Protection Tools
    • E8 Storage – Software Defined Stealth Storage Startup

    Check out more vendors you may know, have heard of, or that are perhaps new on the Server StorageIO Industry Links page here (over 1,000 entries and growing).

     

    StorageIO Webinars and Industry Events

    EMCworld (Las Vegas) May 2-4, 2016

    Interop (Las Vegas) May 4-6 2016

    NAB (Las Vegas) April 19-20, 2016

    March 31, 2016 Webinar (1PM ET) – Smart Backup and World Backup Day

    February 25, 2016 Webinar (11AM PT) – Migrating to Hyper-V including from VMware

    February 24, 2016 Webinar (11AM ET) – How To Become a Data Protection Hero

    February 23, 2016 Webinar (11AM PT) – Rethinking Data Protection

    January 19, 2016 Webinar (9AM PT) – Solve Virtualization Performance Issues Like a Pro

    See more webinars and other activities on the Server StorageIO Events page here.

    Server StorageIO Industry Resources and Links

    Check out these useful links and pages:

    storageio.com/links,
    objectstoragecenter.com, storageioblog.com/data-protection-diaries-main/,
    thenvmeplace.com, thessdplace.com and storageio.com/performance among others.

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Part II – EMC DSSD D5 Direct Attached Shared AFA

    Part II – EMC DSSD D5 Direct Attached Shared AFA

    server storage I/O trends

    This is the second post in a two-part series on the EMC DSSD D5 announcement, you can read part one here.

    Let's take a closer look at how EMC DSSD D5 works, its hardware and software components, how it compares, and other considerations.

    How Does DSSD D5 Work

    Up to 48 Linux servers attach via dual port PCIe Gen 3 x8 cards that are stateless. Stateless simply means they do not contain any flash and are not used as storage cards; rather, they are essentially just NVMe adapter cards. With the first release, block, HDFS file, object and other APIs are available for Linux systems. These drivers enable the shared NVMe storage to be accessed by applications using streamlined server and storage I/O driver software stacks that cut latency. DSSD D5 is meant to be a rack scale solution, so distance is measured as inside a rack (e.g. a couple of meters).

    The 5U tall DSSD D5 supports 48 servers via a pair of I/O Modules (IOM), each with 48 ports, that in turn attach to the data plane and on to the Flash Modules (FM). Also attached to the data plane are a pair of active/active controllers that perform management tasks; however, they do not sit in the data path. This means that host clients directly access the FMs without having to go through a controller, as is the case in traditional storage systems and AFAs. The controllers only get involved for setup, configuration or other management activities; otherwise they stay out of the way, kind of like how management should function: there when you need them to help, then out of the way so productive work can be done.

    EMC DSSD shared ssd das
    Pardon the following hand drawn sketches, you can see some nice pretty diagrams, videos and other content via the EMC Pulse Blog as well as elsewhere.

    Note that the host client servers take on the responsibility for managing and coordinating data consistency, meaning data can be shared between servers assuming applicable software is used to implement integrity. This means that clustering and other software that supports shared storage can drive low latency, high performance read and write activity to the DSSD D5, as opposed to relying on the underlying storage system to handle shared storage coordination such as in a NAS. Another note is that the DSSD D5 is optimized for concurrent multi-threaded and asynchronous I/O operations, along with atomic writes for data integrity, which enable the multiple cores in today's faster processors to be more effectively leveraged.

    The data plane is a mesh, switch or expander based back plane enabling any of the north bound (host client-server) 96 (2 x 48) PCIe Gen 3 x4 ports to reach up to 36 (or as few as 18) FMs that are also dual pathed. Note that the host client-server PCIe dual port cards are Gen 3 x8 while the DSSD D5 ports are Gen 3 x4. Simple math should tell you that if you are going to have 2 x PCIe Gen 3 x4 ports running at full speed, you want a Gen 3 x8 connection inside the server to get full performance.
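
    The back of the envelope math looks something like the following Python sketch, which uses the public PCIe Gen 3 per-lane rate and 128b/130b encoding while ignoring protocol overheads; it is an approximation for illustration, not a DSSD specification.

        # Approximate PCIe Gen 3 payload bandwidth: 8 GT/s per lane with 128b/130b encoding.
        GEN3_GT_PER_LANE = 8.0
        ENCODING = 128.0 / 130.0

        def gen3_gbytes_per_sec(lanes):
            # GT/s * encoding efficiency = usable Gbit/s per lane; divide by 8 for GBytes/s
            return lanes * GEN3_GT_PER_LANE * ENCODING / 8.0

        print(f"One x4 D5 port      : ~{gen3_gbytes_per_sec(4):.1f} GB/s")
        print(f"Two x4 ports (full) : ~{2 * gen3_gbytes_per_sec(4):.1f} GB/s")
        print(f"One x8 host card    : ~{gen3_gbytes_per_sec(8):.1f} GB/s")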

    Think of the data plane as similar to how a SAS expander works in an enclosure, or a SAS switch, the difference being that it is PCIe rather than SAS or another protocol. Note that even though the terms mesh, fabric, switch and network are used, these are NOT attached to traditional LAN, SAN, NAS or other networks. Instead, this is a private "networked back plane" between the server and storage devices (e.g. FM).

    EMC DSSD D5 details

    The dual controllers (e.g. control plane) oversee flash management, including garbage collection among other tasks; storage is also thin provisioned.

    The dual controllers (active/active) are connected to each other (e.g. the control plane) as well as to the data plane; however, they do not sit in the data path. Thus this is a fast path, control path approach, meaning the controllers can get involved to do management functions when needed, and get out of the way of work when not needed. The controllers are hot-swap and provide global management functions including setting up and tearing down host client/server I/O paths, mappings and affinities. Controllers also support management of the CUBIC RAID data protection functions performed by the Flash Modules (FM).

    Other functions the controllers implement, leveraging their CPUs and DRAM, include flash translation layer (FTL) functions normally handled by SSD cards, drives or other devices. These FTL functions include wear-leveling for durability, garbage collection and voltage power management, among other tasks. The result is that the flash modules are able to spend more of their time and resources handling I/O operations rather than management tasks, compared to traditional off-the-shelf SSD drives, cards or devices.
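
    To make the FTL idea concrete, here is a deliberately simplified Python sketch of the kind of bookkeeping involved: a logical-to-physical map plus per-block wear counters that a wear-leveling policy consults. This is a toy model for illustration, not DSSD's implementation; real FTLs also handle erase-before-write, garbage collection, bad blocks and much more.

        # Toy flash translation layer: map logical blocks to the least-worn free physical block.
        class ToyFTL:
            def __init__(self, num_blocks):
                self.l2p = {}                        # logical block -> physical block
                self.wear = [0] * num_blocks         # program/erase count per physical block
                self.free = list(range(num_blocks))  # unallocated physical blocks

            def write(self, logical_block):
                # Simple wear-leveling: always pick the least-worn free block
                self.free.sort(key=lambda b: self.wear[b])
                physical = self.free.pop(0)
                old = self.l2p.get(logical_block)
                if old is not None:
                    self.free.append(old)            # old copy becomes reclaimable (no erase modeled)
                self.l2p[logical_block] = physical
                self.wear[physical] += 1
                return physical

        ftl = ToyFTL(num_blocks=8)
        for lba in [0, 1, 0, 0, 2]:
            ftl.write(lba)
        print(ftl.l2p, ftl.wear)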

    The FMs insert from the front and come in two sizes of 2TB and 4TB of raw NAND capacity. What's different about the FMs vs. some other vendors' approaches is that these are not your traditional PCIe flash cards; instead, they are custom cards with a proprietary ASIC and raw NAND dies. DRAM is used in the FM as a buffer to hold data for write optimization as well as to enhance wear-leveling and increase flash endurance.

    The result is up to thousands of NAND dies spread over up to 36 FMs and, more importantly, more performance being derived from those resources. The increased performance comes from DSSD implementing its own flash translation layer, garbage collection and power voltage management, among other techniques, to derive more useful work per watt of energy consumed.

    EMC DSSD performance claims:

    • 100 microsecond latency for small IOs
    • 100GB per second of bandwidth for large IOs
    • 10 Million small IO IOPs
    • Up to 144TB raw capacity

    How Does It Compare To Other AFA and SSD solutions

    There will be many apples to oranges comparisons as is often the case with new technologies or at least until others arrive in the market.

    Some general comparisons that may be apples to oranges as opposed to apples to apples include:

    • Shared and dense fast nand flash (eMLC) SSD storage
    • Disaggregated flash SSD storage from servers while enabling high performance, low latency
    • Eliminate pools or ponds of dedicated SSD storage capacity and performance
    • Not a SAN yet more than server-side flash or flash SSD JBOD
    • Underlying Flash Translation Layer (FTL) is disaggregated from SSD devices
    • Optimized hardware and software data path
    • Requires special server-side stateless adapter for accessing shared storage

    Some other comparisons include:

    • Hybrid and AFA shared via some server storage I/O network (good sharing, feature rich, resilient, slower performance and higher latency due to hardware, network and server I/O software stacks). For example EMC VMAX, VNX, XtremIO among others.
    • Server attached flash SSD aka server SAN (flash SSD creates islands of technology, lower resource sharing, data shuffling between servers, limited or no data services, management complexity). For example PCIe flash SSD stateful (persistent) cards where data is stored or used as a cache, along with associated management tools and drivers.
    • DSSD D5 is a rack-scale hybrid approach combining direct attached shared flash with lower latency and higher performance vs. a traditional AFA or hybrid storage array, along with better resource usage, sharing, management and performance vs. traditional dedicated server flash. It complements server-side data infrastructure and application scale-out software. Server applications can reach NVMe storage via user space with block, HDFS, Flood and other APIs.

    Using EMC DSSD D5 in possible hybrid ways

    What Happened to Server PCIe cards and Server SANs

    If you recall, a few years ago the industry rage was flash SSD PCIe server cards from vendors such as EMC, FusionIO (now part of SANdisk), Intel (still Intel), LSI (now part of Seagate), Micron (still Micron) and STEC (now part of Western Digital) among others. Server side flash SSD PCIe cards are still popular, particularly newer NVMe controller based models that use the NVMe protocol stack instead of AHCI/SATA or others.

    However, as is often the case, things evolve. While there is still a place for server-side stateful PCIe flash cards, either for data or as cache, there is also a need to combine and simplify management, as well as streamline the software I/O stacks, which is where EMC DSSD D5 comes into play. It enables consolidation of server-side SSD cards into a shared 5U chassis, allowing up to 48 dual pathed servers to access the flash pools while using streamlined server software stacks and drivers that leverage NVMe over PCIe.

    Where to learn more

    Continue reading with the following links about NVMe, flash SSD and EMC DSSD.

  • Part one of this series here and part two here.
  • Performance Redefined! Introducing DSSD D5 Rack-Scale Flash Solution (EMC Pulse Blog)
  • EMC Unveils DSSD D5: A Quantum Leap In Flash Storage (EMC Press Release)
  • EMC Declares 2016 The “Year of All-Flash” For Primary Storage (EMC Press Release)
  • EMC DSSD D5 Rack-Scale Flash (EMC PDF Overview)
  • EMC DSSD and Cloudera Evolve Hadoop (EMC White Paper Overview)
  • Software Aspects of The EMC DSSD D5 Rack-Scale Flash Storage Platform (EMC PDF White Paper)
  • EMC DSSD D5 (EMC PDF Architecture and Product Specification)
  • EMC VFCache respinning SSD and intelligent caching (Part II)
  • EMC To Acquire DSSD, Inc., Extends Flash Storage Leadership
  • Part II: XtremIO, XtremSW and XtremSF EMC flash ssd portfolio redefined
  • XtremIO, XtremSW and XtremSF EMC flash ssd portfolio redefined
  • Learn more about flash SSD here and NVMe here at thenvmeplace.com

    What this all means

    With DSSD D5, EMC now has another solution to offer clients. Granted, their challenge, as it has been over the past couple of decades, will be to educate and compensate their sales force and partners on which technology solution to position for different needs.

    On one hand, life could be simpler for EMC if they only had one platform solution that would then be the answer to every problem, something that some other vendors and startups face. Likewise, if all you have is one solution, you can try to make that solution fit different environments, or get the environment to adapt to the solution; however, having options is a good thing if those options can remove complexity along with cost while boosting productivity.

    I would like to see support for other operating systems such as Windows, particularly the future Windows Server 2016 based Nano, as well as hypervisors including VMware and Hyper-V among others. On the other hand, I also would like to see a Sharp Aquous Quattron 80" 1080p 240Hz 3D TV on my wall to watch HD videos from my DJI Phantom Drone. For now, focusing on Linux makes sense; however, it would be nice to see some more platforms supported.

    Keep an eye on the NVMe space, as we are seeing NVMe solutions appearing inside servers and storage systems, external dedicated and shared, as well as some other emerging things including NVMe over Fabric. Learn more about EMC DSSD D5 here.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    EMC DSSD D5 Rack Scale Direct Attached Shared SSD All Flash Array Part I

    EMC DSSD D5 Rack Scale Direct Attached Shared SSD All Flash Array Part I

    server storage I/O trends

    This is the first post in a two-part series pertaining to the EMC DSSD D5 announcement, you can read part two here.

    EMC announced today the general availability of their DSSD D5 Shared Direct Attached SSD (DAS) flash storage system (e.g. All Flash Array or AFA) which is a rack-scale solution. If you recall, EMC acquired DSSD back in 2014 which you can read more about here. EMC announced four configurations that include 36TB, 72TB and 144TB raw flash SSD capacity with support for up to 48 dual-ported host client servers.

    Via EMC Pulse Blog

    What Is DSSD D5

    At a high level, EMC DSSD D5 is a PCIe direct attached SSD flash storage solution that enables aggregation of disparate SSD card functionality typically found in separate servers into a shared system, without causing aggravation. DSSD D5 helps to alleviate server side I/O bottlenecks or aggravation issues that can result from the aggregation of workloads or data. Think of DSSD D5 as a shared application server storage I/O accelerator allowing up to 48 servers to access up to 144TB of raw flash SSD to support various applications that have the need for speed.

    Applications that have the need for speed, or that can benefit from less time waiting for results, are those where time is money or where boosting productivity can enable high profitability computing. This includes legacy as well as emerging applications and workloads spanning little data, big data, and big fast structured and unstructured data; from Oracle to SAS to HBase and Hadoop among others, perhaps even Alluxio.

    Some examples include:

    • Clusters and scale-out grids
    • High Performance Compute (HPC)
    • Parallel file systems
    • Forecasting and image processing
    • Fraud detection and prevention
    • Research and analytics
    • E-commerce and retail
    • Search and advertising
    • Legacy applications
    • Emerging applications
    • Structured database and key-value repositories
    • Unstructured file systems, HDFS and other data
    • Large undefined work sets
    • From batch stream to real-time
    • Reduces run times from days to hours

    Where to learn more

    Continue reading with the following links about NVMe, flash SSD and EMC DSSD.

  • Part one of this series here and part two here.
  • Performance Redefined! Introducing DSSD D5 Rack-Scale Flash Solution (EMC Pulse Blog)
  • EMC Unveils DSSD D5: A Quantum Leap In Flash Storage (EMC Press Release)
  • EMC Declares 2016 The “Year of All-Flash” For Primary Storage (EMC Press Release)
  • EMC DSSD D5 Rack-Scale Flash (EMC PDF Overview)
  • EMC DSSD and Cloudera Evolve Hadoop (EMC White Paper Overview)
  • Software Aspects of The EMC DSSD D5 Rack-Scale Flash Storage Platform (EMC PDF White Paper)
  • EMC DSSD D5 (EMC PDF Architecture and Product Specification)
  • EMC VFCache respinning SSD and intelligent caching (Part II)
  • EMC To Acquire DSSD, Inc., Extends Flash Storage Leadership
  • Part II: XtremIO, XtremSW and XtremSF EMC flash ssd portfolio redefined
  • XtremIO, XtremSW and XtremSF EMC flash ssd portfolio redefined
  • Learn more about flash SSD here and NVMe here at thenvmeplace.com

    What this all means

    Today's legacy and emerging applications have the need for speed, and where the applications themselves may not need speed, the users as well as the Internet of Things (IoT) devices that depend upon, or feed, those applications do need things to move faster. Fast applications need fast software and hardware to get the same amount of work done faster with fewer wait delays, as well as to process larger amounts of structured and unstructured little data, big data and very fast big data.

    Different applications along with the data infrastructures they rely upon including servers, storage, I/O hardware and software need to adapt to various environments, one size, one approach model does not fit all scenarios. What this means is that some applications and data infrastructures will benefit from shared direct attached SSD storage such as rack scale solutions using EMC DSSD D5. Meanwhile other applications will benefit from AFA or hybrid storage systems along with other approaches used in various ways.

    Continue reading part two of this series here including how EMC DSSD D5 works and more perspectives.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    Software Defined Storage Virtual Hard Disk (VHD) Algorithms + Data Structures

    Software Defined Storage Virtual Hard Disk (VHD) Algorithms + Data Structures

    server storage I/O trends

    For those who are into, or simply like to talk about software defined storage (SDS), APIs, Windows, Virtual Hard Disks (VHD) or VHDX, or Hyper-V among other related themes, have you ever actually looked at the specification for VHDX? If not, here is the link to the open specification that Microsoft published (this one dates back to 2012).

    Microsoft VHDX specification document
    Click on above image to download the VHDX specification from Microsoft.com

    How about Algorithms + Data Structures = Programs by Niklaus Wirth? Some of you might remember it from the past; if not, it's a timeless piece of work that has many fundamental concepts for understanding software defined anything. I came across Algorithms + Data Structures = Programs back in graduate school when I was getting my master's degree in Software Engineering at night, while working during the day in an IT environment on servers, storage, and I/O networking hardware and software.


    Algorithms + Data Structures = Programs on Amazon.com

    In addition to the Amazon.com link above, here is a link to a free (legitimate PDF) copy.

    The reason I mention Software Defined, Virtual Hard Disk and Algorithms + Data Structures = Programs is that they are all directly related, or at a minimum can help demystify things.

    Inside a VHD and VHDX

    The following is an excerpt from the Microsoft VHDX specification document mentioned above that shows a logical view of how a VHDX is defined as a data structure, as well as how algorithms should use and access them.

    Microsoft VHDX specification

    Keep in mind that anything software defined is a collection of data structures that describe how bits, bytes, blocks, blobs or other entities are organized and then accessed by algorithms that define how to use those data structures. Thus the connection to Algorithms + Data Structures = Programs mentioned above.

    In the case of a Virtual Hard Disk (VHD) or VHDX they are the data structures defined (see the specification here) and then used by various programs (applications or algorithms) such as Windows or other operating systems, hypervisors or utilities.

    A VHDX (or VMDK, VVOL, qcow or other virtual disk for that matter) is a file whose contents (e.g. the data structures) are organized per a given specification (here).

    The VHDX can then be moved around like another file and used for booting some operating systems, as well as simply mounting and using like any other disk or device.

    This also means that you can nest putting a VHDX inside of a VHDX and so forth.
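
    As a small hands-on illustration of the "data structures" point, the following Python sketch peeks at the first structure the VHDX specification defines, the File Type Identifier: an 8 byte ASCII signature ("vhdxfile") followed by a UTF-16 creator string. The file path is hypothetical, and the sketch only reads the header region, assuming the layout described in the published specification.

        # Read the VHDX File Type Identifier (per the published specification layout).
        def read_vhdx_identifier(path):
            with open(path, "rb") as f:
                signature = f.read(8)                          # expected to be b"vhdxfile"
                creator = f.read(512).decode("utf-16-le").rstrip("\x00")
            if signature != b"vhdxfile":
                raise ValueError("not a VHDX file")
            return creator

        # Example (hypothetical path):
        # print(read_vhdx_identifier(r"C:\VMs\example.vhdx"))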

    Where to learn more

    Continue reading with the following links about Virtual Hard Disks pertaining to Microsoft Windows, Hyper-V, VMware among others.

  • Algorithms + Data Structures = Programs on Amazon.com
  • Microsoft Technet Virtual Hard Disk Sharing Overview
  • Download the VHDX specification from Microsoft.com
  • Microsoft Technet Hyper-V Virtual Hard Disk (VHD) Format Overview
  • Microsoft Technet Online Virtual Hard Disk Resizing Overview
  • VMware Developer Resource Center (VDDK for vSphere 6.0)
  • VMware VVOLs and storage I/O fundamentals (Part 1)

    What this all means

    Applications and utilities or basically anything that is algorithms working with data structures is a program. Software Defined Storage or Software Defined anything involves defining data structures that describes various entities, along with the algorithms to work with and use those data structures.

    Sharpen, refresh or expand your software defined data center, software defined network, software defined storage or software defined storage management, as well as your software defined marketing game, by digging a bit deeper into the bits and bytes. Who knows, you might just go from talking the talk to walking the talk, or if nothing else, talking the talk better.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    Server StorageIO January 2016 Update Newsletter

    Volume 16, Issue I – beginning of Year (BoY) Edition

    Hello and welcome to the January 2016 Server StorageIO update newsletter.

    Is it just me, or did January disappear in a flash like data stored in non-persistent volatile DRAM memory when the power is turned off? It seems like just the other day that it was the first day of the new year and now we are about to welcome in February. Needless to say, like many of you I have been busy with various projects, many of which are behind the scenes, some of which will start appearing publicly sooner while others later.

    In terms of what I have been working on, it includes the usual performance, availability, capacity and economics (e.g. PACE) topics related to servers, storage, I/O networks, hardware, software, cloud, virtual and containers. This includes NVM as well as NVMe based SSDs, HDDs, cache and tiering technologies, as well as data protection among other things with Hyper-V, VMware and various cloud services.

    Enjoy this edition of the Server StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts, along with in the news commentary, appearing soon.

    Cheers GS

    In This Issue

  • Feature Topic
  • Industry Trends News
  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Videos and Podcasts
  • Events and Webinars
  • Recommended Reading List
  • Industry Activity Trends
  • Server StorageIO Lab reports
  • New and Old Vendor Update
  • Resources and Links
  • Feature Topic – Microsoft Nano, Server 2016 TP4 and VMware

    This month's feature topic is virtual servers and software defined storage, including those from VMware and Microsoft. Back in November I mentioned the 2016 Technical Preview 4 (e.g. TP4) along with Storage Spaces Direct and Nano. As a reminder, you can download your free trial copy of Windows Server 2016 TP4 from this Microsoft site here.

    Three good Microsoft Blog posts about storage spaces to check out include:

    • Storage Spaces Direct in Technical Preview 4 (here)
    • Hardware options for evaluating Storage Spaces Direct in Technical Preview 4 (here)
    • Storage Spaces Direct – Under the hood with the Software Storage Bus (here)

    As for Microsoft Nano, for those not familiar, it's not a new tablet or mobile device; instead, it is a very lightweight, streamlined version of Windows Server 2016. How streamlined? Much more so than the earlier Windows Server versions that simply disabled the GUI and desktop interfaces. Nano is smaller from a memory and disk storage space perspective, meaning it uses less RAM, boots faster, and has fewer moving parts (e.g. software modules) to break (or need patching).

    Specifically, Nano removes 32 bit support and anything related to the desktop and GUI interfaces, as well as the console interface. That's right, no console or virtual console to log into; WoW (32 bit Windows-on-Windows support) is gone, and access is via PowerShell or Windows Management Instrumentation (WMI) tools from remote systems. How small is it? I have a Nano instance built on a VHDX that is under a GB in size; granted, it's only for testing. The goal of Nano is to have a very lightweight, streamlined version of Windows Server that can run hundreds (or more) VMs in a small memory footprint, not to mention support lots of containers. Nano is part of Windows Server 2016 TP4; learn more about Nano here in this Microsoft post, including how to get started using it.

    Speaking of VMware, if you have not received an invite yet to their Digital Enterprise February 6, 2016 announcement event, click here to register.

    StorageIOblog Posts

    Recent and popular Server StorageIOblog posts include:

    View other recent as well as past blog posts here

    Server Storage I/O Industry Activity Trends (Cloud, Virtual, Physical)

    StorageIO news (image licensed for use from Shutterstock by StorageIO)

    Some new Products Technology Services Announcements (PTSA) include:

    • EMC announced Elastic Cloud Storage (ECS) V2.2. A main theme of V2.2 is that besides being the 3rd generation of EMC object storage (dating back to Centera, then Atmos), ECS is also where the functionality of Centera, Atmos and other products converges. ECS provides object storage access along with HDFS (Hadoop and Hortonworks certified) and traditional NFS file access.

      Object storage access includes Amazon S3, OpenStack Swift, ATMOS and CAS (Centera). In addition to the access, added Centera functionality for regulatory compliance has been folded into the ECS software stack. For example, ECS is now compatible with SEC 17 a-4(f) and CFTC 1.3(b)-(c) regulations protecting data from being overwritten or erased for a specified retention period. Other enhancements besides scalability, resiliency and ease of use include meta data and search capabilities. You can download and try ECS for non-production workloads with no capacity or functionality limitations from EMC here.
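
    Since ECS exposes an S3 compatible interface, applications written against the Amazon S3 API can generally be pointed at an ECS endpoint. The following minimal Python sketch uses boto3 with a hypothetical endpoint and credentials purely for illustration; the actual endpoint, port and authentication details depend on the ECS deployment.

        import boto3

        # Hypothetical ECS object endpoint and object-user credentials (illustration only)
        s3 = boto3.client(
            "s3",
            endpoint_url="https://ecs.example.com:9021",
            aws_access_key_id="ecs-object-user",
            aws_secret_access_key="ecs-secret-key",
        )

        s3.create_bucket(Bucket="demo-bucket")
        s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello ECS")
        objects = s3.list_objects_v2(Bucket="demo-bucket").get("Contents", [])
        print([obj["Key"] for obj in objects])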

    View other recent news and industry trends here

    StorageIO Commentary in the news

    StorageIO news (image licensed for use from Shutterstock by StorageIO)
    Recent Server StorageIO commentary and industry trends perspectives about news, activities, tips, and announcements. In case you missed them from last month:

    • TheFibreChannel.com: Industry Analyst Interview: Greg Schulz, StorageIO
    • EnterpriseStorageForum: Comments Handling Virtual Storage Challenges
    • PowerMore (Dell): Q&A: When to implement ultra-dense storage

    View more Server, Storage and I/O hardware as well as software trends comments here

    Vendors you may not have heard of

    Various vendors (and service providers) you may not know or heard about recently.

    • Datrium – DVX and NetShelf server software defined flash storage and converged infrastructure
    • DataDynamics – StorageX is the software solution for enabling intelligent data migration, including from NetApp OnTap 7 to Clustered OnTap, as well as to and from EMC among other NAS file serving solutions.
    • Paxata – Little and Big Data management solutions

    Check out more vendors you may know, have heard of, or that are perhaps new on the Server StorageIO Industry Links page here (over 1,000 entries and growing).

    StorageIO Tips and Articles

    Recent Server StorageIO articles appearing in different venues include:

    • InfoStor:  Data Protection Gaps, Some Good, Some Not So Good

    And in case you missed them from last month

    • IronMountain:  5 Noteworthy Data Privacy Trends From 2015
    • Virtual Blocks (VMware Blogs):  Part III EVO:RAIL – When And Where To Use It?
    • InfoStor:  Object Storage Is In Your Future
    • InfoStor:  Water, Data and Storage Analogy

    Check out these resources and links technology, techniques, trends as well as tools. View more tips and articles here

    StorageIO Videos and Podcasts

    StorageIO podcasts are also available at StorageIO.tv

    StorageIO Webinars and Industry Events

    EMCworld (Las Vegas) May 2-4, 2016

    Interop (Las Vegas) May 4-6 2016

    NAB (Las Vegas) April 19-20, 2016

    TBA – March 31, 2016

    Redmond Magazine Gridstore (How to Migrate from VMware to Hyper-V) February 25, 2016 Webinar (11AM PT)

    TBA – February 23, 2016

    Redmond Magazine and Dell Foglight – Manage and Solve Virtualization Performance Issues Like a Pro (Webinar 9AM PT) – January 19, 2016

    See more webinars and other activities on the Server StorageIO Events page here.

    From StorageIO Labs

    Research, Reviews and Reports

    Quick Look: What’s the Best Enterprise HDD for a Content Server?
    Which enterprise HDD for content servers

    Insight for Effective Server Storage I/O decision-making
    This StorageIO® Industry Trends Perspectives Solution Brief and Lab Review (compliments of Seagate and Servers Direct) looks at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate (www.seagate.com) Enterprise Hard Disk Drive (HDDs).

    I was given the opportunity to do some hands-on testing, running different application workloads with a 2U content solution platform along with various Seagate Enterprise 2.5” HDDs. This includes Seagate’s Enterprise Performance HDDs with the enhanced caching feature.

    Read more in this Server StorageIO industry Trends Perspective white paper and lab review.

    Looking for NVM including SSD information? Visit the Server StorageIO www.thessdplace.com and www.thenvmeplace.com micro sites. View other StorageIO lab review and test drive reports here.

    Server StorageIO Recommended Reading List

    The following are various recommended readings including books, blogs and videos. If you have not done so recently, also check out the Intel Recommended Reading List (here) where you will find a couple of mine as well as books from others. For this month's recommended reading, it's a blog site. If you have not visited Duncan Epping's (@DuncanYB) Yellow-Bricks site, you should, particularly if you are interested in virtualization, high availability and related topical themes.

    Seven Databases in Seven Weeks, a guide to NoSQL, via Amazon.com

    Granted, Duncan, being a member of the VMware CTO office, covers a lot of VMware related themes; however, being the author of several books, he also covers non-VMware topics. Duncan recently did a really good and simple post about rebuilding a failed disk in a VMware VSAN vs. a legacy RAID or erasure code based storage solution.

    One of the things that struck me as important in what Duncan wrote about is avoiding apples to oranges comparisons. What I mean by this is that it is easy to compare traditional parity based or mirror type solutions that chunk or shard data on a KByte basis spread over disks, vs. data that is chunked or sharded on a GByte (or larger) basis over multiple servers and their disks. Anyway, check out Duncan's site and recent post by clicking here.

    Server StorageIO Industry Resources and Links

    Check out these useful links and pages:

    storageio.com/links
    objectstoragecenter.com
    storageioblog.com/data-protection-diaries-main/
    storageperformance.us
    thenvmeplace.com
    thessdplace.com
    storageio.com/performance
    storageio.com/raid
    storageio.com/ssd

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved