Part 3 – Which HDD for content applications – Test Configuration


Updated 1/23/2018


Insight for effective server storage I/O decision making
Server StorageIO Lab Review

Which enterprise HDD to use for content servers

This is the third in a multi-part series (read part two here) based on a hands-on lab report white paper I did compliments of Servers Direct and Seagate, which you can read in PDF form here. The focus is looking at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drives (HDDs). In this post the focus expands to defining the hardware and software as well as configuring the test environment along with application workloads.

Defining the Hardware and Software Environment

Servers Direct content platforms are software defined and hardware defined to your specific solution needs. For my test-drive, I used a pair of 2U Content Solution platforms, one as a client System Test Initiator (STI) (3), the other as the server System Under Test (SUT) shown in Figure-1. With the STI configured and the SUT set up, Seagate Enterprise class 2.5” 12Gbps SAS HDDs were added to the configuration.

(Note 3) System Test Initiator (STI) was hardware defined with dual Intel Xeon E5-2695 v3 (2.30 GHz) processors and 32GB RAM, running Windows Server 2012 R2 with two network connections to the SUT. Network connections from the STI to SUT included an Intel GbE X540-AT2 as well as an Intel XL710 Q2 40 GbE Converged Network Adapter (CNA). In addition to software defining the STI with Windows Server 2012 R2, Dell Benchmark Factory (V7.1 64-bit 496), part of the Database Administrators (DBA) Toad Tools (including free versions), was also used. For those familiar with HammerDB, Sysbench among others, Benchmark Factory is an alternative that supports various workloads and database connections with robust reporting, scripting and automation. Other installed tools included Spotlight on Windows, Iperf 2.0.5 for generating network traffic and reporting results, as well as Vdbench with various scripts.

SUT setup (4) included four Enterprise 10K and two 15K Performance drives with the enhanced performance caching feature enabled, along with two Enterprise Capacity 2TB HDDs, all attached to an internal 12Gbps SAS RAID controller.

(Note 4) System Under Test (SUT) with dual Intel Xeon E5-2697 v3 (2.60 GHz) processors providing 54 logical processors, 64GB of RAM (expandable to 768GB with 32GB DIMMs, or 3TB with 128GB DIMMs) and two network connections. Network connections from the STI to SUT consisted of an Intel 1 GbE X540-AT2 as well as an Intel XL710 Q2 40 GbE CNA. The GbE LAN connection was used for management purposes while the 40 GbE was used for data traffic. The system disk was a 6Gbps SATA flash SSD. Seagate Enterprise class HDDs were installed into the 16 available 2.5” small form factor (SFF) drive slots. The eight (left most) drive slots were connected to an Intel RMS3CC080 12 Gbps SAS RAID internal controller. The “Blue” drives in the middle were connected to both an NVMe PCIe card and the motherboard 6 Gbps SATA controller using an SFF-8637 connector. The four right most drives were also connected to the motherboard 6 Gbps SATA controller.

System Test Configuration
Figure-1 STI and SUT hardware as well as software defined test configuration

Five 6 Gbps SATA Enterprise Capacity 2TB HDDs were set up using Microsoft Windows as a spanned volume. The system disk was a 6Gbps flash SSD, and an NVMe flash SSD was used for database temp space.
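As an illustrative aside (my own arithmetic using vendor decimal capacities, not figures from the lab report, and assuming the four 10K drives in RAID 10), the approximate usable capacity of each drive group just described can be sketched in a few lines of Python:

```python
# Approximate usable capacity (vendor TB) for the SUT drive groups.
# Illustrative only; formatted capacity will be somewhat less.

def raid10_usable_tb(drives, size_tb):
    """RAID 10: mirrored pairs striped together, half the raw capacity."""
    return drives * size_tb / 2

def raid1_usable_tb(size_tb):
    """RAID 1: a two-drive mirror yields one drive's capacity."""
    return size_tb

def span_usable_tb(drives, size_tb):
    """Windows spanned volume: simple concatenation, no redundancy."""
    return drives * size_tb

print(raid10_usable_tb(4, 1.8))  # four 10K 1.8TB drives in RAID 10
print(raid1_usable_tb(0.6))      # two 15K 600GB drives in RAID 1
print(raid1_usable_tb(2.0))      # two SAS Enterprise Capacity 2TB in RAID 1
print(span_usable_tb(5, 2.0))    # five SATA 2TB drives as a spanned volume
```

Note the spanned volume trades redundancy for capacity, which is fine for scratch or re-creatable content, less so for primary data.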

What About NVM Flash SSD?

NAND flash and other Non-Volatile Memory (NVM) and SSD technologies complement content solutions. A little bit of flash SSD in the right place can have a big impact. The focus for these tests is HDDs, however some flash SSDs were used as system boot and database temp (e.g. tempdb) space. Refer to StorageIO Lab reviews and visit www.thessdplace.com.

Seagate Enterprise HDD’s Used During Testing

Various Seagate Enterprise HDD specifications used in the testing are shown below in Table-1.

 

Qty | Seagate HDD | Capacity | RPM | Interface | Size | Model | Servers Direct Price Each | Configuration
4 | Enterprise 10K Performance | 1.8TB | 10K with cache | 12 Gbps SAS | 2.5” | ST1800MM0128 with enhanced cache | $875.00 USD | HW(5) RAID 10 and RAID 1
2 | Enterprise Capacity 7.2K | 2TB | 7.2K | 12 Gbps SAS | 2.5” | ST2000NX0273 | $399.00 USD | HW RAID 1
2 | Enterprise 15K Performance | 600GB | 15K with cache | 12 Gbps SAS | 2.5” | ST600MX0082 with enhanced cache | $595.00 USD | HW RAID 1
5 | Enterprise Capacity 7.2K | 2TB | 7.2K | 6 Gbps SATA | 2.5” | ST2000NX0273 | $399.00 USD | SW(6) RAID Span Volume

Table-1 Seagate Enterprise HDD specifications and Servers Direct pricing

URLs for additional Servers Direct content platform information:
https://serversdirect.com/solutions/content-solutions
https://serversdirect.com/solutions/content-solutions/video-streaming
https://www.serversdirect.com/File%20Library/Data%20Sheets/Intel-SDR-2P16D-001-ds2.pdf

URLs for additional Seagate Enterprise HDD information:
https://serversdirect.com/Components/Drives/id-HD1558/Seagate_ST2000NX0273_2TB_Hard_Drive

https://serversdirect.com/Components/Drives/id-HD1559/Seagate_ST600MX0082_SSHD

Seagate Performance Enhanced Cache Feature

The Enterprise 10K and 15K Performance HDDs tested had the enhanced cache feature enabled. This feature provides a “turbo” boost like acceleration for both read and write I/O operations. HDDs with the enhanced cache feature leverage the fact that some NVM such as flash in the right place can have a big impact on performance (7).

In addition to their performance benefit, combining a best of both worlds or hybrid storage model (combining flash with HDDs along with software defined cache algorithms), these devices are “plug-and-play”. By being “plug-and-play”, no extra special adapters, controllers, device drivers, tiering or cache management software tools are required.
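As a rough back-of-the-envelope sketch (my own simple hit-ratio model, not Seagate's actual caching algorithm, and the latency figures are generic assumptions), the kind of benefit a small amount of flash in the right place can provide looks like this:

```python
# Simple effective-latency model for a flash-cached (hybrid) HDD.
# flash_ms and hdd_ms are illustrative assumptions, not measured values.

def effective_latency_ms(hit_ratio, flash_ms=0.1, hdd_ms=7.0):
    """Average I/O latency weighted by the cache hit ratio."""
    return hit_ratio * flash_ms + (1.0 - hit_ratio) * hdd_ms

for hr in (0.0, 0.5, 0.9):
    print(f"hit ratio {hr:.0%}: {effective_latency_ms(hr):.2f} ms average")
```

Even a modest hit ratio cuts the average latency substantially, which is why a little NVM in the right place can have a big impact.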

(Note 5) Hardware (HW) RAID using Intel server on-board LSI based 12 Gbps SAS RAID card, RAID 1 with two (2) drives, RAID 10 with four (4) drives. RAID configured in write-through mode with default stripe / chunk size.

(Note 6) Software (SW) RAID using Microsoft Windows Server 2012 R2 (span). Hardware RAID used write-through cache (e.g. no buffering) with read-ahead enabled and a default 256KB stripe/chunk size.

(Note 7) Refer to Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy

The Seagate Enterprise Performance 10K and 15K with the enhanced cache feature are a good example of how there is more to performance in today's HDDs than simply comparing RPMs, drive form factor or interface.

Where To Learn More

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Careful and practical planning are key steps for testing various resources, as well as for aligning the applicable tools and configurations to meet your needs.

Continue reading part four of this multi-part series here where the focus expands to database application workloads.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

VMware vCloud Air Server StorageIOlab Test Drive with videos


Recently I was invited by VMware vCloud Air to do a free hands-on test drive of their actual production environment. Some of you may already be using VMware vSphere, vRealize and other software defined data center (SDDC) aka Virtual Server Infrastructure (VSI) or Virtual Desktop Infrastructure (VDI) tools among others. Likewise some of you may already be using one of the many cloud compute or Infrastructure as a Service (IaaS) offerings such as Amazon Web Services (AWS) Elastic Compute Cloud (EC2), Centurylink, Google Cloud, IBM Softlayer, Microsoft Azure, Rackspace or Virtustream (being bought by EMC) among many others.

VMware vCloud Air provides a platform similar to those just mentioned among others for your applications and their underlying resource needs (compute, memory, storage, networking) to be fulfilled. In addition, it should not be a surprise that VMware vCloud Air shares many common themes, philosophies and user experiences with the traditional on-premises based VMware solutions you may be familiar with.

VMware vCloud Air overview

You can give VMware vCloud Air a trial for free while the offer lasts by clicking here (service details here). Basically, if you click on the link and register a new account for using VMware vCloud Air, they will give you up to $500 USD in service credits to use in the real production environment while the offer lasts, which if I recall correctly is through the end of June 2015.

Server StorageIO test drive VMware vCloud Air video I
Click on above image to view video part I

Server StorageIO test drive VMware vCloud Air part II
Click on above image to view video part II

What this means is that you can go and set up some servers with as many CPUs or cores, memory, Hard Disk Drive (HDD) or flash Solid State Device (SSD) storage, and external IP networks using various operating systems (CentOS, Ubuntu, Windows 2008, 2012, 2012 R2) for free, or until you use up the service credits.

Speaking of which, let me give you a bit of a tip or hint: even though you can get free time, if you provision a fast server with lots of fast SSD storage and leave it sitting idle overnight or over a weekend, you will chew up your free credits rather fast. So the tip, which should be common sense, is that if you are going to do some proof of concepts and then leave things alone for a while, power the virtual cloud servers off to stretch your credits further. On the other hand, if you have something that you want to run on a fast server with fast storage over a weekend or longer, give that a try, just pay attention to your resource usage and possible charges should you exhaust your service credits.
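To see why an idle weekend matters, here is a quick hypothetical burn-rate calculation; the hourly rate is an assumed illustrative figure, not actual vCloud Air pricing:

```python
# Hypothetical service-credit burn estimate for an idle cloud server.
# rate_per_hour is an assumed illustrative number, not real pricing.

def credits_used(hours, rate_per_hour):
    return hours * rate_per_hour

weekend_hours = 3 * 24  # roughly Friday evening through Monday morning
rate = 2.50             # assumed USD/hour for a fast server with SSD storage
print(f"Idle weekend cost: ${credits_used(weekend_hours, rate):.2f} of $500 credits")
```

At an assumed $2.50/hour, one idle weekend would consume over a third of the $500 credit, hence the power-off tip.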

My Server StorageIO test drive mission objective

For my test drive, I created a new account by using the above link to get the service credits. Note that you can use your regular VMware account with vCloud Air, however you won't get the free service credits. So while it is a few minutes of extra work, the benefit was worth it vs. simply using my existing VMware account and racking up more cloud services charges on my credit card. As part of this Server StorageIOlab test drive, I created two companion videos (part I here and part II here) that you can view to follow along and get a better idea of how vCloud works.

VMware vCloud Air overview
Phase one, create the virtual data center, database server, client servers and first setup

My goal was to set up a simple Virtual Data Center (VDC) that would consist of five Windows 2012 R2 servers: one a MySQL database server, with the other four being client application servers. You can download MySQL from Oracle here as well as via other sources. For applications, to simplify things I used HammerDB as well as Benchmark Factory, which is part of the Quest Toad tool set for database admins. You can download a free trial copy of Benchmark Factory here, and HammerDB here. Another tool that I used for monitoring the servers is Spotlight on Windows (SoW), which is also free here. Speaking of tools, here is a link to various server and storage I/O performance as well as monitoring tools.


Setting up a virtual data center vdc
Phase one steps and activity summary

Summary of phase one of vdc
Recap of what was done in phase one, watch the associated video here.

After the initial setup (e.g. part I video here), the next step was to add some more virtual machines and take a closer look at the environment. Note that most of the work in setting up this environment was Windows, MySQL, Hammerdb, Benchmark Factory, Spotlight on Windows along with other common tools so their installation is not a focus in these videos or this post, perhaps a future post will dig into those in more depth.

Summary of phase two of the vdc
What was done during phase II (view the video here)

VMware vCloud Air vdc test drive

There is much more to VMware vCloud Air, and on their main site there are many useful links including overviews, how-to tutorials, product and service offering details and much more here. Besides paying attention to your resource usage and avoiding being surprised by service charges, two other tips I can pass along that are also mentioned in the videos (here and here): first, pay attention to what region you set up your virtual data centers in; second, have your network thought out ahead of time to streamline setting up the NAT, firewall and gateway configurations.

Where to learn more

Learn more about cloud, virtual server, storage and related topics, themes, trends, tools and technologies via other posts and pages on storageioblog.com.


What this all means and wrap-up

Overall I like the VMware vCloud Air service, which if you are VMware focused will be a familiar cloud option including integration with vCloud Director and other tools you may already have in your environment. Even if you are not familiar with VMware vSphere and associated vRealize tools, the vCloud service is intuitive enough that you can be productive fairly quickly. On one hand vCloud Air does not have the extensive menu of service offerings to choose from such as with AWS, Google, Azure or others; however, that also means a simpler menu of options to choose from, which simplifies things.

I had wanted to spend some time actually using vCloud and the offer to use some free service credits in the production environment made it worth making the time to actually setup some workloads and do some testing. Even if you are not a VMware focused environment, I would recommend giving VMware vCloud Air a test drive to see what it can do for you, as opposed to what you can do for it…

Ok, nuff said for now

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio


How many I/O iops can flash SSD or HDD do?


Updated 2/10/2018

A common question I run across is how many I/O operations per second (IOPS) a flash SSD or HDD storage device or system can do or deliver.

The answer is, or should be: it depends.

This is the first of a two-part series looking at storage performance, specifically in the context of drive or device (e.g. media) characteristics across HDD, HHDD and SSD that can be found in cloud, virtual, and legacy environments. In this first part the focus is on putting some context around drive or device performance, with the second part looking at some workload characteristics (e.g. benchmarks).

What about cloud, tape summit resources, storage systems or appliances?

Let's leave those for a different discussion at another time.

Getting started

Part of my interest in tools, metrics that matter, measurements, analysis and forecasting ties back to having been a server, storage and I/O performance and capacity planning analyst when I worked in IT. Another aspect ties back to also having been a sys admin as well as a business applications developer when on the IT customer side of things. This was followed by switching over to the vendor world, involved with among other things competitive positioning, customer design configuration, validation, simulation and benchmarking of HDD and SSD based solutions (e.g. life before becoming an analyst and advisory consultant).

Btw, if you happen to be interested in learning more about server, storage and I/O performance and capacity planning, check out my first book Resilient Storage Networks (Elsevier), which has a bit of information on it. There is also coverage of metrics and planning in my two other books, The Green and Virtual Data Center (CRC Press) and Cloud and Virtual Data Storage Networking (CRC Press). I have some copies of Resilient Storage Networks available at a special reader or viewer rate (essentially shipping and handling). If interested, drop me a note and I can fill you in on the details.

There are many rules of thumb (RUT) when it comes to metrics that matter such as IOPS, some of which are older, while others may be guessed or measured in different ways. However, the answer is that it depends on many things, ranging from whether it is a standalone hard disk drive (HDD), Hybrid HDD (HHDD) or Solid State Device (SSD), or whether it is attached to a storage system, appliance, or RAID adapter card among others.

Taking a step back, the big picture

hdd image
Various HDD, HHDD and SSD’s

Server, storage and I/O performance and benchmark fundamentals

Even if just looking at an HDD, there are many variables, ranging from the rotational speed or Revolutions Per Minute (RPM), to the interface including 1.5Gb, 3.0Gb, 6Gb or 12Gb SAS or SATA, or 4Gb Fibre Channel. Simply using a RUT or number based on RPM can cause issues, particularly with 2.5 vs. 3.5 inch or enterprise vs. desktop drives. For example, some current generation 10K 2.5 inch HDDs can deliver the same or better performance than an older generation 3.5 inch 15K. Other drive factors (see this link for HDD fundamentals) include physical size such as 3.5 inch or 2.5 inch small form factor (SFF), enterprise, desktop or consumer class, and the amount of drive level cache (DRAM). Space capacity of a drive can also have an impact, such as whether all or just a portion of a large or small capacity device is used. Not to mention what the drive is attached to, ranging from an internal SAS or SATA drive bay, USB port, HBA or RAID adapter card, or a storage system.
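To illustrate why RPM alone is a poor predictor, here is a common back-of-the-envelope estimate of small random I/O capability from average seek time plus rotational latency (half a revolution). The seek times below are generic per-class assumptions, not specifications for any particular drive discussed here:

```python
# Rule-of-thumb random IOPS estimate for an HDD:
#   IOPS ~ 1000 / (average seek ms + average rotational latency ms)

def rotational_latency_ms(rpm):
    """Average rotational latency is half a revolution, in milliseconds."""
    return (60_000.0 / rpm) / 2

def estimated_iops(rpm, avg_seek_ms):
    return 1000.0 / (avg_seek_ms + rotational_latency_ms(rpm))

# Assumed (generic) average seek times per drive class:
for rpm, seek_ms in ((7200, 8.5), (10_000, 4.0), (15_000, 3.5)):
    print(f"{rpm} RPM, {seek_ms} ms seek: ~{estimated_iops(rpm, seek_ms):.0f} IOPS")
```

A 10K drive with a fast actuator can rival an older 15K drive with slower seeks, which is exactly the 2.5 vs. 3.5 inch point above; drive-level caching changes the picture further still.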

disk iops
HDD fundamentals

How about benchmark and performance tricks for marketing or comparison, including delayed, deferred or asynchronous writes vs. synchronous or actually committed data to devices? Let's not forget about short stroking (only using a portion of a drive for better IOPS) or even long stroking (to get better bandwidth leveraging spiral transfers) among others.

Almost forgot, there are also thick, standard, thin and ultra thin drives in 2.5 and 3.5 inch form factors. What's the difference? The number of platters and read/write heads. Look at the following image showing various thickness 2.5 inch drives that have various numbers of platters to increase space capacity in a given density. Want to take a wild guess as to which one has the most space capacity in a given footprint? Also want to guess which type I use for removable disk based archives along with for onsite disk based backup targets (complementing my offsite cloud backups)?

types of disks
Thick, thin and ultra thin devices

Beyond physical and configuration items, there are also logical configuration items including the type of workload: large or small I/Os, random, sequential, reads, writes or mixed (various random, sequential, read, write, large and small I/O combinations). Other considerations include file system or raw device, number of workers or concurrent I/O threads, and the size of the target storage space area, to determine the impact of any locality of reference or buffering. Some other items include how long the test or workload simulation ran, and whether the device was new or worn in before use, among other items.
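To make those knobs concrete, here is a minimal random-read sketch in Python exposing the same kinds of parameters real workload generators provide (I/O size, target span, run duration). It is illustrative only, measuring reads against a file through the OS rather than raw device performance:

```python
import os
import random
import time

def random_read_workload(path, io_size=4096,
                         span_bytes=8 * 1024 * 1024, duration_s=1.0):
    """Issue random reads of io_size within the first span_bytes of a
    file and return achieved IOPS. Real tools add queue depth, multiple
    workers, read/write mixes, ramp-up time and much more."""
    ios = 0
    deadline = time.monotonic() + duration_s
    with open(path, "rb", buffering=0) as f:
        max_offset = max(span_bytes - io_size, 0)
        while time.monotonic() < deadline:
            f.seek(random.randrange(max_offset + 1))
            f.read(io_size)
            ios += 1
    return ios / duration_s

# Example: build a small scratch file, run for one second, report IOPS.
with open("scratch.bin", "wb") as f:
    f.write(os.urandom(8 * 1024 * 1024))
print(f"~{random_read_workload('scratch.bin'):.0f} IOPS (cached file reads)")
os.remove("scratch.bin")
```

Note how shrinking span_bytes (locality) or growing io_size changes the result, which is precisely why reported IOPS numbers mean little without the workload parameters behind them.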

Tools and the performance toolbox

Then there are the various tools for generating I/Os or workloads along with recording metrics such as reads, writes, response time and other information. Some examples (a mix of free and for-fee) include Bonnie, Iometer, Iorate, IOzone, Vdbench, TPC, SPC, Microsoft ESRP, SPEC and netmist, Swifttest, VMmark, DVD Store and PCMark 7 among many others. Some are focused just on the storage system and I/O path while others are application specific, thus exercising servers, storage and I/O paths.

performance tools
Server, storage and IO performance toolbox

Having used Iometer since the late 90s, it has its place and is popular given its ease of use. Iometer is also long in the tooth and has its limits, including not much if any new development; nevertheless, I have it in the toolbox. I also have Futuremark PCMark 7 (full version), which it turns out has some interesting abilities to do more than exercise an entire Windows PC. For example, PCMark can target a secondary drive for doing I/O.

PCMark can be handy for spinning up (with VMware or other tools) lots of virtual Windows systems pointing to a NAS or other shared storage device doing real world type activity. Something that could be handy for testing or stressing virtual desktop infrastructures (VDI) along with other storage systems, servers and solutions. I also have Vdbench among other tools in the toolbox, including Iorate, which was used to drive the workloads shown below.

What I look for in a tool is how extensible the scripting capabilities are for defining various workloads, along with the capabilities of the test engine. A nice GUI is handy, which makes Iometer popular, and yes there are script capabilities with Iometer. That is also where Iometer is long in the tooth compared to some of the newer generation of tools that have more emphasis on extensibility vs. ease of use interfaces. This also assumes knowing what workloads to generate vs. simply kicking off some IOPS using default settings to see what happens.

Another handy type of tool is one for recording what's going on with a running system, including I/Os, reads, writes, bandwidth or transfers, random and sequential among other things. This is where, when needed, I turn to something like HiMon from HyperIO; if you have not tried it, get in touch with Tom West over at HyperIO and tell him StorageIO sent you to get a demo or trial. HiMon is what I used for doing start, stop and boot testing among others, being able to see I/Os at the Windows file system level (or below), including very early in the boot or shutdown phase.

Here is a link to some other things I did awhile back with HiMon to profile some Windows and VDI activity.

What’s the best tool or benchmark or workload generator?

The one that meets your needs, usually your applications or something as close as possible to it.

disk iops
Various 2.5 and 3.5 inch HDD, HHDD, SSD with different performance

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

That depends, however continue reading part II of this series to see some results for various types of drives and workloads.

Ok, nuff said, for now.

Gs


What records will EMC break in NYC January 18, 2011?


In case you have not seen or heard, EMC is doing an event next week in New York City (NYC) at the AXA Equitable Center, winter weather snow storm clouds permitting (and adequate tools or technologies to deal with the snow removal), that has a theme around breaking records. If you have yet to see any of the advertisements, blogs, tweets, Facebook, FriendFeed, Twitter, YouTube or other media messages, here (and here and here) are a few links to learn more as well as register to view the event.

Click on the above image to see more

There is already speculation along with IT industry wiki leaks of what will be announced or talked about next week that you can google or find at some different venues.

The theme of the event is breaking records.

What might we hear?

In addition to the advisor, author, blogger and consultant hats that I wear, I'm also in EMC's analyst relations program and as such under NDA; consequently, as to what the actual announcement will be next week, no comment for now. BTW, I also wear other hats including one from Boeing, even though I often fly on Airbus products as well.

If it's not Boeing I'm not going, except I do also fly Airbus, Embraer and Bombardier products
Other hats I wear

However, how about some fun as to what might be covered at next week's event without getting into a wiki leak situation?

  • A no brainer would be product (hardware, software, services) related, as it is mid January and if you have been in the industry for more than a year or two, you might recall that EMC tends to do a mid winter launch around this time of year along with sometimes an early summer refresh. Guess what time of the year it is.
  • I'm guessing lots of superlatives, perhaps at a record breaking pace (e.g. revolutionary first, explosive growth, exponential explosive growth, perfect storm among others that could be candidates for the Storagebrain wall of fame or shame)
  • Maybe we will even hear that EMC has set a new record for the number of members in Chad's army aka the vSpecialists focused on vSphere related topics, along with a growing (quietly) number of Microsoft Hyper-V specialists.
  • That EMC has a record number of twitter tweeps engaged in conversations (or debates) with different audiences, collectives, communities, competitors, customers, individuals, organizations, partners or venues among others.
  • Possibly that their involvement in the CDP (Carbon Disclosure Project) has resulted in enough savings to offset the impact of hosting the event, making it carbon and environment neutral. After all, we already know that EMC has been into CDP as in Continual or Constant Data Protection, as well as Complete or Comprehensive Data Protection, along with Cloud Data Protection, not to mention Common Sense Data Protection (CSDP) for some time now.
  • Perhaps something around the number of acquisitions, patents, platforms, products and partners they have amassed recently.
  • For investors, wishful thinking that they will be moving their stock into record territories.
  • I'm also guessing we will hear or see a record number of tweets, posts, videos and stories.
  • To be fair and balanced, I'm also expecting a record number of counter tweets, counter posts, counter videos and counter stories coming out of the event.

Some records I would like to see EMC break, however I'm not going to hold my breath, at least for next week, include:

  • Announcement of upping the game in performance benchmarking battles with record setting or breaking various SPC benchmark results submitted on their own (instead of via a competitor or here) in different categories of block storage devices along with entries for SSD based, clustered and virtualized. Of course we would expect to hear how those benchmarks and workload simulations really do not matter which would be fine, at least they would have broken some records.
  • Announcement of having shipped more hard disk drives (HDD) than anyone else in conjunction with shipping more storage than anyone else. Despite HDDs being continually declared dead (they're not) and SSD gaining traction, EMC would have a record breaking leg to stand on if they qualify the amount of storage shipped as external or shared or networked (SAN or NAS) as opposed to collective totals (e.g. HP with servers and storage among others).
  • Announcement that they are buying Cisco, or Cisco is buying them, or that they and Cisco are buying Microsoft and Oracle.
  • Announcement of being proud of the record setting season of the Patriots, devastated at losing a close and questionable game to the NY Jets, wishing them well in the 2010 NFL Playoffs (I'm just sayin…).
  • Announcement of being the first vendor and solution provider to establish SaaS, PaaS, IaaS, DaaS and many other XaaS offerings via their out of this world new moon base (plans underway for Mars as part of a federated offering).
  • Announcement that Fenway park will be rebranded as the house that EMC built (or rebuilt).

Disclosure: I will be in NYC on Tuesday the 18th as one of EMCs many guests that they have picked up airfare and lodging, thanks to Len Devanna and the EMC social media crew for reaching out and extending the invitation.

Other guests of the event will include analysts, advisors, authors, bloggers, beat writers, consultants, columnists, customers, editors, media, paparazzi, partners, press, protesters (hopefully polite ones), publishers, pundits, twitter tweeps and writers among others.

I wonder if there will also be a record number of disclosures made by others attending the event as guests of EMC?

More after (or maybe during) the event.

Ok, nuff said.

Cheers gs


Storage Performance Council releases Component (SPC-1C and SPC-2C) results


For those interested in benchmarking activities or debates, the Storage Performance Council (aka SPC) released their new component level tests along with some first results for disk drives.

What differentiates the component SPC tests (e.g. SPC-1C and SPC-2C) from the total system tests (SPC-1 and SPC-2) is that only specific components are tested, such as a disk drive or adapter, without being part of a total solution. If you recall, back in January of this year NetApp stirred things up a bit by submitting SPC results for themselves as well as for EMC.

Here’s a link to a presentation from SPC about SPC-1C and SPC-2C.

In addition to SPC-1C and SPC-2C, there are also several new SPC-1 and SPC-2 submissions from a long list of vendors and their latest technologies, with more in the works.

Cheers
gs