Server and Storage I/O Benchmarking 101 for Smarties

Server Storage I/O Benchmarking 101 for Smarties or dummies ;)


This is the first of a series of posts and links to resources on server storage I/O performance and benchmarking (view more and follow-up posts here).

The best I/O is the one that you do not have to do; the second best is the one with the least impact and the lowest overhead.


Drew Robb (@robbdrew) has a Data Storage Benchmarking Guide article over at Enterprise Storage Forum that provides a good framework and summary quick guide to server storage I/O benchmarking.

Via Drew:

Data storage benchmarking can be quite esoteric in that vast complexity awaits anyone attempting to get to the heart of a particular benchmark.

Case in point: The Storage Networking Industry Association (SNIA) has developed the Emerald benchmark to measure power consumption. This invaluable benchmark has a vast amount of supporting literature. That so much could be written about one benchmark test tells you just how technical a subject this is. And in SNIA’s defense, it is creating a Quick Reference Guide for Emerald (coming soon).

But rather than getting into the nitty-gritty nuances of the tests, the purpose of this article is to provide a high-level overview of a few basic storage benchmarks, what value they might have and where you can find out more. 

Read more here including some of my comments, tips and recommendations.

Drew provides a good summary and overview in his article, which is a great opener for this first post in a series on server storage I/O benchmarking and related resources.

You can think of this series (along with Drew’s article) as server storage I/O benchmarking fundamentals (e.g. 101) for smarties (e.g. non-dummies ;) ).

Note that even if you are not a server, storage or I/O expert, you can still be considered a smarty vs. a dummy if you found the need or interest to read as well as learn more about benchmarking, metrics that matter, tools, technology and related topics.

Server and Storage I/O benchmarking 101

There are different reasons for benchmarking. For example, you might be asked (or want to know) how many IOPS a disk, Solid State Device (SSD), or storage system can do, such as for a 15K RPM (revolutions per minute) 146GB SAS Hard Disk Drive (HDD). Sure, you can go to a manufacturer's website and look at the speeds and feeds (technical performance numbers), however are those metrics applicable to your environment's applications or workload?
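
For a quick sanity check on those vendor numbers, the classic back-of-the-envelope estimate divides one second by the drive's average seek plus rotational latency. Here is a minimal sketch in Python using assumed typical seek times (the values on your drive's data sheet will differ):

```python
# Back-of-the-envelope random IOPS estimate for a single HDD from its
# mechanical characteristics. Seek times below are assumed typical values,
# not measurements of any specific drive.
def hdd_random_iops(rpm, avg_seek_ms):
    avg_rotational_ms = (60_000 / rpm) / 2   # half a revolution on average
    service_time_ms = avg_seek_ms + avg_rotational_ms
    return 1000 / service_time_ms

print(f"15K RPM, ~3.5 ms seek: ~{hdd_random_iops(15000, 3.5):.0f} IOPS")
print(f"7.2K RPM, ~8.5 ms seek: ~{hdd_random_iops(7200, 8.5):.0f} IOPS")
```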

You might get higher IOPS with a smaller I/O size on sequential reads vs. random writes, which will also depend on what the HDD is attached to. For example, are you going to attach the HDD to a storage system or appliance with RAID and caching? Are you going to attach the HDD to a PCIe RAID card, or will it be part of a server or storage system? Or are you simply going to put the HDD into a server or workstation and use it as a drive without any RAID or performance acceleration?
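
Keep in mind that IOPS, I/O size and bandwidth are related (bandwidth is roughly IOPS multiplied by I/O size), which is one reason an IOPS number quoted without its I/O size lacks context. A small illustrative calculation, using an assumed 100 MB/s of sustained device bandwidth:

```python
# Illustration (not a benchmark) of why IOPS alone lack context: the same
# sustained bandwidth corresponds to very different IOPS numbers depending
# on the I/O size being used. 100 MB/s is an assumed example figure.
def iops_for_bandwidth(bandwidth_mb_per_sec, io_size_kb):
    return bandwidth_mb_per_sec * 1024 / io_size_kb

for io_kb in (4, 8, 64, 256):
    print(f"{io_kb:3d} KB I/Os at 100 MB/s sustained: "
          f"~{iops_for_bandwidth(100, io_kb):,.0f} IOPS")
```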

What this all means is understanding what it is you want to benchmark or test, so you can learn what the system, solution, service or specific device can do under different workload conditions.

Some benchmark and related topics include

  • What are you trying to benchmark
  • Why do you need to benchmark something
  • What are some server storage I/O benchmark tools
  • What is the best benchmark tool
  • What to benchmark, how to use tools
  • What are the metrics that matter
  • What is benchmark context and why does it matter
  • What are marketing hero benchmark results
  • What to do with your benchmark results
  • Server storage I/O benchmark step test (see the step-test sketch after this list)
    Example of step test results with various workers and workloads

  • What do the various metrics mean (can we get a side of context with them metrics?)
  • Why look at server CPU if doing storage and I/O networking tests
  • Where and how to profile your application workloads
  • What about physical vs. virtual vs. cloud and software defined benchmarking
  • How to benchmark block DAS or SAN, file NAS, object, cloud, databases and other things
  • Avoiding common benchmark mistakes
  • Tips, recommendations, things to watch out for
  • What to do next
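
As a companion to the step test item above, here is a minimal Python sketch of the idea: step up the number of workers, run each step for a fixed time, and record the resulting IOPS. It assumes a pre-created test file (named testfile.bin here) of at least a few GB and does simple random reads; real tools such as vdbench, fio or Diskspd are far more capable and accurate, so treat this only as an illustration of the stepping pattern.

```python
# Minimal step-test sketch (assumes an existing multi-GB file "testfile.bin"
# and does purely random 4 KB reads with an increasing number of workers).
import os
import threading
import time

TEST_FILE = "testfile.bin"   # hypothetical pre-created test file
IO_SIZE = 4096               # 4 KB per I/O
DURATION = 10                # seconds per step

def worker(stop, counter, lock):
    size = os.path.getsize(TEST_FILE)
    local = 0
    with open(TEST_FILE, "rb", buffering=0) as f:
        while not stop.is_set():
            # pick a random aligned offset and read one block
            offset = (int.from_bytes(os.urandom(4), "big") % (size // IO_SIZE)) * IO_SIZE
            f.seek(offset)
            f.read(IO_SIZE)
            local += 1
    with lock:
        counter[0] += local

for workers in (1, 2, 4, 8, 16):        # the "steps"
    stop, lock, counter = threading.Event(), threading.Lock(), [0]
    threads = [threading.Thread(target=worker, args=(stop, counter, lock))
               for _ in range(workers)]
    start = time.time()
    for t in threads:
        t.start()
    time.sleep(DURATION)
    stop.set()
    for t in threads:
        t.join()
    elapsed = time.time() - start
    print(f"{workers:2d} workers: {counter[0] / elapsed:,.0f} IOPS")
```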


Where to learn more

The following are related links to read more about server (cloud, virtual and physical) storage I/O benchmarking tools, technologies and techniques.

Drew Robb’s benchmarking quick reference guide
Server storage I/O benchmarking tools, technologies and techniques resource page
Server and Storage I/O Benchmarking 101 for Smarties.
Microsoft Diskspd download and Microsoft Diskspd overview (via Technet)
I/O, I/O how well do you know about good or bad server and storage I/Os?
Server and Storage I/O Benchmark Tools: Microsoft Diskspd (Part I and Part II)

Wrap up and summary

We have just scratched the surface when it comes to benchmarking cloud, virtual and physical server storage I/O and networking hardware and software, along with associated tools, techniques and technologies. However, hopefully this post and the links for more reading mentioned above give you a basis for connecting the dots of what you already know, or enable learning more about workloads (synthetic as well as real-world), benchmarks and associated topics. Needless to say there are many more things that we will cover in future posts (e.g. keep an eye on and bookmark the server storage I/O benchmark tools and resources page here).

Ok, nuff said, for now…

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Server Storage I/O Benchmark Performance Resource Tools

Server Storage I/O Benchmarking Performance Resource Tools


Updated 1/23/2018

Server storage I/O benchmark performance resource tools, various articles and tips are collected below. These include tools for legacy, virtual, cloud and software defined environments.


The best server and storage I/O (input/output operation) is the one that you do not have to do, the second best is the one with the least impact.

server storage I/O locality of reference

This is where the idea of locality of reference (i.e. how close the data is to where your application is running) comes into play, which is implemented via tiered memory, storage and caching as shown in the figure above.

Cloud virtual software defined storage I/O

Server storage I/O performance applies to cloud, virtual, software defined and legacy environments

What this has to do with server storage I/O (and networking) performance benchmarking is keeping the idea of locality of reference, context and the application workload in perspective, regardless of whether you are in a cloud, virtual, software defined or legacy physical environment.

StorageIOblog: I/O, I/O how well do you know about good or bad server and storage I/Os?
StorageIOblog: Server and Storage I/O benchmarking 101 for smarties
StorageIOblog: Which Enterprise HDDs to use for a Content Server Platform (7 part series with using benchmark tools)
StorageIO.com: Enmotus FuzeDrive MicroTiering lab test using various tools
StorageIOblog: Some server storage I/O benchmark tools, workload scripts and examples (Part I) and (Part II)
StorageIOblog: Get in the NVMe SSD game (if you are not already)
Doridmen.com: Transcend SSD360S Review with tips on using ATTO and Crystal benchmark tools
ComputerWeekly: Storage performance metrics: How suppliers spin performance specifications

Via StorageIO Podcast: Kevin Closson discusses SLOB Server CPU I/O Database Performance benchmarks
Via @KevinClosson: SLOB Use Cases By Industry Vendors. Learn SLOB, Speak The Experts’ Language
Via BeyondTheBlocks (Reduxio): 8 Useful Tools for Storage I/O Benchmarking
Via CCSIObench: Cold-cache Sequential I/O Benchmark
CISJournal: Benchmarking the Performance of Microsoft Hyper-V server, VMware ESXi and Xen Hypervisors (PDF)
Microsoft TechNet: Windows Server 2016 Hyper-V large-scale VM performance for in-memory transaction processing
InfoStor: What’s The Best Storage Benchmark?
StorageIOblog: How to test your HDD, SSD or all flash array (AFA) storage fundamentals
Via ATTO: Atto V3.05 free storage test tool available
Via StorageIOblog: Big Files and Lots of Little File Processing and Benchmarking with Vdbench

Via StorageIO.com: Which Enterprise Hard Disk Drives (HDDs) to use with a Content Server Platform (White Paper)
Via VMware Blogs: A Free Storage Performance Testing Tool For Hyperconverged
Microsoft Technet: Test Storage Spaces Performance Using Synthetic Workloads in Windows Server
Microsoft Technet: Microsoft Windows Server Storage Spaces – Designing for Performance
BizTech: 4 Ways to Performance-Test Your New HDD or SSD
EnterpriseStorageForum: Data Storage Benchmarking Guide
StorageSearch.com: How fast can your SSD run backwards?
OpenStack: How to calculate IOPS for Cinder Storage?
StorageAcceleration: Tips for Measuring Your Storage Acceleration


Spiceworks: Determining HDD SSD SSHD IOP Performance
Spiceworks: Calculating IOPS from Perfmon data
Spiceworks: profiling IOPs

vdbench server storage I/O benchmark
Vdbench example via StorageIOblog.com

StorageIOblog: What does server storage I/O scaling mean to you?
StorageIOblog: What is the best kind of IO? The one you do not have to do
Testmyworkload.com: Collect and report various OS workloads
Whoishostingthis: Various SQL resources
StorageAcceleration: What, When, Why & How to Accelerate Storage
Filesystems.org: Various tools and links
StorageIOblog: Can we get a side of context with them IOPS and other storage metrics?


BrightTalk Webinar: Data Center Monitoring – Metrics that Matter for Effective Management
StorageIOblog: Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy
StorageIOblog: Has SSD put Hard Disk Drives (HDD’s) On Endangered Species List?

server storage I/O bottlenecks and I/O blender

Microsoft TechNet: Measuring Disk Latency with Windows Performance Monitor (Perfmon)
Via Scalegrid.io: How to benchmark MongoDB with YCSB?
Microsoft MSDN: List of Perfmon counters for sql server
Microsoft TechNet: Taking Your Server’s Pulse
StorageIOblog: Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?
CMG: I/O Performance Issues and Impacts on Time-Sensitive Applications


Virtualization Practice: IO IO it is off to Storage and IO metrics we go
InfoStor: Is HP Short Stroking for Performance and Capacity Gains?
StorageIOblog: Is Computer Data Storage Complex? It Depends
StorageIOblog: More storage and IO metrics that matter
StorageIOblog: Moving Beyond the Benchmark Brouhaha
Yellow-Bricks: VSAN VDI Benchmarking and Beta refresh!

server storage I/O benchmark example

YellowBricks: VSAN performance: many SAS low capacity VS some SATA high capacity?
StorageIOblog: Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review
StorageIOblog: Part II: Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review
StorageIOblog: Server Storage I/O Network Benchmark Winter Olympic Games


VMware VDImark aka View Planner (also here, here and here) as well as VMmark here
StorageIOblog: SPC and Storage Benchmarking Games
StorageIOblog: Speaking of speeding up business with SSD storage
StorageIOblog: SSD and Storage System Performance

Hadoop server storage I/O performance
Various Server Storage I/O tools in a hadoop environment

Michael-noll.com: Benchmarking and Stress Testing an Hadoop Cluster With TeraSort, TestDFSIO
Virtualization Practice: SSD options for Virtual (and Physical) Environments Part I: Spinning up to speed on SSD
StorageIOblog: Storage and IO metrics that matter
InfoStor: Storage Metrics and Measurements That Matter: Getting Started
SilvertonConsulting: Storage throughput vs. IO response time and why it matters
Splunk: The percentage of Read / Write utilization to get to 800 IOPS?

flash ssd and hdd
Various server storage I/O benchmarking tools

Spiceworks: What is the best IO IOPs testing tool out there
StorageIOblog: How many IOPS can a HDD, HHDD or SSD do?
StorageIOblog: Some Windows Server Storage I/O related commands
Openmaniak: Iperf overview and Iperf.fr: Iperf overview
StorageIOblog: Server and Storage I/O Benchmark Tools: Microsoft Diskspd (Part I and Part II)
Quest: SQL Server Perfmon Poster (PDF)
Server and Storage I/O Networking Performance Management (webinar)
Data Center Monitoring – Metrics that Matter for Effective Management (webinar)
Flash back to reality – Flash SSD Myths and Realities (Industry trends & benchmarking tips), (MSP CMG presentation)
DBAstackexchange: How can I determine how many IOPs I need for my AWS RDS database?
ITToolbox: Benchmarking the Performance of SANs

server storage IO labs

StorageIOblog: Dell Inspiron 660 i660, Virtual Server Diamond in the rough (Server review)
StorageIOblog: Part II: Lenovo TS140 Server and Storage I/O Review (Server review)
StorageIOblog: DIY converged server software defined storage on a budget using Lenovo TS140
StorageIOblog: Server storage I/O Intel NUC nick knack notes First impressions (Server review)
StorageIOblog & ITKE: Storage performance needs availability, availability needs performance
StorageIOblog: Why SSD based arrays and storage appliances can be a good idea (Part I)
StorageIOblog: Revisiting RAID storage remains relevant and resources

If you are interested in cloud and object storage, visit our objectstoragecenter.com page; for flash SSD, check out the storageio.com/ssd page, along with data protection, RAID, various industry links and more here.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Watch for additional links to be added above in addition to those that appear via comments.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

I/O, I/O how well do you know good bad ugly server storage I/O iops?

How well do you know good bad ugly I/O iops?


Updated 2/10/2018

There are many different types of server storage I/O operations associated with various environments, applications and workloads. Some I/O activity is measured in IOPS, other activity in transactions per second (TPS), files or messages per unit of time (hour, minute, second), gets, puts or other operations. The best I/O is the one you do not have to do.

What about all the cloud, virtual, software defined and legacy based application that still need to do I/O?

If no IO operation is the best IO, then the second best IO is the one that can be done as close to the application and processor as possible with the best locality of reference.

Also keep in mind that aggregation (e.g. consolidation) can cause aggravation (server storage I/O performance bottlenecks).

aggregation causes aggravation
Example of aggregation (consolidation) causing aggravation (server storage i/o blender bottlenecks)

And the third best?

It's the one that can be done in less time, or with the least cost or effect on the requesting application, which means moving further down the memory and storage stack.

solving server storage i/o blender and other bottlenecks
Leveraging flash SSD and cache technologies to find and fix server storage I/O bottlenecks

On the other hand, any I/O operation, regardless of whether it is for block, file or object storage, that involves some context is better than those without, particularly when it involves metrics that matter (here, here and here [webinar]).

Server Storage I/O optimization and effectiveness

The problem with I/Os is that they are basic operations to get data into and out of a computer or processor, so there is no way to avoid all of them, unless you have a very large budget. Even if you have a large budget that can afford an all-flash SSD solution, you may still meet bottlenecks or other barriers.

I/Os require CPU or processor time and memory to set up and then process the results, as well as I/O and networking resources to move data to their destination or retrieve them from where they are stored. While I/Os cannot be eliminated, their impact can be greatly improved or optimized by, among other techniques, doing fewer of them via caching and by grouping reads or writes (pre-fetch, write-behind).
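
As a toy illustration of the grouping idea (not a real cache implementation), the following Python sketch buffers small application writes and flushes them as fewer, larger back-end writes; the file name and thresholds are made up for the example.

```python
# Toy write-behind / coalescing sketch: small application writes are buffered
# and flushed to the file as fewer, larger back-end writes.
class CoalescingWriter:
    def __init__(self, path, flush_bytes=64 * 1024):
        self.f = open(path, "wb")
        self.flush_bytes = flush_bytes
        self.buffer = bytearray()
        self.app_writes = 0      # I/Os the application issued
        self.backend_writes = 0  # I/Os actually sent to storage

    def write(self, data):
        self.app_writes += 1
        self.buffer += data
        if len(self.buffer) >= self.flush_bytes:
            self._flush()

    def _flush(self):
        if self.buffer:
            self.f.write(self.buffer)
            self.backend_writes += 1
            self.buffer.clear()

    def close(self):
        self._flush()
        self.f.close()

w = CoalescingWriter("coalesce_demo.bin")
for _ in range(1000):
    w.write(b"x" * 512)          # 1,000 small 512-byte application writes
w.close()
print(w.app_writes, "application writes ->", w.backend_writes, "back-end writes")
```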


Think of it this way: Instead of going on multiple errands, sometimes you can group multiple destinations together making for a shorter, more efficient trip. However, that optimization may also mean your drive will take longer. So, sometimes it makes sense to go on a couple of quick, short, low-latency trips instead of one larger one that takes half a day even as it accomplishes many tasks. Of course, how far you have to go on those trips (i.e., their locality) makes a difference about how many you can do in a given amount of time.

Locality of reference (or proximity)

What is locality of reference?

This refers to how close (i.e., its place) data exists to where it is needed (being referenced) for use. For example, the best locality of reference in a computer would be registers in the processor core, ready to be acted on immediately. This would be followed by levels 1, 2, and 3 (L1, L2, and L3) onboard caches, followed by main memory, or DRAM. After that comes solid-state memory, typically NAND flash, either on PCIe cards or accessible on a direct attached storage (DAS), SAN, or NAS device.

server storage I/O locality of reference

Even though a PCIe NAND flash card is close to the processor, there still remains the overhead of traversing the PCIe bus and associated drivers. To help offset that impact, PCIe cards use DRAM as cache or buffers for data along with meta or control information to further optimize and improve locality of reference. In other words, this information is used to help with cache hits, cache use, and cache effectiveness vs. simply boosting cache use.
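
One way to see why cache effectiveness (hit ratio) matters more than raw cache size is a simple effective-latency calculation. The latency numbers below are illustrative assumptions only, not measurements of any particular product.

```python
# Back-of-the-envelope effective access time with a cache in front of a
# slower device. The latency figures are assumptions for illustration.
def effective_latency_us(hit_ratio, cache_latency_us, backend_latency_us):
    """Average latency seen by the application for a given cache hit ratio."""
    return hit_ratio * cache_latency_us + (1 - hit_ratio) * backend_latency_us

for hit in (0.50, 0.80, 0.90, 0.99):
    # e.g. ~100 us for a PCIe flash/DRAM cache hit vs. ~5,000 us for a miss
    # that has to go to a back-end HDD-based system
    print(f"hit ratio {hit:.0%}: ~{effective_latency_us(hit, 100, 5000):,.0f} us average")
```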

SSD to the rescue?

What can you do to cut the impact of I/Os?

There are many steps one can take, starting with establishing baseline performance and availability metrics.

The metrics that matter include IOPS, latency, bandwidth, and availability. Then, leverage those metrics to gain insight into your application's performance.
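
If you are collecting your own per-I/O samples (from a trace, benchmark log or script), deriving those basic metrics is straightforward. A sketch using made-up sample data follows; the sample values are synthetic stand-ins for what a benchmark or trace tool would collect.

```python
# Deriving IOPS, bandwidth and latency (average and 99th percentile) from a
# list of per-I/O samples. The samples below are fabricated for illustration.
import statistics

run_seconds = 60.0
# each sample: (bytes_transferred, latency_seconds)
samples = [(4096, 0.0008)] * 50_000 + [(4096, 0.0050)] * 500   # made-up data

bytes_total = sum(b for b, _ in samples)
latencies = sorted(lat for _, lat in samples)

iops = len(samples) / run_seconds
bandwidth_mbps = bytes_total / run_seconds / 1_000_000
avg_ms = statistics.mean(latencies) * 1000
p99_ms = latencies[int(0.99 * len(latencies)) - 1] * 1000

print(f"IOPS: {iops:,.0f}")
print(f"Bandwidth: {bandwidth_mbps:.1f} MB/s")
print(f"Latency avg: {avg_ms:.2f} ms, 99th percentile: {p99_ms:.2f} ms")
```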

Understand that IO’s are a fact of applications doing work (storing, retrieving, managing data) no matter whether systems are virtual, physical, or running up in the cloud. But it’s important to understand just what a bad IO is, along with its impact on performance. Try to identify those that are bad, and then find and fix the problem, either with software, application, or database changes. Perhaps you need to throw more software caching tools, hypervisors, or hardware at the problem. Hardware may include faster processors with more DRAM and faster internal busses.

Leveraging local PCIe flash SSD cards for caching or as targets is another option.

You may want to use storage systems or appliances that rely on intelligent caching and storage optimization capabilities to help with performance, availability, and capacity.

Where to gain insight into your server storage I/O environment

There are many tools that can be used to gain insight into your server storage I/O environment across cloud, virtual, software defined and legacy environments, as well as from different layers (e.g. applications, database, file systems, operating systems, hypervisors, server, storage, I/O networking). Many applications along with databases have either built-in or optional tools from their provider, third parties, or other sources that can give information about the work activity being done. Likewise there are tools to dig down deeper into the various data infrastructure layers to see what is happening, as shown in the following figures.

application storage I/O performance
Gaining application and operating system level performance insight via different tools

windows and linux storage I/O performance
Insight and awareness via operating system tools on Windows and Linux

In the above example, Spotlight on Windows (SoW), which you can download for free from Dell here, is shown along with Ubuntu utilities. You could also use other tools to look at server storage I/O performance, including Windows Perfmon among others.

vmware server storage I/O
Hypervisor performance using VMware ESXi / vsphere built-in tools

vmware server storage I/O performance
Using Visual ESXtop to dig deeper into virtual server storage I/O performance

vmware server storage i/o cache
Gaining insight into virtual server storage I/O cache performance

Wrap up and summary

There are many approaches to address (e.g. find and fix) vs. simply move or mask data center and server storage I/O bottlenecks. Having insight and awareness into how your environment and its applications are behaving is important in order to know where to focus resources. Also keep in mind that a bit of flash SSD or DRAM cache in the applicable place can go a long way, while a lot of cache will also cost you cash. Even if you can't eliminate I/Os, look for ways to decrease their impact on your applications and systems.

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Keep in mind: SSD including flash and DRAM among others are in your future; the question is where, when, with what, how much and whose technology or packaging.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Server storage I/O Intel NUC nick knack notes – First impressions


Server storage I/O Intel NUC nick knack notes – First impressions

This is the first of a two-part (part II here) series of my experiences (and impressions) using an Intel NUC ( a 4th generation model) for various things about cloud, virtual, physical and software defined server storage I/O networking.

The NUC has been around for a few years now and continues to evolve, and recently I bought my first one (a 4th generation model) to join some other servers that I have. My reason for getting a NUC is to use it as a simple low-power platform to run different software on, including bare-metal OS, hypervisors, cloud, virtual and software defined server storage and networking applications that might otherwise be on an old laptop or mini-tower.

Intel® NUC with Intel® Core™ i5 Processor and 2.5-Inch Drive Support (NUC5i5RYH) via Intel.com

Introducing Intel Next Unit Computing aka NUC

For those not familiar, NUC is a series of products from Intel called Next Unit Computing that offer an alternative to traditional mini-desktops or even laptops and notebooks. There are several different NUC models available including the newer 5th generation models (click here to see various models and generations). The NUCs are simple, small units of computing with an Intel processor and room for your choice of memory, persistent storage (e.g. Hard Disk Drive (HDD) or flash Solid State Device (SSD)), networking, video, audio and other peripheral device attachment.

Software (not supplied) is defined by what you choose to use, such as a Windows or *nix operating system, VMware ESXi, Microsoft Hyper-V, KVM or Xen hypervisor, or some other applications. The base NUC package includes front and rear-side ports for attaching various devices. In terms of functionality, think of a laptop without a keyboard or video screen, or in terms of a small head-less (e.g. no monitor) mini-tower desktop workstation PC.

Which NUC to buy?

If you need to be the first with anything new, then jump direct to the recently released 5th generation models.

On the other hand, if you are looking for a bargain, there are some good deals on 4th generation or older models. Likewise, depending on the processor speed and features needed along with your available budget, those criteria and others will direct you to a specific NUC model.

I went with a 4th generation NUC realizing that the newer models were just around the corner, as I figured I could always get another (e.g. create a NUC cluster) newer model when needed. In addition I also wanted a model that had enough performance to last a few years of use and the flexibility to be reconfigured as needed. My choice was a model D54250WYK priced around $352 USD via Amazon (prices may vary by different venues).

What's included with a NUC?

My first NUC is a model D54250WYK (e.g. BOXD54250WYKH1 ) that you can view the specific speeds and feeds here at the Intel site along with ordering info here at Amazon (or your other preferred venue).

View and compare other NUC models at the Intel NUC site here.

The following images show the front-side two USB 3.0 ports along with head-phone (or speaker) and microphone jacks. Looking at the rear-side of the NUC there are a couple of air vents, power connector port (external power supply), mini-display and HDMI video port, GbE LAN, and two USB 3.0 ports.

Left is front view of my NUC model 54250 and Right is back or rear view of NUC

NUC Model: BOXD54250WYKH1 (speeds/feeds vary by specific model)
Form factor: 1.95" tall
Processor: Intel Core i5-4250U with active heat sink fan
Memory: Two SO-DIMM DDR3L (e.g. laptop) memory slots, up to 16GB (e.g. 2x8GB)
Display: One mini DisplayPort with audio; one mini HDMI port with audio
Audio: Intel HD Audio, 8 channel (7.1) digital audio via HDMI and DisplayPort, also headphone jack
LAN: Intel Gigabit Ethernet (GbE) (I218)
Peripheral and storage: Two USB 3.0 (e.g. blue) front side; two USB 3.0 rear side; two USB 2.0 (internal); one SATA port (internal 2.5 inch drive bay); consumer infrared sensor (front panel)
Expansion: One full-length mini PCI Express slot with mSATA support; one half-length mini PCI Express slot
Included in the box: Laptop style 19V 65W power adapter (brick) and cord, VESA mounting bracket (e.g. for mounting on rear of video monitor), integration (installation) guide, wireless antennae (integrated into chassis), Intel Core i5 logo
Warranty: 3-year limited

Processor Speeds and Feeds

There are various Intel Core i3 and i5 processors available depending on the specific NUC model. For example, my 54250WYK has a two-core (1.3GHz each) 4th generation i5-4250U (click here to see Intel speeds and feeds) which includes Intel Visual BIOS, Turbo Boost, Rapid Start and virtualization support among other features.

Note that features vary by processor type, along with other software, firmware or BIOS updates. While the 1.3GHz two-core (e.g. max 2.6GHz) is not as robust as faster quad (or more) core processors running at 3.0GHz (or faster), for most applications, including as a first virtual lab or storage sandbox among other uses, it will be fast enough and comparable to the capabilities of a lower- to mid-range laptop.

What this all means

In general I like the NUC so much that I bought one (model 54250) and would consider adding another in the future for some things; however, I also see the need to continue using my other compute servers for different workloads.

This wraps up part I of this two-part series, and what this means is that I liked the idea of the Intel NUC enough that I bought one. Continue reading in part two here where I cover the options that I added to my NUC, initial configuration, deployment, use and additional impressions.

Ok, nuff said for now, check out part-two here.

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 StorageIO and UnlimitedIO LLC All Rights Reserved

Server storage I/O Intel NUC nick knack notes – Second impressions


Server storage I/O Intel NUC nick knack notes – Second impressions

This is the second of a two-part series about my first and second impressions of the Intel NUC (Next Unit Computing). In the first post (here) I gave an overview and my first impressions, while in this post let's look at the options added to my NUC model 54250, first deployment use and more impressions.

Intel® NUC with Intel® Core™ i5 Processor and 2.5-Inch Drive Support (NUC5i5RYH) via Intel.com

What you will want to add to a NUC

Since the NUC is a basic brick with a processor mounted on its mother board, you will need to add memory, some type of persistent storage device (mSATA, SATA or USB based) and optionally a WiFi card.

One of the nice things about the NUC is that in many ways it has the equivalent functionality of a laptop or mini-tower without the extra overhead (cost, components, packaging), enabling you to customize as needed for your specific requirements. For example, there is no keyboard, mouse, video screen, WiFi, Hard Disk Drive (HDD) or flash Solid State Device (SSD) included, nor an operating system pre-installed. There is also no memory included, enabling you to decide how much to configure using compatible laptop-style memory. Video and monitors attach via HDMI or mini DisplayPort, including VGA devices via an adapter cable. Keyboard and mouse, if needed, are handled via USB ports.

Here is what I added to my NUC model 54250.

1 x Crucial 16GB Kit (2 x 8GB) DDR3 1600 (PC3-12800) SODIMM 204-Pin Notebook Memory
1 x Intel Network 7260 WiFi Wireless-AC 7260 H/T Dual Band 2×2 AC+Bluetooth HMC. Here is link to Intel site for various drivers.
1 x 500GB Samsung Electronics 840 EVO mSATA 0.85-Inch Solid State Drive
1 x SATA HDD, SSD or HHDD/SSHD (I used one of my existing drives)

Note that you will also need to supply some type of Keyboard Video Mouse (KVM), in my case I used a HDMI to VGA adapter cable to attach the NUC via HDMI (for video) and USB (keyboard and mouse) to my Startech KVM switch.

The following images show, on the left, the Intel WiFi card installed and, on the right, a Samsung 840 EVO mSATA 500GB flash SSD installed above the WiFi card. Also notice on the far right of the images the two DDR3 "notebook" class DRAM DIMM slots.

Left: Intel WiFi card installed; Right: Samsung EVO mSATA SSD card (sits above the WiFi card)

Note that the NUC (like many laptops) accepts 9mm or smaller (e.g. thin 7mm height) HDDs and SSDs in its SATA drive bay. I mention this because some of the higher-capacity 2TB 2.5" SFF drives are taller than 9mm, as shown in the above image, and do not fit in the NUC internal SATA drive bay. While many devices and systems support 2.5" drive slots for HDDs, SSDs or HHDDs/SSHDs, pay attention to the height and avoid surprises when something does not fit like it was assumed to.

2.5 HDD and SSDs
Low-profile and tall-profile 2.5" SFF HDDs

Additional drives and devices can be attached using external USB 3.0 ports including HDDs, SSDs or even USB to GbE adapters if needed. You will need to supply your own operating system, hypervisor, storage, networking or other software, such as Windows, *nix, VMware ESXi, Hyper-V, KVM, Xen, OpenStack or any of the various ZFS based (among others) storage appliances.

Unpacking and physical NUC installation

Initial setup and physical configuration of the NUC is pretty quick, with the only tool needed being a Phillips screwdriver.

NUC and components ready for installation
Intel NUC 54250 and components ready for installation

With all the components including the NUC itself laid out for a quick inventory, including recording serial numbers (see image above), the next step is to open up the NUC by removing four Phillips screws from the bottom. Once the screws and bottom plate are removed, the SATA drive bay opens up to reach the slots for memory, mSATA SSD and WiFi card (see images below). Once the memory, mSATA and WiFi cards are installed, the SATA drive bay covers those components and it is time to install a 2.5" standard height HDD or SSD. For my first deployment I temporarily installed one of my older HHDDs, a 750GB Seagate Momentus XT, that will be replaced by something newer soon.

View of NUC with bottom cover removed: left, the empty SATA drive bay; right, an HDD installed

After the components are installed, it is time to replace the bottom cover plate of the NUC, securing it in place with the four screws previously removed. Next up is attaching any external devices via USB and other ports, including KVM and LAN network connections. Once the hardware is ready, it's time to power up the NUC and check out the Visual BIOS (or UEFI) as shown below.

NUC Visual BIOS screen shot examples

At this point, unless you have already installed an operating system, hypervisor or other software on a HDD, SSD or USB device, it is time to install your preferred software.

Windows 7

First up was Windows 7, as I already had an image built on the HHDD that required some drivers to be added. Specifically, a visit to the Intel resources site (see the NUC resources and links section later in this post) was made to get LAN GbE, WiFi and USB drivers. Once those were installed, the on-board GbE LAN port worked well, as did the WiFi. Another driver that needed to be downloaded was for a USB-to-GbE adapter to add another LAN connection. Also a couple of reboots were required for other Windows drivers and configuration changes to take place to correct some transient problems, including KVM hangs, which eventually cleared themselves up.

Windows 2012 R2

Following Windows 7, next up was a clean install of Windows 2012 R2, which also required some drivers and configuration changes. One of the challenges is that Windows 2012 R2 is not officially supported on the NUC with its GbE LAN and WiFi cards. However, after doing some searches and reading a few posts including this and this, a solution was found and Windows 2012 R2 and its networking are working well.

Ubuntu and Clonezilla

Next up was a quick install of Ubuntu 14.04 which went pretty smooth, as well as using Clonezilla to do some drive maintenance, move images and partitions among other things.

VMware ESXi 5.5U2

My first attempt at installing a standard VMware ESXi 5.5U2 image ran into problems due to the GbE LAN port not being seen. The solution is to use a different build, or a custom ISO that includes the applicable GbE LAN driver (e.g. net-e1000e-2.3.2.x86_64.vib). There is also some useful information at Florian Grehl's site (@virten) and over at Andreas Peetz's site (@VFrontDe), including a SATA controller driver for xahci. Once the GbE driver was added (the same driver that addresses other Intel NIC I217/I218 based systems) along with updating the SATA driver, VMware worked fine.

Needless to say there are many other things I plan on doing with the NUC both as a standalone bare-metal system as well as a virtual platform as I get more time and projects allow.

What about building your NUC alternative?

In addition to the NUC models available via Intel and its partners and accessorizing as needed, there are also specially customized and ruggedized NUC versions similar to what you would expect to find with laptops, notebooks, and other PC based systems.

Left: MSI ProBox rear view; Right: MSI ProBox front view

If you are looking to do more than what Intel and its partners offer, then there are some other options, such as increasing the number of external ports among other capabilities. One option which I recently added to my collection of systems is a DIY (Do It Yourself) MSI ProBox (VESA mountable) such as this one here.

MSI Probox internal view
Internal view MSI ProBox (no memory, processor or disks)

The MSI ProBox is essentially a motherboard with an empty single CPU socket (e.g. LGA 1150, up to 65W) supporting various processors, two empty DDR3 DIMM slots, and two empty 2.5" SATA drive slots among other capabilities. Enclosures such as the MSI ProBox give you flexibility for creating something more robust than a basic NUC yet smaller than a traditional server, depending on your specific needs.

If you are looking for other small form factor, modular and ruggedized server options as an alternative to a NUC, then check out those from Xi3, Advantech, Cadian Networks, and Logic Supply among many others.


First NUC impressions

Overall I like the NUC and see many uses for it, from consumer and home uses including entertainment and media systems and video security surveillance, to use as a small server or workstation device. In addition, I can see a NUC being used in smaller environments as a desktop workstation or as a lower-power, lower-performance system, including as a small virtualization host for SOHO, small SMB and ROBO environments. Another usage is for a home virtual lab as well as gaming, among other scenarios including simple software defined storage proofs of concept. For example, how about creating a small cluster of NUCs to run VMware VSAN, or Datacore, EMC ScaleIO, Starwind, Microsoft SOFS or Hyper-V, as well as any of the many ZFS based NAS storage software applications.

Pros – Features and benefits

Small, low-power, self-contained with flexibility to choose my memory, WiFi, storage (HDD or SSD) without the extra cost of those items or software being included.

Cons – Caveats or what to look out for

It would be nice to have another GbE LAN port; however, I addressed that by adding a USB 3.0 to GbE adapter. Likewise, it would be nice if the 2.5" SATA drive bay supported tall form-factor devices such as the 2TB drives. The workaround for adding larger capacity and physically larger storage devices is to use the USB 3.0 ports. The biggest warning is that if you are going to venture outside of the officially supported operating system and application software realm, be ready to load some drivers, possibly patch and hack some install scripts and then plug and pray it all works. So far I have not run into any major show stoppers that were not addressed with some time spent searching (Google will be your friend), then loading the drivers or making configuration changes.

Additional NUC resources and links

Various Intel products support search page
Intel NUC support and download links
Intel NUC model 54250 page, product brief page (and PDF version), and support with download links
Intel NUC home theater solutions guide (PDF)
Intel HCL for NUC page and Intel Core i5-4250U processor speeds and feeds
VMware on NUC tips
VMware ESXi driver for LAN net-e1000e-2.3.2.x86_64
VMware ESXi SATA xahci driver
Server storage I/O Intel NUC nick knack notes – First impressions
Server Storage I/O Cables Connectors Chargers & other Geek Gifts (Part I and Part II)
Software defined storage on a budget with Lenovo TS140


What this all means

The Intel NUC provides a good option for many situations that might otherwise need a larger mini-tower desktop workstation or similar system, whether for home, consumer or small office needs. The NUC can also be used for specialized, pre-configured, application-specific situations that need low power, basic system functionality and expansion options in a small physical footprint. In addition, the NUC can be a good option for adding to an existing physical and virtual lab, or as a basis for starting a new one.

So far I have found many uses for the NUC, which frees up other systems to do other tasks while enabling some older devices to finally be retired. On the other hand, like most any technology, while the NUC is flexible, its low power and performance are not enough for some applications. However, the NUC gives me flexibility to leverage the applicable unit of compute (e.g. server, workstation, etc.) for a given task, or put another way, to use the right technology tool for the task at hand.

For now I only need a single NUC to be a companion to my other HP, Dell and Lenovo servers as well as MSI ProBox, however maybe there will be a small NUC cluster, grid or ring configured down the road.

What say you: do you have a NUC, and if so, how is it being used? Any tips, tricks or hints to share with others?

Ok, nuff said for now.

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 StorageIO and UnlimitedIO LLC All Rights Reserved

Revisiting RAID data protection remains relevant resource links

Revisiting RAID data protection remains relevant and resources


Updated 2/10/2018

RAID data protection remains relevant including erasure codes (EC), local reconstruction codes (LRC) among other technologies. If RAID were really not relevant anymore (e.g. actually dead), why do some people spend so much time trying to convince others that it is dead or to use a different RAID level or enhanced RAID or beyond raid with related advanced approaches?

When you hear RAID, what comes to mind?

A legacy monolithic storage system that supports narrow 4, 5 or 6 drive wide stripe sets, or a modern system supporting dozens of drives in a RAID group with different options?

RAID means many things, likewise there are different implementations (hardware, software, systems, adapters, operating systems) with various functionality, some better than others.

For example, which of the items in the following figure come to mind, or perhaps are new to your RAID vocabulary?

RAID questions

There are Many Variations of RAID Storage, some for the enterprise, some for SMB, SOHO or consumer. Some have better performance than others; some have poor performance, for example causing extra writes that lead to the perception that all parity-based RAID does extra writes (some implementations actually do write gathering and optimization).

Some hardware and software implementations use WBC (write-back cache), mirrored or battery-backed (BBU), along with the ability to group writes together in memory (cache) to do full-stripe writes. The result can be fewer back-end writes compared to other systems. Hence, not all RAID implementations in either hardware or software are the same. Likewise, just because a RAID definition shows a particular theoretical implementation approach does not mean all vendors have implemented it in that way.
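
To put some numbers on why full-stripe writes matter, here is a sketch of the textbook RAID 5 small-write (read-modify-write) penalty versus cache-enabled full-stripe writes. Real controllers vary widely in how (or whether) they do this, so treat it as theory only.

```python
# Textbook RAID 5 back-end write counts: read-modify-write for small random
# writes vs. a cache that coalesces writes into full stripes. Theory only;
# actual controller behavior differs by implementation.
def raid5_backend_ios(app_writes, data_drives, full_stripe=False):
    """Return back-end disk I/Os needed to service app_writes small writes."""
    if full_stripe:
        # Writes gathered in cache into whole stripes: one write per data
        # chunk plus one parity write per stripe, no reads required.
        stripes = app_writes / data_drives
        return stripes * (data_drives + 1)
    # Read-modify-write: read old data + old parity, write new data + new parity.
    return app_writes * 4

writes = 1000
print("RAID 5 (4+1), small random writes:", raid5_backend_ios(writes, 4))
print("RAID 5 (4+1), cached full-stripe writes:", raid5_backend_ios(writes, 4, True))
```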

RAID is not a replacement for backup rather part of an overall approach to providing data availability and accessibility.


What’s the best RAID level? The one that meets YOUR needs

There are different RAID levels and implementations (hardware, software, controller, storage system, operating system, adapter among others) for various environments (enterprise, SME, SMB, SOHO, consumer) supporting primary, secondary, tertiary (backup/data protection, archiving).

RAID comparison
General RAID comparisons

Thus one size or approach does not fit all solutions; likewise RAID rules of thumb or guides need context. Context means that a RAID rule or guide for consumer or SOHO or SMB might be different for the enterprise and vice versa, not to mention varying by the type of storage system, number of drives, drive type and capacity among other factors.

RAID comparison
General basic RAID comparisons

Thus the best RAID level is the one that meets your specific needs in your environment. What is best for one environment and application may be different from what is applicable to your needs.

Key points and RAID considerations include:

· Not all RAID implementations are the same, some are very much alive and evolving while others are in need of a rest or rewrite. So it is not the technology or techniques that are often the problem, rather how it is implemented and then deployed.

· It may not be RAID that is dead, rather the solution that uses it, hence if you think a particular storage system, appliance, product or software is old and dead along with its RAID implementation, then just say that product or vendors solution is dead.

· RAID can be implemented in hardware controllers, adapters or storage systems and appliances as well as via software and those have different features, capabilities or constraints.

· Long or slow drive rebuilds are a reality with larger disk drives and parity-based approaches; however, you have options on how to balance performance, availability, capacity, and economics (see the rebuild-time sketch after this list).

· RAID can be single, dual or multiple parity or mirroring-based.

· Erasure and other coding schemes leverage parity schemes and guess what umbrella parity schemes fall under.

· RAID may not be cool, sexy or a fun topic and technology to talk about, however many trendy tools, solutions and services actually use some form or variation of RAID as part of their basic building blocks. This is an example of using new and old things in new ways to help each other do more without increasing complexity.

·  Even if you are not a fan of RAID and think it is old and dead, at least take a few minutes to learn more about what it is that you do not like to update your dead FUD.
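
For the rebuild point above, a rough estimate is simply drive capacity divided by the effective rebuild rate, which is often throttled to protect application performance. The capacities and rates below are illustrative assumptions, not figures for any particular product.

```python
# Rough drive rebuild time estimate: capacity divided by an effective rebuild
# rate (which drops when rebuilds are throttled). Values are illustrative.
def rebuild_hours(capacity_tb, rebuild_mb_per_sec):
    return capacity_tb * 1_000_000 / rebuild_mb_per_sec / 3600

for cap in (2, 8, 16):            # drive capacity in TB
    for rate in (50, 150):        # effective MB/s dedicated to the rebuild
        print(f"{cap} TB drive at {rate} MB/s: ~{rebuild_hours(cap, rate):.1f} hours")
```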

Wait, Isn’t RAID dead?

There is some dead marketing that paints a broad picture that RAID is dead to prop up something new, which in some cases may be a derivative variation of parity RAID.

data dispersal
Data dispersal and durability

RAID rebuild improving
RAID continues to evolve with rapid rebuilds for some systems

On the other hand, there are some specific products, technologies and implementations that may be end of life or actually dead. Likewise, what might be dead, dying or simply not in vogue are specific RAID implementations or packaging. Certainly there is a lot of buzz around object storage, cloud storage, forward error correction (FEC) and erasure coding, including messages of how they cut out RAID. The catch is that some object storage solutions are overlaid on top of lower-level file systems that do things such as RAID 6, granted they are out of sight, out of mind.

RAID comparison
General RAID parity and erasure code/FEC comparisons

Then there are advanced parity protection schemes, which include FEC and erasure codes, that while they are not your traditional RAID levels, have characteristics including chunking or sharding data and spreading it out over multiple devices with multiple parity (or derivatives of parity) protection.

Bottom line is that for some environments, different RAID levels may be more applicable and alive than for others.
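
To make the chunking-plus-parity idea above concrete, the following sketch compares space efficiency and fault tolerance for mirroring, single and dual parity, and a wider erasure code using the usual k data plus m coding fragment model. The specific layouts are examples for illustration, not recommendations.

```python
# Space efficiency and fault tolerance for k data + m coding fragments,
# covering mirroring, parity RAID styles and a wider erasure code layout.
def ec_profile(k, m):
    raw_overhead = (k + m) / k          # raw capacity needed per usable unit
    efficiency = k / (k + m)            # usable fraction of raw capacity
    return raw_overhead, efficiency, m  # m = fragments/drives that can be lost

for name, k, m in [("2-way mirror (1+1)", 1, 1),
                   ("RAID 5 style (4+1)", 4, 1),
                   ("RAID 6 style (8+2)", 8, 2),
                   ("Erasure code 10+4", 10, 4)]:
    overhead, eff, tolerate = ec_profile(k, m)
    print(f"{name:20s} raw x{overhead:.2f}  usable {eff:.0%}  tolerates {tolerate} failures")
```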

Via BizTech – How to Turn Storage Networks into Better Performers

  • Maintain Situational Awareness
  • Design for Performance and Availability
  • Determine Networked Server and Storage Patterns
  • Make Use of Applicable Technologies and Techniques

If RAID is alive, what to do with it?

If you are new to RAID, learn more about the past, present and future, keeping context in mind. Keeping context in mind means that there are different RAID levels and implementations for various environments. Not all RAID 0, 1, 1/0, 10, 2, 3, 4, 5, 6 or other variations (past, present and emerging) are the same for consumer vs. SOHO vs. SMB vs. SME vs. enterprise, nor are the usage cases. Some need performance for reads, others for writes, some need high capacity with low performance, using hardware or software. RAID rules of thumb are OK and useful; however, keep them in context with what you are doing as well as using.

What to do next?

Take some time to learn and ask questions, including what to use when, where, why and how, as well as whether an approach or recommendation is applicable to your needs. Check out the following links to read some extra perspectives about RAID and keep in mind, what might apply to the enterprise may not be relevant for consumer or SMB and vice versa.

Some advise needed on SSD’s and Raid (Via Spiceworks)
RAID 5 URE Rebuild Means The Sky Is Falling (Via BenchmarkReview)
Double drive failures in a RAID-10 configuration (Via SearchStorage)
Industry Trends and Perspectives: RAID Rebuild Rates (Via StorageIOblog)
RAID, IOPS and IO observations (Via StorageIOBlog)
RAID Relevance Revisited (Via StorageIOBlog)
HDDs Are Still Spinning (Rust Never Sleeps) (Via InfoStor)
When and Where to Use NAND Flash SSD for Virtual Servers (Via TheVirtualizationPractice)
What’s the best way to learn about RAID storage? (Via Spiceworks)
Design considerations for the host local FVP architecture (Via Frank Denneman)
Some basic RAID fundamentals and definitions (Via SearchStorage)
Can RAID extend nand flash SSD life? (Via StorageIOBlog)
I/O Performance Issues and Impacts on Time-Sensitive Applications (Via CMG)
The original RAID white paper (PDF) by Patterson, Gibson and Katz which, while decades old, still provides a basis, foundation and some history
Storage Interview Series (Via Infortrend)
Different RAID methods (Via RAID Recovery Guide)
A good RAID tutorial (Via TheGeekStuff)
Basics of RAID explained (Via ZDNet)
RAID and IOPs (Via VMware Communities)

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

What is my favorite or preferred RAID level?

That depends; for some things it's RAID 1, for others RAID 10, yet for others RAID 4, 5, 6 or DP, and yet other situations could be a fit for RAID 0 or erasure codes and FEC. Instead of being focused on just one or two RAID levels as the solution for different problems, I prefer to look at the environment (consumer, SOHO, small or large SMB, SME, enterprise), type of usage (primary or secondary or data protection), performance characteristics, reads, writes, type and number of drives among other factors. What might be a fit for one environment would not be a fit for others. Thus my preferred RAID level, along with where it is implemented, is the one that meets the given situation. However, also keep in mind tying RAID into part of an overall data protection strategy; remember, RAID is not a replacement for backup.

What this all means

Like other technologies that have been declared dead for years or decades, aka the zombie technologies (e.g. dead yet still alive), RAID continues to be used while the technology evolves. There are specific products, implementations or even RAID levels that have faded away, or are declining in some environments, yet alive in others. RAID and its variations are still alive; however how it is used or deployed in conjunction with other technologies is also evolving.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

DIY converged server software defined storage on a budget using Lenovo TS140

Attention DIY Converged Server Storage Bargain Shoppers

Software defined storage on a budget with Lenovo TS140


Recently I put together a two-part series of some server storage I/O items to get a geek for a gift (read part I here and part II here) that also contain items that can be used for accessorizing servers such as the Lenovo ThinkServer TS140.

Image via Lenovo.com

Likewise, I have done reviews of the Lenovo ThinkServer TS140 in the past, which I liked enough to buy some (read the reviews here and here), along with a review of the larger TD340 here.

Why is this of interest

Do you need or want to do a Do It Yourself (DIY) build of a small server compute cluster, or a software defined storage cluster (e.g. scale-out), or perhaps a converged storage for VMware VSAN, Microsoft SOFS or something else?

Do you need a new server, a second or third server, or to expand a cluster, create a lab or similar, and want the ability to tailor your system without shopping for a motherboard, enclosure, power supply and so forth?

Are you a virtualization or software defined person looking to create a small VMware Virtual SAN (VSAN) needing three or more servers to build a proof of concept or personal lab system?

Then the TS140 could be a fit for you.

Image via StorageIOlabs (see the review)

Why the Lenovo TS140 now?

Recently I have seen a lot of traffic on my site from people viewing my reviews of the Lenovo TS140, of which I have a few. In addition, I have received questions from people via the comments section as well as elsewhere about the TS140, and while shopping at Amazon.com for some other things, I noticed that there were some good value deals on different TS140 models.

I tend to buy the TS140 models that are bare bones, having an enclosure, power supply and fan, CD/DVD, USB ports, processor and a minimal amount of DRAM memory. For processors, mine have the Intel E3-1225 v3, which is quad-core and has various virtualization assist features (e.g. good for VMware and other hypervisors).

What I saw on Amazon the other day (also elsewhere) were some Intel i3-4130 dual-core based systems (these do not have all the virtualization features, just the basics) in a bare configuration (e.g. no Hard Disk Drive (HDD); 4GB DRAM, processor, motherboard, power supply and fan, LAN port and USB) with a price of around $220 USD (your price may vary depending on timing, venue, prime or other membership and other factors). Not bad for a system that you can tailor to your needs. However, what also caught my eye were the TS140 models that have the Intel E3-1225 v3 (e.g. quad-core, 3.2GHz) processor matching the others I have, with a price of around $330 USD including shipping (your price will vary depending on venue and other factors).

What are some things to be aware of?

Some caveats of this solution approach include:

  • There are probably other similar types of servers, either by price, performance, or similar
  • Compare apples to apples, e.g. same or better processor, memory, OS, PCIe speed and type of slots, LAN ports
  • Not as robust of a solution as those you can find costing tens of thousands of dollars (or more)
  • A DIY system which means you select the other hardware pieces and handle the service and support of them
  • Hardware platform approach where you choose and supply your software of choice
  • For entry-level environments that have the floor space to accommodate towers vs. rack-mount or other alternatives
  • Software agnostic, being basically an empty server chassis (with power supply, motherboard, PCIe slots and other essentials)
  • Possible candidate for smaller SMB (Small Medium Business), ROBO (Remote Office Branch Office), SOHO (Small Office Home Office) or labs that are looking for DIY
  • A starting place and stimulus for thinking about doing different things

What could you do with this building block (e.g. server)

Create a single or multi-server based system for

  • Virtual Server Infrastructure (VSI) including KVM, Microsoft Hyper-V, VMware ESXi, Xen among others
  • Object storage
  • Software Defined Storage including Datacore, Microsoft SOFS, OpenStack, Starwind, VMware VSAN, various XFS and ZFS among others
  • Private or hybrid cloud including using OpenStack among other software tools
  • Create a Hadoop big data analytics cluster or grid
  • Establish a video or media server, use for gaming or a backup (data protection) server
  • Update or expand your lab and test environment
  • General purpose SMB, ROBO or SOHO single or clustered server

VMware VSAN server storageIO example

What you need to know

Like some other servers in this class, you need to pay attention to what it is that you are ordering, check out the various reviews, comments and questions, and verify the make, model and configuration. For example, what is included and what is not included, the warranty and the return policy among other things. Some of the TS140 models do not include an HDD, OS, keyboard, monitor or mouse, and they come with different types of processors and memory. Not all the processors are the same, so pay attention: visit the Intel ARK site to look up a specific processor configuration to see if it fits your needs, and visit the hardware compatibility list (HCL) for the software that you are planning to use. Note that these should be best practices regardless of make, model, type or vendor for server, storage and I/O networking hardware and software.

What you will need

This list assumes that you have obtained a model without an HDD, keyboard, video, mouse or operating system (OS) installed:

  • Update your BIOS if applicable, check the Lenovo site
  • Enable virtualization and other advanced features via your BIOS
  • Software such as an Operating System (OS), hypervisor or other distribution (load via USB or CD/DVD if present)
  • SSD, SSHD/HHDD, HDD or USB flash drive for installing OS or other software
  • Keyboard, video, mouse (or a KVM switch)

What you might want to add (have it your way)

  • Keyboard, video, mouse or a KVM switch (See gifts for a geek here)
  • Additional memory
  • Graphics card, GPU or PCIe riser
  • Additional SSD, SSHD/HHDD or HDD for storage
  • Extra storage I/O and networking ports

Extra networking ports

You can easily add some GbE (or faster) ports, including using the PCIe x1 slot, or using one of the other slots for a quad-port GbE (or faster) card, not to mention getting some InfiniBand single or dual port cards such as the Mellanox ConnectX-2 or ConnectX-3 that support QDR and can run in IBA or 10GbE modes. If you only have two or three servers in a cluster, grid or ring configuration you can run point-to-point topologies using InfiniBand (and some other network interfaces) without using a switch, however you decide if you need or want switched or non-switched (I have a switch). Note that with VMware (and perhaps other hypervisors or OS) you may need to update the drivers for the Realtek GbE LAN on Motherboard port (see links below).

Extra storage ports

For extra storage capacity (and performance) you can easily add PCIe Gen 2 or Gen 3 HBAs (SAS, SATA, FC, FCoE, CNA, UTA, IBA for SRP, etc.) or RAID cards among others. Depending on your choice of cards, you can then attach to more internal storage, external storage or some combination with different adapters, cables, interposers and connectivity options. For example I have used TS140s with PCIe Gen 3 12Gbs SAS HBAs attached to 12Gbs SAS SSDs (and HDDs) with the ability to drive performance to see what those devices are capable of doing.

TS140 Hardware Defined My Way

As an example of how a TS140 can be configured, start with one of the base E3-1225 v3 models with 4GB RAM and no HDD (e.g. around $330 USD, your price will vary), add a 4TB Seagate HDD (or two or three) for around $140 USD each (your price will vary), and add a 480GB SATA SSD for around $340 USD (your price will vary), with those attached to the internal SATA ports. To bump up network performance, how about a Mellanox ConnectX-2 dual-port QDR IBA/10GbE card for around $140 USD (your price will vary), plus around $65 USD for a QSFP cable (your price will vary), and some extra memory (use what you have or shop around), and you have a platform ready to go for around $1,000 USD. Add some more internal or external disks, bump up the memory, put in some extra network adapters and your price will go up a bit, however think about what you can have for a robust, not so little system. For you VMware vgeeks, think about the proof of concept VSAN that you can put together, granted you will have to do some DIY items.
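To make the dollars easier to follow, here is a quick back-of-the-envelope tally (a minimal Python sketch using the approximate prices mentioned above; your actual prices will vary by venue and timing):

```python
# Quick parts tally for the example TS140 build (approximate prices; yours will vary).
parts_usd = {
    "TS140 base (E3-1225 v3, 4GB RAM, no HDD)":     330,
    "4TB Seagate HDD (each)":                       140,
    "480GB SATA SSD":                               340,
    "Mellanox ConnectX-2 dual-port QDR IBA/10GbE":  140,
    "QSFP cable":                                    65,
}

for item, price in parts_usd.items():
    print(f"{item:<48} ${price:>4}")
print(f"{'Total (before extra memory, tax or shipping)':<48} ${sum(parts_usd.values()):>4}")
# Roughly $1,015 USD as listed, i.e. in the ballpark of $1,000 before extras.
```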

Some TS140 resources

Lenovo TS140 resources include

  • TS140 StorageIOlab review (here and here)
  • TS140 Lenovo ordering website
  • TS140 Data and Spec Sheet (PDF here)
  • Lenovo ThinkServer TS140 Manual (PDF here) and (PDF here)
  • Intel E3-1200 v3 processors capabilities (Web page here)
  • Enabling Virtualization Technology (VT) in TS140 BIOS (Press F1) (Read here)
  • Enabling Intel NIC (82579LM) GbE with VMware (Link to user forum and a blog site here)

Image via Lenovo.com

What this all means

Like many servers in its category (price, capabilities, abilities, packaging) you can do a lot of different things with them, as well as hardware define them with accessories, or use your own software. Depending on how you end up hardware defining the TS140 with extra memory, HDDs, SSDs, adapters or other accessories and software, your cost will vary. However you can also put together a pretty robust system without breaking your budget while meeting different needs.

Is this for everybody? Nope

Is this for more than a lab, experimental, hobbyist or gamer system? Sure, with some caveats. Is this an apples to apples comparison vs. some other solutions including VSANs? Nope, not even close, maybe apples to oranges.

Do I like the TS140? Yup, starting with a review I did about a year ago, I liked it so much I bought one, then another, then some more.

Are these the only servers I have, use or like? Nope, I also have systems from HP and Dell, as well as test drive and review others.

Why do I like the TS140? It’s a value for some things, which means that while affordable (not to be confused with cheap) it has the features, scalability and ability to be hardware defined for what I want or need to use them as, along with being software defined to be different things. Key for me is the PCIe Gen 3 support with multiple slots (and types of slots), a reasonable amount of memory, internal housing for 3.5" and 2.5" drives that can attach to on-board SATA ports, and a media device (CD/DVD) if needed, or remove it to use the space for more HDDs and SSDs. In other words, it’s a platform where instead of shopping for a motherboard, an enclosure, power supply, processor and related things, I get the basics, then configure and reconfigure as needed.

Another reason I like the TS140 is that I get to have the server basically my way, in that I do not have to order it with a minimum number of HDDs, or with an OS, more memory than needed or other things that I may or may not be able to use. Granted I need to supply the extra memory, HDDs, SSDs, PCIe adapters and network ports along with software, however for me that’s not too much of an issue.

What don’t I like about the TS140? You can read more about my thoughts on the TS140 in my review here, or its bigger sibling the TD340 here, however I would like to see more memory slots for scaling up. Granted for what these cost, it’s just as easy to scale-out and after all, that’s what a lot of software defined storage prefers these days (e.g. scale-out).

The TS140 is a good platform for many things, granted not for everything; that’s why, like storage, networking and other technologies, there are different server options for various needs. Exercise caution when doing apples to oranges comparisons on price alone; compare what you are getting in terms of processor type (and its functionality), expandable memory, PCIe speed, type and number of slots, LAN connectivity and other features to meet your needs or requirements. Also keep in mind that some systems that are more expensive might include a keyboard or an HDD with an OS installed; if you can use those components, then they have value and should be factored into your cost, benefit and return on investment.

And yes, I just added a few more TS140s that join other recent additions to the server storageIO lab resources…

Anybody want to guess what I will be playing with among other things during the upcoming holiday season?

Ok, nuff said, for now…

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

StorageIO Out and About Update – VMworld 2014

Here is a quick video montage or mash-up if you prefer that Cory Peden (aka the Server and StorageIO Intern @Studentof_IT) put together using some video that we recorded while at VMworld 2014 in San Francisco. In this YouTube video we take a quick tour around the expo hall to see who as well as what we ran into while out and about.

VMworld 2014 StorageIO Update
Click on above image to view video

For those of you who were at VMworld 2014 the video (click above image) will give you a quick déjà vu of the sights and sounds, while for those who were not there, see what you missed so you can plan for next year. Watch for appearances from Gina Minks (@Gminks) aka Gina Rosenthal (of BackupU) and Michael (not Dell) of Dell Data Protection, and Luigi Danakos (@Nerdblurt) of HP Data Protection who lost his voice (tweet Luigi if you can help him find his voice). With Luigi we were able to get in a quick game of buzzword bingo before catching up with Marc Farley (@Gofarley) and John Howarth of Quaddra Software. Marc and John talk about their new solution from Quaddra which will enable searching and discovering data across different storage systems and technologies.

Other visits include a quick look at an EVO:Rail from Dell, along with Docker for Smarties overview with Nathan LeClaire (@upthecyberpunks) of Docker (click here to watch the extended interview with Nathan).

Docker for smarties

Check out the conversation with Max Kolomyeytsev of StarWind Software (@starwindsan) before we get interrupted by a sales person. During our walk about, we also bump into Mark Peters (@englishmdp) of ESG facing off video camera to video camera.

Watch for other things including rack cabinets that look like compute servers yet that have a large video screen so they can be software defined for different demo purposes.

virtual software defined server

Watch for more Server and StorageIO Industry Trend Perspective podcasts, videos as well as out and about updates soon, meanwhile check out others here.

Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review

This is the first post of a two part series, read the second post here.

Earlier this year I had the opportunity to test drive some Seagate 1200 12Gbs Enterprise SAS SSD’s as a follow-up to some earlier activity trying their Enterprise TurboBoost Drives. Disclosure: Seagate has been a StorageIO client and was also the sponsor of this white paper and associated proof-points mentioned in this post.

The question to ask yourself is not if flash Solid State Device (SSD) technologies are in your future. Instead the questions are when, where, using what, how to configure and related themes. SSD, including traditional DRAM and NAND flash-based technologies, is like real estate where location matters; however, there are different types of properties to meet various needs. This means leveraging different types of NAND flash SSD technologies in different locations in a complementary and cooperative (aka hybrid) way. For example NAND flash SSD as part of an enterprise tiered storage strategy can be implemented server-side using PCIe cards, SAS and SATA drives as targets or as cache along with software, as well as leveraging SSD devices in storage systems or appliances.

Seagate 1200 SSD
Seagate 1200 Enterprise SAS 12Gbs SSD Image via Seagate.com

Another place where NAND flash can be found, and which complements SSD devices, is in so-called Solid State Hybrid Drives (SSHD) or Hybrid Hard Disk Drives (HHDD), including a new generation that accelerates writes as well as reads, such as those Seagate refers to as Enterprise TurboBoost. The Enterprise TurboBoost drives (view the companion StorageIO Lab TurboBoost review white paper here) were previously known as Solid State Hybrid Drives (SSHD) or Hybrid Hard Disk Drives (HHDD). Read more about TurboBoost here and here.

The best server and storage I/O is the one you do not have to do

Keep in mind that the best server or storage I/O is the one that you do not have to do, with the second best being the one with the least overhead resolved as close to the processor (compute) as possible or practical. The following figure shows that the best place to resolve server and storage I/O is as close to the compute processor as possible, however only a finite amount of storage memory can be located there. This is where the server memory and storage I/O hierarchy comes into play, which is also often thought of in the context of tiered storage balancing performance and availability with cost and architectural limits.

Also shown is locality of reference, which refers to how close data is to where it is being used and includes cache effectiveness or buffering. Hence a small amount of flash and DRAM cache in the right location can have a large benefit. Now if you can afford it, install as much DRAM along with flash storage as possible; however if you are like most organizations with finite budgets yet server and storage I/O challenges, then deploy a tiered flash storage strategy.

flash cache locality of reference
Server memory storage I/O hierarchy, locality of reference
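To put the locality of reference idea into rough numbers, here is a minimal sketch (Python, using made-up order-of-magnitude latency figures rather than anything measured in the StorageIO Labs) of how the average access time drops as more I/O gets resolved in a cache closer to the processor:

```python
# Locality of reference sketch: average access time vs. cache hit rate.
# Latency figures are illustrative order-of-magnitude examples, not measurements.
def effective_latency_us(hit_rate, cache_latency_us, backend_latency_us):
    """Average access time when a fraction of I/Os are resolved in a closer, faster tier."""
    return hit_rate * cache_latency_us + (1.0 - hit_rate) * backend_latency_us

flash_us, hdd_us = 100.0, 8000.0   # rough figures: flash SSD vs. 7.2K HDD random read

for hit in (0.0, 0.5, 0.8, 0.95):
    avg = effective_latency_us(hit, flash_us, hdd_us)
    print(f"flash hit rate {hit:4.0%} -> ~{avg:7.1f} us average access time")
# Even an 80% hit rate in a relatively small flash cache pulls the average far
# below the raw HDD latency, which is the point about location and locality.
```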

Seagate 1200 12Gbs Enterprise SAS SSD’s

Back to the Seagate 1200 12Gbs Enterprise SAS SSD which is covered in this StorageIO Industry Trends Perspective thought leadership white paper. The focus of the white paper is to look at how the Seagate 1200 Enterprise class SSD’s and 12Gbps SAS address current and next generation tiered storage for virtual, cloud, traditional Little and Big Data infrastructure environments.

Seagate 1200 Enterprise SSD

This includes providing proof points running various workloads including Database TPC-B, TPC-E and Microsoft Exchange in the StorageIO Labs along with cache software comparing SSD, SSHD and different HDD’s including 12Gbs SAS 6TB near-line high-capacity drives.

Seagate 1200 Enterprise SSD Proof Points

The proof points in this white paper are from an applications focus perspective representing more of an end-to-end real-world situation. While they are not included in this white paper, StorageIO has run traditional storage building-block focused workloads, which can be found at StorageIOblog (Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?). These include tools such as Iometer, iorate and vdbench among others for various I/O sizes, mixed, random, sequential, reads and writes, along with "hot-band" across different numbers of threads (concurrent users). "Hot-band" is part of the SNIA Emerald energy effectiveness metrics for looking at sustained storage performance using tools such as vdbench. Read more about various server and storage I/O benchmarking tools and techniques here.
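For those who want to tinker before firing up the heavier tools, the following is a bare-bones sketch of the kind of random-read measurement that Iometer, iorate, vdbench or fio do far more thoroughly. The file path, I/O size and duration are arbitrary assumptions, and unless you bypass or account for the operating system page cache you will mostly be measuring memory speed; treat it as an illustration of the moving pieces (I/O size, alignment, duration, threads), not a replacement for those tools:

```python
# A bare-bones random-read measurement sketch (illustration only; see caveats above).
import os, random, time

path = "/tmp/storageio_testfile.bin"   # hypothetical test file location
io_size = 8 * 1024                     # 8KB reads; vary to see I/O size vs. IOPS trade-offs
duration = 10                          # seconds to run

if not os.path.exists(path):           # create a ~1GB test file if one is not present
    with open(path, "wb") as f:
        for _ in range(1024):
            f.write(os.urandom(1 << 20))

size = os.path.getsize(path)
fd = os.open(path, os.O_RDONLY)
ios, start = 0, time.time()
while time.time() - start < duration:
    offset = random.randrange(0, size - io_size)
    offset -= offset % io_size         # align the read to the I/O size
    os.pread(fd, io_size, offset)      # POSIX only; single-threaded, buffered reads
    ios += 1
os.close(fd)

elapsed = time.time() - start
print(f"{ios / elapsed:.0f} IOPS, ~{1000 * elapsed / ios:.3f} ms average latency "
      f"(single thread, {io_size} byte random reads)")
```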

For the following series of proof-points (TPC-B, TPC-E and Exchange) a system under test (SUT) consisted of a physical server (described with the proof-points) configured with VMware ESXi along with guests virtual machines (VMs) configured to do the storage I/O workload. Other servers were used in the case of TPC workloads as application transactional requester to drive the SQL Server database and resulting server storage I/O workload. VMware was used in the proof-points to reflect a common industry trend of using virtual server infrastructures (VSI) supporting applications including database, email among others. For the proof-point scenarios, the SUT along with storage system device under test were dedicated to that scenario (e.g. no other workload running) unless otherwise noted.

Server Storage I/O config
Server Storage I/O configuration for proof-points

Microsoft Exchange Email proof-point configuration

For this proof-point, Microsoft Jetstress Exchange performance workloads were run with the Exchange Database (EDB file) placed on each of the different devices under test, with various metrics shown including activity rates and response time for reads as well as writes. For the Exchange testing, the EDB was placed on the device being tested while its log files were placed on a separate Seagate 400GB Enterprise 12Gbps SAS SSD.

Test configuration: Seagate 1200 400GB 2.5” SSD (ST400FM0073) 12Gbps SAS, 600GB 2.5” Enterprise 15K with TurboBoost™ (ST600MX) 6Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6Gbps SAS, Seagate Enterprise Capacity Nearline (ST6000NM0014) 6TB 3.5” 7.2K RPM HDD 12Gbps SAS and 3TB 7.2K SATA HDD. The email server was hosted as a guest on VMware vSphere/ESXi V5.5 running Microsoft SBS2011 Service Pack 1 64-bit. The guest VM (VMware vSphere 5.5) resided on an SSD based datastore on a physical machine (host) with 14GB DRAM, quad CPU (4 x 3.192GHz) Intel E3-1225 v3, with LSI 9300 series 12Gbps SAS adapters in a PCIe Gen 3 slot, running Jetstress 2010. All devices being tested were Raw Device Mapped (RDM) where the EDB resided, on a separate datastore from the VM. Log file IOPs were handled via a separate SSD device, also persistent (no delayed writes). The EDB was 300GB and the workload ran for 8 hours.

Microsoft Exchange VMware SSD performance
Microsoft Exchange proof-points comparing various storage devices

TPC-B (Database, Data Warehouse, Batch updates) proof-point configuration

SSD’s are a good fit both for transaction database activity with reads and writes as well as for query-based decision support systems (DSS), data warehouse and big data analytics. The following are proof points of SSD capabilities for database activity. In addition to supporting database table files and objects, along with transaction journal logs, other uses include meta-data, import/export or other high-I/O and write intensive scenarios. Two database workload profiles were tested including batch update (write-intensive) and transactional. Activity involved running Transaction Performance Council (TPC) workloads TPC-B (batch update) and TPC-E (transaction/OLTP simulating a financial trading system) against Microsoft SQL Server 2012 databases. Each test simulation had the SQL Server database (MDF) on a different device with the transaction log file (LDF) on a separate SSD. TPC-B results for a single device are shown below.

TPC-B (write intensive) results below show how the TPS work being done (blue) increases from left to right (more is better) for various numbers of simulated users. Also shown on the same line for each amount of TPS work being done is the average latency in seconds (right to left) where lower is better. Results are shown from top to bottom for each group of users (100, 50, 20 and 1) for the different drives being tested. Note how the SSD device does more work at a lower response time vs. the traditional HDD's.
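As a side note on reading these charts, the number of simulated users, TPS and latency are related. A quick sanity check applies Little's Law; this is a sketch assuming a closed workload with no think time (a simplification of how TPC-B actually runs) and made-up numbers rather than the white paper results:

```python
# Little's Law sanity check: users (concurrency) = TPS x response time.
# Example numbers are made up for illustration, not taken from the white paper.
def implied_latency_ms(users, tps):
    """Average response time implied by a closed workload with no think time."""
    return 1000.0 * users / tps

for users, tps in ((1, 250), (20, 2500), (50, 3000), (100, 3200)):
    print(f"{users:>3} users at {tps:>5} TPS -> "
          f"~{implied_latency_ms(users, tps):6.1f} ms average latency")
# When adding users grows TPS more slowly than linearly (a device nearing saturation),
# average latency has to climb, which is the pattern these TPC-B and TPC-E charts show.
```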

Test configuration: Seagate 1200 400GB 2.5” SSD (ST400FM0073) 12Gbps SAS, 600GB 2.5” Enterprise 15K with TurboBoost™ (ST600MX) 6Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6Gbps SAS, Seagate Enterprise Capacity Nearline (ST6000NM0014) 6TB 3.5” 7.2K RPM HDD 12Gbps SAS and a 3TB Seagate 7.2K SATA HDD. The workload generator and virtual clients ran Windows 7 Ultimate 64-bit. The Microsoft SQL Server 2012 database was on a Windows 7 guest. The guest VM (VMware vSphere 5.5) had a dedicated 14GB DRAM, quad CPU (4 x 3.192GHz) Intel E3-1225 v3, with LSI 9300 series 12Gbps SAS adapters in a PCIe Gen 3 slot, along with TPC-B (www.tpc.org) workloads.

VM with guest OS along with SQL tempdb and masterdb resided on separate SSD based data store from devices being tested (e.g., where MDF (main database tables) and LDF (log file) resided). All devices being tested were Raw Device Mapped (RDM) independent persistent with database log file on a separate SSD device also persistent (no delayed writes) using VMware PVSCSI driver. MDF and LDF file sizes were 142GB and 26GB with scale factor of 10000, with each step running for one hour (10-minute preamble). Note that these proof-points DO NOT use VMware or any other third-party cache software or I/O acceleration tool technologies as those are covered later in a separate proof-point.

TPC-B sql server database SSD performance
TPC-B SQL Server database proof-points comparing various storage devices

TPC-E (Database, Financial Trading) proof-point configuration

The following shows results from the TPC-E test (OLTP/transactional workload) simulating a financial trading system. TPC-E is an industry standard workload that performs a mix of read and write database queries. Proof-points were performed with various numbers of users from 10, 20, 50 and 100 to determine Transactions Per Second (TPS, aka I/O rate) and response time in seconds. The TPC-E transactional results are shown for each device being tested across the different user workloads. The results show how TPC-E TPS work (blue) increases from left to right (more is better) for larger numbers of users, along with corresponding latency (green) that goes from right to left (less is better). The Seagate Enterprise 1200 SSD is shown at the top of the figure below with a red box around its results. Note how the SSD has a lower latency while doing more work compared to the other traditional HDD's.

Test configuration: Seagate 1200 400GB 2.5” SSD (ST400FM0073) 12Gbps SAS, 600GB 2.5” Enterprise 15K with TurboBoost™ (ST600MX) 6Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6Gbps SAS, Seagate Enterprise Capacity Nearline (ST6000NM0014) 6TB 3.5” 7.2K RPM HDD 12Gbps SAS and a 3TB Seagate 7.2K SATA HDD. The workload generator and virtual clients ran Windows 7 Ultimate 64-bit. The Microsoft SQL Server 2012 database was on a Windows 7 guest. The guest VM (VMware vSphere 5.5) had a dedicated 14GB DRAM, quad CPU (4 x 3.192GHz) Intel E3-1225 v3, with LSI 9300 series 12Gbps SAS adapters in a PCIe Gen 3 slot, along with TPC-E (www.tpc.org) workloads.

VM with guest OS along with SQL tempdb and masterdb resided on separate SSD based data store from devices being tested (e.g., where MDF (main database tables) and LDF (log file) resided). All devices being tested were Raw Device Mapped (RDM) independent persistent with database log file on a separate SSD device also persistent (no delayed writes) using VMware PVSCSI driver. MDF and LDF file sizes were 142GB and 26GB with scale factor of 10000, with each step running for one hour (10-minute preamble). Note that these proof-points DO NOT use VMware or any other third-party cache software or I/O acceleration tool technologies as those are covered later in a separate proof-point.

TPC-E sql server database SSD performance
TPC-E (Financial trading) SQL Server database proof-points comparing various storage devices

Continue reading part-two of this two-part series here including the virtual server storage I/O blender effect and solution.

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Part II: Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review

This is the second post of a two part series, read the first post here.

Earlier this year I had the opportunity to test drive some Seagate 1200 12Gbs Enterprise SAS SSD’s as a follow-up to some earlier activity trying their Enterprise TurboBoost Drives. Disclosure: Seagate has been a StorageIO client and was also the sponsor of this white paper and associated proof-points mentioned in this post.

The Server Storage I/O Blender Effect Bottleneck

The earlier proof-points focused on SSD as a target or storage device. In the following proof-points, the Seagate Enterprise 1200 SSD is used as a shared read cache (write-through). Using a write-through cache enables a given amount of SSD to give a performance benefit to other local and networked storage devices.

traditional server storage I/O
Non-virtualized servers with dedicated storage and I/O paths.

Aggregation causes aggravation with I/O bottlenecks because of consolidation using server virtualization. The following figure shows non-virtualized servers with their own dedicated physical machine (PM) and I/O resources. When various servers are virtualized and hosted by a common host (physical machine), their various workloads compete for I/O and other resources. In addition to competing for I/O performance resources, these different servers also tend to have diverse workloads.

virtual server storage I/O blender
Virtual server storage I/O blender bottleneck (aggregation causes aggravation)

The figure above shows aggregation causing aggravation, with the result being I/O bottlenecks as various applications' performance needs converge and compete with each other. The aggregation and consolidation result is a blend of random, sequential, large, small, read and write characteristics. These different storage I/O characteristics are mixed up and need to be handled by the underlying I/O capabilities of the physical machine and hypervisor. As a result, a common deployment for SSD, in addition to being a target device for storing data, is as a cache to cut bottlenecks for traditional spinning HDD's.
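To visualize the blender effect, here is a small conceptual sketch (with hypothetical VM names and block addresses) showing how individually sequential per-VM streams end up looking random by the time they reach the shared device:

```python
# The I/O blender in miniature: sequential per-VM streams become random at the device.
import itertools, random

def vm_stream(start_lba, count, step=8):
    """A single VM's nicely sequential I/O stream (consecutive 8-block requests)."""
    return [start_lba + i * step for i in range(count)]

vms = {
    "sql-vm":      vm_stream(0, 6),
    "exchange-vm": vm_stream(100000, 6),
    "file-vm":     vm_stream(500000, 6),
}

# Round-robin interleave at the hypervisor, plus some scheduling jitter.
blended = list(itertools.chain.from_iterable(zip(*vms.values())))
random.shuffle(blended)

for name, stream in vms.items():
    print(f"{name:<12} sequential stream: {stream}")
print(f"shared device sees (blended): {blended}")
```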

In the following figure a solution is shown introducing I/O caching with SSD to help mitigate or cut the effects of server consolidation causing performance aggravations.

Creating a server storage I/O blender bottleneck

Addressing the VMware Server Storage I/O blender with cache

Addressing server storage I/O blender and other bottlenecks

For these proof-points, the goal was to create an I/O bottleneck resulting from multiple VMs in a virtual server environment performing application work. In this proof-point, multiple competing VMs including a SQL Server 2012 database and an Exchange server shared the same underlying storage I/O infrastructure including HDD's. The 6TB (Enterprise Capacity) HDD was configured as a VMware datastore and allocated as virtual disks to the VMs. Workloads were then run concurrently to create an I/O bottleneck for both cached and non-cached results.

Server storage I/O with virtualization proof-point configuration topology

The following figure shows two sets of proof points, cached (top) and non-cached (bottom), with three workloads. The workloads consisted of concurrent Exchange and SQL Server 2012 (TPC-B and TPC-E) running on separate virtual machines (VMs), all on the same physical machine host (SUT), with database transactions being driven by two separate servers. In these proof-points, the application data was placed onto the 6TB SAS HDD to create a bottleneck, and a portion of the SSD was used as a cache. Note that the Virtunet cache software allows you to use part of an SSD device for cache with the balance used as a regular storage target should you want to do so.

If you have paid attention to the earlier proof-points, you might notice that some of the results below are not as good as those seen in the Exchange, TPC-B and TPC-E results above. The reason is simply that the earlier proof-points were run without competing workloads, and the database along with log or journal files were placed on separate drives for performance. In the following proof-point, as part of creating a server storage I/O blender bottleneck, the Exchange, TPC-B as well as TPC-E workloads were all running concurrently with all data on the 6TB drive (something you normally would not want to do).

storage I/O blender solved
Solving the VMware Server Storage I/O blender with cache

The cache and non-cached mixed workloads shown above prove how an SSD based read-cache can help to reduce I/O bottlenecks. This is an example of addressing the aggravation caused by aggregation of different competing workloads that are consolidated with server virtualization.

For the workloads shown above, all data (database tables and logs) were placed on VMware virtual disks created from a datastore using a single 7.2K 6TB 12Gbps SAS HDD (e.g. Seagate Enterprise Capacity).

The guest VM system disks, which included paging, applications and other data files, were virtual disks using a separate datastore mapped to a single 7.2K 1TB HDD. Each workload ran for eight hours with the TPC-B and TPC-E having 50 simulated users. For the TPC-B and TPC-E workloads, two separate servers were used to drive the transaction requests to the SQL Server 2012 database.

For the cached tests, a Seagate Enterprise 1200 400GB 12Gbps SAS SSD was used as the backing store for the cache software (Virtunet Systems Virtucache) that was installed and configured on the VMware host.

During the cached tests, the physical HDD for the data files (e.g. 6TB HDD) and system volumes (1TB HDD) were read cache enabled. All caching was disabled for the non-cached workloads.

Note that this was only a read cache, which has the side benefit of off-loading read activity, enabling the HDD to focus on writes or read-ahead. Also note that the combined TPC-E, TPC-B and Exchange databases, logs and associated files represented over 600GB of data; there was also the combined space and thus cache impact of the two system volumes and their data. This simple workload and configuration is representative of how SSD caching can complement high-capacity HDD's.
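For those who want to see the moving parts, below is a conceptual sketch of how a write-through read cache behaves; it illustrates the general technique only and is not the actual Virtunet Systems implementation:

```python
# Conceptual write-through read cache: reads served from flash when possible,
# writes always go through to the backing HDD so it never holds stale data.
class WriteThroughReadCache:
    def __init__(self, ssd, hdd, capacity_blocks):
        self.ssd, self.hdd, self.capacity = ssd, hdd, capacity_blocks

    def read(self, block):
        if block in self.ssd:                 # cache hit: served from flash
            return self.ssd[block]
        data = self.hdd[block]                # cache miss: read from the HDD...
        if len(self.ssd) < self.capacity:
            self.ssd[block] = data            # ...and populate the SSD cache
        return data

    def write(self, block, data):
        self.hdd[block] = data                # write-through: HDD is always updated
        if block in self.ssd:
            self.ssd[block] = data            # keep any cached copy coherent

# Usage: hot reads are offloaded to the SSD, leaving the high-capacity HDD
# with more time for writes and read-ahead.
hdd = {i: f"block-{i}" for i in range(10)}
cache = WriteThroughReadCache(ssd={}, hdd=hdd, capacity_blocks=4)
cache.read(3); cache.read(3)      # second read is a flash hit
cache.write(3, "updated")         # write goes through to HDD, cache stays coherent
print(cache.read(3))              # "updated"
```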

Seagate 6TB 12Gbs SAS high-capacity HDD

While the star and focus of this series of proof-points is the Seagate 1200 Enterprise 12Gbs SAS SSD, the caching software (Virtunet) and Enterprise TurboBoost drives also play key supporting and favorable roles. However the 6TB 12Gbs SAS high-capacity drive caught my attention from a couple of different perspectives. Certainly the space capacity was interesting, along with a 12Gbs SAS interface well suited for near-line, high-capacity and dense tiered storage environments. However for a high-capacity drive its performance is what really caught my attention, both in the standard Exchange, TPC-B and TPC-E workloads, as well as when combined with SSD and cache software.

This opens the door for a great combination of leveraging some amount of high-performance flash-based SSD (or TurboBoost drives) combined with cache software and high-capacity drives such as the 6TB device (Seagate now has larger versions available). Something else to mention is that the 6TB HDD, in addition to being available with either a 12Gbs SAS, 6Gbs SAS or 6Gbs SATA interface, also has enhanced durability with a read bit error rate of 1 in 10^15 (e.g. on average one unrecoverable read error per 10^15 bits read) and an AFR (annual failure rate) of 0.63% (see more speeds and feeds here). Hence if you are concerned about using large capacity HDD's and them failing, make sure you go with those that have a better read bit error rating (e.g. 1 in 10^15 or better) and a low AFR, which are more common with enterprise class vs. lower cost commodity or workstation drives. Note that these high-capacity enterprise HDD's are also available with Self-Encrypting Drive (SED) options.
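To put that read bit error rate into perspective, here is a quick back-of-the-envelope sketch (assuming the 1 in 10^15 rating cited above, and a 1 in 10^14 rating for comparison as is common on lower-cost drives; actual drive behavior will vary):

```python
# Back-of-the-envelope: expected unrecoverable read errors when reading a full drive.
capacity_bytes = 6e12                       # 6TB drive
bits_read = capacity_bytes * 8              # reading the entire drive once

for label, errors_per_bit in (("1 per 10^15 (enterprise rating)", 1e-15),
                              ("1 per 10^14 (typical lower-cost rating)", 1e-14)):
    expected = bits_read * errors_per_bit
    print(f"{label}: ~{expected:.2f} expected unrecoverable read errors per full-drive read")
# Roughly 0.05 vs. 0.5 expected errors -- an order of magnitude difference that matters
# when scrubbing, rebuilding or restoring a large-capacity drive.
```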

Summary

Read more in this StorageIO Industry Trends and Perspective (ITP) white paper on the Seagate 1200 12Gbs SAS SSD's (compliments of Seagate) and visit the Seagate Enterprise 1200 12Gbs SAS SSD page here. Moving forward there is the notion that flash SSD will be everywhere. There is a difference between all data being on flash SSD vs. having some amount of SSD involved in preserving, serving and protecting (storing) information.

Key themes to keep in mind include:

  • Aggregation can cause aggravation which SSD can alleviate
  • A relative small amount of flash SSD in the right place can go a long way
  • Fast flash storage needs fast server storage I/O access hardware and software
  • Locality of reference with data close to applications is a performance enabler
  • Flash SSD everywhere does not mean everything has to be SSD based
  • Having some amount of flash in different places is important for flash everywhere
  • Different applications have various performance characteristics
  • SSD as a storage device or persistent cache can speed up IOPs and bandwidth

Flash and SSD are in your future; this comes back to the questions of how much flash SSD you need, along with where to put it, how to use it and when.

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Lenovo ThinkServer TD340 StorageIO lab Review

Storage I/O trends

Lenovo ThinkServer TD340 Server and StorageIO lab Review

Earlier this year I did a review of the Lenovo ThinkServer TS140 in the StorageIO Labs (see the review here), in fact I ended up buying a TS140 after the review, and a few months back picked up another one. This StorageIOlab review looks at the Lenovo ThinkServer TD340 Tower Server which, besides having a larger model number than the TS140, also has a lot more capabilities (server compute, memory, I/O slots and internal hot-swap storage bays). Pricing varies depending on specific configuration options, however at the time of this post Lenovo was advertising a starting price of $1,509 USD for a specific configuration here. You will need to select different options to determine your specific cost.

The TD340 is one of the servers that Lenovo had prior to its acquisition of the IBM x86 server business that you can read about here. Note that the Lenovo acquisition of the IBM xSeries business group began in early October 2014 and is expected to be completed across different countries in early 2015. Read more about the IBM xSeries business unit here, here and here.

The Lenovo TD340 Experience

Let's start with the overall experience, which was very easy other than deciding what make and model to try. This included first answering some questions to get the process moving, and agreeing to keep the equipment safe, secure and insured as well as not damaging anything. Part of the process also involved answering some configuration related questions, and shortly thereafter a large box from Lenovo arrived.

TD340 is ready for use
TD340 with Keyboard and Mouse (Monitor and keyboard not included)

One of the reasons I have a photo of the TD340 on a desk is that I initially put it in an office environment similar to what I did with the TS140 as Lenovo claimed it would be quiet enough to do so. I was not surprised and indeed the TD340 is quiet enough to be used where you would normally find a workstation or mini-tower. By being so quiet the TD340 is a good fit for environments that need a server that has to go into an office environment as opposed to a server or networking room.

Welcome to the TD340
Lenovo ThinkServer Setup

TD340 Setup
Lenovo TD340 as tested in BIOS setup, note the dual Intel Xeon E5-2420 v2 processors

TD340 as tested

TD340 Selfie of whats inside
TD340 "Selfie" with 4 x 8GB DDR3 DIMM (32GB) and PCIe slots (empty)

TD340 disk drive bays
TD340 internal drive hot-swap bays

Speeds and Feeds

The TD340 that I tested was a Machine type 7087 model 002RUX which included 4 x 16GB DIMMs and both processor sockets occupied.

You can view the Lenovo TD340 data sheet with more speeds and feeds here, however the following is a summary.

  • Operating systems support include various Windows Servers (2008-2012 R2), SUSE, RHEL, Citrix XenServer and VMware ESXi
  • Form factor is 5U tower with weight starting at 62 pounds depending on how configured
  • Processors include support for up to two (2) Intel E5-2400 v2 series
  • Memory includes 12 DDR3 DRAM DIMM slots (LV RDIMM and UDIMM) for up to 192GB.
  • Expansion slots vary depending on whether one or two CPU sockets are populated. With a single CPU socket installed there is 1 x PCIe Gen3 FH/HL x8 mechanical, x4 electrical, 1 x PCIe Gen3 FH/HL x16 mechanical, x16 electrical and a single PCI 32-bit/33 MHz FH/HL slot. With two CPU sockets installed extra PCIe slots are enabled, including one PCIe Gen3 FH/HL x8 mechanical, x4 electrical, one PCIe Gen3 FH/HL x16 mechanical, x16 electrical, three PCIe Gen3 FH/HL x8 mechanical, x8 electrical and a single PCI 5V 32-bit/33 MHz FH/HL slot
  • Two 5.25” media bays for CD or DVDs or other devices
  • Integrated ThinkServer RAID (0/1/10/5) with optional RAID adapter models
  • Internal storage varies depending on model including up to eight (8) x 3.5” hot swap drives or 16 x 2.5” hot swap drives (HDD’s or SSDs).
  • Storage space capacity varies by the type and size of the drives being used.
  • Networking interfaces include two (2) x GbE
  • Power supply options include a single 625 watt or 800 watt unit, or 1+1 redundant hot-swap 800 watt units, along with five fixed fans.
  • Management tools include ThinkServer Management Module and diagnostics

What Did I do with the TD340

After initial check out in an office type environment, I moved the TD340 into the lab area where it joined other servers to be used for various things.

Some of those activities included using the Windows Server 2012 Essentials along with associated admin activities as well as installing VMware ESXi 5.5.

TD340 is ready for use
TD340 with Keyboard and Mouse (Monitor and keyboard not included)

What I liked

Unbelievably quiet, which may not seem like a big deal, however if you are looking to deploy a server or system into a small office workspace, this becomes an important consideration. Otoh, if you are a power user and want a robust server that can be installed into a home media entertainment system, well, this might be a nice to have consideration ;). Speaking of I/O slots, naturally I'm interested in server storage I/O, so having multiple slots is a must have, along with a processor that is multi-core (pretty much standard these days) and has VT and EP for supporting VMware (these were disabled in the BIOS however that was an easy fix).

What I did not like

The only thing I did not like was that I ran into a compatibility issue trying to use an LSI 9300 series 12Gb SAS HBA, which Lenovo is aware of and perhaps has even addressed by now. What I ran into is that the adapters work, however I was not able to get the full performance out of them compared to other systems, including my slower Lenovo TS140s.

Summary

Overall I give Lenovo and the TD340 a "B+", which would have been an "A" had I not gotten myself into a BIOS situation, or had I been able to run the 12Gbps SAS PCIe Gen 3 cards at full speed. Likewise the Lenovo service and support also helped to improve the experience. Otoh, if you are simply going to use the TD340 in a normal out of the box mode without customizing it to add your own adapters or install your own operating system or hypervisors (beyond those that are supplied as part of the install setup tool kit), you may have an "A" or "A+" experience with the TD340.

Would I recommend the TD340 to others? Yes for those who need this type and class of server for Windows, *nix, Hyper-V or VMware environments.

Would I buy a TD340 for myself? Maybe if that is the size and type of system I need, however I have my eye on something bigger. On the other hand for those who need a good value server for a SMB or ROBO environment with room to grow, the TD340 should be on your shopping list to compare with other solutions.

Disclosure: Thanks to the folks at Lenovo for sending and making the TD340 available for review and a hands on test experience including covering the cost of shipping both ways (the unit should now be back in your possession). Thus this is not a sponsored post as Lenovo is not paying for this (they did loan the server and covered two-way shipping), nor am I paying them, however I have bought some of their servers in the past for the StorageIOLab environment that are companions to some Dell and HP servers that I have also purchased.

Ok, nuff said

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

VMware Cisco EMC VCE Zen and now server storage I/O convergence

Storage I/O trends

VMware Cisco EMC VCE Zen and now server storage I/O convergence

In case you have not heard, the joint initiative (JV) founded in the fall of 2009 between Intel, VMware, Cisco and EMC called VCE had a change of ownership today.

Well, kind of…

Who is VCE and what’s this Zen stuff?

For those not familiar or who need a recap, VCE was formed to deliver converged server, storage and I/O networking hardware and software solutions combining technologies from its investors, resulting in solutions called vBlocks.

The major investors were Cisco, who provides the converged servers and I/O networking along with associated management tools, and EMC, who provides the storage systems along with their associated management tools. Minority investors include VMware (which is majority owned by EMC), who provides the server virtualization aka software defined data center management tools, and Intel, whose processor chip technologies are used in the vBlocks. What has changed from Zen (e.g. yesterday or in the past) and now is that Cisco has sold the majority (they are retaining about 10%) of its investment ownership in VCE to EMC. Learn more about VCE, their solutions and valueware in this post here (VCE revisited, now and Zen).

Activist activating activity?

EMC pulling VCE in-house, which should prop up its own internal sales figures by perhaps a few billion USD within a year or so (if not sooner), is not as appealing to activist investors who want results now, such as selling off parts of the company (e.g. EMC, VMware or other assets) or the entire company.

However EMC has been under pressure from activist shareholder Elliott Management to divest or sell off portions of its business such as VMware so that the investors (including the activist) can make more money. For example there have been recent stories about EMC looking to sell or merge with the likes of HP (who is now buying back shares and splitting up its own business) among others, which certainly must make the activist investors happy.

However it appears that the activist investors who want to see things sold to make money are not happy with EMC buying or investing.

Via Bloomberg

“The last thing on investors’ minds is the future of VCE,” Daniel Ives, an analyst with FBR Capital Markets, wrote in a note today. “EMC has a fire in its house right now and the company appears focused on painting its bedroom (e.g. VCE), while the Street wants a resolution on the strategic ownership situation sooner rather than later.”

Read more at Bloomberg

What's this EMC Federation stuff?

Note that EMC has organized itself into a federation that consists of EMC Information Infrastructure (EMCII), or what you might know as the traditional EMC storage and related software solutions, VMware, Pivotal and RSA. Also note that each of those federated companies has its own CEO as well as holdings or ownership of other companies. However all report to a common federated leadership, aka EMC. Thus when you hear EMC, depending on the context it could mean the federation mother ship which controls the individual companies, or it could refer to EMCII, aka the traditional EMC. Click here to learn more about the EMC federation.

Converging Markets and Opportunities

Looking beyond near-term or quick gains, EMC could simply be doing something others do to take ownership and control over certain things while reducing the complexities associated with joint initiatives. For example, with EMC and Cisco in a close partnership with VCE, both parties have been free to explore and take part in other joint initiatives, such as Cisco with EMC competitors NetApp and HDS among others. Otoh EMC partners with Arista for networking, not to mention that via VMware it acquired the virtual networking (aka software defined networking) company Nicira, now called NSX.

server and storage I/O road map to convergence

EMC is also in a partnership with Lenovo for developing servers to be used by EMC for various platforms to support storage, data and information services while shifting the lower-end SMB storage offerings such as Iomega to the Lenovo channel.

Note that Lenovo is in the process of absorbing the IBM xSeries (e.g. x86 based) business unit, a transaction that started closing earlier in October (and will take several months to completely close in all countries around the world). For its part Cisco is also partnering with hyper-converged solution provider Simplivity, while EMC has announced a statement of direction to bring to market its own hyper-converged platform by the end of the year. For those not familiar, hyper-converged solutions are simply the next evolution of converged or pre-bundled turnkey systems (some of you might have just had a déjà vu moment) that today tend to be targeted at SMBs and ROBOs, however they are also used for targeted applications such as VDI in larger environments.

Storage I/O trends

What does this have to do with VCE?

If EMC is about to release a hyper-converged solution by year-end, as its statements of direction indicate, to compete head-on with those from Nutanix, Simplivity and Tintri, as well as perhaps to a lesser extent VMware's EVO:Rail, then having more control over VCE means reducing if not eliminating complexity around vBlocks, which are Cisco based with EMC storage, vs. whatever EMC brings to market for hyper-converged. In the past under the VCE initiative storage was limited to EMC, servers along with networking came from Cisco and hypervisors from VMware, however what happens in the future remains to be seen.

Does this mean EMC is moving even more into servers than just virtual servers?

Tough to say, as EMC cannot afford to have its sales force lose focus on its traditional core products while ramping up other business. However, the EMC direct and partner teams want and need to keep up account control, which means gaining market share and footprint in those accounts. This also means EMC needs to find ways to take cost out of the sales and marketing process where possible to streamline, which perhaps bringing VCE in-house will help do.

Will this perhaps give the EMC direct and partner sales teams a new carrot or incentive to promote converged and hyper-converged at the cost of other competitors or incumbents? Perhaps, let's see what happens in the coming weeks.

What does this all mean?

In a nutshell, IMHO EMC is doing a couple of things here, one of which is cleaning up some ownership in JVs to give itself more control, as well as options for doing other business transactions (mergers and acquisitions (M&A), sales or divestitures, new joint initiatives, etc). Then there is streamlining its business, from decision-making to quickly responding to new opportunities, as well as routes to market and other activities (e.g. removing complexity and cost vs. simply cutting cost).

Does this signal the prelude to something else? Perhaps; we know that EMC has made a statement of direction about hyper-converged, and with VCE now more under EMC control, perhaps we will see more options from under the VCE umbrella both for lower-end and entry SMB as well as SME and large enterprise organizations.

What about the activist investors?

They are going to make noise as long as they can continue to make more money or get what they want. Publicly I would be shocked if the activist investors were not making statements that EMC should be selling assets not buying or investing.

On the other hand, any smart investor, financial or other analyst should see through the fog of what this relatively simple transaction means in terms of EMC getting further control of its future.

Of course the question remains: does EMC stay in control of its current federation of EMC, VMware, Pivotal and RSA along with each of their respective holdings, or does EMC do a blockbuster merger, divestiture or acquisition?

server and storage I/O road ahead

Take a step back, look at the big picture!

Some things to keep an eye on:

  • Will this move help streamline decision-making enabling new solutions to be brought to market and customers quicker?
  • While there is a VMware focus, don’t forget about the long-running decades old relationship with Microsoft and how that plays into the equation
  • Watch for what EMC releases with their hyper-converged solution as well as where it is focused, not to mention how sold
  • Also watch the EMC and Lenovo joint initiative, both for the Iomega storage activity as well as what EMC and Lenovo do with and for servers
  • Speaking of Lenovo, unless I missed something as of the time of writing this, have you noticed that Lenovo is not yet part of the VMware EVO:Rail initiative?

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Seagate has shipped over 10 Million storage HHDD’s, is that a lot?

Recently Seagate made an announcement that they have shipped over 10 million Hybrid Hard Disk Drives (HHDD), also known as Solid State Hybrid Drives (SSHD), over the past few years. Disclosure: Seagate has been a StorageIO client.

I know where some of those desktop class HHDD's including Momentus XTs ended up, as I bought some of the 500GB and 750GB models via Amazon and have them in various systems. Likewise I have installed in VMware servers the newer generation of enterprise class SSHD's, which Seagate now refers to as Turbo models, as companions to my older HHDD's.

What is a HHDD or SSHD?

The HHDD's continue to evolve from initially accelerating reads to now being capable of speeding up write operations across different families (desktop/mobile, workstation and enterprise). What makes a HHDD or SSHD is that, as the name implies, they are a hybrid combining a traditional spinning magnetic Hard Disk Drive (HDD) with flash SSD storage. The flash persistent memory is in addition to the DRAM or non-persistent memory typically found on HDDs used as a cache buffer. These HHDDs or SSHDs are self-contained in that the flash is built into the actual drive as part of its internal electronics circuit board (controller). This means that the drives should be transparent to the operating systems or hypervisors on servers or to storage controllers, without the need for special adapters, controller cards or drivers. In addition, there is no extra software needed for automated tiering or movement between the flash on the HHDD or SSHD and its internal HDD; it's all self-contained, managed by the drive's firmware (e.g. software).
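To make the self-contained part concrete, here is a conceptual sketch of the general idea of drive firmware promoting frequently accessed blocks into the internal flash; this is an illustration only (hypothetical block addresses and thresholds), not Seagate's actual Adaptive Memory technology:

```python
# Conceptual SSHD/HHDD behavior: firmware tracks hot blocks and promotes them
# into the drive's internal flash, transparently to the host OS or hypervisor.
from collections import Counter

class HybridDrive:
    def __init__(self, flash_slots=4, promote_after=2):
        self.flash = set()          # LBAs currently held in NAND flash
        self.heat = Counter()       # simple access-frequency tracking
        self.flash_slots = flash_slots
        self.promote_after = promote_after

    def read(self, lba):
        self.heat[lba] += 1
        location = "flash" if lba in self.flash else "disk"
        # Promote frequently read LBAs into flash, evicting the coldest if full.
        if location == "disk" and self.heat[lba] >= self.promote_after:
            if len(self.flash) >= self.flash_slots:
                coldest = min(self.flash, key=lambda b: self.heat[b])
                self.flash.discard(coldest)
            self.flash.add(lba)
        return location

drive = HybridDrive()
for lba in [7, 7, 7, 42, 9, 7, 42, 42]:
    print(f"read LBA {lba:>2} served from {drive.read(lba)}")
```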

Some SSHD and HHDD industry perspectives

Jim Handy over at Objective Analysis has this interesting post discussing Hybrid Drives Not Catching On. The following is an excerpt from Jim’s post.

Why were our expectations higher? 

There were a few reasons: The hybrid drive can be viewed as an evolution of the DRAM cache already incorporated into nearly all HDDs today. 

  • Replacing or augmenting an expensive DRAM cache with a slower, cheaper NAND cache makes a lot of sense.
  • An SSHD performs much better than a standard HDD at a lower price than an SSD. In fact, an SSD of the same capacity as today’s average HDD would cost about an order of magnitude more than the HDD. The beauty of an SSHD is that it provides near-SSD performance at a near-HDD price. This could have been a very compelling sales proposition had it been promoted in a way that was understood and embraced by end users.
  • Some expected for Seagate to include this technology into all HDDs and not to try to continue using it as a differentiator between different Seagate product lines. The company could have taken either of two approaches: To use hybrid technology to break apart two product lines – standard HDDs and higher-margin hybrid HDDs, or to merge hybrid technology into all Seagate HDDs to differentiate Seagate HDDs from competitors’ products, allowing Seagate to take slightly higher margins on all HDDs. Seagate chose the first path.

The net result is shipments of 10 million units since its 2010 introduction, for an average of 2.5 million per year, out of a total annual HDD shipments of around 500 million units, or one half of one percent.

Continue reading more of Jim’s post here.

In his post, Jim raises some good points including that HHDD's and SSHD's are still a fraction of the overall HDD's shipped on an annual basis. However IMHO the annual growth rate has not been a flat average of 2.5 million; rather it started at a lower rate and has increased year over year. For example Seagate issued a press release back in summer 2011 that they had shipped a million HHDD's a year after their release. Also keep in mind that those HHDD's were focused on desktop workstations and in particular at gamers among others.

The early HHDD's such as the Momentus XTs that I was using starting in June 2010 only had read acceleration, which was better than HDD's however did not help out on writes. Over the past couple of years there have been enhancements to the HHDD's including the newer generation also known as SSHD's or Turbo drives as Seagate now calls them. These newer drives include write acceleration as well, with models for mobile/laptop, workstation and enterprise class including higher-performance and high-capacity versions. Thus my estimates or analysis has the growth on an accelerating curve vs. a linear growth rate (e.g. an average of 2.5 million units per year).

Year          Units shipped per year     Running total units shipped
2010-2011     1.0 Million                1.0 Million
2011-2012     1.25 Million (est.)        2.25 Million (est.)
2012-2013     2.75 Million (est.)        5.0 Million (est.)
2013-2014     5.0 Million (est.)         10.0 Million

StorageIO estimates on HHDD/SSHD units shipped based on Seagate announcements

estimated hhdd and sshd shipments
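To see why I characterize the growth as accelerating rather than linear, here is a quick sketch that runs the estimates from the table above through the math (figures marked est. are StorageIO estimates, not Seagate reported numbers):

```python
# Comparing a flat 2.5 million/year average against the accelerating StorageIO estimates.
years     = ["2010-2011", "2011-2012", "2012-2013", "2013-2014"]
estimated = [1.0, 1.25, 2.75, 5.0]     # millions of units per year (est.)
flat_avg  = 2.5                        # millions per year if growth were linear

est_total = 0.0
for i, (year, est) in enumerate(zip(years, estimated)):
    est_total += est
    yoy = f", {est / estimated[i-1] - 1:+.0%} yoy" if i else ""
    print(f"{year}: est. {est:.2f}M shipped "
          f"(running {est_total:.2f}M vs. {flat_avg * (i + 1):.1f}M flat){yoy}")
# Both paths reach ~10 million units, however the estimated curve starts lower and
# accelerates year over year rather than shipping a constant 2.5 million every year.
```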

However IMHO there is more to the story beyond numbers of HHDD/SSHD shipped or if they are accelerating in deployment or growing at an average rate. Some of those perspectives are in my comments over on Jim Handy’s site with an excerpt below.

In talking with IT professionals (e.g. what the vendors/industry calls users/customers) they are generally not aware that these devices exist, or if they are aware of them, they are only aware of what was available in the past (e.g. the consumer class read optimized versions). I do talk with some who are aware of the newer generation devices however their comments are usually tied to lack of system integrator (SI) or vendor/OEM support, or sole source. Also there was a focus on promoting the HHDD’s to “gamers” or other power users as opposed to broader marketing efforts. Also most of these IT people are not aware of the newer generation of SSHD or what Seagate is now calling “Turbo” drives.

When talking with VAR’s, there is a similar reaction: discussion about the lack of support for HHDD’s or SSHD’s from the SI/vendor OEMs, or single-source supply concerns. Another common reaction is a lack of awareness of the current generation of SSHD’s (e.g. those that do write optimization, as well as the enterprise-class versions).

When talking with vendors/OEMs, there is a general lack of awareness of the newer enterprise-class SSHD’s/HHDD’s that do write acceleration. Sometimes there is concern about how these would disrupt their “hybrid” SSD + HDD or tiering marketing stories and strategies, along with comments about single-source suppliers. I have also heard concerns about how long, or how committed, the drive manufacturers are going to be to SSHD/HHDD, or whether this is just a gap filler for now.

Not surprisingly, when I talk with industry pundits, influencers and amplifiers (e.g. analysts, media, consultants, blogalysts), there is a reflection of all of the above: a lack of awareness of what is available (not to mention a lack of hands-on experience) vs. repeating what has been heard or read about in the past.

IMHO, while there are some technology hurdles, the biggest issue and challenge is basic marketing and business development to generate awareness with the industry (e.g. pundits), vendors/OEMs, VAR’s and IT customers. That is, of course, assuming SSHD/HHDD are here to stay and not just a passing fad…

What about SSHD and HHDD performance on reads and writes?

What about the performance of today’s HHDD’s and SSHD’s, particularly those that can accelerate writes as well as reads?

[Chart: Enterprise Turbo SSHD read and write performance (Exchange Email)]

[Chart: Enterprise Turbo SSHD read and write performance (TPC-B database)]

[Chart: Enterprise Turbo SSHD read and write performance (TPC-E database)]

Additional details and information about HHDD/SSHD, or as Seagate now refers to them, Turbo drives, can be found in two StorageIO Industry Trends Perspective White Papers (located here and another here).

Where to learn more

Refer to the following links to learn more about HHDD and SSHD devices.
StorageIO Momentus Hybrid Hard Disk Drive (HHDD) Moments
Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy
Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?
2011 Summer momentus hybrid hard disk drive (HHDD) moment
More Storage IO momentus HHDD and SSD moments part I
More Storage IO momentus HHDD and SSD moments part II
New Seagate Momentus XT Hybrid drive (SSD and HDD)
Another StorageIO Hybrid Momentus Moment
SSD past, present and future with Jim Handy

Closing comments and perspectives

I continue to be bullish on hybrid storage solutions, from cloud to storage systems as well as hybrid storage devices. However, as with many technologies, just because something makes sense or is interesting does not mean it is a near-term or long-term winner. My main concern with SSHD and HHDD is whether the manufacturers such as Seagate and WD are serious about making them a standard feature in all drives, or simply view them as a near-term stop-gap solution.

What’s your take or experience with using HHDD and/or SSHDs?

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Docker for Smarties (e.g. non-dummies) from VMworld 2014

In this Industry Trends Perspectives video podcast episode (on YouTube) I had a chance to visit with Nathan LeClaire of docker.com at the recent VMworld 2014 in San Francisco for a quick overview of what Docker and containers are about, what you need to know and where to find more information. Check out this StorageIO Industry Trends Perspective episode "Docker for Smarties" (aka not for dummies) via YouTube by clicking here or on the image below.

[Video: StorageIO Docker for Smarties from VMworld 2014]

For those not familiar with Docker, the video walks through the basics; a few screenshots follow.

[Image: Docker overview]

[Image: Three things to know about Docker]

[Image: Key points and where to learn more about Docker]

Check out the Docker for non-dummies video here.

What’s your take: is Docker in your future, or are you already using it?

Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved