Collecting Transaction Per Minute from SQL Server and HammerDB

Storage I/O trends

When using benchmark or workload generation tools such as HammerDB I needed a way to capture and log performance activity metrics such as transactions per minute. For example, using HammerDB to simulate an application making database requests and performing various transactions as part of testing an overall system solution, including server and storage I/O activity. This post looks at the problem or challenge I wanted to address, as well as the solution I created after spending time searching for an existing one (still searching, btw).

The Problem, Issue, Challenge, Opportunity and Need

The challenge is to collect application performance metrics such as transactions per minute from a workload using a database. The workload or benchmark tool (in this case HammerDB) is the System Test Initiator (STI) that drives the activity (e.g. database requests) to a System Under Test (SUT). In this example the SUT is a Microsoft SQL Server running on a Windows 2012 R2 server. What I need is to collect the transaction rate per minute while the STI is generating a particular workload, and log it to a file for later analysis.

Server Storage I/O performance

Understanding the challenge and designing a strategy

If you have ever used benchmark or workload generation tools such as Quest Benchmark Factory (part of the Toad tools collection) you might be spoiled with how it can be used to not only generate the workload, but also collect, process, present and even store the results for database workloads such as TPC simulations. In this situation, Transaction Processing Council (TPC) like workloads need to be run and metrics on performance collected. Let's leave Benchmark Factory for a future discussion and focus instead on a free tool called HammerDB, and more specifically how to collect transactions per minute metrics from Microsoft SQL Server. While the focus is SQL Server, you can easily adapt the approach for MySQL among others, not to mention there are other tools such as Sysbench and Aerospike.

The following image (created using my Livescribe Echo digital pen) outlines the problem, as well as sketches out a possible solution design. For my solution, the idea is to grab, once a minute for a given amount of time, the count of transactions that have occurred. Later, in post-processing (you could also do this in the SQL script), I take the new transaction count (which is cumulative) and subtract the earlier interval, which yields the transactions per minute (see examples later in this post).

collect TPM metrics from SQL Server with hammerdb
The problem and challenge, a way to collect Transactions Per Minute (TPM)

Finding a solution

HammerDB displays results via its GUI, and perhaps there is a way or some trick to get it to log results to a file or some other means; however after searching the web, I found that it was quicker to come up with my own solution. That solution was to decide how to collect and report the transactions per minute (you could also do this by second or some other interval) from Microsoft SQL Server. This meant finding what performance counters and metrics are available from SQL Server, how to collect those, and how to log them to a file for processing. In other words, a SQL Server script file would need to be created that ran in a loop, collecting for a given amount of time at a specified interval, for example once a minute for several hours.

Taking action

The following is a script that I came up with; it is far from optimal, however it gets the job done and is a starting point for adding more capabilities or optimizations.

In the following example, set loopcount to the number of minutes to collect samples for. Note that if you are running a workload test for eight (8) hours with a 30 minute ramp-up time, you would want to use a loopcount (e.g. number of minutes to collect for) of 480 + 30 + 10. The extra 10 minutes allows for some samples before the ramp and start of the workload, as well as a clear end-of-test set of samples. Add or subtract however many minutes to collect for as needed, however keep this in mind: better to collect a few extra minutes vs. not have them and wish you had.

-- Note and disclaimer:
-- 
-- Use of this code sample is at your own risk with Server StorageIO and UnlimitedIO LLC
-- assuming no responsibility for its use or consequences. You are free to use this as is
-- for non-commercial scenarios with no warranty implied. However feel free to enhance and
-- share those enhancements with others e.g. pay it forward.
-- 
DECLARE @cntr_value bigint;
DECLARE @loopcount bigint; -- how many minutes to take samples for

set @loopcount = 240

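-- Take an initial (baseline) sample of the cumulative transaction counter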
SELECT @cntr_value = cntr_value
 FROM sys.dm_os_performance_counters
 WHERE counter_name = 'transactions/sec'
 AND object_name = 'MSSQL$DBIO:Databases'
 AND instance_name = 'tpcc' ; print @cntr_value;
 WAITFOR DELAY '00:00:01'
-- 
-- Start loop to collect TPM every minute
-- 

while @loopcount <> 0
begin
SELECT @cntr_value = cntr_value
 FROM sys.dm_os_performance_counters
 WHERE counter_name = 'transactions/sec'
 AND object_name = 'MSSQL$DBIO:Databases'
 AND instance_name = 'tpcc' ; print @cntr_value;
 WAITFOR DELAY '00:01:00'
 set @loopcount = @loopcount - 1
end
-- 
-- All done with loop, write out the last value
-- 
SELECT @cntr_value = cntr_value
 FROM sys.dm_os_performance_counters
 WHERE counter_name = 'transactions/sec'
 AND object_name = 'MSSQL$DBIO:Databases'
 AND instance_name = 'tpcc' ; print @cntr_value;
-- 
-- End of script
-- 

The above example has loopcount set to 240 for a 200 minute test with a 30 minute ramp and 10 extra minutes of samples. I use a couple of those minutes to make sure that the system test initiator (STI) such as HammerDB is configured and ready to start executing transactions. You could also put this along with your HammerDB items into a script file for further automation, however I will leave that exercise up to you.

For those of you familiar with SQL and SQL Server you probably already see some things to improve or stylize, or will simply apply your own preferences, which is great, go for it. Also note that I'm only selecting a particular counter from the performance counters; there are many others which you can easily discover with a couple of SQL commands (e.g. select and specify the database instance and object name). Also note that the key is accessing the items in sys.dm_os_performance_counters of your SQL Server database instance.
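
As a starting point, a query along the following lines lists what is available (a minimal sketch; adjust the object_name prefix, e.g. MSSQL$DBIO, to match your own SQL Server instance and database names):

-- List available performance counters for a given object and database instance
SELECT object_name, counter_name, instance_name, cntr_value
 FROM sys.dm_os_performance_counters
 WHERE object_name = 'MSSQL$DBIO:Databases'
 AND instance_name = 'tpcc';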

The results

The output from the above is a list of cumulative numbers as shown below which you will need to post process (or add a calculation to the above script). Note that part of running the script is specifying an output file which I show later.

785
785
785
785
37142
1259026
2453479
3635138
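
For example, subtracting consecutive samples above gives 37142 - 785 = 36,357 transactions during that minute, then 1,259,026 - 37,142 = 1,221,884 the next minute, and so on.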

Implementing the solution

You can set up the above script to run as part of a larger automation shell or batch script, however for simplicity I'm showing it here using Microsoft SQL Server Management Studio.

SQL Server script to collect TPM
Microsoft SQL Server Management Studio with script to collect Transactions Per Minute (TPM)

The following image shows how to specify an output file for the results to be logged to when using Microsoft SQL Server Management Studio to run the TPM collection script.

Specify SQL Server tpm output file
Microsoft SQL Server Management Studio, specifying the output file

With the SQL Server script running to collect results, and the HammerDB workload running to generate activity, the following shows Quest Spotlight on Windows (SoW) displaying Windows Server 2012 R2 operating system level performance including CPU, memory, paging and other activity. Note that in this example both the system test initiator (STI), which is HammerDB, and the system under test (SUT), which is Microsoft SQL Server, were on the same server.

Spotlight on Windows while SQL Server doing tpc
Quest Spotlight on Windows showing Windows Server performance activity

Results and post-processing

As part of post-processing, simply use your favorite tool or script; what I often do is pull the numbers into an Excel spreadsheet and create a new column that computes and shows the difference between each step (see below). While in Excel I then plot the numbers as needed, which could also be done via a shell script and other plotting tools such as R.

In the following example, the results are imported into Excel (or your favorite tool or script) where I then add a column (B) that simply computes the difference between the current and earlier counter. For example, cell B2 = A2-A1, B3 = A3-A2 and so forth for the rest of the numbers in column A. I then plot the numbers in column B to show the transaction rates over time, which can then be used for various things.

Hammerdb TPM results from SQL Server processed in Excel
Results processed in Excel and plotted

Note that if the above results seem too good to be true, they are: these were cached results used to show the tools and data collection process, as opposed to real work being done, at least for now…
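
As an alternative to post-processing, the per-minute difference can also be computed in the collection script itself. The following is a minimal sketch of that variation (same counter, object and instance name assumptions as the earlier script), printing the delta rather than the cumulative value:

-- Sketch: print transactions per minute (delta) instead of cumulative counts
DECLARE @prev_value bigint;
DECLARE @curr_value bigint;
DECLARE @loopcount bigint; -- how many minutes to take samples for

set @loopcount = 240

-- Baseline sample of the cumulative transaction counter
SELECT @prev_value = cntr_value
 FROM sys.dm_os_performance_counters
 WHERE counter_name = 'transactions/sec'
 AND object_name = 'MSSQL$DBIO:Databases'
 AND instance_name = 'tpcc';

while @loopcount <> 0
begin
 WAITFOR DELAY '00:01:00'
 SELECT @curr_value = cntr_value
  FROM sys.dm_os_performance_counters
  WHERE counter_name = 'transactions/sec'
  AND object_name = 'MSSQL$DBIO:Databases'
  AND instance_name = 'tpcc';
 -- Difference vs. the previous sample = transactions during the last minute
 print @curr_value - @prev_value;
 set @prev_value = @curr_value
 set @loopcount = @loopcount - 1
end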

Where to learn more

Here are some extra links to have a look at:

How to test your HDD, SSD or all flash array (AFA) storage fundamentals
Server and Storage I/O Benchmarking 101 for Smarties
Server and Storage I/O Benchmark Tools: Microsoft Diskspd (Part I)
The SSD Place (collection of flash and SSD resources)
Server and Storage I/O Benchmarking and Performance Resources
I/O, I/O how well do you know about good or bad server and storage I/Os?

What this all means and wrap-up

There are probably many ways to fine tune and optimize the above script; likewise there may even be some existing tool, plug-in, add-on module or configuration setting that allows HammerDB to log the transaction activity rates to a file vs. simply showing them on screen. However for now, this is a workaround that I have found for when needing to collect transaction activity performance data with HammerDB and SQL Server.

Ok, nuff said, for now…

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Cloud conversations: If focused on cost you might miss other cloud storage benefits

Storage I/O trends

Drew Robb (@robbdrew) has a good piece (e.g. article) over at InfoStor titled Eight Ways to Avoid Cloud Storage Pricing Surprises that you can read here.

Drew starts his piece out with this nice analogy or story:

Let’s begin with a cautionary tale about pricing: a friend hired a moving company as they quoted a very attractive price for a complex move. They lured her in with a low-ball price then added more and more “extras” to the point where their price ended up higher than many of the other bids she passed up. And to make matters worse, they are already two weeks late with delivery of the furniture and are saying it might take another two weeks.

Drew extends the example in his piece to show how some cloud providers may advertise pricing as low as some small amount, only for customers to be surprised when they have not done their homework to learn about the various fees.

Note that most reputable cloud providers do not hide their fees, even though there is a myth that all cloud vendors have hidden fees; instead they list those costs on their sites. However that means the smart shopper or person procuring cloud services needs to go look for those fees and what they mean in order to avoid surprises. On the other hand, if you cannot find what the extra fees are, along with what is or is not included in a cloud service price, then to quote Jenny's line in the movie Forrest Gump, "…Run, Forrest! Run!…".

In Drew's piece he mentions five general areas to keep an eye on pertaining to cloud storage costs, including:

  • Be Duly Diligent
  • Trace Out Application Interaction
  • Avoid Fixed Usage Rates
  • Beware Lowballing
  • Demand Enterprise Visibility

Beware Lowballing

In Drew’s piece, he includes a comment from myself shown below.

Just as in the moving business, lowballing is alive and well in cloud pricing. Greg Schulz, an analyst with StorageIO Group, warned users to pay attention to services that have very low-cost per GByte/TByte yet have extra fees and charges for use, activity or place service caps. Compare those with other services that have higher base fees and attempt to price it based on your real storage and usage patterns.

“Watch out for usage and activity fees with lower cost services where you may get charged for looking at or visiting your data, not to mention for when you actually need to use it,” said Schulz. “Also be aware of limits or caps on performance that may apply to a particular class of service.”

As a follow-up to Drew’s good article, I put together the following thoughts that appeared earlier this year over at InfoStor titled Cloud storage: Is It All About Cost? that you can read here. In that article I start out with the basic question of:

So what is your take on cloud storage, and in what context?

Is cloud storage all about removing cost, cost cutting, free storage?

Or perhaps even getting something else in addition to free storage?

I routinely talk with different people from various backgrounds, environments from around the world, and the one consistency I hear when it comes to cloud services including storage is that there is no consistency.

What I mean by this is that there are the cloud crowd cheerleaders who view or cheer for anything cloud related, some of them actually use the cloud vs. simply cheering.

What does this have to do with cloud costs?

Simple, how do you know if cloud is cheaper or more expensive if you do not know your own costs?

How do you know if cloud storage is available, reliable, durable if you do not have a handle on your environment?

Are you making apples to oranges comparisons, or simply trading on or leveraging hype and FUD, for or against?

Similar to regular storage, how you choose to use and configure on-site traditional storage for high availability, performance and security, among other best practices, should also be applied to cloud solutions. After all, only you can prevent cloud (or on premise) data loss, granted it is a shared responsibility. Shared responsibility means your service provider or system vendor needs to deliver a quality, robust solution that you then take responsibility for configuring and using with resiliency.

For some of you perhaps cloud might be about lowering, reducing or cutting storage costs, perhaps even getting some other service(s) in addition to free storage.

On the other hand, some of you might be looking to cloud storage for benefits beyond cost, such as durability, resiliency, or simply another place to keep a copy of your important data.

Yet another class of cloud storage (e.g. AWS EBS) is storage intended or optimized to be accessed from within a cloud via cloud servers or compute instances (e.g. AWS EC2 among others), vs. storage that is optimized for access both inside and outside the cloud (e.g. AWS S3 or Glacier, with costs shown here). I am using AWS examples; however, you could use Microsoft Azure (pricing shown here), Google (including their new Nearline service with costs shown here), Rackspace (calculator here, or other cloud files pricing here), HP Cloud (costs shown here), IBM Softlayer (object storage costs here) and many others.

Not all types of cloud storage are the same, which is similar to the traditional storage you may be using or have used in your environment in the past. For example, there is high-capacity low-cost storage, including magnetic tape for data protection and archiving of inactive data, along with near-line hard disk drives (HDD). There are different types of HDDs, as well as fast solid-state devices (SSD), along with hybrid or SSHD storage used for different purposes. This is where some would say the topic of cloud storage is highly complex.

Where to learn more

Data Protection Diaries
Cloud Conversations: AWS overview and primer
Only you can prevent cloud data loss
Is Computer Data Storage Complex? It Depends
Eight Ways to Avoid Cloud Storage Pricing Surprises
Cloud and Object Storage Center
Cloud Storage: Is It All About Cost?
Cloud conversations: Gaining cloud confidence from insights into AWS outages (Part II)
Given outages, are you concerned with the security of the cloud?
Is the cost of cloud storage really cheaper than traditional storage?
Are more than five nines of availability really possible?
What should I look for in an enterprise file sync-and-share app?
How do primary storage clouds and cloud for backup differ?
What should I consider when using SSD cloud?
What’s most important to know about my cloud privacy policy?
Data Archiving: Life Beyond Compliance
My copies were corrupted: The 3-2-1 rule
Take a 4-3-2-1 approach to backing up data

What this means

In my opinion there are cheap clouds (products, services, solutions), there are low-cost options, and there are value and premium offerings. Avoid confusing value with cheap or low cost, as something might have a higher cost yet include more capabilities or services that, if useful, make it a better value. Look beyond the up-front cost aspects of clouds, also considering ongoing recurring fees for actually using a service or solution.

If you can find low-cost storage at or below a penny per GByte per month, that could be a good value if it also includes free or generous allowances for access and retrieval (e.g. GET, HEAD and LIST operations) for management or reporting. On the other hand, if you find a service that is at or below a penny per GByte per month however charges for any access including retrieval, as well as network bandwidth fees along with reporting, that might not be as good of a value.
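
As a purely hypothetical illustration (made up numbers, not any specific provider's price list): 10 TBytes at a penny per GByte per month is roughly $100 per month at rest; if that same service also charges, say, $0.05 per GByte for retrieval and you pull back 2 TBytes during the month, that adds about $100 more, effectively doubling the bill, which is exactly the kind of surprise to watch for.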

Look beyond the basic price and watch out for statements like "…as low as…" to understand what is required to get that "…as low as…" price. Also understand what the extra fees are; most of the reputable providers list these on their sites, granted you have to look for them. If you are already using cloud services, pay attention to your monthly invoices and track what you are paying for to avoid surprises.

From my InfoStor piece:

For cloud storage, instead of simply focusing on lowest cost of storage per capacity, look for value, along with the ability to configure or use it with as much resiliency as you need. Value will mean different things depending on your needs and cloud storage services, yet the solution should be cost-effective, with availability including durability, security and applicable performance.

Shopping for cloud servers and storage is similar to acquiring regular servers and storage in that you need to understand what you are acquiring, along with the up-front and recurring fees, to understand the total cost of ownership and cost of operations, not to mention making apples to apples vs. apples to oranges comparisons.

Btw, instead of simply using lower cost cloud services to cut cost, why not also use those capabilities to create or park another copy of your important data somewhere else just to be safe…

What say you about cloud costs?

Ok, nuff said, for now…

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

How to test your HDD SSD or all flash array (AFA) storage fundamentals

How to test your HDD SSD AFA Hybrid or cloud storage

server storage data infrastructure i/o hdd ssd all flash array afa fundamentals

Updated 2/14/2018

Over at BizTech Magazine I have a new article 4 Ways to Performance Test Your New HDD or SSD that provides a quick guide to verifying or learning what the speed characteristics of your new storage device are.

An out-take from the article used by BizTech as a "tease" is:

These four steps will help you evaluate new storage drives. And … psst … we included the metrics that matter.

Building off the basics, server storage I/O benchmark fundamentals

The four basic steps in the article are:

  • Plan what and how you are going to test (what’s applicable for you)
  • Decide on a benchmarking tool (learn about various tools here)
  • Test the test (find bugs, errors before a long running test)
  • Focus on metrics that matter (what’s important for your environment)

Server Storage I/O performance

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

To some, the above (read the full article here) may seem like common sense tips and things everybody should know; on the other hand (otoh), there are many people who are new to server, storage, I/O and networking hardware and software, cloud and virtual environments, along with various applications, not to mention different tools.

Thus the above is a refresher for some (e.g. déjà vu) while for others it might be new and revolutionary, or simply helpful. If you are interested in HDDs and SSDs, as well as other server storage I/O performance topics along with benchmarking tools, techniques and trends, check out the collection of links here (Server and Storage I/O Benchmarking and Performance Resources).

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

February 2015 Server StorageIO Update Newsletter

Volume 15, Issue II

Hello and welcome to this February 2015 Server and StorageIO update newsletter. The new year is off and running with many events already underway including the recent USENIX FAST conference and others on the docket over the next few months.

Speaking of the FAST (File and Storage Technologies) event, which I attended last week, here is a link to where you can download the conference proceedings.

In other events, VMware announced version 6 of their vSphere ESXi hypervisor and associated management tools including VSAN, VVOL among other items.

This month's newsletter has a focus on server storage I/O performance topics with various articles, tips, commentary and blog posts.

Watch for more news, updates and industry trends perspectives coming soon.

Commentary In The News

StorageIO news

Following are some StorageIO industry trends perspectives comments that have appeared in various print and online venues. Over at Processor there are comments on resilient and highly available servers, underutilized or unused servers, what abandoned data is costing your company, and aligning application needs with your infrastructure (server, storage, networking) resources.

Also at Processor, explore flash-based (SSD) storage, enterprise backup buying tips, re-evaluating server security, new tech advancements for server upgrades, and understanding the cost of acquiring storage.

Meanwhile over at CyberTrend there are some perspectives on enterprise backup and better servers mean better business.

View more trends comments here

Tips and Articles

So you have a new storage device or system.

How will you test or find its performance?

Check out this quick-read tip on storage benchmark and testing fundamentals over at BizTech. Also check out these resources and links on server storage I/O performance and benchmarking tools.

View recent as well as past tips and articles here

StorageIOblog posts

Recent StorageIOblog posts include:

View other recent as well as past blog posts here

In This Issue

  • Industry Trends Perspectives
  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Events & Activities

    EMCworld – May 4-6 2015

    Interop – April 29 2015

    NAB – April 14-15 2015

    Deltaware Event – March 3 2015

    Feb. 18 – FAST 2015 – Santa Clara CA

    View other recent and upcoming events here

    Webinars

    December 11, 2014 – BrightTalk
    Server & Storage I/O Performance

    December 10, 2014 – BrightTalk
    Server & Storage I/O Decision Making

    December 9, 2014 – BrightTalk
    Virtual Server and Storage Decision Making

    December 3, 2014 – BrightTalk
    Data Protection Modernization

    November 13 9AM PT – BrightTalk
    Software Defined Storage

    Videos and Podcasts

    StorageIO podcasts are also available at StorageIO.tv

    From StorageIO Labs

    Research, Reviews and Reports

    StarWind Virtual SAN
    starwind virtual san

    Using less hardware with software defined storage management. This report looks at the needs of Microsoft Hyper-V ROBO and SMB environments using software defined storage with less hardware. Read more here.

    View other StorageIO lab review reports here.

    Resources and Links

    Check out these useful links and pages:
    storageio.com/links
    objectstoragecenter.com
    storageioblog.com/data-protection-diaries-main/

    storageperformance.us
    thessdplace.com
    storageio.com/raid
    storageio.com/ssd

    Enjoy this edition of the Server and StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts along with in the news commentary appearing soon.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Server and Storage I/O Benchmarking 101 for Smarties

    Server Storage I/O Benchmarking 101 for Smarties or dummies ;)

    server storage I/O trends

    This is the first of a series of posts and links to resources on server storage I/O performance and benchmarking (view more and follow-up posts here).

    The best I/O is the I/O that you do not have to do, the second best is the one with the least impact as well as low overhead.

    server storage I/O performance

    Drew Robb (@robbdrew) has a Data Storage Benchmarking Guide article over at Enterprise Storage Forum that provides a good framework and summary quick guide to server storage I/O benchmarking.

    Via Drew:

    Data storage benchmarking can be quite esoteric in that vast complexity awaits anyone attempting to get to the heart of a particular benchmark.

    Case in point: The Storage Networking Industry Association (SNIA) has developed the Emerald benchmark to measure power consumption. This invaluable benchmark has a vast amount of supporting literature. That so much could be written about one benchmark test tells you just how technical a subject this is. And in SNIA’s defense, it is creating a Quick Reference Guide for Emerald (coming soon).

    But rather than getting into the nitty-gritty nuances of the tests, the purpose of this article is to provide a high-level overview of a few basic storage benchmarks, what value they might have and where you can find out more. 

    Read more here including some of my comments, tips and recommendations.

    Drew provides a good summary and overview in his article, which is a great opener for this first post in a series on server storage I/O benchmarking and related resources.

    You can think of this series (along with Drew’s article) as server storage I/O benchmarking fundamentals (e.g. 101) for smarties (e.g. non-dummies ;) ).

    Note that even if you are not a server, storage or I/O expert, you can still be considered a smarty vs. a dummy if you found the need or interest to read as well as learn more about benchmarking, metrics that matter, tools, technology and related topics.

    Server and Storage I/O benchmarking 101

    There are different reasons for benchmarking; for example, you might be asked or want to know how many IOPs per disk, Solid State Device (SSD) or storage system to expect, such as for a 15K RPM (revolutions per minute) 146GB SAS Hard Disk Drive (HDD). Sure, you can go to a manufacturer's website and look at the speeds and feeds (technical performance numbers), however are those metrics applicable to your environment's applications or workloads?
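
    As a rough back of the envelope starting point (an estimate only, not a substitute for testing or a vendor specification), the random IOPS of a single HDD can be approximated as 1 / (average seek time + average rotational latency). For a 15K RPM drive the average rotational latency is half a revolution, or (60 / 15,000) / 2 = 2 ms, so assuming an average seek of around 3.5 ms that works out to roughly 1 / 0.0055 seconds, or about 180 IOPS, a number that will vary with IO size, read/write mix, queue depth and caching.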

    You might get higher IOPs with smaller IO sizes on sequential reads vs. random writes, which will also depend on what the HDD is attached to. For example, are you going to attach the HDD to a storage system or appliance with RAID and caching? Are you going to attach the HDD to a PCIe RAID card, or will it be part of a server or storage system? Or are you simply going to put the HDD into a server or workstation and use it as a drive without any RAID or performance acceleration?

    What this all means is understanding what it is that you want to benchmark test to learn what the system, solution, service or specific device can do under different workload conditions.

    Some benchmark and related topics include

    • What are you trying to benchmark
    • Why do you need to benchmark something
    • What are some server storage I/O benchmark tools
    • What is the best benchmark tool
    • What to benchmark, how to use tools
    • What are the metrics that matter
    • What is benchmark context why does it matter
    • What are marketing hero benchmark results
    • What to do with your benchmark results
    • server storage I/O benchmark step test
      Example of a step test results with various workers and workload

    • What do the various metrics mean (can we get a side of context with them metrics?)
    • Why look at server CPU if doing storage and I/O networking tests
    • Where and how to profile your application workloads
    • What about physical vs. virtual vs. cloud and software defined benchmarking
    • How to benchmark block DAS or SAN, file NAS, object, cloud, databases and other things
    • Avoiding common benchmark mistakes
    • Tips, recommendations, things to watch out for
    • What to do next

    server storage I/O trends

    Where to learn more

    The following are related links to read more about server (cloud, virtual and physical) storage I/O benchmarking tools, technologies and techniques.

    Drew Robb’s benchmarking quick reference guide
    Server storage I/O benchmarking tools, technologies and techniques resource page
    Server and Storage I/O Benchmarking 101 for Smarties.
    Microsoft Diskspd download and Microsoft Diskspd overview (via Technet)
    I/O, I/O how well do you know about good or bad server and storage I/Os?
    Server and Storage I/O Benchmark Tools: Microsoft Diskspd (Part I and Part II)

    Wrap up and summary

    We have just scratched the surface when it comes to benchmarking cloud, virtual and physical server storage I/O and networking hardware, software along with associated tools, techniques and technologies. However hopefully this and the links for more reading mentioned above give a basis for connecting the dots of what you already know or enable learning more about workloads, synthetic generation and real-world workloads, benchmarks and associated topics. Needless to say there are many more things that we will cover in future posts (e.g. keep an eye on and bookmark the server storage I/O benchmark tools and resources page here).

    Ok, nuff said, for now…

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Server Storage I/O Benchmark Tools: Microsoft Diskspd (Part I)

    server storage I/O trends

    This is part one of a two-part post pertaining to Microsoft Diskspd, which is also part of a broader series focused on server storage I/O benchmarking, performance, capacity planning, tools and related technologies. You can view part two of this post here, along with companion links here.

    Background

    Many people use Iometer for creating synthetic (artificial) workloads to support benchmarking for testing, validation and other activities. While Iometer with its GUI is relatively easy to use and available across many operating system (OS) environments, the tool also has its limits. One of the bigger limits for Iometer is that it has become dated, with little to no new development for a long time, while other tools, including some new ones, continue to evolve in functionality along with extensibility. Some of these tools have an optional GUI for ease of use or configuration, while others simply have extensive scripting and command parameter capabilities. Many tools are supported across different OS including physical, virtual and cloud, while others such as Microsoft Diskspd are OS specific.

    Instead of focusing on Iometer and other tools as well as benchmarking techniques (we cover those elsewhere), let's focus on Microsoft Diskspd.


    server storage I/O performance

    What is Microsoft Diskspd?

    Microsoft Diskspd is a synthetic workload generation (e.g. benchmark) tool that runs on various Windows systems as an alternative to Iometer, vdbench, iozone, iorate, fio, sqlio among other tools. Diskspd is a command line tool which means it can easily be scripted to do reads and writes of various I/O size including random as well as sequential activity. Server and storage I/O can be buffered file system as well non-buffered across different types of storage and interfaces. Various performance and CPU usage information is provided to gauge the impact on a system when doing a given number of IOP’s, amount of bandwidth along with response time latency.

    What can Diskspd do?

    Microsoft Diskspd creates synthetic benchmark workload activity with ability to define various options to simulate different application characteristics. This includes specifying read and writes, random, sequential, IO size along with number of threads to simulate concurrent activity. Diskspd can be used for testing or validating server and storage I/O systems along with associated software, tools and components. In addition to being able to specify different workloads, Diskspd can also be told which processors to use (e.g. CPU affinity), buffering or non-buffered IO among other things.

    What type of storage does Diskspd work with?

    Physical and virtual storage including hard disk drive (HDD), solid state devices (SSD), solid state hybrid drives (SSHD) in various systems or solutions. Storage can be physical as well as partitions or file systems. As with any workload tool when doing writes, exercise caution to prevent accidental deletion or destruction of your data.


    What information does Diskspd produce?

    Diskspd provides output in text as well as XML formats. See an example of Diskspd output further down in this post.

    Where to get Diskspd?

    You can download your free copy of Diskspd from the Microsoft site here.

    The download and installation are quick and easy, just remember to select the proper version for your Windows system and type of processor.

    Another tip is to remember to set your path environment variable to point to where you put the Diskspd image.

    Also, stating what should be obvious: don't forget that if you are going to be doing any benchmark or workload generation activity on a system where there is potential for data to be over-written or deleted, make sure you have a good backup and tested restore before you begin, in case something goes wrong.


    New to server storage I/O benchmarking or tools?

    If you are not familiar with server storage I/O performance benchmarking or using various workload generation tools (e.g. benchmark tools), Drew Robb (@robbdrew) has a Data Storage Benchmarking Guide article over at Enterprise Storage Forum that provides a good framework and summary quick guide to server storage I/O benchmarking.




    Via Drew:

    Data storage benchmarking can be quite esoteric in that vast complexity awaits anyone attempting to get to the heart of a particular benchmark.

    Case in point: The Storage Networking Industry Association (SNIA) has developed the Emerald benchmark to measure power consumption. This invaluable benchmark has a vast amount of supporting literature. That so much could be written about one benchmark test tells you just how technical a subject this is. And in SNIA’s defense, it is creating a Quick Reference Guide for Emerald (coming soon).


    But rather than getting into the nitty-gritty nuances of the tests, the purpose of this article is to provide a high-level overview of a few basic storage benchmarks, what value they might have and where you can find out more. 

    Read more here including some of my comments, tips and recommendations.


    In addition to Drew's benchmarking quick reference guide, also check out the server storage I/O benchmarking tools, technologies and techniques resource page, along with Server and Storage I/O Benchmarking 101 for Smarties.

    How do you use Diskspd?


    Tip: When you run Microsoft Diskspd it will create a file or data set on the device or volume being tested that it will do its I/O to, so make sure that you have enough disk space for what will be tested (e.g. if you are going to test 1TB you need to have more than 1TB of disk space free for use). Another tip: to speed up the initialization (e.g. when Diskspd creates the file that I/Os will be done to), run as administrator.

    Tip: In case you forgot, a couple of other useful Microsoft tools (besides Perfmon) for working with and displaying server storage I/O devices including disks (HDD and SSDs) are the commands "wmic diskdrive list [brief]" and "diskpart". With diskpart exercise caution as it can get you in trouble just as fast as it can get you out of trouble.

    You can view the Diskspd commands after installing the tool and from a Windows command prompt type:

    C:\Users\Username> Diskspd


    The above command will display Diskspd help and information about the commands as follows.

    Usage: diskspd [options] target1 [ target2 [ target3 …] ]
    version 2.0.12 (2014/09/17)

    Available targets:
        file_path
        #<physical drive number>
        <partition_drive_letter>:

    Available options:

    -?                    display usage information
    -a#[,#[…]]            advanced CPU affinity – affinitize threads to CPUs provided after -a in a round-robin manner within current KGroup (CPU count starts with 0); the same CPU can be listed more than once and the number of CPUs can be different than the number of files or threads (cannot be used with -n)
    -ag                   group affinity – affinitize threads in a round-robin manner across KGroups
    -b<size>[K|M|G]       block size in bytes/KB/MB/GB [default=64K]
    -B<offset>[K|M|G|b]   base file offset in bytes/KB/MB/GB/blocks [default=0] (offset from the beginning of the file)
    -c<size>[K|M|G|b]     create files of the given size. Size can be stated in bytes/KB/MB/GB/blocks
    -C<seconds>           cool down time – duration of the test after measurements finished [default=0s]
    -D<milliseconds>      print IOPS standard deviations. The deviations are calculated for samples of the given duration in milliseconds [default=1000]
    -d<seconds>           duration (in seconds) to run test [default=10s]
    -f<size>[K|M|G|b]     file size – this parameter can be used to use only part of the file/disk/partition, for example to test only the first sectors of a disk
    -fr                   open file with the FILE_FLAG_RANDOM_ACCESS hint
    -fs                   open file with the FILE_FLAG_SEQUENTIAL_SCAN hint
    -F<count>             total number of threads (cannot be used with -t)
    -g<bytes per ms>      throughput per thread is throttled to the given bytes per millisecond; note that this cannot be specified when using completion routines
    -h                    disable both software and hardware caching
    -i<count>             number of IOs (burst size) before thinking; must be specified with -j
    -j<milliseconds>      time to think in ms before issuing a burst of IOs (burst size); must be specified with -i
    -I<priority>          set IO priority. Available values are: 1-very low, 2-low, 3-normal (default)
    -l                    use large pages for IO buffers
    -L                    measure latency statistics
    -n                    disable affinity (cannot be used with -a)
    -o<count>             number of overlapped I/O requests per file per thread (1=synchronous I/O, unless more than 1 thread is specified with -F) [default=2]
    -p                    start async (overlapped) I/O operations with the same offset (makes sense only with -o2 or greater)
    -P<count>             enable printing a progress dot after each <count> completed I/O operations (counted separately by each thread) [default count=65536]
    -r<align>[K|M|G|b]    random I/O aligned to the given number of bytes (doesn’t make sense with -s). Can be stated in bytes/KB/MB/GB/blocks [default access=sequential, default alignment=block size]
    -R<text|xml>          output format. Default is text.
    -s<size>[K|M|G|b]     stride size (offset between starting positions of subsequent I/O operations)
    -S                    disable OS caching
    -t<count>             number of threads per file (cannot be used with -F)
    -T<offset>[K|M|G|b]   stride between I/O operations performed on the same file by different threads [default=0] (starting offset = base file offset + (thread number * offset)); makes sense only with -t or -F
    -v                    verbose mode
    -w<percentage>        percentage of write requests (-w and -w0 are equivalent). Absence of this switch indicates 100% reads. IMPORTANT: Your data will be destroyed without a warning
    -W<seconds>           warm up time – duration of the test before measurements start [default=5s]
    -x                    use completion routines instead of I/O Completion Ports
    -X<filepath>          use an XML file for configuring the workload. Cannot be used with other parameters.
    -z[seed]              set random seed [default=0 if parameter not provided, GetTickCount() if value not provided]

    Write buffers command options. By default, the write buffers are filled with a repeating pattern (0, 1, 2, …, 255, 0, 1, …)

    -Z                    zero buffers used for write tests
    -Z<size>[K|M|G|b]     use a global buffer filled with random data as a source for write operations
    -Z<size>[K|M|G|b],<file>  use a global buffer filled with data from the given file as a source for write operations. If the file is smaller than the buffer size, its content will be repeated multiple times in the buffer. By default, the write buffers are filled with a repeating pattern (0, 1, 2, …, 255, 0, 1, …)

    Synchronization command options

    -ys<eventname>        signals event <eventname> before starting the actual run (no warmup) (creates a notification event if <eventname> does not exist)
    -yf<eventname>        signals event <eventname> after the actual run finishes (no cooldown) (creates a notification event if <eventname> does not exist)
    -yr<eventname>        waits on event <eventname> before starting the run (including warmup) (creates a notification event if <eventname> does not exist)
    -yp<eventname>        allows to stop the run when event <eventname> is set; it also binds CTRL+C to this event (creates a notification event if <eventname> does not exist)
    -ye<eventname>        sets event <eventname> and quits

    Event Tracing command options

    -ep                   use paged memory for NT Kernel Logger (by default it uses non-paged memory)
    -eq                   use perf timer
    -es                   use system timer (default)
    -ec                   use cycle count
    -ePROCESS             process start & end
    -eTHREAD              thread start & end
    -eIMAGE_LOAD          image load
    -eDISK_IO             physical disk IO
    -eMEMORY_PAGE_FAULTS  all page faults
    -eMEMORY_HARD_FAULTS  hard faults only
    -eNETWORK             TCP/IP, UDP/IP send & receive
    -eREGISTRY            registry calls

    Examples:

    Create 8192KB file and run read test on it for 1 second:

    diskspd -c8192K -d1 testfile.dat

    Set block size to 4KB, create 2 threads per file, 32 overlapped (outstanding)
    I/O operations per thread, disable all caching mechanisms and run block-aligned random
    access read test lasting 10 seconds:

    diskspd -b4K -t2 -r -o32 -d10 -h testfile.dat

    Create two 1GB files, set block size to 4KB, create 2 threads per file, affinitize threads
    to CPUs 0 and 1 (each file will have threads affinitized to both CPUs) and run read test
    lasting 10 seconds:

    diskspd -c1G -b4K -t2 -d10 -a0,1 testfile1.dat testfile2.dat

    Where to learn more


    The following are related links to read more about server (cloud, virtual and physical) storage I/O benchmarking tools, technologies and techniques.

    Server storage I/O benchmarking tools, technologies and techniques resource page

    Server and Storage I/O Benchmarking 101 for Smarties.

    Microsoft Diskspd download and Microsoft Diskspd overview (via Technet)

    I/O, I/O how well do you know about good or bad server and storage I/Os?

    Server and Storage I/O Benchmark Tools: Microsoft Diskspd (Part I and Part II)

    Wrap up and summary, for now…


    This wraps up part one of this two-part post taking a look at the Microsoft Diskspd benchmark and workload generation tool. In part two (here) of this post series we take a closer look, including a test drive using Microsoft Diskspd.

    Ok, nuff said (for now)

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

    twitter @storageio


    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Microsoft Diskspd (Part II): Server Storage I/O Benchmark Tools

    server storage I/O trends

    This is part two of a two-part post pertaining to Microsoft Diskspd, which is also part of a broader series focused on server storage I/O benchmarking, performance, capacity planning, tools and related technologies. You can view part one of this post here, along with companion links here.

    Microsoft Diskspd StorageIO lab test drive

    Server and StorageIO lab

    Talking about tools and technologies is one thing; installing and trying them is the next step for gaining experience, so how about some quick hands-on time with Microsoft Diskspd (download your copy here).

    The following commands all specify an I/O size of 8Kbytes doing I/O to a 45GByte file called diskspd.dat located on the F: drive. Note that a 45GByte file is on the small side for general performance testing, however it was used for simplicity in this example. Ideally a larger target storage area (file, partition, device) would be used; otoh, if your application uses a small storage device or volume, then tune accordingly.

    In this test, the F: drive is an iSCSI RAID protected volume, however you could use other storage interfaces supported by Windows including other block DAS or SAN (e.g. SATA, SAS, USB, iSCSI, FC, FCoE, etc) as well as NAS. Also common to the following commands is using 16 threads and 32 outstanding I/Os to simulate concurrent activity of many users, or application processing threads.
    server storage I/O performance
    Other common parameters used in the following were -r for random access, a 7200 second (e.g. two hour) test duration, display latency (-L), disable hardware and software cache (-h), and forcing CPU affinity (-a0,1,2,3). Since the test ran on a server with four cores I wanted to see if I could use those for helping to keep the threads and storage busy. What varies in the commands below is the percentage of reads vs. writes, as well as the results output file. Some of the workloads below also had the -S option specified to disable OS I/O buffering (to view how buffering helps when enabled or disabled). Depending on the goal, or type of test, validation or workload being run, I would choose to set some of these parameters differently.

    diskspd -c45g -b8K -t16 -o32 -r -d7200 -h -w0 -L -a0,1,2,3 F:\diskspd.dat >> SIOWS2012R203_Eiscsi_145_noh_write000.txt

    diskspd -c45g -b8K -t16 -o32 -r -d7200 -h -w50 -L -a0,1,2,3 F:\diskspd.dat >> SIOWS2012R203_Eiscsi_145_noh_write050.txt

    diskspd -c45g -b8K -t16 -o32 -r -d7200 -h -w100 -L -a0,1,2,3 F:\diskspd.dat >> SIOWS2012R203_Eiscsi_145_noh_write100.txt

    diskspd -c45g -b8K -t16 -o32 -r -d7200 -h -S -w0 -L -a0,1,2,3 F:\diskspd.dat >> SIOWS2012R203_Eiscsi_145_noSh_test_write000.txt

    diskspd -c45g -b8K -t16 -o32 -r -d7200 -h -S -w50 -L -a0,1,2,3 F:\diskspd.dat >> SIOWS2012R203_Eiscsi_145_noSh_write050.txt

    diskspd -c45g -b8K -t16 -o32 -r -d7200 -h -S -w100 -L -a0,1,2,3 F:\diskspd.dat >> SIOWS2012R203_Eiscsi_145_noSh_write100.txt

    The following is the output from the above workload command.
    Microsoft Diskspd sample output
    Microsoft Diskspd sample output part 2
    Microsoft Diskspd sample output part 3

    Note that as with any benchmark, workload test or simulation, your results will vary. In the above, the server, storage and I/O system were not tuned, as the focus was on working with the tool and determining its capabilities. Thus do not focus on the performance results per se, rather on what you can do with Diskspd as a tool to try different things. Btw, fwiw, in the above example in addition to using an iSCSI target, the Windows 2012 R2 server was a guest on a VMware ESXi 5.5 system.

    Where to learn more

    The following are related links to read more about server (cloud, virtual and physical) storage I/O benchmarking tools, technologies and techniques.

    Drew Robb’s benchmarking quick reference guide
    Server storage I/O benchmarking tools, technologies and techniques resource page
    Server and Storage I/O Benchmarking 101 for Smarties.
    Microsoft Diskspd download and Microsoft Diskspd overview (via Technet)
    I/O, I/O how well do you know about good or bad server and storage I/Os?
    Server and Storage I/O Benchmark Tools: Microsoft Diskspd (Part I and Part II)

    Comments and wrap-up

    What I like about Diskspd (Pros)

    Reporting includes CPU usage (you can't do server and storage I/O without CPU) along with IOPs (activity), bandwidth (throughput or amount of data being moved), per thread and total results, along with optional reporting. While a GUI would be nice, particularly for beginners, I'm used to setting up scripts for different workloads, so having extensive options for setting up different workloads is welcome. Being associated with a specific OS (e.g. Windows), the CPU affinity and buffer management controls will be handy for some projects.

    That Diskspd has the flexibility to use different storage interfaces and types of storage, including files or partitions, should be taken for granted; however with some tools you should not take such things for granted. I like the flexibility to easily specify various IO sizes including large 1MByte, 10MByte, 20MByte, 100MByte and 500MByte IOs to simulate application workloads that do large sequential (or random) activity. I tried some IO sizes (specified by the -b parameter) larger than 500MB; however I received various errors including "Could not allocate a buffer bytes for target", which means that Diskspd tops out somewhere around that IO size. While not able to do IO sizes larger than 500MB, this is actually impressive. Several other tools I have used or worked with have IO size limits down around 10MByte, which makes it difficult to create workloads that do large IOPs (note this is the IOP size, not the number of IOPs).

    Oh, something else that should be obvious, however I will state it: Diskspd is free, unlike some industry de facto standard tools or workload generators that require a fee to get and use.

    Where Diskspd could be improved (Cons)

    For some users a GUI or configuration wizard would make the tool easier to get started with; on the other hand (otoh), I tend to use the command capabilities of tools. It would also be nice to specify ranges as part of a single command, such as stepping through an IO size range (e.g. 4K, 8K, 16K, 1MB, 10MB) as well as read/write percentages along with varying random and sequential mixes. Granted this can easily be done by having a series of commands (see the sketch below), however I have become spoiled by using other tools such as vdbench.
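
    As a minimal sketch of that series-of-commands workaround (hypothetical result file names; adjust the target, duration, thread count and other parameters to suit your environment), something along these lines steps through a few IO sizes:

    diskspd -c45g -b4K -t16 -o32 -r -d600 -h -w0 -L F:\diskspd.dat >> results_4K.txt

    diskspd -c45g -b8K -t16 -o32 -r -d600 -h -w0 -L F:\diskspd.dat >> results_8K.txt

    diskspd -c45g -b64K -t16 -o32 -r -d600 -h -w0 -L F:\diskspd.dat >> results_64K.txt

    diskspd -c45g -b1M -t16 -o32 -r -d600 -h -w0 -L F:\diskspd.dat >> results_1M.txt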

    Summary

    Server and storage I/O performance toolbox

    Overall I like Diskspd and have added it to my Server Storage I/O workload and benchmark tool-box.

    Keep in mind that the best benchmark or workload generation technology tool will be your own application(s) configured to run as close as possible to production activity levels.

    However when that is not possible, an alternative is to use tools that have the flexibility to be configured as close as possible to your application(s) workload characteristics. This means that the focus should not be as much on the tool, but rather on how flexible a tool is to work for you, granted the tool needs to be robust.

    Having said that, Microsoft Diskspd is a good and extensible tool for benchmarking, simulation, validation and comparisons, however it will only be as good as the parameters and configuration you set it up to use.

    Check out Microsoft Diskspd and add it to your benchmark and server storage I/O tool-box like I have done.

    Ok, nuff said (for now)

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Server Storage I/O Benchmark Performance Resource Tools

    Server Storage I/O Benchmarking Performance Resource Tools

    server storage I/O trends

    Updated 1/23/2018

    Server storage I/O benchmark performance resource tools, various articles and tips. These include tools for legacy, virtual, cloud and software defined environments.

    benchmark performance resource tools server storage I/O performance

    The best server and storage I/O (input/output operation) is the one that you do not have to do, the second best is the one with the least impact.

    server storage I/O locality of reference

    This is where the idea of locality of reference (e.g. how close is the data to where your application is running) comes into play which is implemented via tiered memory, storage and caching shown in the figure above.

    Cloud virtual software defined storage I/O

    Server storage I/O performance applies to cloud, virtual, software defined and legacy environments

    What this has to do with server storage I/O (and networking) performance benchmarking is keeping the idea of locality of reference, context and the application workload in perspective, regardless of whether you are in cloud, virtual, software defined or legacy physical environments.

    StorageIOblog: I/O, I/O how well do you know about good or bad server and storage I/Os?
    StorageIOblog: Server and Storage I/O benchmarking 101 for smarties
    StorageIOblog: Which Enterprise HDDs to use for a Content Server Platform (7 part series using benchmark tools)
    StorageIO.com: Enmotus FuzeDrive MicroTiering lab test using various tools
    StorageIOblog: Some server storage I/O benchmark tools, workload scripts and examples (Part I) and (Part II)
    StorageIOblog: Get in the NVMe SSD game (if you are not already)
    Doridmen.com: Transcend SSD360S Review with tips on using ATTO and Crystal benchmark tools
    ComputerWeekly: Storage performance metrics: How suppliers spin performance specifications

    Via StorageIO Podcast: Kevin Closson discusses SLOB Server CPU I/O Database Performance benchmarks
    Via @KevinClosson: SLOB Use Cases By Industry Vendors. Learn SLOB, Speak The Experts’ Language
    Via BeyondTheBlocks (Reduxio): 8 Useful Tools for Storage I/O Benchmarking
    Via CCSIObench: Cold-cache Sequential I/O Benchmark
    Doridmen.com: Transcend SSD360S Review with tips on using ATTO and Crystal benchmark tools
    CISJournal: Benchmarking the Performance of Microsoft Hyper-V server, VMware ESXi and Xen Hypervisors (PDF)
Microsoft TechNet: Windows Server 2016 Hyper-V large-scale VM performance for in-memory transaction processing
    InfoStor: What’s The Best Storage Benchmark?
    StorageIOblog: How to test your HDD, SSD or all flash array (AFA) storage fundamentals
    Via ATTO: Atto V3.05 free storage test tool available
    Via StorageIOblog: Big Files and Lots of Little File Processing and Benchmarking with Vdbench

    Via StorageIO.com: Which Enterprise Hard Disk Drives (HDDs) to use with a Content Server Platform (White Paper)
    Via VMware Blogs: A Free Storage Performance Testing Tool For Hyperconverged
    Microsoft Technet: Test Storage Spaces Performance Using Synthetic Workloads in Windows Server
    Microsoft Technet: Microsoft Windows Server Storage Spaces – Designing for Performance
    BizTech: 4 Ways to Performance-Test Your New HDD or SSD
    EnterpriseStorageForum: Data Storage Benchmarking Guide
    StorageSearch.com: How fast can your SSD run backwards?
    OpenStack: How to calculate IOPS for Cinder Storage ?
    StorageAcceleration: Tips for Measuring Your Storage Acceleration

    server storage I/O STI and SUT

    Spiceworks: Determining HDD SSD SSHD IOP Performance
    Spiceworks: Calculating IOPS from Perfmon data
    Spiceworks: profiling IOPs

    vdbench server storage I/O benchmark
    Vdbench example via StorageIOblog.com

    StorageIOblog: What does server storage I/O scaling mean to you?
    StorageIOblog: What is the best kind of IO? The one you do not have to do
    Testmyworkload.com: Collect and report various OS workloads
    Whoishostingthis: Various SQL resources
    StorageAcceleration: What, When, Why & How to Accelerate Storage
    Filesystems.org: Various tools and links
    StorageIOblog: Can we get a side of context with them IOPS and other storage metrics?

    flash ssd and hdd

    BrightTalk Webinar: Data Center Monitoring – Metrics that Matter for Effective Management
    StorageIOblog: Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy
    StorageIOblog: Has SSD put Hard Disk Drives (HDD’s) On Endangered Species List?

    server storage I/O bottlenecks and I/O blender

    Microsoft TechNet: Measuring Disk Latency with Windows Performance Monitor (Perfmon)
    Via Scalegrid.io: How to benchmark MongoDB with YCSB? (Perfmon)
    Microsoft MSDN: List of Perfmon counters for sql server
    Microsoft TechNet: Taking Your Server’s Pulse
    StorageIOblog: Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?
    CMG: I/O Performance Issues and Impacts on Time-Sensitive Applications

    flash ssd and hdd

    Virtualization Practice: IO IO it is off to Storage and IO metrics we go
    InfoStor: Is HP Short Stroking for Performance and Capacity Gains?
    StorageIOblog: Is Computer Data Storage Complex? It Depends
    StorageIOblog: More storage and IO metrics that matter
    StorageIOblog: Moving Beyond the Benchmark Brouhaha
    Yellow-Bricks: VSAN VDI Benchmarking and Beta refresh!

    server storage I/O benchmark example

    YellowBricks: VSAN performance: many SAS low capacity VS some SATA high capacity?
YellowBricks: VSAN VDI Benchmarking and Beta refresh!
StorageIOblog: Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review
StorageIOblog: Part II: Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review
    StorageIOblog: Server Storage I/O Network Benchmark Winter Olympic Games

    flash ssd and hdd

    VMware VDImark aka View Planner (also here, here and here) as well as VMmark here
    StorageIOblog: SPC and Storage Benchmarking Games
    StorageIOblog: Speaking of speeding up business with SSD storage
    StorageIOblog: SSD and Storage System Performance

    Hadoop server storage I/O performance
    Various Server Storage I/O tools in a hadoop environment

    Michael-noll.com: Benchmarking and Stress Testing an Hadoop Cluster With TeraSort, TestDFSIO
    Virtualization Practice: SSD options for Virtual (and Physical) Environments Part I: Spinning up to speed on SSD
    StorageIOblog: Storage and IO metrics that matter
    InfoStor: Storage Metrics and Measurements That Matter: Getting Started
    SilvertonConsulting: Storage throughput vs. IO response time and why it matters
    Splunk: The percentage of Read / Write utilization to get to 800 IOPS?

    flash ssd and hdd
    Various server storage I/O benchmarking tools

    Spiceworks: What is the best IO IOPs testing tool out there
    StorageIOblog: How many IOPS can a HDD, HHDD or SSD do?
    StorageIOblog: Some Windows Server Storage I/O related commands
    Openmaniak: Iperf overview and Iperf.fr: Iperf overview
    StorageIOblog: Server and Storage I/O Benchmark Tools: Microsoft Diskspd (Part I and Part II)
    Quest: SQL Server Perfmon Poster (PDF)
    Server and Storage I/O Networking Performance Management (webinar)
    Data Center Monitoring – Metrics that Matter for Effective Management (webinar)
    Flash back to reality – Flash SSD Myths and Realities (Industry trends & benchmarking tips), (MSP CMG presentation)
    DBAstackexchange: How can I determine how many IOPs I need for my AWS RDS database?
    ITToolbox: Benchmarking the Performance of SANs

    server storage IO labs

    StorageIOblog: Dell Inspiron 660 i660, Virtual Server Diamond in the rough (Server review)
    StorageIOblog: Part II: Lenovo TS140 Server and Storage I/O Review (Server review)
    StorageIOblog: DIY converged server software defined storage on a budget using Lenovo TS140
    StorageIOblog: Server storage I/O Intel NUC nick knack notes First impressions (Server review)
    StorageIOblog & ITKE: Storage performance needs availability, availability needs performance
    StorageIOblog: Why SSD based arrays and storage appliances can be a good idea (Part I)
    StorageIOblog: Revisiting RAID storage remains relevant and resources

Interested in cloud and object storage? Visit our objectstoragecenter.com page; for flash SSD check out the storageio.com/ssd page, along with data protection, RAID, various industry links and more here.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    Watch for additional links to be added above in addition to those that appear via comments.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    I/O, I/O how well do you know good bad ugly server storage I/O iops?

    How well do you know good bad ugly I/O iops?

    server storage i/o iops activity data infrastructure trends

    Updated 2/10/2018

There are many different types of server storage I/O iops associated with various environments, applications and workloads. Some I/O activity is measured as iops, other activity as transactions per second (TPS), files or messages per unit of time (hour, minute, second), gets, puts or other operations. The best IO is one you do not have to do.

What about all the cloud, virtual, software defined and legacy based applications that still need to do I/O?

    If no IO operation is the best IO, then the second best IO is the one that can be done as close to the application and processor as possible with the best locality of reference.

    Also keep in mind that aggregation (e.g. consolidation) can cause aggravation (server storage I/O performance bottlenecks).

    aggregation causes aggravation
    Example of aggregation (consolidation) causing aggravation (server storage i/o blender bottlenecks)

    And the third best?

It’s the one that can be done in less time, or at the least cost or effect to the requesting application, which means moving further down the memory and storage stack.

    solving server storage i/o blender and other bottlenecks
    Leveraging flash SSD and cache technologies to find and fix server storage I/O bottlenecks

On the other hand, any IOP, regardless of whether it is for block, file or object storage, that involves some context is better than one without, particularly when it involves metrics that matter (here, here and here [webinar]).

    Server Storage I/O optimization and effectiveness

The problem with IO’s is that they are basic operations for getting data into and out of a computer or processor, so there is no way to avoid all of them, unless you have a very large budget. Even if you have a large budget that can afford an all-flash SSD solution, you may still run into bottlenecks or other barriers.

IO’s require CPU or processor time and memory to set up and then process the results, as well as IO and networking resources to move data to their destination or retrieve it from where it is stored. While IO’s cannot be eliminated, their impact can be greatly reduced or optimized by, among other techniques, doing fewer of them via caching and by grouping reads or writes (pre-fetch, write-behind).
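To make the grouping idea concrete, here is a minimal Python sketch (the file names, record counts and buffer sizes are made up purely for illustration) that contrasts issuing one small write per record with coalescing records into larger buffered writes, trading a little memory for far fewer I/O operations:

```python
import os

records = [b"x" * 512 for _ in range(10_000)]  # 10,000 small 512-byte records

# Approach 1: one write() call per record (many small I/Os)
with open("unbuffered.dat", "wb", buffering=0) as f:
    for rec in records:
        f.write(rec)  # each call becomes a separate small I/O

# Approach 2: coalesce records into ~64 KB buffers before writing (fewer, larger I/Os)
buf, buf_limit = bytearray(), 64 * 1024
with open("coalesced.dat", "wb", buffering=0) as f:
    for rec in records:
        buf += rec
        if len(buf) >= buf_limit:
            f.write(buf)   # one larger write instead of ~128 small ones
            buf.clear()
    if buf:
        f.write(buf)       # flush whatever remains

# Both files end up the same size; the difference is how many I/Os it took to get there
print(os.path.getsize("unbuffered.dat"), os.path.getsize("coalesced.dat"))
```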

    server storage I/O STI and SUT

    Think of it this way: Instead of going on multiple errands, sometimes you can group multiple destinations together making for a shorter, more efficient trip. However, that optimization may also mean your drive will take longer. So, sometimes it makes sense to go on a couple of quick, short, low-latency trips instead of one larger one that takes half a day even as it accomplishes many tasks. Of course, how far you have to go on those trips (i.e., their locality) makes a difference about how many you can do in a given amount of time.

    Locality of reference (or proximity)

    What is locality of reference?

    This refers to how close (i.e., its place) data exists to where it is needed (being referenced) for use. For example, the best locality of reference in a computer would be registers in the processor core, ready to be acted on immediately. This would be followed by levels 1, 2, and 3 (L1, L2, and L3) onboard caches, followed by main memory, or DRAM. After that comes solid-state memory typically NAND flash either on PCIe cards or accessible on a direct attached storage (DAS), SAN, or NAS device. 

    server storage I/O locality of reference

Even though a PCIe NAND flash card is close to the processor, there still remains the overhead of traversing the PCIe bus and associated drivers. To help offset that impact, PCIe cards use DRAM as cache or buffers for data along with meta or control information to further optimize and improve locality of reference. In other words, this information is used to help with cache hits and cache effectiveness vs. simply boosting cache utilization.
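As a rough illustration of locality of reference and cache effectiveness, the following toy Python sketch (the capacity, block count and access pattern are hypothetical, not modeled on any particular product) implements a tiny LRU read cache and reports its hit rate for a skewed workload that keeps referencing a small hot set of blocks:

```python
from collections import OrderedDict

class TinyReadCache:
    """Toy LRU read cache to illustrate locality of reference and cache hit accounting."""
    def __init__(self, capacity, backend):
        self.capacity = capacity          # how many blocks fit in the "fast" tier
        self.backend = backend            # slower tier: block id -> data
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:        # served from cache (good locality)
            self.hits += 1
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        self.misses += 1                  # go to the slower tier
        data = self.backend[block_id]
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used block
        return data

backend = {i: f"block-{i}" for i in range(1000)}
cache = TinyReadCache(capacity=100, backend=backend)

# A skewed workload: most reads hit a small hot set, so locality (and hit rate) is high
for i in range(10_000):
    cache.read(i % 50 if i % 10 else i % 1000)

print(f"hit rate: {cache.hits / (cache.hits + cache.misses):.1%}")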

    SSD to the rescue?

What can you do to cut the impact of IO’s?

    There are many steps one can take, starting with establishing baseline performance and availability metrics.

The metrics that matter include IOPS, latency, bandwidth, and availability. Then, leverage those metrics to gain insight into your application’s performance.
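As a simple illustration of working with such metrics, the Python sketch below (the counter samples are made-up numbers, not output from any specific tool) converts cumulative I/O counters sampled at fixed intervals into per-interval IOPS, bandwidth and average latency by differencing successive samples:

```python
# Minimal sketch: turning cumulative I/O counters into per-interval IOPS,
# bandwidth and average latency. The sample numbers are illustrative only.
samples = [
    # (seconds, total_ios, total_bytes, total_io_time_ms) cumulative since start
    (0,       0,           0,           0),
    (60,  48_000, 393_216_000,      21_600),
    (120, 99_000, 811_008_000,      44_100),
]

for (t0, io0, b0, ms0), (t1, io1, b1, ms1) in zip(samples, samples[1:]):
    interval = t1 - t0
    ios = io1 - io0
    iops = ios / interval
    mbps = (b1 - b0) / interval / 1_000_000
    avg_latency_ms = (ms1 - ms0) / ios if ios else 0.0
    print(f"t={t1:>4}s  IOPS={iops:8.1f}  MB/s={mbps:7.2f}  avg latency={avg_latency_ms:5.2f} ms")
```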

    Understand that IO’s are a fact of applications doing work (storing, retrieving, managing data) no matter whether systems are virtual, physical, or running up in the cloud. But it’s important to understand just what a bad IO is, along with its impact on performance. Try to identify those that are bad, and then find and fix the problem, either with software, application, or database changes. Perhaps you need to throw more software caching tools, hypervisors, or hardware at the problem. Hardware may include faster processors with more DRAM and faster internal busses.

    Leveraging local PCIe flash SSD cards for caching or as targets is another option.

    You may want to use storage systems or appliances that rely on intelligent caching and storage optimization capabilities to help with performance, availability, and capacity.

    Where to gain insight into your server storage I/O environment

There are many tools that can be used to gain insight into your server storage I/O environment across cloud, virtual, software defined and legacy environments, as well as from different layers (e.g. applications, database, file systems, operating systems, hypervisors, server, storage, I/O networking). Many applications along with databases have either built-in or optional tools from their provider, third parties, or other sources that can give information about the work activity being done. Likewise there are tools to dig down deeper into the various data infrastructure layers to see what is happening, as shown in the following figures.

    application storage I/O performance
    Gaining application and operating system level performance insight via different tools

    windows and linux storage I/O performance
    Insight and awareness via operating system tools on Windows and Linux

In the above example, Spotlight on Windows (SoW), which you can download for free from Dell here, is shown along with Ubuntu utilities. You could also use other tools to look at server storage I/O performance, including Windows Perfmon among others.

    vmware server storage I/O
    Hypervisor performance using VMware ESXi / vsphere built-in tools

    vmware server storage I/O performance
    Using Visual ESXtop to dig deeper into virtual server storage I/O performance

    vmware server storage i/o cache
    Gaining insight into virtual server storage I/O cache performance

    Wrap up and summary

There are many approaches to address (e.g. find and fix) vs. simply move or mask data center and server storage I/O bottlenecks. Having insight and awareness into how your environment and applications are behaving is important in order to know where to focus resources. Also keep in mind that a bit of flash SSD or DRAM cache in the applicable place can go a long way, while a lot of cache will also cost you cash. Even if you can’t eliminate I/Os, look for ways to decrease their impact on your applications and systems.

    Where To Learn More

    View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

Keep in mind: SSD including flash and DRAM among others are in your future; the question is where, when, with what, how much and whose technology or packaging.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Green and Virtual IT Data Center Primer

    Green and Virtual Data Center Primer

    Moving beyond Green Hype and Green washing

Green IT is about enabling efficient, effective and productive information services delivery. There is a growing green gap between green hype messaging or green washing and IT pain point issues, including limits on the availability or rising costs of power, cooling and floor-space, as well as e-waste and environmental health and safety (PCFE). To close the gap will involve bringing green messaging and rhetoric closer to where IT organizations’ pain points are and where budget dollars exist that can address PCFE and other green related issues as a by-product. The green gap will also be narrowed as awareness of broader green related topics coincides with IT data center pain points; in other words, alignment of messaging with IT issues that have or will have budget dollars allocated towards them to sustain business and economic growth via IT resource usage efficiency. Read more here.

There are many aspects to "Green" Information Technology including servers, storage, networks and associated management tools and techniques. The reasons for and focus of "Green IT", including "Green Data Storage", "Green Computing" and related focus areas, vary across diverse needs, issues and requirements, including among others:

• Power, Cooling, Floor-space, Environmental (PCFE) related issues or constraints
• Reduction of carbon dioxide (CO2) emissions and other greenhouse gases (GHGs)
• Business growth and economic sustainability in an environmentally friendly manner
• Proper disposal or recycling of environmentally harmful retired technology components
• Reduction or better efficiency of electrical power consumption used for IT equipment
• Cost avoidance or savings from lower energy fees and cooling costs
• Support data center and application consolidation to cut cost and management
• Enable growth and enhancements to application service level objectives
• Maximize the usage of power and cooling resources available in your region
• Compliance with local or federal government mandates and regulations
• Economic sustainability and ability to support business growth and service improvements
• General environmental awareness and stewardship to save and protect the earth

    While much of the IT industry focuses on CO2 emissions footprints, data management software and electrical power consumption, cooling and ventilation of IT data centers is an area of focus associated with "Green IT" as well as a means to discuss more effective use of electrical energy that can yield rapid results for many environments. Large tier-1 vendors including HP and IBM among others who have an IT and data center wide focus have services designed to do quick assessments as well as detailed analysis and re-organization of IT data center physical facilities to improve air flow and power consumption for more effective cooling of IT technologies including servers, storage, networks and other equipment.

    Similar to your own residence, basic steps to improve your cooling effectiveness can lead to use of less energy to cut your budget impact, or, enable you to do more with what you already have with your cooling capacity to support growth, acquisitions and or consolidation initiatives. Vendors are also looking at means and alternatives for cooling IT equipment ranging from computer assisted computational fluid dynamics (CFD) software analysis of data center cooling and ventilation to refrigerated cooling racks some leveraging water or inert liquid cooling.

Various metrics exist and others are evolving for measuring, estimating, reporting, analyzing and discussing IT data center infrastructure resource topics including servers, storage, networks, facilities and associated software management tools from a power, cooling and green environmental standpoint. The importance of metrics is to focus on the larger impact of a piece of IT equipment, which includes its cost and energy consumption factoring in cooling and other hosting or site environmental costs. Naturally energy costs and CO2 (carbon offsets) will vary by geography and region along with the type of electrical power being used (coal, natural gas, nuclear, wind, thermal, solar, etc.) and other factors that should be kept in perspective as part of the big picture.

Consequently, your view of and needs or interests around "Green" IT may be from an electrical power conservation perspective, to make the most of the power you consume, or to adapt to a given power footprint or ceiling. Your focus around "Green" data centers and green storage may be from a carbon savings standpoint, the proper disposition of old and retired IT equipment, or a data center cooling standpoint. Another area of focus may be that you are looking to cut your data footprint to align with your power, cooling and green footprint while enhancing application and data service delivery to your customers.

    Where to learn more

    The following are useful links to related efficient, effective, productive, flexible, scalable and resilient IT data center along with server storage I/O networking hardware and software that supports cloud and virtual green data centers.

    Various IT industry vendor and service provider links
    Green and Virtual Data Center: Productive Economical Efficient Effective Flexible
    Green and Virtual Data Center links
    Are large storage arrays dead at the hands of SSD?
    Closing the Green Gap
    Energy efficient technology sales depend on the pitch

    What this all means

The result of a green and virtual data center is that of a flexible, agile, resilient, scalable information factory that is also economical, productive, efficient as well as sustainable.

    Ok, nuff said (for now)

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Green and Virtual Data Center: Productive Economical Efficient Effective Flexible

    Green and Virtual Data Center

    A Green and Virtual IT Data Center (e.g. an information factory) means an environment comprising:

    • Habitat for technology or physical infrastructure (e.g. physical data center, yours, co-lo, managed service or cloud)
    • Power, cooling, communication networks, HVAC, smoke and fire suppression, physical security
    • IT data information infrastructure (e.g. hardware, software, valueware, cloud, virtual, physical, servers, storage, network)
    • Data Center Infrastructure Management (DCIM) along with IT Service Management (ITSM) software defined management tools
    • Tools for monitoring, resource tracking and usage, reporting, diagnostics, provisioning and resource orchestration
    • Portals and service catalogs for automated, user initiated and assisted operation or access to IT resources
    • Processes, procedures, best-practices, work-flows and templates (including data protection with HA, BC, BR, DR, backup/restore, logical and physical security)
    • Metrics that matter for management insight and awareness
• People and skill sets among other items

    Green and Virtual Data Center Resources

    Click here to learn about "The Green and Virtual Data Center" book (CRC Press) for enabling efficient, productive IT data centers. This book covers cloud, virtualization, servers, storage, networks, software, facilities and associated management topics, technologies and techniques including metrics that matter. This book by industry veteran IT advisor and author Greg Schulz is the definitive guide for enabling economic efficiency and productive next generation data center strategies.

    Intel recommended reading
    Publisher: CRC Press – Taylor & Francis Group
    By Greg P. Schulz of StorageIO www.storageio.com
     ISBN-10: 1439851739 and ISBN-13: 978-1439851739
     Hardcover * 370 pages * Over 100 illustrations figures and tables

    Read more here and order your copy here. Also check out Cloud and Virtual Data Storage Networking (CRC Press) a new book by Greg Schulz.

    Productive Efficient Effective Economical Flexible Agile and Sustainable

Green hype and green washing may be on the endangered species list and going away; however, green IT for servers, storage, networks, facilities as well as related software and management techniques that address energy efficiency, including power and cooling along with e-waste and environmental health and safety related issues, are topics that won’t be going away anytime soon. There is a growing green gap between green hype messaging or green washing and IT pain point issues including limits on availability or rising costs of power, cooling, floor-space as well as e-waste and environmental health and safety (PCFE). To close the gap will involve bringing green messaging and rhetoric closer to where IT organizations’ pain points are and where budget dollars exist that can address PCFE and other green related issues as a by-product.

The green gap will also be narrowed as awareness of broader green related topics coincides with IT data center pain points; in other words, alignment of messaging with IT issues that have or will have budget dollars allocated towards them to sustain business and economic growth via IT resource usage efficiency. Read more here.

    Where to learn more

    The following are useful links to related efficient, effective, productive, flexible, scalable and resilient IT data center along with server storage I/O networking hardware and software that supports cloud and virtual green data centers.

    Various IT industry vendor and service provider links
    Green and Virtual Data Center Primer
    Green and Virtual Data Center links
    Are large storage arrays dead at the hands of SSD?
    Closing the Green Gap
    Energy efficient technology sales depend on the pitch
    EPA Energy Star for Data Center Storage Update
    EPA Energy Star for data center storage draft 3 specification
    Green IT Confusion Continues, Opportunities Missed! 
    Green IT deferral blamed on economic recession might be result of green gap
    How much SSD do you need vs. want?
    How to reduce your Data Footprint impact (Podcast) 
    Industry trend: People plus data are aging and living longer
    In the data center or information factory, not everything is the same
    More storage and IO metrics that matter
    Optimizing storage capacity and performance to reduce your data footprint 
    Performance metrics: Evaluating your data storage efficiency
    PUE, Are you Managing Power, Energy or Productivity?
    Saving Money with Green Data Storage Technology
    Saving Money with Green IT: Time To Invest In Information Factories 
    Shifting from energy avoidance to energy efficiency
    SNIA Green Storage Knowledge Center
    Speaking of speeding up business with SSD storage
    SSD and Green IT moving beyond green washing
    Storage Efficiency and Optimization: The Other Green
    Supporting IT growth demand during economic uncertain times
    The Green and Virtual Data Center Book (CRC Press, Intel Recommended Reading)
    The new Green IT: Efficient, Effective, Smart and Productive 
    The other Green Storage: Efficiency and Optimization 
    What is the best kind of IO? The one you do not have to do

    Watch for more links and resources to be added soon.

    What this all means

The result of a green and virtual data center is that of a flexible, agile, resilient, scalable information factory that is also economical, productive, efficient as well as sustainable.

    Ok, nuff said (for now)

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Green and Virtual Data Center Links

    Updated 10/25/2017

    Green and Virtual IT Data Center Links

    Moving beyond Green Hype and Green washing

Green hype and green washing may be on the endangered species list and going away; however, green IT for servers, storage, networks, facilities as well as related software and management techniques that address energy efficiency, including power and cooling along with e-waste and environmental health and safety related issues, are topics that won’t be going away anytime soon.

    There is a growing green gap between green hype messaging or green washing and IT pain point issues including limits on availability or rising costs of power, cooling, floor-space as well as e-waste and environmental health and safety (PCFE).

To close the gap will involve bringing green messaging and rhetoric closer to where IT organizations’ pain points are and where budget dollars exist that can address PCFE and other green related issues as a by-product. The green gap will also be narrowed as awareness of broader green related topics coincides with IT data center pain points; in other words, alignment of messaging with IT issues that have or will have budget dollars allocated towards them to sustain business and economic growth via IT resource usage efficiency. Read more here.

Enabling Effective Productive Efficient Economical Flexible Scalable Resilient Information Infrastructures

    The following are useful links to related efficient, effective, productive, flexible, scalable and resilient IT data center along with server storage I/O networking hardware and software that supports cloud and virtual green data centers.

    Various IT industry vendors and other links

    Via StorageIOblog – Happy Earth Day 2016 Eliminating Digital and Data e-Waste

    Green and Virtual Data Center Primer
    Green and Virtual Data Center: Productive Economical Efficient Effective Flexible
    Are large storage arrays dead at the hands of SSD?
    Closing the Green Gap
    Energy efficient technology sales depend on the pitch
    EPA Energy Star for Data Center Storage Update
    EPA Energy Star for data center storage draft 3 specification
    Green IT Confusion Continues, Opportunities Missed! 
    Green IT deferral blamed on economic recession might be result of green gap
    How much SSD do you need vs. want?
    How to reduce your Data Footprint impact (Podcast) 
    Industry trend: People plus data are aging and living longer
    In the data center or information factory, not everything is the same
    More storage and IO metrics that matter
    Optimizing storage capacity and performance to reduce your data footprint 
    Performance metrics: Evaluating your data storage efficiency
    PUE, Are you Managing Power, Energy or Productivity?
    Saving Money with Green Data Storage Technology
    Saving Money with Green IT: Time To Invest In Information Factories 
    Shifting from energy avoidance to energy efficiency
    SNIA Green Storage Knowledge Center
    Speaking of speeding up business with SSD storage
    SSD and Green IT moving beyond green washing
    Storage Efficiency and Optimization: The Other Green
    Supporting IT growth demand during economic uncertain times
    The Green and Virtual Data Center Book (CRC Press, Intel Recommended Reading)
    The new Green IT: Efficient, Effective, Smart and Productive 
    The other Green Storage: Efficiency and Optimization 
    What is the best kind of IO? The one you do not have to do

    Intel recommended reading
Click here to learn about "The Green and Virtual Data Center" book (CRC Press) for enabling efficient, productive IT data centers. This book covers cloud, virtualization, servers, storage, networks, software, facilities and associated management topics, technologies and techniques including metrics that matter. This book by industry veteran IT advisor and author Greg Schulz is the definitive guide for enabling economic efficiency and productive next generation data center strategies. Read more here and order your copy here. Also check out Cloud and Virtual Data Storage Networking (CRC Press), a new book by Greg Schulz.

    White papers, analyst reports and perspectives

    Business benefits of data footprint reduction (archiving, compression, de-dupe)
    Data center I/O and performance issues – Server I/O and storage capacity gap
    Analysis of EPA Report to Congress (Law 109-431)
    The Many Faces of MAID Storage Technology
    Achieving Energy Efficiency with FLASH based SSD
    MAID 2.0: Energy Savings without Performance Compromises

    Articles, Tips, Blogs, Webcasts and Podcasts

    AP – SNIA Green Emerald Program and measurements
    AP – Southern California heat wave strains electrical system
    Ars Technica – EPA: Power usage in data centers could double by 2011
    Ars Technica – Meet the climate savers: Major tech firms launch war on energy-inefficient PCs – Article
    Askageek.com – Buying an environmental friendly laptop – November 2008
    Baseline – Examining Energy Consumption in the Data Center
    Baseline – Burts Bees: What IT Means When You Go Green
    Bizcovering – Green architecture for the masses
    Broadstuff – Are Green 2.0 and Enterprise 2.0 Incompatible?
    Business Week – CEO Guide to Technology
    Business Week – Computers’ elusive eco factor
    Business Week – Clean Energy – Its Getting Affordable
    Byte & Switch – Keeping it Green This Summer – Don’t be "Green washed"
    Byte & Switch – IBM Sees Green in Energy Certificates
    Byte & Switch – Users Search for power solutions
    Byte & Switch – DoE issues Green Storage Warning
    CBR – The Green Light for Green IT
    CBR – Big boxes make greener data centers
    CFO – Power Scourge
    Channel Insider – A 12 Step Program to Dispose of IT Equipment
    China.org.cn – China publishes Energy paper
    CIO – Green Storage Means Money Saved on Power
    CIO – Data center designers share secrets for going green
    CIO – Best Place to Build a Data Center in North America
    CIO Insight – Clever Marketing or the Real Thing?
    Cleantechnica – Cooling Data Centers Could Prevent Massive Electrical Waste – June 2008
    Climatebiz – Carbon Calculators Yield Spectrum of Results: Study
    CNET News – Linux coders tackle power efficiency
    CNET News – Research: Old data centers can be nearly as ‘green’ as new ones
CNET News – Congress, Greenpeace move on e-waste
    CNN Money – A Green Collar Recession
    CNN Money – IBM creates alliance with industry leaders supporting new data center standards
    Communication News – Utility bills key to greener IT
    Computerweekly – Business case for green storage
    Computerweekly – Optimising data centre operations
    Computerweekly – Green still good for IT, if it saves money
    Computerweekly – Meeting the Demands for storage
    Computerworld – Wells Fargo Free Data Center Cooling System
    Computerworld – Seven ways to get green and save money
    Computerworld – Build your data center here: The most energy-efficient locations
    Computerworld – EPA: U.S. needs more power plants to support data centers
    Computerworld – GreenIT: A marketing ploy or new technology?
    Computerworld – Gartner Criticizes Green Grid
    Computerworld – IT Skills no longer sufficient for data center execs.
    Computerworld – Meet MAID 2.0 and Intelligent Power Management
    Computerworld – Feds to offer energy ratings on servers and storage
    Computerworld – Greenpeace still hunting for truly green electronics
    Computerworld – How to benchmark data center energy costs
    ComputerworldUK – Datacenters at risk from poor governance
    ComputerworldUK – Top IT Leaders Back Green Survey
    ComputerworldMH – Lean and Green
    CTR – Strategies for enhancing energy efficiency
    CTR – Economies of Scale – Green Data Warehouse Appliances
    Datacenterknowledge – Microsoft to build Illinois datacenter
    Data Center Strategies – Storage The Next Hot Topic
    Earthtimes – Fujitsu installs hydrogen fuel cell power
    eChannelline – IBM Goes Green(er)
    Ecoearth.info – California Moves To Speed Solar, Wind Power Grid Connections
    Ecogeek – Solar power company figures they can power 90% of America
    Economist – Cool IT
    Electronic Design – How many watts in that Gigabyte
    eMazzanti – Desktop virtualization movement creeping into customer sites
    ens-Newswire – Western Governors Ask Obama for National Green Energy Plan
    Environmental Leader – Best Place to Build an Energy Efficient Data Center
    Environmental Leader – New Guide Helps Advertisers Avoid Greenwash Complaints
    Enterprise Storage Forum – Power Struggles Take Center Stage at SNW
    Enterprise Storage Forum – Pace Yourself for Storage Power & Cooling Needs
    Enterprise Storage Forum – Storage Power and Cooling Issues Heat Up – StorageIO Article
    Enterprise Storage Forum – Score Savings With A Storage Power Play
    Enterprise Storage Forum – I/O, I/O, Its off to Virtual Work I Go
    Enterprise Storage Forum – Not Just a Flash in the Pan – Various SSD options
    Enterprise Storage Forum – Closing the Green Gap – Article August 2008
    EPA Report to Congress and Public Law 109-431 – Reports & links
    eWeek – Saving Green by being Green
    eWeek – ‘No Cooling Necessary’ Data Centers Coming?
    eWeek – How the ‘Down’ Macroeconomy Will Impact the Data Storage Sector
    ExpressComputer – In defense of Green IT
    ExpressComputer – What data center crisis
    Forbes – How to Build a Quick Charging Battery
    GCN – Sun launches eco data center
    GreenerComputing – New Code of Conduct to Establish Best Practices in Green Data Centers
    GreenerComputing – Silicon valley’s green detente
    GreenerComputing – Majority of companies plan to green their data centers
    GreenerComputing – Citigroup to spend $232M on Green Data Center
    GreenerComputing – Chicago and Quincy, WA Top Green Data Center Locations
    GreenerComputing – Using airside economizers to chill data center cooling bills
    GreenerComputing – Making the most of asset disposal
    GreenerComputing – Greenpeace vendor rankings
    GreenerComputing – Four Steps to Improving Data Center Efficiency without Capital Expenditures
    GreenerComputing – Enabling a Green and Virtual Data Center
    Green-PC – Strategic Steps Down the Green Path
    Greeniewatch – BBC news chiefs attack plans for climate change campaign
    Greeniewatch – Warmest year predictions and data that has not yet been measured
    GoverenmentExecutive – Public Private Sectors Differ on "Green" Efforts
    HPC Wire – How hot is your code
    Industry Standard – Why green data centers mean partner opportunities
    InformationWeek – It could be 15 years before we know what is really green
InformationWeek – Beyond Server Consolidation
    InformationWeek – Green IT Beyond Virtualization: The Case For Consolidation
    InfoWorld – Sun celebrates green datacenter innovations
    InfoWorld – Tech’s own datacenters are their green showrooms
    InfoWorld – 2007: The Year in Green
    InfoWorld – Green Grid Announces Tech Forum in Feb 2008
    InfoWorld – SPEC seeds future green-server benchmarks
    InfoWorld – Climate Savers green catalog proves un-ripe
    InfoWorld – Forester: Eco-minded activity up among IT pros
    InfoWorld – Green ventures in Silicon Valley, Mass reaped most VC cash in ’07
    InfoWorld – Congress misses chance to see green-energy growth
    InfoWorld – Unisys pushes green envelope with datacenter expansion
    InfoWorld – No easy green strategy for storage
    Internet News – Storage Technologies for a Slowing Economy
    Internet News – Economy will Force IT to Transform
    ITManagement – Green Computing, Green Revenue
    itnews – Data centre chiefs dismiss green hype
    itnews – Australian Green IT regulations could arrive this year
    IT Pro – SNIA Green storage metrics released
    ITtoolbox – MAID discussion
    Linux Power – Saving power with Linux on Intel platforms
    MSNBC – Microsoft to build data center in Ireland
    National Post – Green technology at the L.A. Auto Show
    Network World – Turning the datacenter green
    Network World – Color Interop Green
    Network World – Green not helpful word for setting environmental policies
    NewScientistEnvironment – Computer servers as bad for climate as SUVs
    Newser – Texas commission approves nation’s largest wind power project
    New Yorker – Big Foot: In measuring carbon emissions, it’s easy to confuse morality and science
    NY Times – What the Green Bubble Will Leave Behind
    PRNewswire – Al Gore and Cisco CEO John Chambers to debate climate change
    Processor – More than just monitoring
    Processor – The new data center: What’s hot in Data Center physical infrastructure:
    Processor – Liquid Cooling in the Data Center
    Processor – Curbing IT Power Usage
    Processor – Services To The Rescue – Services Available For Today’s Data Centers
    Processor – Green Initiatives: Hire A Consultant?
    Processor – Energy-Saving Initiatives
Processor – The EPA’s Low Carbon Campaign
    Processor – Data Center Power Planning
    SAN Jose Mercury – Making Data Centers Green
    SDA-Asia – Green IT still a priority despite Credit Crunch
    SearchCIO – EPA report gives data centers little guidance
    SearchCIO – Green IT Strategies Could Lead to hefty ROIs
    SearchCIO – Green IT In the Data Center: Plenty of Talk, not much Walk
    SearchCIO – Green IT Overpitched by Vendors, CIOs beware
    SearchDataCenter – Study ranks cheapest places to build a data center
    SearchDataCenter – Green technology still ranks low for data center planners
SearchDataCenter – Green Data Center: Energy Efficient Computing in the 21st Century
    SearchDataCenter – Green Data Center Advice: Is LEED Feasible
    SearchDataCenter – Green Data Centers Tackle LEED Certification
SearchDataCenter – PG&E invests in data center efficiency
    SearchDataCenter – A solar powered datacenter
    SearchSMBStorage – Improve your storage energy efficiency
    SearchSMBStorage – SMB capacity planning: Focusing on energy conservation
    SearchSMBStorage – Data footprint reduction for SMBs
    SearchSMBStorage – MAID & other energy-saving storage technologies for SMBs
    SearchStorage – How to increase your storage energy efficiency
    SearchStorage – Is storage now top energy hog in the data center
    SearchStorage – Storage eZine: Turning Storage Green
    SearchStorage – The Green Storage Gap
    SearchStorageChannel – Green Data Storage Projects
    Silicon.com – The greening of IT: Cooling costs
    SNIA – SNIA Green Storage Overview
    SNIA – Green Storage
    SNW – Beyond Green-wash
    SNW Spring 2008 Beyond Green-wash
    State.org – Why Texas Has Its Own Power Grid
    StorageDecisions – Different Shades of Green
    Storage Magazine – Storage still lacks energy metrics
    StorageIOblog – Posts pertaining to Green, power, cooling, floor-space, EHS (PCFE)
    Storage Search – Various postings, news and topics pertaining to Green IT
    Technology Times – Revealed: the environmental impact of Google searches
    TechTarget – Data center power efficiency
    TechTarget – Tip for determining power consumption
    Techworld – Inside a green data center
    Techworld – Box reduction – Low hanging green datacenter fruit
Techworld – Datacenter used to heat swimming pool
    Theinquirer – Spansion and Virident flash server farms
    Theinquirer – Storage firms worry about energy efficiency How green is the valley
    TheRegister – Data Centre Efficiency, the good, the bad and the way to hot
    TheRegister – Server makers snub whalesong for serious windmill abuse
    TheRegister – Green data center threat level: Not Green
    The Standard – Growing cynicism around going Green
    ThoughtPut – Energy Central
    Thoughtput – Power, Cooling, Green Storage and related industry trends
    Wallstreet Journal – Utilities Amp Up Push To Slash Energy Use
    Wallstreet Journal – The IT in Green Investing
    Wallstreet Journal – Tech’s Energy Consumption on the Rise
    Washingtonpost – Texas approves major new wind power project
    WhatPC – Green IT: It doesnt have to cost the earth
    WHIRnews – SingTel building green data center
    Wind-watch.org – Loss of wind causes Texas power grid emergency
    WyomingNews – Overcoming Greens Stereotype
Yahoo – Washington Senate Unveils Green Job Plan
    ZDnet – Will supercomputer speeds hit a plateau?
    Are data centers causing climate change

    News and Press Releases

    Business Wire – The Green and Virtual Data Center
    Enterprise Storage Forum – Intel and HGST (Hitachi) partner on FLASH SSD
    PCworld – Intel and HP describe Green Strategy
    DoE – To Invest Approximately $1.3 Billion to Commercialize CCS Technology
    Yahoo – Shell Opens Los Angeles’ First Combined Hydrogen and Gasoline Station
    DuPont – DuPont Projects Save Enough Energy to Power 25,000 Homes
    Gartner – Users Are Becoming Increasingly Confused About the Issues and Solutions Surrounding Green IT

    Websites and Tools

Various power, cooling, emissions and device configuration tools and calculators
    Solar Action Alliance web site
    SNIA Emerald program
    Carbon Disclosure Project
    The Chicago Climate Exchange
    Climate Savers
    Data Center Decisions
    Electronic Industries Alliance (EIA)
    EMC – Digital Life Calculator
    Energy Star
    Energy Star Data Center Initiatives
    Greenpeace – Technology ranking website also here
    GlobalActionPlan
    KyotoPlanet
    LBNL High Tech Data centers
    Millicomputing
    RoHS & WEE News
    Storage Performance Council (SPC)
    SNIA Green Technical Working Group
    SPEC
    Transaction Processing Council (TPC)
    The Green Grid
    The Raised Floor
    Terra Pass Carbon Offset Credits – Website with CO2 calculators
    Energy Information Administration – EIA (US and International Electrical Information)
    U.S. Department of Energy and related information
    U.S. DOE Energy Efficient Industrial Programs
    U.S. EPA server and storage energy topics
    Zerofootprint – Various "Green" and environmental related links and calculators

    Vendor Centric and Marketing Website Links and tools

Vendors and organizations have different types of calculators, some with a focus on power, cooling, floor space, carbon offsets or emissions, ROI, TCO and other IT data center infrastructure resource management. Following is an evolving list and by no means definitive, even for a particular vendor, as different manufacturers may have multiple calculators for different product lines or areas of focus.

    Brocade – Green website
    Cisco – Green and Environmental websites here, here and here
    Dell – Green website
    EMC – EMC Energy, Power and Cooling Related Website
    HDS – How to be green – HDS Positioning White Paper
    HP – HP Green Website
    IBM – Green Data Center – IBM Positioning White Paper
    IBM – Green Data Center for Education – IBM Positioning White Paper
    Intel – What is an Efficient Data Center and how do I measure it?
    LSI – Green site and white paper
    NetApp – Press Release and related information
    Sun – Various articles and links
    Symantec – Global 2000 Struggle to Adopt "Green" Data Centers – Announcement of Survey results
    ACTON
    Adinfa
    APC
    Australian Conservation Foundation
    Avocent
    BBC
    Brocade
    Carbon Credit Calculator UK
    Carbon Footprint Site
    Carbon Planet
    Carbonify
    CarbonZero
    Cassatt
    CO2 Stats Site
    Copan
    Dell
    DirectGov UK Acton
    Diesel Service & Supply Power Calculator & Converter
    Eaton Powerware
    Ecobusinesslinks
    Ecoscale
    EMC Power Calculator
    EMC Web Power Calculator
    EMC Digital Life Calculator
    EPA Power Profiler
    EPA Related Tools
    EPEAT
    Google UK Green Footprint
    Green Grid Calculator
    HP and more here
    HVAC Calculator
    IBM
    Logicalis
    Kohler Power (Business and Residential)
    Micron
    MSN Carbon Footprint Calculator
    National Wildlife Foundation
    NEF UK
    NetApp
    Rackwise
    Platespin
    Safecom
    Sterling Planet
    Sun and more here and here and here
    Tandberg
    TechRepublic
    TerraPass Carbon Offset Credits
    Thomas Kreen AG
    Toronto Hydro Calculator
    80 Plus Calculator
    VMware
    42u Green Grid PUE DCiE calculator
    42u energy calculator

    Green and Virtual Tools

    What’s your power, cooling, floor space, energy, environmental or green story?

What’s your power, cooling, floor space, energy, environmental or green story? Do you have questions or want to learn more about energy issues pertaining to IT data center and data infrastructure topics? Do you have a solution or technology or a success story that you would like to share with us pertaining to data storage and server I/O energy optimization strategies? Do you need assistance in developing, validating or reviewing your strategy or story? Contact us at: info@storageio.com or 651-275-1563 to learn more about green data storage and server I/O or to schedule a briefing to tell us about your energy efficiency and effectiveness story pertaining to IT data centers and data infrastructures.

Disclaimer and note: URLs submitted for inclusion on this site will be reviewed for consideration and to be in generally accepted good taste in regards to the theme of this site. Best effort has been made to validate and verify the URLs that appear on this page and website, however they are subject to change. The author and/or maintainer(s) of this page and web site make no endorsement of and assume no responsibility for the URLs and their content that are listed on this page.

    Green and Virtual Metrics

    Chapter 5 "Measurement, Metrics, and Management of IT Resources" in the book "The Green and Virtual Data Center" (CRC Press) takes a look at the importance of being able to measure and monitor to enable effective management and utilization of IT resources across servers, storage, I/O networks, software, hardware and facilities.

There are many different points of interest for collecting metrics in an IT data center for servers, storage, networking and facilities, along with various perspectives. Data center personnel have varied interests, from a facilities view to a resource (server, storage, networking) usage and effectiveness view, for normal use as well as for planning purposes or comparisons when evaluating new technology. Vendors have different uses for metrics during R&D, Q/A testing and marketing or sales campaigns as well as on-going service and support. Industry trade groups including 80 Plus, SNIA and The Green Grid, along with government groups including the EPA Energy Star program, are working to define and establish applicable metrics pertinent to green and virtual data centers.

Acronym – Description – Comment

DCiE – Data center Efficiency = (IT equipment power / Total facility power) * 100 – Shows a ratio of how well a data center is consuming power
DCPE – Data center Performance Efficiency = Effective IT workload / Total facility power – Shows how effectively a data center is consuming power to produce a given level of service or work, such as energy per transaction or energy per business function performed
PUE – Power usage effectiveness = Total facility power / IT equipment power – Inverse of DCiE
Kilowatts (kW) – Watts / 1,000 – One thousand watts
Annual kWh – kWh x 24 x 365 – kWh used in one year
Megawatts (MW) – kW / 1,000 – One thousand kW
BTU/hour – Watts x 3.413 – Heat generated in an hour from using energy, in British Thermal Units; 12,000 BTU/hour can equate to 1 ton of cooling
kWh – 1,000 watt hours – The number of watts used in one hour
Watts – Amps x Volts (e.g. 12 amps x 12 volts = 144 watts) – Unit of electrical power
Watts – BTU/hour x 0.293 – Converts BTU/hr to watts
Volts – Watts / Amps (e.g. 144 watts / 12 amps = 12 volts) – The amount of force on electrons
Amps – Watts / Volts (e.g. 144 watts / 12 volts = 12 amps) – The flow rate of electricity
Volt-Amperes (VA) – Volts x Amps – Power is sometimes expressed in volt-amperes
kVA – Volts x Amps / 1,000 – Number of kilovolt-amperes
kW – kVA x power factor – Power factor is the efficiency of a piece of equipment's use of power
kVA – kW / power factor – Kilovolt-amperes
U – 1U = 1.75" – EIA metric describing the height of equipment in racks
Activity / Watt – Amount of work accomplished per unit of energy consumed (IOPS, transactions or bandwidth per watt) – Indicator of how much work is done and how efficiently energy is used to accomplish useful work. This metric applies to active workloads or actively used and frequently accessed storage and data. Examples include IOPS per watt, bandwidth per watt, transactions per watt, and users or streams per watt. Activity per watt should be used in conjunction with other metrics such as how much capacity is supported per watt and total watts consumed for a representative picture.
IOPS / Watt – Number of I/O operations (or transactions) / energy (watts) – Indicator of how effectively energy is being used to perform a given amount of work. The work could be I/Os, transactions, throughput or another indicator of application activity, for example SPC-1 per watt, SPEC per watt, TPC per watt or transactions per watt.
Bandwidth / Watt – GBps, TBps or PBps per watt – Amount of data transferred or moved per second per unit of energy used. Often confused with capacity per watt, since both bandwidth and capacity reference GBytes, TBytes or PBytes.
Capacity / Watt – GB, TB or PB (storage capacity space) per watt – Indicator of how much capacity (space) is supported in a given configuration or footprint per watt of energy. For inactive, off-line or archive data, capacity per watt can be an effective measurement gauge; for active workloads, activity per watt also needs to be looked at to get a representative indicator of how energy is being used.
MHz / Watt – Processor performance / energy (watts) – Indicator of how effectively energy is being used by a CPU or processor
Carbon Credit – Carbon offset credit – Offset credits that can be bought and sold to offset CO2 emissions
CO2 Emission – Average 1.341 lbs per kWh of electricity generated – The amount of average carbon dioxide (CO2) emissions from generating an average kWh of electricity

Various power, cooling, floor space and green storage or IT related metrics
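For convenience, here is a small Python sketch of the basic electrical conversions from the table above; the helper function names and the example amp, volt, kVA and power factor values are just illustrative, not from any standard library:

```python
# Small helpers mirroring the conversion formulas in the table above.
def watts(amps, volts):              # Watts = Amps x Volts
    return amps * volts

def btu_per_hour(watts_in):          # BTU/hour = Watts x 3.413
    return watts_in * 3.413

def annual_kwh(kw):                  # kWh used in one year at a constant draw
    return kw * 24 * 365

def kw_from_kva(kva, power_factor):  # kW = kVA x power factor
    return kva * power_factor

if __name__ == "__main__":
    w = watts(amps=12, volts=12)             # 144 watts, as in the table example
    print(w, "watts")
    print(btu_per_hour(w), "BTU/hour")       # heat that cooling must remove
    print(annual_kwh(w / 1000), "kWh/year")  # energy used over a year
    print(kw_from_kva(kva=10, power_factor=0.9), "kW from 10 kVA at PF 0.9")
```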

Metrics include Data center Efficiency (DCiE), via The Green Grid, which is an indicator ratio of IT data center energy efficiency defined as IT equipment power (servers, disk and tape storage, networking switches, routers, printers, etc.) / Total facility power x 100 (for a percentage). For example, if the sum of all IT equipment energy usage came to 1,500 kilowatt hours (kWh) per month, yet the total facility power including UPS, energy switching, power conversion and filtering, cooling and associated infrastructure as well as the IT equipment came to 3,500 kWh, the DCiE would be (1,500 / 3,500) x 100 = 43%. DCiE can be used as a ratio, for example, to show in the above scenario that IT equipment accounts for about 43% of the energy consumed by the data center, with the remaining 57% of electrical energy being consumed by cooling, conversion and conditioning or lighting.

Power usage effectiveness (PUE) is the indicator ratio of the total energy being consumed by the data center to the energy being used to operate IT equipment. PUE is defined as total facility power / IT equipment energy consumption. Using the above scenario, PUE = 2.333 (3,500 / 1,500), which means that a server requiring 100 watts of power would actually require (2.333 x 100) 233.3 watts of energy, including both direct power and cooling costs. Similarly, a storage system that required 1,500 kWh of energy to power would require (1,500 x 2.333) 3,499.5 kWh of electrical power including cooling.

    Another metric that has the potential to be meaningful is Data Center Performance Efficiency (DCPE), which takes into consideration how much useful and effective work is performed by the IT equipment and data center per unit of energy consumed. DCPE is defined as useful work / total facility power, with an example being some number of transactions processed using servers, networks and storage divided by the energy needed to power and cool the data center. A relatively easy and straightforward implementation of DCPE is an IOPS per watt measurement that looks at how many IOPS can be performed (regardless of size or type, such as reads or writes) per unit of energy, in this case watts.

    DCPE = Useful work / Total facility power, for example IOPS per watt of energy used

    DCiE = IT equipment energy / Total facility power = 1 / PUE

    PUE = Total facility energy / IT equipment energy

    IOPS per Watt = Number of IOPS (or bandwidth) / energy used by the storage system
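
    To make these formulas concrete, here is a minimal Python sketch (not part of the original definitions) that plugs in the example numbers used above, that is 1,500 kWh of IT equipment energy versus 3,500 kWh for the total facility, plus the average 1.341 lbs of CO2 per kWh. The storage system IOPS and wattage figures are hypothetical assumptions added purely for illustration.

        # Minimal sketch of the green IT metrics described above.
        # The kWh values are the examples from the text; the IOPS and watt
        # figures for the storage system are hypothetical.
        it_equipment_kwh = 1_500.0    # monthly energy used by IT equipment
        total_facility_kwh = 3_500.0  # monthly energy for the entire facility

        dcie_pct = (it_equipment_kwh / total_facility_kwh) * 100  # DCiE as a percentage
        pue = total_facility_kwh / it_equipment_kwh               # PUE = total facility / IT equipment

        # A device drawing 100 watts effectively costs PUE * 100 watts once cooling is included
        effective_watts_for_100w_device = pue * 100

        # Hypothetical activity per watt: a storage system doing 50,000 IOPS while drawing 750 watts
        iops, storage_watts = 50_000, 750
        iops_per_watt = iops / storage_watts

        # CO2 estimate using the average 1.341 lbs per kWh figure cited above
        co2_lbs = total_facility_kwh * 1.341

        print(f"DCiE = {dcie_pct:.0f}%   PUE = {pue:.3f}")
        print(f"Effective draw of a 100 W device: {effective_watts_for_100w_device:.1f} W")
        print(f"IOPS per watt: {iops_per_watt:.1f}")
        print(f"Estimated CO2: {co2_lbs:.0f} lbs per month")

    Running this reproduces the 43% DCiE and 2.333 PUE from the scenario above, along with roughly 67 IOPS per watt and about 4,694 lbs of CO2 per month for the hypothetical figures.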

    The importance of these numbers and metrics is to focus on the larger impact of a piece of IT equipment, one that includes its cost and energy consumption and factors in cooling and other hosting or site environmental costs. Naturally energy costs and CO2 (carbon offsets) will vary by geography and region along with the type of electrical power being used (coal, natural gas, nuclear, wind, thermal, solar, etc.) and other factors that should be kept in perspective as part of the big picture. Learn more in Chapter 5 "Measurement, Metrics, and Management of IT Resources" in the book "The Green and Virtual Data Center" (CRC) and in the book Cloud and Virtual Data Storage Networking (CRC).

    Disclaimer and notes

    Disclaimer and note: URLs submitted for inclusion on this site will be reviewed for consideration and to be in generally accepted good taste in regard to the theme of this site. Best effort has been made to validate and verify the URLs that appear on this page and web site, however they are subject to change. The author and/or maintainer(s) of this page and web site make no endorsement of and assume no responsibility for the URLs and their content that are listed on this page.

    What this all means

    The result of a green and virtual data center is a flexible, agile, resilient, scalable information factory that is also economical, productive, efficient and sustainable.

    Ok, nuff said (for now)

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    DIY converged server software defined storage on a budget using Lenovo TS140

    Attention DIY Converged Server Storage Bargain Shoppers

    Software defined storage on a budget with Lenovo TS140

    server storage I/O trends

    Recently I put together a two-part series of server storage I/O items to get a geek as a gift (read part I here and part II here) that also contains items that can be used for accessorizing servers such as the Lenovo ThinkServer TS140.

    Image via Lenovo.com

    Likewise I have reviewed the Lenovo ThinkServer TS140 in the past and liked it enough to buy some (read the reviews here and here), along with reviewing the larger TD340 here.

    Why is this of interest

    Do you need or want to do a Do It Yourself (DIY) build of a small server compute cluster, a software defined storage cluster (e.g. scale-out), or perhaps converged storage for VMware VSAN, Microsoft SOFS or something else?

    Do you need a new server, a second or third server, or to expand a cluster, create a lab or similar, and want the ability to tailor your system without shopping for a motherboard, enclosure, power supply and so forth?

    Are you a virtualization or software defined person looking to create a small VMware Virtual SAN (VSAN) needing three or more servers to build a proof of concept or personal lab system?

    Then the TS140 could be a fit for you.

    storage I/O Lenovo TS140
    Image via StorageIOlabs, click to see review

    Why the Lenovo TS140 now?

    Recently I have seen a lot of traffic on my site from people viewing my reviews of the Lenovo TS140, of which I have a few. In addition I have received questions from people via the comments section as well as elsewhere about the TS140, and while shopping at Amazon.com for some other things I noticed that there were some good value deals on different TS140 models.

    I tend to buy the TS140 models that are bare bones, having an enclosure, power supply and fan, CD/DVD, USB ports, processor and a minimal amount of DRAM memory. For processors mine have the Intel E3-1225 v3, which is quad-core and has various virtualization assist features (e.g. good for VMware and other hypervisors).

    What I saw on Amazon the other day (also elsewhere) were some Intel i3-4130 dual-core based systems (these do not have all the virtualization features, just the basics) in a bare configuration (e.g. no Hard Disk Drive (HDD), 4GB DRAM, processor, motherboard, power supply and fan, LAN port and USB) with a price of around $220 USD (your price may vary depending on timing, venue, prime or other membership and other factors). Not bad for a system that you can tailor to your needs. However what also caught my eye were the TS140 models that have the Intel E3-1225 v3 (e.g. quad-core, 3.2GHz) processor matching the others I have, with a price of around $330 USD including shipping (your price will vary depending on venue and other factors).

    What are some things to be aware of?

    Some caveats of this solution approach include:

    • There are probably other similar types of servers, comparable by price, performance or features
    • Compare apples to apples, e.g. same or better processor, memory, OS, PCIe speed and type of slots, LAN ports
    • Not as robust of a solution as those you can find costing tens of thousands of dollars (or more)
    • A DIY system which means you select the other hardware pieces and handle the service and support of them
    • Hardware platform approach where you choose and supply your software of choice
    • For entry-level environments that have the floor-space to accommodate towers vs. rack-mount or other alternatives
    • Software agnostic: basically an empty server chassis (with power supply, motherboard, PCIe slots and other basics)
    • Possible candidate for smaller SMB (Small Medium Business), ROBO (Remote Office Branch Office), SOHO (Small Office Home Office) or labs that are looking for DIY
    • A starting place and stimulus for thinking about doing different things

    What could you do with this building block (e.g. server)

    Create a single or multi-server based system for

    • Virtual Server Infrastructure (VSI) including KVM, Microsoft Hyper-V, VMware ESXi, Xen among others
    • Object storage
    • Software Defined Storage including DataCore, Microsoft SOFS, OpenStack, StarWind, VMware VSAN, various XFS and ZFS based options among others
    • Private or hybrid cloud including using OpenStack among other software tools
    • Create a Hadoop big data analytics cluster or grid
    • Establish a video or media server, use for gaming or a backup (data protection) server
    • Update or expand your lab and test environment
    • General purpose SMB, ROBO or SOHO single or clustered server

    VMware VSAN server storageIO example

    What you need to know

    Like some other servers in this class, you need to pay attention to what it is that you are ordering; check out the various reviews, comments and questions as well as verify the make, model and configuration. For example, what is included and what is not, the warranty and the return policy among other things. In the case of some of the TS140 models, they do not have an HDD, OS, keyboard, monitor or mouse, and they come with different types of processors and memory. Not all the processors are the same, so pay attention: visit the Intel Ark site to look up a specific processor's configuration to see if it fits your needs, as well as visit the hardware compatibility list (HCL) for the software that you are planning to use. Note that these should be best practices regardless of make, model, type or vendor of server, storage and I/O networking hardware and software.

    What you will need

    This list assumes that you have obtained a model without an HDD, keyboard, video, mouse or operating system (OS) installed:

    • Update your BIOS if applicable, check the Lenovo site
    • Enable virtualization and other advanced features via your BIOS
    • Software such as an Operating System (OS), hypervisor or other distribution (load via USB or CD/DVD if present)
    • SSD, SSHD/HHDD, HDD or USB flash drive for installing OS or other software
    • Keyboard, video, mouse (or a KVM switch)

    What you might want to add (have it your way)

    • Keyboard, video mouse or a KVM switch (See gifts for a geek here)
    • Additional memory
    • Graphics card, GPU or PCIe riser
    • Additional SSD, SSHD/HHDD or HDD for storage
    • Extra storage I/O and networking ports

    Extra networking ports

    You can easily add some GbE (or faster) ports, including using the PCIe x1 slot, or use one of the other slots for a quad-port GbE (or faster) card, not to mention get some InfiniBand single or dual port cards such as the Mellanox Connectx II or Connectx III that support QDR and can run in IBA or 10GbE modes. If you only have two or three servers in a cluster, grid or ring configuration you can run point-to-point topologies using InfiniBand (and some other network interfaces) without using a switch; however you decide if you need or want switched or non-switched (I have a switch). Note that with VMware (and perhaps other hypervisors or OS) you may need to update the drivers for the Realtek GbE LAN on Motherboard port (see links below).

    Extra storage ports

    For extra storage capacity (and performance) you can easily add PCIe G2 or G3 HBAs (SAS, SATA, FC, FCoE, CNA, UTA, IBA for SRP, etc.) or RAID cards among others. Depending on your choice of cards, you can then attach to more internal storage, external storage or some combination with different adapters, cables, interposers and connectivity options. For example I have used TS140s with PCIe Gen 3 12Gbs SAS HBAs attached to 12Gbs SAS SSDs (and HDDs) with the ability to drive performance to see what those devices are capable of doing.

    TS140 Hardware Defined My Way

    As an example of how a TS140 can be configured: start with one of the base E3-1225 v3 models with 4GB RAM and no HDD (e.g. around $330 USD, your price will vary), add a 4TB Seagate HDD (or two or three) for around $140 USD each (your price will vary), and add a 480GB SATA SSD for around $340 USD (your price will vary), with those attached to the internal SATA ports. To bump up network performance, how about a Mellanox Connectx II dual port QDR IBA/10GbE card for around $140 USD (your price will vary), plus around $65 USD for a QSFP cable (your price will vary), and some extra memory (use what you have or shop around), and you have a platform ready to go for around $1,000 USD. Add some more internal or external disks, bump up the memory, put in some extra network adapters and your price will go up a bit, however think about what you can have for a robust, not so little system. For you VMware vgeeks, think about the proof of concept VSAN that you can put together, granted you will have to do some DIY items.
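
    As a quick sanity check on the arithmetic, here is a minimal Python sketch (not from the original post) that tallies the example build above; the prices are the rough USD figures quoted in the paragraph and will vary by venue, timing and membership.

        # Rough tally of the example TS140 build described above.
        # Prices are the post's approximate USD examples and will vary.
        build = {
            "TS140 base (E3-1225 v3, 4GB RAM, no HDD)": 330,
            "4TB Seagate HDD": 140,
            "480GB SATA SSD": 340,
            "Mellanox Connectx II dual port QDR IBA/10GbE card": 140,
            "QSFP cable": 65,
        }

        for item, usd in build.items():
            print(f"{item:<50} ${usd}")
        print(f"{'Estimated total':<50} ${sum(build.values())}")

    That works out to roughly $1,015 before any extra memory or additional drives, in line with the around $1,000 figure above.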

    Some TS140 resources

    Lenovo TS140 resources include

    • TS140 StorageIOlab review (here and here)
    • TS140 Lenovo ordering website
    • TS140 Data and Spec Sheet (PDF here)
    • Lenovo ThinkServer TS140 Manual (PDF here) and (PDF here)
    • Intel E3-1200 v3 processors capabilities (Web page here)
    • Enabling Virtualization Technology (VT) in TS140 BIOS (Press F1) (Read here)
    • Enabling Intel NIC (82579LM) GbE with VMware (Link to user forum and a blog site here)

    Image via Lenovo.com

    What this all means

    Like many servers in its category (price, capabilities, packaging) you can do a lot of different things with the TS140, as well as hardware define it with accessories or use your own software. Depending on how you end up hardware defining the TS140 with extra memory, HDDs, SSDs, adapters or other accessories and software, your cost will vary. However you can also put together a pretty robust system without breaking your budget while meeting different needs.

    Is this for everybody? Nope

    Is this for more than a lab, experimental, hobbyist or gamer use? Sure, with some caveats.

    Is this an apples to apples comparison vs. some other solutions including VSANs? Nope, not even close, maybe apples to oranges.

    Do I like the TS140? Yup, starting with a review I did about a year ago, I liked it so much I bought one, then another, then some more.

    Are these the only servers I have, use or like? Nope, I also have systems from HP and Dell, as well as test drive and review others.

    Why do I like the TS140? It's a value for some things, which means that while affordable (not to be confused with cheap) it has features, scalability and the ability to be hardware defined for what I want or need to use it as, along with software defining it to be different things. Key for me is the PCIe Gen 3 support with multiple slots (and types of slots), a reasonable amount of memory, internal housing for 3.5" and 2.5" drives that can attach to the on-board SATA ports, and a media device (CD/DVD) if needed, or remove it to make room for more HDDs and SSDs. In other words, it's a platform where instead of shopping for a motherboard, enclosure, power supply, processor and related things, I get the basics, then configure and reconfigure as needed.

    Another reason I like the TS140 is that I get to have the server basically my way, in that I do not have to order it with some minimum number of HDDs, an OS installed, more memory than needed or other things that I may or may not be able to use. Granted I need to supply the extra memory, HDDs, SSDs, PCIe adapters and network ports along with software, however for me that's not too much of an issue.

    What don’t I like about the TS140? You can read more about my thoughts on the TS140 in my review here, or its bigger sibling the TD340 here, however I would like to see more memory slots for scaling up. Granted for what these cost, it’s just as easy to scale-out and after all, that’s what a lot of software defined storage prefers these days (e.g. scale-out).

    The TS140 is a good platform for many things, granted not for everything; that's why, like storage, networking and other technologies, there are different server options for various needs. Exercise caution when doing apples to oranges comparisons on price alone; compare what you are getting in terms of processor type (and its functionality), expandable memory, PCIe speed, type and number of slots, LAN connectivity and other features to meet your needs or requirements. Also keep in mind that some systems which include a keyboard or an HDD with an OS installed might be more expensive; if you can use those components, then they have value and should be factored into your cost, benefit and return on investment.

    And yes, I just added a few more TS140s that join other recent additions to the server storageIO lab resources…

    Anybody want to guess what I will be playing with, among other things, during the upcoming holiday season?

    Ok, nuff said, for now…

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    December 2014 Server StorageIO Newsletter

    December 2014

    Hello and welcome to this December Server and StorageIO update newsletter.

    Seasons Greetings

    Seasons greetings

    Commentary In The News

    StorageIO news

    Following are some StorageIO industry trends perspectives comments that have appeared in various venues. Cloud conversations continue to be popular including concerns about privacy, security and availability. Over at BizTech Magazine there are some comments about cloud and ROI. Some comments on AWS and Google SSD services can be viewed at SearchAWS. View other trends comments here

    Tips and Articles

    View recent as well as past tips and articles here

    StorageIOblog posts

    Recent StorageIOblog posts include:

    View other recent as well as past blog posts here

    In This Issue

  • Industry Trends Perspectives
  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Events & Activities

    View other recent and upcoming events here

    Webinars

    December 11, 2014 – BrightTalk
    Server & Storage I/O Performance

    December 10, 2014 – BrightTalk
    Server & Storage I/O Decision Making

    December 9, 2014 – BrightTalk
    Virtual Server and Storage Decision Making

    December 3, 2014 – BrightTalk
    Data Protection Modernization

    Videos and Podcasts

    StorageIO podcasts are also available at StorageIO.tv

    From StorageIO Labs

    Research, Reviews and Reports

    StarWind Virtual SAN for Microsoft SOFS

    May require registration
    This looks at the shared storage needs of SMBs and ROBOs leveraging Microsoft Scale-Out File Server (SOFS). The focus is on Microsoft Windows Server 2012, Server Message Block (SMB) 3.0, SOFS and StarWind Virtual SAN management software.

    View additional reports and lab reviews here.

    Resources and Links

    Check out these useful links and pages:
    storageio.com/links
    objectstoragecenter.com
    storageioblog.com/data-protection-diaries-main/
    storageio.com/ssd

    Enjoy this edition of the Server and StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts along with in the news commentary appearing soon.

    Seasons greetings 2014

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved