Server and Storage I/O Benchmarking 101 for Smarties

Server Storage I/O Benchmarking 101 for Smarties or dummies ;)


This is the first of a series of posts and links to resources on server storage I/O performance and benchmarking (view more and follow-up posts here).

The best I/O is the one that you do not have to do; the second best is the one with the least impact and the lowest overhead.


Drew Robb (@robbdrew) has a Data Storage Benchmarking Guide article over at Enterprise Storage Forum that provides a good framework and summary quick guide to server storage I/O benchmarking.

Via Drew:

Data storage benchmarking can be quite esoteric in that vast complexity awaits anyone attempting to get to the heart of a particular benchmark.

Case in point: The Storage Networking Industry Association (SNIA) has developed the Emerald benchmark to measure power consumption. This invaluable benchmark has a vast amount of supporting literature. That so much could be written about one benchmark test tells you just how technical a subject this is. And in SNIA’s defense, it is creating a Quick Reference Guide for Emerald (coming soon).

But rather than getting into the nitty-gritty nuances of the tests, the purpose of this article is to provide a high-level overview of a few basic storage benchmarks, what value they might have and where you can find out more. 

Read more here including some of my comments, tips and recommendations.

Drew provides a good summary and overview in his article, which is a great opener for this first post in a series on server storage I/O benchmarking and related resources.

You can think of this series (along with Drew’s article) as server storage I/O benchmarking fundamentals (e.g. 101) for smarties (e.g. non-dummies ;) ).

Note that even if you are not a server, storage or I/O expert, you can still be considered a smarty vs. a dummy if you have the need or interest to read and learn more about benchmarking, metrics that matter, tools, technology and related topics.

Server and Storage I/O benchmarking 101

There are different reasons for benchmarking. For example, you might be asked or want to know how many IOPS a disk, Solid State Device (SSD) or storage system can do, such as for a 15K RPM (revolutions per minute) 146GB SAS Hard Disk Drive (HDD). Sure, you can go to a manufacturer's website and look at the speeds and feeds (technical performance numbers), however are those metrics applicable to your environment's applications or workload?

You might get higher IOPS with a smaller I/O size on sequential reads vs. random writes, which will also depend on what the HDD is attached to. For example, are you going to attach the HDD to a storage system or appliance with RAID and caching? Are you going to attach the HDD to a PCIe RAID card, or will it be part of a server or storage system? Or are you simply going to put the HDD into a server or workstation and use it as a drive without any RAID or performance acceleration?
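To put that in perspective, here is a minimal Python sketch (illustrative numbers of my own, not vendor specifications) showing how the same device delivers very different bandwidth depending on the I/O size, even at a fixed IOPS rate:

# Hypothetical illustration: bandwidth = IOPS x I/O size, so the same device
# looks very different depending on the I/O size used in the test.
def bandwidth_mb_per_sec(iops, io_size_kb):
    return iops * io_size_kb / 1024.0

# Assume roughly 200 IOPS for small random I/O on a 15K RPM HDD (illustrative only).
for io_size_kb in (4, 8, 64, 256):
    mbps = bandwidth_mb_per_sec(200, io_size_kb)
    print(f"{io_size_kb:4d} KB random I/O @ 200 IOPS ~= {mbps:6.1f} MB/s")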

What this all means is that you need to understand what it is you want to benchmark or test, in order to learn what the system, solution, service or specific device can do under different workload conditions.

Some benchmark and related topics include:

  • What are you trying to benchmark
  • Why do you need to benchmark something
  • What are some server storage I/O benchmark tools
  • What is the best benchmark tool
  • What to benchmark, how to use tools
  • What are the metrics that matter
  • What is benchmark context and why does it matter
  • What are marketing hero benchmark results
  • What to do with your benchmark results
  • Server storage I/O benchmark step tests (stepping up workers and workload; see the sketch after this list)
    Example of step test results with various workers and workloads

  • What do the various metrics mean (can we get a side of context with them metrics?)
  • Why look at server CPU if doing storage and I/O networking tests
  • Where and how to profile your application workloads
  • What about physical vs. virtual vs. cloud and software defined benchmarking
  • How to benchmark block DAS or SAN, file NAS, object, cloud, databases and other things
  • Avoiding common benchmark mistakes
  • Tips, recommendations, things to watch out for
  • What to do next
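As referenced in the list above, a step test reruns the same workload while stepping a parameter such as the number of workers (threads) to see how a system scales. The following is a minimal Python sketch of that idea using Microsoft Diskspd (covered later in this series) as the workload generator; the target file, step values and duration are illustrative and should be tuned for your environment.

# Step test sketch: rerun the same Diskspd workload while stepping up the
# number of worker threads, writing each step's results to its own file.
# Target path, thread counts and duration are examples, not recommendations.
import subprocess

target = r"F:\diskspd.dat"
for threads in (1, 2, 4, 8, 16, 32):
    out_file = f"step_test_t{threads:02d}.txt"
    cmd = ["diskspd", "-c45g", "-b8K", f"-t{threads}", "-o32",
           "-r", "-d600", "-h", "-w0", "-L", target]
    with open(out_file, "w") as results:
        subprocess.run(cmd, stdout=results, check=True)
    print(f"completed step with {threads} threads -> {out_file}")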


Where to learn more

The following are related links to read more about server (cloud, virtual and physical) storage I/O benchmarking tools, technologies and techniques.

Drew Robb’s benchmarking quick reference guide
Server storage I/O benchmarking tools, technologies and techniques resource page
Server and Storage I/O Benchmarking 101 for Smarties.
Microsoft Diskspd download and Microsoft Diskspd overview (via Technet)
I/O, I/O how well do you know about good or bad server and storage I/Os?
Server and Storage I/O Benchmark Tools: Microsoft Diskspd (Part I and Part II)

Wrap up and summary

We have just scratched the surface when it comes to benchmarking cloud, virtual and physical server storage I/O and networking hardware and software, along with associated tools, techniques and technologies. However, hopefully this post and the links for more reading mentioned above give a basis for connecting the dots of what you already know, or enable learning more about workloads (synthetic and real-world), benchmarks and associated topics. Needless to say there are many more things that we will cover in future posts (e.g. keep an eye on and bookmark the server storage I/O benchmark tools and resources page here).

Ok, nuff said, for now…

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Microsoft Diskspd (Part II): Server Storage I/O Benchmark Tools



This is part two of a two-part post pertaining to Microsoft Diskspd that is also part of a broader series focused on server storage I/O benchmarking, performance, capacity planning, tools and related technologies. You can view part one of this post here, along with companion links here.

Microsoft Diskspd StorageIO lab test drive

Server and StorageIO lab

Talking about tools and technologies is one thing; installing and trying them is the next step for gaining experience. So how about some quick hands-on time with Microsoft Diskspd (download your copy here).

The following commands all specify an I/O size of 8KBytes doing I/O to a 45GByte file called diskspd.dat located on the F: drive. Note that a 45GByte file is on the small side for general performance testing, however it was used for simplicity in this example. Ideally a larger target storage area (file, partition, device) would be used; on the other hand, if your application uses a small storage device or volume, then tune accordingly.

In this test, the F: drive is an iSCSI RAID protected volume, however you could use other storage interfaces supported by Windows including other block DAS or SAN (e.g. SATA, SAS, USB, iSCSI, FC, FCoE, etc) as well as NAS. Also common to the following commands is using 16 threads and 32 outstanding I/Os to simulate concurrent activity of many users, or application processing threads.
Other parameters common to the following commands were -r for random I/O, a 7200 second (e.g. two hour) test duration (-d7200), displaying latency statistics (-L), disabling hardware and software caching (-h), and forcing CPU affinity (-a0,1,2,3). Since the test ran on a server with four cores, I wanted to see if I could use those to help keep the threads and storage busy. What varies in the commands below is the percentage of reads vs. writes, as well as the results output file. Some of the workloads below also had the -S option specified to disable OS I/O buffering (to view how buffering helps when enabled or disabled). Depending on the goal, or the type of test, validation, or workload being run, I would choose to set some of these parameters differently.

diskspd -c45g -b8K -t16 -o32 -r -d7200 -h -w0 -L -a0,1,2,3 F:\diskspd.dat >> SIOWS2012R203_Eiscsi_145_noh_write000.txt

diskspd -c45g -b8K -t16 -o32 -r -d7200 -h -w50 -L -a0,1,2,3 F:\diskspd.dat >> SIOWS2012R203_Eiscsi_145_noh_write050.txt

diskspd -c45g -b8K -t16 -o32 -r -d7200 -h -w100 -L -a0,1,2,3 F:\diskspd.dat >> SIOWS2012R203_Eiscsi_145_noh_write100.txt

diskspd -c45g -b8K -t16 -o32 -r -d7200 -h -S -w0 -L -a0,1,2,3 F:\diskspd.dat >> SIOWS2012R203_Eiscsi_145_noSh_test_write000.txt

diskspd -c45g -b8K -t16 -o32 -r -d7200 -h -S -w50 -L -a0,1,2,3 F:\diskspd.dat >> SIOWS2012R203_Eiscsi_145_noSh_write050.txt

diskspd -c45g -b8K -t16 -o32 -r -d7200 -h -S -w100 -L -a0,1,2,3 F:\diskspd.dat >> SIOWS2012R203_Eiscsi_145_noSh_write100.txt

The following is the output from the above workload command.
Microsoft Diskspd sample output
Microsoft Diskspd sample output part 2
Microsoft Diskspd sample output part 3

Note that as with any benchmark, workload test or simulation, your results will vary. In the above, the server, storage and I/O system were not tuned, as the focus was on working with the tool and determining its capabilities. Thus do not focus on the performance results per se, but rather on what you can do with Diskspd as a tool to try different things. Btw, fwiw, in the above example, in addition to using an iSCSI target, the Windows 2012 R2 server was a guest on a VMware ESXi 5.5 system.

Where to learn more

The following are related links to read more about server (cloud, virtual and physical) storage I/O benchmarking tools, technologies and techniques.

Drew Robb’s benchmarking quick reference guide
Server storage I/O benchmarking tools, technologies and techniques resource page
Server and Storage I/O Benchmarking 101 for Smarties.
Microsoft Diskspd download and Microsoft Diskspd overview (via Technet)
I/O, I/O how well do you know about good or bad server and storage I/Os?
Server and Storage I/O Benchmark Tools: Microsoft Diskspd (Part I and Part II)

Comments and wrap-up

What I like about Diskspd (Pros)

Reporting includes CPU usage (you can't do server and storage I/O without CPU) along with IOPS (activity), bandwidth (throughput, or the amount of data being moved), and per-thread and total results, along with optional reporting. While a GUI would be nice, particularly for beginners, I'm used to setting up scripts for different workloads, so having an extensive set of options for setting up different workloads is welcome. Being associated with a specific OS (e.g. Windows), the CPU affinity and buffer management controls will be handy for some projects.

That Diskspd has the flexibility to use different storage interfaces and types of storage, including files or partitions, should be taken for granted; however, with some tools you cannot take such things for granted. I like the flexibility to easily specify various I/O sizes, including large 1MByte, 10MByte, 20MByte, 100MByte and 500MByte, to simulate application workloads that do large sequential (or random) activity. I tried some I/O sizes larger than 500MB (e.g. specified by the -b parameter), however I received various errors including "Could not allocate a buffer bytes for target", which means that Diskspd tops out around that size. While it is not able to do I/O sizes larger than 500MB, this is still impressive. Several other tools I have used or worked with have I/O size limits down around 10MByte, which makes it difficult to create workloads that do large I/Os (note this is the I/O size, not the number of IOPS).

Oh, something else that should be obvious, however I will state it: Diskspd is free, unlike some industry de facto standard tools or workload generators that require a fee to get and use.

Where Diskspd could be improved (Cons)

For some users a GUI or configuration wizard would make the tool easier to get started with; on the other hand, I tend to use the command-line capabilities of tools. It would also be nice to specify ranges as part of a single command, such as stepping through an I/O size range (e.g. 4K, 8K, 16K, 1MB, 10MB) as well as read/write percentages, along with varying random and sequential mixes. Granted, this can easily be done with a series of commands or a small wrapper script (see the sketch below), however I have become spoiled by using other tools such as vdbench.
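For what it is worth, a small wrapper script can generate that series of commands. A minimal sketch, assuming diskspd.exe is on the PATH and using an illustrative target file and parameter ranges:

# Wrapper sketch: step through I/O sizes and read/write mixes by invoking
# Diskspd once per combination. Values and file names are illustrative.
import subprocess

target = r"F:\diskspd.dat"
for io_size in ("4K", "8K", "16K", "1M", "10M"):
    for write_pct in (0, 50, 100):
        out_file = f"diskspd_b{io_size}_w{write_pct:03d}.txt"
        cmd = ["diskspd", f"-b{io_size}", "-t16", "-o32", "-r",
               "-d600", "-h", f"-w{write_pct}", "-L", target]
        with open(out_file, "w") as results:
            subprocess.run(cmd, stdout=results, check=True)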

Summary

Server and storage I/O performance toolbox

Overall I like Diskspd and have added it to my Server Storage I/O workload and benchmark tool-box.

Keep in mind that the best benchmark or workload generation technology tool will be your own application(s) configured to run as close as possible to production activity levels.

However, when that is not possible, an alternative is to use tools that have the flexibility to be configured as close as possible to your application(s) workload characteristics. This means that the focus should not be so much on the tool itself as on how flexible the tool is to work for you; granted, the tool needs to be robust.

Having said that, Microsoft Diskspd is a good and extensible tool for benchmarking, simulation, validation and comparisons, however it will only be as good as the parameters and configuration you set it up to use.

Check out Microsoft Diskspd and add it to your benchmark and server storage I/O tool-box like I have done.

Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

I/O, I/O how well do you know good bad ugly server storage I/O iops?

How well do you know good bad ugly I/O iops?


Updated 2/10/2018

There are many different types of server storage I/O IOPS associated with various environments, applications and workloads. Some I/O activity is measured in IOPS, other activity in transactions per second (TPS), files or messages per unit of time (hour, minute, second), gets, puts or other operations. The best I/O is the one you do not have to do.

What about all the cloud, virtual, software defined and legacy based applications that still need to do I/O?

If no IO operation is the best IO, then the second best IO is the one that can be done as close to the application and processor as possible with the best locality of reference.

Also keep in mind that aggregation (e.g. consolidation) can cause aggravation (server storage I/O performance bottlenecks).

aggregation causes aggravation
Example of aggregation (consolidation) causing aggravation (server storage i/o blender bottlenecks)

And the third best?

It's the one that can be done in less time, or at the least cost or impact to the requesting application, which means moving further down the memory and storage stack.

solving server storage i/o blender and other bottlenecks
Leveraging flash SSD and cache technologies to find and fix server storage I/O bottlenecks

On the other hand, any I/O, regardless of whether it is for block, file or object storage, that comes with some context is better than one without, particularly when it involves metrics that matter (here, here and here [webinar]).

Server Storage I/O optimization and effectiveness

The problem with I/Os is that they are basic operations for getting data into and out of a computer or processor, so there is no way to avoid all of them, unless you have a very large budget. Even if you have a large budget that can afford an all-flash SSD solution, you may still encounter bottlenecks or other barriers.

I/Os require CPU or processor time and memory to set up and then process the results, as well as I/O and networking resources to move data to its destination or retrieve it from where it is stored. While I/Os cannot be eliminated, their impact can be greatly improved or optimized by, among other techniques, doing fewer of them via caching and by grouping reads or writes (pre-fetch, write-behind).
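As a simple illustration of the grouping idea (my own sketch, not from the article, and independent of any particular tool), the following Python example buffers many small writes and flushes them as fewer, larger writes, write-behind style:

# Illustrative sketch: grouping many small writes into one larger write
# (write-behind style buffering) reduces the number of I/O operations
# the underlying storage has to service.
class WriteBehindBuffer:
    def __init__(self, fileobj, flush_bytes=64 * 1024):
        self.fileobj = fileobj
        self.flush_bytes = flush_bytes
        self.pending = bytearray()
        self.ios_issued = 0

    def write(self, data):
        self.pending += data                       # absorb the small write
        if len(self.pending) >= self.flush_bytes:  # group into one larger I/O
            self.flush()

    def flush(self):
        if self.pending:
            self.fileobj.write(self.pending)
            self.ios_issued += 1
            self.pending.clear()

with open("demo.dat", "wb") as f:
    buf = WriteBehindBuffer(f)
    for _ in range(10_000):
        buf.write(b"x" * 512)  # 10,000 small 512-byte writes...
    buf.flush()
    print("larger I/Os actually issued:", buf.ios_issued)  # ...roughly 79 flushes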

server storage I/O STI and SUT

Think of it this way: Instead of going on multiple errands, sometimes you can group multiple destinations together making for a shorter, more efficient trip. However, that optimization may also mean your drive will take longer. So, sometimes it makes sense to go on a couple of quick, short, low-latency trips instead of one larger one that takes half a day even as it accomplishes many tasks. Of course, how far you have to go on those trips (i.e., their locality) makes a difference about how many you can do in a given amount of time.

Locality of reference (or proximity)

What is locality of reference?

This refers to how close (i.e., its place) data exists to where it is needed (being referenced) for use. For example, the best locality of reference in a computer would be registers in the processor core, ready to be acted on immediately. This would be followed by levels 1, 2, and 3 (L1, L2, and L3) onboard caches, followed by main memory, or DRAM. After that comes solid-state memory, typically NAND flash, either on PCIe cards or accessible on a direct attached storage (DAS), SAN, or NAS device.

server storage I/O locality of reference

Even though a PCIe NAND flash card is close to the processor, there still remains the overhead of traversing the PCIe bus and associated drivers. To help offset that impact, PCIe cards use DRAM as cache or buffers for data along with meta or control information to further optimize and improve locality of reference. In other words, this information is used to help with cache hits, cache use, and cache effectiveness vs. simply boosting cache use.

SSD to the rescue?

What can you do to cut the impact of I/Os?

There are many steps one can take, starting with establishing baseline performance and availability metrics.

The metrics that matter include IOPS, latency, bandwidth, and availability. Then, leverage those metrics to gain insight into your application's performance.
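Those metrics are related to each other by some simple arithmetic. The following sketch uses illustrative values to show bandwidth as IOPS times I/O size, plus the standard Little's Law relationship between outstanding I/Os, latency and IOPS (a general queuing relationship, not something specific to any one tool):

# Illustrative relationships between the metrics that matter (example values).
io_size_kb = 8
iops = 20_000
avg_latency_ms = 1.6
outstanding_ios = 32

bandwidth_mb_s = iops * io_size_kb / 1024                     # bandwidth = IOPS x I/O size
littles_law_iops = outstanding_ios / (avg_latency_ms / 1000)  # concurrency / latency

print(f"bandwidth ~= {bandwidth_mb_s:.0f} MB/s")
print(f"IOPS sustainable at {outstanding_ios} outstanding I/Os and "
      f"{avg_latency_ms} ms latency ~= {littles_law_iops:.0f}")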

Understand that IO’s are a fact of applications doing work (storing, retrieving, managing data) no matter whether systems are virtual, physical, or running up in the cloud. But it’s important to understand just what a bad IO is, along with its impact on performance. Try to identify those that are bad, and then find and fix the problem, either with software, application, or database changes. Perhaps you need to throw more software caching tools, hypervisors, or hardware at the problem. Hardware may include faster processors with more DRAM and faster internal busses.

Leveraging local PCIe flash SSD cards for caching or as targets is another option.

You may want to use storage systems or appliances that rely on intelligent caching and storage optimization capabilities to help with performance, availability, and capacity.

Where to gain insight into your server storage I/O environment

There are many tools that can be used to gain insight into your server storage I/O environment across cloud, virtual, software defined and legacy environments, as well as from different layers (e.g. applications, database, file systems, operating systems, hypervisors, server, storage, I/O networking). Many applications and databases have either built-in or optional tools from their provider, third parties, or other sources that can give information about the work activity being done. Likewise there are tools to dig deeper into the various data infrastructure layers to see what is happening, as shown in the following figures.

application storage I/O performance
Gaining application and operating system level performance insight via different tools

windows and linux storage I/O performance
Insight and awareness via operating system tools on Windows and Linux

In the above example, Spotlight on Windows (SoW), which you can download for free from Dell here, along with Ubuntu utilities are shown. You could also use other tools to look at server storage I/O performance, including Windows Perfmon among others.

vmware server storage I/O
Hypervisor performance using VMware ESXi / vsphere built-in tools

vmware server storage I/O performance
Using Visual ESXtop to dig deeper into virtual server storage I/O performance

vmware server storage i/o cache
Gaining insight into virtual server storage I/O cache performance

Wrap up and summary

There are many approaches to address (e.g. find and fix) vs. simply move or mask data center and server storage I/O bottlenecks. Having insight and awareness into how your environment and its applications are behaving is important in order to know where to focus resources. Also keep in mind that a bit of flash SSD or DRAM cache in the applicable place can go a long way, while a lot of cache will also cost you cash. Even if you can't eliminate I/Os, look for ways to decrease their impact on your applications and systems.

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Keep in mind: SSD including flash and DRAM among others are in your future; the question is where, when, with what, how much and whose technology or packaging.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Green and Virtual IT Data Center Primer

Green and Virtual Data Center Primer

Moving beyond Green Hype and Green washing

Green IT is about enabling efficient, effective and productive information services delivery. There is a growing green gap between green hype messaging or green washing and IT pain point issues, including limits on the availability or rising costs of power, cooling, floor-space as well as e-waste and environmental health and safety (PCFE). Closing the gap will involve bringing green messaging and rhetoric closer to where IT organizations' pain points are and where budget dollars exist that can address PCFE and other green related issues as a by-product. The green gap will also be narrowed as awareness of broader green related topics coincides with IT data center pain points; in other words, alignment of messaging with IT issues that have or will have budget dollars allocated towards them to sustain business and economic growth via IT resource usage efficiency. Read more here.

There are many aspects to "Green" Information Technology including servers, storage, networks and associated management tools and techniques. The reasons for and focus of "Green IT", including "Green Data Storage", "Green Computing" and related focus areas, vary to address diverse needs, issues and requirements, including among others:

  • Power, Cooling, Floor-space, Environmental (PCFE) related issues or constraints
  • Reduction of carbon dioxide (CO2) emissions and other green house gases (GHGs)
  • Business growth and economic sustainability in an environmentally friendly manner
  • Proper disposal or recycling of environmentally harmful retired technology components
  • Reduction or better efficiency of electrical power consumption used for IT equipment
  • Cost avoidance or savings from lower energy fees and cooling costs
  • Support data center and application consolidation to cut cost and management
  • Enable growth and enhancements to application service level objectives
  • Maximize the usage of available power and cooling resources available in your region
  • Compliance with local or federal government mandates and regulations
  • Economic sustainability and the ability to support business growth and service improvements
  • General environmental awareness and stewardship to save and protect the earth

While much of the IT industry focuses on CO2 emission footprints, data management software and electrical power consumption, the cooling and ventilation of IT data centers is an area of focus associated with "Green IT" as well as a means to discuss more effective use of electrical energy that can yield rapid results for many environments. Large tier-1 vendors, including HP and IBM among others, who have an IT and data center wide focus, have services designed to do quick assessments as well as detailed analysis and re-organization of IT data center physical facilities to improve air flow and power consumption for more effective cooling of IT technologies including servers, storage, networks and other equipment.

Similar to your own residence, basic steps to improve your cooling effectiveness can lead to using less energy to cut your budget impact, or enable you to do more with the cooling capacity you already have to support growth, acquisitions and/or consolidation initiatives. Vendors are also looking at means and alternatives for cooling IT equipment, ranging from computer-assisted computational fluid dynamics (CFD) software analysis of data center cooling and ventilation, to refrigerated cooling racks, some leveraging water or inert liquid cooling.

Various metrics exist and others are evolving for measuring, estimating, reporting, analyzing and discussing IT data center infrastructure resource topics, including servers, storage, networks, facilities and associated software management tools, from a power, cooling and green environmental standpoint. The importance of metrics is to focus on the larger impact of a piece of IT equipment, including its cost and energy consumption, factoring in cooling and other hosting or site environmental costs. Naturally, energy costs and CO2 (carbon offsets) will vary by geography and region along with the type of electrical power being used (coal, natural gas, nuclear, wind, thermal, solar, etc.) and other factors that should be kept in perspective as part of the big picture.

Consequently, your view and needs or interests around "Green" IT may be from an electrical power conservation perspective, to make the most of your available power or to adapt to a given power footprint or ceiling. Your focus around "Green" data centers and green storage may be from a carbon savings standpoint, or proper disposition of old and retired IT equipment, or from a data center cooling standpoint. Another area of focus may be that you are looking to cut your data footprint to align with your power, cooling and green footprint while enhancing application and data service delivery to your customers.

Where to learn more

The following are useful links to related efficient, effective, productive, flexible, scalable and resilient IT data center along with server storage I/O networking hardware and software that supports cloud and virtual green data centers.

Various IT industry vendor and service provider links
Green and Virtual Data Center: Productive Economical Efficient Effective Flexible
Green and Virtual Data Center links
Are large storage arrays dead at the hands of SSD?
Closing the Green Gap
Energy efficient technology sales depend on the pitch

What this all means

The result of a green and virtual data center is a flexible, agile, resilient and scalable information factory that is also economical, productive, efficient and sustainable.

Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Green and Virtual Data Center: Productive Economical Efficient Effective Flexible

Green and Virtual Data Center

A Green and Virtual IT Data Center (e.g. an information factory) means an environment comprising:

  • Habitat for technology or physical infrastructure (e.g. physical data center, yours, co-lo, managed service or cloud)
  • Power, cooling, communication networks, HVAC, smoke and fire suppression, physical security
  • IT data information infrastructure (e.g. hardware, software, valueware, cloud, virtual, physical, servers, storage, network)
  • Data Center Infrastructure Management (DCIM) along with IT Service Management (ITSM) software defined management tools
  • Tools for monitoring, resource tracking and usage, reporting, diagnostics, provisioning and resource orchestration
  • Portals and service catalogs for automated, user initiated and assisted operation or access to IT resources
  • Processes, procedures, best-practices, work-flows and templates (including data protection with HA, BC, BR, DR, backup/restore, logical and physical security)
  • Metrics that matter for management insight and awareness
  • People and skill sets among other items

Green and Virtual Data Center Resources

Click here to learn about "The Green and Virtual Data Center" book (CRC Press) for enabling efficient, productive IT data centers. This book covers cloud, virtualization, servers, storage, networks, software, facilities and associated management topics, technologies and techniques including metrics that matter. This book by industry veteran IT advisor and author Greg Schulz is the definitive guide for enabling economic efficiency and productive next generation data center strategies.

Intel recommended reading
Publisher: CRC Press – Taylor & Francis Group
By Greg P. Schulz of StorageIO www.storageio.com
ISBN-10: 1439851739 and ISBN-13: 978-1439851739
Hardcover * 370 pages * Over 100 illustrations, figures and tables

Read more here and order your copy here. Also check out Cloud and Virtual Data Storage Networking (CRC Press) a new book by Greg Schulz.

Productive Efficient Effective Economical Flexible Agile and Sustainable

Green hype and green washing may be on the endangered species list and going away; however, green IT for servers, storage, networks and facilities, as well as related software and management techniques that address energy efficiency including power and cooling along with e-waste and environmental health and safety related issues, are topics that won't be going away anytime soon. There is a growing green gap between green hype messaging or green washing and IT pain point issues, including limits on the availability or rising costs of power, cooling, floor-space as well as e-waste and environmental health and safety (PCFE). Closing the gap will involve bringing green messaging and rhetoric closer to where IT organizations' pain points are and where budget dollars exist that can address PCFE and other green related issues as a by-product.

The green gap will also be narrowed as awareness of broader green related topics coincides with IT data center pain points; in other words, alignment of messaging with IT issues that have or will have budget dollars allocated towards them to sustain business and economic growth via IT resource usage efficiency. Read more here.

Where to learn more

The following are useful links to related efficient, effective, productive, flexible, scalable and resilient IT data center along with server storage I/O networking hardware and software that supports cloud and virtual green data centers.

Various IT industry vendor and service provider links
Green and Virtual Data Center Primer
Green and Virtual Data Center links
Are large storage arrays dead at the hands of SSD?
Closing the Green Gap
Energy efficient technology sales depend on the pitch
EPA Energy Star for Data Center Storage Update
EPA Energy Star for data center storage draft 3 specification
Green IT Confusion Continues, Opportunities Missed! 
Green IT deferral blamed on economic recession might be result of green gap
How much SSD do you need vs. want?
How to reduce your Data Footprint impact (Podcast) 
Industry trend: People plus data are aging and living longer
In the data center or information factory, not everything is the same
More storage and IO metrics that matter
Optimizing storage capacity and performance to reduce your data footprint 
Performance metrics: Evaluating your data storage efficiency
PUE, Are you Managing Power, Energy or Productivity?
Saving Money with Green Data Storage Technology
Saving Money with Green IT: Time To Invest In Information Factories 
Shifting from energy avoidance to energy efficiency
SNIA Green Storage Knowledge Center
Speaking of speeding up business with SSD storage
SSD and Green IT moving beyond green washing
Storage Efficiency and Optimization: The Other Green
Supporting IT growth demand during economic uncertain times
The Green and Virtual Data Center Book (CRC Press, Intel Recommended Reading)
The new Green IT: Efficient, Effective, Smart and Productive 
The other Green Storage: Efficiency and Optimization 
What is the best kind of IO? The one you do not have to do

Watch for more links and resources to be added soon.

What this all means

The result of a green and virtual data center is a flexible, agile, resilient and scalable information factory that is also economical, productive, efficient and sustainable.

Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Green and Virtual Data Center Links

Updated 10/25/2017

Green and Virtual IT Data Center Links

Moving beyond Green Hype and Green washing

Green hype and green washing may be on the endangered species list and going away; however, green IT for servers, storage, networks and facilities, as well as related software and management techniques that address energy efficiency including power and cooling along with e-waste and environmental health and safety related issues, are topics that won't be going away anytime soon.

There is a growing green gap between green hype messaging or green washing and IT pain point issues including limits on availability or rising costs of power, cooling, floor-space as well as e-waste and environmental health and safety (PCFE).

Closing the gap will involve bringing green messaging and rhetoric closer to where IT organizations' pain points are and where budget dollars exist that can address PCFE and other green related issues as a by-product. The green gap will also be narrowed as awareness of broader green related topics coincides with IT data center pain points; in other words, alignment of messaging with IT issues that have or will have budget dollars allocated towards them to sustain business and economic growth via IT resource usage efficiency. Read more here.

Enabling Effective Productive Efficient Economical Flexible Scalable Resilient Information Infrastructures

The following are useful links to related efficient, effective, productive, flexible, scalable and resilient IT data center along with server storage I/O networking hardware and software that supports cloud and virtual green data centers.

Various IT industry vendors and other links

Via StorageIOblog – Happy Earth Day 2016 Eliminating Digital and Data e-Waste

Green and Virtual Data Center Primer
Green and Virtual Data Center: Productive Economical Efficient Effective Flexible
Are large storage arrays dead at the hands of SSD?
Closing the Green Gap
Energy efficient technology sales depend on the pitch
EPA Energy Star for Data Center Storage Update
EPA Energy Star for data center storage draft 3 specification
Green IT Confusion Continues, Opportunities Missed! 
Green IT deferral blamed on economic recession might be result of green gap
How much SSD do you need vs. want?
How to reduce your Data Footprint impact (Podcast) 
Industry trend: People plus data are aging and living longer
In the data center or information factory, not everything is the same
More storage and IO metrics that matter
Optimizing storage capacity and performance to reduce your data footprint 
Performance metrics: Evaluating your data storage efficiency
PUE, Are you Managing Power, Energy or Productivity?
Saving Money with Green Data Storage Technology
Saving Money with Green IT: Time To Invest In Information Factories 
Shifting from energy avoidance to energy efficiency
SNIA Green Storage Knowledge Center
Speaking of speeding up business with SSD storage
SSD and Green IT moving beyond green washing
Storage Efficiency and Optimization: The Other Green
Supporting IT growth demand during economic uncertain times
The Green and Virtual Data Center Book (CRC Press, Intel Recommended Reading)
The new Green IT: Efficient, Effective, Smart and Productive 
The other Green Storage: Efficiency and Optimization 
What is the best kind of IO? The one you do not have to do

Intel recommended reading
Click here to learn about "The Green and Virtual Data Center" book (CRC Press) for enabling efficient, productive IT data centers. This book covers cloud, virtualization, servers, storage, networks, software, facilities and associated management topics, technologies and techniques including metrics that matter. This book by industry veteran IT advisor and author Greg Schulz is the definitive guide for enabling economic efficiency and productive next generation data center strategies. Read more here and order your copy here. Also check out Cloud and Virtual Data Storage Networking (CRC Press), a new book by Greg Schulz.

White papers, analyst reports and perspectives

Business benefits of data footprint reduction (archiving, compression, de-dupe)
Data center I/O and performance issues – Server I/O and storage capacity gap
Analysis of EPA Report to Congress (Law 109-431)
The Many Faces of MAID Storage Technology
Achieving Energy Efficiency with FLASH based SSD
MAID 2.0: Energy Savings without Performance Compromises

Articles, Tips, Blogs, Webcasts and Podcasts

AP – SNIA Green Emerald Program and measurements
AP – Southern California heat wave strains electrical system
Ars Technica – EPA: Power usage in data centers could double by 2011
Ars Technica – Meet the climate savers: Major tech firms launch war on energy-inefficient PCs – Article
Askageek.com – Buying an environmental friendly laptop – November 2008
Baseline – Examining Energy Consumption in the Data Center
Baseline – Burts Bees: What IT Means When You Go Green
Bizcovering – Green architecture for the masses
Broadstuff – Are Green 2.0 and Enterprise 2.0 Incompatible?
Business Week – CEO Guide to Technology
Business Week – Computers’ elusive eco factor
Business Week – Clean Energy – Its Getting Affordable
Byte & Switch – Keeping it Green This Summer – Don’t be "Green washed"
Byte & Switch – IBM Sees Green in Energy Certificates
Byte & Switch – Users Search for power solutions
Byte & Switch – DoE issues Green Storage Warning
CBR – The Green Light for Green IT
CBR – Big boxes make greener data centers
CFO – Power Scourge
Channel Insider – A 12 Step Program to Dispose of IT Equipment
China.org.cn – China publishes Energy paper
CIO – Green Storage Means Money Saved on Power
CIO – Data center designers share secrets for going green
CIO – Best Place to Build a Data Center in North America
CIO Insight – Clever Marketing or the Real Thing?
Cleantechnica – Cooling Data Centers Could Prevent Massive Electrical Waste – June 2008
Climatebiz – Carbon Calculators Yield Spectrum of Results: Study
CNET News – Linux coders tackle power efficiency
CNET News – Research: Old data centers can be nearly as ‘green’ as new ones
CNET News – Congress, Greenpeace move on e-waste
CNN Money – A Green Collar Recession
CNN Money – IBM creates alliance with industry leaders supporting new data center standards
Communication News – Utility bills key to greener IT
Computerweekly – Business case for green storage
Computerweekly – Optimising data centre operations
Computerweekly – Green still good for IT, if it saves money
Computerweekly – Meeting the Demands for storage
Computerworld – Wells Fargo Free Data Center Cooling System
Computerworld – Seven ways to get green and save money
Computerworld – Build your data center here: The most energy-efficient locations
Computerworld – EPA: U.S. needs more power plants to support data centers
Computerworld – GreenIT: A marketing ploy or new technology?
Computerworld – Gartner Criticizes Green Grid
Computerworld – IT Skills no longer sufficient for data center execs.
Computerworld – Meet MAID 2.0 and Intelligent Power Management
Computerworld – Feds to offer energy ratings on servers and storage
Computerworld – Greenpeace still hunting for truly green electronics
Computerworld – How to benchmark data center energy costs
ComputerworldUK – Datacenters at risk from poor governance
ComputerworldUK – Top IT Leaders Back Green Survey
ComputerworldMH – Lean and Green
CTR – Strategies for enhancing energy efficiency
CTR – Economies of Scale – Green Data Warehouse Appliances
Datacenterknowledge – Microsoft to build Illinois datacenter
Data Center Strategies – Storage The Next Hot Topic
Earthtimes – Fujitsu installs hydrogen fuel cell power
eChannelline – IBM Goes Green(er)
Ecoearth.info – California Moves To Speed Solar, Wind Power Grid Connections
Ecogeek – Solar power company figures they can power 90% of America
Economist – Cool IT
Electronic Design – How many watts in that Gigabyte
eMazzanti – Desktop virtualization movement creeping into customer sites
ens-Newswire – Western Governors Ask Obama for National Green Energy Plan
Environmental Leader – Best Place to Build an Energy Efficient Data Center
Environmental Leader – New Guide Helps Advertisers Avoid Greenwash Complaints
Enterprise Storage Forum – Power Struggles Take Center Stage at SNW
Enterprise Storage Forum – Pace Yourself for Storage Power & Cooling Needs
Enterprise Storage Forum – Storage Power and Cooling Issues Heat Up – StorageIO Article
Enterprise Storage Forum – Score Savings With A Storage Power Play
Enterprise Storage Forum – I/O, I/O, Its off to Virtual Work I Go
Enterprise Storage Forum – Not Just a Flash in the Pan – Various SSD options
Enterprise Storage Forum – Closing the Green Gap – Article August 2008
EPA Report to Congress and Public Law 109-431 – Reports & links
eWeek – Saving Green by being Green
eWeek – ‘No Cooling Necessary’ Data Centers Coming?
eWeek – How the ‘Down’ Macroeconomy Will Impact the Data Storage Sector
ExpressComputer – In defense of Green IT
ExpressComputer – What data center crisis
Forbes – How to Build a Quick Charging Battery
GCN – Sun launches eco data center
GreenerComputing – New Code of Conduct to Establish Best Practices in Green Data Centers
GreenerComputing – Silicon valley’s green detente
GreenerComputing – Majority of companies plan to green their data centers
GreenerComputing – Citigroup to spend $232M on Green Data Center
GreenerComputing – Chicago and Quincy, WA Top Green Data Center Locations
GreenerComputing – Using airside economizers to chill data center cooling bills
GreenerComputing – Making the most of asset disposal
GreenerComputing – Greenpeace vendor rankings
GreenerComputing – Four Steps to Improving Data Center Efficiency without Capital Expenditures
GreenerComputing – Enabling a Green and Virtual Data Center
Green-PC – Strategic Steps Down the Green Path
Greeniewatch – BBC news chiefs attack plans for climate change campaign
Greeniewatch – Warmest year predictions and data that has not yet been measured
GovernmentExecutive – Public Private Sectors Differ on "Green" Efforts
HPC Wire – How hot is your code
Industry Standard – Why green data centers mean partner opportunities
InformationWeek – It could be 15 years before we know what is really green
InformationWeek – Beyond Server Consolidation
InformationWeek – Green IT Beyond Virtualization: The Case For Consolidation
InfoWorld – Sun celebrates green datacenter innovations
InfoWorld – Tech’s own datacenters are their green showrooms
InfoWorld – 2007: The Year in Green
InfoWorld – Green Grid Announces Tech Forum in Feb 2008
InfoWorld – SPEC seeds future green-server benchmarks
InfoWorld – Climate Savers green catalog proves un-ripe
InfoWorld – Forester: Eco-minded activity up among IT pros
InfoWorld – Green ventures in Silicon Valley, Mass reaped most VC cash in ’07
InfoWorld – Congress misses chance to see green-energy growth
InfoWorld – Unisys pushes green envelope with datacenter expansion
InfoWorld – No easy green strategy for storage
Internet News – Storage Technologies for a Slowing Economy
Internet News – Economy will Force IT to Transform
ITManagement – Green Computing, Green Revenue
itnews – Data centre chiefs dismiss green hype
itnews – Australian Green IT regulations could arrive this year
IT Pro – SNIA Green storage metrics released
ITtoolbox – MAID discussion
Linux Power – Saving power with Linux on Intel platforms
MSNBC – Microsoft to build data center in Ireland
National Post – Green technology at the L.A. Auto Show
Network World – Turning the datacenter green
Network World – Color Interop Green
Network World – Green not helpful word for setting environmental policies
NewScientistEnvironment – Computer servers as bad for climate as SUVs
Newser – Texas commission approves nation’s largest wind power project
New Yorker – Big Foot: In measuring carbon emissions, it’s easy to confuse morality and science
NY Times – What the Green Bubble Will Leave Behind
PRNewswire – Al Gore and Cisco CEO John Chambers to debate climate change
Processor – More than just monitoring
Processor – The new data center: What’s hot in Data Center physical infrastructure:
Processor – Liquid Cooling in the Data Center
Processor – Curbing IT Power Usage
Processor – Services To The Rescue – Services Available For Today’s Data Centers
Processor – Green Initiatives: Hire A Consultant?
Processor – Energy-Saving Initiatives
Processor – The EPA’s Low Carbon Campaign
Processor – Data Center Power Planning
SAN Jose Mercury – Making Data Centers Green
SDA-Asia – Green IT still a priority despite Credit Crunch
SearchCIO – EPA report gives data centers little guidance
SearchCIO – Green IT Strategies Could Lead to hefty ROIs
SearchCIO – Green IT In the Data Center: Plenty of Talk, not much Walk
SearchCIO – Green IT Overpitched by Vendors, CIOs beware
SearchDataCenter – Study ranks cheapest places to build a data center
SearchDataCenter – Green technology still ranks low for data center planners
SearchDataCenter – Green Data Center: Energy Efficient Computing in the 21st Century
SearchDataCenter – Green Data Center Advice: Is LEED Feasible
SearchDataCenter – Green Data Centers Tackle LEED Certification
SearchDataCenter – PG&E invests in data center efficiency
SearchDataCenter – A solar powered datacenter
SearchSMBStorage – Improve your storage energy efficiency
SearchSMBStorage – SMB capacity planning: Focusing on energy conservation
SearchSMBStorage – Data footprint reduction for SMBs
SearchSMBStorage – MAID & other energy-saving storage technologies for SMBs
SearchStorage – How to increase your storage energy efficiency
SearchStorage – Is storage now top energy hog in the data center
SearchStorage – Storage eZine: Turning Storage Green
SearchStorage – The Green Storage Gap
SearchStorageChannel – Green Data Storage Projects
Silicon.com – The greening of IT: Cooling costs
SNIA – SNIA Green Storage Overview
SNIA – Green Storage
SNW – Beyond Green-wash
SNW Spring 2008 Beyond Green-wash
State.org – Why Texas Has Its Own Power Grid
StorageDecisions – Different Shades of Green
Storage Magazine – Storage still lacks energy metrics
StorageIOblog – Posts pertaining to Green, power, cooling, floor-space, EHS (PCFE)
Storage Search – Various postings, news and topics pertaining to Green IT
Technology Times – Revealed: the environmental impact of Google searches
TechTarget – Data center power efficiency
TechTarget – Tip for determining power consumption
Techworld – Inside a green data center
Techworld – Box reduction – Low hanging green datacenter fruit
Techworld – Datacenter used to heat swimming pool
Theinquirer – Spansion and Virident flash server farms
Theinquirer – Storage firms worry about energy efficiency How green is the valley
TheRegister – Data Centre Efficiency, the good, the bad and the way to hot
TheRegister – Server makers snub whalesong for serious windmill abuse
TheRegister – Green data center threat level: Not Green
The Standard – Growing cynicism around going Green
ThoughtPut – Energy Central
Thoughtput – Power, Cooling, Green Storage and related industry trends
Wallstreet Journal – Utilities Amp Up Push To Slash Energy Use
Wallstreet Journal – The IT in Green Investing
Wallstreet Journal – Tech’s Energy Consumption on the Rise
Washingtonpost – Texas approves major new wind power project
WhatPC – Green IT: It doesn’t have to cost the earth
WHIRnews – SingTel building green data center
Wind-watch.org – Loss of wind causes Texas power grid emergency
WyomingNews – Overcoming Greens Stereotype
Yahoo – Washington Senate Unveils Green Job Plan
ZDnet – Will supercomputer speeds hit a plateau?
Are data centers causing climate change

News and Press Releases

Business Wire – The Green and Virtual Data Center
Enterprise Storage Forum – Intel and HGST (Hitachi) partner on FLASH SSD
PCworld – Intel and HP describe Green Strategy
DoE – To Invest Approximately $1.3 Billion to Commercialize CCS Technology
Yahoo – Shell Opens Los Angeles’ First Combined Hydrogen and Gasoline Station
DuPont – DuPont Projects Save Enough Energy to Power 25,000 Homes
Gartner – Users Are Becoming Increasingly Confused About the Issues and Solutions Surrounding Green IT

Websites and Tools

Various power, cooling, emissions and device configuration tools and calculators
Solar Action Alliance web site
SNIA Emerald program
Carbon Disclosure Project
The Chicago Climate Exchange
Climate Savers
Data Center Decisions
Electronic Industries Alliance (EIA)
EMC – Digital Life Calculator
Energy Star
Energy Star Data Center Initiatives
Greenpeace – Technology ranking website also here
GlobalActionPlan
KyotoPlanet
LBNL High Tech Data centers
Millicomputing
RoHS & WEE News
Storage Performance Council (SPC)
SNIA Green Technical Working Group
SPEC
Transaction Processing Council (TPC)
The Green Grid
The Raised Floor
Terra Pass Carbon Offset Credits – Website with CO2 calculators
Energy Information Administration – EIA (US and International Electrical Information)
U.S. Department of Energy and related information
U.S. DOE Energy Efficient Industrial Programs
U.S. EPA server and storage energy topics
Zerofootprint – Various "Green" and environmental related links and calculators

Vendor Centric and Marketing Website Links and tools

Vendors and organizations have different types of calculators, some with a focus on power, cooling, floor space, carbon offsets or emissions, ROI, TCO and other IT data center infrastructure resource management. The following is an evolving list and by no means definitive, even for a particular vendor, as different manufacturers may have multiple calculators for different product lines or areas of focus.

Brocade – Green website
Cisco – Green and Environmental websites here, here and here
Dell – Green website
EMC – EMC Energy, Power and Cooling Related Website
HDS – How to be green – HDS Positioning White Paper
HP – HP Green Website
IBM – Green Data Center – IBM Positioning White Paper
IBM – Green Data Center for Education – IBM Positioning White Paper
Intel – What is an Efficient Data Center and how do I measure it?
LSI – Green site and white paper
NetApp – Press Release and related information
Sun – Various articles and links
Symantec – Global 2000 Struggle to Adopt "Green" Data Centers – Announcement of Survey results
ACTON
Adinfa
APC
Australian Conservation Foundation
Avocent
BBC
Brocade
Carbon Credit Calculator UK
Carbon Footprint Site
Carbon Planet
Carbonify
CarbonZero
Cassatt
CO2 Stats Site
Copan
Dell
DirectGov UK Acton
Diesel Service & Supply Power Calculator & Converter
Eaton Powerware
Ecobusinesslinks
Ecoscale
EMC Power Calculator
EMC Web Power Calculator
EMC Digital Life Calculator
EPA Power Profiler
EPA Related Tools
EPEAT
Google UK Green Footprint
Green Grid Calculator
HP and more here
HVAC Calculator
IBM
Logicalis
Kohler Power (Business and Residential)
Micron
MSN Carbon Footprint Calculator
National Wildlife Foundation
NEF UK
NetApp
Rackwise
Platespin
Safecom
Sterling Planet
Sun and more here and here and here
Tandberg
TechRepublic
TerraPass Carbon Offset Credits
Thomas Krenn AG
Toronto Hydro Calculator
80 Plus Calculator
VMware
42u Green Grid PUE DCiE calculator
42u energy calculator

Green and Virtual Tools

What’s your power, cooling, floor space, energy, environmental or green story?

What’s your power, cooling, floor space, energy, environmental or green story? Do you have questions or want to learn more about energy issues pertaining to IT data center and data infrastructure topics? Do you have a solution or technology or a success story that you would like to share with us pertaining to data storage and server I/O energy optimization strategies? Do you need assistance in developing, validating or reviewing your strategy or story? Contact us at info@storageio.com or 651-275-1563 to learn more about green data storage and server I/O, or to schedule a briefing to tell us about your energy efficiency and effectiveness story pertaining to IT data centers and data infrastructures.

Disclaimer and note: URLs submitted for inclusion on this site will be reviewed for consideration and for being in generally accepted good taste in regard to the theme of this site. Best effort has been made to validate and verify the URLs that appear on this page and website, however they are subject to change. The author and/or maintainer(s) of this page and web site make no endorsement of, and assume no responsibility for, the URLs and their content that are listed on this page.

Green and Virtual Metrics

Chapter 5 "Measurement, Metrics, and Management of IT Resources" in the book "The Green and Virtual Data Center" (CRC Press) takes a look at the importance of being able to measure and monitor to enable effective management and utilization of IT resources across servers, storage, I/O networks, software, hardware and facilities.

There are many different points of interest for collecting metrics in an IT data center for servers, storage, networking and facilities, along with various perspectives. Data center personnel have varied interests, from a facilities perspective to a resource (server, storage, networking) usage and effectiveness perspective, for normal use as well as for planning purposes or comparison when evaluating new technology. Vendors have different uses for metrics during R&D, QA testing and marketing or sales campaigns, as well as ongoing service and support. Industry trade groups including 80 Plus, SNIA and The Green Grid, along with government groups including the EPA Energy Star program, are working to define and establish applicable metrics pertinent to green and virtual data centers.

| Acronym | Description | Comment |
| --- | --- | --- |
| DCiE | Data center Efficiency = (IT equipment power / Total facility power) x 100 | Shows a ratio of how well a data center is consuming power |
| DCPE | Data center Performance Efficiency = Effective IT workload / Total facility power | Shows how effectively a data center consumes power to produce a given level of service or work, such as energy per transaction or energy per business function performed |
| PUE | Power usage effectiveness = Total facility power / IT equipment power | Inverse of DCiE |
| Kilowatts (kW) | Watts / 1,000 | One thousand watts |
| Annual kWh | kW x 24 x 365 | kWh used in one year |
| Megawatts (MW) | kW / 1,000 | One thousand kW |
| BTU/hour | Watts x 3.413 | Heat generated in an hour from using energy, in British Thermal Units; 12,000 BTU/hour can equate to 1 ton of cooling |
| kWh | 1,000 watt-hours | The energy of 1,000 watts used for one hour |
| Watts | Amps x Volts (e.g. 12 amps x 12 volts = 144 watts) | Unit of electrical power |
| Watts | BTU/hour x 0.293 | Convert BTU/hr to watts |
| Volts | Watts / Amps (e.g. 144 watts / 12 amps = 12 volts) | The amount of force on electrons |
| Amps | Watts / Volts (e.g. 144 watts / 12 volts = 12 amps) | The flow rate of electricity |
| Volt-Amperes (VA) | Volts x Amps | Power is sometimes expressed in volt-amperes |
| kVA | Volts x Amps / 1,000 | Number of kilovolt-amperes |
| kW | kVA x power factor | Power factor is the efficiency of a piece of equipment's use of power |
| kVA | kW / power factor | Kilovolt-amperes |
| U | 1U = 1.75" | EIA metric describing the height of equipment in racks |

| Metric | Description | Comment |
| --- | --- | --- |
| Activity / Watt | Amount of work accomplished per unit of energy consumed, such as IOPS, transactions, users, streams or bandwidth per watt | Indicator of how much work is done and how efficiently energy is used to accomplish useful work. Applies to active workloads or actively used and frequently accessed storage and data. Activity per watt should be used in conjunction with another metric, such as capacity supported per watt and total watts consumed, for a representative picture |
| IOPS / Watt | Number of I/O operations (or transactions) / energy (watts) | Indicator of how effectively energy is being used to perform a given amount of work. The work could be I/Os, transactions, throughput or another indicator of application activity, for example SPC-1 per watt, SPEC per watt, TPC per watt or transactions per watt |
| Bandwidth / Watt | GBps, TBps or PBps / watt | Amount of data transferred or moved per second per unit of energy consumed. Often confused with capacity per watt given that both bandwidth and capacity reference GBytes, TBytes or PBytes |
| Capacity / Watt | GB, TB or PB of storage capacity (space) / watt | Indicator of how much capacity (space) or bandwidth is supported in a given configuration or footprint per watt of energy. For inactive, off-line or archive data, capacity per watt can be an effective gauge; for active workloads and applications, activity per watt also needs to be looked at for a representative indicator of how energy is being used |
| MHz / Watt | Processor performance / energy (watts) | Indicator of how effectively energy is being used by a CPU or processor |
| Carbon Credit | Carbon offset credit | Offset credits that can be bought and sold to offset your CO2 emissions |
| CO2 Emission | Average 1.341 lbs of CO2 per kWh of electricity generated | The average amount of carbon dioxide (CO2) emissions from generating a kWh of electricity |

Various power, cooling, floor space and green storage or IT related metrics

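The electrical relationships in the tables above are simple arithmetic. As a quick illustration, here is a small Python sketch (the values are examples only) applying a few of them:

```python
# Illustrative conversions based on the electrical relationships in the tables above.

def watts_from_amps_volts(amps, volts):
    return amps * volts                    # e.g. 12 amps x 12 volts = 144 watts

def btu_per_hour(watts):
    return watts * 3.413                   # heat generated per hour in BTU

def annual_kwh(average_kw):
    return average_kw * 24 * 365           # kWh used in one year at a constant average draw

def kw_from_kva(kva, power_factor):
    return kva * power_factor              # power factor reflects how efficiently equipment uses power

w = watts_from_amps_volts(12, 12)                                      # 144 watts
print(w, "watts generate about", round(btu_per_hour(w)), "BTU/hour")   # ~491 BTU/hour
print("Annual energy at a constant 1.5 kW:", annual_kwh(1.5), "kWh")   # 13,140 kWh
print("kW for 5 kVA at a 0.9 power factor:", kw_from_kva(5, 0.9))      # 4.5 kW
```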
Metrics include Data center Efficiency (DCiE), via The Green Grid, which is the indicator ratio of an IT data center's energy efficiency, defined as IT equipment power (servers, disk and tape storage, networking switches, routers, printers, etc.) / total facility power x 100 (for a percentage). For example, if the sum of all IT equipment energy usage were 1,500 kilowatt hours (kWh) per month, yet the total facility power including UPS, energy switching, power conversion and filtering, cooling and associated infrastructure as well as the IT equipment were 3,500 kWh, the DCiE would be (1,500 / 3,500) x 100 = 43%. DCiE can be used as a ratio, for example to show in the above scenario that IT equipment accounts for about 43% of the energy consumed by the data center, with the remaining 57% of electrical energy consumed by cooling, conversion, conditioning and lighting.

Power usage effectiveness (PUE) is the indicator ratio of the total energy consumed by the data center to the energy used to operate the IT equipment. PUE is defined as total facility power / IT equipment energy consumption. Using the above scenario, PUE = 2.333 (3,500 / 1,500), which means that a server requiring 100 watts of power would actually require (2.333 x 100) 233.3 watts of energy including both direct power and cooling. Similarly, a storage system that required 1,500 kWh of energy to power would require (1,500 x 2.333) 3,499.5 kWh of electrical power including cooling.

Another metric that has the potential to be meaningful is Data center Performance Efficiency (DCPE), which takes into consideration how much useful and effective work is performed by the IT equipment and data center per unit of energy consumed. DCPE is defined as useful work / total facility power, an example being some number of transactions processed using servers, networks and storage divided by the energy needed for the data center to power and cool that equipment. A relatively easy and straightforward implementation of DCPE is an IOPS per watt measurement that looks at how many IOPS can be performed (regardless of size or type, such as reads or writes) per unit of energy, in this case watts.

DCPE = Useful work / Total facility power, for example IOPS per watt of energy used

DCiE = IT equipment energy / Total facility power = 1 / PUE

PUE = Total facility energy / IT equipment energy

IOPS per Watt = Number of IOPs (or bandwidth) / energy used by the storage system
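As a quick illustration, here is a minimal Python sketch of these calculations using the example figures above (1,500 kWh of IT equipment energy and 3,500 kWh of total facility energy); the IOPS and storage system power figures are assumptions for illustration only:

```python
# Minimal sketch of the DCiE, PUE and DCPE (IOPS per watt) calculations described above.

it_energy_kwh = 1_500        # IT equipment energy (servers, storage, network)
facility_energy_kwh = 3_500  # total facility energy including cooling, conversion, lighting

dcie = (it_energy_kwh / facility_energy_kwh) * 100   # ~43%
pue = facility_energy_kwh / it_energy_kwh            # ~2.333, the inverse of DCiE

# A server drawing 100 watts effectively requires PUE * 100 watts of facility power.
effective_server_watts = pue * 100                   # ~233.3 watts

# DCPE example: IOPS per watt for a storage system (hypothetical activity and power figures).
storage_iops = 50_000        # assumed workload activity, not from the original text
storage_watts = 1_200        # assumed storage system power draw, not from the original text
iops_per_watt = storage_iops / storage_watts         # ~41.7 IOPS per watt

print(f"DCiE = {dcie:.0f}%  PUE = {pue:.3f}")
print(f"100W server needs ~{effective_server_watts:.1f}W of facility power")
print(f"DCPE (IOPS/Watt) = {iops_per_watt:.1f}")
```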

The importance of these numbers and metrics is to focus on the larger impact of a piece of IT equipment, including its cost and energy consumption, factoring in cooling and other hosting or site environmental costs. Naturally, energy costs and CO2 (carbon offsets) will vary by geography and region along with the type of electrical power being used (coal, natural gas, nuclear, wind, thermal, solar, etc.) and other factors that should be kept in perspective as part of the big picture. Learn more in Chapter 5 "Measurement, Metrics, and Management of IT Resources" in the book "The Green and Virtual Data Center" (CRC) and in the book Cloud and Virtual Data Storage Networking (CRC).

Disclaimer and notes

Disclaimer and note: URLs submitted for inclusion on this site will be reviewed for consideration and to be in generally accepted good taste in regard to the theme of this site. Best effort has been made to validate and verify the URLs that appear on this page and website, however they are subject to change. The author and/or maintainers of this page and website make no endorsement of and assume no responsibility for the URLs and their content that are listed on this page.

What this all means

The result of a green and virtual data center is a flexible, agile, resilient, scalable information factory that is also economical, efficient, productive and sustainable.

Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Revisiting RAID data protection remains relevant resource links

Revisiting RAID data protection remains relevant and resources

Storage I/O trends

Updated 2/10/2018

RAID data protection remains relevant, including erasure codes (EC) and local reconstruction codes (LRC) among other technologies. If RAID were really not relevant anymore (e.g. actually dead), why do some people spend so much time trying to convince others that it is dead, or to use a different RAID level, enhanced RAID or beyond-RAID related advanced approaches?

When you hear RAID, what comes to mind?

A legacy monolithic storage system that supports narrow 4, 5 or 6 drive wide stripe sets, or a modern system supporting dozens of drives in a RAID group with different options?

RAID means many things, likewise there are different implementations (hardware, software, systems, adapters, operating systems) with various functionality, some better than others.

For example, which of the items in the following figure come to mind, or perhaps are new to your RAID vocabulary?

RAID questions

There are many variations of RAID storage, some for the enterprise, some for SMB, SOHO or consumer. Some have better performance than others; some have poor performance, for example causing extra writes that lead to the perception that all parity-based RAID does extra writes (some implementations actually do write gathering and optimization).

Some hardware and software implementations use WBC (write-back cache), mirrored or battery-backed (BBU), along with the ability to group writes together in memory (cache) to do full-stripe writes. The result can be fewer back-end writes compared to other systems. Hence, not all RAID implementations in either hardware or software are the same. Likewise, just because a RAID definition shows a particular theoretical implementation approach does not mean all vendors have implemented it that way.
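As a rough illustration of why write gathering matters, here is a simplified Python sketch (illustrative numbers, not any particular product) comparing back-end I/Os for classic RAID 5 read-modify-write updates versus the same writes gathered into full stripes:

```python
# Simplified illustration of RAID 5 back-end write behavior (not any specific product).

def rmw_backend_ops(small_writes):
    # Classic RAID 5 read-modify-write: read old data + read old parity,
    # then write new data + write new parity = 4 back-end I/Os per small write.
    return small_writes * 4

def full_stripe_backend_ops(small_writes, data_drives):
    # Cache gathers small writes into full stripes: one write per data drive
    # plus one parity write per stripe, no reads required.
    # Simplification: assumes each host write fills one stripe chunk.
    stripes = -(-small_writes // data_drives)   # ceiling division
    return stripes * (data_drives + 1)

if __name__ == "__main__":
    writes = 1_000               # incoming small host writes (illustrative)
    data_drives = 4              # a 4+1 RAID 5 group (illustrative)
    print("Read-modify-write back-end I/Os:", rmw_backend_ops(writes))            # 4,000
    print("Gathered full-stripe back-end I/Os:",
          full_stripe_backend_ops(writes, data_drives))                           # 1,250
```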

RAID is not a replacement for backup rather part of an overall approach to providing data availability and accessibility.

data protection and durability

What’s the best RAID level? The one that meets YOUR needs

There are different RAID levels and implementations (hardware, software, controller, storage system, operating system, adapter among others) for various environments (enterprise, SME, SMB, SOHO, consumer) supporting primary, secondary, tertiary (backup/data protection, archiving).

RAID comparison
General RAID comparisons

Thus one size or approach does not fit all solutions; likewise RAID rules of thumb or guides need context. Context means that a RAID rule or guide for consumer, SOHO or SMB might be different for the enterprise and vice versa, not to mention dependent on the type of storage system, number of drives, drive type and capacity among other factors.

RAID comparison
General basic RAID comparisons
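For a rough sense of the capacity trade-offs behind such comparisons, here is a small Python sketch (usable capacity math only, ignoring performance, rebuild behavior and implementation differences):

```python
# Rough usable-capacity comparison for common RAID levels (capacity math only).

def usable_tb(level, drives, drive_tb):
    if level == "RAID 0":
        return drives * drive_tb                  # striping, no protection
    if level in ("RAID 1", "RAID 10"):
        return (drives / 2) * drive_tb            # mirroring halves usable space
    if level == "RAID 5":
        return (drives - 1) * drive_tb            # single parity drive equivalent
    if level == "RAID 6":
        return (drives - 2) * drive_tb            # dual parity drive equivalents
    raise ValueError(f"unknown level {level}")

if __name__ == "__main__":
    drives, drive_tb = 8, 4                       # eight 4 TB drives (illustrative)
    for level in ("RAID 0", "RAID 10", "RAID 5", "RAID 6"):
        print(f"{level}: {usable_tb(level, drives, drive_tb):.0f} TB usable "
              f"of {drives * drive_tb} TB raw")
```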

Thus the best RAID level is the one that meets your specific needs in your environment. What is best for one environment and application may be different from what is applicable to your needs.

Key points and RAID considerations include:

· Not all RAID implementations are the same; some are very much alive and evolving while others are in need of a rest or rewrite. So it is often not the technology or techniques that are the problem, rather how they are implemented and then deployed.

· It may not be RAID that is dead, rather the solution that uses it, hence if you think a particular storage system, appliance, product or software is old and dead along with its RAID implementation, then just say that product or vendors solution is dead.

· RAID can be implemented in hardware controllers, adapters or storage systems and appliances as well as via software and those have different features, capabilities or constraints.

· Long or slow drive rebuilds are a reality with larger disk drives and parity-based approaches; however, you have options on how to balance performance, availability, capacity, and economics.

· RAID can be single, dual or multiple parity or mirroring-based.

· Erasure codes and other coding schemes leverage parity, and parity schemes fall under the RAID umbrella.

· RAID may not be cool, sexy or a fun topic and technology to talk about, however many trendy tools, solutions and services actually use some form or variation of RAID as part of their basic building blocks. This is an example of using new and old things in new ways to help each other do more without increasing complexity.

·  Even if you are not a fan of RAID and think it is old and dead, at least take a few minutes to learn more about what it is that you do not like to update your dead FUD.

Wait, Isn’t RAID dead?

There is some dead marketing that paints a broad picture that RAID is dead to prop up something new, which in some cases may be a derivative variation of parity RAID.

data dispersal
Data dispersal and durability

RAID rebuild improving
RAID continues to evolve with rapid rebuilds for some systems

On the other hand, there are some specific products, technologies and implementations that may be end of life or actually dead. Likewise, what might be dead, dying or simply not in vogue are specific RAID implementations or packaging. Certainly there is a lot of buzz around object storage, cloud storage, forward error correction (FEC) and erasure coding, including messages about how they do away with RAID. The catch is that some object storage solutions are overlaid on top of lower-level file systems that do things such as RAID 6, granted they are out of sight, out of mind.

RAID comparison
General RAID parity and erasure code/FEC comparisons

Then there are advanced parity protection schemes, including FEC and erasure codes, that while not your traditional RAID levels have characteristics including chunking or sharding data and spreading it out over multiple devices with multiple parity (or derivatives of parity) protection.
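To make the chunking-plus-parity idea concrete, here is a toy Python sketch using simple XOR parity (single parity only; real erasure codes such as Reed-Solomon derivatives tolerate multiple failures):

```python
# Toy single-parity example: shard data, compute XOR parity, rebuild a lost shard.

def make_shards(data: bytes, count: int):
    size = -(-len(data) // count)                        # ceiling division
    return [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(count)]

def xor_parity(shards):
    parity = bytearray(len(shards[0]))
    for shard in shards:
        for i, b in enumerate(shard):
            parity[i] ^= b
    return bytes(parity)

if __name__ == "__main__":
    shards = make_shards(b"spread this data over multiple devices", 4)
    parity = xor_parity(shards)

    lost_index = 2                                       # simulate losing one shard (device)
    survivors = [s for i, s in enumerate(shards) if i != lost_index] + [parity]
    rebuilt = xor_parity(survivors)                      # XOR of survivors recreates the lost shard
    assert rebuilt == shards[lost_index]
    print("Rebuilt lost shard:", rebuilt)
```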

Bottom line is that for some environments, different RAID levels may be more applicable and alive than for others.

Via BizTech – How to Turn Storage Networks into Better Performers

  • Maintain Situational Awareness
  • Design for Performance and Availability
  • Determine Networked Server and Storage Patterns
  • Make Use of Applicable Technologies and Techniques

If RAID is alive, what to do with it?

If you are new to RAID, learn more about the past, present and future, keeping context in mind. Keeping context in mind means that there are different RAID levels and implementations for various environments. Not all RAID 0, 1, 1/0, 10, 2, 3, 4, 5, 6 or other variations (past, present and emerging) are the same for consumer vs. SOHO vs. SMB vs. SME vs. enterprise, nor are the usage cases. Some need performance for reads, others for writes; some need high capacity with low performance using hardware or software. RAID rules of thumb are ok and useful, however keep them in context with what you are doing as well as using.

What to do next?

Take some time to learn and ask questions, including what to use when, where, why and how, as well as whether an approach or recommendation is applicable to your needs. Check out the following links to read some extra perspectives about RAID, and keep in mind that what might apply to the enterprise may not be relevant for consumer or SMB and vice versa.

Some advise needed on SSD’s and Raid (Via Spiceworks)
RAID 5 URE Rebuild Means The Sky Is Falling (Via BenchmarkReview)
Double drive failures in a RAID-10 configuration (Via SearchStorage)
Industry Trends and Perspectives: RAID Rebuild Rates (Via StorageIOblog)
RAID, IOPS and IO observations (Via StorageIOBlog)
RAID Relevance Revisited (Via StorageIOBlog)
HDDs Are Still Spinning (Rust Never Sleeps) (Via InfoStor)
When and Where to Use NAND Flash SSD for Virtual Servers (Via TheVirtualizationPractice)
What’s the best way to learn about RAID storage? (Via Spiceworks)
Design considerations for the host local FVP architecture (Via Frank Denneman)
Some basic RAID fundamentals and definitions (Via SearchStorage)
Can RAID extend nand flash SSD life? (Via StorageIOBlog)
I/O Performance Issues and Impacts on Time-Sensitive Applications (Via CMG)
The original RAID white paper (PDF) by Patterson, Gibson and Katz, which while over 20 years old provides a basis, foundation and some history
Storage Interview Series (Via Infortrend)
Different RAID methods (Via RAID Recovery Guide)
A good RAID tutorial (Via TheGeekStuff)
Basics of RAID explained (Via ZDNet)
RAID and IOPs (Via VMware Communities)

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

What is my favorite or preferred RAID level?

That depends; for some things it's RAID 1, for others RAID 10, yet for others RAID 4, 5, 6 or DP, and yet other situations could be a fit for RAID 0 or erasure codes and FEC. Instead of being focused on just one or two RAID levels as the solution for different problems, I prefer to look at the environment (consumer, SOHO, small or large SMB, SME, enterprise), type of usage (primary, secondary or data protection), performance characteristics, reads, writes, and the type and number of drives among other factors. What might be a fit for one environment may not be a fit for others, thus my preferred RAID level, along with where it is implemented, is the one that meets the given situation. Also keep in mind tying RAID into part of an overall data protection strategy; remember, RAID is not a replacement for backup.

What this all means

Like other technologies that have been declared dead for years or decades, aka the zombie technologies (e.g. dead yet still alive), RAID continues to be used while the technology evolves. There are specific products, implementations or even RAID levels that have faded away or are declining in some environments, yet are alive in others. RAID and its variations are still alive, however how it is used or deployed in conjunction with other technologies is also evolving.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

December 2014 Server StorageIO Newsletter

December 2014

Hello and welcome to this December Server and StorageIO update newsletter.

Seasons Greetings

Seasons greetings

Commentary In The News

StorageIO news

Following are some StorageIO industry trends perspectives comments that have appeared in various venues. Cloud conversations continue to be popular including concerns about privacy, security and availability. Over at BizTech Magazine there are some comments about cloud and ROI. Some comments on AWS and Google SSD services can be viewed at SearchAWS. View other trends comments here

Tips and Articles

View recent as well as past tips and articles here

StorageIOblog posts

Recent StorageIOblog posts include:

View other recent as well as past blog posts here

In This Issue

  • Industry Trends Perspectives
  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Events & Activities

    View other recent and upcoming events here

    Webinars

    December 11, 2014 – BrightTalk
    Server & Storage I/O Performance

    December 10, 2014 – BrightTalk
    Server & Storage I/O Decision Making

    December 9, 2014 – BrightTalk
    Virtual Server and Storage Decision Making

    December 3, 2014 – BrightTalk
    Data Protection Modernization

    Videos and Podcasts

    StorageIO podcasts are also available via and at StorageIO.tv

    From StorageIO Labs

    Research, Reviews and Reports

    StarWind Virtual SAN for Microsoft SOFS

    May require registration
    This looks at the shared storage needs of SMBs and ROBOs leveraging Microsoft Scale-Out File Server (SOFS). The focus is on Microsoft Windows Server 2012, Server Message Block (SMB) version 3.0, SOFS and StarWind Virtual SAN management software

    View additional reports and lab reviews here.

    Resources and Links

    Check out these useful links and pages:
    storageio.com/links
    objectstoragecenter.com
    storageioblog.com/data-protection-diaries-main/
    storageio.com/ssd

    Enjoy this edition of the Server and StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts along with in the news commentary appearing soon.

    Seasons greetings 2014

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Server Storage I/O Cables Connectors Chargers & other Geek Gifts

    Server Storage I/O Cables Connectors Chargers & other Geek Gifts

    server storage I/O trends

    This is part one of a two part series for what to get a geek for a gift, read part two here.

    It is that time of the year when annual predictions are made for the upcoming year, including those that will be repeated next year or that were also made last year.

    It’s also the time of the year to get various projects wrapped up, line up new activities, get the book-keeping things ready for year-end processing and taxes, as well as other things.

    It’s also that time of the year to do some budget and project planning including upgrades, replacements, enhancements while balancing an over-subscribed holiday party schedule some of you may have.

    Lets not forget getting ready for vacations, perhaps time off from work with some time upgrading your home lab or other projects.

    Then there are the gift lists or trying to figure out what to get that difficult to shop for person particular geek’s who may have everything, or want the latest and greatest that others have, or something their peers don’t have yet.

    Sure I have a DJI Phantom II on my wish list, however also have other things on my needs list (e.g. what I really need and want vs. what would be fun to wish for).

    DJI Phantom helicopter drone
    Image via DJI.com, click on image to learn more and compare models

    So here are some things for the geek or may have everything or is up on having the latest and greatest, yet forgot or didn’t know about some of these things.

    Not to mention some of these might seem really simple and low-cost, think of them like a Lego block or erector set part where your imagination will be your boundary how to use them. Also, most if not all of these are budget friendly particular if you shop around.

    Replace a CD/DVD with 4 x 2.5″ HDD’s or SSD’s

    So you need to add some 2.5" SAS or SATA HDD's, SSD's or HHDD's/SSHD's to your server to support your VMware ESXi, Microsoft Hyper-V, KVM, Xen, OpenStack, Hadoop or legacy *nix or Windows environment, or perhaps a gaming system. The challenge is that you are out of disk drive bay slots and you want things neatly organized vs. a rat's nest of cables hanging out of your system. No worries, assuming your server has an empty media bay (e.g. those 5.25" slots where CDs/DVDs or really old HDD's go), or if you can give up the CD/DVD, then use that bay and its power connector to add one of these. This is a 4 x 2.5" SAS and SATA drive bay that has a common power connector (Molex male) with each drive bay having its own SATA drive connection. Because each drive has its own SATA connection, you can map the drives to an available on-board SATA port attached to a SAS or SATA controller, or attach an available port on a RAID adapter to the ports using a cable such as small form factor (SFF) 8087 to SATA.

    sas storage enclosuresas sata storage enclosure
    (Left) Rear view with Molex power and SATA cables (Right) front view

    I have a few of these in different systems and what I like about them is that they support different drive speeds, plus they will accept a SAS drive where many enclosures in this category only support SATA. Once you mount your 2.5" HDD or SSD using screws, you can hot swap (requires controller and OS support) the drives and move them between other similar enclosures as needed. The other thing I like is that there are front indicator lights as well as by each drive having its own separate connection, you can attach some of the drives to a RAID adapter while others connected to on-board SATA ports. Oh, and you can also have different speeds of drives as well.

    Power connections

    Depending on the type of your server, you may have Molex, SATA or some other type of power connections. You can use different power connection cables to go from one type (Molex) to another, create a connection for two devices, create an extension to reach hard to get to mounting locations.

    Warning and disclosure note: keep in mind how much power you are drawing when attaching devices so as not to create an electrical or fire hazard; follow the manufacturer's instructions and specifications, doing so at your own risk! After all, just like Clark Griswold in National Lampoon's Christmas Vacation, who found you could attach extension cords to splitters to splitters and fan out to have many lights attached, you don't want to cause a fire or blackout when you plug too many drives in.


    National Lampoon Christmas Vacation

    Measuring Power

    Ok, so you do not want to do a Clark Griswold (see above video) and overload a power circuit, or perhaps you simply want to know how many watts or amps you are drawing, or what the quality of your voltage is.

    There are many types of power meters at various prices; some even have interfaces where you can grab event data to correlate with server, storage and I/O networking performance to do things such as IOPS per watt among other metrics. Speaking of IOPS per watt, check out the SNIA Emerald site where they have some good tools, including a benchmark script that uses Vdbench to drive a hot band workload (e.g. basically kick the crap out of a storage system).

    Back to power meters, I like the Kill A Watt series of meters as they give good info about amps, volts and power quality. I have these plugged into outlets so I can see how much power is being used by the battery backup units (BBU), aka UPS, that also serve as power surge filters. If needed I can move them further downstream to watch the power intake of a specific server, storage, network or other device.

    Kill A Watt Power meter
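    Once you have a watt reading from a meter like this, turning it into an annual energy and cost estimate is straightforward. Here is a small Python sketch where the meter reading and electricity rate are hypothetical values:

```python
# Convert a measured power draw into an estimated annual energy use and cost.

measured_watts = 350          # hypothetical Kill A Watt reading for a small server
rate_per_kwh = 0.12           # hypothetical electricity rate in dollars per kWh

annual_kwh = (measured_watts / 1000) * 24 * 365   # kW x hours per year
annual_cost = annual_kwh * rate_per_kwh

print(f"~{annual_kwh:,.0f} kWh per year, about ${annual_cost:,.2f} at ${rate_per_kwh}/kWh")
# 350 watts works out to roughly 3,066 kWh and ~$368 per year at these assumed values.
```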

    Standby and backup power

    Electrical power surge strips should be a given or considered common sense; however, what is or should be common sense bears repeating so that it remains common sense: you should be using power surge strips or similar devices.

    Standby, UPS and BBU

    For most situations a good surge suppressor will cover short power transients.

    APC power strips and battery backup
    Image via APC and model similar to those that I have

    For slightly longer power outages of a few seconds to minutes, that's where battery backup (BBU) units that also have surge suppression come into play. There are many types and sizes with various features to meet your needs and budget. I have several of these in a couple of different sizes, not only for servers, storage and networking equipment (including some WiFi access points, routers, etc.), I also have them for home things such as satellite DVRs. However, not everything needs to stay on, while other things simply need to stay on long enough to shut down manually or via automated power-off sequences.

    Alternate Power Generation

    Generators are not just for the rich and famous or large data center, like other technologies they are available in different sizes, power capacity, fuel sources, manual or automated among other things.

    kohler residential generator
    Image via Kohler Power similar to model that I have

    Note that even with a typical generator there will be a time gap from when power goes off until the generator starts, stabilizes and you have good power. That's where the BBU and UPS mentioned above come into play to bridge those time gaps, which in my case is about 25-30 seconds. Btw, knowing how much power your technology is drawing, using tools such as the Kill A Watt, is part of the planning process to avoid surprises.
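    A quick back-of-the-envelope check (all numbers hypothetical except the 25-30 second gap mentioned above) is to compare the UPS vendor's runtime-at-load figure against the generator start gap plus some margin:

```python
# Back-of-the-envelope check: will the UPS/BBU bridge the generator start gap?

generator_start_gap_sec = 30        # time until generator power is stable (~25-30s per the text)
safety_margin_sec = 60              # extra margin for a slow start or shutdowns (assumption)

measured_load_watts = 400           # hypothetical total draw measured with a power meter
ups_runtime_min_at_load = 8         # hypothetical runtime at that load from the UPS datasheet

available_sec = ups_runtime_min_at_load * 60
needed_sec = generator_start_gap_sec + safety_margin_sec

print(f"UPS provides {available_sec}s at {measured_load_watts}W, need {needed_sec}s:",
      "OK" if available_sec >= needed_sec else "undersized")
```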

    What about Solar Power

    Yup, whether it is to fit in and be green, or simply to get some electrical power when or where utility power is not available to charge a battery or power some device, these small solar power devices are very handy.

    solar charger
    Image via Amazon.com
    solar battery charger
    Image via Amazon.com

    For example, you can get or easily make an adapter to charge laptops or cell phones, or even power them for normal use (check the manufacturer's information on power usage, amp and voltage draws among other warnings to prevent fire and other things). Btw, not only are these handy for computer-related things, they also work great for keeping the batteries on my fishing boat charged so that I have my fish finder and other electronics, just saying.

    Fire suppression

    How about a new or updated smoke and fire detection alarm monitor, as well as fire extinguisher for the geek’s software defined hardware that runs on power (electrical or battery)?

    The following is from the site Fire Extinguisher 101 where you can learn more about different types of suppression technologies.

    Image via Fire Extinguisher 101
    • Class A extinguishers are for ordinary combustible materials such as paper, wood, cardboard, and most plastics. The numerical rating on these types of extinguishers indicates the amount of water it holds and the amount of fire it can extinguish. Geometric symbol (green triangle)
    • Class B fires involve flammable or combustible liquids such as gasoline, kerosene, grease and oil. The numerical rating for class B extinguishers indicates the approximate number of square feet of fire it can extinguish. Geometric symbol (red square)
    • Class C fires involve electrical equipment, such as appliances, wiring, circuit breakers and outlets. Never use water to extinguish class C fires – the risk of electrical shock is far too great! Class C extinguishers do not have a numerical rating. The C classification means the extinguishing agent is non-conductive. Geometric symbol (blue circle)
    • Class D fire extinguishers are commonly found in a chemical laboratory. They are for fires that involve combustible metals, such as magnesium, titanium, potassium and sodium. These types of extinguishers also have no numerical rating, nor are they given a multi-purpose rating – they are designed for class D fires only. Geometric symbol (Yellow Decagon)
    • Class K fire extinguishers are for fires that involve cooking oils, trans-fats, or fats in cooking appliances and are typically found in restaurant and cafeteria kitchens. Geometric symbol (black hexagon)

    Wrap up for part I

    This wraps up part I of what to get a geek V2014, continue reading part II here.

    Ok, nuff said, for now…

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Cloud Conversations: Revisiting re:Invent 2014 and other AWS updates

    server storage I/O trends

    This is part one of a two-part series about Amazon Web Services (AWS) re:Invent 2014 and other recent cloud updates, read part two here.

    Revisiting re:Invent 2014 and other AWS updates

    AWS re:Invent 2014

    A few weeks ago I attended Amazon Web Service (AWS) re:Invent 2014 in Las Vegas for a few days. For those of you who have not yet attended this event, I recommend adding it to your agenda. If you have interest in compute servers, networking, storage, development tools or management of cloud (public, private, hybrid), virtualization and related topic themes, you should check out AWS re:invent.

    AWS made several announcements at re:Invent, including many around development tools, compute and data storage services. One to keep an eye on is the cloud-based Aurora relational database service that complements existing RDS tools. Aurora is positioned as an alternative to traditional SQL-based transactional databases commonly found in enterprise environments (e.g. SQL Server among others).

    Some recent AWS announcements prior to re:Invent include

    AWS vCenter Portal

    Using the AWS Management Portal for vCenter adds a plug-in within your VMware vCenter to manage your AWS infrastructure. The vCenter plug-in for AWS includes support for AWS EC2 and Virtual Machine (VM) import to migrate your VMware VMs to AWS EC2, and to create VPCs (Virtual Private Clouds) along with subnets. There is no cost for the plug-in; you simply pay for the underlying AWS resources consumed (e.g. EC2, EBS, S3). Learn more about the AWS Management Portal for vCenter here, and download the OVA plug-in for vCenter here.

    AWS re:invent content


    AWS Andy Jassy (Image via AWS)

    November 12, 2014 (Day 1) Keynote (highlight video, full keynote). This is the session where AWS SVP Andy Jassy made several announcements including Aurora relational database that complements existing RDS (Relational Data Services). In addition to Andy, the key-note sessions also included various special guests ranging from AWS customers, partners and internal people in support of the various initiatives and announcements.


    Amazon.com CTO Werner Vogels (Image via AWS)

    November 13, 2014 (Day 2) Keynote (highlight video, full keynote). In this session, Amazon.com CTO Werner Vogels appears making announcements about the new Container and Lambda services.

    AWS re:Invent announcements

    Announcements and enhancements made by AWS during re:Invent include:

    • Key Management Service (KMS)
    • Amazon RDS for Aurora
    • Amazon EC2 Container Service
    • AWS Lambda
    • Amazon EBS Enhancements
    • Application development, deployed and life-cycle management tools
    • AWS Service Catalog
    • AWS CodeDeploy
    • AWS CodeCommit
    • AWS CodePipeline

    Key Management Service (KMS)

    A hardware security module (HSM) based key management service for creating and controlling encryption keys to protect the security of digital assets and their keys. It integrates with AWS EBS and other services including S3 and Redshift, along with CloudTrail logs, for regulatory, compliance and management purposes. Learn more about AWS KMS here

    AWS Database

    For those who are not familiar, AWS has a suite of database related services, both SQL and NoSQL based, from simple to transactional to Petabyte (PB) scale data warehouses for big data and analytics. AWS offers the Relational Database Service (RDS), which is a suite of different database types, instances and services. RDS instances and types include MySQL, PostgreSQL, Oracle, SQL Server and the new AWS Aurora offering (read more below). Other little data database and big data repository related offerings include SimpleDB and DynamoDB (non-SQL databases), ElastiCache (an in-memory cache repository) and Redshift (a large-scale data warehouse and big data repository).

    In addition to database services offered by AWS, you can also combine various AWS resources including EC2 compute, EBS and other storage offerings to create your own solution. For example there are various Amazon Machine Images (AMI’s) or pre-built operating systems and database tools available with EC2 as well as via the AWS Marketplace , such as MongoDB and Couchbase among others. For those not familiar with MongoDB, Couchbase, Cassandra, Riak along with other non SQL or alternative databases and key value repositories, check out Seven Databases in Seven Weeks in my book review of it here.

    Seven Databases book review
    Seven Databases in Seven Weeks and NoSQL movement available from Amazon.com

    Amazon RDS for Aurora

    Aurora is a new relational database offering that is part of the AWS RDS suite of services. Positioned as an alternative to commercial high-end databases, Aurora is a cost-effective database engine compatible with MySQL. AWS is claiming 5x better performance than standard MySQL with Aurora while being resilient and durable. Learn more about Aurora, which will be available in early 2015, and its current preview here.

    Amazon EC2 C4 instances

    AWS will be adding a new C4 instance as a next generation of EC2 compute instance based on Intel Xeon E5-2666 v3 (Haswell) processors. The Intel Xeon E5-2666 v3 processors run at a clock speed of 2.9 GHz, providing the highest level of EC2 performance. AWS is targeting traditional High Performance Computing (HPC) along with other compute-intensive workloads including analytics, gaming and transcoding among others. Learn more about AWS EC2 instances here, and view this Server and StorageIO EC2, EBS and associated AWS primer here.

    Amazon EC2 Container Service

    Containers such as those via Docker have become popular for helping developers rapidly build as well as deploy scalable applications. AWS has added a new feature called EC2 Container Service that supports Docker using simple APIs. In addition to supporting Docker, EC2 Container Service is a high performance, scalable container management service for distributed applications deployed on a cluster of EC2 instances. Similar to other EC2 services, EC2 Container Service leverages security groups, EBS volumes and Identity and Access Management (IAM) roles, along with scheduling placement of containers to meet your needs. Note that AWS is not alone in adding container and Docker support, with Microsoft Azure also having recently made some announcements; learn more about Azure and Docker here. Learn more about EC2 Container Service here and more about Docker here.

    Docker for smarties

    Continue reading about re:Invent 2014 and other recent AWS enhancements here in part two of this two-part series.

    Ok, nuff said (for now)

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Part II: Revisiting re:Invent 2014, Lambda and other AWS updates

    server storage I/O trends

    Part II: Revisiting re:Invent 2014 and other AWS updates

    This is part two of a two-part series about Amazon Web Services (AWS) re:Invent 2014 and other recent cloud updates, read part one here.

    AWS re:Invent 2014

    AWS re:Invent announcements

    Announcements and enhancements made by AWS during re:Invent include:

    • Key Management Service (KMS)
    • Amazon RDS for Aurora
    • Amazon EC2 Container Service
    • AWS Lambda
    • Amazon EBS Enhancements
    • Application development, deployed and life-cycle management tools
    • AWS Service Catalog
    • AWS CodeDeploy
    • AWS CodeCommit
    • AWS CodePipeline

    AWS Lambda

    In addition to announcing new higher performance Elastic Compute Cloud (EC2) instances along with the container service, another new service is AWS Lambda. Lambda is a service that automatically and quickly runs your application code in response to events, activities or other triggers. In addition to running your code, the Lambda service is billed in 100 millisecond increments along with corresponding memory use, vs. standard EC2 per-hour billing. What this means is that instead of paying for an hour of time for your code to run, you can choose to use the Lambda service with more fine-grained consumption billing.

    Lambda service can be used to have your code functions staged ready to execute. AWS Lambda can run your code in response to S3 bucket content (e.g. objects) changes, messages arriving via Kinesis streams or table updates in databases. Some examples include responding to event such as a web-site click, response to data upload (photo, image, audio, file or other object), index, stream or analyze data, receive output from a connected device (think Internet of Things IoT or Internet of Device IoD), trigger from an in-app event among others. The basic idea with Lambda is to be able to pay for only the amount of time needed to do a particular function without having to have an AWS EC2 instance dedicated to your application. Initially Lambda supports Node.js (JavaScript) based code that runs in its own isolated environment.

    AWS cloud example
    Various application code deployment models

    The Lambda service is pay for what you consume; charges are based on the number of requests for your code function (e.g. application), the amount of memory and the execution time. There is a free tier for Lambda that includes 1 million requests and 400,000 GByte-seconds of time per month. A GByte-second is the amount of memory (e.g. DRAM vs. storage) consumed during one second. An example: your application runs 1,000,000 times at 1 second per run consuming 128MB of memory = 128,000,000 MB-seconds = 128,000 GB-seconds. View various pricing models here on the AWS Lambda site that show examples for different memory sizes, number of times a function runs and run times.

    How much memory you select for your application code also determines how far it can run within the AWS free tier, which is available to both existing and new customers. Lambda fees are based on the total across all of your functions, starting when the code runs. Note that you could have from one to thousands or more different functions running in the Lambda service. As of this time, AWS is showing Lambda pricing as free for the first 1 million requests, and beyond that, $0.20 per 1 million requests ($0.0000002 per request), plus a duration charge. Duration is measured from when your code starts running until it ends or otherwise terminates, rounded up to the nearest 100ms. The Lambda price also depends on the amount of memory you allocate for your code. Once past the 400,000 GB-second per month free tier, the fee is $0.00001667 for every GB-second used.
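    Based on the rates and free tier described above, a rough monthly cost estimate could be sketched as follows (the request counts, run times and memory sizes are example inputs; actual AWS billing details and rounding may differ):

```python
# Rough AWS Lambda monthly cost sketch using the pricing described above
# ($0.20 per million requests, $0.00001667 per GB-second, free tier of
# 1 million requests and 400,000 GB-seconds). Inputs are example values.

REQUEST_PRICE = 0.20 / 1_000_000      # dollars per request beyond the free tier
GB_SECOND_PRICE = 0.00001667          # dollars per GB-second beyond the free tier
FREE_REQUESTS = 1_000_000
FREE_GB_SECONDS = 400_000

def monthly_cost(requests, avg_runtime_sec, memory_mb):
    gb_seconds = requests * avg_runtime_sec * (memory_mb / 1000)
    request_cost = max(0, requests - FREE_REQUESTS) * REQUEST_PRICE
    duration_cost = max(0, gb_seconds - FREE_GB_SECONDS) * GB_SECOND_PRICE
    return gb_seconds, request_cost + duration_cost

if __name__ == "__main__":
    # The example above: 1,000,000 runs of 1 second at 128MB = ~128,000 GB-seconds,
    # which stays inside the free tier.
    gbs, cost = monthly_cost(1_000_000, 1, 128)
    print(f"{gbs:,.0f} GB-seconds, estimated cost ${cost:.2f}")

    # A heavier (hypothetical) example: 5 million runs of 0.5 seconds at 512MB.
    gbs, cost = monthly_cost(5_000_000, 0.5, 512)
    print(f"{gbs:,.0f} GB-seconds, estimated cost ${cost:.2f}")
```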

    Why use AWS Lambda vs. an EC2 instance

    Why would you use AWS Lambda vs. provisioning an Container, EC2 instance or running your application code function on a traditional or virtual machine?

    If you need control and can leverage an entire physical server with its operating system (OS), application and support tools for your piece of code (e.g. JavaScript), that could be an option. If you simply need an isolated image instance (OS, applications and tools) for your code on a shared virtual on-premises environment, then that can be an option. Likewise, if you need to move your application to an isolated cloud machine (CM) that hosts an OS along with your application, paying for those resources on an hourly basis, that could be your option. If you simply need a lighter-weight container to drop your application into, that's where Docker and containers come into play to off-load some of the traditional application dependency overhead.

    However, if all you want to do is to add some code logic to support processing activity for example when an object, file or image is uploaded to AWS S3 without having to standup an EC2 instance along with associated server, O.S. and complete application activity, that’s where AWS Lambda comes into play. Simply create your code (initially JavaScript) and specify how much memory it needs, define what events or activities will trigger or invoke the event, and you have a solution.

    View AWS Lambda pricing along with free tier information here.

    Amazon EBS Enhancements

    AWS is increasing the performance and size of General Purpose SSD and Provisioned IOPS SSD volumes. This means that you can create volumes up to 16TB and 10,000 IOPS for AWS EBS general-purpose SSD volumes. For EBS Provisioned IOPS SSD volumes you can create volumes up to 16TB with 20,000 IOPS. General-purpose SSD volumes deliver a maximum throughput (bandwidth) of 160 MBps, and Provisioned IOPS SSD volumes have been specified by AWS at 320 MBps when attached to EBS-optimized instances. Learn more about EBS capabilities here. Check your IO size against the AWS sizing information to avoid surprises, as all IO sizes are not the same. Learn more about Provisioned IOPS, optimized instances, EBS and EC2 fundamentals in this StorageIO AWS primer here.
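    As a quick sanity check on why IO size matters, multiplying IOPS by IO size gives the bandwidth a volume would need. A small sketch (the IO sizes are examples, compared against the general purpose SSD volume throughput figure above):

```python
# Why IO size matters: IOPS x IO size = required bandwidth.
# Example IO sizes; compare against the ~160 MBps general purpose SSD volume figure above.

def required_mbps(iops, io_size_kb):
    return iops * io_size_kb / 1024     # MBps, assuming 1024 KB per MB

for io_size_kb in (4, 16, 64, 256):
    print(f"10,000 IOPS at {io_size_kb}KB = {required_mbps(10_000, io_size_kb):,.0f} MBps")
# 4KB -> ~39 MBps, 16KB -> ~156 MBps, 64KB -> ~625 MBps (well past the volume limit)
```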

    Application development, deployed and life-cycle management tools

    In addition to compute and storage resource enhancements, AWS has also announced several tools to support application development, configuration along with deployment (life-cycle management). These include tools that AWS uses themselves as part of building and maintaining the AWS platform services.

    AWS Config (Preview e.g. early access prior to full release)

    Management, reporting and monitoring capabilities, including data center infrastructure management (DCIM) functions, for monitoring your AWS resources and configuration (including history), governance, change management and notifications. AWS Config enables capabilities to support DCIM, a Change Management Database (CMDB), troubleshooting and diagnostics, auditing, and resource and configuration analysis among other activities. Learn more about AWS Config here.

    AWS Service Catalog

    AWS announced a new service catalog that will be available in early 2015. This new service capability will enable administrators to create and manage catalogs of approved resources for users to use via their personalized portal. Learn more about AWS service catalog here.

    AWS CodeDeploy

    To support rapid code deployment automation for EC2 instances, AWS has released CodeDeploy. CodeDeploy masks the complexity associated with deployment when adding new features to your applications while reducing error-prone manual operations. As part of the announcement, AWS mentioned that it uses CodeDeploy for its own application development, maintenance, change management and deployment operations. While suited for at-scale deployments across many instances, CodeDeploy works with as few as a single EC2 instance. Learn more about AWS CodeDeploy here.

    AWS CodeCommit

    For application code management, AWS will be making available in early 2015 a new service called CodeCommit. CodeCommit is a highly scalable, secure source control service that hosts private Git repositories. Supporting the standard functionality of Git, including collaboration, you can store anything from source code to binaries while working with your existing tools. Learn more about AWS CodeCommit here.

    AWS CodePipeline

    To support application delivery and release automation along with associated management tools, AWS is making available CodePipeline. CodePipeline is a tool (service) that supports builds, workflow checking, code staging, testing and release to production, including support for third-party tool integration. CodePipeline will be available in early 2015; learn more here.

    Additional reading and related items

    Learn more about the above and other AWS services by actually trying them hands-on using the free tier (AWS Free Tier). View AWS re:Invent produced breakout session videos here, audio podcasts here, and session slides here (all sessions may not yet have been uploaded by AWS re:Invent)

    What this all means

    AWS amazon web services

    AWS continues to invest as well as re-invest in its environment, both adding new feature functionality and expanding the extensibility of those features. This means that AWS, like other vendors or service providers, adds new check-box features, however they also increase the depth and extensibility of those capabilities. Besides adding new features and increasing the extensibility of existing capabilities, AWS is addressing both the data and information infrastructure, including compute (server), storage and database, and networking, along with associated management tools, while also adding extra developer tools. Developer tools include life-cycle management supporting code creation, testing, tracking and change management among other management activities.

    Another observation is that while AWS continues to promote the public cloud, such as the services it offers, as the present and future, it is also talking hybrid cloud. Granted you have to listen carefully, as you may not hear hybrid cloud tossed around the way some toss it around; however, listen for and look into AWS Virtual Private Cloud (VPC), along with what you can do using various technologies via the AWS Marketplace. AWS is also speaking the language of enterprise and traditional IT, from applications and development to data and information infrastructure, while also walking the cloud talk. What this means is that AWS realizes it needs to help existing environments evolve and make the transition to the cloud, which means speaking their language vs. converting them to cloud conversations in order to then migrate them to the cloud. These steps should make AWS practical for many enterprise environments looking to make the transition to public and hybrid cloud at their own pace, some faster than others. More on these and some related themes in future posts.

    The AWS re:Invent event continues to grow year over year. I heard a figure of over 12,000 people, however it was not clear if that included exhibiting vendors, AWS people, attendees, analysts, bloggers and media among others. A simple validation is that the keynotes were in the larger rooms used by events such as EMCworld and VMworld when they were hosted in Las Vegas, as was the expo space, vs. what I saw last year at re:Invent. Unlike some large events such as VMworld where, while becoming more crowded, at best there is a waiting queue or line to get into sessions or the hands-on lab (HOL), AWS re:Invent is still easy to get into and spend some time using the HOL, which is of course powered by AWS, meaning you can resume what you started at re:Invent later. Overall a good event and a nice series of enhancements by AWS; looking forward to next year's AWS re:Invent.

    Ok, nuff said (for now)

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    StorageIO Out and About Update – VMworld 2014

    StorageIO Out and About Update – VMworld 2014

    Here is a quick video montage or mash-up if you prefer that Cory Peden (aka the Server and StorageIO Intern @Studentof_IT) put together using some video that recorded while at VMworld 2014 in San Francisco. In this YouTube video we take a quick tour around the expo hall to see who as well as what we run into while out and about.

    VMworld 2014 StorageIO Update
    Click on above image to view video

    For those of you who were at VMworld 2014, the video (click above image) will give you a quick deja vu of the sights and sounds, while those who were not there can see what you missed and plan for next year. Watch for appearances from Gina Minks (@Gminks) aka Gina Rosenthal (of BackupU) and Michael (not Dell) of Dell Data Protection, and Luigi Danakos (@Nerdblurt) of HP Data Protection who lost his voice (tweet Luigi if you can help him find his voice). With Luigi we were able to get in a quick game of buzzword bingo before catching up with Marc Farley (@Gofarley) and John Howarth of Quaddra Software. Marc and John talk about their new solution from Quaddra, which will enable searching and discovering data across different storage systems and technologies.

    Other visits include a quick look at an EVO:Rail from Dell, along with Docker for Smarties overview with Nathan LeClaire (@upthecyberpunks) of Docker (click here to watch the extended interview with Nathan).

    Docker for smarties

    Check out the conversation with Max Kolomyeytsev of StarWind Software (@starwindsan) before we get interrupted by a sales person. During our walk about, we also bump into Mark Peters (@englishmdp) of ESG facing off video camera to video camera.

    Watch for other things including rack cabinets that look like compute servers yet that have a large video screen so they can be software defined for different demo purposes.

    virtual software defined server

    Watch for more Server and StorageIO Industry Trend Perspective podcasts, videos as well as out and about updates soon, meanwhile check out others here.

    Ok, nuff said (for now)

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Is Computer Data Storage Complex? It Depends

    Is Computer Data Storage Complex? It Depends

    I often get asked, or told, that computer data storage is complex, with so many options to choose from and apples-to-oranges comparisons among other things.

    On a recent trip to Europe, while being interviewed by a Dutch journalist in Nijkerk, Holland at a Brouwer Storage Consultancy event I was presenting at, the question came up again about storage complexity. Btw, you can read the article on data storage industry trends here (it's in Dutch).

    I hesitated and thought for a moment, then responded that in some ways it's not as complex as some make it seem, although there is more to data storage than just cost per capacity. As I usually do when asked or told how complex data storage is, my response is mixed: yes, storage, data and information infrastructures are complex; however, let's put it in perspective: is storage any more complex than other things?

    Our conversation then evolved with an example: I find shopping for an automobile complex unless I know exactly what I'm looking for. After all, there are cars, trucks and SUVs, used and new, buy or lease, different manufacturers, makes and models, speeds, cargo capacity, management tools and interfaces, not to mention metrics and fuel.

    This is where I usually mention how, IMHO, buying a new car or vehicle is complex with all the different options, that is unless you know what you want, or know your selection criteria and options. The same goes for selecting a new laptop computer, tablet or smart phone, not to mention a long list of other things that to outsiders can also seem complex, intimidating or overwhelming. However, let's take a step back to look at storage, then return to compare some other things that may be confusing to those who are not focused on them.

    Stepping back looking at storage

    Similar to other technologies, there are different types of data storage to meet various needs from performance to space capacity as well as support various forms of scaling.

    server and storage I/O flow
    Server and storage I/O fundamentals

    Storage options
    Various types of storage devices including HDD’s, SSHD/HHDD’s and SSD’s

    Storage type options
    Various types of storage devices

    Storage I/O decision making
    Storage options, block, file, object, ssd, hdd, primary, secondary, local and cloud

    Shopping for other things can be complex

    During my return trip to the US from the Dutch event, I had a layover at London Heathrow (LHR) and walking the concourse it occurred to me that while there are complexities involved with different technologies including storage, data and information infrastructures, there were other complexities.

    Same thing with shoes, so many different options, not to mention cell phones, laptops and tablets, PCIe, or how about TVs?

    I want to go on a trip: do I book based on the lowest cost for air fare, then hotel and car rental, or do I purchase a package? For the air fare, is it the cheapest option that takes all day to get from point A to B via plane changes at points C, D and E, not to mention paying extra fees, vs. paying a higher price for a direct flight with extra amenities?

    Getting hungry so what to do for dinner, what type of cuisine or food?

    Hand Baggage options
    How about a new handbag or perhaps shoes?

    Baggage options
    How about a new backpack, brief case or luggage?

    Beverage options
    What to drink for a beverage, so many options unless you know what you want.

    PDA options
    Complexity of choosing what cell phone, PDA or other electronics

    What to read options
    How about what to read including print vs. online accessible content?

    How about auto parts complexity

    Once I got home from my European trip, I had some mechanical things to tend to, including replacing some spark plugs.

    Auto part options
    How about automobile parts from tires, to windshield wiper blades to spark plugs?

    Sure, if you know the exact part number, and assuming that part number has not changed, then you can start shopping for the part. However, recently I ordered a part based on a vehicle serial number (e.g. make, model, year, etc.) only to receive the wrong part. Sure, the part numbers were correct, however somewhere along the line the manufacturer made a change and not all downstream vendors knew about it; granted, I eventually received the correct part.

    Back to tech and data infrastructures

    Ok, hopefully you got the point from the above examples, among many others, that we live in a world full of options, and those options can bring complexity.

    What type of network or server? How about operating system, browser, database, or programming and development language, as there are different needs and options for each?

    Sure, there are many storage options, as not everything is the same.

    Likewise, while there can be a simple answer, or a trendy one given before the question is fully understood or explained (perhaps due to a preference), the best or most applicable answer may be "it depends". However, saying "it depends" may seem complex to those who just want a simple answer.

    Closing Comments

    So is storage more complex than other technologies, tools, products or services?

    What say you?

    Ok, nuff said, for now…

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    September October 2014 Server and StorageIO Update Newsletter

    September and October 2014

    Hello and welcome to this joint September and October Server and StorageIO update newsletter. Since the August newsletter, things have been busy with a mix of behind-the-scenes projects as well as other activities, including several webinars and both on-line and in-person events in the US as well as Europe.

    Enjoy this edition of the Server and StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts along with in the news commentary appearing soon.

    Cheers gs

    Industry Trends and Perspectives

    Storage trends

    In September I was invited to do a keynote opening presentation at the MSP area CMG event. The theme for the September CMG event was "Flash – A Real Life Experience" with a focus on what people are doing and how they are testing and evaluating, including use of hybrid solutions, as opposed to vendor marketing sessions. My session was titled "Flash back to reality – Myths and Realities, Flash and SSD Industry trends perspectives plus benchmarking tips" and can be found here. Thanks to Tom Becchetti and the MSP CMG (@mspcmg) folks for a great event.

    There are many facets to hybrid storage, including different types of media (SSD’s and HDD’s) along with unified or multi-protocol access. Then there is hybrid storage that spans local and public clouds. Here is a link to an on-line Internet Radio show via Information Week, along with an on-line chat, about Hybrid Storage for Government.

    Some things I’m working with or keeping an eye on include Cloud, Converged solutions, Data Protection, Business Resiliency, DCIM, Docker, InfiniBand, Microsoft (Hyper-V, SOFS, SMB 3.0), Object Storage, SSD, SDS, VMware and VVOL, among other items.

    Commentary In The News

    StorageIO news

    A lot has been going on in the IT industry since the last StorageIO Update newsletter. The following are some StorageIO industry trends perspectives comments that have appeared in various venues. Cloud conversations continue to be popular, including concerns about privacy, security and availability. Here are some comments at SearchCloudComputing about moving on from cloud deployment heartbreak.

    Nand flash Solid State Devices (SSD) continue to increase in customer deployments; over at Processor, here are some comments on Incorporating SSD’s Into Your Storage Plan. Also on SSD, here are some perspectives making the Argument For Flash-Based Storage. Some other comments over at Processor.com include looking At Disaster Recovery As A Service, tips on what to Avoid In Data Center Planning, making the most of Enterprise Virtualization, as well as New Tech, Advancements To Justify Servers. Part of controlling and managing storage costs is having timely insight and metrics that matter; here are some more perspectives and also here.

    Over at SearchVirtualStorage I have some comments on how to configure and manage storage for a virtual desktop infrastructure (VDI) environment, while over at TechPageOne there are perspectives on top reasons to switch to Windows 8.

    Some other comments and perspectives are over at EnterpriseStorageForum including Top 10 Ways to Improve Data Center Energy Efficiency. At InfoStor there are comments and tips about Object Storage, while at SearchDataBackup I have some perspectives about Symantec being broken up.

    View other industry trends comments here

    Tips and Articles

    Recent Server and StorageIO tips and articles appearing in various venues include, over at SearchCloudStorage, a series of pieces discussing often asked questions:

    Are you concerned with the security of the cloud?
    Is the cost of cloud storage really cheaper?
    What’s important to know about cloud privacy policy?
    Are more than five nines of availability really possible?
    What to look for in an enterprise file sync-and-share app?
    How primary storage clouds and cloud backup differ?
    What should I consider when using SSD cloud?
    What is the difference between a snapshot and a clone?

    View other recent as well as past tips and articles here

    StorageIOblog posts

    Recent StorageIOblog posts include:

    View other recent as well as past blog posts here

    In This Issue

  • Industry Trends Perspectives
  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Events & Activities

    September 25, 2014
    MSP CMG – Flash and SSD performance

    October 8-10, 2014
    Nijkerk Netherlands Brouwer Seminar Series

    November 11-13, 2014
    AWS re:Invent Las Vegas

    View other recent and upcoming events here

    Webinars

    November 13 9AM PT
    BrightTalk – Software Defined Storage

    November 11 10AM PT
    Google+ Hangout Dell BackupU

    November 11 9AM PT
    BrightTalk – Software Defined Data Centers

    October 16 9AM PT
    BrightTalk – Cloud Storage Decision Making

    October 15 1PM PT
    BrightTalk – Hybrid Cloud Trends

    October 7 11AM PT
    BackupU – Data Protection Management

    September 18 8AM CT
    Nexsan – Hybrid Storage

    September 18 9AM PT
    BrightTalk – Converged Storage

    September 17 1PM PT
    BrightTalk – DCIM

    September 16 1PM PT
    BrightTalk – Data Center Convergence

    September 16 Noon PT
    BrightTalk – BC, BR and DR

    September 16 1PM CT
    StarWind – SMB 3.0 & Microsoft SOFS

    September 16 9AM PT
    Google+ Hangout – BackupU – Replication

    September 2 11AM PT
    Dell BackupU – Replication

    Videos and Podcasts

    Docker for Smarties
    Video: Docker for Smarties

    StorageIO podcasts are also available at StorageIO.tv

    From StorageIO Labs

    Research, Reviews and Reports

    Enterprise 12Gbps SAS and SSD’s
    Better Together – Part of an Enterprise Tiered Storage Strategy

    In this StorageIO Industry Trends Perspective thought leadership white paper we look at how enterprise class SSD’s and 12Gbps SAS address current and next generation tiered storage for virtual, cloud and traditional, Little and Big Data environments. This report includes proof points from running various workloads, including database TPC-B, TPC-E and Microsoft Exchange, in the StorageIO Labs, along with cache software comparing SSD’s, SSHD’s and HDD’s. Read the white paper, compliments of Seagate 1200 12Gbps SAS SSD’s.

    Seagate SSD White Paper
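
    As a point of reference (and not the tooling or methodology used in the report above), here is a minimal Python sketch of how a rough device-to-device comparison of random read activity is often instrumented. The file paths and parameters are hypothetical; a real benchmark would use a purpose-built tool such as fio with direct I/O, multiple queue depths and threads, and longer time-based runs, and would account for page cache and RAID or controller effects.

        # Minimal illustrative sketch: time random 4 KiB reads against a test file
        # placed on each device being compared and report a rough IOPS number.
        # Note: single threaded, buffered I/O, so the OS page cache will inflate
        # results; use a real benchmark tool (e.g. fio) for meaningful numbers.
        import os, random, time

        def rough_read_iops(path, io_size=4096, duration=10):
            """Issue random reads for `duration` seconds and return reads per second."""
            fd = os.open(path, os.O_RDONLY)
            try:
                file_size = os.fstat(fd).st_size
                max_blocks = max(file_size // io_size, 1)
                ios = 0
                deadline = time.time() + duration
                while time.time() < deadline:
                    # Align offsets to the I/O size, as a block-oriented workload would.
                    offset = random.randrange(max_blocks) * io_size
                    os.pread(fd, io_size, offset)
                    ios += 1
                return ios / duration
            finally:
                os.close(fd)

        # Hypothetical usage comparing a file on an HDD vs. one on an SSD:
        # print(rough_read_iops("/mnt/hdd/testfile"), rough_read_iops("/mnt/ssd/testfile"))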

    Resources and Links

    Check out these useful links and pages:
    storageio.com/links
    objectstoragecenter.com
    storageioblog.com/data-protection-diaries-main/
    storageio.com/ssd

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved