Intel Micron 3D XPoint server storage NVM SCM PM SSD

3D XPoint server storage class memory SCM


Storage I/O trends

Updated 1/31/2018


This is the second of a three-part series on the recent Intel and Micron 3D XPoint server storage memory announcement. Read Part I here and Part III here.

Is this 3D XPoint marketing, manufacturing or material technology?

You can’t have a successful manufactured material technology without some marketing; likewise, marketing without a manufactured material is just manufactured marketing. In the case of the 3D XPoint launch, real technology was shown, granted it was only a wafer and dies as opposed to an actual DDR4 DIMM, PCIe Add In Card (AIC) or drive form factor Solid State Device (SSD) product. On the other hand, even though there is marketing collateral available to learn more from, this was far from an over-the-big-top, made-for-TV-or-web circus event, which can be a good thing.


Wafer unveiled containing 3D XPoint 128 Gb dies

Who will get access to 3D XPoint?

Initial 3D XPoint production capacity will go toward early samples that the two companies will offer their customers later this year, with general production slated for 2016, meaning the first real customer-deployed products should appear sometime in 2016.

Is it NAND or NOT?

3D XPoint is not NAND flash; it is also not NVRAM or DRAM. It is a new class of NVM that can be used as server-class main memory with persistence, or as persistent data storage, among other uses (cell phones, automobiles, appliances and other electronics). In addition, 3D XPoint is more durable, with a longer useful life for writing and storing data than NAND flash.

Why is 3D XPoint important?

As mentioned during the Intel and Micron announcement, there have only been seven major memory technologies introduced since the transistor back in 1947, granted there have been many variations along with generational enhancements of those. Thus 3D XPoint is being positioned by Intel and Micron as the eighth memory class, joining its predecessors, many of which continue to be used today in various roles.


Major memory classes or categories timeline

In addition to the above memory classes or categories timeline, the following shows in more detail various memory categories (click on the image below to get access to the Intel interactive infographic).

Intel History of Memory Infographic
Via: https://intelsalestraining.com/memory timeline/ (Click on image to view)

What capacity size is 3D XPoint?

Initially the 3D XPoint technology is available as a two-layer die with 128 Gbit (gigabits) of capacity. Keeping in mind that there are 8 bits to a byte, that works out to 16 GBytes of raw capacity per die initially. With density improvements, as well as increased stacking of layers, the number of cells or bits per die (e.g. what makes up a chip) should improve; in addition, most implementations will package multiple chips together in some type of configuration.
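The bit-to-byte arithmetic above can be sketched as a quick check; the 128 Gbit per-die figure comes from the announcement, the rest is plain unit conversion.

```python
# Convert a die's raw capacity in gigabits (Gbit) to gigabytes (GByte).
# There are 8 bits to a byte, so a 128 Gbit die yields 16 GBytes raw.
def die_capacity_gbytes(gbits_per_die: int, bits_per_byte: int = 8) -> float:
    return gbits_per_die / bits_per_byte

print(die_capacity_gbytes(128))  # 16.0
```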

What will 3D XPoint cost?

During the 3D XPoint launch webinar, Intel and Micron hinted that initial pricing will fall between current DRAM and NAND flash on a per-cell or per-bit basis; however, real pricing and costs will vary depending on packaging. For example, whether 3D XPoint is placed on a DDR4 or other type of DIMM, on a PCIe Add In Card (AIC), or in a drive form factor SSD, among other options, will affect the real price. Likewise, as with other memory and storage mediums, as production yields and volumes increase, along with denser designs, the cost per usable cell or bit can be expected to improve further.

Where to read, watch and learn more


Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

DRAM, which has been around for some time, has plenty of life left for many applications, as does NAND flash, including new 3D NAND, V-NAND and other variations. For the next several years there will be coexistence between new and old NVM and DRAM, among other memory technologies including 3D XPoint. Read more in this series including Part I here and Part III here.

Disclosure: Micron and Intel have been direct and/or indirect clients in the past via third parties and partners; I have also bought and used some of their technologies directly and/or indirectly via their partners.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

3D XPoint nvm pm scm storage class memory

Part III – 3D XPoint server storage class memory SCM



Updated 1/31/2018


This is the third of a three-part series on the recent Intel and Micron 3D XPoint server storage memory announcement. Read Part I here and Part II here.

What is 3D XPoint and how does it work?

3D XPoint is a new class of memory (view other categories of memory here) that provides read and write performance closer to that of DRAM, with about 10x the capacity density. In addition to speed closer to DRAM than to the slower NAND flash, 3D XPoint is also non-volatile memory (NVM) like NAND flash, NVRAM and others. What this means is that 3D XPoint can be used as persistent, higher-density, fast server memory (or main memory for other computers and electronics). Besides being fast persistent main memory, 3D XPoint will also be a faster medium for solid state devices (SSDs), including PCIe Add In Cards (AIC), M.2 cards and drive form factor 8637/8639 NVM Express (NVMe) accessed devices, with better endurance or life span compared to NAND flash.


3D XPoint architecture and attributes

The initial die, or basic chip building block, of the 3D XPoint implementation is a two-layer, 128 Gbit device; at 8 bits per byte that yields 16 GBytes raw. Over time, increased densities should become available as the bit density improves with more cells and further scaling of the technology, combined with packaging. For example, while a current die can hold up to 16 GBytes of data, multiple dies could be packaged together to create a 32 GB, 64 GB, 128 GB or larger actual product. Think about not only where packaged flash-based SSD capacities are today, but also where DDR3 and DDR4 DIMMs are, at densities such as 4 GB, 8 GB, 16 GB and 32 GB.
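The packaging math above can be sketched as follows; the 16 GByte per-die figure is from the announcement, while the die counts are illustrative assumptions about how a vendor might package a product, not announced configurations.

```python
# Hypothetical packaging sketch: total product capacity from combining
# multiple 16 GByte dies in one package. Die counts are illustrative.
def packaged_capacity_gb(dies: int, gbytes_per_die: int = 16) -> int:
    return dies * gbytes_per_die

for dies in (2, 4, 8):
    print(dies, "dies ->", packaged_capacity_gb(dies), "GB")
```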

The 3D aspect comes from the memory being arranged in a matrix, initially two layers high, with multiple rows and columns that intersect; at each intersection is a microscopic material-based switch for accessing a particular memory cell. Unlike NAND flash, where an individual cell or bit is accessed as part of a larger block or page comprising several thousand bytes at once, 3D XPoint cells or bits can be individually accessed, speeding up reads and writes in a more granular fashion. It is this more granular access, along with performance, that will enable 3D XPoint to be used in lower-latency scenarios where DRAM would normally be used.

Instead of trapping electrons in a cell to create a bit of capacity (e.g. on or off) like NAND flash, 3D XPoint leverages the underlying physical material properties to store a bit as a phase change, enabling use of all cells. In other words, instead of being electron based, it is material based. While Intel and Micron did not specify the actual chemistry and physical materials used in 3D XPoint, they did discuss some of the characteristics. If you want to go deep, check out how DailyTech makes an interesting educated speculation or thesis on the underlying technology.

Watch the following video to get a better idea and visually see how 3D XPoint works.



3D XPoint YouTube Video

What are these chips, cells, wafers and dies?

Left: many dies on a wafer; right: a closer look at dies cut from the wafer

Dies (here and here) are the basic building block of what goes into chips, which in turn are the components used for creating DDR DIMMs for main computer memory, as well as for creating SD and MicroSD cards, USB thumb drives, PCIe AICs and drive form factor SSDs, as well as custom modules on motherboards, or consumption at the bare die and wafer level (e.g. where you are doing really custom things at volume, beyond soldering-iron scale).


Has Intel and Micron cornered the NVM and memory market?

We have heard proclamations, speculation and statements of the demise of DRAM, NAND flash and other volatile and non-volatile memories for years, if not decades now. Each year there is the usual claim that this will be the year of "x", where "x" can include, among others: Resistive RAM (aka ReRAM or RRAM, e.g. the memristor that HP earlier announced it would bring to market, then earlier this year canceled those plans, while Crossbar continues to pursue RRAM); MRAM (Magnetoresistive RAM); Phase Change Memory (aka CRAM, PCM or PRAM); and FRAM (aka FeRAM or Ferroelectric RAM).

flash SSD and NVM trends

Expanding persistent memory and SSD storage markets

Keep in mind that there are many steps, taking time measured in years or decades, to go from a research and development lab idea to a prototype that can be produced at production volumes with economic yields. As a frame of reference, there is still plenty of life in both DRAM and NAND flash, the latter having appeared around 1989.

Industry vs. Customer Adoption and deployment timeline

Technology industry adoption precedes customer adoption and deployment

There is a difference between industry adoption and deployment vs. customer adoption and deployment; they are related, yet separated by time as shown in the above figure. What this means is that there can be several years from the time a new technology is initially introduced until it becomes generally available. Keep in mind that NAND flash has yet to reach its full market potential despite having made significant inroads in the past few years since it was introduced in 1989.

This raises the question of whether 3D XPoint is a variation of phase change, RRAM, MRAM or something else. Over at DailyTech they lay out a line of thinking (or educated speculation) that 3D XPoint is some derivative or variation of phase change; time will tell what it really is.

What’s the difference between 3D NAND flash and 3D XPoint?

3D NAND is a form of NAND flash NVM, while 3D XPoint is a completely new and different type of NVM (e.g. it is not NAND).

3D NAND is a variation of traditional NAND flash, the difference being vertical stacking vs. horizontal to improve density, hence it is also known as vertical NAND or V-NAND. Vertical stacking is like building up to house more tenants or occupants in a dense environment (scaling up), vs. scaling out by using more space where density is not an issue. Note that magnetic HDDs shifted to perpendicular (e.g. vertical) recording about ten years ago to break through the superparamagnetic barrier, and more recently magnetic tape has also adopted perpendicular recording. Also keep in mind that 3D XPoint and the earlier announced Intel and Micron 3D NAND flash are two separate classes of memory that both just happen to have 3D in their marketing names.


What This All Means

First, keep in mind that it is very early in the 3D XPoint technology evolution life cycle, and both DRAM and NAND flash will not be dead, at least near term. Keep in mind that NAND flash appeared back in 1989 and only over the past several years has finally hit its mainstream adoption stride, with plenty of market upside left. The same is true of DRAM, which has been around for some time; it too still has plenty of life left for many applications. However, applications that need improved speed over NAND flash, or persistence and density vs. DRAM, will be some of the first to leverage new NVM technologies such as 3D XPoint. Thus, at least for the next several years, there will be coexistence between new and old NVM and DRAM, among other memory technologies. Bottom line: 3D XPoint is a new class of NVM that can be used for persistent main server memory or for persistent fast storage memory. If you have not done so, check out Part I here and Part II here of this three-part series on Intel and Micron 3D XPoint.


Intel Micron unveil new 3D XPoint Non-Volatile Memory NVM for servers storage

3D XPoint NVM persistent memory PM storage class memory SCM



Updated 1/31/2018

This is the first of a three-part series on the Intel and Micron announcement unveiling new 3D XPoint Non-Volatile Memory (NVM) for servers and storage. Read Part II here and Part III here.

In a webcast the other day, Intel and Micron announced new 3D XPoint non-volatile memory (NVM) that can be used both for primary main memory (e.g. what’s in computers, servers, laptops, tablets and many other things) in place of Dynamic Random Access Memory (DRAM), and for persistent storage faster than today’s NAND flash-based solid state devices (SSD), not to mention future hybrid usage scenarios. Note that this announcement, while having the common term 3D in it, is different from the earlier Intel and Micron announcement about 3D NAND flash (read more about that here).

Twitter hash tag #3DXpoint

The big picture, why this type of NVM technology is needed

Server and Storage I/O trends

  • Memory is storage, and storage is persistent memory
  • There is no such thing as a data or information recession; more data is being created, processed and stored
  • Increased demand is also driving density along with convergence across server storage I/O resources
  • Larger amounts of data need to be processed faster (large amounts of little data and big fast data)
  • Fast applications need more and faster processors and memory, along with faster I/O interfaces
  • The best server or storage I/O is the one you do not need to do
  • The second-best I/O is the one with the least impact or overhead
  • Data needs to be close to processing, and processing needs to be close to the data (locality of reference)


Server Storage I/O memory hardware and software hierarchy along with technology tiers

What did Intel and Micron announce?

Intel SVP and General Manager of the Non-Volatile Memory Solutions Group Robert Crooke (left) and Micron CEO D. Mark Durcan did the joint announcement presentation of 3D XPoint (webinar here). What was announced is the 3D XPoint technology, jointly developed and manufactured by Intel and Micron, which is a new form or category of NVM that can be used both for primary memory in servers, laptops and other computers, among other uses, as well as for persistent data storage.


Robert Crooke (Left) and Mark Durcan (Right)

Summary of 3D XPoint announcement

  • New category of NVM memory for servers and storage
  • Joint development and manufacturing by Intel and Micron in Utah
  • Non-volatile, so it can be used for storage or persistent server main memory
  • Allows NVM to scale with data, storage and processor performance
  • Leverages capabilities of both Intel and Micron, who have collaborated in the past
  • Performance: Intel and Micron claim up to 1,000x faster than NAND flash
  • Availability: persistent NVM compared to DRAM, with better durability (life span) than NAND flash
  • Capacity: density about 10x better than traditional DRAM
  • Economics: cost per bit between DRAM and NAND (depending on packaging of resulting products)

What applications and products is 3D XPoint suited for?

In general, 3D XPoint should be usable for many of the same applications and associated products that current DRAM and NAND flash-based storage memories are used for. These range from IT and cloud or managed service provider data center applications and services to consumer-focused uses, among many others.


3D XPoint enabling various applications

In general, applications or usage scenarios, along with supporting products, that can benefit from 3D XPoint include, among others: applications that need larger amounts of main memory in a denser footprint, such as in-memory databases, little and big data analytics, gaming, waveform analysis for security, copyright or other detection analysis, life sciences, high-performance compute and high-productivity compute, energy, video and content serving, among many others.

In addition, applications that need persistent main memory for resiliency, or to cut the delays and impacts of planned or unplanned maintenance, or of waiting for memories and caches to be warmed or re-populated after a server boot (or re-boot), will benefit. 3D XPoint will also be useful for applications that need faster read and write performance compared to current-generation NAND flash for data storage. This means both existing and emerging applications, as well as some that do not yet exist, will benefit from 3D XPoint over time, much as today’s applications and others have benefited from DRAM used in Dual Inline Memory Modules (DIMM) and NAND flash advances over the past several decades.


What This All Means

First, keep in mind that this is very early in the 3D XPoint technology evolution life-cycle and both DRAM and NAND flash will not be dead at least near term. Keep in mind that NAND flash appeared back in 1989 and only over the past several years has finally hit its mainstream adoption stride with plenty of market upside left. Continue reading Part II here and Part III here of this three-part series on Intel and Micron 3D XPoint along with more analysis and commentary.


Top vblog voting V2.015 (It’s IT award season, cast your votes)

Top vblog voting V2.015 (It’s IT award season, cast your votes)


It’s that time of the year again for award season:

  • The motion picture academy’s Academy Awards (e.g. the Oscars)
  • The Grammys and other entertainment awards
  • As well as Eric Siebert (aka @ericsiebert) vsphere-land.com top vblog

Vsphere-land.com top vblog

For several years now, Eric has run an annual vote for the top VMware, virtualization, storage and related blogs, with voting taking place until March 16th, 2015 (click on the image below). You will find a nice mix of new school, old school, and a few current or future school themed blogs represented, with some being more VMware specific. However, there are also many blogs at the vpad site that have cloud, virtual, server, storage, networking, software defined, development and other related themes.

top vblog voting
Click on the above image to cast your vote for favorite:

  • Ten blogs (e.g. select up to ten and then rank 1 through 10)
  • Storage blog
  • Scripting blog
  • VDI blog
  • New Blogger
  • Independent Blogger (e.g. non-vendor)
  • News/Information Web site
  • Podcast

Call to action, take a moment to cast your vote

My StorageIOblog.com has been on the vLaunchPad site for several years now as well as having syndicated content that also appears via some of the other venues listed there.

Six time VMware vExpert

In addition to my StorageIOblog and podcast, you will also find many of my fellow VMware vExperts among others at the vLaunchpad site so check them out as well.

What this means

This is a people’s choice process (yes, it is a popularity process of sorts as well); however, it is also a way of rewarding or thanking those who take time to create and share content with you and others. If you take time to read various blogs, listen to podcasts, and consume other content, please take a few moments to cast your vote here (thank you in advance), which I hope includes StorageIOblog.com in the top ten, as well as in the Storage, Podcast and Independent blogger categories.

Ok, nuff said, for now…

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

How to test your HDD SSD or all flash array (AFA) storage fundamentals

How to test your HDD SSD AFA Hybrid or cloud storage

server storage data infrastructure i/o hdd ssd all flash array afa fundamentals

Updated 2/14/2018

Over at BizTech Magazine I have a new article, 4 Ways to Performance Test Your New HDD or SSD, that provides a quick guide to verifying or learning what the speed characteristics of your new storage device are.

An out-take from the article used by BizTech as a "tease" is:

These four steps will help you evaluate new storage drives. And … psst … we included the metrics that matter.

Building off the basics, server storage I/O benchmark fundamentals

The four basic steps in the article are:

  • Plan what and how you are going to test (what’s applicable for you)
  • Decide on a benchmarking tool (learn about various tools here)
  • Test the test (find bugs, errors before a long running test)
  • Focus on metrics that matter (what’s important for your environment)

Server Storage I/O performance

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.


What This All Means

To some, the above (read the full article here) may seem like common-sense tips and things everybody should know; on the other hand, there are many people who are new to server, storage, I/O and networking hardware and software, cloud and virtual environments, along with various applications, not to mention different tools.

Thus the above is a refresher for some (e.g. déjà vu), while for others it might be new and revolutionary, or simply helpful. If you are interested in HDDs and SSDs, as well as other server storage I/O performance topics along with benchmarking tools, techniques and trends, check out the collection of links here (Server and Storage I/O Benchmarking and Performance Resources).


Server and Storage I/O Benchmarking 101 for Smarties

Server Storage I/O Benchmarking 101 for Smarties or dummies ;)

server storage I/O trends

This is the first of a series of posts and links to resources on server storage I/O performance and benchmarking (view more and follow-up posts here).

The best I/O is the I/O that you do not have to do, the second best is the one with the least impact as well as low overhead.

server storage I/O performance

Drew Robb (@robbdrew) has a Data Storage Benchmarking Guide article over at Enterprise Storage Forum that provides a good framework and summary quick guide to server storage I/O benchmarking.

Via Drew:

Data storage benchmarking can be quite esoteric in that vast complexity awaits anyone attempting to get to the heart of a particular benchmark.

Case in point: The Storage Networking Industry Association (SNIA) has developed the Emerald benchmark to measure power consumption. This invaluable benchmark has a vast amount of supporting literature. That so much could be written about one benchmark test tells you just how technical a subject this is. And in SNIA’s defense, it is creating a Quick Reference Guide for Emerald (coming soon).

But rather than getting into the nitty-gritty nuances of the tests, the purpose of this article is to provide a high-level overview of a few basic storage benchmarks, what value they might have and where you can find out more. 

Read more here including some of my comments, tips and recommendations.

Drew provides a good summary and overview in his article, which is a great opener for this first post in a series on server storage I/O benchmarking and related resources.

You can think of this series (along with Drew’s article) as server storage I/O benchmarking fundamentals (e.g. 101) for smarties (e.g. non-dummies ;) ).

Note that even if you are not a server, storage or I/O expert, you can still be considered a smarty vs. a dummy if you found the need or interest to read as well as learn more about benchmarking, metrics that matter, tools, technology and related topics.

Server and Storage I/O benchmarking 101

There are different reasons for benchmarking; for example, you might be asked, or want to know, how many IOPS a disk, Solid State Device (SSD), device or storage system can deliver, such as a 15K RPM (revolutions per minute) 146 GB SAS Hard Disk Drive (HDD). Sure, you can go to a manufacturer’s website and look at the speeds and feeds (technical performance numbers); however, are those metrics applicable to your environment’s applications or workload?

You might get higher IOPS with a smaller I/O size on sequential reads vs. random writes, which will also depend on what the HDD is attached to. For example, are you going to attach the HDD to a storage system or appliance with RAID and caching? Are you going to attach the HDD to a PCIe RAID card, or will it be part of a server or storage system? Or are you simply going to put the HDD into a server or workstation and use it as a drive without any RAID or performance acceleration?
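One way to sanity-check IOPS numbers like these is Little's Law, which relates concurrency, latency and throughput; this is a rough rule-of-thumb sketch, and the queue depth and latency figures below are illustrative, not measurements of any particular device.

```python
# Little's Law sketch: average IOPS ~= outstanding I/Os / average latency.
# Illustrative numbers only, not measurements from a specific HDD or SSD.
def estimated_iops(outstanding_ios: int, avg_latency_s: float) -> float:
    return outstanding_ios / avg_latency_s

# e.g. 32 outstanding I/Os at 4 ms average latency:
print(round(estimated_iops(32, 0.004)))  # 8000
```

The same relationship works in reverse: measured IOPS and queue depth imply an average latency, which is one reason latency is a metric that matters.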

What this all means is understanding what it is that you want to benchmark and test, in order to learn what the system, solution, service or specific device can do under different workload conditions.

Some benchmark and related topics include

  • What are you trying to benchmark
  • Why do you need to benchmark something
  • What are some server storage I/O benchmark tools
  • What is the best benchmark tool
  • What to benchmark, how to use tools
  • What are the metrics that matter
  • What is benchmark context why does it matter
  • What are marketing hero benchmark results
  • What to do with your benchmark results
  • Server storage I/O benchmark step test (example of step test results with various workers and workloads)

  • What do the various metrics mean (can we get a side of context with them metrics?)
  • Why look at server CPU if doing storage and I/O networking tests
  • Where and how to profile your application workloads
  • What about physical vs. virtual vs. cloud and software defined benchmarking
  • How to benchmark block DAS or SAN, file NAS, object, cloud, databases and other things
  • Avoiding common benchmark mistakes
  • Tips, recommendations, things to watch out for
  • What to do next


Where to learn more

The following are related links to read more about server (cloud, virtual and physical) storage I/O benchmarking tools, technologies and techniques.

Drew Robb’s benchmarking quick reference guide
Server storage I/O benchmarking tools, technologies and techniques resource page
Server and Storage I/O Benchmarking 101 for Smarties.
Microsoft Diskspd download and Microsoft Diskspd overview (via Technet)
I/O, I/O how well do you know about good or bad server and storage I/Os?
Server and Storage I/O Benchmark Tools: Microsoft Diskspd (Part I and Part II)

Wrap up and summary

We have just scratched the surface when it comes to benchmarking cloud, virtual and physical server storage I/O and networking hardware, software along with associated tools, techniques and technologies. However hopefully this and the links for more reading mentioned above give a basis for connecting the dots of what you already know or enable learning more about workloads, synthetic generation and real-world workloads, benchmarks and associated topics. Needless to say there are many more things that we will cover in future posts (e.g. keep an eye on and bookmark the server storage I/O benchmark tools and resources page here).


Microsoft Diskspd (Part II): Server Storage I/O Benchmark Tools

server storage I/O trends

This is part two of a two-part post pertaining to Microsoft Diskspd that is also part of a broader series focused on server storage I/O benchmarking, performance, capacity planning, tools and related technologies. You can view part one of this post here, along with companion links here.

Microsoft Diskspd StorageIO lab test drive

Server and StorageIO lab

Talking about tools and technologies is one thing; installing and trying them is the next step for gaining experience. So how about some quick hands-on time with Microsoft Diskspd (download your copy here)?

The following commands all specify an I/O size of 8KBytes doing I/O to a 45GByte file called diskspd.dat located on the F: drive. Note that a 45GByte file is on the small side for general performance testing; however, it was used for simplicity in this example. Ideally a larger target storage area (file, partition, device) would be used. On the other hand, if your application uses a small storage device or volume, then tune accordingly.

In this test, the F: drive is an iSCSI RAID protected volume, however you could use other storage interfaces supported by Windows including other block DAS or SAN (e.g. SATA, SAS, USB, iSCSI, FC, FCoE, etc) as well as NAS. Also common to the following commands is using 16 threads and 32 outstanding I/Os to simulate concurrent activity of many users, or application processing threads.
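As a quick sanity check on concurrency, the aggregate queue depth these options imply is simply the thread count times the outstanding I/Os per thread; a minimal sketch:

```python
# Aggregate concurrency implied by Diskspd's -t (threads) and -o (outstanding I/Os per thread)
threads = 16                 # -t16
outstanding_per_thread = 32  # -o32
total_outstanding = threads * outstanding_per_thread
print(total_outstanding)     # 512 concurrent I/Os in flight against the target
```

Keep that number in mind when comparing results: a deeper aggregate queue can raise throughput while also raising per-I/O latency.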
server storage I/O performance
Other parameters common to the following commands include -r for random I/O, a 7200 second (e.g. two-hour) test duration (-d7200), displaying latency (-L), disabling hardware and software caches (-h), and forcing CPU affinity (-a0,1,2,3). Since the test ran on a server with four cores, I wanted to see if I could use those to help keep the threads and storage busy. What varies in the commands below is the percentage of reads vs. writes, as well as the results output file. Some of the workloads below also had the -S option specified to disable OS I/O buffering (to view how buffering helps when enabled or disabled). Depending on the goal, or type of test, validation or workload being run, I would choose to set some of these parameters differently.

diskspd -c45g -b8K -t16 -o32 -r -d7200 -h -w0 -L -a0,1,2,3 F:\diskspd.dat >> SIOWS2012R203_Eiscsi_145_noh_write000.txt

diskspd -c45g -b8K -t16 -o32 -r -d7200 -h -w50 -L -a0,1,2,3 F:\diskspd.dat >> SIOWS2012R203_Eiscsi_145_noh_write050.txt

diskspd -c45g -b8K -t16 -o32 -r -d7200 -h -w100 -L -a0,1,2,3 F:\diskspd.dat >> SIOWS2012R203_Eiscsi_145_noh_write100.txt

diskspd -c45g -b8K -t16 -o32 -r -d7200 -h -S -w0 -L -a0,1,2,3 F:\diskspd.dat >> SIOWS2012R203_Eiscsi_145_noSh_test_write000.txt

diskspd -c45g -b8K -t16 -o32 -r -d7200 -h -S -w50 -L -a0,1,2,3 F:\diskspd.dat >> SIOWS2012R203_Eiscsi_145_noSh_write050.txt

diskspd -c45g -b8K -t16 -o32 -r -d7200 -h -S -w100 -L -a0,1,2,3 F:\diskspd.dat >> SIOWS2012R203_Eiscsi_145_noSh_write100.txt

The following is sample output from the above workload commands.
Microsoft Diskspd sample output
Microsoft Diskspd sample output part 2
Microsoft Diskspd sample output part 3

Note that as with any benchmark, workload test or simulation, your results will vary. In the above, the server, storage and I/O system were not tuned, as the focus was on working with the tool and determining its capabilities. Thus do not focus on the performance results per se, rather on what you can do with Diskspd as a tool to try different things. Btw, fwiw, in the above example, in addition to using an iSCSI target, the Windows 2012 R2 server was a guest on a VMware ESXi 5.5 system.

Where to learn more

The following are related links to read more about server (cloud, virtual and physical) storage I/O benchmarking tools, technologies and techniques.

Drew Robb’s benchmarking quick reference guide
Server storage I/O benchmarking tools, technologies and techniques resource page
Server and Storage I/O Benchmarking 101 for Smarties.
Microsoft Diskspd download and Microsoft Diskspd overview (via Technet)
I/O, I/O how well do you know about good or bad server and storage I/Os?
Server and Storage I/O Benchmark Tools: Microsoft Diskspd (Part I and Part II)

Comments and wrap-up

What I like about Diskspd (Pros)

Reporting includes CPU usage (you can't do server and storage I/O without CPU) along with IOPs (activity), bandwidth (throughput, or the amount of data being moved), per-thread and total results, along with optional reporting. While a GUI would be nice, particularly for beginners, I'm used to setting up scripts for different workloads, so having extensive options for setting up different workloads is welcome. Being associated with a specific OS (e.g. Windows), the CPU affinity and buffer management controls will be handy for some projects.

That Diskspd has the flexibility to use different storage interfaces and types of storage, including files or partitions, should be taken for granted; however, with some tools you can't take things for granted. I like the flexibility to easily specify various I/O sizes, including large 1MByte, 10MByte, 20MByte, 100MByte and 500MByte, to simulate application workloads that do large sequential (or random) activity. I tried some I/O sizes larger than 500MB (e.g. specified by the -b parameter); however, I received various errors including "Could not allocate a buffer bytes for target", which means that Diskspd can only do I/O sizes smaller than that. While not able to do I/O sizes larger than 500MB, this is actually impressive. Several other tools I have used or worked with have I/O size limits down around 10MByte, which makes it difficult to create workloads that do large I/Os (note this is the I/O size, not the number of IOPs).

Oh, something else that should be obvious, however I will state it: Diskspd is free, unlike some industry de-facto standard tools or workload generators that require a fee to get and use.

Where Diskspd could be improved (Cons)

For some users a GUI or configuration wizard would make the tool easier to get started with; on the other hand, I tend to use the command-line capabilities of tools. It would also be nice to specify ranges as part of a single command, such as stepping through an I/O size range (e.g. 4K, 8K, 16K, 1MB, 10MB) as well as read/write percentages along with varying random/sequential mixes. Granted, this can easily be done by having a series of commands; however, I have become spoiled by using other tools such as vdbench.
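In the meantime, that series of commands is easy to script. A minimal sketch that generates (rather than runs) Diskspd command lines stepping through an I/O size range and write percentages; the output file names and the F:\diskspd.dat target are purely illustrative:

```python
# Hypothetical wrapper: generate Diskspd command lines stepping through
# I/O sizes (-b) and write percentages (-w). This only prints the commands;
# feed them to a batch file or subprocess to actually run the workloads.
io_sizes = ["4K", "8K", "16K", "1M", "10M"]  # values for -b
write_pcts = [0, 50, 100]                    # values for -w (0 = all reads)

commands = []
for size in io_sizes:
    for w in write_pcts:
        out = f"results_{size}_w{w:03d}.txt"  # illustrative output file name
        commands.append(
            f"diskspd -c45g -b{size} -t16 -o32 -r -d7200 -h -w{w} -L "
            f"F:\\diskspd.dat >> {out}"
        )

for cmd in commands:
    print(cmd)
```

Each generated line follows the same pattern as the commands shown earlier in this post, so the per-run result files can be compared side by side afterward.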

Summary

Server and storage I/O performance toolbox

Overall I like Diskspd and have added it to my Server Storage I/O workload and benchmark tool-box.

Keep in mind that the best benchmark or workload generation technology tool will be your own application(s) configured to run as close as possible to production activity levels.

However, when that is not possible, an alternative is to use tools that have the flexibility to be configured as close as possible to your application(s) workload characteristics. This means the focus should not be so much on the tool itself, but rather on how flexible a tool is to work for you; granted, the tool needs to be robust.

Having said that, Microsoft Diskspd is a good and extensible tool for benchmarking, simulation, validation and comparisons, however it will only be as good as the parameters and configuration you set it up to use.

Check out Microsoft Diskspd and add it to your benchmark and server storage I/O tool-box like I have done.

Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Green and Virtual IT Data Center Primer

Green and Virtual Data Center Primer

Moving beyond Green Hype and Green washing

Green IT is about enabling efficient, effective and productive information services delivery. There is a growing green gap between green hype messaging or green washing and IT pain point issues, including limits on the availability or rising costs of power, cooling, floor-space, as well as e-waste and environmental health and safety (PCFE). Closing the gap will involve bringing green messaging and rhetoric closer to where IT organizations' pain points are and where budget dollars exist that can address PCFE and other green related issues as a by-product. The green gap will also be narrowed as awareness of broader green related topics coincides with IT data center pain points; in other words, alignment of messaging with IT issues that have or will have budget dollars allocated towards them to sustain business and economic growth via IT resource usage efficiency. Read more here.

There are many aspects to "Green" Information Technology including servers, storage, networks and associated management tools and techniques. The reasons for and focus of "Green IT", including "Green Data Storage", "Green Computing" and related focus areas, are varied, addressing diverse needs, issues and requirements including among others:

  • Power, Cooling, Floor-space, Environmental (PCFE) related issues or constraints
  • Reduction of carbon dioxide (CO2) emissions and other greenhouse gases (GHGs)
  • Business growth and economic sustainability in an environmentally friendly manner
  • Proper disposal or recycling of environmentally harmful retired technology components
  • Reduction or better efficiency of electrical power consumption used for IT equipment
  • Cost avoidance or savings from lower energy fees and cooling costs
  • Support for data center and application consolidation to cut cost and management
  • Enabling growth and enhancements to application service level objectives
  • Maximizing the usage of power and cooling resources available in your region
  • Compliance with local or federal government mandates and regulations
  • Economic sustainability and the ability to support business growth and service improvements
  • General environmental awareness and stewardship to save and protect the earth

While much of the IT industry focuses on CO2 emission footprints, data management software and electrical power consumption, the cooling and ventilation of IT data centers is an area of focus associated with "Green IT" as well as a means to discuss more effective use of electrical energy that can yield rapid results for many environments. Large tier-1 vendors, including HP and IBM among others, who have an IT and data center wide focus, have services designed to do quick assessments as well as detailed analysis and re-organization of IT data center physical facilities to improve air flow and power consumption for more effective cooling of IT technologies including servers, storage, networks and other equipment.

Similar to your own residence, basic steps to improve your cooling effectiveness can lead to using less energy to cut your budget impact, or enable you to do more with the cooling capacity you already have to support growth, acquisitions and/or consolidation initiatives. Vendors are also looking at means and alternatives for cooling IT equipment, ranging from computer-assisted computational fluid dynamics (CFD) software analysis of data center cooling and ventilation to refrigerated cooling racks, some leveraging water or inert liquid cooling.

Various metrics exist, and others are evolving, for measuring, estimating, reporting, analyzing and discussing IT data center infrastructure resource topics including servers, storage, networks, facilities and associated software management tools from a power, cooling and green environmental standpoint. The importance of metrics is to focus on the larger impact of a piece of IT equipment, including its cost and energy consumption, factoring in cooling and other hosting or site environmental costs. Naturally energy costs and CO2 (carbon offsets) will vary by geography and region along with the type of electrical power being used (coal, natural gas, nuclear, wind, thermal, solar, etc.) and other factors that should be kept in perspective as part of the big picture.
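To make the metrics discussion concrete, one widely used facility metric is PUE (Power Usage Effectiveness): total facility power divided by IT equipment power, where lower (closer to 1.0) is better. A minimal sketch of the arithmetic; all input numbers below are illustrative assumptions, not measurements:

```python
# Illustrative PUE and annual energy cost calculation (example values only)
it_load_kw = 100.0          # assumed power drawn by servers, storage, network
facility_total_kw = 170.0   # assumed IT load plus cooling, power distribution, lighting
pue = facility_total_kw / it_load_kw  # 1.7; lower (toward 1.0) means less overhead

hours_per_year = 24 * 365
cost_per_kwh = 0.10         # assumed electricity rate in $/kWh; varies by region
annual_cost = facility_total_kw * hours_per_year * cost_per_kwh
print(f"PUE={pue:.2f} annual energy cost=${annual_cost:,.0f}")
```

The same arithmetic shows why regional energy rates and cooling overhead belong in any total-cost comparison of IT equipment.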

Consequently your view of and needs or interests around "Green" IT may be from an electrical power conservation perspective, to reduce your power consumption or to adapt to a given power footprint or ceiling. Your focus around "Green" data centers and green storage may be from a carbon savings standpoint, the proper disposition of old and retired IT equipment, or a data center cooling standpoint. Another area of focus may be that you are looking to cut your data footprint to align with your power, cooling and green footprint while enhancing application and data service delivery to your customers.

Where to learn more

The following are useful links related to efficient, effective, productive, flexible, scalable and resilient IT data centers, along with server storage I/O networking hardware and software that supports cloud and virtual green data centers.

Various IT industry vendor and service provider links
Green and Virtual Data Center: Productive Economical Efficient Effective Flexible
Green and Virtual Data Center links
Are large storage arrays dead at the hands of SSD?
Closing the Green Gap
Energy efficient technology sales depend on the pitch

What this all means

The result of a green and virtual data center is that of a flexible, agile, resilient, scalable information factory that is also economical, productive and efficient as well as sustainable.

Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Green and Virtual Data Center: Productive Economical Efficient Effective Flexible

Green and Virtual Data Center

A Green and Virtual IT Data Center (e.g. an information factory) means an environment comprising:

  • Habitat for technology or physical infrastructure (e.g. physical data center, yours, co-lo, managed service or cloud)
  • Power, cooling, communication networks, HVAC, smoke and fire suppression, physical security
  • IT data information infrastructure (e.g. hardware, software, valueware, cloud, virtual, physical, servers, storage, network)
  • Data Center Infrastructure Management (DCIM) along with IT Service Management (ITSM) software defined management tools
  • Tools for monitoring, resource tracking and usage, reporting, diagnostics, provisioning and resource orchestration
  • Portals and service catalogs for automated, user initiated and assisted operation or access to IT resources
  • Processes, procedures, best-practices, work-flows and templates (including data protection with HA, BC, BR, DR, backup/restore, logical and physical security)
  • Metrics that matter for management insight and awareness
  • People and skill sets among other items

Green and Virtual Data Center Resources

Click here to learn about "The Green and Virtual Data Center" book (CRC Press) for enabling efficient, productive IT data centers. This book covers cloud, virtualization, servers, storage, networks, software, facilities and associated management topics, technologies and techniques including metrics that matter. This book by industry veteran IT advisor and author Greg Schulz is the definitive guide for enabling economic efficiency and productive next generation data center strategies.

Intel recommended reading
Publisher: CRC Press – Taylor & Francis Group
By Greg P. Schulz of StorageIO www.storageio.com
ISBN-10: 1439851739 and ISBN-13: 978-1439851739
Hardcover * 370 pages * Over 100 illustrations, figures and tables

Read more here and order your copy here. Also check out Cloud and Virtual Data Storage Networking (CRC Press), a new book by Greg Schulz.

Productive Efficient Effective Economical Flexible Agile and Sustainable

Green hype and green washing may be on the endangered species list and going away; however, green IT for servers, storage, networks and facilities, as well as related software and management techniques that address energy efficiency, including power and cooling along with e-waste and environmental health and safety related issues, are topics that won't be going away anytime soon. There is a growing green gap between green hype messaging or green washing and IT pain point issues, including limits on the availability or rising costs of power, cooling, floor-space, as well as e-waste and environmental health and safety (PCFE). Closing the gap will involve bringing green messaging and rhetoric closer to where IT organizations' pain points are and where budget dollars exist that can address PCFE and other green related issues as a by-product.

The green gap will also be narrowed as awareness of broader green related topics coincides with IT data center pain points; in other words, alignment of messaging with IT issues that have or will have budget dollars allocated towards them to sustain business and economic growth via IT resource usage efficiency. Read more here.

Where to learn more

The following are useful links related to efficient, effective, productive, flexible, scalable and resilient IT data centers, along with server storage I/O networking hardware and software that supports cloud and virtual green data centers.

Various IT industry vendor and service provider links
Green and Virtual Data Center Primer
Green and Virtual Data Center links
Are large storage arrays dead at the hands of SSD?
Closing the Green Gap
Energy efficient technology sales depend on the pitch
EPA Energy Star for Data Center Storage Update
EPA Energy Star for data center storage draft 3 specification
Green IT Confusion Continues, Opportunities Missed! 
Green IT deferral blamed on economic recession might be result of green gap
How much SSD do you need vs. want?
How to reduce your Data Footprint impact (Podcast) 
Industry trend: People plus data are aging and living longer
In the data center or information factory, not everything is the same
More storage and IO metrics that matter
Optimizing storage capacity and performance to reduce your data footprint 
Performance metrics: Evaluating your data storage efficiency
PUE, Are you Managing Power, Energy or Productivity?
Saving Money with Green Data Storage Technology
Saving Money with Green IT: Time To Invest In Information Factories 
Shifting from energy avoidance to energy efficiency
SNIA Green Storage Knowledge Center
Speaking of speeding up business with SSD storage
SSD and Green IT moving beyond green washing
Storage Efficiency and Optimization: The Other Green
Supporting IT growth demand during economic uncertain times
The Green and Virtual Data Center Book (CRC Press, Intel Recommended Reading)
The new Green IT: Efficient, Effective, Smart and Productive 
The other Green Storage: Efficiency and Optimization 
What is the best kind of IO? The one you do not have to do

Watch for more links and resources to be added soon.

What this all means

The result of a green and virtual data center is that of a flexible, agile, resilient, scalable information factory that is also economical, productive and efficient as well as sustainable.

Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Green and Virtual Data Center Links

Updated 10/25/2017

Green and Virtual IT Data Center Links

Moving beyond Green Hype and Green washing

Green hype and green washing may be on the endangered species list and going away; however, green IT for servers, storage, networks and facilities, as well as related software and management techniques that address energy efficiency, including power and cooling along with e-waste and environmental health and safety related issues, are topics that won't be going away anytime soon.

There is a growing green gap between green hype messaging or green washing and IT pain point issues including limits on availability or rising costs of power, cooling, floor-space as well as e-waste and environmental health and safety (PCFE).

Closing the gap will involve bringing green messaging and rhetoric closer to where IT organizations' pain points are and where budget dollars exist that can address PCFE and other green related issues as a by-product. The green gap will also be narrowed as awareness of broader green related topics coincides with IT data center pain points; in other words, alignment of messaging with IT issues that have or will have budget dollars allocated towards them to sustain business and economic growth via IT resource usage efficiency. Read more here.

Enabling Effective Productive Efficient Economical Flexible Scalable Resilient Information Infrastructures

The following are useful links related to efficient, effective, productive, flexible, scalable and resilient IT data centers, along with server storage I/O networking hardware and software that supports cloud and virtual green data centers.

Various IT industry vendors and other links

Via StorageIOblog – Happy Earth Day 2016 Eliminating Digital and Data e-Waste

Green and Virtual Data Center Primer
Green and Virtual Data Center: Productive Economical Efficient Effective Flexible
Are large storage arrays dead at the hands of SSD?
Closing the Green Gap
Energy efficient technology sales depend on the pitch
EPA Energy Star for Data Center Storage Update
EPA Energy Star for data center storage draft 3 specification
Green IT Confusion Continues, Opportunities Missed! 
Green IT deferral blamed on economic recession might be result of green gap
How much SSD do you need vs. want?
How to reduce your Data Footprint impact (Podcast) 
Industry trend: People plus data are aging and living longer
In the data center or information factory, not everything is the same
More storage and IO metrics that matter
Optimizing storage capacity and performance to reduce your data footprint 
Performance metrics: Evaluating your data storage efficiency
PUE, Are you Managing Power, Energy or Productivity?
Saving Money with Green Data Storage Technology
Saving Money with Green IT: Time To Invest In Information Factories 
Shifting from energy avoidance to energy efficiency
SNIA Green Storage Knowledge Center
Speaking of speeding up business with SSD storage
SSD and Green IT moving beyond green washing
Storage Efficiency and Optimization: The Other Green
Supporting IT growth demand during economic uncertain times
The Green and Virtual Data Center Book (CRC Press, Intel Recommended Reading)
The new Green IT: Efficient, Effective, Smart and Productive 
The other Green Storage: Efficiency and Optimization 
What is the best kind of IO? The one you do not have to do

Intel recommended reading
Click here to learn about "The Green and Virtual Data Center" book (CRC Press) for enabling efficient, productive IT data centers. This book covers cloud, virtualization, servers, storage, networks, software, facilities and associated management topics, technologies and techniques including metrics that matter. This book by industry veteran IT advisor and author Greg Schulz is the definitive guide for enabling economic efficiency and productive next generation data center strategies. Read more here and order your copy here. Also check out Cloud and Virtual Data Storage Networking (CRC Press), a new book by Greg Schulz.

White papers, analyst reports and perspectives

Business benefits of data footprint reduction (archiving, compression, de-dupe)
Data center I/O and performance issues – Server I/O and storage capacity gap
Analysis of EPA Report to Congress (Law 109-431)
The Many Faces of MAID Storage Technology
Achieving Energy Efficiency with FLASH based SSD
MAID 2.0: Energy Savings without Performance Compromises

Articles, Tips, Blogs, Webcasts and Podcasts

AP – SNIA Green Emerald Program and measurements
AP – Southern California heat wave strains electrical system
Ars Technica – EPA: Power usage in data centers could double by 2011
Ars Technica – Meet the climate savers: Major tech firms launch war on energy-inefficient PCs – Article
Askageek.com – Buying an environmental friendly laptop – November 2008
Baseline – Examining Energy Consumption in the Data Center
Baseline – Burts Bees: What IT Means When You Go Green
Bizcovering – Green architecture for the masses
Broadstuff – Are Green 2.0 and Enterprise 2.0 Incompatible?
Business Week – CEO Guide to Technology
Business Week – Computers’ elusive eco factor
Business Week – Clean Energy – Its Getting Affordable
Byte & Switch – Keeping it Green This Summer – Don’t be "Green washed"
Byte & Switch – IBM Sees Green in Energy Certificates
Byte & Switch – Users Search for power solutions
Byte & Switch – DoE issues Green Storage Warning
CBR – The Green Light for Green IT
CBR – Big boxes make greener data centers
CFO – Power Scourge
Channel Insider – A 12 Step Program to Dispose of IT Equipment
China.org.cn – China publishes Energy paper
CIO – Green Storage Means Money Saved on Power
CIO – Data center designers share secrets for going green
CIO – Best Place to Build a Data Center in North America
CIO Insight – Clever Marketing or the Real Thing?
Cleantechnica – Cooling Data Centers Could Prevent Massive Electrical Waste – June 2008
Climatebiz – Carbon Calculators Yield Spectrum of Results: Study
CNET News – Linux coders tackle power efficiency
CNET News – Research: Old data centers can be nearly as ‘green’ as new ones
CNET News – Congress, Greenpeace move on e-waste
CNN Money – A Green Collar Recession
CNN Money – IBM creates alliance with industry leaders supporting new data center standards
Communication News – Utility bills key to greener IT
Computerweekly – Business case for green storage
Computerweekly – Optimising data centre operations
Computerweekly – Green still good for IT, if it saves money
Computerweekly – Meeting the Demands for storage
Computerworld – Wells Fargo Free Data Center Cooling System
Computerworld – Seven ways to get green and save money
Computerworld – Build your data center here: The most energy-efficient locations
Computerworld – EPA: U.S. needs more power plants to support data centers
Computerworld – GreenIT: A marketing ploy or new technology?
Computerworld – Gartner Criticizes Green Grid
Computerworld – IT Skills no longer sufficient for data center execs.
Computerworld – Meet MAID 2.0 and Intelligent Power Management
Computerworld – Feds to offer energy ratings on servers and storage
Computerworld – Greenpeace still hunting for truly green electronics
Computerworld – How to benchmark data center energy costs
ComputerworldUK – Datacenters at risk from poor governance
ComputerworldUK – Top IT Leaders Back Green Survey
ComputerworldMH – Lean and Green
CTR – Strategies for enhancing energy efficiency
CTR – Economies of Scale – Green Data Warehouse Appliances
Datacenterknowledge – Microsoft to build Illinois datacenter
Data Center Strategies – Storage The Next Hot Topic
Earthtimes – Fujitsu installs hydrogen fuel cell power
eChannelline – IBM Goes Green(er)
Ecoearth.info – California Moves To Speed Solar, Wind Power Grid Connections
Ecogeek – Solar power company figures they can power 90% of America
Economist – Cool IT
Electronic Design – How many watts in that Gigabyte
eMazzanti – Desktop virtualization movement creeping into customer sites
ens-Newswire – Western Governors Ask Obama for National Green Energy Plan
Environmental Leader – Best Place to Build an Energy Efficient Data Center
Environmental Leader – New Guide Helps Advertisers Avoid Greenwash Complaints
Enterprise Storage Forum – Power Struggles Take Center Stage at SNW
Enterprise Storage Forum – Pace Yourself for Storage Power & Cooling Needs
Enterprise Storage Forum – Storage Power and Cooling Issues Heat Up – StorageIO Article
Enterprise Storage Forum – Score Savings With A Storage Power Play
Enterprise Storage Forum – I/O, I/O, Its off to Virtual Work I Go
Enterprise Storage Forum – Not Just a Flash in the Pan – Various SSD options
Enterprise Storage Forum – Closing the Green Gap – Article August 2008
EPA Report to Congress and Public Law 109-431 – Reports & links
eWeek – Saving Green by being Green
eWeek – ‘No Cooling Necessary’ Data Centers Coming?
eWeek – How the ‘Down’ Macroeconomy Will Impact the Data Storage Sector
ExpressComputer – In defense of Green IT
ExpressComputer – What data center crisis
Forbes – How to Build a Quick Charging Battery
GCN – Sun launches eco data center
GreenerComputing – New Code of Conduct to Establish Best Practices in Green Data Centers
GreenerComputing – Silicon valley’s green detente
GreenerComputing – Majority of companies plan to green their data centers
GreenerComputing – Citigroup to spend $232M on Green Data Center
GreenerComputing – Chicago and Quincy, WA Top Green Data Center Locations
GreenerComputing – Using airside economizers to chill data center cooling bills
GreenerComputing – Making the most of asset disposal
GreenerComputing – Greenpeace vendor rankings
GreenerComputing – Four Steps to Improving Data Center Efficiency without Capital Expenditures
GreenerComputing – Enabling a Green and Virtual Data Center
Green-PC – Strategic Steps Down the Green Path
Greeniewatch – BBC news chiefs attack plans for climate change campaign
Greeniewatch – Warmest year predictions and data that has not yet been measured
GovernmentExecutive – Public Private Sectors Differ on "Green" Efforts
HPC Wire – How hot is your code
Industry Standard – Why green data centers mean partner opportunities
InformationWeek – It could be 15 years before we know what is really green
InformationWeek – Beyond Server Consolidation
InformationWeek – Green IT Beyond Virtualization: The Case For Consolidation
InfoWorld – Sun celebrates green datacenter innovations
InfoWorld – Tech’s own datacenters are their green showrooms
InfoWorld – 2007: The Year in Green
InfoWorld – Green Grid Announces Tech Forum in Feb 2008
InfoWorld – SPEC seeds future green-server benchmarks
InfoWorld – Climate Savers green catalog proves un-ripe
InfoWorld – Forester: Eco-minded activity up among IT pros
InfoWorld – Green ventures in Silicon Valley, Mass reaped most VC cash in ’07
InfoWorld – Congress misses chance to see green-energy growth
InfoWorld – Unisys pushes green envelope with datacenter expansion
InfoWorld – No easy green strategy for storage
Internet News – Storage Technologies for a Slowing Economy
Internet News – Economy will Force IT to Transform
ITManagement – Green Computing, Green Revenue
itnews – Data centre chiefs dismiss green hype
itnews – Australian Green IT regulations could arrive this year
IT Pro – SNIA Green storage metrics released
ITtoolbox – MAID discussion
Linux Power – Saving power with Linux on Intel platforms
MSNBC – Microsoft to build data center in Ireland
National Post – Green technology at the L.A. Auto Show
Network World – Turning the datacenter green
Network World – Color Interop Green
Network World – Green not helpful word for setting environmental policies
NewScientistEnvironment – Computer servers as bad for climate as SUVs
Newser – Texas commission approves nation’s largest wind power project
New Yorker – Big Foot: In measuring carbon emissions, it’s easy to confuse morality and science
NY Times – What the Green Bubble Will Leave Behind
PRNewswire – Al Gore and Cisco CEO John Chambers to debate climate change
Processor – More than just monitoring
Processor – The new data center: What’s hot in Data Center physical infrastructure:
Processor – Liquid Cooling in the Data Center
Processor – Curbing IT Power Usage
Processor – Services To The Rescue – Services Available For Today’s Data Centers
Processor – Green Initiatives: Hire A Consultant?
Processor – Energy-Saving Initiatives
Processor – The EPA’s Low Carbon Campaign
Processor – Data Center Power Planning
SAN Jose Mercury – Making Data Centers Green
SDA-Asia – Green IT still a priority despite Credit Crunch
SearchCIO – EPA report gives data centers little guidance
SearchCIO – Green IT Strategies Could Lead to hefty ROIs
SearchCIO – Green IT In the Data Center: Plenty of Talk, not much Walk
SearchCIO – Green IT Overpitched by Vendors, CIOs beware
SearchDataCenter – Study ranks cheapest places to build a data center
SearchDataCenter – Green technology still ranks low for data center planners
SearchDataCenter – Green Data Center: Energy Efficient Computing in the 21st Century
SearchDataCenter – Green Data Center Advice: Is LEED Feasible
SearchDataCenter – Green Data Centers Tackle LEED Certification
SearchDataCenter – PG&E invests in data center efficiency
SearchDataCenter – A solar powered datacenter
SearchSMBStorage – Improve your storage energy efficiency
SearchSMBStorage – SMB capacity planning: Focusing on energy conservation
SearchSMBStorage – Data footprint reduction for SMBs
SearchSMBStorage – MAID & other energy-saving storage technologies for SMBs
SearchStorage – How to increase your storage energy efficiency
SearchStorage – Is storage now top energy hog in the data center
SearchStorage – Storage eZine: Turning Storage Green
SearchStorage – The Green Storage Gap
SearchStorageChannel – Green Data Storage Projects
Silicon.com – The greening of IT: Cooling costs
SNIA – SNIA Green Storage Overview
SNIA – Green Storage
SNW – Beyond Green-wash
SNW Spring 2008 Beyond Green-wash
State.org – Why Texas Has Its Own Power Grid
StorageDecisions – Different Shades of Green
Storage Magazine – Storage still lacks energy metrics
StorageIOblog – Posts pertaining to Green, power, cooling, floor-space, EHS (PCFE)
Storage Search – Various postings, news and topics pertaining to Green IT
Technology Times – Revealed: the environmental impact of Google searches
TechTarget – Data center power efficiency
TechTarget – Tip for determining power consumption
Techworld – Inside a green data center
Techworld – Box reduction – Low hanging green datacenter fruit
Techworld – Datacenter used to heat swimming pool
Theinquirer – Spansion and Virident flash server farms
Theinquirer – Storage firms worry about energy efficiency – How green is the valley
TheRegister – Data Centre Efficiency, the good, the bad and the way too hot
TheRegister – Server makers snub whalesong for serious windmill abuse
TheRegister – Green data center threat level: Not Green
The Standard – Growing cynicism around going Green
ThoughtPut – Energy Central
Thoughtput – Power, Cooling, Green Storage and related industry trends
Wallstreet Journal – Utilities Amp Up Push To Slash Energy Use
Wallstreet Journal – The IT in Green Investing
Wallstreet Journal – Tech’s Energy Consumption on the Rise
Washingtonpost – Texas approves major new wind power project
WhatPC – Green IT: It doesn’t have to cost the earth
WHIRnews – SingTel building green data center
Wind-watch.org – Loss of wind causes Texas power grid emergency
WyomingNews – Overcoming Greens Stereotype
Yahoo – Washington Senate Unveils Green Job Plan
ZDnet – Will supercomputer speeds hit a plateau?
Are data centers causing climate change

News and Press Releases

Business Wire – The Green and Virtual Data Center
Enterprise Storage Forum – Intel and HGST (Hitachi) partner on FLASH SSD
PCworld – Intel and HP describe Green Strategy
DoE – To Invest Approximately $1.3 Billion to Commercialize CCS Technology
Yahoo – Shell Opens Los Angeles’ First Combined Hydrogen and Gasoline Station
DuPont – DuPont Projects Save Enough Energy to Power 25,000 Homes
Gartner – Users Are Becoming Increasingly Confused About the Issues and Solutions Surrounding Green IT

Websites and Tools

Various power, cooling, emissions and device configuration tools and calculators
Solar Action Alliance web site
SNIA Emerald program
Carbon Disclosure Project
The Chicago Climate Exchange
Climate Savers
Data Center Decisions
Electronic Industries Alliance (EIA)
EMC – Digital Life Calculator
Energy Star
Energy Star Data Center Initiatives
Greenpeace – Technology ranking website also here
GlobalActionPlan
KyotoPlanet
LBNL High Tech Data centers
Millicomputing
RoHS & WEE News
Storage Performance Council (SPC)
SNIA Green Technical Working Group
SPEC
Transaction Processing Council (TPC)
The Green Grid
The Raised Floor
Terra Pass Carbon Offset Credits – Website with CO2 calculators
Energy Information Administration – EIA (US and International Electrical Information)
U.S. Department of Energy and related information
U.S. DOE Energy Efficient Industrial Programs
U.S. EPA server and storage energy topics
Zerofootprint – Various "Green" and environmental related links and calculators

Vendor Centric and Marketing Website Links and tools

Vendors and organizations have different types of calculators, some focused on power, cooling, floor space, carbon offsets or emissions, ROI, TCO and other IT data center infrastructure resource management. The following is an evolving list and by no means definitive, even for a particular vendor, as different manufacturers may have multiple calculators for different product lines or areas of focus.

Brocade – Green website
Cisco – Green and Environmental websites here, here and here
Dell – Green website
EMC – EMC Energy, Power and Cooling Related Website
HDS – How to be green – HDS Positioning White Paper
HP – HP Green Website
IBM – Green Data Center – IBM Positioning White Paper
IBM – Green Data Center for Education – IBM Positioning White Paper
Intel – What is an Efficient Data Center and how do I measure it?
LSI – Green site and white paper
NetApp – Press Release and related information
Sun – Various articles and links
Symantec – Global 2000 Struggle to Adopt "Green" Data Centers – Announcement of Survey results
ACTON
Adinfa
APC
Australian Conservation Foundation
Avocent
BBC
Brocade
Carbon Credit Calculator UK
Carbon Footprint Site
Carbon Planet
Carbonify
CarbonZero
Cassatt
CO2 Stats Site
Copan
Dell
DirectGov UK Acton
Diesel Service & Supply Power Calculator & Converter
Eaton Powerware
Ecobusinesslinks
Ecoscale
EMC Power Calculator
EMC Web Power Calculator
EMC Digital Life Calculator
EPA Power Profiler
EPA Related Tools
EPEAT
Google UK Green Footprint
Green Grid Calculator
HP and more here
HVAC Calculator
IBM
Logicalis
Kohler Power (Business and Residential)
Micron
MSN Carbon Footprint Calculator
National Wildlife Foundation
NEF UK
NetApp
Rackwise
Platespin
Safecom
Sterling Planet
Sun and more here and here and here
Tandberg
TechRepublic
TerraPass Carbon Offset Credits
Thomas Kreen AG
Toronto Hydro Calculator
80 Plus Calculator
VMware
42u Green Grid PUE DCiE calculator
42u energy calculator

Green and Virtual Tools

What’s your power, cooling, floor space, energy, environmental or green story?

What’s your power, cooling, floor space, energy, environmental or green story? Do you have questions or want to learn more about energy issues pertaining to IT data center and data infrastructure topics? Do you have a solution, technology or success story that you would like to share with us pertaining to data storage and server I/O energy optimization strategies? Do you need assistance in developing, validating or reviewing your strategy or story? Contact us at info@storageio.com or 651-275-1563 to learn more about green data storage and server I/O, or to schedule a briefing to tell us about your energy efficiency and effectiveness story pertaining to IT data centers and data infrastructures.

Disclaimer and note: URLs submitted for inclusion on this site will be reviewed for consideration and for being in generally accepted good taste in regard to the theme of this site. Best effort has been made to validate and verify the URLs that appear on this page and website, however they are subject to change. The author and/or maintainers of this page and web site make no endorsement of, and assume no responsibility for, the URLs and their content listed on this page.

Green and Virtual Metrics

Chapter 5 "Measurement, Metrics, and Management of IT Resources" in the book "The Green and Virtual Data Center" (CRC Press) takes a look at the importance of being able to measure and monitor to enable effective management and utilization of IT resources across servers, storage, I/O networks, software, hardware and facilities.

There are many different points of interest for collecting metrics in an IT data center for servers, storage, networking and facilities, along with various perspectives. Data center personnel have varied interests, from a facilities perspective to a resource (server, storage, networking) usage and effectiveness perspective, for normal use as well as for planning purposes or comparison when evaluating new technology. Vendors have different uses for metrics during R&D, QA testing and marketing or sales campaigns as well as ongoing service and support. Industry trade groups including 80 Plus, SNIA and The Green Grid, along with government programs including EPA Energy Star, are working to define and establish applicable metrics pertinent to green and virtual data centers.

Acronym | Description | Comment
DCiE | Data center infrastructure efficiency = (IT equipment power / Total facility power) x 100 | Shows what percentage of a data center's power is consumed by IT equipment
DCPE | Data center performance efficiency = Effective IT workload / Total facility power | Shows how effectively a data center consumes power to produce a given level of service or work, such as energy per transaction or per business function performed
PUE | Power usage effectiveness = Total facility power / IT equipment power | Inverse of DCiE
Kilowatts (kW) | Watts / 1,000 | One thousand watts
Annual kWh | kW x 24 x 365 | kWh used in one year
Megawatts (MW) | kW / 1,000 | One thousand kW
BTU/hour | Watts x 3.413 | Heat generated in an hour from using energy, in British Thermal Units; 12,000 BTU/hour equates to about 1 ton of cooling
kWh | 1,000 watt-hours | The energy of 1,000 watts used for one hour
Watts | Amps x Volts (e.g. 12 amps x 12 volts = 144 watts) | Unit of electrical power
Watts | BTU/hour x 0.293 | Converts BTU/hr to watts
Volts | Watts / Amps (e.g. 144 watts / 12 amps = 12 volts) | The amount of force on electrons
Amps | Watts / Volts (e.g. 144 watts / 12 volts = 12 amps) | The flow rate of electricity
Volt-Amperes (VA) | Volts x Amps | Power is sometimes expressed in volt-amperes
kVA | Volts x Amps / 1,000 | Number of kilovolt-amperes
kW | kVA x power factor | Power factor is the efficiency of a piece of equipment's use of power
kVA | kW / power factor | Kilovolt-amperes
U | 1U = 1.75" | EIA unit describing the height of equipment in racks

 

Activity / Watt | Amount of work accomplished per unit of energy consumed, such as IOPS, transactions or bandwidth per watt | Indicates how much work is done and how efficiently energy is used to accomplish it. Applies to active workloads or actively used, frequently accessed storage and data. Examples include IOPS per watt, bandwidth per watt, transactions per watt, and users or streams per watt. Activity per watt should be used in conjunction with other metrics, such as capacity supported per watt and total watts consumed, for a representative picture.

IOPS / Watt | Number of I/O operations (or transactions) / energy (watts) | Indicator of how effectively energy is being used to perform a given amount of work. The work could be I/Os, transactions, throughput or another indicator of application activity, for example SPC-1 per watt, SPEC per watt, TPC per watt or transactions per watt.

Bandwidth / Watt | GBps, TBps or PBps / watt | Amount of data transferred or moved per second per unit of energy consumed. Often confused with capacity per watt, given that both bandwidth and capacity reference GBytes, TBytes or PBytes.

Capacity / Watt | GB, TB or PB (storage capacity space) / watt | Indicator of how much capacity (space) is supported in a given configuration or footprint per watt of energy. For inactive, off-line or archive data, capacity per watt can be an effective gauge; for active workloads, activity per watt also needs to be considered for a representative indicator of how energy is being used.

MHz / Watt | Processor performance / energy (watts) | Indicator of how effectively energy is being used by a CPU or processor.

Carbon Credit | Carbon offset credit | Offset credits that can be bought and sold to offset your CO2 emissions.

CO2 Emission | Average 1.341 lbs per kWh of electricity generated | The average amount of carbon dioxide (CO2) emitted from generating a kWh of electricity.
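The unit conversions in the table above can be sanity-checked with a few lines of Python. This is a quick sketch; the function names are my own shorthand, not from the book.

```python
# Sanity check of the electrical and cooling conversions in the table above.
# Function names are my own shorthand, not from the book.

def watts_from_amps_volts(amps, volts):
    """Electrical power: amps x volts."""
    return amps * volts

def watts_to_btu_per_hour(watts):
    """Heat generated per hour: watts x 3.413."""
    return watts * 3.413

def btu_per_hour_to_watts(btu_hr):
    """Inverse conversion: BTU/hr x 0.293."""
    return btu_hr * 0.293

def annual_kwh(kw):
    """Energy used in one year by a constant load: kW x 24 x 365."""
    return kw * 24 * 365

def kw_from_kva(kva, power_factor):
    """Real power from apparent power: kVA x power factor."""
    return kva * power_factor

print(watts_from_amps_volts(12, 12))       # 144 watts, matching the example above
print(round(watts_to_btu_per_hour(3516)))  # ~12,000 BTU/hr, about 1 ton of cooling
print(annual_kwh(1.0))                     # 8760.0 kWh per year for a steady 1 kW draw
```

Note that the round trip watts to BTU/hr and back is only approximate, since 3.413 and 0.293 are rounded constants.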

Various power, cooling, floor space and green storage or IT-related metrics

Metrics include data center infrastructure efficiency (DCiE), via The Green Grid, which is an indicator ratio of IT data center energy efficiency defined as IT equipment power (servers, disk and tape storage, networking switches, routers, printers, etc.) / total facility power x 100 (for a percentage). For example, if the sum of all IT equipment energy usage were 1,500 kilowatt-hours (kWh) per month while total facility power (including UPS, energy switching, power conversion and filtering, cooling and associated infrastructure as well as the IT equipment) were 3,500 kWh, the DCiE would be (1,500 / 3,500) x 100 = 43%. DCiE can thus be used as a ratio to show that in this scenario IT equipment accounts for about 43% of the energy consumed by the data center, with the remaining 57% of electrical energy consumed by cooling, conversion, conditioning and lighting.

Power usage effectiveness (PUE) is the indicator ratio of total energy consumed by the data center to energy used to operate IT equipment. PUE is defined as total facility power / IT equipment energy consumption. Using the above scenario, PUE = 2.333 (3,500 / 1,500), which means that a server requiring 100 watts of power would actually require (2.333 x 100) 233.3 watts of energy including both direct power and cooling. Similarly, a storage system requiring 1,500 kWh of energy would consume (1,500 x 2.333) 3,499.5 kWh of electrical power including cooling.

Another metric with the potential to be meaningful is data center performance efficiency (DCPE), which takes into consideration how much useful and effective work is performed by the IT equipment and data center per unit of energy consumed. DCPE is defined as useful work / total facility power, an example being the number of transactions processed using servers, networks and storage divided by the energy needed to power and cool the data center equipment. A relatively easy and straightforward implementation of DCPE is an IOPS-per-watt measurement that looks at how many IOPS can be performed (regardless of size or type, such as reads or writes) per unit of energy, in this case watts.

DCPE = Useful work / Total facility power, for example IOPS per watt of energy used

DCiE = IT equipment energy / Total facility power = 1 / PUE

PUE = Total facility energy / IT equipment energy

IOPS per Watt = Number of IOPS (or bandwidth) / energy used by the storage system
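The formulas above can be sketched in Python, using the worked example of 1,500 kWh of IT equipment energy in a 3,500 kWh facility. The function names are mine, not from the book.

```python
# Relationships between DCiE, PUE and DCPE as defined above.
# Function names are my own; the figures reproduce the worked example in the text.

def dcie(it_power, total_facility_power):
    """DCiE as a percentage: (IT equipment power / total facility power) x 100."""
    return (it_power / total_facility_power) * 100

def pue(it_power, total_facility_power):
    """PUE: total facility power / IT equipment power (the inverse of DCiE)."""
    return total_facility_power / it_power

def dcpe(useful_work, total_facility_power):
    """DCPE, e.g. IOPS (or transactions) per watt of facility power."""
    return useful_work / total_facility_power

it_kwh, facility_kwh = 1_500, 3_500
print(round(dcie(it_kwh, facility_kwh)))          # 43 (percent)
print(round(pue(it_kwh, facility_kwh), 3))        # 2.333
# A 100 watt server effectively draws PUE x 100 watts once cooling is included:
print(round(pue(it_kwh, facility_kwh) * 100, 1))  # 233.3
```

Since DCiE and PUE are reciprocals, multiplying the two (with DCiE expressed as a fraction rather than a percentage) always yields 1, a handy consistency check on measured numbers.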

The importance of these numbers and metrics is to focus on the larger impact of a piece of IT equipment, including its cost and energy consumption, factoring in cooling and other hosting or site environmental costs. Naturally, energy costs and CO2 (carbon offsets) will vary by geography and region along with the type of electrical power being used (coal, natural gas, nuclear, wind, thermal, solar, etc.) and other factors that should be kept in perspective as part of the big picture. Learn more in Chapter 5 "Measurement, Metrics, and Management of IT Resources" in the book "The Green and Virtual Data Center" (CRC) and in the book Cloud and Virtual Data Storage Networking (CRC).


What this all means

The result of a green and virtual data center is a flexible, agile, resilient and scalable information factory that is also economical, productive, efficient and sustainable.

Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Server storage I/O Intel NUC nick knack notes – First impressions

Storage I/O trends


This is the first of a two-part (part II here) series on my experiences (and impressions) using an Intel NUC (a 4th generation model) for various things spanning cloud, virtual, physical and software-defined server storage I/O networking.

The NUC has been around for a few years and continues to evolve, and I recently bought my first one (a 4th generation model) to join some other servers that I have. My reason for getting a NUC is to use it as a simple low-power platform to run different software, including bare-metal operating systems, hypervisors, and cloud, virtual and software-defined server storage and networking applications that might otherwise run on an old laptop or mini-tower.

Intel® NUC with Intel® Core™ i5 Processor and 2.5-Inch Drive Support (NUC5i5RYH) via Intel.com

Introducing Intel Next Unit Computing aka NUC

For those not familiar, NUC is a series of products from Intel called Next Unit of Computing that offer an alternative to traditional mini-desktops or even laptops and notebooks. There are several different NUC models available including the newer 5th generation models (click here to see various models and generations). The NUCs are simple, small units of computing with an Intel processor and room for your choice of memory, persistent storage (e.g. Hard Disk Drive (HDD) or flash Solid State Device (SSD)), networking, video, audio and other peripheral device attachments.

Software (not supplied) is defined by what you choose to use, such as a Windows or *nix operating system; a VMware ESXi, Microsoft Hyper-V, KVM or Xen hypervisor; or some other application. The base NUC package includes front and rear ports for attaching various devices. In terms of functionality, think of a laptop without a keyboard or video screen, or a small headless (e.g. no monitor) mini-tower desktop workstation PC.

Which NUC to buy?

If you need to be the first with anything new, then jump directly to the recently released 5th generation models.

On the other hand, if you are looking for a bargain, there are some good deals on 4th generation or older models. Likewise, the processor speed and features you need along with your available budget will direct you to a specific NUC model.

I went with a 4th generation NUC realizing that the newer models were just around the corner, as I figured I could always get a newer model (e.g. to create a NUC cluster) when needed. In addition, I wanted a model with enough performance to last a few years of use and the flexibility to be reconfigured as needed. My choice was a model D54250WYK priced around $352 USD via Amazon (prices may vary by venue).

What’s included with a NUC?

My first NUC is a model D54250WYK (e.g. BOXD54250WYKH1), whose specific speeds and feeds you can view here at the Intel site, along with ordering info here at Amazon (or your other preferred venue).

View and compare other NUC models at the Intel NUC site here.

The following images show the two front-side USB 3.0 ports along with headphone (or speaker) and microphone jacks. On the rear of the NUC there are a couple of air vents, a power connector port (external power supply), mini DisplayPort and mini HDMI video ports, a GbE LAN port, and two USB 3.0 ports.

NUC front viewRear ports of NUC
Left is front view of my NUC model 54250 and Right is back or rear view of NUC

NUC Model: BOXD54250WYKH1 (speeds/feeds vary by specific model)
Form factor: 1.95" tall
Processor: Intel Core i5-4250U with active heat sink fan
Memory: Two SO-DIMM DDR3L (e.g. laptop) memory slots, up to 16GB (e.g. 2x8GB)
Display: One mini DisplayPort with audio; one mini HDMI port with audio
Audio: Intel HD Audio, 8 channel (7.1) digital audio via HDMI and DisplayPort, plus headphone jack
LAN: Intel Gigabit Ethernet (GbE) (I218)
Peripheral and storage: Two USB 3.0 ports (e.g. blue) front side; two USB 3.0 ports rear side; two USB 2.0 ports (internal); one SATA port (internal 2.5 inch drive bay); consumer infrared sensor (front panel)
Expansion: One full-length mini PCI Express slot with mSATA support; one half-length mini PCI Express slot
Included in the box: Laptop-style 19V 65W power adapter (brick) and cord, VESA mounting bracket (e.g. for mounting on the rear of a video monitor), integration (installation) guide, wireless antennae (integrated into chassis), Intel Core i5 logo
Warranty: 3-year limited

Processor Speeds and Feeds

There are various Intel Core i3 and i5 processors available depending on the specific NUC model. My 54250WYK has a two-core (1.3GHz each) 4th generation i5-4250U (click here to see Intel speeds and feeds), which includes Intel Visual BIOS, Turbo Boost, Rapid Start and virtualization support among other features.

Note that features vary by processor type, along with other software, firmware or BIOS updates. While the two-core 1.3GHz processor (e.g. max 2.6GHz) is not as robust as faster quad (or more) core processors running at 3.0GHz or faster, for most applications, including a first virtual lab or storage sandbox among other uses, it will be fast enough, roughly comparable to a lower- to mid-range laptop.

What this all means

In general I like the NUC so much that I bought one (model 54250) and would consider adding another in the future for some things; however, I also see the need to continue using my other compute servers for different workloads.

This wraps up part I of this two-part series. Continue reading in part two here, where I cover the options I added to my NUC, initial configuration, deployment, use and additional impressions.

Ok, nuff said for now, check out part-two here.

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 StorageIO and UnlimitedIO LLC All Rights Reserved

Server storage I/O Intel NUC nick knack notes – Second impressions

Storage I/O trends


This is the second of a two-part series about my first and second impressions of the Intel NUC (Next Unit of Computing). In the first post (here) I gave an overview and my first impressions, while in this post let’s look at the options added to my NUC model 54250, first deployment use and more impressions.

Intel® NUC with Intel® Core™ i5 Processor and 2.5-Inch Drive Support (NUC5i5RYH) via Intel.com

What you will want to add to a NUC

Since the NUC is a basic brick with a processor mounted on its motherboard, you will need to add memory, some type of persistent storage device (mSATA, SATA or USB based) and optionally a WiFi card.

One of the nice things about the NUC is that in many ways it provides the equivalent functionality of a laptop or mini-tower without the extra overhead (cost, components, packaging), enabling you to customize as needed for your specific requirements. For example, there is no keyboard, mouse, video screen, WiFi, Hard Disk Drive (HDD) or flash Solid State Device (SSD) included, nor an operating system pre-installed. No memory is included either, enabling you to decide how much to configure using compatible laptop-style memory. Video monitors attach via HDMI or mini DisplayPort, including VGA devices via an adapter cable. A keyboard and mouse, if needed, are handled via USB ports.

Here is what I added to my NUC model 54250.

1 x Crucial 16GB Kit (2 x 8GB) DDR3 1600 (PC3-12800) SODIMM 204-Pin Notebook Memory
1 x Intel Network 7260 WiFi Wireless-AC 7260 H/T Dual Band 2×2 AC+Bluetooth HMC (here is a link to the Intel site for various drivers)
1 x 500GB Samsung Electronics 840 EVO mSATA 0.85-Inch Solid State Drive
1 x SATA HDD, SSD or HHDD/SSHD (I used one of my existing drives)

Note that you will also need to supply some type of Keyboard, Video, Mouse (KVM) setup; in my case I used an HDMI-to-VGA adapter cable to attach the NUC via HDMI (for video) and USB (keyboard and mouse) to my Startech KVM switch.

The following images show on the left the Intel WiFi card installed, and on the right a Samsung 840 EVO mSATA 500GB flash SSD installed above the WiFi card. Also notice on the far right of the images the two DDR3 "notebook" class DRAM DIMM slots.

NUC WiFi cardmSATA SSD
Left: Intel WiFi card installed and Right Samsung EVO mSATA SSD card (sits above WiFi card)

Note that the NUC (like many laptops) accepts 9mm or thinner (e.g. 7mm) height HDDs and SSDs in its SATA drive bay. I mention this because some of the higher-capacity 2TB 2.5" SFF drives are taller than 9mm, as shown in the image below, and do not fit in the NUC internal SATA drive bay. While many devices and systems have 2.5" drive slots for HDDs, SSDs or HHDD/SSHDs, pay attention to drive height to avoid surprises when something does not fit as assumed.

2.5 HDD and SSDs
Low-profile and tall-profile 2.5" SFF HDDs

Additional drives and devices can be attached using external USB 3.0 ports including HDDs, SSDs or even USB to GbE adapters if needed. You will need to supply your own operating system, hypervisor, storage, networking or other software, such as Windows, *nix, VMware ESXi, Hyper-V, KVM, Xen, OpenStack or any of the various ZFS based (among others) storage appliances.

Unpacking and physical NUC installation

Initial setup and physical configuration of the NUC is pretty quick, the only tool needed being a Phillips screwdriver.

NUC and components ready for installation
Intel NUC 54250 and components ready for installation

With all the components including the NUC itself laid out for a quick inventory including recording serial numbers (see image above), the next step is to open up the NUC by removing four Phillips screws from the bottom. Once the screws and bottom plate are removed, the SATA drive bay opens up to reveal the slots for memory, the mSATA SSD and the WiFi card (see images below). Once the memory, mSATA and WiFi cards are installed, the SATA drive bay covers those components and it is time to install a 2.5" standard-height HDD or SSD. For my first deployment I temporarily installed one of my older HHDDs, a 750GB Seagate Momentus XT, which will be replaced by something newer soon.

NUC internal HDD/SSD slotNUC internal HDD installed
View of NUC with bottom cover removed, Left empty SATA drive bay, Right HDD installed

After the components are installed, it is time to replace the bottom cover plate of the NUC, securing it in place with the four screws previously removed. Next up is attaching any external devices via USB and other ports, including KVM and LAN network connections. Once the hardware is ready, it’s time to power up the NUC and check out the Visual BIOS (or UEFI) as shown below.

Intel NUC Visual BIOSIntel NUC Visual BIOS display
NUC VisualBIOS screen shot examples

At this point, unless you have already installed an operating system, hypervisor or other software on a HDD, SSD or USB device, it is time to install your preferred software.

Windows 7

First up was Windows 7, as I already had an image built on the HHDD that required some drivers to be added. Specifically, a visit to the Intel resources site (see the NUC resources and links section later in this post) was made to get LAN GbE, WiFi and USB drivers. Once those were installed, the on-board GbE LAN port worked well, as did the WiFi. Another driver that needed to be downloaded was for a USB-to-GbE adapter to add another LAN connection. A couple of reboots were also required for other Windows drivers and configuration changes to take effect, correcting some transient problems including KVM hangs which eventually cleared up.

Windows 2012 R2

Following Windows 7, next up was a clean install of Windows 2012 R2, which also required some drivers and configuration changes. One of the challenges is that Windows 2012 R2 is not officially supported on the NUC with its GbE LAN and WiFi cards. However, after doing some searches and reading a few posts including this and this, a solution was found and Windows 2012 R2 and its networking are working well.

Ubuntu and Clonezilla

Next up was a quick install of Ubuntu 14.04, which went pretty smoothly, as well as using Clonezilla to do some drive maintenance, moving images and partitions among other things.

VMware ESXi 5.5U2

My first attempt at installing a standard VMware ESXi 5.5U2 image ran into problems due to the GbE LAN port not being seen. The solution is to use a different build or a custom ISO that includes the applicable GbE LAN driver (e.g. net-e1000e-2.3.2.x86_64.vib); there is some useful information at Florian Grehl’s site (@virten) and over at Andreas Peetz’s site (@VFrontDe), including a SATA controller driver for xahci. Once the GbE driver was added (the same driver that addresses other Intel NIC I217/I218 based systems) along with updating the SATA driver, VMware worked fine.

Needless to say there are many other things I plan on doing with the NUC both as a standalone bare-metal system as well as a virtual platform as I get more time and projects allow.

What about building your NUC alternative?

In addition to the NUC models available via Intel and its partners, and accessorizing as needed, there are also specially customized and ruggedized NUC versions similar to what you would expect to find with laptops, notebooks and other PC-based systems.

MSI Probox rear viewMSI Probox front view
Left MSI ProBox rear-view Right MSI ProBox front view

If you are looking to do more than what Intel and its partners offer, there are other options, such as increasing the number of external ports among other capabilities. One option, which I recently added to my collection of systems, is a DIY (Do It Yourself) MSI ProBox (VESA mountable) such as this one here.

MSI Probox internal view
Internal view MSI ProBox (no memory, processor or disks)

The MSI ProBox is essentially a motherboard with an empty single CPU socket (e.g. LGA 1150, up to 65W) supporting various processors, two empty DDR3 DIMM slots, and two empty 2.5" SATA drive bays among other capabilities. Enclosures such as the MSI ProBox give you the flexibility to create something more robust than a basic NUC yet smaller than a traditional server, depending on your specific needs.

If you are looking for other small form factor, modular and ruggedized server options as an alternative to a NUC, then check out those from Xi3, Advantech, Cadian Networks, and Logic Supply among many others.

Storage I/O trends

First NUC impressions

Overall I like the NUC and see many uses for it, from consumer and home roles including entertainment and media systems and video security surveillance, to a small server or workstation device. In addition, I can see a NUC being used in smaller environments as a desktop workstation, or as a lower-power, lower-performance system including as a small virtualization host for SOHO, small SMB and ROBO environments. Other usages include a home virtual lab and gaming, as well as simple software defined storage proofs of concept. For example, how about creating a small cluster of NUCs to run VMware VSAN, DataCore, EMC ScaleIO, StarWind, Microsoft SOFS or Hyper-V, or any of the many ZFS-based NAS storage software applications?

Pros – Features and benefits

Small, low-power, self-contained with flexibility to choose my memory, WiFi, storage (HDD or SSD) without the extra cost of those items or software being included.

Cons – Caveats or what to look out for

It would be nice to have another GbE LAN port; however, I addressed that by adding a USB 3.0 to GbE adapter. Likewise it would be nice if the 2.5" SATA drive bay supported taller form-factor devices such as the 2TB drives. The workaround for adding larger capacity (and physically larger) storage devices is to use the USB 3.0 ports. The biggest warning: if you are going to venture outside the officially supported operating system and application software realm, be ready to load some drivers, possibly patch and hack some install scripts, and then plug and pray it all works. So far I have not run into any major show stoppers that were not addressed with some time spent searching (Google will be your friend), then loading drivers or making configuration changes.

Additional NUC resources and links

Various Intel products support search page
Intel NUC support and download links
Intel NUC model 54250 page, product brief page (and PDF version), and support with download links
Intel NUC home theater solutions guide (PDF)
Intel HCL for NUC page and Intel Core i5-4250U processor speeds and feeds
VMware on NUC tips
VMware ESXi driver for LAN net-e1000e-2.3.2.x86_64
VMware ESXi SATA xahci driver
Server storage I/O Intel NUC nick knack notes – First impressions
Server Storage I/O Cables Connectors Chargers & other Geek Gifts (Part I and Part II)
Software defined storage on a budget with Lenovo TS140


What this all means

The Intel NUC provides a good option for many situations that might otherwise need a larger mini-tower desktop workstation or similar system, for home, consumer and small office needs. A NUC can also be used in specialized, pre-configured, application-specific situations that need low power, basic system functionality and expansion options in a small physical footprint. In addition, a NUC can be a good option for adding to an existing physical and virtual lab, or as a basis for starting a new one.

So far I have found many uses for the NUC, which frees up other systems to do other tasks while enabling some older devices to finally be retired. On the other hand, like most any technology, while the NUC is flexible, its low power and performance are not enough for some applications. However, the NUC gives me flexibility to leverage the applicable unit of compute (e.g. server, workstation, etc.) for a given task, or put another way, to use the right technology tool for the task at hand.

For now I only need a single NUC to be a companion to my other HP, Dell and Lenovo servers as well as MSI ProBox, however maybe there will be a small NUC cluster, grid or ring configured down the road.

What say you: do you have a NUC? If so, how is it being used, and do you have tips, tricks or hints to share with others?

Ok, nuff said for now.

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 StorageIO and UnlimitedIO LLC All Rights Reserved

Revisiting RAID data protection remains relevant and resources

Storage I/O trends

Updated 2/10/2018

RAID data protection remains relevant, including erasure codes (EC) and local reconstruction codes (LRC) among other technologies. If RAID were really not relevant anymore (e.g. actually dead), why do some people spend so much time trying to convince others that it is dead, or to use a different RAID level, enhanced RAID, or beyond-RAID related advanced approaches?

When you hear RAID, what comes to mind?

A legacy monolithic storage system that supports narrow 4-, 5- or 6-drive-wide stripe sets, or a modern system supporting dozens of drives in a RAID group with different options?

RAID means many things; likewise there are different implementations (hardware, software, systems, adapters, operating systems) with various functionality, some better than others.

For example, which of the items in the following figure come to mind, or perhaps are new to your RAID vocabulary?

RAID questions

There are many variations of RAID storage: some for the enterprise, some for SMB, SOHO or consumer. Some have better performance than others; some have poor performance, for example causing extra writes, which leads to the perception that all parity-based RAID does extra writes (some implementations actually do write gathering and optimization).

Some hardware and software implementations use a WBC (write-back cache), mirrored or battery-backed (BBU), along with being able to group writes together in memory (cache) to do full-stripe writes. The result can be fewer back-end writes compared to other systems. Hence, not all RAID implementations in either hardware or software are the same. Likewise, just because a RAID definition shows a particular theoretical implementation approach does not mean all vendors have implemented it that way.
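To make the write-gathering point concrete, here is a rough sketch (my own illustration, not any specific vendor's implementation) of the back-end I/O difference between a small read-modify-write update and a cached full-stripe write on single-parity RAID:

```python
def raid5_backend_ios(stripe_width, chunks_updated):
    """Rough back-end I/O estimate for a RAID 5 write.

    stripe_width: total drives in the group (data drives + 1 parity)
    chunks_updated: how many data chunks the host write touches
    """
    data_drives = stripe_width - 1
    if chunks_updated >= data_drives:
        # Full-stripe write: parity is computed from the new data held
        # in cache, so no old data or old parity needs to be read back
        return {"reads": 0, "writes": data_drives + 1}
    # Small write (read-modify-write): read old data chunks and old
    # parity, then write new data chunks and new parity
    return {"reads": chunks_updated + 1, "writes": chunks_updated + 1}

# A single-chunk update on a 4+1 group costs 2 reads and 2 writes,
# while a gathered full-stripe write costs 5 writes and zero reads
print(raid5_backend_ios(5, 1))  # {'reads': 2, 'writes': 2}
print(raid5_backend_ios(5, 4))  # {'reads': 0, 'writes': 5}
```

This is why two products with the same nominal RAID level can show very different write behavior: one gathers writes into full stripes in cache, the other pays the read-modify-write penalty on every small update.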

RAID is not a replacement for backup rather part of an overall approach to providing data availability and accessibility.

data protection and durability

What’s the best RAID level? The one that meets YOUR needs

There are different RAID levels and implementations (hardware, software, controller, storage system, operating system, adapter among others) for various environments (enterprise, SME, SMB, SOHO, consumer) supporting primary, secondary, tertiary (backup/data protection, archiving).

RAID comparison
General RAID comparisons

Thus one size or approach does not fit all solutions; likewise RAID rules of thumb or guides need context. Context means that a RAID rule or guide for consumer, SOHO or SMB might be different for enterprise and vice versa, not to mention varying with the type of storage system, number of drives, and drive type and capacity among other factors.

RAID comparison
General basic RAID comparisons

Thus the best RAID level is the one that meets your specific needs in your environment. What is best for one environment and application may be different from what is applicable to yours.
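As a toy illustration of that "meets YOUR needs" point (these are my own rough starting points, not a vendor sizing tool), a couple of workload attributes can already narrow the candidates:

```python
def candidate_raid_levels(write_heavy=False, capacity_first=False):
    """Illustrative starting points only; real choices also depend on
    the implementation, drive count and type, cache behavior and
    protection requirements."""
    if write_heavy and not capacity_first:
        # Mirroring avoids the parity read-modify-write penalty
        return ["RAID 1", "RAID 10"]
    if capacity_first:
        # Dual parity or wider erasure codes give more usable capacity
        # per unit of protection on larger drive groups
        return ["RAID 6", "erasure code"]
    return ["RAID 5", "RAID 10"]

print(candidate_raid_levels(write_heavy=True))
print(candidate_raid_levels(capacity_first=True))
```

The point is not the specific answers, rather that the inputs (workload, capacity, protection) drive the choice, not the other way around.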

Key points and RAID considerations include:

· Not all RAID implementations are the same; some are very much alive and evolving while others are in need of a rest or rewrite. So it is often not the technology or techniques that are the problem, rather how they are implemented and then deployed.

· It may not be RAID that is dead, rather the solution that uses it. Hence, if you think a particular storage system, appliance, product or software is old and dead along with its RAID implementation, then just say that product or vendor's solution is dead.

· RAID can be implemented in hardware controllers, adapters or storage systems and appliances as well as via software and those have different features, capabilities or constraints.

· Long or slow drive rebuilds are a reality with larger disk drives and parity-based approaches; however, you have options on how to balance performance, availability, capacity, and economics.

· RAID can be single, dual or multiple parity or mirroring-based.

· Erasure and other coding schemes leverage parity techniques, and guess what umbrella parity schemes fall under (hint: RAID).

· RAID may not be cool, sexy or a fun topic and technology to talk about, however many trendy tools, solutions and services actually use some form or variation of RAID as part of their basic building blocks. This is an example of using new and old things in new ways to help each other do more without increasing complexity.

· Even if you are not a fan of RAID and think it is old and dead, at least take a few minutes to learn more about what it is you do not like, to update your dead FUD.
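The single-parity case from the points above can be shown in a few lines: parity is just the byte-wise XOR of the data chunks, and any one lost chunk is rebuilt by XOR-ing the survivors with the parity. A sketch of the principle only, not a real controller's code:

```python
from functools import reduce

def xor_parity(chunks):
    # RAID 5 style parity: byte-wise XOR across all data chunks
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

def rebuild_lost_chunk(surviving_chunks, parity):
    # Any single lost chunk equals the XOR of the survivors with parity
    return xor_parity(surviving_chunks + [parity])

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data chunks in one stripe
parity = xor_parity(data)

# Simulate losing the middle drive and rebuilding its chunk
assert rebuild_lost_chunk([data[0], data[2]], parity) == b"BBBB"
```

A real rebuild repeats this across every stripe on the failed drive, which is also why rebuild time scales with drive capacity: every surviving drive in the group must be read end to end.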

Wait, Isn’t RAID dead?

There is some "RAID is dead" marketing that paints a broad picture that RAID is dead in order to prop up something new, which in some cases may be a derivative variation of parity RAID.

data dispersal
Data dispersal and durability

RAID rebuild improving
RAID continues to evolve with rapid rebuilds for some systems

On the other hand, there are some specific products, technologies and implementations that may be end of life or actually dead. Likewise, what might be dead, dying or simply not in vogue are specific RAID implementations or packaging. Certainly there is a lot of buzz around object storage, cloud storage, forward error correction (FEC) and erasure coding, including messages of how they replace RAID. The catch is that some object storage solutions are overlaid on top of lower-level file systems that do things such as RAID 6, granted out of sight, out of mind.

RAID comparison
General RAID parity and erasure code/FEC comparisons

Then there are advanced parity protection schemes, including FEC and erasure codes, that while not your traditional RAID levels have characteristics including chunking or sharding data and spreading it out over multiple devices with multiple parity (or parity-derived) protection.
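A quick way to compare traditional parity RAID with wider erasure code layouts is usable capacity vs. failures tolerated for a k data + m parity scheme (the k+m values below are common illustrative examples, not any product's defaults):

```python
def protection_profile(k, m):
    """Usable fraction and drive failures tolerated for a k+m
    parity/erasure scheme (illustrative capacity math only)."""
    return {"usable": k / (k + m), "failures_tolerated": m}

# RAID 5 over 8 drives ~ 7+1, RAID 6 over 8 drives ~ 6+2,
# and a wide erasure code might be 10+4
for name, (k, m) in {"7+1": (7, 1), "6+2": (6, 2), "10+4": (10, 4)}.items():
    print(name, protection_profile(k, m))
```

The trade-off is visible immediately: the wider scheme tolerates more failures while still keeping usable capacity above 70 percent, at the cost of more devices touched per read, write and rebuild.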

Bottom line is that for some environments, different RAID levels may be more applicable and alive than for others.

Via BizTech – How to Turn Storage Networks into Better Performers

  • Maintain Situational Awareness
  • Design for Performance and Availability
  • Determine Networked Server and Storage Patterns
  • Make Use of Applicable Technologies and Techniques

If RAID is alive, what to do with it?

If you are new to RAID, learn more about its past, present and future, keeping context in mind. Keeping context in mind means that there are different RAID levels and implementations for various environments. Not all RAID 0, 1, 1/0, 10, 2, 3, 4, 5, 6 or other variations (past, present and emerging) are the same for consumer vs. SOHO vs. SMB vs. SME vs. enterprise, nor are the usage cases. Some need performance for reads, others for writes; some need high capacity with low performance, using hardware or software. RAID rules of thumb are OK and useful; however, keep them in context to what you are doing as well as using.

What to do next?

Take some time to learn and ask questions, including what to use when, where, why and how, as well as whether an approach or recommendation is applicable to your needs. Check out the following links to read some extra perspectives about RAID, and keep in mind that what might apply to the enterprise may not be relevant for consumer or SMB and vice versa.

Some advise needed on SSD’s and Raid (Via Spiceworks)
RAID 5 URE Rebuild Means The Sky Is Falling (Via BenchmarkReview)
Double drive failures in a RAID-10 configuration (Via SearchStorage)
Industry Trends and Perspectives: RAID Rebuild Rates (Via StorageIOblog)
RAID, IOPS and IO observations (Via StorageIOBlog)
RAID Relevance Revisited (Via StorageIOBlog)
HDDs Are Still Spinning (Rust Never Sleeps) (Via InfoStor)
When and Where to Use NAND Flash SSD for Virtual Servers (Via TheVirtualizationPractice)
What’s the best way to learn about RAID storage? (Via Spiceworks)
Design considerations for the host local FVP architecture (Via Frank Denneman)
Some basic RAID fundamentals and definitions (Via SearchStorage)
Can RAID extend nand flash SSD life? (Via StorageIOBlog)
I/O Performance Issues and Impacts on Time-Sensitive Applications (Via CMG)
The original RAID white paper (PDF) by Patterson, Gibson and Katz, which while over 20 years old provides a basis, foundation and some history
Storage Interview Series (Via Infortrend)
Different RAID methods (Via RAID Recovery Guide)
A good RAID tutorial (Via TheGeekStuff)
Basics of RAID explained (Via ZDNet)
RAID and IOPs (Via VMware Communities)

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

What is my favorite or preferred RAID level?

That depends: for some things it's RAID 1, for others RAID 10, yet for others RAID 4, 5, 6 or DP, and still other situations could be a fit for RAID 0 or erasure codes and FEC. Instead of focusing on just one or two RAID levels as the solution for different problems, I prefer to look at the environment (consumer, SOHO, small or large SMB, SME, enterprise), type of usage (primary, secondary or data protection), performance characteristics, reads, writes, and type and number of drives among other factors. What might be a fit for one environment may not be a fit for others; thus my preferred RAID level (along with where it is implemented) is the one that meets the given situation. However, also keep in mind tying RAID into an overall data protection strategy; remember, RAID is not a replacement for backup.


Like other technologies that have been declared dead for years or decades, aka the zombie technologies (e.g. dead yet still alive), RAID continues to be used while the technology evolves. There are specific products, implementations or even RAID levels that have faded away or are declining in some environments, yet are alive in others. RAID and its variations are still alive; however, how it is used or deployed in conjunction with other technologies is also evolving.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

December 2014 Server StorageIO Newsletter

December 2014

Hello and welcome to this December Server and StorageIO update newsletter.

Seasons Greetings


Commentary In The News

StorageIO news

Following are some StorageIO industry trends perspectives comments that have appeared in various venues. Cloud conversations continue to be popular including concerns about privacy, security and availability. Over at BizTech Magazine there are some comments about cloud and ROI. Some comments on AWS and Google SSD services can be viewed at SearchAWS. View other trends comments here

Tips and Articles

View recent as well as past tips and articles here

StorageIOblog posts

Recent StorageIOblog posts include:

View other recent as well as past blog posts here

In This Issue

  • Industry Trends Perspectives
  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Events & Activities

    View other recent and upcoming events here

    Webinars

    December 11, 2014 – BrightTalk
    Server & Storage I/O Performance

    December 10, 2014 – BrightTalk
    Server & Storage I/O Decision Making

    December 9, 2014 – BrightTalk
    Virtual Server and Storage Decision Making

    December 3, 2014 – BrightTalk
    Data Protection Modernization

    Videos and Podcasts

    StorageIO podcasts are also available via and at StorageIO.tv

    From StorageIO Labs

    Research, Reviews and Reports

    StarWind Virtual SAN for Microsoft SOFS

    May require registration
    This report looks at the shared storage needs of SMBs and ROBOs leveraging Microsoft Scale-Out File Server (SOFS). The focus is on Microsoft Windows Server 2012, Server Message Block (SMB) 3.0, SOFS and StarWind Virtual SAN management software.

    View additional reports and lab reviews here.

    Resources and Links

    Check out these useful links and pages:
    storageio.com/links
    objectstoragecenter.com
    storageioblog.com/data-protection-diaries-main/
    storageio.com/ssd

    Enjoy this edition of the Server and StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts along with in the news commentary appearing soon.

    Seasons greetings 2014

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio
