Supermicro CSE-M14TQC: Use your media bay to add 12 Gbps SAS SSD drives to your server

Storage I/O trends


Do you have a computer server, workstation or mini-tower PC that needs more 2.5" form factor hard disk drives (HDD), solid state devices (SSD) or hybrid flash drives, yet has no expansion space?

Do you also want or need those HDD or SSD expansion slots to be hot swappable, supporting 6 Gbps SATA3 as well as up to 12 Gbps SAS devices?

Do you have an available 5.25" media bay slot (e.g. where you could add an optional CD or DVD drive), or can you remove your existing CD or DVD drive and use USB instead for software loading?

Do you need to carry out the above without swapping out your existing server or workstation, on a reasonable budget, say around $100 USD plus tax, handling and shipping (your prices may vary)?

If you need to implement the above, then here is a possible solution, or in my case, a real solution.

Via StorageIOblog Supermicro 4 x 2.5 12Gbps SAS enclosure CSE-M14TQC
Supermicro CSE-M14TQC with hot swap canister before installing in one of my servers

In the past I have used a solution from StarTech that supports up to 4 x 2.5" 6 Gbps SAS and SATA drives in a 5.25" media bay form factor, installing these in my various HP, Dell and Lenovo servers to increase the number of internal drive bays (slots).

Via Amazon.com StarTech SAS and SATA expansion
Via Amazon.com StarTech 4 x 2.5" SAS and SATA internal enclosure

I still use the StarTech device shown above (read earlier reviews and experiences here, here and here) in some of my servers, where it continues to be great for 6 Gbps SAS and SATA 2.5" HDDs and SSDs. However, for 12 Gbps SAS devices I have used other approaches, including external 12 Gbps SAS enclosures.

Recently while talking with the folks over at Servers Direct, I mentioned how I was using the StarTech 4 x 2.5" 6 Gbps SAS/SATA media bay enclosure as a means of boosting the number of internal drives that could be put into some smaller servers. The Servers Direct folks told me about the Supermicro CSE-M14TQC which, after doing some research, I decided to buy to complement the StarTech 6 Gbps enclosures, as well as external 12 Gbps SAS enclosures and other internal options.

What is the Supermicro CSE-M14TQC?

The CSE-M14TQC is a 5.25" form factor enclosure that enables four (4) 2.5" hot swappable (if your adapter and OS support hot swap) 12 Gbps SAS or 6 Gbps SATA devices (HDD and SSD) to fit into the media bay slot normally used by CD/DVD devices in servers or workstations. There is a single Molex male power connector on the rear of the enclosure that can be used to attach to your server's available power using applicable connector adapters. In addition, there are four separate drive connectors (e.g. SATA-type connectors) that support up to 12 Gbps SAS per drive, which you can attach to your server's motherboard (note that SAS devices need a SAS controller), HBA or RAID adapter's internal ports.

Cooling is provided via a rear-mounted 12,500 RPM, 16 cubic feet per minute fan. Each of the four drives is hot swappable (requires operating system or hypervisor support) and contained in a small canister (provided with the enclosure). Drives easily mount to the canister via screws that are also supplied as part of the enclosure kit. There is also a drive activity and failure notification LED for the devices. If you do not have any available SAS or SATA ports on your server's motherboard, you can use an available PCIe slot to add an HBA or RAID card for attaching the CSE-M14TQC drives. For example, a 12 Gbps SAS (6 Gbps SATA) Avago/LSI RAID card, or a 6 Gbps SAS/SATA RAID card.
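Once the enclosure is cabled to an HBA or motherboard port, on Linux a tool such as lsscsi can confirm the new drives showed up. As a minimal sketch (the sample output and device names below are made up for illustration, not from a real system), here is one way to pull the disk entries out of lsscsi-style text:

```python
# Hypothetical sketch: pick out disk entries from lsscsi-style output.
# The sample text is illustrative only; run `lsscsi` yourself for real data.
def parse_lsscsi(output: str):
    """Return (scsi_address, type, model, device_node) tuples for disks."""
    devices = []
    for line in output.strip().splitlines():
        parts = line.split()
        # Typical lsscsi columns: [H:C:T:L] type vendor model rev device
        if len(parts) >= 6 and parts[1] == "disk":
            devices.append((parts[0].strip("[]"), parts[1], parts[3], parts[-1]))
    return devices

sample = """
[0:0:0:0]    disk    ATA      Samsung_SSD_850  2B6Q  /dev/sda
[1:0:0:0]    disk    SEAGATE  ST2000NX0273     N003  /dev/sdb
[1:0:1:0]    cd/dvd  HL-DT-ST DVDRAM           1.00  /dev/sr0
"""

disks = parse_lsscsi(sample)   # two disk entries; the CD/DVD row is skipped
```

In practice you would feed this the real command output (e.g. via subprocess) and check that the drives behind the enclosure's four connectors all appear.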

Via Supermicro CSE-M14TQC rear details (4 x SATA and 1 Molex power connector)

Via StorageIOblog Supermicro 4 x 2.5 rear view CSE-M14TQC 12Gbps SAS enclosure
CSE-M14TQC rear view before installation

Via StorageIOblog Supermicro CSE-M14TQC 12Gbps SAS enclosure cabling
CSE-M14TQC ready for installation with 4 x SATA (12 Gbps SAS) drive connectors and Molex power connector

Tip: In the case of the Lenovo TS140 that I initially installed the CSE-M14TQC into, there is not a lot of space for installing the drive connectors or Molex power connector to the enclosure. Instead, attach the cables to the CSE-M14TQC as shown above before installing the enclosure into the media bay slot. Simply attach the connectors as shown and feed them through the media bay opening as you install the CSE-M14TQC enclosure. Then attach the drive connectors to your HBA, RAID card or server motherboard and the power connector to your power source inside the server.

Note and disclaimer: pay attention to your server manufacturer's power loading and specifications, along with how much power will be used by the HDDs or SSDs to be installed, to avoid electrical power or fire issues due to overloading!
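As a back-of-envelope sanity check on that note, you can total the worst-case current draw of the four drives per voltage rail and compare it to what the feed serving the media bay can supply. The per-drive amp figures and the 5 A limit below are illustrative assumptions only; check your drives' data sheets and your server's power supply specifications:

```python
# Illustrative power-budget check for four drives sharing one Molex feed.
# All amp figures are hypothetical; consult actual data sheets and PSU specs.
def rail_load(drives, rail):
    """Sum the current draw (amps) of a list of drives on a voltage rail."""
    return sum(d[rail] for d in drives)

# Assumed worst-case draw per drive (amps); 2.5" devices mostly use 5V.
ssd = {"5V": 1.0, "12V": 0.0}
hdd_25 = {"5V": 0.55, "12V": 0.0}

bay = [ssd, ssd, hdd_25, hdd_25]          # two SSDs plus two 2.5" HDDs
load_5v = rail_load(bay, "5V")            # 3.1 A total on the 5V rail
load_12v = rail_load(bay, "12V")

# Compare against an assumed 5 A limit for the feed serving the media bay.
within_budget = load_5v <= 5.0 and load_12v <= 5.0
```

If the totals approach the rail limit, spread drives across feeds or choose lower-power devices.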

Via StorageIOblog Supermicro CSE-M14TQC enclosure Lenovo TS140
CSE-M14TQC installed into Lenovo TS140 empty media bay

Via StorageIOblog Supermicro CSE-M14TQC drive enclosure Lenovo TS140

CSE-M14TQC installed with front faceplate on Lenovo TS140

Where to read, watch and learn more


What this all means and wrap up

If you have a server that simply needs some extra storage capacity from adding some 2.5" HDDs, or a performance boost from fast SSDs, yet does not have any more internal drive slots or expansion bays, leverage your media bay. This applies to smaller environments where you might have one or two servers, as well as to environments where you want or need to create a scale-out software defined storage or hyper-converged platform using your own hardware. Another option: if you have a lab or test environment for VMware vSphere ESXi, Windows, Linux, OpenStack or other things, this can be a cost-effective approach to adding both storage space capacity and performance while leveraging newer 12 Gbps SAS technologies.

For example, create a VMware VSAN cluster using smaller servers such as the Lenovo TS140 or equivalent, where you can install a couple of 6TB or 8TB higher-capacity 3.5" drives in the internal drive bays, then add a couple of 12 Gbps SAS SSDs along with a couple of 2.5" 2TB (or larger) HDDs, a RAID card and a high-speed networking card. If VMware VSAN is not your thing, how about setting up a Windows Server 2012 R2 failover cluster including Scale-Out File Server (SOFS) with Hyper-V, or perhaps OpenStack or one of many other virtual storage appliances (VSA) or software defined storage, networking or other solutions? Perhaps you need to deploy more storage for a big data Hadoop-based analytics system, or a cloud or object storage solution? On the other hand, if you simply need to add some storage to your media, gaming or general-purpose server, the CSE-M14TQC can be an option along with other external solutions.

Ok, nuff said

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Intel Micron unveil new 3D XPoint Non-Volatile Memory (NVM) for server storage

3D XPoint NVM persistent memory PM storage class memory SCM



Updated 1/31/2018

This is the first of a three-part series on the Intel and Micron announcement of new 3D XPoint Non-Volatile Memory (NVM) for server storage. Read Part II here and Part III here.

In a webcast the other day, Intel and Micron announced new 3D XPoint non-volatile memory (NVM) that can be used both for primary main memory (e.g. what's in computers, servers, laptops, tablets and many other things) in place of Dynamic Random Access Memory (DRAM), and for persistent storage faster than today's NAND flash-based solid state devices (SSD), not to mention future hybrid usage scenarios. Note that this announcement, while sharing the common term 3D, is different from the earlier Intel and Micron announcement about 3D NAND flash (read more about that here).

Twitter hash tag #3DXpoint

The big picture, why this type of NVM technology is needed


  • Memory is storage and storage is persistent memory
  • No such thing as a data or information recession, more data being created, processed and stored
  • Increased demand is also driving density along with convergence across server storage I/O resources
  • Larger amounts of data needing to be processed faster (large amounts of little data and big fast data)
  • Fast applications need more and faster processors, memory along with I/O interfaces
  • The best server or storage I/O is the one you do not need to do
  • The second best I/O is one with least impact or overhead
  • Data needs to be close to processing, processing needs to be close to the data (locality of reference)


Server Storage I/O memory hardware and software hierarchy along with technology tiers

What did Intel and Micron announce?

Intel SVP and General Manager Non-Volatile Memory Solutions Group Robert Crooke (left) and Micron CEO D. Mark Durcan did the joint announcement presentation of 3D XPoint (webinar here). What was announced is the 3D XPoint technology jointly developed and manufactured by Intel and Micron, a new form or category of NVM that can be used both for primary memory in servers, laptops and other computers, as well as for persistent data storage.


Robert Crooke (Left) and Mark Durcan (Right)

Summary of 3D XPoint announcement

  • New category of NVM memory for servers and storage
  • Joint development and manufacturing by Intel and Micron in Utah
  • Non volatile so can be used for storage or persistent server main memory
  • Allows NVM to scale with data, storage and processors performance
  • Leverages capabilities of both Intel and Micron who have collaborated in the past
  • Performance: Intel and Micron claim up to 1,000x faster vs. NAND flash
  • Availability: persistent NVM (unlike DRAM), with better durability (life span) vs. NAND flash
  • Capacity: densities about 10x better vs. traditional DRAM
  • Economics: cost per bit between DRAM and NAND (depending on packaging of resulting products)
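To put the "up to 1,000x faster than NAND flash" claim in rough perspective, a quick bit of arithmetic shows where 3D XPoint would land relative to DRAM. The latency figures below are assumptions for illustration (actual numbers vary widely by device and workload), not published specifications:

```python
# Rough, assumption-laden arithmetic on the announced performance claim.
nand_read_us = 100.0      # assumed NAND flash read latency (~100 microseconds)
dram_read_us = 0.1        # assumed DRAM access latency (~100 nanoseconds)

# Claimed: up to 1,000x faster than NAND flash.
xpoint_read_us = nand_read_us / 1000.0    # 0.1 us, i.e. ~100 ns

# Under these assumptions 3D XPoint lands within roughly an order of
# magnitude of DRAM latency, while remaining persistent like NAND.
ratio_vs_dram = xpoint_read_us / dram_read_us
```

The point is not the exact numbers but the category shift: a persistent medium whose access time is closer to DRAM than to NAND flash.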

What applications and products is 3D XPoint suited for?

In general, 3D XPoint should be usable for many of the same applications and associated products that current DRAM and NAND flash-based storage memories are used for. These range from IT and cloud or managed service provider data center applications and services to consumer-focused uses, among many others.


3D XPoint enabling various applications

In general, applications or usage scenarios (along with supporting products) that can benefit from 3D XPoint include, among others, those that need larger amounts of main memory in a denser footprint, such as in-memory databases, little and big data analytics, gaming, waveform analysis for security, copyright or other detection analysis, life sciences, high performance compute and high-productivity compute, energy, video and content serving, among many others.

In addition, there are applications that need persistent main memory for resiliency, or to cut the delays and impacts of planned or unplanned maintenance, or of having to wait for memories and caches to be warmed or re-populated after a server boot (or re-boot). 3D XPoint will also be useful for applications that need faster read and write performance compared to current generation NAND flash for data storage. This means both existing and emerging applications, as well as some that do not yet exist, will benefit from 3D XPoint over time, much as today's applications have benefited from DRAM used in Dual Inline Memory Modules (DIMM) and NAND flash advances over the past several decades.
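The idea of persistent main memory, where state survives a restart without caches having to be re-warmed, can be sketched with ordinary memory-mapped file I/O. This is only a stand-in to illustrate the concept; real persistent memory programming maps DAX-enabled devices and uses libraries such as PMDK, which are not shown here:

```python
# Sketch: memory-mapped file persistence as a stand-in for persistent memory.
# A real NVM deployment would map a DAX device, not a temporary file.
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "pmem_stand_in.bin")

# "Warm" phase: populate the mapped region, then close (simulating a reboot).
with open(path, "wb+") as f:
    f.truncate(4096)
    with mmap.mmap(f.fileno(), 4096) as m:
        m[0:5] = b"state"
        m.flush()                 # push dirty pages to the backing store

# "Restart" phase: remap and find the data already in place, no re-population.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 4096, access=mmap.ACCESS_READ) as m:
        recovered = bytes(m[0:5])
```

With true persistent main memory the "flush and remap" cost shrinks toward a load/store, which is what makes the warm-cache-after-reboot scenario interesting.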

Where to read, watch and learn more


Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

First, keep in mind that this is very early in the 3D XPoint technology evolution life-cycle, and neither DRAM nor NAND flash will be dead, at least near term. Keep in mind that NAND flash appeared back in 1989 and only over the past several years has finally hit its mainstream adoption stride, with plenty of market upside left. Continue reading Part II here and Part III here of this three-part series on Intel and Micron 3D XPoint along with more analysis and commentary.

Disclosure: Micron and Intel have been direct and/or indirect clients in the past via third-parties and partners; I have also bought and use some of their technologies directly and/or indirectly via their partners.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

July 2015 Server StorageIO Update Newsletter

Volume 15, Issue VII

Hello and welcome to this July 2015 Server StorageIO update newsletter. It's midsummer here in the northern hemisphere, which for many means vacations or holidays.

Content Solution Platforms

Thus this month's newsletter has a focus on content solution platforms, including hardware and software that get defined to support various applications. Content solutions span from video (4K, HD and legacy streaming, pre-/post-production and editing), audio, imaging (photo, seismic, energy, healthcare, etc.) to security surveillance (including Intelligent Video Surveillance [IVS] as well as Intelligence Surveillance and Reconnaissance [ISR]).

StorageIOblog posts

In case you missed it:

View other recent as well as past blog posts here

From StorageIO Labs

Research, Reviews and Reports

Servers Direct Content Platform
Servers Direct Content Solution Platform

An industry and customer trend is leveraging converged platforms based on multi-socket processors with dozens of cores and threads (logical processors) to support parallel or highly concurrent threaded content-based applications.

Recently I had the opportunity, courtesy of Servers Direct, to get some hands-on test time with one of their 2U Content Solution platforms. In addition to big fast data, other content solution applications include: content distribution network (CDN) content caching, network function virtualization (NFV), software-defined networking (SDN), cloud-rich unstructured big fast media data, analytics and little data (e.g. SQL and NoSQL databases, key-value stores, repositories and meta-data) among others.

Read more about content solution platforms including those Intel powered platforms from Servers Direct in this Server StorageIO Industry Trends Perspective solution brief here.

View other Server StorageIO lab review reports here

Closing Comments

Enjoy this edition of the Server StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts along with in-the-news commentary appearing soon.

Cheers gs

Greg Schulz – @StorageIO

Microsoft MVP File System Storage
VMware vExpert

In This Issue

  • Industry Trends News
  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Server StorageIO Lab reviews
  • Events and Webinars
  • Resources and Links
  • StorageIO Commentary in the news

    StorageIO news (image licensed for use from Shutterstock by StorageIO)
    Recent Server StorageIO commentary and industry trends perspectives about news, activities and announcements.

    Processor: A Look At Object-Based Storage
    Processor: Newest and best server trends
    PowerMore: Flash not just for performance
    SearchVirtualStorage: Containers and storage
    BizTechMagazine: Simplify with virtualization
    EnterpriseStorageForum: Future DR Storage
    EnterpriseStorageForum: 10 Tips for DRaaS
    EnterpriseStorageForum: NVMe planning

    View more trends comments here

    StorageIO Tips and Articles

    A common question I am asked is, “What is the best storage technology?” My routine answer is, “It depends!” During my recent Interop Las Vegas session “Smart Shopping for Your Storage Strategy” I addressed this very question. Read more in my tip Selecting Storage: Start With Requirements over at Network Computing.

    Check out these resources and links on server storage I/O performance and benchmarking tools. View more tips and articles here

    Various Industry Events

    Server Storage I/O Workshop Seminars
    Nijkerk Netherlands October 13-16 2015

    VMworld August 30-September 3 2015

    Flash Memory Summit August 11-13

    View other recent and upcoming events here

    Webinars

    BrightTalk Webinar – June 23 2015 9AM PT
    Server Storage I/O Innovation v2.015: Protect Preserve & Serve Your Information

    Resources and Links

    Check out these useful links and pages:
    storageio.com/links
    objectstoragecenter.com
    storageioblog.com/data-protection-diaries-main/

    storageperformance.us
    thessdplace.com
    storageio.com/raid
    storageio.com/ssd

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(TM) and UnlimitedIO All Rights Reserved

    May and June 2015 Server StorageIO Update Newsletter

    Volume 15, Issue V & VI

    Hello and welcome to this joint May and June 2015 Server StorageIO update newsletter. Here in the northern hemisphere it's summer, which means holiday vacations among other things.

    There has been a lot going on this spring and so far this summer with more in the wings. Summer can also be a time to get caught up on some things, preparing for others while hopefully being able to enjoy some time off as well.

    In terms of what I have been working on (or with): clouds (OpenStack, vCloud Air, AWS, Azure, GCS among others), virtual and containers, flash SSD devices (drives, cards), software defining, content servers, NVMe, databases, data protection items, servers, cache and micro-tiering among other things.

    Speaking of getting caught up, back in early May among many other conferences (Cisco, Docker, HP, IBM, OpenStack, Red Hat and many other events) was EMCworld. EMC covered my hotel and registration costs to attend the event in Las Vegas (thanks EMC, that's a disclosure btw ;). View a summary StorageIOblog post covering EMCworld 2015 here, along with recent EMC announcements including the acquisition of cloud services vendor Virtustream for $1.2B, and ECS 2.0.

    Server and Storage I/O Wrappings

    This month's newsletter has a focus on software and storage wrappings, that is, how your storage or software is packaged, delivered or deployed. For example: traditional physical storage systems, software defined storage as shrink-wrap or download, tin-wrapped software as an appliance, virtual-wrapped such as a virtual storage appliance, or cloud-wrapped among others.

    OpenStack software defined cloud

    OpenStack (the organization, community, event and software) continues to gain momentum. The latest release, known as Kilo (more Kilo info here), came out in early April, followed by the OpenStack summit in May.

    Some of you might be more involved with OpenStack vs. others, perhaps having already deployed into your production environment. Perhaps you, like myself have OpenStack running in a lab for proof of concept, research, development or learning among other things.

    You might even be using the services of a public cloud or managed service provider that is powered by OpenStack. On the other hand, you might be familiar with OpenStack from reading up on it, watching videos, listening to podcasts or attending events to figure out what it is, where it fits, as well as what your organization can use it for.

    Drew Robb (@Robbdrew) has a good overview piece about OpenStack and storage over at Enterprise Storage Forum (here). OpenStack is a collection of tools or bundles for building private, hybrid and public clouds. These various open source projects within the OpenStack umbrella include compute (Nova) and virtual machine images (Glance). Other components include dashboard management (Horizon), security and identity control (Keystone), network (Neutron), object storage (Swift), block storage (Cinder) and file-based storage (Manila) among others.

    It’s up to the user to decide which pieces you will add. For example, you can use Swift without having virtual machines and vice versa. Read Drew’s complete article here.
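As a flavor of how independent the pieces are, Swift exposes a plain HTTP object API (container and object paths under a per-account URL, authenticated with a Keystone token), with no Nova virtual machines involved. The endpoint, account and token below are made-up placeholders, not a real deployment:

```python
# Sketch of addressing a Swift object over bare HTTP (placeholders throughout).
# Swift's object API is: <endpoint>/v1/<account>/<container>/<object>,
# authenticated with an X-Auth-Token obtained from Keystone.
def swift_object_url(endpoint, account, container, obj):
    """Compose the URL Swift expects for an object within a container."""
    return f"{endpoint}/v1/{account}/{container}/{obj}"

url = swift_object_url(
    "https://swift.example.com:8080",   # hypothetical endpoint
    "AUTH_demo", "backups", "db.dump",
)
headers = {"X-Auth-Token": "gAAAA-placeholder-token"}
# A client would then issue: PUT <url> with the object bytes as the body,
# or GET <url> to retrieve it -- no compute (Nova) service required.
```

Real deployments would use python-swiftclient or an S3-compatible tool rather than hand-built requests, but the shape of the API is the point here.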

    Btw, if you missed it, not only has OpenStack added file support (e.g. Manila), Amazon Web Services (AWS) also recently added the Elastic File System (EFS), complementing their Elastic Block Store (EBS).

    Focus on Storage Wrappings

    Software exists and gets deployed in various places as shown in the following examples.

    software wrapped storage

    • Cloud wrapped software – software that can be deployed in a cloud instance
    • Container wrapped software – software deployed in a Docker or other container
    • Firmware wrapped software – software that gets packaged and deployed as firmware in a server, storage, network device or adapter
    • Shrink wrapped software – software that can be downloaded and deployed where you want
    • Tin wrapped software – software that is packaged or bundled with hardware (e.g. tin) such as an appliance or storage system
    • Virtual wrapped software – software deployed as a virtual appliance, such as a virtual storage appliance (VSA)

    server storage software wrapping

    StorageIOblog posts

    Data Protection Diaries

    Modernizing Data Protection
    Using new and old things in new ways

    This is part of an ongoing series of posts, part of www.storageioblog.com/data-protection-diaries-main/, on data protection including archiving, backup/restore, business continuance (BC), business resiliency (BR), data footprint reduction (DFR), disaster recovery (DR) and high availability (HA), along with related themes, tools, technologies, techniques, trends and strategies.
    world backup day (and test your restore) image licensed from Shutterstock by StorageIO

    Data protection is a broad topic that spans from logical and physical security to HA, BC, BR, DR and archiving (including life beyond compliance), along with various tools, technologies and techniques. Key is aligning those to the needs of the business or organization for today's as well as tomorrow's requirements. Instead of doing things the way they have been done in the past, which may have been based on what was known or possible given the technology capabilities of the time, why not start using new and old things in new ways?

    Let’s start using all the tools in the data protection toolbox regardless of if they are new or old, cloud, virtual, physical, software defined product or service in new ways while keeping the requirements of the business in focus. Read more from this post here.

    In case you missed it:

    View other recent as well as past blog posts here

    In This Issue


  • Industry Trends Perspectives News
  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Events and Webinars
  • Recommended Reading List
  • Server StorageIO Lab reports
  • Resources and Links
  • Industry News and Activity

    Recent Industry news and activity

    AWS adds new M4 virtual machine instances
    Cisco provides FCoE proof of life

    Google new cloud storage pricing
    HP announces new data center services
    HDS announces new products & services
    IBM enhances storage portfolio

    IBTA announces RoCE initiative
    InfiniteIO announces network/cloud cache
    Intel buying FPGA specialist Altera
    NetApp – Changes CEO

    View other recent and upcoming events here

    StorageIO Commentary in the news

    StorageIO news (image licensed for use from Shutterstock by StorageIO)
    Recent Server StorageIO commentary and industry trends perspectives about news, activities and announcements.

    BizTechMagazine: Comments on how to simplify your data center with virtualization
    EnterpriseStorageForum: Comments on Open Stack and Clouds
    EnterpriseStorageForum: Comments on Top Ten Software Defined Storage Tips, Gotchas and Cautions
    EdTech: Comments on Harness Power with New Processors

    Processor: Comments on Protecting Your Servers & Networking equipment

    Processor: Comments on Improve Remote Server Management including KVM
    CyberTrend: Comments on Software Defined Data Center and virtualization
    BizTechMagazine: Businesses Prepare as End-of-Life for Windows Server 2003 Nears
    InformationWeek: Top 10 sessions from Interop Las Vegas 2015

    View more trends comments here

    Vendors you may not have heard of

    This is a new section starting in this issue where various new or existing vendors as well as service providers you may not have heard about will be listed.

    CloudHQ – Cloud management tools
    EMCcode Rex-Ray – Container management
    Enmotus FUZE – Flash leveraged micro tiering
    Rubrik – Data protection management
    Sureline – Data protection management
    Virtunet systems – VMware flash cache software
    InfiniteIO – Cloud and NAS cache appliance
    Servers Direct – Server and storage platforms

    Check out more vendors you may know, have heard of, or that are perhaps new to you on the Server StorageIO Industry Links page here. There are over 1,000 vendor entries (and growing) on the links page.

    StorageIO Tips and Articles

    So you have a new storage device or system. How will you test or find its performance? Check out this quick-read tip on storage benchmark and testing fundamentals over at BizTech.

    Check out these resources and links on server storage I/O performance and benchmarking tools. View more tips and articles here

    Webinars

    BrightTalk Webinar – June 23 2015 9AM PT
    Server Storage I/O Innovation v2.015: Protect Preserve & Serve Your Information

    Videos and Podcasts

    VMware vCloud Air Server StorageIO Lab Test Drive Ride along videos.

    Server StorageIO Lab vCloud test drive video part 1 and Server StorageIO Lab vCloud test drive video part 2
    VMware vCloud Air test drive videos Part I & II

    StorageIO podcasts are also available via and at StorageIO.tv

    Various Industry Events

     

    VMworld August 30-September 3 2015

    Flash Memory Summit August 11-13

    Interop – April 29 2015 Las Vegas (Voted one of top ten sessions at Interop, more here)
    Smart Shopping for Your Storage Strategy

    View other recent and upcoming events here


    From StorageIO Labs

    Research, Reviews and Reports

    VMware vCloud Air Test Drive
    VMware vCloud Air with local and distributed NAS (NFS, CIFS, DFS) file data. Read more here.

    VMware vCloud Air

    VMware vCloud Air provides a platform similar to those just mentioned among others for your applications and their underlying resource needs (compute, memory, storage, networking) to be fulfilled. In addition, it should not be a surprise that VMware vCloud Air shares many common themes, philosophies and user experiences with the traditional on-premises based VMware solutions you might be familiar with.

    View other StorageIO lab review reports here

    Resources and Links

    Check out these useful links and pages:
    storageio.com/links
    objectstoragecenter.com
    storageioblog.com/data-protection-diaries-main/

    storageperformance.us
    thessdplace.com
    storageio.com/raid
    storageio.com/ssd

    Enjoy this edition of the Server StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts along with in the news commentary appearing soon.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio


    Top vblog voting V2.015 (It's IT award season, cast your votes)



    It’s that time of the year again for award season:

    • The motion picture Academy Awards (e.g. the Oscars)
    • The Grammys and other entertainment awards
    • As well as Eric Siebert's (aka @ericsiebert) vsphere-land.com top vBlog

    Vsphere-land.com top vblog

    For several years now Eric has run an annual top VMware, virtualization, storage and related blogs vote, this year taking place until March 16th 2015 (click on the image below). You will find a nice mix of new school, old school and a few current or future school theme blogs represented, with some being more VMware specific. However there are also many blogs at the vLaunchPad site that have cloud, virtual, server, storage, networking, software defined, development and other related themes.

    top vblog voting
    Click on the above image to cast your vote for favorite:

    • Ten blogs (e.g. select up to ten and then rank 1 through 10)
    • Storage blog
    • Scripting blog
    • VDI blog
    • New Blogger
    • Independent Blogger (e.g. non-vendor)
    • News/Information Web site
    • Podcast

    Call to action, take a moment to cast your vote

    My StorageIOblog.com has been on the vLaunchPad site for several years now as well as having syndicated content that also appears via some of the other venues listed there.

    Six time VMware vExpert

    In addition to my StorageIOblog and podcast, you will also find many of my fellow VMware vExperts among others at the vLaunchpad site so check them out as well.

    What this means

    This is a people's choice process (yes, it is a popularity process of sorts as well), however it is also a way of rewarding or thanking those who take time to create and share content with you and others. If you take time to read various blogs, listen to podcasts as well as consume other content, please take a few moments and cast your vote here (thank you in advance), which I hope includes StorageIOblog.com as part of the top ten, as well as in the Storage, Podcast and Independent blogger categories.

    Ok, nuff said, for now…

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio


    How to test your HDD SSD or all flash array (AFA) storage fundamentals

    How to test your HDD SSD AFA Hybrid or cloud storage

    server storage data infrastructure i/o hdd ssd all flash array afa fundamentals

    Updated 2/14/2018

    Over at BizTech Magazine I have a new article 4 Ways to Performance Test Your New HDD or SSD that provides a quick guide to verifying or learning what speed characteristics your new storage device is capable of.

    An out-take from the article used by BizTech as a "tease" is:

    These four steps will help you evaluate new storage drives. And … psst … we included the metrics that matter.

    Building off the basics, server storage I/O benchmark fundamentals

    The four basic steps in the article are:

    • Plan what and how you are going to test (what’s applicable for you)
    • Decide on a benchmarking tool (learn about various tools here)
    • Test the test (find bugs, errors before a long running test)
    • Focus on metrics that matter (what’s important for your environment)
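To make the "test the test" step concrete, here is a rough Python sketch (my illustration, not from the BizTech article) that times a batch of small synchronous writes and reports the metrics that matter (IOPS, latency, bandwidth). It is a toy sanity check for a baseline, not a substitute for purpose-built benchmark tools:

```python
import os
import statistics
import time

def quick_write_test(path, block_size=4096, count=1000):
    """Time `count` synchronous writes of `block_size` bytes and report
    basic metrics. A toy 'test the test' sanity check, not a substitute
    for purpose-built benchmarking tools."""
    buf = os.urandom(block_size)
    latencies = []
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        for _ in range(count):
            start = time.perf_counter()
            os.write(fd, buf)
            os.fsync(fd)  # push the write to the device, not just the page cache
            latencies.append(time.perf_counter() - start)
    finally:
        os.close(fd)
        os.unlink(path)  # clean up the scratch file
    total = sum(latencies)
    return {
        "iops": count / total,
        "avg_latency_ms": 1000 * statistics.mean(latencies),
        "bandwidth_mb_s": count * block_size / total / 1e6,
    }

print(quick_write_test("bench.tmp", count=200))
```

Running a short pass like this first surfaces setup bugs (wrong path, permissions, file system cache effects) before you commit to a long test run.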

    Server Storage I/O performance

    Where To Learn More

    View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    To some, the above (read the full article here) may seem like common sense tips and things everybody should know, otoh there are many people who are new to server, storage, I/O and networking hardware and software, cloud and virtual environments along with various applications, not to mention different tools.

    Thus the above is a refresher for some (e.g. Dejavu) while for others it might be new and revolutionary or simply helpful. If you are interested in HDDs and SSDs as well as other server storage I/O performance along with benchmarking tools, techniques and trends, check out the collection of links here (Server and Storage I/O Benchmarking and Performance Resources).

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    VMware announces vSphere V6 and associated virtualization technologies

    VMware announces vSphere V6 and associated virtualization technologies

    server storage I/O trends

    VMware has announced version 6 (V6) of its software defined data center (SDDC) server virtualization hypervisor called vSphere aka ESXi. The new version of its software defined server hypervisor comes along with companion software defined management and convergence tools.

    VMware

    VMware vSphere Refresh

    As a refresh for those whose world does not revolve around VMware, vSphere and software defined data centers (believe it or not there are some who exist ;), ESXi is the hypervisor that virtualizes underlying physical machines (PM’s) known as hosts.

    software defined data center convergence
    The path to software defined data center convergence

    Guest operating systems (or other hypervisors using nesting) run as virtual machines (VM’s) on top of the vSphere hypervisor host (e.g. ESXi software). Various VMware management tools (or third-party) are used for managing the virtualized data center from initial setup and configuration, conversion from physical to virtual (P2V) or virtual to virtual (V2V), along with data protection, performance and capacity planning across servers, storage and networks.

    virtual machines

    VMware vSphere is flexible and can adapt to different sized environments, from small office home office (SOHO) or small SMB, to large SMB, SME, enterprise or cloud service provider. There is a free version of ESXi along with paid versions that include support and added management tool features. Besides the ESXi vSphere hypervisor, other commonly deployed modules include vCenter administration along with the Infrastructure Controller services platform among others. In addition, there are optional solution bundles to add support for virtual networking, cloud (public and private), data protection (backup/restore, replication, HA, BC, DR), big data among other capabilities.

    What is new with vSphere V6

    VMware has streamlined the installation, configuration and deployment of vSphere along with associated tools, which for smaller environments makes things simply easier. For larger environments, having to do less means being able to do more in the same amount of time, which results in cost savings. In addition to being easier to use, deploy and configure, VMware has extended the scaling capabilities of vSphere in terms of scaling-out (larger clusters), scaling-up (more and larger servers), as well as scaling-down (smaller environments and ease of use).

    cloud virtual software defined servers

    • Compute: Expanded support for new hardware, guest operating systems and general scalability in terms of physical and virtual resources. For example, increasing the number of virtual CPUs (vCPUs) and number of cluster nodes, among other speeds and feeds enhancements.

    server storage I/O vsan

    • Storage: This is an area where several enhancements were made including updates for Storage I/O controls (Storage QoS and performance optimizations) with per VM reservations, NFS v4.1 with Kerberos client, Virtual SAN (VSAN) improvements (new back-end underlying file system) as well as new Virtual Volumes (vVOLs) for Storage Policy Based Management.
    • Availability: Improvements for vMotion, including the ability to live migrate virtual machines between physical servers (VMware hosts) along with long distance fault-tolerance. Other improvements include faster replication, vMotion across vCenter servers, and long distance vMotion (up to 100ms round trip time latency).
    • Network: Network I/O Control (NIOC) provides per VM and datastore (VM and data repository) bandwidth reservations for quality of service (QoS) performance optimization.
    • Management: Improvements for multi-site virtual data centers, a content library (storage and versioning of files and objects including ISOs and OVFs (Open Virtualization Format files) that can be on a VMFS (VMware File System) datastore or NFS volume), policy-based management and web-client performance enhancements.

    What is vVOL?

    A quick synopsis of the VMware vVOL’s overview:

    • Higher level of abstraction of storage vs. traditional SCSI LUN’s or NAS NFS mount points
    • Tighter level of integration and awareness between VMware hypervisors and storage systems
    • Simplified management for storage and virtualization administrators
    • Removing complexity to support increased scaling
    • Enable automation and service managed storage aka software defined storage management

    server storage I/O volumes
    How data storage is accessed and managed via VMware today (read more here)

    vVOL’s are not LUN’s like regular block (e.g. DAS or SAN) storage that uses SAS, iSCSI, FC or FCoE, nor are they NAS volumes like NFS mount points. Likewise, vVOL’s are not accessed using any of the various object storage access methods mentioned above (e.g. AWS S3, Rest, CDMI, etc); instead they are an application specific implementation. For some of you this approach of an application specific or unique storage access method may be new, perhaps revolutionary; otoh, some of you might be having a DejaVu moment right about now.

    vVOL is not a LUN in the context of what you may know and like (or hate, even if you have never worked with them), likewise it is not a NAS volume like you know (or have heard of), nor is it an object in the context of what you might have seen or heard, such as S3 among others.

    Keep in mind that what makes up a VMware virtual machine are the VMX, VMDK and some other files (shown in the figure below), and if enough information is known about where those blocks of data are or can be found, they can be worked upon. Also keep in mind that, at least near-term, block is the lowest common denominator upon which all file systems and object repositories get built.

    server storage I/O vVOL basics
    How VMware data storage is accessed and managed with vVOLs (read more here)

    Here is the thing: vVOL’s will be accessible via a block interface such as iSCSI, FC or FCoE, or for that matter over Ethernet based IP using NFS. Think of these storage interfaces and access mechanisms as the general transport for how vSphere ESXi will communicate with the storage system (e.g. the data path) under vCenter management.

    What is happening inside the storage system and presented back to ESXi will be different than normal SCSI LUN contents and will only be understood by the VMware hypervisor. ESXi will still tell the storage system what it wants to do, including moving blocks of data. The storage system however will have more insight and awareness into the context of what those blocks of data mean. This is how storage systems will be able to more closely integrate snapshots, replication, cloning and other functions, by having awareness of which data to move, as opposed to moving or working with an entire LUN where a VMDK may live.

    Keep in mind that the storage system will still function as it normally would, just think of vVOL as another or new personality and access mechanism used for VMware to communicate and manage storage. Watch for vVOL storage provider support from the who’s who of existing and startup storage system providers including Cisco, Dell, EMC, Fujitsu, HDS, HP, IBM, NetApp, Nimble and many others. Read more about Storage I/O fundamentals here and vVOLs here and here.

    What this announcement means

    Depending on your experiences, you might use revolutionary to describe some of the VMware vSphere V6 features and functionalities. Otoh, if you have some Dejavu moments looking pragmatically at what VMware is delivering with V6 of vSphere executing on their vision, evolutionary might be more applicable. I will leave it up to you to decide if you are having a Dejavu moment and what that might pertain to, or if this is all new and revolutionary, or something more along the lines of technolutionary.

    VMware continues to execute delivering on the Virtual Data Center aka Software Defined Data Center paradigm by increasing functionality, as well as enhancing existing capabilities with performance along with resiliency improvements. These abilities enable the aggregation of compute, storage, networking, management and policies for enabling a global virtual data center while supporting existing along with new emerging applications.

    Where to learn more

    If you were not part of the beta to gain early hands-on experience with VMware vSphere V6 and associated technologies, download a copy to check it out as part of making your upgrade or migration plans.

    Check out the various VMware resources including communities links here
    VMware vSphere Hypervisor getting started and general vSphere information (including download)
    VMware vSphere data sheet, compatibility guide along with speeds and feeds (size and other limits)
    VMware vExpert
    VMware Blogs and VMware vExpert page

    Various fellow VMware vExpert blogs including vsphere-land, scott lowe, virtuallyghetto and yellow-bricks among many others can be found at the vpad here.

    StorageIO Out and About Update – VMworld 2014 (with Video)
    VMware vVOL’s and storage I/O fundamentals (Storage I/O overview and vVOL, details Part I and Part II)
    How many IOPs can a HDD or SSD do in a VMware environment (Part I and Part II)
    VMware VSAN overview and primer, DIY converged software defined storage on a budget

    Wrap up and summary

    Overall VMware vSphere V6 has a great set of features that support both ease of management for small environments as well as the scaling needs of larger organizations.

    Ok, nuff said, for now…

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio


    I/O, I/O how well do you know good bad ugly server storage I/O iops?

    How well do you know good bad ugly I/O iops?

    server storage i/o iops activity data infrastructure trends

    Updated 2/10/2018

    There are many different types of server storage I/O iops associated with various environments, applications and workloads. Some I/O activity is measured in iops, other activity in transactions per second (TPS), files or messages per unit of time (hour, minute, second), or gets, puts and other operations. The best IO is one you do not have to do.

    What about all the cloud, virtual, software defined and legacy based applications that still need to do I/O?

    If no IO operation is the best IO, then the second best IO is the one that can be done as close to the application and processor as possible with the best locality of reference.

    Also keep in mind that aggregation (e.g. consolidation) can cause aggravation (server storage I/O performance bottlenecks).

    aggregation causes aggravation
    Example of aggregation (consolidation) causing aggravation (server storage i/o blender bottlenecks)

    And the third best?

    It’s the one that can be done in less time, or at the least cost or effect to the requesting application, which means moving further down the memory and storage stack.

    solving server storage i/o blender and other bottlenecks
    Leveraging flash SSD and cache technologies to find and fix server storage I/O bottlenecks

    On the other hand, any IOP, regardless of whether it is for block, file or object storage, that involves some context is better than one without, particularly involving metrics that matter (here, here and here [webinar] )

    Server Storage I/O optimization and effectiveness

    The problem with IO’s is that they are basic operations to get data into and out of a computer or processor, so there’s no way to avoid all of them, unless you have a very large budget. Even if you have a large budget that can afford an all flash SSD solution, you may still meet bottlenecks or other barriers.

    IO’s require CPU or processor time and memory to set up and then process the results, as well as IO and networking resources to move data to its destination or retrieve it from where it is stored. While IO’s cannot be eliminated, their impact can be greatly reduced or optimized by, among other techniques, doing fewer of them via caching and by grouping reads or writes (pre-fetch, write-behind).

    server storage I/O STI and SUT

    Think of it this way: Instead of going on multiple errands, sometimes you can group multiple destinations together making for a shorter, more efficient trip. However, that optimization may also mean your drive will take longer. So, sometimes it makes sense to go on a couple of quick, short, low-latency trips instead of one larger one that takes half a day even as it accomplishes many tasks. Of course, how far you have to go on those trips (i.e., their locality) makes a difference about how many you can do in a given amount of time.
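The errand analogy can be made concrete with a rough Python sketch (my illustration, with made-up record sizes and counts) that compares many per-record synchronous writes against one grouped write of the same data:

```python
import os
import time

def timed_writes(path, records, grouped):
    """Write records either one fsync'd write at a time (many short
    errands) or as a single combined write (one grouped trip)."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        start = time.perf_counter()
        if grouped:
            os.write(fd, b"".join(records))  # one combined trip
            os.fsync(fd)
        else:
            for rec in records:              # one trip per record
                os.write(fd, rec)
                os.fsync(fd)
        return time.perf_counter() - start
    finally:
        os.close(fd)

records = [os.urandom(512) for _ in range(200)]
many = timed_writes("errands.tmp", records, grouped=False)
one = timed_writes("errands.tmp", records, grouped=True)
os.unlink("errands.tmp")
print(f"200 separate trips: {many:.4f}s, one grouped trip: {one:.4f}s")
```

The same total data moves in both cases; the grouped version simply pays the per-IO setup and sync cost once instead of 200 times, which is the write-behind idea in miniature.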

    Locality of reference (or proximity)

    What is locality of reference?

    This refers to how close (i.e., its place) data exists to where it is needed (being referenced) for use. For example, the best locality of reference in a computer would be registers in the processor core, ready to be acted on immediately. This would be followed by levels 1, 2, and 3 (L1, L2, and L3) onboard caches, followed by main memory (DRAM). After that comes solid-state memory, typically NAND flash, either on PCIe cards or accessible on a direct attached storage (DAS), SAN, or NAS device.

    server storage I/O locality of reference

    Even though a PCIe NAND flash card is close to the processor, there still remains the overhead of traversing the PCIe bus and associated drivers. To help offset that impact, PCIe cards use DRAM as cache or buffers for data along with meta or control information to further optimize and improve locality of reference. In other words, this information is used to help with cache hits, cache use, and cache effectiveness vs. simply boosting cache use.
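The payoff of locality and cache hits can be sketched with the classic effective access time formula; the latencies below are illustrative assumptions for a DRAM buffer vs. the NAND media behind it, not vendor measurements:

```python
def effective_access_time(hit_ratio, cache_ns, media_ns):
    """Average access time given the fraction of requests satisfied
    from the closer (cache) tier vs. the slower backing media."""
    return hit_ratio * cache_ns + (1.0 - hit_ratio) * media_ns

# Assumed latencies: DRAM buffer on a PCIe flash card vs. going
# all the way to the NAND flash itself (illustrative numbers only).
for hits in (0.0, 0.5, 0.9, 0.99):
    t = effective_access_time(hits, cache_ns=100, media_ns=50_000)
    print(f"hit ratio {hits:4.0%}: ~{t:,.0f} ns per access")
```

Note how the average barely improves until the hit ratio gets high, which is why cache effectiveness (hits) matters more than raw cache size.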

    SSD to the rescue?

    What can you do to cut the impact of IO’s?

    There are many steps one can take, starting with establishing baseline performance and availability metrics.

    The metrics that matter include IOP’s, latency, bandwidth, and availability. Then, leverage metrics to gain insight into your application’s performance.
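Those metrics are related to each other: bandwidth is IOPS times I/O size, and Little's Law ties average latency to throughput and queue depth. A small sketch with an assumed workload (the 10,000 IOPS, 8 KB and queue depth 32 figures are placeholders, not measurements):

```python
def iops_to_bandwidth_mb(iops, io_size_bytes):
    """Bandwidth in MB/s implied by an IOPS rate at a given I/O size."""
    return iops * io_size_bytes / 1_000_000

def average_latency_s(iops, outstanding_ios):
    """Little's Law: average latency = queue depth / throughput."""
    return outstanding_ios / iops

# Assumed workload: 10,000 IOPS of 8 KB I/Os, 32 requests outstanding
print(iops_to_bandwidth_mb(10_000, 8_192))  # 81.92 MB/s
print(average_latency_s(10_000, 32))        # 0.0032 s, i.e. 3.2 ms
```

Cross-checking a vendor's IOPS claim against its bandwidth and latency claims with arithmetic like this is a quick way to spot numbers that cannot all be true at once.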

    Understand that IO’s are a fact of applications doing work (storing, retrieving, managing data) no matter whether systems are virtual, physical, or running in the cloud. But it’s important to understand just what a bad IO is, along with its impact on performance. Try to identify those that are bad, and then find and fix the problem, either with software, application, or database changes. Perhaps you need to throw more software caching tools, hypervisors, or hardware at the problem. Hardware may include faster processors with more DRAM and faster internal busses.

    Leveraging local PCIe flash SSD cards for caching or as targets is another option.

    You may want to use storage systems or appliances that rely on intelligent caching and storage optimization capabilities to help with performance, availability, and capacity.

    Where to gain insight into your server storage I/O environment

    There are many tools that can be used to gain insight into your server storage I/O environment across cloud, virtual, software defined and legacy environments as well as from different layers (e.g. applications, database, file systems, operating systems, hypervisors, server, storage, I/O networking). Many applications along with databases have either built-in or optional tools from their provider, third parties, or via other sources that can give information about work activity being done. Likewise there are tools to dig down deeper into the various data information infrastructure layers to see what is happening, as shown in the following figures.

    application storage I/O performance
    Gaining application and operating system level performance insight via different tools

    windows and linux storage I/O performance
    Insight and awareness via operating system tools on Windows and Linux

    In the above example, Spotlight on Windows (SoW), which you can download for free from Dell here, along with Ubuntu utilities are shown. You could also use other tools to look at server storage I/O performance including Windows Perfmon among others.

    vmware server storage I/O
    Hypervisor performance using VMware ESXi / vsphere built-in tools

    vmware server storage I/O performance
    Using Visual ESXtop to dig deeper into virtual server storage I/O performance

    vmware server storage i/o cache
    Gaining insight into virtual server storage I/O cache performance

    Wrap up and summary

    There are many approaches to address (e.g. find and fix) vs. simply move or mask data center and server storage I/O bottlenecks. Having insight and awareness into how your environment along with your applications behave is important for knowing where to focus resources. Also keep in mind that a bit of flash SSD or DRAM cache in the applicable place can go a long way, while a lot of cache will also cost you cash. Even if you can’t eliminate I/Os, look for ways to decrease their impact on your applications and systems.

    Where To Learn More

    View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    Keep in mind: SSD including flash and DRAM among others are in your future, the question is where, when, with what, how much and whose technology or packaging.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.


    Green and Virtual IT Data Center Primer

    Green and Virtual Data Center Primer

    Moving beyond Green Hype and Green washing

    Green IT is about enabling efficient, effective and productive information services delivery. There is a growing green gap between green hype messaging or green washing and IT pain point issues including limits on the availability or rising costs of power, cooling, floor-space as well as e-waste and environmental health and safety (PCFE). Closing the gap will involve bringing green messaging and rhetoric closer to IT organizations’ pain points and to where budget dollars exist that can address PCFE and other green related issues as a by-product. The green gap will also narrow as awareness of broader green related topics coincides with IT data center pain points; in other words, alignment of messaging with IT issues that have or will have budget dollars allocated towards them to sustain business and economic growth via IT resource usage efficiency. Read more here.

    There are many aspects to "Green" Information Technology including servers, storage, networks and associated management tools and techniques. The reasons for and focus of "Green IT", including "Green Data Storage", "Green Computing" and related focus areas, vary across diverse needs, issues and requirements including among others:

    • Power, Cooling, Floor-space, Environmental (PCFE) related issues or constraints
    • Reduction of carbon dioxide (CO2) emissions and other green house gases (GHGs)
    • Business growth and economic sustainability in an environmentally friendly manner
    • Proper disposal or recycling of environmentally harmful retired technology components
    • Reduction or better efficiency of electrical power consumption used for IT equipment
    • Cost avoidance or savings from lower energy fees and cooling costs
    • Support data center and application consolidation to cut cost and management
    • Enable growth and enhancements to application service level objectives
    • Maximize the usage of available power and cooling resources available in your region
    • Compliance with local or federal government mandates and regulations
    • Economic sustainability and the ability to support business growth and service improvements
    • General environmental awareness and stewardship to save and protect the earth

    While much of the IT industry focuses on CO2 emissions footprints and data management software, electrical power consumption along with cooling and ventilation of IT data centers is an area of focus associated with "Green IT" as well as a means to discuss more effective use of electrical energy that can yield rapid results for many environments. Large tier-1 vendors including HP and IBM among others, who have an IT and data center wide focus, have services designed to do quick assessments as well as detailed analysis and re-organization of IT data center physical facilities to improve air flow and power consumption for more effective cooling of IT technologies including servers, storage, networks and other equipment.

    Similar to your own residence, basic steps to improve your cooling effectiveness can lead to using less energy to cut your budget impact, or enable you to do more with the cooling capacity you already have to support growth, acquisitions and/or consolidation initiatives. Vendors are also looking at means and alternatives for cooling IT equipment, ranging from computer assisted computational fluid dynamics (CFD) software analysis of data center cooling and ventilation to refrigerated cooling racks, some leveraging water or inert liquid cooling.

    Various metrics exist and others are evolving for measuring, estimating, reporting, analyzing and discussing IT data center infrastructure resource topics including servers, storage, networks, facilities and associated software management tools from a power, cooling and green environmental standpoint. The importance of metrics is to focus on the larger impact of a piece of IT equipment, including its cost and energy consumption, factoring in cooling and other hosting or site environmental costs. Naturally energy costs and CO2 (carbon offsets) will vary by geography and region along with the type of electrical power being used (Coal, Natural Gas, Nuclear, Wind, Thermo, Solar, etc) and other factors that should be kept in perspective as part of the big picture.
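As a back-of-envelope illustration of such metrics, here is a sketch that rolls a device's energy draw, a cooling and facility overhead factor (PUE, power usage effectiveness) and an assumed regional emissions factor into yearly cost and CO2 figures. The 1.8 PUE, 0.4 kg CO2 per kWh and 10 cents per kWh values are placeholders to replace with your own site and region numbers:

```python
HOURS_PER_YEAR = 24 * 365

def annual_kwh(watts, pue=1.8):
    """Yearly energy for a device drawing `watts` continuously; PUE
    scales device draw to include cooling and facility overhead.
    The 1.8 default is an illustrative site value only."""
    return watts * pue * HOURS_PER_YEAR / 1000

def annual_energy_cost(watts, cents_per_kwh, pue=1.8):
    """Yearly electricity cost in dollars for the same load."""
    return annual_kwh(watts, pue) * cents_per_kwh / 100

def annual_co2_kg(watts, kg_co2_per_kwh=0.4, pue=1.8):
    """Approximate yearly CO2; emissions per kWh vary widely by region
    and generation mix (coal, gas, nuclear, wind, solar, etc.)."""
    return annual_kwh(watts, pue) * kg_co2_per_kwh

# e.g. a storage array drawing 1,200 watts at 10 cents per kWh
print(f"${annual_energy_cost(1200, 10):,.2f} per year")
print(f"{annual_co2_kg(1200):,.0f} kg CO2 per year")
```

Even this simple model shows why cooling overhead belongs in the equation: at a PUE of 1.8, nearly half the billed energy never reaches the IT equipment itself.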

    Consequently your view of and needs or interests around "Green" IT may be from an electrical power conservation perspective, to make the most of your power consumption or to adapt to a given power footprint or ceiling. Your focus around "Green" data centers and green storage may be from a carbon savings standpoint, proper disposition of old and retired IT equipment, or a data center cooling standpoint. Another area of focus may be that you are looking to cut your data footprint to align with your power, cooling and green footprint while enhancing application and data service delivery to your customers.

    Where to learn more

    The following are useful links to related efficient, effective, productive, flexible, scalable and resilient IT data center along with server storage I/O networking hardware and software that supports cloud and virtual green data centers.

    Various IT industry vendor and service provider links
    Green and Virtual Data Center: Productive Economical Efficient Effective Flexible
    Green and Virtual Data Center links
    Are large storage arrays dead at the hands of SSD?
    Closing the Green Gap
    Energy efficient technology sales depend on the pitch

    What this all means

    The result of a green and virtual data center is a flexible, agile, resilient, scalable information factory that is also economical, productive and efficient as well as sustainable.

    Ok, nuff said (for now)

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio


    Green and Virtual Data Center: Productive Economical Efficient Effective Flexible

    Green and Virtual Data Center

    A Green and Virtual IT Data Center (e.g. an information factory) means an environment comprising:

    • Habitat for technology or physical infrastructure (e.g. physical data center, yours, co-lo, managed service or cloud)
    • Power, cooling, communication networks, HVAC, smoke and fire suppression, physical security
    • IT data information infrastructure (e.g. hardware, software, valueware, cloud, virtual, physical, servers, storage, network)
    • Data Center Infrastructure Management (DCIM) along with IT Service Management (ITSM) software defined management tools
    • Tools for monitoring, resource tracking and usage, reporting, diagnostics, provisioning and resource orchestration
    • Portals and service catalogs for automated, user initiated and assisted operation or access to IT resources
    • Processes, procedures, best-practices, work-flows and templates (including data protection with HA, BC, BR, DR, backup/restore, logical and physical security)
    • Metrics that matter for management insight and awareness
    • People and skill sets among other items

    Green and Virtual Data Center Resources

    Click here to learn about "The Green and Virtual Data Center" book (CRC Press) for enabling efficient, productive IT data centers. This book covers cloud, virtualization, servers, storage, networks, software, facilities and associated management topics, technologies and techniques including metrics that matter. This book by industry veteran IT advisor and author Greg Schulz is the definitive guide for enabling economic efficiency and productive next generation data center strategies.

    Intel recommended reading
    Publisher: CRC Press – Taylor & Francis Group
    By Greg P. Schulz of StorageIO www.storageio.com
     ISBN-10: 1439851739 and ISBN-13: 978-1439851739
     Hardcover * 370 pages * Over 100 illustrations figures and tables

    Read more here and order your copy here. Also check out Cloud and Virtual Data Storage Networking (CRC Press) a new book by Greg Schulz.

    Productive Efficient Effective Economical Flexible Agile and Sustainable

    Green hype and green washing may be on the endangered species list and going away; however, green IT for servers, storage, networks and facilities as well as related software and management techniques that address energy efficiency, including power and cooling along with e-waste and environmental health and safety related issues, are topics that won’t be going away anytime soon. There is a growing green gap between green hype messaging or green washing and IT pain point issues including limits on the availability or rising costs of power, cooling, floor-space as well as e-waste and environmental health and safety (PCFE). Closing the gap will involve bringing green messaging and rhetoric closer to IT organizations’ pain points and to where budget dollars exist that can address PCFE and other green related issues as a by-product.

    The green gap will also narrow as awareness of broader green related topics coincides with IT data center pain points; in other words, alignment of messaging with IT issues that have or will have budget dollars allocated towards them to sustain business and economic growth via IT resource usage efficiency. Read more here.

    Where to learn more

    The following are useful links to related efficient, effective, productive, flexible, scalable and resilient IT data center along with server storage I/O networking hardware and software that supports cloud and virtual green data centers.

    Various IT industry vendor and service provider links
    Green and Virtual Data Center Primer
    Green and Virtual Data Center links
    Are large storage arrays dead at the hands of SSD?
    Closing the Green Gap
    Energy efficient technology sales depend on the pitch
    EPA Energy Star for Data Center Storage Update
    EPA Energy Star for data center storage draft 3 specification
    Green IT Confusion Continues, Opportunities Missed! 
    Green IT deferral blamed on economic recession might be result of green gap
    How much SSD do you need vs. want?
    How to reduce your Data Footprint impact (Podcast) 
    Industry trend: People plus data are aging and living longer
    In the data center or information factory, not everything is the same
    More storage and IO metrics that matter
    Optimizing storage capacity and performance to reduce your data footprint 
    Performance metrics: Evaluating your data storage efficiency
    PUE, Are you Managing Power, Energy or Productivity?
    Saving Money with Green Data Storage Technology
    Saving Money with Green IT: Time To Invest In Information Factories 
    Shifting from energy avoidance to energy efficiency
    SNIA Green Storage Knowledge Center
    Speaking of speeding up business with SSD storage
    SSD and Green IT moving beyond green washing
    Storage Efficiency and Optimization: The Other Green
    Supporting IT growth demand during economic uncertain times
    The Green and Virtual Data Center Book (CRC Press, Intel Recommended Reading)
    The new Green IT: Efficient, Effective, Smart and Productive 
    The other Green Storage: Efficiency and Optimization 
    What is the best kind of IO? The one you do not have to do

    Watch for more links and resources to be added soon.

    What this all means

    The result of a green and virtual data center is a flexible, agile, resilient, scalable information factory that is also economical, productive, efficient and sustainable.

    Ok, nuff said (for now)

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Green and Virtual Data Center Links

    Updated 10/25/2017

    Green and Virtual IT Data Center Links

    Moving beyond Green Hype and Green washing

    Green hype and green washing may be on the endangered species list and going away; however, green IT for servers, storage, networks, facilities as well as related software and management techniques that address energy efficiency, including power and cooling along with e-waste and environmental health and safety issues, are topics that won't be going away anytime soon.

    There is a growing green gap between green hype messaging or green washing and IT pain point issues, including limits on the availability or rising costs of power, cooling and floor space as well as e-waste and environmental health and safety (PCFE).

    Closing the gap will involve bringing green messaging and rhetoric closer to where IT organizations' pain points are and where budget dollars exist that can address PCFE and other green issues as a by-product. The green gap will also be narrowed as awareness of broader green topics coincides with IT data center pain points; in other words, alignment of messaging with IT issues that have or will have budget dollars allocated toward them to sustain business and economic growth via IT resource usage efficiency. Read more here.

    Enabling Effective Productive Efficient Economical Flexible Scalable Resilient Information Infrastructures

    The following are useful links to related efficient, effective, productive, flexible, scalable and resilient IT data center along with server storage I/O networking hardware and software that supports cloud and virtual green data centers.

    Various IT industry vendors and other links

    Via StorageIOblog – Happy Earth Day 2016 Eliminating Digital and Data e-Waste

    Green and Virtual Data Center Primer
    Green and Virtual Data Center: Productive Economical Efficient Effective Flexible
    Are large storage arrays dead at the hands of SSD?
    Closing the Green Gap
    Energy efficient technology sales depend on the pitch
    EPA Energy Star for Data Center Storage Update
    EPA Energy Star for data center storage draft 3 specification
    Green IT Confusion Continues, Opportunities Missed! 
    Green IT deferral blamed on economic recession might be result of green gap
    How much SSD do you need vs. want?
    How to reduce your Data Footprint impact (Podcast) 
    Industry trend: People plus data are aging and living longer
    In the data center or information factory, not everything is the same
    More storage and IO metrics that matter
    Optimizing storage capacity and performance to reduce your data footprint 
    Performance metrics: Evaluating your data storage efficiency
    PUE, Are you Managing Power, Energy or Productivity?
    Saving Money with Green Data Storage Technology
    Saving Money with Green IT: Time To Invest In Information Factories 
    Shifting from energy avoidance to energy efficiency
    SNIA Green Storage Knowledge Center
    Speaking of speeding up business with SSD storage
    SSD and Green IT moving beyond green washing
    Storage Efficiency and Optimization: The Other Green
    Supporting IT growth demand during economic uncertain times
    The Green and Virtual Data Center Book (CRC Press, Intel Recommended Reading)
    The new Green IT: Efficient, Effective, Smart and Productive 
    The other Green Storage: Efficiency and Optimization 
    What is the best kind of IO? The one you do not have to do

    Intel recommended reading
    Click here to learn about "The Green and Virtual Data Center" book (CRC Press) for enabling efficient, productive IT data centers. This book covers cloud, virtualization, servers, storage, networks, software, facilities and associated management topics, technologies and techniques including metrics that matter. This book by industry veteran IT advisor and author Greg Schulz is the definitive guide for enabling economic efficiency and productive next generation data center strategies. Read more here and order your copy here. Also check out Cloud and Virtual Data Storage Networking (CRC Press), a new book by Greg Schulz.

    White papers, analyst reports and perspectives

    Business benefits of data footprint reduction (archiving, compression, de-dupe)
    Data center I/O and performance issues – Server I/O and storage capacity gap
    Analysis of EPA Report to Congress (Law 109-431)
    The Many Faces of MAID Storage Technology
    Achieving Energy Efficiency with FLASH based SSD
    MAID 2.0: Energy Savings without Performance Compromises

    Articles, Tips, Blogs, Webcasts and Podcasts

    AP – SNIA Green Emerald Program and measurements
    AP – Southern California heat wave strains electrical system
    Ars Technica – EPA: Power usage in data centers could double by 2011
    Ars Technica – Meet the climate savers: Major tech firms launch war on energy-inefficient PCs – Article
    Askageek.com – Buying an environmental friendly laptop – November 2008
    Baseline – Examining Energy Consumption in the Data Center
    Baseline – Burts Bees: What IT Means When You Go Green
    Bizcovering – Green architecture for the masses
    Broadstuff – Are Green 2.0 and Enterprise 2.0 Incompatible?
    Business Week – CEO Guide to Technology
    Business Week – Computers’ elusive eco factor
    Business Week – Clean Energy – Its Getting Affordable
    Byte & Switch – Keeping it Green This Summer – Don’t be "Green washed"
    Byte & Switch – IBM Sees Green in Energy Certificates
    Byte & Switch – Users Search for power solutions
    Byte & Switch – DoE issues Green Storage Warning
    CBR – The Green Light for Green IT
    CBR – Big boxes make greener data centers
    CFO – Power Scourge
    Channel Insider – A 12 Step Program to Dispose of IT Equipment
    China.org.cn – China publishes Energy paper
    CIO – Green Storage Means Money Saved on Power
    CIO – Data center designers share secrets for going green
    CIO – Best Place to Build a Data Center in North America
    CIO Insight – Clever Marketing or the Real Thing?
    Cleantechnica – Cooling Data Centers Could Prevent Massive Electrical Waste – June 2008
    Climatebiz – Carbon Calculators Yield Spectrum of Results: Study
    CNET News – Linux coders tackle power efficiency
    CNET News – Research: Old data centers can be nearly as ‘green’ as new ones
    CNET News – Congress, Greenpeace move on e-waste
    CNN Money – A Green Collar Recession
    CNN Money – IBM creates alliance with industry leaders supporting new data center standards
    Communication News – Utility bills key to greener IT
    Computerweekly – Business case for green storage
    Computerweekly – Optimising data centre operations
    Computerweekly – Green still good for IT, if it saves money
    Computerweekly – Meeting the Demands for storage
    Computerworld – Wells Fargo Free Data Center Cooling System
    Computerworld – Seven ways to get green and save money
    Computerworld – Build your data center here: The most energy-efficient locations
    Computerworld – EPA: U.S. needs more power plants to support data centers
    Computerworld – GreenIT: A marketing ploy or new technology?
    Computerworld – Gartner Criticizes Green Grid
    Computerworld – IT Skills no longer sufficient for data center execs.
    Computerworld – Meet MAID 2.0 and Intelligent Power Management
    Computerworld – Feds to offer energy ratings on servers and storage
    Computerworld – Greenpeace still hunting for truly green electronics
    Computerworld – How to benchmark data center energy costs
    ComputerworldUK – Datacenters at risk from poor governance
    ComputerworldUK – Top IT Leaders Back Green Survey
    ComputerworldMH – Lean and Green
    CTR – Strategies for enhancing energy efficiency
    CTR – Economies of Scale – Green Data Warehouse Appliances
    Datacenterknowledge – Microsoft to build Illinois datacenter
    Data Center Strategies – Storage The Next Hot Topic
    Earthtimes – Fujitsu installs hydrogen fuel cell power
    eChannelline – IBM Goes Green(er)
    Ecoearth.info – California Moves To Speed Solar, Wind Power Grid Connections
    Ecogeek – Solar power company figures they can power 90% of America
    Economist – Cool IT
    Electronic Design – How many watts in that Gigabyte
    eMazzanti – Desktop virtualization movement creeping into customer sites
    ens-Newswire – Western Governors Ask Obama for National Green Energy Plan
    Environmental Leader – Best Place to Build an Energy Efficient Data Center
    Environmental Leader – New Guide Helps Advertisers Avoid Greenwash Complaints
    Enterprise Storage Forum – Power Struggles Take Center Stage at SNW
    Enterprise Storage Forum – Pace Yourself for Storage Power & Cooling Needs
    Enterprise Storage Forum – Storage Power and Cooling Issues Heat Up – StorageIO Article
    Enterprise Storage Forum – Score Savings With A Storage Power Play
    Enterprise Storage Forum – I/O, I/O, Its off to Virtual Work I Go
    Enterprise Storage Forum – Not Just a Flash in the Pan – Various SSD options
    Enterprise Storage Forum – Closing the Green Gap – Article August 2008
    EPA Report to Congress and Public Law 109-431 – Reports & links
    eWeek – Saving Green by being Green
    eWeek – ‘No Cooling Necessary’ Data Centers Coming?
    eWeek – How the ‘Down’ Macroeconomy Will Impact the Data Storage Sector
    ExpressComputer – In defense of Green IT
    ExpressComputer – What data center crisis
    Forbes – How to Build a Quick Charging Battery
    GCN – Sun launches eco data center
    GreenerComputing – New Code of Conduct to Establish Best Practices in Green Data Centers
    GreenerComputing – Silicon valley’s green detente
    GreenerComputing – Majority of companies plan to green their data centers
    GreenerComputing – Citigroup to spend $232M on Green Data Center
    GreenerComputing – Chicago and Quincy, WA Top Green Data Center Locations
    GreenerComputing – Using airside economizers to chill data center cooling bills
    GreenerComputing – Making the most of asset disposal
    GreenerComputing – Greenpeace vendor rankings
    GreenerComputing – Four Steps to Improving Data Center Efficiency without Capital Expenditures
    GreenerComputing – Enabling a Green and Virtual Data Center
    Green-PC – Strategic Steps Down the Green Path
    Greeniewatch – BBC news chiefs attack plans for climate change campaign
    Greeniewatch – Warmest year predictions and data that has not yet been measured
    GovernmentExecutive – Public, Private Sectors Differ on "Green" Efforts
    HPC Wire – How hot is your code
    Industry Standard – Why green data centers mean partner opportunities
    InformationWeek – It could be 15 years before we know what is really green
    InformationWeek – Beyond Server Consolidation
    InformationWeek – Green IT Beyond Virtualization: The Case For Consolidation
    InfoWorld – Sun celebrates green datacenter innovations
    InfoWorld – Tech’s own datacenters are their green showrooms
    InfoWorld – 2007: The Year in Green
    InfoWorld – Green Grid Announces Tech Forum in Feb 2008
    InfoWorld – SPEC seeds future green-server benchmarks
    InfoWorld – Climate Savers green catalog proves un-ripe
    InfoWorld – Forester: Eco-minded activity up among IT pros
    InfoWorld – Green ventures in Silicon Valley, Mass reaped most VC cash in ’07
    InfoWorld – Congress misses chance to see green-energy growth
    InfoWorld – Unisys pushes green envelope with datacenter expansion
    InfoWorld – No easy green strategy for storage
    Internet News – Storage Technologies for a Slowing Economy
    Internet News – Economy will Force IT to Transform
    ITManagement – Green Computing, Green Revenue
    itnews – Data centre chiefs dismiss green hype
    itnews – Australian Green IT regulations could arrive this year
    IT Pro – SNIA Green storage metrics released
    ITtoolbox – MAID discussion
    Linux Power – Saving power with Linux on Intel platforms
    MSNBC – Microsoft to build data center in Ireland
    National Post – Green technology at the L.A. Auto Show
    Network World – Turning the datacenter green
    Network World – Color Interop Green
    Network World – Green not helpful word for setting environmental policies
    NewScientistEnvironment – Computer servers as bad for climate as SUVs
    Newser – Texas commission approves nation’s largest wind power project
    New Yorker – Big Foot: In measuring carbon emissions, it’s easy to confuse morality and science
    NY Times – What the Green Bubble Will Leave Behind
    PRNewswire – Al Gore and Cisco CEO John Chambers to debate climate change
    Processor – More than just monitoring
    Processor – The new data center: What’s hot in Data Center physical infrastructure:
    Processor – Liquid Cooling in the Data Center
    Processor – Curbing IT Power Usage
    Processor – Services To The Rescue – Services Available For Today’s Data Centers
    Processor – Green Initiatives: Hire A Consultant?
    Processor – Energy-Saving Initiatives
    Processor – The EPA's Low Carbon Campaign
    Processor – Data Center Power Planning
    SAN Jose Mercury – Making Data Centers Green
    SDA-Asia – Green IT still a priority despite Credit Crunch
    SearchCIO – EPA report gives data centers little guidance
    SearchCIO – Green IT Strategies Could Lead to hefty ROIs
    SearchCIO – Green IT In the Data Center: Plenty of Talk, not much Walk
    SearchCIO – Green IT Overpitched by Vendors, CIOs beware
    SearchDataCenter – Study ranks cheapest places to build a data center
    SearchDataCenter – Green technology still ranks low for data center planners
    SearchDataCenter – Green Data Center: Energy Efficient Computing in the 21st Century
    SearchDataCenter – Green Data Center Advice: Is LEED Feasible
    SearchDataCenter – Green Data Centers Tackle LEED Certification
    SearchDataCenter – PG&E invests in data center efficiency
    SearchDataCenter – A solar powered datacenter
    SearchSMBStorage – Improve your storage energy efficiency
    SearchSMBStorage – SMB capacity planning: Focusing on energy conservation
    SearchSMBStorage – Data footprint reduction for SMBs
    SearchSMBStorage – MAID & other energy-saving storage technologies for SMBs
    SearchStorage – How to increase your storage energy efficiency
    SearchStorage – Is storage now top energy hog in the data center
    SearchStorage – Storage eZine: Turning Storage Green
    SearchStorage – The Green Storage Gap
    SearchStorageChannel – Green Data Storage Projects
    Silicon.com – The greening of IT: Cooling costs
    SNIA – SNIA Green Storage Overview
    SNIA – Green Storage
    SNW – Beyond Green-wash
    SNW Spring 2008 Beyond Green-wash
    State.org – Why Texas Has Its Own Power Grid
    StorageDecisions – Different Shades of Green
    Storage Magazine – Storage still lacks energy metrics
    StorageIOblog – Posts pertaining to Green, power, cooling, floor-space, EHS (PCFE)
    Storage Search – Various postings, news and topics pertaining to Green IT
    Technology Times – Revealed: the environmental impact of Google searches
    TechTarget – Data center power efficiency
    TechTarget – Tip for determining power consumption
    Techworld – Inside a green data center
    Techworld – Box reduction – Low hanging green datacenter fruit
    Techworld – Datacenter used to heat swimming pool
    Theinquirer – Spansion and Virident flash server farms
    Theinquirer – Storage firms worry about energy efficiency How green is the valley
    TheRegister – Data Centre Efficiency, the good, the bad and the way to hot
    TheRegister – Server makers snub whalesong for serious windmill abuse
    TheRegister – Green data center threat level: Not Green
    The Standard – Growing cynicism around going Green
    ThoughtPut – Energy Central
    Thoughtput – Power, Cooling, Green Storage and related industry trends
    Wallstreet Journal – Utilities Amp Up Push To Slash Energy Use
    Wallstreet Journal – The IT in Green Investing
    Wallstreet Journal – Tech’s Energy Consumption on the Rise
    Washingtonpost – Texas approves major new wind power project
    WhatPC – Green IT: It doesn't have to cost the earth
    WHIRnews – SingTel building green data center
    Wind-watch.org – Loss of wind causes Texas power grid emergency
    WyomingNews – Overcoming Greens Stereotype
    Yahoo – Washington Senate Unveils Green Job Plan
    ZDnet – Will supercomputer speeds hit a plateau?
    Are data centers causing climate change

    News and Press Releases

    Business Wire – The Green and Virtual Data Center
    Enterprise Storage Forum – Intel and HGST (Hitachi) partner on FLASH SSD
    PCworld – Intel and HP describe Green Strategy
    DoE – To Invest Approximately $1.3 Billion to Commercialize CCS Technology
    Yahoo – Shell Opens Los Angeles’ First Combined Hydrogen and Gasoline Station
    DuPont – DuPont Projects Save Enough Energy to Power 25,000 Homes
    Gartner – Users Are Becoming Increasingly Confused About the Issues and Solutions Surrounding Green IT

    Websites and Tools

    Various power, cooling, emissions and device configuration tools and calculators
    Solar Action Alliance web site
    SNIA Emerald program
    Carbon Disclosure Project
    The Chicago Climate Exchange
    Climate Savers
    Data Center Decisions
    Electronic Industries Alliance (EIA)
    EMC – Digital Life Calculator
    Energy Star
    Energy Star Data Center Initiatives
    Greenpeace – Technology ranking website also here
    GlobalActionPlan
    KyotoPlanet
    LBNL High Tech Data centers
    Millicomputing
    RoHS & WEE News
    Storage Performance Council (SPC)
    SNIA Green Technical Working Group
    SPEC
    Transaction Processing Council (TPC)
    The Green Grid
    The Raised Floor
    Terra Pass Carbon Offset Credits – Website with CO2 calculators
    Energy Information Administration – EIA (US and International Electrical Information)
    U.S. Department of Energy and related information
    U.S. DOE Energy Efficient Industrial Programs
    U.S. EPA server and storage energy topics
    Zerofootprint – Various "Green" and environmental related links and calculators

    Vendor Centric and Marketing Website Links and tools

    Vendors and organizations have different types of calculators, some focused on power, cooling, floor space, carbon offsets or emissions, ROI, TCO and other IT data center infrastructure resource management. The following is an evolving list and by no means definitive, even for a particular vendor, as different manufacturers may have multiple calculators for different product lines or areas of focus.

    Brocade – Green website
    Cisco – Green and Environmental websites here, here and here
    Dell – Green website
    EMC – EMC Energy, Power and Cooling Related Website
    HDS – How to be green – HDS Positioning White Paper
    HP – HP Green Website
    IBM – Green Data Center – IBM Positioning White Paper
    IBM – Green Data Center for Education – IBM Positioning White Paper
    Intel – What is an Efficient Data Center and how do I measure it?
    LSI – Green site and white paper
    NetApp – Press Release and related information
    Sun – Various articles and links
    Symantec – Global 2000 Struggle to Adopt "Green" Data Centers – Announcement of Survey results
    ACTON
    Adinfa
    APC
    Australian Conservation Foundation
    Avocent
    BBC
    Brocade
    Carbon Credit Calculator UK
    Carbon Footprint Site
    Carbon Planet
    Carbonify
    CarbonZero
    Cassatt
    CO2 Stats Site
    Copan
    Dell
    DirectGov UK Acton
    Diesel Service & Supply Power Calculator & Converter
    Eaton Powerware
    Ecobusinesslinks
    Ecoscale
    EMC Power Calculator
    EMC Web Power Calculator
    EMC Digital Life Calculator
    EPA Power Profiler
    EPA Related Tools
    EPEAT
    Google UK Green Footprint
    Green Grid Calculator
    HP and more here
    HVAC Calculator
    IBM
    Logicalis
    Kohler Power (Business and Residential)
    Micron
    MSN Carbon Footprint Calculator
    National Wildlife Foundation
    NEF UK
    NetApp
    Rackwise
    Platespin
    Safecom
    Sterling Planet
    Sun and more here and here and here
    Tandberg
    TechRepublic
    TerraPass Carbon Offset Credits
    Thomas Kreen AG
    Toronto Hydro Calculator
    80 Plus Calculator
    VMware
    42u Green Grid PUE DCiE calculator
    42u energy calculator

    Green and Virtual Tools

    What’s your power, cooling, floor space, energy, environmental or green story?

    What’s your power, cooling, floor space, energy, environmental or green story? Do you have questions or want to learn more about energy issues pertaining to IT data center and data infrastructure topics? Do you have a solution, technology or success story that you would like to share with us pertaining to data storage and server I/O energy optimization strategies? Do you need assistance in developing, validating or reviewing your strategy or story? Contact us at info@storageio.com or 651-275-1563 to learn more about green data storage and server I/O, or to schedule a briefing to tell us about your energy efficiency and effectiveness story pertaining to IT data centers and data infrastructures.

    Disclaimer and note: URLs submitted for inclusion on this site will be reviewed for consideration and to be in generally accepted good taste in regard to the theme of this site. Best effort has been made to validate and verify the URLs that appear on this page and website; however, they are subject to change. The author and/or maintainer(s) of this page and website make no endorsement of and assume no responsibility for the URLs and their content listed on this page.

    Green and Virtual Metrics

    Chapter 5 "Measurement, Metrics, and Management of IT Resources" in the book "The Green and Virtual Data Center" (CRC Press) takes a look at the importance of being able to measure and monitor to enable effective management and utilization of IT resources across servers, storage, I/O networks, software, hardware and facilities.

    There are many different points of interest for collecting metrics in an IT data center across servers, storage, networking and facilities, along with various perspectives on them. Data center personnel have varied interests, from a facilities view to a resource (server, storage, networking) usage and effectiveness view, for normal operation as well as for planning or for comparison when evaluating new technology. Vendors have different uses for metrics during R&D, Q/A testing and marketing or sales campaigns as well as ongoing service and support. Industry trade groups including 80 Plus, SNIA and The Green Grid, along with government programs including EPA Energy Star, are working to define and establish metrics pertinent to green and virtual data centers.

    Each entry below lists the acronym or term, its description or formula, and a comment:

    DCiE: Data Center infrastructure Efficiency = (IT equipment power / Total facility power) * 100. Shows the ratio of how well a data center is consuming power.
    DCPE: Data Center Performance Efficiency = Effective IT workload / Total facility power. Shows how effectively a data center consumes power to produce a given level of service or work, such as energy per transaction or per business function performed.
    PUE: Power Usage Effectiveness = Total facility power / IT equipment power. Inverse of DCiE.
    Kilowatts (kW): Watts / 1,000. One thousand watts.
    Annual kWh: Average kW x 24 x 365. kWh used in one year.
    Megawatts (MW): kW / 1,000. One thousand kW.
    BTU/hour: Watts x 3.413. Heat generated in an hour from using energy, in British Thermal Units; 12,000 BTU/hour equates to 1 ton of cooling.
    kWh: 1,000 watt-hours. The energy used by a 1,000 watt load in one hour.
    Watts: Amps x Volts (e.g. 12 amps x 12 volts = 144 watts). Unit of electrical power.
    Watts: BTU/hour x 0.293. Converts BTU/hr to watts.
    Volts: Watts / Amps (e.g. 144 watts / 12 amps = 12 volts). The amount of force on electrons.
    Amps: Watts / Volts (e.g. 144 watts / 12 volts = 12 amps). The flow rate of electricity.
    Volt-Amperes (VA): Volts x Amps. Power is sometimes expressed in volt-amperes.
    kVA: Volts x Amps / 1,000. Number of kilovolt-amperes.
    kW: kVA x power factor. Power factor is the efficiency of a piece of equipment's use of power.
    kVA: kW / power factor. Kilovolt-amperes.
    U: 1U = 1.75". EIA metric describing the height of equipment in racks.

    Activity / Watt: Amount of work accomplished per unit of energy consumed, such as IOPS, transactions or bandwidth per watt. An indicator of how much work is done and how efficiently energy is used to accomplish useful work; applies to active workloads and to actively used, frequently accessed storage and data. Examples are IOPS per watt, bandwidth per watt, transactions per watt, or users or streams per watt. Activity per watt should be used in conjunction with other metrics, such as capacity supported per watt and total watts consumed, for a representative picture.
    IOPS / Watt: Number of I/O operations (or transactions) / energy (watts). Indicator of how effectively energy is used to perform a given amount of work; the work could be I/Os, transactions, throughput or another indicator of application activity, for example SPC-1 per watt, SPEC per watt, TPC per watt, or transactions per watt.
    Bandwidth / Watt: GBps, TBps or PBps / watt. Amount of data transferred or moved per second per unit of energy consumed. Often confused with capacity per watt, given that both bandwidth and capacity reference GBytes, TBytes or PBytes.
    Capacity / Watt: GB, TB or PB (storage capacity space) / watt. Indicator of how much capacity (space) is supported in a given configuration or footprint per watt of energy. For inactive, off-line or archive data, capacity per watt can be an effective gauge; for active workloads, activity per watt also needs to be considered to get a representative indicator of how energy is being used.
    MHz / Watt: Processor performance / energy (watts). Indicator of how effectively energy is being used by a CPU or processor.
    Carbon Credit: Carbon offset credit. Offset credits that can be bought and sold to offset your CO2 emissions.
    CO2 Emission: Average 1.341 lbs per kWh of electricity generated. The average amount of carbon dioxide (CO2) emitted in generating a kWh of electricity.
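    The unit conversions in the table above can be expressed as a few small helper functions. Below is a minimal Python sketch using only the constants from the table; the function names are illustrative, not from any particular library:

```python
# Electrical unit conversion helpers, using the constants from the table above.

def watts(amps, volts):
    """Power in watts from current (amps) and voltage (volts)."""
    return amps * volts

def btu_per_hour(w):
    """Heat generated per hour (BTU/hr) by a load of w watts."""
    return w * 3.413

def watts_from_btu(btu_hr):
    """Convert BTU/hr back to watts."""
    return btu_hr * 0.293

def kva(volts, amps):
    """Apparent power in kilovolt-amperes (kVA)."""
    return volts * amps / 1000

def kw_from_kva(kva_value, power_factor):
    """Real power (kW) from apparent power (kVA) and power factor."""
    return kva_value * power_factor

def annual_kwh(avg_kw):
    """Energy used in one year given an average draw of avg_kw kilowatts."""
    return avg_kw * 24 * 365

def co2_lbs(kwh):
    """Approximate CO2 emissions (lbs) from generating kwh of electricity."""
    return kwh * 1.341

# Example: the 144 watt device from the table (12 amps at 12 volts)
p = watts(12, 12)            # 144 watts
heat = btu_per_hour(p)       # about 491 BTU/hr of heat to remove
year = annual_kwh(p / 1000)  # about 1,261 kWh if run continuously for a year
```

    At the table's average emission rate, running that 144 watt device continuously for a year would also equate to roughly co2_lbs(year), about 1,690 lbs of CO2.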

    Various power, cooling, floor space and green storage or IT  related metrics

    Metrics include Data Center infrastructure Efficiency (DCiE) via The Green Grid, an indicator ratio of an IT data center's energy efficiency defined as IT equipment power (servers, disk and tape storage, networking switches, routers, printers, etc.) / Total facility power x 100 (for a percentage). For example, if the sum of all IT equipment energy usage were 1,500 kilowatt hours (kWh) per month while total facility power, including UPS, energy switching, power conversion and filtering, cooling and associated infrastructure as well as the IT equipment, were 3,500 kWh, the DCiE would be (1,500 / 3,500) x 100 = 43%. In this scenario, IT equipment accounts for about 43% of the energy consumed by the data center, with the remaining 57% of electrical energy consumed by cooling, conversion, conditioning and lighting.

    Power usage effectiveness (PUE) is the indicator ratio of total energy consumed by the data center to the energy used to operate IT equipment. PUE is defined as total facility power / IT equipment power. Using the above scenario, PUE = 3,500 / 1,500 = 2.333, which means a server requiring 100 watts of power would actually require 2.333 x 100 = 233.3 watts of energy, including both direct power and cooling. Similarly, a storage system that required 1,500 kWh of energy to power would require 1,500 x 2.333 = 3,499.5 kWh of electrical power including cooling.

    Another metric with potential meaning is Data Center Performance Efficiency (DCPE), which takes into consideration how much useful and effective work is performed by the IT equipment and data center per unit of energy consumed. DCPE is defined as useful work / total facility power, an example being some number of transactions processed using servers, networks and storage divided by the energy for the data center to power and cool that equipment. A relatively easy and straightforward implementation of DCPE is an IOPS per watt measurement that looks at how many IOPS can be performed (regardless of size or type, such as reads or writes) per unit of energy, in this case watts.

    DCPE = Useful work / Total facility power, for example IOPS per watt of energy used

    DCiE = IT equipment energy / Total facility power = 1 / PUE

    PUE = Total facility energy / IT equipment energy

    IOPS per Watt = Number of IOPs (or bandwidth) / energy used by the storage system
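    The DCiE and PUE arithmetic above is easy to check in a few lines. Here is a short Python sketch using the 1,500 / 3,500 kWh scenario from the text; the IOPS and storage system wattage at the end are hypothetical figures for illustration only:

```python
# DCiE, PUE and an activity-per-watt example, based on the scenario above.

it_equipment_kwh = 1_500     # monthly energy used by IT equipment
total_facility_kwh = 3_500   # total facility energy (IT plus cooling, UPS, etc.)

dcie = it_equipment_kwh / total_facility_kwh * 100   # percent
pue = total_facility_kwh / it_equipment_kwh          # dimensionless ratio

print(f"DCiE = {dcie:.0f}%")   # about 43% of facility energy reaches IT gear
print(f"PUE  = {pue:.3f}")     # 2.333: each IT watt costs 2.333 facility watts

# A 100 watt server therefore draws about 233.3 facility watts overall.
server_watts = 100
print(f"Effective draw = {pue * server_watts:.1f} watts")

# DCPE-style activity per watt, with hypothetical storage system numbers.
iops = 250_000          # measured I/O operations per second (assumed)
system_watts = 1_200    # storage system power draw in watts (assumed)
print(f"IOPS per watt = {iops / system_watts:.1f}")
```

    The same pattern works for any activity-per-watt metric: swap IOPS for transactions, bandwidth or users per the table above.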

    The importance of these numbers and metrics is to focus on the larger impact of a piece of IT equipment, including its cost and energy consumption, factoring in cooling and other hosting or site environmental costs. Naturally, energy costs and CO2 (carbon offsets) will vary by geography and region along with the type of electrical power used (coal, natural gas, nuclear, wind, thermal, solar, etc.) and other factors that should be kept in perspective as part of the big picture. Learn more in Chapter 5, "Measurement, Metrics, and Management of IT Resources," in the book "The Green and Virtual Data Center" (CRC) and in the book Cloud and Virtual Data Storage Networking (CRC).

    Disclaimer and notes

    Disclaimer and note:  URLs submitted for inclusion on this site will be reviewed for consideration and to be in generally accepted good taste in regards to the theme of this site.  Best effort has been made to validate and verify the URLs that appear on this page and web site; however, they are subject to change. The author and/or maintainer(s) of this page and web site make no endorsement of and assume no responsibility for the URLs and their content that are listed on this page.

    What this all means

    The result of a green and virtual data center is a flexible, agile, resilient, scalable information factory that is also economical, productive, efficient and sustainable.

    Ok, nuff said (for now)

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Server storage I/O Intel NUC nick knack notes – Second impressions

    Storage I/O trends


    This is the second of a two-part series about my first and second impressions of the Intel NUC (Next Unit of Computing). In the first post (here) I give an overview and my first impressions, while in this post let's look at the options added to my NUC model 54250, its first deployment use and more impressions.

    Intel® NUC with Intel® Core™ i5 Processor and 2.5-Inch Drive Support (NUC5i5RYH) via Intel.com

    What you will want to add to a NUC

    Since the NUC is a basic brick with a processor mounted on its motherboard, you will need to add memory, some type of persistent storage device (mSATA, SATA or USB based) and optionally a WiFi card.

    One of the nice things about the NUC is that in many ways it offers the equivalent functionality of a laptop or mini-tower without the extra overhead (cost, components, packaging), enabling you to customize as needed for your specific requirements. For example, there is no keyboard, mouse, video screen, WiFi, Hard Disk Drive (HDD) or flash Solid State Device (SSD) included, nor an operating system pre-installed. No memory is included either, enabling you to decide how much to configure while using compatible laptop-style memory. Video and monitors attach via HDMI or mini DisplayPort, including VGA devices via an adapter cable. Keyboard and mouse, if needed, are handled via USB ports.

    Here is what I added to my NUC model 54250.

    1 x Crucial 16GB Kit (2 x 8GB) DDR3 1600 (PC3-12800) SODIMM 204-Pin Notebook Memory
    1 x Intel Network 7260 WiFi Wireless-AC 7260 H/T Dual Band 2×2 AC+Bluetooth HMC (here is a link to the Intel site for various drivers)
    1 x 500GB Samsung Electronics 840 EVO mSATA 0.85-Inch Solid State Drive
    1 x SATA HDD, SSD or HHDD/SSHD (I used one of my existing drives)

    Note that you will also need to supply some type of Keyboard, Video, Mouse (KVM) setup; in my case I used an HDMI to VGA adapter cable to attach the NUC via HDMI (for video) and USB (keyboard and mouse) to my StarTech KVM switch.

    The following images show, on the left, the Intel WiFi card installed and, on the right, a Samsung 840 EVO mSATA 500GB flash SSD installed above the WiFi card. Also notice on the far right of the images the two DDR3 "notebook" class DRAM DIMM slots.

    NUC WiFi card / mSATA SSD
    Left: Intel WiFi card installed and Right Samsung EVO mSATA SSD card (sits above WiFi card)

    Note that the NUC (as do many laptops) accepts 9mm or thinner (e.g. 7mm height) HDDs and SSDs in its SATA drive bay. I mention this because some of the higher-capacity 2TB 2.5" SFF drives are taller than 9mm, as shown in the above image, and do not fit in the NUC internal SATA drive bay. While many devices and systems support 2.5" drive slots for HDDs, SSDs or HHDDs/SSHDs, pay attention to the height and avoid surprises when something does not fit as it was assumed to.

    2.5 HDD and SSDs
    Low-profile and tall-profile 2.5" SFF HDDs

    Additional drives and devices can be attached using external USB 3.0 ports including HDDs, SSDs or even USB to GbE adapters if needed. You will need to supply your own operating system, hypervisor, storage, networking or other software, such as Windows, *nix, VMware ESXi, Hyper-V, KVM, Xen, OpenStack or any of the various ZFS based (among others) storage appliances.

    Unpacking and physical NUC installation

    Initial setup and physical configuration of the NUC is pretty quick, with the only tool needed being a Phillips screwdriver.

    NUC and components ready for installation
    Intel NUC 54250 and components ready for installation

    With all the components including the NUC itself laid out for a quick inventory including recording serial numbers (see image above), the next step is to open up the NUC by removing four Phillips screws from the bottom. Once the screws and bottom plate are removed, the SATA drive bay opens up to reach the slots for memory, the mSATA SSD and the WiFi card (see images below). Once the memory, mSATA and WiFi cards are installed, the SATA drive bay covers those components and it is time to install a 2.5" standard height HDD or SSD. For my first deployment I temporarily installed one of my older HHDDs, a 750GB Seagate Momentus XT that will be replaced by something newer soon.

    NUC internal HDD/SSD slot / NUC internal HDD installed
    View of NUC with bottom cover removed, Left empty SATA drive bay, Right HDD installed

    After the components are installed, it is time to replace the bottom cover plate of the NUC, securing it in place with the four screws previously removed. Next up is attaching any external devices via USB and other ports, including KVM and LAN network connections. Once the hardware is ready, it's time to power up the NUC and check out the Visual BIOS (or UEFI) as shown below.

    Intel NUC Visual BIOS / Intel NUC Visual BIOS display
    NUC VisualBIOS screen shot examples

    At this point, unless you have already installed an operating system, hypervisor or other software on a HDD, SSD or USB device, it is time to install your preferred software.

    Windows 7

    First up was Windows 7, as I already had an image built on the HHDD that required some drivers to be added. Specifically, a visit to the Intel resources site (see the NUC resources and links section later in this post) was made to get GbE LAN, WiFi and USB drivers. Once those were installed, the on-board GbE LAN port worked well, as did the WiFi. Another driver that needed to be downloaded was for a USB-to-GbE adapter to add another LAN connection. Also, a couple of reboots were required for other Windows drivers and configuration changes to take effect, correcting some transient problems including KVM hangs which eventually cleared themselves up.

    Windows 2012 R2

    Following Windows 7, next up was a clean install of Windows 2012 R2, which also required some drivers and configuration changes. One of the challenges is that Windows 2012 R2 is not officially supported on the NUC with its GbE LAN and WiFi cards. However, after doing some searches and reading a few posts including this and this, a solution was found, and Windows 2012 R2 and its networking are working well.

    Ubuntu and Clonezilla

    Next up was a quick install of Ubuntu 14.04, which went pretty smoothly, as well as using Clonezilla to do some drive maintenance and move images and partitions among other things.

    VMware ESXi 5.5U2

    My first attempt at installing a standard VMware ESXi 5.5U2 image ran into problems due to the GbE LAN port not being seen. The solution is to use a different build or a custom ISO that includes the applicable GbE LAN driver (e.g. net-e1000e-2.3.2.x86_64.vib); there is some useful information at Florian Grehl's site (@virten) and over at Andreas Peetz's site (@VFrontDe), including a SATA controller driver for xahci. Once the GbE driver was added (the same driver that addresses other Intel NIC I217/I218 based systems) along with updating the SATA driver, VMware worked fine.

    Needless to say there are many other things I plan on doing with the NUC both as a standalone bare-metal system as well as a virtual platform as I get more time and projects allow.

    What about building your NUC alternative?

    In addition to the NUC models available via Intel and its partners and accessorizing as needed, there are also specially customized and ruggedized NUC versions, similar to what you would expect to find with laptops, notebooks and other PC-based systems.

    MSI ProBox rear view / MSI ProBox front view
    Left MSI ProBox rear-view Right MSI ProBox front view

    If you are looking to do more than what Intel and its partners offer, there are some other options, such as increasing the number of external ports among other capabilities. One option which I recently added to my collection of systems is a DIY (Do It Yourself) MSI ProBox (VESA mountable) such as this one here.

    MSI Probox internal view
    Internal view MSI ProBox (no memory, processor or disks)

    The MSI ProBox is essentially a motherboard with a single empty CPU socket (e.g. LGA 1150, up to 65W) supporting various processors, two empty DDR3 DIMM slots and two empty 2.5" SATA drive bays among other capabilities. Enclosures such as the MSI ProBox give you flexibility in creating something more robust beyond a basic NUC yet smaller than a traditional server, depending on your specific needs.

    If you are looking for other small form factor modular and ruggedized server options as an alternative to a NUC, then check out those from Xi3, Advantech, Cadian Networks, and Logic Supply among many others.


    First NUC impressions

    Overall I like the NUC and see many uses for it, from consumer and home uses including entertainment and media systems to video security surveillance, as well as a small server or workstation device. In addition, I can see a NUC being used in smaller environments as a desktop workstation, or as a lower-power, lower-performance system including a small virtualization host for SOHO, small SMB and ROBO environments. Another usage is for a home virtual lab as well as gaming, among other scenarios including simple software defined storage proofs of concept. For example, how about creating a small cluster of NUCs to run VMware VSAN, Datacore, EMC ScaleIO, Starwind, Microsoft SOFS or Hyper-V, as well as any of the many ZFS-based NAS storage software applications?

    Pros – Features and benefits

    Small, low-power, self-contained with flexibility to choose my memory, WiFi, storage (HDD or SSD) without the extra cost of those items or software being included.

    Cons – Caveats or what to look out for

    It would be nice to have another GbE LAN port; however, I addressed that by adding a USB 3.0 to GbE cable. Likewise, it would be nice if the 2.5" SATA drive bay supported tall height form-factor devices such as the 2TB devices; the workaround for adding larger-capacity and physically larger storage devices is to use the USB 3.0 ports. The biggest warning is that if you are going to venture outside of the officially supported operating system and application software realm, be ready to load some drivers, possibly patch and hack some install scripts, and then plug and pray it all works. So far I have not run into any major show stoppers that were not addressed with some time spent searching (Google will be your friend), then loading the drivers or making configuration changes.

    Additional NUC resources and links

    Various Intel products support search page
    Intel NUC support and download links
    Intel NUC model 54250 page, product brief page (and PDF version), and support with download links
    Intel NUC home theater solutions guide (PDF)
    Intel HCL for NUC page and Intel Core i5-4250U processor speeds and feeds
    VMware on NUC tips
    VMware ESXi driver for LAN net-e1000e-2.3.2.x86_64
    VMware ESXi SATA xahci driver
    Server storage I/O Intel NUC nick knack notes – First impressions
    Server Storage I/O Cables Connectors Chargers & other Geek Gifts (Part I and Part II)
    Software defined storage on a budget with Lenovo TS140


    What this all means

    The Intel NUC provides a good option for many situations that might otherwise need a larger mini-tower desktop workstation or similar system, for home, consumer and small office needs. The NUC can also be used for specialized pre-configured application-specific situations that need low power, basic system functionality and expansion options in a small physical footprint. In addition, the NUC can be a good option for adding to an existing physical and virtual lab, or as a basis for starting a new one.

    So far I have found many uses for the NUC, which frees up other systems to do other tasks while enabling some older devices to finally be retired. On the other hand, like most any technology, while the NUC is flexible, its low power and performance are not enough to support some other applications. However the NUC gives me the flexibility to leverage the applicable unit of compute (e.g. server, workstation, etc.) for a given task, or put another way, use the right technology tool for the task at hand.

    For now I only need a single NUC to be a companion to my other HP, Dell and Lenovo servers as well as MSI ProBox, however maybe there will be a small NUC cluster, grid or ring configured down the road.

    What say you, do you have a NUC? If so, how is it being used, and what tips, tricks or hints do you have to share with others?

    Ok, nuff said for now.

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 StorageIO and UnlimitedIO LLC All Rights Reserved

    Revisiting RAID data protection remains relevant resource links



    Updated 2/10/2018

    RAID data protection remains relevant, including erasure codes (EC) and local reconstruction codes (LRC) among other technologies. If RAID were really not relevant anymore (e.g. actually dead), why do some people spend so much time trying to convince others that it is dead, or to use a different RAID level, enhanced RAID or beyond-RAID related advanced approaches?

    When you hear RAID, what comes to mind?

    A legacy monolithic storage system that supports narrow 4, 5 or 6 drive wide stripe sets, or a modern system supporting dozens of drives in a RAID group with different options?

    RAID means many things, likewise there are different implementations (hardware, software, systems, adapters, operating systems) with various functionality, some better than others.

    For example, which of the items in the following figure come to mind, or perhaps are new to your RAID vocabulary?

    RAID questions

    There are many variations of RAID storage: some for the enterprise, some for SMB, SOHO or consumer. Some have better performance than others; some have poor performance, for example causing extra writes that lead to the perception that all parity-based RAID does extra writes (some implementations actually do write gathering and optimization).

    Some hardware and software implementations use WBC (write-back cache), mirrored or battery-backed (BBU), along with being able to group writes together in memory (cache) to do full-stripe writes. The result can be fewer back-end writes compared to other systems. Hence, not all RAID implementations in either hardware or software are the same. Likewise, just because a RAID definition shows a particular theoretical implementation approach does not mean all vendors have implemented it that way.

    RAID is not a replacement for backup; rather it is part of an overall approach to providing data availability and accessibility.

    data protection and durability

    What’s the best RAID level? The one that meets YOUR needs

    There are different RAID levels and implementations (hardware, software, controller, storage system, operating system, adapter among others) for various environments (enterprise, SME, SMB, SOHO, consumer) supporting primary, secondary, tertiary (backup/data protection, archiving).

    RAID comparison
    General RAID comparisons

    Thus one size or approach does not fit all solutions; likewise RAID rules of thumb or guides need context. Context means that a RAID rule or guide for consumer, SOHO or SMB might be different for enterprise and vice versa, not to mention depending on the type of storage system, number of drives, drive type and capacity among other factors.

    RAID comparison
    General basic RAID comparisons

    Thus the best RAID level is the one that meets your specific needs in your environment. What is best for one environment and application may be different from what is applicable to your needs.
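    As a simplified illustration of how the trade-offs differ by level, here is a small Python sketch comparing approximate usable-capacity fractions for a few common RAID levels. This is a rough model only; as noted above, actual implementations vary, and the 8 x 4TB drive group is a hypothetical example:

    ```python
    def usable_fraction(level: str, drives: int) -> float:
        """Approximate usable-capacity fraction for a RAID group of n drives.
        Simplified model for illustration; real implementations vary."""
        if level == "RAID0":
            return 1.0                       # striping only, no protection
        if level in ("RAID1", "RAID10"):
            return 0.5                       # mirroring costs half the raw capacity
        if level == "RAID5":
            return (drives - 1) / drives     # one drive's worth of parity
        if level == "RAID6":
            return (drives - 2) / drives     # dual parity, survives two drive failures
        raise ValueError(f"unknown level: {level}")

    # Hypothetical example: a group of 8 x 4TB drives
    raw_tb = 8 * 4
    for level in ("RAID0", "RAID10", "RAID5", "RAID6"):
        usable = usable_fraction(level, 8) * raw_tb
        print(f"{level}: {usable:.1f} TB usable of {raw_tb} TB raw")
    ```

    Note how wider groups lower the parity overhead of RAID 5/6 while lengthening rebuilds, which is exactly the kind of context-dependent trade-off discussed above.
    
    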

    Key points and RAID considerations include:

    · Not all RAID implementations are the same; some are very much alive and evolving while others are in need of a rest or rewrite. So it is often not the technology or techniques that are the problem, rather how they are implemented and then deployed.

    · It may not be RAID that is dead, rather the solution that uses it. Hence if you think a particular storage system, appliance, product or software is old and dead along with its RAID implementation, then just say that product or vendor's solution is dead.

    · RAID can be implemented in hardware controllers, adapters or storage systems and appliances as well as via software and those have different features, capabilities or constraints.

    · Long or slow drive rebuilds are a reality with larger disk drives and parity-based approaches; however, you have options on how to balance performance, availability, capacity, and economics.

    · RAID can be single, dual or multiple parity or mirroring-based.

    · Erasure and other coding schemes leverage parity schemes and guess what umbrella parity schemes fall under.

    · RAID may not be cool, sexy or a fun topic and technology to talk about, however many trendy tools, solutions and services actually use some form or variation of RAID as part of their basic building blocks. This is an example of using new and old things in new ways to help each other do more without increasing complexity.

    ·  Even if you are not a fan of RAID and think it is old and dead, at least take a few minutes to learn more about what it is that you do not like to update your dead FUD.

    Wait, Isn’t RAID dead?

    There is some marketing that paints a broad picture of RAID being dead to prop up something new, which in some cases may be a derivative variation of parity RAID.

    data dispersal
    Data dispersal and durability

    RAID rebuild improving
    RAID continues to evolve with rapid rebuilds for some systems

    On the other hand, there are some specific products, technologies and implementations that may be end of life or actually dead. Likewise, what might be dead, dying or simply not in vogue are specific RAID implementations or packaging. Certainly there is a lot of buzz around object storage, cloud storage, forward error correction (FEC) and erasure coding, including messages of how they replace RAID. The catch is that some object storage solutions are overlaid on top of lower-level file systems that do things such as RAID 6; granted, they are out of sight, out of mind.

    RAID comparison
    General RAID parity and erasure code/FEC comparisons

    Then there are advanced parity protection schemes, including FEC and erasure codes, that while not your traditional RAID levels, have characteristics including chunking or sharding data and spreading it out over multiple devices with multiple parity (or derivatives of parity) protection.
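    As a toy illustration of the parity umbrella that both traditional RAID and erasure codes fall under, here is a minimal Python sketch that computes XOR parity across data chunks and then rebuilds a lost chunk from the survivors (single parity, RAID 5 style; real implementations and multi-parity codes such as Reed-Solomon do far more):

    ```python
    def xor_parity(chunks):
        """Compute byte-wise XOR parity across equal-length chunks."""
        parity = bytearray(len(chunks[0]))
        for chunk in chunks:
            for i, b in enumerate(chunk):
                parity[i] ^= b
        return bytes(parity)

    # Shard some data across three "drives" plus one parity "drive".
    data = [b"AAAA", b"BBBB", b"CCCC"]
    parity = xor_parity(data)

    # Simulate losing drive 1 and rebuilding it from the survivors plus parity:
    # XOR is its own inverse, so surviving data XOR parity yields the lost chunk.
    survivors = [data[0], data[2], parity]
    rebuilt = xor_parity(survivors)
    assert rebuilt == data[1]  # the lost chunk is recovered
    print("rebuilt:", rebuilt)
    ```

    The same chunk-plus-parity idea, generalized to multiple parity symbols spread over many devices, is the basis of the erasure code and FEC schemes mentioned above.
    
    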

    Bottom line is that for some environments, different RAID levels may be more applicable and alive than for others.

    Via BizTech – How to Turn Storage Networks into Better Performers

    • Maintain Situational Awareness
    • Design for Performance and Availability
    • Determine Networked Server and Storage Patterns
    • Make Use of Applicable Technologies and Techniques

    If RAID is alive, what to do with it?

    If you are new to RAID, learn more about the past, present and future, keeping context in mind. Keeping context in mind means that there are different RAID levels and implementations for various environments. Not all RAID 0, 1, 1/0, 10, 2, 3, 4, 5, 6 or other variations (past, present and emerging) are the same for consumer vs. SOHO vs. SMB vs. SME vs. enterprise, nor are the usage cases. Some need performance for reads, others for writes; some need high capacity with low performance, using hardware or software. RAID rules of thumb are ok and useful, however keep them in context with what you are doing as well as using.

    What to do next?

    Take some time to learn and ask questions, including what to use when, where, why and how, as well as whether an approach or recommendation is applicable to your needs. Check out the following links to read some extra perspectives about RAID, and keep in mind that what might apply to enterprise may not be relevant for consumer or SMB and vice versa.

    Some advise needed on SSD’s and Raid (Via Spiceworks)
    RAID 5 URE Rebuild Means The Sky Is Falling (Via BenchmarkReview)
    Double drive failures in a RAID-10 configuration (Via SearchStorage)
    Industry Trends and Perspectives: RAID Rebuild Rates (Via StorageIOblog)
    RAID, IOPS and IO observations (Via StorageIOBlog)
    RAID Relevance Revisited (Via StorageIOBlog)
    HDDs Are Still Spinning (Rust Never Sleeps) (Via InfoStor)
    When and Where to Use NAND Flash SSD for Virtual Servers (Via TheVirtualizationPractice)
    What’s the best way to learn about RAID storage? (Via Spiceworks)
    Design considerations for the host local FVP architecture (Via Frank Denneman)
    Some basic RAID fundamentals and definitions (Via SearchStorage)
    Can RAID extend nand flash SSD life? (Via StorageIOBlog)
    I/O Performance Issues and Impacts on Time-Sensitive Applications (Via CMG)
    The original RAID white paper (PDF) that, while over 20 years old, provides a basis, foundation and some history, by Patterson, Gibson and Katz
    Storage Interview Series (Via Infortrend)
    Different RAID methods (Via RAID Recovery Guide)
    A good RAID tutorial (Via TheGeekStuff)
    Basics of RAID explained (Via ZDNet)
    RAID and IOPs (Via VMware Communities)

    Where To Learn More

    View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    What is my favorite or preferred RAID level?

    That depends: for some things it's RAID 1, for others RAID 10, yet for others RAID 4, 5, 6 or DP, and yet other situations could be a fit for RAID 0 or erasure codes and FEC. Instead of being focused on just one or two RAID levels as the solution for different problems, I prefer to look at the environment (consumer, SOHO, small or large SMB, SME, enterprise), type of usage (primary, secondary or data protection), performance characteristics, reads, writes, and the type and number of drives among other factors. What might be a fit for one environment may not be a fit for others; thus my preferred RAID level, along with where it is implemented, is the one that meets the given situation. However, also keep in mind tying RAID into part of an overall data protection strategy; remember, RAID is not a replacement for backup.

    What this all means

    Like other technologies that have been declared dead for years or decades, aka the zombie technologies (e.g. dead yet still alive), RAID continues to be used while the technology evolves. There are specific products, implementations or even RAID levels that have faded away or are declining in some environments, yet are alive in others. RAID and its variations are still alive; however, how it is used or deployed in conjunction with other technologies is also evolving.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Cloud Conversations: Revisiting re:Invent 2014 and other AWS updates


    This is part one of a two-part series about Amazon Web Services (AWS) re:Invent 2014 and other recent cloud updates, read part two here.

    Revisiting re:Invent 2014 and other AWS updates

    AWS re:Invent 2014

    A few weeks ago I attended Amazon Web Services (AWS) re:Invent 2014 in Las Vegas for a few days. For those of you who have not yet attended this event, I recommend adding it to your agenda. If you have an interest in compute servers, networking, storage, development tools or management of cloud (public, private, hybrid), virtualization and related topic themes, you should check out AWS re:Invent.

    AWS made several announcements at re:Invent, including many around development tools, compute and data storage services. One of those to keep an eye on is the cloud-based Aurora relational database service that complements existing RDS tools. Aurora is positioned as an alternative to traditional SQL-based transactional databases commonly found in enterprise environments (e.g. SQL Server among others).

    Some recent AWS announcements prior to re:Invent include

    AWS vCenter Portal

    Using the AWS Management Portal for vCenter adds a plug-in within your VMware vCenter to manage your AWS infrastructure. The vCenter plug-in for AWS includes support for AWS EC2 and Virtual Machine (VM) import to migrate your VMware VMs to AWS EC2, and for creating VPCs (Virtual Private Clouds) along with subnets. There is no cost for the plug-in; you simply pay for the underlying AWS resources consumed (e.g. EC2, EBS, S3). Learn more about the AWS Management Portal for vCenter here, and download the OVA plug-in for vCenter here.

    AWS re:invent content


    AWS Andy Jassy (Image via AWS)

    November 12, 2014 (Day 1) Keynote (highlight video, full keynote). This is the session where AWS SVP Andy Jassy made several announcements, including the Aurora relational database that complements the existing Relational Database Service (RDS). In addition to Andy, the keynote sessions also included various special guests ranging from AWS customers and partners to internal people in support of the various initiatives and announcements.


    Amazon.com CTO Werner Vogels (Image via AWS)

    November 13, 2014 (Day 2) Keynote (highlight video, full keynote). In this session, Amazon.com CTO Werner Vogels appears making announcements about the new Container and Lambda services.

    AWS re:Invent announcements

    Announcements and enhancements made by AWS during re:Invent include:

    • Key Management Service (KMS)
    • Amazon RDS for Aurora
    • Amazon EC2 Container Service
    • AWS Lambda
    • Amazon EBS Enhancements
    • Application development, deployment and life-cycle management tools
    • AWS Service Catalog
    • AWS CodeDeploy
    • AWS CodeCommit
    • AWS CodePipeline

    Key Management Service (KMS)

    A hardware security module (HSM) backed key management service for creating and controlling the encryption keys used to protect the security of digital assets. It integrates with AWS EBS and other services including S3 and Redshift, along with CloudTrail logs for regulatory, compliance and management purposes. Learn more about AWS KMS here.

    AWS Database

    For those who are not familiar, AWS has a suite of database-related services, both SQL and NoSQL based, from simple to transactional to Petabyte (PB) scale data warehouses for big data and analytics. AWS offers the Relational Database Service (RDS), which is a suite of different database types, instances and services. RDS instances and types include MySQL, PostgreSQL, Oracle, SQL Server and the new AWS Aurora offering (read more below).  Other little-data database and big data repository related offerings include SimpleDB and DynamoDB (non-SQL databases), ElastiCache (an in-memory cache repository) and Redshift (a large-scale data warehouse and big data repository).

    In addition to the database services offered by AWS, you can also combine various AWS resources including EC2 compute, EBS and other storage offerings to create your own solution. For example, there are various Amazon Machine Images (AMIs), or pre-built operating systems and database tools, available with EC2 as well as via the AWS Marketplace, such as MongoDB and Couchbase among others. For those not familiar with MongoDB, Couchbase, Cassandra, Riak along with other NoSQL or alternative databases and key-value repositories, check out Seven Databases in Seven Weeks in my book review of it here.

    Seven Databases book review
    Seven Databases in Seven Weeks and NoSQL movement available from Amazon.com

    Amazon RDS for Aurora

    Aurora is a new relational database offering, part of the AWS RDS suite of services. Positioned as an alternative to commercial high-end databases, Aurora is a cost-effective database engine compatible with MySQL. AWS is claiming 5x better performance than standard MySQL with Aurora, while being resilient and durable. Learn more about Aurora, which will be available in early 2015, and its current preview here.

    Amazon EC2 C4 instances

    AWS will be adding a new C4 instance as the next generation of EC2 compute instances, based on Intel Xeon E5-2666 v3 (Haswell) processors. The Intel Xeon E5-2666 v3 processors run at a clock speed of 2.9 GHz, providing the highest level of EC2 performance. AWS is targeting traditional High Performance Computing (HPC) along with other compute-intensive workloads including analytics, gaming, and transcoding among others. Learn more about AWS EC2 instances here, and view this Server and StorageIO EC2, EBS and associated AWS primer here.

    Amazon EC2 Container Service

    Containers such as those via Docker have become popular for helping developers rapidly build and deploy scalable applications. AWS has added a new feature called EC2 Container Service that supports Docker using simple APIs. In addition to supporting Docker, EC2 Container Service is a high-performance, scalable container management service for distributed applications deployed on a cluster of EC2 instances. Similar to other EC2 services, EC2 Container Service leverages security groups, EBS volumes and Identity and Access Management (IAM) roles, along with scheduling placement of containers to meet your needs. Note that AWS is not alone in adding container and Docker support, with Microsoft Azure also having recently made some announcements; learn more about Azure and Docker here. Learn more about the EC2 Container Service here and more about Docker here.

    Docker for smarties

    Continue reading about re:Invent 2014 and other recent AWS enhancements here in part two of this two-part series.

    Ok, nuff said (for now)

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved