December 2014 Server StorageIO Newsletter


Hello and welcome to this December Server and StorageIO update newsletter.

Season's Greetings


Commentary In The News

StorageIO news

Following are some StorageIO industry trends and perspectives comments that have appeared in various venues. Cloud conversations continue to be popular, including concerns about privacy, security and availability. Over at BizTech Magazine there are some comments about cloud and ROI. Some comments on AWS and Google SSD services can be viewed at SearchAWS. View other trends comments here

Tips and Articles

View recent as well as past tips and articles here

StorageIOblog posts

Recent StorageIOblog posts include:

View other recent as well as past blog posts here

In This Issue

  • Industry Trends Perspectives
  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Events & Activities

    View other recent and upcoming events here

    Webinars

    December 11, 2014 – BrightTalk
    Server & Storage I/O Performance

    December 10, 2014 – BrightTalk
    Server & Storage I/O Decision Making

    December 9, 2014 – BrightTalk
    Virtual Server and Storage Decision Making

    December 3, 2014 – BrightTalk
    Data Protection Modernization

    Videos and Podcasts

StorageIO podcasts are also available at StorageIO.tv

    From StorageIO Labs

    Research, Reviews and Reports

    StarWind Virtual SAN for Microsoft SOFS

    May require registration
This looks at the shared storage needs of SMBs and ROBOs leveraging Microsoft Scale-Out File Server (SOFS). The focus is on Microsoft Windows Server 2012, Server Message Block (SMB) 3.0, SOFS and StarWind Virtual SAN management software.

    View additional reports and lab reviews here.

    Resources and Links

    Check out these useful links and pages:
    storageio.com/links
    objectstoragecenter.com
    storageioblog.com/data-protection-diaries-main/
    storageio.com/ssd

    Enjoy this edition of the Server and StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts along with in the news commentary appearing soon.

Season's greetings 2014

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Server Storage I/O Cables Connectors Chargers & other Geek Gifts


    server storage I/O trends

    This is part one of a two part series for what to get a geek for a gift, read part two here.

    It is that time of the year when annual predictions are made for the upcoming year, including those that will be repeated next year or that were also made last year.

    It’s also the time of the year to get various projects wrapped up, line up new activities, get the book-keeping things ready for year-end processing and taxes, as well as other things.

It’s also that time of the year to do some budget and project planning, including upgrades, replacements and enhancements, while balancing the over-subscribed holiday party schedule some of you may have.

Let's not forget getting ready for vacations, perhaps with some time off from work to spend upgrading your home lab or on other projects.

Then there are the gift lists, or trying to figure out what to get for that difficult-to-shop-for person, particularly geeks who may have everything, want the latest and greatest that others have, or want something their peers don’t have yet.

Sure I have a DJI Phantom II on my wish list, however I also have other things on my needs list (e.g. what I really need and want vs. what would be fun to wish for).

    DJI Phantom helicopter drone
    Image via DJI.com, click on image to learn more and compare models

So here are some things for the geek who may have everything or is up on having the latest and greatest, yet forgot about or didn’t know of some of these things.

Not to mention some of these might seem really simple and low-cost; think of them like a Lego block or erector set part where your imagination is the only boundary on how to use them. Also, most if not all of these are budget friendly, particularly if you shop around.

Replace a CD/DVD with 4 x 2.5″ HDDs or SSDs

So you need to add some 2.5" SAS or SATA HDDs, SSDs, or HHDDs/SSHDs to your server to support your VMware ESXi, Microsoft Hyper-V, KVM, Xen, OpenStack, Hadoop or legacy *nix or Windows environment, or perhaps a gaming system. The challenge is that you are out of disk drive bay slots and you want things neatly organized vs. a rat’s nest of cables hanging out of your system. No worries, assuming your server has an empty media bay (e.g. those 5.25" slots where CDs/DVDs or really old HDDs go), or if you can give up the CD/DVD, then use that bay and its power connector to add one of these. This is a 4 x 2.5" SAS and SATA drive bay that has a common power connector (male Molex), with each drive bay having its own SATA drive connection. Because each drive has its own SATA connection, you can map the drives to an available on-board SATA port attached to a SAS or SATA controller, or attach an available port on a RAID adapter to the ports using a cable such as a small form factor (SFF) 8087 to SATA.

SAS and SATA storage enclosure
(Left) Rear view with Molex power and SATA cables (Right) Front view

I have a few of these in different systems and what I like about them is that they support different drive speeds, plus they will accept a SAS drive where many enclosures in this category only support SATA. Once you mount your 2.5" HDD or SSD using screws, you can hot swap the drives (requires controller and OS support) and move them between other similar enclosures as needed. The other things I like are the front indicator lights, and that since each drive has its own separate connection, you can attach some of the drives to a RAID adapter while connecting others to on-board SATA ports. Oh, and you can also mix drives of different speeds.

    Power connections

Depending on the type of your server, you may have Molex, SATA or some other type of power connection. You can use different power connection cables to go from one type (e.g. Molex) to another, create a connection for two devices, or create an extension to reach hard-to-access mounting locations.

Warning and disclosure note: keep in mind how much power you are drawing when attaching devices so as not to cause an electrical or fire hazard; follow the manufacturer's instructions and specifications, doing so at your own risk! After all, just like Clark Griswold in National Lampoon's Christmas Vacation, who found you could attach extension cords to splitters to splitters and fan out to have many lights attached, you don’t want to cause a fire or blackout when you plug too many drives in.
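To make that warning concrete, here is a small back-of-the-envelope check. This is only a sketch, not electrical advice; the device wattages, the 120V/15A circuit and the 80% derating are illustrative assumptions.

```python
# Sanity check for total draw on one circuit: keep the combined load under
# the circuit's derated continuous capacity. All numbers here are examples.
def circuit_ok(device_watts, volts=120, breaker_amps=15, derate=0.8):
    """Return True if the summed load fits the derated circuit capacity."""
    capacity_watts = volts * breaker_amps * derate  # 120V * 15A * 0.8 = ~1440W
    return sum(device_watts) <= capacity_watts

# One server, a disk shelf, a switch and a monitor (hypothetical wattages)
print(circuit_ok([350, 250, 120, 60]))  # 780W on a derated ~1440W circuit → True
```

Swap in readings from a power meter (see below) rather than nameplate maximums for a more realistic picture.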


    National Lampoon Christmas Vacation

    Measuring Power

Ok, so you do not want to do a Clark Griswold (see above video) and overload a power circuit, or perhaps you simply want to know how many watts or amps you are drawing, or what the quality of your voltage is.

There are many types of power meters at various prices; some even have interfaces where you can grab event data to correlate with server storage I/O networking performance to derive metrics such as IOPS per watt. Speaking of IOPS per watt, check out the SNIA Emerald site where they have some good tools, including a benchmark script that uses Vdbench to drive a hot band workload (e.g. basically kick the crap out of a storage system).
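The IOPS-per-watt metric is simple arithmetic once you have a meter reading next to a benchmark result; a minimal sketch (the readings below are made-up numbers):

```python
# IOPS per watt: storage efficiency expressed as I/O operations per second
# divided by the watts measured at the plug (e.g. during a benchmark run).
def iops_per_watt(iops, watts):
    return iops / watts

# Hypothetical run: 25,000 IOPS while the power meter shows 500 watts
print(iops_per_watt(25_000, 500))  # → 50.0 IOPS per watt
```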

Back to power meters: I like the Kill A Watt series of meters as they give good info about amps, volts and power quality. I have these plugged into outlets so I can see how much power is being used by the battery backup units (BBU), aka UPS, that also serve as power surge filters. If needed I can move these further downstream to watch the power intake of a specific server, storage, network or other device.

    Kill A Watt Power meter

    Standby and backup power

Electrical power surge strips should be a given or considered common sense; however, what is or should be common sense bears repeating so that it remains common sense: you should be using power surge strips or other protective devices.

    Standby, UPS and BBU

    For most situations a good surge suppressor will cover short power transients.

    APC power strips and battery backup
    Image via APC and model similar to those that I have

For slightly longer power outages of a few seconds to minutes, that’s where battery backup (BBU) units that also have surge suppression come into play. There are many types and sizes with various features to meet your needs and budget. I have several of these in a couple of different sizes, not only for servers, storage and networking equipment (including some WiFi access points, routers, etc.), but also for home things such as satellite DVRs. However, not everything needs to stay on; some things simply need to stay on long enough to shut down manually or via automated power-off sequences.

    Alternate Power Generation

Generators are not just for the rich and famous or large data centers; like other technologies they are available in different sizes, power capacities and fuel sources, with manual or automated operation among other options.

    kohler residential generator
    Image via Kohler Power similar to model that I have

Note that even with a typical generator there will be a time gap from when power goes off until the generator starts, stabilizes and you have good power. That’s where the BBUs and UPSs mentioned above come into play to bridge those time gaps, which in my case is about 25-30 seconds. Btw, knowing how much power your technology is drawing, using tools such as the Kill A Watt, is part of the planning process to avoid surprises.
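As a rough planning sketch for that gap (the battery capacity, load and efficiency figures below are assumptions; check your UPS vendor's runtime tables for real numbers):

```python
# Estimate how long a UPS/BBU can carry a load until the generator is stable.
def ups_bridge_seconds(battery_wh, load_watts, efficiency=0.9):
    """Seconds of runtime: usable watt-hours (derated for inverter losses)
    divided by the load in watts, converted from hours to seconds."""
    return battery_wh * efficiency / load_watts * 3600

# A small 100 Wh unit carrying a 300 W server vs. a ~30 second generator gap
runtime = ups_bridge_seconds(100, 300)
print(runtime > 30)  # → True (roughly 18 minutes, plenty of margin)
```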

    What about Solar Power

Yup, whether it is to fit in and be green, or simply to get some electrical power when or where it is otherwise not available to charge a battery or power some device, these small solar power devices are very handy.

    solar charger
    Image via Amazon.com
    solar battery charger
    Image via Amazon.com

For example you can get or easily make an adapter to charge laptops or cell phones, or even power them for normal use (check the manufacturer's information on power usage, amp and voltage draws, among other warnings, to prevent fire and other hazards). Btw, not only are these handy for computer related things, they also work great for keeping the batteries on my fishing boat charged so that I have my fish finder and other electronics, just saying.

    Fire suppression

How about a new or updated smoke and fire detection alarm, as well as a fire extinguisher, for the geek’s software-defined hardware that runs on power (electrical or battery)?

    The following is from the site Fire Extinguisher 101 where you can learn more about different types of suppression technologies.

    Image via Fire Extinguisher 101
    • Class A extinguishers are for ordinary combustible materials such as paper, wood, cardboard, and most plastics. The numerical rating on these types of extinguishers indicates the amount of water it holds and the amount of fire it can extinguish. Geometric symbol (green triangle)
    • Class B fires involve flammable or combustible liquids such as gasoline, kerosene, grease and oil. The numerical rating for class B extinguishers indicates the approximate number of square feet of fire it can extinguish. Geometric symbol (red square)
    • Class C fires involve electrical equipment, such as appliances, wiring, circuit breakers and outlets. Never use water to extinguish class C fires – the risk of electrical shock is far too great! Class C extinguishers do not have a numerical rating. The C classification means the extinguishing agent is non-conductive. Geometric symbol (blue circle)
• Class D fire extinguishers are commonly found in a chemical laboratory. They are for fires that involve combustible metals, such as magnesium, titanium, potassium and sodium. These types of extinguishers also have no numerical rating, nor are they given a multi-purpose rating – they are designed for class D fires only. Geometric symbol (yellow decagon)
    • Class K fire extinguishers are for fires that involve cooking oils, trans-fats, or fats in cooking appliances and are typically found in restaurant and cafeteria kitchens. Geometric symbol (black hexagon)

    Wrap up for part I

    This wraps up part I of what to get a geek V2014, continue reading part II here.

    Ok, nuff said, for now…

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio


    Cloud Conversations: Revisiting re:Invent 2014 and other AWS updates

    server storage I/O trends

    This is part one of a two-part series about Amazon Web Services (AWS) re:Invent 2014 and other recent cloud updates, read part two here.

    Revisiting re:Invent 2014 and other AWS updates

    AWS re:Invent 2014

A few weeks ago I attended Amazon Web Services (AWS) re:Invent 2014 in Las Vegas for a few days. For those of you who have not yet attended this event, I recommend adding it to your agenda. If you have an interest in compute servers, networking, storage, development tools or management of cloud (public, private, hybrid), virtualization and related topic themes, you should check out AWS re:Invent.

AWS made several announcements at re:Invent, including many around development tools, compute and data storage services. One to keep an eye on is the cloud-based Aurora relational database service that complements existing RDS tools. Aurora is positioned as an alternative to traditional SQL-based transactional databases commonly found in enterprise environments (e.g. SQL Server among others).

Some recent AWS announcements prior to re:Invent include:

    AWS vCenter Portal

Using the AWS Management Portal for vCenter adds a plug-in within your VMware vCenter to manage your AWS infrastructure. The vCenter plug-in for AWS includes support for AWS EC2 and Virtual Machine (VM) import to migrate your VMware VMs to AWS EC2, and for creating VPCs (Virtual Private Clouds) along with subnets. There is no cost for the plug-in; you simply pay for the underlying AWS resources consumed (e.g. EC2, EBS, S3). Learn more about the AWS Management Portal for vCenter here, and download the OVA plug-in for vCenter here.

    AWS re:invent content


    AWS Andy Jassy (Image via AWS)

November 12, 2014 (Day 1) Keynote (highlight video, full keynote). This is the session where AWS SVP Andy Jassy made several announcements, including the Aurora relational database that complements the existing RDS (Relational Database Service). In addition to Andy, the keynote sessions also included various special guests ranging from AWS customers and partners to internal people in support of the various initiatives and announcements.


    Amazon.com CTO Werner Vogels (Image via AWS)

November 13, 2014 (Day 2) Keynote (highlight video, full keynote). In this session, Amazon.com CTO Werner Vogels appeared, making announcements about the new Container and Lambda services.

    AWS re:Invent announcements

    Announcements and enhancements made by AWS during re:Invent include:

    • Key Management Service (KMS)
    • Amazon RDS for Aurora
    • Amazon EC2 Container Service
    • AWS Lambda
    • Amazon EBS Enhancements
• Application development, deployment and life-cycle management tools
    • AWS Service Catalog
    • AWS CodeDeploy
    • AWS CodeCommit
    • AWS CodePipeline

    Key Management Service (KMS)

A hardware security module (HSM) based key management service for creating and controlling the encryption keys used to protect digital assets. It integrates with AWS EBS and other services including S3 and Redshift, along with CloudTrail logs for regulatory, compliance and management purposes. Learn more about AWS KMS here

    AWS Database

For those who are not familiar, AWS has a suite of database related services, both SQL and NoSQL based, spanning simple to transactional to Petabyte (PB) scale data warehouses for big data and analytics. AWS offers the Relational Database Service (RDS), which is a suite of different database types, instances and services. RDS instances and types include SimpleDB, MySQL, PostgreSQL, Oracle, SQL Server and the new AWS Aurora offering (read more below). Other little-data database and big data repository related offerings include DynamoDB (a non-SQL database), ElastiCache (an in-memory cache repository) and Redshift (a large-scale data warehouse and big data repository).

In addition to the database services offered by AWS, you can also combine various AWS resources including EC2 compute, EBS and other storage offerings to create your own solution. For example there are various Amazon Machine Images (AMIs), or pre-built operating systems and database tools, available with EC2 as well as via the AWS Marketplace, such as MongoDB and Couchbase among others. For those not familiar with MongoDB, Couchbase, Cassandra, Riak and other non-SQL or alternative databases and key value repositories, check out Seven Databases in Seven Weeks and my book review of it here.

    Seven Databases book review
    Seven Databases in Seven Weeks and NoSQL movement available from Amazon.com

    Amazon RDS for Aurora

Aurora is a new relational database offering that is part of the AWS RDS suite of services. Positioned as an alternative to commercial high-end databases, Aurora is a cost-effective database engine compatible with MySQL. AWS is claiming 5x better performance than standard MySQL with Aurora, while being resilient and durable. Learn more about Aurora, which will be available in early 2015, and its current preview here.

    Amazon EC2 C4 instances

AWS will be adding new C4 instances as a next generation of EC2 compute instance based on Intel Xeon E5-2666 v3 (Haswell) processors. The Intel Xeon E5-2666 v3 processors run at a clock speed of 2.9 GHz, providing the highest level of EC2 performance. AWS is targeting traditional High Performance Computing (HPC) along with other compute intensive workloads including analytics, gaming and transcoding among others. Learn more about AWS EC2 instances here, and view this Server and StorageIO EC2, EBS and associated AWS primer here.

    Amazon EC2 Container Service

Containers such as those via Docker have become popular for helping developers rapidly build as well as deploy scalable applications. AWS has added a new feature called EC2 Container Service that supports Docker using simple APIs. In addition to supporting Docker, EC2 Container Service is a high performance, scalable container management service for distributed applications deployed on a cluster of EC2 instances. Similar to other EC2 services, EC2 Container Service leverages security groups, EBS volumes and Identity Access Management (IAM) roles, along with scheduling placement of containers to meet your needs. Note that AWS is not alone in adding container and Docker support, with Microsoft Azure also having recently made some announcements; learn more about Azure and Docker here. Learn more about EC2 Container Service here and more about Docker here.

    Docker for smarties

    Continue reading about re:Invent 2014 and other recent AWS enhancements here in part two of this two-part series.

    Ok, nuff said (for now)

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio


    Part II: Revisiting re:Invent 2014, Lambda and other AWS updates

    server storage I/O trends

    Part II: Revisiting re:Invent 2014 and other AWS updates

    This is part two of a two-part series about Amazon Web Services (AWS) re:Invent 2014 and other recent cloud updates, read part one here.

    AWS re:Invent 2014

    AWS re:Invent announcements

    Announcements and enhancements made by AWS during re:Invent include:

    • Key Management Service (KMS)
    • Amazon RDS for Aurora
    • Amazon EC2 Container Service
    • AWS Lambda
    • Amazon EBS Enhancements
• Application development, deployment and life-cycle management tools
    • AWS Service Catalog
    • AWS CodeDeploy
    • AWS CodeCommit
    • AWS CodePipeline

    AWS Lambda

In addition to announcing new higher performance Elastic Compute Cloud (EC2) instances along with the container service, another new service is AWS Lambda. Lambda is a service that automatically and quickly runs your application code in response to events, activities or other triggers. In addition to running your code, the Lambda service is billed in 100 millisecond increments along with corresponding memory use, vs. standard EC2 per-hour billing. What this means is that instead of paying for an hour of time for your code to run, you can choose to use the Lambda service with more fine-grained consumption billing.

The Lambda service can be used to have your code functions staged ready to execute. AWS Lambda can run your code in response to S3 bucket content (e.g. object) changes, messages arriving via Kinesis streams or table updates in databases. Some examples include responding to events such as a web-site click, responding to a data upload (photo, image, audio, file or other object), indexing, streaming or analyzing data, receiving output from a connected device (think Internet of Things IoT or Internet of Devices IoD), or triggering from an in-app event, among others. The basic idea with Lambda is to be able to pay for only the amount of time needed to do a particular function without having to have an AWS EC2 instance dedicated to your application. Initially Lambda supports Node.js (JavaScript) based code that runs in its own isolated environment.
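While Lambda initially supports only Node.js, the event-driven shape is easy to sketch. The following Python mock illustrates a handler reacting to an S3 upload notification; the event fields and function names are simplified assumptions for illustration, not the actual AWS event schema or API.

```python
# Sketch of Lambda's event-driven model: a handler runs only when invoked
# with an event payload, e.g. an S3 object-created notification. This is a
# simplified illustration, not the real AWS event format.
def handler(event, context=None):
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # ...index, transcode or analyze the uploaded object here...
        results.append("processed s3://%s/%s" % (bucket, key))
    return results

# Simulated upload notification
event = {"Records": [{"s3": {"bucket": {"name": "photos"},
                             "object": {"key": "cat.jpg"}}}]}
print(handler(event))  # → ['processed s3://photos/cat.jpg']
```

The point of the pattern: no server or instance sits idle waiting; the function exists only for the duration of each invocation.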

    AWS cloud example
    Various application code deployment models

The Lambda service is pay for what you consume; charges are based on the number of requests for your code function (e.g. application), the amount of memory and the execution time. There is a free tier for Lambda that includes 1 million requests and 400,000 GByte-seconds of time per month. A GByte-second is the amount of memory (e.g. DRAM vs. storage) consumed during a second. As an example, if your application runs 100,000 times for 1 second each, consuming 128MB of memory per run, that is 100,000 × 1 second × 0.125GB = 12,500 GByte-seconds. View various pricing models here on the AWS Lambda site that show examples for different memory sizes, number of times a function runs and run time.

How much memory you select for your application code determines how far it can run within the AWS free tier, which is available to both existing and new customers. Lambda fees are based on the total across all of your functions, starting when the code runs. Note that you could have from one to thousands or more different functions running in the Lambda service. As of this time, AWS is showing Lambda pricing as free for the first 1 million requests, and beyond that, $0.20 per 1 million requests ($0.0000002 per request) plus duration. Duration is measured from when your code starts running until it ends or otherwise terminates, rounded up to the nearest 100ms. The Lambda price also depends on the amount of memory you allocate for your code. Once past the 400,000 GByte-second per month free tier, the fee is $0.00001667 for every GB-second used.
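Putting the quoted rates together, a rough cost estimate can be sketched. The function below just encodes the rates and free tiers described above; the example workload numbers are hypothetical.

```python
# Rough Lambda cost estimator using the rates quoted above: $0.0000002 per
# request beyond 1M free, $0.00001667 per GB-second beyond 400,000 free,
# with duration rounded up to the nearest 100 ms.
def lambda_cost(requests, memory_mb, avg_ms):
    billed_ms = -(-avg_ms // 100) * 100                          # round up to 100 ms
    gb_seconds = requests * (billed_ms / 1000.0) * (memory_mb / 1024.0)
    req_cost = max(requests - 1_000_000, 0) * 0.0000002
    gbs_cost = max(gb_seconds - 400_000, 0) * 0.00001667
    return req_cost + gbs_cost

# 3 million requests of a 128MB function averaging 250 ms (billed as 300 ms):
# 112,500 GB-seconds stays inside the free tier, so only requests are billed.
print(round(lambda_cost(3_000_000, 128, 250), 2))  # → 0.4
```

In other words, for small bursty workloads the duration charge often stays in the free tier and the per-request charge dominates.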

    Why use AWS Lambda vs. an EC2 instance

Why would you use AWS Lambda vs. provisioning a container, an EC2 instance or running your application code function on a traditional or virtual machine?

If you need control and can leverage an entire physical server with its operating system (O.S.), application and support tools for your piece of code (e.g. JavaScript), that could be an option. If you simply need an isolated image instance (O.S., applications and tools) for your code on a shared virtual on-premises environment, then that can be an option. Likewise, if you need to move your application to an isolated cloud machine (CM) that hosts an O.S. along with your application, paying for those resources on, say, an hourly basis, that could be your option. If you simply need a lighter-weight container to drop your application into, that’s where Docker and containers come into play to off-load some of the traditional application dependency overhead.

However, if all you want to do is add some code logic to support processing activity, for example when an object, file or image is uploaded to AWS S3, without having to stand up an EC2 instance along with associated server, O.S. and complete application activity, that’s where AWS Lambda comes into play. Simply create your code (initially JavaScript), specify how much memory it needs, define what events or activities will trigger or invoke it, and you have a solution.

    View AWS Lambda pricing along with free tier information here.

    Amazon EBS Enhancements

AWS is increasing the performance and size of General Purpose SSD and Provisioned IOPS SSD volumes. This means that you can create volumes up to 16TB and 10,000 IOPS for AWS EBS general-purpose SSD volumes. For EBS Provisioned IOPS SSD volumes you can create up to 16TB and 20,000 IOPS. General-purpose SSD volumes deliver a maximum throughput (bandwidth) of 160 MBps, and Provisioned IOPS SSD volumes have been specified by AWS at 320 MBps when attached to EBS-optimized instances. Learn more about EBS capabilities here. Verify your IO size against AWS sizing information to avoid surprises, as all IO sizes are not considered to be the same. Learn more about Provisioned IOPS, optimized instances, EBS and EC2 fundamentals in this StorageIO AWS primer here.
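The IO-size caveat is simple to sanity-check, since throughput is just IOPS multiplied by IO size. With a 16KB IO size (an assumption for illustration, not an AWS specification) the quoted IOPS limits line up with the quoted bandwidth caps:

```python
# Throughput (MBps) = IOPS x IO size; all IO sizes are not the same, so the
# same volume hits either its IOPS cap or its bandwidth cap depending on
# how big each IO is.
def throughput_mbps(iops, io_size_kb):
    return iops * io_size_kb / 1024.0  # using 1 MB = 1024 KB

print(throughput_mbps(10_000, 16))  # general purpose SSD: 156.25 (~160 MBps cap)
print(throughput_mbps(20_000, 16))  # provisioned IOPS SSD: 312.5 (~320 MBps cap)
```

Run the same arithmetic with your own workload's IO size; larger IOs will hit the bandwidth cap well before the IOPS cap.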

Application development, deployment and life-cycle management tools

    In addition to compute and storage resource enhancements, AWS has also announced several tools to support application development, configuration along with deployment (life-cycle management). These include tools that AWS uses themselves as part of building and maintaining the AWS platform services.

    AWS Config (Preview e.g. early access prior to full release)

Management, reporting and monitoring capabilities, including data center infrastructure management (DCIM), for monitoring your AWS resources, configuration (including history), governance, change management and notifications. AWS Config enables capabilities similar to DCIM and a Change Management Database (CMDB), supporting troubleshooting and diagnostics, auditing, and resource and configuration analysis among other activities. Learn more about AWS Config here.

    AWS Service Catalog

    AWS announced a new service catalog that will be available in early 2015. This new service capability will enable administrators to create and manage catalogs of approved resources for users to use via their personalized portal. Learn more about AWS service catalog here.

    AWS CodeDeploy

To support rapid code deployment automation for EC2 instances, AWS has released CodeDeploy. CodeDeploy masks the complexity associated with deployment when adding new features to your applications, while reducing error-prone manual operations. As part of the announcement, AWS mentioned that they are using CodeDeploy as part of their own application development, maintenance, change management and deployment operations. While suited for at-scale deployments across many instances, CodeDeploy works with as few as a single EC2 instance. Learn more about AWS CodeDeploy here.

    AWS CodeCommit

For application code management, AWS will be making available in early 2015 a new service called CodeCommit. CodeCommit is a highly scalable, secure source control service that hosts private Git repositories. Supporting the standard functionality of Git, including collaboration, you can store things from source code to binaries while working with your existing tools. Learn more about AWS CodeCommit here.

    AWS CodePipeline

To support application delivery and release automation along with associated management tools, AWS is making available CodePipeline. CodePipeline is a tool (service) that supports builds, checking workflows, code staging, testing and release to production, including support for 3rd party tool integration. CodePipeline will be available in early 2015; learn more here.

    Additional reading and related items

Learn more about the above and other AWS services by actually trying hands-on using their free tier (AWS Free Tier). View AWS re:Invent produced breakout session videos here, audio podcasts here, and session slides here (all sessions may not yet be uploaded by AWS re:Invent).

    What this all means

    AWS amazon web services

AWS continues to invest as well as re-invest in its environment, both adding new feature functionality and expanding the extensibility of those features. This means that AWS, like other vendors or service providers, adds new check-box features; however, unlike some, they also increase the depth and extensibility of those capabilities. Besides adding new features and increasing the extensibility of existing capabilities, AWS is addressing both the data and information infrastructure, including compute (server), storage and database, and networking along with associated management tools, while also adding extra developer tools. Developer tools include life-cycle management supporting code creation, testing, tracking and change management among other management activities.

Another observation is that while AWS continues to promote the public cloud, such as the services they offer, as the present and future, they are also talking hybrid cloud. Granted you have to listen carefully, as you may not simply hear hybrid cloud used the way some toss it around; however, listen for and look into AWS Virtual Private Cloud (VPC), along with what you can do using various technologies via the AWS Marketplace. AWS is also speaking the language of enterprise and traditional IT, from an applications and development to a data and information infrastructure perspective, while also walking the cloud talk. What this means is that AWS realizes they need to help existing environments evolve and make the transition to the cloud, which means speaking their language vs. converting them to cloud conversations before migrating them to the cloud. These steps should make AWS practical for many enterprise environments looking to make the transition to public and hybrid cloud at their own pace, some faster than others. More on these and some related themes in future posts.

The AWS re:Invent event continues to grow year over year; I heard a figure of over 12,000 people, however it was not clear if that included exhibiting vendors, AWS people, attendees, analysts, bloggers and media among others. A simple validation is that the keynotes were in the larger rooms used by events such as EMCworld and VMworld when they were hosted in Las Vegas, as was the expo space, vs. what I saw last year at re:Invent. Unlike some large events such as VMworld, where at best there is a waiting queue or line to get into sessions or the hands-on lab (HOL), AWS re:Invent, while becoming more crowded, is still easy to get into, with time to use the HOL, which is of course powered by AWS, meaning you can resume later what you started while at re:Invent. Overall a good event and a nice series of enhancements by AWS; looking forward to next year's AWS re:Invent.

    Ok, nuff said (for now)

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review

    This is the first post of a two part series, read the second post here.

    Earlier this year I had the opportunity to test drive some Seagate 1200 12Gbs Enterprise SAS SSD’s as a follow-up to some earlier activity trying their Enterprise TurboBoost Drives. Disclosure: Seagate has been a StorageIO client and was also the sponsor of this white paper and associated proof-points mentioned in this post.

    The question to ask yourself is not if flash Solid State Device (SSD) technologies are in your future. Instead, the questions are when, where, using what, how to configure and related themes. SSD, including traditional DRAM and NAND flash-based technologies, is like real estate where location matters; however, there are different types of properties to meet various needs. This means leveraging different types of NAND flash SSD technologies in different locations in a complementary and cooperative (aka hybrid) way. For example, NAND flash SSD as part of an enterprise tiered storage strategy can be implemented server-side using PCIe cards, SAS and SATA drives as targets or as cache along with software, as well as leveraging SSD devices in storage systems or appliances.

    Seagate 1200 SSD
    Seagate 1200 Enterprise SAS 12Gbs SSD Image via Seagate.com

    Another place where NAND flash can be found, complementing SSD devices, is in so-called Solid State Hybrid Drives (SSHD) or Hybrid Hard Disk Drives (HHDD), including a new generation that accelerates writes as well as reads, such as those Seagate refers to as Enterprise TurboBoost drives (view the companion StorageIO Lab review TurboBoost white paper here). Read more about TurboBoost here and here.

    The best server and storage I/O is the one you do not have to do

    Keep in mind that the best server or storage I/O is the one you do not have to do, with the second best being the one with the least overhead, resolved as close to the processor (compute) as possible or practical. The following figure shows that the best place to resolve server and storage I/O is as close to the compute processor as possible; however, only a finite amount of memory and storage can be located there. This is where the server memory and storage I/O hierarchy comes into play, which is also often thought of in the context of tiered storage, balancing performance and availability with cost and architectural limits.

    Also shown is locality of reference, which refers to how close data is to where it is being used, and includes cache effectiveness or buffering. Hence a small amount of flash and DRAM cache in the right location can have a large benefit. If you can afford it, install as much DRAM along with flash storage as possible; however, if you are like most organizations with finite budgets yet server and storage I/O challenges, then deploy a tiered flash storage strategy.
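    The hierarchy and locality points above can be put into simple numbers. The following is a minimal sketch (my own illustration, not from the white paper; the latency figures are rough, assumed order-of-magnitude values) showing how cache hit ratio drives effective access time:

```python
# Rough, assumed order-of-magnitude access latencies (nanoseconds),
# for illustration only -- actual devices vary widely.
DRAM_NS = 100            # ~100 ns DRAM
FLASH_NS = 100_000       # ~100 us NAND flash SSD read
HDD_NS = 10_000_000      # ~10 ms HDD seek plus rotation

def effective_access_ns(hit_ratio: float, cache_ns: float, backing_ns: float) -> float:
    """Average access time with a cache in front of a slower backing tier."""
    return hit_ratio * cache_ns + (1.0 - hit_ratio) * backing_ns

# A 90% hit ratio in a flash cache in front of HDD cuts the average
# access time from ~10 ms to roughly 1.1 ms.
avg = effective_access_ns(0.90, FLASH_NS, HDD_NS)
```

This is why a relatively small, well-placed cache pays off: the average is dominated by the slow tier only in proportion to the miss ratio.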

    flash cache locality of reference
    Server memory storage I/O hierarchy, locality of reference

    Seagate 1200 12Gbs Enterprise SAS SSD’s

    Back to the Seagate 1200 12Gbs Enterprise SAS SSD which is covered in this StorageIO Industry Trends Perspective thought leadership white paper. The focus of the white paper is to look at how the Seagate 1200 Enterprise class SSD’s and 12Gbps SAS address current and next generation tiered storage for virtual, cloud, traditional Little and Big Data infrastructure environments.

    Seagate 1200 Enterprise SSD

    This includes providing proof points running various workloads including Database TPC-B, TPC-E and Microsoft Exchange in the StorageIO Labs along with cache software comparing SSD, SSHD and different HDD’s including 12Gbs SAS 6TB near-line high-capacity drives.

    Seagate 1200 Enterprise SSD Proof Points

    The proof points in this white paper are from an application focus perspective, representing more of an end-to-end, real-world situation. While they are not included in this white paper, StorageIO has run traditional storage building-block focused workloads, which can be found at StorageIOblog (Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?). These include tools such as Iometer, iorate and vdbench among others, for various I/O sizes, mixed, random, sequential, reads and writes, along with "hot-band" across different numbers of threads (concurrent users). "Hot-band" is part of the SNIA Emerald energy effectiveness metrics for looking at sustained storage performance using tools such as vdbench. Read more about other various server and storage I/O benchmarking tools and techniques here.

    For the following series of proof-points (TPC-B, TPC-E and Exchange), the system under test (SUT) consisted of a physical server (described with the proof-points) configured with VMware ESXi along with guest virtual machines (VMs) configured to do the storage I/O workload. Other servers were used in the case of the TPC workloads as application transaction requesters to drive the SQL Server database and the resulting server storage I/O workload. VMware was used in the proof-points to reflect a common industry trend of using virtual server infrastructures (VSI) to support applications including database, email and others. For the proof-point scenarios, the SUT along with the storage system/device under test were dedicated to that scenario (e.g. no other workload running) unless otherwise noted.

    Server Storage I/O config
    Server Storage I/O configuration for proof-points

    Microsoft Exchange Email proof-point configuration

    For this proof-point, Microsoft Jet Stress Exchange performance workloads were placed (e.g. Exchange Database – EDB file) on each of the different devices under test with various metrics shown including activity rates and response time for reads as well as writes. For the Exchange testing, the EDB was placed on the device being tested while its log files were placed on a separate Seagate 400GB Enterprise 12Gbps SAS SSD.

    Test configuration: Seagate 400GB 1200 2.5” SSD (ST400FM0073) 12Gbps SAS, 600GB 2.5” Enterprise 15K with TurboBoost™ (ST600MX) 6Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6Gbps SAS, Seagate Enterprise Capacity Nearline (ST6000NM0014) 6TB 3.5” 7.2K RPM HDD 12Gbps SAS, and 3TB 7.2K SATA HDD. Email server hosted as a guest on VMware vSphere/ESXi V5.5 running Microsoft SBS2011 Service Pack 1 64-bit. The guest VM (VMware vSphere 5.5) resided on an SSD-based data store, on a physical machine (host) with 14GB DRAM, quad CPU (4 x 3.192GHz) Intel E3-1225 v300, with LSI 9300 series 12Gbps SAS adapters in a PCIe Gen 3 slot, running Jet Stress 2010. All devices being tested were Raw Device Mapped (RDM) where the EDB resided; the VM was on a separate SSD-based data store from the devices being tested. Log file I/Os were handled via a separate SSD device, also persistent (no delayed writes). The EDB was 300GB and the workload ran for 8 hours.

    Microsoft Exchange VMware SSD performance
    Microsoft Exchange proof-points comparing various storage devices

    TPC-B (Database, Data Warehouse, Batch updates) proof-point configuration

    SSD’s are a good fit for transaction database activity with reads and writes as well as for query-based decision support systems (DSS), data warehouse and big data analytics. The following are proof points of SSD capabilities for database activity. In addition to supporting database table files and objects along with transaction journal logs, other uses include meta-data, import/export and other high-I/O, write-intensive scenarios. Two database workload profiles were tested: batch update (write-intensive) and transactional. Activity involved running Transaction Processing Performance Council (TPC) workloads TPC-B (batch update) and TPC-E (transactional/OLTP, simulating a financial trading system) against Microsoft SQL Server 2012 databases. Each test simulation had the SQL Server database (MDF) on a different device with the transaction log file (LDF) on a separate SSD. TPC-B results for a single device are shown below.

    The TPC-B (write-intensive) results below show how TPS work being done (blue) increases from left to right (more is better) for various numbers of simulated users. Also shown on the same line for each amount of TPS work being done is the average latency in seconds (right to left), where lower is better. Results are shown for each group of users (100, 50, 20 and 1) for the different drives being tested (top to bottom). Note how the SSD device does more work at a lower response time vs. traditional HDD's.
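    The TPS and latency columns in these charts are linked by Little's Law (N = X × R): for a fixed number of concurrent users, throughput can only rise if response time falls. A small sketch of that relationship (my own illustration, not part of the white paper):

```python
def throughput_tps(concurrent_users: int, response_time_s: float) -> float:
    """Little's Law: N = X * R, rearranged to X = N / R.

    With N concurrent users each waiting R seconds per transaction,
    the system sustains at most N / R transactions per second.
    """
    return concurrent_users / response_time_s

# 50 simulated users at 25 ms average response time sustain ~2000 TPS;
# halving latency to 12.5 ms doubles the ceiling to ~4000 TPS.
```

This is why the SSD rows show both more work and lower latency at the same user counts: the two metrics move together.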

    Test configuration: Seagate 400GB 1200 2.5” SSD (ST400FM0073) 12Gbps SAS, 600GB 2.5” Enterprise 15K with TurboBoost™ (ST600MX) 6Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6Gbps SAS, Seagate Enterprise Capacity Nearline (ST6000NM0014) 6TB 3.5” 7.2K RPM HDD 12Gbps SAS, and 3TB Seagate 7.2K SATA HDD. Workload generator and virtual clients ran Windows 7 Ultimate 64-bit. The Microsoft SQL Server 2012 database was on a Windows 7 guest. The guest VM (VMware vSphere 5.5) had a dedicated 14GB DRAM, quad CPU (4 x 3.192GHz) Intel E3-1225 v300, with LSI 9300 series 12Gbps SAS adapters in a PCIe Gen 3 slot, along with TPC-B (www.tpc.org) workloads.

    VM with guest OS along with SQL tempdb and masterdb resided on separate SSD based data store from devices being tested (e.g., where MDF (main database tables) and LDF (log file) resided). All devices being tested were Raw Device Mapped (RDM) independent persistent with database log file on a separate SSD device also persistent (no delayed writes) using VMware PVSCSI driver. MDF and LDF file sizes were 142GB and 26GB with scale factor of 10000, with each step running for one hour (10-minute preamble). Note that these proof-points DO NOT use VMware or any other third-party cache software or I/O acceleration tool technologies as those are covered later in a separate proof-point.

    TPC-B sql server database SSD performance
    TPC-B SQL Server database proof-points comparing various storage devices

    TPC-E (Database, Financial Trading) proof-point configuration

    The following shows results from the TPC-E test (OLTP/transactional workload) simulating a financial trading system. TPC-E is an industry-standard workload that performs a mix of read and write database queries. Proof-points were performed with various numbers of users (10, 20, 50 and 100) to determine Transactions Per Second (TPS, aka I/O rate) and response time in seconds. The TPC-E transactional results are shown for each device being tested across the different user workloads. The results show how TPC-E TPS work (blue) increases from left to right (more is better) for larger numbers of users, along with the corresponding latency (green) that goes from right to left (less is better). The Seagate Enterprise 1200 SSD is shown on the top of the figure below with a red box around its results. Note how the SSD has a lower latency while doing more work compared to the other traditional HDD's.

    Test configuration: Seagate 400GB 1200 2.5” SSD (ST400FM0073) 12Gbps SAS, 600GB 2.5” Enterprise 15K with TurboBoost™ (ST600MX) 6Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6Gbps SAS, Seagate Enterprise Capacity Nearline (ST6000NM0014) 6TB 3.5” 7.2K RPM HDD 12Gbps SAS, and 3TB Seagate 7.2K SATA HDD. Workload generator and virtual clients ran Windows 7 Ultimate 64-bit. The Microsoft SQL Server 2012 database was on a Windows 7 guest. The guest VM (VMware vSphere 5.5) had a dedicated 14GB DRAM, quad CPU (4 x 3.192GHz) Intel E3-1225 v300, with LSI 9300 series 12Gbps SAS adapters in a PCIe Gen 3 slot, along with TPC-E (www.tpc.org) workloads.

    VM with guest OS along with SQL tempdb and masterdb resided on separate SSD based data store from devices being tested (e.g., where MDF (main database tables) and LDF (log file) resided). All devices being tested were Raw Device Mapped (RDM) independent persistent with database log file on a separate SSD device also persistent (no delayed writes) using VMware PVSCSI driver. MDF and LDF file sizes were 142GB and 26GB with scale factor of 10000, with each step running for one hour (10-minute preamble). Note that these proof-points DO NOT use VMware or any other third-party cache software or I/O acceleration tool technologies as those are covered later in a separate proof-point.

    TPC-E sql server database SSD performance
    TPC-E (Financial trading) SQL Server database proof-points comparing various storage devices

    Continue reading part-two of this two-part series here including the virtual server storage I/O blender effect and solution.

    Ok, nuff said (for now).

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Part II: Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review

    This is the second post of a two part series, read the first post here.

    Earlier this year I had the opportunity to test drive some Seagate 1200 12Gbs Enterprise SAS SSD’s as a follow-up to some earlier activity trying their Enterprise TurboBoost Drives. Disclosure: Seagate has been a StorageIO client and was also the sponsor of this white paper and associated proof-points mentioned in this post.

    The Server Storage I/O Blender Effect Bottleneck

    The earlier proof-points focused on SSD as a target or storage device. In the following proof-points, the Seagate Enterprise 1200 SSD is used as a shared read cache (write-through). Using a write-through cache enables a given amount of SSD to give a performance benefit to other local and networked storage devices.
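    To make the write-through behavior concrete, here is a minimal Python sketch of the idea (my own illustration, not the actual cache software used in the proof-points): reads populate and are served from the cache, while writes always go straight through to the backing store, so the cache never holds dirty data and can safely front shared storage.

```python
class WriteThroughReadCache:
    """Minimal sketch of a write-through read cache in front of a backing store."""

    def __init__(self, backing: dict):
        self.backing = backing   # stands in for the HDD or networked storage
        self.cache = {}          # stands in for the SSD read cache
        self.hits = 0
        self.misses = 0

    def read(self, key):
        if key in self.cache:
            self.hits += 1
            return self.cache[key]
        self.misses += 1
        value = self.backing[key]
        self.cache[key] = value  # populate the cache on a miss
        return value

    def write(self, key, value):
        self.backing[key] = value  # write-through: backing store updated first
        self.cache[key] = value    # keep the cached copy coherent
```

Because writes are never held only in cache, losing the cache device costs performance, not data, which is what makes this mode attractive for accelerating existing storage.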

    traditional server storage I/O
    Non-virtualized servers with dedicated storage and I/O paths.

    Aggregation causes aggravation with I/O bottlenecks as a result of consolidation using server virtualization. The figure above shows non-virtualized servers, each with its own dedicated physical machine (PM) and I/O resources. When those servers are virtualized onto a common host (physical machine), their various workloads compete for I/O and other resources. In addition to competing for I/O performance resources, these different servers also tend to have diverse workloads.

    virtual server storage I/O blender
    Virtual server storage I/O blender bottleneck (aggregation causes aggravation)

    The figure above shows aggregation causing aggravation with the result being I/O bottlenecks as various applications performance needs converge and compete with each other. The aggregation and consolidation result is a blend of random, sequential, large, small, read and write characteristics. These different storage I/O characteristics are mixed up and need to be handled by the underlying I/O capabilities of the physical machine and hypervisor. As a result, a common deployment for SSD in addition to as a target device for storing data is as a cache to cut bottlenecks for traditional spinning HDD.
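    A small sketch of why this blending happens (illustrative only; the VM names and starting LBAs are made up): each VM issues nicely sequential I/O within its own region, but round-robin scheduling at the hypervisor interleaves the streams, so the shared device sees large jumps between consecutive requests.

```python
import itertools

# Each VM issues sequential 8-block strides within its own (hypothetical)
# region of the shared device.
vm_streams = {
    "sql":      itertools.count(0, 8),
    "exchange": itertools.count(100_000, 8),
    "backup":   itertools.count(200_000, 8),
}

# Round-robin scheduling at the hypervisor blends the streams together.
blended = []
for _ in range(4):  # four scheduling passes
    for vm, stream in vm_streams.items():
        blended.append((vm, next(stream)))

# Per-VM the pattern is sequential, but the device sees big LBA jumps
# between consecutive requests: the storage I/O blender effect.
jumps = [abs(blended[i + 1][1] - blended[i][1]) for i in range(len(blended) - 1)]
```

Spinning HDDs pay a seek penalty for every one of those jumps, which is exactly where an SSD cache (or SSD target) helps.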

    In the following figure a solution is shown, introducing I/O caching with SSD to help mitigate or cut the effects of server consolidation causing performance aggravations.

    Creating a server storage I/O blender bottleneck

    Addressing the VMware Server Storage I/O blender with cache

    Addressing server storage I/O blender and other bottlenecks

    For these proof-points, the goal was to create an I/O bottleneck resulting from multiple VMs in a virtual server environment performing application work. In this proof-point, multiple competing VMs, including a SQL Server 2012 database and an Exchange server, shared the same underlying storage I/O infrastructure including HDD's. The 6TB (Enterprise Capacity) HDD was configured as a VMware data store and allocated as virtual disks to the VMs. Workloads were then run concurrently to create an I/O bottleneck for both cached and non-cached results.

    Server storage I/O with virtualization proof-point configuration topology

    The following figure shows two sets of proof points, cached (top) and non-cached (bottom), with three workloads. The workloads consisted of concurrent Exchange and SQL Server 2012 (TPC-B and TPC-E) running in separate virtual machines (VMs), all on the same physical machine host (SUT), with database transactions being driven by two separate servers. In these proof-points, the application data was placed onto the 6TB SAS HDD to create a bottleneck, and a portion of the SSD was used as a cache. Note that the Virtunet cache software allows you to use part of an SSD device for cache, with the balance used as a regular storage target should you want to do so.

    If you have paid attention to the earlier proof-points, you might notice that some of the results below are not as good as those seen in the Exchange, TPC-B and TPC-E results above. The reason is simply that the earlier proof-points were run without competing workloads, and the database along with log or journal files were placed on separate drives for performance. In the following proof-point, as part of creating a server storage I/O blender bottleneck, the Exchange, TPC-B and TPC-E workloads were all running concurrently with all data on the 6TB drive (something you normally would not want to do).

    storage I/O blender solved
    Solving the VMware Server Storage I/O blender with cache

    The cache and non-cached mixed workloads shown above prove how an SSD based read-cache can help to reduce I/O bottlenecks. This is an example of addressing the aggravation caused by aggregation of different competing workloads that are consolidated with server virtualization.

    For the workloads shown above, all data (database tables and logs) were placed on VMware virtual disks created from a data store using a single 7.2K 6TB 12Gbps SAS HDD (e.g. Seagate Enterprise Capacity).

    The guest VM system disks which included paging, applications and other data files were virtual disks using a separate dat mapped to a single 7.2K 1TB HDD. Each workload ran for eight hours with the TPC-B and TPC-E having 50 simulated users. For the TPC-B and TPC-E workloads, two separate servers were used to drive the transaction requests to the SQL Server 2012 database.

    For the cached tests, a Seagate Enterprise 1200 400GB 12Gbps SAS SSD was used as the backing store for the cache software (Virtunet Systems Virtucache) that was installed and configured on the VMware host.

    During the cached tests, the physical HDD for the data files (e.g. 6TB HDD) and system volumes (1TB HDD) were read cache enabled. All caching was disabled for the non-cached workloads.

    Note that this was only a read cache, which has the side benefit of off-loading those activities, enabling the HDD to focus on writes or read-ahead. Also note that while the combined TPC-E, TPC-B and Exchange databases, logs and associated files represented over 600GB of data, there was also the combined space, and thus cache, impact of the two system volumes and their data. This simple workload and configuration is representative of how SSD caching can complement high-capacity HDD's.

    Seagate 6TB 12Gbs SAS high-capacity HDD

    While the star and focus of this series of proof-points is the Seagate 1200 Enterprise 12Gbs SAS SSD, the caching software (Virtunet) and Enterprise TurboBoost drives also play key supporting and favorable roles. However, the 6TB 12Gbs SAS high-capacity drive caught my attention from a couple of different perspectives. Certainly the space capacity was interesting, along with a 12Gbs SAS interface well suited for near-line, high-capacity and dense tiered storage environments. However, for a high-capacity drive its performance is what really caught my attention, both in the standard Exchange, TPC-B and TPC-E workloads, as well as when combined with SSD and cache software.

    This opens the door for a great combination: leveraging some amount of high-performance flash-based SSD (or TurboBoost drives) combined with cache software and high-capacity drives such as the 6TB device (Seagate now has larger versions available). Something else to mention is that the 6TB HDD, in addition to being available with either a 12Gbs SAS, 6Gbs SAS or 6Gbs SATA interface, also has enhanced durability with a Read Bit Error Rate of 1 per 10^15 (e.g. on average one unrecoverable read error per 10^15 bits read) and an AFR (annual failure rate) of 0.63% (see more speeds and feeds here). Hence if you are concerned about using large-capacity HDD's and having them fail, make sure you go with those that have a low Read Bit Error Rate (i.e. a higher 10^x figure) and a low AFR, which are more common with enterprise class vs. lower-cost commodity or workstation drives. Note that these high-capacity enterprise HDD's are also available with Self-Encrypting Drive (SED) options.
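    As a quick sanity check of what a 1-per-10^15 bit error rate means in practice (my own arithmetic, using the figures cited above):

```python
def expected_read_errors(bytes_read: float, bit_error_rate: float = 1e-15) -> float:
    """Expected number of unrecoverable read errors for a given volume of reads,
    assuming on average one error per 1/bit_error_rate bits read."""
    return bytes_read * 8 * bit_error_rate

# Reading a 6 TB drive end to end is 6e12 bytes = 4.8e13 bits, so with a
# 1-per-10^15 rate you expect ~0.048 errors per full pass, i.e. roughly one
# unrecoverable read error per ~20 full-drive reads.
per_full_pass = expected_read_errors(6e12)
```

At this capacity the error-rate exponent matters: a 10^14-class drive would see an expected error roughly every other full pass, which is why the enterprise 10^15 rating is worth paying attention to on big drives.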

    Summary

    Read more in this StorageIO Industry Trends and Perspective (ITP) white paper compliments of Seagate 1200 12Gbs SAS SSD’s and visit the Seagate Enterprise 1200 12Gbs SAS SSD page here. Moving forward there is the notion that flash SSD will be everywhere. There is a difference between all data on flash SSD vs. having some amount of SSD involved in preserving, serving and protecting (storing) information.

    Key themes to keep in mind include:

    • Aggregation can cause aggravation which SSD can alleviate
    • A relatively small amount of flash SSD in the right place can go a long way
    • Fast flash storage needs fast server storage I/O access hardware and software
    • Locality of reference with data close to applications is a performance enabler
    • Flash SSD everywhere does not mean everything has to be SSD based
    • Having some amount of flash in different places is important for flash everywhere
    • Different applications have various performance characteristics
    • SSD as a storage device or persistent cache can speed up IOPs and bandwidth

    Flash and SSD are in your future; this comes back to the questions of how much flash SSD you need, along with where to put it, how to use it and when.

    Ok, nuff said (for now).

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    What does server storage I/O scaling mean to you?

    Scaling means different things to various people depending on the context or what it is referring to.

    For example, scaling can mean having or doing more of something, or less, as well as referring to how more, or less, of something is implemented.

    Scaling occurs in a couple of different dimensions and ways:

    • Application workload attributes – Performance, Availability, Capacity, Economics (PACE)
    • Stability without compromise or increased complexity
    • Dimension and direction – Scaling-up (vertical), scaling-out (horizontal), scaling-down

    Scaling PACE – Performance Availability Capacity Economics

    Often I hear people talk about scaling only in the context of space capacity. However, there are other aspects including performance and availability, as well as scaling-up or scaling-out. Scaling from an application workload perspective includes four main themes: performance, availability, capacity and economics (as well as energy).

    • Performance – Transactions, IOP’s, bandwidth, response time, errors, quality of service
    • Availability – Accessibility, durability, reliability, HA, BC, DR, Backup/Restore, BR, data protection, security
    • Capacity – Space to store information or place for workload to run on a server, connectivity ports for networks
    • Economics – Capital and operating expenses, buy, rent, lease, subscription

    Scaling with Stability

    Stability, the second of the above items, should be thought of more as a by-product, result or goal of implementing scaling. Scaling should not result in a compromise of some other attribute, such as increasing performance at the cost of capacity, or increased complexity. Scaling with stability also means that as you scale in some direction, or across some attribute (e.g. PACE), there should not be a corresponding increase in management complexity, or loss of performance and availability. To use a popular buzz-term, scaling with stability means performance, availability, capacity and economics should scale linearly with their capabilities, or perhaps cost less.

    Scaling directions: Scaling-up, scaling-down, scaling-out

    server and storage i/o scale options

    Some examples of scaling in different directions include:

    • Scaling-up (vertical scaling with bigger or faster)
    • Scaling-down (vertical scaling with less)
    • Scaling-out (horizontal scaling with more of what is being scaled)
    • Scaling-up and out (combines vertical and horizontal)
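    A toy model of the difference between the directions above (entirely my own illustration; the 0.9 per-node efficiency factor is an assumed value, and real-world overheads vary):

```python
def scale_up(base_throughput: float, speedup: float) -> float:
    """Vertical scaling: one bigger or faster resource."""
    return base_throughput * speedup

def scale_out(base_throughput: float, nodes: int, efficiency: float = 0.9) -> float:
    """Horizontal scaling: more nodes, discounted by an assumed per-node
    efficiency factor modeling coordination overhead. Scaling with
    stability would mean efficiency stays near 1.0 as nodes are added."""
    return base_throughput * nodes * efficiency ** (nodes - 1)

# Starting from one node at 1000 IOPS:
#   scale_up(1000, 2.0)  -> one node, twice as fast
#   scale_out(1000, 4)   -> 4000 * 0.9**3, i.e. ~2916 after overhead
```

The point of the efficiency term is the stability discussion above: if each added node erodes per-node performance or adds management complexity, you are scaling, but not scaling with stability.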

    Of course you can combine the above in various combinations, such as the example of scaling up and out, as well as apply different names and nomenclature to suit your needs or preferences. The following is a closer look at the above with some simple examples.

    server and storage i/o scale up
    Example of scaling up (vertically)

    server and storage i/o scale down
    Example of scaling-down (e.g. for smaller scenarios)

    server and storage i/o scale out
    Example of scaling-out (horizontally)

    server and storage i/o scale out
    Example of scaling-out and up (horizontally and vertically)

    Summary and what this means

    There are many aspects to scaling, as well as side-effects or impacts as a result of scaling.

    Scaling can refer to different workload attributes as well as how to support those applications.

    Regardless of what you view scaling as meaning, keep in mind the context of where and when it is used, and that others might have another view of scale.

    Ok, nuff said (for now)…

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    September October 2014 Server and StorageIO Update Newsletter

    September and October 2014

    Hello and welcome to this joint September and October Server and StorageIO update newsletter. Since the August newsletter, things have been busy with a mix of behind the scenes projects, as well as other activities including several webinars, on-line along with in-person events in the US as well as Europe.

    Enjoy this edition of the Server and StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts along with in the news commentary appearing soon.

    Cheers gs

    Industry Trends and Perspectives

    Storage trends

    In September I was invited to do a keynote opening presentation at the MSP-area CMG event. The theme for the September CMG event was "Flash – A Real Life Experience", with a focus on what people are doing and how they are testing and evaluating, including use of hybrid solutions, as opposed to vendor marketing sessions. My session was titled "Flash back to reality – Myths and Realities, Flash and SSD Industry trends perspectives plus benchmarking tips" and can be found here. Thanks to Tom Becchetti and the MSP CMG (@mspcmg) folks for a great event.

    There are many facets to hybrid storage, including different types of media (SSD and HDD's) along with unified or multi-protocol access. Then there is hybrid storage that spans local and public clouds. Here is a link to an on-line Internet Radio show via Information Week, along with an on-line chat, about Hybrid Storage for Government.

    Some things I’m working with or keeping an eye on include Cloud, Converged solutions, Data Protection, Business Resiliency, DCIM, Docker, InfiniBand, Microsoft (Hyper-V, SOFS, SMB 3.0), Object Storage, SSD, SDS, VMware and VVOL among others items.

    Commentary In The News

    StorageIO news

    A lot has been going on in the IT industry since the last StorageIO Update newsletter. The following are some StorageIO industry trends perspectives comments that have appeared in various venues. Cloud conversations continue to be popular including concerns about privacy, security and availability. Here are some comments at SearchCloudComputing: about moving on from cloud deployment heartbreak.

    NAND flash Solid State Devices (SSD) continue to increase in customer deployments. Over at Processor, here are some comments on Incorporating SSD's Into Your Storage Plan. Also on SSD, here are some perspectives making the Argument For Flash-Based Storage. Some other comments over at Processor.com include looking At Disaster Recovery As A Service, tips on what to Avoid In Data Center Planning, making the most of Enterprise Virtualization, as well as New Tech, Advancements To Justify Servers. Part of controlling and managing storage costs is having timely insight and metrics that matter; here are some more perspectives and also here.

    Over at SearchVirtualStorage I have some comments on how to configure and manage storage for a virtual desktop environment (VDI) while over at TechPageOne there are perspectives on top reasons to switch to Windows 8. 

    Some other comments and perspectives are over at EnterpriseStorageForum including Top 10 Ways to Improve Data Center Energy Efficiency. At InfoStor there are comments and tips about Object Storage, while at SearchDataBackup I have some perspectives about Symantec being broken up.

    View other industry trends comments here

    Tips and Articles

    Recent Server and StorageIO tips and articles appearing in various venues include, over at SearchCloudStorage, a series of frequently asked question discussion pieces:

    Are you concerned with the security of the cloud?
    Is the cost of cloud storage really cheaper?
    What’s important to know about cloud privacy policy?
    Are more than five nines of availability really possible?
    What to look for in an enterprise file sync-and-share app?
    How do primary storage clouds and cloud backup differ?
    What should I consider when using SSD cloud?
    What is the difference between a snapshot and a clone?

    View other recent as well as past tips and articles here

    StorageIOblog posts

    Recent StorageIOblog posts include:

    View other recent as well as past blog posts here

    In This Issue

  • Industry Trends Perspectives
  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Events & Activities

    September 25, 2014
    MSP CMG – Flash and SSD performance

    October 8-10, 2014
    Nijkerk Netherlands Brouwer Seminar Series

    November 11-13, 2014
    AWS re:Invent Las Vegas

    View other recent and upcoming events here

    Webinars

    November 13 9AM PT
    BrightTalk – Software Defined Storage

    November 11 10AM PT
    Google+ Hangout Dell BackupU

    November 11 9AM PT
    BrightTalk – Software Defined Data Centers

    October 16 9AM PT
    BrightTalk – Cloud Storage Decision Making

    October 15 1PM PT
    BrightTalk – Hybrid Cloud Trends

    October 7 11AM PT
    BackupU – Data Protection Management

    September 18 8AM CT
    Nexsan – Hybrid Storage

    September 18 9AM PT
    BrightTalk – Converged Storage

    September 17 1PM PT
    BrightTalk – DCIM

    September 16 1PM PT
    BrightTalk – Data Center Convergence

    September 16 Noon PT
    BrightTalk – BC, BR and DR

    September 16 1PM CT
    StarWind – SMB 3.0 & Microsoft SOFS

    September 16 9AM PT
    Google+ Hangout – BackupU – Replication

    September 2 11AM PT
    Dell BackupU – Replication

    Videos and Podcasts

    Docker for Smarties
    Video: Docker for Smarties

    StorageIO podcasts are also available at StorageIO.tv

    From StorageIO Labs

    Research, Reviews and Reports

    Enterprise 12Gbps SAS and SSD’s
    Better Together – Part of an Enterprise Tiered Storage Strategy

    In this StorageIO Industry Trends Perspective thought leadership white paper we look at how enterprise class SSD’s and 12Gbps SAS address current and next generation tiered storage for virtual, cloud, and traditional Little and Big Data environments. This report includes proof points from running various workloads, including TPC-B and TPC-E databases and Microsoft Exchange, in the StorageIO Labs, along with cache software comparing SSD, SSHD and HDD’s. Read the white paper, compliments of Seagate 1200 12Gbps SAS SSD’s.

    Seagate SSD White Paper

    Resources and Links

    Check out these useful links and pages:
    storageio.com/links
    objectstoragecenter.com
    storageioblog.com/data-protection-diaries-main/
    storageio.com/ssd

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Seagate has shipped over 10 Million storage HHDD’s, is that a lot?


    Recently Seagate made an announcement that they have shipped over 10 million Hybrid Hard Disk Drives (HHDD), also known as Solid State Hybrid Drives (SSHD), over the past few years. Disclosure: Seagate has been a StorageIO client.

    I know where some of those desktop class HHDD’s, including Momentus XT’s, ended up, as I bought some of the 500GB and 750GB models via Amazon and have them in various systems. Likewise I have installed in VMware servers the newer generation of enterprise class SSHD’s, which Seagate now refers to as Turbo models, as companions to my older HHDD’s.

    What is a HHDD or SSHD?

    HHDD’s continue to evolve, from initially accelerating reads to now being capable of speeding up write operations across different families (desktop/mobile, workstation and enterprise). What makes an HHDD or SSHD is that, as the name implies, they are a hybrid combining a traditional spinning magnetic Hard Disk Drive (HDD) with flash SSD storage. The persistent flash memory is in addition to the DRAM (non-persistent memory) typically found on HDD’s used as a cache buffer. These HHDD’s or SSHD’s are self-contained in that the flash is built into the actual drive as part of its internal electronics circuit board (controller). This means the drives are transparent to operating systems or hypervisors on servers or storage controllers, with no need for special adapters, controller cards or drivers. In addition, there is no extra software needed to automate tiering or movement between the flash on the HHDD or SSHD and its internal HDD; it is all self-contained, managed by the drive’s firmware (e.g. software).
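    To make the idea of drive-managed tiering concrete, here is a minimal, purely illustrative Python sketch of the kind of hot-block promotion a hybrid drive’s firmware might perform internally. The class, thresholds and eviction policy are my assumptions for illustration only, not Seagate’s actual caching algorithm.

```python
# Hypothetical sketch (not real drive firmware): how a hybrid drive's
# controller could transparently promote frequently read blocks from
# the spinning HDD into its built-in flash cache.
from collections import Counter

class HybridDriveSketch:
    def __init__(self, flash_blocks=4, promote_after=3):
        self.flash = set()            # LBAs currently cached in flash
        self.hits = Counter()         # read counts per LBA
        self.flash_blocks = flash_blocks
        self.promote_after = promote_after

    def read(self, lba):
        """Return which medium served this read: 'flash' or 'hdd'."""
        self.hits[lba] += 1
        if lba in self.flash:
            return "flash"
        # Promote a hot block once it crosses the threshold,
        # evicting the least-read cached block if flash is full.
        if self.hits[lba] >= self.promote_after:
            if len(self.flash) >= self.flash_blocks:
                coldest = min(self.flash, key=lambda b: self.hits[b])
                self.flash.discard(coldest)
            self.flash.add(lba)
        return "hdd"

drive = HybridDriveSketch()
for _ in range(3):
    drive.read(42)        # third read promotes LBA 42 into flash
print(drive.read(42))     # later reads of LBA 42 are served from flash
```

    The point of the sketch is simply that the host sees one block device; which medium answers a read is decided entirely inside the drive.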

    Some SSHD and HHDD industry perspectives

    Jim Handy over at Objective Analysis has this interesting post discussing Hybrid Drives Not Catching On. The following is an excerpt from Jim’s post.

    Why were our expectations higher? 

    There were a few reasons: The hybrid drive can be viewed as an evolution of the DRAM cache already incorporated into nearly all HDDs today. 

    • Replacing or augmenting an expensive DRAM cache with a slower, cheaper NAND cache makes a lot of sense.
    • An SSHD performs much better than a standard HDD at a lower price than an SSD. In fact, an SSD of the same capacity as today’s average HDD would cost about an order of magnitude more than the HDD. The beauty of an SSHD is that it provides near-SSD performance at a near-HDD price. This could have been a very compelling sales proposition had it been promoted in a way that was understood and embraced by end users.
    • Some expected for Seagate to include this technology into all HDDs and not to try to continue using it as a differentiator between different Seagate product lines. The company could have taken either of two approaches: To use hybrid technology to break apart two product lines – standard HDDs and higher-margin hybrid HDDs, or to merge hybrid technology into all Seagate HDDs to differentiate Seagate HDDs from competitors’ products, allowing Seagate to take slightly higher margins on all HDDs. Seagate chose the first path.

    The net result is shipments of 10 million units since its 2010 introduction, for an average of 2.5 million per year, out of a total annual HDD shipments of around 500 million units, or one half of one percent.

    Continue reading more of Jim’s post here.

    In his post, Jim raises some good points, including that HHDD’s and SSHD’s are still a fraction of the overall HDD’s shipped on an annual basis. However, IMHO the annual growth rate has not been a flat average of 2.5 million; rather it started at a lower rate and then increased year over year. For example, Seagate issued a press release back in summer 2011 saying they had shipped a million HHDD’s in the year after their release. Also keep in mind that those HHDD’s were focused on desktops and workstations, and in particular at gamers, among others.

    The early HHDD’s such as the Momentus XT’s that I was using starting in June 2010 only had read acceleration, which was better than HDD’s, however it did not help on writes. Over the past couple of years there have been enhancements to HHDD’s, including the newer generation also known as SSHD’s or Turbo drives as Seagate now calls them. These newer drives include write acceleration as well, with models for mobile/laptop, workstation and enterprise class use, including higher-performance and high-capacity versions. Thus my estimate or analysis has the growth on an accelerating curve vs. a linear growth rate (e.g. an average of 2.5 million units per year).

    Period       Units shipped per year    Running total units shipped
    2010-2011    1.0 Million               1.0 Million
    2011-2012    1.25 Million (est.)       2.25 Million (est.)
    2012-2013    2.75 Million (est.)       5.0 Million (est.)
    2013-2014    5.0 Million (est.)        10.0 Million

    StorageIO estimates on HHDD/SSHD units shipped based on Seagate announcements

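    For what it’s worth, the arithmetic behind these figures can be checked in a few lines. Only the 1 million and 10 million milestones come from Seagate announcements; the yearly splits are my estimates.

```python
# Back-of-envelope check of the HHDD/SSHD shipment figures discussed above.
# Yearly splits are StorageIO estimates, not Seagate-reported numbers.
yearly_est = {            # units shipped per year (millions, estimated)
    "2010-2011": 1.00,
    "2011-2012": 1.25,
    "2012-2013": 2.75,
    "2013-2014": 5.00,
}
total = sum(yearly_est.values())        # running total shipped (millions)
flat_avg = total / len(yearly_est)      # Jim Handy's flat-average view
hdd_annual = 500.0                      # approx. total HDD market, millions/yr
share = flat_avg / hdd_annual * 100     # share of annual HDD shipments

print(f"total shipped: {total:.1f}M, flat average: {flat_avg:.2f}M/yr")
print(f"share of ~500M annual HDD units: {share:.2f}%")
```

    Either way you slice it the totals match: 10 million units, a flat average of 2.5 million per year, and roughly half of one percent of annual HDD shipments; the disagreement is only over whether the curve is flat or accelerating.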

    However, IMHO there is more to the story beyond the number of HHDD’s/SSHD’s shipped, or whether they are accelerating in deployment vs. growing at an average rate. Some of those perspectives are in my comments over on Jim Handy’s site, with an excerpt below.

    In talking with IT professionals (e.g. what the vendors/industry calls users/customers), they are generally not aware that these devices exist, or if they are, they are only aware of what was available in the past (e.g. the consumer class read-optimized versions). I do talk with some who are aware of the newer generation devices, however their comments are usually tied to lack of system integrator (SI) or vendor/OEM support, or sole-source concerns. There was also a focus on promoting HHDD’s to “gamers” or other power users as opposed to broader marketing efforts, and most of these IT people are not aware of the newer generation of SSHD or what Seagate is now calling “Turbo” drives.

    When talking with VAR’s, there is a similar reaction which is discussion about lack of support for HHDD’s or SSHD’s from the SI/vendor OEMs, or single source supply concerns. Also a common reaction is lack of awareness around current generation of SSHD’s (e.g. those that do write optimization, as well as enterprise class versions).

    When talking with vendors/OEMs, there is a general lack of awareness of the newer enterprise class SSHD’s/HHDD’s that do write acceleration. Sometimes there is concern over how this would disrupt their “hybrid” SSD + HDD or tiering marketing stories/strategies, as well as comments about single-source suppliers. I have also heard comments to the effect of concern about how long or committed the drive manufacturers will be to SSHD/HHDD, or whether this is just a gap filler for now.

    Not surprisingly when I talk with industry pundits, influencers, amplifiers (e.g. analyst, media, consultants, blogalysts) there is a reflection of all the above which is lack of awareness of what is available (not to mention lack of experience) vs. repeating what has been heard or read about in the past.

    IMHO while there are some technology hurdles, the biggest issue and challenge is that of some basic marketing and business development to generate awareness with the industry (e.g. pundits), vendors/OEMs, VAR’s, and IT customers, that is of course assuming SSHD/HHDD are here to stay and not just a passing fad…

    What about SSHD and HHDD performance on reads and writes?

    What about the performance of today’s HHDD’s and SSHD’s, particularly those that can accelerate writes as well as reads?

    SSHD and HHDD read / write performance exchange
    Enterprise Turbo SSHD read and write performance (Exchange Email)


    SSHD and HHDD performance TPC-B
    Enterprise Turbo SSHD read and write performance (TPC-B database)

    SSHD and HHDD performance TPC-E
    Enterprise Turbo SSHD read and write performance (TPC-E database)

    Additional details and information about HHDD/SSHD or as Seagate now refers to them Turbo drives can be found in two StorageIO Industry Trends Perspective White Papers (located here and another here).

    Where to learn more

    Refer to the following links to learn more about HHDD and SSHD devices.
    StorageIO Momentus Hybrid Hard Disk Drive (HHDD) Moments
    Enterprise SSHD and Flash SSD
    Part of an Enterprise Tiered Storage Strategy

    Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?
    2011 Summer momentus hybrid hard disk drive (HHDD) moment
    More Storage IO momentus HHDD and SSD moments part I
    More Storage IO momentus HHDD and SSD moments part II
    New Seagate Momentus XT Hybrid drive (SSD and HDD)
    Another StorageIO Hybrid Momentus Moment
    SSD past, present and future with Jim Handy

    Closing comments and perspectives

    I continue to be bullish on hybrid storage solutions, from cloud to storage systems as well as hybrid storage devices. However, like many technologies, just because something makes sense or is interesting does not mean it is a near-term or long-term winner. My main concern with SSHD’s and HHDD’s is whether manufacturers such as Seagate and WD are serious about making them a standard feature in all drives, or simply see them as a near-term stop-gap solution.

    What’s your take or experience with using HHDD and/or SSHDs?

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    CompTIA needs input for their Storage+ certification, can you help?

    CompTIA needs input for their Storage+ certification, can you help?

    The CompTIA folks are looking for some comments and feedback from those who are involved with data storage in various ways as part of planning for their upcoming enhancements to the Storage+ certification testing.

    As a point of disclosure, I am a member of the CompTIA Storage+ certification advisory committee (CAC); however I don’t get paid or receive any other type of remuneration for contributing my time to give them feedback and guidance, other than a thanks and an Atta boy for giving back and paying it forward to help others in the IT community, similar to what my predecessors did.

    I have been asked to pass this along to others (e.g. you, or whoever forwards it on to you).

    Please take a few moments and feel free to share with others this link here to the survey for CompTIA Storage+.

    What they are looking for is to validate the exam blueprint generated from a recent Job Task Analysis (JTA) process.

    In other words, does the certification exam show real-world relevance to what you and your associates actually do with data storage?

    This is as opposed to being aligned with those whose job it is to create test questions and who may not understand what you, the IT pro involved with storage, do or do not do.

    If you have ever taken a certification exam and scratched your head or wondered why questions that seem to lack real-world relevance were included, while ones based on practical on-the-job experience were missing, here’s your chance to give feedback.

    Note that you will not be rewarded with an Amex or Amazon gift card, Starbucks or Dunkin Donuts certificates, a free software download or some other incentive to play and win; however if you take the survey let me know and I will be sure to tweet you an Atta boy or Atta girl! That said, they are giving a free T-shirt to every tenth survey taker.

    Btw, if you really need something for free, send me a note (I’m not that difficult to find), as I have some free copies of Resilient Storage Networking (RSN): Designing Flexible Scalable Data Infrastructures (Elsevier); you simply pay shipping and handling. RSN can be used to help prepare you for various storage tests as well as other day-to-day activities.

    CompTIA is looking for survey takers who have some hands-on experience or involvement with data storage (e.g. if you can spell SAN, NAS, Disk or SSD and work with them hands-on, then you are a candidate ;).

    Welcome to the CompTIA Storage+ Certification Job Task Analysis (JTA) Survey

  • Your input will help CompTIA evaluate which test objectives are most important to include in the CompTIA Storage+ Certification Exam
  • Your responses are completely confidential.
  • The results will only be viewed in the aggregate.
  • Here is what (and whom) CompTIA is looking for feedback from:

  • Has at least 12 to 18 months of experience with storage-related technologies.
  • Makes recommendations and decisions regarding storage configuration.
  • Facilitates data security and data integrity.
  • Supports a multiplatform and multiprotocol storage environment with little assistance.
  • Has basic knowledge of cloud technologies and object storage concepts.
  • As a small token of CompTIA appreciation for your participation, they will provide an official CompTIA T-shirt to every tenth (1 of every 10) person who completes this survey. Go here for official rules.

    Click here to complete the CompTIA Storage+ survey

    Contact CompTIA with any survey issues, research@comptia.org

    What say you? Take a few minutes like I did and give some feedback; you will not be on the hook for anything, and if you do get spammed by the CompTIA folks, let me know and I in turn will spam them back for spamming you as well as me.

    Ok, nuff said

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    June 2014 Server and StorageIO Update newsletter

    Server and StorageIO Update newsletter – June 2014

    Welcome to the June 2014 edition of the StorageIO Update (newsletter) containing trends perspectives on cloud, virtualization and data infrastructure topics. June has been busy on many fronts with lots of activities, not to mention spring and summer are finally here in the Stillwater MN area.

    Speaking of busy, either the spring rains came a month or two late or the summer storms came early, as we will end up with one of the rainiest Junes in history (if not the rainiest) here in the Stillwater MN area.

    Greg Schulz @StorageIO

    Industry and Technology Updates

    There has also been plenty of activity in Information Technology (IT) and in particular the data infrastructure sector (databases, file systems, operating systems, servers, storage, I/O networking, cloud, virtualization, SSD, data protection and DCIM among others). SanDisk announced its intention to buy SSD vendor Fusion-io for $1.1 billion as part of a continued flash consolidation trend; for example, Cisco bought Whiptail, WD bought Virident, and Seagate bought the Avago/LSI flash division, among others (read more about flash SSD here). Even with flash SSD vendor and technology consolidation, this is in no way an indication of an unhealthy market. Quite the opposite: flash SSD has a very bright future, we are still in the relatively early waves, and flash will be in your future. The question remains how much, when, where, with what and from whom. Needless to say there is plenty of SSD related hardware and software activity occurring in the StorageIO labs as well as at StorageIO.com/SSD ;).

    StorageIO Industry Trends and Perspectives

    NetApp Updates

    In early June I was invited by NetApp to attend their annual analyst summit along with many others from around the world for a series of briefings, NDA updates and other meetings. Disclosure NetApp has been a client in the past and covered travel and lodging expenses to attend their event.

    While the material under NDA naturally cannot be discussed, there was discussion around NetApp’s previously announced earnings, their continued relationship with IBM (for the E Series) along with the June product updates. Shortly after the NetApp event they announced enhancements to their ONTAP FAS based systems that follow up on those released earlier this year. NetApp claims these recent enhancements make them their fastest FAS based systems ever.

    Given the success NetApp has had with their ONTAP FAS based systems including with FlexPod, it should not be a surprise that they continue to focus on those as their flagship offerings. What was clear from listening to CEO Tom Georgens is that NetApp as a company needs to offer, promote and sell the entire portfolio including E Series (disk, hybrid and all flash EF), StorageGrid (bycast), FlexPod and FAS among other tools (software defined storage management) and services (for legacy, virtual and cloud). Watch for some interesting updates and enhancements for the above and other things from NetApp in the future.

    Staying busy is a good thing

    What have I been doing during June 2014 to stay busy, besides getting ready for summer fun in and around the water (as well as preparing for fall industry events)?

    • Presented several BrightTalk Webinars (see events below) with more coming up
    • Release of new ITP white paper and StorageIO lab proof points with more in the works
    • More videos and pod casts, technology reviews including servers among other things
    • Moderated a software defined panel discussion at MSP area VMUG
    • Providing industry commentary in different venues (see below)
    • Not to mention various client consulting projects

    What’s in the works?

    Several projects and things are in the works that will show themselves in the coming weeks or months if not sooner. Some of which are more proof points coming out of the StorageIO labs involving software defined, converged, cloud, virtual, SSD, data protection and more.

    Speaking of Software Defined, join me for a free Spiceworks Webinar on July 2, Do More with Less Hardware Using Software Defined Storage Management (sponsored by Starwind Software). The webinar looks at the many faces and facets of virtualization and software defined storage and software defined storage management for Hyper-V environments. Learn more about the Hyper-V event here or here.

    In addition to the upcoming July 2 Hyper-V software defined storage webinar (a recording for replay will be posted to the StorageIO.com/events page after the event), I also did webinars on BrightTalk a few weeks ago covering software defined storage management. View the BrightTalk webinar replays by clicking the following links: The Changing Face and Landscape of Enterprise Storage (June 11), The Many Facets of Virtual Storage and Software Defined Storage Virtualization (June 12), and Evolving from Disaster Recovery and Business Continuity (BC) to Business Resiliency (BR) (recorded June 19).

    Watch for more StorageIO posts, commentary, perspectives, presentations, webinars, tips and events on information and data infrastructure topics, themes and trends. Data Infrastructure topics include among others cloud, virtual, legacy server, storage I/O networking, data protection, hardware and software.

    Enjoy this edition of the StorageIO Update newsletter and look forward to catching up with you live or online while out and about this spring.

    Ok, nuff said (for now)

    Cheers gs

    June 2014 Industry trend and perspectives

    Tips, commentary, articles and blog posts

    StorageIO Industry Trends and Perspectives

    The following is a synopsis of some StorageIOblog posts, articles and comments in different venues on various industry trends and perspectives about clouds, virtualization, and data and storage infrastructure topics, among related themes.

    StorageIO comments and perspectives in the news

    StorageIO in the news

    Toms Hardware: Comments on Selecting the Right Type, Amount and Location of Flash SSD to use 
    TechPageOne: Comments on best practices for virtual data protection
    SearchAWS: Comments on Google vs. AWS SSD which is better
    InfoStor: Comments on Cloud Appliance Buying Guide

    StorageIO video and audio pod casts

    StorageIO audio podcasts are also available at StorageIO.tv

    StorageIOblog posts and perspectives

    StorageIOblog post

  • Is there an information or data recession, are you using less storage (with polls)
  • April and May 2014 Server and StorageIO Update newsletter
  • StorageIO White Papers, Solution Briefs and StorageIO Lab reports

    White Paper

    New White Paper: StarWind Virtual SAN:
    Hardware Agnostic Hyper-Convergence for Microsoft Hyper-V
    Using less hardware with software defined storage management

    There is no such thing as an information recession, with more data being generated, processed, moved, stored and retained longer. In addition, people and data are living longer as well as getting larger.

    Key to support various types of business environments and their information technology (IT) / ITC applications are cost effective, flexible and resilient data infrastructures that support virtual machine (VM) centric solutions. This StorageIO Industry Trends Perspective thought leadership white paper looks at addressing the needs of Microsoft Hyper-V environments to address economic, service, growth, flexibility and technology challenges.

    The focus is on how software defined storage management solutions unlock the full value of server-based storage for Hyper-V environments. Benefits include removing complexity to cut cost while enhancing flexibility, service and business systems resiliency along with disaster recovery without compromise. Primary audiences include Small Medium Business (SMB), Remote Office Branch Office (ROBO) of larger organizations along with managed service providers (Cloud, Internet and Web) that are using Hyper-V as part of their solutions. Read more in this StorageIO Industry Trends and Perspective (ITP) white paper compliments of StarWind Software Virtual SAN (VSAN) for Microsoft Hyper-V.

    Remember to check out our objectstoragecenter.com page where you will find a growing collection of information and links on cloud and object storage themes, technologies and trends from various sources.

    If you are interested in data protection including Backup/Restore, BC, DR, BR and Archiving along with associated technologies, tools, techniques and trends visit our storageioblog.com/data-protection-diaries-main/ page.

    StorageIO events and activities

    Server and StorageIO seminars, conferences, web casts, events, activities

    The StorageIO calendar continues to evolve, here are some recent and upcoming activities including live in-person seminars, conferences, keynote and speaking activities as well as on-line webinars, twitter chats, Google+ hangouts among others.

    October 10, 2014 – Nijkerk, Netherlands
    Seminar: Server, Storage and IO Data Center Virtualization Jumpstart

    October 9, 2014 – Nijkerk, Netherlands
    Seminar: Data Infrastructure Industry Trends and Perspectives – What’s The Buzz

    October 8, 2014 – Nijkerk, Netherlands
    Private Seminar – Contact Brouwer Storage Consultancy

    October 7, 2014 – Nijkerk, Netherlands
    Seminar: Data Movement and Migration

    October 6, 2014 – Nijkerk, Netherlands
    Seminar: From Backup and Disaster Recovery to Business Resiliency and Continuance

    August 25-28, 2014 – San Francisco
    VMworld (TBA)

    August 7, 2014 – TBA
    TBA

    July 2, 2014 – Webinar, 1PM CT
    Starwind Software – Live Webinar: Do More with Less Hardware Using Software Defined Storage Management

    June 26, 2014 – MSP VMUG, 12:45PM CT
    Moderated Live Panel: Software Defined Discussion

    June 17, 2014 – Dell BackupU, Online Webinar
    Exploring the Data Protection Toolbox – Data Footprint Reduction

    May 14, 2014 – Nijkerk, Netherlands
    Seminar: Vendor Neutral Archiving for Healthcare

    May 5-7, 2014 – Las Vegas
    EMC World

    April 23, 2014 – SNIA DSI Event
    Keynote: Enabling Data Infrastructure Return On Innovation – The Other ROI (backup, restore, BC, DR and archiving)

    April 22, 2014 – SNIA DSI Event
    The Cloud Hybrid “Homerun” – Life Beyond The Hype (backup, restore, BC, DR and archiving)

    April 16, 2014 – Webinar, 9AM PT
    Open Source and Cloud Storage – Enabling business, or a technology enabler?

    April 9, 2014 – Webinar, 9AM PT
    Storage Decision Making for Fast, Big and Very Big Data Environments

    Click here to view other upcoming along with earlier event activities. Watch for more 2014 events to be added soon to the StorageIO events calendar page. Topics include data protection modernization (backup/restore, HA, BC, DR, archive), data footprint reduction (archive, compression, dedupe), storage optimization, SSD, object storage, server and storage virtualization, big data, little data, cloud and object storage, performance and management trends among others.

    Vendors, VAR’s and event organizers, give us a call or send an email to discuss having us involved in your upcoming pod cast, web cast, virtual seminar, conference or other events.

    StorageIO Update Newsletter Archives

    Click here to view earlier StorageIO Update newsletters (HTML and PDF versions) at www.storageio.com/newsletter. Subscribe to this newsletter (and pass it along) by clicking here (via the secure Campaigner site).

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    April and May 2014 Server and StorageIO Update newsletter


    Server and StorageIO Update newsletter – April and May 2014

    Welcome to the April and May 2014 edition of the StorageIO Update (newsletter) containing trends perspectives on cloud, virtualization and data infrastructure topics.

    The good news is that while spring is running late here in the Stillwater MN area as well as in other parts of the world (as is this newsletter ;), both are finally here. To say that a lot has been going on and that things have been busy would be an understatement; however that is probably also the situation with you. So what has been going on during April and May 2014?

    Industry and Technology Updates

    Sony and Fujifilm (with their partner IBM) are trading marketing and proof of concept (POC) lab material in an effort to show that tape is still alive for data storage. Sony announced a month or so ago that it was moving the bar to 185TB per tape (without dedupe). Not to be outdone, Fujifilm announced in late May that they, in conjunction with IBM, have a POC for a 154TB LTO tape in the works.

    Greg Schulz on break
    On the Hard Disk Drive (HDD) front, Seagate released a new 6TB device that they claim to be fast. I asked Seagate to send me one of the drives to see how fast it really is vs. their claims. While I have not completed all tests yet, what I can tell you is that the 6TB 3.5" 12Gbps SAS 7.2K RPM drive is like an American football linebacker or fullback. It is big, bulky, high-capacity, resilient with a 1-in-10^15 bit error rate (a better rating than typical high-capacity HDD’s) and fast.

    Sure the 6TB HDD is not in the speed race with a quick SSD, SSHD or 15K drive; however I was surprised at just how fast it is for its space capacity. Watch for a follow-up review in the not so distant future, and if a WD 6TB drive were to show up on my doorstep I can give some perspectives on that as well.
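    As a back-of-envelope illustration of why the bit error rate rating matters at 6TB capacities (my arithmetic using the commonly quoted one-error-per-10^15-bits spec for enterprise drives, not a Seagate datasheet):

```python
# Rough arithmetic on unrecoverable bit error rates at high capacity.
# Assumes the common enterprise spec of one unrecoverable error per
# 1e15 bits read; real-world rates vary, so treat as illustration only.
capacity_tb = 6
bits = capacity_tb * 1e12 * 8            # 6 TB expressed in bits
bits_per_error = 1e15                    # one error per 1e15 bits read
errors_per_full_read = bits / bits_per_error
print(f"{bits:.1e} bits; expected unrecoverable errors "
      f"per full-drive read: {errors_per_full_read:.3f}")
```

    In other words, reading the whole 6TB drive touches about 4.8 x 10^13 bits, so even at a 10^15 rating you would statistically expect an unrecoverable error on roughly 1 in 20 full-drive reads; at the 10^14 rating common on desktop-class high-capacity drives that expectation is ten times worse, which is why the better rating matters as capacities climb.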

    As for SSD’s, they are following the trend paths of tape and HDD’s: increasing in space capacity, coming down in price and improving in resiliency. While I see HDD and even tape surviving for some time, granted in different roles, I’m also a firm believer that flash SSD’s in some form are in your future. The question is how much, when, where, with what and from whom. Needless to say there is plenty of SSD related hardware and software activity occurring in the StorageIO labs ;).

    Vendors and revenue earnings, is there storage slowdown?

    In other industry news and activity, vendor quarterly earnings are out and there is mixed information (see this recent post on whether there is an information recession). IBM is one of those who announced lowered storage related revenues, while NetApp had mixed results (as did other vendors). In addition, IBM is officially saying they are finally dropping the NetApp (FAS/ONTAP) based N series (as was originally reported a week or so ago via Bloomberg). Note that IBM will continue to OEM the NetApp E series (e.g. Engenio based). Some of you might remember (or can do a Google search) that IBM indicated a few years back that it was de-emphasizing the N series or moving away from it. Perhaps this time they really mean it, while NetApp could move to embrace those VAR’s and IBM business partners to sell NetApp vs. IBM branded versions of the product. Here are some more perspectives appearing in SearchStorage. Watch for more about NetApp in a future follow-up post.

    In some other industry news, you might remember back in the February StorageIO update newsletter there was mention of Avago buying LSI. Now Avago is selling the flash business of LSI to Seagate for about $450M USD in the ongoing flash dance for cache and cash.

    Staying busy is a good thing

    What have I been doing during April and May 2014 to stay busy besides getting ready for spring and summer fun including in and around the water?

    • Attended NAB 2014 in Las Vegas where it is not just about archiving pertaining to data storage
    • Presented backup, restore, BC, DR and archiving including a keynote at the SNIA DSI conference
    • Was back in Las Vegas to attend EMCworld, I have some updates in the works from that event
    • Presented several BrightTalk Webinars (see events below) with more coming up in June
    • Released a new ITP white paper and StorageIO lab proof points with more in the works
    • More videos and pod casts, technology reviews including servers among other things
    • Participated including keynote at a vendor neutral archiving event in Europe
    • Providing industry commentary in different venues (see below) along with some writing
    • Not to mention various client consulting projects
    • Remember, work hard play hard, play hard and work hard!

What's in the works?

    Several projects and things are in the works that will show themselves in the coming weeks or months if not sooner. Some of which are more proof points coming out of the StorageIO labs involving software defined, converged, cloud, virtual, SSD, data protection and more.

    Speaking of Software Defined, join me for a free BrightTalk Webinar on June 12 on the many faces and facets of virtualization and software defined storage. Learn more about that event here as well as in the activities section down below.

    Watch for more StorageIO posts, commentary, perspectives, presentations, webinars, tips and events on information and data infrastructure topics, themes and trends. Data Infrastructure topics include among others cloud, virtual, legacy server, storage I/O networking, data protection, hardware and software.

    Enjoy this edition of the StorageIO Update newsletter and look forward to catching up with you live or online while out and about this spring.

    Ok, nuff said (for now)

    Cheers gs

    April and May 2014 Industry trend and perspectives

    Tips, commentary, articles and blog posts

    StorageIO Industry Trends and Perspectives

The following is a synopsis of some StorageIOblog posts, articles and comments in different venues on various industry trends and perspectives, spanning clouds, virtualization, and data and storage infrastructure topics among related themes.

    StorageIO comments and perspectives in the news

    StorageIO in the news

    SearchStorage: Comments on IBM dropping N series, NetApp is still OEM to IBM
    InfoStor: Comments on Software Defined Storage: 10 Things You Need to Know
    SearchDataBackup: Comments about buying guides for enterprise Hard Disk Drives (HDD)
    SearchDataBackup: Conversation about data protection modernization
    InfoStor: Comments on cloud storage, 10 things you need to know
    InfoStor: Comments on Data Archiving: Life Beyond Compliance
    NetworkComputing: Comments on Sorting Through Storage Industry Hype
    StateTech: Comments on Secure Erasing HDDs and SSDs including planning in advance
    SNIA: Comments on CDMI Cloud Management Conformance Testing
    EnterpriseStorageForum: Comments on Hybrid Cloud Storage Tips

    StorageIO tips and articles appearing in various venues

    StorageIO tips and articles

Via InformationSecurityBuzz: Dark Territories MH370 Do You Know Where Your Information Is? We still don't know 100% where the missing Malaysian Airlines flight 370 is, which amplifies the fact that there are still dark territories or gaps in coverage in this large world. Likewise there are gaps in coverage in many IT environments, yet tools and technologies are available to gain better situational awareness and insight.

    Via The Virtualization Practice: This piece looks at the EMC ViPR V1.1 and SRM V3.0 (Software Defined Storage Management) announcements from earlier this year, along with links to earlier announcement and technology analysis. Note that EMC announced May 5, 2014 ViPR 2.0 along with their new Elastic Cloud Storage Appliance (ECS) among other enhancements at EMC World. Additional perspectives on ViPR 2.0, Elastic Cloud Storage Appliance and EMCworld announcement summary analysis can be found here in this video (with text) that I did (produced via TechTarget) while at EMCworld 2014. Watch for more coverage of ViPR 2.0 and other related new as well as updated items from EMCworld 2014 in upcoming posts, articles and commentary.

Via InfoStor: Data Archiving: Life Beyond Compliance. Today many people think or assume, based on what they hear, that archiving is only for regulatory purposes. Meanwhile some of you may remember a time before the regulatory compliance era of the early 2000s when archiving was used as a general purpose tool, technology and solution to many IT data management and storage challenges. This piece I did over at InfoStor looks at Data Archiving: Life Beyond Compliance and how archiving is also a key technology that is part of Data Footprint Reduction (DFR), which also includes compression, dedupe and thin provisioning among other techniques and tools. Here is a related Email Archiving piece (beyond compliance) from over at StateTech along with practical tips in a piece over at VMware Communities.

    StorageIO video and audio pod casts

StorageIOblog post
Video conversation with Rob Emsley of EMC and me discussing data protection modernization, moving beyond the product pitch! (Via TechTarget SearchDataBackup). In this conversation Rob and I talk about various aspects of data protection modernization including finding and fixing problems at the source, accidental architectures, using new (and old) things in new ways, and rethinking data protection. However the conversation is a discussion about the topics, issues, trends and what can be done, as opposed to a product pitch infomercial. Check out this video blog (vblog) of Rob and me via TechTarget SearchDataBackup, then weigh in with your comments.

    audioSNIA DSI David Dale
    Audio Podcast: Data Storage Innovation Conversation with SNIA Wayne Adams and David Dale
    In this episode, SNIA Chairman Emeritus Wayne Adams and current Chairman David Dale join me in a conversation from the Data Storage Innovation (DSI) 2014 conference event. DSI is a new event produced by SNIA targeted for IT professionals involved with data storage related topics, themes, technologies and tools spanning hardware, software, cloud, virtual and physical. In this conversation, we talk about the new DSI event, the diversity of new attendees who are attending their first SNIA event, along with other updates. Some of these updates include what is new with the SNIA Cloud Data Management Initiative (CDMI), Non Volatile Memory (think flash and SSD), SMIS, education and more. Listen in to our conversation in this podcast here as we cover cloud, convergence, software defined and more about data storage.

    audiocash coleman cleardb
    Audio Podcast: Catching up with Cash Coleman talking ClearDB, cloud database and Johnny Cash
    In this episode from the SNIA DSI 2014 event I am joined by Cashton Coleman (@Cash_Coleman). Cashton (Cash) is a Software architect, product mason, family bonder, life builder, idea founder along with Founder & CEO of SuccessBricks, Inc., makers of ClearDB. ClearDB is a provider of MySQL database software tools for cloud and physical environments. We talk about ClearDB, what they do and whom they do it with including deployments in cloud’s as well as onsite. For example if you are using some of the Microsoft Azure cloud services using MySQL, you may already be using this technology. However, there is more to the story and discussion including how Cash got his name, how to speed up databases for little and big data among other topics. Check out ClearDB and listen in to the conversation with Cash podcast here.

    audio
Audio Podcast: Matt Vogt talks VMware vCOps in his first ever podcast
In this episode from the Computex Rethink your Datacenter for 2017 planning and strategy event I am joined by Matt Vogt (@MattVogt). Matt is a Principal Architect with Computex Technology Solutions as well as a certified VMware specialist and fellow vExpert. We talk about the role of automation for performance and capacity optimization along with how VMware vCOps plays an important role. Listen in to learn more about how to gain insight and situational awareness to make informed decisions for your data infrastructure environment with Matt. Check out Matt's blog here at blog.mattvogt.net and listen in to the podcast here.

    StorageIO audio podcasts are also available via
    and at StorageIO.tv

    StorageIOblog posts and perspectives

    StorageIOblog post

  • Is there an information or data recession, are you using less storage (with polls)
  • Lenovo TS140 Server and Storage IO Review Part I here and Part II here
  • Nand flash SSD server storage I/O conversations: See more SSD stories here
  • Data Protection Diaries: March 31 World Backup Day is Restore Data Test, read more here
  • March 2014 StorageIO Update Newsletter: Click here to read more
  • StorageIO White Papers, Solution Briefs and StorageIO Lab reports

    White Paper

    New White Paper: Solid State Hybrid Drives (SSHD)
Enterprise SSHD and Flash SSD – Better Together – Part of an Enterprise Tiered Storage Strategy

The question to ask yourself is not if flash Solid State Device (SSD) technologies are in your future. Instead the questions are when, where, using what, how to configure and related themes. SSDs, including traditional DRAM and NAND flash-based technologies, are like real estate where location matters; however, there are different types of properties to meet various needs.

This means leveraging different types of NAND flash SSD technologies in different locations in a complementary and cooperative, aka hybrid, way. In this StorageIO Industry Trends Perspective thought leadership white paper we look at how enterprise class Solid State Hybrid Drives (SSHD) address current and next generation tiered storage for virtual, cloud, traditional, Little and Big Data infrastructure environments. This includes providing proof points running various workloads including Database TPC-B, TPC-E and Microsoft Exchange in the StorageIO Labs comparing SSHD, SSD and different HDDs. Read more in this StorageIO Industry Trends and Perspective (ITP) white paper, compliments of Seagate Enterprise Turbo SSHD. Read the companion blog post here that includes more proof points for large file transfer performance.
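The tiered storage idea behind mixing SSD, SSHD and HDD can be sketched simply: place data by how often it is accessed. The thresholds below are made-up examples for illustration, not recommendations from the white paper:

```python
# Illustrative tiered storage sketch: map access frequency to a tier.
# Thresholds are hypothetical examples only; real tiering policies
# also weigh latency, cost per GB and data protection needs.

def pick_tier(accesses_per_day):
    """Pick a storage tier for data given its access frequency."""
    if accesses_per_day >= 100:
        return "flash SSD"        # hot data: lowest latency
    if accesses_per_day >= 10:
        return "SSHD"             # warm data: flash-cached HDD
    return "high-capacity HDD"    # cold data: cheapest capacity

for rate in (500, 25, 1):
    print(rate, "accesses/day ->", pick_tier(rate))
```

The point is location matters: the same data costs and performs differently depending on which "property" it lives on.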

    Remember to check out our objectstoragecenter.com page where you will find a growing collection of information and links on cloud and object storage themes, technologies and trends from various sources.

    If you are interested in data protection including Backup/Restore, BC, DR, BR and Archiving along with associated technologies, tools, techniques and trends visit our storageioblog.com/data-protection-diaries-main/ page. For those who follow SSD and related technologies, we have organized a series of items at storageio.com/ssd.

    StorageIO events and activities

Server and StorageIO seminars, conferences, webcasts, events, activities

    The StorageIO calendar continues to evolve, here are some recent and upcoming activities including live in-person seminars, conferences, keynote and speaking activities as well as on-line webinars, twitter chats, Google+ hangouts among others.

June 12, 2014 – Webinar, 9AM PT
The Many Facets of Virtual Storage and Software Defined Storage Virtualization

June 11, 2014 – Webinar, 9AM PT
The Changing Face and Landscape of Enterprise Storage

May 14, 2014 – Brouwer Storage Consultancy, Nijkerk Netherlands
Keynote – Healthcare Vendor Neutral Archiving Symposium

May 5-7, 2014 – EMC World, Las Vegas

April 23, 2014 – SNIA DSI Event
Keynote: Enabling Data Infrastructure Return On Innovation – The Other ROI (backup, restore, BC, DR and archiving)

April 22, 2014 – SNIA DSI Event
The Cloud Hybrid “Homerun” – Life Beyond The Hype (backup, restore, BC, DR and archiving)

April 16, 2014 – Webinar, 9AM PT
Open Source and Cloud Storage – Enabling business, or a technology enabler?

April 9, 2014 – Webinar, 9AM PT
Storage Decision Making for Fast, Big and Very Big Data Environments

    Click here to view other upcoming along with earlier event activities. Watch for more 2014 events to be added soon to the StorageIO events calendar page. Topics include data protection modernization (backup/restore, HA, BC, DR, archive), data footprint reduction (archive, compression, dedupe), storage optimization, SSD, object storage, server and storage virtualization, big data, little data, cloud and object storage, performance and management trends among others.

    Vendors, VAR’s and event organizers, give us a call or send an email to discuss having us involved in your upcoming pod cast, web cast, virtual seminar, conference or other events.

    StorageIO Update Newsletter Archives

Click here to view previous StorageIO Update newsletters (HTML and PDF versions) at www.storageio.com/newsletter. Subscribe to this newsletter (and pass it along) by clicking here (via the Secure Campaigner site).

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Is there an information or data recession? Are you using less storage? (With Polls)

    StorageIO industry trends

    Is there an information recession where you are creating, processing, moving or saving less data?

    Are you using less data storage than in the past either locally online, offline or remote including via clouds?

IMHO there is no such thing as a data or information recession. Granted, storage is being used more effectively by some, while economic pressures or competition force your budgets to be stretched further. Likewise people and data are living longer and getting larger.

In conversations with IT professionals, particularly the real customers (e.g. not vendors, VAR's, analysts, blogalysts, consultants or media), I routinely hear from people that they continue to have the need to store more information, however their data storage usage and acquisition patterns are changing. For some this means using what they have more effectively, leveraging data footprint reduction (DFR), which includes archiving, compression, dedupe, thin provisioning and changing how and when data is protected. This also means using different types of storage from flash SSD to HDD to SSHD to tape summit resources as well as cloud in different ways, spanning block, file and object storage, local and remote.
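As a simple illustration of how DFR techniques stretch capacity, here is a hypothetical sketch. The reduction ratios are made-up examples, not measured results, and in practice techniques do not always compound this cleanly since it depends heavily on the data:

```python
# Hypothetical data footprint reduction (DFR) sketch: how a mix of
# techniques stretches raw capacity. Ratios are illustrative examples
# only; actual results vary widely by data type and workload.

def effective_capacity(raw_tb, reduction_ratios):
    """Effective capacity after applying each reduction ratio in turn
    (a simplification: real techniques do not compose this neatly)."""
    effective = raw_tb
    for ratio in reduction_ratios.values():
        effective *= ratio
    return effective

ratios = {
    "compression": 2.0,  # e.g. 2:1 on compressible data (assumed)
    "dedupe": 3.0,       # e.g. 3:1 on redundant backup data (assumed)
}

print(effective_capacity(10, ratios), "TB effective from 10 TB raw")
```

Which is why the same stretched budget can effectively hold more information without buying proportionally more raw capacity.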

A common question that comes up, particularly around vendor earnings announcement times, is whether the data storage industry is in decline given that some vendors are experiencing poor results.

    Look beyond vendor revenue metrics

As background reading, you might want to check out this post here (IT and storage economics 101, supply and demand), which candidly should be common sense.

If all you looked at were a vendor's revenue or margin numbers as an indicator of how well the data storage industry (which includes traditional, legacy as well as cloud) is doing, you would not be getting the full picture.

    What needs to be factored into the picture is how much storage is being shipped (from components such as drives to systems and appliances) as well as delivered by service providers.

Looking at storage systems vendors from a revenue earnings perspective you would get mixed indicators depending on who you include, not to mention how those vendors break out revenues by product, or the number of units shipped. For example, looking at public vendors EMC, HDS, HP, IBM, NetApp, Nimble and Oracle (among others) as well as the private ones (if you can see the data) such as Dell, Pure, Simplivity, Solidfire and Tintri results in different analysis. Some are doing better than others on revenues and margins, however try to get clarity on the number of units or systems shipped (for actual revenue vs. loaners (planting seeds for future revenue or trials) or demos).

Then look at the service providers such as AWS, CenturyLink, Google, HP, IBM, Microsoft, Rackspace or Verizon (among others) and you should see growth, however clarity about how much revenue plus margin they are actually generating for storage specifically vs. broad general buckets can be tricky.

Now look at the component suppliers such as Seagate and Western Digital (WD) for HDDs and SSHDs, who also provide flash SSD drives and other technology. Also look at the other flash component suppliers such as Avago/LSI (whose flash business is being bought by Seagate), FusionIO, SanDisk, Samsung, Micron and Intel among others (this does not include the systems vendors who OEM those or other products to build systems or appliances). These and other component suppliers can give another indicator as to the health of the industry both from revenue and margin, as well as footprint (e.g. how many devices are being shipped). For example, the legacy and startup storage systems and appliance vendors may have soft or lower revenue numbers, however are they shipping the same amount of product or less? Likewise the cloud or service providers may be showing more revenue and more product being acquired, however at what margin?

    What this all means?

    Growing amounts of information?

    Look at revenue numbers in the proper context as well as in the bigger picture.

    If the same number of component devices (e.g. processors, HDD, SSD, SSHD, memory, etc) are being shipped or more, that is an indicator of continued or increased demand. Likewise if there is more competition and options for IT organizations there will be price competition between vendors as well as service providers.

All of this means that while IT organizations' budgets stay stretched, their available dollars or euros should be able to buy (or rent) them more storage space capacity.

    Likewise using various data and storage management techniques including DFR, the available space capacity can be stretched further.

So this then raises the question: if the management of storage is important, why are we not hearing vendors talk about software defined storage management instead of chasing each other to out-software-define each other's storage?

    Ah, that’s for a different post ;).

    So what say you?

    Are you using less storage?

    Do you have less data being created?

    Are you using storage and your available budget more effectively?

    Please take a few minutes and cast your vote (and see the results).

Sorry, I have no Amex or Amazon gift cards or other things to offer as a giveaway for participating, as nobody is secretly sponsoring this poll or post; it's simply sharing and conveying information for you and others to see and gain insight from.

    Do you think that there is an information or data recession?

How about you: are you using or buying more storage, or could there be a data storage recession?

    Some more reading links

    IT and storage economics 101, supply and demand
    Green IT deferral blamed on economic recession might be result of green gap
    Industry trend: People plus data are aging and living longer
    Is There a Data and I/O Activity Recession?
    Supporting IT growth demand during economic uncertain times
    The Human Face of Big Data, a Book Review
    Garbage data in, garbage information out, big data or big garbage?
    Little data, big data and very big data (VBD) or big BS?

    Ok, nuff said (for now)

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Lenovo TS140 Server and Storage I/O Review

    Storage I/O trends

    Lenovo TS140 Server and Storage I/O Review

This is a review that looks at my recent hands-on experiences using a TS140 (Model MT-M 70A4 – 001RUS) pedestal (aka tower) server that the Lenovo folks sent to me to use for a month or so. The TS140 is one of the servers that Lenovo had prior to its acquisition of IBM's x86 server business, which you can read about here.

    The Lenovo TS140 Experience

Let's start with the overall experience, which was very easy and good. This included answering some initial questions to get the process moving, and agreeing to keep the equipment safe, secure and insured, as well as not damaging anything (this was not a tear-down, rip-it-apart-into-pieces trial).

Part of the process also involved answering some configuration related questions, and shortly thereafter a large box from Lenovo arrived. Turns out it was a box (server hardware) inside of a Lenovo box that was inside a slightly larger unmarked shipping box (see larger box in the background).

    TS140 Evaluation Arrives

    TS140 shipment undergoing initial security screen scan and sniff (all was ok)

    TS140 with Windows 2012
    TS140 with Keyboard and Mouse (Monitor not included)

One of the reasons I have a photo of the TS140 on a desk is that I initially put it in an office environment, as Lenovo claimed it would be quiet enough to do so. I was not surprised and indeed the TS140 is quiet enough to be used where you would normally find a workstation or mini-tower. By being so quiet the TS140 is a good fit for environments that need a small or starter server that has to go into an office environment as opposed to a server or networking room. For those who are into mounting servers, there is the option of placing the TS140 on its side in a cabinet or rack.

    Windows 2012 on TS140
    TS140 with Windows Server 2012 Essentials

    TS140 as tested

TS140 Selfie of what's inside
    TS140 "Selfie" with 4 x 4GB DDR3 DIMM (16GB) and PCIe slots (empty)

    16GB RAM (4 x 4GB DDR3 UDIMM, larger DIMMs are supported)
    Windows Server 2012 Essentials
Intel Xeon E3-1225 v3 @ 3.2GHz quad-core (C226 chipset and TPM 1.2), vPro/VT/EP capable
    Intel GbE 1217-LM Network connection
    280 watt power supply
    Keyboard and mouse (no monitor)
Two 7.2K SATA HDDs (WD) configured as RAID 1 (100GB LUN)
    Slot 1 PCIe G3 x16
    Slot 2 PCIe G2 x1
    Slot 3 PCIe G2 x16 (x4 electrical signal)
    Slot 4 PCI (legacy)
Onboard 6Gbps SATA RAID 0/1/10/5
Onboard SATA 3.0 (6Gbps) connectors (0-4), USB 3.0 and USB 2.0
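One thing worth noting about those PCIe slots: for a fast HBA or flash card, the electrical lane count matters more than the physical connector size (Slot 3 looks like an x16 but only wires x4). Here is a rough sketch of theoretical per-slot bandwidth, using approximate effective per-lane rates after encoding overhead (rough figures, not measured results):

```python
# Approximate usable PCIe bandwidth per slot on the TS140 as tested.
# Per-lane effective rates after encoding overhead (8b/10b for Gen2,
# 128b/130b for Gen3): Gen2 ~0.5 GB/s/lane, Gen3 ~0.985 GB/s/lane.
# Rough theoretical figures for comparison only.

PER_LANE_GBS = {2: 0.5, 3: 0.985}  # PCIe generation -> GB/s per lane

slots = [
    ("Slot 1", 3, 16),  # PCIe G3 x16
    ("Slot 2", 2, 1),   # PCIe G2 x1
    ("Slot 3", 2, 4),   # PCIe G2 x16 physical, x4 electrical
]

for name, gen, lanes in slots:
    print(f"{name}: ~{PER_LANE_GBS[gen] * lanes:.1f} GB/s (Gen{gen} x{lanes})")
```

In other words, a fast NVMe or SAS card belongs in Slot 1 if you want its full bandwidth; Slot 3 will physically accept it but tops out around 2 GB/s.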

    Read more about what I did with the Lenovo TS140 in part II of my review along with what I liked, did not like and general comments here.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved