December 2014 Server StorageIO Newsletter

December 2014

Hello and welcome to this December Server and StorageIO update newsletter.

Season's Greetings


Commentary In The News

StorageIO news

Following are some StorageIO industry trends and perspectives comments that have appeared in various venues. Cloud conversations continue to be popular, including concerns about privacy, security and availability. Over at BizTech Magazine there are some comments about cloud and ROI. Some comments on AWS and Google SSD services can be viewed at SearchAWS. View other trends comments here.

Tips and Articles

View recent as well as past tips and articles here

StorageIOblog posts

Recent StorageIOblog posts include:

View other recent as well as past blog posts here

In This Issue

  • Industry Trends Perspectives
  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Events & Activities

    View other recent and upcoming events here

    Webinars

    December 11, 2014 – BrightTalk
    Server & Storage I/O Performance

    December 10, 2014 – BrightTalk
    Server & Storage I/O Decision Making

    December 9, 2014 – BrightTalk
    Virtual Server and Storage Decision Making

    December 3, 2014 – BrightTalk
    Data Protection Modernization

    Videos and Podcasts

    StorageIO podcasts are also available at StorageIO.tv

    From StorageIO Labs

    Research, Reviews and Reports

    StarWind Virtual SAN for Microsoft SOFS

    May require registration
    This looks at the shared storage needs of SMBs and ROBOs leveraging Microsoft Scale-Out File Server (SOFS). The focus is on Microsoft Windows Server 2012, Server Message Block (SMB) 3.0, SOFS and StarWind Virtual SAN management software.

    View additional reports and lab reviews here.

    Resources and Links

    Check out these useful links and pages:
    storageio.com/links
    objectstoragecenter.com
    storageioblog.com/data-protection-diaries-main/
    storageio.com/ssd

    Enjoy this edition of the Server and StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts along with in the news commentary appearing soon.

    Seasons greetings 2014

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Part II 2014 Server Storage I/O Geek Gift ideas


    server storage I/O trends

    This is part two of a two-part series on what to get a geek for a gift; read part one here.

    KVM switch

    Not to be confused with a software defined network (SDN) switch for the KVM virtualization hypervisor, how about the other KVM switch?

    kvm switch
    My KVM switch in use, looks like five servers are powered on.

    If you have several servers or devices that need a Keyboard, Video and Mouse connection, or are using an A/B box or other devices, why not consolidate them with a KVM switch? I bought the Startech shown above from Amazon, which works out to under $40 a port (connection), meaning I do not need a keyboard, video monitor or mouse for each of those systems.

    With my KVM shown above, I used the easy setup to name each of the ports via the management software so that when a button is pressed, not only does the applicable screen appear, a graphic text overlay also tells me which server is being displayed. This is handy, for example, as I have some servers that are identical (e.g. Lenovo TS140s) running VMware, so a quick glance helps me verify I'm on the right one (e.g. without looking at the VMware host name or IP). This feature is also handy during power on self test (POST), before the server's physical or logical (e.g. VMware, Windows, Hyper-V, Ubuntu, OpenStack, etc.) identity is displayed. Another thing I like about these is that on the KVM switch there is a single VGA type connector, while on the server end there is a VGA connector for attaching to the monitor port of the device, and a breakout cable with USB for attaching to the server for keyboard and mouse.

    Single drive shoe box

    Usually drives are housed in larger server or storage system enclosures, however now and then there is a need to supply power to an HDD or SSD along with a USB or eSATA interface for attaching it to a system. These are handy and versatile little aluminum enclosures.

    Single drive SATA disk enclosures

    Note that you can now also find cables that serve the same or similar function for inside-the-server connections (check out this cable among others at Amazon).

    USB-SATA cable

    It would be easy to assume that everybody would have these by now, particularly since everybody (depending on who you listen to or what you read) has probably converted from an HDD to SSD. However for those who have not done an HDD to SSD conversion, or simply an HDD to newer HDD conversion, or who have an older HDD (or SSD) lying around, these cables come in very handy. Attach one end (e.g. the SATA end) to an HDD or SSD and the other to a USB port on a laptop, tablet or server. The caveat with these is that they generally only supply power (via USB) for a 2.5″ type drive, so for a larger, more power-hungry 3.5″ device you would need a different powered cable or a small shoe box type enclosure.

    eSATA cable
    (Left) USB to SATA and (Right) eSATA to SATA cables

    Mophie USB charger

    There are many different types of mobile device chargers available along with multi-purpose cables. I like the Mophie which I received at an event from NetApp (Thanks NetApp) and the flexible connector I received from Dyn while at AWS re:Invent 2014 (Thanks Dyn, I’m also a Dyn customer fwiw).
    (Left) Mophie Power station and (Right) multi-connector cable

    The Mophie has a USB connector so that you can charge it via a charging station or a computer, as well as attach a USB to Apple or other device connector. There is also a small connector for attaching to other devices. This is where the dandy Dyn device comes into play as it has a USB as well as Apple and many other common connectors, as shown in the figure below. Google around and I'm sure you can find both for sale, or as giveaways or something similar.

    SAS SATA Interposer

    (Left) SAS to SATA interposer (Right) Molex power with SATA connector to SAS

    Note that the above are intended for passing a SAS signal from a device such as an HDD or SSD to a SAS based controller that happens to have SATA mechanical or keyed interfaces, such as with some servers. This means that the real controller needs to be SAS, and the attached drives can be SATA or SAS, keeping in mind that a SATA device can plug into a SAS controller however not vice versa. You can find the above at Amazon among other venues. Need a dual-lane SAS connector as an alternative to the one shown above on the right? Then check this one out at Amazon.

    Need to learn more about the many different facets of SAS and related technologies including how it coexists with iSCSI, Fibre Channel (FC), FCoE, InfiniBand and other interfaces, how about getting a free copy of SAS SANs for Dummies?

    SAS SANS for dummies

    There are also these for doing board-level connections:

    Some additional SAS and SATA drive connectors

    In the above, on the left is a female to female SATA cable with a male to male SATA gender changer attached, to be used for example between a storage device and the SATA connector port on a server's motherboard, HBA or RAID controller. In the middle are some SATA female to female cables, as well as a SATA to eSATA (external SATA) cable, and on the right are some SATA male to SATA male gender changers, also shown in use on the left in the above figures.

    Internal Power cable / connectors

    If you or your geek are doing things in the lab or other environment adding and reconfiguring devices such as some of those mentioned above (or below), sooner or later there will be the need to do something with power cables and connectors.

    Various cables, adapters and extenders

    In the above figure are shown (top to bottom) a SATA male to Molex, a SATA female to SATA male and to its right a SATA female to Molex. Below that are two SATA females to Molex, below that is a SATA male to dual Molex, and on the bottom is a single SATA to dual SATA. Needless to say there are many other combinations of connectors as well as different genders (e.g. male or female) along with extenders. As mentioned above, pay attention to manufacturers' recommended power draw and safety notices to prevent accidental electric shock or fire.

    Intel Edison kit for IoT and IoD

    Are you or your geek into the Internet of Things (IoT) or Internet of Devices (IoD) or other similar things and gadgets? Have you heard about Intel’s Edison breakout board for doing software development and attachment of various hardware things? Looking for something to move beyond a Raspberry Pi system?

    Images via Intel.com

    Over the hills, through the woods WiFi

    This past year I found Nanostation extended WiFi devices that solved a challenge (problem), which was how to get a secure WiFi signal up to a couple hundred yards through a thick forest between some hills.


    Image via UBNT.com, check out their other models as well as resources for different deployments

    The problem was that it was too far, with too many trees and leaves, to use a regular WiFi connection, and too far to run cable if I did not need to. I found the solution by getting a pair of Nanostation M2s, putting them into bridge mode, then doing some alignment with their narrow beam antennas to bounce a signal through the woods. For those who simply need to go a long distance, these devices can be reconfigured to go several kilometers line of sight. Click on the image above to see other models of the Nanostation as well as links to various resources on how they can be used for other things or deployments.

    How about some software

    • UpDraft Backup – This is a WordPress blog plugin that I use to back up my entire web site including the templates, plug-ins, MySQL database and all other related components. While my dedicated private server gets backed up by my service provider (Bluehost), I wanted an extra level of protection along with a copy placed at a different place (e.g. at my AWS account). Updraft is an example of an emerging class of tools for backing up and protecting cloud-based and cloud-born data. For example EMC recently acquired cloud backup startup Spanning, which can protect Salesforce, Google and other cloud-based data.
    • Visual ESXtop – This is a great free tool that provides a nice interface and remote access for doing ESXtop functions normally accomplished from the ESXi console.
    • Microsoft Diskspd – If you or your geek are into server storage I/O performance and benchmarking in a Windows environment and looking for something besides Iometer, download the free Microsoft Diskspd utility.
    • Futuremark PCmark – Speaking of server storage I/O performance, check out Futuremark PCmark which will give your computer a great workout from graphics and video to compute, storage I/O and other common tasks.
    • RV Tools – Need to know more about your VMware virtual environment, take a quick inventory or something else? Then your geek should have a copy of RV Tools from Robware.
    • iVMControl – For that vGeek who wants to be able to do simple VMware tasks from an iPhone, check out the iVMControl tool. It's great; I don't use it a lot, however there are times when I don't need or want to use a tablet or PC to reach my VMware environment, and that's when this virtual gadget comes into play.

    Livescribe Digital Pen and Paper

    How about a Livescribe digital pen and paper? Sure you can use a PC, Apple or other tablet, however some things are still easier done with traditional paper and pen. I got one of these about a year ago and use it for note taking, mocking up slides for presentations, and in some cases have used it for creating figures and other things. It would be easy to position the Livescribe and a Windows or other tablet as an either/or competition; however for me, they are still better together, addressing different things, at least for now.


    (Left) Using my Livescribe Echo digital pen (Right) the resulting exported .PNG

    Tip: If you noticed, in the above left image (e.g. the original) the lines in the top figure differ from the lines in the figure on the right. If your Livescribe is causing lines to run on or into each other, it is because the digital pen tip is sticking. It's easy to check by looking at the tip of your digital pen and seeing if the small red light is on or off, or if it stays on when you press the pen tip. If it stays on, reset the pen tip. Also when you write, make sure to lift up on the pen tip so that it releases, otherwise you will get results like those shown on the right.

    (Left) Livescribe Digital Desktop (Middle) Imported Digital Document (Right) Exported PNG

    Also check out this optional application that turns a Livescribe Echo pen like mine into a digital tablet allowing you to draw on-screen with certain applications and webinar tools.

    Some books for the geek

    Speaking of reading, for those who are not up on the NoSQL and alternative SQL based databases including Mongo, Hbase, Riak, Cassandra and MySQL, add Seven Databases in Seven Weeks to your list. Click on the image to read my book review of it as well as links to order it from Amazon. Seven Databases in Seven Weeks (A Guide to Modern Databases and the NoSQL Movement) is a book written by Eric Redmond (@coderoshi) and Jim Wilson (@hexlib), part of The Pragmatic Programmers (@pragprog) series, that takes a look at several non-SQL based database systems.

    seven database nosql

    Where to get the above items

    • Ebay for new and used
    • Amazon for new and used
    • Newegg
    • PC Pit stop
    • And many other venues

    What this all means

    Note: Some of the above can be found at your favorite trade show or conference so keep that in mind for future gift giving.

    What interesting geek gift ideas or wish list items do you have?

    Of course if you have anything interesting to mention feel free to add it to the comments (keep it clean though ;) or feel free to send to me for future mention.

    In the meantime have a safe and happy holiday season for whatever holiday you enjoy celebrating any time of the year.

    Ok, nuff said, for now…

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Part II: Revisiting re:Invent 2014, Lambda and other AWS updates

    server storage I/O trends

    Part II: Revisiting re:Invent 2014 and other AWS updates

    This is part two of a two-part series about Amazon Web Services (AWS) re:Invent 2014 and other recent cloud updates, read part one here.

    AWS re:Invent 2014

    AWS re:Invent announcements

    Announcements and enhancements made by AWS during re:Invent include:

    • Key Management Service (KMS)
    • Amazon RDS for Aurora
    • Amazon EC2 Container Service
    • AWS Lambda
    • Amazon EBS Enhancements
    • Application development, deployment and life-cycle management tools
    • AWS Service Catalog
    • AWS CodeDeploy
    • AWS CodeCommit
    • AWS CodePipeline

    AWS Lambda

    In addition to announcing new higher performance Elastic Compute Cloud (EC2) compute instances along with a container service, another new service is AWS Lambda. Lambda is a service that automatically and quickly runs your application code in response to events, activities, or other triggers. In addition to running your code, the Lambda service is billed in 100 millisecond increments along with corresponding memory use vs. standard EC2 per hour billing. What this means is that instead of paying for an hour of time for your code to run, you can choose to use the Lambda service with more fine-grained consumption billing.

    The Lambda service can be used to have your code functions staged, ready to execute. AWS Lambda can run your code in response to S3 bucket content (e.g. object) changes, messages arriving via Kinesis streams or table updates in databases. Some examples include responding to events such as a web site click, responding to a data upload (photo, image, audio, file or other object), indexing, streaming or analyzing data, receiving output from a connected device (think Internet of Things IoT or Internet of Devices IoD), or triggering from an in-app event among others. The basic idea with Lambda is to be able to pay for only the amount of time needed to do a particular function without having to have an AWS EC2 instance dedicated to your application. Initially Lambda supports Node.js (JavaScript) based code that runs in its own isolated environment.
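    As a rough illustration of the model described above, here is a minimal sketch of a Node.js Lambda handler that reacts to an S3 object upload (the processing logic and log messages are hypothetical; the event structure follows the S3 notification format that Lambda passes to functions):

    // Minimal AWS Lambda (Node.js) handler sketch triggered by an S3 upload event
    var aws = require('aws-sdk');
    var s3 = new aws.S3();

    exports.handler = function(event, context) {
        // The S3 notification record carries the bucket name and object key
        var bucket = event.Records[0].s3.bucket.name;
        var key = event.Records[0].s3.object.key;

        // Fetch the newly uploaded object, e.g. to index or analyze it
        s3.getObject({ Bucket: bucket, Key: key }, function(err, data) {
            if (err) {
                context.fail('Error getting object ' + key + ': ' + err);
            } else {
                console.log('Processed ' + key + ' (' + data.ContentLength + ' bytes)');
                context.succeed();
            }
        });
    };

    Because the function only runs (and is only billed) while an event is being handled, there is no EC2 instance sitting idle between uploads.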

    AWS cloud example
    Various application code deployment models

    The Lambda service is pay for what you consume; charges are based on the number of requests for your code function (e.g. application), the amount of memory and the execution time. There is a free tier for Lambda that includes 1 million requests and 400,000 GByte seconds of time per month. A GByte second is the amount of memory (e.g. DRAM vs. storage) consumed during a second. An example is an application that runs 100,000 times for 1 second each, consuming 128MB of memory = 12,800,000 MB seconds, or roughly 12,500 GByte seconds. View various pricing models here on the AWS Lambda site that show examples for different memory sizes, number of times a function runs and run times.

    How much memory you select for your application code determines how much of it can run within the AWS free tier, which is available to both existing and new customers. Lambda fees are based on the total across all of your functions, counted from when the code starts running. Note that you could have from one to thousands or more different functions running in the Lambda service. As of this time, AWS is showing Lambda pricing as free for the first 1 million requests, and beyond that, $0.20 per 1 million requests ($0.0000002 per request) plus a duration charge. Duration is measured from when your code starts running until it ends or otherwise terminates, rounded up to the nearest 100ms. The Lambda price also depends on the amount of memory you allocated for your code. Once past the 400,000 GByte second per month free tier, the fee is $0.00001667 for every GB second used.
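    To make the request and duration charges above concrete, here is a small back-of-the-envelope calculation (the request count, duration and memory size below are hypothetical; the rates and free tier figures are those quoted above, so treat this as a sketch rather than an official price quote):

    // Rough Lambda monthly cost estimate using the rates quoted above (illustrative only)
    var requestsPerMonth = 3000000;   // hypothetical: 3 million invocations per month
    var avgDurationMs    = 200;       // average run time, billed in 100ms increments
    var memoryMB         = 128;       // memory allocated to the function

    var billedSeconds = requestsPerMonth * (Math.ceil(avgDurationMs / 100) * 100) / 1000;
    var gbSeconds     = billedSeconds * (memoryMB / 1024);

    // Free tier: 1 million requests and 400,000 GB-seconds per month
    var billableRequests  = Math.max(0, requestsPerMonth - 1000000);
    var billableGbSeconds = Math.max(0, gbSeconds - 400000);

    var cost = (billableRequests * 0.0000002) + (billableGbSeconds * 0.00001667);
    console.log('GB-seconds: ' + gbSeconds + ', estimated monthly cost: $' + cost.toFixed(2));

    In this example the 75,000 GB-seconds of compute stay inside the free tier, so only the 2 million requests beyond the first million are billed (about $0.40 for the month).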

    Why use AWS Lambda vs. an EC2 instance

    Why would you use AWS Lambda vs. provisioning a container, an EC2 instance, or running your application code function on a traditional physical or virtual machine?

    If you need control and can leverage an entire physical server with its operating system (O.S.), application and support tools for your piece of code (e.g. JavaScript), that could be an option. If you simply need an isolated image instance (O.S., applications and tools) for your code on a shared virtual on-premises environment, then that can be an option. Likewise if you need to move your application to an isolated cloud machine (CM) that hosts an O.S. along with your application, paying for those resources such as on an hourly basis, that could be your option. If you simply need a lighter-weight container to drop your application into, that is where Docker and containers come into play, off-loading some of the traditional application dependency overhead.

    However, if all you want to do is add some code logic, for example to support processing when an object, file or image is uploaded to AWS S3, without having to stand up an EC2 instance along with the associated server, O.S. and complete application stack, that is where AWS Lambda comes into play. Simply create your code (initially JavaScript), specify how much memory it needs, define what events or activities will trigger or invoke it, and you have a solution.

    View AWS Lambda pricing along with free tier information here.

    Amazon EBS Enhancements

    AWS is increasing the performance and size of General Purpose SSD and Provisioned IOPS SSD volumes. This means that you can create volumes up to 16TB with 10,000 IOPS for AWS EBS General Purpose SSD volumes. For EBS Provisioned IOPS SSD volumes you can create volumes up to 16TB with 20,000 IOPS. General Purpose SSD volumes deliver a maximum throughput (bandwidth) of 160 MBps and Provisioned IOPS SSD volumes have been specified by AWS at 320MBps when attached to EBS optimized instances. Learn more about EBS capabilities here. Verify your IO size and verify AWS sizing information to avoid surprises, as all IO sizes are not considered to be the same. Learn more about Provisioned IOPS, optimized instances, EBS and EC2 fundamentals in this StorageIO AWS primer here.
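    As a point of reference, creating one of the larger Provisioned IOPS volumes described above with the AWS SDK for JavaScript might look roughly like the following sketch (the region and availability zone are hypothetical placeholders; the size and IOPS values are the limits mentioned in the text):

    // Sketch: create a 16TB Provisioned IOPS (io1) EBS volume using the AWS SDK for JavaScript
    var AWS = require('aws-sdk');
    var ec2 = new AWS.EC2({ region: 'us-east-1' });   // hypothetical region

    ec2.createVolume({
        AvailabilityZone: 'us-east-1a',   // hypothetical availability zone
        Size: 16384,                      // 16TB expressed in GiB
        VolumeType: 'io1',                // Provisioned IOPS SSD
        Iops: 20000                       // upper limit noted above for Provisioned IOPS volumes
    }, function(err, volume) {
        if (err) {
            console.log('createVolume failed: ' + err);
        } else {
            console.log('Created volume ' + volume.VolumeId);
        }
    });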

    Application development, deployment and life-cycle management tools

    In addition to compute and storage resource enhancements, AWS has also announced several tools to support application development, configuration along with deployment (life-cycle management). These include tools that AWS uses themselves as part of building and maintaining the AWS platform services.

    AWS Config (Preview e.g. early access prior to full release)

    Management, reporting and monitoring capabilities including data center infrastructure management (DCIM) for monitoring your AWS resources, configuration (including history), governance, change management and notifications. AWS Config enables similar capabilities to support DCIM, Change Management Database (CMDB), troubleshooting and diagnostics, auditing, resource and configuration analysis among other activities. Learn more about AWS Config here.

    AWS Service Catalog

    AWS announced a new service catalog that will be available in early 2015. This new service capability will enable administrators to create and manage catalogs of approved resources for users to use via their personalized portal. Learn more about AWS service catalog here.

    AWS CodeDeploy

    To support rapid code deployment automation for EC2 instances, AWS has released CodeDeploy. CodeDeploy masks complexity associated with deployment when adding new features to your applications while reducing error-prone manual operations. As part of the announcement, AWS mentioned that they are using CodeDeploy as part of their own application development, maintenance, change-management and deployment operations. While suited for at-scale deployments across many instances, CodeDeploy works with as few as a single EC2 instance. Learn more about AWS CodeDeploy here.

    AWS CodeCommit

    For application code management, AWS will be making available in early 2015 a new service called CodeCommit. CodeCommit is a highly scalable, secure source control service that hosts private Git repositories. Supporting the standard functionality of Git, including collaboration, you can store things from source code to binaries while working with your existing tools. Learn more about AWS CodeCommit here.

    AWS CodePipeline

    To support application delivery and release automation along with associated management tools, AWS is making available CodePipeline. CodePipeline is a tool (service) that supports builds, workflow checks, code staging, testing and release to production, including support for third-party tool integration. CodePipeline will be available in early 2015; learn more here.

    Additional reading and related items

    Learn more about the above and other AWS services by actually trying them hands-on using their free tier (AWS Free Tier). View AWS re:Invent produced breakout session videos here, audio podcasts here, and session slides here (all sessions may not yet be uploaded by AWS re:Invent).

    What this all means

    AWS amazon web services

    AWS continues to invest as well as re-invest in its environment, both adding new feature functionality and expanding the extensibility of those features. This means that AWS, like other vendors or service providers, adds new check-box features; however, they also increase the depth and extensibility of those capabilities. Besides adding new features and increasing the extensibility of existing capabilities, AWS is addressing both the data and information infrastructure including compute (server), storage and database, and networking along with associated management tools, while also adding extra developer tools. Developer tools include life-cycle management supporting code creation, testing, tracking, change management among other management activities.

    Another observation is that while AWS continues to promote the public cloud, such as the services they offer, as the present and future, they are also talking hybrid cloud. Granted you have to listen carefully, as you may not simply hear hybrid cloud used the way some toss it around; however, listen for and look into AWS Virtual Private Cloud (VPC), along with what you can do using various technologies via the AWS marketplace. AWS is also speaking the language of enterprise and traditional IT, from an applications and development to data and information infrastructure perspective, while also walking the cloud talk. What this means is that AWS realizes they need to help existing environments evolve and make the transition to the cloud, which means speaking their language rather than forcing cloud conversations on them before migrating them to the cloud. These steps should make AWS practical for many enterprise environments looking to make the transition to public and hybrid cloud at their own pace, some faster than others. More on these and some related themes in future posts.

    The AWS re:Invent event continues to grow year over year; I heard a figure of over 12,000 people, however it was not clear if that included exhibiting vendors, AWS people, attendees, analysts, bloggers and media among others. A simple validation is that the keynotes, as well as the expo space, were in the larger rooms used by events such as EMCworld and VMworld when they hosted in Las Vegas, compared to what I saw last year at re:Invent. Unlike some large events such as VMworld, where at best there is a waiting queue or line to get into sessions or the hands-on lab (HOL), AWS re:Invent, while becoming more crowded, is still easy to get into, with time to use the HOL, which is of course powered by AWS, meaning you can resume later what you started while at re:Invent. Overall a good event and a nice series of enhancements by AWS; looking forward to next year's AWS re:Invent.

    Ok, nuff said (for now)

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review


    This is the first post of a two part series, read the second post here.

    Earlier this year I had the opportunity to test drive some Seagate 1200 12Gbs Enterprise SAS SSD’s as a follow-up to some earlier activity trying their Enterprise TurboBoost Drives. Disclosure: Seagate has been a StorageIO client and was also the sponsor of this white paper and associated proof-points mentioned in this post.

    The question to ask yourself is not if flash Solid State Device (SSD) technologies are in your future. Instead the questions are when, where, using what, how to configure and related themes. SSD, including traditional DRAM and NAND flash-based technologies, is like real estate where location matters; however, there are different types of properties to meet various needs. This means leveraging different types of NAND flash SSD technologies in different locations in a complementary and cooperative aka hybrid way. For example NAND flash SSD as part of an enterprise tiered storage strategy can be implemented server-side using PCIe cards, SAS and SATA drives as targets or as cache along with software, as well as leveraging SSD devices in storage systems or appliances.

    Seagate 1200 SSD
    Seagate 1200 Enterprise SAS 12Gbs SSD Image via Seagate.com

    Another place where NAND flash can be found, and where it complements SSD devices, is in so-called Solid State Hybrid Drives (SSHD) or Hybrid Hard Disk Drives (HHDD), including a new generation that accelerates writes as well as reads, such as those Seagate refers to as Enterprise TurboBoost. The Enterprise TurboBoost drives (view the companion StorageIO Lab review TurboBoost white paper here) were previously known as Solid State Hybrid Drives (SSHD) or Hybrid Hard Disk Drives (HHDD). Read more about TurboBoost here and here.

    The best server and storage I/O is the one you do not have to do

    Keep in mind that the best server or storage I/O is the one that you do not have to do, with the second best being the one with the least overhead, resolved as close to the processor (compute) as possible or practical. The following figure shows that the best place to resolve server and storage I/O is as close to the compute processor as possible; however, only a finite amount of storage memory can be located there. This is where the server memory and storage I/O hierarchy comes into play, which is also often thought of in the context of tiered storage, balancing performance and availability with cost and architectural limits.

    Also shown is locality of reference, which refers to how close data is to where it is being used and includes cache effectiveness or buffering. Hence a small amount of flash and DRAM cache in the right location can have a large benefit. Now if you can afford it, install as much DRAM along with flash storage as possible; however if you are like most organizations with finite budgets yet server and storage I/O challenges, then deploy a tiered flash storage strategy.

    flash cache locality of reference
    Server memory storage I/O hierarchy, locality of reference

    Seagate 1200 12Gbs Enterprise SAS SSD’s

    Back to the Seagate 1200 12Gbs Enterprise SAS SSD which is covered in this StorageIO Industry Trends Perspective thought leadership white paper. The focus of the white paper is to look at how the Seagate 1200 Enterprise class SSD’s and 12Gbps SAS address current and next generation tiered storage for virtual, cloud, traditional Little and Big Data infrastructure environments.

    Seagate 1200 Enterprise SSD

    This includes providing proof points running various workloads including Database TPC-B, TPC-E and Microsoft Exchange in the StorageIO Labs along with cache software comparing SSD, SSHD and different HDD’s including 12Gbs SAS 6TB near-line high-capacity drives.

    Seagate 1200 Enterprise SSD Proof Points

    The proof points in this white paper are from an applications focus perspective, representing more of an end-to-end real-world situation. While they are not included in this white paper, StorageIO has run traditional storage building-block focused workloads, which can be found at StorageIOblog (Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?). These include tools such as Iometer, iorate, vdbench among others for various IO sizes, mixed, random, sequential, reads, writes along with “hot-band" across different numbers of threads (concurrent users). “Hot-Band” is part of the SNIA Emerald energy effectiveness metrics for looking at sustained storage performance using tools such as vdbench. Read more about other various server and storage I/O benchmarking tools and techniques here.

    For the following series of proof-points (TPC-B, TPC-E and Exchange) a system under test (SUT) consisted of a physical server (described with the proof-points) configured with VMware ESXi along with guests virtual machines (VMs) configured to do the storage I/O workload. Other servers were used in the case of TPC workloads as application transactional requester to drive the SQL Server database and resulting server storage I/O workload. VMware was used in the proof-points to reflect a common industry trend of using virtual server infrastructures (VSI) supporting applications including database, email among others. For the proof-point scenarios, the SUT along with storage system device under test were dedicated to that scenario (e.g. no other workload running) unless otherwise noted.

    Server Storage I/O config
    Server Storage I/O configuration for proof-points

    Microsoft Exchange Email proof-point configuration

    For this proof-point, the Microsoft Jet Stress Exchange performance workload (e.g. the Exchange Database – EDB file) was placed on each of the different devices under test, with various metrics shown including activity rates and response time for reads as well as writes. For the Exchange testing, the EDB was placed on the device being tested while its log files were placed on a separate Seagate 400GB Enterprise 12Gbps SAS SSD.

    Test configuration: Seagate 400GB 12000 2.5” SSD (ST400FM0073) 12Gbps SAS, 600GB 2.5” Enterprise 15K with TurboBoost™ (ST600MX) 6 Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6 Gbps SAS, Seagate Enterprise Capacity Nearline (ST6000NM0014) 6TB 3.5” 7.2K RPM HDD 12 Gbps SAS and 3TB 7.2K SATA HDD. The email server was hosted as a guest on VMware vSphere/ESXi V5.5, Microsoft SBS2011 Service Pack 1 64 bit. The guest VM (VMware vSphere 5.5) was on an SSD based datastore on a physical machine (host) with 14 GB DRAM, quad CPU (4 x 3.192GHz) Intel E3-1225 v3, with LSI 9300 series 12Gbps SAS adapters in a PCIe Gen 3 slot, running Jet Stress 2010. All devices being tested were Raw Device Mapped (RDM) where the EDB resided. The VM was on a separate SSD based datastore from the devices being tested. Log file IOPs were handled via a separate SSD device, also persistent (no delayed writes). The EDB was 300GB and the workload ran for 8 hours.

    Microsoft Exchange VMware SSD performance
    Microsoft Exchange proof-points comparing various storage devices

    TPC-B (Database, Data Warehouse, Batch updates) proof-point configuration

    SSDs are a good fit for transaction database activity with reads and writes as well as for query-based decision support systems (DSS), data warehouse and big data analytics. The following are proof points of SSD capabilities for database activity. In addition to supporting database table files and objects, along with transaction journal logs, other uses include meta-data, import/export or other high-IO and write-intensive scenarios. Two database workload profiles were tested including batch update (write-intensive) and transactional. Activity involved running Transaction Processing Performance Council (TPC) workloads TPC-B (batch update) and TPC-E (transaction/OLTP, simulating a financial trading system) against Microsoft SQL Server 2012 databases. Each test simulation had the SQL Server database (MDF) on a different device with the transaction log file (LDF) on a separate SSD. TPC-B results for a single device are shown below.

    The TPC-B (write intensive) results below show how the TPS work being done (blue) increases from left to right (more is better) for various numbers of simulated users. Also shown on the same line for each amount of TPS work being done is the average latency in seconds (right to left), where lower is better. Results are shown from top to bottom for each group of users (100, 50, 20 and 1) for the different drives being tested. Note how the SSD device does more work at a lower response time vs. traditional HDDs.

    Test configuration: Seagate 400GB 12000 2.5” SSD (ST400FM0073) 12Gbps SAS, 600GB 2.5” Enterprise 15K with TurboBoost™ (ST600MX) 6 Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6 Gbps SAS, Seagate Enterprise Capacity Nearline (ST6000NM0014) 6TB 3.5” 7.2K RPM HDD 12 Gbps SAS and 3TB Seagate 7.2K SATA HDD. The workload generator and virtual clients ran Windows 7 Ultimate 64 bit. The Microsoft SQL Server 2012 database was on a Windows 7 guest. The guest VM (VMware vSphere 5.5) had a dedicated 14 GB DRAM, quad CPU (4 x 3.192GHz) Intel E3-1225 v3, with LSI 9300 series 12Gbps SAS adapters in a PCIe Gen 3 slot, along with TPC-B (www.tpc.org) workloads.

    VM with guest OS along with SQL tempdb and masterdb resided on separate SSD based data store from devices being tested (e.g., where MDF (main database tables) and LDF (log file) resided). All devices being tested were Raw Device Mapped (RDM) independent persistent with database log file on a separate SSD device also persistent (no delayed writes) using VMware PVSCSI driver. MDF and LDF file sizes were 142GB and 26GB with scale factor of 10000, with each step running for one hour (10-minute preamble). Note that these proof-points DO NOT use VMware or any other third-party cache software or I/O acceleration tool technologies as those are covered later in a separate proof-point.

    TPC-B sql server database SSD performance
    TPC-B SQL Server database proof-points comparing various storage devices

    TPC-E (Database, Financial Trading) proof-point configuration

    The following shows results from the TPC-E test (OLTP/transactional workload) simulating a financial trading system. TPC-E is an industry standard workload that performs a mix of read and write database queries. Proof-points were performed with various numbers of users from 10, 20, 50 and 100 to determine Transactions per Second (TPS, aka I/O rate) and response time in seconds. The TPC-E transactional results are shown for each device being tested across different user workloads. The results show how TPC-E TPS work (blue) increases from left to right (more is better) for larger numbers of users, along with corresponding latency (green) that goes from right to left (less is better). The Seagate Enterprise 1200 SSD is shown on the top in the figure below with a red box around its results. Note how the SSD has a lower latency while doing more work compared to the other traditional HDDs.

    Test configuration: Seagate 400GB 12000 2.5” SSD (ST400FM0073) 12Gbps SAS, 600GB 2.5” Enterprise 15K with TurboBoost™ (ST600MX) 6 Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6 Gbps SAS, Seagate Enterprise Capacity Nearline (ST6000NM0014) 6TB 3.5” 7.2K RPM HDD 12 Gbps SAS and 3TB Seagate 7.2K SATA HDD. The workload generator and virtual clients ran Windows 7 Ultimate 64 bit. The Microsoft SQL Server 2012 database was on a Windows 7 guest. The guest VM (VMware vSphere 5.5) had a dedicated 14 GB DRAM, quad CPU (4 x 3.192GHz) Intel E3-1225 v3, with LSI 9300 series 12Gbps SAS adapters in a PCIe Gen 3 slot, along with TPC-E (www.tpc.org) workloads.

    VM with guest OS along with SQL tempdb and masterdb resided on separate SSD based data store from devices being tested (e.g., where MDF (main database tables) and LDF (log file) resided). All devices being tested were Raw Device Mapped (RDM) independent persistent with database log file on a separate SSD device also persistent (no delayed writes) using VMware PVSCSI driver. MDF and LDF file sizes were 142GB and 26GB with scale factor of 10000, with each step running for one hour (10-minute preamble). Note that these proof-points DO NOT use VMware or any other third-party cache software or I/O acceleration tool technologies as those are covered later in a separate proof-point.

    TPC-E sql server database SSD performance
    TPC-E (Financial trading) SQL Server database proof-points comparing various storage devices

    Continue reading part-two of this two-part series here including the virtual server storage I/O blender effect and solution.

    Ok, nuff said (for now).

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Part II: Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review


    This is the second post of a two part series, read the first post here.

    Earlier this year I had the opportunity to test drive some Seagate 1200 12Gbs Enterprise SAS SSD’s as a follow-up to some earlier activity trying their Enterprise TurboBoost Drives. Disclosure: Seagate has been a StorageIO client and was also the sponsor of this white paper and associated proof-points mentioned in this post.

    The Server Storage I/O Blender Effect Bottleneck

    The earlier proof-points focused on SSD as a target or storage device. In the following proof-points, the Seagate Enterprise 1200 SSD is used as a shared read cache (write-through). Using a write-through cache enables a given amount of SSD to give a performance benefit to other local and networked storage devices.

    traditional server storage I/O
    Non-virtualized servers with dedicated storage and I/O paths.

    Aggregation causes aggravation with I/O bottlenecks because of consolidation using server virtualization. The figure above shows non-virtualized servers with their own dedicated physical machine (PM) and I/O resources. When various servers are virtualized and hosted by a common host (physical machine), their various workloads compete for I/O and other resources. In addition to competing for I/O performance resources, these different servers also tend to have diverse workloads.

    virtual server storage I/O blender
    Virtual server storage I/O blender bottleneck (aggregation causes aggravation)

    The figure above shows aggregation causing aggravation, with the result being I/O bottlenecks as various applications' performance needs converge and compete with each other. The aggregation and consolidation result is a blend of random, sequential, large, small, read and write characteristics. These different storage I/O characteristics are mixed up and need to be handled by the underlying I/O capabilities of the physical machine and hypervisor. As a result, a common deployment for SSD, in addition to use as a target device for storing data, is as a cache to cut bottlenecks for traditional spinning HDDs.

    In the following figure a solution is shown introducing I/O caching with SSD to help mitigate or cut the effects of server consolidation causing performance aggravations.

    Creating a server storage I/O blender bottleneck

    Addressing the VMware Server Storage I/O blender with cache

    Addressing server storage I/O blender and other bottlenecks

    For these proof-points, the goal was to create an I/O bottleneck resulting from multiple VMs in a virtual server environment performing application work. In this proof-point, multiple competing VMs including a SQL Server 2012 database and an Exchange server shared the same underlying storage I/O infrastructure including HDDs. The 6TB (Enterprise Capacity) HDD was configured as a VMware datastore and allocated as virtual disks to the VMs. Workloads were then run concurrently to create an I/O bottleneck for both cached and non-cached results.

    Server storage I/O with virtualization proof-point configuration topology

    The following figure shows two sets of proof points, cached (top) and non-cached (bottom) with three workloads. The workloads consisted of concurrent Exchange and SQL Server 2012 (TPC-B and TPC-E) running on separate virtual machine (VM) all on the same physical machine host (SUT) with database transactions being driven by two separate servers. In these proof-points, the applications data were placed onto the 6TB SAS HDD to create a bottleneck, and a portion of the SSD used as a cache. Note that the Virtunet cache software allows you to use a part of a SSD device for cache with the balance used as a regular storage target should you want to do so.

    If you have paid attention to the earlier proof-points, you might notice that some of the results below are not as good as those seen in the Exchange, TPC-B and TPC-E results above. The reason is simply that the earlier proof-points were run without competing workloads, and the database along with log or journal files were placed on separate drives for performance. In the following proof-point, as part of creating a server storage I/O blender bottleneck, the Exchange, TPC-B as well as TPC-E workloads were all running concurrently with all data on the 6TB drive (something you normally would not want to do).

    storage I/O blender solved
    Solving the VMware Server Storage I/O blender with cache

    The cache and non-cached mixed workloads shown above prove how an SSD based read-cache can help to reduce I/O bottlenecks. This is an example of addressing the aggravation caused by aggregation of different competing workloads that are consolidated with server virtualization.

    For the workloads shown above, all data (database tables and logs) were placed on VMware virtual disks created from a datastore using a single 7.2K 6TB 12Gbps SAS HDD (e.g. Seagate Enterprise Capacity).

    The guest VM system disks, which included paging, applications and other data files, were virtual disks using a separate datastore mapped to a single 7.2K 1TB HDD. Each workload ran for eight hours, with the TPC-B and TPC-E having 50 simulated users. For the TPC-B and TPC-E workloads, two separate servers were used to drive the transaction requests to the SQL Server 2012 database.

    For the cached tests, a Seagate Enterprise 1200 400GB 12Gbps SAS SSD was used as the backing store for the cache software (Virtunet Systems Virtucache) that was installed and configured on the VMware host.

    During the cached tests, the physical HDD for the data files (e.g. 6TB HDD) and system volumes (1TB HDD) were read cache enabled. All caching was disabled for the non-cached workloads.

    Note that this was only a read cache, which has the side benefit of off-loading those activities, enabling the HDD to focus on writes or read-ahead. Also note that while the combined TPC-E, TPC-B and Exchange databases, logs and associated files represented over 600GB of data, there was also the combined space, and thus cache impact, of the two system volumes and their data. This simple workload and configuration is representative of how SSD caching can complement high-capacity HDDs.

    Seagate 6TB 12Gbs SAS high-capacity HDD

    While the star and focus of this series of proof-points is the Seagate 1200 Enterprise 12Gbs SAS SSD, the caching software (Virtunet) and Enterprise TurboBoost drives also play key supporting and favorable roles. However the 6TB 12Gbs SAS high-capacity drive caught my attention from a couple of different perspectives. Certainly the space capacity was interesting, along with a 12Gbs SAS interface well suited for near-line, high-capacity and dense tiered storage environments. However for a high-capacity drive its performance is what really caught my attention, both in the standard Exchange, TPC-B and TPC-E workloads, as well as when combined with SSD and cache software.

    This opens the door for a great combination of leveraging some amount of high-performance flash-based SSD (or TurboBoost drives) combined with cache software and high-capacity drives such as the 6TB device (Seagate now has larger versions available). Something else to mention is that the 6TB HDD, in addition to being available in either 12Gbs SAS, 6Gbs SAS or 6Gbs SATA, also has enhanced durability with a read bit error rate of 10^15 (e.g. on average one sector read error per 10^15 bits read, which works out to roughly 125TB read per error) and an AFR (annual failure rate) of 0.63% (see more speeds and feeds here). Hence if you are concerned about using large capacity HDDs and them failing, make sure you go with those that have a better bit error rate specification (e.g. 10^15 vs. 10^14) and a low AFR, which are more common with enterprise class vs. lower cost commodity or workstation drives. Note that these high-capacity enterprise HDDs are also available with Self-Encrypting Drive (SED) options.

    Summary

    Read more in this StorageIO Industry Trends and Perspective (ITP) white paper compliments of Seagate 1200 12Gbs SAS SSD’s and visit the Seagate Enterprise 1200 12Gbs SAS SSD page here. Moving forward there is the notion that flash SSD will be everywhere. There is a difference between all data on flash SSD vs. having some amount of SSD involved in preserving, serving and protecting (storing) information.

    Key themes to keep in mind include:

    • Aggregation can cause aggravation which SSD can alleviate
    • A relatively small amount of flash SSD in the right place can go a long way
    • Fast flash storage needs fast server storage I/O access hardware and software
    • Locality of reference with data close to applications is a performance enabler
    • Flash SSD everywhere does not mean everything has to be SSD based
    • Having some amount of flash in different places is important for flash everywhere
    • Different applications have various performance characteristics
    • SSD as a storage device or persistent cache can speed up IOPS and bandwidth

    Flash and SSD are in your future; this comes back to the questions of how much flash SSD you need, along with where to put it, how to use it and when.

    Ok, nuff said (for now).

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Lenovo ThinkServer TD340 StorageIO lab Review

    Storage I/O trends

    Lenovo ThinkServer TD340 Server and StorageIO lab Review

    Earlier this year I did a review of the Lenovo ThinkServer TS140 in the StorageIO Labs (see the review here); in fact I ended up buying a TS140 after the review, and a few months back picked up another one. This StorageIOlab review looks at the Lenovo ThinkServer TD340 Tower Server which, besides having a larger model number than the TS140, also has a lot more capabilities (server compute, memory, I/O slots and internal hot-swap storage bays). Pricing varies depending on specific configuration options, however at the time of this post Lenovo was advertising a starting price of $1,509 USD for a specific configuration here. You will need to select different options to determine your specific cost.

    The TD340 is one of the servers that Lenovo had prior to its acquisition of the IBM x86 server business, which you can read about here. Note that the Lenovo acquisition of the IBM xSeries business group began in early October 2014 and is expected to be completed across different countries in early 2015. Read more about the IBM xSeries business unit here, here and here.

    The Lenovo TD340 Experience

    Let's start with the overall experience, which was very easy other than deciding what make and model to try. This included first answering some questions to get the process moving, and agreeing to keep the equipment safe, secure and insured as well as not damaging anything. Part of the process also involved answering some configuration related questions, and shortly thereafter a large box from Lenovo arrived.

    TD340 is ready for use
    TD340 with Keyboard and Mouse (Monitor and keyboard not included)

    One of the reasons I have a photo of the TD340 on a desk is that I initially put it in an office environment similar to what I did with the TS140 as Lenovo claimed it would be quiet enough to do so. I was not surprised and indeed the TD340 is quiet enough to be used where you would normally find a workstation or mini-tower. By being so quiet the TD340 is a good fit for environments that need a server that has to go into an office environment as opposed to a server or networking room.

    Welcome to the TD340
    Lenovo ThinkServer Setup

    TD340 Setup
    Lenovo TD340 as tested in BIOS setup, note the dual Intel Xeon E5-2420 v2 processors

    TD340 as tested

    TD340 Selfie of whats inside
    TD340 "Selfie" with 4 x 8GB DDR3 DIMM (32GB) and PCIe slots (empty)

    TD340 disk drive bays
    TD340 internal drive hot-swap bays

    Speeds and Feeds

    The TD340 that I tested was a Machine type 7087 model 002RUX which included 4 x 16GB DIMMs and both processor sockets occupied.

    You can view the Lenovo TD340 data sheet with more speeds and feeds here, however the following is a summary.

    • Operating systems support include various Windows Servers (2008-2012 R2), SUSE, RHEL, Citrix XenServer and VMware ESXi
    • Form factor is 5U tower with weight starting at 62 pounds depending on how configured
    • Processors include support for up to two (2) Intel E5-2400 v2 series
    • Memory includes 12 DDR3 DRAM DIMM slots (LV RDIMM and UDIMM) for up to 192GB.
    • Expansion slots vary depending on whether a single or dual CPU socket is installed. With a single CPU socket installed there is 1 x PCIe Gen3 FH/HL x8 mechanical, x4 electrical, 1 x PCIe Gen3 FH/HL x16 mechanical, x16 electrical, and a single PCI 32bit/33 MHz FH/HL slot. With two CPU sockets installed extra PCIe slots are enabled. These include one x PCIe Gen3 FH/HL x8 mechanical, x4 electrical, one x PCIe Gen3 FH/HL x16 mechanical, x16 electrical, three x PCIe Gen3 FH/HL x8 mechanical, x8 electrical and a single PCI 5V 32-bit/33 MHz FH/HL slot.
    • Two 5.25” media bays for CD or DVDs or other devices
    • Integrated ThinkServer RAID (0/1/10/5) with optional RAID adapter models
    • Internal storage varies depending on model including up to eight (8) x 3.5” hot swap drives or 16 x 2.5” hot swap drives (HDD’s or SSDs).
    • Storage space capacity varies by the type and size of the drives being used.
    • Networking interfaces include two (2) x GbE
    • Power supply options include single 625 watt or 800 watt, or 1+1 redundant hot-swap 800 watt, five fixed fans.
    • Management tools include ThinkServer Management Module and diagnostics

    What Did I do with the TD340

    After initial check out in an office type environment, I moved the TD340 into the lab area where it joined other servers to be used for various things.

    Some of those activities included using the Windows Server 2012 Essentials along with associated admin activities as well as installing VMware ESXi 5.5.

    TD340 is ready for use
    TD340 with Keyboard and Mouse (Monitor and keyboard not included)

    What I liked

    Unbelievably quiet, which may not seem like a big deal; however if you are looking to deploy a server or system into a small office workspace, this becomes an important consideration. Otoh, if you are a power user and want a robust server that can be installed into a home media entertainment system, well, this might be a nice to have consideration ;). Speaking of I/O slots, naturally I'm interested in server storage I/O, so having multiple slots is a must have, along with a processor that is multi-core (pretty much standard these days) and has VT and EP for supporting VMware (these were disabled in the BIOS, however that was an easy fix).

    What I did not like

    The only thing I did not like was that I ran into a compatibility issue trying to use an LSI 9300 series 12Gb SAS HBA, which Lenovo is aware of and perhaps has even addressed by now. What I ran into is that the adapters work, however I was not able to get the full performance out of them compared to other systems, including my slower Lenovo TS140s.

    Summary

    Overall I give Lenovo and the TD340 a "B+", which would have been an "A" had I not gotten myself into a BIOS situation or been able to run the 12Gbps SAS PCIe Gen 3 cards at full speed. Likewise the Lenovo service and support also helped to improve the experience. Otoh, if you are simply going to use the TD340 in a normal out of the box mode without customizing it to add your own adapters or install your own operating system or hypervisors (beyond those that are supplied as part of the install setup tool kit), you may have an "A" or "A+" experience with the TD340.

    Would I recommend the TD340 to others? Yes for those who need this type and class of server for Windows, *nix, Hyper-V or VMware environments.

    Would I buy a TD340 for myself? Maybe, if that is the size and type of system I need; however, I have my eye on something bigger. On the other hand, for those who need a good value server for an SMB or ROBO environment with room to grow, the TD340 should be on your shopping list to compare with other solutions.

    Disclosure: Thanks to the folks at Lenovo for sending and making the TD340 available for review and a hands-on test experience, including covering the cost of shipping both ways (the unit should now be back in your possession). This is not a sponsored post: Lenovo is not paying for it (they did loan the server and cover two-way shipping), nor am I paying them; however, I have bought some of their servers in the past for the StorageIO Lab environment as companions to some Dell and HP servers that I have also purchased.

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    April and May 2014 Server and StorageIO Update newsletter


    Server and StorageIO Update newsletter – April and May 2014

    Welcome to the April and May 2014 edition of the StorageIO Update (newsletter) containing trends perspectives on cloud, virtualization and data infrastructure topics.

    The good news is that while spring is running late (as is this newsletter ;) here in the Stillwater MN area as well as other parts of the world, both are finally here. To say that a lot has been going on and that things have been busy would be an understatement; however, that is probably the situation with you as well. So what has been going on during April and May 2014?

    Industry and Technology Updates

    Sony and Fujifilm (with its partner IBM) are trading marketing and proof of concept (POC) lab material in their efforts to show tape is still alive for data storage. Sony announced a month or so ago that it was moving the bar to 185TB per tape (without dedupe). Not to be outdone, Fujifilm announced in late May that it, in conjunction with IBM, has a POC for a 154TB LTO in the works.

    Greg Schulz on break
    On the Hard Disk Drive (HDD) front, Seagate released a new 6TB device that they claim to be fast. I asked Seagate to send me one of the drives to see how fast it really is vs. their claims. While I have not completed all tests yet, what I can tell you is that the 6TB 3.5" 12Gbps SAS 7.2K RPM drive is like an American football linebacker or fullback. It's big, bulky, high-capacity, resilient with a 10^15 bit error rate (higher than normal high-capacity HDDs) and fast.

    Sure, the 6TB HDD is not in the speed race with a quick SSD, SSHD or 15K drive; however, I was surprised at just how fast it is for its space capacity. Watch for a follow-up review in the not so distant future, and if a WD 6TB drive were to show up on my doorstep I can give some perspectives on that as well.
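
    As a rough back-of-the-envelope sketch (my arithmetic, not a vendor spec sheet, and the 1 in 10^14 figure is the commonly quoted desktop class rating), here is what that bit error rate works out to in terms of data read per expected unrecoverable error:

        # Rough sketch: convert an unrecoverable bit error rate (BER) spec into the
        # approximate amount of data read per expected error (illustrative only).
        def tb_per_error(bits_per_error):
            bytes_per_error = bits_per_error / 8      # 8 bits per byte
            return bytes_per_error / 1e12             # decimal terabytes

        for label, ber in [("desktop class, 1 in 10^14", 1e14),
                           ("enterprise class, 1 in 10^15", 1e15)]:
            print(f"{label}: ~{tb_per_error(ber):,.1f} TB read per expected error")

        # Roughly 12.5 TB vs 125 TB read per expected error, i.e. an order of
        # magnitude fewer unrecoverable read errors for the same amount of data moved.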

    As for SSDs, they are following the trend paths of tape and HDDs: increasing in space capacity, coming down in price and improving in resiliency. While I see HDDs and even tape surviving for some time, granted in different roles, I'm also a firm believer that flash SSD in some form is in your future. The question is how much, when, where, with what and from whom. Needless to say there is plenty of SSD related hardware and software activity occurring in the StorageIO labs ;).

    Vendors and revenue earnings: is there a storage slowdown?

    In other industry news and activity, vendor quarterly earnings are out and the information is mixed (see this recent post on whether there is an information recession). IBM is one of those announcing lower storage related revenues, while NetApp had mixed results (as did other vendors). In addition, IBM is officially saying it is finally dropping the NetApp (FAS/ONTAP) based N series (originally reported a week or so ago via Bloomberg). Note that IBM will continue to OEM the NetApp E series (e.g. Engenio based). Some of you might remember (or can do a Google search) that IBM indicated a few years back that it was de-emphasizing the N series or moving away from it. Perhaps this time they really mean it, while NetApp could move to embrace those VARs and IBM business partners to sell NetApp vs. IBM branded versions of the product. Here are some more perspectives appearing in SearchStorage. Watch for more about NetApp in a future follow-up post.

    In some other industry news, you might remember back in the February StorageIO update newsletter there was mention of Avago buying LSI. Now Avago is selling the flash business of LSI to Seagate for about $450M USD in the ongoing flash dance for cache and cash.

    Staying busy is a good thing

    What have I been doing during April and May 2014 to stay busy besides getting ready for spring and summer fun including in and around the water?

    • Attended NAB 2014 in Las Vegas where it is not just about archiving pertaining to data storage
    • Presented backup, restore, BC, DR and archiving including a keynote at the SNIA DSI conference
    • Was back in Las Vegas to attend EMCworld, I have some updates in the works from that event
    • Presented several BrightTalk Webinars (see events below) with more coming up in June
    • Released a new ITP white paper and StorageIO lab proof points with more in the works
    • More videos and podcasts, plus technology reviews including servers among other things
    • Participated including keynote at a vendor neutral archiving event in Europe
    • Providing industry commentary in different venues (see below) along with some writing
    • Not to mention various client consulting projects
    • Remember, work hard play hard, play hard and work hard!

    What's in the works?

    Several projects and things are in the works that will show themselves in the coming weeks or months if not sooner. Some of which are more proof points coming out of the StorageIO labs involving software defined, converged, cloud, virtual, SSD, data protection and more.

    Speaking of Software Defined, join me for a free BrightTalk Webinar on June 12 on the many faces and facets of virtualization and software defined storage. Learn more about that event here as well as in the activities section down below.

    Watch for more StorageIO posts, commentary, perspectives, presentations, webinars, tips and events on information and data infrastructure topics, themes and trends. Data Infrastructure topics include among others cloud, virtual, legacy server, storage I/O networking, data protection, hardware and software.

    Enjoy this edition of the StorageIO Update newsletter and look forward to catching up with you live or online while out and about this spring.

    Ok, nuff said (for now)

    Cheers gs

    April and May 2014 Industry trend and perspectives

    Tips, commentary, articles and blog posts

    StorageIO Industry Trends and Perspectives

    The following is a synopsis of some StorageIOblog posts, articles and comments in different venues on various industry trends, perspectives and related themes about clouds, virtualization, and data and storage infrastructure topics.

    StorageIO comments and perspectives in the news

    StorageIO in the news

    SearchStorage: Comments on IBM dropping N series, NetApp is still OEM to IBM
    InfoStor: Comments on Software Defined Storage: 10 Things You Need to Know
    SearchDataBackup: Comments about buying guides for enterprise Hard Disk Drives (HDD)
    SearchDataBackup: Conversation about data protection modernization
    InfoStor: Comments on cloud storage, 10 things you need to know
    InfoStor: Comments on Data Archiving: Life Beyond Compliance
    NetworkComputing: Comments on Sorting Through Storage Industry Hype
    StateTech: Comments on Secure Erasing HDDs and SSDs including planning in advance
    SNIA: Comments on CDMI Cloud Management Conformance Testing
    EnterpriseStorageForum: Comments on Hybrid Cloud Storage Tips

    StorageIO tips and articles appearing in various venues

    StorageIO tips and articles

    Via InformationSecurityBuzz: Dark Territories MH370 – Do You Know Where Your Information Is? We still don't know 100% where the missing Malaysia Airlines flight 370 is, which amplifies the fact that there are still dark territories or gaps in coverage in this large world. Likewise there are gaps in coverage in many IT environments, yet tools and technologies are available to gain better situational awareness and insight.

    Via The Virtualization Practice: This piece looks at the EMC ViPR V1.1 and SRM V3.0 (Software Defined Storage Management) announcements from earlier this year, along with links to earlier announcements and technology analysis. Note that on May 5, 2014 EMC announced ViPR 2.0 along with their new Elastic Cloud Storage Appliance (ECS) among other enhancements at EMC World. Additional perspectives on ViPR 2.0, the Elastic Cloud Storage Appliance and EMCworld announcement summary analysis can be found here in this video (with text) that I did (produced via TechTarget) while at EMCworld 2014. Watch for more coverage of ViPR 2.0 and other related news as well as updated items from EMCworld 2014 in upcoming posts, articles and commentary.

    Via InfoStor: Data Archiving: Life Beyond Compliance. Today many people think or assume, based on what they hear, that archiving is only for regulatory compliance. Meanwhile some of you may remember a time before the regulatory compliance era of the early 2000s when archiving was used as a general purpose tool, technology and solution to many IT data management storage challenges. This piece I did over at InfoStor looks at Data Archiving: Life Beyond Compliance and how archiving is also a key technology that is part of Data Footprint Reduction (DFR), which also includes compression, dedupe and thin provisioning among other techniques and tools. Here is a related Email Archiving piece (beyond compliance) from over at StateTech along with practical tips in a piece over at VMware Communities.

    StorageIO video and audio pod casts

    StorageIOblog post
    Video conversation with Rob Emsley of EMC and me discussing data protection modernization, moving beyond the product pitch! (Via TechTarget SearchDataBackup). In this conversation Rob and I talk about various aspects of data protection modernization including finding and fixing problems at the source, accidental architectures, using new (and old) things in new ways, and rethinking data protection. However, the conversation is a discussion about the topics, issues, trends and what can be done, as opposed to a product pitch infomercial. Check out this video blog (vblog) of Rob and me via TechTarget SearchDataBackup, then weigh in with your comments.

    SNIA DSI David Dale
    Audio Podcast: Data Storage Innovation Conversation with SNIA Wayne Adams and David Dale
    In this episode, SNIA Chairman Emeritus Wayne Adams and current Chairman David Dale join me in a conversation from the Data Storage Innovation (DSI) 2014 conference event. DSI is a new event produced by SNIA targeted for IT professionals involved with data storage related topics, themes, technologies and tools spanning hardware, software, cloud, virtual and physical. In this conversation, we talk about the new DSI event, the diversity of new attendees who are attending their first SNIA event, along with other updates. Some of these updates include what is new with the SNIA Cloud Data Management Initiative (CDMI), Non Volatile Memory (think flash and SSD), SMI-S, education and more. Listen in to our conversation in this podcast here as we cover cloud, convergence, software defined and more about data storage.

    Cash Coleman ClearDB
    Audio Podcast: Catching up with Cash Coleman talking ClearDB, cloud database and Johnny Cash
    In this episode from the SNIA DSI 2014 event I am joined by Cashton Coleman (@Cash_Coleman). Cashton (Cash) is a Software architect, product mason, family bonder, life builder, idea founder along with Founder & CEO of SuccessBricks, Inc., makers of ClearDB. ClearDB is a provider of MySQL database software tools for cloud and physical environments. We talk about ClearDB, what they do and whom they do it with, including deployments in clouds as well as onsite. For example if you are using some of the Microsoft Azure cloud services with MySQL, you may already be using this technology. However, there is more to the story and discussion, including how Cash got his name and how to speed up databases for little and big data among other topics. Check out ClearDB and listen in to the conversation with Cash podcast here.

    Audio Podcast: Matt Vogt talks VMware vCOP in his first ever podcast
    In this episode from the Computex Rethink your Datacenter for 2017 planning and strategy event I am joined by Matt Vogt (@MattVogt). Matt is a Principal Architect with Computex Technology Solutions as well as a certified VMware specialist and fellow vExpert. We talk about the role of automation for performance and capacity optimization along with how VMware vCOps plays an important role. Listen in to learn more about how to gain insight and situational awareness to make informed decisions for your data infrastructure environment with Matt. Check out Matt's blog here at blog.mattvogt.net and listen in to the podcast here.

    StorageIO audio podcasts are also available via
    and at StorageIO.tv

    StorageIOblog posts and perspectives

    StorageIOblog post

  • Is there an information or data recession, are you using less storage (with polls)
  • Lenovo TS140 Server and Storage IO Review Part I here and Part II here
  • Nand flash SSD server storage I/O conversations: See more SSD stories here
  • Data Protection Diaries: March 31 World Backup Day is Restore Data Test, read more here
  • March 2014 StorageIO Update Newsletter: Click here to read more
  • StorageIO White Papers, Solution Briefs and StorageIO Lab reports

    White Paper

    New White Paper: Solid State Hybrid Drives (SSHD)
    Enterprise SSHD and Flash SSD – Better Together – Part of an Enterprise Tiered Storage Strategy
    The question to ask yourself is not if flash Solid State Device (SSD) technologies are in your future. Instead the questions are when, where, using what, how to configure and related themes. SSD including traditional DRAM and NAND flash-based technologies are like real estate where location matters; however, there are different types of properties to meet various needs.

    This means leveraging different types of NAND flash SSD technologies in different locations in a complementary and cooperative aka hybrid way. In this StorageIO Industry Trends Perspective thought leadership white paper we look at how enterprise class Solid State Hybrid Drives (SSHD) address current and next generation tiered storage for virtual, cloud, traditional Little and Big Data infrastructure environments. This includes providing proof points running various workloads including database TPC-B, TPC-E and Microsoft Exchange in the StorageIO Labs comparing SSHD, SSD and different HDDs. Read more in this StorageIO Industry Trends and Perspective (ITP) white paper compliments of Seagate Enterprise Turbo SSHD. Read the companion blog post here that includes more proof points for large file transfer performance.

    Remember to check out our objectstoragecenter.com page where you will find a growing collection of information and links on cloud and object storage themes, technologies and trends from various sources.

    If you are interested in data protection including Backup/Restore, BC, DR, BR and Archiving along with associated technologies, tools, techniques and trends visit our storageioblog.com/data-protection-diaries-main/ page. For those who follow SSD and related technologies, we have organized a series of items at storageio.com/ssd.

    StorageIO events and activities

    Server and StorageIO seminars, conferences, web casts, events, activities

    The StorageIO calendar continues to evolve, here are some recent and upcoming activities including live in-person seminars, conferences, keynote and speaking activities as well as on-line webinars, twitter chats, Google+ hangouts among others.

    June 12, 2014 – The Many Facets of Virtual Storage and Software Defined Storage Virtualization – Webinar, 9AM PT
    June 11, 2014 – The Changing Face and Landscape of Enterprise Storage – Webinar, 9AM PT
    May 14, 2014 – Brouwer Storage Consultancy – Keynote: Healthcare Vendor Neutral Archiving Symposium – Nijkerk, Netherlands
    May 5-7, 2014 – EMC World – Las Vegas
    April 23, 2014 – SNIA DSI Event – Keynote: Enabling Data Infrastructure Return On Innovation – The Other ROI (backup, restore, BC, DR and archiving)
    April 22, 2014 – SNIA DSI Event – The Cloud Hybrid “Homerun” – Life Beyond The Hype (backup, restore, BC, DR and archiving)
    April 16, 2014 – Open Source and Cloud Storage – Enabling business, or a technology enabler? – Webinar, 9AM PT
    April 9, 2014 – Storage Decision Making for Fast, Big and Very Big Data Environments – Webinar, 9AM PT

    Click here to view other upcoming along with earlier event activities. Watch for more 2014 events to be added soon to the StorageIO events calendar page. Topics include data protection modernization (backup/restore, HA, BC, DR, archive), data footprint reduction (archive, compression, dedupe), storage optimization, SSD, object storage, server and storage virtualization, big data, little data, cloud and object storage, performance and management trends among others.

    Vendors, VAR’s and event organizers, give us a call or send an email to discuss having us involved in your upcoming pod cast, web cast, virtual seminar, conference or other events.

    StorageIO Update Newsletter Archives

    Click here to view previous StorageIO Update newsletters (HTML and PDF versions) and download PDF copies at www.storageio.com/newsletter. Subscribe to this newsletter (and pass it along) by clicking here (via Secure Campaigner site).

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy

    Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy

    The question to ask yourself is not if flash Solid State Device (SSD) technologies are in your future.

    Instead the questions are when, where, using what, how to configure and related themes. SSD including traditional DRAM and NAND flash-based technologies are like real estate where location matters; however, there are different types of properties to meet various needs. This means leveraging different types of NAND flash SSD technologies in different locations in a complementary and cooperative aka hybrid way.

    Introducing Solid State Hybrid Drives (SSHD)

    Solid State Hybrid Drives (SSHD) are the successors to the previous generation Hybrid Hard Disk Drives (HHDD) that I have used for several years (you can read more about them here and here).

    While it would be nice to simply have SSD for everything, there are also economic budget realities to be dealt with. Keep in mind that a bit of nand flash SSD cache in the right location for a given purpose can go a long way, which is the case with SSHDs. This is also why in many environments today there is a mix of SSDs and HDDs of various makes, types, speeds and capacities (e.g. different tiers) to support diverse application needs (e.g. not everything in the data center is the same).

    However, if you have the need for speed and can afford or will benefit from the increased productivity, by all means go SSD!

    Otoh, if you have budget constraints and need more space capacity yet want some performance boost, then SSHDs are an option. The big difference with today's SSHDs, which are available for enterprise class storage systems and servers as well as desktop environments, is that they can accelerate both reads and writes. This is different from their predecessors that I have used for several years now, which had basic read acceleration but no write optimizations.

    SSHD storage I/O opportunity
    Better Together: Where SSHDs fit in an enterprise tiered storage environment with SSD and HDDs

    As their name implies, they are a hybrid between a nand flash Solid State Device (SSD) and a traditional Hard Disk Drive (HDD), meaning a best of both situation. This means that an SSHD is based on a traditional spinning HDD (various models with different speeds, space capacities and interfaces) along with DRAM (which is found on most modern HDDs), nand flash for read cache, and some extra nonvolatile memory for persistent write cache, combined with a bit of software defined storage performance optimization algorithms.

    Btw, if you were paying attention to that last sentence you would have picked up on something about nonvolatile memory being used for persistent write cache, which should prompt the question: would that help with nand flash write endurance? Yup.
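
    To put a little math behind why a modest amount of flash in the right place can go a long way, here is a simple effective response time sketch; the latency figures are illustrative assumptions on my part, not measured values for any particular device:

        # Simple cache hit rate model; the millisecond figures below are
        # illustrative assumptions, not measurements of any particular device.
        FLASH_MS = 0.2   # assumed nand flash cache hit latency
        HDD_MS = 8.0     # assumed spinning disk random read latency

        def effective_latency_ms(hit_rate):
            return hit_rate * FLASH_MS + (1.0 - hit_rate) * HDD_MS

        for hit in (0.0, 0.5, 0.8, 0.9):
            print(f"cache hit rate {hit:.0%}: ~{effective_latency_ms(hit):.2f} ms average")

        # Even an 80 to 90 percent hit rate on a relatively small cache pulls the
        # average response time well below that of the underlying spinning disk.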

    Where and when to use SSHD?

    In the StorageIO Industry Trends Perspective thought leadership white paper I recently released compliments of Seagate Enterprise Turbo SSHD (that's a disclosure btw ;), enterprise class Solid State Hybrid Drives (SSHD) were looked at and test driven in the StorageIO Labs with various application workloads. These activities included running in a virtual environment with common applications including database and email messaging, using industry standard benchmark workloads (e.g. TPC-B and TPC-E for database, JetStress for Exchange).

    Storage I/O sshd white paper

    Conventional storage system focused workloads using iometer, iorate and vdbench were also run in the StorageIO Labs to set up baseline reads, writes, random, sequential, small and large I/O sizes with IOPS, bandwidth and response time latency results. Some of those results can be found here (Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?), with other ongoing workloads continuing in different configurations. The various test drive proof points were done in the StorageIO Labs comparing SSHD, SSD and different HDDs.
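
    As a quick refresher on how those baseline metrics relate to each other (bandwidth is IOPS times I/O size, and average response time follows from outstanding I/Os and IOPS via Little's Law), here is a small sketch using illustrative numbers rather than lab results:

        # Relate the baseline metrics to each other; the numbers fed in below are
        # illustrative assumptions, not StorageIO Lab results.
        def bandwidth_mb_s(iops, io_size_kb):
            return iops * io_size_kb / 1024.0          # MB per second

        def avg_response_ms(outstanding_ios, iops):
            return outstanding_ios / iops * 1000.0     # Little's Law: time in system

        print(f"4KB random @ 200 IOPS, QD1: "
              f"{bandwidth_mb_s(200, 4):.1f} MB/s, {avg_response_ms(1, 200):.1f} ms")
        print(f"256KB sequential @ 400 IOPS, QD4: "
              f"{bandwidth_mb_s(400, 256):.0f} MB/s, {avg_response_ms(4, 400):.0f} ms")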

    Data Protection (Archiving, Backup, BC, DR)

    Staging cache buffer area for snapshots, replication or current copies before streaming to other storage tier using fast read/write capabilities. Meta data, index and catalogs benefit from fast reads and writes for faster protection.

    Big Data DSS
    Data Warehouse

    Support sequential read-ahead operations and “hot-band” data caching in a cost-effective way using SSHD vs. slower similar capacity size HDDs for Data warehouse, DSS and other analytic environments.

    Email, Text and Voice Messaging

    Microsoft Exchange and other email journals, mailbox or object repositories can leverage faster read and write I/Os with more space capacity.

    OLTP, Database
     Key Value Stores SQL and NoSQL

    Eliminate the need to short stroke HDDs to gain performance, offering more space capacity and IOPS performance per device for tables, logs, journals, import/export and scratch or temporary ephemeral storage. Leverage random and sequential read acceleration to complement server-side SSD-based read and write-thru caching. Utilize fast magnetic media for persistent data, reducing wear and tear on more costly flash SSD storage devices.

    Server Virtualization

    Fast disk storage for data stores and virtual disks supporting VMware vSphere/ESXi, Microsoft Hyper-V, KVM, Xen and others, holding virtual machines such as VMware VMDKs along with Hyper-V and other hypervisor virtual disks. Complement virtual server read cache and I/O optimization using SSD as a cache with writes going to fast SSHD. For example, VMware vSphere 5.5 Virtual SAN host disk groups use SSD as a read cache and can use SSHD as the magnetic disk for storing data, boosting performance without breaking the budget or adding complexity (see the command line sketch following Table 1 below).

    Speaking of virtual, as mentioned the various proof points were run using Windows systems that were VMware guests, with the SSHD and other devices being Raw Device Mapped (RDM) SAS and SATA attached; read how to do that here.
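
    For those who want the short version of that RDM step, a minimal sketch follows; the device identifier and datastore path are placeholders, and the linked post has the full walk-through:

        # Sketch only: device identifier and paths are placeholders.
        # Find the naa identifier of the local SAS or SATA device to map:
        ls /vmfs/devices/disks/
        # Create a physical (pass-thru) mode RDM pointer file on an existing datastore,
        # then attach the resulting .vmdk to the guest as an existing disk:
        vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/datastore1/rdms/sshd_rdm.vmdk
        # (use -r instead of -z for a virtual compatibility mode RDM)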

    Hint: If you know about the VMware trick for making an HDD look like an SSD to vSphere/ESXi (refer to here and here), think outside the virtual box for a moment on some things you could do with SSHD in a VSAN environment among other things; for now, just sayin ;).
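
    For reference, the trick being hinted at is a vSphere/ESXi 5.x SATP claim rule that tags a device so the host treats it as an SSD; a hedged sketch (the device identifier below is a placeholder) looks roughly like this:

        # Sketch only: the device identifier is a placeholder for your actual device.
        esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device=naa.xxxxxxxxxxxxxxxx --option="enable_ssd"
        # Reclaim the device so the rule takes effect, then check the Is SSD flag:
        esxcli storage core claiming reclaim -d naa.xxxxxxxxxxxxxxxx
        esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx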

    Virtual Desktop Infrastructure (VDI)

    SSHD can be used as high performance magnetic disk for storing linked clone images, applications and data. Leverage fast reads to support read ahead or pre-fetch to complement SSD based read cache solutions. Utilize fast writes to quickly store data, enabling SSD-based read or write-thru cache solutions to be more effective. Reduce the impact of boot, shutdown, virus scan or maintenance storms while providing more space capacity.

    Table 1 Example application and workload scenarios benefiting from SSHDs
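
    Following up on the Virtual SAN example in the Server Virtualization row above, here is a hypothetical ESXi 5.5 command line sketch for claiming an SSD plus an SSHD into a disk group; the naa identifiers are placeholders and the host is assumed to already be part of a Virtual SAN enabled cluster:

        # Sketch only: substitute your own device identifiers; host assumed to be
        # in a Virtual SAN enabled cluster already.
        esxcli vsan storage add --ssd naa.5000xxxxxxxxxxx1 --disks naa.5000xxxxxxxxxxx2
        # Verify the resulting disk group membership:
        esxcli vsan storage list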

    Test drive application proof points

    Various workloads were run using the Seagate Enterprise Turbo SSHD in the StorageIO lab environment across different real-world-like application workload scenarios. These include general storage I/O performance characteristics profiling (e.g. reads, writes, random, sequential and various I/O sizes) to understand how these devices compare to other HDD, HHDD and SSD storage devices in terms of IOPS, bandwidth and response time (latency). In addition to basic storage I/O profiling, the Enterprise Turbo SSHD was also used with various SQL database workloads including Transaction Processing Performance Council (TPC) workloads, along with VMware server virtualization among other use case scenarios.

    Note that in the following workload proof points a single drive was used, meaning that using more drives in a server or storage system should yield better performance. This also means scaling would be bound by the constraints of a given configuration, server or storage system. These were also conducted using 6Gbps SAS with PCIe Gen 2 based servers, and ongoing testing is confirming even better results with 12Gbps SAS and faster servers with PCIe Gen 3.

    SSHD large file storage i/o
    Copy (read and write) 80GB and 220GB file copies (time to copy entire file)

    SSHD storage I/O TPCB Database performance
    SQLserver TPC-B batch database updates

    Test configuration: 600GB 2.5” Enterprise Turbo SSHD (ST600MX) 6 Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6 Gbps SAS, 500GB 3.5” 7.2K RPM HDD 3 Gbps SATA, 1TB 3.5” 7.2K RPM HDD 3 Gbps SATA. Workload generator and virtual clients ran on Windows 7 Ultimate. Microsoft SQL Server 2012 database was on Windows 7 Ultimate SP1 (64 bit), 14 GB DRAM, Dual CPU (Intel X3490 2.93 GHz), with LSI 9211 6Gbps SAS adapters running TPC-B (www.tpc.org) workloads. VM resided on a separate data store from devices being tested. All devices being tested with SQL MDF were Raw Device Mapped (RDM) independent persistent, with the database log file (LDF) on a separate SSD device also persistent (no delayed writes). Tests were performed in StorageIO Lab facilities by StorageIO personnel.
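
    For context on what a TPC-B style batch update actually exercises: each transaction debits or credits an account and rolls the same delta up to its teller and branch while appending a history row. A simplified sketch (Python with pyodbc against SQL Server; table and column names follow the classic TPC-B layout and are illustrative, not the exact harness used here):

        # Simplified TPC-B style transaction; schema names are illustrative only.
        import random
        import pyodbc

        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=localhost;"
                              "DATABASE=tpcb;Trusted_Connection=yes")
        cur = conn.cursor()

        def tpcb_transaction(account_id, teller_id, branch_id):
            delta = random.randint(-5000, 5000)   # random debit or credit amount
            cur.execute("UPDATE accounts SET balance = balance + ? WHERE aid = ?",
                        delta, account_id)
            cur.execute("UPDATE tellers SET balance = balance + ? WHERE tid = ?",
                        delta, teller_id)
            cur.execute("UPDATE branches SET balance = balance + ? WHERE bid = ?",
                        delta, branch_id)
            cur.execute("INSERT INTO history (aid, tid, bid, delta) VALUES (?, ?, ?, ?)",
                        account_id, teller_id, branch_id, delta)
            conn.commit()   # each transaction commits its small batch of updates

        tpcb_transaction(account_id=42, teller_id=7, branch_id=1)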

    SSHD storage I/O TPCE Database performance
    SQLserver TPC-E transactional workload

    Test configuration: 600GB 2.5” Enterprise Turbo SSHD (ST600MX) 6 Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6 Gbps SAS, 300GB 2.5” Savio 10K RPM HDD 6 Gbps SAS, 1TB 3.5” 7.2K RPM HDD 6 Gbps SATA. Workload generator and virtual clients ran on Windows 7 Ultimate. Microsoft SQL Server 2012 database was on Windows 7 Ultimate SP1 (64 bit), 14 GB DRAM, Dual CPU (E8400 2.99GHz), with LSI 9211 6Gbps SAS adapters running TPC-E (www.tpc.org) workloads. VM resided on a separate SSD based data store from devices being tested (e.g., where MDF resided). All devices being tested were Raw Device Mapped (RDM) independent persistent, with the database log file on a separate SSD device also persistent (no delayed writes). Tests were performed in StorageIO Lab facilities by StorageIO personnel.

    SSHD storage I/O Exchange performance
    Microsoft Exchange workload

    Test configuration: 2.5” Seagate 600 Pro 120GB (ST120FP0021) SSD 6 Gbps SATA, 600GB 2.5” Enterprise Turbo SSHD (ST600MX) 6 Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6 Gbps SAS, 2.5” Savio 146GB HDD 6 Gbps SAS, 3.5” Barracuda 500GB 7.2K RPM HDD 3 Gbps SATA. Email server hosted as a guest on VMware vSphere/ESXi V5.5, Microsoft Small Business Server (SBS) 2011 Service Pack 1 64 bit, 8GB DRAM, One CPU (Intel X3490 2.93 GHz), LSI 9211 6 Gbps SAS adapter, JetStress 2010 (no other active workload during test intervals). All devices being tested were Raw Device Mapped (RDM) where the EDB resided. VM resided on a separate SSD based data store from the devices being tested. Log file IOPs were handled via a separate SSD device.

    Read more about the above proof points, along with data points and configuration information, in the associated white paper found here (no registration required).

    What this all means

    Similar to flash-based SSD technologies, the question is not if, but rather when, where, why and how to deploy hybrid solutions such as SSHDs. If your applications and data infrastructure environments have the need for storage I/O speed without losing space capacity or breaking your budget, SSD enabled devices like the Seagate Enterprise Turbo 600GB SSHD are in your future. You can learn more about enterprise class SSHDs such as those from Seagate by visiting this link here.

    Watch for extra workload proof points being performed including with 12Gbps SAS and faster servers using PCIe Gen 3.

    Ok, nuff said.

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Lenovo TS140 Server and Storage I/O Review

    Storage I/O trends

    Lenovo TS140 Server and Storage I/O Review

    This is a review that looks at my recent hands-on experiences using a TS140 (Model MT-M 70A4 – 001RUS) pedestal (aka tower) server that the Lenovo folks sent to me to use for a month or so. The TS140 is one of the servers that Lenovo had prior to its acquisition of IBM's x86 server business, which you can read about here.

    The Lenovo TS140 Experience

    Let's start with the overall experience, which was very easy and good. This included answering some initial questions to get the process moving, and agreeing to keep the equipment safe, secure and insured as well as not damaging anything (this was not a tear down and rip it apart into pieces trial).

    Part of the process also involved answering some configuration related questions, and shortly thereafter a large box from Lenovo arrived. Turns out it was a box (server hardware) inside of a Lenovo box, which was inside a slightly larger unmarked shipping box (see the larger box in the background).

    TS140 Evaluation Arrives

    TS140 shipment undergoing initial security screen scan and sniff (all was ok)

    TS140 with Windows 2012
    TS140 with Keyboard and Mouse (Monitor not included)

    One of the reasons I have a photo of the TS140 on a desk is that I initially put it in an office environment as Lenovo claimed it would be quiet enough to do so. I was not surprised and indeed the TS140 is quiet enough to be used where you would normally find a workstation or mini-tower. By being so quiet the TS140 is a good fit for environments that need a small or starter server that has to go into an office environment as opposed to a server or networking room. For those who are into mounting servers, there is the option for placing the TS140 on its side into a cabinet or rack.

    Windows 2012 on TS140
    TS140 with Windows Server 2012 Essentials

    TS140 as tested

    TS140 Selfie of whats inside
    TS140 "Selfie" with 4 x 4GB DDR3 DIMM (16GB) and PCIe slots (empty)

    16GB RAM (4 x 4GB DDR3 UDIMM, larger DIMMs are supported)
    Windows Server 2012 Essentials
    Intel Xeon E3-1225 v3 @ 3.2 GHz quad core (C226 chipset and TPM 1.2), vPro/VT/EP capable
    Intel GbE 1217-LM Network connection
    280 watt power supply
    Keyboard and mouse (no monitor)
    Two 7.2K SATA HDDs (WD) configured as RAID 1 (100GB LUN)
    Slot 1 PCIe G3 x16
    Slot 2 PCIe G2 x1
    Slot 3 PCIe G2 x16 (x4 electrical signal)
    Slot 4 PCI (legacy)
    Onboard 6Gbps SATA RAID 0/1/10/5
    Onboard SATA 3.0 (6Gbps) connectors (0-4), USB 3.0 and USB 2.0

    Read more about what I did with the Lenovo TS140 in part II of my review along with what I liked, did not like and general comments here.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Part II: What I did with Lenovo TS140 in my Server and Storage I/O Review

    Storage I/O trends

    Part II: Lenovo TS140 Server and Storage I/O Review


    This is the second of a two-part post series on my recent experiences with a Lenovo TS140 Server, you can read part I here.

    What Did I do with the TS140

    After initial check out in an office type environment, I moved the TS140 into the lab area where it joined other servers to be used for various things.

    Some of those activities included using Windows Server 2012 Essentials along with associated admin activities. I also installed VMware ESXi 5.5 and ran into a few surprises. One of those was that I needed to apply an update to the VMware drivers to support the onboard Intel NIC, as well as enable the VT and EP virtualization assist modes via the BIOS. The biggest surprise was discovering that I could not install VMware onto an internal drive attached via one of the internal SATA ports, which turns out to be a BIOS firmware issue.

    Lenovo confirmed this when I brought it to their attention, and the workaround is to install VMware onto a USB flash thumb drive or other USB attached drive, or to use external storage via an adapter. As of this time Lenovo is aware of the VMware issue; however, no date for a new BIOS or firmware is available. Speaking of BIOS, I did notice that there was newer BIOS and firmware available (FBKT70AUS, December 2013) than what was installed (FB48A, August 2013). So I went ahead and did this upgrade, which was a smooth, quick and easy process. The process included going to the Lenovo site (see resource links below), selecting the applicable download, and then installing it following the directions.
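
    For the Intel NIC driver update mentioned above, the general pattern on ESXi is to copy the driver bundle to the host and install it with esxcli; a rough sketch follows (the bundle name and datastore path are placeholders for whatever package matches your NIC and ESXi build):

        # Sketch only: bundle name and path are placeholders.
        esxcli software vib install -d /vmfs/volumes/datastore1/net-driver-offline-bundle.zip
        # or, for a single vib file:
        # esxcli software vib install -v /vmfs/volumes/datastore1/net-driver.vib
        # Reboot the host, then confirm the NIC is visible:
        esxcli network nic list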

    Since I was going to install various PCIe SAS adapters into the TS140 attached to external SAS and SATA storage, this was not a big issue, more of an inconvenience. Likewise, for storage mounted internally, the workaround is to use a SAS or SATA adapter with internal ports (or a cable). Speaking of USB workarounds, if you have an HDD, HHDD, SSHD or SSD that is a SATA device and need to attach it to USB, then get one of these cables. Note that there are USB 3.0 and USB 2.0 cables (see below) available, so choose wisely.

    USB to SATA adapter cable

    In addition to running various VMware-based workloads with different guest VMs, I also ran Futuremark PCMark (btw, if you do not have this in your server storage I/O toolbox it should be) to gauge the system's performance. As mentioned, the TS140 is quiet. However, it also has good performance depending on what processor you select. Note that while the TS140 has a list price as of the time of this post under $400 USD, that will change depending on which processor, amount of memory, software and other options you choose.

    Futuremark PCMark
    PCmark

    PCMark test – Result
    Composite score – 2274
    Compute – 11530
    System Storage – 2429
    Secondary Storage – 2428
    Productivity – 1682
    Lightweight – 2137

    PCmark results are shown above for the Windows Server 2012 system (non-virtualized) configured as shipped and received from Lenovo.

    What I liked

    Unbelievably quiet, which may not seem like a big deal; however, if you are looking to deploy a server or system into a small office workspace, this becomes an important consideration. Otoh, if you are a power user and want a robust server that can be installed into a home media entertainment system, well, this might be a nice to have consideration ;).

    Something else that I liked is that the TS140 with the E3-1200 v3 family of processors supports PCIe G3 adapters, which are useful if you are going to be using 10GbE cards or 12Gbps SAS and faster cards to move lots of data, support more IOPS or reduce response time latency.

    In addition, while only 4 DIMM slots is not very much, it's more than what some other similarly focused systems have, plus with large capacity DIMMs you can still get a nice system, or two, or three or four for a cluster at a good price or value (Hmm, VSAN anybody?). Also, while not a big item, the TS140 did not require ordering an HDD or SSD if you are not also ordering software, meaning you can get the system diskless and use your own drives.

    Speaking of I/O slots, naturally I'm interested in server storage I/O, so having multiple slots is a must have, along with a quad core processor (pretty much standard these days) that supports VT and EP for VMware (these were disabled in the BIOS; however, that was an easy fix).

    Then there is the price, as of this posting starting at $379 USD for a bare bones system (e.g. minimal memory, basic processor, no software), whose price increases as you add more items. What I like about this price point is that you get the PCIe G3 slot as well as other PCIe G2 slots for expansion, meaning I can install 12Gbps (or 6Gbps) SAS storage I/O adapters, or other PCIe cards including SSD, RAID, 10GbE CNA or other cards to meet various needs including software defined storage.

    What I did not like

    I would like to have had at least six vs. four DIMM slots; however, keeping in mind the price point of where this system is positioned, not to mention what you could do with it thinking outside of the box, I'm fine with only 4 x DIMM slots. Space for more internal storage would be nice; however, if that is what you need, then there are the larger Lenovo models to look at. By the way, thinking outside of the box, could you do something like a Hadoop, OpenStack, object storage, VMware VSAN or other cluster with these in addition to using one as a Windows server?

    Yup.

    Granted you won’t have as much internal storage, as the TS140 only has two fixed drive slots (for more storage there is the model TD340 among others).

    However, it is not that difficult to add more (not Lenovo endorsed) by adding a StarTech enclosure like I did with my other systems (see here). Oh, and those extra PCIe slots: that's where a 12Gbps (or 6Gbps) adapter comes into play while leaving room for GbE cards and PCIe SSD cards. Btw, not sure what to do with that PCIe x1 slot? That's a good place for a dual GbE NIC to add more networking ports, or a SATA adapter for attaching larger capacity, slower drives.

    StarTech 2.5" SAS and SATA drive enclosure on Amazon.com
    StarTech 2.5″ SAS SATA drive enclosure via Amazon.com

    If VMware is not a requirement and you need a good entry level server for a large SOHO or small SMB environment, or if you are looking to add a flexible server to a lab or for other things, the TS140 is good (see disclosure below) and quiet.

    Otoh as mentioned, there is a current issue with the BIOS/firmware with the TS140 involving VMware (tried ESXi 5 & 5.5).

    However, I did find a workaround: the current TS140 BIOS/firmware does work with VMware if you install onto a USB drive and then use external SAS, SATA or other accessible storage, which is how I ended up using it.

    Lenovo TS140 resources include

  • TS140 Lenovo ordering website
  • TS140 Data and Spec Sheet (PDF here)
  • Lenovo ThinkServer TS140 Manual (PDF here)
  • Intel E3-1200 v3 processors capabilities (Web page here)
  • Lenovo Drivers and Software (Web page here)
  • Lenovo BIOS and Drivers (Web page here)
  • Enabling Virtualization Technology (VT) in TS140 BIOS (Press F1) (Read here)
  • Enabling Intel NIC (82579LM) GbE with VMware (Link to user forum and a blog site here)
  • My experience from a couple years ago dealing with Lenovo support for a laptop issue

    Summary

    Disclosure: Lenovo loaned the TS140 to me for just under two months, including covering shipping costs, at no charge (to them or to me); hence this is not a sponsored post or review. On the other hand, I have placed an order for a new TS140, similar to the one tested, that I bought online from Lenovo.

    This new TS140 server that I bought joins the Dell Inspiron I added late last year (read more about that here) as well as other HP and Dell systems.

    Overall I give the Lenovo TS140 a provisional "A", which would be a solid "A" once the BIOS/firmware issue mentioned above is resolved for VMware. Otoh, if you are not concerned about using the TS140 for VMware (or can do a workaround), then consider it an "A".

    As mentioned above, I liked it so much I actually bought one to add to my collection.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, vSAN and VMware vExpert. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved