Server StorageIO November 2015 Update Newsletter


Server and StorageIO Update Newsletter

Volume 15, Issue XI – November 2015

Hello and welcome to this November 2015 Server StorageIO update newsletter. Winter has arrived here in the northern hemisphere, although technically it is still fall until the winter solstice in December. Regardless of whether it is summer or winter in your hemisphere, 2015 is about to wrap up, which means end of year (EOY) activities.

EOY activities can mean final shopping or acquisitions for technology and services, or simply for home and fun. This is also the time of year when predictions for 2016 start streaming out and reflections looking back at 2015 appear (let's save those for December ;). Another EOY activity is planning for 2016 as well as getting items ready for roll-out or launch in the new year. Needless to say there is a lot going on, so with that, enjoy this edition of the Server StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts along with in the news commentary appearing soon.

Cheers GS

In This Issue

  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Events and Webinars
  • Recommended Reading List
  • Resources and Links
    StorageIOblog Posts

    Recent and popular Server StorageIOblog posts include:

    View other recent as well as past blog posts here

    StorageIO Commentary in the news

    StorageIO news (image licensed for use from Shutterstock by StorageIO)
    Recent Server StorageIO commentary and industry trends perspectives about news, activities, tips, and announcements.

    • TheFibreChannel.com: Industry Analyst Interview: Greg Schulz, StorageIO
    • EnterpriseStorageForum: Comments Handling Virtual Storage Challenges
    • PowerMore (Dell): Q&A: When to implement ultra-dense storage

    View more Server, Storage and I/O hardware as well as software trends comments here

     

    StorageIO Tips and Articles

    Recent Server StorageIO articles appearing in different venues include:

    • Virtual Blocks (VMware Blogs):  EVO:RAIL Part II – Why And When To Use It?
      This is the second of a multi-part series looking at Converged Infrastructures (CI), Hyper-Converged Infrastructures (HCI), Cluster in Box (CiB) and other unified solution bundles. There is a growing industry conversation around CI, HCI, CiB and other bundled solutions, along with growing IT customer adoption and deployment. Different sized organizations are looking at various types of CI solutions to meet various application and workload needs. Read more here and part I here.
    • TheFibreChannel.com:  Industry Analyst Interview: Greg Schulz, StorageIO
      In part one of a two part article series, Frank Berry, storage industry analyst and Founder of IT Brand Pulse and editor of TheFibreChannel.com, recently spoke with StorageIO Founder Greg Schulz about Fibre Channel SAN integration with OpenStack, why Rackspace is using Fibre Channel and more. Read more here
    • CloudComputingAdmin.com:  Cloud Storage Decision Making – Using Microsoft Azure for cloud storage
      Let's say that you have been tasked with, or decided that it is time to use (or try), public cloud storage such as Microsoft Azure. Ok, now what do you do and what decisions need to be made? Keep in mind that Microsoft Azure, like many other popular public clouds, provides many different services available for a fee (subscription) along with free trials. These services include applications, compute, networking, storage along with development and management platform tools. Read more here.

    Check out these resources and links covering technology, techniques, trends as well as tools. View more tips and articles here

    StorageIO Videos and Podcasts

    StorageIO podcasts are also available via and at StorageIO.tv

    StorageIO Webinars and Industry Events

    Deltaware Emerging Technology Summit November 10, 2015

    Dell Data Protection Summit Nov 4, 2015 7AM PT

    Microsoft MVP Summit Nov 2-5, 2015

    See more webinars and other activities on the Server StorageIO Events page here.

    Server StorageIO Recommended Reading List

    The following are various recommended reading including books, blogs and videos. If you have not done so recently, also check out the Intel Recommended Reading List (here) where you will also find a couple of my books.

    In case you had not heard, Microsoft recently released the bits (e.g. software download) for Windows Server 2016 Technical Preview 4 (TP4). TP4 is the successor to Technical Preview 3 (TP3), which was released this past August, and is the most recent public preview version of the next Windows Server. TP4 adds a new tiering capability where Windows and Storage Spaces can cache and migrate data between Hard Disk Drives (HDD) and Non-Volatile Memory (NVM) including flash SSD. The new tiering feature supports a mix of HDD and NVM flash SSD (including NVM Express, aka NVMe), as well as an all-NVM scenario. Yes, that is correct, tiering with all NVM is not a typo; instead it enables using lower latency, faster NVM along with lower cost, higher capacity flash SSD. Learn more about what's in TP4 from a server and storage I/O perspective in this Microsoft post, as well as more about Storage Spaces Direct (S2D) in this Microsoft Technet post here and here. You can get the Windows Server 2016 TP4 bits here, which are already running in the Server StorageIO lab.

    Server StorageIO Industry Resources and Links

    Check out these useful links and pages:

    storageio.com/links
    objectstoragecenter.com
    storageioblog.com/data-protection-diaries-main/
    storageperformance.us
    thenvmeplace.com
    thessdplace.com
    storageio.com/raid
    storageio.com/ssd

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Non Volatile Memory (NVM), NVMe, Flash Memory Summit and SSD updates

    Storage I/O trends


    I attended the Flash Memory Summit in Santa Clara CA last week and, not surprisingly, there were many announcements about Non-Volatile Memory (NVM) along with related enabling technologies. Some of these announcements were component based, intended for original equipment manufacturers (OEMs) ranging from startups to established vendors, systems integrators (SIs) and value added resellers (VARs), while others were more customer solution focused. From a customer solution focus, some of the technologies were consumer oriented, while others were for business and some for cloud scale service providers.

    Recent NVM, NVMe and Flash SSD news

    A sampling of some recent NVM, NVMe and Flash related news includes among others:

    • PMC Announces Flashtec NVMe SSD NVMe2106, NVMe2032 Controllers (Via TomsITpro)
    • New SATA SSD powers elastic cloud agility for CSPs (Via Cbronline)
    • Toshiba Solid-State Drive Family Features PCIe Technology (Via Eweek)
    • SanDisk aims CloudSpeed Ultra SSD at cloud providers (Via ITwire)
    • Everspin & Aupera show all-MRAM Storage Module in M.2 Form Factor (Via BusinessWire)
    • Intel and Micron unveil new 3D XPoint Non Volatile Memory (NVM) for servers and storage (part I, part II and part III)
    • PMC-Sierra Scales Storage with PCIe, NVMe (Via EEtimes)
    • Seagate Grows Its Nytro Enterprise Flash Storage Line (Via InfoStor)
    • New SAS Solid State Drive First Product From Seagate Micron Alliance (Via Seagate)
    • Wow, Samsung’s New 16 Terabyte SSD Is the World’s Largest Hard Drive (Via Gizmodo)
    • Samsung ups the SSD ante with faster, higher capacity drives (Via ITworld)

    NVMe primer

    Via Intel: Click the above image to view the history of memory on the Intel site

    NVM includes technologies such as NAND flash commonly used in Solid State Device (SSD) storage today, as well as in USB thumb drives, mobile and hand-held devices among many other uses. NVM spans servers, storage, I/O devices along with mobile and handheld among many other technologies. In addition to NAND flash, other forms of NVM include Non Volatile Random Access Memory (NVRAM) and Read Only Memory (ROM), along with some emerging new technologies including the recently announced Intel and Micron 3D XPoint among others.

    Server Storage I/O access and NVM
    Server Storage I/O memory (and storage) hierarchy

    Keep in mind that memory is storage and storage is persistent memory, and that there are different classes, categories and tiers of memory and storage as shown above to meet various performance, availability, capacity and economic requirements. Besides NVM ranging from flash to NVRAM to emerging 3D XPoint among others, another popular topic that is gaining momentum is NVM Express (NVMe). NVMe (more material here at www.thenvmeplace.com) is a new server storage I/O access method and protocol for fast access to NVM based products. NVMe is an alternative to existing block based server storage I/O access protocols such as AHCI/SATA and SCSI/SAS commonly used for accessing Hard Disk Drives (HDD) along with SSDs among other devices.

    Server Storage I/O NVMe PCIe SAS SATA AHCI
    Comparing AHCI/SATA, SCSI/SAS and NVMe all of which can coexist to address different needs.

    Leveraging the common PCIe hardware interface, NVMe based devices (that have an NVMe controller) can be accessed via various operating systems (and hypervisors such as VMware ESXi) with either in-box drivers or optional third-party device drivers. Devices that support NVMe can be packaged in a 2.5" drive form factor using a converged 8637/8639 connector (e.g. PCIe x4), coexisting with SAS and SATA devices, as well as being PCIe add-in cards (AIC) supporting x4, x8 and other implementations. Initially NVMe is being positioned as a back-end interface for servers (or storage systems) to access fast flash and other NVM based devices.

    NVMe as back-end storage
    NVMe as a "back-end" I/O interface in a server or storage system accessing NVM storage/media devices

    NVMe as front-end server storage I/O interface
    NVMe as a “front-end” interface for servers (or storage systems/appliances) to use NVMe based storage systems

    NVMe has also been shown to work over low latency, high-speed RDMA based network interfaces including RoCE (RDMA over Converged Ethernet) and InfiniBand (read more here, here and here involving Mangstor, Mellanox and PMC among others). What this means is that, like SCSI based SAS, which can be both a back-end drive (HDD, SSD, etc.) access protocol and interface, NVMe can be used not only on the back-end but also as a front-end server to storage interface, similar to how Fibre Channel SCSI_Protocol (aka FCP), SCSI based iSCSI and SCSI RDMA Protocol via InfiniBand (among others) are used.

    Shared external PCIe using NVMe
    NVMe and shared PCIe

    NVMe features

    Main features of NVMe include among others:

    • Lower latency due to improved drivers and increased queues (and queue sizes)
    • Lower CPU usage to handle larger numbers of I/Os (more CPU available for useful work)
    • Higher I/O activity rates (IOPs) to boost productivity and unlock the value of fast flash and NVM
    • Bandwidth improvements leveraging fast PCIe interfaces and available lanes
    • Dual-pathing of devices, similar to what is available with dual-path SAS devices
    • Unlocking the value of more cores per processor socket and software threads (productivity)
    • Various packaging options, deployment scenarios and configuration options
    • Appears as a standard storage device on most operating systems
    • Plug-and-play with in-box drivers on many popular operating systems and hypervisors

    Watch for more about NVMe as it continues to gain both industry and customer adoption and deployment.

    Where to read, watch and learn more

    • NVMe: The Golden Ticket for Faster Flash Storage? (Via EnterpriseStorageForum)
    • What should I consider when using SSD cloud? (Via SearchCloudStorage)
    • MSP CMG, September 2014 Presentation (Flash back to reality – Myths and Realities Flash and SSD Industry trends perspectives plus benchmarking tips) – PDF
    • Selecting Storage: Start With Requirements (Via NetworkComputing)
    • Spot The Newest & Best Server Trends (Via Processor)
    • Intel and Micron unveil new 3D XPoint Non Volatile Memory (NVM) for servers and storage (part I, part II and part III)
    • Market ripe for embedded flash storage as prices drop (Via Powermore (Dell))
    • Continue reading more about NVM, NVMe, NAND flash, SSD Server and storage I/O related topics at www.thessdplace.com as well as about I/O performance, monitoring and benchmarking tools at www.storageperformance.us.

    Storage I/O trends

    What this all means and wrap up

    The question is not if NVM is in your future, it is! Instead the questions are what type of NVM (including NAND flash among other mediums) will be deployed where, using what type of packaging or solutions (drives, cards, systems, appliances, cloud), for what role (as storage, primary memory, persistent cache) and how much, among others. For some environments the solution is, or will be, All NVM Arrays (ANA), All Flash Arrays (AFA) or All SSD Arrays (ASA), while for others the home run will be hybrid based solutions that work for you, fitting in and adapting to your environment as it changes.

    Also keep in mind that a little bit of fast memory, including NVM based flash among others, in the right place can have a big benefit. My experience using flash enabled NVMe devices on Windows and Linux systems is that you can see lower response times at higher IOPs, along with lower CPU consumption, particularly when compared to 6Gbps SATA. Likewise bandwidth can easily be pushed to the limits of the NVMe device as well as the PCIe interface being used, such as x4 or x8, depending on the implementation. That is also a warning and something to watch out for when comparing apples to oranges: while NVMe uses PCIe, understand when looking at different results whether those are for x4, x8 or faster PCIe, as the mere presence of PCIe does not mean you are running at full potential.

    Keep an eye on NVMe as a new high-speed, low-latency server storage I/O access protocol for unlocking the full performance capabilities of fast NVM based storage as well as leveraging the multiple cores in today's fast processors. Does this mean AHCI/SATA or SCSI/SAS are now dead? Some will claim that, however at least near-term, for the next few years (if not longer), those interfaces will continue to be used where they make sense, as well as where they can save dollars, specifically for cost sensitive, high-capacity environments that do not need the full performance of NVMe just yet.

    As for the Flash Memory Summit event in Santa Clara, that was a good day with time well spent in briefings, meetings, demos and ad hoc discussions on the expo floor.

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    VMware vCloud Air Server StorageIOlab Test Drive with videos

    Server Storage I/O trends


    Recently I was invited by VMware vCloud Air to do a free hands-on test drive of their actual production environment. Some of you may already be using VMware vSphere, vRealize and other software defined data center (SDDC), aka Virtual Server Infrastructure (VSI) or Virtual Desktop Infrastructure (VDI), tools among others. Likewise some of you may already be using one of the many cloud compute or Infrastructure as a Service (IaaS) offerings such as Amazon Web Services (AWS) Elastic Compute Cloud (EC2), Centurylink, Google Cloud, IBM Softlayer, Microsoft Azure, Rackspace or Virtustream (being bought by EMC) among many others.

    VMware vCloud Air provides a platform similar to those just mentioned among others for your applications and their underlying resource needs (compute, memory, storage, networking) to be fulfilled. In addition, it should not be a surprise that VMware vCloud Air shares many common themes, philosophies and user experiences with the traditional on-premises based VMware solutions you may be familiar with.

    VMware vCloud Air overview

    You can give VMware vCloud Air a trial for free while the offer lasts by clicking here (service details here). Basically, if you click on the link and register a new account for using VMware vCloud Air, they will give you up to $500 USD in service credits to use in the real production environment while the offer lasts, which iirc is through the end of June 2015.

    Server StorageIO test drive VMware vCloud Air video I
    Click on above image to view video part I

    Server StorageIO test drive VMware vCloud Air part II
    Click on above image to view video part II

    What this means is that you can go and set up some servers with as many CPUs or cores, memory, Hard Disk Drive (HDD) or flash Solid State Device (SSD) storage, and external IP networks using various operating systems (Centos, Ubuntu, Windows 2008, 2012, 2012 R2) for free, or until you use up the service credits.

    Speaking of which, let me give you a bit of a tip or hint: even though you can get free time, if you provision a fast server with lots of fast SSD storage and leave it sitting idle overnight or over a weekend, you will chew up your free credits rather fast. So the tip, which should be common sense, is that if you are going to do some proof of concepts and then leave things alone for a while, power the virtual cloud servers off to stretch your credits further. On the other hand, if you have something that you want to run on a fast server with fast storage over a weekend or longer, give that a try, just pay attention to your resource usage and possible charges should you exhaust your service credits.

    My Server StorageIO test drive mission objective

    For my test drive, I created a new account by using the above link to get the service credits. Note that you can use your regular VMware account with vCloud Air, however you won't get the free service credits. So while it is a few minutes of extra work, the benefit was worth it vs. simply using my existing VMware account and racking up more cloud services charges on my credit card. As part of this Server StorageIOlab test drive, I created two companion videos (part I here and part II here) that you can view to follow along and get a better idea of how vCloud works.

    VMware vCloud Air overview
    Phase one, create the virtual data center, database server, client servers and first setup

    My goal was to set up a simple Virtual Data Center (VDC) that would consist of five Windows 2012 R2 servers: one would be a MySQL database server, with the other four being client application servers. You can download MySQL from here at Oracle as well as via other sources. For applications, to simplify things I used HammerDB as well as Benchmark Factory, which is part of the Quest Toad tool set for database admins. You can download a free trial copy of Benchmark Factory here, and HammerDB here. Another tool that I used for monitoring the servers is Spotlight on Windows (SoW), which is also free here. Speaking of tools, here is a link to various server and storage I/O performance as well as monitoring tools.

    Links to tools that I used for this test-drive included:

    Setting up a virtual data center vdc
    Phase one steps and activity summary

    Summary of phase one of vdc
    Recap of what was done in phase one, watch the associated video here.

    After the initial setup (e.g. part I video here), the next step was to add some more virtual machines and take a closer look at the environment. Note that most of the work in setting up this environment was Windows, MySQL, HammerDB, Benchmark Factory, Spotlight on Windows along with other common tools, so their installation is not a focus in these videos or this post; perhaps a future post will dig into those in more depth.

    Summary of phase two of the vdc
    What was done during phase II (view the video here)

    VMware vCloud Air VDC test drive

    There is much more to VMware vCloud Air, and on their main site there are many useful links including overviews, how-to tutorials, product and service offering details and much more here. Besides paying attention to your resource usage and avoiding being surprised by service charges, two other tips I can pass along that are also mentioned in the videos (here and here) are to pay attention to which region you set up your virtual data centers in, and to have your network thought out ahead of time to streamline setting up the NAT, firewall and gateway configurations.

    Where to learn more

    Learn more about data protection and related topics, themes, trends, tools and technologies via the following links:

    Server Storage I/O trends

    What this all means and wrap-up

    Overall I like the VMware vCloud Air service which, if you are VMware centric, will be a familiar cloud option including integration with vCloud Director and other tools you may already have in your environment. Even if you are not familiar with VMware vSphere and the associated vRealize tools, the vCloud service is intuitive enough that you can be productive fairly quickly. On one hand vCloud Air does not have the extensive menu of service offerings to choose from such as with AWS, Google, Azure or others, however that also means a simpler menu of options to choose from, which simplifies things.

    I had wanted to spend some time actually using vCloud, and the offer to use some free service credits in the production environment made it worth making the time to actually set up some workloads and do some testing. Even if yours is not a VMware focused environment, I would recommend giving VMware vCloud Air a test drive to see what it can do for you, as opposed to what you can do for it…

    Ok, nuff said for now

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Data Protection Diaries: Are your restores ready for World Backup Day 2015?

    Storage I/O trends


    This is part of an ongoing data protection diaries series of posts about, well, data protection and what I'm doing pertaining to World Backup Day 2015.

    In case you forgot or did not know, World Backup Day is March 31 2015 (@worldbackupday) so now is a good time to be ready. The only challenge that I have with World Backup Day (view their site here), which has gone on for a few years now, is that while it is a good way to call out the importance of backing up or protecting data, it is time to also put more emphasis and focus on being able to make sure those backups or protection copies actually work.

    By this I mean doing more than making sure that your data can be read back from tape, disk, SSD or a cloud service, and actually going a step further to verify that restored data can actually be used (read, written, etc).

    The Problem, Issue, Challenge, Opportunity and Need

    The problem, issue and challenges are simple: are your applications, systems and data protected, and can you use those protection copies (e.g. backups, snapshots, replicas or archives) when as well as where needed?

    storage I/O data protection

    The opportunity is simple, avoiding downtime or impact to your business or organization by being proactive.

    Understanding the challenge and designing a strategy

    The following is my preparation checklist for World Backup Day 2015 (e.g. March 31 2015), which includes what I need or want to protect, as well as some other things to be done including testing, verification, and addressing (remediating or fixing) known issues while identifying other areas for future enhancements. Thus, perhaps like yours, data protection for my environment, which spans physical, virtual and cloud from servers to mobile devices, is constantly evolving.


    My data protection preparation, checklist and to do list

    Finding a solution

    While I already have a strategy, plan and solution that encompass different tools, technologies and techniques, they are also evolving. Part of the evolving is to improve while also exploring options to use new and old things in new ways, as well as to eat my own dog food, or walk the talk vs. talk the talk. The following figure provides a representation of my environment, which spans physical, virtual and clouds (more than one), and how different applications along with systems are protected against various threats or risks. Key is that not all applications and data are the same, thus enabling them to be protected in different ways as well as over various intervals. Needless to say there is more to how, when, where and with what different applications and systems are protected in my environment than shown, perhaps more on that in the future.

    server storageio and unlimitedio data protection

    Some of what my data protection involves for Server StorageIO

    Taking action

    What I'm doing is going through my checklist to verify and confirm the various items, as well as find areas for improvement, which is actually an ongoing process.

    Do I find things that need to be corrected?

    Yup, in fact I found something that, while it was not a problem, identified a way to improve on a process that, once fully implemented, will enable more flexibility both if a restoration is needed and for general everyday use, not to mention remove some complexity and cost.

    Speaking of lessons learned, check this out that ties into why you want 4-3-2-1 based data protection strategies.

    Storage I/O trends

    Where to learn more

    Here are some extra links to have a look at:

    Data Protection Diaries
    Cloud conversations: If focused on cost you might miss other cloud storage benefits
    5 Tips for Factoring Software into Disaster Recovery Plans
    Remote office backup, archiving and disaster recovery for networking pros
    Cloud conversations: Gaining cloud confidence from insights into AWS outages (Part II)
    Given outages, are you concerned with the security of the cloud?
    Data Archiving: Life Beyond Compliance
    My copies were corrupted: The 3-2-1 rule
    Take a 4-3-2-1 approach to backing up data
    Cloud and Virtual Data Storage Networks – Chapter 8 (CRC/Taylor and Francis)

    What this all means and wrap-up

    Be prepared, be proactive when it comes to data protection and business resiliency vs. simply reacting and recovering, hoping that all will be ok (or work).

    Take a few minutes (or longer) and test your data protection including backup to make sure that you can:

    a) Verify that they are in fact working, protecting applications and data in the way expected

    b) Restore data to an alternate place (verify functionality as well as prevent a problem)

    c) Actually use the data, meaning it can be decrypted, inflated (un-compressed, un-deduped) and security certificates along with ownership properties are properly applied

    d) Look at different versions or generations of protection copies if you need to go back further in time

    e) Identify areas of improvement or find and isolate problem issues in advance vs. finding out after the fact

    Time to get back to work checking and verifying things as well as attending to some other items.

    Ok, nuff said, for now…

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Collecting Transaction Per Minute from SQL Server and HammerDB

    Storage I/O trends


    When using benchmark or workload generation tools such as HammerDB, I needed a way to capture and log performance activity metrics such as transactions per minute. For example, using HammerDB to simulate an application making database requests performing various transactions as part of testing an overall system solution including server and storage I/O activity. This post takes a look at the problem or challenge I was looking to address, as well as creating a solution after spending time searching for one (still searching, btw).

    The Problem, Issue, Challenge, Opportunity and Need

    The challenge is to collect application performance metrics such as transactions per minute from a workload using a database. The workload or benchmark tool (in this case HammerDB) is the System Test Initiator (STI) that drives the activity (e.g. database requests) to a System Under Test (SUT). In this example the SUT is a Microsoft SQL Server running on a Windows 2012 R2 server. What I need is to collect, and log into a file for later analysis, the transaction rate per minute while the STI is generating a particular workload.

    Server Storage I/O performance

    Understanding the challenge and designing a strategy

    If you have ever used benchmark or workload generation tools such as Quest Benchmark Factory (part of the Toad tools collection) you might be spoiled with how it can be used to not only generate the workload, but also collect, process, present and even store the results for database workloads such as TPC simulations. In this situation, Transaction Processing Performance Council (TPC) like workloads need to be run and metrics on performance collected. Let's leave Benchmark Factory for a future discussion and focus instead on a free tool called HammerDB and, more specifically, how to collect transactions per minute metrics from Microsoft SQL Server. While the focus is SQL Server, you can easily adapt the approach for MySQL among others, not to mention there are other tools such as Sysbench and Aerospike among others.

    The following image (created using my Livescribe Echo digital pen) outlines the problem, as well as sketching out a possible solution design. In the following figure, for my solution I'm going to show how to grab, every minute for a given amount of time, the count of transactions that have occurred. Later, in post-processing (you could also do this in the SQL script), I take the new transaction count (which is cumulative) and subtract the earlier interval, which yields the transactions per minute (see examples later in this post).

    collect TPM metrics from SQL Server with hammerdb
    The problem and challenge, a way to collect Transactions Per Minute (TPM)

    Finding a solution

    HammerDB displays results via its GUI, and perhaps there is a way or some trick to get it to log results to a file or some other means, however after searching the web I found that it was quicker to come up with a solution. That solution was to decide how to collect and report the transactions per minute (or you could do it by second or another interval) from Microsoft SQL Server. The approach was to find what performance counters and metrics are available from SQL Server, how to collect those, and how to log them to a file for processing. What this means is that a SQL Server script file would need to be created that runs in a loop, collecting for a given amount of time at a specified interval, for example once a minute for several hours.

    Taking action

    The following is a script that I came up with that is far from optimal, however it gets the job done and is a starting point for adding more capabilities or optimizations.

    In the following example, set loopcount to the number of minutes to collect samples for. Note however that if you are running a workload test for eight (8) hours with a 30 minute ramp-up time, you would want to use a loopcount (e.g. number of minutes to collect for) of 480 + 30 + 10. The extra 10 minutes is to allow for some samples before the ramp and start of the workload, as well as to give a pronounced end-of-test set of samples. Add or subtract however many minutes to collect for as needed, however keep this in mind: it is better to collect a few extra minutes vs. not have them and wish you did.

    -- Note and disclaimer:
    -- 
    -- Use of this code sample is at your own risk with Server StorageIO and UnlimitedIO LLC
    -- assuming no responsibility for its use or consequences. You are free to use this as is
    -- for non-commercial scenarios with no warranty implied. However feel free to enhance and
    -- share those enhancements with others e.g. pay it forward.
    -- 
    DECLARE @cntr_value bigint;
    DECLARE @loopcount bigint; -- how many minutes to take samples for
    
    set @loopcount = 240
    
    SELECT @cntr_value = cntr_value
     FROM sys.dm_os_performance_counters
     WHERE counter_name = 'transactions/sec'
     AND object_name = 'MSSQL$DBIO:Databases'
     AND instance_name = 'tpcc' ; print @cntr_value;
     WAITFOR DELAY '00:00:01'
    -- 
    -- Start loop to collect TPM every minute
    -- 
    
    while @loopcount <> 0
    begin
    SELECT @cntr_value = cntr_value
     FROM sys.dm_os_performance_counters
     WHERE counter_name = 'transactions/sec'
     AND object_name = 'MSSQL$DBIO:Databases'
     AND instance_name = 'tpcc' ; print @cntr_value;
     WAITFOR DELAY '00:01:00'
     set @loopcount = @loopcount - 1
    end
    -- 
    -- All done with loop, write out the last value
    -- 
    SELECT @cntr_value = cntr_value
     FROM sys.dm_os_performance_counters
     WHERE counter_name = 'transactions/sec'
     AND object_name = 'MSSQL$DBIO:Databases'
     AND instance_name = 'tpcc' ; print @cntr_value;
    -- 
    -- End of script
    -- 

    The above example has loopcount set to 240 for a 200 minute test with a 30 minute ramp and 10 extra minutes of samples. I use a couple of those minutes to make sure that the system test initiator (STI), in this case HammerDB, is configured and ready to start executing transactions. You could also put this along with your HammerDB items into a script file for further automation, however I will leave that exercise up to you.

    For those of you familiar with SQL and SQL Server, you probably already see some things to improve or stylize, or you may simply apply your own preferences, which is great, go for it. Also note that I'm only selecting a certain variable from the performance counters as there are many others, which you can easily discover with a couple of SQL commands (e.g. SELECT and specify the database instance and object name). Also note that the key is accessing the items in sys.dm_os_performance_counters of your SQL Server database instance.
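
    As an example of that discovery step, the following is a minimal sketch (using the same 'MSSQL$DBIO:Databases' object name and 'tpcc' instance name as the script above; substitute the names from your own SQL Server instance and database) that lists what counters are available:

    -- List the counters available for a given object and database instance
    SELECT object_name, counter_name, instance_name, cntr_value
     FROM sys.dm_os_performance_counters
     WHERE object_name = 'MSSQL$DBIO:Databases'
     AND instance_name = 'tpcc'
     ORDER BY counter_name;
    -- Or list all of the performance objects available on the instance
    SELECT DISTINCT object_name
     FROM sys.dm_os_performance_counters
     ORDER BY object_name;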

    The results

    The output from the above is a list of cumulative numbers, as shown below, which you will need to post process (or add a calculation to the script, as sketched after the sample output below). Note that part of running the script is specifying an output file, which I show later.

    785
    785
    785
    785
    37142
    1259026
    2453479
    3635138
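
    As mentioned above, the delta calculation could also be added to the script itself instead of post-processing. The following is a minimal sketch of that idea, a variation of the loop above using the same counter, object and instance names; note that the first value printed is the cumulative total to date and can be discarded:

    -- Variation of the above loop that prints the per-interval (per-minute) delta
    DECLARE @cntr_value bigint;
    DECLARE @prev_value bigint;
    DECLARE @loopcount bigint; -- how many minutes to take samples for
    set @prev_value = 0
    set @loopcount = 240
    
    while @loopcount <> 0
    begin
    SELECT @cntr_value = cntr_value
     FROM sys.dm_os_performance_counters
     WHERE counter_name = 'transactions/sec'
     AND object_name = 'MSSQL$DBIO:Databases'
     AND instance_name = 'tpcc' ;
     print @cntr_value - @prev_value; -- transactions counted since the previous sample
     set @prev_value = @cntr_value
     WAITFOR DELAY '00:01:00'
     set @loopcount = @loopcount - 1
    end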
    

    Implementing the solution

    You can set up the above script to run as part of a larger automation shell or batch script, however for simplicity I'm showing it here using Microsoft SQL Server Management Studio.

    SQL Server script to collect TPM
    Microsoft SQL Server Management Studio with script to collect Transaction Per Minute (TPM)

    The following image shows how to specify an output file for the results to be logged to when using Microsoft SQL Server Management Studio to run the TPM collection script.

    Specify SQL Server tpm output file
    Microsoft SQL Server Management Studio specify output file

    With the SQL Server script running to collect results, and the HammerDB workload running to generate activity, the following shows Quest Spotlight on Windows (SoW) displaying Windows Server 2012 R2 operating system level performance including CPU, memory, paging and other activity. Note that in this example the system test initiator (STI), which is HammerDB, and the system under test (SUT), which is Microsoft SQL Server, were on the same server.

    Spotlight on Windows while SQL Server doing tpc
    Quest Spotlight on Windows showing Windows Server performance activity

    Results and post-processing

    As part of post processing, simply use your favorite tool or script, or do what I often do: pull the numbers into an Excel spreadsheet and simply create a new column of numbers that computes and shows the difference between each step (see below). While in Excel I then plot the numbers as needed, which can also be done via a shell script and other plotting tools such as R.

    In the following example, the results are imported into Excel (or your favorite tool or script) where I then add a column (B) that simply computes the difference between the current and earlier counter. For example, in cell B2 = A2-A1, B3 = A3-A2 and so forth for the rest of the numbers in column A. I then plot the numbers in column B to show the transaction rates over time, which can then be used for various things.

    Hammerdb TPM results from SQL Server processed in Excel
    Results processed in Excel and plotted

    Note that the above results might seem too good to be true, and they are; these were cached results intended to show the tools and data collection process as opposed to the real work being done, at least for now…

    Where to learn more

    Here are some extra links to have a look at:

    How to test your HDD, SSD or all flash array (AFA) storage fundamentals
    Server and Storage I/O Benchmarking 101 for Smarties
    Server and Storage I/O Benchmark Tools: Microsoft Diskspd (Part I)
    The SSD Place (collection of flash and SSD resources)
    Server and Storage I/O Benchmarking and Performance Resources
    I/O, I/O how well do you know about good or bad server and storage I/Os?

    What this all means and wrap-up

    There are probably many ways to fine tune and optimize the above script, and likewise there may even be some existing tool, plug-in, add-on module, or configuration setting that allows HammerDB to log the transaction activity rates to a file vs. simply showing them on screen. However for now, this is a workaround that I have found for when needing to collect transaction activity performance data with HammerDB and SQL Server.

    Ok, nuff said, for now…

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Server and Storage I/O Benchmarking 101 for Smarties

    Server Storage I/O Benchmarking 101 for Smarties or dummies ;)

    server storage I/O trends

    This is the first of a series of posts and links to resources on server storage I/O performance and benchmarking (view more and follow-up posts here).

    The best I/O is the I/O that you do not have to do, the second best is the one with the least impact as well as low overhead.

    server storage I/O performance

    Drew Robb (@robbdrew) has a Data Storage Benchmarking Guide article over at Enterprise Storage Forum that provides a good framework and summary quick guide to server storage I/O benchmarking.

    Via Drew:

    Data storage benchmarking can be quite esoteric in that vast complexity awaits anyone attempting to get to the heart of a particular benchmark.

    Case in point: The Storage Networking Industry Association (SNIA) has developed the Emerald benchmark to measure power consumption. This invaluable benchmark has a vast amount of supporting literature. That so much could be written about one benchmark test tells you just how technical a subject this is. And in SNIA’s defense, it is creating a Quick Reference Guide for Emerald (coming soon).

    But rather than getting into the nitty-gritty nuances of the tests, the purpose of this article is to provide a high-level overview of a few basic storage benchmarks, what value they might have and where you can find out more. 

    Read more here including some of my comments, tips and recommendations.

    Drew provides a good summary and overview in his article, which is a great opener for this first post in a series on server storage I/O benchmarking and related resources.

    You can think of this series (along with Drew’s article) as server storage I/O benchmarking fundamentals (e.g. 101) for smarties (e.g. non-dummies ;) ).

    Note that even if you are not a server, storage or I/O expert, you can still be considered a smarty vs. a dummy if you found the need or interest to read as well as learn more about benchmarking, metrics that matter, tools, technology and related topics.

    Server and Storage I/O benchmarking 101

    There are different reasons for benchmarking; for example, you might be asked or want to know how many IOPs per disk, Solid State Device (SSD), device or storage system to expect, such as for a 15K RPM (revolutions per minute) 146GB SAS Hard Disk Drive (HDD). Sure, you can go to a manufacturer's website and look at the speeds and feeds (technical performance numbers), however are those metrics applicable to your environment's applications or workloads?
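
    As a rough back-of-the-envelope sketch (assuming nominal drive characteristics rather than any particular vendor's specification): a 15K RPM drive completes 250 revolutions per second, so its average rotational latency is about 0.5 / 250 = 2 ms; add a nominal 3.5 to 4 ms average seek time and a single small random I/O takes roughly 5.5 to 6 ms, which works out to about 1 / 0.006, or somewhere in the range of 165 to 180 IOPs per HDD before any controller caching, RAID or queuing effects. Whether that theoretical number means anything for you still depends on your actual I/O size, read and write mix, and access patterns.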

    You might get higher IOPs with a smaller I/O size on sequential reads vs. random writes, which will also depend on what the HDD is attached to. For example, are you going to attach the HDD to a storage system or appliance with RAID and caching? Are you going to attach the HDD to a PCIe RAID card, or will it be part of a server or storage system? Or are you simply going to put the HDD into a server or workstation and use it as a drive without any RAID or performance acceleration?

    What this all means is understanding what it is that you want to benchmark or test in order to learn what the system, solution, service or specific device can do under different workload conditions.

    Some benchmark and related topics include

    • What are you trying to benchmark
    • Why do you need to benchmark something
    • What are some server storage I/O benchmark tools
    • What is the best benchmark tool
    • What to benchmark, how to use tools
    • What are the metrics that matter
    • What is benchmark context why does it matter
    • What are marketing hero benchmark results
    • What to do with your benchmark results
    • server storage I/O benchmark step test
      Example of a step test results with various workers and workload

    • What do the various metrics mean (can we get a side of context with them metrics?)
    • Why look at server CPU if doing storage and I/O networking tests
    • Where and how to profile your application workloads
    • What about physical vs. virtual vs. cloud and software defined benchmarking
    • How to benchmark block DAS or SAN, file NAS, object, cloud, databases and other things
    • Avoiding common benchmark mistakes
    • Tips, recommendations, things to watch out for
    • What to do next

    server storage I/O trends

    Where to learn more

    The following are related links to read more about server (cloud, virtual and physical) storage I/O benchmarking tools, technologies and techniques.

    Drew Robb’s benchmarking quick reference guide
    Server storage I/O benchmarking tools, technologies and techniques resource page
    Server and Storage I/O Benchmarking 101 for Smarties.
    Microsoft Diskspd download and Microsoft Diskspd overview (via Technet)
    I/O, I/O how well do you know about good or bad server and storage I/Os?
    Server and Storage I/O Benchmark Tools: Microsoft Diskspd (Part I and Part II)

    Wrap up and summary

    We have just scratched the surface when it comes to benchmarking cloud, virtual and physical server storage I/O and networking hardware and software, along with associated tools, techniques and technologies. However, hopefully this and the links for more reading mentioned above give a basis for connecting the dots of what you already know, or enable learning more about workloads, synthetic and real-world workload generation, benchmarks and associated topics. Needless to say there are many more things that we will cover in future posts (e.g. keep an eye on and bookmark the server storage I/O benchmark tools and resources page here).

    Ok, nuff said, for now…

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Microsoft Diskspd (Part II): Server Storage I/O Benchmark Tools


    server storage I/O trends

    This is part two of a two-part post pertaining to Microsoft Diskspd that is also part of a broader series focused on server storage I/O benchmarking, performance, capacity planning, tools and related technologies. You can view part one of this post here, along with companion links here.

    Microsoft Diskspd StorageIO lab test drive

    Server and StorageIO lab

    Talking about tools and technologies is one thing; installing and trying them is the next step for gaining experience, so how about some quick hands-on time with Microsoft Diskspd (download your copy here).

    The following commands all specify an I/O size of 8Kbytes doing I/O to a 45GByte file called diskspd.dat located on the F: drive. Note that a 45GByte file is on the small side for general performance testing, however it was used for simplicity in this example. Ideally a larger target storage area (file, partition, device) would be used; otoh, if your application uses a small storage device or volume, then tune accordingly.

    In this test, the F: drive is an iSCSI RAID protected volume, however you could use other storage interfaces supported by Windows including other block DAS or SAN (e.g. SATA, SAS, USB, iSCSI, FC, FCoE, etc.) as well as NAS. Also common to the following commands is using 16 threads and 32 outstanding I/Os to simulate the concurrent activity of many users or application processing threads.
    Other common parameters used in the following were -r for random, a 7200 second (e.g. two hour) test duration, display latency (-L), disable hardware and software caching (-h), and forcing CPU affinity (-a0,1,2,3). Since the test ran on a server with four cores, I wanted to see if I could use those for helping to keep the threads and storage busy. What varies in the commands below is the percentage of reads vs. writes, as well as the results output file. Some of the workloads below also had the -S option specified to disable OS I/O buffering (to view how buffering helps when enabled or disabled). Depending on the goal, or type of test, validation, or workload being run, I would choose to set some of these parameters differently.

    diskspd -c45g -b8K -t16 -o32 -r -d7200 -h -w0 -L -a0,1,2,3 F:\diskspd.dat >> SIOWS2012R203_Eiscsi_145_noh_write000.txt

    diskspd -c45g -b8K -t16 -o32 -r -d7200 -h -w50 -L -a0,1,2,3 F:\diskspd.dat >> SIOWS2012R203_Eiscsi_145_noh_write050.txt

    diskspd -c45g -b8K -t16 -o32 -r -d7200 -h -w100 -L -a0,1,2,3 F:\diskspd.dat >> SIOWS2012R203_Eiscsi_145_noh_write100.txt

    diskspd -c45g -b8K -t16 -o32 -r -d7200 -h -S -w0 -L -a0,1,2,3 F:\diskspd.dat >> SIOWS2012R203_Eiscsi_145_noSh_test_write000.txt

    diskspd -c45g -b8K -t16 -o32 -r -d7200 -h -S -w50 -L -a0,1,2,3 F:\diskspd.dat >> SIOWS2012R203_Eiscsi_145_noSh_write050.txt

    diskspd -c45g -b8K -t16 -o32 -r -d7200 -h -S -w100 -L -a0,1,2,3 F:\diskspd.dat >> SIOWS2012R203_Eiscsi_145_noSh_write100.txt

    The following is the output from the above workload command.
    Microsoft Diskspd sample output
    Microsoft Diskspd sample output part 2
    Microsoft Diskspd sample output part 3

    Note that as with any benchmark, workload test or simulation, your results will vary. In the above, the server, storage and I/O system were not tuned as the focus was on working with the tool and determining its capabilities. Thus do not focus on the performance results per se, rather on what you can do with Diskspd as a tool to try different things. Btw, fwiw, in the above example in addition to using an iSCSI target, the Windows 2012 R2 server was a guest on a VMware ESXi 5.5 system.

    Where to learn more

    The following are related links to read more about server (cloud, virtual and physical) storage I/O benchmarking tools, technologies and techniques.

    Drew Robb’s benchmarking quick reference guide
    Server storage I/O benchmarking tools, technologies and techniques resource page
    Server and Storage I/O Benchmarking 101 for Smarties.
    Microsoft Diskspd download and Microsoft Diskspd overview (via Technet)
    I/O, I/O how well do you know about good or bad server and storage I/Os?
    Server and Storage I/O Benchmark Tools: Microsoft Diskspd (Part I and Part II)

    Comments and wrap-up

    What I like about Diskspd (Pros)

    Reporting includes CPU usage (you can't do server and storage I/O without CPU) along with IOPs (activity), bandwidth (throughput or amount of data being moved), per thread and total results, along with optional reporting. While a GUI would be nice, particularly for beginners, I'm used to setting up scripts for different workloads, so having extensive options for setting up different workloads is welcome. Being associated with a specific OS (e.g. Windows), the CPU affinity and buffer management controls will be handy for some projects.

    Diskspd has the flexibility to use different storage interfaces and types of storage including files or partitions, which should be taken for granted, however with some tools don't take things for granted. I like the flexibility to easily specify various I/O sizes including large 1MByte, 10MByte, 20MByte, 100MByte and 500MByte to simulate application workloads that do large sequential (or random) activity. I tried some I/O sizes (e.g. specified by the -b parameter) larger than 500MB, however I received various errors including "Could not allocate a buffer bytes for target", which means that Diskspd can only do I/O sizes smaller than that. While not able to do I/O sizes larger than 500MB, this is actually impressive. Several other tools I have used or worked with have I/O size limits down around 10MByte, which makes it difficult to create workloads that do large I/Os (note this is the I/O size, not the number of IOPs).

    Oh, something else that should be obvious, however I will state it: Diskspd is free, unlike some industry de facto standard tools or workload generators that require a fee to get and use.

    Where Diskspd could be improved (Cons)

    For some users a GUI or configuration wizard would make the tool easier to get started with; on the other hand (otoh), I tend to use the command line capabilities of tools. It would also be nice to specify ranges as part of a single command, such as stepping through an I/O size range (e.g. 4K, 8K, 16K, 1MB, 10MB) as well as read/write percentages along with varying random and sequential mixes. Granted this can easily be done by having a series of commands, however I have become spoiled by using other tools such as vdbench.

    Summary

    Server and storage I/O performance toolbox

    Overall I like Diskspd and have added it to my Server Storage I/O workload and benchmark tool-box.

    Keep in mind that the best benchmark or workload generation technology tool will be your own application(s) configured to run as close as possible to production activity levels.

    However when that is not possible, an alternative is to use tools that have the flexibility to be configured as close as possible to your application(s) workload characteristics. This means that the focus should not be as much on the tool, but rather on how flexible the tool is to work for you; granted, the tool needs to be robust.

    Having said that, Microsoft Diskspd is a good and extensible tool for benchmarking, simulation, validation and comparisons, however it will only be as good as the parameters and configuration you set it up to use.

    Check out Microsoft Diskspd and add it to your benchmark and server storage I/O tool-box like I have done.

    Ok, nuff said (for now)

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    December 2014 Server StorageIO Newsletter

    December 2014

    Hello and welcome to this December Server and StorageIO update newsletter.

    Seasons Greetings

    Seasons greetings

    Commentary In The News

    StorageIO news

    Following are some StorageIO industry trends perspectives comments that have appeared in various venues. Cloud conversations continue to be popular including concerns about privacy, security and availability. Over at BizTech Magazine there are some comments about cloud and ROI. Some comments on AWS and Google SSD services can be viewed at SearchAWS. View other trends comments here

    Tips and Articles

    View recent as well as past tips and articles here

    StorageIOblog posts

    Recent StorageIOblog posts include:

    View other recent as well as past blog posts here

    In This Issue

  • Industry Trends Perspectives
  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Events & Activities

    View other recent and upcoming events here

    Webinars

    December 11, 2014 – BrightTalk
    Server & Storage I/O Performance

    December 10, 2014 – BrightTalk
    Server & Storage I/O Decision Making

    December 9, 2014 – BrightTalk
    Virtual Server and Storage Decision Making

    December 3, 2014 – BrightTalk
    Data Protection Modernization

    Videos and Podcasts

    StorageIO podcasts are also available via and at StorageIO.tv

    From StorageIO Labs

    Research, Reviews and Reports

    StarWind Virtual SAN for Microsoft SOFS

    May require registration
    This looks at the shared storage needs of SMBs and ROBOs leveraging Microsoft Scale-Out File Server (SOFS). Focus is on Microsoft Windows Server 2012, Server Message Block (SMB) version 3.0, SOFS and StarWind Virtual SAN management software.

    View additional reports and lab reviews here.

    Resources and Links

    Check out these useful links and pages:
    storageio.com/links
    objectstoragecenter.com
    storageioblog.com/data-protection-diaries-main/
    storageio.com/ssd

    Enjoy this edition of the Server and StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts along with in the news commentary appearing soon.

    Seasons greetings 2014

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Cloud Conversations: Revisiting re:Invent 2014 and other AWS updates

    server storage I/O trends

    This is part one of a two-part series about Amazon Web Services (AWS) re:Invent 2014 and other recent cloud updates, read part two here.

    Revisiting re:Invent 2014 and other AWS updates

    AWS re:Invent 2014

A few weeks ago I attended Amazon Web Services (AWS) re:Invent 2014 in Las Vegas for a few days. For those of you who have not yet attended this event, I recommend adding it to your agenda. If you have an interest in compute servers, networking, storage, development tools or management of cloud (public, private, hybrid), virtualization and related topic themes, you should check out AWS re:Invent.

AWS made several announcements at re:Invent including many around development tools, compute and data storage services. One of those to keep an eye on is the cloud-based Aurora relational database service that complements existing RDS tools. Aurora is positioned as an alternative to traditional SQL-based transactional databases commonly found in enterprise environments (e.g. SQL Server among others).

Some recent AWS announcements prior to re:Invent include:

    AWS vCenter Portal

Using the AWS Management Portal for vCenter adds a plug-in within your VMware vCenter to manage your AWS infrastructure. The vCenter plug-in for AWS includes support for AWS EC2 and Virtual Machine (VM) import to migrate your VMware VMs to AWS EC2, and to create VPCs (Virtual Private Clouds) along with subnets. There is no cost for the plug-in; you simply pay for the underlying AWS resources consumed (e.g. EC2, EBS, S3). Learn more about AWS Management Portal for vCenter here, and download the OVA plug-in for vCenter here.

    AWS re:invent content


    AWS Andy Jassy (Image via AWS)

November 12, 2014 (Day 1) Keynote (highlight video, full keynote). This is the session where AWS SVP Andy Jassy made several announcements including the Aurora relational database that complements the existing RDS (Relational Database Service). In addition to Andy, the keynote sessions also included various special guests ranging from AWS customers and partners to internal people in support of the various initiatives and announcements.


    Amazon.com CTO Werner Vogels (Image via AWS)

November 13, 2014 (Day 2) Keynote (highlight video, full keynote). In this session, Amazon.com CTO Werner Vogels appears, making announcements about the new Container and Lambda services.

    AWS re:Invent announcements

    Announcements and enhancements made by AWS during re:Invent include:

    • Key Management Service (KMS)
    • Amazon RDS for Aurora
    • Amazon EC2 Container Service
    • AWS Lambda
    • Amazon EBS Enhancements
• Application development, deployment and life-cycle management tools
    • AWS Service Catalog
    • AWS CodeDeploy
    • AWS CodeCommit
    • AWS CodePipeline

    Key Management Service (KMS)

A hardware security module (HSM) based key management service for creating and controlling the encryption keys that protect the security of your digital assets. KMS integrates with AWS EBS and other services including S3 and Redshift, along with CloudTrail logs for regulatory, compliance and management purposes. Learn more about AWS KMS here
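To make the KMS workflow concrete, here is a minimal sketch using the Python boto3 SDK; the region, key description and plaintext values are illustrative, and AWS credentials are assumed to already be configured in the environment.

```python
# Minimal sketch of the AWS KMS workflow described above using boto3.
# Region, key description and plaintext are illustrative placeholders.
import boto3

kms = boto3.client("kms", region_name="us-east-1")

# Create a customer master key managed by the HSM-backed service
key = kms.create_key(Description="storageio-demo-key")
key_id = key["KeyMetadata"]["KeyId"]

# Encrypt a small secret directly with the key
ciphertext = kms.encrypt(KeyId=key_id, Plaintext=b"s3 bucket passphrase")["CiphertextBlob"]

# Decrypt it again; KMS resolves the key from metadata inside the ciphertext blob
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
assert plaintext == b"s3 bucket passphrase"
```

The same keys can then be referenced by EBS, S3 and Redshift for server-side encryption, with CloudTrail recording who used which key and when.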

    AWS Database

For those who are not familiar, AWS has a suite of database related services ranging from SQL and NoSQL offerings to Petabyte (PB) scale data warehouses for big data and analytics. AWS offers the Relational Database Service (RDS), which is a suite of different database engines, instances and services. RDS engine types include MySQL, PostgreSQL, Oracle, SQL Server and the new AWS Aurora offering (read more below). Other little data database and big data repository related offerings include SimpleDB and DynamoDB (NoSQL databases), ElastiCache (in-memory cache repository) and Redshift (large-scale data warehouse and big data repository).

In addition to the database services offered by AWS, you can also combine various AWS resources including EC2 compute, EBS and other storage offerings to create your own solution. For example there are various Amazon Machine Images (AMIs), or pre-built operating systems and database tools, available with EC2 as well as via the AWS Marketplace, such as MongoDB and Couchbase among others. For those not familiar with MongoDB, Couchbase, Cassandra, Riak along with other NoSQL or alternative databases and key value repositories, check out my book review of Seven Databases in Seven Weeks here.

    Seven Databases book review
    Seven Databases in Seven Weeks and NoSQL movement available from Amazon.com

    Amazon RDS for Aurora

Aurora is a new relational database offering that is part of the AWS RDS suite of services. Positioned as an alternative to commercial high-end databases, Aurora is a cost-effective database engine compatible with MySQL. AWS is claiming 5x better performance than standard MySQL with Aurora while being resilient and durable. Learn more about Aurora, which will be available in early 2015, and its current preview here.
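For a sense of how Aurora is provisioned programmatically, below is a minimal boto3 sketch; the cluster identifiers, credentials and instance class are illustrative, and keep in mind the service was still in preview when this was written, so availability varies by account and region.

```python
# Minimal sketch of provisioning an Aurora (MySQL-compatible) cluster with boto3.
# Identifiers, credentials and instance class below are illustrative only.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create the storage-backed cluster (the shared, replicated Aurora volume)
rds.create_db_cluster(
    DBClusterIdentifier="storageio-aurora-demo",
    Engine="aurora",
    MasterUsername="admin",
    MasterUserPassword="change-me-please",
    DatabaseName="demodb",
)

# An Aurora cluster needs at least one DB instance to serve connections
rds.create_db_instance(
    DBInstanceIdentifier="storageio-aurora-demo-1",
    DBClusterIdentifier="storageio-aurora-demo",
    Engine="aurora",
    DBInstanceClass="db.r3.large",
)
```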

    Amazon EC2 C4 instances

AWS will be adding a new C4 instance as a next generation of EC2 compute instance based on Intel Xeon E5-2666 v3 (Haswell) processors. The Intel Xeon E5-2666 v3 processors run at a clock speed of 2.9 GHz, providing the highest level of EC2 performance. AWS is targeting traditional High Performance Computing (HPC) along with other compute intensive workloads including analytics, gaming, and transcoding among others. Learn more about AWS EC2 instances here, and view this Server and StorageIO EC2, EBS and associated AWS primer here.
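Launching a C4 instance looks the same as any other EC2 instance type once the family is available in your region; a minimal boto3 sketch follows, where the AMI ID and key pair name are hypothetical placeholders you would replace with your own.

```python
# Minimal sketch of launching a compute-optimized C4 instance with boto3.
# The AMI ID, key pair and region are placeholders for illustration only.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",     # hypothetical HVM AMI in your account/region
    InstanceType="c4.8xlarge",  # largest C4 size; smaller c4.* sizes also exist
    KeyName="my-keypair",       # hypothetical key pair name
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```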

    Amazon EC2 Container Service

Containers such as those from Docker have become popular for helping developers rapidly build as well as deploy scalable applications. AWS has added a new feature called EC2 Container Service that supports Docker using simple APIs. In addition to supporting Docker, EC2 Container Service is a high performance, scalable container management service for distributed applications deployed on a cluster of EC2 instances. Similar to other EC2 services, EC2 Container Service leverages security groups, EBS volumes and Identity Access Management (IAM) roles along with scheduling placement of containers to meet your needs. Note that AWS is not alone in adding container and Docker support, with Microsoft Azure also having recently made some announcements; learn more about Azure and Docker here. Learn more about EC2 Container Service here and more about Docker here.
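The basic API flow is to register a Docker-based task definition and then run it on a cluster of EC2 container instances. A minimal boto3 sketch is below; the cluster name, task family and container image are illustrative placeholders.

```python
# Minimal sketch of the EC2 Container Service API flow with boto3: register a
# Docker-based task definition, then run one copy of it on a cluster.
# Cluster, family and image names are illustrative placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.register_task_definition(
    family="storageio-web-demo",
    containerDefinitions=[
        {
            "name": "web",
            "image": "nginx:latest",  # any Docker image the container hosts can pull
            "cpu": 256,
            "memory": 256,
            "essential": True,
            "portMappings": [{"containerPort": 80, "hostPort": 80}],
        }
    ],
)

# Launch one copy of the task on the default cluster of EC2 container instances
ecs.run_task(cluster="default", taskDefinition="storageio-web-demo", count=1)
```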

    Docker for smarties

    Continue reading about re:Invent 2014 and other recent AWS enhancements here in part two of this two-part series.

    Ok, nuff said (for now)

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review

Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review

    This is the first post of a two part series, read the second post here.

    Earlier this year I had the opportunity to test drive some Seagate 1200 12Gbs Enterprise SAS SSD’s as a follow-up to some earlier activity trying their Enterprise TurboBoost Drives. Disclosure: Seagate has been a StorageIO client and was also the sponsor of this white paper and associated proof-points mentioned in this post.

The question to ask yourself is not if flash Solid State Device (SSD) technologies are in your future. Instead, the questions are when, where, using what, how to configure and related themes. SSD including traditional DRAM and NAND flash-based technologies are like real estate where location matters; however, there are different types of properties to meet various needs. This means leveraging different types of NAND flash SSD technologies in different locations in a complementary and cooperative aka hybrid way. For example nand flash SSD as part of an enterprise tiered storage strategy can be implemented server-side using PCIe cards, SAS and SATA drives as targets or as cache along with software, as well as leveraging SSD devices in storage systems or appliances.

    Seagate 1200 SSD
    Seagate 1200 Enterprise SAS 12Gbs SSD Image via Seagate.com

Another place where nand flash can be found complementing SSD devices is in so-called Solid State Hybrid Drives (SSHD) or Hybrid Hard Disk Drives (HHDD), including a new generation that accelerates writes as well as reads, such as those Seagate refers to as Enterprise TurboBoost. The Enterprise TurboBoost drives (view the companion StorageIO Lab review TurboBoost white paper here) were previously known as Solid State Hybrid Drives (SSHD) or Hybrid Hard Disk Drives (HHDD). Read more about TurboBoost here and here.

    The best server and storage I/O is the one you do not have to do

Keep in mind that the best server or storage I/O is the one that you do not have to do, with the second best being the one with the least overhead, resolved as close to the processor (compute) as possible or practical. The following figure shows that the best place to resolve server and storage I/O is as close to the compute processor as possible; however, only a finite amount of memory and storage can be located there. This is where the server memory and storage I/O hierarchy comes into play, which is also often thought of in the context of tiered storage balancing performance and availability with cost and architectural limits.

Also shown is locality of reference, which refers to how close data is to where it is being used and includes cache effectiveness or buffering. Hence a small amount of flash and DRAM cache in the right location can have a large benefit. Now if you can afford it, install as much DRAM along with flash storage as possible; however if you are like most organizations with finite budgets yet server and storage I/O challenges, then deploy a tiered flash storage strategy.
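To put a rough number on that locality argument, the average access time is simply a hit-rate weighted blend of the fast and slow tiers. Here is a minimal sketch; the latency figures are assumed ballpark values for illustration, not lab measurements.

```python
# Minimal sketch of why a small cache in the right place helps: average access
# time is weighted by hit rate. Latencies are rough assumed figures (DRAM/flash
# hit ~0.2 us, HDD random miss ~4 ms), not measurements.
def effective_latency_us(hit_rate, cache_us=0.2, backend_us=4000.0):
    """Average latency when hits come from cache and misses go to the HDD tier."""
    return hit_rate * cache_us + (1.0 - hit_rate) * backend_us

for rate in (0.0, 0.5, 0.9, 0.99):
    print(f"hit rate {rate:>4.0%}: {effective_latency_us(rate):8.1f} us average")
```

Even a 90% hit rate on a relatively small cache cuts the average latency by an order of magnitude, which is the point of putting a bit of flash or DRAM close to the work.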

    flash cache locality of reference
    Server memory storage I/O hierarchy, locality of reference

    Seagate 1200 12Gbs Enterprise SAS SSD’s

    Back to the Seagate 1200 12Gbs Enterprise SAS SSD which is covered in this StorageIO Industry Trends Perspective thought leadership white paper. The focus of the white paper is to look at how the Seagate 1200 Enterprise class SSD’s and 12Gbps SAS address current and next generation tiered storage for virtual, cloud, traditional Little and Big Data infrastructure environments.

Seagate 1200 Enterprise SSD

    This includes providing proof points running various workloads including Database TPC-B, TPC-E and Microsoft Exchange in the StorageIO Labs along with cache software comparing SSD, SSHD and different HDD’s including 12Gbs SAS 6TB near-line high-capacity drives.

    Seagate 1200 Enterprise SSD Proof Points

The proof points in this white paper are from an application focus perspective representing more of an end-to-end, real-world situation. While they are not included in this white paper, StorageIO has run traditional storage building-block focused workloads, which can be found at StorageIOblog (Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?). These include tools such as Iometer, iorate and vdbench among others for various I/O sizes, mixed, random, sequential, reads and writes along with “hot-band” across different numbers of threads (concurrent users). “Hot-band” is part of the SNIA Emerald energy effectiveness metrics for looking at sustained storage performance using tools such as vdbench. Read more about other various server and storage I/O benchmarking tools and techniques here.
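For readers who want to sanity check the building-block numbers such tools report, the underlying arithmetic is simple: bandwidth equals IOPS times I/O size, and by Little's Law the sustained queue depth equals IOPS times average response time. A small sketch with illustrative (not measured) figures:

```python
# Minimal sketch of the arithmetic behind building-block workload results:
# bandwidth = IOPS x I/O size, and (Little's Law) queue depth = IOPS x latency.
# The numbers below are illustrative, not results from the white paper.
def bandwidth_mb_s(iops, io_size_kb):
    return iops * io_size_kb / 1024.0

def queue_depth(iops, avg_latency_ms):
    return iops * (avg_latency_ms / 1000.0)

iops, io_kb, latency_ms = 40000, 8, 0.8   # e.g., an 8 KB random read profile
print(f"{bandwidth_mb_s(iops, io_kb):.0f} MB/s at queue depth "
      f"{queue_depth(iops, latency_ms):.0f}")
```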

For the following series of proof-points (TPC-B, TPC-E and Exchange) a system under test (SUT) consisted of a physical server (described with the proof-points) configured with VMware ESXi along with guest virtual machines (VMs) configured to do the storage I/O workload. Other servers were used in the case of the TPC workloads as application transaction requesters to drive the SQL Server database and the resulting server storage I/O workload. VMware was used in the proof-points to reflect a common industry trend of using virtual server infrastructures (VSI) supporting applications including database, email among others. For the proof-point scenarios, the SUT along with the storage system device under test were dedicated to that scenario (e.g. no other workload running) unless otherwise noted.

    Server Storage I/O config
    Server Storage I/O configuration for proof-points

    Microsoft Exchange Email proof-point configuration

For this proof-point, Microsoft Exchange Jetstress performance workloads were run with the Exchange Database (EDB file) placed on each of the different devices under test, with various metrics shown including activity rates and response time for reads as well as writes. For the Exchange testing, the EDB was placed on the device being tested while its log files were placed on a separate Seagate 400GB Enterprise 12Gbps SAS SSD.

Test configuration: Seagate 400GB 1200 2.5” SSD (ST400FM0073) 12Gbps SAS, 600GB 2.5” Enterprise 15K with TurboBoost™ (ST600MX) 6Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) 6Gbps SAS, Seagate Enterprise Capacity Nearline (ST6000NM0014) 6TB 3.5” 7.2K RPM HDD 12Gbps SAS and a 3TB 7.2K SATA HDD. The email server was hosted as a guest on VMware vSphere/ESXi V5.5 running Microsoft SBS 2011 Service Pack 1 64-bit. The guest VM (VMware vSphere 5.5) resided on a separate SSD-based datastore from the devices being tested; the physical machine (host) had 14 GB DRAM, a quad-core CPU (4 x 3.192GHz Intel E3-1225 v3) and LSI 9300 series 12Gbps SAS adapters in a PCIe Gen 3 slot, running Jetstress 2010. All devices being tested were Raw Device Mapped (RDM) and held the EDB. Log file IOPs were handled via a separate SSD device, also persistent (no delayed writes). The EDB was 300GB and the workload ran for 8 hours.

    Microsoft Exchange VMware SSD performance
    Microsoft Exchange proof-points comparing various storage devices

    TPC-B (Database, Data Warehouse, Batch updates) proof-point configuration

SSDs are a good fit for both transactional database activity with reads and writes as well as query-based decision support systems (DSS), data warehouse and big data analytics. The following are proof points of SSD capabilities for database activity. In addition to supporting database table files and objects, along with transaction journal logs, other uses include metadata, import/export or other high-I/O and write intensive scenarios. Two database workload profiles were tested, batch update (write-intensive) and transactional. Activity involved running Transaction Performance Council (TPC) workloads TPC-B (batch update) and TPC-E (transactional/OLTP, simulating a financial trading system) against Microsoft SQL Server 2012 databases. Each test simulation had the SQL Server database (MDF) on a different device with the transaction log file (LDF) on a separate SSD. TPC-B results for a single device are shown below.

TPC-B (write intensive) results below show how the TPS work being done (blue) increases from left to right (more is better) for various numbers of simulated users. Also shown on the same line for each amount of TPS work being done is the average latency in seconds (right to left), where lower is better. Results are shown from top to bottom for each group of users (100, 50, 20 and 1) for the different drives being tested. Note how the SSD device does more work at a lower response time vs. the traditional HDDs.

Test configuration: Seagate 400GB 1200 2.5” SSD (ST400FM0073) 12Gbps SAS, 600GB 2.5” Enterprise 15K with TurboBoost™ (ST600MX) 6Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) 6Gbps SAS, Seagate Enterprise Capacity Nearline (ST6000NM0014) 6TB 3.5” 7.2K RPM HDD 12Gbps SAS and a 3TB Seagate 7.2K SATA HDD. The workload generator and virtual clients ran on Windows 7 Ultimate 64-bit. The Microsoft SQL Server 2012 database was on a Windows 7 guest. The guest VM (VMware vSphere 5.5) had a dedicated 14 GB DRAM, quad-core CPU (4 x 3.192GHz Intel E3-1225 v3) and LSI 9300 series 12Gbps SAS adapters in a PCIe Gen 3 slot, along with TPC-B (www.tpc.org) workloads.

    VM with guest OS along with SQL tempdb and masterdb resided on separate SSD based data store from devices being tested (e.g., where MDF (main database tables) and LDF (log file) resided). All devices being tested were Raw Device Mapped (RDM) independent persistent with database log file on a separate SSD device also persistent (no delayed writes) using VMware PVSCSI driver. MDF and LDF file sizes were 142GB and 26GB with scale factor of 10000, with each step running for one hour (10-minute preamble). Note that these proof-points DO NOT use VMware or any other third-party cache software or I/O acceleration tool technologies as those are covered later in a separate proof-point.

    TPC-B sql server database SSD performance
    TPC-B SQL Server database proof-points comparing various storage devices

    TPC-E (Database, Financial Trading) proof-point configuration

The following shows results from the TPC-E test (OLTP/transactional workload) simulating a financial trading system. TPC-E is an industry standard workload that performs a mix of read and write database queries. Proof-points were performed with various numbers of users (10, 20, 50 and 100) to determine Transactions Per Second (TPS, aka I/O rate) and response time in seconds. The TPC-E transactional results are shown for each device being tested across the different user workloads. The results show how TPC-E TPS work (blue) increases from left to right (more is better) for larger numbers of users, along with the corresponding latency (green) that goes from right to left (less is better). The Seagate Enterprise 1200 SSD is shown on the top in the figure below with a red box around its results. Note how the SSD has a lower latency while doing more work compared to the other traditional HDDs.

Test configuration: Seagate 400GB 1200 2.5” SSD (ST400FM0073) 12Gbps SAS, 600GB 2.5” Enterprise 15K with TurboBoost™ (ST600MX) 6Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) 6Gbps SAS, Seagate Enterprise Capacity Nearline (ST6000NM0014) 6TB 3.5” 7.2K RPM HDD 12Gbps SAS and a 3TB Seagate 7.2K SATA HDD. The workload generator and virtual clients ran on Windows 7 Ultimate 64-bit. The Microsoft SQL Server 2012 database was on a Windows 7 guest. The guest VM (VMware vSphere 5.5) had a dedicated 14 GB DRAM, quad-core CPU (4 x 3.192GHz Intel E3-1225 v3) and LSI 9300 series 12Gbps SAS adapters in a PCIe Gen 3 slot, along with TPC-E (www.tpc.org) workloads.

    VM with guest OS along with SQL tempdb and masterdb resided on separate SSD based data store from devices being tested (e.g., where MDF (main database tables) and LDF (log file) resided). All devices being tested were Raw Device Mapped (RDM) independent persistent with database log file on a separate SSD device also persistent (no delayed writes) using VMware PVSCSI driver. MDF and LDF file sizes were 142GB and 26GB with scale factor of 10000, with each step running for one hour (10-minute preamble). Note that these proof-points DO NOT use VMware or any other third-party cache software or I/O acceleration tool technologies as those are covered later in a separate proof-point.

    TPC-E sql server database SSD performance
    TPC-E (Financial trading) SQL Server database proof-points comparing various storage devices

    Continue reading part-two of this two-part series here including the virtual server storage I/O blender effect and solution.

    Ok, nuff said (for now).

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Part II: Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review

Part II: Seagate 1200 12Gbs Enterprise SAS SSD StorageIO lab review

    This is the second post of a two part series, read the first post here.

    Earlier this year I had the opportunity to test drive some Seagate 1200 12Gbs Enterprise SAS SSD’s as a follow-up to some earlier activity trying their Enterprise TurboBoost Drives. Disclosure: Seagate has been a StorageIO client and was also the sponsor of this white paper and associated proof-points mentioned in this post.

    The Server Storage I/O Blender Effect Bottleneck

    The earlier proof-points focused on SSD as a target or storage device. In the following proof-points, the Seagate Enterprise 1200 SSD is used as a shared read cache (write-through). Using a write-through cache enables a given amount of SSD to give a performance benefit to other local and networked storage devices.
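To make the write-through behavior concrete, below is a generic conceptual sketch of the technique (my illustration, not the Virtunet product): reads are served from the fast cache when possible, while writes always land on the backing device before the cached copy is updated, so losing the cache never loses data.

```python
# Conceptual sketch of a write-through read cache (not the Virtunet product):
# writes always hit the backing store before the cache copy is refreshed, so
# the cache can be discarded without data loss, while read hits avoid the
# slower device entirely.
from collections import OrderedDict

class WriteThroughCache:
    def __init__(self, backing_store, capacity=1024):
        self.backing = backing_store      # dict-like slow device (e.g., HDD LUN)
        self.cache = OrderedDict()        # LRU-ordered fast copies (e.g., SSD)
        self.capacity = capacity

    def read(self, block):
        if block in self.cache:           # cache hit: serve from the fast tier
            self.cache.move_to_end(block)
            return self.cache[block]
        data = self.backing[block]        # cache miss: read HDD, then populate
        self._insert(block, data)
        return data

    def write(self, block, data):
        self.backing[block] = data        # write-through: HDD is updated first
        self._insert(block, data)         # keep the cached copy coherent

    def _insert(self, block, data):
        self.cache[block] = data
        self.cache.move_to_end(block)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used block
```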

    traditional server storage I/O
    Non-virtualized servers with dedicated storage and I/O paths.

    Aggregation causes aggravation with I/O bottlenecks because of consolidation using server virtualization. The following figure shows non-virtualized servers with their own dedicated physical machine (PM) and I/O resources. When various servers are virtualized and hosted by a common host (physical machine), their various workloads compete for I/O and other resources. In addition to competing for I/O performance resources, these different servers also tend to have diverse workloads.

    virtual server storage I/O blender
    Virtual server storage I/O blender bottleneck (aggregation causes aggravation)

    The figure above shows aggregation causing aggravation with the result being I/O bottlenecks as various applications performance needs converge and compete with each other. The aggregation and consolidation result is a blend of random, sequential, large, small, read and write characteristics. These different storage I/O characteristics are mixed up and need to be handled by the underlying I/O capabilities of the physical machine and hypervisor. As a result, a common deployment for SSD in addition to as a target device for storing data is as a cache to cut bottlenecks for traditional spinning HDD.

In the following figure a solution is shown introducing I/O caching with SSD to help mitigate or cut the effects of server consolidation causing performance aggravations.

    Creating a server storage I/O blender bottleneck

Addressing the VMware Server Storage I/O blender with cache

    Addressing server storage I/O blender and other bottlenecks

For these proof-points, the goal was to create an I/O bottleneck resulting from multiple VMs in a virtual server environment performing application work. In this proof-point, multiple competing VMs including a SQL Server 2012 database and an Exchange server shared the same underlying storage I/O infrastructure including HDDs. The 6TB (Enterprise Capacity) HDD was configured as a VMware datastore and allocated as virtual disks to the VMs. Workloads were then run concurrently to create an I/O bottleneck for both cached and non-cached results.

Server storage I/O with virtualization proof-point configuration topology

The following figure shows two sets of proof points, cached (top) and non-cached (bottom), with three workloads. The workloads consisted of concurrent Exchange and SQL Server 2012 (TPC-B and TPC-E) running on separate virtual machines (VMs), all on the same physical machine host (SUT), with database transactions being driven by two separate servers. In these proof-points, the application data was placed onto the 6TB SAS HDD to create a bottleneck, and a portion of the SSD was used as a cache. Note that the Virtunet cache software allows you to use part of an SSD device for cache with the balance used as a regular storage target should you want to do so.

If you have paid attention to the earlier proof-points, you might notice that some of the results below are not as good as those seen in the Exchange, TPC-B and TPC-E results above. The reason is simply that the earlier proof-points were run without competing workloads, and the database along with log or journal files were placed on separate drives for performance. In the following proof-point, as part of creating a server storage I/O blender bottleneck, the Exchange, TPC-B and TPC-E workloads were all running concurrently with all data on the 6TB drive (something you normally would not want to do).

    storage I/O blender solved
    Solving the VMware Server Storage I/O blender with cache

    The cache and non-cached mixed workloads shown above prove how an SSD based read-cache can help to reduce I/O bottlenecks. This is an example of addressing the aggravation caused by aggregation of different competing workloads that are consolidated with server virtualization.

For the workloads shown above, all data (database tables and logs) were placed on VMware virtual disks created from a datastore using a single 7.2K 6TB 12Gbps SAS HDD (e.g. Seagate Enterprise Capacity).

The guest VM system disks, which included paging, applications and other data files, were virtual disks using a separate datastore mapped to a single 7.2K 1TB HDD. Each workload ran for eight hours with the TPC-B and TPC-E having 50 simulated users. For the TPC-B and TPC-E workloads, two separate servers were used to drive the transaction requests to the SQL Server 2012 database.

    For the cached tests, a Seagate Enterprise 1200 400GB 12Gbps SAS SSD was used as the backing store for the cache software (Virtunet Systems Virtucache) that was installed and configured on the VMware host.

    During the cached tests, the physical HDD for the data files (e.g. 6TB HDD) and system volumes (1TB HDD) were read cache enabled. All caching was disabled for the non-cached workloads.

Note that this was only a read cache, which has the side benefit of off-loading those activities so the HDD can focus on writes or read-ahead. Also note that the combined TPC-E, TPC-B and Exchange databases, logs and associated files represented over 600GB of data; there was also the combined space and thus cache impact of the two system volumes and their data. This simple workload and configuration is representative of how SSD caching can complement high-capacity HDDs.

    Seagate 6TB 12Gbs SAS high-capacity HDD

While the star and focus of this series of proof-points is the Seagate 1200 Enterprise 12Gbs SAS SSD, the caching software (Virtunet) and Enterprise TurboBoost drives also play key supporting and favorable roles. However the 6TB 12Gbs SAS high-capacity drive caught my attention from a couple of different perspectives. Certainly the space capacity was interesting, along with a 12Gbs SAS interface well suited for near-line, high-capacity and dense tiered storage environments. However for a high-capacity drive its performance is what really caught my attention, both in the standard Exchange, TPC-B and TPC-E workloads, as well as when combined with SSD and cache software.

This opens the door for a great combination of leveraging some amount of high-performance flash-based SSD (or TurboBoost drives) combined with cache software and high-capacity drives such as the 6TB device (Seagate now has larger versions available). Something else to mention is that the 6TB HDD, in addition to being available in either 12Gbs SAS, 6Gbs SAS or 6Gbs SATA, also has enhanced durability with a read bit error rate of 1 in 10^15 (e.g. on average one unrecoverable read error per 10^15 bits read) and an AFR (annual failure rate) of 0.63% (see more speeds and feeds here). Hence if you are concerned about using large-capacity HDDs and them failing, make sure you go with those that have a low read bit error rate (a higher exponent, e.g. 10^15 vs. 10^14) and a low AFR, which are more common with enterprise class vs. lower cost commodity or workstation drives. Note that these high-capacity enterprise HDDs are also available with Self-Encrypting Drive (SED) options.
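As a rough back-of-the-envelope check on what those reliability figures imply (my arithmetic, not vendor data), one error per 10^15 bits works out to roughly 125 TB read per unrecoverable error on average, and a 0.63% AFR means well under one expected failure per year even for a modest drive population:

```python
# Back-of-the-envelope check on the reliability figures above (my arithmetic,
# not vendor data): data read per unrecoverable error, and expected annual
# failures for a given drive population at a 0.63% AFR.
BITS_PER_ERROR = 10**15
TB = 10**12  # decimal terabytes

tb_read_per_error = BITS_PER_ERROR / 8 / TB
print(f"~{tb_read_per_error:.0f} TB read per unrecoverable error on average")  # ~125 TB

afr = 0.0063
for drives in (12, 100, 1000):
    print(f"{drives:>4} drives -> ~{drives * afr:.1f} expected failures per year")
```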

    Summary

    Read more in this StorageIO Industry Trends and Perspective (ITP) white paper compliments of Seagate 1200 12Gbs SAS SSD’s and visit the Seagate Enterprise 1200 12Gbs SAS SSD page here. Moving forward there is the notion that flash SSD will be everywhere. There is a difference between all data on flash SSD vs. having some amount of SSD involved in preserving, serving and protecting (storing) information.

    Key themes to keep in mind include:

    • Aggregation can cause aggravation which SSD can alleviate
• A relatively small amount of flash SSD in the right place can go a long way
    • Fast flash storage needs fast server storage I/O access hardware and software
    • Locality of reference with data close to applications is a performance enabler
    • Flash SSD everywhere does not mean everything has to be SSD based
    • Having some amount of flash in different places is important for flash everywhere
    • Different applications have various performance characteristics
    • SSD as a storage device or persistent cache can speed up IOPs and bandwidth

    Flash and SSD are in your future, this comes back to the questions of how much flash SSD do you need, along with where to put it, how to use it and when.

    Ok, nuff said (for now).

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    VMware Cisco EMC VCE Zen and now server storage I/O convergence

    Storage I/O trends

    VMware Cisco EMC VCE Zen and now server storage I/O convergence

In case you have not heard, the joint venture (JV) founded in the fall of 2009 between Intel, VMware, Cisco and EMC called VCE had a change of ownership today.

    Well, kind of…

    Who is VCE and what’s this Zen stuff?

For those not familiar or who need a recap, VCE was formed to create converged server, storage and I/O networking hardware and software solutions combining technologies from its investors, resulting in solutions called vBlocks.

The major investors were Cisco, who provides the converged servers and I/O networking along with associated management tools, and EMC, who provides the storage systems along with their associated management tools. Minority investors include VMware (who is majority owned by EMC), who provides the server virtualization aka software defined data center management tools, and Intel, whose processor chip technologies are used in the vBlocks. What has changed from Zen (e.g. yesterday or in the past) and now is that Cisco has sold the majority (they are retaining about 10%) of its investment ownership in VCE to EMC. Learn more about VCE, their solutions and valueware in this post here (VCE revisited, now and Zen).

    Activist activating activity?

EMC pulling VCE in-house, which should prop up its own internal sales figures by perhaps a few billion USD within a year or so (if not sooner), is not as appealing to activist investors who want results now, such as selling off parts of the company (e.g. EMC, VMware or other assets) or the entire company.

    However EMC has been under pressure from activist shareholder Elliot Management to divest or sell-off portions of this business such as VMware so that the investors (including the activist) can make more money. For example there have been the recent stories about EMC looking to sell or merge with the likes of HP (who is now buying back shares and splitting up its own business) among others which certainly must make the activist investors happy.

However, it appears the activist investors who want to see things sold to make money are not happy with EMC buying or investing.

    Via Bloomberg

    “The last thing on investors’ minds is the future of VCE,” Daniel Ives, an analyst with FBR Capital Markets, wrote in a note today. “EMC has a fire in its house right now and the company appears focused on painting its bedroom (e.g. VCE), while the Street wants a resolution on the strategic ownership situation sooner rather than later.”

    Read more at Bloomberg

What's this EMC Federation stuff?

Note that EMC has organized itself into a federation that consists of EMC Information Infrastructure (EMCII), or what you might know as the traditional EMC-based storage and related software solutions, VMware, Pivotal and RSA. Also note that each of those federated companies has its own CEO as well as holdings or ownership of other companies. However all report to a common federated leadership aka EMC. Thus when you hear EMC, depending on the context it could mean the federation mothership that controls the individual companies, or it could refer to EMCII, aka the traditional EMC. Click here to learn more about the EMC federation.

    Converging Markets and Opportunities

Looking beyond near-term or quick gains, EMC could simply be doing something others do: take ownership and control over certain things while reducing the complexities associated with joint initiatives. For example, with EMC and Cisco in a close partnership with VCE, both parties have been free to explore and take part in other joint initiatives, such as Cisco partnering with EMC competitors NetApp and HDS among others. Otoh, EMC partners with Arista for networking, not to mention that via VMware it acquired the virtual network or software defined network company Nicira, now called NSX.

    server and storage I/O road map to convergence

    EMC is also in a partnership with Lenovo for developing servers to be used by EMC for various platforms to support storage, data and information services while shifting the lower-end SMB storage offerings such as Iomega to the Lenovo channel.

Note that Lenovo is in the process of absorbing the IBM xSeries (e.g. x86 based) business unit, a deal that started closing earlier in October (and will take several months to completely close in all countries around the world). For its part, Cisco is also partnering with hyper-converged solution provider SimpliVity, while EMC has announced a statement of direction to bring to market its own hyper-converged platform by the end of the year. For those not familiar, hyper-converged solutions are simply the next evolution of converged or pre-bundled turnkey systems (some of you might have just had a déjà vu moment) that today tend to be targeted at SMBs and ROBOs, however they are also used for targeted applications such as VDI in larger environments.

    Storage I/O trends

    What does this have to do with VCE?

If EMC is about to release, as its statements of direction indicate, a hyper-converged solution by year-end to compete head-on with those from Nutanix, SimpliVity and Tintri, as well as perhaps to a lesser extent VMware's EVO:RAIL, then having more control over VCE means reducing if not eliminating complexity around vBlocks (which are Cisco based with EMC storage) vs. whatever EMC brings to market for hyper-converged. In the past under the VCE initiative, storage was limited to EMC with servers and networking from Cisco and hypervisors from VMware; however, what happens in the future remains to be seen.

    Does this mean EMC is moving even more into servers than just virtual servers?

Tough to say, as EMC cannot afford to have its sales force lose focus on its traditional core products while ramping up other business; however, the EMC direct and partner teams want and need to keep up account control, which means gaining market share and footprint in those accounts. This also means EMC needs to find ways to take cost out of the sales and marketing process where possible to streamline, which perhaps bringing VCE in-house will help do.

Will this perhaps give the EMC direct and partner sales teams a new carrot or incentive to promote converged and hyper-converged at the cost of other competitors or incumbents? Perhaps; let's see what happens in the coming weeks.

    What does this all mean?

In a nutshell, IMHO EMC is doing a couple of things here, one of which is cleaning up ownership in JVs to give itself more control, as well as options for doing other business transactions (mergers and acquisitions (M&A), sales or divestitures, new joint initiatives, etc.). Then there is streamlining its business, from decision-making to quickly responding to new opportunities, as well as routes to market and other activities (e.g. removing complexity and cost vs. simply cutting cost).

Does this signal the prelude to something else? Perhaps. We know that EMC has made a statement of direction about hyper-converged, and with VCE now more under EMC control, perhaps we will see more options from under the VCE umbrella, both for lower-end and entry SMB as well as SME and large enterprise organizations.

    What about the activist investors?

    They are going to make noise as long as they can continue to make more money or get what they want. Publicly I would be shocked if the activist investors were not making statements that EMC should be selling assets not buying or investing.

On the other hand, any smart investor, financial or other analyst should see through the fog and recognize what this relatively simple transaction means in terms of EMC getting further control of its future.

Of course the question remains: does EMC stay in control of its current federation of EMC, VMware, Pivotal and RSA along with each of their respective holdings, or does EMC do a blockbuster merger, divestiture or acquisition?

    server and storage I/O road ahead

    Take a step back, look at the big picture!

    Some things to keep an eye on:

    • Will this move help streamline decision-making enabling new solutions to be brought to market and customers quicker?
    • While there is a VMware focus, don’t forget about the long-running decades old relationship with Microsoft and how that plays into the equation
    • Watch for what EMC releases with their hyper-converged solution as well as where it is focused, not to mention how sold
• Also watch the EMC and Lenovo joint initiative, both for the Iomega storage activity as well as what EMC and Lenovo do with and for servers
    • Speaking of Lenovo, unless I missed something as of the time of writing this, have you noticed that Lenovo is not yet part of the VMware EVO:Rail initiative?

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy

    Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy

    The question to ask yourself is not if flash Solid State Device (SSD) technologies are in your future.

Instead, the questions are when, where, using what, how to configure and related themes. SSD including traditional DRAM and NAND flash-based technologies are like real estate where location matters; however, there are different types of properties to meet various needs. This means leveraging different types of NAND flash SSD technologies in different locations in a complementary and cooperative aka hybrid way.

    Introducing Solid State Hybrid Drives (SSHD)

    Solid State Hybrid Disks (SSHD) are the successors to previous generation Hybrid Hard Disk Drives (HHDD) that I have used for several years (you can read more about them here, and here).

    While it would be nice to simply have SSD for everything, there are also economic budget realities to be dealt with. Keep in mind that a bit of nand flash SSD cache in the right location for a given purpose can go a long way which is the case with SSHDs. This is also why in many environments today there is a mix of SSD, HDD of various makes, types, speeds and capacities (e.g. different tiers) to support diverse application needs (e.g. not everything in the data center is the same).

However, if you have the need for speed and can afford or benefit from the increased productivity, by all means go SSD!

Otoh, if you have budget constraints and need more space capacity yet want some performance boost, then SSHDs are an option. The big difference, however, between today's SSHDs, which are available for enterprise class storage systems and servers as well as desktop environments, is that they can accelerate both reads and writes. This is different from their predecessors that I have used for several years now, which had basic read acceleration but no write optimization.

SSHD storage I/O opportunity
    Better Together: Where SSHDs fit in an enterprise tiered storage environment with SSD and HDDs

As their names imply, they are a hybrid between a nand flash Solid State Device (SSD) and a traditional Hard Disk Drive (HDD), meaning a best of both worlds situation. This means that SSHDs are based on a traditional spinning HDD (various models with different speeds, space capacities and interfaces) along with DRAM (which is found on most modern HDDs), nand flash for read cache, and some extra nonvolatile memory for persistent write cache, combined with a bit of software defined storage performance optimization algorithms.

    Btw, if you were paying attention to that last sentence you would have picked up on something about nonvolatile memory being used for persistent write cache which should prompt the question would that help with nand flash write endurance? Yup.

    Where and when to use SSHD?

    In the StorageIO Industry Trends Perspective thought leadership white paper I recently released compliments of Seagate Enterprise Turbo SSHD (that’s a disclosure btw ;) enterprise class Solid State Hybrid Drives (SSHD) were looked at and test driven in the StorageIO Labs with various application workloads. These activities include being in a virtual environment for common applications including database and email messaging using industry standard benchmark workloads (e.g. TPC-B and TPC-E for database, JetStress for Exchange).

    Storage I/O sshd white paper

Conventional storage system focused workloads using Iometer, iorate and vdbench were also run in the StorageIO Labs to set up baselines for reads, writes, random, sequential, small and large I/O sizes with IOPS, bandwidth and response time latency results. Some of those results can be found here (Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?) with other ongoing workloads continuing in different configurations. The various test drive proof points were done in the StorageIO Labs comparing SSHD, SSD and different HDDs.

    Data Protection (Archiving, Backup, BC, DR)

Staging cache buffer area for snapshots, replication or current copies before streaming to other storage tiers using fast read/write capabilities. Metadata, indices and catalogs benefit from fast reads and writes for faster protection.

    Big Data DSS
    Data Warehouse

Support sequential read-ahead operations and “hot-band” data caching in a cost-effective way using SSHD vs. slower, similar-capacity HDDs for data warehouse, DSS and other analytics environments.

    Email, Text and Voice Messaging

    Microsoft Exchange and other email journals, mailbox or object repositories can leverage faster read and write I/Os with more space capacity.

    OLTP, Database
     Key Value Stores SQL and NoSQL

Eliminate the need to short stroke HDDs to gain performance, offering more space capacity and IOPS performance per device for tables, logs, journals, import/export and scratch or temporary ephemeral storage. Leverage random and sequential read acceleration to complement server-side SSD-based read and write-through caching. Utilize fast magnetic media for persistent data, reducing wear and tear on more costly flash SSD storage devices.

    Server Virtualization

Fast disk storage for datastores and virtual disks supporting VMware vSphere/ESXi, Microsoft Hyper-V, KVM, Xen and others, holding virtual machine disks such as VMware VMDKs along with Hyper-V and other hypervisor virtual disks. Complement virtual server read cache and I/O optimization using SSD as a cache with writes going to fast SSHD. For example, VMware vSphere V5.5 Virtual SAN host disk groups use SSD as a read cache and can use SSHD as the magnetic disk for storing data, boosting performance without breaking the budget or adding complexity.

Speaking of virtual, as mentioned the various proof points were run using Windows systems that were VMware guests with the SSHD and other devices being Raw Device Mapped (RDM) SAS and SATA attached; read how to do that here.

    Hint: If you know about the VMware trick for making a HDD look like a SSD to vSphere/ESXi (refer to here and here) think outside the virtual box for a moment on some things you could do with SSHD in a VSAN environment among other things, for now, just sayin ;).

    Virtual Desktop Infrastructure (VDI)

SSHD can be used as high performance magnetic disk for storing linked clone images, applications and data. Leverage fast reads to support read-ahead or pre-fetch to complement SSD-based read cache solutions. Utilize fast writes to quickly store data, enabling SSD-based read or write-through cache solutions to be more effective. Reduce the impact of boot, shutdown, virus scan or maintenance storms while providing more space capacity.

    Table 1 Example application and workload scenarios benefiting from SSHDs

    Test drive application proof points

Various workloads were run using the Seagate Enterprise Turbo SSHD in the StorageIO lab environment across different real-world like application workload scenarios. These include general storage I/O performance characteristics profiling (e.g. reads, writes, random, sequential and various I/O sizes) to understand how these devices compare to other HDD, HHDD and SSD storage devices in terms of IOPS, bandwidth and response time (latency). In addition to basic storage I/O profiling, the Enterprise Turbo SSHD was also used with various SQL database workloads including Transaction Processing Council (TPC) workloads, along with VMware server virtualization among other use case scenarios.

Note that in the following workload proof points a single drive was used, meaning that using more drives in a server or storage system should yield better performance. This also means scaling would be bound by the constraints of a given configuration, server or storage system. These tests were also conducted using 6Gbps SAS with PCIe Gen 2 based servers, and ongoing testing is confirming even better results with 12Gbps SAS and faster servers with PCIe Gen 3.

    SSHD large file storage i/o
    Copy (read and write) 80GB and 220GB file copies (time to copy entire file)

    SSHD storage I/O TPCB Database performance
    SQLserver TPC-B batch database updates

Test configuration: 600GB 2.5” Enterprise Turbo SSHD (ST600MX) 6 Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6 Gbps SAS, 500GB 3.5” 7.2K RPM HDD 3 Gbps SATA, 1TB 3.5” 7.2K RPM HDD 3 Gbps SATA. The workload generator and virtual clients ran on Windows 7 Ultimate. The Microsoft SQL Server 2012 database was on Windows 7 Ultimate SP1 (64 bit) with 14 GB DRAM, dual CPU (Intel X3490 2.93 GHz), and LSI 9211 6Gbps SAS adapters with TPC-B (www.tpc.org) workloads. The VM resided on a separate datastore from the devices being tested. All devices being tested with the SQL MDF were Raw Device Mapped (RDM) independent persistent, with the database log file (LDF) on a separate SSD device, also persistent (no delayed writes). Tests were performed in StorageIO Lab facilities by StorageIO personnel.

    SSHD storage I/O TPCE Database performance
    SQLserver TPC-E transactional workload

Test configuration: 600GB 2.5” Enterprise Turbo SSHD (ST600MX) 6 Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6 Gbps SAS, 300GB 2.5” Savio 10K RPM HDD 6 Gbps SAS, 1TB 3.5” 7.2K RPM HDD 6 Gbps SATA. The workload generator and virtual clients ran on Windows 7 Ultimate. The Microsoft SQL Server 2012 database was on Windows 7 Ultimate SP1 (64 bit) with 14 GB DRAM, dual CPU (E8400 2.99GHz), and LSI 9211 6Gbps SAS adapters with TPC-E (www.tpc.org) workloads. The VM resided on a separate SSD-based datastore from the devices being tested (e.g., where the MDF resided). All devices being tested were Raw Device Mapped (RDM) independent persistent, with the database log file on a separate SSD device, also persistent (no delayed writes). Tests were performed in StorageIO Lab facilities by StorageIO personnel.

    SSHD storage I/O Exchange performance
    Microsoft Exchange workload

    Test configuration: 2.5” Seagate 600 Pro 120GB (ST120FP0021 ) SSD 6 Gbps SATA, 600GB 2.5” Enterprise Turbo SSHD (ST600MX) 6 Gbps SAS, 600GB 2.5” Enterprise Enhanced 15K V4 (15K RPM) HDD (ST600MP) with 6 Gbps SAS, 2.5” Savio 146GB HDD 6 Gbps SAS, 3.5” Barracuda 500GB 7.2K RPM HDD 3 Gbps SATA. Email server hosted as guest on VMware vSphere/ESXi V5.5, Microsoft Small Business Server (SBS) 2011 Service Pack 1 64 bit, 8GB DRAM, One CPU (Intel X3490 2.93 GHz) LSI 9211 6 Gbps SAS adapter, JetStress 2010 (no other active workload during test intervals). All devices being tested were Raw Device Mapped (RDM) where EDB resided. VM on a SSD based separate data store than devices being tested. Log file IOPs were handled via a separate SSD device.

    Read more about the above proof points along view data points and configuration information in the associated white paper found here (no registration required).

    What this all means

Similar to flash-based SSD technologies, the question is not if, rather when, where, why and how to deploy hybrid solutions such as SSHDs. If your applications and data infrastructure environment have the need for storage I/O speed without giving up space capacity or breaking your budget, flash-enabled devices like the Seagate Enterprise Turbo 600GB SSHD are in your future. You can learn more about enterprise class SSHDs such as those from Seagate by visiting this link here.

    Watch for extra workload proof points being performed including with 12Gbps SAS and faster servers using PCIe Gen 3.

    Ok, nuff said.

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Lenovo buys IBM’s xSeries aka x86 server business, what about EMC?

    Storage I/O trends

    Lenovo buys IBM’s xSeries x86 server business for $2.3B USD, what about EMC?

Once again Lenovo is the new owner of some IBM computer technology, this time acquiring the x86 (e.g. xSeries) server business unit from Big Blue. Today Lenovo announced its plan to acquire the IBM x86 server business unit for $2.3B USD.

    Research Triangle Park, North Carolina, and Armonk, New York – January 23, 2014

    Lenovo (HKSE: 992) (ADR: LNVGY) and IBM (NYSE: IBM) have entered into a definitive agreement in which Lenovo plans to acquire IBM’s x86 server business. This includes System x, BladeCenter and Flex System blade servers and switches, x86-based Flex integrated systems, NeXtScale and iDataPlex servers and associated software, blade networking and maintenance operations. The purchase price is approximately US$2.3 billion, approximately two billion of which will be paid in cash and the balance in Lenovo stock.

    IBM will retain its System z mainframes, Power Systems, Storage Systems, Power-based Flex servers, and PureApplication and PureData appliances.

    Read more here

If you recall (or didn't know), around a decade or so ago IBM also spun off its laptop (e.g. ThinkPad) and workstation business unit to Lenovo after being one of the early PC players (I still have a model XT in my collection along with a Mac SE and a Newton).

    What this means for IBM?

What this means is that IBM is selling off a portion of its systems technology group, which is where the servers, storage and related hardware and software technologies report into. Note however that IBM is not selling off its entire server portfolio, only the x86 (e.g. Intel/AMD) based products that make up the xSeries as well as the companion blade and related systems. This means that IBM is retaining its Power based systems (and processors) that include the pSeries, iSeries and of course the zSeries mainframes, in addition to the storage hardware/software portfolio.

    However as part of this announcement, Lenovo is also licensing from IBM the Storwize/V7000 technology as well as tape summit resources, GPFS based scale out file systems used in SONAS and related products that are part of solution bundles tied to the x86 business.

    Again to be clear, IBM is not selling off (or at least at this time) Storwize, tape or other technology to Lenovo other than x86 server business. By server business, this means the technology, patents, people, processes, products, sales, marketing, manufacturing, R&D along with other entities that form the business unit, not all that different from when IBM divested the workstation/laptop aka PC business in the past.

    Storage I/O trends

    What this means for Lenovo?

What Lenovo gets is an immediate (once the deal closes) expansion of its server portfolio, including high-density systems for cloud and HPC as well as regular enterprise, not to mention SME and SMB. Lenovo also gets blade systems as well as converged systems (server, storage, networking, hardware, software), hence why IBM is also licensing some technology to Lenovo that it is not selling. Lenovo also gets the sales, marketing, design, support and other aspects to expand its server business. By gaining the server business unit, Lenovo will now be in a place to take on Dell (who was also rumored to be in the market for the IBM servers), as well as HP, Oracle and other x86 system based suppliers.

    What about EMC and Lenovo?

    Yes, EMC, that storage company who is also a primary owner of VMware, as well as partner with Cisco and Intel in the VCE initiatives, not to mention who also entered into a partnership with Lenovo a year or so ago.

In case you forgot or didn't know, EMC, after breaking up with Dell, entered into a partnership with Lenovo back in 2012.

This partnership and its initiatives included developing servers that in turn EMC could use for its various storage and data appliances, which continue to leverage x86 type technology. In addition, that agreement found the EMC Iomega brand transitioning over into the Lenovo line-up both for domestic North America as well as internationally, including the Chinese market. Hence I have an older Iomega IX4 that says EMC and a newer one that says EMC/Lenovo; also note that at CES a few weeks ago some new Iomega products were announced.

    In checking with Lenovo today, they indicated that it is business as usual and no changes with or to the EMC partnership.

    Via email from Lenovo spokesperson today:

    A key piece to Lenovo’s Enterprise strategy has always included strong partnerships. In fact today’s announcements reinforce that strategy very clearly.

    Given the new scale, footprint and Enterprise credibility that this server acquisition affords Lenovo, we see great opportunity in offering complimentary storage offerings to new and existing customers.

    Lenovo’s partnership with EMC is multifaceted and stays in-tact as an important part of Lenovo’s overall strategy to offer customers compelling solutions built on world-class technology.

    Lenovo will continue to offer Lenovo/EMC NAS products from our joint venture as well as resell EMC stand-alone storage platforms.

    IBM Storwize storage and other products are integral to the in-scope platforms and solutions we acquired. In order to ensure continuity of business and the best customer experience we will partner with IBM for storage products as well.

    We believe this is a great opportunity for all three companies, but most importantly these partnerships are in place and will remain healthy for the benefit for our customers.

Hence it is my opinion that for now it is business as usual; the IBM x86 business unit has a new home, and those people will be getting new email addresses and business cards, similar to how some of their associates did when the PC group was sold off a few years ago.

Otoh, there may also be new products that might become opportunities to be placed into the Lenovo EMC partnership; however, that is just my speculation at this time. Likewise, while there will be some groups within Lenovo focused on selling the converged Lenovo solutions coming from IBM that may in fact compete with EMC (among others) in some scenarios, that should be no more, and hopefully less, than what IBM has had with its own server groups at times competing with themselves.

    Storage I/O trends

    What does this mean for Cisco, Dell, HP and others?

For Cisco, instead of competing with one of its OEMs (e.g. IBM) for networking equipment (note IBM also owns some of its own networking), the server competition shifts to Lenovo, who is also a Cisco partner (it's called coopetition), and perhaps business as usual in many areas. For Dell, in the mid-market space, things could get interesting, and the Round Rock folks need to get creative and go beyond VRTX.

For HP, this is where IMHO it's going to get really interesting as Lenovo gets things transitioned. Near-term, HP could have a disruptive upper hand; however, longer-term HP has to get its A-game on. Oracle is in the game, as are a bunch of others from Fujitsu to Supermicro, and outside of North America, in particular China, there is also Huawei. Back to EMC and VCE: while I expect the Cisco partnership to stay, I also see a wild card where EMC can leverage its Lenovo partnership into more markets, while Cisco continues to move into storage and other adjacent areas (e.g. more coopetition).

    What this means now and going forward?

Thus this is as much about enterprise, SME and SMB as it is about HPC, cloud and high-density where the game is about volume. Likewise there is also the convergence or data infrastructure angle combining server, storage and networking hardware, software and services.

One of the things I have noticed about Lenovo as a customer using ThinkPads for over 13 years now (not the same one) is that while they are affordable, instead of simply cutting cost and quality, they seem to have found ways to remove cost, which is different than simply cutting to go cheap.

Case in point: about a year and a half ago I dropped my iPhone on my Lenovo X1 keyboard that is back-lit and broke a key. When I called Lenovo after trying to find a replacement key on the web, they said no worries, and the next morning a new keyboard for the laptop was on my doorstep by 10:30 AM with instructions on how to remove the old one, put in the new one, and do the RMA, no questions asked (read more about this here).

The reason I mention that story about my X1 laptop is that it ties to what I'm curious about and watching with their soon-to-be expanded server business.

    Will they go in and simply look to reduce cost by making cuts from design to manufacturing to part quality, service and support, or, find ways to remove complexity and cost while providing more value?

    Now I wonder whose technology will join my HP and Dell systems to fill some empty rack space in the not so distant future to support growth?

Time will tell. Congratulations and best wishes to Lenovo and the IBMers who now have a new home.

    Ok, nuff said

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved