Server StorageIO May 2016 Update Newsletter

Volume 16, Issue V

Hello and welcome to this May 2016 Server StorageIO update newsletter.

In This Issue

  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Events and Webinars
  • Industry Activity Trends
  • Resources and Links
Enjoy this shortened edition of the Server StorageIO update newsletter. Watch for more tips, articles, lab report test-drive reviews, blog posts, videos and podcasts along with in-the-news commentary appearing soon.

    Cheers GS

    StorageIOblog Posts

    Recent and popular Server StorageIOblog posts include:

    View other recent as well as past blog posts here

     

    StorageIO Commentary in the news

    Recent Server StorageIO industry trends perspectives commentary in the news.

    Cloud and Virtual Data Storage Networking: Various comments and discussions

    StorageIOblog: Additional comments and perspectives

    SearchCloudStorage: Comments on OpenIO joins object storage cloud scrum

    SearchCloudStorage: Comments on EMC VxRack Neutrino Nodes and OpenStack

    View more Server, Storage and I/O hardware as well as software trends comments here

     

    StorageIO Tips and Articles

    Recent Server StorageIO articles appearing in different venues include:

    Via Micron Blog (Guest Post): What’s next for NVMe and your Data Center – Preparing for Tomorrow Today

Check out these resources covering techniques, trends as well as tools. View more tips and articles here

    StorageIO Webinars and Industry Events

    Brouwer Storage (Nijkerk Holland) June 10-15, 2016 – Various in person seminar workshops

June 15: Software Defined Data Center with Greg Schulz and Fujitsu International

June 14: Round table with Greg Schulz, John Williams (General Manager of Reduxio) and Gert Brouwer. Discussion about new technologies with Reduxio as an example.

June 10: Hyper-converged, converged and related subjects presented by Greg Schulz

    Simplify and Streamline Your Virtual Infrastructure – May 17 webinar

    Is Hyper-Converged Infrastructure Right for Your Business? May 11 webinar

    EMCworld (Las Vegas) May 2-4, 2016

    Interop (Las Vegas) May 4-6 2016

    Making the Cloud Work for You: Rapid Recovery April 27, 2016 webinar

    See more webinars and other activities on the Server StorageIO Events page here.

     

    Server StorageIO Industry Resources and Links

    Check out these useful links and pages:

    storageio.com/links – Various industry links (over 1,000 with more to be added soon)
    objectstoragecenter.com – Cloud and object storage topics, tips and news items
    storageioblog.com/data-protection-diaries-main/ – Various data protection items and topics
    thenvmeplace.com – Focus on NVMe trends and technologies
    thessdplace.com – NVM and Solid State Disk topics, tips and techniques
storageio.com/performance – Various server, storage and I/O performance and benchmarking

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    2016 Going Dutch Cloud Virtualization Server Storage I/O Seminars


    In June 2016 Brouwer Storage Consultancy is organizing their yearly spring seminar workshops in Nijkerk Holland (south of Amsterdam, near Utrecht and Amersfoort) with myself among others presenting.


    Cloud Virtualization Server Storage I/O Seminars

    For this series of seminar workshops, there are four sessions, two being presented by myself, and two others in conjunction with Reduxio as well as Fujitsu & SJ Solutions.


    Agenda, How To Register and Where To Learn More

The vendor sponsored sessions will consist of about 50% independent content presented by myself and Gert Brouwer, with the balance presented by the event sponsors and their partners. All presentations and associated content including handouts will be in English.

There will be four seminar workshop sessions: two are paid sessions dedicated to Greg Schulz, while the other two are free (sponsored) sessions where 50% of the content comes from the sponsors (Reduxio, Fujitsu & SJ Solutions) and the other 50% is independent (Greg Schulz & Gert Brouwer).

    Thursday June 9th – Server StorageIO Trends and Updates

Server Storage I/O Fundamental Trends V2.016 and Updates: what's new, what's the buzz, and what you need to know about, from speeds and feeds, slots and watts, to who's doing what. Event Location: Golden Tulip Ampt van Nijkerk Hotel, Berencamperweg 4, 3861MC, Nijkerk. Learn more here (PDF abstract and topics to be covered).

Friday June 10th – Converged Day

    Converged Day – Moving beyond Hyper-Converged Hype and Server Storage I/O Decision Making Strategies. Event Location: Golden Tulip Ampt van Nijkerk Hotel, Berencamperweg 4, 3861MC, Nijkerk. Learn more here (PDF abstract and topics to be covered).


    Tuesday June 14th – Round Table Vendor Session with Reduxio

Symposium Workshop – Round Table Vendor Session with Reduxio – Are some solutions really "a paradigm shift" or "new and revolutionary" as they claim to be, or is it just more of the same (e.g. evolutionary)? Presentations and discussions led by Greg Schulz (StorageIO), Reduxio and Brouwer Storage Consultancy. (Free, sponsored session; access for end-users only.) Event Location: Hotel & Gasterij De Roode Schuur, Oude Barneveldseweg 98, 3862PS Nijkerk. Learn more here (PDF abstract and topics to be covered).

    Wednesday June 15th – Software Defined Data Center Symposium Workshop

Software Defined Data Center Symposium Workshop – Round Table Vendor Session with Fujitsu & SJ Solutions
With subjects like OpenStack, Ceph, distributed object storage, big data, Hyper-Converged Infrastructure (HCI), Converged Infrastructure (CI), software defined storage (SDS) and networking (SDN and NFV), this round table format workshop seminar explores these and other related topics, including what to use when, where, why and how. Presentations by Greg Schulz (StorageIO), SJ Solutions & Fujitsu and Brouwer Storage Consultancy. Event Location: Hotel & Gasterij De Roode Schuur, Oude Barneveldseweg 98, 3862PS Nijkerk. Learn more here (PDF abstract and topics to be covered).

    For more information, abstracts/agenda, registration and the specific locations for all the above events click here.


    What This All Means

There are a lot of things occurring in the IT industry, from physical to software defined clouds, containers and virtualization, to nonvolatile memory (NVM) including flash SSD among others. This series of interactive educational workshop seminars converges on Nijkerk Holland, combining content and discussions spanning strategy, planning and decision making, to what's new (and old) that can be used in new ways, as well as trends, speeds and feeds, along with practicality for your environment.


I look forward to seeing you in Nijkerk and Europe during June 2016. In the meantime, contact Brouwer Storage Consultancy for more information on the above sessions as well as to arrange private discussions or meetings.

    Ok, nuff said, for now…

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    Participate in Top vBlog 2016 Voting Now


It's that time of the year when Eric Siebert (@ericsiebert) hosts his annual top virtual blog (vBlog) voting via his great vsphere-land site (check it out if you are not familiar with it). The voting is now open until May 27th, after which the results will be tabulated and announced.

    While the focus is virtualization, rest assured there are other categories including scripting, storage, independent, new, video and podcast among others. For example my blog is listed under StorageIO (Greg Schulz) and included in storage and independent among some other categories.

Granted, it is an election year here in the US, and hopefully those participating in the top vBlog 2016 voting process are doing so based on content vs. simply popularity or what their virtual Popularity Action Committees (vPAC) tell them to do, that is, if vPACs actually exist or if they are simply vUrban Myths ;). In other words, I'm not going to tell you who to vote for, or who I voted for, other than that it is based on how useful I found those sites and their content contributions.

    Who Is Eligible To Vote

Anybody can vote, granted you can only vote once. Of course you can get your friends, family, co-workers, sales and marketing department, community or club, customers and clients to vote; basically anything with an IP address and email address, in theory including IoT and IoD, could vote. However, that would be like buying Twitter followers, Facebook likes, or click-for-view or pay-for-view results to game the system; if that is your game, so be it.

    How Did People Get On The List (Ballot)

Eric puts out a call (tweets, posts here, here and here) that gets amplified, asking people to submit new blogs to be included, as well as to self-nominate their site and the categories it belongs in. If people do not take the initiative to get on the list, they don't get included. If the list is important enough to be included on, then it should be important enough to know or remember to self-nominate.

I know this from experience: a few years ago I forgot to nominate my blog in the storage and independent categories and thus was not included in the voting for those categories. However, since I had previously notified Eric to include my blog, it was in the general category and thus included. Note to bloggers: if it is important for you to be included, then notify Eric that you should be added to his lists, and take the time to nominate yourself to be included in the future. Simply help others help you.

    What Is The Voting Criteria

For this year's top vBlog voting, Eric has culled the list to those who, besides self-nominating in different categories, also had at least 50 posts in the past year.

In addition, Eric suggests focusing on the content, creativity and contribution (longevity, length, frequency, quality) vs. simply treating it as a popularity contest or being driven by virtual Popularity Action Committees (e.g. vPAC).

Following is my paraphrase:

• Longevity – How long has the blog existed and continued to be maintained, vs. one started a long time ago that has not been updated in months or years.
• Length – Are there lots of very short, basically expanded micro Twitter posts, recopied press releases or curation of other news, or real content and analysis that requires some thought along with creativity? These could be short, long or a series of short to medium size posts.
• Frequency – How often do posts appear: daily, weekly, monthly, yearly? There's a balance between frequency, length and content, along with the time and effort to create something.
• Quality – Some content can be rehashed with more perspectives, inputs, hints and tips along with analysis, insight or experiences of existing or new items. The key is the value added to the topic, theme or conversation vs. simply reposting or amplifying what's already out there. In other words, is there new or unique content, perspective, thoughtful analysis, insight or experience, or does it simply repeat and amplify that of others?

    Call To Action, Get Out and Vote

    Simple, get out and vote and thanks in advance by using this link to Eric’s site.

    Where To Learn More

    • Voting now open for Top vBlog 2016
    • Link to actual voting page

    What This All Means

Support and say thanks, give an "atta boy" or "atta girl" to those who take the time to create content to share with you on various virtualization related topics, from servers, storage, I/O networking, scripting, tools, techniques, clouds, containers and more, via blogs, podcasts and webinars. This includes both the independents like myself and others, as well as the vendors, press and media who provide the content you consume.

So take a few moments to jump on over to Eric's site and cast your vote; if you have found my content to be useful, I humbly appreciate your vote and say thank you for your support, as well as for supporting others.

    Ok, nuff said and thank you for supporting StorageIOblog.

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    EMCworld 2016 EMC Hybrid and Converged Clouds Your Way


    This is a quick post looking at a high-level view of today’s EMCworld 2016 announcements.

Following up from yesterday's post covering that set of announcements, today's theme is around hybrid, converged and clouds your way. In addition to the morning announcements, EMC yesterday afternoon also announced InfoArchive 4.0 and EMC LEAP cloud native content applications for Enterprise Content Management (ECM). However, let's focus on today's announcements, which center on modernizing, transforming and automating your data center.

    Today’s announcements include:

• Cloud solution portfolio enhancements with Native Hybrid Cloud (NHC) turnkey developer platform for cloud native application development. NHC editions include those for VMware vSphere, OpenStack and VMware Photon Platform. Read more here.

    • VCE VxRack System 1000 with new Neutrino Nodes which are software defined hyper-converged rack scale solutions to support turnkey cloud (public, private, hybrid) implementations. Read more about VxRack System 1000 with links here.

    • NVMe based DSSD D5 flash SSD system enhancements include ability to stripe two systems together in a single rack to double the IOPs, bandwidth and capacity. Also new is a VCE VxRack system with DSSD. Read more about DSSD D5 enhancements here.

    Some Hardware That Gets Software Defined

Rear view of EMC Neutrino node

    Where To Learn More

    • Session Streaming For video of keynotes, general sessions, backstage sessions, and EMC TV coverage, click here
    • Social: Follow @EMCWorld,  @EMCCorp, @EMC_News and @EMCStorage, and join conversations with  #EMCWORLD, and like EMC on Facebook
    • Photos: Access event photos via  Flickr and EMC Pulse Blog or visit the special EMC World News microsite here
    • Reflections: Read Core Technologies President, Guy Churchward’s Reflections post on today’s announcements here
    • Visit the EMC Store, the EMC Community Network Site and The Core Blog

    What This All Means

For those of you who have installed OpenStack, either from scratch or using one of the appliances, you understand what's involved with doing so. The point is that for those whose business or jobs are based on installing, configuring or software defining the software and cloud configurations, turnkey solutions may not be a fit, at least not yet. On the other hand, if your focus is on doing other things and you are looking to boost productivity, then turnkey solutions are a way of fast-tracking deployment. Likewise, for those who need more speed in terms of bandwidth or IOPS, the DSSD D5 enhancements will help in those environments.

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    EMCworld 2016 Getting Started on Dell EMC announcements


It's the first morning of EMCworld 2016 here in Las Vegas, with some items already announced today and more in the wings. One of the underlying themes and discussions, besides what's new or who's doing what, is that this is for all practical purposes the last EMCworld before the upcoming Dell acquisition. What's not clear is whether there will be a renamed and repackaged Dell/EMCworld.

With current EMC President Jeremy Burton, who used to be the Chief Marketing Officer (CMO) at EMC, slated to become the CMO across all of Dell, my bet is that there will be some type of new event picking up where EMCworld and Dellworld have been and moving to a new level. More on the future of EMC and Dell in future posts; for now, let's see what has unfolded so far today.

Today's EMCworld theme is modernize the data center, which means a mix of hardware, software and services announcements spanning physical, virtual and cloud among others (e.g. how do you want your servers, storage and data infrastructure wrapped). While the themes are still EMC, as the Dell acquisition has yet to be completed, there is a Dell presence, including Michael Dell here in person (more on Dell later).

    The first wave of announcements include:

    • Unity All Flash Array (AFA) for small, entry-level environments
    • EMC Enterprise Copy Data Management software tools portfolio
    • ViPR Version 3.0 Controller
    • Virtustream global hyper-scale Storage Cloud for data protection and cloud native object
    • MyService360

• Data Domain virtual edition and long-term archive

    What About The Dell Deal

Michael Dell, who is here at EMCworld, announced on the main stage that Dell Technologies will be the name of the family of businesses.

This family of businesses includes Dell, EMC, VMware, Pivotal, SecureWorks, RSA and Virtustream. The Dell client focused business will be called Dell, leveraging that brand, while the new joint Dell and EMC enterprise business will be called Dell EMC, leveraging both of those brands. As a reminder, the Dell server business unit will be moving into the existing EMC business as part of the enterprise business unit.

Let's move on to the technology announcements from today.

    Unity AFA (and Hybrid)

The new Unity all flash array (AFA) is a dual controller storage system optimized for Non-Volatile Memory (NVM) flash SSD, with unified (block and file) access. EMC is positioning Unity as an entry-level AFA starting around $18K USD for a 2U solution (how much capacity that includes is not yet known; more on that in a future post). As well as having a low entry cost, EMC is positioning Unity for broad, mass market, volume distribution that can be leveraged by their partners, including Dell. More on Unity in future posts. While Unity is new and modern, it comes from the same group who created the VNXe, leveraging that knowledge and skills base.

Note that Unity is positioned for small, mid-sized, remote office branch office (ROBO), departmental and specialized AFA situations, whereas the EMC NVMe based DSSD D5 is positioned for higher-end shared direct attached server flash, while XtremIO and VMAX are positioned for higher-end, higher performance and workload consolidation scenarios.

    • Simple, flexible, easy to use in a 2U packaging that scale up to 80TB of NVM flash SSD storage
    • Scalable up to 3PB of storage for larger expanded configurations
    • Affordable ($18K USD starting price, $10K entry-level hybrid)
    • Modern AFA storage for entry, small, mid-sized, workgroup, departments and specialized environments
    • Unified file, block, and VMware VVOL support for storage access
    • Also available in hybrid, as well as software defined virtual and converged configurations
    • Higher performance (EMC indicates 300,000 IOPs) for given entry-level systems
    • Available in all-flash array, hybrid array, software-defined and converged configurations
    • Native controller based encryption with synchronous and asynchronous replication
    • VMware VASA 2.0, VAAI, VVols and VMware integration
    • Tight integration with EMC Data Protection portfolio tools

    Read more about Unity here.

    Copy Data Management

    Enterprise Copy Data Management (eCDM) spans data copies from data protection including backup, BC, DR as well as for operational, analytics, test, dev, devops among other uses. Another term is Enterprise Copy Data Analytics (eCDA) which includes monitoring and management along with insight, awareness and of course analytics. These new offerings and initiatives tie together various capabilities across storage platforms and software defined storage management. Watch for more activity in and around eCDM and general copy data management. Read more here.

    ViPR Controller 3.0

ViPR Controller enhancements build on previous announcements and include automation as well as failover with native replication to a standby ViPR Controller. Note that there can actually be two standby controllers that are synchronized asynchronously with software built into ViPR. This means there is no need for RecoverPoint or other products to replicate the ViPR controllers. To be clear, this is for high availability of the ViPR controllers themselves and not a replacement for HA or replication of upper layer applications, storage servers or underlying storage services. Also note that ViPR is available via open source (CoprHD via GitHub here). Read more here.

    MyService360

    MyService360 is a cloud based dashboard and data infrastructure monitoring management platform. Read more here.

    Virtustream Storage Cloud

Virtustream cloud services and software tools complement EMC (and other) storage systems as a back-end for cool, cold or other bulk data storage needs. The focus is to sell primary storage to customers, then leverage back-end public cloud services for backup, archive, copy data management and other applications. This also means that the Virtustream Storage Cloud is not just for data protection such as archiving, backup, BC and DR; it's also for other big fast data, including cloud and object native applications. Does this mean Virtustream is an alternative to other cloud and object storage services such as AWS S3 and Google GCS among others? Yup. Read more here.

    Where To Learn More

    • Session Streaming For video of keynotes, general sessions, backstage sessions, and EMC TV coverage, click here
    • Social: Follow @EMCWorld,  @EMCCorp, @EMC_News and @EMCStorage, and join conversations with  #EMCWORLD, and like EMC on Facebook
    • Photos: Access event photos via  Flickr and EMC Pulse Blog or visit the special EMC World News microsite here
    • Reflections: Read Core Technologies President, Guy Churchward’s Reflections post on today’s announcements here
    • Visit the EMC Store, the EMC Community Network Site and The Core Blog

    What This All Means

With the announcement of Unity and the impending Dell deal, some of you might (or should) have a déjà vu moment from over a decade or so ago when Dell and EMC entered into an OEM agreement around the then CLARiiON midrange storage arrays (e.g. predecessors of VNX and VNXe). Unity is being designed as a high performance, easy to use, flexible, scalable, cost-effective storage solution for a broad, high-volume sales and distribution channel market.

What does Unity mean for EMC VNX and VNXe as well as XtremIO? Unity will be positioned near where the VNXe has been, along with some of the competing solutions from Dell among others. There might be some overlap with other EMC solutions, however if executed properly, Unity should open up some new markets, perhaps at the expense of some of the newer popular startups that only offer AFA vs. hybrids. Likewise I would expect Unity to appear in future converged solutions such as those via the EMC converged business unit (e.g. VCE).

    Even with the upcoming Dell acquisition and integration, EMC continues to evolve and innovate in many areas.

Watch for more announcements later today and throughout the week.

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    Ubuntu 16.04 LTS (aka Xenial Xerus) What’s In The Bits and Bytes?


    Ubuntu 16.04 LTS (aka Xenial Xerus) was recently released (you can get the bits or software download here). Ubuntu is available in various distributions including as a server, workstation or desktop among others that can run bare metal on a physical machine (PM), virtual machine (VM) or as a cloud instance via services such as Amazon Web Services (AWS) as well as Microsoft Azure among others.

    Refresh, What is Ubuntu

For those not familiar or who need a refresher, Ubuntu is an open source Linux distribution from the company Canonical. Ubuntu is a Debian based Linux distribution with the Unity user interface. Ubuntu is available across different platform architectures, from industry standard Intel and AMD x86 32-bit and 64-bit to ARM processors and even the venerable IBM zSeries (aka zed) mainframe as part of LinuxONE.

    As a desktop, some see or use Ubuntu as an open source alternative to desktop interfaces based on those from Microsoft such as Windows or Apple.

    As a server Ubuntu can be deployed from traditional applications to cloud, converged and many others including as a docker container, Ceph or OpenStack deployment platform. Speaking of Microsoft and Windows, if you are a *nix bash type person yet need (or have) to work with Windows, bash (and more) are coming to Windows 10. Ubuntu desktop GUI or User Interface options include Unity along with tools such as Compiz and LibreOffice (an alternative to Microsoft Office).

    What’s New In the Bits and Bytes (e.g. Software)

Ubuntu 16.04 LTS is based on the Linux 4.4 kernel and also includes Python 3, Ceph Jewel (block, file and object storage) and OpenStack Mitaka among other enhancements. These and other fixes as well as enhancements include (a quick way to check what actually got installed is sketched after this list):

    • Libvirt 1.3.1
    • Qemu 2.5
    • Open vSwitch 2.5.0
• LXD 2.0 and Nginx
• Docker 1.10
• PHP 7.0
• MySQL 5.7
• Juju 2.0
• Golang 1.6 toolchain
• OpenSSH 7.2p2 with cipher improvements; legacy options such as the 1024-bit diffie-hellman-group1-sha1 key exchange, ssh-dss and ssh-dss-cert keys are disabled by default
    • GNU toolchain
    • Apt 1.2
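
Since version numbers like these tend to drift with updates, it can be handy to confirm what actually landed on a given system. The following is a minimal sketch, assuming Python 3 and the standard Debian/Ubuntu packaging tools; the package names in the list are illustrative assumptions rather than an authoritative mapping of the items above.

```python
# Hypothetical version check: ask dpkg for the installed version of a few packages.
# Package names are assumptions and may differ in your environment.
import subprocess

PACKAGES = ["openssh-server", "apt", "lxd", "docker.io", "mysql-server", "php7.0"]

def installed_version(package):
    """Return the installed version string for a package, or None if absent."""
    try:
        out = subprocess.check_output(
            ["dpkg-query", "-W", "-f=${Version}", package],
            stderr=subprocess.DEVNULL, universal_newlines=True)
        return out.strip() or None
    except subprocess.CalledProcessError:
        return None

if __name__ == "__main__":
    for pkg in PACKAGES:
        print("{:20s} {}".format(pkg, installed_version(pkg) or "not installed"))
```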

    What About Ubuntu for IBM zSeries Mainframe

Ubuntu runs on the 64-bit zSeries architecture with about 95% binary compatibility. If you look at the release notes, there are still a few things being worked out among the known issues. However (read the release notes), Ubuntu 16.04 LTS has OpenStack and Ceph, which means those capabilities could be deployed on a zSeries.

    Now some of you might think wait, how can Linux and Ceph among others work on a FICON based mainframe?

No worries. Keep in mind that FICON, the IBM zSeries server storage I/O protocol, co-exists on Fibre Channel along with SCSI_FCP (e.g. FCP), aka what most Open Systems people simply refer to as Fibre Channel (FC), and works with z/OS and other operating systems. In the case of native Linux on zSeries, those systems can in fact use SCSI mode for accessing shared storage. In addition to the IBM LinuxONE site, you can learn more about Ubuntu running native on zSeries here on the Ubuntu site.

    Where To Learn More

    What This All Means

Ubuntu as a Linux distribution continues to evolve and increase in deployment across different environments. Some still view Ubuntu as the low-end Linux for home, hobbyists or those looking for an alternative desktop to Microsoft Windows among others. However, Ubuntu is also increasingly being used in roles where other Linux distributions such as Red Hat Enterprise Linux (RHEL), SUSE and CentOS have gained prior popularity.

In some ways you can view RHEL as the first-generation Linux distribution that gained popularity in the enterprise with early adopters, followed by a second wave or generation of those who favored CentOS among others, such as the cloud crowd. Then there is the Ubuntu wave, which is expanding in many areas along with others such as CoreOS. Granted, with some people the preference for one Linux distribution vs. another can be as polarizing as Linux vs. Windows, or Open Systems vs. Mainframe vs. Cloud among others.

    Having various Ubuntu distributions installed across different servers (in addition to Centos, Suse and others), I found the install and new capabilities of Ubuntu 16.04 LTS interesting and continue to explore the many new features, while upgrading some of my older systems.

Get the Ubuntu 16.04 LTS bits here to give it a try or upgrade your existing systems.

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

Which Enterprise HDD to use for a Content Server Platform

    Updated 1/23/2018

    Which enterprise HDD to use with a content server platform?

    Insight for effective server storage I/O decision making
    Server StorageIO Lab Review


This post is the first in a multi-part series based on a white paper hands-on lab report I did compliments of Equus Computer Systems and Seagate that you can read in PDF form here. The focus is the Equus Computer Systems (www.equuscs.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drives (HDD's). I was given the opportunity to do some hands-on testing, running different application workloads on a 2U content solution platform with various Seagate Enterprise 2.5” HDD's to see how they handle different application workloads. This includes Seagate's Enterprise Performance HDD's with the enhanced caching feature.

    Issues And Challenges

Even though Non-Volatile Memory (NVM) including NAND flash solid state devices (SSDs) have become popular storage for use internal as well as external to servers, there remains a need for HDD's. Like many of you who need to make informed server, storage and I/O hardware, software and configuration selection decisions, time is often in short supply.

    A common industry trend is to use SSD and HDD based storage mediums together in hybrid configurations. Another industry trend is that HDD’s continue to be enhanced with larger space capacity in the same or smaller footprint, as well as with performance improvements. Thus, a common challenge is what type of HDD to use for various content and application workloads balancing performance, availability, capacity and economics.

    Content Applications and Servers

    Fast Content Needs Fast Solutions

An industry and customer trend is that information and data are getting larger, living longer, and there is more of it. This ties to the fundamental theme that applications and their underlying hardware platforms exist to process, move, protect, preserve and serve information.

Content solutions span from video (4K, HD, SD and legacy streaming video, pre-/post-production, and editing), audio and imaging (photo, seismic, energy, healthcare, etc.) to security surveillance (including Intelligent Video Surveillance [IVS] as well as Intelligence Surveillance and Reconnaissance [ISR]). In addition to big fast data, other content solution applications include content distribution network (CDN) and caching, network function virtualization (NFV) and software-defined network (SDN), to cloud and other rich unstructured big fast media data and analytics, along with little data (e.g. SQL and NoSQL databases, key-value stores, repositories and meta-data) among others.

    Content Solutions And HDD Opportunities

    A common theme with content solutions is that they get defined with some amount of hardware (compute, memory and storage, I/O networking connectivity) as well as some type of content software. Fast content applications need fast software, multi-core processors (compute), large memory (DRAM, NAND flash, SSD and HDD’s) along with fast server storage I/O network connectivity. Content-based applications benefit from having frequently accessed data as close as possible to the application (e.g. locality of reference).

    Content solution and application servers need flexibility regarding compute options (number of sockets, cores, threads), main memory (DRAM DIMMs), PCIe expansion slots, storage slots and other connectivity. An industry trend is leveraging platforms with multi-socket processors, dozens of cores and threads (e.g. logical processors) to support parallel or high-concurrent content applications. These servers have large amounts of local storage space capacity (NAND flash SSD and HDD) and associated I/O performance (PCIe, NVMe, 40 GbE, 10 GbE, 12 Gbps SAS etc.) in addition to using external shared storage (local and cloud).

    Where To Learn More

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


    What This All Means

Fast content applications need fast content and flexible content solution platforms such as those from Equus Computer Systems with HDD's from Seagate. Key to a successful content application deployment is having the flexibility to hardware define and software define the platform to meet your needs. Just as there are many different types of content applications along with diverse environments, content solution platforms need to be flexible, scalable and robust, not to mention cost effective.

    Continue reading part two of this multi-part series here where we look at how and what to test as well as project planning.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Part 2 – Which HDD for Content Applications – HDD Testing


    Updated 1/23/2018


    Insight for effective server storage I/O decision making
    Server StorageIO Lab Review


    This is the second in a multi-part series (read part one here) based on a white paper hands-on lab report I did compliments of Servers Direct and Seagate that you can read in PDF form here. The focus is looking at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drive (HDD’s). In this post we look at some decisions and configuration choices to make for testing content applications servers as well as project planning.

    Content Solution Test Objectives

In a short period of time, collect performance and other server and storage I/O decision-making information on various HDD's running different content workloads.

Working with the Servers Direct staff, a suitable content solution platform test configuration was created. In addition to providing two Intel-based content servers, Servers Direct worked with their partner Seagate to arrange for various enterprise-class HDD's to be evaluated. For this series of content application tests, being short on time, I chose to run some simple workloads including database, basic file (large and small) processing and general performance characterization.

    Content Solution Decision Making

    Knowing how Non-Volatile Memory (NVM) NAND flash SSD (1) devices (drives and PCIe cards) perform, what would be the best HDD based storage option for my given set of applications? Different applications have various performance, capacity and budget considerations. Different types of Seagate Enterprise class 2.5” Small Form Factor (SFF) HDD’s were tested.

While revolutions per minute (RPM) still plays a role in HDD performance, there are other factors including internal processing capabilities, software or firmware algorithm optimization, and caching. Most HDD's today have some amount of DRAM for read caching and other operations. Seagate Enterprise Performance HDD's with the enhanced caching feature (2) are examples of devices that accelerate storage I/O speed vs. traditional 10K and 15K RPM drives.
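
To put the RPM versus caching point in perspective, here is a rough back-of-the-envelope worked example of the classic seek plus rotational latency arithmetic. The seek times used are generic ballpark figures, not measured values for the specific Seagate drives tested in this lab.

```python
# Rough estimate of small random I/O capability from mechanical characteristics.
# Average rotational latency is half a revolution; service time is seek plus
# rotational latency; IOPS is the inverse of the service time.

def nominal_iops(rpm, avg_seek_ms):
    avg_rotational_ms = 0.5 * (60000.0 / rpm)   # half a revolution, in ms
    service_time_ms = avg_seek_ms + avg_rotational_ms
    return 1000.0 / service_time_ms

for rpm, seek_ms in [(7200, 8.5), (10000, 4.6), (15000, 3.5)]:
    print("{:>6} RPM, {:>4} ms seek -> ~{:.0f} IOPS".format(
        rpm, seek_ms, nominal_iops(rpm, seek_ms)))
```

The mechanics alone land in the rough range of 75 to 200 IOPS per spindle, which is why firmware, DRAM buffers and features such as the enhanced cache can make a bigger difference than the RPM label alone suggests.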

    Project Planning And Preparation

Workloads to be tested included:

    • Database read/writes
    • Large file processing
    • Small file processing
    • General I/O profile

    Project testing consisted of five phases, some of which overlapped with others:

    Phase 1 – Plan
    Identify candidate workloads that could be run in the given amount of time, determine time schedules and resource availability, create a project plan.

    Phase 2 – Define
    Hardware define and software define the test platform.

    Phase 3 – Setup
The objective was to assess plug-and-play capability of the server, storage and I/O networking hardware with a Linux OS before moving on to the reported workloads in the next phase. This included initial setup and configuration of hardware and software, installation of additional devices along with software configuration, troubleshooting, and learning as applicable. This phase consisted of using Ubuntu Linux 14.04 server as the operating system (OS) along with MySQL 5.6 as a database server during initial hands-on experience.

    Phase 4 – Execute
    This consisted of using Windows 2012 R2 server as the OS along with Microsoft SQL Server on the system under test (SUT) to support various workloads. Results of this phase are reported below.

Phase 5 – Analyze
Results from the workloads run in phase 4 were analyzed and summarized into this document.

    (Note 1) Refer to Seagate 1200 12 Gbps Enterprise SAS SSD StorageIO lab review

    (Note 2) Refer to Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy

    Planning And Preparing The Tests

    As with most any project there were constraints to contend with and work around.

    Test constraints included:

    • Short-time window
    • Hardware availability
    • Amount of hardware
    • Software availability

    Three most important constraints and considerations for this project were:

• Time – This was a project with a very short time “runway”, something common in most customer environments where people are looking to make knowledgeable server and storage I/O decisions.
• Amount of hardware – Limited amount of DRAM main memory and sixteen 2.5” internal hot-swap storage slots for HDD's as well as SSDs. Note that for a production content solution platform, additional DRAM can easily be added, along with extra external storage enclosures to scale memory and storage capacity to fit your needs.
    • Software availability – Utilize common software and management tools publicly available so anybody could leverage those in their own environment and tests.

    The following content application workloads were profiled:

    • Database reads/writes – Updates, inserts, read queries for a content environment
    • Large file processing – Streaming of large video, images or other content objects.
    • Small file processing – Processing of many small files found in some content applications
    • General I/O profile – IOP, bandwidth and response time relevant to content applications

    Where To Learn More

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    There are many different types of content applications ranging from little data databases to big data analytics as well as very big fast data such as for video. Likewise there are various workloads and characteristics to test. The best test and metrics are those that apply to your environment and application needs.

    Continue reading part three of this multi-part series here looking at how the systems and HDD’s were configured and tested.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Part 3 – Which HDD for Content Applications – HDD Test Configuration

    Updated 1/23/2018


    Insight for effective server storage I/O decision making
    Server StorageIO Lab Review


    This is the third in a multi-part series (read part two here) based on a white paper hands-on lab report I did compliments of Servers Direct and Seagate that you can read in PDF form here. The focus is looking at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drive (HDD’s). In this post the focus expands to hardware and software defining as well as configuring the test environments along with applications workloads.

    Defining Hardware Software Environment

Servers Direct content platforms are software defined and hardware defined to your specific solution needs. For my test-drive, I used a pair of 2U Content Solution platforms, one as a client System Test Initiator (STI) (3), the other as the server SUT, shown in figure-1. With the STI configured and the SUT set up, Seagate Enterprise class 2.5” 12Gbps SAS HDD's were added to the configuration.

(Note 3) The System Test Initiator (STI) was hardware defined with dual Intel Xeon E5-2695 v3 (2.30 GHz) processors and 32GB RAM, running Windows Server 2012 R2 with two network connections to the SUT. Network connections from the STI to SUT included an Intel GbE X540-AT2 as well as an Intel XL710 Q2 40 GbE Converged Network Adapter (CNA). In addition to software defining the STI with Windows Server 2012 R2, Dell Benchmark Factory (V7.1 64-bit, build 496), part of the Database Administrators (DBA) Toad Tools (including free versions), was also used. For those familiar with HammerDB, Sysbench and others, Benchmark Factory is an alternative that supports various workloads and database connections with robust reporting, scripting and automation. Other installed tools included Spotlight on Windows, Iperf 2.0.5 for generating network traffic and reporting results, as well as Vdbench with various scripts.
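
For those who want to approximate the kind of scripted Vdbench runs mentioned above, the following is a minimal sketch of driving Vdbench from Python. The device path, I/O mix (8KB transfers, 70% reads, random seeks) and run times are assumptions for illustration, not the actual scripts used in this lab, and raw-device tests like this are destructive to whatever is on the target.

```python
# Hypothetical wrapper: write a small Vdbench parameter file and launch a run
# against a device under test. Adjust the placeholders for your environment;
# writing to a raw device will destroy its contents.
import subprocess
from pathlib import Path

DEVICE = "/dev/sdX"        # placeholder device under test (assumption)
VDBENCH = "./vdbench"      # path to the vdbench launcher script
OUTPUT_DIR = "vdbench_out" # vdbench writes its result files here

PARAM_FILE = (
    "sd=sd1,lun={device},openflags=o_direct\n"
    "wd=wd1,sd=sd1,xfersize=8k,rdpct=70,seekpct=100\n"
    "rd=rd1,wd=wd1,iorate=max,elapsed=600,interval=30\n"
).format(device=DEVICE)

def run_vdbench():
    """Write the parameter file and run vdbench, returning its exit code."""
    Path("basic_8k_70r.vdb").write_text(PARAM_FILE)
    return subprocess.call([VDBENCH, "-f", "basic_8k_70r.vdb", "-o", OUTPUT_DIR])

if __name__ == "__main__":
    raise SystemExit(run_vdbench())
```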

The SUT setup (4) included four Enterprise 10K and two 15K Performance drives with the enhanced performance caching feature enabled, along with two Enterprise Capacity 2TB HDD's, all attached to an internal 12Gbps SAS RAID controller.

(Note 4) The System Under Test (SUT) had dual Intel Xeon E5-2697 v3 (2.60 GHz) processors providing 56 logical processors, 64GB of RAM (expandable to 768GB with 32GB DIMMs, or 3TB with 128GB DIMMs) and two network connections. Network connections from the STI to SUT consisted of an Intel 1 GbE X540-AT2 as well as an Intel XL710 Q2 40 GbE CNA. The GbE LAN connection was used for management purposes while the 40 GbE was used for data traffic. The system disk was a 6Gbps SATA flash SSD. Seagate Enterprise class HDD's were installed into the 16 available 2.5” small form factor (SFF) drive slots. The eight left-most drive slots were connected to an Intel RMS3CC080 12 Gbps SAS RAID internal controller. The “Blue” drives in the middle were connected to both an NVMe PCIe card and the motherboard 6 Gbps SATA controller using an SFF-8639 connector. The four right-most drives were also connected to the motherboard 6 Gbps SATA controller.

Figure-1 STI and SUT hardware as well as software defined test configuration

Five 6 Gbps SATA Enterprise Capacity 2TB HDD's were set up using Microsoft Windows as a spanned volume. The system disk was a 6Gbps flash SSD and an NVMe flash SSD was used for database temp space.

    What About NVM Flash SSD?

NAND flash and other Non-Volatile Memory (NVM) and SSD complement content solutions. A little bit of flash SSD in the right place can have a big impact. The focus for these tests is HDD's, however some flash SSDs were used as system boot and database temp (e.g. tempdb) space. Refer to StorageIO Lab reviews and visit www.thessdplace.com

    Seagate Enterprise HDD’s Used During Testing

Various Seagate Enterprise HDD specifications used in the testing are shown below in table-1.

     

| Qty | Seagate HDD's | Capacity | RPM | Interface | Size | Model | Servers Direct Price Each | Configuration |
|-----|---------------|----------|-----|-----------|------|-------|---------------------------|---------------|
| 4 | Enterprise 10K Performance | 1.8TB | 10K with cache | 12 Gbps SAS | 2.5” | ST1800MM0128 with enhanced cache | $875.00 USD | HW(5) RAID 10 and RAID 1 |
| 2 | Enterprise Capacity 7.2K | 2TB | 7.2K | 12 Gbps SAS | 2.5” | ST2000NX0273 | $399.00 USD | HW RAID 1 |
| 2 | Enterprise 15K Performance | 600GB | 15K with cache | 12 Gbps SAS | 2.5” | ST600MX0082 with enhanced cache | $595.00 USD | HW RAID 1 |
| 5 | Enterprise Capacity 7.2K | 2TB | 7.2K | 6 Gbps SATA | 2.5” | ST2000NX0273 | $399.00 USD | SW(6) RAID Span Volume |

Table-1 Seagate Enterprise HDD specification and Servers Direct pricing

    URLs for additional Servers Direct content platform information:
    https://serversdirect.com/solutions/content-solutions
    https://serversdirect.com/solutions/content-solutions/video-streaming
    https://www.serversdirect.com/File%20Library/Data%20Sheets/Intel-SDR-2P16D-001-ds2.pdf

    URLs for additional Seagate Enterprise HDD information:
    https://serversdirect.com/Components/Drives/id-HD1558/Seagate_ST2000NX0273_2TB_Hard_Drive

    https://serversdirect.com/Components/Drives/id-HD1559/Seagate_ST600MX0082_SSHD

    Seagate Performance Enhanced Cache Feature

    The Enterprise 10K and 15K Performance HDD’s tested had the enhanced cache feature enabled. This feature provides a “turbo” boost like acceleration for both reads and write I/O operations. HDD’s with enhanced cache feature leverage the fact that some NVM such as flash in the right place can have a big impact on performance (7).

In addition to their performance benefit, combining a best-of or hybrid storage model (combining flash with HDD's along with software defined cache algorithms), these devices are “plug-and-play”. By being “plug-and-play”, no extra special adapters, controllers, device drivers, tiering or cache management software tools are required.

    (Note 5) Hardware (HW) RAID using Intel server on-board LSI based 12 Gbps SAS RAID card, RAID 1 with two (2) drives, RAID 10 with four (4) drives. RAID configured in write-through mode with default stripe / chunk size.

    (Note 6) Software (SW) RAID using Microsoft Windows Server 2012 R2 (span). Hardware RAID used write-through cache (e.g. no buffering) with read-ahead enabled and a default 256KB stripe/chunk size.

    (Note 7) Refer to Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy

    The Seagate Enterprise Performance 10K and 15K with enhanced cache feature are a good example of how there is more to performance in today’s HDD’s than simply comparing RPM’s, drive form factor or interface.

    Where To Learn More

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


    What This All Means

Careful and practical planning are key steps for testing various resources, as well as aligning the applicable tools and configuration to meet your needs.

    Continue reading part four of this multi-part series here where the focus expands to database application workloads.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Part 4 – Which HDD for Content Applications – Database Workloads


    Updated 1/23/2018

    Insight for effective server storage I/O decision making
    Server StorageIO Lab Review


    This is the fourth in a multi-part series (read part three here) based on a white paper hands-on lab report I did compliments of Servers Direct and Seagate that you can read in PDF form here. The focus is looking at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drive (HDD’s). In this post the focus expands to database application workloads that were run to test various HDD’s.

    Database Reads/Writes

Transaction Processing Performance Council (TPC) TPC-C like workloads were run against the SUT from the STI. These workloads simulated transactional, content management, meta-data and key-value processing. Microsoft SQL Server 2012 was configured and used, with databases (each 470GB, e.g. scale 6000) created and workload generated by virtual users via Dell Benchmark Factory (running on the STI under Windows 2012 R2).

A single SQL Server database instance (8) was used on the SUT, however unique databases were created for each HDD set being tested. Both the main database file (.mdf) and the log file (.ldf) were placed on the same drive set being tested, keeping in mind the constraints mentioned above. As time was a constraint, database workloads were run concurrently (9) with each other, except for the Enterprise 10K RAID 1 and RAID 10: one workload was run with two 10K HDD's in a RAID 1 configuration, then another workload was run with a four drive RAID 10. In a production environment, ideally the .mdf and .ldf would be placed on separate HDD's and SSDs.

    To improve cache buffering the SQL Server database instance memory could be increased from 16GB to a larger number that would yield higher TPS numbers. Keep in mind the objective was not to see how fast I could make the databases run, rather how the different drives handled the workload.

    (Note 8) The SQL Server Tempdb was placed on a separate NVMe flash SSD, also the database instance memory size was set to 16GB which was shared by all databases and virtual users accessing it.

    (Note 9) Each user step was run for 90 minutes with a 30 minute warm-up preamble to measure steady-state operation.
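
As a side note on how a steady-state number like the TPS values in Table-2 can be derived, the sketch below averages per-minute samples after discarding the warm-up window. This is purely illustrative; Benchmark Factory produces its own summarized results.

```python
# Minimal sketch of the steady-state idea in note 9: discard samples from the
# 30 minute warm-up preamble and average the remaining run time.

def steady_state_average(samples, warmup_minutes=30):
    """samples: list of (minute, tps) tuples collected over the whole run."""
    steady = [tps for minute, tps in samples if minute >= warmup_minutes]
    return sum(steady) / len(steady) if steady else 0.0

# Example: pretend per-minute TPS samples for a 120 minute run (warm-up + 90 min).
run = [(m, 250.0 if m < 30 else 390.0) for m in range(120)]
print("Steady-state TPS: {:.1f}".format(steady_state_average(run)))
```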

| Drive Config | Users | TPCC Like TPS | Single Drive Cost per TPS | Drive Cost per TPS | Single Drive Cost / Per GB Raw Cap. | Cost / Per GB Usable (Protected) Cap. | Drive Cost (Multiple Drives) | Protection Space Overhead | Cost per usable GB per TPS | Resp. Time (Sec.) |
|---|---|---|---|---|---|---|---|---|---|---|
| ENT 15K R1 | 1 | 23.9 | $24.94 | $49.89 | $0.99 | $0.99 | $1,190 | 100% | $49.89 | 0.01 |
| ENT 10K R1 | 1 | 23.4 | $37.38 | $74.77 | $0.49 | $0.49 | $1,750 | 100% | $74.77 | 0.01 |
| ENT CAP R1 | 1 | 16.4 | $24.26 | $48.52 | $0.20 | $0.20 | $798 | 100% | $48.52 | 0.03 |
| ENT 10K R10 | 1 | 23.2 | $37.70 | $150.78 | $0.49 | $0.97 | $3,500 | 100% | $150.78 | 0.07 |
| ENT CAP SWR5 | 1 | 17.0 | $23.45 | $117.24 | $0.20 | $0.25 | $1,995 | 20% | $117.24 | 0.02 |
| ENT 15K R1 | 20 | 362.3 | $1.64 | $3.28 | $0.99 | $0.99 | $1,190 | 100% | $3.28 | 0.02 |
| ENT 10K R1 | 20 | 339.3 | $2.58 | $5.16 | $0.49 | $0.49 | $1,750 | 100% | $5.16 | 0.01 |
| ENT CAP R1 | 20 | 213.4 | $1.87 | $3.74 | $0.20 | $0.20 | $798 | 100% | $3.74 | 0.06 |
| ENT 10K R10 | 20 | 389.0 | $2.25 | $9.00 | $0.49 | $0.97 | $3,500 | 100% | $9.00 | 0.02 |
| ENT CAP SWR5 | 20 | 216.8 | $1.84 | $9.20 | $0.20 | $0.25 | $1,995 | 20% | $9.20 | 0.06 |
| ENT 15K R1 | 50 | 417.3 | $1.43 | $2.85 | $0.99 | $0.99 | $1,190 | 100% | $2.85 | 0.08 |
| ENT 10K R1 | 50 | 385.8 | $2.27 | $4.54 | $0.49 | $0.49 | $1,750 | 100% | $4.54 | 0.09 |
| ENT CAP R1 | 50 | 103.5 | $3.85 | $7.71 | $0.20 | $0.20 | $798 | 100% | $7.71 | 0.45 |
| ENT 10K R10 | 50 | 778.3 | $1.12 | $4.50 | $0.49 | $0.97 | $3,500 | 100% | $4.50 | 0.03 |
| ENT CAP SWR5 | 50 | 109.3 | $3.65 | $18.26 | $0.20 | $0.25 | $1,995 | 20% | $18.26 | 0.42 |
| ENT 15K R1 | 100 | 190.7 | $3.12 | $6.24 | $0.99 | $0.99 | $1,190 | 100% | $6.24 | 0.49 |
| ENT 10K R1 | 100 | 175.9 | $4.98 | $9.95 | $0.49 | $0.49 | $1,750 | 100% | $9.95 | 0.53 |
| ENT CAP R1 | 100 | 59.1 | $6.76 | $13.51 | $0.20 | $0.20 | $798 | 100% | $13.51 | 1.66 |
| ENT 10K R10 | 100 | 560.6 | $1.56 | $6.24 | $0.49 | $0.97 | $3,500 | 100% | $6.24 | 0.14 |
| ENT CAP SWR5 | 100 | 62.2 | $6.42 | $32.10 | $0.20 | $0.25 | $1,995 | 20% | $32.10 | 1.57 |

Table-2 TPC-C workload results for various numbers of users across different drive configurations

Figure-2 shows TPC-C TPS (red dashed line) workload scaling across various numbers of users (1, 20, 50, and 100) with peak TPS per drive shown. Also shown is the used space capacity (in green), with total raw storage capacity in blue cross hatch. Looking at the multiple metrics in context shows that the 600GB Enterprise 15K HDD with performance enhanced cache is a premium option as an alternative to, or complement for, flash SSD solutions.

    Figure-2 472GB Database TPS scaling along with cost per TPS and storage space used

In figure-2, the 1.8TB Enterprise 10K HDD with performance enhanced cache, while not as fast as the 15K, provides a good balance of performance, space capacity and cost effectiveness. A good use for the 10K drives is where some amount of performance is needed as well as a large amount of storage space for less frequently accessed content.

A low cost, low performance option would be the 2TB Enterprise Capacity HDDs, which have a good cost per capacity however lack the performance of the 15K and 10K drives with enhanced performance cache. A four-drive RAID 10 along with a five-drive software volume (Microsoft Windows) are also shown. For an apples-to-apples comparison, look at costs vs. capacity including the number of drives needed for a given level of performance.
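To make the comparison concrete, the per-configuration cost metrics in Table-2 can be reproduced from a handful of inputs: drive price, number of drives, raw capacity per drive, usable fraction after RAID, and the measured TPS. The short Python sketch below is illustrative only (the helper is not a lab tool); the example uses the four-drive 10K RAID 10 row at 20 users, with the per-drive price derived from the $3,500 multi-drive cost in Table-2.

```python
# Illustrative arithmetic behind the Table-2 cost metrics (not a lab tool).
def drive_metrics(price_per_drive, drives, raw_gb_per_drive, usable_fraction, tps):
    total_cost = price_per_drive * drives
    usable_gb = raw_gb_per_drive * drives * usable_fraction   # RAID 1/10 keeps half of raw
    return {
        "single drive cost per TPS": round(price_per_drive / tps, 2),
        "drive cost per TPS": round(total_cost / tps, 2),
        "cost per raw GB": round(price_per_drive / raw_gb_per_drive, 2),
        "cost per usable GB": round(total_cost / usable_gb, 2),
        "protection overhead vs usable": f"{(drives * raw_gb_per_drive - usable_gb) / usable_gb:.0%}",
    }

# Four 1.8TB 10K HDDs in RAID 10 at 20 users: 389.0 TPS, $3,500 total (= $875 per drive).
print(drive_metrics(price_per_drive=875, drives=4, raw_gb_per_drive=1800,
                    usable_fraction=0.5, tps=389.0))
```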

    Figure-3 is a variation of figure-2 showing TPC-C TPS (blue bar) and response time (red-dashed line) scaling across 1, 20, 50 and 100 users. Once again the Enterprise 15K with enhanced performance cache feature enabled has good performance in an apples to apples RAID 1 comparison.

Note that the best performance was with the four-drive RAID 10 using 10K HDDs. Given its popularity, a four-drive RAID 10 configuration with the 10K drives was used, and not surprisingly the four 10K drives performed better than the RAID 1 15Ks. Also note that five drives in a software spanned volume provide a large amount of storage capacity and good performance, however with a larger drive footprint.

    Figure-3 472GB Database TPS scaling along with response time (latency)

From a cost per space capacity perspective, the Enterprise Capacity drives have a good cost per GB. A hybrid solution for environments that do not need ultra-high performance would be to pair a small amount of flash SSD (10) (drives or PCIe cards), as well as the 10K and 15K performance enhanced drives, with the Enterprise Capacity HDD (11) along with cache or tiering software.

    (Note 10) Refer to Seagate 1200 12 Gbps Enterprise SAS SSD StorageIO lab review

    (Note 11) Refer to Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy

    Where To Learn More

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


    What This All Means

If your environment is using applications that rely on databases, then test resources such as servers, storage and devices using tools that represent your environment. This means moving up the software and technology stack from basic storage I/O benchmark or workload generator tools such as Iometer, and instead using either your own application, or tools that can replay or generate workloads that represent your environment.

    Continue reading part five in this multi-part series here where the focus shifts to large and small file I/O processing workloads.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Which Enterprise HDD for Content Applications Different File Size Impact



    Updated 1/23/2018

    Which enterprise HDD to use with a content server platform different file size impact.

    Insight for effective server storage I/O decision making
    Server StorageIO Lab Review

    Which enterprise HDD to use for content servers

    This is the fifth in a multi-part series (read part four here) based on a white paper hands-on lab report I did compliments of Servers Direct and Seagate that you can read in PDF form here. The focus is looking at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drive (HDD’s). In this post the focus looks at large and small file I/O processing.

    File Performance Activity

Tip: content solutions use files in various ways. Use the following to gain perspective on how various HDDs handle workloads similar to your specific needs.

Two separate file processing workloads were run (12), one with a relatively small number of large files, and another with a large number of small files. For the large file processing (table-3), 5 GByte sized files were created and then accessed via 128 Kbyte (128KB) sized I/O over a 10 hour period with 90% reads using 64 threads (workers). The large file workload simulates what might be seen with higher definition video, image or other content streaming.

    (Note 12) File processing workloads were run using Vdbench 5.04 and file anchors with sample script configuration below. Instead of vdbench you could also use other tools such as sysbench or fio among others.

VdbenchFSBigTest.txt
# Sample script for big file testing: 5 directories x 20 files of 5GB each (fsd),
# 90% read 128KB random file I/O with 64 threads (fwd), run for 10 hours (rd)
fsd=fsd1,anchor=H:,depth=1,width=5,files=20,size=5G
fwd=fwd1,fsd=fsd1,rdpct=90,xfersize=128k,fileselect=random,fileio=random,threads=64
rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=10h,interval=30

    vdbench -f VdbenchFSBigTest.txt -m 16 -o Results_FSbig_H_060615

VdbenchFSSmallTest.txt
# Sample script for small file testing: 64 directories x 25,600 files of 16KB each (fsd),
# 90% read 1KB random file I/O with 64 threads (fwd), run for 10 hours (rd)
fsd=fsd1,anchor=H:,depth=1,width=64,files=25600,size=16k
fwd=fwd1,fsd=fsd1,rdpct=90,xfersize=1k,fileselect=random,fileio=random,threads=64
rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=10h,interval=30

    vdbench -f VdbenchFSSmallTest.txt -m 16 -o Results_FSsmall_H_060615
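The two parameter files above differ only in directory width, file counts, file sizes and transfer sizes. If you want to adapt them to your own content profile, a small helper such as the hypothetical Python sketch below can generate variations; the parameter names mirror the fsd/fwd/rd syntax shown above, while the anchor path, counts and sizes in the example are placeholders to adjust for your environment.

```python
# Hypothetical helper to generate vdbench file-system workload parameter files
# patterned after the VdbenchFSBigTest / VdbenchFSSmallTest scripts above.
def make_vdbench_fs_config(anchor, width, files, file_size, xfersize,
                           rdpct=90, threads=64, elapsed="10h", interval=30):
    lines = [
        f"fsd=fsd1,anchor={anchor},depth=1,width={width},files={files},size={file_size}",
        f"fwd=fwd1,fsd=fsd1,rdpct={rdpct},xfersize={xfersize},"
        f"fileselect=random,fileio=random,threads={threads}",
        f"rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed={elapsed},interval={interval}",
    ]
    return "\n".join(lines) + "\n"

# Example: a medium-file variation (placeholder values) written to a parameter file,
# then run with something like: vdbench -f VdbenchFSMediumTest.txt -m 16 -o Results_FSmedium
with open("VdbenchFSMediumTest.txt", "w") as f:
    f.write(make_vdbench_fs_config(anchor="H:", width=16, files=1000,
                                   file_size="512m", xfersize="64k"))
```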

The 10% writes are intended to reflect some update activity for new content or other changes to content. Note that roughly 1,000 x 128KB I/Os per second (about 128 MBps) translates to roughly 1 Gbps of streaming content such as higher definition video; 4K video (not optimized) would require a higher rate as well as result in larger file sizes. Table-3 shows the performance during the large file access period, including average read and write rates with response times, bandwidth (MBps) and CPU utilization.

| Drive Config | Avg. File Read Rate | Avg. Read Resp. Time (Sec.) | Avg. File Write Rate | Avg. Write Resp. Time (Sec.) | Avg. CPU % Total | Avg. CPU % System | Avg. MBps Read | Avg. MBps Write |
|---|---|---|---|---|---|---|---|---|
| ENT 15K R1 | 580.7 | 107.9 | 64.5 | 19.7 | 52.2 | 35.5 | 72.6 | 8.1 |
| ENT 10K R1 | 455.4 | 135.5 | 50.6 | 44.6 | 34.0 | 22.7 | 56.9 | 6.3 |
| ENT CAP R1 | 285.5 | 221.9 | 31.8 | 19.0 | 43.9 | 28.3 | 37.7 | 4.0 |
| ENT 10K R10 | 690.9 | 87.21 | 76.8 | 48.6 | 35.0 | 21.8 | 86.4 | 9.6 |
    Table-3 Performance summary for large file access operations (90% read)

Table-3 shows that for a two-drive RAID 1, the Enterprise 15K drives deliver the fastest performance, however a RAID 10 with four 10K HDDs with enhanced cache features provides a good price, performance and space capacity option. Software RAID was used in this workload test.
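As a quick sanity check, the MBps columns in Table-3 follow directly from the file read and write rates multiplied by the 128KB transfer size. The minimal Python calculation below uses the ENT 15K R1 row; it is only arithmetic on the published numbers, not part of the lab tooling.

```python
# Cross-check: bandwidth = operation rate x transfer size (Table-3, ENT 15K R1 row).
xfersize_kb = 128                      # the large file workload used 128KB I/Os
read_rate, write_rate = 580.7, 64.5    # avg file read / write operations per second

read_mbps = read_rate * xfersize_kb / 1024    # ~72.6, matches the Avg. MBps Read column
write_mbps = write_rate * xfersize_kb / 1024  # ~8.1, matches the Avg. MBps Write column
print(f"read {read_mbps:.1f} MBps, write {write_mbps:.1f} MBps")

# Approximate line rate, ignoring the MB vs. MiB distinction: ~0.6 Gbps of 128KB reads.
print(f"~{read_mbps * 8 / 1000:.2f} Gbps")
```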

Figure-4 shows the relative performance of various HDD options handling large files. Keep in mind that for the response time line lower is better, while for the activity rate higher is better.

    Figure-4 Large file processing 90% read, 10% write rate and response time

    In figure-4 you can see the performance in terms of response time (reads larger dashed line, writes smaller dotted line) along with number of file read operations per second (reads solid blue column bar, writes green column bar). Reminder that lower response time, and higher activity rates are better. Performance declines moving from left to right, from 15K to 10K Enterprise Performance with enhanced cache feature to Enterprise Capacity (7.2K), all of which were hardware RAID 1. Also shown is a hardware RAID 10 (four x 10K HDD’s).

    Results in figure-4 above and table-4 below show how various drives can be configured to balance their performance, capacity and costs to meet different needs. Table-4 below shows an analysis looking at average file reads per second (RPS) performance vs. HDD costs, usable capacity and protection level.

    Table-4 is an example of looking at multiple metrics to make informed decisions as to which HDD would be best suited to your specific needs. For example RAID 10 using four 10K drives provides good performance and protection along with large usable space, however that also comes at a budget cost (e.g. price).

| Drive Config | Avg. File Reads Per Sec. (RPS) | Single Drive Cost per RPS | Multi-Drive Cost per RPS | Single Drive Cost / Per GB Capacity | Cost / Per GB Usable (Protected) Cap. | Drive Cost (Multiple Drives) | Protection Overhead (Space Capacity for RAID) | Cost per usable GB per RPS | Avg. File Read Resp. (Sec.) |
|---|---|---|---|---|---|---|---|---|---|
| ENT 15K R1 | 580.7 | $1.02 | $2.05 | $0.99 | $0.99 | $1,190 | 100% | $2.1 | 107.9 |
| ENT 10K R1 | 455.5 | $1.92 | $3.84 | $0.49 | $0.49 | $1,750 | 100% | $3.8 | 135.5 |
| ENT CAP R1 | 285.5 | $1.40 | $2.80 | $0.20 | $0.20 | $798 | 100% | $2.8 | 271.9 |
| ENT 10K R10 | 690.9 | $1.27 | $5.07 | $0.49 | $0.97 | $3,500 | 100% | $5.1 | 87.2 |

    Table-4 Performance, capacity and cost analysis for big file processing

    Small File Size Processing

    To simulate a general file sharing environment, or content streaming with many smaller objects, 1,638,464 16KB sized files were created on each device being tested (table-5). These files were spread across 64 directories (25,600 files each) and accessed via 64 threads (workers) doing 90% reads with a 1KB I/O size over a ten hour time frame. Like the large file test, and database activity, all workloads were run at the same time (e.g. test devices were concurrently busy).

| Drive Config | Avg. File Read Rate | Avg. Read Resp. Time (Sec.) | Avg. File Write Rate | Avg. Write Resp. Time (Sec.) | Avg. CPU % Total | Avg. CPU % System | Avg. MBps Read | Avg. MBps Write |
|---|---|---|---|---|---|---|---|---|
| ENT 15K R1 | 3,415.7 | 1.5 | 379.4 | 132.2 | 24.9 | 19.5 | 3.3 | 0.4 |
| ENT 10K R1 | 2,203.4 | 2.9 | 244.7 | 172.8 | 24.7 | 19.3 | 2.2 | 0.2 |
| ENT CAP R1 | 1,063.1 | 12.7 | 118.1 | 303.3 | 24.6 | 19.2 | 1.1 | 0.1 |
| ENT 10K R10 | 4,590.5 | 0.7 | 509.9 | 101.7 | 27.7 | 22.1 | 4.5 | 0.5 |

    Table-5 Performance summary for small sized (16KB) file access operations (90% read)

Figure-5 shows the relative performance of various HDD options handling small files. Keep in mind that for the response time line lower is better, while for the activity rate higher is better.

    Figure-5 Small file processing 90% read, 10% write rate and response time

    In figure-5 you can see the performance in terms of response time (reads larger dashed line, writes smaller dotted line) along with number of file read operations per second (reads solid blue column bar, writes green column bar). Reminder that lower response time, and higher activity rates are better. Performance declines moving from left to right, from 15K to 10K Enterprise Performance with enhanced cache feature to Enterprise Capacity (7.2K RPM), all of which were hardware RAID 1. Also shown is a hardware RAID 10 (four x 10K RPM HDD’s) that has higher performance and capacity along with costs (table-5).

Results in figure-5 above and table-6 below show how various drives can be configured to balance their performance, capacity and costs to meet different needs. Table-6 shows an analysis looking at average file reads per second (RPS) performance vs. HDD costs, usable capacity and protection level.

    Table-6 is an example of looking at multiple metrics to make informed decisions as to which HDD would be best suited to your specific needs. For example RAID 10 using four 10K drives provides good performance and protection along with large usable space, however that also comes at a budget cost (e.g. price).

| Drive Config | Avg. File Reads Per Sec. (RPS) | Single Drive Cost per RPS | Multi-Drive Cost per RPS | Single Drive Cost / Per GB Capacity | Cost / Per GB Usable (Protected) Cap. | Drive Cost (Multiple Drives) | Protection Overhead (Space Capacity for RAID) | Cost per usable GB per RPS | Avg. File Read Resp. (Sec.) |
|---|---|---|---|---|---|---|---|---|---|
| ENT 15K R1 | 3,415.7 | $0.17 | $0.35 | $0.99 | $0.99 | $1,190 | 100% | $0.35 | 1.51 |
| ENT 10K R1 | 2,203.4 | $0.40 | $0.79 | $0.49 | $0.49 | $1,750 | 100% | $0.79 | 2.90 |
| ENT CAP R1 | 1,063.1 | $0.38 | $0.75 | $0.20 | $0.20 | $798 | 100% | $0.75 | 12.70 |
| ENT 10K R10 | 4,590.5 | $0.19 | $0.76 | $0.49 | $0.97 | $3,500 | 100% | $0.76 | 0.70 |

    Table-6 Performance, capacity and cost analysis for small file processing

Looking at the small file processing analysis in table-6 shows that the 15K HDDs on an apples-to-apples basis (e.g. same RAID level and number of drives) provide the best performance. However, when also factoring in space capacity, performance, different RAID levels or other protection schemes along with cost, there are other considerations. On the other hand the Enterprise Capacity 2TB HDDs have a low cost per capacity, however they do not have the performance of other options, assuming your applications need more performance.

Thus the right HDD for one application may not be the best one for a different scenario, and multiple metrics such as those shown in table-6 need to be included in an informed storage decision-making process.

    Where To Learn More

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


    What This All Means

File processing is a common content application task, with some files being small, others large or mixed, as well as both reads and writes. Even if your content environment is using object storage, chances are that unless it is a new application or a gateway exists, you may be using NAS or file based access. Thus if your applications are doing file based processing, it is important to either run your own applications or use tools that can simulate as closely as possible what your environment is doing.

    Continue reading part six in this multi-part series here where the focus is around general I/O including 8KB and 128KB sized IOPs along with associated metrics.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Which Enterprise HDD for Content Applications General I/O Performance



    Updated 1/23/2018

Which enterprise HDD to use with a content server platform: general I/O performance. Insight for effective server storage I/O decision making
    Server StorageIO Lab Review

    Which enterprise HDD to use for content servers

    This is the sixth in a multi-part series (read part five here) based on a white paper hands-on lab report I did compliments of Servers Direct and Seagate that you can read in PDF form here. The focus is looking at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drive (HDD’s). In this post the focus is around general I/O performance including 8KB and 128KB IOP sizes.

    General I/O Performance

    In addition to running database and file (large and small) processing workloads, Vdbench was also used to collect basic small (8KB) and large (128KB) sized I/O operations. This consisted of random and sequential reads as well as writes with the results shown below. In addition to using vdbench, other tools that could be used include Microsoft Diskspd, fio, iorate and iometer among many others.

    These workloads used Vdbench configured (13) to do direct I/O to a Windows file system mounted device using as much of the available disk space as possible. All workloads used 16 threads and were run concurrently similar to database and file processing tests.

    (Note 13) Sample vdbench configuration for general I/O, note different settings were used for various tests
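The exact parameter files for these runs are not reproduced here, but a raw (block) vdbench workload is defined with sd/wd/rd stanzas rather than the fsd/fwd/rd stanzas shown earlier. The Python sketch below emits one plausible 8KB random, 75% read configuration as an illustration only; the device path, capacity, run length and direct I/O flag are assumptions, not the lab's actual settings.

```python
# Hypothetical example only: emit a raw (sd/wd/rd) vdbench parameter file in the spirit of
# the general I/O tests. Path, size, elapsed time and openflags are placeholders/assumptions.
config_lines = [
    "sd=sd1,lun=H:\\vdbench_test.file,size=1500g,threads=16,openflags=directio",
    "wd=wd1,sd=sd1,xfersize=8k,rdpct=75,seekpct=100",    # 8KB, 75% read, random (100% seek)
    "rd=rd1,wd=wd1,iorate=max,elapsed=600,interval=30",  # 10 minute run, 30 second samples
]
with open("VdbenchGeneral8KRandom.txt", "w") as f:
    f.write("\n".join(config_lines) + "\n")
# Then run with something like: vdbench -f VdbenchGeneral8KRandom.txt -o Results_8Krandom
```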

Table-7 shows workload results for 8KB random IOPs with 75% read and 25% read (i.e. 75% write) mixes, including IOPs, bandwidth and response time.

     

| Config | Read Mix | I/O Rate (IOPs) | MB/sec | Resp. Time (Sec.) |
|---|---|---|---|---|
| ENT 15K RAID1 | 75% Read | 597.11 | 4.7 | 25.9 |
| ENT 15K RAID1 | 25% Read | 559.26 | 4.4 | 27.6 |
| ENT 10K RAID1 | 75% Read | 514 | 4.0 | 30.2 |
| ENT 10K RAID1 | 25% Read | 475 | 3.7 | 32.7 |
| ENT CAP RAID1 | 75% Read | 285 | 2.2 | 55.5 |
| ENT CAP RAID1 | 25% Read | 293 | 2.3 | 53.7 |
| ENT 10K R10 (4 Drives) | 75% Read | 979 | 7.7 | 16.3 |
| ENT 10K R10 (4 Drives) | 25% Read | 984 | 7.7 | 16.3 |
| ECAP SW RAID (5 Drives) | 75% Read | 491 | 3.8 | 32.6 |
| ECAP SW RAID (5 Drives) | 25% Read | 644 | 5.0 | 24.8 |
    Table-7 8KB sized random IOPs workload results

    Figure-6 shows small (8KB) random I/O (75% read and 25% read) across different HDD configurations. Performance including activity rates (e.g. IOPs), bandwidth and response time for mixed reads / writes are shown. Note how response time increases with the Enterprise Capacity configurations vs. other performance optimized drives.

    Figure-6 8KB random reads and write showing IOP activity, bandwidth and response time
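The IOPs and response time columns in Table-7 are tied together by the number of outstanding I/Os: with 16 threads per workload, average response time is roughly threads divided by IOPs (Little's law). The minimal Python check below uses the ENT 15K RAID1 75% read cell; it is arithmetic on the published numbers (and suggests the reported values are best read as milliseconds per I/O).

```python
# Little's law sanity check: avg response time ~= outstanding I/Os / throughput.
threads = 16          # concurrent outstanding I/Os per workload (per the text above)
iops = 597.11         # ENT 15K RAID1, 8KB random, 75% read (Table-7)

est_resp = threads / iops
print(f"estimated response time ~ {est_resp * 1000:.1f} ms per I/O")  # ~26.8, vs. 25.9 reported
# The same ratio explains why the Enterprise Capacity configuration, with roughly half
# the IOPs (285), shows roughly double the response time (55.5) in Table-7.
```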

Table-8 below shows workload results for 8KB sized I/Os, 100% sequential, with 75% read and 25% read (i.e. 75% write) mixes, including IOPs, MB/sec and response time in seconds.

| Config | Read Mix | I/O Rate (IOPs) | MB/sec | Resp. Time (Sec.) |
|---|---|---|---|---|
| ENT 15K RAID1 | 75% Read | 3,778 | 29.5 | 2.2 |
| ENT 15K RAID1 | 25% Read | 3,414 | 26.7 | 3.1 |
| ENT 10K RAID1 | 75% Read | 3,761 | 29.4 | 2.3 |
| ENT 10K RAID1 | 25% Read | 3,986 | 31.1 | 2.4 |
| ENT CAP RAID1 | 75% Read | 3,379 | 26.4 | 2.7 |
| ENT CAP RAID1 | 25% Read | 1,274 | 10.0 | 10.9 |
| ENT 10K R10 (4 Drives) | 75% Read | 11,840 | 92.5 | 1.3 |
| ENT 10K R10 (4 Drives) | 25% Read | 8,368 | 65.4 | 1.9 |
| ECAP SW RAID (5 Drives) | 75% Read | 2,891 | 22.6 | 5.5 |
| ECAP SW RAID (5 Drives) | 25% Read | 1,146 | 9.0 | 14.0 |
    Table-8 8KB sized sequential workload results

Figure-7 shows small 8KB sequential mixed reads and writes (75% read and 25% read). While the Enterprise Capacity 2TB HDD has a large amount of space capacity, its performance in a RAID 1 vs. other similarly configured drives is slower.

Figure-7 8KB sequential 75% and 25% read mixes showing bandwidth activity

    Table-9 shows workload results for 100% sequential, 100% read and 100% write 128KB sized I/Os including IOPs, bandwidth and response time.

| Config | Workload | I/O Rate (IOPs) | MB/sec | Resp. Time (Sec.) |
|---|---|---|---|---|
| ENT 15K RAID1 | Read | 1,798 | 224.7 | 8.9 |
| ENT 15K RAID1 | Write | 1,771 | 221.3 | 9.0 |
| ENT 10K RAID1 | Read | 1,716 | 214.5 | 9.3 |
| ENT 10K RAID1 | Write | 1,688 | 210.9 | 9.5 |
| ENT CAP RAID1 | Read | 921 | 115.2 | 17.4 |
| ENT CAP RAID1 | Write | 912 | 114.0 | 17.5 |
| ENT 10K R10 (4 Drives) | Read | 3,552 | 444.0 | 4.5 |
| ENT 10K R10 (4 Drives) | Write | 3,486 | 435.8 | 4.6 |
| ECAP SW RAID (5 Drives) | Read | 780 | 97.4 | 19.3 |
| ECAP SW RAID (5 Drives) | Write | 721 | 90.1 | 20.2 |
    Table-9 128KB sized sequential workload results

    Figure-8 shows sequential or streaming operations of larger I/O (100% read and 100% write) requests sizes (128KB) that would be found with large content applications. Figure-8 highlights the relationship between lower response time and increased IOPs as well as bandwidth.

    Figure-8 128KB sequential reads and write showing IOP activity, bandwidth and response time

    Where To Learn More

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


    What This All Means

    Some content applications are doing small random I/Os for database, key value stores or repositories as well as meta data processing while others are doing large sequential I/O. 128KB sized I/O may be large for your environment, on the other hand, with an increasing number of applications, file systems, software defined storage management tools among others, 1 to 10MB or even larger I/O sizes are becoming common. Key is selecting I/O sizes and read write as well as random sequential along with I/O or queue depths that align with your environment.

Continue reading part seven, the final post in this multi-part series, here where the focus is on how HDDs continue to evolve, including performance beyond traditional RPM based expectations, along with a wrap up.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    HDDs evolve for Content Application servers



    Updated 1/23/2018

    Enterprise HDDs evolve for content server platform

    Insight for effective server storage I/O decision making
    Server StorageIO Lab Review

    Which enterprise HDD to use for content servers

This is the seventh and final post in this multi-part series (read part six here) based on a white paper hands-on lab report I did compliments of Servers Direct and Seagate that you can read in PDF form here. The focus is looking at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drives (HDDs). The focus of this post is comparing how HDDs continue to evolve over various generations, boosting performance as well as capacity and reliability. This also looks at how there is more to HDD performance than the traditional focus on Revolutions Per Minute (RPM) as a speed indicator.

    Comparing Different Enterprise 10K And 15K HDD Generations

    There is more to HDD performance than RPM speed of the device. RPM plays an important role, however there are other things that impact HDD performance. A common myth is that HDD’s have not improved on performance over the past several years with each successive generation. Table-10 shows a sampling of various generations of enterprise 10K and 15K HDD’s (14) including different form factors and how their performance continues to improve.

    Figure-9 10K and 15K HDD performance improvements

    Figure-9 shows how performance continues to improve with 10K and 15K HDD’s with each new generation including those with enhanced cache features. The result is that with improvements in cache software within the drives, along with enhanced persistent non-volatile memory (NVM) and incremental mechanical drive improvements, both read and write performance continues to be enhanced.

Figure-9 puts into perspective the continued performance enhancements of HDDs, comparing various enterprise 10K and 15K devices. The workload is the same TPC-C test used earlier, run on a similar server (14) (with no RAID). 100 simulated users are shown in figure-9 accessing a database on each of the different drives, all running concurrently. The older 15K 3.5” Cheetah and 2.5” Savio drives used had a capacity of 146GB, which used a database scale factor of 1500 or 134GB. All other drives used a scale factor of 3000 or 276GB. Figure-9 also highlights the improvements in both TPS performance as well as lower response time with newer HDDs, including those with the performance enhanced cache feature.

The workloads run are the same as the TPC-C ones shown earlier, however these drives were not configured with any RAID. The TPC-C activity used Benchmark Factory with a similar setup and configuration to those used earlier, including a multi-socket, multi-core Windows 2012 R2 server supporting a Microsoft SQL Server 2012 database, with a separate database for each drive type.

| Drive | TPS (TPC-C) 1 user | TPS 20 users | TPS 50 users | TPS 100 users | Resp. Time (Sec.) 1 user | Resp. 20 users | Resp. 50 users | Resp. 100 users |
|---|---|---|---|---|---|---|---|---|
| ENT 10K V3 2.5" | 14.8 | 50.9 | 30.3 | 39.9 | 0.0 | 0.4 | 1.6 | 1.7 |
| ENT (Cheetah) 15K 3.5" | 14.6 | 51.3 | 27.1 | 39.3 | 0.0 | 0.3 | 1.8 | 2.1 |
| ENT 10K 2.5" (with cache) | 19.2 | 146.3 | 72.6 | 71.0 | 0.0 | 0.1 | 0.7 | 0.0 |
| ENT (Savio) 15K 2.5" | 15.8 | 59.1 | 40.2 | 53.6 | 0.0 | 0.3 | 1.2 | 1.2 |
| ENT 15K V4 2.5" | 19.7 | 119.8 | 75.3 | 69.2 | 0.0 | 0.1 | 0.6 | 1.0 |
| ENT 15K (enhanced cache) 2.5" | 20.1 | 184.1 | 113.7 | 122.1 | 0.0 | 0.1 | 0.4 | 0.2 |
    Table-10 Continued Enterprise 10K and 15K HDD performance improvements

(Note 14) 10K and 15K generational comparisons were run on a separate but comparable server to that used for the other test workloads. Workload configuration settings were the same as the other database workloads, including using Microsoft SQL Server 2012 on a Windows 2012 R2 system with Benchmark Factory driving the workload. Database memory size was however reduced to only 8GB vs. the 16GB used in other tests.

    Where To Learn More

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


    What This All Means

    A little bit of flash in the right place with applicable algorithms goes a long way, an example being the Seagate Enterprise HDD’s with enhanced cache feature. Likewise, HDD’s are very much alive complementing SSD and vice versa. For high-performance content application workloads flash SSD solutions including NVMe, 12Gbps SAS and 6Gbps SATA devices are cost effective solutions. HDD’s continue to be cost-effective data storage devices for both capacity, as well as environments that do not need the performance of flash SSD.

    For some environments using a combination of flash and HDD’s complementing each other along with cache software can be a cost-effective solution. The previous workload examples provide insight for making cost-effective informed storage decisions.

Evaluate today’s HDDs on their effective performance running workloads as similar as possible to your own, or better yet, actually try them out with your applications. Today there is more to HDD performance than just RPM speed, particularly with the Seagate Enterprise Performance 10K and 15K HDDs with the enhanced caching feature.

However the Enterprise Performance 10K with enhanced cache feature provides a good balance of capacity and performance while being cost-effective. If you are using older 3.5” 15K or even previous generation 2.5” 15K RPM and “non-performance enhanced” HDDs, take a look at how the newer generation HDDs perform, looking beyond the RPM of the device.

Fast content applications need fast content and flexible content solution platforms such as those from Servers Direct with HDDs from Seagate. Key to a successful content application deployment is having the flexibility to hardware define and software define the platform to meet your needs. Just as there are many different types of content applications along with diverse environments, content solution platforms need to be flexible, scalable and robust, not to mention cost effective.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Happy Earth Day 2016 Eliminating Digital and Data e-Waste


    With Earth Day 2016 on April 22, here are some thoughts about electronic waste (e-waste).

For those involved in data management or data infrastructures, the following are six tips to help cut the overhead and resulting impact of digital e-waste and, later, physical e-waste. Most conversations involving e-waste focus on the physical aspects of disposing of electronics along with the later impacts. While physical e-waste is an important topic, let’s expand the conversation to other variations of e-waste, including digital. By digital e-waste I’m referring to the use of physical items that end up contributing to traditional e-waste.


Digital e-waste includes the overhead of keeping extra copies of data, resulting in an expanding data footprint that in turn requires extra physical resources and has its own impact. Addressing physical e-waste also means keeping digital waste (not the physical items), including data waste, in perspective.

Also note that digital or data waste may in fact not be waste per se if it exists as a by-product of making sure applications, data and resulting information are protected, preserved, secured and served when needed. The question is what can be done to make sure there are good, useful, effective and efficient copies of data with a relatively low data footprint overhead impact; more on this later.

Here are six themes to consider to cut the impact, without costing or compromising your organization, when addressing e-waste (physical, digital, data).

    1. Understand Digital e-waste

You might be familiar with the term e-waste (electronic waste), you know, those physical items that get discarded from supporting your digital lifestyle. Awareness around e-waste is important because of the environmental impacts of discarding all those devices. The more that is known about the issue, its impacts, causes and effects, the easier it is to drive awareness as well as insight into what can be done to mitigate them.


    Devices range from smart and dumb cell phones, personal digital assistants (PDAs), tablets, notebook and workstation computers, MP3 devices, cameras, video display monitors along with larger servers, storage and networking technology, not to mention all the other Internet of Things (IoT) and Internet of Device (IoD) items. What’s important to know about physical e-waste is the impact of the various components. You can learn more about physical e-waste impact in general with a web search such as Google e-waste impact.

    2. Reuse, Repurpose, Redeploy, Reconfigure, Re-Tool, Recycle

Reconfigure and retool where possible by re-driving, that is, installing newer, more energy-efficient, higher-capacity drives or more performance-effective devices. Besides replacing Hard Disk Drives (HDDs), Solid State Devices (SSDs) and magnetic tape among other media, look at the pros and cons of replacing CPU processor sockets, upgrading memory, and adding PCIe I/O cards for networking or storage among other enhancements.

Pros include being able to use the chassis longer, reducing the amount of physical e-waste; however at some point it can be more cost-effective to do a total replacement. Still, the longer you can use the asset or device, the greater the positive benefit in cutting e-waste.

    Repurpose, reuse and redeploy assets such as servers, storage and networking devices in a hand me down approach assuming there is a value or benefit in doing so.

Recycle when done: dispose of the technology properly, including secure erase of digital storage media followed by proper physical handling.

    3. Responsible Recycle and Disposition of technology (including secure digital destruction)

What are you doing with, and how are you disposing of, physical items ranging from laptops, workstations, tablets, phones, MP3 players, TVs and monitors, servers, network and storage devices among others when they are no longer needed?

    Are you securely erasing your digital data on HDDs as well as SSDs or even tape and optical devices before they are disposed of? If not, you should be. For example if you are not yet using or looking at Self Encrypting Drives (SED) including HDDs and SSDs for securing your data, start investigating them. Sure they have a security value proposition for when lost or stolen, however they can also cut the time to secure erase to a given standard from days or hours to minutes or seconds.

    These will become e-waste

    Smart shopping up front, what you want, what you need, how long can you leverage, spend more up front to get something that can last 3-5 years vs. discarding in 1.5-3 years.
    Smart management with insight, know your cost and impacts, not just for PR purpose, for profit and practicality

    4. Plan acquisitions with disposition in mind

Redesign, and design for replacement: maximize what you have or will acquire, using it for a longer time to cut costs and improve productivity (and profitability) while reducing the footprint that contributes to e-waste overhead.

    For example, do you need or want to have the latest in new technology replacing that phone, tablet, watch or other IoT or IoD item as soon as something newer comes along? No worries if you are also doing something responsible with what was new and now old by such as donating or giving it to somebody else who might be able to get a few more years worth of use out of it before it becomes e-waste.

On the other hand, if you are acquiring technology with a 2-3 year useful life plan, what would it take to upgrade that item to a larger or more robust version and use it for 3-5 years? Granted, you might not use it in its primary role for the longer duration, however can it be repurposed for some other use? Also from a technology acquisition perspective, have a forecast and plan that can help you make smart, informed decisions up front, knowing when upgrades or extra resources will be needed to prolong the usefulness of the item.

    Of course you can also simply move everything to the cloud and out-source your e-waste footprint to the vendor, MSP or cloud provider.

    5. Understand Changing Data Value

Keep in mind that data has either no value, some value or unknown value, all of which can change over time. For example some data has value for seconds, minutes or hours and can then be discarded. Other data has some value, which can be low or high, which determines when, where and how to protect, preserve, secure and serve it when needed. Then there is data that has an unknown value; however, that too can change over time.

    Different and Changing Data Value

    Over time your data may end up having no value meaning it can be discarded, or, it might have some value (low or high) meaning change how it should be protected, preserved, secured and served. Then there is data that may stay in limbo or unknown status indefinitely or until somebody, or some software or via other means decide if it has value or not.

The point is that cutting digital e-waste means discarding data with no value as soon as possible, and protecting, preserving, securing and serving data with value appropriately. Likewise, for all of the growing data with unknown value, rethink how it is protected and stored, all of which has an impact on both physical and digital e-waste.

This means having insight and awareness into your environment, applications, data, settings, configuration and metadata, not only the space being used or when it was last updated. Also, look beyond when data was last modified or changed; look at when it was last read or accessed to decide how it should be protected and secured, including virus and other scans.
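As one practical way to act on the last-accessed point above, most file systems expose a last-access timestamp that can be scanned to find candidate cold data. The following is a minimal, hypothetical Python sketch; the path and age threshold are placeholders, and on systems mounted with relatime or noatime the access time is only a rough hint.

```python
# Minimal sketch: find files not read (accessed) in the last N days using last-access time.
# Note: many systems mount with relatime/noatime, so treat atime as a hint, not a guarantee.
import os, time

def stale_files(root, days=365):
    cutoff = time.time() - days * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # skip files that vanish or are unreadable during the walk
            if st.st_atime < cutoff:
                yield path, st.st_size

total = sum(size for _path, size in stale_files("/data/content", days=365))  # placeholder path
print(f"candidate cold data: {total / 1024**3:.1f} GiB")
```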

    6. Data Footprint Reduction (DFR)

Implement data footprint reduction (DFR) to lower overhead impact, not only at the target or downstream destination using compression, dedupe and other techniques. Also, move upstream to the source where the problem starts and address it there. Addressing it at the source leverages various techniques, from Archiving, Backup/Data Protection Modernization (rethinking what is saved, when, how often, etc.), Cleanup, Compression and Consolidation, Data management, Deletion and Dedupe, along with storage tiering, RAID/Parity/Mirroring/Replication/Erasure Code and Advanced Parity/LRC/Forward Error Correction and other technologies.

For example, if you have 10TB of data, how many copies do you have and why, how are those copies protected, and what is their overhead? The issue and concern should not primarily be how many copies exist; rather, if those copies add or give value, what can you do to keep them while reducing their overhead impact, besides simply trying to compress or dedupe everything? Hint: start exploring copy management as well as revisiting what you protect, when, where, why and how often, along with options for implementing DFR as close to the data source as possible, as well as downstream.
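As a back-of-the-envelope way to frame the 10TB example above, the effective footprint of those copies can be estimated from the primary data size, the number of copies kept, and whatever combined data footprint reduction (compression, dedupe) ratio applies to them. The short Python sketch below is illustrative only; the copy counts and ratios are placeholders, not measurements.

```python
# Illustrative copy-overhead estimate for the 10TB example (placeholder ratios, not measurements).
def copy_footprint_tb(primary_tb, copies, dfr_ratio):
    """dfr_ratio: combined compression/dedupe reduction applied to the copies, e.g. 4 means 4:1."""
    copies_tb = primary_tb * copies / dfr_ratio
    return primary_tb + copies_tb

base = 10  # TB of primary data
print("no DFR, 3 copies:  ", copy_footprint_tb(base, copies=3, dfr_ratio=1), "TB")  # 40.0 TB
print("4:1 DFR, 3 copies: ", copy_footprint_tb(base, copies=3, dfr_ratio=4), "TB")  # 17.5 TB
print("4:1 DFR, 2 copies: ", copy_footprint_tb(base, copies=2, dfr_ratio=4), "TB")  # 15.0 TB
```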

    Where To Learn More

    What This All Means

Gain insight and awareness into what is occurring with physical and digital e-waste, sidestepping the greenwashing and other noise. Small steps implemented by many will have a big impact. Every bit, byte, block, blob, bucket, file or object, along with their copies, has an impact and hopefully also a benefit; the question is how you can reduce the overhead while increasing your return on innovation, cutting costs, complexity and overhead while enhancing organizational capabilities. There are many techniques, technologies, tools and approaches to apply to various environments; after all, everything is not the same, yet there are similarities. Happy Earth Day 2016 and happy spring to those of you in the northern hemisphere (as well as elsewhere).

    Ok, nuff said, for now

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved