Goodbye 2013, hello 2014, predictions past, present and future

Storage I/O trends


First, for those who may have missed this, thanks to all who helped make 2013 a great year!

2013 season greetings

Looking back at 2013 I saw a continued trend of more vendors and their media public relations (PR) people reaching out to have their predictions placed in articles, posts, columns or trends perspectives pieces.

Hmm, maybe a new trend is predictions selfies? ;)

Not to worry, this is not a wrapper piece for the bunch of pitched and placed prediction requests that I received in 2013; those have been saved for a rainy or dull day when we need to have some fun ;) .

What about 2013 server storage I/O networking, cloud, virtual and physical?

2013 ended with an end-of-year spending spree, including Avago acquiring storage I/O and networking vendor LSI for about $6.6B USD (e.g. SSD cards, RAID cards, cache cards, Host Bus Adapters (HBAs), chips and other items), along with Seagate buying Xyratex for about $374M USD (a Seagate supplier and customer partner).

Xyratex is known by some for making the storage enclosures that house the hard disk drives (HDDs) and Solid State Devices (SSDs) used by many well-known, and some not so well-known, systems and solution vendors. Xyratex also has other pieces of its business, such as appliances that combine its storage enclosures for HDDs and SSDs with server boards, along with a software group focused on High Performance Compute (HPC) Lustre. There is another part of the Xyratex business that is not as well-known, which is the test equipment used by disk drive manufacturers such as Seagate as part of their manufacturing process. Thus the acquisition moves Seagate up market with more integrated solutions to offer to their (e.g. Seagate and Xyratex) joint customers, as well as streamlining their own supply chain and costs (not to mention selling test equipment to the other remaining drive manufacturers, WD and Toshiba).


Other 2013 acquisitions included Whiptail by Cisco, Virident by WD (who also bought several other companies) and Softlayer by IBM, along with various mergers, company launches, company shutdowns (cloud storage provider Nirvanix and SSD maker OCZ's bankruptcy filing), and IPOs (some did well like Nimble, while Violin not so well), while earlier high-flying industry darlings such as FusionIO are now the high-flung darling targets of the shareholder stock lawsuit attorneys.

2013 also saw the end of SNW (Storage Networking World), jointly produced by SNIA and Computerworld in the US after more than a decade. Some perspectives from the last US SNW, held October 2013, can be found in the Fall 2013 StorageIO Update Newsletter here; granted, those were written before the event was formally announced as being terminated.

Speaking of events, check out the November 2013 StorageIO Update Newsletter here for perspectives from attending the Amazon Web Services (AWS) re:Invent conference which joins VMworld, EMCworld and a bunch of other vendor world events.

Let's also not forget Dell buying itself in 2013.

StorageIO in the news

Click on the following links (and here) to read more about various 2013 industry perspectives and trends commentary of mine in various venues, along with tips, articles, newsletters, events, podcasts, videos and other items.

What about 2014?

Perhaps 2014 will build on the 2013 momentum of the annual rite of passage referred to as making meaningless future-year trends and predictions finally becoming passé?

Not that there is anything wrong with making predictions for the coming year, particularly if they actually have some relevance and practicality, not to mention a track record.

However, the past few years seem to have resulted in press releases along with product (or services) plugs being masked as predictions, or simply making the same predictions for the coming year that did not come to be the earlier year (or the one before that, and so forth).

On the other hand, from an entertainment perspective, perhaps that’s where we will see annual predictions finally get classified and put into perspectives as being just that.


Now for those who still cling to as well as look forward to annual predictions, ok, simple, we will continue in 2014 (and beyond) from where we left off in 2013 (and 2012 and earlier) meaning more (or continued):

  • Software defined "x" (replace "x" with your favorite topic) industry discussion and adoption continues, yet customer adoption or deployment remains a question-mark conversation.
  • Cloud conversations shifted from let's all go to the cloud as the new shiny technology to questioning the security, privacy, stability, vendor or service viability, not to mention other common sense concerns that should have been discussed or looked into earlier. I have also heard from people who say Amazon (as well as Verizon, Microsoft, Bluehost, Google, Nirvanix, Yahoo and the list goes on) outages are bad for the image of clouds as they shake people's confidence. IMHO people's confidence needs to be shaken into having some common sense around clouds, including don't be scared, be ready, do your homework and basic due diligence. This means cloud conversations over concerns set the stage for increased awareness in decision-making, usage, deployment and best practices (all of which are good things for continued cloud deployments). However, if some vendors or pundits feel that people having basic cloud concerns that can be addressed is not good for their products or services, I would like to talk with them, because they may be missing an opportunity to create long-term confidence with their customers or prospects.
  • VDI as a technology being deployed continues to grow (e.g. customer adoption) while the industry adoption (buzz or what’s being talked about) has slowed a bit which makes sense as vendors jump from one bandwagon to the new software defined bandwagon.
  • Continued awareness around modernizing data protection including backup/restore, business continuance (BC), disaster recovery (DR), high availability, archiving and security means more than simply swapping out old technology for new, yet using it in old ways. After all, in the data center and information factory not everything is the same. Speaking of data protection, check out the series of technology neutral webcasts and video chats that started last fall as part of BackupU, brought to you by Dell. Even though Dell is the sponsor of the series (that's a disclosure btw ;) ) the focus of the sessions is on how to use different tools, technologies and techniques in new ways, as well as having the right tools for different tasks. Check out the information as well as register to get a free Data Protection chapter download from my book Cloud and Virtual Data Storage Networking (CRC Press) at the BackupU site, as well as attend upcoming events.
  • The nand flash solid state device (SSD) cash-dash (and shakeout) continues with some acquisitions and IPOs, as well as the disappearance of some weaker vendors and the appearance of some new ones. SSD is showing that it is real in several ways (despite myths, FUD and hype, some of which gets clarified here), ranging from some past IPO vendors (e.g. FusionIO) seeing the exit of their CEO and founders while their stock plummets and shareholder investor lawsuits arrive, to Violin's ho-hum IPO. What this means is that the market is real and has a very bright future; however, there is also a correction occurring, showing that reality may be settling in for the long run (e.g. the next couple of decades) vs. SSD being in the realm of unicorns.

  • Internet of Things (IoT) and Internet of Devices (IoD) may give some relief to Big Data, BYOD, VDI, Software Defined and Cloud among others that need a rest after their busy usage over the past few years. On the other hand, expect enhanced use of earlier buzzwords combined with IoT and IoD. Of course that also means plenty of questions around what is and is not IoD along with IoT, and whether either is actually relevant to what you are doing.
  • Also in 2014 some will discover storage and related data infrastructure topics or some new product / service thus having a revolutionary experience that storage is now exciting while others will have a DejaVu moment that it has been exciting for the past several years if not decades.
  • More big data buzz as well as realization by some that a pragmatic approach opens up a bigger broader market, not to mention customers more likely to realize they have more in common with big data than it simply being something new forcing them to move cautiously.
  • To say that OpenStack and related technologies will continue to gain both industry and customer adoption (and deployment) status building off of 2013 in 2014 would be an understatement, not to mention too easy to say, or leave out.
  • While SSDs continue to gain in deployment (the question is not if, rather when, where, with what and how much nand flash SSD is in your future), HDDs continue to evolve for physical, virtual and cloud environments. This also includes Seagate announcing a new (Kinetic) Ethernet attached HDD (note that this is not a NAS or iSCSI device) that uses a new key value object storage API for storing content data (more on this in 2014).
  • This also means realizing that large amounts of little data can result in back logs of lots of big data, and that big data is growing into very fast big data, not to mention realization by some that HDFS is just another distributed file system that happens to work with Hadoop.
  • SOHO’s and lower end of SMB begin to get more respect (and not just during the week of Consumer Electronic Show – CES).
  • Realization that there is a difference between Industry Adoption and Customer Deployment, not to mention industry buzz and traction vs. customer adoption.
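To make the Kinetic bullet above a bit more concrete: a key value drive is addressed by put/get/delete operations on keys rather than by reading and writing blocks through a file system. The following is a minimal, self-contained sketch of that access model; the class and method names are hypothetical for illustration (the real Kinetic drive speaks a protobuf-based protocol over Ethernet, not this API).

```python
# Sketch of a key value object interface of the sort a Kinetic-style
# Ethernet drive exposes: objects are stored and retrieved by key,
# and ordered keys enable range scans without a file system in between.
# Hypothetical API for illustration only.

class KeyValueDrive:
    def __init__(self):
        self._store = {}  # key (bytes) -> value (bytes)

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)

    def delete(self, key):
        self._store.pop(key, None)

    def get_key_range(self, start, end):
        # Keys are kept ordered, enabling range scans over objects
        return sorted(k for k in self._store if start <= k <= end)

drive = KeyValueDrive()
drive.put(b"object/0001", b"some content data")
drive.put(b"object/0002", b"more content data")
print(drive.get(b"object/0001"))  # b'some content data'
print(drive.get_key_range(b"object/0000", b"object/9999"))
```

The point of the model is that object storage software can talk to the drive directly by key, skipping the block and file system layers.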


What about beyond 2014?

That’s easy, many of the predictions and prophecies that you hear about for the coming year have also been pitched in prior years, so it only makes sense that some of those will be part of the future.

  • If you have seen or experienced something you are more likely to have DejaVu.
  • On the other hand, if you have not seen or experienced something you are more likely to have a new and revolutionary moment!
  • Start using new (and old) things in new ways vs. simply using new things in old ways.
  • Barrier to technology convergence, not to mention new technology adoption is often people or their organizations.
  • Convergence is still around, cloud conversations around concerns get addressed leading to continued confidence for some.
  • Realization that data infrastructure spans servers, storage I/O networking, cloud, virtual, physical, hardware, software and services.
  • That you cannot have software defined without hardware, and hardware defined requires software.
  • And it is time for me to get a new book project (or two) completed in addition to helping others with what they are working on, more on this in the months to come…

Here’s my point

The late Jim Morrison of The Doors said "There are things known and things unknown and in between are the doors."


There is what we know about 2013 (or will learn about the past in the future), and then there is what will be in 2014 as well as beyond, so let's step through some doors and see what will be. This means learning and leveraging lessons from the past to avoid making the same or similar mistakes in the future, while looking forward without a death grip clinging to the past.

Needless to say there will be more to review, preview and discuss throughout the coming year and beyond as we go from what is unknown through doors and learn about the known.

Thanks to all who made 2013 a great year, best wishes to all, look forward to seeing and hearing from you in 2014!

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

November 2013 Server and StorageIO Update Newsletter & AWS reinvent info



Welcome to the November 2013 edition of the StorageIO Update (newsletter) containing trends perspectives on cloud, virtualization and data infrastructure topics. Fall (here in North America) has been busy with in-person, on-line live and virtual events along with various client projects, research, time in the StorageIO cloud, virtual and physical lab test driving, validating and doing proof of concept research among other tasks. Check out the industry trends perspectives articles, comments and blog posts below that cover some activity over the past month.

Last week I had the chance to attend the second annual AWS re:Invent event in Las Vegas, see my comments, perspectives along with a summary of announcements from that conference below.

Watch for future posts, commentary, perspectives and other information down the road (and in the not so distant future) pertaining to information and data infrastructure topics, themes and trends across cloud, virtual, legacy server, storage, networking, hardware and software. Also check out our backup, restore, BC, DR and archiving page (under the Resources section on StorageIO.com) for various presentations, book chapter downloads and other content.

Enjoy this edition of the StorageIO Update newsletter.

Ok, nuff said (for now)

Cheers gs

StorageIO Industry Trends and Perspectives

Industry trends: Amazon Web Services (AWS) re:Invent

Last week I attended the AWS re:Invent event in Las Vegas. This was the second annual AWS re:Invent conference, which, while having an AWS and cloud theme, is also what I would describe as a data infrastructure event.

As a data infrastructure event AWS re:Invent spans traditional legacy IT and applications to newly invented, re-written, re-hosted or re-platformed ones from existing and new organizations. By this I mean a mix of traditional IT or enterprise people as well as cloud and virtual geek types (said with affection and all due respect of course) across server (operating system, software and tools), storage (primary, secondary, archive and tools), networking, security, development tools, applications and architecture.

That also means management from application and data protection spanning High Availability (HA), Business Continuance (BC), Disaster Recovery (DR), backup/restore, archiving, security, performance and capacity planning, service management among other related themes across public, private, hybrid and community cloud environments or paradigms. Hmm, I think I know of a book that covers the above and other related topic themes, trends, technologies and best practices called Cloud and Virtual Data Storage Networking (CRC Press) available via Amazon.com in print and Kindle (among other) versions.

During the event AWS announced enhanced and new services including:

  • WorkSpaces (Virtual Desktop Infrastructure – VDI) announced as a new service for cloud based desktops across various client devices including laptops, Kindle Fire, iPad and Android tablets using PCoIP.
  • Kinesis which is a managed service for real-time processing of streaming (e.g. Big) data at scale including ability to collect and process hundreds of GBytes of data per second across hundreds of thousands of data sources. On top of Kinesis you can build your big data applications or conduct analysis to give real-time key performance indicator dashboards, exception and alarm or event notification and other informed decision-making activity.
  • EC2 C3 instances provide Intel Xeon E5 processors and Solid State Device (SSD) based direct attached storage (DAS) like functionality vs. EBS provisioned IOPs for cost-effective storage I/O performance and compute capabilities.
  • Another EC2 enhancement is the G2 instance, which leverages a high performance NVIDIA GRID GPU with 1,536 parallel processing cores. This new instance is well suited for 3D graphics, rendering, streaming video and other related applications that need large-scale parallel or high performance compute (HPC), also known as high productivity compute.
  • Redshift (cloud data warehouse) now supports cross region snapshots for HA, BC and DR purposes.
  • CloudTrail records AWS API calls made via the management console for analytics and logging of API activity.
  • Beta of the Trusted Advisor dashboard with cost optimization saving estimates, including EBS and provisioned IOPs.
  • Relational Database Service (RDS) support for PostgreSQL, including multi-AZ deployment.
  • Ability to discover and launch various software from AWS Marketplace via the EC2 Console. The AWS Marketplace for those not familiar with it is a catalog of various software or application titles (over 800 products across 24 categories) including free and commercial licensed solutions that include SAP, Citrix, Lotus Notes/Domino among many others.
  • AppStream is a low latency (STX protocol based) service for streaming resource (e.g. compute, storage or memory) intensive applications and games from the AWS cloud to various clients, desktops or mobile devices. This means that the resource intensive functionality can be shifted to the cloud, while providing a low latency (e.g. fast) user experience, off-loading the client from having to support increased compute, memory or storage capabilities. Key to AppStream is the ability to stream data in a low-latency manner, including over networks normally not suited for high quality or bandwidth intensive applications. IMHO AppStream, while focused initially on mobile apps and gaming, being a bit-streaming technology has the potential to be used for other similar functions that can leverage download speed improvements.
  • When I asked an AWS person if or what role AppStream might have related to WorkSpaces, their only response was a large smile and no comment. Does this mean WorkSpaces leverages AppStream? Candidly I don't know; however, if you look deeper into AppStream and expand your horizons, see what you can think up in terms of innovation. Updated 11/21/13: AWS has provided clarification that WorkSpaces is based on PCoIP while AppStream uses the STX protocol.
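On the Kinesis item above: the service spreads incoming records across shards by hashing each record's partition key (per AWS documentation, an MD5 hash of the key maps into one shard's hash key range). A simplified, self-contained sketch of that fan-out, with an illustrative shard count and evenly split hash ranges (the real ranges come from the service):

```python
import hashlib

# Simplified sketch of how Kinesis distributes records across shards:
# the partition key is MD5-hashed to a 128-bit integer, which falls
# into exactly one shard's contiguous hash key range.

def shard_for(partition_key, num_shards):
    h = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
    range_size = 2 ** 128 // num_shards  # each shard owns a contiguous range
    return min(h // range_size, num_shards - 1)

# Records from many data sources land across shards roughly evenly:
counts = {}
for source_id in range(1000):
    shard = shard_for("sensor-%d" % source_id, 4)
    counts[shard] = counts.get(shard, 0) + 1
print(counts)
```

The same partition key always lands on the same shard, which is what preserves per-source ordering while the stream scales out.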

    Check out AWS Sr. VP Andy Jassy keynote presentation here.

Overall I found the AWS re:Invent event to be a good conference spanning many aspects and areas of focus which means I will be putting it on my must attend list for 2014.

Industry trends tips, commentary, articles and blog posts
What is being seen, heard and talked about while out and about

The following is a synopsis of some StorageIOblog posts, articles and comments in different venues on various industry trends, perspectives and related themes about clouds, virtualization, data and storage infrastructure topics among related themes.

Storage I/O posts

Recent industry trends, perspectives and commentary by StorageIO Greg Schulz in various venues:

NetworkComputing: Comments on Software-Defined Storage Startups Win Funding

Digistor: Comments on SSD and flash storage
InfoStor: Comments on data backup and virtualization software

ITbusinessEdge: Comments on flash SSD and hybrid storage environments

NetworkComputing: Comments on Hybrid Storage Startup Nimble Storage Files For IPO

InfoStor: Comments on EMC’s Light to Speed: Flash, VNX, and Software-Defined

InfoStor: Data Backup Virtualization Software: Four Solutions

ODSI: Q&A With Greg Schulz – A Quick Roundup of Data Storage Industry

Recent StorageIO Tips and Articles in various venues:

FedTechMagazine: 3 Tips for Maximizing Tiered Hypervisors
InfoStor: RAID Remains Relevant, Really!


Recent StorageIO blog post:

EMC announces XtremIO General Availability (Part I) – Announcement analysis of the all flash SSD storage system
Part II: EMC announces XtremIO General Availability, speeds and feeds – Part two of two part series with analysis
What does gaining industry traction or adoption mean too you? – There is a difference between buzz and deployment
Fall 2013 (September and October) StorageIO Update Newsletter – In case you missed the fall edition, here it is


Check out our objectstoragecenter.com page where you will find a growing collection of information and links on cloud and object storage themes, technologies and trends.

Server and StorageIO seminars, conferences, webcasts, events and other StorageIO activities (out and about)

Seminars, symposium, conferences, webinars
Live in person and recorded recent and upcoming events

While 2013 is winding down, the StorageIO calendar continues to evolve, here are some recent and upcoming activities.

  • December 11, 2013 – Backup.U: Data Protection for Cloud 201 – Google+ hangout
  • December 3, 2013 – Backup.U: Data Protection for Cloud 101 – Online webinar
  • November 19, 2013 – Backup.U: Data Protection for Virtualization 201 – Google+ hangout
  • November 12-13, 2013 – AWS re:Invent event – Las Vegas, NV
  • November 5, 2013 – Backup.U: Data Protection for Virtualization 101 – Online webinar
  • October 22, 2013 – Backup.U: Data Protection for Applications 201 – Google+ hangout

Click here to view other upcoming and earlier event activities. Watch for more 2013 events to be added soon to the StorageIO events calendar page. Topics include data protection modernization (backup/restore, HA, BC, DR, archive), data footprint reduction (archive, compression, dedupe), storage optimization, SSD, object storage, server and storage virtualization, big data, little data, cloud and object storage, performance and management trends among others.

Vendors, VARs and event organizers: give us a call or send an email to discuss having us involved in your upcoming podcast, webcast, virtual seminar, conference or other events.

If you missed the Fall (September and October) 2013 StorageIO Update newsletter, click here to view that and other previous editions as HTML or PDF versions.

Click here to subscribe to this newsletter (and pass it along). View archives of past StorageIO Update newsletters as well as download PDF versions at: www.storageio.com/newsletter

Ok, nuff said (for now).
Cheers Gs


EMC announces XtremIO General Availability (Part I)


EMC announces XtremIO flash SSD General Availability

EMC announced today the general availability (GA) of the all flash Solid State Device (SSD) XtremIO system that they acquired a little over a year ago. Earlier this year EMC announced directed availability (DA) of the EMC version of XtremIO as part of other SSD hardware and software updates (here and here). The XtremIO GA announcement also follows that of the VNX2 or MCx released in September of this year, which also has flash SSD enhancements along with doing more with available resources.

EMC XtremIO flash SSD boosting storage I/O performance

As an industry trend, the question is not if SSD is in your future, rather where, when, how much and what to use, along with coexistence to complement Hard Disk Drive (HDD) based solutions in some environments. This also means that SSD is like real estate, where location matters, not to mention having different types of technologies, packaging and solutions to meet various needs (and price points). This all ties back to the idea that the best server and storage I/O or IOP is the one that you do not have to do; the second best is the one with the least impact and the best application benefit.

From industry adoption to customer deployment

EMC has evolved the XtremIO platform from a pre-acquisition solution to a first EMC version that was offered to an early set of customers (e.g. DA).

I suspect that the DA was as much a focus on getting early customer feedback and addressing immediate needs or opportunities as it was on getting the EMC sales and marketing teams' messaging and marching orders aligned and deployed. The latter would be rather important to decrease or avoid the temptation to cannibalize existing product sales with the shiny new technology (SNT). Likewise, it would be important for EMC to not create isolated pockets or fenced off products as some other vendors often do.

EMC XtremIO X-Brick
25 SSD drive X-Brick

What is being announced?

  • General availability vs. directed or limited availability
  • Version 2.2 of the XIOS platform software
  • Integrating with EMC support and service tools

Let us get back to this announcement and XtremIO, of which EMC has indicated that they have several customers who have now done either $1M or $5M USD deals. EMC has claimed over 1.5 PBytes have been booked and deployed, or, with data footprint reduction (DFR) including dedupe, over 10PB of effective capacity. Note that for those who are focused on dedupe or DFR reduction ratios, 10:1.5 may not be as impressive as seen with some backup solutions; however, keep in mind that this is for primary high performance storage vs. secondary or tertiary storage devices.
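The effective capacity claim above is simple division, and working it out shows why 10:1.5 is really about a 6.7:1 reduction ratio:

```python
# Quick check on the claimed XtremIO numbers: 1.5 PB physical
# (booked and deployed) vs. 10 PB effective after dedupe / DFR.
physical_pb = 1.5
effective_pb = 10.0

ratio = effective_pb / physical_pb
print("DFR ratio: %.1f:1" % ratio)  # DFR ratio: 6.7:1
```

Modest next to the 20:1 or higher ratios sometimes quoted for backup data, but again, this is primary high performance storage.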

As part of this announcement, EMC has also released V2.2 of the XtremIO platform software (XIOS). Normally a new product would start with a version 1.0 at launch; however, as explained, this is both a new version of the technology as well as the initial GA by EMC.

Also as part of this announcement, EMC is making available XtremIO 10TB X-Bricks with 25 eMLC SSD drives each, along with dual controllers (storage processors). EMC has indicated that it will make available a 20TB X-Brick using larger capacity SSD drives in January 2014. Note that the same type of SSD drives must be used in the systems. Currently there can be up to four X-Bricks per XtremIO cluster or instance, interconnected using a dedicated InfiniBand fabric. Application servers access the XtremIO X-Bricks using standard Fibre Channel or Ethernet and IP based iSCSI. In addition to the hardware platform items, the XtremIO platform software (XIOS) includes built-in, on the fly data footprint reduction (DFR) using global dedupe during data ingestion and placement. Other features include thin provisioning, VMware VAAI, data protection and self-balancing data placement.
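The inline (during ingestion) global dedupe mentioned above can be illustrated with a sketch of the general technique: fingerprint each incoming block and store only blocks whose fingerprint has not been seen before. This is the generic approach, not EMC's XIOS implementation; the class and names here are illustrative only.

```python
import hashlib

# General sketch of inline block dedupe: fingerprint each fixed-size
# block at write time and store only unique blocks, keeping a map
# from logical block address to fingerprint.

class DedupeStore:
    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}  # fingerprint -> block data (stored once)
        self.lmap = {}    # logical block address -> fingerprint

    def write(self, lba, data):
        fp = hashlib.sha1(data).hexdigest()
        if fp not in self.blocks:  # only new, unique data consumes space
            self.blocks[fp] = data
        self.lmap[lba] = fp

    def read(self, lba):
        return self.blocks[self.lmap[lba]]

    def dedupe_ratio(self):
        return len(self.lmap) / float(max(len(self.blocks), 1))

store = DedupeStore()
for lba in range(10):
    store.write(lba, b"A" * 4096)  # ten identical logical blocks
store.write(10, b"B" * 4096)       # one unique block
print(store.dedupe_ratio())        # 11 logical blocks, 2 unique -> 5.5
```

The trick for a primary storage system is doing this fingerprint-and-lookup in the write path without adding latency, which is where the "without performance compromise" claim comes in.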


Who or what applications are XtremIO being positioned for?

Some of XtremIO industry sectors include:

  • Financial and insurance services
  • Medical, healthcare and life sciences
  • Manufacturing, retail and warehouse management
  • Government and defense
  • Media and entertainment

Application and workload focus:

  • VDI including replacing linked clones with ability to do full clone without overhead
  • Server virtualization where aggregation causes aggravation with many mixed IOPs
  • Database for reducing latency, boosting IOPs as well as improving software license costs.

Databases such as IBM DB2, Oracle RAC, Microsoft SQL Server and MySQL among others have traditionally, for decades, been a prime opportunity for SSD (DRAM and flash). This also includes newer NoSQL or key value stores and metadata repositories for objects, such as MongoDB, HBase, Cassandra and Riak among others. Typical focus includes placing entire instances, or specific files and objects such as indices, journals and redo logs, import/export temp or scratch space, message queues and high activity tables among others.

What about overlap with other EMC products?

If you simply looked at the above list of sectors (among others) or applications, you could easily come to the conclusion that there is or would be overlap. Granted, in some environments there will be, which means XtremIO (or other vendors' solutions) may be the primary storage solution. On the other hand, since everything is not the same in most data centers or information factories, there will be a mix of storage systems handling various tasks. This is where EMC will need to be careful applying what they learned during DA about where to place XtremIO and how to position it to complement other solutions when and where needed, or, as applicable, be a replacement.

XtremIO Announcement Summary

  • All flash SSD storage solution with iSCSI and Fibre Channel server attachment
  • Scale out and scale up performance while keeping latency low and deterministic
  • Enhanced flash duty cycle (wear leveling) to increase program / erase (P/E) cycles durability
  • Can complement other storage systems, arrays or appliances or function as a standalone
  • Coexists and complements host side caching hardware and software
  • Inline always on data footprint reduction (DFR) including dedupe (global dedupe without performance compromise), space saving snapshots and copies along with thin provisioning


Some General Comments and Perspectives

Overall, XtremIO gives EMC and their customers, partners and prospects a new technology to use and add to their toolbox for addressing various challenges. SSD is in your future, when, where, with what and how are questions not to mention how much. After all, a bit of flash SSD in the right location used effectively can have a large impact. On the other hand, a lot of flash SSD in the wrong place or not used effectively will cost you lots of cash. Key for EMC and their partners will be to articulate clearly, where XtremIO fits vs. other solutions without adding complexity.

Check out part II of this series to learn more about XtremIO, including what it is, how it works, the competition and added perspectives.

Ok, nuff said (for now).

Cheers
Gs


DataDynamics StorageX 7.0 file and data management migration software


DataDynamics StorageX 7.0 file and data management migration software

Some of you may recall back in 2006 (here and here) when Brocade bought a file management storage startup called NuView whose product was StorageX, and then in 2009 issued end of life (EOL) notice letters that the solution was being discontinued.

Fast forward to 2013 and there is a new storage startup (DataDynamics) with an existing product that was just updated and re-released as StorageX 7.0.

Software Defined File Management – SDFM?

Granted from an industry buzz focused adoption perspective you may not have heard of DataDynamics or perhaps even StorageX. However many other customers around the world from different industry sectors have as well as are using the solution.

The current industry buzz is around software defined data centers (SDDC), which has led to software defined networking (SDN), software defined storage (SDS), and other software defined marketing (SDM) terms, not to mention Valueware. So for those who like software defined marketing or software defined buzzwords, you can think of StorageX as software defined file management (SDFM); however, don't ask or blame them about using it, as I just thought of it for them ;).

This is an example of industry adoption traction (what is being talked about) vs. industry deployment and customer adoption (what is actually in use on a revenue basis). DataDynamics is not a well-known company yet; however, they have what many of the high-flying startups with industry adoption don't have: an installed base of revenue customers that also now have a new version 7.0 product to deploy.

StorageX 7.0 enabling intelligent file and data migration management

Thus, a common theme is adding management, including automated data movement and migration, to bring structure to unstructured NAS file data. More than a data mover or storage migration tool, Data Dynamics StorageX is a software platform for adding storage management structure around unstructured local and distributed NAS file data. This includes heterogeneous vendor support across different storage systems, protocols and tools, including Windows CIFS and Unix/Linux NFS.

Storage I/O image

A few months back, prior to its release, I had an opportunity to test drive StorageX 7.0 and have included some of my comments in this industry trends perspective technology solution brief (PDF). The brief, titled Data Dynamics StorageX 7.0 Intelligent Policy Based File Data Migration, is a free download with no registration required (as are others found here); however, per our disclosure policy and for transparency, DataDynamics has been a StorageIO client.

If you have a need for gaining insight and management control over your unstructured file data to support migrations for upgrades, technology refresh, archiving or tiering across different vendors including EMC and NetApp, check out DataDynamics StorageX 7.0, take it for a test drive like I did, and tell them StorageIO sent you.

Ok, nuff said,

Cheers
Gs


VMworld 2013 Vmware, server, storage I/O and networking update (Day 1)

Storage I/O trends

Congratulations to VMware on 10 years of VMworld!

With the largest installment of VMworld yet in terms of attendance, there were many announcements today (e.g. Monday), with many more slated throughout the week. Here is a synopsis of some of those announcements.

Software Defined Data Center (SDDC) and Software Defined Networks (SDN)

VMware made a series of announcements today that set the stage for many others. Not surprisingly, these involved SDDC, SDN, SDS, vSphere 5.5 and other management tool enhancements, or the other SDM (Software Defined Management).

VMworld image

Here is a synopsis of what was announced by VMware.

VMware NSX (SDN) combines Nicira NVP™ along with vCloud Network and Security
VMware Virtual SAN (VSAN) not to be confused with virtual storage appliances (VSAs)
VMware vCloud Suite 5.5
VMware vSphere 5.5 (includes support for new Intel Xeon and Atom processors)
VMware vSphere App HA
VMware vSphere Flash Read Cache software
VMware vSphere Big Data Extensions
VMware vCloud Automation Center
VMware vCloud

Note that while these were announced today, some will be in public beta soon, with general availability over the next few months or quarters (learn more here including pricing and availability). More on these and other enhancements in future posts. However for now check out what Duncan Epping (@DuncanYB) of VMware has to say over at his Yellow Bricks site here, here and here.

buzzword bingo
Buzzword Bingo

Additional VMworld Software Defined Announcements

Dell made some announcements as well for cloud and virtual environments in support of VMware, spanning networking, servers, hardware and software. With all the recent acquisitions by Dell, including Quest (where they picked up the Foglight management tools along with vRanger, BakBone and others), Dell has amassed an interesting portfolio. On the hardware front, check out the VRTX shared server infrastructure; I want one for my VMware environment, now I just need to justify it (to myself). Speaking of Dell, if you are at VMworld on Tuesday August 27 around 1:30PM, stop by the Dell booth where I will be presenting, including announcing some new things (stay tuned for more on that soon).

HP also had announcements today, jumping into SDDC and SDN with some Software Defined Marketing (SDM) and Software Defined Announcements (SDA) in addition to using their Unified Data Center theme. Today's announcements by HP focused more on SDN and VMware NSX, along with the HP Virtual Application Networks SDN Controller and VMware networking.

NetApp (Booth #1417) announced more integration between their Data ONTAP based solutions and VMware vSphere, Horizon Suite, vCenter, vCloud Automation Center and vCenter Log Insight under the theme of SDDC and SDS. As part of the enhancements, NetApp announced Virtual Storage Console (VSC 5.0) for end-to-end storage management and software in VMware environments, in addition to integration with VMware vCenter Server 5.5. Not to be left out of the SSD flash dash, NetApp also released a new V1.2 of their FlashAccel software for vSphere 5.0 and 5.1.

Storage I/O trends

Cloud, Virtualization and DCIM

Here is one that you probably have not seen or heard much about elsewhere: Nlyte's announcement of their V1.5 Virtualization Connector for Data Center Infrastructure Management (DCIM). Keep in mind that DCIM is about more than facilities, power and cooling related themes, particularly in virtual data centers. Thus some of the DCIM vendors, as well as others, are moving into the converged DCIM space that spans server, storage, networking, hardware, software and facilities topics.

Interested in or want to know more about DCIM? Then check out these items:
Data Center Infrastructure Management (DCIM) and Infrastructure Resource Management (IRM)
Data Center Tools Can Streamline Computing Resources
Considerations for Asset Tracking and DCIM

Data Protection including Backup/Restore, BC, DR and Archiving

Quantum announced that Commvault has added support to use the Lattus object storage based solution as an archive target platform. You can learn more about object storage (access and architectures) at www.objectstoragecenter.com.

PHD Virtual did a couple of data protection (backup/restore, BC, DR) related announcements (here and here). Speaking of backup/restore and data protection, if you are at VMworld on Tuesday August 27th around 1:30PM, stop by the Dell booth where I will be presenting; stay tuned for more info on some things we are going to announce at that time.

In case you missed it, Imation (who bought Nexsan earlier this year) last week announced their new unified NST6000 series of storage systems. The NST6000 storage solutions support Fibre Channel (FC) and iSCSI for block access, along with NFS, CIFS/SMB and FTP for file access from virtual and physical servers.

Emulex announced some new 16Gb Fibre Channel (e.g. 16GFC, aka what Brocade wants you to refer to as Gen 5) converged and multi-port adapters. I wonder how many still remember, or would rather forget, how many ASIC and adapter generations from various vendors occurred just at 1Gb Fibre Channel?

Storage I/O trends

Caching and flash SSD

Proximal Data announced AutoCache 2.0 with role based administration, multi-hypervisor support (a growing trend beyond just a VMware focus) and more vCenter/vSphere integration. This is on the heels of last week's FusionIO powered IBM Flash Cache Storage Accelerator (FCSA) announcement, along with others such as EMC, Infinio, Intel, NetApp, PernixData and SanDisk (FlashSoft) to name a few.

Mellanox (VMworld booth #2005), you know, the InfiniBand folks who also have some Ethernet (which also includes Fibre Channel over Ethernet) technology, did a series of announcements today with various PCIe nand flash SSD card vendors. The common theme with the various vendors, including Micron (Booth #1635) and LSI, is support of VMware virtual servers using iSER or iSCSI over RDMA (Remote Direct Memory Access). RDMA, or server to server direct memory access (what some of you might know as remote memory mapped IO or channel to channel C2C), enables very fast, low-latency server to server data movement, such as in a VMware cluster. Check out Mellanox and their 40Gb Ethernet along with InfiniBand among other solutions if you are into server, storage I/O and general networking, along with their partners. Need or want to learn more about networking with your servers and storage? Check out Cloud and Virtual Data Storage Networking and Resilient Storage Networks.

Rest assured there are many more announcements and updates to come this week, and in the weeks to follow…

Ok, nuff said (for now).

Cheers gs


Can we get a side of context with them IOPS server storage metrics?

Can we get a side of context with them server storage metrics?

What's the best server storage I/O network metric or benchmark? It depends, as there needs to be some context with them IOPS and other server storage I/O metrics that matter.

There is an old saying that the best I/O (Input/Output) is the one that you do not have to do.

In the meantime, let’s get a side of some context with them IOPS from vendors, marketers and their pundits who are tossing them around for server, storage and IO metrics that matter.

Expanding the conversation, the need for more context

The good news is that people are beginning to discuss storage beyond space capacity and cost per GByte, TByte or PByte, for DRAM and nand flash Solid State Devices (SSD), Hard Disk Drives (HDD), along with Hybrid HDD (HHDD) and Solid State Hybrid Drive (SSHD) based solutions. This applies to traditional enterprise or SMB IT data centers with physical, virtual or cloud based infrastructures.

hdd and ssd iops

This is good because it expands the conversation beyond just cost for space capacity into other aspects including performance (IOPS, latency, bandwidth) for various workload scenarios, along with availability, energy effectiveness and management.

Adding a side of context

The catch is that IOPS, while part of the equation, are just one aspect of performance; by themselves, without context, they may have little meaning, if not be misleading in some situations.

Granted, it can be entertaining, fun to talk about, or simply good press copy to cite a million IOPS. However, IOPS vary in size depending on the type of work being done, not to mention reads vs. writes and random vs. sequential, all of which also have a bearing on data throughput or bandwidth (MBytes per second) along with response time. Not to mention block, file, object or blob, as well as table access.

However, are those million IOPS applicable to your environment or needs?

Likewise, what do those million or more IOPS represent about the type of work being done? For example, are they small 64 byte or large 64 Kbyte sized, random or sequential, cached reads or lazy writes (deferred or buffered), on a SSD or HDD?

How about the response time or latency for achieving them IOPS?

In other words, what is the context of those metrics and why do they matter?

storage i/o iops
Click on image to view more metrics that matter including IOP’s for HDD and SSD’s

Metrics that matter give context, for example IO sizes closer to what your real needs are, reads and writes, mixed workloads, random or sequential, sustained or bursty; in other words, real world reflective.

As with any benchmark, take them with a grain (or more) of salt; the key is to use them as an indicator, then align to your needs. The tool or technology should work for you, not the other way around.

Here are some examples of context that can be added to help make IOPS and other metrics matter:

  • What is the IOP size, are they 512 byte (or smaller) vs. 4K bytes (or larger)?
  • Are they reads, writes, random, sequential or mixed and what percentage?
  • How was the storage configured including RAID, replication, erasure or dispersal codes?
  • Then there is the latency or response time and IO queue depths for the given number of IOPS.
  • Let us not forget if the storage systems (and servers) were busy with other work or not.
  • If there is a cost per IOP, is that list price or discount (hint, if discount start negotiations from there)
  • What was the number of threads or workers, along with how many servers?
  • What tool was used, its configuration, as well as raw or cooked (aka file system) IO?
  • Was the IOPS number with one worker or multiple workers on a single or multiple servers?
  • Did the IOPS number come from a single storage system or a total across multiple systems?
  • Fast storage needs fast servers and networks; what was their configuration?
  • Was the performance a short burst, or long sustained period?
  • What was the size of the test data used; did it all fit into cache?
  • Were short stroking for IOPS or long stroking for bandwidth techniques used?
  • Data footprint reduction (DFR) techniques (thin provisioned, compression or dedupe) used?
  • Were write data committed synchronously to storage, or deferred (aka lazy writes used)?

The above are just a sampling and not all may be relevant to your particular needs; however, they help put IOPS into more context. Another consideration is the configuration of the environment: were the numbers from an actual running application using some measurement tool, or generated from a workload tool such as IOmeter, IOrate or VDbench among others?
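As a quick way to add some of that context yourself, a vendor's numbers can be cross-checked with simple arithmetic: bandwidth is IOPS times IO size, and Little's Law ties IOPS to latency and the number of IOs in flight. The figures below are illustrative examples, not from any particular vendor:

```python
# Sanity-check an IOPS claim by deriving the metrics that must accompany it.
# All figures here are illustrative, not from any vendor's datasheet.

def bandwidth_kb_s(iops, io_size_kb):
    """Bandwidth implied by an IOPS rate at a given IO size."""
    return iops * io_size_kb

def implied_concurrency(iops, avg_latency_ms):
    """Little's Law: IOs in flight = arrival rate x time in system."""
    return iops * avg_latency_ms / 1000.0

# A "1 million IOPS" claim at 512 bytes is only ~500 MB/sec of bandwidth...
print(bandwidth_kb_s(1_000_000, 0.5))       # 500000.0 (KB/sec)
# ...while the same million IOPS at 64K would be ~64 GB/sec.
print(bandwidth_kb_s(1_000_000, 64))        # 64000000 (KB/sec)
# And 1M IOPS at 1 ms average latency implies ~1,000 IOs outstanding.
print(implied_concurrency(1_000_000, 1.0))  # 1000.0
```

If the claimed IOPS, latency and bandwidth do not reconcile this way, that is a cue to ask which of them was measured and which was inferred.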

Sure, there are more contexts and information that would be interesting as well, however learning to walk before running will help prevent falling down.

Storage I/O trends

Does size or age of vendors make a difference when it comes to context?

Some vendors are doing a good job of going for out-of-this-world, record-setting marketing hero numbers.

Meanwhile, other vendors are doing a good job of adding context to their IOPS, response time, bandwidth and other metrics that matter. There is a mix of startup and established vendors that give context with their IOPS; likewise, size or age does not seem to matter for those who lack context.

Some vendors may not offer metrics or information publicly, so fine, go under NDA to learn more and see if the results are applicable to your environments.

Likewise, if they do not want to provide the context, then ask some tough yet fair questions to decide if their solution is applicable for your needs.

Storage I/O trends

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

What this means is let us start putting out, and asking for, metrics that matter, such as IOPS with context.

If you have a great IOPS metric and you want it to matter, then include some context such as IO size (e.g. 4K, 8K, 16K, 32K, etc.), percentage of reads vs. writes, latency or response time, and random vs. sequential.

IMHO the most interesting or applicable metrics that matter are those relevant to your environment and application. For example, if your main application that needs SSD does about 75% reads (random) and 25% writes (sequential) with an average size of 32K, then while fun to hear about, how relevant is a million 64 byte read IOPS? Likewise when looking at IOPS, pay attention to the latency, particularly if SSD or performance is your main concern.
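For a rough sanity check of a mixed workload like the 75/25 example above, pure read and pure write measurements can be blended by averaging their per-IO service times. This assumes a single serial queue, which real devices overlap, so treat it as a floor or estimate only; the numbers below are made up for illustration:

```python
# Rough mixed-workload IOPS estimate from pure-read and pure-write
# measurements, assuming IOs are serviced one at a time (a simplification;
# real devices overlap work, so use this only as a sanity check).

def mixed_iops(read_iops, write_iops, read_fraction):
    t_read = 1.0 / read_iops      # average seconds per read
    t_write = 1.0 / write_iops    # average seconds per write
    t_mixed = read_fraction * t_read + (1 - read_fraction) * t_write
    return 1.0 / t_mixed

# Example: 100 read IOPS, 80 write IOPS, 75% reads (illustrative numbers)
print(round(mixed_iops(100, 80, 0.75)))  # 94
```

Note the result sits between the two pure numbers, weighted toward the more common operation; a vendor quote far outside that range for a mixed workload deserves a question.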

Get in the habit of asking (or telling) vendors and their surrogates to provide some context with them metrics if you want them to matter.

So how about some context around them IOPS (or latency and bandwidth, or availability for that matter)?

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Web chat Thur May 30th: Hot Storage Trends for 2013 (and beyond)

Storage I/O trends

Join me on Thursday May 30, 2013 at Noon ET (9AM PT) for a live web chat at the 21st Century IT (21cit) site (click here to register, sign up, or view earlier posts). This will be an online, interactive web chat format conversation, so if you are not able to attend, you can visit at your convenience to view it and post your questions along with comments. I have done several of these web chats with 21cit as well as other venues; they are a lot of fun and engaging (time flies by fast).

For those not familiar, 21cIT is part of the DeusM/UBM family of sites including Internet Evolution, SMB Authority, and Enterprise Efficiency among others, for which I do article posts, videos and live chats.


Sponsored by NetApp

I like these types of sites in that while they have a sponsor, the content is generally kept separate between that of editors and contributors like myself and the vendor supplied material. In other words, I coordinate with the site editors on what topics I feel like writing (or doing videos) about that align with the given site's focus and themes, as opposed to following an advertorial calendar script.

During this industry trends perspective web chat, one of the topics and themes planned for discussion is software defined storage (SDS). View a recent video blog post I did about SDS here. In addition to SDS, Solid State Devices (SSD) including nand flash, cloud, virtualization, object storage, backup and data protection, performance, and management tools among others are topics that will be put on the virtual discussion table.

Storage I/O trends

Following are some examples of recent and earlier industry trends perspectives posts that I have done over at 21cit:

Video: And Now, Software-Defined Storage!
There are many different views on what is or is not “software-defined” with products, protocols, preferences and even press releases. Check out the video and comments here.

Big Data and the Boston Marathon Investigation
How the human face of big-data will help investigators piece together all the evidence in the Boston bombing tragedy and bring those responsible to justice. Check out the post and comments here.

Don’t Use New Technologies in Old Ways
You can add new technologies to your data center infrastructure, but you won’t get the full benefit unless you update your approach with people, processes, and policies. Check out the post and comments here.

Don’t Let Clouds Scare You, Be Prepared
The idea of moving to cloud computing and cloud services can be scary, but it doesn’t have to be so if you prepare as you would for implementing any other IT tool. Check out the post and comments here.

Storage and IO trends for 2013 (& Beyond)
Efficiency, new media, data protection, and management are some of the keywords for the storage sector in 2013. Check out these and other trends, predictions along with comments here.

SSD and Real Estate: Location, Location, Location
You might be surprised how many similarities between buying real estate and buying SSDs.
Location matters and it’s not if, rather when, where, why and how you will be using SSD including nand flash in the future, read more and view comments here.

Everything Is Not Equal in the Data center, Part 3
Here are steps you can take to give the right type of backup and protection to data and solutions, depending on the risks and scenarios they face. The result? Savings and efficiencies. Read more and view comments here.

Everything Is Not Equal in the Data center, Part 2
Your data center’s operations can be affected at various levels, by multiple factors, in a number of degrees. And, therefore, each scenario requires different responses. Read more and view comments here.

Everything Is Not Equal in the Data center, Part 1
It pays to check your data center. Different components need different levels of security, storage, and availability. Read more and view comments here.

Data Protection Modernizing: More Than Buzzword Bingo
IT professionals and solution providers should put technologies such as disk based backup, dedupe, cloud, and data protection management tools as assets and resources to make sure they receive necessary funding and buy in. Read more and view comments here.

Don’t Take Your Server & Storage IO Pathing Software for Granted
Path managers are valuable resources. They will become even more useful as companies continue to carry out cloud and virtualization solutions. Read more and view comments here.

SSD Is in Your Future: Where, When & With What Are the Questions
During EMC World 2012, EMC (as have other vendors) made many announcements around flash solid-state devices (SSDs), underscoring the importance of SSDs to organizations future storage needs. Read more here about why SSD is in your future along with view comments.

Changing Life cycles and Data Footprint Reduction (DFR), Part 2
In the second part of this series, the ABCDs (Archive, Backup modernize, Compression, Dedupe and data management, storage tiering) of data footprint reduction, as well as SLOs, RTOs, and RPOs are discussed. Read more and view comments here.

Changing Life cycles and Data Footprint Reduction (DFR), Part 1
Web 2.0 and related data needs to stay online and readily accessible, creating storage challenges for many organizations that want to cut their data footprint. Read more and view comments here.

No Such Thing as an Information Recession
Data, even older information, must be protected and made accessible cost-effectively. Not to mention that people and data are living longer as well as getting larger. Read more and view comments here.

Storage I/O trends

These real-time, industry trends perspective interactive chats at 21cit are open forum format (however, be polite and civil) with no vendor sales or marketing pitches. If you have specific questions you'd like to ask or points of view to express, click here and post them in the chat room at any time (before, during or after).

Mark your calendar for this event live Thursday, May 30, at noon ET or visit after the fact.

Ok, nuff said (for now)

Cheers gs


May 2013 Server and StorageIO Update Newsletter

StorageIO News Letter Image
May 2013 Newsletter

Welcome to the May 2013 edition of the StorageIO Update. This edition has announcement analysis of EMC ViPR and Software Defined Storage (including a video here), plus server, storage and I/O metrics that matter, for example how many IOPS a HDD can do (it depends). SSD including nand flash remains a popular topic, both in terms of industry adoption and customer deployment. Also included are my perspectives on SSD vendor FusionIO's CEO leaving in a flash. Speaking of nand flash, have you thought about how some RAID implementations and configurations can extend the life and durability of SSDs? More on this soon; in the meantime, check out this video for some perspectives.

Click on the following links to view the May 2013 edition as (HTML sent via Email) version, or PDF versions.

Visit the newsletter page to view previous editions of the StorageIO Update.

You can subscribe to the newsletter by clicking here.

Enjoy this edition of the StorageIO Update newsletter, and let me know your comments and feedback.

Ok Nuff said, for now

Cheers
Gs


Part II: How many IOPS can a HDD HHDD SSD do with VMware?

How many IOPS can a HDD HHDD SSD do with VMware?

server storage data infrastructure i/o iop hdd ssd trends

Updated 2/10/2018

This is the second post of a two-part series looking at storage performance, specifically in the context of drive or device (e.g. media) characteristics, asking how many IOPS a HDD, HHDD or SSD can do with VMware. The first post put some context around drive and device performance, while this second part looks at some workload characteristics (e.g. benchmarks).

A common question is how many IOPS (IO Operations Per Second) can a storage device or system do?

The answer is or should be it depends.

Here are some examples to give you some more insight.

For example, the following shows how IOPS vary when changing the percentage of reads, writes, random and sequential access for a 4K (4,096 bytes or 4 KBytes) IO size, with each test step running for 4 minutes.

IO Size | Workload Pattern | Avg. Resp (R+W) ms | Avg. IOPS (R+W) | Bandwidth KB/sec (R+W)
4KB | 100% Seq, 100% Read | 0.0 | 29,736 | 118,944
4KB | 60% Seq, 100% Read | 4.2 | 236 | 947
4KB | 30% Seq, 100% Read | 7.1 | 140 | 563
4KB | 0% Seq, 100% Read | 10.0 | 100 | 400
4KB | 100% Seq, 60% Read | 3.4 | 293 | 1,174
4KB | 60% Seq, 60% Read | 7.2 | 138 | 554
4KB | 30% Seq, 60% Read | 9.1 | 109 | 439
4KB | 0% Seq, 60% Read | 10.9 | 91 | 366
4KB | 100% Seq, 30% Read | 5.9 | 168 | 675
4KB | 60% Seq, 30% Read | 9.1 | 109 | 439
4KB | 30% Seq, 30% Read | 10.7 | 93 | 373
4KB | 0% Seq, 30% Read | 11.5 | 86 | 346
4KB | 100% Seq, 0% Read | 8.4 | 118 | 474
4KB | 60% Seq, 0% Read | 13.0 | 76 | 307
4KB | 30% Seq, 0% Read | 11.6 | 86 | 344
4KB | 0% Seq, 0% Read | 12.1 | 82 | 330

Dell/Western Digital (WD) 1TB 7200 RPM SATA HDD (Raw IO) thread count 1 4K IO size

In the above example the drive is a 1TB 7200 RPM 3.5 inch Dell (Western Digital) 3Gb SATA device doing raw (non file system) IO. Note the high IOP rate with 100 percent sequential reads and a small IO size which might be a result of locality of reference due to drive level cache or buffering.

Some drives have larger buffers than others from a couple to 16MB (or more) of DRAM that can be used for read ahead caching. Note that this level of cache is independent of a storage system, RAID adapter or controller or other forms and levels of buffering.

Does this mean you can expect or plan on getting those levels of performance?

I would not make that assumption, and thus this serves as an example of using metrics like these in the proper context.
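One way to see the context in the 4K single-thread results above is that the columns are internally consistent: with one IO in flight, IOPS is roughly 1000 divided by the average response time in milliseconds, and bandwidth is simply IOPS times the IO size. A quick check against two of the random (0% sequential) rows:

```python
# Internal consistency check on the 4K single-thread results above: with
# one outstanding IO, IOPS ~= 1000 / avg response (ms), and bandwidth is
# IOPS x IO size. Values are copied from two random (0% seq) table rows.

def check_row(io_kb, resp_ms, iops, kb_s):
    assert abs(iops * io_kb - kb_s) <= 4        # bandwidth = IOPS x IO size
    assert abs(1000.0 / resp_ms - iops) <= 1    # single IO in flight

check_row(4, 10.0, 100, 400)  # 0% seq, 100% read row
check_row(4, 12.1, 82, 330)   # 0% seq, 0% read row
print("rows are self-consistent")
```

The sequential-read row breaks the latency relationship only because its displayed response time rounds to 0.0 ms, which is itself a clue that cache, not the media, is being measured.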

Building off the previous example, the following uses the same drive, however with a 16K IO size.

IO Size | Workload Pattern | Avg. Resp (R+W) ms | Avg. IOPS (R+W) | Bandwidth KB/sec (R+W)
16KB | 100% Seq, 100% Read | 0.1 | 7,658 | 122,537
16KB | 60% Seq, 100% Read | 4.7 | 210 | 3,370
16KB | 30% Seq, 100% Read | 7.7 | 130 | 2,080
16KB | 0% Seq, 100% Read | 10.1 | 98 | 1,580
16KB | 100% Seq, 60% Read | 3.5 | 282 | 4,522
16KB | 60% Seq, 60% Read | 7.7 | 130 | 2,090
16KB | 30% Seq, 60% Read | 9.3 | 107 | 1,715
16KB | 0% Seq, 60% Read | 11.1 | 90 | 1,443
16KB | 100% Seq, 30% Read | 6.0 | 165 | 2,644
16KB | 60% Seq, 30% Read | 9.2 | 109 | 1,745
16KB | 30% Seq, 30% Read | 11.0 | 90 | 1,450
16KB | 0% Seq, 30% Read | 11.7 | 85 | 1,364
16KB | 100% Seq, 0% Read | 8.5 | 117 | 1,874
16KB | 60% Seq, 0% Read | 10.9 | 92 | 1,472
16KB | 30% Seq, 0% Read | 11.8 | 84 | 1,353
16KB | 0% Seq, 0% Read | 12.2 | 81 | 1,310

Dell/Western Digital (WD) 1TB 7200 RPM SATA HDD (Raw IO) thread count 1 16K IO size

The previous two examples are excerpts of a series of workload simulation tests (ok, you can call them benchmarks) that I have done to collect information, as well as try some different things out.

The following is an example of the summary for each test output, which includes the IO size, workload pattern (reads, writes, random, sequential), duration of each workload step, totals for reads and writes, along with averages including IOPS, bandwidth and latency or response time.

disk iops

Want to see more numbers, speeds and feeds? Check out the following table, which will be updated with extra results as they become available.

Device | Vendor | Make | Model | Form Factor | Capacity | Interface | RPM Speed
HDD | HGST | Desktop | HK250-160 | 2.5 | 160GB | SATA | 5.4K
HDD | Seagate | Mobile | ST2000LM003 | 2.5 | 2TB | SATA | 5.4K
HDD | Fujitsu | Desktop | MHWZ160BH | 2.5 | 160GB | SATA | 7.2K
HDD | Seagate | Momentus | ST9160823AS | 2.5 | 160GB | SATA | 7.2K
HDD | Seagate | MomentusXT | ST95005620AS | 2.5 | 500GB | SATA | 7.2K (1)
HDD | Seagate | Barracuda | ST3500320AS | 3.5 | 500GB | SATA | 7.2K
HDD | WD/Dell | Enterprise | WD1003FBYX | 3.5 | 1TB | SATA | 7.2K
HDD | Seagate | Barracuda | ST3000DM01 | 3.5 | 3TB | SATA | 7.2K
HDD | Seagate | Desktop | ST4000DM000 | 3.5 | 4TB | SATA | HDD
HDD | Seagate | Capacity | ST6000NM00 | 3.5 | 6TB | SATA | HDD
HDD | Seagate | Capacity | ST6000NM00 | 3.5 | 6TB | 12GSAS | HDD
HDD | Seagate | Savio 10K.3 | ST9300603SS | 2.5 | 300GB | SAS | 10K
HDD | Seagate | Cheetah | ST3146855SS | 3.5 | 146GB | SAS | 15K
HDD | Seagate | Savio 15K.2 | ST9146852SS | 2.5 | 146GB | SAS | 15K
HDD | Seagate | Ent. 15K | ST600MP0003 | 2.5 | 600GB | SAS | 15K
SSHD | Seagate | Ent. Turbo | ST600MX0004 | 2.5 | 600GB | SAS | SSHD
SSD | Samsung | 840 Pro | MZ-7PD256 | 2.5 | 256GB | SATA | SSD
HDD | Seagate | 600 SSD | ST480HM000 | 2.5 | 480GB | SATA | SSD
SSD | Seagate | 1200 SSD | ST400FM0073 | 2.5 | 400GB | 12GSAS | SSD

Performance characteristics 1 worker (thread count) for RAW IO (non-file system)

Note: (1) The Seagate Momentus XT is a Hybrid Hard Disk Drive (HHDD) based on a 7.2K 2.5 inch HDD with SLC nand flash integrated as a read buffer in addition to the normal DRAM buffer. This model is the XT I (4GB SLC nand flash); an XT II (8GB SLC nand flash) may be added at some future time.

As a starting point, these results are raw IO, with file system based information to be added soon along with more devices. These results are for tests with one worker or thread count; other results, such as with 16 workers or thread counts, will be added to show how those differ.

The above results include all reads, all writes, a mix of reads and writes, along with all random, sequential and mixed for each IO size. IO sizes include 4K, 8K, 16K, 32K, 64K, 128K, 256K, 512K, 1024K and 2048K. As with any workload simulation, benchmark or comparison test, take these results with a grain of salt as your mileage can and will vary. For example, you will see some of what I consider very high IO rates with sequential reads even without file system buffering. These results might be due to locality of reference, with IOs being resolved out of the drive's DRAM cache (read ahead), which varies in size for different devices. Use the vendor model numbers in the table above to check the manufacturer's specs on drive DRAM and other attributes.

If you are used to seeing 4K or 8K and wonder why anybody would be interested in some of the larger sizes, take a look at big fast data or cloud and object storage. For some of those applications 2048K may not seem all that big. Likewise, if you are used to the larger sizes, there are still applications doing smaller sizes. Sorry for those who like 512 byte or smaller IOs, as they are not included. Note that unless indicated, a 512 byte standard sector or drive format is used, as opposed to the emerging Advanced Format (AF) 4KB sector or block size. Watch for more drive and device types to be added to the above, along with results for more workers or thread counts, plus file system and other scenarios.
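The reason larger IOs trade IOPS for bandwidth can be sketched with a simple model where each random IO pays a fixed positioning cost plus a size-dependent transfer cost. The seek time and transfer rate below are illustrative assumptions, not measurements of any drive in the table:

```python
# Simple (hypothetical) random-IO model for a spinning disk: every IO pays
# a fixed positioning cost plus time to move the data across the media.
# Both constants are illustrative assumptions, not measured values.

SEEK_MS = 12.0         # average positioning time per random IO (assumed)
TRANSFER_MB_S = 120.0  # sustained media transfer rate (assumed)

def model(io_size_kb):
    transfer_ms = (io_size_kb / 1024.0) / TRANSFER_MB_S * 1000.0
    total_ms = SEEK_MS + transfer_ms
    iops = 1000.0 / total_ms
    bandwidth_kb_s = iops * io_size_kb
    return round(iops, 1), round(bandwidth_kb_s)

# IOPS fall slowly but bandwidth climbs steeply as the IO size grows.
for size_kb in (4, 64, 1024, 2048):
    print(size_kb, model(size_kb))
```

With these assumptions, a 2048K IO costs only a little more per operation than a 4K IO (the seek dominates both), yet moves 512 times the data, which is why big fast data and object storage workloads favor large IOs.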

Using VMware as part of a Server, Storage and IO (aka StorageIO) test platform

vmware vexpert

The above performance results were generated on Ubuntu 12.04 (since upgraded to 14.04), hosted on a purchased VMware vSphere 5.1 (since upgraded to 5.5U2) system with vCenter enabled (you can get the ESXi free version here). I also have VMware Workstation installed on some of my Windows-based laptops for doing preliminary testing of scripts and other activity prior to running them on the larger server-based VMware environment. Other VMware tools include vCenter Converter, vSphere Client and CLI. Note that other guest virtual machines (VMs) were idle during the tests (e.g. other guest VMs were quiet). You may experience different results if you run Ubuntu native on a physical machine or with different adapters, processors and device configurations among many other variables (that was a disclaimer btw ;) ).

Storage I/O trends

All of the devices (HDD, HHDD and SSDs, including those not shown or published yet) were Raw Device Mapped (RDM) to the Ubuntu VM, bypassing the VMware file system (VMFS).

Example of creating an RDM for a local SAS or SATA direct attached device:

vmkfstools -z /vmfs/devices/disks/naa.600605b0005f125018e923064cc17e7c /vmfs/volumes/dat1/RDM_ST1500Z110S6M5.vmdk

The above uses the drive's address (found by doing ls -l /dev/disks via the VMware shell command line) to create a vmdk container stored in a datastore. Note that the RDM being created does not actually store data in the .vmdk; it exists for VMware management operations.

If you are not familiar with how to create an RDM of a local SAS or SATA device, check out this post to learn how. This is important to note in that while VMware was used as a platform to support the guest operating systems (e.g. Ubuntu or Windows), the actual devices are accessed directly rather than being mapped through VMware virtual disks.

vmware iops

The above shows examples of RDM SAS and SATA devices along with other VMware devices and datastores. The next figure shows an example of a workload being run in the test environment.

vmware iops

One of the advantages of using VMware (or another hypervisor) with RDMs is that I can quickly define via software commands where a device gets attached to different operating systems (e.g. the other aspect of software defined storage). This means that after a test run, I can simply shut down Ubuntu, remove the RDM device from that guest's settings, move the device just tested to a Windows guest if needed, and restart those VMs. All of that from wherever I happen to be working, without physically changing things or dealing with multi-boot or cabling issues.

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

So how many IOPs can a device do?

That depends, however have a look at the above information and results.

Check back from time to time here to see what is new or has been added including more drives, devices and other related themes.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

How many I/O iops can flash SSD or HDD do?

sddc data infrastructure Storage I/O ssd trends

Updated 2/10/2018

A common question I run across is how many I/O operations per second (IOPS) a flash SSD or HDD storage device or system can do or deliver.

The answer is, or should be: it depends.

This is the first of a two-part series looking at storage performance, specifically in the context of drive or device (e.g. media) characteristics across HDD, HHDD and SSD that can be found in cloud, virtual, and legacy environments. This first part puts some context around drive or device performance, with the second part looking at some workload characteristics (e.g. benchmarks).

What about cloud, tape summit resources, storage systems or appliances?

Let's leave those for a different discussion at another time.

Getting started

Part of my interest in tools, metrics that matter, measurements, analysis and forecasting ties back to having been a server, storage and IO performance and capacity planning analyst when I worked in IT. Another aspect ties back to also having been a sys admin as well as a business applications developer when on the IT customer side of things. This was followed by switching over to the vendor world, involved with among other things competitive positioning, customer design configuration, validation, simulation and benchmarking of HDD and SSD based solutions (e.g. life before becoming an analyst and advisory consultant).

Btw, if you happen to be interested in learning more about server, storage and IO performance and capacity planning, check out my first book Resilient Storage Networks (Elsevier), which has a bit of information on it. There is also coverage of metrics and planning in my two other books The Green and Virtual Data Center (CRC Press) and Cloud and Virtual Data Storage Networking (CRC Press). I have some copies of Resilient Storage Networks available at a special reader or viewer rate (essentially shipping and handling). If interested, drop me a note and I can fill you in on the details.

There are many rules of thumb (RUT) when it comes to metrics that matter such as IOPS, some of which are older, while others may be guessed or measured in different ways. However, the answer is that it depends on many things, ranging from whether it is a standalone hard disk drive (HDD), Hybrid HDD (HHDD) or Solid State Device (SSD), or whether it is attached to a storage system, appliance, or RAID adapter card among others.

Taking a step back, the big picture

hdd image
Various HDD, HHDD and SSD’s

Server, storage and I/O performance and benchmark fundamentals

Even if just looking at an HDD, there are many variables, ranging from the rotational speed or Revolutions Per Minute (RPM), to the interface, including 1.5Gb, 3.0Gb, 6Gb or 12Gb SAS or SATA, or 4Gb Fibre Channel. Simply using a RUT or number based on RPM can cause issues, particularly with 2.5 inch vs. 3.5 inch or enterprise vs. desktop drives. For example, some current generation 10K 2.5 inch HDDs can deliver the same or better performance than an older generation 3.5 inch 15K drive. Other drive factors (see this link for HDD fundamentals) include physical size, such as 3.5 inch or 2.5 inch small form factor (SFF), enterprise, desktop or consumer class, and the amount of drive level cache (DRAM). The space capacity of a drive can also have an impact, such as whether all or just a portion of a large or small capacity device is used. Not to mention what the drive is attached to, ranging from an internal SAS or SATA drive bay, USB port, HBA or RAID adapter card, to a storage system.
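As a sketch of why RPM alone is misleading, a common rule of thumb estimates a single HDD's random IOPS from its average seek time plus average rotational latency (half a revolution). The seek times below are assumed illustrative values, not specs for any particular drive:

```shell
# Rough rule of thumb (RUT): random IOPS ~= 1000 / (avg seek ms + avg rotational latency ms)
# Average rotational latency in ms is half a revolution: 60000 / RPM / 2.
hdd_iops() {
  # $1 = RPM, $2 = assumed average seek time in milliseconds
  awk -v rpm="$1" -v seek="$2" 'BEGIN {
    rot = 60000 / rpm / 2
    printf "%d\n", 1000 / (seek + rot)
  }'
}

hdd_iops 15000 3.4   # 15K 3.5 inch class
hdd_iops 10000 3.8   # 10K 2.5 inch class with shorter seeks
hdd_iops 7200 8.5    # 7.2K desktop class
```

With these assumed values the estimates come out to roughly 185, 147 and 78 IOPS respectively, which shows how a short-seek 2.5 inch 10K drive can close much of the gap with a 15K 3.5 inch drive, and why cache, queue depth and sequential access can push real results well beyond such rules of thumb.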

disk iops
HDD fundamentals

How about benchmark and performance tricks for marketing or comparison, including delayed, deferred or asynchronous writes vs. synchronous or actually committed data to devices? Let's not forget about short stroking (only using a portion of a drive for better IOPS) or even long stroking (to get better bandwidth leveraging spiral transfers) among others.

Almost forgot, there are also thick, standard, thin and ultra thin drives in 2.5 and 3.5 inch form factors. What's the difference? The number of platters and read/write heads. Look at the following image showing various thickness 2.5 inch drives that use different numbers of platters to increase space capacity within a given form factor. Want to take a wild guess as to which one has the most space capacity in a given footprint? Also want to guess which type I use for removable disk based archives along with onsite disk based backup targets (which complements my offsite cloud backups)?

types of disks
Thick, thin and ultra thin devices

Beyond physical and configuration items, there are logical configuration considerations including the type of workload: large or small IOs, random, sequential, reads, writes or mixed (various combinations of random, sequential, read, write, large and small IO). Other considerations include file system or raw device, number of workers or concurrent IO threads, and the size of the target storage space area, which determines the impact of any locality of reference or buffering. Some other items include how long the test or workload simulation ran, and whether the device was new or worn in before use, among other items.
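Those dimensions multiply quickly. As a simple sketch (the specific sizes and mixes here are illustrative, not the exact plan behind the results in this series), crossing just three of the variables yields a sizable test matrix:

```shell
# Enumerate workload combinations: IO size x read/write mix x access pattern.
for bs in 4K 8K 64K 2048K; do
  for mix in read write 50-50; do
    for pattern in random sequential; do
      echo "size=$bs mix=$mix pattern=$pattern"
    done
  done
done | wc -l    # 4 sizes x 3 mixes x 2 patterns = 24 combinations
```

Add file system vs. raw device, thread counts and target sizes, and the number of runs (and the time to execute them) grows accordingly, which is why scripting the workload definitions matters.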

Tools and the performance toolbox

Then there are the various tools for generating IO’s or workloads along with recording metrics such as reads, writes, response time and other information. Some examples (mix of free or for fee) include Bonnie, Iometer, Iorate, IOzone, Vdbench, TPC, SPC, Microsoft ESRP, SPEC and netmist, Swifttest, Vmark, DVDstore and PCmark 7 among many others. Some are focused just on the storage system and IO path while others are application specific thus exercising servers, storage and IO paths.

performance tools
Server, storage and IO performance toolbox

Having used Iometer since the late 90s, it has its place and is popular given its ease of use. Iometer is also long in the tooth and has its limits, including not much if any new development; nevertheless, I have it in the toolbox. I also have Futuremark PCmark 7 (full version), which it turns out has some interesting abilities to do more than exercise an entire Windows PC. For example, PCmark can direct IO to a secondary drive.

PCmark can be handy for spinning up, with VMware (or other tools), lots of virtual Windows systems pointing to a NAS or other shared storage device doing real-world type activity. Something that could be handy for testing or stressing virtual desktop infrastructures (VDI) along with other storage systems, servers and solutions. I also have Vdbench among other tools in the toolbox, including Iorate, which was used to drive the workloads shown below.

What I look for in a tool are how extensible are the scripting capabilities to define various workloads along with capabilities of the test engine. A nice GUI is handy which makes Iometer popular and yes there are script capabilities with Iometer. That is also where Iometer is long in the tooth compared to some of the newer generation of tools that have more emphasis on extensibility vs. ease of use interfaces. This also assumes knowing what workloads to generate vs. simply kicking off some IOPs using default settings to see what happens.

Another handy tool is one for recording what's going on with a running system, including IOs, reads, writes, bandwidth or transfers, random and sequential among other things. This is where, when needed, I turn to something like HiMon from HyperIO. If you have not tried it, get in touch with Tom West over at HyperIO and tell him StorageIO sent you to get a demo or trial. HiMon is what I used for start, stop and boot testing among others, being able to see IOs at the Windows file system level (or below) including very early in the boot or shutdown phase.

Here is a link to some test profiling I did awhile back with HiMon of Windows and VDI activity.

What’s the best tool or benchmark or workload generator?

The one that meets your needs, usually your applications or something as close as possible to it.

disk iops
Various 2.5 and 3.5 inch HDD, HHDD, SSD with different performance

What This All Means

That depends, however continue reading part II of this series to see some results for various types of drives and workloads.

Ok, nuff said, for now.

Gs

EMC ViPR software defined object storage part III

Storage I/O trends

This is part III in a series of posts pertaining to EMC ViPR software defined storage and object storage. You can read part I here and part II here.

EMCworld

More on the object opportunity

Other object access includes the OpenStack storage component Swift, along with AWS S3 HTTP and REST API access. In addition, ViPR supports EMC Atmos, VNX and Isilon arrays as southbound persistent storage.

object storage
Object (and cloud) storage access example

EMC is claiming that over 250 VNX systems can be abstracted to support scaling with stability (performance, availability, capacity, economics) using ViPR. Third-party storage will be supported, along with software such as OpenStack Swift, Ceph and others running on commodity hardware. Note that EMC has some history with object storage and access, including Centera and Atmos. Visit the micro site I have set up called www.objectstoragecenter.com and watch for more content to be updated and added there.
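To make the object access model concrete: objects are typically addressed by account, container and object name over HTTP, as with the OpenStack Swift style APIs mentioned above. A minimal sketch (the endpoint, account and object names are hypothetical placeholders, not ViPR specifics):

```shell
# Compose an OpenStack Swift style object URL:
#   https://<endpoint>/v1/<account>/<container>/<object>
swift_url() {
  echo "https://$1/v1/$2/$3/$4"
}

# A client would then GET or PUT this URL with an auth token header,
# e.g. curl -H "X-Auth-Token: ..." "$(swift_url ...)"
swift_url swift.example.com AUTH_demo photos vacation.jpg
```

The point is the contrast with block (LUN and LBA) or file (path) access: the object is named, reached over HTTP, and the storage system decides placement behind the API.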

More on the ViPR control plane and controller

ViPR differs from some others in that it does not sit in the data path all the time (e.g. between application servers and storage systems or cloud services), which reduces the potential for bottlenecks.

ViPR architecture

Organizations that can use ViPR include enterprise, SMB, CSP or MSP and hosting sites. ViPR can be used in a control mode to leverage the intelligence and functionality of underlying storage systems, appliances and services. This means ViPR can complement southbound or target storage systems and services, as opposed to treating them as dumb disks or JBOD.

On the other hand, ViPR will also have a suite of data services such as snapshot, replication, data migration, movement and tiering to add value where those do not exist. Customers will be free to choose how they want to use and deploy ViPR: for example leveraging underlying storage functionality (e.g. a lightweight model), or in a more familiar heavy lifting storage virtualization model. In the heavy lifting model more work is done by the virtualization or abstraction software to create added value, however that can be a concern for bottlenecks depending on how it is deployed.

Service categories

Software defined, storage hypervisor, virtual storage or storage virtualization?

Most storage virtualization, storage hypervisor and virtual storage solutions that are hardware or software based (e.g. software defined) implement what is referred to as in-band. With in-band, the storage virtualization software or hardware sits between the applications (northbound) and the storage systems or services (southbound).

While this approach can be easier to implement along with adding value-add services, it can also introduce scaling bottlenecks depending on the implementation. Examples of in-band storage virtualization include Actifio, DataCore, EMC VMAX with third-party storage, HDS with third-party storage, IBM SVC (and their V7000 Storwize storage system based on it) and NetApp Vseries among others. An advantage of in-band approaches is that there should be no host or server-side software requirements, providing SAN transparency.

There is another approach called out-of-band that has been tried. However pure out-of-band requires a management system along with agents, drivers, shims, plugins or other software resident on host application servers.

fast path control path
Example of generic fast path control path model

ViPR takes a different approach, one seen a few years ago with EMC Invista, called fast path control path, which for the most part stays out of the data path. While this is similar to out-of-band, there should be no need for any host server-side (e.g. northbound) software. By being fast path control path, the virtualization or abstraction and management functions stay out of the way of data being moved or work being done.

Hmm, kind of like how management should be, there to help when needed, out-of-the-way not causing overhead other times ;).

Is EMC the first (even with Invista) to leverage fast path control path?

Actually, up until about a year or so ago (shortly after HP acquired 3PAR), HP had a solution called the SAN Virtualization Services Platform (SVSP) that was OEMd from LSI (via their StoreAge acquisition). Unfortunately, HP decided to retire it as opposed to extending its capabilities for file and object access (northbound) as well as different southbound targets or destination services.

What's this northbound and southbound stuff?

Simply put, think in terms of a vertical stack with host servers (PMs or VMs) on the top with applications (and hypervisors or other tools such as databases) on top of them (e.g. north).

software defined storage
Northbound servers, southbound storage systems and cloud services

Think of storage systems, appliances, cloud services or other target destinations on the bottom (or south). ViPR sits in between providing storage services and management to the northbound servers leveraging the southbound storage.

What host servers can ViPR support for serving storage?

ViPR is being designed to be server agnostic (e.g. virtual or physical) along with operating system agnostic. In addition, ViPR is being positioned as capable of serving northbound (e.g. up to application servers) block, file or object, as well as accessing southbound (e.g. target) block, file and object storage systems, file systems or services.

Note that earlier similar solutions from EMC have been either block based (e.g. Invista, VPLEX, VMAX with third-party storage) or file based. Also note that this means ViPR is not just for VMware or virtual server environments and that it can exist in legacy, virtual or cloud environments.

ViPR image

Likewise ViPR is intended to be application agnostic, supporting little data, big data and very big data (VBD) along with Hadoop or other specialized processing. Note that while ViPR will support HDFS in addition to NFS and CIFS file based access, Hadoop will not be running on or in the ViPR controllers; that would live or run elsewhere.

How will ViPR be deployed and licensed?

EMC has indicated that the ViPR controller will be delivered as software that installs into a virtual appliance (e.g. VMware) running as a virtual machine (VM) guest. It is not clear when support will exist for other hypervisors (e.g. Microsoft Hyper-V, Citrix/Xen, KVM), or whether VMware vSphere with vCenter is required or simply the free version of ESXi. As of the announcement pre-briefing, EMC had not yet finalized pricing and licensing details. General availability is expected in the second half of calendar 2013.

Keep in mind that the ViPR controller (software) runs as a VM that can be hosted on a clustered hypervisor for HA. In addition, multiple ViPR controllers can exist in a cluster to further enhance HA.

Some questions to be addressed among others include:

  • How and where are IOs intercepted?
  • Who can have access to the APIs, what is the process, is there a developers program, SDK along with resources?
  • What network topologies are supported local and remote?
  • What happens when JBOD is used and no advanced data services exist?
  • What are the characteristics of the object access functionality?
  • What if any specific switches or data path devices and tools are needed?
  • How does a host server know to talk with its target, and how does the ViPR controller know when to intercept and handle an IO?
  • Will SNIA CDMI be added and when as part of the object access and data services capabilities?
  • Are programmatic bindings available for the object access along with support for other APIs including iOS?
  • What are the performance characteristics including latency under load as well as during a failure or fault scenario?
  • How will EMC position VPLEX and its caching model on a local and wide area basis vs. ViPR, or will we see the two work together, and if so, how?

Bottom line (for now):

Good move for EMC, now let us see how they execute including driving adoption of their open APIs, something they have had success in the past with Centera and other solutions. Likewise, let us see what other storage vendors become supported or add support along with how pricing and licensing are rolled out. EMC will also have to articulate when and where to use ViPR vs. VPLEX along with other storage systems or management tools.

Additional related material:
Are you using or considering implementation of a storage hypervisor?
Cloud and Virtual Data Storage Networking (CRC)
Cloud conversations: Public, Private, Hybrid what about Community Clouds?
Cloud, virtualization, storage and networking in an election year
Does software cut or move place of vendor lock-in?
Don’t Use New Technologies in Old Ways
EMC VPLEX: Virtual Storage Redefined or Respun?
How many degrees separate you and your information?
Industry adoption vs. industry deployment, is there a difference?
Many faces of storage hypervisor, virtual storage or storage virtualization
People, Not Tech, Prevent IT Convergence
Resilient Storage Networks (Elsevier)
Server and Storage Virtualization Life beyond Consolidation
Should Everything Be Virtualized?
The Green and Virtual Data Center (CRC)
Two companies on parallel tracks moving like trains offset by time: EMC and NetApp
Unified storage systems showdown: NetApp FAS vs. EMC VNX
backup, restore, BC, DR and archiving
VMware buys virsto, what about storage hypervisor’s?
Who is responsible for vendor lockin?

Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

EMC ViPR software defined object storage part II

Storage I/O trends

This is part II in a series of posts pertaining to EMC ViPR software defined storage and object storage. You can read part I here and part III here.

EMCworld

Some questions and discussion topics pertaining to ViPR:

Who is ViPR for?

Organizations that need to scale with stability across EMC, third-party or open storage software stacks and commodity hardware. This applies to large and small enterprises, cloud service providers, managed service providers, and virtual and cloud environments.

What this means for EMC hardware/platform/systems?

They can continue to be used as is, or work with ViPR or other deployment modes.

Does this mean EMC storage systems are nearing their end of life?

IMHO for the most part not yet, granted there will be some scenarios where new products will be used vs. others, or existing ones used in new ways for different things.

As has been the case for years if not decades, some products will survive, continue to evolve and find new roles, kind of like different data storage mediums (e.g. SSD, disk, tape, etc.).

How does ViPR work?

ViPR functions as a control plane across the data and storage infrastructure, supporting both northbound and southbound. Northbound refers to access from or up to application servers (physical machines (PMs) and virtual machines (VMs)). Southbound refers to target or destination storage systems. Storage systems can be traditional EMC or third-party (NetApp was mentioned as part of the first release), appliances, just a bunch of disks (JBOD) or cloud services.

Some general features and functions:

  • Provisioning and allocation (with automation)
  • Data and storage migration or tiering
  • Leverage scripts, templates and workbooks
  • Support service categories and catalogs
  • Discovery, registration of storage systems
  • Creation of storage resource pools for host systems
  • Metering, measuring, reporting, charge or show back
  • Alerts, alarms and notification
  • Self-service portal for access and provisioning

ViPR data plane (adding data services and value when needed)

Another part is the data plane for implementing data services and access. For block and file access, when not needed ViPR steps out of the way, leveraging the underlying storage systems or services.

object storage
Object storage access

When needed, the ViPR data plane can step in to add services and functionality, along with supporting object based access for little data and big data. For example, Hadoop Distributed File System (HDFS) services can support northbound analytic software applications running on servers accessing storage managed by ViPR.

Continue reading in part III of this series here including how ViPR works, who it is for and more analysis.

Ok, nuff said (for now)

Cheers gs

Spring SNW 2013, Storage Networking World Recap

Storage I/O trends

A couple of weeks ago I attended the spring 2013 Storage Networking World (SNW) in Orlando, Florida. Talking with SNIA Chairman Wayne Adams and SNIA Director Leo Leger, this was the 28th edition of the US SNW (two shows a year), plus the international ones. While I have not been to all 28 of the US SNWs, I have been to a couple of dozen SNWs in the US, Europe and Brazil going back to around 2001, as an attendee, main stage as well as breakout and tutorial presenter (see here and here).

SNW image

For the spring 2013 SNW I was there for a mix of meetings, analyst briefings, attending the expo, doing some podcasts (see below), meeting with IT professionals (e.g. customers), VARs, vendors along with presenting three sessions (you can download them and others backup, restore, BC, DR and archiving).

Some of the buzz and themes heard: big data was a little topic at the event, while cloud was in the conversations; dedupe and data footprint reduction (DFR) do matter for some people and applications. However, a common theme with customers, including Media and Entertainment (M&E), is that not everything can be deduped, thus other DFR approaches are needed.

There was some hype in and around hybrid storage along with storage hypervisors, which included an entertaining panel discussion with HDS (Claus Mikkelsen aka @YoClaus), DataCore, IBM and Virsto.

The theme of that discussion seemed for the most part to gravitate towards realities of storage virtualization and less about the hypervisor hype. Some software defined marketing hype I heard is that it is impossible to spend more than a million dollars on a server today. I guess with the applicable caveats, qualifiers and context that could be true, however I also know some vendors and customers that would say otherwise.

Lunch
Lunchtime at SNW Spring 2013

Not surprisingly, there was an increase in vendors wanting to jump on the software defined and object storage bandwagons; however, customers tended to be curious at best, confused or concerned otherwise. Speaking of object storage, check out this podcast discussion with Cleversafe customer Justin Stottlemyer of Shutterfly and his 80PB environment.

In addition to Cleversafe, I heard from Astute (if you need fast iSCSI storage check them out), Avere, which has a new NAS for dummies book out, Exablox, a storage system startup with an emphasis on scalability, ease of use and NAS access, and hybrid storage vendor Tegile. Also check out SwiftTest for generating application workloads and measurements; their customer Go Daddy presented at the event. A couple of others to keep an eye on include Raxco with their thin provision storage reclamation tool, and Infinio with their NAS acceleration software tools for VMware, among others.

backup, restore, BC, DR and archiving

Here are the three presentations that I did while at the event:

Analyst Perspective: Increase Your Return on Innovation (The New ROI) With Data Management and Dedupe
There is no such thing as an information recession with more data to move, process and store, however there are economic challenges. Likewise, people and data are living longer and getting larger which requires leveraging data footprint reduction (DFR) techniques on a broader focus. It is time to move upstream finding and fixing things at the source to reduce the downstream impact of expanding data footprints, enabling more to be done with what you have.

Analyst Perspective: Metrics that Matter – Meritage of Data Management and Data Protection
Not everything in the data center or information factory is the same. This session recaps and builds off the morning increase-your-ROI with data footprint and data management session, while setting the stage for rethinking data protection (backup, BC and DR). Are you maximizing the return on innovation by using new tools and technology in new ways, vs. using new tools in old ways? Also discussed were performance and capacity planning, forecasting and analysis in cloud, virtual and physical environments. Without metrics that matter, you are flying blind, or perhaps missing opportunities to further drive your return on innovation and return on investment.

Analyst Perspective: Time to Rethink Data Protection Including BC and DR
When it comes to today’s data centers and information factories including physical, virtual and cloud, everything is not the same, so why treat business continuance (BC), disaster recovery (DR) and data protection in general the same? Simply using new tools, technologies and techniques in the same old ways is no longer a viable option. Since there is no such thing as a data or information recession, yet there are economic and budget challenges, along with new or changing threat risks, now is the time to review data protection including BC and DR including using new technologies in new ways.

You can view the complete SNW USA spring 2013 agenda here.

audio
Podcasts are also available on

Here are links to some podcasts from spring 2013 SNW:
Stottlemyer of Shutterfly and object storage discussion
Dave Demming talking tech education from SNW Spring 2013
Farley Flies into SNW Spring 2013
Talking with Tony DiCenzo at SNW Spring 2013
SNIA Spring 2013 update with Wayne Adams
SNIA’s new SPDEcon conference

Also, check out these podcasts from fall 2012 US and Europe SNWs:
Ben Woo on Big Data Buzzword Bingo and Business Benefits
Networking with Bruce Ravid and Bruce Rave
Industry trends and perspectives: Ray Lucchesi on Storage and SNW
Learning with Leo Leger of SNIA
Meeting up with Marty Foltyn of SNIA
Catching up with Quantum CTE David Chapa (Now with Evault)
Chatting with Karl Chen at SNW 2012
SNW 2012 Wayne’s World
SNW Podcast on Cloud Computing
HDS Claus Mikkelsen talking storage from SNW Fall 2012

Storage I/O trends

What this all means?

While busy, I liked this edition of SNW USA in that it had a great agenda with diversity and balance of speaker sessions (some tutorials, some vendors, some IT customers, and some analysts) vs. too many of one specific area.

In addition to the agenda and session length, the venue was good, big enough, however not spread out so much to cause loss of the buzz and energy of the event.

This SNW had some of the same buzz or energy as early versions, granted without the hype and fanfare of a startup industry or focus area (that would be some of the other events today).

Should SNW go to a once a year event?

While a twice a year venue would be nice for convenience, practicality and budgets say once would be enough given all the other conferences and venues on the agenda (or that could be).

The next SNW USA will be October 15-17, 2013, in Long Beach, California, with SNW Europe in Frankfurt, Germany, October 29-30, 2013.

Thanks again to all the attendees, participants, vendor exhibitors, event organizers, and the SNIA and SNW/Computerworld staff for another great event.

Ok, nuff said

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

HP Moonshot 1500 software defined capable compute servers

Storage I/O cloud virtual and big data perspectives

Riding the current software defined data center (SDC) wave led by the likes of VMware, along with the software defined networking (SDN) wave also championed by VMware via its acquisition of Nicira last year, software defined marketing (SDM) is in full force. HP, a player in providing the core building blocks for traditional little data and big data across physical, virtual, converged, cloud and software defined environments, has announced a new compute, processor or server platform called the Moonshot 1500.

HP Moonshot software defined server image

Software defined marketing aside, there are some real and interesting things from a technology standpoint that HP is doing with the Moonshot 1500 along with other vendors who are offering micro server based solutions.

First, for those who see server (processor and compute) improvements as meaning more and faster cores (and threads) per socket, along with extra memory, not to mention 10GbE or 40GbE networking and PCIe expansion or IO connectivity, hang on to your hats.

HP Moonshot software defined server image individual server blade

Moonshot is in the mold of the micro servers or micro blades that HP has offered in the past, along with the likes of Dell and SeaMicro (now part of AMD). Micro servers are almost the opposite of regular servers or blades, where the focus is on packing more capability onto a motherboard or blade.

With micro servers, the approach is to support those applications and environments that do not need lots of CPU processing capability, or large amounts of storage, IO or memory. These include some web hosting or cloud application environments that can leverage many smaller, lower power, less resource intensive platforms. For example, big data (or little data) applications whose software or tools benefit from many low-cost, low power nodes in distributed, clustered, grid, RAIN or ring based architectures can benefit from this type of solution.
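To make the scale-out trade-off concrete, here is a back-of-envelope sketch. All of the per-node figures (requests per second, watts) are illustrative assumptions for the sake of the math, not HP or vendor specs:

```python
# Hypothetical sizing sketch: how many nodes does a scale-out (cluster, grid,
# RAIN or ring style) workload need on micro servers vs. traditional servers?
# Every per-node figure below is an illustrative assumption, not a vendor spec.
import math

def nodes_needed(target_rps, per_node_rps):
    """Round up: aggregate requests/sec needed vs. what one node handles."""
    return math.ceil(target_rps / per_node_rps)

target = 90_000  # assumed aggregate requests/sec for a web farm

micro   = {"rps": 2_000,  "watts": 20}   # assumed low-power micro server node
classic = {"rps": 25_000, "watts": 400}  # assumed traditional 2-socket server

micro_n   = nodes_needed(target, micro["rps"])    # 45 nodes
classic_n = nodes_needed(target, classic["rps"])  # 4 nodes

print(micro_n,   micro_n   * micro["watts"])    # 45 nodes drawing 900 W
print(classic_n, classic_n * classic["watts"])  # 4 nodes drawing 1600 W
```

With these (again, assumed) numbers the micro server farm meets the same aggregate target at a lower power draw, which is exactly the kind of workload the Moonshot pitch is aimed at; flip the assumptions toward per-node compute or memory intensity and the traditional servers win.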

HP Moonshot software defined server image and components

What is the Moonshot 1500 system?

  • 4.3U high rack mount chassis that holds up to 45 micro servers
  • Each hot-swap micro server is its own self-contained module, similar to a blade server
  • Server modules install vertically from the top into the chassis, similar to some high-density storage enclosures
  • Compute is provided by Intel Atom S1260 2.0GHz processors with 1MB of cache memory
  • Single SO-DIMM slot (unbuffered ECC at 1333 MHz) supports 8GB (1 x 8GB DIMM) of DRAM
  • Each server module has a single onboard 2.5″ SATA drive: 200GB SSD, 500GB or 1TB HDD
  • A dual port Broadcom 5720 1Gb Ethernet LAN per server module that connects to the chassis switches
  • Marvell 9125 storage controller integrated onboard each server module
  • Chassis and enclosure management along with ACPI 2.0b, SMBIOS 2.6.1 and PXE support
  • A pair of Ethernet switches each provide up to six 10GbE uplinks for the Moonshot chassis
  • Dual RJ-45 connectors for iLO chassis management are also included
  • Status LEDs on the front of each chassis provide status of the servers and network switches
  • Support for Canonical Ubuntu 12.04, RHEL 6.4, and SUSE Linux SLES 11 SP2
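Multiplying out the spec list above gives a feel for what one fully populated chassis adds up to. The only figure below not taken from the list is the core count per Atom S1260, which is a dual-core part:

```python
# Back-of-envelope aggregates for one fully populated Moonshot 1500 chassis,
# derived from the spec list above. The S1260 core count is the one figure
# not in the list (it is a dual-core part).
SERVERS_PER_CHASSIS = 45
GB_RAM_PER_SERVER = 8
CORES_PER_S1260 = 2       # Intel Atom S1260 is dual-core
SWITCHES = 2
UPLINKS_PER_SWITCH = 6    # 10GbE uplinks per chassis switch

total_ram_gb = SERVERS_PER_CHASSIS * GB_RAM_PER_SERVER   # 360 GB DRAM
total_cores  = SERVERS_PER_CHASSIS * CORES_PER_S1260     # 90 cores
uplink_gbps  = SWITCHES * UPLINKS_PER_SWITCH * 10        # 120 Gbps up

# Network oversubscription check: 45 dual-port 1GbE modules facing
# the uplinks, so the downlink side totals 90 Gbps.
downlink_gbps = SERVERS_PER_CHASSIS * 2 * 1              # 90 Gbps down

print(total_ram_gb, total_cores, uplink_gbps, downlink_gbps)
```

Note that the downlink side (90 Gbps) actually fits within the uplink capacity (120 Gbps), so a fully populated chassis is not network oversubscribed at the uplinks, at least on paper.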

Storage I/O cloud virtual and big data perspectives

Notice a common theme with moonshot along with other micro server-based systems and architectures?

If not, it is simple, literally: simple and flexible is the value proposition.

Simple is the theme (with software defined for marketing), along with low cost, lower energy demand, lower performance, and less of what is not needed, to remove cost.

Granted, not all applications will be a good fit for micro servers (excuse me, software defined servers), as some will need the more robust resources of traditional servers. With solutions such as HP Moonshot, system architects and designers have more options as to what resources or solutions to use. For example, a cloud or object storage solution that does not need a lot of processing performance or memory per node, and only a modest amount of storage per node, might find this an interesting option for mid to entry-level needs.

Will HP release a version of their Lefthand or IBRIX (both since renamed) based storage management software on these systems for some market or application needs?

How about deploying NoSQL tools such as Cassandra or MongoDB, or CloudStack, OpenStack Swift, Basho Riak (or Riak CS) and other object storage software on these types of solutions? Or web servers and other applications that do not need the fastest processors or the most memory per node?
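For the object storage case, a quick capacity sketch shows what a chassis of 45 modules with 1TB HDDs each would yield. The replication factor of three is the common default in Swift- and Riak-style stores, and the fill target is an assumption for headroom:

```python
# Hypothetical capacity sketch for a replicated scale-out object store
# (Swift- or Riak CS-style) on one chassis of 45 modules with 1TB HDDs each.
# Replication factor 3 is a common default; the fill target is an assumption.
RAW_TB_PER_NODE = 1
NODES = 45
REPLICAS = 3          # three copies of each object
FILL_TARGET = 0.8     # assumed: keep 20% headroom for growth and rebalancing

raw_tb = NODES * RAW_TB_PER_NODE              # 45 TB raw
usable_tb = raw_tb / REPLICAS * FILL_TARGET   # 12 TB usable

print(raw_tb, usable_tb)
```

Twelve usable terabytes from a 4.3U chassis will not impress at the high end, but for the mid to entry-level needs mentioned above, where node count and power matter more than per-node capacity, the arithmetic is reasonable.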

Thus micro server-based solutions such as Moonshot enable return on innovation (the new ROI) by enabling customers to leverage the right tool (e.g. hard product) to create their soft product allowing their users or customers to in turn innovate in a cost-effective way.

Will the Moonshot servers be the software defined turnaround for HP? Click here to see what Bloomberg has to say, or Forbes here.

Learn more about Moonshot servers from HP here and here, or in the data sheets found here.

Btw, HP claims that this is the industry’s first software defined server, hmm.

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved