Seasons Greetings, Happy Holidays 2013 from Server and StorageIO

Merry Christmas, Seasons Greetings, Happy Holidays 2013 from Server and StorageIO

2013 server and storage I/O holiday greetings

Ok, nuff said (for now ;)…

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Part II: EMC announces XtremIO General Availability, speeds and feeds

Storage I/O trends

XtremIO flash SSD more than storage I/O speed

Following up on part I of this two-part series, here are more details, insights and perspectives about EMC XtremIO and its general availability, which was announced today.

XtremIO the basics

  • All flash Solid State Device (SSD) based solution
  • Cluster of up to four X-Brick nodes today
  • X-Bricks available in 10TB increments today, 20TB in January 2014
  • 25 eMLC SSD drives per X-Brick with redundant dual processor controllers
  • Provides server-side iSCSI and Fibre Channel block attachment
  • Integrated data footprint reduction (DFR) including global dedupe and thin provisioning
  • Designed for extending duty cycle, minimizing wear of SSD
  • Removes need for dedicated hot spare drives
  • Capable of sustained performance and availability with multiple drive failure
  • Only unique data blocks are saved, others tracked via in-memory meta data pointers
  • Reduces overhead of data protection vs. traditional small RAID 5 or RAID 6 configurations
  • Eliminates overhead of back-end functions performance impact on applications
  • Deterministic storage I/O performance (IOPs, latency, bandwidth) over the life of the system

When would you use XtremIO vs. another storage system?

If you need enterprise-like data services including thin provisioning, dedupe and resiliency with deterministic performance on an all-flash system with raw capacity from 10-40TB (today), then XtremIO could be a good fit. On the other hand, if you need a mix of SSD based storage I/O performance (IOPS, latency or bandwidth) along with some HDD based space capacity, then a hybrid or traditional storage system could be the solution. Then there are scenarios where a hybrid storage system, array or appliance (mix of SSD and HDD) is used for most of the applications and data, with an XtremIO handling the more demanding tasks.

How does XtremIO compare to others?

EMC with XtremIO is taking a different approach than some of their competitors, whose model is to compare their faster flash-based solutions vs. traditional mid-market and enterprise arrays, appliances or storage systems on a raw storage I/O IOPs basis. With XtremIO there is improved performance measured in IOPs or database transactions among other metrics that matter. However there is also an emphasis on consistent, predictable quality of service (QoS), or what is known as deterministic storage I/O performance. This means both higher IOPs and lower latency while running normal workloads along with background data services (snapshots, data footprint reduction, etc.).

Some of the competitors focus on how many IOPs or how much work they can do, however without context or without showing the impact to applications when background tasks or other data services are in use. Other differences include how cluster nodes are interconnected (for scale-out solutions), such as use of Ethernet and IP-based networks vs. dedicated InfiniBand or PCIe fabrics. Host server attachment will also differ, as some offer only iSCSI or Fibre Channel block, or NAS file, or a mix of different protocols and interfaces.

An industry trend however is to expand beyond the flash SSD need-for-speed focus by adding context along with QoS, deterministic behavior and data services including snapshots, local and remote replication, multi-tenancy, metering and metrics, and security among other items.


Who or what are XtremIO competition?

To some degree, vendors who only have PCIe flash SSD cards might position themselves as the alternative to all-SSD or hybrid mixed SSD and HDD based solutions. FusionIO used to take that approach until they acquired NexGen (a storage system) and have now taken a broader, more solution-balanced approach of using the applicable tool for the task or application at hand.

Other competitors include the all-SSD based storage array, system or appliance vendors, which includes established vendors as well as startups, among others IBM (who bought TMS, now FlashSystems), NetApp (EF540), SolidFire, Pure Storage, Violin (who did a recent IPO) and Whiptail (bought by Cisco). Then there are the hybrids, a long list including CloudByte (software), Dell, EMC's other products, HDS, HP, IBM, NetApp, Nexenta (software), Nimble, Nutanix, Oracle, SimpliVity and Tintri among others.

What’s new with this XtremIO announcement

10TB X-Bricks enable 10 to 40TB (physical space capacity) per cluster (available on 11/19/13). 20TB X-Bricks (larger capacity drives) will double the space capacity in January 2014. If you are doing the math, that means either a single brick (dual controller) system, or up to four bricks (nodes, each with dual controllers) configurations. Common across all system configurations are data features such as thin provisioning, inline data footprint reduction (e.g. dedupe) and XtremIO Data Protection (XDP).
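Doing that math as a quick sketch (the brick sizes and the four-brick cluster limit are per the announcement above):

```python
def cluster_capacity_tb(brick_tb, bricks):
    """Raw (physical) capacity for an XtremIO cluster of one to four X-Bricks."""
    if not 1 <= bricks <= 4:
        raise ValueError("clusters scale from one to four X-Bricks today")
    return brick_tb * bricks

print([cluster_capacity_tb(10, n) for n in range(1, 5)])  # [10, 20, 30, 40]
print([cluster_capacity_tb(20, n) for n in range(1, 5)])  # doubles with January 2014 bricks
```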

What does XtremIO look like?

XtremIO consists of up to four nodes (today) based on what EMC calls X-Bricks.
EMC XtremIO X-Brick
25 SSD drive X-Brick

Each 4U X-Brick has 25 eMLC SSD drives in a standard EMC 2U DAE (disk enclosure) like those used with the VNX and VMAX for SSD and Hard Disk Drives (HDD). In addition to the 2U drive shelf, there is a pair of 1U storage processors (e.g. controllers) that give redundancy and shared access to the storage shelf.

XtremIO Architecture
XtremIO X-Brick block diagram

XtremIO storage processors (controllers) and drive shelf block diagram. Each X-Brick and its storage processors or controllers communicate with each other and with other X-Bricks via a dedicated InfiniBand fabric using Remote Direct Memory Access (RDMA) for memory-to-memory data transfers. The controllers or storage processors (two per X-Brick) each have dual processors with eight cores for compute, along with 256GB of DRAM memory. Part of each controller's DRAM memory is set aside as a mirror of its partner or peer and vice versa, with access being over the InfiniBand fabric.

XtremIO fabric
XtremIO X-Brick four node fabric cluster or instance

How XtremIO works

Servers access XtremIO X-Bricks using iSCSI and Fibre Channel for block access. A responding X-Brick node handles the storage I/O request and, in the case of a write, updates the other nodes. The handling node or controller (aka storage processor) checks its meta data map in memory to see if the data is new and unique. If so, the data gets saved to SSD, with the meta data information updated across all nodes. Note that data gets ingested and chunked or sharded into 4KB blocks. So for example, if a 32KB storage I/O request from the server arrives, it is broken (e.g. chunked or sharded) into eight 4KB pieces, each with a mathematically unique fingerprint created. This fingerprint is compared to what is known in the in-memory meta data tables (this is a hexadecimal number compare, so a quick operation). Based on the comparison, if the data is unique it is saved and pointers are created; if it already exists, then pointers are updated.
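As an illustrative sketch of that flow (hypothetical Python, not EMC's actual XIOS code; the real fingerprint function and on-SSD layout are not disclosed here), the chunk, fingerprint and pointer-update steps might look like:

```python
import hashlib

CHUNK_SIZE = 4096  # XtremIO ingests and shards data in 4KB chunks

def ingest(write_buffer, dedupe_table):
    """Chunk an incoming write into 4KB shards and dedupe each by fingerprint.

    dedupe_table maps fingerprint -> [location, reference_count].
    Returns the list of fingerprints (meta data pointers) for this write.
    """
    pointers = []
    for offset in range(0, len(write_buffer), CHUNK_SIZE):
        chunk = write_buffer[offset:offset + CHUNK_SIZE]
        # Fingerprint the shard (hash choice here is illustrative only);
        # comparing hex digests is a quick numeric compare
        fingerprint = hashlib.sha1(chunk).hexdigest()
        if fingerprint in dedupe_table:
            dedupe_table[fingerprint][1] += 1   # known data: pointer update only
        else:
            location = len(dedupe_table)        # stand-in for an SSD block address
            dedupe_table[fingerprint] = [location, 1]  # unique data gets written
        pointers.append(fingerprint)
    return pointers

table = {}
pointers = ingest(b"A" * 32768, table)  # a 32KB write arrives from a server
print(len(pointers), len(table))        # 8 pointers, only 1 unique 4KB shard stored
```

Only the unique shard would actually be written to SSD; the other seven 4KB pieces are tracked purely via in-memory meta data pointers, which is also part of what reduces back-end I/O and flash wear.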

In addition to determining if data is unique, the fingerprint is also used to generate a balanced data dispersal plan across the nodes and SSD devices. Thus there is the benefit of reducing duplicate data during ingestion, while also reducing back-end I/Os within the XtremIO storage system. Another byproduct is the reduction in time spent on garbage collection or other background tasks commonly associated with SSD and other storage systems.

Meta data is kept in memory, with a persistent copy written to a reserved area on the flash SSD drives (think of it as a vault area) to support and keep system state and consistency. In between data consistency points, the meta data is kept in a journal log, similar to how a database handles log writes. What's different from a typical database is that the XtremIO XIOS platform software does these consistency point writes for persistence on a granularity of seconds vs. minutes or hours.


What about rumor that XtremIO can only do 4KB IOPs?

Does this mean that the smallest storage I/O or IOP that XtremIO can do is 4KB?

That is a rumor, or some fud, I have heard floated by a competitor (or two or three) that assumes that if only a 4KB internal chunk or shard is being used for processing, that must mean no IOPs smaller than 4KB from a server.

XtremIO can do storage I/O IOP sizes of 512 bytes (e.g. the standard block size) as do other systems. Note that the standard server storage I/O block or I/O size is 512 bytes or multiples thereof, unless the newer 4KB advanced format (AF) block size is being used, which based on my conversations with EMC was not supported, yet. (Updated 11/15/13: EMC has indicated that host (front-end) 4K AF support, along with 512 byte emulation modes, are available now with XIOS.) Also keep in mind that since XtremIO XIOS internally works with 4KB chunks or shards, that is a stepping stone for being able to eventually leverage back-end AF drive support in the future should EMC decide to do so. (Updated 11/15/13: Waiting for confirmation from EMC on whether back-end AF support is now enabled or not; will give more clarity as it is received.)
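To illustrate why a 4KB internal chunk does not force 4KB host I/Os, here is a small sketch (illustrative only, using the 512 byte sector and 4KB chunk sizes from the text above) mapping a 512-byte-sector host request onto internal 4KB chunks:

```python
SECTOR = 512    # standard host block size
CHUNK = 4096    # XIOS internal chunk / shard size

def chunks_touched(lba, sector_count):
    """Return which internal 4KB chunks a host I/O (in 512B sectors) touches."""
    first = (lba * SECTOR) // CHUNK
    last = ((lba + sector_count) * SECTOR - 1) // CHUNK
    return list(range(first, last + 1))

print(chunks_touched(0, 1))   # [0] a single 512B sector lands inside one 4KB chunk
print(chunks_touched(7, 2))   # [0, 1] a small I/O can straddle two chunks
print(chunks_touched(0, 64))  # a 32KB request spans chunks 0 through 7
```

The point: a 512B host I/O simply resolves to a portion of one (or at most two) internal chunks; the internal chunk size constrains layout, not the host I/O size.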

What else is EMC doing with XtremIO?

  • VCE Vblock XtremIO systems for SAP HANA and other in-memory databases, along with VDI optimized solutions.
  • VPLEX and XtremIO for extended distance local, metro and wide area HA, BC and DR.
  • EMC PowerPath XtremIO storage I/O path optimization and resiliency.
  • Secure Remote Support (aka phone home) and auto support integration.

Boosting your available software license minutes (ASLM) with SSD

Another use of SSD has been the opportunity to make better use of servers, stretching their usefulness or delaying purchase of new ones by improving their effective use to do more work. In the past, this technique of using SSDs to delay a server or CPU upgrade was used when hardware was more expensive, or during the dot com bubble to fill surge demand gaps. This has the added benefit of stretching database and other expensive software licenses to go further or do more work. The less time servers spend waiting for IOPs, the more time for doing useful work and getting value from the software license. Otoh, the more time spent waiting, the more available software minutes are lost, which is cost overhead.

Think of available software license minutes (ASLM) as the minutes during which, if doing useful work, your software is providing value. On the other hand, if those minutes are not used for useful work (e.g. spent waiting or lost due to CPU, server or I/O wait), they are lost. This is like the airlines' available seat miles (ASM) metric: if a seat is left empty it is a lost opportunity, however if used, there is value, not to mention if yield management is applied to price that seat differently. To make up for that loss, many organizations have to add extra servers and thus more software licensing costs.
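As a back-of-the-envelope sketch of the ASLM idea (the workload numbers below are hypothetical, purely for illustration):

```python
def aslm_utilization(total_license_minutes, io_wait_minutes):
    """Fraction of available software license minutes spent doing useful work
    vs. lost waiting on storage I/O (the ASLM analogy from the text)."""
    useful = total_license_minutes - io_wait_minutes
    return useful / total_license_minutes

# A licensed database server over an 8 hour (480 minute) window:
hdd_bound = aslm_utilization(480, 192)  # 40% of the time stuck in I/O wait
ssd_backed = aslm_utilization(480, 24)  # 5% of the time in I/O wait
print(hdd_bound, ssd_backed)            # 0.6 vs 0.95 of license minutes useful
```

In this hypothetical, the SSD-backed server delivers roughly a third more useful licensed minutes from the exact same license cost.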


Can we get a side of context with them metrics?

EMC, along with some other vendors, are starting to give more context with their storage I/O performance metrics that matter beyond simple IOPs or hero marketing metrics. However context extends beyond performance to also cover availability and space capacity, which means data protection overhead. As an example, EMC claims 25% overhead for RAID 5 and 20% for RAID 6 (or 30% for a RAID 5/RAID 6 combo), where a 25 drive (SSD) XDP has an 8% overhead. However this assumes a 4+1 (5 drive) RAID 5, which is not an apples to apples comparison on a space overhead basis. For example, a 25 drive RAID 5 (24+1) would have around a 4% parity protection space overhead, or a RAID 6 (23+2) about 8%.
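The parity space overhead figures above are simple division; a quick sketch (generic RAID arithmetic, not any vendor's specific numbers):

```python
def parity_overhead_pct(data_drives, parity_drives):
    """Parity protection space overhead as a percent of total drives."""
    total = data_drives + parity_drives
    return 100 * parity_drives / total

print(round(parity_overhead_pct(4, 1)))   # 20 -> a 4+1 RAID 5
print(round(parity_overhead_pct(24, 1)))  # 4  -> a 25 drive RAID 5 (24+1)
print(round(parity_overhead_pct(23, 2)))  # 8  -> a 25 drive RAID 6 (23+2)
```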

Granted, while the space protection overhead of the earlier examples might be more apples to apples with XDP, there are other differences. For example, solutions such as XDP can be more tolerant of multiple drive failures, with faster rebuilds than some of the standard or basic RAID implementations. Thus more context and clarity would be helpful.

StorageIO would like to see vendors, including EMC along with startups, who give data protection space overhead comparisons without context to add that context (and applauds those who provide it). This means providing context for data protection space overhead comparisons similar to performance metrics that matter. For example, simply state with an asterisk or footnote that a 4+1 RAID 5 is being compared vs. a 25 drive erasure code, forward error correction, dispersal, XDP or wide-stripe RAID for that matter (e.g. can we get a side of context). Note this is in no way unique to EMC, and in fact is quite common with many of the smaller startups as well as established vendors.

General comments

My laundry list of items, which for now are nice-to-haves, however for you might be need-to-haves, would include native replication (today leverages RecoverPoint), Advanced Format (4KB) support for servers (Updated 11/15/13: per above, EMC has confirmed that host/server-side (front-end) AF along with 512 byte emulation modes exist today) as well as for SSD based drives, DIF (Data Integrity Feature), and Microsoft ODX among others. While 12Gb SAS server to X-Brick attachment for small in-the-cabinet connectivity might be nice for some, more practical on a go-forward basis would be 40GbE support.

Now let us see what EMC does with XtremIO and how it competes in the market. One indicator to watch in the industry and market of the impact or presence of EMC XtremIO is the amount of fud and mud that will be tossed around. Perhaps time to make a big bowl of popcorn, sit back and enjoy the show…

Ok, nuff said (for now).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)


DataDynamics StorageX 7.0 file and data management migration software


DataDynamics StorageX 7.0 file and data management migration software

Some of you may recall back in 2006 (here and here) when Brocade bought a file management storage startup called NuView whose product was StorageX, and then in 2009 issued end of life (EOL) notice letters that the solution was being discontinued.

Fast forward to 2013 and there is a new storage startup (DataDynamics) with an existing product that was just updated and re-released as StorageX 7.0.

Software Defined File Management – SDFM?

Granted, from an industry buzz focused adoption perspective, you may not have heard of DataDynamics or perhaps even StorageX. However many customers around the world from different industry sectors have, and are using the solution.

The current industry buzz is around software defined data centers (SDDC), which has led to software defined networking (SDN), software defined storage (SDS) and other software defined marketing (SDM) terms, not to mention Valueware. So for those who like software defined marketing or software defined buzzwords, you can think of StorageX as software defined file management (SDFM), however don't ask or blame them about using it as I just thought of it for them ;).

This is an example of industry adoption traction (what is being talked about) vs. industry deployment and customer adoption (what is actually in use on a revenue basis): DataDynamics is not a well-known company yet, however they have what many of the high-flying startups with industry adoption do not have, which is an installed base of revenue customers who also now have a new version 7.0 product to deploy.

StorageX 7.0 enabling intelligent file and data migration management

A common theme is adding management, including automated data movement and migration, to bring structure to unstructured NAS file data. More than a data mover or storage migration tool, Data Dynamics StorageX is a software platform for adding storage management structure around unstructured local and distributed NAS file data. This includes heterogeneous vendor support across different storage systems, protocols and tools, including Windows CIFS and Unix/Linux NFS.


A few months back prior to its release, I had an opportunity to test drive StorageX 7.0 and have included some of my comments in this industry trends perspective technology solution brief (PDF). This solution brief titled Data Dynamics StorageX 7.0 Intelligent Policy Based File Data Migration is a free download with no registration required (as are others found here), however per our disclosure policy to give transparency, DataDynamics has been a StorageIO client.

If you have a need for gaining insight and management control over your unstructured file data, to support migrations for upgrades, technology refresh, archiving or tiering across different vendors including EMC and NetApp, check out DataDynamics StorageX 7.0; take it for a test drive like I did and tell them StorageIO sent you.

Ok, nuff said,

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)


Some fall 2013 AWS cloud storage and compute enhancements


Some fall 2013 AWS cloud storage and compute enhancements

I just received via Email the October Amazon Web Services (AWS) Newsletter in advance of the re:Invent event next week in Las Vegas (yes I will be attending).

AWS October newsletter and enhancement updates

What this means

AWS is arguably the largest of the public cloud services, with a diverse set of services and options across multiple geographic regions to meet different customer needs. As such, it is not surprising to see AWS continue to expand their portfolio both in terms of features and functionality, along with extending their presence in different geographies.

Let's see what else AWS announces next week in Las Vegas at their 2013 re:Invent event.

Click here to view the current October 2013 AWS newsletter. You can view (and signup for) earlier AWS newsletters here, and while you are at it, view the current and recent StorageIO Update newsletters here.

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)


Fall 2013 StorageIO Update Newsletter


Fall 2013 StorageIO Update Newsletter

Welcome to the Fall 2013 (joint September and October) edition of the StorageIO Update (newsletter), containing trends and perspectives on cloud, virtualization and data infrastructure topics. It is fall (at least here in North America), which means conferences, symposiums, virtual and physical events, seminars and webinars in addition to normal client project activities. VMworld in late August in San Francisco kicked off the fall (or back to school) season of activity. VMworld was followed by many other events, both in-person and virtual or on-line such as webinars and Google+ hangouts among others, not to mention all the briefings for vendor product announcements and updates. Check out the industry trends perspectives articles, comments and blog posts below that cover some activity over the past few months.

VMworld 2013
Congratulations to VMworld on the 10th anniversary of the event. With the largest installment yet of a VMworld in terms of attendance, there were also many announcements. Here is a synopsis of some of those announcements, which of course included plenty of software defined marketing (SDM).

CMG and Storage Performance
During mid-September I was invited to give an industry trends and perspectives presentation to the Storage Performance Council (SPC) board. The SPC board was meeting in the Minneapolis area, and I gave a brief talk about metrics that matter and the importance of context, with a focus on applications. Speaking of the Minneapolis area, Tom Becchetti (@tbecchetti) organized a great CMG event hosted over at Blue Cross Blue Shield of Minnesota. I gave a discussion around Technolutionary: technology evolution and revolution, using old and new things in new ways.

Check out our backup, restore, BC, DR and archiving page (under the resources section on StorageIO.com) for various presentations, book chapter downloads and other content.

SNW Fall 2013 Long Beach
Talking about traveling, there was a quick trip out to Long Beach for the fall 2013 edition of Storage Networking World (SNW), where I had some good meetings and conversations with those who were actually there. No need to sugar coat it, likewise no need to kick sand in its face. Plain and simple, that SNW is not the event it used to be has been a common discussion theme for several years, so I had set my expectations accordingly.

Some have asked me why I even spent time, money and resources to attend SNW?

My answer is that I had some meetings to attend to, wanted to see and meet with others who were going to be there, and perhaps even say goodbye to an event that I have been involved with for over a decade.

Does that mean I’m all done with SNW?

Not sure yet, as I will have to wait and see what SNIA and IDG/Computerworld (the event co-owners and producers) put together for future events. However there are enough other events and activities to pick up the slack, which is part of what has caused the steady decline of events like SNW among others.

Perhaps it is time for SNIA to partner with another adjacent yet like-minded organization such as CMG to collaborate and try something like what was done in the early 2000s. That is, SNIA providing their own seminars, along with others such as myself who are involved with CMG, SNW and SNIA, to beef up or set up a storage and I/O focused track at the CMG event.

Beyond those items mentioned above or in the following section, there are plenty of interesting and exciting things occurring in the background that I can't talk about yet. However watch for future posts, commentary, perspectives and other information down the road (and in the not so distant future).

Enjoy this edition of the StorageIO Update newsletter.

Ok, nuff said (for now)

Cheers gs

StorageIO Industry Trends and Perspectives
Industry trends perspectives and commentary
What is being seen, heard and talked about while out and about

The following is a synopsis of some StorageIOblog posts, articles and comments in different venues on various industry trends, perspectives and related themes about clouds, virtualization, data and storage infrastructure topics among related themes.


InfoStor: Perspectives on Data Dynamics file migration tool (Read more about StorageX later in this newsletter)
SearchStorage: Perspectives on Data Dynamics resurrects StorageX for file migration
SearchStorage: Perspectives on Cisco buying SSD storage vendor Whiptail

Recent StorageIO Tips and Articles in various venues:

21cIT:  Why You Should Consider Object Storage
InfoStor:  HDDs Are Still Spinning (Rust Never Sleeps)
21cIT:  Object Storage Is in Your Future, Even if You Use Files
21cIT:  Playing the Name Game With Virtual Storage
InfoStor:  Flash Data Storage: Myth vs. Reality
InfoStor:  The Nand Flash Cache SSD Cash Dance
SearchEnterpriseWAN:  Remote Office / ROBO backup and data protection for networking Pro’s
TheVirtualizationPractice:  When and Where to use NAND Flash SSD for Virtual Servers
FedTech:  These Data Center (DCIM) Tools Can Streamline Computing Resources

Storage I/O posts

Recent StorageIO blog post:

Seagate Kinetic Cloud and Object Storage I/O platform (and Ethernet HDD)
Cloud conversations: Has Nirvanix shutdown caused cloud confidence concerns?
Cisco buys Whiptail continuing the SSD storage I/O flash cash cache dash
WD buys nand flash SSD storage I/O cache vendor Virident
EMC New VNX MCx doing more storage I/O work vs. just being more
Is more of something always better? Depends on what you are doing
VMworld 2013 Vmware, server, storage I/O and networking update (Day 1)
EMC ViPR software defined object storage part II

Check out our objectstoragecenter.com page where you will find a growing collection of information and links pertaining to cloud and object storage themes, technologies and trends.

Brouwer Storage Consultancy

StorageIO in Europe (Netherlands)
Spent over a week in the Netherlands where I presented three different seminar workshop sessions organized by Brouwer Storage Consultancy who is celebrating their 10th anniversary in business. These sessions spanned five full days of interactive discussions with an engaged diverse group of attendees in the Nijkerk area who came from across Holland to take part in these workshops.

Congratulations to Gert and Frank Brouwer on their ten years of being in business and best wishes for many more. Fwiw those who are curious StorageIO will be ten years young in business in about two years.


Some observations from while in Europe:

Continued cloud privacy concerns, amplified by the NSA and suspicion of US-based companies, yet many are not aware of similar concerns about European or UK-based firms from those outside those areas. While there were some cloud concern conversations over the demise of Nirvanix, those seemed less so than in the media or US, given that at least in Holland they have seen other cloud and storage-as-a-service firms come and go already. It should be noted that the US has also seen cloud and storage-as-a-service startups come and go, however I think sometimes we, or at least the media, tend to have a short if not selective memory at times.

In one of our workshop sessions we were talking about service level objectives (SLO), service level agreements (SLA), recovery point objectives (RPO) and recovery time objectives (RTO) among other themes. Somebody asked why the focus on time in RPO, and why not a transactional perspective, which I thought was a brilliant question. We had a good conversation in the group and concurred that while RPO is what the industry uses, there also needs to be a transactional state context tied to what is inferred or assumed with RPO and RTO. Thus the importance of looking beyond just the point in time to a transactional context or state, that is, recovering not just to a given time, however to a given transactional point.

Note that transactional could mean database, file system, backup or data protection index or catalog, meta data repository or other entity. This is where some should be jumping up and down like Donkey in Shrek, wanting to point out that is exactly what RTO and RPO refer to, which would be great. However all too often what is assumed is not conveyed; thus those who don't know, well, they assume, or simply don't know what others assume.


Data Dynamics StorageX 7.0 Intelligent Policy Based File Data Migration – There is no such thing as a data or information recession. Likewise, people and data are living longer as well as getting larger. These span various use cases from traditional to personal or at-work productivity, from little data to big data content, collaboration including file or document sharing, to rich media applications, all of which leverage unstructured data. For example, email, word processing, back-office documents, web and text files, presentations (e.g. PowerPoint), photos, audio and video among others. These macro trends result in the continued growth of unstructured Network Attached Storage (NAS) file data.

Thus, a common theme is adding management including automated data movement and migration to carry out structure around unstructured NAS file data. More than a data mover or storage migration tool, Data Dynamics StorageX is a software platform for adding storage management structure around unstructured local and distributed NAS file data. This includes heterogeneous vendor support across different storage system, protocols and tools including Windows CIFS and Unix/Linux NFS.
(Disclosure DataDynamics has been a StorageIO client). Visit Data Dynamics at www.datadynamicsinc.com/

Server and StorageIO seminars, conferences, web casts, events and activities (out and about)

Seminars, symposium, conferences, webinars
Live in person and recorded recent and upcoming events

Announcing: Backup.U brought to you by Dell

Some on-line (live and recorded) events include an ongoing series tied to data protection (backup/restore, HA, BC, DR and archiving) called Backup.U, organized and sponsored by Dell Data Protection Software, that you can learn more about at the landing page www.software.dell.com/backupu (more on this in a future post). In addition to data protection, there are some other events and activities including a BrightTalk webinar on storage I/O and networking for cloud environments (here).

In addition to the above, check out the StorageIO calendar to see more recent and upcoming activities.

Watch for more 2013 events to be added soon to the StorageIO events calendar page. Topics include data protection modernization (backup/restore, HA, BC, DR, archive), data footprint reduction (archive, compression, dedupe), storage optimization, SSD, object storage, server and storage virtualization, big data, little data, cloud and object storage, performance and management trends among others.

Vendors, VAR’s and event organizers, give us a call or send an email to discuss having us involved in your upcoming pod cast, web cast, virtual seminar, conference or other events.

If you missed the Summer (July and August) 2013 StorageIO Update newsletter, click here to view that and other previous editions as HTML or PDF versions. Click here to subscribe to this newsletter (and pass it along). View archives of past StorageIO Update newsletters as well as download PDF versions at: www.storageio.com/newsletter

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)


Seagate Kinetic Cloud and Object Storage I/O platform (and Ethernet HDD)


Seagate Kinetic Cloud and Object Storage I/O platform

Seagate announced today their Kinetic platform and drive, designed for use by object API accessed storage, including cloud deployments. The Kinetic platform includes Hard Disk Drives (HDD) that feature 1Gb Ethernet (1 GbE) attachment and that speak an object access API, or what Seagate refers to as key / value.

Seagate Kinetic architecture

What is being announced with Seagate Kinetic Cloud and Object (Ethernet HDD) Storage?

  • Kinetic Open Storage Platform – Ethernet drives, key / value (object access) API, partner software
  • Software developer’s kits (SDK) – Developer tools, documentation, drive simulator, code libraries, code samples including for SwiftStack and Riak.
  • Partner ecosystem

What is Kinetic?

While it has 1 GbE ports, do not expect to be able to use those for iSCSI or NAS including NFS, CIFS or other standard access methods. Being Ethernet based, the Kinetic drive only supports the key value object access API. What this means is that applications, cloud or object stacks, key value and NoSQL data repositories, or other software that adopt the API can communicate directly using object access.

Seagate Kinetic storage

Internally, the HDD functions as a normal drive storing and accessing data; the object access function and translation layer shifts from being in an Object Storage Device (OSD) server node to inside the HDD. The Kinetic drive takes on the key/value API personality over 1 GbE ports instead of traditional Logical Block Addressing (LBA) and Logical Block Number (LBN) access using 3G, 6G or emerging 12G SAS or SATA interfaces. Kinetic drives respond to object access (aka what Seagate calls key/value) API commands such as Get and Put among others. Learn more about object storage, access and clouds at www.objectstoragecenter.com.
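To make the key/value access model concrete, here is a minimal in-memory sketch of Get/Put/Delete semantics. The class and method names are hypothetical illustrations, not Seagate's actual Kinetic protocol or SDK (the real drives speak a wire protocol over their 1 GbE ports):

```python
# Illustrative in-memory simulator of key/value drive semantics.
# Hypothetical names; NOT the actual Seagate Kinetic protocol or SDK.

class KineticDriveSim:
    """Simulates a drive that stores opaque values addressed by keys."""

    def __init__(self):
        self._store = {}  # key (bytes) -> value (bytes)

    def put(self, key, value):
        """Store a value under a key (replaces any prior value)."""
        self._store[key] = value

    def get(self, key):
        """Return the value for a key, or None if absent."""
        return self._store.get(key)

    def delete(self, key):
        """Remove a key; return True if it existed."""
        return self._store.pop(key, None) is not None

drive = KineticDriveSim()
drive.put(b"object/0001", b"hello kinetic")
print(drive.get(b"object/0001"))  # b'hello kinetic'
```

The point is that applications address data by key, never by LBA; the drive owns the mapping of keys to physical blocks internally.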

Storage I/O trends

Some questions and comments

Is this the same as what was attempted almost a decade ago now with the T10 OSD drives?

Seagate claims no.

What is different this time around with Seagate doing a drive that to some may vaguely resemble the failed T10 OSD approach?

Industry support for object access and API development have progressed from an era of build it and they will come thinking, to now where the drives are adapted to support current cloud, object and key value software deployment.

Won't 1 GbE ports be too slow vs. 12G, 6G or even 3G SAS and SATA ports?

Keep in mind those would be apples-to-oranges comparisons based on the protocols and types of activity being handled. Kinetic types of devices will initially be used for large, data-intensive applications where the emphasis is on storing or retrieving large amounts of information vs. low-latency transactions. Also keep in mind that one of the design premises is to keep cost low and spread the work over many nodes and devices to meet those goals, while relying on server-side caching tools.
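As rough back-of-envelope arithmetic (the per-port rates are nominal assumptions, ignoring protocol overhead differences), a single 1 GbE drive port is indeed slower than a 6G SAS port, yet the scale-out model spreads work across many drives:

```python
# Nominal per-port bandwidth (assumed usable rates, MB/s).
GBE_1_MBPS = 1000 / 8   # 1 GbE: 1000 Mb/s data rate -> ~125 MB/s
SAS_6G_MBPS = 6000 / 10  # 6 Gb/s SAS with 8b/10b encoding -> ~600 MB/s

def aggregate(per_port_mbps, ports):
    """Total nominal bandwidth across many independently addressed drives."""
    return per_port_mbps * ports

# One 1 GbE drive port vs. one 6G SAS port: SAS wins per port...
print(GBE_1_MBPS, SAS_6G_MBPS)
# ...but many Ethernet drives working in parallel add up:
print(aggregate(GBE_1_MBPS, 60))  # 60 drives -> 7500 MB/s nominal aggregate
```

This is the design trade-off in a nutshell: optimize for aggregate throughput and cost across many devices rather than per-device latency.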

Storage I/O trends

Does this mean that the HDD is actually software defined?

Seagate and other HDD manufacturers have not yet jumped on the software defined marketing (SDM) bandwagon. They could join the software defined fun (SDF) and talk about a software defined disk (SDD) or software defined HDD (SDHDD); however, let us leave that alone for now.

The reality is that there is far more software in a typical HDD than is realized. Sure, some of that is packaged inside ASICs (Application Specific Integrated Circuits) or running as firmware that can be updated. However, there is a lot of software running in a HDD, hence the need for powerful yet energy-efficient processors found in those devices. On a drive-per-drive basis, you may see a Kinetic device consume more energy vs. otherwise equivalent HDDs due to the increase in processing (compute) needed to run the extra software. However, that also represents an off-load of some work from servers, enabling them to be smaller or do more work.

Are these drives for everybody?

It depends on if your application, environment, platform and technology can leverage them or not. This means if you view the world only through what is new or emerging then these drives may be for all of those environments, while other environments will continue to leverage different drive options.

Object storage access

Does this mean that block storage access is now dead?

Not quite, after all there is still some block activity involved, it is just that they have been further abstracted. On the other hand, many applications, systems or environments still rely on block as well as file based access.

What about OpenStack, Ceph, Cassandra, Mongo, Hbase and other support?

Seagate has indicated those and others are targeted to be included in the ecosystem.

Seagate needs to be careful balancing their story and message with Kinetic to play to and support those focused on the new and emerging, while also addressing their bread-and-butter legacy markets. The balancing act is communicating options and the flexibility to choose and adopt the right technology for the task without being scared of the future, clinging to the past, or throwing the baby out with the bath water in exchange for something new.

For those looking to do object storage systems, or cloud and other scale based solutions, Kinetic represents a new tool to do your due diligence and learn more about.

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Cloud conversations: Has Nirvanix shutdown caused cloud confidence concerns?

Storage I/O trends

Cloud conversations: Has Nirvanix shutdown caused cloud confidence concerns?

Recently, the seven-plus-year-old cloud storage startup Nirvanix announced that they were finally shutting down and that customers should move their data.

nirvanix customer message

Nirvanix has also posted an announcement that they have established an agreement with IBM Softlayer (read about that acquisition here) to help customers migrate to those services as well as to those of Amazon Web Services (AWS), (read more about AWS in this primer here), Google and Microsoft Azure.

Cloud customer concerns?

With Nirvanix shutting down, there have been plenty of articles, blog posts, Twitter tweets and other conversations asking if clouds are safe.

Btw, here is a link to my ongoing poll where you can cast your vote on what you think about clouds.

IMHO clouds can be safe if used in safe ways which includes knowing and addressing your concerns, not to mention following best practices, some of which pre-date the cloud era, sometimes by a few decades.

Nirvanix Storm Clouds

More on this in a moment; however, let's touch base on Nirvanix and why I said they were finally shutting down.

The reason I say finally shutting down is that there were plenty of early warning signs and storm clouds circling Nirvanix for a few years now.

What I mean by this is that in their seven plus years of being in business, there have been more than a few CEO changes, something that is not unheard of.

Likewise there have been some changes to their business model, ranging from selling their software as a service, to a solution, to hosting among others; again, smart startups and established organizations will adapt over time.

Nirvanix also invested heavily in marketing, public relations (PR) and analyst relations (AR) to generate buzz along with gaining endorsements as do most startups to get recognition, followings and investors if not real customers on board.

In the case of Nirvanix, the indicator signs mentioned above also included what seemed like a semi-annual if not annual changing of CEOs, marketing and others tying into business model adjustments.

cloud storage

It was only a year or so ago that if you gauged a company's health by the PR and AR news, activity and endorsements, you would have believed Nirvanix was about to crush Amazon, Rackspace or many others; perhaps some actually did believe that, followed shortly thereafter by the abrupt departure of their then CEO and marketing team. Thus just as fast as Nirvanix seemed to be the phoenix rising in stardom, their aura started to dim again, which could or should have been a warning sign.

This is not to single out Nirvanix; however, given their penchant for marketing and now what appears to some as a sudden collapse or shutdown, they have also become a lightning rod of sorts for clouds in general. Given all the hype and FUD around clouds, when something does happen the detractors will be quick to jump or pile on to say things like "See, I told you, clouds are bad".

Meanwhile the cloud cheerleaders may go into denial, saying there are no problems or issues with clouds, or they may go back into a committee meeting to create a new stack, standard, API set or marketing consortium alliance. ;) On the other hand, there are valid concerns with any technology, including clouds: in general there are good implementations that can be used the wrong way, or questionable implementations and selections used in what seem like good ways that can go bad.

This is not to say that clouds in general, whether as a service, solution or product on a public, private or hybrid basis, are any riskier than traditional hardware, software and services. Instead, this should be a wake-up call for people and organizations to review clouds, citing their concerns along with revisiting what to do or what can be done about them.

Clouds: Being prepared

Ben Woo of Neuralytix posted this question to one of the LinkedIn groups, Collateral Considerations If You Were/Are A Nirvanix Customer, to which I posted some tips and recommendations including:

1) If you have another copy of your data somewhere else (which you should btw), how will your data at Nirvanix be securely erased, and the storage it resides on be safely (and secure) decommissioned?

2) If you do have another copy of your data elsewhere, how current is it? Can you bring it up to date from various sources (including updates from Nirvanix while they stay online)?

3) Where will you move your data to short or near term, as well as long-term?

4) What changes will you make to your procurement process for cloud services in the future to protect against situations like this happening to you?

5) As part of your plan for putting data into the cloud, refine your strategy for getting it out, moving it to another service or place as well as having an alternate copy somewhere.

FWIW, for any data I put into a cloud service there is also another copy somewhere else. Even though there is a cost, there is a benefit: the ability to decide which copy to use if needed, as well as having a backup/spare copy.
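As a sketch of keeping that alternate copy honest, one simple approach is comparing content digests of the local copy against what comes back from the cloud service. This is illustrative only; in practice the bytes would come from reading a file and performing a cloud GET, which are omitted here:

```python
# Sketch: verify a local copy matches the copy retrieved from a cloud
# service by comparing content digests. Illustrative only.
import hashlib

def digest(data):
    """SHA-256 digest of a bytes payload."""
    return hashlib.sha256(data).hexdigest()

def copies_match(local_bytes, cloud_bytes):
    """True when both copies hold identical content."""
    return digest(local_bytes) == digest(cloud_bytes)

print(copies_match(b"important data", b"important data"))  # True
print(copies_match(b"important data", b"stale copy"))      # False
```

Running such a comparison on a schedule is one way to know whether your alternate copy is actually current before you need it.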

Storage I/O trends

Cloud Concerns and Confidence

As part of cloud procurement of services or products, the same proper due diligence should occur as if you were buying traditional hardware, software, networking or services. That includes checking out not only the technology, but also the company's financials, business records, and customer references (both the good and the not-so-good or bad ones) to gain confidence. Part of gaining that confidence also involves addressing ahead of time how you will get your data out of or back from that service if needed.

Keep in mind that if your data is very important, are you going to keep it in just one place? For example I have data backed-up as well as archived to cloud providers, however I also have local copies either on-site or off.

Likewise there is data I keep locally that also exists at alternate locations including the cloud. Sure that is costly; however, by not treating all of my data and applications the same, I'm able to balance those costs out, plus use the cost advantages of different services as well as on-site options to be effective. I may be spending no less on data protection, in fact I'm actually spending a bit more; however, I also have more copies and versions of important data in multiple locations. Data that is not changing often does not get protected as often; however, there are multiple copies to meet different needs or threat risks.

Storage I/O trends

Don’t be scared of clouds, be prepared

While some of the other smaller cloud storage vendors will see some new customers, I suspect that near to mid-term it will be the larger, more established and well-funded providers that gain the most from this current situation. Granted, some customers are looking for alternatives to the mega cloud providers such as Amazon, Google, HP, IBM, Microsoft and Rackspace among others; however, there is a long list of others, some of which are not so well-known but should be, such as CenturyLink/Savvis, Verizon/Terremark, SunGard, Dimension Data, Peak, Bluehost, Carbonite, Mozy (owned by EMC), Xerox ACS, and EVault (owned by Seagate), not to mention many others.

Something to be aware of as part of doing your due diligence is determining who or what actually powers a particular cloud service. The larger providers such as Rackspace, Amazon, Microsoft, HP among others have their own infrastructure while some of the smaller service providers may in fact use one of the larger (or even smaller) providers as their real back-end. Hence understanding who is behind a particular cloud service is important to help decide the viability and stability of who it is you are subscribed to or working with.

Something that I have said for the past couple of years and a theme of my book Cloud and Virtual Data Storage Networking (CRC Taylor & Francis) is do not be scared of clouds, however be ready, do your homework.

This also means having cloud concerns is a good thing, again don’t be scared, however find what those concerns are along with if they are major or minor. From that list you can start to decide how or if they can be worked around, as well as be prepared ahead of time should you either need all of your cloud data back quickly, or should that service become un-available.

Also when it comes to clouds, look beyond lowest cost or for free, likewise if something sounds too good to be true, perhaps it is. Instead look for value or how much do you get per what you spend including confidence in the service, service level agreements (SLA), security, and other items.

Keep in mind, only you can prevent data loss either on-site or in the cloud, granted it is a shared responsibility (With a poll).

Additional related cloud conversation items:
Cloud conversations: AWS EBS Optimized Instances
Poll: What Do You Think of IT Clouds?
Cloud conversations: Gaining cloud confidence from insights into AWS outages
Cloud conversations: confidence, certainty and confidentiality
Cloud conversation, Thanks Gartner for saying what has been said
Cloud conversations: AWS EBS, Glacier and S3 overview (Part III)
Cloud conversations: Gaining cloud confidence from insights into AWS outages (Part II)
Don’t Let Clouds Scare You – Be Prepared
Everything Is Not Equal in the Datacenter, Part 3
Amazon cloud storage options enhanced with Glacier
What do VARs and Clouds as well as MSPs have in common?
How many degrees separate you and your information?

Ok, nuff said.

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Cisco buys Whiptail continuing the SSD storage I/O flash cash cache dash

Storage I/O trends

Cisco buys Whiptail continuing the SSD storage I/O flash cash cache dash


There is a nand flash solid state devices (SSD) cash dash occurring, not to mention fast cache dances, in the IT and data infrastructure (e.g. storage and I/O) sector specifically.

Why the nand flash SSD cash dash and cache dance?

Yesterday hard disk drive (HDD) vendor Western Digital (WD) bought Virident a nand flash PCIe Solid State Device (SSD) card vendor for $650M, and today networking and server vendor Cisco bought Whiptail a SSD based storage system startup for a little over $400M. Here is an industry trends perspective post that I did yesterday on WD and Virident.

Obviously this begs a couple of questions, some of which I raised in my post yesterday about WD, Virident, Seagate, FusionIO and others.

Questions include

Does this mean Cisco is getting ready to take on EMC, NetApp, HDS and its other storage partners who leverage the Cisco UCS server?

IMHO at least near term, no more than they have in the past, nor any more than EMC's partnership with Lenovo indicates a shift in what is done with vBlocks. On the other hand, some partners or customers may be as nervous as a long-tailed cat next to a rocking chair (Google it if you don't know what it means ;).

Is Cisco going to continue to offer Whiptail SSD storage solutions on a standalone basis, or pull them in as part of solutions similar to what it has done on other acquisitions?

Storage I/O trends

IMHO this is one of the most fundamental questions, and despite the press release and statements about this being a UCS focus, a clear sign of proof for Cisco is whether they rein in (if they go that route) Whiptail from being sold as a general storage solution (with SSD) as opposed to being part of a solution bundle.

How will Cisco manage its relationship in a coopetition manner, cooperating with the likes of EMC in the joint VCE initiative along with FlexPod partner NetApp among others? Again, time will tell.

Also while most of the discussions about NetApp have been around the UCS based FlexPod business, there is the other side of the discussion which is what about NetApp E Series storage including the SSD based EF540 that competes with Whiptail (among others).

Many people may not realize how much DAS storage including fast SAS, high-capacity SAS and SATA or PCIe SSD cards Cisco sells as part of UCS solutions that are not vBlock, FlexPod or other partner systems.

NetApp and Cisco have partnerships that go beyond the FlexPod (UCS and ONTAP based FAS), so it will be interesting to see what happens in that space (if anything). This is where Cisco and their UCS acquiring Whiptail is not that different from IBM buying TMS to complement their servers (and storage) while also partnering with other suppliers; the same holds true for server vendors Dell, HP, IBM and Oracle among others.

Can Cisco articulate and convince their partners, customers, prospects and others that the Whiptail acquisition is more about direct attached storage (DAS), which includes both internal dedicated and external shared devices?

Keep in mind that DAS does not have to mean Dumb A$$ Storage as some might have you believe.

Then there are the more popular questions of who is going to get bought next, what will NetApp, Dell, Seagate, Huawei and a few others do?

Oh, btw, funny how none of the pubs have mentioned that Whiptail CEO Dan Crain is a former Brocadian (e.g. former Brocade CTO), Brocade being a Cisco competitor, just saying.

Congratulations to Dan and his crew and enjoy life at Cisco.

Stay tuned as the fall 2013 nand flash SSD cache dash and cash dance activities are well underway.

Ok, nuff said (for now).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

WD buys nand flash SSD storage I/O cache vendor Virident

Storage I/O trends

WD buys nand flash SSD storage I/O cache vendor Virident

Congratulations to Virident for being bought today for $645 million USD by Western Digital (WD). Virident, a nand flash PCIe card startup vendor, has been around for several years and in the last year or two has gained more industry awareness as a competitor to FusionIO among others.

There is a nand flash solid state devices (SSD) cash dash occurring, not to mention fast cache dances, in the IT and data infrastructure (e.g. storage and I/O) sector specifically.

Why the nand flash SSD cash dash and cache dance?

Here is a piece that I did today over at InfoStor on a related theme that sets the basis of why the nand flash-based SSD market is popular for storage and as a cache. Hence there is a flash cash dash, and for some a dance, for increased storage I/O performance.

Like the hard disk drive (HDD) industry before it, which despite what some pundits and prophets have declared (for years if not decades) as being dead (it is still alive), the sector has seen many startups, shutdowns, mergers and acquisitions along with some transformations. Granted, solid-state memories are part of the present and future, being deployed in new and different ways.

The same thing has occurred in the nand flash-based SSD sector, with LSI acquiring SandForce, and SanDisk picking up Pliant and FlashSoft among others. Then there is Western Digital (WD), which recently has danced with their cash as they dash to buy up all things flash including STEC (drives & PCIe cards), VeloBit (cache software), Virident (PCIe cards), along with Arkeia (backup) and an investment in Skyera.

Storage I/O trends

What about industry trends and market dynamics?

Meanwhile there have been some other changes, with former industry darling and high-flying post-IPO stock FusionIO hitting market reality and a sudden CEO departure a few months ago. However, after a few months of their stock being pummeled, today it bounced back, perhaps as people now speculate who will buy FusionIO with WD picking up Virident. Note that one of Virident's OEM customers is EMC for their PCIe flash card XtremSF, as are Micron and LSI.

Meanwhile STEC, also now owned by WD, was EMC's original flash SSD drive supplier, or what they refer to as EFDs (Electronic Flash Devices), not to mention having also supplied HDDs to them (also keep in mind WD bought HGST a year or so back).

There are some early signs, including their stock price jumping today, which was probably oversold. Perhaps people are now speculating that Seagate, which had been an investor in Virident (bought by WD for $645 million today), might be in the market for somebody else? Alternatively, perhaps WD didn't see the value in a FusionIO, or wasn't willing to make a big flash cache cash grab dash of that size? Also note Seagate won a $630 million infringement lawsuit vs. WD (and the appeal was recently upheld) (here and here).

Does that mean FusionIO could become Seagate’s target or that of NetApp, Oracle or somebody else with the cash and willingness to dash, grab a chunk of the nand flash, and cache market?

Likewise, there are the software I/O and caching tool vendors, some of which are tied to VMware and virtual servers vs. others that are more flexible, that are gaining popularity. What about a systems or solution appliance play, could that be in the hunt for a Seagate?

Anything is possible however IMHO that would be a risky move, one that many at Seagate probably still remember from their experiment with Xiotech, not to mention stepping on the toes of their major OEM customer partners.

Storage I/O trends

Thus I would expect Seagate, if they do anything, would be more along the lines of a component-type supplier, meaning a FusionIO (yes they have NexGen, however that could be easily dealt with), OCZ, perhaps even an LSI or Micron; however, some of those start to get rather expensive for a quick flash cache grab for some stock and cash.

Also, keep in mind that FusionIO, in addition to their PCIe flash cards, also has the ioTurbine software caching tool; if you are not familiar with it, IBM recently made an announcement of their Flash Cache Storage Accelerator (FCSA) that has an affiliation to guess who?
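For those unfamiliar with this class of server-side caching tools, the general idea can be sketched as a read cache in front of slower back-end storage. This is a generic LRU illustration of the technique, not any vendor's actual implementation:

```python
# Generic illustration of a server-side read cache (e.g. flash in a
# server) fronting slower back-end storage. Not a vendor product.
from collections import OrderedDict

class ReadCache:
    def __init__(self, capacity, backend_read):
        self.capacity = capacity
        self.backend_read = backend_read  # function: key -> data (slow path)
        self.cache = OrderedDict()        # insertion order tracks recency
        self.hits = self.misses = 0

    def read(self, key):
        if key in self.cache:
            self.hits += 1
            self.cache.move_to_end(key)      # LRU: mark as most recent
            return self.cache[key]
        self.misses += 1
        data = self.backend_read(key)        # go to the HDD/array
        self.cache[key] = data
        if len(self.cache) > self.capacity:  # evict least recently used
            self.cache.popitem(last=False)
        return data

cache = ReadCache(2, backend_read=lambda k: f"data-{k}")
cache.read("a"); cache.read("a"); cache.read("b")
print(cache.hits, cache.misses)  # 1 2
```

The value proposition is the same regardless of vendor: repeated reads are served from fast local flash, offloading the shared storage system.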

Closing comments (for now)

Some of the systems or solutions players will survive, perhaps even being acquired as XtremIO was by EMC, or file for IPO like Violin, or express their wish to IPO and or be bought such as all the others (e.g. Skyera, Whiptail, Pure, Solidfire, Cloudbyte, Nimbus, Nimble, Nutanix, Tegile, Kaminario, Greenbyte, and Simplivity among others).

Here’s the thing: those who really do know what is going to happen are not talking and probably cannot say, while those who are talking about what will happen are like the rest of us, just speculating, providing perspectives or stirring the pot among other things.

So who will be next in the flash cache ssd cash dash dance?

Ok, nuff said (for now).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

EMC New VNX MCx doing more storage I/O work vs. just being more

Storage I/O trends

It’s not how much you have, it’s how storage I/O work gets done that matters

Following last week's VMworld event in San Francisco, where among other announcements was this one around Virtual SAN (VSAN) along with Software Defined Storage (SDS), EMC today made several announcements.

Today’s EMC announcements include:

  • The new VNX MCx (Multi Core optimized) family of storage systems
  • VSPEX proven infrastructure portfolio enhancements
  • Availability of ViPR Software Defined Storage (SDS) platform (read more from earlier posts here, here and here)
  • Statement of direction preview of Project Nile for elastic cloud storage platform
  • XtremSW server cache software version 2.0 with enhanced management and support for VMware, AIX and Oracle RAC

EMC ViPR and EMC XtremSW cache software

Summary of the new EMC VNX MCx storage systems include:

  • More processor cores, PCIe Gen 3 (faster bus), front-end and back-end IO ports, DRAM and flash cache (as well as drives)
  • More 6Gb/s SAS back-end ports to use more storage devices (SAS and SATA flash SSD, fast HDD and high-capacity HDD)
  • MCx – Multi-core optimized with software rewritten to make use of threads and resources vs. simply using more sockets and cores at higher clock rates
  • Data Footprint Reduction (DFR) capabilities including block compression and dedupe, file dedupe and thin provisioning
  • Virtual storage pools that include flash SSD, fast HDD and high-capacity HDD
  • Block (iSCSI, FC and FCoE) and NAS file (NFS, pNFS, CIFS) front-end access with object access via Atmos Virtual Edition (VE) and ViPR
  • Entry level pricing starting at below $10,000 USD
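On the dedupe item above, the general block-dedupe technique (store only unique blocks; point logical addresses at shared data via digests) can be sketched as follows. This is a generic illustration of the concept, not EMC's implementation:

```python
# Generic illustration of block-level dedupe: only unique blocks are
# physically stored; logical addresses hold pointers (digests).
import hashlib

class DedupeStore:
    def __init__(self):
        self.blocks = {}   # digest -> data (unique blocks only)
        self.index = {}    # logical address -> digest

    def write(self, addr, data):
        d = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(d, data)  # store only if new content
        self.index[addr] = d             # logical block points at digest

    def read(self, addr):
        return self.blocks[self.index[addr]]

s = DedupeStore()
s.write(0, b"A" * 512)
s.write(1, b"A" * 512)   # duplicate content: no new physical block
s.write(2, b"B" * 512)
print(len(s.blocks))     # 2 unique blocks for 3 logical writes
```

The space saving is the gap between logical writes and unique blocks; thin provisioning plays a similar game with unallocated space.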

EMC VNX MCx systems

What is this MCx stuff, is it just more hardware?

While there is more hardware that can be used in different configurations, the key or core (pun intended) point of MCx is that EMC has taken the time and invested in reworking the internal software of the VNX, which has its roots going back to the Data General CLARiiON that EMC acquired. This is similar to an effort EMC made a few years back when it overhauled what is now known as the VMAX, from the Symmetrix into the DMX. That effort expanded from a platform or processor port to re-architecting and software optimizing (rewriting portions) to leverage new and emerging hardware capabilities more effectively.

EMC VNX MCx

With MCx, EMC is doing something similar in that core portions of the VNX software have been re-architected and written to take advantage of more threads and cores being available to do work more effectively. This is not all that different from what occurs (or should) with upper-level applications that eventually get rewritten to leverage underlying new capabilities to do more work faster in a more cost-effective way. MCx also leverages flash as a primary medium, with data then being moved (in 256MB chunks) down into lower tiers of storage (SSD and HDD drives).
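A simplified sketch of that chunk-tiering idea: rank chunks by recent access and keep the hottest ones on the flash tier, demoting the rest to HDD. The policy and data structures here are illustrative only, not EMC's actual FAST VP algorithm:

```python
# Illustrative chunk-tiering policy: hottest chunks stay on flash,
# cold chunks are demoted to HDD. NOT EMC's actual FAST VP algorithm.
CHUNK_MB = 256  # tiering granularity mentioned in the text

def rebalance(chunks, flash_capacity_chunks):
    """chunks: dict of chunk_id -> access heat (higher = more recent/hot).
    Returns (flash_set, hdd_set)."""
    by_heat = sorted(chunks, key=chunks.get, reverse=True)
    flash = set(by_heat[:flash_capacity_chunks])
    hdd = set(by_heat[flash_capacity_chunks:])
    return flash, hdd

chunks = {"c1": 50, "c2": 10, "c3": 90, "c4": 5}
flash, hdd = rebalance(chunks, flash_capacity_chunks=2)
print(sorted(flash))  # ['c1', 'c3'] - the hottest chunks land on flash
```

Real implementations add hysteresis and scheduled relocation windows so chunks do not thrash between tiers on every access.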

Storage I/O trends

EMC VNX has in the past had FAST Cache, which enables SSD drives to be used as an extension of main cache as well as drive targets. Thus while MCx can and does leverage more and faster cores as would most any software, it is also able to leverage those cores and threads in a more effective way. After all, it's not just how many processors, sockets, cores, threads, L1/L2 cache, DRAM, flash SSD and other resources you have, it's how effectively you use them. Also keep in mind that a bit of flash in the right place used effectively can go a long way vs. having a lot of cache in the wrong place or not used optimally, which will end up costing a lot of cash.

Moving forward this means that EMC should be able to further refine and optimize other portions of the VNX software not yet updated to make further benefit of new hardware platforms and capabilities.

Does this mean EMC is catching up with newer vendors?

Similar to how more of something is not always better, it's how those items are used that matters; just because something is new does not mean it's better or faster. That will manifest itself when systems are demonstrated and performance results are shown. However, the key is showing performance across different workloads that have relevance to your needs and that convey metrics that matter with context.

Storage I/O trends

Context matters, including the type and size of work being done: number of transactions, IOPs, files or videos served, pages processed or items rendered per unit of time, or response time and latency (aka wait or think time), among others. Thus some newer systems may be faster on paper, PowerPoint, WebEx, YouTube or via some benchmarks; however, what is the context and how do they compare to others on an apples-to-apples basis?

What are some other enhancements or features?

  • Leveraging of FAST VP (Fully Automated Storage Tiering for Virtual Pools) with improved MCx software
  • Increased effectiveness of available hardware resources (processors, cores, DRAM, flash, drives, ports)
  • Active-active LUNs accessible by both controllers, as well as legacy ALUA support

Data sheets and other material for the new VNX MCx storage systems can be found here, with software options and bundles here, and general speeds and feeds here.

Learn more here at the EMC VNX MCx storage system landing page and compare VNX systems here.

What does then new VNX MCx family look like?

EMC VNX MCx family image

Is VNX MCx all about supporting VMware?

Interestingly, if you read between the lines, listen closely to the conversations and ask the right questions, you will realize that while VMware is an important workload or environment to support, it is not the only one targeted for VNX. Likewise, if you listen and look beyond what is normally amplified in various conversations, you will find that systems such as VNX are being deployed as back-end storage in cloud (public, private, hybrid) environments for use with technologies such as OpenStack or object-based solutions (visit www.objectstoragecenter.com for more on object storage systems and access).

There is a common myth that cloud and service providers all use white box commodity hardware including JBOD for their systems, which some do; however, some are also using systems such as VNX among others. In some of these scenarios the VNX type systems are or will be deployed in large numbers, essentially consolidating the functions of what had been done by an even larger number of JBOD based systems. This is where some of you will have a DejaVu or back-to-the-future moment from the mid 90s, when there was an industry movement to combine all the DAS and JBOD into larger storage systems. Don't worry if you are not yet reading about this trend in your favorite industry rag or analyst briefing notes; ask or look around and you might be surprised at what is occurring, granted it might be another year or two before you read about it (just saying ;).

Storage I/O trends

What that means is that VNX MCx is also well positioned for working with ViPR or Atmos Virtual Edition among other cloud and object storage stacks. VNX MCx is also well positioned for its new low-cost of entry for general purpose workloads and applications ranging from file sharing, email, web, database along with demanding high performance, low latency with large amounts of flash SSD. In addition to being used for general purpose storage, VNX MCx will also complement data protection solutions for backup/restore, BC, DR and archiving such as Data Domain, Avamar and Networker among others. Speaking of server virtualization, EMC also has tools for working with Hyper-V, Xen and KVM in addition to VMware.

If there is an all flash VNX MCx doesn’t that compete with XtremIO?

Yes, there are all flash VNX MCx models just as there were all flash VNX systems before; however, these will be positioned for different use case scenarios by EMC and their partners to avoid competing head to head with XtremIO. Thus EMC will need to be diligent in being very clear to its own sales and marketing forces, as well as those of partners and customers, about what to use when, where, why and how.

General thoughts and closing comments

The VNX MCx is a good set of enhancements by EMC and an example of how it is not as important how much you have, rather how effectively you can use it.

Ok, nuff said (for now).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Fall 2013 Dutch cloud, virtual and storage I/O seminars

Storage I/O trends

Fall 2013 Dutch cloud, virtual and storage I/O seminars

It is that time of the year again when StorageIO will be presenting a series of seminar workshops in the Netherlands on cloud, virtual and data storage networking technologies, trends along with best practice techniques.

Brouwer Storage

StorageIO partners with the independent firm Brouwer Storage Consultancy of Holland who organizes these sessions. These sessions will also mark Brouwer Storage Consultancy celebrating ten years in business along with a long partnership with StorageIO.

Server Storage I/O Backup and Data Protection Cloud and Virtual

The fall 2013 Dutch seminars include coverage of storage I/O networking, data protection and related trends and topics for cloud and virtual environments. Click on the following links or images to view an abstract of the three sessions, including what you will learn, who they are for, along with the buzzwords, themes, topics and technologies that will be covered.

Modernizing Data Protection – Moving Beyond Backup and Restore
September 30 & October 1, 2013

Storage Industry Trends – What's News, What's The Buzz and Hype
October 2, 2013

Storage Decision Making – Acquisition, Deployment, Day to Day Management
October 3 and 4, 2013

All workshop seminars are presented in a vendor and technology neutral manner (e.g. these are not vendor marketing or sales presentations), providing independent perspectives on industry trends, who is doing what, and the benefits and caveats of various approaches to addressing data infrastructure and storage challenges. View posts about earlier events here and here.

Storage I/O trends

As part of the theme of being vendor and technology neutral, the workshop seminars are held off-site at hotel venues in Nijkerk, Netherlands, so there is no need to worry about sales teams coming in to sell you something during the breaks or lunch (which are provided). There are also opportunities throughout the workshops for engagement, discussion and interaction with other attendees, including your peers from various commercial, government and service provider organizations among others.

Learn more and register for these events by visiting the Brouwer Storage Consultancy website page (here) and calling them at +31-33-246-6825 or via email info@brouwerconsultancy.com.

Storage I/O events

View other upcoming and recent StorageIO activities including live in-person, online web and recorded activities on our events page here, as well as check out our commentary and industry trends perspectives in the news here.

Bitter ballen
Ok, nuff said, I’m already hungry for bitter ballen (see above)!

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

VMworld 2013 Vmware, server, storage I/O and networking update (Day 1)

Storage I/O trends

Congratulations to VMware on 10 years of VMworld!

With the largest installment yet of a VMworld in terms of attendance, there were also many announcements today (e.g. Monday) and many more slated throughout the week. Here is a synopsis of some of those announcements.

Software Defined Data Center (SDDC) and Software Defined Networks (SDN)

VMware made a series of announcements today that set the stage for many others. Not surprisingly, these involved SDDC, SDN, SDS, vSphere 5.5 and other management tool enhancements, or the other SDM (Software Defined Management).

VMworld image

Here is a synopsis of what was announced by VMware.

VMware NSX (SDN) combines Nicira NVP™ along with vCloud Network and Security
VMware Virtual SAN (VSAN) not to be confused with virtual storage appliances (VSAs)
VMware vCloud Suite 5.5
VMware vSphere 5.5 (includes support for new Intel Xeon and Atom processors)
VMware vSphere App HA
VMware vSphere Flash Read Cache software
VMware vSphere Big Data Extensions
VMware vCloud Automation Center
VMware vCloud

Note that while these were announced today, some will be in public beta soon and general availability over the next few months or quarters (learn more here including pricing and availability). More on these and other enhancements in future posts. However for now check out what Duncan Epping (@DuncanYB) of VMware has to say over at his Yellowbook site here, here and here.

buzzword bingo
Buzzword Bingo

Additional VMworld Software Defined Announcements

Dell did some announcements as well for cloud and virtual environments in support of VMware from networking to servers, hardware and software. With all the recent acquisitions by Dell including Quest where they picked up Foglight management tools, along with vRanger, Bakbone and others, Dell has amassed an interesting portfolio. On the hardware front, check out the VRTX shared server infrastructure, I want one for my VMware environment, now I just need to justify one (to myself). Speaking of Dell, if you are at VMworld on Tuesday August 27 around 1:30PM stop by the Dell booth where I will be presenting including announcing some new things (stay tuned for more on that soon).

HP also had some announcements today, jumping into the SDDC and SDN with some Software Defined Marketing (SDM) and Software Defined Announcements (SDA) in addition to using the Unified Data Center theme. Today's announcements by HP were focused more around SDN and VMware NSX, along with the HP Virtual Application Networks SDN Controller and VMware networking.

NetApp (Booth #1417) announced more integration between their Data ONTAP based solutions and VMware vSphere, Horizon Suite, vCenter, vCloud Automation Center and vCenter Log Insight under the theme of SDDC and SDS. As part of the enhancement, NetApp announced Virtual Storage Console (VSC 5.0) for end-to-end storage management and software in VMware environments, along with integration with VMware vCenter Server 5.5. Not to be left out of the SSD flash dash, NetApp also released a new V1.2 of their FlashAccel software for vSphere 5.0 and 5.1.

Storage I/O trends

Cloud, Virtualization and DCIM

Here is one that you probably have not seen or heard much about elsewhere, which is Nlyte's announcement of their V1.5 Virtualization Connector for Data Center Infrastructure Management (DCIM). Keep in mind that DCIM is more than facilities, power and cooling related themes, particularly in virtual data centers. Thus, some of the DCIM vendors, as well as others, are moving into the converged DCIM space that spans server, storage, networking, hardware, software and facilities topics.

Interested in or want to know more about DCIM? Then check out these items:
Data Center Infrastructure Management (DCIM) and Infrastructure Resource Management (IRM)
Data Center Tools Can Streamline Computing Resources
Considerations for Asset Tracking and DCIM

Data Protection including Backup/Restore, BC, DR and Archiving

Quantum announced that Commvault has added support to use the Lattus object storage based solution as an archive target platform. You can learn more about object storage (access and architectures) here at www.objectstoragecenter.com .

PHD Virtual did a couple of data protection (backup/restore, BC, DR) related announcements (here and here). Speaking of backup/restore and data protection, if you are at VMworld on Tuesday August 27th around 1:30PM, stop by the Dell booth where I will be presenting, and stay tuned for more info on some things we are going to announce at that time.

In case you missed it, Imation, who bought Nexsan earlier this year, last week announced their new unified NST6000 series of storage systems. The NST6000 storage solutions support Fibre Channel (FC) and iSCSI for block access, along with NFS, CIFS/SMB and FTP for file access from virtual and physical servers.

Emulex announced some new 16Gb Fibre Channel (e.g. 16GFC) aka what Brocade wants you to refer to as Gen 5 converged and multi-port adapters. I wonder how many still remember or would rather forget how many ASIC and adapter gens from various vendors occurred just at 1Gb Fibre Channel?

Storage I/O trends

Caching and flash SSD

Proximal announced AutoCache 2.0 with role based administration, multi-hypervisor support (a growing trend beyond just a VMware focus) and more vCenter/vSphere integration. This is on the heels of last week's FusionIO powered IBM Flash Cache Storage Accelerator (FCSA) announcement, along with others such as EMC, Infinio, Intel, NetApp, Pernix and SanDisk (Flashsoft) to name a few.

Mellanox (VMworld booth #2005), you know, the InfiniBand folks who also have some Ethernet (which also includes Fibre Channel over Ethernet) technology, did a series of announcements today with various PCIe nand flash SSD card vendors. The common theme with the various vendors, including Micron (Booth #1635) and LSI, is support of VMware virtual servers using iSER or iSCSI over RDMA (Remote Direct Memory Access). RDMA, or server to server direct memory access (what some of you might know as remote memory mapped IO or channel to channel C2C), enables very fast, low latency server to server data movement such as in a VMware cluster. Check out Mellanox and their 40Gb Ethernet along with InfiniBand among other solutions if you are into server, storage I/O and general networking, along with their partners. Need or want to learn more about networking with your servers and storage? Check out Cloud and Virtual Data Storage Networking and Resilient Storage Networks.
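Real RDMA needs special adapters and the verbs API, however the core idea, that is one system placing data directly into another's memory region without intermediate buffer copies or send/receive socket calls, can be loosely illustrated with shared memory on a single host. This is an analogy only, using Python's standard library, not actual RDMA or Mellanox code:

```python
from multiprocessing import shared_memory

# "Writer" side: allocate a named region and place data directly into it,
# loosely analogous to an RDMA write landing in a registered memory region.
region = shared_memory.SharedMemory(create=True, size=64, name="demo_rdma")
payload = b"low latency block"
region.buf[:len(payload)] = payload

# "Reader" side: attach to the same region by name and read in place --
# no send/recv calls, no kernel socket buffers, no serialization copies.
peer = shared_memory.SharedMemory(name="demo_rdma")
data = bytes(peer.buf[:len(payload)])
print(data)  # b'low latency block'

# Clean up the shared region.
peer.close()
region.close()
region.unlink()
```

The analogy is imperfect (RDMA crosses hosts over a network fabric), however it conveys why bypassing intermediate copies and protocol stack overhead yields the low latency behavior described above.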

Rest assured there are many more announcements and updates to come this week, and in the weeks to follow…

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Care to help Coraid with a Storage I/O Content Conversation?

Storage I/O trends

Blog post – Can you help Coraid with a Storage I/O Content Conversation?

Over the past week or so, I have had many email conversations with the Coraid marketing/public relations (PR) folks who want to share some of their unique or custom content with you.

Normally I (aka @StorageIO) do not accept unsolicited (placed) content (particularly product pitches/placements) from vendors or their VARs, PR firms or surrogates, including third or fourth party placement firms. Granted, StorageIOblog.com does have site sponsors; per our policies, that is all those are: advertisements with no more or less influence than others. StorageIO does do commissioned or sponsored custom content including white papers and solution briefs among other things, with applicable disclosures and retention of editorial tone and control.

Who is Coraid and what do they do?

However, wanting to experiment with things, not to mention given Coraid's persistence, let's try something to see how it works.

Coraid, for those who are not aware, provides an alternative storage and I/O networking solution called ATA over Ethernet or AoE (here is a link to Coraid's analyst supplied content page). AoE enables servers with applicable software to use storage equipped with AoE technology (or attached via an applicably equipped appliance) using Ethernet as an interconnect and transport. On the low-end, AoE is an alternative to USB, Thunderbolt or direct attached SATA or SAS, along with switched or shared SAS (keep in mind SATA can plug into SAS, not vice versa).

In addition, AoE is an alternative to the industry standard iSCSI (the SCSI command set mapped onto IP), which can be found in various solutions including as a software stack. Another area where AoE is positioned by Coraid is as an alternative to Fibre Channel SCSI_FCP (FCP) and Fibre Channel over Ethernet (FCoE). Keep in mind that Coraid AoE is block based (granted they have other solutions) as opposed to NAS (file) such as NFS, CIFS/SMB/SAMBA, pNFS or HDFS among others, and it runs on native Ethernet as opposed to being layered on top of iSCSI.
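For those curious what "native Ethernet, not layered on IP" means in practice, here is a minimal sketch in Python that packs the small AoE header that rides directly inside an Ethernet frame (Ethertype 0x88A2), addressing targets by shelf and slot rather than by IP address. The field layout is assumed from the published AoE protocol specification; this is illustrative only, not Coraid code:

```python
import struct

AOE_ETHERTYPE = 0x88A2  # registered Ethertype for ATA over Ethernet


def build_aoe_header(major, minor, command, tag, version=1, flags=0, error=0):
    """Pack the 10-byte AoE header that follows the Ethernet frame header.

    Fields per the AoE specification: version/flags (1 byte), error (1 byte),
    major shelf address (2 bytes, big-endian), minor slot address (1 byte),
    command (1 byte; 0 = issue ATA command, 1 = query config information),
    and a 4-byte tag the target echoes back to match responses to requests.
    """
    ver_flags = ((version & 0x0F) << 4) | (flags & 0x0F)
    return struct.pack(">BBHBBI", ver_flags, error, major, minor, command, tag)


# Query config for shelf 0, slot 1 -- the target a Linux initiator with the
# aoe driver would typically expose as /dev/etherd/e0.1.
hdr = build_aoe_header(major=0, minor=1, command=1, tag=0x0000BEEF)
assert len(hdr) == 10
```

Because the whole exchange is raw Ethernet frames, there is no TCP/IP stack in the data path, which is the basis of AoE's simplicity and low overhead claims, and also why it is confined to a single layer 2 network segment rather than being routable like iSCSI.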

Storage I/O trends

So here is the experiment

Since Coraid wanted to get their unique content placed either by them or via others, let's see what happens in the comments section here at StorageIOblog.com. The warning of course is to keep it respectful and courteous, with no bashing or disparaging comments about others (vendors, products, technology).

Thus the experiment is simple: let's see how the conversation evolves into the caveats, benefits, tradeoffs and experiences of those who have used or looked into the solution (pro or con) and why they hold a particular opinion. If you have a perspective or opinion, no worries; however, put it in context, including if you are a Coraid employee, VAR, reseller or surrogate, and likewise for those with other views (state who you are, your affiliation and other disclosures). Likewise, if providing or supplying links to any content (white papers, videos, webinars), including via third parties, provide applicable disclosures (e.g. whether it was sponsored and by whom, etc.).

Disclosure

While I have mentioned or provided perspectives about them via different venues (online, print and in person) in the past, Coraid has never been a StorageIO client. Likewise this is not an endorsement for or against Coraid and their AoE or other solutions, simply an industry trends perspective.

Ok, nuff said (for now).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Inaugural episode of the SSD Show podcast at Myce.com

Storage I/O trends

Inaugural episode of the SSD Show podcast at Myce.com

The other day I was invited by Jeremy Reynolds and J.W. Aldershoff to be a guest on the Inaugural episode of their new SSD Show podcast (click here to learn more or listen in).

audio

Many different facets or faces of nand flash SSD and SSHD or HHDD

With this first episode we discuss the latest developments in and around the solid-state device (SSD) and related storage industry, from consumer to enterprise, hardware and software, along with hands-on experience and insight on products, trends, technologies and techniques. In this first podcast we discuss Solid State Hybrid Disks (SSHDs) aka Hybrid Hard Disk Drives (HHDDs) with flash (read about some of my SSD, HHDD/SSHD hands-on personal experiences here), the state of NAND memory (also here about nand DIMMs), the market and SSD pricing.

I had a lot of fun doing this first episode with Jeremy and hope to be invited back to do some more, follow-up on themes we discussed along with new ones in future episodes. One question remains after the podcast, will I convince Jeremy to get a Twitter account? Stay tuned!

Check out the new SSD Show podcast here.

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)