Goodbye 2013, hello 2014, predictions past, present and future


Goodbye 2013 and hello 2014, along with predictions past, present and future

First, for those who may have missed this, thanks to all who helped make 2013 a great year!

2013 season greetings

Looking back at 2013, I saw a continued trend of more vendors and their public relations (PR) people reaching out to have their predictions placed in articles, posts, columns or trend perspective pieces.

Hmm, maybe a new trend is predictions selfies? ;)

Not to worry, this is not a wrapper piece for the pile of pitched and placed prediction requests I received in 2013; those have been saved for a rainy or dull day when we need to have some fun ;).

What about 2013 server storage I/O networking, cloud, virtual and physical?

2013 ended up with some end of year sprees, including Avago acquiring storage I/O and networking vendor LSI for about $6.6B USD (e.g. SSD cards, RAID cards, cache cards, HBAs (Host Bus Adapters), chips and other items), along with Seagate buying Xyratex for about $374M USD (a Seagate supplier and customer partner).

Xyratex is known by some for making the storage enclosures that house hard disk drives (HDD) and Solid State Devices (SSD) used by many well-known, and some not so well-known, systems and solution vendors. Xyratex also has other pieces of their business, such as appliances that combine their storage enclosures for HDDs and SSDs with server boards, along with a software group focused on High Performance Compute (HPC) Lustre. There is another part of the Xyratex business that is not as well-known, which is the test equipment used by disk drive manufacturers such as Seagate as part of their manufacturing process. Thus the acquisition moves Seagate up market with more integrated solutions to offer to their (e.g. Seagate and Xyratex) joint customers, as well as streamlining their own supply chain and costs (not to mention selling test equipment to the other remaining drive manufacturers WD and Toshiba).


Other 2013 acquisitions included Whiptail by Cisco, Virident by WD (who also bought several other companies) and Softlayer by IBM, along with various mergers, company launches, company shutdowns (cloud storage Nirvanix and the SSD maker OCZ bankruptcy filing), and IPOs (some did well like Nimble, while Violin not so well), while earlier high-flying industry darlings such as FusionIO are now the high-flung darling targets of the shareholder stock lawsuit attorneys.

2013 also saw the end of SNW (Storage Networking World), jointly produced by SNIA and Computerworld in the US after more than a decade. Some perspectives from the last US SNW held October 2013 can be found in the Fall 2013 StorageIO Update Newsletter here, granted those were before the event was formally announced as being terminated.

Speaking of events, check out the November 2013 StorageIO Update Newsletter here for perspectives from attending the Amazon Web Services (AWS) re:Invent conference which joins VMworld, EMCworld and a bunch of other vendor world events.

Let's also not forget Dell buying itself in 2013.

StorageIO in the news

Click on the following links (and here) to read more about my various 2013 industry trends and perspectives commentary in different venues, along with tips, articles, newsletters, events, podcasts, videos and other items.

What about 2014?

Perhaps 2014 will build on the 2013 momentum of the annual rite of passage referred to as making meaningless future-year trends and predictions becoming passé?

Not that there is anything wrong with making predictions for the coming year, particularly if they actually have some relevance and practicality, not to mention a track record.

However the past few years seem to have resulted in press releases along with product (or services) plugs being masked as predictions, or simply making the same predictions for the coming year that did not come to be for the earlier year (or the one before that, or before that, and so forth).

On the other hand, from an entertainment perspective, perhaps that's where we will see annual predictions finally get classified and put into perspective as being just that.


Now for those who still cling to, as well as look forward to, annual predictions: ok, simple, we will continue in 2014 (and beyond) from where we left off in 2013 (and 2012 and earlier), meaning more of (or continued):

  • Software defined "x" (replace "x" with your favorite topic) industry discussion and adoption, yet with continued questions and conversations around customer adoption or deployment.
  • Cloud conversations shifted from "let's all go to the cloud" as the new shiny technology to questioning the security, privacy, stability, and vendor or service viability, not to mention other common sense concerns that should have been discussed or looked into earlier. I have also heard from people who say Amazon (as well as Verizon, Microsoft, Bluehost, Google, Nirvanix, Yahoo and the list goes on) outages are bad for the image of clouds as they shake people's confidence. IMHO people's confidence needs to be shaken into having some common sense around clouds, including don't be scared, be ready, do your homework and basic due diligence. This means cloud conversations over concerns set the stage for increased awareness in decision-making, usage, deployment and best practices (all of which are good things for continued cloud deployments). However if some vendors or pundits feel that people having basic cloud concerns that can be addressed is not good for their products or services, I would like to talk with them, because they may be missing an opportunity to create long-term confidence with their customers or prospects.
  • VDI as a technology being deployed continues to grow (e.g. customer adoption), while the industry adoption (buzz or what's being talked about) has slowed a bit, which makes sense as vendors jump from one bandwagon to the new software defined bandwagon.
  • Continued awareness around modernizing data protection including backup/restore, business continuance (BC), disaster recovery (DR), high availability, archiving and security means more than simply swapping out old technology for new, yet using it in old ways. After all, in the data center and information factory not everything is the same. Speaking of data protection, check out the series of technology-neutral webcasts and video chats that started last fall as part of BackupU brought to you by Dell. Even though Dell is the sponsor of the series (that's a disclosure btw ;) ), the focus of the sessions is on how to use different tools, technologies and techniques in new ways, as well as having the right tools for different tasks. Check out the information as well as register to get a free Data Protection chapter download from my book Cloud and Virtual Data Storage Networking (CRC Press) at the BackupU site, as well as attend upcoming events.
  • The nand flash solid state device (SSD) cash-dash (and shakeout) continues with some acquisitions and IPOs, disappearances of some weaker vendors, as well as the appearance of some new ones. SSD is showing that it is real in several ways (despite myths, FUD and hype, some of which gets clarified here), ranging from some past IPO vendors (e.g. FusionIO) seeing the exit of their CEO and founders while their stock plummets and shareholder investor lawsuits arrive, to Violin's ho-hum IPO. What this means is that the market is real and it has a very bright future, however there is also a correction occurring showing that reality may be settling in for the long run (e.g. the next couple of decades) vs. SSD being in the realm of unicorns.
  • Internet of Things (IoT) and Internet of Devices (IoD) may give some relief to Big Data, BYOD, VDI, Software Defined and Cloud among others that need a rest after their busy usage the past few years. On the other hand, expect enhanced use of earlier buzzwords combined with IoT and IoD. Of course that also means plenty of questions around what is and is not IoD along with IoT, and whether either is actually relevant to what you are doing.
  • Also in 2014 some will discover storage and related data infrastructure topics or some new product / service, thus having a revolutionary experience that storage is now exciting, while others will have a DejaVu moment that it has been exciting for the past several years if not decades.
  • More big data buzz, as well as realization by some that a pragmatic approach opens up a bigger, broader market, not to mention customers more likely to realize they have more in common with big data than it simply being something new that forces them to move cautiously.
  • To say that OpenStack and related technologies will continue to gain both industry and customer adoption (and deployment) status building off of 2013 in 2014 would be an understatement, not to mention too easy to say, or leave out.
  • While SSDs continue to gain in deployment (after all, the question is not if, rather when, where, with what and how much nand flash SSD is in your future), HDDs continue to evolve for physical, virtual and cloud environments. This also includes Seagate announcing a new (Kinetic) Ethernet attached HDD (note that this is not a NAS or iSCSI device) that uses a new key value object storage API for storing content data (more on this in 2014).
  • This also means realizing that large amounts of little data can result in backlogs of lots of big data, and that big data is growing into very fast big data, not to mention realization by some that HDFS is just another distributed file system that happens to work with Hadoop.
  • SOHOs and the lower end of SMB begin to get more respect (and not just during the week of the Consumer Electronics Show – CES).
  • Realization that there is a difference between Industry Adoption and Customer Deployment, not to mention industry buzz and traction vs. customer adoption.


What about beyond 2014?

That’s easy, many of the predictions and prophecies that you hear about for the coming year have also been pitched in prior years, so it only makes sense that some of those will be part of the future.

  • If you have seen or experienced something you are more likely to have DejaVu.
  • Otoh if you have not seen or experienced something you are more likely to have a new and revolutionary moment!
  • Start using new (and old) things in new ways vs. simply using new things in old ways.
  • The barrier to technology convergence, not to mention new technology adoption, is often people or their organizations.
  • Convergence is still around, and cloud conversations around concerns get addressed, leading to continued confidence for some.
  • Realization that data infrastructures span servers, storage, I/O networking, cloud, virtual, physical, hardware, software and services.
  • That you cannot have software defined without hardware, and hardware defined requires software.
  • And it is time for me to get a new book project (or two) completed in addition to helping others with what they are working on, more on this in the months to come…

Here’s my point

The late Jim Morrison of the Doors said "There are things known and things unknown and in between are the doors."

The Doors (image and link via Amazon.com)

Hence there is what we know about 2013 (or will learn about the past in the future), and then there is what will be in 2014 as well as beyond, so let's step through some doors and see what will be. This means learning and leveraging lessons from the past to avoid making the same or similar mistakes in the future, however doing so while looking forward, without a death grip clinging to the past.

Needless to say there will be more to review, preview and discuss throughout the coming year and beyond as we go from what is unknown through doors and learn about the known.

Thanks to all who made 2013 a great year, best wishes to all, look forward to seeing and hearing from you in 2014!

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Small Medium Business (SMB) IT continues to gain respect, what about SOHO?


Blog post: Small Medium Business (SMB) IT continues to gain respect, what about SOHO?

Note that in Information Technology (IT) conversations there are multiple meanings for SMB including Server Message Block aka Microsoft Windows CIFS (Common Internet File System) along with its SAMBA implementation, however for this piece the context is Small Medium Business.

A decade or so ago, mention SMB (Small Medium Business) to many vendors, particularly those who were either established or focused on the big game enterprise space, and you might have gotten a condescending look or answer, if not worse.

In other words, a decade ago the SMB did not get much respect from some vendors and those who followed or covered them.

Fast forward to today and many of those same vendors along with their pundits and media followers have now gotten their SMB groove, lingo, swagger or social media footsteps, granted for some that might be at the higher end of SMB, also known as SME (Small Medium Enterprise).

Today in general the SMB is finally getting respect, and in some circles it's downright cool and trendy vs. being perceived as old school, stodgy large enterprise. Likewise the Remote Office Branch Office (ROBO) space gained more awareness and coverage a few years back; while the ROBO buzz has subsided, the market and opportunities are certainly there.

What about Small Office Home Office (SOHO) today?

IMHO the SOHO environment and market today is being treated with a similar lack of respect that the larger SMB space received a decade ago.

Granted there are some vendors and their followings who are seeing the value and opportunity, not to mention the market size potential, of expanding their portfolios and routes to market to meet the different needs of the SOHO.

relative enterprise sme smb soho positioning

What is the SOHO market or environment?

One of the challenges with SMB, SOHO among other classifications is just that, the classifications.

Some classifications are based on the number of employees, others on the number of servers or workstations, while others are based on revenue or even physical location.

Meanwhile some are based on types of products, technologies or tools while others are tied to IT or general technology spending.

Some confuse the SOHO space with the consumer market space or sector which should not be a surprise if you view market segments as enterprise, SMB and consumer. However if you take a more pragmatic approach, between true consumer and SMB space, there lies the SOHO space. For some the definitions of what is consumer, SOHO, SMB, SME and enterprise (among others) will be based on number of employees, or revenue amount. Yet for others the categories may be tied to IT spending (e.g. price bands), number of workstations, servers, storage space capacity or some other metric. On the other hand some definitions of what is consumer vs. SOHO vs. SMB vs. SME or enterprise will be based on product capabilities, size, feature function and cost among other attributes.


Understanding the SOHO

Keep in mind that SOHO can also overlap with Remote Office Branch Office (ROBO), not to mention blend with high-end consumer (prosumer) or lower bounds of SMB.

Part of the challenge (or problem) is that many confuse the Home Office or HO aspect of SOHO as being consumer.

Likewise many also confuse the Small Office or SO part of SOHO as being just the small home office or the virtual office of a mobile worker.

The reality is that just as the SMB space has expanded, there is also a growing area just above where consumer markets exist and where many place the lower-end of SMB (e.g. the bottom limits of where the solutions fit).

First keep in mind that many put too much focus and mistakenly believe that the HO or Home Office part of SOHO means that this is just a consumer focused space.

The reality is that while the HO gets included as part of SOHO, there is also the SO or Small Office which is actually the low-end of the SMB space.

Keep in mind that there are more:

  • SOHO than SMB
  • SMB than SME
  • SME than enterprise
  • F500 (Fortune 500) than F100
  • F100 than F10, and so forth

Here is my point

SOHO does not have to be the Rodney Dangerfield of IT (e.g. gets no respect)!

If you jumped on the SMB bandwagon a decade ago, start paying attention to what's going on with the SOHO or lower-end SMB sector. The reasons are simple: just as SMBs can grow up to be larger SMBs, SMEs or enterprises, SOHOs can also evolve to become SMBs, either in business size or in IT and data infrastructure needs and requirements.

For those who prefer (at least for now) to look down upon or ignore the SOHO, similar to what was done with SMB before converting to SMBism, do so at your own risk.

However let me be clear, this does not mean ignore or shift focus and thus disrupt or lose coverage of other areas, rather, extend, expand and at least become aware of what is going on in the SOHO space.

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Seasons Greetings, Happy Holidays 2013 from Server and StorageIO

Merry Christmas, Seasons Greetings, Happy Holidays 2013 from Server and StorageIO

2013 server and storage I/O holiday greetings

Ok, nuff said (for now ;)…

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

November 2013 Server and StorageIO Update Newsletter & AWS reinvent info


November 2013 Server and StorageIO Update Newsletter & AWS reinvent info

Welcome to the November 2013 edition of the StorageIO Update (newsletter) containing trends perspectives on cloud, virtualization and data infrastructure topics. Fall (here in North America) has been busy with in-person, online live and virtual events along with various client projects, research, and time in the StorageIO cloud, virtual and physical lab test driving, validating and doing proof of concept research among other tasks. Check out the industry trends perspectives articles, comments and blog posts below that cover some activity over the past month.

Last week I had the chance to attend the second annual AWS re:Invent event in Las Vegas, see my comments, perspectives along with a summary of announcements from that conference below.

Watch for future posts, commentary, perspectives and other information down the road (and in the not so distant future) pertaining to information and data infrastructure topics, themes and trends across cloud, virtual, legacy server, storage, networking, hardware and software. Also check out our backup, restore, BC, DR and archiving resources (under the resources section on StorageIO.com) for various presentations, book chapter downloads and other content.

Enjoy this edition of the StorageIO Update newsletter.

Ok, nuff said (for now)

Cheers gs


Industry trends: Amazon Web Services (AWS) re:Invent

Last week I attended the AWS re:Invent event in Las Vegas. This was the second annual AWS re:Invent conference, which while having an AWS and cloud theme, is also what I would describe as a data infrastructure event.

As a data infrastructure event AWS re:Invent spans traditional legacy IT and applications to newly invented, re-written, re-hosted or re-platformed ones from existing and new organizations. By this I mean a mix of traditional IT or enterprise people as well as cloud and virtual geek types (said with affection and all due respect of course) across server (operating system, software and tools), storage (primary, secondary, archive and tools), networking, security, development tools, applications and architecture.

That also means management from application and data protection spanning High Availability (HA), Business Continuance (BC), Disaster Recovery (DR), backup/restore, archiving, security, performance and capacity planning, service management among other related themes across public, private, hybrid and community cloud environments or paradigms. Hmm, I think I know of a book that covers the above and other related topic themes, trends, technologies and best practices called Cloud and Virtual Data Storage Networking (CRC Press) available via Amazon.com in print and Kindle (among other) versions.

During the event AWS announced enhanced and new services including:

  • WorkSpaces (Virtual Desktop Infrastructure – VDI) announced as a new service for cloud based desktops across various client devices including laptops, Kindle Fire, iPad and Android tablets using PCoIP.
  • Kinesis, which is a managed service for real-time processing of streaming (e.g. Big) data at scale, including the ability to collect and process hundreds of GBytes of data per second across hundreds of thousands of data sources. On top of Kinesis you can build your big data applications or conduct analysis to give real-time key performance indicator dashboards, exception and alarm or event notification and other informed decision-making activity (see the producer sketch after this list).
  • EC2 C3 instances provide Intel Xeon E5 processors and Solid State Device (SSD) based direct attached storage (DAS) like functionality vs. EBS provisioned IOPs for cost-effective storage I/O performance and compute capabilities.
  • Another EC2 enhancement is the G2 instance type that leverages a high performance NVIDIA GRID GPU with 1,536 parallel processing cores. This new instance is well suited for 3D graphics, rendering, streaming video and other related applications that need large-scale parallel or high performance compute (HPC), also known as high productivity compute.
  • Redshift (cloud data warehouse) now supports cross region snapshots for HA, BC and DR purposes.
  • CloudTrail records AWS API calls made via the management console for analytics and logging of API activity.
  • Beta of Trusted Advisor dashboard with cost optimization saving estimates including EBS and provisioned IOPs
  • Relational Database Service (RDS) support for PostgreSQL including multi-AZ deployment.
  • Ability to discover and launch various software from AWS Marketplace via the EC2 Console. The AWS Marketplace for those not familiar with it is a catalog of various software or application titles (over 800 products across 24 categories) including free and commercial licensed solutions that include SAP, Citrix, Lotus Notes/Domino among many others.
  • AppStream is a low latency (STX protocol based) service for streaming resource (e.g. compute, storage or memory) intensive applications and games from the AWS cloud to various clients, desktops or mobile devices. This means that the resource intensive functionality can be shifted to the cloud, while providing a low latency (e.g. fast) user experience, off-loading the client from having to support increased compute, memory or storage capabilities. Key to AppStream is the ability to stream data in a low-latency manner, including over networks normally not suited for high quality or bandwidth intensive applications. IMHO AppStream, while focused initially on mobile apps and gaming, being a bit-streaming technology has the potential to be used for other similar functions that can leverage download speed improvements.
  • When I asked an AWS person if or what role AppStream might have or related to WorkSpaces their only response was a large smile and no comment. Does this mean WorkSpaces leverages AppStream? Candidly I don’t know, however if you look deeper into AppStream and expand your horizons, see what you can think up in terms of innovation. Updated 11/21/13 AWS has provided clarification that WorkSpaces is based on PCoIP while AppStream uses the STX protocols.

    Check out AWS Sr. VP Andy Jassy keynote presentation here.
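
To make the Kinesis item above more concrete, here is a minimal producer sketch using the AWS SDK for Python (boto3). The region, stream name and record layout are illustrative assumptions only, not anything from the AWS announcement; it simply shows that applications push records into a stream and use a partition key to control how records are spread across shards.

    # Minimal Kinesis producer sketch (boto3). Assumes a stream named
    # "sensor-events" already exists and AWS credentials are configured.
    import json
    import time

    import boto3

    kinesis = boto3.client("kinesis", region_name="us-east-1")

    def publish_reading(device_id: str, value: float) -> None:
        """Send one JSON record; the partition key controls shard routing."""
        record = {"device": device_id, "value": value, "ts": time.time()}
        kinesis.put_record(
            StreamName="sensor-events",        # hypothetical stream name
            Data=json.dumps(record).encode(),  # record payload as bytes
            PartitionKey=device_id,            # same key lands on the same shard
        )

    if __name__ == "__main__":
        publish_reading("device-42", 21.5)

From there a separate consumer application (or the kind of real-time dashboards mentioned above) reads the shards and processes the records in near real time.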

Overall I found the AWS re:Invent event to be a good conference spanning many aspects and areas of focus which means I will be putting it on my must attend list for 2014.

Industry trends tips, commentary, articles and blog posts
What is being seen, heard and talked about while out and about

The following is a synopsis of some StorageIOblog posts, articles and comments in different venues on various industry trends, perspectives and related themes about clouds, virtualization, data and storage infrastructure topics among related themes.

Storage I/O posts

Recent industry trends, perspectives and commentary by StorageIO Greg Schulz in various venues:

NetworkComputing: Comments on Software-Defined Storage Startups Win Funding

Digistor: Comments on SSD and flash storage
InfoStor: Comments on data backup and virtualization software

ITbusinessEdge: Comments on flash SSD and hybrid storage environments

NetworkComputing: Comments on Hybrid Storage Startup Nimble Storage Files For IPO

InfoStor: Comments on EMC’s Light to Speed: Flash, VNX, and Software-Defined

InfoStor: Data Backup Virtualization Software: Four Solutions

ODSI: Q&A With Greg Schulz – A Quick Roundup of Data Storage Industry

Recent StorageIO Tips and Articles in various venues:

FedTechMagazine: 3 Tips for Maximizing Tiered Hypervisors
InfoStor: RAID Remains Relevant, Really!


Recent StorageIO blog post:

EMC announces XtremIO General Availability (Part I) – Announcement analysis of the all flash SSD storage system
Part II: EMC announces XtremIO General Availability, speeds and feeds – Part two of two part series with analysis
What does gaining industry traction or adoption mean too you? – There is a difference between buzz and deployment
Fall 2013 (September and October) StorageIO Update Newsletter – In case you missed the fall edition, here it is


Check out our objectstoragecenter.com page where you will find a growing collection of information and links on cloud and object storage themes, technologies and trends.

Server and StorageIO seminars, conferences, webcasts, events and activities (out and about)

Seminars, symposium, conferences, webinars
Live in person and recorded recent and upcoming events

While 2013 is winding down, the StorageIO calendar continues to evolve, here are some recent and upcoming activities.

  • December 11, 2013: Backup.U, Data Protection for Cloud 201 (Google+ hangout)
  • December 3, 2013: Backup.U, Data Protection for Cloud 101 (Online webinar)
  • November 19, 2013: Backup.U, Data Protection for Virtualization 201 (Google+ hangout)
  • November 12-13, 2013: AWS re:Invent event (Las Vegas, NV)
  • November 5, 2013: Backup.U, Data Protection for Virtualization 101 (Online webinar)
  • October 22, 2013: Backup.U, Data Protection for Applications 201 (Google+ hangout)

Click here to view other upcoming along with earlier event activities. Watch for more 2013 events to be added soon to the StorageIO events calendar page. Topics include data protection modernization (backup/restore, HA, BC, DR, archive), data footprint reduction (archive, compression, dedupe), storage optimization, SSD, object storage, server and storage virtualization, big data, little data, cloud and object storage, performance and management trends among others.

Vendors, VARs and event organizers, give us a call or send an email to discuss having us involved in your upcoming podcast, webcast, virtual seminar, conference or other events.

If you missed the Fall (September and October) 2013 StorageIO Update newsletter, click here to view that and other previous editions as HTML or PDF versions.

Subscribe to this newsletter (and pass it along) by clicking here. View archives of past StorageIO Update newsletters as well as download PDF versions at: www.storageio.com/newsletter

Ok, nuff said (for now).
Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved    

DataDynamics StorageX 7.0 file and data management migration software


DataDynamics StorageX 7.0 file and data management migration software

Some of you may recall back in 2006 (here and here) when Brocade bought a file management storage startup called NuView whose product was StorageX, and then in 2009 issued end of life (EOL) notice letters that the solution was being discontinued.

Fast forward to 2013 and there is a new storage startup (Data Dynamics) with an existing product that was just updated and re-released called StorageX 7.0.

Software Defined File Management – SDFM?

Granted, from an industry buzz focused adoption perspective you may not have heard of Data Dynamics or perhaps even StorageX. However many customers around the world from different industry sectors have, and are using the solution.

The current industry buzz is around software defined data centers (SDDC), which has led to software defined networking (SDN), software defined storage (SDS), and other software defined marketing (SDM) terms, not to mention Valueware. So for those who like software defined marketing or software defined buzzwords, you can think of StorageX as software defined file management (SDFM), however don't ask or blame them about using it as I just thought of it for them ;).

This is an example of industry adoption traction (what is being talked about) vs. industry deployment and customer adoption (what is actually in use on a revenue basis), in that Data Dynamics is not a well-known company yet. However they have what many of the high-flying startups with industry adoption don't have: an installed base of revenue customers that also now have a new version 7.0 product to deploy.

StorageX 7.0 enabling intelligent file and data migration management

Thus, a common theme is adding management, including automated data movement and migration, to bring structure to unstructured NAS file data. More than a data mover or storage migration tool, Data Dynamics StorageX is a software platform for adding storage management structure around unstructured local and distributed NAS file data. This includes heterogeneous vendor support across different storage systems, protocols and tools including Windows CIFS and Unix/Linux NFS.


A few months back prior to its release, I had an opportunity to test drive StorageX 7.0 and have included some of my comments in this industry trends perspective technology solution brief (PDF). This solution brief titled Data Dynamics StorageX 7.0 Intelligent Policy Based File Data Migration is a free download with no registration required (as are others found here), however per our disclosure policy to give transparency, DataDynamics has been a StorageIO client.

If you have a need for gaining insight and management control around your unstructured file data to support migrations for upgrades, technology refresh, archiving or tiering across different vendors including EMC and NetApp, check out Data Dynamics StorageX 7.0, take it for a test drive like I did and tell them StorageIO sent you.

Ok, nuff said,

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

What does gaining industry traction or adoption mean to you?


What does gaining industry traction or adoption mean to you?

Is it based on popularity, or how often something is talked about, blogged, tweeted, commented on, put on video or similar?

What are the indicators that something is gaining traction?

Perhaps it is tied to the number of press releases, product or staffing announcements including who has joined the organization along with added coverage of it?

Maybe it's based on how many articles, videos or other content and coverage help to show traction and momentum?

On the other hand is it tied to how many prospects are actually trying a product or service as part of a demo or proof of concept?

Then again, maybe it is associated with how many real paying or revenue installed footprints and customers or what is also known as industry deployment (customer adoption).

Of those customers actually buying and deploying, how many have continued using the technology even after industry adoption subsides or does the solution become shelf ware?

Does the customer deployment actually continue to rise quietly while industry adoption or conversations drop off (past the cycle of hype)?


Gaining context with industry traction

Gaining traction can mean different things to people, however there is also a difference between industry adoption (what’s being talked about among the industry) and industry deployment (what customers are actually buying, installing and continue to use).

Often the two can go hand in hand, usually one before the other, however they can also be separate. For example it is possible that something new will have broad industry adoption (being talked about) yet have low customer deployment (even over time). This occurs when something is new and interesting that might be fun to talk about or the vendor, solution provider is cool and fun to hang out and be with, or simply has cool giveaways.

On the other hand there can be customer deployment and adoption with little to no fanfare (industry adoption) for different reasons.


Here’s my point

Not long ago if you asked or listened to some, you would think that once high-flying cloud storage vendor Nirvanix was gaining traction based on their marketing along with other activities, yet they recently closed their doors. Then there was Kim Dotcom's hyped Megacloud launch earlier this year that has also now gone dark or is shutting down. This is not unique to cloud service providers or solutions, as the same can, has and will happen again to traditional hardware, software and services providers (startups and established).

How about former high-flying FusionIO, or the new startup by former FusionIO founder and CEO David Flynn called Primary Data? One of the two is struggling to gain or keep up revenue traction while having declined in industry popularity traction. The other is gaining in industry popularity traction with their recently secured $50 Million in funding, yet they are still in stealth mode, so it is rather difficult to gain customer adoption or deployment traction (thus for now it's industry adoption focus for them ;).


If you are a customer or somebody actually deploying and using technology, tools, techniques and services for real world activity vs. simply trying new things out, your focus on what is gaining traction will probably be different than others. Granted it is important to keep an eye on what is coming or on futures, however there is also the concern of how it will really work and keep working over time.

For example, Hard Disk Drives (HDD) continue to have industry deployment traction (customer adoption and usage). However they are not new, and when new models appear (such as the Seagate Ethernet-based Kinetic) they may not get the same industry adoption traction as a newer technology might. Case in point: Solid State Devices (SSD) continue to gain in customer deployment adoption, with some environments doing more than others, while also having very high industry adoption traction status.

Relative SSD customer adoption and deployment along with future opportunities

On the other hand, if your focus is on what's new and emerging, which is usually more industry centered, then it should be no surprise what traction means and where it is focused. For example the following figure shows where different audiences have various timelines on adoption (read more here).

Current and emerging memory, flash and other SSD technologies for different audiences

Wrap up

When you hear that something is gaining traction, ask yourself (or others) what that means along with the applicable context.

Does that mean something is popular and trending to discuss (based on GQ or looks), or that it is actually gaining real customer adoption based on G2 (insight – they are actually buying vs. simply trying out a free version)?

Does it mean one form of traction along with industry adoption (what’s being talked about) vs. industry deployment (real customer adoption) is better than the other?

No, it simply means putting things into the applicable context.

Ok, nuff said (for now).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Seagate Kinetic Cloud and Object Storage I/O platform (and Ethernet HDD)


Seagate Kinetic Cloud and Object Storage I/O platform

Seagate announced today their Kinetic platform and drive designed for use by object API accessed storage, including for cloud deployments. The Kinetic platform includes Hard Disk Drives (HDD) that feature 1Gb Ethernet (1 GbE) attached devices that speak an object access API, or what Seagate refers to as key / value.

Seagate Kinetic architecture

What is being announced with Seagate Kinetic Cloud and Object (Ethernet HDD) Storage?

  • Kinetic Open Storage Platform – Ethernet drives, key / value (object access) API, partner software
  • Software developer’s kits (SDK) – Developer tools, documentation, drive simulator, code libraries, code samples including for SwiftStack and Riak.
  • Partner ecosystem

What is Kinetic?

While it has 1 GbE ports, do not expect to be able to use those for iSCSI or NAS including NFS, CIFS or other standard access methods. Being Ethernet based, the Kinetic drive only supports the key value object access API. What this means is that applications, cloud or object stacks, key value and NoSQL data repositories, or other software that adopt the API can communicate directly using object access.

Seagate Kinetic storage

Internally, the HDD functions as a normal drive would to store and access data; the object access function and translation layer shifts from being in an Object Storage Device (OSD) server node to inside the HDD. The Kinetic drive takes on the key value API personality over 1 GbE ports instead of traditional Logical Block Addressing (LBA) and Logical Block Number (LBN) access using 3g, 6g or emerging 12g SAS or SATA interfaces. Instead, Kinetic drives respond to object access (aka what Seagate calls key / value) API commands such as Get and Put among others. Learn more about object storage, access and clouds at www.objectstoragecenter.com.
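
To illustrate that access model, here is a small, purely illustrative Python sketch. The class, method names and drive address below are hypothetical stand-ins rather than the actual Seagate Kinetic SDK; the on-drive key/value space is simulated with an in-memory dict so the example runs, whereas a real client library would serialize Put/Get messages and send them over TCP to the drive's Ethernet port.

    # Illustrative sketch only: NOT the actual Seagate Kinetic SDK API.
    # It mimics the access model described above, where the application issues
    # Put/Get key/value operations addressed to a drive's network endpoint,
    # with no LBA/block addressing or file system in the data path.

    class SimulatedKineticDrive:
        """Stand-in for one Ethernet-attached key/value drive."""

        def __init__(self, address: str):
            self.address = address   # the drive's 1 GbE address (hypothetical)
            self._store = {}         # simulated on-drive key/value space

        def put(self, key: bytes, value: bytes) -> None:
            self._store[key] = value    # real drive: persist value under key

        def get(self, key: bytes) -> bytes:
            return self._store[key]     # real drive: return value stored for key

        def delete(self, key: bytes) -> None:
            self._store.pop(key, None)  # real drive: remove the key/value pair

    # Usage: an object storage node writes a chunk straight to a drive by key.
    drive = SimulatedKineticDrive("192.168.1.50")
    drive.put(b"photos/img001.jpg#chunk-0", b"<binary chunk data>")
    assert drive.get(b"photos/img001.jpg#chunk-0") == b"<binary chunk data>"

The point is that the software stack above the drive (an object storage node, key value store or NoSQL repository) addresses data by key rather than by block, and the drive itself handles where those bytes land on the media.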


Some questions and comments

Is this the same as what was attempted almost a decade ago now with the T10 OSD drives?

Seagate claims no.

What is different this time around with Seagate doing a drive that to some may vaguely resemble the predecessor failed T10 OSD approach?

Industry support for object access and API development have progressed from an era of build it and they will come thinking, to now where the drives are adapted to support current cloud, object and key value software deployment.

Won't 1GbE ports be too slow vs. 12g or 6g or even 3g SAS and SATA ports?

Keep in mind those would be apples to oranges comparisons based on the protocols and types of activity being handled. Kinetic types of devices initially will be used for large data intensive applications where emphasis is on storing or retrieving large amounts of information, vs. low latency transactional. Also, keep in mind that one of the design premises is to keep cost low, spread the work over many nodes, devices to meet those goals while relying on server-side caching tools.


Does this mean that the HDD is actually software defined?

Seagate and other HDD manufacturers have not yet noticed the software defined marketing (SDM) bandwagon. They could join the software defined fun (SDF) and talk about a software defined disk (SDD) or software defined HDD (SDHDD), however let us leave that alone for now.

The reality is that there is far more software in a typical HDD than is realized. Sure some of that is packaged inside ASICs (Application Specific Integrated Circuits) or running as firmware that can be updated. However, there is a lot of software running in a HDD, hence the need for powerful yet energy-efficient processors found in those devices. On a drive per drive basis, you may see a Kinetic device consume more energy vs. otherwise equivalent HDDs due to the increase in processing (compute) needed to run the extra software. However that also represents an off-load of some work from servers, enabling them to be smaller or do more work.

Are these drives for everybody?

It depends on if your application, environment, platform and technology can leverage them or not. This means if you view the world only through what is new or emerging then these drives may be for all of those environments, while other environments will continue to leverage different drive options.


Does this mean that block storage access is now dead?

Not quite, after all there is still some block activity involved, it is just that they have been further abstracted. On the other hand, many applications, systems or environments still rely on block as well as file based access.

What about OpenStack, Ceph, Cassandra, Mongo, Hbase and other support?

Seagate has indicated those and others are targeted to be included in the ecosystem.

Seagate needs to be careful balancing their story and message with Kinetic to play to and support those focused on the new and emerging, while also addressing their bread and butter legacy markets. The balancing act is communicating options, flexibility to choose and adopt the right technology for the task without being scared of the future, or clinging to the past, not to mention throwing the baby out with the bath water in exchange for something new.

For those looking to do object storage systems, or cloud and other scale based solutions, Kinetic represents a new tool to do your due diligence and learn more about.

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Cloud conversations: Has Nirvanix shutdown caused cloud confidence concerns?


Cloud conversations: Has Nirvanix shutdown caused cloud confidence concerns?

Recently seven plus year old cloud storage startup Nirvanix announced that they were finally shutting down and that customers should move their data.

nirvanix customer message

Nirvanix has also posted an announcement that they have established an agreement with IBM Softlayer (read about that acquisition here) to help customers migrate to those services as well as to those of Amazon Web Services (AWS), (read more about AWS in this primer here), Google and Microsoft Azure.

Cloud customer concerns?

With Nirvanix shutting down there has been plenty of articles, blog posts, twitter tweets and other conversations asking if Clouds are safe.

Btw, here is a link to my ongoing poll where you can cast your vote on what you think about clouds.

IMHO clouds can be safe if used in safe ways which includes knowing and addressing your concerns, not to mention following best practices, some of which pre-date the cloud era, sometimes by a few decades.

Nirvanix Storm Clouds

More on this in a moment, however let's touch base on Nirvanix and why I said they were finally shutting down.

The reason I say finally shutting down is that there were plenty of early warning signs and storm clouds circling Nirvanix for a few years now.

What I mean by this is that in their seven plus years of being in business, there have been more than a few CEO changes, something that is not unheard of.

Likewise there have been some changes to their business model, ranging from selling their software as a service, to a solution, to hosting among others; again, smart startups and established organizations will adapt over time.

Nirvanix also invested heavily in marketing, public relations (PR) and analyst relations (AR) to generate buzz along with gaining endorsements as do most startups to get recognition, followings and investors if not real customers on board.

In the case of Nirvanix, the indicator signs mentioned above also included what seemed like a semi-annual if not annual changing of CEOs, marketing and others tying into business model adjustments.


It was only a year or so ago that if you gauged a company's health by the PR and AR news or activity and endorsements, you would have believed Nirvanix was about to crush Amazon, Rackspace or many others; perhaps some actually did believe that, followed shortly thereafter by the abrupt departure of their then CEO and marketing team. Thus just as fast as Nirvanix seemed to be the phoenix rising in stardom, their aura started to dim again, which could or should have been a warning sign.

This is not to single out Nirvanix, however given their penchant for marketing and now what appears to some as a sudden collapse or shutdown, they have also become a lightning rod of sorts for clouds in general. Given all the hype and FUD around clouds, when something does happen the detractors will be quick to jump or pile on to say things like "See, I told you, clouds are bad".

Meanwhile the cloud cheerleaders may go into denial saying there are no problems or issues with clouds, or they may go back into a committee meeting to create a new stack, standard, API set marketing consortium alliance. ;) On the other hand, there are valid concerns with any technology including clouds that in general there are good implementations that can be used the wrong way, or questionable implementations and selections used in what seem like good ways that can go bad.

This is not to say that clouds in general whether as a service, solution or product on a public, private or hybrid bases are any riskier than traditional hardware, software and services. Instead what this should be is a wake up call for people and organizations to review clouds citing their concerns along with revisiting what to do or can be done about them.

Clouds: Being prepared

Ben Woo of Neuralytix posted a question to one of the LinkedIn groups, Collateral Considerations If You Were/Are A Nirvanix Customer, to which I posted some tips and recommendations including:

1) If you have another copy of your data somewhere else (which you should btw), how will your data at Nirvanix be securely erased, and the storage it resides on be safely (and secure) decommissioned?

2) If you do have another copy of your data elsewhere, how current is it, and can you bring it up to date from various sources (including updating from Nirvanix while they stay online)?

3) Where will you move your data to short or near-term, as well as long-term?

4) What changes will you make to your procurement process for cloud services in the future to protect against situations like this happening to you?

5) As part of your plan for putting data into the cloud, refine your strategy for getting it out, moving it to another service or place as well as having an alternate copy somewhere.

Fwiw, for any data I put into a cloud service there is also another copy somewhere else. Even though there is a cost, there is a benefit: the ability to decide which copy to use if needed, as well as having a backup/spare copy.


Cloud Concerns and Confidence

As part of cloud procurement, whether of services or products, the same proper due diligence should occur as if you were buying traditional hardware, software, networking or services. That includes checking out not only the technology, but also the company's financials, business records, and customer references (both good and not so good or bad ones) to gain confidence. Part of gaining that confidence also involves addressing ahead of time how you will get your data out of or back from that service if needed.

Keep in mind that if your data is very important, are you going to keep it in just one place? For example I have data backed-up as well as archived to cloud providers, however I also have local copies either on-site or off.

Likewise there is data I keep locally that is also kept at alternate locations including the cloud. Sure that is costly, however by not treating all of my data and applications the same, I'm able to balance those costs out, plus use the cost advantages of different services as well as on-site to be effective. I may be spending no less on data protection, in fact I'm actually spending a bit more, however I also have more copies and versions of important data in multiple locations. Data that is not changing often does not get protected as often, however there are multiple copies to meet different needs or threat risks.


Don’t be scared of clouds, be prepared

While some of the other smaller cloud storage vendors will see some new customers, I suspect that near to mid-term it will be the larger, more established and well funded providers that gain the most from this current situation. Granted some customers are looking for alternatives to the mega cloud providers such as Amazon, Google, HP, IBM, Microsoft and Rackspace among others, however there is a long list of other providers, some of which are not so well-known but should be, such as Centurylink/Savvis, Verizon/Terremark, Sungard, Dimension Data, Peak, Bluehost, Carbonite, Mozy (owned by EMC), Xerox ACS and Evault (owned by Seagate), not to mention many others.

Something to be aware of as part of doing your due diligence is determining who or what actually powers a particular cloud service. The larger providers such as Rackspace, Amazon, Microsoft, HP among others have their own infrastructure while some of the smaller service providers may in fact use one of the larger (or even smaller) providers as their real back-end. Hence understanding who is behind a particular cloud service is important to help decide the viability and stability of who it is you are subscribed to or working with.

Something that I have said for the past couple of years and a theme of my book Cloud and Virtual Data Storage Networking (CRC Taylor & Francis) is do not be scared of clouds, however be ready, do your homework.

This also means having cloud concerns is a good thing, again don’t be scared, however find what those concerns are along with if they are major or minor. From that list you can start to decide how or if they can be worked around, as well as be prepared ahead of time should you either need all of your cloud data back quickly, or should that service become un-available.

Also when it comes to clouds, look beyond lowest cost or for free, likewise if something sounds too good to be true, perhaps it is. Instead look for value or how much do you get per what you spend including confidence in the service, service level agreements (SLA), security, and other items.

Keep in mind, only you can prevent data loss either on-site or in the cloud, granted it is a shared responsibility (With a poll).

Additional related cloud conversation items:
Cloud conversations: AWS EBS Optimized Instances
Poll: What Do You Think of IT Clouds?
Cloud conversations: Gaining cloud confidence from insights into AWS outages
Cloud conversations: confidence, certainty and confidentiality
Cloud conversation, Thanks Gartner for saying what has been said
Cloud conversations: AWS EBS, Glacier and S3 overview (Part III)
Cloud conversations: Gaining cloud confidence from insights into AWS outages (Part II)
Don’t Let Clouds Scare You – Be Prepared
Everything Is Not Equal in the Datacenter, Part 3
Amazon cloud storage options enhanced with Glacier
What do VARs and Clouds as well as MSPs have in common?
How many degrees separate you and your information?

Ok, nuff said.

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Cisco buys Whiptail continuing the SSD storage I/O flash cash cache dash


Cisco buys Whiptail continuing the SSD storage I/O flash cash cache dash

There is a nand flash solid state device (SSD) cash dash, not to mention fast cache dance, occurring in the IT and data infrastructure (e.g. storage and I/O) sector specifically.

Why the nand flash SSD cash dash and cache dance?

Yesterday hard disk drive (HDD) vendor Western Digital (WD) bought Virident, a nand flash PCIe Solid State Device (SSD) card vendor, for $650M, and today networking and server vendor Cisco bought Whiptail, an SSD based storage system startup, for a little over $400M. Here is an industry trends perspective post that I did yesterday on WD and Virident.

Obviously this begs a couple of questions, some of which I raised in my post yesterday about WD, Virident, Seagate, FusionIO and others.

Questions include

Does this mean Cisco is getting ready to take on EMC, NetApp, HDS and its other storage partners who leverage the Cisco UCS server?

IMHO at least near term no more than they have in the past, nor any more than EMC's partnership with Lenovo indicates a shift in what is done with vBlocks. On the other hand, some partners or customers may be as nervous as a long-tailed cat next to a rocking chair (Google it if you don't know what it means ;).

Is Cisco going to continue to offer Whiptail SSD storage solutions on a standalone basis, or pull them in as part of solutions similar to what it has done on other acquisitions?


IMHO this is one of the most fundamental questions, and despite the press release and statements about this being a UCS focus, a clear sign of proof for Cisco is how they rein in (if they go that route) Whiptail from being sold as a general storage solution (with SSD) as opposed to being part of a solution bundle.

How will Cisco manage its relationship in a coopitition manner cooperating with the likes of EMC in the joint VCE initiative along with FlexPod partner NetApp among others? Again time will tell.

Also while most of the discussions about NetApp have been around the UCS based FlexPod business, there is the other side of the discussion which is what about NetApp E Series storage including the SSD based EF540 that competes with Whiptail (among others).

Many people may not realize how much DAS storage including fast SAS, high-capacity SAS and SATA or PCIe SSD cards Cisco sells as part of UCS solutions that are not vBlock, FlexPod or other partner systems.

NetApp and Cisco have partnerships that go beyond the FlexPod (UCS and ONTAP based FAS) so will be interesting to see what happens in that space (if anything). This is where Cisco and their UCS acquiring Whiptail is not that different from IBM buying TMS to complement their servers (and storage) while also partnering with other suppliers, same holds true for server vendors Dell, HP, IBM and Oracle among others.

Can Cisco articulate and convince their partners, customers, prospects and others that the Whiptail acquisition is more about direct attached storage (DAS), which includes both internal dedicated and external shared devices?

Keep in mind that DAS does not have to mean Dumb A$$ Storage as some might have you believe.

Then there are the more popular questions of who is going to get bought next, what will NetApp, Dell, Seagate, Huawei and a few others do?

Oh, btw, funny how I have not seen any of the pubs mention that Whiptail CEO Dan Crain is a former Brocadian (e.g. a former CTO of Brocade, which happens to be a Cisco competitor), just saying.

Congratulations to Dan and his crew and enjoy life at Cisco.

Stay tuned as the fall 2013 nand flash SSD cache dash and cash dance activities are well underway.

Ok, nuff said (for now).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

WD buys nand flash SSD storage I/O cache vendor Virident

Storage I/O trends

WD buys nand flash SSD storage I/O cache vendor Virident

Congratulations to Virident for being bought today for $645 Million USD by Western Digital (WD). Virident, a nand flash PCIe card startup vendor, has been around for several years and in the last year or two has gained more industry awareness as a competitor to FusionIO among others.

There is a nand flash solid state device (SSD) cash dash occurring, not to mention fast cache dances occurring in the IT and data infrastructure (e.g. storage and I/O) sector specifically.

Why the nand flash SSD cash dash and cache dance?

Here is a piece that I did today over at InfoStor on a related theme that sets the basis for why the nand flash-based SSD market is popular for storage and as a cache. Hence there is a flash cash dash, and for some a dance, for increased storage I/O performance.

Like the hard disk drive (HDD) industry before it, which despite what some pundits and prophets have declared (for years if not decades) as being dead is still alive, there have been many startups, shutdowns, mergers and acquisitions along with some transformations. Granted, solid-state memory is part of the present and future, being deployed in new and different ways.

The same thing has occurred in the nand flash-based SSD sector, with LSI acquiring SandForce and SanDisk picking up Pliant and FlashSoft among others. Then there is Western Digital (WD), which recently has danced with their cash as they dash to buy up all things flash including STEC (drives & PCIe cards), VeloBit (cache software), Virident (PCIe cards), along with Arkeia (backup) and an investment in Skyera.

Storage I/O trends

What about industry trends and market dynamics?

Meanwhile there have been some other changes, with former industry darling and high-flying post-IPO stock FusionIO hitting market reality with a sudden CEO departure a few months ago. However, after a few months of their stock being pummeled, today it bounced back, perhaps as people now speculate about who will buy FusionIO with WD picking up Virident. Note that one of Virident's OEM customers is EMC for their PCIe flash card XtremSF, as are Micron and LSI.

Meanwhile STEC, also now owned by WD, was EMC's original flash SSD drive supplier, or what they refer to as EFDs (Electronic Flash Devices), not to mention having also supplied HDDs to them (also keep in mind WD bought HGST a year or so back).

There are some early signs, such as FusionIO's stock price jumping today after probably being oversold. Perhaps people are now speculating that Seagate, who had been an investor in Virident (which was bought by WD for $645 million today), might be in the market for somebody else? Alternatively, perhaps WD didn't see the value in a FusionIO, or was not willing to make a flash cache cash grab dash of that size? Also note Seagate won a $630 million infringement lawsuit vs. WD, and the next appeal was recently upheld (here and here).

Does that mean FusionIO could become Seagate’s target or that of NetApp, Oracle or somebody else with the cash and willingness to dash, grab a chunk of the nand flash, and cache market?

Likewise, there are the software I/O and caching tool vendors that are gaining popularity, some of which are tied to VMware and virtual servers vs. others that are more flexible. What about the systems or solution appliance play, could that be in the hunt for a Seagate?

Anything is possible however IMHO that would be a risky move, one that many at Seagate probably still remember from their experiment with Xiotech, not to mention stepping on the toes of their major OEM customer partners.

Storage I/O trends

Thus I would expect Seagate, if they do anything, to go more along the lines of a component type supplier, meaning a FusionIO (yes they have NexGen, however that could be easily dealt with), OCZ, perhaps even an LSI or Micron; however, some of those start to get rather expensive for a quick flash cache grab for some stock and cash.

Also, keep in mind that FusionIO, in addition to having their PCIe flash cards, also has the ioTurbine software caching tool. If you are not familiar with it, IBM recently made an announcement of their Flash Cache Storage Accelerator (FCSA) that has an affiliation with guess who?

Closing comments (for now)

Some of the systems or solutions players will survive, perhaps even being acquired as XtremIO was by EMC, or file for IPO like Violin, or express their wish to IPO and/or be bought, such as all the others (e.g. Skyera, Whiptail, Pure, Solidfire, Cloudbyte, Nimbus, Nimble, Nutanix, Tegile, Kaminario, Greenbyte, and Simplivity among others).

Here's the thing: those who really do know what is going to happen are not saying and probably cannot say, and those who are talking about what will happen are like the rest of us, just speculating, providing perspectives or stirring the pot among other things.

So who will be next in the flash cache SSD cash dash dance?

Ok, nuff said (for now).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Is more of something always better? Depends on what you are doing

Storage I/O trends

Is more always better? Depends on what you are doing

As with many things it depends, however how about some of these?

Is more better for example (among others):

  • Facebook likes
  • Twitter followers or tweets (I’m @storageio btw)
  • Google+ likes, follows and hangouts
  • More smart phone apps
  • LinkedIn connections
  • People in your circle or community
  • Photos or images per post or article
  • People working with or for you
  • Partners vs. doing more with those you have
  • People you are working for or with
  • Posts or longer posts with more in them
  • IOPs or SSD and storage performance
  • Domains under management and supported
  • GB/TB/PB/EB supported or under management
  • Part-time jobs or a better full-time opportunity
  • Metrics vs. those that matter with context
  • Programmers to get job done (aka mythical man month)
  • Lines of code per cost vs. more reliable and tested code per cost
  • For free items and time spent managing them vs. more productivity for a nominal fee
  • Meetings for planning on what to do vs. streamline and being more productive
  • More sponsors or advertisers or underwriters vs. fewer yet more effective ones
  • Space in your booth or stand at a trade show or conference vs. using what you have more effectively
  • Copies of the same data vs. fewer yet more unique (not full though) copies of information
  • Patents in your portfolio vs. more technology and solutions being delivered
  • Processors, sockets, cores, threads vs. using them more effectively
  • Ports and protocols vs. using them more effectively

Storage I/O trends

Thus do more resources matter, or does making more effective use of them matter more?

For example, more ports, protocols, processors, cores, sockets, threads, memory, cache, drives, bandwidth or people, among other things, is not always better, particularly if those resources are not being used effectively.

Likewise don't confuse effective with efficient, where efficient is often assumed to simply mean utilized.

For example, a cache or memory may be 100% utilized (what some call efficient) yet only provide a 35% effective benefit (cache hits) vs. cache churn (misses, etc.).
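To make the distinction concrete, here is a minimal sketch (in Python, with hypothetical numbers, not from any particular product) that separates how full a cache is from how much benefit it actually delivers:

```python
# Minimal sketch: cache utilization (how full) vs. effectiveness (hit rate).
# All figures are hypothetical and for illustration only.

def cache_stats(capacity_gb, used_gb, hits, misses):
    utilization = used_gb / capacity_gb                     # what some call "efficiency"
    total_refs = hits + misses
    hit_rate = hits / total_refs if total_refs else 0.0     # the effective benefit
    return utilization, hit_rate

# A cache can be 100% used yet satisfy only ~35% of references from cache.
util, hit_rate = cache_stats(capacity_gb=512, used_gb=512, hits=35_000, misses=65_000)
print(f"Utilization: {util:.0%}, effective hit rate: {hit_rate:.0%}")
# Utilization: 100%, effective hit rate: 35%
```

The point is that the second number, not the first, is what actually buys you storage I/O performance.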

Throwing more processing power in terms of clock speed or cores is one thing; it is kind of like throwing more server blades at a software problem vs. using those cores, sockets, and threads more effectively.

Good software will run better on fast hardware while enabling more to be done with the same or less.

Thus with better software or tools, more work can be done in an effective way leveraging those resources vs. simply throwing or applying more at the situation.

Hopefully you get the point, so no need to do more with this post (for now), if not, stay tuned and pay more attention around you.

Ok, nuff said, I need to go get more work done now.

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Viking SATADIMM: Nand flash SATA SSD in DDR3 DIMM slot?

Storage I/O trends

Today computer and data storage memory vendor Viking announced that SSD vendor Solidfire has deployed their SATADIMM modules in DDR3 DIMM (e.g. Random Access Memory (RAM) main memory) slots of their SF SSD based storage solution.

solidfire ssd storage with satadimm
SolidFire SF solution with SATADIMM via Viking

Nand flash SATA SSD in a DDR3 DIMM slot?

Per Viking, SolidFire uses the SATADIMM as boot devices and cache to complement the normal SSD drives used in their SF SSD storage grid or cluster. For those not familiar, SolidFire SF storage systems or appliances are based on industry standard servers that are populated with SSD devices, which in turn are interconnected with other nodes (servers) to create a grid or cluster of SSD performance and space capacity. Thus as nodes are added, performance, availability and capacity also increase, all of which is accessed via iSCSI. Learn more about SolidFire SF solutions on their website here.
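As a rough illustration of the scale-out idea (a sketch only; the per-node capacity and IOPS figures below are assumptions, not SolidFire specifications), aggregate resources grow as nodes join the iSCSI-accessed cluster:

```python
# Hypothetical scale-out sizing sketch. Per-node numbers are assumptions,
# not SolidFire specifications; the point is that capacity and performance
# aggregate as nodes (industry standard servers full of SSDs) are added.

PER_NODE_TB = 2.4        # assumed usable flash capacity per node
PER_NODE_IOPS = 50_000   # assumed per-node IOPS

def cluster_totals(nodes):
    return nodes * PER_NODE_TB, nodes * PER_NODE_IOPS

for nodes in (4, 8, 16):
    tb, iops = cluster_totals(nodes)
    print(f"{nodes:2d} nodes -> ~{tb:.1f} TB usable, ~{iops:,} IOPS")
```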

Here is the press release that Viking put out today:

Viking Technology SATADIMM Increases SSD Capacity in SolidFire’s Storage System (Press Release)

Viking Technology’s SATADIMM enables higher total SSD capacity for SolidFire systems, offering cloud infrastructure providers an optimized and more powerful solution

FOOTHILL RANCH, Calif., August 12, 2013 – Viking Technology, an industry leading supplier of Solid State Drives (SSDs), Non-Volatile Dual In-line Memory Module (NVDIMMs), and DRAM, today announced that SolidFire has selected its SATADIMM SSD as both the cache SSD and boot volume SSD for their storage nodes. Viking Technology’s SATADIMM SSD enables SolidFire to offer enhanced products by increasing both the number and the total capacity of SSDs in their solution.

“The Viking SATADIMM gives us an additional SSD within the chassis allowing us to dedicate more drives towards storage capacity, while storing boot and metadata information securely inside the system,” says Adam Carter, Director of Product Management at SolidFire. “Viking’s SATADIMM technology is unique in the market and an important part of our hardware design.”

SATADIMM is an enterprise-class SSD in a Dual In-line Memory Module (DIMM) form factor that resides within any empty DDR3 DIMM socket. The drive enables SSD caching and boot capabilities without using a hard disk drive bay. The integration of Viking Technology’s SATADIMM not only boosts overall system performance but allows SolidFire to minimize potential human errors associated with data center management, such as accidentally removing a boot or cache drive when replacing an adjacent failed drive.

“We are excited to support SolidFire with an optimal solid state solution that delivers increased value to their customers compared to traditional SSDs,” says Adrian Proctor, VP of Marketing, Viking Technology. “SATADIMM is a solid state drive that takes advantage of existing empty DDR3 sockets and provides a valuable increase in both performance and capacity.”

SATADIMM is a 6Gb SATA SSD with capacities up to 512GB. A next generation SAS solution with capacities of 1TB & 2TB will be available early in 2014. For more information, visit our website www.vikingtechnology.com or email us at sales@vikingtechnology.com.

Sales information is available at: www.vikingtechnology.com, via email at sales@vikingtechnology.com or by calling (949) 643-7255.

About Viking Technology Viking Technology is recognized as a leader in NVDIMM technology. Supporting a broad range of memory solutions that bridge DRAM and SSD, Viking delivers solutions to OEMs in the enterprise, high-performance computing, industrial and the telecommunications markets. Viking Technology is a division of Sanmina Corporation (Nasdaq: SANM), a leading Electronics Manufacturing Services (EMS) provider. More information is available at www.vikingtechnology.com.

About SolidFire SolidFire is the market leader in high-performance data storage systems designed for large-scale public and private cloud infrastructure. Leveraging an all-flash scale-out architecture with patented volume-level quality of service (QoS) control, providers can now guarantee storage performance to thousands of applications within a shared infrastructure. In-line data reduction techniques along with system-wide automation are fueling new block-storage services and advancing the way the world uses the cloud.

What’s inside the press release

On the surface this might cause some to jump to the conclusion that the nand flash SSD is being accessed via the fast memory bus normally used for DRAM (e.g. main memory) of a server or storage system controller. For some this might even cause a jump to the conclusion that Viking has figured out a way to use nand flash for reads and writes via a DDR3 DIMM memory location, while also doing so with the Serial ATA (SATA) protocol, enabling server boot and use by any operating system or hypervisor (e.g. VMware vSphere or ESXi, Microsoft Hyper-V, Xen or KVM among others).

Note for those not familiar or needing a refresh on DRAM, DIMM and related items, here is an excerpt from Chapter 7 (Servers – Physical, Virtual and Software) from my book "The Green and Virtual Data Center" (CRC Press).

7.2.2 Memory

Computers rely on some form of memory ranging from internal registers, local on-board processor Level 1 (L1) and Level 2 (L2) caches, random accessible memory (RAM), non-volatile RAM (NVRAM) or Flash along with external disk storage. Memory, which includes external disk storage, is used for storing operating system software along with associated tools or utilities, application programs and data. Read more of the excerpt here…

Is SATADIMM memory bus nand flash SSD storage?

In short no.

Some vendors or their surrogates might be tempted to spin such a story by masking some details to allow your imagination to run wild a bit. When I saw the press release announcement I reached out to Tinh Ngo (Director Marketing Communications) over at Viking with some questions. I was expecting the usual marketing spin story, dancing around the questions with long answers or simply not responding with anything of substance (or that requires some substance to believe). Instead, what I found was the opposite, and thus I want to share with you some of the types of questions and answers.

So what actually is SATADIMM? See for yourself in the following image (click on it to view or Viking site).

Via Viking website, click on image or here to learn more about SATADIMM

Does SATADIMM actually move data via the DDR3 memory bus? No, SATADIMM only draws power from it (yes, nand flash does need power when in use, contrary to a myth I was told about).

Wait, then how is data moved and how does it get to and through the SATA IO stack (hardware and software)?

Simple, there is a cable connector that attaches to the SATADIMM and in turn attaches to an internal SATA port. Or, using a different connector cable, attach the SATADIMM (up to four) to a standard internal SAS port such as on a main board, HBA, RAID or caching adapter.

industry trend

Does that mean that Viking and whoever uses SATADIMM is not actually moving data or implementing SATA via the memory bus and DDR3 DIMM sockets? That would be correct, data movement occurs via cable connection to standard SATA or SAS ports.

Wait, why would I give up a DDR3 DIMM socket in my server that could be used for more DRAM? Great question, and the answer is that it depends on whether you need more DRAM or more nand flash. If you are out of drive slots or PCIe card slots and have enough DRAM for your needs along with available DDR3 slots, you can stuff more nand flash into those locations, assuming you have SAS or SATA connectivity.

satadimm
SATADIMM with SATA connector top right via Viking

satadimm sata connector
SATADIMM SATA connector via Viking

satadimm sas connector
SATADIMM SAS (Internal) connector via Viking

Why not just use the onboard USB ports and plug in some high-capacity USB thumb drives to cut cost? If that is your primary objective it would probably work, and I can also think of some other ways to cut cost. However those are also probably not the primary tenets that people looking to deploy something like SATADIMM would be looking for.

What are the storage capacities that can be placed on the SATADIMM? They are available in different sizes up to 400GB for SLC and 480GB for MLC. Viking indicated that there are larger capacities and faster 12Gb SAS interfaces in the works which would be more of a surprise if there were not. Learn more about current product specifications here.

Good questions. Per Viking: attached are three images that sort of illustrate the connector. As well, why not a USB drive? Well, there are customers that put 12 of these in the system (with up to 480GB usable capacity each), which equates to roughly an added 5.7TB inside the box without touching the drive bays (left for mass HDDs). You will then need to RAID/connect all the SATADIMMs via an HBA.
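The arithmetic behind that comment is simple enough; here is a quick sketch (using the module capacity cited above) of how twelve modules add up without consuming a single drive bay:

```python
# Back-of-the-envelope math for the example above: twelve 480GB SATADIMM
# modules in otherwise empty DDR3 sockets, leaving the drive bays for HDDs.

modules = 12
capacity_gb = 480                          # MLC capacity per module, per the specs above
total_tb = modules * capacity_gb / 1000.0
print(f"{modules} x {capacity_gb}GB = ~{total_tb:.2f} TB added without touching a drive bay")
# 12 x 480GB = ~5.76 TB, i.e. roughly the 5.7TB figure mentioned above
```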

How fast is the SATADIMM, and does putting it into a DDR3 slot speed things up or slow them down? Viking has some basic performance information on their site (here). However, generally it should be the same as or similar to a SAS or SATA SSD drive, although keep SSD metrics and performance in the proper context. Also keep in mind that the DDR3 DIMM slot is only being used for power and not real data movement.

Is the SATADIMM using 3Gb or 6Gb SATA? Good question, today it is 6Gb SATA (remember that SATA can attach to a SAS port, however not vice versa). Let's see if Viking responds in the comments with more, including RAID support (hardware or software) along with other insight such as UNMAP, TRIM and Advanced Format (AF) 4KByte blocks among other things.

Have I actually tried SATADIMM yet? No, not yet. However I would like to give it a test drive and workout if one were to show up on my doorstep (along with disclosure), and share the results if applicable.

industry trend

Future of nand flash in DRAM DIMM sockets

Keep in mind that someday nand flash will actually be seen not only in a Webex or PowerPoint demo preso (e.g. similar to what Diablo Technology is previewing), but also in real use, for example what Micron earlier this year predicted for flash on DDR4 (more on DDR3 vs. DDR4 here).

Is SATADIMM the best nand flash SSD approach for every solution or environment? No, however it does give some interesting options for those who are PCIe card or HDD and SSD drive slot constrained and also have available DDR3 DIMM sockets. As to price, check with Viking; I wish I could say tell them Greg from StorageIO sent you for a good value, however I am not sure what they would say or do.

Related more reading:
How much storage performance do you want vs. need?
Can RAID extend the life of nand flash SSD?
Can we get a side of context with them IOPS and other storage metrics?
SSD & Real Estate: Location, Location, Location
What is the best kind of IO? The one you do not have to do
SSD, flash and DRAM, DejaVu or something new?

Ok, nuff said (for now).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Can RAID extend the life of nand flash SSD?

Storage I/O trends

Can RAID extend nand flash SSD life?

IMHO, the short answer is YES, under some circumstances.

There is a myth and some FUD that RAID (Redundant Array of Independent Disks) can shorten the life or durability of nand flash SSD (Solid State Device) vs. HDD (Hard Disk Drives) due to extra IOPs. The reality is that depending on how it is configured, the RAID level, the implementation and other factors, the life of nand flash SSD can be extended, as I discuss in this video.

Video

Nand flash SSD cells and wear

First, there is a myth that because nand flash SSD does not have moving parts like hard disk drives (HDDs), it does not wear out or break. That is just a myth in that nand flash by its nature wears out with write usage. This is due to how it stores data in cells that have a rated number of program/erase (P/E) cycles that vary by type of medium. For example, Single Level Cell (SLC) has a longer P/E life duration vs. Multi-Level Cell (MLC) and eMLC, which store multiple bits per cell.
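To put P/E cycles in perspective, here is a simple, hedged endurance estimate (all inputs are hypothetical, and real drives also depend on over-provisioning, wear leveling and the controller/FTL behavior discussed next):

```python
# Rough endurance estimate from rated P/E cycles. All inputs are hypothetical
# and the model ignores many real-world factors (over-provisioning, wear
# leveling quality, compression), but it shows why SLC outlasts MLC under
# heavy write workloads.

def years_of_life(capacity_gb, pe_cycles, host_writes_gb_per_day, write_amplification=3.0):
    total_write_budget_gb = capacity_gb * pe_cycles              # raw cell-write budget
    nand_writes_per_day = host_writes_gb_per_day * write_amplification
    return total_write_budget_gb / nand_writes_per_day / 365

# Example: 400GB drive, 500GB of host writes per day (assumed numbers)
print(f"SLC (~100k P/E cycles): {years_of_life(400, 100_000, 500):.0f} years")
print(f"MLC (~3k P/E cycles):   {years_of_life(400, 3_000, 500):.1f} years")
```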

There are a number of factors that contribute to nand flash wear, also known as duty cycle or durability tied to P/E. For example, some storage systems or controllers do a better job at the lower level flash translation layer (FTL), as well as with controllers, firmware, caching using DRAM, and I/O optimization such as write ordering or grouping.

Now what about this RAID and SSD thing?

Ok, first as a recap, keep in mind that there are many RAID levels along with variations and enhancements, as well as differences in where or how they are implemented, ranging from software to hardware, and from adapters to controllers to storage systems.

In the case of RAID 1 or mirroring, just like replication or other one-to-one or one-to-many copy operations, a write to one device is echoed to another. In the case of RAID 5, data and parity are spread across drives; however, the parity is rotated across all drives in an equal manner.

Where some FUD, myths or misunderstandings come into play is that not all RAID 5 implementations, as an example, are the same. Some do a better job of buffering or caching data in battery-protected mirrored DRAM memory until a full stripe write can occur, or if needed, a partial write.

Another attribute is the chunk or shard size (how much data is sent to each drive member) along with the stripe width (how many drives). Some systems have narrow stripes of say 3+1 or 4+1 or 5+1, while others can be 14+1 or 15+1 or wider. Thus, data can be written across a larger number of drives, reducing the P/E consumption or use of any single drive depending on implementation.
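A simple way to see the width effect is the following sketch (hypothetical workload, idealized even distribution; it ignores the caching and full-stripe write optimizations mentioned above, which reduce wear further):

```python
# Sketch: how stripe width spreads writes across RAID group members.
# Idealized model with hypothetical numbers; parity is assumed to rotate
# evenly and partial-stripe read-modify-write penalties are ignored.

def writes_per_drive_gb(host_writes_gb, data_drives, parity_drives):
    width = data_drives + parity_drives
    stripe_written = host_writes_gb * (width / data_drives)   # data plus rotated parity
    return stripe_written / width                              # spread across all members

for data, parity in ((3, 1), (7, 1), (14, 1)):
    per_drive = writes_per_drive_gb(10_000, data, parity)
    print(f"RAID 5 {data}+{parity}: ~{per_drive:,.0f} GB written per drive")
# Wider groups push fewer GB (and thus fewer P/E cycles) through each drive.
```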

How about RAID 6 (dual parity)?

Same thing, it is a matter of how good the implementation is, how the write gathering is done and so forth.

What about RAID wearing out nand flash SSD?

While it is possible that it has occurred or can occur depending on the type of RAID implementation, lack of caching or optimization, configuration, type of SSD, RAID level and other things, in general I will say myth busted.

Want some proof?

I could go through a long technical proof point, citing lots of facts, figures, experts and so forth, leaving you all silenced and dazed similar to the students listening to Ben Stein in Ferris Bueller's Day Off (click here to see what I mean) asking "anybody, anybody, Bueller?"

Ben Stein via https://nostagjicmoviesandthings.blogspot.com
Image via nostagjicmoviesandthings.blogspot.com

How about some simple SSD and storage math?

On a very conservative basis, my estimate is that around 250PB of nand flash SSD drives have shipped and been installed on a revenue basis, attached to or in storage systems and appliances. Combine what Dell + DotHill + EMC + Fujitsu + HDS + HP + IBM (including TMS) + NEC + NetApp + Oracle ship among other legacy vendors, along with new all-flash as well as hybrid vendors (e.g. Cloudbyte, FusionIO (via their NexGen acquisition), Kaminario, Greenbytes, Nutanix or Nimble, Purestorage, Starboard or Solidfire, Tegile or Tintri, Violin or Whiptail among others).

It is also a safe assumption, based on how customers configure and use those and other storage systems, that some form of RAID is involved. Thus if things were as bad as some researchers, vendors and their pundits have made them out to be, wouldn't we be hearing of those issues?

Is it just a RAID 5 problem and that RAID 6 magically corrects the problem?

Well, that depends on apples to apples vs. apples to oranges comparisons.

For example, if you are using a 14+2 (16 drive) RAID 6 to compare to, say, a 3+1 (4 drive) RAID 5, that is not a fair comparison. Granted, it is a handy one if you are a vendor that supports wider RAID groups, stripes and ranks vs. those who do not. However, also keep in mind that some legacy vendors actually support wide stripes and RAID groups as well.
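To see why that is apples-to-oranges, here is a quick sketch (same idealized, hypothetical model as earlier) that separates the effect of RAID level from the effect of group width:

```python
# Sketch comparing capacity efficiency and per-drive write share for
# different RAID group widths. Idealized model with hypothetical inputs;
# the point is that width, not just RAID level, drives much of the difference.

def profile(name, data_drives, parity_drives, host_writes_gb=10_000):
    width = data_drives + parity_drives
    usable = data_drives / width                       # capacity efficiency
    per_drive_writes = host_writes_gb / data_drives    # idealized even spread
    print(f"{name:15s} width={width:2d} usable={usable:.0%} writes/drive~{per_drive_writes:,.0f} GB")

profile("RAID 5 (3+1)", 3, 1)
profile("RAID 6 (14+2)", 14, 2)
profile("RAID 6 (3+2)", 3, 2)   # a width comparable to the narrow RAID 5
```

Comparing the first two lines mixes two variables at once; the third line is the fairer apples-to-apples width for the narrow RAID 5.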

So in some cases the magic is not in the RAID level, rather in the implementation and how it is configured (or not).

Video

Watch this TechTarget produced video recorded live while I was at EMCworld 2013 to learn more.

Otherwise, ok, nuff said (for now).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Virtual, Cloud and IT Availability, it's a shared responsibility and common sense

IT Availability, it’s a shared responsibility and common sense

In case you missed it, recently the State of Oregon had a data center computer problem (ok, storage and application outage) that resulted in unemployment benefits not being provided. Tony Knotzer over at Network Computing did a story Oregon Storage Debacle Highlights Need To Plan For Failure and asked me for some perspectives that you can read here.

Data center

The reason I bring this incident up is not to join in the feeding frenzy that usually occurs when something like this happens, but instead to touch on what should be common. What is lacking at times (or needed more) is common sense when it comes to designing and managing flexible, scalable data infrastructures.

“Fundamental IT 101 is that all technology will fail, despite what the vendors tell you,” Schulz said. And the most likely time technology will fail, he notes, is when people are involved — doing configurations, making changes or updates, or performing upgrades. – Via Network Computing

Note that while any technology can fail or has failed at some point, how it fails, along with fault containment via design best practices and vendor resolution, is what is important.

Good vendors learn and correct things so that they don't happen again, as well as work with customers on best practices to isolate and contain faults from expanding into disasters. Thus when a sales or marketing person tries to tell me that they have never had a failure I wonder if a: they are making something up, b: they have not actually shipped to a customer in production, c: they are not aware of other deployments, d: they are toeing the company line, e: it is too good to be true or f: all the above.

People talking

On the other hand, when a vendor tells me how they have resiliency in their product as well as processes, best practices and can even tell me (public or under NDA) how they have addressed issues, then they have my attention.

A common challenge today is cost cutting along with focus on the newest technology from servers to storage, networking to cloud, virtualization and software defined among other buzzword bingo themes and trends.

buzzword bingo

What also gets overlooked as mentioned above is common sense.

Perhaps if somebody could package and launch a good public relations campaign profiling common sense such as Software Defined Common Sense (SDCS) that might help?

On the other hand, similar to public service announcements (PSA) that may seem like common sense to some, there is a reason they are being done. That is to pass on the information to others who may not know about it thus lack what is perceived as common sense.

Let's get back to the State of Oregon's computer systems issues and the blame game.

You know the blame game? That is when something happens or does not happen as you want it to, and you simply find somebody else to blame, or pivot and point a finger elsewhere.

the blame game

While perhaps good for CYA, the blame game usually does not help to prevent something from happening again, or in the first place.

Hence in my comments about the State of Oregon computer storage system problems, I took the tone of what is common these days: no fault, shared responsibility and shared blame.

In other words, it does not matter who did or did not do what first; both sides could have prevented it.

For some this might resonate with the notion that it does not matter who misbehaved in the sandbox or play room, everybody gets a time out.

This is not to say that one side or the other has to assume or take on more blame or responsibility than the other, rather there is a shared responsibility to look out for each other.

Storage I/O trends

Just like when you drive a car, the education focus is on defensive, safe driving: watching out for what the other person might do or not do (e.g. not using turn signals, or being too busy to look in a mirror while talking or texting and driving, among other things). The goal is to prevent accidents by watching out for those who are not taking responsibility for themselves, not to mention learning from others' mishaps.

teamwork
Working together vs. the blame game

Different views of customer vs. vendor

Having been a customer as well as a vendor in the past, not surprisingly I have some different views on this.

Sure the customer or client is always right, however sometimes there needs to be unpleasant conversations to help the customer help themselves, or keep themselves out of trouble.

Likewise a vendor may also take the blame when something does go wrong, even if it was entirely not their own fault just to stay in good graces with the customer or get that next deal.

Sometimes a vendor deserves to get beat up when something goes wrong, or at least to tell their story, including if needed behind closed doors or under NDA. Likewise to have a meaningful relationship or partnership with the vendor, supplier or VAR, there needs to be trust and confidence, which means not everything gets put out for media or blog venues to feed on.

Sure there is explaining what happened without spin; however, there is also learning from mistakes to prevent them from happening again, which should be common sense. If part of that sharing of blame and responsibility requires not being in public, that's fine, as long as enough information about what happened is conveyed to clarify concerns and create confidence.

With vendor lock-in, when I was a customer some taught that it is the vendor's fault (or for CYA, blame them); as a vendor, the thinking was reinforced that the customer is always right and it is the competition who causes lock-in.

As an analyst and advisory consultant, my thinking not surprisingly is that of shared responsibility.

This means only you can allow vendor lockin, not to mention decide if lockin is bad or not.

Likewise only you can prevent data loss in cloud, virtual or traditional environments which also includes loss of access.

Granted somebody higher up the organization structure may override you; however, ask yourself if you did what was needed.

Likewise, if a vendor is going to be doing some maintenance work in the middle of the week, there is a risk of something happening, even if they have told or sold you there is no single point of failure (NSPOF) or that upgrades are non-disruptive.

Anytime there is a person involved, regardless of whether it is hardware, cables, software, firmware, configurations or physical environments, something can happen. If the vendor drops the ball, or a cable or card or something else, and causes an outage or downtime, it is their responsibility to discuss those issues. However it is also the customer's responsibility to discuss why they let the vendor do something during that time without taking adequate precautions. Likewise if the storage system was a single point of failure for an important system, then there is the responsibility to discuss the cost cutting concerns of others and have them justify why a redundant solution is not needed (that's CYA 101 btw).

Some other common sense tips

For some these might be familiar and if so, are they being done, and for others, perhaps they are new or revolutionary.

In the race to jump to a new technology or vendor, what are the unknowns? For example, you may know what the issues or flaws are in an existing system, solution, product, service or vendor, however what about the new one? Will you be the production beta customer, and if so, how can you mitigate any risk?

Ask vendors tough yet fair questions that are relevant to your needs and requirements, including how they handle updates, upgrades and other tasks. Don't be afraid to go under NDA if needed to get a better view of where they are at, where they have been and where they are going, to avoid surprises.

If this is not common IT sense, then take the responsibility to learn.

On the other hand, if this is common sense, take the responsibility to share and help others learn what it is that you know.

Also understand your availability needs and wants, and balance those with costs along with risks. If something can go wrong, it will when people are involved; thus design for resiliency, including maintenance, to offset applicable threat risks. Remember, in the data center not everything is the same.

Storage I/O trends

Here is my point.

There is enough blame as well as accolades to go around, however take some shared responsibility and use it wisely.

Likewise in the race to cut cost, watch out for causing problems that compromise your information systems or services.

Look into removing complexity and costs without compromise which has long-term benefits vs. simply cutting costs.

Here are some related links and perspectives:
Don’t Let Clouds Scare You Be Prepared
Cloud conversation, Thanks Gartner for saying what has been said
Cloud conversations: Gaining cloud confidence from insights into AWS outages (Part II)
Make Your Company Ready for the Cloud
What do you do when your service provider drops the ball
People, Not Tech, Prevent IT Convergence
Pulling Together a Converged Team
Speaking of lockin, does software eliminate or move the location of vendor lock-in?

Ok, nuff said for now, what say you?

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved