January 2013 Server and StorageIO Update Newsletter


Welcome to the January 2013 edition of the StorageIO Update newsletter, including a new format and added content.

You can access this newsletter via various social media venues (some are shown below) in addition to the StorageIO web sites and subscriptions.

Click on the following links to view the January 2013 edition as an HTML (sent via email) version, or as a PDF version.

Visit the newsletter page to view previous editions of the StorageIO Update.

You can subscribe to the newsletter by clicking here.

Enjoy this edition of the StorageIO Update newsletter, and let me know your comments and feedback.

Nuff said for now

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Thanks for viewing StorageIO content and top 2012 viewed posts

StorageIO industry trends cloud, virtualization and big data

2012 was a busy year (it was our 7th year in business), with plenty of activity on StorageIOblog.com as well as on the various syndicate and other sites that pick up our content feed (https://storageioblog.com/RSSfull.xml).

Excluding traditional media venues, columns, articles, webcasts and web site visits (StorageIO.com and StorageIO.TV), StorageIO-generated content including posts and podcasts has reached over 50,000 views per month (and growing) across StorageIOblog.com and our partner or syndicated sites. Including both public and private, there were about four dozen in-person events and activities, not counting attending conferences or vendor briefing sessions, along with plenty of industry commentary. On the Twitter front there was plenty of activity as well, closing in on 7,000 followers.

Thank you to everyone who has visited the sites where you will find StorageIO-generated content, along with industry trends and perspective comments, articles, tips, webinars, live in-person events and other activities.

In terms of what was popular on the StorageIOblog.com site, here are the top 20 viewed posts in alphabetical order.

Amazon cloud storage options enhanced with Glacier
Announcing SAS SANs for Dummies book, LSI edition
Are large storage arrays dead at the hands of SSD?
AWS (Amazon) storage gateway, first, second and third impressions
EMC VFCache respinning SSD and intelligent caching
Hard product vs. soft product
How much SSD do you need vs. want?
Oracle, Xsigo, VMware, Nicira, SDN and IOV: IO IO its off to work they go
Is SSD dead? No, however some vendors might be
IT and storage economics 101, supply and demand
More storage and IO metrics that matter
NAD recommends Oracle discontinue certain Exadata performance claims
New Seagate Momentus XT Hybrid drive (SSD and HDD)
PureSystems, something old, something new, something from big blue
Researchers and marketers don't agree on future of nand flash SSD
Should Everything Be Virtualized?
SSD, flash and DRAM, DejaVu or something new?
What is the best kind of IO? The one you do not have to do
Why FC and FCoE vendors get beat up over bandwidth?
Why SSD based arrays and storage appliances can be a good idea

Moving beyond the top twenty most-read posts on the StorageIOblog.com site, the list quickly expands to include more popular posts around clouds, virtualization and data protection modernization (backup/restore, HA, BC, DR, archiving), general IT/ICT industry trends and related themes.

I would like to thank the current StorageIOblog.com site sponsors, Solarwinds (management tools, including response-time monitoring for physical and virtual servers) and Veeam (VMware and Hyper-V virtual server backup and data protection management tools), for their support.

Thanks again to everyone for reading and following these and other posts, as well as for your continued support; watch for more content on the above and other related and new topics or themes throughout 2013.

Btw, if you are into Facebook, you can give StorageIO a like at facebook.com/storageio (thanks in advance) along with viewing our newsletter here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio


EMC VMAX 10K, looks like high-end storage systems are still alive (part III)


This is the third in a multi-part series of posts (read the first post here and the second post here) looking at what else EMC announced today in addition to an enhanced VMAX 10K, and dispelling the myth that large storage arrays are dead (at least for now).

In addition to the VMAX 10K specific updates, EMC also announced the release of a new version of their Enginuity storage software (firmware, storage operating system). Enginuity is supported across all VMAX platforms and features the following:

  • Replication enhancements include TimeFinder clone refresh, restore and four-site SRDF for the VMAX 10K, along with thick or thin support. This capability enables functionality across VMAX 10K, 40K or 20K using synchronous or asynchronous modes, and extends earlier three-site support to four-site and mixed modes. Note that the larger VMAX systems already had the extended replication feature support, with the VMAX 10K now on par with them. Note also that the VMAX can be enhanced with VPLEX in front of the storage systems (local or wide area, in-region HA and out-of-region DR) and RecoverPoint behind the systems, supporting bi-synchronous (two-way), synchronous and asynchronous data protection (CDP, replication, snapshots).
  • Unisphere for VMAX 1.5 manages DMX, along with VMware VAAI UNMAP and space reclamation, block zero and hardware clone enhancements, IPv6, Microsoft Windows Server 2012 support and VFCache 1.5.
  • Support for mix of 2.5 inch and 3.5 inch DAEs (disk array enclosures) along with new SAS drive support (high-performance and high-capacity, and various flash-based SSD or EFD).
  • The addition of a fourth dynamic tier within FAST for supporting third-party virtualized storage, along with compression of inactive, cold or stale data (manual or automatic) with a 2:1 data footprint reduction (DFR) ratio. Note that EMC was one of the early vendors to put compression into its storage systems on a block LUN basis in the CLARiiON (now VNX), along with NetApp and IBM (via their Storwize acquisition). The new fourth tier also means that third-party storage does not have to be the lowest tier in terms of performance or functionality.
  • Federated Tiered Storage (FTS) is now available on all EMC block storage systems including those with third-party storage attached in virtualization mode (e.g. VMAX). In addition to supporting tiering across its own products, and those of other vendors that have been virtualized when attached to a VMAX, ANSI T10 Data Integrity Field (DIF) is also supported. Read more about T10 DIF here, and here.
  • Front-end performance enhancements with host I/O limits (Quality of Service or QoS) for multi-tenant and cloud environments to balance or prioritize I/O across ports and users. This feature can balance based on thresholds for IOPS, bandwidth or both from the VMAX. Note that this feature is independent of any operating system-based tool, utility, pathing driver or feature such as VMware DRS and Storage I/O Control. Storage groups are created and mapped to specific host ports on the VMAX, with the QoS performance thresholds applied to meet specific service level requirements or objectives.
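Host I/O limits of this kind amount to admission control applied per storage group and port. As a rough conceptual sketch (this is not EMC's actual Enginuity implementation; the class and names here are made up for illustration), a token-bucket style IOPS cap might look like:

```python
import time

class IopsLimiter:
    """Toy token-bucket limiter: admits up to `iops_limit` I/Os per second.

    Conceptual sketch only; the names and logic are hypothetical and do
    not reflect how Enginuity implements its QoS thresholds.
    """
    def __init__(self, iops_limit):
        self.capacity = iops_limit       # max tokens (I/Os) per second
        self.tokens = float(iops_limit)  # start with a full bucket
        self.last = time.monotonic()

    def try_admit(self):
        now = time.monotonic()
        # refill tokens in proportion to elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.capacity)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True   # I/O proceeds
        return False      # I/O throttled or queued

limiter = IopsLimiter(iops_limit=500)  # storage group capped at 500 IOPS
# A burst of 1,000 I/O attempts: only about the first 500 get through
admitted = sum(1 for _ in range(1000) if limiter.try_admit())
print(admitted)  # roughly the 500-token initial allowance
```

A real implementation would also track bandwidth thresholds and apply the limits per front-end port, as the feature description above notes.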

For discussion (or entertainment) purposes, how about the question of whether Enginuity qualifies as, or can be considered, a storage hypervisor (or storage virtualization, or virtual storage)? After all, the VMAX is now capable of having third-party storage from other vendors attached to it, something that HDS has done for many years now. For those who feel a storage hypervisor, virtual storage or storage virtualization requires software running on Intel or other commodity-based processors, guess what the VMAX uses for CPU processors (granted, you can't simply download the Enginuity software and run it on a Dell, HP, IBM, Oracle or SuperMicro server).

I am guessing some EMC competitors and their surrogates, or others who like to play the storage hypervisor card game, will be quick to tell you it is not, based on various reasons or product comparisons; however, you be the judge.

 

Back to the question of whether traditional high-end storage arrays are dead or dying (from part one in this series).

IMHO, as mentioned, not yet.

Granted, like other technologies that have been declared dead or dying yet are still in use (technology zombies), they continue to be enhanced, find new customers, or see existing customers using them in new ways; their roles are evolving, thus they are still alive.

For some environments, as has been the case over the past decade or so, there will be a continued migration from large legacy enterprise class storage systems to midrange or modular storage arrays with a mix of SSD and HDD. Thus, watch out for a death grip that will not let go of the past, while being careful about flying blind into the future. Do not be scared; be ready, and do your homework with clouds, virtualization and traditional physical resources.

Likewise, there will be the continued migration for some from traditional mid-range class storage arrays to all flash-based appliances. Yet others will continue to leverage all the above in different roles aligned to where their specific features best serve the applications and needs of an organization.

In the case of high-end storage systems such as the EMC VMAX (formerly known as DMX, and Symmetrix before that) based on its Enginuity software, the hardware platforms will continue to evolve, as will the software functionality. This means that these systems will evolve to handle more workloads, as well as moving into new environments, from service providers to mid-range organizations where the systems were previously out of their reach.

Smaller environments have grown larger, as have their needs for storage systems, while higher-end solutions have scaled down to meet needs in different markets. What this means is a convergence where smaller environments have bigger data storage needs and can afford the capabilities of scaled-down or right-sized storage systems such as the VMAX 10K.

Thus, while some of the high-end systems may fade away faster than others, those that continue to evolve and are able to move into different adjacent markets or usage scenarios will be around for some time, at least in some environments.

Avoid confusing what is new and cool falling under industry adoption vs. what is productive and practical for customer deployment. Systems like the VMAX 10K are not for all environments or applications; however, for those who are open to exploring alternative solutions and approaches, it could open new opportunities.

If there is a high-end storage system platform (e.g. Enginuity) that continues to evolve and re-invent itself in terms of moving into or finding new uses and markets, the EMC VMAX would be at or near the top of such a list. For the other vendors of high-end storage systems that are also evolving, you can have an atta boy or atta girl as well, to make you feel better, loved and not left out or off of such a list. ;)

Ok, nuff said for now.

Disclosure: EMC is not a StorageIO client; however, they have been in the past, directly and via acquisitions that they have done. I am however a customer of EMC via my Iomega IX4 NAS (I never did get the IX2 that I supposedly won at EMCworld ;) ) that I bought on Amazon.com, and indirectly via VMware products that I have; oh, and they did send me a copy of the new book Human Face of Big Data (read more here).

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio


EMC VMAX 10K, looks like high-end storage systems are still alive (part II)


This is the second in a multi-part series of posts (read the first post here) looking at whether large enterprise and legacy storage systems are dead, along with what today's EMC VMAX 10K updates mean.

Thus on January 14, 2013, it is time for a new EMC Virtual Matrix (VMAX) model 10,000 (10K) storage system. EMC has been promoting its January 14 live virtual event for a while now. The significance of January is that it is when (along with May or June) many new systems, solutions or upgrades are announced on a staggered basis.

Historically speaking, January and February, along with May and June, are when you have seen many of the larger announcements from EMC. Case in point: back in February 2012 VFCache was released; then in May 2012 in Las Vegas at EMCworld there were 42 announcements made, with others later in the year.

Click here to see images of the car stuffing or click here to watch a video.

Going back to January 2011, there was the record-setting event in New York City, complete with 26 people being compressed, deduped, single-instanced, optimized, stacked and tiered into a Mini Cooper automobile (read and view more here).

Now back to the VMAX 10K enhancements

As an example of a company, product family and specific storage system model still being alive, there is the VMAX 10K. Although this announcement by EMC is VMAX 10K centric, there is also a new version of the Enginuity software (firmware, storage operating system, valueware) that runs across all VMAX based systems, including the VMAX 20K and VMAX 40K. Read here, here, here and here to learn more about VMAX and Enginuity systems in general.

Some main themes of this announcement include Tier 1 reliability, availability and serviceability (RAS) storage system functionality at Tier 2 pricing for traditional, virtual and cloud data centers.

Some other themes of this announcement by EMC:

  • Flexible, scalable and resilient with performance to meet dynamic needs
  • Support private, public and hybrid cloud along with federated storage models
  • Simplified decision-making, acquisition, installation and ongoing management
  • Enable traditional, virtual and cloud workloads
  • Complement its siblings VMAX 40K, 20K and SP (Service Provider) models

Note that the VMAX SP is a model configured and optimized for easy self-service and private cloud, storage as a service (SaaS), IT as a Service (ITaaS) and public cloud service providers needing multi-tenant capabilities with service catalogs and associated tools.

So what is new with the VMAX 10K?

It is twice as fast (per EMC performance results) as the earlier VMAX 10K, leveraging faster 2.8GHz Intel Westmere processors vs. the earlier 2.5GHz Westmere processors. In addition to faster cores, there are more of them: from 4 to 6 per director, and from 8 to 12 per VMAX 10K engine. The PCIe (Gen 2) I/O buses remain unchanged, as does the RapidIO interconnect. RapidIO is used for connecting nodes and engines, while PCIe is used for adapter and device connectivity. Memory stays the same at up to 128GB of global DRAM cache, along with dual virtual matrix interfaces (how the nodes are connected). Note that there is no increase in the amount of DRAM-based cache memory in this new VMAX 10K model.
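As a back-of-the-envelope sanity check (my arithmetic, not an EMC figure): multiplying core counts by clock rate per engine suggests roughly a 1.7x raw compute increase, implying that the claimed 2x performance also reflects software and other enhancements rather than cores and clock speed alone.

```python
# Back-of-the-envelope: aggregate core-GHz per VMAX 10K engine,
# old vs. new, using the figures described in the announcement above.
old = 8 * 2.5    # 8 cores per engine at 2.5 GHz = 20.0 core-GHz
new = 12 * 2.8   # 12 cores per engine at 2.8 GHz = 33.6 core-GHz
ratio = new / old
print(round(ratio, 2))  # 1.68 -> raw compute up ~1.7x, short of 2x
```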

This should prompt the question: for traditionally cache-centric (or cache-dependent for performance) storage systems such as VMAX, how dependent are they now on the CPU and its associated L1/L2 caches? Also, how much has the Enginuity code under the covers been enhanced to leverage multiple cores and threads, thus shifting from being cache-memory dependent to processor hungry?

Also new with the updated VMAX 10K:

  • Support for dense 2.5-inch drives, along with mixed 2.5-inch and 3.5-inch form factor devices, with a maximum of 1,560 HDDs. This means support for 2.5-inch 1TB 7,200 RPM SAS HDDs, along with fast SAS HDDs and SLC/MLC and eMLC solid state devices (SSD), also known as electronic flash devices (EFD). Note that with higher-density storage configurations, good disk enclosures become more important to counter or prevent the effects of drive vibration, something that leading vendors are paying attention to, and so should customers.
  • With the VMAX 10K, EMC is also adding support for certain third-party racks or cabinets to be used for mounting the product. This means being able to mount the VMAX main system and DAE components into selected cabinets or racks to meet specific customer, colo or other environment needs for increased flexibility.
  • For security, the VMAX 10K also supports Data at Rest Encryption (D@RE), which is implemented within the VMAX platform. All data is encrypted on every drive and every drive type (drive independent) within the VMAX platform to avoid performance impacts. It uses AES 256 fixed-block encryption with FIPS 140-2 validation (#1610) and embedded or external key management, including RSA Key Manager. Note that since the storage system-based encryption is done within the VMAX platform or controller, not only is the encrypt/decrypt off-loaded from servers, it also means that any device from SSD to HDD to third-party storage arrays can be encrypted. This is in contrast to drive-based approaches such as self-encrypting devices (SED) or other full drive encryption approaches. With embedded key management, encryption keys are kept and managed within the VMAX system, while external mode leverages RSA key management as part of a broader security solution approach.
  • In terms of addressing ease of decision-making and acquisition, EMC has bundled the core Enginuity software suite (virtual provisioning, FTS and FLM, DCP (dynamic cache partitioning), host I/O limits, Optimizer/virtual LUN and integrated RecoverPoint splitter). In addition, there are bundles for optimization (FAST VP, EMC Unisphere for VMAX with heat map and dashboards), availability (TimeFinder for VMAX 10K) and migration (Symmetrix migration suite, Open Replicator, Open Migrator, SRDF/DM, Federated Live Migration). Additional optional software includes RecoverPoint CDP, CRR and CLR, Replication Manager, PowerPath, SRDF/S, SRDF/A and SRDF/DM, Storage Configuration Advisor, Open Replicator with Dynamic Mobility and the ControlCenter/ProSphere package.

Who needs a VMAX 10K or where can it be used?

As the entry-level model of the VMAX family, certain organizations that are growing and looking for an alternative to traditional mid-range storage systems should be a primary opportunity. Assuming the VMAX 10K can sell at Tier 2 prices with a focus on Tier 1 reliability, feature functionality and simplification, while allowing their channel partners to make some money, then EMC can have success with this product. The challenge, however, will be helping their direct and channel partner sales organizations avoid competing with their own products (e.g. high-end VNX) vs. those of others.

Consolidation of servers with virtualization, along with storage system consolidation to remove complexity in management and costs, should be another opportunity, given the ability to virtualize third-party storage. I would expect EMC and their channel partners to place the VMAX 10K, with its storage virtualization of third-party storage, as an alternative to the HDS VSP (aka USP/USPV) and the HP XP P9000 (Hitachi based) products, or for block storage needs the NetApp V-Series, among others. There could be some scenarios where the VMAX 10K could be positioned as an alternative to the IBM V7000 (SVC based) for virtualizing third-party storage, or, for larger environments, to some of the software-based appliances where there are scaling-with-stability (performance, availability, capacity, ease of management, feature functionality) concerns.

Another area where the VMAX 10K could see action, which will fly in the face of some industry thinking, is deployment in new and growing managed service providers (MSP), public clouds, and community clouds (private consortiums) looking for an alternative to open source based or traditional mid-range solutions. Otoh, I can't wait to hear somebody think outside of both the old and new boxes about how a VMAX 10K could be used beyond traditional applications or functionality. For example, filling it up with a few SSDs, and then balancing with 1TB 2.5-inch SAS HDDs and 3.5-inch 3TB (or larger when available) HDDs as an active archive target leveraging the built-in data compression.
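As a rough capacity sketch of that hypothetical active-archive idea (the drive counts below are purely illustrative numbers of my own choosing, kept within the stated 1,560 HDD maximum, and applying the 2:1 data footprint reduction from the Enginuity compression feature):

```python
# Hypothetical active-archive drive mix (illustrative, not a real config)
ssd_count, ssd_tb = 8, 0.4    # a few SSDs for hot data and metadata
sas_count, sas_tb = 400, 1.0  # 2.5" 1TB 7,200 RPM SAS HDDs
nl_count, nl_tb = 600, 3.0    # 3.5" 3TB HDDs for cold archive data

raw_tb = ssd_count * ssd_tb + sas_count * sas_tb + nl_count * nl_tb
dfr = 2.0                     # 2:1 data footprint reduction via compression
effective_tb = raw_tb * dfr
print(raw_tb, effective_tb)   # ~2,203 TB raw -> ~4,406 TB effective
```

In other words, a mostly-HDD configuration with compression applied to cold data could roughly double its effective archive capacity, which is the appeal of the scenario sketched above.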

How about if EMC were to support cloud-optimized HDDs such as the Seagate Constellation Cloud Storage (CS) HDDs that were announced late in 2012, as well as the newer enterprise class HDDs, to open up new markets? Also keep in mind that some of the new 2.5-inch SAS 10,000 RPM (10K) HDDs have the same performance capabilities as traditional 3.5-inch 15,000 RPM (15K) drives in a smaller footprint, helping to drive and support increased density of performance and capacity with improved energy effectiveness.

How about attaching a VMAX 10K, with the right type of cost-effective (aligned to a given scenario) SSDs, HDDs or third-party storage, to a cluster or grid of servers running OpenStack including Swift, CloudStack, Basho Riak CS, Cleversafe, Scality, Caringo, Ceph or even EMC's own Atmos (which supports external storage) for cloud storage or object-based storage solutions? Granted, that would require thinking outside of both the current and new boxes to move away from RAID-based systems in favor of low-cost JBOD storage in servers; however, what the heck, let's think in pragmatic ways.

Will EMC be able to open new markets and opportunities by making the VMAX and its Enginuity software platform and functionality more accessible and affordable, leveraging the VMAX 10K as well as the VMAX SP? Time will tell; after all, I recall back in the mid-to-late 90s, and then again several times during the 2000s, similar questions or conversations, not to mention predictions of the demise of the large traditional storage systems.

Continue reading about what else EMC announced on January 14, 2013 in addition to the VMAX 10K updates here in the next post in this series. Also check out Chuck's EMC blog to see what he has to say.

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio


EMC VMAX 10K, looks like high-end storage systems are still alive


This is the first in a multi-part series of posts looking at whether large enterprise and legacy storage systems are dead, along with what today's EMC VMAX 10K updates mean.

EMC has announced an upgrade, refresh or new version of their previously announced Virtual Matrix (VMAX) 10,000 (10K), part of the VMAX family of enterprise class storage systems formerly known as DMX (Direct Matrix) and Symmetrix. I will get back to more coverage on the VMAX 10K and other EMC enhancements in a few moments, in parts two and three of this series.

Have you heard the industry myth about the demise or outright death of traditional storage systems? This has particularly been the case for high-end enterprise class systems, which, by the way, were first declared dead back in the mid-1990s at the hands of then-emerging mid-range storage systems.

Enterprise class storage systems include the EMC VMAX, Fujitsu Eternus DX8700, HDS, and HP XP P9000, which is based on the HDS high-end product (OEMed from HDS parent Hitachi Ltd.). Note that some HPers or their fans might argue that the P10000 (formerly known as 3PAR), declared as tier 1.5, should also be on the list; I will leave that up to you to decide.

Let us not forget the IBM DS8000 series (whose predecessors were known as the ESS, and VSS before that), although some IBMers will tell you that XIV should also be on this list. High-end enterprise class storage systems such as those mentioned above are not alone in being declared dead at the hands of new all solid-state device (SSD) based solutions and their startup vendors, or mixed and hybrid-based solutions.

Some are even declaring dead, at the hands of new SSD appliances or systems and of storage hypervisors or virtual storage arrays (VSA), the traditional mid-range storage systems that were supposed to have killed off the enterprise systems a decade ago (hmm, DejaVu?).

The mid-range storage systems include, among others, block (SAN and DAS) and file (NAS) systems from Data Direct Networks (DDN), Dell (Compellent, EqualLogic and the MD series, which is NetApp Engenio based), EMC VNX and Isilon, Fujitsu Eternus, and the HDS HUS mid-range, formerly known as AMS. Let us not forget about HP 3PAR, or the P2000 (DotHill based) or P6000 (EVA, which is probably being put out to pasture). Then there are the various IBM products (their own and what they OEM from others), NEC, NetApp (FAS and Engenio), Oracle and Starboard (formerly known as Reldata). Note that there are many startups that could be on the above list as well, were it not that they consider the above dead, which would in turn make them extinct too; how ironic ;).

What are some industry trends that I am seeing?

  • Some vendors and products might be nearing the ends of their useful lives
  • Some vendors, their products and portfolios continue to evolve and expand
  • Some vendors and their products are moving into new or adjacent markets
  • Some vendors are refining where and what to sell, when and to whom
  • Some vendors are moving up market, some down market
  • Some vendors are moving into new markets, others are moving out of markets
  • Some vendors are declaring others dead to create a new market for their products
  • One size or approach or technology does not fit all needs, avoid treating all the same
  • Leverage multiple tools and technology in creative ways
  • Maximize return on innovation (the new ROI) by using various tools, technologies in ways to boost productivity, effectiveness while removing complexity and cost
  • Realization that cutting costs can result in reduced resiliency; thus look for and remove complexity, with the benefit of removing costs without compromise
  • Storage arrays are moving into new roles, including as back-end storage for cloud, object and other software stacks running on commodity servers to replace JBOD (DejaVu anyone?).

Keep in mind that there is a difference between industry adoption (what is talked about) and customer deployment (what is actually bought and used). Likewise, there is technology based on GQ (looks and image) vs. G2 (functionality, experience).

There is also an industry myth that SSD cannot be or has not been successful in traditional storage systems, which in some cases has been true for some products or vendors. Otoh, some vendors such as EMC, NetApp and Oracle (among others) are having good success with SSD in their storage systems. Some SSD startup vendors have been successful on both the G2 and GQ fronts, while some that focus on GQ or image may not be as successful (at least yet) in the industry adoption vs. customer deployment game.

For the above-mentioned storage system vendors and products (among others), or at least for most of them, there is still plenty of life left, granted their roles and usage are changing, including in some cases being found as back-end storage systems behind servers running virtualization, cloud, object storage and other storage software stacks. Likewise, some of the new and emerging storage systems (hardware, software, valueware, services) and vendors have bright futures, while others may end up on the where-are-they-now list.

Are high-end enterprise class or other storage arrays and systems dead at the hands of new startups, virtual storage appliances (VSA), storage hypervisors, storage virtualization, virtual storage and SSD?

Are large storage arrays dead at the hands of SSD?

Have SSDs been unsuccessful with storage arrays (with poll)?

 

Here are links to two polls where you can cast your vote.

Cast your vote and see the results on whether large storage arrays and systems are dead here.

Cast your vote and see the results on whether SSD has been unsuccessful in storage systems.

So what about it, are enterprise or large storage arrays and systems dead?

Perhaps in some tabloids or industry myths (or in what some wish for), in some customer environments, as well as for some vendors or their products, that can be the case.

However, IMHO, for many other environments (and vendors) the answer is no; granted, some will continue to evolve from legacy high-end enterprise class storage systems to mid-range, or to an appliance or VSA, or something else.

There is still life in many of the storage system architectures, platforms and products that have been declared dead for over a decade.

Continue reading about the specifics of the EMC VMAX 10K announcement in the next post in this series here. Also check out Chuck's EMC blog to see what he has to say.

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio


Many faces of storage hypervisor, virtual storage or storage virtualization


Storage hypervisors were a popular buzzword bingo topic in 2012, with plenty of industry adoption and some customer deployment. Separating out the hype around storage hypervisors reveals conversations around backup, restore, BC, DR and archiving.


Storage virtualization, along with virtual storage and storage hypervisors, shares a theme of abstracting underlying physical hardware resources, like server virtualization. The abstraction can be for consolidation and aggregation, or for enabling agility, flexibility, emulation and other functionality.


Storage virtualization can be implemented in different locations and in many ways, with various functionality and focus. For example, the abstraction can occur on a server, in a virtual or physical appliance (e.g. tin-wrapped software), in a network switch or router, as well as in a storage system. The focus can be aggregation, or data protection (HA, BC, DR, backup, replication, snapshots), on a homogeneous (all one vendor) or mixed-vendor (heterogeneous) basis.
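To make the abstraction theme concrete, here is a toy sketch (my own illustrative names and policy, not any vendor's product) of a virtualization layer presenting one namespace over mixed back-end arrays:

```python
class Backend:
    """A physical storage pool from some vendor (hypothetical stand-in)."""
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb
        self.used_gb = 0

class VirtualizationLayer:
    """Toy abstraction layer: one namespace over heterogeneous backends.

    Illustrative only -- a real implementation (in a server, appliance,
    switch or array) also handles mapping tables, caching, replication,
    snapshots and failover.
    """
    def __init__(self, backends):
        self.backends = backends
        self.volumes = {}  # volume name -> (backend name, size in GB)

    def create_volume(self, name, size_gb):
        # simple placement policy: backend with the most free space
        target = max(self.backends,
                     key=lambda b: b.capacity_gb - b.used_gb)
        if target.capacity_gb - target.used_gb < size_gb:
            raise RuntimeError("no backend has enough free space")
        target.used_gb += size_gb
        self.volumes[name] = (target.name, size_gb)
        return target.name

pool = VirtualizationLayer([Backend("vendor_a_array", 1000),
                            Backend("vendor_b_array", 500)])
placed_on = pool.create_volume("lun01", 400)
print(placed_on)  # lands on the backend with the most free space
```

The host sees only "lun01"; which vendor's array actually holds the data is hidden behind the layer, which is the essence of the abstraction described above.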


Here is a link to a guest post that I recently did over at The Virtualization Practice looking at storage hypervisors, virtual storage and storage virtualization. As is the case with virtual storage, storage virtualization and storage for virtual environments, what you call a storage hypervisor will probably vary depending on your views, spheres of influence, preferences and other factors.

Additional related material:

  • Are you using or considering implementation of a storage hypervisor?
  • Cloud, virtualization, storage and networking in an election year
  • EMC VPLEX: Virtual Storage Redefined or Respun?
  • Server and Storage Virtualization – Life beyond Consolidation
  • Should Everything Be Virtualized?
  • How many degrees separate you and your information?
  • Cloud and Virtual Data Storage Networking (CRC)
  • The Green and Virtual Data Center (CRC)
  • Resilient Storage Networks (Elsevier)
  • Btw, as a special offer for viewers, I have some copies of Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier) available for $19.95, shipping and handling included. Send me an email or tweet (@storageio) to learn more and get your copy (major credit cards and PayPal accepted).

    Ok, nuff said (for now)

    Cheers gs


    Cloud conversation, Thanks Gartner for saying what has been said

    StorageIO industry trends cloud, virtualization and big data

    Thank you, Gartner, for your statements concurring with and endorsing the notion that clouds can be viable if you do your homework; welcome to the club.

    Why am I thanking Gartner?

    Simple: I appreciate Gartner now saying what has been said for a couple of years, in the hope that it will help amplify the theme to the Gartner followers and faithful.

    Gartner: Cloud storage viable option, but proceed carefully


    Images licensed for use by StorageIO via Atomazul / Shutterstock.com

    Sounds like Gartner has come to the same conclusion that has been shared for several years now in posts, articles, columns, keynotes, presentations, webinars and other venues: when it comes to IT clouds, do not be scared. However, do your homework, be prepared, do your due diligence and run proof of concepts.

    Image of clouds, cloud and virtual data storage networking book

    Here are some related materials to prepare and plan for IT clouds (public and private):

    What is your take on IT clouds? Click here to cast your vote and see what others are thinking about clouds.

    Now for those who feel that free information or content is not worth its price, feel free to go to Amazon and buy some book copies here, subscribe to the Kindle version of the StorageIOblog, or contact us for an advisory consultation or other project. For everybody else, enjoy and remember, do not be scared of clouds; do your homework, be prepared and keep in mind that clouds are a shared responsibility.

    Disclosure: I was a Gartner client when I was working in an IT organization and then later as a vendor, however not anymore ;).

    Ok, nuff said.

    Cheers gs


    Congratulations Imation and Nexsan, are there any independent storage vendors left?

    StorageIO industry trends cloud, virtualization and big data

    Last week Imation, the company known for making CDs, DVDs, magnetic tape and, in the past, floppy disks (diskettes), bought Nexsan, a company known for its SATA and SAS storage products.

    Imation also owns (or should be known for) the TDK and Memorex names (remember "Is it real, or is it Memorex?" If not, Google it). For several years they have also had removable hard disk drive (RHDD) products including the Odyssey (I am in the process of retiring mine), along with a partnership with the former ProStor around RDX, having acquired some of ProStor's assets, namely the RDX-based InfiniVault storage appliance. Imation has also been involved in some other things including USB and other forms of flash-based solid state devices (SSD), and back in 2007 launched cloud backup with DataGuard before cloud backup had become a popular buzzword topic.

    Imation has also divested parts of its business over the past several years, including some medical-related (X-ray) operations to Kodak, which occupies part of the headquarters building in Oakdale, MN, or at least it did the last time I drove by on the way from the airport. They also divested their SAN lab, with some of the staff going to Glasshouse and other pieces going to Lionbridge (an independent test lab company). Beyond traditional data protection, backup/restore and archiving media, from consumer to large-scale enterprise, Imation has also been involved in other areas of recording. Imation has also made some other recent acquisitions around dedupe (Nine Technologies).

    For its part, Nexsan has extended its portfolio beyond SATA and SAS products with AutoMaid Intelligent Power Management (IPM), which gives the benefits of variable power and performance without the penalties of first-generation MAID-type products. Read more about IPM and related themes here, here and here. Nexsan also supports NAS and iSCSI solutions in addition to its archive and content or object storage focused Assureon product, which it bought a few years ago.

    This is a good acquisition for both companies, as it gives Imation a new set of products to sell into its existing accounts and channels. Imation can also leverage Nexsan's channel and solution-selling skills, while giving Nexsan a bigger brand and a large parent for credibility (not that it lacked credibility in the past).

    Here is a link to a piece done by Dave Raffo that includes some comments and perspectives from me. To say that the synergy here is about archiving or selling SSDs or storage would be too easy and miss a bigger potential. That potential is that Imation has been in the business of selling consumable accessories for protecting and preserving data. Notice I said consumable accessories, which in the past has meant manufacturing consumable media (e.g. floppy disks, CDs, DVDs, magnetic tapes) as well as partnering around flash and HDDs.

    In many environments, from small to large to super-sized cloud and service providers, some types of storage systems, including some of those that Nexsan sells, can be considered a consumable medium, taking over the role that tape, CDs or DVDs have played in the past. Instead of using tape, CDs or DVDs to protect HDD- and SSD-based data, HDD-based solutions are being used for disk-to-disk (D2D) protection (part of modernizing data protection). D2D is being done with appliances, or in conjunction with cloud and object storage software stacks such as OpenStack Swift, Basho Riak CS, CloudStack, Cleversafe, Ceph and Caringo among a list of others, in addition to appliances such as EMC Atmos that can support third-party storage devices as consumable media. Keep in mind that there is no such thing as a data or information recession, and people and data are living longer and getting larger, both for big data and little data.

    The big "if" in this acquisition, which IMHO is a fair price for both parties based on realistic valuations, is whether they can collectively execute on it. This means that Imation and Nexsan need to leverage each other's strengths, address any weaknesses, close gaps and expand into each other's markets and channels, selling the entire portfolio as opposed to becoming singularly focused on a particular area, tool or technology. If Imation can execute on this and Nexsan leverages its new parent, the result should be moving from roughly $85M USD in sales to $100M+, then $125M, then $150M and so forth over the next couple of years.

    Even if Imation merely maintains revenues or sees a slight increase, that would also be a good deal for them, granted the industry pundits may not agree, so let us see where this is in a few years. However, if Imation can grow the Nexsan business, then it would become a very good deal. Thus, IMHO the price valuation for the deal has the risk built in, something like when NetApp bought the Engenio business unit from LSI back in 2011 for about $480M USD. At that time, Engenio was doing about $705M USD in revenue and was seen by many industry pundits as being on the decline, thus the lower valuation. For its part, NetApp has been executing, maintaining the revenue of that business unit with some expansion, thus its execution so far is rewarding it for taking the risk.

    Let us see if Imation can do the same thing.

    Now, does that mean that Nexsan was the last of the independent storage vendors left?

    Hardly. After all, there is still Xiotech, excuse me, Xio, as they changed their name as part of a repackaging, relaunch and downsizing. There is DotHill, which supplies partners such as HP, and DotHill's former partner supplier Infortrend. If you are an Apple fan then you might know about Promise; if not, you should. Let's not forget about Data Direct Networks (DDN), which is still independent and, at around $200M (give or take several million) in revenue, very much still around.

    How about Xyratex? Sure, they make the enclosures and appliances that many others use in their solutions, however they also have a storage solutions business focused on scale-out, clustered and grid NAS based on Lustre. There are some others that I am drawing a blank on now (if you read this and are one of them, chime in), in addition to all the new or current generation of startups (you can chime in as well to let people know who you are and that you are available to be bought).

    There is still consolidation taking place: smaller vendors acquired by mid-sized vendors, mid-sized vendors by big vendors, big vendors by mega vendors, and startups by established players.

    Again congratulations to both Imation and Nexsan, let us see who or what is next on the 2013 mergers and acquisition list, as well as who will join the where are they now club.

    Disclosure: Nexsan has been a StorageIO client in the past; however, Imation has not been a client, although they have bought me lunch before here in the Stillwater, MN area.

    With Imation having their own brand name and identity, not to mention TDK and Memorex, now I have to wonder will Nexsan be real or Memorex or something else? ;)

    Ok, nuff said.

    Cheers gs


    Cloud conversations: Gaining cloud confidence from insights into AWS outages (Part II)

    StorageIO industry trends cloud, virtualization and big data

    This is the second in a two-part industry trends and perspective looking at learning from cloud incidents, view part I here.

    There is good information, insight and lessons to be learned from cloud outages and other incidents.

    Sorry, cynics: no, that does not mean an end to clouds, as they are here to stay. However, when and where to use them, along with what best practices to apply and how to be ready and configure them for use, are part of the discussion. This means that clouds may not be for everybody or for all applications, at least not today. For those who are into clouds for the long haul (either all in or partially), including current skeptics, there are many lessons to be learned and leveraged.

    In order to gain confidence in clouds, a question that I routinely am asked is: are clouds more or less reliable than what you are doing today? That depends on what you are doing, and how you will be using the cloud services. If you are applying HA and other BC or resiliency best practices, you may be able to configure around and isolate yourself from the more common situations. On the other hand, if you are simply using the cloud services as a low-cost alternative, selecting the lowest price and service class (SLAs and SLOs), you might get what you paid for. Thus, clouds are a shared responsibility: the service provider has things it needs to do, and the user or person designing how the service will be used has some decision-making responsibilities.

    Keep in mind that high availability (HA), resiliency and business continuance (BC), along with disaster recovery (DR), are the sum of several pieces. This includes people, best practices, processes including change management, and good design that eliminates points of failure and isolates or contains faults, along with the components or technology used (e.g. hardware, software, networks, services, tools). Good technology used in good ways can be part of a highly resilient, flexible and scalable data infrastructure. Good technology used in the wrong ways may not leverage the solutions to their full potential.

    While it is easy to focus on the physical technologies (servers, storage, networks, software, facilities), many of the cloud services incidents or outages have involved people, process and best practices so those need to be considered.

    These incidents or outages bring awareness, a level set, that this is still early in the cloud evolution lifecycle, and a need to move beyond seeing clouds as just a way to cut cost, recognizing the importance and value of HA, resiliency, BC and DR. Learning from mistakes, taking action to correct or fix errors, and finding and removing points of failure are all part of a technology (or the use of it) maturing. These all tie into having services with service level agreements (SLAs) and service level objectives (SLOs) for availability, reliability, durability, accessibility, performance and security among others, to protect against mayhem or other things that can and do happen.
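To make those SLA and SLO availability figures concrete, here is a small illustrative Python sketch. The numbers are hypothetical examples, not any provider's actual SLA terms: the sketch converts an availability percentage into expected downtime per year, and shows how redundant, independently failing components change composite availability.

```python
# Hypothetical availability math for SLA/SLO discussions; the numbers
# below are illustrative examples, not any specific provider's terms.

HOURS_PER_YEAR = 365 * 24  # 8760, ignoring leap years

def downtime_hours_per_year(availability: float) -> float:
    """Expected downtime in hours per year for a given availability (0..1)."""
    return (1.0 - availability) * HOURS_PER_YEAR

def parallel_availability(availability: float, copies: int) -> float:
    """Composite availability of N redundant components.

    Assumes failures are independent, which real-world outages
    (shared regions, human error, change management) often violate.
    """
    return 1.0 - (1.0 - availability) ** copies

# "Three nines" still means roughly 8.76 hours of downtime per year.
print(round(downtime_hours_per_year(0.999), 2))
# Two independent 99% components yield about 99.99% composite availability.
print(round(parallel_availability(0.99, 2), 6))
```

This is also why the fault isolation and containment themes above matter: if both copies sit behind the same failure domain, the independence assumption, and the multiplication, no longer apply.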

    Images licensed for use by StorageIO via
    Atomazul / Shutterstock.com

    The reason I mentioned earlier that AWS had another incident is that, like its peers or competitors who have had incidents in the past, AWS appears to be going through some growing, maturing, evolution-related activities. During summer 2012 there was an AWS incident that affected Netflix (read more here: AWS and the Netflix Fix?). It should also be noted that there were earlier AWS outages where Netflix (read about the Netflix architecture here) leveraged resiliency designs to try to prevent mayhem when others were impacted.

    Is AWS a lightning rod for things to happen, a point of attraction for Mayhem and others?

    Granted, given its size, scope of services and how it is being used on a global basis, AWS is blazing new territory and experiences, similar to what other information services delivery platforms did in the past. What I mean is that while taken for granted today, open systems Unix, Linux and Windows-based along with client-server, midrange or distributed systems, not to mention mainframe hardware, software, networks, processes, procedures and best practices, all went through growing pains.

    There are a couple of interesting threads going on over in various LinkedIn groups based on some reporters' stories, including speculation about what happened, followed by some good discussions of what actually happened and how to prevent recurrences in the future.

    Over in the Cloud Computing, SaaS & Virtualization group forum, this thread is based on a Forbes article (Amazon AWS Takes Down Netflix on Christmas Eve) and involves conversations about SLAs, best practices, HA and related themes. Have a look at the story the thread is based on, some of the assertions being made, and the ensuing discussions.

    Also over at LinkedIn, in the Cloud Hosting & Service Providers group forum, this thread is based on a story titled Why Netflix’ Christmas Eve Crash Was Its Own Fault with a good discussion on clouds, HA, BC, DR, resiliency and related themes.

    Over at the Virtualization Practice, there is a piece titled Is Amazon Ruining Public Cloud Computing? with comments from me and Adrian Cockcroft (@Adrianco) a Netflix Architect (you can read his blog here). You can also view some presentations about the Netflix architecture here.

    What this all means

    Saying you get what you pay for would be too easy and perhaps not applicable.

    There are good free or low-cost services, just as there is good free content and other things; however, vice versa, just because something costs more does not make it better.

    On the other hand, there are services that charge a premium yet may have no better, if not worse, reliability; the same goes with for-fee content or perceived value that is no better than what you get for free.

    Additional related material

    Some closing thoughts:

    • Clouds are real and can be used safely; however, they are a shared responsibility.
    • Only you can prevent cloud data loss, which means do your homework, be ready.
    • If something can go wrong, it probably will, particularly if humans are involved.
    • Prepare for the unexpected and clarify assumptions vs. realities of service capabilities.
    • Leverage fault isolation and containment to prevent rolling or spreading disasters.
    • Look at cloud services beyond lowest cost or for cost avoidance.
    • What is your organization's culture for learning from mistakes vs. fixing blame?
    • Ask yourself if you, your applications and organization are ready for clouds.
    • Ask your cloud providers if they are ready for you and your applications.
    • Identify what your cloud concerns are to decide what can be done about them.
    • Do a proof of concept to decide what types of clouds and services are best for you.

    Do not be scared of clouds, however be ready, do your homework, learn from the mistakes, misfortune and errors of others. Establish and leverage known best practices while creating new ones. Look at the past for guidance to the future, however avoid clinging to, and bringing the baggage of the past to the future. Use new technologies, tools and techniques in new ways vs. using them in old ways.

    Ok, nuff said.

    Cheers gs


    Cloud conversations: Gaining cloud confidence from insights into AWS outages

    StorageIO industry trends cloud, virtualization and big data

    This is the first of a two-part industry trends and perspectives series looking at how to learn from cloud outages (read part II here).

    In case you missed it, there were some public cloud outages during the recent Christmas 2012 holiday season. One incident involved Microsoft Xbox (view the Microsoft Azure status dashboard here), where users were impacted, and the other was another Amazon Web Services (AWS) incident. Microsoft and AWS are not alone; most if not all cloud services have had some type of incident and have gone on to improve from those outages. Google has had issues with different applications and services, including some in December 2012 along with a Gmail incident that received coverage back in 2011.

    For those interested, here is a link to the AWS status dashboard and a link to the AWS December 24 2012 incident postmortem. In the case of the recent AWS incident, which affected users such as Netflix, the incident (read the AWS postmortem and Netflix postmortem) was tied to a human error. This is not to say AWS has more outages or incidents vs. others including Microsoft; it just seems that we hear more about AWS when things happen compared to others. That could be due to AWS's size and arguably market-leading status, diversity of services, and the scale at which some of their clients are using them.

    Btw, if you were not aware, Microsoft Azure is about more than just supporting SQL Server, Exchange, SharePoint or Office; it is also an IaaS layer for running virtual machines via Hyper-V, as well as a storage target for storing data. You can use Microsoft Azure storage services as a target for backing up or archiving or as general storage, similar to using AWS S3, Rackspace Cloud Files or other services. Some backup and archiving AaaS and SaaS providers, including Evault, partner with Microsoft Azure as a storage repository target.
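As a sketch of what using such a cloud storage target for backup can look like programmatically, here is a small hedged Python example. The key prefix, naming scheme and helper function are hypothetical; the actual upload call (via an S3-compatible SDK) is only indicated in a comment, while the checksum step reflects the shared-responsibility point: the provider stores the bytes, and you verify they are the right bytes.

```python
import hashlib
from datetime import date

def stage_backup_object(payload: bytes, prefix: str, when: date):
    """Build an object key and an integrity digest for a backup payload.

    The SHA-256 digest lets you verify the object after upload
    (re-read or HEAD it and compare), since integrity checking is
    the user's side of the cloud shared-responsibility model.
    """
    key = f"{prefix}/{when.isoformat()}.tar.gz"  # hypothetical naming scheme
    digest = hashlib.sha256(payload).hexdigest()
    return key, digest

# With an S3-compatible SDK the upload itself would look roughly like
# put_object(bucket, key, payload) followed by a checksum comparison;
# that part is provider-specific and omitted here.
key, digest = stage_backup_object(b"example archive bytes", "backups/daily", date(2013, 1, 15))
print(key)  # backups/daily/2013-01-15.tar.gz
```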

    When reading some of the coverage of these recent cloud incidents, I am not sure if I am more amazed by some of the marketing cloud washing, or by the cloud bashing and uninformed reporting or lack of research and insight. Then again, if someone repeats a myth often enough for others to hear and repeat, as it gets amplified, the myth may assume the status of reality. After all, you may know the expression that if it is on the internet then it must be true?

    Images licensed for use by StorageIO via
    Atomazul / Shutterstock.com

    Have AWS and public cloud services become a lightning rod for when things go wrong?

    Here is some coverage of various cloud incidents:

    The above are a small sampling of different stories, articles, columns, blogs, perspectives about cloud services outages or other incidents. Assuming the services are available, you can Google or Bing many others along with reading postmortems to gain insight into what happened, the cause, effect and how to prevent in the future.

    Do these recent incidents show a trend of increased cloud outages? Alternatively, do they say that the cloud services are being used more and on a larger basis, thus the impacts become more known?

    Perhaps it is a mix of the above, and like when a magnetic storage tape gets lost or stolen, it makes for good news or copy, something to write about. Granted there are fewer tapes actually lost than in the past, and far fewer vs. lost or stolen laptops and other devices with data on them. There are probably other reasons such as the lightning rod effect given how much industry hype around clouds that when something does happen, the cynics or foes come out in force, sometimes with FUD.

    Similar to traditional hardware or software product vendors, some service providers have even tried to convince me that they have never had an incident, nor lost, corrupted or compromised any data; yeah, right. Candidly, I put more credibility and confidence in a vendor or solution provider who tells me that they have had incidents and taken steps to prevent them from recurring. Granted, some of those steps might be made public while others might be under NDA; at least they are learning and implementing improvements.

    As part of gaining insights, here are some links to AWS, Google, Microsoft Azure and other service status dashboards where you can view current and past situations.

    What is your take on IT clouds? Click here to cast your vote and see what others are thinking about clouds.

    Ok, nuff said for now (check out part II here)

    Disclosure: I am a customer of AWS for EC2, EBS, S3 and Glacier as well as a customer of Bluehost for hosting and Rackspace for backups. Other than Amazon being a seller of my books (and my blog via Kindle) along with running ads on my sites and being an Amazon Associates member (Google also has ads), none of those mentioned are or have been StorageIO clients.

    Cheers gs


    The Human Face of Big Data, a Book Review

    StorageIO industry trends cloud, virtualization and big data

    My copy of the new book The Human Face of Big Data created by Rick Smolan and Jennifer Erwitt arrived yesterday compliments of EMC (the lead sponsor). In addition to EMC, the other sponsors of the book are Cisco, VMware, FedEx, Originate and Tableau software.

    To say this is a big book would be an understatement; then again, big data is a big topic with a lot of diversity if you open your eyes and think in a pragmatic way, which you will see once you open its pages. This is physically a big book (11 x 14 inches) with lots of pictures, text, stories, factoids and thought-stimulating information on the many facets and dimensions of big data across 224 pages.

    While big data as a buzzword and industry topic theme might be new, along with some of the related technologies, techniques and focus areas, other aspects have been around for some time. Big data means many things to various people depending on their focus or areas of interest, ranging from analytics to images, videos and other big files. A common theme is the fact that there is no such thing as an information or data recession, and that people and data are living longer, getting larger, and we are all addicted to information for various reasons.

    Big data needs to be protected and preserved as it has value, or its value can increase over time as new ways to leverage it are discovered, which also leads to changing data access and life cycle patterns. With many faces, facets and areas of interest applying to various spheres of influence, big data is not limited to programmatic, scientific, analytical or research uses, yet there are many current use cases in those areas.

    Big data is not limited to videos for security surveillance, entertainment, telemetry, audio, social media, energy exploration, geosciences, seismic, forecasting or simulation, yet those have been areas of focus for years. Some big data files or objects are millions of bytes (MBytes), billions of bytes (GBytes) or trillions of bytes (TBytes) in size that, when put into file systems or object repositories, add up to Exabytes (EB, a million TBytes) or Zettabytes (ZB, 1,000 EBs). Now if you think those numbers are far-fetched, simply look back to when you thought a TByte, GByte, let alone a MByte, was big or a far-fetched future. Remember, there is no such thing as a data or information recession, and people and data are living longer and getting larger.
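As a quick sanity check on those decimal (SI-style) units, where each step is a factor of 1,000 (binary KiB/MiB units, which use 1,024 per step, differ slightly), here is a small illustrative Python sketch:

```python
# Decimal (SI) storage units, each step 1000x, as used in the text.
UNITS = ["B", "KB", "MB", "GB", "TB", "PB", "EB", "ZB"]

def to_bytes(value: float, unit: str) -> float:
    """Convert a value in the given decimal unit into bytes."""
    return value * 1000 ** UNITS.index(unit)

# One Exabyte is a million TBytes; one Zettabyte is a thousand EBs.
print(int(to_bytes(1, "EB") / to_bytes(1, "TB")))  # 1000000
print(int(to_bytes(1, "ZB") / to_bytes(1, "EB")))  # 1000
```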

    Big data is more than Hadoop, MapReduce, SAS or other programmatic and analytics-focused tools, solutions or platforms, yet those all have been and will be significant focus areas in the future. This also means big data is more than data warehouses, data marts, data mining, social media and event or activity log processing, which are also main parts with continued roles going forward. Just as there are large MByte, GByte or TByte sized files or objects, there are also millions and billions of smaller files, objects or pieces of information that are part of the big data universe.

    You can take a narrow product, platform, tool, process, application, sphere-of-influence or domain-of-interest view towards big data, or a pragmatic view of its various faces and facets. Of course you can also spin everything that is not little data to be big data, and that is where some of the BS about big data comes from. Big data is not exclusive to data scientists, researchers, academia, governments or analysts, yet there are areas of focus where those are important. What this means is that there are other areas of big data that do not need a data science, computer science, mathematics, statistics, doctoral (PhD) or other advanced degree or training; in other words, big data is for everybody.

    Cover image of Human Face of Big Data Book

    Back to how big this book is in both physical size, as well as rich content. Note the size of The Human Face of Big Data book in the adjacent image that for comparison purposes has a copy of my last book Cloud and Virtual Data Storage Networking (CRC), along with a 2.5 inch hard disk drive (HDD) and a growler. The Growler is from Lift Bridge Brewery (Stillwater, MN), after all, reading a big book about big data can create the need for a big beer to address a big thirst for information ;).

    The Human Face of Big Data is more than a coffee table or picture book, as it is full of information, factoids and perspectives on how information and data surround us every day. Check out the image below and note the 2.5 inch HDD sitting on the top right hand corner of the page above the text. Open up a copy of The Human Face of Big Data and you will see examples of how data and information are all around us, and our dependence upon them.

    A look inside the book The Human Face of Big Data image

    Book Details:
    Copyright 2012
    Against All Odds Productions
    ISBN 978-1-4549-0827-2
    Hardcover 224 pages, 11 x 0.9 x 14 inches
    4.8 pounds, English

    There is also an applet to view related videos and images found in the book at HumanFaceofBigData.com/viewer in addition to other material on the companion site www.HumanFacesofBigData.com.

    Get your copy of The Human Face of Big Data at Amazon.com by clicking here, or at other venues, including by clicking on the following image (Amazon.com).

    Some added and related material:
    Little data, big data and very big data (VBD) or big BS?
    How many degrees separate you and your information?
    Hardware, Software, what about Valueware?
    Changing Lifecycles and Data Footprint Reduction (Data doesn't have to lose value over time)
    Garbage data in, garbage information out, big data or big garbage?
    Industry adoption vs. industry deployment, is there a difference?
    Is There a Data and I/O Activity Recession?
    Industry trend: People plus data are aging and living longer
    Supporting IT growth demand during economic uncertain times
    No Such Thing as an Information Recession

    For those who can see big data in a broad and pragmatic way, perhaps using the visualization aspect, this book brings forth the idea that there are and will be many opportunities. Then again, for those who have a narrow or specific view of what is or is not big data, there is so much of it around, in various types and focus areas, that you too will see some benefits.

    Do you want to play in or be part of a big data puddle, pond or lake, or sail and explore the oceans of big data and all the different aspects found in, under and around those bigger, broader bodies of water?

    Bottom line: this is a great book and read regardless of whether you are involved with data- and information-related topics or themes; the format and design lend themselves to any audience. Broaden your horizons, open your eyes, ears and thinking to the many facets and faces of big data that are all around us by getting your copy of The Human Face of Big Data (click here to go to Amazon for your copy).

    Ok, nuff said.

    Cheers gs


    December 2012 StorageIO Update news letter

    StorageIO News Letter Image
    December 2012 News letter

    Welcome to the December 2012 year end edition of the StorageIO Update news letter including a new format and added content.

    You can get access to this news letter via various social media venues (some are shown below) in addition to StorageIO web sites and subscriptions.

Click on the following links to view the December 2012 edition as the brief (short HTML sent via email) version, or the full HTML or PDF versions.

Visit the newsletter page to view previous editions of the StorageIO Update.

You can subscribe to the newsletter by clicking here.

Enjoy this edition of the StorageIO Update newsletter, and let me know your comments and feedback.

    Nuff said for now

    Cheers
    Gs


    Hardware, Software, what about Valueware?

    StorageIO industry trends cloud, virtualization and big data

I am surprised nobody has figured out how to use the term valueware to describe their hardware, software, or services solutions, particularly around cloud, big data, little data, converged solution stacks or bundles, virtualization, and related themes.

    Cloud virtualization storage and networking building blocks image
    Cloud and virtualization building blocks transformed into Valueware

Note that I’m referring to IT hardware, not what you would usually find at a TrueValue hardware store (disclosure: I like to shop there for things to innovate with and to address the non-IT to-do project list).

Instead of value-add software, or what might otherwise be called an operating system (OS), middleware, glue, hypervisor, shims, or agents, I wonder who will be first to use valueware? Or who will be the first to say they were the first to articulate the value of their industry-unique and revolutionary solution using valueware?

    Cloud and convergence stack image from Cloud and Virtual Data Storage Networking Book

For those not familiar, converged solution stack bundles combine server, storage, and networking hardware along with management software and other tools in a prepackaged solution from the same or multiple vendors. Examples include Dell VIS (not to be confused with their reference architectures, or with fish in Dutch), VCE and EMC vBlocks, IBM PureSystems, NetApp FlexPods, and Oracle Exaboxes, among others.

    Converged solution or cloud bundle image from Cloud and Virtual Data Storage Networking Book

    Why is it that the IT or ICT (for my European friends) industries are not using valueware?

Is valueware not being used because it has not been brought to anyone's attention yet, has not appeared on anybody's buzzword bingo list, or has not been written about in an industry trade rag (publication) or blog (other than here) or on Twitter?

    Buzzword bingo image

Is it because, in some marketers' opinion or in the view of their research focus groups, the term value is associated with being cheap or low-cost? If that is the case, I wonder how many of those marketing focus groups actually include active IT or ICT professionals. If those research marketing focus groups contact practicing IT or ICT pros, there would be a lower degree of separation from the information, vs. professional focus group or survey participants who may have a larger degree of separation from practitioners.

Degrees of separation image

Depending on who uses valueware first and how it is used, if it becomes popular or trendy, rest assured there will be a bandwagon race to the train station to jump on board the marketing innovation train.

    Image and video with audio of train going down the tracks

On the other hand, using valueware could be an innovative way to help articulate soft product value (read more about hard and soft product here). For those not familiar, hard product does not simply mean hardware; it includes many technologies (hardware, software, networks, services) that are combined with best practices and other elements to create a soft product (the solution experience).

Whatever the reason, I am assuming that valueware is not going to be used by creative marketers, so let us have some fun with it instead.

Let me rephrase that: let us leave valueware alone, and instead look at the esteemed company it keeps (some entries are for fun, some are for real).

    • APIware (having some fun with those who see the world via APIs)
    • Cloudware (not to be confused with cloud washing)
    • Firmware (software tied to hardware, is it hardware or software? ;) )
    • Hardware (something software, virtualization and clouds run on)
    • Innovationware (not to be confused with a data protection company called Innovation)
    • Larryware (anything Uncle Larry wants it to be)

Image of Uncle Larry, aka Larry Ellison, taking on whomever or whatever

    • Marketware (related to marketecture)
    • Middleware (software to add value or glue other software together)
    • Netware (RIP Ray Noorda)
    • Peopleware (those who use or support IT and cloud services)
    • Santaware (come on, tis the season right)
    • Sleepware (disks and servers spin down to sleep using IPM techniques)
    • Slideware (software defined marketing presentations)
    • Software (something that runs on hardware)
    • Solutionware (could be a variation of implementation of soft product)
    • Stackware (something that can also be done with Tupperware)
    • Tupperware (something that can be used for food storage)
    • Valueware (valueware.us points to this page, unless somebody wants to buy or rent it ;) )
    • Vaporware (does vaporware actually exist?)

More variations can be added to the above list, for example by substituting wear for ware. However, I will leave that up to your own creativity and innovation skills.

Let’s see if anybody starts to use valueware as part of their marketware or value-proposition slideware pitches. If you do use it, let me know, and I will be happy to give you a shout out.

    Ok, nuff said.

    Cheers gs


HP's big December 3rd storage announcement

HP has been talking up and promoting for several weeks (ok, months) its upcoming December 3rd storage announcements from the HP Discover event in Frankfurt, Germany.

Well, it's now afternoon, which means the early Monday morning December 3rd embargoes have been lifted, so I can now talk about what HP shared last Friday regarding today's announcements. Basically, what I received was a series of press releases as well as a link to their updated web site providing information about today's announcements.

HP has enhanced the 3PAR, aka P10000, with new models, including entry-level systems as well as models for higher-performance enterprise needs. This should also raise the question for many longtime EVA (excuse me, P6000) customers: have they hit the end of the line? For scale-out storage, HP has the StoreAll solutions (think of the products formerly marketed as certain X9000 models based on Ibrix) with enhancements for analytics, bulk storage, and various types of big data. In addition, HP has enhanced its backup, recovery, and dedupe products, including integration with Autonomy (here and here), along with capacity-on-demand services.

    New 3PAR (P10000 models)

    New StoreAll storage system

On the surface, and from what I have been able to see so far, this looks like a good set of incremental enhancements from HP. There is not much else to say until I can get some time to dig deeper into the details; however, check out Calvin Zito (aka @hpstorageguy), the HP storage blogger, who should have more information from HP.

    Ok, nuff said (for now).

    Cheers gs
