VMware buys Virsto, is it about storage hypervisors?

StorageIO Industry trends and perspectives image

Yesterday VMware announced that it is acquiring the IO performance optimization and acceleration software vendor Virsto for an undisclosed amount.

Some may know Virsto due to their latching onto and jumping on the storage hypervisor bandwagon as part of storage virtualization and virtual storage. On the other hand, some may know Virsto for their software that plugs into server virtualization hypervisors such as VMware and Microsoft Hyper-V. Then there are all of those who do not yet know of Virsto or their solutions and need to learn about them.

Unlike virtual storage arrays (VSAs), virtual storage appliances, or storage virtualization software that aggregates storage, the Virsto software addresses the IO performance aggravation caused by aggregation.

Keep in mind that the best IO is the IO that you do not have to do. The second best IO is the one that has the least impact and that is cost-effective. A common approach, or best practice preached by some vendors, for server virtualization and virtual desktop infrastructure (VDI) environments that run into IO bottlenecks is to throw more SSD or HDD hardware at the problem.

server virtualization aggregation causing aggravation

Turns out that the problem with virtual machines (VMs) is not just aggregation (consolidation) causing aggravation; it is also the mess of mixed applications and IO profiles. That is where IO optimization and acceleration tools that plug into applications, file systems, operating systems, hypervisors or storage appliances come into play.

In the case of Virsto (read more about their solution here), their technology plugs into the hypervisor (e.g. VMware vSphere/ESX or Hyper-V) to group and optimize IO operations.

By using SSD as a persistent cache, tools such as Virsto can help make better use of underlying storage systems including HDD and SSD, while also removing the aggravation that results from aggregation.

What will be interesting to watch is whether VMware continues to support other hypervisors such as Microsoft Hyper-V or closes the technology to VMware only.

It will also be interesting to see how VMware and their parent EMC can leverage Virsto technology to complement virtual SANs as well as VSAs and underlying hardware, from VFCache to storage arrays with SSDs and SSD appliances, as opposed to competing with them.

With the Virsto technology now part of VMware, hopefully less time will be spent talking about storage hypervisors and more on server IO optimization and enablement, creating broader awareness for the technology.

Congratulations to VMware (and EMC) along with Virsto.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Cloud, virtualization, Storage I/O trends for 2013 and beyond

StorageIO Industry trends and perspectives image

It is still early in 2013, so I can make some cloud, virtualization, storage and IO related predictions, or more aptly, talk about some trends in addition to those I made in late 2012, looking forward and back. Common overriding themes will continue to include convergence (people and technology), valueware, and clouds (public, private, hybrid and community) among others.

cloud virtualization storage I/O data center image

Certainly, solid state drives (SSDs) will remain popular, both in terms of industry adoption and customer deployment. Big-data (and little-data) management tools and purpose-built storage systems or solutions will also continue to be popular. On the cloud storage front, there are many options available for various use cases. Watch for more emphasis on service-level agreements (SLA), service-level objectives (SLO), security, pricing transparency, and tiers of service.

storage I/O rto rpo dcim image

Cloud and object storage will continue to gain in awareness, functionality, and options from various providers in terms of products, solutions, and services. There will be a mix of large-scale solutions and smaller ones, with a mix of open-source and proprietary pieces. Some of these will be for archiving, some for backup or data protection. Others will be for big-data, high-performance computing, or cloud on a local or wide area basis, while still others will be for general file sharing.

Ceph object storage architecture example

Along with cloud and object storage, watch for more options for how those products or services can be accessed: traditional NAS protocols (NFS, CIFS, HDFS and others), block access such as iSCSI, and object APIs including Amazon S3, REST, HTTP, JSON, XML, iOS and CDMI, along with programmatic bindings.
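To make the difference between those access methods concrete, below is a minimal hedged sketch from a client shell contrasting NAS and object access. The host name, export, bucket and object names are made up for illustration, and a real S3 request would normally also need authentication headers or signed URLs.

# NAS (NFS) access: mount the export, then use normal file tools
mount -t nfs filer.example.com:/export/data /mnt/data
ls /mnt/data

# Object (S3 style REST) access: retrieve an object over HTTP
# (an unauthenticated public object is assumed here for simplicity)
curl -s https://s3.amazonaws.com/example-bucket/report.txt -o report.txt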

Data protection modernization, including backup/restore, high-availability, business continuity, disaster recovery, archiving, and related technologies for cloud, virtual, and traditional environments will remain popular themes.

cloud and virtual data center image

Expect more Fibre Channel over Ethernet for networking with your servers and storage, PCIe Gen 3 to move data in and out of servers, and Serial-attached SCSI (SAS) as a means of attaching storage to servers or as the back-end storage for larger storage systems and appliances. For those who like to look out over the horizon, keep an eye and ear open for more discussion around PCIe Gen 3 deployment and Gen 4 definitions, not to mention DDR4 and NAND flash moving closer to the processors.

With VMware buying Virsto, that should keep software defined marketing (SDM), storage hypervisors, storage virtualization, virtual storage and virtual storage arrays (VSAs) active topic themes. Let us also keep in mind data footprint reduction (DFR) for storage space capacity optimization, including archiving, backup and data protection modernization, compression, consolidation, dedupe and data management.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Cloud conversations: Public, Private, Hybrid and Community Clouds? (Part II)

StorageIO Industry trends and perspectives image

This is the second of a two-part series; read part I here.

Common community cloud conversation questions include among others:

Who defines the standards for community clouds?
The members or participants, or whoever they hire or get to volunteer to do it.

Who pays for the community cloud?
The members or participants do; think of a co-op or other resource-sharing consortium with multi-tenant (shared) capabilities to isolate and keep members, along with what they are doing, separate.

cloud image

Who are community clouds for, when to use them?
If you cannot justify a private cloud for yourself, or if you need more resiliency than what can be provided by your site and you know of a peer, partner, member or other party with common needs, a community cloud could be a fit. Another variation is when you are in an industry, agency or district where pooling resources while operating separately has advantages or is already being done. These range from medical and healthcare to education, along with various small medium businesses (SMBs) that do not want to or cannot use a public facility for various reasons.

What technology is needed for building a community cloud?
Similar to deploying a public or private cloud, you will need various hard products including servers, storage and networking, management software tools for provisioning, orchestration, show back or charge back, multi-tenancy, security and authentication, and data protection (backup, BC, DR, HA), along with various middleware and applications.

Storage I/O cloud building block image

What are community clouds used for?
Almost anything, granted there are limits and boundaries based on tools, technologies, security and access controls among other constraints. Applications can range from big-data to little-data and most if not all points in between. On the other hand, if they are not safe or secure enough for your needs, then use a private cloud or whatever it is that you are currently using.

What about community cloud security, privacy and compliance regulations?
Those topics are reasons why like-minded or affected groups might be able to leverage a community cloud. By being like-minded or affected groups, labs, schools, businesses, entities, agencies, districts or other organizations that are under common mandates for security, compliance, privacy or other regulations can work together, yet keep their interests separate. Which tools or techniques are used for achieving those goals and objectives will depend on those who offer services to those entities.

data centers, information factories and clouds

Where can you get a community cloud?
Look around using Google or your favorite search tool; also watch the comments section to see how long it takes someone to jump in to say how he or she can help. Also talk with solution providers, business partners and VARs. Note that they may not know the term or phrase per se, so here is what to tell them: say that you would like to deploy a private cloud at some place that will then be used in a multi-tenant way to safely and securely support different members of your consortium.

For those who have been around long enough, you can also just tell them that you want to do something like the co-op or consortium time-sharing type systems from past generations, and they may know what you are looking for. If, however, they look at you with a blank deer-in-the-headlights stare, eyes glazed over, just tell them it is new, leading-edge, software defined and revolutionary (add some superlatives if you feel inclined) and then they might get excited. If they still do not know what to do or how to help you, have them get in touch with me and I will explain it to them, or I will put you in touch with those who can help.

data centers, information factories and clouds

Where do you put a community cloud?
You could deploy them in your own facility, at other members’ locations, or both for resiliency. You could also use a safe, secure co-lo facility already being used for other purposes.

Do community clouds have organizers?
Perhaps, however they are probably more along the lines of a coordinator, administrator, manager or controller as opposed to a community organizer per se. In other words, do not confuse a community cloud with a cloud community organized, aligned and activated for some particular cause. On the other hand, maybe there is a value prop for some cloud activist to get organized and take up the cause for community clouds in your area of interest ;).

data centers, information factories and clouds

Are community clouds more of a concept vs. a product?
If you have figured out that a community or peer cloud is nothing more than a different way of deploying, using and managing a combination of private, public and hybrid clouds with a marketing name put on them, congratulations, you are now thinking outside of the box, or at least outside of the usual cloud conversations.

What about public cloud services for selected audiences such as Amazon’s GovCloud? On one hand, I guess you could call or think of that as a semi-private public cloud, or a semi-public private cloud, or if you like superlatives, an uber gallistic hybrid community cloud.

How you go about building, deploying and managing your community, co-op, consortium, agency, district or peer cloud will be how you leverage various hard and software products. The results will be your return on innovation (the new ROI) for addressing various needs and concerns, also known as valueware. Those results should be able to address or help close gaps and leverage clouds in general as a resource vs. simply as a tool, technology or technique.

Ok, nuff said…

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Cloud conversations: Public, Private, Hybrid what about Community Clouds?

StorageIO Industry trends and perspectives image

Have you heard of community clouds?

Cloud computing, including cloud storage and services, offered as products, solutions or services, provides different functionality and enables benefits for various types of organizations, entities or individuals.

various types of clouds image

Public clouds, private clouds and hybrids leveraging public and private continue to evolve in technology, reliability, security and functionality along with the awareness around them.

IT professionals tell me they are interested in clouds; however, they have concerns.

Cloud concerns range from security, compliance, industry or government regulations, privacy and budgets among others with private, public or hybrid clouds. Peer, cooperative (co-op), consortium or community clouds can be a solution for those whose needs traditional public, private, hybrid, AaaS, SaaS, PaaS or IaaS offerings do not meet.

various types, layers and services of clouds image

From a technology standpoint, there should not have to be much if any difference between a community cloud and a public, private or hybrid one. Instead, community clouds are more about thinking outside of the box, or outside of common cloud thinking per se. This means thinking beyond what others are talking about or doing, and looking at how cloud products, services and practices can be used in different ways to meet your concerns or requirements.

cloud image

What’s your take on clouds? Click here to cast your vote and see results.

Read more about community clouds including common questions in part II here.

Ok, nuff said (for now)…

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Tape is still alive, or at least in conversations and discussions

StorageIO Industry trends and perspectives image

Depending on whom you talk to or ask, you will get different views and opinions, some of them stronger than others, on whether magnetic tape is dead or alive as a data storage medium. However, one aspect of tape that is very much alive is the discussion among those for it, against it, or who simply see it as one of many data storage mediums and technologies whose role is changing.

Here is a link to an ongoing discussion over in one of the LinkedIn group forums (Backup & Recovery Professionals) titled About Tape and disk drives. Rest assured, there is plenty of FUD and hype on both sides of the tape is dead (or alive) arguments, not very different from the disk is dead vs. SSD or cloud arguments. After all, not everything is the same in data centers, clouds and information factories.

Fwiw, I removed tape from my environment about 8 years ago, or I should say directly, as some of my cloud providers may in fact be using tape in various ways that I do not see; nor do I care one way or the other as long as my data is safe, secure, protected and SLAs are met. Likewise, I consult and advise for organizations where tape still exists yet its role is changing, same with those using disk and cloud.

Storage I/O data center image

I am not ready to adopt the singular view that tape is dead yet, as I know too many environments that are still using it; however, I agree that its role is changing, thus I am not part of the tape cheerleading camp.

On the other hand, I am a fan of using disk based data protection along with cloud in new and creative ways (including for my own use) as part of modernizing data protection. Although I see disk as having a very bright and important future beyond what it is being used for now, I am not ready, at least today, to join the chants of tape is dead either.

StorageIO Industry trends and perspectives image

Does that mean I can’t decide or don’t want to pick a side? NO

It means that I do not have to, nor should anyone have to, choose a side. Instead, look at your options: what are you trying to do, and how can you leverage different things, techniques and tools to maximize your return on innovation? If that means tape is being phased out of your organization, good for you. If that means there is a new or different role for tape in your organization co-existing with disk, then good for you.

If somebody tells you that tape sucks and that you are dumb and stupid for using it without giving any informed basis for those comments, then call them dumb and stupid, requesting they come back when they can learn more about your environment, needs and requirements, ready to have an informed discussion on how to move forward.

Likewise, if you can make an informed value proposition on why and how to migrate to new ways of modernizing data protection without having to stoop to the tape is dead argument, or can cite some research or whatever, good for you; start telling others about it.

StorageIO Industry trends and perspectives image

Otoh, if you need to use FUD and hype on why tape is dead, why it sucks or is bad, at least come up with some new and relevant facts, third-party research, arguments or value propositions.

You can read more about tape and its changing role at tapeisalive.com or Tapesummit.com.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

In the data center or information factory, not everything is the same

StorageIO Industry trends and perspectives image

Sometimes what should be understood, what is common sense, or what you think everybody should know needs to be stated. After all, there could be somebody who does not know what some assume to be common sense or what others know for various reasons. At times, there is simply the need to restate or be reminded of what should be known.

Storage I/O data center image

Consequently, in the data center or information factory, whether traditional, virtual, converged, private, hybrid or public cloud, everything is not the same. When I say not everything is the same, I mean that there are different applications with various service level objectives (SLOs) and service level agreements (SLAs), based on different characteristics from performance, availability, reliability, responsiveness, cost, security and privacy among others. Likewise, there are different sizes and types of organizations with various requirements, from enterprise to SMB, ROBO and SOHO, business or government, education or research.

Various levels of HA, BC and DR

There are also different threat risks for various applications or information services within an organization, or across different industry sectors. Thus there are various needs for meeting availability SLAs, recovery time objectives (RTOs) and recovery point objectives (RPOs) for data protection, ranging from backup/restore to high-availability (HA), business continuance (BC), disaster recovery (DR) and archiving. Let us not forget about logical and physical security of information, assets, people, processes and intellectual property.

Storage IO RTO and RPO image

Some data centers or information factories are compute intensive while others are data centric, some are IO or activity intensive with a mix of compute and storage. On the other hand, some data centers such as a communications hub may be network centric with very little data sticking or being stored.

SLA and SLO image

Even within a data center or information factory, various applications will have different profiles and protection requirements for big data and little data. There can also be a mix of old legacy applications and new systems developed in-house, purchased, open-source based or accessed as a service. The servers and storage may be software defined (a new buzzword that has already jumped the shark), virtualized or operated in a private, hybrid or community cloud, if not using a public service.

Here are some related posts tied to everything is not the same:
Optimize Data Storage for Performance and Capacity
Is SSD only for performance?
Cloud conversations: Gaining cloud confidence from insights into AWS outages
Data Center Infrastructure Management (DCIM) and IRM
Saving Money with Green IT: Time To Invest In Information Factories
Everything Is Not Equal in the Datacenter, Part 1
Everything Is Not Equal in the Datacenter, Part 2
Everything Is Not Equal in the Datacenter, Part 3

Storage I/O data center image

Thus, not all things are the same in the data center, or information factories, both those under traditional management paradigms, as well as those supporting public, private, hybrid or community clouds.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

January 2013 Server and StorageIO Update Newsletter

StorageIO News Letter Image
January 2013 Newsletter

Welcome to the January 2013 edition of the StorageIO Update newsletter, including a new format and added content.

You can get access to this newsletter via various social media venues (some are shown below) in addition to StorageIO web sites and subscriptions.

Click on the following links to view the January 2013 edition as an HTML (sent via email) version or as a PDF version.

Visit the newsletter page to view previous editions of the StorageIO Update.

You can subscribe to the newsletter by clicking here.

Enjoy this edition of the StorageIO Update newsletter, and let me know your comments and feedback.

Nuff said for now

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Putting some VMware ESX storage tips together: (Part II)

In the first part of this post, I showed how to use a tip from Duncan Epping to fake VMware into thinking that an HHDD (Hybrid Hard Disk Drive) is an SSD.

Now let's look at using a tip from Dave Warburton to turn an internal SATA HDD into an RDM for one of my Windows-based VMs.

My challenge was that I had a VM guest that I wanted to have a Raw Device Mapping (RDM) HDD accessible to it, except the device was an internal SATA device rather than one attached to an FC or iSCSI SAN (such as my Iomega IX4 that I bought from Amazon.com). Given the standard tools and some of the material available, it would have been easy to give up and quit.

Image of internal RDM with VMware
Image of internal SATA drive being added as a RDM with vClient

Thanks to Dave’s great post that I found, I was able to create an RDM of an internal SATA drive and present it to the existing VM running Windows 7 Ultimate, and it is now happy, as am I.

Pay close attention to make sure that you get the correct device name for the steps in Dave’s post (link is here).

From the ESX command line, I found the device I wanted to use, which had the following device name:

t10.ATA_____ST1500LM0032D9YH148_____Z110S6M5
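In case it helps, here is a hedged sketch of how one can hunt for that device name from the ESX shell; the grep patterns match my Seagate drive and would differ for yours.

# List the raw device nodes; internal SATA devices typically show up
# with a t10.ATA prefix built from the model and serial number
ls /vmfs/devices/disks/ | grep "t10.ATA"

# Cross-check using the core device list for more detail
esxcli storage core device list | grep -i "ST1500LM003"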

Then I used the following ESX shell command per Dave’s tip to create an RDM of an internal SATA HDD:

vmkfstools -z /vmfs/devices/disks/t10.ATA_____ST1500LM0032D9YH148_____Z110S6M5 /vmfs/volumes/dat1/rdm_ST1500L.vmdk
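As a quick sanity check (a minimal sketch; the dat1 datastore and RDM file name are from my environment, so adjust for yours), list the datastore to confirm the mapping descriptor was created:

# The RDM appears as a small descriptor .vmdk plus its pointer file
ls -lh /vmfs/volumes/dat1/rdm_ST1500L*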

Then the next steps were to update an existing VM using vSphere client to use the newly created RDM.

Hint: pay very close attention to your device naming, along with what you name the RDM and where you put it. Also, I recommend trying or practicing on a spare or scratch device first in case something gets messed up. I practiced on an HDD used for moving files around; after doing the steps in Dave’s post, I added the RDM to an existing VM, started the VM and accessed the HDD to verify all was fine (it was). After shutting down the VM, I removed the RDM from it as well as from ESX, and then created the real RDM.

As per Dave’s tip, vSphere Client did not recognize the RDM per se; however, after telling it to look for existing virtual disks and browsing the datastores, lo and behold, the RDM I was looking for was there. The following shows an example of using vSphere to add the new RDM to one of my existing VMs.

In case you are wondering why I wanted to make a non-SAN HDD into an RDM vs. doing something else: simple, the HDD in question is a 1.5TB HDD with backups on it that I want to use as is. The HDD is also BitLocker protected, and I want the flexibility to remove the device if I have to and access it via a non-VM based Windows system.


Image of my VMware server with internal RDM and other items

Could I have accomplished the same thing using a USB attached device accessible to the VM?

Yes, and in fact that is how I do periodic updates to removable media (HDD using Seagate Goflex drives) where I am not as concerned about performance.

While I back up off-site to Rackspace and AWS clouds, I also have a local disk based backup, along with creating periodic full Gold or master off-site copies. The off-site copies are made to removable Seagate Goflex SATA drives using a USB to SATA Goflex cable. I also have the Goflex eSATA to SATA cable that comes in handy to quickly attach a SATA device to anything with an eSATA port including my Lenovo X1.

As a precaution, I used a different HDD containing data I was not concerned about losing if something went wrong to test the process before doing it with the drive containing backup data. Also as a precaution, the data on the backup drive is backed up to removable media and to my cloud provider.

Thanks again to both Dave and Duncan for their great tips; I hope that you find these and other material on their sites as useful as I do.

Meanwhile, time to get some other things done, as well as continue looking for and finding good workarounds and tricks to use in my various projects; drop me a note if you see something interesting.

Ok, nuff said for now.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Putting some VMware ESX storage tips together (Part I)

Have you spent time searching the VMware documentation, on-line forums, venues and books to figure out how to make a local dedicated direct attached storage (DAS) type device (e.g. SATA or SAS) into a Raw Device Mapping (RDM)? Part two of this post looks at how to make an RDM using an internal SATA HDD.

Or how about making a Hybrid Hard Disk Drive (HHDD), which is faster than a regular Hard Disk Drive (HDD) on reads yet has more capacity and less cost than a Solid State Device (SSD), actually appear to VMware as an SSD?

Recently I had these and some other questions and spent some time looking around, thus this post highlights some great information I have found for addressing the above VMware challenges and some others.

VMware vExpert image

The SSD solution is via a post I found on fellow VMware vExpert Duncan Epping’s yellow-brick site. If you are into VMware or server virtualization in general, and in particular a fan of high availability, general or virtual specific, add Duncan’s site to your reading list. Duncan also has some great books to add to your bookshelves, including VMware vSphere 5.1 Clustering Deepdive (Volume 1) and VMware vSphere 5 Clustering Technical Deepdive, which you can find at Amazon.com.

VMware vSphere 5 Clustering Technical Deepdive book image

Duncan’s post shows how to fake VMware into thinking that an HDD is an SSD for testing or other purposes. Since I have some Seagate Momentus XT HHDDs that combine the capacity (and cost) of a traditional HDD with read performance closer to an SSD (without the cost or capacity penalty), I was interested in trying Duncan’s tip (here is a link to his tip). Essentially, Duncan’s tip shows how to use esxcli storage nmp satp and esxcli storage core commands to make a non-SSD look like an SSD.

The commands that were used from the VMware shell per Duncan’s tip:

esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device mpx.vmhba0:C0:T1:L0 --option "enable_local enable_ssd"
esxcli storage core claiming reclaim -d mpx.vmhba0:C0:T1:L0
esxcli storage core device list --device=mpx.vmhba0:C0:T1:L0
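If the commands took effect, the device list should now report the drive as solid state. A quick check (hedged, assuming the ESXi 5.x output format with its Is SSD field) is:

# Look for "Is SSD: true" in the device details after the reclaim
esxcli storage core device list --device=mpx.vmhba0:C0:T1:L0 | grep -i "Is SSD"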

After all, if the HHDD is actually doing some of the work to boost performance and thus fool the OS or hypervisor into thinking it is faster than an HDD, why not tell the OS or hypervisor, in this case VMware ESX, that it is an SSD. So far I have not seen, nor do I expect to notice, anything different in terms of performance, as that improvement already occurred going from a 7,200 RPM (7.2K) HDD to the HHDD.

If you know how to tell what type of HDD or SSD a device is by reading its sense code and model number information, you will recognize the circled device as a Seagate Momentus XT HHDD. This particular model is a Seagate Momentus XT II 750GB with 8GB of SLC NAND flash memory integrated inside the 2.5-inch drive.

Normally the Seagate HHDDs appear to the host operating system, or whatever they are attached to, as a Momentus 7200 RPM SATA disk drive. Since no special device drivers, controllers, adapters or anything else are needed, the Momentus XT type HHDDs are essentially plug and play. After a bit of time they start learning and caching things to boost read performance (read more about boosting read performance, including Windows boot testing, here).

Image of VMware vSphere vClient storage devices
Screen shot showing Seagate Momentus XT appearing as a SSD

Note that the HHDD (a Seagate Momentus XT II) is a 750GB 2.5” SATA drive that boosts read performance with the current firmware. Seagate has hinted that there could be a future firmware version to enable write caching or optimization; however, I have been waiting for a year.

Disclosure: Seagate gave me an evaluation copy of my first HHDD a couple of years ago and I then went on to buy several more from Amazon.com. I have not had a chance to try any Western Digital (WD) HHDDs yet, however I do have some of their HDDs. Perhaps I will hear something from them sometime in the future.

For those who are SSD fans or actually have them, yes, I know SSDs are faster all around, and that is why I have some, including in my Lenovo X1. Thus, for write-intensive workloads, go with a full SSD today if you can afford one, as I have with my Lenovo X1, which enables me to save large files faster (less time waiting). However, if you want the best of both worlds for a lab or other system that is doing more reads than writes, and need as much capacity as possible without breaking the budget, check out the HHDDs.

Thanks for the great tip and information, Duncan. In part II of this post, read how to make an RDM using an internal SATA HDD.

 

Ok, nuff said (for now)…

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Thanks for viewing StorageIO content and top 2012 viewed posts

StorageIO industry trends cloud, virtualization and big data

2012 was a busy year (it was our 7th year in business) with plenty of activity on StorageIOblog.com as well as on the various syndicate and other sites that pick up our content feed (https://storageioblog.com/RSSfull.xml).

Excluding traditional media venues, columns, articles, webcasts and web site visits (StorageIO.com and StorageIO.TV), StorageIO generated content including posts and podcasts has reached over 50,000 views per month (and growing) across StorageIOblog.com and our partner or syndicated sites. Including both public and private, there were about four dozen in-person events and activities, not counting attending conferences or vendor briefing sessions, along with plenty of industry commentary. On the Twitter front there was plenty of activity as well, closing in on 7,000 followers.

Thank you to everyone who has visited the sites where you will find StorageIO generated content, along with industry trends and perspectives comments, articles, tips, webinars, live in-person events and other activities.

In terms of what was popular on the StorageIOblog.com site, here are the top 20 viewed posts in alphabetical order.

Amazon cloud storage options enhanced with Glacier
Announcing SAS SANs for Dummies book, LSI edition
Are large storage arrays dead at the hands of SSD?
AWS (Amazon) storage gateway, first, second and third impressions
EMC VFCache respinning SSD and intelligent caching
Hard product vs. soft product
How much SSD do you need vs. want?
Oracle, Xsigo, VMware, Nicira, SDN and IOV: IO IO its off to work they go
Is SSD dead? No, however some vendors might be
IT and storage economics 101, supply and demand
More storage and IO metrics that matter
NAD recommends Oracle discontinue certain Exadata performance claims
New Seagate Momentus XT Hybrid drive (SSD and HDD)
PureSystems, something old, something new, something from big blue
Researchers and marketers dont agree on future of nand flash SSD
Should Everything Be Virtualized?
SSD, flash and DRAM, DejaVu or something new?
What is the best kind of IO? The one you do not have to do
Why FC and FCoE vendors get beat up over bandwidth?
Why SSD based arrays and storage appliances can be a good idea

Moving beyond the top twenty read posts on the StorageIOblog.com site, the list quickly expands to include more popular posts around clouds, virtualization and data protection modernization (backup/restore, HA, BC, DR, archiving), general IT/ICT industry trends and related themes.

I would like to thank the current StorageIOblog.com site sponsors Solarwinds (management tools including response time monitoring for physical and virtual servers) and Veeam (VMware and Hyper-V virtual server backup and data protection management tools) for their support.

Thanks again to everyone for reading and following these and other posts as well as for your continued support, watch for more content on the above and other related and new topics or themes throughout 2013.

Btw, if you are into Facebook, you can give StorageIO a like at facebook.com/storageio (thanks in advance) along with viewing our newsletter here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Summary, EMC VMAX 10K, high-end storage systems stayin alive

StorageIO industry trends cloud, virtualization and big data

This is a follow-up companion post to the larger industry trends and perspectives series from earlier today (Part I, Part II and Part III) pertaining to today’s VMAX 10K enhancement and other announcements by EMC, and the industry myth of whether large storage arrays or systems are dead.

The enhanced VMAX 10K scales from a couple of dozen up to 1,560 HDDs (or a mix of HDDs and SSDs). There can be a mix of 2.5 inch and 3.5 inch devices in different drive enclosures (DAE): 25 SAS based 2.5 inch drives (HDD or SSD) in a 2U enclosure (see figure with cover panels removed), or 15 3.5 inch drives (HDD or SSD) in a 3U enclosure. As mentioned, there can be all 2.5 inch (including for vault drives) for up to 1,200 devices, all 3.5 inch drives for up to 960 devices, or a mix of 2.5 inch (2U DAE) and 3.5 inch (3U DAE) for a total of 1,560 drives.

Image of EMC 2U and 3U DAE for VMAX 10K via EMC
Image courtesy EMC

Note carefully in the figure (courtesy of EMC) that the 2U 2.5 inch DAE and 3U 3.5 inch DAE, along with the VMAX 10K, are actually mounted in a third-party cabinet or rack, which is part of today’s announcement.

Also note that the DAEs are still EMC; however, as part of today’s announcement, certain third-party cabinets or enclosures, such as might be found in a collocation (colo) or other data center environment, can be used instead of EMC cabinets. The VMAX 10K can, however, like the VMAX 20K and 40K, support virtualizing external third-party storage, similar to what has been available from HDS (VSP/USP) and HP branded Hitachi equivalent storage, or using NetApp V-Series or IBM V7000 in a similar way.

As mentioned in one of the other posts, there are various software functionality bundles available. Note that SRDF is a separate license from the bundles, giving customers options including RecoverPoint.

Check out the three industry trends and perspectives posts here, here and here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

EMC VMAX 10K, looks like high-end storage systems are still alive (part III)

StorageIO industry trends cloud, virtualization and big data

This is the third in a multi-part series of posts (read the first post here and the second post here) looking at what else EMC announced today in addition to an enhanced VMAX 10K, and dispelling the myth that large storage arrays are dead (at least for now).

In addition to the VMAX 10K specific updates, EMC also announced the release of a new version of its Enginuity storage software (firmware, storage operating system). Enginuity is supported across all VMAX platforms and features the following:

  • Replication enhancements include TimeFinder clone refresh, restore and four-site SRDF for the VMAX 10K, along with thick or thin support. This capability enables functionality across VMAX 10K, 40K or 20K using synchronous or asynchronous modes, and extends earlier three-site support to four-site and mixed modes. Note that the larger VMAX systems already had the extended replication feature support; the VMAX 10K is now on par with them. Note that the VMAX can be enhanced with VPLEX in front of storage systems (local or wide area, in-region HA and out-of-region DR) and RecoverPoint behind the systems supporting bi-synchronous (two-way), synchronous and asynchronous data protection (CDP, replication, snapshots).
  • Unisphere for VMAX 1.5 manages DMX, along with VMware VAAI UNMAP and space reclamation, block zero and hardware clone enhancements, IPv6, Microsoft Windows Server 2012 support and VFCache 1.5.
  • Support for mix of 2.5 inch and 3.5 inch DAEs (disk array enclosures) along with new SAS drive support (high-performance and high-capacity, and various flash-based SSD or EFD).
  • The addition of a fourth dynamic tier within FAST for supporting third-party virtualized storage, along with compression of inactive, cold or stale data (manual or automatic) with a 2:1 data footprint reduction (DFR) ratio. Note that EMC was one of the early vendors to put compression into its storage systems on a block LUN basis in the CLARiiON (now VNX), along with NetApp and IBM (via its Storwize acquisition). The new fourth tier also means that third-party storage does not have to be the lowest tier in terms of performance or functionality.
  • Federated Tiered Storage (FTS) is now available on all EMC block storage systems including those with third-party storage attached in virtualization mode (e.g. VMAX). In addition to supporting tiering across its own products, and those of other vendors that have been virtualized when attached to a VMAX, ANSI T10 Data Integrity Field (DIF) is also supported. Read more about T10 DIF here, and here.
  • Front-end performance enhancements with host I/O limits (Quality of Service or QoS) for multi-tenant and cloud environments to balance or prioritize IO across ports and users. This feature can balance based on thresholds for IOPS, bandwidth or both from the VMAX. Note that this feature is independent of any operating system based tool, utility, pathing driver or feature such as VMware DRS and Storage I/O Control. Storage groups are created and mapped to specific host ports on the VMAX, with the QoS performance thresholds applied to meet specific service level requirements or objectives.

For discussion (or entertainment) purposes, how about the question of whether Enginuity qualifies as or can be considered a storage hypervisor (or storage virtualization or virtual storage)? After all, the VMAX is now capable of having third-party storage from other vendors attached to it, something that HDS has done for many years now. For those who feel a storage hypervisor, virtual storage or storage virtualization requires software running on Intel or other commodity based processors, guess what the VMAX uses for CPU processors (granted, you cannot simply download Enginuity software and run it on a Dell, HP, IBM, Oracle or SuperMicro server).

I am guessing some of EMC’s competitors and their surrogates, or others who like to play the storage hypervisor card game, will be quick to tell you it is not, based on various reasons or product comparisons; however, you be the judge.

 

Back to the question from part one in this series: are traditional high-end storage arrays dead or dying?

IMHO as mentioned not yet.

Granted, like other technologies that have been declared dead or dying yet are still in use (technology zombies), they continue to be enhanced, find new customers, or see existing customers using them in new ways; their roles are evolving, thus they are still alive.

For some environments, as has been the case over the past decade or so, there will be continued migration from large legacy enterprise class storage systems to mid-range or modular storage arrays with a mix of SSDs and HDDs. Thus, watch out for having a death grip, not letting go of the past, while also being careful about flying blind into the future. Do not be scared; be ready, do your homework with clouds, virtualization and traditional physical resources.

Likewise, there will be continued migration for some from traditional mid-range class storage arrays to all flash-based appliances. Yet others will continue to leverage all of the above in different roles, aligned to where their specific features best serve the applications and needs of an organization.

In the case of high-end storage systems such as the EMC VMAX (formerly known as DMX, and Symmetrix before that) based on its Enginuity software, the hardware platforms will continue to evolve, as will the software functionality. This means that these systems will evolve to handle more workloads, as well as move into new environments, from service providers to mid-range organizations where such systems were previously out of reach.

Smaller environments have grown larger, as have their needs for storage systems, while higher end solutions have scaled down to meet needs in different markets. What this means is a convergence where smaller environments have bigger data storage needs and can afford the capabilities of scaled down or right-sized storage systems such as the VMAX 10K.

Thus, while some of the high-end systems may fade away faster than others, those that continue to evolve and are able to move into different adjacent markets or usage scenarios will be around for some time, at least in some environments.

Avoid confusing what is new and cool falling under industry adoption vs. what is productive and practical for customer deployment. Systems like the VMAX 10K are not for all environments or applications; however, for those who are open to exploring alternative solutions and approaches, it could open new opportunities.

If there is a high-end storage system platform (e.g. Enginuity) that continues to evolve and re-invent itself in terms of moving into or finding new uses and markets, the EMC VMAX would be at or near the top of such a list. For the other vendors of high-end storage systems that are also evolving, you can have an Atta boy or Atta girl as well, to make you feel better, loved and not left out or off of such a list. ;)

Disclosure: EMC is not a StorageIO client; however, they have been in the past, directly and via acquisitions that they have done. I am, however, a customer of EMC via my Iomega IX4 NAS (I never did get the IX2 that I supposedly won at EMCworld ;) ) that I bought on Amazon.com, and indirectly via VMware products that I have. Oh, and they did send me a copy of the new book The Human Face of Big Data (read more here).

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

EMC VMAX 10K, looks like high-end storage systems are still alive (part II)

StorageIO industry trends cloud, virtualization and big data

This is the second in a multi-part series of posts (read the first post here) looking at whether large enterprise and legacy storage systems are dead, along with what today’s EMC VMAX 10K updates mean.

Thus, on January 14, 2013, it is time for a new EMC Virtual Matrix (VMAX) model 10,000 (10K) storage system. EMC has been promoting its January 14 live virtual event for a while now. The significance of January is that it is when (along with May or June) many new systems, solutions or upgrades are announced on a staggered basis.

Historically speaking, January and February, along with May and June, are when you have seen many of the larger announcements from EMC being made. Case in point, back in February of 2012 VFCache was released; then in May 2012 in Las Vegas at EMCworld there were 42 announcements made, and others later in the year.

Go back to January 2011 and there was the record-setting event in New York City, complete with 26 people being compressed, deduped, single-instanced, optimized, stacked and tiered into a Mini Cooper automobile (read and view more here). Click here to see images of the car stuffing or click here to watch a video.

Now back to the VMAX 10K enhancements

As an example of a company, product family and specific storage system model still being alive, consider the VMAX 10K. Although this announcement by EMC is VMAX 10K centric, there is also a new version of the Enginuity software (firmware, storage operating system, valueware) that runs across all VMAX based systems, including the VMAX 20K and VMAX 40K. Read here, here, here and here to learn more about VMAX and Enginuity systems in general.

Some main themes of this announcement include tier-1 reliability, availability and serviceability (RAS) storage system functionality at tier-2 pricing for traditional, virtual and cloud data centers.

Some other themes of this announcement by EMC:

  • Flexible, scalable and resilient with performance to meet dynamic needs
  • Support private, public and hybrid cloud along with federated storage models
  • Simplified decision-making, acquisition, installation and ongoing management
  • Enable traditional, virtual and cloud workloads
  • Complement its siblings VMAX 40K, 20K and SP (Service Provider) models

Note that the VMAX SP is a model configured and optimized for easy self-service and private cloud, storage as a service (SaaS), IT as a Service (ITaaS) and public cloud service providers needing multi-tenant capabilities with service catalogs and associated tools.

So what is new with the VMAX 10K?

It is twice as fast (per EMC performance results) as the earlier VMAX 10K, leveraging faster 2.8GHz Intel Westmere processors vs. the earlier 2.5GHz Westmere processors. In addition to faster cores, there are more of them: from 4 to 6 on directors, and from 8 to 12 on VMAX 10K engines. The PCIe (Gen 2) IO busses remain unchanged, as does the RapidIO interconnect; RapidIO is used for connecting nodes and engines, while PCIe is used for adapter and device connectivity. Memory stays the same at up to 128GB of global DRAM cache, along with dual virtual matrix interfaces (how the nodes are connected). Note that there is no increase in the amount of DRAM based cache memory in this new VMAX 10K model.

This should prompt the question: for traditional cache-centric storage systems such as the VMAX, whose performance depends on cache, how dependent or effective are they now on the CPU and its associated L1/L2 caches? Also, how much has the Enginuity code under the covers been enhanced to leverage multiple cores and threads, thus shifting from being cache-memory dependent and processor hungry?

Also new with the updated VMAX 10K include:

  • Support for dense 2.5 inch drives, along with mixed 2.5 inch and 3.5 inch form factor devices, with a maximum of 1,560 HDDs. This means support for 2.5 inch 1TB 7,200 RPM SAS HDDs, along with fast SAS HDDs, and SLC/MLC and eMLC solid state devices (SSDs), also known as electronic flash devices (EFDs). Note that with higher density storage configurations, good disk enclosures become more important to counter or prevent the effects of drive vibration, something that leading vendors are paying attention to, and so should customers.
  • With the VMAX 10K, EMC is also adding support for certain third-party racks or cabinets to be used for mounting the product. This means being able to mount the VMAX main system and DAE components into selected cabinets or racks to meet specific customer, colo or other environment needs for increased flexibility.
  • For security, the VMAX 10K also supports Data at Rest Encryption (D@RE), which is implemented within the VMAX platform. All data is encrypted on every drive and every drive type (drive independent) within the VMAX platform to avoid performance impacts. AES 256 fixed block encryption with FIPS 140-2 validation (#1610) is used, with embedded or external key management including RSA Key Manager. Note that since the storage system based encryption is done within the VMAX platform or controller, not only is the encrypt/decrypt off-loaded from servers, it also means that any device, from SSD to HDD to third-party storage arrays, can be encrypted. This is in contrast to drive based approaches such as self encrypting devices (SEDs) or other full drive encryption approaches. With embedded key management, encryption keys are kept and managed within the VMAX system, while external mode leverages RSA key management as part of a broader security solution approach.
  • In terms of addressing ease of decision-making and acquisition, EMC has bundled the core Enginuity software suite (virtual provisioning, FTS and FLM, DCP (dynamic cache partitioning), host I/O limits, Optimizer/virtual LUN and an integrated RecoverPoint splitter). In addition, there are bundles for optimization (FAST VP, EMC Unisphere for VMAX with heat map and dashboards), availability (TimeFinder for VMAX 10K) and migration (Symmetrix migration suite, Open Replicator, Open Migrator, SRDF/DM, Federated Live Migration). Additional optional software includes RecoverPoint CDP, CRR and CLR, Replication Manager, PowerPath, SRDF/S, SRDF/A and SRDF/DM, Storage Configuration Advisor, Open Replicator with Dynamic Mobility and the ControlCenter/ProSphere package.

Who needs a VMAX 10K or where can it be used?

As the entry-level model of the VMAX family, certain organizations that are growing and looking for an alternative to traditional mid-range storage systems should be a primary opportunity. Assuming the VMAX 10K can sell at tier-2 prices with a focus on tier-1 reliability, feature functionality and simplification, while allowing channel partners to make some money, then EMC can have success with this product. The challenge, however, will be helping their direct and channel partner sales organizations avoid competing with their own products (e.g. high-end VNX) vs. those of others.

Consolidation of servers with virtualization, along with storage system consolidation to remove complexity in management and costs, should be another opportunity, given the ability to virtualize third-party storage. I would expect EMC and their channel partners to place the VMAX 10K, with its storage virtualization of third-party storage, as an alternative to the HDS VSP (aka USP/USPV) and the HP XP P9000 (Hitachi based) products, or for block storage needs, the NetApp V-Series among others. There could be some scenarios where the VMAX 10K could be positioned as an alternative to the IBM V7000 (SVC based) for virtualizing third-party storage, or for larger environments, some of the software based appliances where there are scaling-with-stability (performance, availability, capacity, ease of management, feature functionality) concerns.

Another area where the VMAX 10K could see action, which will fly in the face of some industry thinking, is deployment with new and growing managed service providers (MSPs), public clouds, and community clouds (private consortiums) looking for an alternative to open source based or traditional mid-range solutions. Otoh, I can't wait to hear somebody think outside of both the old and new boxes about how a VMAX 10K could be used beyond traditional applications or functionality. For example, fill it up with a few SSDs, then balance with 1TB 2.5 inch SAS HDDs and 3.5 inch 3TB (or larger when available) HDDs as an active archive target leveraging the built-in data compression.

How about if EMC were to support cloud optimized HDDs, such as the Seagate Constellation Cloud Storage (CS) HDDs that were announced late in 2012, as well as the newer enterprise class HDDs, for opening up new markets? Also keep in mind that some of the new 2.5 inch SAS 10,000 RPM (10K) HDDs have the same performance capabilities as traditional 3.5 inch 15,000 RPM (15K) drives in a smaller footprint, helping drive and support increased density of performance and capacity with improved energy effectiveness.

How about attaching a VMAX 10K with the right type of cost-effective (aligned to a given scenario) SSDs or HDDs or third-party storage to a cluster or grid of servers running OpenStack including Swift, CloudStack, Basho Riak CS, Cleversafe, Scality, Caringo, Ceph or even EMC's own Atmos (which supports external storage) for cloud storage or object based storage solutions? Granted, that would be thinking outside of both the current and new boxes, moving away from RAID based systems in favor of low-cost JBOD storage in servers; however, what the heck, let's think in pragmatic ways.

Will EMC be able to open new markets and opportunities by making the VMAX and its Enginuity software platform and functionality more accessible and affordable, leveraging the VMAX 10K as well as the VMAX SP? Time will tell; after all, I recall back in the mid to late 90s, and then again several times during the 2000s, similar questions or conversations, not to mention predictions of the demise of the large traditional storage systems.

Continue reading about what else EMC announced on January 14, 2013, in addition to the VMAX 10K updates, here in the next post in this series. Also check out Chuck's EMC blog to see what he has to say.

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

EMC VMAX 10K, looks like high-end storage systems are still alive

StorageIO industry trends cloud, virtualization and big data

This is the first in a multi-part series of posts looking at whether large enterprise and legacy storage systems are dead, along with what today's EMC VMAX 10K updates mean.

EMC has announced an upgrade, refresh or new version of their previously announced Virtual Matrix (VMAX) 10,000 (10K), part of the VMAX family of enterprise class storage systems formerly known as DMX (Direct Matrix) and Symmetrix. I will get back to more coverage of the VMAX 10K and other EMC enhancements in a few moments in parts two and three of this series.

Have you heard the industry myth about the demise or outright death of traditional storage systems? This has been particularly the case for high-end enterprise class systems, which, by the way, were first declared dead back in the mid-1990s at the hands of then-emerging mid-range storage systems.

Enterprise class storage systems include the EMC VMAX, Fujitsu Eternus DX8700, HDS VSP, and HP XP P9000, the latter based on the HDS high-end product (OEM from HDS parent Hitachi Ltd.). Note that some HPers or their fans might argue that the P10000 (formerly known as 3PAR), declared by some as tier 1.5, should also be on the list; I will leave that up to you to decide.

Let us not forget the IBM DS8000 series (whose predecessors were known as the ESS and, before that, the VSS); although some IBMers will tell you that XIV should also be on this list. High-end enterprise class storage systems such as those mentioned above are not alone in being declared dead at the hands of new all solid-state device (SSD) systems and their startup vendors, or mixed and hybrid-based solutions.

Some are even declaring the traditional mid-range storage systems, the very ones that were supposed to have killed off the enterprise systems a decade ago, dead at the hands of new SSD appliances or systems, storage hypervisors, and virtual storage arrays (VSA) (hmm, DejaVu?).

The mid-range storage systems include, among others, block (SAN and DAS) and file (NAS) systems from Data Direct Networks (DDN), Dell Compellent, EqualLogic and MD series (NetApp Engenio based), EMC VNX and Isilon, Fujitsu Eternus, and HDS HUS mid-range (formerly known as AMS). Let us not forget about HP 3PAR or P2000 (DotHill based) or P6000 (EVA, which is probably being put out to pasture). Then there are the various IBM products (their own and what they OEM from others), NEC, NetApp (FAS and Engenio), Oracle and Starboard (formerly known as Reldata). Note that many startups could be on the above list as well, except that they consider the above to be dead, which would make themselves extinct too; how ironic ;).

What are some industry trends that I am seeing?

  • Some vendors and products might be nearing the ends of their useful lives
  • Some vendors, their products and portfolios continue to evolve and expand
  • Some vendors and their products are moving into new or adjacent markets
  • Some vendors are refining where and what to sell, when and to whom
  • Some vendors are moving up market, some down market
  • Some vendors are moving into new markets, others are moving out of markets
  • Some vendors are declaring others dead to create a new market for their products
  • One size or approach or technology does not fit all needs, avoid treating all the same
  • Leverage multiple tools and technology in creative ways
  • Maximize return on innovation (the new ROI) by using various tools and technologies in ways that boost productivity and effectiveness while removing complexity and cost
  • Realize that cutting cost can result in reduced resiliency; instead, look for and remove complexity, with the benefit of removing cost without compromise
  • Storage arrays are moving into new roles, including as back-end storage for cloud, object and other software stacks running on commodity servers to replace JBOD (DejaVu anyone?).

Keep in mind that there is a difference between industry adoption (what is talked about) and customer deployment (what is actually bought and used). Likewise there is technology based on GQ (looks and image) and G2 (functionality, experience).

There is also an industry myth that SSD cannot be or has not been successful in traditional storage systems, which in some cases has been true for some products or vendors. Otoh, some vendors such as EMC, NetApp and Oracle (among others) are having good success with SSD in their storage systems. Some SSD startup vendors have been more successful on both the G2 and GQ fronts, while those focused mainly on GQ or image may not be as successful (at least yet) in the industry adoption vs. customer deployment game.

For the above mentioned storage systems vendors and products (among others), or at least most of them, there is still plenty of life left, granted their roles and usage are changing, including in some cases being found as back-end storage systems behind servers running virtualization, cloud, object storage and other storage software stacks. Likewise, some of the new and emerging storage systems (hardware, software, valueware, services) and vendors have bright futures, while others may end up on the where-are-they-now list.

Are high-end enterprise class or other storage arrays and systems dead at the hands of new startups, virtual storage appliances (VSA), storage hypervisors, storage virtualization, virtual storage and SSD?

Are large storage arrays dead at the hands of SSD?

Have SSDs been unsuccessful with storage arrays (with poll)?


Here are links to two polls where you can cast your vote.

Cast your vote and see the results on whether large storage arrays and systems are dead here.

Cast your vote and see the results on whether SSD has been unsuccessful in storage systems.

So what about it, are enterprise or large storage arrays and systems dead?

Perhaps in some tabloids or industry myths (or in what some wish for), as well as in some customer environments and for some vendors or their products, that can be the case.

However, IMHO, for many other environments (and vendors) the answer is no; granted, some will continue to evolve from legacy high-end enterprise class storage systems to mid-range, appliance, VSA or something else.

There is still life in many of the storage system architectures, platforms and products that have been declared dead for over a decade.

Continue reading about the specifics of the EMC VMAX 10K announcement in the next post in this series here. Also check out Chuck's EMC blog to see what he has to say.

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved