VMware VVOLs and storage I/O fundamentals (Part II)

Note that this is a three-part series, with the first piece here (e.g. Are VMware VVOLs in your virtual server and storage I/O future?), the second piece here (e.g. VMware VVOLs and storage I/O fundamentals Part 1) and the third piece here (e.g. VMware VVOLs and storage I/O fundamentals Part 2).

Picking up from where we left off in the first part of VMware VVOLs and storage I/O fundamentals, let's take a closer look at VVOLs.

First, however, let's be clear that while VMware uses terms including object and object storage in the context of VVOLs, it is not the same as some other object storage solutions. Learn more about object storage at www.objectstoragecenter.com.

Are VVOLs accessed like other object storage (e.g. S3)?

No, VVOLs are accessed via the VMware software and associated APIs that are supported by various storage providers. VVOLs are not LUNs like regular block (e.g. DAS or SAN) storage that uses SAS, iSCSI, FC, FCoE or IBA/SRP, nor are they NAS volumes like NFS mount points. Likewise, VVOLs are not accessed using any of the various object storage access methods mentioned above (e.g. AWS S3, REST, CDMI, etc.); instead they are an application-specific implementation. For some of you, this approach of an application-specific or unique storage access method may be new, perhaps revolutionary; on the other hand, some of you might be having a déjà vu moment right about now.

A VVOL is not a LUN in the context of what you may know and like (or hate, even if you have never worked with them); likewise it is not a NAS volume as you know it (or have heard of it); nor is it an object in the context of what you might have seen or heard of, such as S3 among others.

Keep in mind that what makes up a VMware virtual machine are the VMX, VMDK and some other files (shown in the figure below), and if enough information is known about where those blocks of data are or can be found, they can be worked upon. Also keep in mind that, at least near-term, block is the lowest common denominator upon which all file systems and object repositories get built.

VMware ESXi basic storage I/O
VMware ESXi storage I/O, IOPS and data store basics

Here is the thing: VVOLs will be accessible via a block interface such as iSCSI, FC or FCoE, or for that matter over Ethernet-based IP using NFS. Think of these storage interfaces and access mechanisms as the general transport for how vSphere ESXi will communicate with the storage system (e.g. their data path) under vCenter management.

What is happening inside the storage system that gets presented back to ESXi will be different from normal SCSI LUN contents and only understood by the VMware hypervisor. ESXi will still tell the storage system what it wants to do, including moving blocks of data. The storage system, however, will have more insight and awareness into the context of what those blocks of data mean. This is how storage systems will be able to more closely integrate snapshots, replication, cloning and other functions, by having awareness of which data to move, as opposed to moving or working with an entire LUN where a VMDK may live. Keep in mind that the storage system will still function as it normally would; just think of VVOL as another or new personality and access mechanism used for VMware to communicate with and manage storage.
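To make that granularity point concrete, here is a minimal, hypothetical Python sketch (the names and structures are made up, not any actual VMware or vendor API) contrasting a snapshot taken at LUN scope with one taken at VVOL scope, where the array only has to track the blocks belonging to a single virtual disk.

```python
# Hypothetical illustration only: names and structures are made up,
# not a real VMware or storage vendor API.

# A LUN is just a flat range of blocks; the array has no idea which
# blocks belong to which VMDK, so a snapshot covers everything.
lun = {"blocks": range(0, 1_000_000), "contents": "many VMDKs mixed together"}

def snapshot_lun(lun):
    # The entire address space is protected, even VMs you did not care about.
    return {"scope": "whole LUN", "blocks": len(lun["blocks"])}

# With VVOLs the array tracks each virtual disk (and swap, config, etc.)
# as its own object, so data services can target just that object.
vvols = {
    "vm01-data.vmdk": {"blocks": 50_000},
    "vm02-data.vmdk": {"blocks": 200_000},
}

def snapshot_vvol(name):
    # Only the blocks of this one virtual volume are involved.
    return {"scope": name, "blocks": vvols[name]["blocks"]}

print(snapshot_lun(lun))                # {'scope': 'whole LUN', 'blocks': 1000000}
print(snapshot_vvol("vm01-data.vmdk"))  # {'scope': 'vm01-data.vmdk', 'blocks': 50000}
```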

VMware VVOL basics
VMware VVOL concepts (in general) with VMDK being pushed down into the storage system

Think in terms of iSCSI (or FC or something else) for block, or NFS for NAS, as being the addressing mechanism used to communicate between ESXi and the storage array, except that instead of traditional SCSI LUN access and mapping, more work and insight is pushed down into the array. Also keep in mind that a LUN is simply an address space accessed using Logical Block Numbers (LBNs) or Logical Block Addresses (LBAs). The storage array in turn manages placement of that data on SSDs or HDDs, again using blocks (aka LBAs/LBNs). In other words, a host that does not speak VVOL would get an error if it tried to use a LUN or target on a storage system that is a VVOL, assuming it is not masked or hidden ;).
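As a simple illustration of LBA/LBN addressing (generic arithmetic, not tied to any particular array), here is how a byte offset maps to a logical block address given a block size:

```python
BLOCK_SIZE = 512  # bytes per logical block; 4096 on many newer devices

def byte_offset_to_lba(offset_bytes, block_size=BLOCK_SIZE):
    """Return the logical block address (LBA) and the offset within that block."""
    lba = offset_bytes // block_size
    within_block = offset_bytes % block_size
    return lba, within_block

# A read starting 1 MiB into a LUN with 512-byte blocks lands at LBA 2048.
print(byte_offset_to_lba(1 * 1024 * 1024))  # (2048, 0)
```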

What's the Storage Provider (SP)?

The Storage Provider, aka SP, is created by the, well, the provider of the storage system or appliance, leveraging a VMware API (hint: sign up for the beta and there is an SDK). Simply put, the SP is a two-way communication mechanism leveraging VASA for reporting information, configuration and other insight up to the VMware ESXi hypervisor, vCenter and other management tools. In addition, the storage provider receives VASA configuration information from VMware about how to configure the storage system (e.g. storage containers). Keep in mind that the SP is the out-of-band management interface between the storage system supporting and presenting VVOLs and the VMware hypervisors.
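Here is a rough conceptual sketch in Python of that two-way role, reporting capabilities up and accepting configuration requests coming back down. The class and method names are hypothetical stand-ins, not the actual VASA API or any vendor's SDK.

```python
# Conceptual sketch only: class and method names are hypothetical,
# not the actual VMware VASA API or any vendor's SDK.

class DemoArray:
    """Stand-in for a vendor storage system (illustration only)."""
    def __init__(self):
        self.containers = {}

    def list_storage_containers(self):
        return list(self.containers)

    def list_protocol_endpoints(self):
        return ["pe-lun-0"]

    def create_container(self, name, size_gb):
        self.containers[name] = size_gb
        return {"created": name, "size_gb": size_gb}


class StorageProvider:
    """Out-of-band management endpoint published by the array vendor."""
    def __init__(self, array):
        self.array = array

    def report_capabilities(self):
        # Upward path: report what the array offers to vCenter/ESXi.
        return {
            "containers": self.array.list_storage_containers(),
            "profiles": ["gold-ssd", "silver-hybrid"],  # example profile names
            "protocol_endpoints": self.array.list_protocol_endpoints(),
        }

    def apply_configuration(self, request):
        # Downward path: translate a vCenter request into array-specific calls.
        if request["type"] == "create_storage_container":
            return self.array.create_container(request["name"], request["size_gb"])
        raise NotImplementedError(request["type"])


sp = StorageProvider(DemoArray())
sp.apply_configuration({"type": "create_storage_container", "name": "sc01", "size_gb": 2048})
print(sp.report_capabilities())
```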

What's the Storage Container (SC)?

This is a storage pool created on the storage array or appliance (e.g. VMware vCenter works with the array and storage provider (SP) to create it) in place of using a normal LUN. With an SP and PE, the storage container becomes visible to ESXi hosts, and VVOLs can be created in the storage container until it runs out of space. Also note that the storage container takes on the storage profile assigned to it, which is inherited by the VVOLs in it. This is in place of presenting LUNs to ESXi that you can then create VMFS data stores on (or use as raw) and then carve storage to VMs.
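A minimal sketch (hypothetical names, not a vendor API) of the behavior described above: VVOLs are carved out of the container until its capacity is exhausted, and each VVOL inherits the container's storage profile.

```python
class StorageContainer:
    """Pool on the array from which VVOLs are allocated (illustration only)."""

    def __init__(self, name, capacity_gb, profile):
        self.name = name
        self.capacity_gb = capacity_gb
        self.profile = profile          # e.g. "gold-ssd"
        self.vvols = {}

    @property
    def used_gb(self):
        return sum(self.vvols.values())

    def create_vvol(self, vvol_name, size_gb):
        if self.used_gb + size_gb > self.capacity_gb:
            raise RuntimeError(f"{self.name}: out of space")
        self.vvols[vvol_name] = size_gb
        # The new VVOL inherits the container's profile.
        return {"vvol": vvol_name, "size_gb": size_gb, "profile": self.profile}

sc = StorageContainer("sc01", capacity_gb=1024, profile="gold-ssd")
print(sc.create_vvol("vm01-data.vmdk", 100))
```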

Protocol endpoint (PE)

The PE provides visibility for the VMware hypervisor to see and access VMDKs and other objects (e.g. .vmx, swap, etc.) stored in VVOLs. The protocol endpoint (PE) manages or directs I/O received from the VM, enabling scaling across many virtual volumes by leveraging the multipathing of the PE (inherited by the VVOLs). Note that for storage I/O operations, the PE is simply a pass-through mechanism and does not store the VMDK or other contents. If using iSCSI, FC, FCoE or another SAN interface, the PE works on a LUN basis (again, not actually storing data); if using NAS (NFS), it works with a mount point. The key point is that the PE gets out of the way.
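Here is a hedged sketch of that pass-through role (again, hypothetical structures rather than the real ESXi or array implementation): the PE only resolves which backing VVOL an I/O belongs to and forwards it; it never stores the data itself.

```python
class ProtocolEndpoint:
    """Pass-through I/O demultiplexer presented as a LUN or NFS mount point."""

    def __init__(self, bindings):
        # Mapping of virtual volume IDs to array-internal locations.
        self.bindings = bindings

    def submit_io(self, vvol_id, operation, lba, length):
        if vvol_id not in self.bindings:
            raise LookupError(f"VVOL {vvol_id} not bound to this PE")
        # The PE does not hold data; it simply directs the request
        # to wherever the array keeps that virtual volume.
        target = self.bindings[vvol_id]
        return f"{operation} {length} blocks at LBA {lba} -> {target}"

pe = ProtocolEndpoint({"vvol-42": "array-internal-object-0042"})
print(pe.submit_io("vvol-42", "READ", lba=2048, length=8))
```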

VVOL Poll

What are your VVOL plans? View the results and cast your vote here.

Wrap up (for now)

There certainly are many more details to VVOLs that you can get a preview of in the beta, as well as via various demos, webinars and VMworld sessions as more becomes public. However, for now I hope you found this quick overview of VVOLs of use. Since VVOLs at the time of this writing are not yet released, you will need to wait for more detailed info, join the beta, or poke around the web (for now). Also, if you have not seen the first part overview of this piece, check it out here, as I give some more links to get you started learning more about VVOLs.

Keep an eye on and learn more about VVOLs at VMworld 2014 as well as in various other venues.

IMHO VVOLs are or will be in your future; however, the question will be whether there is going to be a back to the future moment for some of you with VVOLs.

What VVOL questions, comments and concerns are in your future and on your mind?

Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

EMC ViPR software defined object storage part III

Storage I/O trends

This is part III in a series of posts pertaining to EMC ViPR software defined storage and object storage. You can read part I here and part II here.

EMCworld

More on the object opportunity

Other object access includes the OpenStack storage component Swift, along with AWS S3 HTTP and REST API access. ViPR also supports EMC Atmos, VNX and Isilon arrays as southbound persistent storage.

object storage
Object (and cloud) storage access example

EMC is claiming that over 250 VNX systems can be abstracted to support scaling with stability (performance, availability, capacity, economics) using ViPR. Third-party storage will be supported along with software such as OpenStack Swift, Ceph and others running on commodity hardware. Note that EMC has some history with object storage and access, including Centera and Atmos. Visit the micro site I have set up called www.objectstoragecenter.com and watch for more content to be updated and added there.

More on the ViPR control plane and controller

ViPR differs from some others in that it does not sit in the data path all the time (e.g. between application servers and storage systems or cloud services), which cuts the potential for bottlenecks.

ViPR architecture

Organizations that can use ViPR include enterprise, SMB, CSP or MSP and hosting sites. ViPR can be used in a control mode to leverage the intelligence and functionality of underlying storage systems, appliances and services. This means ViPR can be used to complement southbound or target storage systems and services, as opposed to treating them as dumb disks or JBOD.

On the other hand, ViPR will also have a suite of data services such as snapshot, replication, data migration, movement and tiering to add value where those do not exist. Customers will be free to choose how they want to use and deploy ViPR: for example, leveraging underlying storage functionality (e.g. a lightweight model), or a more familiar heavy-lifting storage virtualization model. In the heavy-lifting model more work is done by the virtualization or abstraction software to create added value; however, this can be a concern for bottlenecks depending on how it is deployed.

Service categories

Software defined, storage hypervisor, virtual storage or storage virtualization?

Most storage virtualization, storage hypervisor and virtual storage solutions that are hardware or software based (e.g. software defined) are implemented in what is referred to as an in-band fashion. With in-band, the storage virtualization software or hardware sits between the applications (northbound) and storage systems or services (southbound).

While this approach can be easier to carry out, along with adding value-add services, it can also introduce scaling bottlenecks depending on the implementation. Examples of in-band storage virtualization include Actifio, DataCore, EMC VMAX with third-party storage, HDS with third-party storage, IBM SVC (and the V7000 Storwize storage system based on it) and NetApp V-Series among others. An advantage of in-band approaches is that there should be no host or server-side software requirements, along with SAN transparency.

There is another approach called out-of-band that has been tried. However, pure out-of-band requires a management system along with agents, drivers, shims, plugins or other software resident on host application servers.

fast path control path
Example of generic fast path control path model

ViPR takes a different approach, one that was seen a few years ago with EMC Invista, called fast path, control path, which for the most part stays out of the data path. While this is like out-of-band, there should be no need for any host server-side (e.g. northbound) software. By being a fast path, control path design, the virtualization or abstraction and management functions stay out of the way of data being moved or work being done.

Hmm, kind of like how management should be, there to help when needed, out-of-the-way not causing overhead other times ;).
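To picture the difference in where the work happens (a generic sketch, not ViPR's or Invista's actual implementation), compare an in-band virtualizer that touches every I/O with a control-path controller that handles provisioning and then steps aside:

```python
class Backend:
    """Stand-in for a southbound storage system or service."""
    def read(self, volume, lba):
        return f"data from {volume}@{lba}"

class InBandVirtualizer:
    """Sits in the data path: every I/O flows through it (potential bottleneck)."""
    def __init__(self, backend):
        self.backend = backend
    def read(self, volume, lba):
        # Could add caching, replication, etc., but also adds latency.
        return self.backend.read(volume, lba)

class ControlPathController:
    """Fast path, control path: provisions and maps, then gets out of the way."""
    def __init__(self, backend):
        self.backend = backend
    def provision(self, volume):
        # Management/control operations only; returns a direct handle so the
        # host talks straight to the backend for actual I/O.
        return self.backend

backend = Backend()
inband = InBandVirtualizer(backend)
print(inband.read("vol1", 100))      # data path goes through the proxy

controller = ControlPathController(backend)
direct = controller.provision("vol1")
print(direct.read("vol1", 100))      # data path bypasses the controller
```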

Is EMC the first (even with Invista) to leverage fast path control path?

Actually, up until about a year or so ago (shortly after HP acquired 3PAR), HP had a solution called the Storage Virtualization Services Platform (SVSP) that was OEMed from LSI (e.g. StoreAge). Unfortunately, HP decided to retire that as opposed to extending its capabilities for file and object access (northbound) as well as different southbound targets or destination services.

What's this northbound and southbound stuff?

Simply put, think in terms of a vertical stack with host servers (PMs or VMs) on the top with applications (and hypervisors or other tools such as databases) on top of them (e.g. north).

software defined storage
Northbound servers, southbound storage systems and cloud services

Think of storage systems, appliances, cloud services or other target destinations on the bottom (or south). ViPR sits in between providing storage services and management to the northbound servers leveraging the southbound storage.

What host servers can ViPR support for serving storage?

ViPR is being designed to be server agnostic (e.g. virtual or physical) along with operating system agnostic. In addition, ViPR is being positioned as capable of serving northbound (e.g. up to application servers) block, file or object, as well as accessing southbound (e.g. targets) block, file and object storage systems, file systems or services.

Note that a difference from earlier similar solutions from EMC is that those have been either block-based (e.g. Invista, VPLEX, VMAX with third-party storage) or file-based. Also note that this means ViPR is not just for VMware or virtual server environments, and that it can exist in legacy, virtual or cloud environments.
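Since ViPR is positioned as serving block, file or object northbound while consuming block, file or object southbound, one way to picture that (purely illustrative, not EMC's actual plug-in model) is a common southbound interface that different backend types implement:

```python
from abc import ABC, abstractmethod

class SouthboundStorage(ABC):
    """Common interface an abstraction layer might require of any backend."""
    @abstractmethod
    def provision(self, name, size_gb): ...

class BlockArray(SouthboundStorage):
    def provision(self, name, size_gb):
        return f"LUN {name} ({size_gb} GB)"

class NasFiler(SouthboundStorage):
    def provision(self, name, size_gb):
        return f"NFS export /{name} ({size_gb} GB)"

class ObjectStore(SouthboundStorage):
    def provision(self, name, size_gb):
        return f"bucket {name} (quota {size_gb} GB)"

# Northbound requests can then be satisfied from whichever backend type fits.
for backend in (BlockArray(), NasFiler(), ObjectStore()):
    print(backend.provision("app01", 500))
```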

ViPR image

Likewise ViPR is intended to be application agnostic, supporting little data, big data and very big data (VBD) along with Hadoop or other specialized processing. Note that while ViPR will support HDFS in addition to NFS and CIFS file-based access, Hadoop will not be running on or in the ViPR controllers; that would live or run elsewhere.

How will ViPR be deployed and licensed?

EMC has indicated that the ViPR controller will be delivered as software that installs into a virtual appliance (e.g. VMware) running as a virtual machine (VM) guest. It is not clear when support will exist for other hypervisors (e.g. Microsoft Hyper-V, Citrix/Xen, KVM), or whether VMware vSphere with vCenter is required or simply the free version of ESXi. As of the announcement pre-briefing, EMC had not yet finalized pricing and licensing details. General availability is expected in the second half of calendar 2013.

Keep in mind that the ViPR controller (software) runs as a VM that can be hosted on a clustered hypervisor for HA. In addition, multiple ViPR controllers can exist in a cluster to further enhance HA.

Some questions to be addressed among others include:

  • How and where are IOs intercepted?
  • Who can have access to the APIs, what is the process, is there a developers program, SDK along with resources?
  • What network topologies are supported local and remote?
  • What happens when JBOD is used and no advanced data services exist?
  • What are the characteristics of the object access functionality?
  • What if any specific switches or data path devices and tools are needed?
  • How does a host server know to talk with its target and ViPR controller know when to intercept for handling?
  • Will SNIA CDMI be added and when as part of the object access and data services capabilities?
  • Are programmatic bindings available for the object access along with support for other APIs including IOS?
  • What are the performance characteristics including latency under load as well as during a failure or fault scenario?
  • How will EMC position VPLEX and its caching model on a local and wide area basis vs. ViPR, or will we see those two work together in some fashion, and if so, what will that be?

Bottom line (for now):

Good move for EMC; now let us see how they execute, including driving adoption of their open APIs, something they have had success with in the past with Centera and other solutions. Likewise, let us see what other storage vendors become supported or add support, along with how pricing and licensing are rolled out. EMC will also have to articulate when and where to use ViPR vs. VPLEX along with other storage systems or management tools.

Additional related material:
Are you using or considering implementation of a storage hypervisor?
Cloud and Virtual Data Storage Networking (CRC)
Cloud conversations: Public, Private, Hybrid what about Community Clouds?
Cloud, virtualization, storage and networking in an election year
Does software cut or move place of vendor lock-in?
Don’t Use New Technologies in Old Ways
EMC VPLEX: Virtual Storage Redefined or Respun?
How many degrees separate you and your information?
Industry adoption vs. industry deployment, is there a difference?
Many faces of storage hypervisor, virtual storage or storage virtualization
People, Not Tech, Prevent IT Convergence
Resilient Storage Networks (Elsevier)
Server and Storage Virtualization Life beyond Consolidation
Should Everything Be Virtualized?
The Green and Virtual Data Center (CRC)
Two companies on parallel tracks moving like trains offset by time: EMC and NetApp
Unified storage systems showdown: NetApp FAS vs. EMC VNX
backup, restore, BC, DR and archiving
VMware buys virsto, what about storage hypervisor’s?
Who is responsible for vendor lockin?

Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

EMC ViPR software defined object storage part II

Storage I/O trends

This is part II in a series of posts pertaining to EMC ViPR software defined storage and object storage. You can read part I here and part III here.

EMCworld

Some questions and discussion topics pertaining to ViPR:

Whom is ViPR for?

Organizations that need to scale with stability across EMC, third-party or open storage software stacks and commodity hardware. This applies to large and small enterprises, cloud service providers, managed service providers, and virtual and cloud environments.

What does this mean for EMC hardware/platforms/systems?

They can continue to be used as is, or work with ViPR or other deployment modes.

Does this mean EMC storage systems are nearing their end of life?

IMHO for the most part not yet, granted there will be some scenarios where new products will be used vs. others, or existing ones used in new ways for different things.

As has been the case for years if not decades, some products will survive, continue to evolve and find new roles, kind of like different data storage media (e.g. SSD, disk, tape, etc.).

How does ViPR work?

ViPR functions as a control plane across the data and storage infrastructure, supporting both northbound and southbound. Northbound refers to use from or up to application servers (physical machines (PMs) and virtual machines (VMs)). Southbound refers to target or destination storage systems. Storage systems can be traditional EMC or third-party (NetApp was mentioned as part of the first release), appliances, just a bunch of disks (JBOD) or cloud services.

Some general features and functions:

  • Provisioning and allocation (with automation)
  • Data and storage migration or tiering
  • Leverage scripts, templates and workbooks
  • Support service categories and catalogs
  • Discovery, registration of storage systems
  • Creation of storage resource pools for host systems
  • Metering, measuring, reporting, charge-back or show-back
  • Alerts, alarms and notification
  • Self-service portal for access and provisioning
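As a rough mental model of a few of the control-plane functions listed above (discovery, service catalogs, automated provisioning), here is a hypothetical Python sketch; it is not the ViPR API, just an illustration of the pattern.

```python
class ControlPlane:
    """Toy model of a storage control plane (illustration, not ViPR itself)."""

    def __init__(self):
        self.arrays = {}      # discovered southbound storage systems
        self.catalog = {}     # service categories, e.g. "silver" -> tier

    def discover(self, array_name, capacity_gb, tier):
        # Discovery/registration of a storage system.
        self.arrays[array_name] = {"capacity_gb": capacity_gb, "tier": tier}

    def define_service(self, category, tier):
        # Service catalog entry mapping a category to backend requirements.
        self.catalog[category] = tier

    def provision(self, category, size_gb):
        # Automated allocation: pick any registered array matching the category.
        tier = self.catalog[category]
        for name, info in self.arrays.items():
            if info["tier"] == tier and info["capacity_gb"] >= size_gb:
                info["capacity_gb"] -= size_gb
                return {"array": name, "size_gb": size_gb, "category": category}
        raise RuntimeError("no capacity matching request")

cp = ControlPlane()
cp.discover("vnx-01", capacity_gb=10_000, tier="midrange")
cp.define_service("silver", tier="midrange")
print(cp.provision("silver", 500))
```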

ViPR data plane (adding data services and value when needed)

Another part is the data plane, which implements data services and access. For block and file, when it is not needed, ViPR steps out of the way, leveraging the underlying storage systems or services.

object storage
Object storage access

When needed, the ViPR data plane can step in to add services and functionality, along with supporting object-based access for little data and big data. For example, Hadoop Distributed File System (HDFS) services can support northbound analytics software applications running on servers accessing storage managed by ViPR.
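A small sketch of that "step in only when needed" behavior (hypothetical and simplified, not how ViPR actually routes I/O): block and file requests pass straight through to the underlying system, while object or HDFS style requests are handled by the added data-services layer.

```python
def handle_request(protocol, request):
    """Illustrative dispatch only; not ViPR's actual routing logic."""
    if protocol in ("block", "file"):
        # Data plane stays out of the way; the underlying array or filer
        # services the I/O directly.
        return f"pass-through to underlying storage: {request}"
    if protocol in ("object", "hdfs"):
        # Added data services supply access methods the backend may lack.
        return f"served by data-services layer: {request}"
    raise ValueError(f"unknown protocol {protocol}")

print(handle_request("block", "read LUN 5, LBA 1024"))
print(handle_request("hdfs", "open /analytics/logs/part-0000"))
```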

Continue reading in part III of this series here including how ViPR works, who it is for and more analysis.

Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

EMC VMAX 10K, looks like high-end storage systems are still alive (part III)

StorageIO industry trends cloud, virtualization and big data

This is the third in a multi-part series of posts (read first post here and second post here) looking at what else EMC announced today in addition to an enhanced VMAX 10K and dispelling the myth that large storage arrays are dead (or at least for now).

In addition to the VMAX 10K specific updates, EMC also announced the release of a new version of their Enginuity storage software (firmware, storage operating system). Enginuity is supported across all VMAX platforms and features the following:

  • Replication enhancements include TimeFinder clone refresh, restore and four-site SRDF for the VMAX 10K, along with thick or thin support. This capability enables functionality across VMAX 10K, 40K or 20K using synchronous or asynchronous modes and extends the earlier three-site capability to four sites and mixed modes. Note that the larger VMAX systems already had the extended replication feature support, with the VMAX 10K now on par with them. Also note that the VMAX can be enhanced with VPLEX in front of the storage systems (local or wide area, in-region HA and out-of-region DR) and RecoverPoint behind the systems supporting bi-synchronous (two-way), synchronous and asynchronous data protection (CDP, replication, snapshots).
  • Unisphere for VMAX 1.5 manages DMX along with VMware VAAI UNMAP and space reclamation, block zero and hardware clone enhancements, IPv6, Microsoft Server 2012 support and VFCache 1.5.
  • Support for a mix of 2.5 inch and 3.5 inch DAEs (disk array enclosures) along with new SAS drive support (high-performance and high-capacity, and various flash-based SSDs or EFDs).
  • The addition of a fourth dynamic tier within FAST for supporting third-party virtualized storage, along with compression of inactive, cold or stale data (manual or automatic) with a 2 to 1 data footprint reduction (DFR) ratio. Note that EMC was one of the early vendors to put compression into its storage systems on a block LUN basis in the CLARiiON (now VNX), along with NetApp and IBM (via their Storwize acquisition). The new fourth tier also means that third-party storage does not have to be the lowest tier in terms of performance or functionality.
  • Federated Tiered Storage (FTS) is now available on all EMC block storage systems including those with third-party storage attached in virtualization mode (e.g. VMAX). In addition to supporting tiering across its own products, and those of other vendors that have been virtualized when attached to a VMAX, ANSI T10 Data Integrity Field (DIF) is also supported. Read more about T10 DIF here, and here.
  • Front-end performance enhancements with host I/O limits (Quality of Service or QoS) for multi-tenant and cloud environments to balance or prioritize I/O across ports and users (see the sketch after this list). This feature can balance based on thresholds for IOPS, bandwidth or both from the VMAX. Note that this feature is independent of any operating system based tool, utility, pathing driver or feature such as VMware DRS and Storage I/O Control. Storage groups are created and mapped to specific host ports on the VMAX with the QoS performance thresholds applied to meet specific service level requirements or objectives.
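To illustrate how a front-end host I/O limit of the sort described in the last bullet might be enforced, here is a generic token-bucket style rate limiter with made-up numbers; it is a sketch of the concept, not EMC's actual QoS implementation.

```python
import time

class HostIOLimit:
    """Generic token-bucket style limiter for IOPS (illustration only)."""

    def __init__(self, max_iops):
        self.max_iops = max_iops
        self.tokens = max_iops
        self.last_refill = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at max_iops.
        self.tokens = min(self.max_iops,
                          self.tokens + (now - self.last_refill) * self.max_iops)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True     # I/O admitted
        return False        # I/O throttled to honor the service level

limit = HostIOLimit(max_iops=5000)            # example threshold for one storage group
print(sum(limit.allow() for _ in range(10)))  # all 10 admitted at this rate
```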

For discussion (or entertainment) purposes, how about the question of whether Enginuity qualifies or can be considered a storage hypervisor (or storage virtualization or virtual storage)? After all, the VMAX is now capable of having third-party storage from other vendors attached to it, something that HDS has done for many years now. For those who feel a storage hypervisor, virtual storage or storage virtualization requires software running on Intel or other commodity-based processors, guess what the VMAX uses for CPU processors (granted, you can't simply download Enginuity software and run it on a Dell, HP, IBM, Oracle or SuperMicro server).

I am guessing some of EMC's competitors and their surrogates, or others who like to play the storage hypervisor card game, will be quick to tell you it is not, based on various reasons or product comparisons; however, you be the judge.

 

Back to the question of whether traditional high-end storage arrays are dead or dying (from part one in this series).

IMHO as mentioned not yet.

Granted, like other technologies that have been declared dead or dying yet are still in use (technology zombies), they continue to be enhanced, find new customers, or see existing customers use them in new ways; their roles are evolving, thus they are still alive.

For some environments as has been the case over the past decade or so, there will be a continued migration from large legacy enterprise class storage systems to midrange or modular storage arrays with a mix of SSD and HDD. Thus, watch out for having a death grip not letting go of the past, while being careful about flying blind into the future. Do not be scared, be ready, do your homework with clouds, virtualization and traditional physical resources.

Likewise, there will be the continued migration for some from traditional mid-range class storage arrays to all flash-based appliances. Yet others will continue to leverage all the above in different roles aligned to where their specific features best serve the applications and needs of an organization.

In the case of high-end storage systems such as the EMC VMAX (formerly known as DMX, and Symmetrix before that) based on its Enginuity software, the hardware platforms will continue to evolve, as will the software functionality. This means that these systems will evolve to handle more workloads, as well as moving into new environments, from service providers to mid-range organizations where such systems were previously out of reach.

Smaller environments have grown larger as have their needs for storage systems while higher end solutions have scaled down to meet needs in different markets. What this means is a convergence of where smaller environments have bigger data storage needs and can afford the capabilities of scaled down or Right-sized storage systems such as the VMAX 10K.

Thus while some of the high-end systems may fade away faster than others, for those that continue to evolve being able to move into different adjacent markets or usage scenarios, they will be around for some time, at least in some environments.

Avoid confusing what is new and cool falling under industry adoption vs. what is productive and practical for customer deployment. Systems like the VMAX 10K are not for all environments or applications; however, for those who are open to exploring alternative solutions and approaches, it could open new opportunities.

If there is a high-end storage system platform (e.g. Enginuity) that continues to evolve and re-invent itself in terms of moving into or finding new uses and markets, the EMC VMAX would be at or near the top of such a list. For the other vendors of high-end storage systems that are also evolving, you can have an attaboy or attagirl as well to make you feel better, loved and not left out of or off such a list. ;)

Ok, nuff said for now.

Disclosure: EMC is not a StorageIO client; however, they have been in the past, directly and via acquisitions that they have done. I am however a customer of EMC via my Iomega IX4 NAS (I never did get the IX2 that I supposedly won at EMCworld ;) ) that I bought on Amazon.com, and indirectly via VMware products that I have; oh, and they did send me a copy of the new book The Human Face of Big Data (read more here).

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Many faces of storage hypervisor, virtual storage or storage virtualization

StorageIO industry trends cloud, virtualization and big data

Storage hypervisors were a popular buzzword bingo topic in 2012, with plenty of industry adoption and some customer deployment. Separating out the hype around storage hypervisors reveals conversations around backup, restore, BC, DR and archiving.

backup, restore, BC, DR and archiving
Cloud and virtualization components

Storage virtualization, along with virtual storage and storage hypervisors, has a theme of abstracting underlying physical hardware resources, similar to server virtualization. The abstraction can be for consolidation and aggregation, or for enabling agility, flexibility, emulation and other functionality.

backup, restore, BC, DR and archiving

Storage virtualization can be implemented in different locations, in many ways, with various functionality and focus. For example, the abstraction can occur on a server, in a virtual or physical appliance (e.g. tin-wrapped software), in a network switch or router, as well as in a storage system. The focus can be for aggregation, or for data protection (HA, BC, DR, backup, replication, snapshots), on a homogeneous (all one vendor) or mixed-vendor (heterogeneous) basis.

backup, restore, BC, DR and archiving

Here is a link to a guest post that I recently did over at The Virtualization Practice looking at storage hypervisors, virtual storage and storage virtualization. As is the case with virtual storage, storage virtualization and storage for virtual environments, what you call a storage hypervisor will probably vary depending on your views, spheres of influence and preferences, among other factors.

Additional related material:

  • Are you using or considering implementation of a storage hypervisor?
  • Cloud, virtualization, storage and networking in an election year
  • EMC VPLEX: Virtual Storage Redefined or Respun?
  • Server and Storage Virtualization – Life beyond Consolidation
  • Should Everything Be Virtualized?
  • How many degrees separate you and your information?
  • Cloud and Virtual Data Storage Networking (CRC)
  • The Green and Virtual Data Center (CRC)
  • Resilient Storage Networks (Elsevier)
  • backup, restore, BC, DR and archiving
  • Btw, as a special offer for readers, I have some copies of Resilient Storage Networking: Designing Flexible Scalable Data Infrastructures (Elsevier) available for $19.95, shipping and handling included. Send me an email or tweet (@storageio) to learn more and get your copy (major credit cards and PayPal accepted).

Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Industry Trends and Perspectives: Storage Virtualization and Virtual Storage

This is part of an ongoing series of short industry trends and perspectives blog post briefs.

These short posts complement other, longer posts along with traditional industry trends and perspective white papers, research reports and solution brief content found at www.storageio.com/reports.

The topic of this post is a trend related to server virtualization and recent EMC virtual storage announcements.

Virtual storage or storage virtualization has been around as a technology and topic for some time now. Some would argue that storage virtualization is several years old, while others would say many decades, depending on your view or definition, which will vary by preferences, product, vendor, open or closed, hardware, network and software, not to mention features and functionality.

Consequently there are many different views and definitions of storage virtualization, some tied to product specifications, often leading to apples-and-oranges comparisons.

Back in the early to mid-2000s there was plenty of talk around storage virtualization, which then gave way to a relatively quiet period before adoption picked up in terms of deployment later in the decade (at least for block-based solutions).

More recently there has been a renewed flurry of storage virtualization activity, with many vendors now shipping their latest versions of tools and functionality, EMC announcing VPLEX, and the file virtualization vendors continuing to try to create a market for their wares (give it time; like block-based, it will evolve).

One of the trends around storage virtualization, and part of the play on words EMC is using, is to change the order of the words. That is, where storage virtualization is often aligned with a product implementation (e.g. software on an appliance or switch, or in a storage system) used primarily for aggregation of heterogeneous storage, with VPLEX EMC is referring to it as virtual storage.

What is interesting here is the play on life beyond consolidation, a trend that is also occurring with servers, where virtualization is used for agility, flexibility and ease of management for upgrades, adds, moves and changes, as opposed to simply pooling LUNs and underlying storage devices. Stay tuned and watch for more in this space, and read the blog post below about VPLEX for more on this topic.

Related and companion material:
Blog: EMC VPLEX: Virtual Storage Redefined or Respun?

That is all for now, hope you find this ongoing series of current and emerging Industry Trends and Perspectives interesting.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved