VMware VVOLs and storage I/O fundamentals (Part II)

Note that this is a three-part series: the first piece is here (Are VMware VVOLs in your virtual server and storage I/O future?), the second piece is here (VMware VVOLs and storage I/O fundamentals Part 1), and the third piece is here (VMware VVOLs and storage I/O fundamentals Part 2).

Picking up from where we left off in the first part of VMware VVOLs and storage I/O fundamentals, let's take a closer look at VVOLs.

First, however, let's be clear that while VMware uses terms including object and object storage in the context of VVOLs, it's not the same as some other object storage solutions. Learn more about object storage at www.objectstoragecenter.com.

Are VVOLs accessed like other object storage (e.g. S3)?

No, VVOLs are accessed via the VMware software and associated APIs that are supported by various storage providers. VVOLs are not LUNs like regular block (e.g. DAS or SAN) storage that uses SAS, iSCSI, FC, FCoE or IBA/SRP, nor are they NAS volumes like NFS mount points. Likewise, VVOLs are not accessed using any of the various object storage access methods mentioned above (e.g. AWS S3, REST, CDMI, etc.); instead they are an application-specific implementation. For some of you, this approach of an application-specific or unique storage access method may be new, perhaps revolutionary; on the other hand, some of you might be having a déjà vu moment right about now.

A VVOL is not a LUN in the context of what you may know and like (or hate, even if you have never worked with them); likewise it is not a NAS volume like those you know (or have heard of), nor is it an object in the context of what you might have seen or heard of, such as S3, among others.

Keep in mind that what makes up a VMware virtual machine are the VMX, VMDK and some other files (shown in the figure below), and if enough information is known about where those blocks of data are or can be found, they can be worked upon. Also keep in mind that, at least near-term, block is the lowest common denominator upon which all file systems and object repositories are built.

[Figure: VMware ESXi storage I/O, IOPS and data store basics]
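
To make the "collection of per-VM files and blocks" point concrete, here is a minimal Python sketch. The file names are hypothetical examples (not output from a real host) and no VMware APIs are used; it simply groups a single VM's typical files by type, since each of these per-VM objects is the kind of thing a VVOL-capable array can manage individually.

# Conceptual sketch only: illustrative file names, no VMware APIs involved.
from pathlib import PurePosixPath

vm_files = [
    "vm01/vm01.vmx",        # VM configuration
    "vm01/vm01.vmdk",       # virtual disk descriptor
    "vm01/vm01-flat.vmdk",  # virtual disk data (the actual blocks)
    "vm01/vm01.vswp",       # swap file created when the VM powers on
    "vm01/vm01.nvram",      # BIOS/EFI settings
    "vm01/vmware.log",      # VM log
]

# Group the files by extension to show that a "VM" is really a collection of
# per-VM objects, each of which a VVOL-aware array could manage individually.
by_type = {}
for f in vm_files:
    by_type.setdefault(PurePosixPath(f).suffix, []).append(f)

for suffix, files in sorted(by_type.items()):
    print(f"{suffix}: {files}")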

Here is the thing: VVOLs will be accessible via a block interface such as iSCSI, FC or FCoE, or, for that matter, over Ethernet-based IP using NFS. Think of these storage interfaces and access mechanisms as the general transport for how vSphere ESXi will communicate with the storage system (e.g. the data path) under vCenter management.

What happens inside the storage system, and what is presented back to ESXi, will be different from normal SCSI LUN contents and only understood by the VMware hypervisor. ESXi will still tell the storage system what it wants to do, including moving blocks of data. The storage system, however, will have more insight and awareness into the context of what those blocks of data mean. This is how storage systems will be able to more closely integrate snapshots, replication, cloning and other functions, by having awareness of which data to move, as opposed to moving or working with an entire LUN where a VMDK may live. Keep in mind that the storage system will still function as it normally would; just think of VVOL as another, new personality and access mechanism that VMware uses to communicate with and manage storage.
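
As a thought experiment, here is a small Python sketch, purely conceptual with hypothetical class names and no real array or VMware APIs, contrasting what an array can do when it only sees a LUN full of blocks versus when it knows which objects belong to which VM.

# Conceptual sketch only (hypothetical names): contrasting snapshot granularity
# of a traditional LUN/VMFS datastore with per-VM virtual volumes.
from dataclasses import dataclass, field

@dataclass
class Lun:
    name: str
    vmdk_files: list  # many VMs' VMDKs live inside one VMFS datastore on the LUN

    def snapshot(self):
        # The array only sees a block device, so it must snapshot everything.
        return f"array snapshot of entire {self.name}: {len(self.vmdk_files)} VMDKs copied"

@dataclass
class VvolVm:
    name: str
    vvols: list = field(default_factory=list)  # config, data and swap objects for this VM

    def snapshot(self):
        # The array knows which objects belong to this VM, so it copies only those.
        return f"array snapshot of {self.name}: {len(self.vvols)} objects copied"

lun = Lun("LUN0", ["vm01.vmdk", "vm02.vmdk", "vm03.vmdk"])
vm = VvolVm("vm01", ["vm01-config.vvol", "vm01-data.vvol", "vm01-swap.vvol"])
print(lun.snapshot())
print(vm.snapshot())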

[Figure: VMware VVOL concepts (in general), with the VMDK pushed down into the storage system]

Think of iSCSI (or FC or something else) for block, or NFS for NAS, as the addressing mechanism for communication between ESXi and the storage array, except that instead of traditional SCSI LUN access and mapping, more work and insight is pushed down into the array. Also keep in mind that a LUN is simply an address from which to use Logical Block Numbers or Logical Block Addresses (LBNs/LBAs). The storage array in turn manages placement of data on SSDs or HDDs, again using blocks (aka LBAs/LBNs). In other words, a host that does not speak VVOL would get an error if it tried to use a LUN or target on a storage system that is a VVOL, assuming it is not masked or hidden ;).
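
For those who have not worked with LBAs, here is a quick worked example in Python: the byte offset of a block on a LUN is simply the LBA multiplied by the block size (512 bytes assumed here; 4,096 bytes on 4Kn devices).

# Conceptual sketch: a LUN is addressed by Logical Block Addresses (LBAs).
BLOCK_SIZE = 512          # bytes per logical block (4096 on 4Kn devices)

def lba_to_offset(lba: int, block_size: int = BLOCK_SIZE) -> int:
    """Return the byte offset on the device where a given LBA begins."""
    return lba * block_size

print(lba_to_offset(0))      # 0       -> first block of the LUN
print(lba_to_offset(2048))   # 1048576 -> 1 MiB into the LUN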

What's the Storage Provider (SP)?

The Storage Provider, aka SP, is created by the, well, the provider of the storage system or appliance, leveraging a VMware API (hint: sign up for the beta and there is an SDK). Simply put, the SP is a two-way communication mechanism leveraging VASA for reporting information, configuration and other insight up to the VMware ESXi hypervisor, vCenter and other management tools. In addition, the storage provider receives VASA configuration information from VMware about how to configure the storage system (e.g. storage containers). Keep in mind that the SP is the out-of-band management interface between the storage system supporting and presenting VVOLs and the VMware hypervisors.
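
Here is a minimal Python sketch of the role the SP plays. The class and method names are hypothetical and this is not the VASA API itself; it just shows the two-way, out-of-band idea: capabilities and status are reported up toward vCenter/ESXi, while configuration such as storage container creation comes back down, all separate from the data path.

# Conceptual sketch only: mimics the *role* of a VASA storage provider,
# not the actual VASA API or any vendor implementation.
class StorageProviderSketch:
    def __init__(self, array_name: str):
        self.array_name = array_name
        self.containers = {}

    # "Northbound": report capabilities and inventory up to vCenter/ESXi.
    def report_capabilities(self) -> dict:
        return {"array": self.array_name,
                "capabilities": ["snapshot", "replication", "dedupe"],
                "containers": list(self.containers)}

    # "Southbound": receive configuration from vCenter, e.g. create a container.
    def create_storage_container(self, name: str, capacity_gb: int) -> None:
        self.containers[name] = {"capacity_gb": capacity_gb, "vvols": []}

sp = StorageProviderSketch("array01")
sp.create_storage_container("gold-container", capacity_gb=2048)
print(sp.report_capabilities())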

What's the Storage Container (SC)?

This is a storage pool created on the storage array or appliance (e.g. VMware vCenter works with the array and storage provider (SP) to create it) in place of a normal LUN. With an SP and PE, the storage container becomes visible to ESXi hosts, and VVOLs can be created in the storage container until it runs out of space. Also note that the storage container takes on the storage profile assigned to it, which is inherited by the VVOLs in it. This is in place of presenting LUNs to ESXi, on which you then create VMFS data stores (or use as raw) and then carve out storage to VMs.
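
Here is a minimal sketch of the storage container idea, again with hypothetical Python names rather than any actual VMware or array API: a capacity pool with an assigned profile, in which VVOLs are created (and inherit that profile) until the capacity is exhausted.

# Conceptual sketch only (hypothetical names): a storage container as a
# capacity pool whose profile is inherited by the VVOLs created inside it.
class StorageContainerSketch:
    def __init__(self, name: str, capacity_gb: int, profile: str):
        self.name, self.capacity_gb, self.profile = name, capacity_gb, profile
        self.used_gb = 0
        self.vvols = []

    def create_vvol(self, vvol_name: str, size_gb: int) -> dict:
        if self.used_gb + size_gb > self.capacity_gb:
            raise RuntimeError(f"{self.name}: out of space")
        vvol = {"name": vvol_name, "size_gb": size_gb, "profile": self.profile}
        self.used_gb += size_gb
        self.vvols.append(vvol)
        return vvol

sc = StorageContainerSketch("gold-container", capacity_gb=100, profile="gold")
print(sc.create_vvol("vm01-data", 40))   # inherits the "gold" profile
print(sc.create_vvol("vm02-data", 40))
# sc.create_vvol("vm03-data", 40)        # would raise: only 20 GB left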

What's the Protocol Endpoint (PE)?

The PE provides visibility for the VMware hypervisor to see and access VMDKs and other objects (e.g. .vmx, swap, etc.) stored in VVOLs. The protocol endpoint (PE) manages or directs I/O received from the VM, enabling scaling across many virtual volumes by leveraging the multipathing of the PE (inherited by the VVOLs). Note that for storage I/O operations, the PE is simply a pass-through mechanism and does not store the VMDK or other contents. If using iSCSI, FC, FCoE or another SAN interface, the PE works on a LUN basis (again, not actually storing data); if using NAS NFS, it works with a mount point. The key point is that the PE gets out of the way.
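
To illustrate the pass-through idea, here is a small Python sketch with hypothetical names (not a real driver or VMware API): the PE only binds and routes I/O requests to the appropriate virtual volume on the array; it never stores the data itself.

# Conceptual sketch only (hypothetical names): the protocol endpoint as a
# pass-through that directs I/O to the right virtual volume; many VVOLs share
# one PE and its multipathing, and the PE holds no data of its own.
class ProtocolEndpointSketch:
    def __init__(self):
        self.bindings = {}   # vvol id -> back-end object on the array

    def bind(self, vvol_id: str, array_object: str) -> None:
        self.bindings[vvol_id] = array_object

    def submit_io(self, vvol_id: str, op: str, lba: int) -> str:
        # The PE does not hold the data; it only routes the request.
        target = self.bindings[vvol_id]
        return f"{op} lba={lba} forwarded to {target}"

pe = ProtocolEndpointSketch()
pe.bind("vm01-data", "array01/pool0/obj-42")
print(pe.submit_io("vm01-data", "read", lba=2048))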

VVOL Poll

What are your VVOL plans? View the results and cast your vote here.

Wrap up (for now)

There certainly are many more details to VVOLs that you can get a preview of in the beta, as well as via various demos, webinars and VMworld sessions as more becomes public. For now, however, I hope you found this quick overview of VVOLs of use. Since VVOLs at the time of this writing are not yet released, you will need to wait for more detailed info, join the beta, or poke around the web (for now). Also, if you have not seen the first part of this piece, check it out here, as I give some more links to get you started learning more about VVOLs.

Keep an eye on and learn more about VVOLs at VMworld 2014 as well as in various other venues.

IMHO, VVOLs are or will be in your future; however, the question is whether there will be a back-to-the-future moment for some of you with VVOLs.

What VVOL questions, comments and concerns are in your future and on your mind?

Ok, nuff said (for now)

Cheers gs


Vendors Who Don't Want to Be Virtualized?


This past week I did a couple of keynote and round table discussions in Plano (Dallas) at Jaspers and in Boston at Smith and Wollensky with a theme of BC/DR for virtualized environments. In both locations, where we had great participant involvement and discussions, audience members discussed the various merits of and their experiences with server virtualization, and one of the many common themes was vendors who do not support their vertical applications in virtualized environments.

Say it ain't so, Joe (or Jane), especially with so many vendors tripping over themselves to show how their software can be stuffed into a VM in order to jump on the VM bandwagon. How could it be that some vendors don't want to be virtualized?

It's true: there are some independent software vendors (ISVs) whose vertical packages are commonly deployed in environments of all sizes, yet who, for various reasons, do not want or support their software running in a virtualized environment.

The reasons some vendors of vertical-specific applications do not support their software in virtualized environments vary: quality of service (QoS), performance, contention, response time or availability concerns; the desire to continue selling physical servers and other hardware with their applications; and the desire to keep their application on a server platform whose QoS they can control by ensuring that no other applications or changes are introduced to the server and associated operating system environment.

Yet another example can be that the vendor has simply not had a chance to test, or to test in various permutations, and thus takes the route of not supporting their solutions in a virtualized (or what they may perceive as a consolidated) environment.

This is in no way a new trend; for decades, vendors of vertical software have often taken a stance of not allowing other applications to be installed on a server where their software is installed, in order to maintain QoS, service level agreement (SLA) levels and support guarantees.

In some cases, such as specialized applications including hospital patient care or related systems, this can make sense, as well as perhaps help with complying with regulatory requirements. However, there are plenty of other applications where vendors drag their feet or resist supporting virtualized environments without realizing that not all virtualized environments need to be consolidated. That is, a stepping stone or baby step can be to first install their software on a VM that has a dedicated physical machine (PM) to validate that there are no instabilities or QoS impacts from running in a VM.

After some period of time and growing comfort, the application and its associated VM could be placed alongside some number of other VMs in an incremental and methodical manner to determine what impacts, if any, occur.

The bottom line is this: not all applications and servers lend themselves to being consolidated, for various reasons; however, many of those applications and servers can be virtualized to enable management transparency, including facilitating movement to other servers during upgrades or maintenance as well as BC/DR (e.g. life beyond consolidation), a topic that I cover in more detail in my new book “The Green and Virtual Data Center” (Auerbach).

Likewise, there are some applications that truly should be left alone for now, whether for security, QoS, availability, politics, software or hardware dependencies, or compatibility, among other reasons. However, there are also many applications where vendors need to rethink why they do not support a virtualized server environment and better articulate those issues to their customers, or start the testing and qualification as well as put together best practices guides on how to deploy their applications in virtualized environments.

Thanks to all of those who ventured out this week in Plano and Boston and participated in the discussion; I look forward to seeing and hearing from you again in the not-so-distant future.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved