Should Everything Be Virtualized?

October 19, 2009 – 8:46 pm

Storage I/O trends

Should everything, that is, all servers, storage and I/O along with facilities, be virtualized?

The answer, not surprisingly, is: it depends!

Denny Cherry (aka Mrdenny) over at ITKE recently did a great post about applications that should not be virtualized, particularly databases. On many of his points or themes we are on the same or similar page, while on others we differ slightly, though not by very much.

Unfortunately, consolidation is commonly misunderstood to be the sole function or value proposition of server virtualization, given its first-wave focus. I agree that not all applications or servers should be consolidated (note that I did not say virtualized).

From a consolidation standpoint, the emphasis is often on increasing resource use to cut physical hardware and management costs by raising the number of virtual machines (VMs) per physical machine (PM). Ironically, while VMs built on VMware, Microsoft Hyper-V, Citrix/Xen and others can leverage a common gold image for cloning or rapid provisioning, there is still a separate operating system instance and application stack to manage for each VM.

Sure, VM tools from the hypervisor vendors and third parties help with these tasks, and storage vendor tools including dedupe and thin provisioning help cut the data footprint of these multiple images. However, there are still multiple images to manage, which presents a future opportunity for further cost and management reduction (more on that in a different post).

Getting back on track:

Some reasons that not all servers or applications can be consolidated include, among others:

  • Performance, response time, latency and Quality of Service (QoS)
  • Security requirements including keeping customers or applications separate
  • Vendor support of software on virtual or consolidated servers
  • Financial where different departments own hardware or software
  • Internal political or organizational barriers and turf wars

On the other hand, for those who see virtualization as enabling agility and flexibility, that is, life beyond consolidation, there are many deployment opportunities for virtualization (note that I did not say consolidation). For some environments and applications, the emphasis can be on performance, quality of service (QoS) and other service characteristics, where the ratio of VMs to PMs will be much lower, if not one to one. This is where Mrdenny and I are essentially on the same page, perhaps saying it differently, with plenty of caveats and clarification needed of course.

My view is that in life beyond consolidation, many more servers or applications can be virtualized than might otherwise be hosted by VMs (note that I did not say consolidated). For example, instead of a high ratio of VMs to PMs, a lower number can be used, and for some workloads or applications even one VM per PM can be leveraged, with a focus beyond basic CPU use.

Yes, you read that correctly: I said why not configure some VMs on a one VM to one PM basis!

Here’s the premise: today’s current wave or focus is on maximizing the number of VMs and/or reducing the number of physical machines to cut capital and operating costs for under-utilized applications and servers, hence the move to stuff as many VMs into/onto a PM as possible.

However, for those applications that cannot be consolidated as outlined above, there is still a benefit to having a VM dedicated to a PM. For example, dedicating a PM (blade, server or perhaps core) to a single VM allows performance and QoS aims to be met while still providing operational and infrastructure resource management (IRM), DCIM or ITSM flexibility and agility.

Meanwhile, during busy periods an application such as a database server could have its own PM, yet during off-hours some other VM could be moved onto that PM for backup or other IRM/DCIM/ITSM activities. Likewise, by having the VM under the database with a dedicated PM, the application could be moved proactively for maintenance, or in a clustered HA scenario to support BC/DR.
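To picture this off-hours reuse of a dedicated PM, here is a minimal policy sketch (the business-hours window and function names are hypothetical illustrations, not any vendor's scheduler API):

```python
from datetime import time

# Hypothetical policy: a PM dedicated to a database VM during business
# hours may host additional VMs (e.g. backup jobs) only during off-hours.
BUSY_START, BUSY_END = time(7, 0), time(19, 0)

def can_colocate(now: time) -> bool:
    """Return True when the dedicated PM may accept additional VMs."""
    return not (BUSY_START <= now < BUSY_END)

print(can_colocate(time(12, 0)))   # False: the database owns the PM
print(can_colocate(time(23, 30)))  # True: off-hours, move a backup VM in
```

In practice the trigger would be a hypervisor scheduler or live-migration tool rather than a clock check, but the decision logic is the same idea.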

What can and should be done?
First and foremost, decide how many VMs per PM is the right number for your environment and different applications to meet your particular requirements and business needs.

Identify various VM to PM ratios to align with different application service requirements. For example, some applications may run on virtual environments with a higher number of VMs to PMs, others with a lower number of VMs to PMs and some with a one VM to PM allocation.
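As a back-of-the-envelope illustration of aligning VM to PM ratios with service tiers, consider the following sketch (the tier names and ratios are hypothetical examples, not recommendations):

```python
# Hypothetical service tiers mapping application requirements to a
# maximum VM-to-PM ratio; names and numbers are illustrative only.
TIERS = {
    "consolidation": 20,  # low-demand apps: many VMs per PM
    "balanced": 8,        # moderate performance needs
    "dedicated": 1,       # performance/QoS critical: one VM per PM
}

def pms_needed(vm_count: int, tier: str) -> int:
    """Physical machines required to host vm_count VMs at the tier's ratio."""
    ratio = TIERS[tier]
    return -(-vm_count // ratio)  # ceiling division

# 40 low-demand VMs fit on 2 PMs at 20:1, while 3 QoS-critical
# database VMs each get their own PM.
print(pms_needed(40, "consolidation"))  # 2
print(pms_needed(3, "dedicated"))       # 3
```

The point of the exercise is simply that "how many VMs per PM" is a per-tier answer, not a single data-center-wide number.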

Certainly, for various reasons, some applications will need to stay on a direct PM without introducing a hypervisor and VM. However, many applications and servers can benefit from virtualization (again note, I did not say consolidation) for agility, flexibility, BC/DR, HA and ease of IRM, assuming the costs work in your favor.

Additional general to-do or action items include, among others:

  • Look beyond CPU use also factoring in memory and I/O performance
  • Keep response time or latency in perspective as part of performance
  • More and faster memory is important for VMs as well as for applications, including databases
  • High utilization may not reflect high hit rates or effective resource usage
  • Fast servers need fast memory, fast I/O and fast storage systems
  • Establish tiers of virtual and physical servers to meet different service requirements
  • See efficiency and optimization as more than simply driving up utilization to cut costs
  • Productivity and improved QoS are also tenets of an efficient and optimized environment
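The first two bullets (look beyond CPU use, and keep latency in view) can be made concrete with a toy headroom check; the metric names and thresholds below are made up for illustration:

```python
# Toy check that flags a PM as a consolidation candidate only if CPU,
# memory, I/O rate, and response time all have headroom.
# Thresholds are illustrative assumptions, not recommendations.
LIMITS = {"cpu_pct": 60, "mem_pct": 70, "iops_pct": 65, "latency_ms": 20}

def has_headroom(metrics: dict) -> bool:
    """True only if every observed metric is below its threshold."""
    return all(metrics[k] < LIMITS[k] for k in LIMITS)

busy_db = {"cpu_pct": 35, "mem_pct": 85, "iops_pct": 90, "latency_ms": 18}
idle_web = {"cpu_pct": 10, "mem_pct": 30, "iops_pct": 5, "latency_ms": 2}

# CPU alone (35%) would suggest consolidating the database server,
# but its memory and I/O pressure say otherwise.
print(has_headroom(busy_db))   # False
print(has_headroom(idle_web))  # True
```

The database example shows why a CPU-only view misleads: a server can look idle on CPU while being memory- or I/O-bound.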

These are themes among others that are covered in chapters 3 (What Defines a Next-Generation and Virtual Data Center?), 4 (IT Infrastructure Resource Management), 5 (Measurement, Metrics, and Management of IT Resources), as well as 7 (Servers—Physical, Virtual, and Software) in my book “The Green and Virtual Data Center” (CRC), which you can learn more about here.

Welcome to life beyond consolidation, the next wave of desktop, server, storage and IO virtualization along with the many new and expanded opportunities!

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2017 Server StorageIO and UnlimitedIO LLC All Rights Reserved

  18 Responses to “Should Everything Be Virtualized?”

  2. Excellent advice Greg.

    While server consolidation is often what first attracts a prospect, often times it is the flexibility and agility to be derived from virtualizing that changes a prospect into a customer.

    By CJ Mosca on Nov 5, 2009

  3. I would submit that yes, you should indeed virtualize everything that is possible to virtualize. Virtualization gives you something much more valuable than server consolidation or even resource utilization: it buys you the ability to define a lifecycle for a physical server that would otherwise be impactful to your business. Forget Information Lifecycle Management; think Hardware Lifecycle Management. You as the customer should be careful what you let into your datacenter, but more importantly you should have a plan for how you are going to get it out of your infrastructure without an outage.

    You can use a virtual machine to divorce your hardware from your applications. When a physical server has outlived its usefulness but the application it hosts is business critical, wouldn’t it be nice to move that application to a current physical server platform? In fact, you can change all kinds of infrastructure beneath a virtual machine without affecting the virtual machine.

    What’s the big argument against this virtualize-everything position? It is that a virtual instance won’t be fast enough to take on the load, and that your application needs every possible GB of RAM or GHz of processor power it can grab onto. If that is the case, then I am also assuming that you are willing to bring your application down every 6 months to upgrade to a newer cutting-edge server? By using a virtual machine, you could in fact do just that. Buy a cutting-edge server, virtualize it, and place your application on it in a one-VM-to-one-host ratio. When a newer, faster machine comes out, migrate your application onto that new hardware and redeploy that one-step-back-from-cutting-edge server for other needs in your environment. You already know that an application in your environment will likely have a lifespan twice as long as a common server lifespan. Stop running obsolete servers; they waste power.

    Virtualization gives you flexibility over infrastructure of your own, and with virtual machines you can be the master of your own destiny. I can’t tell you what the most cost effective protocol (iSCSI over CEE, FCoE, FC 8Gb) will be in 1 year or in 18 months. Protect your ability to adopt the most cost effective next generation hardware when it becomes cost effective, without having to use the word forklift or green-field.

    By chris lionetti on Nov 6, 2009

  4. Thanks for the comments and perspectives Chris and CJ.

    If this post resonates, check out some of the others on this site as there are some recurring themes, would enjoy hearing your comments on some of those as well.

    As the industry becomes more aware of the other benefits of virtualization in the next wave which is “life beyond consolidation”, the opportunities are even greater than those currently being seen. The first wave of desktop, server, storage and other forms of consolidation are a straightforward low hanging fruit conversation and value proposition.

    It’s a different paradigm and way of thinking, in that efficiency can be more than boosting utilization or consolidation. Boosting efficiency is also about managing IT resources (server, storage, I/O networking along with software) more effectively to boost productivity, without compromising on performance or availability. In other words, enabling an efficient, productive, cost effective green and virtual data center.

    Cheers gs

    By Greg Schulz on Nov 6, 2009

  5. Great assessment and advice! But no, we shouldn’t and can’t virtualize everything. It’s simply not possible. One word: scalability. Despite many benefits and business incentives (for example, live migration is pure jaw-dropping technology), highly compute-intensive applications (e.g. scientific) are not suitable for virtualization. At least for the near future, there is a threshold where vertical scalability requirements defeat virtualization’s advantages. There is always that elephant in the room.

    By Charles Chang on Nov 7, 2009

  6. Thanks Charles for the comment and perspectives.

    To clarify, are you saying that a) not everything should be consolidated, or b) not everything should be virtualized in the context of consolidation, or c) even if an application had a dedicated physical server with a hypervisor and no other virtual machines on that server, it should not be virtualized?

    The reason I bring this up is that I concur there are some applications/workloads that should not be consolidated, some that can be virtualized yet with their own dedicated server (e.g. virtualization exists for management flexibility), and then some that fall into leave as is, at least for now.

    Cheers gs

    By Greg Schulz on Nov 7, 2009

  7. I do see the excellent points for 1-1 virtualization in the server space. I meant all of a, b and c: such apps are not a good candidate for consolidation to begin with, and then the question is whether the virtualization overhead (be it cost, system resources, restrictions, etc.) is worth the management advantages. In a way they aren’t isolated from virtualization; outside the box it’s virtualized everywhere. So the dynamic and paradigm are certainly changing, however today we still do not have a heterogeneous system virtualization management tool (much like the SAN world), and the servers in large data centers are a lot more complex to manage! Finally, underneath such a vertically scalable platform, virtualization does not seem to make capacity and performance management easier. There are more metrics to slice and dice … although indeed the tools are maturing as we speak …

    By Charles Chang on Nov 9, 2009

