Server and desktop hypervisors (Part 1)

Published on 5 April 2011

In this article we'll look at the differences between Type 1 and Type 2 hypervisors and outline a few of the major features people look for in server hypervisors.

If you would like to read the next part in this article series please go to Server and desktop hypervisors (Part 2).

Introduction

Server and desktop hypervisors have very different needs and requirements. Organizations using server virtualization, for example, may not need some of the features offered in a desktop hypervisor product. On the other hand, developers and testers using a desktop virtualization platform may not need all of the high-end features available in a server hypervisor. Therefore, carefully choosing features based on workload need is key. In addition, hypervisors – particularly VMware’s – come in a number of different editions, so feature choice is a big decision.

In this three part article series, I’ll discuss some of the features that are of primary importance in each scenario. In this, part 1, I’ll focus primarily on the server side of the equation.

Hypervisor types

Although you won’t generally have a water cooler chat with your virtualization friends on this topic, an academic understanding of the different kinds of hypervisors is worth having. In general, all hypervisors – that is, virtualization platforms – fall into one of two categories: Type 1 and Type 2. Type 1 hypervisors, such as VMware ESX, are often referred to as “bare metal” hypervisors because they run directly on the hardware with no intermediary operating system underneath. In this case, the hypervisor itself is the operating system and controls access to the hardware. Individual guest operating systems run in virtual machines that sit atop the hypervisor, so there is only one software layer between the hardware and each virtual machine.

Type 2 hypervisors, on the other hand, sit on a conventional operating system such as Windows, Linux or Mac OS X. Examples of Type 2 hypervisors are Virtual PC and VirtualBox. The underlying operating system still runs other applications – in addition to the hypervisor software – and handles resource allocation. Virtual machines run on top of the hypervisor layer, meaning that there are two layers of software – the hypervisor and the host operating system – between each virtual machine and the hardware.

From a pure “density” perspective – that is, the sheer number of virtual machines that can run on each kind of platform – Type 1 hypervisors generally win the day because the hypervisor layer is almost always thinner and more efficient than a full operating system. When you think about it, this makes sense. The full operating system that sits beneath a Type 2 hypervisor is intentionally general purpose; serving the hypervisor is just one of many duties for which the OS is responsible. A Type 1 hypervisor exists for one purpose only: to service the needs of the individual virtual machines. As such, more resources remain available to the virtual machines because the general-purpose OS layer isn’t present.

Another way to think about the difference between Type 1 and Type 2 hypervisors is this: Type 2 hypervisors are essentially applications that install on an operating system just like other software such as Microsoft Office. Type 1 hypervisors are purpose-built operating systems designed to host other operating systems.
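
To make the “just another application” point concrete, here is a small Python sketch that drives a Type 2 hypervisor (VirtualBox, in this case) through its VBoxManage command-line tool, the same way you might script any other desktop program. It assumes VirtualBox is installed and VBoxManage is on the PATH, and the choice of which VM to start is purely illustrative.

```python
import subprocess

def list_vms():
    """Return the names of the VMs VirtualBox has registered.

    VirtualBox is a Type 2 hypervisor: VBoxManage is just another program
    running on the host operating system.
    """
    result = subprocess.run(["VBoxManage", "list", "vms"],
                            capture_output=True, text=True, check=True)
    # Each output line looks like: "vm-name" {uuid}
    return [line.split('"')[1] for line in result.stdout.splitlines() if '"' in line]

def start_vm(name):
    """Start a registered VM without opening a console window."""
    subprocess.run(["VBoxManage", "startvm", name, "--type", "headless"], check=True)

if __name__ == "__main__":
    vms = list_vms()
    print("Registered VMs:", vms)
    if vms:
        start_vm(vms[0])  # illustrative choice: start the first VM in the list
```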

The Type 1 versus Type 2 distinction does not map directly to desktop versus server (or professional versus enterprise-grade) hypervisors, although, in general, most desktop hypervisors are of the Type 2 variety that runs on top of another operating system. The reason: people use desktop hypervisor products as part of their development or testing efforts, not usually to run enterprise-level workloads.

When it comes to enterprise-level hypervisors, most are of the Type 1 variety. There is a common misconception that Hyper-V is a Type 2 hypervisor because it appears to run on top of Windows. This is incorrect. Hyper-V uses a Windows parent partition that serves as a management operating system, and the hypervisor itself loads before that management operating system does. The virtual machines that you create run on the hypervisor, not on the parent partition’s operating system.
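
One way to see this for yourself, if you’re curious: once the Hyper-V role is enabled, even the Windows installation you log into (the parent partition) is itself running on top of the hypervisor. The quick Python check below, which simply shells out to the standard systeminfo tool and looks for its hypervisor-detected message, is a rough sketch; it assumes an English-language Windows build, since the exact wording of the message can vary by version and locale.

```python
import subprocess

def hypervisor_detected():
    """Return True if this Windows instance reports that it runs above a hypervisor.

    On a machine with the Hyper-V role enabled, the management OS (the parent
    partition) sits on top of the hypervisor, so even the "physical" box
    reports that a hypervisor has been detected.
    """
    result = subprocess.run(["systeminfo"], capture_output=True, text=True, check=True)
    # English-locale wording; localized Windows builds will phrase this differently.
    return "A hypervisor has been detected" in result.stdout

if __name__ == "__main__":
    print("Running on top of a hypervisor:", hypervisor_detected())
```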

Server hypervisor needs and features

Most organizations these days run virtualized servers in their data centers. Except perhaps in the very smallest environments, the products in use are of the “enterprise” variety and include the likes of VMware vSphere, Microsoft Hyper-V and Citrix XenServer. These products are designed to run even the most demanding workloads, and they carry powerful features intended to ensure that those workloads stay available, continue to perform as expected and do so with minimal administrator interaction.

What are some of the more critical needs and features inherent in production-class server-based hypervisors? Here are just a few:

High availability

Enterprise-grade workloads all have one thing in common: They require high levels of availability. Organizations have spent significant time and money building infrastructures that eliminate single points of failure through the implementation of highly redundant components. Servers are clustered in order to protect the organization against hardware failure; individual servers include components like RAID controllers to prevent data loss if a drive fails.

Enterprise hypervisors must support high availability methods and the big players do so in ways that minimize the amount of hardware necessary. I’ll start with a discussion about some of vSphere’s high availability mechanisms:

  • Automatic restart of virtual machines, even between hosts. If a virtual machine fails, whether because a vSphere host fails or because of a failure inside the virtual machine itself, vSphere can automatically take steps to restart the affected virtual machine. If one of the vSphere hosts itself fails, vSphere will restart the virtual machine on a different vSphere host. Note that this is not the same thing as vMotion; it is simply a high availability mechanism – VMware HA – included in every edition of vSphere. Hyper-V has a similar capability. If the host machine is restarted and you’ve left the virtual machines at their default settings, each virtual machine will restart (or resume) once the boot process completes. If you’d like to learn how to manipulate the startup options for individual Hyper-V-based virtual machines, take a look at the blog posting I wrote on the topic. Note that VMware calls this availability mechanism HA, while the closest Microsoft equivalent could be considered the Quick Migration feature. Most importantly, understand that these availability mechanisms do carry a downtime penalty, albeit a short one: there is downtime, at the very least, while the virtual machine reboots, not to mention the time it might take to migrate it to a different host. For some (but not many!), this is a “good enough” mechanism.
  • Migration of live workloads between hosts. While it’s nice to have the hypervisor automatically restart workloads in the event of a failure, it’s also nice to be able to simply move a running workload from one host to another without suffering any downtime. For example, suppose you need to perform maintenance on a vSphere or Hyper-V host. You could shut down a running virtual machine, move it to another host and then restart it, but that process imposes a downtime penalty that would be unacceptable to many. There’s an answer for this, too, and it’s a must-have feature in enterprise-class virtualization software. VMware vSphere provides this capability as vMotion, Microsoft provides Live Migration, and in the Citrix world it’s called XenMotion. All accomplish the same goal: zero-downtime migration of running virtual machines between hosts. (A rough API-level sketch of this kind of migration appears after this list.)
  • Workload migration between storage devices. Another feature becoming a “must have” is the ability to migrate workloads from one storage device to another. This feature serves two purposes. First, it takes the concept of high availability to a whole new level: storage administrators can now safely work on storage devices with the same freedom that hypervisor host administrators have enjoyed. Second, it could allow the use of more commodity, less expensive storage, although I believe most organizations will – and rightly so – continue to deploy their virtual environments on robust storage platforms. At present, only VMware provides a robust way to do this with no downtime perceived by the user. Called Storage vMotion, this feature enables the live migration of virtual machine disk files between different storage arrays; it is included in the Enterprise and Enterprise Plus editions of vSphere. Microsoft does provide a feature called SAN Migration (formerly known as Quick Storage Migration) that performs a similar task, but it imposes a short period of downtime while the virtual machine is placed into a saved state as the process completes. This storage migration mechanism is also incredibly useful when it comes time to replace a SAN.
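
To give a sense of how these migrations look when driven programmatically, here is a rough Python sketch built on VMware’s pyVmomi bindings for the vSphere API. It asks vCenter to vMotion a virtual machine to another host and then Storage vMotion its disk files to another datastore. The vCenter address, credentials, virtual machine, host and datastore names are all placeholders, and error handling is glossed over; treat it as an illustration of the calls involved rather than production code. Hyper-V and XenServer expose equivalent operations through their own management interfaces.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Walk the vCenter inventory for the first object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

def main():
    # Placeholder connection details -- replace with your own vCenter and credentials.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        vm = find_by_name(content, vim.VirtualMachine, "app-server-01")        # placeholder VM
        dest_host = find_by_name(content, vim.HostSystem, "esxi-02.example.com")
        dest_ds = find_by_name(content, vim.Datastore, "datastore-02")

        # vMotion: move the running VM's compute to another host with no downtime.
        WaitForTask(vm.MigrateVM_Task(
            host=dest_host,
            priority=vim.VirtualMachine.MovePriority.defaultPriority))

        # Storage vMotion: relocate the VM's disk files to another datastore while it runs.
        WaitForTask(vm.RelocateVM_Task(
            spec=vim.vm.RelocateSpec(datastore=dest_ds)))
    finally:
        Disconnect(si)

if __name__ == "__main__":
    main()
```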

Advanced availability mechanisms

The features I outlined above are nice; high availability via the hypervisor software itself is one way in which virtualization can reduce some of the administrative overhead associated with IT infrastructure management. Now, you can do hardware maintenance on demand without having to worry about service downtime. You can move to a whole new storage array without users ever noticing.

But, for some, this isn’t enough; even more advanced management and availability mechanisms are necessary. For example, the ability to move a running workload between systems is nice, but as virtualized environments grow well beyond just a few hosts, the decision about which host to move a workload to becomes more complex. In smaller environments with just a couple of hosts, it’s easy to look at each host and decide which one best fits the workload needs of a guest virtual machine. As the number of hosts grows, the decision points become more numerous.

What is needed is a way for the hypervisor to automatically move workloads around so that they are balanced against available host resources while respecting the workload placement rules the administrator has put in place.

Both VMware and Microsoft have features for just this purpose. VMware calls its feature the Distributed Resource Scheduler (DRS), which uses vMotion to dynamically move individual virtual machines between vSphere hosts as a way to balance workloads across hosts. DRS is available in the Enterprise and Enterprise Plus editions of vSphere. Its primary purpose is to help reduce the administrative burden on the person responsible for the virtual infrastructure and make the environment as self-sustaining as possible.

With the introduction of Virtual Machine Manager 2008 and R2, Microsoft added a similar feature called Performance and Resource Optimization (PRO). This is a service that provides automatic workload placement based on performance and health information collected by the PRO management packs loaded into System Center Operations Manager.
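
Neither VMware nor Microsoft publishes the internals of DRS or PRO, but the basic decision they automate (find a host with enough headroom while honoring placement rules) can be illustrated with a deliberately simplified Python toy. The host names, resource figures and the anti-affinity rule below are invented for the example and bear no resemblance to either product’s actual logic.

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    cpu_free_mhz: int
    mem_free_mb: int
    vms: list = field(default_factory=list)

@dataclass
class VM:
    name: str
    cpu_mhz: int
    mem_mb: int
    anti_affinity: set = field(default_factory=set)  # names of VMs that must not share a host

def pick_host(vm, hosts):
    """Pick the least-loaded host that fits the VM and respects its placement rules."""
    candidates = [
        h for h in hosts
        if h.cpu_free_mhz >= vm.cpu_mhz
        and h.mem_free_mb >= vm.mem_mb
        and not (vm.anti_affinity & {other.name for other in h.vms})
    ]
    # "Least loaded" here just means the most free memory; real schedulers weigh far more.
    return max(candidates, key=lambda h: h.mem_free_mb, default=None)

def place(vm, hosts):
    host = pick_host(vm, hosts)
    if host is None:
        raise RuntimeError(f"No host satisfies the requirements for {vm.name}")
    host.vms.append(vm)
    host.cpu_free_mhz -= vm.cpu_mhz
    host.mem_free_mb -= vm.mem_mb
    print(f"Placed {vm.name} on {host.name}")

if __name__ == "__main__":
    hosts = [Host("esx-01", 8000, 32768), Host("esx-02", 12000, 16384)]
    place(VM("web-01", 2000, 4096), hosts)
    place(VM("web-02", 2000, 4096, anti_affinity={"web-01"}), hosts)  # keep the web VMs apart
```

Real schedulers, of course, weigh CPU contention, memory pressure, affinity and anti-affinity groups and much more, and they re-evaluate placements continuously rather than only at power-on.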

Summary

In this first part of a three-part series, I’ve explained the differences between Type 1 and Type 2 hypervisors and outlined a few of the major features people look for in server hypervisors. In the next part of this series, I will continue the discussion of server-based hypervisor needs, and in part 3, I will discuss desktop hypervisor needs and features.

If you would like to read the next part in this article series please go to Server and desktop hypervisors (Part 2).

The Author — Scott D. Lowe

Scott has written thousands of articles and blog posts and has authored or coauthored three books, including Microsoft Press’ Exchange Server 2007 Administrators Companion and O’Reilly’s Home Networking: The Missing Manual. In 2012, Scott was also awarded VMware's prestigious vExpert designation for his contributions to the virtualization community.
