What's new in vSphere 5 (Part 2)

by Scott D. Lowe [Published on 25 Aug. 2011 / Last Updated on 25 Aug. 2011]

In this article I will outline the major new features in vSphere 5 and, for some features, will take a deep dive to explain why the changes matter and what impact they may have on your environment.

If you would like to read the first part in this article series, please go to What’s new in vSphere 5 (Part 1).

Introduction

In July 2011, VMware announced a major release of vSphere. vSphere 5 boasts many new and enhanced features and further extends virtualization’s reach into even more intensive applications, such as large tier 1, line of business, mission critical applications. In this, Part 2 of an article series, I will outline the major new features in vSphere 5 and, for some features, will take a deep dive to explain why the changes matter and what impact they may have on your environment.

Licensing update

Since Part 1 of this series was published, VMware has, indeed, made major changes to the new licensing model. For a time, most of vSphere 5’s new features were overshadowed by the controversial vRAM licensing scheme that was announced with the vSphere 5 release.

Here’s what changed:

  • Essentials: 32 GB vRAM entitlement (was 24 GB)
  • Essentials Plus: 32 GB vRAM entitlement (was 24 GB)
  • Enterprise: 64 GB vRAM entitlement (was 32 GB)
  • Enterprise Plus: 96 GB vRAM entitlement (was 48 GB)
  • Free ESXi: 32 GB vRAM entitlement (was 8 GB)
  • VDI use: no vRAM limit

Further, VMware will cap the amount of RAM that counts against the vRAM pool at 96 GB per VM. So, if you have a VM with 128 GB of RAM assigned to it, only 96 GB will count in the vRAM calculations used for licensing purposes. Under the original scheme, a massive VM was no longer financially feasible, but this change brings things back to reality a bit. The original worry was that large companies looking to virtualize tier 1 applications would end up paying a substantial sum in VMware licenses to do so.
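The revised accounting is easy to work through with a little arithmetic. Here is a minimal sketch of the calculation, assuming only what's stated above (the 96 GB per-VM cap and per-VM summation); the function names are illustrative, not any VMware API:

```python
# Sketch of the revised vRAM accounting: RAM above 96 GB on any
# single VM does not count against the licensed vRAM pool.
PER_VM_CAP_GB = 96

def counted_vram(vm_ram_gb):
    """vRAM that a single powered-on VM counts against the pool."""
    return min(vm_ram_gb, PER_VM_CAP_GB)

def pool_usage(vm_ram_list_gb):
    """Total vRAM counted for licensing across all powered-on VMs."""
    return sum(counted_vram(r) for r in vm_ram_list_gb)

# The 128 GB VM from the example above counts only 96 GB;
# smaller VMs count in full: 96 + 32 + 16 = 144.
print(pool_usage([128, 32, 16]))
```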

Storage DRS

Available in Enterprise Plus only.

If you’re familiar with and rely on VMware’s Distributed Resource Scheduler to keep your running workloads balanced across hosts, you’ll love the fact that VMware has extended this mechanism to the storage side of the resources equation.

As you create datastore clusters (I’ll explain these in a minute), you have the option to enable Storage DRS (SDRS) as a part of that cluster’s attributes. When enabled, the cluster is, by default, balanced based on used space, with a threshold configured at 80%. So, if usage in a particular datastore exceeds 80%, SDRS uses Storage vMotion to migrate virtual machines out of the overloaded datastore into ones with more free space. When used appropriately, SDRS can make it virtually impossible to run out of disk space in a datastore. Further, SDRS isn’t going to take actions that have relatively little benefit, by virtue of another configurable threshold: a default “delta” threshold of 5%. If a move wouldn’t produce a utilization difference of at least 5% between the source and target datastores, the VM won’t move.

Better yet, SDRS doesn’t rely just on capacity. After all, capacity is but one consideration when it comes to managing and monitoring storage. Perhaps even more important, overall storage performance is a key metric and is one on which SDRS can base decisions. When SDRS sees I/O latency exceeding 15 ms (default), it will begin to look at ways that VMs can be moved to other datastores to correct this imbalance. I/O latency is a performance killer and used to require constant vigilance to keep in check. SDRS can help ease the administrative burden associated with maintaining a virtual infrastructure that operates at peak levels.

Obviously, you can change all of these thresholds to match the needs of your organization.
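The two space-related defaults described above can be sketched as a simple decision function. This is an illustration of the logic only, under the stated defaults (80% space threshold, 5% delta); it is an assumption for clarity, not VMware's actual SDRS algorithm:

```python
# Illustrative sketch of the SDRS space-balancing decision:
# act only when the source datastore is over the 80% threshold
# AND the move yields at least a 5% utilization difference.
SPACE_THRESHOLD = 0.80   # default: act above 80% used space
MIN_DELTA = 0.05         # default: skip moves with < 5% benefit

def should_migrate(source_used, dest_used):
    """True if a Storage vMotion from source to dest is worthwhile."""
    if source_used <= SPACE_THRESHOLD:
        return False                 # source not overloaded
    if source_used - dest_used < MIN_DELTA:
        return False                 # benefit too small to bother
    return True

print(should_migrate(0.85, 0.60))  # True: overloaded, clear benefit
print(should_migrate(0.85, 0.83))  # False: delta under 5%
print(should_migrate(0.70, 0.40))  # False: under the 80% threshold
```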

Profile-Driven Storage

Available in Enterprise Plus only.

With Profile-Driven Storage, administrators can take some of the guesswork out of initial virtual machine placement by implementing broad rules under which this placement will take place. As virtual machines fall out of compliance, Storage vMotion is used to place them back into compliance with policy.

Profile-Driven Storage is also a way for administrators to make use of tiered storage and ensure that critical workloads run on appropriate storage.

This new feature is closely associated with Storage DRS and supports Fibre Channel, iSCSI and NFS.

vMotion improvements

Available in all editions.

vMotion is a key element in high availability in a vSphere environment. In vSphere 5, VMware has made a number of improvements to vMotion. For example, vSphere 5 can now load balance vMotion traffic across multiple vMotion-enabled network adapters, significantly decreasing the amount of time that it takes for a vMotion operation to complete. Even when a single vMotion operation is underway, vSphere is able to use multiple links to speed things up. VMware also claims that a vMotion operation in vSphere 5 can saturate a 10 Gb Ethernet link!

Metro vMotion

Available in Enterprise Plus only.

A new feature in vSphere 5, Metro vMotion extends vMotion to longer-latency networks. With Metro vMotion, the supported round-trip latency for vMotion traffic is increased from 5 ms to 10 ms.

For all editions of vSphere below Enterprise Plus, the supported round-trip latency remains at 5 ms.

Image Builder

In order to facilitate some other new features, vSphere 5 includes a PowerCLI command set, called Image Builder, that allows administrators to create ESXi system images. Administrators can also customize images and update them with new patches and drivers. From there, it’s a snap to use Auto Deploy to send updates down to each host.

Auto Deploy

Provisioning dozens or hundreds of vSphere servers can be a time-consuming task, even when using Host Profiles. vSphere 5 includes a new feature called Auto Deploy that aims to massively reduce the amount of time it takes to deploy vSphere servers en masse by creating a base image and then deploying it to new hardware. This feature leverages the Image Builder feature discussed previously.

Like many provisioning technologies, Auto Deploy leverages standard PXE boot: the host obtains a DHCP-provided IP address and is redirected to a TFTP server, which in turn directs the host to stream an ESXi image over the network into memory on the intended ESXi host. Once the image has been loaded into RAM, ESXi boots and contacts a vCenter Server for further instructions, which might include, for example, the application of a host profile.

Auto Deploy uses a rules engine to dictate which host gets which image. The rules engine is consulted during the provisioning stage, as hardware information is fed to the Auto Deploy server so that appropriate rules can be applied.
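The rules-engine idea above is just a first-match lookup from hardware attributes to an image. Here is a minimal sketch of that pattern; the attribute names, patterns, and image profile names are invented for illustration (Auto Deploy's real rules are managed through PowerCLI, not code like this):

```python
# Illustrative first-match rules engine: map a host's reported
# hardware attributes to an image profile, falling back to a
# wildcard rule. All names here are hypothetical.
RULES = [
    # (attribute, pattern, image profile) -- first match wins
    ("vendor", "Dell", "esxi5-dell-profile"),
    ("vendor", "HP",   "esxi5-hp-profile"),
    ("vendor", "*",    "esxi5-generic-profile"),
]

def pick_image(host_attrs):
    """Return the image profile for the first rule the host matches."""
    for attr, pattern, image in RULES:
        if pattern == "*" or host_attrs.get(attr) == pattern:
            return image
    return None

print(pick_image({"vendor": "Dell"}))    # esxi5-dell-profile
print(pick_image({"vendor": "Lenovo"}))  # falls through to generic
```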

Beyond simplifying the initial deployment process, Auto Deploy has some other serious upsides, too. For example, using this service, you can eliminate the need for a host to boot from any kind of local storage. Since a full boot image is sent to the machine over the network, the host can simply boot from this image, reducing the potential for hardware failures due to moving parts (think hard drives) and making host replacement a breeze, since ESXi is essentially abstracted from the hardware.

This is a very cool feature that can aid in improving availability and can reduce the effort it takes to patch ESXi hosts. Now, you just patch the image, reboot your hosts, and you’re done!

Port mirroring

In the networking world, port mirroring has long been used to monitor network traffic for use in troubleshooting and compliance purposes. With vSphere 5, port mirroring has been added to Distributed Switches to aid in debugging complex network issues that can affect virtual machine performance. When port mirroring is enabled on a port, all traffic to and/or from that port is copied to another port, virtual machine or other uplink where other network monitoring tools can then be used to further analyze the traffic.

Network I/O improvements

As organizations take advantage of the benefits that come from consolidating I/O, managing that I/O becomes ever more critical, particularly as latency sensitive applications become a part of the convergence. As I/O consolidation takes place, low priority applications have the potential to significantly disrupt networking protocols that are more sensitive and require higher levels of priority.

vSphere administrators can allocate I/O shares and limits based on the kind of traffic. In addition, administrators can create custom traffic types based on business needs. Under vSphere 5, Network I/O Control can manage the following kinds of traffic, in addition to user-defined traffic types:

  • Virtual machine traffic
  • Management traffic
  • iSCSI traffic
  • NFS traffic
  • Fault-tolerant traffic
  • VMware vMotion traffic
  • vSphere replication traffic
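Shares work proportionally: under contention, each traffic type receives bandwidth in proportion to its share weight. This sketch illustrates the general share-based mechanism only; the share values and the 10 Gb link are assumptions, and limits (absolute caps) are not modeled:

```python
# Proportional, share-based bandwidth allocation -- the general
# mechanism behind Network I/O Control shares. Values illustrative.
LINK_GBPS = 10.0

def allocate(shares):
    """Divide link bandwidth among active traffic types by share weight."""
    total = sum(shares.values())
    return {name: LINK_GBPS * s / total for name, s in shares.items()}

# With 100/50/50 shares on a saturated 10 Gb link, VM traffic gets
# twice the bandwidth of vMotion: 5.0 vs 2.5 Gbps.
bw = allocate({"vm": 100, "vmotion": 50, "iscsi": 50})
print(bw["vm"])       # 5.0
print(bw["vmotion"])  # 2.5
```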

vSphere Storage Appliance (VSA)

SMBs rejoice! VMware has made available a product that enables SMBs that may have budgetary difficulty obtaining a SAN for shared storage to make use of some of vSphere’s advanced availability features, such as vMotion, without buying an expensive piece of hardware.

Enter the vSphere Storage Appliance (VSA).

VSA is a software-enabled service that repurposes the internal storage in each ESXi host and converts it into a shared storage medium that can support a number of VMware’s high availability features – all without the expense of buying a SAN. VSA supports two- and three-node clusters – no more, no less.

VMware plans to sell the VSA with a list price of $5,995.

If you’re interested in looking at another product that accomplishes a similar goal, take a look at HP/LeftHand’s StorageWorks P4000 Virtual SAN Appliance Software.

Summary

This past week, I attended Gestalt IT’s Tech Field Day 7 in Austin, where we visited Dell, Symantec, SolarWinds and Veeam. A hot topic was the new vSphere 5 and how it impacts VMware’s place in the market. While there was broad agreement that vSphere 5 widens the gap between VMware and other hypervisors – namely Microsoft – there was disagreement on how badly VMware may have damaged its reputation through the introduction of vRAM entitlements. While it was generally accepted that people won’t just drop VMware in favor of Microsoft, a number of people felt that VMware might have created enough of a stir to get people to consider whether there are alternatives to its products. At present, many won’t give anything else a look. However, it will be interesting to see what happens over time as the feature gap between VMware and its competitors narrows.

All that said, vSphere 5’s new licensing notwithstanding, the new features found in the product are formidable and will help ease the overall administrative burden associated with managing a virtual environment, while improving overall availability of key resources, such as storage.

If you would like to read the first part in this article series, please go to What’s new in vSphere 5 (Part 1).

The Author — Scott D. Lowe


Scott has written thousands of articles and blog posts and has authored or coauthored three books, including Microsoft Press’ Exchange Server 2007 Administrators Companion and O’Reilly’s Home Networking: The Missing Manual. In 2012, Scott was also awarded VMware's prestigious vExpert designation for his contributions to the virtualization community.
