Reclaiming Lost Hard Drive Space on Hyper-V Host Servers (Part 1)

Published on 2 Feb. 2011 / Last Updated on 2 Feb. 2011

In spite of careful capacity planning, network administrators may discover that their Hyper-V host servers have unexpectedly run out of disk space. In this article, I will explain why this happens and what you can do about it.

If you would like to read the next part in this article series please go to Reclaiming Lost Hard Drive Space on Hyper-V Host Servers (Part 2).

Introduction

When organizations initially begin virtualizing their servers, they typically place a lot of emphasis on making sure that the host servers have sufficient CPU, memory, and disk I/O resources to support all of the virtual machines. Anyone who is planning to virtualize one or more servers is also almost certain to check to make sure that the storage arrays that will house all of the virtual hard drives have a sufficient amount of free space. Once the initial planning is finished though, the amount of available disk space tends to become something of an afterthought.

In some ways, I can understand why this happens. After all, today’s storage arrays are huge, so disk space management has become almost a non-issue for many organizations. Even so, you may suddenly find your virtualization host servers running very low on disk space.

To see why this is the case, you have to stop and think about how a lot of organizations go about creating virtual machines in a Hyper-V environment. When an organization deploys a host server, they typically have a few servers in mind that they plan to host there. As such, the organization will most likely do some capacity planning to make sure that the new host server has sufficient resources to run all of the virtual servers, plus some room left over for future expansion.

With that in mind, let’s fast forward about six months or so. By this time, the virtual servers have been running on the new host server for several months. Since the server was specifically ordered with future growth in mind, the organization may decide to deploy a few more virtual servers now that the host server has proven itself to be stable.

Before the IT staff at our fictitious organization begins deploying any new virtual servers, they obviously check the host server to make sure that it has sufficient hardware resources available. Assuming that the necessary resources are available, the new virtual servers are deployed.

Even though this sounds like a perfectly reasonable way of deploying virtual servers, if you use this method it is easy to accidentally overcommit your server's disk resources. Let me explain.

If you think back to the beginning of our fictitious deployment, you will recall that before any virtual machines were deployed onto the host server, the IT department used various capacity planning techniques to make sure that the server had sufficient hardware resources. This is exactly what should have happened.

Before I move on to the next step in the process, let’s pretend that the organization in question has a three terabyte storage array attached to the host server. Let’s also pretend that they initially decided to deploy three virtual servers which will collectively consume two of the storage array’s three terabytes, leaving one terabyte of free space available.

OK, now let’s fast forward a few months and the organization’s IT department is considering hosting a few more virtual servers on the server. Do you think that the members of the IT department are going to remember that they have exactly one terabyte of disk space to play with? Fat chance. A lot of time has passed since the first virtual servers were deployed, and odds are that the IT staff is not going to remember exactly how much disk space was available. Instead, they will most likely take the easy way out and check to see how much free space is available on the storage array.

With that being the case, let's pretend that the IT department looks at the storage array and sees that it has almost two terabytes of free disk space. As such, they deploy a couple more virtual servers and collectively allocate the remaining two terabytes of space to them. For a while everything runs fine, but one day the employees discover that the server has run completely out of disk space.

In this particular situation, our fictitious organization has a server with three terabytes of disk space, but they have allocated four terabytes of storage. How did that happen?

Situations like this one occur because when you create a virtual server in a Hyper-V environment, Hyper-V creates dynamic virtual hard disks for the virtual server unless you specifically tell it to use a virtual hard disk of a fixed size. Keep in mind that when I say that Hyper-V creates dynamic virtual hard disks, I am talking about a virtual hard disk file that expands on an as-needed basis. This is different from using Windows to convert a basic disk to a dynamic disk.

A dynamic virtual hard drive does not automatically consume all of the space that you have allocated to it. For example, even if you were to create a 500 GB virtual hard disk, that virtual hard disk might only consume 100 MB of physical disk space. The virtual hard disk file starts out small, and then expands as you add data to it. Therefore, if you have created a huge virtual hard disk, but haven’t put much data on it, then the virtual hard disk won’t take up much disk space.
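You can actually see this gap for yourself, because a VHD file records its allocated capacity in a 512-byte footer at the end of the file (the "conectix" footer defined in Microsoft's published VHD format specification). The following Python sketch is purely illustrative, not an official tool: it reads the footer's Current Size field and compares it to the space the file occupies on disk. Field offsets follow the VHD specification.

```python
import os
import struct

def vhd_allocated_bytes(path):
    """Read the VHD footer (the last 512 bytes of the file) and return
    the disk's allocated capacity. Per the VHD specification, the footer
    begins with the cookie b"conectix" and stores Current Size as a
    big-endian 64-bit integer at offset 48."""
    with open(path, "rb") as f:
        f.seek(-512, os.SEEK_END)
        footer = f.read(512)
    if footer[0:8] != b"conectix":
        raise ValueError("not a VHD file (footer cookie missing)")
    (current_size,) = struct.unpack_from(">Q", footer, 48)
    return current_size

def vhd_physical_bytes(path):
    """Space the VHD file actually occupies on the host right now."""
    return os.path.getsize(path)
```

For a dynamically expanding disk, `vhd_physical_bytes` can be a small fraction of `vhd_allocated_bytes`, and that difference is exactly the hidden growth that free-space checks on the storage array never see.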

It is the gradual growth of virtual hard disks that gets organizations such as the one that I just described into trouble. If the person who is creating virtual machines only looks at how much physical hard disk space is currently being consumed, rather than taking into account how much hard disk space has actually been allocated to each virtual hard disk, then it is possible to overcommit the host server's hard disk resources.
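The safe check is therefore to tally allocated capacity, not current consumption. Here is a minimal sketch of that arithmetic, using the numbers from our fictitious deployment (the individual disk sizes are my own invented breakdown of the article's 2 TB + 2 TB totals):

```python
def overcommitted(array_capacity_bytes, allocated_sizes):
    """Return how many bytes of virtual disk capacity have been promised
    beyond what the storage array can physically hold. A positive result
    means the host is overcommitted, even if free space looks healthy today."""
    return max(0, sum(allocated_sizes) - array_capacity_bytes)

TB = 1024**4

# A 3 TB array: three initial VHDs totalling 2 TB,
# then two more totalling another 2 TB.
allocated = [TB, TB // 2, TB // 2, TB, TB]
excess = overcommitted(3 * TB, allocated)  # 1 TB promised that does not exist
```

The host looks fine on day one because the dynamic disks are still small, but as data is added they will try to grow into that missing terabyte.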

When this happens, an administrator’s first instinct might be to delete some old data or to move some data to a different server. However, moving data off of a virtual hard drive will not solve the problem. Dynamic virtual hard disks will expand automatically when you add data to them, but they will not shrink as you remove data.

Of course this raises the question of why Hyper-V creates dynamic virtual hard drives by default. I don’t have any kind of official answer from Microsoft, but I suspect that the answer lies in the fact that dynamic virtual hard drives can be created nearly instantaneously. Virtual hard disks of a fixed size can take hours to create. As such, virtual servers can be deployed much more quickly if dynamically expanding virtual hard drives are used.

There is also another benefit to using dynamic virtual hard disks. Because dynamic virtual hard disks often consume far less space than what has been allocated to them, backing up a virtual server or moving a virtual server to a different host may take less time than it would if fixed size virtual hard disks were in use.

Conclusion

As you can see, dynamically expanding virtual hard drives can cause problems unless you are careful not to over allocate disk space. In Part 2, I will continue the discussion by showing you some techniques for reclaiming some hard disk space from dynamic virtual hard drives.

