Shared Storage Considerations for Hyper-V (Part 1)

Published on 6 Dec. 2012 / Last Updated on 6 Dec. 2012

This article series discusses the pros and cons of various Hyper-V storage options.

If you would like to read the other parts in this article series, please go to:

Introduction

One of the biggest considerations when deploying a Hyper-V server is storage. Not only must your server have adequate storage capacity, but the storage subsystem needs to be able to deliver enough I/O to meet the demands of the virtual machines. Furthermore, the storage should offer some sort of redundancy so that it does not become a single point of failure.
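When sizing the storage subsystem, a common rule of thumb is to translate the I/O the virtual machines generate ("front-end" IOPS) into the I/O the physical disks must actually perform, because redundant RAID levels multiply every write. The sketch below illustrates that calculation in Python; the function name and the example numbers are my own, and the write penalties are rules of thumb rather than vendor specifications.

```python
def backend_iops(frontend_iops, read_fraction, write_penalty):
    """Estimate the IOPS the physical disks must deliver.

    Each write to a redundant array costs extra back-end I/Os
    (the "write penalty"): roughly 2 for RAID 1/10, 4 for RAID 5,
    and 6 for RAID 6. These are rules of thumb, not guarantees.
    """
    reads = frontend_iops * read_fraction
    writes = frontend_iops * (1 - read_fraction)
    return reads + writes * write_penalty

# Example: VMs generating 1,000 IOPS at 70% reads on RAID 5
# requires roughly 700 + 300 * 4 = 1,900 back-end IOPS.
```

A four-disk RAID 5 array of 7,200 RPM SATA disks typically cannot sustain that many random IOPS, which is exactly why assessing I/O demand up front matters.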

As you can imagine, planning for Hyper-V storage is not a task to be taken lightly. Fortunately, there are a lot of options. The trick is to assess the available options and pick a solution that meets your performance, fault tolerance, and budgetary requirements.

My goal in this article is not to discuss every possible storage option for Hyper-V, but rather to give you some insight as to what works and what doesn’t work based on my own experiences in the field.

The Most Important Fault Tolerance Consideration

Even though server virtualization is widely regarded as a revolutionary technology, it has a few drawbacks. Perhaps the biggest pitfall is that a single host server runs multiple virtual machines. If that host server fails, all of the virtual machines residing on it go offline as well, resulting in a major outage.

That being the case, I always recommend that my clients implement a failover cluster as a way of preventing a host server from becoming a single point of failure. The problem is that failover clustering for Hyper-V requires the use of shared storage, which can be expensive. When Microsoft eventually releases Windows Server 8 and Hyper-V 3.0, the shared storage requirement will go away. For right now, though, Hyper-V clusters are often beyond the budget of smaller organizations. Such organizations typically end up using direct attached storage.

Direct Attached Storage

Even though local, direct attached storage does not offer the same degree of redundancy as a full-blown clustering solution, it is still possible to build in at least some redundancy by using an internal storage array. A storage array won't protect you against a server-level failure, but it will protect against a disk failure (if implemented correctly).

Before I talk about your options for RAID storage, I want to talk about a situation that I ran into a few weeks ago. I have a friend who owns a small business and runs three production servers. The servers had been in place for a long time and the hardware was starting to age. My friend didn’t really have the money to replace all of the server hardware and asked me if virtualizing the servers might be a good alternative.

After considering my friend’s very tight budget and his business needs we decided to purchase a high end PC rather than a true server. The computer had six SATA ports so we planned to use one port for a boot drive, one port for a DVD burner (which was a business requirement), and the remaining four ports for a RAID 5 array.

Even though RAID 5 has fallen out of fashion over the last few years, it made sense in this case because combining the four disks into a RAID 5 array would deliver higher performance (from an I/O perspective) than a mirror set would. Although RAID 5 does not perform as well as RAID 0, the fault tolerance provided by the built-in parity more than makes up for the loss in performance and capacity.

When all of the parts arrived, I set up the new computer the way we had planned. However, even though we had built the array from SATA 3 disks, which are rated at 6 gigabits per second, the array was painfully slow. In fact, copying files to the array yielded a sustained transfer rate of only about 1 MB per second, and copies of large files would almost always fail.
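If you want to sanity-check an array's sustained write throughput the same way, a quick timed file copy is enough to expose a problem of this magnitude. Here is a minimal Python sketch (the function name and file path are my own; real benchmarking tools account for caching far more carefully):

```python
import os
import time

def measure_write_throughput(path, size_mb=256, chunk_mb=4):
    """Write size_mb of data to path and return sustained MB/s."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force data to disk so the timing is honest
    elapsed = time.perf_counter() - start
    os.remove(path)
    return size_mb / elapsed
```

A healthy four-disk array should report tens to hundreds of MB/s; a result near 1 MB/s, as in this case, points to a configuration problem rather than normal disk behavior.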

I have built similar arrays on comparable hardware in lab environments before, so I knew that the array should perform much better than it was. My initial assumption was that the problem was driver related, but a check of all of the system’s drivers revealed that everything was up to date.

The next thing that I decided to do was to update the computer’s firmware. Over the years I have had a few bad experiences with firmware updates, so there are a couple of things that I always do prior to updating the firmware. First, I plug the computer into a UPS in case there is a power failure during the update. I have actually had the electricity go out during a firmware update and it ruined the system board.

The other thing that I do is document all of the BIOS settings. While documenting the settings I noticed that the computer's BIOS had identified all of the hard drives as IDE rather than AHCI. While this could certainly account for the performance problems, the system would not let me set the drives to AHCI mode.

After a lot of trial and error I discovered that SATA ports 1 through 4 could operate in either IDE or AHCI mode, but ports 5 and 6 could only operate in IDE mode. To fix the problem I moved the boot drive from port 1 to port 5 and moved the DVD burner from port 2 to port 6. I then set ports 1 through 4 to AHCI mode and attached the drives for the storage array.

Before I could use the server, I had to reconfigure the BIOS to boot from the drive on port 5. I also had to use the Windows Disk Management Console to completely rebuild the RAID array. Once I did, the disk array began delivering the expected level of performance.

I chose to tell this story because anyone who decides to store virtual machines on an internal RAID array could potentially run into similar problems. Since I have already worked through the troubleshooting process, I wanted to pass along my solution in the hope that it helps someone.

RAID Selection

If you end up setting up Hyper-V to use a local RAID array, you will have to decide which type of RAID array to use. Your options vary depending on the number of disks that you have to work with. Here are a few thoughts on some common RAID levels:

| RAID Level | Description | Comments |
|---|---|---|
| 0 | Striping | RAID 0 delivers high performance, but does not provide any fault tolerance. |
| 1 | Mirroring | Disk mirroring is great for redundancy, but RAID 1's performance is almost always inadequate for hosting virtual machines. |
| 5 | Striping with parity | RAID 5 delivers the performance of a stripe set (although not as good as RAID 0), and the array can continue functioning even if one disk fails. |
| 6 | Striping with double parity | RAID 6 has a higher degree of overhead than RAID 5, but the array can survive a double disk failure. |
| 10 | Stripe of mirrors | RAID 10 (sometimes called RAID 1+0) offers the performance of a stripe set with full mirroring. RAID 10 typically delivers the best bang for the buck, but it takes a lot of hard disks to build an adequately performing array. |

Table 1
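The capacity and fault-tolerance trade-offs in Table 1 can also be worked out numerically. The sketch below illustrates the standard capacity arithmetic for each level, assuming equal-size disks; the function name and the example disk sizes are my own, and real controllers reserve additional space for metadata and spares.

```python
def usable_capacity_gb(level, disks, disk_gb):
    """Usable capacity (GB) for common RAID levels with equal-size disks."""
    if level == 0:
        return disks * disk_gb            # striping: no redundancy overhead
    if level == 1:
        return disk_gb                    # every disk mirrors one disk's worth
    if level == 5:
        if disks < 3:
            raise ValueError("RAID 5 needs at least 3 disks")
        return (disks - 1) * disk_gb      # one disk's worth lost to parity
    if level == 6:
        if disks < 4:
            raise ValueError("RAID 6 needs at least 4 disks")
        return (disks - 2) * disk_gb      # two disks' worth lost to parity
    if level == 10:
        if disks < 4 or disks % 2:
            raise ValueError("RAID 10 needs an even number of disks, minimum 4")
        return (disks // 2) * disk_gb     # half the disks hold mirror copies
    raise ValueError(f"unsupported RAID level: {level}")

# A four-disk array of hypothetical 1,000 GB disks:
# RAID 5 yields 3,000 GB usable and survives one disk failure;
# RAID 10 yields 2,000 GB usable but also survives some double failures.
```

This makes the trade-off in the four-disk scenario from the story concrete: RAID 5 was the only option that offered both fault tolerance and most of the raw capacity.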

Conclusion

So far I have talked about direct attached storage for Hyper-V, but direct attached storage is not the only option. In Part 2 I will discuss shared storage.

