
Performance Tuning Windows Server 2008 R2 Hyper-V: Storage I/O

In our last article we focused on tuning processor and memory performance. In this article we'll continue tuning Hyper-V performance, looking in particular at the storage system used by your Windows Server 2008 R2 Hyper-V solution.

Supported Technology

Hyper-V supports so-called synthetic and emulated storage devices in virtual machines, where the synthetic devices generally offer better throughput and response times as well as reduced CPU overhead. The exception is the emulated IDE device, whose I/Os can be rerouted to the synthetic storage path by a filter driver, as described below. Virtual hard disks (VHDs) can be backed by one of three types of VHD file, or by raw (pass-through) disks. This article describes the different options and considerations for tuning storage I/O performance.

Synthetic SCSI Controller

The synthetic storage controller provides significantly better performance and lower CPU overhead than the emulated IDE device. Using it requires an enlightened driver in the guest operating system, which is installed as part of the virtual machine integration services. The operating system disk is mounted on the IDE device so the VM can boot, but the integration services load a filter driver that reroutes IDE device I/Os to the synthetic storage device.

Microsoft recommends that you mount the data drives directly to the synthetic SCSI controller because of the reduced CPU overhead in this type of configuration. You should also mount log files and the operating system paging file directly to the synthetic SCSI controller if their expected I/O rate is high.

For highly intensive storage I/O workloads that span multiple data drives, each VHD should be attached to a separate synthetic SCSI controller for better overall performance. In addition, each VHD should be stored on separate physical disks.

Virtual Hard Disk Types

There are three types of VHD file. For production servers, fixed-size VHD files are recommended for better performance and to make sure that the virtualization server cannot run out of disk space while expanding a VHD file at run time. The performance characteristics and trade-offs of the three VHD types are as follows (a command-line example of creating a fixed-size VHD follows the list):

· Dynamically expanding VHD – Space for the VHD is allocated on demand. The blocks in the disk start as zeroed blocks but are not backed by any actual space in the file. Reads from such blocks return a block of zeros. When a block is first written to, the virtualization stack must allocate space within the VHD file for the block and then update the metadata. This increases the number of necessary disk I/Os for the write and increases CPU usage. Reads and writes to existing blocks incur both disk access and CPU overhead when looking up the blocks’ mapping in the metadata.

· Fixed-size VHD – Space for the VHD is allocated when the VHD file is created. This type of VHD is less prone to fragmentation, which reduces I/O. It has the lowest CPU overhead of the three VHD types because reads and writes do not need to look up the mapping of the block.

· Differencing VHD – The VHD points to a parent VHD file. Any write to a block never written to before results in space being allocated in the VHD file, as with a dynamically expanding VHD. Reads are serviced from the VHD file if the block has been written to; otherwise, they are serviced from the parent VHD file. In both cases, the metadata is read to determine the mapping of the block. Reads and writes to this VHD can consume more CPU and result in more I/Os than a fixed-size VHD.
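
For illustration, a fixed-size VHD can be created ahead of time from an elevated command prompt with diskpart; the path and size below are example values only:

    diskpart
    DISKPART> create vdisk file="D:\VHDs\data1.vhd" maximum=51200 type=fixed
    DISKPART> exit

The maximum parameter is given in megabytes, so this sketch allocates a 50 GB fixed-size VHD. Creation can take some time because the entire file is allocated up front.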

Snapshots by themselves do not affect performance much. However, a long chain of snapshots can affect performance, because reading from the VHD can require checking for the requested blocks in many differencing VHDs. Keep your snapshot chains short to maintain good disk I/O performance.

Pass-through Disks

Instead of being backed by a VHD file, a virtual disk in a VM can be mapped directly to a physical disk or logical unit number (LUN). The benefit is that this configuration bypasses the file system (NTFS) in the root partition, which reduces the CPU usage of storage I/O. The drawback is that physical disks or LUNs can be more difficult to move between machines than VHD files.

Large data drives are good candidates for pass-through disks, especially if they are I/O intensive. VMs that are to be migrated between virtualization servers (for example, with quick migration) must also use drives that reside on a LUN of a shared storage device.
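
Note that a physical disk must be taken offline in the root partition before it can be attached to a VM as a pass-through disk. This can be done in Disk Management or, for example, with diskpart (disk 2 below is only an illustrative disk number):

    diskpart
    DISKPART> list disk
    DISKPART> select disk 2
    DISKPART> offline disk
    DISKPART> exit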

Disabling File Last Access Time Check

Windows Server 2003 and earlier update the last-accessed time of a file when applications open, read, or write to that file. This increases the number of disk I/Os and thus the CPU overhead of virtualization. If your applications do not use the last-accessed time, consider setting this registry value to disable the updates:

HKLM\System\CurrentControlSet\Control\FileSystem\NtfsDisableLastAccessUpdate (REG_DWORD)

Set the value to 1 to disable last-access time updates.

Windows Server 2008 R2 disables the last-access time updates by default.
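
On systems where the updates are still enabled, the setting can be applied from an elevated command prompt in either of the following ways; both are standard Windows commands, and a reboot is needed for the change to take full effect:

    reg add HKLM\System\CurrentControlSet\Control\FileSystem /v NtfsDisableLastAccessUpdate /t REG_DWORD /d 1 /f

    fsutil behavior set disablelastaccess 1

The current state can be checked with fsutil behavior query disablelastaccess.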

Physical Disk Topology

It is recommended that VHDs used by I/O-intensive virtual machines not be placed on the same physical disks, because those disks can become a bottleneck. If possible, the VHDs should also not be placed on the same physical disks that the root partition uses.

I/O Balancer Controls

The virtualization stack balances storage I/O streams from the various virtual machines so that each VM has similar I/O response times when the system's I/O bandwidth is saturated. You can fine-tune the balancing algorithm by using the following registry keys:

HKLM\System\CurrentControlSet\Services\StorVsp\<Key> = (REG_DWORD)

HKLM\System\CurrentControlSet\Services\VmSwitch\<Key> = (REG_DWORD)

The virtualization stack tries to fully use the I/O device’s throughput while providing a reasonable balance. The first path listed should be used for storage scenarios, and the second path should be used for networking scenarios.

Both storage and networking have three registry keys that can be used to fine-tune I/O balancing. Microsoft does not recommend this advanced tuning unless you have a specific reason to use it, and there is no guarantee that these registry keys will be available in future versions of Hyper-V (a command-line example follows the list):

· IOBalance_Enabled – The balancer is enabled when set to a nonzero value and disabled when set to 0. The default is enabled for storage and disabled for networking. Enabling the balancing for networking can add significant CPU overhead in some scenarios.

· IOBalance_KeepHwBusyLatencyTarget_Microseconds – This controls how much work, represented by a latency value, the balancer allows to be issued to the hardware before throttling to provide better balance. The default is 83 ms for storage and 2 ms for networking. Lowering this value can improve balance but will reduce some throughput. Lowering it too much significantly affects overall throughput. Storage systems with high throughput and high latencies can show added overall throughput with a higher value for this parameter.

· IOBalance_AllowedPercentOverheadDueToFlowSwitching – This controls how much work the balancer issues from a VM before switching to another VM. This setting is primarily for storage where finely interleaving I/Os from different VMs can increase the number of disk seeks. The default is 8 percent for both storage and networking.
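
Purely as an illustration of the mechanics (keeping the caution above in mind), one of these values could be set and inspected for the storage path from an elevated command prompt as follows; the value shown is an example, not a recommendation:

    reg add HKLM\System\CurrentControlSet\Services\StorVsp /v IOBalance_Enabled /t REG_DWORD /d 1 /f

    reg query HKLM\System\CurrentControlSet\Services\StorVsp /v IOBalance_Enabled

The same pattern applies to the other two keys and to the VmSwitch path for networking scenarios.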

 

In our next article we'll tackle network I/O performance tuning best practices.

Ard-Jan Barnas
