Top Five Hyper-V Best Practices
By Chaffie McKenna, NetApp
Microsoft® Hyper-V™ virtualization technology has been shipping for more than a year. Tech OnTap
profiled the use of Hyper-V with NetApp® technology in several past articles, including an overview
article and a detailed case study of one customer’s experiences.
NetApp has been involved with hundreds of Hyper-V deployments and has developed a detailed body
of best practices for Hyper-V deployments on NetApp. Tech OnTap asked me to highlight the top five
best practices for Hyper-V on NetApp, with special attention to the recently released Hyper-V R2:
Network configuration in Hyper-V environments
Setting the correct iGroup and LUN protocol type
Virtual machine disk alignment
Using cluster shared volumes (CSVs)
Getting the most from NetApp storage software and tools
You can find full details on these items and much more in NetApp Storage Best Practices for
Microsoft Virtualization, which has been updated to include Hyper-V R2.
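One of the practices listed above, virtual machine disk alignment, comes down to simple arithmetic: a guest partition is aligned when its starting byte offset is an even multiple of the storage block size. The following Python sketch is illustrative only (the 4,096-byte block size and the helper name are assumptions, not NetApp tooling):

```python
# Illustrative alignment check: a partition is aligned when its starting
# offset lands exactly on a storage block boundary.
STORAGE_BLOCK = 4096  # bytes; an assumed typical storage block boundary

def is_aligned(partition_offset_bytes, block_size=STORAGE_BLOCK):
    """Return True if the partition start falls on a block boundary."""
    return partition_offset_bytes % block_size == 0

# Older Windows guests default to a 63-sector (31.5 KiB) starting offset,
# which does not divide evenly by 4 KiB -> misaligned.
print(is_aligned(63 * 512))     # False
# Windows Server 2008 and later default to a 1 MiB offset -> aligned.
print(is_aligned(1024 * 1024))  # True
```

A misaligned guest partition causes each guest I/O to straddle two storage blocks, doubling back-end work, which is why alignment appears in the top-five list.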
BP #1: Network Configuration in Hyper-V Environments
There are two important best practices to mention when it comes to network configuration:
Be sure to provide the right number of physical network adapters on Hyper-V servers.
Take advantage of the new network features that Hyper-V R2 supports if at all possible.
Physical network adapters. Failure to configure enough network connections can make it appear as
though you have a storage problem, particularly when using iSCSI. Smaller environments require a
minimum of two or three network adapters, while larger environments require at least four or five. You
may require far more. Here’s why:
Management. Microsoft recommends a dedicated network adapter for Hyper-V server management.
Virtual machines. Virtual network configurations of the external type require a minimum of one physical network adapter.
IP storage. Microsoft recommends that IP storage communication have a dedicated network,
so one adapter is required and two or more are necessary to support multipathing.
Windows failover cluster. Windows® failover cluster requires a private network.
Live migration. This new Hyper-V R2 feature supports the migration of running virtual
machines between Hyper-V servers. Microsoft recommends configuring a dedicated physical
network adapter for live migration traffic.
Cluster shared volumes. Microsoft recommends a dedicated network to support the
communications traffic created by this new Hyper-V R2 feature.
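The adapter roles above add up quickly. As a rough illustration, this Python sketch (a hypothetical helper, not a Microsoft or NetApp tool) tallies the dedicated physical adapters each role calls for:

```python
# Illustrative tally of the dedicated physical network adapters recommended
# above; the function and parameter names are assumptions for this sketch.
def min_physical_nics(clustered=False, live_migration=False,
                      csv=False, ip_storage_paths=0):
    nics = 1                  # management: one dedicated adapter
    nics += 1                 # virtual machines: at least one external virtual network
    nics += ip_storage_paths  # IP storage: one per path (two or more for multipathing)
    if clustered:
        nics += 1             # Windows failover cluster private network
    if live_migration:
        nics += 1             # dedicated adapter for live migration traffic
    if csv:
        nics += 1             # dedicated network for CSV communication
    return nics

# A standalone server with two iSCSI paths:
print(min_physical_nics(ip_storage_paths=2))                    # 4
# A clustered R2 server with live migration, CSV, and two iSCSI paths:
print(min_physical_nics(True, True, True, ip_storage_paths=2))  # 7
```

This matches the guidance above: small environments need two or three adapters at minimum, larger ones at least four or five, and a fully featured R2 cluster far more.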
The following tables will help you choose the right number of physical adapters.
Table 1) Standalone Hyper-V servers.
Table 2) Clustered Hyper-V servers.
Read the Latest on Hyper-V and NetApp
Chaffie McKenna blogs regularly about all things
Hyper-V and other topics related to Microsoft
environments on her Microsoft Environments blog.
Recent posts include:
Is Hyper-V data center ready?
Networking best practices
Provisioning best practices
Complete Hyper-V Best Practices
If you’re deploying Hyper-V with NetApp storage,
access to the latest best practices information is
essential. Download the latest detailed guides:
NetApp Storage Best Practices for Microsoft Virtualization
NetApp Implementation Guide for Microsoft Virtualization
New Windows Server 2008 R2 Networking Features
Windows Server 2008 R2 adds important new
capabilities that you should use in your Hyper-V
environment if your servers and network
hardware support them:
• Large Send Offload (LSO) and Checksum
Offload (CSO). LSO and CSO are supported by
the virtual networks in Hyper-V. In addition, if your
physical network adapters support these
capabilities, the virtual traffic is offloaded to the
physical network as necessary. Most network
adapters support LSO and CSO.
• Jumbo frames. With Windows Server 2008 R2, jumbo
frame support is extended to carry up to
6 times the payload per packet (a 9,000-byte
jumbo frame versus the standard 1,500-byte
frame). This makes a huge difference in overall
throughput and reduces CPU utilization for large
file transfers. Jumbo frames are supported on
physical networks and virtual networks, including
switches and adapters. For physical networks,
all intervening network hardware (switches and
so on) must have jumbo frame support enabled.
• TCP chimney. This allows virtual NICs in child
partitions to offload TCP connections to physical
adapters that support it, reducing CPU utilization
and other overhead.
• Virtual machine queue (VMQ). VMQ improves network
throughput by distributing network traffic for
multiple VMs across multiple processors, while
reducing processor utilization by offloading
packet filtering to the physical network adapter.
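The jumbo frame benefit described above is simple arithmetic. This Python sketch (illustrative only; it treats the MTU as usable payload and ignores TCP/IP header overhead) compares packet counts for a large transfer at standard and jumbo MTUs:

```python
# Rough packet-count comparison for a large file transfer at standard
# versus jumbo MTU. Fewer packets means fewer per-packet CPU cycles.
STANDARD_MTU = 1500       # bytes per standard Ethernet frame payload (approx.)
JUMBO_MTU = 9000          # bytes per jumbo frame payload (approx.)
TRANSFER = 1_000_000_000  # a 1 GB transfer, in bytes

std_packets = -(-TRANSFER // STANDARD_MTU)  # ceiling division
jumbo_packets = -(-TRANSFER // JUMBO_MTU)

print(std_packets)                  # 666667
print(jumbo_packets)                # 111112
print(std_packets / jumbo_packets)  # roughly 6x fewer packets with jumbo frames
```

The roughly 6x reduction in packet count is where the throughput gain and the CPU savings for large transfers come from.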
Tech OnTap November 2009 | Page 2 of 10