Top Five Hyper-V Best Practices
By Chaffie McKenna, NetApp
Microsoft® Hyper-V™ virtualization technology has been shipping for more than a year. Tech OnTap profiled the use of Hyper-V with NetApp® technology in several past articles, including an overview article and a detailed case study of one customer's experiences.
NetApp has been involved with hundreds of Hyper-V deployments and has developed a detailed body of best practices for Hyper-V deployments on NetApp. Tech OnTap asked me to highlight the top five best practices for Hyper-V on NetApp, with special attention to the recently released Hyper-V Server 2008 R2.
• Network configuration
• Setting the correct igroup and LUN protocol type
• Virtual machine disk alignment
• Using cluster shared volumes (CSVs)
• Getting the most from NetApp storage software and tools
You can find full details on these items and much more in NetApp Storage Best Practices for Microsoft Virtualization, which has been updated to include Hyper-V R2.
BP #1: Network Configuration in Hyper-V Environments
Two best practices are particularly important when it comes to network configuration:
• Provide the right number of physical network adapters on Hyper-V servers.
• Take advantage of the new network features that Hyper-V R2 supports if at all possible.
Physical network adapters. Failure to configure enough network connections can make it appear as though you have a storage problem, particularly when using iSCSI. Smaller environments require a minimum of two or three network adapters, while larger environments require at least four or five. You may require far more. Here's why:
• Management. Microsoft recommends a dedicated network adapter for Hyper-V server management.
• Virtual machines. Virtual network configurations of the external type require a minimum of one network adapter.
• IP storage. Microsoft recommends that IP storage communication have a dedicated network, so one adapter is required and two or more are necessary to support multipathing.
• Windows failover cluster. Windows® failover clustering requires a private network.
• Live migration. This new Hyper-V R2 feature supports the migration of running virtual machines between Hyper-V servers. Microsoft recommends configuring a dedicated physical network adapter for live migration traffic.
• Cluster shared volumes. Microsoft recommends a dedicated network to support the communications traffic created by this new Hyper-V R2 feature.
The following tables will help you choose the right number of physical adapters; a rough tally is also sketched in the example that follows them.
Table 1) Standalone Hyper-V servers.
Table 2) Clustered Hyper-V servers.
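To make the role-by-role list above concrete, here is a minimal sketch in Python that tallies the dedicated adapters each role calls for. The role names and per-role counts are illustrative assumptions drawn from the list, not values taken from the tables or the best practices guide.

# Rough tally of dedicated physical network adapters for a Hyper-V host.
# The per-role counts below are illustrative minimums drawn from the list above;
# consult Tables 1 and 2 and the best practices guide for definitive configurations.

ADAPTER_ROLES = {
    "management": 1,       # dedicated Hyper-V server management
    "vm_traffic": 1,       # at least one adapter for external virtual networks
    "ip_storage": 2,       # dedicated iSCSI network; two adapters for multipathing
    "cluster_private": 1,  # Windows failover cluster private network
    "live_migration": 1,   # dedicated live migration network (Hyper-V R2)
    "csv": 1,              # cluster shared volumes communication (Hyper-V R2)
}

def minimum_adapters(roles):
    """Return the minimum physical adapter count for the selected roles."""
    return sum(ADAPTER_ROLES[role] for role in roles)

# Standalone Hyper-V server using iSCSI storage (no cluster networks needed).
print(minimum_adapters(["management", "vm_traffic", "ip_storage"]))  # 4

# Clustered Hyper-V R2 server using iSCSI, live migration, and CSVs.
print(minimum_adapters(ADAPTER_ROLES))  # 7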
Read the Latest on Hyper-V and NetApp
Chaffie McKenna blogs regularly about all things Hyper-V and other topics related to Microsoft environments on her Microsoft Environments blog.
Recent posts include:
• Is Hyper-V data center ready?
• Networking best practices
• Provisioning best practices
More
Complete Hyper-V Best Practices
If you're deploying Hyper-V with NetApp storage, access to the latest best practices information is indispensable.
Download the latest detailed guides:
NetApp Storage Best Practices for Microsoft Virtualization
NetApp Implementation Guide for Microsoft Virtualization
New Windows Server 2008 R2 Networking Features
Windows Server 2008 R2 adds important new capabilities that you should use in your Hyper-V environment if your servers and network hardware support them:
• Large Send Offload (LSO) and Checksum Offload (CSO). LSO and CSO are supported by the virtual networks in Hyper-V. In addition, if your physical network adapters support these capabilities, the virtual traffic is offloaded to the physical network as necessary. Most network adapters support LSO and CSO.
• Jumbo frames. With Windows 2008 R2, jumbo frame enhancements converge to support up to 6 times the payload per packet; a quick calculation after this list shows what that means in practice. This makes a huge difference in overall throughput and reduces CPU utilization for large file transfers. Jumbo frames are supported on physical networks and virtual networks, including switches and adapters. For physical networks, all intervening network hardware (switches and so on) must have jumbo frame support enabled as well.
• TCP chimney. This allows virtual NICs in child
partitions to offload TCP connections to physical
adapters that support it, reducing CPU utilization
and other overhead.
• Virtual machine queue. VMQ improves network throughput by distributing network traffic for multiple VMs across multiple processors, while reducing processor utilization by offloading packet filtering to the physical network adapter.
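To put the jumbo frame payload figure in perspective, here is a quick back-of-the-envelope sketch. It assumes the common values of a 1,500-byte payload for standard Ethernet frames and roughly 9,000 bytes for jumbo frames; actual limits depend on your adapters and switches.

# Back-of-the-envelope comparison of standard vs. jumbo frames for a large transfer.
# Assumes common MTU values: 1,500-byte standard frames and 9,000-byte jumbo frames.
import math

STANDARD_MTU = 1500  # payload bytes per standard Ethernet frame
JUMBO_MTU = 9000     # payload bytes per jumbo frame

def frames_needed(transfer_bytes, mtu):
    """Number of frames (and per-frame processing events) for a transfer."""
    return math.ceil(transfer_bytes / mtu)

transfer = 10 * 1024**3  # a 10 GiB file copy
standard = frames_needed(transfer, STANDARD_MTU)
jumbo = frames_needed(transfer, JUMBO_MTU)
print(f"standard: {standard:,} frames, jumbo: {jumbo:,} frames, "
      f"ratio: {standard / jumbo:.1f}x")
# About 6x fewer frames to build, send, and interrupt on, which is where the
# throughput gain and CPU savings come from.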