Load Balancing and Failover (LBFO) refers to the traditional NIC Teaming feature in Windows Server that allows multiple physical network adapters to act as one for load distribution and redundancy. Switch Embedded Teaming (SET) is a newer technology, introduced in Windows Server 2016, that integrates NIC teaming directly into the Hyper-V virtual switch. The key difference is that LBFO is an independent teaming mechanism at the host level, whereas SET is built into the Hyper-V switch itself (hence “switch embedded”). Microsoft has shifted toward SET for Hyper-V environments because it simplifies the stack and enables advanced capabilities (like RDMA and faster VM networking) that are not possible with LBFO. Starting with Windows Server 2022, a Hyper-V virtual switch cannot be bound to an LBFO team – it must use a SET team. (In other words, LBFO is deprecated for Hyper-V networking.) This change was made to improve performance and support new features. For example, SET allows teaming on RDMA-capable NICs and even guest RDMA, as well as features like Dynamic Virtual Machine Multi-Queue (Dynamic VMMQ), which were not supported with the older LBFO approach. In summary, Microsoft now recommends SET for Hyper-V networking because it integrates better with the hypervisor and future-proofs the environment.
Differences between LBFO and SET: LBFO (the older NIC Teaming) offered more flexibility in some respects – it could team a larger number of NICs (in Windows Server 2019/2022, up to 32 adapters in one LBFO team) and had no strict requirement that the NICs be identical. SET, on the other hand, supports a maximum of 8 physical NICs in a team and requires the adapters to be symmetric (same make, model, speed, and configuration). Another difference is in teaming modes: LBFO supports various modes, including switch-dependent options (like LACP or static link aggregation) and Switch Independent mode. SET supports only Switch Independent mode, with the Hyper-V switch handling the distribution of traffic. This means features like LACP are not available with SET, but the simplification reduces complexity in Hyper-V scenarios. In practice, LBFO allowed teaming across mixed adapters and multiple switches, whereas SET requires a uniform set of NICs and is designed to work exclusively with the Hyper-V virtual switch. Microsoft’s decision to shift to SET for Hyper-V reflects the aim to streamline networking for virtualization and enable high-performance features (e.g., SET is the only supported teaming method for new software-defined networking scenarios and Azure Stack HCI). While LBFO is mature and stable, it will not see new improvements for Hyper-V usage (it remains supported only for non-virtualization scenarios). The move to SET ensures Hyper-V networks can leverage modern networking enhancements that the older LBFO teaming could not support.
Workload Support in Windows Server 2022
It’s important to distinguish which scenarios still support LBFO versus those that require SET in Windows Server 2022:
Non-Hyper-V Workloads (Physical/Standalone Roles): Traditional LBFO NIC Teaming is still fully supported for non-Hyper-V scenarios in Windows Server 2022. This means if you have a file server, SQL Server, or any standalone server role that benefits from NIC teaming for higher availability or throughput (and is not using a Hyper-V virtual switch), you can continue to use LBFO as before. The LBFO management UI and PowerShell cmdlets (New-NetLbfoTeam, etc.) are still present for these use cases. For example, teaming NICs for a standalone cluster heartbeat network or a general active/passive failover team on a physical server is still done with LBFO.
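For example, a minimal sketch of creating an LBFO team on a standalone (non-Hyper-V) server, assuming two adapters named "NIC1" and "NIC2" (the names and settings here are illustrative, not prescriptive):

# Create a switch-independent LBFO team for a non-virtualized role
New-NetLbfoTeam -Name "MgmtTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
# Confirm the team state and its members
Get-NetLbfoTeam -Name "MgmtTeam"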
Hyper-V and Virtualization Workloads: Any scenario involving a Hyper-V virtual switch (i.e. networking for virtual machines) must use Switch Embedded Teaming (SET) in Windows Server 2022. The Hyper-V virtual switch will not bind to an LBFO team interface in this release. If you attempt to create an External vSwitch on an existing LBFO NIC team, it will be blocked (the Hyper-V Manager GUI in 2022 will throw an error, as LBFO for vSwitch is deprecated). Instead, the NIC teaming for Hyper-V needs to be done via SET as part of the vSwitch creation. This applies to Hyper-V hosts and scenarios like Software Defined Networking (SDN) or Azure Stack HCI as well. (For instance, Azure Stack HCI and other SDN solutions only support SET for host teaming, not LBFO.) In summary, any workload involving virtual machine networking in WS2022 requires SET, whereas LBFO is reserved for legacy purposes outside of virtualization.
To put it simply: Use LBFO for non-virtualized roles; use SET for Hyper-V hosts. Microsoft’s support stance reflects this – the change “only applies to Hyper-V” and LBFO remains supported for other scenarios, but if you’re running Hyper-V, the recommended and supported teaming method is SET.
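If you are migrating an existing Hyper-V host, the cutover looks roughly like the sketch below, assuming an old LBFO team named "OldTeam" with members "NIC1" and "NIC2" (all names hypothetical); plan for a brief network outage while the team is rebuilt:

# Check for any existing LBFO team on the host
Get-NetLbfoTeam
# Remove the old team so the member NICs become standalone again
Remove-NetLbfoTeam -Name "OldTeam"
# Recreate the networking as a SET-based vSwitch (detailed in the next section)
New-VMSwitch -Name "HyperV-TeamSwitch" -NetAdapterName "NIC1","NIC2" -AllowManagementOS $true -EnableEmbeddedTeaming $true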
SET Teaming Configuration (PowerShell Steps)
Configuring a Switch Embedded Teaming team for Hyper-V in Windows Server 2022 can be done with PowerShell. Below are step-by-step instructions to set up a SET team optimized for Hyper-V, incorporating Microsoft’s recommendations for performance tuning:
Plan and Prepare – Identify the physical NICs on the Hyper-V host that will form the SET team. Ensure these NICs have identical link speed and capabilities (ideally the same model and firmware), because SET requires symmetric adapters for optimal performance. For example, if you plan to team two 10 GbE adapters for your virtual switch, verify both are from the same vendor/model and running at 10 Gbps. Also ensure no existing LBFO team is configured on them; the physical NICs should be standalone and enabled.
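A quick pre-check for symmetry, assuming the candidate adapters are named "NIC1" and "NIC2":

# Compare model, link speed, and driver across the candidate team members
Get-NetAdapter -Name "NIC1","NIC2" |
    Select-Object Name, InterfaceDescription, LinkSpeed, DriverVersion, Status
# All rows should show matching descriptions, speeds, and drivers, with Status 'Up'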
Create the Hyper-V Switch with Embedded Teaming – Use the New-VMSwitch cmdlet to create a new external virtual switch, specifying multiple NICs for the -NetAdapterName parameter. This automatically creates a SET team as part of the switch. For example:
New-VMSwitch -Name "HyperV-TeamSwitch" -NetAdapterName "NIC1","NIC2" -AllowManagementOS $true -EnableEmbeddedTeaming $true
In this command:
"HyperV-TeamSwitch" is the name of the new virtual switch (you can choose any friendly name).
-NetAdapterName "NIC1","NIC2" specifies the two physical network adapters to team. Replace "NIC1","NIC2" with the actual interface names of your adapters (as shown by Get-NetAdapter). You can list up to 8 NICs here, the limit supported by SET.
-AllowManagementOS $true (optional) allows the host OS to share this NIC team for management traffic. Include this if you want the Hyper-V host itself to have an IP on the teamed interface (commonly true if this team also carries host management or cluster traffic).
-EnableEmbeddedTeaming $true explicitly tells Hyper-V to create an embedded team. (Note: when you provide multiple NICs, PowerShell treats the switch as a SET team automatically; this parameter is a safeguard, especially if you start with one NIC and plan to add others later.)
This single command replaces the old multi-step process of creating an LBFO team and then attaching a vSwitch: it creates the virtual switch and teams the NICs in one step, since with SET the teaming is integrated into the switch. After running New-VMSwitch, you should see the new vSwitch (e.g., in Hyper-V Manager or via Get-VMSwitch), and the physical NICs will be part of the switch’s team.
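If the host needs more than one vNIC on the team (for example, separate management and cluster interfaces), you can add host vNICs once the switch exists. A brief sketch, with hypothetical vNIC name and VLAN ID:

# Add an extra host (ManagementOS) vNIC on the SET switch, e.g. for cluster traffic
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "HyperV-TeamSwitch"
# Optionally tag the new vNIC with a VLAN
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 20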
Optimize the Load-Balancing Algorithm – By default, a SET team uses the Dynamic load-balancing algorithm, which is fine in many cases. However, Microsoft documentation recommends the Hyper-V Port algorithm for best performance on high-speed adapters (10 Gbps and above). Depending on your workload and NIC speed, you may want to set the algorithm to HyperVPort using the Set-VMSwitchTeam cmdlet. For example:
Set-VMSwitchTeam -Name "HyperV-TeamSwitch" -LoadBalancingAlgorithm HyperVPort
This command changes the team behind "HyperV-TeamSwitch" to use Hyper-V Port mode for load balancing. (The TeamingMode is implicitly SwitchIndependent for SET and cannot be changed; SET does not support LACP, so there is no teaming mode to specify.) The Hyper-V Port algorithm distributes network traffic based on the virtual switch port, essentially per VM. Each VM’s traffic is affinitized to a particular physical NIC, which can improve throughput consistency and avoid packet reordering on 10 GbE and faster networks. If your host runs many VMs or uses very fast NICs, Hyper-V Port mode is often beneficial. Dynamic mode (the default), on the other hand, uses a combination of port and flow hashing to spread traffic and can yield better NIC utilization in some scenarios (it attempts to use all team members for outbound traffic). Choose the mode based on Microsoft’s best practices: Hyper-V Port for 10 Gbps or faster NICs (and many VMs), Dynamic for general-purpose or lower-speed networks. You can always adjust this setting with Set-VMSwitchTeam after the initial setup.
(Optional) Additional Tuning – If your deployment requires features like SR-IOV or RDMA, configure them on the vSwitch at creation time. For SR-IOV, include -EnableIov $true in the New-VMSwitch command (assuming your adapters support SR-IOV; this setting cannot be enabled on an existing switch). For RDMA on a SET team, make sure the physical NICs support RDMA, and consider using Set-VMNetworkAdapter to enable virtual RSS (vRSS) on VM adapters for better scalability of network processing. Also verify that Virtual Machine Queue (VMQ) is enabled on each physical NIC. In Windows Server 2022 these features are usually enabled by default when using SET, but it’s good to double-check (run Get-NetAdapterVmq on each team member). The goal is to follow Microsoft’s performance tuning guidance for Hyper-V networking: enable the offloads and virtualization features that are compatible with SET. (SET also supports Dynamic VMMQ, which automatically distributes incoming VM traffic processing across multiple CPU cores, improving performance on high-throughput links.)
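A sketch of these tuning checks, assuming the switch has not yet been created (SR-IOV can only be enabled at creation time) and the NIC names used earlier:

# SR-IOV must be requested when the switch is created
New-VMSwitch -Name "HyperV-TeamSwitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true -EnableIov $true
# Verify RDMA and VMQ capability/state on each team member
Get-NetAdapterRdma -Name "NIC1","NIC2"
Get-NetAdapterVmq -Name "NIC1","NIC2"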
By following the above steps, you will have created a Hyper-V switch that uses Switch Embedded Teaming under the hood, providing both redundancy and load balancing for your Hyper-V host’s networking. This configuration is fully supported in Windows Server 2022 and aligns with Microsoft’s recommended practices for Hyper-V networking.
Hyper-V Port vs. Dynamic Load Balancing Modes in SET
When using SET, there are two load balancing algorithms available for distributing traffic across the teamed NICs: Hyper-V Port and Dynamic. Understanding these modes and when to use each is important for optimizing network performance on Hyper-V hosts:
Hyper-V Port Mode: In this mode, each virtual switch port (which typically corresponds to a VM’s virtual network adapter, or the host’s management vNIC) is tied to a specific physical NIC in the team. All traffic for a given VM egresses through one team member interface (and inbound traffic is likewise affinitized per VM port by the Hyper-V switch). This one-VM-to-one-NIC mapping ensures no single VM’s traffic is split across multiple physical NICs. The benefit is that it avoids potential issues with out-of-order packets and is easier on upstream switches (each VM’s MAC/IP consistently comes from one NIC, preventing “flapping”). Microsoft notes that Hyper-V Port is often the best choice for high-bandwidth networks – specifically, it is recommended for NICs of 10 Gbps or faster to achieve optimal performance. In environments with many VMs, Hyper-V Port mode naturally spreads the VMs across the physical NICs (e.g. one VM’s traffic on NIC1, another on NIC2, etc.), achieving load distribution at a per-VM level. Use Hyper-V Port mode if you have very fast adapters or if you’ve observed better stability with it in your network. It’s particularly suited for scenarios where each VM can generate significant traffic on its own, as it guarantees a VM can use up to one NIC’s worth of bandwidth without interference from the load-balancing algorithm.
Dynamic Mode: Dynamic is a more complex algorithm that combines outbound flow-based distribution with inbound port-based distribution. In practical terms, Dynamic mode distributes outgoing traffic across the team NICs based on flow hashing, while still using Hyper-V Port behavior for inbound traffic to ensure stability. Microsoft made Dynamic the default load-balancing algorithm for SET teams because it aims to utilize all NICs efficiently even when a single VM is very busy. For example, if one VM’s traffic is heavy, Dynamic mode can spread different TCP streams (flows) from that VM across multiple NICs, potentially exceeding the throughput of a single NIC and maximizing aggregate bandwidth across the team. Dynamic mode is generally recommended for most scenarios since it provides a good balance of load distribution. However, it can be more sensitive to switch configurations – because multiple NICs may carry traffic for the same VM or IP, your physical switch might log MAC-address moves or “IP flapping” alerts if its ports are not configured for switch-independent teaming. In a properly configured Switch Independent scenario (no EtherChannel/LACP on the switch ports), Dynamic mode should work well. Use Dynamic when you want the team to balance traffic automatically and you have relatively moderate NIC speeds (1 GbE, or 10 GbE where a single VM is unlikely to saturate a NIC). It’s the default for a reason: in many deployments it yields the best overall throughput distribution across a team.
Best Practice: For Windows Server 2022 Hyper-V, start with the default Dynamic mode, but consider switching to Hyper-V Port mode on hosts with 10 GbE or faster NICs, or if you encounter stability issues with Dynamic. As noted above, Microsoft’s official guidance suggests Hyper-V Port on 10 Gbps and faster networks for best performance. Remember that with SET the teaming mode is always Switch Independent (the physical switch does not need special configuration), so these algorithms operate entirely at the host level. If you change the algorithm with Set-VMSwitchTeam, the change takes effect immediately, and you can monitor performance to decide which mode works better for your environment. Both modes provide fault tolerance (failover to the remaining NICs if one fails), so the choice mainly affects load-balancing behavior.
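To compare the two modes in practice, you can read the current setting and flip it at any time; for example, using the switch name from earlier:

# Check which algorithm is currently in use
(Get-VMSwitchTeam -Name "HyperV-TeamSwitch").LoadBalancingAlgorithm
# Revert to the default Dynamic mode if Hyper-V Port doesn't help your workload
Set-VMSwitchTeam -Name "HyperV-TeamSwitch" -LoadBalancingAlgorithm Dynamic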
Verification Steps for SET Configuration
After setting up a SET team for Hyper-V, you should verify that the configuration is correct and optimized as intended. Use the following PowerShell commands and checks to confirm a successful deployment:
List the Virtual Switch and Team Members: Run Get-VMSwitchTeam -Name "<SwitchName>" to retrieve information about the switch’s team. For example:
Get-VMSwitchTeam -Name "HyperV-TeamSwitch"
This command displays the details of the SET team associated with the switch “HyperV-TeamSwitch.” You should see output listing the team members (the physical NICs in the team), the TeamingMode (which will show as SwitchIndependent), and the LoadBalancingAlgorithm (Dynamic or HyperVPort, depending on what you configured). Verify that all expected NICs are present in the team and that the load-balancing mode matches your intended setting. For instance, if you set Hyper-V Port mode for performance, ensure the output shows LoadBalancingAlgorithm : HyperVPort. If anything is incorrect, re-run Set-VMSwitchTeam to adjust the settings.
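The output should look roughly like the following (an illustrative sketch, not verbatim output; the Id and interface descriptions will differ on your hardware):

Name                           : HyperV-TeamSwitch
Id                             : <GUID>
NetAdapterInterfaceDescription : {<NIC1 description>, <NIC2 description>}
TeamingMode                    : SwitchIndependent
LoadBalancingAlgorithm         : HyperVPort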
Check the Virtual Switch Properties: You can also run Get-VMSwitch -Name "<SwitchName>" | Format-List * to see detailed properties of the virtual switch. In the output, confirm that AllowManagementOS is True (if you intended the host to have access), and check the NetAdapterInterfaceDescription or NetAdapterName field, which should list the teamed adapters. This confirms the switch is indeed bound to multiple physical adapters (indicating a SET team). Additionally, the SwitchType should be External (for an external vSwitch). While this cmdlet doesn’t enumerate team members as clearly as Get-VMSwitchTeam, it’s useful for checking that the switch was created with the correct parameters.
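If you only want the key fields rather than the full property dump, a narrower query works too:

Get-VMSwitch -Name "HyperV-TeamSwitch" | Select-Object Name, SwitchType, AllowManagementOS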
Validate NIC Status: Ensure all physical NICs in the team are up and functioning. Use Get-NetAdapter -Name "<NIC1>","<NIC2>" to check the link status and speed of each member NIC. Each should show Status: Up and the expected LinkSpeed (e.g., 10 Gbps). If a NIC is down or has a mismatched speed, the team may not perform optimally. All team members should be connected to appropriately configured switch ports with identical settings (e.g., no VLAN set on one port but not the other). Remember that SET requires a symmetric configuration across the NICs, so any discrepancy here should be fixed at the network or adapter level.
Test Connectivity and Failover: Although not a PowerShell one-liner, a practical verification is to ensure that VMs and the host (if applicable) have network connectivity through the new SET switch. Create or attach a test VM to “HyperV-TeamSwitch”, assign it an IP, and start a continuous ping. Then disconnect one of the physical NIC cables (or disable one NIC) and confirm that traffic continues on the remaining NIC; the ping should not drop more than a packet or two during failover. This tests the failover aspect of the team. For load-balancing verification, monitor the NIC traffic counters (using Performance Monitor or Get-NetAdapterStatistics) while generating load from multiple VMs, and confirm that both NICs in the team are carrying traffic. This is more of a manual test, but it confirms that the SET team is functioning as expected for both load balancing and redundancy.
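A simple scripted version of this failover test, assuming a test VM reachable at the hypothetical address 192.168.1.50 and team members named "NIC1" and "NIC2":

# Start a long-running ping to the test VM (run this in a second console)
Test-Connection -ComputerName "192.168.1.50" -Count 60
# Meanwhile, simulate a NIC failure on one team member
Disable-NetAdapter -Name "NIC1" -Confirm:$false
# ...confirm the ping continues, then restore the adapter
Enable-NetAdapter -Name "NIC1"
# Compare per-NIC traffic counters to gauge load distribution
Get-NetAdapterStatistics -Name "NIC1","NIC2"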
Review Event Logs (if needed): The Hyper-V Virtual Switch will log events if something is misconfigured. After setting up, check the System event log for any Hyper-V Networking or VMSMP warnings/errors. For example, an event about “an LBFO team may not be attached” would indicate an attempt to use LBFO where not supported (which our configuration avoids by using SET). No such errors should be present if the SET team is correctly configured for the vSwitch.
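One way to scan for such events, as a sketch (the provider filter is an assumption; adjust it to match what your system actually logs):

# Look for recent Hyper-V networking warnings/errors in the System log
Get-WinEvent -LogName System -MaxEvents 500 |
    Where-Object { $_.ProviderName -like "*Hyper-V*" -and $_.LevelDisplayName -in "Warning","Error" }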
By performing the above verification steps, you can be confident that the transition to SET was successful. The Get-VMSwitchTeam output is the clearest confirmation – it shows that your Hyper-V switch is indeed using SET with the intended NICs. You should see team information indicating SwitchIndependent mode and either Dynamic or HyperVPort load balancing, with all members listed. This confirms you’re using the supported configuration on Windows Server 2022 (an LBFO team would not appear here – Get-NetLbfoTeam would list one if it existed, but in our case we bypass LBFO entirely). Once verified, your Hyper-V host’s networking is running on Switch Embedded Teaming, which is the Microsoft-endorsed solution moving forward. This ensures you can take advantage of the latest Hyper-V networking performance features and that you’re in line with the support policy for Windows Server 2022 Hyper-V.
Sources:
- Microsoft Docs – Features removed or deprecated in Windows Server 2022: the Hyper-V virtual switch no longer supports LBFO teams.
- Microsoft Docs – Windows Server supported networking scenarios: introduction of Switch Embedded Teaming (SET) for Hyper-V and SDN.
- Microsoft Docs – Azure Stack HCI host network requirements: SET overview and requirements (symmetric NICs, up to 8 adapters, supported algorithms).
- Microsoft Docs – Hyper-V PowerShell reference: Set-VMSwitchTeam parameters (SwitchIndependent only; HyperVPort/Dynamic load-balancing algorithms).
- Microsoft Docs – Hyper-V PowerShell reference: Get-VMSwitchTeam for viewing SET team members.