Bonding Network Interfaces, Part I

Scenario #2, Tutorial #2, Part #1

In the previous tutorial, we…

  1. joined two XenServer (XS) hosts together to form a XenServer resource pool;
  2. configured an NFS shared storage repository (SR) for XS hosts to store guests’ virtual hard disk drives (VHD);
  3. created a dedicated storage network for the XenServer hosts to use when communicating with the SR; and
  4. configured that NFS shared SR as the default SR for the resource pool.

In this tutorial, we are going to improve the resiliency and the performance of networking within XenServer (XS). But first: Let’s begin with a brief discussion of Networking and XS!

At a high level, most XenServer network design decisions stem from a combination of three major design goals: the need for redundancy, performance, or isolation. For many organizations, these goals may overlap and are not necessarily mutually exclusive.

While considering the goals, keep in mind that the physical network configuration you create for one host should match those on all other hosts in the pool.

Citrix XenServer Design: Designing XenServer Network Configurations

Our objective in this tutorial will be to improve the resiliency and the performance of networking within XenServer; we will not be concerned with isolation.

The Two Alternative Network Stacks of XenServer

For all intents and purposes: XenServer (XS) is a virtualization appliance. It is built using two major components: The Xen Project Hypervisor and a highly-customized version of CentOS Linux:

  • The Xen Project Hypervisor provides the virtualization component.
  • CentOS Linux provides the control domain (i.e., Dom0) in the form of a virtual machine (VM).

Because XenServer utilizes CentOS to provide the control domain, and because CentOS is a Linux distribution, it should not be surprising to XS administrators that the XS network stack is built upon the Linux network stack. Nor should it be surprising that, because Linux is an open-source operating system, the Linux network stack is modular and extensible.

In XenServer, two alternative components are used to extend the network functionality provided by Linux: Linux Bridge* and Open vSwitch. The two network stacks are mutually exclusive, and XS administrators can easily select the network stack used in their XS environments using tools provided with XAPI.

From a conceptual perspective, [Open vSwitch] functions the same way as the existing Linux bridge. Regardless of whether or not you use [Open vSwitch] or the Linux bridge, you can still use the same networking features in XenCenter and the same xe networking commands listed in the XenServer Administrator’s Guide.

Citrix XenServer Design: Designing XenServer Network Configurations

XenServer and Linux Bridge

Linux Bridge was the original network stack of XenServer. It is still available in XS but has been deprecated since Version 6.5 of XS.

What is Linux Bridge?

Virtual networking requires the presence of a virtual switch inside a server/hypervisor. Even though it is called a bridge, the Linux bridge is really a virtual switch… Linux Bridge is a kernel module… And it is administered using brctl command on Linux.

Blogs by Sriram: Linux Bridge and Virtual Networking

Linux Bridge was introduced in Version 2.2 of the Linux Kernel. It was later re-written for Versions 2.4 and 2.6 of the Kernel and continues to be present in the current version.

Why use Linux Bridge?

The Linux Bridge is quite stable and very well-understood because it has been around for so long and is so widely used. Linux administrators may interact with Linux Bridge using standard command-line tools (e.g., the brctl command). (However, in XenServer, all network configuration is performed using the XAPI command: xe.)
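
For illustration only, here is roughly what listing bridges with brctl looks like. The host name, bridge name, MAC address, and output below are a hypothetical sketch; remember that, on a XS host, you would not normally manage bridges with brctl directly:

[root@linux-host ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
xenbr0          8000.001122334455       no              eth0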

XenServer and Open vSwitch

Open vSwitch is the next-generation network stack for XenServer. It was introduced into XS in Version 5.6, Feature Pack 1 and, as of XenServer Version 6.0, has become the default network stack in XS.

What is Open vSwitch?

From the Open vSwitch Web site:

Open vSwitch is a production-quality, multilayer, virtual switch licensed under the open source Apache 2.0 license. It is designed to enable massive network automation through programmatic extension, while still supporting standard management interfaces and protocols (e.g. NetFlow, sFlow, IPFIX, RSPAN, CLI, LACP, 802.1ag). In addition, it is designed to support distribution across multiple [XenServer hosts] – similar to VMware’s vNetwork distributed vswitch or Cisco’s Nexus 1000V.

Open vSwitch Homepage
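
As a quick illustration, Open vSwitch also ships with its own CLI, ovs-vsctl, which will print the bridges and ports that OVS knows about on a host using the [Open vSwitch] network stack. The output below is a trimmed, hypothetical sketch (your bridge names, ports, and version string will differ) and, as with brctl, day-to-day XS configuration should still be performed with xe:

[root@xs-1 ~]# ovs-vsctl show
01234567-89ab-cdef-0123-456789abcdef
    Bridge "xenbr0"
        Port "xenbr0"
            Interface "xenbr0"
                type: internal
        Port "eth0"
            Interface "eth0"
    ovs_version: "2.1.3"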

Why use Open vSwitch?

Using Open vSwitch is required for many of the advanced network features of XS:

  1. Cross-server private networks
  2. NIC Bonds that contain more than two NICs
  3. NIC Bonds that operate in LACP bonding mode
  4. Jumbo Frames
  5. OpenFlow®

Why Open vSwitch? Open vSwitch targets a different point in the design space than previous hypervisor networking stacks, focusing on the need for automated and dynamic network control in large-scale Linux-based virtualization environments.

The Open vSwitch FAQ (Summary)

The Future of Networking in XenServer

Open vSwitch has been available in XS since Version 5.6, Feature Pack 1; it has been the default network stack for XS since Version 6.0 and will continue to be the default network stack in future versions of XenServer:

As of XenServer 6.0, the new XenServer vSwitch component is the default networking configuration. However, you can still use the Linux Bridge, which was the default networking configuration prior to XenServer 6.0, by running an xe command to change your networking configuration.

Citrix XenServer Design: Designing XenServer Network Configurations

In this tutorial we will use the default network stack in Version 6.5 of XenServer: Open vSwitch.

Identifying and Changing the Network Stack in XenServer

XS administrators almost never change the network stack in their infrastructure, but both identifying and changing the network stack can be performed easily from the CLI. (Remember that, in this Scenario, we’ll be making use of the default network stack in Version 6.5 of XenServer [Open vSwitch], so we will not be using any of the commands illustrated throughout this section of the tutorial.)

Identifying the Current Network Stack in XenServer

Two different commands can be used to identify the network stack that is currently configured on a XS host:



[root@xs-1 ~]# /opt/xensource/bin/xe-get-network-backend
openvswitch

[root@xs-1 ~]# xe host-list params=software-version | grep --color network_backend
software-version (MRO) : product_version: 6.5.0; product_version_text: 6.5; product_version_text_short: 6.5; platform_name: XCP; platform_version: 1.9.0; product_brand: XenServer; build_number: 90233c; hostname: taboth-1; date: 2016-11-11; dbv: 2015.0101; xapi: 1.3; xen: 4.4.1-xs131111; linux: 3.10.0+2; xencenter_min: 2.3; xencenter_max: 2.4; network_backend: openvswitch; xs:xenserver-transfer-vm: XenServer Transfer VM, version 6.5.0, build 90158c; xcp:main: Base Pack, version 1.9.0, build 90233c; xs:main: XenServer Pack, version 6.5.0, build 90233c

Obviously, the first command is the most convenient and the least prone to human error, so we recommend using the xe-get-network-backend command to identify the network stack currently in use on the XS host.
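
If you’d like to confirm the backend on every host in the pool in one pass, one possible approach is a small shell loop over the management addresses returned by xe host-list. This is only a sketch: it assumes password-less SSH between the pool hosts, and the addresses and output shown are illustrative (they correspond to the PMI addresses used in our lab):

[root@xs-1 ~]# for ip in $(xe host-list params=address --minimal | tr ',' ' '); do echo -n "$ip: "; ssh root@$ip /opt/xensource/bin/xe-get-network-backend; done
172.16.0.10: openvswitch
172.16.0.12: openvswitch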

Changing the Current Network Stack in XenServer

The xe-get-network-backend command has a complement: The xe-switch-network-backend command!

The xe-switch-network-backend command can be used, along with the openvswitch command-line argument or the bridge command-line argument, to select which network stack the XS host will use. The Citrix whitepaper “XenServer Design: Designing XenServer Network Configurations” outlines the process this way:

Configuring [Open vSwitch] on Running Pools

If your pool is already up-and-running… consider the following before [changing the network stack]:

  • You must run the xe-switch-network-backend command on each host in the pool separately. The xe-switch-network-backend command is not a pool-wide command. This command can also be used to revert to the standard Linux bridge.
  • All hosts in the pool must use the same networking backend. Do not configure some hosts in the pool to use the Linux bridge and others to use [Open vSwitch] bridge.
  • When you are changing your hosts to use [Open vSwitch], you do not need to put the hosts into Maintenance mode. You just need to run the xe-switch-network-backend command on each host and reboot the hosts.

Citrix XenServer Design: Designing XenServer Network Configurations
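
For reference only (remember: we will not be changing the network stack in this Scenario), switching a host to the Linux Bridge backend and back again would look roughly like the following sketch. As the whitepaper notes, the command must be run on each host, and each host must be rebooted afterwards:

[root@xs-1 ~]# xe-switch-network-backend bridge
[root@xs-1 ~]# reboot

…and, to return to the default network stack:

[root@xs-1 ~]# xe-switch-network-backend openvswitch
[root@xs-1 ~]# reboot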

Bridges and Switches and Networks – Oh, my!

In the XS vernacular a bridge is really the same thing as a [virtual] switch. Adding to the peculiarity of XS terminology is the fact that a bridge is called a network:

A network is the logical network switching fabric built into XenServer that lets you network your virtual machines. It links the physical NICs to the virtual interfaces and connects the virtual interfaces together. These networks are virtual switches that behave as regular L2 learning switches. Some vendors’ virtualization products refer to networks as virtual switches
or bridges.

Citrix XenServer Design: Designing XenServer Network Configurations

To reiterate: In XenServer…

  • A bridge is the same thing as a switch.
  • Both are called a network.

Got it?
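
You can see this terminology in action from the CLI: each XS network object carries the name of the underlying bridge on which it is implemented. The output below is a trimmed, hypothetical sketch (your UUIDs and name-labels will differ):

[root@xs-1 ~]# xe network-list params=uuid,name-label,bridge
uuid ( RO)          : aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
    name-label ( RW): Pool-wide network associated with eth0
        bridge ( RO): xenbr0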

Network Bonding

In order to improve the resiliency and the performance of the networking in XS, we’ll use a common technique that goes by many names: Network Bonding.

The combining or aggregating together of network links in order to provide a logical link with higher throughput, or to provide redundancy, is known by many names such as “channel bonding”, “Ethernet bonding”, “port trunking”, “channel teaming”, “NIC teaming”, “link aggregation”, and so on. This concept as originally implemented in the Linux kernel is widely referred to as “bonding”.

Chapter 4 of the “Red Hat Enterprise Linux 7 Networking Guide”



Though NIC Bonding has many different names, it always describes the same concept: The joining of multiple physical NICs into a single, logical NIC. The resulting logical NIC behaves as a single NIC that offers increased resiliency and, in some configurations, increased throughput.

NIC bonding is a technique for increasing resiliency and/or bandwidth in which an administrator configures two [or more] NICs together so they logically function as one network card…

Citrix XenServer Design: Designing XenServer Network Configurations

Though it’s recommended to configure bonds prior to creating the resource pool…

Whenever possible, create NIC bonds as part of initial resource pool creation prior to joining additional hosts to the pool or creating VMs. Doing so allows the bond configuration to be automatically replicated to hosts as they are joined to the pool and reduces the number of steps required… Adding a NIC bond to an existing pool requires one of the following:

  1. Using the CLI to configure the bonds on the master and then each member of the pool.
  2. Using the CLI to configure the bonds on the master and then restarting each member of the pool so that it inherits its settings from the pool master. [Or…]
  3. Using XenCenter to configure the bonds on the master. XenCenter automatically synchronizes the networking settings on the member servers with the master, so you do not need to reboot the member servers.

Chapter 4.4.6 of the “Citrix XenServer® 6.5, Service Pack 1 Administrator’s Guide”

…we’ll be configuring the bonds after the hosts have been joined into a resource pool. Also, because we’re not using XenCenter (XC), we’ll be configuring bonds on each member of the pool individually:

If you are not using XenCenter for [configuring] NIC bonding, the quickest way to create pool-wide NIC bonds is to create the bond on the master, and then restart the other pool members. Alternatively, you can use the service xapi restart command. This causes the bond and VLAN settings on the master to be inherited by each host. The management interface of each host must, however, be manually reconfigured.

Chapter 4.4.6.2 of the “Citrix XenServer® 6.5, Service Pack 1 Administrator’s Guide”
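
To make that workflow a little more concrete: from the CLI, a bond is created by first creating (or identifying) the network that the bond will attach to and then passing the UUIDs of the member PIFs to xe bond-create. The commands below are only a sketch, with placeholder values in angle brackets; we’ll walk through the actual commands for our lab in Part 2:

[root@xs-1 ~]# xe network-create name-label=<bond-network-name>
<network-uuid>
[root@xs-1 ~]# xe pif-list host-uuid=<host-uuid> params=uuid,device
[root@xs-1 ~]# xe bond-create network-uuid=<network-uuid> pif-uuids=<pif-uuid-1>,<pif-uuid-2>
<bond-uuid>

Once the bonds exist on the pool master, rebooting the member hosts (or, per the quote above, running service xapi restart on them) lets them inherit the bond and VLAN settings; the management interface on each member must still be reconfigured by hand.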

Network Bonding Modes

XenServer 6.5, Service Pack 1 supports three modes of network bonding:

  • Active-Active,
  • Active-Passive, and
  • LACP

The three modes have very different use-cases and some modes and configurations are only available when using the Open vSwitch network stack:

XenServer provides support for active-active, active-passive, and LACP bonding modes. The number of NICs supported and the bonding mode supported varies according to network stack:

  • LACP bonding is only available for [Open vSwitch] whereas active-active and active-passive are available for both [Open vSwitch] and Linux Bridge.
  • When [Open vSwitch] is the network stack, you can bond either two, three, or four NICs.
  • When the Linux Bridge is the network stack, you can only bond two NICs.

Chapter 4.3.5 of the “Citrix XenServer® 6.5, Service Pack 1 Administrator’s Guide”
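
From the CLI, the bonding mode is selected with the mode parameter of xe bond-create; if the parameter is omitted, the bond defaults to active-active. The lines below are a sketch with placeholder UUIDs, showing how the three modes map onto the parameter’s values (balance-slb is the active-active mode, active-backup is the active-passive mode, and lacp requires the [Open vSwitch] network stack):

[root@xs-1 ~]# xe bond-create network-uuid=<network-uuid> pif-uuids=<pif-1>,<pif-2> mode=balance-slb
[root@xs-1 ~]# xe bond-create network-uuid=<network-uuid> pif-uuids=<pif-1>,<pif-2> mode=active-backup
[root@xs-1 ~]# xe bond-create network-uuid=<network-uuid> pif-uuids=<pif-1>,<pif-2> mode=lacp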

The technical details of NIC bonding can be very complex, but Wikipedia provides a good explanation of the different bonding modes (using slightly different names for them):

Active-backup (active-backup)
Only one NIC slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The single, logical bonded interface’s MAC address is externally visible on only one NIC (port) to avoid distortion in the network switch. This mode provides fault tolerance.

IEEE 802.3ad Dynamic link aggregation (802.3ad, LACP)
Creates aggregation groups that share the same speed and duplex settings. Utilizes all slave network interfaces in the active aggregator group according to the 802.3ad specification… The link is set up dynamically between two LACP-supporting peers.

Adaptive transmit load balancing (balance-tlb)
[balance-tlb mode] does not require any special network-switch support. The outgoing network packet traffic is distributed according to the current load (computed relative to the speed) on each network interface slave. Incoming traffic is received by one currently designated slave network interface. If this receiving slave fails, another slave takes over the MAC address of the failed receiving slave.

Adaptive load balancing (balance-alb)
Includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic… [balance-alb mode] does not require any special network switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by [guests] on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the logical bonded interface, such that different network peers use different MAC addresses for their network packet traffic.

Wikipedia: Link aggregation
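
As an aside: on a plain Linux host that uses the kernel bonding driver described above (XenServer with the [Open vSwitch] network stack does not; OVS implements bonding itself), the active mode of an existing bond can be read from sysfs. The example below is hypothetical and assumes a bond named bond0 already exists:

[root@linux-host ~]# cat /sys/class/net/bond0/bonding/mode
802.3ad 4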

Different Bond Modes for Different Use-Cases

Citrix makes the following recommendations regarding the creation and configuration of different types of interfaces:

XenServer can only send traffic over two or more NICs when there is more than one MAC address associated with the bond. XenServer can use the virtual MAC addresses in the VIF to send traffic across multiple links. Specifically:

  • VM traffic. Provided you enable bonding on NICs carrying only VM (guest) traffic, all links are active and NIC bonding can spread VM traffic across NICs. An individual VIF’s traffic is never split between NICs.
  • Management or storage traffic. Only one of the links (NICs) in the bond is active and the other NICs remain unused unless traffic fails over to them…

Citrix XenServer Design: Designing XenServer Network Configurations

To be direct: Citrix recommends…

  • active-passive bonds for primary and secondary interfaces, and
  • active-active bonds for guest [external] interfaces.

Each of the XS hosts in the PXS Lab has six NICs. In our Scenario, these NICs will be combined into three bonds:

TABLE #1

NIC #1    NIC #2    FUNCTION    IP ADDRESS (XS-1)    IP ADDRESS (XS-2)
eth0      eth3      PMI         172.16.0.10/27       172.16.0.12/27
eth1      eth4      External    N/A                  N/A
eth2      eth5      Storage     172.16.0.35/28       172.16.0.36/28

Conclusion

In the previous Tutorial, we configured the IP addresses of two interfaces on the host xs-1:

  • The Primary Management Interface (172.16.0.10/28), and
  • The Storage Interface (172.16.0.35/28).

In Part 2 of this Tutorial, we’ll see how the IP addresses that we’ve already configured are inherited by the bonds that we create.

* Technically speaking: Linux Bridge has been integrated into the Linux network stack since Kernel Version 2.2 and, as such, Linux Bridge does not so much extend the Linux network stack as form an important piece of it. However, for the purposes of this discussion, we’re going to treat it as separate from the Linux Kernel.

Changelog: This tutorial was last modified 20-Jul-2017