Scenario #2, Tutorial #2, Part #1
In the previous tutorial, we…
- joined two XenServer (XS) hosts together to form a XenServer resource pool;
- configured an NFS shared storage repository (SR) for XS hosts to store guests’ virtual hard disk drives (VHD);
- created a dedicated storage network for the XenServer hosts to use when communicating with the SR, and;
- configured that NFS shared SR as the default SR for the resource pool.
In this tutorial, we are going to improve the resiliency and the performance of networking within XenServer (XS). But first: Let’s begin with a brief discussion of Networking and XS!
While considering the goals, keep in mind that the physical network configuration you create for one host should match the configuration on all other hosts in the pool.
Citrix XenServer Design: Designing XenServer Network Configurations
Our objective in this tutorial will be to improve the resiliency and the performance of networking within XenServer but we will not be concerned with isolation in this tutorial.
The Two Alternative Network Stacks of XenServer
For all intents and purposes: XenServer (XS) is a virtualization appliance. It is built using two major components: The Xen Project Hypervisor and a highly-customized version of CentOS Enterprise Linux:
- The Xen Project Hypervisor provides the virtualization component.
- CentOS Enterprise Linux provides the control domain (i.e., Dom0) in the form of a virtual machine (VM).
Because XenServer utilizes CentOS to provide the control domain, and because CentOS is a Linux distribution, it should not be surprising to XS administrators that the XS network stack is built upon the Linux network stack. Nor should it be surprising that, because Linux is an open-source operating system, the Linux network stack is modular and extensible.
In XenServer, two alternative components are used to extend the network functionality provided by Linux: Linux Bridge* and Open vSwitch. The two network stacks are mutually exclusive. XS administrators can easily select the network stack used in their XS environments using tools provided with XAPI.
Citrix XenServer Design: Designing XenServer Network Configurations
XenServer and Linux Bridge
Linux Bridge was the original network stack of XenServer. Linux Bridge is still available in XS but has been deprecated since Version 6.5 of XS.
What is Linux Bridge?
Blogs by Sriram: Linux Bridge and Virtual Networking
Linux Bridge was introduced in Version 2.2 of the Linux Kernel. It was later re-written for Versions 2.4 and 2.6 and remains present in current versions of the Kernel.
Why use Linux Bridge?
The Linux Bridge is quite stable, widely used, and very well understood precisely because it has been around for so long. Linux administrators may interact with Linux Bridge using standard command-line tools (e.g., the brctl command). (In XenServer, however, all network configuration is performed using the XAPI command: xe.)
XenServer and Open vSwitch
Open vSwitch is the next generation network stack for XenServer. Open vSwitch was introduced into XS in Version 5.6, Feature Pack 1, and, as of XenServer Version 6.0, has become the default network stack in XS.
What is Open vSwitch?
From the Open vSwitch Web site:
Why use Open vSwitch?
Using Open vSwitch is required for many of the advanced network features of XS:
- Cross-server private networks
- NIC Bonds that contain more than two NICs
- NIC Bonds that operate in LACP bonding mode
- Jumbo Frames
- OpenFlow®
The Open vSwitch FAQ (Summary)
The Future of Networking in XenServer
Open vSwitch has been available in XS since Version 5.6, Feature Pack 1; it has been the default network stack since Version 6.0 and will continue to be the default in future versions of XenServer:
Citrix XenServer Design: Designing XenServer Network Configurations
In this tutorial we will use the default network stack in Version 6.5 of XenServer: Open vSwitch.
Identifying and Changing the Network Stack in XenServer
XS administrators almost never change the network stack in their infrastructure, but identifying and changing the network stack can both be performed easily from the CLI. (Remember that, in this Scenario, we'll be making use of the default network stack in Version 6.5 of XenServer [Open vSwitch], so we will not be using any of the commands illustrated throughout this section of the tutorial.)
Identifying the Current Network Stack in XenServer
Two different commands can be used to identify the network stack that is currently configured on an XS host:
[root@xs-1 ~]# /opt/xensource/bin/xe-get-network-backend
openvswitch
[root@xs-1 ~]# xe host-list params=software-version | grep --color network_backend
software-version (MRO) : product_version: 6.5.0; product_version_text: 6.5; product_version_text_short: 6.5; platform_name: XCP; platform_version: 1.9.0; product_brand: XenServer; build_number: 90233c; hostname: taboth-1; date: 2016-11-11; dbv: 2015.0101; xapi: 1.3; xen: 4.4.1-xs131111; linux: 3.10.0+2; xencenter_min: 2.3; xencenter_max: 2.4; network_backend: openvswitch; xs:xenserver-transfer-vm: XenServer Transfer VM, version 6.5.0, build 90158c; xcp:main: Base Pack, version 1.9.0, build 90233c; xs:main: XenServer Pack, version 6.5.0, build 90233c
Obviously, the first command is the more convenient of the two and less prone to human error, so we recommend using the xe-get-network-backend command to identify the network stack currently in use on an XS host.
Changing the Current Network Stack in XenServer
The xe-get-network-backend command has a complement: The xe-switch-network-backend command!
The xe-switch-network-backend command can be used, along with the openvswitch command-line argument or the bridge command-line argument, to select which network stack the XS host will use. The Citrix whitepaper “XenServer Design: Designing XenServer Network Configurations” outlines the process this way:
If your pool is already up-and-running… consider the following before [changing the network stack]:
- You must run the xe-switch-network-backend command on each host in the pool separately. The xe-switch-network-backend command is not a pool-wide command. This command can also be used to revert to the standard Linux bridge.
- All hosts in the pool must use the same networking backend. Do not configure some hosts in the pool to use the Linux bridge and others to use [Open vSwitch] bridge.
- When you are changing your hosts to use [Open vSwitch], you do not need to put the hosts into Maintenance mode. You just need to run the xe-switch-network-backend command on each host and reboot the hosts.
Citrix XenServer Design: Designing XenServer Network Configurations
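The process quoted above can be sketched as a per-host CLI session. This is a minimal sketch, assuming a XenServer 6.5 host and the same /opt/xensource/bin path shown earlier; remember that in this Scenario we are keeping the default stack, so we will not actually run these commands:

```shell
# Run on EACH host in the pool (xe-switch-network-backend is not pool-wide).

# Revert a host to the (deprecated) Linux Bridge stack...
/opt/xensource/bin/xe-switch-network-backend bridge

# ...or select the default Open vSwitch stack:
/opt/xensource/bin/xe-switch-network-backend openvswitch

# The host must be rebooted for the change to take effect:
reboot
```

Note that, per the whitepaper, no Maintenance mode is required; only the per-host run and a reboot.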
Bridges and Switches and Networks – Oh, my!
In the XS vernacular a bridge is really the same thing as a [virtual] switch. Adding to the peculiarity of XS terminology is the fact that a bridge is called a network:
Citrix XenServer Design: Designing XenServer Network Configurations
To reiterate: In XenServer…
- A bridge is the same thing as a switch.
- Both are called a network.
Got it?
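The naming overlap is easy to see from the CLI. The following sketch lists the pool's networks along with the underlying bridge each one is backed by (the bridge names, typically xenbr0, xenbr1, and so on, are what Open vSwitch or Linux Bridge actually manage):

```shell
# Each XenServer "network" record exposes the bridge that implements it:
xe network-list params=uuid,name-label,bridge
```

The output makes the point concrete: every network object maps one-to-one onto a bridge (i.e., a virtual switch).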
Network Bonding
In order to improve the resiliency and the performance of the networking in XS, we’ll use a common technique that goes by many names: Network Bonding.
Chapter 4 of the “Red Hat Enterprise Linux 7 Networking Guide“
Though NIC bonding has many different names, it always describes the same concept: the joining of multiple physical NICs into a single, logical NIC. The resulting logical NIC behaves as a single NIC that offers increased resiliency and, in some configurations, increased throughput.
Citrix XenServer Design: Designing XenServer Network Configurations
Though it’s recommended to configure bonds prior to creating the resource pool…
- Using the CLI to configure the bonds on the master and then each member of the pool.
- Using the CLI to configure the bonds on the master and then restarting each member of the pool so that it inherits its settings from the pool master. [Or…]
- Using XenCenter to configure the bonds on the master. XenCenter automatically synchronizes the networking settings on the member servers with the master, so you do not need to reboot the member servers.
Chapter 4.4.6 of the “Citrix XenServer® 6.5, Service Pack 1 Administrator’s Guide“
…we’ll be configuring the bonds after the hosts have been joined into a resource pool. Also, because we’re not using XenCenter (XC), we’ll be configuring bonds on each member of the pool, individually:
Chapter 4.4.6.2 of the “Citrix XenServer® 6.5, Service Pack 1 Administrator’s Guide“
Network Bonding Modes
XenServer 6.5, Service Pack 1 supports three modes of network bonding:
- Active-Active,
- Active-Passive, and
- LACP
The three modes have very different use-cases and some modes and configurations are only available when using the Open vSwitch network stack:
- LACP bonding is only available for [Open vSwitch] whereas active-active and active-passive are available for both [Open vSwitch] and Linux Bridge.
- When [Open vSwitch] is the network stack, you can bond either two, three, or four NICs.
- When the Linux Bridge is the network stack, you can only bond two NICs.
Chapter 4.3.5 of the “Citrix XenServer® 6.5, Service Pack 1 Administrator’s Guide“
The technical details of NIC bonding can be very complex but Wikipedia provides a good explanation of the different bonding modes using slightly different names for the bonding modes:
Active-backup (active-backup)
Only one NIC slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The single, logical bonded interface’s MAC address is externally visible on only one NIC (port) to avoid distortion in the network switch. This mode provides fault tolerance.
IEEE 802.3ad Dynamic link aggregation (802.3ad, LACP)
Creates aggregation groups that share the same speed and duplex settings. Utilizes all slave network interfaces in the active aggregator group according to the 802.3ad specification… The link is set up dynamically between two LACP-supporting peers.
Adaptive transmit load balancing (balance-tlb)
[balance-tlb mode] does not require any special network-switch support. The outgoing network packet traffic is distributed according to the current load (computed relative to the speed) on each network interface slave. Incoming traffic is received by one currently designated slave network interface. If this receiving slave fails, another slave takes over the MAC address of the failed receiving slave.
Adaptive load balancing (balance-alb)
Includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic… [balance-alb mode] does not require any special network switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by [guests] on their way out and overwrites the source hardware address with the unique hardware address of one of the NICs in the logical bonded interface, such that different network-peers use different MAC addresses for their network packet traffic.
Different Bond Modes for Different Use-Cases
Citrix makes the following recommendations regarding the creation and configuration of different types of interfaces:
- VM traffic. Provided you enable bonding on NICs carrying only VM (guest) traffic, all links are active and NIC bonding can balance VM traffic across NICs. An individual VIF’s traffic is never split between NICs.
- Management or storage traffic. Only one of the links (NICs) in the bond is active and the other NICs remain unused unless traffic fails over to them…
Citrix XenServer Design: Designing XenServer Network Configurations
To be direct: Citrix recommends…
- active-passive bonds for primary and secondary interfaces, and
- active-active bonds for guest [external] interfaces.
Each of the XS hosts in the PXS Lab has six NICs. In our Scenario, these NICs will be combined into three bonds:
| NIC #1 | NIC #2 | FUNCTION | IP ADDRESS (XS-1) | IP ADDRESS (XS-2) |
| --- | --- | --- | --- | --- |
| eth0 | eth3 | PMI | 172.16.0.10/27 | 172.16.0.12/27 |
| eth1 | eth4 | External | N/A | N/A |
| eth2 | eth5 | Storage | 172.16.0.35/28 | 172.16.0.36/28 |
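Putting the table and Citrix's mode recommendations together, the per-host bond creation can be sketched as follows. All UUIDs below are placeholders (look them up on your own hosts), and the network names are assumptions for illustration:

```shell
# Find the PIF UUIDs for the physical NICs on this host, e.g. for eth0:
xe pif-list device=eth0 host-name-label=xs-1 params=uuid

# Management bond (eth0 + eth3): active-passive, per Citrix's recommendation
# for management and storage traffic.
xe bond-create network-uuid=<mgmt-network-uuid> \
    pif-uuids=<eth0-pif-uuid>,<eth3-pif-uuid> mode=active-backup

# External (guest) bond (eth1 + eth4): active-active for guest traffic.
xe bond-create network-uuid=<external-network-uuid> \
    pif-uuids=<eth1-pif-uuid>,<eth4-pif-uuid> mode=balance-slb

# Storage bond (eth2 + eth5): active-passive.
xe bond-create network-uuid=<storage-network-uuid> \
    pif-uuids=<eth2-pif-uuid>,<eth5-pif-uuid> mode=active-backup
```

Because we are not using XenCenter, these commands would be repeated on each member of the pool.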
Conclusion
In the previous Tutorial, we configured the IP address of two interfaces on the host xs-1:
- The Primary Management Interface (172.16.0.10/28), and
- The Storage Interface (172.16.0.35/28).
In Part 2 of this Tutorial, we’ll see how the IP addresses that we’ve already configured are inherited by the bonds that we create.
* Technically speaking: Linux Bridge has been integrated into the Linux network stack since Kernel Version 2.2 and, as such, Linux Bridge does not extend the functionality of the Linux network stack so much as it forms an important piece of it. However, for the purposes of this discussion, we’re going to consider it to be separate from the Linux Kernel.