An Overview of the Current Network Configuration

Scenario #2, Tutorial #2, Part #2

In the previous tutorial, we…

  1. joined two XenServer (XS) hosts together to form a XenServer resource pool;
  2. configured an NFS shared storage repository (SR) for XS hosts to store guests’ virtual hard disk drives (VHD);
  3. created a dedicated storage network for the XenServer hosts to use when communicating with the SR;
  4. configured an IP address on the storage interface of each of the members of the resource pool; and
  5. configured the NFS shared SR as the default SR for the resource pool.

In this tutorial, our objective is to improve the resiliency and the performance of networking within XenServer (XS); we will not be concerned with isolation here.

Toward that end: We are going to improve the resiliency of the Storage Interface and the Primary Management Interface (PMI) by combining each of those two interfaces with an extra NIC to form active-standby NIC bonds. We will also create a resilient, high-performance network for guest traffic by bonding two interfaces together using active-active bonding mode.
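
For reference, the two designs map onto the mode parameter of xe bond-create roughly as shown below. This is only a sketch: the UUID placeholders are just placeholders, and we will walk through the real commands later in the Scenario.

# Active-standby (management and storage bonds): one NIC carries traffic, its partner stands by.
xe bond-create network-uuid=<bond-network-uuid> pif-uuids=<pif-1-uuid>,<pif-2-uuid> mode=active-backup

# Active-active (guest-traffic bond): both NICs carry traffic, with guest traffic balanced across them.
xe bond-create network-uuid=<bond-network-uuid> pif-uuids=<pif-1-uuid>,<pif-2-uuid> mode=balance-slb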

Practical XenServer: Bonding NIC Interfaces, Part #1

Each of the XS hosts in the PXS Lab has six NICs. In this Scenario: We will be combining these six NICs into three bonds.

TABLE #1

NIC (PRIMARY)   NIC (SECONDARY)   FUNCTION           IP ADDRESS (XS-1)   IP ADDRESS (XS-2)
eth0            eth3              Management (PMI)   172.16.0.10/27      172.16.0.12/27
eth1            eth4              External           N/A                 N/A
eth2            eth5              Storage            172.16.0.35/28      172.16.0.36/28
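
Since we will need the PIF UUIDs behind these device names when we come to create the bonds, a quick way to map the NICs in Table #1 to their PIFs is a filtered pif-list. This is just a convenience sketch; its output (one record per NIC per host) is omitted here:

[root@xs-1 ~]# xe pif-list physical=true params=uuid,device,host-name-label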

First: We’ll begin by validating the default state of networking in the resource pool (e.g., identifying the pool master (PM); verifying that Open vSwitch [OVS] is configured as the network stack throughout the resource pool; and verifying that the installation process successfully identified and configured all six of the NICs).

Then: We’ll create the bonds themselves, along with some new virtual infrastructure that will form part of each bond (e.g., a new virtual switch for each of the new bonds).

And then, finally: We’ll conclude this Tutorial by reviewing the final network configuration and discussing some of the changes that are visible in the configuration of the pool members.

Validation

We’ll begin by reviewing the default state of networking in the resource pool.

First: We’ll verify that Open vSwitch (OVS) is configured as the network stack on each of the hosts in the resource pool; that the OVS daemon is running on each of the hosts; and that no bonds are already configured on either of the two hosts.

  1. Confirm that OVS is configured as the network stack on each of the hosts in the resource pool:


    [root@xs-1 ~]# xe-get-network-backend
    openvswitch

    [root@xs-1 ~]# xe host-list params=software-version | cut -d ';' -f 16
    network_backend: openvswitch
    network_backend: openvswitch

  2. Confirm that the OVS daemon is running on each of the hosts:


    [root@xs-1 ~]# hostname
    xs-1

    [root@xs-1 ~]# service openvswitch status
    ovsdb-server is running with pid 2378
    ovs-vswitchd is running with pid 2400
    ovs-xapi-sync is running with pid 2407


    [root@xs-2 ~]# hostname
    xs-2

    [root@xs-2 ~]# service openvswitch status
    ovsdb-server is running with pid 2414
    ovs-vswitchd is running with pid 2436
    ovs-xapi-sync is running with pid 2443

  3. Verify that no bonds already exist on either of the two hosts:


    [root@xs-1 ~]# hostname
    xs-1

    [root@xs-1 ~]# xe bond-list

    [root@xs-1 ~]# ovs-appctl bond/list
    bond type slaves


    [root@xs-2 ~]# hostname
    xs-2

    [root@xs-2 ~]# xe bond-list

    [root@xs-2 ~]# ovs-appctl bond/list
    bond type slaves

    Notice that the xe bond-list command and the ovs-appctl bond/list command both return a null list (i.e., an empty list), but ovs-appctl bond/list prefixes its empty list with column headers. (Two optional refinements to these checks are sketched just after this list.)
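
Two of the checks above can be tightened up a little; both of the following are optional sketches rather than required steps. The cut -d ';' -f 16 filter in step #1 depends on the position of the network_backend key inside the software-version string, so a position-independent variant splits on semicolons instead. And the host-local checks in steps #2 and #3 can be run from a single console over SSH, assuming the host names resolve and that you can authenticate as root on each host:

[root@xs-1 ~]# xe host-list params=name-label,software-version | tr ';' '\n' | grep -e name-label -e network_backend

[root@xs-1 ~]# for h in xs-1 xs-2; do ssh root@$h 'hostname; service openvswitch status; ovs-appctl bond/list'; done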

Then: We’ll review the initial state of the networking configuration on the PM. (Though this step isn’t strictly necessary, it is a good opportunity to take stock of the hosts’ networking before we make any changes.)


[root@xs-1 ~]# xe network-list params=uuid,name-label,bridge | grep -B1 -A2 name-label
uuid ( RO) : d6d54a3d-27c0-8ebf-fd4d-1e465924c391
name-label ( RW): Pool-wide network associated with eth4
bridge ( RO): xenbr4

uuid ( RO) : 28644253-3734-7ccf-1046-bd10709457bf
name-label ( RW): Pool-wide network associated with eth1
bridge ( RO): xenbr1

uuid ( RO) : ce980e76-9eb8-c7c5-9f70-ffdda9b05adb
name-label ( RW): Pool-wide network associated with eth0
bridge ( RO): xenbr0

uuid ( RO) : 92823c10-28af-3ad1-89e7-36df6ed3d83b
name-label ( RW): Pool-wide network associated with eth5
bridge ( RO): xenbr5

uuid ( RO) : 45ea2b0f-cbf1-df35-010b-145b3e90b163
name-label ( RW): Pool-wide network associated with eth2
bridge ( RO): xenbr2

uuid ( RO) : 335e74ba-13ce-12a7-5d63-40d09c360d31
name-label ( RW): Host internal management network
bridge ( RO): xenapi

uuid ( RO) : 95af3ef3-5be2-5365-2375-59d0784a5449
name-label ( RW): Pool-wide network associated with eth3
bridge ( RO): xenbr3

[root@xs-1 ~]# xe pif-list host-uuid=$INSTALLATION_UUID physical=true IP-configuration-mode=Static params=uuid,device,IP,network-name-label
uuid ( RO) : 8445b796-01b0-e2cf-ba17-9e7af7856925
device ( RO): eth0
network-name-label ( RO): Pool-wide network associated with eth0
IP ( RO): 172.16.0.10

uuid ( RO) : 767aac49-ba37-3be7-5bad-87ae26d97e02
device ( RO): eth2
network-name-label ( RO): Pool-wide network associated with eth2
IP ( RO): 172.16.0.35

[root@xs-1 ~]# xe pif-list host-uuid=$INSTALLATION_UUID physical=false IP-configuration-mode=Static params=uuid,device,IP,network-name-label

[root@xs-1 ~]# xe pif-list host-uuid=$INSTALLATION_UUID physical=false IP-configuration-mode=None params=uuid,device,IP,network-name-label

[root@xs-1 ~]# xe bond-list params=uuid,name-label,bridge

[root@xs-1 ~]# ifconfig | grep -v '127\.0\.0\.1' | grep -B1 inet
xenbr0 Link encap:Ethernet HWaddr 00:1F:29:C4:9F:5C
inet addr:172.16.0.10 Bcast:172.16.0.31 Mask:255.255.255.224

xenbr2 Link encap:Ethernet HWaddr 00:26:55:D1:E8:CC
inet addr:172.16.0.35 Bcast:172.16.0.47 Mask:255.255.255.240

[root@xs-1 ~]# ifconfig -a | grep -e xen -e xapi -e eth
eth0 Link encap:Ethernet HWaddr 00:1F:29:C4:9F:5C
eth1 Link encap:Ethernet HWaddr 00:1F:29:C4:9F:3C
eth2 Link encap:Ethernet HWaddr 00:26:55:D1:E8:CC
eth3 Link encap:Ethernet HWaddr 00:26:55:D1:E8:CD
eth4 Link encap:Ethernet HWaddr 00:1B:78:5A:21:8D
eth5 Link encap:Ethernet HWaddr 00:1B:78:5A:21:8C
xenbr0 Link encap:Ethernet HWaddr 00:1F:29:C4:9F:5C
xenbr1 Link encap:Ethernet HWaddr 00:1F:29:C4:9F:3C
xenbr2 Link encap:Ethernet HWaddr 00:26:55:D1:E8:CC
xenbr3 Link encap:Ethernet HWaddr 00:26:55:D1:E8:CD
xenbr4 Link encap:Ethernet HWaddr 00:1B:78:5A:21:8D
xenbr5 Link encap:Ethernet HWaddr 00:1B:78:5A:21:8C

[root@xs-1 ~]# ovs-vsctl list-br
xenbr0
xenbr1
xenbr2
xenbr3
xenbr4
xenbr5

[root@xs-1 ~]# ovs-appctl bond/list
bond type slaves

There are three important things to notice in this example:

  1. A network has been created for every interface on each of the two hosts.

    During installation, XenServer also creates a separate network for each NIC it detects on the host.

    Chapter 2 of Citrix XenServer Design: Designing XenServer Network Configurations

  2. The networks that have been created for every interface are present on all hosts in the pool and they are identical, i.e., the networks are named identically and carry identical UUIDs (see the sketch after this list).
  3. The xe pif-list command allows us to inspect the interfaces of other hosts in the pool, but the xe network-list command does not distinguish between hosts (for the reason just mentioned). So we’ve had to run the host-level commands on each host individually.
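
One quick way to convince yourself of point #2 is to compare the network UUIDs reported on each host; because networks are pool-wide objects, both hosts should return the same sorted list. This is just a sketch of the comparison, run once per host:

[root@xs-1 ~]# xe network-list params=uuid --minimal | tr ',' '\n' | sort

[root@xs-2 ~]# xe network-list params=uuid --minimal | tr ',' '\n' | sort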

Creation

Bonds are created on the pool master (PM) and then inherited by the other members of the pool.

If you are not using XenCenter for [configuring] NIC bonding, the quickest way to create pool-wide NIC bonds is to create the bond on the master, and then restart the other pool members. Alternatively, you can use the service xapi restart command. This causes the bond and VLAN settings on the master to be inherited by each host. The management interface of each host must, however, be manually reconfigured.

Chapter 4.4.6.2 of the “Citrix XenServer® 6.5, Service Pack 1 Administrator’s Guide”

Our first step will be to identify the PM:


[root@xs-1 ~]# xe pool-list params=master,name-label,name-description
name-label ( RW) : PDX0
name-description ( RW): NA-US-OR-PDX-0
master ( RO): e28ef0a3-0738-4bc5-9146-cae1205d1e20

[root@xs-1 ~]# xe host-list uuid=e28ef0a3-0738-4bc5-9146-cae1205d1e20
uuid ( RO) : e28ef0a3-0738-4bc5-9146-cae1205d1e20
name-label ( RW): xs-1
name-description ( RW): Default install of XenServer
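
If all we want is the pool master’s name, the two lookups above can be collapsed into one line by nesting the UUID query; a minimal sketch:

[root@xs-1 ~]# xe host-list uuid=$(xe pool-list params=master --minimal) params=name-label --minimal
xs-1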

After identifying the PM, we’ll perform the configuration changes on it and then restart the XAPI service on each of the other members of the pool.
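
As a preview of the shape those changes will take, here is a minimal, hedged sketch for the management bond alone: the eth0/eth3 pairing and the active-standby mode come from Table #1 and our stated design, the network name is invented purely for illustration, and the finished procedure (including how the PMI ends up on the bond) is covered when we actually create the bonds.

# On the pool master (xs-1): create a network for the new bond to attach to...
[root@xs-1 ~]# net=$(xe network-create name-label="Bond 0+3 (illustration)")

# ...gather the PIF UUIDs of the two member NICs on this host...
[root@xs-1 ~]# pifs=$(xe pif-list host-uuid=$INSTALLATION_UUID device=eth0 --minimal),$(xe pif-list host-uuid=$INSTALLATION_UUID device=eth3 --minimal)

# ...and bond them in active-standby mode.
[root@xs-1 ~]# xe bond-create network-uuid=$net pif-uuids=$pifs mode=active-backup

# Finally, restart the toolstack on the other pool member(s) so that they inherit the bond:
[root@xs-2 ~]# service xapi restart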

Whenever possible, create NIC bonds as part of initial resource pool creation prior to joining additional hosts to the pool or creating VMs. Doing so allows the bond configuration to be automatically replicated to hosts as they are joined to the pool and reduces the number of steps required… Adding a NIC bond to an existing pool requires one of the following:

  1. Using the CLI to configure the bonds on the master and then each member of the pool.
  2. Using the CLI to configure the bonds on the master and then restarting each member of the pool so that it inherits its settings from the pool master. [Or…]
  3. Using XenCenter to configure the bonds on the master. XenCenter automatically synchronizes the networking settings on the member servers with the master, so you do not need to reboot the member servers.

Chapter 4.4.6 of the “Citrix XenServer® 6.5, Service Pack 1 Administrator’s Guide”