Resource Pool Creation and Basic Configuration

Scenario #2, Tutorial #1

Chapter 8 of the “Citrix XenServer® 6.5, Service Pack 1 Administrator’s Guide” provides a basic overview of resource pools:

A resource pool comprises multiple XenServer host installations, bound together into a single managed entity which can host Virtual Machines. When combined with shared storage, a resource pool enables VMs to be started on any XenServer host which has sufficient memory and then dynamically moved between XenServer hosts while running with minimal downtime (XenMotion). If an individual XenServer host suffers a hardware failure, then the administrator can restart the failed VMs on another XenServer host in the same resource pool. If high availability (HA) is enabled on the resource pool, VMs will automatically be moved if their host fails…

A pool always has at least one physical node, known as the master. Only the master node exposes an administration interface (used by… the XenServer Command Line Interface, known as the xe CLI); the master forwards commands to individual members as necessary.

Benefits of Creating a Resource Pool

By default, XenServer (XS) hosts operate as stand-alone virtualization hosts that share nothing with any other XenServer hosts in the environment: no resources (compute, network, or storage), no guests, and no metadata.

Simply put, however, joining hosts together into a “resource pool”…

  1. allows guests to be migrated between hosts seamlessly*;
  2. allows hosts to share available resources (e.g., cross-host networking, aggregated storage, and shared repositories, to name a few);
  3. enables the high-availability feature of XenServer (wherein guests of a failed host can be restarted on an available, healthy host), and;
  4. empowers workload balancing (wherein guests may be migrated amongst pool members to place workloads on hosts that have the resources available that guests need).

* Technically, guests experience a very brief interruption as the migration process finishes: the guest is suspended on the source host, its network and storage connections are torn down and created anew on the destination host, and the guest is then resumed on the destination host. Read more on the XenMotion FAQ.
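
For illustration, a live migration of a running guest between two pool members can be triggered from the xe CLI. A minimal sketch, assuming a guest named guest-1 (a placeholder) and the hosts used throughout this tutorial:

[root@xs-1 ~]# xe vm-migrate vm=guest-1 host=xs-2 live=true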

The Initial State of the Resource Pool

XenServer (XS) hosts belong to an unnamed pool by default. To create your first resource pool, rename the existing nameless pool.

Chapter 3.3 of the “Citrix XenServer® 6.5, Service Pack 1 Administrator’s Guide”

Immediately after installation, the XS host is the pool-master of its own resource pool, without much of anything else configured:


[root@xs-1 ~]# hostname
xs-1

[root@xs-1 ~]# xe host-list
uuid ( RO) : e28ef0a3-0738-4bc5-9146-cae1205d1e20
name-label ( RW): xs-1
name-description ( RW): Default install of XenServer

[root@xs-1 ~]# xe pool-list
uuid ( RO) : c6f0c03a-344b-24f3-e90c-3c786725f61a
name-label ( RW):
name-description ( RW):
master ( RO): e28ef0a3-0738-4bc5-9146-cae1205d1e20
default-SR ( RW):

[root@xs-1 ~]# xe pool-param-list uuid=c6f0c03a-344b-24f3-e90c-3c786725f61a
uuid ( RO) : c6f0c03a-344b-24f3-e90c-3c786725f61a
name-label ( RW):
name-description ( RW):
master ( RO): e28ef0a3-0738-4bc5-9146-cae1205d1e20
default-SR ( RW): <not in database>
crash-dump-SR ( RW): <not in database>
suspend-image-SR ( RW): <not in database>
supported-sr-types ( RO): lvm; iscsi; ext; file; dummy; lvmoiscsi; hba; nfs; lvmohba; iso; udev
other-config (MRW): cpuid_feature_mask: ffffff7f-ffffffff-ffffffff-ffffffff; memory-ratio-hvm: 0.25; memory-ratio-pv: 0.25
ha-enabled ( RO): false
ha-configuration ( RO):
ha-statefiles ( RO):
ha-host-failures-to-tolerate ( RW): 0
ha-plan-exists-for ( RO): 0
ha-allow-overcommit ( RW): false
ha-overcommitted ( RO): false
blobs ( RO):
wlb-url ( RO):
wlb-username ( RO):
wlb-enabled ( RW): false
wlb-verify-cert ( RW): false
gui-config (MRW):
restrictions ( RO): restrict_vswitch_controller: false; restrict_lab: false; restrict_stage: false; restrict_storagelink: false;…
tags (SRW):
license-state ( RO): edition: free; expiry: never

Creating the Resource Pool

Creating an XS resource pool is a fairly simple, straightforward process that begins by identifying the XS host that will function as the pool-master. Once that design-level decision has been made, here at PXS we always prefer to…

  1. Configure the name-label and name-description of the resource pool first, using the host that will become the pool-master, and then
  2. Join the other pool-members to that newly-configured resource pool, pointing them at the host that was chosen to become the pool-master.

First: Using the host that will become the pool-master…

  1. Confirm that the host is not already a member of another resource pool:


    [root@xs-1 ~]# hostname
    xs-1

    [root@xs-1 ~]# xe host-list
    uuid ( RO) : e28ef0a3-0738-4bc5-9146-cae1205d1e20
    name-label ( RW): xs-1
    name-description ( RW): Default install of XenServer

    [root@xs-1 ~]# xe pool-list
    uuid ( RO) : c6f0c03a-344b-24f3-e90c-3c786725f61a
    name-label ( RW):
    name-description ( RW):
    master ( RO): e28ef0a3-0738-4bc5-9146-cae1205d1e20
    default-SR ( RW):

    [root@xs-1 ~]# xe pool-param-list uuid=c6f0c03a-344b-24f3-e90c-3c786725f61a
    uuid ( RO) : c6f0c03a-344b-24f3-e90c-3c786725f61a
    name-label ( RW):
    name-description ( RW):
    master ( RO): e28ef0a3-0738-4bc5-9146-cae1205d1e20
    default-SR ( RW): <not in database>
    crash-dump-SR ( RW): <not in database>
    suspend-image-SR ( RW): <not in database>
    supported-sr-types ( RO): lvm; iscsi; ext; file; dummy; lvmoiscsi; hba; nfs; lvmohba; iso; udev
    other-config (MRW): cpuid_feature_mask: ffffff7f-ffffffff-ffffffff-ffffffff; memory-ratio-hvm: 0.25; memory-ratio-pv: 0.25
    ha-enabled ( RO): false
    ha-configuration ( RO):
    ha-statefiles ( RO):
    ha-host-failures-to-tolerate ( RW): 0
    ha-plan-exists-for ( RO): 0
    ha-allow-overcommit ( RW): false
    ha-overcommitted ( RO): false
    blobs ( RO):
    wlb-url ( RO):
    wlb-username ( RO):
    wlb-enabled ( RW): false
    wlb-verify-cert ( RW): false
    gui-config (MRW):
    restrictions ( RO): restrict_vswitch_controller: false; restrict_lab: false; restrict_stage: false; restrict_storagelink: false; restrict_storagelink_site_recovery: false; restrict_web_selfservice: true; restrict_web_selfservice_manager: true; restrict_hotfix_apply: false; restrict_export_resource_data: true; restrict_read_caching: true; restrict_xcm: false; restrict_vlan: false; restrict_qos: false; restrict_pool_attached_storage: false; restrict_netapp: false; restrict_equalogic: false; restrict_pooling: false; enable_xha: true; restrict_marathon: false; restrict_email_alerting: false; restrict_historical_performance: false; restrict_wlb: true; restrict_rbac: false; restrict_dmc: false; restrict_checkpoint: false; restrict_cpu_masking: false; restrict_connection: false; platform_filter: false; regular_nag_dialog: false; restrict_vmpr: false; restrict_intellicache: false; restrict_gpu: false; restrict_dr: false; restrict_vif_locking: false; restrict_storage_xen_motion: false; restrict_vgpu: true; restrict_integrated_gpu_passthrough: false; restrict_vss: false; restrict_xen_motion: false
    tags (SRW):
    license-state ( RO): edition: free; expiry: never

    Notice that…

    1. There are no other hosts described in the output of the `host-list` xe command and
    2. The host is the master of its own resource pool!
  2. Configure the name-label and name-description of the resource pool:


    [root@xs-1 ~]# xe pool-param-set name-label=PDX0 name-description=NA-US-OR-PDX-0 uuid=c6f0c03a-344b-24f3-e90c-3c786725f61a

    [root@xs-1 ~]# xe pool-param-list uuid=c6f0c03a-344b-24f3-e90c-3c786725f61a
    uuid ( RO) : c6f0c03a-344b-24f3-e90c-3c786725f61a
    name-label ( RW): PDX0
    name-description ( RW): NA-US-OR-PDX-0
    master ( RO): e28ef0a3-0738-4bc5-9146-cae1205d1e20
    default-SR ( RW): <not in database>
    crash-dump-SR ( RW): <not in database>
    suspend-image-SR ( RW): <not in database>
    supported-sr-types ( RO): lvm; iscsi; ext; file; dummy; lvmoiscsi; hba; nfs; lvmohba; iso; udev
    other-config (MRW): cpuid_feature_mask: ffffff7f-ffffffff-ffffffff-ffffffff; memory-ratio-hvm: 0.25; memory-ratio-pv: 0.25
    ha-enabled ( RO): false
    ha-configuration ( RO):
    ha-statefiles ( RO):
    ha-host-failures-to-tolerate ( RW): 0
    ha-plan-exists-for ( RO): 0
    ha-allow-overcommit ( RW): false
    ha-overcommitted ( RO): false
    blobs ( RO):
    wlb-url ( RO):
    wlb-username ( RO):
    wlb-enabled ( RW): false
    wlb-verify-cert ( RW): false
    gui-config (MRW):
    restrictions ( RO): restrict_vswitch_controller: false; restrict_lab: false; restrict_stage: false; restrict_storagelink: false; restrict_storagelink_site_recovery: false; restrict_web_selfservice: true; restrict_web_selfservice_manager: true; restrict_hotfix_apply: false; restrict_export_resource_data: true; restrict_read_caching: true; restrict_xcm: false; restrict_vlan: false; restrict_qos: false; restrict_pool_attached_storage: false; restrict_netapp: false; restrict_equalogic: false; restrict_pooling: false; enable_xha: true; restrict_marathon: false; restrict_email_alerting: false; restrict_historical_performance: false; restrict_wlb: true; restrict_rbac: false; restrict_dmc: false; restrict_checkpoint: false; restrict_cpu_masking: false; restrict_connection: false; platform_filter: false; regular_nag_dialog: false; restrict_vmpr: false; restrict_intellicache: false; restrict_gpu: false; restrict_dr: false; restrict_vif_locking: false; restrict_storage_xen_motion: false; restrict_vgpu: true; restrict_integrated_gpu_passthrough: false; restrict_vss: false; restrict_xen_motion: false
    tags (SRW):
    license-state ( RO): edition: free; expiry: never

Then: On each of the hosts that will be added to the newly-created resource pool…

  1. Confirm that the host is not already a member of another resource pool:


    [root@xs-2 ~]# hostname
    xs-2

    [root@xs-2 ~]# xe host-list
    uuid ( RO) : f781cc54-716c-4596-924a-2494dd0b6537
    name-label ( RW): xs-2
    name-description ( RW): Default install of XenServer

    [root@xs-2 ~]# xe pool-list
    uuid ( RO) : 0f77ad2f-e3e2-4516-af3c-d2073a3904eb
    name-label ( RW):
    name-description ( RW):
    master ( RO): f781cc54-716c-4596-924a-2494dd0b6537
    default-SR ( RW):

    Notice that, much like the example above, the resource pool that the host is currently a member of…

    1. is un-named and has no description;
    2. contains no other hosts, and;
    3. has the pool-master as its only member!
  2. Join the host to the newly-created resource pool:


    [root@xs-2 ~]# xe pool-join master-address=xs-1 master-username=root master-password=***PASSWORD***
    Host agent will restart and attempt to join pool in 10.000 seconds…

    If the host is not able to resolve the hostname of the pool-master – “xs-1” in the example above – use the IP address of the pool-master instead, as shown below.
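
    For example, using the pool-master’s management IP address (172.16.0.10 in this scenario) instead of its hostname:

    [root@xs-2 ~]# xe pool-join master-address=172.16.0.10 master-username=root master-password=***PASSWORD***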

  3. Verify that the host has successfully joined the newly-created resource pool:

    [root@xs-2 ~]# xe host-list
    uuid ( RO) : f781cc54-716c-4596-924a-2494dd0b6537
    name-label ( RW): xs-2
    name-description ( RW): Default install

    uuid ( RO) : e28ef0a3-0738-4bc5-9146-cae1205d1e20
    name-label ( RW): xs-1
    name-description ( RW): Default install of XenServer

    [root@xs-2 ~]# xe pool-list
    uuid ( RO) : c6f0c03a-344b-24f3-e90c-3c786725f61a
    name-label ( RW): PDX0
    name-description ( RW): NA-US-OR-PDX-0
    master ( RO): e28ef0a3-0738-4bc5-9146-cae1205d1e20
    default-SR ( RW): <not in database>

    Notice that…

    1. There are now multiple hosts described in the output of the `host-list` xe command and
    2. The host is now a part of the resource pool that was defined in the previous steps.

Configuring Shared Storage

Although not a strict technical requirement for creating a resource pool, the advantages of pools (for example, the ability to dynamically choose on which XenServer host to run a VM and to dynamically move a VM between XenServer hosts) are only available if the pool has one or more shared storage repositories (SRs). If possible, postpone creating a pool of XenServer hosts until shared storage is available. Once shared storage has been added, Citrix recommends that you move existing VMs whose disks are in local storage into shared storage. This can be done using the xe vm-copy command or XenCenter.

Chapter 3.2 of the “Citrix XenServer® 6.5, Service Pack 1 Administrator’s Guide”

Shared storage is not strictly required – guests may be migrated between hosts using Storage XenMotion – but utilizing a shared storage repository (SR) is a much more practical way to enable many of the benefits of virtualization.
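
As the quoted guidance above notes, once a shared SR exists, a guest whose disks live in local storage can be copied into it from the CLI. A minimal sketch using xe vm-copy – the guest name, new label, and SR UUID are placeholders, and the guest should be shut down first:

[root@xs-1 ~]# xe vm-copy vm=guest-1 new-name-label=guest-1-shared sr-uuid=<shared-SR-uuid>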

A storage repository (SR) is a particular storage target, in which Virtual Machine (VM) Virtual Disk Images (VDIs) are stored…

SRs are flexible, with built-in support for IDE, SATA, SCSI and SAS drives that are locally connected, and iSCSI, NFS, SAS and Fibre Channel remotely connected. The SR and VDI abstractions allow for advanced storage features such as Thin Provisioning, VDI snapshots, and fast cloning to be exposed on storage targets that support them.

Chapter 5.1.1 of the “Citrix XenServer® 6.5, Service Pack 1 Administrator’s Guide”

Version 6.5 of XenServer supports four different types of network-attached repository:

  • Fibre Channel (FC),
  • Fibre Channel over Ethernet (FCoE),
  • iSCSI, and
  • NFS.

(Refer to Chapter 5.2.8 of the “Citrix XenServer® 6.5, Service Pack 1 Administrator’s Guide” for more detailed information about storage repository types.)

Chapter 6 of the Citrix guide “Citrix XenServer Design: Designing XenServer Network Configurations” has a lot to say about dedicated storage networks:

Citrix recommends dedicating one or more NICs as a separate storage network for NFS and iSCSI storage [traffic]. Many consider creating a separate storage network to be a best practice.

By configuring additional management interfaces, you can both assign an IP address to a NIC and isolate storage and network traffic [physically and logically], provided [that] the appropriate physical configuration is in place

You can segregate traffic by configuring an additional… interface(s) for storage and configure XenServer to access storage through that interface. Then, physically isolate the guest network and configure virtual machines to only use that isolated network. Ideally, the [storage] network should be bonded or use multipathing…

The overall process for creating a separate storage network is as follows:

  1. Configuring physical network infrastructure so that different traffic is on different subnets.
  2. Creating a management interface to use the new network.
  3. Configuring redundancy, either multipathing or bonding.

Creating management interfaces lets you establish separate networks for, for example, IP-based traffic provided that:

  • You do not configure XenServer to use this network for any other purpose (for example, by pointing a virtual interface to this network).
  • The appropriate physical network configuration is in place.

For example, to dedicate a NIC to storage traffic, the NIC, storage target, switch, and/or VLAN must be configured (physically connected) so that the target is only accessible over the assigned NIC.

Chapter 6 of the Citrix guide “Citrix XenServer Design: Designing XenServer Network Configurations”

…but all of that can be distilled down to applying these principles during the design phase of building an XS resource pool:

  1. Use a dedicated NIC for storage traffic.
  2. Use a dedicated IP subnet for storage traffic.
  3. Use bonding or multipathing to make the interface resilient.

Additionally, if storage traffic must be physically isolated from all other network traffic, use a separate, dedicated network infrastructure.
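
Bonding is covered in an upcoming tutorial, but for reference, a minimal sketch of bonding two storage NICs with the xe CLI might look like the following – the network name and the UUIDs are placeholders:

[root@xs-1 ~]# xe network-create name-label=storage-bond
[root@xs-1 ~]# xe bond-create network-uuid=<storage-bond-network-uuid> pif-uuids=<pif-uuid-1>,<pif-uuid-2>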

For the purposes of this scenario and this tutorial, we will be configuring an NFS SR.

First: On each host in the resource pool…

  1. Identify the interface that will function as the dedicated NIC for storage traffic – in this scenario, eth2:


    [root@xs-1 ~]# xe pif-list host-uuid=$INSTALLATION_UUID device=eth2
    uuid ( RO) : 767aac49-ba37-3be7-5bad-87ae26d97e02
    device ( RO): eth2
    currently-attached ( RO): true
    VLAN ( RO): -1
    network-uuid ( RO): 45ea2b0f-cbf1-df35-010b-145b3e90b163
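
    Note: the $INSTALLATION_UUID variable used above holds the local host’s UUID. If it is not already defined in your shell, it can be set by sourcing the XenServer inventory file – a quick sketch:

    [root@xs-1 ~]# . /etc/xensource-inventory
    [root@xs-1 ~]# echo $INSTALLATION_UUID
    e28ef0a3-0738-4bc5-9146-cae1205d1e20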

  2. Assign a unique IP address to that NIC:


    [root@xs-1 ~]# ifconfig | grep inet -B1
    lo Link encap:Local Loopback
    inet addr:127.0.0.1 Mask:255.0.0.0

    xenbr0 Link encap:Ethernet HWaddr 00:1F:29:C4:9F:5C
    inet addr:172.16.0.10 Bcast:172.16.0.31 Mask:255.255.255.224

    [root@xs-1 ~]# xe pif-reconfigure-ip uuid=767aac49-ba37-3be7-5bad-87ae26d97e02 IP=172.16.0.35 netmask=255.255.255.240 mode=static

    [root@xs-1 ~]# ifconfig | grep inet -B1
    lo Link encap:Local Loopback
    inet addr:127.0.0.1 Mask:255.0.0.0

    xenbr0 Link encap:Ethernet HWaddr 00:1F:29:C4:9F:5C
    inet addr:172.16.0.10 Bcast:172.16.0.31 Mask:255.255.255.224

    xenbr2 Link encap:Ethernet HWaddr 00:26:55:D1:E8:CC
    inet addr:172.16.0.35 Bcast:172.16.0.47 Mask:255.255.255.240

  3. Use the NFS utility /usr/sbin/showmount to verify that the host can reach the NFS server and see its exported shares:


    [root@xs-1 ~]# showmount -e 172.16.0.34
    Export list for 172.16.0.34:
    /srv/backups 172.16.0.0/27
    /srv/ova 172.16.0.32/28,172.16.0.0/27
    /srv/vhd 172.16.0.32/28
    /srv/iso 172.16.0.32/28,172.16.0.0/27
    /srv/hotfixes 172.16.0.0/27

    Notice that, in this example,…

    1. The NFS server is serving NFS shares on two subnets – the primary management interface (PMI) subnet (172.16.0.0/27) and the storage network subnet (172.16.0.32/28).
    2. The NFS server is serving multiple NFS shares – some are accessible from the XS hosts’ PMIs and others are accessible only from the hosts’ dedicated storage interfaces.

SR configuration is a pool-level operation: it is propagated to all of the hosts in the resource pool and therefore only needs to be performed once, from any of the pool members. Once the SR has been configured, it can be set as the default SR for the resource pool.

Then: On any of the pool members…

  1. Configure the NFS share as a shared SR:


    [root@xs-1 ~]# mount | grep nfs
    sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

    [root@xs-1 ~]# xe sr-create device-config:server=172.16.0.34 device-config:serverpath=/srv/vhd name-label=NAS-2-VHD content-type=user type=nfs shared=true
    8ff09191-95e7-0ea3-5b4b-eea9d0fa21ca

    [root@xs-1 ~]# xe sr-list type=nfs
    uuid ( RO) : 8ff09191-95e7-0ea3-5b4b-eea9d0fa21ca
    name-label ( RW): NAS-2-VHD
    name-description ( RW):
    host ( RO): <shared>
    type ( RO): nfs
    content-type ( RO): user

    [root@xs-1 ~]# mount | grep nfs
    sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
    172.16.0.34:/srv/vhd/8ff09191-95e7-0ea3-5b4b-eea9d0fa21ca on /var/run/sr-mount/8ff09191-95e7-0ea3-5b4b-eea9d0fa21ca type nfs (rw,soft,proto=tcp,acdirmin=0,acdirmax=0,addr=172.16.0.34)

  2. Configure the SR as the default SR for the pool:


    [root@xs-1 ~]# xe pool-param-set default-SR=8ff09191-95e7-0ea3-5b4b-eea9d0fa21ca uuid=c6f0c03a-344b-24f3-e90c-3c786725f61a

And then, finally: On any of the other members of the resource pool…

  1. Confirm that the default SR has been propagated to other members of the pool:


    [root@xs-2 ~]# xe pool-param-list uuid=c6f0c03a-344b-24f3-e90c-3c786725f61a | grep -B4 --color "SR "
    uuid ( RO) : c6f0c03a-344b-24f3-e90c-3c786725f61a
    name-label ( RW): PDX0
    name-description ( RW): NA-US-OR-PDX-0
    master ( RO): e28ef0a3-0738-4bc5-9146-cae1205d1e20
    default-SR ( RW): 8ff09191-95e7-0ea3-5b4b-eea9d0fa21ca
    crash-dump-SR ( RW): <not in database>
    suspend-image-SR ( RW): <not in database>

Conclusion

In this tutorial we’ve…

  1. joined two XenServer (XS) hosts together to form a XenServer resource pool;
  2. configured an NFS shared storage repository (SR) in which the XS hosts store guests’ virtual hard disks (VHDs);
  3. created a dedicated storage network for the XenServer hosts to use when communicating with the SR;
  4. configured an IP address on each of the storage interfaces on each of the members of the resource pool, and;
  5. configured the NFS shared SR as the default SR for the resource pool.

In upcoming tutorials, we’ll discuss bonding NICs to form high-throughput and/or resilient links; enabling the high-availability feature of XenServer, and; other advanced features of the XenServer virtualization appliance.

Changelog: This tutorial was last modified 17-Feb-2020