Merge "Apply contraints to tempest plugins"
diff --git a/doc/source/guides.rst b/doc/source/guides.rst
index c2c7b91..82e0dd6 100644
--- a/doc/source/guides.rst
+++ b/doc/source/guides.rst
@@ -20,6 +20,7 @@
guides/devstack-with-nested-kvm
guides/nova
guides/devstack-with-lbaas-v2
+ guides/devstack-with-ldap
All-In-One Single VM
--------------------
@@ -66,3 +67,8 @@
--------------------------------
Guide to working with nova features :doc:`Nova and devstack <guides/nova>`.
+
+Deploying DevStack with LDAP
+----------------------------
+
+Guide to setting up :doc:`DevStack with LDAP <guides/devstack-with-ldap>`.
diff --git a/doc/source/guides/devstack-with-lbaas-v2.rst b/doc/source/guides/devstack-with-lbaas-v2.rst
index 7dee520..df3c7ce 100644
--- a/doc/source/guides/devstack-with-lbaas-v2.rst
+++ b/doc/source/guides/devstack-with-lbaas-v2.rst
@@ -15,7 +15,7 @@
Install devstack
- ::
+::
git clone https://git.openstack.org/openstack-dev/devstack
cd devstack
@@ -23,7 +23,7 @@
Edit your ``local.conf`` to look like
- ::
+::
[[local|localrc]]
# Load the external LBaaS plugin.
@@ -60,7 +60,7 @@
Run stack.sh and do some sanity checks
- ::
+::
./stack.sh
. ./openrc
@@ -69,7 +69,7 @@
Create two nova instances that we can use as test http servers:
- ::
+::
#create nova instances on private network
nova boot --image $(nova image-list | awk '/ cirros-.*-x86_64-uec / {print $2}') --flavor 1 --nic net-id=$(openstack network list | awk '/ private / {print $2}') node1
@@ -83,7 +83,7 @@
Set up a simple web server on each of these instances. ssh into each instance (username 'cirros', password 'cubswin:)') and run
- ::
+::
MYIP=$(ifconfig eth0|grep 'inet addr'|awk -F: '{print $2}'| awk '{print $1}')
while true; do echo -e "HTTP/1.0 200 OK\r\n\r\nWelcome to $MYIP" | sudo nc -l -p 80 ; done&
@@ -91,7 +91,7 @@
Phase 2: Create your load balancers
------------------------------------
- ::
+::
neutron lbaas-loadbalancer-create --name lb1 private-subnet
neutron lbaas-loadbalancer-show lb1 # Wait for the provisioning_status to be ACTIVE.
diff --git a/doc/source/guides/devstack-with-ldap.rst b/doc/source/guides/devstack-with-ldap.rst
new file mode 100644
index 0000000..ec41141
--- /dev/null
+++ b/doc/source/guides/devstack-with-ldap.rst
@@ -0,0 +1,174 @@
+============================
+Deploying DevStack with LDAP
+============================
+
+The OpenStack Identity service has the ability to integrate with LDAP. The goal
+of this guide is to walk you through setting up an LDAP-backed OpenStack
+development environment.
+
+Introduction
+============
+
+LDAP support in keystone is read-only. You can use a single LDAP server to
+back an entire OpenStack deployment, or you can use separate LDAP servers to
+back specific keystone domains. Users within those domains can authenticate
+against keystone, assume role assignments, and interact with other OpenStack
+services.
+
+Configuration
+=============
+
+To deploy an OpenLDAP server, make sure ``ldap`` is added to the list of
+``ENABLED_SERVICES``::
+
+ enable_service ldap
+
+Devstack requires a password to set up an LDAP administrator. This
+administrative user is also the bind user specified in keystone's
+configuration files, similar to the ``keystone`` user for MySQL databases.
+
+Devstack will prompt you for a password when running ``stack.sh`` if
+``LDAP_PASSWORD`` is not set. To avoid the prompt, you can add the following
+to your ``local.conf``::
+
+ LDAP_PASSWORD=super_secret_password
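+
+For reference, the LDAP settings devstack generates for keystone end up
+looking something like the following sketch (the exact file location and
+values are release-dependent and illustrative here; inspect keystone's
+configuration on the host rather than copying this verbatim)::
+
+ [identity]
+ driver = ldap
+
+ [ldap]
+ url = ldap://localhost
+ user = cn=Manager,dc=openstack,dc=org
+ password = super_secret_password
+ suffix = dc=openstack,dc=org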
+
+At this point, devstack should have everything it needs to deploy OpenLDAP,
+bootstrap it with a minimal set of users, and configure it to back a domain in
+keystone::
+
+ ./stack.sh
+
+Once ``stack.sh`` completes, you should have a running keystone deployment
+with a basic set of users. It is important to note that not all users will
+live within LDAP. Instead, keystone will back different domains with
+different identity sources. For example, the ``default`` domain will be
+backed by MySQL. This is usually where you'll find your administrative and
+service users. If
+you query keystone for a list of domains, you should see a domain called
+``Users``. This domain is set up by devstack and points to OpenLDAP.
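+
+For example, listing domains should show something like this (IDs and
+descriptions are illustrative)::
+
+ $ openstack domain list
+ +----------------------------------+---------+---------+-------------+
+ | ID                               | Name    | Enabled | Description |
+ +----------------------------------+---------+---------+-------------+
+ | default                          | Default | True    |             |
+ | 61a2de23107c46bea2d758167af707b9 | Users   | True    |             |
+ +----------------------------------+---------+---------+-------------+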
+
+User Management
+===============
+
+Initially, there will only be two users in the LDAP server. The ``Manager``
+user is used by keystone to talk to OpenLDAP. The ``demo`` user is a generic
+user that you should be able to see if you query keystone for users within the
+``Users`` domain. Both of these users were added to LDAP using basic LDAP
+utilities installed by devstack (e.g. ``ldap-utils``) and LDIFs. The LDIFs used
+to create these users can be found in ``devstack/files/ldap/``.
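+
+The ``demo`` user should be visible through keystone by scoping a user
+listing to the ``Users`` domain::
+
+ $ openstack user list --domain Users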
+
+Listing Users
+-------------
+
+To list all users in LDAP directly, you can use ``ldapsearch`` with the
+``Manager`` user bootstrapped by devstack::
+
+ ldapsearch -x -w LDAP_PASSWORD -D cn=Manager,dc=openstack,dc=org \
+ -H ldap://localhost -b dc=openstack,dc=org
+
+As you can see, devstack creates an LDAP domain called ``openstack.org`` as a
+container for the ``Manager`` and ``demo`` users.
+
+Creating Users
+--------------
+
+Since keystone's LDAP integration is read-only, users must be added directly to
+LDAP. Users added directly to OpenLDAP will automatically be placed into the
+``Users`` domain.
+
+LDIFs can be used to add users via the command line. The following is an
+example LDIF, let's call it ``peter.ldif.in``, that can be used to create a
+new LDAP user::
+
+ dn: cn=peter,ou=Users,dc=openstack,dc=org
+ cn: peter
+ displayName: Peter Quill
+ givenName: Peter
+ mail: starlord@openstack.org
+ objectClass: inetOrgPerson
+ objectClass: top
+ sn: peter
+ uid: peter
+ userPassword: im-a-better-pilot-than-rocket
+
+Now, we use the ``Manager`` user to create a user for Peter in LDAP::
+
+ ldapadd -x -w LDAP_PASSWORD -D cn=Manager,dc=openstack,dc=org \
+ -H ldap://localhost -c -f peter.ldif.in
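+
+If the add succeeded, keystone should now be able to see Peter in the
+``Users`` domain::
+
+ $ openstack user show peter --domain Users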
+
+We should be able to assign Peter roles on projects. After Peter has some
+level of authorization, he should be able to log in to Horizon by specifying
+the ``Users`` domain and using his ``peter`` username and password.
+Authorization can be given to Peter by creating a project within the ``Users``
+domain and giving him a role assignment on that project::
+
+ $ openstack project create --domain Users awesome-mix-vol-1
+ +-------------+----------------------------------+
+ | Field | Value |
+ +-------------+----------------------------------+
+ | description | |
+ | domain_id | 61a2de23107c46bea2d758167af707b9 |
+ | enabled | True |
+ | id | 7d422396d54945cdac8fe1e8e32baec4 |
+ | is_domain | False |
+ | name | awesome-mix-vol-1 |
+ | parent_id | 61a2de23107c46bea2d758167af707b9 |
+ | tags | [] |
+ +-------------+----------------------------------+
+ $ openstack role add --user peter --user-domain Users \
+ --project awesome-mix-vol-1 --project-domain Users admin
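+
+To confirm the assignment, we can list it back (``--names`` prints names
+instead of IDs)::
+
+ $ openstack role assignment list --user peter --user-domain Users \
+ --project awesome-mix-vol-1 --project-domain Users --names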
+
+
+Deleting Users
+--------------
+
+We can use the same basic steps to remove users from LDAP, but instead of using
+LDIFs, we can just pass the ``dn`` of the user we want to delete::
+
+ ldapdelete -x -w LDAP_PASSWORD -D cn=Manager,dc=openstack,dc=org \
+ -H ldap://localhost cn=peter,ou=Users,dc=openstack,dc=org
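+
+As a quick sanity check, searching for the entry again should return no
+results::
+
+ ldapsearch -x -w LDAP_PASSWORD -D cn=Manager,dc=openstack,dc=org \
+ -H ldap://localhost -b ou=Users,dc=openstack,dc=org "(cn=peter)"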
+
+Group Management
+================
+
+Like users, groups are identities managed by the identity backend. This means
+that groups also fall under the same read-only constraints as users, and they
+can be managed directly in LDAP with LDIFs, just as users are.
+
+Adding Groups
+-------------
+
+Let's define a new group in a file called ``guardian-group.ldif.in``::
+
+ dn: cn=guardians,ou=UserGroups,dc=openstack,dc=org
+ objectClass: groupOfNames
+ cn: guardians
+ description: Guardians of the Galaxy
+ member: cn=peter,ou=Users,dc=openstack,dc=org
+ member: cn=gamora,ou=Users,dc=openstack,dc=org
+ member: cn=drax,ou=Users,dc=openstack,dc=org
+ member: cn=rocket,ou=Users,dc=openstack,dc=org
+ member: cn=groot,ou=Users,dc=openstack,dc=org
+
+We can create the group using the same ``ldapadd`` command as we did with
+users::
+
+ ldapadd -x -w LDAP_PASSWORD -D cn=Manager,dc=openstack,dc=org \
+ -H ldap://localhost -c -f guardian-group.ldif.in
+
+If we check the group membership in Horizon, we'll see that only Peter is a
+member of the ``guardians`` group, despite the whole crew being specified in
+the LDIF. This is because keystone only resolves group members that actually
+exist as user entries in LDAP, and Peter is the only one so far. Once the
+remaining accounts are created in LDAP, they will automatically be added to
+the ``guardians`` group. They will also assume any role assignments given to
+the ``guardians`` group.
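+
+Group membership can also be checked from the command line; for example, to
+verify that Peter is a member of ``guardians``::
+
+ $ openstack group contains user --group-domain Users --user-domain Users \
+ guardians peter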
+
+Deleting Groups
+---------------
+
+Just like users, groups can be deleted using the ``dn``::
+
+ ldapdelete -x -w LDAP_PASSWORD -D cn=Manager,dc=openstack,dc=org \
+ -H ldap://localhost cn=guardians,ou=UserGroups,dc=openstack,dc=org
+
+Note that this operation will not remove the users within the group. It will
+only remove the group itself and any memberships users had in that group.
diff --git a/files/rpms-suse/general b/files/rpms-suse/general
index 0b69cb1..b870d72 100644
--- a/files/rpms-suse/general
+++ b/files/rpms-suse/general
@@ -11,7 +11,6 @@
iputils
libffi-devel # pyOpenSSL
libjpeg8-devel # Pillow 3.0.0
-libmysqlclient-devel # MySQL-python
libopenssl-devel # to rebuild pyOpenSSL if needed
libxslt-devel # lxml
lsof # useful when debugging
diff --git a/files/rpms-suse/n-cpu b/files/rpms-suse/n-cpu
index 9ece115..d0c572e 100644
--- a/files/rpms-suse/n-cpu
+++ b/files/rpms-suse/n-cpu
@@ -1,7 +1,7 @@
cryptsetup
-genisoimage
libosinfo
lvm2
+mkisofs
open-iscsi
sg3_utils
# Stuff for diablo volumes
diff --git a/files/rpms-suse/nova b/files/rpms-suse/nova
index ae115d2..4103a40 100644
--- a/files/rpms-suse/nova
+++ b/files/rpms-suse/nova
@@ -4,7 +4,6 @@
dnsmasq-utils # dist:opensuse-12.3,opensuse-13.1
ebtables
gawk
-genisoimage # required for config_drive
iptables
iputils
kpartx
@@ -12,6 +11,7 @@
libvirt # NOPRIME
libvirt-python # NOPRIME
mariadb # NOPRIME
+mkisofs # required for config_drive
parted
polkit
# qemu as fallback if kvm cannot be used
diff --git a/lib/neutron_plugins/ovs_base b/lib/neutron_plugins/ovs_base
index 50b9ae5..36e2ed2 100644
--- a/lib/neutron_plugins/ovs_base
+++ b/lib/neutron_plugins/ovs_base
@@ -72,7 +72,14 @@
if [[ $DISTRO == "sle12" ]] && [[ $os_RELEASE -lt 12.2 ]]; then
restart_service openvswitch-switch
else
- restart_service openvswitch
+ # workaround for https://bugzilla.suse.com/show_bug.cgi?id=1085971
+ if [[ $DISTRO =~ "tumbleweed" ]]; then
+ sudo sed -i -e "s,^OVS_USER_ID=.*,OVS_USER_ID='root:root'," /etc/sysconfig/openvswitch
+ fi
+ restart_service openvswitch || {
+ journalctl -xe || :
+ systemctl status openvswitch
+ }
fi
fi
}
diff --git a/lib/nova b/lib/nova
index 56e3093..939806f 100644
--- a/lib/nova
+++ b/lib/nova
@@ -506,6 +506,12 @@
if [ "$FORCE_CONFIG_DRIVE" != "False" ]; then
iniset $NOVA_CONF DEFAULT force_config_drive "$FORCE_CONFIG_DRIVE"
fi
+
+ # nova defaults to genisoimage, but only mkisofs is available on SUSE 15.0+
+ if is_suse; then
+ iniset $NOVA_CONF DEFAULT mkisofs_cmd /usr/bin/mkisofs
+ fi
+
# Format logging
setup_logging $NOVA_CONF
@@ -685,7 +691,7 @@
$NOVA_BIN_DIR/nova-manage cell create --name=child --cell_type=child --username=$RABBIT_USERID --hostname=$RABBIT_HOST --port=5672 --password=$RABBIT_PASSWORD --virtual_host=child_cell --woffset=0 --wscale=1
# Creates the single cells v2 cell for the child cell (v1) nova db.
- nova-manage --config-file $NOVA_CELLS_CONF cell_v2 create_cell \
+ $NOVA_BIN_DIR/nova-manage --config-file $NOVA_CELLS_CONF cell_v2 create_cell \
--transport-url $(get_transport_url child_cell) --name 'cell1'
fi
}
@@ -729,7 +735,7 @@
# this needs to come after the api_db sync happens. We also want to run
# this before the db sync below since that will migrate both the nova
# and nova_cell0 databases.
- nova-manage cell_v2 map_cell0 --database_connection `database_connection_url nova_cell0`
+ $NOVA_BIN_DIR/nova-manage cell_v2 map_cell0 --database_connection `database_connection_url nova_cell0`
# (Re)create nova databases
for i in $(seq 1 $NOVA_NUM_CELLS); do
@@ -750,7 +756,7 @@
# create the cell1 cell for the main nova db where the hosts live
for i in $(seq 1 $NOVA_NUM_CELLS); do
- nova-manage --config-file $NOVA_CONF --config-file $(conductor_conf $i) cell_v2 create_cell --name "cell$i"
+ $NOVA_BIN_DIR/nova-manage --config-file $NOVA_CONF --config-file $(conductor_conf $i) cell_v2 create_cell --name "cell$i"
done
fi
@@ -1015,7 +1021,7 @@
if is_service_enabled n-api; then
# dump the cell mapping to ensure life is good
echo "Dumping cells_v2 mapping"
- nova-manage cell_v2 list_cells --verbose
+ $NOVA_BIN_DIR/nova-manage cell_v2 list_cells --verbose
fi
}
diff --git a/lib/swift b/lib/swift
index 6cda9c8..933af10 100644
--- a/lib/swift
+++ b/lib/swift
@@ -37,6 +37,7 @@
# Set up default directories
GITDIR["python-swiftclient"]=$DEST/python-swiftclient
+SWIFT_DIR=$DEST/swift
# Swift virtual environment
if [[ ${USE_VENV} = True ]]; then
@@ -46,8 +47,6 @@
SWIFT_BIN_DIR=$(get_python_exec_prefix)
fi
-
-SWIFT_DIR=$DEST/swift
SWIFT_AUTH_CACHE_DIR=${SWIFT_AUTH_CACHE_DIR:-/var/cache/swift}
SWIFT_APACHE_WSGI_DIR=${SWIFT_APACHE_WSGI_DIR:-/var/www/swift}
SWIFT3_DIR=$DEST/swift3
@@ -341,7 +340,7 @@
local user_group
# Make sure to kill all swift processes first
- swift-init --run-dir=${SWIFT_DATA_DIR}/run all stop || true
+ $SWIFT_BIN_DIR/swift-init --run-dir=${SWIFT_DATA_DIR}/run all stop || true
sudo install -d -o ${STACK_USER} ${SWIFT_CONF_DIR}
sudo install -d -o ${STACK_USER} ${SWIFT_CONF_DIR}/{object,container,account}-server
@@ -704,7 +703,7 @@
function init_swift {
local node_number
# Make sure to kill all swift processes first
- swift-init --run-dir=${SWIFT_DATA_DIR}/run all stop || true
+ $SWIFT_BIN_DIR/swift-init --run-dir=${SWIFT_DATA_DIR}/run all stop || true
# Forcibly re-create the backing filesystem
create_swift_disk
@@ -715,9 +714,9 @@
rm -f *.builder *.ring.gz backups/*.builder backups/*.ring.gz
- swift-ring-builder object.builder create ${SWIFT_PARTITION_POWER_SIZE} ${SWIFT_REPLICAS} 1
- swift-ring-builder container.builder create ${SWIFT_PARTITION_POWER_SIZE} ${SWIFT_REPLICAS} 1
- swift-ring-builder account.builder create ${SWIFT_PARTITION_POWER_SIZE} ${SWIFT_REPLICAS} 1
+ $SWIFT_BIN_DIR/swift-ring-builder object.builder create ${SWIFT_PARTITION_POWER_SIZE} ${SWIFT_REPLICAS} 1
+ $SWIFT_BIN_DIR/swift-ring-builder container.builder create ${SWIFT_PARTITION_POWER_SIZE} ${SWIFT_REPLICAS} 1
+ $SWIFT_BIN_DIR/swift-ring-builder account.builder create ${SWIFT_PARTITION_POWER_SIZE} ${SWIFT_REPLICAS} 1
# The ring will be created on each node, and because the order of
# nodes is identical we can use a seed for rebalancing, making it
@@ -728,26 +727,26 @@
node_number=1
for node in ${SWIFT_STORAGE_IPS}; do
- swift-ring-builder object.builder add z${node_number}-${node}:${OBJECT_PORT_BASE}/sdb1 1
- swift-ring-builder container.builder add z${node_number}-${node}:${CONTAINER_PORT_BASE}/sdb1 1
- swift-ring-builder account.builder add z${node_number}-${node}:${ACCOUNT_PORT_BASE}/sdb1 1
+ $SWIFT_BIN_DIR/swift-ring-builder object.builder add z${node_number}-${node}:${OBJECT_PORT_BASE}/sdb1 1
+ $SWIFT_BIN_DIR/swift-ring-builder container.builder add z${node_number}-${node}:${CONTAINER_PORT_BASE}/sdb1 1
+ $SWIFT_BIN_DIR/swift-ring-builder account.builder add z${node_number}-${node}:${ACCOUNT_PORT_BASE}/sdb1 1
let "node_number=node_number+1"
done
else
for node_number in ${SWIFT_REPLICAS_SEQ}; do
- swift-ring-builder object.builder add z${node_number}-${SWIFT_SERVICE_LOCAL_HOST}:$(( OBJECT_PORT_BASE + 10 * (node_number - 1) ))/sdb1 1
- swift-ring-builder container.builder add z${node_number}-${SWIFT_SERVICE_LOCAL_HOST}:$(( CONTAINER_PORT_BASE + 10 * (node_number - 1) ))/sdb1 1
- swift-ring-builder account.builder add z${node_number}-${SWIFT_SERVICE_LOCAL_HOST}:$(( ACCOUNT_PORT_BASE + 10 * (node_number - 1) ))/sdb1 1
+ $SWIFT_BIN_DIR/swift-ring-builder object.builder add z${node_number}-${SWIFT_SERVICE_LOCAL_HOST}:$(( OBJECT_PORT_BASE + 10 * (node_number - 1) ))/sdb1 1
+ $SWIFT_BIN_DIR/swift-ring-builder container.builder add z${node_number}-${SWIFT_SERVICE_LOCAL_HOST}:$(( CONTAINER_PORT_BASE + 10 * (node_number - 1) ))/sdb1 1
+ $SWIFT_BIN_DIR/swift-ring-builder account.builder add z${node_number}-${SWIFT_SERVICE_LOCAL_HOST}:$(( ACCOUNT_PORT_BASE + 10 * (node_number - 1) ))/sdb1 1
done
fi
# We use a seed for rebalancing. Doing this allows us to create
# identical rings on multiple nodes if SWIFT_STORAGE_IPS is the same
- swift-ring-builder object.builder rebalance 42
- swift-ring-builder container.builder rebalance 42
- swift-ring-builder account.builder rebalance 42
+ $SWIFT_BIN_DIR/swift-ring-builder object.builder rebalance 42
+ $SWIFT_BIN_DIR/swift-ring-builder container.builder rebalance 42
+ $SWIFT_BIN_DIR/swift-ring-builder account.builder rebalance 42
} && popd >/dev/null
# Create cache dir
@@ -803,7 +802,7 @@
# Apache should serve the "PACO" a.k.a "main" services
restart_apache_server
# The rest of the services should be started in backgroud
- swift-init --run-dir=${SWIFT_DATA_DIR}/run rest start
+ $SWIFT_BIN_DIR/swift-init --run-dir=${SWIFT_DATA_DIR}/run rest start
return 0
fi
@@ -827,7 +826,7 @@
done
if [[ "$SWIFT_START_ALL_SERVICES" == "True" ]]; then
- swift-init --run-dir=${SWIFT_DATA_DIR}/run rest start
+ $SWIFT_BIN_DIR/swift-init --run-dir=${SWIFT_DATA_DIR}/run rest start
else
# The container-sync daemon is strictly needed to pass the container
# sync Tempest tests.
@@ -835,8 +834,8 @@
run_process s-container-sync "$SWIFT_BIN_DIR/swift-container-sync ${SWIFT_CONF_DIR}/container-server/1.conf"
fi
else
- swift-init --run-dir=${SWIFT_DATA_DIR}/run all restart || true
- swift-init --run-dir=${SWIFT_DATA_DIR}/run proxy stop || true
+ $SWIFT_BIN_DIR/swift-init --run-dir=${SWIFT_DATA_DIR}/run all restart || true
+ $SWIFT_BIN_DIR/swift-init --run-dir=${SWIFT_DATA_DIR}/run proxy stop || true
fi
if is_service_enabled tls-proxy; then
@@ -863,12 +862,12 @@
local type
if [ "$SWIFT_USE_MOD_WSGI" == "True" ]; then
- swift-init --run-dir=${SWIFT_DATA_DIR}/run rest stop && return 0
+ $SWIFT_BIN_DIR/swift-init --run-dir=${SWIFT_DATA_DIR}/run rest stop && return 0
fi
# screen normally killed by ``unstack.sh``
- if type -p swift-init >/dev/null; then
- swift-init --run-dir=${SWIFT_DATA_DIR}/run all stop || true
+ if type -p $SWIFT_BIN_DIR/swift-init >/dev/null; then
+ $SWIFT_BIN_DIR/swift-init --run-dir=${SWIFT_DATA_DIR}/run all stop || true
fi
# Dump all of the servers
# Maintain the iteration as stop_process() has some desirable side-effects
diff --git a/stackrc b/stackrc
index 166b7cf..3c4e437 100644
--- a/stackrc
+++ b/stackrc
@@ -625,12 +625,7 @@
case "$VIRT_DRIVER" in
ironic|libvirt)
LIBVIRT_TYPE=${LIBVIRT_TYPE:-kvm}
- # If ENABLE_VOLUME_MULTIATTACH is True, the Ubuntu Cloud Archive can't
- # be used until it provides libvirt>=3.10, and with older versions of
- # Ubuntu the group is "libvirtd".
- # TODO(mriedem): Remove the ENABLE_VOLUME_MULTIATTACH check when
- # UCA has libvirt>=3.10.
- if [[ "$os_VENDOR" =~ (Debian|Ubuntu) && "${ENABLE_VOLUME_MULTIATTACH}" == "False" ]]; then
+ if [[ "$os_VENDOR" =~ (Debian|Ubuntu) ]]; then
# The groups change with newer libvirt. Older Ubuntu used
# 'libvirtd', but now uses libvirt like Debian. Do a quick check
# to see if libvirtd group already exists to handle grenade's case.
diff --git a/tools/fixup_stuff.sh b/tools/fixup_stuff.sh
index 90b2c8b..9147932 100755
--- a/tools/fixup_stuff.sh
+++ b/tools/fixup_stuff.sh
@@ -77,28 +77,23 @@
# Make it possible to switch this based on an environment variable as
# libvirt 2.5.0 doesn't handle nested virtualization quite well and this
# is required for the trove development environment.
-# The Pike UCA has qemu 2.10 but libvirt 3.6, therefore if
-# ENABLE_VOLUME_MULTIATTACH is True, we can't use the Pike UCA
-# because multiattach won't work with those package versions.
-# We can remove this check when the UCA has libvirt>=3.10.
function fixup_uca {
- if [[ "${ENABLE_UBUNTU_CLOUD_ARCHIVE}" == "False" || "$DISTRO" != "xenial" || \
- "${ENABLE_VOLUME_MULTIATTACH}" == "True" ]]; then
+ if [[ "${ENABLE_UBUNTU_CLOUD_ARCHIVE}" == "False" || "$DISTRO" != "xenial" ]]; then
return
fi
# This pulls in apt-add-repository
install_package "software-properties-common"
- # Use UCA for newer libvirt. Should give us libvirt 2.5.0.
+ # Use UCA for newer libvirt.
if [[ -f /etc/ci/mirror_info.sh ]] ; then
# If we are on a nodepool provided host and it has told us about where
# we can find local mirrors then use that mirror.
source /etc/ci/mirror_info.sh
- sudo apt-add-repository -y "deb $NODEPOOL_UCA_MIRROR xenial-updates/pike main"
+ sudo apt-add-repository -y "deb $NODEPOOL_UCA_MIRROR xenial-updates/queens main"
else
# Otherwise use upstream UCA
- sudo add-apt-repository -y cloud-archive:pike
+ sudo add-apt-repository -y cloud-archive:queens
fi
# Disable use of libvirt wheel since a cached wheel build might be