Merge "Configure Cinder backup driver"
diff --git a/.zuul.yaml b/.zuul.yaml
index c140671..67d4c24 100644
--- a/.zuul.yaml
+++ b/.zuul.yaml
@@ -585,13 +585,6 @@
timeout: 9000
- job:
- name: devstack-platform-opensuse-15
- parent: tempest-full-py3
- description: openSUSE 15.x platform test
- nodeset: devstack-single-node-opensuse-15
- voting: false
-
-- job:
name: devstack-platform-bionic
parent: tempest-full-py3
description: Ubuntu Bionic platform test
@@ -599,6 +592,17 @@
voting: false
- job:
+ name: devstack-async
+ parent: tempest-full-py3
+ description: Async mode enabled
+ voting: false
+ vars:
+ devstack_localrc:
+ DEVSTACK_PARALLEL: True
+ zuul_copy_output:
+ /opt/stack/async: logs
+
+- job:
name: devstack-platform-fedora-latest
parent: tempest-full-py3
description: Fedora latest platform test
@@ -686,10 +690,10 @@
jobs:
- devstack
- devstack-ipv6
- - devstack-platform-opensuse-15
- devstack-platform-fedora-latest
- devstack-platform-centos-8
- devstack-platform-bionic
+ - devstack-async
- devstack-multinode
- devstack-unit-tests
- openstack-tox-bashate
diff --git a/HACKING.rst b/HACKING.rst
index f55aed8..6a91e0a 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -74,8 +74,7 @@
``tools`` - Contains a collection of stand-alone scripts. While these
may reference the top-level DevStack configuration they can generally be
-run alone. There are also some sub-directories to support specific
-environments such as XenServer.
+run alone.
Scripts
@@ -275,9 +274,6 @@
even years from now -- why we were motivated to make a change at the
time.
-* **Reviewers** -- please see ``MAINTAINERS.rst`` for a list of people
- that should be added to reviews of various sub-systems.
-
Making Changes, Testing, and CI
-------------------------------
diff --git a/MAINTAINERS.rst b/MAINTAINERS.rst
deleted file mode 100644
index d4968a6..0000000
--- a/MAINTAINERS.rst
+++ /dev/null
@@ -1,92 +0,0 @@
-MAINTAINERS
-===========
-
-
-Overview
---------
-
-The following is a list of people known to have interests in
-particular areas or sub-systems of devstack.
-
-It is a rather general guide intended to help seed the initial
-reviewers list of a change. A +1 on a review from someone identified
-as being a maintainer of its affected area is a very positive flag to
-the core team for the veracity of the change.
-
-The ``devstack-core`` group can still be added to all reviews.
-
-
-Format
-~~~~~~
-
-The format of the file is the name of the maintainer and their
-gerrit-registered email.
-
-
-Maintainers
------------
-
-.. contents:: :local:
-
-
-Ceph
-~~~~
-
-* Sebastien Han <sebastien.han@enovance.com>
-
-Cinder
-~~~~~~
-
-Fedora/CentOS/RHEL
-~~~~~~~~~~~~~~~~~~
-
-* Ian Wienand <iwienand@redhat.com>
-
-Neutron
-~~~~~~~
-
-MidoNet
-~~~~~~~
-
-* Jaume Devesa <devvesa@gmail.com>
-* Ryu Ishimoto <ryu@midokura.com>
-* YAMAMOTO Takashi <yamamoto@midokura.com>
-
-OpenDaylight
-~~~~~~~~~~~~
-
-* Kyle Mestery <mestery@mestery.com>
-
-OpenFlow Agent (ofagent)
-~~~~~~~~~~~~~~~~~~~~~~~~
-
-* YAMAMOTO Takashi <yamamoto@valinux.co.jp>
-* Fumihiko Kakuma <kakuma@valinux.co.jp>
-
-Swift
-~~~~~
-
-* Chmouel Boudjnah <chmouel@enovance.com>
-
-SUSE
-~~~~
-
-* Ralf Haferkamp <rhafer@suse.de>
-* Vincent Untz <vuntz@suse.com>
-
-Tempest
-~~~~~~~
-
-Xen
-~~~
-* Bob Ball <bob.ball@citrix.com>
-
-Zaqar (Marconi)
-~~~~~~~~~~~~~~~
-
-* Flavio Percoco <flaper87@gmail.com>
-* Malini Kamalambal <malini.kamalambal@rackspace.com>
-
-Oracle Linux
-~~~~~~~~~~~~
-* Wiekus Beukes <wiekus.beukes@oracle.com>
diff --git a/clean.sh b/clean.sh
index 4cebf1d..870dfd4 100755
--- a/clean.sh
+++ b/clean.sh
@@ -113,7 +113,7 @@
cleanup_database
# Clean out data and status
-sudo rm -rf $DATA_DIR $DEST/status
+sudo rm -rf $DATA_DIR $DEST/status $DEST/async
# Clean out the log file and log directories
if [[ -n "$LOGFILE" ]] && [[ -f "$LOGFILE" ]]; then
diff --git a/doc/source/configuration.rst b/doc/source/configuration.rst
index 22f5999..2d0c894 100644
--- a/doc/source/configuration.rst
+++ b/doc/source/configuration.rst
@@ -628,12 +628,6 @@
INSTALL_TEMPEST=True
-Xenserver
-~~~~~~~~~
-
-If you would like to use Xenserver as the hypervisor, please refer to
-the instructions in ``./tools/xen/README.md``.
-
Cinder
~~~~~~
diff --git a/doc/source/plugins.rst b/doc/source/plugins.rst
index a18a786..7d70d74 100644
--- a/doc/source/plugins.rst
+++ b/doc/source/plugins.rst
@@ -241,7 +241,7 @@
on Ubuntu, Debian or Linux Mint.
- ``./devstack/files/rpms/$plugin_name`` - Packages to install when running
- on Red Hat, Fedora, CentOS or XenServer.
+ on Red Hat, Fedora, or CentOS.
- ``./devstack/files/rpms-suse/$plugin_name`` - Packages to install when
running on SUSE Linux or openSUSE.
diff --git a/extras.d/80-tempest.sh b/extras.d/80-tempest.sh
index 15ecfe3..06c73ec 100644
--- a/extras.d/80-tempest.sh
+++ b/extras.d/80-tempest.sh
@@ -6,7 +6,7 @@
source $TOP_DIR/lib/tempest
elif [[ "$1" == "stack" && "$2" == "install" ]]; then
echo_summary "Installing Tempest"
- install_tempest
+ async_runfunc install_tempest
elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
# Tempest config must come after layer 2 services are running
:
@@ -17,6 +17,7 @@
# local.conf Tempest option overrides
:
elif [[ "$1" == "stack" && "$2" == "test-config" ]]; then
+ async_wait install_tempest
echo_summary "Initializing Tempest"
configure_tempest
echo_summary "Installing Tempest Plugins"
diff --git a/functions b/functions
index fc87a55..ccca5cd 100644
--- a/functions
+++ b/functions
@@ -21,6 +21,7 @@
source ${FUNC_DIR}/inc/meta-config
source ${FUNC_DIR}/inc/python
source ${FUNC_DIR}/inc/rootwrap
+source ${FUNC_DIR}/inc/async
# Save trace setting
_XTRACE_FUNCTIONS=$(set +o | grep xtrace)
@@ -279,31 +280,6 @@
return
fi
- # XenServer-vhd-ovf-format images are provided as .vhd.tgz
- # and should not be decompressed prior to loading
- if [[ "$image_url" =~ '.vhd.tgz' ]]; then
- image_name="${image_fname%.vhd.tgz}"
- local force_vm_mode=""
- if [[ "$image_name" =~ 'cirros' ]]; then
- # Cirros VHD image currently only boots in PV mode.
- # Nova defaults to PV for all VHD images, but
- # the glance setting is needed for booting
- # directly from volume.
- force_vm_mode="vm_mode=xen"
- fi
- _upload_image "$image_name" ovf vhd "$image" $force_vm_mode
- return
- fi
-
- # .xen-raw.tgz suggests a Xen capable raw image inside a tgz.
- # and should not be decompressed prior to loading.
- # Setting metadata, so PV mode is used.
- if [[ "$image_url" =~ '.xen-raw.tgz' ]]; then
- image_name="${image_fname%.xen-raw.tgz}"
- _upload_image "$image_name" tgz raw "$image" vm_mode=xen
- return
- fi
-
if [[ "$image_url" =~ '.hds' ]]; then
image_name="${image_fname%.hds}"
vm_mode=${image_name##*-}
diff --git a/functions-common b/functions-common
index 87d8c64..340da75 100644
--- a/functions-common
+++ b/functions-common
@@ -397,8 +397,6 @@
# Drop the . release as we assume it's compatible
# XXX re-evaluate when we get RHEL10
DISTRO="rhel${os_RELEASE::1}"
- elif [[ "$os_VENDOR" =~ (XenServer) ]]; then
- DISTRO="xs${os_RELEASE%.*}"
else
# We can't make a good choice here. Setting a sensible DISTRO
# is part of the problem, but not the major issue -- we really
diff --git a/inc/async b/inc/async
new file mode 100644
index 0000000..d29168f
--- /dev/null
+++ b/inc/async
@@ -0,0 +1,225 @@
+#!/bin/bash
+#
+# Symbolic asynchronous tasks for devstack
+#
+# Usage:
+#
+# async_runfunc my_shell_func foo bar baz
+#
+# ... do other stuff ...
+#
+# async_wait my_shell_func
+#
+
+DEVSTACK_PARALLEL=$(trueorfalse False DEVSTACK_PARALLEL)
+_ASYNC_BG_TIME=0
+
+# Keep track of how much total time was spent in background tasks
+# Takes a job runtime in ms.
+function _async_incr_bg_time {
+ local elapsed_ms="$1"
+ _ASYNC_BG_TIME=$(($_ASYNC_BG_TIME + $elapsed_ms))
+}
+
+# Get the PID of a named future to wait on
+function async_pidof {
+ local name="$1"
+ local inifile="${DEST}/async/${name}.ini"
+
+ if [ -f "$inifile" ]; then
+ iniget $inifile job pid
+ else
+ echo 'UNKNOWN'
+ return 1
+ fi
+}
+
+# Log a message about a job. If the message contains "%command" then the
+# full command line of the job will be substituted in the output
+function async_log {
+ local name="$1"
+ shift
+ local message="$*"
+ local inifile=${DEST}/async/${name}.ini
+ local pid
+ local command
+
+ pid=$(iniget $inifile job pid)
+ command=$(iniget $inifile job command | tr '#' '-')
+ message=$(echo "$message" | sed "s#%command#$command#g")
+
+ echo "[Async ${name}:${pid}]: $message"
+}
+
+# Inner function that actually runs the requested task. We wrap it like this
+# just so we can emit a finish message as soon as the work is done, making it
+# easier to spot in the log just before any subsequent error.
+function async_inner {
+ local name="$1"
+ local rc
+ shift
+ set -o xtrace
+ if $* >${DEST}/async/${name}.log 2>&1; then
+ rc=0
+ set +o xtrace
+ async_log "$name" "finished successfully"
+ else
+ rc=$?
+ set +o xtrace
+ async_log "$name" "FAILED with rc $rc"
+ fi
+ iniset ${DEST}/async/${name}.ini job end_time $(date "+%s%3N")
+ return $rc
+}
+
+# Run something async. Takes a symbolic name and the command (plus arguments)
+# to run. Ideally this would be rarely used and async_runfunc() would
+# be used everywhere for readability.
+#
+# This spawns the work in a background worker and records a "future" to be
+# collected by a later call to async_wait()
+function async_run {
+ local xtrace
+ xtrace=$(set +o | grep xtrace)
+ set +o xtrace
+
+ local name="$1"
+ shift
+ local inifile=${DEST}/async/${name}.ini
+
+ touch $inifile
+ iniset $inifile job command "$*"
+ iniset $inifile job start_time $(date +%s%3N)
+
+ if [[ "$DEVSTACK_PARALLEL" = "True" ]]; then
+ async_inner $name $* &
+ iniset $inifile job pid $!
+ async_log "$name" "running: %command"
+ $xtrace
+ else
+ iniset $inifile job pid "self"
+ async_log "$name" "Running synchronously: %command"
+ $xtrace
+ $*
+ return $?
+ fi
+}
+
+# Shortcut for running a shell function async. Uses the function name as the
+# async name.
+function async_runfunc {
+ async_run $1 $*
+}
+
+# Wait for an async future to complete. May return immediately if already
+# complete, or if the future has already been waited on (avoid this). May
+# block until the future completes.
+function async_wait {
+ local xtrace
+ xtrace=$(set +o | grep xtrace)
+ set +o xtrace
+
+ local pid rc running inifile runtime
+ rc=0
+ for name in $*; do
+ running=$(ls ${DEST}/async/*.ini 2>/dev/null | wc -l)
+ inifile="${DEST}/async/${name}.ini"
+
+ if pid=$(async_pidof "$name"); then
+ async_log "$name" "Waiting for completion of %command" \
+ "($running other jobs running)"
+ time_start async_wait
+ if [[ "$pid" != "self" ]]; then
+ # Do not actually call wait if we ran synchronously
+ if wait $pid; then
+ rc=0
+ else
+ rc=$?
+ fi
+ cat ${DEST}/async/${name}.log
+ fi
+ time_stop async_wait
+ local start_time
+ local end_time
+ start_time=$(iniget $inifile job start_time)
+ end_time=$(iniget $inifile job end_time)
+ _async_incr_bg_time $(($end_time - $start_time))
+ runtime=$((($end_time - $start_time) / 1000))
+ async_log "$name" "finished %command with result" \
+ "$rc in $runtime seconds"
+ rm -f $inifile
+ if [ $rc -ne 0 ]; then
+ echo Stopping async wait due to error: $*
+ break
+ fi
+ else
+ # This could probably be removed - it is really just here
+ # to help notice if you wait for something by the wrong
+ # name, but it also shows up for things we didn't start
+ # because they were not enabled.
+            echo Not waiting for async task $name that was never started or \
+                has already been waited for
+ fi
+ done
+
+ $xtrace
+ return $rc
+}
+
+# Check for uncollected futures and wait on them
+function async_cleanup {
+ local name
+
+ if [[ "$DEVSTACK_PARALLEL" != "True" ]]; then
+ return 0
+ fi
+
+ for inifile in $(find ${DEST}/async -name '*.ini'); do
+        name=$(basename $inifile .ini)
+ echo "WARNING: uncollected async future $name"
+ async_wait $name || true
+ done
+}
+
+# Make sure our async dir is created and clean
+function async_init {
+ local async_dir=${DEST}/async
+
+ # Clean any residue if present from previous runs
+ rm -Rf $async_dir
+
+ # Make sure we have a state directory
+ mkdir -p $async_dir
+}
+
+function async_print_timing {
+ local bg_time_minus_wait
+ local elapsed_time
+ local serial_time
+ local speedup
+
+ if [[ "$DEVSTACK_PARALLEL" != "True" ]]; then
+ return 0
+ fi
+
+ # The logic here is: All the background task time would be
+ # serialized if we did not do them in the background. So we can
+ # add that to the elapsed time for the whole run. However, time we
+ # spend waiting for async things to finish adds to the elapsed
+ # time, but is time where we're not doing anything useful. Thus,
+# we subtract that from the would-be-serialized time.
+
+ bg_time_minus_wait=$((\
+ ($_ASYNC_BG_TIME - ${_TIME_TOTAL[async_wait]}) / 1000))
+ elapsed_time=$(($(date "+%s") - $_TIME_BEGIN))
+ serial_time=$(($elapsed_time + $bg_time_minus_wait))
+
+ echo
+ echo "================="
+ echo " Async summary"
+ echo "================="
+ echo " Time spent in the background minus waits: $bg_time_minus_wait sec"
+ echo " Elapsed time: $elapsed_time sec"
+ echo " Time if we did everything serially: $serial_time sec"
+ echo " Speedup: " $(echo | awk "{print $serial_time / $elapsed_time}")
+}
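The bookkeeping in inc/async ultimately rests on plain bash job control: run a command in the background, remember its PID as a "future", and later `wait` on that PID to collect the exit status. A minimal sketch of that core pattern (without the ini-file state tracking, and using a hypothetical /tmp log path):

```shell
# Run a task in the background, capturing its output to a log file,
# and record its PID as a "future" to collect later.
slow_task() {
    sleep 1
    echo "task done"
}

slow_task > /tmp/future.log 2>&1 &
future_pid=$!

# ... other setup work would happen here, in parallel ...

# Collect the future: wait returns the background job's exit status.
if wait "$future_pid"; then
    rc=0
else
    rc=$?
fi
cat /tmp/future.log
```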
diff --git a/lib/cinder b/lib/cinder
index 14ab291..f20631b 100644
--- a/lib/cinder
+++ b/lib/cinder
@@ -570,6 +570,14 @@
OS_USER_ID=$OS_USERNAME OS_PROJECT_ID=$OS_PROJECT_NAME cinder --os-auth-type noauth --os-endpoint=$cinder_url type-key ${be_name} set volume_backend_name=${be_name}
fi
done
+
+ # Increase quota for the service project if glance is using cinder,
+ # since it's likely to occasionally go above the default 10 in parallel
+ # test execution.
+ if [[ "$USE_CINDER_FOR_GLANCE" == "True" ]]; then
+ openstack --os-region-name="$REGION_NAME" \
+ quota set --volumes 50 "$SERVICE_PROJECT_NAME"
+ fi
fi
}
diff --git a/lib/cinder_plugins/XenAPINFS b/lib/cinder_plugins/XenAPINFS
deleted file mode 100644
index 92135e7..0000000
--- a/lib/cinder_plugins/XenAPINFS
+++ /dev/null
@@ -1,46 +0,0 @@
-#!/bin/bash
-#
-# lib/cinder_plugins/XenAPINFS
-# Configure the XenAPINFS driver
-
-# Enable with:
-#
-# CINDER_DRIVER=XenAPINFS
-
-# Dependencies:
-#
-# - ``functions`` file
-# - ``cinder`` configurations
-
-# configure_cinder_driver - make configuration changes, including those to other services
-
-# Save trace setting
-_XTRACE_CINDER_XENAPINFS=$(set +o | grep xtrace)
-set +o xtrace
-
-
-# Defaults
-# --------
-
-# Set up default directories
-
-
-# Entry Points
-# ------------
-
-# configure_cinder_driver - Set config files, create data dirs, etc
-function configure_cinder_driver {
- iniset $CINDER_CONF DEFAULT volume_driver "cinder.volume.drivers.xenapi.sm.XenAPINFSDriver"
- iniset $CINDER_CONF DEFAULT xenapi_connection_url "$CINDER_XENAPI_CONNECTION_URL"
- iniset $CINDER_CONF DEFAULT xenapi_connection_username "$CINDER_XENAPI_CONNECTION_USERNAME"
- iniset $CINDER_CONF DEFAULT xenapi_connection_password "$CINDER_XENAPI_CONNECTION_PASSWORD"
- iniset $CINDER_CONF DEFAULT xenapi_nfs_server "$CINDER_XENAPI_NFS_SERVER"
- iniset $CINDER_CONF DEFAULT xenapi_nfs_serverpath "$CINDER_XENAPI_NFS_SERVERPATH"
-}
-
-# Restore xtrace
-$_XTRACE_CINDER_XENAPINFS
-
-# Local variables:
-# mode: shell-script
-# End:
diff --git a/lib/glance b/lib/glance
index c2a8b74..e789aff 100644
--- a/lib/glance
+++ b/lib/glance
@@ -130,8 +130,9 @@
# cleanup_glance() - Remove residual data files, anything left over from previous
# runs that a clean run would need to clean up
function cleanup_glance {
- # delete image files (glance)
- sudo rm -rf $GLANCE_CACHE_DIR $GLANCE_IMAGE_DIR
+ # delete image files (glance) and all of the glance-remote temporary
+ # storage
+ sudo rm -rf $GLANCE_CACHE_DIR $GLANCE_IMAGE_DIR "${DATA_DIR}/glance-remote"
# Cleanup multiple stores directories
if [[ "$GLANCE_ENABLE_MULTIPLE_STORES" == "True" ]]; then
@@ -279,10 +280,6 @@
configure_keystone_authtoken_middleware $GLANCE_API_CONF glance
iniset $GLANCE_API_CONF oslo_messaging_notifications driver messagingv2
iniset_rpc_backend glance $GLANCE_API_CONF
- if [ "$VIRT_DRIVER" = 'xenserver' ]; then
- iniset $GLANCE_API_CONF DEFAULT container_formats "ami,ari,aki,bare,ovf,tgz"
- iniset $GLANCE_API_CONF DEFAULT disk_formats "ami,ari,aki,vhd,raw,iso"
- fi
if [ "$VIRT_DRIVER" = 'libvirt' ] && [ "$LIBVIRT_TYPE" = 'parallels' ]; then
iniset $GLANCE_API_CONF DEFAULT disk_formats "ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso,ploop"
fi
@@ -365,6 +362,11 @@
if [[ "$GLANCE_STANDALONE" == False ]]; then
write_local_uwsgi_http_config "$GLANCE_UWSGI_CONF" "$GLANCE_UWSGI" "/image"
+ # Grab our uwsgi listen address and use that to fill out our
+ # worker_self_reference_url config
+ iniset $GLANCE_API_CONF DEFAULT worker_self_reference_url \
+ $(awk '-F= ' '/^http-socket/ { print "http://"$2}' \
+ $GLANCE_UWSGI_CONF)
else
write_local_proxy_http_config glance "http://$GLANCE_SERVICE_HOST:$GLANCE_SERVICE_PORT_INT" "/image"
iniset $GLANCE_API_CONF DEFAULT bind_host $GLANCE_SERVICE_LISTEN_ADDRESS
@@ -460,6 +462,67 @@
setup_develop $GLANCE_DIR
}
+# glance_remote_conf() - Return the path to an alternate config file for
+# the remote glance clone
+function glance_remote_conf {
+ echo $(dirname "${GLANCE_CONF_DIR}")/glance-remote/$(basename "$1")
+}
+
+# start_glance_remote_clone() - Clone the regular glance api worker
+function start_glance_remote_clone {
+ local glance_remote_conf_dir glance_remote_port remote_data
+ local glance_remote_uwsgi
+
+ glance_remote_conf_dir="$(glance_remote_conf "")"
+ glance_remote_port=$(get_random_port)
+ glance_remote_uwsgi="$(glance_remote_conf $GLANCE_UWSGI_CONF)"
+
+ # Clone the existing ready-to-go glance-api setup
+ sudo rm -Rf "$glance_remote_conf_dir"
+ sudo cp -r "$GLANCE_CONF_DIR" "$glance_remote_conf_dir"
+ sudo chown $STACK_USER -R "$glance_remote_conf_dir"
+
+ # Point this worker at different data dirs
+ remote_data="${DATA_DIR}/glance-remote"
+ mkdir -p $remote_data/os_glance_tasks_store \
+ "${remote_data}/os_glance_staging_store"
+ iniset $(glance_remote_conf "$GLANCE_API_CONF") os_glance_staging_store \
+ filesystem_store_datadir "${remote_data}/os_glance_staging_store"
+ iniset $(glance_remote_conf "$GLANCE_API_CONF") os_glance_tasks_store \
+ filesystem_store_datadir "${remote_data}/os_glance_tasks_store"
+
+ # Change our uwsgi to our new port
+ sed -ri "s/^(http-socket.*):[0-9]+/\1:$glance_remote_port/" \
+ "$glance_remote_uwsgi"
+
+ # Update the self-reference url with our new port
+ iniset $(glance_remote_conf $GLANCE_API_CONF) DEFAULT \
+ worker_self_reference_url \
+ $(awk '-F= ' '/^http-socket/ { print "http://"$2 }' \
+ "$glance_remote_uwsgi")
+
+ # We need to create the systemd service for the clone, but then
+ # change it to include an Environment line to point the WSGI app
+ # at the alternate config directory.
+ write_uwsgi_user_unit_file devstack@g-api-r.service "$(which uwsgi) \
+ --procname-prefix \
+ glance-api-remote \
+ --ini $glance_remote_uwsgi" \
+ "" "$STACK_USER"
+ iniset -sudo ${SYSTEMD_DIR}/devstack@g-api-r.service \
+ "Service" "Environment" \
+ "OS_GLANCE_CONFIG_DIR=$glance_remote_conf_dir"
+
+ # Reload and restart with the new config
+ $SYSTEMCTL daemon-reload
+ $SYSTEMCTL restart devstack@g-api-r
+
+ get_or_create_service glance_remote image_remote "Alternate glance"
+ get_or_create_endpoint image_remote $REGION_NAME \
+ $(awk '-F= ' '/^http-socket/ { print "http://"$2 }' \
+ $glance_remote_uwsgi)
+}
+
# start_glance() - Start running processes
function start_glance {
local service_protocol=$GLANCE_SERVICE_PROTOCOL
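The `awk '-F= '` one-liner used above to build `worker_self_reference_url` splits uwsgi config lines on `"= "` and prefixes the socket address with `http://`. Its behavior on a sample config line (hypothetical address and path) can be sketched as:

```shell
# '-F= ' sets awk's field separator to "= ", so a uwsgi line like
# "http-socket = 10.0.0.5:60999" splits into the key and the address.
printf 'plugin = python3\nhttp-socket = 10.0.0.5:60999\n' > /tmp/g-api-uwsgi.conf
url=$(awk '-F= ' '/^http-socket/ { print "http://"$2 }' /tmp/g-api-uwsgi.conf)
echo "$url"
```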
@@ -475,6 +538,11 @@
run_process g-api "$GLANCE_BIN_DIR/glance-api --config-dir=$GLANCE_CONF_DIR"
fi
+ if is_service_enabled g-api-r; then
+ echo "Starting the g-api-r clone service..."
+ start_glance_remote_clone
+ fi
+
echo "Waiting for g-api ($GLANCE_SERVICE_HOST) to start..."
if ! wait_for_service $SERVICE_TIMEOUT $GLANCE_URL; then
die $LINENO "g-api did not start"
@@ -484,6 +552,7 @@
# stop_glance() - Stop running processes
function stop_glance {
stop_process g-api
+ stop_process g-api-r
}
# Restore xtrace
diff --git a/lib/keystone b/lib/keystone
index d4c7b06..66e867c 100644
--- a/lib/keystone
+++ b/lib/keystone
@@ -318,25 +318,25 @@
local admin_role="admin"
local member_role="member"
- get_or_add_user_domain_role $admin_role $admin_user default
+ async_run ks-domain-role get_or_add_user_domain_role $admin_role $admin_user default
# Create service project/role
get_or_create_domain "$SERVICE_DOMAIN_NAME"
- get_or_create_project "$SERVICE_PROJECT_NAME" "$SERVICE_DOMAIN_NAME"
+ async_run ks-project get_or_create_project "$SERVICE_PROJECT_NAME" "$SERVICE_DOMAIN_NAME"
# Service role, so service users do not have to be admins
- get_or_create_role service
+ async_run ks-service get_or_create_role service
# The ResellerAdmin role is used by Nova and Ceilometer so we need to keep it.
# The admin role in swift allows a user to act as an admin for their project,
# but ResellerAdmin is needed for a user to act as any project. The name of this
# role is also configurable in swift-proxy.conf
- get_or_create_role ResellerAdmin
+ async_run ks-reseller get_or_create_role ResellerAdmin
# another_role demonstrates that an arbitrary role may be created and used
# TODO(sleepsonthefloor): show how this can be used for rbac in the future!
local another_role="anotherrole"
- get_or_create_role $another_role
+ async_run ks-anotherrole get_or_create_role $another_role
# invisible project - admin can't see this one
local invis_project
@@ -349,10 +349,12 @@
demo_user=$(get_or_create_user "demo" \
"$ADMIN_PASSWORD" "default" "demo@example.com")
- get_or_add_user_project_role $member_role $demo_user $demo_project
- get_or_add_user_project_role $admin_role $admin_user $demo_project
- get_or_add_user_project_role $another_role $demo_user $demo_project
- get_or_add_user_project_role $member_role $demo_user $invis_project
+ async_wait ks-{domain-role,domain,project,service,reseller,anotherrole}
+
+ async_run ks-demo-member get_or_add_user_project_role $member_role $demo_user $demo_project
+ async_run ks-demo-admin get_or_add_user_project_role $admin_role $admin_user $demo_project
+ async_run ks-demo-another get_or_add_user_project_role $another_role $demo_user $demo_project
+ async_run ks-demo-invis get_or_add_user_project_role $member_role $demo_user $invis_project
# alt_demo
local alt_demo_project
@@ -361,9 +363,9 @@
alt_demo_user=$(get_or_create_user "alt_demo" \
"$ADMIN_PASSWORD" "default" "alt_demo@example.com")
- get_or_add_user_project_role $member_role $alt_demo_user $alt_demo_project
- get_or_add_user_project_role $admin_role $admin_user $alt_demo_project
- get_or_add_user_project_role $another_role $alt_demo_user $alt_demo_project
+ async_run ks-alt-member get_or_add_user_project_role $member_role $alt_demo_user $alt_demo_project
+ async_run ks-alt-admin get_or_add_user_project_role $admin_role $admin_user $alt_demo_project
+ async_run ks-alt-another get_or_add_user_project_role $another_role $alt_demo_user $alt_demo_project
# groups
local admin_group
@@ -373,11 +375,15 @@
non_admin_group=$(get_or_create_group "nonadmins" \
"default" "non-admin group")
- get_or_add_group_project_role $member_role $non_admin_group $demo_project
- get_or_add_group_project_role $another_role $non_admin_group $demo_project
- get_or_add_group_project_role $member_role $non_admin_group $alt_demo_project
- get_or_add_group_project_role $another_role $non_admin_group $alt_demo_project
- get_or_add_group_project_role $admin_role $admin_group $admin_project
+ async_run ks-group-memberdemo get_or_add_group_project_role $member_role $non_admin_group $demo_project
+ async_run ks-group-anotherdemo get_or_add_group_project_role $another_role $non_admin_group $demo_project
+ async_run ks-group-memberalt get_or_add_group_project_role $member_role $non_admin_group $alt_demo_project
+ async_run ks-group-anotheralt get_or_add_group_project_role $another_role $non_admin_group $alt_demo_project
+ async_run ks-group-admin get_or_add_group_project_role $admin_role $admin_group $admin_project
+
+ async_wait ks-demo-{member,admin,another,invis}
+ async_wait ks-alt-{member,admin,another}
+ async_wait ks-group-{memberdemo,anotherdemo,memberalt,anotheralt,admin}
if is_service_enabled ldap; then
create_ldap_domain
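The grouped `async_wait` calls above lean on ordinary bash brace expansion to fan one prefix out into several future names. A quick illustration of the expansion (the `ks-demo-` names match those used above):

```shell
# Brace expansion turns "ks-demo-{member,admin,another,invis}" into four
# separate words, so async_wait receives four future names in one call.
names=(ks-demo-{member,admin,another,invis})
echo "${names[@]}"
```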
diff --git a/lib/neutron_plugins/openvswitch_agent b/lib/neutron_plugins/openvswitch_agent
index 1009611..7fed8bf 100644
--- a/lib/neutron_plugins/openvswitch_agent
+++ b/lib/neutron_plugins/openvswitch_agent
@@ -15,6 +15,10 @@
function neutron_plugin_install_agent_packages {
_neutron_ovs_base_install_agent_packages
+ if use_library_from_git "os-ken"; then
+ git_clone_by_name "os-ken"
+ setup_dev_lib "os-ken"
+ fi
}
function neutron_plugin_configure_dhcp_agent {
diff --git a/lib/neutron_plugins/ovn_agent b/lib/neutron_plugins/ovn_agent
index b661f59..2f6d1ab 100644
--- a/lib/neutron_plugins/ovn_agent
+++ b/lib/neutron_plugins/ovn_agent
@@ -66,7 +66,9 @@
# A UUID to uniquely identify this system. If one is not specified, a random
# one will be generated. A randomly generated UUID will be saved in a file
-# 'ovn-uuid' so that the same one will be re-used if you re-run DevStack.
+# $OVS_SYSCONFDIR/system-id.conf (typically /etc/openvswitch/system-id.conf)
+# so that the same one will be re-used if you re-run DevStack or restart
+# Open vSwitch service.
OVN_UUID=${OVN_UUID:-}
# Whether or not to build the openvswitch kernel module from ovs. This is required
@@ -109,6 +111,7 @@
OVS_SHAREDIR=$OVS_PREFIX/share/openvswitch
OVS_SCRIPTDIR=$OVS_SHAREDIR/scripts
OVS_DATADIR=$DATA_DIR/ovs
+OVS_SYSCONFDIR=${OVS_SYSCONFDIR:-$OVS_PREFIX/etc/openvswitch}
OVN_DATADIR=$DATA_DIR/ovn
OVN_SHAREDIR=$OVS_PREFIX/share/ovn
@@ -149,6 +152,9 @@
# this one allows empty:
ML2_L3_PLUGIN=${ML2_L3_PLUGIN-"ovn-router"}
+Q_LOG_DRIVER_RATE_LIMIT=${Q_LOG_DRIVER_RATE_LIMIT:-100}
+Q_LOG_DRIVER_BURST_LIMIT=${Q_LOG_DRIVER_BURST_LIMIT:-25}
+Q_LOG_DRIVER_LOG_BASE=${Q_LOG_DRIVER_LOG_BASE:-acl_log_meter}
# Utility Functions
# -----------------
@@ -271,8 +277,7 @@
sudo ovs-vsctl set open . external-ids:ovn-bridge-mappings=$PHYSICAL_NETWORK:$ext_gw_ifc
if [ -n "$FLOATING_RANGE" ]; then
local cidr_len=${FLOATING_RANGE#*/}
- sudo ip addr flush dev $ext_gw_ifc
- sudo ip addr add $PUBLIC_NETWORK_GATEWAY/$cidr_len dev $ext_gw_ifc
+ sudo ip addr replace $PUBLIC_NETWORK_GATEWAY/$cidr_len dev $ext_gw_ifc
fi
# Ensure IPv6 RAs are accepted on the interface with the default route.
@@ -286,8 +291,7 @@
sudo sysctl -w net.ipv6.conf.all.forwarding=1
if [ -n "$IPV6_PUBLIC_RANGE" ]; then
local ipv6_cidr_len=${IPV6_PUBLIC_RANGE#*/}
- sudo ip -6 addr flush dev $ext_gw_ifc
- sudo ip -6 addr add $IPV6_PUBLIC_NETWORK_GATEWAY/$ipv6_cidr_len dev $ext_gw_ifc
+ sudo ip -6 addr replace $IPV6_PUBLIC_NETWORK_GATEWAY/$ipv6_cidr_len dev $ext_gw_ifc
fi
sudo ip link set $ext_gw_ifc up
@@ -490,6 +494,12 @@
populate_ml2_config /$Q_PLUGIN_CONF_FILE securitygroup enable_security_group="$Q_USE_SECGROUP"
inicomment /$Q_PLUGIN_CONF_FILE securitygroup firewall_driver
+ if is_service_enabled q-log neutron-log; then
+ populate_ml2_config /$Q_PLUGIN_CONF_FILE network_log rate_limit="$Q_LOG_DRIVER_RATE_LIMIT"
+ populate_ml2_config /$Q_PLUGIN_CONF_FILE network_log burst_limit="$Q_LOG_DRIVER_BURST_LIMIT"
+ inicomment /$Q_PLUGIN_CONF_FILE network_log local_output_log_base="$Q_LOG_DRIVER_LOG_BASE"
+ fi
+
if is_service_enabled q-ovn-metadata-agent; then
populate_ml2_config /$Q_PLUGIN_CONF_FILE ovn ovn_metadata_enabled=True
else
@@ -521,11 +531,17 @@
echo "Configuring OVN"
if [ -z "$OVN_UUID" ] ; then
- if [ -f ./ovn-uuid ] ; then
- OVN_UUID=$(cat ovn-uuid)
+ if [ -f $OVS_SYSCONFDIR/system-id.conf ]; then
+ OVN_UUID=$(cat $OVS_SYSCONFDIR/system-id.conf)
else
OVN_UUID=$(uuidgen)
- echo $OVN_UUID > ovn-uuid
+ echo $OVN_UUID | sudo tee $OVS_SYSCONFDIR/system-id.conf
+ fi
+ else
+ local ovs_uuid
+ ovs_uuid=$(cat $OVS_SYSCONFDIR/system-id.conf)
+        if [ "$ovs_uuid" != "$OVN_UUID" ]; then
+ echo $OVN_UUID | sudo tee $OVS_SYSCONFDIR/system-id.conf
fi
fi
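The logic above persists a generated system ID so that DevStack re-runs and Open vSwitch restarts reuse the same value. The persistence pattern, reduced to its essentials (using a hypothetical /tmp path instead of $OVS_SYSCONFDIR and skipping the sudo tee plumbing):

```shell
# Reuse a previously saved system ID if present; otherwise generate one
# and save it so later runs pick up the same value.
get_system_id() {
    local conf="$1" uuid
    if [ -f "$conf" ]; then
        cat "$conf"
    else
        uuid=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)
        echo "$uuid" | tee "$conf"
    fi
}

rm -f /tmp/system-id.conf
first=$(get_system_id /tmp/system-id.conf)
second=$(get_system_id /tmp/system-id.conf)
```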
diff --git a/lib/nova b/lib/nova
index d742603..930529a 100644
--- a/lib/nova
+++ b/lib/nova
@@ -83,6 +83,11 @@
# services and the compute node
NOVA_CONSOLE_PROXY_COMPUTE_TLS=${NOVA_CONSOLE_PROXY_COMPUTE_TLS:-False}
+# Validate configuration
+if ! is_service_enabled tls-proxy && [ "$NOVA_CONSOLE_PROXY_COMPUTE_TLS" == "True" ]; then
+ die $LINENO "enabling TLS for the console proxy requires the tls-proxy service"
+fi
+
# Public facing bits
NOVA_SERVICE_HOST=${NOVA_SERVICE_HOST:-$SERVICE_HOST}
NOVA_SERVICE_PORT=${NOVA_SERVICE_PORT:-8774}
@@ -135,7 +140,7 @@
# ``NOVA_USE_SERVICE_TOKEN`` is a mode where service token is passed along with
# user token while communicating to external REST APIs like Neutron, Cinder
# and Glance.
-NOVA_USE_SERVICE_TOKEN=$(trueorfalse False NOVA_USE_SERVICE_TOKEN)
+NOVA_USE_SERVICE_TOKEN=$(trueorfalse True NOVA_USE_SERVICE_TOKEN)
# ``NOVA_ALLOW_MOVE_TO_SAME_HOST`` can be set to False in multi node DevStack,
# where there are at least two nova-computes.
@@ -607,10 +612,10 @@
# can use the NOVA_CPU_CELL variable to know which cell we are for
# calculating the offset.
# Stagger the offset based on the total number of possible console proxies
- # (novnc, xvpvnc, spice, serial) so that their ports will not collide if
+ # (novnc, spice, serial) so that their ports will not collide if
# all are enabled.
local offset
- offset=$(((NOVA_CPU_CELL - 1) * 4))
+ offset=$(((NOVA_CPU_CELL - 1) * 3))
# Use the host IP instead of the service host because for multi-node, the
# service host will be the controller only.
@@ -618,7 +623,7 @@
default_proxyclient_addr=$(iniget $NOVA_CPU_CONF DEFAULT my_ip)
# All nova-compute workers need to know the vnc configuration options
- # These settings don't hurt anything if n-xvnc and n-novnc are disabled
+ # These settings don't hurt anything if n-novnc is disabled
if is_service_enabled n-cpu; then
if [ "$NOVNC_FROM_PACKAGE" == "True" ]; then
# Use the old URL when installing novnc packages.
@@ -631,13 +636,11 @@
NOVNCPROXY_URL=${NOVNCPROXY_URL:-"http://$SERVICE_HOST:$((6080 + offset))/vnc_lite.html"}
fi
iniset $NOVA_CPU_CONF vnc novncproxy_base_url "$NOVNCPROXY_URL"
- XVPVNCPROXY_URL=${XVPVNCPROXY_URL:-"http://$SERVICE_HOST:$((6081 + offset))/console"}
- iniset $NOVA_CPU_CONF vnc xvpvncproxy_base_url "$XVPVNCPROXY_URL"
- SPICEHTML5PROXY_URL=${SPICEHTML5PROXY_URL:-"http://$SERVICE_HOST:$((6082 + offset))/spice_auto.html"}
+ SPICEHTML5PROXY_URL=${SPICEHTML5PROXY_URL:-"http://$SERVICE_HOST:$((6081 + offset))/spice_auto.html"}
iniset $NOVA_CPU_CONF spice html5proxy_base_url "$SPICEHTML5PROXY_URL"
fi
- if is_service_enabled n-novnc || is_service_enabled n-xvnc || [ "$NOVA_VNC_ENABLED" != False ]; then
+ if is_service_enabled n-novnc || [ "$NOVA_VNC_ENABLED" != False ]; then
# Address on which instance vncservers will listen on compute hosts.
# For multi-host, this should be the management ip of the compute host.
VNCSERVER_LISTEN=${VNCSERVER_LISTEN:-$NOVA_SERVICE_LISTEN_ADDRESS}
@@ -660,7 +663,7 @@
if is_service_enabled n-sproxy; then
iniset $NOVA_CPU_CONF serial_console enabled True
- iniset $NOVA_CPU_CONF serial_console base_url "ws://$SERVICE_HOST:$((6083 + offset))/"
+ iniset $NOVA_CPU_CONF serial_console base_url "ws://$SERVICE_HOST:$((6082 + offset))/"
fi
}
@@ -669,15 +672,13 @@
local conf=${1:-$NOVA_CONF}
local offset=${2:-0}
# Stagger the offset based on the total number of possible console proxies
- # (novnc, xvpvnc, spice, serial) so that their ports will not collide if
+ # (novnc, spice, serial) so that their ports will not collide if
# all are enabled.
- offset=$((offset * 4))
+ offset=$((offset * 3))
- if is_service_enabled n-novnc || is_service_enabled n-xvnc || [ "$NOVA_VNC_ENABLED" != False ]; then
+ if is_service_enabled n-novnc || [ "$NOVA_VNC_ENABLED" != False ]; then
iniset $conf vnc novncproxy_host "$NOVA_SERVICE_LISTEN_ADDRESS"
iniset $conf vnc novncproxy_port $((6080 + offset))
- iniset $conf vnc xvpvncproxy_host "$NOVA_SERVICE_LISTEN_ADDRESS"
- iniset $conf vnc xvpvncproxy_port $((6081 + offset))
if is_nova_console_proxy_compute_tls_enabled ; then
iniset $conf vnc auth_schemes "vencrypt"
@@ -709,12 +710,12 @@
if is_service_enabled n-spice; then
iniset $conf spice html5proxy_host "$NOVA_SERVICE_LISTEN_ADDRESS"
- iniset $conf spice html5proxy_port $((6082 + offset))
+ iniset $conf spice html5proxy_port $((6081 + offset))
fi
if is_service_enabled n-sproxy; then
iniset $conf serial_console serialproxy_host "$NOVA_SERVICE_LISTEN_ADDRESS"
- iniset $conf serial_console serialproxy_port $((6083 + offset))
+ iniset $conf serial_console serialproxy_port $((6082 + offset))
fi
}
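
With xvpvnc removed, each cell now claims a block of three consecutive console proxy ports instead of four. A standalone sketch of the resulting port layout (illustrative only, not part of DevStack itself):

```shell
#!/bin/bash
# Illustrative only: the per-cell console proxy ports after the xvpvnc
# removal. Three proxies (novnc, spice, serial) per cell, so the stagger
# is cell * 3 starting from port 6080.
for cell in 0 1 2; do
    offset=$((cell * 3))
    echo "cell${cell}: novnc=$((6080 + offset)) spice=$((6081 + offset)) serial=$((6082 + offset))"
done
```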
@@ -741,30 +742,50 @@
sudo install -d -o $STACK_USER ${NOVA_STATE_PATH} ${NOVA_STATE_PATH}/keys
}
+function init_nova_db {
+ local dbname="$1"
+ local conffile="$2"
+ recreate_database $dbname
+ $NOVA_BIN_DIR/nova-manage --config-file $conffile db sync --local_cell
+}
+
# init_nova() - Initialize databases, etc.
function init_nova {
# All nova components talk to a central database.
# Only do this step once on the API node for an entire cluster.
if is_service_enabled $DATABASE_BACKENDS && is_service_enabled n-api; then
+ # (Re)create nova databases
+ if [[ "$CELLSV2_SETUP" == "singleconductor" ]]; then
+ # If we are doing singleconductor mode, we have some strange
+ # interdependencies, in that the main config refers to cell1
+ # instead of cell0. In that case, just make sure the cell0 database
+ # is created before we need it below, but don't db_sync it until
+ # after the cellN databases are there.
+ recreate_database nova_cell0
+ else
+ async_run nova-cell-0 init_nova_db nova_cell0 $NOVA_CONF
+ fi
+
+ for i in $(seq 1 $NOVA_NUM_CELLS); do
+ async_run nova-cell-$i init_nova_db nova_cell${i} $(conductor_conf $i)
+ done
+
recreate_database $NOVA_API_DB
$NOVA_BIN_DIR/nova-manage --config-file $NOVA_CONF api_db sync
- recreate_database nova_cell0
-
# map_cell0 will create the cell mapping record in the nova_api DB so
- # this needs to come after the api_db sync happens. We also want to run
- # this before the db sync below since that will migrate both the nova
- # and nova_cell0 databases.
+ # this needs to come after the api_db sync happens.
$NOVA_BIN_DIR/nova-manage cell_v2 map_cell0 --database_connection `database_connection_url nova_cell0`
- # (Re)create nova databases
- for i in $(seq 1 $NOVA_NUM_CELLS); do
- recreate_database nova_cell${i}
- $NOVA_BIN_DIR/nova-manage --config-file $(conductor_conf $i) db sync --local_cell
+ # Wait for DBs to finish from above
+ for i in $(seq 0 $NOVA_NUM_CELLS); do
+ async_wait nova-cell-$i
done
- # Migrate nova and nova_cell0 databases.
- $NOVA_BIN_DIR/nova-manage --config-file $NOVA_CONF db sync
+ if [[ "$CELLSV2_SETUP" == "singleconductor" ]]; then
+ # We didn't db sync cell0 above, so run it now
+ $NOVA_BIN_DIR/nova-manage --config-file $NOVA_CONF db sync
+ fi
# Run online migrations on the new databases
# Needed for flavor conversion
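
The `async_run`/`async_wait` helpers used above come from DevStack's async machinery (note the `async_init`/`async_cleanup` calls added to `stack.sh` and the `/opt/stack/async` log directory in the new Zuul job). As a rough mental model only, this is a simplified stand-in, not the real implementation, which also handles logging and timing; they behave like named background jobs:

```shell
#!/bin/bash
# Simplified stand-in for DevStack's async helpers (assumption: the real
# versions add log capture and timing; this models only the fire-and-wait
# semantics used by init_nova above).
declare -A _ASYNC_PIDS

function async_run {
    local name=$1
    shift
    "$@" &
    _ASYNC_PIDS[$name]=$!
}

function async_wait {
    local name
    for name in "$@"; do
        wait "${_ASYNC_PIDS[$name]}"
    done
}

# Kick off two "cell" database syncs concurrently, then barrier on both.
async_run cell-1 sleep 0.2
async_run cell-2 sleep 0.2
async_wait cell-1 cell-2
echo "all cells synced"
```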
@@ -961,7 +982,7 @@
function enable_nova_console_proxies {
for i in $(seq 1 $NOVA_NUM_CELLS); do
- for srv in n-novnc n-xvnc n-spice n-sproxy; do
+ for srv in n-novnc n-spice n-sproxy; do
if is_service_enabled $srv; then
enable_service ${srv}-cell${i}
fi
@@ -979,7 +1000,6 @@
# console proxies run globally for singleconductor, else they run per cell
if [[ "${CELLSV2_SETUP}" == "singleconductor" ]]; then
run_process n-novnc "$NOVA_BIN_DIR/nova-novncproxy --config-file $api_cell_conf --web $NOVNC_WEB_DIR"
- run_process n-xvnc "$NOVA_BIN_DIR/nova-xvpvncproxy --config-file $api_cell_conf"
run_process n-spice "$NOVA_BIN_DIR/nova-spicehtml5proxy --config-file $api_cell_conf --web $SPICE_WEB_DIR"
run_process n-sproxy "$NOVA_BIN_DIR/nova-serialproxy --config-file $api_cell_conf"
else
@@ -988,7 +1008,6 @@
local conf
conf=$(conductor_conf $i)
run_process n-novnc-cell${i} "$NOVA_BIN_DIR/nova-novncproxy --config-file $conf --web $NOVNC_WEB_DIR"
- run_process n-xvnc-cell${i} "$NOVA_BIN_DIR/nova-xvpvncproxy --config-file $conf"
run_process n-spice-cell${i} "$NOVA_BIN_DIR/nova-spicehtml5proxy --config-file $conf --web $SPICE_WEB_DIR"
run_process n-sproxy-cell${i} "$NOVA_BIN_DIR/nova-serialproxy --config-file $conf"
done
@@ -1033,14 +1052,6 @@
# happen between here and the script ending. However, in multinode
# tests this can very often not be the case. So ensure that the
# compute is up before we move on.
-
- # TODO(sdague): honestly, this probably should be a plug point for
- # an external system.
- if [[ "$VIRT_DRIVER" == 'xenserver' ]]; then
- # xenserver encodes information in the hostname of the compute
- # because of the dom0/domU split. Just ignore for now.
- return
- fi
wait_for_compute $NOVA_READY_TIMEOUT
}
@@ -1079,13 +1090,13 @@
function stop_nova_console_proxies {
if [[ "${CELLSV2_SETUP}" == "singleconductor" ]]; then
- for srv in n-novnc n-xvnc n-spice n-sproxy; do
+ for srv in n-novnc n-spice n-sproxy; do
stop_process $srv
done
else
enable_nova_console_proxies
for i in $(seq 1 $NOVA_NUM_CELLS); do
- for srv in n-novnc n-xvnc n-spice n-sproxy; do
+ for srv in n-novnc n-spice n-sproxy; do
stop_process ${srv}-cell${i}
done
done
diff --git a/lib/nova_plugins/hypervisor-xenserver b/lib/nova_plugins/hypervisor-xenserver
deleted file mode 100644
index 511ec1b..0000000
--- a/lib/nova_plugins/hypervisor-xenserver
+++ /dev/null
@@ -1,107 +0,0 @@
-#!/bin/bash
-#
-# lib/nova_plugins/hypervisor-xenserver
-# Configure the XenServer hypervisor
-
-# Enable with:
-# VIRT_DRIVER=xenserver
-
-# Dependencies:
-# ``functions`` file
-# ``nova`` configuration
-
-# install_nova_hypervisor - install any external requirements
-# configure_nova_hypervisor - make configuration changes, including those to other services
-# start_nova_hypervisor - start any external services
-# stop_nova_hypervisor - stop any external services
-# cleanup_nova_hypervisor - remove transient data and cache
-
-# Save trace setting
-_XTRACE_XENSERVER=$(set +o | grep xtrace)
-set +o xtrace
-
-
-# Defaults
-# --------
-
-VNCSERVER_PROXYCLIENT_ADDRESS=${VNCSERVER_PROXYCLIENT_ADDRESS=169.254.0.1}
-
-
-# Entry Points
-# ------------
-
-# clean_nova_hypervisor - Clean up an installation
-function cleanup_nova_hypervisor {
- # This function intentionally left blank
- :
-}
-
-# configure_nova_hypervisor - Set config files, create data dirs, etc
-function configure_nova_hypervisor {
- if [ -z "$XENAPI_CONNECTION_URL" ]; then
- die $LINENO "XENAPI_CONNECTION_URL is not specified"
- fi
-
- # Check os-xenapi plugin is enabled
- local plugins="${DEVSTACK_PLUGINS}"
- local plugin
- local found=0
- for plugin in ${plugins//,/ }; do
- if [[ "$plugin" = "os-xenapi" ]]; then
- found=1
- break
- fi
- done
- if [[ $found -ne 1 ]]; then
- die $LINENO "os-xenapi plugin is not specified. Please enable this plugin in local.conf"
- fi
-
- iniset $NOVA_CONF DEFAULT compute_driver "xenapi.XenAPIDriver"
- iniset $NOVA_CONF xenserver connection_url "$XENAPI_CONNECTION_URL"
- iniset $NOVA_CONF xenserver connection_username "$XENAPI_USER"
- iniset $NOVA_CONF xenserver connection_password "$XENAPI_PASSWORD"
- iniset $NOVA_CONF DEFAULT flat_injected "False"
-
- local dom0_ip
- dom0_ip=$(echo "$XENAPI_CONNECTION_URL" | cut -d "/" -f 3-)
-
- local ssh_dom0
- ssh_dom0="sudo -u $DOMZERO_USER ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@$dom0_ip"
-
- # install console logrotate script
- tar -czf - -C $NOVA_DIR/tools/xenserver/ rotate_xen_guest_logs.sh |
- $ssh_dom0 'tar -xzf - -C /root/ && chmod +x /root/rotate_xen_guest_logs.sh && mkdir -p /var/log/xen/guest'
-
- # Create a cron job that will rotate guest logs
- $ssh_dom0 crontab - << CRONTAB
-* * * * * /root/rotate_xen_guest_logs.sh >/dev/null 2>&1
-CRONTAB
-
-}
-
-# install_nova_hypervisor() - Install external components
-function install_nova_hypervisor {
- # xenapi functionality is now included in os-xenapi library which houses the plugin
- # so this function intentionally left blank
- :
-}
-
-# start_nova_hypervisor - Start any required external services
-function start_nova_hypervisor {
- # This function intentionally left blank
- :
-}
-
-# stop_nova_hypervisor - Stop any external services
-function stop_nova_hypervisor {
- # This function intentionally left blank
- :
-}
-
-
-# Restore xtrace
-$_XTRACE_XENSERVER
-
-# Local variables:
-# mode: shell-script
-# End:
diff --git a/lib/tempest b/lib/tempest
index 552e1c2..29a6229 100644
--- a/lib/tempest
+++ b/lib/tempest
@@ -111,6 +111,21 @@
echo $size | python3 -c "import math; print(int(math.ceil(float(int(input()) / 1024.0 ** 3))))"
}
+function set_tempest_venv_constraints {
+ local tmp_c
+ tmp_c=$1
+ if [[ $TEMPEST_VENV_UPPER_CONSTRAINTS == "master" ]]; then
+ (cd $REQUIREMENTS_DIR && git show origin/master:upper-constraints.txt) > $tmp_c
+ else
+ echo "Using $TEMPEST_VENV_UPPER_CONSTRAINTS constraints in Tempest virtual env."
+ cat $TEMPEST_VENV_UPPER_CONSTRAINTS > $tmp_c
+ # NOTE: we set both tox env vars; once Tempest starts using the new
+ # TOX_CONSTRAINTS_FILE var, we can remove the old one.
+ export UPPER_CONSTRAINTS_FILE=$TEMPEST_VENV_UPPER_CONSTRAINTS
+ export TOX_CONSTRAINTS_FILE=$TEMPEST_VENV_UPPER_CONSTRAINTS
+ fi
+}
+
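
`set_tempest_venv_constraints` centralizes a decision that previously appeared in three places: use master upper-constraints unless `TEMPEST_VENV_UPPER_CONSTRAINTS` names a concrete constraints file. A hypothetical standalone demonstration of the fallback pattern (the `source` variable and echoed strings are made up for illustration):

```shell
#!/bin/bash
# Hypothetical demo of the constraints-selection pattern: default the
# variable to "master", otherwise treat it as a path to a constraints file.
TEMPEST_VENV_UPPER_CONSTRAINTS=${TEMPEST_VENV_UPPER_CONSTRAINTS:-master}

tmp_c=$(mktemp -t tempest_u_c_m.XXXXXXXXXX)
if [[ $TEMPEST_VENV_UPPER_CONSTRAINTS == "master" ]]; then
    # The real helper runs: git show origin/master:upper-constraints.txt
    source="origin/master:upper-constraints.txt"
else
    source=$TEMPEST_VENV_UPPER_CONSTRAINTS
fi
echo "constraints source: $source"
rm -f "$tmp_c"
```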
# configure_tempest() - Set config files, create data dirs, etc
function configure_tempest {
if [[ "$INSTALL_TEMPEST" == "True" ]]; then
@@ -347,10 +362,12 @@
if [[ ! -z "$TEMPEST_HTTP_IMAGE" ]]; then
iniset $TEMPEST_CONFIG image http_image $TEMPEST_HTTP_IMAGE
fi
- if [ "$VIRT_DRIVER" = "xenserver" ]; then
- iniset $TEMPEST_CONFIG image disk_formats "ami,ari,aki,vhd,raw,iso"
- fi
iniset $TEMPEST_CONFIG image-feature-enabled import_image $GLANCE_USE_IMPORT_WORKFLOW
+ iniset $TEMPEST_CONFIG image-feature-enabled os_glance_reserved True
+ if is_service_enabled g-api-r; then
+ iniset $TEMPEST_CONFIG image alternate_image_endpoint image_remote
+ fi
+
# Compute
iniset $TEMPEST_CONFIG compute image_ref $image_uuid
iniset $TEMPEST_CONFIG compute image_ref_alt $image_uuid_alt
@@ -424,15 +441,8 @@
iniset $TEMPEST_CONFIG network-feature-enabled port_security $NEUTRON_PORT_SECURITY
# Scenario
- if [ "$VIRT_DRIVER" = "xenserver" ]; then
- SCENARIO_IMAGE_DIR=${SCENARIO_IMAGE_DIR:-$FILES}
- SCENARIO_IMAGE_FILE="cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-disk.vhd.tgz"
- iniset $TEMPEST_CONFIG scenario img_disk_format vhd
- iniset $TEMPEST_CONFIG scenario img_container_format ovf
- else
- SCENARIO_IMAGE_DIR=${SCENARIO_IMAGE_DIR:-$FILES}
- SCENARIO_IMAGE_FILE=$DEFAULT_IMAGE_FILE_NAME
- fi
+ SCENARIO_IMAGE_DIR=${SCENARIO_IMAGE_DIR:-$FILES}
+ SCENARIO_IMAGE_FILE=$DEFAULT_IMAGE_FILE_NAME
iniset $TEMPEST_CONFIG scenario img_file $SCENARIO_IMAGE_DIR/$SCENARIO_IMAGE_FILE
# If using provider networking, use the physical network for validation rather than private
@@ -617,15 +627,13 @@
tox -revenv-tempest --notest
fi
- # The requirements might be on a different branch, while tempest needs master requirements.
local tmp_u_c_m
tmp_u_c_m=$(mktemp -t tempest_u_c_m.XXXXXXXXXX)
- (cd $REQUIREMENTS_DIR && git show origin/master:upper-constraints.txt) > $tmp_u_c_m
+ set_tempest_venv_constraints $tmp_u_c_m
tox -evenv-tempest -- pip install -c $tmp_u_c_m -r requirements.txt
rm -f $tmp_u_c_m
# Auth:
- iniset $TEMPEST_CONFIG auth tempest_roles "member"
if [[ $TEMPEST_USE_TEST_ACCOUNTS == "True" ]]; then
if [[ $TEMPEST_HAS_ADMIN == "True" ]]; then
tox -evenv-tempest -- tempest account-generator -c $TEMPEST_CONFIG --os-username $admin_username --os-password "$password" --os-project-name $admin_project_name -r $TEMPEST_CONCURRENCY --with-admin etc/accounts.yaml
@@ -702,12 +710,20 @@
# TEMPEST_DIR already exist until RECLONE is true.
git checkout $TEMPEST_BRANCH
+ local tmp_u_c_m
+ tmp_u_c_m=$(mktemp -t tempest_u_c_m.XXXXXXXXXX)
+ set_tempest_venv_constraints $tmp_u_c_m
+
tox -r --notest -efull
+ # TODO: remove the trailing pip constraint when a proper fix
+ # arrives for bug https://bugs.launchpad.net/devstack/+bug/1906322
+ $TEMPEST_DIR/.tox/tempest/bin/pip install -U -r $RC_DIR/tools/cap-pip.txt
# NOTE(mtreinish) Respect constraints in the tempest full venv, things that
# are using a tox job other than full will not be respecting constraints but
# running pip install -U on tempest requirements
- $TEMPEST_DIR/.tox/tempest/bin/pip install -c $REQUIREMENTS_DIR/upper-constraints.txt -r requirements.txt
+ $TEMPEST_DIR/.tox/tempest/bin/pip install -c $tmp_u_c_m -r requirements.txt
PROJECT_VENV["tempest"]=${TEMPEST_DIR}/.tox/tempest
+ rm -f $tmp_u_c_m
popd
}
@@ -715,10 +731,9 @@
function install_tempest_plugins {
pushd $TEMPEST_DIR
if [[ $TEMPEST_PLUGINS != 0 ]] ; then
- # The requirements might be on a different branch, while tempest & tempest plugins needs master requirements.
local tmp_u_c_m
tmp_u_c_m=$(mktemp -t tempest_u_c_m.XXXXXXXXXX)
- (cd $REQUIREMENTS_DIR && git show origin/master:upper-constraints.txt) > $tmp_u_c_m
+ set_tempest_venv_constraints $tmp_u_c_m
tox -evenv-tempest -- pip install -c $tmp_u_c_m $TEMPEST_PLUGINS
rm -f $tmp_u_c_m
echo "Checking installed Tempest plugins:"
diff --git a/roles/export-devstack-journal/tasks/main.yaml b/roles/export-devstack-journal/tasks/main.yaml
index ef839ed..db38b10 100644
--- a/roles/export-devstack-journal/tasks/main.yaml
+++ b/roles/export-devstack-journal/tasks/main.yaml
@@ -45,7 +45,7 @@
cmd: |
journalctl -o export \
--since="$(cat {{ devstack_base_dir }}/log-start-timestamp.txt)" \
- | xz --threads=0 - > {{ stage_dir }}/logs/devstack.journal.xz
+ | gzip > {{ stage_dir }}/logs/devstack.journal.gz
- name: Save journal README
become: true
diff --git a/roles/export-devstack-journal/templates/devstack.journal.README.txt.j2 b/roles/export-devstack-journal/templates/devstack.journal.README.txt.j2
index fe36653..30519f6 100644
--- a/roles/export-devstack-journal/templates/devstack.journal.README.txt.j2
+++ b/roles/export-devstack-journal/templates/devstack.journal.README.txt.j2
@@ -7,7 +7,7 @@
To use it, you will need to convert it so journalctl can read it
locally. After downloading the file:
- $ /lib/systemd/systemd-journal-remote <(xzcat ./devstack.journal.xz) -o output.journal
+ $ /lib/systemd/systemd-journal-remote <(zcat ./devstack.journal.gz) -o output.journal
Note this binary is not in the regular path. On Debian/Ubuntu
platforms, you will need to have the "systemd-journal-remote" package
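
The compression switch from xz to gzip means the exported journal round-trips with stock `gzip`/`zcat` instead of `xzcat`. A quick sanity check of that pipeline (file contents and paths are placeholders):

```shell
#!/bin/bash
# Placeholder round trip for the gzip'd journal export: compress on the
# worker, decompress locally before feeding systemd-journal-remote.
workdir=$(mktemp -d)
echo "fake journal export" > "$workdir/devstack.journal"
gzip "$workdir/devstack.journal"              # produces devstack.journal.gz
content=$(zcat "$workdir/devstack.journal.gz")
rm -rf "$workdir"
echo "$content"
```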
diff --git a/roles/orchestrate-devstack/tasks/main.yaml b/roles/orchestrate-devstack/tasks/main.yaml
index f747943..2b8ae01 100644
--- a/roles/orchestrate-devstack/tasks/main.yaml
+++ b/roles/orchestrate-devstack/tasks/main.yaml
@@ -18,6 +18,11 @@
name: sync-devstack-data
when: devstack_services['tls-proxy']|default(false)
+ - name: Sync controller ceph.conf and key rings to subnode
+ include_role:
+ name: sync-controller-ceph-conf-and-keys
+ when: devstack_plugins is defined and 'devstack-plugin-ceph' in devstack_plugins
+
- name: Run devstack on the sub-nodes
include_role:
name: run-devstack
diff --git a/roles/sync-controller-ceph-conf-and-keys/README.rst b/roles/sync-controller-ceph-conf-and-keys/README.rst
new file mode 100644
index 0000000..e3d2bb4
--- /dev/null
+++ b/roles/sync-controller-ceph-conf-and-keys/README.rst
@@ -0,0 +1,3 @@
+Sync ceph config and keys between controller and subnodes
+
+Simply copy the contents of /etc/ceph on the controller to subnodes.
diff --git a/roles/sync-controller-ceph-conf-and-keys/tasks/main.yaml b/roles/sync-controller-ceph-conf-and-keys/tasks/main.yaml
new file mode 100644
index 0000000..71ece57
--- /dev/null
+++ b/roles/sync-controller-ceph-conf-and-keys/tasks/main.yaml
@@ -0,0 +1,15 @@
+- name: Ensure /etc/ceph exists on subnode
+ become: true
+ file:
+ path: /etc/ceph
+ state: directory
+
+- name: Copy /etc/ceph from controller to subnode
+ become: true
+ synchronize:
+ owner: yes
+ group: yes
+ perms: yes
+ src: /etc/ceph/
+ dest: /etc/ceph/
+ delegate_to: controller
diff --git a/stack.sh b/stack.sh
index 036afd7..ca9ecfa 100755
--- a/stack.sh
+++ b/stack.sh
@@ -96,19 +96,25 @@
# templates and other useful files in the ``files`` subdirectory
FILES=$TOP_DIR/files
if [ ! -d $FILES ]; then
- die $LINENO "missing devstack/files"
+ set +o xtrace
+ echo "missing devstack/files"
+ exit 1
fi
# ``stack.sh`` keeps function libraries here
# Make sure ``$TOP_DIR/inc`` directory is present
if [ ! -d $TOP_DIR/inc ]; then
- die $LINENO "missing devstack/inc"
+ set +o xtrace
+ echo "missing devstack/inc"
+ exit 1
fi
# ``stack.sh`` keeps project libraries here
# Make sure ``$TOP_DIR/lib`` directory is present
if [ ! -d $TOP_DIR/lib ]; then
- die $LINENO "missing devstack/lib"
+ set +o xtrace
+ echo "missing devstack/lib"
+ exit 1
fi
# Check if run in POSIX shell
@@ -330,6 +336,9 @@
safe_chmod 0755 $DATA_DIR
fi
+# Create and/or clean the async state directory
+async_init
+
# Configure proper hostname
# Certain services such as rabbitmq require that the local hostname resolves
# correctly. Make sure it exists in /etc/hosts so that is always true.
@@ -356,6 +365,9 @@
# EPEL packages assume that the PowerTools repository is enabled.
sudo dnf config-manager --set-enabled PowerTools
+ # CentOS 8.3 changed the repository name to lower case.
+ sudo dnf config-manager --set-enabled powertools
+
if [[ ${SKIP_EPEL_INSTALL} != True ]]; then
_install_epel
fi
@@ -706,16 +718,6 @@
fi
-# Nova
-# -----
-
-if is_service_enabled nova && [[ "$VIRT_DRIVER" == 'xenserver' ]]; then
- # Look for the backend password here because read_password
- # is not a library function.
- read_password XENAPI_PASSWORD "ENTER A PASSWORD TO USE FOR XEN."
-fi
-
-
# Swift
# -----
@@ -1082,19 +1084,19 @@
create_keystone_accounts
if is_service_enabled nova; then
- create_nova_accounts
+ async_runfunc create_nova_accounts
fi
if is_service_enabled glance; then
- create_glance_accounts
+ async_runfunc create_glance_accounts
fi
if is_service_enabled cinder; then
- create_cinder_accounts
+ async_runfunc create_cinder_accounts
fi
if is_service_enabled neutron; then
- create_neutron_accounts
+ async_runfunc create_neutron_accounts
fi
if is_service_enabled swift; then
- create_swift_accounts
+ async_runfunc create_swift_accounts
fi
fi
@@ -1107,9 +1109,11 @@
if is_service_enabled horizon; then
echo_summary "Configuring Horizon"
- configure_horizon
+ async_runfunc configure_horizon
fi
+async_wait create_nova_accounts create_glance_accounts create_cinder_accounts
+async_wait create_neutron_accounts create_swift_accounts configure_horizon
# Glance
# ------
@@ -1117,7 +1121,7 @@
# NOTE(yoctozepto): limited to node hosting the database which is the controller
if is_service_enabled $DATABASE_BACKENDS && is_service_enabled glance; then
echo_summary "Configuring Glance"
- init_glance
+ async_runfunc init_glance
fi
@@ -1131,7 +1135,7 @@
# Run init_neutron only on the node hosting the Neutron API server
if is_service_enabled $DATABASE_BACKENDS && is_service_enabled neutron; then
- init_neutron
+ async_runfunc init_neutron
fi
fi
@@ -1161,7 +1165,7 @@
if is_service_enabled swift; then
echo_summary "Configuring Swift"
- init_swift
+ async_runfunc init_swift
fi
@@ -1170,7 +1174,7 @@
if is_service_enabled cinder; then
echo_summary "Configuring Cinder"
- init_cinder
+ async_runfunc init_cinder
fi
# Placement Service
@@ -1178,9 +1182,16 @@
if is_service_enabled placement; then
echo_summary "Configuring placement"
- init_placement
+ async_runfunc init_placement
fi
+# Wait for neutron and placement before starting nova
+async_wait init_neutron
+async_wait init_placement
+async_wait init_glance
+async_wait init_swift
+async_wait init_cinder
+
# Compute Service
# ---------------
@@ -1192,7 +1203,7 @@
# TODO(stephenfin): Is it possible for neutron to *not* be enabled now? If
# not, remove the if here
if is_service_enabled neutron; then
- configure_neutron_nova
+ async_runfunc configure_neutron_nova
fi
fi
@@ -1236,6 +1247,8 @@
iniset $CINDER_CONF key_manager fixed_key "$FIXED_KEY"
fi
+async_wait configure_neutron_nova
+
# Launch the nova-api and wait for it to answer before continuing
if is_service_enabled n-api; then
echo_summary "Starting Nova API"
@@ -1282,7 +1295,7 @@
if is_service_enabled nova; then
echo_summary "Starting Nova"
start_nova
- create_flavors
+ async_runfunc create_flavors
fi
if is_service_enabled cinder; then
echo_summary "Starting Cinder"
@@ -1331,6 +1344,8 @@
start_horizon
fi
+async_wait create_flavors
+
# Create account rc files
# =======================
@@ -1467,8 +1482,12 @@
exec 1>&3
fi
+# Make sure we didn't leak any background tasks
+async_cleanup
+
# Dump out the time totals
time_totals
+async_print_timing
# Using the cloud
# ===============
diff --git a/stackrc b/stackrc
index a36f897..9630221 100644
--- a/stackrc
+++ b/stackrc
@@ -245,7 +245,7 @@
# Setting the variable to 'ALL' will activate the download for all
# libraries.
-DEVSTACK_SERIES="wallaby"
+DEVSTACK_SERIES="xena"
##############
#
@@ -298,6 +298,7 @@
# Tempest test suite
TEMPEST_REPO=${TEMPEST_REPO:-${GIT_BASE}/openstack/tempest.git}
TEMPEST_BRANCH=${TEMPEST_BRANCH:-$BRANCHLESS_TARGET_BRANCH}
+TEMPEST_VENV_UPPER_CONSTRAINTS=${TEMPEST_VENV_UPPER_CONSTRAINTS:-master}
##############
@@ -554,6 +555,11 @@
GITBRANCH["ovsdbapp"]=${OVSDBAPP_BRANCH:-$TARGET_BRANCH}
GITDIR["ovsdbapp"]=$DEST/ovsdbapp
+# os-ken used by neutron
+GITREPO["os-ken"]=${OS_KEN_REPO:-${GIT_BASE}/openstack/os-ken.git}
+GITBRANCH["os-ken"]=${OS_KEN_BRANCH:-$TARGET_BRANCH}
+GITDIR["os-ken"]=$DEST/os-ken
+
##################
#
# TripleO / Heat Agent Components
@@ -605,10 +611,8 @@
# Nova hypervisor configuration. We default to libvirt with **kvm** but will
# drop back to **qemu** if we are unable to load the kvm module. ``stack.sh`` can
-# also install an **LXC**, **OpenVZ** or **XenAPI** based system. If xenserver-core
-# is installed, the default will be XenAPI
+# also install an **LXC** or **OpenVZ** based system.
DEFAULT_VIRT_DRIVER=libvirt
-is_package_installed xenserver-core && DEFAULT_VIRT_DRIVER=xenserver
VIRT_DRIVER=${VIRT_DRIVER:-$DEFAULT_VIRT_DRIVER}
case "$VIRT_DRIVER" in
ironic|libvirt)
@@ -633,14 +637,6 @@
fake)
NUMBER_FAKE_NOVA_COMPUTE=${NUMBER_FAKE_NOVA_COMPUTE:-1}
;;
- xenserver)
- # Xen config common to nova and neutron
- XENAPI_USER=${XENAPI_USER:-"root"}
- # This user will be used for dom0 - domU communication
- # should be able to log in to dom0 without a password
- # will be used to install the plugins
- DOMZERO_USER=${DOMZERO_USER:-"domzero"}
- ;;
*)
;;
esac
@@ -667,7 +663,7 @@
#IMAGE_URLS="http://smoser.brickies.net/ubuntu/ttylinux-uec/ttylinux-uec-amd64-11.2_2.6.35-15_1.tar.gz" # old ttylinux-uec image
#IMAGE_URLS="http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-disk.img" # cirros full disk image
-CIRROS_VERSION=${CIRROS_VERSION:-"0.5.1"}
+CIRROS_VERSION=${CIRROS_VERSION:-"0.5.2"}
CIRROS_ARCH=${CIRROS_ARCH:-"x86_64"}
# Set default image based on ``VIRT_DRIVER`` and ``LIBVIRT_TYPE``, either of
@@ -695,11 +691,6 @@
DEFAULT_IMAGE_NAME=${DEFAULT_IMAGE_NAME:-cirros-0.3.2-i386-disk.vmdk}
DEFAULT_IMAGE_FILE_NAME=${DEFAULT_IMAGE_FILE_NAME:-$DEFAULT_IMAGE_NAME}
IMAGE_URLS+="http://partnerweb.vmware.com/programs/vmdkimage/${DEFAULT_IMAGE_FILE_NAME}";;
- xenserver)
- DEFAULT_IMAGE_NAME=${DEFAULT_IMAGE_NAME:-cirros-0.3.5-x86_64-disk}
- DEFAULT_IMAGE_FILE_NAME=${DEFAULT_IMAGE_NAME:-cirros-0.3.5-x86_64-disk.vhd.tgz}
- IMAGE_URLS+="http://ca.downloads.xensource.com/OpenStack/cirros-0.3.5-x86_64-disk.vhd.tgz"
- IMAGE_URLS+=",http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-x86_64-uec.tar.gz";;
fake)
# Use the same as the default for libvirt
DEFAULT_IMAGE_NAME=${DEFAULT_IMAGE_NAME:-cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-disk}
diff --git a/tests/test_libs_from_pypi.sh b/tests/test_libs_from_pypi.sh
index ab7583d..5b53389 100755
--- a/tests/test_libs_from_pypi.sh
+++ b/tests/test_libs_from_pypi.sh
@@ -44,7 +44,7 @@
ALL_LIBS+=" oslo.cache oslo.reports osprofiler cursive"
ALL_LIBS+=" keystoneauth ironic-lib neutron-lib oslo.privsep"
ALL_LIBS+=" diskimage-builder os-vif python-brick-cinderclient-ext"
-ALL_LIBS+=" castellan python-barbicanclient ovsdbapp"
+ALL_LIBS+=" castellan python-barbicanclient ovsdbapp os-ken"
# Generate the above list with
# echo ${!GITREPO[@]}
diff --git a/tools/image_list.sh b/tools/image_list.sh
index 3a27c4a..81231be 100755
--- a/tools/image_list.sh
+++ b/tools/image_list.sh
@@ -22,7 +22,7 @@
# Possible virt drivers, if we have more, add them here. Always keep
# dummy in the end position to trigger the fall through case.
-DRIVERS="openvz ironic libvirt vsphere xenserver dummy"
+DRIVERS="openvz ironic libvirt vsphere dummy"
# Extra variables to trigger getting additional images.
export ENABLED_SERVICES="h-api,tr-api"
diff --git a/tools/uec/meta.py b/tools/uec/meta.py
deleted file mode 100644
index 1d994a6..0000000
--- a/tools/uec/meta.py
+++ /dev/null
@@ -1,42 +0,0 @@
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import BaseHTTPServer
-import SimpleHTTPServer
-import sys
-
-
-def main(host, port, HandlerClass=SimpleHTTPServer.SimpleHTTPRequestHandler,
- ServerClass=BaseHTTPServer.HTTPServer, protocol="HTTP/1.0"):
- """simple http server that listens on a give address:port."""
-
- server_address = (host, port)
-
- HandlerClass.protocol_version = protocol
- httpd = ServerClass(server_address, HandlerClass)
-
- sa = httpd.socket.getsockname()
- print("Serving HTTP on", sa[0], "port", sa[1], "...")
- httpd.serve_forever()
-
-if __name__ == '__main__':
- if sys.argv[1:]:
- address = sys.argv[1]
- else:
- address = '0.0.0.0'
- if ':' in address:
- host, port = address.split(':')
- else:
- host = address
- port = 8080
-
- main(host, int(port))
diff --git a/tools/xen/README.md b/tools/xen/README.md
deleted file mode 100644
index 2873011..0000000
--- a/tools/xen/README.md
+++ /dev/null
@@ -1,3 +0,0 @@
-Note: XenServer relative tools have been moved to `os-xenapi`_ and be maintained there.
-
-.. _os-xenapi: https://opendev.org/x/os-xenapi/
diff --git a/unstack.sh b/unstack.sh
index 3197cf1..d9dca7c 100755
--- a/unstack.sh
+++ b/unstack.sh
@@ -184,3 +184,4 @@
fi
clean_pyc_files
+rm -Rf $DEST/async