Merge "clean up screen and tail_log references"
diff --git a/HACKING.rst b/HACKING.rst
index fc67f09..d5d6fbc 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -20,7 +20,7 @@
contains the usual links for blueprints, bugs, etc.
__ contribute_
-.. _contribute: http://docs.openstack.org/infra/manual/developers.html
+.. _contribute: https://docs.openstack.org/infra/manual/developers.html
__ lp_
.. _lp: https://launchpad.net/~devstack
@@ -255,7 +255,7 @@
* The ``OS_*`` environment variables should be the only ones used for all
authentication to OpenStack clients as documented in the CLIAuth_ wiki page.
-.. _CLIAuth: http://wiki.openstack.org/CLIAuth
+.. _CLIAuth: https://wiki.openstack.org/CLIAuth
* The exercise MUST clean up after itself if successful. If it is not successful,
it is assumed that state will be left behind; this allows a chance for developers
diff --git a/README.rst b/README.rst
index adbf59a..6885546 100644
--- a/README.rst
+++ b/README.rst
@@ -1,4 +1,5 @@
-DevStack is a set of scripts and utilities to quickly deploy an OpenStack cloud.
+DevStack is a set of scripts and utilities to quickly deploy an OpenStack cloud
+from git source trees.
Goals
=====
@@ -27,9 +28,9 @@
The DevStack master branch generally points to trunk versions of OpenStack
components. For older, stable versions, look for branches named
stable/[release] in the DevStack repo. For example, you can do the
-following to create a Newton OpenStack cloud::
+following to create a Pike OpenStack cloud::
- git checkout stable/newton
+ git checkout stable/pike
./stack.sh
You can also pick specific OpenStack project releases by setting the appropriate
@@ -54,7 +55,7 @@
endpoints, like so:
* Horizon: http://myhost/
-* Keystone: http://myhost:5000/v2.0/
+* Keystone: http://myhost/identity/v2.0/
We also provide an environment file that you can use to interact with your
cloud via CLI::
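The environment file referred to above is ``openrc`` in the DevStack tree. A
minimal usage sketch, assuming the default ``demo`` user and the password
chosen during ``stack.sh``::

    # Load credentials for the demo user in the demo project
    source openrc demo demo
    # Any OpenStack CLI command should now reach the cloud
    openstack server list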
diff --git a/doc/source/configuration.rst b/doc/source/configuration.rst
index 064bf51..23f680a 100644
--- a/doc/source/configuration.rst
+++ b/doc/source/configuration.rst
@@ -136,7 +136,7 @@
::
- OS_AUTH_URL=http://$SERVICE_HOST:5000/v2.0
+ OS_AUTH_URL=http://$SERVICE_HOST:5000/v3
KEYSTONECLIENT\_DEBUG, NOVACLIENT\_DEBUG
Set command-line client log level to ``DEBUG``. These are commented
@@ -779,9 +779,15 @@
DOWNLOAD_DEFAULT_IMAGES=False
IMAGE_URLS="https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-s390x-disk1.img"
+ # Provide a custom etcd3 binary download URL and its sha256.
+ # The binary must be available under
+ # '<url>/<etcd version>/etcd-<etcd version>-linux-s390x.tar.gz'.
+ # Build instructions for etcd3: https://github.com/linux-on-ibm-z/docs/wiki/Building-etcd
+ ETCD_DOWNLOAD_URL=<your-etcd-download-url>
+ ETCD_SHA256=<your-etcd3-sha256>
+
enable_service n-sproxy
disable_service n-novnc
- disable_service etcd3 # https://bugs.launchpad.net/devstack/+bug/1693192
[[post-config|$NOVA_CONF]]
@@ -803,8 +809,11 @@
needed if you want to use the *serial console* outside of the all-in-one
setup.
-* The service ``etcd3`` needs to be disabled as long as bug report
- https://bugs.launchpad.net/devstack/+bug/1693192 is not resolved.
+* A link to an etcd3 binary and its sha256 needs to be provided because
+  the binary for s390x is not hosted on github as it is for the other
+  architectures. For more details see
+  https://bugs.launchpad.net/devstack/+bug/1693192. Etcd3 can easily be
+  built by following https://github.com/linux-on-ibm-z/docs/wiki/Building-etcd.
.. note:: To run *Tempest* against this *Devstack* all-in-one, you'll need
to use a guest image which is smaller than 1GB when uncompressed.
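For the s390x example above, ``ETCD_SHA256`` is just the checksum of the
tarball you host yourself. A sketch of producing it, assuming a locally built
tarball following the required naming layout::

    # Compute the value to set as ETCD_SHA256 in local.conf
    sha256sum etcd-v3.1.7-linux-s390x.tar.gz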
diff --git a/doc/source/faq.rst b/doc/source/faq.rst
index a186336..ed9b4da 100644
--- a/doc/source/faq.rst
+++ b/doc/source/faq.rst
@@ -32,9 +32,9 @@
`git.openstack.org
<https://git.openstack.org/cgit/openstack-dev/devstack>`__ and bug
reports go to `LaunchPad
-<http://bugs.launchpad.net/devstack/>`__. Contributions follow the
+<https://bugs.launchpad.net/devstack/>`__. Contributions follow the
usual process as described in the `developer guide
-<http://docs.openstack.org/infra/manual/developers.html>`__. This
+<https://docs.openstack.org/infra/manual/developers.html>`__. This
Sphinx documentation is housed in the doc directory.
Why not use packages?
diff --git a/doc/source/guides/devstack-with-lbaas-v2.rst b/doc/source/guides/devstack-with-lbaas-v2.rst
index 4ed64bf..3592844 100644
--- a/doc/source/guides/devstack-with-lbaas-v2.rst
+++ b/doc/source/guides/devstack-with-lbaas-v2.rst
@@ -39,7 +39,6 @@
LOGFILE=$DEST/logs/stack.sh.log
VERBOSE=True
LOG_COLOR=True
- SCREEN_LOGDIR=$DEST/logs
# Pre-requisite
ENABLED_SERVICES=rabbit,mysql,key
# Horizon
diff --git a/doc/source/guides/nova.rst b/doc/source/guides/nova.rst
index 6bbab53..0f105d7 100644
--- a/doc/source/guides/nova.rst
+++ b/doc/source/guides/nova.rst
@@ -66,5 +66,5 @@
<https://github.com/openstack/nova/blob/master/nova/conf/serial_console.py>`_.
For more information on OpenStack configuration see the `OpenStack
-Configuration Reference
-<https://docs.openstack.org/ocata/config-reference/compute.html>`_
+Compute Service Configuration Reference
+<https://docs.openstack.org/nova/latest/admin/configuration/index.html>`_
diff --git a/doc/source/networking.rst b/doc/source/networking.rst
index bdbeaaa..74010cd 100644
--- a/doc/source/networking.rst
+++ b/doc/source/networking.rst
@@ -69,7 +69,7 @@
This is not a recommended configuration. Because of interactions
between ovs and bridging, if you reboot your box with active
- networking you may loose network connectivity to your system.
+ networking you may lose network connectivity to your system.
If you need your guests accessible on the network, but only have 1
interface (using something like a NUC), you can share your one
diff --git a/doc/source/plugin-registry.rst b/doc/source/plugin-registry.rst
index a1d5ad8..6aa2e93 100644
--- a/doc/source/plugin-registry.rst
+++ b/doc/source/plugin-registry.rst
@@ -39,6 +39,7 @@
collectd-ceilometer-plugin `git://git.openstack.org/openstack/collectd-ceilometer-plugin <https://git.openstack.org/cgit/openstack/collectd-ceilometer-plugin>`__
congress `git://git.openstack.org/openstack/congress <https://git.openstack.org/cgit/openstack/congress>`__
cue `git://git.openstack.org/openstack/cue <https://git.openstack.org/cgit/openstack/cue>`__
+cyborg `git://git.openstack.org/openstack/cyborg <https://git.openstack.org/cgit/openstack/cyborg>`__
designate `git://git.openstack.org/openstack/designate <https://git.openstack.org/cgit/openstack/designate>`__
devstack-plugin-additional-pkg-repos `git://git.openstack.org/openstack/devstack-plugin-additional-pkg-repos <https://git.openstack.org/cgit/openstack/devstack-plugin-additional-pkg-repos>`__
devstack-plugin-amqp1 `git://git.openstack.org/openstack/devstack-plugin-amqp1 <https://git.openstack.org/cgit/openstack/devstack-plugin-amqp1>`__
@@ -144,6 +145,7 @@
omni `git://git.openstack.org/openstack/omni <https://git.openstack.org/cgit/openstack/omni>`__
os-xenapi `git://git.openstack.org/openstack/os-xenapi <https://git.openstack.org/cgit/openstack/os-xenapi>`__
osprofiler `git://git.openstack.org/openstack/osprofiler <https://git.openstack.org/cgit/openstack/osprofiler>`__
+oswin-tempest-plugin `git://git.openstack.org/openstack/oswin-tempest-plugin <https://git.openstack.org/cgit/openstack/oswin-tempest-plugin>`__
panko `git://git.openstack.org/openstack/panko <https://git.openstack.org/cgit/openstack/panko>`__
patrole `git://git.openstack.org/openstack/patrole <https://git.openstack.org/cgit/openstack/patrole>`__
picasso `git://git.openstack.org/openstack/picasso <https://git.openstack.org/cgit/openstack/picasso>`__
diff --git a/doc/source/plugins.rst b/doc/source/plugins.rst
index 5b3c6cf..fae1a1d 100644
--- a/doc/source/plugins.rst
+++ b/doc/source/plugins.rst
@@ -12,6 +12,15 @@
be sure that they will continue to work in the future as DevStack
evolves.
+Prerequisites
+=============
+
+If you are planning to create a plugin that registers a service in the
+service catalog (that is, your plugin will use the command
+``get_or_create_service``), please apply to the `service types authority`_
+to reserve a valid service-type. This helps ensure that all deployments of
+your service use the same service-type.
+
Plugin Interface
================
@@ -250,3 +259,5 @@
For additional inspiration on devstack plugins you can check out the
`Plugin Registry <plugin-registry.html>`_.
+
+.. _service types authority: https://specs.openstack.org/openstack/service-types-authority/
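The ``get_or_create_service`` call named in the new prerequisites section is
where the reserved service-type is consumed. A minimal sketch from a plugin's
``plugin.sh``, using hypothetical ``example``/``example-type`` names::

    # Register the service in the catalog under the reserved service-type
    get_or_create_service "example" "example-type" "Example Service"
    get_or_create_endpoint "example-type" "$REGION_NAME" \
        "$SERVICE_PROTOCOL://$SERVICE_HOST/example"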
diff --git a/doc/source/systemd.rst b/doc/source/systemd.rst
index 60a7719..c1d2944 100644
--- a/doc/source/systemd.rst
+++ b/doc/source/systemd.rst
@@ -98,8 +98,7 @@
Following logs for multiple services simultaneously::
- journalctl -f --unit devstack@n-cpu.service --unit
- devstack@n-cond.service
+ journalctl -f --unit devstack@n-cpu.service --unit devstack@n-cond.service
or you can even do wild cards to follow all the nova services::
@@ -121,6 +120,63 @@
See ``man 1 journalctl`` for more.
+Debugging
+=========
+
+Using pdb
+---------
+
+In order to break into a regular pdb session on a systemd-controlled
+service, you need to invoke the process manually - that is, take it out
+of systemd's control.
+
+Discover the command systemd is using to run the service::
+
+ systemctl show devstack@n-sch.service -p ExecStart --no-pager
+
+Stop the systemd service::
+
+ sudo systemctl stop devstack@n-sch.service
+
+Inject your breakpoint in the source, e.g.::
+
+ import pdb; pdb.set_trace()
+
+Invoke the command manually::
+
+ /usr/local/bin/nova-scheduler --config-file /etc/nova/nova.conf
+
+Using remote-pdb
+----------------
+
+`remote-pdb`_ works while the process is under systemd control.
+
+Make sure you have remote-pdb installed::
+
+ sudo pip install remote-pdb
+
+Inject your breakpoint in the source, e.g.::
+
+ import remote_pdb; remote_pdb.set_trace()
+
+Restart the relevant service::
+
+ sudo systemctl restart devstack@n-api.service
+
+The remote-pdb code configures the telnet port when ``set_trace()`` is
+invoked. Do whatever it takes to hit the instrumented code path, and
+inspect the logs for a message displaying the listening port::
+
+ Sep 07 16:36:12 p8-100-neo devstack@n-api.service[772]: RemotePdb session open at 127.0.0.1:46771, waiting for connection ...
+
+Telnet to that port to enter the pdb session::
+
+ telnet 127.0.0.1 46771
+
+See the `remote-pdb`_ home page for more options.
+
+.. _`remote-pdb`: https://pypi.python.org/pypi/remote-pdb
+
Known Issues
============
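If the log line announcing the RemotePdb port scrolls past, it can be pulled
back out of the journal instead of re-triggering the breakpoint. A sketch,
assuming the ``n-api`` unit from the example above::

    # Find the most recent RemotePdb port announcement for the service
    sudo journalctl --unit devstack@n-api.service | grep RemotePdb | tail -1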
diff --git a/functions b/functions
index 6f2164a..33a0e6a 100644
--- a/functions
+++ b/functions
@@ -45,6 +45,37 @@
# export it so child shells have access to the 'short_source' function also.
export -f short_source
+# Download a file from a URL
+#
+# Will check cache (in $FILES) or download given URL.
+#
+# Argument is the URL to the remote file
+#
+# Will echo the local path to the file as the output. Will die on
+# failure to download.
+#
+# Files can be pre-cached for CI environments, see EXTRA_CACHE_URLS
+# and tools/image_list.sh
+function get_extra_file {
+ local file_url=$1
+
+ file_name=$(basename "$file_url")
+ if [[ $file_url != file* ]]; then
+ # If the file isn't cached, download it
+ if [[ ! -f $FILES/$file_name ]]; then
+ wget --progress=dot:giga -c $file_url -O $FILES/$file_name
+ if [[ $? -ne 0 ]]; then
+ die "$file_url could not be downloaded"
+ fi
+ fi
+ echo "$FILES/$file_name"
+ return
+ else
+ # just strip the file:// bit and that's the path to the file
+ echo $file_url | sed 's|^file://||'
+ fi
+}
+
# Retrieve an image from a URL and upload into Glance.
# Uses the following variables:
@@ -407,6 +438,26 @@
return $rval
}
+function wait_for_compute {
+ local timeout=$1
+ local rval=0
+ time_start "wait_for_service"
+ timeout $timeout bash -x <<EOF || rval=$?
+ ID=""
+ while [[ "\$ID" == "" ]]; do
+ sleep 1
+ ID=\$(openstack --os-cloud devstack-admin --os-region "$REGION_NAME" compute service list --host `hostname` --service nova-compute -c ID -f value)
+ done
+EOF
+ time_stop "wait_for_service"
+ # Figure out what's happening on platforms where this doesn't work
+ if [[ "$rval" != 0 ]]; then
+ echo "Didn't find service registered by hostname after $timeout seconds"
+ openstack --os-cloud devstack-admin --os-region "$REGION_NAME" compute service list
+ fi
+ return $rval
+}
+
# ping check
# Uses globals ``ENABLED_SERVICES``, ``TOP_DIR``, ``MULTI_HOST``, ``PRIVATE_NETWORK``
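A usage sketch for the new ``get_extra_file`` helper, from inside a function
and with a hypothetical URL; the echoed path is what callers capture::

    # Download the file (or reuse the $FILES cache) and capture its path
    local tarball
    tarball=$(get_extra_file https://example.com/v3.1.7/etcd-v3.1.7-linux-s390x.tar.gz)
    tar xzvf $tarball -C $FILES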
diff --git a/functions-common b/functions-common
index fdbb9c0..713d92e 100644
--- a/functions-common
+++ b/functions-common
@@ -519,7 +519,7 @@
if [[ ! -d $git_dest ]]; then
if [[ "$ERROR_ON_CLONE" = "True" ]]; then
echo "The $git_dest project was not found; if this is a gate job, add"
- echo "the project to the \$PROJECTS variable in the job definition."
+ echo "the project to 'required-projects' in the job definition."
die $LINENO "Cloning not allowed in this configuration"
fi
git_timed clone $git_clone_flags $git_remote $git_dest
@@ -1395,6 +1395,8 @@
iniset -sudo $unitfile "Unit" "Description" "Devstack $service"
iniset -sudo $unitfile "Service" "User" "$user"
iniset -sudo $unitfile "Service" "ExecStart" "$command"
+ iniset -sudo $unitfile "Service" "KillMode" "process"
+ iniset -sudo $unitfile "Service" "TimeoutStopSec" "infinity"
if [[ -n "$group" ]]; then
iniset -sudo $unitfile "Service" "Group" "$group"
fi
@@ -1417,7 +1419,7 @@
iniset -sudo $unitfile "Service" "User" "$user"
iniset -sudo $unitfile "Service" "ExecStart" "$command"
iniset -sudo $unitfile "Service" "Type" "notify"
- iniset -sudo $unitfile "Service" "KillSignal" "SIGQUIT"
+ iniset -sudo $unitfile "Service" "KillMode" "process"
iniset -sudo $unitfile "Service" "Restart" "always"
iniset -sudo $unitfile "Service" "NotifyAccess" "all"
iniset -sudo $unitfile "Service" "RestartForceExitStatus" "100"
@@ -1565,7 +1567,7 @@
local name=$1
local url=$2
local branch=${3:-master}
- if [[ ",${DEVSTACK_PLUGINS}," =~ ,${name}, ]]; then
+ if is_plugin_enabled $name; then
die $LINENO "Plugin attempted to be enabled twice: ${name} ${url} ${branch}"
fi
DEVSTACK_PLUGINS+=",$name"
@@ -1574,6 +1576,19 @@
GITBRANCH[$name]=$branch
}
+# is_plugin_enabled <name>
+#
+# Check if the plugin was enabled, e.g. using enable_plugin
+#
+# ``name`` The name with which the plugin was enabled
+function is_plugin_enabled {
+ local name=$1
+ if [[ ",${DEVSTACK_PLUGINS}," =~ ",${name}," ]]; then
+ return 0
+ fi
+ return 1
+}
+
# fetch_plugins
#
# clones all plugins
@@ -2063,13 +2078,31 @@
}
+# Return just the <major>.<minor> for the given python interpreter
+function _get_python_version {
+ local interp=$1
+ local version
+ # disable erroring out here, otherwise if python 3 doesn't exist we fail hard.
+ if [[ -x $(which $interp 2> /dev/null) ]]; then
+ version=$($interp -c 'import sys; print("%s.%s" % sys.version_info[0:2])')
+ fi
+ echo ${version}
+}
+
# Return the current python as "python<major>.<minor>"
function python_version {
local python_version
- python_version=$(python -c 'import sys; print("%s.%s" % sys.version_info[0:2])')
+ python_version=$(_get_python_version python2)
echo "python${python_version}"
}
+function python3_version {
+ local python3_version
+ python3_version=$(_get_python_version python3)
+ echo "python${python_version}"
+}
+
+
# Service wrapper to restart services
# restart_service service-name
function restart_service {
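A sketch of guarding plugin-specific logic with the new ``is_plugin_enabled``
helper; the plugin name and the option set here are hypothetical::

    # Only configure the integration when the ceilometer plugin is enabled
    if is_plugin_enabled ceilometer; then
        iniset $NOVA_CONF DEFAULT instance_usage_audit True
    fi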
diff --git a/lib/apache b/lib/apache
index dfca25a..39d5b7b 100644
--- a/lib/apache
+++ b/lib/apache
@@ -260,10 +260,15 @@
# Set die-on-term & exit-on-reload so that uwsgi shuts down
iniset "$file" uwsgi die-on-term true
iniset "$file" uwsgi exit-on-reload true
+ # Set worker-reload-mercy so that a worker is not killed until the
+ # configured time has passed after a graceful shutdown is requested
+ iniset "$file" uwsgi worker-reload-mercy $WORKER_TIMEOUT
iniset "$file" uwsgi enable-threads true
iniset "$file" uwsgi plugins python
# uwsgi recommends this to prevent thundering herd on accept.
iniset "$file" uwsgi thunder-lock true
+ # Set hook to trigger graceful shutdown on SIGTERM
+ iniset "$file" uwsgi hook-master-start "unix_signal:15 gracefully_kill_them_all"
# Override the default size for headers from the 4k default.
iniset "$file" uwsgi buffer-size 65535
# Make sure the client doesn't try to re-use the connection.
@@ -316,6 +321,11 @@
iniset "$file" uwsgi plugins python
# uwsgi recommends this to prevent thundering herd on accept.
iniset "$file" uwsgi thunder-lock true
+ # Set hook to trigger graceful shutdown on SIGTERM
+ iniset "$file" uwsgi hook-master-start "unix_signal:15 gracefully_kill_them_all"
+ # Set worker-reload-mercy so that a worker is not killed until the
+ # configured time has passed after a graceful shutdown is requested
+ iniset "$file" uwsgi worker-reload-mercy $WORKER_TIMEOUT
# Override the default size for headers from the 4k default.
iniset "$file" uwsgi buffer-size 65535
# Make sure the client doesn't try to re-use the connection.
diff --git a/lib/cinder b/lib/cinder
index ba5bd04..674787c 100644
--- a/lib/cinder
+++ b/lib/cinder
@@ -70,12 +70,11 @@
CINDER_SERVICE_LISTEN_ADDRESS=${CINDER_SERVICE_LISTEN_ADDRESS:-$SERVICE_LISTEN_ADDRESS}
# What type of LVM device should Cinder use for LVM backend
-# Defaults to default, which is thick, the other valid choice
-# is thin, which as the name implies utilizes lvm thin provisioning.
-# Thinly provisioned LVM volumes may be more efficient when using the Cinder
-# image cache, but there are also known race failures with volume snapshots
-# and thinly provisioned LVM volumes, see bug 1642111 for details.
-CINDER_LVM_TYPE=${CINDER_LVM_TYPE:-default}
+# Defaults to auto, which will do thin provisioning if it's a fresh
+# volume group, otherwise it will do thick. The other valid choices are
+# default, which is thick, or thin, which as the name implies utilizes lvm
+# thin provisioning.
+CINDER_LVM_TYPE=${CINDER_LVM_TYPE:-auto}
# Default backends
# The backend format is type:name where type is one of the supported backend
@@ -296,8 +295,7 @@
# Set the service port for a proxy to take the original
if [ "$CINDER_USE_MOD_WSGI" == "True" ]; then
iniset $CINDER_CONF DEFAULT osapi_volume_listen_port $CINDER_SERVICE_PORT_INT
- iniset $CINDER_CONF DEFAULT public_endpoint $CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST
- iniset $CINDER_CONF DEFAULT osapi_volume_base_URL $CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST
+ iniset $CINDER_CONF oslo_middleware enable_proxy_headers_parsing True
else
iniset $CINDER_CONF DEFAULT osapi_volume_listen_port $CINDER_SERVICE_PORT_INT
iniset $CINDER_CONF DEFAULT public_endpoint $CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT
@@ -506,12 +504,12 @@
if [ "$CINDER_USE_MOD_WSGI" == "False" ]; then
run_process c-api "$CINDER_BIN_DIR/cinder-api --config-file $CINDER_CONF"
cinder_url=$service_protocol://$SERVICE_HOST:$service_port
- # Start proxy if tsl enabled
- if is_service_enabled tls_proxy; then
- start_tls_proxy cinder '*' $CINDER_SERVICE_PORT $CINDER_SERVICE_HOST $CINDER_SERVICE_POR_INT
+ # Start proxy if tls enabled
+ if is_service_enabled tls-proxy; then
+ start_tls_proxy cinder '*' $CINDER_SERVICE_PORT $CINDER_SERVICE_HOST $CINDER_SERVICE_PORT_INT
fi
else
- run_process "c-api" "$CINDER_BIN_DIR/uwsgi --ini $CINDER_UWSGI_CONF"
+ run_process "c-api" "$CINDER_BIN_DIR/uwsgi --procname-prefix cinder-api --ini $CINDER_UWSGI_CONF"
cinder_url=$service_protocol://$SERVICE_HOST/volume/v3
fi
fi
diff --git a/lib/etcd3 b/lib/etcd3
index 6e32cb3..60e827a 100644
--- a/lib/etcd3
+++ b/lib/etcd3
@@ -24,15 +24,9 @@
# --------
# Set up default values for etcd
-ETCD_DOWNLOAD_URL=${ETCD_DOWNLOAD_URL:-https://github.com/coreos/etcd/releases/download}
-ETCD_VERSION=${ETCD_VERSION:-v3.1.7}
ETCD_DATA_DIR="$DATA_DIR/etcd"
ETCD_SYSTEMD_SERVICE="devstack@etcd.service"
ETCD_BIN_DIR="$DEST/bin"
-ETCD_SHA256_AMD64="4fde194bbcd259401e2b5c462dfa579ee7f6af539f13f130b8f5b4f52e3b3c52"
-# NOTE(sdague): etcd v3.1.7 doesn't have anything for these architectures, though 3.2.0 does.
-ETCD_SHA256_ARM64=""
-ETCD_SHA256_PPC64=""
ETCD_PORT=2379
if is_ubuntu ; then
@@ -95,37 +89,19 @@
function install_etcd3 {
echo "Installing etcd"
- # Make sure etcd3 downloads the correct architecture
- if is_arch "x86_64"; then
- ETCD_ARCH="amd64"
- ETCD_SHA256=${ETCD_SHA256:-$ETCD_SHA256_AMD64}
- elif is_arch "aarch64"; then
- ETCD_ARCH="arm64"
- ETCD_SHA256=${ETCD_SHA256:-$ETCD_SHA256_ARM64}
- elif is_arch "ppc64le"; then
- ETCD_ARCH="ppc64le"
- ETCD_SHA256=${ETCD_SHA256:-$ETCD_SHA256_PPC64}
- else
- exit_distro_not_supported "invalid hardware type - $ETCD_ARCH"
- fi
-
- ETCD_NAME=etcd-$ETCD_VERSION-linux-$ETCD_ARCH
-
# Create the necessary directories
sudo mkdir -p $ETCD_BIN_DIR
sudo mkdir -p $ETCD_DATA_DIR
# Download and cache the etcd tgz for subsequent use
+ local etcd_file
+ etcd_file="$(get_extra_file $ETCD_DOWNLOAD_LOCATION)"
if [ ! -f "$FILES/etcd-$ETCD_VERSION-linux-$ETCD_ARCH/etcd" ]; then
- ETCD_DOWNLOAD_FILE=$ETCD_NAME.tar.gz
- if [ ! -f "$FILES/$ETCD_DOWNLOAD_FILE" ]; then
- wget $ETCD_DOWNLOAD_URL/$ETCD_VERSION/$ETCD_DOWNLOAD_FILE -O $FILES/$ETCD_DOWNLOAD_FILE
- fi
- echo "${ETCD_SHA256} $FILES/${ETCD_DOWNLOAD_FILE}" > $FILES/etcd.sha256sum
+ echo "${ETCD_SHA256} $etcd_file" > $FILES/etcd.sha256sum
# NOTE(sdague): this should go fatal if this fails
sha256sum -c $FILES/etcd.sha256sum
- tar xzvf $FILES/$ETCD_DOWNLOAD_FILE -C $FILES
+ tar xzvf $etcd_file -C $FILES
sudo cp $FILES/$ETCD_NAME/etcd $ETCD_BIN_DIR/etcd
fi
if [ ! -f "$ETCD_BIN_DIR/etcd" ]; then
diff --git a/lib/glance b/lib/glance
index 6e4a925..74734c7 100644
--- a/lib/glance
+++ b/lib/glance
@@ -345,7 +345,7 @@
run_process g-reg "$GLANCE_BIN_DIR/glance-registry --config-file=$GLANCE_CONF_DIR/glance-registry.conf"
if [[ "$WSGI_MODE" == "uwsgi" ]]; then
- run_process g-api "$GLANCE_BIN_DIR/uwsgi --ini $GLANCE_UWSGI_CONF"
+ run_process g-api "$GLANCE_BIN_DIR/uwsgi --procname-prefix glance-api --ini $GLANCE_UWSGI_CONF"
else
run_process g-api "$GLANCE_BIN_DIR/glance-api --config-file=$GLANCE_CONF_DIR/glance-api.conf"
fi
diff --git a/lib/keystone b/lib/keystone
index 69aadb6..f4df635 100644
--- a/lib/keystone
+++ b/lib/keystone
@@ -550,7 +550,7 @@
enable_apache_site keystone
restart_apache_server
else # uwsgi
- run_process keystone "$KEYSTONE_BIN_DIR/uwsgi --ini $KEYSTONE_PUBLIC_UWSGI_CONF" ""
+ run_process keystone "$KEYSTONE_BIN_DIR/uwsgi --procname-prefix keystone --ini $KEYSTONE_PUBLIC_UWSGI_CONF" ""
fi
echo "Waiting for keystone to start..."
@@ -621,12 +621,6 @@
iniset $KEYSTONE_LDAP_DOMAIN_FILE identity driver "ldap"
# LDAP settings for Users domain
- iniset $KEYSTONE_LDAP_DOMAIN_FILE ldap group_allow_delete "False"
- iniset $KEYSTONE_LDAP_DOMAIN_FILE ldap group_allow_update "False"
- iniset $KEYSTONE_LDAP_DOMAIN_FILE ldap group_allow_create "False"
- iniset $KEYSTONE_LDAP_DOMAIN_FILE ldap user_allow_delete "False"
- iniset $KEYSTONE_LDAP_DOMAIN_FILE ldap user_allow_update "False"
- iniset $KEYSTONE_LDAP_DOMAIN_FILE ldap user_allow_create "False"
iniset $KEYSTONE_LDAP_DOMAIN_FILE ldap user_tree_dn "ou=Users,$LDAP_BASE_DN"
iniset $KEYSTONE_LDAP_DOMAIN_FILE ldap user_objectclass "inetOrgPerson"
iniset $KEYSTONE_LDAP_DOMAIN_FILE ldap user_name_attribute "cn"
diff --git a/lib/libraries b/lib/libraries
index 4ceb804..6d52f64 100644
--- a/lib/libraries
+++ b/lib/libraries
@@ -30,6 +30,7 @@
GITDIR["futurist"]=$DEST/futurist
GITDIR["os-client-config"]=$DEST/os-client-config
GITDIR["osc-lib"]=$DEST/osc-lib
+GITDIR["osc-placement"]=$DEST/osc-placement
GITDIR["oslo.cache"]=$DEST/oslo.cache
GITDIR["oslo.concurrency"]=$DEST/oslo.concurrency
GITDIR["oslo.config"]=$DEST/oslo.config
@@ -91,6 +92,7 @@
_install_lib_from_source "debtcollector"
_install_lib_from_source "futurist"
_install_lib_from_source "osc-lib"
+ _install_lib_from_source "osc-placement"
_install_lib_from_source "os-client-config"
_install_lib_from_source "oslo.cache"
_install_lib_from_source "oslo.concurrency"
diff --git a/lib/neutron b/lib/neutron
index a531288..ef51d66 100644
--- a/lib/neutron
+++ b/lib/neutron
@@ -242,6 +242,7 @@
if is_service_enabled tls-proxy; then
# Set the service port for a proxy to take the original
iniset $NEUTRON_CONF DEFAULT bind_port "$NEUTRON_SERVICE_PORT_INT"
+ iniset $NEUTRON_CONF oslo_middleware enable_proxy_headers_parsing True
fi
# Metering
@@ -442,7 +443,7 @@
fi
if is_service_enabled neutron-metering; then
- run_process neutron-metering "$NEUTRON_METERING_BINARY --config-file $NEUTRON_CONF --config-file $NEUTRON_METERING_AGENT_CONF"
+ run_process neutron-metering "$NEUTRON_BIN_DIR/$NEUTRON_METERING_BINARY --config-file $NEUTRON_CONF --config-file $NEUTRON_METERING_AGENT_CONF"
fi
}
@@ -493,6 +494,13 @@
_NEUTRON_SERVER_EXTRA_CONF_FILES_ABS+=($1)
}
+# neutron_deploy_rootwrap_filters() - deploy rootwrap filters
+function neutron_deploy_rootwrap_filters_new {
+ local srcdir=$1
+ sudo install -d -o root -g root -m 755 $NEUTRON_CONF_DIR/rootwrap.d
+ sudo install -o root -g root -m 644 $srcdir/etc/neutron/rootwrap.d/*.filters $NEUTRON_CONF_DIR/rootwrap.d
+}
+
# Dispatch functions
# These are needed for compatibility between the old and new implementations
# where there are function name overlaps. These will be removed when
@@ -607,5 +615,14 @@
fi
}
+function neutron_deploy_rootwrap_filters {
+ if is_neutron_legacy_enabled; then
+ # Call back to old function
+ _neutron_deploy_rootwrap_filters "$@"
+ else
+ neutron_deploy_rootwrap_filters_new "$@"
+ fi
+}
+
# Restore xtrace
$XTRACE
diff --git a/lib/neutron-legacy b/lib/neutron-legacy
index c8d2540..0ccb17c 100644
--- a/lib/neutron-legacy
+++ b/lib/neutron-legacy
@@ -168,7 +168,7 @@
#
Q_DVR_MODE=${Q_DVR_MODE:-legacy}
if [[ "$Q_DVR_MODE" != "legacy" ]]; then
- Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,linuxbridge,l2population
+ Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,l2population
fi
# Provider Network Configurations
@@ -718,6 +718,7 @@
if is_service_enabled tls-proxy; then
# Set the service port for a proxy to take the original
iniset $NEUTRON_CONF DEFAULT bind_port "$Q_PORT_INT"
+ iniset $NEUTRON_CONF oslo_middleware enable_proxy_headers_parsing True
fi
_neutron_setup_rootwrap
diff --git a/lib/neutron_plugins/services/l3 b/lib/neutron_plugins/services/l3
index 07974fe..98315b7 100644
--- a/lib/neutron_plugins/services/l3
+++ b/lib/neutron_plugins/services/l3
@@ -87,7 +87,8 @@
# Subnetpool defaults
USE_SUBNETPOOL=${USE_SUBNETPOOL:-True}
-SUBNETPOOL_NAME=${SUBNETPOOL_NAME:-"shared-default-subnetpool"}
+SUBNETPOOL_NAME_V4=${SUBNETPOOL_NAME:-"shared-default-subnetpool-v4"}
+SUBNETPOOL_NAME_V6=${SUBNETPOOL_NAME:-"shared-default-subnetpool-v6"}
SUBNETPOOL_PREFIX_V4=${SUBNETPOOL_PREFIX_V4:-$IPV4_ADDRS_SAFE_TO_USE}
SUBNETPOOL_PREFIX_V6=${SUBNETPOOL_PREFIX_V6:-$IPV6_ADDRS_SAFE_TO_USE}
@@ -169,10 +170,10 @@
if is_networking_extension_supported "auto-allocated-topology"; then
if [[ "$USE_SUBNETPOOL" == "True" ]]; then
if [[ "$IP_VERSION" =~ 4.* ]]; then
- SUBNETPOOL_V4_ID=$(openstack --os-cloud devstack-admin --os-region "$REGION_NAME" subnet pool create $SUBNETPOOL_NAME --default-prefix-length $SUBNETPOOL_SIZE_V4 --pool-prefix $SUBNETPOOL_PREFIX_V4 --share --default | grep ' id ' | get_field 2)
+ SUBNETPOOL_V4_ID=$(openstack --os-cloud devstack-admin --os-region "$REGION_NAME" subnet pool create $SUBNETPOOL_NAME_V4 --default-prefix-length $SUBNETPOOL_SIZE_V4 --pool-prefix $SUBNETPOOL_PREFIX_V4 --share --default -f value -c id)
fi
if [[ "$IP_VERSION" =~ .*6 ]]; then
- SUBNETPOOL_V6_ID=$(openstack --os-cloud devstack-admin --os-region "$REGION_NAME" subnet pool create $SUBNETPOOL_NAME --default-prefix-length $SUBNETPOOL_SIZE_V6 --pool-prefix $SUBNETPOOL_PREFIX_V6 --share --default | grep ' id ' | get_field 2)
+ SUBNETPOOL_V6_ID=$(openstack --os-cloud devstack-admin --os-region "$REGION_NAME" subnet pool create $SUBNETPOOL_NAME_V6 --default-prefix-length $SUBNETPOOL_SIZE_V6 --pool-prefix $SUBNETPOOL_PREFIX_V6 --share --default -f value -c id)
fi
fi
fi
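The switch from ``grep ' id ' | get_field 2`` to ``-f value -c id`` leans on
the client's own output formatter instead of screen-scraping the table. The
same formatter works on any openstack list command, for example::

    # Print only the ID column, one bare value per row
    openstack subnet pool list -f value -c ID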
diff --git a/lib/nova b/lib/nova
index 5b78972..1112f29 100644
--- a/lib/nova
+++ b/lib/nova
@@ -542,7 +542,7 @@
# Set the oslo messaging driver to the typical default. This does not
# enable notifications, but it will allow them to function when enabled.
iniset $NOVA_CONF oslo_messaging_notifications driver "messagingv2"
- iniset $NOVA_CONF oslo_messaging_notifications transport_url $(get_transport_url)
+ iniset $NOVA_CONF oslo_messaging_notifications transport_url $(get_notification_url)
iniset_rpc_backend nova $NOVA_CONF
iniset $NOVA_CONF glance api_servers "$GLANCE_URL"
@@ -555,6 +555,7 @@
if is_service_enabled tls-proxy; then
iniset $NOVA_CONF DEFAULT glance_protocol https
+ iniset $NOVA_CONF oslo_middleware enable_proxy_headers_parsing True
fi
if is_service_enabled n-sproxy; then
@@ -803,7 +804,7 @@
start_tls_proxy nova '*' $NOVA_SERVICE_PORT $NOVA_SERVICE_HOST $NOVA_SERVICE_PORT_INT
fi
else
- run_process "n-api" "$NOVA_BIN_DIR/uwsgi --ini $NOVA_UWSGI_CONF"
+ run_process "n-api" "$NOVA_BIN_DIR/uwsgi --procname-prefix nova-api --ini $NOVA_UWSGI_CONF"
nova_url=$service_protocol://$SERVICE_HOST/compute/v2.1/
fi
@@ -910,7 +911,7 @@
if [ "$NOVA_USE_MOD_WSGI" == "False" ]; then
run_process n-api-meta "$NOVA_BIN_DIR/nova-api-metadata --config-file $compute_cell_conf"
else
- run_process n-api-meta "$NOVA_BIN_DIR/uwsgi --ini $NOVA_METADATA_UWSGI_CONF"
+ run_process n-api-meta "$NOVA_BIN_DIR/uwsgi --procname-prefix nova-api-meta --ini $NOVA_METADATA_UWSGI_CONF"
fi
run_process n-novnc "$NOVA_BIN_DIR/nova-novncproxy --config-file $api_cell_conf --web $NOVNC_WEB_DIR"
@@ -951,6 +952,28 @@
done
}
+function is_nova_ready {
+ # NOTE(sdague): with cells v2 all the compute services must be up
+ # and checked into the database before discover_hosts is run. This
+ # happens in all in one installs by accident, because > 30 seconds
+ # happen between here and the script ending. However, in multinode
+ # tests this can very often not be the case. So ensure that the
+ # compute is up before we move on.
+ if is_service_enabled n-cell; then
+ # cells v1 can't complete the check below because it munges
+ # hostnames with cell information (grumble grumble).
+ return
+ fi
+ # TODO(sdague): honestly, this probably should be a plug point for
+ # an external system.
+ if [[ "$VIRT_DRIVER" == 'xenserver' ]]; then
+ # xenserver encodes information in the hostname of the compute
+ # because of the dom0/domU split. Just ignore for now.
+ return
+ fi
+ wait_for_compute 60
+}
+
function start_nova {
# this catches the cells v1 case early
_set_singleconductor
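``is_nova_ready`` exists so that cells v2 host discovery has a registered
compute to find on multinode jobs. A hedged sketch of checking the same state
by hand, assuming the devstack-admin cloud config::

    # The compute service must be registered before hosts can be discovered
    openstack --os-cloud devstack-admin compute service list --service nova-compute
    nova-manage cell_v2 discover_hosts --verbose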
diff --git a/lib/nova_plugins/hypervisor-ironic b/lib/nova_plugins/hypervisor-ironic
index 062afb7..034e403 100644
--- a/lib/nova_plugins/hypervisor-ironic
+++ b/lib/nova_plugins/hypervisor-ironic
@@ -45,12 +45,11 @@
if [[ "$IRONIC_USE_RESOURCE_CLASSES" == "False" ]]; then
iniset $NOVA_CONF filter_scheduler use_baremetal_filters True
+ iniset $NOVA_CONF filter_scheduler host_subset_size 999
+ iniset $NOVA_CONF DEFAULT ram_allocation_ratio 1.0
+ iniset $NOVA_CONF DEFAULT reserved_host_memory_mb 0
fi
- iniset $NOVA_CONF filter_scheduler host_subset_size 999
-
- iniset $NOVA_CONF DEFAULT ram_allocation_ratio 1.0
- iniset $NOVA_CONF DEFAULT reserved_host_memory_mb 0
# ironic section
iniset $NOVA_CONF ironic auth_type password
iniset $NOVA_CONF ironic username admin
diff --git a/lib/placement b/lib/placement
index 8adbbde..d3fb8c8 100644
--- a/lib/placement
+++ b/lib/placement
@@ -159,12 +159,15 @@
# install_placement() - Collect source and prepare
function install_placement {
install_apache_wsgi
+ # Install the openstackclient placement client plugin for CLI
+ # TODO(mriedem): Use pip_install_gr once osc-placement is in g-r.
+ pip_install osc-placement
}
# start_placement_api() - Start the API processes ahead of other things
function start_placement_api {
if [[ "$WSGI_MODE" == "uwsgi" ]]; then
- run_process "placement-api" "$PLACEMENT_BIN_DIR/uwsgi --ini $PLACEMENT_UWSGI_CONF"
+ run_process "placement-api" "$PLACEMENT_BIN_DIR/uwsgi --procname-prefix placement --ini $PLACEMENT_UWSGI_CONF"
else
enable_apache_site placement-api
restart_apache_server
diff --git a/lib/rpc_backend b/lib/rpc_backend
index 3177e88..fb1cf73 100644
--- a/lib/rpc_backend
+++ b/lib/rpc_backend
@@ -114,7 +114,7 @@
fi
}
-# builds transport url string
+# Returns the address of the RPC backend in URL format.
function get_transport_url {
local virtual_host=$1
if is_service_enabled rabbit || { [ -n "$RABBIT_HOST" ] && [ -n "$RABBIT_PASSWORD" ]; }; then
@@ -122,8 +122,9 @@
fi
}
-# Repeat the definition, in case get_transport_url is overriden for RPC purpose.
-# get_notification_url can then be used to talk to rabbit for notifications.
+# Returns the address of the Notification backend in URL format. This
+# should be used to set the transport_url option in the
+# oslo_messaging_notifications group.
function get_notification_url {
local virtual_host=$1
if is_service_enabled rabbit || { [ -n "$RABBIT_HOST" ] && [ -n "$RABBIT_PASSWORD" ]; }; then
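The split matters when RPC and notifications use different backends. A sketch
of the intended call sites (the RPC line is normally handled for services by
``iniset_rpc_backend``)::

    # RPC traffic
    iniset $NOVA_CONF DEFAULT transport_url $(get_transport_url)
    # Notifications always go to rabbit
    iniset $NOVA_CONF oslo_messaging_notifications transport_url $(get_notification_url)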
diff --git a/lib/stack b/lib/stack
index f09ddce..bada26f 100644
--- a/lib/stack
+++ b/lib/stack
@@ -33,5 +33,8 @@
if [[ ${USE_VENV} = True && -n ${PROJECT_VENV[$service]:-} ]]; then
unset PIP_VIRTUAL_ENV
fi
+ else
+ echo "No function declared with name 'install_${service}'."
+ exit 1
fi
}
diff --git a/lib/swift b/lib/swift
index 45f6793..1601e2b 100644
--- a/lib/swift
+++ b/lib/swift
@@ -464,6 +464,9 @@
iniuncomment ${SWIFT_CONFIG_PROXY_SERVER} filter:tempauth account_autocreate
iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:tempauth reseller_prefix "TEMPAUTH"
+ # Allow both reseller prefixes to be used with domain_remap
+ iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:domain_remap reseller_prefixes "AUTH, TEMPAUTH"
+
if is_service_enabled swift3; then
cat <<EOF >>${SWIFT_CONFIG_PROXY_SERVER}
[filter:s3token]
@@ -608,15 +611,13 @@
# create all of the directories needed to emulate a few different servers
local node_number
for node_number in ${SWIFT_REPLICAS_SEQ}; do
- sudo ln -sf ${SWIFT_DATA_DIR}/drives/sdb1/$node_number ${SWIFT_DATA_DIR}/$node_number;
- local drive=${SWIFT_DATA_DIR}/drives/sdb1/${node_number}
- local node=${SWIFT_DATA_DIR}/${node_number}/node
- local node_device=${node}/sdb1
- [[ -d $node ]] && continue
- [[ -d $drive ]] && continue
- sudo install -o ${STACK_USER} -g $user_group -d $drive
- sudo install -o ${STACK_USER} -g $user_group -d $node_device
- sudo chown -R ${STACK_USER}: ${node}
+ # node_devices must match *.conf devices option
+ local node_devices=${SWIFT_DATA_DIR}/${node_number}
+ local real_devices=${SWIFT_DATA_DIR}/drives/sdb1/$node_number
+ sudo ln -sf $real_devices $node_devices;
+ local device=${real_devices}/sdb1
+ [[ -d $device ]] && continue
+ sudo install -o ${STACK_USER} -g $user_group -d $device
done
}
diff --git a/lib/tempest b/lib/tempest
index cc65ec7..f086f9a 100644
--- a/lib/tempest
+++ b/lib/tempest
@@ -297,6 +297,12 @@
# Newton and Ocata. This option can be removed after Mitaka is end of life.
iniset $TEMPEST_CONFIG identity-feature-enabled forbid_global_implied_dsr True
+ # When LDAP is enabled domain specific drivers are also enabled and the users
+ # and groups identity tests must adapt to this scenario
+ if is_service_enabled ldap; then
+ iniset $TEMPEST_CONFIG identity-feature-enabled domain_specific_drivers True
+ fi
+
# Image
# We want to be able to override this variable in the gate to avoid
# doing an external HTTP fetch for this test.
@@ -574,6 +580,11 @@
DISABLE_NETWORK_API_EXTENSIONS+=", metering"
fi
+ # disable l3_agent_scheduler if we didn't enable L3 agent
+ if ! is_service_enabled q-l3; then
+ DISABLE_NETWORK_API_EXTENSIONS+=", l3_agent_scheduler"
+ fi
+
local network_api_extensions=${NETWORK_API_EXTENSIONS:-"all"}
if [[ ! -z "$DISABLE_NETWORK_API_EXTENSIONS" ]]; then
# Enabled extensions are either the ones explicitly specified or those available on the API endpoint
@@ -608,7 +619,7 @@
# install_tempest() - Collect source and prepare
function install_tempest {
git_clone $TEMPEST_REPO $TEMPEST_DIR $TEMPEST_BRANCH
- pip_install tox
+ pip_install 'tox!=2.8.0'
pushd $TEMPEST_DIR
tox -r --notest -efull
# NOTE(mtreinish) Respect constraints in the tempest full venv, things that
diff --git a/lib/tls b/lib/tls
index 7bde5e6..0baf86c 100644
--- a/lib/tls
+++ b/lib/tls
@@ -487,7 +487,7 @@
}
# Starts the TLS proxy for the given IP/ports
-# start_tls_proxy front-host front-port back-host back-port
+# start_tls_proxy service-name front-host front-port back-host back-port
function start_tls_proxy {
local b_service="$1-tls-proxy"
local f_host=$2
@@ -527,6 +527,7 @@
# for swift functional testing to work with tls enabled. It is 2 bytes
# larger than the apache default of 8190.
LimitRequestFieldSize $f_header_size
+ RequestHeader set X-Forwarded-Proto "https"
<Location />
ProxyPass http://$b_host:$b_port/ retry=0 nocanon
@@ -541,7 +542,7 @@
if is_suse ; then
sudo a2enflag SSL
fi
- for mod in ssl proxy proxy_http; do
+ for mod in headers ssl proxy proxy_http; do
enable_apache_mod $mod
done
enable_apache_site $b_service
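The new ``RequestHeader`` directive requires Apache's ``headers`` module,
which is why it is added to the module loop. A quick check that the module
actually loaded, assuming an Ubuntu host::

    # List loaded Apache modules and confirm headers_module is among them
    sudo apache2ctl -M | grep headers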
diff --git a/stack.sh b/stack.sh
index 6e930ad..2bd9da9 100755
--- a/stack.sh
+++ b/stack.sh
@@ -1371,6 +1371,13 @@
# Sanity checks
# =============
+# Check that computes are all ready
+#
+# TODO(sdague): there should be some generic phase here.
+if is_service_enabled n-cpu; then
+ is_nova_ready
+fi
+
# Check the status of running services
service_check
diff --git a/stackrc b/stackrc
index 787ae28..0ffcb67 100644
--- a/stackrc
+++ b/stackrc
@@ -130,10 +130,12 @@
# When Python 3 is supported by an application, adding the specific
# version of Python 3 to this variable will install the app using that
# version of the interpreter instead of 2.7.
-export PYTHON3_VERSION=${PYTHON3_VERSION:-3.5}
+_DEFAULT_PYTHON3_VERSION="$(_get_python_version python3)"
+export PYTHON3_VERSION=${PYTHON3_VERSION:-${_DEFAULT_PYTHON3_VERSION:-3.5}}
# Just to be more explicit on the Python 2 version to use.
-export PYTHON2_VERSION=${PYTHON2_VERSION:-2.7}
+_DEFAULT_PYTHON2_VERSION="$(_get_python_version python2)"
+export PYTHON2_VERSION=${PYTHON2_VERSION:-${_DEFAULT_PYTHON2_VERSION:-2.7}}
# allow local overrides of env variables, including repo config
if [[ -f $RC_DIR/localrc ]]; then
@@ -355,6 +357,10 @@
# this doesn't exist in a lib file, so set it here
GITDIR["python-openstackclient"]=$DEST/python-openstackclient
+# placement-api CLI
+GITREPO["osc-placement"]=${OSC_PLACEMENT_REPO:-${GIT_BASE}/openstack/osc-placement.git}
+GITBRANCH["osc-placement"]=${OSC_PLACEMENT_BRANCH:-master}
+
###################
#
@@ -588,7 +594,7 @@
IRONIC_PYTHON_AGENT_BRANCH=${IRONIC_PYTHON_AGENT_BRANCH:-master}
# a websockets/html5 or flash powered VNC console for vm instances
-NOVNC_REPO=${NOVNC_REPO:-https://github.com/kanaka/noVNC.git}
+NOVNC_REPO=${NOVNC_REPO:-https://github.com/novnc/noVNC.git}
NOVNC_BRANCH=${NOVNC_BRANCH:-stable/v0.6}
# a websockets/html5 or flash powered SPICE console for vm instances
@@ -699,6 +705,51 @@
DOWNLOAD_DEFAULT_IMAGES=False
fi
+# This is a comma-separated list of extra URLs to be listed for
+# download by the tools/image_list.sh script. CI environments can
+# pre-download these URLs and place them in $FILES. Later scripts can
+# then use "get_extra_file <url>" which will print out the path to the
+# file; it will either be downloaded on demand or acquired from the
+# cache if there.
+EXTRA_CACHE_URLS=""
+
+# etcd3 defaults
+ETCD_VERSION=${ETCD_VERSION:-v3.1.7}
+ETCD_SHA256_AMD64="4fde194bbcd259401e2b5c462dfa579ee7f6af539f13f130b8f5b4f52e3b3c52"
+# NOTE(sdague): etcd v3.1.7 doesn't have anything for these architectures, though 3.2.0 does.
+ETCD_SHA256_ARM64=""
+ETCD_SHA256_PPC64=""
+ETCD_SHA256_S390X=""
+# Make sure etcd3 downloads the correct architecture
+if is_arch "x86_64"; then
+ ETCD_ARCH="amd64"
+ ETCD_SHA256=${ETCD_SHA256:-$ETCD_SHA256_AMD64}
+elif is_arch "aarch64"; then
+ ETCD_ARCH="arm64"
+ ETCD_SHA256=${ETCD_SHA256:-$ETCD_SHA256_ARM64}
+elif is_arch "ppc64le"; then
+ ETCD_ARCH="ppc64le"
+ ETCD_SHA256=${ETCD_SHA256:-$ETCD_SHA256_PPC64}
+elif is_arch "s390x"; then
+ # An etcd3 binary for s390x is not available on github like it is
+ # for other arches. Only continue if a custom download URL was
+ # provided.
+ if [[ -n "${ETCD_DOWNLOAD_URL}" ]]; then
+ ETCD_ARCH="s390x"
+ ETCD_SHA256=${ETCD_SHA256:-$ETCD_SHA256_S390X}
+ else
+ exit_distro_not_supported "etcd3. No custom ETCD_DOWNLOAD_URL provided."
+ fi
+else
+ exit_distro_not_supported "invalid hardware type - $ETCD_ARCH"
+fi
+ETCD_DOWNLOAD_URL=${ETCD_DOWNLOAD_URL:-https://github.com/coreos/etcd/releases/download}
+ETCD_NAME=etcd-$ETCD_VERSION-linux-$ETCD_ARCH
+ETCD_DOWNLOAD_FILE=$ETCD_NAME.tar.gz
+ETCD_DOWNLOAD_LOCATION=$ETCD_DOWNLOAD_URL/$ETCD_VERSION/$ETCD_DOWNLOAD_FILE
+# etcd is always required, so place it into list of pre-cached downloads
+EXTRA_CACHE_URLS+=",$ETCD_DOWNLOAD_LOCATION"
+
# Detect duplicate values in IMAGE_URLS
for image_url in ${IMAGE_URLS//,/ }; do
if [ $(echo "$IMAGE_URLS" | grep -o -F "$image_url" | wc -l) -gt 1 ]; then
@@ -742,6 +793,9 @@
# Service graceful shutdown timeout
SERVICE_GRACEFUL_SHUTDOWN_TIMEOUT=${SERVICE_GRACEFUL_SHUTDOWN_TIMEOUT:-5}
+# Timeout for uwsgi workers to gracefully shut down
+WORKER_TIMEOUT=${WORKER_TIMEOUT:-90}
+
# Support alternative yum -- in future Fedora 'dnf' will become the
# only supported installer, but for now 'yum' and 'dnf' are both
# available in parallel with compatible CLIs. Allow manual switching
diff --git a/tests/test_libs_from_pypi.sh b/tests/test_libs_from_pypi.sh
index 5b4ff32..0bd8d49 100755
--- a/tests/test_libs_from_pypi.sh
+++ b/tests/test_libs_from_pypi.sh
@@ -36,7 +36,8 @@
ALL_LIBS+=" python-cinderclient glance_store oslo.concurrency oslo.db"
ALL_LIBS+=" oslo.versionedobjects oslo.vmware keystonemiddleware"
ALL_LIBS+=" oslo.serialization django_openstack_auth"
-ALL_LIBS+=" python-openstackclient osc-lib os-client-config oslo.rootwrap"
+ALL_LIBS+=" python-openstackclient osc-lib osc-placement"
+ALL_LIBS+=" os-client-config oslo.rootwrap"
ALL_LIBS+=" oslo.i18n oslo.utils python-openstacksdk python-swiftclient"
ALL_LIBS+=" python-neutronclient tooz ceilometermiddleware oslo.policy"
ALL_LIBS+=" debtcollector os-brick os-traits automaton futurist oslo.service"
diff --git a/tools/image_list.sh b/tools/image_list.sh
index 29b93ed..3a27c4a 100755
--- a/tools/image_list.sh
+++ b/tools/image_list.sh
@@ -1,5 +1,14 @@
#!/bin/bash
+# Print out a list of images and other files to download for caching.
+# This is mostly used by the OpenStack infrastructure during daily
+# image builds to save the large images to /opt/cache/files (see [1]).
+#
+# The two lists of URLs downloaded are IMAGE_URLS and
+# EXTRA_CACHE_URLS, which are set up in stackrc.
+#
+# [1] project-config:nodepool/elements/cache-devstack/extra-data.d/55-cache-devstack-repos
+
# Keep track of the DevStack directory
TOP_DIR=$(cd $(dirname "$0")/.. && pwd)
@@ -31,12 +40,20 @@
ALL_IMAGES+=$URLS
done
-# Make a nice list
-echo $ALL_IMAGES | tr ',' '\n' | sort | uniq
-
# Sanity check - ensure we have a minimum number of images
num=$(echo $ALL_IMAGES | tr ',' '\n' | sort | uniq | wc -l)
if [[ "$num" -lt 4 ]]; then
echo "ERROR: We only found $num images in $ALL_IMAGES, which can't be right."
exit 1
fi
+
+# These are extra non-image files that we want pre-cached. They are kept
+# in a separate list because devstack loops over the image list to
+# upload files to glance and these aren't images. (This was a bit of an
+# afterthought, which is why the naming around this is very
+# image-centric.)
+URLS=$(source $TOP_DIR/stackrc && echo $EXTRA_CACHE_URLS)
+ALL_IMAGES+=$URLS
+
+# Make a nice combined list
+echo $ALL_IMAGES | tr ',' '\n' | sort | uniq
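A sketch of how the combined list can be consumed by a caching job, mirroring
what the CI image builds do with ``/opt/cache/files``::

    # Pre-seed the devstack file cache from the printed URL list
    for url in $(./tools/image_list.sh); do
        wget -c "$url" -P /opt/cache/files/
    done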
diff --git a/tools/mlock_report.py b/tools/mlock_report.py
index 2169cc2..07716b0 100755
--- a/tools/mlock_report.py
+++ b/tools/mlock_report.py
@@ -3,12 +3,12 @@
# This tool lists processes that lock memory pages from swapping to disk.
import re
-import subprocess
import psutil
-SUMMARY_REGEX = re.compile(b".*\s+(?P<locked>[\d]+)\s+KB")
+LCK_SUMMARY_REGEX = re.compile(
+ "^VmLck:\s+(?P<locked>[\d]+)\s+kB", re.MULTILINE)
def main():
@@ -22,28 +22,21 @@
def _get_report():
mlock_users = []
for proc in psutil.process_iter():
- pid = proc.pid
# sadly psutil does not expose locked pages info, that's why we
- # call to pmap and parse the output here
+ # iterate over the /proc/%pid/status files manually
try:
- out = subprocess.check_output(['pmap', '-XX', str(pid)])
- except subprocess.CalledProcessError as e:
- # 42 means process just vanished, which is ok
- if e.returncode == 42:
- continue
- raise
- last_line = out.splitlines()[-1]
-
- # some processes don't provide a memory map, for example those
- # running as kernel services, so we need to skip those that don't
- # match
- result = SUMMARY_REGEX.match(last_line)
- if result:
- locked = int(result.group('locked'))
- if locked:
- mlock_users.append({'name': proc.name(),
- 'pid': pid,
- 'locked': locked})
+ s = open("%s/%d/status" % (psutil.PROCFS_PATH, proc.pid), 'r')
+ except EnvironmentError:
+ continue
+ with s:
+ for line in s:
+ result = LCK_SUMMARY_REGEX.search(line)
+ if result:
+ locked = int(result.group('locked'))
+ if locked:
+ mlock_users.append({'name': proc.name(),
+ 'pid': proc.pid,
+ 'locked': locked})
# produce a single line log message with per process mlock stats
if mlock_users:
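For reference, the ``VmLck`` line that the new regex matches appears in
``/proc/<pid>/status``; an easy way to eyeball one::

    # Show the locked-memory line for the current shell
    grep VmLck /proc/self/status
    # Typical output: "VmLck:         0 kB"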