Merge "Enable graceful shutdown for services"
diff --git a/README.rst b/README.rst
index adbf59a..6885546 100644
--- a/README.rst
+++ b/README.rst
@@ -1,4 +1,5 @@
-DevStack is a set of scripts and utilities to quickly deploy an OpenStack cloud.
+DevStack is a set of scripts and utilities to quickly deploy an OpenStack cloud
+from git source trees.
Goals
=====
@@ -27,9 +28,9 @@
The DevStack master branch generally points to trunk versions of OpenStack
components. For older, stable versions, look for branches named
stable/[release] in the DevStack repo. For example, you can do the
-following to create a Newton OpenStack cloud::
+following to create a Pike OpenStack cloud::
- git checkout stable/newton
+ git checkout stable/pike
./stack.sh
You can also pick specific OpenStack project releases by setting the appropriate
@@ -54,7 +55,7 @@
endpoints, like so:
* Horizon: http://myhost/
-* Keystone: http://myhost:5000/v2.0/
+* Keystone: http://myhost/identity/v2.0/
We also provide an environment file that you can use to interact with your
cloud via CLI::
diff --git a/doc/source/configuration.rst b/doc/source/configuration.rst
index 064bf51..23f680a 100644
--- a/doc/source/configuration.rst
+++ b/doc/source/configuration.rst
@@ -136,7 +136,7 @@
::
- OS_AUTH_URL=http://$SERVICE_HOST:5000/v2.0
+ OS_AUTH_URL=http://$SERVICE_HOST:5000/v3
KEYSTONECLIENT\_DEBUG, NOVACLIENT\_DEBUG
Set command-line client log level to ``DEBUG``. These are commented
@@ -779,9 +779,15 @@
DOWNLOAD_DEFAULT_IMAGES=False
IMAGE_URLS="https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-s390x-disk1.img"
+ # Provide a custom etcd3 binary download URL and its sha256.
+ # The tarball must be downloadable from
+ # '<your-etcd-download-url>/<etcd version>/etcd-<etcd version>-linux-s390x.tar.gz'.
+ # Build instructions for etcd3: https://github.com/linux-on-ibm-z/docs/wiki/Building-etcd
+ ETCD_DOWNLOAD_URL=<your-etcd-download-url>
+ ETCD_SHA256=<your-etcd3-sha256>
+
enable_service n-sproxy
disable_service n-novnc
- disable_service etcd3 # https://bugs.launchpad.net/devstack/+bug/1693192
[[post-config|$NOVA_CONF]]
@@ -803,8 +809,11 @@
needed if you want to use the *serial console* outside of the all-in-one
setup.
-* The service ``etcd3`` needs to be disabled as long as bug report
- https://bugs.launchpad.net/devstack/+bug/1693192 is not resolved.
+* A link to an etcd3 binary and its sha256 needs to be provided because the
+ binary for s390x is not hosted on github like it is for other
+ architectures. For more details see
+ https://bugs.launchpad.net/devstack/+bug/1693192. Etcd3 can easily be
+  built by following https://github.com/linux-on-ibm-z/docs/wiki/Building-etcd.
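+  For reference, the sha256 the configuration expects is simply that of the
+  self-built tarball, and the tarball must be reachable at
+  ``<your-etcd-download-url>/<version>/etcd-<version>-linux-s390x.tar.gz``.
+  For example (the version is illustrative)::
+
+      sha256sum etcd-v3.1.7-linux-s390x.tar.gz    # value for ETCD_SHA256
+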
.. note:: To run *Tempest* against this *Devstack* all-in-one, you'll need
to use a guest image which is smaller than 1GB when uncompressed.
diff --git a/doc/source/guides/devstack-with-lbaas-v2.rst b/doc/source/guides/devstack-with-lbaas-v2.rst
index 4ed64bf..3592844 100644
--- a/doc/source/guides/devstack-with-lbaas-v2.rst
+++ b/doc/source/guides/devstack-with-lbaas-v2.rst
@@ -39,7 +39,6 @@
LOGFILE=$DEST/logs/stack.sh.log
VERBOSE=True
LOG_COLOR=True
- SCREEN_LOGDIR=$DEST/logs
# Pre-requisite
ENABLED_SERVICES=rabbit,mysql,key
# Horizon
diff --git a/doc/source/guides/nova.rst b/doc/source/guides/nova.rst
index 6bbab53..0f105d7 100644
--- a/doc/source/guides/nova.rst
+++ b/doc/source/guides/nova.rst
@@ -66,5 +66,5 @@
<https://github.com/openstack/nova/blob/master/nova/conf/serial_console.py>`_.
For more information on OpenStack configuration see the `OpenStack
-Configuration Reference
-<https://docs.openstack.org/ocata/config-reference/compute.html>`_
+Compute Service Configuration Reference
+<https://docs.openstack.org/nova/latest/admin/configuration/index.html>`_
diff --git a/doc/source/networking.rst b/doc/source/networking.rst
index bdbeaaa..74010cd 100644
--- a/doc/source/networking.rst
+++ b/doc/source/networking.rst
@@ -69,7 +69,7 @@
This is not a recommended configuration. Because of interactions
between ovs and bridging, if you reboot your box with active
- networking you may loose network connectivity to your system.
+ networking you may lose network connectivity to your system.
If you need your guests accessible on the network, but only have 1
interface (using something like a NUC), you can share your one
diff --git a/doc/source/systemd.rst b/doc/source/systemd.rst
index 60a7719..c1d2944 100644
--- a/doc/source/systemd.rst
+++ b/doc/source/systemd.rst
@@ -98,8 +98,7 @@
Following logs for multiple services simultaneously::
- journalctl -f --unit devstack@n-cpu.service --unit
- devstack@n-cond.service
+ journalctl -f --unit devstack@n-cpu.service --unit devstack@n-cond.service
or you can even do wild cards to follow all the nova services::
@@ -121,6 +120,63 @@
See ``man 1 journalctl`` for more.
+Debugging
+=========
+
+Using pdb
+---------
+
+In order to break into a regular pdb session on a systemd-controlled
+service, you need to invoke the process manually - that is, take it out
+of systemd's control.
+
+Discover the command systemd is using to run the service::
+
+ systemctl show devstack@n-sch.service -p ExecStart --no-pager
+
+Stop the systemd service::
+
+ sudo systemctl stop devstack@n-sch.service
+
+Inject your breakpoint in the source, e.g.::
+
+ import pdb; pdb.set_trace()
+
+Invoke the command manually::
+
+ /usr/local/bin/nova-scheduler --config-file /etc/nova/nova.conf
+
+Using remote-pdb
+----------------
+
+`remote-pdb`_ works while the process is under systemd control.
+
+Make sure you have remote-pdb installed::
+
+ sudo pip install remote-pdb
+
+Inject your breakpoint in the source, e.g.::
+
+ import remote_pdb; remote_pdb.set_trace()
+
+Restart the relevant service::
+
+ sudo systemctl restart devstack@n-api.service
+
+The remote-pdb code configures the telnet port when ``set_trace()`` is
+invoked. Do whatever it takes to hit the instrumented code path, and
+inspect the logs for a message displaying the listening port::
+
+ Sep 07 16:36:12 p8-100-neo devstack@n-api.service[772]: RemotePdb session open at 127.0.0.1:46771, waiting for connection ...
+
+Telnet to that port to enter the pdb session::
+
+ telnet 127.0.0.1 46771
+
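+If the message with the port has already scrolled by, you can pull it back
+out of the journal for the instrumented unit (the unit name here is only an
+example)::
+
+    journalctl --unit devstack@n-api.service --no-pager | grep RemotePdb
+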
+See the `remote-pdb`_ home page for more options.
+
+.. _`remote-pdb`: https://pypi.python.org/pypi/remote-pdb
+
Known Issues
============
diff --git a/files/debs/general b/files/debs/general
index 1dde03b..8e0018d 100644
--- a/files/debs/general
+++ b/files/debs/general
@@ -29,7 +29,6 @@
python2.7
python-dev
python-gdbm # needed for testr
-screen
tar
tcpdump
unzip
diff --git a/files/rpms-suse/general b/files/rpms-suse/general
index 370f240..0c1a281 100644
--- a/files/rpms-suse/general
+++ b/files/rpms-suse/general
@@ -24,7 +24,6 @@
python-cmd2 # dist:opensuse-12.3
python-devel # pyOpenSSL
python-xml
-screen
systemd-devel # for systemd-python
tar
tcpdump
diff --git a/files/rpms/general b/files/rpms/general
index 2443cc8..f3f8708 100644
--- a/files/rpms/general
+++ b/files/rpms/general
@@ -28,7 +28,6 @@
pyOpenSSL # version in pip uses too much memory
python-devel
redhat-rpm-config # missing dep for gcc hardening flags, see rhbz#1217376
-screen
systemd-devel # for systemd-python
tar
tcpdump
diff --git a/functions b/functions
index 6f2164a..33a0e6a 100644
--- a/functions
+++ b/functions
@@ -45,6 +45,37 @@
# export it so child shells have access to the 'short_source' function also.
export -f short_source
+# Download a file from a URL
+#
+# Will check cache (in $FILES) or download given URL.
+#
+# Argument is the URL to the remote file
+#
+# Will echo the local path to the file as the output. Will die on
+# failure to download.
+#
+# Files can be pre-cached for CI environments, see EXTRA_CACHE_URLS
+# and tools/image_list.sh
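+#
+# A minimal usage sketch (the URL and variable names are illustrative only):
+#
+#   # stackrc / local.conf can advertise the file for CI pre-caching
+#   MY_TOOL_URL=https://example.com/downloads/my-tool-1.0.tar.gz
+#   EXTRA_CACHE_URLS+=",$MY_TOOL_URL"
+#
+#   # later, fetch from $FILES (or download on demand) and unpack
+#   my_tool_file=$(get_extra_file $MY_TOOL_URL)
+#   tar xzvf $my_tool_file -C $DEST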
+function get_extra_file {
+ local file_url=$1
+
+ file_name=$(basename "$file_url")
+ if [[ $file_url != file* ]]; then
+ # If the file isn't cached, download it
+ if [[ ! -f $FILES/$file_name ]]; then
+ wget --progress=dot:giga -c $file_url -O $FILES/$file_name
+ if [[ $? -ne 0 ]]; then
+ die "$file_url could not be downloaded"
+ fi
+ fi
+ echo "$FILES/$file_name"
+ return
+ else
+ # just strip the file:// bit and that's the path to the file
+ echo $file_url | sed 's/^file:\/\///g'
+ fi
+}
+
# Retrieve an image from a URL and upload into Glance.
# Uses the following variables:
@@ -407,6 +438,26 @@
return $rval
}
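+
+# Wait up to <timeout> seconds for the freshly started nova-compute service
+# on this host to appear in 'openstack compute service list'.
+# Returns non-zero if it does not show up in time.
+# wait_for_compute <timeout>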
+function wait_for_compute {
+ local timeout=$1
+ local rval=0
+ time_start "wait_for_service"
+ timeout $timeout bash -x <<EOF || rval=$?
+ ID=""
+ while [[ "\$ID" == "" ]]; do
+ sleep 1
+ ID=\$(openstack --os-cloud devstack-admin --os-region "$REGION_NAME" compute service list --host `hostname` --service nova-compute -c ID -f value)
+ done
+EOF
+ time_stop "wait_for_service"
+ # Figure out what's happening on platforms where this doesn't work
+ if [[ "$rval" != 0 ]]; then
+ echo "Didn't find service registered by hostname after $timeout seconds"
+ openstack --os-cloud devstack-admin --os-region "$REGION_NAME" compute service list
+ fi
+ return $rval
+}
+
# ping check
# Uses globals ``ENABLED_SERVICES``, ``TOP_DIR``, ``MULTI_HOST``, ``PRIVATE_NETWORK``
diff --git a/functions-common b/functions-common
index 52f53ef..120d378 100644
--- a/functions-common
+++ b/functions-common
@@ -519,7 +519,7 @@
if [[ ! -d $git_dest ]]; then
if [[ "$ERROR_ON_CLONE" = "True" ]]; then
echo "The $git_dest project was not found; if this is a gate job, add"
- echo "the project to the \$PROJECTS variable in the job definition."
+ echo "the project to 'required-projects' in the job definition."
die $LINENO "Cloning not allowed in this configuration"
fi
git_timed clone $git_clone_flags $git_remote $git_dest
@@ -1380,62 +1380,6 @@
zypper --non-interactive install --auto-agree-with-licenses "$@"
}
-
-# Process Functions
-# =================
-
-# _run_process() is designed to be backgrounded by run_process() to simulate a
-# fork. It includes the dirty work of closing extra filehandles and preparing log
-# files to produce the same logs as screen_it(). The log filename is derived
-# from the service name.
-# Uses globals ``CURRENT_LOG_TIME``, ``LOGDIR``, ``SCREEN_LOGDIR``, ``SCREEN_NAME``, ``SERVICE_DIR``
-# If an optional group is provided sg will be used to set the group of
-# the command.
-# _run_process service "command-line" [group]
-function _run_process {
- # disable tracing through the exec redirects, it's just confusing in the logs.
- xtrace=$(set +o | grep xtrace)
- set +o xtrace
-
- local service=$1
- local command="$2"
- local group=$3
-
- # Undo logging redirections and close the extra descriptors
- exec 1>&3
- exec 2>&3
- exec 3>&-
- exec 6>&-
-
- local logfile="${service}.log.${CURRENT_LOG_TIME}"
- local real_logfile="${LOGDIR}/${logfile}"
- if [[ -n ${LOGDIR} ]]; then
- exec 1>&"$real_logfile" 2>&1
- bash -c "cd '$LOGDIR' && ln -sf '$logfile' ${service}.log"
- if [[ -n ${SCREEN_LOGDIR} ]]; then
- # Drop the backward-compat symlink
- ln -sf "$real_logfile" ${SCREEN_LOGDIR}/screen-${service}.log
- fi
-
- # TODO(dtroyer): Hack to get stdout from the Python interpreter for the logs.
- export PYTHONUNBUFFERED=1
- fi
-
- # reenable xtrace before we do *real* work
- $xtrace
-
- # Run under ``setsid`` to force the process to become a session and group leader.
- # The pid saved can be used with pkill -g to get the entire process group.
- if [[ -n "$group" ]]; then
- setsid sg $group "$command" & echo $! >$SERVICE_DIR/$SCREEN_NAME/$service.pid
- else
- setsid $command & echo $! >$SERVICE_DIR/$SCREEN_NAME/$service.pid
- fi
-
- # Just silently exit this process
- exit 0
-}
-
function write_user_unit_file {
local service=$1
local command="$2"
@@ -1537,21 +1481,6 @@
$SYSTEMCTL start $systemd_service
}
-# Helper to remove the ``*.failure`` files under ``$SERVICE_DIR/$SCREEN_NAME``.
-# This is used for ``service_check`` when all the ``screen_it`` are called finished
-# Uses globals ``SCREEN_NAME``, ``SERVICE_DIR``
-# init_service_check
-function init_service_check {
- SCREEN_NAME=${SCREEN_NAME:-stack}
- SERVICE_DIR=${SERVICE_DIR:-${DEST}/status}
-
- if [[ ! -d "$SERVICE_DIR/$SCREEN_NAME" ]]; then
- mkdir -p "$SERVICE_DIR/$SCREEN_NAME"
- fi
-
- rm -f "$SERVICE_DIR/$SCREEN_NAME"/*.failure
-}
-
# Find out if a process exists by partial name.
# is_running name
function is_running {
@@ -1578,135 +1507,11 @@
time_start "run_process"
if is_service_enabled $service; then
- if [[ "$USE_SYSTEMD" = "True" ]]; then
- _run_under_systemd "$name" "$command" "$group" "$user"
- elif [[ "$USE_SCREEN" = "True" ]]; then
- if [[ "$user" == "root" ]]; then
- command="sudo $command"
- fi
- screen_process "$name" "$command" "$group"
- else
- # Spawn directly without screen
- if [[ "$user" == "root" ]]; then
- command="sudo $command"
- fi
- _run_process "$name" "$command" "$group" &
- fi
+ _run_under_systemd "$name" "$command" "$group" "$user"
fi
time_stop "run_process"
}
-# Helper to launch a process in a named screen
-# Uses globals ``CURRENT_LOG_TIME``, ```LOGDIR``, ``SCREEN_LOGDIR``, `SCREEN_NAME``,
-# ``SERVICE_DIR``, ``SCREEN_IS_LOGGING``
-# screen_process name "command-line" [group]
-# Run a command in a shell in a screen window, if an optional group
-# is provided, use sg to set the group of the command.
-function screen_process {
- local name=$1
- local command="$2"
- local group=$3
-
- SCREEN_NAME=${SCREEN_NAME:-stack}
- SERVICE_DIR=${SERVICE_DIR:-${DEST}/status}
-
- screen -S $SCREEN_NAME -X screen -t $name
-
- local logfile="${name}.log.${CURRENT_LOG_TIME}"
- local real_logfile="${LOGDIR}/${logfile}"
- echo "LOGDIR: $LOGDIR"
- echo "SCREEN_LOGDIR: $SCREEN_LOGDIR"
- echo "log: $real_logfile"
- if [[ -n ${LOGDIR} ]]; then
- if [[ "$SCREEN_IS_LOGGING" == "True" ]]; then
- screen -S $SCREEN_NAME -p $name -X logfile "$real_logfile"
- screen -S $SCREEN_NAME -p $name -X log on
- fi
- # If logging isn't active then avoid a broken symlink
- touch "$real_logfile"
- bash -c "cd '$LOGDIR' && ln -sf '$logfile' ${name}.log"
- if [[ -n ${SCREEN_LOGDIR} ]]; then
- # Drop the backward-compat symlink
- ln -sf "$real_logfile" ${SCREEN_LOGDIR}/screen-${1}.log
- fi
- fi
-
- # sleep to allow bash to be ready to be send the command - we are
- # creating a new window in screen and then sends characters, so if
- # bash isn't running by the time we send the command, nothing
- # happens. This sleep was added originally to handle gate runs
- # where we needed this to be at least 3 seconds to pass
- # consistently on slow clouds. Now this is configurable so that we
- # can determine a reasonable value for the local case which should
- # be much smaller.
- sleep ${SCREEN_SLEEP:-3}
-
- NL=`echo -ne '\015'`
- # This fun command does the following:
- # - the passed server command is backgrounded
- # - the pid of the background process is saved in the usual place
- # - the server process is brought back to the foreground
- # - if the server process exits prematurely the fg command errors
- # and a message is written to stdout and the process failure file
- #
- # The pid saved can be used in stop_process() as a process group
- # id to kill off all child processes
- if [[ -n "$group" ]]; then
- command="sg $group '$command'"
- fi
-
- # Append the process to the screen rc file
- screen_rc "$name" "$command"
-
- screen -S $SCREEN_NAME -p $name -X stuff "$command & echo \$! >$SERVICE_DIR/$SCREEN_NAME/${name}.pid; fg || echo \"$name failed to start. Exit code: \$?\" | tee \"$SERVICE_DIR/$SCREEN_NAME/${name}.failure\"$NL"
-}
-
-# Screen rc file builder
-# Uses globals ``SCREEN_NAME``, ``SCREENRC``, ``SCREEN_IS_LOGGING``
-# screen_rc service "command-line"
-function screen_rc {
- SCREEN_NAME=${SCREEN_NAME:-stack}
- SCREENRC=$TOP_DIR/$SCREEN_NAME-screenrc
- if [[ ! -e $SCREENRC ]]; then
- # Name the screen session
- echo "sessionname $SCREEN_NAME" > $SCREENRC
- # Set a reasonable statusbar
- echo "hardstatus alwayslastline '$SCREEN_HARDSTATUS'" >> $SCREENRC
- # Some distributions override PROMPT_COMMAND for the screen terminal type - turn that off
- echo "setenv PROMPT_COMMAND /bin/true" >> $SCREENRC
- echo "screen -t shell bash" >> $SCREENRC
- fi
- # If this service doesn't already exist in the screenrc file
- if ! grep $1 $SCREENRC 2>&1 > /dev/null; then
- NL=`echo -ne '\015'`
- echo "screen -t $1 bash" >> $SCREENRC
- echo "stuff \"$2$NL\"" >> $SCREENRC
-
- if [[ -n ${LOGDIR} ]] && [[ "$SCREEN_IS_LOGGING" == "True" ]]; then
- echo "logfile ${LOGDIR}/${1}.log.${CURRENT_LOG_TIME}" >>$SCREENRC
- echo "log on" >>$SCREENRC
- fi
- fi
-}
-
-# Stop a service in screen
-# If a PID is available use it, kill the whole process group via TERM
-# If screen is being used kill the screen window; this will catch processes
-# that did not leave a PID behind
-# Uses globals ``SCREEN_NAME``, ``SERVICE_DIR``
-# screen_stop_service service
-function screen_stop_service {
- local service=$1
-
- SCREEN_NAME=${SCREEN_NAME:-stack}
- SERVICE_DIR=${SERVICE_DIR:-${DEST}/status}
-
- if is_service_enabled $service; then
- # Clean up the screen window
- screen -S $SCREEN_NAME -p $service -X kill || true
- fi
-}
-
# Stop a service process
# If a PID is available use it, kill the whole process group via TERM
# If screen is being used kill the screen window; this will catch processes
@@ -1726,149 +1531,27 @@
$SYSTEMCTL stop devstack@$service.service
$SYSTEMCTL disable devstack@$service.service
fi
-
- if [[ -r $SERVICE_DIR/$SCREEN_NAME/$service.pid ]]; then
- pkill -g $(cat $SERVICE_DIR/$SCREEN_NAME/$service.pid)
- # oslo.service tends to stop actually shutting down
- # reliably in between releases because someone believes it
- # is dying too early due to some inflight work they
- # have. This is a tension. It happens often enough we're
- # going to just account for it in devstack and assume it
- # doesn't work.
- #
- # Set OSLO_SERVICE_WORKS=True to skip this block
- if [[ -z "$OSLO_SERVICE_WORKS" ]]; then
- # TODO(danms): Remove this double-kill when we have
- # this fixed in all services:
- # https://bugs.launchpad.net/oslo-incubator/+bug/1446583
- sleep 1
- # /bin/true because pkill on a non existent process returns an error
- pkill -g $(cat $SERVICE_DIR/$SCREEN_NAME/$service.pid) || /bin/true
- fi
- rm $SERVICE_DIR/$SCREEN_NAME/$service.pid
- fi
- if [[ "$USE_SCREEN" = "True" ]]; then
- # Clean up the screen window
- screen_stop_service $service
- fi
fi
}
-# Helper to get the status of each running service
-# Uses globals ``SCREEN_NAME``, ``SERVICE_DIR``
-# service_check
+# use systemctl to check service status
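+# A manual equivalent for a single unit (name illustrative) is:
+#   systemctl status devstack@n-api.service --no-pager
+# or, for everything devstack started:
+#   systemctl list-units 'devstack@*' --no-pager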
function service_check {
local service
- local failures
- SCREEN_NAME=${SCREEN_NAME:-stack}
- SERVICE_DIR=${SERVICE_DIR:-${DEST}/status}
-
-
- if [[ ! -d "$SERVICE_DIR/$SCREEN_NAME" ]]; then
- echo "No service status directory found"
- return
- fi
-
- # Check if there is any failure flag file under $SERVICE_DIR/$SCREEN_NAME
- # make this -o errexit safe
- failures=`ls "$SERVICE_DIR/$SCREEN_NAME"/*.failure 2>/dev/null || /bin/true`
-
- for service in $failures; do
- service=`basename $service`
- service=${service%.failure}
- echo "Error: Service $service is not running"
- done
-
- if [ -n "$failures" ]; then
- die $LINENO "More details about the above errors can be found with screen"
- fi
-}
-
-# Tail a log file in a screen if USE_SCREEN is true.
-# Uses globals ``USE_SCREEN``
-function tail_log {
- local name=$1
- local logfile=$2
-
- if [[ "$USE_SCREEN" = "True" ]]; then
- screen_process "$name" "sudo tail -f $logfile | sed -u 's/\\\\\\\\x1b/\o033/g'"
- fi
-}
-
-
-# Deprecated Functions
-# --------------------
-
-# _old_run_process() is designed to be backgrounded by old_run_process() to simulate a
-# fork. It includes the dirty work of closing extra filehandles and preparing log
-# files to produce the same logs as screen_it(). The log filename is derived
-# from the service name and global-and-now-misnamed ``SCREEN_LOGDIR``
-# Uses globals ``CURRENT_LOG_TIME``, ``SCREEN_LOGDIR``, ``SCREEN_NAME``, ``SERVICE_DIR``
-# _old_run_process service "command-line"
-function _old_run_process {
- local service=$1
- local command="$2"
-
- # Undo logging redirections and close the extra descriptors
- exec 1>&3
- exec 2>&3
- exec 3>&-
- exec 6>&-
-
- if [[ -n ${SCREEN_LOGDIR} ]]; then
- exec 1>&${SCREEN_LOGDIR}/screen-${1}.log.${CURRENT_LOG_TIME} 2>&1
- ln -sf ${SCREEN_LOGDIR}/screen-${1}.log.${CURRENT_LOG_TIME} ${SCREEN_LOGDIR}/screen-${1}.log
-
- # TODO(dtroyer): Hack to get stdout from the Python interpreter for the logs.
- export PYTHONUNBUFFERED=1
- fi
-
- exec /bin/bash -c "$command"
- die "$service exec failure: $command"
-}
-
-# old_run_process() launches a child process that closes all file descriptors and
-# then exec's the passed in command. This is meant to duplicate the semantics
-# of screen_it() without screen. PIDs are written to
-# ``$SERVICE_DIR/$SCREEN_NAME/$service.pid`` by the spawned child process.
-# old_run_process service "command-line"
-function old_run_process {
- local service=$1
- local command="$2"
-
- # Spawn the child process
- _old_run_process "$service" "$command" &
- echo $!
-}
-
-# Compatibility for existing start_XXXX() functions
-# Uses global ``USE_SCREEN``
-# screen_it service "command-line"
-function screen_it {
- if is_service_enabled $1; then
- # Append the service to the screen rc file
- screen_rc "$1" "$2"
-
- if [[ "$USE_SCREEN" = "True" ]]; then
- screen_process "$1" "$2"
- else
- # Spawn directly without screen
- old_run_process "$1" "$2" >$SERVICE_DIR/$SCREEN_NAME/$1.pid
+ for service in ${ENABLED_SERVICES//,/ }; do
+ # because some things got renamed like key => keystone
+ if $SYSTEMCTL is-enabled devstack@$service.service; then
+ # no-pager is needed because otherwise status dumps to a
+ # pager when in interactive mode, which will stop a manual
+ # devstack run.
+ $SYSTEMCTL status devstack@$service.service --no-pager
fi
- fi
+ done
}
-# Compatibility for existing stop_XXXX() functions
-# Stop a service in screen
-# If a PID is available use it, kill the whole process group via TERM
-# If screen is being used kill the screen window; this will catch processes
-# that did not leave a PID behind
-# screen_stop service
-function screen_stop {
- # Clean up the screen window
- stop_process $1
-}
+function tail_log {
+ deprecated "With the removal of screen support, tail_log is deprecated and will be removed after Queens"
+}
# Plugin Functions
# =================
@@ -2395,13 +2078,31 @@
}
+# Return just the <major>.<minor> for the given python interpreter
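+# e.g. "2.7" for python2 or "3.5" for python3; prints an empty string if
+# the interpreter is not installed.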
+function _get_python_version {
+ local interp=$1
+ local version
+ # disable erroring out here, otherwise if python 3 doesn't exist we fail hard.
+ if [[ -x $(which $interp) ]]; then
+ version=$($interp -c 'import sys; print("%s.%s" % sys.version_info[0:2])')
+ fi
+ echo ${version}
+}
+
# Return the current python as "python<major>.<minor>"
function python_version {
local python_version
- python_version=$(python -c 'import sys; print("%s.%s" % sys.version_info[0:2])')
+ python_version=$(_get_python_version python2)
echo "python${python_version}"
}
+function python3_version {
+ local python3_version
+ python3_version=$(_get_python_version python3)
+ echo "python${python3_version}"
+}
+
+
# Service wrapper to restart services
# restart_service service-name
function restart_service {
diff --git a/lib/cinder b/lib/cinder
index 22c5168..bc0c13f 100644
--- a/lib/cinder
+++ b/lib/cinder
@@ -296,8 +296,7 @@
# Set the service port for a proxy to take the original
if [ "$CINDER_USE_MOD_WSGI" == "True" ]; then
iniset $CINDER_CONF DEFAULT osapi_volume_listen_port $CINDER_SERVICE_PORT_INT
- iniset $CINDER_CONF DEFAULT public_endpoint $CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST
- iniset $CINDER_CONF DEFAULT osapi_volume_base_URL $CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST
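+ # Make oslo.middleware honour the X-Forwarded-Proto header set by the
+ # TLS proxy; the resulting cinder.conf fragment is simply:
+ #   [oslo_middleware]
+ #   enable_proxy_headers_parsing = True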
+ iniset $CINDER_CONF oslo_middleware enable_proxy_headers_parsing True
else
iniset $CINDER_CONF DEFAULT osapi_volume_listen_port $CINDER_SERVICE_PORT_INT
iniset $CINDER_CONF DEFAULT public_endpoint $CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT
diff --git a/lib/etcd3 b/lib/etcd3
index 6e32cb3..60e827a 100644
--- a/lib/etcd3
+++ b/lib/etcd3
@@ -24,15 +24,9 @@
# --------
# Set up default values for etcd
-ETCD_DOWNLOAD_URL=${ETCD_DOWNLOAD_URL:-https://github.com/coreos/etcd/releases/download}
-ETCD_VERSION=${ETCD_VERSION:-v3.1.7}
ETCD_DATA_DIR="$DATA_DIR/etcd"
ETCD_SYSTEMD_SERVICE="devstack@etcd.service"
ETCD_BIN_DIR="$DEST/bin"
-ETCD_SHA256_AMD64="4fde194bbcd259401e2b5c462dfa579ee7f6af539f13f130b8f5b4f52e3b3c52"
-# NOTE(sdague): etcd v3.1.7 doesn't have anything for these architectures, though 3.2.0 does.
-ETCD_SHA256_ARM64=""
-ETCD_SHA256_PPC64=""
ETCD_PORT=2379
if is_ubuntu ; then
@@ -95,37 +89,19 @@
function install_etcd3 {
echo "Installing etcd"
- # Make sure etcd3 downloads the correct architecture
- if is_arch "x86_64"; then
- ETCD_ARCH="amd64"
- ETCD_SHA256=${ETCD_SHA256:-$ETCD_SHA256_AMD64}
- elif is_arch "aarch64"; then
- ETCD_ARCH="arm64"
- ETCD_SHA256=${ETCD_SHA256:-$ETCD_SHA256_ARM64}
- elif is_arch "ppc64le"; then
- ETCD_ARCH="ppc64le"
- ETCD_SHA256=${ETCD_SHA256:-$ETCD_SHA256_PPC64}
- else
- exit_distro_not_supported "invalid hardware type - $ETCD_ARCH"
- fi
-
- ETCD_NAME=etcd-$ETCD_VERSION-linux-$ETCD_ARCH
-
# Create the necessary directories
sudo mkdir -p $ETCD_BIN_DIR
sudo mkdir -p $ETCD_DATA_DIR
# Download and cache the etcd tgz for subsequent use
+ local etcd_file
+ etcd_file="$(get_extra_file $ETCD_DOWNLOAD_LOCATION)"
if [ ! -f "$FILES/etcd-$ETCD_VERSION-linux-$ETCD_ARCH/etcd" ]; then
- ETCD_DOWNLOAD_FILE=$ETCD_NAME.tar.gz
- if [ ! -f "$FILES/$ETCD_DOWNLOAD_FILE" ]; then
- wget $ETCD_DOWNLOAD_URL/$ETCD_VERSION/$ETCD_DOWNLOAD_FILE -O $FILES/$ETCD_DOWNLOAD_FILE
- fi
- echo "${ETCD_SHA256} $FILES/${ETCD_DOWNLOAD_FILE}" > $FILES/etcd.sha256sum
+ echo "${ETCD_SHA256} $etcd_file" > $FILES/etcd.sha256sum
# NOTE(sdague): this should go fatal if this fails
sha256sum -c $FILES/etcd.sha256sum
- tar xzvf $FILES/$ETCD_DOWNLOAD_FILE -C $FILES
+ tar xzvf $etcd_file -C $FILES
sudo cp $FILES/$ETCD_NAME/etcd $ETCD_BIN_DIR/etcd
fi
if [ ! -f "$ETCD_BIN_DIR/etcd" ]; then
diff --git a/lib/keystone b/lib/keystone
index c8ddbae..c38d953 100644
--- a/lib/keystone
+++ b/lib/keystone
@@ -625,12 +625,6 @@
iniset $KEYSTONE_LDAP_DOMAIN_FILE identity driver "ldap"
# LDAP settings for Users domain
- iniset $KEYSTONE_LDAP_DOMAIN_FILE ldap group_allow_delete "False"
- iniset $KEYSTONE_LDAP_DOMAIN_FILE ldap group_allow_update "False"
- iniset $KEYSTONE_LDAP_DOMAIN_FILE ldap group_allow_create "False"
- iniset $KEYSTONE_LDAP_DOMAIN_FILE ldap user_allow_delete "False"
- iniset $KEYSTONE_LDAP_DOMAIN_FILE ldap user_allow_update "False"
- iniset $KEYSTONE_LDAP_DOMAIN_FILE ldap user_allow_create "False"
iniset $KEYSTONE_LDAP_DOMAIN_FILE ldap user_tree_dn "ou=Users,$LDAP_BASE_DN"
iniset $KEYSTONE_LDAP_DOMAIN_FILE ldap user_objectclass "inetOrgPerson"
iniset $KEYSTONE_LDAP_DOMAIN_FILE ldap user_name_attribute "cn"
diff --git a/lib/libraries b/lib/libraries
index 4ceb804..6d52f64 100644
--- a/lib/libraries
+++ b/lib/libraries
@@ -30,6 +30,7 @@
GITDIR["futurist"]=$DEST/futurist
GITDIR["os-client-config"]=$DEST/os-client-config
GITDIR["osc-lib"]=$DEST/osc-lib
+GITDIR["osc-placement"]=$DEST/osc-placement
GITDIR["oslo.cache"]=$DEST/oslo.cache
GITDIR["oslo.concurrency"]=$DEST/oslo.concurrency
GITDIR["oslo.config"]=$DEST/oslo.config
@@ -91,6 +92,7 @@
_install_lib_from_source "debtcollector"
_install_lib_from_source "futurist"
_install_lib_from_source "osc-lib"
+ _install_lib_from_source "osc-placement"
_install_lib_from_source "os-client-config"
_install_lib_from_source "oslo.cache"
_install_lib_from_source "oslo.concurrency"
diff --git a/lib/neutron b/lib/neutron
index fdcf0d5..2ffabd4 100644
--- a/lib/neutron
+++ b/lib/neutron
@@ -242,6 +242,7 @@
if is_service_enabled tls-proxy; then
# Set the service port for a proxy to take the original
iniset $NEUTRON_CONF DEFAULT bind_port "$NEUTRON_SERVICE_PORT_INT"
+ iniset $NEUTRON_CONF oslo_middleware enable_proxy_headers_parsing True
fi
# Metering
diff --git a/lib/neutron-legacy b/lib/neutron-legacy
index 784f3a8..f9e0bd6 100644
--- a/lib/neutron-legacy
+++ b/lib/neutron-legacy
@@ -718,6 +718,7 @@
if is_service_enabled tls-proxy; then
# Set the service port for a proxy to take the original
iniset $NEUTRON_CONF DEFAULT bind_port "$Q_PORT_INT"
+ iniset $NEUTRON_CONF oslo_middleware enable_proxy_headers_parsing True
fi
_neutron_setup_rootwrap
diff --git a/lib/nova b/lib/nova
index 3bb313b..c641499 100644
--- a/lib/nova
+++ b/lib/nova
@@ -555,6 +555,7 @@
if is_service_enabled tls-proxy; then
iniset $NOVA_CONF DEFAULT glance_protocol https
+ iniset $NOVA_CONF oslo_middleware enable_proxy_headers_parsing True
fi
if is_service_enabled n-sproxy; then
@@ -573,10 +574,6 @@
if [[ -n ${LOGDIR} ]]; then
bash -c "cd '$LOGDIR' && ln -sf '$logfile' ${service}.log"
iniset "$NOVA_CONF_DIR/nova-dhcpbridge.conf" DEFAULT log_file "$real_logfile"
- if [[ -n ${SCREEN_LOGDIR} ]]; then
- # Drop the backward-compat symlink
- ln -sf "$real_logfile" ${SCREEN_LOGDIR}/screen-${service}.log
- fi
fi
iniset $NOVA_CONF DEFAULT dhcpbridge_flagfile "$NOVA_CONF_DIR/nova-dhcpbridge.conf"
@@ -955,6 +952,28 @@
done
}
+function is_nova_ready {
+ # NOTE(sdague): with cells v2 all the compute services must be up
+ # and checked into the database before discover_hosts is run. This
+ # happens in all-in-one installs by accident, because > 30 seconds
+ # happen between here and the script ending. However, in multinode
+ # tests this can very often not be the case. So ensure that the
+ # compute is up before we move on.
+ if is_service_enabled n-cell; then
+ # cells v1 can't complete the check below because it munges
+ # hostnames with cell information (grumble grumble).
+ return
+ fi
+ # TODO(sdague): honestly, this probably should be a plug point for
+ # an external system.
+ if [[ "$VIRT_DRIVER" == 'xenserver' ]]; then
+ # xenserver encodes information in the hostname of the compute
+ # because of the dom0/domU split. Just ignore for now.
+ return
+ fi
+ wait_for_compute 60
+}
+
function start_nova {
# this catches the cells v1 case early
_set_singleconductor
diff --git a/lib/placement b/lib/placement
index aef9b74..d3fb8c8 100644
--- a/lib/placement
+++ b/lib/placement
@@ -159,6 +159,9 @@
# install_placement() - Collect source and prepare
function install_placement {
install_apache_wsgi
+ # Install the openstackclient placement client plugin for CLI
+ # TODO(mriedem): Use pip_install_gr once osc-placement is in g-r.
+ pip_install osc-placement
}
# start_placement_api() - Start the API processes ahead of other things
diff --git a/lib/swift b/lib/swift
index 3b87610..5277cde 100644
--- a/lib/swift
+++ b/lib/swift
@@ -464,6 +464,9 @@
iniuncomment ${SWIFT_CONFIG_PROXY_SERVER} filter:tempauth account_autocreate
iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:tempauth reseller_prefix "TEMPAUTH"
+ # Allow both reseller prefixes to be used with domain_remap
+ iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:domain_remap reseller_prefixes "AUTH, TEMPAUTH"
+
if is_service_enabled swift3; then
cat <<EOF >>${SWIFT_CONFIG_PROXY_SERVER}
[filter:s3token]
diff --git a/lib/tls b/lib/tls
index b7ad644..0baf86c 100644
--- a/lib/tls
+++ b/lib/tls
@@ -527,6 +527,7 @@
# for swift functional testing to work with tls enabled. It is 2 bytes
# larger than the apache default of 8190.
LimitRequestFieldSize $f_header_size
+ RequestHeader set X-Forwarded-Proto "https"
<Location />
ProxyPass http://$b_host:$b_port/ retry=0 nocanon
@@ -541,7 +542,7 @@
if is_suse ; then
sudo a2enflag SSL
fi
- for mod in ssl proxy proxy_http; do
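+ # RequestHeader (used in the vhost template above) comes from mod_headers,
+ # hence enabling it here alongside the proxy modules.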
+ for mod in headers ssl proxy proxy_http; do
enable_apache_mod $mod
done
enable_apache_site $b_service
diff --git a/stack.sh b/stack.sh
index 301e1e7..2bd9da9 100755
--- a/stack.sh
+++ b/stack.sh
@@ -228,16 +228,6 @@
fi
fi
-# Check to see if we are already running DevStack
-# Note that this may fail if USE_SCREEN=False
-if type -p screen > /dev/null && screen -ls | egrep -q "[0-9]\.$SCREEN_NAME"; then
- echo "You are already running a stack.sh session."
- echo "To rejoin this session type 'screen -x stack'."
- echo "To destroy this session, type './unstack.sh'."
- exit 1
-fi
-
-
# Local Settings
# --------------
@@ -491,24 +481,6 @@
exec 6> >( $TOP_DIR/tools/outfilter.py -v >&3 )
fi
-# Set up logging of screen windows
-# Set ``SCREEN_LOGDIR`` to turn on logging of screen windows to the
-# directory specified in ``SCREEN_LOGDIR``, we will log to the file
-# ``screen-$SERVICE_NAME-$TIMESTAMP.log`` in that dir and have a link
-# ``screen-$SERVICE_NAME.log`` to the latest log file.
-# Logs are kept for as long specified in ``LOGDAYS``.
-# This is deprecated....logs go in ``LOGDIR``, only symlinks will be here now.
-if [[ -n "$SCREEN_LOGDIR" ]]; then
-
- # We make sure the directory is created.
- if [[ -d "$SCREEN_LOGDIR" ]]; then
- # We cleanup the old logs
- find $SCREEN_LOGDIR -maxdepth 1 -name screen-\*.log -mtime +$LOGDAYS -exec rm {} \;
- else
- mkdir -p $SCREEN_LOGDIR
- fi
-fi
-
# Basic test for ``$DEST`` path permissions (fatal on error unless skipped)
check_path_perm_sanity ${DEST}
@@ -1015,38 +987,6 @@
configure_database
fi
-
-# Configure screen
-# ----------------
-
-USE_SCREEN=$(trueorfalse True USE_SCREEN)
-if [[ "$USE_SCREEN" == "True" ]]; then
- # Create a new named screen to run processes in
- screen -d -m -S $SCREEN_NAME -t shell -s /bin/bash
- sleep 1
-
- # Set a reasonable status bar
- SCREEN_HARDSTATUS=${SCREEN_HARDSTATUS:-}
- if [ -z "$SCREEN_HARDSTATUS" ]; then
- SCREEN_HARDSTATUS='%{= .} %-Lw%{= .}%> %n%f %t*%{= .}%+Lw%< %-=%{g}(%{d}%H/%l%{g})'
- fi
- screen -r $SCREEN_NAME -X hardstatus alwayslastline "$SCREEN_HARDSTATUS"
- screen -r $SCREEN_NAME -X setenv PROMPT_COMMAND /bin/true
-
- if is_service_enabled tls-proxy; then
- follow_tls_proxy
- fi
-fi
-
-# Clear ``screenrc`` file
-SCREENRC=$TOP_DIR/$SCREEN_NAME-screenrc
-if [[ -e $SCREENRC ]]; then
- rm -f $SCREENRC
-fi
-
-# Initialize the directory for service status check
-init_service_check
-
# Save configuration values
save_stackenv $LINENO
@@ -1431,6 +1371,13 @@
# Sanity checks
# =============
+# Check that computes are all ready
+#
+# TODO(sdague): there should be some generic phase here.
+if is_service_enabled n-cpu; then
+ is_nova_ready
+fi
+
# Check the status of running services
service_check
diff --git a/stackrc b/stackrc
index e936b33..0ffcb67 100644
--- a/stackrc
+++ b/stackrc
@@ -88,22 +88,9 @@
# Set the root URL for Horizon
HORIZON_APACHE_ROOT="/dashboard"
-# TODO(sdague): Queens
-#
-# All the non systemd paths should be removed in queens, they only
-# exist in Pike to support testing from grenade. Ensure that all this
-# is cleaned up and purged, which should dramatically simplify the
-# devstack codebase.
-
-# Whether to use 'dev mode' for screen windows. Dev mode works by
-# stuffing text into the screen windows so that a developer can use
-# ctrl-c, up-arrow, enter to restart the service. Starting services
-# this way is slightly unreliable, and a bit slower, so this can
-# be disabled for automated testing by setting this value to False.
-USE_SCREEN=$(trueorfalse False USE_SCREEN)
-
-# Whether to use SYSTEMD to manage services
-USE_SYSTEMD=$(trueorfalse False USE_SYSTEMD)
+# Whether to use SYSTEMD to manage services, we only do this from
+# Queens forward.
+USE_SYSTEMD="True"
USER_UNITS=$(trueorfalse False USER_UNITS)
if [[ "$USER_UNITS" == "True" ]]; then
SYSTEMD_DIR="$HOME/.local/share/systemd/user"
@@ -122,16 +109,6 @@
# memory constrained than CPU bound.
ENABLE_KSM=$(trueorfalse True ENABLE_KSM)
-# When using screen, should we keep a log file on disk? You might
-# want this False if you have a long-running setup where verbose logs
-# can fill-up the host.
-# XXX: Ideally screen itself would be configured to log but just not
-# activate. This isn't possible with the screerc syntax. Temporary
-# logging can still be used by a developer with:
-# C-a : logfile foo
-# C-a : log on
-SCREEN_IS_LOGGING=$(trueorfalse True SCREEN_IS_LOGGING)
-
# Passwords generated by interactive devstack runs
if [[ -r $RC_DIR/.localrc.password ]]; then
source $RC_DIR/.localrc.password
@@ -153,10 +130,12 @@
# When Python 3 is supported by an application, adding the specific
# version of Python 3 to this variable will install the app using that
# version of the interpreter instead of 2.7.
-export PYTHON3_VERSION=${PYTHON3_VERSION:-3.5}
+_DEFAULT_PYTHON3_VERSION="$(_get_python_version python3)"
+export PYTHON3_VERSION=${PYTHON3_VERSION:-${_DEFAULT_PYTHON3_VERSION:-3.5}}
# Just to be more explicit on the Python 2 version to use.
-export PYTHON2_VERSION=${PYTHON2_VERSION:-2.7}
+_DEFAULT_PYTHON2_VERSION="$(_get_python_version python2)"
+export PYTHON2_VERSION=${PYTHON2_VERSION:-${_DEFAULT_PYTHON2_VERSION:-2.7}}
# allow local overrides of env variables, including repo config
if [[ -f $RC_DIR/localrc ]]; then
@@ -167,16 +146,6 @@
source $RC_DIR/.localrc.auto
fi
-# TODO(sdague): Delete all this in Queens.
-if [[ "$USE_SYSTEMD" == "True" ]]; then
- USE_SCREEN=False
-fi
-# if we are forcing off USE_SCREEN (as we do in the gate), force on
-# systemd. This allows us to drop one of 3 paths through the code.
-if [[ "$USE_SCREEN" == "False" ]]; then
- USE_SYSTEMD="True"
-fi
-
# Default for log coloring is based on interactive-or-not.
# Baseline assumption is that non-interactive invocations are for CI,
# where logs are to be presented as browsable text files; hence color
@@ -388,6 +357,10 @@
# this doesn't exist in a lib file, so set it here
GITDIR["python-openstackclient"]=$DEST/python-openstackclient
+# placement-api CLI
+GITREPO["osc-placement"]=${OSC_PLACEMENT_REPO:-${GIT_BASE}/openstack/osc-placement.git}
+GITBRANCH["osc-placement"]=${OSC_PLACEMENT_BRANCH:-master}
+
###################
#
@@ -732,6 +705,51 @@
DOWNLOAD_DEFAULT_IMAGES=False
fi
+# This is a comma separated list of extra URLS to be listed for
+# download by the tools/image_list.sh script. CI environments can
+# pre-download these URLS and place them in $FILES. Later scripts can
+# then use "get_extra_file <url>" which will print out the path to the
+# file; it will either be downloaded on demand or acquired from the
+# cache if there.
+EXTRA_CACHE_URLS=""
+
+# etcd3 defaults
+ETCD_VERSION=${ETCD_VERSION:-v3.1.7}
+ETCD_SHA256_AMD64="4fde194bbcd259401e2b5c462dfa579ee7f6af539f13f130b8f5b4f52e3b3c52"
+# NOTE(sdague): etcd v3.1.7 doesn't have anything for these architectures, though 3.2.0 does.
+ETCD_SHA256_ARM64=""
+ETCD_SHA256_PPC64=""
+ETCD_SHA256_S390X=""
+# Make sure etcd3 downloads the correct architecture
+if is_arch "x86_64"; then
+ ETCD_ARCH="amd64"
+ ETCD_SHA256=${ETCD_SHA256:-$ETCD_SHA256_AMD64}
+elif is_arch "aarch64"; then
+ ETCD_ARCH="arm64"
+ ETCD_SHA256=${ETCD_SHA256:-$ETCD_SHA256_ARM64}
+elif is_arch "ppc64le"; then
+ ETCD_ARCH="ppc64le"
+ ETCD_SHA256=${ETCD_SHA256:-$ETCD_SHA256_PPC64}
+elif is_arch "s390x"; then
+ # An etcd3 binary for s390x is not available on github like it is
+ # for other arches. Only continue if a custom download URL was
+ # provided.
+ if [[ -n "${ETCD_DOWNLOAD_URL}" ]]; then
+ ETCD_ARCH="s390x"
+ ETCD_SHA256=${ETCD_SHA256:-$ETCD_SHA256_S390X}
+ else
+ exit_distro_not_supported "etcd3. No custom ETCD_DOWNLOAD_URL provided."
+ fi
+else
+ exit_distro_not_supported "invalid hardware type - $(uname -m)"
+fi
+ETCD_DOWNLOAD_URL=${ETCD_DOWNLOAD_URL:-https://github.com/coreos/etcd/releases/download}
+ETCD_NAME=etcd-$ETCD_VERSION-linux-$ETCD_ARCH
+ETCD_DOWNLOAD_FILE=$ETCD_NAME.tar.gz
+ETCD_DOWNLOAD_LOCATION=$ETCD_DOWNLOAD_URL/$ETCD_VERSION/$ETCD_DOWNLOAD_FILE
+# etcd is always required, so place it into list of pre-cached downloads
+EXTRA_CACHE_URLS+=",$ETCD_DOWNLOAD_LOCATION"
+
# Detect duplicate values in IMAGE_URLS
for image_url in ${IMAGE_URLS//,/ }; do
if [ $(echo "$IMAGE_URLS" | grep -o -F "$image_url" | wc -l) -gt 1 ]; then
@@ -755,9 +773,6 @@
PUBLIC_INTERFACE=${PUBLIC_INTERFACE:-""}
-# Set default screen name
-SCREEN_NAME=${SCREEN_NAME:-stack}
-
# Allow the use of an alternate protocol (such as https) for service endpoints
SERVICE_PROTOCOL=${SERVICE_PROTOCOL:-http}
@@ -880,15 +895,6 @@
# Following entries need to be last items in file
-# Compatibility bits required by other callers like Grenade
-
-# Old way was using SCREEN_LOGDIR to locate those logs and LOGFILE for the stack.sh trace log.
-# LOGFILE SCREEN_LOGDIR output
-# not set not set no log files
-# set not set stack.sh log to LOGFILE
-# not set set screen logs to SCREEN_LOGDIR
-# set set stack.sh log to LOGFILE, screen logs to SCREEN_LOGDIR
-
# New way is LOGDIR for all logs and LOGFILE for stack.sh trace log, but if not fully-qualified will be in LOGDIR
# LOGFILE LOGDIR output
# not set not set (new) set LOGDIR from default
@@ -896,9 +902,6 @@
# not set set screen logs to LOGDIR
# set set stack.sh log to LOGFILE, screen logs to LOGDIR
-# For compat, if SCREEN_LOGDIR is set, it will be used to create back-compat symlinks to the LOGDIR
-# symlinks to SCREEN_LOGDIR (compat)
-
# Set up new logging defaults
if [[ -z "${LOGDIR:-}" ]]; then
default_logdir=$DEST/logs
@@ -913,12 +916,6 @@
# LOGFILE had no path, set a default
LOGDIR="$default_logdir"
fi
-
- # Check for duplication
- if [[ "${SCREEN_LOGDIR:-}" == "${LOGDIR}" ]]; then
- # We don't need the symlinks since it's the same directory
- unset SCREEN_LOGDIR
- fi
fi
unset default_logdir logfile
fi
diff --git a/tests/run-process.sh b/tests/run-process.sh
deleted file mode 100755
index 301b9a0..0000000
--- a/tests/run-process.sh
+++ /dev/null
@@ -1,109 +0,0 @@
-#!/bin/bash
-# tests/exec.sh - Test DevStack run_process() and stop_process()
-#
-# exec.sh start|stop|status
-#
-# Set USE_SCREEN True|False to change use of screen.
-#
-# This script emulates the basic exec environment in ``stack.sh`` to test
-# the process spawn and kill operations.
-
-if [[ -z $1 ]]; then
- echo "$0 start|stop"
- exit 1
-fi
-
-TOP_DIR=$(cd $(dirname "$0")/.. && pwd)
-source $TOP_DIR/functions
-
-USE_SCREEN=${USE_SCREEN:-False}
-
-ENABLED_SERVICES=fake-service
-
-SERVICE_DIR=/tmp
-SCREEN_NAME=test
-SCREEN_LOGDIR=${SERVICE_DIR}/${SCREEN_NAME}
-
-
-# Kill background processes on exit
-trap clean EXIT
-clean() {
- local r=$?
- jobs -p
- kill >/dev/null 2>&1 $(jobs -p)
- exit $r
-}
-
-
-# Exit on any errors so that errors don't compound
-trap failed ERR
-failed() {
- local r=$?
- jobs -p
- kill >/dev/null 2>&1 $(jobs -p)
- set +o xtrace
- [ -n "$LOGFILE" ] && echo "${0##*/} failed: full log in $LOGFILE"
- exit $r
-}
-
-function status {
- if [[ -r $SERVICE_DIR/$SCREEN_NAME/fake-service.pid ]]; then
- pstree -pg $(cat $SERVICE_DIR/$SCREEN_NAME/fake-service.pid)
- fi
- ps -ef | grep fake
-}
-
-function setup_screen {
-if [[ ! -d $SERVICE_DIR/$SCREEN_NAME ]]; then
- rm -rf $SERVICE_DIR/$SCREEN_NAME
- mkdir -p $SERVICE_DIR/$SCREEN_NAME
-fi
-
-if [[ "$USE_SCREEN" == "True" ]]; then
- # Create a new named screen to run processes in
- screen -d -m -S $SCREEN_NAME -t shell -s /bin/bash
- sleep 1
-
- # Set a reasonable status bar
- if [ -z "$SCREEN_HARDSTATUS" ]; then
- SCREEN_HARDSTATUS='%{= .} %-Lw%{= .}%> %n%f %t*%{= .}%+Lw%< %-=%{g}(%{d}%H/%l%{g})'
- fi
- screen -r $SCREEN_NAME -X hardstatus alwayslastline "$SCREEN_HARDSTATUS"
-fi
-
-# Clear screen rc file
-SCREENRC=$TOP_DIR/tests/$SCREEN_NAME-screenrc
-if [[ -e $SCREENRC ]]; then
- echo -n > $SCREENRC
-fi
-}
-
-# Mimic logging
- # Set up output redirection without log files
- # Copy stdout to fd 3
- exec 3>&1
- if [[ "$VERBOSE" != "True" ]]; then
- # Throw away stdout and stderr
- #exec 1>/dev/null 2>&1
- :
- fi
- # Always send summary fd to original stdout
- exec 6>&3
-
-
-if [[ "$1" == "start" ]]; then
- echo "Start service"
- setup_screen
- run_process fake-service "$TOP_DIR/tests/fake-service.sh"
- sleep 1
- status
-elif [[ "$1" == "stop" ]]; then
- echo "Stop service"
- stop_process fake-service
- status
-elif [[ "$1" == "status" ]]; then
- status
-else
- echo "Unknown command"
- exit 1
-fi
diff --git a/tests/test_libs_from_pypi.sh b/tests/test_libs_from_pypi.sh
index 5b4ff32..0bd8d49 100755
--- a/tests/test_libs_from_pypi.sh
+++ b/tests/test_libs_from_pypi.sh
@@ -36,7 +36,8 @@
ALL_LIBS+=" python-cinderclient glance_store oslo.concurrency oslo.db"
ALL_LIBS+=" oslo.versionedobjects oslo.vmware keystonemiddleware"
ALL_LIBS+=" oslo.serialization django_openstack_auth"
-ALL_LIBS+=" python-openstackclient osc-lib os-client-config oslo.rootwrap"
+ALL_LIBS+=" python-openstackclient osc-lib osc-placement"
+ALL_LIBS+=" os-client-config oslo.rootwrap"
ALL_LIBS+=" oslo.i18n oslo.utils python-openstacksdk python-swiftclient"
ALL_LIBS+=" python-neutronclient tooz ceilometermiddleware oslo.policy"
ALL_LIBS+=" debtcollector os-brick os-traits automaton futurist oslo.service"
diff --git a/tools/image_list.sh b/tools/image_list.sh
index 29b93ed..3a27c4a 100755
--- a/tools/image_list.sh
+++ b/tools/image_list.sh
@@ -1,5 +1,14 @@
#!/bin/bash
+# Print out a list of image files and other files to download for caching.
+# This is mostly used by the OpenStack infrastructure during daily
+# image builds to save the large images to /opt/cache/files (see [1])
+#
+# The two lists of URLs downloaded are IMAGE_URLS and
+# EXTRA_CACHE_URLS, which are set up in stackrc
+#
+# [1] project-config:nodepool/elements/cache-devstack/extra-data.d/55-cache-devstack-repos
+
# Keep track of the DevStack directory
TOP_DIR=$(cd $(dirname "$0")/.. && pwd)
@@ -31,12 +40,20 @@
ALL_IMAGES+=$URLS
done
-# Make a nice list
-echo $ALL_IMAGES | tr ',' '\n' | sort | uniq
-
# Sanity check - ensure we have a minimum number of images
num=$(echo $ALL_IMAGES | tr ',' '\n' | sort | uniq | wc -l)
if [[ "$num" -lt 4 ]]; then
echo "ERROR: We only found $num images in $ALL_IMAGES, which can't be right."
exit 1
fi
+
+# This is extra non-image files that we want pre-cached. This is kept
+# in a separate list because devstack loops over the IMAGE_LIST to
+# upload files glance and these aren't images. (This was a bit of an
+# after-thought which is why the naming around this is very
+# image-centric)
+URLS=$(source $TOP_DIR/stackrc && echo $EXTRA_CACHE_URLS)
+ALL_IMAGES+=$URLS
+
+# Make a nice combined list
+echo $ALL_IMAGES | tr ',' '\n' | sort | uniq
diff --git a/tools/mlock_report.py b/tools/mlock_report.py
index 2169cc2..07716b0 100755
--- a/tools/mlock_report.py
+++ b/tools/mlock_report.py
@@ -3,12 +3,12 @@
# This tool lists processes that lock memory pages from swapping to disk.
import re
-import subprocess
import psutil
-SUMMARY_REGEX = re.compile(b".*\s+(?P<locked>[\d]+)\s+KB")
+LCK_SUMMARY_REGEX = re.compile(
+ "^VmLck:\s+(?P<locked>[\d]+)\s+kB", re.MULTILINE)
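+# matches the VmLck line in /proc/<pid>/status, e.g. "VmLck:       4 kB"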
def main():
@@ -22,28 +22,21 @@
def _get_report():
mlock_users = []
for proc in psutil.process_iter():
- pid = proc.pid
# sadly psutil does not expose locked pages info, that's why we
- # call to pmap and parse the output here
+ # iterate over the /proc/%pid/status files manually
try:
- out = subprocess.check_output(['pmap', '-XX', str(pid)])
- except subprocess.CalledProcessError as e:
- # 42 means process just vanished, which is ok
- if e.returncode == 42:
- continue
- raise
- last_line = out.splitlines()[-1]
-
- # some processes don't provide a memory map, for example those
- # running as kernel services, so we need to skip those that don't
- # match
- result = SUMMARY_REGEX.match(last_line)
- if result:
- locked = int(result.group('locked'))
- if locked:
- mlock_users.append({'name': proc.name(),
- 'pid': pid,
- 'locked': locked})
+ s = open("%s/%d/status" % (psutil.PROCFS_PATH, proc.pid), 'r')
+ except EnvironmentError:
+ continue
+ with s:
+ for line in s:
+ result = LCK_SUMMARY_REGEX.search(line)
+ if result:
+ locked = int(result.group('locked'))
+ if locked:
+ mlock_users.append({'name': proc.name(),
+ 'pid': proc.pid,
+ 'locked': locked})
# produce a single line log message with per process mlock stats
if mlock_users:
diff --git a/unstack.sh b/unstack.sh
index 77a151f..5d3672e 100755
--- a/unstack.sh
+++ b/unstack.sh
@@ -171,15 +171,6 @@
stop_dstat
fi
-# Clean up the remainder of the screen processes
-SCREEN=$(which screen)
-if [[ -n "$SCREEN" ]]; then
- SESSION=$(screen -ls | awk "/[0-9]+.${SCREEN_NAME}/"'{ print $1 }')
- if [[ -n "$SESSION" ]]; then
- screen -X -S $SESSION quit
- fi
-fi
-
# NOTE: Cinder automatically installs the lvm2 package, independently of the
# enabled backends. So if Cinder is enabled, and installed successfully we are
# sure lvm2 (lvremove, /etc/lvm/lvm.conf, etc.) is here.