Merge "Use mod_version to clean-up apache version matching"
diff --git a/MAINTAINERS.rst b/MAINTAINERS.rst
index 1e915c7..d754c08 100644
--- a/MAINTAINERS.rst
+++ b/MAINTAINERS.rst
@@ -50,6 +50,18 @@
* Kyle Mestery <kmestery@cisco.com>
+OpenFlow Agent (ofagent)
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+* YAMAMOTO Takashi <yamamoto@valinux.co.jp>
+* Fumihiko Kakuma <kakuma@valinux.co.jp>
+
+Ryu
+~~~
+
+* YAMAMOTO Takashi <yamamoto@valinux.co.jp>
+* Fumihiko Kakuma <kakuma@valinux.co.jp>
+
Sahara
~~~~~~
diff --git a/docs/source/faq.html b/docs/source/faq.html
index bfac1dc..2c74a66 100644
--- a/docs/source/faq.html
+++ b/docs/source/faq.html
@@ -73,7 +73,7 @@
<dd>A: DevStack is optimized for documentation & developers. As some of us use <a href="https://github.com/dellcloudedge/crowbar">Crowbar</a> for production deployments, we hope developers documenting how they setup systems for new features supports projects like Crowbar.</dd>
<dt>Q: I'd like to help!</dt>
- <dd>A: That isn't a question, but please do! The source for DevStack is <a href="http://github.com/openstack-dev/devstack">github</a> and bug reports go to <a href="http://bugs.launchpad.net/devstack/">LaunchPad</a>. Contributions follow the usual process as described in the <a href="http://wiki.openstack.org/HowToContribute">OpenStack wiki</a>. DevStack is not a core project but a gating project and therefore an official OpenStack project. This site is housed in the CloudBuilder's <a href="http://github.com/cloudbuilders/devstack">github</a> in the gh-pages branch.</dd>
+            <dd>A: That isn't a question, but please do! The source for DevStack is on <a href="http://github.com/openstack-dev/devstack">github</a> and bug reports go to <a href="http://bugs.launchpad.net/devstack/">LaunchPad</a>. Contributions follow the usual process as described in the <a href="http://wiki.openstack.org/HowToContribute">OpenStack wiki</a>, even though DevStack is not an official OpenStack project. This site is housed in the CloudBuilders' <a href="http://github.com/cloudbuilders/devstack">github</a> in the gh-pages branch.</dd>
<dt>Q: Why not use packages?</dt>
<dd>A: Unlike packages, DevStack leaves your cloud ready to develop - checkouts of the code and services running in screen. However, many people are doing the hard work of packaging and recipes for production deployments. We hope this script serves as a way to communicate configuration changes between developers and packagers.</dd>
@@ -85,7 +85,7 @@
<dd>A: Fedora and CentOS/RHEL are supported via rpm dependency files and specific checks in <code>stack.sh</code>. Support will follow the pattern set with the Ubuntu testing, i.e. only a single release of the distro will receive regular testing, others will be handled on a best-effort basis.</dd>
<dt>Q: Are there any differences between Ubuntu and Fedora support?</dt>
- <dd>A: LXC support is not complete on Fedora; Neutron is not fully supported prior to Fedora 18 due lack of OpenVSwitch packages.</dd>
+            <dd>A: Neutron is not fully supported prior to Fedora 18 due to the lack of OpenVSwitch packages.</dd>
<dt>Q: How about RHEL 6?</dt>
<dd>A: RHEL 6 has Python 2.6 and many old modules packaged and is a challenge to support. There are a number of specific RHEL6 work-arounds in <code>stack.sh</code> to handle this. But the testing on py26 is valuable so we do it...</dd>
diff --git a/files/apache-keystone.template b/files/apache-keystone.template
index b4bdb16..0a286b9 100644
--- a/files/apache-keystone.template
+++ b/files/apache-keystone.template
@@ -10,7 +10,7 @@
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog /var/log/%APACHE_NAME%/keystone.log
- CustomLog /var/log/%APACHE_NAME%/access.log combined
+ CustomLog /var/log/%APACHE_NAME%/keystone_access.log combined
</VirtualHost>
<VirtualHost *:%ADMINPORT%>
@@ -22,7 +22,7 @@
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog /var/log/%APACHE_NAME%/keystone.log
- CustomLog /var/log/%APACHE_NAME%/access.log combined
+ CustomLog /var/log/%APACHE_NAME%/keystone_access.log combined
</VirtualHost>
# Workaround for missing path on RHEL6, see
diff --git a/files/apts/ironic b/files/apts/ironic
index 8674d9f..283d1b2 100644
--- a/files/apts/ironic
+++ b/files/apts/ironic
@@ -1,3 +1,4 @@
+docker.io
ipmitool
iptables
ipxe
diff --git a/files/apts/neutron b/files/apts/neutron
index 23dd65b..381c758 100644
--- a/files/apts/neutron
+++ b/files/apts/neutron
@@ -5,7 +5,6 @@
libmysqlclient-dev # testonly
mysql-server #NOPRIME
sudo
-python-boto
python-iso8601
python-paste
python-routes
@@ -18,7 +17,7 @@
python-mysqldb
python-mysql.connector
python-pyudev
-python-qpid # dist:precise
+python-qpid # NOPRIME
dnsmasq-base
dnsmasq-utils # for dhcp_release only available in dist:precise
rabbitmq-server # NOPRIME
diff --git a/files/apts/nova b/files/apts/nova
index 114194e..b1b969a 100644
--- a/files/apts/nova
+++ b/files/apts/nova
@@ -24,7 +24,7 @@
curl
genisoimage # required for config_drive
rabbitmq-server # NOPRIME
-qpidd # dist:precise NOPRIME
+qpidd # NOPRIME
socat # used by ajaxterm
python-mox
python-paste
@@ -42,8 +42,7 @@
python-suds
python-lockfile
python-m2crypto
-python-boto
python-kombu
python-feedparser
python-iso8601
-python-qpid # dist:precise
+python-qpid # NOPRIME
diff --git a/files/rpms-suse/neutron b/files/rpms-suse/neutron
index 79f5bff..8ad69b0 100644
--- a/files/rpms-suse/neutron
+++ b/files/rpms-suse/neutron
@@ -4,7 +4,6 @@
iptables
iputils
mariadb # NOPRIME
-python-boto
python-eventlet
python-greenlet
python-iso8601
diff --git a/files/rpms-suse/nova b/files/rpms-suse/nova
index 2a210e5..73c0604 100644
--- a/files/rpms-suse/nova
+++ b/files/rpms-suse/nova
@@ -23,7 +23,6 @@
python-Routes
python-SQLAlchemy
python-Tempita
-python-boto
python-cheetah
python-eventlet
python-feedparser
diff --git a/files/rpms/horizon b/files/rpms/horizon
index 92afed2..8ecb030 100644
--- a/files/rpms/horizon
+++ b/files/rpms/horizon
@@ -4,7 +4,6 @@
pylint
python-anyjson
python-BeautifulSoup
-python-boto
python-coverage
python-dateutil
python-eventlet
diff --git a/files/rpms/ironic b/files/rpms/ironic
index 959ac3c..e646f3a 100644
--- a/files/rpms/ironic
+++ b/files/rpms/ironic
@@ -1,3 +1,4 @@
+docker-io
ipmitool
iptables
ipxe-bootimgs
diff --git a/files/rpms/keystone b/files/rpms/keystone
index 7182091..e1873b7 100644
--- a/files/rpms/keystone
+++ b/files/rpms/keystone
@@ -1,3 +1,4 @@
+MySQL-python
python-greenlet
libxslt-devel # dist:f20
python-lxml #dist:f19,f20
diff --git a/files/rpms/neutron b/files/rpms/neutron
index c56e6e2..7020d33 100644
--- a/files/rpms/neutron
+++ b/files/rpms/neutron
@@ -8,7 +8,6 @@
mysql-devel # testonly
mysql-server # NOPRIME
openvswitch # NOPRIME
-python-boto
python-eventlet
python-greenlet
python-iso8601
@@ -16,7 +15,7 @@
#rhel6 gets via pip
python-paste # dist:f19,f20,rhel7
python-paste-deploy # dist:f19,f20,rhel7
-python-qpid
+python-qpid # NOPRIME
python-routes
python-sqlalchemy
python-suds
diff --git a/files/rpms/nova b/files/rpms/nova
index 4c2ee57..695d814 100644
--- a/files/rpms/nova
+++ b/files/rpms/nova
@@ -20,7 +20,6 @@
mysql-server # NOPRIME
parted
polkit
-python-boto
python-cheetah
python-eventlet
python-feedparser
@@ -35,7 +34,7 @@
# pip we need
python-paste # dist:f19,f20,rhel7
python-paste-deploy # dist:f19,f20,rhel7
-python-qpid
+python-qpid # NOPRIME
python-routes
python-sqlalchemy
python-suds
diff --git a/files/rpms/qpid b/files/rpms/qpid
new file mode 100644
index 0000000..62148ba
--- /dev/null
+++ b/files/rpms/qpid
@@ -0,0 +1,3 @@
+qpid-proton-c-devel # NOPRIME
+python-qpid-proton # NOPRIME
+
diff --git a/functions-common b/functions-common
index 87b9ece..6b1f473 100644
--- a/functions-common
+++ b/functions-common
@@ -1135,11 +1135,14 @@
# fork. It includes the dirty work of closing extra filehandles and preparing log
# files to produce the same logs as screen_it(). The log filename is derived
# from the service name and global-and-now-misnamed ``SCREEN_LOGDIR``
-# Uses globals ``CURRENT_LOG_TIME``, ``SCREEN_LOGDIR``
-# _run_process service "command-line"
+# Uses globals ``CURRENT_LOG_TIME``, ``SCREEN_LOGDIR``, ``SCREEN_NAME``, ``SERVICE_DIR``
+# If an optional group is provided, sg will be used to set the group of
+# the command.
+# _run_process service "command-line" [group]
function _run_process {
local service=$1
local command="$2"
+ local group=$3
# Undo logging redirections and close the extra descriptors
exec 1>&3
@@ -1148,15 +1151,23 @@
exec 6>&-
if [[ -n ${SCREEN_LOGDIR} ]]; then
- exec 1>&${SCREEN_LOGDIR}/screen-${1}.${CURRENT_LOG_TIME}.log 2>&1
- ln -sf ${SCREEN_LOGDIR}/screen-${1}.${CURRENT_LOG_TIME}.log ${SCREEN_LOGDIR}/screen-${1}.log
+ exec 1>&${SCREEN_LOGDIR}/screen-${service}.${CURRENT_LOG_TIME}.log 2>&1
+ ln -sf ${SCREEN_LOGDIR}/screen-${service}.${CURRENT_LOG_TIME}.log ${SCREEN_LOGDIR}/screen-${service}.log
# TODO(dtroyer): Hack to get stdout from the Python interpreter for the logs.
export PYTHONUNBUFFERED=1
fi
- exec /bin/bash -c "$command"
- die "$service exec failure: $command"
+ # Run under ``setsid`` to force the process to become a session and group leader.
+ # The pid saved can be used with pkill -g to get the entire process group.
+ if [[ -n "$group" ]]; then
+ setsid sg $group "$command" & echo $! >$SERVICE_DIR/$SCREEN_NAME/$service.pid
+ else
+ setsid $command & echo $! >$SERVICE_DIR/$SCREEN_NAME/$service.pid
+ fi
+
+ # Just silently exit this process
+ exit 0
}
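The ``setsid``/``pkill -g`` pairing the comment above describes can be sketched in isolation; ``sleep`` stands in for a real service here, and the pid file path is illustrative rather than DevStack's ``$SERVICE_DIR/$SCREEN_NAME`` layout.

```shell
# Hypothetical stand-in for a service: setsid makes it a session and
# process-group leader, so its saved pid doubles as a process-group id.
pidfile=$(mktemp)
setsid sleep 30 &
echo $! >"$pidfile"
sleep 1    # give setsid a moment to exec the child
# stop_process-style teardown: one pkill -g reaps the whole group
pkill -g "$(cat "$pidfile")"
```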
# Helper to remove the ``*.failure`` files under ``$SERVICE_DIR/$SCREEN_NAME``.
@@ -1184,61 +1195,71 @@
return $exitcode
}
-# run_process() launches a child process that closes all file descriptors and
-# then exec's the passed in command. This is meant to duplicate the semantics
-# of screen_it() without screen. PIDs are written to
-# ``$SERVICE_DIR/$SCREEN_NAME/$service.pid``
-# run_process service "command-line"
+# Run a single service under screen or directly
+# If the command includes shell metacharacters (;<>*), it must be run using a shell.
+# If an optional group is provided, sg will be used to run the
+# command as that group.
+# run_process service "command-line" [group]
function run_process {
local service=$1
local command="$2"
+ local group=$3
- # Spawn the child process
- _run_process "$service" "$command" &
- echo $!
+ if is_service_enabled $service; then
+ if [[ "$USE_SCREEN" = "True" ]]; then
+ screen_service "$service" "$command" "$group"
+ else
+ # Spawn directly without screen
+ _run_process "$service" "$command" "$group" &
+ fi
+ fi
}
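The dispatch run_process() now performs can be shown with a toy stand-in; echoes replace the real launchers and the service names are illustrative.

```shell
# Dispatch skeleton: screen window when USE_SCREEN=True, direct
# backgrounded spawn otherwise; an optional group is threaded through.
launch() {
    local service=$1 command=$2 group=$3
    if [[ "$USE_SCREEN" = "True" ]]; then
        echo "screen: $service${group:+ (sg $group)}"
    else
        echo "direct: $service${group:+ (sg $group)}"
    fi
}
USE_SCREEN=True
launch c-api "cinder-api"
USE_SCREEN=False
launch ceilometer-acompute "ceilometer-agent-compute" libvirtd
```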
# Helper to launch a service in a named screen
# Uses globals ``CURRENT_LOG_TIME``, ``SCREEN_NAME``, ``SCREEN_LOGDIR``,
# ``SERVICE_DIR``, ``USE_SCREEN``
-# screen_it service "command-line"
-function screen_it {
+# screen_service service "command-line" [group]
+# Run a command in a shell in a screen window. If an optional group
+# is provided, use sg to set the group of the command.
+function screen_service {
+ local service=$1
+ local command="$2"
+ local group=$3
+
SCREEN_NAME=${SCREEN_NAME:-stack}
SERVICE_DIR=${SERVICE_DIR:-${DEST}/status}
USE_SCREEN=$(trueorfalse True $USE_SCREEN)
- if is_service_enabled $1; then
+ if is_service_enabled $service; then
# Append the service to the screen rc file
- screen_rc "$1" "$2"
+ screen_rc "$service" "$command"
- if [[ "$USE_SCREEN" = "True" ]]; then
- screen -S $SCREEN_NAME -X screen -t $1
+ screen -S $SCREEN_NAME -X screen -t $service
- if [[ -n ${SCREEN_LOGDIR} ]]; then
- screen -S $SCREEN_NAME -p $1 -X logfile ${SCREEN_LOGDIR}/screen-${1}.${CURRENT_LOG_TIME}.log
- screen -S $SCREEN_NAME -p $1 -X log on
- ln -sf ${SCREEN_LOGDIR}/screen-${1}.${CURRENT_LOG_TIME}.log ${SCREEN_LOGDIR}/screen-${1}.log
- fi
-
- # sleep to allow bash to be ready to be send the command - we are
- # creating a new window in screen and then sends characters, so if
- # bash isn't running by the time we send the command, nothing happens
- sleep 3
-
- NL=`echo -ne '\015'`
- # This fun command does the following:
- # - the passed server command is backgrounded
- # - the pid of the background process is saved in the usual place
- # - the server process is brought back to the foreground
- # - if the server process exits prematurely the fg command errors
- # and a message is written to stdout and the service failure file
- # The pid saved can be used in screen_stop() as a process group
- # id to kill off all child processes
- screen -S $SCREEN_NAME -p $1 -X stuff "$2 & echo \$! >$SERVICE_DIR/$SCREEN_NAME/$1.pid; fg || echo \"$1 failed to start\" | tee \"$SERVICE_DIR/$SCREEN_NAME/$1.failure\"$NL"
- else
- # Spawn directly without screen
- run_process "$1" "$2" >$SERVICE_DIR/$SCREEN_NAME/$1.pid
+ if [[ -n ${SCREEN_LOGDIR} ]]; then
+ screen -S $SCREEN_NAME -p $service -X logfile ${SCREEN_LOGDIR}/screen-${service}.${CURRENT_LOG_TIME}.log
+ screen -S $SCREEN_NAME -p $service -X log on
+ ln -sf ${SCREEN_LOGDIR}/screen-${service}.${CURRENT_LOG_TIME}.log ${SCREEN_LOGDIR}/screen-${service}.log
fi
+
+        # sleep to allow bash to be ready to be sent the command - we are
+        # creating a new window in screen and then sending characters, so if
+        # bash isn't running by the time we send the command, nothing happens
+ sleep 3
+
+ NL=`echo -ne '\015'`
+ # This fun command does the following:
+ # - the passed server command is backgrounded
+ # - the pid of the background process is saved in the usual place
+ # - the server process is brought back to the foreground
+ # - if the server process exits prematurely the fg command errors
+ # and a message is written to stdout and the service failure file
+ # The pid saved can be used in stop_process() as a process group
+ # id to kill off all child processes
+ if [[ -n "$group" ]]; then
+ command="sg $group '$command'"
+ fi
+ screen -S $SCREEN_NAME -p $service -X stuff "$command & echo \$! >$SERVICE_DIR/$SCREEN_NAME/${service}.pid; fg || echo \"$service failed to start\" | tee \"$SERVICE_DIR/$SCREEN_NAME/${service}.failure\"$NL"
fi
}
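Outside screen, the background/save-pid/foreground trick the comment above explains looks roughly like this; a sketch, where ``set -m`` enables the job control that an interactive screen window has by default, ``sleep`` stands in for a server process, and the directory is illustrative.

```shell
set -m                       # scripts need job control enabled for fg
svc_dir=$(mktemp -d)
# background the command, save its pid in the usual place, then bring
# it back to the foreground; if it dies early, fg errors and the
# failure message lands in the failure file
sleep 1 & echo $! >$svc_dir/demo.pid
fg %% || echo "demo failed to start" | tee $svc_dir/demo.failure
```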
@@ -1275,21 +1296,41 @@
# If screen is being used kill the screen window; this will catch processes
# that did not leave a PID behind
# Uses globals ``SCREEN_NAME``, ``SERVICE_DIR``, ``USE_SCREEN``
-# screen_stop service
-function screen_stop {
+# screen_stop_service service
+function screen_stop_service {
+ local service=$1
+
SCREEN_NAME=${SCREEN_NAME:-stack}
SERVICE_DIR=${SERVICE_DIR:-${DEST}/status}
USE_SCREEN=$(trueorfalse True $USE_SCREEN)
- if is_service_enabled $1; then
+ if is_service_enabled $service; then
+ # Clean up the screen window
+ screen -S $SCREEN_NAME -p $service -X kill
+ fi
+}
+
+# Stop a service process
+# If a PID is available use it, kill the whole process group via TERM
+# If screen is being used kill the screen window; this will catch processes
+# that did not leave a PID behind
+# Uses globals ``SERVICE_DIR``, ``USE_SCREEN``
+# stop_process service
+function stop_process {
+ local service=$1
+
+ SERVICE_DIR=${SERVICE_DIR:-${DEST}/status}
+ USE_SCREEN=$(trueorfalse True $USE_SCREEN)
+
+ if is_service_enabled $service; then
# Kill via pid if we have one available
- if [[ -r $SERVICE_DIR/$SCREEN_NAME/$1.pid ]]; then
- pkill -TERM -P -$(cat $SERVICE_DIR/$SCREEN_NAME/$1.pid)
- rm $SERVICE_DIR/$SCREEN_NAME/$1.pid
+ if [[ -r $SERVICE_DIR/$SCREEN_NAME/$service.pid ]]; then
+ pkill -g $(cat $SERVICE_DIR/$SCREEN_NAME/$service.pid)
+ rm $SERVICE_DIR/$SCREEN_NAME/$service.pid
fi
if [[ "$USE_SCREEN" = "True" ]]; then
# Clean up the screen window
- screen -S $SCREEN_NAME -p $1 -X kill
+ screen_stop_service $service
fi
fi
}
@@ -1324,6 +1365,91 @@
fi
}
+# Tail a log file in a screen if USE_SCREEN is true.
+function tail_log {
+ local service=$1
+ local logfile=$2
+
+ USE_SCREEN=$(trueorfalse True $USE_SCREEN)
+ if [[ "$USE_SCREEN" = "True" ]]; then
+ screen_service "$service" "sudo tail -f $logfile"
+ fi
+}
+
+
+# Deprecated Functions
+# --------------------
+
+# _old_run_process() is designed to be backgrounded by old_run_process() to simulate a
+# fork. It includes the dirty work of closing extra filehandles and preparing log
+# files to produce the same logs as screen_it(). The log filename is derived
+# from the service name and global-and-now-misnamed ``SCREEN_LOGDIR``
+# Uses globals ``CURRENT_LOG_TIME``, ``SCREEN_LOGDIR``, ``SCREEN_NAME``, ``SERVICE_DIR``
+# _old_run_process service "command-line"
+function _old_run_process {
+ local service=$1
+ local command="$2"
+
+ # Undo logging redirections and close the extra descriptors
+ exec 1>&3
+ exec 2>&3
+ exec 3>&-
+ exec 6>&-
+
+ if [[ -n ${SCREEN_LOGDIR} ]]; then
+        exec 1>&${SCREEN_LOGDIR}/screen-${service}.${CURRENT_LOG_TIME}.log 2>&1
+        ln -sf ${SCREEN_LOGDIR}/screen-${service}.${CURRENT_LOG_TIME}.log ${SCREEN_LOGDIR}/screen-${service}.log
+
+ # TODO(dtroyer): Hack to get stdout from the Python interpreter for the logs.
+ export PYTHONUNBUFFERED=1
+ fi
+
+ exec /bin/bash -c "$command"
+ die "$service exec failure: $command"
+}
+
+# old_run_process() launches a child process that closes all file descriptors and
+# then exec's the passed in command. This is meant to duplicate the semantics
+# of screen_it() without screen. PIDs are written to
+# ``$SERVICE_DIR/$SCREEN_NAME/$service.pid`` by the spawned child process.
+# old_run_process service "command-line"
+function old_run_process {
+ local service=$1
+ local command="$2"
+
+ # Spawn the child process
+ _old_run_process "$service" "$command" &
+ echo $!
+}
+
+# Compatibility for existing start_XXXX() functions
+# Uses global ``USE_SCREEN``
+# screen_it service "command-line"
+function screen_it {
+ if is_service_enabled $1; then
+ # Append the service to the screen rc file
+ screen_rc "$1" "$2"
+
+ if [[ "$USE_SCREEN" = "True" ]]; then
+ screen_service "$1" "$2"
+ else
+ # Spawn directly without screen
+ old_run_process "$1" "$2" >$SERVICE_DIR/$SCREEN_NAME/$1.pid
+ fi
+ fi
+}
+
+# Compatibility for existing stop_XXXX() functions
+# Stop a service in screen
+# If a PID is available use it, kill the whole process group via TERM
+# If screen is being used kill the screen window; this will catch processes
+# that did not leave a PID behind
+# screen_stop service
+function screen_stop {
+ # Clean up the screen window
+ stop_process $1
+}
+
# Python Functions
# ================
@@ -1607,6 +1733,7 @@
# are implemented
[[ ${service} == n-cell-* && ${ENABLED_SERVICES} =~ "n-cell" ]] && enabled=0
+ [[ ${service} == n-cpu-* && ${ENABLED_SERVICES} =~ "n-cpu" ]] && enabled=0
[[ ${service} == "nova" && ${ENABLED_SERVICES} =~ "n-" ]] && enabled=0
[[ ${service} == "cinder" && ${ENABLED_SERVICES} =~ "c-" ]] && enabled=0
[[ ${service} == "ceilometer" && ${ENABLED_SERVICES} =~ "ceilometer-" ]] && enabled=0
@@ -1616,6 +1743,7 @@
[[ ${service} == "trove" && ${ENABLED_SERVICES} =~ "tr-" ]] && enabled=0
[[ ${service} == "swift" && ${ENABLED_SERVICES} =~ "s-" ]] && enabled=0
[[ ${service} == s-* && ${ENABLED_SERVICES} =~ "swift" ]] && enabled=0
+ [[ ${service} == key-* && ${ENABLED_SERVICES} =~ "key" ]] && enabled=0
done
$xtrace
return $enabled
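The two added matchers use the same idiom as the rest of this loop; a toy rendition with illustrative values:

```shell
# A glob on the right of == matches per-process names; the =~ substring
# test lets the meta service name enable them. enabled=0 means "on",
# mirroring is_service_enabled's shell-style return convention.
ENABLED_SERVICES="key,n-api,n-cpu"
service="key-access"
enabled=1
[[ ${service} == key-* && ${ENABLED_SERVICES} =~ "key" ]] && enabled=0
echo $enabled    # 0: key-access is enabled because "key" is
```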
diff --git a/lib/ceilometer b/lib/ceilometer
index 340acb9..00fc0d3 100644
--- a/lib/ceilometer
+++ b/lib/ceilometer
@@ -146,15 +146,11 @@
iniset $CEILOMETER_CONF service_credentials os_password $SERVICE_PASSWORD
iniset $CEILOMETER_CONF service_credentials os_tenant_name $SERVICE_TENANT_NAME
- iniset $CEILOMETER_CONF keystone_authtoken identity_uri $KEYSTONE_AUTH_URI
- iniset $CEILOMETER_CONF keystone_authtoken admin_user ceilometer
- iniset $CEILOMETER_CONF keystone_authtoken admin_password $SERVICE_PASSWORD
- iniset $CEILOMETER_CONF keystone_authtoken admin_tenant_name $SERVICE_TENANT_NAME
- iniset $CEILOMETER_CONF keystone_authtoken signing_dir $CEILOMETER_AUTH_CACHE_DIR
+ configure_auth_token_middleware $CEILOMETER_CONF ceilometer $CEILOMETER_AUTH_CACHE_DIR
if [ "$CEILOMETER_BACKEND" = 'mysql' ] || [ "$CEILOMETER_BACKEND" = 'postgresql' ] ; then
iniset $CEILOMETER_CONF database connection `database_connection_url ceilometer`
- iniset $CEILOMETER_CONF DEFAULT collector_workers $(( ($(nproc) + 1) / 2 ))
+ iniset $CEILOMETER_CONF DEFAULT collector_workers $API_WORKERS
else
iniset $CEILOMETER_CONF database connection mongodb://localhost:27017/ceilometer
configure_mongodb
@@ -224,18 +220,18 @@
# start_ceilometer() - Start running processes, including screen
function start_ceilometer {
- screen_it ceilometer-acentral "cd ; ceilometer-agent-central --config-file $CEILOMETER_CONF"
- screen_it ceilometer-anotification "cd ; ceilometer-agent-notification --config-file $CEILOMETER_CONF"
- screen_it ceilometer-collector "cd ; ceilometer-collector --config-file $CEILOMETER_CONF"
- screen_it ceilometer-api "cd ; ceilometer-api -d -v --log-dir=$CEILOMETER_API_LOG_DIR --config-file $CEILOMETER_CONF"
+ run_process ceilometer-acentral "ceilometer-agent-central --config-file $CEILOMETER_CONF"
+ run_process ceilometer-anotification "ceilometer-agent-notification --config-file $CEILOMETER_CONF"
+ run_process ceilometer-collector "ceilometer-collector --config-file $CEILOMETER_CONF"
+ run_process ceilometer-api "ceilometer-api -d -v --log-dir=$CEILOMETER_API_LOG_DIR --config-file $CEILOMETER_CONF"
# Start the compute agent last to allow time for the collector to
# fully wake up and connect to the message bus. See bug #1355809
if [[ "$VIRT_DRIVER" = 'libvirt' ]]; then
- screen_it ceilometer-acompute "cd ; sg $LIBVIRT_GROUP 'ceilometer-agent-compute --config-file $CEILOMETER_CONF'"
+ run_process ceilometer-acompute "ceilometer-agent-compute --config-file $CEILOMETER_CONF" $LIBVIRT_GROUP
fi
if [[ "$VIRT_DRIVER" = 'vsphere' ]]; then
- screen_it ceilometer-acompute "cd ; ceilometer-agent-compute --config-file $CEILOMETER_CONF"
+ run_process ceilometer-acompute "ceilometer-agent-compute --config-file $CEILOMETER_CONF"
fi
# only die on API if it was actually intended to be turned on
@@ -246,15 +242,15 @@
fi
fi
- screen_it ceilometer-alarm-notifier "cd ; ceilometer-alarm-notifier --config-file $CEILOMETER_CONF"
- screen_it ceilometer-alarm-evaluator "cd ; ceilometer-alarm-evaluator --config-file $CEILOMETER_CONF"
+ run_process ceilometer-alarm-notifier "ceilometer-alarm-notifier --config-file $CEILOMETER_CONF"
+ run_process ceilometer-alarm-evaluator "ceilometer-alarm-evaluator --config-file $CEILOMETER_CONF"
}
# stop_ceilometer() - Stop running processes
function stop_ceilometer {
# Kill the ceilometer screen windows
for serv in ceilometer-acompute ceilometer-acentral ceilometer-anotification ceilometer-collector ceilometer-api ceilometer-alarm-notifier ceilometer-alarm-evaluator; do
- screen_stop $serv
+ stop_process $serv
done
}
diff --git a/lib/ceph b/lib/ceph
index 32a4760..30ca903 100644
--- a/lib/ceph
+++ b/lib/ceph
@@ -36,7 +36,7 @@
# Ceph data. Set ``CEPH_LOOPBACK_DISK_SIZE`` to the disk size in
# kilobytes.
# Default is 1 gigabyte.
-CEPH_LOOPBACK_DISK_SIZE_DEFAULT=2G
+CEPH_LOOPBACK_DISK_SIZE_DEFAULT=4G
CEPH_LOOPBACK_DISK_SIZE=${CEPH_LOOPBACK_DISK_SIZE:-$CEPH_LOOPBACK_DISK_SIZE_DEFAULT}
# Common
@@ -198,10 +198,11 @@
sudo ceph -c ${CEPH_CONF_FILE} auth get-or-create client.${GLANCE_CEPH_USER} mon "allow r" osd "allow class-read object_prefix rbd_children, allow rwx pool=${GLANCE_CEPH_POOL}" | sudo tee ${CEPH_CONF_DIR}/ceph.client.${GLANCE_CEPH_USER}.keyring
sudo chown ${STACK_USER}:$(id -g -n $whoami) ${CEPH_CONF_DIR}/ceph.client.${GLANCE_CEPH_USER}.keyring
iniset $GLANCE_API_CONF DEFAULT default_store rbd
- iniset $GLANCE_API_CONF DEFAULT rbd_store_ceph_conf $CEPH_CONF_FILE
- iniset $GLANCE_API_CONF DEFAULT rbd_store_user $GLANCE_CEPH_USER
- iniset $GLANCE_API_CONF DEFAULT rbd_store_pool $GLANCE_CEPH_POOL
iniset $GLANCE_API_CONF DEFAULT show_image_direct_url True
+ iniset $GLANCE_API_CONF glance_store stores "file, http, rbd"
+ iniset $GLANCE_API_CONF glance_store rbd_store_ceph_conf $CEPH_CONF_FILE
+ iniset $GLANCE_API_CONF glance_store rbd_store_user $GLANCE_CEPH_USER
+ iniset $GLANCE_API_CONF glance_store rbd_store_pool $GLANCE_CEPH_POOL
}
# configure_ceph_nova() - Nova config needs to come after Nova is set up
diff --git a/lib/cinder b/lib/cinder
index ce13b86..cbca9c0 100644
--- a/lib/cinder
+++ b/lib/cinder
@@ -212,12 +212,7 @@
inicomment $CINDER_API_PASTE_INI filter:authtoken admin_password
inicomment $CINDER_API_PASTE_INI filter:authtoken signing_dir
- iniset $CINDER_CONF keystone_authtoken identity_uri $KEYSTONE_AUTH_URI
- iniset $CINDER_CONF keystone_authtoken cafile $KEYSTONE_SSL_CA
- iniset $CINDER_CONF keystone_authtoken admin_tenant_name $SERVICE_TENANT_NAME
- iniset $CINDER_CONF keystone_authtoken admin_user cinder
- iniset $CINDER_CONF keystone_authtoken admin_password $SERVICE_PASSWORD
- iniset $CINDER_CONF keystone_authtoken signing_dir $CINDER_AUTH_CACHE_DIR
+ configure_auth_token_middleware $CINDER_CONF cinder $CINDER_AUTH_CACHE_DIR
iniset $CINDER_CONF DEFAULT auth_strategy keystone
iniset $CINDER_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
@@ -247,8 +242,8 @@
if type configure_cinder_backend_${be_type} >/dev/null 2>&1; then
configure_cinder_backend_${be_type} ${be_name}
fi
- if [[ -z "$default_type" ]]; then
- default_name=$be_type
+ if [[ -z "$default_name" ]]; then
+ default_name=$be_name
fi
enabled_backends+=$be_name,
done
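The fix above seeds the default from the first backend's *name* rather than its type; a toy version of the loop with hypothetical backend entries shows the corrected behavior:

```shell
# Same parse as the surrounding hunk: entries are type:name pairs.
CINDER_ENABLED_BACKENDS="lvm:lvmdriver-1,vmdk:vmdk-1"
default_name=""
enabled_backends=""
for be in ${CINDER_ENABLED_BACKENDS//,/ }; do
    be_type=${be%%:*}
    be_name=${be##*:}
    # the corrected guard: take the first backend *name* as the default
    if [[ -z "$default_name" ]]; then
        default_name=$be_name
    fi
    enabled_backends+=$be_name,
done
echo "$default_name"          # lvmdriver-1
echo "${enabled_backends%,}"  # lvmdriver-1,vmdk-1
```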
@@ -302,11 +297,8 @@
-e 's/snapshot_autoextend_percent =.*/snapshot_autoextend_percent = 20/' \
/etc/lvm/lvm.conf
fi
- configure_API_version $CINDER_CONF $IDENTITY_API_VERSION
- iniset $CINDER_CONF keystone_authtoken admin_user cinder
- iniset $CINDER_CONF keystone_authtoken admin_tenant_name $SERVICE_TENANT_NAME
- iniset $CINDER_CONF keystone_authtoken admin_password $SERVICE_PASSWORD
+ iniset $CINDER_CONF DEFAULT osapi_volume_workers "$API_WORKERS"
}
# create_cinder_accounts() - Set up common required cinder accounts
@@ -431,15 +423,15 @@
sudo tgtadm --mode system --op update --name debug --value on
fi
- screen_it c-api "cd $CINDER_DIR && $CINDER_BIN_DIR/cinder-api --config-file $CINDER_CONF"
+ run_process c-api "$CINDER_BIN_DIR/cinder-api --config-file $CINDER_CONF"
echo "Waiting for Cinder API to start..."
if ! wait_for_service $SERVICE_TIMEOUT $CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT; then
die $LINENO "c-api did not start"
fi
- screen_it c-sch "cd $CINDER_DIR && $CINDER_BIN_DIR/cinder-scheduler --config-file $CINDER_CONF"
- screen_it c-bak "cd $CINDER_DIR && $CINDER_BIN_DIR/cinder-backup --config-file $CINDER_CONF"
- screen_it c-vol "cd $CINDER_DIR && $CINDER_BIN_DIR/cinder-volume --config-file $CINDER_CONF"
+ run_process c-sch "$CINDER_BIN_DIR/cinder-scheduler --config-file $CINDER_CONF"
+ run_process c-bak "$CINDER_BIN_DIR/cinder-backup --config-file $CINDER_CONF"
+ run_process c-vol "$CINDER_BIN_DIR/cinder-volume --config-file $CINDER_CONF"
    # NOTE(jdg): For cinder, startup order matters.  To ensure that report_capabilities is received
    # by the scheduler, start the cinder-volume service last (or restart it) after the scheduler
@@ -456,7 +448,7 @@
# Kill the cinder screen windows
local serv
for serv in c-api c-bak c-sch c-vol; do
- screen_stop $serv
+ stop_process $serv
done
if is_service_enabled c-vol; then
diff --git a/lib/cinder_backends/vmdk b/lib/cinder_backends/vmdk
new file mode 100644
index 0000000..b32c4b2
--- /dev/null
+++ b/lib/cinder_backends/vmdk
@@ -0,0 +1,45 @@
+# lib/cinder_backends/vmdk
+# Configure the VMware vmdk backend
+
+# Enable with:
+#
+# CINDER_ENABLED_BACKENDS+=,vmdk:<volume-type-name>
+
+# Dependencies:
+#
+# - ``functions`` file
+# - ``cinder`` configurations
+
+# configure_cinder_backend_vmdk - Configure Cinder for VMware vmdk backends
+
+# Save trace setting
+VMDK_XTRACE=$(set +o | grep xtrace)
+set +o xtrace
+
+
+# Entry Points
+# ------------
+
+# configure_cinder_backend_vmdk - Set config files, create data dirs, etc
+function configure_cinder_backend_vmdk {
+ # To use VMware vmdk backend, set the following in local.conf:
+ # CINDER_ENABLED_BACKENDS+=,vmdk:<volume-type-name>
+ # VMWAREAPI_IP=<vcenter-ip>
+ # VMWAREAPI_USER=<vcenter-admin-account>
+ # VMWAREAPI_PASSWORD=<vcenter-admin-password>
+
+ local be_name=$1
+ iniset $CINDER_CONF $be_name volume_backend_name $be_name
+ iniset $CINDER_CONF $be_name volume_driver "cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver"
+ iniset $CINDER_CONF $be_name vmware_host_ip "$VMWAREAPI_IP"
+ iniset $CINDER_CONF $be_name vmware_host_username "$VMWAREAPI_USER"
+ iniset $CINDER_CONF $be_name vmware_host_password "$VMWAREAPI_PASSWORD"
+}
+
+
+# Restore xtrace
+$VMDK_XTRACE
+
+# Local variables:
+# mode: shell-script
+# End:
diff --git a/lib/gantt b/lib/gantt
index 8db2ca1..485613f 100644
--- a/lib/gantt
+++ b/lib/gantt
@@ -77,14 +77,14 @@
# start_gantt() - Start running processes, including screen
function start_gantt {
if is_service_enabled gantt; then
- screen_it gantt "cd $GANTT_DIR && $GANTT_BIN_DIR/gantt-scheduler --config-file $GANTT_CONF"
+ run_process gantt "$GANTT_BIN_DIR/gantt-scheduler --config-file $GANTT_CONF"
fi
}
# stop_gantt() - Stop running processes
function stop_gantt {
echo "Stop Gantt"
- screen_stop gantt
+ stop_process gantt
}
# Restore xtrace
diff --git a/lib/glance b/lib/glance
index 7a28b68..6ca2fb5 100644
--- a/lib/glance
+++ b/lib/glance
@@ -28,12 +28,14 @@
# Set up default directories
GLANCE_DIR=$DEST/glance
+GLANCE_STORE_DIR=$DEST/glance_store
GLANCECLIENT_DIR=$DEST/python-glanceclient
GLANCE_CACHE_DIR=${GLANCE_CACHE_DIR:=$DATA_DIR/glance/cache}
GLANCE_IMAGE_DIR=${GLANCE_IMAGE_DIR:=$DATA_DIR/glance/images}
GLANCE_AUTH_CACHE_DIR=${GLANCE_AUTH_CACHE_DIR:-/var/cache/glance}
GLANCE_CONF_DIR=${GLANCE_CONF_DIR:-/etc/glance}
+GLANCE_METADEF_DIR=$GLANCE_CONF_DIR/metadefs
GLANCE_REGISTRY_CONF=$GLANCE_CONF_DIR/glance-registry.conf
GLANCE_API_CONF=$GLANCE_CONF_DIR/glance-api.conf
GLANCE_REGISTRY_PASTE_INI=$GLANCE_CONF_DIR/glance-registry-paste.ini
@@ -81,6 +83,11 @@
fi
sudo chown $STACK_USER $GLANCE_CONF_DIR
+ if [[ ! -d $GLANCE_METADEF_DIR ]]; then
+ sudo mkdir -p $GLANCE_METADEF_DIR
+ fi
+ sudo chown $STACK_USER $GLANCE_METADEF_DIR
+
# Copy over our glance configurations and update them
cp $GLANCE_DIR/etc/glance-registry.conf $GLANCE_REGISTRY_CONF
iniset $GLANCE_REGISTRY_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
@@ -89,13 +96,7 @@
iniset $GLANCE_REGISTRY_CONF DEFAULT sql_connection $dburl
iniset $GLANCE_REGISTRY_CONF DEFAULT use_syslog $SYSLOG
iniset $GLANCE_REGISTRY_CONF paste_deploy flavor keystone
- iniset $GLANCE_REGISTRY_CONF keystone_authtoken identity_uri $KEYSTONE_AUTH_URI
- iniset $GLANCE_REGISTRY_CONF keystone_authtoken cafile $KEYSTONE_SSL_CA
- configure_API_version $GLANCE_REGISTRY_CONF $IDENTITY_API_VERSION
- iniset $GLANCE_REGISTRY_CONF keystone_authtoken admin_tenant_name $SERVICE_TENANT_NAME
- iniset $GLANCE_REGISTRY_CONF keystone_authtoken admin_user glance
- iniset $GLANCE_REGISTRY_CONF keystone_authtoken admin_password $SERVICE_PASSWORD
- iniset $GLANCE_REGISTRY_CONF keystone_authtoken signing_dir $GLANCE_AUTH_CACHE_DIR/registry
+ configure_auth_token_middleware $GLANCE_REGISTRY_CONF glance $GLANCE_AUTH_CACHE_DIR/registry
if is_service_enabled qpid || [ -n "$RABBIT_HOST" ] && [ -n "$RABBIT_PASSWORD" ]; then
iniset $GLANCE_REGISTRY_CONF DEFAULT notification_driver messaging
fi
@@ -108,17 +109,11 @@
iniset $GLANCE_API_CONF DEFAULT use_syslog $SYSLOG
iniset $GLANCE_API_CONF DEFAULT image_cache_dir $GLANCE_CACHE_DIR/
iniset $GLANCE_API_CONF paste_deploy flavor keystone+cachemanagement
- iniset $GLANCE_API_CONF keystone_authtoken identity_uri $KEYSTONE_AUTH_URI
- iniset $GLANCE_API_CONF keystone_authtoken cafile $KEYSTONE_SSL_CA
- configure_API_version $GLANCE_API_CONF $IDENTITY_API_VERSION
- iniset $GLANCE_API_CONF keystone_authtoken admin_tenant_name $SERVICE_TENANT_NAME
- iniset $GLANCE_API_CONF keystone_authtoken admin_user glance
- iniset $GLANCE_API_CONF keystone_authtoken admin_password $SERVICE_PASSWORD
+ configure_auth_token_middleware $GLANCE_API_CONF glance $GLANCE_AUTH_CACHE_DIR/api
if is_service_enabled qpid || [ -n "$RABBIT_HOST" ] && [ -n "$RABBIT_PASSWORD" ]; then
iniset $GLANCE_API_CONF DEFAULT notification_driver messaging
fi
iniset_rpc_backend glance $GLANCE_API_CONF DEFAULT
- iniset $GLANCE_API_CONF keystone_authtoken signing_dir $GLANCE_AUTH_CACHE_DIR/api
if [ "$VIRT_DRIVER" = 'xenserver' ]; then
iniset $GLANCE_API_CONF DEFAULT container_formats "ami,ari,aki,bare,ovf,tgz"
iniset $GLANCE_API_CONF DEFAULT disk_formats "ami,ari,aki,vhd,raw,iso"
@@ -131,6 +126,8 @@
# sections.
iniset $GLANCE_API_CONF glance_store filesystem_store_datadir $GLANCE_IMAGE_DIR/
+ iniset $GLANCE_API_CONF DEFAULT workers "$API_WORKERS"
+
# Store the images in swift if enabled.
if is_service_enabled s-proxy; then
iniset $GLANCE_API_CONF DEFAULT default_store swift
@@ -177,6 +174,8 @@
cp -p $GLANCE_DIR/etc/policy.json $GLANCE_POLICY_JSON
cp -p $GLANCE_DIR/etc/schema-image.json $GLANCE_SCHEMA_JSON
+
+ cp -p $GLANCE_DIR/etc/metadefs/*.json $GLANCE_METADEF_DIR
}
# create_glance_accounts() - Set up common required glance accounts
@@ -241,6 +240,9 @@
# Migrate glance database
$GLANCE_BIN_DIR/glance-manage db_sync
+ # Load metadata definitions
+ $GLANCE_BIN_DIR/glance-manage db_load_metadefs
+
create_glance_cache_dir
}
@@ -252,14 +254,19 @@
# install_glance() - Collect source and prepare
function install_glance {
+ # Install glance_store from git so we make sure we're testing
+ # the latest code.
+ git_clone $GLANCE_STORE_REPO $GLANCE_STORE_DIR $GLANCE_STORE_BRANCH
+ setup_develop $GLANCE_STORE_DIR
+
git_clone $GLANCE_REPO $GLANCE_DIR $GLANCE_BRANCH
setup_develop $GLANCE_DIR
}
# start_glance() - Start running processes, including screen
function start_glance {
- screen_it g-reg "cd $GLANCE_DIR; $GLANCE_BIN_DIR/glance-registry --config-file=$GLANCE_CONF_DIR/glance-registry.conf"
- screen_it g-api "cd $GLANCE_DIR; $GLANCE_BIN_DIR/glance-api --config-file=$GLANCE_CONF_DIR/glance-api.conf"
+ run_process g-reg "$GLANCE_BIN_DIR/glance-registry --config-file=$GLANCE_CONF_DIR/glance-registry.conf"
+ run_process g-api "$GLANCE_BIN_DIR/glance-api --config-file=$GLANCE_CONF_DIR/glance-api.conf"
echo "Waiting for g-api ($GLANCE_HOSTPORT) to start..."
if ! timeout $SERVICE_TIMEOUT sh -c "while ! wget --no-proxy -q -O- http://$GLANCE_HOSTPORT; do sleep 1; done"; then
die $LINENO "g-api did not start"
@@ -269,8 +276,8 @@
# stop_glance() - Stop running processes
function stop_glance {
# Kill the Glance screen windows
- screen_stop g-api
- screen_stop g-reg
+ stop_process g-api
+ stop_process g-reg
}
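The `screen_it` to `run_process` conversion above can be sketched with a small pid-file-based runner. `demo_run_process` and `demo_stop_process` below are simplified illustrations, not DevStack's actual helpers (which also honor `USE_SCREEN` and per-service log files):

```shell
# Background the service command and remember its pid in a file,
# so it can be stopped later without a screen session.
demo_run_process() {
    local name=$1; shift
    "$@" &
    echo $! > /tmp/demo-$name.pid
}

# Kill the process recorded for this service and clean up the pid file.
demo_stop_process() {
    local name=$1
    kill "$(cat /tmp/demo-$name.pid)" 2>/dev/null
    rm -f /tmp/demo-$name.pid
}

# Stand-in for "run_process g-api ..." with a long-running command.
demo_run_process g-api sleep 60
```

The key difference from `screen_it` is that the command runs directly in the background, so there is no need to `cd` into the project directory first.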
diff --git a/lib/heat b/lib/heat
index 14094a9..f64cc90 100644
--- a/lib/heat
+++ b/lib/heat
@@ -110,14 +110,7 @@
setup_colorized_logging $HEAT_CONF DEFAULT tenant user
fi
- # keystone authtoken
- iniset $HEAT_CONF keystone_authtoken identity_uri $KEYSTONE_AUTH_URI
- configure_API_version $HEAT_CONF $IDENTITY_API_VERSION
- iniset $HEAT_CONF keystone_authtoken cafile $KEYSTONE_SSL_CA
- iniset $HEAT_CONF keystone_authtoken admin_tenant_name $SERVICE_TENANT_NAME
- iniset $HEAT_CONF keystone_authtoken admin_user heat
- iniset $HEAT_CONF keystone_authtoken admin_password $SERVICE_PASSWORD
- iniset $HEAT_CONF keystone_authtoken signing_dir $HEAT_AUTH_CACHE_DIR
+ configure_auth_token_middleware $HEAT_CONF heat $HEAT_AUTH_CACHE_DIR
if is_ssl_enabled_service "key"; then
iniset $HEAT_CONF clients_keystone ca_file $KEYSTONE_SSL_CA
@@ -189,10 +182,10 @@
# start_heat() - Start running processes, including screen
function start_heat {
- screen_it h-eng "cd $HEAT_DIR; bin/heat-engine --config-file=$HEAT_CONF"
- screen_it h-api "cd $HEAT_DIR; bin/heat-api --config-file=$HEAT_CONF"
- screen_it h-api-cfn "cd $HEAT_DIR; bin/heat-api-cfn --config-file=$HEAT_CONF"
- screen_it h-api-cw "cd $HEAT_DIR; bin/heat-api-cloudwatch --config-file=$HEAT_CONF"
+ run_process h-eng "$HEAT_DIR/bin/heat-engine --config-file=$HEAT_CONF"
+ run_process h-api "$HEAT_DIR/bin/heat-api --config-file=$HEAT_CONF"
+ run_process h-api-cfn "$HEAT_DIR/bin/heat-api-cfn --config-file=$HEAT_CONF"
+ run_process h-api-cw "$HEAT_DIR/bin/heat-api-cloudwatch --config-file=$HEAT_CONF"
}
# stop_heat() - Stop running processes
@@ -200,7 +193,7 @@
# Kill the screen windows
local serv
for serv in h-eng h-api h-api-cfn h-api-cw; do
- screen_stop $serv
+ stop_process $serv
done
}
diff --git a/lib/horizon b/lib/horizon
index 19693ef..4dd12da 100644
--- a/lib/horizon
+++ b/lib/horizon
@@ -112,6 +112,9 @@
_horizon_config_set $local_settings "" OPENSTACK_HOST \"${KEYSTONE_SERVICE_HOST}\"
_horizon_config_set $local_settings "" OPENSTACK_KEYSTONE_URL "\"${KEYSTONE_SERVICE_PROTOCOL}://${KEYSTONE_SERVICE_HOST}:${KEYSTONE_SERVICE_PORT}/v2.0\""
+ if [[ -n "$KEYSTONE_TOKEN_HASH_ALGORITHM" ]]; then
+ _horizon_config_set $local_settings "" OPENSTACK_TOKEN_HASH_ALGORITHM \""$KEYSTONE_TOKEN_HASH_ALGORITHM"\"
+ fi
if [ -f $SSL_BUNDLE_FILE ]; then
_horizon_config_set $local_settings "" OPENSTACK_SSL_CACERT \"${SSL_BUNDLE_FILE}\"
@@ -145,6 +148,7 @@
# Remove old log files that could mess with how devstack detects whether Horizon
# has been successfully started (see start_horizon() and functions::screen_it())
+ # and run_process
sudo rm -f /var/log/$APACHE_NAME/horizon_*
}
@@ -166,7 +170,7 @@
# start_horizon() - Start running processes, including screen
function start_horizon {
restart_apache_server
- screen_it horizon "cd $HORIZON_DIR && sudo tail -f /var/log/$APACHE_NAME/horizon_error.log"
+ tail_log horizon /var/log/$APACHE_NAME/horizon_error.log
}
# stop_horizon() - Stop running processes (non-screen)
diff --git a/lib/ironic b/lib/ironic
index 469f3a3..5f3ebcd 100644
--- a/lib/ironic
+++ b/lib/ironic
@@ -29,6 +29,7 @@
# Set up default directories
IRONIC_DIR=$DEST/ironic
+IRONIC_PYTHON_AGENT_DIR=$DEST/ironic-python-agent
IRONIC_DATA_DIR=$DATA_DIR/ironic
IRONIC_STATE_PATH=/var/lib/ironic
IRONICCLIENT_DIR=$DEST/python-ironicclient
@@ -74,7 +75,8 @@
IRONIC_DEPLOY_KERNEL=${IRONIC_DEPLOY_KERNEL:-}
IRONIC_DEPLOY_ELEMENT=${IRONIC_DEPLOY_ELEMENT:-deploy-ironic}
-IRONIC_AGENT_TARBALL=${IRONIC_AGENT_TARBALL:-http://tarballs.openstack.org/ironic-python-agent/coreos/ipa-coreos.tar.gz}
+IRONIC_AGENT_KERNEL_URL=${IRONIC_AGENT_KERNEL_URL:-http://tarballs.openstack.org/ironic-python-agent/coreos/files/coreos_production_pxe.vmlinuz}
+IRONIC_AGENT_RAMDISK_URL=${IRONIC_AGENT_RAMDISK_URL:-http://tarballs.openstack.org/ironic-python-agent/coreos/files/coreos_production_pxe_image-oem.cpio.gz}
# Which deploy driver to use - valid choices right now
# are 'pxe_ssh' and 'agent_ssh'.
@@ -241,14 +243,8 @@
function configure_ironic_api {
iniset $IRONIC_CONF_FILE DEFAULT auth_strategy keystone
iniset $IRONIC_CONF_FILE DEFAULT policy_file $IRONIC_POLICY_JSON
- iniset $IRONIC_CONF_FILE keystone_authtoken identity_uri $KEYSTONE_AUTH_URI
- iniset $IRONIC_CONF_FILE keystone_authtoken cafile $KEYSTONE_SSL_CA
- iniset $IRONIC_CONF_FILE keystone_authtoken auth_uri $KEYSTONE_SERVICE_URI
- iniset $IRONIC_CONF_FILE keystone_authtoken admin_tenant_name $SERVICE_TENANT_NAME
- iniset $IRONIC_CONF_FILE keystone_authtoken admin_user ironic
- iniset $IRONIC_CONF_FILE keystone_authtoken admin_password $SERVICE_PASSWORD
+ configure_auth_token_middleware $IRONIC_CONF_FILE ironic $IRONIC_AUTH_CACHE_DIR/api
iniset_rpc_backend ironic $IRONIC_CONF_FILE DEFAULT
- iniset $IRONIC_CONF_FILE keystone_authtoken signing_dir $IRONIC_AUTH_CACHE_DIR/api
cp -p $IRONIC_DIR/etc/ironic/policy.json $IRONIC_POLICY_JSON
}
@@ -379,7 +375,7 @@
# start_ironic_api() - Used by start_ironic().
# Starts Ironic API server.
function start_ironic_api {
- screen_it ir-api "cd $IRONIC_DIR; $IRONIC_BIN_DIR/ironic-api --config-file=$IRONIC_CONF_FILE"
+ run_process ir-api "$IRONIC_BIN_DIR/ironic-api --config-file=$IRONIC_CONF_FILE"
echo "Waiting for ir-api ($IRONIC_HOSTPORT) to start..."
if ! timeout $SERVICE_TIMEOUT sh -c "while ! wget --no-proxy -q -O- http://$IRONIC_HOSTPORT; do sleep 1; done"; then
die $LINENO "ir-api did not start"
@@ -389,7 +385,7 @@
# start_ironic_conductor() - Used by start_ironic().
# Starts Ironic conductor.
function start_ironic_conductor {
- screen_it ir-cond "cd $IRONIC_DIR; $IRONIC_BIN_DIR/ironic-conductor --config-file=$IRONIC_CONF_FILE"
+ run_process ir-cond "$IRONIC_BIN_DIR/ironic-conductor --config-file=$IRONIC_CONF_FILE"
# TODO(romcheg): Find a way to check whether the conductor has started.
}
@@ -495,8 +491,12 @@
done < $IRONIC_VM_MACS_CSV_FILE
# create the nova flavor
+ # NOTE(adam_g): Attempting to use an autogenerated UUID for the flavor id here uncovered
+ # a bug (LP: #1333852) in Trove. This can be changed to use an auto flavor id when the
+ # bug is fixed in Juno.
local adjusted_disk=$(($IRONIC_VM_SPECS_DISK - $IRONIC_VM_EPHEMERAL_DISK))
- nova flavor-create --ephemeral $IRONIC_VM_EPHEMERAL_DISK baremetal auto $IRONIC_VM_SPECS_RAM $adjusted_disk $IRONIC_VM_SPECS_CPU
+ nova flavor-create --ephemeral $IRONIC_VM_EPHEMERAL_DISK baremetal 551 $IRONIC_VM_SPECS_RAM $adjusted_disk $IRONIC_VM_SPECS_CPU
+
# TODO(lucasagomes): Remove the 'baremetal:deploy_kernel_id'
# and 'baremetal:deploy_ramdisk_id' parameters
# from the flavor after the completion of
@@ -558,6 +558,19 @@
ironic_ssh_check $IRONIC_SSH_KEY_DIR/$IRONIC_SSH_KEY_FILENAME $IRONIC_VM_SSH_ADDRESS $IRONIC_VM_SSH_PORT $IRONIC_SSH_USERNAME 10
}
+function build_ipa_coreos_ramdisk {
+ echo "Building ironic-python-agent deploy ramdisk"
+ local kernel_path=$1
+ local ramdisk_path=$2
+ git_clone $IRONIC_PYTHON_AGENT_REPO $IRONIC_PYTHON_AGENT_DIR $IRONIC_PYTHON_AGENT_BRANCH
+ cd $IRONIC_PYTHON_AGENT_DIR
+ imagebuild/coreos/build_coreos_image.sh
+ cp imagebuild/coreos/UPLOAD/coreos_production_pxe_image-oem.cpio.gz $ramdisk_path
+ cp imagebuild/coreos/UPLOAD/coreos_production_pxe.vmlinuz $kernel_path
+ sudo rm -rf UPLOAD
+ cd -
+}
+
# build deploy kernel+ramdisk, then upload them to glance
# this function sets ``IRONIC_DEPLOY_KERNEL_ID``, ``IRONIC_DEPLOY_RAMDISK_ID``
function upload_baremetal_ironic_deploy {
@@ -582,8 +595,8 @@
if [ "$IRONIC_BUILD_DEPLOY_RAMDISK" = "True" ]; then
# we can build them only if we're not offline
if [ "$OFFLINE" != "True" ]; then
- if [ "$IRONIC_DEPLOY_RAMDISK" == "agent_ssh" ]; then
- die $LINENO "Ironic-python-agent build is not yet supported"
+ if [ "$IRONIC_DEPLOY_DRIVER" == "agent_ssh" ]; then
+ build_ipa_coreos_ramdisk $IRONIC_DEPLOY_KERNEL_PATH $IRONIC_DEPLOY_RAMDISK_PATH
else
ramdisk-image-create $IRONIC_DEPLOY_FLAVOR \
-o $TOP_DIR/files/ir-deploy
@@ -594,12 +607,8 @@
else
if [ "$IRONIC_DEPLOY_DRIVER" == "agent_ssh" ]; then
# download the agent image tarball
- wget "$IRONIC_AGENT_TARBALL" -O ironic_agent_tarball.tar.gz
- tar zxfv ironic_agent_tarball.tar.gz
- mv UPLOAD/coreos_production_pxe.vmlinuz $IRONIC_DEPLOY_KERNEL_PATH
- mv UPLOAD/coreos_production_pxe_image-oem.cpio.gz $IRONIC_DEPLOY_RAMDISK_PATH
- rm -rf UPLOAD
- rm ironic_agent_tarball.tar.gz
+ wget "$IRONIC_AGENT_KERNEL_URL" -O $IRONIC_DEPLOY_KERNEL_PATH
+ wget "$IRONIC_AGENT_RAMDISK_URL" -O $IRONIC_DEPLOY_RAMDISK_PATH
else
die $LINENO "Deploy kernel+ramdisk files don't exist and their building was disabled explicitly by IRONIC_BUILD_DEPLOY_RAMDISK"
fi
diff --git a/lib/keystone b/lib/keystone
index 9e234b4..adba53c 100644
--- a/lib/keystone
+++ b/lib/keystone
@@ -6,6 +6,7 @@
# - ``functions`` file
# - ``tls`` file
# - ``DEST``, ``STACK_USER``
+# - ``FILES``
# - ``IDENTITY_API_VERSION``
# - ``BASE_SQL_CONN``
# - ``SERVICE_HOST``, ``SERVICE_PROTOCOL``
@@ -104,18 +105,13 @@
# cleanup_keystone() - Remove residual data files, anything left over from previous
# runs that a clean run would need to clean up
function cleanup_keystone {
- # kill instances (nova)
- # delete image files (glance)
- # This function intentionally left blank
- :
+ _cleanup_keystone_apache_wsgi
}
# _cleanup_keystone_apache_wsgi() - Remove wsgi files, disable and remove apache vhost file
function _cleanup_keystone_apache_wsgi {
sudo rm -f $KEYSTONE_WSGI_DIR/*.wsgi
- disable_apache_site keystone
sudo rm -f $(apache_site_config_for keystone)
- restart_apache_server
}
# _config_keystone_apache_wsgi() - Set WSGI config files of Keystone
@@ -138,7 +134,6 @@
s|%ADMINWSGI%|$KEYSTONE_WSGI_DIR/admin|g;
s|%USER%|$STACK_USER|g
" -i $keystone_apache_conf
- enable_apache_site keystone
}
# configure_keystone() - Set config files, create data dirs, etc
@@ -291,6 +286,13 @@
fi
iniset $KEYSTONE_CONF DEFAULT max_token_size 16384
+
+ iniset $KEYSTONE_CONF DEFAULT admin_workers "$API_WORKERS"
+ # Public workers will use the server default, typically number of CPU.
+
+ if [[ -n "$KEYSTONE_TOKEN_HASH_ALGORITHM" ]]; then
+ iniset $KEYSTONE_CONF token hash_algorithm "$KEYSTONE_TOKEN_HASH_ALGORITHM"
+ fi
}
function configure_keystone_extensions {
@@ -350,9 +352,8 @@
# The Member role is used by Horizon and Swift so we need to keep it:
local member_role=$(get_or_create_role "Member")
- # ANOTHER_ROLE demonstrates that an arbitrary role may be created and used
+ # another_role demonstrates that an arbitrary role may be created and used
# TODO(sleepsonthefloor): show how this can be used for rbac in the future!
-
local another_role=$(get_or_create_role "anotherrole")
# invisible tenant - admin can't see this one
@@ -382,11 +383,40 @@
}
# Configure the API version for the OpenStack projects.
-# configure_API_version conf_file version
+# configure_API_version conf_file version [section]
function configure_API_version {
local conf_file=$1
local api_version=$2
- iniset $conf_file keystone_authtoken auth_uri $KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_SERVICE_PORT/v$api_version
+ local section=${3:-keystone_authtoken}
+ iniset $conf_file $section auth_uri $KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_SERVICE_PORT/v$api_version
+}
+
+# Configure the service to use the auth token middleware.
+#
+# configure_auth_token_middleware conf_file admin_user signing_dir [section]
+#
+# section defaults to keystone_authtoken, which is where auth_token looks in
+# the .conf file. If the paste config file is used (api-paste.ini) then
+# provide the section name for the auth_token filter.
+function configure_auth_token_middleware {
+ local conf_file=$1
+ local admin_user=$2
+ local signing_dir=$3
+ local section=${4:-keystone_authtoken}
+
+ iniset $conf_file $section auth_host $KEYSTONE_AUTH_HOST
+ iniset $conf_file $section auth_port $KEYSTONE_AUTH_PORT
+ iniset $conf_file $section auth_protocol $KEYSTONE_AUTH_PROTOCOL
+ iniset $conf_file $section identity_uri $KEYSTONE_AUTH_URI
+ iniset $conf_file $section cafile $KEYSTONE_SSL_CA
+ configure_API_version $conf_file $IDENTITY_API_VERSION $section
+ iniset $conf_file $section admin_tenant_name $SERVICE_TENANT_NAME
+ iniset $conf_file $section admin_user $admin_user
+ iniset $conf_file $section admin_password $SERVICE_PASSWORD
+ iniset $conf_file $section signing_dir $signing_dir
+ if [[ -n "$KEYSTONE_TOKEN_HASH_ALGORITHM" ]]; then
+ iniset $conf_file $section hash_algorithms "$KEYSTONE_TOKEN_HASH_ALGORITHM"
+ fi
}
# init_keystone() - Initialize databases, etc.
@@ -467,11 +497,13 @@
fi
if [ "$KEYSTONE_USE_MOD_WSGI" == "True" ]; then
+ enable_apache_site keystone
restart_apache_server
- screen_it key "cd $KEYSTONE_DIR && sudo tail -f /var/log/$APACHE_NAME/keystone.log"
+ tail_log key /var/log/$APACHE_NAME/keystone.log
+ tail_log key-access /var/log/$APACHE_NAME/keystone_access.log
else
# Start Keystone in a screen window
- screen_it key "cd $KEYSTONE_DIR && $KEYSTONE_DIR/bin/keystone-all --config-file $KEYSTONE_CONF --debug"
+ run_process key "$KEYSTONE_DIR/bin/keystone-all --config-file $KEYSTONE_CONF --debug"
fi
echo "Waiting for keystone to start..."
@@ -491,10 +523,12 @@
# stop_keystone() - Stop running processes
function stop_keystone {
+ if [ "$KEYSTONE_USE_MOD_WSGI" == "True" ]; then
+ disable_apache_site keystone
+ restart_apache_server
+ fi
# Kill the Keystone screen window
- screen_stop key
- # Cleanup the WSGI files and VHOST
- _cleanup_keystone_apache_wsgi
+ stop_process key
}
function is_keystone_enabled {
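The consolidation into `configure_auth_token_middleware` above can be sketched outside DevStack with a stub `iniset`. The stub, the helper name, and the variable values below are illustrative only, not DevStack's real implementation:

```shell
# Stub of DevStack's iniset (file section key value) that prints the
# call instead of editing an INI file -- for illustration only.
iniset() { echo "$1 $2 $3 $4"; }

# Assumed values for the demo.
KEYSTONE_AUTH_URI=http://127.0.0.1:35357
SERVICE_TENANT_NAME=service
SERVICE_PASSWORD=secret

# Simplified shape of configure_auth_token_middleware: one helper sets
# the whole auth_token block for any service, instead of each lib/*
# file repeating the same five iniset calls.
demo_auth_token_middleware() {
    local conf_file=$1 admin_user=$2 signing_dir=$3
    local section=${4:-keystone_authtoken}
    iniset $conf_file $section identity_uri $KEYSTONE_AUTH_URI
    iniset $conf_file $section admin_tenant_name $SERVICE_TENANT_NAME
    iniset $conf_file $section admin_user $admin_user
    iniset $conf_file $section admin_password $SERVICE_PASSWORD
    iniset $conf_file $section signing_dir $signing_dir
}

demo_auth_token_middleware glance-api.conf glance /var/cache/glance/api
```

Only the admin user and signing dir vary per service, which is why the diff can delete half a dozen lines from each lib file and replace them with one call.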
diff --git a/lib/ldap b/lib/ldap
index efe2f09..2bb8a4c 100644
--- a/lib/ldap
+++ b/lib/ldap
@@ -79,7 +79,7 @@
function init_ldap {
local keystone_ldif
- TMP_LDAP_DIR=$(mktemp -d -t ldap.$$.XXXXXXXXXX)
+ local tmp_ldap_dir=$(mktemp -d -t ldap.$$.XXXXXXXXXX)
# Remove data but not schemas
clear_ldap_state
@@ -91,17 +91,17 @@
printf "Configuring LDAP for $LDAP_BASE_DC\n"
# If BASE_DN is changed, the user may override the default file
if [[ -r $FILES/ldap/${LDAP_BASE_DC}.ldif.in ]]; then
- keystone_ldif=${LDAP_BASE_DC}.ldif
+ local keystone_ldif=${LDAP_BASE_DC}.ldif
else
- keystone_ldif=keystone.ldif
+ local keystone_ldif=keystone.ldif
fi
- _ldap_varsubst $FILES/ldap/${keystone_ldif}.in >$TMP_LDAP_DIR/${keystone_ldif}
- if [[ -r $TMP_LDAP_DIR/${keystone_ldif} ]]; then
- ldapadd -x -w $LDAP_PASSWORD -D "$LDAP_MANAGER_DN" -H $LDAP_URL -c -f $TMP_LDAP_DIR/${keystone_ldif}
+ _ldap_varsubst $FILES/ldap/${keystone_ldif}.in >$tmp_ldap_dir/${keystone_ldif}
+ if [[ -r $tmp_ldap_dir/${keystone_ldif} ]]; then
+ ldapadd -x -w $LDAP_PASSWORD -D "$LDAP_MANAGER_DN" -H $LDAP_URL -c -f $tmp_ldap_dir/${keystone_ldif}
fi
fi
- rm -rf TMP_LDAP_DIR
+ rm -rf $tmp_ldap_dir
}
# install_ldap
@@ -110,7 +110,7 @@
echo "Installing LDAP inside function"
echo "os_VENDOR is $os_VENDOR"
- TMP_LDAP_DIR=$(mktemp -d -t ldap.$$.XXXXXXXXXX)
+ local tmp_ldap_dir=$(mktemp -d -t ldap.$$.XXXXXXXXXX)
printf "installing OpenLDAP"
if is_ubuntu; then
@@ -119,19 +119,19 @@
elif is_fedora; then
start_ldap
elif is_suse; then
- _ldap_varsubst $FILES/ldap/suse-base-config.ldif.in >$TMP_LDAP_DIR/suse-base-config.ldif
- sudo slapadd -F /etc/openldap/slapd.d/ -bcn=config -l $TMP_LDAP_DIR/suse-base-config.ldif
+ _ldap_varsubst $FILES/ldap/suse-base-config.ldif.in >$tmp_ldap_dir/suse-base-config.ldif
+ sudo slapadd -F /etc/openldap/slapd.d/ -bcn=config -l $tmp_ldap_dir/suse-base-config.ldif
sudo sed -i '/^OPENLDAP_START_LDAPI=/s/"no"/"yes"/g' /etc/sysconfig/openldap
start_ldap
fi
echo "LDAP_PASSWORD is $LDAP_PASSWORD"
- SLAPPASS=$(slappasswd -s $LDAP_PASSWORD)
- printf "LDAP secret is $SLAPPASS\n"
+ local slappass=$(slappasswd -s $LDAP_PASSWORD)
+ printf "LDAP secret is $slappass\n"
# Create manager.ldif and add to olcdb
- _ldap_varsubst $FILES/ldap/manager.ldif.in >$TMP_LDAP_DIR/manager.ldif
- sudo ldapmodify -Y EXTERNAL -H ldapi:/// -f $TMP_LDAP_DIR/manager.ldif
+ _ldap_varsubst $FILES/ldap/manager.ldif.in >$tmp_ldap_dir/manager.ldif
+ sudo ldapmodify -Y EXTERNAL -H ldapi:/// -f $tmp_ldap_dir/manager.ldif
# On fedora we need to manually add cosine and inetorgperson schemas
if is_fedora; then
@@ -139,7 +139,7 @@
sudo ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/inetorgperson.ldif
fi
- rm -rf TMP_LDAP_DIR
+ rm -rf $tmp_ldap_dir
}
# start_ldap() - Start LDAP
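The `rm -rf TMP_LDAP_DIR` fix above is worth spelling out: without the `$`, bash targets a file literally named `TMP_LDAP_DIR` in the current directory and leaves the temp dir behind on every run. A minimal standalone sketch:

```shell
# Create a scratch dir the same way init_ldap does.
tmp_ldap_dir=$(mktemp -d -t ldap.demo.XXXXXXXXXX)

# The old bug: no "$", so this removes nothing useful and the dir leaks.
rm -rf tmp_ldap_dir
[ -d "$tmp_ldap_dir" ] && echo "temp dir leaked"

# The fix: expand the variable so the actual directory is removed.
rm -rf "$tmp_ldap_dir"
[ -d "$tmp_ldap_dir" ] || echo "temp dir removed"
```

The same hunk also makes the variable `local` and lowercase, so it no longer collides with other functions using the same scratch-dir name.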
diff --git a/lib/neutron b/lib/neutron
index a00664e..96cd47b 100644
--- a/lib/neutron
+++ b/lib/neutron
@@ -447,20 +447,20 @@
# Migrated from keystone_data.sh
function create_neutron_accounts {
- SERVICE_TENANT=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
- ADMIN_ROLE=$(openstack role list | awk "/ admin / { print \$2 }")
+ local service_tenant=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
+ local admin_role=$(openstack role list | awk "/ admin / { print \$2 }")
if [[ "$ENABLED_SERVICES" =~ "q-svc" ]]; then
- NEUTRON_USER=$(get_or_create_user "neutron" \
- "$SERVICE_PASSWORD" $SERVICE_TENANT)
- get_or_add_user_role $ADMIN_ROLE $NEUTRON_USER $SERVICE_TENANT
+ local neutron_user=$(get_or_create_user "neutron" \
+ "$SERVICE_PASSWORD" $service_tenant)
+ get_or_add_user_role $admin_role $neutron_user $service_tenant
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
- NEUTRON_SERVICE=$(get_or_create_service "neutron" \
+ local neutron_service=$(get_or_create_service "neutron" \
"network" "Neutron Service")
- get_or_create_endpoint $NEUTRON_SERVICE \
+ get_or_create_endpoint $neutron_service \
"$REGION_NAME" \
"http://$SERVICE_HOST:$Q_PORT/" \
"http://$SERVICE_HOST:$Q_PORT/" \
@@ -492,9 +492,9 @@
sudo ifconfig $OVS_PHYSICAL_BRIDGE up
sudo route add default gw $NETWORK_GATEWAY dev $OVS_PHYSICAL_BRIDGE
elif is_provider_network; then
- die_if_not_set $LINENO SEGMENTATION_ID "A SEGMENTATION_ID is required to use provider networking"
+ die_if_not_set $LINENO PHYSICAL_NETWORK "You must specify the PHYSICAL_NETWORK"
die_if_not_set $LINENO PROVIDER_NETWORK_TYPE "You must specify the PROVIDER_NETWORK_TYPE"
- NET_ID=$(neutron net-create $PHYSICAL_NETWORK --tenant_id $TENANT_ID --provider:network_type $PROVIDER_NETWORK_TYPE --provider:physical_network "$PHYSICAL_NETWORK" --provider:segmentation_id "$SEGMENTATION_ID" --shared | grep ' id ' | get_field 2)
+ NET_ID=$(neutron net-create $PHYSICAL_NETWORK --tenant_id $TENANT_ID --provider:network_type $PROVIDER_NETWORK_TYPE --provider:physical_network "$PHYSICAL_NETWORK" ${SEGMENTATION_ID:+--provider:segmentation_id $SEGMENTATION_ID} --shared | grep ' id ' | get_field 2)
SUBNET_ID=$(neutron subnet-create --tenant_id $TENANT_ID --ip_version 4 ${ALLOCATION_POOL:+--allocation-pool $ALLOCATION_POOL} --name $PROVIDER_SUBNET_NAME $NET_ID $FIXED_RANGE | grep ' id ' | get_field 2)
SUBNET_V6_ID=$(neutron subnet-create --tenant_id $TENANT_ID --ip_version 6 --ipv6-address-mode slaac --gateway $V6_NETWORK_GATEWAY --name $PROVIDER_SUBNET_NAME_V6 $NET_ID $FIXED_RANGE_V6 | grep 'id' | get_field 2)
sudo ip link set $OVS_PHYSICAL_BRIDGE up
@@ -591,7 +591,7 @@
function start_neutron_service_and_check {
local cfg_file_options="$(determine_config_files neutron-server)"
# Start the Neutron service
- screen_it q-svc "cd $NEUTRON_DIR && python $NEUTRON_BIN_DIR/neutron-server $cfg_file_options"
+ run_process q-svc "python $NEUTRON_BIN_DIR/neutron-server $cfg_file_options"
echo "Waiting for Neutron to start..."
if ! timeout $SERVICE_TIMEOUT sh -c "while ! wget --no-proxy -q -O- http://$Q_HOST:$Q_PORT; do sleep 1; done"; then
die $LINENO "Neutron did not start"
@@ -601,8 +601,8 @@
# Start running processes, including screen
function start_neutron_agents {
# Start up the neutron agents if enabled
- screen_it q-agt "cd $NEUTRON_DIR && python $AGENT_BINARY --config-file $NEUTRON_CONF --config-file /$Q_PLUGIN_CONF_FILE"
- screen_it q-dhcp "cd $NEUTRON_DIR && python $AGENT_DHCP_BINARY --config-file $NEUTRON_CONF --config-file=$Q_DHCP_CONF_FILE"
+ run_process q-agt "python $AGENT_BINARY --config-file $NEUTRON_CONF --config-file /$Q_PLUGIN_CONF_FILE"
+ run_process q-dhcp "python $AGENT_DHCP_BINARY --config-file $NEUTRON_CONF --config-file=$Q_DHCP_CONF_FILE"
if is_provider_network; then
sudo ovs-vsctl add-port $OVS_PHYSICAL_BRIDGE $PUBLIC_INTERFACE
@@ -612,24 +612,24 @@
fi
if is_service_enabled q-vpn; then
- screen_it q-vpn "cd $NEUTRON_DIR && $AGENT_VPN_BINARY $(determine_config_files neutron-vpn-agent)"
+ run_process q-vpn "$AGENT_VPN_BINARY $(determine_config_files neutron-vpn-agent)"
else
- screen_it q-l3 "cd $NEUTRON_DIR && python $AGENT_L3_BINARY $(determine_config_files neutron-l3-agent)"
+ run_process q-l3 "python $AGENT_L3_BINARY $(determine_config_files neutron-l3-agent)"
fi
- screen_it q-meta "cd $NEUTRON_DIR && python $AGENT_META_BINARY --config-file $NEUTRON_CONF --config-file=$Q_META_CONF_FILE"
+ run_process q-meta "python $AGENT_META_BINARY --config-file $NEUTRON_CONF --config-file=$Q_META_CONF_FILE"
if [ "$VIRT_DRIVER" = 'xenserver' ]; then
# For XenServer, start an agent for the domU openvswitch
- screen_it q-domua "cd $NEUTRON_DIR && python $AGENT_BINARY --config-file $NEUTRON_CONF --config-file /$Q_PLUGIN_CONF_FILE.domU"
+ run_process q-domua "python $AGENT_BINARY --config-file $NEUTRON_CONF --config-file /$Q_PLUGIN_CONF_FILE.domU"
fi
if is_service_enabled q-lbaas; then
- screen_it q-lbaas "cd $NEUTRON_DIR && python $AGENT_LBAAS_BINARY --config-file $NEUTRON_CONF --config-file=$LBAAS_AGENT_CONF_FILENAME"
+ run_process q-lbaas "python $AGENT_LBAAS_BINARY --config-file $NEUTRON_CONF --config-file=$LBAAS_AGENT_CONF_FILENAME"
fi
if is_service_enabled q-metering; then
- screen_it q-metering "cd $NEUTRON_DIR && python $AGENT_METERING_BINARY --config-file $NEUTRON_CONF --config-file $METERING_AGENT_CONF_FILENAME"
+ run_process q-metering "python $AGENT_METERING_BINARY --config-file $NEUTRON_CONF --config-file $METERING_AGENT_CONF_FILENAME"
fi
}
@@ -794,12 +794,12 @@
iniset $Q_META_CONF_FILE DEFAULT nova_metadata_ip $Q_META_DATA_IP
iniset $Q_META_CONF_FILE DEFAULT root_helper "$Q_RR_COMMAND"
- _neutron_setup_keystone $Q_META_CONF_FILE DEFAULT True True
+ _neutron_setup_keystone $Q_META_CONF_FILE DEFAULT
}
function _configure_neutron_ceilometer_notifications {
- iniset $NEUTRON_CONF DEFAULT notification_driver neutron.openstack.common.notifier.rpc_notifier
+ iniset $NEUTRON_CONF DEFAULT notification_driver messaging
}
function _configure_neutron_lbaas {
@@ -936,19 +936,9 @@
function _neutron_setup_keystone {
local conf_file=$1
local section=$2
- local use_auth_url=$3
- local skip_auth_cache=$4
- iniset $conf_file $section auth_uri $KEYSTONE_SERVICE_URI
- iniset $conf_file $section identity_uri $KEYSTONE_AUTH_URI
- iniset $conf_file $section admin_tenant_name $SERVICE_TENANT_NAME
- iniset $conf_file $section admin_user $Q_ADMIN_USERNAME
- iniset $conf_file $section admin_password $SERVICE_PASSWORD
- if [[ -z $skip_auth_cache ]]; then
- iniset $conf_file $section signing_dir $NEUTRON_AUTH_CACHE_DIR
- # Create cache dir
- create_neutron_cache_dir
- fi
+ create_neutron_cache_dir
+ configure_auth_token_middleware $conf_file $Q_ADMIN_USERNAME $NEUTRON_AUTH_CACHE_DIR $section
}
function _neutron_setup_interface_driver {
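The `${SEGMENTATION_ID:+--provider:segmentation_id $SEGMENTATION_ID}` change above relies on bash's `:+` parameter expansion, which emits the alternate text only when the variable is set and non-empty — that is what makes the segmentation id optional for flat provider networks. A standalone sketch (the `build_net_create_args` helper is illustrative, not part of DevStack):

```shell
# Emit the --provider:segmentation_id flag only when an id was supplied.
build_net_create_args() {
    local seg_id=$1
    echo "net-create physnet1${seg_id:+ --provider:segmentation_id $seg_id}"
}

build_net_create_args ""     # -> net-create physnet1
build_net_create_args 100    # -> net-create physnet1 --provider:segmentation_id 100
```

The same idiom already guards `${ALLOCATION_POOL:+--allocation-pool $ALLOCATION_POOL}` on the subnet-create line, so the two optional arguments now behave consistently.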
diff --git a/lib/neutron_plugins/ofagent_agent b/lib/neutron_plugins/ofagent_agent
index b4c2ada..a5a58f4 100644
--- a/lib/neutron_plugins/ofagent_agent
+++ b/lib/neutron_plugins/ofagent_agent
@@ -34,10 +34,18 @@
iniset $Q_L3_CONF_FILE DEFAULT l3_agent_manager neutron.agent.l3_agent.L3NATAgentWithStateReport
}
+function _neutron_ofagent_configure_firewall_driver {
+ if [[ "$Q_USE_SECGROUP" == "True" ]]; then
+ iniset /$Q_PLUGIN_CONF_FILE securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
+ else
+ iniset /$Q_PLUGIN_CONF_FILE securitygroup firewall_driver neutron.agent.firewall.NoopFirewallDriver
+ fi
+}
+
function neutron_plugin_configure_plugin_agent {
# Set up integration bridge
_neutron_ovs_base_setup_bridge $OVS_BRIDGE
- _neutron_ovs_base_configure_firewall_driver
+ _neutron_ofagent_configure_firewall_driver
# Check a supported openflow version
OF_VERSION=`ovs-ofctl --version | grep "OpenFlow versions" | awk '{print $3}' | cut -d':' -f2`
diff --git a/lib/neutron_thirdparty/README.md b/lib/neutron_thirdparty/README.md
index 2460e5c..5655e0b 100644
--- a/lib/neutron_thirdparty/README.md
+++ b/lib/neutron_thirdparty/README.md
@@ -28,12 +28,14 @@
git clone xxx
* ``start_<third_party>``:
- start running processes, including screen
+ start running processes, including screen if USE_SCREEN=True
e.g.
- screen_it XXXX "cd $XXXXY_DIR && $XXXX_DIR/bin/XXXX-bin"
+ run_process XXXX "$XXXX_DIR/bin/XXXX-bin"
* ``stop_<third_party>``:
stop running processes (non-screen)
+ e.g.
+ stop_process XXXX
* ``check_<third_party>``:
verify that the integration between neutron server and third-party components is sane
diff --git a/lib/neutron_thirdparty/ryu b/lib/neutron_thirdparty/ryu
index c737600..233f3aa 100644
--- a/lib/neutron_thirdparty/ryu
+++ b/lib/neutron_thirdparty/ryu
@@ -64,7 +64,7 @@
}
function start_ryu {
- screen_it ryu "cd $RYU_DIR && $RYU_DIR/bin/ryu-manager --config-file $RYU_CONF"
+ run_process ryu "$RYU_DIR/bin/ryu-manager --config-file $RYU_CONF"
}
function stop_ryu {
diff --git a/lib/nova b/lib/nova
index b3a586c..2a3aae1 100644
--- a/lib/nova
+++ b/lib/nova
@@ -39,6 +39,7 @@
NOVA_CONF_DIR=/etc/nova
NOVA_CONF=$NOVA_CONF_DIR/nova.conf
NOVA_CELLS_CONF=$NOVA_CONF_DIR/nova-cells.conf
+NOVA_FAKE_CONF=$NOVA_CONF_DIR/nova-fake.conf
NOVA_CELLS_DB=${NOVA_CELLS_DB:-nova_cell}
NOVA_API_PASTE_INI=${NOVA_API_PASTE_INI:-$NOVA_CONF_DIR/api-paste.ini}
@@ -405,6 +406,7 @@
iniset $NOVA_CONF DEFAULT debug "$ENABLE_DEBUG_LOG_LEVEL"
iniset $NOVA_CONF DEFAULT auth_strategy "keystone"
iniset $NOVA_CONF DEFAULT allow_resize_to_same_host "True"
+ iniset $NOVA_CONF DEFAULT allow_migrate_to_same_host "True"
iniset $NOVA_CONF DEFAULT api_paste_config "$NOVA_API_PASTE_INI"
iniset $NOVA_CONF DEFAULT rootwrap_config "$NOVA_CONF_DIR/rootwrap.conf"
iniset $NOVA_CONF DEFAULT scheduler_driver "$SCHEDULER"
@@ -436,17 +438,9 @@
iniset $NOVA_CONF DEFAULT osapi_compute_listen_port "$NOVA_SERVICE_PORT_INT"
fi
- # Add keystone authtoken configuration
-
- iniset $NOVA_CONF keystone_authtoken identity_uri $KEYSTONE_AUTH_URI
- iniset $NOVA_CONF keystone_authtoken admin_tenant_name $SERVICE_TENANT_NAME
- iniset $NOVA_CONF keystone_authtoken cafile $KEYSTONE_SSL_CA
- iniset $NOVA_CONF keystone_authtoken admin_user nova
- iniset $NOVA_CONF keystone_authtoken admin_password $SERVICE_PASSWORD
+ configure_auth_token_middleware $NOVA_CONF nova $NOVA_AUTH_CACHE_DIR
fi
- iniset $NOVA_CONF keystone_authtoken signing_dir $NOVA_AUTH_CACHE_DIR
-
if [ -n "$NOVA_STATE_PATH" ]; then
iniset $NOVA_CONF DEFAULT state_path "$NOVA_STATE_PATH"
iniset $NOVA_CONF DEFAULT lock_path "$NOVA_STATE_PATH"
@@ -516,6 +510,10 @@
iniset $NOVA_CONF DEFAULT ec2_dmz_host "$EC2_DMZ_HOST"
iniset_rpc_backend nova $NOVA_CONF DEFAULT
iniset $NOVA_CONF glance api_servers "$GLANCE_HOSTPORT"
+
+ iniset $NOVA_CONF DEFAULT osapi_compute_workers "$API_WORKERS"
+ iniset $NOVA_CONF DEFAULT ec2_workers "$API_WORKERS"
+ iniset $NOVA_CONF DEFAULT metadata_workers "$API_WORKERS"
}
function init_nova_cells {
@@ -648,7 +646,7 @@
service_port=$NOVA_SERVICE_PORT_INT
fi
- screen_it n-api "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-api"
+ run_process n-api "$NOVA_BIN_DIR/nova-api"
echo "Waiting for nova-api to start..."
if ! wait_for_service $SERVICE_TIMEOUT http://$SERVICE_HOST:$service_port; then
die $LINENO "nova-api did not start"
@@ -670,18 +668,24 @@
if [[ "$VIRT_DRIVER" = 'libvirt' ]]; then
# The group **$LIBVIRT_GROUP** is added to the current user in this script.
- # Use 'sg' to execute nova-compute as a member of the **$LIBVIRT_GROUP** group.
- screen_it n-cpu "cd $NOVA_DIR && sg $LIBVIRT_GROUP '$NOVA_BIN_DIR/nova-compute --config-file $compute_cell_conf'"
+ # 'sg' will be used in run_process to execute nova-compute as a member of the
+ # **$LIBVIRT_GROUP** group.
+ run_process n-cpu "$NOVA_BIN_DIR/nova-compute --config-file $compute_cell_conf" $LIBVIRT_GROUP
elif [[ "$VIRT_DRIVER" = 'fake' ]]; then
local i
for i in `seq 1 $NUMBER_FAKE_NOVA_COMPUTE`; do
- screen_it n-cpu "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-compute --config-file $compute_cell_conf --config-file <(echo -e '[DEFAULT]\nhost=${HOSTNAME}${i}')"
+ # Avoid process redirection of fake host configurations by
+ # creating or modifying real configurations. Each fake host
+ # gets its own configuration and its own log file.
+ local fake_conf="${NOVA_FAKE_CONF}-${i}"
+ iniset $fake_conf DEFAULT host "${HOSTNAME}${i}"
+ run_process "n-cpu-${i}" "$NOVA_BIN_DIR/nova-compute --config-file $compute_cell_conf --config-file $fake_conf"
done
else
if is_service_enabled n-cpu && [[ -r $NOVA_PLUGINS/hypervisor-$VIRT_DRIVER ]]; then
start_nova_hypervisor
fi
- screen_it n-cpu "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-compute --config-file $compute_cell_conf"
+ run_process n-cpu "$NOVA_BIN_DIR/nova-compute --config-file $compute_cell_conf"
fi
}
@@ -694,25 +698,25 @@
local compute_cell_conf=$NOVA_CONF
fi
- # ``screen_it`` checks ``is_service_enabled``, it is not needed here
- screen_it n-cond "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-conductor --config-file $compute_cell_conf"
- screen_it n-cell-region "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-cells --config-file $api_cell_conf"
- screen_it n-cell-child "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-cells --config-file $compute_cell_conf"
+ # ``run_process`` checks ``is_service_enabled``, it is not needed here
+ run_process n-cond "$NOVA_BIN_DIR/nova-conductor --config-file $compute_cell_conf"
+ run_process n-cell-region "$NOVA_BIN_DIR/nova-cells --config-file $api_cell_conf"
+ run_process n-cell-child "$NOVA_BIN_DIR/nova-cells --config-file $compute_cell_conf"
- screen_it n-crt "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-cert --config-file $api_cell_conf"
- screen_it n-net "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-network --config-file $compute_cell_conf"
- screen_it n-sch "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-scheduler --config-file $compute_cell_conf"
- screen_it n-api-meta "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-api-metadata --config-file $compute_cell_conf"
+ run_process n-crt "$NOVA_BIN_DIR/nova-cert --config-file $api_cell_conf"
+ run_process n-net "$NOVA_BIN_DIR/nova-network --config-file $compute_cell_conf"
+ run_process n-sch "$NOVA_BIN_DIR/nova-scheduler --config-file $compute_cell_conf"
+ run_process n-api-meta "$NOVA_BIN_DIR/nova-api-metadata --config-file $compute_cell_conf"
- screen_it n-novnc "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-novncproxy --config-file $api_cell_conf --web $NOVNC_WEB_DIR"
- screen_it n-xvnc "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-xvpvncproxy --config-file $api_cell_conf"
- screen_it n-spice "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-spicehtml5proxy --config-file $api_cell_conf --web $SPICE_WEB_DIR"
- screen_it n-cauth "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-consoleauth --config-file $api_cell_conf"
+ run_process n-novnc "$NOVA_BIN_DIR/nova-novncproxy --config-file $api_cell_conf --web $NOVNC_WEB_DIR"
+ run_process n-xvnc "$NOVA_BIN_DIR/nova-xvpvncproxy --config-file $api_cell_conf"
+ run_process n-spice "$NOVA_BIN_DIR/nova-spicehtml5proxy --config-file $api_cell_conf --web $SPICE_WEB_DIR"
+ run_process n-cauth "$NOVA_BIN_DIR/nova-consoleauth --config-file $api_cell_conf"
# Starting the nova-objectstore only if swift3 service is not enabled.
# Swift will act as s3 objectstore.
is_service_enabled swift3 || \
- screen_it n-obj "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-objectstore --config-file $api_cell_conf"
+ run_process n-obj "$NOVA_BIN_DIR/nova-objectstore --config-file $api_cell_conf"
}
function start_nova {
@@ -721,7 +725,7 @@
}
function stop_nova_compute {
- screen_stop n-cpu
+ stop_process n-cpu
if is_service_enabled n-cpu && [[ -r $NOVA_PLUGINS/hypervisor-$VIRT_DRIVER ]]; then
stop_nova_hypervisor
fi
@@ -732,7 +736,7 @@
# Some services are listed here twice since more than one instance
# of a service may be running in certain configs.
for serv in n-api n-crt n-net n-sch n-novnc n-xvnc n-cauth n-spice n-cond n-cell n-cell n-api-meta n-obj; do
- screen_stop $serv
+ stop_process $serv
done
}
diff --git a/lib/nova_plugins/functions-libvirt b/lib/nova_plugins/functions-libvirt
index f722836..6b9db48 100644
--- a/lib/nova_plugins/functions-libvirt
+++ b/lib/nova_plugins/functions-libvirt
@@ -57,7 +57,9 @@
EOF
fi
- if [ "$os_VENDOR" = "Ubuntu" ]; then
+ # Since the release of Debian Wheezy the libvirt init script is libvirtd
+ # and not libvirt-bin anymore.
+ if is_ubuntu && [ ! -f /etc/init.d/libvirtd ]; then
LIBVIRT_DAEMON=libvirt-bin
else
LIBVIRT_DAEMON=libvirtd
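The daemon-name probe above can be exercised without touching the real `/etc` by parameterizing the root directory. This is a sketch under the assumption that only the init-script path drives the decision (the real code also gates on `is_ubuntu`); the function name and temp paths are made up for illustration:

```shell
#!/bin/bash
# Sketch of the libvirt daemon-name selection: probe for the modern
# init script and fall back to the older Ubuntu name if absent.
pick_libvirt_daemon() {
    local root=${1:-}
    if [ -f "$root/etc/init.d/libvirtd" ]; then
        echo libvirtd          # modern Debian/Ubuntu service name
    else
        echo libvirt-bin       # older Ubuntu service name
    fi
}

# Exercise both branches with a fake root tree.
mkdir -p /tmp/fakeroot/etc/init.d
touch /tmp/fakeroot/etc/init.d/libvirtd
pick_libvirt_daemon /tmp/fakeroot    # -> libvirtd
pick_libvirt_daemon /tmp/emptyroot   # -> libvirt-bin
```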
diff --git a/lib/nova_plugins/hypervisor-fake b/lib/nova_plugins/hypervisor-fake
index e7a833f..dc93633 100644
--- a/lib/nova_plugins/hypervisor-fake
+++ b/lib/nova_plugins/hypervisor-fake
@@ -47,7 +47,7 @@
iniset $NOVA_CONF DEFAULT quota_security_groups -1
iniset $NOVA_CONF DEFAULT quota_security_group_rules -1
iniset $NOVA_CONF DEFAULT quota_key_pairs -1
- iniset $NOVA_CONF DEFAULT scheduler_default_filters "RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter"
+ iniset $NOVA_CONF DEFAULT scheduler_default_filters "RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,RamFilter,DiskFilter"
}
# install_nova_hypervisor() - Install external components
diff --git a/lib/nova_plugins/hypervisor-ironic b/lib/nova_plugins/hypervisor-ironic
index 344ef04..4004cc9 100644
--- a/lib/nova_plugins/hypervisor-ironic
+++ b/lib/nova_plugins/hypervisor-ironic
@@ -37,12 +37,9 @@
configure_libvirt
LIBVIRT_FIREWALL_DRIVER=${LIBVIRT_FIREWALL_DRIVER:-"nova.virt.firewall.NoopFirewallDriver"}
- # NOTE(adam_g): The ironic compute driver currently lives in the ironic
- # tree. We purposely configure Nova to load it from there until it moves
- # back into Nova proper.
- iniset $NOVA_CONF DEFAULT compute_driver ironic.nova.virt.ironic.IronicDriver
+ iniset $NOVA_CONF DEFAULT compute_driver nova.virt.ironic.IronicDriver
iniset $NOVA_CONF DEFAULT firewall_driver $LIBVIRT_FIREWALL_DRIVER
- iniset $NOVA_CONF DEFAULT scheduler_host_manager ironic.nova.scheduler.ironic_host_manager.IronicHostManager
+ iniset $NOVA_CONF DEFAULT scheduler_host_manager nova.scheduler.ironic_host_manager.IronicHostManager
iniset $NOVA_CONF DEFAULT ram_allocation_ratio 1.0
iniset $NOVA_CONF DEFAULT reserved_host_memory_mb 0
# ironic section
@@ -51,7 +48,6 @@
iniset $NOVA_CONF ironic admin_url $KEYSTONE_AUTH_URI/v2.0
iniset $NOVA_CONF ironic admin_tenant_name demo
iniset $NOVA_CONF ironic api_endpoint http://$SERVICE_HOST:6385/v1
- iniset $NOVA_CONF ironic sql_connection `database_connection_url nova_bm`
}
# install_nova_hypervisor() - Install external components
diff --git a/lib/opendaylight b/lib/opendaylight
index 33b3f0a..1541ac1 100644
--- a/lib/opendaylight
+++ b/lib/opendaylight
@@ -139,6 +139,8 @@
# The flags to ODL have the following meaning:
# -of13: runs ODL using OpenFlow 1.3 protocol support.
# -virt ovsdb: Runs ODL in "virtualization" mode with OVSDB support
+ # NOTE(chdent): Leaving this as screen_it instead of run_process until
+ # the right thing for this service is determined.
screen_it odl-server "cd $ODL_DIR/opendaylight && JAVA_HOME=$JHOME ./run.sh $ODL_ARGS -of13 -virt ovsdb"
# Sleep a bit to let OpenDaylight finish starting up
@@ -147,7 +149,7 @@
# stop_opendaylight() - Stop running processes (non-screen)
function stop_opendaylight {
- screen_stop odl-server
+ stop_process odl-server
}
# stop_opendaylight-compute() - Remove OVS bridges
diff --git a/lib/rpc_backend b/lib/rpc_backend
index 38da50c..f2d2859 100644
--- a/lib/rpc_backend
+++ b/lib/rpc_backend
@@ -6,6 +6,7 @@
#
# - ``functions`` file
# - ``RABBIT_{HOST|PASSWORD}`` must be defined when RabbitMQ is used
+# - ``RPC_MESSAGING_PROTOCOL`` option for configuring the messaging protocol
# ``stack.sh`` calls the entry points in this order:
#
@@ -90,21 +91,56 @@
exit_distro_not_supported "zeromq installation"
fi
fi
+
+ # Remove the AMQP 1.0 messaging libraries
+ if [ "$RPC_MESSAGING_PROTOCOL" == "AMQP1" ]; then
+ if is_fedora; then
+ uninstall_package qpid-proton-c-devel
+ uninstall_package python-qpid-proton
+ fi
+ # TODO(kgiusti) ubuntu cleanup
+ fi
}
# install rpc backend
function install_rpc_backend {
+ # Regardless of the broker used, if AMQP 1.0 is configured load
+ # the necessary messaging client libraries for oslo.messaging
+ if [ "$RPC_MESSAGING_PROTOCOL" == "AMQP1" ]; then
+ if is_fedora; then
+ install_package qpid-proton-c-devel
+ install_package python-qpid-proton
+ elif is_ubuntu; then
+ # TODO(kgiusti) The QPID AMQP 1.0 protocol libraries
+ # are not yet in the ubuntu repos. Enable these installs
+ # once they are present:
+ #install_package libqpid-proton2-dev
+ #install_package python-qpid-proton
+ # Also add 'uninstall' directives in cleanup_rpc_backend()!
+ exit_distro_not_supported "QPID AMQP 1.0 Proton libraries"
+ else
+ exit_distro_not_supported "QPID AMQP 1.0 Proton libraries"
+ fi
+ # Install pyngus client API
+ # TODO(kgiusti) can remove once python qpid bindings are
+ # available on all supported platforms _and_ pyngus is added
+ # to the requirements.txt file in oslo.messaging
+ pip_install pyngus
+ fi
+
if is_service_enabled rabbit; then
# Install rabbitmq-server
install_package rabbitmq-server
elif is_service_enabled qpid; then
+ local qpid_conf_file=/etc/qpid/qpidd.conf
if is_fedora; then
install_package qpid-cpp-server
if [[ $DISTRO =~ (rhel6) ]]; then
+ qpid_conf_file=/etc/qpidd.conf
# RHEL6 leaves "auth=yes" in /etc/qpidd.conf, it needs to
# be no or you get GSS authentication errors as it
# attempts to default to this.
- sudo sed -i.bak 's/^auth=yes$/auth=no/' /etc/qpidd.conf
+ sudo sed -i.bak 's/^auth=yes$/auth=no/' $qpid_conf_file
fi
elif is_ubuntu; then
install_package qpidd
@@ -113,6 +149,22 @@
else
exit_distro_not_supported "qpid installation"
fi
+ # If AMQP 1.0 is specified, ensure that the version of the
+ # broker can support AMQP 1.0 and configure the queue and
+ # topic address patterns used by oslo.messaging.
+ if [ "$RPC_MESSAGING_PROTOCOL" == "AMQP1" ]; then
+ QPIDD=$(type -p qpidd)
+ if ! $QPIDD --help | grep -q "queue-patterns"; then
+ exit_distro_not_supported "qpidd with AMQP 1.0 support"
+ fi
+ if ! grep -q "queue-patterns=exclusive" $qpid_conf_file; then
+ cat <<EOF | sudo tee --append $qpid_conf_file
+queue-patterns=exclusive
+queue-patterns=unicast
+topic-patterns=broadcast
+EOF
+ fi
+ fi
elif is_service_enabled zeromq; then
# NOTE(ewindisch): Redis is not strictly necessary
# but there is a matchmaker driver that works
@@ -130,6 +182,11 @@
sudo mkdir -p /var/run/openstack
sudo chown $STACK_USER /var/run/openstack
fi
+
+ # If using the QPID broker, install the QPID python client API
+ if is_service_enabled qpid || [ -n "$QPID_HOST" ]; then
+ install_package python-qpid
+ fi
}
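The grep-guarded heredoc append in the qpidd configuration above is an idempotency pattern: the patterns are only written if the sentinel line is absent, so re-running the installer does not duplicate them. A minimal sketch against a temp file (the real target is `$qpid_conf_file`):

```shell
#!/bin/bash
# Sketch of the guarded append: the heredoc is written at most once,
# so repeated configuration runs stay idempotent.
conf=/tmp/qpidd-test.conf
: > "$conf"

append_patterns() {
    if ! grep -q "queue-patterns=exclusive" "$conf"; then
        cat <<EOF >> "$conf"
queue-patterns=exclusive
queue-patterns=unicast
topic-patterns=broadcast
EOF
    fi
}

append_patterns
append_patterns   # second call is a no-op
grep -c "queue-patterns=exclusive" "$conf"   # -> 1
```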
# restart the rpc backend
@@ -176,7 +233,12 @@
MATCHMAKER_REDIS_HOST=${MATCHMAKER_REDIS_HOST:-127.0.0.1}
iniset $file matchmaker_redis host $MATCHMAKER_REDIS_HOST
elif is_service_enabled qpid || [ -n "$QPID_HOST" ]; then
- iniset $file $section rpc_backend ${package}.openstack.common.rpc.impl_qpid
+ # For Qpid use the 'amqp' oslo.messaging transport when AMQP 1.0 is used
+ if [ "$RPC_MESSAGING_PROTOCOL" == "AMQP1" ]; then
+ iniset $file $section rpc_backend "amqp"
+ else
+ iniset $file $section rpc_backend ${package}.openstack.common.rpc.impl_qpid
+ fi
iniset $file $section qpid_hostname ${QPID_HOST:-$SERVICE_HOST}
if is_ubuntu; then
QPID_PASSWORD=`sudo strings /etc/qpid/qpidd.sasldb | grep -B1 admin | head -1`
diff --git a/lib/sahara b/lib/sahara
index 70319d9..5c7c253 100644
--- a/lib/sahara
+++ b/lib/sahara
@@ -106,16 +106,7 @@
sudo chown $STACK_USER $SAHARA_AUTH_CACHE_DIR
rm -rf $SAHARA_AUTH_CACHE_DIR/*
- # Set actual keystone auth configs
- iniset $SAHARA_CONF_FILE keystone_authtoken auth_uri $KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_SERVICE_PORT/
- iniset $SAHARA_CONF_FILE keystone_authtoken auth_host $KEYSTONE_AUTH_HOST
- iniset $SAHARA_CONF_FILE keystone_authtoken auth_port $KEYSTONE_AUTH_PORT
- iniset $SAHARA_CONF_FILE keystone_authtoken auth_protocol $KEYSTONE_AUTH_PROTOCOL
- iniset $SAHARA_CONF_FILE keystone_authtoken admin_tenant_name $SERVICE_TENANT_NAME
- iniset $SAHARA_CONF_FILE keystone_authtoken admin_user sahara
- iniset $SAHARA_CONF_FILE keystone_authtoken admin_password $SERVICE_PASSWORD
- iniset $SAHARA_CONF_FILE keystone_authtoken signing_dir $SAHARA_AUTH_CACHE_DIR
- iniset $SAHARA_CONF_FILE keystone_authtoken cafile $KEYSTONE_SSL_CA
+ configure_auth_token_middleware $SAHARA_CONF_FILE sahara $SAHARA_AUTH_CACHE_DIR
# Set configuration to send notifications
@@ -168,7 +159,7 @@
# start_sahara() - Start running processes, including screen
function start_sahara {
- screen_it sahara "cd $SAHARA_DIR && $SAHARA_BIN_DIR/sahara-all --config-file $SAHARA_CONF_FILE"
+ run_process sahara "$SAHARA_BIN_DIR/sahara-all --config-file $SAHARA_CONF_FILE"
}
# stop_sahara() - Stop running processes
diff --git a/lib/swift b/lib/swift
index 6b96348..3c31dd2 100644
--- a/lib/swift
+++ b/lib/swift
@@ -269,7 +269,7 @@
iniset ${swift_node_config} DEFAULT log_facility LOG_LOCAL${log_facility}
iniuncomment ${swift_node_config} DEFAULT workers
- iniset ${swift_node_config} DEFAULT workers 1
+ iniset ${swift_node_config} DEFAULT workers ${API_WORKERS:-1}
iniuncomment ${swift_node_config} DEFAULT disable_fallocate
iniset ${swift_node_config} DEFAULT disable_fallocate true
@@ -382,15 +382,7 @@
# Configure Keystone
sed -i '/^# \[filter:authtoken\]/,/^# \[filter:keystoneauth\]$/ s/^#[ \t]*//' ${SWIFT_CONFIG_PROXY_SERVER}
- iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken auth_host $KEYSTONE_AUTH_HOST
- iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken auth_port $KEYSTONE_AUTH_PORT
- iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken auth_protocol $KEYSTONE_AUTH_PROTOCOL
- iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken cafile $KEYSTONE_SSL_CA
- iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken auth_uri $KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_SERVICE_PORT/
- iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken admin_tenant_name $SERVICE_TENANT_NAME
- iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken admin_user swift
- iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken admin_password $SERVICE_PASSWORD
- iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken signing_dir $SWIFT_AUTH_CACHE_DIR
+ configure_auth_token_middleware ${SWIFT_CONFIG_PROXY_SERVER} swift $SWIFT_AUTH_CACHE_DIR filter:authtoken
# This causes the authtoken middleware to use the same python logging
# adapter provided by the swift proxy-server, so that request transaction
# IDs will be included in all of its log messages.
@@ -426,7 +418,7 @@
for node_number in ${SWIFT_REPLICAS_SEQ}; do
local swift_node_config=${SWIFT_CONF_DIR}/object-server/${node_number}.conf
cp ${SWIFT_DIR}/etc/object-server.conf-sample ${swift_node_config}
- generate_swift_config ${swift_node_config} ${node_number} $[OBJECT_PORT_BASE + 10 * (node_number - 1)] object
+ generate_swift_config ${swift_node_config} ${node_number} $(( OBJECT_PORT_BASE + 10 * (node_number - 1) )) object
iniset ${swift_node_config} filter:recon recon_cache_path ${SWIFT_DATA_DIR}/cache
# Using a sed and not iniset/iniuncomment because we want to a global
# modification and make sure it works for new sections.
@@ -434,14 +426,14 @@
swift_node_config=${SWIFT_CONF_DIR}/container-server/${node_number}.conf
cp ${SWIFT_DIR}/etc/container-server.conf-sample ${swift_node_config}
- generate_swift_config ${swift_node_config} ${node_number} $[CONTAINER_PORT_BASE + 10 * (node_number - 1)] container
+ generate_swift_config ${swift_node_config} ${node_number} $(( CONTAINER_PORT_BASE + 10 * (node_number - 1) )) container
iniuncomment ${swift_node_config} app:container-server allow_versions
iniset ${swift_node_config} app:container-server allow_versions "true"
sed -i -e "s,#[ ]*recon_cache_path .*,recon_cache_path = ${SWIFT_DATA_DIR}/cache," ${swift_node_config}
swift_node_config=${SWIFT_CONF_DIR}/account-server/${node_number}.conf
cp ${SWIFT_DIR}/etc/account-server.conf-sample ${swift_node_config}
- generate_swift_config ${swift_node_config} ${node_number} $[ACCOUNT_PORT_BASE + 10 * (node_number - 1)] account
+ generate_swift_config ${swift_node_config} ${node_number} $(( ACCOUNT_PORT_BASE + 10 * (node_number - 1) )) account
sed -i -e "s,#[ ]*recon_cache_path .*,recon_cache_path = ${SWIFT_DATA_DIR}/cache," ${swift_node_config}
done
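The port calculations above replace the obsolete bash-only `$[ ... ]` form with POSIX `$(( ... ))`; both evaluate identically, so the change is purely cosmetic. The port base below is illustrative, not Swift's actual default:

```shell
#!/bin/bash
# The per-node port arithmetic, in the POSIX $(( )) form the diff
# switches to: node N gets base + 10 * (N - 1).
OBJECT_PORT_BASE=6613
node_number=3
port=$(( OBJECT_PORT_BASE + 10 * (node_number - 1) ))
echo $port   # -> 6633
```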
@@ -556,6 +548,7 @@
local service_tenant=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
local admin_role=$(openstack role list | awk "/ admin / { print \$2 }")
+ local another_role=$(openstack role list | awk "/ anotherrole / { print \$2 }")
local swift_user=$(get_or_create_user "swift" \
"$SERVICE_PASSWORD" $service_tenant)
@@ -582,7 +575,7 @@
local swift_user_test3=$(get_or_create_user swiftusertest3 $swiftusertest3_password \
"$swift_tenant_test1" "test3@example.com")
die_if_not_set $LINENO swift_user_test3 "Failure creating swift_user_test3"
- get_or_add_user_role $ANOTHER_ROLE $swift_user_test3 $swift_tenant_test1
+ get_or_add_user_role $another_role $swift_user_test3 $swift_tenant_test1
local swift_tenant_test2=$(get_or_create_project swifttenanttest2)
die_if_not_set $LINENO swift_tenant_test2 "Failure creating swift_tenant_test2"
@@ -613,9 +606,9 @@
swift-ring-builder account.builder create ${SWIFT_PARTITION_POWER_SIZE} ${SWIFT_REPLICAS} 1
for node_number in ${SWIFT_REPLICAS_SEQ}; do
- swift-ring-builder object.builder add z${node_number}-127.0.0.1:$[OBJECT_PORT_BASE + 10 * (node_number - 1)]/sdb1 1
- swift-ring-builder container.builder add z${node_number}-127.0.0.1:$[CONTAINER_PORT_BASE + 10 * (node_number - 1)]/sdb1 1
- swift-ring-builder account.builder add z${node_number}-127.0.0.1:$[ACCOUNT_PORT_BASE + 10 * (node_number - 1)]/sdb1 1
+ swift-ring-builder object.builder add z${node_number}-127.0.0.1:$(( OBJECT_PORT_BASE + 10 * (node_number - 1) ))/sdb1 1
+ swift-ring-builder container.builder add z${node_number}-127.0.0.1:$(( CONTAINER_PORT_BASE + 10 * (node_number - 1) ))/sdb1 1
+ swift-ring-builder account.builder add z${node_number}-127.0.0.1:$(( ACCOUNT_PORT_BASE + 10 * (node_number - 1) ))/sdb1 1
done
swift-ring-builder object.builder rebalance
swift-ring-builder container.builder rebalance
@@ -658,10 +651,10 @@
if [ "$SWIFT_USE_MOD_WSGI" == "True" ]; then
restart_apache_server
swift-init --run-dir=${SWIFT_DATA_DIR}/run rest start
- screen_it s-proxy "cd $SWIFT_DIR && sudo tail -f /var/log/$APACHE_NAME/proxy-server"
+ tail_log s-proxy /var/log/$APACHE_NAME/proxy-server
if [[ ${SWIFT_REPLICAS} == 1 ]]; then
for type in object container account; do
- screen_it s-${type} "cd $SWIFT_DIR && sudo tail -f /var/log/$APACHE_NAME/${type}-server-1"
+ tail_log s-${type} /var/log/$APACHE_NAME/${type}-server-1
done
fi
return 0
@@ -682,10 +675,10 @@
for type in proxy ${todo}; do
swift-init --run-dir=${SWIFT_DATA_DIR}/run ${type} stop || true
done
- screen_it s-proxy "cd $SWIFT_DIR && $SWIFT_DIR/bin/swift-proxy-server ${SWIFT_CONF_DIR}/proxy-server.conf -v"
+ run_process s-proxy "$SWIFT_DIR/bin/swift-proxy-server ${SWIFT_CONF_DIR}/proxy-server.conf -v"
if [[ ${SWIFT_REPLICAS} == 1 ]]; then
for type in object container account; do
- screen_it s-${type} "cd $SWIFT_DIR && $SWIFT_DIR/bin/swift-${type}-server ${SWIFT_CONF_DIR}/${type}-server/1.conf -v"
+ run_process s-${type} "$SWIFT_DIR/bin/swift-${type}-server ${SWIFT_CONF_DIR}/${type}-server/1.conf -v"
done
fi
@@ -707,9 +700,9 @@
swift-init --run-dir=${SWIFT_DATA_DIR}/run all stop || true
fi
# Dump all of the servers
- # Maintain the iteration as screen_stop() has some desirable side-effects
+ # Maintain the iteration as stop_process() has some desirable side-effects
for type in proxy object container account; do
- screen_stop s-${type}
+ stop_process s-${type}
done
# Blast out any stragglers
pkill -f swift-
diff --git a/lib/tempest b/lib/tempest
index 2e8aa3e..906cb00 100644
--- a/lib/tempest
+++ b/lib/tempest
@@ -48,6 +48,7 @@
TEMPEST_CONFIG_DIR=${TEMPEST_CONFIG_DIR:-$TEMPEST_DIR/etc}
TEMPEST_CONFIG=$TEMPEST_CONFIG_DIR/tempest.conf
TEMPEST_STATE_PATH=${TEMPEST_STATE_PATH:=$DATA_DIR/tempest}
+TEMPEST_LIB_DIR=$DEST/tempest-lib
NOVA_SOURCE_DIR=$DEST/nova
@@ -294,6 +295,10 @@
iniset $TEMPEST_CONFIG compute-feature-enabled live_migration ${LIVE_MIGRATION_AVAILABLE:-False}
iniset $TEMPEST_CONFIG compute-feature-enabled change_password False
iniset $TEMPEST_CONFIG compute-feature-enabled block_migration_for_live_migration ${USE_BLOCK_MIGRATION_FOR_LIVE_MIGRATION:-False}
+ iniset $TEMPEST_CONFIG compute-feature-enabled api_extensions ${COMPUTE_API_EXTENSIONS:-"all"}
+ iniset $TEMPEST_CONFIG compute-feature-disabled api_extensions ${DISABLE_COMPUTE_API_EXTENSIONS}
+ iniset $TEMPEST_CONFIG compute-feature-enabled api_v3_extensions ${COMPUTE_API_V3_EXTENSIONS:-"all"}
+ iniset $TEMPEST_CONFIG compute-feature-disabled api_v3_extensions ${DISABLE_COMPUTE_API_V3_EXTENSIONS}
# Compute admin
iniset $TEMPEST_CONFIG "compute-admin" username $ADMIN_USERNAME
@@ -308,6 +313,8 @@
iniset $TEMPEST_CONFIG network default_network "$FIXED_RANGE"
iniset $TEMPEST_CONFIG network-feature-enabled ipv6 "$IPV6_ENABLED"
iniset $TEMPEST_CONFIG network-feature-enabled ipv6_subnet_attributes "$IPV6_SUBNET_ATTRIBUTES_ENABLED"
+ iniset $TEMPEST_CONFIG network-feature-enabled api_extensions ${NETWORK_API_EXTENSIONS:-"all"}
+ iniset $TEMPEST_CONFIG network-feature-disabled api_extensions ${DISABLE_NETWORK_API_EXTENSIONS}
# boto
iniset $TEMPEST_CONFIG boto ec2_url "http://$SERVICE_HOST:8773/services/Cloud"
@@ -348,7 +355,13 @@
# Once Tempest retires support for icehouse this flag can be removed.
iniset $TEMPEST_CONFIG telemetry too_slow_to_test "False"
+ # Object storage
+ iniset $TEMPEST_CONFIG object-storage-feature-enabled discoverable_apis ${OBJECT_STORAGE_API_EXTENSIONS:-"all"}
+ iniset $TEMPEST_CONFIG object-storage-feature-disabled discoverable_apis ${OBJECT_STORAGE_DISABLE_API_EXTENSIONS}
+
# Volume
+ iniset $TEMPEST_CONFIG volume-feature-enabled api_extensions ${VOLUME_API_EXTENSIONS:-"all"}
+ iniset $TEMPEST_CONFIG volume-feature-disabled api_extensions ${DISABLE_VOLUME_API_EXTENSIONS}
if ! is_service_enabled c-bak; then
iniset $TEMPEST_CONFIG volume-feature-enabled backup False
fi
@@ -371,9 +384,6 @@
# cli
iniset $TEMPEST_CONFIG cli cli_dir $NOVA_BIN_DIR
- # Networking
- iniset $TEMPEST_CONFIG network-feature-enabled api_extensions "${NETWORK_API_EXTENSIONS:-all}"
-
# Baremetal
if [ "$VIRT_DRIVER" = "ironic" ] ; then
iniset $TEMPEST_CONFIG baremetal driver_enabled True
@@ -419,8 +429,15 @@
fi
}
+# install_tempest_lib() - Collect source, prepare, and install tempest-lib
+function install_tempest_lib {
+ git_clone $TEMPEST_LIB_REPO $TEMPEST_LIB_DIR $TEMPEST_LIB_BRANCH
+ setup_develop $TEMPEST_LIB_DIR
+}
+
# install_tempest() - Collect source and prepare
function install_tempest {
+ install_tempest_lib
git_clone $TEMPEST_REPO $TEMPEST_DIR $TEMPEST_BRANCH
pip_install tox
}
@@ -437,6 +454,7 @@
if [ -f "$kernel" -a -f "$ramdisk" -a -f "$disk_image" -a "$VIRT_DRIVER" != "openvz" \
-a \( "$LIBVIRT_TYPE" != "lxc" -o "$VIRT_DRIVER" != "libvirt" \) ]; then
echo "Prepare aki/ari/ami Images"
+ mkdir -p $BOTO_MATERIALS_PATH
( #new namespace
# tenant:demo ; user: demo
source $TOP_DIR/accrc/demo/demo
diff --git a/lib/template b/lib/template
index efe5826..f77409b 100644
--- a/lib/template
+++ b/lib/template
@@ -75,13 +75,17 @@
# start_XXXX() - Start running processes, including screen
function start_XXXX {
- # screen_it XXXX "cd $XXXX_DIR && $XXXX_DIR/bin/XXXX-bin"
+ # The quoted command must be a single command and not include any
+ # shell metacharacters, redirections or shell builtins.
+ # run_process XXXX "$XXXX_DIR/bin/XXXX-bin"
:
}
# stop_XXXX() - Stop running processes (non-screen)
function stop_XXXX {
- # FIXME(dtroyer): stop only our screen screen window?
+ # for serv in serv-a serv-b; do
+ # stop_process $serv
+ # done
:
}
diff --git a/lib/trove b/lib/trove
index aa9442b..1d1b5f4 100644
--- a/lib/trove
+++ b/lib/trove
@@ -128,12 +128,7 @@
cp $TROVE_LOCAL_CONF_DIR/api-paste.ini $TROVE_CONF_DIR/api-paste.ini
TROVE_API_PASTE_INI=$TROVE_CONF_DIR/api-paste.ini
- iniset $TROVE_API_PASTE_INI filter:authtoken identity_uri $KEYSTONE_AUTH_URI
- iniset $TROVE_API_PASTE_INI filter:authtoken cafile $KEYSTONE_SSL_CA
- iniset $TROVE_API_PASTE_INI filter:authtoken admin_tenant_name $SERVICE_TENANT_NAME
- iniset $TROVE_API_PASTE_INI filter:authtoken admin_user trove
- iniset $TROVE_API_PASTE_INI filter:authtoken admin_password $SERVICE_PASSWORD
- iniset $TROVE_API_PASTE_INI filter:authtoken signing_dir $TROVE_AUTH_CACHE_DIR
+ configure_auth_token_middleware $TROVE_API_PASTE_INI trove $TROVE_AUTH_CACHE_DIR filter:authtoken
# (Re)create trove conf files
rm -f $TROVE_CONF_DIR/trove.conf
@@ -144,6 +139,8 @@
iniset $TROVE_CONF_DIR/trove.conf DEFAULT sql_connection `database_connection_url trove`
iniset $TROVE_CONF_DIR/trove.conf DEFAULT default_datastore $TROVE_DATASTORE_TYPE
setup_trove_logging $TROVE_CONF_DIR/trove.conf
+ iniset $TROVE_CONF_DIR/trove.conf DEFAULT trove_api_workers "$API_WORKERS"
+
# (Re)create trove taskmanager conf file if needed
if is_service_enabled tr-tmgr; then
@@ -228,9 +225,9 @@
# start_trove() - Start running processes, including screen
function start_trove {
- screen_it tr-api "cd $TROVE_DIR; $TROVE_BIN_DIR/trove-api --config-file=$TROVE_CONF_DIR/trove.conf --debug 2>&1"
- screen_it tr-tmgr "cd $TROVE_DIR; $TROVE_BIN_DIR/trove-taskmanager --config-file=$TROVE_CONF_DIR/trove-taskmanager.conf --debug 2>&1"
- screen_it tr-cond "cd $TROVE_DIR; $TROVE_BIN_DIR/trove-conductor --config-file=$TROVE_CONF_DIR/trove-conductor.conf --debug 2>&1"
+ run_process tr-api "$TROVE_BIN_DIR/trove-api --config-file=$TROVE_CONF_DIR/trove.conf --debug"
+ run_process tr-tmgr "$TROVE_BIN_DIR/trove-taskmanager --config-file=$TROVE_CONF_DIR/trove-taskmanager.conf --debug"
+ run_process tr-cond "$TROVE_BIN_DIR/trove-conductor --config-file=$TROVE_CONF_DIR/trove-conductor.conf --debug"
}
# stop_trove() - Stop running processes
@@ -238,7 +235,7 @@
# Kill the trove screen windows
local serv
for serv in tr-api tr-tmgr tr-cond; do
- screen_stop $serv
+ stop_process $serv
done
}
diff --git a/lib/zaqar b/lib/zaqar
index 0d33df2..93b727e 100644
--- a/lib/zaqar
+++ b/lib/zaqar
@@ -107,11 +107,7 @@
iniset $ZAQAR_CONF DEFAULT log_file $ZAQAR_API_LOG_FILE
iniset $ZAQAR_CONF 'drivers:transport:wsgi' bind $ZAQAR_SERVICE_HOST
- iniset $ZAQAR_CONF keystone_authtoken auth_protocol http
- iniset $ZAQAR_CONF keystone_authtoken admin_user zaqar
- iniset $ZAQAR_CONF keystone_authtoken admin_password $SERVICE_PASSWORD
- iniset $ZAQAR_CONF keystone_authtoken admin_tenant_name $SERVICE_TENANT_NAME
- iniset $ZAQAR_CONF keystone_authtoken signing_dir $ZAQAR_AUTH_CACHE_DIR
+ configure_auth_token_middleware $ZAQAR_CONF zaqar $ZAQAR_AUTH_CACHE_DIR
if [ "$ZAQAR_BACKEND" = 'mysql' ] || [ "$ZAQAR_BACKEND" = 'postgresql' ] ; then
iniset $ZAQAR_CONF drivers storage sqlalchemy
@@ -162,9 +158,9 @@
# start_zaqar() - Start running processes, including screen
function start_zaqar {
if [[ "$USE_SCREEN" = "False" ]]; then
- screen_it zaqar-server "zaqar-server --config-file $ZAQAR_CONF --daemon"
+ run_process zaqar-server "zaqar-server --config-file $ZAQAR_CONF --daemon"
else
- screen_it zaqar-server "zaqar-server --config-file $ZAQAR_CONF"
+ run_process zaqar-server "zaqar-server --config-file $ZAQAR_CONF"
fi
echo "Waiting for Zaqar to start..."
@@ -175,6 +171,7 @@
# stop_zaqar() - Stop running processes
function stop_zaqar {
+ local serv
# Kill the zaqar screen windows
for serv in zaqar-server; do
screen -S $SCREEN_NAME -p $serv -X kill
@@ -182,18 +179,18 @@
}
function create_zaqar_accounts {
- SERVICE_TENANT=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
+ local service_tenant=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
ADMIN_ROLE=$(openstack role list | awk "/ admin / { print \$2 }")
- ZAQAR_USER=$(get_or_create_user "zaqar" \
- "$SERVICE_PASSWORD" $SERVICE_TENANT)
- get_or_add_user_role $ADMIN_ROLE $ZAQAR_USER $SERVICE_TENANT
+ local zaqar_user=$(get_or_create_user "zaqar" \
+ "$SERVICE_PASSWORD" $service_tenant)
+ get_or_add_user_role $ADMIN_ROLE $zaqar_user $service_tenant
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
- ZAQAR_SERVICE=$(get_or_create_service "zaqar" \
- "queuing" "Zaqar Service")
- get_or_create_endpoint $ZAQAR_SERVICE \
+ local zaqar_service=$(get_or_create_service "zaqar" \
+ "messaging" "Zaqar Service")
+ get_or_create_endpoint $zaqar_service \
"$REGION_NAME" \
"$ZAQAR_SERVICE_PROTOCOL://$ZAQAR_SERVICE_HOST:$ZAQAR_SERVICE_PORT" \
"$ZAQAR_SERVICE_PROTOCOL://$ZAQAR_SERVICE_HOST:$ZAQAR_SERVICE_PORT" \
diff --git a/stack.sh b/stack.sh
index 7bfd072..c20e610 100755
--- a/stack.sh
+++ b/stack.sh
@@ -37,7 +37,6 @@
# Keep track of the devstack directory
TOP_DIR=$(cd $(dirname "$0") && pwd)
-
# Sanity Checks
# -------------
@@ -74,7 +73,6 @@
exit 1
fi
-
# Prepare the environment
# -----------------------
@@ -639,9 +637,9 @@
if [[ $r -ne 0 ]]; then
echo "Error on exit"
if [[ -z $LOGDIR ]]; then
- ./tools/worlddump.py
+ $TOP_DIR/tools/worlddump.py
else
- ./tools/worlddump.py -d $LOGDIR
+ $TOP_DIR/tools/worlddump.py -d $LOGDIR
fi
fi
@@ -1210,7 +1208,7 @@
if is_service_enabled zeromq; then
echo_summary "Starting zeromq receiver"
- screen_it zeromq "cd $NOVA_DIR && $OSLO_BIN_DIR/oslo-messaging-zmq-receiver"
+ run_process zeromq "$OSLO_BIN_DIR/oslo-messaging-zmq-receiver"
fi
# Launch the nova-api and wait for it to answer before continuing
@@ -1318,7 +1316,7 @@
fi
# ensure callback daemon is running
sudo pkill nova-baremetal-deploy-helper || true
- screen_it baremetal "cd ; nova-baremetal-deploy-helper"
+ run_process baremetal "nova-baremetal-deploy-helper"
fi
# Save some values we generated for later use
@@ -1456,7 +1454,7 @@
echo_summary "WARNING: CINDER_MULTI_LVM_BACKEND is used"
echo "You are using CINDER_MULTI_LVM_BACKEND to configure Cinder's multiple LVM backends"
echo "Please convert that configuration in local.conf to use CINDER_ENABLED_BACKENDS."
- echo "CINDER_ENABLED_BACKENDS will be removed early in the 'K' development cycle"
+ echo "CINDER_MULTI_LVM_BACKEND will be removed early in the 'K' development cycle"
echo "
[[local|localrc]]
CINDER_ENABLED_BACKENDS=lvm:lvmdriver-1,lvm:lvmdriver-2
diff --git a/stackrc b/stackrc
index ca28d32..580fabf 100644
--- a/stackrc
+++ b/stackrc
@@ -144,6 +144,9 @@
GLANCE_REPO=${GLANCE_REPO:-${GIT_BASE}/openstack/glance.git}
GLANCE_BRANCH=${GLANCE_BRANCH:-master}
+GLANCE_STORE_REPO=${GLANCE_STORE_REPO:-${GIT_BASE}/openstack/glance_store.git}
+GLANCE_STORE_BRANCH=${GLANCE_STORE_BRANCH:-master}
+
# python glance client library
GLANCECLIENT_REPO=${GLANCECLIENT_REPO:-${GIT_BASE}/openstack/python-glanceclient.git}
GLANCECLIENT_BRANCH=${GLANCECLIENT_BRANCH:-master}
@@ -172,9 +175,11 @@
HORIZONAUTH_REPO=${HORIZONAUTH_REPO:-${GIT_BASE}/openstack/django_openstack_auth.git}
HORIZONAUTH_BRANCH=${HORIZONAUTH_BRANCH:-master}
-# baremetal provisionint service
+# baremetal provisioning service
IRONIC_REPO=${IRONIC_REPO:-${GIT_BASE}/openstack/ironic.git}
IRONIC_BRANCH=${IRONIC_BRANCH:-master}
+IRONIC_PYTHON_AGENT_REPO=${IRONIC_PYTHON_AGENT_REPO:-${GIT_BASE}/openstack/ironic-python-agent.git}
+IRONIC_PYTHON_AGENT_BRANCH=${IRONIC_PYTHON_AGENT_BRANCH:-master}
# ironic client
IRONICCLIENT_REPO=${IRONICCLIENT_REPO:-${GIT_BASE}/openstack/python-ironicclient.git}
@@ -306,6 +311,9 @@
TEMPEST_REPO=${TEMPEST_REPO:-${GIT_BASE}/openstack/tempest.git}
TEMPEST_BRANCH=${TEMPEST_BRANCH:-master}
+TEMPEST_LIB_REPO=${TEMPEST_LIB_REPO:-${GIT_BASE}/openstack/tempest-lib.git}
+TEMPEST_LIB_BRANCH=${TEMPEST_LIB_BRANCH:-master}
+
# Tripleo elements for diskimage-builder images
TIE_REPO=${TIE_REPO:-${GIT_BASE}/openstack/tripleo-image-elements.git}
TIE_BRANCH=${TIE_BRANCH:-master}
@@ -503,6 +511,12 @@
# Allow the use of an alternate protocol (such as https) for service endpoints
SERVICE_PROTOCOL=${SERVICE_PROTOCOL:-http}
+# Sets the maximum number of workers for most services to reduce
+# the memory used where there are a large number of CPUs present
+# (the default number of workers for many services is the number of CPUs)
+# Also sets the minimum number of workers to 2.
+API_WORKERS=${API_WORKERS:=$(( ($(nproc)/2)<2 ? 2 : ($(nproc)/2) ))}
+
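The `API_WORKERS` default above is "half the CPU count, floored at 2". Factored into a function here so different CPU counts can be checked without depending on the machine running it (the function name is made up for the demo):

```shell
#!/bin/bash
# Sketch of the API_WORKERS default: cpus/2, but never below 2,
# using the same ternary arithmetic as the stackrc line above.
default_workers() {
    local cpus=$1
    echo $(( (cpus / 2) < 2 ? 2 : (cpus / 2) ))
}

default_workers 1    # -> 2   (floor kicks in)
default_workers 4    # -> 2
default_workers 16   # -> 8
```

In stackrc itself `nproc` supplies the CPU count, and the `${VAR:=...}` form means a user-set `API_WORKERS` wins over the computed default.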
# Local variables:
# mode: shell-script
# End:
diff --git a/tests/fake-service.sh b/tests/fake-service.sh
new file mode 100755
index 0000000..d4b9b56
--- /dev/null
+++ b/tests/fake-service.sh
@@ -0,0 +1,19 @@
+#!/bin/bash
+# fake-service.sh - a fake service for start/stop testing
+# $1 - sleep time
+
+SLEEP_TIME=${1:-3}
+
+LOG=/tmp/fake-service.log
+TIMESTAMP_FORMAT=${TIMESTAMP_FORMAT:-"%F-%H%M%S"}
+
+# duplicate output
+exec 1> >(tee -a ${LOG})
+
+echo ""
+echo "Starting fake-service for ${SLEEP_TIME}"
+while true; do
+ echo "$(date +${TIMESTAMP_FORMAT}) [$$]"
+ sleep ${SLEEP_TIME}
+done
+
diff --git a/tests/run-process.sh b/tests/run-process.sh
new file mode 100755
index 0000000..bdf1395
--- /dev/null
+++ b/tests/run-process.sh
@@ -0,0 +1,109 @@
+#!/bin/bash
+# tests/run-process.sh - Test DevStack run_process() and stop_process()
+#
+# run-process.sh start|stop|status
+#
+# Set USE_SCREEN True|False to change use of screen.
+#
+# This script emulates the basic exec environment in ``stack.sh`` to test
+# the process spawn and kill operations.
+
+if [[ -z $1 ]]; then
+ echo "$0 start|stop"
+ exit 1
+fi
+
+TOP_DIR=$(cd $(dirname "$0")/.. && pwd)
+source $TOP_DIR/functions
+
+USE_SCREEN=${USE_SCREEN:-False}
+
+ENABLED_SERVICES=fake-service
+
+SERVICE_DIR=/tmp
+SCREEN_NAME=test
+SCREEN_LOGDIR=${SERVICE_DIR}/${SCREEN_NAME}
+
+
+# Kill background processes on exit
+trap clean EXIT
+clean() {
+ local r=$?
+ jobs -p
+ kill >/dev/null 2>&1 $(jobs -p)
+ exit $r
+}
+
+
+# Exit on any errors so that errors don't compound
+trap failed ERR
+failed() {
+ local r=$?
+ jobs -p
+ kill >/dev/null 2>&1 $(jobs -p)
+ set +o xtrace
+ [ -n "$LOGFILE" ] && echo "${0##*/} failed: full log in $LOGFILE"
+ exit $r
+}
+
+function status {
+    if [[ -r $SERVICE_DIR/$SCREEN_NAME/fake-service.pid ]]; then
+        pstree -pg $(cat $SERVICE_DIR/$SCREEN_NAME/fake-service.pid)
+    fi
+    # [f]ake keeps the grep process itself out of the listing
+    ps -ef | grep "[f]ake"
+}
+
+function setup_screen {
+    if [[ ! -d $SERVICE_DIR/$SCREEN_NAME ]]; then
+        rm -rf $SERVICE_DIR/$SCREEN_NAME
+        mkdir -p $SERVICE_DIR/$SCREEN_NAME
+    fi
+
+    if [[ "$USE_SCREEN" == "True" ]]; then
+        # Create a new named screen to run processes in
+        screen -d -m -S $SCREEN_NAME -t shell -s /bin/bash
+        sleep 1
+
+        # Set a reasonable status bar
+        if [ -z "$SCREEN_HARDSTATUS" ]; then
+            SCREEN_HARDSTATUS='%{= .} %-Lw%{= .}%> %n%f %t*%{= .}%+Lw%< %-=%{g}(%{d}%H/%l%{g})'
+        fi
+        screen -r $SCREEN_NAME -X hardstatus alwayslastline "$SCREEN_HARDSTATUS"
+    fi
+
+    # Clear the screen rc file
+    SCREENRC=$TOP_DIR/tests/$SCREEN_NAME-screenrc
+    if [[ -e $SCREENRC ]]; then
+        echo -n > $SCREENRC
+    fi
+}
+
+# Mimic logging: set up output redirection without log files
+# Copy stdout to fd 3
+exec 3>&1
+if [[ "$VERBOSE" != "True" ]]; then
+    # Throw away stdout and stderr
+    #exec 1>/dev/null 2>&1
+    :
+fi
+# Always send summary fd to original stdout
+exec 6>&3
+
+
+if [[ "$1" == "start" ]]; then
+ echo "Start service"
+ setup_screen
+ run_process fake-service "$TOP_DIR/tests/fake-service.sh"
+ sleep 1
+ status
+elif [[ "$1" == "stop" ]]; then
+ echo "Stop service"
+ stop_process fake-service
+ status
+elif [[ "$1" == "status" ]]; then
+ status
+else
+ echo "Unknown command"
+ exit 1
+fi
diff --git a/tox.ini b/tox.ini
index c8a603b..325adae 100644
--- a/tox.ini
+++ b/tox.ini
@@ -11,3 +11,6 @@
deps = bashate
whitelist_externals = bash
commands = bash -c "find {toxinidir} -not -wholename \*.tox/\* -and \( -name \*.sh -or -name \*rc -or -name functions\* -or \( -wholename lib/\* -and -not -name \*.md \) \) -print0 | xargs -0 bashate -v"
+
+[testenv:docs]
+commands = python setup.py build_sphinx
diff --git a/unstack.sh b/unstack.sh
index 0457ef2..adb6dc1 100755
--- a/unstack.sh
+++ b/unstack.sh
@@ -55,6 +55,7 @@
source $TOP_DIR/lib/neutron
source $TOP_DIR/lib/baremetal
source $TOP_DIR/lib/ldap
+source $TOP_DIR/lib/dstat
# Extras Source
# --------------