Merge "allow for soft updating of global-requirements"
diff --git a/HACKING.rst b/HACKING.rst
index 3b86529..3ffe1e2 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -126,14 +126,9 @@
Documentation
-------------
-The official DevStack repo on GitHub does not include a gh-pages branch that
-GitHub uses to create static web sites. That branch is maintained in the
-`CloudBuilders DevStack repo`__ mirror that supports the
-http://devstack.org site. This is the primary DevStack
-documentation along with the DevStack scripts themselves.
-
-__ repo_
-.. _repo: https://github.com/cloudbuilders/devstack
+The DevStack repo now contains all of the static pages of devstack.org in
+the ``doc/source`` directory. The OpenStack CI system rebuilds the docs after every
+commit and updates devstack.org (now a redirect to docs.openstack.org/developer/devstack).
All of the scripts are processed with shocco_ to render them with the comments
as text describing the script below. For this reason we tend to be a little
@@ -144,6 +139,8 @@
.. _shocco: https://github.com/dtroyer/shocco/tree/rst_support
The script used to drive <code>shocco</code> is <code>tools/build_docs.sh</code>.
+The complete docs build is also handled with <code>tox -edocs</code> per the
+OpenStack project standard.
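As a hedged sketch of running that build locally (assuming tox is installed in the environment):
    # from a devstack checkout; same as the tox -edocs shorthand above
    tox -e docs
    # the rendered HTML typically lands under doc/build/html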
Exercises
@@ -235,8 +232,12 @@
collections of bash scripts. These should be considered as part of the
review process.
-We have a preliminary enforcing script for this called bash8 (only a
-small number of these rules are enforced).
+DevStack uses the bashate_ style checker to enforce basic guidelines,
+similar to the pep8 and flake8 tools for Python. The list below is not a
+complete description of what bashate checks, nor is everything listed here
+enforced by bashate. So many lines of code, so little time.
+
+.. _bashate: https://pypi.python.org/pypi/bashate
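A minimal local run of the checker might look like this (assuming bashate is installed from PyPI):
    pip install bashate
    # point it at one or more scripts; style violations are reported per line
    bashate stack.sh functions-common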
Whitespace Rules
----------------
diff --git a/doc/source/index.rst b/doc/source/index.rst
index 6f7cab6..37b365d 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -306,74 +306,40 @@
Exercises Generated documentation of DevStack scripts.
------------------------------------------------------
-Filename
-
-Link
-
-exercise.sh
-
-`Read » <exercise.sh.html>`__
-
-exercises/aggregates.sh
-
-`Read » <exercises/aggregates.sh.html>`__
-
-exercises/boot\_from\_volume.sh
-
-`Read » <exercises/boot_from_volume.sh.html>`__
-
-exercises/bundle.sh
-
-`Read » <exercises/bundle.sh.html>`__
-
-exercises/client-args.sh
-
-`Read » <exercises/client-args.sh.html>`__
-
-exercises/client-env.sh
-
-`Read » <exercises/client-env.sh.html>`__
-
-exercises/euca.sh
-
-`Read » <exercises/euca.sh.html>`__
-
-exercises/floating\_ips.sh
-
-`Read » <exercises/floating_ips.sh.html>`__
-
-exercises/horizon.sh
-
-`Read » <exercises/horizon.sh.html>`__
-
-exercises/neutron-adv-test.sh
-
-`Read » <exercises/neutron-adv-test.sh.html>`__
-
-exercises/sahara.sh
-
-`Read » <exercises/sahara.sh.html>`__
-
-exercises/savanna.sh
-
-`Read » <exercises/savanna.sh.html>`__
-
-exercises/sec\_groups.sh
-
-`Read » <exercises/sec_groups.sh.html>`__
-
-exercises/swift.sh
-
-`Read » <exercises/swift.sh.html>`__
-
-exercises/trove.sh
-
-`Read » <exercises/trove.sh.html>`__
-
-exercises/volumes.sh
-
-`Read » <exercises/volumes.sh.html>`__
-
-exercises/zaqar.sh
-
-`Read » <exercises/zaqar.sh.html>`__
++---------------------------------+-------------------------------------------------+
+| Filename | Link |
++=================================+=================================================+
+| exercise.sh | `Read » <exercise.sh.html>`__ |
++---------------------------------+-------------------------------------------------+
+| exercises/aggregates.sh | `Read » <exercises/aggregates.sh.html>`__ |
++---------------------------------+-------------------------------------------------+
+| exercises/boot\_from\_volume.sh | `Read » <exercises/boot_from_volume.sh.html>`__ |
++---------------------------------+-------------------------------------------------+
+| exercises/bundle.sh | `Read » <exercises/bundle.sh.html>`__ |
++---------------------------------+-------------------------------------------------+
+| exercises/client-args.sh | `Read » <exercises/client-args.sh.html>`__ |
++---------------------------------+-------------------------------------------------+
+| exercises/client-env.sh | `Read » <exercises/client-env.sh.html>`__ |
++---------------------------------+-------------------------------------------------+
+| exercises/euca.sh | `Read » <exercises/euca.sh.html>`__ |
++---------------------------------+-------------------------------------------------+
+| exercises/floating\_ips.sh | `Read » <exercises/floating_ips.sh.html>`__ |
++---------------------------------+-------------------------------------------------+
+| exercises/horizon.sh | `Read » <exercises/horizon.sh.html>`__ |
++---------------------------------+-------------------------------------------------+
+| exercises/neutron-adv-test.sh | `Read » <exercises/neutron-adv-test.sh.html>`__ |
++---------------------------------+-------------------------------------------------+
+| exercises/sahara.sh | `Read » <exercises/sahara.sh.html>`__ |
++---------------------------------+-------------------------------------------------+
+| exercises/savanna.sh | `Read » <exercises/savanna.sh.html>`__ |
++---------------------------------+-------------------------------------------------+
+| exercises/sec\_groups.sh | `Read » <exercises/sec_groups.sh.html>`__ |
++---------------------------------+-------------------------------------------------+
+| exercises/swift.sh | `Read » <exercises/swift.sh.html>`__ |
++---------------------------------+-------------------------------------------------+
+| exercises/trove.sh | `Read » <exercises/trove.sh.html>`__ |
++---------------------------------+-------------------------------------------------+
+| exercises/volumes.sh | `Read » <exercises/volumes.sh.html>`__ |
++---------------------------------+-------------------------------------------------+
+| exercises/zaqar.sh | `Read » <exercises/zaqar.sh.html>`__ |
++---------------------------------+-------------------------------------------------+
diff --git a/exercises/swift.sh b/exercises/swift.sh
index 25ea671..afcede8 100755
--- a/exercises/swift.sh
+++ b/exercises/swift.sh
@@ -45,16 +45,16 @@
# =============
# Check that we can access swift via keystone
-swift stat || die $LINENO "Failure geting status"
+swift stat || die $LINENO "Failure getting status"
# We start by creating a test container
-swift post $CONTAINER || die $LINENO "Failure creating container $CONTAINER"
+openstack container create $CONTAINER || die $LINENO "Failure creating container $CONTAINER"
# add some files into it.
-swift upload $CONTAINER /etc/issue || die $LINENO "Failure uploading file to container $CONTAINER"
+openstack object create $CONTAINER /etc/issue || die $LINENO "Failure uploading file to container $CONTAINER"
# list them
-swift list $CONTAINER || die $LINENO "Failure listing contents of container $CONTAINER"
+openstack object list $CONTAINER || die $LINENO "Failure listing contents of container $CONTAINER"
# And we may want to delete them now that we have tested that
# everything works.
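A hedged sketch of the matching cleanup with the unified client (assuming the object-store delete commands are available in this client version):
    openstack object delete $CONTAINER /etc/issue || die $LINENO "Failure deleting object in container $CONTAINER"
    openstack container delete $CONTAINER || die $LINENO "Failure deleting container $CONTAINER"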
diff --git a/extras.d/60-ceph.sh b/extras.d/60-ceph.sh
index 5fb34ea..50bdfae 100644
--- a/extras.d/60-ceph.sh
+++ b/extras.d/60-ceph.sh
@@ -26,8 +26,9 @@
if is_service_enabled cinder; then
echo_summary "Configuring Cinder for Ceph"
configure_ceph_cinder
- # NOTE (leseb): the part below is a requirement from Cinder in order to attach volumes
- # so we should run the following within the if statement.
+ fi
+ if is_service_enabled cinder || is_service_enabled nova; then
+ # NOTE (leseb): the part below is a requirement to attach Ceph block devices
echo_summary "Configuring libvirt secret"
import_libvirt_secret_ceph
fi
diff --git a/files/apts/qpid b/files/apts/qpid
new file mode 100644
index 0000000..e3bbf09
--- /dev/null
+++ b/files/apts/qpid
@@ -0,0 +1 @@
+sasl2-bin # NOPRIME
diff --git a/files/rpms/qpid b/files/rpms/qpid
index 62148ba..9e3f10a 100644
--- a/files/rpms/qpid
+++ b/files/rpms/qpid
@@ -1,3 +1,4 @@
qpid-proton-c-devel # NOPRIME
python-qpid-proton # NOPRIME
+cyrus-sasl-lib # NOPRIME
diff --git a/functions-common b/functions-common
index 9041439..48edba8 100644
--- a/functions-common
+++ b/functions-common
@@ -19,6 +19,7 @@
#
# The following variables are assumed to be defined by certain functions:
#
+# - ``GIT_DEPTH``
# - ``ENABLED_SERVICES``
# - ``ERROR_ON_CLONE``
# - ``FILES``
@@ -562,16 +563,22 @@
# Set global ``RECLONE=yes`` to simulate a clone when dest-dir exists
# Set global ``ERROR_ON_CLONE=True`` to abort execution with an error if the git repo
# does not exist (default is False, meaning the repo will be cloned).
-# Uses globals ``ERROR_ON_CLONE``, ``OFFLINE``, ``RECLONE``
+# Set global ``GIT_DEPTH=<number>`` to limit the history depth of the git clone
+# Uses globals ``ERROR_ON_CLONE``, ``OFFLINE``, ``RECLONE``, ``GIT_DEPTH``
# git_clone remote dest-dir branch
function git_clone {
local git_remote=$1
local git_dest=$2
local git_ref=$3
local orig_dir=$(pwd)
+ local git_clone_flags=""
RECLONE=$(trueorfalse False $RECLONE)
+ if [[ "$GIT_DEPTH" ]]; then
+ git_clone_flags="$git_clone_flags --depth $GIT_DEPTH"
+ fi
+
if [[ "$OFFLINE" = "True" ]]; then
echo "Running in offline mode, clones already exist"
# print out the results so we know what change was used in the logs
@@ -586,7 +593,7 @@
if [[ ! -d $git_dest ]]; then
[[ "$ERROR_ON_CLONE" = "True" ]] && \
die $LINENO "Cloning not allowed in this configuration"
- git_timed clone $git_remote $git_dest
+ git_timed clone $git_clone_flags $git_remote $git_dest
fi
cd $git_dest
git_timed fetch $git_remote $git_ref && git checkout FETCH_HEAD
@@ -595,7 +602,7 @@
if [[ ! -d $git_dest ]]; then
[[ "$ERROR_ON_CLONE" = "True" ]] && \
die $LINENO "Cloning not allowed in this configuration"
- git_timed clone $git_remote $git_dest
+ git_timed clone $git_clone_flags $git_remote $git_dest
cd $git_dest
# This checkout syntax works for both branches and tags
git checkout $git_ref
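With this change a shallow clone can be requested from local.conf, for example:
    [[local|localrc]]
    # clone only the most recent commit of each repository
    GIT_DEPTH=1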
@@ -790,38 +797,70 @@
mv ${tmpfile} ${policy_file}
}
+# Gets or creates a domain
+# Usage: get_or_create_domain <name> <description>
+function get_or_create_domain {
+ local os_url="$KEYSTONE_SERVICE_URI/v3"
+ # Gets domain id
+ local domain_id=$(
+ # Gets domain id
+ openstack --os-token=$OS_TOKEN --os-url=$os_url \
+ --os-identity-api-version=3 domain show $1 \
+ -f value -c id 2>/dev/null ||
+ # Creates new domain
+ openstack --os-token=$OS_TOKEN --os-url=$os_url \
+ --os-identity-api-version=3 domain create $1 \
+ --description "$2" \
+ -f value -c id
+ )
+ echo $domain_id
+}
+
# Gets or creates user
-# Usage: get_or_create_user <username> <password> <project> [<email>]
+# Usage: get_or_create_user <username> <password> <project> [<email> [<domain>]]
function get_or_create_user {
if [[ ! -z "$4" ]]; then
local email="--email=$4"
else
local email=""
fi
+ local os_cmd="openstack"
+ local domain=""
+ if [[ ! -z "$5" ]]; then
+ domain="--domain=$5"
+ os_cmd="$os_cmd --os-url=$KEYSTONE_SERVICE_URI/v3 --os-identity-api-version=3"
+ fi
# Gets user id
local user_id=$(
# Gets user id
- openstack user show $1 -f value -c id 2>/dev/null ||
+ $os_cmd user show $1 $domain -f value -c id 2>/dev/null ||
# Creates new user
- openstack user create \
+ $os_cmd user create \
$1 \
--password "$2" \
--project $3 \
$email \
+ $domain \
-f value -c id
)
echo $user_id
}
# Gets or creates project
-# Usage: get_or_create_project <name>
+# Usage: get_or_create_project <name> [<domain>]
function get_or_create_project {
# Gets project id
+ local os_cmd="openstack"
+ local domain=""
+ if [[ ! -z "$2" ]]; then
+ domain="--domain=$2"
+ os_cmd="$os_cmd --os-url=$KEYSTONE_SERVICE_URI/v3 --os-identity-api-version=3"
+ fi
local project_id=$(
# Gets project id
- openstack project show $1 -f value -c id 2>/dev/null ||
+ $os_cmd project show $1 $domain -f value -c id 2>/dev/null ||
# Creates new project if not exists
- openstack project create $1 -f value -c id
+ $os_cmd project create $1 $domain -f value -c id
)
echo $project_id
}
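An illustrative call sequence for the new domain-aware helpers (hypothetical names; OS_TOKEN and KEYSTONE_SERVICE_URI must already be set):
    test_domain=$(get_or_create_domain test_domain "Example domain")
    test_project=$(get_or_create_project test_project $test_domain)
    test_user=$(get_or_create_user tester examplepass $test_project "tester@example.com" $test_domain)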
diff --git a/lib/ceph b/lib/ceph
index 2e68ce5..e55738c 100644
--- a/lib/ceph
+++ b/lib/ceph
@@ -225,6 +225,11 @@
iniset $NOVA_CONF libvirt images_type rbd
iniset $NOVA_CONF libvirt images_rbd_pool ${NOVA_CEPH_POOL}
iniset $NOVA_CONF libvirt images_rbd_ceph_conf ${CEPH_CONF_FILE}
+
+ if ! is_service_enabled cinder; then
+ sudo ceph -c ${CEPH_CONF_FILE} auth get-or-create client.${CINDER_CEPH_USER} mon "allow r" osd "allow class-read object_prefix rbd_children, allow rwx pool=${CINDER_CEPH_POOL}, allow rwx pool=${NOVA_CEPH_POOL},allow rx pool=${GLANCE_CEPH_POOL}" | sudo tee ${CEPH_CONF_DIR}/ceph.client.${CINDER_CEPH_USER}.keyring > /dev/null
+ sudo chown ${STACK_USER}:$(id -g -n $whoami) ${CEPH_CONF_DIR}/ceph.client.${CINDER_CEPH_USER}.keyring
+ fi
}
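A quick check that the keyring created above is usable (a hedged sketch using the defaults from this file):
    # the key should exist in ceph and the on-disk keyring should be readable
    sudo ceph -c ${CEPH_CONF_FILE} auth get client.${CINDER_CEPH_USER}
    ls -l ${CEPH_CONF_DIR}/ceph.client.${CINDER_CEPH_USER}.keyring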
# configure_ceph_cinder() - Cinder config needs to come after Cinder is set up
diff --git a/lib/cinder_backends/netapp_iscsi b/lib/cinder_backends/netapp_iscsi
new file mode 100644
index 0000000..7a67da7
--- /dev/null
+++ b/lib/cinder_backends/netapp_iscsi
@@ -0,0 +1,64 @@
+# lib/cinder_backends/netapp_iscsi
+# Configure the NetApp iSCSI driver
+
+# Enable with:
+#
+# iSCSI:
+# CINDER_ENABLED_BACKENDS+=,netapp_iscsi:<volume-type-name>
+
+# Dependencies:
+#
+# - ``functions`` file
+# - ``cinder`` configurations
+
+# ``CINDER_CONF``
+# ``CINDER_CONF_DIR``
+# ``CINDER_ENABLED_BACKENDS``
+
+# configure_cinder_backend_netapp_iscsi - configure iSCSI
+
+# Save trace setting
+MY_XTRACE=$(set +o | grep xtrace)
+set +o xtrace
+
+
+# Entry Points
+# ------------
+
+# configure_cinder_backend_netapp_iscsi - Set config files, create data dirs, etc
+function configure_cinder_backend_netapp_iscsi {
+ # To use NetApp, set the following in local.conf:
+ # CINDER_ENABLED_BACKENDS+=,netapp_iscsi:<volume-type-name>
+ # NETAPP_MODE=ontap_7mode|ontap_cluster
+ # NETAPP_IP=<mgmt-ip>
+ # NETAPP_LOGIN=<admin-account>
+ # NETAPP_PASSWORD=<admin-password>
+ # NETAPP_ISCSI_VOLUME_LIST=<volumes>
+
+ # In ontap_cluster mode, the following also needs to be defined:
+ # NETAPP_ISCSI_VSERVER=<vserver-name>
+
+ local be_name=$1
+ iniset $CINDER_CONF $be_name volume_backend_name $be_name
+ iniset $CINDER_CONF $be_name volume_driver "cinder.volume.drivers.netapp.common.NetAppDriver"
+ iniset $CINDER_CONF $be_name netapp_storage_family ${NETAPP_MODE:-ontap_7mode}
+ iniset $CINDER_CONF $be_name netapp_server_hostname $NETAPP_IP
+ iniset $CINDER_CONF $be_name netapp_login $NETAPP_LOGIN
+ iniset $CINDER_CONF $be_name netapp_password $NETAPP_PASSWORD
+ iniset $CINDER_CONF $be_name netapp_volume_list $NETAPP_ISCSI_VOLUME_LIST
+
+ iniset $CINDER_CONF $be_name netapp_storage_protocol iscsi
+ iniset $CINDER_CONF $be_name netapp_transport_type https
+
+ if [[ "$NETAPP_MODE" == "ontap_cluster" ]]; then
+ iniset $CINDER_CONF $be_name netapp_vserver $NETAPP_ISCSI_VSERVER
+ fi
+}
+
+
+# Restore xtrace
+$MY_XTRACE
+
+# Local variables:
+# mode: shell-script
+# End:
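Following the comment block above, a hypothetical local.conf enabling this backend could read (all values are placeholders):
    [[local|localrc]]
    CINDER_ENABLED_BACKENDS+=,netapp_iscsi:netapp-iscsi-1
    NETAPP_MODE=ontap_cluster
    NETAPP_IP=192.0.2.50
    NETAPP_LOGIN=admin
    NETAPP_PASSWORD=example-password
    NETAPP_ISCSI_VOLUME_LIST=vol1
    NETAPP_ISCSI_VSERVER=vs0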
diff --git a/lib/cinder_backends/netapp_nfs b/lib/cinder_backends/netapp_nfs
new file mode 100644
index 0000000..d90b7f7
--- /dev/null
+++ b/lib/cinder_backends/netapp_nfs
@@ -0,0 +1,75 @@
+# lib/cinder_backends/netapp_nfs
+# Configure the NetApp NFS driver
+
+# Enable with:
+#
+# NFS:
+# CINDER_ENABLED_BACKENDS+=,netapp_nfs:<volume-type-name>
+
+# Dependencies:
+#
+# - ``functions`` file
+# - ``cinder`` configurations
+
+# ``CINDER_CONF``
+# ``CINDER_CONF_DIR``
+# ``CINDER_ENABLED_BACKENDS``
+
+# configure_cinder_backend_netapp_nfs - configure NFS
+
+# Save trace setting
+MY_XTRACE=$(set +o | grep xtrace)
+set +o xtrace
+
+
+# Entry Points
+# ------------
+
+# configure_cinder_backend_netapp_nfs - Set config files, create data dirs, etc
+function configure_cinder_backend_netapp_nfs {
+ # To use NetApp, set the following in local.conf:
+ # CINDER_ENABLED_BACKENDS+=,netapp_nfs:<volume-type-name>
+ # NETAPP_MODE=ontap_7mode|ontap_cluster
+ # NETAPP_IP=<mgmt-ip>
+ # NETAPP_LOGIN=<admin-account>
+ # NETAPP_PASSWORD=<admin-password>
+ # NETAPP_NFS_VOLUME_LIST=<export-volumes>
+
+ # In ontap_cluster mode, the following also needs to be defined:
+ # NETAPP_NFS_VSERVER=<vserver-name>
+
+ local be_name=$1
+ iniset $CINDER_CONF $be_name volume_backend_name $be_name
+ iniset $CINDER_CONF $be_name volume_driver "cinder.volume.drivers.netapp.common.NetAppDriver"
+ iniset $CINDER_CONF $be_name netapp_storage_family ${NETAPP_MODE:-ontap_7mode}
+ iniset $CINDER_CONF $be_name netapp_server_hostname $NETAPP_IP
+ iniset $CINDER_CONF $be_name netapp_login $NETAPP_LOGIN
+ iniset $CINDER_CONF $be_name netapp_password $NETAPP_PASSWORD
+
+ iniset $CINDER_CONF $be_name netapp_storage_protocol nfs
+ iniset $CINDER_CONF $be_name netapp_transport_type https
+ iniset $CINDER_CONF $be_name nfs_shares_config $CINDER_CONF_DIR/netapp_shares.conf
+
+ echo "$NETAPP_NFS_VOLUME_LIST" | tee "$CINDER_CONF_DIR/netapp_shares.conf"
+
+ if [[ "$NETAPP_MODE" == "ontap_cluster" ]]; then
+ iniset $CINDER_CONF $be_name netapp_vserver $NETAPP_NFS_VSERVER
+ fi
+}
+
+function cleanup_cinder_backend_netapp_nfs {
+ # Clean up remaining NFS mounts
+ # Be blunt and do them all
+ local m
+ for m in $CINDER_STATE_PATH/mnt/*; do
+ sudo umount $m
+ done
+}
+
+
+# Restore xtrace
+$MY_XTRACE
+
+# Local variables:
+# mode: shell-script
+# End:
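The NFS variant follows the same pattern; a placeholder local.conf per the comments above:
    [[local|localrc]]
    CINDER_ENABLED_BACKENDS+=,netapp_nfs:netapp-nfs-1
    NETAPP_MODE=ontap_cluster
    NETAPP_IP=192.0.2.50
    NETAPP_LOGIN=admin
    NETAPP_PASSWORD=example-password
    NETAPP_NFS_VOLUME_LIST=192.0.2.51:/export/cinder
    NETAPP_NFS_VSERVER=vs0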
diff --git a/lib/ldap b/lib/ldap
index 2bb8a4c..a6fb82f 100644
--- a/lib/ldap
+++ b/lib/ldap
@@ -139,6 +139,8 @@
sudo ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/inetorgperson.ldif
fi
+ pip_install ldappool
+
rm -rf $tmp_ldap_dir
}
diff --git a/lib/neutron_thirdparty/vmware_nsx b/lib/neutron_thirdparty/vmware_nsx
index 7a76570..7a20c64 100644
--- a/lib/neutron_thirdparty/vmware_nsx
+++ b/lib/neutron_thirdparty/vmware_nsx
@@ -8,7 +8,7 @@
# * enable_service vmware_nsx --> to execute this third-party addition
# * PUBLIC_BRIDGE --> bridge used for external connectivity, typically br-ex
# * NSX_GATEWAY_NETWORK_INTERFACE --> interface used to communicate with the NSX Gateway
-# * NSX_GATEWAY_NETWORK_CIDR --> CIDR to configure br-ex, e.g. 172.24.4.211/24
+# * NSX_GATEWAY_NETWORK_CIDR --> CIDR to configure $PUBLIC_BRIDGE, e.g. 172.24.4.211/24
# Save trace setting
NSX3_XTRACE=$(set +o | grep xtrace)
@@ -29,7 +29,7 @@
function init_vmware_nsx {
if ! is_set NSX_GATEWAY_NETWORK_CIDR; then
NSX_GATEWAY_NETWORK_CIDR=$PUBLIC_NETWORK_GATEWAY/${FLOATING_RANGE#*/}
- echo "The IP address to set on br-ex was not specified. "
+ echo "The IP address to set on $PUBLIC_BRIDGE was not specified. "
echo "Defaulting to "$NSX_GATEWAY_NETWORK_CIDR
fi
# Make sure the interface is up, but not configured
@@ -42,14 +42,15 @@
# only with mac learning enabled, portsecurity and security profiles disabled
# The public bridge might not exist for the NSX plugin if Q_USE_DEBUG_COMMAND is off
# Try to create it anyway
- sudo ovs-vsctl --no-wait -- --may-exist add-br $PUBLIC_BRIDGE
- sudo ovs-vsctl -- --may-exist add-port $PUBLIC_BRIDGE $NSX_GATEWAY_NETWORK_INTERFACE
+ sudo ovs-vsctl --may-exist add-br $PUBLIC_BRIDGE
+ sudo ovs-vsctl --may-exist add-port $PUBLIC_BRIDGE $NSX_GATEWAY_NETWORK_INTERFACE
nsx_gw_net_if_mac=$(ip link show $NSX_GATEWAY_NETWORK_INTERFACE | awk '/ether/ {print $2}')
sudo ip link set address $nsx_gw_net_if_mac dev $PUBLIC_BRIDGE
for address in $addresses; do
sudo ip addr add dev $PUBLIC_BRIDGE $address
done
sudo ip addr add dev $PUBLIC_BRIDGE $NSX_GATEWAY_NETWORK_CIDR
+ sudo ip link set $PUBLIC_BRIDGE up
}
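A quick sanity check after init (hedged; actual interface names depend on the deployment):
    # the gateway NIC should be a port on the bridge, and the bridge should be up with the CIDR
    sudo ovs-vsctl list-ports $PUBLIC_BRIDGE
    ip addr show dev $PUBLIC_BRIDGE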
function install_vmware_nsx {
@@ -63,7 +64,7 @@
function stop_vmware_nsx {
if ! is_set NSX_GATEWAY_NETWORK_CIDR; then
NSX_GATEWAY_NETWORK_CIDR=$PUBLIC_NETWORK_GATEWAY/${FLOATING_RANGE#*/}
- echo "The IP address expected on br-ex was not specified. "
+ echo "The IP address expected on $PUBLIC_BRIDGE was not specified. "
echo "Defaulting to "$NSX_GATEWAY_NETWORK_CIDR
fi
sudo ip addr del $NSX_GATEWAY_NETWORK_CIDR dev $PUBLIC_BRIDGE
diff --git a/lib/rpc_backend b/lib/rpc_backend
index de82fe1..14c78fb 100644
--- a/lib/rpc_backend
+++ b/lib/rpc_backend
@@ -132,39 +132,14 @@
# Install rabbitmq-server
install_package rabbitmq-server
elif is_service_enabled qpid; then
- local qpid_conf_file=/etc/qpid/qpidd.conf
if is_fedora; then
install_package qpid-cpp-server
- if [[ $DISTRO =~ (rhel6) ]]; then
- qpid_conf_file=/etc/qpidd.conf
- # RHEL6 leaves "auth=yes" in /etc/qpidd.conf, it needs to
- # be no or you get GSS authentication errors as it
- # attempts to default to this.
- sudo sed -i.bak 's/^auth=yes$/auth=no/' $qpid_conf_file
- fi
elif is_ubuntu; then
install_package qpidd
- sudo sed -i '/PLAIN/!s/mech_list: /mech_list: PLAIN /' /etc/sasl2/qpidd.conf
- sudo chmod o+r /etc/qpid/qpidd.sasldb
else
exit_distro_not_supported "qpid installation"
fi
- # If AMQP 1.0 is specified, ensure that the version of the
- # broker can support AMQP 1.0 and configure the queue and
- # topic address patterns used by oslo.messaging.
- if [ "$RPC_MESSAGING_PROTOCOL" == "AMQP1" ]; then
- QPIDD=$(type -p qpidd)
- if ! $QPIDD --help | grep -q "queue-patterns"; then
- exit_distro_not_supported "qpidd with AMQP 1.0 support"
- fi
- if ! grep -q "queue-patterns=exclusive" $qpid_conf_file; then
- cat <<EOF | sudo tee --append $qpid_conf_file
-queue-patterns=exclusive
-queue-patterns=unicast
-topic-patterns=broadcast
-EOF
- fi
- fi
+ _configure_qpid
elif is_service_enabled zeromq; then
# NOTE(ewindisch): Redis is not strictly necessary
# but there is a matchmaker driver that works
@@ -240,10 +215,9 @@
iniset $file $section rpc_backend ${package}.openstack.common.rpc.impl_qpid
fi
iniset $file $section qpid_hostname ${QPID_HOST:-$SERVICE_HOST}
- if is_ubuntu; then
- QPID_PASSWORD=`sudo strings /etc/qpid/qpidd.sasldb | grep -B1 admin | head -1`
+ if [ -n "$QPID_USERNAME" ]; then
+ iniset $file $section qpid_username $QPID_USERNAME
iniset $file $section qpid_password $QPID_PASSWORD
- iniset $file $section qpid_username admin
fi
elif is_service_enabled rabbit || { [ -n "$RABBIT_HOST" ] && [ -n "$RABBIT_PASSWORD" ]; }; then
iniset $file $section rpc_backend ${package}.openstack.common.rpc.impl_kombu
@@ -263,6 +237,83 @@
( ! is_suse )
}
+# Set up the various configuration files used by the qpidd broker
+function _configure_qpid {
+
+ # the location of the configuration files have changed since qpidd 0.14
+ local qpid_conf_file
+ if [ -e /etc/qpid/qpidd.conf ]; then
+ qpid_conf_file=/etc/qpid/qpidd.conf
+ elif [ -e /etc/qpidd.conf ]; then
+ qpid_conf_file=/etc/qpidd.conf
+ else
+ exit_distro_not_supported "qpidd.conf file not found!"
+ fi
+
+ # force the ACL file to a known location
+ local qpid_acl_file=/etc/qpid/qpidd.acl
+ if [ ! -e $qpid_acl_file ]; then
+ sudo mkdir -p -m 755 `dirname $qpid_acl_file`
+ sudo touch $qpid_acl_file
+ sudo chmod o+r $qpid_acl_file
+ fi
+ sudo sed -i.bak '/^acl-file=/d' $qpid_conf_file
+ echo "acl-file=$qpid_acl_file" | sudo tee --append $qpid_conf_file
+
+ sudo sed -i '/^auth=/d' $qpid_conf_file
+ if [ -z "$QPID_USERNAME" ]; then
+ # no QPID user configured, so disable authentication
+ # and access control
+ echo "auth=no" | sudo tee --append $qpid_conf_file
+ cat <<EOF | sudo tee $qpid_acl_file
+acl allow all all
+EOF
+ else
+ # Configure qpidd to use PLAIN authentication, and add
+ # QPID_USERNAME to the ACL:
+ echo "auth=yes" | sudo tee --append $qpid_conf_file
+ if [ -z "$QPID_PASSWORD" ]; then
+ read_password QPID_PASSWORD "ENTER A PASSWORD FOR QPID USER $QPID_USERNAME"
+ fi
+ # Create ACL to allow $QPID_USERNAME full access
+ cat <<EOF | sudo tee $qpid_acl_file
+group admin ${QPID_USERNAME}@QPID
+acl allow admin all
+acl deny all all
+EOF
+ # Add user to SASL database
+ if is_ubuntu; then
+ install_package sasl2-bin
+ elif is_fedora; then
+ install_package cyrus-sasl-lib
+ fi
+ local sasl_conf_file=/etc/sasl2/qpidd.conf
+ sudo sed -i.bak '/PLAIN/!s/mech_list: /mech_list: PLAIN /' $sasl_conf_file
+ local sasl_db=`sudo grep sasldb_path $sasl_conf_file | cut -f 2 -d ":" | tr -d [:blank:]`
+ if [ ! -e $sasl_db ]; then
+ sudo mkdir -p -m 755 `dirname $sasl_db`
+ fi
+ echo $QPID_PASSWORD | sudo saslpasswd2 -c -p -f $sasl_db -u QPID $QPID_USERNAME
+ sudo chmod o+r $sasl_db
+ fi
+
+ # If AMQP 1.0 is specified, ensure that the version of the
+ # broker can support AMQP 1.0 and configure the queue and
+ # topic address patterns used by oslo.messaging.
+ if [ "$RPC_MESSAGING_PROTOCOL" == "AMQP1" ]; then
+ QPIDD=$(type -p qpidd)
+ if ! $QPIDD --help | grep -q "queue-patterns"; then
+ exit_distro_not_supported "qpidd with AMQP 1.0 support"
+ fi
+ if ! grep -q "queue-patterns=exclusive" $qpid_conf_file; then
+ cat <<EOF | sudo tee --append $qpid_conf_file
+queue-patterns=exclusive
+queue-patterns=unicast
+topic-patterns=broadcast
+EOF
+ fi
+ fi
+}
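With _configure_qpid in place, an authenticated qpid deployment can be selected from local.conf along these lines (credentials are placeholders):
    [[local|localrc]]
    disable_service rabbit
    enable_service qpid
    QPID_USERNAME=devstack
    QPID_PASSWORD=example-password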
# Restore xtrace
$XTRACE
diff --git a/lib/swift b/lib/swift
index 15bd2a9..7ef4496 100644
--- a/lib/swift
+++ b/lib/swift
@@ -196,9 +196,9 @@
# copy apache vhost file and set name and port
local node_number
for node_number in ${SWIFT_REPLICAS_SEQ}; do
- local object_port=$[OBJECT_PORT_BASE + 10 * ($node_number - 1)]
- local container_port=$[CONTAINER_PORT_BASE + 10 * ($node_number - 1)]
- local account_port=$[ACCOUNT_PORT_BASE + 10 * ($node_number - 1)]
+ local object_port=$(( OBJECT_PORT_BASE + 10 * (node_number - 1) ))
+ local container_port=$(( CONTAINER_PORT_BASE + 10 * (node_number - 1) ))
+ local account_port=$(( ACCOUNT_PORT_BASE + 10 * (node_number - 1) ))
sudo cp ${SWIFT_DIR}/examples/apache2/object-server.template $(apache_site_config_for object-server-${node_number})
sudo sed -e "
@@ -257,7 +257,7 @@
local bind_port=$3
local server_type=$4
- log_facility=$[ node_id - 1 ]
+ log_facility=$(( node_id - 1 ))
local node_path=${SWIFT_DATA_DIR}/${node_number}
iniuncomment ${swift_node_config} DEFAULT user
@@ -330,7 +330,12 @@
SWIFT_CONFIG_PROXY_SERVER=${SWIFT_CONF_DIR}/proxy-server.conf
cp ${SWIFT_DIR}/etc/proxy-server.conf-sample ${SWIFT_CONFIG_PROXY_SERVER}
- cp ${SWIFT_DIR}/etc/container-sync-realms.conf-sample ${SWIFT_CONF_DIR}/container-sync-realms.conf
+ # To use the container sync feature introduced in Swift 1.12.0, a
+ # container sync "realm" is added to container-sync-realms.conf
+ local csyncfile=${SWIFT_CONF_DIR}/container-sync-realms.conf
+ cp ${SWIFT_DIR}/etc/container-sync-realms.conf-sample ${csyncfile}
+ iniset ${csyncfile} realm1 key realm1key
+ iniset ${csyncfile} realm1 cluster_name1 "$SWIFT_SERVICE_PROTOCOL://$SERVICE_HOST:8080/v1/"
iniuncomment ${SWIFT_CONFIG_PROXY_SERVER} DEFAULT user
iniset ${SWIFT_CONFIG_PROXY_SERVER} DEFAULT user ${STACK_USER}
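The two iniset calls above leave container-sync-realms.conf looking roughly like this (host and protocol are placeholders for SERVICE_HOST and SWIFT_SERVICE_PROTOCOL):
    [realm1]
    key = realm1key
    cluster_name1 = http://10.0.0.10:8080/v1/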
@@ -468,12 +473,21 @@
iniset ${testfile} func_test username3 swiftusertest3
iniset ${testfile} func_test account2 swifttenanttest2
iniset ${testfile} func_test username2 swiftusertest2
+ iniset ${testfile} func_test account4 swifttenanttest4
+ iniset ${testfile} func_test username4 swiftusertest4
+ iniset ${testfile} func_test password4 testing4
+ iniset ${testfile} func_test domain4 swift_test
if is_service_enabled key;then
iniuncomment ${testfile} func_test auth_version
+ local auth_vers=$(iniget ${testfile} func_test auth_version)
iniset ${testfile} func_test auth_host ${KEYSTONE_SERVICE_HOST}
iniset ${testfile} func_test auth_port ${KEYSTONE_AUTH_PORT}
- iniset ${testfile} func_test auth_prefix /v2.0/
+ if [[ $auth_vers == "3" ]]; then
+ iniset ${testfile} func_test auth_prefix /v3/
+ else
+ iniset ${testfile} func_test auth_prefix /v2.0/
+ fi
fi
local swift_log_dir=${SWIFT_DATA_DIR}/logs
@@ -548,12 +562,13 @@
# since we want to make it compatible with tempauth which use
# underscores for separators.
-# Tenant User Roles
+# Tenant User Roles Domain
# ------------------------------------------------------------------
-# service swift service
-# swifttenanttest1 swiftusertest1 admin
-# swifttenanttest1 swiftusertest3 anotherrole
-# swifttenanttest2 swiftusertest2 admin
+# service swift service default
+# swifttenanttest1 swiftusertest1 admin default
+# swifttenanttest1 swiftusertest3 anotherrole default
+# swifttenanttest2 swiftusertest2 admin default
+# swifttenanttest4 swiftusertest4 admin swift_test
function create_swift_accounts {
# Defines specific passwords used by tools/create_userrc.sh
@@ -562,6 +577,7 @@
export swiftusertest1_password=testing
export swiftusertest2_password=testing2
export swiftusertest3_password=testing3
+ export swiftusertest4_password=testing4
KEYSTONE_CATALOG_BACKEND=${KEYSTONE_CATALOG_BACKEND:-sql}
@@ -603,6 +619,16 @@
"$swift_tenant_test2" "test2@example.com")
die_if_not_set $LINENO swift_user_test2 "Failure creating swift_user_test2"
get_or_add_user_role $admin_role $swift_user_test2 $swift_tenant_test2
+
+ local swift_domain=$(get_or_create_domain swift_test 'Used for swift functional testing')
+ die_if_not_set $LINENO swift_domain "Failure creating swift_test domain"
+
+ local swift_tenant_test4=$(get_or_create_project swifttenanttest4 $swift_domain)
+ die_if_not_set $LINENO swift_tenant_test4 "Failure creating swift_tenant_test4"
+ local swift_user_test4=$(get_or_create_user swiftusertest4 $swiftusertest4_password \
+ $swift_tenant_test4 "test4@example.com" $swift_domain)
+ die_if_not_set $LINENO swift_user_test4 "Failure creating swift_user_test4"
+ get_or_add_user_role $admin_role $swift_user_test4 $swift_tenant_test4
}
# init_swift() - Initialize rings
diff --git a/lib/tempest b/lib/tempest
index 1716bc7..66f1a78 100644
--- a/lib/tempest
+++ b/lib/tempest
@@ -287,7 +287,7 @@
iniset $TEMPEST_CONFIG compute image_ref $image_uuid
iniset $TEMPEST_CONFIG compute image_ssh_user ${DEFAULT_INSTANCE_USER:-cirros}
iniset $TEMPEST_CONFIG compute image_ref_alt $image_uuid_alt
- iniset $TEMPEST_CONFIG compute image_alt_ssh_user ${DEFAULT_INSTANCE_USER:-cirros}
+ iniset $TEMPEST_CONFIG compute image_alt_ssh_user ${ALT_INSTANCE_USER:-cirros}
iniset $TEMPEST_CONFIG compute flavor_ref $flavor_ref
iniset $TEMPEST_CONFIG compute flavor_ref_alt $flavor_ref_alt
iniset $TEMPEST_CONFIG compute ssh_connect_method $ssh_connect_method
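If the alternate image uses a different login than the primary one, the new variable can be set in local.conf, e.g. (hypothetical value):
    [[local|localrc]]
    ALT_INSTANCE_USER=ubuntu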
diff --git a/stack.sh b/stack.sh
index 3b5fb74..ec13338 100755
--- a/stack.sh
+++ b/stack.sh
@@ -234,36 +234,41 @@
if [[ is_fedora && ( $DISTRO == "rhel6" || $DISTRO == "rhel7" ) ]]; then
# RHEL requires EPEL for many OpenStack dependencies
- if ! sudo yum repolist enabled epel | grep -q 'epel'; then
- echo "EPEL not detected; installing"
- # This trick installs the latest epel-release from a bootstrap
- # repo, then removes itself (as epel-release installed the
- # "real" repo).
- #
- # you would think that rather than this, you could use
- # $releasever directly in .repo file we create below. However
- # RHEL gives a $releasever of "6Server" which breaks the path;
- # see https://bugzilla.redhat.com/show_bug.cgi?id=1150759
- if [[ $DISTRO == "rhel7" ]]; then
- epel_ver="7"
- elif [[ $DISTRO == "rhel6" ]]; then
- epel_ver="6"
- fi
- cat <<EOF | sudo tee /etc/yum.repos.d/epel-bootstrap.repo
-[epel]
+ # note we always remove and install latest -- some environments
+ # use snapshot images, and if EPEL version updates they break
+ # unless we update them to latest version.
+ if sudo yum repolist enabled epel | grep -q 'epel'; then
+ uninstall_package epel-release || true
+ fi
+
+ # This trick installs the latest epel-release from a bootstrap
+ # repo, then removes itself (as epel-release installed the
+ # "real" repo).
+ #
+ # you would think that rather than this, you could use
+ # $releasever directly in .repo file we create below. However
+ # RHEL gives a $releasever of "6Server" which breaks the path;
+ # see https://bugzilla.redhat.com/show_bug.cgi?id=1150759
+ if [[ $DISTRO == "rhel7" ]]; then
+ epel_ver="7"
+ elif [[ $DISTRO == "rhel6" ]]; then
+ epel_ver="6"
+ fi
+
+ cat <<EOF | sudo tee /etc/yum.repos.d/epel-bootstrap.repo
+[epel-bootstrap]
name=Bootstrap EPEL
mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=epel-$epel_ver&arch=\$basearch
failovermethod=priority
enabled=0
gpgcheck=0
EOF
- # bare yum call due to --enablerepo
- sudo yum --enablerepo=epel -y install epel-release || \
- die $LINENO "Error installing EPEL repo, cannot continue"
- # epel rpm has installed it's version
- sudo rm -f /etc/yum.repos.d/epel-bootstrap.repo
- fi
+ # bare yum call due to --enablerepo
+ sudo yum --enablerepo=epel-bootstrap -y install epel-release || \
+ die $LINENO "Error installing EPEL repo, cannot continue"
+ # epel rpm has installed its version
+ sudo rm -f /etc/yum.repos.d/epel-bootstrap.repo
# ... and also optional to be enabled
is_package_installed yum-utils || install_package yum-utils
diff --git a/stackrc b/stackrc
index d97dba8..15b0951 100644
--- a/stackrc
+++ b/stackrc
@@ -533,11 +533,11 @@
esac
fi
-# Trove needs a custom image for it's work
+# Trove needs a custom image for its work
if [[ "$ENABLED_SERVICES" =~ 'tr-api' ]]; then
case "$VIRT_DRIVER" in
libvirt|baremetal|ironic|xenapi)
- TROVE_GUEST_IMAGE_URL=${TROVE_GUEST_IMAGE_URL:-"http://tarballs.openstack.org/trove/images/ubuntu_mysql.qcow2/ubuntu_mysql.qcow2"}
+ TROVE_GUEST_IMAGE_URL=${TROVE_GUEST_IMAGE_URL:-"http://tarballs.openstack.org/trove/images/ubuntu/mysql.qcow2"}
IMAGE_URLS+=",${TROVE_GUEST_IMAGE_URL}"
;;
*)
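As with other stackrc defaults, the guest image remains overridable from local.conf, e.g. (placeholder URL):
    [[local|localrc]]
    TROVE_GUEST_IMAGE_URL=http://example.com/images/custom-trove-guest.qcow2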