Merge "Revert "Fix devstack with linuxbridge without l3 agent""
diff --git a/clean.sh b/clean.sh
index bace3f5..d92807c 100755
--- a/clean.sh
+++ b/clean.sh
@@ -147,3 +147,8 @@
done
rm -rf ~/.config/openstack
+
+# Clean up all *.pyc files
+if [[ -n "$DEST" ]] && [[ -d "$DEST" ]]; then
+ sudo find $DEST -name "*.pyc" -print0 | xargs -0 rm
+fi
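
(A minimal sketch, assuming GNU findutils: the same cleanup run by hand, with
$DEST quoted and xargs' -r flag so rm is skipped when no *.pyc files remain.)

    # Illustrative manual equivalent of the cleanup above
    if [[ -n "$DEST" ]] && [[ -d "$DEST" ]]; then
        sudo find "$DEST" -name "*.pyc" -print0 | xargs -0 -r rm
    fi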
diff --git a/doc/source/configuration.rst b/doc/source/configuration.rst
index bc3f558..53ae82f 100644
--- a/doc/source/configuration.rst
+++ b/doc/source/configuration.rst
@@ -703,13 +703,13 @@
~~~~~~
The logical volume group used to hold the Cinder-managed volumes is
-set by ``VOLUME_GROUP``, the logical volume name prefix is set with
+set by ``VOLUME_GROUP_NAME``, the logical volume name prefix is set with
``VOLUME_NAME_PREFIX`` and the size of the volume backing file is set
with ``VOLUME_BACKING_FILE_SIZE``.
::
- VOLUME_GROUP="stack-volumes"
+ VOLUME_GROUP_NAME="stack-volumes"
VOLUME_NAME_PREFIX="volume-"
VOLUME_BACKING_FILE_SIZE=10250M
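
Together with the ``lib/lvm`` change later in this patch (which appends
``-default`` to the group name), a ``local.conf`` like the following
(illustrative values only) would produce a volume group named
``my-stack-volumes-default`` holding logical volumes named ``volume-<uuid>``:

    [[local|localrc]]
    VOLUME_GROUP_NAME=my-stack-volumes
    VOLUME_NAME_PREFIX=volume-
    VOLUME_BACKING_FILE_SIZE=24G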
diff --git a/doc/source/guides/multinode-lab.rst b/doc/source/guides/multinode-lab.rst
index 8751eb8..dfc9936 100644
--- a/doc/source/guides/multinode-lab.rst
+++ b/doc/source/guides/multinode-lab.rst
@@ -294,10 +294,10 @@
``stack-volumes`` can be pre-created on any physical volume supported by
Linux's LVM. The name of the volume group can be changed by setting
-``VOLUME_GROUP`` in ``localrc``. ``stack.sh`` deletes all logical
-volumes in ``VOLUME_GROUP`` that begin with ``VOLUME_NAME_PREFIX`` as
+``VOLUME_GROUP_NAME`` in ``localrc``. ``stack.sh`` deletes all logical
+volumes in ``VOLUME_GROUP_NAME`` that begin with ``VOLUME_NAME_PREFIX`` as
part of cleaning up from previous runs. It is recommended to not use the
-root volume group as ``VOLUME_GROUP``.
+root volume group as ``VOLUME_GROUP_NAME``.
The details of creating the volume group depend on the server hardware
involved, but the process looks something like this:
@@ -400,6 +400,10 @@
ssh-keyscan -H DEST_HOSTNAME | sudo tee -a /root/.ssh/known_hosts
+3. Verify that login via ssh works without a password::
+
+ ssh -i /root/.ssh/id_rsa stack@DESTINATION
+
In essence, this means that every compute node's root user's public RSA key
must exist in every other compute node's stack user's authorized_keys file and
every compute node's public ECDSA key needs to be in every other compute
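
A hedged sketch of the key distribution described above, run on each compute
node (hostnames are placeholders; skip the keypair step if
/root/.ssh/id_rsa already exists):

    # Generate root's keypair
    sudo ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa

    # Append root's public key to the stack user's authorized_keys on the peer
    sudo cat /root/.ssh/id_rsa.pub | ssh stack@DEST_HOSTNAME 'cat >> ~/.ssh/authorized_keys'

    # Record the peer's host key so later connections are non-interactive
    ssh-keyscan -H DEST_HOSTNAME | sudo tee -a /root/.ssh/known_hosts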
diff --git a/doc/source/index.rst b/doc/source/index.rst
index 435011b..b8dd506 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -45,31 +45,6 @@
If you do not have a preference, Ubuntu 16.04 is the most tested, and
will probably go the smoothest.
-Download DevStack
------------------
-
-::
-
- git clone https://git.openstack.org/openstack-dev/devstack
-
-The ``devstack`` repo contains a script that installs OpenStack and
-templates for configuration files
-
-Create a local.conf
--------------------
-
-Create a ``local.conf`` file with 4 passwords preset
-
-::
-
- [[local|localrc]]
- ADMIN_PASSWORD=secret
- DATABASE_PASSWORD=$ADMIN_PASSWORD
- RABBIT_PASSWORD=$ADMIN_PASSWORD
- SERVICE_PASSWORD=$ADMIN_PASSWORD
-
-This is the minimum required config to get started with DevStack.
-
Add Stack User
--------------
@@ -81,14 +56,48 @@
::
- devstack/tools/create-stack-user.sh; su stack
+ $ adduser stack
+
+Since this user will be making many changes to your system, it should
+have sudo privileges:
+
+::
+
+ $ echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
+ $ su stack
+
+Download DevStack
+-----------------
+
+::
+
+ $ git clone https://git.openstack.org/openstack-dev/devstack
+ $ cd devstack
+
+The ``devstack`` repo contains a script that installs OpenStack and
+templates for configuration files.
+
+Create a local.conf
+-------------------
+
+Create a ``local.conf`` file at the root of the devstack git repo with
+4 passwords preset.
+
+::
+
+ [[local|localrc]]
+ ADMIN_PASSWORD=secret
+ DATABASE_PASSWORD=$ADMIN_PASSWORD
+ RABBIT_PASSWORD=$ADMIN_PASSWORD
+ SERVICE_PASSWORD=$ADMIN_PASSWORD
+
+This is the minimum required config to get started with DevStack.
Start the install
-----------------
::
- cd devstack; ./stack.sh
+ ./stack.sh
This will take 15 - 20 minutes, largely depending on the speed of
your internet connection. Many git trees and packages will be
diff --git a/doc/source/networking.rst b/doc/source/networking.rst
index 2301a2e..bdbeaaa 100644
--- a/doc/source/networking.rst
+++ b/doc/source/networking.rst
@@ -4,7 +4,7 @@
An important part of the DevStack experience is networking that works
by default for created guests. This might not be optimal for your
-particular testing environment, so this document tries it's best to
+particular testing environment, so this document tries its best to
explain what's going on.
Defaults
@@ -17,9 +17,9 @@
* a floating ip range of 172.24.4.0/24 with the gateway of 172.24.4.1
* the demo project configured with fixed ips on a subnet allocated from
the 10.0.0.0/22 range
-* a ``br-ex`` interface controlled by neutron for all it's networking
+* a ``br-ex`` interface controlled by neutron for all its networking
(this is not connected to any physical interfaces).
-* DNS resolution for guests based on the resolv.conf for you host
+* DNS resolution for guests based on the resolv.conf for your host
* an ip masq rule that allows created guests to route out
This creates an environment which is isolated to the single
@@ -40,7 +40,7 @@
Locally Accessible Guests
=========================
-If you want to make you guests accessible other machines on your
+If you want to make your guests accessible from other machines on your
network, we have to connect ``br-ex`` to a physical interface.
Dedicated Guest Interface
@@ -109,8 +109,8 @@
For IPv4, ``FIXED_RANGE`` and ``SUBNETPOOL_PREFIX_V4`` will just default to
the value of ``IPV4_ADDRS_SAFE_TO_USE`` directly.
-For IPv6, ``FIXED_RANGE`` will default to the first /64 of the value of
+For IPv6, ``FIXED_RANGE_V6`` will default to the first /64 of the value of
``IPV6_ADDRS_SAFE_TO_USE``. If ``IPV6_ADDRS_SAFE_TO_USE`` is /64 or smaller,
-``FIXED_RANGE`` will just use the value of that directly.
+``FIXED_RANGE_V6`` will just use the value of that directly.
``SUBNETPOOL_PREFIX_V6`` will just default to the value of
``IPV6_ADDRS_SAFE_TO_USE`` directly.
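
A worked example of the defaulting rules above, with an illustrative ULA
prefix:

    IPV6_ADDRS_SAFE_TO_USE=fd12:3456:789a::/56
    # Derived defaults:
    #   FIXED_RANGE_V6=fd12:3456:789a::/64        (the first /64 of the /56)
    #   SUBNETPOOL_PREFIX_V6=fd12:3456:789a::/56  (used directly)
    # If IPV6_ADDRS_SAFE_TO_USE were itself a /64, FIXED_RANGE_V6 would
    # reuse it unchanged.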
diff --git a/exercises/neutron-adv-test.sh b/exercises/neutron-adv-test.sh
index bfd45ec..e8c8f62 100755
--- a/exercises/neutron-adv-test.sh
+++ b/exercises/neutron-adv-test.sh
@@ -156,7 +156,7 @@
function get_network_id {
local NETWORK_NAME="$1"
local NETWORK_ID
- NETWORK_ID=`openstack network list | grep $NETWORK_NAME | awk '{print $2}'`
+ NETWORK_ID=`openstack network show -f value -c id $NETWORK_NAME`
echo $NETWORK_ID
}
diff --git a/files/rpms/general b/files/rpms/general
index d0ceb56..77d2fa5 100644
--- a/files/rpms/general
+++ b/files/rpms/general
@@ -7,9 +7,9 @@
gettext # used for compiling message catalogs
git-core
graphviz # needed only for docs
-iptables-services # NOPRIME f23,f24
+iptables-services # NOPRIME f23,f24,f25
java-1.7.0-openjdk-headless # NOPRIME rhel7
-java-1.8.0-openjdk-headless # NOPRIME f23,f24
+java-1.8.0-openjdk-headless # NOPRIME f23,f24,f25
libffi-devel
libjpeg-turbo-devel # Pillow 3.0.0
libxml2-devel # lxml
diff --git a/files/rpms/nova b/files/rpms/nova
index a883ec4..45f1c94 100644
--- a/files/rpms/nova
+++ b/files/rpms/nova
@@ -7,7 +7,7 @@
genisoimage # required for config_drive
iptables
iputils
-kernel-modules # dist:f23,f24
+kernel-modules # dist:f23,f24,f25
kpartx
kvm # NOPRIME
libvirt-bin # NOPRIME
diff --git a/files/rpms/swift b/files/rpms/swift
index bd249ee..2f12df0 100644
--- a/files/rpms/swift
+++ b/files/rpms/swift
@@ -2,7 +2,7 @@
liberasurecode-devel
memcached
pyxattr
-rsync-daemon # dist:f23,f24
+rsync-daemon # dist:f23,f24,f25
sqlite
xfsprogs
xinetd
diff --git a/files/swift/rsyncd.conf b/files/swift/rsyncd.conf
index c670531..c49f716 100644
--- a/files/swift/rsyncd.conf
+++ b/files/swift/rsyncd.conf
@@ -4,76 +4,76 @@
pid file = %SWIFT_DATA_DIR%/run/rsyncd.pid
address = 127.0.0.1
-[account6012]
+[account6612]
max connections = 25
path = %SWIFT_DATA_DIR%/1/node/
read only = false
-lock file = %SWIFT_DATA_DIR%/run/account6012.lock
+lock file = %SWIFT_DATA_DIR%/run/account6612.lock
-[account6022]
+[account6622]
max connections = 25
path = %SWIFT_DATA_DIR%/2/node/
read only = false
-lock file = %SWIFT_DATA_DIR%/run/account6022.lock
+lock file = %SWIFT_DATA_DIR%/run/account6622.lock
-[account6032]
+[account6632]
max connections = 25
path = %SWIFT_DATA_DIR%/3/node/
read only = false
-lock file = %SWIFT_DATA_DIR%/run/account6032.lock
+lock file = %SWIFT_DATA_DIR%/run/account6632.lock
-[account6042]
+[account6642]
max connections = 25
path = %SWIFT_DATA_DIR%/4/node/
read only = false
-lock file = %SWIFT_DATA_DIR%/run/account6042.lock
+lock file = %SWIFT_DATA_DIR%/run/account6642.lock
-[container6011]
+[container6611]
max connections = 25
path = %SWIFT_DATA_DIR%/1/node/
read only = false
-lock file = %SWIFT_DATA_DIR%/run/container6011.lock
+lock file = %SWIFT_DATA_DIR%/run/container6611.lock
-[container6021]
+[container6621]
max connections = 25
path = %SWIFT_DATA_DIR%/2/node/
read only = false
-lock file = %SWIFT_DATA_DIR%/run/container6021.lock
+lock file = %SWIFT_DATA_DIR%/run/container6621.lock
-[container6031]
+[container6631]
max connections = 25
path = %SWIFT_DATA_DIR%/3/node/
read only = false
-lock file = %SWIFT_DATA_DIR%/run/container6031.lock
+lock file = %SWIFT_DATA_DIR%/run/container6631.lock
-[container6041]
+[container6641]
max connections = 25
path = %SWIFT_DATA_DIR%/4/node/
read only = false
-lock file = %SWIFT_DATA_DIR%/run/container6041.lock
+lock file = %SWIFT_DATA_DIR%/run/container6641.lock
-[object6010]
+[object6613]
max connections = 25
path = %SWIFT_DATA_DIR%/1/node/
read only = false
-lock file = %SWIFT_DATA_DIR%/run/object6010.lock
+lock file = %SWIFT_DATA_DIR%/run/object6613.lock
-[object6020]
+[object6623]
max connections = 25
path = %SWIFT_DATA_DIR%/2/node/
read only = false
-lock file = %SWIFT_DATA_DIR%/run/object6020.lock
+lock file = %SWIFT_DATA_DIR%/run/object6623.lock
-[object6030]
+[object6633]
max connections = 25
path = %SWIFT_DATA_DIR%/3/node/
read only = false
-lock file = %SWIFT_DATA_DIR%/run/object6030.lock
+lock file = %SWIFT_DATA_DIR%/run/object6633.lock
-[object6040]
+[object6643]
max connections = 25
path = %SWIFT_DATA_DIR%/4/node/
read only = false
-lock file = %SWIFT_DATA_DIR%/run/object6040.lock
+lock file = %SWIFT_DATA_DIR%/run/object6643.lock
diff --git a/functions-common b/functions-common
index d5014fd..cc1d42b 100644
--- a/functions-common
+++ b/functions-common
@@ -534,10 +534,8 @@
echo "the project to the \$PROJECTS variable in the job definition."
die $LINENO "Cloning not allowed in this configuration"
fi
- git_timed clone $git_clone_flags $git_remote $git_dest
- cd $git_dest
- # This checkout syntax works for both branches and tags
- git checkout $git_ref
+ # '--branch' can also take tags
+ git_timed clone $git_clone_flags $git_remote $git_dest --branch $git_ref
elif [[ "$RECLONE" = "True" ]]; then
# if it does exist then simulate what clone does if asked to RECLONE
cd $git_dest
@@ -1773,6 +1771,9 @@
local name=$1
local url=$2
local branch=${3:-master}
+ if [[ ",${DEVSTACK_PLUGINS}," =~ ,${name}, ]]; then
+ die $LINENO "Plugin attempted to be enabled twice: ${name} ${url} ${branch}"
+ fi
DEVSTACK_PLUGINS+=",$name"
GITREPO[$name]=$url
GITDIR[$name]=$DEST/$name
@@ -2260,6 +2261,14 @@
echo $subnet
}
+function is_provider_network {
+ if [ "$Q_USE_PROVIDER_NETWORKING" == "True" ]; then
+ return 0
+ fi
+ return 1
+}
+
+
# Return the current python as "python<major>.<minor>"
function python_version {
local python_version
@@ -2310,11 +2319,12 @@
fi
}
-# Service wrapper to stop services
+# Service wrapper to reload services
+# If the service is not running, it will be started instead
# reload_service service-name
function reload_service {
if [ -x /bin/systemctl ]; then
- sudo /bin/systemctl reload $1
+ sudo /bin/systemctl reload-or-restart $1
else
sudo service $1 reload
fi
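
The new ``enable_plugin`` guard above turns a duplicate plugin declaration
into a hard failure; a hedged ``local.conf`` sketch of the case it now
rejects (plugin name and URL are only illustrative):

    [[local|localrc]]
    enable_plugin heat https://git.openstack.org/openstack/heat
    # Repeating the same plugin now dies with:
    #   "Plugin attempted to be enabled twice: heat ..."
    enable_plugin heat https://git.openstack.org/openstack/heat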
diff --git a/inc/meta-config b/inc/meta-config
index 6eb7a00..6252135 100644
--- a/inc/meta-config
+++ b/inc/meta-config
@@ -40,12 +40,10 @@
$CONFIG_AWK_CMD -v matchgroup=$matchgroup -v configfile=$configfile '
BEGIN { group = "" }
/^\[\[.+\|.*\]\]/ {
- if (group == "") {
- gsub("[][]", "", $1);
- split($1, a, "|");
- if (a[1] == matchgroup && a[2] == configfile) {
- group=a[1]
- }
+ gsub("[][]", "", $1);
+ split($1, a, "|");
+ if (a[1] == matchgroup && a[2] == configfile) {
+ group=a[1]
} else {
group=""
}
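
Per the new test11 case added to ``tests/test_meta_config.sh`` below, this
awk change merges repeated ``[[group|file]]`` declarations into a single
generated file instead of honoring only the first block. A hedged
``local.conf`` sketch (target file and options are illustrative); both
sections end up in the same rendered config:

    [[post-config|$NOVA_CONF]]
    [DEFAULT]
    foo = bar

    [[post-config|$NOVA_CONF]]
    [scheduler]
    random = config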
diff --git a/lib/apache b/lib/apache
index 8a38cc4..2dc626f 100644
--- a/lib/apache
+++ b/lib/apache
@@ -29,15 +29,20 @@
# Set up apache name and configuration directory
+# Note that APACHE_CONF_DIR is really more accurately apache's vhost
+# configuration dir, but we can't just rename it because it is a public interface.
if is_ubuntu; then
APACHE_NAME=apache2
APACHE_CONF_DIR=${APACHE_CONF_DIR:-/etc/$APACHE_NAME/sites-available}
+ APACHE_SETTINGS_DIR=${APACHE_SETTINGS_DIR:-/etc/$APACHE_NAME/conf-enabled}
elif is_fedora; then
APACHE_NAME=httpd
APACHE_CONF_DIR=${APACHE_CONF_DIR:-/etc/$APACHE_NAME/conf.d}
+ APACHE_SETTINGS_DIR=${APACHE_SETTINGS_DIR:-/etc/$APACHE_NAME/conf.d}
elif is_suse; then
APACHE_NAME=apache2
APACHE_CONF_DIR=${APACHE_CONF_DIR:-/etc/$APACHE_NAME/vhosts.d}
+ APACHE_SETTINGS_DIR=${APACHE_SETTINGS_DIR:-/etc/$APACHE_NAME/conf.d}
fi
APACHE_LOG_DIR="/var/log/${APACHE_NAME}"
diff --git a/lib/cinder b/lib/cinder
index c4a49cd..9ff74e8 100644
--- a/lib/cinder
+++ b/lib/cinder
@@ -68,8 +68,12 @@
CINDER_SERVICE_LISTEN_ADDRESS=${CINDER_SERVICE_LISTEN_ADDRESS:-$SERVICE_LISTEN_ADDRESS}
# What type of LVM device should Cinder use for LVM backend
-# Defaults to thin. For thick provisioning change to 'default'
-CINDER_LVM_TYPE=${CINDER_LVM_TYPE:-thin}
+# Defaults to 'default', which is thick provisioning; the other valid
+# choice is 'thin', which as the name implies uses LVM thin provisioning.
+# Thinly provisioned LVM volumes may be more efficient when using the Cinder
+# image cache, but there are known race failures with volume snapshots on
+# thinly provisioned LVM volumes; see bug 1642111 for details.
+CINDER_LVM_TYPE=${CINDER_LVM_TYPE:-default}
# Default backends
# The backend format is type:name where type is one of the supported backend
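
Because the default flips to thick provisioning here, anyone who still wants
thin LVM (for example to exercise the Cinder image cache) has to opt back in
explicitly; a one-line ``local.conf`` sketch:

    [[local|localrc]]
    CINDER_LVM_TYPE=thin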
diff --git a/lib/horizon b/lib/horizon
index 5089650..c0faed7 100644
--- a/lib/horizon
+++ b/lib/horizon
@@ -81,7 +81,7 @@
# Horizon is installed as develop mode, so we can compile here.
# Message catalog compilation is handled by Django admin script,
# so compiling them after the installation avoids Django installation twice.
- (cd $HORIZON_DIR; ./run_tests.sh -N --compilemessages)
+ (cd $HORIZON_DIR; python manage.py compilemessages)
# ``local_settings.py`` is used to override horizon default settings.
local local_settings=$HORIZON_DIR/openstack_dashboard/local/local_settings.py
@@ -100,7 +100,7 @@
# note(trebskit): if HOST_IP points at non-localhost ip address, horizon cannot be accessed
# from outside the virtual machine. This fixes is meant primarily for local development
# purpose
- _horizon_config_set $local_settings "" ALLOWED_HOSTS [\"$HOST_IP\"]
+ _horizon_config_set $local_settings "" ALLOWED_HOSTS [\"*\"]
if [ -f $SSL_BUNDLE_FILE ]; then
_horizon_config_set $local_settings "" OPENSTACK_SSL_CACERT \"${SSL_BUNDLE_FILE}\"
diff --git a/lib/keystone b/lib/keystone
index 9a0fdad..948d5b4 100644
--- a/lib/keystone
+++ b/lib/keystone
@@ -51,9 +51,6 @@
KEYSTONE_CONF=$KEYSTONE_CONF_DIR/keystone.conf
KEYSTONE_PASTE_INI=${KEYSTONE_PASTE_INI:-$KEYSTONE_CONF_DIR/keystone-paste.ini}
-# NOTE(sdague): remove in Newton
-KEYSTONE_CATALOG_BACKEND="sql"
-
# Toggle for deploying Keystone under HTTPD + mod_wsgi
# Deprecated in Mitaka, use KEYSTONE_DEPLOY instead.
KEYSTONE_USE_MOD_WSGI=${KEYSTONE_USE_MOD_WSGI:-${ENABLE_HTTPD_MOD_WSGI_SERVICES}}
diff --git a/lib/lvm b/lib/lvm
index d35a76f..99c7ba9 100644
--- a/lib/lvm
+++ b/lib/lvm
@@ -23,11 +23,7 @@
# Defaults
# --------
# Name of the lvm volume groups to use/create for iscsi volumes
-# This monkey-motion is for compatibility with icehouse-generation Grenade
-# If ``VOLUME_GROUP`` is set, use it, otherwise we'll build a VG name based
-# on ``VOLUME_GROUP_NAME`` that includes the backend name
-# Grenade doesn't use ``VOLUME_GROUP2`` so it is left out
-VOLUME_GROUP_NAME=${VOLUME_GROUP:-${VOLUME_GROUP_NAME:-stack-volumes}}
+VOLUME_GROUP_NAME=${VOLUME_GROUP_NAME:-stack-volumes}
DEFAULT_VOLUME_GROUP_NAME=$VOLUME_GROUP_NAME-default
# Backing file name is of the form $VOLUME_GROUP$BACKING_FILE_SUFFIX
diff --git a/lib/neutron b/lib/neutron
index 415344e..d30e185 100644
--- a/lib/neutron
+++ b/lib/neutron
@@ -200,7 +200,7 @@
if is_service_enabled neutron-l3; then
cp $NEUTRON_DIR/etc/l3_agent.ini.sample $NEUTRON_L3_CONF
iniset $NEUTRON_L3_CONF DEFAULT interface_driver $NEUTRON_AGENT
- iniset $NEUTRON_CONF DEFAULT service_plugins router
+ neutron_service_plugin_class_add router
iniset $NEUTRON_L3_CONF agent root_helper_daemon "$NEUTRON_ROOTWRAP_DAEMON_CMD"
iniset $NEUTRON_L3_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
neutron_plugin_configure_l3_agent $NEUTRON_L3_CONF
@@ -249,14 +249,8 @@
source $TOP_DIR/lib/neutron_plugins/services/metering
neutron_agent_metering_configure_common
neutron_agent_metering_configure_agent
- # TODO(sc68cal) hack because we don't pass around
- # $Q_SERVICE_PLUGIN_CLASSES like -legacy does
- local plugins=""
- plugins=$(iniget $NEUTRON_CONF DEFAULT service_plugins)
- plugins+=",metering"
- iniset $NEUTRON_CONF DEFAULT service_plugins $plugins
+ neutron_service_plugin_class_add metering
fi
-
}
# configure_neutron_rootwrap() - configure Neutron's rootwrap
@@ -433,15 +427,17 @@
if is_service_enabled neutron-l3; then
run_process neutron-l3 "$NEUTRON_BIN_DIR/$NEUTRON_L3_BINARY $NEUTRON_CONFIG_ARG"
fi
- # XXX(sc68cal) - Here's where plugins can wire up their own networks instead
- # of the code in lib/neutron_plugins/services/l3
- if type -p neutron_plugin_create_initial_networks > /dev/null; then
- neutron_plugin_create_initial_networks
- else
- # XXX(sc68cal) Load up the built in Neutron networking code and build a topology
- source $TOP_DIR/lib/neutron_plugins/services/l3
- # Create the networks using servic
- create_neutron_initial_network
+ if is_service_enabled neutron-api; then
+ # XXX(sc68cal) - Here's where plugins can wire up their own networks instead
+ # of the code in lib/neutron_plugins/services/l3
+ if type -p neutron_plugin_create_initial_networks > /dev/null; then
+ neutron_plugin_create_initial_networks
+ else
+ # XXX(sc68cal) Load up the built in Neutron networking code and build a topology
+ source $TOP_DIR/lib/neutron_plugins/services/l3
+ # Create the networks using the service
+ create_neutron_initial_network
+ fi
fi
if is_service_enabled neutron-metadata-agent; then
run_process neutron-metadata-agent "$NEUTRON_BIN_DIR/$NEUTRON_META_BINARY $NEUTRON_CONFIG_ARG"
@@ -494,6 +490,16 @@
}
+# neutron_service_plugin_class_add() - add service plugin class
+function neutron_service_plugin_class_add_new {
+ local service_plugin_class=$1
+ local plugins=""
+
+ plugins=$(iniget $NEUTRON_CONF DEFAULT service_plugins)
+ plugins+=",${service_plugin_class}"
+ iniset $NEUTRON_CONF DEFAULT service_plugins $plugins
+}
+
# Dispatch functions
# These are needed for compatibility between the old and new implementations
# where there are function name overlaps. These will be removed when
@@ -553,6 +559,15 @@
fi
}
+function neutron_service_plugin_class_add {
+ if is_neutron_legacy_enabled; then
+ # Call back to old function
+ _neutron_service_plugin_class_add "$@"
+ else
+ neutron_service_plugin_class_add_new "$@"
+ fi
+}
+
function start_neutron {
if is_neutron_legacy_enabled; then
# Call back to old function
diff --git a/lib/neutron_plugins/services/l3 b/lib/neutron_plugins/services/l3
index 6d518e2..569a366 100644
--- a/lib/neutron_plugins/services/l3
+++ b/lib/neutron_plugins/services/l3
@@ -157,14 +157,6 @@
}
function create_neutron_initial_network {
- if ! is_service_enabled q-svc && ! is_service_enabled neutron-api; then
- echo "Controller services not enabled. No networks configured!"
- return
- fi
- if [[ "$NEUTRON_CREATE_INITIAL_NETWORKS" == "False" ]]; then
- echo "Network creation disabled!"
- return
- fi
local project_id
project_id=$(openstack project list | grep " demo " | get_field 1)
die_if_not_set $LINENO project_id "Failure retrieving project_id for demo"
@@ -432,13 +424,6 @@
fi
}
-function is_provider_network {
- if [ "$Q_USE_PROVIDER_NETWORKING" == "True" ]; then
- return 0
- fi
- return 1
-}
-
function is_networking_extension_supported {
local extension=$1
# TODO(sc68cal) cache this instead of calling every time
diff --git a/lib/tempest b/lib/tempest
index f43036e..a5dd531 100644
--- a/lib/tempest
+++ b/lib/tempest
@@ -193,11 +193,11 @@
available_flavors=$(nova flavor-list)
if [[ -z "$DEFAULT_INSTANCE_TYPE" ]]; then
if [[ ! ( $available_flavors =~ 'm1.nano' ) ]]; then
- nova flavor-create m1.nano 42 64 0 1
+ openstack flavor create --id 42 --ram 64 --disk 0 --vcpus 1 m1.nano
fi
flavor_ref=42
if [[ ! ( $available_flavors =~ 'm1.micro' ) ]]; then
- nova flavor-create m1.micro 84 128 0 1
+ openstack flavor create --id 84 --ram 128 --disk 0 --vcpus 1 m1.micro
fi
flavor_ref_alt=84
else
@@ -242,8 +242,7 @@
# the public network (for floating ip access) is only available
# if the extension is enabled.
if is_networking_extension_supported 'external-net'; then
- public_network_id=$(openstack network list | grep $PUBLIC_NETWORK_NAME | \
- awk '{print $2}')
+ public_network_id=$(openstack network show -f value -c id $PUBLIC_NETWORK_NAME)
fi
iniset $TEMPEST_CONFIG DEFAULT use_syslog $SYSLOG
@@ -295,7 +294,6 @@
fi
if [ "$VIRT_DRIVER" = "xenserver" ]; then
iniset $TEMPEST_CONFIG image disk_formats "ami,ari,aki,vhd,raw,iso"
- iniset $TEMPEST_CONFIG scenario img_disk_format vhd
fi
# Image Features
@@ -398,7 +396,7 @@
# build a specialized heat flavor
available_flavors=$(nova flavor-list)
if [[ ! ( $available_flavors =~ 'm1.heat' ) ]]; then
- nova flavor-create m1.heat 451 512 0 1
+ openstack flavor create --id 451 --ram 512 --disk 0 --vcpus 1 m1.heat
fi
iniset $TEMPEST_CONFIG orchestration instance_type "m1.heat"
fi
@@ -407,19 +405,32 @@
fi
# Scenario
- SCENARIO_IMAGE_DIR=${SCENARIO_IMAGE_DIR:-$FILES/images/cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-uec}
+ if [ "$VIRT_DRIVER" = "xenserver" ]; then
+ SCENARIO_IMAGE_DIR=${SCENARIO_IMAGE_DIR:-$FILES}
+ SCENARIO_IMAGE_FILE="cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-disk.vhd.tgz"
+ iniset $TEMPEST_CONFIG scenario img_disk_format vhd
+ iniset $TEMPEST_CONFIG scenario img_container_format ovf
+ else
+ SCENARIO_IMAGE_DIR=${SCENARIO_IMAGE_DIR:-$FILES/images/cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-uec}
+ SCENARIO_IMAGE_FILE="cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-disk.img"
+ fi
iniset $TEMPEST_CONFIG scenario img_dir $SCENARIO_IMAGE_DIR
+ iniset $TEMPEST_CONFIG scenario img_file $SCENARIO_IMAGE_FILE
iniset $TEMPEST_CONFIG scenario ami_img_file "cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-blank.img"
iniset $TEMPEST_CONFIG scenario ari_img_file "cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-initrd"
iniset $TEMPEST_CONFIG scenario aki_img_file "cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-vmlinuz"
- iniset $TEMPEST_CONFIG scenario img_file "cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-disk.img"
+ # If using provider networking, use the physical network for validation rather than private
+ TEMPEST_SSH_NETWORK_NAME=$PRIVATE_NETWORK_NAME
+ if is_provider_network; then
+ TEMPEST_SSH_NETWORK_NAME=$PHYSICAL_NETWORK
+ fi
# Validation
iniset $TEMPEST_CONFIG validation run_validation ${TEMPEST_RUN_VALIDATION:-False}
iniset $TEMPEST_CONFIG validation ip_version_for_ssh 4
iniset $TEMPEST_CONFIG validation ssh_timeout $BUILD_TIMEOUT
iniset $TEMPEST_CONFIG validation image_ssh_user ${DEFAULT_INSTANCE_USER:-cirros}
- iniset $TEMPEST_CONFIG validation network_for_ssh $PRIVATE_NETWORK_NAME
+ iniset $TEMPEST_CONFIG validation network_for_ssh $TEMPEST_SSH_NETWORK_NAME
# Volume
# TODO(obutenko): Remove snapshot_backup when liberty-eol happens.
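
The validation-network switch above keys off ``is_provider_network``, which
was moved into ``functions-common`` earlier in this patch and simply checks
``Q_USE_PROVIDER_NETWORKING``. A hedged ``local.conf`` sketch of a setup that
exercises the new branch (names are illustrative, and a real provider-network
deployment needs further provider options not shown here):

    [[local|localrc]]
    Q_USE_PROVIDER_NETWORKING=True
    PHYSICAL_NETWORK=physnet1
    # tempest's validation.network_for_ssh then becomes physnet1
    # instead of $PRIVATE_NETWORK_NAME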
diff --git a/lib/tls b/lib/tls
index 14cdf19..57b5e52 100644
--- a/lib/tls
+++ b/lib/tls
@@ -442,6 +442,52 @@
# Proxy Functions
# ===============
+function tune_apache_connections {
+ local tuning_file=$APACHE_SETTINGS_DIR/connection-tuning.conf
+ if ! [ -f $tuning_file ] ; then
+ sudo bash -c "cat > $tuning_file" << EOF
+# worker MPM
+# StartServers: initial number of server processes to start
+# MinSpareThreads: minimum number of worker threads which are kept spare
+# MaxSpareThreads: maximum number of worker threads which are kept spare
+# ThreadLimit: ThreadsPerChild can be changed to this maximum value during a
+# graceful restart. ThreadLimit can only be changed by stopping
+# and starting Apache.
+# ThreadsPerChild: constant number of worker threads in each server process
+# MaxClients: maximum number of simultaneous client connections
+# MaxRequestsPerChild: maximum number of requests a server process serves
+#
+# The apache defaults are too conservative if we want reliable tempest
+# testing. Bump these values up from ~400 max clients to 1024 max clients.
+<IfModule mpm_worker_module>
+# Note that the next three conf values must be changed together.
+# MaxClients = ServerLimit * ThreadsPerChild
+ServerLimit 32
+ThreadsPerChild 32
+MaxClients 1024
+StartServers 3
+MinSpareThreads 96
+MaxSpareThreads 192
+ThreadLimit 64
+MaxRequestsPerChild 0
+</IfModule>
+<IfModule mpm_event_module>
+# Note that the next three conf values must be changed together.
+# MaxClients = ServerLimit * ThreadsPerChild
+ServerLimit 32
+ThreadsPerChild 32
+MaxClients 1024
+StartServers 3
+MinSpareThreads 96
+MaxSpareThreads 192
+ThreadLimit 64
+MaxRequestsPerChild 0
+</IfModule>
+EOF
+ restart_apache_server
+ fi
+}
+
# Starts the TLS proxy for the given IP/ports
# start_tls_proxy front-host front-port back-host back-port
function start_tls_proxy {
@@ -451,6 +497,8 @@
local b_host=$4
local b_port=$5
+ tune_apache_connections
+
local config_file
config_file=$(apache_site_config_for $b_service)
local listen_string
diff --git a/stack.sh b/stack.sh
index f20c9d9..74edb10 100755
--- a/stack.sh
+++ b/stack.sh
@@ -192,7 +192,7 @@
# Warn users who aren't on an explicitly supported distro, but allow them to
# override check and attempt installation with ``FORCE=yes ./stack``
-if [[ ! ${DISTRO} =~ (trusty|xenial|yakkety|7.0|wheezy|sid|testing|jessie|f23|f24|rhel7|kvmibm1) ]]; then
+if [[ ! ${DISTRO} =~ (trusty|xenial|yakkety|7.0|wheezy|sid|testing|jessie|f23|f24|f25|rhel7|kvmibm1) ]]; then
echo "WARNING: this script has not been tested on $DISTRO"
if [[ "$FORCE" != "yes" ]]; then
die $LINENO "If you wish to run this script anyway run with FORCE=yes"
@@ -1269,7 +1269,10 @@
start_neutron
fi
# Once neutron agents are started setup initial network elements
-create_neutron_initial_network
+if is_service_enabled q-svc && [[ "$NEUTRON_CREATE_INITIAL_NETWORKS" == "True" ]]; then
+ echo_summary "Creating initial neutron network elements"
+ create_neutron_initial_network
+fi
if is_service_enabled nova; then
echo_summary "Starting Nova"
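
With this guard, the initial topology is only built when ``q-svc`` is enabled
and ``NEUTRON_CREATE_INITIAL_NETWORKS`` keeps its default of True; skipping
it entirely (for example when a plugin wires up its own networks) is a
one-line ``local.conf`` sketch:

    [[local|localrc]]
    NEUTRON_CREATE_INITIAL_NETWORKS=False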
diff --git a/stackrc b/stackrc
index 31f0759..b5018de 100644
--- a/stackrc
+++ b/stackrc
@@ -44,20 +44,10 @@
# Specify which services to launch. These generally correspond to
# screen tabs. To change the default list, use the ``enable_service`` and
# ``disable_service`` functions in ``local.conf``.
-# For example, to enable Swift add this to ``local.conf``:
-# enable_service s-proxy s-object s-container s-account
-# In order to enable Neutron (a single node setup) add the following
+# For example, to enable Swift as part of DevStack add the following
# settings in ``local.conf``:
# [[local|localrc]]
-# disable_service n-net
-# enable_service q-svc
-# enable_service q-agt
-# enable_service q-dhcp
-# enable_service q-l3
-# enable_service q-meta
-# # Optional, to enable tempest configuration as part of DevStack
-# enable_service tempest
-
+# enable_service s-proxy s-object s-container s-account
# This allows us to pass ``ENABLED_SERVICES``
if ! isset ENABLED_SERVICES ; then
# Keystone - nothing works without keystone
diff --git a/tests/test_meta_config.sh b/tests/test_meta_config.sh
index 327fb56..92f9c01 100755
--- a/tests/test_meta_config.sh
+++ b/tests/test_meta_config.sh
@@ -125,6 +125,14 @@
[[test10|does-not-exist-dir/test.conf]]
foo=bar
+[[test11|test-same.conf]]
+[DEFAULT]
+foo=bar
+
+[[test11|test-same.conf]]
+[some]
+random=config
+
[[test-multi-sections|test-multi-sections.conf]]
[sec-1]
cfg_item1 = abcd
@@ -147,6 +155,9 @@
cfg_item2 = efgh
cfg_item2 = \${FOO_BAR_BAZ}
+[[test11|test-same.conf]]
+[another]
+non = sense
EOF
echo -n "get_meta_section_files: test0 doesn't exist: "
@@ -385,8 +396,24 @@
check_result "$VAL" "$EXPECT_VAL"
set -e
+echo -n "merge_config_file test11 same section: "
+rm -f test-same.conf
+merge_config_group test.conf test11
+VAL=$(cat test-same.conf)
+EXPECT_VAL='
+[DEFAULT]
+foo = bar
+
+[some]
+random = config
+
+[another]
+non = sense'
+check_result "$VAL" "$EXPECT_VAL"
+
+
rm -f test.conf test1c.conf test2a.conf \
test-space.conf test-equals.conf test-strip.conf \
test-colon.conf test-env.conf test-multiline.conf \
- test-multi-sections.conf
+ test-multi-sections.conf test-same.conf
rm -rf test-etc
diff --git a/tools/ping_neutron.sh b/tools/ping_neutron.sh
index c755754..73fe3f3 100755
--- a/tools/ping_neutron.sh
+++ b/tools/ping_neutron.sh
@@ -54,7 +54,7 @@
REMAINING_ARGS="${@:2}"
# BUG: with duplicate network names, this fails pretty hard.
-NET_ID=$(openstack network list | grep "$NET_NAME" | awk '{print $2}')
+NET_ID=$(openstack network show -f value -c id "$NET_NAME")
PROBE_ID=$(neutron-debug probe-list -c id -c network_id | grep "$NET_ID" | awk '{print $2}' | head -n 1)
# This runs a command inside the specific netns