Merge "Added Savanna Project"
diff --git a/HACKING.rst b/HACKING.rst
index 5f33d77..3c08e67 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -5,10 +5,10 @@
General
-------
-DevStack is written in POSIX shell script. This choice was made because
-it best illustrates the configuration steps that this implementation takes
-on setting up and interacting with OpenStack components. DevStack specifically
-uses Bash and is compatible with Bash 3.
+DevStack is written in UNIX shell script. It uses a number of bash-isms
+and so is limited to Bash (version 3 and up) and compatible shells.
+Shell script was chosen because it best illustrates the steps used to
+set up and interact with OpenStack components.
DevStack's official repository is located on GitHub at
https://github.com/openstack-dev/devstack.git. Besides the master branch that
@@ -54,14 +54,14 @@
``TOP_DIR`` should always point there, even if the script itself is located in
a subdirectory::
- # Keep track of the current devstack directory.
+ # Keep track of the current DevStack directory.
TOP_DIR=$(cd $(dirname "$0") && pwd)
Many scripts will utilize shared functions from the ``functions`` file. There are
also rc files (``stackrc`` and ``openrc``) that are often included to set the primary
configuration of the user environment::
- # Keep track of the current devstack directory.
+ # Keep track of the current DevStack directory.
TOP_DIR=$(cd $(dirname "$0") && pwd)
# Import common functions
@@ -100,13 +100,14 @@
-------
``stackrc`` is the global configuration file for DevStack. It is responsible for
-calling ``localrc`` if it exists so configuration can be overridden by the user.
+calling ``local.conf`` (or ``localrc`` if it exists) so local user configuration
+is recognized.
The criteria for what belongs in ``stackrc`` can be vaguely summarized as
follows:
-* All project respositories and branches (for historical reasons)
-* Global configuration that may be referenced in ``localrc``, i.e. ``DEST``, ``DATA_DIR``
+* All project repositories and branches handled directly in ``stack.sh``
+* Global configuration that may be referenced in ``local.conf``, i.e. ``DEST``, ``DATA_DIR``
* Global service configuration like ``ENABLED_SERVICES``
* Variables used by multiple services that do not have a clear owner, i.e.
``VOLUME_BACKING_FILE_SIZE`` (nova-volumes and cinder) or ``PUBLIC_NETWORK_NAME``
@@ -116,8 +117,9 @@
not be changed for other reasons but the earlier file needs to dereference a
variable set in the later file. This should be rare.
-Also, variable declarations in ``stackrc`` do NOT allow overriding (the form
-``FOO=${FOO:-baz}``); if they did then they can already be changed in ``localrc``
+Also, variable declarations in ``stackrc`` before ``local.conf`` is sourced
+do NOT allow overriding (the form
+``FOO=${FOO:-baz}``); if they did then they can already be changed in ``local.conf``
and can stay in the project file.
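
The override form relies on shell parameter expansion applying the default only when the variable is unset; a quick sketch (``FOO``/``baz`` are illustrative names, not DevStack variables):

```shell
# Default-if-unset expansion: the default applies only when the
# variable has no value yet
unset FOO
FOO=${FOO:-baz}
echo "$FOO"          # prints: baz

# If the user already set the variable (e.g. in local.conf), the
# existing value wins and the default is ignored
FOO=from_local_conf
FOO=${FOO:-baz}
echo "$FOO"          # prints: from_local_conf
```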
@@ -139,7 +141,9 @@
Markdown formatting in the comments; use it sparingly. Specifically, ``stack.sh``
uses Markdown headers to divide the script into logical sections.
-.. _shocco: http://rtomayko.github.com/shocco/
+.. _shocco: https://github.com/dtroyer/shocco/tree/rst_support
+
+The script used to drive ``shocco`` is ``tools/build_docs.sh``.
Exercises
diff --git a/README.md b/README.md
index 6dc9ecd..640fab6 100644
--- a/README.md
+++ b/README.md
@@ -6,35 +6,39 @@
* To describe working configurations of OpenStack (which code branches work together? what do config files look like for those branches?)
* To make it easier for developers to dive into OpenStack so that they can productively contribute without having to understand every part of the system at once
* To make it easy to prototype cross-project features
-* To sanity-check OpenStack builds (used in gating commits to the primary repos)
+* To provide an environment for the OpenStack CI testing on every commit to the projects
-Read more at http://devstack.org (built from the gh-pages branch)
+Read more at http://devstack.org.
-IMPORTANT: Be sure to carefully read `stack.sh` and any other scripts you execute before you run them, as they install software and may alter your networking configuration. We strongly recommend that you run `stack.sh` in a clean and disposable vm when you are first getting started.
-
-# DevStack on Xenserver
-
-If you would like to use Xenserver as the hypervisor, please refer to the instructions in `./tools/xen/README.md`.
-
-# DevStack on Docker
-
-If you would like to use Docker as the hypervisor, please refer to the instructions in `./tools/docker/README.md`.
+IMPORTANT: Be sure to carefully read `stack.sh` and any other scripts you
+execute before you run them, as they install software and will alter your
+networking configuration. We strongly recommend that you run `stack.sh`
+in a clean and disposable VM when you are first getting started.
# Versions
-The devstack master branch generally points to trunk versions of OpenStack components. For older, stable versions, look for branches named stable/[release] in the DevStack repo. For example, you can do the following to create a diablo OpenStack cloud:
+The DevStack master branch generally points to trunk versions of OpenStack
+components. For older, stable versions, look for branches named
+stable/[release] in the DevStack repo. For example, you can do the
+following to create a grizzly OpenStack cloud:
- git checkout stable/diablo
+ git checkout stable/grizzly
./stack.sh
-You can also pick specific OpenStack project releases by setting the appropriate `*_BRANCH` variables in `localrc` (look in `stackrc` for the default set). Usually just before a release there will be milestone-proposed branches that need to be tested::
+You can also pick specific OpenStack project releases by setting the appropriate
+`*_BRANCH` variables in the ``localrc`` section of `local.conf` (look in
+`stackrc` for the default set). Usually just before a release there will be
+milestone-proposed branches that need to be tested::
GLANCE_REPO=https://github.com/openstack/glance.git
GLANCE_BRANCH=milestone-proposed
# Start A Dev Cloud
-Installing in a dedicated disposable vm is safer than installing on your dev machine! Plus you can pick one of the supported Linux distros for your VM. To start a dev cloud run the following NOT AS ROOT (see below for more):
+Installing in a dedicated disposable VM is safer than installing on your
+dev machine! Plus you can pick one of the supported Linux distros for
+your VM. To start a dev cloud run the following NOT AS ROOT (see
+**DevStack Execution Environment** below for more on user accounts):
./stack.sh
@@ -45,7 +49,7 @@
We also provide an environment file that you can use to interact with your cloud via CLI:
- # source openrc file to load your environment with osapi and ec2 creds
+ # source openrc file to load your environment with OpenStack CLI creds
. openrc
# list instances
nova list
@@ -61,16 +65,37 @@
DevStack runs rampant over the system it runs on, installing things and uninstalling other things. Running this on a system you care about is a recipe for disappointment, or worse. Alas, we're all in the virtualization business here, so run it in a VM. And take advantage of the snapshot capabilities of your hypervisor of choice to reduce testing cycle times. You might even save enough time to write one more feature before the next feature freeze...
-``stack.sh`` needs to have root access for a lot of tasks, but it also needs to have not-root permissions for most of its work and for all of the OpenStack services. So ``stack.sh`` specifically does not run if you are root. This is a recent change (Oct 2013) from the previous behaviour of automatically creating a ``stack`` user. Automatically creating a user account is not always the right response to running as root, so that bit is now an explicit step using ``tools/create-stack-user.sh``. Run that (as root!) if you do not want to just use your normal login here, which works perfectly fine.
+``stack.sh`` needs to have root access for a lot of tasks, but uses ``sudo``
+for all of those tasks. However, it needs to be not-root for most of its
+work and for all of the OpenStack services. ``stack.sh`` specifically
+does not run if started as root.
+
+This is a recent change (Oct 2013) from the previous behaviour of
+automatically creating a ``stack`` user. Automatically creating
+user accounts is not the right response to running as root, so
+that bit is now an explicit step using ``tools/create-stack-user.sh``.
+Run that (as root!) or just check it out to see what DevStack's
+expectations are for the account it runs under. Many people simply
+use their usual login (the default 'ubuntu' login on a UEC image
+for example).
# Customizing
-You can override environment variables used in `stack.sh` by creating file name `localrc`. It is likely that you will need to do this to tweak your networking configuration should you need to access your cloud from a different host.
+You can override environment variables used in `stack.sh` by creating a file
+named `local.conf` with a ``localrc`` section as shown below. It is likely
+that you will need to do this to tweak your networking configuration should
+you need to access your cloud from a different host.
+
+ [[local|localrc]]
+ VARIABLE=value
+
+See the **Local Configuration** section below for more details.
# Database Backend
Multiple database backends are available. The available databases are defined in the lib/databases directory.
-`mysql` is the default database, choose a different one by putting the following in `localrc`:
+`mysql` is the default database; choose a different one by putting the
+following in the `localrc` section:
disable_service mysql
enable_service postgresql
@@ -81,7 +106,7 @@
Multiple RPC backends are available. Currently, this
includes RabbitMQ (default), Qpid, and ZeroMQ. Your backend of
-choice may be selected via the `localrc`.
+choice may be selected via the `localrc` section.
Note that selecting more than one RPC backend will result in a failure.
@@ -95,9 +120,10 @@
# Apache Frontend
-Apache web server is enabled for wsgi services by setting `APACHE_ENABLED_SERVICES` in your localrc. But remember to enable these services at first as above.
+Apache web server is enabled for wsgi services by setting
+`APACHE_ENABLED_SERVICES` in your ``localrc`` section. Remember to
+enable these services first, as described above.
-Example:
APACHE_ENABLED_SERVICES+=keystone,swift
# Swift
@@ -108,23 +134,23 @@
object services will run directly in screen. The others services like
replicator, updaters or auditor runs in background.
-If you would like to enable Swift you can add this to your `localrc` :
+If you would like to enable Swift you can add this to your `localrc` section:
enable_service s-proxy s-object s-container s-account
If you want a minimal Swift install with only Swift and Keystone you
-can have this instead in your `localrc`:
+can have this instead in your `localrc` section:
disable_all_services
enable_service key mysql s-proxy s-object s-container s-account
If you only want to do some testing of a real normal swift cluster
with multiple replicas you can do so by customizing the variable
-`SWIFT_REPLICAS` in your `localrc` (usually to 3).
+`SWIFT_REPLICAS` in your `localrc` section (usually to 3).
# Swift S3
-If you are enabling `swift3` in `ENABLED_SERVICES` devstack will
+If you are enabling `swift3` in `ENABLED_SERVICES` DevStack will
install the swift3 middleware emulation. Swift will be configured to
act as a S3 endpoint for Keystone so effectively replacing the
`nova-objectstore`.
@@ -137,7 +163,7 @@
Basic Setup
In order to enable Neutron a single node setup, you'll need the
-following settings in your `localrc` :
+following settings in your `localrc` section:
disable_service n-net
enable_service q-svc
@@ -146,12 +172,15 @@
enable_service q-l3
enable_service q-meta
enable_service neutron
- # Optional, to enable tempest configuration as part of devstack
+ # Optional, to enable tempest configuration as part of DevStack
enable_service tempest
Then run `stack.sh` as normal.
-devstack supports adding specific Neutron configuration flags to the service, Open vSwitch plugin and LinuxBridge plugin configuration files. To make use of this feature, the following variables are defined and can be configured in your `localrc` file:
+DevStack supports setting specific Neutron configuration flags in the
+service, Open vSwitch plugin and LinuxBridge plugin configuration files.
+To make use of this feature, the following variables are defined and can
+be configured in your `localrc` section:
Variable Name Config File Section Modified
-------------------------------------------------------------------------------------
@@ -160,12 +189,14 @@
Q_AGENT_EXTRA_SRV_OPTS Plugin `OVS` (for Open Vswitch) or `LINUX_BRIDGE` (for LinuxBridge)
Q_SRV_EXTRA_DEFAULT_OPTS Service DEFAULT
-An example of using the variables in your `localrc` is below:
+An example of using the variables in your `localrc` section is below:
Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_type=vxlan vxlan_udp_port=8472)
Q_SRV_EXTRA_OPTS=(tenant_network_type=vxlan)
-devstack also supports configuring the Neutron ML2 plugin. The ML2 plugin can run with the OVS, LinuxBridge, or Hyper-V agents on compute hosts. A simple way to configure the ml2 plugin is shown below:
+DevStack also supports configuring the Neutron ML2 plugin. The ML2 plugin
+can run with the OVS, LinuxBridge, or Hyper-V agents on compute hosts. A
+simple way to configure the ml2 plugin is shown below:
# VLAN configuration
Q_PLUGIN=ml2
@@ -179,7 +210,9 @@
Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=vxlan
-The above will default in devstack to using the OVS on each compute host. To change this, set the `Q_AGENT` variable to the agent you want to run (e.g. linuxbridge).
+The above will default in DevStack to using the OVS agent on each compute
+host. To change this, set the `Q_AGENT` variable to the agent you want to
+run (e.g. linuxbridge).
Variable Name Notes
-------------------------------------------------------------------------------------
@@ -194,13 +227,13 @@
# Heat
Heat is disabled by default. To enable it you'll need the following settings
-in your `localrc` :
+in your `localrc` section:
enable_service heat h-api h-api-cfn h-api-cw h-eng
Heat can also run in standalone mode, and be configured to orchestrate
on an external OpenStack cloud. To launch only Heat in standalone mode
-you'll need the following settings in your `localrc` :
+you'll need the following settings in your `localrc` section:
disable_all_services
enable_service rabbit mysql heat h-api h-api-cfn h-api-cw h-eng
@@ -215,6 +248,24 @@
$ cd /opt/stack/tempest
$ nosetests tempest/scenario/test_network_basic_ops.py
+# DevStack on Xenserver
+
+If you would like to use Xenserver as the hypervisor, please refer to the instructions in `./tools/xen/README.md`.
+
+# DevStack on Docker
+
+If you would like to use Docker as the hypervisor, please refer to the instructions in `./tools/docker/README.md`.
+
+# Additional Projects
+
+DevStack has a hook mechanism to call out to a dispatch script at specific
+points in the execution of `stack.sh`, `unstack.sh` and `clean.sh`. This
+allows upper-layer projects, especially those that the lower layer projects
+have no dependency on, to be added to DevStack without modifying the core
+scripts. Tempest is built this way as an example of how to structure the
+dispatch script; see `extras.d/80-tempest.sh`. See `extras.d/README.md`
+for more information.
+
# Multi-Node Setup
A more interesting setup involves running multiple compute nodes, with Neutron networks connecting VMs on different compute nodes.
@@ -228,7 +279,8 @@
enable_service q-meta
enable_service neutron
-You likely want to change your `localrc` to run a scheduler that will balance VMs across hosts:
+You likely want to change your `localrc` section to run a scheduler that
+will balance VMs across hosts:
SCHEDULER=nova.scheduler.simple.SimpleScheduler
@@ -245,8 +297,56 @@
Cells is a new scaling option with a full spec at http://wiki.openstack.org/blueprint-nova-compute-cells.
-To setup a cells environment add the following to your `localrc`:
+To setup a cells environment add the following to your `localrc` section:
enable_service n-cell
Be aware that there are some features currently missing in cells, one notable one being security groups. The exercises have been patched to disable functionality not supported by cells.
+
+
+# Local Configuration
+
+Historically DevStack has used ``localrc`` to contain all local configuration and customizations. More and more of the configuration variables available for DevStack are passed-through to the individual project configuration files. The old mechanism for this required specific code for each file and did not scale well. This is handled now by a master local configuration file.
+
+# local.conf
+
+The new config file ``local.conf`` is an extended-INI format that introduces a new meta-section header that provides some additional information such as a phase name and destination config filename:
+
+ [[ <phase> | <config-file-name> ]]
+
+where ``<phase>`` is one of a set of phase names defined by ``stack.sh``
+and ``<config-file-name>`` is the configuration filename. The filename is
+eval'ed in the ``stack.sh`` context so all environment variables are
+available and may be used. Using the project config file variables in
+the header is strongly suggested (see the ``NOVA_CONF`` example below).
+If the path of the config file does not exist it is skipped.
+
+The defined phases are:
+
+* **local** - extracts ``localrc`` from ``local.conf`` before ``stackrc`` is sourced
+* **post-config** - runs after the layer 2 services are configured and before they are started
+* **extra** - runs after services are started and before any files in ``extra.d`` are executed
+
+The file is processed strictly in sequence; meta-sections may be specified more than once but if any settings are duplicated the last to appear in the file will be used.
+
+ [[post-config|$NOVA_CONF]]
+ [DEFAULT]
+ use_syslog = True
+
+ [osapi_v3]
+ enabled = False
+
+A specific meta-section ``local|localrc`` is used to provide a default
+``localrc`` file (actually ``.localrc.auto``). This allows all custom
+settings for DevStack to be contained in a single file. If ``localrc``
+exists it will be used instead to preserve backward-compatibility.
+
+ [[local|localrc]]
+ FIXED_RANGE=10.254.1.0/24
+ ADMIN_PASSWORD=speciale
+ LOGFILE=$DEST/logs/stack.sh.log
+
+Note that ``Q_PLUGIN_CONF_FILE`` is unique in that it is assumed to *NOT*
+start with a ``/`` (slash) character. A slash will need to be added:
+
+ [[post-config|/$Q_PLUGIN_CONF_FILE]]
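
Putting these pieces together, a single ``local.conf`` can carry both the user's ``localrc`` settings and per-project overrides; a minimal sketch (all values illustrative):

```ini
[[local|localrc]]
ADMIN_PASSWORD=speciale
FIXED_RANGE=10.254.1.0/24

[[post-config|$NOVA_CONF]]
[DEFAULT]
use_syslog = True
```

The ``local|localrc`` section is extracted before ``stackrc`` is sourced; the ``post-config`` section is merged into ``$NOVA_CONF`` after the layer 2 services are configured but before they are started.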
diff --git a/clean.sh b/clean.sh
index 6ceb5a4..395941a 100755
--- a/clean.sh
+++ b/clean.sh
@@ -47,6 +47,15 @@
source $TOP_DIR/lib/baremetal
source $TOP_DIR/lib/ldap
+# Extras Source
+# --------------
+
+# Phase: source
+if [[ -d $TOP_DIR/extras.d ]]; then
+ for i in $TOP_DIR/extras.d/*.sh; do
+ [[ -r $i ]] && source $i source
+ done
+fi
# See if there is anything running...
# need to adapt when run_service is merged
@@ -56,6 +65,16 @@
$TOP_DIR/unstack.sh --all
fi
+# Run extras
+# ==========
+
+# Phase: clean
+if [[ -d $TOP_DIR/extras.d ]]; then
+ for i in $TOP_DIR/extras.d/*.sh; do
+ [[ -r $i ]] && source $i clean
+ done
+fi
+
# Clean projects
cleanup_oslo
cleanup_cinder
diff --git a/exercises/aggregates.sh b/exercises/aggregates.sh
index e2baecd..e5fc7de 100755
--- a/exercises/aggregates.sh
+++ b/exercises/aggregates.sh
@@ -100,7 +100,7 @@
META_DATA_3_KEY=bar
#ensure no additional metadata is set
-nova aggregate-details $AGGREGATE_ID | egrep "{u'availability_zone': u'$AGGREGATE_A_ZONE'}|{}"
+nova aggregate-details $AGGREGATE_ID | egrep "\|[{u ]*'availability_zone.+$AGGREGATE_A_ZONE'[ }]*\|"
nova aggregate-set-metadata $AGGREGATE_ID ${META_DATA_1_KEY}=123
nova aggregate-details $AGGREGATE_ID | grep $META_DATA_1_KEY
@@ -117,7 +117,7 @@
nova aggregate-details $AGGREGATE_ID | grep $META_DATA_2_KEY && die $LINENO "ERROR metadata was not cleared"
nova aggregate-set-metadata $AGGREGATE_ID $META_DATA_3_KEY $META_DATA_1_KEY
-nova aggregate-details $AGGREGATE_ID | egrep "{u'availability_zone': u'$AGGREGATE_A_ZONE'}|{}"
+nova aggregate-details $AGGREGATE_ID | egrep "\|[{u ]*'availability_zone.+$AGGREGATE_A_ZONE'[ }]*\|"
# Test aggregate-add/remove-host
diff --git a/exercises/boot_from_volume.sh b/exercises/boot_from_volume.sh
index fe27bd0..634a6d5 100755
--- a/exercises/boot_from_volume.sh
+++ b/exercises/boot_from_volume.sh
@@ -119,7 +119,7 @@
INSTANCE_TYPE=$(nova flavor-list | grep $DEFAULT_INSTANCE_TYPE | get_field 1)
if [[ -z "$INSTANCE_TYPE" ]]; then
# grab the first flavor in the list to launch if default doesn't exist
- INSTANCE_TYPE=$(nova flavor-list | head -n 4 | tail -n 1 | get_field 1)
+ INSTANCE_TYPE=$(nova flavor-list | head -n 4 | tail -n 1 | get_field 1)
fi
# Clean-up from previous runs
diff --git a/exercises/docker.sh b/exercises/docker.sh
index 0672bc0..10c5436 100755
--- a/exercises/docker.sh
+++ b/exercises/docker.sh
@@ -62,7 +62,7 @@
INSTANCE_TYPE=$(nova flavor-list | grep $DEFAULT_INSTANCE_TYPE | get_field 1)
if [[ -z "$INSTANCE_TYPE" ]]; then
# grab the first flavor in the list to launch if default doesn't exist
- INSTANCE_TYPE=$(nova flavor-list | head -n 4 | tail -n 1 | get_field 1)
+ INSTANCE_TYPE=$(nova flavor-list | head -n 4 | tail -n 1 | get_field 1)
fi
# Clean-up from previous runs
@@ -102,4 +102,3 @@
echo "*********************************************************************"
echo "SUCCESS: End DevStack Exercise: $0"
echo "*********************************************************************"
-
diff --git a/exercises/euca.sh b/exercises/euca.sh
index 64c0014..ed521e4 100755
--- a/exercises/euca.sh
+++ b/exercises/euca.sh
@@ -87,31 +87,31 @@
# Volumes
# -------
if is_service_enabled c-vol && ! is_service_enabled n-cell; then
- VOLUME_ZONE=`euca-describe-availability-zones | head -n1 | cut -f2`
- die_if_not_set $LINENO VOLUME_ZONE "Failure to find zone for volume"
+ VOLUME_ZONE=`euca-describe-availability-zones | head -n1 | cut -f2`
+ die_if_not_set $LINENO VOLUME_ZONE "Failure to find zone for volume"
- VOLUME=`euca-create-volume -s 1 -z $VOLUME_ZONE | cut -f2`
- die_if_not_set $LINENO VOLUME "Failure to create volume"
+ VOLUME=`euca-create-volume -s 1 -z $VOLUME_ZONE | cut -f2`
+ die_if_not_set $LINENO VOLUME "Failure to create volume"
- # Test that volume has been created
- VOLUME=`euca-describe-volumes $VOLUME | cut -f2`
- die_if_not_set $LINENO VOLUME "Failure to get volume"
+ # Test that volume has been created
+ VOLUME=`euca-describe-volumes $VOLUME | cut -f2`
+ die_if_not_set $LINENO VOLUME "Failure to get volume"
- # Test volume has become available
- if ! timeout $RUNNING_TIMEOUT sh -c "while ! euca-describe-volumes $VOLUME | grep -q available; do sleep 1; done"; then
- die $LINENO "volume didn't become available within $RUNNING_TIMEOUT seconds"
- fi
+ # Test volume has become available
+ if ! timeout $RUNNING_TIMEOUT sh -c "while ! euca-describe-volumes $VOLUME | grep -q available; do sleep 1; done"; then
+ die $LINENO "volume didn't become available within $RUNNING_TIMEOUT seconds"
+ fi
- # Attach volume to an instance
- euca-attach-volume -i $INSTANCE -d $ATTACH_DEVICE $VOLUME || \
- die $LINENO "Failure attaching volume $VOLUME to $INSTANCE"
- if ! timeout $ACTIVE_TIMEOUT sh -c "while ! euca-describe-volumes $VOLUME | grep -A 1 in-use | grep -q attach; do sleep 1; done"; then
- die $LINENO "Could not attach $VOLUME to $INSTANCE"
- fi
+ # Attach volume to an instance
+ euca-attach-volume -i $INSTANCE -d $ATTACH_DEVICE $VOLUME || \
+ die $LINENO "Failure attaching volume $VOLUME to $INSTANCE"
+ if ! timeout $ACTIVE_TIMEOUT sh -c "while ! euca-describe-volumes $VOLUME | grep -A 1 in-use | grep -q attach; do sleep 1; done"; then
+ die $LINENO "Could not attach $VOLUME to $INSTANCE"
+ fi
- # Detach volume from an instance
- euca-detach-volume $VOLUME || \
- die $LINENO "Failure detaching volume $VOLUME to $INSTANCE"
+ # Detach volume from an instance
+ euca-detach-volume $VOLUME || \
+ die $LINENO "Failure detaching volume $VOLUME to $INSTANCE"
if ! timeout $ACTIVE_TIMEOUT sh -c "while ! euca-describe-volumes $VOLUME | grep -q available; do sleep 1; done"; then
die $LINENO "Could not detach $VOLUME to $INSTANCE"
fi
@@ -120,7 +120,7 @@
euca-delete-volume $VOLUME || \
die $LINENO "Failure to delete volume"
if ! timeout $ACTIVE_TIMEOUT sh -c "while euca-describe-volumes | grep $VOLUME; do sleep 1; done"; then
- die $LINENO "Could not delete $VOLUME"
+ die $LINENO "Could not delete $VOLUME"
fi
else
echo "Volume Tests Skipped"
diff --git a/exercises/floating_ips.sh b/exercises/floating_ips.sh
index 2833b65..1a1608c 100755
--- a/exercises/floating_ips.sh
+++ b/exercises/floating_ips.sh
@@ -113,7 +113,7 @@
INSTANCE_TYPE=$(nova flavor-list | grep $DEFAULT_INSTANCE_TYPE | get_field 1)
if [[ -z "$INSTANCE_TYPE" ]]; then
# grab the first flavor in the list to launch if default doesn't exist
- INSTANCE_TYPE=$(nova flavor-list | head -n 4 | tail -n 1 | get_field 1)
+ INSTANCE_TYPE=$(nova flavor-list | head -n 4 | tail -n 1 | get_field 1)
fi
# Clean-up from previous runs
@@ -168,7 +168,7 @@
# list floating addresses
if ! timeout $ASSOCIATE_TIMEOUT sh -c "while ! nova floating-ip-list | grep $TEST_FLOATING_POOL | grep -q $TEST_FLOATING_IP; do sleep 1; done"; then
die $LINENO "Floating IP not allocated"
- fi
+ fi
fi
# Dis-allow icmp traffic (ping)
diff --git a/exercises/neutron-adv-test.sh b/exercises/neutron-adv-test.sh
index abb29cf..7dfa5dc 100755
--- a/exercises/neutron-adv-test.sh
+++ b/exercises/neutron-adv-test.sh
@@ -102,6 +102,7 @@
# and save it.
TOKEN=`keystone token-get | grep ' id ' | awk '{print $4}'`
+die_if_not_set $LINENO TOKEN "Keystone failed to get token"
# Various functions
# -----------------
@@ -272,12 +273,12 @@
}
function ping_ip {
- # Test agent connection. Assumes namespaces are disabled, and
- # that DHCP is in use, but not L3
- local VM_NAME=$1
- local NET_NAME=$2
- IP=$(get_instance_ip $VM_NAME $NET_NAME)
- ping_check $NET_NAME $IP $BOOT_TIMEOUT
+ # Test agent connection. Assumes namespaces are disabled, and
+ # that DHCP is in use, but not L3
+ local VM_NAME=$1
+ local NET_NAME=$2
+ IP=$(get_instance_ip $VM_NAME $NET_NAME)
+ ping_check $NET_NAME $IP $BOOT_TIMEOUT
}
function check_vm {
@@ -329,12 +330,12 @@
}
function delete_networks {
- foreach_tenant_net 'delete_network ${%TENANT%_NAME} %NUM%'
- #TODO(nati) add secuirty group check after it is implemented
- # source $TOP_DIR/openrc demo1 demo1
- # nova secgroup-delete-rule default icmp -1 -1 0.0.0.0/0
- # source $TOP_DIR/openrc demo2 demo2
- # nova secgroup-delete-rule default icmp -1 -1 0.0.0.0/0
+ foreach_tenant_net 'delete_network ${%TENANT%_NAME} %NUM%'
+    # TODO(nati) add security group check after it is implemented
+ # source $TOP_DIR/openrc demo1 demo1
+ # nova secgroup-delete-rule default icmp -1 -1 0.0.0.0/0
+ # source $TOP_DIR/openrc demo2 demo2
+ # nova secgroup-delete-rule default icmp -1 -1 0.0.0.0/0
}
function create_all {
diff --git a/exercises/volumes.sh b/exercises/volumes.sh
index e536d16..9ee9fa9 100755
--- a/exercises/volumes.sh
+++ b/exercises/volumes.sh
@@ -117,7 +117,7 @@
INSTANCE_TYPE=$(nova flavor-list | grep $DEFAULT_INSTANCE_TYPE | get_field 1)
if [[ -z "$INSTANCE_TYPE" ]]; then
# grab the first flavor in the list to launch if default doesn't exist
- INSTANCE_TYPE=$(nova flavor-list | head -n 4 | tail -n 1 | get_field 1)
+ INSTANCE_TYPE=$(nova flavor-list | head -n 4 | tail -n 1 | get_field 1)
fi
# Clean-up from previous runs
diff --git a/extras.d/80-tempest.sh b/extras.d/80-tempest.sh
index f159955..75b702c 100644
--- a/extras.d/80-tempest.sh
+++ b/extras.d/80-tempest.sh
@@ -1,21 +1,29 @@
# tempest.sh - DevStack extras script
-source $TOP_DIR/lib/tempest
-
-if [[ "$1" == "stack" ]]; then
- # Configure Tempest last to ensure that the runtime configuration of
- # the various OpenStack services can be queried.
- if is_service_enabled tempest; then
- echo_summary "Configuring Tempest"
+if is_service_enabled tempest; then
+ if [[ "$1" == "source" ]]; then
+ # Initial source
+ source $TOP_DIR/lib/tempest
+ elif [[ "$1" == "stack" && "$2" == "install" ]]; then
+ echo_summary "Installing Tempest"
install_tempest
+ elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
+ # Tempest config must come after layer 2 services are running
+ :
+ elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
+ echo_summary "Initializing Tempest"
configure_tempest
init_tempest
fi
-fi
-if [[ "$1" == "unstack" ]]; then
- # no-op
- :
-fi
+ if [[ "$1" == "unstack" ]]; then
+ # no-op
+ :
+ fi
+ if [[ "$1" == "clean" ]]; then
+ # no-op
+ :
+ fi
+fi
diff --git a/extras.d/README b/extras.d/README
deleted file mode 100644
index ffc6793..0000000
--- a/extras.d/README
+++ /dev/null
@@ -1,14 +0,0 @@
-The extras.d directory contains project initialization scripts to be
-sourced by stack.sh at the end of its run. This is expected to be
-used by external projects that want to be configured, started and
-stopped with DevStack.
-
-Order is controlled by prefixing the script names with the a two digit
-sequence number. Script names must end with '.sh'. This provides a
-convenient way to disable scripts by simoy renaming them.
-
-DevStack reserves the sequence numbers 00 through 09 and 90 through 99
-for its own use.
-
-The scripts are called with an argument of 'stack' by stack.sh and
-with an argument of 'unstack' by unstack.sh.
diff --git a/extras.d/README.md b/extras.d/README.md
new file mode 100644
index 0000000..88e4265
--- /dev/null
+++ b/extras.d/README.md
@@ -0,0 +1,30 @@
+# Extras Hooks
+
+The `extras.d` directory contains project dispatch scripts that are called
+at specific times by `stack.sh`, `unstack.sh` and `clean.sh`. These hooks are
+used to install, configure and start additional projects during a DevStack run
+without any modifications to the base DevStack scripts.
+
+When `stack.sh` reaches one of the hook points it sources the scripts in `extras.d`
+that end with `.sh`. To control the order that the scripts are sourced their
+names start with a two digit sequence number. DevStack reserves the sequence
+numbers 00 through 09 and 90 through 99 for its own use.
+
+The scripts are sourced at the beginning of each script that calls them. The
+entire `stack.sh` variable space is available. The scripts are
+sourced with one or more arguments, the first of which defines the hook phase:
+
+ source | stack | unstack | clean
+
+ source: always called first in any of the scripts, used to set the
+ initial defaults in a lib/* script or similar
+
+ stack: called by stack.sh. There are three possible values for
+ the second arg to distinguish the phase stack.sh is in:
+
+ arg 2: install | post-config | extra
+
+ unstack: called by unstack.sh
+
+ clean: called by clean.sh. Remember, clean.sh also calls unstack.sh
+ so that work need not be repeated.
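
The phases above can be sketched as a minimal dispatch script. Here ``myservice`` and its helper functions are hypothetical stand-ins — in a real script they would come from DevStack's ``functions`` file and a ``lib/`` script, and the body would run at top level against the ``$1``/``$2`` passed by ``source $i <phase>``; they are stubbed and wrapped in a function here so the sketch runs on its own:

```shell
#!/bin/bash
# Sketch of a hypothetical extras.d/55-myservice.sh dispatch script.

ENABLED_SERVICES=key,mysql,myservice

# Minimal stand-ins for helpers DevStack's `functions` file would provide
is_service_enabled() { [[ ",$ENABLED_SERVICES," == *",$1,"* ]]; }
echo_summary() { echo "$@"; }

# Stand-ins for what a lib/myservice script would define
install_myservice()   { echo "installed"; }
configure_myservice() { echo "configured"; }
init_myservice()      { echo "initialized"; }

myservice_dispatch() {
    if is_service_enabled myservice; then
        if [[ "$1" == "stack" && "$2" == "install" ]]; then
            echo_summary "Installing myservice"
            install_myservice
        elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
            # Layer 2 services are configured but not yet started
            configure_myservice
        elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
            # All services are running; safe to talk to live APIs
            init_myservice
        elif [[ "$1" == "unstack" || "$1" == "clean" ]]; then
            # no-op; remember that clean.sh also calls unstack.sh
            :
        fi
    fi
}

myservice_dispatch stack install
```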
diff --git a/files/keystone_data.sh b/files/keystone_data.sh
index 3f3137c..ea2d52d 100755
--- a/files/keystone_data.sh
+++ b/files/keystone_data.sh
@@ -66,12 +66,12 @@
# Heat
if [[ "$ENABLED_SERVICES" =~ "heat" ]]; then
HEAT_USER=$(get_id keystone user-create --name=heat \
- --pass="$SERVICE_PASSWORD" \
- --tenant_id $SERVICE_TENANT \
- --email=heat@example.com)
+ --pass="$SERVICE_PASSWORD" \
+ --tenant_id $SERVICE_TENANT \
+ --email=heat@example.com)
keystone user-role-add --tenant-id $SERVICE_TENANT \
- --user-id $HEAT_USER \
- --role-id $SERVICE_ROLE
+ --user-id $HEAT_USER \
+ --role-id $SERVICE_ROLE
# heat_stack_user role is for users created by Heat
keystone role-create --name heat_stack_user
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
@@ -126,16 +126,16 @@
# Ceilometer
if [[ "$ENABLED_SERVICES" =~ "ceilometer" ]]; then
CEILOMETER_USER=$(get_id keystone user-create --name=ceilometer \
- --pass="$SERVICE_PASSWORD" \
- --tenant_id $SERVICE_TENANT \
- --email=ceilometer@example.com)
+ --pass="$SERVICE_PASSWORD" \
+ --tenant_id $SERVICE_TENANT \
+ --email=ceilometer@example.com)
keystone user-role-add --tenant-id $SERVICE_TENANT \
- --user-id $CEILOMETER_USER \
- --role-id $ADMIN_ROLE
+ --user-id $CEILOMETER_USER \
+ --role-id $ADMIN_ROLE
# Ceilometer needs ResellerAdmin role to access swift account stats.
keystone user-role-add --tenant-id $SERVICE_TENANT \
- --user-id $CEILOMETER_USER \
- --role-id $RESELLER_ROLE
+ --user-id $CEILOMETER_USER \
+ --role-id $RESELLER_ROLE
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
CEILOMETER_SERVICE=$(get_id keystone service-create \
--name=ceilometer \
diff --git a/functions b/functions
index 01e2dfc..af5a37d 100644
--- a/functions
+++ b/functions
@@ -2,7 +2,7 @@
#
# The following variables are assumed to be defined by certain functions:
# ``ENABLED_SERVICES``
-# ``EROR_ON_CLONE``
+# ``ERROR_ON_CLONE``
# ``FILES``
# ``GLANCE_HOSTPORT``
# ``OFFLINE``
@@ -155,6 +155,22 @@
}
+# Prints line number and "message" in warning format
+# warn $LINENO "message"
+function warn() {
+ local exitcode=$?
+ errXTRACE=$(set +o | grep xtrace)
+ set +o xtrace
+ local msg="[WARNING] ${BASH_SOURCE[2]}:$1 $2"
+ echo $msg 1>&2;
+ if [[ -n ${SCREEN_LOGDIR} ]]; then
+ echo $msg >> "${SCREEN_LOGDIR}/error.log"
+ fi
+ $errXTRACE
+ return $exitcode
+}
+
+
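A trimmed-down sketch of how the new `warn` helper behaves (this version drops the xtrace save/restore and the `BASH_SOURCE` bookkeeping of the real function, and assumes `SCREEN_LOGDIR` is unset):

```shell
# Trimmed-down sketch of warn(): prints a tagged message to stderr and
# preserves the caller's exit code.
SCREEN_LOGDIR=""   # unset in this sketch, so nothing is appended to error.log

warn() {
    local exitcode=$?
    local msg="[WARNING] $1 $2"
    echo $msg 1>&2
    if [[ -n ${SCREEN_LOGDIR} ]]; then
        echo $msg >> "${SCREEN_LOGDIR}/error.log"
    fi
    return $exitcode
}

warn $LINENO "disk space is low"   # stderr: [WARNING] <lineno> disk space is low
```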
# HTTP and HTTPS proxy servers are supported via the usual environment variables [1]
# ``http_proxy``, ``https_proxy`` and ``no_proxy``. They can be set in
# ``localrc`` or on the command line if necessary::
@@ -248,7 +264,7 @@
# - ``# dist:DISTRO`` or ``dist:DISTRO1,DISTRO2`` limits the selection
# of the package to the distros listed. The distro names are case insensitive.
function get_packages() {
- local services=$1
+ local services=$@
local package_dir=$(_get_package_dir)
local file_to_parse
local service
@@ -260,7 +276,7 @@
if [[ -z "$DISTRO" ]]; then
GetDistro
fi
- for service in general ${services//,/ }; do
+ for service in ${services//,/ }; do
# Allow individual services to specify dependencies
if [[ -e ${package_dir}/${service} ]]; then
file_to_parse="${file_to_parse} $service"
@@ -564,7 +580,8 @@
if echo $GIT_REF | egrep -q "^refs"; then
# If our branch name is a gerrit style refs/changes/...
if [[ ! -d $GIT_DEST ]]; then
- [[ "$ERROR_ON_CLONE" = "True" ]] && exit 1
+ [[ "$ERROR_ON_CLONE" = "True" ]] && \
+ die $LINENO "Cloning not allowed in this configuration"
git clone $GIT_REMOTE $GIT_DEST
fi
cd $GIT_DEST
@@ -572,7 +589,8 @@
else
# do a full clone only if the directory doesn't exist
if [[ ! -d $GIT_DEST ]]; then
- [[ "$ERROR_ON_CLONE" = "True" ]] && exit 1
+ [[ "$ERROR_ON_CLONE" = "True" ]] && \
+ die $LINENO "Cloning not allowed in this configuration"
git clone $GIT_REMOTE $GIT_DEST
cd $GIT_DEST
# This checkout syntax works for both branches and tags
@@ -596,8 +614,7 @@
elif [[ -n "`git show-ref refs/remotes/origin/$GIT_REF`" ]]; then
git_update_remote_branch $GIT_REF
else
- echo $GIT_REF is neither branch nor tag
- exit 1
+ die $LINENO "$GIT_REF is neither branch nor tag"
fi
fi
@@ -697,7 +714,8 @@
local section=$2
local option=$3
local value=$4
- if ! grep -q "^\[$section\]" "$file"; then
+
+ if ! grep -q "^\[$section\]" "$file" 2>/dev/null; then
# Add section at the end
echo -e "\n[$section]" >>"$file"
fi
@@ -1355,9 +1373,9 @@
IMAGE="$FILES/${IMAGE_FNAME}"
IMAGE_NAME="${IMAGE_FNAME%.xen-raw.tgz}"
glance \
- --os-auth-token $token \
- --os-image-url http://$GLANCE_HOSTPORT \
- image-create \
+ --os-auth-token $token \
+ --os-image-url http://$GLANCE_HOSTPORT \
+ image-create \
--name "$IMAGE_NAME" --is-public=True \
--container-format=tgz --disk-format=raw \
--property vm_mode=xen < "${IMAGE}"
@@ -1380,11 +1398,11 @@
mkdir "$xdir"
tar -zxf $FILES/$IMAGE_FNAME -C "$xdir"
KERNEL=$(for f in "$xdir/"*-vmlinuz* "$xdir/"aki-*/image; do
- [ -f "$f" ] && echo "$f" && break; done; true)
+ [ -f "$f" ] && echo "$f" && break; done; true)
RAMDISK=$(for f in "$xdir/"*-initrd* "$xdir/"ari-*/image; do
- [ -f "$f" ] && echo "$f" && break; done; true)
+ [ -f "$f" ] && echo "$f" && break; done; true)
IMAGE=$(for f in "$xdir/"*.img "$xdir/"ami-*/image; do
- [ -f "$f" ] && echo "$f" && break; done; true)
+ [ -f "$f" ] && echo "$f" && break; done; true)
if [[ -z "$IMAGE_NAME" ]]; then
IMAGE_NAME=$(basename "$IMAGE" ".img")
fi
@@ -1546,7 +1564,6 @@
else
die $LINENO "[Fail] Could ping server"
fi
- exit 1
fi
}
@@ -1559,7 +1576,6 @@
if [[ $ip = "" ]];then
echo "$nova_result"
die $LINENO "[Fail] Coudn't get ipaddress of VM"
- exit 1
fi
echo $ip
}
@@ -1675,23 +1691,23 @@
#
# _vercmp_r sep ver1 ver2
function _vercmp_r {
- typeset sep
- typeset -a ver1=() ver2=()
- sep=$1; shift
- ver1=("${@:1:sep}")
- ver2=("${@:sep+1}")
+ typeset sep
+ typeset -a ver1=() ver2=()
+ sep=$1; shift
+ ver1=("${@:1:sep}")
+ ver2=("${@:sep+1}")
- if ((ver1 > ver2)); then
- echo 1; return 0
- elif ((ver2 > ver1)); then
- echo -1; return 0
- fi
+ if ((ver1 > ver2)); then
+ echo 1; return 0
+ elif ((ver2 > ver1)); then
+ echo -1; return 0
+ fi
- if ((sep <= 1)); then
- echo 0; return 0
- fi
+ if ((sep <= 1)); then
+ echo 0; return 0
+ fi
- _vercmp_r $((sep-1)) "${ver1[@]:1}" "${ver2[@]:1}"
+ _vercmp_r $((sep-1)) "${ver1[@]:1}" "${ver2[@]:1}"
}
@@ -1713,13 +1729,13 @@
#
# vercmp_numbers ver1 ver2
vercmp_numbers() {
- typeset v1=$1 v2=$2 sep
- typeset -a ver1 ver2
+ typeset v1=$1 v2=$2 sep
+ typeset -a ver1 ver2
- IFS=. read -ra ver1 <<< "$v1"
- IFS=. read -ra ver2 <<< "$v2"
+ IFS=. read -ra ver1 <<< "$v1"
+ IFS=. read -ra ver2 <<< "$v2"
- _vercmp_r "${#ver1[@]}" "${ver1[@]}" "${ver2[@]}"
+ _vercmp_r "${#ver1[@]}" "${ver1[@]}" "${ver2[@]}"
}
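The re-indented `_vercmp_r`/`vercmp_numbers` helpers above compare dotted version strings component by component; a self-contained copy (bash-only, as in the original) shows the behavior:

```shell
# Self-contained copy of the recursive comparison helpers, for illustration.
# Prints 1 if ver1 > ver2, -1 if ver1 < ver2, 0 if equal.
_vercmp_r() {
    typeset sep
    typeset -a ver1=() ver2=()
    sep=$1; shift
    ver1=("${@:1:sep}")
    ver2=("${@:sep+1}")

    # In an arithmetic context a bare array name is its first element,
    # so this compares the leading component of each version
    if ((ver1 > ver2)); then
        echo 1; return 0
    elif ((ver2 > ver1)); then
        echo -1; return 0
    fi

    if ((sep <= 1)); then
        echo 0; return 0
    fi

    # Recurse on the remaining components
    _vercmp_r $((sep-1)) "${ver1[@]:1}" "${ver2[@]:1}"
}

vercmp_numbers() {
    typeset v1=$1 v2=$2
    typeset -a ver1 ver2
    IFS=. read -ra ver1 <<< "$v1"
    IFS=. read -ra ver2 <<< "$v2"
    _vercmp_r "${#ver1[@]}" "${ver1[@]}" "${ver2[@]}"
}

vercmp_numbers 1.10 1.2   # -> 1 (numeric, not lexicographic: 10 > 2)
vercmp_numbers 2.0 2.0    # -> 0
```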
diff --git a/lib/baremetal b/lib/baremetal
index 52af420..141c28d 100644
--- a/lib/baremetal
+++ b/lib/baremetal
@@ -256,19 +256,19 @@
# load them into glance
BM_DEPLOY_KERNEL_ID=$(glance \
- --os-auth-token $token \
- --os-image-url http://$GLANCE_HOSTPORT \
- image-create \
- --name $BM_DEPLOY_KERNEL \
- --is-public True --disk-format=aki \
- < $TOP_DIR/files/$BM_DEPLOY_KERNEL | grep ' id ' | get_field 2)
+ --os-auth-token $token \
+ --os-image-url http://$GLANCE_HOSTPORT \
+ image-create \
+ --name $BM_DEPLOY_KERNEL \
+ --is-public True --disk-format=aki \
+ < $TOP_DIR/files/$BM_DEPLOY_KERNEL | grep ' id ' | get_field 2)
BM_DEPLOY_RAMDISK_ID=$(glance \
- --os-auth-token $token \
- --os-image-url http://$GLANCE_HOSTPORT \
- image-create \
- --name $BM_DEPLOY_RAMDISK \
- --is-public True --disk-format=ari \
- < $TOP_DIR/files/$BM_DEPLOY_RAMDISK | grep ' id ' | get_field 2)
+ --os-auth-token $token \
+ --os-image-url http://$GLANCE_HOSTPORT \
+ image-create \
+ --name $BM_DEPLOY_RAMDISK \
+ --is-public True --disk-format=ari \
+ < $TOP_DIR/files/$BM_DEPLOY_RAMDISK | grep ' id ' | get_field 2)
}
# create a basic baremetal flavor, associated with deploy kernel & ramdisk
@@ -278,11 +278,11 @@
aki=$1
ari=$2
nova flavor-create $BM_FLAVOR_NAME $BM_FLAVOR_ID \
- $BM_FLAVOR_RAM $BM_FLAVOR_ROOT_DISK $BM_FLAVOR_CPU
+ $BM_FLAVOR_RAM $BM_FLAVOR_ROOT_DISK $BM_FLAVOR_CPU
nova flavor-key $BM_FLAVOR_NAME set \
- "cpu_arch"="$BM_FLAVOR_ARCH" \
- "baremetal:deploy_kernel_id"="$aki" \
- "baremetal:deploy_ramdisk_id"="$ari"
+ "cpu_arch"="$BM_FLAVOR_ARCH" \
+ "baremetal:deploy_kernel_id"="$aki" \
+ "baremetal:deploy_ramdisk_id"="$ari"
}
@@ -311,19 +311,19 @@
# load them into glance
KERNEL_ID=$(glance \
- --os-auth-token $token \
- --os-image-url http://$GLANCE_HOSTPORT \
- image-create \
- --name $image_name-kernel \
- --is-public True --disk-format=aki \
- < $TOP_DIR/files/$OUT_KERNEL | grep ' id ' | get_field 2)
+ --os-auth-token $token \
+ --os-image-url http://$GLANCE_HOSTPORT \
+ image-create \
+ --name $image_name-kernel \
+ --is-public True --disk-format=aki \
+ < $TOP_DIR/files/$OUT_KERNEL | grep ' id ' | get_field 2)
RAMDISK_ID=$(glance \
- --os-auth-token $token \
- --os-image-url http://$GLANCE_HOSTPORT \
- image-create \
- --name $image_name-initrd \
- --is-public True --disk-format=ari \
- < $TOP_DIR/files/$OUT_RAMDISK | grep ' id ' | get_field 2)
+ --os-auth-token $token \
+ --os-image-url http://$GLANCE_HOSTPORT \
+ image-create \
+ --name $image_name-initrd \
+ --is-public True --disk-format=ari \
+ < $TOP_DIR/files/$OUT_RAMDISK | grep ' id ' | get_field 2)
}
@@ -365,11 +365,11 @@
mkdir "$xdir"
tar -zxf $FILES/$IMAGE_FNAME -C "$xdir"
KERNEL=$(for f in "$xdir/"*-vmlinuz* "$xdir/"aki-*/image; do
- [ -f "$f" ] && echo "$f" && break; done; true)
+ [ -f "$f" ] && echo "$f" && break; done; true)
RAMDISK=$(for f in "$xdir/"*-initrd* "$xdir/"ari-*/image; do
- [ -f "$f" ] && echo "$f" && break; done; true)
+ [ -f "$f" ] && echo "$f" && break; done; true)
IMAGE=$(for f in "$xdir/"*.img "$xdir/"ami-*/image; do
- [ -f "$f" ] && echo "$f" && break; done; true)
+ [ -f "$f" ] && echo "$f" && break; done; true)
if [[ -z "$IMAGE_NAME" ]]; then
IMAGE_NAME=$(basename "$IMAGE" ".img")
fi
@@ -403,19 +403,19 @@
--container-format ari \
--disk-format ari < "$RAMDISK" | grep ' id ' | get_field 2)
else
- # TODO(deva): add support for other image types
- return
+ # TODO(deva): add support for other image types
+ return
fi
glance \
- --os-auth-token $token \
- --os-image-url http://$GLANCE_HOSTPORT \
- image-create \
- --name "${IMAGE_NAME%.img}" --is-public True \
- --container-format $CONTAINER_FORMAT \
- --disk-format $DISK_FORMAT \
- ${KERNEL_ID:+--property kernel_id=$KERNEL_ID} \
- ${RAMDISK_ID:+--property ramdisk_id=$RAMDISK_ID} < "${IMAGE}"
+ --os-auth-token $token \
+ --os-image-url http://$GLANCE_HOSTPORT \
+ image-create \
+ --name "${IMAGE_NAME%.img}" --is-public True \
+ --container-format $CONTAINER_FORMAT \
+ --disk-format $DISK_FORMAT \
+ ${KERNEL_ID:+--property kernel_id=$KERNEL_ID} \
+ ${RAMDISK_ID:+--property ramdisk_id=$RAMDISK_ID} < "${IMAGE}"
# override DEFAULT_IMAGE_NAME so that tempest can find the image
# that we just uploaded in glance
@@ -439,18 +439,20 @@
mac_2=${2:-$BM_SECOND_MAC}
id=$(nova baremetal-node-create \
- --pm_address="$BM_PM_ADDR" \
- --pm_user="$BM_PM_USER" \
- --pm_password="$BM_PM_PASS" \
- "$BM_HOSTNAME" \
- "$BM_FLAVOR_CPU" \
- "$BM_FLAVOR_RAM" \
- "$BM_FLAVOR_ROOT_DISK" \
- "$mac_1" \
- | grep ' id ' | get_field 2 )
+ --pm_address="$BM_PM_ADDR" \
+ --pm_user="$BM_PM_USER" \
+ --pm_password="$BM_PM_PASS" \
+ "$BM_HOSTNAME" \
+ "$BM_FLAVOR_CPU" \
+ "$BM_FLAVOR_RAM" \
+ "$BM_FLAVOR_ROOT_DISK" \
+ "$mac_1" \
+ | grep ' id ' | get_field 2 )
[ $? -eq 0 ] || [ "$id" ] || die $LINENO "Error adding baremetal node"
- id2=$(nova baremetal-interface-add "$id" "$mac_2" )
- [ $? -eq 0 ] || [ "$id2" ] || die $LINENO "Error adding interface to barmetal node $id"
+ if [ -n "$mac_2" ]; then
+ id2=$(nova baremetal-interface-add "$id" "$mac_2" )
+        [ $? -eq 0 ] || [ "$id2" ] || die $LINENO "Error adding interface to baremetal node $id"
+ fi
}
diff --git a/lib/ceilometer b/lib/ceilometer
index 1b04319..cd4c4d8 100644
--- a/lib/ceilometer
+++ b/lib/ceilometer
@@ -134,12 +134,12 @@
# start_ceilometer() - Start running processes, including screen
function start_ceilometer() {
- screen_it ceilometer-acompute "sg $LIBVIRT_GROUP \"ceilometer-agent-compute --config-file $CEILOMETER_CONF\""
- screen_it ceilometer-acentral "ceilometer-agent-central --config-file $CEILOMETER_CONF"
- screen_it ceilometer-collector "ceilometer-collector --config-file $CEILOMETER_CONF"
- screen_it ceilometer-api "ceilometer-api -d -v --log-dir=$CEILOMETER_API_LOG_DIR --config-file $CEILOMETER_CONF"
- screen_it ceilometer-alarm-notifier "ceilometer-alarm-notifier --config-file $CEILOMETER_CONF"
- screen_it ceilometer-alarm-evaluator "ceilometer-alarm-evaluator --config-file $CEILOMETER_CONF"
+ screen_it ceilometer-acompute "cd ; sg $LIBVIRT_GROUP \"ceilometer-agent-compute --config-file $CEILOMETER_CONF\""
+ screen_it ceilometer-acentral "cd ; ceilometer-agent-central --config-file $CEILOMETER_CONF"
+ screen_it ceilometer-collector "cd ; ceilometer-collector --config-file $CEILOMETER_CONF"
+ screen_it ceilometer-api "cd ; ceilometer-api -d -v --log-dir=$CEILOMETER_API_LOG_DIR --config-file $CEILOMETER_CONF"
+ screen_it ceilometer-alarm-notifier "cd ; ceilometer-alarm-notifier --config-file $CEILOMETER_CONF"
+ screen_it ceilometer-alarm-evaluator "cd ; ceilometer-alarm-evaluator --config-file $CEILOMETER_CONF"
}
# stop_ceilometer() - Stop running processes
diff --git a/lib/cinder b/lib/cinder
index ccf38b4..f6f137c 100644
--- a/lib/cinder
+++ b/lib/cinder
@@ -202,15 +202,25 @@
sudo mv $TEMPFILE /etc/sudoers.d/cinder-rootwrap
cp $CINDER_DIR/etc/cinder/api-paste.ini $CINDER_API_PASTE_INI
- iniset $CINDER_API_PASTE_INI filter:authtoken auth_host $KEYSTONE_AUTH_HOST
- iniset $CINDER_API_PASTE_INI filter:authtoken auth_port $KEYSTONE_AUTH_PORT
- iniset $CINDER_API_PASTE_INI filter:authtoken auth_protocol $KEYSTONE_AUTH_PROTOCOL
- iniset $CINDER_API_PASTE_INI filter:authtoken admin_tenant_name $SERVICE_TENANT_NAME
- iniset $CINDER_API_PASTE_INI filter:authtoken admin_user cinder
- iniset $CINDER_API_PASTE_INI filter:authtoken admin_password $SERVICE_PASSWORD
- iniset $CINDER_API_PASTE_INI filter:authtoken signing_dir $CINDER_AUTH_CACHE_DIR
+
+ inicomment $CINDER_API_PASTE_INI filter:authtoken auth_host
+ inicomment $CINDER_API_PASTE_INI filter:authtoken auth_port
+ inicomment $CINDER_API_PASTE_INI filter:authtoken auth_protocol
+ inicomment $CINDER_API_PASTE_INI filter:authtoken admin_tenant_name
+ inicomment $CINDER_API_PASTE_INI filter:authtoken admin_user
+ inicomment $CINDER_API_PASTE_INI filter:authtoken admin_password
+ inicomment $CINDER_API_PASTE_INI filter:authtoken signing_dir
cp $CINDER_DIR/etc/cinder/cinder.conf.sample $CINDER_CONF
+
+ iniset $CINDER_CONF keystone_authtoken auth_host $KEYSTONE_AUTH_HOST
+ iniset $CINDER_CONF keystone_authtoken auth_port $KEYSTONE_AUTH_PORT
+ iniset $CINDER_CONF keystone_authtoken auth_protocol $KEYSTONE_AUTH_PROTOCOL
+ iniset $CINDER_CONF keystone_authtoken admin_tenant_name $SERVICE_TENANT_NAME
+ iniset $CINDER_CONF keystone_authtoken admin_user cinder
+ iniset $CINDER_CONF keystone_authtoken admin_password $SERVICE_PASSWORD
+ iniset $CINDER_CONF keystone_authtoken signing_dir $CINDER_AUTH_CACHE_DIR
+
iniset $CINDER_CONF DEFAULT auth_strategy keystone
iniset $CINDER_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
iniset $CINDER_CONF DEFAULT verbose True
@@ -233,6 +243,7 @@
iniset $CINDER_CONF DEFAULT rootwrap_config "$CINDER_CONF_DIR/rootwrap.conf"
iniset $CINDER_CONF DEFAULT osapi_volume_extension cinder.api.contrib.standard_extensions
iniset $CINDER_CONF DEFAULT state_path $CINDER_STATE_PATH
+ iniset $CINDER_CONF DEFAULT lock_path $CINDER_STATE_PATH
iniset $CINDER_CONF DEFAULT periodic_interval $CINDER_PERIODIC_INTERVAL
if is_service_enabled ceilometer; then
diff --git a/lib/config b/lib/config
new file mode 100644
index 0000000..91cefe4
--- /dev/null
+++ b/lib/config
@@ -0,0 +1,130 @@
+# lib/config - Configuration file manipulation functions
+
+# These functions have no external dependencies and the following side-effects:
+#
+# CONFIG_AWK_CMD is defined, default is ``awk``
+
+# Meta-config files contain multiple INI-style configuration files
+# using a specific new section header to delimit them:
+#
+# [[group-name|file-name]]
+#
+# group-name refers to the group of configuration file changes to be processed
+# at a particular time.  These are called phases in ``stack.sh`` but are
+# called groups here because these functions are not DevStack-specific.
+#
+# file-name is the destination of the config file
+
+# Save trace setting
+C_XTRACE=$(set +o | grep xtrace)
+set +o xtrace
+
+
+# Allow the awk command to be overridden on legacy platforms
+CONFIG_AWK_CMD=${CONFIG_AWK_CMD:-awk}
+
+# Get the section for the specific group and config file
+# get_meta_section infile group configfile
+function get_meta_section() {
+ local file=$1
+ local matchgroup=$2
+ local configfile=$3
+
+ [[ -r $file ]] || return 0
+ [[ -z $configfile ]] && return 0
+
+ $CONFIG_AWK_CMD -v matchgroup=$matchgroup -v configfile=$configfile '
+ BEGIN { group = "" }
+    /^\[\[.+\|.*\]\]/ {
+ if (group == "") {
+ gsub("[][]", "", $1);
+ split($1, a, "|");
+ if (a[1] == matchgroup && a[2] == configfile) {
+ group=a[1]
+ }
+ } else {
+ group=""
+ }
+ next
+ }
+ {
+ if (group != "")
+ print $0
+ }
+ ' $file
+}
+
+
+# Get a list of config files for a specific group
+# get_meta_section_files infile group
+function get_meta_section_files() {
+ local file=$1
+ local matchgroup=$2
+
+ [[ -r $file ]] || return 0
+
+ $CONFIG_AWK_CMD -v matchgroup=$matchgroup '
+ /^\[\[.+\|.*\]\]/ {
+ gsub("[][]", "", $1);
+ split($1, a, "|");
+ if (a[1] == matchgroup)
+ print a[2]
+ }
+ ' $file
+}
+
+
+# Merge the contents of a meta-config file into its destination config file
+# If configfile does not exist it will be created.
+# merge_config_file infile group configfile
+function merge_config_file() {
+ local file=$1
+ local matchgroup=$2
+ local configfile=$3
+
+ [[ -r $configfile ]] || touch $configfile
+
+ get_meta_section $file $matchgroup $configfile | \
+ $CONFIG_AWK_CMD -v configfile=$configfile '
+ BEGIN { section = "" }
+ /^\[.+\]/ {
+ gsub("[][]", "", $1);
+ section=$1
+ next
+ }
+ /^ *\#/ {
+ next
+ }
+ /^.+/ {
+ split($0, d, " *= *")
+ print "iniset " configfile " " section " " d[1] " \"" d[2] "\""
+ }
+ ' | while read a; do eval "$a"; done
+
+}
+
+
+# Merge all of the files specified by group
+# merge_config_group infile group [group ...]
+function merge_config_group() {
+ local localfile=$1; shift
+ local matchgroups=$@
+
+ [[ -r $localfile ]] || return 0
+
+ for group in $matchgroups; do
+ for configfile in $(get_meta_section_files $localfile $group); do
+ if [[ -d $(dirname $configfile) ]]; then
+ merge_config_file $localfile $group $configfile
+ fi
+ done
+ done
+}
+
+
+# Restore xtrace
+$C_XTRACE
+
+# Local variables:
+# mode: shell-script
+# End:
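The section-extraction logic in the new `lib/config` can be exercised on its own. This is a minimal standalone sketch of `get_meta_section` (the awk body mirrors the function added above; the temp file and the config values are hypothetical):

```shell
# Standalone sketch of get_meta_section: print the body of the
# [[group|file]] section matching both arguments.
get_meta_section() {
    local file=$1
    local matchgroup=$2
    local configfile=$3

    [[ -r $file ]] || return 0
    [[ -z $configfile ]] && return 0

    awk -v matchgroup=$matchgroup -v configfile=$configfile '
        BEGIN { group = "" }
        /^\[\[.+\|.*\]\]/ {
            # [[group|file]] header: toggle matching on or off
            if (group == "") {
                gsub("[][]", "", $1);
                split($1, a, "|");
                if (a[1] == matchgroup && a[2] == configfile) {
                    group = a[1]
                }
            } else {
                group = ""
            }
            next
        }
        { if (group != "") print $0 }
    ' $file
}

# Hypothetical meta-config with two groups
conf=$(mktemp)
cat > $conf <<'EOF'
[[post-config|/etc/nova/nova.conf]]
[DEFAULT]
verbose = True
[[post-extra|/etc/glance/glance-api.conf]]
[DEFAULT]
debug = True
EOF

get_meta_section $conf post-config /etc/nova/nova.conf
# prints:
#   [DEFAULT]
#   verbose = True
rm -f $conf
```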
diff --git a/lib/glance b/lib/glance
index c6f11d0..75e3dd0 100644
--- a/lib/glance
+++ b/lib/glance
@@ -194,7 +194,7 @@
screen_it g-api "cd $GLANCE_DIR; $GLANCE_BIN_DIR/glance-api --config-file=$GLANCE_CONF_DIR/glance-api.conf"
echo "Waiting for g-api ($GLANCE_HOSTPORT) to start..."
if ! timeout $SERVICE_TIMEOUT sh -c "while ! wget --no-proxy -q -O- http://$GLANCE_HOSTPORT; do sleep 1; done"; then
- die $LINENO "g-api did not start"
+ die $LINENO "g-api did not start"
fi
}
diff --git a/lib/heat b/lib/heat
index ff9473e..8acadb4 100644
--- a/lib/heat
+++ b/lib/heat
@@ -86,7 +86,7 @@
iniset $HEAT_CONF DEFAULT use_syslog $SYSLOG
if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ]; then
# Add color to logging output
- setup_colorized_logging $HEAT_CONF DEFAULT
+ setup_colorized_logging $HEAT_CONF DEFAULT tenant user
fi
# keystone authtoken
diff --git a/lib/ironic b/lib/ironic
index f3b4a72..649c1c2 100644
--- a/lib/ironic
+++ b/lib/ironic
@@ -11,6 +11,7 @@
# ``stack.sh`` calls the entry points in this order:
#
# install_ironic
+# install_ironicclient
# configure_ironic
# init_ironic
# start_ironic
@@ -27,6 +28,7 @@
# Set up default directories
IRONIC_DIR=$DEST/ironic
+IRONICCLIENT_DIR=$DEST/python-ironicclient
IRONIC_AUTH_CACHE_DIR=${IRONIC_AUTH_CACHE_DIR:-/var/cache/ironic}
IRONIC_CONF_DIR=${IRONIC_CONF_DIR:-/etc/ironic}
IRONIC_CONF_FILE=$IRONIC_CONF_DIR/ironic.conf
@@ -45,6 +47,18 @@
# Functions
# ---------
+# install_ironic() - Collect source and prepare
+function install_ironic() {
+ git_clone $IRONIC_REPO $IRONIC_DIR $IRONIC_BRANCH
+ setup_develop $IRONIC_DIR
+}
+
+# install_ironicclient() - Collect sources and prepare
+function install_ironicclient() {
+ git_clone $IRONICCLIENT_REPO $IRONICCLIENT_DIR $IRONICCLIENT_BRANCH
+ setup_develop $IRONICCLIENT_DIR
+}
+
# cleanup_ironic() - Remove residual data files, anything left over from previous
# runs that would need to clean up.
function cleanup_ironic() {
@@ -170,12 +184,6 @@
create_ironic_accounts
}
-# install_ironic() - Collect source and prepare
-function install_ironic() {
- git_clone $IRONIC_REPO $IRONIC_DIR $IRONIC_BRANCH
- setup_develop $IRONIC_DIR
-}
-
# start_ironic() - Start running processes, including screen
function start_ironic() {
# Start Ironic API server, if enabled.
@@ -195,7 +203,7 @@
screen_it ir-api "cd $IRONIC_DIR; $IRONIC_BIN_DIR/ironic-api --config-file=$IRONIC_CONF_FILE"
echo "Waiting for ir-api ($IRONIC_HOSTPORT) to start..."
if ! timeout $SERVICE_TIMEOUT sh -c "while ! wget --no-proxy -q -O- http://$IRONIC_HOSTPORT; do sleep 1; done"; then
- die $LINENO "ir-api did not start"
+ die $LINENO "ir-api did not start"
fi
}
diff --git a/lib/keystone b/lib/keystone
index c93a436..beddb1c 100755
--- a/lib/keystone
+++ b/lib/keystone
@@ -373,7 +373,7 @@
echo "Waiting for keystone to start..."
if ! timeout $SERVICE_TIMEOUT sh -c "while ! curl --noproxy '*' -s http://$SERVICE_HOST:$service_port/v$IDENTITY_API_VERSION/ >/dev/null; do sleep 1; done"; then
- die $LINENO "keystone did not start"
+ die $LINENO "keystone did not start"
fi
# Start proxies if enabled
diff --git a/lib/neutron b/lib/neutron
index 778717d..44fb9e1 100644
--- a/lib/neutron
+++ b/lib/neutron
@@ -79,8 +79,8 @@
# Support entry points installation of console scripts
if [[ -d $NEUTRON_DIR/bin/neutron-server ]]; then
NEUTRON_BIN_DIR=$NEUTRON_DIR/bin
- else
-NEUTRON_BIN_DIR=$(get_python_exec_prefix)
+else
+ NEUTRON_BIN_DIR=$(get_python_exec_prefix)
fi
NEUTRON_CONF_DIR=/etc/neutron
@@ -373,7 +373,7 @@
iniset $Q_L3_CONF_FILE DEFAULT router_id $ROUTER_ID
fi
fi
- fi
+ fi
}
# init_neutron() - Initialize databases, etc.
@@ -404,7 +404,7 @@
fi
if is_service_enabled q-lbaas; then
- neutron_agent_lbaas_install_agent_packages
+ neutron_agent_lbaas_install_agent_packages
fi
}
@@ -414,13 +414,13 @@
local cfg_file
local CFG_FILE_OPTIONS="--config-file $NEUTRON_CONF --config-file /$Q_PLUGIN_CONF_FILE"
for cfg_file in ${Q_PLUGIN_EXTRA_CONF_FILES[@]}; do
- CFG_FILE_OPTIONS+=" --config-file /$cfg_file"
+ CFG_FILE_OPTIONS+=" --config-file /$cfg_file"
done
# Start the Neutron service
screen_it q-svc "cd $NEUTRON_DIR && python $NEUTRON_BIN_DIR/neutron-server $CFG_FILE_OPTIONS"
echo "Waiting for Neutron to start..."
if ! timeout $SERVICE_TIMEOUT sh -c "while ! wget --no-proxy -q -O- http://$Q_HOST:$Q_PORT; do sleep 1; done"; then
- die $LINENO "Neutron did not start"
+ die $LINENO "Neutron did not start"
fi
}
@@ -712,9 +712,9 @@
# Set up ``rootwrap.conf``, pointing to ``$NEUTRON_CONF_DIR/rootwrap.d``
# location moved in newer versions, prefer new location
if test -r $NEUTRON_DIR/etc/neutron/rootwrap.conf; then
- sudo cp -p $NEUTRON_DIR/etc/neutron/rootwrap.conf $Q_RR_CONF_FILE
+ sudo cp -p $NEUTRON_DIR/etc/neutron/rootwrap.conf $Q_RR_CONF_FILE
else
- sudo cp -p $NEUTRON_DIR/etc/rootwrap.conf $Q_RR_CONF_FILE
+ sudo cp -p $NEUTRON_DIR/etc/rootwrap.conf $Q_RR_CONF_FILE
fi
sudo sed -e "s:^filters_path=.*$:filters_path=$Q_CONF_ROOTWRAP_D:" -i $Q_RR_CONF_FILE
sudo chown root:root $Q_RR_CONF_FILE
@@ -848,11 +848,11 @@
# please refer to ``lib/neutron_thirdparty/README.md`` for details
NEUTRON_THIRD_PARTIES=""
for f in $TOP_DIR/lib/neutron_thirdparty/*; do
- third_party=$(basename $f)
- if is_service_enabled $third_party; then
- source $TOP_DIR/lib/neutron_thirdparty/$third_party
- NEUTRON_THIRD_PARTIES="$NEUTRON_THIRD_PARTIES,$third_party"
- fi
+ third_party=$(basename $f)
+ if is_service_enabled $third_party; then
+ source $TOP_DIR/lib/neutron_thirdparty/$third_party
+ NEUTRON_THIRD_PARTIES="$NEUTRON_THIRD_PARTIES,$third_party"
+ fi
done
function _neutron_third_party_do() {
diff --git a/lib/neutron_plugins/midonet b/lib/neutron_plugins/midonet
index 193055f..cf45a9d 100644
--- a/lib/neutron_plugins/midonet
+++ b/lib/neutron_plugins/midonet
@@ -37,14 +37,26 @@
iniset $Q_DHCP_CONF_FILE DEFAULT interface_driver $DHCP_INTERFACE_DRIVER
iniset $Q_DHCP_CONF_FILE DEFAULT use_namespaces True
iniset $Q_DHCP_CONF_FILE DEFAULT enable_isolated_metadata True
+ if [[ "$MIDONET_API_URI" != "" ]]; then
+ iniset $Q_DHCP_CONF_FILE MIDONET midonet_uri "$MIDONET_API_URI"
+ fi
+ if [[ "$MIDONET_USERNAME" != "" ]]; then
+ iniset $Q_DHCP_CONF_FILE MIDONET username "$MIDONET_USERNAME"
+ fi
+ if [[ "$MIDONET_PASSWORD" != "" ]]; then
+ iniset $Q_DHCP_CONF_FILE MIDONET password "$MIDONET_PASSWORD"
+ fi
+ if [[ "$MIDONET_PROJECT_ID" != "" ]]; then
+ iniset $Q_DHCP_CONF_FILE MIDONET project_id "$MIDONET_PROJECT_ID"
+ fi
}
function neutron_plugin_configure_l3_agent() {
- die $LINENO "q-l3 must not be executed with MidoNet plugin!"
+ die $LINENO "q-l3 must not be executed with MidoNet plugin!"
}
function neutron_plugin_configure_plugin_agent() {
- die $LINENO "q-agt must not be executed with MidoNet plugin!"
+ die $LINENO "q-agt must not be executed with MidoNet plugin!"
}
function neutron_plugin_configure_service() {
diff --git a/lib/neutron_plugins/nec b/lib/neutron_plugins/nec
index 79d41db..3806c32 100644
--- a/lib/neutron_plugins/nec
+++ b/lib/neutron_plugins/nec
@@ -101,15 +101,15 @@
local id=0
GRE_LOCAL_IP=${GRE_LOCAL_IP:-$HOST_IP}
if [ -n "$GRE_REMOTE_IPS" ]; then
- for ip in ${GRE_REMOTE_IPS//:/ }
- do
- if [[ "$ip" == "$GRE_LOCAL_IP" ]]; then
- continue
- fi
- sudo ovs-vsctl --no-wait add-port $bridge gre$id -- \
- set Interface gre$id type=gre options:remote_ip=$ip
- id=`expr $id + 1`
- done
+ for ip in ${GRE_REMOTE_IPS//:/ }
+ do
+ if [[ "$ip" == "$GRE_LOCAL_IP" ]]; then
+ continue
+ fi
+ sudo ovs-vsctl --no-wait add-port $bridge gre$id -- \
+ set Interface gre$id type=gre options:remote_ip=$ip
+ id=`expr $id + 1`
+ done
fi
}
diff --git a/lib/neutron_plugins/nicira b/lib/neutron_plugins/nicira
index ca89d57..7c99b69 100644
--- a/lib/neutron_plugins/nicira
+++ b/lib/neutron_plugins/nicira
@@ -58,13 +58,13 @@
}
function neutron_plugin_configure_l3_agent() {
- # Nicira plugin does not run L3 agent
- die $LINENO "q-l3 should must not be executed with Nicira plugin!"
+ # Nicira plugin does not run L3 agent
+ die $LINENO "q-l3 should must not be executed with Nicira plugin!"
}
function neutron_plugin_configure_plugin_agent() {
- # Nicira plugin does not run L2 agent
- die $LINENO "q-agt must not be executed with Nicira plugin!"
+ # Nicira plugin does not run L2 agent
+ die $LINENO "q-agt must not be executed with Nicira plugin!"
}
function neutron_plugin_configure_service() {
@@ -127,6 +127,7 @@
else
die $LINENO "Agentless mode requires a service cluster."
fi
+ iniset /$Q_PLUGIN_CONF_FILE nvp_metadata metadata_server_address $Q_META_DATA_IP
fi
fi
}
diff --git a/lib/neutron_plugins/ovs_base b/lib/neutron_plugins/ovs_base
index 2666d8e..1214f3b 100644
--- a/lib/neutron_plugins/ovs_base
+++ b/lib/neutron_plugins/ovs_base
@@ -73,13 +73,7 @@
}
function _neutron_ovs_base_configure_nova_vif_driver() {
- # The hybrid VIF driver needs to be specified when Neutron Security Group
- # is enabled (until vif_security attributes are supported in VIF extension)
- if [[ "$Q_USE_SECGROUP" == "True" ]]; then
- NOVA_VIF_DRIVER=${NOVA_VIF_DRIVER:-"nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver"}
- else
- NOVA_VIF_DRIVER=${NOVA_VIF_DRIVER:-"nova.virt.libvirt.vif.LibvirtGenericVIFDriver"}
- fi
+ NOVA_VIF_DRIVER=${NOVA_VIF_DRIVER:-"nova.virt.libvirt.vif.LibvirtGenericVIFDriver"}
}
# Restore xtrace
diff --git a/lib/neutron_thirdparty/nicira b/lib/neutron_thirdparty/nicira
index 5a20934..3f2a5af 100644
--- a/lib/neutron_thirdparty/nicira
+++ b/lib/neutron_thirdparty/nicira
@@ -18,22 +18,38 @@
# to an network that allows it to talk to the gateway for
# testing purposes
NVP_GATEWAY_NETWORK_INTERFACE=${NVP_GATEWAY_NETWORK_INTERFACE:-eth2}
+# Re-declare floating range as it's needed also in stop_nicira, which
+# is invoked by unstack.sh
+FLOATING_RANGE=${FLOATING_RANGE:-172.24.4.224/28}
function configure_nicira() {
:
}
function init_nicira() {
- die_if_not_set $LINENO NVP_GATEWAY_NETWORK_CIDR "Please, specify CIDR for the gateway network interface."
+ if ! is_set NVP_GATEWAY_NETWORK_CIDR; then
+ NVP_GATEWAY_NETWORK_CIDR=$PUBLIC_NETWORK_GATEWAY/${FLOATING_RANGE#*/}
+ echo "The IP address to set on br-ex was not specified. "
+ echo "Defaulting to "$NVP_GATEWAY_NETWORK_CIDR
+ fi
# Make sure the interface is up, but not configured
- sudo ifconfig $NVP_GATEWAY_NETWORK_INTERFACE up
+    sudo ip link set dev $NVP_GATEWAY_NETWORK_INTERFACE up
+ # Save and then flush the IP addresses on the interface
+ addresses=$(ip addr show dev $NVP_GATEWAY_NETWORK_INTERFACE | grep inet | awk {'print $2'})
sudo ip addr flush $NVP_GATEWAY_NETWORK_INTERFACE
# Use the PUBLIC Bridge to route traffic to the NVP gateway
# NOTE(armando-migliaccio): if running in a nested environment this will work
# only with mac learning enabled, portsecurity and security profiles disabled
+ # The public bridge might not exist for the NVP plugin if Q_USE_DEBUG_COMMAND is off
+ # Try to create it anyway
+ sudo ovs-vsctl --no-wait -- --may-exist add-br $PUBLIC_BRIDGE
sudo ovs-vsctl -- --may-exist add-port $PUBLIC_BRIDGE $NVP_GATEWAY_NETWORK_INTERFACE
nvp_gw_net_if_mac=$(ip link show $NVP_GATEWAY_NETWORK_INTERFACE | awk '/ether/ {print $2}')
- sudo ifconfig $PUBLIC_BRIDGE $NVP_GATEWAY_NETWORK_CIDR hw ether $nvp_gw_net_if_mac
+    sudo ip link set dev $PUBLIC_BRIDGE address $nvp_gw_net_if_mac
+ for address in $addresses; do
+ sudo ip addr add dev $PUBLIC_BRIDGE $address
+ done
+ sudo ip addr add dev $PUBLIC_BRIDGE $NVP_GATEWAY_NETWORK_CIDR
}
function install_nicira() {
@@ -45,7 +61,21 @@
}
function stop_nicira() {
- :
+ if ! is_set NVP_GATEWAY_NETWORK_CIDR; then
+ NVP_GATEWAY_NETWORK_CIDR=$PUBLIC_NETWORK_GATEWAY/${FLOATING_RANGE#*/}
+ echo "The IP address expected on br-ex was not specified. "
+ echo "Defaulting to "$NVP_GATEWAY_NETWORK_CIDR
+ fi
+ sudo ip addr del $NVP_GATEWAY_NETWORK_CIDR dev $PUBLIC_BRIDGE
+ # Save and then flush remaining addresses on the interface
+ addresses=$(ip addr show dev $PUBLIC_BRIDGE | grep inet | awk {'print $2'})
+ sudo ip addr flush $PUBLIC_BRIDGE
+ # Try to detach physical interface from PUBLIC_BRIDGE
+ sudo ovs-vsctl del-port $NVP_GATEWAY_NETWORK_INTERFACE
+ # Restore addresses on NVP_GATEWAY_NETWORK_INTERFACE
+ for address in $addresses; do
+ sudo ip addr add dev $NVP_GATEWAY_NETWORK_INTERFACE $address
+ done
}
# Restore xtrace
diff --git a/lib/neutron_thirdparty/trema b/lib/neutron_thirdparty/trema
index 09dc46b..5b5c459 100644
--- a/lib/neutron_thirdparty/trema
+++ b/lib/neutron_thirdparty/trema
@@ -66,8 +66,8 @@
cp $TREMA_SS_DIR/sliceable_switch_null.conf $TREMA_SS_CONFIG
sed -i -e "s|^\$apps_dir.*$|\$apps_dir = \"$TREMA_DIR/apps\"|" \
- -e "s|^\$db_dir.*$|\$db_dir = \"$TREMA_SS_DB_DIR\"|" \
- $TREMA_SS_CONFIG
+ -e "s|^\$db_dir.*$|\$db_dir = \"$TREMA_SS_DB_DIR\"|" \
+ $TREMA_SS_CONFIG
}
function gem_install() {
diff --git a/lib/nova b/lib/nova
index 4c55207..809f8e5 100644
--- a/lib/nova
+++ b/lib/nova
@@ -71,23 +71,24 @@
NOVNC_DIR=$DEST/noVNC
SPICE_DIR=$DEST/spice-html5
+# Set default defaults here as some hypervisor drivers override these
+PUBLIC_INTERFACE_DEFAULT=br100
+GUEST_INTERFACE_DEFAULT=eth0
+FLAT_NETWORK_BRIDGE_DEFAULT=br100
+
+# Get hypervisor configuration
+# ----------------------------
+
+NOVA_PLUGINS=$TOP_DIR/lib/nova_plugins
+if is_service_enabled nova && [[ -r $NOVA_PLUGINS/hypervisor-$VIRT_DRIVER ]]; then
+ # Load plugin
+ source $NOVA_PLUGINS/hypervisor-$VIRT_DRIVER
+fi
+
# Nova Network Configuration
# --------------------------
-# Set defaults according to the virt driver
-if [ "$VIRT_DRIVER" = 'baremetal' ]; then
- NETWORK_MANAGER=${NETWORK_MANAGER:-FlatManager}
- PUBLIC_INTERFACE_DEFAULT=eth0
- FLAT_INTERFACE=${FLAT_INTERFACE:-eth0}
- FLAT_NETWORK_BRIDGE_DEFAULT=br100
- STUB_NETWORK=${STUB_NETWORK:-False}
-else
- PUBLIC_INTERFACE_DEFAULT=br100
- GUEST_INTERFACE_DEFAULT=eth0
- FLAT_NETWORK_BRIDGE_DEFAULT=br100
-fi
-
NETWORK_MANAGER=${NETWORK_MANAGER:-${NET_MAN:-FlatDHCPManager}}
PUBLIC_INTERFACE=${PUBLIC_INTERFACE:-$PUBLIC_INTERFACE_DEFAULT}
VLAN_INTERFACE=${VLAN_INTERFACE:-$GUEST_INTERFACE_DEFAULT}
@@ -211,26 +212,24 @@
configure_nova_rootwrap
if is_service_enabled n-api; then
- # Use the sample http middleware configuration supplied in the
- # Nova sources. This paste config adds the configuration required
- # for Nova to validate Keystone tokens.
-
# Remove legacy paste config if present
rm -f $NOVA_DIR/bin/nova-api-paste.ini
# Get the sample configuration file in place
cp $NOVA_DIR/etc/nova/api-paste.ini $NOVA_CONF_DIR
- iniset $NOVA_API_PASTE_INI filter:authtoken auth_host $KEYSTONE_AUTH_HOST
+ # Comment out the keystone configs in Nova's api-paste.ini.
+ # We are using nova.conf to configure this instead.
+ inicomment $NOVA_API_PASTE_INI filter:authtoken auth_host
if is_service_enabled tls-proxy; then
- iniset $NOVA_API_PASTE_INI filter:authtoken auth_protocol $KEYSTONE_AUTH_PROTOCOL
+ inicomment $NOVA_API_PASTE_INI filter:authtoken auth_protocol
fi
- iniset $NOVA_API_PASTE_INI filter:authtoken admin_tenant_name $SERVICE_TENANT_NAME
- iniset $NOVA_API_PASTE_INI filter:authtoken admin_user nova
- iniset $NOVA_API_PASTE_INI filter:authtoken admin_password $SERVICE_PASSWORD
+ inicomment $NOVA_API_PASTE_INI filter:authtoken admin_tenant_name
+ inicomment $NOVA_API_PASTE_INI filter:authtoken admin_user
+ inicomment $NOVA_API_PASTE_INI filter:authtoken admin_password
fi
- iniset $NOVA_API_PASTE_INI filter:authtoken signing_dir $NOVA_AUTH_CACHE_DIR
+ inicomment $NOVA_API_PASTE_INI filter:authtoken signing_dir
if is_service_enabled n-cpu; then
-    # Force IP forwarding on, just on case
+    # Force IP forwarding on, just in case
@@ -274,83 +273,6 @@
fi
fi
- # Prepare directories and packages for baremetal driver
- if is_baremetal; then
- configure_baremetal_nova_dirs
- fi
-
- if [[ "$VIRT_DRIVER" = 'libvirt' ]]; then
- if is_service_enabled neutron && is_neutron_ovs_base_plugin && ! sudo grep -q '^cgroup_device_acl' $QEMU_CONF; then
- # Add /dev/net/tun to cgroup_device_acls, needed for type=ethernet interfaces
- cat <<EOF | sudo tee -a $QEMU_CONF
-cgroup_device_acl = [
- "/dev/null", "/dev/full", "/dev/zero",
- "/dev/random", "/dev/urandom",
- "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
- "/dev/rtc", "/dev/hpet","/dev/net/tun",
-]
-EOF
- fi
-
- if is_ubuntu; then
- LIBVIRT_DAEMON=libvirt-bin
- else
- LIBVIRT_DAEMON=libvirtd
- fi
-
- if is_fedora || is_suse; then
- if is_fedora && [[ $DISTRO =~ (rhel6) || "$os_RELEASE" -le "17" ]]; then
- sudo bash -c "cat <<EOF >/etc/polkit-1/localauthority/50-local.d/50-libvirt-remote-access.pkla
-[libvirt Management Access]
-Identity=unix-group:$LIBVIRT_GROUP
-Action=org.libvirt.unix.manage
-ResultAny=yes
-ResultInactive=yes
-ResultActive=yes
-EOF"
- elif is_suse && [[ $os_RELEASE = 12.2 || "$os_VENDOR" = "SUSE LINUX" ]]; then
- # openSUSE < 12.3 or SLE
- # Work around the fact that polkit-default-privs overrules pklas
- # with 'unix-group:$group'.
- sudo bash -c "cat <<EOF >/etc/polkit-1/localauthority/50-local.d/50-libvirt-remote-access.pkla
-[libvirt Management Access]
-Identity=unix-user:$USER
-Action=org.libvirt.unix.manage
-ResultAny=yes
-ResultInactive=yes
-ResultActive=yes
-EOF"
- else
- # Starting with fedora 18 and opensuse-12.3 enable stack-user to
- # virsh -c qemu:///system by creating a policy-kit rule for
- # stack-user using the new Javascript syntax
- rules_dir=/etc/polkit-1/rules.d
- sudo mkdir -p $rules_dir
- sudo bash -c "cat <<EOF > $rules_dir/50-libvirt-$STACK_USER.rules
-polkit.addRule(function(action, subject) {
- if (action.id == 'org.libvirt.unix.manage' &&
- subject.user == '"$STACK_USER"') {
- return polkit.Result.YES;
- }
-});
-EOF"
- unset rules_dir
- fi
- fi
-
- # The user that nova runs as needs to be member of **libvirtd** group otherwise
- # nova-compute will be unable to use libvirt.
- if ! getent group $LIBVIRT_GROUP >/dev/null; then
- sudo groupadd $LIBVIRT_GROUP
- fi
- add_user_to_group $STACK_USER $LIBVIRT_GROUP
-
- # libvirt detects various settings on startup, as we potentially changed
- # the system configuration (modules, filesystems), we need to restart
- # libvirt to detect those changes.
- restart_service $LIBVIRT_DAEMON
- fi
-
# Instance Storage
# ----------------
@@ -368,6 +290,14 @@
fi
fi
fi
+
+ # Rebuild the config file from scratch
+ create_nova_conf
+
+ if [[ -r $NOVA_PLUGINS/hypervisor-$VIRT_DRIVER ]]; then
+ # Configure hypervisor plugin
+ configure_nova_hypervisor
+ fi
}
# create_nova_accounts() - Set up common required nova accounts
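The plugin loading added above follows a simple dispatch pattern: source a per-driver file when one exists, then call the agreed entry points. A standalone sketch of that pattern (the plugin directory, driver name, and plugin body here are hypothetical stand-ins for `$NOVA_PLUGINS`, `$VIRT_DRIVER`, and the real `lib/nova_plugins/hypervisor-*` files):

```shell
#!/usr/bin/env bash
# Sketch of the hypervisor plugin dispatch used in lib/nova.
PLUGIN_DIR=$(mktemp -d)
VIRT_DRIVER=demo

# A plugin only needs to define the agreed entry points.
cat > "$PLUGIN_DIR/hypervisor-$VIRT_DRIVER" <<'EOF'
function configure_nova_hypervisor() {
    echo "configuring demo hypervisor"
}
EOF

# Dispatch: source the per-driver file if it exists, then call in.
if [[ -r $PLUGIN_DIR/hypervisor-$VIRT_DRIVER ]]; then
    source "$PLUGIN_DIR/hypervisor-$VIRT_DRIVER"
fi
configure_nova_hypervisor

rm -rf "$PLUGIN_DIR"
```

Because the file is sourced rather than executed, the plugin's functions and defaults land directly in the caller's environment, which is why each plugin guards its own xtrace state.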
@@ -447,14 +377,7 @@
iniset $NOVA_CONF DEFAULT ec2_workers "4"
iniset $NOVA_CONF DEFAULT metadata_workers "4"
iniset $NOVA_CONF DEFAULT sql_connection `database_connection_url nova`
- if is_baremetal; then
- iniset $NOVA_CONF baremetal sql_connection `database_connection_url nova_bm`
- fi
- if [[ "$VIRT_DRIVER" = 'libvirt' ]]; then
- iniset $NOVA_CONF DEFAULT libvirt_type "$LIBVIRT_TYPE"
- iniset $NOVA_CONF DEFAULT libvirt_cpu_mode "none"
- iniset $NOVA_CONF DEFAULT use_usb_tablet "False"
- fi
+ iniset $NOVA_CONF DEFAULT fatal_deprecations "True"
iniset $NOVA_CONF DEFAULT instance_name_template "${INSTANCE_NAME_PREFIX}%08x"
iniset $NOVA_CONF osapi_v3 enabled "True"
@@ -470,7 +393,20 @@
# Set the service port for a proxy to take the original
iniset $NOVA_CONF DEFAULT osapi_compute_listen_port "$NOVA_SERVICE_PORT_INT"
fi
+
+ # Add keystone authtoken configuration
+
+ iniset $NOVA_CONF keystone_authtoken auth_host $KEYSTONE_AUTH_HOST
+ if is_service_enabled tls-proxy; then
+ iniset $NOVA_CONF keystone_authtoken auth_protocol $KEYSTONE_AUTH_PROTOCOL
+ fi
+ iniset $NOVA_CONF keystone_authtoken admin_tenant_name $SERVICE_TENANT_NAME
+ iniset $NOVA_CONF keystone_authtoken admin_user nova
+ iniset $NOVA_CONF keystone_authtoken admin_password $SERVICE_PASSWORD
fi
+
+ iniset $NOVA_CONF keystone_authtoken signing_dir $NOVA_AUTH_CACHE_DIR
+
if is_service_enabled cinder; then
iniset $NOVA_CONF DEFAULT volume_api_class "nova.volume.cinder.API"
fi
@@ -529,27 +465,27 @@
fi
if is_service_enabled n-novnc || is_service_enabled n-xvnc; then
- # Address on which instance vncservers will listen on compute hosts.
- # For multi-host, this should be the management ip of the compute host.
- VNCSERVER_LISTEN=${VNCSERVER_LISTEN=127.0.0.1}
- VNCSERVER_PROXYCLIENT_ADDRESS=${VNCSERVER_PROXYCLIENT_ADDRESS=127.0.0.1}
- iniset $NOVA_CONF DEFAULT vnc_enabled true
- iniset $NOVA_CONF DEFAULT vncserver_listen "$VNCSERVER_LISTEN"
- iniset $NOVA_CONF DEFAULT vncserver_proxyclient_address "$VNCSERVER_PROXYCLIENT_ADDRESS"
+ # Address on which instance vncservers will listen on compute hosts.
+ # For multi-host, this should be the management ip of the compute host.
+ VNCSERVER_LISTEN=${VNCSERVER_LISTEN=127.0.0.1}
+ VNCSERVER_PROXYCLIENT_ADDRESS=${VNCSERVER_PROXYCLIENT_ADDRESS=127.0.0.1}
+ iniset $NOVA_CONF DEFAULT vnc_enabled true
+ iniset $NOVA_CONF DEFAULT vncserver_listen "$VNCSERVER_LISTEN"
+ iniset $NOVA_CONF DEFAULT vncserver_proxyclient_address "$VNCSERVER_PROXYCLIENT_ADDRESS"
else
- iniset $NOVA_CONF DEFAULT vnc_enabled false
+ iniset $NOVA_CONF DEFAULT vnc_enabled false
fi
if is_service_enabled n-spice; then
- # Address on which instance spiceservers will listen on compute hosts.
- # For multi-host, this should be the management ip of the compute host.
- SPICESERVER_PROXYCLIENT_ADDRESS=${SPICESERVER_PROXYCLIENT_ADDRESS=127.0.0.1}
- SPICESERVER_LISTEN=${SPICESERVER_LISTEN=127.0.0.1}
- iniset $NOVA_CONF spice enabled true
- iniset $NOVA_CONF spice server_listen "$SPICESERVER_LISTEN"
- iniset $NOVA_CONF spice server_proxyclient_address "$SPICESERVER_PROXYCLIENT_ADDRESS"
+ # Address on which instance spiceservers will listen on compute hosts.
+ # For multi-host, this should be the management ip of the compute host.
+ SPICESERVER_PROXYCLIENT_ADDRESS=${SPICESERVER_PROXYCLIENT_ADDRESS=127.0.0.1}
+ SPICESERVER_LISTEN=${SPICESERVER_LISTEN=127.0.0.1}
+ iniset $NOVA_CONF spice enabled true
+ iniset $NOVA_CONF spice server_listen "$SPICESERVER_LISTEN"
+ iniset $NOVA_CONF spice server_proxyclient_address "$SPICESERVER_PROXYCLIENT_ADDRESS"
else
- iniset $NOVA_CONF spice enabled false
+ iniset $NOVA_CONF spice enabled false
fi
iniset $NOVA_CONF DEFAULT ec2_dmz_host "$EC2_DMZ_HOST"
@@ -646,37 +582,8 @@
# install_nova() - Collect source and prepare
function install_nova() {
- if is_service_enabled n-cpu; then
- if [[ -r $NOVA_PLUGINS/hypervisor-$VIRT_DRIVER ]]; then
- install_nova_hypervisor
- elif [[ "$VIRT_DRIVER" = 'libvirt' ]]; then
- if is_ubuntu; then
- install_package kvm
- install_package libvirt-bin
- install_package python-libvirt
- elif is_fedora || is_suse; then
- install_package kvm
- install_package libvirt
- install_package libvirt-python
- else
- exit_distro_not_supported "libvirt installation"
- fi
-
- # Install and configure **LXC** if specified. LXC is another approach to
- # splitting a system into many smaller parts. LXC uses cgroups and chroot
- # to simulate multiple systems.
- if [[ "$LIBVIRT_TYPE" == "lxc" ]]; then
- if is_ubuntu; then
- if [[ "$DISTRO" > natty ]]; then
- install_package cgroup-lite
- fi
- else
- ### FIXME(dtroyer): figure this out
- echo "RPM-based cgroup not implemented yet"
- yum_install libcgroup-tools
- fi
- fi
- fi
+ if is_service_enabled n-cpu && [[ -r $NOVA_PLUGINS/hypervisor-$VIRT_DRIVER ]]; then
+ install_nova_hypervisor
fi
git_clone $NOVA_REPO $NOVA_DIR $NOVA_BRANCH
@@ -695,7 +602,7 @@
screen_it n-api "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-api"
echo "Waiting for nova-api to start..."
if ! wait_for_service $SERVICE_TIMEOUT http://$SERVICE_HOST:$service_port; then
- die $LINENO "nova-api did not start"
+ die $LINENO "nova-api did not start"
fi
# Start proxies if enabled
@@ -704,8 +611,28 @@
fi
}
+# start_nova_compute() - Start the compute process
+function start_nova_compute() {
+ NOVA_CONF_BOTTOM=$NOVA_CONF
+
+ if [[ "$VIRT_DRIVER" = 'libvirt' ]]; then
+ # The group **$LIBVIRT_GROUP** is added to the current user in this script.
+ # Use 'sg' to execute nova-compute as a member of the **$LIBVIRT_GROUP** group.
+ screen_it n-cpu "cd $NOVA_DIR && sg $LIBVIRT_GROUP '$NOVA_BIN_DIR/nova-compute --config-file $NOVA_CONF_BOTTOM'"
+ elif [[ "$VIRT_DRIVER" = 'fake' ]]; then
+ for i in `seq 1 $NUMBER_FAKE_NOVA_COMPUTE`; do
+ screen_it n-cpu "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-compute --config-file $NOVA_CONF_BOTTOM"
+ done
+ else
+ if is_service_enabled n-cpu && [[ -r $NOVA_PLUGINS/hypervisor-$VIRT_DRIVER ]]; then
+ start_nova_hypervisor
+ fi
+ screen_it n-cpu "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-compute --config-file $NOVA_CONF_BOTTOM"
+ fi
+}
+
# start_nova() - Start running processes, including screen
-function start_nova() {
+function start_nova_rest() {
NOVA_CONF_BOTTOM=$NOVA_CONF
# ``screen_it`` checks ``is_service_enabled``, it is not needed here
@@ -718,21 +645,6 @@
screen_it n-cell-child "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-cells --config-file $NOVA_CELLS_CONF"
fi
- if [[ "$VIRT_DRIVER" = 'libvirt' ]]; then
- # The group **$LIBVIRT_GROUP** is added to the current user in this script.
- # Use 'sg' to execute nova-compute as a member of the **$LIBVIRT_GROUP** group.
- screen_it n-cpu "cd $NOVA_DIR && sg $LIBVIRT_GROUP '$NOVA_BIN_DIR/nova-compute --config-file $NOVA_CONF_BOTTOM'"
- elif [[ "$VIRT_DRIVER" = 'fake' ]]; then
- for i in `seq 1 $NUMBER_FAKE_NOVA_COMPUTE`
- do
- screen_it n-cpu "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-compute --config-file $NOVA_CONF_BOTTOM"
- done
- else
- if is_service_enabled n-cpu && [[ -r $NOVA_PLUGINS/hypervisor-$VIRT_DRIVER ]]; then
- start_nova_hypervisor
- fi
- screen_it n-cpu "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-compute --config-file $NOVA_CONF_BOTTOM"
- fi
screen_it n-crt "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-cert"
screen_it n-net "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-network --config-file $NOVA_CONF_BOTTOM"
screen_it n-sch "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-scheduler --config-file $NOVA_CONF_BOTTOM"
@@ -749,6 +661,11 @@
screen_it n-obj "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-objectstore"
}
+function start_nova() {
+ start_nova_compute
+ start_nova_rest
+}
+
# stop_nova() - Stop running processes (non-screen)
function stop_nova() {
# Kill the nova screen windows
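The `iniset` → `inicomment` switch above relies on DevStack's INI helpers from the `functions` file. As a rough sketch of their behavior only (NOT the real DevStack implementation, which handles missing options, option creation, and metacharacters), setting and commenting an option under a `[section]` can be done with sed address ranges:

```shell
#!/usr/bin/env bash
# Simplified stand-ins for DevStack's iniset/inicomment helpers.
# Assumes the section and option already exist in the file.

# ini_set FILE SECTION OPTION VALUE - set OPTION under [SECTION]
function ini_set() {
    local file=$1 section=$2 option=$3 value=$4
    sed -i -e "/^\[$section\]/,/^\[.*\]/ s|^$option[[:space:]]*=.*$|$option = $value|" "$file"
}

# ini_comment FILE SECTION OPTION - comment OPTION out under [SECTION]
function ini_comment() {
    local file=$1 section=$2 option=$3
    sed -i -e "/^\[$section\]/,/^\[.*\]/ s|^\($option[[:space:]]*=.*\)$|#\1|" "$file"
}

conf=$(mktemp)
printf '[filter:authtoken]\nauth_host = 127.0.0.1\n' > "$conf"

ini_set "$conf" filter:authtoken auth_host keystone.example.org
ini_comment "$conf" filter:authtoken auth_host

result=$(sed -n '2p' "$conf")
rm -f "$conf"
echo "$result"
```

This mirrors the patch's intent: the authtoken options move out of `api-paste.ini` (commented out) and into the `[keystone_authtoken]` section of `nova.conf` (set).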
diff --git a/lib/nova_plugins/hypervisor-baremetal b/lib/nova_plugins/hypervisor-baremetal
new file mode 100644
index 0000000..660c977
--- /dev/null
+++ b/lib/nova_plugins/hypervisor-baremetal
@@ -0,0 +1,93 @@
+# lib/nova_plugins/hypervisor-baremetal
+# Configure the baremetal hypervisor
+
+# Enable with:
+# VIRT_DRIVER=baremetal
+
+# Dependencies:
+# ``functions`` file
+# ``nova`` configuration
+
+# install_nova_hypervisor - install any external requirements
+# configure_nova_hypervisor - make configuration changes, including those to other services
+# start_nova_hypervisor - start any external services
+# stop_nova_hypervisor - stop any external services
+# cleanup_nova_hypervisor - remove transient data and cache
+
+# Save trace setting
+MY_XTRACE=$(set +o | grep xtrace)
+set +o xtrace
+
+
+# Defaults
+# --------
+
+NETWORK_MANAGER=${NETWORK_MANAGER:-FlatManager}
+PUBLIC_INTERFACE_DEFAULT=eth0
+FLAT_INTERFACE=${FLAT_INTERFACE:-eth0}
+FLAT_NETWORK_BRIDGE_DEFAULT=br100
+STUB_NETWORK=${STUB_NETWORK:-False}
+
+
+# Entry Points
+# ------------
+
+# cleanup_nova_hypervisor - Clean up an installation
+function cleanup_nova_hypervisor() {
+ # This function intentionally left blank
+ :
+}
+
+# configure_nova_hypervisor - Set config files, create data dirs, etc
+function configure_nova_hypervisor() {
+ configure_baremetal_nova_dirs
+
+ iniset $NOVA_CONF baremetal sql_connection `database_connection_url nova_bm`
+ LIBVIRT_FIREWALL_DRIVER=${LIBVIRT_FIREWALL_DRIVER:-"nova.virt.firewall.NoopFirewallDriver"}
+ iniset $NOVA_CONF DEFAULT compute_driver nova.virt.baremetal.driver.BareMetalDriver
+ iniset $NOVA_CONF DEFAULT firewall_driver $LIBVIRT_FIREWALL_DRIVER
+ iniset $NOVA_CONF DEFAULT scheduler_host_manager nova.scheduler.baremetal_host_manager.BaremetalHostManager
+ iniset $NOVA_CONF DEFAULT ram_allocation_ratio 1.0
+ iniset $NOVA_CONF DEFAULT reserved_host_memory_mb 0
+ iniset $NOVA_CONF baremetal instance_type_extra_specs cpu_arch:$BM_CPU_ARCH
+ iniset $NOVA_CONF baremetal driver $BM_DRIVER
+ iniset $NOVA_CONF baremetal power_manager $BM_POWER_MANAGER
+ iniset $NOVA_CONF baremetal tftp_root /tftpboot
+ if [[ "$BM_DNSMASQ_FROM_NOVA_NETWORK" = "True" ]]; then
+ BM_DNSMASQ_CONF=$NOVA_CONF_DIR/dnsmasq-for-baremetal-from-nova-network.conf
+ sudo cp "$FILES/dnsmasq-for-baremetal-from-nova-network.conf" "$BM_DNSMASQ_CONF"
+ iniset $NOVA_CONF DEFAULT dnsmasq_config_file "$BM_DNSMASQ_CONF"
+ fi
+
+    # Extra baremetal nova conf flags can be supplied via the array ``EXTRA_BAREMETAL_OPTS``.
+ for I in "${EXTRA_BAREMETAL_OPTS[@]}"; do
+ # Attempt to convert flags to options
+ iniset $NOVA_CONF baremetal ${I/=/ }
+ done
+}
+
+# install_nova_hypervisor() - Install external components
+function install_nova_hypervisor() {
+ # This function intentionally left blank
+ :
+}
+
+# start_nova_hypervisor - Start any required external services
+function start_nova_hypervisor() {
+ # This function intentionally left blank
+ :
+}
+
+# stop_nova_hypervisor - Stop any external services
+function stop_nova_hypervisor() {
+ # This function intentionally left blank
+ :
+}
+
+
+# Restore xtrace
+$MY_XTRACE
+
+# Local variables:
+# mode: shell-script
+# End:
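The `EXTRA_BAREMETAL_OPTS` loop above leans on bash pattern substitution: `${I/=/ }` replaces the first `=` with a space, turning a `key=value` flag into the two `key value` arguments that `iniset` expects. A minimal demonstration (the option names below are made up for the demo):

```shell
#!/usr/bin/env bash
# Demonstrate the ${I/=/ } substitution used to convert
# "key=value" flags into "key value" argument pairs.
EXTRA_OPTS=("use_ipmitool=true" "power_wait=5")

for I in "${EXTRA_OPTS[@]}"; do
    # The unquoted expansion lets the shell split on the new space.
    set -- ${I/=/ }
    echo "option=$1 value=$2"
done
```

Note that only the first `=` is replaced, so a value that itself contains `=` survives intact after the split.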
diff --git a/lib/nova_plugins/hypervisor-docker b/lib/nova_plugins/hypervisor-docker
index 4c8fc27..427554b 100644
--- a/lib/nova_plugins/hypervisor-docker
+++ b/lib/nova_plugins/hypervisor-docker
@@ -72,7 +72,7 @@
fi
# Make sure Docker is installed
- if ! is_package_installed lxc-docker; then
+ if ! is_package_installed lxc-docker-${DOCKER_PACKAGE_VERSION}; then
die $LINENO "Docker is not installed. Please run tools/docker/install_docker.sh"
fi
diff --git a/lib/nova_plugins/hypervisor-libvirt b/lib/nova_plugins/hypervisor-libvirt
new file mode 100644
index 0000000..6fae0b1
--- /dev/null
+++ b/lib/nova_plugins/hypervisor-libvirt
@@ -0,0 +1,165 @@
+# lib/nova_plugins/hypervisor-libvirt
+# Configure the libvirt hypervisor
+
+# Enable with:
+# VIRT_DRIVER=libvirt
+
+# Dependencies:
+# ``functions`` file
+# ``nova`` configuration
+
+# install_nova_hypervisor - install any external requirements
+# configure_nova_hypervisor - make configuration changes, including those to other services
+# start_nova_hypervisor - start any external services
+# stop_nova_hypervisor - stop any external services
+# cleanup_nova_hypervisor - remove transient data and cache
+
+# Save trace setting
+MY_XTRACE=$(set +o | grep xtrace)
+set +o xtrace
+
+
+# Defaults
+# --------
+
+
+# Entry Points
+# ------------
+
+# cleanup_nova_hypervisor - Clean up an installation
+function cleanup_nova_hypervisor() {
+ # This function intentionally left blank
+ :
+}
+
+# configure_nova_hypervisor - Set config files, create data dirs, etc
+function configure_nova_hypervisor() {
+ if is_service_enabled neutron && is_neutron_ovs_base_plugin && ! sudo grep -q '^cgroup_device_acl' $QEMU_CONF; then
+ # Add /dev/net/tun to cgroup_device_acls, needed for type=ethernet interfaces
+ cat <<EOF | sudo tee -a $QEMU_CONF
+cgroup_device_acl = [
+ "/dev/null", "/dev/full", "/dev/zero",
+ "/dev/random", "/dev/urandom",
+ "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
+ "/dev/rtc", "/dev/hpet","/dev/net/tun",
+]
+EOF
+ fi
+
+ if is_ubuntu; then
+ LIBVIRT_DAEMON=libvirt-bin
+ else
+ LIBVIRT_DAEMON=libvirtd
+ fi
+
+ if is_fedora || is_suse; then
+ if is_fedora && [[ $DISTRO =~ (rhel6) || "$os_RELEASE" -le "17" ]]; then
+ sudo bash -c "cat <<EOF >/etc/polkit-1/localauthority/50-local.d/50-libvirt-remote-access.pkla
+[libvirt Management Access]
+Identity=unix-group:$LIBVIRT_GROUP
+Action=org.libvirt.unix.manage
+ResultAny=yes
+ResultInactive=yes
+ResultActive=yes
+EOF"
+ elif is_suse && [[ $os_RELEASE = 12.2 || "$os_VENDOR" = "SUSE LINUX" ]]; then
+ # openSUSE < 12.3 or SLE
+ # Work around the fact that polkit-default-privs overrules pklas
+ # with 'unix-group:$group'.
+ sudo bash -c "cat <<EOF >/etc/polkit-1/localauthority/50-local.d/50-libvirt-remote-access.pkla
+[libvirt Management Access]
+Identity=unix-user:$USER
+Action=org.libvirt.unix.manage
+ResultAny=yes
+ResultInactive=yes
+ResultActive=yes
+EOF"
+ else
+            # Starting with Fedora 18 and openSUSE 12.3, enable the stack
+            # user to run 'virsh -c qemu:///system' by creating a
+            # policy-kit rule for the stack user in the new JavaScript syntax
+ rules_dir=/etc/polkit-1/rules.d
+ sudo mkdir -p $rules_dir
+ sudo bash -c "cat <<EOF > $rules_dir/50-libvirt-$STACK_USER.rules
+polkit.addRule(function(action, subject) {
+ if (action.id == 'org.libvirt.unix.manage' &&
+ subject.user == '"$STACK_USER"') {
+ return polkit.Result.YES;
+ }
+});
+EOF"
+ unset rules_dir
+ fi
+ fi
+
+    # The user that nova runs as needs to be a member of the **libvirtd**
+    # group, otherwise nova-compute will be unable to use libvirt.
+ if ! getent group $LIBVIRT_GROUP >/dev/null; then
+ sudo groupadd $LIBVIRT_GROUP
+ fi
+ add_user_to_group $STACK_USER $LIBVIRT_GROUP
+
+    # libvirt detects various settings on startup. As we potentially changed
+    # the system configuration (modules, filesystems), we need to restart
+    # libvirt so it picks up those changes.
+ restart_service $LIBVIRT_DAEMON
+
+ iniset $NOVA_CONF DEFAULT libvirt_type "$LIBVIRT_TYPE"
+ iniset $NOVA_CONF DEFAULT libvirt_cpu_mode "none"
+ iniset $NOVA_CONF DEFAULT use_usb_tablet "False"
+ iniset $NOVA_CONF DEFAULT compute_driver "libvirt.LibvirtDriver"
+ LIBVIRT_FIREWALL_DRIVER=${LIBVIRT_FIREWALL_DRIVER:-"nova.virt.libvirt.firewall.IptablesFirewallDriver"}
+ iniset $NOVA_CONF DEFAULT firewall_driver "$LIBVIRT_FIREWALL_DRIVER"
+ # Power architecture currently does not support graphical consoles.
+ if is_arch "ppc64"; then
+ iniset $NOVA_CONF DEFAULT vnc_enabled "false"
+ fi
+}
+
+# install_nova_hypervisor() - Install external components
+function install_nova_hypervisor() {
+ if is_ubuntu; then
+ install_package kvm
+ install_package libvirt-bin
+ install_package python-libvirt
+ elif is_fedora || is_suse; then
+ install_package kvm
+ install_package libvirt
+ install_package libvirt-python
+ fi
+
+ # Install and configure **LXC** if specified. LXC is another approach to
+ # splitting a system into many smaller parts. LXC uses cgroups and chroot
+ # to simulate multiple systems.
+ if [[ "$LIBVIRT_TYPE" == "lxc" ]]; then
+ if is_ubuntu; then
+ if [[ "$DISTRO" > natty ]]; then
+ install_package cgroup-lite
+ fi
+ else
+ ### FIXME(dtroyer): figure this out
+ echo "RPM-based cgroup not implemented yet"
+ yum_install libcgroup-tools
+ fi
+ fi
+}
+
+# start_nova_hypervisor - Start any required external services
+function start_nova_hypervisor() {
+ # This function intentionally left blank
+ :
+}
+
+# stop_nova_hypervisor - Stop any external services
+function stop_nova_hypervisor() {
+ # This function intentionally left blank
+ :
+}
+
+
+# Restore xtrace
+$MY_XTRACE
+
+# Local variables:
+# mode: shell-script
+# End:
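Each plugin file brackets itself with the xtrace save/restore idiom seen above, so sourcing a plugin does not flood the caller's trace output. In isolation the idiom works like this:

```shell
#!/usr/bin/env bash
# Save/restore idiom: capture the caller's xtrace state, silence
# tracing while the "plugin" body runs, then replay the captured
# command to restore whatever the caller had.
set -o xtrace                        # pretend the caller had tracing on

MY_XTRACE=$(set +o | grep xtrace)    # captures "set -o xtrace" here
set +o xtrace                        # quiet while the plugin body runs

: plugin body runs here without trace noise

$MY_XTRACE                           # restore the caller's setting
echo "$MY_XTRACE"
```

`set +o` prints each option as the exact `set -o ...`/`set +o ...` command that reproduces it, which is why the captured line can simply be re-executed to restore the state.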
diff --git a/lib/nova_plugins/hypervisor-openvz b/lib/nova_plugins/hypervisor-openvz
new file mode 100644
index 0000000..fc5ed0c
--- /dev/null
+++ b/lib/nova_plugins/hypervisor-openvz
@@ -0,0 +1,67 @@
+# lib/nova_plugins/hypervisor-openvz
+# Configure the openvz hypervisor
+
+# Enable with:
+# VIRT_DRIVER=openvz
+
+# Dependencies:
+# ``functions`` file
+# ``nova`` configuration
+
+# install_nova_hypervisor - install any external requirements
+# configure_nova_hypervisor - make configuration changes, including those to other services
+# start_nova_hypervisor - start any external services
+# stop_nova_hypervisor - stop any external services
+# cleanup_nova_hypervisor - remove transient data and cache
+
+# Save trace setting
+MY_XTRACE=$(set +o | grep xtrace)
+set +o xtrace
+
+
+# Defaults
+# --------
+
+
+# Entry Points
+# ------------
+
+# cleanup_nova_hypervisor - Clean up an installation
+function cleanup_nova_hypervisor() {
+ # This function intentionally left blank
+ :
+}
+
+# configure_nova_hypervisor - Set config files, create data dirs, etc
+function configure_nova_hypervisor() {
+ iniset $NOVA_CONF DEFAULT compute_driver "openvz.OpenVzDriver"
+ iniset $NOVA_CONF DEFAULT connection_type "openvz"
+ LIBVIRT_FIREWALL_DRIVER=${LIBVIRT_FIREWALL_DRIVER:-"nova.virt.libvirt.firewall.IptablesFirewallDriver"}
+ iniset $NOVA_CONF DEFAULT firewall_driver "$LIBVIRT_FIREWALL_DRIVER"
+}
+
+# install_nova_hypervisor() - Install external components
+function install_nova_hypervisor() {
+ # This function intentionally left blank
+ :
+}
+
+# start_nova_hypervisor - Start any required external services
+function start_nova_hypervisor() {
+ # This function intentionally left blank
+ :
+}
+
+# stop_nova_hypervisor - Stop any external services
+function stop_nova_hypervisor() {
+ # This function intentionally left blank
+ :
+}
+
+
+# Restore xtrace
+$MY_XTRACE
+
+# Local variables:
+# mode: shell-script
+# End:
diff --git a/lib/nova_plugins/hypervisor-powervm b/lib/nova_plugins/hypervisor-powervm
new file mode 100644
index 0000000..561dd9f
--- /dev/null
+++ b/lib/nova_plugins/hypervisor-powervm
@@ -0,0 +1,76 @@
+# lib/nova_plugins/hypervisor-powervm
+# Configure the PowerVM hypervisor
+
+# Enable with:
+# VIRT_DRIVER=powervm
+
+# Dependencies:
+# ``functions`` file
+# ``nova`` configuration
+
+# install_nova_hypervisor - install any external requirements
+# configure_nova_hypervisor - make configuration changes, including those to other services
+# start_nova_hypervisor - start any external services
+# stop_nova_hypervisor - stop any external services
+# cleanup_nova_hypervisor - remove transient data and cache
+
+# Save trace setting
+MY_XTRACE=$(set +o | grep xtrace)
+set +o xtrace
+
+
+# Defaults
+# --------
+
+
+# Entry Points
+# ------------
+
+# cleanup_nova_hypervisor - Clean up an installation
+function cleanup_nova_hypervisor() {
+ # This function intentionally left blank
+ :
+}
+
+# configure_nova_hypervisor - Set config files, create data dirs, etc
+function configure_nova_hypervisor() {
+ POWERVM_MGR_TYPE=${POWERVM_MGR_TYPE:-"ivm"}
+ POWERVM_MGR_HOST=${POWERVM_MGR_HOST:-"powervm.host"}
+ POWERVM_MGR_USER=${POWERVM_MGR_USER:-"padmin"}
+ POWERVM_MGR_PASSWD=${POWERVM_MGR_PASSWD:-"password"}
+ POWERVM_IMG_REMOTE_PATH=${POWERVM_IMG_REMOTE_PATH:-"/tmp"}
+ POWERVM_IMG_LOCAL_PATH=${POWERVM_IMG_LOCAL_PATH:-"/tmp"}
+ iniset $NOVA_CONF DEFAULT compute_driver nova.virt.powervm.PowerVMDriver
+ iniset $NOVA_CONF DEFAULT powervm_mgr_type $POWERVM_MGR_TYPE
+ iniset $NOVA_CONF DEFAULT powervm_mgr $POWERVM_MGR_HOST
+ iniset $NOVA_CONF DEFAULT powervm_mgr_user $POWERVM_MGR_USER
+ iniset $NOVA_CONF DEFAULT powervm_mgr_passwd $POWERVM_MGR_PASSWD
+ iniset $NOVA_CONF DEFAULT powervm_img_remote_path $POWERVM_IMG_REMOTE_PATH
+ iniset $NOVA_CONF DEFAULT powervm_img_local_path $POWERVM_IMG_LOCAL_PATH
+}
+
+# install_nova_hypervisor() - Install external components
+function install_nova_hypervisor() {
+ # This function intentionally left blank
+ :
+}
+
+# start_nova_hypervisor - Start any required external services
+function start_nova_hypervisor() {
+ # This function intentionally left blank
+ :
+}
+
+# stop_nova_hypervisor - Stop any external services
+function stop_nova_hypervisor() {
+ # This function intentionally left blank
+ :
+}
+
+
+# Restore xtrace
+$MY_XTRACE
+
+# Local variables:
+# mode: shell-script
+# End:
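The `POWERVM_*` settings above use the standard DevStack defaulting idiom: `${VAR:-default}` keeps any value the user already exported and falls back otherwise. A tiny illustration (variable names shortened for the demo):

```shell
#!/usr/bin/env bash
# ${VAR:-default} keeps a pre-set value, else takes the fallback.
unset MGR_HOST
MGR_HOST=${MGR_HOST:-"powervm.host"}    # unset -> takes the default

MGR_USER="operator"
MGR_USER=${MGR_USER:-"padmin"}          # already set -> value wins

echo "$MGR_HOST $MGR_USER"
```

This is what lets a `localrc`/`local.conf` entry override any of these knobs without editing the plugin file.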
diff --git a/lib/rpc_backend b/lib/rpc_backend
index ff87aae..a323d64 100644
--- a/lib/rpc_backend
+++ b/lib/rpc_backend
@@ -63,7 +63,7 @@
if is_service_enabled rabbit; then
# Obliterate rabbitmq-server
uninstall_package rabbitmq-server
- sudo killall epmd
+ sudo killall epmd || sudo killall -9 epmd
if is_ubuntu; then
# And the Erlang runtime too
sudo aptitude purge -y ~nerlang
@@ -86,10 +86,6 @@
else
exit_distro_not_supported "zeromq installation"
fi
-
- # Necessary directory for socket location.
- sudo mkdir -p /var/run/openstack
- sudo chown $STACK_USER /var/run/openstack
fi
}
@@ -106,9 +102,9 @@
if is_fedora; then
install_package qpid-cpp-server
if [[ $DISTRO =~ (rhel6) ]]; then
- # RHEL6 leaves "auth=yes" in /etc/qpidd.conf, it needs to
- # be no or you get GSS authentication errors as it
- # attempts to default to this.
+            # RHEL6 leaves "auth=yes" in /etc/qpidd.conf; it needs to
+            # be "no" or you get GSS authentication errors as qpid
+            # attempts to default to this.
sudo sed -i.bak 's/^auth=yes$/auth=no/' /etc/qpidd.conf
fi
elif is_ubuntu; then
@@ -131,6 +127,9 @@
else
exit_distro_not_supported "zeromq installation"
fi
+ # Necessary directory for socket location.
+ sudo mkdir -p /var/run/openstack
+ sudo chown $STACK_USER /var/run/openstack
fi
}
diff --git a/lib/swift b/lib/swift
index c0dec97..8726f1e 100644
--- a/lib/swift
+++ b/lib/swift
@@ -39,6 +39,7 @@
# Set ``SWIFT_DATA_DIR`` to the location of swift drives and objects.
# Default is the common DevStack data directory.
SWIFT_DATA_DIR=${SWIFT_DATA_DIR:-${DATA_DIR}/swift}
+SWIFT_DISK_IMAGE=${SWIFT_DATA_DIR}/drives/images/swift.img
# Set ``SWIFT_CONF_DIR`` to the location of the configuration files.
# Default is ``/etc/swift``.
@@ -55,10 +56,10 @@
# swift data. Set ``SWIFT_LOOPBACK_DISK_SIZE`` to the disk size in
-# kilobytes.
+# bytes; a size suffix (K, M, G) is accepted, as with ``truncate``.
# Default is 1 gigabyte.
-SWIFT_LOOPBACK_DISK_SIZE_DEFAULT=1048576
+SWIFT_LOOPBACK_DISK_SIZE_DEFAULT=1G
-# if tempest enabled the default size is 4 Gigabyte.
+# If tempest is enabled, the default size is 4 gigabytes.
if is_service_enabled tempest; then
- SWIFT_LOOPBACK_DISK_SIZE_DEFAULT=${SWIFT_LOOPBACK_DISK_SIZE:-4194304}
+ SWIFT_LOOPBACK_DISK_SIZE_DEFAULT=${SWIFT_LOOPBACK_DISK_SIZE:-4G}
fi
SWIFT_LOOPBACK_DISK_SIZE=${SWIFT_LOOPBACK_DISK_SIZE:-$SWIFT_LOOPBACK_DISK_SIZE_DEFAULT}
@@ -103,17 +104,17 @@
# cleanup_swift() - Remove residual data files
function cleanup_swift() {
- rm -f ${SWIFT_CONF_DIR}{*.builder,*.ring.gz,backups/*.builder,backups/*.ring.gz}
- if egrep -q ${SWIFT_DATA_DIR}/drives/sdb1 /proc/mounts; then
- sudo umount ${SWIFT_DATA_DIR}/drives/sdb1
- fi
- if [[ -e ${SWIFT_DATA_DIR}/drives/images/swift.img ]]; then
- rm ${SWIFT_DATA_DIR}/drives/images/swift.img
- fi
- rm -rf ${SWIFT_DATA_DIR}/run/
- if is_apache_enabled_service swift; then
- _cleanup_swift_apache_wsgi
- fi
+ rm -f ${SWIFT_CONF_DIR}{*.builder,*.ring.gz,backups/*.builder,backups/*.ring.gz}
+ if egrep -q ${SWIFT_DATA_DIR}/drives/sdb1 /proc/mounts; then
+ sudo umount ${SWIFT_DATA_DIR}/drives/sdb1
+ fi
+ if [[ -e ${SWIFT_DISK_IMAGE} ]]; then
+ rm ${SWIFT_DISK_IMAGE}
+ fi
+ rm -rf ${SWIFT_DATA_DIR}/run/
+ if is_apache_enabled_service swift; then
+ _cleanup_swift_apache_wsgi
+ fi
}
# _cleanup_swift_apache_wsgi() - Remove wsgi files, disable and remove apache vhost file
@@ -191,7 +192,7 @@
sudo cp ${SWIFT_DIR}/examples/apache2/account-server.template ${apache_vhost_dir}/account-server-${node_number}
sudo sed -e "
- /^#/d;/^$/d;
+ /^#/d;/^$/d;
s/%PORT%/$account_port/g;
s/%SERVICENAME%/account-server-${node_number}/g;
s/%APACHE_NAME%/${APACHE_NAME}/g;
@@ -201,7 +202,7 @@
sudo cp ${SWIFT_DIR}/examples/wsgi/account-server.wsgi.template ${SWIFT_APACHE_WSGI_DIR}/account-server-${node_number}.wsgi
sudo sed -e "
- /^#/d;/^$/d;
+ /^#/d;/^$/d;
s/%SERVICECONF%/account-server\/${node_number}.conf/g;
" -i ${SWIFT_APACHE_WSGI_DIR}/account-server-${node_number}.wsgi
done
@@ -420,28 +421,27 @@
sudo chown -R $USER:${USER_GROUP} ${SWIFT_DATA_DIR}
# Create a loopback disk and format it to XFS.
- if [[ -e ${SWIFT_DATA_DIR}/drives/images/swift.img ]]; then
+ if [[ -e ${SWIFT_DISK_IMAGE} ]]; then
if egrep -q ${SWIFT_DATA_DIR}/drives/sdb1 /proc/mounts; then
sudo umount ${SWIFT_DATA_DIR}/drives/sdb1
- sudo rm -f ${SWIFT_DATA_DIR}/drives/images/swift.img
+ sudo rm -f ${SWIFT_DISK_IMAGE}
fi
fi
mkdir -p ${SWIFT_DATA_DIR}/drives/images
- sudo touch ${SWIFT_DATA_DIR}/drives/images/swift.img
- sudo chown $USER: ${SWIFT_DATA_DIR}/drives/images/swift.img
+ sudo touch ${SWIFT_DISK_IMAGE}
+ sudo chown $USER: ${SWIFT_DISK_IMAGE}
- dd if=/dev/zero of=${SWIFT_DATA_DIR}/drives/images/swift.img \
- bs=1024 count=0 seek=${SWIFT_LOOPBACK_DISK_SIZE}
+ truncate -s ${SWIFT_LOOPBACK_DISK_SIZE} ${SWIFT_DISK_IMAGE}
# Make a fresh XFS filesystem
- mkfs.xfs -f -i size=1024 ${SWIFT_DATA_DIR}/drives/images/swift.img
+ mkfs.xfs -f -i size=1024 ${SWIFT_DISK_IMAGE}
# Mount the disk with mount options to make it as efficient as possible
mkdir -p ${SWIFT_DATA_DIR}/drives/sdb1
if ! egrep -q ${SWIFT_DATA_DIR}/drives/sdb1 /proc/mounts; then
sudo mount -t xfs -o loop,noatime,nodiratime,nobarrier,logbufs=8 \
- ${SWIFT_DATA_DIR}/drives/images/swift.img ${SWIFT_DATA_DIR}/drives/sdb1
+ ${SWIFT_DISK_IMAGE} ${SWIFT_DATA_DIR}/drives/sdb1
fi
# Create a link to the above mount and
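The `dd` → `truncate` change above swaps two ways of making the same sparse loopback image: `dd` with `count=0 seek=N` extends the file without writing blocks, while `truncate -s 1G` does the same in one readable call and accepts size suffixes. A sketch on a throwaway temp file:

```shell
#!/usr/bin/env bash
# Both idioms create a sparse file: large apparent size, (almost)
# no allocated blocks.
img=$(mktemp)

# Old style: write zero blocks, then seek to the desired size.
dd if=/dev/zero of="$img" bs=1024 count=0 seek=1048576 2>/dev/null

# New style: one call, and the size takes suffixes (K/M/G).
truncate -s 1G "$img"

apparent=$(stat -c %s "$img")    # bytes the file claims to hold
echo "apparent=$apparent"
rm -f "$img"
```

This is also why `SWIFT_LOOPBACK_DISK_SIZE_DEFAULT` changes from a kilobyte count (`1048576`) to a size string (`1G`) in this patch.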
@@ -577,26 +577,26 @@
return 0
fi
- # By default with only one replica we are launching the proxy,
- # container, account and object server in screen in foreground and
- # other services in background. If we have SWIFT_REPLICAS set to something
- # greater than one we first spawn all the swift services then kill the proxy
- # service so we can run it in foreground in screen. ``swift-init ...
- # {stop|restart}`` exits with '1' if no servers are running, ignore it just
- # in case
- swift-init --run-dir=${SWIFT_DATA_DIR}/run all restart || true
- if [[ ${SWIFT_REPLICAS} == 1 ]]; then
+ # By default with only one replica we are launching the proxy,
+ # container, account and object server in screen in foreground and
+ # other services in background. If we have SWIFT_REPLICAS set to something
+ # greater than one we first spawn all the swift services then kill the proxy
+ # service so we can run it in foreground in screen. ``swift-init ...
+ # {stop|restart}`` exits with '1' if no servers are running, ignore it just
+ # in case
+ swift-init --run-dir=${SWIFT_DATA_DIR}/run all restart || true
+ if [[ ${SWIFT_REPLICAS} == 1 ]]; then
todo="object container account"
- fi
- for type in proxy ${todo}; do
- swift-init --run-dir=${SWIFT_DATA_DIR}/run ${type} stop || true
- done
- screen_it s-proxy "cd $SWIFT_DIR && $SWIFT_DIR/bin/swift-proxy-server ${SWIFT_CONF_DIR}/proxy-server.conf -v"
- if [[ ${SWIFT_REPLICAS} == 1 ]]; then
- for type in object container account; do
- screen_it s-${type} "cd $SWIFT_DIR && $SWIFT_DIR/bin/swift-${type}-server ${SWIFT_CONF_DIR}/${type}-server/1.conf -v"
- done
- fi
+ fi
+ for type in proxy ${todo}; do
+ swift-init --run-dir=${SWIFT_DATA_DIR}/run ${type} stop || true
+ done
+ screen_it s-proxy "cd $SWIFT_DIR && $SWIFT_DIR/bin/swift-proxy-server ${SWIFT_CONF_DIR}/proxy-server.conf -v"
+ if [[ ${SWIFT_REPLICAS} == 1 ]]; then
+ for type in object container account; do
+ screen_it s-${type} "cd $SWIFT_DIR && $SWIFT_DIR/bin/swift-${type}-server ${SWIFT_CONF_DIR}/${type}-server/1.conf -v"
+ done
+ fi
}
# stop_swift() - Stop running processes (non-screen)
diff --git a/lib/tempest b/lib/tempest
index bc0b18d..8e4e521 100644
--- a/lib/tempest
+++ b/lib/tempest
@@ -193,7 +193,7 @@
# If namespaces are disabled, devstack will create a single
# public router that tempest should be configured to use.
public_router_id=$(neutron router-list | awk "/ $Q_ROUTER_NAME / \
- { print \$2 }")
+ { print \$2 }")
fi
fi
@@ -266,7 +266,7 @@
iniset $TEMPEST_CONF boto ssh_user ${DEFAULT_INSTANCE_USER:-cirros}
# Orchestration test image
- if [ $HEAT_CREATE_TEST_IMAGE == "True" ]; then
+ if [[ "$HEAT_CREATE_TEST_IMAGE" = "True" ]]; then
disk_image_create /usr/share/tripleo-image-elements "vm fedora heat-cfntools" "i386" "fedora-vm-heat-cfntools-tempest"
iniset $TEMPEST_CONF orchestration image_ref "fedora-vm-heat-cfntools-tempest"
fi
@@ -328,15 +328,15 @@
local disk_image="$image_dir/${base_image_name}-blank.img"
# if the cirros uec downloaded and the system is uec capable
if [ -f "$kernel" -a -f "$ramdisk" -a -f "$disk_image" -a "$VIRT_DRIVER" != "openvz" \
- -a \( "$LIBVIRT_TYPE" != "lxc" -o "$VIRT_DRIVER" != "libvirt" \) ]; then
- echo "Prepare aki/ari/ami Images"
- ( #new namespace
- # tenant:demo ; user: demo
- source $TOP_DIR/accrc/demo/demo
- euca-bundle-image -i "$kernel" --kernel true -d "$BOTO_MATERIALS_PATH"
- euca-bundle-image -i "$ramdisk" --ramdisk true -d "$BOTO_MATERIALS_PATH"
- euca-bundle-image -i "$disk_image" -d "$BOTO_MATERIALS_PATH"
- ) 2>&1 </dev/null | cat
+ -a \( "$LIBVIRT_TYPE" != "lxc" -o "$VIRT_DRIVER" != "libvirt" \) ]; then
+ echo "Prepare aki/ari/ami Images"
+ ( #new namespace
+ # tenant:demo ; user: demo
+ source $TOP_DIR/accrc/demo/demo
+ euca-bundle-image -i "$kernel" --kernel true -d "$BOTO_MATERIALS_PATH"
+ euca-bundle-image -i "$ramdisk" --ramdisk true -d "$BOTO_MATERIALS_PATH"
+ euca-bundle-image -i "$disk_image" -d "$BOTO_MATERIALS_PATH"
+ ) 2>&1 </dev/null | cat
else
echo "Boto materials are not prepared"
fi
diff --git a/lib/trove b/lib/trove
index e64ca5f..0a19d03 100644
--- a/lib/trove
+++ b/lib/trove
@@ -45,14 +45,15 @@
SERVICE_ROLE=$(keystone role-list | awk "/ admin / { print \$2 }")
if [[ "$ENABLED_SERVICES" =~ "trove" ]]; then
- TROVE_USER=$(keystone user-create --name=trove \
- --pass="$SERVICE_PASSWORD" \
- --tenant_id $SERVICE_TENANT \
- --email=trove@example.com \
- | grep " id " | get_field 2)
+ TROVE_USER=$(keystone user-create \
+ --name=trove \
+ --pass="$SERVICE_PASSWORD" \
+ --tenant_id $SERVICE_TENANT \
+ --email=trove@example.com \
+ | grep " id " | get_field 2)
keystone user-role-add --tenant-id $SERVICE_TENANT \
- --user-id $TROVE_USER \
- --role-id $SERVICE_ROLE
+ --user-id $TROVE_USER \
+ --role-id $SERVICE_ROLE
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
TROVE_SERVICE=$(keystone service-create \
--name=trove \
@@ -109,12 +110,15 @@
# (Re)create trove conf files
rm -f $TROVE_CONF_DIR/trove.conf
rm -f $TROVE_CONF_DIR/trove-taskmanager.conf
+ rm -f $TROVE_CONF_DIR/trove-conductor.conf
+
iniset $TROVE_CONF_DIR/trove.conf DEFAULT rabbit_password $RABBIT_PASSWORD
iniset $TROVE_CONF_DIR/trove.conf DEFAULT sql_connection `database_connection_url trove`
iniset $TROVE_CONF_DIR/trove.conf DEFAULT add_addresses True
iniset $TROVE_LOCAL_CONF_DIR/trove-guestagent.conf.sample DEFAULT rabbit_password $RABBIT_PASSWORD
iniset $TROVE_LOCAL_CONF_DIR/trove-guestagent.conf.sample DEFAULT sql_connection `database_connection_url trove`
+ iniset $TROVE_LOCAL_CONF_DIR/trove-guestagent.conf.sample DEFAULT control_exchange trove
sed -i "s/localhost/$NETWORK_GATEWAY/g" $TROVE_LOCAL_CONF_DIR/trove-guestagent.conf.sample
# (Re)create trove taskmanager conf file if needed
@@ -127,6 +131,17 @@
iniset $TROVE_CONF_DIR/trove-taskmanager.conf DEFAULT nova_proxy_admin_pass $RADMIN_USER_PASS
iniset $TROVE_CONF_DIR/trove-taskmanager.conf DEFAULT trove_auth_url $TROVE_AUTH_ENDPOINT
fi
+
+ # (Re)create trove conductor conf file if needed
+ if is_service_enabled tr-cond; then
+ iniset $TROVE_CONF_DIR/trove-conductor.conf DEFAULT rabbit_password $RABBIT_PASSWORD
+ iniset $TROVE_CONF_DIR/trove-conductor.conf DEFAULT sql_connection `database_connection_url trove`
+ iniset $TROVE_CONF_DIR/trove-conductor.conf DEFAULT nova_proxy_admin_user radmin
+ iniset $TROVE_CONF_DIR/trove-conductor.conf DEFAULT nova_proxy_admin_tenant_name trove
+ iniset $TROVE_CONF_DIR/trove-conductor.conf DEFAULT nova_proxy_admin_pass $RADMIN_USER_PASS
+ iniset $TROVE_CONF_DIR/trove-conductor.conf DEFAULT trove_auth_url $TROVE_AUTH_ENDPOINT
+ iniset $TROVE_CONF_DIR/trove-conductor.conf DEFAULT control_exchange trove
+ fi
}
# install_troveclient() - Collect source and prepare
@@ -152,12 +167,13 @@
function start_trove() {
screen_it tr-api "cd $TROVE_DIR; bin/trove-api --config-file=$TROVE_CONF_DIR/trove.conf --debug 2>&1"
screen_it tr-tmgr "cd $TROVE_DIR; bin/trove-taskmanager --config-file=$TROVE_CONF_DIR/trove-taskmanager.conf --debug 2>&1"
+ screen_it tr-cond "cd $TROVE_DIR; bin/trove-conductor --config-file=$TROVE_CONF_DIR/trove-conductor.conf --debug 2>&1"
}
# stop_trove() - Stop running processes
function stop_trove() {
# Kill the trove screen windows
- for serv in tr-api tr-tmgr; do
+ for serv in tr-api tr-tmgr tr-cond; do
screen -S $SCREEN_NAME -p $serv -X kill
done
}
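The block of `iniset` calls above builds `trove-conductor.conf` one option at a time. As a rough illustration only (with hypothetical stand-in values for DevStack variables like `$RABBIT_PASSWORD` and `$RADMIN_USER_PASS`), the resulting `[DEFAULT]` section can be sketched with Python's `configparser`:

```python
import configparser
import io

# Hypothetical stand-in values for the DevStack variables used above.
settings = {
    'rabbit_password': 'secret',
    'sql_connection': 'mysql://root:secret@127.0.0.1/trove',
    'nova_proxy_admin_user': 'radmin',
    'nova_proxy_admin_tenant_name': 'trove',
    'nova_proxy_admin_pass': 'secret',
    'trove_auth_url': 'http://127.0.0.1:35357/v2.0',
    'control_exchange': 'trove',
}

conf = configparser.ConfigParser()
conf['DEFAULT'] = settings

# Render the INI text the iniset calls would produce, option by option.
buf = io.StringIO()
conf.write(buf)
print(buf.getvalue())
```

This is only a sketch of the end state; the real `iniset` helper edits the file in place, one option per call.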
diff --git a/run_tests.sh b/run_tests.sh
new file mode 100755
index 0000000..9d9d186
--- /dev/null
+++ b/run_tests.sh
@@ -0,0 +1,29 @@
+#!/bin/bash
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+#
+#
+# This runs a series of style checks (via bash8) on DevStack's shell
+# scripts to ensure they are well-formed
+
+if [[ -n $@ ]]; then
+ FILES=$@
+else
+ LIBS=`find lib -type f | grep -v \.md`
+ SCRIPTS=`find . -type f -name \*\.sh`
+ EXTRA="functions"
+ FILES="$SCRIPTS $LIBS $EXTRA"
+fi
+
+echo "Running bash8..."
+
+./tools/bash8.py $FILES
diff --git a/samples/localrc b/samples/localrc
index fd7221a..80cf0e7 100644
--- a/samples/localrc
+++ b/samples/localrc
@@ -83,7 +83,8 @@
# Set this to 1 to save some resources:
SWIFT_REPLICAS=1
-# The data for Swift is stored in the source tree by default (``$DEST/swift/data``)
-# and can be moved by setting ``SWIFT_DATA_DIR``. The directory will be created
+# The data for Swift is stored by default in (``$DEST/data/swift``),
+# or (``$DATA_DIR/swift``) if ``DATA_DIR`` has been set, and can be
+# moved by setting ``SWIFT_DATA_DIR``. The directory will be created
# if it does not exist.
SWIFT_DATA_DIR=$DEST/data
diff --git a/stack.sh b/stack.sh
index 7cd7e30..5813a8a 100755
--- a/stack.sh
+++ b/stack.sh
@@ -29,6 +29,9 @@
# Import common functions
source $TOP_DIR/functions
+# Import config functions
+source $TOP_DIR/lib/config
+
# Determine what system we are running on. This provides ``os_VENDOR``,
# ``os_RELEASE``, ``os_UPDATE``, ``os_PACKAGE``, ``os_CODENAME``
# and ``DISTRO``
@@ -38,6 +41,25 @@
# Global Settings
# ===============
+# Check for a ``localrc`` section embedded in ``local.conf`` and extract if
+# ``localrc`` does not already exist
+
+# Phase: local
+rm -f $TOP_DIR/.localrc.auto
+if [[ -r $TOP_DIR/local.conf ]]; then
+ LRC=$(get_meta_section_files $TOP_DIR/local.conf local)
+ for lfile in $LRC; do
+ if [[ "$lfile" == "localrc" ]]; then
+ if [[ -r $TOP_DIR/localrc ]]; then
+ warn $LINENO "localrc and local.conf:[[local]] both exist, using localrc"
+ else
+ echo "# Generated file, do not edit" >$TOP_DIR/.localrc.auto
+ get_meta_section $TOP_DIR/local.conf local $lfile >>$TOP_DIR/.localrc.auto
+ fi
+ fi
+ done
+fi
+
# ``stack.sh`` is customizable by setting environment variables. Override a
# default setting via export::
#
@@ -291,11 +313,14 @@
source $TOP_DIR/lib/ironic
source $TOP_DIR/lib/trove
-# Look for Nova hypervisor plugin
-NOVA_PLUGINS=$TOP_DIR/lib/nova_plugins
-if is_service_enabled nova && [[ -r $NOVA_PLUGINS/hypervisor-$VIRT_DRIVER ]]; then
- # Load plugin
- source $NOVA_PLUGINS/hypervisor-$VIRT_DRIVER
+# Extras Source
+# --------------
+
+# Phase: source
+if [[ -d $TOP_DIR/extras.d ]]; then
+ for i in $TOP_DIR/extras.d/*.sh; do
+ [[ -r $i ]] && source $i source
+ done
fi
# Set the destination directories for other OpenStack projects
@@ -563,7 +588,9 @@
source $TOP_DIR/tools/install_prereqs.sh
# Configure an appropriate python environment
-$TOP_DIR/tools/install_pip.sh
+if [[ "$OFFLINE" != "True" ]]; then
+ $TOP_DIR/tools/install_pip.sh
+fi
# Do the ugly hacks for broken packages and distros
$TOP_DIR/tools/fixup_stuff.sh
@@ -707,9 +734,20 @@
if is_service_enabled ir-api ir-cond; then
install_ironic
+ install_ironicclient
configure_ironic
fi
+# Extras Install
+# --------------
+
+# Phase: install
+if [[ -d $TOP_DIR/extras.d ]]; then
+ for i in $TOP_DIR/extras.d/*.sh; do
+ [[ -r $i ]] && source $i stack install
+ done
+fi
+
if [[ $TRACK_DEPENDS = True ]]; then
$DEST/.venv/bin/pip freeze > $DEST/requires-post-pip
if ! diff -Nru $DEST/requires-pre-pip $DEST/requires-post-pip > $DEST/requires.diff; then
@@ -805,13 +843,16 @@
# If enabled, systat has to start early to track OpenStack service startup.
if is_service_enabled sysstat;then
if [[ -n ${SCREEN_LOGDIR} ]]; then
- screen_it sysstat "sar -o $SCREEN_LOGDIR/$SYSSTAT_FILE $SYSSTAT_INTERVAL"
+ screen_it sysstat "cd ; sar -o $SCREEN_LOGDIR/$SYSSTAT_FILE $SYSSTAT_INTERVAL"
else
screen_it sysstat "sar $SYSSTAT_INTERVAL"
fi
fi
+# Start Services
+# ==============
+
# Keystone
# --------
@@ -882,6 +923,7 @@
init_glance
fi
+
# Ironic
# ------
@@ -891,7 +933,6 @@
fi
-
# Neutron
# -------
@@ -917,11 +958,6 @@
# Nova
# ----
-if is_service_enabled nova; then
- echo_summary "Configuring Nova"
- configure_nova
-fi
-
if is_service_enabled n-net q-dhcp; then
# Delete traces of nova networks from prior runs
# Do not kill any dnsmasq instance spawned by NetworkManager
@@ -964,8 +1000,6 @@
if is_service_enabled nova; then
echo_summary "Configuring Nova"
- # Rebuild the config file from scratch
- create_nova_conf
init_nova
# Additional Nova configuration that is dependent on other services
@@ -975,85 +1009,6 @@
create_nova_conf_nova_network
fi
-
- if [[ -r $NOVA_PLUGINS/hypervisor-$VIRT_DRIVER ]]; then
- # Configure hypervisor plugin
- configure_nova_hypervisor
-
-
- # OpenVZ
- # ------
-
- elif [ "$VIRT_DRIVER" = 'openvz' ]; then
- echo_summary "Using OpenVZ virtualization driver"
- iniset $NOVA_CONF DEFAULT compute_driver "openvz.OpenVzDriver"
- iniset $NOVA_CONF DEFAULT connection_type "openvz"
- LIBVIRT_FIREWALL_DRIVER=${LIBVIRT_FIREWALL_DRIVER:-"nova.virt.libvirt.firewall.IptablesFirewallDriver"}
- iniset $NOVA_CONF DEFAULT firewall_driver "$LIBVIRT_FIREWALL_DRIVER"
-
-
- # Bare Metal
- # ----------
-
- elif [ "$VIRT_DRIVER" = 'baremetal' ]; then
- echo_summary "Using BareMetal driver"
- LIBVIRT_FIREWALL_DRIVER=${LIBVIRT_FIREWALL_DRIVER:-"nova.virt.firewall.NoopFirewallDriver"}
- iniset $NOVA_CONF DEFAULT compute_driver nova.virt.baremetal.driver.BareMetalDriver
- iniset $NOVA_CONF DEFAULT firewall_driver $LIBVIRT_FIREWALL_DRIVER
- iniset $NOVA_CONF DEFAULT scheduler_host_manager nova.scheduler.baremetal_host_manager.BaremetalHostManager
- iniset $NOVA_CONF DEFAULT ram_allocation_ratio 1.0
- iniset $NOVA_CONF DEFAULT reserved_host_memory_mb 0
- iniset $NOVA_CONF baremetal instance_type_extra_specs cpu_arch:$BM_CPU_ARCH
- iniset $NOVA_CONF baremetal driver $BM_DRIVER
- iniset $NOVA_CONF baremetal power_manager $BM_POWER_MANAGER
- iniset $NOVA_CONF baremetal tftp_root /tftpboot
- if [[ "$BM_DNSMASQ_FROM_NOVA_NETWORK" = "True" ]]; then
- BM_DNSMASQ_CONF=$NOVA_CONF_DIR/dnsmasq-for-baremetal-from-nova-network.conf
- sudo cp "$FILES/dnsmasq-for-baremetal-from-nova-network.conf" "$BM_DNSMASQ_CONF"
- iniset $NOVA_CONF DEFAULT dnsmasq_config_file "$BM_DNSMASQ_CONF"
- fi
-
- # Define extra baremetal nova conf flags by defining the array ``EXTRA_BAREMETAL_OPTS``.
- for I in "${EXTRA_BAREMETAL_OPTS[@]}"; do
- # Attempt to convert flags to options
- iniset $NOVA_CONF baremetal ${I/=/ }
- done
-
-
- # PowerVM
- # -------
-
- elif [ "$VIRT_DRIVER" = 'powervm' ]; then
- echo_summary "Using PowerVM driver"
- POWERVM_MGR_TYPE=${POWERVM_MGR_TYPE:-"ivm"}
- POWERVM_MGR_HOST=${POWERVM_MGR_HOST:-"powervm.host"}
- POWERVM_MGR_USER=${POWERVM_MGR_USER:-"padmin"}
- POWERVM_MGR_PASSWD=${POWERVM_MGR_PASSWD:-"password"}
- POWERVM_IMG_REMOTE_PATH=${POWERVM_IMG_REMOTE_PATH:-"/tmp"}
- POWERVM_IMG_LOCAL_PATH=${POWERVM_IMG_LOCAL_PATH:-"/tmp"}
- iniset $NOVA_CONF DEFAULT compute_driver nova.virt.powervm.PowerVMDriver
- iniset $NOVA_CONF DEFAULT powervm_mgr_type $POWERVM_MGR_TYPE
- iniset $NOVA_CONF DEFAULT powervm_mgr $POWERVM_MGR_HOST
- iniset $NOVA_CONF DEFAULT powervm_mgr_user $POWERVM_MGR_USER
- iniset $NOVA_CONF DEFAULT powervm_mgr_passwd $POWERVM_MGR_PASSWD
- iniset $NOVA_CONF DEFAULT powervm_img_remote_path $POWERVM_IMG_REMOTE_PATH
- iniset $NOVA_CONF DEFAULT powervm_img_local_path $POWERVM_IMG_LOCAL_PATH
-
-
- # Default libvirt
- # ---------------
-
- else
- echo_summary "Using libvirt virtualization driver"
- iniset $NOVA_CONF DEFAULT compute_driver "libvirt.LibvirtDriver"
- LIBVIRT_FIREWALL_DRIVER=${LIBVIRT_FIREWALL_DRIVER:-"nova.virt.libvirt.firewall.IptablesFirewallDriver"}
- iniset $NOVA_CONF DEFAULT firewall_driver "$LIBVIRT_FIREWALL_DRIVER"
- # Power architecture currently does not support graphical consoles.
- if is_arch "ppc64"; then
- iniset $NOVA_CONF DEFAULT vnc_enabled "false"
- fi
- fi
-
init_nova_cells
fi
@@ -1063,11 +1018,30 @@
prepare_baremetal_toolchain
configure_baremetal_nova_dirs
if [[ "$BM_USE_FAKE_ENV" = "True" ]]; then
- create_fake_baremetal_env
+ create_fake_baremetal_env
fi
fi
+# Extras Configuration
+# ====================
+
+# Phase: post-config
+if [[ -d $TOP_DIR/extras.d ]]; then
+ for i in $TOP_DIR/extras.d/*.sh; do
+ [[ -r $i ]] && source $i stack post-config
+ done
+fi
+
+
+# Local Configuration
+# ===================
+
+# Apply configuration from local.conf if it exists for layer 2 services
+# Phase: post-config
+merge_config_group $TOP_DIR/local.conf post-config
+
+
# Launch Services
# ===============
@@ -1203,28 +1177,29 @@
if is_service_enabled g-reg; then
TOKEN=$(keystone token-get | grep ' id ' | get_field 2)
+ die_if_not_set $LINENO TOKEN "Keystone failed to get token"
if is_baremetal; then
- echo_summary "Creating and uploading baremetal images"
+ echo_summary "Creating and uploading baremetal images"
- # build and upload separate deploy kernel & ramdisk
- upload_baremetal_deploy $TOKEN
+ # build and upload separate deploy kernel & ramdisk
+ upload_baremetal_deploy $TOKEN
- # upload images, separating out the kernel & ramdisk for PXE boot
- for image_url in ${IMAGE_URLS//,/ }; do
- upload_baremetal_image $image_url $TOKEN
- done
+ # upload images, separating out the kernel & ramdisk for PXE boot
+ for image_url in ${IMAGE_URLS//,/ }; do
+ upload_baremetal_image $image_url $TOKEN
+ done
else
- echo_summary "Uploading images"
+ echo_summary "Uploading images"
- # Option to upload legacy ami-tty, which works with xenserver
- if [[ -n "$UPLOAD_LEGACY_TTY" ]]; then
- IMAGE_URLS="${IMAGE_URLS:+${IMAGE_URLS},}https://github.com/downloads/citrix-openstack/warehouse/tty.tgz"
- fi
+ # Option to upload legacy ami-tty, which works with xenserver
+ if [[ -n "$UPLOAD_LEGACY_TTY" ]]; then
+ IMAGE_URLS="${IMAGE_URLS:+${IMAGE_URLS},}https://github.com/downloads/citrix-openstack/warehouse/tty.tgz"
+ fi
- for image_url in ${IMAGE_URLS//,/ }; do
- upload_image $image_url $TOKEN
- done
+ for image_url in ${IMAGE_URLS//,/ }; do
+ upload_image $image_url $TOKEN
+ done
fi
fi
@@ -1236,7 +1211,7 @@
if is_service_enabled nova && is_baremetal; then
# create special flavor for baremetal if we know what images to associate
[[ -n "$BM_DEPLOY_KERNEL_ID" ]] && [[ -n "$BM_DEPLOY_RAMDISK_ID" ]] && \
- create_baremetal_flavor $BM_DEPLOY_KERNEL_ID $BM_DEPLOY_RAMDISK_ID
+ create_baremetal_flavor $BM_DEPLOY_KERNEL_ID $BM_DEPLOY_RAMDISK_ID
# otherwise user can manually add it later by calling nova-baremetal-manage
[[ -n "$BM_FIRST_MAC" ]] && add_baremetal_node
@@ -1251,24 +1226,33 @@
fi
# ensure callback daemon is running
sudo pkill nova-baremetal-deploy-helper || true
- screen_it baremetal "nova-baremetal-deploy-helper"
+ screen_it baremetal "cd ; nova-baremetal-deploy-helper"
fi
# Save some values we generated for later use
CURRENT_RUN_TIME=$(date "+$TIMESTAMP_FORMAT")
echo "# $CURRENT_RUN_TIME" >$TOP_DIR/.stackenv
for i in BASE_SQL_CONN ENABLED_SERVICES HOST_IP LOGFILE \
- SERVICE_HOST SERVICE_PROTOCOL STACK_USER TLS_IP; do
+ SERVICE_HOST SERVICE_PROTOCOL STACK_USER TLS_IP; do
echo $i=${!i} >>$TOP_DIR/.stackenv
done
+# Local Configuration
+# ===================
+
+# Apply configuration from local.conf if it exists for layer 2 services
+# Phase: extra
+merge_config_group $TOP_DIR/local.conf extra
+
+
# Run extras
# ==========
+# Phase: extra
if [[ -d $TOP_DIR/extras.d ]]; then
for i in $TOP_DIR/extras.d/*.sh; do
- [[ -r $i ]] && source $i stack
+ [[ -r $i ]] && source $i stack extra
done
fi
@@ -1335,5 +1319,66 @@
echo_summary "WARNING: $DEPRECATED_TEXT"
fi
+# Specific warning for deprecated configs
+if [[ -n "$EXTRA_OPTS" ]]; then
+ echo ""
+ echo_summary "WARNING: EXTRA_OPTS is used"
+ echo "You are using EXTRA_OPTS to pass configuration into nova.conf."
+ echo "Please convert that configuration in localrc to a nova.conf section in local.conf:"
+ echo "
+[[post-config|\$NOVA_CONF]]
+[DEFAULT]
+"
+ for I in "${EXTRA_OPTS[@]}"; do
+ # Print the option in the key=value form local.conf accepts
+ echo ${I}
+ done
+fi
+
+if [[ -n "$EXTRA_BAREMETAL_OPTS" ]]; then
+ echo ""
+ echo_summary "WARNING: EXTRA_BAREMETAL_OPTS is used"
+ echo "You are using EXTRA_BAREMETAL_OPTS to pass configuration into nova.conf."
+ echo "Please convert that configuration in localrc to a nova.conf section in local.conf:"
+ echo "
+[[post-config|\$NOVA_CONF]]
+[baremetal]
+"
+ for I in "${EXTRA_BAREMETAL_OPTS[@]}"; do
+ # Print the option in the key=value form local.conf accepts
+ echo ${I}
+ done
+fi
+
+if [[ -n "$Q_DHCP_EXTRA_DEFAULT_OPTS" ]]; then
+ echo ""
+ echo_summary "WARNING: Q_DHCP_EXTRA_DEFAULT_OPTS is used"
+ echo "You are using Q_DHCP_EXTRA_DEFAULT_OPTS to pass configuration into $Q_DHCP_CONF_FILE."
+ echo "Please convert that configuration in localrc to a $Q_DHCP_CONF_FILE section in local.conf:"
+ echo "
+[[post-config|\$Q_DHCP_CONF_FILE]]
+[DEFAULT]
+"
+ for I in "${Q_DHCP_EXTRA_DEFAULT_OPTS[@]}"; do
+ # Print the option in the key=value form local.conf accepts
+ echo ${I}
+ done
+fi
+
+if [[ -n "$Q_SRV_EXTRA_DEFAULT_OPTS" ]]; then
+ echo ""
+ echo_summary "WARNING: Q_SRV_EXTRA_DEFAULT_OPTS is used"
+ echo "You are using Q_SRV_EXTRA_DEFAULT_OPTS to pass configuration into $NEUTRON_CONF."
+ echo "Please convert that configuration in localrc to a $NEUTRON_CONF section in local.conf:"
+ echo "
+[[post-config|\$NEUTRON_CONF]]
+[DEFAULT]
+"
+ for I in "${Q_SRV_EXTRA_DEFAULT_OPTS[@]}"; do
+ # Print the option in the key=value form local.conf accepts
+ echo ${I}
+ done
+fi
+
# Indicate how long this took to run (bash maintained variable ``SECONDS``)
echo_summary "stack.sh completed in $SECONDS seconds."
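The `${I/=/ }` substitution used in the baremetal configuration loop earlier in `stack.sh` converts a `key=value` flag into the two-token `key value` form that `iniset` expects. Only the first `=` is replaced, so values that themselves contain `=` (such as SQL connection URLs) survive intact. A minimal sketch:

```shell
#!/bin/bash
# ${I/=/ } replaces only the FIRST '=' in $I with a space,
# turning "option=value" into "option value" for iniset.
I="sql_connection=mysql://root:pw@localhost/nova?charset=utf8"
echo "${I/=/ }"
# Later '=' characters inside the value are preserved.
```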
diff --git a/stackrc b/stackrc
index 3a338d1..0151672 100644
--- a/stackrc
+++ b/stackrc
@@ -48,8 +48,12 @@
USE_SCREEN=True
# allow local overrides of env variables, including repo config
-if [ -f $RC_DIR/localrc ]; then
+if [[ -f $RC_DIR/localrc ]]; then
+ # Old-style user-supplied config
source $RC_DIR/localrc
+elif [[ -f $RC_DIR/.localrc.auto ]]; then
+ # New-style user-supplied config extracted from local.conf
+ source $RC_DIR/.localrc.auto
fi
@@ -100,6 +104,10 @@
IRONIC_REPO=${IRONIC_REPO:-${GIT_BASE}/openstack/ironic.git}
IRONIC_BRANCH=${IRONIC_BRANCH:-master}
+# ironic client
+IRONICCLIENT_REPO=${IRONICCLIENT_REPO:-${GIT_BASE}/openstack/python-ironicclient.git}
+IRONICCLIENT_BRANCH=${IRONICCLIENT_BRANCH:-master}
+
# unified auth system (manages accounts/tokens)
KEYSTONE_REPO=${KEYSTONE_REPO:-${GIT_BASE}/openstack/keystone.git}
KEYSTONE_BRANCH=${KEYSTONE_BRANCH:-master}
@@ -160,7 +168,7 @@
# diskimage-builder
-BM_IMAGE_BUILD_REPO=${BM_IMAGE_BUILD_REPO:-${GIT_BASE}/stackforge/diskimage-builder.git}
+BM_IMAGE_BUILD_REPO=${BM_IMAGE_BUILD_REPO:-${GIT_BASE}/openstack/diskimage-builder.git}
BM_IMAGE_BUILD_BRANCH=${BM_IMAGE_BUILD_BRANCH:-master}
# bm_poseur
diff --git a/tests/functions.sh b/tests/functions.sh
index 7d486d4..40376aa 100755
--- a/tests/functions.sh
+++ b/tests/functions.sh
@@ -122,16 +122,16 @@
# test empty option
if ini_has_option test.ini ddd empty; then
- echo "OK: ddd.empty present"
+ echo "OK: ddd.empty present"
else
- echo "ini_has_option failed: ddd.empty not found"
+ echo "ini_has_option failed: ddd.empty not found"
fi
# test non-empty option
if ini_has_option test.ini bbb handlers; then
- echo "OK: bbb.handlers present"
+ echo "OK: bbb.handlers present"
else
- echo "ini_has_option failed: bbb.handlers not found"
+ echo "ini_has_option failed: bbb.handlers not found"
fi
# test changing empty option
diff --git a/tests/test_config.sh b/tests/test_config.sh
new file mode 100755
index 0000000..fed2e7d
--- /dev/null
+++ b/tests/test_config.sh
@@ -0,0 +1,179 @@
+#!/usr/bin/env bash
+
+# Tests for DevStack meta-config functions
+
+TOP=$(cd $(dirname "$0")/.. && pwd)
+
+# Import common functions
+source $TOP/functions
+
+# Import config functions
+source $TOP/lib/config
+
+# check_result() tests and reports the result values
+# check_result "actual" "expected"
+function check_result() {
+ local actual=$1
+ local expected=$2
+ if [[ "$actual" == "$expected" ]]; then
+ echo "OK"
+ else
+ echo -e "failed: $actual != $expected\n"
+ fi
+}
+
+TEST_1C_ADD="[eee]
+type=new
+multi = foo2"
+
+function create_test1c() {
+ cat >test1c.conf <<EOF
+[eee]
+# original comment
+type=original
+EOF
+}
+
+function create_test2a() {
+ cat >test2a.conf <<EOF
+[ddd]
+# original comment
+type=original
+EOF
+}
+
+cat >test.conf <<EOF
+[[test1|test1a.conf]]
+[default]
+# comment an option
+#log_file=./log.conf
+log_file=/etc/log.conf
+handlers=do not disturb
+
+[aaa]
+# the commented option should not change
+#handlers=cc,dd
+handlers = aa, bb
+
+[[test1|test1b.conf]]
+[bbb]
+handlers=ee,ff
+
+[ ccc ]
+spaces = yes
+
+[[test2|test2a.conf]]
+[ddd]
+# new comment
+type=new
+additional=true
+
+[[test1|test1c.conf]]
+$TEST_1C_ADD
+EOF
+
+
+echo -n "get_meta_section_files: test0 doesn't exist: "
+VAL=$(get_meta_section_files test.conf test0)
+check_result "$VAL" ""
+
+echo -n "get_meta_section_files: test1 3 files: "
+VAL=$(get_meta_section_files test.conf test1)
+EXPECT_VAL="test1a.conf
+test1b.conf
+test1c.conf"
+check_result "$VAL" "$EXPECT_VAL"
+
+echo -n "get_meta_section_files: test2 1 file: "
+VAL=$(get_meta_section_files test.conf test2)
+EXPECT_VAL="test2a.conf"
+check_result "$VAL" "$EXPECT_VAL"
+
+
+# Get a section from a group that doesn't exist
+echo -n "get_meta_section: test0 doesn't exist: "
+VAL=$(get_meta_section test.conf test0 test0.conf)
+check_result "$VAL" ""
+
+# Get a single section from a group with multiple files
+echo -n "get_meta_section: test1c single section: "
+VAL=$(get_meta_section test.conf test1 test1c.conf)
+check_result "$VAL" "$TEST_1C_ADD"
+
+# Get a single section from a group with a single file
+echo -n "get_meta_section: test2a single section: "
+VAL=$(get_meta_section test.conf test2 test2a.conf)
+EXPECT_VAL="[ddd]
+# new comment
+type=new
+additional=true"
+check_result "$VAL" "$EXPECT_VAL"
+
+# Get a single section that doesn't exist from a group
+echo -n "get_meta_section: test2z.conf not in test2: "
+VAL=$(get_meta_section test.conf test2 test2z.conf)
+check_result "$VAL" ""
+
+# Get a section from a conf file that doesn't exist
+echo -n "get_meta_section: nofile doesn't exist: "
+VAL=$(get_meta_section nofile.ini test1)
+check_result "$VAL" ""
+
+echo -n "get_meta_section: nofile doesn't exist: "
+VAL=$(get_meta_section nofile.ini test0 test0.conf)
+check_result "$VAL" ""
+
+echo -n "merge_config_file test1c exists: "
+create_test1c
+merge_config_file test.conf test1 test1c.conf
+VAL=$(cat test1c.conf)
+# iniset adds values immediately under the section header
+EXPECT_VAL="[eee]
+multi = foo2
+# original comment
+type=new"
+check_result "$VAL" "$EXPECT_VAL"
+
+echo -n "merge_config_file test2a exists: "
+create_test2a
+merge_config_file test.conf test2 test2a.conf
+VAL=$(cat test2a.conf)
+# iniset adds values immediately under the section header
+EXPECT_VAL="[ddd]
+additional = true
+# original comment
+type=new"
+check_result "$VAL" "$EXPECT_VAL"
+
+echo -n "merge_config_file test2a not exist: "
+rm test2a.conf
+merge_config_file test.conf test2 test2a.conf
+VAL=$(cat test2a.conf)
+# iniset adds a blank line if it creates the file...
+EXPECT_VAL="
+[ddd]
+additional = true
+type = new"
+check_result "$VAL" "$EXPECT_VAL"
+
+echo -n "merge_config_group test2: "
+rm test2a.conf
+merge_config_group test.conf test2
+VAL=$(cat test2a.conf)
+# iniset adds a blank line if it creates the file...
+EXPECT_VAL="
+[ddd]
+additional = true
+type = new"
+check_result "$VAL" "$EXPECT_VAL"
+
+echo -n "merge_config_group test2 no conf file: "
+rm test2a.conf
+merge_config_group x-test.conf test2
+if [[ ! -r test2a.conf ]]; then
+ echo "OK"
+else
+ echo "failed: test2a.conf exists"
+fi
+
+rm -f test.conf test1c.conf test2a.conf
diff --git a/tools/bash8.py b/tools/bash8.py
new file mode 100755
index 0000000..edf7da4
--- /dev/null
+++ b/tools/bash8.py
@@ -0,0 +1,115 @@
+#!/usr/bin/env python
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+# bash8 - a pep8 equivalent for bash scripts
+#
+# this program attempts to be an automated style checker for bash scripts
+# to fill the same part of code review that pep8 does in most OpenStack
+# projects. It starts from humble beginnings, and will evolve over time.
+#
+# Currently Supported checks
+#
+# Errors
+# - E001: check that lines do not end with trailing whitespace
+# - E002: ensure that indents are only spaces, and not hard tabs
+# - E003: ensure all indents are a multiple of 4 spaces
+
+import argparse
+import fileinput
+import re
+import sys
+
+
+ERRORS = 0
+
+
+def print_error(error, line):
+ global ERRORS
+ ERRORS = ERRORS + 1
+ print("%s: '%s'" % (error, line.rstrip('\n')))
+ print(" - %s: L%s" % (fileinput.filename(), fileinput.filelineno()))
+
+
+def check_no_trailing_whitespace(line):
+ if re.search('[ \t]+$', line):
+ print_error('E001: Trailing Whitespace', line)
+
+
+def check_indents(line):
+ m = re.search('^(?P<indent>[ \t]+)', line)
+ if m:
+ if re.search('\t', m.group('indent')):
+ print_error('E002: Tab indents', line)
+ if (len(m.group('indent')) % 4) != 0:
+ print_error('E003: Indent not multiple of 4', line)
+
+
+def starts_multiline(line):
+ m = re.search("[^<]<<\s*(?P<token>\w+)", line)
+ if m:
+ return m.group('token')
+ else:
+ return False
+
+
+def end_of_multiline(line, token):
+ if token:
+ return re.search("^%s\s*$" % token, line) is not None
+ return False
+
+
+def check_files(files):
+ in_multiline = False
+ logical_line = ""
+ token = False
+ for line in fileinput.input(files):
+ # NOTE(sdague): multiline processing of heredocs is interesting
+ if not in_multiline:
+ logical_line = line
+ token = starts_multiline(line)
+ if token:
+ in_multiline = True
+ continue
+ else:
+ logical_line = logical_line + line
+ if not end_of_multiline(line, token):
+ continue
+ else:
+ in_multiline = False
+
+ check_no_trailing_whitespace(logical_line)
+ check_indents(logical_line)
+
+
+def get_options():
+ parser = argparse.ArgumentParser(
+ description='A bash script style checker')
+ parser.add_argument('files', metavar='file', nargs='+',
+ help='files to scan for errors')
+ return parser.parse_args()
+
+
+def main():
+ opts = get_options()
+ check_files(opts.files)
+
+ if ERRORS > 0:
+ print("%d bash8 error(s) found" % ERRORS)
+ return 1
+ else:
+ return 0
+
+
+if __name__ == "__main__":
+ sys.exit(main())
diff --git a/tools/build_bm_multi.sh b/tools/build_bm_multi.sh
index 52b9b4e..328d576 100755
--- a/tools/build_bm_multi.sh
+++ b/tools/build_bm_multi.sh
@@ -22,8 +22,8 @@
if [ ! "$TERMINATE" = "1" ]; then
echo "Waiting for head node ($HEAD_HOST) to start..."
if ! timeout 60 sh -c "while ! wget -q -O- http://$HEAD_HOST | grep -q username; do sleep 1; done"; then
- echo "Head node did not start"
- exit 1
+ echo "Head node did not start"
+ exit 1
fi
fi
diff --git a/tools/build_uec.sh b/tools/build_uec.sh
index 6c4a26c..bce051a 100755
--- a/tools/build_uec.sh
+++ b/tools/build_uec.sh
@@ -229,8 +229,8 @@
# (re)start a metadata service
(
- pid=`lsof -iTCP@192.168.$GUEST_NETWORK.1:4567 -n | awk '{print $2}' | tail -1`
- [ -z "$pid" ] || kill -9 $pid
+ pid=`lsof -iTCP@192.168.$GUEST_NETWORK.1:4567 -n | awk '{print $2}' | tail -1`
+ [ -z "$pid" ] || kill -9 $pid
)
cd $vm_dir/uec
python meta.py 192.168.$GUEST_NETWORK.1:4567 &
@@ -268,7 +268,7 @@
sleep 2
while [ ! -e "$vm_dir/console.log" ]; do
- sleep 1
+ sleep 1
done
tail -F $vm_dir/console.log &
diff --git a/tools/create-stack-user.sh b/tools/create-stack-user.sh
old mode 100644
new mode 100755
diff --git a/tools/create_userrc.sh b/tools/create_userrc.sh
index 44b0f6b..8383fe7 100755
--- a/tools/create_userrc.sh
+++ b/tools/create_userrc.sh
@@ -105,15 +105,15 @@
fi
if [ -z "$OS_TENANT_NAME" -a -z "$OS_TENANT_ID" ]; then
- export OS_TENANT_NAME=admin
+ export OS_TENANT_NAME=admin
fi
if [ -z "$OS_USERNAME" ]; then
- export OS_USERNAME=admin
+ export OS_USERNAME=admin
fi
if [ -z "$OS_AUTH_URL" ]; then
- export OS_AUTH_URL=http://localhost:5000/v2.0/
+ export OS_AUTH_URL=http://localhost:5000/v2.0/
fi
USER_PASS=${USER_PASS:-$OS_PASSWORD}
@@ -249,7 +249,7 @@
for user_id_at_name in `keystone user-list --tenant-id $tenant_id | awk 'BEGIN {IGNORECASE = 1} /true[[:space:]]*\|[^|]*\|$/ {print $2 "@" $4}'`; do
read user_id user_name <<< `echo "$user_id_at_name" | sed 's/@/ /'`
if [ $MODE = one -a "$user_name" != "$USER_NAME" ]; then
- continue;
+ continue;
fi
add_entry "$user_id" "$user_name" "$tenant_id" "$tenant_name" "$USER_PASS"
done
diff --git a/tools/docker/install_docker.sh b/tools/docker/install_docker.sh
index 289002e..483955b 100755
--- a/tools/docker/install_docker.sh
+++ b/tools/docker/install_docker.sh
@@ -38,7 +38,7 @@
install_package python-software-properties && \
sudo sh -c "echo deb $DOCKER_APT_REPO docker main > /etc/apt/sources.list.d/docker.list"
apt_get update
-install_package --force-yes lxc-docker=${DOCKER_PACKAGE_VERSION} socat
+install_package --force-yes lxc-docker-${DOCKER_PACKAGE_VERSION} socat
# Start the daemon - restart just in case the package ever auto-starts...
restart_service docker
diff --git a/tools/fixup_stuff.sh b/tools/fixup_stuff.sh
index f3c0f98..9e65b7c 100755
--- a/tools/fixup_stuff.sh
+++ b/tools/fixup_stuff.sh
@@ -35,25 +35,35 @@
# Python Packages
# ---------------
+# get_package_path python-package # in import notation
+function get_package_path() {
+ local package=$1
+ echo $(python -c "import os; import $package; print(os.path.split(os.path.realpath($package.__file__))[0])")
+}
+
+
# Pre-install affected packages so we can fix the permissions
+# These can go away once we are confident that pip 1.4.1+ is available everywhere
+
+# Fix prettytable 0.7.2 permissions
+# Don't specify --upgrade so we use the existing package if present
pip_install prettytable
+PACKAGE_DIR=$(get_package_path prettytable)
+# Only fix version 0.7.2
+dir=$(echo $PACKAGE_DIR/prettytable-0.7.2*)
+if [[ -d $dir ]]; then
+ sudo chmod +r $dir/*
+fi
+
+# Fix httplib2 0.8 permissions
+# Don't specify --upgrade so we use the existing package if present
pip_install httplib2
-
-SITE_DIRS=$(python -c "import site; import os; print os.linesep.join(site.getsitepackages())")
-for dir in $SITE_DIRS; do
-
- # Fix prettytable 0.7.2 permissions
- if [[ -r $dir/prettytable.py ]]; then
- sudo chmod +r $dir/prettytable-0.7.2*/*
- fi
-
- # Fix httplib2 0.8 permissions
- httplib_dir=httplib2-0.8.egg-info
- if [[ -d $dir/$httplib_dir ]]; then
- sudo chmod +r $dir/$httplib_dir/*
- fi
-
-done
+PACKAGE_DIR=$(get_package_path httplib2)
+# Only fix version 0.8
+dir=$(echo $PACKAGE_DIR-0.8*)
+if [[ -d $dir ]]; then
+ sudo chmod +r $dir/*
+fi
# RHEL6
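The new `get_package_path` helper can be tried standalone. This sketch uses `python3` explicitly and the stdlib `json` package purely for illustration (the hunk above uses whatever `python` is on the path):

```shell
# Resolve a Python package's on-disk directory from its import name,
# same approach as the helper added above (python3 used here for illustration)
function get_package_path() {
    local package=$1
    python3 -c "import os; import $package; print(os.path.split(os.path.realpath($package.__file__))[0])"
}

get_package_path json   # e.g. /usr/lib/python3.x/json
```

Resolving the real path of `__file__` and taking its directory sidesteps guessing at site-packages locations, which is why the old `site.getsitepackages()` loop could be dropped.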
diff --git a/tools/install_pip.sh b/tools/install_pip.sh
index 940bd8c..455323e 100755
--- a/tools/install_pip.sh
+++ b/tools/install_pip.sh
@@ -72,9 +72,9 @@
function install_pip_tarball() {
(cd $FILES; \
curl -O $PIP_TAR_URL; \
- tar xvfz pip-$INSTALL_PIP_VERSION.tar.gz; \
+ tar xvfz pip-$INSTALL_PIP_VERSION.tar.gz 1>/dev/null; \
cd pip-$INSTALL_PIP_VERSION; \
- sudo python setup.py install; \
+ sudo python setup.py install 1>/dev/null; \
)
}
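The `1>/dev/null` additions quiet the tarball unpack and install while leaving stderr untouched, so failures still reach the console. A toy demonstration of the redirect behavior:

```shell
# stdout is discarded, stderr still comes through
out=$( { echo "chatty progress" 1>/dev/null; echo "oops" 1>&2; } 2>&1 )
echo "$out"   # prints: oops
```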
diff --git a/tools/install_prereqs.sh b/tools/install_prereqs.sh
index 68f11ce..0c65fd9 100755
--- a/tools/install_prereqs.sh
+++ b/tools/install_prereqs.sh
@@ -55,7 +55,7 @@
# ================
# Install package requirements
-install_package $(get_packages $ENABLED_SERVICES)
+install_package $(get_packages general $ENABLED_SERVICES)
if [[ -n "$SYSLOG" && "$SYSLOG" != "False" ]]; then
if is_ubuntu || is_fedora; then
diff --git a/tools/jenkins/jenkins_home/build_jenkins.sh b/tools/jenkins/jenkins_home/build_jenkins.sh
index e0e774e..a556db0 100755
--- a/tools/jenkins/jenkins_home/build_jenkins.sh
+++ b/tools/jenkins/jenkins_home/build_jenkins.sh
@@ -6,8 +6,8 @@
# Make sure only root can run our script
if [[ $EUID -ne 0 ]]; then
- echo "This script must be run as root"
- exit 1
+ echo "This script must be run as root"
+ exit 1
fi
# This directory
@@ -31,15 +31,15 @@
# Install jenkins
if [ ! -e /var/lib/jenkins ]; then
- echo "Jenkins installation failed"
- exit 1
+ echo "Jenkins installation failed"
+ exit 1
fi
# Make sure user has configured a jenkins ssh pubkey
if [ ! -e /var/lib/jenkins/.ssh/id_rsa.pub ]; then
- echo "Public key for jenkins is missing. This is used to ssh into your instances."
- echo "Please run "su -c ssh-keygen jenkins" before proceeding"
- exit 1
+ echo "Public key for jenkins is missing. This is used to ssh into your instances."
+    echo "Please run \"su -c ssh-keygen jenkins\" before proceeding"
+ exit 1
fi
# Setup sudo
@@ -96,7 +96,7 @@
# Configure plugins
for plugin in ${PLUGINS//,/ }; do
- name=`basename $plugin`
+ name=`basename $plugin`
dest=/var/lib/jenkins/plugins/$name
if [ ! -e $dest ]; then
curl -L $plugin -o $dest
diff --git a/tools/upload_image.sh b/tools/upload_image.sh
index dd21c9f..d81a5c8 100755
--- a/tools/upload_image.sh
+++ b/tools/upload_image.sh
@@ -33,6 +33,7 @@
# Get a token to authenticate to glance
TOKEN=$(keystone token-get | grep ' id ' | get_field 2)
+die_if_not_set $LINENO TOKEN "Keystone failed to get token"
# Glance connection info. Note the port must be specified.
GLANCE_HOSTPORT=${GLANCE_HOSTPORT:-$GLANCE_HOST:9292}
diff --git a/tools/xen/functions b/tools/xen/functions
index a5c4b70..b0b077d 100644
--- a/tools/xen/functions
+++ b/tools/xen/functions
@@ -69,11 +69,17 @@
}
function get_local_sr {
- xe sr-list name-label="Local storage" --minimal
+ xe pool-list params=default-SR minimal=true
}
function get_local_sr_path {
- echo "/var/run/sr-mount/$(get_local_sr)"
+ pbd_path="/var/run/sr-mount/$(get_local_sr)"
+ pbd_device_config_path=`xe pbd-list sr-uuid=$(get_local_sr) params=device-config | grep " path: "`
+ if [ -n "$pbd_device_config_path" ]; then
+ pbd_uuid=`xe pbd-list sr-uuid=$(get_local_sr) minimal=true`
+ pbd_path=`xe pbd-param-get uuid=$pbd_uuid param-name=device-config param-key=path || echo ""`
+ fi
+ echo $pbd_path
}
function find_ip_by_name() {
@@ -287,3 +293,35 @@
dynamic-max=${memory}MiB \
uuid=$vm
}
+
+function max_vcpus() {
+ local vm_name_label
+
+ vm_name_label="$1"
+
+ local vm
+ local host
+ local cpu_count
+
+ host=$(xe host-list --minimal)
+ vm=$(_vm_uuid "$vm_name_label")
+
+ cpu_count=$(xe host-param-get \
+ param-name=cpu_info \
+ uuid=$host |
+ sed -e 's/^.*cpu_count: \([0-9]*\);.*$/\1/g')
+
+ if [ -z "$cpu_count" ]; then
+ # get dom0's vcpu count
+ cpu_count=$(cat /proc/cpuinfo | grep processor | wc -l)
+ fi
+
+ # Assert cpu_count is not empty
+ [ -n "$cpu_count" ]
+
+    # Assert it has a numeric nonzero value
+ expr "$cpu_count" + 0
+
+ xe vm-param-set uuid=$vm VCPUs-max=$cpu_count
+ xe vm-param-set uuid=$vm VCPUs-at-startup=$cpu_count
+}
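The `sed` expression in `max_vcpus` pulls the `cpu_count` field out of the host's `cpu_info` parameter. Here is a sketch against a made-up `cpu_info` string, since the real value comes from `xe host-param-get`:

```shell
# Extract cpu_count from a sample cpu_info value (this sample is a
# hypothetical stand-in for `xe host-param-get param-name=cpu_info` output)
cpu_info='socket_count: 1; cpu_count: 8; vendor: GenuineIntel'
cpu_count=$(echo "$cpu_info" | sed -e 's/^.*cpu_count: \([0-9]*\);.*$/\1/g')
echo "$cpu_count"   # prints: 8
```

The anchored pattern with a back-reference keeps only the digits between `cpu_count: ` and the following `;`, discarding the surrounding fields.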
diff --git a/tools/xen/install_os_domU.sh b/tools/xen/install_os_domU.sh
index 08e0f78..9a2f5a8 100755
--- a/tools/xen/install_os_domU.sh
+++ b/tools/xen/install_os_domU.sh
@@ -44,9 +44,9 @@
xe_min()
{
- local cmd="$1"
- shift
- xe "$cmd" --minimal "$@"
+ local cmd="$1"
+ shift
+ xe "$cmd" --minimal "$@"
}
#
@@ -132,8 +132,8 @@
# Set up ip forwarding, but skip on xcp-xapi
if [ -a /etc/sysconfig/network ]; then
if ! grep -q "FORWARD_IPV4=YES" /etc/sysconfig/network; then
- # FIXME: This doesn't work on reboot!
- echo "FORWARD_IPV4=YES" >> /etc/sysconfig/network
+ # FIXME: This doesn't work on reboot!
+ echo "FORWARD_IPV4=YES" >> /etc/sysconfig/network
fi
fi
# Also, enable ip forwarding in rc.local, since the above trick isn't working
@@ -268,6 +268,9 @@
# Set virtual machine parameters
set_vm_memory "$GUEST_NAME" "$OSDOMU_MEM_MB"
+# Max out VCPU count for better performance
+max_vcpus "$GUEST_NAME"
+
# start the VM to run the prepare steps
xe vm-start vm="$GUEST_NAME"
diff --git a/tools/xen/scripts/install-os-vpx.sh b/tools/xen/scripts/install-os-vpx.sh
index 7469e0c..7b0d891 100755
--- a/tools/xen/scripts/install-os-vpx.sh
+++ b/tools/xen/scripts/install-os-vpx.sh
@@ -42,69 +42,69 @@
get_params()
{
- while getopts "hbn:r:l:t:" OPTION;
- do
- case $OPTION in
- h) usage
- exit 1
- ;;
- n)
- BRIDGE=$OPTARG
- ;;
- l)
- NAME_LABEL=$OPTARG
- ;;
- t)
- TEMPLATE_NAME=$OPTARG
- ;;
- ?)
- usage
- exit
- ;;
- esac
- done
- if [[ -z $BRIDGE ]]
- then
- BRIDGE=xenbr0
- fi
+ while getopts "hbn:r:l:t:" OPTION;
+ do
+ case $OPTION in
+ h) usage
+ exit 1
+ ;;
+ n)
+ BRIDGE=$OPTARG
+ ;;
+ l)
+ NAME_LABEL=$OPTARG
+ ;;
+ t)
+ TEMPLATE_NAME=$OPTARG
+ ;;
+ ?)
+ usage
+ exit
+ ;;
+ esac
+ done
+ if [[ -z $BRIDGE ]]
+ then
+ BRIDGE=xenbr0
+ fi
- if [[ -z $TEMPLATE_NAME ]]; then
- echo "Please specify a template name" >&2
- exit 1
- fi
+ if [[ -z $TEMPLATE_NAME ]]; then
+ echo "Please specify a template name" >&2
+ exit 1
+ fi
- if [[ -z $NAME_LABEL ]]; then
- echo "Please specify a name-label for the new VM" >&2
- exit 1
- fi
+ if [[ -z $NAME_LABEL ]]; then
+ echo "Please specify a name-label for the new VM" >&2
+ exit 1
+ fi
}
xe_min()
{
- local cmd="$1"
- shift
- xe "$cmd" --minimal "$@"
+ local cmd="$1"
+ shift
+ xe "$cmd" --minimal "$@"
}
find_network()
{
- result=$(xe_min network-list bridge="$1")
- if [ "$result" = "" ]
- then
- result=$(xe_min network-list name-label="$1")
- fi
- echo "$result"
+ result=$(xe_min network-list bridge="$1")
+ if [ "$result" = "" ]
+ then
+ result=$(xe_min network-list name-label="$1")
+ fi
+ echo "$result"
}
create_vif()
{
- local v="$1"
- echo "Installing VM interface on [$BRIDGE]"
- local out_network_uuid=$(find_network "$BRIDGE")
- xe vif-create vm-uuid="$v" network-uuid="$out_network_uuid" device="0"
+ local v="$1"
+ echo "Installing VM interface on [$BRIDGE]"
+ local out_network_uuid=$(find_network "$BRIDGE")
+ xe vif-create vm-uuid="$v" network-uuid="$out_network_uuid" device="0"
}
@@ -112,20 +112,20 @@
# Make the VM auto-start on server boot.
set_auto_start()
{
- local v="$1"
- xe vm-param-set uuid="$v" other-config:auto_poweron=true
+ local v="$1"
+ xe vm-param-set uuid="$v" other-config:auto_poweron=true
}
destroy_vifs()
{
- local v="$1"
- IFS=,
- for vif in $(xe_min vif-list vm-uuid="$v")
- do
- xe vif-destroy uuid="$vif"
- done
- unset IFS
+ local v="$1"
+ IFS=,
+ for vif in $(xe_min vif-list vm-uuid="$v")
+ do
+ xe vif-destroy uuid="$vif"
+ done
+ unset IFS
}
diff --git a/tools/xen/scripts/uninstall-os-vpx.sh b/tools/xen/scripts/uninstall-os-vpx.sh
index ac26094..1ed2494 100755
--- a/tools/xen/scripts/uninstall-os-vpx.sh
+++ b/tools/xen/scripts/uninstall-os-vpx.sh
@@ -22,63 +22,63 @@
# By default, don't remove the templates
REMOVE_TEMPLATES=${REMOVE_TEMPLATES:-"false"}
if [ "$1" = "--remove-templates" ]; then
- REMOVE_TEMPLATES=true
+ REMOVE_TEMPLATES=true
fi
xe_min()
{
- local cmd="$1"
- shift
- xe "$cmd" --minimal "$@"
+ local cmd="$1"
+ shift
+ xe "$cmd" --minimal "$@"
}
destroy_vdi()
{
- local vbd_uuid="$1"
- local type=$(xe_min vbd-list uuid=$vbd_uuid params=type)
- local dev=$(xe_min vbd-list uuid=$vbd_uuid params=userdevice)
- local vdi_uuid=$(xe_min vbd-list uuid=$vbd_uuid params=vdi-uuid)
+ local vbd_uuid="$1"
+ local type=$(xe_min vbd-list uuid=$vbd_uuid params=type)
+ local dev=$(xe_min vbd-list uuid=$vbd_uuid params=userdevice)
+ local vdi_uuid=$(xe_min vbd-list uuid=$vbd_uuid params=vdi-uuid)
- if [ "$type" == 'Disk' ] && [ "$dev" != 'xvda' ] && [ "$dev" != '0' ]; then
- xe vdi-destroy uuid=$vdi_uuid
- fi
+ if [ "$type" == 'Disk' ] && [ "$dev" != 'xvda' ] && [ "$dev" != '0' ]; then
+ xe vdi-destroy uuid=$vdi_uuid
+ fi
}
uninstall()
{
- local vm_uuid="$1"
- local power_state=$(xe_min vm-list uuid=$vm_uuid params=power-state)
+ local vm_uuid="$1"
+ local power_state=$(xe_min vm-list uuid=$vm_uuid params=power-state)
- if [ "$power_state" != "halted" ]; then
- xe vm-shutdown vm=$vm_uuid force=true
- fi
+ if [ "$power_state" != "halted" ]; then
+ xe vm-shutdown vm=$vm_uuid force=true
+ fi
- for v in $(xe_min vbd-list vm-uuid=$vm_uuid | sed -e 's/,/ /g'); do
- destroy_vdi "$v"
- done
+ for v in $(xe_min vbd-list vm-uuid=$vm_uuid | sed -e 's/,/ /g'); do
+ destroy_vdi "$v"
+ done
- xe vm-uninstall vm=$vm_uuid force=true >/dev/null
+ xe vm-uninstall vm=$vm_uuid force=true >/dev/null
}
uninstall_template()
{
- local vm_uuid="$1"
+ local vm_uuid="$1"
- for v in $(xe_min vbd-list vm-uuid=$vm_uuid | sed -e 's/,/ /g'); do
- destroy_vdi "$v"
- done
+ for v in $(xe_min vbd-list vm-uuid=$vm_uuid | sed -e 's/,/ /g'); do
+ destroy_vdi "$v"
+ done
- xe template-uninstall template-uuid=$vm_uuid force=true >/dev/null
+ xe template-uninstall template-uuid=$vm_uuid force=true >/dev/null
}
# remove the VMs and their disks
for u in $(xe_min vm-list other-config:os-vpx=true | sed -e 's/,/ /g'); do
- uninstall "$u"
+ uninstall "$u"
done
# remove the templates
if [ "$REMOVE_TEMPLATES" == "true" ]; then
- for u in $(xe_min template-list other-config:os-vpx=true | sed -e 's/,/ /g'); do
- uninstall_template "$u"
- done
+ for u in $(xe_min template-list other-config:os-vpx=true | sed -e 's/,/ /g'); do
+ uninstall_template "$u"
+ done
fi
diff --git a/unstack.sh b/unstack.sh
index c944ccc..67c8b7c 100755
--- a/unstack.sh
+++ b/unstack.sh
@@ -42,6 +42,16 @@
source $TOP_DIR/lib/ironic
source $TOP_DIR/lib/trove
+# Extras Source
+# --------------
+
+# Phase: source
+if [[ -d $TOP_DIR/extras.d ]]; then
+ for i in $TOP_DIR/extras.d/*.sh; do
+ [[ -r $i ]] && source $i source
+ done
+fi
+
# Determine what system we are running on. This provides ``os_VENDOR``,
# ``os_RELEASE``, ``os_UPDATE``, ``os_PACKAGE``, ``os_CODENAME``
GetOSVersion
@@ -53,6 +63,7 @@
# Run extras
# ==========
+# Phase: unstack
if [[ -d $TOP_DIR/extras.d ]]; then
for i in $TOP_DIR/extras.d/*.sh; do
[[ -r $i ]] && source $i unstack