Merge "exit cleanup in functions"
diff --git a/HACKING.rst b/HACKING.rst
index 5f33d77..3c08e67 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -5,10 +5,10 @@
 General
 -------
 
-DevStack is written in POSIX shell script.  This choice was made because
-it best illustrates the configuration steps that this implementation takes
-on setting up and interacting with OpenStack components.  DevStack specifically
-uses Bash and is compatible with Bash 3.
+DevStack is written in UNIX shell script.  It uses a number of bash-isms
+and so is limited to Bash (version 3 and up) and compatible shells.
+Shell script was chosen because it best illustrates the steps used to
+set up and interact with OpenStack components.
 
 DevStack's official repository is located on GitHub at
 https://github.com/openstack-dev/devstack.git.  Besides the master branch that
@@ -54,14 +54,14 @@
 ``TOP_DIR`` should always point there, even if the script itself is located in
 a subdirectory::
 
-    # Keep track of the current devstack directory.
+    # Keep track of the current DevStack directory.
     TOP_DIR=$(cd $(dirname "$0") && pwd)
 
 Many scripts will utilize shared functions from the ``functions`` file.  There are
 also rc files (``stackrc`` and ``openrc``) that are often included to set the primary
 configuration of the user environment::
 
-    # Keep track of the current devstack directory.
+    # Keep track of the current DevStack directory.
     TOP_DIR=$(cd $(dirname "$0") && pwd)
 
     # Import common functions
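
The prologue described in this hunk can be sketched as a minimal standalone script; the guards around the ``source`` lines are additions for illustration so the sketch also runs outside a DevStack tree:

```shell
#!/usr/bin/env bash
# Minimal sketch of the DevStack script prologue described above.

# Keep track of the current DevStack directory, even when the script
# lives in a subdirectory or is invoked from another working directory.
TOP_DIR=$(cd $(dirname "$0") && pwd)

# Import common functions and the primary user configuration
# (guarded here so the sketch runs even without those files present).
[[ -r $TOP_DIR/functions ]] && source $TOP_DIR/functions
[[ -r $TOP_DIR/stackrc ]] && source $TOP_DIR/stackrc

echo "$TOP_DIR"
```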
@@ -100,13 +100,14 @@
 -------
 
 ``stackrc`` is the global configuration file for DevStack.  It is responsible for
-calling ``localrc`` if it exists so configuration can be overridden by the user.
+calling ``local.conf`` (or ``localrc`` if it exists) so local user configuration
+is recognized.
 
 The criteria for what belongs in ``stackrc`` can be vaguely summarized as
 follows:
 
-* All project respositories and branches (for historical reasons)
-* Global configuration that may be referenced in ``localrc``, i.e. ``DEST``, ``DATA_DIR``
+* All project repositories and branches handled directly in ``stack.sh``
+* Global configuration that may be referenced in ``local.conf``, i.e. ``DEST``, ``DATA_DIR``
 * Global service configuration like ``ENABLED_SERVICES``
 * Variables used by multiple services that do not have a clear owner, i.e.
   ``VOLUME_BACKING_FILE_SIZE`` (nova-volumes and cinder) or ``PUBLIC_NETWORK_NAME``
@@ -116,8 +117,9 @@
   not be changed for other reasons but the earlier file needs to dereference a
   variable set in the later file.  This should be rare.
 
-Also, variable declarations in ``stackrc`` do NOT allow overriding (the form
-``FOO=${FOO:-baz}``); if they did then they can already be changed in ``localrc``
+Also, variable declarations in ``stackrc`` before ``local.conf`` is sourced
+do NOT allow overriding (the form
+``FOO=${FOO:-baz}``); if they did then they can already be changed in ``local.conf``
 and can stay in the project file.
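
The two declaration forms can be contrasted in a small sketch; ``MY_SETTING``, ``MY_BRANCH`` and ``MY_OTHER`` are hypothetical variables, not ones DevStack defines:

```shell
# The overridable form keeps any value set earlier, e.g. from local.conf:
MY_SETTING=${MY_SETTING:-baz}    # "baz" only if nothing set it before

# The fixed form always wins, so user configuration cannot change it:
MY_BRANCH=master

# Simulate a user override that was sourced earlier:
MY_OTHER=custom
MY_OTHER=${MY_OTHER:-baz}        # keeps "custom", ignores the default
```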
 
 
@@ -139,7 +141,9 @@
 Markdown formatting in the comments; use it sparingly.  Specifically, ``stack.sh``
 uses Markdown headers to divide the script into logical sections.
 
-.. _shocco: http://rtomayko.github.com/shocco/
+.. _shocco: https://github.com/dtroyer/shocco/tree/rst_support
+
+The script used to drive ``shocco`` is ``tools/build_docs.sh``.
 
 
 Exercises
diff --git a/README.md b/README.md
index 514786c..640fab6 100644
--- a/README.md
+++ b/README.md
@@ -6,35 +6,39 @@
 * To describe working configurations of OpenStack (which code branches work together?  what do config files look like for those branches?)
 * To make it easier for developers to dive into OpenStack so that they can productively contribute without having to understand every part of the system at once
 * To make it easy to prototype cross-project features
-* To sanity-check OpenStack builds (used in gating commits to the primary repos)
+* To provide an environment for OpenStack CI testing on every commit to the projects
 
-Read more at http://devstack.org (built from the gh-pages branch)
+Read more at http://devstack.org.
 
-IMPORTANT: Be sure to carefully read `stack.sh` and any other scripts you execute before you run them, as they install software and may alter your networking configuration.  We strongly recommend that you run `stack.sh` in a clean and disposable vm when you are first getting started.
-
-# DevStack on Xenserver
-
-If you would like to use Xenserver as the hypervisor, please refer to the instructions in `./tools/xen/README.md`.
-
-# DevStack on Docker
-
-If you would like to use Docker as the hypervisor, please refer to the instructions in `./tools/docker/README.md`.
+IMPORTANT: Be sure to carefully read `stack.sh` and any other scripts you
+execute before you run them, as they install software and will alter your
+networking configuration.  We strongly recommend that you run `stack.sh`
+in a clean and disposable vm when you are first getting started.
 
 # Versions
 
-The devstack master branch generally points to trunk versions of OpenStack components.  For older, stable versions, look for branches named stable/[release] in the DevStack repo.  For example, you can do the following to create a diablo OpenStack cloud:
+The DevStack master branch generally points to trunk versions of OpenStack
+components.  For older, stable versions, look for branches named
+stable/[release] in the DevStack repo.  For example, you can do the
+following to create a grizzly OpenStack cloud:
 
-    git checkout stable/diablo
+    git checkout stable/grizzly
     ./stack.sh
 
-You can also pick specific OpenStack project releases by setting the appropriate `*_BRANCH` variables in `localrc` (look in `stackrc` for the default set).  Usually just before a release there will be milestone-proposed branches that need to be tested::
+You can also pick specific OpenStack project releases by setting the appropriate
+`*_BRANCH` variables in the ``localrc`` section of `local.conf` (look in
+`stackrc` for the default set).  Usually just before a release there will be
+milestone-proposed branches that need to be tested::
 
     GLANCE_REPO=https://github.com/openstack/glance.git
     GLANCE_BRANCH=milestone-proposed
 
 # Start A Dev Cloud
 
-Installing in a dedicated disposable vm is safer than installing on your dev machine!  Plus you can pick one of the supported Linux distros for your VM.  To start a dev cloud run the following NOT AS ROOT (see below for more):
+Installing in a dedicated disposable VM is safer than installing on your
+dev machine!  Plus you can pick one of the supported Linux distros for
+your VM.  To start a dev cloud run the following NOT AS ROOT (see
+**DevStack Execution Environment** below for more on user accounts):
 
     ./stack.sh
 
@@ -45,7 +49,7 @@
 
 We also provide an environment file that you can use to interact with your cloud via CLI:
 
-    # source openrc file to load your environment with osapi and ec2 creds
+    # source openrc file to load your environment with OpenStack CLI creds
     . openrc
     # list instances
     nova list
@@ -61,16 +65,37 @@
 
 DevStack runs rampant over the system it runs on, installing things and uninstalling other things.  Running this on a system you care about is a recipe for disappointment, or worse.  Alas, we're all in the virtualization business here, so run it in a VM.  And take advantage of the snapshot capabilities of your hypervisor of choice to reduce testing cycle times.  You might even save enough time to write one more feature before the next feature freeze...
 
-``stack.sh`` needs to have root access for a lot of tasks, but it also needs to have not-root permissions for most of its work and for all of the OpenStack services.  So ``stack.sh`` specifically does not run if you are root. This is a recent change (Oct 2013) from the previous behaviour of automatically creating a ``stack`` user.  Automatically creating a user account is not always the right response to running as root, so that bit is now an explicit step using ``tools/create-stack-user.sh``.  Run that (as root!) if you do not want to just use your normal login here, which works perfectly fine.
+``stack.sh`` needs to have root access for a lot of tasks, but uses ``sudo``
+for all of those tasks.  However, it needs to be not-root for most of its
+work and for all of the OpenStack services.  ``stack.sh`` specifically
+does not run if started as root.
+
+This is a recent change (Oct 2013) from the previous behaviour of
+automatically creating a ``stack`` user.  Automatically creating
+user accounts is not the right response to running as root, so
+that bit is now an explicit step using ``tools/create-stack-user.sh``.
+Run that (as root!) or just check it out to see what DevStack's
+expectations are for the account it runs under.  Many people simply
+use their usual login (the default 'ubuntu' login on a UEC image
+for example).
 
 # Customizing
 
-You can override environment variables used in `stack.sh` by creating file name `localrc`.  It is likely that you will need to do this to tweak your networking configuration should you need to access your cloud from a different host.
+You can override environment variables used in `stack.sh` by creating a
+file named `local.conf` with a `localrc` section as shown below.  It is
+likely that you will need to do this to tweak your networking configuration
+should you need to access your cloud from a different host.
+
+    [[local|localrc]]
+    VARIABLE=value
+
+See the **Local Configuration** section below for more details.
 
 # Database Backend
 
 Multiple database backends are available. The available databases are defined in the lib/databases directory.
-`mysql` is the default database, choose a different one by putting the following in `localrc`:
+`mysql` is the default database, choose a different one by putting the
+following in the `localrc` section:
 
     disable_service mysql
     enable_service postgresql
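
A hedged sketch (not DevStack's actual implementation) of how ``enable_service``/``disable_service`` could maintain the comma-separated ``ENABLED_SERVICES`` list that the snippet above manipulates:

```shell
# Illustrative reimplementation of the service toggles; the real
# functions live in DevStack's "functions" file and differ in detail.
ENABLED_SERVICES="key,mysql,n-api"

enable_service() {
    local svc
    for svc in "$@"; do
        case ",$ENABLED_SERVICES," in
            *,$svc,*) ;;                                    # already enabled
            *) ENABLED_SERVICES="$ENABLED_SERVICES,$svc" ;;
        esac
    done
}

disable_service() {
    local svc list=",$ENABLED_SERVICES,"
    for svc in "$@"; do
        list=${list/,$svc,/,}      # drop the service from the list
    done
    list=${list#,}
    ENABLED_SERVICES=${list%,}
}

disable_service mysql
enable_service postgresql
echo "$ENABLED_SERVICES"   # key,n-api,postgresql
```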
@@ -81,7 +106,7 @@
 
 Multiple RPC backends are available. Currently, this
 includes RabbitMQ (default), Qpid, and ZeroMQ. Your backend of
-choice may be selected via the `localrc`.
+choice may be selected via the `localrc` section.
 
 Note that selecting more than one RPC backend will result in a failure.
 
@@ -95,9 +120,10 @@
 
 # Apache Frontend
 
-Apache web server is enabled for wsgi services by setting `APACHE_ENABLED_SERVICES` in your localrc. But remember to enable these services at first as above.
+Apache web server is enabled for wsgi services by setting
+`APACHE_ENABLED_SERVICES` in your `localrc` section.  Remember to
+enable these services first, as described above.
 
-Example:
     APACHE_ENABLED_SERVICES+=keystone,swift
 
 # Swift
@@ -108,23 +134,23 @@
 object services will run directly in screen. The others services like
 replicator, updaters or auditor runs in background.
 
-If you would like to enable Swift you can add this to your `localrc` :
+If you would like to enable Swift you can add this to your `localrc` section:
 
     enable_service s-proxy s-object s-container s-account
 
 If you want a minimal Swift install with only Swift and Keystone you
-can have this instead in your `localrc`:
+can have this instead in your `localrc` section:
 
     disable_all_services
     enable_service key mysql s-proxy s-object s-container s-account
 
 If you only want to do some testing of a real normal swift cluster
 with multiple replicas you can do so by customizing the variable
-`SWIFT_REPLICAS` in your `localrc` (usually to 3).
+`SWIFT_REPLICAS` in your `localrc` section (usually to 3).
 
 # Swift S3
 
-If you are enabling `swift3` in `ENABLED_SERVICES` devstack will
+If you are enabling `swift3` in `ENABLED_SERVICES` DevStack will
 install the swift3 middleware emulation. Swift will be configured to
 act as a S3 endpoint for Keystone so effectively replacing the
 `nova-objectstore`.
@@ -137,7 +163,7 @@
 Basic Setup
 
 In order to enable Neutron a single node setup, you'll need the
-following settings in your `localrc` :
+following settings in your `localrc` section:
 
     disable_service n-net
     enable_service q-svc
@@ -146,12 +172,15 @@
     enable_service q-l3
     enable_service q-meta
     enable_service neutron
-    # Optional, to enable tempest configuration as part of devstack
+    # Optional, to enable tempest configuration as part of DevStack
     enable_service tempest
 
 Then run `stack.sh` as normal.
 
-devstack supports adding specific Neutron configuration flags to the service, Open vSwitch plugin and LinuxBridge plugin configuration files. To make use of this feature, the following variables are defined and can be configured in your `localrc` file:
+DevStack supports setting specific Neutron configuration flags in the
+service, Open vSwitch plugin and LinuxBridge plugin configuration files.
+To make use of this feature, the following variables are defined and can
+be configured in your `localrc` section:
 
     Variable Name             Config File  Section Modified
     -------------------------------------------------------------------------------------
@@ -160,12 +189,14 @@
     Q_AGENT_EXTRA_SRV_OPTS    Plugin       `OVS` (for Open Vswitch) or `LINUX_BRIDGE` (for LinuxBridge)
     Q_SRV_EXTRA_DEFAULT_OPTS  Service      DEFAULT
 
-An example of using the variables in your `localrc` is below:
+An example of using the variables in your `localrc` section is below:
 
     Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_type=vxlan vxlan_udp_port=8472)
     Q_SRV_EXTRA_OPTS=(tenant_network_type=vxlan)
 
-devstack also supports configuring the Neutron ML2 plugin. The ML2 plugin can run with the OVS, LinuxBridge, or Hyper-V agents on compute hosts. A simple way to configure the ml2 plugin is shown below:
+DevStack also supports configuring the Neutron ML2 plugin. The ML2 plugin
+can run with the OVS, LinuxBridge, or Hyper-V agents on compute hosts. A
+simple way to configure the ml2 plugin is shown below:
 
     # VLAN configuration
     Q_PLUGIN=ml2
@@ -179,7 +210,9 @@
     Q_PLUGIN=ml2
     Q_ML2_TENANT_NETWORK_TYPE=vxlan
 
-The above will default in devstack to using the OVS on each compute host. To change this, set the `Q_AGENT` variable to the agent you want to run (e.g. linuxbridge).
+The above settings default DevStack to using the OVS agent on each
+compute host.  To change this, set the `Q_AGENT` variable to the agent
+you want to run (e.g. linuxbridge).
 
     Variable Name                    Notes
     -------------------------------------------------------------------------------------
@@ -194,13 +227,13 @@
 # Heat
 
 Heat is disabled by default. To enable it you'll need the following settings
-in your `localrc` :
+in your `localrc` section:
 
     enable_service heat h-api h-api-cfn h-api-cw h-eng
 
 Heat can also run in standalone mode, and be configured to orchestrate
 on an external OpenStack cloud. To launch only Heat in standalone mode
-you'll need the following settings in your `localrc` :
+you'll need the following settings in your `localrc` section:
 
     disable_all_services
     enable_service rabbit mysql heat h-api h-api-cfn h-api-cw h-eng
@@ -215,9 +248,23 @@
     $ cd /opt/stack/tempest
     $ nosetests tempest/scenario/test_network_basic_ops.py
 
+# DevStack on Xenserver
+
+If you would like to use Xenserver as the hypervisor, please refer to the instructions in `./tools/xen/README.md`.
+
+# DevStack on Docker
+
+If you would like to use Docker as the hypervisor, please refer to the instructions in `./tools/docker/README.md`.
+
 # Additional Projects
 
-DevStack has a hook mechanism to call out to a dispatch script at specific points in the execution if `stack.sh`, `unstack.sh` and `clean.sh`.  This allows higher-level projects, especially those that the lower level projects have no dependency on, to be added to DevStack without modifying the scripts.  Tempest is built this way as an example of how to structure the dispatch script, see `extras.d/80-tempest.sh`.  See `extras.d/README.md` for more information.
+DevStack has a hook mechanism to call out to a dispatch script at specific
+points in the execution of `stack.sh`, `unstack.sh` and `clean.sh`.  This
+allows upper-layer projects, especially those on which the lower-layer
+projects have no dependency, to be added to DevStack without modifying the
+core scripts.  Tempest is built this way as an example of how to structure
+the dispatch script, see `extras.d/80-tempest.sh`.  See `extras.d/README.md`
+for more information.
 
 # Multi-Node Setup
 
@@ -232,7 +279,8 @@
     enable_service q-meta
     enable_service neutron
 
-You likely want to change your `localrc` to run a scheduler that will balance VMs across hosts:
+You likely want to change your `localrc` section to run a scheduler that
+will balance VMs across hosts:
 
     SCHEDULER=nova.scheduler.simple.SimpleScheduler
 
@@ -249,7 +297,7 @@
 
 Cells is a new scaling option with a full spec at http://wiki.openstack.org/blueprint-nova-compute-cells.
 
-To setup a cells environment add the following to your `localrc`:
+To setup a cells environment add the following to your `localrc` section:
 
     enable_service n-cell
 
@@ -264,32 +312,41 @@
 
 The new config file ``local.conf`` is an extended-INI format that introduces a new meta-section header that provides some additional information such as a phase name and destination config filename:
 
-  [[ <phase> | <filename> ]]
+    [[ <phase> | <config-file-name> ]]
 
-where <phase> is one of a set of phase names defined by ``stack.sh`` and <filename> is the project config filename.  The filename is eval'ed in the stack.sh context so all environment variables are available and may be used.  Using the project config file variables in the header is strongly suggested (see example of NOVA_CONF below).  If the path of the config file does not exist it is skipped.
+where ``<phase>`` is one of a set of phase names defined by ``stack.sh``
+and ``<config-file-name>`` is the configuration filename.  The filename is
+eval'ed in the ``stack.sh`` context so all environment variables are
+available and may be used.  Using the project config file variables in
+the header is strongly suggested (see the ``NOVA_CONF`` example below).
+If the path of the config file does not exist it is skipped.
 
 The defined phases are:
 
-* local - extracts ``localrc`` from ``local.conf`` before ``stackrc`` is sourced
-* post-config - runs after the layer 2 services are configured and before they are started
-* extra - runs after services are started and before any files in ``extra.d`` are executes
+* **local** - extracts ``localrc`` from ``local.conf`` before ``stackrc`` is sourced
+* **post-config** - runs after the layer 2 services are configured and before they are started
+* **extra** - runs after services are started and before any files in ``extra.d`` are executed
 
 The file is processed strictly in sequence; meta-sections may be specified more than once but if any settings are duplicated the last to appear in the file will be used.
 
-  [[post-config|$NOVA_CONF]]
-  [DEFAULT]
-  use_syslog = True
+    [[post-config|$NOVA_CONF]]
+    [DEFAULT]
+    use_syslog = True
 
-  [osapi_v3]
-  enabled = False
+    [osapi_v3]
+    enabled = False
 
-A specific meta-section ``local:localrc`` is used to provide a default localrc file.  This allows all custom settings for DevStack to be contained in a single file.  ``localrc`` is not overwritten if it exists to preserve compatability.
+A specific meta-section ``local|localrc`` is used to provide a default
+``localrc`` file (actually ``.localrc.auto``).  This allows all custom
+settings for DevStack to be contained in a single file.  If ``localrc``
+exists it will be used instead to preserve backward compatibility.
 
-  [[local|localrc]]
-  FIXED_RANGE=10.254.1.0/24
-  ADMIN_PASSWORD=speciale
-  LOGFILE=$DEST/logs/stack.sh.log
+    [[local|localrc]]
+    FIXED_RANGE=10.254.1.0/24
+    ADMIN_PASSWORD=speciale
+    LOGFILE=$DEST/logs/stack.sh.log
 
-Note that ``Q_PLUGIN_CONF_FILE`` is unique in that it is assumed to _NOT_ start with a ``/`` (slash) character.  A slash will need to be added:
+Note that ``Q_PLUGIN_CONF_FILE`` is unique in that it is assumed to *NOT*
+start with a ``/`` (slash) character.  A slash will need to be added:
 
-  [[post-config|/$Q_PLUGIN_CONF_FILE]]
+    [[post-config|/$Q_PLUGIN_CONF_FILE]]
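
The meta-section format described above can be exercised with a standalone sketch (not DevStack's actual parser); ``get_meta_section`` and the temp file path are illustrative names:

```shell
# Extract the body of one [[<phase>|<config-file-name>]] meta-section.
get_meta_section() {
    local file=$1 header="[[$2|$3]]"
    awk -v hdr="$header" '
        index($0, hdr) == 1 { insec = 1; next }  # our header: start capture
        /^\[\[/             { insec = 0 }        # any other header: stop
        insec               { print }
    ' "$file"
}

# Build a small local.conf to parse.
cat > /tmp/local.conf <<'EOF'
[[local|localrc]]
ADMIN_PASSWORD=speciale
[[post-config|$NOVA_CONF]]
[DEFAULT]
use_syslog = True
EOF

get_meta_section /tmp/local.conf local localrc   # ADMIN_PASSWORD=speciale
```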
diff --git a/exercises/aggregates.sh b/exercises/aggregates.sh
index e2baecd..e5fc7de 100755
--- a/exercises/aggregates.sh
+++ b/exercises/aggregates.sh
@@ -100,7 +100,7 @@
 META_DATA_3_KEY=bar
 
 #ensure no additional metadata is set
-nova aggregate-details $AGGREGATE_ID | egrep "{u'availability_zone': u'$AGGREGATE_A_ZONE'}|{}"
+nova aggregate-details $AGGREGATE_ID | egrep "\|[{u ]*'availability_zone.+$AGGREGATE_A_ZONE'[ }]*\|"
 
 nova aggregate-set-metadata $AGGREGATE_ID ${META_DATA_1_KEY}=123
 nova aggregate-details $AGGREGATE_ID | grep $META_DATA_1_KEY
@@ -117,7 +117,7 @@
 nova aggregate-details $AGGREGATE_ID | grep $META_DATA_2_KEY && die $LINENO "ERROR metadata was not cleared"
 
 nova aggregate-set-metadata $AGGREGATE_ID $META_DATA_3_KEY $META_DATA_1_KEY
-nova aggregate-details $AGGREGATE_ID | egrep "{u'availability_zone': u'$AGGREGATE_A_ZONE'}|{}"
+nova aggregate-details $AGGREGATE_ID | egrep "\|[{u ]*'availability_zone.+$AGGREGATE_A_ZONE'[ }]*\|"
 
 
 # Test aggregate-add/remove-host
diff --git a/exercises/boot_from_volume.sh b/exercises/boot_from_volume.sh
index fe27bd0..634a6d5 100755
--- a/exercises/boot_from_volume.sh
+++ b/exercises/boot_from_volume.sh
@@ -119,7 +119,7 @@
 INSTANCE_TYPE=$(nova flavor-list | grep $DEFAULT_INSTANCE_TYPE | get_field 1)
 if [[ -z "$INSTANCE_TYPE" ]]; then
     # grab the first flavor in the list to launch if default doesn't exist
-   INSTANCE_TYPE=$(nova flavor-list | head -n 4 | tail -n 1 | get_field 1)
+    INSTANCE_TYPE=$(nova flavor-list | head -n 4 | tail -n 1 | get_field 1)
 fi
 
 # Clean-up from previous runs
diff --git a/exercises/docker.sh b/exercises/docker.sh
index 0672bc0..10c5436 100755
--- a/exercises/docker.sh
+++ b/exercises/docker.sh
@@ -62,7 +62,7 @@
 INSTANCE_TYPE=$(nova flavor-list | grep $DEFAULT_INSTANCE_TYPE | get_field 1)
 if [[ -z "$INSTANCE_TYPE" ]]; then
     # grab the first flavor in the list to launch if default doesn't exist
-   INSTANCE_TYPE=$(nova flavor-list | head -n 4 | tail -n 1 | get_field 1)
+    INSTANCE_TYPE=$(nova flavor-list | head -n 4 | tail -n 1 | get_field 1)
 fi
 
 # Clean-up from previous runs
@@ -102,4 +102,3 @@
 echo "*********************************************************************"
 echo "SUCCESS: End DevStack Exercise: $0"
 echo "*********************************************************************"
-
diff --git a/exercises/euca.sh b/exercises/euca.sh
index 64c0014..ed521e4 100755
--- a/exercises/euca.sh
+++ b/exercises/euca.sh
@@ -87,31 +87,31 @@
 # Volumes
 # -------
 if is_service_enabled c-vol && ! is_service_enabled n-cell; then
-   VOLUME_ZONE=`euca-describe-availability-zones | head -n1 | cut -f2`
-   die_if_not_set $LINENO VOLUME_ZONE "Failure to find zone for volume"
+    VOLUME_ZONE=`euca-describe-availability-zones | head -n1 | cut -f2`
+    die_if_not_set $LINENO VOLUME_ZONE "Failure to find zone for volume"
 
-   VOLUME=`euca-create-volume -s 1 -z $VOLUME_ZONE | cut -f2`
-   die_if_not_set $LINENO VOLUME "Failure to create volume"
+    VOLUME=`euca-create-volume -s 1 -z $VOLUME_ZONE | cut -f2`
+    die_if_not_set $LINENO VOLUME "Failure to create volume"
 
-   # Test that volume has been created
-   VOLUME=`euca-describe-volumes $VOLUME | cut -f2`
-   die_if_not_set $LINENO VOLUME "Failure to get volume"
+    # Test that volume has been created
+    VOLUME=`euca-describe-volumes $VOLUME | cut -f2`
+    die_if_not_set $LINENO VOLUME "Failure to get volume"
 
-   # Test volume has become available
-   if ! timeout $RUNNING_TIMEOUT sh -c "while ! euca-describe-volumes $VOLUME | grep -q available; do sleep 1; done"; then
-       die $LINENO "volume didn't become available within $RUNNING_TIMEOUT seconds"
-   fi
+    # Test volume has become available
+    if ! timeout $RUNNING_TIMEOUT sh -c "while ! euca-describe-volumes $VOLUME | grep -q available; do sleep 1; done"; then
+        die $LINENO "volume didn't become available within $RUNNING_TIMEOUT seconds"
+    fi
 
-   # Attach volume to an instance
-   euca-attach-volume -i $INSTANCE -d $ATTACH_DEVICE $VOLUME || \
-       die $LINENO "Failure attaching volume $VOLUME to $INSTANCE"
-   if ! timeout $ACTIVE_TIMEOUT sh -c "while ! euca-describe-volumes $VOLUME | grep -A 1 in-use | grep -q attach; do sleep 1; done"; then
-       die $LINENO "Could not attach $VOLUME to $INSTANCE"
-   fi
+    # Attach volume to an instance
+    euca-attach-volume -i $INSTANCE -d $ATTACH_DEVICE $VOLUME || \
+        die $LINENO "Failure attaching volume $VOLUME to $INSTANCE"
+    if ! timeout $ACTIVE_TIMEOUT sh -c "while ! euca-describe-volumes $VOLUME | grep -A 1 in-use | grep -q attach; do sleep 1; done"; then
+        die $LINENO "Could not attach $VOLUME to $INSTANCE"
+    fi
 
-   # Detach volume from an instance
-   euca-detach-volume $VOLUME || \
-       die $LINENO "Failure detaching volume $VOLUME to $INSTANCE"
+    # Detach volume from an instance
+    euca-detach-volume $VOLUME || \
+        die $LINENO "Failure detaching volume $VOLUME to $INSTANCE"
     if ! timeout $ACTIVE_TIMEOUT sh -c "while ! euca-describe-volumes $VOLUME | grep -q available; do sleep 1; done"; then
         die $LINENO "Could not detach $VOLUME to $INSTANCE"
     fi
@@ -120,7 +120,7 @@
     euca-delete-volume $VOLUME || \
         die $LINENO "Failure to delete volume"
     if ! timeout $ACTIVE_TIMEOUT sh -c "while euca-describe-volumes | grep $VOLUME; do sleep 1; done"; then
-       die $LINENO "Could not delete $VOLUME"
+        die $LINENO "Could not delete $VOLUME"
     fi
 else
     echo "Volume Tests Skipped"
diff --git a/exercises/floating_ips.sh b/exercises/floating_ips.sh
index 2833b65..1a1608c 100755
--- a/exercises/floating_ips.sh
+++ b/exercises/floating_ips.sh
@@ -113,7 +113,7 @@
 INSTANCE_TYPE=$(nova flavor-list | grep $DEFAULT_INSTANCE_TYPE | get_field 1)
 if [[ -z "$INSTANCE_TYPE" ]]; then
     # grab the first flavor in the list to launch if default doesn't exist
-   INSTANCE_TYPE=$(nova flavor-list | head -n 4 | tail -n 1 | get_field 1)
+    INSTANCE_TYPE=$(nova flavor-list | head -n 4 | tail -n 1 | get_field 1)
 fi
 
 # Clean-up from previous runs
@@ -168,7 +168,7 @@
     # list floating addresses
     if ! timeout $ASSOCIATE_TIMEOUT sh -c "while ! nova floating-ip-list | grep $TEST_FLOATING_POOL | grep -q $TEST_FLOATING_IP; do sleep 1; done"; then
         die $LINENO "Floating IP not allocated"
-     fi
+    fi
 fi
 
 # Dis-allow icmp traffic (ping)
diff --git a/exercises/neutron-adv-test.sh b/exercises/neutron-adv-test.sh
index abb29cf..7dfa5dc 100755
--- a/exercises/neutron-adv-test.sh
+++ b/exercises/neutron-adv-test.sh
@@ -102,6 +102,7 @@
 # and save it.
 
 TOKEN=`keystone token-get | grep ' id ' | awk '{print $4}'`
+die_if_not_set $LINENO TOKEN "Keystone failed to get token"
 
 # Various functions
 # -----------------
@@ -272,12 +273,12 @@
 }
 
 function ping_ip {
-     # Test agent connection.  Assumes namespaces are disabled, and
-     # that DHCP is in use, but not L3
-     local VM_NAME=$1
-     local NET_NAME=$2
-     IP=$(get_instance_ip $VM_NAME $NET_NAME)
-     ping_check $NET_NAME $IP $BOOT_TIMEOUT
+    # Test agent connection.  Assumes namespaces are disabled, and
+    # that DHCP is in use, but not L3
+    local VM_NAME=$1
+    local NET_NAME=$2
+    IP=$(get_instance_ip $VM_NAME $NET_NAME)
+    ping_check $NET_NAME $IP $BOOT_TIMEOUT
 }
 
 function check_vm {
@@ -329,12 +330,12 @@
 }
 
 function delete_networks {
-   foreach_tenant_net 'delete_network ${%TENANT%_NAME} %NUM%'
-   #TODO(nati) add secuirty group check after it is implemented
-   # source $TOP_DIR/openrc demo1 demo1
-   # nova secgroup-delete-rule default icmp -1 -1 0.0.0.0/0
-   # source $TOP_DIR/openrc demo2 demo2
-   # nova secgroup-delete-rule default icmp -1 -1 0.0.0.0/0
+    foreach_tenant_net 'delete_network ${%TENANT%_NAME} %NUM%'
+    # TODO(nati) add security group check after it is implemented
+    # source $TOP_DIR/openrc demo1 demo1
+    # nova secgroup-delete-rule default icmp -1 -1 0.0.0.0/0
+    # source $TOP_DIR/openrc demo2 demo2
+    # nova secgroup-delete-rule default icmp -1 -1 0.0.0.0/0
 }
 
 function create_all {
diff --git a/exercises/volumes.sh b/exercises/volumes.sh
index e536d16..9ee9fa9 100755
--- a/exercises/volumes.sh
+++ b/exercises/volumes.sh
@@ -117,7 +117,7 @@
 INSTANCE_TYPE=$(nova flavor-list | grep $DEFAULT_INSTANCE_TYPE | get_field 1)
 if [[ -z "$INSTANCE_TYPE" ]]; then
     # grab the first flavor in the list to launch if default doesn't exist
-   INSTANCE_TYPE=$(nova flavor-list | head -n 4 | tail -n 1 | get_field 1)
+    INSTANCE_TYPE=$(nova flavor-list | head -n 4 | tail -n 1 | get_field 1)
 fi
 
 # Clean-up from previous runs
diff --git a/extras.d/README.md b/extras.d/README.md
index 591e438..88e4265 100644
--- a/extras.d/README.md
+++ b/extras.d/README.md
@@ -10,12 +10,11 @@
 names start with a two digit sequence number.  DevStack reserves the sequence
 numbers 00 through 09 and 90 through 99 for its own use.
 
-The scripts are sourced at each hook point so they should not declare anything
-at the top level that would cause a problem, specifically, functions.  This does
-allow the entire `stack.sh` variable space to be available.  The scripts are
+The scripts are sourced at the beginning of each script that calls them. The
+entire `stack.sh` variable space is available.  The scripts are
 sourced with one or more arguments, the first of which defines the hook phase:
 
-arg 1: source | stack | unstack | clean
+    source | stack | unstack | clean
 
     source: always called first in any of the scripts, used to set the
         initial defaults in a lib/* script or similar
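
A hypothetical dispatch skeleton based on the hook phases above; "myproject" and its actions are placeholders, not real DevStack code:

```shell
# Sketch of an extras.d-style dispatch script, keyed on the phase
# name passed as the first argument.
dispatch() {
    case $1 in
        source)
            # Always called first: set initial defaults, lib/*-style
            MYPROJECT_DIR=${MYPROJECT_DIR:-/opt/stack/myproject}
            ;;
        stack)   echo "install/configure/start myproject" ;;
        unstack) echo "stop myproject services" ;;
        clean)   echo "remove myproject state" ;;
    esac
}

dispatch source
dispatch stack
```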
diff --git a/files/keystone_data.sh b/files/keystone_data.sh
index 3f3137c..ea2d52d 100755
--- a/files/keystone_data.sh
+++ b/files/keystone_data.sh
@@ -66,12 +66,12 @@
 # Heat
 if [[ "$ENABLED_SERVICES" =~ "heat" ]]; then
     HEAT_USER=$(get_id keystone user-create --name=heat \
-                                              --pass="$SERVICE_PASSWORD" \
-                                              --tenant_id $SERVICE_TENANT \
-                                              --email=heat@example.com)
+        --pass="$SERVICE_PASSWORD" \
+        --tenant_id $SERVICE_TENANT \
+        --email=heat@example.com)
     keystone user-role-add --tenant-id $SERVICE_TENANT \
-                           --user-id $HEAT_USER \
-                           --role-id $SERVICE_ROLE
+        --user-id $HEAT_USER \
+        --role-id $SERVICE_ROLE
     # heat_stack_user role is for users created by Heat
     keystone role-create --name heat_stack_user
     if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
@@ -126,16 +126,16 @@
 # Ceilometer
 if [[ "$ENABLED_SERVICES" =~ "ceilometer" ]]; then
     CEILOMETER_USER=$(get_id keystone user-create --name=ceilometer \
-                                              --pass="$SERVICE_PASSWORD" \
-                                              --tenant_id $SERVICE_TENANT \
-                                              --email=ceilometer@example.com)
+        --pass="$SERVICE_PASSWORD" \
+        --tenant_id $SERVICE_TENANT \
+        --email=ceilometer@example.com)
     keystone user-role-add --tenant-id $SERVICE_TENANT \
-                           --user-id $CEILOMETER_USER \
-                           --role-id $ADMIN_ROLE
+        --user-id $CEILOMETER_USER \
+        --role-id $ADMIN_ROLE
     # Ceilometer needs ResellerAdmin role to access swift account stats.
     keystone user-role-add --tenant-id $SERVICE_TENANT \
-                           --user-id $CEILOMETER_USER \
-                           --role-id $RESELLER_ROLE
+        --user-id $CEILOMETER_USER \
+        --role-id $RESELLER_ROLE
     if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
         CEILOMETER_SERVICE=$(get_id keystone service-create \
             --name=ceilometer \
diff --git a/functions b/functions
index 0aef47e..af5a37d 100644
--- a/functions
+++ b/functions
@@ -714,7 +714,8 @@
     local section=$2
     local option=$3
     local value=$4
-    if ! grep -q "^\[$section\]" "$file"; then
+
+    if ! grep -q "^\[$section\]" "$file" 2>/dev/null; then
         # Add section at the end
         echo -e "\n[$section]" >>"$file"
     fi
@@ -1372,9 +1373,9 @@
         IMAGE="$FILES/${IMAGE_FNAME}"
         IMAGE_NAME="${IMAGE_FNAME%.xen-raw.tgz}"
         glance \
-          --os-auth-token $token \
-          --os-image-url http://$GLANCE_HOSTPORT \
-          image-create \
+            --os-auth-token $token \
+            --os-image-url http://$GLANCE_HOSTPORT \
+            image-create \
             --name "$IMAGE_NAME" --is-public=True \
             --container-format=tgz --disk-format=raw \
             --property vm_mode=xen < "${IMAGE}"
@@ -1397,11 +1398,11 @@
             mkdir "$xdir"
             tar -zxf $FILES/$IMAGE_FNAME -C "$xdir"
             KERNEL=$(for f in "$xdir/"*-vmlinuz* "$xdir/"aki-*/image; do
-                     [ -f "$f" ] && echo "$f" && break; done; true)
+                [ -f "$f" ] && echo "$f" && break; done; true)
             RAMDISK=$(for f in "$xdir/"*-initrd* "$xdir/"ari-*/image; do
-                     [ -f "$f" ] && echo "$f" && break; done; true)
+                [ -f "$f" ] && echo "$f" && break; done; true)
             IMAGE=$(for f in "$xdir/"*.img "$xdir/"ami-*/image; do
-                     [ -f "$f" ] && echo "$f" && break; done; true)
+                [ -f "$f" ] && echo "$f" && break; done; true)
             if [[ -z "$IMAGE_NAME" ]]; then
                 IMAGE_NAME=$(basename "$IMAGE" ".img")
             fi
@@ -1690,23 +1691,23 @@
 #
 # _vercmp_r sep ver1 ver2
 function _vercmp_r {
-  typeset sep
-  typeset -a ver1=() ver2=()
-  sep=$1; shift
-  ver1=("${@:1:sep}")
-  ver2=("${@:sep+1}")
+    typeset sep
+    typeset -a ver1=() ver2=()
+    sep=$1; shift
+    ver1=("${@:1:sep}")
+    ver2=("${@:sep+1}")
 
-  if ((ver1 > ver2)); then
-    echo 1; return 0
-  elif ((ver2 > ver1)); then
-    echo -1; return 0
-  fi
+    if ((ver1 > ver2)); then
+        echo 1; return 0
+    elif ((ver2 > ver1)); then
+        echo -1; return 0
+    fi
 
-  if ((sep <= 1)); then
-    echo 0; return 0
-  fi
+    if ((sep <= 1)); then
+        echo 0; return 0
+    fi
 
-  _vercmp_r $((sep-1)) "${ver1[@]:1}" "${ver2[@]:1}"
+    _vercmp_r $((sep-1)) "${ver1[@]:1}" "${ver2[@]:1}"
 }
 
 
@@ -1728,13 +1729,13 @@
 #
 # vercmp_numbers ver1 ver2
 vercmp_numbers() {
-  typeset v1=$1 v2=$2 sep
-  typeset -a ver1 ver2
+    typeset v1=$1 v2=$2 sep
+    typeset -a ver1 ver2
 
-  IFS=. read -ra ver1 <<< "$v1"
-  IFS=. read -ra ver2 <<< "$v2"
+    IFS=. read -ra ver1 <<< "$v1"
+    IFS=. read -ra ver2 <<< "$v2"
 
-  _vercmp_r "${#ver1[@]}" "${ver1[@]}" "${ver2[@]}"
+    _vercmp_r "${#ver1[@]}" "${ver1[@]}" "${ver2[@]}"
 }
 
 
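The reindented version-comparison helpers above can be exercised standalone. A minimal sketch follows; the function bodies are reproduced from the patch and the trailing driver line is illustrative (equal-length version strings assumed, bash 3+ required for the array and `<<<` constructs):

```shell
# Recursive element-wise comparison of two equal-length version arrays;
# prints 1 (ver1 newer), -1 (ver2 newer), or 0 (equal).
_vercmp_r() {
    typeset sep
    typeset -a ver1=() ver2=()
    sep=$1; shift
    ver1=("${@:1:sep}")
    ver2=("${@:sep+1}")

    if ((ver1 > ver2)); then
        echo 1; return 0
    elif ((ver2 > ver1)); then
        echo -1; return 0
    fi

    if ((sep <= 1)); then
        echo 0; return 0
    fi

    _vercmp_r $((sep-1)) "${ver1[@]:1}" "${ver2[@]:1}"
}

# Split dotted version strings and compare component by component
vercmp_numbers() {
    typeset v1=$1 v2=$2
    typeset -a ver1 ver2

    IFS=. read -ra ver1 <<< "$v1"
    IFS=. read -ra ver2 <<< "$v2"

    _vercmp_r "${#ver1[@]}" "${ver1[@]}" "${ver2[@]}"
}

vercmp_numbers 1.2.3 1.2.4    # prints -1
```

Note that `((ver1 > ver2))` compares only the first element of each array; the recursion strips one component per call until a difference is found or the arrays are exhausted.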
diff --git a/lib/baremetal b/lib/baremetal
index f4d8589..141c28d 100644
--- a/lib/baremetal
+++ b/lib/baremetal
@@ -256,19 +256,19 @@
 
     # load them into glance
     BM_DEPLOY_KERNEL_ID=$(glance \
-         --os-auth-token $token \
-         --os-image-url http://$GLANCE_HOSTPORT \
-         image-create \
-         --name $BM_DEPLOY_KERNEL \
-         --is-public True --disk-format=aki \
-         < $TOP_DIR/files/$BM_DEPLOY_KERNEL  | grep ' id ' | get_field 2)
+        --os-auth-token $token \
+        --os-image-url http://$GLANCE_HOSTPORT \
+        image-create \
+        --name $BM_DEPLOY_KERNEL \
+        --is-public True --disk-format=aki \
+        < $TOP_DIR/files/$BM_DEPLOY_KERNEL  | grep ' id ' | get_field 2)
     BM_DEPLOY_RAMDISK_ID=$(glance \
-         --os-auth-token $token \
-         --os-image-url http://$GLANCE_HOSTPORT \
-         image-create \
-         --name $BM_DEPLOY_RAMDISK \
-         --is-public True --disk-format=ari \
-         < $TOP_DIR/files/$BM_DEPLOY_RAMDISK  | grep ' id ' | get_field 2)
+        --os-auth-token $token \
+        --os-image-url http://$GLANCE_HOSTPORT \
+        image-create \
+        --name $BM_DEPLOY_RAMDISK \
+        --is-public True --disk-format=ari \
+        < $TOP_DIR/files/$BM_DEPLOY_RAMDISK  | grep ' id ' | get_field 2)
 }
 
 # create a basic baremetal flavor, associated with deploy kernel & ramdisk
@@ -278,11 +278,11 @@
     aki=$1
     ari=$2
     nova flavor-create $BM_FLAVOR_NAME $BM_FLAVOR_ID \
-            $BM_FLAVOR_RAM $BM_FLAVOR_ROOT_DISK $BM_FLAVOR_CPU
+        $BM_FLAVOR_RAM $BM_FLAVOR_ROOT_DISK $BM_FLAVOR_CPU
     nova flavor-key $BM_FLAVOR_NAME set \
-            "cpu_arch"="$BM_FLAVOR_ARCH" \
-            "baremetal:deploy_kernel_id"="$aki" \
-            "baremetal:deploy_ramdisk_id"="$ari"
+        "cpu_arch"="$BM_FLAVOR_ARCH" \
+        "baremetal:deploy_kernel_id"="$aki" \
+        "baremetal:deploy_ramdisk_id"="$ari"
 
 }
 
@@ -311,19 +311,19 @@
 
     # load them into glance
     KERNEL_ID=$(glance \
-         --os-auth-token $token \
-         --os-image-url http://$GLANCE_HOSTPORT \
-         image-create \
-         --name $image_name-kernel \
-         --is-public True --disk-format=aki \
-         < $TOP_DIR/files/$OUT_KERNEL | grep ' id ' | get_field 2)
+        --os-auth-token $token \
+        --os-image-url http://$GLANCE_HOSTPORT \
+        image-create \
+        --name $image_name-kernel \
+        --is-public True --disk-format=aki \
+        < $TOP_DIR/files/$OUT_KERNEL | grep ' id ' | get_field 2)
     RAMDISK_ID=$(glance \
-         --os-auth-token $token \
-         --os-image-url http://$GLANCE_HOSTPORT \
-         image-create \
-         --name $image_name-initrd \
-         --is-public True --disk-format=ari \
-         < $TOP_DIR/files/$OUT_RAMDISK | grep ' id ' | get_field 2)
+        --os-auth-token $token \
+        --os-image-url http://$GLANCE_HOSTPORT \
+        image-create \
+        --name $image_name-initrd \
+        --is-public True --disk-format=ari \
+        < $TOP_DIR/files/$OUT_RAMDISK | grep ' id ' | get_field 2)
 }
 
 
@@ -365,11 +365,11 @@
             mkdir "$xdir"
             tar -zxf $FILES/$IMAGE_FNAME -C "$xdir"
             KERNEL=$(for f in "$xdir/"*-vmlinuz* "$xdir/"aki-*/image; do
-                     [ -f "$f" ] && echo "$f" && break; done; true)
+                [ -f "$f" ] && echo "$f" && break; done; true)
             RAMDISK=$(for f in "$xdir/"*-initrd* "$xdir/"ari-*/image; do
-                     [ -f "$f" ] && echo "$f" && break; done; true)
+                [ -f "$f" ] && echo "$f" && break; done; true)
             IMAGE=$(for f in "$xdir/"*.img "$xdir/"ami-*/image; do
-                     [ -f "$f" ] && echo "$f" && break; done; true)
+                [ -f "$f" ] && echo "$f" && break; done; true)
             if [[ -z "$IMAGE_NAME" ]]; then
                 IMAGE_NAME=$(basename "$IMAGE" ".img")
             fi
@@ -403,19 +403,19 @@
             --container-format ari \
             --disk-format ari < "$RAMDISK" | grep ' id ' | get_field 2)
     else
-       # TODO(deva): add support for other image types
-       return
+        # TODO(deva): add support for other image types
+        return
     fi
 
     glance \
-       --os-auth-token $token \
-       --os-image-url http://$GLANCE_HOSTPORT \
-       image-create \
-       --name "${IMAGE_NAME%.img}" --is-public True \
-       --container-format $CONTAINER_FORMAT \
-       --disk-format $DISK_FORMAT \
-       ${KERNEL_ID:+--property kernel_id=$KERNEL_ID} \
-       ${RAMDISK_ID:+--property ramdisk_id=$RAMDISK_ID} < "${IMAGE}"
+        --os-auth-token $token \
+        --os-image-url http://$GLANCE_HOSTPORT \
+        image-create \
+        --name "${IMAGE_NAME%.img}" --is-public True \
+        --container-format $CONTAINER_FORMAT \
+        --disk-format $DISK_FORMAT \
+        ${KERNEL_ID:+--property kernel_id=$KERNEL_ID} \
+        ${RAMDISK_ID:+--property ramdisk_id=$RAMDISK_ID} < "${IMAGE}"
 
     # override DEFAULT_IMAGE_NAME so that tempest can find the image
     # that we just uploaded in glance
@@ -439,15 +439,15 @@
     mac_2=${2:-$BM_SECOND_MAC}
 
     id=$(nova baremetal-node-create \
-       --pm_address="$BM_PM_ADDR" \
-       --pm_user="$BM_PM_USER" \
-       --pm_password="$BM_PM_PASS" \
-       "$BM_HOSTNAME" \
-       "$BM_FLAVOR_CPU" \
-       "$BM_FLAVOR_RAM" \
-       "$BM_FLAVOR_ROOT_DISK" \
-       "$mac_1" \
-       | grep ' id ' | get_field 2 )
+        --pm_address="$BM_PM_ADDR" \
+        --pm_user="$BM_PM_USER" \
+        --pm_password="$BM_PM_PASS" \
+        "$BM_HOSTNAME" \
+        "$BM_FLAVOR_CPU" \
+        "$BM_FLAVOR_RAM" \
+        "$BM_FLAVOR_ROOT_DISK" \
+        "$mac_1" \
+        | grep ' id ' | get_field 2 )
     [ $? -eq 0 ] || [ "$id" ] || die $LINENO "Error adding baremetal node"
     if [ -n "$mac_2" ]; then
         id2=$(nova baremetal-interface-add "$id" "$mac_2" )
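The glance calls above use the `${var:+word}` expansion to pass `--property` options only when the corresponding id was actually captured. A minimal standalone sketch of the idiom; the helper name and values are illustrative, not part of DevStack:

```shell
# Emit --property options only for ids that are set; an unset or empty
# variable expands to nothing, so the caller never sees a dangling flag.
build_image_props() {
    local kernel_id=$1 ramdisk_id=$2
    echo ${kernel_id:+--property kernel_id=$kernel_id} \
        ${ramdisk_id:+--property ramdisk_id=$ramdisk_id}
}

build_image_props abc ""    # prints: --property kernel_id=abc
build_image_props "" ""     # prints an empty line
```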
diff --git a/lib/config b/lib/config
index 6f686e9..91cefe4 100644
--- a/lib/config
+++ b/lib/config
@@ -10,7 +10,7 @@
 #   [[group-name|file-name]]
 #
 # group-name refers to the group of configuration file changes to be processed
-# at a particular time.  These are called phases in ``stack.sh`` but 
+# at a particular time.  These are called phases in ``stack.sh`` but
 # called groups here as these functions are not DevStack-specific.
 #
 # file-name is the destination of the config file
@@ -64,12 +64,12 @@
     [[ -r $file ]] || return 0
 
     $CONFIG_AWK_CMD -v matchgroup=$matchgroup '
-      /^\[\[.+\|.*\]\]/ {
-          gsub("[][]", "", $1);
-          split($1, a, "|");
-          if (a[1] == matchgroup)
-              print a[2]
-      }
+        /^\[\[.+\|.*\]\]/ {
+            gsub("[][]", "", $1);
+            split($1, a, "|");
+            if (a[1] == matchgroup)
+                print a[2]
+        }
     ' $file
 }
 
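The awk program reindented above extracts the file half of each ``[[group|file]]`` header that matches a given group. A standalone sketch against a hypothetical ``local.conf`` fragment; plain ``awk`` stands in for ``$CONFIG_AWK_CMD`` here:

```shell
# Hypothetical local.conf fragment with two meta-section headers
conf=$(mktemp)
cat >"$conf" <<'EOF'
[[post-config|/etc/nova/nova.conf]]
[DEFAULT]
foo = bar
[[post-install|/etc/glance/glance-api.conf]]
EOF

# Print the file name of every [[group|file]] header whose group matches
get_meta_section_files() {
    local file=$1 matchgroup=$2
    awk -v matchgroup=$matchgroup '
        /^\[\[.+\|.*\]\]/ {
            gsub("[][]", "", $1);
            split($1, a, "|");
            if (a[1] == matchgroup)
                print a[2]
        }
    ' $file
}

out=$(get_meta_section_files "$conf" post-config)
echo "$out"    # prints /etc/nova/nova.conf
rm -f "$conf"
```

Plain ``[DEFAULT]``-style section headers do not match the ``^\[\[`` pattern, so only the meta-section lines are considered.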
diff --git a/lib/glance b/lib/glance
index c6f11d0..75e3dd0 100644
--- a/lib/glance
+++ b/lib/glance
@@ -194,7 +194,7 @@
     screen_it g-api "cd $GLANCE_DIR; $GLANCE_BIN_DIR/glance-api --config-file=$GLANCE_CONF_DIR/glance-api.conf"
     echo "Waiting for g-api ($GLANCE_HOSTPORT) to start..."
     if ! timeout $SERVICE_TIMEOUT sh -c "while ! wget --no-proxy -q -O- http://$GLANCE_HOSTPORT; do sleep 1; done"; then
-      die $LINENO "g-api did not start"
+        die $LINENO "g-api did not start"
     fi
 }
 
diff --git a/lib/ironic b/lib/ironic
index f3b4a72..649c1c2 100644
--- a/lib/ironic
+++ b/lib/ironic
@@ -11,6 +11,7 @@
 # ``stack.sh`` calls the entry points in this order:
 #
 # install_ironic
+# install_ironicclient
 # configure_ironic
 # init_ironic
 # start_ironic
@@ -27,6 +28,7 @@
 
 # Set up default directories
 IRONIC_DIR=$DEST/ironic
+IRONICCLIENT_DIR=$DEST/python-ironicclient
 IRONIC_AUTH_CACHE_DIR=${IRONIC_AUTH_CACHE_DIR:-/var/cache/ironic}
 IRONIC_CONF_DIR=${IRONIC_CONF_DIR:-/etc/ironic}
 IRONIC_CONF_FILE=$IRONIC_CONF_DIR/ironic.conf
@@ -45,6 +47,18 @@
 # Functions
 # ---------
 
+# install_ironic() - Collect source and prepare
+function install_ironic() {
+    git_clone $IRONIC_REPO $IRONIC_DIR $IRONIC_BRANCH
+    setup_develop $IRONIC_DIR
+}
+
+# install_ironicclient() - Collect sources and prepare
+function install_ironicclient() {
+    git_clone $IRONICCLIENT_REPO $IRONICCLIENT_DIR $IRONICCLIENT_BRANCH
+    setup_develop $IRONICCLIENT_DIR
+}
+
 # cleanup_ironic() - Remove residual data files, anything left over from previous
 # runs that would need to be cleaned up.
 function cleanup_ironic() {
@@ -170,12 +184,6 @@
     create_ironic_accounts
 }
 
-# install_ironic() - Collect source and prepare
-function install_ironic() {
-    git_clone $IRONIC_REPO $IRONIC_DIR $IRONIC_BRANCH
-    setup_develop $IRONIC_DIR
-}
-
 # start_ironic() - Start running processes, including screen
 function start_ironic() {
     # Start Ironic API server, if enabled.
@@ -195,7 +203,7 @@
     screen_it ir-api "cd $IRONIC_DIR; $IRONIC_BIN_DIR/ironic-api --config-file=$IRONIC_CONF_FILE"
     echo "Waiting for ir-api ($IRONIC_HOSTPORT) to start..."
     if ! timeout $SERVICE_TIMEOUT sh -c "while ! wget --no-proxy -q -O- http://$IRONIC_HOSTPORT; do sleep 1; done"; then
-      die $LINENO "ir-api did not start"
+        die $LINENO "ir-api did not start"
     fi
 }
 
diff --git a/lib/keystone b/lib/keystone
index c93a436..beddb1c 100755
--- a/lib/keystone
+++ b/lib/keystone
@@ -373,7 +373,7 @@
 
     echo "Waiting for keystone to start..."
     if ! timeout $SERVICE_TIMEOUT sh -c "while ! curl --noproxy '*' -s http://$SERVICE_HOST:$service_port/v$IDENTITY_API_VERSION/ >/dev/null; do sleep 1; done"; then
-      die $LINENO "keystone did not start"
+        die $LINENO "keystone did not start"
     fi
 
     # Start proxies if enabled
diff --git a/lib/neutron b/lib/neutron
index 778717d..44fb9e1 100644
--- a/lib/neutron
+++ b/lib/neutron
@@ -79,8 +79,8 @@
 # Support entry points installation of console scripts
 if [[ -d $NEUTRON_DIR/bin/neutron-server ]]; then
     NEUTRON_BIN_DIR=$NEUTRON_DIR/bin
-     else
-NEUTRON_BIN_DIR=$(get_python_exec_prefix)
+else
+    NEUTRON_BIN_DIR=$(get_python_exec_prefix)
 fi
 
 NEUTRON_CONF_DIR=/etc/neutron
@@ -373,7 +373,7 @@
                 iniset $Q_L3_CONF_FILE DEFAULT router_id $ROUTER_ID
             fi
         fi
-   fi
+    fi
 }
 
 # init_neutron() - Initialize databases, etc.
@@ -404,7 +404,7 @@
     fi
 
     if is_service_enabled q-lbaas; then
-       neutron_agent_lbaas_install_agent_packages
+        neutron_agent_lbaas_install_agent_packages
     fi
 }
 
@@ -414,13 +414,13 @@
     local cfg_file
     local CFG_FILE_OPTIONS="--config-file $NEUTRON_CONF --config-file /$Q_PLUGIN_CONF_FILE"
     for cfg_file in ${Q_PLUGIN_EXTRA_CONF_FILES[@]}; do
-         CFG_FILE_OPTIONS+=" --config-file /$cfg_file"
+        CFG_FILE_OPTIONS+=" --config-file /$cfg_file"
     done
     # Start the Neutron service
     screen_it q-svc "cd $NEUTRON_DIR && python $NEUTRON_BIN_DIR/neutron-server $CFG_FILE_OPTIONS"
     echo "Waiting for Neutron to start..."
     if ! timeout $SERVICE_TIMEOUT sh -c "while ! wget --no-proxy -q -O- http://$Q_HOST:$Q_PORT; do sleep 1; done"; then
-      die $LINENO "Neutron did not start"
+        die $LINENO "Neutron did not start"
     fi
 }
 
@@ -712,9 +712,9 @@
     # Set up ``rootwrap.conf``, pointing to ``$NEUTRON_CONF_DIR/rootwrap.d``
     # location moved in newer versions, prefer new location
     if test -r $NEUTRON_DIR/etc/neutron/rootwrap.conf; then
-      sudo cp -p $NEUTRON_DIR/etc/neutron/rootwrap.conf $Q_RR_CONF_FILE
+        sudo cp -p $NEUTRON_DIR/etc/neutron/rootwrap.conf $Q_RR_CONF_FILE
     else
-      sudo cp -p $NEUTRON_DIR/etc/rootwrap.conf $Q_RR_CONF_FILE
+        sudo cp -p $NEUTRON_DIR/etc/rootwrap.conf $Q_RR_CONF_FILE
     fi
     sudo sed -e "s:^filters_path=.*$:filters_path=$Q_CONF_ROOTWRAP_D:" -i $Q_RR_CONF_FILE
     sudo chown root:root $Q_RR_CONF_FILE
@@ -848,11 +848,11 @@
 # please refer to ``lib/neutron_thirdparty/README.md`` for details
 NEUTRON_THIRD_PARTIES=""
 for f in $TOP_DIR/lib/neutron_thirdparty/*; do
-     third_party=$(basename $f)
-     if is_service_enabled $third_party; then
-         source $TOP_DIR/lib/neutron_thirdparty/$third_party
-         NEUTRON_THIRD_PARTIES="$NEUTRON_THIRD_PARTIES,$third_party"
-     fi
+    third_party=$(basename $f)
+    if is_service_enabled $third_party; then
+        source $TOP_DIR/lib/neutron_thirdparty/$third_party
+        NEUTRON_THIRD_PARTIES="$NEUTRON_THIRD_PARTIES,$third_party"
+    fi
 done
 
 function _neutron_third_party_do() {
diff --git a/lib/neutron_plugins/midonet b/lib/neutron_plugins/midonet
index 193055f..cf45a9d 100644
--- a/lib/neutron_plugins/midonet
+++ b/lib/neutron_plugins/midonet
@@ -37,14 +37,26 @@
     iniset $Q_DHCP_CONF_FILE DEFAULT interface_driver $DHCP_INTERFACE_DRIVER
     iniset $Q_DHCP_CONF_FILE DEFAULT use_namespaces True
     iniset $Q_DHCP_CONF_FILE DEFAULT enable_isolated_metadata True
+    if [[ "$MIDONET_API_URI" != "" ]]; then
+        iniset $Q_DHCP_CONF_FILE MIDONET midonet_uri "$MIDONET_API_URI"
+    fi
+    if [[ "$MIDONET_USERNAME" != "" ]]; then
+        iniset $Q_DHCP_CONF_FILE MIDONET username "$MIDONET_USERNAME"
+    fi
+    if [[ "$MIDONET_PASSWORD" != "" ]]; then
+        iniset $Q_DHCP_CONF_FILE MIDONET password "$MIDONET_PASSWORD"
+    fi
+    if [[ "$MIDONET_PROJECT_ID" != "" ]]; then
+        iniset $Q_DHCP_CONF_FILE MIDONET project_id "$MIDONET_PROJECT_ID"
+    fi
 }
 
 function neutron_plugin_configure_l3_agent() {
-   die $LINENO "q-l3 must not be executed with MidoNet plugin!"
+    die $LINENO "q-l3 must not be executed with MidoNet plugin!"
 }
 
 function neutron_plugin_configure_plugin_agent() {
-   die $LINENO "q-agt must not be executed with MidoNet plugin!"
+    die $LINENO "q-agt must not be executed with MidoNet plugin!"
 }
 
 function neutron_plugin_configure_service() {
diff --git a/lib/neutron_plugins/nec b/lib/neutron_plugins/nec
index 79d41db..3806c32 100644
--- a/lib/neutron_plugins/nec
+++ b/lib/neutron_plugins/nec
@@ -101,15 +101,15 @@
     local id=0
     GRE_LOCAL_IP=${GRE_LOCAL_IP:-$HOST_IP}
     if [ -n "$GRE_REMOTE_IPS" ]; then
-         for ip in ${GRE_REMOTE_IPS//:/ }
-         do
-             if [[ "$ip" == "$GRE_LOCAL_IP" ]]; then
-                 continue
-             fi
-             sudo ovs-vsctl --no-wait add-port $bridge gre$id -- \
-                 set Interface gre$id type=gre options:remote_ip=$ip
-             id=`expr $id + 1`
-         done
+        for ip in ${GRE_REMOTE_IPS//:/ }
+        do
+            if [[ "$ip" == "$GRE_LOCAL_IP" ]]; then
+                continue
+            fi
+            sudo ovs-vsctl --no-wait add-port $bridge gre$id -- \
+                set Interface gre$id type=gre options:remote_ip=$ip
+            id=`expr $id + 1`
+        done
     fi
 }
 
diff --git a/lib/neutron_plugins/nicira b/lib/neutron_plugins/nicira
index 082c846..7c99b69 100644
--- a/lib/neutron_plugins/nicira
+++ b/lib/neutron_plugins/nicira
@@ -58,13 +58,13 @@
 }
 
 function neutron_plugin_configure_l3_agent() {
-   # Nicira plugin does not run L3 agent
-   die $LINENO "q-l3 should must not be executed with Nicira plugin!"
+    # Nicira plugin does not run L3 agent
+    die $LINENO "q-l3 must not be executed with Nicira plugin!"
 }
 
 function neutron_plugin_configure_plugin_agent() {
-   # Nicira plugin does not run L2 agent
-   die $LINENO "q-agt must not be executed with Nicira plugin!"
+    # Nicira plugin does not run L2 agent
+    die $LINENO "q-agt must not be executed with Nicira plugin!"
 }
 
 function neutron_plugin_configure_service() {
diff --git a/lib/neutron_plugins/ovs_base b/lib/neutron_plugins/ovs_base
index 2666d8e..1214f3b 100644
--- a/lib/neutron_plugins/ovs_base
+++ b/lib/neutron_plugins/ovs_base
@@ -73,13 +73,7 @@
 }
 
 function _neutron_ovs_base_configure_nova_vif_driver() {
-    # The hybrid VIF driver needs to be specified when Neutron Security Group
-    # is enabled (until vif_security attributes are supported in VIF extension)
-    if [[ "$Q_USE_SECGROUP" == "True" ]]; then
-        NOVA_VIF_DRIVER=${NOVA_VIF_DRIVER:-"nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver"}
-    else
-        NOVA_VIF_DRIVER=${NOVA_VIF_DRIVER:-"nova.virt.libvirt.vif.LibvirtGenericVIFDriver"}
-    fi
+    NOVA_VIF_DRIVER=${NOVA_VIF_DRIVER:-"nova.virt.libvirt.vif.LibvirtGenericVIFDriver"}
 }
 
 # Restore xtrace
diff --git a/lib/neutron_thirdparty/trema b/lib/neutron_thirdparty/trema
index 09dc46b..5b5c459 100644
--- a/lib/neutron_thirdparty/trema
+++ b/lib/neutron_thirdparty/trema
@@ -66,8 +66,8 @@
 
     cp $TREMA_SS_DIR/sliceable_switch_null.conf $TREMA_SS_CONFIG
     sed -i -e "s|^\$apps_dir.*$|\$apps_dir = \"$TREMA_DIR/apps\"|" \
-           -e "s|^\$db_dir.*$|\$db_dir = \"$TREMA_SS_DB_DIR\"|" \
-           $TREMA_SS_CONFIG
+        -e "s|^\$db_dir.*$|\$db_dir = \"$TREMA_SS_DB_DIR\"|" \
+        $TREMA_SS_CONFIG
 }
 
 function gem_install() {
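The reindented sed call above applies two substitutions in a single in-place pass. A minimal sketch of the same pattern on a hypothetical config file (GNU sed assumed for ``-i``; the paths are placeholders):

```shell
f=$(mktemp)
printf '$apps_dir = "/old/apps"\n$db_dir = "/old/db"\n' >"$f"

# Two -e expressions, one in-place edit; \$ keeps the dollar sign
# literal in both the pattern and the replacement.
sed -i -e 's|^\$apps_dir.*$|\$apps_dir = "/new/apps"|' \
    -e 's|^\$db_dir.*$|\$db_dir = "/new/db"|' \
    "$f"

out=$(cat "$f")
echo "$out"
rm -f "$f"
```

Using ``|`` as the delimiter avoids escaping the slashes in the replacement paths.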
diff --git a/lib/nova b/lib/nova
index 5ff5099..809f8e5 100644
--- a/lib/nova
+++ b/lib/nova
@@ -377,6 +377,7 @@
     iniset $NOVA_CONF DEFAULT ec2_workers "4"
     iniset $NOVA_CONF DEFAULT metadata_workers "4"
     iniset $NOVA_CONF DEFAULT sql_connection `database_connection_url nova`
+    iniset $NOVA_CONF DEFAULT fatal_deprecations "True"
     iniset $NOVA_CONF DEFAULT instance_name_template "${INSTANCE_NAME_PREFIX}%08x"
     iniset $NOVA_CONF osapi_v3 enabled "True"
 
@@ -464,27 +465,27 @@
     fi
 
     if is_service_enabled n-novnc || is_service_enabled n-xvnc; then
-      # Address on which instance vncservers will listen on compute hosts.
-      # For multi-host, this should be the management ip of the compute host.
-      VNCSERVER_LISTEN=${VNCSERVER_LISTEN=127.0.0.1}
-      VNCSERVER_PROXYCLIENT_ADDRESS=${VNCSERVER_PROXYCLIENT_ADDRESS=127.0.0.1}
-      iniset $NOVA_CONF DEFAULT vnc_enabled true
-      iniset $NOVA_CONF DEFAULT vncserver_listen "$VNCSERVER_LISTEN"
-      iniset $NOVA_CONF DEFAULT vncserver_proxyclient_address "$VNCSERVER_PROXYCLIENT_ADDRESS"
+        # Address on which instance vncservers will listen on compute hosts.
+        # For multi-host, this should be the management ip of the compute host.
+        VNCSERVER_LISTEN=${VNCSERVER_LISTEN=127.0.0.1}
+        VNCSERVER_PROXYCLIENT_ADDRESS=${VNCSERVER_PROXYCLIENT_ADDRESS=127.0.0.1}
+        iniset $NOVA_CONF DEFAULT vnc_enabled true
+        iniset $NOVA_CONF DEFAULT vncserver_listen "$VNCSERVER_LISTEN"
+        iniset $NOVA_CONF DEFAULT vncserver_proxyclient_address "$VNCSERVER_PROXYCLIENT_ADDRESS"
     else
-      iniset $NOVA_CONF DEFAULT vnc_enabled false
+        iniset $NOVA_CONF DEFAULT vnc_enabled false
     fi
 
     if is_service_enabled n-spice; then
-      # Address on which instance spiceservers will listen on compute hosts.
-      # For multi-host, this should be the management ip of the compute host.
-      SPICESERVER_PROXYCLIENT_ADDRESS=${SPICESERVER_PROXYCLIENT_ADDRESS=127.0.0.1}
-      SPICESERVER_LISTEN=${SPICESERVER_LISTEN=127.0.0.1}
-      iniset $NOVA_CONF spice enabled true
-      iniset $NOVA_CONF spice server_listen "$SPICESERVER_LISTEN"
-      iniset $NOVA_CONF spice server_proxyclient_address "$SPICESERVER_PROXYCLIENT_ADDRESS"
+        # Address on which instance spiceservers will listen on compute hosts.
+        # For multi-host, this should be the management ip of the compute host.
+        SPICESERVER_PROXYCLIENT_ADDRESS=${SPICESERVER_PROXYCLIENT_ADDRESS=127.0.0.1}
+        SPICESERVER_LISTEN=${SPICESERVER_LISTEN=127.0.0.1}
+        iniset $NOVA_CONF spice enabled true
+        iniset $NOVA_CONF spice server_listen "$SPICESERVER_LISTEN"
+        iniset $NOVA_CONF spice server_proxyclient_address "$SPICESERVER_PROXYCLIENT_ADDRESS"
     else
-      iniset $NOVA_CONF spice enabled false
+        iniset $NOVA_CONF spice enabled false
     fi
 
     iniset $NOVA_CONF DEFAULT ec2_dmz_host "$EC2_DMZ_HOST"
@@ -601,7 +602,7 @@
     screen_it n-api "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-api"
     echo "Waiting for nova-api to start..."
     if ! wait_for_service $SERVICE_TIMEOUT http://$SERVICE_HOST:$service_port; then
-      die $LINENO "nova-api did not start"
+        die $LINENO "nova-api did not start"
     fi
 
     # Start proxies if enabled
@@ -610,8 +611,28 @@
     fi
 }
 
+# start_nova_compute() - Start the compute process
+function start_nova_compute() {
+    NOVA_CONF_BOTTOM=$NOVA_CONF
+
+    if [[ "$VIRT_DRIVER" = 'libvirt' ]]; then
+        # The group **$LIBVIRT_GROUP** is added to the current user in this script.
+        # Use 'sg' to execute nova-compute as a member of the **$LIBVIRT_GROUP** group.
+        screen_it n-cpu "cd $NOVA_DIR && sg $LIBVIRT_GROUP '$NOVA_BIN_DIR/nova-compute --config-file $NOVA_CONF_BOTTOM'"
+    elif [[ "$VIRT_DRIVER" = 'fake' ]]; then
+        for i in `seq 1 $NUMBER_FAKE_NOVA_COMPUTE`; do
+            screen_it n-cpu "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-compute --config-file $NOVA_CONF_BOTTOM"
+        done
+    else
+        if is_service_enabled n-cpu && [[ -r $NOVA_PLUGINS/hypervisor-$VIRT_DRIVER ]]; then
+            start_nova_hypervisor
+        fi
+        screen_it n-cpu "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-compute --config-file $NOVA_CONF_BOTTOM"
+    fi
+}
+
 # start_nova() - Start running processes, including screen
-function start_nova() {
+function start_nova_rest() {
     NOVA_CONF_BOTTOM=$NOVA_CONF
 
     # ``screen_it`` checks ``is_service_enabled``, it is not needed here
@@ -624,21 +645,6 @@
         screen_it n-cell-child "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-cells --config-file $NOVA_CELLS_CONF"
     fi
 
-    if [[ "$VIRT_DRIVER" = 'libvirt' ]]; then
-        # The group **$LIBVIRT_GROUP** is added to the current user in this script.
-        # Use 'sg' to execute nova-compute as a member of the **$LIBVIRT_GROUP** group.
-        screen_it n-cpu "cd $NOVA_DIR && sg $LIBVIRT_GROUP '$NOVA_BIN_DIR/nova-compute --config-file $NOVA_CONF_BOTTOM'"
-    elif [[ "$VIRT_DRIVER" = 'fake' ]]; then
-       for i in `seq 1 $NUMBER_FAKE_NOVA_COMPUTE`
-       do
-           screen_it n-cpu "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-compute --config-file $NOVA_CONF_BOTTOM"
-       done
-    else
-        if is_service_enabled n-cpu && [[ -r $NOVA_PLUGINS/hypervisor-$VIRT_DRIVER ]]; then
-            start_nova_hypervisor
-        fi
-        screen_it n-cpu "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-compute --config-file $NOVA_CONF_BOTTOM"
-    fi
     screen_it n-crt "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-cert"
     screen_it n-net "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-network --config-file $NOVA_CONF_BOTTOM"
     screen_it n-sch "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-scheduler --config-file $NOVA_CONF_BOTTOM"
@@ -655,6 +661,11 @@
         screen_it n-obj "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-objectstore"
 }
 
+function start_nova() {
+    start_nova_compute
+    start_nova_rest
+}
+
 # stop_nova() - Stop running processes (non-screen)
 function stop_nova() {
     # Kill the nova screen windows
diff --git a/lib/nova_plugins/hypervisor-baremetal b/lib/nova_plugins/hypervisor-baremetal
index 4e7c173..660c977 100644
--- a/lib/nova_plugins/hypervisor-baremetal
+++ b/lib/nova_plugins/hypervisor-baremetal
@@ -61,8 +61,8 @@
 
     # Define extra baremetal nova conf flags by defining the array ``EXTRA_BAREMETAL_OPTS``.
     for I in "${EXTRA_BAREMETAL_OPTS[@]}"; do
-       # Attempt to convert flags to options
-       iniset $NOVA_CONF baremetal ${I/=/ }
+        # Attempt to convert flags to options
+        iniset $NOVA_CONF baremetal ${I/=/ }
     done
 }
 
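The ``${I/=/ }`` substitution reindented above converts a ``key=value`` flag into the two words that ``iniset`` expects. A minimal sketch; the helper name and sample flag are illustrative:

```shell
# Replace the first '=' with a space so "key=value" becomes "key value";
# any further '=' characters stay inside the value.
split_flag() {
    local I=$1
    echo ${I/=/ }
}

split_flag power_manager=fake_pm    # prints: power_manager fake_pm
```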
diff --git a/lib/nova_plugins/hypervisor-docker b/lib/nova_plugins/hypervisor-docker
index 4c8fc27..427554b 100644
--- a/lib/nova_plugins/hypervisor-docker
+++ b/lib/nova_plugins/hypervisor-docker
@@ -72,7 +72,7 @@
     fi
 
     # Make sure Docker is installed
-    if ! is_package_installed lxc-docker; then
+    if ! is_package_installed lxc-docker-${DOCKER_PACKAGE_VERSION}; then
         die $LINENO "Docker is not installed.  Please run tools/docker/install_docker.sh"
     fi
 
diff --git a/lib/nova_plugins/hypervisor-libvirt b/lib/nova_plugins/hypervisor-libvirt
index caf0296..6fae0b1 100644
--- a/lib/nova_plugins/hypervisor-libvirt
+++ b/lib/nova_plugins/hypervisor-libvirt
@@ -82,10 +82,10 @@
             sudo mkdir -p $rules_dir
             sudo bash -c "cat <<EOF > $rules_dir/50-libvirt-$STACK_USER.rules
 polkit.addRule(function(action, subject) {
-     if (action.id == 'org.libvirt.unix.manage' &&
-         subject.user == '"$STACK_USER"') {
-         return polkit.Result.YES;
-     }
+    if (action.id == 'org.libvirt.unix.manage' &&
+        subject.user == '"$STACK_USER"') {
+        return polkit.Result.YES;
+    }
 });
 EOF"
             unset rules_dir
diff --git a/lib/rpc_backend b/lib/rpc_backend
index 44c1e44..a323d64 100644
--- a/lib/rpc_backend
+++ b/lib/rpc_backend
@@ -102,9 +102,9 @@
         if is_fedora; then
             install_package qpid-cpp-server
             if [[ $DISTRO =~ (rhel6) ]]; then
-               # RHEL6 leaves "auth=yes" in /etc/qpidd.conf, it needs to
-               # be no or you get GSS authentication errors as it
-               # attempts to default to this.
+                # RHEL6 leaves "auth=yes" in /etc/qpidd.conf, it needs to
+                # be no or you get GSS authentication errors as it
+                # attempts to default to this.
                 sudo sed -i.bak 's/^auth=yes$/auth=no/' /etc/qpidd.conf
             fi
         elif is_ubuntu; then
diff --git a/lib/swift b/lib/swift
index 6ab43c4..8726f1e 100644
--- a/lib/swift
+++ b/lib/swift
@@ -104,17 +104,17 @@
 
 # cleanup_swift() - Remove residual data files
 function cleanup_swift() {
-   rm -f ${SWIFT_CONF_DIR}{*.builder,*.ring.gz,backups/*.builder,backups/*.ring.gz}
-   if egrep -q ${SWIFT_DATA_DIR}/drives/sdb1 /proc/mounts; then
-      sudo umount ${SWIFT_DATA_DIR}/drives/sdb1
-   fi
-   if [[ -e ${SWIFT_DISK_IMAGE} ]]; then
-      rm ${SWIFT_DISK_IMAGE}
-   fi
-   rm -rf ${SWIFT_DATA_DIR}/run/
-   if is_apache_enabled_service swift; then
-       _cleanup_swift_apache_wsgi
-   fi
+    rm -f ${SWIFT_CONF_DIR}{*.builder,*.ring.gz,backups/*.builder,backups/*.ring.gz}
+    if egrep -q ${SWIFT_DATA_DIR}/drives/sdb1 /proc/mounts; then
+        sudo umount ${SWIFT_DATA_DIR}/drives/sdb1
+    fi
+    if [[ -e ${SWIFT_DISK_IMAGE} ]]; then
+        rm ${SWIFT_DISK_IMAGE}
+    fi
+    rm -rf ${SWIFT_DATA_DIR}/run/
+    if is_apache_enabled_service swift; then
+        _cleanup_swift_apache_wsgi
+    fi
 }
 
 # _cleanup_swift_apache_wsgi() - Remove wsgi files, disable and remove apache vhost file
@@ -192,7 +192,7 @@
 
         sudo cp ${SWIFT_DIR}/examples/apache2/account-server.template ${apache_vhost_dir}/account-server-${node_number}
         sudo sed -e "
-             /^#/d;/^$/d;
+            /^#/d;/^$/d;
             s/%PORT%/$account_port/g;
             s/%SERVICENAME%/account-server-${node_number}/g;
             s/%APACHE_NAME%/${APACHE_NAME}/g;
@@ -202,7 +202,7 @@
 
         sudo cp ${SWIFT_DIR}/examples/wsgi/account-server.wsgi.template ${SWIFT_APACHE_WSGI_DIR}/account-server-${node_number}.wsgi
         sudo sed -e "
-             /^#/d;/^$/d;
+            /^#/d;/^$/d;
             s/%SERVICECONF%/account-server\/${node_number}.conf/g;
         " -i ${SWIFT_APACHE_WSGI_DIR}/account-server-${node_number}.wsgi
     done
@@ -577,26 +577,26 @@
         return 0
     fi
 
-   # By default with only one replica we are launching the proxy,
-   # container, account and object server in screen in foreground and
-   # other services in background. If we have SWIFT_REPLICAS set to something
-   # greater than one we first spawn all the swift services then kill the proxy
-   # service so we can run it in foreground in screen.  ``swift-init ...
-   # {stop|restart}`` exits with '1' if no servers are running, ignore it just
-   # in case
-   swift-init --run-dir=${SWIFT_DATA_DIR}/run all restart || true
-   if [[ ${SWIFT_REPLICAS} == 1 ]]; then
+    # By default with only one replica we are launching the proxy,
+    # container, account and object server in screen in foreground and
+    # other services in background. If we have SWIFT_REPLICAS set to something
+    # greater than one we first spawn all the swift services then kill the proxy
+    # service so we can run it in foreground in screen.  ``swift-init ...
+    # {stop|restart}`` exits with '1' if no servers are running, ignore it just
+    # in case
+    swift-init --run-dir=${SWIFT_DATA_DIR}/run all restart || true
+    if [[ ${SWIFT_REPLICAS} == 1 ]]; then
         todo="object container account"
-   fi
-   for type in proxy ${todo}; do
-       swift-init --run-dir=${SWIFT_DATA_DIR}/run ${type} stop || true
-   done
-   screen_it s-proxy "cd $SWIFT_DIR && $SWIFT_DIR/bin/swift-proxy-server ${SWIFT_CONF_DIR}/proxy-server.conf -v"
-   if [[ ${SWIFT_REPLICAS} == 1 ]]; then
-       for type in object container account; do
-           screen_it s-${type} "cd $SWIFT_DIR && $SWIFT_DIR/bin/swift-${type}-server ${SWIFT_CONF_DIR}/${type}-server/1.conf -v"
-       done
-   fi
+    fi
+    for type in proxy ${todo}; do
+        swift-init --run-dir=${SWIFT_DATA_DIR}/run ${type} stop || true
+    done
+    screen_it s-proxy "cd $SWIFT_DIR && $SWIFT_DIR/bin/swift-proxy-server ${SWIFT_CONF_DIR}/proxy-server.conf -v"
+    if [[ ${SWIFT_REPLICAS} == 1 ]]; then
+        for type in object container account; do
+            screen_it s-${type} "cd $SWIFT_DIR && $SWIFT_DIR/bin/swift-${type}-server ${SWIFT_CONF_DIR}/${type}-server/1.conf -v"
+        done
+    fi
 }
 
 # stop_swift() - Stop running processes (non-screen)
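The `|| true` guard used on the `swift-init` calls above keeps a nonzero exit ("no servers running") from aborting the script. A minimal sketch of the pattern, with `false` standing in for the failing command:

```shell
#!/bin/sh
# ``swift-init ... {stop|restart}`` exits with '1' when no servers are
# running; appending ``|| true`` swallows that exit status so the script
# continues. ``false`` simulates the failing command here.
stop_services() {
    false || true    # guard: nonzero exit is ignored
    echo "continuing"
}
result=$(stop_services)
```

Without the guard, a script running under `set -e` would terminate at the failing command.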
diff --git a/lib/tempest b/lib/tempest
index 9f41608..8e4e521 100644
--- a/lib/tempest
+++ b/lib/tempest
@@ -193,7 +193,7 @@
             # If namespaces are disabled, devstack will create a single
             # public router that tempest should be configured to use.
             public_router_id=$(neutron router-list | awk "/ $Q_ROUTER_NAME / \
-               { print \$2 }")
+                { print \$2 }")
         fi
     fi
 
@@ -328,15 +328,15 @@
     local disk_image="$image_dir/${base_image_name}-blank.img"
     # if the cirros uec downloaded and the system is uec capable
     if [ -f "$kernel" -a -f "$ramdisk" -a -f "$disk_image" -a  "$VIRT_DRIVER" != "openvz" \
-         -a \( "$LIBVIRT_TYPE" != "lxc" -o "$VIRT_DRIVER" != "libvirt" \) ]; then
-       echo "Prepare aki/ari/ami Images"
-       ( #new namespace
-           # tenant:demo ; user: demo
-           source $TOP_DIR/accrc/demo/demo
-           euca-bundle-image -i "$kernel" --kernel true -d "$BOTO_MATERIALS_PATH"
-           euca-bundle-image -i "$ramdisk" --ramdisk true -d "$BOTO_MATERIALS_PATH"
-           euca-bundle-image -i "$disk_image" -d "$BOTO_MATERIALS_PATH"
-       ) 2>&1 </dev/null | cat
+        -a \( "$LIBVIRT_TYPE" != "lxc" -o "$VIRT_DRIVER" != "libvirt" \) ]; then
+        echo "Prepare aki/ari/ami Images"
+        ( #new namespace
+            # tenant:demo ; user: demo
+            source $TOP_DIR/accrc/demo/demo
+            euca-bundle-image -i "$kernel" --kernel true -d "$BOTO_MATERIALS_PATH"
+            euca-bundle-image -i "$ramdisk" --ramdisk true -d "$BOTO_MATERIALS_PATH"
+            euca-bundle-image -i "$disk_image" -d "$BOTO_MATERIALS_PATH"
+        ) 2>&1 </dev/null | cat
     else
         echo "Boto materials are not prepared"
     fi
diff --git a/lib/trove b/lib/trove
index 17c8c99..0a19d03 100644
--- a/lib/trove
+++ b/lib/trove
@@ -45,14 +45,15 @@
     SERVICE_ROLE=$(keystone role-list | awk "/ admin / { print \$2 }")
 
     if [[ "$ENABLED_SERVICES" =~ "trove" ]]; then
-        TROVE_USER=$(keystone user-create --name=trove \
-                                                  --pass="$SERVICE_PASSWORD" \
-                                                  --tenant_id $SERVICE_TENANT \
-                                                  --email=trove@example.com \
-                                                  | grep " id " | get_field 2)
+        TROVE_USER=$(keystone user-create \
+            --name=trove \
+            --pass="$SERVICE_PASSWORD" \
+            --tenant_id $SERVICE_TENANT \
+            --email=trove@example.com \
+            | grep " id " | get_field 2)
         keystone user-role-add --tenant-id $SERVICE_TENANT \
-                               --user-id $TROVE_USER \
-                               --role-id $SERVICE_ROLE
+            --user-id $TROVE_USER \
+            --role-id $SERVICE_ROLE
         if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
             TROVE_SERVICE=$(keystone service-create \
                 --name=trove \
diff --git a/stack.sh b/stack.sh
index aa0efea..5813a8a 100755
--- a/stack.sh
+++ b/stack.sh
@@ -53,7 +53,7 @@
             if [[ -r $TOP_DIR/localrc ]]; then
                 warn $LINENO "localrc and local.conf:[[local]] both exist, using localrc"
             else
-                echo "# Generated file, do not exit" >$TOP_DIR/.localrc.auto
+                echo "# Generated file, do not edit" >$TOP_DIR/.localrc.auto
                 get_meta_section $TOP_DIR/local.conf local $lfile >>$TOP_DIR/.localrc.auto
             fi
         fi
@@ -588,7 +588,9 @@
 source $TOP_DIR/tools/install_prereqs.sh
 
 # Configure an appropriate python environment
-$TOP_DIR/tools/install_pip.sh
+if [[ "$OFFLINE" != "True" ]]; then
+    $TOP_DIR/tools/install_pip.sh
+fi
 
 # Do the ugly hacks for borken packages and distros
 $TOP_DIR/tools/fixup_stuff.sh
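The new `OFFLINE` guard skips the network-dependent pip install. The same conditional pattern, sketched with an illustrative function in place of running `tools/install_pip.sh`:

```shell
#!/bin/sh
# Skip a network-dependent step when OFFLINE=True, mirroring the hunk
# above. The echo stands in for invoking tools/install_pip.sh.
maybe_install_pip() {
    if [ "$OFFLINE" != "True" ]; then
        echo "installing pip"
    else
        echo "offline: skipping pip install"
    fi
}
OFFLINE=True
mode=$(maybe_install_pip)
```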
@@ -732,6 +734,7 @@
 
 if is_service_enabled ir-api ir-cond; then
     install_ironic
+    install_ironicclient
     configure_ironic
 fi
 
@@ -840,7 +843,7 @@
 # If enabled, systat has to start early to track OpenStack service startup.
 if is_service_enabled sysstat;then
     if [[ -n ${SCREEN_LOGDIR} ]]; then
-        screen_it sysstat "sar -o $SCREEN_LOGDIR/$SYSSTAT_FILE $SYSSTAT_INTERVAL"
+        screen_it sysstat "cd ; sar -o $SCREEN_LOGDIR/$SYSSTAT_FILE $SYSSTAT_INTERVAL"
     else
         screen_it sysstat "sar $SYSSTAT_INTERVAL"
     fi
@@ -1015,7 +1018,7 @@
     prepare_baremetal_toolchain
     configure_baremetal_nova_dirs
     if [[ "$BM_USE_FAKE_ENV" = "True" ]]; then
-       create_fake_baremetal_env
+        create_fake_baremetal_env
     fi
 fi
 
@@ -1174,28 +1177,29 @@
 
 if is_service_enabled g-reg; then
     TOKEN=$(keystone token-get | grep ' id ' | get_field 2)
+    die_if_not_set $LINENO TOKEN "Keystone failed to get token"
 
     if is_baremetal; then
-       echo_summary "Creating and uploading baremetal images"
+        echo_summary "Creating and uploading baremetal images"
 
-       # build and upload separate deploy kernel & ramdisk
-       upload_baremetal_deploy $TOKEN
+        # build and upload separate deploy kernel & ramdisk
+        upload_baremetal_deploy $TOKEN
 
-       # upload images, separating out the kernel & ramdisk for PXE boot
-       for image_url in ${IMAGE_URLS//,/ }; do
-           upload_baremetal_image $image_url $TOKEN
-       done
+        # upload images, separating out the kernel & ramdisk for PXE boot
+        for image_url in ${IMAGE_URLS//,/ }; do
+            upload_baremetal_image $image_url $TOKEN
+        done
     else
-       echo_summary "Uploading images"
+        echo_summary "Uploading images"
 
-       # Option to upload legacy ami-tty, which works with xenserver
-       if [[ -n "$UPLOAD_LEGACY_TTY" ]]; then
-           IMAGE_URLS="${IMAGE_URLS:+${IMAGE_URLS},}https://github.com/downloads/citrix-openstack/warehouse/tty.tgz"
-       fi
+        # Option to upload legacy ami-tty, which works with xenserver
+        if [[ -n "$UPLOAD_LEGACY_TTY" ]]; then
+            IMAGE_URLS="${IMAGE_URLS:+${IMAGE_URLS},}https://github.com/downloads/citrix-openstack/warehouse/tty.tgz"
+        fi
 
-       for image_url in ${IMAGE_URLS//,/ }; do
-           upload_image $image_url $TOKEN
-       done
+        for image_url in ${IMAGE_URLS//,/ }; do
+            upload_image $image_url $TOKEN
+        done
     fi
 fi
 
@@ -1207,7 +1211,7 @@
 if is_service_enabled nova && is_baremetal; then
     # create special flavor for baremetal if we know what images to associate
     [[ -n "$BM_DEPLOY_KERNEL_ID" ]] && [[ -n "$BM_DEPLOY_RAMDISK_ID" ]] && \
-       create_baremetal_flavor $BM_DEPLOY_KERNEL_ID $BM_DEPLOY_RAMDISK_ID
+        create_baremetal_flavor $BM_DEPLOY_KERNEL_ID $BM_DEPLOY_RAMDISK_ID
 
     # otherwise user can manually add it later by calling nova-baremetal-manage
     [[ -n "$BM_FIRST_MAC" ]] && add_baremetal_node
@@ -1222,14 +1226,14 @@
     fi
     # ensure callback daemon is running
     sudo pkill nova-baremetal-deploy-helper || true
-    screen_it baremetal "nova-baremetal-deploy-helper"
+    screen_it baremetal "cd ; nova-baremetal-deploy-helper"
 fi
 
 # Save some values we generated for later use
 CURRENT_RUN_TIME=$(date "+$TIMESTAMP_FORMAT")
 echo "# $CURRENT_RUN_TIME" >$TOP_DIR/.stackenv
 for i in BASE_SQL_CONN ENABLED_SERVICES HOST_IP LOGFILE \
-  SERVICE_HOST SERVICE_PROTOCOL STACK_USER TLS_IP; do
+    SERVICE_HOST SERVICE_PROTOCOL STACK_USER TLS_IP; do
     echo $i=${!i} >>$TOP_DIR/.stackenv
 done
 
diff --git a/stackrc b/stackrc
index 3f740b5..0151672 100644
--- a/stackrc
+++ b/stackrc
@@ -104,6 +104,10 @@
 IRONIC_REPO=${IRONIC_REPO:-${GIT_BASE}/openstack/ironic.git}
 IRONIC_BRANCH=${IRONIC_BRANCH:-master}
 
+# ironic client
+IRONICCLIENT_REPO=${IRONICCLIENT_REPO:-${GIT_BASE}/openstack/python-ironicclient.git}
+IRONICCLIENT_BRANCH=${IRONICCLIENT_BRANCH:-master}
+
 # unified auth system (manages accounts/tokens)
 KEYSTONE_REPO=${KEYSTONE_REPO:-${GIT_BASE}/openstack/keystone.git}
 KEYSTONE_BRANCH=${KEYSTONE_BRANCH:-master}
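The new `IRONICCLIENT_*` variables follow the `${VAR:-default}` idiom used throughout `stackrc`: a value the user exports (e.g. from `localrc`) wins, otherwise the default applies. A sketch, with the `GIT_BASE` default shown for illustration:

```shell
#!/bin/sh
# ${VAR:-default}: use VAR if set and non-empty, otherwise the default.
# GIT_BASE default here is illustrative, not authoritative.
GIT_BASE=${GIT_BASE:-git://git.openstack.org}
IRONICCLIENT_REPO=${IRONICCLIENT_REPO:-${GIT_BASE}/openstack/python-ironicclient.git}
# An unset variable falls through to its fallback:
demo_branch=${UNSET_EXAMPLE_VAR:-master}
```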
diff --git a/tests/functions.sh b/tests/functions.sh
index 7d486d4..40376aa 100755
--- a/tests/functions.sh
+++ b/tests/functions.sh
@@ -122,16 +122,16 @@
 
 # test empty option
 if ini_has_option test.ini ddd empty; then
-   echo "OK: ddd.empty present"
+    echo "OK: ddd.empty present"
 else
-   echo "ini_has_option failed: ddd.empty not found"
+    echo "ini_has_option failed: ddd.empty not found"
 fi
 
 # test non-empty option
 if ini_has_option test.ini bbb handlers; then
-   echo "OK: bbb.handlers present"
+    echo "OK: bbb.handlers present"
 else
-   echo "ini_has_option failed: bbb.handlers not found"
+    echo "ini_has_option failed: bbb.handlers not found"
 fi
 
 # test changing empty option
diff --git a/tools/bash8.py b/tools/bash8.py
index 82a1010..edf7da4 100755
--- a/tools/bash8.py
+++ b/tools/bash8.py
@@ -55,10 +55,41 @@
             print_error('E003: Indent not multiple of 4', line)
 
 
+def starts_multiline(line):
+    m = re.search("[^<]<<\s*(?P<token>\w+)", line)
+    if m:
+        return m.group('token')
+    else:
+        return False
+
+
+def end_of_multiline(line, token):
+    if token:
+        return re.search("^%s\s*$" % token, line) is not None
+    return False
+
+
 def check_files(files):
+    in_multiline = False
+    logical_line = ""
+    token = False
     for line in fileinput.input(files):
-        check_no_trailing_whitespace(line)
-        check_indents(line)
+        # NOTE(sdague): multiline processing of heredocs is interesting
+        if not in_multiline:
+            logical_line = line
+            token = starts_multiline(line)
+            if token:
+                in_multiline = True
+                continue
+        else:
+            logical_line = logical_line + line
+            if not end_of_multiline(line, token):
+                continue
+            else:
+                in_multiline = False
+
+        check_no_trailing_whitespace(logical_line)
+        check_indents(logical_line)
 
 
 def get_options():
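The `bash8.py` change above folds a heredoc body into one logical line so the indentation checks don't flag it. The reason is visible in plain shell: a heredoc body is literal data whose indentation must be preserved verbatim, so style rules like "indent in multiples of 4" cannot apply to it:

```shell
#!/bin/sh
# A heredoc body is emitted exactly as written -- its indentation is data,
# not code style, which is why the checker must skip it.
make_config() {
    cat <<EOF
   three spaces, preserved verbatim
EOF
}
body=$(make_config)
```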
diff --git a/tools/build_bm_multi.sh b/tools/build_bm_multi.sh
index 52b9b4e..328d576 100755
--- a/tools/build_bm_multi.sh
+++ b/tools/build_bm_multi.sh
@@ -22,8 +22,8 @@
 if [ ! "$TERMINATE" = "1" ]; then
     echo "Waiting for head node ($HEAD_HOST) to start..."
     if ! timeout 60 sh -c "while ! wget -q -O- http://$HEAD_HOST | grep -q username; do sleep 1; done"; then
-      echo "Head node did not start"
-      exit 1
+        echo "Head node did not start"
+        exit 1
     fi
 fi
 
diff --git a/tools/build_uec.sh b/tools/build_uec.sh
index 6c4a26c..bce051a 100755
--- a/tools/build_uec.sh
+++ b/tools/build_uec.sh
@@ -229,8 +229,8 @@
 
 # (re)start a metadata service
 (
-  pid=`lsof -iTCP@192.168.$GUEST_NETWORK.1:4567 -n | awk '{print $2}' | tail -1`
-  [ -z "$pid" ] || kill -9 $pid
+    pid=`lsof -iTCP@192.168.$GUEST_NETWORK.1:4567 -n | awk '{print $2}' | tail -1`
+    [ -z "$pid" ] || kill -9 $pid
 )
 cd $vm_dir/uec
 python meta.py 192.168.$GUEST_NETWORK.1:4567 &
@@ -268,7 +268,7 @@
     sleep 2
 
     while [ ! -e "$vm_dir/console.log" ]; do
-      sleep 1
+        sleep 1
     done
 
     tail -F $vm_dir/console.log &
diff --git a/tools/create_userrc.sh b/tools/create_userrc.sh
index 44b0f6b..8383fe7 100755
--- a/tools/create_userrc.sh
+++ b/tools/create_userrc.sh
@@ -105,15 +105,15 @@
 fi
 
 if [ -z "$OS_TENANT_NAME" -a -z "$OS_TENANT_ID" ]; then
-   export OS_TENANT_NAME=admin
+    export OS_TENANT_NAME=admin
 fi
 
 if [ -z "$OS_USERNAME" ]; then
-   export OS_USERNAME=admin
+    export OS_USERNAME=admin
 fi
 
 if [ -z "$OS_AUTH_URL" ]; then
-   export OS_AUTH_URL=http://localhost:5000/v2.0/
+    export OS_AUTH_URL=http://localhost:5000/v2.0/
 fi
 
 USER_PASS=${USER_PASS:-$OS_PASSWORD}
@@ -249,7 +249,7 @@
         for user_id_at_name in `keystone user-list --tenant-id $tenant_id | awk 'BEGIN {IGNORECASE = 1} /true[[:space:]]*\|[^|]*\|$/ {print  $2 "@" $4}'`; do
             read user_id user_name <<< `echo "$user_id_at_name" | sed 's/@/ /'`
             if [ $MODE = one -a "$user_name" != "$USER_NAME" ]; then
-               continue;
+                continue;
             fi
             add_entry "$user_id" "$user_name" "$tenant_id" "$tenant_name" "$USER_PASS"
         done
diff --git a/tools/docker/install_docker.sh b/tools/docker/install_docker.sh
index 289002e..483955b 100755
--- a/tools/docker/install_docker.sh
+++ b/tools/docker/install_docker.sh
@@ -38,7 +38,7 @@
 install_package python-software-properties && \
     sudo sh -c "echo deb $DOCKER_APT_REPO docker main > /etc/apt/sources.list.d/docker.list"
 apt_get update
-install_package --force-yes lxc-docker=${DOCKER_PACKAGE_VERSION} socat
+install_package --force-yes lxc-docker-${DOCKER_PACKAGE_VERSION} socat
 
 # Start the daemon - restart just in case the package ever auto-starts...
 restart_service docker
diff --git a/tools/jenkins/jenkins_home/build_jenkins.sh b/tools/jenkins/jenkins_home/build_jenkins.sh
index e0e774e..a556db0 100755
--- a/tools/jenkins/jenkins_home/build_jenkins.sh
+++ b/tools/jenkins/jenkins_home/build_jenkins.sh
@@ -6,8 +6,8 @@
 
 # Make sure only root can run our script
 if [[ $EUID -ne 0 ]]; then
-   echo "This script must be run as root"
-   exit 1
+    echo "This script must be run as root"
+    exit 1
 fi
 
 # This directory
@@ -31,15 +31,15 @@
 
 # Install jenkins
 if [ ! -e /var/lib/jenkins ]; then
-   echo "Jenkins installation failed"
-   exit 1
+    echo "Jenkins installation failed"
+    exit 1
 fi
 
 # Make sure user has configured a jenkins ssh pubkey
 if [ ! -e /var/lib/jenkins/.ssh/id_rsa.pub ]; then
-   echo "Public key for jenkins is missing.  This is used to ssh into your instances."
-   echo "Please run "su -c ssh-keygen jenkins" before proceeding"
-   exit 1
+    echo "Public key for jenkins is missing.  This is used to ssh into your instances."
+    echo "Please run "su -c ssh-keygen jenkins" before proceeding"
+    exit 1
 fi
 
 # Setup sudo
@@ -96,7 +96,7 @@
 
 # Configure plugins
 for plugin in ${PLUGINS//,/ }; do
-    name=`basename $plugin`   
+    name=`basename $plugin`
     dest=/var/lib/jenkins/plugins/$name
     if [ ! -e $dest ]; then
         curl -L $plugin -o $dest
diff --git a/tools/upload_image.sh b/tools/upload_image.sh
index dd21c9f..d81a5c8 100755
--- a/tools/upload_image.sh
+++ b/tools/upload_image.sh
@@ -33,6 +33,7 @@
 
 # Get a token to authenticate to glance
 TOKEN=$(keystone token-get | grep ' id ' | get_field 2)
+die_if_not_set $LINENO TOKEN "Keystone failed to get token"
 
 # Glance connection info.  Note the port must be specified.
 GLANCE_HOSTPORT=${GLANCE_HOSTPORT:-$GLANCE_HOST:9292}
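The added `die_if_not_set` call guards against an empty token before the script goes further. A simplified stand-in for the pattern (the real DevStack helper also takes `$LINENO` and exits rather than returning):

```shell
#!/bin/sh
# Simplified sketch of DevStack's die_if_not_set: emit a message and fail
# when the named variable is empty. Function name is illustrative.
die_if_not_set_demo() {
    eval _val=\$$1
    if [ -z "$_val" ]; then
        echo "$2" >&2
        return 1
    fi
}
TOKEN=abc123
die_if_not_set_demo TOKEN "Keystone failed to get token"
```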
diff --git a/tools/xen/functions b/tools/xen/functions
index c65d919..b0b077d 100644
--- a/tools/xen/functions
+++ b/tools/xen/functions
@@ -69,11 +69,17 @@
 }
 
 function get_local_sr {
-    xe sr-list name-label="Local storage" --minimal
+    xe pool-list params=default-SR minimal=true
 }
 
 function get_local_sr_path {
-    echo "/var/run/sr-mount/$(get_local_sr)"
+    pbd_path="/var/run/sr-mount/$(get_local_sr)"
+    pbd_device_config_path=`xe pbd-list sr-uuid=$(get_local_sr) params=device-config | grep " path: "`
+    if [ -n "$pbd_device_config_path" ]; then
+        pbd_uuid=`xe pbd-list sr-uuid=$(get_local_sr) minimal=true`
+        pbd_path=`xe pbd-param-get uuid=$pbd_uuid param-name=device-config param-key=path || echo ""`
+    fi
+    echo $pbd_path
 }
 
 function find_ip_by_name() {
diff --git a/tools/xen/install_os_domU.sh b/tools/xen/install_os_domU.sh
index 0f314bf..9a2f5a8 100755
--- a/tools/xen/install_os_domU.sh
+++ b/tools/xen/install_os_domU.sh
@@ -44,9 +44,9 @@
 
 xe_min()
 {
-  local cmd="$1"
-  shift
-  xe "$cmd" --minimal "$@"
+    local cmd="$1"
+    shift
+    xe "$cmd" --minimal "$@"
 }
 
 #
@@ -132,8 +132,8 @@
 # Set up ip forwarding, but skip on xcp-xapi
 if [ -a /etc/sysconfig/network ]; then
     if ! grep -q "FORWARD_IPV4=YES" /etc/sysconfig/network; then
-      # FIXME: This doesn't work on reboot!
-      echo "FORWARD_IPV4=YES" >> /etc/sysconfig/network
+        # FIXME: This doesn't work on reboot!
+        echo "FORWARD_IPV4=YES" >> /etc/sysconfig/network
     fi
 fi
 # Also, enable ip forwarding in rc.local, since the above trick isn't working
diff --git a/tools/xen/scripts/install-os-vpx.sh b/tools/xen/scripts/install-os-vpx.sh
index 7469e0c..7b0d891 100755
--- a/tools/xen/scripts/install-os-vpx.sh
+++ b/tools/xen/scripts/install-os-vpx.sh
@@ -42,69 +42,69 @@
 
 get_params()
 {
-  while getopts "hbn:r:l:t:" OPTION;
-  do
-    case $OPTION in
-      h) usage
-         exit 1
-         ;;
-      n)
-         BRIDGE=$OPTARG
-         ;;
-      l)
-         NAME_LABEL=$OPTARG
-         ;;
-      t)
-         TEMPLATE_NAME=$OPTARG
-         ;;
-      ?)
-         usage
-         exit
-         ;;
-    esac
-  done
-  if [[ -z $BRIDGE ]]
-  then
-     BRIDGE=xenbr0
-  fi
+    while getopts "hbn:r:l:t:" OPTION;
+    do
+        case $OPTION in
+            h) usage
+                exit 1
+                ;;
+            n)
+                BRIDGE=$OPTARG
+                ;;
+            l)
+                NAME_LABEL=$OPTARG
+                ;;
+            t)
+                TEMPLATE_NAME=$OPTARG
+                ;;
+            ?)
+                usage
+                exit
+                ;;
+        esac
+    done
+    if [[ -z $BRIDGE ]]
+    then
+        BRIDGE=xenbr0
+    fi
 
-  if [[ -z $TEMPLATE_NAME ]]; then
-    echo "Please specify a template name" >&2
-    exit 1
-  fi
+    if [[ -z $TEMPLATE_NAME ]]; then
+        echo "Please specify a template name" >&2
+        exit 1
+    fi
 
-  if [[ -z $NAME_LABEL ]]; then
-    echo "Please specify a name-label for the new VM" >&2
-    exit 1
-  fi
+    if [[ -z $NAME_LABEL ]]; then
+        echo "Please specify a name-label for the new VM" >&2
+        exit 1
+    fi
 }
 
 
 xe_min()
 {
-  local cmd="$1"
-  shift
-  xe "$cmd" --minimal "$@"
+    local cmd="$1"
+    shift
+    xe "$cmd" --minimal "$@"
 }
 
 
 find_network()
 {
-  result=$(xe_min network-list bridge="$1")
-  if [ "$result" = "" ]
-  then
-    result=$(xe_min network-list name-label="$1")
-  fi
-  echo "$result"
+    result=$(xe_min network-list bridge="$1")
+    if [ "$result" = "" ]
+    then
+        result=$(xe_min network-list name-label="$1")
+    fi
+    echo "$result"
 }
 
 
 create_vif()
 {
-  local v="$1"
-  echo "Installing VM interface on [$BRIDGE]"
-  local out_network_uuid=$(find_network "$BRIDGE")
-  xe vif-create vm-uuid="$v" network-uuid="$out_network_uuid" device="0"
+    local v="$1"
+    echo "Installing VM interface on [$BRIDGE]"
+    local out_network_uuid=$(find_network "$BRIDGE")
+    xe vif-create vm-uuid="$v" network-uuid="$out_network_uuid" device="0"
 }
 
 
@@ -112,20 +112,20 @@
 # Make the VM auto-start on server boot.
 set_auto_start()
 {
-  local v="$1"
-  xe vm-param-set uuid="$v" other-config:auto_poweron=true
+    local v="$1"
+    xe vm-param-set uuid="$v" other-config:auto_poweron=true
 }
 
 
 destroy_vifs()
 {
-  local v="$1"
-  IFS=,
-  for vif in $(xe_min vif-list vm-uuid="$v")
-  do
-    xe vif-destroy uuid="$vif"
-  done
-  unset IFS
+    local v="$1"
+    IFS=,
+    for vif in $(xe_min vif-list vm-uuid="$v")
+    do
+        xe vif-destroy uuid="$vif"
+    done
+    unset IFS
 }
 
 
diff --git a/tools/xen/scripts/uninstall-os-vpx.sh b/tools/xen/scripts/uninstall-os-vpx.sh
index ac26094..1ed2494 100755
--- a/tools/xen/scripts/uninstall-os-vpx.sh
+++ b/tools/xen/scripts/uninstall-os-vpx.sh
@@ -22,63 +22,63 @@
 # By default, don't remove the templates
 REMOVE_TEMPLATES=${REMOVE_TEMPLATES:-"false"}
 if [ "$1" = "--remove-templates" ]; then
-  REMOVE_TEMPLATES=true
+    REMOVE_TEMPLATES=true
 fi
 
 xe_min()
 {
-  local cmd="$1"
-  shift
-  xe "$cmd" --minimal "$@"
+    local cmd="$1"
+    shift
+    xe "$cmd" --minimal "$@"
 }
 
 destroy_vdi()
 {
-  local vbd_uuid="$1"
-  local type=$(xe_min vbd-list uuid=$vbd_uuid params=type)
-  local dev=$(xe_min vbd-list uuid=$vbd_uuid params=userdevice)
-  local vdi_uuid=$(xe_min vbd-list uuid=$vbd_uuid params=vdi-uuid)
+    local vbd_uuid="$1"
+    local type=$(xe_min vbd-list uuid=$vbd_uuid params=type)
+    local dev=$(xe_min vbd-list uuid=$vbd_uuid params=userdevice)
+    local vdi_uuid=$(xe_min vbd-list uuid=$vbd_uuid params=vdi-uuid)
 
-  if [ "$type" == 'Disk' ] && [ "$dev" != 'xvda' ] && [ "$dev" != '0' ]; then
-    xe vdi-destroy uuid=$vdi_uuid
-  fi
+    if [ "$type" == 'Disk' ] && [ "$dev" != 'xvda' ] && [ "$dev" != '0' ]; then
+        xe vdi-destroy uuid=$vdi_uuid
+    fi
 }
 
 uninstall()
 {
-  local vm_uuid="$1"
-  local power_state=$(xe_min vm-list uuid=$vm_uuid params=power-state)
+    local vm_uuid="$1"
+    local power_state=$(xe_min vm-list uuid=$vm_uuid params=power-state)
 
-  if [ "$power_state" != "halted" ]; then
-    xe vm-shutdown vm=$vm_uuid force=true
-  fi
+    if [ "$power_state" != "halted" ]; then
+        xe vm-shutdown vm=$vm_uuid force=true
+    fi
 
-  for v in $(xe_min vbd-list vm-uuid=$vm_uuid | sed -e 's/,/ /g'); do
-    destroy_vdi "$v"
-  done
+    for v in $(xe_min vbd-list vm-uuid=$vm_uuid | sed -e 's/,/ /g'); do
+        destroy_vdi "$v"
+    done
 
-  xe vm-uninstall vm=$vm_uuid force=true >/dev/null
+    xe vm-uninstall vm=$vm_uuid force=true >/dev/null
 }
 
 uninstall_template()
 {
-  local vm_uuid="$1"
+    local vm_uuid="$1"
 
-  for v in $(xe_min vbd-list vm-uuid=$vm_uuid | sed -e 's/,/ /g'); do
-    destroy_vdi "$v"
-  done
+    for v in $(xe_min vbd-list vm-uuid=$vm_uuid | sed -e 's/,/ /g'); do
+        destroy_vdi "$v"
+    done
 
-  xe template-uninstall template-uuid=$vm_uuid force=true >/dev/null
+    xe template-uninstall template-uuid=$vm_uuid force=true >/dev/null
 }
 
 # remove the VMs and their disks
 for u in $(xe_min vm-list other-config:os-vpx=true | sed -e 's/,/ /g'); do
-  uninstall "$u"
+    uninstall "$u"
 done
 
 # remove the templates
 if [ "$REMOVE_TEMPLATES" == "true" ]; then
-  for u in $(xe_min template-list other-config:os-vpx=true | sed -e 's/,/ /g'); do
-    uninstall_template "$u"
-  done
+    for u in $(xe_min template-list other-config:os-vpx=true | sed -e 's/,/ /g'); do
+        uninstall_template "$u"
+    done
 fi