Merge "Default to Cinder REST API v2"
diff --git a/.gitignore b/.gitignore
index 798b081..0c22c6b 100644
--- a/.gitignore
+++ b/.gitignore
@@ -13,3 +13,5 @@
 accrc
 .stackenv
 .prereqs
+docs/
+docs-files
diff --git a/HACKING.rst b/HACKING.rst
index 5f33d77..3c08e67 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -5,10 +5,10 @@
 General
 -------
 
-DevStack is written in POSIX shell script.  This choice was made because
-it best illustrates the configuration steps that this implementation takes
-on setting up and interacting with OpenStack components.  DevStack specifically
-uses Bash and is compatible with Bash 3.
+DevStack is written in UNIX shell script.  It uses a number of bash-isms
+and so is limited to Bash (version 3 and up) and compatible shells.
+Shell script was chosen because it best illustrates the steps used to
+set up and interact with OpenStack components.
 
 DevStack's official repository is located on GitHub at
 https://github.com/openstack-dev/devstack.git.  Besides the master branch that
@@ -54,14 +54,14 @@
 ``TOP_DIR`` should always point there, even if the script itself is located in
 a subdirectory::
 
-    # Keep track of the current devstack directory.
+    # Keep track of the current DevStack directory.
     TOP_DIR=$(cd $(dirname "$0") && pwd)
 
 Many scripts will utilize shared functions from the ``functions`` file.  There are
 also rc files (``stackrc`` and ``openrc``) that are often included to set the primary
 configuration of the user environment::
 
-    # Keep track of the current devstack directory.
+    # Keep track of the current DevStack directory.
     TOP_DIR=$(cd $(dirname "$0") && pwd)
 
     # Import common functions
@@ -100,13 +100,14 @@
 -------
 
 ``stackrc`` is the global configuration file for DevStack.  It is responsible for
-calling ``localrc`` if it exists so configuration can be overridden by the user.
+calling ``local.conf`` (or ``localrc`` if it exists) so local user configuration
+is recognized.
 
 The criteria for what belongs in ``stackrc`` can be vaguely summarized as
 follows:
 
-* All project respositories and branches (for historical reasons)
-* Global configuration that may be referenced in ``localrc``, i.e. ``DEST``, ``DATA_DIR``
+* All project repositories and branches handled directly in ``stack.sh``
+* Global configuration that may be referenced in ``local.conf``, i.e. ``DEST``, ``DATA_DIR``
 * Global service configuration like ``ENABLED_SERVICES``
 * Variables used by multiple services that do not have a clear owner, i.e.
   ``VOLUME_BACKING_FILE_SIZE`` (nova-volumes and cinder) or ``PUBLIC_NETWORK_NAME``
@@ -116,8 +117,9 @@
   not be changed for other reasons but the earlier file needs to dereference a
   variable set in the later file.  This should be rare.
 
-Also, variable declarations in ``stackrc`` do NOT allow overriding (the form
-``FOO=${FOO:-baz}``); if they did then they can already be changed in ``localrc``
+Also, variable declarations in ``stackrc`` before ``local.conf`` is sourced
+do NOT allow overriding (the form
+``FOO=${FOO:-baz}``); if they did then they can already be changed in ``local.conf``
 and can stay in the project file.
 
 
@@ -139,7 +141,9 @@
 Markdown formatting in the comments; use it sparingly.  Specifically, ``stack.sh``
 uses Markdown headers to divide the script into logical sections.
 
-.. _shocco: http://rtomayko.github.com/shocco/
+.. _shocco: https://github.com/dtroyer/shocco/tree/rst_support
+
+The script used to drive ``shocco`` is ``tools/build_docs.sh``.
 
 
 Exercises
diff --git a/README.md b/README.md
index 66e36b2..cb7752d 100644
--- a/README.md
+++ b/README.md
@@ -6,35 +6,39 @@
 * To describe working configurations of OpenStack (which code branches work together?  what do config files look like for those branches?)
 * To make it easier for developers to dive into OpenStack so that they can productively contribute without having to understand every part of the system at once
 * To make it easy to prototype cross-project features
-* To sanity-check OpenStack builds (used in gating commits to the primary repos)
+* To provide an environment for OpenStack CI testing on every commit to the projects
 
-Read more at http://devstack.org (built from the gh-pages branch)
+Read more at http://devstack.org.
 
-IMPORTANT: Be sure to carefully read `stack.sh` and any other scripts you execute before you run them, as they install software and may alter your networking configuration.  We strongly recommend that you run `stack.sh` in a clean and disposable vm when you are first getting started.
-
-# DevStack on Xenserver
-
-If you would like to use Xenserver as the hypervisor, please refer to the instructions in `./tools/xen/README.md`.
-
-# DevStack on Docker
-
-If you would like to use Docker as the hypervisor, please refer to the instructions in `./tools/docker/README.md`.
+IMPORTANT: Be sure to carefully read `stack.sh` and any other scripts you
+execute before you run them, as they install software and will alter your
+networking configuration.  We strongly recommend that you run `stack.sh`
+in a clean and disposable vm when you are first getting started.
 
 # Versions
 
-The devstack master branch generally points to trunk versions of OpenStack components.  For older, stable versions, look for branches named stable/[release] in the DevStack repo.  For example, you can do the following to create a diablo OpenStack cloud:
+The DevStack master branch generally points to trunk versions of OpenStack
+components.  For older, stable versions, look for branches named
+stable/[release] in the DevStack repo.  For example, you can do the
+following to create a grizzly OpenStack cloud:
 
-    git checkout stable/diablo
+    git checkout stable/grizzly
     ./stack.sh
 
-You can also pick specific OpenStack project releases by setting the appropriate `*_BRANCH` variables in `localrc` (look in `stackrc` for the default set).  Usually just before a release there will be milestone-proposed branches that need to be tested::
+You can also pick specific OpenStack project releases by setting the appropriate
+`*_BRANCH` variables in the ``localrc`` section of `local.conf` (look in
+`stackrc` for the default set).  Usually just before a release there will be
+milestone-proposed branches that need to be tested::
 
     GLANCE_REPO=https://github.com/openstack/glance.git
     GLANCE_BRANCH=milestone-proposed
 
 # Start A Dev Cloud
 
-Installing in a dedicated disposable vm is safer than installing on your dev machine!  Plus you can pick one of the supported Linux distros for your VM.  To start a dev cloud run the following NOT AS ROOT (see below for more):
+Installing in a dedicated disposable VM is safer than installing on your
+dev machine!  Plus you can pick one of the supported Linux distros for
+your VM.  To start a dev cloud run the following NOT AS ROOT (see
+**DevStack Execution Environment** below for more on user accounts):
 
     ./stack.sh
 
@@ -45,7 +49,7 @@
 
 We also provide an environment file that you can use to interact with your cloud via CLI:
 
-    # source openrc file to load your environment with osapi and ec2 creds
+    # source openrc file to load your environment with OpenStack CLI creds
     . openrc
     # list instances
     nova list
@@ -61,16 +65,37 @@
 
 DevStack runs rampant over the system it runs on, installing things and uninstalling other things.  Running this on a system you care about is a recipe for disappointment, or worse.  Alas, we're all in the virtualization business here, so run it in a VM.  And take advantage of the snapshot capabilities of your hypervisor of choice to reduce testing cycle times.  You might even save enough time to write one more feature before the next feature freeze...
 
-``stack.sh`` needs to have root access for a lot of tasks, but it also needs to have not-root permissions for most of its work and for all of the OpenStack services.  So ``stack.sh`` specifically does not run if you are root. This is a recent change (Oct 2013) from the previous behaviour of automatically creating a ``stack`` user.  Automatically creating a user account is not always the right response to running as root, so that bit is now an explicit step using ``tools/create-stack-user.sh``.  Run that (as root!) if you do not want to just use your normal login here, which works perfectly fine.
+``stack.sh`` needs to have root access for a lot of tasks, but uses ``sudo``
+for all of those tasks.  However, it needs to be not-root for most of its
+work and for all of the OpenStack services.  ``stack.sh`` specifically
+does not run if started as root.
+
+This is a recent change (Oct 2013) from the previous behaviour of
+automatically creating a ``stack`` user.  Automatically creating
+user accounts is not the right response to running as root, so
+that bit is now an explicit step using ``tools/create-stack-user.sh``.
+Run that (as root!) or just check it out to see what DevStack's
+expectations are for the account it runs under.  Many people simply
+use their usual login (the default 'ubuntu' login on a UEC image
+for example).
 
 # Customizing
 
-You can override environment variables used in `stack.sh` by creating file name `localrc`.  It is likely that you will need to do this to tweak your networking configuration should you need to access your cloud from a different host.
+You can override environment variables used in `stack.sh` by creating a file
+named `local.conf` with a ``localrc`` section as shown below.  It is likely
+that you will need to do this to tweak your networking configuration should
+you need to access your cloud from a different host.
+
+    [[local|localrc]]
+    VARIABLE=value
+
+See the **Local Configuration** section below for more details.
 
 # Database Backend
 
 Multiple database backends are available. The available databases are defined in the lib/databases directory.
-`mysql` is the default database, choose a different one by putting the following in `localrc`:
+`mysql` is the default database; choose a different one by putting the
+following in the `localrc` section:
 
     disable_service mysql
     enable_service postgresql
@@ -81,7 +106,7 @@
 
 Multiple RPC backends are available. Currently, this
 includes RabbitMQ (default), Qpid, and ZeroMQ. Your backend of
-choice may be selected via the `localrc`.
+choice may be selected via the `localrc` section.
 
 Note that selecting more than one RPC backend will result in a failure.
 
@@ -95,9 +120,10 @@
 
 # Apache Frontend
 
-Apache web server is enabled for wsgi services by setting `APACHE_ENABLED_SERVICES` in your localrc. But remember to enable these services at first as above.
+Apache web server is enabled for wsgi services by setting
+`APACHE_ENABLED_SERVICES` in your ``localrc`` section.  Remember to
+enable these services first, as described above.
 
-Example:
     APACHE_ENABLED_SERVICES+=keystone,swift
 
 # Swift
@@ -108,23 +134,23 @@
 object services will run directly in screen. The others services like
 replicator, updaters or auditor runs in background.
 
-If you would like to enable Swift you can add this to your `localrc` :
+If you would like to enable Swift you can add this to your `localrc` section:
 
     enable_service s-proxy s-object s-container s-account
 
 If you want a minimal Swift install with only Swift and Keystone you
-can have this instead in your `localrc`:
+can have this instead in your `localrc` section:
 
     disable_all_services
     enable_service key mysql s-proxy s-object s-container s-account
 
 If you only want to do some testing of a real normal swift cluster
 with multiple replicas you can do so by customizing the variable
-`SWIFT_REPLICAS` in your `localrc` (usually to 3).
+`SWIFT_REPLICAS` in your `localrc` section (usually to 3).
 
 # Swift S3
 
-If you are enabling `swift3` in `ENABLED_SERVICES` devstack will
+If you are enabling `swift3` in `ENABLED_SERVICES`, DevStack will
 install the swift3 middleware emulation. Swift will be configured to
 act as a S3 endpoint for Keystone so effectively replacing the
 `nova-objectstore`.
@@ -137,7 +163,7 @@
 Basic Setup
 
 In order to enable Neutron a single node setup, you'll need the
-following settings in your `localrc` :
+following settings in your `localrc` section:
 
     disable_service n-net
     enable_service q-svc
@@ -145,13 +171,17 @@
     enable_service q-dhcp
     enable_service q-l3
     enable_service q-meta
+    enable_service q-metering
     enable_service neutron
-    # Optional, to enable tempest configuration as part of devstack
+    # Optional, to enable tempest configuration as part of DevStack
     enable_service tempest
 
 Then run `stack.sh` as normal.
 
-devstack supports adding specific Neutron configuration flags to the service, Open vSwitch plugin and LinuxBridge plugin configuration files. To make use of this feature, the following variables are defined and can be configured in your `localrc` file:
+DevStack supports setting specific Neutron configuration flags for the
+service, the Open vSwitch plugin, and the LinuxBridge plugin
+configuration files.  To make use of this feature, the following
+variables are defined and can be configured in your `localrc` section:
 
     Variable Name             Config File  Section Modified
     -------------------------------------------------------------------------------------
@@ -160,12 +190,14 @@
     Q_AGENT_EXTRA_SRV_OPTS    Plugin       `OVS` (for Open Vswitch) or `LINUX_BRIDGE` (for LinuxBridge)
     Q_SRV_EXTRA_DEFAULT_OPTS  Service      DEFAULT
 
-An example of using the variables in your `localrc` is below:
+An example of using the variables in your `localrc` section is below:
 
     Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_type=vxlan vxlan_udp_port=8472)
     Q_SRV_EXTRA_OPTS=(tenant_network_type=vxlan)
 
-devstack also supports configuring the Neutron ML2 plugin. The ML2 plugin can run with the OVS, LinuxBridge, or Hyper-V agents on compute hosts. A simple way to configure the ml2 plugin is shown below:
+DevStack also supports configuring the Neutron ML2 plugin. The ML2 plugin
+can run with the OVS, LinuxBridge, or Hyper-V agents on compute hosts. A
+simple way to configure the ml2 plugin is shown below:
 
     # VLAN configuration
     Q_PLUGIN=ml2
@@ -179,7 +211,9 @@
     Q_PLUGIN=ml2
     Q_ML2_TENANT_NETWORK_TYPE=vxlan
 
-The above will default in devstack to using the OVS on each compute host. To change this, set the `Q_AGENT` variable to the agent you want to run (e.g. linuxbridge).
+By default DevStack uses the OVS agent on each compute host.
+To change this, set the `Q_AGENT` variable to the agent you want to run
+(e.g. linuxbridge).
 
     Variable Name                    Notes
     -------------------------------------------------------------------------------------
@@ -194,13 +228,13 @@
 # Heat
 
 Heat is disabled by default. To enable it you'll need the following settings
-in your `localrc` :
+in your `localrc` section:
 
     enable_service heat h-api h-api-cfn h-api-cw h-eng
 
 Heat can also run in standalone mode, and be configured to orchestrate
 on an external OpenStack cloud. To launch only Heat in standalone mode
-you'll need the following settings in your `localrc` :
+you'll need the following settings in your `localrc` section:
 
     disable_all_services
     enable_service rabbit mysql heat h-api h-api-cfn h-api-cw h-eng
@@ -215,6 +249,24 @@
     $ cd /opt/stack/tempest
     $ nosetests tempest/scenario/test_network_basic_ops.py
 
+# DevStack on Xenserver
+
+If you would like to use Xenserver as the hypervisor, please refer to the instructions in `./tools/xen/README.md`.
+
+# DevStack on Docker
+
+If you would like to use Docker as the hypervisor, please refer to the instructions in `./tools/docker/README.md`.
+
+# Additional Projects
+
+DevStack has a hook mechanism to call out to a dispatch script at specific
+points in the execution of `stack.sh`, `unstack.sh` and `clean.sh`.  This
+allows upper-layer projects, especially those that the lower layer projects
+have no dependency on, to be added to DevStack without modifying the core
+scripts.  Tempest is built this way as an example of how to structure the
+dispatch script, see `extras.d/80-tempest.sh`.  See `extras.d/README.md`
+for more information.
+
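The dispatch convention can be sketched as follows.  This is a hypothetical `80-myproj.sh` for an imaginary "myproj" service; the service name and messages are illustrative only, not part of DevStack:

```shell
# Hypothetical extras.d dispatch script for an imaginary "myproj"
# service.  stack.sh, unstack.sh and clean.sh source the script with
# the phase name as the first argument, e.g.: source 80-myproj.sh stack
dispatch() {
    case $1 in
        source)
            # Called early so library functions are available later
            echo "myproj: sourcing libraries"
            ;;
        stack)
            echo "myproj: install and start services"
            ;;
        unstack)
            echo "myproj: stop services"
            ;;
        clean)
            echo "myproj: remove build artifacts"
            ;;
    esac
}
dispatch "$1"
```

The real dispatch scripts also typically guard on `is_service_enabled` so they only act when their service is turned on; see `extras.d/80-tempest.sh` for the actual structure.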
 # Multi-Node Setup
 
 A more interesting setup involves running multiple compute nodes, with Neutron networks connecting VMs on different compute nodes.
@@ -228,7 +280,8 @@
     enable_service q-meta
     enable_service neutron
 
-You likely want to change your `localrc` to run a scheduler that will balance VMs across hosts:
+You likely want to change your `localrc` section to run a scheduler that
+will balance VMs across hosts:
 
     SCHEDULER=nova.scheduler.simple.SimpleScheduler
 
@@ -245,7 +298,7 @@
 
 Cells is a new scaling option with a full spec at http://wiki.openstack.org/blueprint-nova-compute-cells.
 
-To setup a cells environment add the following to your `localrc`:
+To setup a cells environment add the following to your `localrc` section:
 
     enable_service n-cell
 
@@ -260,32 +313,41 @@
 
 The new config file ``local.conf`` is an extended-INI format that introduces a new meta-section header that provides some additional information such as a phase name and destination config filename:
 
-  [[ <phase> | <filename> ]]
+    [[ <phase> | <config-file-name> ]]
 
-where <phase> is one of a set of phase names defined by ``stack.sh`` and <filename> is the project config filename.  The filename is eval'ed in the stack.sh context so all environment variables are available and may be used.  Using the project config file variables in the header is strongly suggested (see example of NOVA_CONF below).  If the path of the config file does not exist it is skipped.
+where ``<phase>`` is one of a set of phase names defined by ``stack.sh``
+and ``<config-file-name>`` is the configuration filename.  The filename is
+eval'ed in the ``stack.sh`` context so all environment variables are
+available and may be used.  Using the project config file variables in
+the header is strongly suggested (see the ``NOVA_CONF`` example below).
+If the path of the config file does not exist it is skipped.
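The eval step can be illustrated with a simplified sketch.  This is not the actual ``stack.sh`` code, only the idea of how a variable in a meta-section header expands to a real path:

```shell
# Simplified illustration: the meta-section header text is eval'ed so
# that variables like $NOVA_CONF expand to real paths.
NOVA_CONF=/etc/nova/nova.conf    # would already be set by stack.sh
header='$NOVA_CONF'              # literal text taken from the header
eval real_path=$header           # expands to /etc/nova/nova.conf
if [[ ! -e $real_path ]]; then
    echo "skipping $real_path"   # missing config files are skipped
fi
```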
 
 The defined phases are:
 
-* local - extracts ``localrc`` from ``local.conf`` before ``stackrc`` is sourced
-* post-config - runs after the layer 2 services are configured and before they are started
-* extra - runs after services are started and before any files in ``extra.d`` are executes
+* **local** - extracts ``localrc`` from ``local.conf`` before ``stackrc`` is sourced
+* **post-config** - runs after the layer 2 services are configured and before they are started
+* **extra** - runs after services are started and before any files in ``extra.d`` are executed
 
 The file is processed strictly in sequence; meta-sections may be specified more than once but if any settings are duplicated the last to appear in the file will be used.
 
-  [[post-config|$NOVA_CONF]]
-  [DEFAULT]
-  use_syslog = True
+    [[post-config|$NOVA_CONF]]
+    [DEFAULT]
+    use_syslog = True
 
-  [osapi_v3]
-  enabled = False
+    [osapi_v3]
+    enabled = False
 
-A specific meta-section ``local:localrc`` is used to provide a default localrc file.  This allows all custom settings for DevStack to be contained in a single file.  ``localrc`` is not overwritten if it exists to preserve compatability.
+A specific meta-section ``local|localrc`` is used to provide a default
+``localrc`` file (actually ``.localrc.auto``).  This allows all custom
+settings for DevStack to be contained in a single file.  If ``localrc``
+exists it will be used instead to preserve backward-compatibility.
 
-  [[local|localrc]]
-  FIXED_RANGE=10.254.1.0/24
-  ADMIN_PASSWORD=speciale
-  LOGFILE=$DEST/logs/stack.sh.log
+    [[local|localrc]]
+    FIXED_RANGE=10.254.1.0/24
+    ADMIN_PASSWORD=speciale
+    LOGFILE=$DEST/logs/stack.sh.log
 
-Note that ``Q_PLUGIN_CONF_FILE`` is unique in that it is assumed to _NOT_ start with a ``/`` (slash) character.  A slash will need to be added:
+Note that ``Q_PLUGIN_CONF_FILE`` is unique in that it is assumed to *NOT*
+start with a ``/`` (slash) character.  A slash will need to be added:
 
-  [[post-config|/$Q_PLUGIN_CONF_FILE]]
+    [[post-config|/$Q_PLUGIN_CONF_FILE]]
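Putting the pieces together, a complete `local.conf` might combine a ``localrc`` section with one or more ``post-config`` sections (values here are illustrative only):

    [[local|localrc]]
    ADMIN_PASSWORD=secrete
    FIXED_RANGE=10.254.1.0/24
    LOGFILE=$DEST/logs/stack.sh.log

    [[post-config|$NOVA_CONF]]
    [DEFAULT]
    use_syslog = True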
diff --git a/clean.sh b/clean.sh
index 6ceb5a4..395941a 100755
--- a/clean.sh
+++ b/clean.sh
@@ -47,6 +47,15 @@
 source $TOP_DIR/lib/baremetal
 source $TOP_DIR/lib/ldap
 
+# Extras Source
+# --------------
+
+# Phase: source
+if [[ -d $TOP_DIR/extras.d ]]; then
+    for i in $TOP_DIR/extras.d/*.sh; do
+        [[ -r $i ]] && source $i source
+    done
+fi
 
 # See if there is anything running...
 # need to adapt when run_service is merged
@@ -56,6 +65,16 @@
     $TOP_DIR/unstack.sh --all
 fi
 
+# Run extras
+# ==========
+
+# Phase: clean
+if [[ -d $TOP_DIR/extras.d ]]; then
+    for i in $TOP_DIR/extras.d/*.sh; do
+        [[ -r $i ]] && source $i clean
+    done
+fi
+
 # Clean projects
 cleanup_oslo
 cleanup_cinder
diff --git a/eucarc b/eucarc
index 2b0f7dd..3502351 100644
--- a/eucarc
+++ b/eucarc
@@ -13,7 +13,7 @@
 fi
 
 # Find the other rc files
-RC_DIR=$(cd $(dirname "$BASH_SOURCE") && pwd)
+RC_DIR=$(cd $(dirname "${BASH_SOURCE:-$0}") && pwd)
 
 # Get user configuration
 source $RC_DIR/openrc
diff --git a/exercises/aggregates.sh b/exercises/aggregates.sh
index e2baecd..96241f9 100755
--- a/exercises/aggregates.sh
+++ b/exercises/aggregates.sh
@@ -3,12 +3,13 @@
 # **aggregates.sh**
 
 # This script demonstrates how to use host aggregates:
-#  *  Create an Aggregate
-#  *  Updating Aggregate details
-#  *  Testing Aggregate metadata
-#  *  Testing Aggregate delete
-#  *  Testing General Aggregates (https://blueprints.launchpad.net/nova/+spec/general-host-aggregates)
-#  *  Testing add/remove hosts (with one host)
+#
+# *  Create an Aggregate
+# *  Updating Aggregate details
+# *  Testing Aggregate metadata
+# *  Testing Aggregate delete
+# *  Testing General Aggregates (https://blueprints.launchpad.net/nova/+spec/general-host-aggregates)
+# *  Testing add/remove hosts (with one host)
 
 echo "**************************************************"
 echo "Begin DevStack Exercise: $0"
@@ -100,7 +101,7 @@
 META_DATA_3_KEY=bar
 
 #ensure no additional metadata is set
-nova aggregate-details $AGGREGATE_ID | egrep "{u'availability_zone': u'$AGGREGATE_A_ZONE'}|{}"
+nova aggregate-details $AGGREGATE_ID | egrep "\|[{u ]*'availability_zone.+$AGGREGATE_A_ZONE'[ }]*\|"
 
 nova aggregate-set-metadata $AGGREGATE_ID ${META_DATA_1_KEY}=123
 nova aggregate-details $AGGREGATE_ID | grep $META_DATA_1_KEY
@@ -117,7 +118,7 @@
 nova aggregate-details $AGGREGATE_ID | grep $META_DATA_2_KEY && die $LINENO "ERROR metadata was not cleared"
 
 nova aggregate-set-metadata $AGGREGATE_ID $META_DATA_3_KEY $META_DATA_1_KEY
-nova aggregate-details $AGGREGATE_ID | egrep "{u'availability_zone': u'$AGGREGATE_A_ZONE'}|{}"
+nova aggregate-details $AGGREGATE_ID | egrep "\|[{u ]*'availability_zone.+$AGGREGATE_A_ZONE'[ }]*\|"
 
 
 # Test aggregate-add/remove-host
diff --git a/exercises/boot_from_volume.sh b/exercises/boot_from_volume.sh
index fe27bd0..3b3d3ba 100755
--- a/exercises/boot_from_volume.sh
+++ b/exercises/boot_from_volume.sh
@@ -3,8 +3,9 @@
 # **boot_from_volume.sh**
 
 # This script demonstrates how to boot from a volume.  It does the following:
-#  *  Create a bootable volume
-#  *  Boot a volume-backed instance
+#
+# *  Create a bootable volume
+# *  Boot a volume-backed instance
 
 echo "*********************************************************************"
 echo "Begin DevStack Exercise: $0"
@@ -119,7 +120,7 @@
 INSTANCE_TYPE=$(nova flavor-list | grep $DEFAULT_INSTANCE_TYPE | get_field 1)
 if [[ -z "$INSTANCE_TYPE" ]]; then
     # grab the first flavor in the list to launch if default doesn't exist
-   INSTANCE_TYPE=$(nova flavor-list | head -n 4 | tail -n 1 | get_field 1)
+    INSTANCE_TYPE=$(nova flavor-list | head -n 4 | tail -n 1 | get_field 1)
 fi
 
 # Clean-up from previous runs
diff --git a/exercises/docker.sh b/exercises/docker.sh
deleted file mode 100755
index 0672bc0..0000000
--- a/exercises/docker.sh
+++ /dev/null
@@ -1,105 +0,0 @@
-#!/usr/bin/env bash
-
-# **docker**
-
-# Test Docker hypervisor
-
-echo "*********************************************************************"
-echo "Begin DevStack Exercise: $0"
-echo "*********************************************************************"
-
-# This script exits on an error so that errors don't compound and you see
-# only the first error that occurred.
-set -o errexit
-
-# Print the commands being run so that we can see the command that triggers
-# an error.  It is also useful for following allowing as the install occurs.
-set -o xtrace
-
-
-# Settings
-# ========
-
-# Keep track of the current directory
-EXERCISE_DIR=$(cd $(dirname "$0") && pwd)
-TOP_DIR=$(cd $EXERCISE_DIR/..; pwd)
-
-# Import common functions
-source $TOP_DIR/functions
-
-# Import configuration
-source $TOP_DIR/openrc
-
-# Import exercise configuration
-source $TOP_DIR/exerciserc
-
-# Skip if the hypervisor is not Docker
-[[ "$VIRT_DRIVER" == "docker" ]] || exit 55
-
-# Import docker functions and declarations
-source $TOP_DIR/lib/nova_plugins/hypervisor-docker
-
-# Image and flavor are ignored but the CLI requires them...
-
-# Instance type to create
-DEFAULT_INSTANCE_TYPE=${DEFAULT_INSTANCE_TYPE:-m1.tiny}
-
-# Boot this image, use first AMI image if unset
-DEFAULT_IMAGE_NAME=${DEFAULT_IMAGE_NAME:-ami}
-
-# Instance name
-VM_NAME=ex-docker
-
-
-# Launching a server
-# ==================
-
-# Grab the id of the image to launch
-IMAGE=$(glance image-list | egrep " $DOCKER_IMAGE_NAME:latest " | get_field 1)
-die_if_not_set $LINENO IMAGE "Failure getting image $DOCKER_IMAGE_NAME"
-
-# Select a flavor
-INSTANCE_TYPE=$(nova flavor-list | grep $DEFAULT_INSTANCE_TYPE | get_field 1)
-if [[ -z "$INSTANCE_TYPE" ]]; then
-    # grab the first flavor in the list to launch if default doesn't exist
-   INSTANCE_TYPE=$(nova flavor-list | head -n 4 | tail -n 1 | get_field 1)
-fi
-
-# Clean-up from previous runs
-nova delete $VM_NAME || true
-if ! timeout $ACTIVE_TIMEOUT sh -c "while nova show $VM_NAME; do sleep 1; done"; then
-    die $LINENO "server didn't terminate!"
-fi
-
-# Boot instance
-# -------------
-
-VM_UUID=$(nova boot --flavor $INSTANCE_TYPE --image $IMAGE $VM_NAME | grep ' id ' | get_field 2)
-die_if_not_set $LINENO VM_UUID "Failure launching $VM_NAME"
-
-# Check that the status is active within ACTIVE_TIMEOUT seconds
-if ! timeout $ACTIVE_TIMEOUT sh -c "while ! nova show $VM_UUID | grep status | grep -q ACTIVE; do sleep 1; done"; then
-    die $LINENO "server didn't become active!"
-fi
-
-# Get the instance IP
-IP=$(nova show $VM_UUID | grep "$PRIVATE_NETWORK_NAME" | get_field 2)
-die_if_not_set $LINENO IP "Failure retrieving IP address"
-
-# Private IPs can be pinged in single node deployments
-ping_check "$PRIVATE_NETWORK_NAME" $IP $BOOT_TIMEOUT
-
-# Clean up
-# --------
-
-# Delete instance
-nova delete $VM_UUID || die $LINENO "Failure deleting instance $VM_NAME"
-if ! timeout $TERMINATE_TIMEOUT sh -c "while nova list | grep -q $VM_UUID; do sleep 1; done"; then
-    die $LINENO "Server $VM_NAME not deleted"
-fi
-
-set +o xtrace
-echo "*********************************************************************"
-echo "SUCCESS: End DevStack Exercise: $0"
-echo "*********************************************************************"
-
diff --git a/exercises/euca.sh b/exercises/euca.sh
index 64c0014..ed521e4 100755
--- a/exercises/euca.sh
+++ b/exercises/euca.sh
@@ -87,31 +87,31 @@
 # Volumes
 # -------
 if is_service_enabled c-vol && ! is_service_enabled n-cell; then
-   VOLUME_ZONE=`euca-describe-availability-zones | head -n1 | cut -f2`
-   die_if_not_set $LINENO VOLUME_ZONE "Failure to find zone for volume"
+    VOLUME_ZONE=`euca-describe-availability-zones | head -n1 | cut -f2`
+    die_if_not_set $LINENO VOLUME_ZONE "Failure to find zone for volume"
 
-   VOLUME=`euca-create-volume -s 1 -z $VOLUME_ZONE | cut -f2`
-   die_if_not_set $LINENO VOLUME "Failure to create volume"
+    VOLUME=`euca-create-volume -s 1 -z $VOLUME_ZONE | cut -f2`
+    die_if_not_set $LINENO VOLUME "Failure to create volume"
 
-   # Test that volume has been created
-   VOLUME=`euca-describe-volumes $VOLUME | cut -f2`
-   die_if_not_set $LINENO VOLUME "Failure to get volume"
+    # Test that volume has been created
+    VOLUME=`euca-describe-volumes $VOLUME | cut -f2`
+    die_if_not_set $LINENO VOLUME "Failure to get volume"
 
-   # Test volume has become available
-   if ! timeout $RUNNING_TIMEOUT sh -c "while ! euca-describe-volumes $VOLUME | grep -q available; do sleep 1; done"; then
-       die $LINENO "volume didn't become available within $RUNNING_TIMEOUT seconds"
-   fi
+    # Test volume has become available
+    if ! timeout $RUNNING_TIMEOUT sh -c "while ! euca-describe-volumes $VOLUME | grep -q available; do sleep 1; done"; then
+        die $LINENO "volume didn't become available within $RUNNING_TIMEOUT seconds"
+    fi
 
-   # Attach volume to an instance
-   euca-attach-volume -i $INSTANCE -d $ATTACH_DEVICE $VOLUME || \
-       die $LINENO "Failure attaching volume $VOLUME to $INSTANCE"
-   if ! timeout $ACTIVE_TIMEOUT sh -c "while ! euca-describe-volumes $VOLUME | grep -A 1 in-use | grep -q attach; do sleep 1; done"; then
-       die $LINENO "Could not attach $VOLUME to $INSTANCE"
-   fi
+    # Attach volume to an instance
+    euca-attach-volume -i $INSTANCE -d $ATTACH_DEVICE $VOLUME || \
+        die $LINENO "Failure attaching volume $VOLUME to $INSTANCE"
+    if ! timeout $ACTIVE_TIMEOUT sh -c "while ! euca-describe-volumes $VOLUME | grep -A 1 in-use | grep -q attach; do sleep 1; done"; then
+        die $LINENO "Could not attach $VOLUME to $INSTANCE"
+    fi
 
-   # Detach volume from an instance
-   euca-detach-volume $VOLUME || \
-       die $LINENO "Failure detaching volume $VOLUME to $INSTANCE"
+    # Detach volume from an instance
+    euca-detach-volume $VOLUME || \
+        die $LINENO "Failure detaching volume $VOLUME to $INSTANCE"
     if ! timeout $ACTIVE_TIMEOUT sh -c "while ! euca-describe-volumes $VOLUME | grep -q available; do sleep 1; done"; then
         die $LINENO "Could not detach $VOLUME to $INSTANCE"
     fi
@@ -120,7 +120,7 @@
     euca-delete-volume $VOLUME || \
         die $LINENO "Failure to delete volume"
     if ! timeout $ACTIVE_TIMEOUT sh -c "while euca-describe-volumes | grep $VOLUME; do sleep 1; done"; then
-       die $LINENO "Could not delete $VOLUME"
+        die $LINENO "Could not delete $VOLUME"
     fi
 else
     echo "Volume Tests Skipped"
diff --git a/exercises/floating_ips.sh b/exercises/floating_ips.sh
index 2833b65..1a1608c 100755
--- a/exercises/floating_ips.sh
+++ b/exercises/floating_ips.sh
@@ -113,7 +113,7 @@
 INSTANCE_TYPE=$(nova flavor-list | grep $DEFAULT_INSTANCE_TYPE | get_field 1)
 if [[ -z "$INSTANCE_TYPE" ]]; then
     # grab the first flavor in the list to launch if default doesn't exist
-   INSTANCE_TYPE=$(nova flavor-list | head -n 4 | tail -n 1 | get_field 1)
+    INSTANCE_TYPE=$(nova flavor-list | head -n 4 | tail -n 1 | get_field 1)
 fi
 
 # Clean-up from previous runs
@@ -168,7 +168,7 @@
     # list floating addresses
     if ! timeout $ASSOCIATE_TIMEOUT sh -c "while ! nova floating-ip-list | grep $TEST_FLOATING_POOL | grep -q $TEST_FLOATING_IP; do sleep 1; done"; then
         die $LINENO "Floating IP not allocated"
-     fi
+    fi
 fi
 
 # Dis-allow icmp traffic (ping)
diff --git a/exercises/neutron-adv-test.sh b/exercises/neutron-adv-test.sh
index abb29cf..7dfa5dc 100755
--- a/exercises/neutron-adv-test.sh
+++ b/exercises/neutron-adv-test.sh
@@ -102,6 +102,7 @@
 # and save it.
 
 TOKEN=`keystone token-get | grep ' id ' | awk '{print $4}'`
+die_if_not_set $LINENO TOKEN "Keystone failed to get token"
 
 # Various functions
 # -----------------
@@ -272,12 +273,12 @@
 }
 
 function ping_ip {
-     # Test agent connection.  Assumes namespaces are disabled, and
-     # that DHCP is in use, but not L3
-     local VM_NAME=$1
-     local NET_NAME=$2
-     IP=$(get_instance_ip $VM_NAME $NET_NAME)
-     ping_check $NET_NAME $IP $BOOT_TIMEOUT
+    # Test agent connection.  Assumes namespaces are disabled, and
+    # that DHCP is in use, but not L3
+    local VM_NAME=$1
+    local NET_NAME=$2
+    IP=$(get_instance_ip $VM_NAME $NET_NAME)
+    ping_check $NET_NAME $IP $BOOT_TIMEOUT
 }
 
 function check_vm {
@@ -329,12 +330,12 @@
 }
 
 function delete_networks {
-   foreach_tenant_net 'delete_network ${%TENANT%_NAME} %NUM%'
-   #TODO(nati) add secuirty group check after it is implemented
-   # source $TOP_DIR/openrc demo1 demo1
-   # nova secgroup-delete-rule default icmp -1 -1 0.0.0.0/0
-   # source $TOP_DIR/openrc demo2 demo2
-   # nova secgroup-delete-rule default icmp -1 -1 0.0.0.0/0
+    foreach_tenant_net 'delete_network ${%TENANT%_NAME} %NUM%'
+    # TODO(nati) add security group check after it is implemented
+    # source $TOP_DIR/openrc demo1 demo1
+    # nova secgroup-delete-rule default icmp -1 -1 0.0.0.0/0
+    # source $TOP_DIR/openrc demo2 demo2
+    # nova secgroup-delete-rule default icmp -1 -1 0.0.0.0/0
 }
 
 function create_all {
diff --git a/exercises/savanna.sh b/exercises/savanna.sh
new file mode 100755
index 0000000..fc3f976
--- /dev/null
+++ b/exercises/savanna.sh
@@ -0,0 +1,43 @@
+#!/usr/bin/env bash
+
+# **savanna.sh**
+
+# Sanity check that Savanna started if enabled
+
+echo "*********************************************************************"
+echo "Begin DevStack Exercise: $0"
+echo "*********************************************************************"
+
+# This script exits on an error so that errors don't compound and you see
+# only the first error that occurred.
+set -o errexit
+
+# Print the commands being run so that we can see the command that triggers
+# an error.  It is also useful for following along as the install occurs.
+set -o xtrace
+
+
+# Settings
+# ========
+
+# Keep track of the current directory
+EXERCISE_DIR=$(cd $(dirname "$0") && pwd)
+TOP_DIR=$(cd $EXERCISE_DIR/..; pwd)
+
+# Import common functions
+source $TOP_DIR/functions
+
+# Import configuration
+source $TOP_DIR/openrc
+
+# Import exercise configuration
+source $TOP_DIR/exerciserc
+
+is_service_enabled savanna || exit 55
+
+curl http://$SERVICE_HOST:8386/ 2>/dev/null | grep -q 'Auth' || die $LINENO "Savanna API not functioning!"
+
+set +o xtrace
+echo "*********************************************************************"
+echo "SUCCESS: End DevStack Exercise: $0"
+echo "*********************************************************************"
diff --git a/exercises/swift.sh b/exercises/swift.sh
index b9f1b56..25ea671 100755
--- a/exercises/swift.sh
+++ b/exercises/swift.sh
@@ -2,7 +2,7 @@
 
 # **swift.sh**
 
-# Test swift via the ``swift`` command line from ``python-swiftclient`
+# Test swift via the ``swift`` command line from ``python-swiftclient``
 
 echo "*********************************************************************"
 echo "Begin DevStack Exercise: $0"
diff --git a/exercises/volumes.sh b/exercises/volumes.sh
index e536d16..9ee9fa9 100755
--- a/exercises/volumes.sh
+++ b/exercises/volumes.sh
@@ -117,7 +117,7 @@
 INSTANCE_TYPE=$(nova flavor-list | grep $DEFAULT_INSTANCE_TYPE | get_field 1)
 if [[ -z "$INSTANCE_TYPE" ]]; then
     # grab the first flavor in the list to launch if default doesn't exist
-   INSTANCE_TYPE=$(nova flavor-list | head -n 4 | tail -n 1 | get_field 1)
+    INSTANCE_TYPE=$(nova flavor-list | head -n 4 | tail -n 1 | get_field 1)
 fi
 
 # Clean-up from previous runs
diff --git a/extras.d/70-savanna.sh b/extras.d/70-savanna.sh
new file mode 100644
index 0000000..f6881cc
--- /dev/null
+++ b/extras.d/70-savanna.sh
@@ -0,0 +1,31 @@
+# savanna.sh - DevStack extras script to install Savanna
+
+if is_service_enabled savanna; then
+    if [[ "$1" == "source" ]]; then
+        # Initial source
+        source $TOP_DIR/lib/savanna
+        source $TOP_DIR/lib/savanna-dashboard
+    elif [[ "$1" == "stack" && "$2" == "install" ]]; then
+        echo_summary "Installing Savanna"
+        install_savanna
+        if is_service_enabled horizon; then
+            install_savanna_dashboard
+        fi
+    elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
+        echo_summary "Configuring Savanna"
+        configure_savanna
+        if is_service_enabled horizon; then
+            configure_savanna_dashboard
+        fi
+    elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
+        echo_summary "Initializing Savanna"
+        start_savanna
+    fi
+
+    if [[ "$1" == "unstack" ]]; then
+        stop_savanna
+        if is_service_enabled horizon; then
+            cleanup_savanna_dashboard
+        fi
+    fi
+fi
diff --git a/extras.d/80-tempest.sh b/extras.d/80-tempest.sh
index f159955..75b702c 100644
--- a/extras.d/80-tempest.sh
+++ b/extras.d/80-tempest.sh
@@ -1,21 +1,29 @@
 # tempest.sh - DevStack extras script
 
-source $TOP_DIR/lib/tempest
-
-if [[ "$1" == "stack" ]]; then
-    # Configure Tempest last to ensure that the runtime configuration of
-    # the various OpenStack services can be queried.
-    if is_service_enabled tempest; then
-        echo_summary "Configuring Tempest"
+if is_service_enabled tempest; then
+    if [[ "$1" == "source" ]]; then
+        # Initial source
+        source $TOP_DIR/lib/tempest
+    elif [[ "$1" == "stack" && "$2" == "install" ]]; then
+        echo_summary "Installing Tempest"
         install_tempest
+    elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
+        # Tempest config must come after layer 2 services are running
+        :
+    elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
+        echo_summary "Initializing Tempest"
         configure_tempest
         init_tempest
     fi
-fi
 
-if [[ "$1" == "unstack" ]]; then
-    # no-op
-    :
-fi
+    if [[ "$1" == "unstack" ]]; then
+        # no-op
+        :
+    fi
 
+    if [[ "$1" == "clean" ]]; then
+        # no-op
+        :
+    fi
+fi
 
diff --git a/extras.d/README b/extras.d/README
deleted file mode 100644
index ffc6793..0000000
--- a/extras.d/README
+++ /dev/null
@@ -1,14 +0,0 @@
-The extras.d directory contains project initialization scripts to be
-sourced by stack.sh at the end of its run.  This is expected to be
-used by external projects that want to be configured, started and
-stopped with DevStack.
-
-Order is controlled by prefixing the script names with the a two digit
-sequence number.  Script names must end with '.sh'.  This provides a
-convenient way to disable scripts by simoy renaming them.
-
-DevStack reserves the sequence numbers 00 through 09 and 90 through 99
-for its own use.
-
-The scripts are called with an argument of 'stack' by stack.sh and
-with an argument of 'unstack' by unstack.sh.
diff --git a/extras.d/README.md b/extras.d/README.md
new file mode 100644
index 0000000..88e4265
--- /dev/null
+++ b/extras.d/README.md
@@ -0,0 +1,30 @@
+# Extras Hooks
+
+The `extras.d` directory contains project dispatch scripts that are called
+at specific times by `stack.sh`, `unstack.sh` and `clean.sh`.  These hooks are
+used to install, configure and start additional projects during a DevStack run
+without any modifications to the base DevStack scripts.
+
+When `stack.sh` reaches one of the hook points it sources the scripts in `extras.d`
+that end with `.sh`.  To control the order in which the scripts are sourced, their
+names start with a two-digit sequence number.  DevStack reserves the sequence
+numbers 00 through 09 and 90 through 99 for its own use.
+
+The scripts are sourced at the beginning of each script that calls them. The
+entire `stack.sh` variable space is available.  The scripts are
+sourced with one or more arguments, the first of which defines the hook phase:
+
+    source | stack | unstack | clean
+
+    source: always called first in any of the scripts, used to set the
+        initial defaults in a lib/* script or similar
+
+    stack: called by stack.sh.  There are three possible values for
+        the second arg to distinguish the phase stack.sh is in:
+
+        arg 2:  install | post-config | extra
+
+    unstack: called by unstack.sh
+
+    clean: called by clean.sh.  Remember, clean.sh also calls unstack.sh
+        so that work need not be repeated.
diff --git a/files/apts/horizon b/files/apts/horizon
index 0865931..8969046 100644
--- a/files/apts/horizon
+++ b/files/apts/horizon
@@ -19,5 +19,3 @@
 python-coverage
 python-cherrypy3 # why?
 python-migrate
-nodejs
-nodejs-legacy # dist:quantal
diff --git a/files/apts/trema b/files/apts/trema
index e33ccd3..09cb7c6 100644
--- a/files/apts/trema
+++ b/files/apts/trema
@@ -6,6 +6,7 @@
 ruby1.8-dev
 libpcap-dev
 libsqlite3-dev
+libglib2.0-dev
 
 # Sliceable Switch
 sqlite3
diff --git a/files/keystone_data.sh b/files/keystone_data.sh
index 3f3137c..ea2d52d 100755
--- a/files/keystone_data.sh
+++ b/files/keystone_data.sh
@@ -66,12 +66,12 @@
 # Heat
 if [[ "$ENABLED_SERVICES" =~ "heat" ]]; then
     HEAT_USER=$(get_id keystone user-create --name=heat \
-                                              --pass="$SERVICE_PASSWORD" \
-                                              --tenant_id $SERVICE_TENANT \
-                                              --email=heat@example.com)
+        --pass="$SERVICE_PASSWORD" \
+        --tenant_id $SERVICE_TENANT \
+        --email=heat@example.com)
     keystone user-role-add --tenant-id $SERVICE_TENANT \
-                           --user-id $HEAT_USER \
-                           --role-id $SERVICE_ROLE
+        --user-id $HEAT_USER \
+        --role-id $SERVICE_ROLE
     # heat_stack_user role is for users created by Heat
     keystone role-create --name heat_stack_user
     if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
@@ -126,16 +126,16 @@
 # Ceilometer
 if [[ "$ENABLED_SERVICES" =~ "ceilometer" ]]; then
     CEILOMETER_USER=$(get_id keystone user-create --name=ceilometer \
-                                              --pass="$SERVICE_PASSWORD" \
-                                              --tenant_id $SERVICE_TENANT \
-                                              --email=ceilometer@example.com)
+        --pass="$SERVICE_PASSWORD" \
+        --tenant_id $SERVICE_TENANT \
+        --email=ceilometer@example.com)
     keystone user-role-add --tenant-id $SERVICE_TENANT \
-                           --user-id $CEILOMETER_USER \
-                           --role-id $ADMIN_ROLE
+        --user-id $CEILOMETER_USER \
+        --role-id $ADMIN_ROLE
     # Ceilometer needs ResellerAdmin role to access swift account stats.
     keystone user-role-add --tenant-id $SERVICE_TENANT \
-                           --user-id $CEILOMETER_USER \
-                           --role-id $RESELLER_ROLE
+        --user-id $CEILOMETER_USER \
+        --role-id $RESELLER_ROLE
     if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
         CEILOMETER_SERVICE=$(get_id keystone service-create \
             --name=ceilometer \
diff --git a/files/rpms-suse/horizon b/files/rpms-suse/horizon
index 73932ac..d3bde26 100644
--- a/files/rpms-suse/horizon
+++ b/files/rpms-suse/horizon
@@ -1,6 +1,5 @@
 apache2  # NOPRIME
 apache2-mod_wsgi  # NOPRIME
-nodejs
 python-CherryPy # why? (coming from apts)
 python-Paste
 python-PasteDeploy
diff --git a/files/rpms/horizon b/files/rpms/horizon
index 0ca18ca..aa27ab4 100644
--- a/files/rpms/horizon
+++ b/files/rpms/horizon
@@ -3,7 +3,6 @@
 gcc
 httpd # NOPRIME
 mod_wsgi  # NOPRIME
-nodejs # NOPRIME
 pylint
 python-anyjson
 python-BeautifulSoup
diff --git a/functions b/functions
index 9e99cb2..6137aaf 100644
--- a/functions
+++ b/functions
@@ -1,16 +1,17 @@
 # functions - Common functions used by DevStack components
 #
 # The following variables are assumed to be defined by certain functions:
-# ``ENABLED_SERVICES``
-# ``ERROR_ON_CLONE``
-# ``FILES``
-# ``GLANCE_HOSTPORT``
-# ``OFFLINE``
-# ``PIP_DOWNLOAD_CACHE``
-# ``PIP_USE_MIRRORS``
-# ``RECLONE``
-# ``TRACK_DEPENDS``
-# ``http_proxy``, ``https_proxy``, ``no_proxy``
+#
+# - ``ENABLED_SERVICES``
+# - ``ERROR_ON_CLONE``
+# - ``FILES``
+# - ``GLANCE_HOSTPORT``
+# - ``OFFLINE``
+# - ``PIP_DOWNLOAD_CACHE``
+# - ``PIP_USE_MIRRORS``
+# - ``RECLONE``
+# - ``TRACK_DEPENDS``
+# - ``http_proxy``, ``https_proxy``, ``no_proxy``
 
 
 # Save trace setting
@@ -54,7 +55,7 @@
 
 
 # Wrapper for ``apt-get`` to set cache and proxy environment variables
-# Uses globals ``OFFLINE``, ``*_proxy`
+# Uses globals ``OFFLINE``, ``*_proxy``
 # apt_get operation package [package ...]
 function apt_get() {
     [[ "$OFFLINE" = "True" || -z "$@" ]] && return
@@ -260,11 +261,12 @@
 #
 # Only packages required for the services in 1st argument will be
 # included.  Two bits of metadata are recognized in the prerequisite files:
-# - ``# NOPRIME`` defers installation to be performed later in stack.sh
+#
+# - ``# NOPRIME`` defers installation to be performed later in `stack.sh`
 # - ``# dist:DISTRO`` or ``dist:DISTRO1,DISTRO2`` limits the selection
 #   of the package to the distros listed.  The distro names are case insensitive.
 function get_packages() {
-    local services=$1
+    local services=$@
     local package_dir=$(_get_package_dir)
     local file_to_parse
     local service
@@ -276,7 +278,7 @@
     if [[ -z "$DISTRO" ]]; then
         GetDistro
     fi
-    for service in general ${services//,/ }; do
+    for service in ${services//,/ }; do
         # Allow individual services to specify dependencies
         if [[ -e ${package_dir}/${service} ]]; then
             file_to_parse="${file_to_parse} $service"
@@ -555,6 +557,18 @@
     [ "($uname -m)" = "$ARCH_TYPE" ]
 }
 
+# Checks if installed Apache is >= given version
+# $1 = x.y.z (version string of Apache)
+function check_apache_version {
+    local cmd="apachectl"
+    if ! [[ -x $(which apachectl 2>/dev/null) ]]; then
+        cmd="/usr/sbin/apachectl"
+    fi
+
+    local version=$($cmd -v | grep version | grep -Po 'Apache/\K[^ ]*')
+    expr "$version" '>=' $1 > /dev/null
+}
+
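The new `check_apache_version` relies on `expr`'s comparison operator, which falls back to lexicographic ordering when the operands are not plain integers.  A small sketch of that comparison (the version strings are illustrative; `apachectl` is not invoked here):

```shell
#!/usr/bin/env bash
# Sketch of the expr-based check used by check_apache_version above,
# with faked version strings instead of real apachectl output.

version_ge() {
    # expr treats dotted versions as strings, so this is a
    # lexicographic comparison.
    expr "$1" '>=' "$2" > /dev/null
}

version_ge 2.4.6 2.2 && echo "new enough"   # "2.4.6" >= "2.2" as strings

# Caveat of string comparison: a hypothetical 2.10 sorts *below* 2.4
# because '1' < '4' at the third character.
version_ge 2.10 2.4 || echo "2.10 compares below 2.4"
```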
 # git clone only if directory doesn't exist already.  Since ``DEST`` might not
 # be owned by the installation user, we create the directory and change the
 # ownership to the proper user.
@@ -580,7 +594,8 @@
     if echo $GIT_REF | egrep -q "^refs"; then
         # If our branch name is a gerrit style refs/changes/...
         if [[ ! -d $GIT_DEST ]]; then
-            [[ "$ERROR_ON_CLONE" = "True" ]] && exit 1
+            [[ "$ERROR_ON_CLONE" = "True" ]] && \
+                die $LINENO "Cloning not allowed in this configuration"
             git clone $GIT_REMOTE $GIT_DEST
         fi
         cd $GIT_DEST
@@ -588,7 +603,8 @@
     else
         # do a full clone only if the directory doesn't exist
         if [[ ! -d $GIT_DEST ]]; then
-            [[ "$ERROR_ON_CLONE" = "True" ]] && exit 1
+            [[ "$ERROR_ON_CLONE" = "True" ]] && \
+                die $LINENO "Cloning not allowed in this configuration"
             git clone $GIT_REMOTE $GIT_DEST
             cd $GIT_DEST
             # This checkout syntax works for both branches and tags
@@ -612,8 +628,7 @@
             elif [[ -n "`git show-ref refs/remotes/origin/$GIT_REF`" ]]; then
                 git_update_remote_branch $GIT_REF
             else
-                echo $GIT_REF is neither branch nor tag
-                exit 1
+                die $LINENO "$GIT_REF is neither branch nor tag"
             fi
 
         fi
@@ -713,7 +728,8 @@
     local section=$2
     local option=$3
     local value=$4
-    if ! grep -q "^\[$section\]" "$file"; then
+
+    if ! grep -q "^\[$section\]" "$file" 2>/dev/null; then
         # Add section at the end
         echo -e "\n[$section]" >>"$file"
     fi
@@ -825,6 +841,7 @@
         [[ ${service} == "cinder" && ${ENABLED_SERVICES} =~ "c-" ]] && return 0
         [[ ${service} == "ceilometer" && ${ENABLED_SERVICES} =~ "ceilometer-" ]] && return 0
         [[ ${service} == "glance" && ${ENABLED_SERVICES} =~ "g-" ]] && return 0
+        [[ ${service} == "ironic" && ${ENABLED_SERVICES} =~ "ir-" ]] && return 0
         [[ ${service} == "neutron" && ${ENABLED_SERVICES} =~ "q-" ]] && return 0
         [[ ${service} == "trove" && ${ENABLED_SERVICES} =~ "tr-" ]] && return 0
         [[ ${service} == "swift" && ${ENABLED_SERVICES} =~ "s-" ]] && return 0
@@ -980,7 +997,7 @@
 
 # Wrapper for ``pip install`` to set cache and proxy environment variables
 # Uses globals ``OFFLINE``, ``PIP_DOWNLOAD_CACHE``, ``PIP_USE_MIRRORS``,
-#   ``TRACK_DEPENDS``, ``*_proxy`
+# ``TRACK_DEPENDS``, ``*_proxy``
 # pip_install package [package ...]
 function pip_install {
     [[ "$OFFLINE" = "True" || -z "$@" ]] && return
@@ -1009,8 +1026,7 @@
     # /tmp/$USER-pip-build.  Even if a later component specifies foo <
     # 1.1, the existing extracted build will be used and cause
     # confusing errors.  By creating unique build directories we avoid
-    # this problem. See
-    #  https://github.com/pypa/pip/issues/709
+    # this problem. See https://github.com/pypa/pip/issues/709
     local pip_build_tmp=$(mktemp --tmpdir -d pip-build.XXXXX)
 
     $SUDO_PIP PIP_DOWNLOAD_CACHE=${PIP_DOWNLOAD_CACHE:-/var/cache/pip} \
@@ -1144,8 +1160,8 @@
 }
 
 
-# Helper to remove the *.failure files under $SERVICE_DIR/$SCREEN_NAME
-# This is used for service_check when all the screen_it are called finished
+# Helper to remove the ``*.failure`` files under ``$SERVICE_DIR/$SCREEN_NAME``.
+# This is used for ``service_check`` after all the ``screen_it`` calls have finished.
 # init_service_check
 function init_service_check() {
     SCREEN_NAME=${SCREEN_NAME:-stack}
@@ -1235,7 +1251,11 @@
 
 # ``pip install -e`` the package, which processes the dependencies
 # using pip before running `setup.py develop`
-# Uses globals ``STACK_USER``, ``TRACK_DEPENDS``, ``REQUIREMENTS_DIR``
+#
+# Updates the dependencies in project_dir from the
+# openstack/requirements global list before installing anything.
+#
+# Uses globals ``TRACK_DEPENDS``, ``REQUIREMENTS_DIR``
 # setup_develop directory
 function setup_develop() {
     local project_dir=$1
@@ -1251,14 +1271,33 @@
             $SUDO_CMD python update.py $project_dir)
     fi
 
+    setup_develop_no_requirements_update $project_dir
+
+    # We've just gone and possibly modified the user's source tree in an
+    # automated way, which is considered bad form if it's a development
+    # tree because we've screwed up their next git checkin. So undo it.
+    #
+    # However... there are some circumstances, like running in the gate
+    # where we really really want the overridden version to stick. So provide
+    # a variable that tells us whether or not we should UNDO the requirements
+    # changes (this will be set to False in the OpenStack ci gate)
+    if [ $UNDO_REQUIREMENTS = "True" ]; then
+        if [ $update_requirements -eq 0 ]; then
+            (cd $project_dir && git reset --hard)
+        fi
+    fi
+}
+
+# ``pip install -e`` the package, which processes the dependencies
+# using pip before running `setup.py develop`
+# Uses globals ``STACK_USER``
+# setup_develop_no_requirements_update directory
+function setup_develop_no_requirements_update() {
+    local project_dir=$1
+
     pip_install -e $project_dir
     # ensure that further actions can do things like setup.py sdist
     safe_chown -R $STACK_USER $1/*.egg-info
-
-    # Undo requirements changes, if we made them
-    if [ $update_requirements -eq 0 ]; then
-        (cd $project_dir && git checkout -- requirements.txt test-requirements.txt setup.py)
-    fi
 }
 
 
@@ -1299,10 +1338,12 @@
 }
 
 
-# Retrieve an image from a URL and upload into Glance
+# Retrieve an image from a URL and upload into Glance.
 # Uses the following variables:
-#   ``FILES`` must be set to the cache dir
-#   ``GLANCE_HOSTPORT``
+#
+# - ``FILES`` must be set to the cache dir
+# - ``GLANCE_HOSTPORT``
+#
 # upload_image image-url glance-token
 function upload_image() {
     local image_url=$1
@@ -1311,11 +1352,24 @@
     # Create a directory for the downloaded image tarballs.
     mkdir -p $FILES/images
 
-    # Downloads the image (uec ami+aki style), then extracts it.
-    IMAGE_FNAME=`basename "$image_url"`
-    if [[ ! -f $FILES/$IMAGE_FNAME || "$(stat -c "%s" $FILES/$IMAGE_FNAME)" = "0" ]]; then
-        wget -c $image_url -O $FILES/$IMAGE_FNAME
-        if [[ $? -ne 0 ]]; then
+    if [[ $image_url != file* ]]; then
+        # Downloads the image (uec ami+aki style), then extracts it.
+        IMAGE_FNAME=`basename "$image_url"`
+        if [[ ! -f $FILES/$IMAGE_FNAME || "$(stat -c "%s" $FILES/$IMAGE_FNAME)" = "0" ]]; then
+            wget -c $image_url -O $FILES/$IMAGE_FNAME
+            if [[ $? -ne 0 ]]; then
+                echo "Not found: $image_url"
+                return
+            fi
+        fi
+        IMAGE="$FILES/${IMAGE_FNAME}"
+    else
+        # File based URL (RFC 1738): file://host/path
+        # Remote files are not considered here.
+        # *nix: file:///home/user/path/file
+        # windows: file:///C:/Documents%20and%20Settings/user/path/file
+        IMAGE=$(echo $image_url | sed "s/^file:\/\///g")
+        if [[ ! -f $IMAGE || "$(stat -c "%s" $IMAGE)" == "0" ]]; then
             echo "Not found: $image_url"
             return
         fi
@@ -1323,7 +1377,6 @@
 
     # OpenVZ-format images are provided as .tar.gz, but not decompressed prior to loading
     if [[ "$image_url" =~ 'openvz' ]]; then
-        IMAGE="$FILES/${IMAGE_FNAME}"
         IMAGE_NAME="${IMAGE_FNAME%.tar.gz}"
         glance --os-auth-token $token --os-image-url http://$GLANCE_HOSTPORT image-create --name "$IMAGE_NAME" --is-public=True --container-format ami --disk-format ami < "${IMAGE}"
         return
@@ -1331,26 +1384,51 @@
 
     # vmdk format images
     if [[ "$image_url" =~ '.vmdk' ]]; then
-        IMAGE="$FILES/${IMAGE_FNAME}"
         IMAGE_NAME="${IMAGE_FNAME%.vmdk}"
 
         # Before we can upload vmdk type images to glance, we need to know it's
         # disk type, storage adapter, and networking adapter. These values are
-        # passed to glance as custom properties. We take these values from the
+        # passed to glance as custom properties.
+        # We take these values from the vmdk file if populated. Otherwise, we use
         # vmdk filename, which is expected in the following format:
         #
-        #     <name>-<disk type>:<storage adapter>:<network adapter>
+        #     <name>-<disk type>;<storage adapter>;<network adapter>
         #
         # If the filename does not follow the above format then the vsphere
         # driver will supply default values.
-        property_string=`echo "$IMAGE_NAME" | grep -oP '(?<=-)(?!.*-).+:.+:.+$'`
-        if [[ ! -z "$property_string" ]]; then
-            IFS=':' read -a props <<< "$property_string"
-            vmdk_disktype="${props[0]}"
-            vmdk_adapter_type="${props[1]}"
-            vmdk_net_adapter="${props[2]}"
+
+        vmdk_adapter_type=""
+        vmdk_disktype=""
+        vmdk_net_adapter=""
+
+        # vmdk adapter type
+        vmdk_adapter_type="$(head -25 $IMAGE | grep -a -F -m 1 'ddb.adapterType =')"
+        vmdk_adapter_type="${vmdk_adapter_type#*\"}"
+        vmdk_adapter_type="${vmdk_adapter_type%?}"
+
+        # vmdk disk type
+        vmdk_create_type="$(head -25 $IMAGE | grep -a -F -m 1 'createType=')"
+        vmdk_create_type="${vmdk_create_type#*\"}"
+        vmdk_create_type="${vmdk_create_type%?}"
+        if [[ "$vmdk_create_type" = "monolithicSparse" ]]; then
+            vmdk_disktype="sparse"
+        elif [[ "$vmdk_create_type" = "monolithicFlat" ]]; then
+            die $LINENO "Monolithic flat disks should use a descriptor-data pair." \
+            "Please provide the disk and not the descriptor."
+        else
+            #TODO(alegendre): handle streamOptimized once supported by VMware driver.
+            vmdk_disktype="preallocated"
         fi
 
+        # NOTE: For backwards compatibility reasons, colons may be used in place
+        # of semi-colons for property delimiters but they are not permitted
+        # characters in NTFS filesystems.
+        property_string=`echo "$IMAGE_NAME" | grep -oP '(?<=-)(?!.*-).+[:;].+[:;].+$'`
+        IFS=':;' read -a props <<< "$property_string"
+        vmdk_disktype="${props[0]:-$vmdk_disktype}"
+        vmdk_adapter_type="${props[1]:-$vmdk_adapter_type}"
+        vmdk_net_adapter="${props[2]:-$vmdk_net_adapter}"
+
         glance --os-auth-token $token --os-image-url http://$GLANCE_HOSTPORT image-create --name "$IMAGE_NAME" --is-public=True --container-format bare --disk-format vmdk --property vmware_disktype="$vmdk_disktype" --property vmware_adaptertype="$vmdk_adapter_type" --property hw_vif_model="$vmdk_net_adapter" < "${IMAGE}"
         return
     fi
@@ -1358,7 +1436,6 @@
     # XenServer-vhd-ovf-format images are provided as .vhd.tgz
     # and should not be decompressed prior to loading
     if [[ "$image_url" =~ '.vhd.tgz' ]]; then
-        IMAGE="$FILES/${IMAGE_FNAME}"
         IMAGE_NAME="${IMAGE_FNAME%.vhd.tgz}"
         glance --os-auth-token $token --os-image-url http://$GLANCE_HOSTPORT image-create --name "$IMAGE_NAME" --is-public=True --container-format=ovf --disk-format=vhd < "${IMAGE}"
         return
@@ -1368,12 +1445,11 @@
     # and should not be decompressed prior to loading.
     # Setting metadata, so PV mode is used.
     if [[ "$image_url" =~ '.xen-raw.tgz' ]]; then
-        IMAGE="$FILES/${IMAGE_FNAME}"
         IMAGE_NAME="${IMAGE_FNAME%.xen-raw.tgz}"
         glance \
-          --os-auth-token $token \
-          --os-image-url http://$GLANCE_HOSTPORT \
-          image-create \
+            --os-auth-token $token \
+            --os-image-url http://$GLANCE_HOSTPORT \
+            image-create \
             --name "$IMAGE_NAME" --is-public=True \
             --container-format=tgz --disk-format=raw \
             --property vm_mode=xen < "${IMAGE}"
@@ -1396,17 +1472,16 @@
             mkdir "$xdir"
             tar -zxf $FILES/$IMAGE_FNAME -C "$xdir"
             KERNEL=$(for f in "$xdir/"*-vmlinuz* "$xdir/"aki-*/image; do
-                     [ -f "$f" ] && echo "$f" && break; done; true)
+                [ -f "$f" ] && echo "$f" && break; done; true)
             RAMDISK=$(for f in "$xdir/"*-initrd* "$xdir/"ari-*/image; do
-                     [ -f "$f" ] && echo "$f" && break; done; true)
+                [ -f "$f" ] && echo "$f" && break; done; true)
             IMAGE=$(for f in "$xdir/"*.img "$xdir/"ami-*/image; do
-                     [ -f "$f" ] && echo "$f" && break; done; true)
+                [ -f "$f" ] && echo "$f" && break; done; true)
             if [[ -z "$IMAGE_NAME" ]]; then
                 IMAGE_NAME=$(basename "$IMAGE" ".img")
             fi
             ;;
         *.img)
-            IMAGE="$FILES/$IMAGE_FNAME";
             IMAGE_NAME=$(basename "$IMAGE" ".img")
             format=$(qemu-img info ${IMAGE} | awk '/^file format/ { print $3; exit }')
             if [[ ",qcow2,raw,vdi,vmdk,vpc," =~ ",$format," ]]; then
@@ -1417,20 +1492,17 @@
             CONTAINER_FORMAT=bare
             ;;
         *.img.gz)
-            IMAGE="$FILES/${IMAGE_FNAME}"
             IMAGE_NAME=$(basename "$IMAGE" ".img.gz")
             DISK_FORMAT=raw
             CONTAINER_FORMAT=bare
             UNPACK=zcat
             ;;
         *.qcow2)
-            IMAGE="$FILES/${IMAGE_FNAME}"
             IMAGE_NAME=$(basename "$IMAGE" ".qcow2")
             DISK_FORMAT=qcow2
             CONTAINER_FORMAT=bare
             ;;
         *.iso)
-            IMAGE="$FILES/${IMAGE_FNAME}"
             IMAGE_NAME=$(basename "$IMAGE" ".iso")
             DISK_FORMAT=iso
             CONTAINER_FORMAT=bare
@@ -1464,7 +1536,8 @@
 # When called from stackrc/localrc DATABASE_BACKENDS has not been
 # initialized yet, just save the configuration selection and call back later
 # to validate it.
-#  $1 The name of the database backend to use (mysql, postgresql, ...)
+#
+# ``$1`` - the name of the database backend to use (mysql, postgresql, ...)
 function use_database {
     if [[ -z "$DATABASE_BACKENDS" ]]; then
         # No backends registered means this is likely called from ``localrc``
@@ -1505,7 +1578,7 @@
 
 
 # Wrapper for ``yum`` to set proxy environment variables
-# Uses globals ``OFFLINE``, ``*_proxy`
+# Uses globals ``OFFLINE``, ``*_proxy``
 # yum_install package [package ...]
 function yum_install() {
     [[ "$OFFLINE" = "True" ]] && return
@@ -1562,7 +1635,6 @@
         else
             die $LINENO "[Fail] Could ping server"
         fi
-        exit 1
     fi
 }
 
@@ -1575,7 +1647,6 @@
     if [[ $ip = "" ]];then
         echo "$nova_result"
         die $LINENO "[Fail] Coudn't get ipaddress of VM"
-        exit 1
     fi
     echo $ip
 }
@@ -1691,23 +1762,23 @@
 #
 # _vercmp_r sep ver1 ver2
 function _vercmp_r {
-  typeset sep
-  typeset -a ver1=() ver2=()
-  sep=$1; shift
-  ver1=("${@:1:sep}")
-  ver2=("${@:sep+1}")
+    typeset sep
+    typeset -a ver1=() ver2=()
+    sep=$1; shift
+    ver1=("${@:1:sep}")
+    ver2=("${@:sep+1}")
 
-  if ((ver1 > ver2)); then
-    echo 1; return 0
-  elif ((ver2 > ver1)); then
-    echo -1; return 0
-  fi
+    if ((ver1 > ver2)); then
+        echo 1; return 0
+    elif ((ver2 > ver1)); then
+        echo -1; return 0
+    fi
 
-  if ((sep <= 1)); then
-    echo 0; return 0
-  fi
+    if ((sep <= 1)); then
+        echo 0; return 0
+    fi
 
-  _vercmp_r $((sep-1)) "${ver1[@]:1}" "${ver2[@]:1}"
+    _vercmp_r $((sep-1)) "${ver1[@]:1}" "${ver2[@]:1}"
 }
 
 
@@ -1729,13 +1800,13 @@
 #
 # vercmp_numbers ver1 ver2
 vercmp_numbers() {
-  typeset v1=$1 v2=$2 sep
-  typeset -a ver1 ver2
+    typeset v1=$1 v2=$2 sep
+    typeset -a ver1 ver2
 
-  IFS=. read -ra ver1 <<< "$v1"
-  IFS=. read -ra ver2 <<< "$v2"
+    IFS=. read -ra ver1 <<< "$v1"
+    IFS=. read -ra ver2 <<< "$v2"
 
-  _vercmp_r "${#ver1[@]}" "${ver1[@]}" "${ver2[@]}"
+    _vercmp_r "${#ver1[@]}" "${ver1[@]}" "${ver2[@]}"
 }
 
 
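The reindented `_vercmp_r`/`vercmp_numbers` pair above can be exercised on its own.  This standalone copy (function bodies as in this file) shows the field-by-field numeric behavior that a plain string compare would get wrong:

```shell
#!/usr/bin/env bash
# Standalone exercise of the vercmp helpers above (bodies copied verbatim).

function _vercmp_r {
    typeset sep
    typeset -a ver1=() ver2=()
    sep=$1; shift
    ver1=("${@:1:sep}")
    ver2=("${@:sep+1}")

    if ((ver1 > ver2)); then
        echo 1; return 0
    elif ((ver2 > ver1)); then
        echo -1; return 0
    fi

    if ((sep <= 1)); then
        echo 0; return 0
    fi

    _vercmp_r $((sep-1)) "${ver1[@]:1}" "${ver2[@]:1}"
}

vercmp_numbers() {
    typeset v1=$1 v2=$2 sep
    typeset -a ver1 ver2

    IFS=. read -ra ver1 <<< "$v1"
    IFS=. read -ra ver2 <<< "$v2"

    _vercmp_r "${#ver1[@]}" "${ver1[@]}" "${ver2[@]}"
}

vercmp_numbers 1.10.0 1.2.3   # prints 1: field-wise numeric, 10 > 2
vercmp_numbers 1.2 1.10       # prints -1, unlike a string compare
vercmp_numbers 2.0 2.0        # prints 0
```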
diff --git a/lib/apache b/lib/apache
index 3a1f6f1..8ae78b2 100644
--- a/lib/apache
+++ b/lib/apache
@@ -2,15 +2,20 @@
 # Functions to control configuration and operation of apache web server
 
 # Dependencies:
-# ``functions`` file
-# is_apache_enabled_service
-# install_apache_wsgi
-# config_apache_wsgi
-# enable_apache_site
-# disable_apache_site
-# start_apache_server
-# stop_apache_server
-# restart_apache_server
+#
+# - ``functions`` file
+# - ``STACK_USER`` must be defined
+
+# lib/apache exports the following functions:
+#
+# - is_apache_enabled_service
+# - install_apache_wsgi
+# - config_apache_wsgi
+# - enable_apache_site
+# - disable_apache_site
+# - start_apache_server
+# - stop_apache_server
+# - restart_apache_server
 
 # Save trace setting
 XTRACE=$(set +o | grep xtrace)
@@ -18,7 +23,7 @@
 
 # Allow overriding the default Apache user and group, default to
 # current user and his default group.
-APACHE_USER=${APACHE_USER:-$USER}
+APACHE_USER=${APACHE_USER:-$STACK_USER}
 APACHE_GROUP=${APACHE_GROUP:-$(id -gn $APACHE_USER)}
 
 
@@ -116,6 +121,7 @@
 # Restore xtrace
 $XTRACE
 
-# Local variables:
-# mode: shell-script
-# End:
+# Tell emacs to use shell-script-mode
+## Local variables:
+## mode: shell-script
+## End:
diff --git a/lib/baremetal b/lib/baremetal
index 52af420..a0df85e 100644
--- a/lib/baremetal
+++ b/lib/baremetal
@@ -1,19 +1,19 @@
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
+## vim: tabstop=4 shiftwidth=4 softtabstop=4
 
-# Copyright (c) 2012 Hewlett-Packard Development Company, L.P.
-# All Rights Reserved.
-#
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
+## Copyright (c) 2012 Hewlett-Packard Development Company, L.P.
+## All Rights Reserved.
+##
+##    Licensed under the Apache License, Version 2.0 (the "License"); you may
+##    not use this file except in compliance with the License. You may obtain
+##    a copy of the License at
+##
+##         http://www.apache.org/licenses/LICENSE-2.0
+##
+##    Unless required by applicable law or agreed to in writing, software
+##    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+##    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+##    License for the specific language governing permissions and limitations
+##    under the License.
 
 
 # This file provides devstack with the environment and utilities to
@@ -24,7 +24,8 @@
 # control physical hardware resources on the same network, if you know
 # the MAC address(es) and IPMI credentials.
 #
-# At a minimum, to enable the baremetal driver, you must set these in loclarc:
+# At a minimum, to enable the baremetal driver, you must set these in localrc:
+#
 #    VIRT_DRIVER=baremetal
 #    ENABLED_SERVICES="$ENABLED_SERVICES,baremetal"
 #
@@ -38,11 +39,13 @@
 # Below that, various functions are defined, which are called by devstack
 # in the following order:
 #
-#  before nova-cpu starts:
+# before nova-cpu starts:
+#
 #  - prepare_baremetal_toolchain
 #  - configure_baremetal_nova_dirs
 #
-#  after nova and glance have started:
+# after nova and glance have started:
+#
 #  - build_and_upload_baremetal_deploy_k_and_r $token
 #  - create_baremetal_flavor $BM_DEPLOY_KERNEL_ID $BM_DEPLOY_RAMDISK_ID
 #  - upload_baremetal_image $url $token
@@ -58,11 +61,13 @@
 # -------------------
 
 # sub-driver to use for kernel deployment
-#  - nova.virt.baremetal.pxe.PXE
-#  - nova.virt.baremetal.tilera.TILERA
+#
+# - nova.virt.baremetal.pxe.PXE
+# - nova.virt.baremetal.tilera.TILERA
 BM_DRIVER=${BM_DRIVER:-nova.virt.baremetal.pxe.PXE}
 
 # sub-driver to use for remote power management
+#
 # - nova.virt.baremetal.fake.FakePowerManager, for manual power control
 # - nova.virt.baremetal.ipmi.IPMI, for remote IPMI
 # - nova.virt.baremetal.tilera_pdu.Pdu, for TilePro hardware
@@ -83,10 +88,12 @@
 # To provide PXE, configure nova-network's dnsmasq rather than run the one
 # dedicated to baremetal. When enable this, make sure these conditions are
 # fulfilled:
-#  1) nova-compute and nova-network runs on the same host
-#  2) nova-network uses FlatDHCPManager
+#
+# 1) nova-compute and nova-network run on the same host
+# 2) nova-network uses FlatDHCPManager
+#
 # NOTE: the other BM_DNSMASQ_* have no effect on the behavior if this option
-#       is enabled.
+# is enabled.
 BM_DNSMASQ_FROM_NOVA_NETWORK=`trueorfalse False $BM_DNSMASQ_FROM_NOVA_NETWORK`
 
 # BM_DNSMASQ_IFACE should match FLAT_NETWORK_BRIDGE
@@ -103,9 +110,9 @@
 # BM_DNSMASQ_DNS provide dns server to bootstrap clients
 BM_DNSMASQ_DNS=${BM_DNSMASQ_DNS:-}
 
-# BM_FIRST_MAC *must* be set to the MAC address of the node you will boot.
-#              This is passed to dnsmasq along with the kernel/ramdisk to
-#              deploy via PXE.
+# BM_FIRST_MAC *must* be set to the MAC address of the node you will
+# boot.  This is passed to dnsmasq along with the kernel/ramdisk to
+# deploy via PXE.
 BM_FIRST_MAC=${BM_FIRST_MAC:-}
 
 # BM_SECOND_MAC is only important if the host has >1 NIC.
@@ -119,9 +126,9 @@
 BM_PM_USER=${BM_PM_USER:-user}
 BM_PM_PASS=${BM_PM_PASS:-pass}
 
-# BM_FLAVOR_* options are arbitrary and not necessarily related to physical
-#             hardware capacity. These can be changed if you are testing
-#             BaremetalHostManager with multiple nodes and different flavors.
+# BM_FLAVOR_* options are arbitrary and not necessarily related to
+# physical hardware capacity. These can be changed if you are testing
+# BaremetalHostManager with multiple nodes and different flavors.
 BM_CPU_ARCH=${BM_CPU_ARCH:-x86_64}
 BM_FLAVOR_CPU=${BM_FLAVOR_CPU:-1}
 BM_FLAVOR_RAM=${BM_FLAVOR_RAM:-1024}
@@ -198,8 +205,8 @@
     BM_FIRST_MAC=$(sudo $bm_poseur get-macs)
 
     # NOTE: there is currently a limitation in baremetal driver
-    #       that requires second MAC even if it is not used.
-    #       Passing a fake value allows this to work.
+    # that requires second MAC even if it is not used.
+    # Passing a fake value allows this to work.
     # TODO(deva): remove this after driver issue is fixed.
     BM_SECOND_MAC='12:34:56:78:90:12'
 }
@@ -256,19 +263,19 @@
 
     # load them into glance
     BM_DEPLOY_KERNEL_ID=$(glance \
-         --os-auth-token $token \
-         --os-image-url http://$GLANCE_HOSTPORT \
-         image-create \
-         --name $BM_DEPLOY_KERNEL \
-         --is-public True --disk-format=aki \
-         < $TOP_DIR/files/$BM_DEPLOY_KERNEL  | grep ' id ' | get_field 2)
+        --os-auth-token $token \
+        --os-image-url http://$GLANCE_HOSTPORT \
+        image-create \
+        --name $BM_DEPLOY_KERNEL \
+        --is-public True --disk-format=aki \
+        < $TOP_DIR/files/$BM_DEPLOY_KERNEL  | grep ' id ' | get_field 2)
     BM_DEPLOY_RAMDISK_ID=$(glance \
-         --os-auth-token $token \
-         --os-image-url http://$GLANCE_HOSTPORT \
-         image-create \
-         --name $BM_DEPLOY_RAMDISK \
-         --is-public True --disk-format=ari \
-         < $TOP_DIR/files/$BM_DEPLOY_RAMDISK  | grep ' id ' | get_field 2)
+        --os-auth-token $token \
+        --os-image-url http://$GLANCE_HOSTPORT \
+        image-create \
+        --name $BM_DEPLOY_RAMDISK \
+        --is-public True --disk-format=ari \
+        < $TOP_DIR/files/$BM_DEPLOY_RAMDISK  | grep ' id ' | get_field 2)
 }
 
 # create a basic baremetal flavor, associated with deploy kernel & ramdisk
@@ -278,16 +285,16 @@
     aki=$1
     ari=$2
     nova flavor-create $BM_FLAVOR_NAME $BM_FLAVOR_ID \
-            $BM_FLAVOR_RAM $BM_FLAVOR_ROOT_DISK $BM_FLAVOR_CPU
+        $BM_FLAVOR_RAM $BM_FLAVOR_ROOT_DISK $BM_FLAVOR_CPU
     nova flavor-key $BM_FLAVOR_NAME set \
-            "cpu_arch"="$BM_FLAVOR_ARCH" \
-            "baremetal:deploy_kernel_id"="$aki" \
-            "baremetal:deploy_ramdisk_id"="$ari"
+        "cpu_arch"="$BM_FLAVOR_ARCH" \
+        "baremetal:deploy_kernel_id"="$aki" \
+        "baremetal:deploy_ramdisk_id"="$ari"
 
 }
 
-# pull run-time kernel/ramdisk out of disk image and load into glance
-# note that $file is currently expected to be in qcow2 format
+# Pull run-time kernel/ramdisk out of disk image and load into glance.
+# Note that $file is currently expected to be in qcow2 format.
 # Sets KERNEL_ID and RAMDISK_ID
 #
 # Usage: extract_and_upload_k_and_r_from_image $token $file
@@ -311,19 +318,19 @@
 
     # load them into glance
     KERNEL_ID=$(glance \
-         --os-auth-token $token \
-         --os-image-url http://$GLANCE_HOSTPORT \
-         image-create \
-         --name $image_name-kernel \
-         --is-public True --disk-format=aki \
-         < $TOP_DIR/files/$OUT_KERNEL | grep ' id ' | get_field 2)
+        --os-auth-token $token \
+        --os-image-url http://$GLANCE_HOSTPORT \
+        image-create \
+        --name $image_name-kernel \
+        --is-public True --disk-format=aki \
+        < $TOP_DIR/files/$OUT_KERNEL | grep ' id ' | get_field 2)
     RAMDISK_ID=$(glance \
-         --os-auth-token $token \
-         --os-image-url http://$GLANCE_HOSTPORT \
-         image-create \
-         --name $image_name-initrd \
-         --is-public True --disk-format=ari \
-         < $TOP_DIR/files/$OUT_RAMDISK | grep ' id ' | get_field 2)
+        --os-auth-token $token \
+        --os-image-url http://$GLANCE_HOSTPORT \
+        image-create \
+        --name $image_name-initrd \
+        --is-public True --disk-format=ari \
+        < $TOP_DIR/files/$OUT_RAMDISK | grep ' id ' | get_field 2)
 }
 
 
@@ -365,11 +372,11 @@
             mkdir "$xdir"
             tar -zxf $FILES/$IMAGE_FNAME -C "$xdir"
             KERNEL=$(for f in "$xdir/"*-vmlinuz* "$xdir/"aki-*/image; do
-                     [ -f "$f" ] && echo "$f" && break; done; true)
+                [ -f "$f" ] && echo "$f" && break; done; true)
             RAMDISK=$(for f in "$xdir/"*-initrd* "$xdir/"ari-*/image; do
-                     [ -f "$f" ] && echo "$f" && break; done; true)
+                [ -f "$f" ] && echo "$f" && break; done; true)
             IMAGE=$(for f in "$xdir/"*.img "$xdir/"ami-*/image; do
-                     [ -f "$f" ] && echo "$f" && break; done; true)
+                [ -f "$f" ] && echo "$f" && break; done; true)
             if [[ -z "$IMAGE_NAME" ]]; then
                 IMAGE_NAME=$(basename "$IMAGE" ".img")
             fi
@@ -403,19 +410,19 @@
             --container-format ari \
             --disk-format ari < "$RAMDISK" | grep ' id ' | get_field 2)
     else
-       # TODO(deva): add support for other image types
-       return
+        # TODO(deva): add support for other image types
+        return
     fi
 
     glance \
-       --os-auth-token $token \
-       --os-image-url http://$GLANCE_HOSTPORT \
-       image-create \
-       --name "${IMAGE_NAME%.img}" --is-public True \
-       --container-format $CONTAINER_FORMAT \
-       --disk-format $DISK_FORMAT \
-       ${KERNEL_ID:+--property kernel_id=$KERNEL_ID} \
-       ${RAMDISK_ID:+--property ramdisk_id=$RAMDISK_ID} < "${IMAGE}"
+        --os-auth-token $token \
+        --os-image-url http://$GLANCE_HOSTPORT \
+        image-create \
+        --name "${IMAGE_NAME%.img}" --is-public True \
+        --container-format $CONTAINER_FORMAT \
+        --disk-format $DISK_FORMAT \
+        ${KERNEL_ID:+--property kernel_id=$KERNEL_ID} \
+        ${RAMDISK_ID:+--property ramdisk_id=$RAMDISK_ID} < "${IMAGE}"
 
     # override DEFAULT_IMAGE_NAME so that tempest can find the image
     # that we just uploaded in glance
@@ -430,7 +437,7 @@
     done
 }
 
-# inform nova-baremetal about nodes, MACs, etc
+# Inform nova-baremetal about nodes, MACs, etc.
 # Defaults to using BM_FIRST_MAC and BM_SECOND_MAC if parameters not specified
 #
 # Usage: add_baremetal_node <first_mac> <second_mac>
@@ -439,24 +446,27 @@
     mac_2=${2:-$BM_SECOND_MAC}
 
     id=$(nova baremetal-node-create \
-       --pm_address="$BM_PM_ADDR" \
-       --pm_user="$BM_PM_USER" \
-       --pm_password="$BM_PM_PASS" \
-       "$BM_HOSTNAME" \
-       "$BM_FLAVOR_CPU" \
-       "$BM_FLAVOR_RAM" \
-       "$BM_FLAVOR_ROOT_DISK" \
-       "$mac_1" \
-       | grep ' id ' | get_field 2 )
+        --pm_address="$BM_PM_ADDR" \
+        --pm_user="$BM_PM_USER" \
+        --pm_password="$BM_PM_PASS" \
+        "$BM_HOSTNAME" \
+        "$BM_FLAVOR_CPU" \
+        "$BM_FLAVOR_RAM" \
+        "$BM_FLAVOR_ROOT_DISK" \
+        "$mac_1" \
+        | grep ' id ' | get_field 2 )
     [ $? -eq 0 ] || [ "$id" ] || die $LINENO "Error adding baremetal node"
-    id2=$(nova baremetal-interface-add "$id" "$mac_2" )
-    [ $? -eq 0 ] || [ "$id2" ] || die $LINENO "Error adding interface to barmetal node $id"
+    if [ -n "$mac_2" ]; then
+        id2=$(nova baremetal-interface-add "$id" "$mac_2" )
+        [ $? -eq 0 ] || [ "$id2" ] || die $LINENO "Error adding interface to baremetal node $id"
+    fi
 }
 
 
 # Restore xtrace
 $XTRACE
 
-# Local variables:
-# mode: shell-script
-# End:
+# Tell emacs to use shell-script-mode
+## Local variables:
+## mode: shell-script
+## End:
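The glance and nova invocations in this file all parse pretty-printed CLI tables through `grep ' id ' | get_field 2`. A simplified stand-in for the `get_field` helper (the real one lives in DevStack's `functions` file):

```shell
#!/usr/bin/env bash
# Extract the Nth "|"-delimited column from table-style CLI output.
function get_field {
    local data
    while read -r data; do
        # field N is awk's $(N+1): column 0 is the text before the first "|"
        echo "$data" | awk -F'[ \t]*\\|[ \t]*' -v n=$(($1 + 1)) '{print $n}'
    done
}

printf '| id | 3f2504e0 |\n' | grep ' id ' | get_field 2   # -> 3f2504e0
```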
diff --git a/lib/ceilometer b/lib/ceilometer
index 1b04319..8e2970c 100644
--- a/lib/ceilometer
+++ b/lib/ceilometer
@@ -2,12 +2,15 @@
 # Install and start **Ceilometer** service
 
 # To enable a minimal set of Ceilometer services, add the following to localrc:
+#
 #   enable_service ceilometer-acompute ceilometer-acentral ceilometer-collector ceilometer-api
 #
 # To ensure Ceilometer alarming services are enabled also, further add to the localrc:
+#
 #   enable_service ceilometer-alarm-notifier ceilometer-alarm-evaluator
 
 # Dependencies:
+#
 # - functions
 # - OS_AUTH_URL for auth in api
 # - DEST set to the destination directory
@@ -16,12 +19,12 @@
 
 # stack.sh
 # ---------
-# install_ceilometer
-# configure_ceilometer
-# init_ceilometer
-# start_ceilometer
-# stop_ceilometer
-# cleanup_ceilometer
+# - install_ceilometer
+# - configure_ceilometer
+# - init_ceilometer
+# - start_ceilometer
+# - stop_ceilometer
+# - cleanup_ceilometer
 
 # Save trace setting
 XTRACE=$(set +o | grep xtrace)
@@ -64,10 +67,10 @@
     setup_develop $CEILOMETER_DIR
 
     [ ! -d $CEILOMETER_CONF_DIR ] && sudo mkdir -m 755 -p $CEILOMETER_CONF_DIR
-    sudo chown $USER $CEILOMETER_CONF_DIR
+    sudo chown $STACK_USER $CEILOMETER_CONF_DIR
 
     [ ! -d $CEILOMETER_API_LOG_DIR ] &&  sudo mkdir -m 755 -p $CEILOMETER_API_LOG_DIR
-    sudo chown $USER $CEILOMETER_API_LOG_DIR
+    sudo chown $STACK_USER $CEILOMETER_API_LOG_DIR
 
     iniset_rpc_backend ceilometer $CEILOMETER_CONF DEFAULT
 
@@ -79,6 +82,10 @@
     cp $CEILOMETER_DIR/etc/ceilometer/pipeline.yaml $CEILOMETER_CONF_DIR
     iniset $CEILOMETER_CONF DEFAULT policy_file $CEILOMETER_CONF_DIR/policy.json
 
+    if [ "$CEILOMETER_PIPELINE_INTERVAL" ]; then
+        sed -i "s/interval:.*/interval: ${CEILOMETER_PIPELINE_INTERVAL}/" $CEILOMETER_CONF_DIR/pipeline.yaml
+    fi
+
     # the compute and central agents need these credentials in order to
     # call out to the public nova and glance APIs
     iniset $CEILOMETER_CONF DEFAULT os_username ceilometer
@@ -91,7 +98,7 @@
     iniset $CEILOMETER_CONF keystone_authtoken admin_tenant_name $SERVICE_TENANT_NAME
     iniset $CEILOMETER_CONF keystone_authtoken signing_dir $CEILOMETER_AUTH_CACHE_DIR
 
-    if [[ "$CEILOMETER_BACKEND" = 'mysql' ]]; then
+    if [ "$CEILOMETER_BACKEND" = 'mysql' ] || [ "$CEILOMETER_BACKEND" = 'postgresql' ] ; then
         iniset $CEILOMETER_CONF database connection `database_connection_url ceilometer`
     else
         iniset $CEILOMETER_CONF database connection mongodb://localhost:27017/ceilometer
@@ -116,7 +123,7 @@
     sudo chown $STACK_USER $CEILOMETER_AUTH_CACHE_DIR
     rm -f $CEILOMETER_AUTH_CACHE_DIR/*
 
-    if [[ "$CEILOMETER_BACKEND" = 'mysql' ]]; then
+    if [ "$CEILOMETER_BACKEND" = 'mysql' ] || [ "$CEILOMETER_BACKEND" = 'postgresql' ] ; then
         recreate_database ceilometer utf8
         $CEILOMETER_BIN_DIR/ceilometer-dbsync
     fi
@@ -134,12 +141,20 @@
 
 # start_ceilometer() - Start running processes, including screen
 function start_ceilometer() {
-    screen_it ceilometer-acompute "sg $LIBVIRT_GROUP \"ceilometer-agent-compute --config-file $CEILOMETER_CONF\""
-    screen_it ceilometer-acentral "ceilometer-agent-central --config-file $CEILOMETER_CONF"
-    screen_it ceilometer-collector "ceilometer-collector --config-file $CEILOMETER_CONF"
-    screen_it ceilometer-api "ceilometer-api -d -v --log-dir=$CEILOMETER_API_LOG_DIR --config-file $CEILOMETER_CONF"
-    screen_it ceilometer-alarm-notifier "ceilometer-alarm-notifier --config-file $CEILOMETER_CONF"
-    screen_it ceilometer-alarm-evaluator "ceilometer-alarm-evaluator --config-file $CEILOMETER_CONF"
+    if [[ "$VIRT_DRIVER" = 'libvirt' ]]; then
+        screen_it ceilometer-acompute "cd ; sg $LIBVIRT_GROUP \"ceilometer-agent-compute --config-file $CEILOMETER_CONF\""
+    fi
+    screen_it ceilometer-acentral "cd ; ceilometer-agent-central --config-file $CEILOMETER_CONF"
+    screen_it ceilometer-collector "cd ; ceilometer-collector --config-file $CEILOMETER_CONF"
+    screen_it ceilometer-api "cd ; ceilometer-api -d -v --log-dir=$CEILOMETER_API_LOG_DIR --config-file $CEILOMETER_CONF"
+
+    echo "Waiting for ceilometer-api to start..."
+    if ! timeout $SERVICE_TIMEOUT sh -c "while ! curl --noproxy '*' -s http://localhost:8777/v2/ >/dev/null; do sleep 1; done"; then
+        die $LINENO "ceilometer-api did not start"
+    fi
+
+    screen_it ceilometer-alarm-notifier "cd ; ceilometer-alarm-notifier --config-file $CEILOMETER_CONF"
+    screen_it ceilometer-alarm-evaluator "cd ; ceilometer-alarm-evaluator --config-file $CEILOMETER_CONF"
 }
 
 # stop_ceilometer() - Stop running processes
@@ -154,6 +169,7 @@
 # Restore xtrace
 $XTRACE
 
-# Local variables:
-# mode: shell-script
-# End:
+# Tell emacs to use shell-script-mode
+## Local variables:
+## mode: shell-script
+## End:
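The new ceilometer-api wait above bounds a poll loop with `timeout`(1). A sketch of the same idiom, with a file-existence probe standing in for the real `curl` health check (flag path and delays are illustrative):

```shell
#!/usr/bin/env bash
# timeout(1) kills the poll loop if the "service" never becomes ready.
SERVICE_TIMEOUT=5
ready_flag=$(mktemp -u)              # hypothetical readiness marker
( sleep 1; touch "$ready_flag" ) &   # the "service" comes up after ~1s

if timeout $SERVICE_TIMEOUT sh -c "while ! test -e $ready_flag; do sleep 0.2; done"; then
    echo "service is up"             # printed after roughly a second
else
    echo "service did not start" >&2
fi
rm -f "$ready_flag"
```

Failing fast here matters because the alarm services started afterwards depend on the API being reachable.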
diff --git a/lib/cinder b/lib/cinder
index 220488a..96d2505 100644
--- a/lib/cinder
+++ b/lib/cinder
@@ -2,19 +2,20 @@
 # Install and start **Cinder** volume service
 
 # Dependencies:
+#
 # - functions
 # - DEST, DATA_DIR, STACK_USER must be defined
-# SERVICE_{TENANT_NAME|PASSWORD} must be defined
-# ``KEYSTONE_TOKEN_FORMAT`` must be defined
+# - SERVICE_{TENANT_NAME|PASSWORD} must be defined
+# - ``KEYSTONE_TOKEN_FORMAT`` must be defined
 
 # stack.sh
 # ---------
-# install_cinder
-# configure_cinder
-# init_cinder
-# start_cinder
-# stop_cinder
-# cleanup_cinder
+# - install_cinder
+# - configure_cinder
+# - init_cinder
+# - start_cinder
+# - stop_cinder
+# - cleanup_cinder
 
 # Save trace setting
 XTRACE=$(set +o | grep xtrace)
@@ -82,7 +83,8 @@
 # Functions
 # ---------
 # _clean_lvm_lv removes all cinder LVM volumes
-# _clean_lvm_lv $VOLUME_GROUP $VOLUME_NAME_PREFIX
+#
+# Usage: _clean_lvm_lv $VOLUME_GROUP $VOLUME_NAME_PREFIX
 function _clean_lvm_lv() {
     local vg=$1
     local lv_prefix=$2
@@ -98,7 +100,8 @@
 
 # _clean_lvm_backing_file() removes the backing file of the
 # volume group used by cinder
-# _clean_lvm_backing_file() $VOLUME_GROUP
+#
+# Usage: _clean_lvm_backing_file() $VOLUME_GROUP
 function _clean_lvm_backing_file() {
     local vg=$1
 
@@ -196,21 +199,31 @@
     fi
 
     TEMPFILE=`mktemp`
-    echo "$USER ALL=(root) NOPASSWD: $ROOTWRAP_CINDER_SUDOER_CMD" >$TEMPFILE
+    echo "$STACK_USER ALL=(root) NOPASSWD: $ROOTWRAP_CINDER_SUDOER_CMD" >$TEMPFILE
     chmod 0440 $TEMPFILE
     sudo chown root:root $TEMPFILE
     sudo mv $TEMPFILE /etc/sudoers.d/cinder-rootwrap
 
     cp $CINDER_DIR/etc/cinder/api-paste.ini $CINDER_API_PASTE_INI
-    iniset $CINDER_API_PASTE_INI filter:authtoken auth_host $KEYSTONE_AUTH_HOST
-    iniset $CINDER_API_PASTE_INI filter:authtoken auth_port $KEYSTONE_AUTH_PORT
-    iniset $CINDER_API_PASTE_INI filter:authtoken auth_protocol $KEYSTONE_AUTH_PROTOCOL
-    iniset $CINDER_API_PASTE_INI filter:authtoken admin_tenant_name $SERVICE_TENANT_NAME
-    iniset $CINDER_API_PASTE_INI filter:authtoken admin_user cinder
-    iniset $CINDER_API_PASTE_INI filter:authtoken admin_password $SERVICE_PASSWORD
-    iniset $CINDER_API_PASTE_INI filter:authtoken signing_dir $CINDER_AUTH_CACHE_DIR
+
+    inicomment $CINDER_API_PASTE_INI filter:authtoken auth_host
+    inicomment $CINDER_API_PASTE_INI filter:authtoken auth_port
+    inicomment $CINDER_API_PASTE_INI filter:authtoken auth_protocol
+    inicomment $CINDER_API_PASTE_INI filter:authtoken admin_tenant_name
+    inicomment $CINDER_API_PASTE_INI filter:authtoken admin_user
+    inicomment $CINDER_API_PASTE_INI filter:authtoken admin_password
+    inicomment $CINDER_API_PASTE_INI filter:authtoken signing_dir
 
     cp $CINDER_DIR/etc/cinder/cinder.conf.sample $CINDER_CONF
+
+    iniset $CINDER_CONF keystone_authtoken auth_host $KEYSTONE_AUTH_HOST
+    iniset $CINDER_CONF keystone_authtoken auth_port $KEYSTONE_AUTH_PORT
+    iniset $CINDER_CONF keystone_authtoken auth_protocol $KEYSTONE_AUTH_PROTOCOL
+    iniset $CINDER_CONF keystone_authtoken admin_tenant_name $SERVICE_TENANT_NAME
+    iniset $CINDER_CONF keystone_authtoken admin_user cinder
+    iniset $CINDER_CONF keystone_authtoken admin_password $SERVICE_PASSWORD
+    iniset $CINDER_CONF keystone_authtoken signing_dir $CINDER_AUTH_CACHE_DIR
+
     iniset $CINDER_CONF DEFAULT auth_strategy keystone
     iniset $CINDER_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
     iniset $CINDER_CONF DEFAULT verbose True
@@ -271,6 +284,11 @@
             iniset $CINDER_CONF DEFAULT xenapi_nfs_server "$CINDER_XENAPI_NFS_SERVER"
             iniset $CINDER_CONF DEFAULT xenapi_nfs_serverpath "$CINDER_XENAPI_NFS_SERVERPATH"
         )
+    elif [ "$CINDER_DRIVER" == "nfs" ]; then
+        iniset $CINDER_CONF DEFAULT volume_driver "cinder.volume.drivers.nfs.NfsDriver"
+        iniset $CINDER_CONF DEFAULT nfs_shares_config "$CINDER_CONF_DIR/nfs_shares.conf"
+        echo "$CINDER_NFS_SERVERPATH" | sudo tee "$CINDER_CONF_DIR/nfs_shares.conf"
+        sudo chmod 666 $CINDER_CONF_DIR/nfs_shares.conf
     elif [ "$CINDER_DRIVER" == "sheepdog" ]; then
         iniset $CINDER_CONF DEFAULT volume_driver "cinder.volume.drivers.sheepdog.SheepdogDriver"
     elif [ "$CINDER_DRIVER" == "glusterfs" ]; then
@@ -536,6 +554,7 @@
 # Restore xtrace
 $XTRACE
 
-# Local variables:
-# mode: shell-script
-# End:
+# Tell emacs to use shell-script-mode
+## Local variables:
+## mode: shell-script
+## End:
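The authtoken settings above move from paste-ini edits to `iniset` calls against `cinder.conf`. A cut-down, standalone sketch of an `iniset`-style helper (the real one is in DevStack's `functions`; this version assumes GNU sed):

```shell
#!/usr/bin/env bash
# Set "option = value" under [section] in an INI file, creating both on demand.
function iniset {
    local file=$1 section=$2 option=$3 value=$4
    if ! grep -q "^\[$section\]" "$file" 2>/dev/null; then
        printf '\n[%s]\n' "$section" >> "$file"
    fi
    if ! sed -n "/^\[$section\]/,/^\[.*\]/p" "$file" | grep -q "^$option[ \t]*="; then
        # option missing: insert right after the section header
        sed -i "/^\[$section\]/a $option = $value" "$file"
    else
        # option present: rewrite its value in place
        sed -i "/^\[$section\]/,/^\[.*\]/s|^\($option[ \t]*=[ \t]*\).*|\1$value|" "$file"
    fi
}

conf=$(mktemp)
iniset "$conf" keystone_authtoken admin_user cinder
grep admin_user "$conf"   # -> admin_user = cinder
rm -f "$conf"
```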
diff --git a/lib/config b/lib/config
index 6f686e9..91cefe4 100644
--- a/lib/config
+++ b/lib/config
@@ -10,7 +10,7 @@
 #   [[group-name|file-name]]
 #
 # group-name refers to the group of configuration file changes to be processed
-# at a particular time.  These are called phases in ``stack.sh`` but 
+# at a particular time.  These are called phases in ``stack.sh`` but
 # group here as these functions are not DevStack-specific.
 #
 # file-name is the destination of the config file
@@ -64,12 +64,12 @@
     [[ -r $file ]] || return 0
 
     $CONFIG_AWK_CMD -v matchgroup=$matchgroup '
-      /^\[\[.+\|.*\]\]/ {
-          gsub("[][]", "", $1);
-          split($1, a, "|");
-          if (a[1] == matchgroup)
-              print a[2]
-      }
+        /^\[\[.+\|.*\]\]/ {
+            gsub("[][]", "", $1);
+            split($1, a, "|");
+            if (a[1] == matchgroup)
+                print a[2]
+        }
     ' $file
 }
 
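The re-indented awk program above is the core of `lib/config`'s meta-section parsing. The same pattern run standalone against a sample `local.conf` fragment (input is illustrative):

```shell
#!/usr/bin/env bash
# Pick the file names of a given meta-section group out of local.conf-style
# "[[group|file]]" markers.
sample='[[local|localrc]]
ADMIN_PASSWORD=secret
[[post-config|/etc/nova/nova.conf]]'

echo "$sample" | awk -v matchgroup=post-config '
    /^\[\[.+\|.*\]\]/ {
        gsub("[][]", "", $1);     # strip the surrounding [[ ]]
        split($1, a, "|");        # a[1] = group, a[2] = file
        if (a[1] == matchgroup)
            print a[2]
    }'
# -> /etc/nova/nova.conf
```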
diff --git a/lib/database b/lib/database
index 3c15609..0661049 100644
--- a/lib/database
+++ b/lib/database
@@ -9,10 +9,11 @@
 
 # This is a wrapper for the specific database backends available.
 # Each database must implement four functions:
-#   recreate_database_$DATABASE_TYPE
-#   install_database_$DATABASE_TYPE
-#   configure_database_$DATABASE_TYPE
-#   database_connection_url_$DATABASE_TYPE
+#
+# - recreate_database_$DATABASE_TYPE
+# - install_database_$DATABASE_TYPE
+# - configure_database_$DATABASE_TYPE
+# - database_connection_url_$DATABASE_TYPE
 #
 # and call register_database $DATABASE_TYPE
 
@@ -22,7 +23,9 @@
 
 
 # Register a database backend
-#  $1 The name of the database backend
+#
+#   $1 The name of the database backend
+#
 # This is required to be defined before the specific database scripts are sourced
 function register_database {
     [ -z "$DATABASE_BACKENDS" ] && DATABASE_BACKENDS=$1 || DATABASE_BACKENDS+=" $1"
@@ -121,6 +124,7 @@
 # Restore xtrace
 $XTRACE
 
-# Local variables:
-# mode: shell-script
-# End:
+# Tell emacs to use shell-script-mode
+## Local variables:
+## mode: shell-script
+## End:
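The `register_database` function documented above is a one-line accumulator: each backend file appends its name to a space-separated list that the wrapper later matches `DATABASE_TYPE` against. A minimal sketch:

```shell
#!/usr/bin/env bash
# Backends self-register; the wrapper keeps them in one space-separated list.
DATABASE_BACKENDS=""
function register_database {
    [ -z "$DATABASE_BACKENDS" ] && DATABASE_BACKENDS=$1 || DATABASE_BACKENDS+=" $1"
}

register_database mysql
register_database postgresql
echo "$DATABASE_BACKENDS"   # -> mysql postgresql
```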
diff --git a/lib/databases/mysql b/lib/databases/mysql
index 41e3236..0eb8fdd 100644
--- a/lib/databases/mysql
+++ b/lib/databases/mysql
@@ -2,7 +2,8 @@
 # Functions to control the configuration and operation of the **MySQL** database backend
 
 # Dependencies:
-# DATABASE_{HOST,USER,PASSWORD} must be defined
+#
+# - DATABASE_{HOST,USER,PASSWORD} must be defined
 
 # Save trace setting
 MY_XTRACE=$(set +o | grep xtrace)
diff --git a/lib/databases/postgresql b/lib/databases/postgresql
index b173772..519479a 100644
--- a/lib/databases/postgresql
+++ b/lib/databases/postgresql
@@ -2,7 +2,8 @@
 # Functions to control the configuration and operation of the **PostgreSQL** database backend
 
 # Dependencies:
-# DATABASE_{HOST,USER,PASSWORD} must be defined
+#
+# - DATABASE_{HOST,USER,PASSWORD} must be defined
 
 # Save trace setting
 PG_XTRACE=$(set +o | grep xtrace)
diff --git a/lib/glance b/lib/glance
index c6f11d0..eb727f1 100644
--- a/lib/glance
+++ b/lib/glance
@@ -2,20 +2,21 @@
 # Functions to control the configuration and operation of the **Glance** service
 
 # Dependencies:
-# ``functions`` file
-# ``DEST``, ``DATA_DIR``, ``STACK_USER`` must be defined
-# ``SERVICE_{TENANT_NAME|PASSWORD}`` must be defined
-# ``SERVICE_HOST``
-# ``KEYSTONE_TOKEN_FORMAT`` must be defined
+#
+# - ``functions`` file
+# - ``DEST``, ``DATA_DIR``, ``STACK_USER`` must be defined
+# - ``SERVICE_{TENANT_NAME|PASSWORD}`` must be defined
+# - ``SERVICE_HOST``
+# - ``KEYSTONE_TOKEN_FORMAT`` must be defined
 
 # ``stack.sh`` calls the entry points in this order:
 #
-# install_glance
-# configure_glance
-# init_glance
-# start_glance
-# stop_glance
-# cleanup_glance
+# - install_glance
+# - configure_glance
+# - init_glance
+# - start_glance
+# - stop_glance
+# - cleanup_glance
 
 # Save trace setting
 XTRACE=$(set +o | grep xtrace)
@@ -194,7 +195,7 @@
     screen_it g-api "cd $GLANCE_DIR; $GLANCE_BIN_DIR/glance-api --config-file=$GLANCE_CONF_DIR/glance-api.conf"
     echo "Waiting for g-api ($GLANCE_HOSTPORT) to start..."
     if ! timeout $SERVICE_TIMEOUT sh -c "while ! wget --no-proxy -q -O- http://$GLANCE_HOSTPORT; do sleep 1; done"; then
-      die $LINENO "g-api did not start"
+        die $LINENO "g-api did not start"
     fi
 }
 
@@ -209,6 +210,7 @@
 # Restore xtrace
 $XTRACE
 
-# Local variables:
-# mode: shell-script
-# End:
+# Tell emacs to use shell-script-mode
+## Local variables:
+## mode: shell-script
+## End:
diff --git a/lib/heat b/lib/heat
index 8acadb4..7a9ef0d 100644
--- a/lib/heat
+++ b/lib/heat
@@ -2,21 +2,23 @@
 # Install and start **Heat** service
 
 # To enable, add the following to localrc
-# ENABLED_SERVICES+=,heat,h-api,h-api-cfn,h-api-cw,h-eng
+#
+#   ENABLED_SERVICES+=,heat,h-api,h-api-cfn,h-api-cw,h-eng
 
 # Dependencies:
+#
 # - functions
 
 # stack.sh
 # ---------
-# install_heatclient
-# install_heat
-# configure_heatclient
-# configure_heat
-# init_heat
-# start_heat
-# stop_heat
-# cleanup_heat
+# - install_heatclient
+# - install_heat
+# - configure_heatclient
+# - configure_heat
+# - init_heat
+# - start_heat
+# - stop_heat
+# - cleanup_heat
 
 # Save trace setting
 XTRACE=$(set +o | grep xtrace)
@@ -78,7 +80,7 @@
     iniset $HEAT_CONF DEFAULT heat_metadata_server_url http://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT
     iniset $HEAT_CONF DEFAULT heat_waitcondition_server_url http://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT/v1/waitcondition
     iniset $HEAT_CONF DEFAULT heat_watch_server_url http://$HEAT_API_CW_HOST:$HEAT_API_CW_PORT
-    iniset $HEAT_CONF DEFAULT sql_connection `database_connection_url heat`
+    iniset $HEAT_CONF database connection `database_connection_url heat`
     iniset $HEAT_CONF DEFAULT auth_encryption_key `hexdump -n 16 -v -e '/1 "%02x"' /dev/random`
 
     # logging
@@ -118,9 +120,6 @@
     iniset $HEAT_CONF heat_api_cloudwatch bind_host $HEAT_API_CW_HOST
     iniset $HEAT_CONF heat_api_cloudwatch bind_port $HEAT_API_CW_PORT
 
-    # Set limits to match tempest defaults
-    iniset $HEAT_CONF DEFAULT max_template_size 10240
-
     # heat environment
     sudo mkdir -p $HEAT_ENV_DIR
     sudo chown $STACK_USER $HEAT_ENV_DIR
@@ -198,6 +197,7 @@
 # Restore xtrace
 $XTRACE
 
-# Local variables:
-# mode: shell-script
-# End:
+# Tell emacs to use shell-script-mode
+## Local variables:
+## mode: shell-script
+## End:
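The `auth_encryption_key` line retained above draws 16 random bytes and prints them as 32 hex characters via `hexdump`. The same idiom in isolation (reading `/dev/urandom` here to avoid blocking; the hunk itself uses `/dev/random`):

```shell
#!/usr/bin/env bash
# 16 random bytes formatted as 32 lowercase hex characters:
# -n 16 limits input, -v disables "*" dedup, -e '/1 "%02x"' prints each byte.
key=$(hexdump -n 16 -v -e '/1 "%02x"' /dev/urandom)
echo "${#key}"   # -> 32
```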
diff --git a/lib/horizon b/lib/horizon
index 63caf3c..5bff712 100644
--- a/lib/horizon
+++ b/lib/horizon
@@ -1,21 +1,20 @@
 # lib/horizon
 # Functions to control the configuration and operation of the horizon service
-# <do not include this template file in ``stack.sh``!>
 
 # Dependencies:
-# ``functions`` file
-# ``apache`` file
-# ``SERVICE_{TENANT_NAME|PASSWORD}`` must be defined
-# <list other global vars that are assumed to be defined>
+#
+# - ``functions`` file
+# - ``apache`` file
+# - ``SERVICE_{TENANT_NAME|PASSWORD}`` must be defined
 
 # ``stack.sh`` calls the entry points in this order:
 #
-# install_horizon
-# configure_horizon
-# init_horizon
-# start_horizon
-# stop_horizon
-# cleanup_horizon
+# - install_horizon
+# - configure_horizon
+# - init_horizon
+# - start_horizon
+# - stop_horizon
+# - cleanup_horizon
 
 # Save trace setting
 XTRACE=$(set +o | grep xtrace)
@@ -25,8 +24,6 @@
 # Defaults
 # --------
 
-# <define global variables here that belong to this project>
-
 # Set up default directories
 HORIZON_DIR=$DEST/horizon
 
@@ -115,7 +112,12 @@
     # Create an empty directory that apache uses as docroot
     sudo mkdir -p $HORIZON_DIR/.blackhole
 
+    # Apache 2.4 uses mod_authz_host for access control now (instead of "Allow")
     HORIZON_REQUIRE=''
+    if check_apache_version "2.4" ; then
+        HORIZON_REQUIRE='Require all granted'
+    fi
+
     local horizon_conf=/etc/$APACHE_NAME/$APACHE_CONF_DIR/horizon.conf
     if is_ubuntu; then
         # Clean up the old config name
@@ -124,11 +126,6 @@
         sudo touch $horizon_conf
         sudo a2ensite horizon.conf
     elif is_fedora; then
-        if [[ "$os_RELEASE" -ge "18" ]]; then
-            # fedora 18 has Require all denied  in its httpd.conf
-            # and requires explicit Require all granted
-            HORIZON_REQUIRE='Require all granted'
-        fi
         sudo sed '/^Listen/s/^.*$/Listen 0.0.0.0:80/' -i /etc/httpd/conf/httpd.conf
     elif is_suse; then
         : # nothing to do
@@ -156,15 +153,6 @@
     # Apache installation, because we mark it NOPRIME
     install_apache_wsgi
 
-    # NOTE(sdague) quantal changed the name of the node binary
-    if is_ubuntu; then
-        if [[ ! -e "/usr/bin/node" ]]; then
-            install_package nodejs-legacy
-        fi
-    elif is_fedora && [[ $DISTRO =~ (rhel6) || "$os_RELEASE" -ge "18" ]]; then
-        install_package nodejs
-    fi
-
     git_clone $HORIZON_REPO $HORIZON_DIR $HORIZON_BRANCH $HORIZON_TAG
 }
 
@@ -183,6 +171,7 @@
 # Restore xtrace
 $XTRACE
 
-# Local variables:
-# mode: shell-script
-# End:
+# Tell emacs to use shell-script-mode
+## Local variables:
+## mode: shell-script
+## End:
diff --git a/lib/infra b/lib/infra
index 0b73259..0dcf0ad 100644
--- a/lib/infra
+++ b/lib/infra
@@ -5,12 +5,13 @@
 # requirements as a global list
 
 # Dependencies:
-# ``functions`` file
+#
+# - ``functions`` file
 
 # ``stack.sh`` calls the entry points in this order:
 #
-# unfubar_setuptools
-# install_infra
+# - unfubar_setuptools
+# - install_infra
 
 # Save trace setting
 XTRACE=$(set +o | grep xtrace)
@@ -51,6 +52,7 @@
 # Restore xtrace
 $XTRACE
 
-# Local variables:
-# mode: shell-script
-# End:
+# Tell emacs to use shell-script-mode
+## Local variables:
+## mode: shell-script
+## End:
diff --git a/lib/ironic b/lib/ironic
index f3b4a72..9f86e84 100644
--- a/lib/ironic
+++ b/lib/ironic
@@ -2,20 +2,21 @@
 # Functions to control the configuration and operation of the **Ironic** service
 
 # Dependencies:
-# ``functions`` file
-# ``DEST``, ``DATA_DIR``, ``STACK_USER`` must be defined
-# ``SERVICE_{TENANT_NAME|PASSWORD}`` must be defined
-# ``SERVICE_HOST``
-# ``KEYSTONE_TOKEN_FORMAT`` must be defined
+#
+# - ``functions`` file
+# - ``DEST``, ``DATA_DIR``, ``STACK_USER`` must be defined
+# - ``SERVICE_{TENANT_NAME|PASSWORD}`` must be defined
+# - ``SERVICE_HOST``
+# - ``KEYSTONE_TOKEN_FORMAT`` must be defined
 
 # ``stack.sh`` calls the entry points in this order:
 #
-# install_ironic
-# configure_ironic
-# init_ironic
-# start_ironic
-# stop_ironic
-# cleanup_ironic
+# - install_ironic
+# - install_ironicclient
+# - init_ironic
+# - start_ironic
+# - stop_ironic
+# - cleanup_ironic
 
 # Save trace setting
 XTRACE=$(set +o | grep xtrace)
@@ -27,6 +28,7 @@
 
 # Set up default directories
 IRONIC_DIR=$DEST/ironic
+IRONICCLIENT_DIR=$DEST/python-ironicclient
 IRONIC_AUTH_CACHE_DIR=${IRONIC_AUTH_CACHE_DIR:-/var/cache/ironic}
 IRONIC_CONF_DIR=${IRONIC_CONF_DIR:-/etc/ironic}
 IRONIC_CONF_FILE=$IRONIC_CONF_DIR/ironic.conf
@@ -45,6 +47,18 @@
 # Functions
 # ---------
 
+# install_ironic() - Collect source and prepare
+function install_ironic() {
+    git_clone $IRONIC_REPO $IRONIC_DIR $IRONIC_BRANCH
+    setup_develop $IRONIC_DIR
+}
+
+# install_ironicclient() - Collect sources and prepare
+function install_ironicclient() {
+    git_clone $IRONICCLIENT_REPO $IRONICCLIENT_DIR $IRONICCLIENT_BRANCH
+    setup_develop $IRONICCLIENT_DIR
+}
+
 # cleanup_ironic() - Remove residual data files, anything left over from previous
 # runs that would need to clean up.
 function cleanup_ironic() {
@@ -79,6 +93,8 @@
 # configure_ironic_api() - Is used by configure_ironic(). Performs
 # API specific configuration.
 function configure_ironic_api() {
+    iniset $IRONIC_CONF_FILE DEFAULT auth_strategy keystone
+    iniset $IRONIC_CONF_FILE DEFAULT policy_file $IRONIC_POLICY_JSON
     iniset $IRONIC_CONF_FILE keystone_authtoken auth_host $KEYSTONE_AUTH_HOST
     iniset $IRONIC_CONF_FILE keystone_authtoken auth_port $KEYSTONE_AUTH_PORT
     iniset $IRONIC_CONF_FILE keystone_authtoken auth_protocol $KEYSTONE_AUTH_PROTOCOL
@@ -170,12 +186,6 @@
     create_ironic_accounts
 }
 
-# install_ironic() - Collect source and prepare
-function install_ironic() {
-    git_clone $IRONIC_REPO $IRONIC_DIR $IRONIC_BRANCH
-    setup_develop $IRONIC_DIR
-}
-
 # start_ironic() - Start running processes, including screen
 function start_ironic() {
     # Start Ironic API server, if enabled.
@@ -195,7 +205,7 @@
     screen_it ir-api "cd $IRONIC_DIR; $IRONIC_BIN_DIR/ironic-api --config-file=$IRONIC_CONF_FILE"
     echo "Waiting for ir-api ($IRONIC_HOSTPORT) to start..."
     if ! timeout $SERVICE_TIMEOUT sh -c "while ! wget --no-proxy -q -O- http://$IRONIC_HOSTPORT; do sleep 1; done"; then
-      die $LINENO "ir-api did not start"
+        die $LINENO "ir-api did not start"
     fi
 }
 
@@ -217,6 +227,7 @@
 # Restore xtrace
 $XTRACE
 
-# Local variables:
-# mode: shell-script
-# End:
+# Tell emacs to use shell-script-mode
+## Local variables:
+## mode: shell-script
+## End:
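`configure_ironic_api` above leans on DevStack's `iniset` helper (defined in the `functions` file). As a rough, hypothetical stand-in for illustration only — the real `iniset` is more careful, matching the option inside the right section — the semantics look like:

```shell
# mini_iniset: set "option = value" under [section] in an ini file.
# Simplification vs DevStack's iniset: the option is matched file-wide,
# not scoped to its section.
mini_iniset() {
    local file=$1 section=$2 option=$3 value=$4
    if ! grep -q "^\[$section\]" "$file"; then
        printf '\n[%s]\n' "$section" >>"$file"
    fi
    if grep -q "^$option[[:space:]]*=" "$file"; then
        # Option present: replace its value in place.
        sed -i -e "s|^$option[[:space:]]*=.*|$option = $value|" "$file"
    else
        # Option absent: append it right after the section header.
        sed -i -e "/^\[$section\]/a $option = $value" "$file"
    fi
}

conf=$(mktemp)
mini_iniset "$conf" DEFAULT auth_strategy keystone
mini_iniset "$conf" DEFAULT auth_strategy noauth   # second call updates in place
grep auth_strategy "$conf"
```

Calling it twice with the same option leaves a single line, which is the property the repeated `iniset` calls in these hunks rely on.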
diff --git a/lib/keystone b/lib/keystone
old mode 100755
new mode 100644
index c93a436..4353eba
--- a/lib/keystone
+++ b/lib/keystone
@@ -2,25 +2,26 @@
 # Functions to control the configuration and operation of **Keystone**
 
 # Dependencies:
-# ``functions`` file
-# ``DEST``, ``STACK_USER``
-# ``IDENTITY_API_VERSION``
-# ``BASE_SQL_CONN``
-# ``SERVICE_HOST``, ``SERVICE_PROTOCOL``
-# ``SERVICE_TOKEN``
-# ``S3_SERVICE_PORT`` (template backend only)
+#
+# - ``functions`` file
+# - ``DEST``, ``STACK_USER``
+# - ``IDENTITY_API_VERSION``
+# - ``BASE_SQL_CONN``
+# - ``SERVICE_HOST``, ``SERVICE_PROTOCOL``
+# - ``SERVICE_TOKEN``
+# - ``S3_SERVICE_PORT`` (template backend only)
 
 # ``stack.sh`` calls the entry points in this order:
 #
-# install_keystone
-# configure_keystone
-# _config_keystone_apache_wsgi
-# init_keystone
-# start_keystone
-# create_keystone_accounts
-# stop_keystone
-# cleanup_keystone
-# _cleanup_keystone_apache_wsgi
+# - install_keystone
+# - configure_keystone
+# - _config_keystone_apache_wsgi
+# - init_keystone
+# - start_keystone
+# - create_keystone_accounts
+# - stop_keystone
+# - cleanup_keystone
+# - _cleanup_keystone_apache_wsgi
 
 # Save trace setting
 XTRACE=$(set +o | grep xtrace)
@@ -125,6 +126,7 @@
 
     if [[ "$KEYSTONE_CONF_DIR" != "$KEYSTONE_DIR/etc" ]]; then
         cp -p $KEYSTONE_DIR/etc/keystone.conf.sample $KEYSTONE_CONF
+        chmod 600 $KEYSTONE_CONF
         cp -p $KEYSTONE_DIR/etc/policy.json $KEYSTONE_CONF_DIR
         if [[ -f "$KEYSTONE_DIR/etc/keystone-paste.ini" ]]; then
             cp -p "$KEYSTONE_DIR/etc/keystone-paste.ini" "$KEYSTONE_PASTE_INI"
@@ -373,7 +375,7 @@
 
     echo "Waiting for keystone to start..."
     if ! timeout $SERVICE_TIMEOUT sh -c "while ! curl --noproxy '*' -s http://$SERVICE_HOST:$service_port/v$IDENTITY_API_VERSION/ >/dev/null; do sleep 1; done"; then
-      die $LINENO "keystone did not start"
+        die $LINENO "keystone did not start"
     fi
 
     # Start proxies if enabled
@@ -393,6 +395,7 @@
 # Restore xtrace
 $XTRACE
 
-# Local variables:
-# mode: shell-script
-# End:
+# Tell emacs to use shell-script-mode
+## Local variables:
+## mode: shell-script
+## End:
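The `timeout ... sh -c 'while ! curl ...'` startup check appears for keystone, neutron, and nova alike. Distilled into a hypothetical helper (DevStack itself inlines the loop per service), polling a self-contained condition here instead of a real HTTP endpoint:

```shell
# Poll check_cmd once a second until it succeeds, giving up after
# $deadline seconds; `timeout` kills the loop so a dead service fails
# fast instead of hanging stack.sh forever.
wait_for_ready() {
    local deadline=$1 check_cmd=$2
    if ! timeout "$deadline" sh -c "while ! $check_cmd; do sleep 1; done"; then
        echo "service did not start" >&2
        return 1
    fi
}

marker=$(mktemp -u)
(sleep 2; touch "$marker") &   # stand-in for a service coming up
wait_for_ready 10 "test -e $marker" && echo "service is up"
```

In the real scripts the condition is a `curl` or `wget` probe of the service port, and failure routes through `die $LINENO`.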
diff --git a/lib/ldap b/lib/ldap
index 2a24ccd..80992a7 100644
--- a/lib/ldap
+++ b/lib/ldap
@@ -2,7 +2,8 @@
 # Functions to control the installation and configuration of **ldap**
 
 # ``lib/keystone`` calls the entry points in this order:
-# install_ldap()
+#
+# - install_ldap()
 
 # Save trace setting
 XTRACE=$(set +o | grep xtrace)
@@ -91,6 +92,7 @@
 # Restore xtrace
 $XTRACE
 
-# Local variables:
-# mode: shell-script
-# End:
+# Tell emacs to use shell-script-mode
+## Local variables:
+## mode: shell-script
+## End:
diff --git a/lib/neutron b/lib/neutron
index 778717d..70417be 100644
--- a/lib/neutron
+++ b/lib/neutron
@@ -4,27 +4,28 @@
 # Dependencies:
 # ``functions`` file
 # ``DEST`` must be defined
+# ``STACK_USER`` must be defined
 
 # ``stack.sh`` calls the entry points in this order:
 #
-# install_neutron
-# install_neutronclient
-# install_neutron_agent_packages
-# install_neutron_third_party
-# configure_neutron
-# init_neutron
-# configure_neutron_third_party
-# init_neutron_third_party
-# start_neutron_third_party
-# create_nova_conf_neutron
-# start_neutron_service_and_check
-# create_neutron_initial_network
-# setup_neutron_debug
-# start_neutron_agents
+# - install_neutron
+# - install_neutronclient
+# - install_neutron_agent_packages
+# - install_neutron_third_party
+# - configure_neutron
+# - init_neutron
+# - configure_neutron_third_party
+# - init_neutron_third_party
+# - start_neutron_third_party
+# - create_nova_conf_neutron
+# - start_neutron_service_and_check
+# - create_neutron_initial_network
+# - setup_neutron_debug
+# - start_neutron_agents
 #
 # ``unstack.sh`` calls the entry points in this order:
 #
-# stop_neutron
+# - stop_neutron
 
 # Functions in lib/neutron are classified into the following categories:
 #
@@ -79,8 +80,8 @@
 # Support entry points installation of console scripts
 if [[ -d $NEUTRON_DIR/bin/neutron-server ]]; then
     NEUTRON_BIN_DIR=$NEUTRON_DIR/bin
-     else
-NEUTRON_BIN_DIR=$(get_python_exec_prefix)
+else
+    NEUTRON_BIN_DIR=$(get_python_exec_prefix)
 fi
 
 NEUTRON_CONF_DIR=/etc/neutron
@@ -110,6 +111,10 @@
 Q_USE_DEBUG_COMMAND=${Q_USE_DEBUG_COMMAND:-False}
 # The name of the default q-l3 router
 Q_ROUTER_NAME=${Q_ROUTER_NAME:-router1}
+# nova vif driver that all plugins should use
+NOVA_VIF_DRIVER=${NOVA_VIF_DRIVER:-"nova.virt.libvirt.vif.LibvirtGenericVIFDriver"}
+
+
 # List of config file names in addition to the main plugin config file
 # See _configure_neutron_common() for details about setting it up
 declare -a Q_PLUGIN_EXTRA_CONF_FILES
@@ -202,13 +207,19 @@
 # Hardcoding for 1 service plugin for now
 source $TOP_DIR/lib/neutron_plugins/services/loadbalancer
 
+# Agent metering service plugin functions
+# -------------------------------------------
+
+# Hardcoding for 1 service plugin for now
+source $TOP_DIR/lib/neutron_plugins/services/metering
+
 # VPN service plugin functions
 # -------------------------------------------
 # Hardcoding for 1 service plugin for now
 source $TOP_DIR/lib/neutron_plugins/services/vpn
 
 # Firewall Service Plugin functions
-# --------------------------------
+# ---------------------------------
 source $TOP_DIR/lib/neutron_plugins/services/firewall
 
 # Use security group or not
@@ -231,6 +242,9 @@
     if is_service_enabled q-lbaas; then
         _configure_neutron_lbaas
     fi
+    if is_service_enabled q-metering; then
+        _configure_neutron_metering
+    fi
     if is_service_enabled q-vpn; then
         _configure_neutron_vpn
     fi
@@ -268,6 +282,7 @@
 
     if [[ "$Q_USE_SECGROUP" == "True" ]]; then
         LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver
+        iniset $NOVA_CONF DEFAULT firewall_driver $LIBVIRT_FIREWALL_DRIVER
         iniset $NOVA_CONF DEFAULT security_group_api neutron
     fi
 
@@ -373,7 +388,7 @@
                 iniset $Q_L3_CONF_FILE DEFAULT router_id $ROUTER_ID
             fi
         fi
-   fi
+    fi
 }
 
 # init_neutron() - Initialize databases, etc.
@@ -404,7 +419,7 @@
     fi
 
     if is_service_enabled q-lbaas; then
-       neutron_agent_lbaas_install_agent_packages
+        neutron_agent_lbaas_install_agent_packages
     fi
 }
 
@@ -414,13 +429,13 @@
     local cfg_file
     local CFG_FILE_OPTIONS="--config-file $NEUTRON_CONF --config-file /$Q_PLUGIN_CONF_FILE"
     for cfg_file in ${Q_PLUGIN_EXTRA_CONF_FILES[@]}; do
-         CFG_FILE_OPTIONS+=" --config-file /$cfg_file"
+        CFG_FILE_OPTIONS+=" --config-file /$cfg_file"
     done
     # Start the Neutron service
     screen_it q-svc "cd $NEUTRON_DIR && python $NEUTRON_BIN_DIR/neutron-server $CFG_FILE_OPTIONS"
     echo "Waiting for Neutron to start..."
     if ! timeout $SERVICE_TIMEOUT sh -c "while ! wget --no-proxy -q -O- http://$Q_HOST:$Q_PORT; do sleep 1; done"; then
-      die $LINENO "Neutron did not start"
+        die $LINENO "Neutron did not start"
     fi
 }
 
@@ -451,6 +466,10 @@
     if is_service_enabled q-lbaas; then
         screen_it q-lbaas "cd $NEUTRON_DIR && python $AGENT_LBAAS_BINARY --config-file $NEUTRON_CONF --config-file=$LBAAS_AGENT_CONF_FILENAME"
     fi
+
+    if is_service_enabled q-metering; then
+        screen_it q-metering "cd $NEUTRON_DIR && python $AGENT_METERING_BINARY --config-file $NEUTRON_CONF --config-file $METERING_AGENT_CONF_FILENAME"
+    fi
 }
 
 # stop_neutron() - Stop running processes (non-screen)
@@ -494,6 +513,7 @@
     # For main plugin config file, set ``Q_PLUGIN_CONF_PATH``, ``Q_PLUGIN_CONF_FILENAME``.
     # For addition plugin config files, set ``Q_PLUGIN_EXTRA_CONF_PATH``,
     # ``Q_PLUGIN_EXTRA_CONF_FILES``.  For example:
+    #
     #    ``Q_PLUGIN_EXTRA_CONF_FILES=(file1, file2)``
     neutron_plugin_configure_common
 
@@ -630,6 +650,11 @@
     neutron_agent_lbaas_configure_agent
 }
 
+function _configure_neutron_metering() {
+    neutron_agent_metering_configure_common
+    neutron_agent_metering_configure_agent
+}
+
 function _configure_neutron_fwaas() {
     neutron_fwaas_configure_common
     neutron_fwaas_configure_driver
@@ -712,9 +737,9 @@
     # Set up ``rootwrap.conf``, pointing to ``$NEUTRON_CONF_DIR/rootwrap.d``
     # location moved in newer versions, prefer new location
     if test -r $NEUTRON_DIR/etc/neutron/rootwrap.conf; then
-      sudo cp -p $NEUTRON_DIR/etc/neutron/rootwrap.conf $Q_RR_CONF_FILE
+        sudo cp -p $NEUTRON_DIR/etc/neutron/rootwrap.conf $Q_RR_CONF_FILE
     else
-      sudo cp -p $NEUTRON_DIR/etc/rootwrap.conf $Q_RR_CONF_FILE
+        sudo cp -p $NEUTRON_DIR/etc/rootwrap.conf $Q_RR_CONF_FILE
     fi
     sudo sed -e "s:^filters_path=.*$:filters_path=$Q_CONF_ROOTWRAP_D:" -i $Q_RR_CONF_FILE
     sudo chown root:root $Q_RR_CONF_FILE
@@ -724,7 +749,7 @@
 
     # Set up the rootwrap sudoers for neutron
     TEMPFILE=`mktemp`
-    echo "$USER ALL=(root) NOPASSWD: $ROOTWRAP_SUDOER_CMD" >$TEMPFILE
+    echo "$STACK_USER ALL=(root) NOPASSWD: $ROOTWRAP_SUDOER_CMD" >$TEMPFILE
     chmod 0440 $TEMPFILE
     sudo chown root:root $TEMPFILE
     sudo mv $TEMPFILE /etc/sudoers.d/neutron-rootwrap
@@ -848,11 +873,11 @@
 # please refer to ``lib/neutron_thirdparty/README.md`` for details
 NEUTRON_THIRD_PARTIES=""
 for f in $TOP_DIR/lib/neutron_thirdparty/*; do
-     third_party=$(basename $f)
-     if is_service_enabled $third_party; then
-         source $TOP_DIR/lib/neutron_thirdparty/$third_party
-         NEUTRON_THIRD_PARTIES="$NEUTRON_THIRD_PARTIES,$third_party"
-     fi
+    third_party=$(basename $f)
+    if is_service_enabled $third_party; then
+        source $TOP_DIR/lib/neutron_thirdparty/$third_party
+        NEUTRON_THIRD_PARTIES="$NEUTRON_THIRD_PARTIES,$third_party"
+    fi
 done
 
 function _neutron_third_party_do() {
@@ -890,6 +915,7 @@
 # Restore xtrace
 $XTRACE
 
-# Local variables:
-# mode: shell-script
-# End:
+# Tell emacs to use shell-script-mode
+## Local variables:
+## mode: shell-script
+## End:
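`start_neutron_service_and_check` builds the server's command line from the main config plus the `Q_PLUGIN_EXTRA_CONF_FILES` array. A standalone sketch of that accumulation, with made-up file names:

```shell
NEUTRON_CONF=/etc/neutron/neutron.conf
Q_PLUGIN_CONF_FILE=etc/neutron/plugins/foo/foo_conf.ini        # hypothetical plugin config
Q_PLUGIN_EXTRA_CONF_FILES=("etc/neutron/extra1.ini" "etc/neutron/extra2.ini")

# Plugin config paths are stored relative to /, hence the "/$cfg_file".
CFG_FILE_OPTIONS="--config-file $NEUTRON_CONF --config-file /$Q_PLUGIN_CONF_FILE"
for cfg_file in ${Q_PLUGIN_EXTRA_CONF_FILES[@]}; do
    CFG_FILE_OPTIONS+=" --config-file /$cfg_file"
done
echo "$CFG_FILE_OPTIONS"
```

The resulting string is passed straight to `neutron-server`, one `--config-file` flag per file.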
diff --git a/lib/neutron_plugins/bigswitch_floodlight b/lib/neutron_plugins/bigswitch_floodlight
index 2450731..93ec497 100644
--- a/lib/neutron_plugins/bigswitch_floodlight
+++ b/lib/neutron_plugins/bigswitch_floodlight
@@ -9,7 +9,7 @@
 source $TOP_DIR/lib/neutron_thirdparty/bigswitch_floodlight     # for third party service specific configuration values
 
 function neutron_plugin_create_nova_conf() {
-    NOVA_VIF_DRIVER=${NOVA_VIF_DRIVER:-"nova.virt.libvirt.vif.LibvirtGenericVIFDriver"}
+    :
 }
 
 function neutron_plugin_install_agent_packages() {
diff --git a/lib/neutron_plugins/linuxbridge_agent b/lib/neutron_plugins/linuxbridge_agent
index 88c49c5..85e8c08 100644
--- a/lib/neutron_plugins/linuxbridge_agent
+++ b/lib/neutron_plugins/linuxbridge_agent
@@ -11,7 +11,7 @@
 }
 
 function neutron_plugin_create_nova_conf() {
-    NOVA_VIF_DRIVER=${NOVA_VIF_DRIVER:-"nova.virt.libvirt.vif.LibvirtGenericVIFDriver"}
+    :
 }
 
 function neutron_plugin_install_agent_packages() {
diff --git a/lib/neutron_plugins/midonet b/lib/neutron_plugins/midonet
index 193055f..e406146 100644
--- a/lib/neutron_plugins/midonet
+++ b/lib/neutron_plugins/midonet
@@ -32,19 +32,18 @@
 
 function neutron_plugin_configure_dhcp_agent() {
     DHCP_DRIVER=${DHCP_DRIVER:-"neutron.plugins.midonet.agent.midonet_driver.DhcpNoOpDriver"}
-    DHCP_INTERFACE_DRIVER=${DHCP_INTEFACE_DRIVER:-"neutron.plugins.midonet.agent.midonet_driver.MidonetInterfaceDriver"}
+    neutron_plugin_setup_interface_driver $Q_DHCP_CONF_FILE
     iniset $Q_DHCP_CONF_FILE DEFAULT dhcp_driver $DHCP_DRIVER
-    iniset $Q_DHCP_CONF_FILE DEFAULT interface_driver $DHCP_INTERFACE_DRIVER
     iniset $Q_DHCP_CONF_FILE DEFAULT use_namespaces True
     iniset $Q_DHCP_CONF_FILE DEFAULT enable_isolated_metadata True
 }
 
 function neutron_plugin_configure_l3_agent() {
-   die $LINENO "q-l3 must not be executed with MidoNet plugin!"
+    die $LINENO "q-l3 must not be executed with MidoNet plugin!"
 }
 
 function neutron_plugin_configure_plugin_agent() {
-   die $LINENO "q-agt must not be executed with MidoNet plugin!"
+    die $LINENO "q-agt must not be executed with MidoNet plugin!"
 }
 
 function neutron_plugin_configure_service() {
@@ -66,8 +65,8 @@
 }
 
 function neutron_plugin_setup_interface_driver() {
-    # May change in the future
-    :
+    local conf_file=$1
+    iniset $conf_file DEFAULT interface_driver neutron.agent.linux.interface.MidonetInterfaceDriver
 }
 
 function has_neutron_plugin_security_group() {
diff --git a/lib/neutron_plugins/nec b/lib/neutron_plugins/nec
index 79d41db..d8d8b7c 100644
--- a/lib/neutron_plugins/nec
+++ b/lib/neutron_plugins/nec
@@ -55,21 +55,26 @@
     _neutron_ovs_base_configure_l3_agent
 }
 
-function neutron_plugin_configure_plugin_agent() {
+function _quantum_plugin_setup_bridge() {
     if [[ "$SKIP_OVS_BRIDGE_SETUP" = "True" ]]; then
         return
     fi
     # Set up integration bridge
     _neutron_ovs_base_setup_bridge $OVS_BRIDGE
-    sudo ovs-vsctl --no-wait set-controller $OVS_BRIDGE tcp:$OFC_OFP_HOST:$OFC_OFP_PORT
     # Generate datapath ID from HOST_IP
-    local dpid=$(printf "0x%07d%03d%03d%03d\n" ${HOST_IP//./ })
+    local dpid=$(printf "%07d%03d%03d%03d\n" ${HOST_IP//./ })
     sudo ovs-vsctl --no-wait set Bridge $OVS_BRIDGE other-config:datapath-id=$dpid
     sudo ovs-vsctl --no-wait set-fail-mode $OVS_BRIDGE secure
+    sudo ovs-vsctl --no-wait set-controller $OVS_BRIDGE tcp:$OFC_OFP_HOST:$OFC_OFP_PORT
     if [ -n "$OVS_INTERFACE" ]; then
         sudo ovs-vsctl --no-wait -- --may-exist add-port $OVS_BRIDGE $OVS_INTERFACE
     fi
     _neutron_setup_ovs_tunnels $OVS_BRIDGE
+}
+
+function neutron_plugin_configure_plugin_agent() {
+    _quantum_plugin_setup_bridge
+
     AGENT_BINARY="$NEUTRON_BIN_DIR/neutron-nec-agent"
 
     _neutron_ovs_base_configure_firewall_driver
@@ -101,15 +106,15 @@
     local id=0
     GRE_LOCAL_IP=${GRE_LOCAL_IP:-$HOST_IP}
     if [ -n "$GRE_REMOTE_IPS" ]; then
-         for ip in ${GRE_REMOTE_IPS//:/ }
-         do
-             if [[ "$ip" == "$GRE_LOCAL_IP" ]]; then
-                 continue
-             fi
-             sudo ovs-vsctl --no-wait add-port $bridge gre$id -- \
-                 set Interface gre$id type=gre options:remote_ip=$ip
-             id=`expr $id + 1`
-         done
+        for ip in ${GRE_REMOTE_IPS//:/ }
+        do
+            if [[ "$ip" == "$GRE_LOCAL_IP" ]]; then
+                continue
+            fi
+            sudo ovs-vsctl --no-wait add-port $bridge gre$id -- \
+                set Interface gre$id type=gre options:remote_ip=$ip
+            id=`expr $id + 1`
+        done
     fi
 }
 
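The datapath ID generation in `_quantum_plugin_setup_bridge` packs the four octets of `HOST_IP` into a fixed-width decimal string (note the hunk drops the former `0x` prefix). A worked example:

```shell
HOST_IP=192.168.10.5
# ${HOST_IP//./ } turns "192.168.10.5" into "192 168 10 5"; printf then
# zero-pads the four octets (7+3+3+3 digits) into a 16-digit datapath ID.
dpid=$(printf "%07d%03d%03d%03d\n" ${HOST_IP//./ })
echo "$dpid"   # 0000192168010005
```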
diff --git a/lib/neutron_plugins/nicira b/lib/neutron_plugins/nicira
index 082c846..87d3c3d 100644
--- a/lib/neutron_plugins/nicira
+++ b/lib/neutron_plugins/nicira
@@ -26,7 +26,6 @@
 }
 
 function neutron_plugin_create_nova_conf() {
-    NOVA_VIF_DRIVER=${NOVA_VIF_DRIVER:-"nova.virt.libvirt.vif.LibvirtOpenVswitchDriver"}
     # if n-cpu is enabled, then setup integration bridge
     if is_service_enabled n-cpu; then
         setup_integration_bridge
@@ -58,13 +57,13 @@
 }
 
 function neutron_plugin_configure_l3_agent() {
-   # Nicira plugin does not run L3 agent
-   die $LINENO "q-l3 should must not be executed with Nicira plugin!"
+    # Nicira plugin does not run L3 agent
+    die $LINENO "q-l3 must not be executed with Nicira plugin!"
 }
 
 function neutron_plugin_configure_plugin_agent() {
-   # Nicira plugin does not run L2 agent
-   die $LINENO "q-agt must not be executed with Nicira plugin!"
+    # Nicira plugin does not run L2 agent
+    die $LINENO "q-agt must not be executed with Nicira plugin!"
 }
 
 function neutron_plugin_configure_service() {
diff --git a/lib/neutron_plugins/ovs_base b/lib/neutron_plugins/ovs_base
index 2666d8e..89db29d 100644
--- a/lib/neutron_plugins/ovs_base
+++ b/lib/neutron_plugins/ovs_base
@@ -73,13 +73,7 @@
 }
 
 function _neutron_ovs_base_configure_nova_vif_driver() {
-    # The hybrid VIF driver needs to be specified when Neutron Security Group
-    # is enabled (until vif_security attributes are supported in VIF extension)
-    if [[ "$Q_USE_SECGROUP" == "True" ]]; then
-        NOVA_VIF_DRIVER=${NOVA_VIF_DRIVER:-"nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver"}
-    else
-        NOVA_VIF_DRIVER=${NOVA_VIF_DRIVER:-"nova.virt.libvirt.vif.LibvirtGenericVIFDriver"}
-    fi
+    :
 }
 
 # Restore xtrace
diff --git a/lib/neutron_plugins/plumgrid b/lib/neutron_plugins/plumgrid
index 9d3c92f..d4050bb 100644
--- a/lib/neutron_plugins/plumgrid
+++ b/lib/neutron_plugins/plumgrid
@@ -9,8 +9,7 @@
 #source $TOP_DIR/lib/neutron_plugins/ovs_base
 
 function neutron_plugin_create_nova_conf() {
-
-    NOVA_VIF_DRIVER=${NOVA_VIF_DRIVER:-"nova.virt.libvirt.vif.LibvirtGenericVIFDriver"}
+    :
 }
 
 function neutron_plugin_setup_interface_driver() {
diff --git a/lib/neutron_plugins/services/metering b/lib/neutron_plugins/services/metering
new file mode 100644
index 0000000..629f3b7
--- /dev/null
+++ b/lib/neutron_plugins/services/metering
@@ -0,0 +1,30 @@
+# Neutron metering plugin
+# -----------------------
+
+# Save trace setting
+MY_XTRACE=$(set +o | grep xtrace)
+set +o xtrace
+
+
+AGENT_METERING_BINARY="$NEUTRON_BIN_DIR/neutron-metering-agent"
+METERING_PLUGIN="neutron.services.metering.metering_plugin.MeteringPlugin"
+
+function neutron_agent_metering_configure_common() {
+    if [[ $Q_SERVICE_PLUGIN_CLASSES == '' ]]; then
+        Q_SERVICE_PLUGIN_CLASSES=$METERING_PLUGIN
+    else
+        Q_SERVICE_PLUGIN_CLASSES="$Q_SERVICE_PLUGIN_CLASSES,$METERING_PLUGIN"
+    fi
+}
+
+function neutron_agent_metering_configure_agent() {
+    METERING_AGENT_CONF_PATH=/etc/neutron/services/metering
+    mkdir -p $METERING_AGENT_CONF_PATH
+
+    METERING_AGENT_CONF_FILENAME="$METERING_AGENT_CONF_PATH/metering_agent.ini"
+
+    cp $NEUTRON_DIR/etc/metering_agent.ini $METERING_AGENT_CONF_FILENAME
+}
+
+# Restore xtrace
+$MY_XTRACE
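`neutron_agent_metering_configure_common` appends its plugin class to `Q_SERVICE_PLUGIN_CLASSES` without introducing a leading comma when the list is empty. The same pattern, wrapped in a hypothetical helper for clarity:

```shell
# Append a service plugin class to the comma-separated global list,
# avoiding a stray leading comma on the first entry.
add_service_plugin() {
    if [ -z "$Q_SERVICE_PLUGIN_CLASSES" ]; then
        Q_SERVICE_PLUGIN_CLASSES=$1
    else
        Q_SERVICE_PLUGIN_CLASSES="$Q_SERVICE_PLUGIN_CLASSES,$1"
    fi
}

Q_SERVICE_PLUGIN_CLASSES=""
add_service_plugin "neutron.services.loadbalancer.plugin.LoadBalancerPlugin"
add_service_plugin "neutron.services.metering.metering_plugin.MeteringPlugin"
echo "$Q_SERVICE_PLUGIN_CLASSES"
```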
diff --git a/lib/neutron_thirdparty/nicira b/lib/neutron_thirdparty/nicira
index 5a20934..3f2a5af 100644
--- a/lib/neutron_thirdparty/nicira
+++ b/lib/neutron_thirdparty/nicira
@@ -18,22 +18,38 @@
 # to an network that allows it to talk to the gateway for
 # testing purposes
 NVP_GATEWAY_NETWORK_INTERFACE=${NVP_GATEWAY_NETWORK_INTERFACE:-eth2}
+# Re-declare floating range as it's needed also in stop_nicira, which
+# is invoked by unstack.sh
+FLOATING_RANGE=${FLOATING_RANGE:-172.24.4.224/28}
 
 function configure_nicira() {
     :
 }
 
 function init_nicira() {
-    die_if_not_set $LINENO NVP_GATEWAY_NETWORK_CIDR "Please, specify CIDR for the gateway network interface."
+    if ! is_set NVP_GATEWAY_NETWORK_CIDR; then
+        NVP_GATEWAY_NETWORK_CIDR=$PUBLIC_NETWORK_GATEWAY/${FLOATING_RANGE#*/}
+        echo "The IP address to set on br-ex was not specified."
+        echo "Defaulting to $NVP_GATEWAY_NETWORK_CIDR"
+    fi
     # Make sure the interface is up, but not configured
-    sudo ifconfig $NVP_GATEWAY_NETWORK_INTERFACE up
+    sudo ip link set dev $NVP_GATEWAY_NETWORK_INTERFACE up
+    # Save and then flush the IP addresses on the interface
+    addresses=$(ip addr show dev $NVP_GATEWAY_NETWORK_INTERFACE | grep inet | awk '{print $2}')
     sudo ip addr flush $NVP_GATEWAY_NETWORK_INTERFACE
     # Use the PUBLIC Bridge to route traffic to the NVP gateway
     # NOTE(armando-migliaccio): if running in a nested environment this will work
     # only with mac learning enabled, portsecurity and security profiles disabled
+    # The public bridge might not exist for the NVP plugin if Q_USE_DEBUG_COMMAND is off
+    # Try to create it anyway
+    sudo ovs-vsctl --no-wait -- --may-exist add-br $PUBLIC_BRIDGE
     sudo ovs-vsctl -- --may-exist add-port $PUBLIC_BRIDGE $NVP_GATEWAY_NETWORK_INTERFACE
     nvp_gw_net_if_mac=$(ip link show $NVP_GATEWAY_NETWORK_INTERFACE | awk '/ether/ {print $2}')
-    sudo ifconfig $PUBLIC_BRIDGE $NVP_GATEWAY_NETWORK_CIDR hw ether $nvp_gw_net_if_mac
+    sudo ip link set dev $PUBLIC_BRIDGE address $nvp_gw_net_if_mac
+    for address in $addresses; do
+        sudo ip addr add dev $PUBLIC_BRIDGE $address
+    done
+    sudo ip addr add dev $PUBLIC_BRIDGE $NVP_GATEWAY_NETWORK_CIDR
 }
 
 function install_nicira() {
@@ -45,7 +61,21 @@
 }
 
 function stop_nicira() {
-    :
+    if ! is_set NVP_GATEWAY_NETWORK_CIDR; then
+        NVP_GATEWAY_NETWORK_CIDR=$PUBLIC_NETWORK_GATEWAY/${FLOATING_RANGE#*/}
+        echo "The IP address expected on br-ex was not specified."
+        echo "Defaulting to $NVP_GATEWAY_NETWORK_CIDR"
+    fi
+    sudo ip addr del $NVP_GATEWAY_NETWORK_CIDR dev $PUBLIC_BRIDGE
+    # Save and then flush remaining addresses on the interface
+    addresses=$(ip addr show dev $PUBLIC_BRIDGE | grep inet | awk '{print $2}')
+    sudo ip addr flush $PUBLIC_BRIDGE
+    # Try to detach physical interface from PUBLIC_BRIDGE
+    sudo ovs-vsctl del-port $NVP_GATEWAY_NETWORK_INTERFACE
+    # Restore addresses on NVP_GATEWAY_NETWORK_INTERFACE
+    for address in $addresses; do
+        sudo ip addr add dev $NVP_GATEWAY_NETWORK_INTERFACE $address
+    done
 }
 
 # Restore xtrace
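`init_nicira` and `stop_nicira` both save an interface's addresses before flushing it, then re-add them on the other device. The parsing step, run here on canned `ip addr show` output so the sketch needs no root (note that `grep inet` also matches `inet6` lines, as in the diff):

```shell
sample_output='2: eth2: <BROADCAST,MULTICAST,UP> mtu 1500
    inet 172.24.4.1/24 brd 172.24.4.255 scope global eth2
    inet6 fe80::f816:3eff:fe00:1/64 scope link'

# Same pipeline as the diff: keep the address/prefix column of inet lines.
addresses=$(echo "$sample_output" | grep inet | awk '{print $2}')
for address in $addresses; do
    echo "would run: sudo ip addr add dev br-ex $address"
done
```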
diff --git a/lib/neutron_thirdparty/trema b/lib/neutron_thirdparty/trema
index 09dc46b..9efd3f6 100644
--- a/lib/neutron_thirdparty/trema
+++ b/lib/neutron_thirdparty/trema
@@ -28,7 +28,7 @@
 TREMA_LOG_LEVEL=${TREMA_LOG_LEVEL:-info}
 
 TREMA_SS_CONFIG=$TREMA_SS_ETC_DIR/sliceable.conf
-TREMA_SS_APACHE_CONFIG=/etc/apache2/sites-available/sliceable_switch
+TREMA_SS_APACHE_CONFIG=/etc/apache2/sites-available/sliceable_switch.conf
 
 # configure_trema - Set config files, create data dirs, etc
 function configure_trema() {
@@ -66,8 +66,8 @@
 
     cp $TREMA_SS_DIR/sliceable_switch_null.conf $TREMA_SS_CONFIG
     sed -i -e "s|^\$apps_dir.*$|\$apps_dir = \"$TREMA_DIR/apps\"|" \
-           -e "s|^\$db_dir.*$|\$db_dir = \"$TREMA_SS_DB_DIR\"|" \
-           $TREMA_SS_CONFIG
+        -e "s|^\$db_dir.*$|\$db_dir = \"$TREMA_SS_DB_DIR\"|" \
+        $TREMA_SS_CONFIG
 }
 
 function gem_install() {
diff --git a/lib/nova b/lib/nova
index 8deb3a0..6ab2000 100644
--- a/lib/nova
+++ b/lib/nova
@@ -2,22 +2,23 @@
 # Functions to control the configuration and operation of the **Nova** service
 
 # Dependencies:
-# ``functions`` file
-# ``DEST``, ``DATA_DIR``, ``STACK_USER`` must be defined
-# ``SERVICE_{TENANT_NAME|PASSWORD}`` must be defined
-# ``LIBVIRT_TYPE`` must be defined
-# ``INSTANCE_NAME_PREFIX``, ``VOLUME_NAME_PREFIX`` must be defined
-# ``KEYSTONE_TOKEN_FORMAT`` must be defined
+#
+# - ``functions`` file
+# - ``DEST``, ``DATA_DIR``, ``STACK_USER`` must be defined
+# - ``SERVICE_{TENANT_NAME|PASSWORD}`` must be defined
+# - ``LIBVIRT_TYPE`` must be defined
+# - ``INSTANCE_NAME_PREFIX``, ``VOLUME_NAME_PREFIX`` must be defined
+# - ``KEYSTONE_TOKEN_FORMAT`` must be defined
 
 # ``stack.sh`` calls the entry points in this order:
 #
-# install_nova
-# configure_nova
-# create_nova_conf
-# init_nova
-# start_nova
-# stop_nova
-# cleanup_nova
+# - install_nova
+# - configure_nova
+# - create_nova_conf
+# - init_nova
+# - start_nova
+# - stop_nova
+# - cleanup_nova
 
 # Save trace setting
 XTRACE=$(set +o | grep xtrace)
@@ -62,15 +63,16 @@
 # NOTE: Set API_RATE_LIMIT="False" to turn OFF rate limiting
 API_RATE_LIMIT=${API_RATE_LIMIT:-"True"}
 
+# Option to enable/disable config drive
+# NOTE: Set FORCE_CONFIG_DRIVE="False" to turn OFF config drive
+FORCE_CONFIG_DRIVE=${FORCE_CONFIG_DRIVE:-"always"}
+
 # Nova supports pluggable schedulers.  The default ``FilterScheduler``
 # should work in most cases.
 SCHEDULER=${SCHEDULER:-nova.scheduler.filter_scheduler.FilterScheduler}
 
 QEMU_CONF=/etc/libvirt/qemu.conf
 
-NOVNC_DIR=$DEST/noVNC
-SPICE_DIR=$DEST/spice-html5
-
 # Set default defaults here as some hypervisor drivers override these
 PUBLIC_INTERFACE_DEFAULT=br100
 GUEST_INTERFACE_DEFAULT=eth0
@@ -193,7 +195,7 @@
 
     # Set up the rootwrap sudoers for nova
     TEMPFILE=`mktemp`
-    echo "$USER ALL=(root) NOPASSWD: $ROOTWRAP_SUDOER_CMD" >$TEMPFILE
+    echo "$STACK_USER ALL=(root) NOPASSWD: $ROOTWRAP_SUDOER_CMD" >$TEMPFILE
     chmod 0440 $TEMPFILE
     sudo chown root:root $TEMPFILE
     sudo mv $TEMPFILE /etc/sudoers.d/nova-rootwrap
@@ -212,26 +214,22 @@
     configure_nova_rootwrap
 
     if is_service_enabled n-api; then
-        # Use the sample http middleware configuration supplied in the
-        # Nova sources.  This paste config adds the configuration required
-        # for Nova to validate Keystone tokens.
-
         # Remove legacy paste config if present
         rm -f $NOVA_DIR/bin/nova-api-paste.ini
 
         # Get the sample configuration file in place
         cp $NOVA_DIR/etc/nova/api-paste.ini $NOVA_CONF_DIR
 
-        iniset $NOVA_API_PASTE_INI filter:authtoken auth_host $KEYSTONE_AUTH_HOST
-        if is_service_enabled tls-proxy; then
-            iniset $NOVA_API_PASTE_INI filter:authtoken auth_protocol $KEYSTONE_AUTH_PROTOCOL
-        fi
-        iniset $NOVA_API_PASTE_INI filter:authtoken admin_tenant_name $SERVICE_TENANT_NAME
-        iniset $NOVA_API_PASTE_INI filter:authtoken admin_user nova
-        iniset $NOVA_API_PASTE_INI filter:authtoken admin_password $SERVICE_PASSWORD
+        # Comment out the keystone configs in Nova's api-paste.ini.
+        # We are using nova.conf to configure this instead.
+        inicomment $NOVA_API_PASTE_INI filter:authtoken auth_host
+        inicomment $NOVA_API_PASTE_INI filter:authtoken auth_protocol
+        inicomment $NOVA_API_PASTE_INI filter:authtoken admin_tenant_name
+        inicomment $NOVA_API_PASTE_INI filter:authtoken admin_user
+        inicomment $NOVA_API_PASTE_INI filter:authtoken admin_password
     fi
 
-    iniset $NOVA_API_PASTE_INI filter:authtoken signing_dir $NOVA_AUTH_CACHE_DIR
+    inicomment $NOVA_API_PASTE_INI filter:authtoken signing_dir
 
     if is_service_enabled n-cpu; then
         # Force IP forwarding on, just on case
@@ -379,6 +377,7 @@
     iniset $NOVA_CONF DEFAULT ec2_workers "4"
     iniset $NOVA_CONF DEFAULT metadata_workers "4"
     iniset $NOVA_CONF DEFAULT sql_connection `database_connection_url nova`
+    iniset $NOVA_CONF DEFAULT fatal_deprecations "True"
     iniset $NOVA_CONF DEFAULT instance_name_template "${INSTANCE_NAME_PREFIX}%08x"
     iniset $NOVA_CONF osapi_v3 enabled "True"
 
@@ -394,7 +393,18 @@
             # Set the service port for a proxy to take the original
             iniset $NOVA_CONF DEFAULT osapi_compute_listen_port "$NOVA_SERVICE_PORT_INT"
         fi
+
+        # Add keystone authtoken configuration
+
+        iniset $NOVA_CONF keystone_authtoken auth_host $KEYSTONE_AUTH_HOST
+        iniset $NOVA_CONF keystone_authtoken auth_protocol $KEYSTONE_AUTH_PROTOCOL
+        iniset $NOVA_CONF keystone_authtoken admin_tenant_name $SERVICE_TENANT_NAME
+        iniset $NOVA_CONF keystone_authtoken admin_user nova
+        iniset $NOVA_CONF keystone_authtoken admin_password $SERVICE_PASSWORD
     fi
+
+    iniset $NOVA_CONF keystone_authtoken signing_dir $NOVA_AUTH_CACHE_DIR
+
     if is_service_enabled cinder; then
         iniset $NOVA_CONF DEFAULT volume_api_class "nova.volume.cinder.API"
     fi
@@ -415,6 +425,9 @@
     if [ "$API_RATE_LIMIT" != "True" ]; then
         iniset $NOVA_CONF DEFAULT api_rate_limit "False"
     fi
+    if [ "$FORCE_CONFIG_DRIVE" != "False" ]; then
+        iniset $NOVA_CONF DEFAULT force_config_drive "$FORCE_CONFIG_DRIVE"
+    fi
     # Format logging
     if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ]; then
         setup_colorized_logging $NOVA_CONF DEFAULT
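`FORCE_CONFIG_DRIVE` follows DevStack's usual default-with-override shape: `${VAR:-default}` takes the user's value when set, and the option is only written when it differs from the disabling sentinel. A dry sketch, echoing instead of calling `iniset`:

```shell
# "always" forces a config drive for every instance; setting the
# variable to "False" skips writing the option entirely.
FORCE_CONFIG_DRIVE=${FORCE_CONFIG_DRIVE:-"always"}
if [ "$FORCE_CONFIG_DRIVE" != "False" ]; then
    # Real code: iniset $NOVA_CONF DEFAULT force_config_drive "$FORCE_CONFIG_DRIVE"
    echo "force_config_drive = $FORCE_CONFIG_DRIVE"
fi
```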
@@ -453,27 +466,27 @@
     fi
 
     if is_service_enabled n-novnc || is_service_enabled n-xvnc; then
-      # Address on which instance vncservers will listen on compute hosts.
-      # For multi-host, this should be the management ip of the compute host.
-      VNCSERVER_LISTEN=${VNCSERVER_LISTEN=127.0.0.1}
-      VNCSERVER_PROXYCLIENT_ADDRESS=${VNCSERVER_PROXYCLIENT_ADDRESS=127.0.0.1}
-      iniset $NOVA_CONF DEFAULT vnc_enabled true
-      iniset $NOVA_CONF DEFAULT vncserver_listen "$VNCSERVER_LISTEN"
-      iniset $NOVA_CONF DEFAULT vncserver_proxyclient_address "$VNCSERVER_PROXYCLIENT_ADDRESS"
+        # Address on which instance vncservers will listen on compute hosts.
+        # For multi-host, this should be the management ip of the compute host.
+        VNCSERVER_LISTEN=${VNCSERVER_LISTEN=127.0.0.1}
+        VNCSERVER_PROXYCLIENT_ADDRESS=${VNCSERVER_PROXYCLIENT_ADDRESS=127.0.0.1}
+        iniset $NOVA_CONF DEFAULT vnc_enabled true
+        iniset $NOVA_CONF DEFAULT vncserver_listen "$VNCSERVER_LISTEN"
+        iniset $NOVA_CONF DEFAULT vncserver_proxyclient_address "$VNCSERVER_PROXYCLIENT_ADDRESS"
     else
-      iniset $NOVA_CONF DEFAULT vnc_enabled false
+        iniset $NOVA_CONF DEFAULT vnc_enabled false
     fi
 
     if is_service_enabled n-spice; then
-      # Address on which instance spiceservers will listen on compute hosts.
-      # For multi-host, this should be the management ip of the compute host.
-      SPICESERVER_PROXYCLIENT_ADDRESS=${SPICESERVER_PROXYCLIENT_ADDRESS=127.0.0.1}
-      SPICESERVER_LISTEN=${SPICESERVER_LISTEN=127.0.0.1}
-      iniset $NOVA_CONF spice enabled true
-      iniset $NOVA_CONF spice server_listen "$SPICESERVER_LISTEN"
-      iniset $NOVA_CONF spice server_proxyclient_address "$SPICESERVER_PROXYCLIENT_ADDRESS"
+        # Address on which instance spiceservers will listen on compute hosts.
+        # For multi-host, this should be the management ip of the compute host.
+        SPICESERVER_PROXYCLIENT_ADDRESS=${SPICESERVER_PROXYCLIENT_ADDRESS=127.0.0.1}
+        SPICESERVER_LISTEN=${SPICESERVER_LISTEN=127.0.0.1}
+        iniset $NOVA_CONF spice enabled true
+        iniset $NOVA_CONF spice server_listen "$SPICESERVER_LISTEN"
+        iniset $NOVA_CONF spice server_proxyclient_address "$SPICESERVER_PROXYCLIENT_ADDRESS"
     else
-      iniset $NOVA_CONF spice enabled false
+        iniset $NOVA_CONF spice enabled false
     fi
 
     iniset $NOVA_CONF DEFAULT ec2_dmz_host "$EC2_DMZ_HOST"
@@ -574,6 +587,30 @@
         install_nova_hypervisor
     fi
 
+    if is_service_enabled n-novnc; then
+        # a websockets/html5 or flash powered VNC console for vm instances
+        NOVNC_FROM_PACKAGE=`trueorfalse False $NOVNC_FROM_PACKAGE`
+        if [ "$NOVNC_FROM_PACKAGE" = "True" ]; then
+            NOVNC_WEB_DIR=/usr/share/novnc
+            install_package novnc
+        else
+            NOVNC_WEB_DIR=$DEST/noVNC
+            git_clone $NOVNC_REPO $NOVNC_WEB_DIR $NOVNC_BRANCH
+        fi
+    fi
+
+    if is_service_enabled n-spice; then
+        # a websockets/html5 or flash powered SPICE console for vm instances
+        SPICE_FROM_PACKAGE=`trueorfalse True $SPICE_FROM_PACKAGE`
+        if [ "$SPICE_FROM_PACKAGE" = "True" ]; then
+            SPICE_WEB_DIR=/usr/share/spice-html5
+            install_package spice-html5
+        else
+            SPICE_WEB_DIR=$DEST/spice-html5
+            git_clone $SPICE_REPO $SPICE_WEB_DIR $SPICE_BRANCH
+        fi
+    fi
+
     git_clone $NOVA_REPO $NOVA_DIR $NOVA_BRANCH
     setup_develop $NOVA_DIR
     sudo install -D -m 0644 -o $STACK_USER {$NOVA_DIR/tools/,/etc/bash_completion.d/}nova-manage.bash_completion
@@ -590,7 +627,7 @@
     screen_it n-api "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-api"
     echo "Waiting for nova-api to start..."
     if ! wait_for_service $SERVICE_TIMEOUT http://$SERVICE_HOST:$service_port; then
-      die $LINENO "nova-api did not start"
+        die $LINENO "nova-api did not start"
     fi
 
     # Start proxies if enabled
@@ -599,49 +636,63 @@
     fi
 }
 
-# start_nova() - Start running processes, including screen
-function start_nova() {
-    NOVA_CONF_BOTTOM=$NOVA_CONF
-
-    # ``screen_it`` checks ``is_service_enabled``, it is not needed here
-    screen_it n-cond "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-conductor"
-
+# start_nova_compute() - Start the compute process
+function start_nova_compute() {
     if is_service_enabled n-cell; then
-        NOVA_CONF_BOTTOM=$NOVA_CELLS_CONF
-        screen_it n-cond "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-conductor --config-file $NOVA_CELLS_CONF"
-        screen_it n-cell-region "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-cells --config-file $NOVA_CONF"
-        screen_it n-cell-child "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-cells --config-file $NOVA_CELLS_CONF"
+        local compute_cell_conf=$NOVA_CELLS_CONF
+    else
+        local compute_cell_conf=$NOVA_CONF
     fi
 
     if [[ "$VIRT_DRIVER" = 'libvirt' ]]; then
         # The group **$LIBVIRT_GROUP** is added to the current user in this script.
         # Use 'sg' to execute nova-compute as a member of the **$LIBVIRT_GROUP** group.
-        screen_it n-cpu "cd $NOVA_DIR && sg $LIBVIRT_GROUP '$NOVA_BIN_DIR/nova-compute --config-file $NOVA_CONF_BOTTOM'"
+        screen_it n-cpu "cd $NOVA_DIR && sg $LIBVIRT_GROUP '$NOVA_BIN_DIR/nova-compute --config-file $compute_cell_conf'"
     elif [[ "$VIRT_DRIVER" = 'fake' ]]; then
-       for i in `seq 1 $NUMBER_FAKE_NOVA_COMPUTE`
-       do
-           screen_it n-cpu "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-compute --config-file $NOVA_CONF_BOTTOM"
-       done
+        for i in `seq 1 $NUMBER_FAKE_NOVA_COMPUTE`; do
+            screen_it n-cpu "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-compute --config-file $compute_cell_conf"
+        done
     else
         if is_service_enabled n-cpu && [[ -r $NOVA_PLUGINS/hypervisor-$VIRT_DRIVER ]]; then
             start_nova_hypervisor
         fi
-        screen_it n-cpu "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-compute --config-file $NOVA_CONF_BOTTOM"
+        screen_it n-cpu "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-compute --config-file $compute_cell_conf"
     fi
-    screen_it n-crt "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-cert"
-    screen_it n-net "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-network --config-file $NOVA_CONF_BOTTOM"
-    screen_it n-sch "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-scheduler --config-file $NOVA_CONF_BOTTOM"
-    screen_it n-api-meta "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-api-metadata --config-file $NOVA_CONF_BOTTOM"
+}
 
-    screen_it n-novnc "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-novncproxy --config-file $NOVA_CONF --web $NOVNC_DIR"
-    screen_it n-xvnc "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-xvpvncproxy --config-file $NOVA_CONF"
-    screen_it n-spice "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-spicehtml5proxy --config-file $NOVA_CONF --web $SPICE_DIR"
-    screen_it n-cauth "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-consoleauth"
+# start_nova_rest() - Start the REST API and other non-compute processes, including screen
+function start_nova_rest() {
+    local api_cell_conf=$NOVA_CONF
+    if is_service_enabled n-cell; then
+        local compute_cell_conf=$NOVA_CELLS_CONF
+    else
+        local compute_cell_conf=$NOVA_CONF
+    fi
+
+    # ``screen_it`` checks ``is_service_enabled``, it is not needed here
+    screen_it n-cond "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-conductor --config-file $compute_cell_conf"
+    screen_it n-cell-region "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-cells --config-file $api_cell_conf"
+    screen_it n-cell-child "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-cells --config-file $compute_cell_conf"
+
+    screen_it n-crt "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-cert --config-file $api_cell_conf"
+    screen_it n-net "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-network --config-file $compute_cell_conf"
+    screen_it n-sch "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-scheduler --config-file $compute_cell_conf"
+    screen_it n-api-meta "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-api-metadata --config-file $compute_cell_conf"
+
+    screen_it n-novnc "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-novncproxy --config-file $api_cell_conf --web $NOVNC_WEB_DIR"
+    screen_it n-xvnc "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-xvpvncproxy --config-file $api_cell_conf"
+    screen_it n-spice "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-spicehtml5proxy --config-file $api_cell_conf --web $SPICE_WEB_DIR"
+    screen_it n-cauth "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-consoleauth --config-file $api_cell_conf"
 
     # Starting the nova-objectstore only if swift3 service is not enabled.
     # Swift will act as s3 objectstore.
     is_service_enabled swift3 || \
-        screen_it n-obj "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-objectstore"
+        screen_it n-obj "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-objectstore --config-file $api_cell_conf"
+}
+
+function start_nova() {
+    start_nova_compute
+    start_nova_rest
 }
 
 # stop_nova() - Stop running processes (non-screen)
@@ -649,7 +700,7 @@
     # Kill the nova screen windows
     # Some services are listed here twice since more than one instance
     # of a service may be running in certain configs.
-    for serv in n-api n-cpu n-crt n-net n-sch n-novnc n-xvnc n-cauth n-spice n-cond n-cond n-cell n-cell n-api-meta; do
+    for serv in n-api n-cpu n-crt n-net n-sch n-novnc n-xvnc n-cauth n-spice n-cond n-cell n-cell n-api-meta; do
         screen -S $SCREEN_NAME -p $serv -X kill
     done
     if is_service_enabled n-cpu && [[ -r $NOVA_PLUGINS/hypervisor-$VIRT_DRIVER ]]; then
@@ -661,6 +712,7 @@
 # Restore xtrace
 $XTRACE
 
-# Local variables:
-# mode: shell-script
-# End:
+# Tell emacs to use shell-script-mode
+## Local variables:
+## mode: shell-script
+## End:
diff --git a/lib/nova_plugins/hypervisor-baremetal b/lib/nova_plugins/hypervisor-baremetal
index 4e7c173..660c977 100644
--- a/lib/nova_plugins/hypervisor-baremetal
+++ b/lib/nova_plugins/hypervisor-baremetal
@@ -61,8 +61,8 @@
 
     # Define extra baremetal nova conf flags by defining the array ``EXTRA_BAREMETAL_OPTS``.
     for I in "${EXTRA_BAREMETAL_OPTS[@]}"; do
-       # Attempt to convert flags to options
-       iniset $NOVA_CONF baremetal ${I/=/ }
+        # Attempt to convert flags to options
+        iniset $NOVA_CONF baremetal ${I/=/ }
     done
 }
 
diff --git a/lib/nova_plugins/hypervisor-docker b/lib/nova_plugins/hypervisor-docker
index 4c8fc27..0153953 100644
--- a/lib/nova_plugins/hypervisor-docker
+++ b/lib/nova_plugins/hypervisor-docker
@@ -2,11 +2,13 @@
 # Configure the Docker hypervisor
 
 # Enable with:
-# VIRT_DRIVER=docker
+#
+#   VIRT_DRIVER=docker
 
 # Dependencies:
-# ``functions`` file
-# ``nova`` and ``glance`` configurations
+#
+# - ``functions`` file
+# - ``nova`` and ``glance`` configurations
 
 # install_nova_hypervisor - install any external requirements
 # configure_nova_hypervisor - make configuration changes, including those to other services
@@ -24,8 +26,6 @@
 
 # Set up default directories
 DOCKER_DIR=$DEST/docker
-DOCKER_REPO=${DOCKER_REPO:-https://github.com/dotcloud/openstack-docker.git}
-DOCKER_BRANCH=${DOCKER_BRANCH:-master}
 
 DOCKER_UNIX_SOCKET=/var/run/docker.sock
 DOCKER_PID_FILE=/var/run/docker.pid
@@ -37,7 +37,6 @@
 DOCKER_REGISTRY_IMAGE_NAME=docker-registry
 DOCKER_REPOSITORY_NAME=${SERVICE_HOST}:${DOCKER_REGISTRY_PORT}/${DOCKER_IMAGE_NAME}
 
-DOCKER_PACKAGE_VERSION=${DOCKER_PACKAGE_VERSION:-0.6.1}
 DOCKER_APT_REPO=${DOCKER_APT_REPO:-https://get.docker.io/ubuntu}
 
 
@@ -54,14 +53,8 @@
 
 # configure_nova_hypervisor - Set config files, create data dirs, etc
 function configure_nova_hypervisor() {
-    git_clone $DOCKER_REPO $DOCKER_DIR $DOCKER_BRANCH
-
-    ln -snf ${DOCKER_DIR}/nova-driver $NOVA_DIR/nova/virt/docker
-
     iniset $NOVA_CONF DEFAULT compute_driver docker.DockerDriver
     iniset $GLANCE_API_CONF DEFAULT container_formats ami,ari,aki,bare,ovf,docker
-
-    sudo cp -p ${DOCKER_DIR}/nova-driver/docker.filters $NOVA_CONF_DIR/rootwrap.d
 }
 
 # install_nova_hypervisor() - Install external components
diff --git a/lib/nova_plugins/hypervisor-libvirt b/lib/nova_plugins/hypervisor-libvirt
index caf0296..6f90f4a 100644
--- a/lib/nova_plugins/hypervisor-libvirt
+++ b/lib/nova_plugins/hypervisor-libvirt
@@ -7,6 +7,7 @@
 # Dependencies:
 # ``functions`` file
 # ``nova`` configuration
+# ``STACK_USER`` has to be defined
 
 # install_nova_hypervisor - install any external requirements
 # configure_nova_hypervisor - make configuration changes, including those to other services
@@ -68,7 +69,7 @@
             # with 'unix-group:$group'.
             sudo bash -c "cat <<EOF >/etc/polkit-1/localauthority/50-local.d/50-libvirt-remote-access.pkla
 [libvirt Management Access]
-Identity=unix-user:$USER
+Identity=unix-user:$STACK_USER
 Action=org.libvirt.unix.manage
 ResultAny=yes
 ResultInactive=yes
@@ -82,10 +83,10 @@
             sudo mkdir -p $rules_dir
             sudo bash -c "cat <<EOF > $rules_dir/50-libvirt-$STACK_USER.rules
 polkit.addRule(function(action, subject) {
-     if (action.id == 'org.libvirt.unix.manage' &&
-         subject.user == '"$STACK_USER"') {
-         return polkit.Result.YES;
-     }
+    if (action.id == 'org.libvirt.unix.manage' &&
+        subject.user == '"$STACK_USER"') {
+        return polkit.Result.YES;
+    }
 });
 EOF"
             unset rules_dir
diff --git a/lib/oslo b/lib/oslo
index f77a4fa..816ae9a 100644
--- a/lib/oslo
+++ b/lib/oslo
@@ -6,11 +6,12 @@
 # pre-released versions of oslo libraries.
 
 # Dependencies:
-# ``functions`` file
+#
+# - ``functions`` file
 
 # ``stack.sh`` calls the entry points in this order:
 #
-# install_oslo
+# - install_oslo
 
 # Save trace setting
 XTRACE=$(set +o | grep xtrace)
@@ -52,6 +53,7 @@
 # Restore xtrace
 $XTRACE
 
-# Local variables:
-# mode: shell-script
-# End:
+# Tell emacs to use shell-script-mode
+## Local variables:
+## mode: shell-script
+## End:
diff --git a/lib/rpc_backend b/lib/rpc_backend
index 63edc07..ae83e85 100644
--- a/lib/rpc_backend
+++ b/lib/rpc_backend
@@ -3,15 +3,16 @@
 # rpc backend settings
 
 # Dependencies:
-# ``functions`` file
-# ``RABBIT_{HOST|PASSWORD}`` must be defined when RabbitMQ is used
+#
+# - ``functions`` file
+# - ``RABBIT_{HOST|PASSWORD}`` must be defined when RabbitMQ is used
 
 # ``stack.sh`` calls the entry points in this order:
 #
-# check_rpc_backend
-# install_rpc_backend
-# restart_rpc_backend
-# iniset_rpc_backend
+# - check_rpc_backend
+# - install_rpc_backend
+# - restart_rpc_backend
+# - iniset_rpc_backend
 
 # Save trace setting
 XTRACE=$(set +o | grep xtrace)
@@ -63,7 +64,7 @@
     if is_service_enabled rabbit; then
         # Obliterate rabbitmq-server
         uninstall_package rabbitmq-server
-        sudo killall epmd
+        sudo killall epmd || sudo killall -9 epmd
         if is_ubuntu; then
             # And the Erlang runtime too
             sudo aptitude purge -y ~nerlang
@@ -86,10 +87,6 @@
         else
             exit_distro_not_supported "zeromq installation"
         fi
-
-        # Necessary directory for socket location.
-        sudo mkdir -p /var/run/openstack
-        sudo chown $STACK_USER /var/run/openstack
     fi
 }
 
@@ -106,9 +103,9 @@
         if is_fedora; then
             install_package qpid-cpp-server
             if [[ $DISTRO =~ (rhel6) ]]; then
-               # RHEL6 leaves "auth=yes" in /etc/qpidd.conf, it needs to
-               # be no or you get GSS authentication errors as it
-               # attempts to default to this.
+                # RHEL6 leaves "auth=yes" in /etc/qpidd.conf, it needs to
+                # be no or you get GSS authentication errors as it
+                # attempts to default to this.
                 sudo sed -i.bak 's/^auth=yes$/auth=no/' /etc/qpidd.conf
             fi
         elif is_ubuntu; then
@@ -204,6 +201,7 @@
 # Restore xtrace
 $XTRACE
 
-# Local variables:
-# mode: shell-script
-# End:
+# Tell emacs to use shell-script-mode
+## Local variables:
+## mode: shell-script
+## End:
diff --git a/lib/savanna b/lib/savanna
new file mode 100644
index 0000000..e9dbe72
--- /dev/null
+++ b/lib/savanna
@@ -0,0 +1,97 @@
+# lib/savanna
+
+# Dependencies:
+# ``functions`` file
+# ``DEST``, ``DATA_DIR``, ``STACK_USER`` must be defined
+# ``ADMIN_{TENANT_NAME|PASSWORD}`` must be defined
+
+# ``stack.sh`` calls the entry points in this order:
+#
+# install_savanna
+# configure_savanna
+# start_savanna
+# stop_savanna
+
+# Save trace setting
+XTRACE=$(set +o | grep xtrace)
+set +o xtrace
+
+
+# Defaults
+# --------
+
+# Set up default repos
+SAVANNA_REPO=${SAVANNA_REPO:-${GIT_BASE}/openstack/savanna.git}
+SAVANNA_BRANCH=${SAVANNA_BRANCH:-master}
+
+# Set up default directories
+SAVANNA_DIR=$DEST/savanna
+SAVANNA_CONF_DIR=${SAVANNA_CONF_DIR:-/etc/savanna}
+SAVANNA_CONF_FILE=savanna.conf
+ADMIN_TENANT_NAME=${ADMIN_TENANT_NAME:-admin}
+ADMIN_NAME=${ADMIN_NAME:-admin}
+ADMIN_PASSWORD=${ADMIN_PASSWORD:-nova}
+SAVANNA_DEBUG=${SAVANNA_DEBUG:-True}
+
+# Support entry points installation of console scripts
+if [[ -d $SAVANNA_DIR/bin ]]; then
+    SAVANNA_BIN_DIR=$SAVANNA_DIR/bin
+else
+    SAVANNA_BIN_DIR=$(get_python_exec_prefix)
+fi
+
+# Functions
+# ---------
+
+# configure_savanna() - Set config files, create data dirs, etc
+function configure_savanna() {
+
+    if [[ ! -d $SAVANNA_CONF_DIR ]]; then
+        sudo mkdir -p $SAVANNA_CONF_DIR
+    fi
+    sudo chown $STACK_USER $SAVANNA_CONF_DIR
+
+    # Copy over savanna configuration file and configure common parameters.
+    cp $SAVANNA_DIR/etc/savanna/savanna.conf.sample $SAVANNA_CONF_DIR/$SAVANNA_CONF_FILE
+
+    iniset $SAVANNA_CONF_DIR/$SAVANNA_CONF_FILE DEFAULT os_admin_password $ADMIN_PASSWORD
+    iniset $SAVANNA_CONF_DIR/$SAVANNA_CONF_FILE DEFAULT os_admin_username $ADMIN_NAME
+    iniset $SAVANNA_CONF_DIR/$SAVANNA_CONF_FILE DEFAULT os_admin_tenant_name $ADMIN_TENANT_NAME
+    iniset $SAVANNA_CONF_DIR/$SAVANNA_CONF_FILE DEFAULT debug $SAVANNA_DEBUG
+
+    recreate_database savanna utf8
+    iniset $SAVANNA_CONF_DIR/$SAVANNA_CONF_FILE database sql_connection `database_connection_url savanna`
+    inicomment $SAVANNA_CONF_DIR/$SAVANNA_CONF_FILE database connection
+
+    if is_service_enabled neutron; then
+        iniset $SAVANNA_CONF_DIR/$SAVANNA_CONF_FILE DEFAULT use_neutron true
+        iniset $SAVANNA_CONF_DIR/$SAVANNA_CONF_FILE DEFAULT use_floating_ips true
+    fi
+
+    iniset $SAVANNA_CONF_DIR/$SAVANNA_CONF_FILE DEFAULT use_syslog $SYSLOG
+}
+
+# install_savanna() - Collect source and prepare
+function install_savanna() {
+    git_clone $SAVANNA_REPO $SAVANNA_DIR $SAVANNA_BRANCH
+    setup_develop $SAVANNA_DIR
+}
+
+# start_savanna() - Start running processes, including screen
+function start_savanna() {
+    screen_it savanna "cd $SAVANNA_DIR && $SAVANNA_BIN_DIR/savanna-api --config-file $SAVANNA_CONF_DIR/$SAVANNA_CONF_FILE"
+}
+
+# stop_savanna() - Stop running processes
+function stop_savanna() {
+    # Kill the Savanna screen windows
+    screen -S $SCREEN_NAME -p savanna -X kill
+}
+
+
+# Restore xtrace
+$XTRACE
+
+# Local variables:
+# mode: shell-script
+# End:
diff --git a/lib/savanna-dashboard b/lib/savanna-dashboard
new file mode 100644
index 0000000..e967622
--- /dev/null
+++ b/lib/savanna-dashboard
@@ -0,0 +1,71 @@
+# lib/savanna-dashboard
+
+# Dependencies:
+#
+# - ``functions`` file
+# - ``DEST``, ``DATA_DIR``, ``STACK_USER`` must be defined
+# - ``SERVICE_HOST``
+
+# ``stack.sh`` calls the entry points in this order:
+#
+# - install_savanna_dashboard
+# - configure_savanna_dashboard
+# - cleanup_savanna_dashboard
+
+# Save trace setting
+XTRACE=$(set +o | grep xtrace)
+set +o xtrace
+
+source $TOP_DIR/lib/horizon
+
+# Defaults
+# --------
+
+# Set up default repos
+SAVANNA_DASHBOARD_REPO=${SAVANNA_DASHBOARD_REPO:-${GIT_BASE}/openstack/savanna-dashboard.git}
+SAVANNA_DASHBOARD_BRANCH=${SAVANNA_DASHBOARD_BRANCH:-master}
+
+SAVANNA_PYTHONCLIENT_REPO=${SAVANNA_PYTHONCLIENT_REPO:-${GIT_BASE}/openstack/python-savannaclient.git}
+SAVANNA_PYTHONCLIENT_BRANCH=${SAVANNA_PYTHONCLIENT_BRANCH:-master}
+
+# Set up default directories
+SAVANNA_DASHBOARD_DIR=$DEST/savanna_dashboard
+SAVANNA_PYTHONCLIENT_DIR=$DEST/python-savannaclient
+
+# Functions
+# ---------
+
+function configure_savanna_dashboard() {
+
+    echo -e "SAVANNA_URL = \"http://$SERVICE_HOST:8386/v1.1\"\nAUTO_ASSIGNMENT_ENABLED = False" >> $HORIZON_DIR/openstack_dashboard/local/local_settings.py
+    echo -e "HORIZON_CONFIG['dashboards'] += ('savanna',)\nINSTALLED_APPS += ('savannadashboard',)" >> $HORIZON_DIR/openstack_dashboard/settings.py
+
+    if is_service_enabled neutron; then
+        echo -e "SAVANNA_USE_NEUTRON = True" >> $HORIZON_DIR/openstack_dashboard/local/local_settings.py
+    fi
+}
+
+# install_savanna_dashboard() - Collect source and prepare
+function install_savanna_dashboard() {
+    install_python_savannaclient
+    git_clone $SAVANNA_DASHBOARD_REPO $SAVANNA_DASHBOARD_DIR $SAVANNA_DASHBOARD_BRANCH
+    setup_develop $SAVANNA_DASHBOARD_DIR
+}
+
+function install_python_savannaclient() {
+    git_clone $SAVANNA_PYTHONCLIENT_REPO $SAVANNA_PYTHONCLIENT_DIR $SAVANNA_PYTHONCLIENT_BRANCH
+    setup_develop $SAVANNA_PYTHONCLIENT_DIR
+}
+
+# Cleanup file settings.py from Savanna
+function cleanup_savanna_dashboard() {
+    sed -i '/savanna/d' $HORIZON_DIR/openstack_dashboard/settings.py
+}
+
+# Restore xtrace
+$XTRACE
+
+# Local variables:
+# mode: shell-script
+# End:
+
diff --git a/lib/stackforge b/lib/stackforge
new file mode 100644
index 0000000..718b818
--- /dev/null
+++ b/lib/stackforge
@@ -0,0 +1,67 @@
+# lib/stackforge
+#
+# Functions to install stackforge libraries that we depend on so
+# that we can try their git versions during devstack gate.
+#
+# This is appropriate for python libraries that release to pypi and are
+# expected to be used beyond OpenStack, but are requirements
+# for core services in global-requirements.
+#    * wsme
+#    * pecan
+#
+# This is not appropriate for stackforge projects that are early-stage
+# OpenStack tools.
+
+# Dependencies:
+# ``functions`` file
+
+# ``stack.sh`` calls the entry points in this order:
+#
+# install_stackforge
+
+# Save trace setting
+XTRACE=$(set +o | grep xtrace)
+set +o xtrace
+
+
+# Defaults
+# --------
+WSME_DIR=$DEST/wsme
+PECAN_DIR=$DEST/pecan
+
+# Entry Points
+# ------------
+
+# install_stackforge() - Collect source and prepare
+function install_stackforge() {
+    # TODO(sdague): remove this once we get to Icehouse, this just makes
+    # for a smoother transition of existing users.
+    cleanup_stackforge
+
+    git_clone $WSME_REPO $WSME_DIR $WSME_BRANCH
+    setup_develop_no_requirements_update $WSME_DIR
+
+    git_clone $PECAN_REPO $PECAN_DIR $PECAN_BRANCH
+    setup_develop_no_requirements_update $PECAN_DIR
+}
+
+# cleanup_stackforge() - purge possibly old versions of stackforge libraries
+function cleanup_stackforge() {
+    # this means we've got an old version installed; let's get rid of it,
+    # otherwise python hates itself
+    for lib in wsme pecan; do
+        if ! python -c "import $lib" 2>/dev/null; then
+            echo "Found old $lib... removing to ensure consistency"
+            local PIP_CMD=$(get_pip_command)
+            pip_install $lib
+            sudo $PIP_CMD uninstall -y $lib
+        fi
+    done
+}
+
+# Restore xtrace
+$XTRACE
+
+# Local variables:
+# mode: shell-script
+# End:
diff --git a/lib/swift b/lib/swift
index 6ab43c4..c103b5b 100644
--- a/lib/swift
+++ b/lib/swift
@@ -2,22 +2,24 @@
 # Functions to control the configuration and operation of the **Swift** service
 
 # Dependencies:
-# ``functions`` file
-# ``apache`` file
-# ``DEST``, ``SCREEN_NAME``, `SWIFT_HASH` must be defined
-# ``STACK_USER`` must be defined
-# ``SWIFT_DATA_DIR`` or ``DATA_DIR`` must be defined
-# ``lib/keystone`` file
+#
+# - ``functions`` file
+# - ``apache`` file
+# - ``DEST``, ``SCREEN_NAME``, `SWIFT_HASH` must be defined
+# - ``STACK_USER`` must be defined
+# - ``SWIFT_DATA_DIR`` or ``DATA_DIR`` must be defined
+# - ``lib/keystone`` file
+#
 # ``stack.sh`` calls the entry points in this order:
 #
-# install_swift
-# _config_swift_apache_wsgi
-# configure_swift
-# init_swift
-# start_swift
-# stop_swift
-# cleanup_swift
-# _cleanup_swift_apache_wsgi
+# - install_swift
+# - _config_swift_apache_wsgi
+# - configure_swift
+# - init_swift
+# - start_swift
+# - stop_swift
+# - cleanup_swift
+# - _cleanup_swift_apache_wsgi
 
 # Save trace setting
 XTRACE=$(set +o | grep xtrace)
@@ -57,9 +59,9 @@
 # kilobytes.
 # Default is 1 gigabyte.
 SWIFT_LOOPBACK_DISK_SIZE_DEFAULT=1G
-# if tempest enabled the default size is 4 Gigabyte.
+# If tempest is enabled the default size is 6 gigabytes.
 if is_service_enabled tempest; then
-    SWIFT_LOOPBACK_DISK_SIZE_DEFAULT=${SWIFT_LOOPBACK_DISK_SIZE:-4G}
+    SWIFT_LOOPBACK_DISK_SIZE_DEFAULT=${SWIFT_LOOPBACK_DISK_SIZE:-6G}
 fi
 
 SWIFT_LOOPBACK_DISK_SIZE=${SWIFT_LOOPBACK_DISK_SIZE:-$SWIFT_LOOPBACK_DISK_SIZE_DEFAULT}
@@ -72,6 +74,10 @@
 # the end of the pipeline.
 SWIFT_EXTRAS_MIDDLEWARE_LAST=${SWIFT_EXTRAS_MIDDLEWARE_LAST}
 
+# Set ``SWIFT_EXTRAS_MIDDLEWARE_NO_AUTH`` to extras middlewares that need to be at
+# the beginning of the pipeline, before authentication middlewares.
+SWIFT_EXTRAS_MIDDLEWARE_NO_AUTH=${SWIFT_EXTRAS_MIDDLEWARE_NO_AUTH:-crossdomain}
+
 # The ring uses a configurable number of bits from a path’s MD5 hash as
 # a partition index that designates a device. The number of bits kept
 # from the hash is known as the partition power, and 2 to the partition
@@ -104,17 +110,17 @@
 
 # cleanup_swift() - Remove residual data files
 function cleanup_swift() {
-   rm -f ${SWIFT_CONF_DIR}{*.builder,*.ring.gz,backups/*.builder,backups/*.ring.gz}
-   if egrep -q ${SWIFT_DATA_DIR}/drives/sdb1 /proc/mounts; then
-      sudo umount ${SWIFT_DATA_DIR}/drives/sdb1
-   fi
-   if [[ -e ${SWIFT_DISK_IMAGE} ]]; then
-      rm ${SWIFT_DISK_IMAGE}
-   fi
-   rm -rf ${SWIFT_DATA_DIR}/run/
-   if is_apache_enabled_service swift; then
-       _cleanup_swift_apache_wsgi
-   fi
+    rm -f ${SWIFT_CONF_DIR}{*.builder,*.ring.gz,backups/*.builder,backups/*.ring.gz}
+    if egrep -q ${SWIFT_DATA_DIR}/drives/sdb1 /proc/mounts; then
+        sudo umount ${SWIFT_DATA_DIR}/drives/sdb1
+    fi
+    if [[ -e ${SWIFT_DISK_IMAGE} ]]; then
+        rm ${SWIFT_DISK_IMAGE}
+    fi
+    rm -rf ${SWIFT_DATA_DIR}/run/
+    if is_apache_enabled_service swift; then
+        _cleanup_swift_apache_wsgi
+    fi
 }
 
 # _cleanup_swift_apache_wsgi() - Remove wsgi files, disable and remove apache vhost file
@@ -192,7 +198,7 @@
 
         sudo cp ${SWIFT_DIR}/examples/apache2/account-server.template ${apache_vhost_dir}/account-server-${node_number}
         sudo sed -e "
-             /^#/d;/^$/d;
+            /^#/d;/^$/d;
             s/%PORT%/$account_port/g;
             s/%SERVICENAME%/account-server-${node_number}/g;
             s/%APACHE_NAME%/${APACHE_NAME}/g;
@@ -202,7 +208,7 @@
 
         sudo cp ${SWIFT_DIR}/examples/wsgi/account-server.wsgi.template ${SWIFT_APACHE_WSGI_DIR}/account-server-${node_number}.wsgi
         sudo sed -e "
-             /^#/d;/^$/d;
+            /^#/d;/^$/d;
             s/%SERVICECONF%/account-server\/${node_number}.conf/g;
         " -i ${SWIFT_APACHE_WSGI_DIR}/account-server-${node_number}.wsgi
     done
@@ -210,7 +216,7 @@
 
 # configure_swift() - Set config files, create data dirs and loop image
 function configure_swift() {
-    local swift_pipeline=" "
+    local swift_pipeline="${SWIFT_EXTRAS_MIDDLEWARE_NO_AUTH}"
     local node_number
     local swift_node_config
     local swift_log_dir
@@ -219,7 +225,7 @@
     swift-init --run-dir=${SWIFT_DATA_DIR}/run all stop || true
 
     sudo mkdir -p ${SWIFT_CONF_DIR}/{object,container,account}-server
-    sudo chown -R $USER: ${SWIFT_CONF_DIR}
+    sudo chown -R ${STACK_USER}: ${SWIFT_CONF_DIR}
 
     if [[ "$SWIFT_CONF_DIR" != "/etc/swift" ]]; then
         # Some swift tools are hard-coded to use ``/etc/swift`` and are apparently not going to be fixed.
@@ -232,7 +238,7 @@
     # setup) we configure it with our version of rsync.
     sed -e "
         s/%GROUP%/${USER_GROUP}/;
-        s/%USER%/$USER/;
+        s/%USER%/${STACK_USER}/;
         s,%SWIFT_DATA_DIR%,$SWIFT_DATA_DIR,;
     " $FILES/swift/rsyncd.conf | sudo tee /etc/rsyncd.conf
     # rsyncd.conf just prepared for 4 nodes
@@ -246,7 +252,7 @@
     cp ${SWIFT_DIR}/etc/proxy-server.conf-sample ${SWIFT_CONFIG_PROXY_SERVER}
 
     iniuncomment ${SWIFT_CONFIG_PROXY_SERVER} DEFAULT user
-    iniset ${SWIFT_CONFIG_PROXY_SERVER} DEFAULT user ${USER}
+    iniset ${SWIFT_CONFIG_PROXY_SERVER} DEFAULT user ${STACK_USER}
 
     iniuncomment ${SWIFT_CONFIG_PROXY_SERVER} DEFAULT swift_dir
     iniset ${SWIFT_CONFIG_PROXY_SERVER} DEFAULT swift_dir ${SWIFT_CONF_DIR}
@@ -260,6 +266,15 @@
     iniuncomment ${SWIFT_CONFIG_PROXY_SERVER} DEFAULT bind_port
     iniset ${SWIFT_CONFIG_PROXY_SERVER} DEFAULT bind_port ${SWIFT_DEFAULT_BIND_PORT:-8080}
 
+    # DevStack is commonly run in a small, slow environment, so bump the
+    # timeouts up.
+    # node_timeout is the maximum time the proxy waits between read
+    # operations for a storage node to respond
+    # conn_timeout is how long a connect() system call is allowed to
+    # take before failing
+    iniset ${SWIFT_CONFIG_PROXY_SERVER} app:proxy-server node_timeout 120
+    iniset ${SWIFT_CONFIG_PROXY_SERVER} app:proxy-server conn_timeout 20
+
     # Configure Ceilometer
     if is_service_enabled ceilometer; then
         iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer use "egg:ceilometer#swift"
@@ -268,10 +283,10 @@
 
     # By default Swift will be installed with keystone and tempauth middleware
     # and add the swift3 middleware if its configured for it. The token for
-    # tempauth would be prefixed with the reseller_prefix setting TEMPAUTH_ the
-    # token for keystoneauth would have the standard reseller_prefix AUTH_
+    # tempauth would be prefixed with the reseller_prefix setting `TEMPAUTH_` the
+    # token for keystoneauth would have the standard reseller_prefix `AUTH_`
     if is_service_enabled swift3;then
-        swift_pipeline=" swift3 s3token "
+        swift_pipeline+=" swift3 s3token "
     fi
     swift_pipeline+=" authtoken keystoneauth tempauth "
     sed -i "/^pipeline/ { s/tempauth/${swift_pipeline} ${SWIFT_EXTRAS_MIDDLEWARE}/ ;}" ${SWIFT_CONFIG_PROXY_SERVER}
@@ -283,6 +298,9 @@
     iniuncomment ${SWIFT_CONFIG_PROXY_SERVER} filter:tempauth reseller_prefix
     iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:tempauth reseller_prefix "TEMPAUTH"
 
+    # Configure Crossdomain
+    iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:crossdomain use "egg:swift#crossdomain"
+
     # Configure Keystone
     sed -i '/^# \[filter:authtoken\]/,/^# \[filter:keystoneauth\]$/ s/^#[ \t]*//' ${SWIFT_CONFIG_PROXY_SERVER}
     iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken auth_host $KEYSTONE_AUTH_HOST
@@ -330,7 +348,7 @@
         node_path=${SWIFT_DATA_DIR}/${node_number}
 
         iniuncomment ${swift_node_config} DEFAULT user
-        iniset ${swift_node_config} DEFAULT user ${USER}
+        iniset ${swift_node_config} DEFAULT user ${STACK_USER}
 
         iniuncomment ${swift_node_config} DEFAULT bind_port
         iniset ${swift_node_config} DEFAULT bind_port ${bind_port}
@@ -401,7 +419,7 @@
     swift_log_dir=${SWIFT_DATA_DIR}/logs
     rm -rf ${swift_log_dir}
     mkdir -p ${swift_log_dir}/hourly
-    sudo chown -R $USER:adm ${swift_log_dir}
+    sudo chown -R ${STACK_USER}:adm ${swift_log_dir}
     sed "s,%SWIFT_LOGDIR%,${swift_log_dir}," $FILES/swift/rsyslog.conf | sudo \
         tee /etc/rsyslog.d/10-swift.conf
     if is_apache_enabled_service swift; then
@@ -416,9 +434,9 @@
     # First do a bit of setup by creating the directories and
     # changing the permissions so we can run it as our user.
 
-    USER_GROUP=$(id -g)
+    USER_GROUP=$(id -g ${STACK_USER})
     sudo mkdir -p ${SWIFT_DATA_DIR}/{drives,cache,run,logs}
-    sudo chown -R $USER:${USER_GROUP} ${SWIFT_DATA_DIR}
+    sudo chown -R ${STACK_USER}:${USER_GROUP} ${SWIFT_DATA_DIR}
 
     # Create a loopback disk and format it to XFS.
     if [[ -e ${SWIFT_DISK_IMAGE} ]]; then
@@ -430,7 +448,7 @@
 
     mkdir -p ${SWIFT_DATA_DIR}/drives/images
     sudo touch ${SWIFT_DISK_IMAGE}
-    sudo chown $USER: ${SWIFT_DISK_IMAGE}
+    sudo chown ${STACK_USER}: ${SWIFT_DISK_IMAGE}
 
     truncate -s ${SWIFT_LOOPBACK_DISK_SIZE} ${SWIFT_DISK_IMAGE}
 
@@ -453,9 +471,9 @@
         node_device=${node}/sdb1
         [[ -d $node ]] && continue
         [[ -d $drive ]] && continue
-        sudo install -o ${USER} -g $USER_GROUP -d $drive
-        sudo install -o ${USER} -g $USER_GROUP -d $node_device
-        sudo chown -R $USER: ${node}
+        sudo install -o ${STACK_USER} -g $USER_GROUP -d $drive
+        sudo install -o ${STACK_USER} -g $USER_GROUP -d $node_device
+        sudo chown -R ${STACK_USER}: ${node}
     done
 }
 # create_swift_accounts() - Set up standard swift accounts and extra
@@ -577,26 +595,26 @@
         return 0
     fi
 
-   # By default with only one replica we are launching the proxy,
-   # container, account and object server in screen in foreground and
-   # other services in background. If we have SWIFT_REPLICAS set to something
-   # greater than one we first spawn all the swift services then kill the proxy
-   # service so we can run it in foreground in screen.  ``swift-init ...
-   # {stop|restart}`` exits with '1' if no servers are running, ignore it just
-   # in case
-   swift-init --run-dir=${SWIFT_DATA_DIR}/run all restart || true
-   if [[ ${SWIFT_REPLICAS} == 1 ]]; then
+    # By default with only one replica we are launching the proxy,
+    # container, account and object server in screen in foreground and
+    # other services in background. If we have SWIFT_REPLICAS set to something
+    # greater than one we first spawn all the swift services then kill the proxy
+    # service so we can run it in foreground in screen.  ``swift-init ...
+    # {stop|restart}`` exits with '1' if no servers are running, ignore it just
+    # in case
+    swift-init --run-dir=${SWIFT_DATA_DIR}/run all restart || true
+    if [[ ${SWIFT_REPLICAS} == 1 ]]; then
         todo="object container account"
-   fi
-   for type in proxy ${todo}; do
-       swift-init --run-dir=${SWIFT_DATA_DIR}/run ${type} stop || true
-   done
-   screen_it s-proxy "cd $SWIFT_DIR && $SWIFT_DIR/bin/swift-proxy-server ${SWIFT_CONF_DIR}/proxy-server.conf -v"
-   if [[ ${SWIFT_REPLICAS} == 1 ]]; then
-       for type in object container account; do
-           screen_it s-${type} "cd $SWIFT_DIR && $SWIFT_DIR/bin/swift-${type}-server ${SWIFT_CONF_DIR}/${type}-server/1.conf -v"
-       done
-   fi
+    fi
+    for type in proxy ${todo}; do
+        swift-init --run-dir=${SWIFT_DATA_DIR}/run ${type} stop || true
+    done
+    screen_it s-proxy "cd $SWIFT_DIR && $SWIFT_DIR/bin/swift-proxy-server ${SWIFT_CONF_DIR}/proxy-server.conf -v"
+    if [[ ${SWIFT_REPLICAS} == 1 ]]; then
+        for type in object container account; do
+            screen_it s-${type} "cd $SWIFT_DIR && $SWIFT_DIR/bin/swift-${type}-server ${SWIFT_CONF_DIR}/${type}-server/1.conf -v"
+        done
+    fi
 }
 
 # stop_swift() - Stop running processes (non-screen)
@@ -617,6 +635,7 @@
 # Restore xtrace
 $XTRACE
 
-# Local variables:
-# mode: shell-script
-# End:
+# Tell emacs to use shell-script-mode
+## Local variables:
+## mode: shell-script
+## End:
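The `swift-init ... {stop|restart}` calls above append `|| true` because a stop command exits non-zero when no servers are running, which would abort a `set -e` script. A minimal sketch of that idiom (not DevStack code; `stop_fake_service` is a hypothetical stand-in):

```shell
#!/usr/bin/env bash
# Sketch of the "ignore it just in case" pattern: a stop that finds
# nothing running returns non-zero, so the failure is swallowed with
# || true to keep set -e scripts alive.
set -e

stop_fake_service() {
    # Stand-in for `swift-init ... stop`: exits 1 when nothing is running
    return 1
}

# Without || true this line would terminate the script under set -e
stop_fake_service || true
result="still running"
```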
diff --git a/lib/tempest b/lib/tempest
index bc0b18d..b0fc9f5 100644
--- a/lib/tempest
+++ b/lib/tempest
@@ -2,34 +2,38 @@
 # Install and configure Tempest
 
 # Dependencies:
-# ``functions`` file
-# ``lib/nova`` service is running
-# <list other global vars that are assumed to be defined>
-# - ``DEST``, ``FILES``
-# - ``ADMIN_PASSWORD``
-# - ``DEFAULT_IMAGE_NAME``
-# - ``S3_SERVICE_PORT``
-# - ``SERVICE_HOST``
-# - ``BASE_SQL_CONN`` ``lib/database`` declares
-# - ``PUBLIC_NETWORK_NAME``
-# - ``Q_USE_NAMESPACE``
-# - ``Q_ROUTER_NAME``
-# - ``VIRT_DRIVER``
-# - ``LIBVIRT_TYPE``
-# - ``KEYSTONE_SERVICE_PROTOCOL``, ``KEYSTONE_SERVICE_HOST`` from lib/keystone
+#
+# - ``functions`` file
+# - ``lib/nova`` service is running
+# - Global vars that are assumed to be defined:
+#   - ``DEST``, ``FILES``
+#   - ``ADMIN_PASSWORD``
+#   - ``DEFAULT_IMAGE_NAME``
+#   - ``S3_SERVICE_PORT``
+#   - ``SERVICE_HOST``
+#   - ``BASE_SQL_CONN`` (declared by ``lib/database``)
+#   - ``PUBLIC_NETWORK_NAME``
+#   - ``Q_USE_NAMESPACE``
+#   - ``Q_ROUTER_NAME``
+#   - ``VIRT_DRIVER``
+#   - ``LIBVIRT_TYPE``
+#   - ``KEYSTONE_SERVICE_PROTOCOL``, ``KEYSTONE_SERVICE_HOST`` from lib/keystone
+#
 # Optional Dependencies:
-# ALT_* (similar vars exists in keystone_data.sh)
-# ``LIVE_MIGRATION_AVAILABLE``
-# ``USE_BLOCK_MIGRATION_FOR_LIVE_MIGRATION``
-# ``DEFAULT_INSTANCE_TYPE``
-# ``DEFAULT_INSTANCE_USER``
-# ``CINDER_MULTI_LVM_BACKEND``
-# ``HEAT_CREATE_TEST_IMAGE``
+#
+# - ``ALT_*`` (similar vars exist in ``keystone_data.sh``)
+# - ``LIVE_MIGRATION_AVAILABLE``
+# - ``USE_BLOCK_MIGRATION_FOR_LIVE_MIGRATION``
+# - ``DEFAULT_INSTANCE_TYPE``
+# - ``DEFAULT_INSTANCE_USER``
+# - ``CINDER_MULTI_LVM_BACKEND``
+# - ``HEAT_CREATE_TEST_IMAGE``
+#
 # ``stack.sh`` calls the entry points in this order:
 #
-# install_tempest
-# configure_tempest
-# init_tempest
+# - install_tempest
+# - configure_tempest
+# - init_tempest
 
 # Save trace setting
 XTRACE=$(set +o | grep xtrace)
@@ -48,7 +52,7 @@
 NOVA_SOURCE_DIR=$DEST/nova
 
 BUILD_INTERVAL=1
-BUILD_TIMEOUT=400
+BUILD_TIMEOUT=196
 
 
 BOTO_MATERIALS_PATH="$FILES/images/s3-materials/cirros-0.3.1"
@@ -69,6 +73,7 @@
     local password
     local line
     local flavors
+    local available_flavors
     local flavors_ref
     local flavor_lines
     local public_network_id
@@ -138,10 +143,15 @@
     # If ``DEFAULT_INSTANCE_TYPE`` is not declared, use the new behavior:
     # Tempest creates instance types for itself
     if  [[ -z "$DEFAULT_INSTANCE_TYPE" ]]; then
-        nova flavor-create m1.nano 42 64 0 1
+        available_flavors=$(nova flavor-list)
+        if [[ ! ( $available_flavors =~ 'm1.nano' ) ]]; then
+            nova flavor-create m1.nano 42 64 0 1
+        fi
         flavor_ref=42
         boto_instance_type=m1.nano
-        nova flavor-create m1.micro 84 128 0 1
+        if [[ ! ( $available_flavors =~ 'm1.micro' ) ]]; then
+            nova flavor-create m1.micro 84 128 0 1
+        fi
         flavor_ref_alt=84
     else
         # Check Nova for existing flavors and, if set, look for the
@@ -193,7 +203,7 @@
             # If namespaces are disabled, devstack will create a single
             # public router that tempest should be configured to use.
             public_router_id=$(neutron router-list | awk "/ $Q_ROUTER_NAME / \
-               { print \$2 }")
+                { print \$2 }")
         fi
     fi
 
@@ -266,7 +276,7 @@
     iniset $TEMPEST_CONF boto ssh_user ${DEFAULT_INSTANCE_USER:-cirros}
 
     # Orchestration test image
-    if [ $HEAT_CREATE_TEST_IMAGE == "True" ]; then
+    if [[ "$HEAT_CREATE_TEST_IMAGE" = "True" ]]; then
         disk_image_create /usr/share/tripleo-image-elements "vm fedora heat-cfntools" "i386" "fedora-vm-heat-cfntools-tempest"
         iniset $TEMPEST_CONF orchestration image_ref "fedora-vm-heat-cfntools-tempest"
     fi
@@ -296,7 +306,7 @@
     iniset $TEMPEST_CONF cli cli_dir $NOVA_BIN_DIR
 
     # service_available
-    for service in nova cinder glance neutron swift heat horizon ; do
+    for service in nova cinder glance neutron swift heat horizon ceilometer ironic; do
         if is_service_enabled $service ; then
             iniset $TEMPEST_CONF service_available $service "True"
         else
@@ -328,15 +338,15 @@
     local disk_image="$image_dir/${base_image_name}-blank.img"
     # if the cirros uec downloaded and the system is uec capable
     if [ -f "$kernel" -a -f "$ramdisk" -a -f "$disk_image" -a  "$VIRT_DRIVER" != "openvz" \
-         -a \( "$LIBVIRT_TYPE" != "lxc" -o "$VIRT_DRIVER" != "libvirt" \) ]; then
-       echo "Prepare aki/ari/ami Images"
-       ( #new namespace
-           # tenant:demo ; user: demo
-           source $TOP_DIR/accrc/demo/demo
-           euca-bundle-image -i "$kernel" --kernel true -d "$BOTO_MATERIALS_PATH"
-           euca-bundle-image -i "$ramdisk" --ramdisk true -d "$BOTO_MATERIALS_PATH"
-           euca-bundle-image -i "$disk_image" -d "$BOTO_MATERIALS_PATH"
-       ) 2>&1 </dev/null | cat
+        -a \( "$LIBVIRT_TYPE" != "lxc" -o "$VIRT_DRIVER" != "libvirt" \) ]; then
+        echo "Prepare aki/ari/ami Images"
+        ( #new namespace
+            # tenant:demo ; user: demo
+            source $TOP_DIR/accrc/demo/demo
+            euca-bundle-image -i "$kernel" --kernel true -d "$BOTO_MATERIALS_PATH"
+            euca-bundle-image -i "$ramdisk" --ramdisk true -d "$BOTO_MATERIALS_PATH"
+            euca-bundle-image -i "$disk_image" -d "$BOTO_MATERIALS_PATH"
+        ) 2>&1 </dev/null | cat
     else
         echo "Boto materials are not prepared"
     fi
@@ -345,6 +355,7 @@
 # Restore xtrace
 $XTRACE
 
-# Local variables:
-# mode: shell-script
-# End:
+# Tell emacs to use shell-script-mode
+## Local variables:
+## mode: shell-script
+## End:
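The `m1.nano`/`m1.micro` change above adds an idempotency guard: list existing flavors once, then create only the ones that are missing. A hedged sketch of the same guard, with `fake_flavor_list` as a hypothetical stand-in for `nova flavor-list`:

```shell
#!/usr/bin/env bash
# Sketch of the create-only-if-absent guard added above. The list is
# fetched once, then each candidate is checked against it before a
# (simulated) create is recorded.

fake_flavor_list() {
    # Stand-in for `nova flavor-list`: m1.nano already exists
    printf '| 42 | m1.nano |\n'
}

created=""
available_flavors=$(fake_flavor_list)
for flavor in m1.nano m1.micro; do
    if [[ ! ( $available_flavors =~ $flavor ) ]]; then
        # here the real script would run: nova flavor-create ...
        created="$created $flavor"
    fi
done
```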
diff --git a/lib/template b/lib/template
index 72904fe..629e110 100644
--- a/lib/template
+++ b/lib/template
@@ -3,18 +3,19 @@
 # <do not include this template file in ``stack.sh``!>
 
 # Dependencies:
-# ``functions`` file
-# ``SERVICE_{TENANT_NAME|PASSWORD}`` must be defined
-# <list other global vars that are assumed to be defined>
+#
+# - ``functions`` file
+# - ``SERVICE_{TENANT_NAME|PASSWORD}`` must be defined
+# - <list other global vars that are assumed to be defined>
 
 # ``stack.sh`` calls the entry points in this order:
 #
-# install_XXXX
-# configure_XXXX
-# init_XXXX
-# start_XXXX
-# stop_XXXX
-# cleanup_XXXX
+# - install_XXXX
+# - configure_XXXX
+# - init_XXXX
+# - start_XXXX
+# - stop_XXXX
+# - cleanup_XXXX
 
 # Save trace setting
 XTRACE=$(set +o | grep xtrace)
@@ -79,6 +80,7 @@
 # Restore xtrace
 $XTRACE
 
-# Local variables:
-# mode: shell-script
-# End:
+# Tell emacs to use shell-script-mode
+## Local variables:
+## mode: shell-script
+## End:
diff --git a/lib/tls b/lib/tls
index f7dcffa..a1a7fdd 100644
--- a/lib/tls
+++ b/lib/tls
@@ -1,24 +1,27 @@
 # lib/tls
 # Functions to control the configuration and operation of the TLS proxy service
 
-# Dependencies:
 # !! source _before_ any services that use ``SERVICE_HOST``
-# ``functions`` file
-# ``DEST``, ``DATA_DIR`` must be defined
-# ``HOST_IP``, ``SERVICE_HOST``
-# ``KEYSTONE_TOKEN_FORMAT`` must be defined
+#
+# Dependencies:
+#
+# - ``functions`` file
+# - ``DEST``, ``DATA_DIR`` must be defined
+# - ``HOST_IP``, ``SERVICE_HOST``
+# - ``KEYSTONE_TOKEN_FORMAT`` must be defined
 
 # Entry points:
-# configure_CA
-# init_CA
+#
+# - configure_CA
+# - init_CA
 
-# configure_proxy
-# start_tls_proxy
+# - configure_proxy
+# - start_tls_proxy
 
-# make_root_ca
-# make_int_ca
-# new_cert $INT_CA_DIR int-server "abc"
-# start_tls_proxy HOST_IP 5000 localhost 5000
+# - make_root_ca
+# - make_int_ca
+# - new_cert $INT_CA_DIR int-server "abc"
+# - start_tls_proxy HOST_IP 5000 localhost 5000
 
 
 # Defaults
@@ -321,6 +324,7 @@
 }
 
 
-# Local variables:
-# mode: shell-script
-# End:
+# Tell emacs to use shell-script-mode
+## Local variables:
+## mode: shell-script
+## End:
diff --git a/lib/trove b/lib/trove
index 17c8c99..c40006b 100644
--- a/lib/trove
+++ b/lib/trove
@@ -45,14 +45,15 @@
     SERVICE_ROLE=$(keystone role-list | awk "/ admin / { print \$2 }")
 
     if [[ "$ENABLED_SERVICES" =~ "trove" ]]; then
-        TROVE_USER=$(keystone user-create --name=trove \
-                                                  --pass="$SERVICE_PASSWORD" \
-                                                  --tenant_id $SERVICE_TENANT \
-                                                  --email=trove@example.com \
-                                                  | grep " id " | get_field 2)
+        TROVE_USER=$(keystone user-create \
+            --name=trove \
+            --pass="$SERVICE_PASSWORD" \
+            --tenant_id $SERVICE_TENANT \
+            --email=trove@example.com \
+            | grep " id " | get_field 2)
         keystone user-role-add --tenant-id $SERVICE_TENANT \
-                               --user-id $TROVE_USER \
-                               --role-id $SERVICE_ROLE
+            --user-id $TROVE_USER \
+            --role-id $SERVICE_ROLE
         if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
             TROVE_SERVICE=$(keystone service-create \
                 --name=trove \
@@ -180,6 +181,7 @@
 # Restore xtrace
 $XTRACE
 
-# Local variables:
-# mode: shell-script
-# End:
+# Tell emacs to use shell-script-mode
+## Local variables:
+## mode: shell-script
+## End:
diff --git a/openrc b/openrc
index d5b2156..804bb3f 100644
--- a/openrc
+++ b/openrc
@@ -18,7 +18,7 @@
 fi
 
 # Find the other rc files
-RC_DIR=$(cd $(dirname "$BASH_SOURCE") && pwd)
+RC_DIR=$(cd $(dirname "${BASH_SOURCE:-$0}") && pwd)
 
 # Import common functions
 source $RC_DIR/functions
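The `${BASH_SOURCE:-$0}` change above matters when the rc file is read by a shell where `BASH_SOURCE` is unset: falling back to `$0` keeps the directory resolvable. A minimal sketch:

```shell
#!/usr/bin/env bash
# Why ${BASH_SOURCE:-$0}: BASH_SOURCE is a bash-ism and may be empty in
# other shells; $0 is then the only handle on the script path, so the
# fallback still yields a usable directory.
RC_DIR=$(cd $(dirname "${BASH_SOURCE:-$0}") && pwd)
```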
diff --git a/stack.sh b/stack.sh
index 14ec023..47d93bd 100755
--- a/stack.sh
+++ b/stack.sh
@@ -53,7 +53,7 @@
             if [[ -r $TOP_DIR/localrc ]]; then
                 warn $LINENO "localrc and local.conf:[[local]] both exist, using localrc"
             else
-                echo "# Generated file, do not exit" >$TOP_DIR/.localrc.auto
+                echo "# Generated file, do not edit" >$TOP_DIR/.localrc.auto
                 get_meta_section $TOP_DIR/local.conf local $lfile >>$TOP_DIR/.localrc.auto
             fi
         fi
@@ -131,7 +131,7 @@
 
 # Warn users who aren't on an explicitly supported distro, but allow them to
 # override check and attempt installation with ``FORCE=yes ./stack``
-if [[ ! ${DISTRO} =~ (oneiric|precise|quantal|raring|saucy|7.0|wheezy|sid|testing|jessie|f16|f17|f18|f19|opensuse-12.2|rhel6) ]]; then
+if [[ ! ${DISTRO} =~ (oneiric|precise|quantal|raring|saucy|trusty|7.0|wheezy|sid|testing|jessie|f16|f17|f18|f19|opensuse-12.2|rhel6) ]]; then
     echo "WARNING: this script has not been tested on $DISTRO"
     if [[ "$FORCE" != "yes" ]]; then
         die $LINENO "If you wish to run this script anyway run with FORCE=yes"
@@ -299,6 +299,7 @@
 source $TOP_DIR/lib/tls
 source $TOP_DIR/lib/infra
 source $TOP_DIR/lib/oslo
+source $TOP_DIR/lib/stackforge
 source $TOP_DIR/lib/horizon
 source $TOP_DIR/lib/keystone
 source $TOP_DIR/lib/glance
@@ -313,6 +314,16 @@
 source $TOP_DIR/lib/ironic
 source $TOP_DIR/lib/trove
 
+# Extras Source
+# --------------
+
+# Phase: source
+if [[ -d $TOP_DIR/extras.d ]]; then
+    for i in $TOP_DIR/extras.d/*.sh; do
+        [[ -r $i ]] && source $i source
+    done
+fi
+
 # Set the destination directories for other OpenStack projects
 OPENSTACKCLIENT_DIR=$DEST/python-openstackclient
 
@@ -578,7 +589,9 @@
 source $TOP_DIR/tools/install_prereqs.sh
 
 # Configure an appropriate python environment
-$TOP_DIR/tools/install_pip.sh
+if [[ "$OFFLINE" != "True" ]]; then
+    $TOP_DIR/tools/install_pip.sh
+fi
 
 # Do the ugly hacks for broken packages and distros
 $TOP_DIR/tools/fixup_stuff.sh
@@ -617,6 +630,11 @@
 # Install oslo libraries that have graduated
 install_oslo
 
+# Install stackforge libraries for testing
+if is_service_enabled stackforge_libs; then
+    install_stackforge
+fi
+
 # Install clients libraries
 install_keystoneclient
 install_glanceclient
@@ -676,16 +694,6 @@
     configure_nova
 fi
 
-if is_service_enabled n-novnc; then
-    # a websockets/html5 or flash powered VNC console for vm instances
-    git_clone $NOVNC_REPO $NOVNC_DIR $NOVNC_BRANCH
-fi
-
-if is_service_enabled n-spice; then
-    # a websockets/html5 or flash powered SPICE console for vm instances
-    git_clone $SPICE_REPO $SPICE_DIR $SPICE_BRANCH
-fi
-
 if is_service_enabled horizon; then
     # dashboard
     install_horizon
@@ -722,9 +730,20 @@
 
 if is_service_enabled ir-api ir-cond; then
     install_ironic
+    install_ironicclient
     configure_ironic
 fi
 
+# Extras Install
+# --------------
+
+# Phase: install
+if [[ -d $TOP_DIR/extras.d ]]; then
+    for i in $TOP_DIR/extras.d/*.sh; do
+        [[ -r $i ]] && source $i stack install
+    done
+fi
+
 if [[ $TRACK_DEPENDS = True ]]; then
     $DEST/.venv/bin/pip freeze > $DEST/requires-post-pip
     if ! diff -Nru $DEST/requires-pre-pip $DEST/requires-post-pip > $DEST/requires.diff; then
@@ -820,7 +839,7 @@
 # If enabled, systat has to start early to track OpenStack service startup.
 if is_service_enabled sysstat;then
     if [[ -n ${SCREEN_LOGDIR} ]]; then
-        screen_it sysstat "sar -o $SCREEN_LOGDIR/$SYSSTAT_FILE $SYSSTAT_INTERVAL"
+        screen_it sysstat "cd ; sar -o $SCREEN_LOGDIR/$SYSSTAT_FILE $SYSSTAT_INTERVAL"
     else
         screen_it sysstat "sar $SYSSTAT_INTERVAL"
     fi
@@ -995,11 +1014,22 @@
     prepare_baremetal_toolchain
     configure_baremetal_nova_dirs
     if [[ "$BM_USE_FAKE_ENV" = "True" ]]; then
-       create_fake_baremetal_env
+        create_fake_baremetal_env
     fi
 fi
 
 
+# Extras Configuration
+# ====================
+
+# Phase: post-config
+if [[ -d $TOP_DIR/extras.d ]]; then
+    for i in $TOP_DIR/extras.d/*.sh; do
+        [[ -r $i ]] && source $i stack post-config
+    done
+fi
+
+
 # Local Configuration
 # ===================
 
@@ -1143,28 +1173,29 @@
 
 if is_service_enabled g-reg; then
     TOKEN=$(keystone token-get | grep ' id ' | get_field 2)
+    die_if_not_set $LINENO TOKEN "Keystone failed to get token"
 
     if is_baremetal; then
-       echo_summary "Creating and uploading baremetal images"
+        echo_summary "Creating and uploading baremetal images"
 
-       # build and upload separate deploy kernel & ramdisk
-       upload_baremetal_deploy $TOKEN
+        # build and upload separate deploy kernel & ramdisk
+        upload_baremetal_deploy $TOKEN
 
-       # upload images, separating out the kernel & ramdisk for PXE boot
-       for image_url in ${IMAGE_URLS//,/ }; do
-           upload_baremetal_image $image_url $TOKEN
-       done
+        # upload images, separating out the kernel & ramdisk for PXE boot
+        for image_url in ${IMAGE_URLS//,/ }; do
+            upload_baremetal_image $image_url $TOKEN
+        done
     else
-       echo_summary "Uploading images"
+        echo_summary "Uploading images"
 
-       # Option to upload legacy ami-tty, which works with xenserver
-       if [[ -n "$UPLOAD_LEGACY_TTY" ]]; then
-           IMAGE_URLS="${IMAGE_URLS:+${IMAGE_URLS},}https://github.com/downloads/citrix-openstack/warehouse/tty.tgz"
-       fi
+        # Option to upload legacy ami-tty, which works with xenserver
+        if [[ -n "$UPLOAD_LEGACY_TTY" ]]; then
+            IMAGE_URLS="${IMAGE_URLS:+${IMAGE_URLS},}https://github.com/downloads/citrix-openstack/warehouse/tty.tgz"
+        fi
 
-       for image_url in ${IMAGE_URLS//,/ }; do
-           upload_image $image_url $TOKEN
-       done
+        for image_url in ${IMAGE_URLS//,/ }; do
+            upload_image $image_url $TOKEN
+        done
     fi
 fi
 
@@ -1176,7 +1207,7 @@
 if is_service_enabled nova && is_baremetal; then
     # create special flavor for baremetal if we know what images to associate
     [[ -n "$BM_DEPLOY_KERNEL_ID" ]] && [[ -n "$BM_DEPLOY_RAMDISK_ID" ]] && \
-       create_baremetal_flavor $BM_DEPLOY_KERNEL_ID $BM_DEPLOY_RAMDISK_ID
+        create_baremetal_flavor $BM_DEPLOY_KERNEL_ID $BM_DEPLOY_RAMDISK_ID
 
     # otherwise user can manually add it later by calling nova-baremetal-manage
     [[ -n "$BM_FIRST_MAC" ]] && add_baremetal_node
@@ -1191,14 +1222,14 @@
     fi
     # ensure callback daemon is running
     sudo pkill nova-baremetal-deploy-helper || true
-    screen_it baremetal "nova-baremetal-deploy-helper"
+    screen_it baremetal "cd ; nova-baremetal-deploy-helper"
 fi
 
 # Save some values we generated for later use
 CURRENT_RUN_TIME=$(date "+$TIMESTAMP_FORMAT")
 echo "# $CURRENT_RUN_TIME" >$TOP_DIR/.stackenv
 for i in BASE_SQL_CONN ENABLED_SERVICES HOST_IP LOGFILE \
-  SERVICE_HOST SERVICE_PROTOCOL STACK_USER TLS_IP; do
+    SERVICE_HOST SERVICE_PROTOCOL STACK_USER TLS_IP; do
     echo $i=${!i} >>$TOP_DIR/.stackenv
 done
 
@@ -1214,9 +1245,10 @@
 # Run extras
 # ==========
 
+# Phase: extra
 if [[ -d $TOP_DIR/extras.d ]]; then
     for i in $TOP_DIR/extras.d/*.sh; do
-        [[ -r $i ]] && source $i stack
+        [[ -r $i ]] && source $i stack extra
     done
 fi
 
diff --git a/stackrc b/stackrc
index 3f740b5..6adb676 100644
--- a/stackrc
+++ b/stackrc
@@ -1,7 +1,7 @@
 # stackrc
 #
 # Find the other rc files
-RC_DIR=$(cd $(dirname "$BASH_SOURCE") && pwd)
+RC_DIR=$(cd $(dirname "${BASH_SOURCE:-$0}") && pwd)
 
 # Destination path for installation
 DEST=/opt/stack
@@ -104,6 +104,10 @@
 IRONIC_REPO=${IRONIC_REPO:-${GIT_BASE}/openstack/ironic.git}
 IRONIC_BRANCH=${IRONIC_BRANCH:-master}
 
+# ironic client
+IRONICCLIENT_REPO=${IRONICCLIENT_REPO:-${GIT_BASE}/openstack/python-ironicclient.git}
+IRONICCLIENT_BRANCH=${IRONICCLIENT_BRANCH:-master}
+
 # unified auth system (manages accounts/tokens)
 KEYSTONE_REPO=${KEYSTONE_REPO:-${GIT_BASE}/openstack/keystone.git}
 KEYSTONE_BRANCH=${KEYSTONE_BRANCH:-master}
@@ -193,6 +197,16 @@
 TROVECLIENT_REPO=${TROVECLIENT_REPO:-${GIT_BASE}/openstack/python-troveclient.git}
 TROVECLIENT_BRANCH=${TROVECLIENT_BRANCH:-master}
 
+# stackforge libraries that are used by OpenStack core services
+# wsme
+WSME_REPO=${WSME_REPO:-${GIT_BASE}/stackforge/wsme.git}
+WSME_BRANCH=${WSME_BRANCH:-master}
+
+# pecan
+PECAN_REPO=${PECAN_REPO:-${GIT_BASE}/stackforge/pecan.git}
+PECAN_BRANCH=${PECAN_BRANCH:-master}
+
+
 # Nova hypervisor configuration.  We default to libvirt with **kvm** but will
 # drop back to **qemu** if we are unable to load the kvm module.  ``stack.sh`` can
 # also install an **LXC**, **OpenVZ** or **XenAPI** based system.  If xenserver-core
@@ -293,6 +307,9 @@
 # Do not install packages tagged with 'testonly' by default
 INSTALL_TESTONLY_PACKAGES=${INSTALL_TESTONLY_PACKAGES:-False}
 
+# Undo requirements changes by global requirements
+UNDO_REQUIREMENTS=${UNDO_REQUIREMENTS:-True}
+
 # Local variables:
 # mode: shell-script
 # End:
diff --git a/tests/functions.sh b/tests/functions.sh
index 7d486d4..40376aa 100755
--- a/tests/functions.sh
+++ b/tests/functions.sh
@@ -122,16 +122,16 @@
 
 # test empty option
 if ini_has_option test.ini ddd empty; then
-   echo "OK: ddd.empty present"
+    echo "OK: ddd.empty present"
 else
-   echo "ini_has_option failed: ddd.empty not found"
+    echo "ini_has_option failed: ddd.empty not found"
 fi
 
 # test non-empty option
 if ini_has_option test.ini bbb handlers; then
-   echo "OK: bbb.handlers present"
+    echo "OK: bbb.handlers present"
 else
-   echo "ini_has_option failed: bbb.handlers not found"
+    echo "ini_has_option failed: bbb.handlers not found"
 fi
 
 # test changing empty option
diff --git a/tools/bash8.py b/tools/bash8.py
index 82a1010..edf7da4 100755
--- a/tools/bash8.py
+++ b/tools/bash8.py
@@ -55,10 +55,41 @@
             print_error('E003: Indent not multiple of 4', line)
 
 
+def starts_multiline(line):
+    m = re.search("[^<]<<\s*(?P<token>\w+)", line)
+    if m:
+        return m.group('token')
+    else:
+        return False
+
+
+def end_of_multiline(line, token):
+    if token:
+        return re.search("^%s\s*$" % token, line) is not None
+    return False
+
+
 def check_files(files):
+    in_multiline = False
+    logical_line = ""
+    token = False
     for line in fileinput.input(files):
-        check_no_trailing_whitespace(line)
-        check_indents(line)
+        # NOTE(sdague): multiline processing of heredocs is interesting
+        if not in_multiline:
+            logical_line = line
+            token = starts_multiline(line)
+            if token:
+                in_multiline = True
+                continue
+        else:
+            logical_line = logical_line + line
+            if not end_of_multiline(line, token):
+                continue
+            else:
+                in_multiline = False
+
+        check_no_trailing_whitespace(logical_line)
+        check_indents(logical_line)
 
 
 def get_options():
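The heredoc handling added to `bash8.py` above exists because a heredoc body follows the document's own layout, not shell indentation rules; checking each body line individually would raise false E003 errors. A small shell example of the kind of input that must be folded into one logical line:

```shell
#!/usr/bin/env bash
# A heredoc body may use any indentation; without the multiline folding
# above, a per-line indent check would flag these lines spuriously.
body=$(cat <<EOF
  two-space indent is fine here
     so is five-space indent
EOF
)
```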
diff --git a/tools/build_bm_multi.sh b/tools/build_bm_multi.sh
index 52b9b4e..328d576 100755
--- a/tools/build_bm_multi.sh
+++ b/tools/build_bm_multi.sh
@@ -22,8 +22,8 @@
 if [ ! "$TERMINATE" = "1" ]; then
     echo "Waiting for head node ($HEAD_HOST) to start..."
     if ! timeout 60 sh -c "while ! wget -q -O- http://$HEAD_HOST | grep -q username; do sleep 1; done"; then
-      echo "Head node did not start"
-      exit 1
+        echo "Head node did not start"
+        exit 1
     fi
 fi
 
diff --git a/tools/build_docs.sh b/tools/build_docs.sh
new file mode 100755
index 0000000..1c145e2
--- /dev/null
+++ b/tools/build_docs.sh
@@ -0,0 +1,136 @@
+#!/usr/bin/env bash
+
+# **build_docs.sh** - Build the gh-pages docs for DevStack
+#
+# - Install shocco if not found on PATH
+# - Clone MASTER_REPO branch MASTER_BRANCH
+# - Re-creates ``docs`` directory from existing repo + new generated script docs
+
+# Usage:
+## build_docs.sh [[-b branch] [-p] repo] | .
+## -b branch        The DevStack branch to check out (default is master; ignored if
+##                  repo is not specified)
+## -p               Push the resulting docs tree to the source repo; fatal error if
+##                  repo is not specified
+## repo             The DevStack repository to clone (default is DevStack github repo)
+##                  If a repo is not supplied use the current directory
+##                  (assumed to be a DevStack checkout) as the source.
+## .                Use the current repo and branch (do not use with -p to
+##                  prevent stray files in the workspace being added to the docs)
+
+# Defaults
+# --------
+
+# Source repo/branch for DevStack
+MASTER_REPO=${MASTER_REPO:-https://github.com/openstack-dev/devstack.git}
+MASTER_BRANCH=${MASTER_BRANCH:-master}
+
+# http://devstack.org is a GitHub gh-pages site in the https://github.com/cloudbuilders/devstack.git repo
+GH_PAGES_REPO=git@github.com:cloudbuilders/devstack.git
+
+# Uses this shocco branch: https://github.com/dtroyer/shocco/tree/rst_support
+SHOCCO=${SHOCCO:-shocco}
+if ! which shocco; then
+    if [[ ! -x shocco/shocco ]]; then
+        if [[ -z "$INSTALL_SHOCCO" ]]; then
+            echo "shocco not found in \$PATH, please set environment variable SHOCCO"
+            exit 1
+        fi
+        echo "Installing local copy of shocco"
+        git clone -b rst_support https://github.com/dtroyer/shocco shocco
+        cd shocco
+        ./configure
+        make
+        cd ..
+    fi
+    SHOCCO=shocco/shocco
+fi
+
+# Process command-line args
+while getopts b:p c; do
+    case $c in
+        b)  MASTER_BRANCH=$OPTARG
+            ;;
+        p)  PUSH_REPO=1
+            ;;
+    esac
+done
+shift `expr $OPTIND - 1`
+
+# Sanity check the args
+if [[ "$1" == "." ]]; then
+    REPO=""
+    if [[ -n $PUSH_REPO ]]; then
+        echo "Push not allowed from an active workspace"
+        unset PUSH_REPO
+    fi
+else
+    if [[ -z "$1" ]]; then
+        REPO=$MASTER_REPO
+    else
+        REPO=$1
+    fi
+fi
+
+# Check out a specific DevStack branch
+if [[ -n $REPO ]]; then
+    # Make a workspace
+    TMP_ROOT=$(mktemp -d devstack-docs-XXXX)
+    echo "Building docs in $TMP_ROOT"
+    cd $TMP_ROOT
+
+    # Get the master branch
+    git clone $REPO devstack
+    cd devstack
+    git checkout $MASTER_BRANCH
+fi
+
+# Processing
+# ----------
+
+# Assumption is we are now in the DevStack repo workspace to be processed
+
+# Pull the latest docs branch from devstack.org repo
+if ! [ -d docs ]; then
+    git clone -b gh-pages $GH_PAGES_REPO docs
+fi
+
+# Build list of scripts to process
+FILES=""
+for f in $(find . -name .git -prune -o \( -type f -name \*.sh -not -path \*shocco/\* -print \)); do
+    echo $f
+    FILES+="$f "
+    mkdir -p docs/`dirname $f`;
+    $SHOCCO $f > docs/$f.html
+done
+for f in $(find functions lib samples -type f -name \*); do
+    echo $f
+    FILES+="$f "
+    mkdir -p docs/`dirname $f`;
+    $SHOCCO $f > docs/$f.html
+done
+echo "$FILES" >docs-files
+
+# Switch to the gh_pages repo
+cd docs
+
+# Collect the new generated pages
+find . -name \*.html -print0 | xargs -0 git add
+
+# Push our changes back up to the docs branch
+if ! git diff-index HEAD --quiet; then
+    git commit -a -m "Update script docs"
+    if [[ -n $PUSH_REPO ]]; then
+        git push
+    fi
+fi
+
+# Clean up or report the temp workspace
+if [[ -n $REPO && -n $PUSH_REPO ]]; then
+    rm -rf $TMP_ROOT
+else
+    if [[ -z "$TMP_ROOT" ]]; then
+        TMP_ROOT="$(pwd)"
+    fi
+    echo "Built docs in $TMP_ROOT"
+fi
diff --git a/tools/build_ramdisk.sh b/tools/build_ramdisk.sh
index 2c45568..7372555 100755
--- a/tools/build_ramdisk.sh
+++ b/tools/build_ramdisk.sh
@@ -22,7 +22,7 @@
         umount $MNTDIR
         rmdir $MNTDIR
     fi
-    if [ -n "$DEV_FILE_TMP" -a -e "$DEV_FILE_TMP "]; then
+    if [ -n "$DEV_FILE_TMP" -a -e "$DEV_FILE_TMP" ]; then
         rm -f $DEV_FILE_TMP
     fi
     if [ -n "$IMG_FILE_TMP" -a -e "$IMG_FILE_TMP" ]; then
@@ -84,11 +84,10 @@
     $TOOLS_DIR/get_uec_image.sh $DIST_NAME $CACHEDIR/$DIST_NAME-base.img
 fi
 
-# Finds the next available NBD device
-# Exits script if error connecting or none free
+# Finds and returns full device path for the next available NBD device.
+# Exits script if error connecting or none free.
 # map_nbd image
-# Returns full nbd device path
-function map_nbd {
+function map_nbd() {
     for i in `seq 0 15`; do
         if [ ! -e /sys/block/nbd$i/pid ]; then
             NBD=/dev/nbd$i
@@ -156,7 +155,7 @@
 
     # Pre-create the image file
     # FIXME(dt): This should really get the partition size to
-    #            pre-create the image file
+    # pre-create the image file
     dd if=/dev/zero of=$IMG_FILE_TMP bs=1 count=1 seek=$((2*1024*1024*1024))
     # Create filesystem image for RAM disk
     dd if=${NBD}p1 of=$IMG_FILE_TMP bs=1M
diff --git a/tools/build_uec.sh b/tools/build_uec.sh
index 6c4a26c..bce051a 100755
--- a/tools/build_uec.sh
+++ b/tools/build_uec.sh
@@ -229,8 +229,8 @@
 
 # (re)start a metadata service
 (
-  pid=`lsof -iTCP@192.168.$GUEST_NETWORK.1:4567 -n | awk '{print $2}' | tail -1`
-  [ -z "$pid" ] || kill -9 $pid
+    pid=`lsof -iTCP@192.168.$GUEST_NETWORK.1:4567 -n | awk '{print $2}' | tail -1`
+    [ -z "$pid" ] || kill -9 $pid
 )
 cd $vm_dir/uec
 python meta.py 192.168.$GUEST_NETWORK.1:4567 &
@@ -268,7 +268,7 @@
     sleep 2
 
     while [ ! -e "$vm_dir/console.log" ]; do
-      sleep 1
+        sleep 1
     done
 
     tail -F $vm_dir/console.log &
diff --git a/tools/create-stack-user.sh b/tools/create-stack-user.sh
old mode 100644
new mode 100755
index 2251d1e..50f6592
--- a/tools/create-stack-user.sh
+++ b/tools/create-stack-user.sh
@@ -5,7 +5,9 @@
 # Create a user account suitable for running DevStack
 # - create a group named $STACK_USER if it does not exist
 # - create a user named $STACK_USER if it does not exist
+#
 #   - home is $DEST
+#
 # - configure sudo for $STACK_USER
 
 # ``stack.sh`` was never intended to run as root.  It had a hack to do what is
diff --git a/tools/create_userrc.sh b/tools/create_userrc.sh
index 44b0f6b..8383fe7 100755
--- a/tools/create_userrc.sh
+++ b/tools/create_userrc.sh
@@ -105,15 +105,15 @@
 fi
 
 if [ -z "$OS_TENANT_NAME" -a -z "$OS_TENANT_ID" ]; then
-   export OS_TENANT_NAME=admin
+    export OS_TENANT_NAME=admin
 fi
 
 if [ -z "$OS_USERNAME" ]; then
-   export OS_USERNAME=admin
+    export OS_USERNAME=admin
 fi
 
 if [ -z "$OS_AUTH_URL" ]; then
-   export OS_AUTH_URL=http://localhost:5000/v2.0/
+    export OS_AUTH_URL=http://localhost:5000/v2.0/
 fi
 
 USER_PASS=${USER_PASS:-$OS_PASSWORD}
@@ -249,7 +249,7 @@
         for user_id_at_name in `keystone user-list --tenant-id $tenant_id | awk 'BEGIN {IGNORECASE = 1} /true[[:space:]]*\|[^|]*\|$/ {print  $2 "@" $4}'`; do
             read user_id user_name <<< `echo "$user_id_at_name" | sed 's/@/ /'`
             if [ $MODE = one -a "$user_name" != "$USER_NAME" ]; then
-               continue;
+                continue;
             fi
             add_entry "$user_id" "$user_name" "$tenant_id" "$tenant_name" "$USER_PASS"
         done
diff --git a/tools/docker/install_docker.sh b/tools/docker/install_docker.sh
index 289002e..2e5b510 100755
--- a/tools/docker/install_docker.sh
+++ b/tools/docker/install_docker.sh
@@ -38,7 +38,7 @@
 install_package python-software-properties && \
     sudo sh -c "echo deb $DOCKER_APT_REPO docker main > /etc/apt/sources.list.d/docker.list"
 apt_get update
-install_package --force-yes lxc-docker=${DOCKER_PACKAGE_VERSION} socat
+install_package --force-yes lxc-docker socat
 
 # Start the daemon - restart just in case the package ever auto-starts...
 restart_service docker
diff --git a/tools/fixup_stuff.sh b/tools/fixup_stuff.sh
index f3c0f98..f936230 100755
--- a/tools/fixup_stuff.sh
+++ b/tools/fixup_stuff.sh
@@ -5,11 +5,15 @@
 # fixup_stuff.sh
 #
 # All distro and package specific hacks go in here
+#
 # - prettytable 0.7.2 permissions are 600 in the package and
 #   pip 1.4 doesn't fix it (1.3 did)
+#
 # - httplib2 0.8 permissions are 600 in the package and
 #   pip 1.4 doesn't fix it (1.3 did)
+#
 # - RHEL6:
+#
 #   - set selinux not enforcing
 #   - (re)start messagebus daemon
 #   - remove distro packages python-crypto and python-lxml
@@ -35,25 +39,35 @@
 # Python Packages
 # ---------------
 
+# get_package_path python-package    # in import notation
+function get_package_path() {
+    local package=$1
+    echo $(python -c "import os; import $package; print(os.path.split(os.path.realpath($package.__file__))[0])")
+}
+
+
 # Pre-install affected packages so we can fix the permissions
+# These can go away once we are confident that pip 1.4.1+ is available everywhere
+
+# Fix prettytable 0.7.2 permissions
+# Don't specify --upgrade so we use the existing package if present
 pip_install prettytable
+PACKAGE_DIR=$(get_package_path prettytable)
+# Only fix version 0.7.2
+dir=$(echo $PACKAGE_DIR/prettytable-0.7.2*)
+if [[ -d $dir ]]; then
+    sudo chmod +r $dir/*
+fi
+
+# Fix httplib2 0.8 permissions
+# Don't specify --upgrade so we use the existing package if present
 pip_install httplib2
-
-SITE_DIRS=$(python -c "import site; import os; print os.linesep.join(site.getsitepackages())")
-for dir in $SITE_DIRS; do
-
-    # Fix prettytable 0.7.2 permissions
-    if [[ -r $dir/prettytable.py ]]; then
-        sudo chmod +r $dir/prettytable-0.7.2*/*
-    fi
-
-    # Fix httplib2 0.8 permissions
-    httplib_dir=httplib2-0.8.egg-info
-    if [[ -d $dir/$httplib_dir ]]; then
-        sudo chmod +r $dir/$httplib_dir/*
-    fi
-
-done
+PACKAGE_DIR=$(get_package_path httplib2)
+# Only fix version 0.8
+dir=$(echo $PACKAGE_DIR-0.8*)
+if [[ -d $dir ]]; then
+    sudo chmod +r $dir/*
+fi
 
 
 # RHEL6
@@ -62,8 +76,7 @@
 if [[ $DISTRO =~ (rhel6) ]]; then
 
     # Disable selinux to avoid configuring to allow Apache access
-    # to Horizon files or run nodejs (LP#1175444)
-    # FIXME(dtroyer): see if this can be skipped without node or if Horizon is not enabled
+    # to Horizon files (LP#1175444)
     if selinuxenabled; then
         sudo setenforce 0
     fi
@@ -80,7 +93,7 @@
         # fresh system via Anaconda and the dependency chain
         # ``cas`` -> ``python-paramiko`` -> ``python-crypto``.
         # ``pip uninstall pycrypto`` will remove the packaged ``.egg-info``
-        #  file but leave most of the actual library files behind in
+        # file but leave most of the actual library files behind in
         # ``/usr/lib64/python2.6/Crypto``. Later ``pip install pycrypto``
         # will install over the packaged files resulting
         # in a useless mess of old, rpm-packaged files and pip-installed files.
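The `get_package_path` helper added above shells out to Python to locate the directory a package was installed into, by importing it and resolving its `__file__`. A minimal standalone sketch of the same technique (using `python3` here for portability; the helper in the patch calls `python`):

```shell
#!/usr/bin/env bash
# Locate the directory containing an installed Python package by importing
# it and resolving its __file__, mirroring the get_package_path helper.
get_package_path() {
    local package=$1
    python3 -c "import os, $package; print(os.path.split(os.path.realpath($package.__file__))[0])"
}

# Example: find where the stdlib 'json' package lives
pkg_dir=$(get_package_path json)
echo "json package lives in: $pkg_dir"
```

The result can then be globbed against a specific version (as the patch does with `prettytable-0.7.2*`) so that only the known-broken release has its permissions fixed.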
diff --git a/tools/install_prereqs.sh b/tools/install_prereqs.sh
index 68f11ce..0c65fd9 100755
--- a/tools/install_prereqs.sh
+++ b/tools/install_prereqs.sh
@@ -55,7 +55,7 @@
 # ================
 
 # Install package requirements
-install_package $(get_packages $ENABLED_SERVICES)
+install_package $(get_packages general $ENABLED_SERVICES)
 
 if [[ -n "$SYSLOG" && "$SYSLOG" != "False" ]]; then
     if is_ubuntu || is_fedora; then
diff --git a/tools/jenkins/jenkins_home/build_jenkins.sh b/tools/jenkins/jenkins_home/build_jenkins.sh
index e0e774e..a556db0 100755
--- a/tools/jenkins/jenkins_home/build_jenkins.sh
+++ b/tools/jenkins/jenkins_home/build_jenkins.sh
@@ -6,8 +6,8 @@
 
 # Make sure only root can run our script
 if [[ $EUID -ne 0 ]]; then
-   echo "This script must be run as root"
-   exit 1
+    echo "This script must be run as root"
+    exit 1
 fi
 
 # This directory
@@ -31,15 +31,15 @@
 
 # Install jenkins
 if [ ! -e /var/lib/jenkins ]; then
-   echo "Jenkins installation failed"
-   exit 1
+    echo "Jenkins installation failed"
+    exit 1
 fi
 
 # Make sure user has configured a jenkins ssh pubkey
 if [ ! -e /var/lib/jenkins/.ssh/id_rsa.pub ]; then
-   echo "Public key for jenkins is missing.  This is used to ssh into your instances."
-   echo "Please run "su -c ssh-keygen jenkins" before proceeding"
-   exit 1
+    echo "Public key for jenkins is missing.  This is used to ssh into your instances."
+    echo "Please run 'su -c ssh-keygen jenkins' before proceeding"
+    exit 1
 fi
 
 # Setup sudo
@@ -96,7 +96,7 @@
 
 # Configure plugins
 for plugin in ${PLUGINS//,/ }; do
-    name=`basename $plugin`   
+    name=`basename $plugin`
     dest=/var/lib/jenkins/plugins/$name
     if [ ! -e $dest ]; then
         curl -L $plugin -o $dest
diff --git a/tools/upload_image.sh b/tools/upload_image.sh
index dd21c9f..d81a5c8 100755
--- a/tools/upload_image.sh
+++ b/tools/upload_image.sh
@@ -33,6 +33,7 @@
 
 # Get a token to authenticate to glance
 TOKEN=$(keystone token-get | grep ' id ' | get_field 2)
+die_if_not_set $LINENO TOKEN "Keystone failed to get token"
 
 # Glance connection info.  Note the port must be specified.
 GLANCE_HOSTPORT=${GLANCE_HOSTPORT:-$GLANCE_HOST:9292}
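The guard added above uses DevStack's `die_if_not_set` to abort early when `keystone token-get` returns nothing. A simplified standalone sketch of that guard (the real implementation lives in DevStack's `functions` file; this version is an approximation):

```shell
#!/usr/bin/env bash
# Simplified sketch of DevStack's die_if_not_set: abort with a message
# when the named variable is unset or empty.
die_if_not_set() {
    local lineno=$1
    local varname=$2
    local message=$3
    # ${!varname} is bash indirect expansion: the value of the variable
    # whose name is stored in $varname.
    if [ -z "${!varname}" ]; then
        echo "ERROR (line $lineno): $message" >&2
        exit 1
    fi
}

TOKEN="abc123"
die_if_not_set $LINENO TOKEN "Keystone failed to get token"
echo "token check passed"
```

Failing fast here gives a clear error instead of letting a later `glance` call fail with an empty token.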
diff --git a/tools/xen/functions b/tools/xen/functions
index c65d919..563303d 100644
--- a/tools/xen/functions
+++ b/tools/xen/functions
@@ -69,11 +69,17 @@
 }
 
 function get_local_sr {
-    xe sr-list name-label="Local storage" --minimal
+    xe pool-list params=default-SR minimal=true
 }
 
 function get_local_sr_path {
-    echo "/var/run/sr-mount/$(get_local_sr)"
+    pbd_path="/var/run/sr-mount/$(get_local_sr)"
+    pbd_device_config_path=`xe pbd-list sr-uuid=$(get_local_sr) params=device-config | grep " path: "`
+    if [ -n "$pbd_device_config_path" ]; then
+        pbd_uuid=`xe pbd-list sr-uuid=$(get_local_sr) minimal=true`
+        pbd_path=`xe pbd-param-get uuid=$pbd_uuid param-name=device-config param-key=path || echo ""`
+    fi
+    echo $pbd_path
 }
 
 function find_ip_by_name() {
@@ -131,14 +137,14 @@
     local name_label
     name_label=$1
 
-    ! [ -z $(xe network-list name-label="$name_label" --minimal) ]
+    ! [ -z "$(xe network-list name-label="$name_label" --minimal)" ]
 }
 
 function _bridge_exists() {
     local bridge
     bridge=$1
 
-    ! [ -z $(xe network-list bridge="$bridge" --minimal) ]
+    ! [ -z "$(xe network-list bridge="$bridge" --minimal)" ]
 }
 
 function _network_uuid() {
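The quoting fix above (`[ -z "$(...)" ]`) matters because an unquoted command substitution is word-split: if `xe network-list` returns several UUIDs, the unquoted form expands to multiple arguments and the `[` test breaks. A small demonstration with a stand-in function:

```shell
#!/usr/bin/env bash
# Demonstrate why a command substitution inside [ -z ... ] must be quoted.
# multi_word stands in for an xe query that returns more than one value.
multi_word() { echo "uuid-one uuid-two"; }

# Quoted: the whole output is one argument, so this is a valid test.
if [ -z "$(multi_word)" ]; then
    echo "empty"
else
    echo "not empty"
fi

# Unquoted, this would expand to: [ -z uuid-one uuid-two ]
# which is a syntax error for the [ builtin ("too many arguments").
```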
diff --git a/tools/xen/install_os_domU.sh b/tools/xen/install_os_domU.sh
index 0f314bf..6ce334b 100755
--- a/tools/xen/install_os_domU.sh
+++ b/tools/xen/install_os_domU.sh
@@ -44,9 +44,9 @@
 
 xe_min()
 {
-  local cmd="$1"
-  shift
-  xe "$cmd" --minimal "$@"
+    local cmd="$1"
+    shift
+    xe "$cmd" --minimal "$@"
 }
 
 #
@@ -111,12 +111,15 @@
 fi
 
 if parameter_is_specified "FLAT_NETWORK_BRIDGE"; then
-    cat >&2 << EOF
-ERROR: FLAT_NETWORK_BRIDGE is specified in localrc file
-This is considered as an error, as its value will be derived from the
-VM_BRIDGE_OR_NET_NAME variable's value.
+    if [ "$(bridge_for "$VM_BRIDGE_OR_NET_NAME")" != "$(bridge_for "$FLAT_NETWORK_BRIDGE")" ]; then
+        cat >&2 << EOF
+ERROR: FLAT_NETWORK_BRIDGE is specified in your localrc file, but either no
+network was found on XenServer matching that value as a name-label or bridge
+name, or the network found does not match the network specified by
+VM_BRIDGE_OR_NET_NAME.  Please check your localrc file.
 EOF
-    exit 1
+        exit 1
+    fi
 fi
 
 if ! xenapi_is_listening_on "$MGT_BRIDGE_OR_NET_NAME"; then
@@ -132,8 +135,8 @@
 # Set up ip forwarding, but skip on xcp-xapi
 if [ -a /etc/sysconfig/network ]; then
     if ! grep -q "FORWARD_IPV4=YES" /etc/sysconfig/network; then
-      # FIXME: This doesn't work on reboot!
-      echo "FORWARD_IPV4=YES" >> /etc/sysconfig/network
+        # FIXME: This doesn't work on reboot!
+        echo "FORWARD_IPV4=YES" >> /etc/sysconfig/network
     fi
 fi
 # Also, enable ip forwarding in rc.local, since the above trick isn't working
@@ -271,6 +274,12 @@
 # Max out VCPU count for better performance
 max_vcpus "$GUEST_NAME"
 
+# Wipe out all network cards
+destroy_all_vifs_of "$GUEST_NAME"
+
+# Add only one interface to prepare the guest template
+add_interface "$GUEST_NAME" "$MGT_BRIDGE_OR_NET_NAME" "0"
+
 # start the VM to run the prepare steps
 xe vm-start vm="$GUEST_NAME"
 
@@ -304,7 +313,7 @@
         "xen_integration_bridge=${XEN_INTEGRATION_BRIDGE}"
 fi
 
-FLAT_NETWORK_BRIDGE=$(bridge_for "$VM_BRIDGE_OR_NET_NAME")
+FLAT_NETWORK_BRIDGE="${FLAT_NETWORK_BRIDGE:-$(bridge_for "$VM_BRIDGE_OR_NET_NAME")}"
 append_kernel_cmdline "$GUEST_NAME" "flat_network_bridge=${FLAT_NETWORK_BRIDGE}"
 
 # Add a separate xvdb, if it was requested
diff --git a/tools/xen/scripts/install-os-vpx.sh b/tools/xen/scripts/install-os-vpx.sh
index 7469e0c..7b0d891 100755
--- a/tools/xen/scripts/install-os-vpx.sh
+++ b/tools/xen/scripts/install-os-vpx.sh
@@ -42,69 +42,69 @@
 
 get_params()
 {
-  while getopts "hbn:r:l:t:" OPTION;
-  do
-    case $OPTION in
-      h) usage
-         exit 1
-         ;;
-      n)
-         BRIDGE=$OPTARG
-         ;;
-      l)
-         NAME_LABEL=$OPTARG
-         ;;
-      t)
-         TEMPLATE_NAME=$OPTARG
-         ;;
-      ?)
-         usage
-         exit
-         ;;
-    esac
-  done
-  if [[ -z $BRIDGE ]]
-  then
-     BRIDGE=xenbr0
-  fi
+    while getopts "hbn:r:l:t:" OPTION;
+    do
+        case $OPTION in
+            h) usage
+                exit 1
+                ;;
+            n)
+                BRIDGE=$OPTARG
+                ;;
+            l)
+                NAME_LABEL=$OPTARG
+                ;;
+            t)
+                TEMPLATE_NAME=$OPTARG
+                ;;
+            ?)
+                usage
+                exit
+                ;;
+        esac
+    done
+    if [[ -z $BRIDGE ]]
+    then
+        BRIDGE=xenbr0
+    fi
 
-  if [[ -z $TEMPLATE_NAME ]]; then
-    echo "Please specify a template name" >&2
-    exit 1
-  fi
+    if [[ -z $TEMPLATE_NAME ]]; then
+        echo "Please specify a template name" >&2
+        exit 1
+    fi
 
-  if [[ -z $NAME_LABEL ]]; then
-    echo "Please specify a name-label for the new VM" >&2
-    exit 1
-  fi
+    if [[ -z $NAME_LABEL ]]; then
+        echo "Please specify a name-label for the new VM" >&2
+        exit 1
+    fi
 }
 
 
 xe_min()
 {
-  local cmd="$1"
-  shift
-  xe "$cmd" --minimal "$@"
+    local cmd="$1"
+    shift
+    xe "$cmd" --minimal "$@"
 }
 
 
 find_network()
 {
-  result=$(xe_min network-list bridge="$1")
-  if [ "$result" = "" ]
-  then
-    result=$(xe_min network-list name-label="$1")
-  fi
-  echo "$result"
+    result=$(xe_min network-list bridge="$1")
+    if [ "$result" = "" ]
+    then
+        result=$(xe_min network-list name-label="$1")
+    fi
+    echo "$result"
 }
 
 
 create_vif()
 {
-  local v="$1"
-  echo "Installing VM interface on [$BRIDGE]"
-  local out_network_uuid=$(find_network "$BRIDGE")
-  xe vif-create vm-uuid="$v" network-uuid="$out_network_uuid" device="0"
+    local v="$1"
+    echo "Installing VM interface on [$BRIDGE]"
+    local out_network_uuid=$(find_network "$BRIDGE")
+    xe vif-create vm-uuid="$v" network-uuid="$out_network_uuid" device="0"
 }
 
 
@@ -112,20 +112,20 @@
 # Make the VM auto-start on server boot.
 set_auto_start()
 {
-  local v="$1"
-  xe vm-param-set uuid="$v" other-config:auto_poweron=true
+    local v="$1"
+    xe vm-param-set uuid="$v" other-config:auto_poweron=true
 }
 
 
 destroy_vifs()
 {
-  local v="$1"
-  IFS=,
-  for vif in $(xe_min vif-list vm-uuid="$v")
-  do
-    xe vif-destroy uuid="$vif"
-  done
-  unset IFS
+    local v="$1"
+    IFS=,
+    for vif in $(xe_min vif-list vm-uuid="$v")
+    do
+        xe vif-destroy uuid="$vif"
+    done
+    unset IFS
 }
 
 
diff --git a/tools/xen/scripts/uninstall-os-vpx.sh b/tools/xen/scripts/uninstall-os-vpx.sh
index ac26094..1ed2494 100755
--- a/tools/xen/scripts/uninstall-os-vpx.sh
+++ b/tools/xen/scripts/uninstall-os-vpx.sh
@@ -22,63 +22,63 @@
 # By default, don't remove the templates
 REMOVE_TEMPLATES=${REMOVE_TEMPLATES:-"false"}
 if [ "$1" = "--remove-templates" ]; then
-  REMOVE_TEMPLATES=true
+    REMOVE_TEMPLATES=true
 fi
 
 xe_min()
 {
-  local cmd="$1"
-  shift
-  xe "$cmd" --minimal "$@"
+    local cmd="$1"
+    shift
+    xe "$cmd" --minimal "$@"
 }
 
 destroy_vdi()
 {
-  local vbd_uuid="$1"
-  local type=$(xe_min vbd-list uuid=$vbd_uuid params=type)
-  local dev=$(xe_min vbd-list uuid=$vbd_uuid params=userdevice)
-  local vdi_uuid=$(xe_min vbd-list uuid=$vbd_uuid params=vdi-uuid)
+    local vbd_uuid="$1"
+    local type=$(xe_min vbd-list uuid=$vbd_uuid params=type)
+    local dev=$(xe_min vbd-list uuid=$vbd_uuid params=userdevice)
+    local vdi_uuid=$(xe_min vbd-list uuid=$vbd_uuid params=vdi-uuid)
 
-  if [ "$type" == 'Disk' ] && [ "$dev" != 'xvda' ] && [ "$dev" != '0' ]; then
-    xe vdi-destroy uuid=$vdi_uuid
-  fi
+    if [ "$type" == 'Disk' ] && [ "$dev" != 'xvda' ] && [ "$dev" != '0' ]; then
+        xe vdi-destroy uuid=$vdi_uuid
+    fi
 }
 
 uninstall()
 {
-  local vm_uuid="$1"
-  local power_state=$(xe_min vm-list uuid=$vm_uuid params=power-state)
+    local vm_uuid="$1"
+    local power_state=$(xe_min vm-list uuid=$vm_uuid params=power-state)
 
-  if [ "$power_state" != "halted" ]; then
-    xe vm-shutdown vm=$vm_uuid force=true
-  fi
+    if [ "$power_state" != "halted" ]; then
+        xe vm-shutdown vm=$vm_uuid force=true
+    fi
 
-  for v in $(xe_min vbd-list vm-uuid=$vm_uuid | sed -e 's/,/ /g'); do
-    destroy_vdi "$v"
-  done
+    for v in $(xe_min vbd-list vm-uuid=$vm_uuid | sed -e 's/,/ /g'); do
+        destroy_vdi "$v"
+    done
 
-  xe vm-uninstall vm=$vm_uuid force=true >/dev/null
+    xe vm-uninstall vm=$vm_uuid force=true >/dev/null
 }
 
 uninstall_template()
 {
-  local vm_uuid="$1"
+    local vm_uuid="$1"
 
-  for v in $(xe_min vbd-list vm-uuid=$vm_uuid | sed -e 's/,/ /g'); do
-    destroy_vdi "$v"
-  done
+    for v in $(xe_min vbd-list vm-uuid=$vm_uuid | sed -e 's/,/ /g'); do
+        destroy_vdi "$v"
+    done
 
-  xe template-uninstall template-uuid=$vm_uuid force=true >/dev/null
+    xe template-uninstall template-uuid=$vm_uuid force=true >/dev/null
 }
 
 # remove the VMs and their disks
 for u in $(xe_min vm-list other-config:os-vpx=true | sed -e 's/,/ /g'); do
-  uninstall "$u"
+    uninstall "$u"
 done
 
 # remove the templates
 if [ "$REMOVE_TEMPLATES" == "true" ]; then
-  for u in $(xe_min template-list other-config:os-vpx=true | sed -e 's/,/ /g'); do
-    uninstall_template "$u"
-  done
+    for u in $(xe_min template-list other-config:os-vpx=true | sed -e 's/,/ /g'); do
+        uninstall_template "$u"
+    done
 fi
diff --git a/unstack.sh b/unstack.sh
index c944ccc..67c8b7c 100755
--- a/unstack.sh
+++ b/unstack.sh
@@ -42,6 +42,16 @@
 source $TOP_DIR/lib/ironic
 source $TOP_DIR/lib/trove
 
+# Extras Source
+# --------------
+
+# Phase: source
+if [[ -d $TOP_DIR/extras.d ]]; then
+    for i in $TOP_DIR/extras.d/*.sh; do
+        [[ -r $i ]] && source $i source
+    done
+fi
+
 # Determine what system we are running on.  This provides ``os_VENDOR``,
 # ``os_RELEASE``, ``os_UPDATE``, ``os_PACKAGE``, ``os_CODENAME``
 GetOSVersion
@@ -53,6 +63,7 @@
 # Run extras
 # ==========
 
+# Phase: unstack
 if [[ -d $TOP_DIR/extras.d ]]; then
     for i in $TOP_DIR/extras.d/*.sh; do
         [[ -r $i ]] && source $i unstack
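The `unstack.sh` changes above source every `extras.d` script twice, passing a phase name (`source`, then `unstack`) as the first argument; each plugin script dispatches on that argument. A minimal sketch of such a plugin, with hypothetical phase actions for illustration:

```shell
#!/usr/bin/env bash
# Sketch of an extras.d-style hook: DevStack sources each plugin script
# with a phase name ("source", "stack", "unstack", ...) as $1, and the
# script decides what to do per phase.  Actions here are illustrative.
plugin_hook() {
    local phase=$1
    case $phase in
        source)
            echo "defining settings"
            ;;
        unstack)
            echo "shutting down services"
            ;;
        *)
            echo "phase $phase: nothing to do"
            ;;
    esac
}

plugin_hook unstack
```

Because unknown phases fall through to a no-op branch, a plugin only has to implement the phases it cares about.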