Merge "ceilometer: add tempest option to test events"
diff --git a/.gitignore b/.gitignore
index c6900c8..2778a65 100644
--- a/.gitignore
+++ b/.gitignore
@@ -12,9 +12,11 @@
 doc/build
 files/*.gz
 files/*.qcow2
+files/*.img
 files/images
 files/pip-*
 files/get-pip.py*
+files/ir-deploy*
 local.conf
 local.sh
 localrc
diff --git a/HACKING.rst b/HACKING.rst
index b3c82a3..a40af54 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -25,23 +25,63 @@
 __ lp_
 .. _lp: https://launchpad.net/~devstack
 
+The `Gerrit review
+queue <https://review.openstack.org/#/q/project:openstack-dev/devstack,n,z>`__
+is used for all commits.
+
 The primary script in DevStack is ``stack.sh``, which performs the bulk of the
 work for DevStack's use cases.  There is a subscript ``functions`` that contains
 generally useful shell functions and is used by a number of the scripts in
 DevStack.
 
-The ``lib`` directory contains sub-scripts for projects or packages that ``stack.sh``
-sources to perform much of the work related to those projects.  These sub-scripts
-contain configuration defaults and functions to configure, start and stop the project
-or package.  These variables and functions are also used by related projects,
-such as Grenade, to manage a DevStack installation.
-
 A number of additional scripts can be found in the ``tools`` directory that may
 be useful in supporting DevStack installations.  Of particular note are ``info.sh``
 to collect and report information about the installed system, and ``install_prereqs.sh``
 that handles installation of the prerequisite packages for DevStack.  It is
 suitable, for example, to pre-load a system for making a snapshot.
 
+Repo Layout
+-----------
+
+The DevStack repo generally keeps all of the primary scripts at the root
+level.
+
+``doc`` - Contains the Sphinx source for the documentation.
+``tools/build_docs.sh`` is used to generate the HTML versions of the
+DevStack scripts.  A complete doc build can be run with ``tox -edocs``.
+
+``exercises`` - Contains the test scripts used to sanity-check and
+demonstrate some OpenStack functions. These scripts know how to exit
+early or skip services that are not enabled.
+
+``extras.d`` - Contains the dispatch scripts called by the hooks in
+``stack.sh``, ``unstack.sh`` and ``clean.sh``. See :doc:`the plugins
+docs <plugins>` for more information.
+
+``files`` - Contains a variety of otherwise lost files used in
+configuring and operating DevStack. This includes templates for
+configuration files and the system dependency information. This is also
+where image files are downloaded and expanded if necessary.
+
+``lib`` - Contains the sub-scripts specific to each project. This is
+where the work of managing a project's services is located. Each
+top-level project (Keystone, Nova, etc) has a file here. Additionally
+there are some for system services and project plugins.  These
+variables and functions are also used by related projects, such as
+Grenade, to manage a DevStack installation.
+
+``samples`` - Contains a sample of the local files not included in the
+DevStack repo.
+
+``tests`` - The DevStack test suite is rather sparse, mostly consisting
+of tests of specific fragile functions in the ``functions`` and
+``functions-common`` files.
+
+``tools`` - Contains a collection of stand-alone scripts. While these
+may reference the top-level DevStack configuration they can generally be
+run alone. There are also some sub-directories to support specific
+environments such as XenServer.
+
 
 Scripts
 -------
@@ -249,6 +289,7 @@
 
 Control Structure Rules
 -----------------------
+
 - then should be on the same line as the if
 - do should be on the same line as the for
 
@@ -270,6 +311,7 @@
 
 Variables and Functions
 -----------------------
+
 - functions should be used whenever possible for clarity
 - functions should use ``local`` variables as much as possible to
   ensure they are isolated from the rest of the environment
@@ -278,3 +320,48 @@
 - function names should_have_underscores, NotCamelCase.
 - functions should be declared as per the regex ^function foo {$
   with code starting on the next line
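+
+For example, a function that follows these conventions (the name and body
+here are only placeholders):
+
+::
+
+    function copy_sample_file {
+        local src=$1
+        local dest=$2
+        cp "$src" "$dest"
+    }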
+
+
+Review Criteria
+===============
+
+There are some broad criteria that will be followed when reviewing
+your change:
+
+* **Is it passing tests** -- your change will not be reviewed
+  thoroughly unless the official CI has run successfully against it.
+
+* **Does this belong in DevStack** -- DevStack reviewers have a
+  default position of "no" but are ready to be convinced by your
+  change.
+
+  For very large changes, you should consider :doc:`the plugins system
+  <plugins>` to see if your code is better abstracted from the main
+  repository.
+
+  For smaller changes, you should always consider if the change can be
+  encapsulated by per-user settings in ``local.conf``.  A common example
+  is adding a simple config-option to an ``ini`` file.  Specific flags
+  are not usually required for this, although adding documentation
+  about how to achieve a larger goal (which might include turning on
+  various settings, etc) is always welcome.
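+
+  For example, a user can set such an option via the ``local.conf``
+  meta-section syntax (the config file and option shown are only an
+  illustration):
+
+  ::
+
+      [[post-config|$NOVA_CONF]]
+      [DEFAULT]
+      my_option = True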
+
+* **Work-arounds** -- often things get broken and DevStack can be in a
+  position to fix them.  Work-arounds are fine, but should be
+  presented in the context of fixing the root-cause of the problem.
+  This means it is well-commented in the code and the change-log and
+  most likely includes links to changes or bugs that fix the
+  underlying problem.
+
+* **Should this be upstream** -- DevStack generally does not override
+  default choices provided by projects and attempts to not
+  unexpectedly modify behaviour.
+
+* **Context in commit messages** -- DevStack touches many different
+  areas and reviewers need context around changes to make good
+  decisions.  We also always want it to be clear to someone -- perhaps
+  even years from now -- why we were motivated to make a change at the
+  time.
+
+* **Reviewers** -- please see ``MAINTAINERS.rst`` for a list of people
+  that should be added to reviews of various sub-systems.
diff --git a/MAINTAINERS.rst b/MAINTAINERS.rst
index a376eb0..eeb1f21 100644
--- a/MAINTAINERS.rst
+++ b/MAINTAINERS.rst
@@ -45,6 +45,13 @@
 Neutron
 ~~~~~~~
 
+MidoNet
+~~~~~~~
+
+* Jaume Devesa <devvesa@gmail.com>
+* Ryu Ishimoto <ryu@midokura.com>
+* YAMAMOTO Takashi <yamamoto@midokura.com>
+
 OpenDaylight
 ~~~~~~~~~~~~
 
@@ -75,12 +82,6 @@
 Tempest
 ~~~~~~~
 
-Trove
-~~~~~
-
-* Nikhil Manchanda <SlickNik@gmail.com>
-* Michael Basnight <mbasnight@gmail.com>
-
 Xen
 ~~~
 * Bob Ball <bob.ball@citrix.com>
@@ -90,3 +91,7 @@
 
 * Flavio Percoco <flaper87@gmail.com>
 * Malini Kamalambal <malini.kamalambal@rackspace.com>
+
+Oracle Linux
+~~~~~~~~~~~~
+* Wiekus Beukes <wiekus.beukes@oracle.com>
diff --git a/Makefile b/Makefile
new file mode 100644
index 0000000..082aff2
--- /dev/null
+++ b/Makefile
@@ -0,0 +1,104 @@
+# DevStack Makefile of Sanity
+
+# Interesting targets:
+# ds-remote - Create a Git remote for use by ds-push and ds-pull targets
+#             DS_REMOTE_URL must be set on the command line
+#
+# ds-push - Merge a list of branches taken from .ds-test and push them
+#           to the ds-remote repo in ds-test branch
+#
+# ds-pull - Pull the remote ds-test branch into a fresh local branch
+#
+# refresh - Performs a sequence of unstack, ds-pull and stack
+
+# Duplicated from stackrc for now
+DEST=/opt/stack
+WHEELHOUSE=$(DEST)/.wheelhouse
+
+all:
+	echo "This just saved you from a terrible mistake!"
+
+# Do Some Work
+stack:
+	./stack.sh
+
+unstack:
+	./unstack.sh
+
+wheels:
+	WHEELHOUSE=$(WHEELHOUSE) tools/build-wheels.sh
+
+docs:
+	tox -edocs
+
+# Just run the shocco source formatting build
+docs-build:
+	INSTALL_SHOCCO=True tools/build_docs.sh
+
+# Just run the Sphinx docs build
+docs-rst:
+	python setup.py build_sphinx
+
+# Run the bashate test
+bashate:
+	tox -ebashate
+
+# Run the function tests
+test:
+	tests/test_ini_config.sh
+	tests/test_meta_config.sh
+	tests/test_ip.sh
+	tests/test_refs.sh
+
+# Spiff up the place a bit
+clean:
+	./clean.sh
+	rm -rf accrc doc/build test*-e *.egg-info
+
+# Clean out the cache too
+realclean: clean
+	rm -rf files/cirros*.tar.gz files/Fedora*.qcow2 $(WHEELHOUSE)
+
+# Repo stuffs
+
+pull:
+	git pull
+
+
+# These repo targets are used to maintain a branch in a remote repo that
+# consists of one or more local branches merged and pushed to the remote.
+# This is most useful for iterative testing on multiple or remote servers
+# while keeping the working repo local.
+#
+# It requires:
+# * a remote pointing to a remote repo, often GitHub is used for this
+# * a branch name to be used on the remote
+# * a local file containing the list of local branches to be merged into
+#   the remote branch
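+#
+# For example, a .ds-test file listing two hypothetical local branches to
+# merge (comment lines and blank entries are ignored):
+#
+#   topic/fix-logging
+#   topic/update-docs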
+
+GIT_REMOTE_NAME=ds-test
+GIT_REMOTE_BRANCH=ds-test
+
+# Push the current branch to a remote named ds-test
+ds-push:
+	git checkout master
+	git branch -D $(GIT_REMOTE_BRANCH) || true
+	git checkout -b $(GIT_REMOTE_BRANCH)
+	for i in $(shell cat .$(GIT_REMOTE_BRANCH) | grep -v "^#" | grep "[^ ]"); do \
+	  git merge --no-edit $$i; \
+	done
+	git push -f $(GIT_REMOTE_NAME) HEAD:$(GIT_REMOTE_BRANCH)
+
+# Pull the ds-test branch
+ds-pull:
+	git checkout master
+	git branch -D $(GIT_REMOTE_BRANCH) || true
+	git pull $(GIT_REMOTE_NAME) $(GIT_REMOTE_BRANCH)
+	git checkout $(GIT_REMOTE_BRANCH)
+
+# Add the remote - set DS_REMOTE_URL=https://example.com/ on the command line
+ds-remote:
+	git remote add $(GIT_REMOTE_NAME) $(DS_REMOTE_URL)
+
+# Refresh the current DevStack checkout and re-initialize
+refresh: unstack ds-pull stack
diff --git a/README.md b/README.md
index 53de970..455e1c6 100644
--- a/README.md
+++ b/README.md
@@ -149,6 +149,10 @@
 
     KEYSTONE_USE_MOD_WSGI="True"
 
+Example (Nova):
+
+    NOVA_USE_MOD_WSGI="True"
+
 Example (Swift):
 
     SWIFT_USE_MOD_WSGI="True"
@@ -264,10 +268,10 @@
 
 # Heat
 
-Heat is enabled by default (see `stackrc` file). To disable it explicitly
+Heat is disabled by default (see `stackrc` file). To enable it explicitly
 you'll need the following settings in your `localrc` section:
 
-    disable_service heat h-api h-api-cfn h-api-cw h-eng
+    enable_service heat h-api h-api-cfn h-api-cw h-eng
 
 Heat can also run in standalone mode, and be configured to orchestrate
 on an external OpenStack cloud. To launch only Heat in standalone mode
@@ -328,12 +332,12 @@
 You likely want to change your `localrc` section to run a scheduler that
 will balance VMs across hosts:
 
-    SCHEDULER=nova.scheduler.simple.SimpleScheduler
+    SCHEDULER=nova.scheduler.filter_scheduler.FilterScheduler
 
 You can then run many compute nodes, each of which should have a `stackrc`
 which includes the following, with the IP address of the above controller node:
 
-    ENABLED_SERVICES=n-cpu,rabbit,g-api,neutron,q-agt
+    ENABLED_SERVICES=n-cpu,rabbit,neutron,q-agt
     SERVICE_HOST=[IP of controller node]
     MYSQL_HOST=$SERVICE_HOST
     RABBIT_HOST=$SERVICE_HOST
diff --git a/clean.sh b/clean.sh
index ad4525b..74bcaee 100755
--- a/clean.sh
+++ b/clean.sh
@@ -49,9 +49,8 @@
 source $TOP_DIR/lib/swift
 source $TOP_DIR/lib/ceilometer
 source $TOP_DIR/lib/heat
-source $TOP_DIR/lib/neutron
+source $TOP_DIR/lib/neutron-legacy
 source $TOP_DIR/lib/ironic
-source $TOP_DIR/lib/trove
 
 
 # Extras Source
@@ -76,6 +75,7 @@
 # ==========
 
 # Phase: clean
+load_plugin_settings
 run_phase clean
 
 if [[ -d $TOP_DIR/extras.d ]]; then
@@ -114,15 +114,22 @@
 cleanup_rpc_backend
 cleanup_database
 
-# Clean out data, logs and status
-LOGDIR=$(dirname "$LOGFILE")
-sudo rm -rf $DATA_DIR $LOGDIR $DEST/status
+# Clean out data and status
+sudo rm -rf $DATA_DIR $DEST/status
+
+# Clean out the log file and log directories
+if [[ -n "$LOGFILE" ]] && [[ -f "$LOGFILE" ]]; then
+    sudo rm -f $LOGFILE
+fi
+if [[ -n "$LOGDIR" ]] && [[ -d "$LOGDIR" ]]; then
+    sudo rm -rf $LOGDIR
+fi
 if [[ -n "$SCREEN_LOGDIR" ]] && [[ -d "$SCREEN_LOGDIR" ]]; then
     sudo rm -rf $SCREEN_LOGDIR
 fi
 
 # Clean up venvs
-DIRS_TO_CLEAN="$WHEELHOUSE ${PROJECT_VENV[@]}"
+DIRS_TO_CLEAN="$WHEELHOUSE ${PROJECT_VENV[@]} .config/openstack"
 rm -rf $DIRS_TO_CLEAN
 
 # Clean up files
diff --git a/doc/source/conf.py b/doc/source/conf.py
index 3e9aa45..6e3ec02 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -26,7 +26,7 @@
 
 # Add any Sphinx extension module names here, as strings. They can be extensions
 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
-extensions = [ 'oslosphinx' ]
+extensions = [ 'oslosphinx', 'sphinxcontrib.blockdiag', 'sphinxcontrib.nwdiag' ]
 
 todo_include_todos = True
 
diff --git a/doc/source/configuration.rst b/doc/source/configuration.rst
index fe3e2c2..8e2e7ff 100644
--- a/doc/source/configuration.rst
+++ b/doc/source/configuration.rst
@@ -95,6 +95,8 @@
 fragment and MUST conform to the shell requirements, specifically no
 whitespace around ``=`` (equals).
 
+.. _minimal-configuration:
+
 Minimal Configuration
 =====================
 
@@ -170,6 +172,30 @@
 
       LIBS_FROM_GIT=python-keystoneclient,oslo.config
 
+Virtual Environments
+--------------------
+
+  | *Default: ``USE_VENV=False``*
+  |   Enable the use of Python virtual environments by setting ``USE_VENV``
+      to ``True``.  This will enable the creation of a venv for each project
+      defined in the ``PROJECT_VENV`` array.
+
+  | *Default: ``PROJECT_VENV['<project>']='<project-dir>.venv'``*
+  |   Each entry in the ``PROJECT_VENV`` array contains the directory name
+      of a venv to be used for the project.  The array index is the project
+      name.  Multiple projects can use the same venv if desired.
+
+  ::
+
+    PROJECT_VENV["glance"]=${GLANCE_DIR}.venv
+
+  | *Default: ``ADDITIONAL_VENV_PACKAGES=""``*
+  |   A comma-separated list of additional packages to be installed into each
+      venv.  Often a project will not list certain packages in its
+      ``requirements.txt`` file because they are 'optional' requirements,
+      i.e. only needed for certain configurations.  By default, the Python
+      bindings for the enabled databases are added to each venv.
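+
+  For example (``python-memcached`` here is only an illustrative package
+  name):
+
+  ::
+
+    ADDITIONAL_VENV_PACKAGES="python-memcached"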
+
 Enable Logging
 --------------
 
@@ -247,6 +273,21 @@
 
         RECLONE=yes
 
+Upgrade packages installed by pip
+---------------------------------
+
+    | *Default: ``PIP_UPGRADE=""``*
+    |  By default ``stack.sh`` only installs Python packages if no version
+       is currently installed or the current version does not match a specified
+       requirement. If ``PIP_UPGRADE`` is set to ``True`` then existing required
+       Python packages will be upgraded to the most recent version that
+       matches requirements.
+    |
+
+    ::
+
+        PIP_UPGRADE=True
+
 Swift
 -----
 
@@ -350,7 +391,7 @@
         ENABLED_SERVICES=n-vol,n-cpu,n-net,n-api
 
 IP Version
-    | Default: ``IP_VERSION=4``
+    | Default: ``IP_VERSION=4+6``
     | This setting can be used to configure DevStack to create either an IPv4,
       IPv6, or dual stack tenant data network by setting ``IP_VERSION`` to
       either ``IP_VERSION=4``, ``IP_VERSION=6``, or ``IP_VERSION=4+6``
diff --git a/doc/source/contributing.rst b/doc/source/contributing.rst
deleted file mode 100644
index 50c0100..0000000
--- a/doc/source/contributing.rst
+++ /dev/null
@@ -1,94 +0,0 @@
-============
-Contributing
-============
-
-DevStack uses the standard OpenStack contribution process as outlined in
-`the OpenStack developer
-guide <http://docs.openstack.org/infra/manual/developers.html>`__. This
-means that you will need to meet the requirements of the Contribututors
-License Agreement (CLA). If you have already done that for another
-OpenStack project you are good to go.
-
-Things To Know
-==============
-
-|
-| **Where Things Are**
-
-The official DevStack repository is located at
-``git://git.openstack.org/openstack-dev/devstack.git``, replicated from
-the repo maintained by Gerrit. GitHub also has a mirror at
-``git://github.com/openstack-dev/devstack.git``.
-
-The `blueprint <https://blueprints.launchpad.net/devstack>`__ and `bug
-trackers <https://bugs.launchpad.net/devstack>`__ are on Launchpad. It
-should be noted that DevStack generally does not use these as strongly
-as other projects, but we're trying to change that.
-
-The `Gerrit review
-queue <https://review.openstack.org/#/q/project:openstack-dev/devstack,n,z>`__
-is, however, used for all commits except for the text of this website.
-That should also change in the near future.
-
-|
-| **HACKING.rst**
-
-Like most OpenStack projects, DevStack includes a ``HACKING.rst`` file
-that describes the layout, style and conventions of the project. Because
-``HACKING.rst`` is in the main DevStack repo it is considered
-authoritative. Much of the content on this page is taken from there.
-
-|
-| **bashate Formatting**
-
-Around the time of the OpenStack Havana release we added a tool to do
-style checking in DevStack similar to what pep8/flake8 do for Python
-projects. It is still \_very\_ simplistic, focusing mostly on stray
-whitespace to help prevent -1 on reviews that are otherwise acceptable.
-Oddly enough it is called ``bashate``. It will be expanded to enforce
-some of the documentation rules in comments that are used in formatting
-the script pages for devstack.org and possibly even simple code
-formatting. Run it on the entire project with ``./run_tests.sh``.
-
-Code
-====
-
-|
-| **Repo Layout**
-
-The DevStack repo generally keeps all of the primary scripts at the root
-level.
-
-``doc`` - Contains the Sphinx source for the documentation.
-``tools/build_docs.sh`` is used to generate the HTML versions of the
-DevStack scripts.  A complete doc build can be run with ``tox -edocs``.
-
-``exercises`` - Contains the test scripts used to sanity-check and
-demonstrate some OpenStack functions. These scripts know how to exit
-early or skip services that are not enabled.
-
-``extras.d`` - Contains the dispatch scripts called by the hooks in
-``stack.sh``, ``unstack.sh`` and ``clean.sh``. See :doc:`the plugins
-docs <plugins>` for more information.
-
-``files`` - Contains a variety of otherwise lost files used in
-configuring and operating DevStack. This includes templates for
-configuration files and the system dependency information. This is also
-where image files are downloaded and expanded if necessary.
-
-``lib`` - Contains the sub-scripts specific to each project. This is
-where the work of managing a project's services is located. Each
-top-level project (Keystone, Nova, etc) has a file here. Additionally
-there are some for system services and project plugins.
-
-``samples`` - Contains a sample of the local files not included in the
-DevStack repo.
-
-``tests`` - the DevStack test suite is rather sparse, mostly consisting
-of test of specific fragile functions in the ``functions`` and
-``functions-common`` files.
-
-``tools`` - Contains a collection of stand-alone scripts. While these
-may reference the top-level DevStack configuration they can generally be
-run alone. There are also some sub-directories to support specific
-environments such as XenServer.
diff --git a/doc/source/eucarc.rst b/doc/source/eucarc.rst
index 1284b88..c2ecbc6 100644
--- a/doc/source/eucarc.rst
+++ b/doc/source/eucarc.rst
@@ -13,7 +13,7 @@
 
     ::
 
-        EC2_URL=$(keystone catalog --service ec2 | awk '/ publicURL / { print $4 }')
+        EC2_URL=$(openstack catalog show ec2 | awk '/ publicURL: / { print $4 }')
 
 S3\_URL
     Set the S3 endpoint for euca2ools. The endpoint is extracted from
@@ -21,14 +21,14 @@
 
     ::
 
-        export S3_URL=$(keystone catalog --service s3 | awk '/ publicURL / { print $4 }')
+        export S3_URL=$(openstack catalog show s3 | awk '/ publicURL: / { print $4 }')
 
 EC2\_ACCESS\_KEY, EC2\_SECRET\_KEY
     Create EC2 credentials for the current tenant:user in Keystone.
 
     ::
 
-        CREDS=$(keystone ec2-credentials-create)
+        CREDS=$(openstack ec2 credentials create)
         export EC2_ACCESS_KEY=$(echo "$CREDS" | awk '/ access / { print $4 }')
         export EC2_SECRET_KEY=$(echo "$CREDS" | awk '/ secret / { print $4 }')
 
diff --git a/doc/source/guides/devstack-with-lbaas-v2.rst b/doc/source/guides/devstack-with-lbaas-v2.rst
new file mode 100644
index 0000000..f679783
--- /dev/null
+++ b/doc/source/guides/devstack-with-lbaas-v2.rst
@@ -0,0 +1,99 @@
+Configure Load-Balancer in Kilo
+=================================
+
+The Kilo release of OpenStack will support Version 2 of the neutron load balancer. Until now, using OpenStack `LBaaS V2 <http://docs.openstack.org/api/openstack-network/2.0/content/lbaas_ext.html>`_ has required a good understanding of neutron and LBaaS architecture and several manual steps.
+
+
+Phase 1: Create DevStack + 2 nova instances
+--------------------------------------------
+
+First, set up a VM of your choice with at least 8 GB RAM and 16 GB of disk space, and make sure it is up to date. Install git and any other developer tools you find useful.
+
+Install DevStack
+
+  ::
+
+    git clone https://git.openstack.org/openstack-dev/devstack
+    cd devstack
+
+
+Edit your `local.conf` to look like
+
+  ::
+
+    [[local|localrc]]
+    # Load the external LBaaS plugin.
+    enable_plugin neutron-lbaas https://git.openstack.org/openstack/neutron-lbaas
+
+    # ===== BEGIN localrc =====
+    DATABASE_PASSWORD=password
+    ADMIN_PASSWORD=password
+    SERVICE_PASSWORD=password
+    SERVICE_TOKEN=password
+    RABBIT_PASSWORD=password
+    # Enable Logging
+    LOGFILE=$DEST/logs/stack.sh.log
+    VERBOSE=True
+    LOG_COLOR=True
+    SCREEN_LOGDIR=$DEST/logs
+    # Pre-requisite
+    ENABLED_SERVICES=rabbit,mysql,key
+    # Horizon
+    ENABLED_SERVICES+=,horizon
+    # Nova
+    ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch
+    IMAGE_URLS+=",https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img"
+    # Glance
+    ENABLED_SERVICES+=,g-api,g-reg
+    # Neutron
+    ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta
+    # Enable LBaaS V2
+    ENABLED_SERVICES+=,q-lbaasv2
+    # Cinder
+    ENABLED_SERVICES+=,c-api,c-vol,c-sch
+    # Tempest
+    ENABLED_SERVICES+=,tempest
+    # ===== END localrc =====
+
+Run stack.sh and do some sanity checks
+
+  ::
+
+    ./stack.sh
+    . ./openrc
+
+    neutron net-list  # should show public and private networks
+
+Create two nova instances that we can use as test http servers:
+
+  ::
+
+    #create nova instances on private network
+    nova boot --image $(nova image-list | awk '/ cirros-0.3.0-x86_64-disk / {print $2}') --flavor 1 --nic net-id=$(neutron net-list | awk '/ private / {print $2}') node1
+    nova boot --image $(nova image-list | awk '/ cirros-0.3.0-x86_64-disk / {print $2}') --flavor 1 --nic net-id=$(neutron net-list | awk '/ private / {print $2}') node2
+    nova list # should show the nova instances just created
+
+    #add secgroup rule to allow ssh etc..
+    neutron security-group-rule-create default --protocol icmp
+    neutron security-group-rule-create default --protocol tcp --port-range-min 22 --port-range-max 22
+    neutron security-group-rule-create default --protocol tcp --port-range-min 80 --port-range-max 80
+
+Set up a simple web server on each of these instances. SSH into each instance (username 'cirros', password 'cubswin:)') and run
+
+ ::
+
+    MYIP=$(ifconfig eth0|grep 'inet addr'|awk -F: '{print $2}'| awk '{print $1}')
+    while true; do echo -e "HTTP/1.0 200 OK\r\n\r\nWelcome to $MYIP" | sudo nc -l -p 80 ; done&
+
+Phase 2: Create your load balancers
+------------------------------------
+
+ ::
+
+    neutron lbaas-loadbalancer-create --name lb1 private-subnet
+    neutron lbaas-listener-create --loadbalancer lb1 --protocol HTTP --protocol-port 80 --name listener1
+    neutron lbaas-pool-create --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP --name pool1
+    neutron lbaas-member-create  --subnet private-subnet --address 10.0.0.3 --protocol-port 80 pool1
+    neutron lbaas-member-create  --subnet private-subnet --address 10.0.0.5 --protocol-port 80 pool1
+
+Note that "10.0.0.3" and "10.0.0.5" in the above commands are the IPs of the two nodes (in one test run-through they were actually 10.0.0.2 and 10.0.0.4). The address of the created load balancer is reported as "vip_address" by lbaas-loadbalancer-create. A quick test of the LB is to run "curl <vip-address>", which should alternate between showing the IPs of the two nodes.
diff --git a/doc/source/guides/devstack-with-nested-kvm.rst b/doc/source/guides/devstack-with-nested-kvm.rst
index 610300b..b35492e 100644
--- a/doc/source/guides/devstack-with-nested-kvm.rst
+++ b/doc/source/guides/devstack-with-nested-kvm.rst
@@ -54,7 +54,7 @@
 
 
 Configure Nested KVM for AMD-based Machines
---------------------------------------------
+-------------------------------------------
 
 Procedure to enable nested KVM virtualization on AMD-based machines.
 
@@ -121,7 +121,7 @@
 back to `virt_type=qemu`, i.e. plain QEMU emulation.
 
 Optionally, to explicitly set the type of virtualization, to KVM, by the
-libvirt driver in Nova, the below config attribute can be used in
+libvirt driver in nova, the below config attribute can be used in
 DevStack's ``local.conf``:
 
 ::
diff --git a/doc/source/guides/multinode-lab.rst b/doc/source/guides/multinode-lab.rst
index ff81c93..27d71f1 100644
--- a/doc/source/guides/multinode-lab.rst
+++ b/doc/source/guides/multinode-lab.rst
@@ -178,7 +178,7 @@
     MYSQL_HOST=192.168.42.11
     RABBIT_HOST=192.168.42.11
     GLANCE_HOSTPORT=192.168.42.11:9292
-    ENABLED_SERVICES=n-cpu,n-net,n-api,c-sch,c-api,c-vol
+    ENABLED_SERVICES=n-cpu,n-net,n-api,c-vol
     NOVA_VNC_ENABLED=True
     NOVNCPROXY_URL="http://192.168.42.11:6080/vnc_auto.html"
     VNCSERVER_LISTEN=$HOST_IP
@@ -229,10 +229,10 @@
 ----------------
 
 DevStack creates two OpenStack users (``admin`` and ``demo``) and two
-tenants (also ``admin`` and ``demo``). ``admin`` is exactly what it
+projects (also ``admin`` and ``demo``). ``admin`` is exactly what it
 sounds like, a privileged administrative account that is a member of
-both the ``admin`` and ``demo`` tenants. ``demo`` is a normal user
-account that is only a member of the ``demo`` tenant. Creating
+both the ``admin`` and ``demo`` projects. ``demo`` is a normal user
+account that is only a member of the ``demo`` project. Creating
 additional OpenStack users can be done through the dashboard, sometimes
 it is easier to do them in bulk from a script, especially since they get
 blown away every time ``stack.sh`` runs. The following steps are ripe
@@ -243,36 +243,36 @@
     # Get admin creds
     . openrc admin admin
 
-    # List existing tenants
-    keystone tenant-list
+    # List existing projects
+    openstack project list
 
     # List existing users
-    keystone user-list
+    openstack user list
 
-    # Add a user and tenant
+    # Add a user and project
     NAME=bob
     PASSWORD=BigSecrete
-    TENANT=$NAME
-    keystone tenant-create --name=$NAME
-    keystone user-create --name=$NAME --pass=$PASSWORD
-    keystone user-role-add --user-id=<bob-user-id> --tenant-id=<bob-tenant-id> --role-id=<member-role-id>
-    # member-role-id comes from the existing member role created by stack.sh
-    # keystone role-list
+    PROJECT=$NAME
+    openstack project create $PROJECT
+    openstack user create $NAME --password=$PASSWORD --project $PROJECT
+    openstack role add Member --user $NAME --project $PROJECT
+    # The Member role is created by stack.sh
+    # openstack role list
 
 Swift
 -----
 
-Swift requires a significant amount of resources and is disabled by
-default in DevStack. The support in DevStack is geared toward a minimal
-installation but can be used for testing. To implement a true multi-node
-test of Swift required more than DevStack provides. Enabling it is as
+Swift, OpenStack Object Storage, requires a significant amount of resources
+and is disabled by default in DevStack. The support in DevStack is geared
+toward a minimal installation but can be used for testing. To implement a
+true multi-node test of swift, additional steps will be required. Enabling it is as
 simple as enabling the ``swift`` service in ``local.conf``:
 
 ::
 
     enable_service s-proxy s-object s-container s-account
 
-Swift will put its data files in ``SWIFT_DATA_DIR`` (default
+Swift will put its data files in ``SWIFT_DATA_DIR`` (default
 ``/opt/stack/data/swift``). The size of the data 'partition' created
 (really a loop-mounted file) is set by ``SWIFT_LOOPBACK_DISK_SIZE``. The
 Swift config files are located in ``SWIFT_CONF_DIR`` (default
@@ -334,14 +334,14 @@
 set in ``localrc`` it may be necessary to remove the corresponding
 directory from ``/opt/stack`` to force git to re-clone the repository.
 
-For example, to pull Nova from a proposed release candidate in the
-primary Nova repository:
+For example, to pull nova, OpenStack Compute, from a proposed release candidate
+in the primary nova repository:
 
 ::
 
     NOVA_BRANCH=rc-proposed
 
-To pull Glance from an experimental fork:
+To pull glance, OpenStack Image service, from an experimental fork:
 
 ::
 
diff --git a/doc/source/guides/neutron.rst b/doc/source/guides/neutron.rst
index 95cde96..bdfd3a4 100644
--- a/doc/source/guides/neutron.rst
+++ b/doc/source/guides/neutron.rst
@@ -1,15 +1,81 @@
 ======================================
-Using DevStack with Neutron Networking
+Using DevStack with neutron Networking
 ======================================
 
-This guide will walk you through using OpenStack Neutron with the ML2
+This guide will walk you through using OpenStack neutron with the ML2
 plugin and the Open vSwitch mechanism driver.
 
-Network Interface Configuration
-===============================
 
-To use Neutron, it is suggested that two network interfaces be present
-in the host operating system.
+Using Neutron with a Single Interface
+=====================================
+
+In some instances, such as on a developer laptop, only one network
+interface is available. In this scenario, the physical
+interface is added to the Open vSwitch bridge, and the IP address of
+the laptop is migrated onto the bridge interface. That way, the
+physical interface can be used to transmit tenant network traffic,
+the OpenStack API traffic, and management traffic.
+
+
+Physical Network Setup
+----------------------
+
+In most cases where DevStack is deployed with a single
+interface, a hardware router provides external
+connectivity and DHCP. The developer machine is connected to this
+network and is on a shared subnet with other machines.
+
+.. nwdiag::
+
+        nwdiag {
+                inet [ shape = cloud ];
+                router;
+                inet -- router;
+
+                network hardware_network {
+                        address = "172.18.161.0/24"
+                        router [ address = "172.18.161.1" ];
+                        devstack_laptop [ address = "172.18.161.6" ];
+                }
+        }
+
+
+DevStack Configuration
+----------------------
+
+Add the following to your ``local.conf``:
+
+::
+
+        HOST_IP=172.18.161.6
+        SERVICE_HOST=172.18.161.6
+        MYSQL_HOST=172.18.161.6
+        RABBIT_HOST=172.18.161.6
+        GLANCE_HOSTPORT=172.18.161.6:9292
+        ADMIN_PASSWORD=secrete
+        MYSQL_PASSWORD=secrete
+        RABBIT_PASSWORD=secrete
+        SERVICE_PASSWORD=secrete
+        SERVICE_TOKEN=secrete
+
+        ## Neutron options
+        Q_USE_SECGROUP=True
+        FLOATING_RANGE="172.18.161.1/24"
+        FIXED_RANGE="10.0.0.0/24"
+        Q_FLOATING_ALLOCATION_POOL=start=172.18.161.250,end=172.18.161.254
+        PUBLIC_NETWORK_GATEWAY="172.18.161.1"
+        Q_L3_ENABLED=True
+        PUBLIC_INTERFACE=eth0
+        Q_USE_PROVIDERNET_FOR_PUBLIC=True
+        OVS_PHYSICAL_BRIDGE=br-ex
+        PUBLIC_BRIDGE=br-ex
+        OVS_BRIDGE_MAPPINGS=public:br-ex
+
+
+Using Neutron with Multiple Interfaces
+======================================
 
 The first interface, eth0 is used for the OpenStack management (API,
 message bus, etc) as well as for ssh for an administrator to access
@@ -62,7 +128,7 @@
 Disabling Next Generation Firewall Tools
 ========================================
 
-Devstack does not properly operate with modern firewall tools.  Specifically
+DevStack does not properly operate with modern firewall tools.  Specifically
 it will appear as if the guest VM can access the external network via ICMP,
 but UDP and TCP packets will not be delivered to the guest VM.  The root cause
 of the issue is that both ufw (Uncomplicated Firewall) and firewalld (Fedora's
@@ -96,13 +162,13 @@
 Neutron Networking with Open vSwitch
 ====================================
 
-Configuring Neutron networking in DevStack is very similar to
+Configuring neutron, OpenStack Networking, in DevStack is very similar to
 configuring `nova-network` - many of the same configuration variables
 (like `FIXED_RANGE` and `FLOATING_RANGE`) used by `nova-network` are
-used by Neutron, which is intentional.
+used by neutron, which is intentional.
 
 The only difference is the disabling of `nova-network` in your
-local.conf, and the enabling of the Neutron components.
+local.conf, and the enabling of the neutron components.
 
 
 Configuration
@@ -131,19 +197,24 @@
 subnet that exists in the private RFC1918 address space - however in
 in a real setup FLOATING_RANGE would be a public IP address range.
 
+Note that the extension drivers for the ML2 plugin are set by
+`Q_ML2_PLUGIN_EXT_DRIVERS`, which includes 'port_security' by default. If you
+want to remove all the extension drivers (even 'port_security'), set
+`Q_ML2_PLUGIN_EXT_DRIVERS` to blank.
+
 Neutron Networking with Open vSwitch and Provider Networks
 ==========================================================
 
-In some instances, it is desirable to use Neutron's provider
+In some instances, it is desirable to use neutron's provider
 networking extension, so that networks that are configured on an
-external router can be utilized by Neutron, and instances created via
+external router can be utilized by neutron, and instances created via
 Nova can attach to the network managed by the external router.
 
 For example, in some lab environments, a hardware router has been
 pre-configured by another party, and an OpenStack developer has been
 given a VLAN tag and IP address range, so that instances created via
 DevStack will use the external router for L3 connectivity, as opposed
-to the Neutron L3 service.
+to the neutron L3 service.
 
 
 Service Configuration
@@ -152,8 +223,8 @@
 **Control Node**
 
 In this example, the control node will run the majority of the
-OpenStack API and management services (Keystone, Glance,
-Nova, Neutron, etc..)
+OpenStack API and management services (keystone, glance,
+nova, neutron).
 
 
 **Compute Nodes**
@@ -226,4 +297,4 @@
 For example, with the above  configuration, a bridge is
 created, named `br-ex` which is managed by Open vSwitch, and the
 second interface on the compute node, `eth1` is attached to the
-bridge, to forward traffic sent by guest vms.
+bridge, to forward traffic sent by guest VMs.
diff --git a/doc/source/guides/nova.rst b/doc/source/guides/nova.rst
index 0d98f4a..a91e0d1 100644
--- a/doc/source/guides/nova.rst
+++ b/doc/source/guides/nova.rst
@@ -1,15 +1,15 @@
 =================
-Nova and devstack
+Nova and DevStack
 =================
 
 This is a rough guide to various configuration parameters for nova
-running with devstack.
+running with DevStack.
 
 
 nova-serialproxy
 ================
 
-In Juno nova implemented a `spec
+In Juno, nova implemented a `spec
 <http://specs.openstack.org/openstack/nova-specs/specs/juno/implemented/serial-ports.html>`_
 to allow read/write access to the serial console of an instance via
 `nova-serialproxy
@@ -60,7 +60,7 @@
     #proxyclient_address=127.0.0.1
 
 
-Enabling the service is enough to be functional for a single machine devstack.
+Enabling the service is enough to be functional for a single machine DevStack.
 
 These config options are defined in `nova.console.serial
 <https://github.com/openstack/nova/blob/master/nova/console/serial.py#L33-L52>`_
diff --git a/doc/source/guides/single-vm.rst b/doc/source/guides/single-vm.rst
index ab46d91..c2ce1a3 100644
--- a/doc/source/guides/single-vm.rst
+++ b/doc/source/guides/single-vm.rst
@@ -3,10 +3,10 @@
 ====================
 
 Use the cloud to build the cloud! Use your cloud to launch new versions
-of OpenStack in about 5 minutes. When you break it, start over! The VMs
+of OpenStack in about 5 minutes. If you break it, start over! The VMs
 launched in the cloud will be slow as they are running in QEMU
 (emulation), but their primary use is testing OpenStack development and
-operation. Speed not required.
+operation.
 
 Prerequisites Cloud & Image
 ===========================
@@ -15,7 +15,7 @@
 ---------------
 
 DevStack should run in any virtual machine running a supported Linux
-release. It will perform best with 4Gb or more of RAM.
+release. It will perform best with 4GB or more of RAM.
 
 OpenStack Deployment & cloud-init
 ---------------------------------
@@ -88,7 +88,7 @@
 ---------------
 
 At this point you should be able to access the dashboard. Launch VMs and
-if you give them floating IPs access those VMs from other machines on
+if you give them floating IPs, access those VMs from other machines on
 your network.
 
 One interesting use case is for developers working on a VM on their
diff --git a/doc/source/hacking.rst b/doc/source/hacking.rst
new file mode 100644
index 0000000..a2bcf4f
--- /dev/null
+++ b/doc/source/hacking.rst
@@ -0,0 +1 @@
+.. include:: ../../HACKING.rst
diff --git a/doc/source/index.rst b/doc/source/index.rst
index bac593d..e0c3f3a 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -12,7 +12,7 @@
    plugins
    faq
    changes
-   contributing
+   hacking
 
 Quick Start
 -----------
@@ -41,8 +41,7 @@
 
 #. Configure
 
-   We recommend at least a :doc:`minimal
-   configuration <configuration>` be set up.
+   We recommend at least a :ref:`minimal-configuration` be set up.
 
 #. Start the install
 
@@ -68,6 +67,7 @@
    guides/neutron
    guides/devstack-with-nested-kvm
    guides/nova
+   guides/devstack-with-lbaas-v2
 
 All-In-One Single VM
 --------------------
@@ -139,7 +139,7 @@
 Contributing
 ------------
 
-:doc:`Pitching in to make DevStack a better place <contributing>`
+:doc:`Pitching in to make DevStack a better place <hacking>`
 
 Code
 ====
@@ -165,7 +165,7 @@
 * `lib/ironic <lib/ironic.html>`__
 * `lib/keystone <lib/keystone.html>`__
 * `lib/ldap <lib/ldap.html>`__
-* `lib/neutron <lib/neutron.html>`__
+* `lib/neutron-legacy <lib/neutron-legacy.html>`__
 * `lib/nova <lib/nova.html>`__
 * `lib/oslo <lib/oslo.html>`__
 * `lib/rpc\_backend <lib/rpc_backend.html>`__
@@ -173,7 +173,6 @@
 * `lib/swift <lib/swift.html>`__
 * `lib/tempest <lib/tempest.html>`__
 * `lib/tls <lib/tls.html>`__
-* `lib/trove <lib/trove.html>`__
 * `lib/zaqar <lib/zaqar.html>`__
 * `unstack.sh <unstack.sh.html>`__
 * `clean.sh <clean.sh.html>`__
@@ -182,7 +181,6 @@
 * `extras.d/50-ironic.sh <extras.d/50-ironic.sh.html>`__
 * `extras.d/60-ceph.sh <extras.d/60-ceph.sh.html>`__
 * `extras.d/70-sahara.sh <extras.d/70-sahara.sh.html>`__
-* `extras.d/70-trove.sh <extras.d/70-trove.sh.html>`__
 * `extras.d/70-tuskar.sh <extras.d/70-tuskar.sh.html>`__
 * `extras.d/70-zaqar.sh <extras.d/70-zaqar.sh.html>`__
 * `extras.d/80-tempest.sh <extras.d/80-tempest.sh.html>`__
@@ -210,6 +208,8 @@
 -----
 
 * `tools/build\_docs.sh <tools/build_docs.sh.html>`__
+* `tools/build\_venv.sh <tools/build_venv.sh.html>`__
+* `tools/build\_wheels.sh <tools/build_wheels.sh.html>`__
 * `tools/create-stack-user.sh <tools/create-stack-user.sh.html>`__
 * `tools/create\_userrc.sh <tools/create_userrc.sh.html>`__
 * `tools/fixup\_stuff.sh <tools/fixup_stuff.sh.html>`__
@@ -240,6 +240,5 @@
 * `exercises/sahara.sh <exercises/sahara.sh.html>`__
 * `exercises/sec\_groups.sh <exercises/sec_groups.sh.html>`__
 * `exercises/swift.sh <exercises/swift.sh.html>`__
-* `exercises/trove.sh <exercises/trove.sh.html>`__
 * `exercises/volumes.sh <exercises/volumes.sh.html>`__
 * `exercises/zaqar.sh <exercises/zaqar.sh.html>`__
diff --git a/doc/source/overview.rst b/doc/source/overview.rst
index 23ccf27..d245035 100644
--- a/doc/source/overview.rst
+++ b/doc/source/overview.rst
@@ -7,7 +7,7 @@
 well beyond what was originally intended and the majority of
 configuration combinations are rarely, if ever, tested. DevStack is not
 a general OpenStack installer and was never meant to be everything to
-everyone..
+everyone.
 
 Below is a list of what is specifically is supported (read that as
 "tested") going forward.
@@ -58,7 +58,7 @@
 OpenStack Network
 -----------------
 
-*Default to Nova Network, optionally use Neutron*
+*Defaults to nova network, optionally use neutron*
 
 -  Nova Network: FlatDHCP
 -  Neutron: A basic configuration approximating the original FlatDHCP
@@ -67,10 +67,10 @@
 Services
 --------
 
-The default services configured by DevStack are Identity (Keystone),
-Object Storage (Swift), Image Storage (Glance), Block Storage (Cinder),
-Compute (Nova), Network (Nova), Dashboard (Horizon), Orchestration
-(Heat)
+The default services configured by DevStack are Identity (keystone),
+Object Storage (swift), Image Service (glance), Block Storage (cinder),
+Compute (nova), Networking (nova), Dashboard (horizon), Orchestration
+(heat)
 
 Additional services not included directly in DevStack can be tied in to
 ``stack.sh`` using the :doc:`plugin mechanism <plugins>` to call
diff --git a/doc/source/plugins.rst b/doc/source/plugins.rst
index a9763e6..c4ed228 100644
--- a/doc/source/plugins.rst
+++ b/doc/source/plugins.rst
@@ -113,6 +113,11 @@
   services using ``run_process`` as it only works with enabled
   services.
 
+  Be careful to allow users to override global variables for
+  customizing their environment.  Usually it is best to provide a
+  default value only if the variable is unset or empty; e.g. in bash
+  syntax ``FOO=${FOO:-default}``.
+
 - ``plugin.sh`` - the actual plugin. It will be executed by devstack
   during it's run. The run order will be done in the registration
   order for these plugins, and will occur immediately after all in
@@ -179,3 +184,24 @@
 -  ``start_nova_hypervisor`` - start any external services
 -  ``stop_nova_hypervisor`` - stop any external services
 -  ``cleanup_nova_hypervisor`` - remove transient data and cache
+
+System Packages
+===============
+
+DevStack provides a framework for getting packages installed at an early
+phase of its execution. These packages may be defined in a plugin as files
+that contain newline-separated lists of packages required by the plugin.
+
+Supported packaging systems include apt and yum across multiple distributions.
+To enable a plugin to hook into this and install package dependencies, packages
+may be listed at the following locations in the top-level of the plugin
+repository:
+
+- ``./devstack/files/debs/$plugin_name`` - Packages to install when running
+  on Ubuntu, Debian or Linux Mint.
+
+- ``./devstack/files/rpms/$plugin_name`` - Packages to install when running
+  on Red Hat, Fedora, CentOS or XenServer.
+
+- ``./devstack/files/rpms-suse/$plugin_name`` - Packages to install when
+  running on SUSE Linux or openSUSE.
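+
+For example, a hypothetical plugin named ``foo`` that needs ``curl`` and
+``qemu-utils`` on Ubuntu would add a file ``./devstack/files/debs/foo``
+containing:
+
+::
+
+    curl
+    qemu-utils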
diff --git a/eucarc b/eucarc
index 343f4cc..1e672bd 100644
--- a/eucarc
+++ b/eucarc
@@ -19,7 +19,7 @@
 source $RC_DIR/openrc
 
 # Set the ec2 url so euca2ools works
-export EC2_URL=$(keystone catalog --service ec2 | awk '/ publicURL / { print $4 }')
+export EC2_URL=$(openstack catalog show ec2 | awk '/ publicURL: / { print $4 }')
 
 # Create EC2 credentials for the current user
 CREDS=$(openstack ec2 credentials create)
@@ -29,7 +29,7 @@
 # Euca2ools Certificate stuff for uploading bundles
 # See exercises/bundle.sh to see how to get certs using nova cli
 NOVA_KEY_DIR=${NOVA_KEY_DIR:-$RC_DIR}
-export S3_URL=$(keystone catalog --service s3 | awk '/ publicURL / { print $4 }')
+export S3_URL=$(openstack catalog show s3 | awk '/ publicURL: / { print $4 }')
 export EC2_USER_ID=42 # nova does not use user id, but bundling requires it
 export EC2_PRIVATE_KEY=${NOVA_KEY_DIR}/pk.pem
 export EC2_CERT=${NOVA_KEY_DIR}/cert.pem
diff --git a/exercise.sh b/exercise.sh
index ce694fb..19c9d80 100755
--- a/exercise.sh
+++ b/exercise.sh
@@ -2,7 +2,7 @@
 
 # **exercise.sh**
 
-# Keep track of the current devstack directory.
+# Keep track of the current DevStack directory.
 TOP_DIR=$(cd $(dirname "$0") && pwd)
 
 # Import common functions
@@ -14,11 +14,11 @@
 # Run everything in the exercises/ directory that isn't explicitly disabled
 
 # comma separated list of script basenames to skip
-# to refrain from exercising euca.sh use SKIP_EXERCISES=euca
+# to refrain from exercising euca.sh use ``SKIP_EXERCISES=euca``
 SKIP_EXERCISES=${SKIP_EXERCISES:-""}
 
 # comma separated list of script basenames to run
-# to run only euca.sh use RUN_EXERCISES=euca
+# to run only euca.sh use ``RUN_EXERCISES=euca``
 basenames=${RUN_EXERCISES:-""}
 
 EXERCISE_DIR=$TOP_DIR/exercises
@@ -27,7 +27,7 @@
     # Locate the scripts we should run
     basenames=$(for b in `ls $EXERCISE_DIR/*.sh`; do basename $b .sh; done)
 else
-    # If RUN_EXERCISES was specified, ignore SKIP_EXERCISES.
+    # If ``RUN_EXERCISES`` was specified, ignore ``SKIP_EXERCISES``.
     SKIP_EXERCISES=
 fi
 
@@ -56,7 +56,7 @@
     fi
 done
 
-# output status of exercise run
+# Output status of exercise run
 echo "====================================================================="
 for script in $skips; do
     echo SKIP $script
diff --git a/exercises/boot_from_volume.sh b/exercises/boot_from_volume.sh
index a2ae275..d520b9b 100755
--- a/exercises/boot_from_volume.sh
+++ b/exercises/boot_from_volume.sh
@@ -32,7 +32,7 @@
 
 # Import project functions
 source $TOP_DIR/lib/cinder
-source $TOP_DIR/lib/neutron
+source $TOP_DIR/lib/neutron-legacy
 
 # Import configuration
 source $TOP_DIR/openrc
@@ -182,7 +182,7 @@
 die_if_not_set $LINENO IP "Failure retrieving IP address"
 
 # Private IPs can be pinged in single node deployments
-ping_check "$PRIVATE_NETWORK_NAME" $IP $BOOT_TIMEOUT
+ping_check $IP $BOOT_TIMEOUT "$PRIVATE_NETWORK_NAME"
 
 # Clean up
 # --------
diff --git a/exercises/client-args.sh b/exercises/client-args.sh
index 2f85d98..c33ef44 100755
--- a/exercises/client-args.sh
+++ b/exercises/client-args.sh
@@ -69,7 +69,7 @@
         STATUS_KEYSTONE="Skipped"
     else
         echo -e "\nTest Keystone"
-        if keystone $TENANT_ARG $ARGS catalog --service identity; then
+        if openstack $TENANT_ARG $ARGS catalog show identity; then
             STATUS_KEYSTONE="Succeeded"
         else
             STATUS_KEYSTONE="Failed"
diff --git a/exercises/euca.sh b/exercises/euca.sh
index f9c4752..c2957e2 100755
--- a/exercises/euca.sh
+++ b/exercises/euca.sh
@@ -37,7 +37,7 @@
 source $TOP_DIR/exerciserc
 
 # Import project functions
-source $TOP_DIR/lib/neutron
+source $TOP_DIR/lib/neutron-legacy
 
 # If nova api is not enabled we exit with exitcode 55 so that
 # the exercise is skipped
@@ -142,7 +142,7 @@
         die $LINENO "Failure authorizing rule in $SECGROUP"
 
     # Test we can ping our floating ip within ASSOCIATE_TIMEOUT seconds
-    ping_check "$PUBLIC_NETWORK_NAME" $FLOATING_IP $ASSOCIATE_TIMEOUT
+    ping_check $FLOATING_IP $ASSOCIATE_TIMEOUT "$PUBLIC_NETWORK_NAME"
 
     # Revoke pinging
     euca-revoke -P icmp -s 0.0.0.0/0 -t -1:-1 $SECGROUP || \
diff --git a/exercises/floating_ips.sh b/exercises/floating_ips.sh
index 57f48e0..4b72a00 100755
--- a/exercises/floating_ips.sh
+++ b/exercises/floating_ips.sh
@@ -31,7 +31,7 @@
 source $TOP_DIR/openrc
 
 # Import project functions
-source $TOP_DIR/lib/neutron
+source $TOP_DIR/lib/neutron-legacy
 
 # Import exercise configuration
 source $TOP_DIR/exerciserc
@@ -139,7 +139,7 @@
 die_if_not_set $LINENO IP "Failure retrieving IP address"
 
 # Private IPs can be pinged in single node deployments
-ping_check "$PRIVATE_NETWORK_NAME" $IP $BOOT_TIMEOUT
+ping_check $IP $BOOT_TIMEOUT "$PRIVATE_NETWORK_NAME"
 
 # Floating IPs
 # ------------
@@ -158,7 +158,7 @@
     die $LINENO "Failure adding floating IP $FLOATING_IP to $VM_NAME"
 
 # Test we can ping our floating IP within ASSOCIATE_TIMEOUT seconds
-ping_check "$PUBLIC_NETWORK_NAME" $FLOATING_IP $ASSOCIATE_TIMEOUT
+ping_check $FLOATING_IP $ASSOCIATE_TIMEOUT "$PUBLIC_NETWORK_NAME"
 
 if ! is_service_enabled neutron; then
     # Allocate an IP from second floating pool
@@ -182,7 +182,7 @@
 # FIXME (anthony): make xs support security groups
 if [ "$VIRT_DRIVER" != "ironic" -a "$VIRT_DRIVER" != "xenserver" -a "$VIRT_DRIVER" != "openvz" ]; then
     # Test we can aren't able to ping our floating ip within ASSOCIATE_TIMEOUT seconds
-    ping_check "$PUBLIC_NETWORK_NAME" $FLOATING_IP $ASSOCIATE_TIMEOUT Fail
+    ping_check $FLOATING_IP $ASSOCIATE_TIMEOUT "$PUBLIC_NETWORK_NAME" Fail
 fi
 
 # Clean up
diff --git a/exercises/horizon.sh b/exercises/horizon.sh
deleted file mode 100755
index 4020580..0000000
--- a/exercises/horizon.sh
+++ /dev/null
@@ -1,45 +0,0 @@
-#!/usr/bin/env bash
-
-# **horizon.sh**
-
-# Sanity check that horizon started if enabled
-
-echo "*********************************************************************"
-echo "Begin DevStack Exercise: $0"
-echo "*********************************************************************"
-
-# This script exits on an error so that errors don't compound and you see
-# only the first error that occurred.
-set -o errexit
-
-# Print the commands being run so that we can see the command that triggers
-# an error.  It is also useful for following allowing as the install occurs.
-set -o xtrace
-
-
-# Settings
-# ========
-
-# Keep track of the current directory
-EXERCISE_DIR=$(cd $(dirname "$0") && pwd)
-TOP_DIR=$(cd $EXERCISE_DIR/..; pwd)
-
-# Import common functions
-source $TOP_DIR/functions
-
-# Import configuration
-source $TOP_DIR/openrc
-
-# Import exercise configuration
-source $TOP_DIR/exerciserc
-
-is_service_enabled horizon || exit 55
-
-# can we get the front page
-$CURL_GET http://$SERVICE_HOST 2>/dev/null | grep -q '<h3.*>Log In</h3>' || die $LINENO "Horizon front page not functioning!"
-
-set +o xtrace
-echo "*********************************************************************"
-echo "SUCCESS: End DevStack Exercise: $0"
-echo "*********************************************************************"
-
diff --git a/exercises/neutron-adv-test.sh b/exercises/neutron-adv-test.sh
index 5b3281b..04892b0 100755
--- a/exercises/neutron-adv-test.sh
+++ b/exercises/neutron-adv-test.sh
@@ -49,7 +49,7 @@
 source $TOP_DIR/openrc
 
 # Import neutron functions
-source $TOP_DIR/lib/neutron
+source $TOP_DIR/lib/neutron-legacy
 
 # If neutron is not enabled we exit with exitcode 55, which means exercise is skipped.
 neutron_plugin_check_adv_test_requirements || exit 55
@@ -281,7 +281,7 @@
     local VM_NAME=$1
     local NET_NAME=$2
     IP=$(get_instance_ip $VM_NAME $NET_NAME)
-    ping_check $NET_NAME $IP $BOOT_TIMEOUT
+    ping_check $IP $BOOT_TIMEOUT $NET_NAME
 }
 
 function check_vm {
diff --git a/exercises/volumes.sh b/exercises/volumes.sh
index 504fba1..f95c81f 100755
--- a/exercises/volumes.sh
+++ b/exercises/volumes.sh
@@ -32,7 +32,7 @@
 
 # Import project functions
 source $TOP_DIR/lib/cinder
-source $TOP_DIR/lib/neutron
+source $TOP_DIR/lib/neutron-legacy
 
 # Import exercise configuration
 source $TOP_DIR/exerciserc
@@ -143,7 +143,7 @@
 die_if_not_set $LINENO IP "Failure retrieving IP address"
 
 # Private IPs can be pinged in single node deployments
-ping_check "$PRIVATE_NETWORK_NAME" $IP $BOOT_TIMEOUT
+ping_check $IP $BOOT_TIMEOUT "$PRIVATE_NETWORK_NAME"
 
 # Volumes
 # -------
diff --git a/extras.d/70-trove.sh b/extras.d/70-trove.sh
deleted file mode 100644
index f284354..0000000
--- a/extras.d/70-trove.sh
+++ /dev/null
@@ -1,32 +0,0 @@
-# trove.sh - Devstack extras script to install Trove
-
-if is_service_enabled trove; then
-    if [[ "$1" == "source" ]]; then
-        # Initial source
-        source $TOP_DIR/lib/trove
-    elif [[ "$1" == "stack" && "$2" == "install" ]]; then
-        echo_summary "Installing Trove"
-        install_trove
-        install_troveclient
-        cleanup_trove
-    elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
-        echo_summary "Configuring Trove"
-        configure_trove
-
-        if is_service_enabled key; then
-            create_trove_accounts
-        fi
-
-    elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
-        # Initialize trove
-        init_trove
-
-        # Start the trove API and trove taskmgr components
-        echo_summary "Starting Trove"
-        start_trove
-    fi
-
-    if [[ "$1" == "unstack" ]]; then
-        stop_trove
-    fi
-fi
diff --git a/files/apache-keystone.template b/files/apache-keystone.template
index 504dc01..0b914e2 100644
--- a/files/apache-keystone.template
+++ b/files/apache-keystone.template
@@ -1,8 +1,9 @@
 Listen %PUBLICPORT%
 Listen %ADMINPORT%
+LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\" %D(us)" keystone_combined
 
 <VirtualHost *:%PUBLICPORT%>
-    WSGIDaemonProcess keystone-public processes=5 threads=1 user=%USER% display-name=%{GROUP}
+    WSGIDaemonProcess keystone-public processes=5 threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
     WSGIProcessGroup keystone-public
     WSGIScriptAlias / %PUBLICWSGI%
     WSGIApplicationGroup %{GLOBAL}
@@ -11,14 +12,14 @@
       ErrorLogFormat "%{cu}t %M"
     </IfVersion>
     ErrorLog /var/log/%APACHE_NAME%/keystone.log
-    CustomLog /var/log/%APACHE_NAME%/keystone_access.log combined
+    CustomLog /var/log/%APACHE_NAME%/keystone_access.log keystone_combined
     %SSLENGINE%
     %SSLCERTFILE%
     %SSLKEYFILE%
 </VirtualHost>
 
 <VirtualHost *:%ADMINPORT%>
-    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=%USER% display-name=%{GROUP}
+    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
     WSGIProcessGroup keystone-admin
     WSGIScriptAlias / %ADMINWSGI%
     WSGIApplicationGroup %{GLOBAL}
@@ -27,7 +28,7 @@
       ErrorLogFormat "%{cu}t %M"
     </IfVersion>
     ErrorLog /var/log/%APACHE_NAME%/keystone.log
-    CustomLog /var/log/%APACHE_NAME%/keystone_access.log combined
+    CustomLog /var/log/%APACHE_NAME%/keystone_access.log keystone_combined
     %SSLENGINE%
     %SSLCERTFILE%
     %SSLKEYFILE%
diff --git a/files/apache-nova-api.template b/files/apache-nova-api.template
new file mode 100644
index 0000000..70ccedd
--- /dev/null
+++ b/files/apache-nova-api.template
@@ -0,0 +1,16 @@
+Listen %PUBLICPORT%
+
+<VirtualHost *:%PUBLICPORT%>
+    WSGIDaemonProcess nova-api processes=5 threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
+    WSGIProcessGroup nova-api
+    WSGIScriptAlias / %PUBLICWSGI%
+    WSGIApplicationGroup %{GLOBAL}
+    WSGIPassAuthorization On
+    <IfVersion >= 2.4>
+      ErrorLogFormat "%{cu}t %M"
+    </IfVersion>
+    ErrorLog /var/log/%APACHE_NAME%/nova-api.log
+    %SSLENGINE%
+    %SSLCERTFILE%
+    %SSLKEYFILE%
+</VirtualHost>
\ No newline at end of file
diff --git a/files/apache-nova-ec2-api.template b/files/apache-nova-ec2-api.template
new file mode 100644
index 0000000..ae4cf94
--- /dev/null
+++ b/files/apache-nova-ec2-api.template
@@ -0,0 +1,16 @@
+Listen %PUBLICPORT%
+
+<VirtualHost *:%PUBLICPORT%>
+    WSGIDaemonProcess nova-ec2-api processes=5 threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
+    WSGIProcessGroup nova-ec2-api
+    WSGIScriptAlias / %PUBLICWSGI%
+    WSGIApplicationGroup %{GLOBAL}
+    WSGIPassAuthorization On
+    <IfVersion >= 2.4>
+      ErrorLogFormat "%{cu}t %M"
+    </IfVersion>
+    ErrorLog /var/log/%APACHE_NAME%/nova-ec2-api.log
+    %SSLENGINE%
+    %SSLCERTFILE%
+    %SSLKEYFILE%
+</VirtualHost>
\ No newline at end of file
diff --git a/files/debs/cinder b/files/debs/cinder
index 7819c31..51908eb 100644
--- a/files/debs/cinder
+++ b/files/debs/cinder
@@ -1,4 +1,4 @@
-tgt
+tgt # NOPRIME
 lvm2
 qemu-utils
 libpq-dev
diff --git a/files/debs/general b/files/debs/general
index 84d4302..1460526 100644
--- a/files/debs/general
+++ b/files/debs/general
@@ -6,7 +6,7 @@
 gcc
 g++
 git
-graphviz # testonly - docs
+graphviz # needed for docs
 lsof # useful when debugging
 openssh-server
 openssl
@@ -17,9 +17,11 @@
 tar
 python-dev
 python2.7
+python-gdbm # needed for testr
 bc
 libyaml-dev
 libffi-dev
 libssl-dev # for pyOpenSSL
 gettext  # used for compiling message catalogs
 openjdk-7-jre-headless  # NOPRIME
+pkg-config
diff --git a/files/debs/glance b/files/debs/glance
index 9fda6a6..37877a8 100644
--- a/files/debs/glance
+++ b/files/debs/glance
@@ -1,6 +1,6 @@
-libmysqlclient-dev  # testonly
-libpq-dev           # testonly
-libssl-dev          # testonly
+libmysqlclient-dev
+libpq-dev
+libssl-dev
 libxml2-dev
-libxslt1-dev        # testonly
-zlib1g-dev           # testonly
+libxslt1-dev
+zlib1g-dev
diff --git a/files/debs/neutron b/files/debs/neutron
index aa3d709..2d69a71 100644
--- a/files/debs/neutron
+++ b/files/debs/neutron
@@ -1,12 +1,12 @@
-acl     # testonly
+acl
 ebtables
 iptables
 iputils-ping
 iputils-arping
-libmysqlclient-dev  # testonly
+libmysqlclient-dev
 mysql-server #NOPRIME
 sudo
-postgresql-server-dev-all       # testonly
+postgresql-server-dev-all
 python-mysqldb
 python-mysql.connector
 python-qpid # NOPRIME
diff --git a/files/debs/nova b/files/debs/nova
index 0c31385..9d9acde 100644
--- a/files/debs/nova
+++ b/files/debs/nova
@@ -4,7 +4,7 @@
 kpartx
 parted
 iputils-arping
-libmysqlclient-dev  # testonly
+libmysqlclient-dev
 mysql-server # NOPRIME
 python-mysqldb
 python-mysql.connector
diff --git a/files/debs/swift b/files/debs/swift
index b32b439..726786e 100644
--- a/files/debs/swift
+++ b/files/debs/swift
@@ -1,7 +1,5 @@
 curl
+make
 memcached
-# NOTE python-nose only exists because of swift functional job, we should probably
-# figure out a more consistent way of installing this from test-requirements.txt instead
-python-nose
 sqlite3
 xfsprogs
diff --git a/files/debs/tempest b/files/debs/tempest
index f244e4e..bb09529 100644
--- a/files/debs/tempest
+++ b/files/debs/tempest
@@ -1 +1,2 @@
-libxslt1-dev
\ No newline at end of file
+libxml2-dev
+libxslt1-dev
diff --git a/files/debs/trove b/files/debs/trove
index 09dcee8..96f8f29 100644
--- a/files/debs/trove
+++ b/files/debs/trove
@@ -1 +1 @@
-libxslt1-dev   # testonly
+libxslt1-dev
diff --git a/files/rpms-suse/ceilometer-collector b/files/rpms-suse/ceilometer-collector
index c76454f..5e4dfcc 100644
--- a/files/rpms-suse/ceilometer-collector
+++ b/files/rpms-suse/ceilometer-collector
@@ -1,4 +1,3 @@
 # Not available in openSUSE main repositories, but can be fetched from OBS
 # (devel:languages:python and server:database projects)
 mongodb
-python-pymongo
diff --git a/files/rpms-suse/cinder b/files/rpms-suse/cinder
index 55078da..3fd03cc 100644
--- a/files/rpms-suse/cinder
+++ b/files/rpms-suse/cinder
@@ -1,5 +1,5 @@
 lvm2
-tgt
+tgt # NOPRIME
 qemu-tools
 python-devel
 postgresql-devel
diff --git a/files/rpms-suse/devlibs b/files/rpms-suse/devlibs
index c923825..bdb630a 100644
--- a/files/rpms-suse/devlibs
+++ b/files/rpms-suse/devlibs
@@ -3,4 +3,5 @@
 libxml2-devel  # lxml
 libxslt-devel  # lxml
 postgresql-devel  # psycopg2
+libmysqlclient-devel # MySQL-python
 python-devel  # pyOpenSSL
diff --git a/files/rpms-suse/general b/files/rpms-suse/general
index 63cf14b..42756d8 100644
--- a/files/rpms-suse/general
+++ b/files/rpms-suse/general
@@ -6,7 +6,7 @@
 gcc
 gcc-c++
 git-core
-graphviz # testonly - docs
+graphviz # docs
 iputils
 libopenssl-devel # to rebuild pyOpenSSL if needed
 lsof # useful when debugging
@@ -15,7 +15,6 @@
 openssl
 psmisc
 python-cmd2 # dist:opensuse-12.3
-python-pylint
 screen
 tar
 tcpdump
diff --git a/files/rpms-suse/glance b/files/rpms-suse/glance
index 9b962f9..0e58425 100644
--- a/files/rpms-suse/glance
+++ b/files/rpms-suse/glance
@@ -1,11 +1,2 @@
 libxml2-devel
-python-PasteDeploy
-python-Routes
-python-SQLAlchemy
-python-argparse
 python-devel
-python-eventlet
-python-greenlet
-python-iso8601
-python-pyOpenSSL
-python-xattr
diff --git a/files/rpms-suse/horizon b/files/rpms-suse/horizon
index d1f378a..77f7c34 100644
--- a/files/rpms-suse/horizon
+++ b/files/rpms-suse/horizon
@@ -1,18 +1,2 @@
 apache2  # NOPRIME
 apache2-mod_wsgi  # NOPRIME
-python-CherryPy # why? (coming from apts)
-python-Paste
-python-PasteDeploy
-python-Routes
-python-SQLAlchemy
-python-WebOb
-python-anyjson
-python-beautifulsoup
-python-coverage
-python-dateutil
-python-eventlet
-python-mox
-python-nose
-python-pylint
-python-sqlalchemy-migrate
-python-xattr
diff --git a/files/rpms-suse/keystone b/files/rpms-suse/keystone
index 4c37ade..c838b41 100644
--- a/files/rpms-suse/keystone
+++ b/files/rpms-suse/keystone
@@ -1,15 +1,4 @@
 cyrus-sasl-devel
 openldap2-devel
-python-Paste
-python-PasteDeploy
-python-PasteScript
-python-Routes
-python-SQLAlchemy
-python-WebOb
 python-devel
-python-greenlet
-python-lxml
-python-mysql
-python-mysql-connector-python
-python-pysqlite
 sqlite3
diff --git a/files/rpms-suse/neutron b/files/rpms-suse/neutron
index 66d6e4c..e75db89 100644
--- a/files/rpms-suse/neutron
+++ b/files/rpms-suse/neutron
@@ -1,22 +1,11 @@
-acl     # testonly
+acl
 dnsmasq
 dnsmasq-utils # dist:opensuse-12.3,opensuse-13.1
 ebtables
 iptables
 iputils
 mariadb # NOPRIME
-postgresql-devel        # testonly
-python-eventlet
-python-greenlet
-python-iso8601
-python-mysql
-python-mysql-connector-python
-python-Paste
-python-PasteDeploy
-python-pyudev
-python-Routes
-python-SQLAlchemy
-python-suds
+postgresql-devel
 rabbitmq-server # NOPRIME
 sqlite3
 sudo
@@ -24,5 +13,4 @@
 radvd # NOPRIME
 
 # FIXME: qpid is not part of openSUSE, those names are tentative
-python-qpid # NOPRIME
 qpidd # NOPRIME
diff --git a/files/rpms-suse/nova b/files/rpms-suse/nova
index b1c4f6a..6f8aef1 100644
--- a/files/rpms-suse/nova
+++ b/files/rpms-suse/nova
@@ -16,29 +16,7 @@
 mariadb # NOPRIME
 parted
 polkit
-python-M2Crypto
-python-m2crypto # dist:sle11sp2
-python-Paste
-python-PasteDeploy
-python-Routes
-python-SQLAlchemy
-python-Tempita
-python-cheetah
-python-eventlet
-python-feedparser
-python-greenlet
-python-iso8601
-python-libxml2
-python-lockfile
-python-lxml # needed for glance which is needed for nova --- this shouldn't be here
-python-mox
-python-mysql
-python-mysql-connector-python
-python-numpy # needed by websockify for spice console
-python-paramiko
-python-sqlalchemy-migrate
-python-suds
-python-xattr # needed for glance which is needed for nova --- this shouldn't be here
+python-devel
 rabbitmq-server # NOPRIME
 socat
 sqlite3
diff --git a/files/rpms-suse/swift b/files/rpms-suse/swift
index 4b14098..6a824f9 100644
--- a/files/rpms-suse/swift
+++ b/files/rpms-suse/swift
@@ -1,16 +1,6 @@
 curl
 memcached
-python-PasteDeploy
-python-WebOb
-python-configobj
-python-coverage
 python-devel
-python-eventlet
-python-greenlet
-python-netifaces
-python-nose
-python-simplejson
-python-xattr
 sqlite3
 xfsprogs
 xinetd
diff --git a/files/rpms-suse/trove b/files/rpms-suse/trove
index 09dcee8..96f8f29 100644
--- a/files/rpms-suse/trove
+++ b/files/rpms-suse/trove
@@ -1 +1 @@
-libxslt1-dev   # testonly
+libxslt1-dev
diff --git a/files/rpms/ceilometer-collector b/files/rpms/ceilometer-collector
index 9cf580d..b139ed2 100644
--- a/files/rpms/ceilometer-collector
+++ b/files/rpms/ceilometer-collector
@@ -1,4 +1,3 @@
 selinux-policy-targeted
 mongodb-server #NOPRIME
-pymongo # NOPRIME
 mongodb # NOPRIME
diff --git a/files/rpms/cinder b/files/rpms/cinder
index 082a35a..a88503b 100644
--- a/files/rpms/cinder
+++ b/files/rpms/cinder
@@ -1,6 +1,5 @@
 lvm2
-scsi-target-utils
+scsi-target-utils # NOPRIME
 qemu-img
 postgresql-devel
 iscsi-initiator-utils
-python-lxml
diff --git a/files/rpms/devlibs b/files/rpms/devlibs
index 834a4b6..385ed3b 100644
--- a/files/rpms/devlibs
+++ b/files/rpms/devlibs
@@ -1,8 +1,7 @@
 libffi-devel  # pyOpenSSL
 libxml2-devel  # lxml
 libxslt-devel  # lxml
-mariadb-devel  # MySQL-python  f20,f21,rhel7
-mysql-devel  # MySQL-python  rhel6
+mariadb-devel  # MySQL-python
 openssl-devel  # pyOpenSSL
 postgresql-devel  # psycopg2
 python-devel  # pyOpenSSL
diff --git a/files/rpms/general b/files/rpms/general
index eac4ec3..7b2c00a 100644
--- a/files/rpms/general
+++ b/files/rpms/general
@@ -5,15 +5,15 @@
 gcc
 gcc-c++
 git-core
-graphviz # testonly - docs
+graphviz # needed only for docs
 openssh-server
 openssl
 openssl-devel # to rebuild pyOpenSSL if needed
 libffi-devel
 libxml2-devel
 libxslt-devel
+pkgconfig
 psmisc
-pylint
 python-devel
 screen
 tar
@@ -27,3 +27,4 @@
 net-tools
 java-1.7.0-openjdk-headless  # NOPRIME rhel7,f20
 java-1.8.0-openjdk-headless  # NOPRIME f21,f22
+pyOpenSSL # version in pip uses too much memory
diff --git a/files/rpms/glance b/files/rpms/glance
index a09b669..479194f 100644
--- a/files/rpms/glance
+++ b/files/rpms/glance
@@ -1,14 +1,6 @@
-libxml2-devel       # testonly
-libxslt-devel       # testonly
-mysql-devel         # testonly
-openssl-devel       # testonly
-postgresql-devel    # testonly
-python-argparse
-python-eventlet
-python-greenlet
-python-lxml
-python-paste-deploy
-python-routes
-python-sqlalchemy
-pyxattr
-zlib-devel          # testonly
+libxml2-devel
+libxslt-devel
+mysql-devel
+openssl-devel
+postgresql-devel
+zlib-devel
diff --git a/files/rpms/horizon b/files/rpms/horizon
index 585c36c..b2cf0de 100644
--- a/files/rpms/horizon
+++ b/files/rpms/horizon
@@ -1,21 +1,5 @@
 Django
 httpd # NOPRIME
 mod_wsgi  # NOPRIME
-pylint
-python-anyjson
-python-BeautifulSoup
-python-coverage
-python-dateutil
-python-eventlet
-python-greenlet
-python-httplib2
-python-migrate
-python-mox
-python-nose
-python-paste
-python-paste-deploy
-python-routes
-python-sqlalchemy
-python-webob
 pyxattr
 pcre-devel  # pyScss
diff --git a/files/rpms/ironic b/files/rpms/ironic
index 0a46314..2bf8bb3 100644
--- a/files/rpms/ironic
+++ b/files/rpms/ironic
@@ -8,7 +8,6 @@
 net-tools
 openssh-clients
 openvswitch
-python-libguestfs
 sgabios
 syslinux
 tftp-server
diff --git a/files/rpms/keystone b/files/rpms/keystone
index 45492e0..8074119 100644
--- a/files/rpms/keystone
+++ b/files/rpms/keystone
@@ -1,14 +1,4 @@
 MySQL-python
-python-greenlet
 libxslt-devel
-python-lxml
-python-paste
-python-paste-deploy
-python-paste-script
-python-routes
-python-sqlalchemy
-python-webob
 sqlite
 mod_ssl
-
-# Deps installed via pip for RHEL
diff --git a/files/rpms/ldap b/files/rpms/ldap
index 2f7ab5d..d89c4cf 100644
--- a/files/rpms/ldap
+++ b/files/rpms/ldap
@@ -1,3 +1,2 @@
 openldap-servers
 openldap-clients
-python-ldap
diff --git a/files/rpms/n-api b/files/rpms/n-api
index 6f59e60..0928cd5 100644
--- a/files/rpms/n-api
+++ b/files/rpms/n-api
@@ -1,2 +1 @@
-python-dateutil
 fping
diff --git a/files/rpms/n-cpu b/files/rpms/n-cpu
index 32b1546..c1a8e8f 100644
--- a/files/rpms/n-cpu
+++ b/files/rpms/n-cpu
@@ -4,4 +4,4 @@
 genisoimage
 sysfsutils
 sg3_utils
-python-libguestfs # NOPRIME
+
diff --git a/files/rpms/neutron b/files/rpms/neutron
index d11dab7..8292e7b 100644
--- a/files/rpms/neutron
+++ b/files/rpms/neutron
@@ -1,24 +1,15 @@
 MySQL-python
-acl     # testonly
+acl
 dnsmasq # for q-dhcp
 dnsmasq-utils # for dhcp_release
 ebtables
 iptables
 iputils
 mysql-connector-python
-mysql-devel  # testonly
+mysql-devel
 mysql-server # NOPRIME
 openvswitch # NOPRIME
-postgresql-devel        # testonly
-python-eventlet
-python-greenlet
-python-iso8601
-python-paste
-python-paste-deploy
-python-qpid # NOPRIME
-python-routes
-python-sqlalchemy
-python-suds
+postgresql-devel
 rabbitmq-server # NOPRIME
 qpid-cpp-server        # NOPRIME
 sqlite
diff --git a/files/rpms/nova b/files/rpms/nova
index 557de90..ebd6674 100644
--- a/files/rpms/nova
+++ b/files/rpms/nova
@@ -17,26 +17,10 @@
 numpy # needed by websockify for spice console
 m2crypto
 mysql-connector-python
-mysql-devel  # testonly
+mysql-devel
 mysql-server # NOPRIME
 parted
 polkit
-python-cheetah
-python-eventlet
-python-feedparser
-python-greenlet
-python-iso8601
-python-lockfile
-python-migrate
-python-mox
-python-paramiko
-python-paste
-python-paste-deploy
-python-qpid # NOPRIME
-python-routes
-python-sqlalchemy
-python-suds
-python-tempita
 rabbitmq-server # NOPRIME
 qpid-cpp-server # NOPRIME
 sqlite
diff --git a/files/rpms/qpid b/files/rpms/qpid
index c5e2699..41dd2f6 100644
--- a/files/rpms/qpid
+++ b/files/rpms/qpid
@@ -1,4 +1,3 @@
 qpid-proton-c-devel # NOPRIME
-python-qpid-proton # NOPRIME
 cyrus-sasl-lib # NOPRIME
 cyrus-sasl-plain # NOPRIME
diff --git a/files/rpms/swift b/files/rpms/swift
index 5789a19..1bf57cc 100644
--- a/files/rpms/swift
+++ b/files/rpms/swift
@@ -1,14 +1,5 @@
 curl
 memcached
-python-configobj
-python-coverage
-python-eventlet
-python-greenlet
-python-netifaces
-python-nose
-python-paste-deploy
-python-simplejson
-python-webob
 pyxattr
 sqlite
 xfsprogs
diff --git a/files/rpms/trove b/files/rpms/trove
index c5cbdea..e7bbd43 100644
--- a/files/rpms/trove
+++ b/files/rpms/trove
@@ -1 +1 @@
-libxslt-devel   # testonly
+libxslt-devel
diff --git a/files/rpms/zaqar-server b/files/rpms/zaqar-server
index 541cefa..78806fb 100644
--- a/files/rpms/zaqar-server
+++ b/files/rpms/zaqar-server
@@ -3,4 +3,3 @@
 mongodb-server
 pymongo
 redis # NOPRIME
-python-redis # NOPRIME
diff --git a/files/venv-requirements.txt b/files/venv-requirements.txt
index e473a2f..b9a55b4 100644
--- a/files/venv-requirements.txt
+++ b/files/venv-requirements.txt
@@ -1,10 +1,10 @@
+# Once we can prebuild wheels before a DevStack run, uncomment the skipped libraries
 cryptography
-lxml
-MySQL-python
-netifaces
+# lxml # still install from packages
+# netifaces # still install from packages
 #numpy    # slowest wheel by far, stop building until we are actually using the output
 posix-ipc
-psycopg2
+# psycopg2 # still install from packages
 pycrypto
 pyOpenSSL
 PyYAML
diff --git a/functions b/functions
index 9adbfe7..1668e16 100644
--- a/functions
+++ b/functions
@@ -15,6 +15,7 @@
 source ${FUNC_DIR}/functions-common
 source ${FUNC_DIR}/inc/ini-config
 source ${FUNC_DIR}/inc/python
+source ${FUNC_DIR}/inc/rootwrap
 
 # Save trace setting
 XTRACE=$(set +o | grep xtrace)
@@ -286,6 +287,10 @@
         img_property="--property hw_cdrom_bus=scsi"
     fi
 
+    if is_arch "aarch64"; then
+        img_property="--property hw_machine_type=virt --property hw_cdrom_bus=virtio --property os_command_line='console=ttyAMA0'"
+    fi
+
     if [ "$container_format" = "bare" ]; then
         if [ "$unpack" = "zcat" ]; then
             openstack --os-token $token --os-url $GLANCE_SERVICE_PROTOCOL://$GLANCE_HOSTPORT image create "$image_name" $img_property --public --container-format=$container_format --disk-format $disk_format < <(zcat --force "${image}")
@@ -339,40 +344,43 @@
 
 
 # ping check
-# Uses globals ``ENABLED_SERVICES``
-# ping_check from-net ip boot-timeout expected
+# Uses globals ``ENABLED_SERVICES``, ``TOP_DIR``, ``MULTI_HOST``, ``PRIVATE_NETWORK``
+# ping_check <ip> [boot-timeout] [from_net] [expected]
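+#
+# Example (illustrative): ``ping_check 10.0.0.5 60`` waits up to 60
+# seconds for 10.0.0.5 to respond; ``ping_check 10.0.0.5 60 "" False``
+# instead waits for it to stop responding.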
 function ping_check {
-    if is_service_enabled neutron; then
-        _ping_check_neutron  "$1" $2 $3 $4
-        return
-    fi
-    _ping_check_novanet "$1" $2 $3 $4
-}
+    local ip=$1
+    local timeout=${2:-30}
+    local from_net=${3:-""}
+    local expected=${4:-True}
+    local op="!"
+    local failmsg="[Fail] Couldn't ping server"
+    local ping_cmd="ping"
 
-# ping check for nova
-# Uses globals ``MULTI_HOST``, ``PRIVATE_NETWORK``
-function _ping_check_novanet {
-    local from_net=$1
-    local ip=$2
-    local boot_timeout=$3
-    local expected=${4:-"True"}
-    local check_command=""
-    MULTI_HOST=$(trueorfalse False MULTI_HOST)
-    if [[ "$MULTI_HOST" = "True" && "$from_net" = "$PRIVATE_NETWORK_NAME" ]]; then
-        return
-    fi
-    if [[ "$expected" = "True" ]]; then
-        check_command="while ! ping -c1 -w1 $ip; do sleep 1; done"
-    else
-        check_command="while ping -c1 -w1 $ip; do sleep 1; done"
-    fi
-    if ! timeout $boot_timeout sh -c "$check_command"; then
-        if [[ "$expected" = "True" ]]; then
-            die $LINENO "[Fail] Couldn't ping server"
-        else
-            die $LINENO "[Fail] Could ping server"
+    # if we don't specify a from_net we're expecting things to work
+    # fine from our local box.
+    if [[ -n "$from_net" ]]; then
+        if is_service_enabled neutron; then
+            ping_cmd="$TOP_DIR/tools/ping_neutron.sh $from_net"
+        elif [[ "$MULTI_HOST" = "True" && "$from_net" = "$PRIVATE_NETWORK_NAME" ]]; then
+            # There is no way to address the multihost / private case; bail here for compatibility.
+            # TODO: remove this cruft and redo code to handle this at the caller level.
+            return
         fi
     fi
+
+    # invert the logic if we're testing for no connectivity
+    if [[ "$expected" != "True" ]]; then
+        op=""
+        failmsg="[Fail] Could ping server"
+    fi
+
+    # Because we've transformed this command so many times, print it
+    # out at the end.
+    local check_command="while $op $ping_cmd -c1 -w1 $ip; do sleep 1; done"
+    echo "Checking connectivity with $check_command"
+
+    if ! timeout $timeout sh -c "$check_command"; then
+        die $LINENO $failmsg
+    fi
 }
 
 # Get ip of instance
@@ -439,7 +447,7 @@
             echo "*** DEST path element"
             echo "***    ${rebuilt_path}"
             echo "*** appears to have 0700 permissions."
-            echo "*** This is very likely to cause fatal issues for devstack daemons."
+            echo "*** This is very likely to cause fatal issues for DevStack daemons."
 
             if [[ -n "$SKIP_PATH_SANITY" ]]; then
                 return
@@ -526,8 +534,8 @@
 }
 
 # These functions are provided for basic fall-back functionality for
-# projects that include parts of devstack (grenade).  stack.sh will
-# override these with more specific versions for devstack (with fancy
+# projects that include parts of DevStack (Grenade).  stack.sh will
+# override these with more specific versions for DevStack (with fancy
 # spinners, etc).  We never override an existing version
 if ! function_exists echo_summary; then
     function echo_summary {
diff --git a/functions-common b/functions-common
index 4739e42..3a2f5f7 100644
--- a/functions-common
+++ b/functions-common
@@ -51,17 +51,22 @@
 function trueorfalse {
     local xtrace=$(set +o | grep xtrace)
     set +o xtrace
-    local default=$1
-    local literal=$2
-    local testval=${!literal:-}
 
-    [[ -z "$testval" ]] && { echo "$default"; return; }
-    [[ "0 no No NO false False FALSE" =~ "$testval" ]] && { echo "False"; return; }
-    [[ "1 yes Yes YES true True TRUE" =~ "$testval" ]] && { echo "True"; return; }
-    echo "$default"
+    local default=$1
+    local testval=${!2:-}
+
+    case "$testval" in
+        "1" | [yY]es | "YES" | [tT]rue | "TRUE" ) echo "True" ;;
+        "0" | [nN]o | "NO" | [fF]alse | "FALSE" ) echo "False" ;;
+        * )                                       echo "$default" ;;
+    esac
+
     $xtrace
 }
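+# Example (illustrative): with FOO=yes set, ``trueorfalse False FOO``
+# echoes "True"; with FOO unset it echoes the default, "False".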
 
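+# Test whether the named variable is set: ``isset VAR_NAME`` (pass the
+# name, not the value). Note: uses [[ -v ]], which requires bash >= 4.2.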
+function isset {
+    [[ -v "$1" ]]
+}
 
 # Control Functions
 # =================
@@ -171,10 +176,7 @@
     local xtrace=$(set +o | grep xtrace)
     set +o xtrace
     local msg="[WARNING] ${BASH_SOURCE[2]}:$1 $2"
-    echo $msg 1>&2;
-    if [[ -n ${LOGDIR} ]]; then
-        echo $msg >> "${LOGDIR}/error.log"
-    fi
+    echo $msg
     $xtrace
     return $exitcode
 }
@@ -246,6 +248,7 @@
         # CentOS Linux release 6.0 (Final)
         # Fedora release 16 (Verne)
         # XenServer release 6.2.0-70446c (xenenterprise)
+        # Oracle Linux release 7
         os_CODENAME=""
         for r in "Red Hat" CentOS Fedora XenServer; do
             os_VENDOR=$r
@@ -259,6 +262,9 @@
             fi
             os_VENDOR=""
         done
+        if [ "$os_VENDOR" = "Red Hat" ] && [[ -r /etc/oracle-release ]]; then
+            os_VENDOR=OracleLinux
+        fi
         os_PACKAGE="rpm"
     elif [[ -r /etc/SuSE-release ]]; then
         for r in openSUSE "SUSE Linux"; do
@@ -310,7 +316,7 @@
         fi
     elif [[ "$os_VENDOR" =~ (Red Hat) || \
         "$os_VENDOR" =~ (CentOS) || \
-        "$os_VENDOR" =~ (OracleServer) ]]; then
+        "$os_VENDOR" =~ (OracleLinux) ]]; then
         # Drop the . release as we assume it's compatible
         DISTRO="rhel${os_RELEASE::1}"
     elif [[ "$os_VENDOR" =~ (XenServer) ]]; then
@@ -328,6 +334,17 @@
     [[ "$(uname -m)" == "$1" ]]
 }
 
+# Determine if current distribution is an Oracle distribution
+# is_oraclelinux
+function is_oraclelinux {
+    if [[ -z "$os_VENDOR" ]]; then
+        GetOSVersion
+    fi
+
+    [ "$os_VENDOR" = "OracleLinux" ]
+}
+
+
 # Determine if current distribution is a Fedora-based distribution
 # (Fedora, RHEL, CentOS, etc).
 # is_fedora
@@ -337,7 +354,7 @@
     fi
 
     [ "$os_VENDOR" = "Fedora" ] || [ "$os_VENDOR" = "Red Hat" ] || \
-        [ "$os_VENDOR" = "CentOS" ] || [ "$os_VENDOR" = "OracleServer" ]
+        [ "$os_VENDOR" = "CentOS" ] || [ "$os_VENDOR" = "OracleLinux" ]
 }
 
 
@@ -491,7 +508,7 @@
         fi
 
         count=$(($count + 1))
-        warn "timeout ${count} for git call: [git $@]"
+        warn $LINENO "timeout ${count} for git call: [git $@]"
         if [ $count -eq 3 ]; then
             die $LINENO "Maximum of 3 git retries reached"
         fi
@@ -542,11 +559,11 @@
     local host_ip_iface=$3
     local host_ip=$4
 
-    # Find the interface used for the default route
-    host_ip_iface=${host_ip_iface:-$(ip route | sed -n '/^default/{ s/.*dev \(\w\+\)\s\+.*/\1/; p; }' | head -1)}
     # Search for an IP unless an explicit is set by ``HOST_IP`` environment variable
     if [ -z "$host_ip" -o "$host_ip" == "dhcp" ]; then
         host_ip=""
+        # Find the interface used for the default route
+        host_ip_iface=${host_ip_iface:-$(ip route | awk '/default/ {print $5}' | head -1)}
         local host_ips=$(LC_ALL=C ip -f inet addr show ${host_ip_iface} | awk '/inet/ {split($2,parts,"/");  print parts[1]}')
         local ip
         for ip in $host_ips; do
@@ -588,6 +605,28 @@
     done
 }
 
+# install default policy
+# copy over a default policy.json and policy.d for projects
+function install_default_policy {
+    local project=$1
+    local project_uc=$(echo $1|tr a-z A-Z)
+    local conf_dir="${project_uc}_CONF_DIR"
+    # eval conf dir to get the variable
+    conf_dir="${!conf_dir}"
+    local project_dir="${project_uc}_DIR"
+    # eval project dir to get the variable
+    project_dir="${!project_dir}"
+    local sample_conf_dir="${project_dir}/etc/${project}"
+    local sample_policy_dir="${project_dir}/etc/${project}/policy.d"
+
+    # first copy any policy.json
+    cp -p $sample_conf_dir/policy.json $conf_dir
+    # then optionally copy over policy.d
+    if [[ -d $sample_policy_dir ]]; then
+        cp -r $sample_policy_dir $conf_dir/policy.d
+    fi
+}
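+# Example: ``install_default_policy cinder`` copies
+# $CINDER_DIR/etc/cinder/policy.json (and policy.d, if present) into
+# $CINDER_CONF_DIR; both variables must already be set.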
+
 # Add a policy to a policy.json file
 # Do nothing if the policy already exists
 # ``policy_add policy_file policy_name policy_permissions``
@@ -728,6 +767,27 @@
     echo $user_role_id
 }
 
+# Gets or adds group role to project
+# Usage: get_or_add_group_project_role <role> <group> <project>
+function get_or_add_group_project_role {
+    # Gets group role id
+    local group_role_id=$(openstack role list \
+        --group $2 \
+        --project $3 \
+        --column "ID" \
+        --column "Name" \
+        | grep " $1 " | get_field 1)
+    if [[ -z "$group_role_id" ]]; then
+        # Adds role to group
+        group_role_id=$(openstack role add \
+            $1 \
+            --group $2 \
+            --project $3 \
+            | grep " id " | get_field 2)
+    fi
+    echo $group_role_id
+}
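+# Example (hypothetical names): ``get_or_add_group_project_role Member
+# developers demo`` ensures the "developers" group has the Member role on
+# the "demo" project.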
+
 # Gets or creates service
 # Usage: get_or_create_service <name> <type> <description>
 function get_or_create_service {
@@ -774,13 +834,18 @@
 
 # _get_package_dir
 function _get_package_dir {
+    local base_dir=$1
     local pkg_dir
+
+    if [[ -z "$base_dir" ]]; then
+        base_dir=$FILES
+    fi
     if is_ubuntu; then
-        pkg_dir=$FILES/debs
+        pkg_dir=$base_dir/debs
     elif is_fedora; then
-        pkg_dir=$FILES/rpms
+        pkg_dir=$base_dir/rpms
     elif is_suse; then
-        pkg_dir=$FILES/rpms-suse
+        pkg_dir=$base_dir/rpms-suse
     else
         exit_distro_not_supported "list of packages"
     fi
@@ -806,84 +871,14 @@
         apt-get --option "Dpkg::Options::=--force-confold" --assume-yes "$@"
 }
 
-# get_packages() collects a list of package names of any type from the
-# prerequisite files in ``files/{debs|rpms}``.  The list is intended
-# to be passed to a package installer such as apt or yum.
-#
-# Only packages required for the services in 1st argument will be
-# included.  Two bits of metadata are recognized in the prerequisite files:
-#
-# - ``# NOPRIME`` defers installation to be performed later in `stack.sh`
-# - ``# dist:DISTRO`` or ``dist:DISTRO1,DISTRO2`` limits the selection
-#   of the package to the distros listed.  The distro names are case insensitive.
-function get_packages {
-    local xtrace=$(set +o | grep xtrace)
-    set +o xtrace
-    local services=$@
-    local package_dir=$(_get_package_dir)
-    local file_to_parse=""
-    local service=""
+function _parse_package_files {
+    local files_to_parse=$@
 
-    INSTALL_TESTONLY_PACKAGES=$(trueorfalse False INSTALL_TESTONLY_PACKAGES)
-
-    if [[ -z "$package_dir" ]]; then
-        echo "No package directory supplied"
-        return 1
-    fi
     if [[ -z "$DISTRO" ]]; then
         GetDistro
     fi
-    for service in ${services//,/ }; do
-        # Allow individual services to specify dependencies
-        if [[ -e ${package_dir}/${service} ]]; then
-            file_to_parse="${file_to_parse} $service"
-        fi
-        # NOTE(sdague) n-api needs glance for now because that's where
-        # glance client is
-        if [[ $service == n-api ]]; then
-            if [[ ! $file_to_parse =~ nova ]]; then
-                file_to_parse="${file_to_parse} nova"
-            fi
-            if [[ ! $file_to_parse =~ glance ]]; then
-                file_to_parse="${file_to_parse} glance"
-            fi
-        elif [[ $service == c-* ]]; then
-            if [[ ! $file_to_parse =~ cinder ]]; then
-                file_to_parse="${file_to_parse} cinder"
-            fi
-        elif [[ $service == ceilometer-* ]]; then
-            if [[ ! $file_to_parse =~ ceilometer ]]; then
-                file_to_parse="${file_to_parse} ceilometer"
-            fi
-        elif [[ $service == s-* ]]; then
-            if [[ ! $file_to_parse =~ swift ]]; then
-                file_to_parse="${file_to_parse} swift"
-            fi
-        elif [[ $service == n-* ]]; then
-            if [[ ! $file_to_parse =~ nova ]]; then
-                file_to_parse="${file_to_parse} nova"
-            fi
-        elif [[ $service == g-* ]]; then
-            if [[ ! $file_to_parse =~ glance ]]; then
-                file_to_parse="${file_to_parse} glance"
-            fi
-        elif [[ $service == key* ]]; then
-            if [[ ! $file_to_parse =~ keystone ]]; then
-                file_to_parse="${file_to_parse} keystone"
-            fi
-        elif [[ $service == q-* ]]; then
-            if [[ ! $file_to_parse =~ neutron ]]; then
-                file_to_parse="${file_to_parse} neutron"
-            fi
-        elif [[ $service == ir-* ]]; then
-            if [[ ! $file_to_parse =~ ironic ]]; then
-                file_to_parse="${file_to_parse} ironic"
-            fi
-        fi
-    done
 
-    for file in ${file_to_parse}; do
-        local fname=${package_dir}/${file}
+    for fname in ${files_to_parse}; do
         local OIFS line package distros distro
         [[ -e $fname ]] || continue
 
@@ -911,22 +906,106 @@
                 fi
             fi
 
-            # Look for # testonly in comment
-            if [[ $line =~ (.*)#.*testonly.* ]]; then
-                package=${BASH_REMATCH[1]}
-                # Are we installing test packages? (test for the default value)
-                if [[ $INSTALL_TESTONLY_PACKAGES = "False" ]]; then
-                    # If not installing test packages the skip this package
-                    inst_pkg=0
-                fi
-            fi
-
             if [[ $inst_pkg = 1 ]]; then
                 echo $package
             fi
         done
         IFS=$OIFS
     done
+}
+
+# get_packages() collects a list of package names of any type from the
+# prerequisite files in ``files/{debs|rpms}``.  The list is intended
+# to be passed to a package installer such as apt or yum.
+#
+# Only packages required for the services in 1st argument will be
+# included.  Two bits of metadata are recognized in the prerequisite files:
+#
+# - ``# NOPRIME`` defers installation to be performed later in `stack.sh`
+# - ``# dist:DISTRO`` or ``dist:DISTRO1,DISTRO2`` limits the selection
+#   of the package to the distros listed.  The distro names are case insensitive.
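+#
+# For example, prerequisite file lines such as:
+#
+#   mysql-server # NOPRIME
+#   dnsmasq-utils # dist:opensuse-12.3,opensuse-13.1
+#
+# defer mysql-server to stack.sh and restrict dnsmasq-utils to the listed
+# openSUSE releases.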
+function get_packages {
+    local xtrace=$(set +o | grep xtrace)
+    set +o xtrace
+    local services=$@
+    local package_dir=$(_get_package_dir)
+    local file_to_parse=""
+    local service=""
+
+    if [[ -z "$package_dir" ]]; then
+        echo "No package directory supplied"
+        return 1
+    fi
+    for service in ${services//,/ }; do
+        # Allow individual services to specify dependencies
+        if [[ -e ${package_dir}/${service} ]]; then
+            file_to_parse="${file_to_parse} ${package_dir}/${service}"
+        fi
+        # NOTE(sdague) n-api needs glance for now because that's where
+        # glance client is
+        if [[ $service == n-api ]]; then
+            if [[ ! $file_to_parse =~ $package_dir/nova ]]; then
+                file_to_parse="${file_to_parse} ${package_dir}/nova"
+            fi
+            if [[ ! $file_to_parse =~ $package_dir/glance ]]; then
+                file_to_parse="${file_to_parse} ${package_dir}/glance"
+            fi
+        elif [[ $service == c-* ]]; then
+            if [[ ! $file_to_parse =~ $package_dir/cinder ]]; then
+                file_to_parse="${file_to_parse} ${package_dir}/cinder"
+            fi
+        elif [[ $service == ceilometer-* ]]; then
+            if [[ ! $file_to_parse =~ $package_dir/ceilometer ]]; then
+                file_to_parse="${file_to_parse} ${package_dir}/ceilometer"
+            fi
+        elif [[ $service == s-* ]]; then
+            if [[ ! $file_to_parse =~ $package_dir/swift ]]; then
+                file_to_parse="${file_to_parse} ${package_dir}/swift"
+            fi
+        elif [[ $service == n-* ]]; then
+            if [[ ! $file_to_parse =~ $package_dir/nova ]]; then
+                file_to_parse="${file_to_parse} ${package_dir}/nova"
+            fi
+        elif [[ $service == g-* ]]; then
+            if [[ ! $file_to_parse =~ $package_dir/glance ]]; then
+                file_to_parse="${file_to_parse} ${package_dir}/glance"
+            fi
+        elif [[ $service == key* ]]; then
+            if [[ ! $file_to_parse =~ $package_dir/keystone ]]; then
+                file_to_parse="${file_to_parse} ${package_dir}/keystone"
+            fi
+        elif [[ $service == q-* ]]; then
+            if [[ ! $file_to_parse =~ $package_dir/neutron ]]; then
+                file_to_parse="${file_to_parse} ${package_dir}/neutron"
+            fi
+        elif [[ $service == ir-* ]]; then
+            if [[ ! $file_to_parse =~ $package_dir/ironic ]]; then
+                file_to_parse="${file_to_parse} ${package_dir}/ironic"
+            fi
+        fi
+    done
+    echo "$(_parse_package_files $file_to_parse)"
+    $xtrace
+}
+
+# get_plugin_packages() collects a list of package names of any type from a
+# plugin's prerequisite files in ``$PLUGIN/devstack/files/{debs|rpms}``.  The
+# list is intended to be passed to a package installer such as apt or yum.
+#
+# Only packages required for enabled and collected plugins will be included.
+#
+# The same metadata used in the main DevStack prerequisite files may be used
+# in these prerequisite files, see get_packages() for more info.
+function get_plugin_packages {
+    local xtrace=$(set +o | grep xtrace)
+    set +o xtrace
+    local files_to_parse=""
+    local package_dir=""
+    for plugin in ${DEVSTACK_PLUGINS//,/ }; do
+        local package_dir="$(_get_package_dir ${GITDIR[$plugin]}/devstack/files)"
+        files_to_parse+=" $package_dir/$plugin"
+    done
+    echo "$(_parse_package_files $files_to_parse)"
     $xtrace
 }
 
@@ -1019,8 +1098,8 @@
     # The manual check for missing packages is because yum -y assumes
     # missing packages are OK.  See
     # https://bugzilla.redhat.com/show_bug.cgi?id=965567
-    $sudo http_proxy=$http_proxy https_proxy=$https_proxy \
-        no_proxy=$no_proxy \
+    $sudo http_proxy="${http_proxy:-}" https_proxy="${https_proxy:-}" \
+        no_proxy="${no_proxy:-}" \
         ${YUM:-yum} install -y "$@" 2>&1 | \
         awk '
             BEGIN { fail=0 }
@@ -1042,7 +1121,8 @@
     [[ "$OFFLINE" = "True" ]] && return
     local sudo="sudo"
     [[ "$(id -u)" = "0" ]] && sudo="env"
-    $sudo http_proxy=$http_proxy https_proxy=$https_proxy \
+    $sudo http_proxy="${http_proxy:-}" https_proxy="${https_proxy:-}" \
+        no_proxy="${no_proxy:-}" \
         zypper --non-interactive install --auto-agree-with-licenses "$@"
 }
 
@@ -1059,6 +1139,10 @@
 # the command.
 # _run_process service "command-line" [group]
 function _run_process {
+    # Disable tracing through the exec redirects; it's just confusing in the logs.
+    xtrace=$(set +o | grep xtrace)
+    set +o xtrace
+
     local service=$1
     local command="$2"
     local group=$3
@@ -1082,6 +1166,9 @@
         export PYTHONUNBUFFERED=1
     fi
 
+    # reenable xtrace before we do *real* work
+    $xtrace
+
     # Run under ``setsid`` to force the process to become a session and group leader.
     # The pid saved can be used with pkill -g to get the entire process group.
     if [[ -n "$group" ]]; then
@@ -1154,9 +1241,6 @@
     SERVICE_DIR=${SERVICE_DIR:-${DEST}/status}
     USE_SCREEN=$(trueorfalse True USE_SCREEN)
 
-    # Append the process to the screen rc file
-    screen_rc "$name" "$command"
-
     screen -S $SCREEN_NAME -X screen -t $name
 
     local real_logfile="${LOGDIR}/${name}.log.${CURRENT_LOG_TIME}"
@@ -1175,8 +1259,13 @@
 
     # sleep to allow bash to be ready to be send the command - we are
     # creating a new window in screen and then sends characters, so if
-    # bash isn't running by the time we send the command, nothing happens
-    sleep 3
+    # bash isn't running by the time we send the command, nothing
+    # happens.  This sleep was added originally to handle gate runs
+    # where we needed this to be at least 3 seconds to pass
+    # consistently on slow clouds. Now this is configurable so that we
+    # can determine a reasonable value for the local case which should
+    # be much smaller.
+    sleep ${SCREEN_SLEEP:-3}
 
     NL=`echo -ne '\015'`
     # This fun command does the following:
@@ -1191,6 +1280,10 @@
     if [[ -n "$group" ]]; then
         command="sg $group '$command'"
     fi
+
+    # Append the process to the screen rc file
+    screen_rc "$name" "$command"
+
     screen -S $SCREEN_NAME -p $name -X stuff "$command & echo \$! >$SERVICE_DIR/$SCREEN_NAME/${name}.pid; fg || echo \"$name failed to start\" | tee \"$SERVICE_DIR/$SCREEN_NAME/${name}.failure\"$NL"
 }
 
@@ -1414,7 +1507,7 @@
         return
     fi
 
-    echo "Fetching devstack plugins"
+    echo "Fetching DevStack plugins"
     for plugin in ${plugins//,/ }; do
         git_clone_by_name $plugin
     done
@@ -1442,6 +1535,33 @@
     done
 }
 
+# plugin_override_defaults
+#
+# Run an extremely early setting phase for plugins that allows default
+# overriding of services.
+function plugin_override_defaults {
+    local plugins="${DEVSTACK_PLUGINS}"
+    local plugin
+
+    # short circuit if nothing to do
+    if [[ -z $plugins ]]; then
+        return
+    fi
+
+    echo "Overriding Configuration Defaults"
+    for plugin in ${plugins//,/ }; do
+        local dir=${GITDIR[$plugin]}
+        # source any overrides
+        if [[ -f $dir/devstack/override-defaults ]]; then
+            # be really verbose that an override is happening, as it
+            # may not be obvious if things fail later.
+            echo "$plugin has overriden the following defaults"
+            cat $dir/devstack/override-defaults
+            source $dir/devstack/override-defaults
+        fi
+    done
+}
+
 # run_plugins
 #
 # Run the devstack/plugin.sh in all the plugin directories. These are
@@ -1471,6 +1591,8 @@
     # the source phase corresponds to settings loading in plugins
     if [[ "$mode" == "source" ]]; then
         load_plugin_settings
+    elif [[ "$mode" == "override_defaults" ]]; then
+        plugin_override_defaults
     else
         run_plugins $mode $phase
     fi
@@ -1505,14 +1627,23 @@
 # Uses global ``ENABLED_SERVICES``
 # disable_negated_services
 function disable_negated_services {
-    local tmpsvcs="${ENABLED_SERVICES}"
+    local to_remove=""
+    local remaining=""
     local service
-    for service in ${tmpsvcs//,/ }; do
+
+    # build up list of services that should be removed; i.e. they
+    # begin with "-"
+    for service in ${ENABLED_SERVICES//,/ }; do
         if [[ ${service} == -* ]]; then
-            tmpsvcs=$(echo ${tmpsvcs}|sed -r "s/(,)?(-)?${service#-}(,)?/,/g")
+            to_remove+=",${service#-}"
+        else
+            remaining+=",${service}"
         fi
     done
-    ENABLED_SERVICES=$(_cleanup_service_list "$tmpsvcs")
+
+    # go through the service list.  if this service appears in the "to
+    # be removed" list, drop it
+    ENABLED_SERVICES=$(remove_disabled_services "$remaining" "$to_remove")
 }
 
 # disable_service() removes the services passed as argument to the
@@ -1616,6 +1747,30 @@
     return $enabled
 }
 
+# remove specified list from the input string
+# remove_disabled_services service-list remove-list
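+# e.g. ``remove_disabled_services "a,b,c" "b,d"`` echoes "a,c"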
+function remove_disabled_services {
+    local service_list=$1
+    local remove_list=$2
+    local service
+    local enabled=""
+
+    for service in ${service_list//,/ }; do
+        local remove
+        local add=1
+        for remove in ${remove_list//,/ }; do
+            if [[ ${remove} == ${service} ]]; then
+                add=0
+                break
+            fi
+        done
+        if [[ $add == 1 ]]; then
+            enabled="${enabled},$service"
+        fi
+    done
+    _cleanup_service_list "$enabled"
+}
+
 # Toggle enable/disable_service for services that must run exclusive of each other
 #  $1 The name of a variable containing a space-separated list of services
 #  $2 The name of a variable in which to store the enabled service's name
@@ -1687,16 +1842,7 @@
     local user=$1
     local group=$2
 
-    if [[ -z "$os_VENDOR" ]]; then
-        GetOSVersion
-    fi
-
-    # SLE11 and openSUSE 12.2 don't have the usual usermod
-    if ! is_suse || [[ "$os_VENDOR" = "openSUSE" && "$os_RELEASE" != "12.2" ]]; then
-        sudo usermod -a -G "$group" "$user"
-    else
-        sudo usermod -A "$group" "$user"
-    fi
+    sudo usermod -a -G "$group" "$user"
 }
 
 # Convert CIDR notation to a IPv4 netmask
@@ -1753,6 +1899,12 @@
     echo $subnet
 }
 
+# Return the current python as "python<major>.<minor>"
+function python_version {
+    local python_version=$(python -c 'import sys; print("%s.%s" % sys.version_info[0:2])')
+    echo "python${python_version}"
+}
+
 # Service wrapper to restart services
 # restart_service service-name
 function restart_service {
diff --git a/gate/updown.sh b/gate/updown.sh
index d2d7351..f46385c 100755
--- a/gate/updown.sh
+++ b/gate/updown.sh
@@ -4,7 +4,7 @@
 #
 # Note: this is expected to start running as jenkins
 
-# Step 1: give back sudoers permissions to devstack
+# Step 1: give back sudoers permissions to DevStack
 TEMPFILE=`mktemp`
 echo "stack ALL=(root) NOPASSWD:ALL" >$TEMPFILE
 chmod 0440 $TEMPFILE
diff --git a/inc/ini-config b/inc/ini-config
index 0d6d169..26401f3 100644
--- a/inc/ini-config
+++ b/inc/ini-config
@@ -205,16 +205,6 @@
     $xtrace
 }
 
-function isset {
-    nounset=$(set +o | grep nounset)
-    set +o nounset
-    [[ -n "${!1+x}" ]]
-    result=$?
-    $nounset
-    return $result
-}
-
-
 # Restore xtrace
 $INC_CONF_TRACE
 
diff --git a/inc/meta-config b/inc/meta-config
index c8789bf..e5f902d 100644
--- a/inc/meta-config
+++ b/inc/meta-config
@@ -86,6 +86,14 @@
     local matchgroup=$2
     local configfile=$3
 
+    # Note: configfile might be a variable (the iniset calls created in
+    # the mega-awk below are "eval"ed too), so we just leave it alone
+    # here.
+    local real_configfile=$(eval echo $configfile)
+    if [ ! -f $real_configfile ]; then
+        touch $real_configfile
+    fi
+
     get_meta_section $file $matchgroup $configfile | \
     $CONFIG_AWK_CMD -v configfile=$configfile '
         BEGIN {
diff --git a/inc/python b/inc/python
index d72c3c9..3d329b5 100644
--- a/inc/python
+++ b/inc/python
@@ -52,19 +52,37 @@
     fi
 }
 
+# Wrapper for ``pip install`` that only installs versions of libraries
+# from the global-requirements specification.
+#
+# Uses globals ``REQUIREMENTS_DIR``
+#
+# pip_install_gr packagename
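+#
+# e.g. ``pip_install_gr oslo.config`` installs whatever version of
+# oslo.config the global-requirements specification allows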
+function pip_install_gr {
+    local name=$1
+    local clean_name=$(get_from_global_requirements $name)
+    pip_install $clean_name
+}
+
 # Wrapper for ``pip install`` to set cache and proxy environment variables
-# Uses globals ``INSTALL_TESTONLY_PACKAGES``, ``OFFLINE``, ``PIP_VIRTUAL_ENV``,
-# ``TRACK_DEPENDS``, ``*_proxy``
+# Uses globals ``OFFLINE``, ``PIP_VIRTUAL_ENV``,
+# ``PIP_UPGRADE``, ``TRACK_DEPENDS``, ``*_proxy``
 # pip_install package [package ...]
 function pip_install {
     local xtrace=$(set +o | grep xtrace)
     set +o xtrace
+    local upgrade=""
     local offline=${OFFLINE:-False}
     if [[ "$offline" == "True" || -z "$@" ]]; then
         $xtrace
         return
     fi
 
+    PIP_UPGRADE=$(trueorfalse False PIP_UPGRADE)
+    if [[ "$PIP_UPGRADE" = "True" ]] ; then
+        upgrade="--upgrade"
+    fi
+
     if [[ -z "$os_PACKAGE" ]]; then
         GetOSVersion
     fi
@@ -94,25 +112,24 @@
 
     $xtrace
     $sudo_pip \
-        http_proxy=${http_proxy:-} \
-        https_proxy=${https_proxy:-} \
-        no_proxy=${no_proxy:-} \
+        http_proxy="${http_proxy:-}" \
+        https_proxy="${https_proxy:-}" \
+        no_proxy="${no_proxy:-}" \
         PIP_FIND_LINKS=$PIP_FIND_LINKS \
-        $cmd_pip install \
+        $cmd_pip install $upgrade \
         $@
 
-    INSTALL_TESTONLY_PACKAGES=$(trueorfalse False INSTALL_TESTONLY_PACKAGES)
-    if [[ "$INSTALL_TESTONLY_PACKAGES" == "True" ]]; then
-        local test_req="$@/test-requirements.txt"
-        if [[ -e "$test_req" ]]; then
-            $sudo_pip \
-                http_proxy=${http_proxy:-} \
-                https_proxy=${https_proxy:-} \
-                no_proxy=${no_proxy:-} \
-                PIP_FIND_LINKS=$PIP_FIND_LINKS \
-                $cmd_pip install \
-                -r $test_req
-        fi
+    # Also install test requirements
+    local test_req="$@/test-requirements.txt"
+    if [[ -e "$test_req" ]]; then
+        echo "Installing test-requirements for $test_req"
+        $sudo_pip \
+            http_proxy=${http_proxy:-} \
+            https_proxy=${https_proxy:-} \
+            no_proxy=${no_proxy:-} \
+            PIP_FIND_LINKS=$PIP_FIND_LINKS \
+            $cmd_pip install $upgrade \
+            -r $test_req
     fi
 }
 
@@ -120,7 +137,7 @@
 # get_from_global_requirements <package>
 function get_from_global_requirements {
     local package=$1
-    local required_pkg=$(grep -h ${package} $REQUIREMENTS_DIR/global-requirements.txt | cut -d\# -f1)
+    local required_pkg=$(grep -i -h ^${package} $REQUIREMENTS_DIR/global-requirements.txt | cut -d\# -f1)
     if [[ $required_pkg == ""  ]]; then
         die $LINENO "Can't find package $package in requirements"
     fi
diff --git a/inc/rootwrap b/inc/rootwrap
new file mode 100644
index 0000000..f91e557
--- /dev/null
+++ b/inc/rootwrap
@@ -0,0 +1,87 @@
+#!/bin/bash
+#
+# **inc/rootwrap** - Rootwrap functions
+#
+# Handle rootwrap's foibles
+
+# Uses: ``STACK_USER``
+# Defines: ``SUDO_SECURE_PATH_FILE``
+
+# Save trace setting
+INC_ROOT_TRACE=$(set +o | grep xtrace)
+set +o xtrace
+
+# Accumulate all additions to sudo's ``secure_path`` in one file read last
+# so they all work in a venv configuration
+SUDO_SECURE_PATH_FILE=${SUDO_SECURE_PATH_FILE:-/etc/sudoers.d/zz-secure-path}
+
+# Add a directory to the common sudo ``secure_path``
+# add_sudo_secure_path dir
+function add_sudo_secure_path {
+    local dir=$1
+    local line
+
+    # This is pretty simplistic for now - assume only the first line is used
+    if [[ -r $SUDO_SECURE_PATH_FILE ]]; then
+        line=$(head -1 $SUDO_SECURE_PATH_FILE)
+    else
+        line="Defaults:$STACK_USER secure_path=/usr/local/sbin:/usr/local/bin:/usr/sbin:/sbin:/usr/bin:/bin"
+    fi
+
+    # Only add ``dir`` if it is not already present
+    if [[ ! $line =~ $dir ]]; then
+        echo "${line}:$dir" | sudo tee $SUDO_SECURE_PATH_FILE
+        sudo chmod 400 $SUDO_SECURE_PATH_FILE
+        sudo chown root:root $SUDO_SECURE_PATH_FILE
+    fi
+}
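+
+# Example (hypothetical venv path):
+#   add_sudo_secure_path /opt/stack/cinder/.venv/bin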
+
+# Configure rootwrap
+# Make a load of assumptions otherwise we'll have 6 arguments
+# configure_rootwrap project
+function configure_rootwrap {
+    local project=$1
+    local project_uc=$(echo $1|tr a-z A-Z)
+    local bin_dir="${project_uc}_BIN_DIR"
+    bin_dir="${!bin_dir}"
+    local project_dir="${project_uc}_DIR"
+    project_dir="${!project_dir}"
+
+    local rootwrap_conf_src_dir="${project_dir}/etc/${project}"
+    local rootwrap_bin="${bin_dir}/${project}-rootwrap"
+
+    # Start fresh with rootwrap filters
+    sudo rm -rf /etc/${project}/rootwrap.d
+    sudo install -d -o root -g root -m 755 /etc/${project}/rootwrap.d
+    sudo install -o root -g root -m 644 $rootwrap_conf_src_dir/rootwrap.d/*.filters /etc/${project}/rootwrap.d
+
+    # Set up rootwrap.conf, pointing to /etc/*/rootwrap.d
+    sudo install -o root -g root -m 644 $rootwrap_conf_src_dir/rootwrap.conf /etc/${project}/rootwrap.conf
+    sudo sed -e "s:^filters_path=.*$:filters_path=/etc/${project}/rootwrap.d:" -i /etc/${project}/rootwrap.conf
+
+    # Set up the rootwrap sudoers
+    local tempfile=$(mktemp)
+    # Specify rootwrap.conf as first parameter to rootwrap
+    rootwrap_sudo_cmd="${rootwrap_bin} /etc/${project}/rootwrap.conf *"
+    echo "$STACK_USER ALL=(root) NOPASSWD: $rootwrap_sudo_cmd" >$tempfile
+    if [ -f ${bin_dir}/${project}-rootwrap-daemon ]; then
+        # rootwrap daemon does not need any parameters
+        rootwrap_sudo_cmd="${rootwrap_bin}-daemon /etc/${project}/rootwrap.conf"
+        echo "$STACK_USER ALL=(root) NOPASSWD: $rootwrap_sudo_cmd" >>$tempfile
+    fi
+    chmod 0440 $tempfile
+    sudo chown root:root $tempfile
+    sudo mv $tempfile /etc/sudoers.d/${project}-rootwrap
+
+    # Add bin dir to sudo's secure_path because rootwrap is being called
+    # without a path because BROKEN.
+    add_sudo_secure_path $(dirname $rootwrap_bin)
+}
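+
+# Example: ``configure_rootwrap ceilometer`` (as lib/ceilometer does below)
+# expects CEILOMETER_BIN_DIR and CEILOMETER_DIR to be set, installs the
+# filters to /etc/ceilometer/rootwrap.d and writes the sudoers entry to
+# /etc/sudoers.d/ceilometer-rootwrap.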
+
+
+# Restore xtrace
+$INC_ROOT_TRACE
+
+# Local variables:
+# mode: shell-script
+# End:
diff --git a/lib/ceilometer b/lib/ceilometer
index 9db0640..1f72187 100644
--- a/lib/ceilometer
+++ b/lib/ceilometer
@@ -4,7 +4,7 @@
 # Install and start **Ceilometer** service
 
 # To enable a minimal set of Ceilometer services, add the following to the
-# localrc section of local.conf:
+# ``localrc`` section of ``local.conf``:
 #
 #   enable_service ceilometer-acompute ceilometer-acentral ceilometer-anotification ceilometer-collector ceilometer-api
 #
@@ -13,18 +13,35 @@
 #
 #   enable_service ceilometer-alarm-notifier ceilometer-alarm-evaluator
 #
+# To enable Ceilometer to collect the IPMI-based meters, further add to
+# the ``localrc`` section of ``local.conf``:
+#
+#   enable_service ceilometer-aipmi
+#
+# NOTE: Currently, there are two ways to get the IPMI-based meters in
+# OpenStack. One way is to configure the Ironic conductor to report those
+# meters for the nodes it manages and to have the Ceilometer notification
+# agent collect them. Ironic does NOT enable that reporting by default,
+# so users need to set conductor.send_sensor_data to true in the
+# ironic.conf configuration file for the Ironic conductor service, and
+# also enable the ceilometer-anotification service.
+#
+# The other way is to use only the Ceilometer IPMI agent to get the
+# IPMI-based meters. To avoid duplicate meters, make sure to set
+# conductor.send_sensor_data to false in the ironic.conf configuration
+# file if the node running the Ceilometer IPMI agent is also managed by
+# Ironic.
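+#
+# For example (sketch; the option lives in the [conductor] section of
+# ironic.conf):
+#
+#   [conductor]
+#   send_sensor_data = true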
+#
 # Several variables set in the localrc section adjust common behaviors
 # of Ceilometer (see within for additional settings):
 #
 #   CEILOMETER_USE_MOD_WSGI:       When True, run the api under mod_wsgi.
-#   CEILOMETER_PIPELINE_INTERVAL:  The number of seconds between pipeline processing
-#                                  runs. Default 600.
-#   CEILOMETER_BACKEND:            The database backend (e.g. 'mysql', 'mongodb', 'es')
-#   CEILOMETER_COORDINATION_URL:   The URL for a group membership service provided
-#                                  by tooz.
+#   CEILOMETER_PIPELINE_INTERVAL:  Seconds between pipeline processing runs. Default 600.
+#   CEILOMETER_BACKEND:            Database backend (e.g. 'mysql', 'mongodb', 'es')
+#   CEILOMETER_COORDINATION_URL:   URL for group membership service provided by tooz.
 #   CEILOMETER_EVENTS:             Enable event collection
 
-
 # Dependencies:
 #
 # - functions
@@ -94,7 +111,7 @@
     return 1
 }
 
-# create_ceilometer_accounts() - Set up common required ceilometer accounts
+# create_ceilometer_accounts() - Set up common required Ceilometer accounts
 #
 # Project              User         Roles
 # ------------------------------------------------------------------
@@ -117,14 +134,14 @@
                 "$CEILOMETER_SERVICE_PROTOCOL://$CEILOMETER_SERVICE_HOST:$CEILOMETER_SERVICE_PORT/"
         fi
         if is_service_enabled swift; then
-            # Ceilometer needs ResellerAdmin role to access swift account stats.
+            # Ceilometer needs ResellerAdmin role to access Swift account stats.
             get_or_add_user_project_role "ResellerAdmin" "ceilometer" $SERVICE_TENANT_NAME
         fi
     fi
 }
 
 
-# _cleanup_keystone_apache_wsgi() - Remove wsgi files, disable and remove apache vhost file
+# _cleanup_keystone_apache_wsgi() - Remove WSGI files, disable and remove Apache vhost file
 function _cleanup_ceilometer_apache_wsgi {
     sudo rm -f $CEILOMETER_WSGI_DIR/*
     sudo rm -f $(apache_site_config_for ceilometer)
@@ -149,7 +166,7 @@
     local ceilometer_apache_conf=$(apache_site_config_for ceilometer)
     local apache_version=$(get_apache_version)
 
-    # copy proxy vhost and wsgi file
+    # Copy proxy vhost and wsgi file
     sudo cp $CEILOMETER_DIR/ceilometer/api/app.wsgi $CEILOMETER_WSGI_DIR/app
 
     sudo cp $FILES/apache-ceilometer.template $ceilometer_apache_conf
@@ -163,13 +180,9 @@
 
 # configure_ceilometer() - Set config files, create data dirs, etc
 function configure_ceilometer {
-    [ ! -d $CEILOMETER_CONF_DIR ] && sudo mkdir -m 755 -p $CEILOMETER_CONF_DIR
-    sudo chown $STACK_USER $CEILOMETER_CONF_DIR
+    sudo install -d -o $STACK_USER -m 755 $CEILOMETER_CONF_DIR $CEILOMETER_API_LOG_DIR
 
-    [ ! -d $CEILOMETER_API_LOG_DIR ] &&  sudo mkdir -m 755 -p $CEILOMETER_API_LOG_DIR
-    sudo chown $STACK_USER $CEILOMETER_API_LOG_DIR
-
-    iniset_rpc_backend ceilometer $CEILOMETER_CONF DEFAULT
+    iniset_rpc_backend ceilometer $CEILOMETER_CONF
 
     iniset $CEILOMETER_CONF DEFAULT notification_topics "$CEILOMETER_NOTIFICATION_TOPICS"
     iniset $CEILOMETER_CONF DEFAULT verbose True
@@ -182,7 +195,7 @@
 
     # Install the policy file for the API server
     cp $CEILOMETER_DIR/etc/ceilometer/policy.json $CEILOMETER_CONF_DIR
-    iniset $CEILOMETER_CONF DEFAULT policy_file $CEILOMETER_CONF_DIR/policy.json
+    iniset $CEILOMETER_CONF oslo_policy policy_file $CEILOMETER_CONF_DIR/policy.json
 
     cp $CEILOMETER_DIR/etc/ceilometer/pipeline.yaml $CEILOMETER_CONF_DIR
     cp $CEILOMETER_DIR/etc/ceilometer/event_pipeline.yaml $CEILOMETER_CONF_DIR
@@ -193,9 +206,9 @@
         sed -i "s/interval:.*/interval: ${CEILOMETER_PIPELINE_INTERVAL}/" $CEILOMETER_CONF_DIR/pipeline.yaml
     fi
 
-    # the compute and central agents need these credentials in order to
-    # call out to other services' public APIs
-    # the alarm evaluator needs these options to call ceilometer APIs
+    # The compute and central agents need these credentials in order to
+    # call out to other services' public APIs.
+    # The alarm evaluator needs these options to call ceilometer APIs
     iniset $CEILOMETER_CONF service_credentials os_username ceilometer
     iniset $CEILOMETER_CONF service_credentials os_password $SERVICE_PASSWORD
     iniset $CEILOMETER_CONF service_credentials os_tenant_name $SERVICE_TENANT_NAME
@@ -238,10 +251,15 @@
         iniset $CEILOMETER_CONF api pecan_debug "False"
         _config_ceilometer_apache_wsgi
     fi
+
+    if is_service_enabled ceilometer-aipmi; then
+        # Configure rootwrap for the ipmi agent
+        configure_rootwrap ceilometer
+    fi
 }
 
 function configure_mongodb {
-    # server package is the same on all
+    # Server package is the same on all
     local packages=mongodb-server
 
     if is_fedora; then
@@ -254,21 +272,20 @@
     install_package ${packages}
 
     if is_fedora; then
-        # ensure smallfiles selected to minimize freespace requirements
+        # Ensure smallfiles is selected to minimize freespace requirements
         sudo sed -i '/--smallfiles/!s/OPTIONS=\"/OPTIONS=\"--smallfiles /' /etc/sysconfig/mongod
 
         restart_service mongod
     fi
 
-    # give mongodb time to start-up
+    # Give mongodb time to start-up
     sleep 5
 }
 
 # init_ceilometer() - Initialize etc.
 function init_ceilometer {
     # Create cache dir
-    sudo mkdir -p $CEILOMETER_AUTH_CACHE_DIR
-    sudo chown $STACK_USER $CEILOMETER_AUTH_CACHE_DIR
+    sudo install -d -o $STACK_USER $CEILOMETER_AUTH_CACHE_DIR
     rm -f $CEILOMETER_AUTH_CACHE_DIR/*
 
     if is_service_enabled mysql postgresql; then
@@ -322,6 +339,11 @@
     if use_library_from_git "ceilometermiddleware"; then
         git_clone_by_name "ceilometermiddleware"
         setup_dev_lib "ceilometermiddleware"
+    else
+        # BUG: this should be a pip_install_gr except it was never
+        # included in global-requirements. Needs to be fixed by
+        # https://bugs.launchpad.net/ceilometer/+bug/1441655
+        pip_install ceilometermiddleware
     fi
 }
 
@@ -330,6 +352,7 @@
     run_process ceilometer-acentral "ceilometer-agent-central --config-file $CEILOMETER_CONF"
     run_process ceilometer-anotification "ceilometer-agent-notification --config-file $CEILOMETER_CONF"
     run_process ceilometer-collector "ceilometer-collector --config-file $CEILOMETER_CONF"
+    run_process ceilometer-aipmi "ceilometer-agent-ipmi --config-file $CEILOMETER_CONF"
 
     if [[ "$CEILOMETER_USE_MOD_WSGI" == "False" ]]; then
         run_process ceilometer-api "ceilometer-api -d -v --log-dir=$CEILOMETER_API_LOG_DIR --config-file $CEILOMETER_CONF"
@@ -350,7 +373,7 @@
         run_process ceilometer-acompute "ceilometer-agent-compute --config-file $CEILOMETER_CONF"
     fi
 
-    # only die on API if it was actually intended to be turned on
+    # Only die on API if it was actually intended to be turned on
     if is_service_enabled ceilometer-api; then
         echo "Waiting for ceilometer-api to start..."
         if ! wait_for_service $SERVICE_TIMEOUT $CEILOMETER_SERVICE_PROTOCOL://$CEILOMETER_SERVICE_HOST:$CEILOMETER_SERVICE_PORT/v2/; then
@@ -369,7 +392,7 @@
         restart_apache_server
     fi
     # Kill the ceilometer screen windows
-    for serv in ceilometer-acompute ceilometer-acentral ceilometer-anotification ceilometer-collector ceilometer-api ceilometer-alarm-notifier ceilometer-alarm-evaluator; do
+    for serv in ceilometer-acompute ceilometer-acentral ceilometer-aipmi ceilometer-anotification ceilometer-collector ceilometer-api ceilometer-alarm-notifier ceilometer-alarm-evaluator; do
         stop_process $serv
     done
 }
diff --git a/lib/ceph b/lib/ceph
index 76747cc..4d6ca4a 100644
--- a/lib/ceph
+++ b/lib/ceph
@@ -110,7 +110,7 @@
 
 # check_os_support_ceph() - Check if the operating system provides a decent version of Ceph
 function check_os_support_ceph {
-    if [[ ! ${DISTRO} =~ (trusty|f20|f21) ]]; then
+    if [[ ! ${DISTRO} =~ (trusty|f20|f21|f22) ]]; then
         echo "WARNING: your distro $DISTRO does not provide (at least) the Firefly release. Please use Ubuntu Trusty or Fedora 20 (and higher)"
         if [[ "$FORCE_CEPH_INSTALL" != "yes" ]]; then
             die $LINENO "If you wish to install Ceph on this distribution anyway run with FORCE_CEPH_INSTALL=yes"
@@ -279,7 +279,7 @@
     # configure Nova service options, ceph pool, ceph user and ceph key
     sudo ceph -c ${CEPH_CONF_FILE} osd pool set ${NOVA_CEPH_POOL} size ${CEPH_REPLICAS}
     if [[ $CEPH_REPLICAS -ne 1 ]]; then
-        sudo -c ${CEPH_CONF_FILE} ceph osd pool set ${NOVA_CEPH_POOL} crush_ruleset ${RULE_ID}
+        sudo ceph -c ${CEPH_CONF_FILE} osd pool set ${NOVA_CEPH_POOL} crush_ruleset ${RULE_ID}
     fi
 }
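
The one-line Ceph fix above deserves a note: -c names the config file for the ceph CLI, but the broken form handed it to sudo, where -c means a BSD login class. The option has to follow the command that owns it:

    sudo -c ${CEPH_CONF_FILE} ceph osd pool set ...   # wrong: sudo consumes -c
    sudo ceph -c ${CEPH_CONF_FILE} osd pool set ...   # right: ceph reads -c
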
 
diff --git a/lib/cinder b/lib/cinder
index 880af1f..b8cf809 100644
--- a/lib/cinder
+++ b/lib/cinder
@@ -39,8 +39,17 @@
 
 # set up default directories
 GITDIR["python-cinderclient"]=$DEST/python-cinderclient
-
+GITDIR["os-brick"]=$DEST/os-brick
 CINDER_DIR=$DEST/cinder
+
+# Cinder virtual environment
+if [[ ${USE_VENV} = True ]]; then
+    PROJECT_VENV["cinder"]=${CINDER_DIR}.venv
+    CINDER_BIN_DIR=${PROJECT_VENV["cinder"]}/bin
+else
+    CINDER_BIN_DIR=$(get_python_exec_prefix)
+fi
+
 CINDER_STATE_PATH=${CINDER_STATE_PATH:=$DATA_DIR/cinder}
 CINDER_AUTH_CACHE_DIR=${CINDER_AUTH_CACHE_DIR:-/var/cache/cinder}
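
The PROJECT_VENV/BIN_DIR pattern introduced here (and repeated for Glance and Keystone below) replaces the old bin-directory probing: with USE_VENV=True, console scripts launch from the project's own virtualenv instead of the global Python prefix. How the two modes resolve, paths illustrative:

    # USE_VENV=True  -> CINDER_BIN_DIR=/opt/stack/cinder.venv/bin
    # USE_VENV=False -> CINDER_BIN_DIR=$(get_python_exec_prefix), e.g. /usr/local/bin
    run_process c-api "$CINDER_BIN_DIR/cinder-api --config-file $CINDER_CONF"
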
 
@@ -57,13 +66,6 @@
 CINDER_SERVICE_PORT_INT=${CINDER_SERVICE_PORT_INT:-18776}
 CINDER_SERVICE_PROTOCOL=${CINDER_SERVICE_PROTOCOL:-$SERVICE_PROTOCOL}
 
-# Support entry points installation of console scripts
-if [[ -d $CINDER_DIR/bin ]]; then
-    CINDER_BIN_DIR=$CINDER_DIR/bin
-else
-    CINDER_BIN_DIR=$(get_python_exec_prefix)
-fi
-
 
 # Default backends
 # The backend format is type:name where type is one of the supported backend
@@ -76,9 +78,20 @@
 
 
 # Should cinder perform secure deletion of volumes?
-# Defaults to true, can be set to False to avoid this bug when testing:
+# Defaults to zero. Can also be set to none or shred.
+# This was previously CINDER_SECURE_DELETE (True or False).
+# Equivalents using CINDER_VOLUME_CLEAR are zero and none, respectively.
+# Set to none to avoid this bug when testing:
 # https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1023755
-CINDER_SECURE_DELETE=$(trueorfalse True CINDER_SECURE_DELETE)
+if [[ -n $CINDER_SECURE_DELETE ]]; then
+    CINDER_SECURE_DELETE=$(trueorfalse True CINDER_SECURE_DELETE)
+    if [[ $CINDER_SECURE_DELETE == "False" ]]; then
+        CINDER_VOLUME_CLEAR_DEFAULT="none"
+    fi
+    DEPRECATED_TEXT="$DEPRECATED_TEXT\nConfigure secure Cinder volume deletion using CINDER_VOLUME_CLEAR instead of CINDER_SECURE_DELETE.\n"
+fi
+CINDER_VOLUME_CLEAR=${CINDER_VOLUME_CLEAR:-${CINDER_VOLUME_CLEAR_DEFAULT:-zero}}
+CINDER_VOLUME_CLEAR=$(echo ${CINDER_VOLUME_CLEAR} | tr '[:upper:]' '[:lower:]')
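
For existing local configs, the shim above keeps the deprecated boolean working while mapping it onto the new tri-state option. Equivalent settings, old and new:

    CINDER_SECURE_DELETE=False   # deprecated; still honored, adds a deprecation note
    CINDER_VOLUME_CLEAR=none     # new equivalent
    CINDER_VOLUME_CLEAR=zero     # new default (matches the old True behavior)
    CINDER_VOLUME_CLEAR=shred    # new option with no boolean equivalent
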
 
 # Cinder reports allocations back to the scheduler on periodic intervals
 # it turns out we can get an "out of space" issue when we run tests too
@@ -88,6 +101,8 @@
 # https://bugs.launchpad.net/cinder/+bug/1180976
 CINDER_PERIODIC_INTERVAL=${CINDER_PERIODIC_INTERVAL:-60}
 
+CINDER_ISCSI_HELPER=${CINDER_ISCSI_HELPER:-tgtadm}
+
 # Tell Tempest this project is present
 TEMPEST_SERVICES+=,cinder
 
@@ -125,31 +140,35 @@
 function cleanup_cinder {
     # Ensure the volume group is cleared up because failures might
     # leave dead volumes in the group
-    local targets=$(sudo tgtadm --op show --mode target)
-    if [ $? -ne 0 ]; then
-        # If tgt driver isn't running this won't work obviously
-        # So check the response and restart if need be
-        echo "tgtd seems to be in a bad state, restarting..."
-        if is_ubuntu; then
-            restart_service tgt
-        else
-            restart_service tgtd
+    if [ "$CINDER_ISCSI_HELPER" = "tgtadm" ]; then
+        local targets=$(sudo tgtadm --op show --mode target)
+        if [ $? -ne 0 ]; then
+            # If the tgt driver isn't running this obviously won't work,
+            # so check the response and restart if need be
+            echo "tgtd seems to be in a bad state, restarting..."
+            if is_ubuntu; then
+                restart_service tgt
+            else
+                restart_service tgtd
+            fi
+            targets=$(sudo tgtadm --op show --mode target)
         fi
-        targets=$(sudo tgtadm --op show --mode target)
-    fi
 
-    if [[ -n "$targets" ]]; then
-        local iqn_list=( $(grep --no-filename -r iqn $SCSI_PERSIST_DIR | sed 's/<target //' | sed 's/>//') )
-        for i in "${iqn_list[@]}"; do
-            echo removing iSCSI target: $i
-            sudo tgt-admin --delete $i
-        done
-    fi
+        if [[ -n "$targets" ]]; then
+            local iqn_list=( $(grep --no-filename -r iqn $SCSI_PERSIST_DIR | sed 's/<target //' | sed 's/>//') )
+            for i in "${iqn_list[@]}"; do
+                echo removing iSCSI target: $i
+                sudo tgt-admin --delete $i
+            done
+        fi
 
-    if is_ubuntu; then
-        stop_service tgt
+        if is_ubuntu; then
+            stop_service tgt
+        else
+            stop_service tgtd
+        fi
     else
-        stop_service tgtd
+        sudo cinder-rtstool get-targets | sudo xargs -rn 1 cinder-rtstool delete
     fi
 
     if is_service_enabled c-vol && [[ -n "$CINDER_ENABLED_BACKENDS" ]]; then
@@ -164,49 +183,15 @@
     fi
 }
 
-# configure_cinder_rootwrap() - configure Cinder's rootwrap
-function configure_cinder_rootwrap {
-    # Set the paths of certain binaries
-    local cinder_rootwrap=$(get_rootwrap_location cinder)
-
-    # Deploy new rootwrap filters files (owned by root).
-    # Wipe any existing rootwrap.d files first
-    if [[ -d $CINDER_CONF_DIR/rootwrap.d ]]; then
-        sudo rm -rf $CINDER_CONF_DIR/rootwrap.d
-    fi
-    # Deploy filters to /etc/cinder/rootwrap.d
-    sudo mkdir -m 755 $CINDER_CONF_DIR/rootwrap.d
-    sudo cp $CINDER_DIR/etc/cinder/rootwrap.d/*.filters $CINDER_CONF_DIR/rootwrap.d
-    sudo chown -R root:root $CINDER_CONF_DIR/rootwrap.d
-    sudo chmod 644 $CINDER_CONF_DIR/rootwrap.d/*
-    # Set up rootwrap.conf, pointing to /etc/cinder/rootwrap.d
-    sudo cp $CINDER_DIR/etc/cinder/rootwrap.conf $CINDER_CONF_DIR/
-    sudo sed -e "s:^filters_path=.*$:filters_path=$CINDER_CONF_DIR/rootwrap.d:" -i $CINDER_CONF_DIR/rootwrap.conf
-    sudo chown root:root $CINDER_CONF_DIR/rootwrap.conf
-    sudo chmod 0644 $CINDER_CONF_DIR/rootwrap.conf
-    # Specify rootwrap.conf as first parameter to rootwrap
-    ROOTWRAP_CSUDOER_CMD="$cinder_rootwrap $CINDER_CONF_DIR/rootwrap.conf *"
-
-    # Set up the rootwrap sudoers for cinder
-    local tempfile=`mktemp`
-    echo "$STACK_USER ALL=(root) NOPASSWD: $ROOTWRAP_CSUDOER_CMD" >$tempfile
-    chmod 0440 $tempfile
-    sudo chown root:root $tempfile
-    sudo mv $tempfile /etc/sudoers.d/cinder-rootwrap
-}
-
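
The per-project rootwrap setup deleted here moves into a shared configure_rootwrap helper in the common functions, invoked below with only the project name. Its internals are assumed to mirror what was removed: filters into /etc/<project>/rootwrap.d, a pointed rootwrap.conf, and a sudoers entry:

    configure_rootwrap cinder
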
 # configure_cinder() - Set config files, create data dirs, etc
 function configure_cinder {
-    if [[ ! -d $CINDER_CONF_DIR ]]; then
-        sudo mkdir -p $CINDER_CONF_DIR
-    fi
-    sudo chown $STACK_USER $CINDER_CONF_DIR
+    sudo install -d -o $STACK_USER -m 755 $CINDER_CONF_DIR
 
     cp -p $CINDER_DIR/etc/cinder/policy.json $CINDER_CONF_DIR
 
     rm -f $CINDER_CONF
 
-    configure_cinder_rootwrap
+    configure_rootwrap cinder
 
     cp $CINDER_DIR/etc/cinder/api-paste.ini $CINDER_API_PASTE_INI
 
@@ -228,13 +213,13 @@
     iniset $CINDER_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
     iniset $CINDER_CONF DEFAULT verbose True
 
-    iniset $CINDER_CONF DEFAULT iscsi_helper tgtadm
-    iniset $CINDER_CONF DEFAULT sql_connection `database_connection_url cinder`
+    iniset $CINDER_CONF DEFAULT iscsi_helper "$CINDER_ISCSI_HELPER"
+    iniset $CINDER_CONF database connection `database_connection_url cinder`
     iniset $CINDER_CONF DEFAULT api_paste_config $CINDER_API_PASTE_INI
     iniset $CINDER_CONF DEFAULT rootwrap_config "$CINDER_CONF_DIR/rootwrap.conf"
     iniset $CINDER_CONF DEFAULT osapi_volume_extension cinder.api.contrib.standard_extensions
     iniset $CINDER_CONF DEFAULT state_path $CINDER_STATE_PATH
-    iniset $CINDER_CONF DEFAULT lock_path $CINDER_STATE_PATH
+    iniset $CINDER_CONF oslo_concurrency lock_path $CINDER_STATE_PATH
     iniset $CINDER_CONF DEFAULT periodic_interval $CINDER_PERIODIC_INTERVAL
     # NOTE(thingee): Cinder V1 API is deprecated and defaults to off as of
     # Juno. Keep it enabled so we can continue testing while it's still
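
Two option relocations in this hunk track oslo library moves: the database URL leaves DEFAULT/sql_connection for [database]/connection, and lock_path leaves DEFAULT for [oslo_concurrency]. The resulting cinder.conf fragment, values illustrative:

    [database]
    connection = mysql://root:secret@127.0.0.1/cinder?charset=utf8

    [oslo_concurrency]
    lock_path = /opt/stack/data/cinder
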
@@ -281,16 +266,18 @@
         iniset $CINDER_CONF DEFAULT use_syslog True
     fi
 
-    iniset_rpc_backend cinder $CINDER_CONF DEFAULT
+    iniset_rpc_backend cinder $CINDER_CONF
 
-    if [[ "$CINDER_SECURE_DELETE" == "False" ]]; then
-        iniset $CINDER_CONF DEFAULT secure_delete False
-        iniset $CINDER_CONF DEFAULT volume_clear none
+    if [[ "$CINDER_VOLUME_CLEAR" == "none" ]] || [[ "$CINDER_VOLUME_CLEAR" == "zero" ]] || [[ "$CINDER_VOLUME_CLEAR" == "shred" ]]; then
+        iniset $CINDER_CONF DEFAULT volume_clear $CINDER_VOLUME_CLEAR
     fi
 
     # Format logging
     if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ]; then
         setup_colorized_logging $CINDER_CONF DEFAULT "project_id" "user_id"
+    else
+        # Set req-id, project-name and resource in log format
+        iniset $CINDER_CONF DEFAULT logging_context_format_string "%(asctime)s.%(msecs)03d %(levelname)s %(name)s [%(request_id)s %(project_name)s] %(resource)s%(message)s"
     fi
 
     if [[ -r $CINDER_PLUGINS/$CINDER_DRIVER ]]; then
@@ -313,6 +300,11 @@
         iniset $CINDER_CONF DEFAULT ssl_key_file "$CINDER_SSL_KEY"
     fi
 
+    # Set os_privileged_user credentials (used for os-assisted-snapshots)
+    iniset $CINDER_CONF DEFAULT os_privileged_user_name nova
+    iniset $CINDER_CONF DEFAULT os_privileged_user_password "$SERVICE_PASSWORD"
+    iniset $CINDER_CONF DEFAULT os_privileged_user_tenant "$SERVICE_TENANT_NAME"
+
 }
 
 # create_cinder_accounts() - Set up common required cinder accounts
@@ -351,8 +343,7 @@
 # create_cinder_cache_dir() - Part of the init_cinder() process
 function create_cinder_cache_dir {
     # Create cache dir
-    sudo mkdir -p $CINDER_AUTH_CACHE_DIR
-    sudo chown $STACK_USER $CINDER_AUTH_CACHE_DIR
+    sudo install -d -o $STACK_USER $CINDER_AUTH_CACHE_DIR
     rm -f $CINDER_AUTH_CACHE_DIR/*
 }
 
@@ -372,15 +363,9 @@
 
     if is_service_enabled c-vol && [[ -n "$CINDER_ENABLED_BACKENDS" ]]; then
         local be be_name be_type
-        local has_lvm=0
         for be in ${CINDER_ENABLED_BACKENDS//,/ }; do
             be_type=${be%%:*}
             be_name=${be##*:}
-
-            if [[ $be_type == 'lvm' ]]; then
-                has_lvm=1
-            fi
-
             if type init_cinder_backend_${be_type} >/dev/null 2>&1; then
                 # Always init the default volume group for lvm.
                 if [[ "$be_type" == "lvm" ]]; then
@@ -391,25 +376,28 @@
         done
     fi
 
-    # Keep it simple, set a marker if there's an LVM backend
-    # use the created VG's to setup lvm filters
-    if [[ $has_lvm == 1 ]]; then
-        # Order matters here, not only obviously to make
-        # sure the VG's are created, but also some distros
-        # do some customizations to lvm.conf on init, we
-        # want to make sure we copy those over
-        sudo cp /etc/lvm/lvm.conf /etc/cinder/lvm.conf
-        configure_cinder_backend_conf_lvm
-    fi
-
     mkdir -p $CINDER_STATE_PATH/volumes
     create_cinder_cache_dir
 }
 
 # install_cinder() - Collect source and prepare
 function install_cinder {
+    # Install os-brick from git so we make sure we're testing
+    # the latest code.
+    if use_library_from_git "os-brick"; then
+        git_clone_by_name "os-brick"
+        setup_dev_lib "os-brick"
+    fi
+
     git_clone $CINDER_REPO $CINDER_DIR $CINDER_BRANCH
     setup_develop $CINDER_DIR
+    if [ "$CINDER_ISCSI_HELPER" = "tgtadm" ]; then
+        if is_fedora; then
+            install_package scsi-target-utils
+        else
+            install_package tgt
+        fi
+    fi
 }
 
 # install_cinderclient() - Collect source and prepare
@@ -437,21 +425,23 @@
         service_port=$CINDER_SERVICE_PORT_INT
         service_protocol="http"
     fi
-    if is_service_enabled c-vol; then
-        # Delete any old stack.conf
-        sudo rm -f /etc/tgt/conf.d/stack.conf
-        _configure_tgt_for_config_d
-        if is_ubuntu; then
-            sudo service tgt restart
-        elif is_fedora || is_suse; then
-            restart_service tgtd
-        else
-            # note for other distros: unstack.sh also uses the tgt/tgtd service
-            # name, and would need to be adjusted too
-            exit_distro_not_supported "restarting tgt"
+    if [ "$CINDER_ISCSI_HELPER" = "tgtadm" ]; then
+        if is_service_enabled c-vol; then
+            # Delete any old stack.conf
+            sudo rm -f /etc/tgt/conf.d/stack.conf
+            _configure_tgt_for_config_d
+            if is_ubuntu; then
+                sudo service tgt restart
+            elif is_fedora || is_suse; then
+                restart_service tgtd
+            else
+                # note for other distros: unstack.sh also uses the tgt/tgtd service
+                # name, and would need to be adjusted too
+                exit_distro_not_supported "restarting tgt"
+            fi
+            # NOTE(gfidente): ensure tgtd is running in debug mode
+            sudo tgtadm --mode system --op update --name debug --value on
         fi
-        # NOTE(gfidente): ensure tgtd is running in debug mode
-        sudo tgtadm --mode system --op update --name debug --value on
     fi
 
     run_process c-api "$CINDER_BIN_DIR/cinder-api --config-file $CINDER_CONF"
@@ -481,14 +471,6 @@
     for serv in c-api c-bak c-sch c-vol; do
         stop_process $serv
     done
-
-    if is_service_enabled c-vol; then
-        if is_ubuntu; then
-            stop_service tgt
-        else
-            stop_service tgtd
-        fi
-    fi
 }
 
 # create_volume_types() - Create Cinder's configured volume types
diff --git a/lib/cinder_backends/lvm b/lib/cinder_backends/lvm
index 52fc6fb..35ad209 100644
--- a/lib/cinder_backends/lvm
+++ b/lib/cinder_backends/lvm
@@ -19,7 +19,6 @@
 # clean_cinder_backend_lvm - called from clean_cinder()
 # configure_cinder_backend_lvm - called from configure_cinder()
 # init_cinder_backend_lvm - called from init_cinder()
-# configure_cinder_backend_conf_lvm - called from configure_cinder()
 
 
 # Save trace setting
@@ -40,6 +39,7 @@
 
     # Campsite rule: leave behind a volume group at least as clean as we found it
     clean_lvm_volume_group $VOLUME_GROUP_NAME-$be_name
+    clean_lvm_filter
 }
 
 # configure_cinder_backend_lvm - Set config files, create data dirs, etc
@@ -50,7 +50,7 @@
     iniset $CINDER_CONF $be_name volume_backend_name $be_name
     iniset $CINDER_CONF $be_name volume_driver "cinder.volume.drivers.lvm.LVMVolumeDriver"
     iniset $CINDER_CONF $be_name volume_group $VOLUME_GROUP_NAME-$be_name
-    iniset $CINDER_CONF $be_name iscsi_helper "tgtadm"
+    iniset $CINDER_CONF $be_name iscsi_helper "$CINDER_ISCSI_HELPER"
 
     if [[ "$CINDER_SECURE_DELETE" == "False" ]]; then
         iniset $CINDER_CONF $be_name volume_clear none
@@ -66,36 +66,6 @@
     init_lvm_volume_group $VOLUME_GROUP_NAME-$be_name $VOLUME_BACKING_FILE_SIZE
 }
 
-# configure_cinder_backend_conf_lvm - Sets device filter in /etc/cinder/lvm.conf
-# init_cinder_backend_lvm
-function configure_cinder_backend_conf_lvm {
-    local filter_suffix='"r/.*/" ]'
-    local filter_string="filter = [ "
-    local conf_entries=$(grep volume_group /etc/cinder/cinder.conf | sed "s/ //g")
-    local pv
-    local vg
-    local line
-
-    for pv_info in $(sudo pvs --noheadings -o name,vg_name --separator ';'); do
-        echo_summary "Evaluate PV info for Cinder lvm.conf: $pv_info"
-        IFS=';' read pv vg <<< "$pv_info"
-        for line in ${conf_entries}; do
-            IFS='=' read label group <<< "$line"
-            group=$(echo $group|sed "s/^ *//g")
-            if [[ "$vg" == "$group" ]]; then
-                new="\"a$pv/\", "
-                filter_string=$filter_string$new
-            fi
-        done
-    done
-    filter_string=$filter_string$filter_suffix
-
-    # FIXME(jdg): Possible odd case that the lvm.conf file has been modified
-    # and doesn't have a filter entry to search/replace.  For devstack don't
-    # know that we care, but could consider adding a check and add
-    sudo sed -i "s#^[ \t]*filter.*#    $filter_string#g" /etc/cinder/lvm.conf
-    echo "set LVM filter_strings: $filter_string"
-}
 # Restore xtrace
 $MY_XTRACE
 
diff --git a/lib/database b/lib/database
index b114e9e..ff1fafe 100644
--- a/lib/database
+++ b/lib/database
@@ -109,6 +109,11 @@
     install_database_$DATABASE_TYPE
 }
 
+# Install the database Python packages
+function install_database_python {
+    install_database_python_$DATABASE_TYPE
+}
+
 # Configure and start the database
 function configure_database {
     configure_database_$DATABASE_TYPE
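
install_database_python, like configure_database just above it, follows DevStack's dispatch-by-suffix convention: the generic function appends $DATABASE_TYPE and calls the backend-specific variant, so each database file only defines the suffixed implementations. A sketch:

    DATABASE_TYPE=mysql
    install_database_python   # dispatches to install_database_python_mysql
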
diff --git a/lib/databases/mysql b/lib/databases/mysql
index 70073c4..7cd2856 100644
--- a/lib/databases/mysql
+++ b/lib/databases/mysql
@@ -11,12 +11,19 @@
 MY_XTRACE=$(set +o | grep xtrace)
 set +o xtrace
 
+MYSQL_DRIVER=${MYSQL_DRIVER:-MySQL-python}
+# Default the SQLAlchemy driver to pymysql when the PyMySQL client is selected.
+if is_service_enabled mysql; then
+    if [[ "$MYSQL_DRIVER" == "PyMySQL" ]]; then
+        SQLALCHEMY_DATABASE_DRIVER=${SQLALCHEMY_DATABASE_DRIVER:-"pymysql"}
+    fi
+fi
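
With this hook, choosing the pure-Python client flips the SQLAlchemy dialect for every service, so connection URLs are expected to come out as mysql+pymysql:// rather than mysql://. A localrc sketch:

    MYSQL_DRIVER=PyMySQL
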
 
 register_database mysql
 
 # Linux distros, thank you for being incredibly consistent
 MYSQL=mysql
-if is_fedora; then
+if is_fedora && ! is_oraclelinux; then
     MYSQL=mariadb
 fi
 
@@ -32,12 +39,12 @@
         sudo rm -rf /var/lib/mysql
         sudo rm -rf /etc/mysql
         return
+    elif is_suse || is_oraclelinux; then
+        uninstall_package mysql-community-server
+        sudo rm -rf /var/lib/mysql
     elif is_fedora; then
         uninstall_package mariadb-server
         sudo rm -rf /var/lib/mysql
-    elif is_suse; then
-        uninstall_package mysql-community-server
-        sudo rm -rf /var/lib/mysql
     else
         return
     fi
@@ -56,12 +63,12 @@
     if is_ubuntu; then
         my_conf=/etc/mysql/my.cnf
         mysql=mysql
+    elif is_suse || is_oraclelinux; then
+        my_conf=/etc/my.cnf
+        mysql=mysql
     elif is_fedora; then
         mysql=mariadb
         my_conf=/etc/my.cnf
-    elif is_suse; then
-        my_conf=/etc/my.cnf
-        mysql=mysql
     else
         exit_distro_not_supported "mysql configuration"
     fi
@@ -140,20 +147,25 @@
         chmod 0600 $HOME/.my.cnf
     fi
     # Install mysql-server
-    if is_fedora; then
-        install_package mariadb-server
-    elif is_ubuntu; then
-        install_package mysql-server
-    elif is_suse; then
+    if is_suse || is_oraclelinux; then
         if ! is_package_installed mariadb; then
             install_package mysql-community-server
         fi
+    elif is_fedora; then
+        install_package mariadb-server
+    elif is_ubuntu; then
+        install_package mysql-server
     else
         exit_distro_not_supported "mysql installation"
     fi
+}
 
+function install_database_python_mysql {
     # Install Python client module
-    pip_install MySQL-python
+    pip_install_gr $MYSQL_DRIVER
+    if [[ "$MYSQL_DRIVER" == "MySQL-python" ]]; then
+        ADDITIONAL_VENV_PACKAGES+=",MySQL-python"
+    fi
 }
 
 function database_connection_url_mysql {
diff --git a/lib/databases/postgresql b/lib/databases/postgresql
index e891a08..e087a1e 100644
--- a/lib/databases/postgresql
+++ b/lib/databases/postgresql
@@ -100,9 +100,12 @@
     else
         exit_distro_not_supported "postgresql installation"
     fi
+}
 
+function install_database_python_postgresql {
     # Install Python client module
-    pip_install psycopg2
+    pip_install_gr psycopg2
+    ADDITIONAL_VENV_PACKAGES+=",psycopg2"
 }
 
 function database_connection_url_postgresql {
diff --git a/lib/dstat b/lib/dstat
index 740e48f..f11bfa5 100644
--- a/lib/dstat
+++ b/lib/dstat
@@ -16,34 +16,22 @@
 XTRACE=$(set +o | grep xtrace)
 set +o xtrace
 
-
-# Defaults
-# --------
-# for DSTAT logging
-DSTAT_FILE=${DSTAT_FILE:-"dstat.log"}
-
-
 # start_dstat() - Start running processes, including screen
 function start_dstat {
     # A better kind of sysstat, with the top process per time slice
     DSTAT_OPTS="-tcmndrylpg --top-cpu-adv --top-io-adv"
-    if [[ -n ${LOGDIR} ]]; then
-        screen_it dstat "cd $TOP_DIR; dstat $DSTAT_OPTS | tee $LOGDIR/$DSTAT_FILE"
-        if [[ -n ${SCREEN_LOGDIR} && ${SCREEN_LOGDIR} != ${LOGDIR} ]]; then
-            # Drop the backward-compat symlink
-            ln -sf $LOGDIR/$DSTAT_FILE ${SCREEN_LOGDIR}/$DSTAT_FILE
-        fi
-    else
-        screen_it dstat "dstat $DSTAT_OPTS"
-    fi
+    run_process dstat "dstat $DSTAT_OPTS"
+
+    # To enable peakmem_tracker add:
+    #    enable_service peakmem_tracker
+    # to your localrc
+    run_process peakmem_tracker "$TOP_DIR/tools/peakmem_tracker.sh"
 }
 
 # stop_dstat() stop dstat process
 function stop_dstat {
-    # dstat runs as a console, not as a service, and isn't trackable
-    # via the normal mechanisms for devstack. So lets just do a
-    # killall and move on.
-    killall dstat || /bin/true
+    stop_process dstat
+    stop_process peakmem_tracker
 }
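
Moving dstat from the bespoke screen_it/killall handling onto run_process/stop_process gives it the same lifecycle and log placement as every other service, which is why the DSTAT_FILE default disappears. Enabling the new memory tracker is just another service toggle:

    enable_service peakmem_tracker
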
 
 # Restore xtrace
diff --git a/lib/glance b/lib/glance
old mode 100755
new mode 100644
index eb1df2e..016ade3
--- a/lib/glance
+++ b/lib/glance
@@ -31,8 +31,16 @@
 # Set up default directories
 GITDIR["python-glanceclient"]=$DEST/python-glanceclient
 GITDIR["glance_store"]=$DEST/glance_store
-
 GLANCE_DIR=$DEST/glance
+
+# Glance virtual environment
+if [[ ${USE_VENV} = True ]]; then
+    PROJECT_VENV["glance"]=${GLANCE_DIR}.venv
+    GLANCE_BIN_DIR=${PROJECT_VENV["glance"]}/bin
+else
+    GLANCE_BIN_DIR=$(get_python_exec_prefix)
+fi
+
 GLANCE_CACHE_DIR=${GLANCE_CACHE_DIR:=$DATA_DIR/glance/cache}
 GLANCE_IMAGE_DIR=${GLANCE_IMAGE_DIR:=$DATA_DIR/glance/images}
 GLANCE_AUTH_CACHE_DIR=${GLANCE_AUTH_CACHE_DIR:-/var/cache/glance}
@@ -41,19 +49,14 @@
 GLANCE_METADEF_DIR=$GLANCE_CONF_DIR/metadefs
 GLANCE_REGISTRY_CONF=$GLANCE_CONF_DIR/glance-registry.conf
 GLANCE_API_CONF=$GLANCE_CONF_DIR/glance-api.conf
+GLANCE_SEARCH_CONF=$GLANCE_CONF_DIR/glance-search.conf
 GLANCE_REGISTRY_PASTE_INI=$GLANCE_CONF_DIR/glance-registry-paste.ini
 GLANCE_API_PASTE_INI=$GLANCE_CONF_DIR/glance-api-paste.ini
+GLANCE_SEARCH_PASTE_INI=$GLANCE_CONF_DIR/glance-search-paste.ini
 GLANCE_CACHE_CONF=$GLANCE_CONF_DIR/glance-cache.conf
 GLANCE_POLICY_JSON=$GLANCE_CONF_DIR/policy.json
 GLANCE_SCHEMA_JSON=$GLANCE_CONF_DIR/schema-image.json
 
-# Support entry points installation of console scripts
-if [[ -d $GLANCE_DIR/bin ]]; then
-    GLANCE_BIN_DIR=$GLANCE_DIR/bin
-else
-    GLANCE_BIN_DIR=$(get_python_exec_prefix)
-fi
-
 if is_ssl_enabled_service "glance" || is_service_enabled tls-proxy; then
     GLANCE_SERVICE_PROTOCOL="https"
 fi
@@ -66,6 +69,9 @@
 GLANCE_SERVICE_PROTOCOL=${GLANCE_SERVICE_PROTOCOL:-$SERVICE_PROTOCOL}
 GLANCE_REGISTRY_PORT=${GLANCE_REGISTRY_PORT:-9191}
 GLANCE_REGISTRY_PORT_INT=${GLANCE_REGISTRY_PORT_INT:-19191}
+GLANCE_SEARCH_PORT=${GLANCE_SEARCH_PORT:-9393}
+GLANCE_SEARCH_PORT_INT=${GLANCE_SEARCH_PORT_INT:-19393}
+GLANCE_SEARCH_HOSTPORT=${GLANCE_SEARCH_HOSTPORT:-$GLANCE_SERVICE_HOST:$GLANCE_SEARCH_PORT}
 
 # Tell Tempest this project is present
 TEMPEST_SERVICES+=,glance
@@ -86,19 +92,15 @@
     # kill instances (nova)
     # delete image files (glance)
     sudo rm -rf $GLANCE_CACHE_DIR $GLANCE_IMAGE_DIR $GLANCE_AUTH_CACHE_DIR
+
+    if is_service_enabled g-search; then
+        ${TOP_DIR}/pkg/elasticsearch.sh stop
+    fi
 }
 
 # configure_glance() - Set config files, create data dirs, etc
 function configure_glance {
-    if [[ ! -d $GLANCE_CONF_DIR ]]; then
-        sudo mkdir -p $GLANCE_CONF_DIR
-    fi
-    sudo chown $STACK_USER $GLANCE_CONF_DIR
-
-    if [[ ! -d $GLANCE_METADEF_DIR ]]; then
-        sudo mkdir -p $GLANCE_METADEF_DIR
-    fi
-    sudo chown $STACK_USER $GLANCE_METADEF_DIR
+    sudo install -d -o $STACK_USER $GLANCE_CONF_DIR $GLANCE_METADEF_DIR
 
     # Copy over our glance configurations and update them
     cp $GLANCE_DIR/etc/glance-registry.conf $GLANCE_REGISTRY_CONF
@@ -107,12 +109,13 @@
     local dburl=`database_connection_url glance`
     iniset $GLANCE_REGISTRY_CONF DEFAULT sql_connection $dburl
     iniset $GLANCE_REGISTRY_CONF DEFAULT use_syslog $SYSLOG
+    iniset $GLANCE_REGISTRY_CONF DEFAULT workers "$API_WORKERS"
     iniset $GLANCE_REGISTRY_CONF paste_deploy flavor keystone
     configure_auth_token_middleware $GLANCE_REGISTRY_CONF glance $GLANCE_AUTH_CACHE_DIR/registry
     if is_service_enabled qpid || [ -n "$RABBIT_HOST" ] && [ -n "$RABBIT_PASSWORD" ]; then
         iniset $GLANCE_REGISTRY_CONF DEFAULT notification_driver messaging
     fi
-    iniset_rpc_backend glance $GLANCE_REGISTRY_CONF DEFAULT
+    iniset_rpc_backend glance $GLANCE_REGISTRY_CONF
 
     cp $GLANCE_DIR/etc/glance-api.conf $GLANCE_API_CONF
     iniset $GLANCE_API_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
@@ -125,7 +128,7 @@
     if is_service_enabled qpid || [ -n "$RABBIT_HOST" ] && [ -n "$RABBIT_PASSWORD" ]; then
         iniset $GLANCE_API_CONF DEFAULT notification_driver messaging
     fi
-    iniset_rpc_backend glance $GLANCE_API_CONF DEFAULT
+    iniset_rpc_backend glance $GLANCE_API_CONF
     if [ "$VIRT_DRIVER" = 'xenserver' ]; then
         iniset $GLANCE_API_CONF DEFAULT container_formats "ami,ari,aki,bare,ovf,tgz"
         iniset $GLANCE_API_CONF DEFAULT disk_formats "ami,ari,aki,vhd,raw,iso"
@@ -135,26 +138,12 @@
     fi
 
     # Store specific configs
-    iniset $GLANCE_API_CONF DEFAULT filesystem_store_datadir $GLANCE_IMAGE_DIR/
-
-    # NOTE(flaper87): Until Glance is fully migrated, set these configs in both
-    # sections.
     iniset $GLANCE_API_CONF glance_store filesystem_store_datadir $GLANCE_IMAGE_DIR/
 
     iniset $GLANCE_API_CONF DEFAULT workers "$API_WORKERS"
 
     # Store the images in swift if enabled.
     if is_service_enabled s-proxy; then
-        iniset $GLANCE_API_CONF DEFAULT default_store swift
-        iniset $GLANCE_API_CONF DEFAULT swift_store_auth_address $KEYSTONE_SERVICE_URI/v2.0/
-        iniset $GLANCE_API_CONF DEFAULT swift_store_user $SERVICE_TENANT_NAME:glance-swift
-        iniset $GLANCE_API_CONF DEFAULT swift_store_key $SERVICE_PASSWORD
-        iniset $GLANCE_API_CONF DEFAULT swift_store_create_container_on_put True
-
-        iniset $GLANCE_API_CONF DEFAULT known_stores "glance.store.filesystem.Store, glance.store.http.Store, glance.store.swift.Store"
-
-        # NOTE(flaper87): Until Glance is fully migrated, set these configs in both
-        # sections.
         iniset $GLANCE_API_CONF glance_store default_store swift
         iniset $GLANCE_API_CONF glance_store swift_store_auth_address $KEYSTONE_SERVICE_URI/v2.0/
         iniset $GLANCE_API_CONF glance_store swift_store_user $SERVICE_TENANT_NAME:glance-swift
@@ -165,6 +154,7 @@
 
     if is_service_enabled tls-proxy; then
         iniset $GLANCE_API_CONF DEFAULT bind_port $GLANCE_SERVICE_PORT_INT
+        iniset $GLANCE_API_CONF DEFAULT public_endpoint $GLANCE_SERVICE_PROTOCOL://$GLANCE_HOSTPORT
         iniset $GLANCE_REGISTRY_CONF DEFAULT bind_port $GLANCE_REGISTRY_PORT_INT
     fi
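
Setting public_endpoint matters behind the TLS proxy: glance-api binds the internal plaintext port, so without it the version document would advertise the bind address rather than the proxied https one. The effective pairing, values illustrative:

    [DEFAULT]
    bind_port = 19292
    public_endpoint = https://10.0.0.10:9292
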
 
@@ -185,8 +175,8 @@
 
     # Format logging
     if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ]; then
-        setup_colorized_logging $GLANCE_API_CONF DEFAULT "project_id" "user_id"
-        setup_colorized_logging $GLANCE_REGISTRY_CONF DEFAULT "project_id" "user_id"
+        setup_colorized_logging $GLANCE_API_CONF DEFAULT tenant user
+        setup_colorized_logging $GLANCE_REGISTRY_CONF DEFAULT tenant user
     fi
 
     cp -p $GLANCE_DIR/etc/glance-registry-paste.ini $GLANCE_REGISTRY_PASTE_INI
@@ -208,9 +198,6 @@
     iniset $GLANCE_CACHE_CONF DEFAULT admin_password $SERVICE_PASSWORD
 
     # Store specific confs
-    # NOTE(flaper87): Until Glance is fully migrated, set these configs in both
-    # sections.
-    iniset $GLANCE_CACHE_CONF DEFAULT filesystem_store_datadir $GLANCE_IMAGE_DIR/
     iniset $GLANCE_CACHE_CONF glance_store filesystem_store_datadir $GLANCE_IMAGE_DIR/
 
     cp -p $GLANCE_DIR/etc/policy.json $GLANCE_POLICY_JSON
@@ -225,14 +212,38 @@
         iniset $GLANCE_API_CONF DEFAULT cinder_endpoint_template "https://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v1/%(project_id)s"
         iniset $GLANCE_CACHE_CONF DEFAULT cinder_endpoint_template "https://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v1/%(project_id)s"
     fi
+
+    # Configure search
+    if is_service_enabled g-search; then
+        cp $GLANCE_DIR/etc/glance-search.conf $GLANCE_SEARCH_CONF
+        iniset $GLANCE_SEARCH_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
+        inicomment $GLANCE_SEARCH_CONF DEFAULT log_file
+        iniset $GLANCE_SEARCH_CONF DEFAULT use_syslog $SYSLOG
+        iniset $GLANCE_SEARCH_CONF DEFAULT sql_connection $dburl
+        iniset $GLANCE_SEARCH_CONF paste_deploy flavor keystone
+        configure_auth_token_middleware $GLANCE_SEARCH_CONF glance $GLANCE_AUTH_CACHE_DIR/search
+
+        if is_service_enabled tls-proxy; then
+            iniset $GLANCE_SEARCH_CONF DEFAULT bind_port $GLANCE_SEARCH_PORT_INT
+        fi
+        # Register SSL certificates if provided
+        if is_ssl_enabled_service glance; then
+            ensure_certificates GLANCE
+            iniset $GLANCE_SEARCH_CONF DEFAULT cert_file "$GLANCE_SSL_CERT"
+            iniset $GLANCE_SEARCH_CONF DEFAULT key_file "$GLANCE_SSL_KEY"
+        fi
+
+        cp $GLANCE_DIR/etc/glance-search-paste.ini $GLANCE_SEARCH_PASTE_INI
+    fi
 }
 
 # create_glance_accounts() - Set up common required glance accounts
 
-# Project              User         Roles
-# ------------------------------------------------------------------
-# SERVICE_TENANT_NAME  glance       service
-# SERVICE_TENANT_NAME  glance-swift ResellerAdmin (if Swift is enabled)
+# Project              User            Roles
+# ---------------------------------------------------------------------
+# SERVICE_TENANT_NAME  glance          service
+# SERVICE_TENANT_NAME  glance-swift    ResellerAdmin (if Swift is enabled)
+# SERVICE_TENANT_NAME  glance-search   search (if Search is enabled)
 
 function create_glance_accounts {
     if is_service_enabled g-api; then
@@ -258,17 +269,27 @@
                 "$GLANCE_SERVICE_PROTOCOL://$GLANCE_HOSTPORT"
         fi
     fi
+
+    # Add glance-search service and endpoints
+    if is_service_enabled g-search; then
+        if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
+            local glance_search_service=$(get_or_create_service "glance-search" \
+                "search" "EXPERIMENTAL - Glance Graffiti Search Service")
+
+            get_or_create_endpoint $glance_search_service \
+                "$REGION_NAME" \
+                "$GLANCE_SERVICE_PROTOCOL://$GLANCE_SEARCH_HOSTPORT" \
+                "$GLANCE_SERVICE_PROTOCOL://$GLANCE_SEARCH_HOSTPORT" \
+                "$GLANCE_SERVICE_PROTOCOL://$GLANCE_SEARCH_HOSTPORT"
+        fi
+    fi
 }
 
 # create_glance_cache_dir() - Part of the init_glance() process
 function create_glance_cache_dir {
     # Create cache dir
-    sudo mkdir -p $GLANCE_AUTH_CACHE_DIR/api
-    sudo chown $STACK_USER $GLANCE_AUTH_CACHE_DIR/api
-    rm -f $GLANCE_AUTH_CACHE_DIR/api/*
-    sudo mkdir -p $GLANCE_AUTH_CACHE_DIR/registry
-    sudo chown $STACK_USER $GLANCE_AUTH_CACHE_DIR/registry
-    rm -f $GLANCE_AUTH_CACHE_DIR/registry/*
+    sudo install -d -o $STACK_USER $GLANCE_AUTH_CACHE_DIR/api $GLANCE_AUTH_CACHE_DIR/registry $GLANCE_AUTH_CACHE_DIR/search
+    rm -f $GLANCE_AUTH_CACHE_DIR/api/* $GLANCE_AUTH_CACHE_DIR/registry/* $GLANCE_AUTH_CACHE_DIR/search/*
 }
 
 # init_glance() - Initialize databases, etc.
@@ -291,6 +312,12 @@
     $GLANCE_BIN_DIR/glance-manage db_load_metadefs
 
     create_glance_cache_dir
+
+    # Init glance search by exporting found metadefs/images to elasticsearch
+    if is_service_enabled g-search; then
+        ${TOP_DIR}/pkg/elasticsearch.sh start
+        $GLANCE_BIN_DIR/glance-index
+    fi
 }
 
 # install_glanceclient() - Collect source and prepare
@@ -312,11 +339,13 @@
     fi
 
     git_clone $GLANCE_REPO $GLANCE_DIR $GLANCE_BRANCH
-    setup_develop $GLANCE_DIR
-    if is_service_enabled g-graffiti; then
+
+    if is_service_enabled g-search; then
         ${TOP_DIR}/pkg/elasticsearch.sh download
         ${TOP_DIR}/pkg/elasticsearch.sh install
     fi
+
+    setup_develop $GLANCE_DIR
 }
 
 # start_glance() - Start running processes, including screen
@@ -325,18 +354,29 @@
     if is_service_enabled tls-proxy; then
         start_tls_proxy '*' $GLANCE_SERVICE_PORT $GLANCE_SERVICE_HOST $GLANCE_SERVICE_PORT_INT &
         start_tls_proxy '*' $GLANCE_REGISTRY_PORT $GLANCE_SERVICE_HOST $GLANCE_REGISTRY_PORT_INT &
+
+        # Handle g-search
+        if is_service_enabled g-search; then
+            start_tls_proxy '*' $GLANCE_SEARCH_PORT $GLANCE_SERVICE_HOST $GLANCE_SEARCH_PORT_INT &
+        fi
     fi
 
     run_process g-reg "$GLANCE_BIN_DIR/glance-registry --config-file=$GLANCE_CONF_DIR/glance-registry.conf"
     run_process g-api "$GLANCE_BIN_DIR/glance-api --config-file=$GLANCE_CONF_DIR/glance-api.conf"
 
-    if is_service_enabled g-graffiti; then
-        ${TOP_DIR}/pkg/elasticsearch.sh start
-    fi
     echo "Waiting for g-api ($GLANCE_HOSTPORT) to start..."
     if ! wait_for_service $SERVICE_TIMEOUT $GLANCE_SERVICE_PROTOCOL://$GLANCE_HOSTPORT; then
         die $LINENO "g-api did not start"
     fi
+
+    # Start g-search after g-reg/g-api
+    if is_service_enabled g-search; then
+        run_process g-search "$GLANCE_BIN_DIR/glance-search --config-file=$GLANCE_CONF_DIR/glance-search.conf"
+        echo "Waiting for g-search ($GLANCE_SEARCH_HOSTPORT) to start..."
+        if ! wait_for_service $SERVICE_TIMEOUT $GLANCE_SERVICE_PROTOCOL://$GLANCE_SEARCH_HOSTPORT; then
+            die $LINENO "g-search did not start"
+        fi
+    fi
 }
 
 # stop_glance() - Stop running processes
@@ -344,6 +384,10 @@
     # Kill the Glance screen windows
     stop_process g-api
     stop_process g-reg
+
+    if is_service_enabled g-search; then
+        stop_process g-search
+    fi
 }
 
 # Restore xtrace
diff --git a/lib/heat b/lib/heat
index a088e82..5cb0dbf 100644
--- a/lib/heat
+++ b/lib/heat
@@ -36,6 +36,7 @@
 HEAT_CFNTOOLS_DIR=$DEST/heat-cfntools
 HEAT_TEMPLATES_REPO_DIR=$DEST/heat-templates
 OCC_DIR=$DEST/os-collect-config
+DIB_UTILS_DIR=$DEST/dib-utils
 ORC_DIR=$DEST/os-refresh-config
 OAC_DIR=$DEST/os-apply-config
 
@@ -49,13 +50,19 @@
 HEAT_CONF=$HEAT_CONF_DIR/heat.conf
 HEAT_ENV_DIR=$HEAT_CONF_DIR/environment.d
 HEAT_TEMPLATES_DIR=$HEAT_CONF_DIR/templates
-HEAT_STACK_DOMAIN=$(trueorfalse True HEAT_STACK_DOMAIN)
 HEAT_API_HOST=${HEAT_API_HOST:-$HOST_IP}
 HEAT_API_PORT=${HEAT_API_PORT:-8004}
 
 
 # other default options
-HEAT_DEFERRED_AUTH=${HEAT_DEFERRED_AUTH:-trusts}
+if [[ "$HEAT_STANDALONE" = "True" ]]; then
+    # for standalone, use defaults which require no service user
+    HEAT_STACK_DOMAIN=$(trueorfalse False HEAT_STACK_DOMAIN)
+    HEAT_DEFERRED_AUTH=${HEAT_DEFERRED_AUTH:-password}
+else
+    HEAT_STACK_DOMAIN=$(trueorfalse True HEAT_STACK_DOMAIN)
+    HEAT_DEFERRED_AUTH=${HEAT_DEFERRED_AUTH:-trusts}
+fi
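
A single switch now selects a coherent set of standalone defaults instead of each needing to be set by hand; a localrc sketch:

    HEAT_STANDALONE=True
    # implies HEAT_DEFERRED_AUTH=password and HEAT_STACK_DOMAIN=False
    # unless overridden explicitly
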
 
 # Tell Tempest this project is present
 TEMPEST_SERVICES+=,heat
@@ -77,18 +84,13 @@
     sudo rm -rf $HEAT_AUTH_CACHE_DIR
     sudo rm -rf $HEAT_ENV_DIR
     sudo rm -rf $HEAT_TEMPLATES_DIR
+    sudo rm -rf $HEAT_CONF_DIR
 }
 
 # configure_heat() - Set config files, create data dirs, etc
 function configure_heat {
-    if [[ "$HEAT_STANDALONE" = "True" ]]; then
-        setup_develop $HEAT_DIR/contrib/heat_keystoneclient_v2
-    fi
 
-    if [[ ! -d $HEAT_CONF_DIR ]]; then
-        sudo mkdir -p $HEAT_CONF_DIR
-    fi
-    sudo chown $STACK_USER $HEAT_CONF_DIR
+    sudo install -d -o $STACK_USER $HEAT_CONF_DIR
     # remove old config files
     rm -f $HEAT_CONF_DIR/heat-*.conf
 
@@ -105,7 +107,7 @@
     cp $HEAT_DIR/etc/heat/policy.json $HEAT_POLICY_FILE
 
     # common options
-    iniset_rpc_backend heat $HEAT_CONF DEFAULT
+    iniset_rpc_backend heat $HEAT_CONF
     iniset $HEAT_CONF DEFAULT heat_metadata_server_url http://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT
     iniset $HEAT_CONF DEFAULT heat_waitcondition_server_url http://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT/v1/waitcondition
     iniset $HEAT_CONF DEFAULT heat_watch_server_url http://$HEAT_API_CW_HOST:$HEAT_API_CW_PORT
@@ -122,29 +124,29 @@
         setup_colorized_logging $HEAT_CONF DEFAULT tenant user
     fi
 
+    iniset $HEAT_CONF DEFAULT deferred_auth_method $HEAT_DEFERRED_AUTH
+
     # NOTE(jamielennox): heat re-uses specific values from the
     # keystone_authtoken middleware group and so currently fails when using the
     # auth plugin setup. This should be fixed in heat.  Heat is also the only
     # service that requires the auth_uri to include a /v2.0. Remove this custom
     # setup when bug #1300246 is resolved.
-    iniset $HEAT_CONF keystone_authtoken identity_uri $KEYSTONE_AUTH_URI
     iniset $HEAT_CONF keystone_authtoken auth_uri $KEYSTONE_SERVICE_URI/v2.0
-    iniset $HEAT_CONF keystone_authtoken admin_user heat
-    iniset $HEAT_CONF keystone_authtoken admin_password $SERVICE_PASSWORD
-    iniset $HEAT_CONF keystone_authtoken admin_tenant_name $SERVICE_TENANT_NAME
-    iniset $HEAT_CONF keystone_authtoken cafile $SSL_BUNDLE_FILE
-    iniset $HEAT_CONF keystone_authtoken signing_dir $HEAT_AUTH_CACHE_DIR
+    if [[ "$HEAT_STANDALONE" = "True" ]]; then
+        iniset $HEAT_CONF paste_deploy flavor standalone
+        iniset $HEAT_CONF clients_heat url "http://$HEAT_API_HOST:$HEAT_API_PORT/v1/%(tenant_id)s"
+    else
+        iniset $HEAT_CONF keystone_authtoken identity_uri $KEYSTONE_AUTH_URI
+        iniset $HEAT_CONF keystone_authtoken admin_user heat
+        iniset $HEAT_CONF keystone_authtoken admin_password $SERVICE_PASSWORD
+        iniset $HEAT_CONF keystone_authtoken admin_tenant_name $SERVICE_TENANT_NAME
+        iniset $HEAT_CONF keystone_authtoken cafile $SSL_BUNDLE_FILE
+        iniset $HEAT_CONF keystone_authtoken signing_dir $HEAT_AUTH_CACHE_DIR
+    fi
 
     # ec2authtoken
     iniset $HEAT_CONF ec2authtoken auth_uri $KEYSTONE_SERVICE_URI/v2.0
 
-    # paste_deploy
-    if [[ "$HEAT_STANDALONE" = "True" ]]; then
-        iniset $HEAT_CONF paste_deploy flavor standalone
-        iniset $HEAT_CONF DEFAULT keystone_backend heat_keystoneclient_v2.client.KeystoneClientV2
-        iniset $HEAT_CONF clients_heat url "http://$HEAT_API_HOST:$HEAT_API_PORT/v1/%(tenant_id)s"
-    fi
-
     # OpenStack API
     iniset $HEAT_CONF heat_api bind_port $HEAT_API_PORT
     iniset $HEAT_CONF heat_api workers "$API_WORKERS"
@@ -172,15 +174,11 @@
         iniset $HEAT_CONF DEFAULT enable_stack_abandon true
     fi
 
-    # heat environment
-    sudo mkdir -p $HEAT_ENV_DIR
-    sudo chown $STACK_USER $HEAT_ENV_DIR
+    sudo install -d -o $STACK_USER $HEAT_ENV_DIR $HEAT_TEMPLATES_DIR
+
     # copy the default environment
     cp $HEAT_DIR/etc/heat/environment.d/* $HEAT_ENV_DIR/
 
-    # heat template resources.
-    sudo mkdir -p $HEAT_TEMPLATES_DIR
-    sudo chown $STACK_USER $HEAT_TEMPLATES_DIR
     # copy the default templates
     cp $HEAT_DIR/etc/heat/templates/* $HEAT_TEMPLATES_DIR/
 
@@ -199,8 +197,7 @@
 # create_heat_cache_dir() - Part of the init_heat() process
 function create_heat_cache_dir {
     # Create cache dirs
-    sudo mkdir -p $HEAT_AUTH_CACHE_DIR
-    sudo chown $STACK_USER $HEAT_AUTH_CACHE_DIR
+    sudo install -d -o $STACK_USER $HEAT_AUTH_CACHE_DIR
 }
 
 # install_heatclient() - Collect source and prepare
@@ -222,6 +219,10 @@
 function install_heat_other {
     git_clone $HEAT_CFNTOOLS_REPO $HEAT_CFNTOOLS_DIR $HEAT_CFNTOOLS_BRANCH
     git_clone $HEAT_TEMPLATES_REPO $HEAT_TEMPLATES_REPO_DIR $HEAT_TEMPLATES_BRANCH
+    git_clone $OAC_REPO $OAC_DIR $OAC_BRANCH
+    git_clone $OCC_REPO $OCC_DIR $OCC_BRANCH
+    git_clone $ORC_REPO $ORC_DIR $ORC_BRANCH
+    git_clone $DIB_UTILS_REPO $DIB_UTILS_DIR $DIB_UTILS_BRANCH
 }
 
 # start_heat() - Start running processes, including screen
@@ -243,32 +244,31 @@
 
 # create_heat_accounts() - Set up common required heat accounts
 function create_heat_accounts {
-    create_service_user "heat" "admin"
+    if [[ "$HEAT_STANDALONE" != "True" ]]; then
 
-    if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
+        create_service_user "heat" "admin"
 
-        local heat_service=$(get_or_create_service "heat" \
-                "orchestration" "Heat Orchestration Service")
-        get_or_create_endpoint $heat_service \
-            "$REGION_NAME" \
-            "$SERVICE_PROTOCOL://$HEAT_API_HOST:$HEAT_API_PORT/v1/\$(tenant_id)s" \
-            "$SERVICE_PROTOCOL://$HEAT_API_HOST:$HEAT_API_PORT/v1/\$(tenant_id)s" \
-            "$SERVICE_PROTOCOL://$HEAT_API_HOST:$HEAT_API_PORT/v1/\$(tenant_id)s"
+        if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
 
-        local heat_cfn_service=$(get_or_create_service "heat-cfn" \
-                "cloudformation" "Heat CloudFormation Service")
-        get_or_create_endpoint $heat_cfn_service \
-            "$REGION_NAME" \
-            "$SERVICE_PROTOCOL://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT/v1" \
-            "$SERVICE_PROTOCOL://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT/v1" \
-            "$SERVICE_PROTOCOL://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT/v1"
-    fi
+            local heat_service=$(get_or_create_service "heat" \
+                    "orchestration" "Heat Orchestration Service")
+            get_or_create_endpoint $heat_service \
+                "$REGION_NAME" \
+                "$SERVICE_PROTOCOL://$HEAT_API_HOST:$HEAT_API_PORT/v1/\$(tenant_id)s" \
+                "$SERVICE_PROTOCOL://$HEAT_API_HOST:$HEAT_API_PORT/v1/\$(tenant_id)s" \
+                "$SERVICE_PROTOCOL://$HEAT_API_HOST:$HEAT_API_PORT/v1/\$(tenant_id)s"
 
-    # heat_stack_user role is for users created by Heat
-    get_or_create_role "heat_stack_user"
+            local heat_cfn_service=$(get_or_create_service "heat-cfn" \
+                    "cloudformation" "Heat CloudFormation Service")
+            get_or_create_endpoint $heat_cfn_service \
+                "$REGION_NAME" \
+                "$SERVICE_PROTOCOL://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT/v1" \
+                "$SERVICE_PROTOCOL://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT/v1" \
+                "$SERVICE_PROTOCOL://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT/v1"
+        fi
 
-    if [[ $HEAT_DEFERRED_AUTH == trusts ]]; then
-        iniset $HEAT_CONF DEFAULT deferred_auth_method trusts
+        # heat_stack_user role is for users created by Heat
+        get_or_create_role "heat_stack_user"
     fi
 
     if [[ "$HEAT_STACK_DOMAIN" == "True" ]]; then
@@ -299,7 +299,7 @@
 
 # build_heat_pip_mirror() - Build a pip mirror containing heat agent projects
 function build_heat_pip_mirror {
-    local project_dirs="$OCC_DIR $OAC_DIR $ORC_DIR $HEAT_CFNTOOLS_DIR"
+    local project_dirs="$OCC_DIR $OAC_DIR $ORC_DIR $HEAT_CFNTOOLS_DIR $DIB_UTILS_DIR"
     local projpath proj package
 
     rm -rf $HEAT_PIP_REPO
@@ -335,6 +335,7 @@
     " -i $heat_pip_repo_apache_conf
     enable_apache_site heat_pip_repo
     restart_apache_server
+    sudo iptables -I INPUT -d $HOST_IP -p tcp --dport $HEAT_PIP_REPO_PORT -j ACCEPT || true
 }
 
 # Restore xtrace
diff --git a/lib/horizon b/lib/horizon
index c6e3692..b0f306b 100644
--- a/lib/horizon
+++ b/lib/horizon
@@ -97,7 +97,14 @@
     _horizon_config_set $local_settings "" OPENSTACK_KEYSTONE_DEFAULT_ROLE \"Member\"
 
     _horizon_config_set $local_settings "" OPENSTACK_HOST \"${KEYSTONE_SERVICE_HOST}\"
-    _horizon_config_set $local_settings "" OPENSTACK_KEYSTONE_URL "\"${KEYSTONE_SERVICE_PROTOCOL}://${KEYSTONE_SERVICE_HOST}:${KEYSTONE_SERVICE_PORT}/v2.0\""
+
+    if [ "$ENABLE_IDENTITY_V2" == "False" ]; then
+        # Only the Identity v3 API is available, so use it with v3 auth tokens
+        _horizon_config_set $local_settings "" OPENSTACK_API_VERSIONS {\"identity\":3}
+        _horizon_config_set $local_settings "" OPENSTACK_KEYSTONE_URL "\"${KEYSTONE_SERVICE_PROTOCOL}://${KEYSTONE_SERVICE_HOST}:${KEYSTONE_SERVICE_PORT}/v3\""
+    else
+        _horizon_config_set $local_settings "" OPENSTACK_KEYSTONE_URL "\"${KEYSTONE_SERVICE_PROTOCOL}://${KEYSTONE_SERVICE_HOST}:${KEYSTONE_SERVICE_PORT}/v2.0\""
+    fi
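
The resulting local_settings fragment when v2 is disabled, host and port illustrative:

    OPENSTACK_API_VERSIONS = {"identity": 3}
    OPENSTACK_KEYSTONE_URL = "http://10.0.0.10:5000/v3"
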
 
     if [ -f $SSL_BUNDLE_FILE ]; then
         _horizon_config_set $local_settings "" OPENSTACK_SSL_CACERT \"${SSL_BUNDLE_FILE}\"
@@ -129,7 +136,7 @@
     fi
     enable_apache_site horizon
 
-    # Remove old log files that could mess with how devstack detects whether Horizon
+    # Remove old log files that could mess with how DevStack detects whether Horizon
     # has been successfully started (see start_horizon(),
     # functions::screen_it() and run_process)
     sudo rm -f /var/log/$APACHE_NAME/horizon_*
@@ -183,7 +190,7 @@
 # NOTE: It can be moved to common functions, but it is only used by compilation
 # of django_openstack_auth catalogs at the moment.
 function _prepare_message_catalog_compilation {
-    pip_install $(get_from_global_requirements Babel)
+    pip_install_gr Babel
 }
 
 
diff --git a/lib/ironic b/lib/ironic
index 35b5411..4984be1 100644
--- a/lib/ironic
+++ b/lib/ironic
@@ -53,16 +53,18 @@
 # The file is composed of multiple lines; each line includes four fields
 # separated by white space: IPMI address, MAC address, IPMI username
 # and IPMI password.
-# An example:
+#
 #   192.168.110.107 00:1e:67:57:50:4c root otc123
 IRONIC_IPMIINFO_FILE=${IRONIC_IPMIINFO_FILE:-$IRONIC_DATA_DIR/hardware_info}
 
 # Set up defaults for functional / integration testing
+IRONIC_NODE_UUID=${IRONIC_NODE_UUID:-`uuidgen`}
 IRONIC_SCRIPTS_DIR=${IRONIC_SCRIPTS_DIR:-$TOP_DIR/tools/ironic/scripts}
 IRONIC_TEMPLATES_DIR=${IRONIC_TEMPLATES_DIR:-$TOP_DIR/tools/ironic/templates}
 IRONIC_BAREMETAL_BASIC_OPS=$(trueorfalse False IRONIC_BAREMETAL_BASIC_OPS)
 IRONIC_ENABLED_DRIVERS=${IRONIC_ENABLED_DRIVERS:-fake,pxe_ssh,pxe_ipmitool}
 IRONIC_SSH_USERNAME=${IRONIC_SSH_USERNAME:-`whoami`}
+IRONIC_SSH_TIMEOUT=${IRONIC_SSH_TIMEOUT:-15}
 IRONIC_SSH_KEY_DIR=${IRONIC_SSH_KEY_DIR:-$IRONIC_DATA_DIR/ssh_keys}
 IRONIC_SSH_KEY_FILENAME=${IRONIC_SSH_KEY_FILENAME:-ironic_key}
 IRONIC_KEY_FILE=${IRONIC_KEY_FILE:-$IRONIC_SSH_KEY_DIR/$IRONIC_SSH_KEY_FILENAME}
@@ -98,10 +100,10 @@
 IRONIC_AGENT_RAMDISK_URL=${IRONIC_AGENT_RAMDISK_URL:-http://tarballs.openstack.org/ironic-python-agent/coreos/files/coreos_production_pxe_image-oem.cpio.gz}
 
 # Which deploy driver to use - valid choices right now
-# are 'pxe_ssh', 'pxe_ipmitool', 'agent_ssh' and 'agent_ipmitool'.
+# are ``pxe_ssh``, ``pxe_ipmitool``, ``agent_ssh`` and ``agent_ipmitool``.
 IRONIC_DEPLOY_DRIVER=${IRONIC_DEPLOY_DRIVER:-pxe_ssh}
 
-#TODO(agordeev): replace 'ubuntu' with host distro name getting
+# TODO(agordeev): replace 'ubuntu' with the host distro name
 IRONIC_DEPLOY_FLAVOR=${IRONIC_DEPLOY_FLAVOR:-ubuntu $IRONIC_DEPLOY_ELEMENT}
 
 # Support entry points installation of console scripts
@@ -180,7 +182,11 @@
 # install_ironic() - Collect source and prepare
 function install_ironic {
     # make sure all needed services are enabled
-    for srv in nova glance key; do
+    local req_services="key"
+    if [[ "$VIRT_DRIVER" == "ironic" ]]; then
+        req_services+=" nova glance neutron"
+    fi
+    for srv in $req_services; do
         if ! is_service_enabled "$srv"; then
             die $LINENO "$srv should be enabled for Ironic."
         fi
@@ -201,7 +207,7 @@
         sudo install -D -m 0644 -o $STACK_USER {${GITDIR["python-ironicclient"]}/tools/,/etc/bash_completion.d/}ironic.bash_completion
     else
         # nothing actually "requires" ironicclient, so force install from pypi
-        pip_install python-ironicclient
+        pip_install_gr python-ironicclient
     fi
 }
 
@@ -233,22 +239,14 @@
 # configure_ironic_dirs() - Create all directories required by Ironic and
 # associated services.
 function configure_ironic_dirs {
-    if [[ ! -d $IRONIC_CONF_DIR ]]; then
-        sudo mkdir -p $IRONIC_CONF_DIR
-    fi
+    sudo install -d -o $STACK_USER $IRONIC_CONF_DIR $IRONIC_DATA_DIR \
+        $IRONIC_STATE_PATH $IRONIC_TFTPBOOT_DIR $IRONIC_TFTPBOOT_DIR/pxelinux.cfg
+    sudo chown -R $STACK_USER:$LIBVIRT_GROUP $IRONIC_TFTPBOOT_DIR
 
     if [[ "$IRONIC_IPXE_ENABLED" == "True" ]] ; then
-        sudo mkdir -p $IRONIC_HTTP_DIR
-        sudo chown -R $STACK_USER:$LIBVIRT_GROUP $IRONIC_HTTP_DIR
+        sudo install -d -o $STACK_USER -g $LIBVIRT_GROUP $IRONIC_HTTP_DIR
     fi
 
-    sudo mkdir -p $IRONIC_DATA_DIR
-    sudo mkdir -p $IRONIC_STATE_PATH
-    sudo mkdir -p $IRONIC_TFTPBOOT_DIR
-    sudo chown -R $STACK_USER $IRONIC_DATA_DIR $IRONIC_STATE_PATH
-    sudo chown -R $STACK_USER:$LIBVIRT_GROUP $IRONIC_TFTPBOOT_DIR
-    mkdir -p $IRONIC_TFTPBOOT_DIR/pxelinux.cfg
-
     if [ ! -f $IRONIC_PXE_BOOT_IMAGE ]; then
         die $LINENO "PXE boot file $IRONIC_PXE_BOOT_IMAGE not found."
     fi
@@ -267,13 +265,12 @@
 # configure_ironic() - Set config files, create data dirs, etc
 function configure_ironic {
     configure_ironic_dirs
-    sudo chown $STACK_USER $IRONIC_CONF_DIR
 
     # Copy over ironic configuration file and configure common parameters.
     cp $IRONIC_DIR/etc/ironic/ironic.conf.sample $IRONIC_CONF_FILE
     iniset $IRONIC_CONF_FILE DEFAULT debug True
     inicomment $IRONIC_CONF_FILE DEFAULT log_file
-    iniset $IRONIC_CONF_FILE DEFAULT sql_connection `database_connection_url ironic`
+    iniset $IRONIC_CONF_FILE database connection `database_connection_url ironic`
     iniset $IRONIC_CONF_FILE DEFAULT state_path $IRONIC_STATE_PATH
     iniset $IRONIC_CONF_FILE DEFAULT use_syslog $SYSLOG
     # Configure Ironic conductor, if it was enabled.
@@ -300,7 +297,7 @@
 # API specific configuration.
 function configure_ironic_api {
     iniset $IRONIC_CONF_FILE DEFAULT auth_strategy keystone
-    iniset $IRONIC_CONF_FILE DEFAULT policy_file $IRONIC_POLICY_JSON
+    iniset $IRONIC_CONF_FILE oslo_policy policy_file $IRONIC_POLICY_JSON
 
     # TODO(Yuki Nishiwaki): This is a temporary work-around until Ironic is fixed (bug#1422632).
     # This code needs to be changed to use the configure_auth_token_middleware function
@@ -313,7 +310,7 @@
     iniset $IRONIC_CONF_FILE keystone_authtoken cafile $SSL_BUNDLE_FILE
     iniset $IRONIC_CONF_FILE keystone_authtoken signing_dir $IRONIC_AUTH_CACHE_DIR/api
 
-    iniset_rpc_backend ironic $IRONIC_CONF_FILE DEFAULT
+    iniset_rpc_backend ironic $IRONIC_CONF_FILE
     iniset $IRONIC_CONF_FILE api port $IRONIC_SERVICE_PORT
 
     cp -p $IRONIC_DIR/etc/ironic/policy.json $IRONIC_POLICY_JSON
@@ -374,6 +371,7 @@
         iniset $IRONIC_CONF_FILE glance swift_container glance
         iniset $IRONIC_CONF_FILE glance swift_temp_url_duration 3600
         iniset $IRONIC_CONF_FILE agent heartbeat_timeout 30
+        iniset $IRONIC_CONF_FILE agent agent_erase_devices_priority 0
     fi
 
     if [[ "$IRONIC_IPXE_ENABLED" == "True" ]] ; then
@@ -427,6 +425,11 @@
 
 # init_ironic() - Initialize databases, etc.
 function init_ironic {
+    # Save private network as cleaning network
+    local cleaning_network_uuid
+    cleaning_network_uuid=$(neutron net-list | grep private | get_field 1)
+    iniset $IRONIC_CONF_FILE neutron cleaning_network_uuid ${cleaning_network_uuid}
+
     # (Re)create ironic database
     recreate_database ironic
 
@@ -566,14 +569,6 @@
 function enroll_nodes {
     local chassis_id=$(ironic chassis-create -d "ironic test chassis" | grep " uuid " | get_field 2)
 
-    if [[ "$IRONIC_DEPLOY_DRIVER" == "pxe_ssh" ]] ; then
-        local _IRONIC_DEPLOY_KERNEL_KEY=pxe_deploy_kernel
-        local _IRONIC_DEPLOY_RAMDISK_KEY=pxe_deploy_ramdisk
-    elif is_deployed_by_agent; then
-        local _IRONIC_DEPLOY_KERNEL_KEY=deploy_kernel
-        local _IRONIC_DEPLOY_RAMDISK_KEY=deploy_ramdisk
-    fi
-
     if ! is_ironic_hardware; then
         local ironic_node_cpu=$IRONIC_VM_SPECS_CPU
         local ironic_node_ram=$IRONIC_VM_SPECS_RAM
@@ -581,8 +576,8 @@
         local ironic_ephemeral_disk=$IRONIC_VM_EPHEMERAL_DISK
         local ironic_hwinfo_file=$IRONIC_VM_MACS_CSV_FILE
         local node_options="\
-            -i $_IRONIC_DEPLOY_KERNEL_KEY=$IRONIC_DEPLOY_KERNEL_ID \
-            -i $_IRONIC_DEPLOY_RAMDISK_KEY=$IRONIC_DEPLOY_RAMDISK_ID \
+            -i deploy_kernel=$IRONIC_DEPLOY_KERNEL_ID \
+            -i deploy_ramdisk=$IRONIC_DEPLOY_RAMDISK_ID \
             -i ssh_virt_type=$IRONIC_SSH_VIRT_TYPE \
             -i ssh_address=$IRONIC_VM_SSH_ADDRESS \
             -i ssh_port=$IRONIC_VM_SSH_PORT \
@@ -613,11 +608,16 @@
             # we create the bare metal flavor with minimum value
             local node_options="-i ipmi_address=$ipmi_address -i ipmi_password=$ironic_ipmi_passwd\
                 -i ipmi_username=$ironic_ipmi_username"
-            node_options+=" -i $_IRONIC_DEPLOY_KERNEL_KEY=$IRONIC_DEPLOY_KERNEL_ID"
-            node_options+=" -i $_IRONIC_DEPLOY_RAMDISK_KEY=$IRONIC_DEPLOY_RAMDISK_ID"
+            node_options+=" -i deploy_kernel=$IRONIC_DEPLOY_KERNEL_ID"
+            node_options+=" -i deploy_ramdisk=$IRONIC_DEPLOY_RAMDISK_ID"
         fi
 
-        local node_id=$(ironic node-create --chassis_uuid $chassis_id \
+        # First node created will be used for testing in ironic w/o glance
+        # scenario, so we need to know its UUID.
+        local standalone_node_uuid=$([ $total_nodes -eq 0 ] && echo "--uuid $IRONIC_NODE_UUID")
+
+        local node_id=$(ironic node-create $standalone_node_uuid \
+            --chassis_uuid $chassis_id \
             --driver $IRONIC_DEPLOY_DRIVER \
             -p cpus=$ironic_node_cpu\
             -p memory_mb=$ironic_node_ram\
@@ -703,7 +703,7 @@
 
 function configure_ironic_auxiliary {
     configure_ironic_ssh_keypair
-    ironic_ssh_check $IRONIC_KEY_FILE $IRONIC_VM_SSH_ADDRESS $IRONIC_VM_SSH_PORT $IRONIC_SSH_USERNAME 10
+    ironic_ssh_check $IRONIC_KEY_FILE $IRONIC_VM_SSH_ADDRESS $IRONIC_VM_SSH_PORT $IRONIC_SSH_USERNAME $IRONIC_SSH_TIMEOUT
 }
 
 function build_ipa_coreos_ramdisk {
@@ -727,7 +727,7 @@
 
     # install diskimage-builder
     if [[ $(type -P ramdisk-image-create) == "" ]]; then
-        pip_install diskimage_builder
+        pip_install_gr "diskimage-builder"
     fi
 
     if [ -z "$IRONIC_DEPLOY_KERNEL" -o -z "$IRONIC_DEPLOY_RAMDISK" ]; then
@@ -763,7 +763,7 @@
         fi
     fi
 
-    local token=$(keystone token-get | grep ' id ' | get_field 2)
+    local token=$(openstack token issue -c id -f value)
     die_if_not_set $LINENO token "Keystone failed to get token"
 
     # load them into glance
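
The token fetch now goes through python-openstackclient rather than the deprecated per-project keystone CLI; -c id -f value narrows the output to the bare token, so no grep/get_field parsing is needed:

    openstack token issue -c id -f value   # prints only the token id
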
diff --git a/lib/keystone b/lib/keystone
index 0968445..7a949cf 100644
--- a/lib/keystone
+++ b/lib/keystone
@@ -37,12 +37,19 @@
 # Set up default directories
 GITDIR["python-keystoneclient"]=$DEST/python-keystoneclient
 GITDIR["keystonemiddleware"]=$DEST/keystonemiddleware
-
 KEYSTONE_DIR=$DEST/keystone
+
+# Keystone virtual environment
+if [[ ${USE_VENV} = True ]]; then
+    PROJECT_VENV["keystone"]=${KEYSTONE_DIR}.venv
+    KEYSTONE_BIN_DIR=${PROJECT_VENV["keystone"]}/bin
+else
+    KEYSTONE_BIN_DIR=$(get_python_exec_prefix)
+fi
+
 KEYSTONE_CONF_DIR=${KEYSTONE_CONF_DIR:-/etc/keystone}
 KEYSTONE_CONF=$KEYSTONE_CONF_DIR/keystone.conf
 KEYSTONE_PASTE_INI=${KEYSTONE_PASTE_INI:-$KEYSTONE_CONF_DIR/keystone-paste.ini}
-KEYSTONE_AUTH_CACHE_DIR=${KEYSTONE_AUTH_CACHE_DIR:-/var/cache/keystone}
 if is_suse; then
     KEYSTONE_WSGI_DIR=${KEYSTONE_WSGI_DIR:-/srv/www/htdocs/keystone}
 else
@@ -56,21 +63,21 @@
 # Toggle for deploying Keystone under HTTPD + mod_wsgi
 KEYSTONE_USE_MOD_WSGI=${KEYSTONE_USE_MOD_WSGI:-${ENABLE_HTTPD_MOD_WSGI_SERVICES}}
 
-# Select the backend for Keystone's service catalog
+# Select the Catalog backend driver
 KEYSTONE_CATALOG_BACKEND=${KEYSTONE_CATALOG_BACKEND:-sql}
 KEYSTONE_CATALOG=$KEYSTONE_CONF_DIR/default_catalog.templates
 
-# Select the backend for Tokens
+# Select the token persistence backend driver
 KEYSTONE_TOKEN_BACKEND=${KEYSTONE_TOKEN_BACKEND:-sql}
 
-# Select the backend for Identity
+# Select the Identity backend driver
 KEYSTONE_IDENTITY_BACKEND=${KEYSTONE_IDENTITY_BACKEND:-sql}
 
-# Select the backend for Assignment
+# Select the Assignment backend driver
 KEYSTONE_ASSIGNMENT_BACKEND=${KEYSTONE_ASSIGNMENT_BACKEND:-sql}
 
-# Select Keystone's token format
-# Choose from 'UUID', 'PKI', or 'PKIZ'
+# Select Keystone's token provider (and format)
+# Choose from 'uuid', 'pki', 'pkiz', or 'fernet'
 KEYSTONE_TOKEN_FORMAT=${KEYSTONE_TOKEN_FORMAT:-}
 KEYSTONE_TOKEN_FORMAT=$(echo ${KEYSTONE_TOKEN_FORMAT} | tr '[:upper:]' '[:lower:]')
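
With the provider list above, selecting a format is a one-line localrc setting, lower-cased by the line above so either case works:

    KEYSTONE_TOKEN_FORMAT=fernet
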
 
@@ -91,12 +98,6 @@
 # Set the tenant for service accounts in Keystone
 SERVICE_TENANT_NAME=${SERVICE_TENANT_NAME:-service}
 
-# valid identity backends as per dir keystone/identity/backends
-KEYSTONE_VALID_IDENTITY_BACKENDS=kvs,ldap,pam,sql
-
-# valid assignment backends as per dir keystone/identity/backends
-KEYSTONE_VALID_ASSIGNMENT_BACKENDS=kvs,ldap,sql
-
 # if we are running with SSL use https protocols
 if is_ssl_enabled_service "key" || is_service_enabled tls-proxy; then
     KEYSTONE_AUTH_PROTOCOL="https"
@@ -144,6 +145,7 @@
     local keystone_keyfile=""
     local keystone_service_port=$KEYSTONE_SERVICE_PORT
     local keystone_auth_port=$KEYSTONE_AUTH_PORT
+    local venv_path=""
 
     if is_ssl_enabled_service key; then
         keystone_ssl="SSLEngine On"
@@ -154,6 +156,9 @@
         keystone_service_port=$KEYSTONE_SERVICE_PORT_INT
         keystone_auth_port=$KEYSTONE_AUTH_PORT_INT
     fi
+    if [[ ${USE_VENV} = True ]]; then
+        venv_path="python-path=${PROJECT_VENV["keystone"]}/lib/$(python_version)/site-packages"
+    fi
 
     # copy proxy vhost and wsgi file
     sudo cp $KEYSTONE_DIR/httpd/keystone.py $KEYSTONE_WSGI_DIR/main
@@ -169,20 +174,17 @@
         s|%SSLENGINE%|$keystone_ssl|g;
         s|%SSLCERTFILE%|$keystone_certfile|g;
         s|%SSLKEYFILE%|$keystone_keyfile|g;
-        s|%USER%|$STACK_USER|g
+        s|%USER%|$STACK_USER|g;
+        s|%VIRTUALENV%|$venv_path|g
     " -i $keystone_apache_conf
 }
 
 # configure_keystone() - Set config files, create data dirs, etc
 function configure_keystone {
-    if [[ ! -d $KEYSTONE_CONF_DIR ]]; then
-        sudo mkdir -p $KEYSTONE_CONF_DIR
-    fi
-    sudo chown $STACK_USER $KEYSTONE_CONF_DIR
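+    # install -d creates the directory and sets its ownership in one step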
+    sudo install -d -o $STACK_USER $KEYSTONE_CONF_DIR
 
     if [[ "$KEYSTONE_CONF_DIR" != "$KEYSTONE_DIR/etc" ]]; then
-        cp -p $KEYSTONE_DIR/etc/keystone.conf.sample $KEYSTONE_CONF
-        chmod 600 $KEYSTONE_CONF
+        install -m 600 $KEYSTONE_DIR/etc/keystone.conf.sample $KEYSTONE_CONF
         cp -p $KEYSTONE_DIR/etc/policy.json $KEYSTONE_CONF_DIR
         if [[ -f "$KEYSTONE_DIR/etc/keystone-paste.ini" ]]; then
             cp -p "$KEYSTONE_DIR/etc/keystone-paste.ini" "$KEYSTONE_PASTE_INI"
@@ -195,6 +197,12 @@
         KEYSTONE_PASTE_INI="$KEYSTONE_CONF"
     fi
 
+    if [ "$ENABLE_IDENTITY_V2" == "False" ]; then
+        # Only the Identity v3 API should be available, so disable the v2 pipelines
+        inidelete $KEYSTONE_PASTE_INI composite:main \\/v2.0
+        inidelete $KEYSTONE_PASTE_INI composite:admin \\/v2.0
+    fi
+
     configure_keystone_extensions
 
     # Rewrite stock ``keystone.conf``
@@ -216,68 +224,41 @@
         iniset $KEYSTONE_CONF DEFAULT member_role_name "_member_"
     fi
 
-    # check if identity backend is valid
-    if [[ "$KEYSTONE_VALID_IDENTITY_BACKENDS" =~ "$KEYSTONE_IDENTITY_BACKEND" ]]; then
-        iniset $KEYSTONE_CONF identity driver "keystone.identity.backends.$KEYSTONE_IDENTITY_BACKEND.Identity"
-    fi
+    iniset $KEYSTONE_CONF identity driver "$KEYSTONE_IDENTITY_BACKEND"
+    iniset $KEYSTONE_CONF assignment driver "$KEYSTONE_ASSIGNMENT_BACKEND"
 
-    # check if assignment backend is valid
-    if [[ "$KEYSTONE_VALID_ASSIGNMENT_BACKENDS" =~ "$KEYSTONE_ASSIGNMENT_BACKEND" ]]; then
-        iniset $KEYSTONE_CONF assignment driver "keystone.assignment.backends.$KEYSTONE_ASSIGNMENT_BACKEND.Assignment"
-    fi
+    iniset_rpc_backend keystone $KEYSTONE_CONF
 
-    # Configure rabbitmq credentials
-    if is_service_enabled rabbit; then
-        iniset $KEYSTONE_CONF DEFAULT rabbit_userid $RABBIT_USERID
-        iniset $KEYSTONE_CONF DEFAULT rabbit_password $RABBIT_PASSWORD
-        iniset $KEYSTONE_CONF DEFAULT rabbit_host $RABBIT_HOST
-    fi
-
-    # Set the URL advertised in the ``versions`` structure returned by the '/' route
-    if is_service_enabled tls-proxy; then
-        iniset $KEYSTONE_CONF DEFAULT public_endpoint "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_SERVICE_PORT/"
-        iniset $KEYSTONE_CONF DEFAULT admin_endpoint "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_AUTH_PORT/"
-    else
-        iniset $KEYSTONE_CONF DEFAULT public_endpoint "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:%(public_port)s/"
-        iniset $KEYSTONE_CONF DEFAULT admin_endpoint "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:%(admin_port)s/"
-    fi
-    iniset $KEYSTONE_CONF DEFAULT admin_bind_host "$KEYSTONE_ADMIN_BIND_HOST"
+    iniset $KEYSTONE_CONF eventlet_server admin_bind_host "$KEYSTONE_ADMIN_BIND_HOST"
 
     # Register SSL certificates if provided
     if is_ssl_enabled_service key; then
         ensure_certificates KEYSTONE
 
-        iniset $KEYSTONE_CONF ssl enable True
-        iniset $KEYSTONE_CONF ssl certfile $KEYSTONE_SSL_CERT
-        iniset $KEYSTONE_CONF ssl keyfile $KEYSTONE_SSL_KEY
+        iniset $KEYSTONE_CONF eventlet_server_ssl enable True
+        iniset $KEYSTONE_CONF eventlet_server_ssl certfile $KEYSTONE_SSL_CERT
+        iniset $KEYSTONE_CONF eventlet_server_ssl keyfile $KEYSTONE_SSL_KEY
     fi
 
     if is_service_enabled tls-proxy; then
         # Set the service ports for a proxy to take the originals
-        iniset $KEYSTONE_CONF DEFAULT public_port $KEYSTONE_SERVICE_PORT_INT
-        iniset $KEYSTONE_CONF DEFAULT admin_port $KEYSTONE_AUTH_PORT_INT
+        iniset $KEYSTONE_CONF eventlet_server public_port $KEYSTONE_SERVICE_PORT_INT
+        iniset $KEYSTONE_CONF eventlet_server admin_port $KEYSTONE_AUTH_PORT_INT
     fi
 
     iniset $KEYSTONE_CONF DEFAULT admin_token "$SERVICE_TOKEN"
 
     if [[ "$KEYSTONE_TOKEN_FORMAT" != "" ]]; then
-        iniset $KEYSTONE_CONF token provider keystone.token.providers.$KEYSTONE_TOKEN_FORMAT.Provider
+        iniset $KEYSTONE_CONF token provider $KEYSTONE_TOKEN_FORMAT
     fi
 
     iniset $KEYSTONE_CONF database connection `database_connection_url keystone`
-    iniset $KEYSTONE_CONF ec2 driver "keystone.contrib.ec2.backends.sql.Ec2"
 
-    if [[ "$KEYSTONE_TOKEN_BACKEND" = "sql" ]]; then
-        iniset $KEYSTONE_CONF token driver keystone.token.persistence.backends.sql.Token
-    elif [[ "$KEYSTONE_TOKEN_BACKEND" = "memcache" ]]; then
-        iniset $KEYSTONE_CONF token driver keystone.token.persistence.backends.memcache.Token
-    else
-        iniset $KEYSTONE_CONF token driver keystone.token.persistence.backends.kvs.Token
-    fi
+    iniset $KEYSTONE_CONF token driver "$KEYSTONE_TOKEN_BACKEND"
 
+    iniset $KEYSTONE_CONF catalog driver "$KEYSTONE_CATALOG_BACKEND"
     if [[ "$KEYSTONE_CATALOG_BACKEND" = "sql" ]]; then
         # Configure ``keystone.conf`` to use sql
-        iniset $KEYSTONE_CONF catalog driver keystone.catalog.backends.sql.Catalog
         inicomment $KEYSTONE_CONF catalog template_file
     else
         cp -p $FILES/default_catalog.templates $KEYSTONE_CATALOG
@@ -304,7 +285,6 @@
         " -i $KEYSTONE_CATALOG
 
         # Configure ``keystone.conf`` to use templates
-        iniset $KEYSTONE_CONF catalog driver "keystone.catalog.backends.templated.Catalog"
         iniset $KEYSTONE_CONF catalog template_file "$KEYSTONE_CATALOG"
     fi
 
@@ -331,7 +311,7 @@
 
     iniset $KEYSTONE_CONF DEFAULT max_token_size 16384
 
-    iniset $KEYSTONE_CONF DEFAULT admin_workers "$API_WORKERS"
+    iniset $KEYSTONE_CONF eventlet_server admin_workers "$API_WORKERS"
     # Public workers will use the server default, typically the number of CPUs.
 }
 
@@ -367,6 +347,12 @@
 # demo                 demo       Member, anotherrole
 # invisible_to_admin   demo       Member
 
+# Group                Users      Roles                 Tenant
+# ------------------------------------------------------------------
+# admins               admin      admin                 admin
+# nonadmins            demo       Member, anotherrole   demo
+
+
 # Migrated from keystone_data.sh
 function create_keystone_accounts {
 
@@ -408,8 +394,14 @@
     get_or_add_user_project_role $another_role $demo_user $demo_tenant
     get_or_add_user_project_role $member_role $demo_user $invis_tenant
 
-    get_or_create_group "developers" "default" "openstack developers"
-    get_or_create_group "testers" "default"
+    local admin_group=$(get_or_create_group "admins" \
+        "default" "openstack admin group")
+    local non_admin_group=$(get_or_create_group "nonadmins" \
+        "default" "non-admin group")
+
+    get_or_add_group_project_role $member_role $non_admin_group $demo_tenant
+    get_or_add_group_project_role $another_role $non_admin_group $demo_tenant
+    get_or_add_group_project_role $admin_role $admin_group $admin_tenant
 
     # Keystone
     if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
@@ -474,25 +466,20 @@
     recreate_database keystone
 
     # Initialize keystone database
-    $KEYSTONE_DIR/bin/keystone-manage db_sync
+    $KEYSTONE_BIN_DIR/keystone-manage db_sync
 
     local extension_value
     for extension_value in ${KEYSTONE_EXTENSIONS//,/ }; do
         if [[ -z "${extension_value}" ]]; then
             continue
         fi
-        $KEYSTONE_DIR/bin/keystone-manage db_sync --extension "${extension_value}"
+        $KEYSTONE_BIN_DIR/keystone-manage db_sync --extension "${extension_value}"
     done
 
     if [[ "$KEYSTONE_TOKEN_FORMAT" != "uuid" ]]; then
         # Set up certificates
         rm -rf $KEYSTONE_CONF_DIR/ssl
-        $KEYSTONE_DIR/bin/keystone-manage pki_setup
-
-        # Create cache dir
-        sudo mkdir -p $KEYSTONE_AUTH_CACHE_DIR
-        sudo chown $STACK_USER $KEYSTONE_AUTH_CACHE_DIR
-        rm -f $KEYSTONE_AUTH_CACHE_DIR/*
+        $KEYSTONE_BIN_DIR/keystone-manage pki_setup
     fi
 }
 
@@ -507,9 +494,14 @@
 
 # install_keystonemiddleware() - Collect source and prepare
 function install_keystonemiddleware {
+    # install_keystonemiddleware() is called when keystonemiddleware is needed
+    # to provide an opportunity to install it from the source repo
     if use_library_from_git "keystonemiddleware"; then
         git_clone_by_name "keystonemiddleware"
         setup_dev_lib "keystonemiddleware"
+    else
+        # When not installing from repo, keystonemiddleware is still needed...
+        pip_install_gr keystonemiddleware
     fi
 }
 
@@ -557,7 +549,7 @@
         tail_log key-access /var/log/$APACHE_NAME/keystone_access.log
     else
         # Start Keystone in a screen window
-        run_process key "$KEYSTONE_DIR/bin/keystone-all --config-file $KEYSTONE_CONF"
+        run_process key "$KEYSTONE_BIN_DIR/keystone-all --config-file $KEYSTONE_CONF"
     fi
 
     echo "Waiting for keystone to start..."
diff --git a/lib/ldap b/lib/ldap
index d69d3f8..d2dbc3b 100644
--- a/lib/ldap
+++ b/lib/ldap
@@ -142,7 +142,7 @@
         sudo ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/inetorgperson.ldif
     fi
 
-    pip_install ldappool
+    pip_install_gr ldappool
 
     rm -rf $tmp_ldap_dir
 }
diff --git a/lib/lvm b/lib/lvm
index 39eed00..1fe2683 100644
--- a/lib/lvm
+++ b/lib/lvm
@@ -1,3 +1,5 @@
+#!/bin/bash
+#
 # lib/lvm
 # Configure the default LVM volume group used by Cinder and Nova
 
@@ -32,8 +34,8 @@
 BACKING_FILE_SUFFIX=-backing-file
 
 
-# Entry Points
-# ------------
+# Functions
+# ---------
 
 # _clean_lvm_volume_group removes all default LVM volumes
 #
@@ -52,7 +54,7 @@
 function _clean_lvm_backing_file {
     local backing_file=$1
 
-    # if the backing physical device is a loop device, it was probably setup by devstack
+    # If the backing physical device is a loop device, it was probably set up by DevStack
     if [[ -n "$backing_file" ]] && [[ -e "$backing_file" ]]; then
         local vg_dev=$(sudo losetup -j $backing_file | awk -F':' '/'$BACKING_FILE_SUFFIX'/ { print $1}')
         sudo losetup -d $vg_dev
@@ -108,15 +110,20 @@
     if is_fedora || is_suse; then
         # the service is not started by default
         start_service lvm2-lvmetad
-        start_service tgtd
+        if [ "$CINDER_ISCSI_HELPER" = "tgtadm" ]; then
+            start_service tgtd
+        fi
     fi
 
     # Start with a clean volume group
     _create_lvm_volume_group $vg $size
 
     # Remove iscsi targets
-    sudo tgtadm --op show --mode target | grep Target | cut -f3 -d ' ' | sudo xargs -n1 tgt-admin --delete || true
-
+    if [ "$CINDER_ISCSI_HELPER" = "lioadm" ]; then
+        sudo cinder-rtstool get-targets | sudo xargs -rn 1 cinder-rtstool delete
+    else
+        sudo tgtadm --op show --mode target | grep Target | cut -f3 -d ' ' | sudo xargs -n1 tgt-admin --delete || true
+    fi
     _clean_lvm_volume_group $vg
 }
 
@@ -138,6 +145,39 @@
     fi
 }
 
+# clean_lvm_filter() - Remove the filter rule installed by set_lvm_filter()
+#
+# Usage: clean_lvm_filter()
+function clean_lvm_filter {
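+    # Blank out any line previously tagged with "# from devstack"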
+    sudo sed -i "s/^.*# from devstack$//" /etc/lvm/lvm.conf
+}
+
+# set_lvm_filter() - Gather all devices configured for LVM and use them to
+# build a global device filter added to /etc/lvm/lvm.conf.  Note this uses
+# all PVs currently in use by LVM on the system to build its filter.
+#
+# Usage: set_lvm_filter()
+function set_lvm_filter {
+    local filter_suffix='"r|.*|" ]  # from devstack'
+    local filter_string="global_filter = [ "
+    local pv_info
+    local pv
+    local new
+
+    for pv_info in $(sudo pvs --noheadings -o name); do
+        pv=$(echo -e "${pv_info}" | sed 's/ //g' | sed 's/\/dev\///g')
+        new="\"a|$pv|\", "
+        filter_string=$filter_string$new
+    done
+    filter_string=$filter_string$filter_suffix
+
+    clean_lvm_filter
+    sudo sed -i "/# global_filter = \[*\]/a\    $global_filter$filter_string" /etc/lvm/lvm.conf
+    echo_summary "set lvm.conf device global_filter to: $filter_string"
+}
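+
+# Example resulting line in /etc/lvm/lvm.conf (PV names are illustrative):
+#     global_filter = [ "a|loop0|", "a|sda1|", "r|.*|" ]  # from devstack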
 
 # Restore xtrace
 $MY_XTRACE
diff --git a/lib/neutron b/lib/neutron-legacy
old mode 100755
new mode 100644
similarity index 90%
rename from lib/neutron
rename to lib/neutron-legacy
index e41abaf..dd67f45
--- a/lib/neutron
+++ b/lib/neutron-legacy
@@ -57,15 +57,12 @@
 # Settings
 # --------
 
-# Timeout value in seconds to wait for IPv6 gateway configuration
-GATEWAY_TIMEOUT=30
-
 
 # Neutron Network Configuration
 # -----------------------------
 
 # Subnet IP version
-IP_VERSION=${IP_VERSION:-4}
+IP_VERSION=${IP_VERSION:-"4+6"}
 # Validate IP_VERSION
 if [[ $IP_VERSION != "4" ]] && [[ $IP_VERSION != "6" ]] && [[ $IP_VERSION != "4+6" ]]; then
     die $LINENO "IP_VERSION must be either 4, 6, or 4+6"
@@ -90,12 +87,9 @@
 IPV6_PRIVATE_SUBNET_NAME=${IPV6_PRIVATE_SUBNET_NAME:-ipv6-private-subnet}
 FIXED_RANGE_V6=${FIXED_RANGE_V6:-fd$IPV6_GLOBAL_ID::/64}
 IPV6_PRIVATE_NETWORK_GATEWAY=${IPV6_PRIVATE_NETWORK_GATEWAY:-fd$IPV6_GLOBAL_ID::1}
-IPV6_PUBLIC_RANGE=${IPV6_PUBLIC_RANGE:-fe80:cafe:cafe::/64}
-IPV6_PUBLIC_NETWORK_GATEWAY=${IPV6_PUBLIC_NETWORK_GATEWAY:-fe80:cafe:cafe::2}
-# IPV6_ROUTER_GW_IP must be defined when IP_VERSION=4+6 as it cannot be
-# obtained conventionally until the l3-agent has support for dual-stack
-# TODO (john-davidge) Remove once l3-agent supports dual-stack
-IPV6_ROUTER_GW_IP=${IPV6_ROUTER_GW_IP:-fe80:cafe:cafe::1}
+IPV6_PUBLIC_RANGE=${IPV6_PUBLIC_RANGE:-2001:db8::/64}
+IPV6_PUBLIC_NETWORK_GATEWAY=${IPV6_PUBLIC_NETWORK_GATEWAY:-2001:db8::2}
+IPV6_ROUTER_GW_IP=${IPV6_ROUTER_GW_IP:-2001:db8::1}
 
 # Set up default directories
 GITDIR["python-neutronclient"]=$DEST/python-neutronclient
@@ -153,6 +147,7 @@
 # RHEL's support for namespaces requires using veths with ovs
 Q_OVS_USE_VETH=${Q_OVS_USE_VETH:-False}
 Q_USE_ROOTWRAP=${Q_USE_ROOTWRAP:-True}
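+# Rootwrap daemon mode keeps a persistent privileged helper running so each
+# escalated command avoids a fresh sudo + Python interpreter startup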
+Q_USE_ROOTWRAP_DAEMON=$(trueorfalse True Q_USE_ROOTWRAP_DAEMON)
 # Meta data IP
 Q_META_DATA_IP=${Q_META_DATA_IP:-$SERVICE_HOST}
 # Allow Overlapping IP among subnets
@@ -173,6 +168,10 @@
 ## Provider Network Information
 PROVIDER_SUBNET_NAME=${PROVIDER_SUBNET_NAME:-"provider_net"}
 
+# Define the public bridge that will transmit traffic from VMs to the
+# physical network - used by both the OVS and Linux Bridge drivers.
+PUBLIC_BRIDGE=${PUBLIC_BRIDGE:-br-ex}
+
 # Use flat providernet for public network
 #
 # If Q_USE_PROVIDERNET_FOR_PUBLIC=True, use a flat provider network
@@ -226,6 +225,9 @@
 else
     NEUTRON_ROOTWRAP=$(get_rootwrap_location neutron)
     Q_RR_COMMAND="sudo $NEUTRON_ROOTWRAP $Q_RR_CONF_FILE"
+    if [[ "$Q_USE_ROOTWRAP_DAEMON" == "True" ]]; then
+        Q_RR_DAEMON_COMMAND="sudo $NEUTRON_ROOTWRAP-daemon $Q_RR_CONF_FILE"
+    fi
 fi
 
 
@@ -422,7 +424,7 @@
 # Set common config for all neutron server and agents.
 function configure_neutron {
     _configure_neutron_common
-    iniset_rpc_backend neutron $NEUTRON_CONF DEFAULT
+    iniset_rpc_backend neutron $NEUTRON_CONF
 
     # goes before q-svc to init Q_SERVICE_PLUGIN_CLASSES
     if is_service_enabled q-lbaas; then
@@ -495,8 +497,7 @@
 # create_neutron_cache_dir() - Part of the _neutron_setup_keystone() process
 function create_neutron_cache_dir {
     # Create cache dir
-    sudo mkdir -p $NEUTRON_AUTH_CACHE_DIR
-    sudo chown $STACK_USER $NEUTRON_AUTH_CACHE_DIR
+    sudo install -d -o $STACK_USER $NEUTRON_AUTH_CACHE_DIR
     rm -f $NEUTRON_AUTH_CACHE_DIR/*
 }
 
@@ -776,9 +777,41 @@
     fi
 }
 
+# _move_neutron_addresses_route() - Move the primary IP to the OVS bridge
+# on startup, or back to the public interface on cleanup
+function _move_neutron_addresses_route {
+    local from_intf=$1
+    local to_intf=$2
+    local add_ovs_port=$3
+
+    if [[ -n "$from_intf" && -n "$to_intf" ]]; then
+        # Remove the primary IP address from $from_intf and add it to $to_intf,
+        # along with the default route, if it exists.  Also, when called
+        # on configure we will also add $from_intf as a port on $to_intf,
+        # assuming it is an OVS bridge.
+
+        local IP_BRD=$(ip -4 a s dev $from_intf | awk '/inet/ { print $2, $3, $4; exit }')
+        local DEFAULT_ROUTE_GW=$(ip r | awk "/default.+$from_intf/ { print \$3; exit }")
+        local ADD_OVS_PORT=""
+
+        if [ "$DEFAULT_ROUTE_GW" != "" ]; then
+            ADD_DEFAULT_ROUTE="sudo ip r replace default via $DEFAULT_ROUTE_GW dev $to_intf"
+        fi
+
+        if [[ "$add_ovs_port" == "True" ]]; then
+            ADD_OVS_PORT="sudo ovs-vsctl --may-exist add-port $to_intf $from_intf"
+        fi
+
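+        # Chain the commands on one line so the window without host
+        # connectivity while the address moves is as short as possible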
+        sudo ip addr del $IP_BRD dev $from_intf; sudo ip addr add $IP_BRD dev $to_intf; $ADD_OVS_PORT; $ADD_DEFAULT_ROUTE
+    fi
+}
+
 # cleanup_neutron() - Remove residual data files, anything left over from previous
 # runs that a clean run would need to clean up
 function cleanup_neutron {
+
+    _move_neutron_addresses_route "$OVS_PHYSICAL_BRIDGE" "$PUBLIC_INTERFACE" False
+
     if is_provider_network && is_ironic_hardware; then
         for IP in $(ip addr show dev $OVS_PHYSICAL_BRIDGE | grep ' inet ' | awk '{print $2}'); do
             sudo ip addr del $IP dev $OVS_PHYSICAL_BRIDGE
@@ -791,6 +824,10 @@
         neutron_ovs_base_cleanup
     fi
 
+    if [[ $Q_AGENT == "linuxbridge" ]]; then
+        neutron_lb_cleanup
+    fi
+
     # delete all namespaces created by neutron
     for ns in $(sudo ip netns list | grep -o -E '(qdhcp|qrouter|qlbaas|fip|snat)-[0-9a-f-]*'); do
         sudo ip netns delete ${ns}
@@ -800,10 +837,7 @@
 
 function _create_neutron_conf_dir {
     # Put config files in ``NEUTRON_CONF_DIR`` for everyone to find
-    if [[ ! -d $NEUTRON_CONF_DIR ]]; then
-        sudo mkdir -p $NEUTRON_CONF_DIR
-    fi
-    sudo chown $STACK_USER $NEUTRON_CONF_DIR
+    sudo install -d -o $STACK_USER $NEUTRON_CONF_DIR
 }
 
 # _configure_neutron_common()
@@ -896,6 +930,9 @@
     iniset $NEUTRON_TEST_CONFIG_FILE DEFAULT debug False
     iniset $NEUTRON_TEST_CONFIG_FILE DEFAULT use_namespaces $Q_USE_NAMESPACE
     iniset $NEUTRON_TEST_CONFIG_FILE agent root_helper "$Q_RR_COMMAND"
+    if [[ "$Q_USE_ROOTWRAP_DAEMON" == "True" ]]; then
+        iniset $NEUTRON_TEST_CONFIG_FILE agent root_helper_daemon "$Q_RR_DAEMON_COMMAND"
+    fi
 
     _neutron_setup_interface_driver $NEUTRON_TEST_CONFIG_FILE
 
@@ -910,6 +947,9 @@
     iniset $Q_DHCP_CONF_FILE DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
     iniset $Q_DHCP_CONF_FILE DEFAULT use_namespaces $Q_USE_NAMESPACE
     iniset $Q_DHCP_CONF_FILE DEFAULT root_helper "$Q_RR_COMMAND"
+    if [[ "$Q_USE_ROOTWRAP_DAEMON" == "True" ]]; then
+        iniset $Q_DHCP_CONF_FILE agent root_helper_daemon "$Q_RR_DAEMON_COMMAND"
+    fi
 
     if ! is_service_enabled q-l3; then
         if [[ "$ENABLE_ISOLATED_METADATA" = "True" ]]; then
@@ -943,10 +983,17 @@
     iniset $Q_L3_CONF_FILE DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
     iniset $Q_L3_CONF_FILE DEFAULT use_namespaces $Q_USE_NAMESPACE
     iniset $Q_L3_CONF_FILE DEFAULT root_helper "$Q_RR_COMMAND"
+    if [[ "$Q_USE_ROOTWRAP_DAEMON" == "True" ]]; then
+        iniset $Q_L3_CONF_FILE agent root_helper_daemon "$Q_RR_DAEMON_COMMAND"
+    fi
 
     _neutron_setup_interface_driver $Q_L3_CONF_FILE
 
     neutron_plugin_configure_l3_agent
+
+    if [[ $(ip -4 a s dev "$PUBLIC_INTERFACE" | grep -c 'inet') != 0 ]]; then
+        _move_neutron_addresses_route "$PUBLIC_INTERFACE" "$OVS_PHYSICAL_BRIDGE" True
+    fi
 }
 
 function _configure_neutron_metadata_agent {
@@ -956,6 +1003,9 @@
     iniset $Q_META_CONF_FILE DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
     iniset $Q_META_CONF_FILE DEFAULT nova_metadata_ip $Q_META_DATA_IP
     iniset $Q_META_CONF_FILE DEFAULT root_helper "$Q_RR_COMMAND"
+    if [[ "$Q_USE_ROOTWRAP_DAEMON" == "True" ]]; then
+        iniset $Q_META_CONF_FILE agent root_helper_daemon "$Q_RR_DAEMON_COMMAND"
+    fi
 
     # Configures keystone for metadata_agent
     # The third argument "True" sets auth_url needed to communicate with keystone
@@ -1008,6 +1058,9 @@
     # Specify the default root helper prior to agent configuration to
     # ensure that an agent's configuration can override the default
     iniset /$Q_PLUGIN_CONF_FILE agent root_helper "$Q_RR_COMMAND"
+    if [[ "$Q_USE_ROOTWRAP_DAEMON" == "True" ]]; then
+        iniset /$Q_PLUGIN_CONF_FILE agent root_helper_daemon "$Q_RR_DAEMON_COMMAND"
+    fi
     iniset $NEUTRON_CONF DEFAULT verbose True
     iniset $NEUTRON_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
 
@@ -1036,7 +1089,7 @@
 
     iniset $NEUTRON_CONF DEFAULT verbose True
     iniset $NEUTRON_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
-    iniset $NEUTRON_CONF DEFAULT policy_file $Q_POLICY_FILE
+    iniset $NEUTRON_CONF oslo_policy policy_file $Q_POLICY_FILE
     iniset $NEUTRON_CONF DEFAULT allow_overlapping_ips $Q_ALLOW_OVERLAPPING_IP
 
     iniset $NEUTRON_CONF DEFAULT auth_strategy $Q_AUTH_STRATEGY
@@ -1075,10 +1128,8 @@
 # _neutron_deploy_rootwrap_filters() - deploy rootwrap filters to $Q_CONF_ROOTWRAP_D (owned by root).
 function _neutron_deploy_rootwrap_filters {
     local srcdir=$1
-    mkdir -p -m 755 $Q_CONF_ROOTWRAP_D
-    sudo cp -pr $srcdir/etc/neutron/rootwrap.d/* $Q_CONF_ROOTWRAP_D/
-    sudo chown -R root:root $Q_CONF_ROOTWRAP_D
-    sudo chmod 644 $Q_CONF_ROOTWRAP_D/*
+    sudo install -d -o root -m 755 $Q_CONF_ROOTWRAP_D
+    sudo install -o root -m 644 $srcdir/etc/neutron/rootwrap.d/* $Q_CONF_ROOTWRAP_D/
 }
 
 # _neutron_setup_rootwrap() - configure Neutron's rootwrap
@@ -1097,25 +1148,30 @@
     # Set up ``rootwrap.conf``, pointing to ``$NEUTRON_CONF_DIR/rootwrap.d``
     # location moved in newer versions, prefer new location
     if test -r $NEUTRON_DIR/etc/neutron/rootwrap.conf; then
-        sudo cp -p $NEUTRON_DIR/etc/neutron/rootwrap.conf $Q_RR_CONF_FILE
+        sudo install -o root -g root -m 644 $NEUTRON_DIR/etc/neutron/rootwrap.conf $Q_RR_CONF_FILE
     else
-        sudo cp -p $NEUTRON_DIR/etc/rootwrap.conf $Q_RR_CONF_FILE
+        sudo install -o root -g root -m 644 $NEUTRON_DIR/etc/rootwrap.conf $Q_RR_CONF_FILE
     fi
     sudo sed -e "s:^filters_path=.*$:filters_path=$Q_CONF_ROOTWRAP_D:" -i $Q_RR_CONF_FILE
-    sudo chown root:root $Q_RR_CONF_FILE
-    sudo chmod 0644 $Q_RR_CONF_FILE
+    sudo sed -e 's:^exec_dirs=\(.*\)$:exec_dirs=\1,/usr/local/bin:' -i $Q_RR_CONF_FILE
+
     # Specify ``rootwrap.conf`` as first parameter to neutron-rootwrap
     ROOTWRAP_SUDOER_CMD="$NEUTRON_ROOTWRAP $Q_RR_CONF_FILE *"
+    ROOTWRAP_DAEMON_SUDOER_CMD="$NEUTRON_ROOTWRAP-daemon $Q_RR_CONF_FILE"
 
     # Set up the rootwrap sudoers for neutron
     TEMPFILE=`mktemp`
     echo "$STACK_USER ALL=(root) NOPASSWD: $ROOTWRAP_SUDOER_CMD" >$TEMPFILE
+    echo "$STACK_USER ALL=(root) NOPASSWD: $ROOTWRAP_DAEMON_SUDOER_CMD" >>$TEMPFILE
     chmod 0440 $TEMPFILE
     sudo chown root:root $TEMPFILE
     sudo mv $TEMPFILE /etc/sudoers.d/neutron-rootwrap
 
     # Update the root_helper
     iniset $NEUTRON_CONF agent root_helper "$Q_RR_COMMAND"
+    if [[ "$Q_USE_ROOTWRAP_DAEMON" == "True" ]]; then
+        iniset $NEUTRON_CONF agent root_helper_daemon "$Q_RR_DAEMON_COMMAND"
+    fi
 }
 
 # Configures keystone integration for neutron service and agents
@@ -1211,8 +1267,10 @@
         if is_neutron_ovs_base_plugin && [[ "$Q_USE_NAMESPACE" = "True" ]]; then
             local ext_gw_interface=$(_neutron_get_ext_gw_interface)
             local cidr_len=${FLOATING_RANGE#*/}
-            sudo ip addr add $ext_gw_ip/$cidr_len dev $ext_gw_interface
-            sudo ip link set $ext_gw_interface up
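+            # Only add the gateway address if it is not already configured
+            # and we are not using a flat provider network (unless a public
+            # veth pair is in use)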
+            if [[ $(ip addr show dev $ext_gw_interface | grep -c $ext_gw_ip) == 0 && ( $Q_USE_PROVIDERNET_FOR_PUBLIC == "False" || $Q_USE_PUBLIC_VETH == "True" ) ]]; then
+                sudo ip addr add $ext_gw_ip/$cidr_len dev $ext_gw_interface
+                sudo ip link set $ext_gw_interface up
+            fi
             ROUTER_GW_IP=`neutron port-list -c fixed_ips -c device_owner | grep router_gateway | awk -F '"' -v subnet_id=$PUB_SUBNET_ID '$4 == subnet_id { print $8; }'`
             die_if_not_set $LINENO ROUTER_GW_IP "Failure retrieving ROUTER_GW_IP"
             sudo route add -net $FIXED_RANGE gw $ROUTER_GW_IP
@@ -1237,58 +1295,19 @@
 
     # This logic is specific to using the l3-agent for layer 3
     if is_service_enabled q-l3; then
-        local ipv6_router_gw_port
         # Ensure IPv6 forwarding is enabled on the host
         sudo sysctl -w net.ipv6.conf.all.forwarding=1
         # Configure and enable public bridge
-        if [[ "$IP_VERSION" = "6" ]]; then
-            # Override global IPV6_ROUTER_GW_IP with the true value from neutron
-            IPV6_ROUTER_GW_IP=`neutron port-list -c fixed_ips -c device_owner | grep router_gateway | awk -F '"' -v subnet_id=$ipv6_pub_subnet_id '$4 == subnet_id { print $8; }'`
-            die_if_not_set $LINENO IPV6_ROUTER_GW_IP "Failure retrieving IPV6_ROUTER_GW_IP"
-            ipv6_router_gw_port=`neutron port-list -c id -c fixed_ips -c device_owner | grep router_gateway | awk -F '"' -v subnet_id=$ipv6_pub_subnet_id '$4 == subnet_id { print $1; }' | awk -F ' | ' '{ print $2; }'`
-            die_if_not_set $LINENO ipv6_router_gw_port "Failure retrieving ipv6_router_gw_port"
-        else
-            ipv6_router_gw_port=`neutron port-list -c id -c fixed_ips -c device_owner | grep router_gateway | awk -F '"' -v subnet_id=$PUB_SUBNET_ID '$4 == subnet_id { print $1; }' | awk -F ' | ' '{ print $2; }'`
-            die_if_not_set $LINENO ipv6_router_gw_port "Failure retrieving ipv6_router_gw_port"
-        fi
-
-        # The ovs_base_configure_l3_agent function flushes the public
-        # bridge's ip addresses, so turn IPv6 support in the host off
-        # and then on to recover the public bridge's link local address
-        sudo sysctl -w net.ipv6.conf.${PUBLIC_BRIDGE}.disable_ipv6=1
-        sudo sysctl -w net.ipv6.conf.${PUBLIC_BRIDGE}.disable_ipv6=0
-        if ! ip -6 addr show dev $PUBLIC_BRIDGE | grep 'scope global'; then
-            # Create an IPv6 ULA address for PUBLIC_BRIDGE if one is not present
-            IPV6_BRIDGE_ULA=`uuidgen | sed s/-//g | cut -c 23- | sed -e "s/\(..\)\(....\)\(....\)/\1:\2:\3/"`
-            sudo ip -6 addr add fd$IPV6_BRIDGE_ULA::1 dev $PUBLIC_BRIDGE
-        fi
+        # Override global IPV6_ROUTER_GW_IP with the true value from neutron
+        IPV6_ROUTER_GW_IP=`neutron port-list -c fixed_ips | grep $ipv6_pub_subnet_id | awk -F '"' -v subnet_id=$ipv6_pub_subnet_id '$4 == subnet_id { print $8; }'`
+        die_if_not_set $LINENO IPV6_ROUTER_GW_IP "Failure retrieving IPV6_ROUTER_GW_IP"
 
         if is_neutron_ovs_base_plugin && [[ "$Q_USE_NAMESPACE" = "True" ]]; then
             local ext_gw_interface=$(_neutron_get_ext_gw_interface)
             local ipv6_cidr_len=${IPV6_PUBLIC_RANGE#*/}
 
-            # Define router_ns based on whether DVR is enabled
-            local router_ns=qrouter
-            if [[ "$Q_DVR_MODE" == "dvr_snat" ]]; then
-                router_ns=snat
-            fi
-
             # Configure interface for public bridge
             sudo ip -6 addr add $ipv6_ext_gw_ip/$ipv6_cidr_len dev $ext_gw_interface
-
-            # Wait until layer 3 agent has configured the gateway port on
-            # the public bridge, then add gateway address to the interface
-            # TODO (john-davidge) Remove once l3-agent supports dual-stack
-            if [[ "$IP_VERSION" == "4+6" ]]; then
-                if ! timeout $GATEWAY_TIMEOUT sh -c "until sudo ip netns exec $router_ns-$ROUTER_ID ip addr show qg-${ipv6_router_gw_port:0:11} | grep $ROUTER_GW_IP; do sleep 1; done"; then
-                    die $LINENO "Timeout retrieving ROUTER_GW_IP"
-                fi
-                # Configure the gateway port with the public IPv6 adress
-                sudo ip netns exec $router_ns-$ROUTER_ID ip -6 addr add $IPV6_ROUTER_GW_IP/$ipv6_cidr_len dev qg-${ipv6_router_gw_port:0:11}
-                # Add a default IPv6 route to the neutron router as the
-                # l3-agent does not add one in the dual-stack case
-                sudo ip netns exec $router_ns-$ROUTER_ID ip -6 route replace default via $ipv6_ext_gw_ip dev qg-${ipv6_router_gw_port:0:11}
-            fi
             sudo ip -6 route add $FIXED_RANGE_V6 via $IPV6_ROUTER_GW_IP dev $ext_gw_interface
         fi
         _neutron_set_router_id
@@ -1350,27 +1369,6 @@
     echo "$Q_RR_COMMAND ip netns exec qprobe-$probe_id"
 }
 
-function _ping_check_neutron {
-    local from_net=$1
-    local ip=$2
-    local timeout_sec=$3
-    local expected=${4:-"True"}
-    local check_command=""
-    probe_cmd=`_get_probe_cmd_prefix $from_net`
-    if [[ "$expected" = "True" ]]; then
-        check_command="while ! $probe_cmd ping -w 1 -c 1 $ip; do sleep 1; done"
-    else
-        check_command="while $probe_cmd ping -w 1 -c 1 $ip; do sleep 1; done"
-    fi
-    if ! timeout $timeout_sec sh -c "$check_command"; then
-        if [[ "$expected" = "True" ]]; then
-            die $LINENO "[Fail] Couldn't ping server"
-        else
-            die $LINENO "[Fail] Could ping server"
-        fi
-    fi
-}
-
 # ssh check
 function _ssh_check_neutron {
     local from_net=$1
diff --git a/lib/neutron_plugins/README.md b/lib/neutron_plugins/README.md
index 7192a05..4b220d3 100644
--- a/lib/neutron_plugins/README.md
+++ b/lib/neutron_plugins/README.md
@@ -13,7 +13,7 @@
 
 functions
 ---------
-``lib/neutron`` calls the following functions when the ``$Q_PLUGIN`` is enabled
+``lib/neutron-legacy`` calls the following functions when the ``$Q_PLUGIN`` is enabled
 
 * ``neutron_plugin_create_nova_conf`` :
   set ``NOVA_VIF_DRIVER`` and optionally set options in nova_conf
diff --git a/lib/neutron_plugins/linuxbridge_agent b/lib/neutron_plugins/linuxbridge_agent
index c9ea1ca..b348af9 100644
--- a/lib/neutron_plugins/linuxbridge_agent
+++ b/lib/neutron_plugins/linuxbridge_agent
@@ -7,6 +7,10 @@
 PLUGIN_XTRACE=$(set +o | grep xtrace)
 set +o xtrace
 
+function neutron_lb_cleanup {
+    sudo brctl delbr $PUBLIC_BRIDGE
+}
+
 function is_neutron_ovs_base_plugin {
     # linuxbridge doesn't use OVS
     return 1
@@ -29,6 +33,7 @@
 }
 
 function neutron_plugin_configure_l3_agent {
+    sudo brctl addbr $PUBLIC_BRIDGE
     iniset $Q_L3_CONF_FILE DEFAULT external_network_bridge
     iniset $Q_L3_CONF_FILE DEFAULT l3_agent_manager neutron.agent.l3_agent.L3NATAgentWithStateReport
 }
diff --git a/lib/neutron_plugins/ml2 b/lib/neutron_plugins/ml2
index e3b2c4d..2733f1f 100644
--- a/lib/neutron_plugins/ml2
+++ b/lib/neutron_plugins/ml2
@@ -31,6 +31,9 @@
 Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS=${Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS:-vni_ranges=1001:2000}
 # Default VLAN TypeDriver options
 Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS=${Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS:-}
+# List of extension drivers to load, use '-' instead of ':-' to allow people to
+# explicitly override this to blank
+Q_ML2_PLUGIN_EXT_DRIVERS=${Q_ML2_PLUGIN_EXT_DRIVERS-port_security}
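+# (e.g. setting Q_ML2_PLUGIN_EXT_DRIVERS="" in local.conf loads no extension
+# drivers, while leaving it unset loads port_security)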
 
 # L3 Plugin to load for ML2
 ML2_L3_PLUGIN=${ML2_L3_PLUGIN:-neutron.services.l3_router.l3_router_plugin.L3RouterPlugin}
@@ -89,7 +92,7 @@
 
     # Allow for setup the flat type network
     if [[ -z "$Q_ML2_PLUGIN_FLAT_TYPE_OPTIONS" && -n "$PHYSICAL_NETWORK" ]]; then
-            Q_ML2_PLUGIN_FLAT_TYPE_OPTIONS="flat_networks=$Q_ML2_FLAT_PHYSNET_OPTIONS"
+            Q_ML2_PLUGIN_FLAT_TYPE_OPTIONS="flat_networks=$PHYSICAL_NETWORK"
     fi
     # REVISIT(rkukura): Setting firewall_driver here for
     # neutron.agent.securitygroups_rpc.is_firewall_enabled() which is
@@ -104,13 +107,17 @@
         iniset /$Q_PLUGIN_CONF_FILE securitygroup firewall_driver neutron.agent.firewall.NoopFirewallDriver
     fi
 
-    # Since we enable the tunnel TypeDrivers, also enable a local_ip
-    iniset /$Q_PLUGIN_CONF_FILE ovs local_ip $TUNNEL_ENDPOINT_IP
+    if [[ "$ENABLE_TENANT_TUNNELS" == "True" ]]; then
+        # Set local_ip if TENANT_TUNNELS are enabled.
+        iniset /$Q_PLUGIN_CONF_FILE ovs local_ip $TUNNEL_ENDPOINT_IP
+    fi
 
     populate_ml2_config /$Q_PLUGIN_CONF_FILE ml2 mechanism_drivers=$Q_ML2_PLUGIN_MECHANISM_DRIVERS
 
     populate_ml2_config /$Q_PLUGIN_CONF_FILE ml2 type_drivers=$Q_ML2_PLUGIN_TYPE_DRIVERS
 
+    populate_ml2_config /$Q_PLUGIN_CONF_FILE ml2 extension_drivers=$Q_ML2_PLUGIN_EXT_DRIVERS
+
     populate_ml2_config /$Q_PLUGIN_CONF_FILE ml2 $Q_SRV_EXTRA_OPTS
 
     populate_ml2_config /$Q_PLUGIN_CONF_FILE ml2_type_gre $Q_ML2_PLUGIN_GRE_TYPE_OPTIONS
diff --git a/lib/neutron_plugins/ovs_base b/lib/neutron_plugins/ovs_base
index 2997c6c..81561d3 100644
--- a/lib/neutron_plugins/ovs_base
+++ b/lib/neutron_plugins/ovs_base
@@ -8,7 +8,6 @@
 set +o xtrace
 
 OVS_BRIDGE=${OVS_BRIDGE:-br-int}
-PUBLIC_BRIDGE=${PUBLIC_BRIDGE:-br-ex}
 OVS_DATAPATH_TYPE=${OVS_DATAPATH_TYPE:-""}
 
 function is_neutron_ovs_base_plugin {
@@ -93,11 +92,8 @@
         sudo ip link set $Q_PUBLIC_VETH_EX up
         sudo ip addr flush dev $Q_PUBLIC_VETH_EX
     else
-        # --no-wait causes a race condition if $PUBLIC_BRIDGE is not up when ip addr flush is called
         sudo ovs-vsctl -- --may-exist add-br $PUBLIC_BRIDGE
         sudo ovs-vsctl br-set-external-id $PUBLIC_BRIDGE bridge-id $PUBLIC_BRIDGE
-        # ensure no IP is configured on the public bridge
-        sudo ip addr flush dev $PUBLIC_BRIDGE
     fi
 }
 
diff --git a/lib/neutron_thirdparty/README.md b/lib/neutron_thirdparty/README.md
index 5655e0b..905ae77 100644
--- a/lib/neutron_thirdparty/README.md
+++ b/lib/neutron_thirdparty/README.md
@@ -10,7 +10,7 @@
 
 functions
 ---------
-``lib/neutron`` calls the following functions when the ``<third_party>`` is enabled
+``lib/neutron-legacy`` calls the following functions when the ``<third_party>`` is enabled
 
 functions to be implemented
 * ``configure_<third_party>``:
diff --git a/lib/nova b/lib/nova
index 199daee..11fa2a0 100644
--- a/lib/nova
+++ b/lib/nova
@@ -16,6 +16,7 @@
 #
 # - install_nova
 # - configure_nova
+# - _config_nova_apache_wsgi
 # - create_nova_conf
 # - init_nova
 # - start_nova
@@ -32,9 +33,16 @@
 
 # Set up default directories
 GITDIR["python-novaclient"]=$DEST/python-novaclient
-
-
 NOVA_DIR=$DEST/nova
+
+# Nova virtual environment
+if [[ ${USE_VENV} = True ]]; then
+    PROJECT_VENV["nova"]=${NOVA_DIR}.venv
+    NOVA_BIN_DIR=${PROJECT_VENV["nova"]}/bin
+else
+    NOVA_BIN_DIR=$(get_python_exec_prefix)
+fi
+
 NOVA_STATE_PATH=${NOVA_STATE_PATH:=$DATA_DIR/nova}
 # INSTANCES_PATH is the previous name for this
 NOVA_INSTANCES_PATH=${NOVA_INSTANCES_PATH:=${INSTANCES_PATH:=$NOVA_STATE_PATH/instances}}
@@ -48,12 +56,22 @@
 
 NOVA_API_PASTE_INI=${NOVA_API_PASTE_INI:-$NOVA_CONF_DIR/api-paste.ini}
 # NOVA_API_VERSION valid options
-#   - default - setup API end points as nova does out of the box
-#   - v21default - make v21 the default on /v2
+# - default - set up API endpoints as nova does out of the box
+# - v21default - make v21 the default on /v2
+#
 # NOTE(sdague): this is for transitional testing of the Nova v21 API.
 # Expect to remove in L or M.
 NOVA_API_VERSION=${NOVA_API_VERSION-default}
 
+if is_suse; then
+    NOVA_WSGI_DIR=${NOVA_WSGI_DIR:-/srv/www/htdocs/nova}
+else
+    NOVA_WSGI_DIR=${NOVA_WSGI_DIR:-/var/www/nova}
+fi
+
+# Toggle for deploying Nova-API under HTTPD + mod_wsgi
+NOVA_USE_MOD_WSGI=${NOVA_USE_MOD_WSGI:-False}
+
 if is_ssl_enabled_service "nova" || is_service_enabled tls-proxy; then
     NOVA_SERVICE_PROTOCOL="https"
     EC2_SERVICE_PROTOCOL="https"
@@ -69,19 +87,9 @@
 EC2_SERVICE_PORT=${EC2_SERVICE_PORT:-8773}
 EC2_SERVICE_PORT_INT=${EC2_SERVICE_PORT_INT:-18773}
 
-# Support entry points installation of console scripts
-if [[ -d $NOVA_DIR/bin ]]; then
-    NOVA_BIN_DIR=$NOVA_DIR/bin
-else
-    NOVA_BIN_DIR=$(get_python_exec_prefix)
-fi
-
-# Set the paths of certain binaries
-NOVA_ROOTWRAP=$(get_rootwrap_location nova)
-
 # Option to enable/disable config drive
-# NOTE: Set FORCE_CONFIG_DRIVE="False" to turn OFF config drive
-FORCE_CONFIG_DRIVE=${FORCE_CONFIG_DRIVE:-"always"}
+# NOTE: Set ``FORCE_CONFIG_DRIVE="False"`` to turn OFF config drive
+FORCE_CONFIG_DRIVE=${FORCE_CONFIG_DRIVE:-"True"}
 
 # Nova supports pluggable schedulers.  The default ``FilterScheduler``
 # should work in most cases.
@@ -92,11 +100,11 @@
 # Set default defaults here as some hypervisor drivers override these
 PUBLIC_INTERFACE_DEFAULT=br100
 FLAT_NETWORK_BRIDGE_DEFAULT=br100
-# set the GUEST_INTERFACE_DEFAULT to some interface on the box so that
-# the default isn't completely crazy. This will match eth*, em*, or
-# the new p* interfaces, then basically picks the first
+# Set ``GUEST_INTERFACE_DEFAULT`` to some interface on the box so that
+# the default isn't completely crazy. This will match ``eth*``, ``em*``, or
+# the new ``p*`` interfaces, then basically picks the first
 # alphabetically. It's probably wrong, however it's less wrong than
-# always using 'eth0' which doesn't exist on new Linux distros at all.
+# always using ``eth0`` which doesn't exist on new Linux distros at all.
 GUEST_INTERFACE_DEFAULT=$(ip link \
     | grep 'state UP' \
     | awk '{print $2}' \
@@ -104,8 +112,8 @@
     | grep ^[ep] \
     | head -1)
 
-# $NOVA_VNC_ENABLED can be used to forcibly enable vnc configuration.
-# In multi-node setups allows compute hosts to not run n-novnc.
+# ``NOVA_VNC_ENABLED`` can be used to forcibly enable VNC configuration.
+# In multi-node setups allows compute hosts to not run ``n-novnc``.
 NOVA_VNC_ENABLED=$(trueorfalse False NOVA_VNC_ENABLED)
 
 # Get hypervisor configuration
@@ -147,7 +155,7 @@
 # running the VM - removing a SPOF and bandwidth bottleneck.
 MULTI_HOST=$(trueorfalse False MULTI_HOST)
 
-# ``NOVA_ALLOW_MOVE_TO_SAME_HOST` can be set to False in multi node devstack,
+# ``NOVA_ALLOW_MOVE_TO_SAME_HOST`` can be set to False in multi node DevStack,
 # where there are at least two nova-computes.
 NOVA_ALLOW_MOVE_TO_SAME_HOST=$(trueorfalse True NOVA_ALLOW_MOVE_TO_SAME_HOST)
 
@@ -225,45 +233,72 @@
     #fi
 }
 
-# configure_nova_rootwrap() - configure Nova's rootwrap
-function configure_nova_rootwrap {
-    # Deploy new rootwrap filters files (owned by root).
-    # Wipe any existing rootwrap.d files first
-    if [[ -d $NOVA_CONF_DIR/rootwrap.d ]]; then
-        sudo rm -rf $NOVA_CONF_DIR/rootwrap.d
-    fi
-    # Deploy filters to /etc/nova/rootwrap.d
-    sudo mkdir -m 755 $NOVA_CONF_DIR/rootwrap.d
-    sudo cp $NOVA_DIR/etc/nova/rootwrap.d/*.filters $NOVA_CONF_DIR/rootwrap.d
-    sudo chown -R root:root $NOVA_CONF_DIR/rootwrap.d
-    sudo chmod 644 $NOVA_CONF_DIR/rootwrap.d/*
-    # Set up rootwrap.conf, pointing to /etc/nova/rootwrap.d
-    sudo cp $NOVA_DIR/etc/nova/rootwrap.conf $NOVA_CONF_DIR/
-    sudo sed -e "s:^filters_path=.*$:filters_path=$NOVA_CONF_DIR/rootwrap.d:" -i $NOVA_CONF_DIR/rootwrap.conf
-    sudo chown root:root $NOVA_CONF_DIR/rootwrap.conf
-    sudo chmod 0644 $NOVA_CONF_DIR/rootwrap.conf
-    # Specify rootwrap.conf as first parameter to nova-rootwrap
-    local rootwrap_sudoer_cmd="$NOVA_ROOTWRAP $NOVA_CONF_DIR/rootwrap.conf *"
+# _cleanup_nova_apache_wsgi() - Remove WSGI files, disable and remove Apache vhost files
+function _cleanup_nova_apache_wsgi {
+    sudo rm -f $NOVA_WSGI_DIR/*
+    sudo rm -f $(apache_site_config_for nova-api)
+    sudo rm -f $(apache_site_config_for nova-ec2-api)
+}
 
-    # Set up the rootwrap sudoers for nova
-    local tempfile=`mktemp`
-    echo "$STACK_USER ALL=(root) NOPASSWD: $rootwrap_sudoer_cmd" >$tempfile
-    chmod 0440 $tempfile
-    sudo chown root:root $tempfile
-    sudo mv $tempfile /etc/sudoers.d/nova-rootwrap
+# _config_nova_apache_wsgi() - Set WSGI config files of Nova
+function _config_nova_apache_wsgi {
+    sudo mkdir -p $NOVA_WSGI_DIR
+
+    local nova_apache_conf=$(apache_site_config_for nova-api)
+    local nova_ec2_apache_conf=$(apache_site_config_for nova-ec2-api)
+    local nova_ssl=""
+    local nova_certfile=""
+    local nova_keyfile=""
+    local nova_api_port=$NOVA_SERVICE_PORT
+    local nova_ec2_api_port=$EC2_SERVICE_PORT
+    local venv_path=""
+
+    if is_ssl_enabled_service nova-api; then
+        nova_ssl="SSLEngine On"
+        nova_certfile="SSLCertificateFile $NOVA_SSL_CERT"
+        nova_keyfile="SSLCertificateKeyFile $NOVA_SSL_KEY"
+    fi
+    if [[ ${USE_VENV} = True ]]; then
+        venv_path="python-path=${PROJECT_VENV["nova"]}/lib/$(python_version)/site-packages"
+    fi
+
+    # copy proxy vhost and wsgi helper files
+    sudo cp $NOVA_DIR/nova/wsgi/nova-api.py $NOVA_WSGI_DIR/nova-api
+    sudo cp $NOVA_DIR/nova/wsgi/nova-ec2-api.py $NOVA_WSGI_DIR/nova-ec2-api
+
+    sudo cp $FILES/apache-nova-api.template $nova_apache_conf
+    sudo sed -e "
+        s|%PUBLICPORT%|$nova_api_port|g;
+        s|%APACHE_NAME%|$APACHE_NAME|g;
+        s|%PUBLICWSGI%|$NOVA_WSGI_DIR/nova-api|g;
+        s|%SSLENGINE%|$nova_ssl|g;
+        s|%SSLCERTFILE%|$nova_certfile|g;
+        s|%SSLKEYFILE%|$nova_keyfile|g;
+        s|%USER%|$STACK_USER|g;
+        s|%VIRTUALENV%|$venv_path|g
+    " -i $nova_apache_conf
+
+    sudo cp $FILES/apache-nova-ec2-api.template $nova_ec2_apache_conf
+    sudo sed -e "
+        s|%PUBLICPORT%|$nova_ec2_api_port|g;
+        s|%APACHE_NAME%|$APACHE_NAME|g;
+        s|%PUBLICWSGI%|$NOVA_WSGI_DIR/nova-ec2-api|g;
+        s|%SSLENGINE%|$nova_ssl|g;
+        s|%SSLCERTFILE%|$nova_certfile|g;
+        s|%SSLKEYFILE%|$nova_keyfile|g;
+        s|%USER%|$STACK_USER|g;
+        s|%VIRTUALENV%|$venv_path|g
+    " -i $nova_ec2_apache_conf
 }
 
 # configure_nova() - Set config files, create data dirs, etc
 function configure_nova {
     # Put config files in ``/etc/nova`` for everyone to find
-    if [[ ! -d $NOVA_CONF_DIR ]]; then
-        sudo mkdir -p $NOVA_CONF_DIR
-    fi
-    sudo chown $STACK_USER $NOVA_CONF_DIR
+    sudo install -d -o $STACK_USER $NOVA_CONF_DIR
 
-    cp -p $NOVA_DIR/etc/nova/policy.json $NOVA_CONF_DIR
+    install_default_policy nova
 
-    configure_nova_rootwrap
+    configure_rootwrap nova
 
     if [[ "$ENABLED_SERVICES" =~ "n-api" ]]; then
         # Get the sample configuration file in place
@@ -318,8 +353,7 @@
         # ----------------
 
         # Nova stores each instance in its own directory.
-        sudo mkdir -p $NOVA_INSTANCES_PATH
-        sudo chown -R $STACK_USER $NOVA_INSTANCES_PATH
+        sudo install -d -o $STACK_USER $NOVA_INSTANCES_PATH
 
         # You can specify a different disk to be mounted and used for backing the
         # virtual machines.  If there is a partition labeled nova-instances we
@@ -426,7 +460,6 @@
     iniset $NOVA_CONF DEFAULT debug "$ENABLE_DEBUG_LOG_LEVEL"
     if [ "$NOVA_ALLOW_MOVE_TO_SAME_HOST" == "True" ]; then
         iniset $NOVA_CONF DEFAULT allow_resize_to_same_host "True"
-        iniset $NOVA_CONF DEFAULT allow_migrate_to_same_host "True"
     fi
     iniset $NOVA_CONF DEFAULT api_paste_config "$NOVA_API_PASTE_INI"
     iniset $NOVA_CONF DEFAULT rootwrap_config "$NOVA_CONF_DIR/rootwrap.conf"
@@ -437,7 +470,7 @@
     iniset $NOVA_CONF DEFAULT s3_host "$SERVICE_HOST"
     iniset $NOVA_CONF DEFAULT s3_port "$S3_SERVICE_PORT"
     iniset $NOVA_CONF DEFAULT my_ip "$HOST_IP"
-    iniset $NOVA_CONF DEFAULT sql_connection `database_connection_url nova`
+    iniset $NOVA_CONF database connection `database_connection_url nova`
     iniset $NOVA_CONF DEFAULT instance_name_template "${INSTANCE_NAME_PREFIX}%08x"
     iniset $NOVA_CONF osapi_v3 enabled "True"
 
@@ -456,6 +489,7 @@
         if is_service_enabled tls-proxy; then
             # Set the service port for a proxy to take the original
             iniset $NOVA_CONF DEFAULT osapi_compute_listen_port "$NOVA_SERVICE_PORT_INT"
+            iniset $NOVA_CONF DEFAULT osapi_compute_link_prefix $NOVA_SERVICE_PROTOCOL://$NOVA_SERVICE_HOST:$NOVA_SERVICE_PORT
         fi
 
         configure_auth_token_middleware $NOVA_CONF nova $NOVA_AUTH_CACHE_DIR
@@ -471,7 +505,7 @@
 
     if [ -n "$NOVA_STATE_PATH" ]; then
         iniset $NOVA_CONF DEFAULT state_path "$NOVA_STATE_PATH"
-        iniset $NOVA_CONF DEFAULT lock_path "$NOVA_STATE_PATH"
+        iniset $NOVA_CONF oslo_concurrency lock_path "$NOVA_STATE_PATH"
     fi
     if [ -n "$NOVA_INSTANCES_PATH" ]; then
         iniset $NOVA_CONF DEFAULT instances_path "$NOVA_INSTANCES_PATH"
@@ -487,12 +521,16 @@
         iniset $NOVA_CONF DEFAULT force_config_drive "$FORCE_CONFIG_DRIVE"
     fi
     # Format logging
-    if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ]; then
+    if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ] && [ "$NOVA_USE_MOD_WSGI" == "False" ]  ; then
         setup_colorized_logging $NOVA_CONF DEFAULT
     else
         # Show user_name and project_name instead of user_id and project_id
         iniset $NOVA_CONF DEFAULT logging_context_format_string "%(asctime)s.%(msecs)03d %(levelname)s %(name)s [%(request_id)s %(user_name)s %(project_name)s] %(instance)s%(message)s"
     fi
+    if [ "$NOVA_USE_MOD_WSGI" == "True" ]; then
+        _config_nova_apache_wsgi
+    fi
+
     if is_service_enabled ceilometer; then
         iniset $NOVA_CONF DEFAULT instance_usage_audit "True"
         iniset $NOVA_CONF DEFAULT instance_usage_audit_period "hour"
@@ -537,7 +575,7 @@
 
     iniset $NOVA_CONF DEFAULT ec2_dmz_host "$EC2_DMZ_HOST"
     iniset $NOVA_CONF DEFAULT keystone_ec2_url $KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_SERVICE_PORT/v2.0/ec2tokens
-    iniset_rpc_backend nova $NOVA_CONF DEFAULT
+    iniset_rpc_backend nova $NOVA_CONF
     iniset $NOVA_CONF glance api_servers "${GLANCE_SERVICE_PROTOCOL}://${GLANCE_HOSTPORT}"
 
     iniset $NOVA_CONF DEFAULT osapi_compute_workers "$API_WORKERS"
@@ -577,7 +615,7 @@
 function init_nova_cells {
     if is_service_enabled n-cell; then
         cp $NOVA_CONF $NOVA_CELLS_CONF
-        iniset $NOVA_CELLS_CONF DEFAULT sql_connection `database_connection_url $NOVA_CELLS_DB`
+        iniset $NOVA_CELLS_CONF database connection `database_connection_url $NOVA_CELLS_DB`
         iniset $NOVA_CELLS_CONF DEFAULT rabbit_virtual_host child_cell
         iniset $NOVA_CELLS_CONF DEFAULT dhcpbridge_flagfile $NOVA_CELLS_CONF
         iniset $NOVA_CELLS_CONF cells enable True
@@ -603,8 +641,7 @@
 # create_nova_cache_dir() - Part of the init_nova() process
 function create_nova_cache_dir {
     # Create cache dir
-    sudo mkdir -p $NOVA_AUTH_CACHE_DIR
-    sudo chown $STACK_USER $NOVA_AUTH_CACHE_DIR
+    sudo install -d -o $STACK_USER $NOVA_AUTH_CACHE_DIR
     rm -f $NOVA_AUTH_CACHE_DIR/*
 }
 
@@ -621,8 +658,7 @@
 # create_nova_keys_dir() - Part of the init_nova() process
 function create_nova_keys_dir {
     # Create keys dir
-    sudo mkdir -p ${NOVA_STATE_PATH}/keys
-    sudo chown -R $STACK_USER ${NOVA_STATE_PATH}
+    sudo install -d -o $STACK_USER ${NOVA_STATE_PATH} ${NOVA_STATE_PATH}/keys
 }
 
 # init_nova() - Initialize databases, etc.
@@ -691,6 +727,13 @@
     git_clone $NOVA_REPO $NOVA_DIR $NOVA_BRANCH
     setup_develop $NOVA_DIR
     sudo install -D -m 0644 -o $STACK_USER {$NOVA_DIR/tools/,/etc/bash_completion.d/}nova-manage.bash_completion
+
+    if [ "$NOVA_USE_MOD_WSGI" == "True" ]; then
+        install_apache_wsgi
+        if is_ssl_enabled_service "nova-api"; then
+            enable_mod_ssl
+        fi
+    fi
 }
 
 # start_nova_api() - Start the API process ahead of other things
@@ -703,7 +746,22 @@
         service_protocol="http"
     fi
 
-    run_process n-api "$NOVA_BIN_DIR/nova-api"
+    # Hack to set the path for rootwrap
+    local old_path=$PATH
+    export PATH=$NOVA_BIN_DIR:$PATH
+
+    # If the site is not enabled then we are in a Grenade scenario
+    local enabled_site_file=$(apache_site_config_for nova-api)
+    if [ -f ${enabled_site_file} ] && [ "$NOVA_USE_MOD_WSGI" == "True" ]; then
+        enable_apache_site nova-api
+        enable_apache_site nova-ec2-api
+        restart_apache_server
+        tail_log nova /var/log/$APACHE_NAME/nova-api.log
+        tail_log nova /var/log/$APACHE_NAME/nova-ec2-api.log
+    else
+        run_process n-api "$NOVA_BIN_DIR/nova-api"
+    fi
+
     echo "Waiting for nova-api to start..."
     if ! wait_for_service $SERVICE_TIMEOUT $service_protocol://$SERVICE_HOST:$service_port; then
         die $LINENO "nova-api did not start"
@@ -714,10 +772,16 @@
         start_tls_proxy '*' $NOVA_SERVICE_PORT $NOVA_SERVICE_HOST $NOVA_SERVICE_PORT_INT &
         start_tls_proxy '*' $EC2_SERVICE_PORT $NOVA_SERVICE_HOST $EC2_SERVICE_PORT_INT &
     fi
+
+    export PATH=$old_path
 }
 
 # start_nova_compute() - Start the compute process
 function start_nova_compute {
+    # Hack to set the path for rootwrap
+    local old_path=$PATH
+    export PATH=$NOVA_BIN_DIR:$PATH
+
     if is_service_enabled n-cell; then
         local compute_cell_conf=$NOVA_CELLS_CONF
     else
@@ -745,10 +809,16 @@
         fi
         run_process n-cpu "$NOVA_BIN_DIR/nova-compute --config-file $compute_cell_conf"
     fi
+
+    export PATH=$old_path
 }
 
 # start_nova() - Start running processes, including screen
 function start_nova_rest {
+    # Hack to set the path for rootwrap
+    local old_path=$PATH
+    export PATH=$NOVA_BIN_DIR:$PATH
+
     local api_cell_conf=$NOVA_CONF
     if is_service_enabled n-cell; then
         local compute_cell_conf=$NOVA_CELLS_CONF
@@ -776,6 +846,8 @@
     # Swift will act as s3 objectstore.
     is_service_enabled swift3 || \
         run_process n-obj "$NOVA_BIN_DIR/nova-objectstore --config-file $api_cell_conf"
+
+    export PATH=$old_path
 }
 
 function start_nova {
@@ -798,6 +870,13 @@
 }
 
 function stop_nova_rest {
+    if [ "$NOVA_USE_MOD_WSGI" == "True" ]; then
+        disable_apache_site nova-api
+        disable_apache_site nova-ec2-api
+        restart_apache_server
+    else
+        stop_process n-api
+    fi
     # Kill the nova screen windows
     # Some services are listed here twice since more than one instance
     # of a service may be running in certain configs.
diff --git a/lib/nova_plugins/functions-libvirt b/lib/nova_plugins/functions-libvirt
old mode 100644
new mode 100755
index 4d617e8..96d8a44
--- a/lib/nova_plugins/functions-libvirt
+++ b/lib/nova_plugins/functions-libvirt
@@ -14,33 +14,31 @@
 # Defaults
 # --------
 
-# if we should turn on massive libvirt debugging
-DEBUG_LIBVIRT=$(trueorfalse False DEBUG_LIBVIRT)
+# Turn on selective debug log filters for libvirt.
+# (NOTE: Enabled by default because the log filters set in the
+# 'configure_libvirt' function further below are _selective_ and not
+# extremely verbose.)
+DEBUG_LIBVIRT=$(trueorfalse True DEBUG_LIBVIRT)
 
 # Installs required distro-specific libvirt packages.
 function install_libvirt {
     if is_ubuntu; then
-        install_package qemu-kvm
-        install_package libvirt-bin
-        install_package python-libvirt
-        install_package python-guestfs
+        if is_arch "aarch64" && [[ ${DISTRO} =~ (trusty|utopic) ]]; then
+            install_package qemu-system
+        else
+            install_package qemu-kvm
+            install_package libguestfs0
+            install_package python-guestfs
+        fi
+        install_package libvirt-bin libvirt-dev
+        pip_install_gr libvirt-python
+        #pip_install_gr <there-is-no-guestfs-in-pypi>
     elif is_fedora || is_suse; then
         install_package kvm
-        install_package libvirt
-        install_package libvirt-python
+        install_package libvirt libvirt-devel
+        pip_install_gr libvirt-python
         install_package python-libguestfs
     fi
-
-    # Restart firewalld after install of libvirt to avoid a problem
-    # with polkit, which libvirtd brings in.  See
-    # https://bugzilla.redhat.com/show_bug.cgi?id=1099031
-
-    # Note there is a difference between F20 rackspace cloud images
-    # and HP images used in the gate; rackspace has firewalld but hp
-    # cloud doesn't.
-    if is_fedora && is_package_installed firewalld; then
-        sudo service firewalld restart || true
-    fi
 }
 
 # Configures the installed libvirt system so that is accessible by
@@ -97,9 +95,9 @@
             # source file paths, not relative paths. This screws with the matching
             # of '1:libvirt' making everything turn on. So use libvirt.c for now.
             # This will have to be re-visited when Ubuntu ships libvirt >= 1.2.3
-            local log_filters="1:libvirt.c 1:qemu 1:conf 1:security 3:object 3:event 3:json 3:file 1:util"
+            local log_filters="1:libvirt.c 1:qemu 1:conf 1:security 3:object 3:event 3:json 3:file 1:util 1:qemu_monitor"
         else
-            local log_filters="1:libvirt 1:qemu 1:conf 1:security 3:object 3:event 3:json 3:file 1:util"
+            local log_filters="1:libvirt 1:qemu 1:conf 1:security 3:object 3:event 3:json 3:file 1:util 1:qemu_monitor"
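+            # (libvirt log level 1 == debug, 3 == warning; qemu_monitor at
+            # debug level captures the QMP traffic with each guest)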
         fi
         local log_outputs="1:file:/var/log/libvirt/libvirtd.log"
         if ! grep -q "log_filters=\"$log_filters\"" /etc/libvirt/libvirtd.conf; then
@@ -110,10 +108,18 @@
         fi
     fi
 
+    # Update the libvirt cpu map with a gate64 cpu model. This enables nova
+    # live migration for 64-bit guest OSes on heterogeneous cloud "hardware".
+    if [[ -f /usr/share/libvirt/cpu_map.xml ]] ; then
+        sudo $TOP_DIR/tools/cpu_map_update.py /usr/share/libvirt/cpu_map.xml
+    fi
+
     # libvirt detects various settings on startup, as we potentially changed
     # the system configuration (modules, filesystems), we need to restart
-    # libvirt to detect those changes.
-    restart_service $LIBVIRT_DAEMON
+    # libvirt to detect those changes. Use a stop start as otherwise the new
+    # cpu_map is not loaded properly on some systems (Ubuntu).
+    stop_service $LIBVIRT_DAEMON
+    start_service $LIBVIRT_DAEMON
 }
 
 
diff --git a/lib/nova_plugins/hypervisor-ironic b/lib/nova_plugins/hypervisor-ironic
index 0169d73..b9e286d 100644
--- a/lib/nova_plugins/hypervisor-ironic
+++ b/lib/nova_plugins/hypervisor-ironic
@@ -54,9 +54,7 @@
 
 # install_nova_hypervisor() - Install external components
 function install_nova_hypervisor {
-    if ! is_service_enabled neutron; then
-        die $LINENO "Neutron should be enabled for usage of the Ironic Nova driver."
-    elif is_ironic_hardware; then
+    if is_ironic_hardware; then
         return
     fi
     install_libvirt
diff --git a/lib/nova_plugins/hypervisor-libvirt b/lib/nova_plugins/hypervisor-libvirt
index 4d1eb6c..a6a87f9 100644
--- a/lib/nova_plugins/hypervisor-libvirt
+++ b/lib/nova_plugins/hypervisor-libvirt
@@ -54,6 +54,12 @@
         iniset $NOVA_CONF DEFAULT vnc_enabled "false"
     fi
 
+    # arm64-specific configuration
+    if is_arch "aarch64"; then
+        # arm64 architecture currently does not support graphical consoles.
+        iniset $NOVA_CONF DEFAULT vnc_enabled "false"
+    fi
+
     ENABLE_FILE_INJECTION=$(trueorfalse False ENABLE_FILE_INJECTION)
     if [[ "$ENABLE_FILE_INJECTION" = "True" ]] ; then
         # When libguestfs is available for file injection, enable using
diff --git a/lib/nova_plugins/hypervisor-xenserver b/lib/nova_plugins/hypervisor-xenserver
index 4d0ec89..efce383 100644
--- a/lib/nova_plugins/hypervisor-xenserver
+++ b/lib/nova_plugins/hypervisor-xenserver
@@ -94,7 +94,7 @@
 
 # install_nova_hypervisor() - Install external components
 function install_nova_hypervisor {
-    pip_install xenapi
+    pip_install_gr xenapi
 }
 
 # start_nova_hypervisor - Start any required external services
diff --git a/lib/oslo b/lib/oslo
index 86efb60..d9688a0 100644
--- a/lib/oslo
+++ b/lib/oslo
@@ -2,7 +2,7 @@
 #
 # lib/oslo
 #
-# Functions to install oslo libraries from git
+# Functions to install **Oslo** libraries from git
 #
 # We need this to handle the fact that projects would like to use
 # pre-released versions of oslo libraries.
@@ -46,8 +46,9 @@
 # Support entry points installation of console scripts
 OSLO_BIN_DIR=$(get_python_exec_prefix)
 
-# Entry Points
-# ------------
+
+# Functions
+# ---------
 
 function _do_install_oslo_lib {
     local name=$1
diff --git a/lib/rpc_backend b/lib/rpc_backend
index ff22bbf..33ab03d 100644
--- a/lib/rpc_backend
+++ b/lib/rpc_backend
@@ -1,8 +1,7 @@
 #!/bin/bash
 #
 # lib/rpc_backend
-# Interface for interactig with different rpc backend
-# rpc backend settings
+# Interface for interacting with different RPC backends
 
 # Dependencies:
 #
@@ -27,10 +26,10 @@
 # messaging server as a service, which it really isn't for multi host
 QPID_HOST=${QPID_HOST:-}
 
+
 # Functions
 # ---------
 
-
 # Make sure we only have one rpc backend enabled.
 # Also check the specified rpc backend is available on your platform.
 function check_rpc_backend {
@@ -142,7 +141,7 @@
         # TODO(kgiusti) can remove once python qpid bindings are
         # available on all supported platforms _and_ pyngus is added
         # to the requirements.txt file in oslo.messaging
-        pip_install pyngus
+        pip_install_gr pyngus
     fi
 
     if is_service_enabled rabbit; then
@@ -195,14 +194,20 @@
         # NOTE(bnemec): Retry initial rabbitmq configuration to deal with
         # the fact that sometimes it fails to start properly.
         # Reference: https://bugzilla.redhat.com/show_bug.cgi?id=1144100
+        # NOTE(tonyb): Extend the original retry logic to only restart rabbitmq
+        # every second time around the loop.
+        # See: https://bugs.launchpad.net/devstack/+bug/1449056 for details on
+        # why this is needed.  This can be seen on vivid and Debian unstable
+        # (May 2015)
+        # TODO(tonyb): Remove this when Debian and Ubuntu have a fixed systemd
+        # service file.
         local i
-        for i in `seq 10`; do
+        for i in `seq 20`; do
             local rc=0
 
-            [[ $i -eq "10" ]] && die $LINENO "Failed to set rabbitmq password"
+            [[ $i -eq "20" ]] && die $LINENO "Failed to set rabbitmq password"
 
-            if is_fedora || is_suse; then
-                # service is not started by default
+            if [[ $(( i % 2 )) == "0" ]] ; then
                 restart_service rabbitmq-server
             fi
 
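The loop above is a retry-with-periodic-restart pattern: attempt the
operation on every pass, but only bounce the service on every second pass so
a slow-starting rabbitmq is not kicked while it is still coming up. A minimal
sketch of the same pattern, with ``rabbitmqctl change_password`` standing in
for the real per-iteration work::

    for i in $(seq 20); do
        # Give up once the attempt budget is exhausted
        [[ $i -eq "20" ]] && die $LINENO "Failed to set rabbitmq password"
        # Restart only on even iterations, i.e. every other attempt
        if [[ $(( i % 2 )) == "0" ]]; then
            restart_service rabbitmq-server
        fi
        # The operation being retried; stop looping on success
        if sudo rabbitmqctl change_password $RABBIT_USERID $RABBIT_PASSWORD; then
            break
        fi
        sleep 1
    done
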
@@ -233,17 +238,25 @@
     fi
 }
 
+# Build a transport URL string for the enabled RPC backend
+function get_transport_url {
+    if is_service_enabled qpid || [ -n "$QPID_HOST" ]; then
+        echo "qpid://$QPID_USERNAME:$QPID_PASSWORD@$QPID_HOST:5672/"
+    elif is_service_enabled rabbit || { [ -n "$RABBIT_HOST" ] && [ -n "$RABBIT_PASSWORD" ]; }; then
+        echo "rabbit://$RABBIT_USERID:$RABBIT_PASSWORD@$RABBIT_HOST:5672/"
+    fi
+}
+
 # iniset configuration
 function iniset_rpc_backend {
     local package=$1
     local file=$2
-    local section=$3
+    local section=${3:-DEFAULT}
     if is_service_enabled zeromq; then
         iniset $file $section rpc_backend "zmq"
         iniset $file $section rpc_zmq_host `hostname`
         if [ "$ZEROMQ_MATCHMAKER" == "redis" ]; then
-            iniset $file $section rpc_zmq_matchmaker \
-                oslo.messaging._drivers.matchmaker_redis.MatchMakerRedis
+            iniset $file $section rpc_zmq_matchmaker "redis"
             MATCHMAKER_REDIS_HOST=${MATCHMAKER_REDIS_HOST:-127.0.0.1}
             iniset $file matchmaker_redis host $MATCHMAKER_REDIS_HOST
         else
@@ -263,9 +276,15 @@
         fi
     elif is_service_enabled rabbit || { [ -n "$RABBIT_HOST" ] && [ -n "$RABBIT_PASSWORD" ]; }; then
         iniset $file $section rpc_backend "rabbit"
-        iniset $file $section rabbit_hosts $RABBIT_HOST
-        iniset $file $section rabbit_password $RABBIT_PASSWORD
-        iniset $file $section rabbit_userid $RABBIT_USERID
+        iniset $file oslo_messaging_rabbit rabbit_hosts $RABBIT_HOST
+        iniset $file oslo_messaging_rabbit rabbit_password $RABBIT_PASSWORD
+        iniset $file oslo_messaging_rabbit rabbit_userid $RABBIT_USERID
+        if [ -n "$RABBIT_HEARTBEAT_TIMEOUT_THRESHOLD" ]; then
+            iniset $file oslo_messaging_rabbit heartbeat_timeout_threshold $RABBIT_HEARTBEAT_TIMEOUT_THRESHOLD
+        fi
+        if [ -n "$RABBIT_HEARTBEAT_RATE" ]; then
+            iniset $file oslo_messaging_rabbit heartbeat_rate $RABBIT_HEARTBEAT_RATE
+        fi
     fi
 }
 
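For reference, ``get_transport_url`` above emits an oslo.messaging transport
URL that callers can drop straight into a config file; ``lib/swift`` below
uses it exactly this way for the ceilometer middleware. Illustrative values::

    # With RABBIT_USERID=stackrabbit, RABBIT_PASSWORD=secret, RABBIT_HOST=10.0.0.5
    # get_transport_url prints: rabbit://stackrabbit:secret@10.0.0.5:5672/
    iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer url $(get_transport_url)
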
diff --git a/lib/sahara b/lib/sahara
index 521b19a..51e431a 100644
--- a/lib/sahara
+++ b/lib/sahara
@@ -33,8 +33,12 @@
 SAHARA_CONF_DIR=${SAHARA_CONF_DIR:-/etc/sahara}
 SAHARA_CONF_FILE=${SAHARA_CONF_DIR}/sahara.conf
 
+if is_ssl_enabled_service "sahara" || is_service_enabled tls-proxy; then
+    SAHARA_SERVICE_PROTOCOL="https"
+fi
 SAHARA_SERVICE_HOST=${SAHARA_SERVICE_HOST:-$SERVICE_HOST}
 SAHARA_SERVICE_PORT=${SAHARA_SERVICE_PORT:-8386}
+SAHARA_SERVICE_PORT_INT=${SAHARA_SERVICE_PORT_INT:-18386}
 SAHARA_SERVICE_PROTOCOL=${SAHARA_SERVICE_PROTOCOL:-$SERVICE_PROTOCOL}
 
 SAHARA_AUTH_CACHE_DIR=${SAHARA_AUTH_CACHE_DIR:-/var/cache/sahara}
@@ -101,33 +105,25 @@
 
 # configure_sahara() - Set config files, create data dirs, etc
 function configure_sahara {
-
-    if [[ ! -d $SAHARA_CONF_DIR ]]; then
-        sudo mkdir -p $SAHARA_CONF_DIR
-    fi
-    sudo chown $STACK_USER $SAHARA_CONF_DIR
+    sudo install -d -o $STACK_USER $SAHARA_CONF_DIR
 
     if [[ -f $SAHARA_DIR/etc/sahara/policy.json ]]; then
         cp -p $SAHARA_DIR/etc/sahara/policy.json $SAHARA_CONF_DIR
     fi
 
-    # Copy over sahara configuration file and configure common parameters.
-    cp $SAHARA_DIR/etc/sahara/sahara.conf.sample $SAHARA_CONF_FILE
-
     # Create auth cache dir
-    sudo mkdir -p $SAHARA_AUTH_CACHE_DIR
-    sudo chown $STACK_USER $SAHARA_AUTH_CACHE_DIR
-    sudo chmod 700 $SAHARA_AUTH_CACHE_DIR
+    sudo install -d -o $STACK_USER -m 700 $SAHARA_AUTH_CACHE_DIR
     rm -rf $SAHARA_AUTH_CACHE_DIR/*
 
     configure_auth_token_middleware $SAHARA_CONF_FILE sahara $SAHARA_AUTH_CACHE_DIR
 
+    iniset_rpc_backend sahara $SAHARA_CONF_FILE DEFAULT
+
     # Set configuration to send notifications
 
     if is_service_enabled ceilometer; then
         iniset $SAHARA_CONF_FILE DEFAULT enable_notifications "true"
         iniset $SAHARA_CONF_FILE DEFAULT notification_driver "messaging"
-        iniset_rpc_backend sahara $SAHARA_CONF_FILE DEFAULT
     fi
 
     iniset $SAHARA_CONF_FILE DEFAULT verbose True
@@ -173,6 +169,14 @@
         iniset $SAHARA_CONF_FILE keystone ca_file $SSL_BUNDLE_FILE
     fi
 
+    # Register SSL certificates if provided
+    if is_ssl_enabled_service sahara; then
+        ensure_certificates SAHARA
+
+        iniset $SAHARA_CONF_FILE ssl cert_file "$SAHARA_SSL_CERT"
+        iniset $SAHARA_CONF_FILE ssl key_file "$SAHARA_SSL_KEY"
+    fi
+
     iniset $SAHARA_CONF_FILE DEFAULT use_syslog $SYSLOG
 
     # Format logging
@@ -180,6 +184,11 @@
         setup_colorized_logging $SAHARA_CONF_FILE DEFAULT
     fi
 
+    if is_service_enabled tls-proxy; then
+        # Set the service port for a proxy to take the original
+        iniset $SAHARA_CONF_FILE DEFAULT port $SAHARA_SERVICE_PORT_INT
+    fi
+
     recreate_database sahara
     $SAHARA_BIN_DIR/sahara-db-manage --config-file $SAHARA_CONF_FILE upgrade head
 }
@@ -211,13 +220,34 @@
 
 # start_sahara() - Start running processes, including screen
 function start_sahara {
+    local service_port=$SAHARA_SERVICE_PORT
+    local service_protocol=$SAHARA_SERVICE_PROTOCOL
+    if is_service_enabled tls-proxy; then
+        service_port=$SAHARA_SERVICE_PORT_INT
+        service_protocol="http"
+    fi
+
     run_process sahara "$SAHARA_BIN_DIR/sahara-all --config-file $SAHARA_CONF_FILE"
+    run_process sahara-api "$SAHARA_BIN_DIR/sahara-api --config-file $SAHARA_CONF_FILE"
+    run_process sahara-eng "$SAHARA_BIN_DIR/sahara-engine --config-file $SAHARA_CONF_FILE"
+
+    echo "Waiting for Sahara to start..."
+    if ! wait_for_service $SERVICE_TIMEOUT $service_protocol://$SAHARA_SERVICE_HOST:$service_port; then
+        die $LINENO "Sahara did not start"
+    fi
+
+    # Start proxies if enabled
+    if is_service_enabled tls-proxy; then
+        start_tls_proxy '*' $SAHARA_SERVICE_PORT $SAHARA_SERVICE_HOST $SAHARA_SERVICE_PORT_INT &
+    fi
 }
 
 # stop_sahara() - Stop running processes
 function stop_sahara {
     # Kill the Sahara screen windows
     stop_process sahara
+    stop_process sahara-api
+    stop_process sahara-eng
 }
 
 
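Several hunks in this change (sahara above; swift, tempest and zaqar below)
collapse ``mkdir``/``chown``/``chmod`` sequences into a single ``install -d``,
which creates the directory with its owner and mode applied in one step::

    # Before: three commands, with a window where the directory
    # exists with the wrong owner and mode
    sudo mkdir -p $SAHARA_AUTH_CACHE_DIR
    sudo chown $STACK_USER $SAHARA_AUTH_CACHE_DIR
    sudo chmod 700 $SAHARA_AUTH_CACHE_DIR

    # After: one command, owner and mode set at creation time
    sudo install -d -o $STACK_USER -m 700 $SAHARA_AUTH_CACHE_DIR
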
diff --git a/lib/stack b/lib/stack
index 9a509d8..47e8ce2 100644
--- a/lib/stack
+++ b/lib/stack
@@ -2,27 +2,34 @@
 #
 # lib/stack
 #
-# These functions are code snippets pulled out of stack.sh for easier
+# These functions are code snippets pulled out of ``stack.sh`` for easier
 # re-use by Grenade.  They can assume the same environment is available
-# as in the lower part of stack.sh, namely a valid stackrc has been sourced
-# as well as all of the lib/* files for the services have been sourced.
+# as in the lower part of ``stack.sh``, namely a valid stackrc has been sourced
+# as well as all of the ``lib/*`` files for the services have been sourced.
 #
 # For clarity, all functions declared here that came from ``stack.sh``
 # shall be named with the prefix ``stack_``.
 
 
+# Functions
+# ---------
+
 # Generic service install handles venv creation if configured for service
 # stack_install_service service
 function stack_install_service {
     local service=$1
     if type install_${service} >/dev/null 2>&1; then
-        if [[ -n ${PROJECT_VENV[$service]:-} ]]; then
+        if [[ ${USE_VENV} = True && -n ${PROJECT_VENV[$service]:-} ]]; then
             rm -rf ${PROJECT_VENV[$service]}
-            source $TOP_DIR/tools/build_venv.sh ${PROJECT_VENV[$service]}
+            source $TOP_DIR/tools/build_venv.sh ${PROJECT_VENV[$service]} ${ADDITIONAL_VENV_PACKAGES//,/ }
             export PIP_VIRTUAL_ENV=${PROJECT_VENV[$service]:-}
+
+            # Install other OpenStack prereqs that might come from source repos
+            install_oslo
+            install_keystonemiddleware
         fi
         install_${service}
-        if [[ -n ${PROJECT_VENV[$service]:-} ]]; then
+        if [[ ${USE_VENV} = True && -n ${PROJECT_VENV[$service]:-} ]]; then
             unset PIP_VIRTUAL_ENV
         fi
     fi
diff --git a/lib/swift b/lib/swift
index 4a63500..820042d 100644
--- a/lib/swift
+++ b/lib/swift
@@ -38,7 +38,6 @@
 # Set up default directories
 GITDIR["python-swiftclient"]=$DEST/python-swiftclient
 
-
 SWIFT_DIR=$DEST/swift
 SWIFT_AUTH_CACHE_DIR=${SWIFT_AUTH_CACHE_DIR:-/var/cache/swift}
 SWIFT_APACHE_WSGI_DIR=${SWIFT_APACHE_WSGI_DIR:-/var/www/swift}
@@ -59,16 +58,24 @@
 SWIFT_CONF_DIR=${SWIFT_CONF_DIR:-/etc/swift}
 
 if is_service_enabled s-proxy && is_service_enabled swift3; then
-    # If we are using swift3, we can default the s3 port to swift instead
+    # If we are using ``swift3``, we can default the S3 port to swift instead
     # of nova-objectstore
     S3_SERVICE_PORT=${S3_SERVICE_PORT:-8080}
 fi
 
-# DevStack will create a loop-back disk formatted as XFS to store the
-# swift data. Set ``SWIFT_LOOPBACK_DISK_SIZE`` to the disk size in
-# kilobytes.
-# Default is 1 gigabyte.
-SWIFT_LOOPBACK_DISK_SIZE_DEFAULT=1G
+if is_service_enabled g-api; then
+    # The minimum Cinder volume size is 1G, so if the Swift backend for
+    # Glance is only 1G we cannot upload a volume to an image.
+    # Increase the Swift disk size to 2G.
+    SWIFT_LOOPBACK_DISK_SIZE_DEFAULT=2G
+else
+    # DevStack will create a loop-back disk formatted as XFS to store the
+    # swift data. Set ``SWIFT_LOOPBACK_DISK_SIZE`` to the disk size in
+    # kilobytes.
+    # Default is 1 gigabyte.
+    SWIFT_LOOPBACK_DISK_SIZE_DEFAULT=1G
+fi
+
 # if tempest enabled the default size is 6 Gigabyte.
 if is_service_enabled tempest; then
     SWIFT_LOOPBACK_DISK_SIZE_DEFAULT=${SWIFT_LOOPBACK_DISK_SIZE:-6G}
@@ -129,11 +136,12 @@
 SWIFT_ENABLE_TEMPURLS=${SWIFT_ENABLE_TEMPURLS:-False}
 SWIFT_TEMPURL_KEY=${SWIFT_TEMPURL_KEY:-}
 
+# Toggle for deploying Swift under HTTPD + mod_wsgi
+SWIFT_USE_MOD_WSGI=${SWIFT_USE_MOD_WSGI:-False}
+
 # Tell Tempest this project is present
 TEMPEST_SERVICES+=,swift
 
-# Toggle for deploying Swift under HTTPD + mod_wsgi
-SWIFT_USE_MOD_WSGI=${SWIFT_USE_MOD_WSGI:-False}
 
 # Functions
 # ---------
@@ -295,19 +303,19 @@
     sed -i -e "s,#[ ]*recon_cache_path .*,recon_cache_path = ${SWIFT_DATA_DIR}/cache," ${swift_node_config}
 }
 
-
 # configure_swift() - Set config files, create data dirs and loop image
 function configure_swift {
     local swift_pipeline="${SWIFT_EXTRAS_MIDDLEWARE_NO_AUTH}"
     local node_number
     local swift_node_config
     local swift_log_dir
+    local user_group
 
     # Make sure to kill all swift processes first
     swift-init --run-dir=${SWIFT_DATA_DIR}/run all stop || true
 
-    sudo mkdir -p ${SWIFT_CONF_DIR}/{object,container,account}-server
-    sudo chown -R ${STACK_USER}: ${SWIFT_CONF_DIR}
+    sudo install -d -o ${STACK_USER} ${SWIFT_CONF_DIR}
+    sudo install -d -o ${STACK_USER} ${SWIFT_CONF_DIR}/{object,container,account}-server
 
     if [[ "$SWIFT_CONF_DIR" != "/etc/swift" ]]; then
         # Some swift tools are hard-coded to use ``/etc/swift`` and are apparently not going to be fixed.
@@ -366,26 +374,27 @@
         iniset ${SWIFT_CONFIG_PROXY_SERVER} DEFAULT key_file "$SWIFT_SSL_KEY"
     fi
 
-    # Devstack is commonly run in a small slow environment, so bump the
-    # timeouts up.
-    # node_timeout is how long between read operations a node takes to
-    # respond to the proxy server
-    # conn_timeout is all about how long it takes a connect() system call to
-    # return
+    # DevStack is commonly run in a small slow environment, so bump the timeouts up.
+    # ``node_timeout`` is how long the proxy waits for a node to respond to a read operation
+    # ``conn_timeout`` is how long it takes a connect() system call to return
     iniset ${SWIFT_CONFIG_PROXY_SERVER} app:proxy-server node_timeout 120
     iniset ${SWIFT_CONFIG_PROXY_SERVER} app:proxy-server conn_timeout 20
 
     # Configure Ceilometer
     if is_service_enabled ceilometer; then
         iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer "set log_level" "WARN"
-        iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer use "egg:ceilometer#swift"
+        iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer paste.filter_factory "ceilometermiddleware.swift:filter_factory"
+        iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer control_exchange "swift"
+        iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer url $(get_transport_url)
+        iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer driver "messaging"
+        iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer topic "notifications"
         SWIFT_EXTRAS_MIDDLEWARE_LAST="${SWIFT_EXTRAS_MIDDLEWARE_LAST} ceilometer"
     fi
 
-    # Restrict the length of auth tokens in the swift proxy-server logs.
+    # Restrict the length of auth tokens in the Swift ``proxy-server`` logs.
     iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:proxy-logging reveal_sensitive_prefix ${SWIFT_LOG_TOKEN_LENGTH}
 
-    # By default Swift will be installed with keystone and tempauth middleware
+    # By default Swift will be installed with Keystone and tempauth middleware
     # and add the swift3 middleware if its configured for it. The token for
     # tempauth would be prefixed with the reseller_prefix setting `TEMPAUTH_` the
     # token for keystoneauth would have the standard reseller_prefix `AUTH_`
@@ -401,30 +410,18 @@
     sed -i "/^pipeline/ { s/tempauth/${swift_pipeline} ${SWIFT_EXTRAS_MIDDLEWARE}/ ;}" ${SWIFT_CONFIG_PROXY_SERVER}
     sed -i "/^pipeline/ { s/proxy-server/${SWIFT_EXTRAS_MIDDLEWARE_LAST} proxy-server/ ; }" ${SWIFT_CONFIG_PROXY_SERVER}
 
-
     iniset ${SWIFT_CONFIG_PROXY_SERVER} app:proxy-server account_autocreate true
 
-
-
     # Configure Crossdomain
     iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:crossdomain use "egg:swift#crossdomain"
 
-
-    # This causes the authtoken middleware to use the same python logging
-    # adapter provided by the swift proxy-server, so that request transaction
+    # Configure authtoken middleware to use the same Python logging
+    # adapter provided by the Swift ``proxy-server``, so that request transaction
     # IDs will included in all of its log messages.
     iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken log_name swift
 
-    # NOTE(jamielennox): swift cannot use the regular configure_auth_token_middleware function because swift
-    # doesn't use oslo.config which is the only way to configure auth plugins with the middleare.
     iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken paste.filter_factory keystonemiddleware.auth_token:filter_factory
-    iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken identity_uri $KEYSTONE_AUTH_URI
-    iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken admin_user swift
-    iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken admin_password $SERVICE_PASSWORD
-    iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken admin_tenant_name $SERVICE_TENANT_NAME
-    iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken auth_uri $KEYSTONE_SERVICE_URI
-    iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken cafile $SSL_BUNDLE_FILE
-    iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken signing_dir $SWIFT_AUTH_CACHE_DIR
+    configure_auth_token_middleware $SWIFT_CONFIG_PROXY_SERVER swift $SWIFT_AUTH_CACHE_DIR filter:authtoken
     iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken delay_auth_decision 1
     iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken cache swift.cache
     iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken include_service_catalog False
@@ -432,7 +429,7 @@
     iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:keystoneauth use "egg:swift#keystoneauth"
     iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:keystoneauth operator_roles "Member, admin"
 
-    # Configure Tempauth. In the sample config file, Keystoneauth is commented
+    # Configure Tempauth. In the sample config file Keystoneauth is commented
     # out. Make sure we uncomment Tempauth after we uncomment Keystoneauth
     # otherwise, this code also sets the reseller_prefix for Keystoneauth.
     iniuncomment ${SWIFT_CONFIG_PROXY_SERVER} filter:tempauth account_autocreate
@@ -442,7 +439,7 @@
     if is_service_enabled swift3; then
         cat <<EOF >>${SWIFT_CONFIG_PROXY_SERVER}
 [filter:s3token]
-paste.filter_factory = keystoneclient.middleware.s3_token:filter_factory
+paste.filter_factory = keystonemiddleware.s3_token:filter_factory
 auth_port = ${KEYSTONE_AUTH_PORT}
 auth_host = ${KEYSTONE_AUTH_HOST}
 auth_protocol = ${KEYSTONE_AUTH_PROTOCOL}
@@ -509,10 +506,12 @@
         fi
     fi
 
+    local user_group=$(id -g ${STACK_USER})
+    sudo install -d -o ${STACK_USER} -g ${user_group} ${SWIFT_DATA_DIR}
+
     local swift_log_dir=${SWIFT_DATA_DIR}/logs
-    rm -rf ${swift_log_dir}
-    mkdir -p ${swift_log_dir}/hourly
-    sudo chown -R ${STACK_USER}:adm ${swift_log_dir}
+    sudo rm -rf ${swift_log_dir}
+    sudo install -d -o ${STACK_USER} -g adm ${swift_log_dir}/hourly
 
     if [[ $SYSLOG != "False" ]]; then
         sed "s,%SWIFT_LOGDIR%,${swift_log_dir}," $FILES/swift/rsyslog.conf | sudo \
@@ -534,8 +533,7 @@
     # changing the permissions so we can run it as our user.
 
     local user_group=$(id -g ${STACK_USER})
-    sudo mkdir -p ${SWIFT_DATA_DIR}/{drives,cache,run,logs}
-    sudo chown -R ${STACK_USER}:${user_group} ${SWIFT_DATA_DIR}
+    sudo install -d -o ${STACK_USER} -g ${user_group} ${SWIFT_DATA_DIR}/{drives,cache,run,logs}
 
     # Create a loopback disk and format it to XFS.
     if [[ -e ${SWIFT_DISK_IMAGE} ]]; then
@@ -576,7 +574,8 @@
         sudo chown -R ${STACK_USER}: ${node}
     done
 }
-# create_swift_accounts() - Set up standard swift accounts and extra
+
+# create_swift_accounts() - Set up standard Swift accounts and extra
 # one for tests we do this by attaching all words in the account name
 # since we want to make it compatible with tempauth which use
 # underscores for separators.
@@ -590,9 +589,9 @@
 # swifttenanttest4   swiftusertest4     admin          swift_test
 
 function create_swift_accounts {
-    # Defines specific passwords used by tools/create_userrc.sh
-    # As these variables are used by create_userrc.sh, they must be exported
-    # The _password suffix is expected by create_userrc.sh
+    # Defines specific passwords used by ``tools/create_userrc.sh``
+# As these variables are used by ``create_userrc.sh``, they must be exported
+    # The _password suffix is expected by ``create_userrc.sh``.
     export swiftusertest1_password=testing
     export swiftusertest2_password=testing2
     export swiftusertest3_password=testing3
@@ -675,8 +674,7 @@
     } && popd >/dev/null
 
     # Create cache dir
-    sudo mkdir -p $SWIFT_AUTH_CACHE_DIR
-    sudo chown $STACK_USER $SWIFT_AUTH_CACHE_DIR
+    sudo install -d -o ${STACK_USER} $SWIFT_AUTH_CACHE_DIR
     rm -f $SWIFT_AUTH_CACHE_DIR/*
 }
 
@@ -723,8 +721,8 @@
 
     # By default with only one replica we are launching the proxy,
     # container, account and object server in screen in foreground and
-    # other services in background. If we have SWIFT_REPLICAS set to something
-    # greater than one we first spawn all the swift services then kill the proxy
+    # other services in background. If we have ``SWIFT_REPLICAS`` set to something
+    # greater than one we first spawn all the Swift services then kill the proxy
     # service so we can run it in foreground in screen.  ``swift-init ...
     # {stop|restart}`` exits with '1' if no servers are running, ignore it just
     # in case
@@ -760,7 +758,7 @@
         swift-init --run-dir=${SWIFT_DATA_DIR}/run rest stop && return 0
     fi
 
-    # screen normally killed by unstack.sh
+    # screen normally killed by ``unstack.sh``
     if type -p swift-init >/dev/null; then
         swift-init --run-dir=${SWIFT_DATA_DIR}/run all stop || true
     fi
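
The ceilometer filter change above switches Swift from the in-tree
``egg:ceilometer#swift`` middleware to the standalone ``ceilometermiddleware``
package. The resulting ``proxy-server.conf`` section looks roughly like this
(the transport URL is illustrative)::

    [filter:ceilometer]
    paste.filter_factory = ceilometermiddleware.swift:filter_factory
    control_exchange = swift
    url = rabbit://stackrabbit:secret@10.0.0.5:5672/
    driver = messaging
    topic = notifications
    set log_level = WARN
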
diff --git a/lib/tempest b/lib/tempest
index 06413f7..c4ae05f 100644
--- a/lib/tempest
+++ b/lib/tempest
@@ -62,12 +62,10 @@
 # The default is set to 196 seconds.
 BUILD_TIMEOUT=${BUILD_TIMEOUT:-196}
 
-
 # This must be False on stable branches, as master tempest
 # deps do not match stable branch deps. Set this to True to
-# have tempest installed in devstack by default.
-INSTALL_TEMPEST=${INSTALL_TEMPEST:-"False"}
-
+# have tempest installed in DevStack by default.
+INSTALL_TEMPEST=${INSTALL_TEMPEST:-"True"}
 
 BOTO_MATERIALS_PATH="$FILES/images/s3-materials/cirros-${CIRROS_VERSION}"
 BOTO_CONF=/etc/boto.cfg
@@ -83,6 +81,7 @@
 IPV6_ENABLED=$(trueorfalse True IPV6_ENABLED)
 IPV6_SUBNET_ATTRIBUTES_ENABLED=$(trueorfalse True IPV6_SUBNET_ATTRIBUTES_ENABLED)
 
+
 # Functions
 # ---------
 
@@ -92,10 +91,7 @@
     local extensions_list=$1
     shift
     local disabled_exts=$*
-    for ext_to_remove in ${disabled_exts//,/ } ; do
-        extensions_list=${extensions_list/$ext_to_remove","}
-    done
-    echo $extensions_list
+    remove_disabled_services "$extensions_list" "$disabled_exts"
 }
 
 # configure_tempest() - Set config files, create data dirs, etc
@@ -104,9 +100,13 @@
         setup_develop $TEMPEST_DIR
     else
         # install testr since its used to process tempest logs
-        pip_install $(get_from_global_requirements testrepository)
+        pip_install_gr testrepository
     fi
 
+    # Used during configuration, so make sure we have the correct
+    # version installed
+    pip_install_gr python-openstackclient
+
     local image_lines
     local images
     local num_images
@@ -144,9 +144,7 @@
                 image_uuid_alt="$IMAGE_UUID"
             fi
             images+=($IMAGE_UUID)
-        # TODO(stevemar): update this command to use openstackclient's `openstack image list`
-        # when it supports listing by status.
-        done < <(glance image-list --status=active | awk -F'|' '!/^(+--)|ID|aki|ari/ { print $3,$2 }')
+        done < <(openstack image list --property status=active | awk -F'|' '!/^(+--)|ID|aki|ari/ { print $3,$2 }')
 
         case "${#images[*]}" in
             0)
@@ -168,19 +166,19 @@
         esac
     fi
 
-    # Create tempest.conf from tempest.conf.sample
-    # copy every time, because the image UUIDS are going to change
-    if [[ ! -d $TEMPEST_CONFIG_DIR ]]; then
-        sudo mkdir -p $TEMPEST_CONFIG_DIR
-    fi
-    sudo chown $STACK_USER $TEMPEST_CONFIG_DIR
-    cp $TEMPEST_DIR/etc/tempest.conf.sample $TEMPEST_CONFIG
-    chmod 644 $TEMPEST_CONFIG
+    # Create ``tempest.conf`` from ``tempest.conf.sample``
+    # Copy every time because the image UUIDS are going to change
+    sudo install -d -o $STACK_USER $TEMPEST_CONFIG_DIR
+    install -m 644 $TEMPEST_DIR/etc/tempest.conf.sample $TEMPEST_CONFIG
 
     password=${ADMIN_PASSWORD:-secrete}
 
-    # See files/keystone_data.sh and stack.sh where admin, demo and alt_demo
-    # user and tenant are set up...
+    # Decide whether to make a configuration where Tempest has admin on
+    # the cloud. We don't always want to, so that we can ensure Tempest
+    # also works against a public cloud.
+    TEMPEST_HAS_ADMIN=$(trueorfalse True TEMPEST_HAS_ADMIN)
+
+    # See ``lib/keystone`` where these users and tenants are set up
     ADMIN_USERNAME=${ADMIN_USERNAME:-admin}
     ADMIN_TENANT_NAME=${ADMIN_TENANT_NAME:-admin}
     ADMIN_DOMAIN_NAME=${ADMIN_DOMAIN_NAME:-Default}
@@ -191,13 +189,13 @@
     ADMIN_TENANT_ID=$(openstack project list | awk "/ admin / { print \$2 }")
 
     if is_service_enabled nova; then
-        # If the ``DEFAULT_INSTANCE_TYPE`` not declared, use the new behavior
-        # Tempest creates instane types for himself
+        # If ``DEFAULT_INSTANCE_TYPE`` is not declared, use the new behavior:
+        # Tempest creates its own instance types
         if  [[ -z "$DEFAULT_INSTANCE_TYPE" ]]; then
             available_flavors=$(nova flavor-list)
             if [[ ! ( $available_flavors =~ 'm1.nano' ) ]]; then
                 if is_arch "ppc64"; then
-                    # qemu needs at least 128MB of memory to boot on ppc64
+                    # Qemu needs at least 128MB of memory to boot on ppc64
                     nova flavor-create m1.nano 42 128 0 1
                 else
                     nova flavor-create m1.nano 42 64 0 1
@@ -214,8 +212,7 @@
             fi
             flavor_ref_alt=84
         else
-            # Check Nova for existing flavors and, if set, look for the
-            # ``DEFAULT_INSTANCE_TYPE`` and use that.
+            # Check Nova for existing flavors; if ``DEFAULT_INSTANCE_TYPE`` is set, use it.
             boto_instance_type=$DEFAULT_INSTANCE_TYPE
             flavor_lines=`nova flavor-list`
             IFS=$'\r\n'
@@ -240,8 +237,8 @@
             flavor_ref=${flavors[0]}
             flavor_ref_alt=$flavor_ref
 
-            # ensure flavor_ref and flavor_ref_alt have different values
-            # some resize instance in tempest tests depends on this.
+            # Ensure ``flavor_ref`` and ``flavor_ref_alt`` have different values.
+            # Some instance-resize tests in Tempest depend on this.
             for f in ${flavors[@]:1}; do
                 if [[ $f -ne $flavor_ref ]]; then
                     flavor_ref_alt=$f
@@ -266,16 +263,26 @@
         public_network_id=$(neutron net-list | grep $PUBLIC_NETWORK_NAME | \
             awk '{print $2}')
         if [ "$Q_USE_NAMESPACE" == "False" ]; then
-            # If namespaces are disabled, devstack will create a single
+            # If namespaces are disabled, DevStack will create a single
             # public router that tempest should be configured to use.
             public_router_id=$(neutron router-list | awk "/ $Q_ROUTER_NAME / \
                 { print \$2 }")
         fi
     fi
 
+    EC2_URL=$(openstack endpoint show -f value -c publicurl ec2 || true)
+    if [[ -z $EC2_URL ]]; then
+        EC2_URL="$EC2_SERVICE_PROTOCOL://$SERVICE_HOST:8773/"
+    fi
+    S3_URL=$(openstack endpoint show -f value -c publicurl s3 || true)
+    if [[ -z $S3_URL ]]; then
+        S3_URL="http://$SERVICE_HOST:${S3_SERVICE_PORT:-3333}"
+    fi
+
     iniset $TEMPEST_CONFIG DEFAULT use_syslog $SYSLOG
+
     # Oslo
-    iniset $TEMPEST_CONFIG DEFAULT lock_path $TEMPEST_STATE_PATH
+    iniset $TEMPEST_CONFIG oslo_concurrency lock_path $TEMPEST_STATE_PATH
     mkdir -p $TEMPEST_STATE_PATH
     iniset $TEMPEST_CONFIG DEFAULT use_stderr False
     iniset $TEMPEST_CONFIG DEFAULT log_file tempest.log
@@ -296,25 +303,40 @@
     iniset $TEMPEST_CONFIG identity alt_username $ALT_USERNAME
     iniset $TEMPEST_CONFIG identity alt_password "$password"
     iniset $TEMPEST_CONFIG identity alt_tenant_name $ALT_TENANT_NAME
-    iniset $TEMPEST_CONFIG identity admin_username $ADMIN_USERNAME
-    iniset $TEMPEST_CONFIG identity admin_password "$password"
-    iniset $TEMPEST_CONFIG identity admin_tenant_name $ADMIN_TENANT_NAME
-    iniset $TEMPEST_CONFIG identity admin_tenant_id $ADMIN_TENANT_ID
-    iniset $TEMPEST_CONFIG identity admin_domain_name $ADMIN_DOMAIN_NAME
-    iniset $TEMPEST_CONFIG identity auth_version ${TEMPEST_AUTH_VERSION:-v2}
+    if [[ "$TEMPEST_HAS_ADMIN" == "True" ]]; then
+        iniset $TEMPEST_CONFIG identity admin_username $ADMIN_USERNAME
+        iniset $TEMPEST_CONFIG identity admin_password "$password"
+        iniset $TEMPEST_CONFIG identity admin_tenant_name $ADMIN_TENANT_NAME
+        iniset $TEMPEST_CONFIG identity admin_tenant_id $ADMIN_TENANT_ID
+        iniset $TEMPEST_CONFIG identity admin_domain_name $ADMIN_DOMAIN_NAME
+    fi
+    if [ "$ENABLE_IDENTITY_V2" == "False" ]; then
+        # Only Identity v3 is available, so skip the Identity API v2 tests
+        iniset $TEMPEST_CONFIG identity-feature-enabled v2_api False
+        # In addition, use v3 auth tokens for running all Tempest tests
+        iniset $TEMPEST_CONFIG identity auth_version v3
+    else
+        iniset $TEMPEST_CONFIG identity auth_version ${TEMPEST_AUTH_VERSION:-v2}
+    fi
+
     if is_ssl_enabled_service "key" || is_service_enabled tls-proxy; then
         iniset $TEMPEST_CONFIG identity ca_certificates_file $SSL_BUNDLE_FILE
     fi
 
     # Image
-    # for the gate we want to be able to override this variable so we aren't
-    # doing an HTTP fetch over the wide internet for this test
+    # We want to be able to override this variable in the gate to avoid
+    # doing an external HTTP fetch for this test.
     if [[ ! -z "$TEMPEST_HTTP_IMAGE" ]]; then
         iniset $TEMPEST_CONFIG image http_image $TEMPEST_HTTP_IMAGE
     fi
 
+    # Image Features
+    iniset $TEMPEST_CONFIG image-feature-enabled deactivate_image True
+
     # Auth
+    TEMPEST_ALLOW_TENANT_ISOLATION=${TEMPEST_ALLOW_TENANT_ISOLATION:-$TEMPEST_HAS_ADMIN}
     iniset $TEMPEST_CONFIG auth allow_tenant_isolation ${TEMPEST_ALLOW_TENANT_ISOLATION:-True}
+    iniset $TEMPEST_CONFIG auth tempest_roles "Member"
 
     # Compute
     iniset $TEMPEST_CONFIG compute ssh_user ${DEFAULT_INSTANCE_USER:-cirros} # DEPRECATED
@@ -328,13 +350,16 @@
     iniset $TEMPEST_CONFIG compute flavor_ref $flavor_ref
     iniset $TEMPEST_CONFIG compute flavor_ref_alt $flavor_ref_alt
     iniset $TEMPEST_CONFIG compute ssh_connect_method $ssh_connect_method
+    if [[ ! $(is_service_enabled n-cell) && ! $(is_service_enabled neutron) ]]; then
+        iniset $TEMPEST_CONFIG compute fixed_network_name $PRIVATE_NETWORK_NAME
+    fi
 
     # Compute Features
-    # Run verify_tempest_config -ur to retrieve enabled extensions on API endpoints
+    # Run ``verify_tempest_config -ur`` to retrieve enabled extensions on API endpoints
     # NOTE(mtreinish): This must be done after auth settings are added to the tempest config
     local tmp_cfg_file=$(mktemp)
     cd $TEMPEST_DIR
-    tox -evenv -- verify-tempest-config -uro $tmp_cfg_file
+    tox -revenv -- verify-tempest-config -uro $tmp_cfg_file
 
     local compute_api_extensions=${COMPUTE_API_EXTENSIONS:-"all"}
     if [[ ! -z "$DISABLE_COMPUTE_API_EXTENSIONS" ]]; then
@@ -349,11 +374,10 @@
     iniset $TEMPEST_CONFIG compute-feature-enabled change_password False
     iniset $TEMPEST_CONFIG compute-feature-enabled block_migration_for_live_migration ${USE_BLOCK_MIGRATION_FOR_LIVE_MIGRATION:-False}
     iniset $TEMPEST_CONFIG compute-feature-enabled api_extensions $compute_api_extensions
-
-    # Compute admin
-    iniset $TEMPEST_CONFIG "compute-admin" username $ADMIN_USERNAME
-    iniset $TEMPEST_CONFIG "compute-admin" password "$password"
-    iniset $TEMPEST_CONFIG "compute-admin" tenant_name $ADMIN_TENANT_NAME
+    # TODO(mriedem): Remove the preserve_ports flag when Juno is end of life.
+    iniset $TEMPEST_CONFIG compute-feature-enabled preserve_ports True
+    # TODO(gilliard): Remove the live_migrate_paused_instances flag when Juno is end of life.
+    iniset $TEMPEST_CONFIG compute-feature-enabled live_migrate_paused_instances True
 
     # Network
     iniset $TEMPEST_CONFIG network api_version 2.0
@@ -374,8 +398,8 @@
     iniset $TEMPEST_CONFIG network-feature-enabled api_extensions $network_api_extensions
 
     # boto
-    iniset $TEMPEST_CONFIG boto ec2_url "$EC2_SERVICE_PROTOCOL://$SERVICE_HOST:8773/"
-    iniset $TEMPEST_CONFIG boto s3_url "http://$SERVICE_HOST:${S3_SERVICE_PORT:-3333}"
+    iniset $TEMPEST_CONFIG boto ec2_url "$EC2_URL"
+    iniset $TEMPEST_CONFIG boto s3_url "$S3_URL"
     iniset $TEMPEST_CONFIG boto s3_materials_path "$BOTO_MATERIALS_PATH"
     iniset $TEMPEST_CONFIG boto ari_manifest cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-initrd.manifest.xml
     iniset $TEMPEST_CONFIG boto ami_manifest cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-blank.img.manifest.xml
@@ -396,24 +420,27 @@
         fi
         iniset $TEMPEST_CONFIG orchestration instance_type "m1.heat"
         iniset $TEMPEST_CONFIG orchestration build_timeout 900
+        iniset $TEMPEST_CONFIG orchestration stack_owner_role "_member_"
     fi
 
     # Scenario
-    iniset $TEMPEST_CONFIG scenario img_dir "$FILES/images/cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-uec"
+    SCENARIO_IMAGE_DIR=${SCENARIO_IMAGE_DIR:-$FILES/images/cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-uec}
+    iniset $TEMPEST_CONFIG scenario img_dir $SCENARIO_IMAGE_DIR
     iniset $TEMPEST_CONFIG scenario ami_img_file "cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-blank.img"
     iniset $TEMPEST_CONFIG scenario ari_img_file "cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-initrd"
     iniset $TEMPEST_CONFIG scenario aki_img_file "cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-vmlinuz"
+    iniset $TEMPEST_CONFIG scenario img_file "cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-disk.img"
 
     # Large Ops Number
     iniset $TEMPEST_CONFIG scenario large_ops_number ${TEMPEST_LARGE_OPS_NUMBER:-0}
 
     # Telemetry
-    # Ceilometer API optimization happened in juno that allows to run more tests in tempest.
+    # A Ceilometer API optimization in Juno allows more tests to run in Tempest.
     # Once Tempest retires support for icehouse this flag can be removed.
     iniset $TEMPEST_CONFIG telemetry too_slow_to_test "False"
     iniset $TEMPEST_CONFIG telemetry-feature-enabled events "True"
 
-    # Object storage
+    # Object Store
     local object_storage_api_extensions=${OBJECT_STORAGE_API_EXTENSIONS:-"all"}
     if [[ ! -z "$DISABLE_OBJECT_STORAGE_API_EXTENSIONS" ]]; then
         # Enabled extensions are either the ones explicitly specified or those available on the API endpoint
@@ -437,7 +464,7 @@
         iniset $TEMPEST_CONFIG volume-feature-enabled backup False
     fi
 
-    # Using CINDER_ENABLED_BACKENDS
+    # Using ``CINDER_ENABLED_BACKENDS``
     if [[ -n "$CINDER_ENABLED_BACKENDS" ]] && [[ $CINDER_ENABLED_BACKENDS =~ .*,.* ]]; then
         iniset $TEMPEST_CONFIG volume-feature-enabled multi_backend "True"
         local i=1
@@ -462,12 +489,15 @@
     iniset $TEMPEST_CONFIG dashboard dashboard_url "http://$SERVICE_HOST/"
     iniset $TEMPEST_CONFIG dashboard login_url "http://$SERVICE_HOST/auth/login/"
 
-    # cli
+    # CLI
     iniset $TEMPEST_CONFIG cli cli_dir $NOVA_BIN_DIR
 
     # Baremetal
     if [ "$VIRT_DRIVER" = "ironic" ] ; then
         iniset $TEMPEST_CONFIG baremetal driver_enabled True
+        iniset $TEMPEST_CONFIG baremetal unprovision_timeout 300
+        iniset $TEMPEST_CONFIG baremetal deploy_img_dir $FILES
+        iniset $TEMPEST_CONFIG baremetal node_uuid $IRONIC_NODE_UUID
         iniset $TEMPEST_CONFIG compute-feature-enabled change_password False
         iniset $TEMPEST_CONFIG compute-feature-enabled console_output False
         iniset $TEMPEST_CONFIG compute-feature-enabled interface_attach False
@@ -487,7 +517,7 @@
         iniset $TEMPEST_CONFIG compute-feature-enabled suspend False
     fi
 
-    # service_available
+    # ``service_available``
     for service in ${TEMPEST_SERVICES//,/ }; do
         if is_service_enabled $service ; then
             iniset $TEMPEST_CONFIG service_available $service "True"
@@ -497,7 +527,7 @@
     done
 
     if is_ssl_enabled_service "key" || is_service_enabled tls-proxy; then
-        # Use the BOTO_CONFIG environment variable to point to this file
+        # Use the ``BOTO_CONFIG`` environment variable to point to this file
         iniset $BOTO_CONF Boto ca_certificates_file $SSL_BUNDLE_FILE
         sudo chown $STACK_USER $BOTO_CONF
     fi
@@ -512,7 +542,6 @@
 # ------------------------------------------------------------------
 # alt_demo             alt_demo     Member
 
-# Migrated from keystone_data.sh
 function create_tempest_accounts {
     if is_service_enabled tempest; then
         # Tempest has some tests that validate various authorization checks
@@ -523,13 +552,13 @@
     fi
 }
 
-# install_tempest_lib() - Collect source, prepare, and install tempest-lib
+# install_tempest_lib() - Collect source, prepare, and install ``tempest-lib``
 function install_tempest_lib {
     if use_library_from_git "tempest-lib"; then
         git_clone_by_name "tempest-lib"
         setup_dev_lib "tempest-lib"
-        # NOTE(mtreinish) For testing tempest-lib from git with tempest we need
-        # put the git version of tempest-lib in the tempest job's tox venv
+        # NOTE(mtreinish) For testing ``tempest-lib`` from git with Tempest we need to
+        # put the git version of ``tempest-lib`` in the Tempest job's tox venv
         export PIP_VIRTUAL_ENV=${PROJECT_VENV["tempest"]}
         setup_dev_lib "tempest-lib"
         unset PIP_VIRTUAL_ENV
@@ -547,7 +576,7 @@
     popd
 }
 
-# init_tempest() - Initialize ec2 images
+# init_tempest() - Initialize EC2 images
 function init_tempest {
     local base_image_name=cirros-${CIRROS_VERSION}-${CIRROS_ARCH}
     # /opt/stack/devstack/files/images/cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-uec
@@ -556,12 +585,14 @@
     local ramdisk="$image_dir/${base_image_name}-initrd"
     local disk_image="$image_dir/${base_image_name}-blank.img"
     if is_service_enabled nova; then
-        # if the cirros uec downloaded and the system is uec capable
+        # If the CirrOS UEC image has been downloaded and the system is UEC capable
         if [ -f "$kernel" -a -f "$ramdisk" -a -f "$disk_image" -a  "$VIRT_DRIVER" != "openvz" \
             -a \( "$LIBVIRT_TYPE" != "lxc" -o "$VIRT_DRIVER" != "libvirt" \) ]; then
             echo "Prepare aki/ari/ami Images"
             mkdir -p $BOTO_MATERIALS_PATH
             ( #new namespace
+                # euca2ools must be installed to run the euca-* commands
+                is_package_installed euca2ools || install_package euca2ools
                 # tenant:demo ; user: demo
                 source $TOP_DIR/accrc/demo/demo
                 euca-bundle-image -r ${CIRROS_ARCH} -i "$kernel" --kernel true -d "$BOTO_MATERIALS_PATH"
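
The new ``TEMPEST_HAS_ADMIN`` toggle above means Tempest can be pointed at a
cloud where no admin credentials exist. A hypothetical ``local.conf``
fragment exercising it; note that ``TEMPEST_ALLOW_TENANT_ISOLATION`` follows
``TEMPEST_HAS_ADMIN`` unless set explicitly::

    [[local|localrc]]
    # Run Tempest without admin credentials, as against a public cloud
    TEMPEST_HAS_ADMIN=False
    # Tenant isolation requires admin, so it defaults to False here too
    #TEMPEST_ALLOW_TENANT_ISOLATION=False
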
diff --git a/lib/tls b/lib/tls
index 677895b..09f1c2d 100644
--- a/lib/tls
+++ b/lib/tls
@@ -32,6 +32,7 @@
 # - is_ssl_enabled_service
 # - enable_mod_ssl
 
+
 # Defaults
 # --------
 
@@ -92,7 +93,6 @@
     cp /dev/null $ca_dir/index.txt
 }
 
-
 # Create a new CA configuration file
 # create_CA_config ca-dir common-name
 function create_CA_config {
@@ -248,7 +248,6 @@
     fi
 }
 
-
 # make_cert creates and signs a new certificate with the given commonName and CA
 # make_cert ca-dir cert-name "common-name" ["alt-name" ...]
 function make_cert {
@@ -287,7 +286,6 @@
     fi
 }
 
-
 # Make an intermediate CA to sign everything else
 # make_int_CA ca-dir signing-ca-dir
 function make_int_CA {
@@ -362,17 +360,16 @@
     return 1
 }
 
-
 # Ensure that the certificates for a service are in place. This function does
 # not check that a service is SSL enabled, this should already have been
 # completed.
 #
 # The function expects to find a certificate, key and CA certificate in the
-# variables {service}_SSL_CERT, {service}_SSL_KEY and {service}_SSL_CA. For
-# example for keystone this would be KEYSTONE_SSL_CERT, KEYSTONE_SSL_KEY and
-# KEYSTONE_SSL_CA.
+# variables ``{service}_SSL_CERT``, ``{service}_SSL_KEY`` and ``{service}_SSL_CA``. For
+# example for keystone this would be ``KEYSTONE_SSL_CERT``, ``KEYSTONE_SSL_KEY`` and
+# ``KEYSTONE_SSL_CA``.
 #
-# If it does not find these certificates then the devstack-issued server
+# If it does not find these certificates then the DevStack-issued server
 # certificate, key and CA certificate will be associated with the service.
 #
 # If only some of the variables are provided then the function will quit.
@@ -437,14 +434,12 @@
 # Cleanup Functions
 # =================
 
-
 # Stops all stud processes. This should be done only after all services
 # using tls configuration are down.
 function stop_tls_proxy {
     killall stud
 }
 
-
 # Remove CA along with configuration, as well as the local server certificate
 function cleanup_CA {
     rm -rf "$DATA_DIR/CA" "$DEVSTACK_CERT"
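
To illustrate the ``{service}_SSL_*`` convention documented above, a
hypothetical ``local.conf`` fragment supplying an operator-provided
certificate for Keystone (the paths are made up; set all three variables
together, since the function quits if only some are provided, and with none
set the DevStack-issued certificate is used)::

    [[local|localrc]]
    KEYSTONE_SSL_CERT=/etc/ssl/certs/keystone.pem
    KEYSTONE_SSL_KEY=/etc/ssl/private/keystone.key
    KEYSTONE_SSL_CA=/etc/ssl/certs/ca-chain.pem
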
diff --git a/lib/trove b/lib/trove
deleted file mode 100644
index d777983..0000000
--- a/lib/trove
+++ /dev/null
@@ -1,255 +0,0 @@
-#!/bin/bash
-#
-# lib/trove
-# Functions to control the configuration and operation of the **Trove** service
-
-# Dependencies:
-# ``functions`` file
-# ``DEST``, ``STACK_USER`` must be defined
-# ``SERVICE_{HOST|PROTOCOL|TOKEN}`` must be defined
-
-# ``stack.sh`` calls the entry points in this order:
-#
-# install_trove
-# configure_trove
-# init_trove
-# start_trove
-# stop_trove
-# cleanup_trove
-
-# Save trace setting
-XTRACE=$(set +o | grep xtrace)
-set +o xtrace
-
-# Defaults
-# --------
-if is_service_enabled neutron; then
-    TROVE_HOST_GATEWAY=${PUBLIC_NETWORK_GATEWAY:-172.24.4.1}
-else
-    TROVE_HOST_GATEWAY=${NETWORK_GATEWAY:-10.0.0.1}
-fi
-
-# Set up default configuration
-GITDIR["python-troveclient"]=$DEST/python-troveclient
-
-TROVE_DIR=$DEST/trove
-TROVE_CONF_DIR=/etc/trove
-TROVE_CONF=$TROVE_CONF_DIR/trove.conf
-TROVE_TASKMANAGER_CONF=$TROVE_CONF_DIR/trove-taskmanager.conf
-TROVE_CONDUCTOR_CONF=$TROVE_CONF_DIR/trove-conductor.conf
-TROVE_GUESTAGENT_CONF=$TROVE_CONF_DIR/trove-guestagent.conf
-TROVE_API_PASTE_INI=$TROVE_CONF_DIR/api-paste.ini
-
-TROVE_LOCAL_CONF_DIR=$TROVE_DIR/etc/trove
-TROVE_LOCAL_API_PASTE_INI=$TROVE_LOCAL_CONF_DIR/api-paste.ini
-TROVE_AUTH_CACHE_DIR=${TROVE_AUTH_CACHE_DIR:-/var/cache/trove}
-TROVE_DATASTORE_TYPE=${TROVE_DATASTORE_TYPE:-"mysql"}
-TROVE_DATASTORE_VERSION=${TROVE_DATASTORE_VERSION:-"5.5"}
-TROVE_DATASTORE_PACKAGE=${TROVE_DATASTORE_PACKAGE:-"mysql-server-5.5"}
-
-# Support entry points installation of console scripts
-if [[ -d $TROVE_DIR/bin ]]; then
-    TROVE_BIN_DIR=$TROVE_DIR/bin
-else
-    TROVE_BIN_DIR=$(get_python_exec_prefix)
-fi
-TROVE_MANAGE=$TROVE_BIN_DIR/trove-manage
-
-# Tell Tempest this project is present
-TEMPEST_SERVICES+=,trove
-
-
-# Functions
-# ---------
-
-# Test if any Trove services are enabled
-# is_trove_enabled
-function is_trove_enabled {
-    [[ ,${ENABLED_SERVICES} =~ ,"tr-" ]] && return 0
-    return 1
-}
-
-# setup_trove_logging() - Adds logging configuration to conf files
-function setup_trove_logging {
-    local CONF=$1
-    iniset $CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
-    iniset $CONF DEFAULT use_syslog $SYSLOG
-    if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ]; then
-        # Add color to logging output
-        setup_colorized_logging $CONF DEFAULT tenant user
-    fi
-}
-
-# create_trove_accounts() - Set up common required trove accounts
-
-# Tenant               User       Roles
-# ------------------------------------------------------------------
-# service              trove     admin        # if enabled
-
-function create_trove_accounts {
-    if [[ "$ENABLED_SERVICES" =~ "trove" ]]; then
-
-        create_service_user "trove"
-
-        if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
-
-            local trove_service=$(get_or_create_service "trove" \
-                "database" "Trove Service")
-            get_or_create_endpoint $trove_service \
-                "$REGION_NAME" \
-                "http://$SERVICE_HOST:8779/v1.0/\$(tenant_id)s" \
-                "http://$SERVICE_HOST:8779/v1.0/\$(tenant_id)s" \
-                "http://$SERVICE_HOST:8779/v1.0/\$(tenant_id)s"
-        fi
-    fi
-}
-
-# stack.sh entry points
-# ---------------------
-
-# cleanup_trove() - Remove residual data files, anything left over from previous
-# runs that a clean run would need to clean up
-function cleanup_trove {
-    #Clean up dirs
-    rm -fr $TROVE_AUTH_CACHE_DIR/*
-    rm -fr $TROVE_CONF_DIR/*
-}
-
-
-# configure_trove() - Set config files, create data dirs, etc
-function configure_trove {
-    setup_develop $TROVE_DIR
-
-    # Create the trove conf dir and cache dirs if they don't exist
-    sudo mkdir -p ${TROVE_CONF_DIR}
-    sudo mkdir -p ${TROVE_AUTH_CACHE_DIR}
-    sudo chown -R $STACK_USER: ${TROVE_CONF_DIR}
-    sudo chown -R $STACK_USER: ${TROVE_AUTH_CACHE_DIR}
-
-    # Copy api-paste file over to the trove conf dir
-    cp $TROVE_LOCAL_API_PASTE_INI $TROVE_API_PASTE_INI
-
-    # (Re)create trove conf files
-    rm -f $TROVE_CONF
-    rm -f $TROVE_TASKMANAGER_CONF
-    rm -f $TROVE_CONDUCTOR_CONF
-
-    iniset $TROVE_CONF DEFAULT rabbit_userid $RABBIT_USERID
-    iniset $TROVE_CONF DEFAULT rabbit_password $RABBIT_PASSWORD
-    iniset $TROVE_CONF DEFAULT sql_connection `database_connection_url trove`
-    iniset $TROVE_CONF DEFAULT default_datastore $TROVE_DATASTORE_TYPE
-    setup_trove_logging $TROVE_CONF
-    iniset $TROVE_CONF DEFAULT trove_api_workers "$API_WORKERS"
-
-    configure_auth_token_middleware $TROVE_CONF trove $TROVE_AUTH_CACHE_DIR
-
-    # (Re)create trove taskmanager conf file if needed
-    if is_service_enabled tr-tmgr; then
-        TROVE_AUTH_ENDPOINT=$KEYSTONE_AUTH_URI/v$IDENTITY_API_VERSION
-
-        iniset $TROVE_TASKMANAGER_CONF DEFAULT rabbit_userid $RABBIT_USERID
-        iniset $TROVE_TASKMANAGER_CONF DEFAULT rabbit_password $RABBIT_PASSWORD
-        iniset $TROVE_TASKMANAGER_CONF DEFAULT sql_connection `database_connection_url trove`
-        iniset $TROVE_TASKMANAGER_CONF DEFAULT taskmanager_manager trove.taskmanager.manager.Manager
-        iniset $TROVE_TASKMANAGER_CONF DEFAULT nova_proxy_admin_user radmin
-        iniset $TROVE_TASKMANAGER_CONF DEFAULT nova_proxy_admin_tenant_name trove
-        iniset $TROVE_TASKMANAGER_CONF DEFAULT nova_proxy_admin_pass $RADMIN_USER_PASS
-        iniset $TROVE_TASKMANAGER_CONF DEFAULT trove_auth_url $TROVE_AUTH_ENDPOINT
-        setup_trove_logging $TROVE_TASKMANAGER_CONF
-    fi
-
-    # (Re)create trove conductor conf file if needed
-    if is_service_enabled tr-cond; then
-        iniset $TROVE_CONDUCTOR_CONF DEFAULT rabbit_userid $RABBIT_USERID
-        iniset $TROVE_CONDUCTOR_CONF DEFAULT rabbit_password $RABBIT_PASSWORD
-        iniset $TROVE_CONDUCTOR_CONF DEFAULT sql_connection `database_connection_url trove`
-        iniset $TROVE_CONDUCTOR_CONF DEFAULT nova_proxy_admin_user radmin
-        iniset $TROVE_CONDUCTOR_CONF DEFAULT nova_proxy_admin_tenant_name trove
-        iniset $TROVE_CONDUCTOR_CONF DEFAULT nova_proxy_admin_pass $RADMIN_USER_PASS
-        iniset $TROVE_CONDUCTOR_CONF DEFAULT trove_auth_url $TROVE_AUTH_ENDPOINT
-        iniset $TROVE_CONDUCTOR_CONF DEFAULT control_exchange trove
-        setup_trove_logging $TROVE_CONDUCTOR_CONF
-    fi
-
-    # Set up Guest Agent conf
-    iniset $TROVE_GUESTAGENT_CONF DEFAULT rabbit_userid $RABBIT_USERID
-    iniset $TROVE_GUESTAGENT_CONF DEFAULT rabbit_host $TROVE_HOST_GATEWAY
-    iniset $TROVE_GUESTAGENT_CONF DEFAULT rabbit_password $RABBIT_PASSWORD
-    iniset $TROVE_GUESTAGENT_CONF DEFAULT nova_proxy_admin_user radmin
-    iniset $TROVE_GUESTAGENT_CONF DEFAULT nova_proxy_admin_tenant_name trove
-    iniset $TROVE_GUESTAGENT_CONF DEFAULT nova_proxy_admin_pass $RADMIN_USER_PASS
-    iniset $TROVE_GUESTAGENT_CONF DEFAULT trove_auth_url $TROVE_AUTH_ENDPOINT
-    iniset $TROVE_GUESTAGENT_CONF DEFAULT control_exchange trove
-    iniset $TROVE_GUESTAGENT_CONF DEFAULT ignore_users os_admin
-    iniset $TROVE_GUESTAGENT_CONF DEFAULT log_dir /var/log/trove/
-    iniset $TROVE_GUESTAGENT_CONF DEFAULT log_file trove-guestagent.log
-    setup_trove_logging $TROVE_GUESTAGENT_CONF
-}
-
-# install_troveclient() - Collect source and prepare
-function install_troveclient {
-    if use_library_from_git "python-troveclient"; then
-        git_clone_by_name "python-troveclient"
-        setup_dev_lib "python-troveclient"
-    fi
-}
-
-# install_trove() - Collect source and prepare
-function install_trove {
-    git_clone $TROVE_REPO $TROVE_DIR $TROVE_BRANCH
-}
-
-# init_trove() - Initializes Trove Database as a Service
-function init_trove {
-    # (Re)Create trove db
-    recreate_database trove
-
-    # Initialize the trove database
-    $TROVE_MANAGE db_sync
-
-    # If no guest image is specified, skip remaining setup
-    [ -z "$TROVE_GUEST_IMAGE_URL" ] && return 0
-
-    # Find the glance id for the trove guest image
-    # The image is uploaded by stack.sh -- see $IMAGE_URLS handling
-    GUEST_IMAGE_NAME=$(basename "$TROVE_GUEST_IMAGE_URL")
-    GUEST_IMAGE_NAME=${GUEST_IMAGE_NAME%.*}
-    TROVE_GUEST_IMAGE_ID=$(openstack --os-token $TOKEN --os-url $GLANCE_SERVICE_PROTOCOL://$GLANCE_HOSTPORT image list | grep "${GUEST_IMAGE_NAME}" | get_field 1)
-    if [ -z "$TROVE_GUEST_IMAGE_ID" ]; then
-        # If no glance id is found, skip remaining setup
-        echo "Datastore ${TROVE_DATASTORE_TYPE} will not be created: guest image ${GUEST_IMAGE_NAME} not found."
-        return 1
-    fi
-
-    # Now that we have the guest image id, initialize appropriate datastores / datastore versions
-    $TROVE_MANAGE datastore_update "$TROVE_DATASTORE_TYPE" ""
-    $TROVE_MANAGE datastore_version_update "$TROVE_DATASTORE_TYPE" "$TROVE_DATASTORE_VERSION" "$TROVE_DATASTORE_TYPE" \
-        "$TROVE_GUEST_IMAGE_ID" "$TROVE_DATASTORE_PACKAGE" 1
-    $TROVE_MANAGE datastore_version_update "$TROVE_DATASTORE_TYPE" "inactive_version" "inactive_manager" "$TROVE_GUEST_IMAGE_ID" "" 0
-    $TROVE_MANAGE datastore_update "$TROVE_DATASTORE_TYPE" "$TROVE_DATASTORE_VERSION"
-    $TROVE_MANAGE datastore_update "Inactive_Datastore" ""
-}
-
-# start_trove() - Start running processes, including screen
-function start_trove {
-    run_process tr-api "$TROVE_BIN_DIR/trove-api --config-file=$TROVE_CONF --debug"
-    run_process tr-tmgr "$TROVE_BIN_DIR/trove-taskmanager --config-file=$TROVE_TASKMANAGER_CONF --debug"
-    run_process tr-cond "$TROVE_BIN_DIR/trove-conductor --config-file=$TROVE_CONDUCTOR_CONF --debug"
-}
-
-# stop_trove() - Stop running processes
-function stop_trove {
-    # Kill the trove screen windows
-    local serv
-    for serv in tr-api tr-tmgr tr-cond; do
-        stop_process $serv
-    done
-}
-
-# Restore xtrace
-$XTRACE
-
-# Tell emacs to use shell-script-mode
-## Local variables:
-## mode: shell-script
-## End:
diff --git a/lib/zaqar b/lib/zaqar
index 79b4c5a..8d51910 100644
--- a/lib/zaqar
+++ b/lib/zaqar
@@ -105,8 +105,7 @@
 function configure_zaqar {
     setup_develop $ZAQAR_DIR
 
-    [ ! -d $ZAQAR_CONF_DIR ] && sudo mkdir -m 755 -p $ZAQAR_CONF_DIR
-    sudo chown $USER $ZAQAR_CONF_DIR
+    sudo install -d -o $STACK_USER -m 755 $ZAQAR_CONF_DIR
 
     iniset $ZAQAR_CONF DEFAULT debug True
     iniset $ZAQAR_CONF DEFAULT verbose True
@@ -133,7 +132,7 @@
         iniset $ZAQAR_CONF DEFAULT notification_driver messaging
         iniset $ZAQAR_CONF DEFAULT control_exchange zaqar
     fi
-    iniset_rpc_backend zaqar $ZAQAR_CONF DEFAULT
+    iniset_rpc_backend zaqar $ZAQAR_CONF
 
     cleanup_zaqar
 }
@@ -141,10 +140,10 @@
 function configure_redis {
     if is_ubuntu; then
         install_package redis-server
-        pip_install redis
+        pip_install_gr redis
     elif is_fedora; then
         install_package redis
-        pip_install redis
+        pip_install_gr redis
     else
         exit_distro_not_supported "redis installation"
     fi
@@ -168,8 +167,7 @@
 # init_zaqar() - Initialize etc.
 function init_zaqar {
     # Create cache dir
-    sudo mkdir -p $ZAQAR_AUTH_CACHE_DIR
-    sudo chown $STACK_USER $ZAQAR_AUTH_CACHE_DIR
+    sudo install -d -o $STACK_USER $ZAQAR_AUTH_CACHE_DIR
     rm -f $ZAQAR_AUTH_CACHE_DIR/*
 }
 
diff --git a/openrc b/openrc
index aec8a2a..64faa58 100644
--- a/openrc
+++ b/openrc
@@ -78,8 +78,14 @@
 #
 export OS_AUTH_URL=$KEYSTONE_AUTH_PROTOCOL://$KEYSTONE_AUTH_HOST:5000/v${OS_IDENTITY_API_VERSION}
 
-# Set the pointer to our CA certificate chain.  Harmless if TLS is not used.
-export OS_CACERT=${OS_CACERT:-$INT_CA_DIR/ca-chain.pem}
+# Set OS_CACERT to a default CA certificate chain if it exists.
+if [[ ! -v OS_CACERT ]] ; then
+    DEFAULT_OS_CACERT=$INT_CA_DIR/ca-chain.pem
+    # Only export it if the file exists; a missing file may confuse preflight sanity checks
+    if [ -e $DEFAULT_OS_CACERT ] ; then
+        export OS_CACERT=$DEFAULT_OS_CACERT
+    fi
+fi
 
 # Currently novaclient needs you to specify the *compute api* version.  This
 # needs to match the config of your catalog returned by Keystone.
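
The ``openrc`` change above relies on bash 4.2's ``[[ -v VAR ]]``, which
tests whether a variable is set at all rather than non-empty, so a
deliberately exported empty ``OS_CACERT`` is left alone. A quick
illustration::

    unset OS_CACERT
    [[ -v OS_CACERT ]] && echo set || echo unset    # prints: unset
    export OS_CACERT=""
    [[ -v OS_CACERT ]] && echo set || echo unset    # prints: set
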
diff --git a/pkg/elasticsearch.sh b/pkg/elasticsearch.sh
index f53c7f2..79f67a0 100755
--- a/pkg/elasticsearch.sh
+++ b/pkg/elasticsearch.sh
@@ -7,6 +7,8 @@
 TOP_DIR=$(cd $(dirname "$0")/.. && pwd)
 FILES=$TOP_DIR/files
 source $TOP_DIR/functions
+DEST=${DEST:-/opt/stack}
+source $TOP_DIR/lib/infra
 
 # Package source and version, all pkg files are expected to have
 # something like this, as well as a way to override them.
@@ -77,7 +79,7 @@
 }
 
 function install_elasticsearch {
-    pip_install elasticsearch
+    pip_install_gr elasticsearch
     if is_package_installed elasticsearch; then
         echo "Note: elasticsearch was already installed."
         return
diff --git a/run_tests.sh b/run_tests.sh
index 3ba7e10..a9a3d0b 100755
--- a/run_tests.sh
+++ b/run_tests.sh
@@ -11,15 +11,12 @@
 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 # License for the specific language governing permissions and limitations
 # under the License.
-#
-#
-# this runs a series of unit tests for devstack to ensure it's functioning
+
+# This runs a series of unit tests for DevStack to ensure it's functioning
 
 PASSES=""
 FAILURES=""
 
-# Test that no one is trying to land crazy refs as branches
-
 for testfile in tests/test_*.sh; do
     $testfile
     if [[ $? -eq 0 ]]; then
diff --git a/samples/local.conf b/samples/local.conf
index e4052c2..bd0cd9c 100644
--- a/samples/local.conf
+++ b/samples/local.conf
@@ -1,7 +1,6 @@
 # Sample ``local.conf`` for user-configurable variables in ``stack.sh``
 
-# NOTE: Copy this file to the root ``devstack`` directory for it to
-# work properly.
+# NOTE: Copy this file to the root DevStack directory for it to work properly.
 
 # ``local.conf`` is a user-maintained settings file that is sourced from ``stackrc``.
 # This gives it the ability to override any variables set in ``stackrc``.
@@ -98,4 +97,4 @@
 # -------
 
 # Install the tempest test suite
-enable_service tempest
\ No newline at end of file
+enable_service tempest
diff --git a/samples/local.sh b/samples/local.sh
index 664cb66..634f6dd 100755
--- a/samples/local.sh
+++ b/samples/local.sh
@@ -3,15 +3,14 @@
 # Sample ``local.sh`` for user-configurable tasks to run automatically
 # at the successful conclusion of ``stack.sh``.
 
-# NOTE: Copy this file to the root ``devstack`` directory for it to
-# work properly.
+# NOTE: Copy this file to the root DevStack directory for it to work properly.
 
 # This is a collection of some of the things we have found to be useful to run
 # after ``stack.sh`` to tweak the OpenStack configuration that DevStack produces.
 # These should be considered as samples and are unsupported DevStack code.
 
 
-# Keep track of the devstack directory
+# Keep track of the DevStack directory
 TOP_DIR=$(cd $(dirname "$0") && pwd)
 
 # Import common functions
@@ -50,7 +49,7 @@
     source $TOP_DIR/openrc admin admin
 
     # Name of new flavor
-    # set in ``localrc`` with ``DEFAULT_INSTANCE_TYPE=m1.micro``
+    # set in ``local.conf`` with ``DEFAULT_INSTANCE_TYPE=m1.micro``
     MI_NAME=m1.micro
 
     # Create micro flavor if not present
diff --git a/stack.sh b/stack.sh
index 615b77f..dc79fa9 100755
--- a/stack.sh
+++ b/stack.sh
@@ -16,18 +16,11 @@
 # (14.04 Trusty or newer), **Fedora** (F20 or newer), or **CentOS/RHEL**
 # (7 or newer) machine. (It may work on other platforms but support for those
 # platforms is left to those who added them to DevStack.) It should work in
-# a VM or physical server. Additionally, we maintain a list of ``apt`` and
+# a VM or physical server. Additionally, we maintain a list of ``deb`` and
 # ``rpm`` dependencies and other configuration files in this repo.
 
 # Learn more and get the most recent version at http://devstack.org
 
-# check if someone has invoked with "sh"
-if [[ "${POSIXLY_CORRECT}" == "y" ]]; then
-    echo "You appear to be running bash in POSIX compatability mode."
-    echo "devstack uses bash features. \"./stack.sh\" should do the right thing"
-    exit 1
-fi
-
 # Make sure custom grep options don't get in the way
 unset GREP_OPTIONS
 
@@ -44,7 +37,7 @@
 # Not all distros have sbin in PATH for regular users.
 PATH=$PATH:/usr/local/sbin:/usr/sbin:/sbin
 
-# Keep track of the devstack directory
+# Keep track of the DevStack directory
 TOP_DIR=$(cd $(dirname "$0") && pwd)
 
 # Check for uninitialized variables, a big cause of bugs
@@ -53,6 +46,10 @@
     set -o nounset
 fi
 
+
+# Configuration
+# =============
+
 # Sanity Checks
 # -------------
 
@@ -61,7 +58,7 @@
     rm $TOP_DIR/.stackenv
 fi
 
-# ``stack.sh`` keeps the list of ``apt`` and ``rpm`` dependencies and config
+# ``stack.sh`` keeps the list of ``deb`` and ``rpm`` dependencies, config
 # templates and other useful files in the ``files`` subdirectory
 FILES=$TOP_DIR/files
 if [ ! -d $FILES ]; then
@@ -69,12 +66,23 @@
 fi
 
 # ``stack.sh`` keeps function libraries here
+# Make sure ``$TOP_DIR/inc`` directory is present
+if [ ! -d $TOP_DIR/inc ]; then
+    die $LINENO "missing devstack/inc"
+fi
+
+# ``stack.sh`` keeps project libraries here
 # Make sure ``$TOP_DIR/lib`` directory is present
 if [ ! -d $TOP_DIR/lib ]; then
     die $LINENO "missing devstack/lib"
 fi
 
-# Check if run as root
+# Check if run in POSIX shell
+if [[ "${POSIXLY_CORRECT}" == "y" ]]; then
+    echo "You are running POSIX compatibility mode, DevStack requires bash 4.2 or newer."
+    exit 1
+fi
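+
+# When invoked as ``sh``, bash enters POSIX mode and on most builds sets
+# ``POSIXLY_CORRECT=y`` itself, which is what the check above keys on.
+# Illustrative only, assuming ``sh`` on the system is bash::
+#
+#     sh -c 'echo ${POSIXLY_CORRECT:-unset}'     # y
+#     bash -c 'echo ${POSIXLY_CORRECT:-unset}'   # unset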
+
 # OpenStack is designed to be run as a non-root user; Horizon will fail to run
 # as **root** since Apache will not serve content from the **root** user.
 # ``stack.sh`` must not be run as **root**.  It aborts and suggests one course of
@@ -89,6 +97,7 @@
     exit 1
 fi
 
+
 # Prepare the environment
 # -----------------------
 
@@ -109,6 +118,7 @@
 # and ``DISTRO``
 GetDistro
 
+
 # Global Settings
 # ---------------
 
@@ -131,7 +141,6 @@
     done
 fi
 
-
 # ``stack.sh`` is customizable by setting environment variables.  Override a
 # default setting via export::
 #
@@ -142,18 +151,20 @@
 #
 #     DATABASE_PASSWORD=simple ./stack.sh
 #
-# Persistent variables can be placed in a ``localrc`` file::
+# Persistent variables can be placed in a ``local.conf`` file::
 #
+#     [[local|localrc]]
 #     DATABASE_PASSWORD=anothersecret
 #     DATABASE_USER=hellaroot
 #
 # We try to have sensible defaults, so you should be able to run ``./stack.sh``
-# in most cases.  ``localrc`` is not distributed with DevStack and will never
+# in most cases.  ``local.conf`` is not distributed with DevStack and will never
 # be overwritten by a DevStack update.
 #
 # DevStack distributes ``stackrc`` which contains locations for the OpenStack
 # repositories, branches to configure, and other configuration defaults.
-# ``stackrc`` sources ``localrc`` to allow you to safely override those settings.
+# ``stackrc`` sources the ``localrc`` section of ``local.conf`` to allow you to
+# safely override those settings.
 
 if [[ ! -r $TOP_DIR/stackrc ]]; then
     die $LINENO "missing $TOP_DIR/stackrc - did you grab more than just stack.sh?"
@@ -162,7 +173,7 @@
 
 # Warn users who aren't on an explicitly supported distro, but allow them to
 # override check and attempt installation with ``FORCE=yes ./stack``
-if [[ ! ${DISTRO} =~ (precise|trusty|7.0|wheezy|sid|testing|jessie|f20|f21|rhel7) ]]; then
+if [[ ! ${DISTRO} =~ (precise|trusty|utopic|vivid|7.0|wheezy|sid|testing|jessie|f20|f21|f22|rhel7) ]]; then
     echo "WARNING: this script has not been tested on $DISTRO"
     if [[ "$FORCE" != "yes" ]]; then
         die $LINENO "If you wish to run this script anyway run with FORCE=yes"
@@ -185,34 +196,27 @@
 # Make sure the proxy config is visible to sub-processes
 export_proxy_variables
 
-# Remove services which were negated in ENABLED_SERVICES
+# Remove services which were negated in ``ENABLED_SERVICES``
 # using the "-" prefix (e.g., "-rabbit") instead of
 # calling disable_service().
 disable_negated_services
 
-# Look for obsolete stuff
-# if [[ ,${ENABLED_SERVICES}, =~ ,"swift", ]]; then
-#     echo "FATAL: 'swift' is not supported as a service name"
-#     echo "FATAL: Use the actual swift service names to enable them as required:"
-#     echo "FATAL: s-proxy s-object s-container s-account"
-#     exit 1
-# fi
 
 # Configure sudo
 # --------------
 
-# We're not **root**, make sure ``sudo`` is available
+# We're not running as **root**, so make sure ``sudo`` is available
 is_package_installed sudo || install_package sudo
 
 # UEC images ``/etc/sudoers`` does not have a ``#includedir``, add one
 sudo grep -q "^#includedir.*/etc/sudoers.d" /etc/sudoers ||
     echo "#includedir /etc/sudoers.d" | sudo tee -a /etc/sudoers
 
-# Set up devstack sudoers
+# Set up DevStack sudoers
 TEMPFILE=`mktemp`
 echo "$STACK_USER ALL=(root) NOPASSWD:ALL" >$TEMPFILE
-# Some binaries might be under /sbin or /usr/sbin, so make sure sudo will
-# see them by forcing PATH
+# Some binaries might be under ``/sbin`` or ``/usr/sbin``, so make sure sudo will
+# see them by forcing ``PATH``
 echo "Defaults:$STACK_USER secure_path=/sbin:/usr/sbin:/usr/bin:/bin:/usr/local/sbin:/usr/local/bin" >> $TEMPFILE
 echo "Defaults:$STACK_USER !requiretty" >> $TEMPFILE
 chmod 0440 $TEMPFILE
@@ -223,7 +227,7 @@
 # Configure Distro Repositories
 # -----------------------------
 
-# For debian/ubuntu make apt attempt to retry network ops on it's own
+# For Debian/Ubuntu, make apt attempt to retry network ops on its own
 if is_ubuntu; then
     echo 'APT::Acquire::Retries "20";' | sudo tee /etc/apt/apt.conf.d/80retry  >/dev/null
 fi
@@ -234,7 +238,7 @@
 if is_fedora && [[ $DISTRO == "rhel7" ]]; then
     # RHEL requires EPEL for many Open Stack dependencies
 
-    # note we always remove and install latest -- some environments
+    # NOTE: We always remove and install latest -- some environments
     # use snapshot images, and if EPEL version updates they break
     # unless we update them to latest version.
     if sudo yum repolist enabled epel | grep -q 'epel'; then
@@ -245,7 +249,7 @@
     # repo, then removes itself (as epel-release installed the
     # "real" repo).
     #
-    # you would think that rather than this, you could use
+    # You would think that rather than this, you could use
     # $releasever directly in .repo file we create below.  However
     # RHEL gives a $releasever of "6Server" which breaks the path;
     # see https://bugzilla.redhat.com/show_bug.cgi?id=1150759
@@ -262,7 +266,7 @@
     sudo yum-config-manager --enable epel-bootstrap
     yum_install epel-release || \
         die $LINENO "Error installing EPEL repo, cannot continue"
-    # epel rpm has installed it's version
+    # EPEL rpm has installed its version
     sudo rm -f /etc/yum.repos.d/epel-bootstrap.repo
 
     # ... and also optional to be enabled
@@ -278,6 +282,10 @@
             die $LINENO "Error installing RDO repo, cannot continue"
     fi
 
+    if is_oraclelinux; then
+        sudo yum-config-manager --enable ol7_optional_latest ol7_addons ol7_MySQL56
+    fi
+
 fi
 
 
@@ -293,7 +301,7 @@
 safe_chown -R $STACK_USER $DEST
 safe_chmod 0755 $DEST
 
-# a basic test for $DEST path permissions (fatal on error unless skipped)
+# Basic test for ``$DEST`` path permissions (fatal on error unless skipped)
 check_path_perm_sanity ${DEST}
 
 # Destination path for service data
@@ -481,6 +489,9 @@
 # an error.  It is also useful for following along as the install occurs.
 set -o xtrace
 
+# Print the kernel version
+uname -a
+
 # Reset the bundle of CA certificates
 SSL_BUNDLE_FILE="$DATA_DIR/ca-bundle.pem"
 rm -f $SSL_BUNDLE_FILE
@@ -493,8 +504,8 @@
 # and the specified rpc backend is available on your platform.
 check_rpc_backend
 
-# Service to enable with SSL if USE_SSL is True
-SSL_ENABLED_SERVICES="key,nova,cinder,glance,s-proxy,neutron"
+# Services to enable with SSL if ``USE_SSL`` is True
+SSL_ENABLED_SERVICES="key,nova,cinder,glance,s-proxy,neutron,sahara"
 
 if is_service_enabled tls-proxy && [ "$USE_SSL" == "True" ]; then
     die $LINENO "tls-proxy and SSL are mutually exclusive"
@@ -503,7 +514,14 @@
 # Configure Projects
 # ==================
 
-# Import apache functions
+# Clone all external plugins
+fetch_plugins
+
+# Plugin Phase 0: override_defaults - allow plugins to override
+# defaults before other services are run
+run_phase override_defaults
+
+# Import Apache functions
 source $TOP_DIR/lib/apache
 
 # Import TLS functions
@@ -521,13 +539,10 @@
 source $TOP_DIR/lib/swift
 source $TOP_DIR/lib/ceilometer
 source $TOP_DIR/lib/heat
-source $TOP_DIR/lib/neutron
+source $TOP_DIR/lib/neutron-legacy
 source $TOP_DIR/lib/ldap
 source $TOP_DIR/lib/dstat
 
-# Clone all external plugins
-fetch_plugins
-
 # Extras Source
 # --------------
 
@@ -587,8 +602,9 @@
 
 
 # Database Configuration
+# ----------------------
 
-# To select between database backends, add the following to ``localrc``:
+# To select between database backends, add the following to ``local.conf``:
 #
 #    disable_service mysql
 #    enable_service postgresql
@@ -600,9 +616,10 @@
 
 
 # Queue Configuration
+# -------------------
 
 # Rabbit connection info
-# In multi node devstack, second node needs RABBIT_USERID, but rabbit
+# In multi-node DevStack, the second node needs ``RABBIT_USERID``, but rabbit
 # isn't enabled.
 RABBIT_USERID=${RABBIT_USERID:-stackrabbit}
 if is_service_enabled rabbit; then
@@ -612,6 +629,7 @@
 
 
 # Keystone
+# --------
 
 if is_service_enabled keystone; then
     # The ``SERVICE_TOKEN`` is used to bootstrap the Keystone database.  It is
@@ -623,14 +641,14 @@
     read_password ADMIN_PASSWORD "ENTER A PASSWORD TO USE FOR HORIZON AND KEYSTONE (20 CHARS OR LESS)."
 
     # Keystone can now optionally install OpenLDAP by enabling the ``ldap``
-    # service in ``localrc`` (e.g. ``enable_service ldap``).
+    # service in ``local.conf`` (e.g. ``enable_service ldap``).
     # To clean out the Keystone contents in OpenLDAP set ``KEYSTONE_CLEAR_LDAP``
-    # to ``yes`` (e.g. ``KEYSTONE_CLEAR_LDAP=yes``) in ``localrc``.  To enable the
+    # to ``yes`` (e.g. ``KEYSTONE_CLEAR_LDAP=yes``) in ``local.conf``.  To enable the
     # Keystone Identity Driver (``keystone.identity.backends.ldap.Identity``)
     # set ``KEYSTONE_IDENTITY_BACKEND`` to ``ldap`` (e.g.
-    # ``KEYSTONE_IDENTITY_BACKEND=ldap``) in ``localrc``.
+    # ``KEYSTONE_IDENTITY_BACKEND=ldap``) in ``local.conf``.
 
-    # only request ldap password if the service is enabled
+    # Only request LDAP password if the service is enabled
     if is_service_enabled ldap; then
         read_password LDAP_PASSWORD "ENTER A PASSWORD TO USE FOR LDAP"
     fi
@@ -638,6 +656,7 @@
 
 
 # Swift
+# -----
 
 if is_service_enabled s-proxy; then
     # We only ask for Swift Hash if we have enabled swift service.
@@ -661,14 +680,14 @@
 echo_summary "Installing package prerequisites"
 source $TOP_DIR/tools/install_prereqs.sh
 
-# Configure an appropriate python environment
+# Configure an appropriate Python environment
 if [[ "$OFFLINE" != "True" ]]; then
     PYPI_ALTERNATIVE_URL=${PYPI_ALTERNATIVE_URL:-""} $TOP_DIR/tools/install_pip.sh
 fi
 
 TRACK_DEPENDS=${TRACK_DEPENDS:-False}
 
-# Install python packages into a virtualenv so that we can track them
+# Install Python packages into a virtualenv so that we can track them
 if [[ $TRACK_DEPENDS = True ]]; then
     echo_summary "Installing Python packages into a virtualenv $DEST/.venv"
     pip_install -U virtualenv
@@ -686,6 +705,9 @@
 # Virtual Environment
 # -------------------
 
+# Install required infra support libraries
+install_infra
+
 # Pre-build some problematic wheels
 if [[ -n ${WHEELHOUSE:-} && ! -d ${WHEELHOUSE:-} ]]; then
     source $TOP_DIR/tools/build_wheels.sh
@@ -694,7 +716,6 @@
 
 # Extras Pre-install
 # ------------------
-
 # Phase: pre-install
 run_phase stack pre-install
 
@@ -702,6 +723,7 @@
 
 if is_service_enabled $DATABASE_BACKENDS; then
     install_database
+    install_database_python
 fi
 
 if is_service_enabled neutron; then
@@ -713,13 +735,10 @@
 
 echo_summary "Installing OpenStack project source"
 
-# Install required infra support libraries
-install_infra
-
-# Install oslo libraries that have graduated
+# Install Oslo libraries
 install_oslo
 
-# Install clients libraries
+# Install client libraries
 install_keystoneclient
 install_glanceclient
 install_cinderclient
@@ -737,7 +756,6 @@
 # Install middleware
 install_keystonemiddleware
 
-
 if is_service_enabled keystone; then
     if [ "$KEYSTONE_AUTH_HOST" == "$SERVICE_HOST" ]; then
         stack_install_service keystone
@@ -746,12 +764,15 @@
 fi
 
 if is_service_enabled s-proxy; then
+    if is_service_enabled ceilometer; then
+        install_ceilometermiddleware
+    fi
     stack_install_service swift
     configure_swift
 
     # swift3 middleware to provide S3 emulation to Swift
     if is_service_enabled swift3; then
-        # replace the nova-objectstore port by the swift port
+        # Replace the nova-objectstore port by the swift port
         S3_SERVICE_PORT=8080
         git_clone $SWIFT3_REPO $SWIFT3_DIR $SWIFT3_BRANCH
         setup_develop $SWIFT3_DIR
@@ -759,23 +780,25 @@
 fi
 
 if is_service_enabled g-api n-api; then
-    # image catalog service
+    # Image catalog service
     stack_install_service glance
     configure_glance
 fi
 
 if is_service_enabled cinder; then
+    # Block volume service
     stack_install_service cinder
     configure_cinder
 fi
 
 if is_service_enabled neutron; then
+    # Network service
     stack_install_service neutron
     install_neutron_third_party
 fi
 
 if is_service_enabled nova; then
-    # compute service
+    # Compute service
     stack_install_service nova
     cleanup_nova
     configure_nova
@@ -807,26 +830,25 @@
     configure_CA
     init_CA
     init_cert
-    # Add name to /etc/hosts
-    # don't be naive and add to existing line!
+    # Add name to ``/etc/hosts``.
+    # Don't be naive and add to existing line!
 fi
 
+
 # Extras Install
 # --------------
 
 # Phase: install
 run_phase stack install
 
-
-# install the OpenStack client, needed for most setup commands
+# Install the OpenStack client, needed for most setup commands
 if use_library_from_git "python-openstackclient"; then
     git_clone_by_name "python-openstackclient"
     setup_dev_lib "python-openstackclient"
 else
-    pip_install 'python-openstackclient>=1.0.2'
+    pip_install_gr python-openstackclient
 fi
 
-
 if [[ $TRACK_DEPENDS = True ]]; then
     $DEST/.venv/bin/pip freeze > $DEST/requires-post-pip
     if ! diff -Nru $DEST/requires-pre-pip $DEST/requires-post-pip > $DEST/requires.diff; then
@@ -919,7 +941,7 @@
     screen -r $SCREEN_NAME -X setenv PROMPT_COMMAND /bin/true
 fi
 
-# Clear screen rc file
+# Clear ``screenrc`` file
 SCREENRC=$TOP_DIR/$SCREEN_NAME-screenrc
 if [[ -e $SCREENRC ]]; then
     rm -f $SCREENRC
@@ -928,14 +950,16 @@
 # Initialize the directory for service status check
 init_service_check
 
+
+# Start Services
+# ==============
+
 # Dstat
-# -------
+# -----
 
 # A better kind of sysstat, with the top process per time slice
 start_dstat
 
-# Start Services
-# ==============
 
 # Keystone
 # --------
@@ -957,7 +981,7 @@
         SERVICE_ENDPOINT=http://$KEYSTONE_AUTH_HOST:$KEYSTONE_AUTH_PORT_INT/v2.0
     fi
 
-    # Setup OpenStackclient token-flow auth
+    # Set up OpenStackClient token-endpoint auth
     export OS_TOKEN=$SERVICE_TOKEN
     export OS_URL=$SERVICE_ENDPOINT
 
@@ -975,14 +999,14 @@
         create_swift_accounts
     fi
 
-    if is_service_enabled heat && [[ "$HEAT_STANDALONE" != "True" ]]; then
+    if is_service_enabled heat; then
         create_heat_accounts
     fi
 
-    # Begone token-flow auth
+    # Begone token auth
     unset OS_TOKEN OS_URL
 
-    # Set up password-flow auth creds now that keystone is bootstrapped
+    # Set up password auth credentials now that Keystone is bootstrapped
     export OS_AUTH_URL=$SERVICE_ENDPOINT
     export OS_TENANT_NAME=admin
     export OS_USERNAME=admin
@@ -1027,7 +1051,7 @@
     echo_summary "Configuring Neutron"
 
     configure_neutron
-    # Run init_neutron only on the node hosting the neutron API server
+    # Run init_neutron only on the node hosting the Neutron API server
     if is_service_enabled $DATABASE_BACKENDS && is_service_enabled q-svc; then
         init_neutron
     fi
@@ -1103,6 +1127,7 @@
     init_nova_cells
 fi
 
+
 # Extras Configuration
 # ====================
 
@@ -1113,7 +1138,7 @@
 # Local Configuration
 # ===================
 
-# Apply configuration from local.conf if it exists for layer 2 services
+# Apply configuration from ``local.conf`` if it exists for layer 2 services
 # Phase: post-config
 merge_config_group $TOP_DIR/local.conf post-config
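+
+# A ``[[post-config|...]]`` meta-section names the config file it patches, so
+# settings land at exactly this point in the run, e.g.::
+#
+#  [[post-config|$NOVA_CONF]]
+#  [DEFAULT]
+#  use_syslog = True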
 
@@ -1135,21 +1160,19 @@
     start_glance
 fi
 
+
 # Install Images
 # ==============
 
-# Upload an image to glance.
+# Upload an image to Glance.
 #
-# The default image is cirros, a small testing image which lets you login as **root**
-# cirros has a ``cloud-init`` analog supporting login via keypair and sending
+# The default image is CirrOS, a small testing image which lets you log in as **root**.
+# CirrOS has a ``cloud-init`` analog supporting login via keypair and sending
 # scripts as userdata.
-# See https://help.ubuntu.com/community/CloudInit for more on cloud-init
-#
-# Override ``IMAGE_URLS`` with a comma-separated list of UEC images.
-#  * **precise**: http://uec-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64.tar.gz
+# See https://help.ubuntu.com/community/CloudInit for more on ``cloud-init``
 
 if is_service_enabled g-reg; then
-    TOKEN=$(keystone token-get | grep ' id ' | get_field 2)
+    TOKEN=$(openstack token issue -c id -f value)
     die_if_not_set $LINENO TOKEN "Keystone fail to get token"
 
     echo_summary "Uploading images"
@@ -1164,7 +1187,7 @@
     done
 fi
 
-# Create an access key and secret key for nova ec2 register image
+# Create an access key and secret key for Nova EC2 image registration
 if is_service_enabled keystone && is_service_enabled swift3 && is_service_enabled nova; then
     eval $(openstack ec2 credentials create --user nova --project $SERVICE_TENANT_NAME -f shell -c access -c secret)
     iniset $NOVA_CONF DEFAULT s3_access_key "$access"
@@ -1227,7 +1250,7 @@
     start_ceilometer
 fi
 
-# Configure and launch heat engine, api and metadata
+# Configure and launch the Heat engine, API and metadata services
 if is_service_enabled heat; then
     # Initialize heat
     echo_summary "Configuring Heat"
@@ -1271,31 +1294,58 @@
     echo $i=${!i} >>$TOP_DIR/.stackenv
 done
 
+# Write out a clouds.yaml file.
+# The location is kept in a variable to allow for easier refactoring later,
+# should it ever need to be overridable; there is currently no use case
+# where doing so makes sense, so it is not overridable now.
+CLOUDS_YAML=~/.config/openstack/clouds.yaml
+if [ ! -e $CLOUDS_YAML ]; then
+    mkdir -p $(dirname $CLOUDS_YAML)
+    cat >"$CLOUDS_YAML" <<EOF
+clouds:
+  devstack:
+    auth:
+      auth_url: $KEYSTONE_AUTH_URI/v$IDENTITY_API_VERSION
+      username: demo
+      project_name: demo
+      password: $ADMIN_PASSWORD
+    region_name: $REGION_NAME
+    identity_api_version: $IDENTITY_API_VERSION
+EOF
+    if [ -f "$SSL_BUNDLE_FILE" ]; then
+        echo "    cacert: $SSL_BUNDLE_FILE" >>"$CLOUDS_YAML"
+    fi
+fi
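+
+# With the file in place, ``openstack`` (and anything else built on
+# os-client-config) can select this cloud by name without sourcing ``openrc``::
+#
+#     openstack --os-cloud devstack server list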
 
-# Local Configuration
-# ===================
 
-# Apply configuration from local.conf if it exists for layer 2 services
+# Wrapup configuration
+# ====================
+
+# local.conf extra
+# ----------------
+
+# Apply configuration from ``local.conf`` if it exists for layer 2 services
 # Phase: extra
 merge_config_group $TOP_DIR/local.conf extra
 
 
 # Run extras
-# ==========
+# ----------
 
 # Phase: extra
 run_phase stack extra
 
-# Local Configuration
-# ===================
 
-# Apply configuration from local.conf if it exists for layer 2 services
+# local.conf post-extra
+# ---------------------
+
+# Apply late configuration from ``local.conf`` if it exists for layer 2 services
 # Phase: post-extra
 merge_config_group $TOP_DIR/local.conf post-extra
 
 
 # Run local script
-# ================
+# ----------------
 
 # Run ``local.sh`` if it exists to perform user-managed tasks
 if [[ -x $TOP_DIR/local.sh ]]; then
@@ -1313,6 +1363,16 @@
 # Prepare bash completion for OSC
 openstack complete | sudo tee /etc/bash_completion.d/osc.bash_completion > /dev/null
 
+# If cinder is configured, set global_filter for PV devices
+if is_service_enabled cinder; then
+    if is_ubuntu; then
+        echo_summary "Configuring lvm.conf global device filter"
+        set_lvm_filter
+    else
+        echo_summary "Skip setting lvm filters for non Ubuntu systems"
+    fi
+fi
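+
+# ``set_lvm_filter`` restricts which block devices LVM scans so the loopback
+# devices backing DevStack volumes don't pollute the host's view. Conceptually
+# the resulting ``/etc/lvm/lvm.conf`` entry looks like (devices vary per host)::
+#
+#     devices {
+#         global_filter = [ "a|loop0|", "r|.*|" ]
+#     }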
+
 
 # Fin
 # ===
@@ -1330,11 +1390,12 @@
 
 
 # Using the cloud
-# ---------------
+# ===============
 
 echo ""
 echo ""
 echo ""
+echo "This is your host ip: $HOST_IP"
 
 # If you installed Horizon on this server you should be able
 # to access the site using your browser.
@@ -1344,15 +1405,11 @@
 
 # If Keystone is present you can point ``nova`` cli to this server
 if is_service_enabled keystone; then
-    echo "Keystone is serving at $KEYSTONE_SERVICE_URI/v2.0/"
-    echo "Examples on using novaclient command line is in exercise.sh"
+    echo "Keystone is serving at $KEYSTONE_SERVICE_URI/"
     echo "The default users are: admin and demo"
     echo "The password: $ADMIN_PASSWORD"
 fi
 
-# Echo ``HOST_IP`` - useful for ``build_uec.sh``, which uses dhcp to give the instance an address
-echo "This is your host ip: $HOST_IP"
-
 # Warn that a deprecated feature was used
 if [[ -n "$DEPRECATED_TEXT" ]]; then
     echo_summary "WARNING: $DEPRECATED_TEXT"
diff --git a/stackrc b/stackrc
index 02b12a3..f8add4b 100644
--- a/stackrc
+++ b/stackrc
@@ -5,7 +5,7 @@
 # Find the other rc files
 RC_DIR=$(cd $(dirname "${BASH_SOURCE:-$0}") && pwd)
 
-# Source required devstack functions and globals
+# Source required DevStack functions and globals
 source $RC_DIR/functions
 
 # Destination path for installation
@@ -41,21 +41,23 @@
 #  enable_service q-dhcp
 #  enable_service q-l3
 #  enable_service q-meta
-#  # Optional, to enable tempest configuration as part of devstack
+#  # Optional, to enable tempest configuration as part of DevStack
 #  enable_service tempest
 
-# this allows us to pass ENABLED_SERVICES
+# This allows us to pass ``ENABLED_SERVICES``
 if ! isset ENABLED_SERVICES ; then
-    # core compute (glance / keystone / nova (+ nova-network))
-    ENABLED_SERVICES=g-api,g-reg,key,n-api,n-crt,n-obj,n-cpu,n-net,n-cond,n-sch,n-novnc,n-xvnc,n-cauth
-    # cinder
+    # Keystone - nothing works without keystone
+    ENABLED_SERVICES=key
+    # Nova - services to support libvirt-based OpenStack clouds
+    ENABLED_SERVICES+=,n-api,n-cpu,n-net,n-cond,n-sch,n-novnc,n-crt,n-cauth
+    # Glance services needed for Nova
+    ENABLED_SERVICES+=,g-api,g-reg
+    # Cinder
     ENABLED_SERVICES+=,c-sch,c-api,c-vol
-    # heat
-    ENABLED_SERVICES+=,h-eng,h-api,h-api-cfn,h-api-cw
-    # dashboard
+    # Dashboard
     ENABLED_SERVICES+=,horizon
-    # additional services
-    ENABLED_SERVICES+=,rabbit,tempest,mysql
+    # Additional services
+    ENABLED_SERVICES+=,rabbit,tempest,mysql,dstat
 fi
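+
+# The default list above is only applied when ``ENABLED_SERVICES`` is unset,
+# so ``local.conf`` can start from it and adjust incrementally instead of
+# re-listing everything, e.g. to swap nova-network for Neutron::
+#
+#  disable_service n-net
+#  enable_service q-svc q-dhcp q-l3 q-meta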
 
 # SQLAlchemy supports multiple database drivers for each database server
@@ -79,15 +81,12 @@
 # Tell Tempest which services are available.  The default is set here as
 # Tempest falls late in the configuration sequence.  This differs from
 # ``ENABLED_SERVICES`` in that the project names are used here rather than
-# the service names, i.e.: TEMPEST_SERVICES="key,glance,nova"
+# the service names, i.e.: ``TEMPEST_SERVICES="key,glance,nova"``
 TEMPEST_SERVICES=""
 
 # Set the default Nova APIs to enable
 NOVA_ENABLED_APIS=ec2,osapi_compute,metadata
 
-# Configure Identity API version: 2.0, 3
-IDENTITY_API_VERSION=2.0
-
 # Whether to use 'dev mode' for screen windows. Dev mode works by
 # stuffing text into the screen windows so that a developer can use
 # ctrl-c, up-arrow, enter to restart the service. Starting services
@@ -104,6 +103,32 @@
     source $RC_DIR/.localrc.auto
 fi
 
+# Configure Identity API version: 2.0, 3
+IDENTITY_API_VERSION=${IDENTITY_API_VERSION:-2.0}
+
+# ``ENABLE_IDENTITY_V2`` defines whether the DevStack deployment will deploy
+# the Identity v2 pipelines. If this option is set to ``False``, DevStack
+# will: i) disable Identity v2; ii) configure Tempest to skip Identity
+# v2-specific tests; and iii) configure Horizon to use Identity v3. When this
+# option is set to ``False``, ``IDENTITY_API_VERSION`` is forced to ``3`` so
+# that DevStack registers the Identity endpoint as v3. This flag is
+# experimental and will be used as a basis to identify the projects which
+# still have issues operating with Identity v3.
+ENABLE_IDENTITY_V2=$(trueorfalse True ENABLE_IDENTITY_V2)
+if [ "$ENABLE_IDENTITY_V2" == "False" ]; then
+    IDENTITY_API_VERSION=3
+fi
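+
+# For example, a v3-only deployment can be exercised from ``local.conf`` with::
+#
+#  [[local|localrc]]
+#  ENABLE_IDENTITY_V2=False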
+
+# Enable use of Python virtual environments.  Individual project use of
+# venvs is controlled by the ``PROJECT_VENV`` array; every project with
+# an entry in the array will be installed into the named venv.
+# By default this will put each project into its own venv.
+USE_VENV=$(trueorfalse False USE_VENV)
+
+# Add packages that need to be installed into a venv but are not in any
+# requirements files here, as a comma-separated list
+ADDITIONAL_VENV_PACKAGES=${ADDITIONAL_VENV_PACKAGES:-""}
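+
+# For example (package names purely illustrative)::
+#
+#  USE_VENV=True
+#  ADDITIONAL_VENV_PACKAGES=MySQL-python,python-memcached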
+
 # Configure wheel cache location
 export WHEELHOUSE=${WHEELHOUSE:-$DEST/.wheelhouse}
 export PIP_WHEEL_DIR=${PIP_WHEEL_DIR:-$WHEELHOUSE}
@@ -135,6 +160,7 @@
 #   but pass through any extras)
 REQUIREMENTS_MODE=${REQUIREMENTS_MODE:-strict}
 
+
 # Repositories
 # ------------
 
@@ -145,16 +171,17 @@
 # Which libraries should we install from git instead of using released
 # versions on pypi?
 #
-# By default devstack is now installing libraries from pypi instead of
+# DevStack is now installing libraries from pypi instead of
 # from git repositories by default. This works great if you are
 # developing server components, but if you want to develop libraries
-# and see them live in devstack you need to tell devstack it should
+# and see them live in DevStack you need to tell DevStack it should
 # install them from git.
 #
 # ex: LIBS_FROM_GIT=python-keystoneclient,oslo.config
 #
 # Will install those 2 libraries from git, the rest from pypi.
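+#
+# A library's branch can be overridden via its matching branch variable
+# defined below, e.g.::
+#
+#  LIBS_FROM_GIT=python-openstackclient
+#  OPENSTACKCLIENT_BRANCH=master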
 
+
 ##############
 #
 #  OpenStack Server Components
@@ -217,10 +244,6 @@
 SWIFT_REPO=${SWIFT_REPO:-${GIT_BASE}/openstack/swift.git}
 SWIFT_BRANCH=${SWIFT_BRANCH:-master}
 
-# trove service
-TROVE_REPO=${TROVE_REPO:-${GIT_BASE}/openstack/trove.git}
-TROVE_BRANCH=${TROVE_BRANCH:-master}
-
 ##############
 #
 #  Testing Components
@@ -286,16 +309,13 @@
 GITREPO["python-swiftclient"]=${SWIFTCLIENT_REPO:-${GIT_BASE}/openstack/python-swiftclient.git}
 GITBRANCH["python-swiftclient"]=${SWIFTCLIENT_BRANCH:-master}
 
-# trove client library test
-GITREPO["python-troveclient"]=${TROVECLIENT_REPO:-${GIT_BASE}/openstack/python-troveclient.git}
-GITBRANCH["python-troveclient"]=${TROVECLIENT_BRANCH:-master}
-
 # consolidated openstack python client
 GITREPO["python-openstackclient"]=${OPENSTACKCLIENT_REPO:-${GIT_BASE}/openstack/python-openstackclient.git}
 GITBRANCH["python-openstackclient"]=${OPENSTACKCLIENT_BRANCH:-master}
 # this doesn't exist in a lib file, so set it here
 GITDIR["python-openstackclient"]=$DEST/python-openstackclient
 
+
 ###################
 #
 #  Oslo Libraries
@@ -386,6 +406,7 @@
 GITREPO["pbr"]=${PBR_REPO:-${GIT_BASE}/openstack-dev/pbr.git}
 GITBRANCH["pbr"]=${PBR_BRANCH:-master}
 
+
 ##################
 #
 #  Libraries managed by OpenStack programs (non oslo)
@@ -420,6 +441,10 @@
 GITREPO["ceilometermiddleware"]=${CEILOMETERMIDDLEWARE_REPO:-${GIT_BASE}/openstack/ceilometermiddleware.git}
 GITBRANCH["ceilometermiddleware"]=${CEILOMETERMIDDLEWARE_BRANCH:-master}
 
+# os-brick library to manage local volume attaches
+GITREPO["os-brick"]=${OS_BRICK_REPO:-${GIT_BASE}/openstack/os-brick.git}
+GITBRANCH["os-brick"]=${OS_BRICK_BRANCH:-master}
+
 
 ##################
 #
@@ -427,6 +452,10 @@
 #
 ##################
 
+# run-parts script required by os-refresh-config
+DIB_UTILS_REPO=${DIB_UTILS_REPO:-${GIT_BASE}/openstack/dib-utils.git}
+DIB_UTILS_BRANCH=${DIB_UTILS_BRANCH:-master}
+
 # os-apply-config configuration template tool
 OAC_REPO=${OAC_REPO:-${GIT_BASE}/openstack/os-apply-config.git}
 OAC_BRANCH=${OAC_BRANCH:-master}
@@ -439,6 +468,7 @@
 ORC_REPO=${ORC_REPO:-${GIT_BASE}/openstack/os-refresh-config.git}
 ORC_BRANCH=${ORC_BRANCH:-master}
 
+
 #################
 #
 #  3rd Party Components (non pip installable)
@@ -460,7 +490,6 @@
 SPICE_BRANCH=${SPICE_BRANCH:-master}
 
 
-
 # Nova hypervisor configuration.  We default to libvirt with **kvm** but will
 # drop back to **qemu** if we are unable to load the kvm module.  ``stack.sh`` can
 # also install an **LXC**, **OpenVZ** or **XenAPI** based system.  If xenserver-core
@@ -515,7 +544,7 @@
 #IMAGE_URLS="http://smoser.brickies.net/ubuntu/ttylinux-uec/ttylinux-uec-amd64-11.2_2.6.35-15_1.tar.gz" # old ttylinux-uec image
 #IMAGE_URLS="http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-disk.img" # cirros full disk image
 
-CIRROS_VERSION=${CIRROS_VERSION:-"0.3.2"}
+CIRROS_VERSION=${CIRROS_VERSION:-"0.3.4"}
 CIRROS_ARCH=${CIRROS_ARCH:-"x86_64"}
 
 # Set default image based on ``VIRT_DRIVER`` and ``LIBVIRT_TYPE``, either of
@@ -556,36 +585,12 @@
         IMAGE_URLS=${IMAGE_URLS:-"http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-uec.tar.gz"};;
 esac
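+
+# As with other defaults, ``IMAGE_URLS`` is only seeded when unset, so it can
+# be pointed at a different image entirely from ``local.conf``, e.g.::
+#
+#  IMAGE_URLS="http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img"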
 
-# Use 64bit fedora image if heat is enabled
-if [[ "$ENABLED_SERVICES" =~ 'h-api' ]]; then
-    case "$VIRT_DRIVER" in
-        libvirt|ironic)
-            HEAT_CFN_IMAGE_URL=${HEAT_CFN_IMAGE_URL:-"https://download.fedoraproject.org/pub/alt/openstack/20/x86_64/Fedora-x86_64-20-20140618-sda.qcow2"}
-            IMAGE_URLS+=",$HEAT_CFN_IMAGE_URL"
-            ;;
-        *)
-            ;;
-    esac
-fi
-
-# Trove needs a custom image for its work
-if [[ "$ENABLED_SERVICES" =~ 'tr-api' ]]; then
-    case "$VIRT_DRIVER" in
-        libvirt|ironic|xenapi)
-            TROVE_GUEST_IMAGE_URL=${TROVE_GUEST_IMAGE_URL:-"http://tarballs.openstack.org/trove/images/ubuntu/mysql.qcow2"}
-            IMAGE_URLS+=",${TROVE_GUEST_IMAGE_URL}"
-            ;;
-        *)
-            ;;
-    esac
-fi
-
 # Staging Area for New Images, have them here for at least 24hrs for nodepool
 # to cache them otherwise the failure rates in the gate are too high
 PRECACHE_IMAGES=$(trueorfalse False PRECACHE_IMAGES)
 if [[ "$PRECACHE_IMAGES" == "True" ]]; then
-    # staging in update for nodepool
-    IMAGE_URL="https://download.fedoraproject.org/pub/alt/openstack/20/x86_64/Fedora-x86_64-20-20140618-sda.qcow2"
+
+    IMAGE_URL="http://tarballs.openstack.org/trove/images/ubuntu/mysql.qcow2"
     if ! [[ "$IMAGE_URLS"  =~ "$IMAGE_URL" ]]; then
         IMAGE_URLS+=",$IMAGE_URL"
     fi
@@ -611,9 +616,6 @@
 # Set default screen name
 SCREEN_NAME=${SCREEN_NAME:-stack}
 
-# Do not install packages tagged with 'testonly' by default
-INSTALL_TESTONLY_PACKAGES=${INSTALL_TESTONLY_PACKAGES:-False}
-
 # Undo requirements changes by global requirements
 UNDO_REQUIREMENTS=${UNDO_REQUIREMENTS:-True}
 
@@ -653,7 +655,7 @@
 
 # Set fixed and floating range here so we can make sure not to use addresses
 # from either range when attempting to guess the IP to use for the host.
-# Note that setting FIXED_RANGE may be necessary when running DevStack
+# Note that setting ``FIXED_RANGE`` may be necessary when running DevStack
 # in an OpenStack cloud that uses either of these address ranges internally.
 FLOATING_RANGE=${FLOATING_RANGE:-172.24.4.0/24}
 FIXED_RANGE=${FIXED_RANGE:-10.0.0.0/24}
@@ -681,9 +683,10 @@
 # Set to 0 to disable shallow cloning
 GIT_DEPTH=${GIT_DEPTH:-0}
 
-# Use native SSL for servers in SSL_ENABLED_SERVICES
+# Use native SSL for servers in ``SSL_ENABLED_SERVICES``
 USE_SSL=$(trueorfalse False USE_SSL)
 
+
 # Following entries need to be last items in file
 
 # Compatibility bits required by other callers like Grenade
@@ -705,7 +708,6 @@
 # For compat, if SCREEN_LOGDIR is set, it will be used to create back-compat symlinks to the LOGDIR
 # symlinks to SCREEN_LOGDIR (compat)
 
-
 # Set up new logging defaults
 if [[ -z "${LOGDIR:-}" ]]; then
     default_logdir=$DEST/logs
@@ -730,8 +732,8 @@
     unset default_logdir logfile
 fi
 
-# LOGDIR is always set at this point so it is not useful as a 'enable' for service logs
-# SCREEN_LOGDIR may be set, it is useful to enable the compat symlinks
+# ``LOGDIR`` is always set at this point, so it is not useful as an 'enable' for service logs.
+# ``SCREEN_LOGDIR`` may be set; it is useful to enable the compat symlinks.
 
 # Local variables:
 # mode: shell-script
diff --git a/tests/functions.sh b/tests/functions.sh
deleted file mode 100755
index 874d022..0000000
--- a/tests/functions.sh
+++ /dev/null
@@ -1,198 +0,0 @@
-#!/usr/bin/env bash
-
-# Tests for DevStack functions
-
-TOP=$(cd $(dirname "$0")/.. && pwd)
-
-# Import common functions
-source $TOP/functions
-
-# Import configuration
-source $TOP/openrc
-
-
-echo "Testing die_if_not_set()"
-
-bash -cx "source $TOP/functions; X=`echo Y && true`; die_if_not_set X 'not OK'"
-if [[ $? != 0 ]]; then
-    echo "die_if_not_set [X='Y' true] Failed"
-else
-    echo 'OK'
-fi
-
-bash -cx "source $TOP/functions; X=`true`; die_if_not_set X 'OK'"
-if [[ $? = 0 ]]; then
-    echo "die_if_not_set [X='' true] Failed"
-fi
-
-bash -cx "source $TOP/functions; X=`echo Y && false`; die_if_not_set X 'not OK'"
-if [[ $? != 0 ]]; then
-    echo "die_if_not_set [X='Y' false] Failed"
-else
-    echo 'OK'
-fi
-
-bash -cx "source $TOP/functions; X=`false`; die_if_not_set X 'OK'"
-if [[ $? = 0 ]]; then
-    echo "die_if_not_set [X='' false] Failed"
-fi
-
-
-# Enabling/disabling services
-
-echo "Testing enable_service()"
-
-function test_enable_service {
-    local start="$1"
-    local add="$2"
-    local finish="$3"
-
-    ENABLED_SERVICES="$start"
-    enable_service $add
-    if [ "$ENABLED_SERVICES" = "$finish" ]; then
-        echo "OK: $start + $add -> $ENABLED_SERVICES"
-    else
-        echo "changing $start to $finish with $add failed: $ENABLED_SERVICES"
-    fi
-}
-
-test_enable_service '' a 'a'
-test_enable_service 'a' b 'a,b'
-test_enable_service 'a,b' c 'a,b,c'
-test_enable_service 'a,b' c 'a,b,c'
-test_enable_service 'a,b,' c 'a,b,c'
-test_enable_service 'a,b' c,d 'a,b,c,d'
-test_enable_service 'a,b' "c d" 'a,b,c,d'
-test_enable_service 'a,b,c' c 'a,b,c'
-
-test_enable_service 'a,b,-c' c 'a,b'
-test_enable_service 'a,b,c' -c 'a,b'
-
-function test_disable_service {
-    local start="$1"
-    local del="$2"
-    local finish="$3"
-
-    ENABLED_SERVICES="$start"
-    disable_service "$del"
-    if [ "$ENABLED_SERVICES" = "$finish" ]; then
-        echo "OK: $start - $del -> $ENABLED_SERVICES"
-    else
-        echo "changing $start to $finish with $del failed: $ENABLED_SERVICES"
-    fi
-}
-
-echo "Testing disable_service()"
-test_disable_service 'a,b,c' a 'b,c'
-test_disable_service 'a,b,c' b 'a,c'
-test_disable_service 'a,b,c' c 'a,b'
-
-test_disable_service 'a,b,c' a 'b,c'
-test_disable_service 'b,c' b 'c'
-test_disable_service 'c' c ''
-test_disable_service '' d ''
-
-test_disable_service 'a,b,c,' c 'a,b'
-test_disable_service 'a,b' c 'a,b'
-
-
-echo "Testing disable_all_services()"
-ENABLED_SERVICES=a,b,c
-disable_all_services
-
-if [[ -z "$ENABLED_SERVICES" ]]; then
-    echo "OK"
-else
-    echo "disabling all services FAILED: $ENABLED_SERVICES"
-fi
-
-echo "Testing disable_negated_services()"
-
-
-function test_disable_negated_services {
-    local start="$1"
-    local finish="$2"
-
-    ENABLED_SERVICES="$start"
-    disable_negated_services
-    if [ "$ENABLED_SERVICES" = "$finish" ]; then
-        echo "OK: $start + $add -> $ENABLED_SERVICES"
-    else
-        echo "changing $start to $finish failed: $ENABLED_SERVICES"
-    fi
-}
-
-test_disable_negated_services '-a' ''
-test_disable_negated_services '-a,a' ''
-test_disable_negated_services '-a,-a' ''
-test_disable_negated_services 'a,-a' ''
-test_disable_negated_services 'b,a,-a' 'b'
-test_disable_negated_services 'a,b,-a' 'b'
-test_disable_negated_services 'a,-a,b' 'b'
-
-
-echo "Testing is_package_installed()"
-
-if [[ -z "$os_PACKAGE" ]]; then
-    GetOSVersion
-fi
-
-if [[ "$os_PACKAGE" = "deb" ]]; then
-    is_package_installed dpkg
-    VAL=$?
-elif [[ "$os_PACKAGE" = "rpm" ]]; then
-    is_package_installed rpm
-    VAL=$?
-else
-    VAL=1
-fi
-if [[ "$VAL" -eq 0 ]]; then
-    echo "OK"
-else
-    echo "is_package_installed() on existing package failed"
-fi
-
-if [[ "$os_PACKAGE" = "deb" ]]; then
-    is_package_installed dpkg bash
-    VAL=$?
-elif [[ "$os_PACKAGE" = "rpm" ]]; then
-    is_package_installed rpm bash
-    VAL=$?
-else
-    VAL=1
-fi
-if [[ "$VAL" -eq 0 ]]; then
-    echo "OK"
-else
-    echo "is_package_installed() on more than one existing package failed"
-fi
-
-is_package_installed zzzZZZzzz
-VAL=$?
-if [[ "$VAL" -ne 0 ]]; then
-    echo "OK"
-else
-    echo "is_package_installed() on non-existing package failed"
-fi
-
-# test against removed package...was a bug on Ubuntu
-if is_ubuntu; then
-    PKG=cowsay
-    if ! (dpkg -s $PKG >/dev/null 2>&1); then
-        # it was never installed...set up the condition
-        sudo apt-get install -y cowsay >/dev/null 2>&1
-    fi
-    if (dpkg -s $PKG >/dev/null 2>&1); then
-        # remove it to create the 'un' status
-        sudo dpkg -P $PKG >/dev/null 2>&1
-    fi
-
-    # now test the installed check on a deleted package
-    is_package_installed $PKG
-    VAL=$?
-    if [[ "$VAL" -ne 0 ]]; then
-        echo "OK"
-    else
-        echo "is_package_installed() on deleted package failed"
-    fi
-fi
diff --git a/tests/test_functions.sh b/tests/test_functions.sh
index e57948a..f555de8 100755
--- a/tests/test_functions.sh
+++ b/tests/test_functions.sh
@@ -1,34 +1,248 @@
 #!/usr/bin/env bash
 
-# Tests for DevStack meta-config functions
+# Tests for DevStack functions
 
 TOP=$(cd $(dirname "$0")/.. && pwd)
 
 # Import common functions
 source $TOP/functions
+
 source $TOP/tests/unittest.sh
 
-function test_truefalse {
-    local one=1
-    local captrue=True
-    local lowtrue=true
-    local abrevtrue=t
-    local zero=0
-    local capfalse=False
-    local lowfalse=false
-    local abrevfalse=f
-    for against in True False; do
-        for name in one captrue lowtrue abrevtrue; do
-            assert_equal "True" $(trueorfalse $against $name) "\$(trueorfalse $against $name)"
-        done
-    done
-    for against in True False; do
-        for name in zero capfalse lowfalse abrevfalse; do
-            assert_equal "False" $(trueorfalse $against $name) "\$(trueorfalse $against $name)"
-        done
-    done
+echo "Testing die_if_not_set()"
+
+bash -c "source $TOP/functions; X=`echo Y && true`; die_if_not_set $LINENO X 'not OK'"
+if [[ $? != 0 ]]; then
+    failed "die_if_not_set [X='Y' true] Failed"
+else
+    passed 'OK'
+fi
+
+bash -c "source $TOP/functions; X=`true`; die_if_not_set $LINENO X 'OK'" > /dev/null 2>&1
+if [[ $? = 0 ]]; then
+    failed "die_if_not_set [X='' true] Failed"
+fi
+
+bash -c "source $TOP/functions; X=`echo Y && false`; die_if_not_set $LINENO X 'not OK'"
+if [[ $? != 0 ]]; then
+    failed "die_if_not_set [X='Y' false] Failed"
+else
+    passed 'OK'
+fi
+
+bash -c "source $TOP/functions; X=`false`; die_if_not_set $LINENO X 'OK'" > /dev/null 2>&1
+if [[ $? = 0 ]]; then
+    failed "die_if_not_set [X='' false] Failed"
+fi
+
+
+# Enabling/disabling services
+
+echo "Testing enable_service()"
+
+function test_enable_service {
+    local start="$1"
+    local add="$2"
+    local finish="$3"
+
+    ENABLED_SERVICES="$start"
+    enable_service $add
+    if [ "$ENABLED_SERVICES" = "$finish" ]; then
+        passed "OK: $start + $add -> $ENABLED_SERVICES"
+    else
+        failed "changing $start to $finish with $add failed: $ENABLED_SERVICES"
+    fi
 }
 
-test_truefalse
+test_enable_service '' a 'a'
+test_enable_service 'a' b 'a,b'
+test_enable_service 'a,b' c 'a,b,c'
+test_enable_service 'a,b' c 'a,b,c'
+test_enable_service 'a,b,' c 'a,b,c'
+test_enable_service 'a,b' c,d 'a,b,c,d'
+test_enable_service 'a,b' "c d" 'a,b,c,d'
+test_enable_service 'a,b,c' c 'a,b,c'
+
+test_enable_service 'a,b,-c' c 'a,b'
+test_enable_service 'a,b,c' -c 'a,b'
+
+function test_disable_service {
+    local start="$1"
+    local del="$2"
+    local finish="$3"
+
+    ENABLED_SERVICES="$start"
+    disable_service "$del"
+    if [ "$ENABLED_SERVICES" = "$finish" ]; then
+        passed "OK: $start - $del -> $ENABLED_SERVICES"
+    else
+        failed "changing $start to $finish with $del failed: $ENABLED_SERVICES"
+    fi
+}
+
+echo "Testing disable_service()"
+test_disable_service 'a,b,c' a 'b,c'
+test_disable_service 'a,b,c' b 'a,c'
+test_disable_service 'a,b,c' c 'a,b'
+
+test_disable_service 'a,b,c' a 'b,c'
+test_disable_service 'b,c' b 'c'
+test_disable_service 'c' c ''
+test_disable_service '' d ''
+
+test_disable_service 'a,b,c,' c 'a,b'
+test_disable_service 'a,b' c 'a,b'
+
+
+echo "Testing disable_all_services()"
+ENABLED_SERVICES=a,b,c
+disable_all_services
+
+if [[ -z "$ENABLED_SERVICES" ]]; then
+    passed "OK"
+else
+    failed "disabling all services FAILED: $ENABLED_SERVICES"
+fi
+
+echo "Testing disable_negated_services()"
+
+
+function test_disable_negated_services {
+    local start="$1"
+    local finish="$2"
+
+    ENABLED_SERVICES="$start"
+    disable_negated_services
+    if [ "$ENABLED_SERVICES" = "$finish" ]; then
+        passed "OK: $start + $add -> $ENABLED_SERVICES"
+    else
+        failed "changing $start to $finish failed: $ENABLED_SERVICES"
+    fi
+}
+
+test_disable_negated_services '-a' ''
+test_disable_negated_services '-a,a' ''
+test_disable_negated_services '-a,-a' ''
+test_disable_negated_services 'a,-a' ''
+test_disable_negated_services 'b,a,-a' 'b'
+test_disable_negated_services 'a,b,-a' 'b'
+test_disable_negated_services 'a,-a,b' 'b'
+test_disable_negated_services 'a,aa,-a' 'aa'
+test_disable_negated_services 'aa,-a' 'aa'
+test_disable_negated_services 'a_a, -a_a' ''
+test_disable_negated_services 'a-b, -a-b' ''
+test_disable_negated_services 'a-b, b, -a-b' 'b'
+test_disable_negated_services 'a,-a,av2,b' 'av2,b'
+test_disable_negated_services 'a,aa,-a' 'aa'
+test_disable_negated_services 'a,av2,-a,a' 'av2'
+test_disable_negated_services 'a,-a,av2' 'av2'
+
+echo "Testing remove_disabled_services()"
+
+function test_remove_disabled_services {
+    local service_list="$1"
+    local remove_list="$2"
+    local expected="$3"
+
+    results=$(remove_disabled_services "$service_list" "$remove_list")
+    if [ "$results" = "$expected" ]; then
+        passed "OK: '$service_list' - '$remove_list' -> '$results'"
+    else
+        failed "getting '$expected' from '$service_list' - '$remove_list' failed: '$results'"
+    fi
+}
+
+test_remove_disabled_services 'a,b,c' 'a,c' 'b'
+test_remove_disabled_services 'a,b,c' 'b' 'a,c'
+test_remove_disabled_services 'a,b,c,d' 'a,c d' 'b'
+test_remove_disabled_services 'a,b c,d' 'a d' 'b,c'
+test_remove_disabled_services 'a,b,c' 'a,b,c' ''
+test_remove_disabled_services 'a,b,c' 'd' 'a,b,c'
+test_remove_disabled_services 'a,b,c' '' 'a,b,c'
+test_remove_disabled_services '' 'a,b,c' ''
+test_remove_disabled_services '' '' ''
+
+echo "Testing is_package_installed()"
+
+if [[ -z "$os_PACKAGE" ]]; then
+    GetOSVersion
+fi
+
+if [[ "$os_PACKAGE" = "deb" ]]; then
+    is_package_installed dpkg
+    VAL=$?
+elif [[ "$os_PACKAGE" = "rpm" ]]; then
+    is_package_installed rpm
+    VAL=$?
+else
+    VAL=1
+fi
+if [[ "$VAL" -eq 0 ]]; then
+    passed "OK"
+else
+    failed "is_package_installed() on existing package failed"
+fi
+
+if [[ "$os_PACKAGE" = "deb" ]]; then
+    is_package_installed dpkg bash
+    VAL=$?
+elif [[ "$os_PACKAGE" = "rpm" ]]; then
+    is_package_installed rpm bash
+    VAL=$?
+else
+    VAL=1
+fi
+if [[ "$VAL" -eq 0 ]]; then
+    passed "OK"
+else
+    failed "is_package_installed() on more than one existing package failed"
+fi
+
+is_package_installed zzzZZZzzz
+VAL=$?
+if [[ "$VAL" -ne 0 ]]; then
+    passed "OK"
+else
+    failed "is_package_installed() on non-existing package failed"
+fi
+
+# test against removed package...was a bug on Ubuntu
+if is_ubuntu; then
+    PKG=cowsay
+    if ! (dpkg -s $PKG >/dev/null 2>&1); then
+        # it was never installed...set up the condition
+        sudo apt-get install -y cowsay >/dev/null 2>&1
+    fi
+    if (dpkg -s $PKG >/dev/null 2>&1); then
+        # remove it to create the 'un' status
+        sudo dpkg -P $PKG >/dev/null 2>&1
+    fi
+
+    # now test the installed check on a deleted package
+    is_package_installed $PKG
+    VAL=$?
+    if [[ "$VAL" -ne 0 ]]; then
+        passed "OK"
+    else
+        failed "is_package_installed() on deleted package failed"
+    fi
+fi
+
+# test isset function
+echo  "Testing isset()"
+you_should_not_have_this_variable=42
+
+if isset "you_should_not_have_this_variable"; then
+    passed "OK"
+else
+    failed "\"you_should_not_have_this_variable\" not declared. failed"
+fi
+
+unset you_should_not_have_this_variable
+if isset "you_should_not_have_this_variable"; then
+    failed "\"you_should_not_have_this_variable\" looks like declared variable."
+else
+    passed "OK"
+fi
 
 report_results
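
The refactored tests lean on ``tests/unittest.sh`` for ``passed``, ``failed``,
``assert_equal`` and ``report_results``. As used here, the helpers amount to
something like this sketch (the real file may differ in detail)::

    # Count passes, collect failure messages, report at the end
    PASS=0
    FAILED_MSGS=""

    function passed {
        PASS=$((PASS + 1))
        if [[ -n "${1:-}" ]]; then
            echo "$1"
        fi
    }

    function failed {
        FAILED_MSGS+="$1"$'\n'
        echo "FAILED: $1"
    }

    function assert_equal {
        # assert_equal first second message -- pass when equal
        if [[ "$1" == "$2" ]]; then
            passed "OK: $3"
        else
            failed "$3: expected '$1', got '$2'"
        fi
    }

    function report_results {
        echo "$PASS tests passed"
        if [[ -n "$FAILED_MSGS" ]]; then
            echo "Failures:"
            echo -n "$FAILED_MSGS"
            exit 1
        fi
    }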
diff --git a/tests/test_ini_config.sh b/tests/test_ini_config.sh
index 4a0ae33..b2529ac 100755
--- a/tests/test_ini_config.sh
+++ b/tests/test_ini_config.sh
@@ -7,6 +7,9 @@
 # Import config functions
 source $TOP/inc/ini-config
 
+source $TOP/tests/unittest.sh
+
+set -e
 
 echo "Testing INI functions"
 
@@ -70,86 +73,86 @@
 iniset test.ini aaa
 NO_ATTRIBUTE=$(cat test.ini)
 if [[ "$BEFORE" == "$NO_ATTRIBUTE" ]]; then
-    echo "OK"
+    passed
 else
-    echo "failed"
+    failed "failed"
 fi
 
 echo -n "iniset: test missing section argument: "
 iniset test.ini
 NO_SECTION=$(cat test.ini)
 if [[ "$BEFORE" == "$NO_SECTION" ]]; then
-    echo "OK"
+    passed
 else
-    echo "failed"
+    failed "failed"
 fi
 
 # Test with spaces
 
 VAL=$(iniget test.ini aaa handlers)
 if [[ "$VAL" == "aa, bb" ]]; then
-    echo "OK: $VAL"
+    passed "OK: $VAL"
 else
-    echo "iniget failed: $VAL"
+    failed "iniget failed: $VAL"
 fi
 
 iniset test.ini aaa handlers "11, 22"
 
 VAL=$(iniget test.ini aaa handlers)
 if [[ "$VAL" == "11, 22" ]]; then
-    echo "OK: $VAL"
+    passed "OK: $VAL"
 else
-    echo "iniget failed: $VAL"
+    failed "iniget failed: $VAL"
 fi
 
 # Test with spaces in section header
 
 VAL=$(iniget test.ini " ccc " spaces)
 if [[ "$VAL" == "yes" ]]; then
-    echo "OK: $VAL"
+    passed "OK: $VAL"
 else
-    echo "iniget failed: $VAL"
+    failed "iniget failed: $VAL"
 fi
 
 iniset test.ini "b b" opt_ion 42
 
 VAL=$(iniget test.ini "b b" opt_ion)
 if [[ "$VAL" == "42" ]]; then
-    echo "OK: $VAL"
+    passed "OK: $VAL"
 else
-    echo "iniget failed: $VAL"
+    failed "iniget failed: $VAL"
 fi
 
 # Test without spaces, end of file
 
 VAL=$(iniget test.ini bbb handlers)
 if [[ "$VAL" == "ee,ff" ]]; then
-    echo "OK: $VAL"
+    passed "OK: $VAL"
 else
-    echo "iniget failed: $VAL"
+    failed "iniget failed: $VAL"
 fi
 
 iniset test.ini bbb handlers "33,44"
 
 VAL=$(iniget test.ini bbb handlers)
 if [[ "$VAL" == "33,44" ]]; then
-    echo "OK: $VAL"
+    passed "OK: $VAL"
 else
-    echo "iniget failed: $VAL"
+    failed "iniget failed: $VAL"
 fi
 
 # test empty option
 if ini_has_option test.ini ddd empty; then
-    echo "OK: ddd.empty present"
+    passed "OK: ddd.empty present"
 else
-    echo "ini_has_option failed: ddd.empty not found"
+    failed "ini_has_option failed: ddd.empty not found"
 fi
 
 # test non-empty option
 if ini_has_option test.ini bbb handlers; then
-    echo "OK: bbb.handlers present"
+    passed "OK: bbb.handlers present"
 else
-    echo "ini_has_option failed: bbb.handlers not found"
+    failed "ini_has_option failed: bbb.handlers not found"
 fi
 
 # test changing empty option
@@ -157,9 +160,9 @@
 
 VAL=$(iniget test.ini ddd empty)
 if [[ "$VAL" == "42" ]]; then
-    echo "OK: $VAL"
+    passed "OK: $VAL"
 else
-    echo "iniget failed: $VAL"
+    failed "iniget failed: $VAL"
 fi
 
 # test pipe in option
@@ -167,9 +170,9 @@
 
 VAL=$(iniget test.ini aaa handlers)
 if [[ "$VAL" == "a|b" ]]; then
-    echo "OK: $VAL"
+    passed "OK: $VAL"
 else
-    echo "iniget failed: $VAL"
+    failed "iniget failed: $VAL"
 fi
 
 # test space in option
@@ -177,51 +180,51 @@
 
 VAL="$(iniget test.ini aaa handlers)"
 if [[ "$VAL" == "a b" ]]; then
-    echo "OK: $VAL"
+    passed "OK: $VAL"
 else
-    echo "iniget failed: $VAL"
+    failed "iniget failed: $VAL"
 fi
 
 # Test section not exist
 
 VAL=$(iniget test.ini zzz handlers)
 if [[ -z "$VAL" ]]; then
-    echo "OK: zzz not present"
+    passed "OK: zzz not present"
 else
-    echo "iniget failed: $VAL"
+    failed "iniget failed: $VAL"
 fi
 
 iniset test.ini zzz handlers "999"
 
 VAL=$(iniget test.ini zzz handlers)
 if [[ -n "$VAL" ]]; then
-    echo "OK: zzz not present"
+    passed "OK: zzz not present"
 else
-    echo "iniget failed: $VAL"
+    failed "iniget failed: $VAL"
 fi
 
 # Test option not exist
 
 VAL=$(iniget test.ini aaa debug)
 if [[ -z "$VAL" ]]; then
-    echo "OK aaa.debug not present"
+    passed "OK aaa.debug not present"
 else
-    echo "iniget failed: $VAL"
+    failed "iniget failed: $VAL"
 fi
 
 if ! ini_has_option test.ini aaa debug; then
-    echo "OK aaa.debug not present"
+    passed "OK aaa.debug not present"
 else
-    echo "ini_has_option failed: aaa.debug"
+    failed "ini_has_option failed: aaa.debug"
 fi
 
 iniset test.ini aaa debug "999"
 
 VAL=$(iniget test.ini aaa debug)
 if [[ -n "$VAL" ]]; then
-    echo "OK aaa.debug present"
+    passed "OK aaa.debug present"
 else
-    echo "iniget failed: $VAL"
+    failed "iniget failed: $VAL"
 fi
 
 # Test comments
@@ -230,9 +233,9 @@
 
 VAL=$(iniget test.ini aaa handlers)
 if [[ -z "$VAL" ]]; then
-    echo "OK"
+    passed "OK"
 else
-    echo "inicomment failed: $VAL"
+    failed "inicomment failed: $VAL"
 fi
 
 # Test multiple line iniset/iniget
@@ -242,25 +245,25 @@
 if [[ "$VAL" == "bar1 bar2" ]]; then
     echo "OK: iniset_multiline"
 else
-    echo "iniset_multiline failed: $VAL"
+    failed "iniset_multiline failed: $VAL"
 fi
 
 # Test iniadd with existing values
 iniadd test.ini eee multi bar3
 VAL=$(iniget_multiline test.ini eee multi)
 if [[ "$VAL" == "bar1 bar2 bar3" ]]; then
-    echo "OK: iniadd"
+    passed "OK: iniadd"
 else
-    echo "iniadd failed: $VAL"
+    failed "iniadd failed: $VAL"
 fi
 
 # Test iniadd with non-existing values
 iniadd test.ini eee non-multi foobar1 foobar2
 VAL=$(iniget_multiline test.ini eee non-multi)
 if [[ "$VAL" == "foobar1 foobar2" ]]; then
-    echo "OK: iniadd with non-exiting value"
+    passed "OK: iniadd with non-exiting value"
 else
-    echo "iniadd with non-exsting failed: $VAL"
+    failed "iniadd with non-exsting failed: $VAL"
 fi
 
 # Test inidelete
@@ -276,20 +279,22 @@
     inidelete test.ini $x a
     VAL=$(iniget_multiline test.ini $x a)
     if [ -z "$VAL" ]; then
-        echo "OK: inidelete $x"
+        passed "OK: inidelete $x"
     else
-        echo "inidelete $x failed: $VAL"
+        failed "inidelete $x failed: $VAL"
     fi
     if [ "$x" = "del_separate_options" -o \
         "$x" = "del_missing_option" -o \
         "$x" = "del_missing_option_multi" ]; then
         VAL=$(iniget_multiline test.ini $x b)
         if [ "$VAL" = "c" -o "$VAL" = "c d" ]; then
-            echo "OK: inidelete other_options $x"
+            passed "OK: inidelete other_options $x"
         else
-            echo "inidelete other_option $x failed: $VAL"
+            failed "inidelete other_option $x failed: $VAL"
         fi
     fi
 done
 
 rm test.ini
+
+report_results
diff --git a/tests/test_ip.sh b/tests/test_ip.sh
index add8d1a..da939f4 100755
--- a/tests/test_ip.sh
+++ b/tests/test_ip.sh
@@ -8,108 +8,85 @@
 # Import common functions
 source $TOP/functions
 
+source $TOP/tests/unittest.sh
 
 echo "Testing IP addr functions"
 
-if [[ $(cidr2netmask 4) == 240.0.0.0 ]]; then
-    echo "cidr2netmask(): /4...OK"
-else
-    echo "cidr2netmask(): /4...failed"
-fi
-if [[ $(cidr2netmask 8) == 255.0.0.0 ]]; then
-    echo "cidr2netmask(): /8...OK"
-else
-    echo "cidr2netmask(): /8...failed"
-fi
-if [[ $(cidr2netmask 12) == 255.240.0.0 ]]; then
-    echo "cidr2netmask(): /12...OK"
-else
-    echo "cidr2netmask(): /12...failed"
-fi
-if [[ $(cidr2netmask 16) == 255.255.0.0 ]]; then
-    echo "cidr2netmask(): /16...OK"
-else
-    echo "cidr2netmask(): /16...failed"
-fi
-if [[ $(cidr2netmask 20) == 255.255.240.0 ]]; then
-    echo "cidr2netmask(): /20...OK"
-else
-    echo "cidr2netmask(): /20...failed"
-fi
-if [[ $(cidr2netmask 24) == 255.255.255.0 ]]; then
-    echo "cidr2netmask(): /24...OK"
-else
-    echo "cidr2netmask(): /24...failed"
-fi
-if [[ $(cidr2netmask 28) == 255.255.255.240 ]]; then
-    echo "cidr2netmask(): /28...OK"
-else
-    echo "cidr2netmask(): /28...failed"
-fi
-if [[ $(cidr2netmask 30) == 255.255.255.252 ]]; then
-    echo "cidr2netmask(): /30...OK"
-else
-    echo "cidr2netmask(): /30...failed"
-fi
-if [[ $(cidr2netmask 32) == 255.255.255.255 ]]; then
-    echo "cidr2netmask(): /32...OK"
-else
-    echo "cidr2netmask(): /32...failed"
-fi
+function test_cidr2netmask {
+    local mask=0
+    local ips="128 192 224 240 248 252 254 255"
+    local ip
+    local msg
 
-if [[ $(maskip 169.254.169.254 240.0.0.0) == 160.0.0.0 ]]; then
-    echo "maskip(): /4...OK"
-else
-    echo "maskip(): /4...failed"
-fi
-if [[ $(maskip 169.254.169.254 255.0.0.0) == 169.0.0.0 ]]; then
-    echo "maskip(): /8...OK"
-else
-    echo "maskip(): /8...failed"
-fi
-if [[ $(maskip 169.254.169.254 255.240.0.0) == 169.240.0.0 ]]; then
-    echo "maskip(): /12...OK"
-else
-    echo "maskip(): /12...failed"
-fi
-if [[ $(maskip 169.254.169.254 255.255.0.0) == 169.254.0.0 ]]; then
-    echo "maskip(): /16...OK"
-else
-    echo "maskip(): /16...failed"
-fi
-if [[ $(maskip 169.254.169.254 255.255.240.0) == 169.254.160.0 ]]; then
-    echo "maskip(): /20...OK"
-else
-    echo "maskip(): /20...failed"
-fi
-if [[ $(maskip 169.254.169.254 255.255.255.0) == 169.254.169.0 ]]; then
-    echo "maskip(): /24...OK"
-else
-    echo "maskip(): /24...failed"
-fi
-if [[ $(maskip 169.254.169.254 255.255.255.240) == 169.254.169.240 ]]; then
-    echo "maskip(): /28...OK"
-else
-    echo "maskip(): /28...failed"
-fi
-if [[ $(maskip 169.254.169.254 255.255.255.255) == 169.254.169.254 ]]; then
-    echo "maskip(): /32...OK"
-else
-    echo "maskip(): /32...failed"
-fi
+    msg="cidr2netmask(/0) == 0.0.0.0"
+    assert_equal "0.0.0.0" $(cidr2netmask $mask) "$msg"
+
+    for ip in $ips; do
+        mask=$(( mask + 1 ))
+        msg="cidr2netmask(/$mask) == $ip.0.0.0"
+        assert_equal "$ip.0.0.0" $(cidr2netmask $mask) "$msg"
+    done
+
+    for ip in $ips; do
+        mask=$(( mask + 1 ))
+        msg="cidr2netmask(/$mask) == 255.$ip.0.0"
+        assert_equal "255.$ip.0.0" $(cidr2netmask $mask) "$msg"
+    done
+
+    for ip in $ips; do
+        mask=$(( mask + 1 ))
+        msg="cidr2netmask(/$mask) == 255.255.$ip.0"
+        assert_equal "255.255.$ip.0" $(cidr2netmask $mask) "$msg"
+    done
+
+    for ip in $ips; do
+        mask=$(( mask + 1 ))
+        msg="cidr2netmask(/$mask) == 255.255.255.$ip"
+        assert_equal "255.255.255.$ip" $(cidr2netmask $mask) "$msg"
+    done
+}
+
+test_cidr2netmask
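+
+# For reference, a compact pure-bash implementation that satisfies these
+# assertions (close in spirit to the helper sourced from ``functions``,
+# which may differ in detail)::
+#
+#     function cidr2netmask {
+#         local maskpat="255 255 255 255"
+#         local maskdgt="254 252 248 240 224 192 128"
+#         set -- ${maskpat:0:$(( ($1 / 8) * 4 ))}${maskdgt:$(( (7 - ($1 % 8)) * 4 )):3}
+#         echo ${1-0}.${2-0}.${3-0}.${4-0}
+#     }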
+
+msg="maskip(169.254.169.254 240.0.0.0) == 160.0.0.0"
+assert_equal $(maskip 169.254.169.254 240.0.0.0) 160.0.0.0 "$msg"
+
+msg="maskip(169.254.169.254 255.0.0.0) == 169.0.0.0"
+assert_equal $(maskip 169.254.169.254 255.0.0.0) 169.0.0.0 "$msg"
+
+msg="maskip(169.254.169.254 255.240.0.0) == 169.240.0.0"
+assert_equal $(maskip 169.254.169.254 255.240.0.0) 169.240.0.0 "$msg"
+
+msg="maskip(169.254.169.254 255.255.0.0) == 169.254.0.0"
+assert_equal $(maskip 169.254.169.254 255.255.0.0) 169.254.0.0 "$msg"
+
+msg="maskip(169.254.169.254 255.255.240.0) == 169.254.160.0"
+assert_equal $(maskip 169.254.169.254 255.255.240.0) 169.254.160.0 "$msg"
+
+msg="maskip(169.254.169.254 255.255.255.0) == 169.254.169.0"
+assert_equal $(maskip 169.254.169.254 255.255.255.0) 169.254.169.0 "$msg"
+
+msg="maskip(169.254.169.254 255.255.255.240) == 169.254.169.240"
+assert_equal $(maskip 169.254.169.254 255.255.255.240) 169.254.169.240 "$msg"
+
+msg="maskip(169.254.169.254 255.255.255.255) == 169.254.169.254"
+assert_equal $(maskip 169.254.169.254 255.255.255.255) 169.254.169.254 "$msg"
+
 
 for mask in 8 12 16 20 24 26 28; do
-    echo -n "address_in_net(): in /$mask..."
+    msg="address_in_net($10.10.10.1 10.10.10.0/$mask)"
     if address_in_net 10.10.10.1 10.10.10.0/$mask; then
-        echo "OK"
+        passed "$msg"
     else
-        echo "address_in_net() failed on /$mask"
+        failed "$msg"
     fi
 
-    echo -n "address_in_net(): not in /$mask..."
+    msg="! address_in_net($10.10.10.1 11.11.11.0/$mask)"
     if ! address_in_net 10.10.10.1 11.11.11.0/$mask; then
-        echo "OK"
+        passed "$msg"
     else
-        echo "address_in_net() failed on /$mask"
+        failed "$msg"
     fi
 done
+
+report_results
diff --git a/tests/test_libs_from_pypi.sh b/tests/test_libs_from_pypi.sh
index 0bec584..336a213 100755
--- a/tests/test_libs_from_pypi.sh
+++ b/tests/test_libs_from_pypi.sh
@@ -29,7 +29,7 @@
     fi
 done
 
-ALL_LIBS="python-novaclient oslo.config pbr oslo.context python-troveclient"
+ALL_LIBS="python-novaclient oslo.config pbr oslo.context"
 ALL_LIBS+=" python-keystoneclient taskflow oslo.middleware pycadf"
 ALL_LIBS+=" python-glanceclient python-ironicclient tempest-lib"
 ALL_LIBS+=" oslo.messaging oslo.log cliff python-heatclient stevedore"
@@ -39,7 +39,7 @@
 ALL_LIBS+=" python-openstackclient oslo.rootwrap oslo.i18n"
 ALL_LIBS+=" python-ceilometerclient oslo.utils python-swiftclient"
 ALL_LIBS+=" python-neutronclient tooz ceilometermiddleware oslo.policy"
-ALL_LIBS+=" debtcollector"
+ALL_LIBS+=" debtcollector os-brick"
 
 # Generate the above list with
 # echo ${!GITREPO[@]}
diff --git a/tests/test_meta_config.sh b/tests/test_meta_config.sh
index 9d65280..a04c081 100755
--- a/tests/test_meta_config.sh
+++ b/tests/test_meta_config.sh
@@ -8,6 +8,8 @@
 source $TOP/inc/ini-config
 source $TOP/inc/meta-config
 
+set -e
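+# (With set -e, any unexpected command failure aborts the run; the
+# explicit exit 1 added below makes assertion mismatches fatal too.)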
+
 # check_result() tests and reports the result values
 # check_result "actual" "expected"
 function check_result {
@@ -17,6 +19,7 @@
         echo "OK"
     else
         echo -e "failed: $actual != $expected\n"
+        exit 1
     fi
 }
 
diff --git a/tests/test_truefalse.sh b/tests/test_truefalse.sh
new file mode 100755
index 0000000..2689589
--- /dev/null
+++ b/tests/test_truefalse.sh
@@ -0,0 +1,45 @@
+#!/usr/bin/env bash
+
+# Tests for the DevStack trueorfalse() function
+
+TOP=$(cd $(dirname "$0")/.. && pwd)
+
+# Import common functions
+source $TOP/functions
+source $TOP/tests/unittest.sh
+
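+# trueorfalse takes the *name* of a variable as its second argument, so
+# each literal under test is first bound to a local variable below.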
+function test_trueorfalse {
+    local one=1
+    local captrue=True
+    local lowtrue=true
+    local uppertrue=TRUE
+    local capyes=Yes
+    local lowyes=yes
+    local upperyes=YES
+
+    for default in True False; do
+        for name in one captrue lowtrue uppertrue capyes lowyes upperyes; do
+            local msg="trueorfalse($default $name)"
+            assert_equal "True" $(trueorfalse $default $name) "$msg"
+        done
+    done
+
+    local zero=0
+    local capfalse=False
+    local lowfalse=false
+    local upperfalse=FALSE
+    local capno=No
+    local lowno=no
+    local upperno=NO
+
+    for default in True False; do
+        for name in zero capfalse lowfalse upperfalse capno lowno upperno; do
+            local msg="trueorfalse($default $name)"
+            assert_equal "False" $(trueorfalse $default $name) "$msg"
+        done
+    done
+}
+
+test_trueorfalse
+
+report_results
diff --git a/tests/unittest.sh b/tests/unittest.sh
index 435cc3a..93aa5fc 100644
--- a/tests/unittest.sh
+++ b/tests/unittest.sh
@@ -14,26 +14,65 @@
 
 # we always start with no errors
 ERROR=0
+PASS=0
 FAILED_FUNCS=""
 
+# pass a test, printing out MSG
+#  usage: passed message
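+#  (caller 0 prints "lineno function filename"; the first two fields
+#  locate the call site reported in the output)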
+function passed {
+    local lineno=$(caller 0 | awk '{print $1}')
+    local function=$(caller 0 | awk '{print $2}')
+    local msg="$1"
+    if [ -z "$msg" ]; then
+        msg="OK"
+    fi
+    PASS=$((PASS+1))
+    echo "PASS: $function:L$lineno $msg"
+}
+
+# fail a test, printing out MSG
+#  usage: failed message
+function failed {
+    local lineno=$(caller 0 | awk '{print $1}')
+    local function=$(caller 0 | awk '{print $2}')
+    local msg="$1"
+    FAILED_FUNCS+="$function:L$lineno\n"
+    echo "ERROR: $function:L$lineno!"
+    echo "   $msg"
+    ERROR=$((ERROR+1))
+}
+
+# assert string comparison of val1 and val2, printing out msg
+#  usage: assert_equal val1 val2 msg
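+#  e.g.: assert_equal "255.255.255.0" "$(cidr2netmask 24)" "cidr2netmask /24"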
 function assert_equal {
     local lineno=`caller 0 | awk '{print $1}'`
     local function=`caller 0 | awk '{print $2}'`
     local msg=$3
+
+    if [ -z "$msg" ]; then
+        msg="OK"
+    fi
     if [[ "$1" != "$2" ]]; then
         FAILED_FUNCS+="$function:L$lineno\n"
         echo "ERROR: $1 != $2 in $function:L$lineno!"
         echo "  $msg"
-        ERROR=1
+        ERROR=$((ERROR+1))
     else
-        echo "$function:L$lineno - ok"
+        PASS=$((PASS+1))
+        echo "PASS: $function:L$lineno - $msg"
     fi
 }
 
+# print a summary of passing and failing tests, exiting
+# with an error if we have failed tests
+#  usage: report_results
 function report_results {
-    if [[ $ERROR -eq 1 ]]; then
-        echo "Tests FAILED"
-        echo $FAILED_FUNCS
+    echo "$PASS Tests PASSED"
+    if [[ $ERROR -gt 0 ]]; then
+        echo
+        echo "The following $ERROR tests FAILED"
+        echo -e "$FAILED_FUNCS"
+        echo "---"
         exit 1
     fi
 }
diff --git a/tools/build_docs.sh b/tools/build_docs.sh
index 2aa0a0a..fa84343 100755
--- a/tools/build_docs.sh
+++ b/tools/build_docs.sh
@@ -2,8 +2,8 @@
 
 # **build_docs.sh** - Build the docs for DevStack
 #
-# - Install shocco if not found on PATH and INSTALL_SHOCCO is set
-# - Clone MASTER_REPO branch MASTER_BRANCH
+# - Install shocco if not found on ``PATH`` and ``INSTALL_SHOCCO`` is set
+# - Clone ``MASTER_REPO`` branch ``MASTER_BRANCH``
 # - Re-creates ``doc/build/html`` directory from existing repo + new generated script docs
 
 # Usage:
@@ -16,7 +16,7 @@
 
 HTML_BUILD=doc/build/html
 
-# Keep track of the devstack directory
+# Keep track of the DevStack directory
 TOP_DIR=$(cd $(dirname "$0")/.. && pwd)
 
 # Uses this shocco branch: https://github.com/dtroyer/shocco/tree/rst_support
@@ -75,7 +75,7 @@
 
 # Build list of scripts to process
 FILES=""
-for f in $(find . -name .git -prune -o \( -type f -name \*.sh -not -path \*shocco/\* -print \)); do
+for f in $(find . \( -name .git -o -name .tox \) -prune -o \( -type f -name \*.sh -not -path \*shocco/\* -print \)); do
     echo $f
     FILES+="$f "
     mkdir -p $FQ_HTML_BUILD/`dirname $f`;
diff --git a/tools/build_venv.sh b/tools/build_venv.sh
index 11d1d35..cfa39a8 100755
--- a/tools/build_venv.sh
+++ b/tools/build_venv.sh
@@ -4,11 +4,12 @@
 #
 # build_venv.sh venv-path [package [...]]
 #
+# Installs basic common prereq packages that require compilation
+# to allow quick copying of resulting venv as a baseline
+#
 # Assumes:
 # - a useful pip is installed
 # - virtualenv will be installed by pip
-# - installs basic common prereq packages that require compilation
-#   to allow quick copying of resulting venv as a baseline
 
 
 VENV_DEST=${1:-.venv}
@@ -16,14 +17,14 @@
 
 MORE_PACKAGES="$@"
 
-# If TOP_DIR is set we're being sourced rather than running stand-alone
+# If ``TOP_DIR`` is set we're being sourced rather than running stand-alone
 # or in a sub-shell
 if [[ -z "$TOP_DIR" ]]; then
 
     set -o errexit
     set -o nounset
 
-    # Keep track of the devstack directory
+    # Keep track of the DevStack directory
     TOP_DIR=$(cd $(dirname "$0")/.. && pwd)
     FILES=$TOP_DIR/files
 
diff --git a/tools/build_wheels.sh b/tools/build_wheels.sh
index f1740df..14c2999 100755
--- a/tools/build_wheels.sh
+++ b/tools/build_wheels.sh
@@ -4,21 +4,22 @@
 #
 # build_wheels.sh [package [...]]
 #
-# System package prerequisites listed in files/*/devlibs will be installed
+# System package prerequisites listed in ``files/*/devlibs`` will be installed
 #
 # Builds wheels for all virtual env requirements listed in
 # ``venv-requirements.txt`` plus any supplied on the command line.
 #
-# Assumes ``tools/install_pip.sh`` has been run and a suitable pip/setuptools is available.
+# Assumes:
+# - ``tools/install_pip.sh`` has been run and a suitable ``pip/setuptools`` is available.
 
-# If TOP_DIR is set we're being sourced rather than running stand-alone
+# If ``TOP_DIR`` is set we're being sourced rather than running stand-alone
 # or in a sub-shell
 if [[ -z "$TOP_DIR" ]]; then
 
     set -o errexit
     set -o nounset
 
-    # Keep track of the devstack directory
+    # Keep track of the DevStack directory
     TOP_DIR=$(cd $(dirname "$0")/.. && pwd)
     FILES=$TOP_DIR/files
 
@@ -59,7 +60,19 @@
 # Install modern pip and wheel
 PIP_VIRTUAL_ENV=$TMP_VENV_PATH pip_install -U pip wheel
 
-# VENV_PACKAGES is a list of packages we want to pre-install
+# BUG: cffi has a lot of issues. It has no stable ABI; if installed
+# code is built with a different ABI than the one that's detected at
+# load time, it tries to compile on the fly for the new ABI in the
+# install location (which will probably be /usr and not
+# writable). Also, cffi is often included via setup_requires by
+# packages, which have different install rules (allowing betas) than
+# pip has.
+#
+# Because of this we must pip install cffi into the venv to build
+# wheels.
+PIP_VIRTUAL_ENV=$TMP_VENV_PATH pip_install_gr cffi
+
+# ``VENV_PACKAGES`` is a list of packages we want to pre-install
 VENV_PACKAGE_FILE=$FILES/venv-requirements.txt
 if [[ -r $VENV_PACKAGE_FILE ]]; then
     VENV_PACKAGES=$(grep -v '^#' $VENV_PACKAGE_FILE)
diff --git a/tools/cpu_map_update.py b/tools/cpu_map_update.py
new file mode 100755
index 0000000..1938793
--- /dev/null
+++ b/tools/cpu_map_update.py
@@ -0,0 +1,89 @@
+#!/usr/bin/env python
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+# This small script updates the libvirt CPU map to add a gate64 CPU model
+# that can be used to enable a common 64-bit-capable feature set across
+# DevStack nodes so that features like nova live migration work.
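+#
+# cpu_map.xml's root element is <cpus>, with one <arch> child per
+# architecture; the new <model> is appended under the x86 arch.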
+
+import sys
+import xml.etree.ElementTree as ET
+from xml.dom import minidom
+
+
+def update_cpu_map(tree):
+    root = tree.getroot()
+    cpus = root  # the document root of cpu_map.xml is the <cpus> element
+    x86 = None
+    for arch in cpus.findall("arch"):
+        if arch.get("name") == "x86":
+            x86 = arch
+            break
+    if x86 is not None:
+        # Create a gate64 cpu model that is core2duo less monitor and pse36
+        gate64 = ET.SubElement(x86, "model")
+        gate64.set("name", "gate64")
+        ET.SubElement(gate64, "vendor").set("name", "Intel")
+        ET.SubElement(gate64, "feature").set("name", "fpu")
+        ET.SubElement(gate64, "feature").set("name", "de")
+        ET.SubElement(gate64, "feature").set("name", "pse")
+        ET.SubElement(gate64, "feature").set("name", "tsc")
+        ET.SubElement(gate64, "feature").set("name", "msr")
+        ET.SubElement(gate64, "feature").set("name", "pae")
+        ET.SubElement(gate64, "feature").set("name", "mce")
+        ET.SubElement(gate64, "feature").set("name", "cx8")
+        ET.SubElement(gate64, "feature").set("name", "apic")
+        ET.SubElement(gate64, "feature").set("name", "sep")
+        ET.SubElement(gate64, "feature").set("name", "pge")
+        ET.SubElement(gate64, "feature").set("name", "cmov")
+        ET.SubElement(gate64, "feature").set("name", "pat")
+        ET.SubElement(gate64, "feature").set("name", "mmx")
+        ET.SubElement(gate64, "feature").set("name", "fxsr")
+        ET.SubElement(gate64, "feature").set("name", "sse")
+        ET.SubElement(gate64, "feature").set("name", "sse2")
+        ET.SubElement(gate64, "feature").set("name", "vme")
+        ET.SubElement(gate64, "feature").set("name", "mtrr")
+        ET.SubElement(gate64, "feature").set("name", "mca")
+        ET.SubElement(gate64, "feature").set("name", "clflush")
+        ET.SubElement(gate64, "feature").set("name", "pni")
+        ET.SubElement(gate64, "feature").set("name", "nx")
+        ET.SubElement(gate64, "feature").set("name", "ssse3")
+        ET.SubElement(gate64, "feature").set("name", "syscall")
+        ET.SubElement(gate64, "feature").set("name", "lm")
+
+
+def format_xml(root):
+    # Adapted from http://pymotw.com/2/xml/etree/ElementTree/create.html
+    # thank you dhellmann
+    rough_string = ET.tostring(root, encoding="UTF-8")
+    dom_parsed = minidom.parseString(rough_string)
+    return dom_parsed.toprettyxml("  ", encoding="UTF-8")
+
+
+def main():
+    if len(sys.argv) != 2:
+        raise Exception("Must pass path to cpu_map.xml to update")
+    cpu_map = sys.argv[1]
+    tree = ET.parse(cpu_map)
+    for model in tree.getroot().iter("model"):
+        if model.get("name") == "gate64":
+            # gate64 model is already present
+            return
+    update_cpu_map(tree)
+    pretty_xml = format_xml(tree.getroot())
+    with open(cpu_map, 'w') as f:
+        f.write(pretty_xml)
+
+
+if __name__ == "__main__":
+    main()
diff --git a/tools/create-stack-user.sh b/tools/create-stack-user.sh
index 9c29ecd..b49164b 100755
--- a/tools/create-stack-user.sh
+++ b/tools/create-stack-user.sh
@@ -17,7 +17,7 @@
 
 set -o errexit
 
-# Keep track of the devstack directory
+# Keep track of the DevStack directory
 TOP_DIR=$(cd $(dirname "$0")/.. && pwd)
 
 # Import common functions
diff --git a/tools/fixup_stuff.sh b/tools/fixup_stuff.sh
index f8edd16..d3a3de2 100755
--- a/tools/fixup_stuff.sh
+++ b/tools/fixup_stuff.sh
@@ -17,7 +17,7 @@
 #   - uninstall firewalld (f20 only)
 
 
-# If TOP_DIR is set we're being sourced rather than running stand-alone
+# If ``TOP_DIR`` is set we're being sourced rather than running stand-alone
 # or in a sub-shell
 if [[ -z "$TOP_DIR" ]]; then
     set -o errexit
@@ -27,7 +27,7 @@
     TOOLS_DIR=$(cd $(dirname "$0") && pwd)
     TOP_DIR=$(cd $TOOLS_DIR/..; pwd)
 
-    # Change dir to top of devstack
+    # Change dir to top of DevStack
     cd $TOP_DIR
 
     # Import common functions
@@ -38,7 +38,7 @@
 
 # Keystone Port Reservation
 # -------------------------
-# Reserve and prevent $KEYSTONE_AUTH_PORT and $KEYSTONE_AUTH_PORT_INT from
+# Reserve and prevent ``KEYSTONE_AUTH_PORT`` and ``KEYSTONE_AUTH_PORT_INT`` from
 # being used as ephemeral ports by the system. The default(s) are 35357 and
 # 35358 which are in the Linux defined ephemeral port range (in disagreement
 # with the IANA ephemeral port range). This is a workaround for bug #1253482
@@ -47,9 +47,9 @@
 # exception into the Kernel for the Keystone AUTH ports.
 keystone_ports=${KEYSTONE_AUTH_PORT:-35357},${KEYSTONE_AUTH_PORT_INT:-35358}
 
-# only do the reserved ports when available, on some system (like containers)
+# Only do the reserved ports when available; on some systems (like containers)
 # where it's not exposed we are almost pretty sure these ports would be
-# exclusive for our devstack.
+# exclusive for our DevStack.
 if sysctl net.ipv4.ip_local_reserved_ports >/dev/null 2>&1; then
     # Get any currently reserved ports, strip off leading whitespace
     reserved_ports=$(sysctl net.ipv4.ip_local_reserved_ports | awk -F'=' '{print $2;}' | sed 's/^ //')
@@ -59,7 +59,7 @@
         sudo sysctl -w net.ipv4.ip_local_reserved_ports=${keystone_ports}
     else
         # If there are currently reserved ports, keep those and also reserve the
-        # keystone specific ports. Duplicate reservations are merged into a single
+        # Keystone specific ports. Duplicate reservations are merged into a single
         # reservation (or range) automatically by the kernel.
         sudo sysctl -w net.ipv4.ip_local_reserved_ports=${keystone_ports},${reserved_ports}
     fi
@@ -109,19 +109,28 @@
     fi
 
     FORCE_FIREWALLD=$(trueorfalse False $FORCE_FIREWALLD)
-    if [[ ${DISTRO} =~ (f20) && $FORCE_FIREWALLD == "False" ]]; then
+    if [[ $FORCE_FIREWALLD == "False" ]]; then
         # On Fedora 20 firewalld interacts badly with libvirt and
-        # slows things down significantly.  However, for those cases
-        # where that combination is desired, allow this fix to be skipped.
-
-        # There was also an additional issue with firewalld hanging
-        # after install of libvirt with polkit.  See
-        # https://bugzilla.redhat.com/show_bug.cgi?id=1099031
+        # slows things down significantly (this issue was fixed in
+        # later Fedoras).  There was also an additional issue with
+        # firewalld hanging after install of libvirt with polkit [1].
+        # firewalld also causes problems with neutron+ipv6 [2].
+        #
+        # Note we do the same as the RDO packages and stop & disable,
+        # rather than remove.  This is because other packages might
+        # have the dependency [3][4].
+        #
+        # [1] https://bugzilla.redhat.com/show_bug.cgi?id=1099031
+        # [2] https://bugs.launchpad.net/neutron/+bug/1455303
+        # [3] https://github.com/redhat-openstack/openstack-puppet-modules/blob/master/firewall/manifests/linux/redhat.pp
+        # [4] http://docs.openstack.org/developer/devstack/guides/neutron.html
         if is_package_installed firewalld; then
-            uninstall_package firewalld
+            sudo systemctl disable firewalld
+            sudo systemctl enable iptables
+            sudo systemctl stop firewalld
+            sudo systemctl start iptables
         fi
     fi
-
 fi
 
 # The version of pip(1.5.4) supported by python-virtualenv(1.11.4) has
@@ -129,3 +138,24 @@
 # and installing the latest version using pip.
 uninstall_package python-virtualenv
 pip_install -U virtualenv
+
+# If a non-system python-requests is installed then it will use the
+# built-in CA certificate store rather than the distro-specific
+# CA certificate store. Detect this and symlink to the correct
+# one. If the value for the CA is not rooted in /etc then we know
+# we need to change it.
+capath=$(python -c "from requests import certs; print certs.where()")
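+# (a pip-installed requests typically reports a path under its own
+# site-packages tree, e.g. .../requests/cacert.pem)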
+
+if is_service_enabled tls-proxy || [ "$USE_SSL" == "True" ]; then
+    if [[ ! $capath =~ ^/etc/.* && ! -L $capath ]]; then
+        if is_fedora; then
+            sudo rm -f $capath
+            sudo ln -s /etc/pki/tls/certs/ca-bundle.crt $capath
+        elif is_ubuntu; then
+            sudo rm -f $capath
+            sudo ln -s /etc/ssl/certs/ca-certificates.crt $capath
+        else
+            echo "Don't know how to set the CA bundle, expect the install to fail."
+        fi
+    fi
+fi
diff --git a/tools/image_list.sh b/tools/image_list.sh
index 88c1d09..a27635e 100755
--- a/tools/image_list.sh
+++ b/tools/image_list.sh
@@ -1,6 +1,6 @@
 #!/bin/bash
 
-# Keep track of the devstack directory
+# Keep track of the DevStack directory
 TOP_DIR=$(cd $(dirname "$0")/.. && pwd)
 
 source $TOP_DIR/functions
@@ -9,8 +9,6 @@
 # dummy in the end position to trigger the fall through case.
 DRIVERS="openvz ironic libvirt vsphere xenserver dummy"
 
-CIRROS_ARCHS="x86_64 i386"
-
 # Extra variables to trigger getting additional images.
 export ENABLED_SERVICES="h-api,tr-api"
 HEAT_FETCHED_TEST_IMAGE="Fedora-i386-20-20131211.1-sda"
@@ -19,15 +17,12 @@
 # Loop over all the virt drivers and collect all the possible images
 ALL_IMAGES=""
 for driver in $DRIVERS; do
-    for arch in $CIRROS_ARCHS; do
-        CIRROS_ARCH=$arch
-        VIRT_DRIVER=$driver
-        URLS=$(source $TOP_DIR/stackrc && echo $IMAGE_URLS)
-        if [[ ! -z "$ALL_IMAGES" ]]; then
-            ALL_IMAGES+=,
-        fi
-        ALL_IMAGES+=$URLS
-    done
+    VIRT_DRIVER=$driver
+    URLS=$(source $TOP_DIR/stackrc && echo $IMAGE_URLS)
+    if [[ ! -z "$ALL_IMAGES" ]]; then
+        ALL_IMAGES+=,
+    fi
+    ALL_IMAGES+=$URLS
 done
 
 # Make a nice list
diff --git a/tools/info.sh b/tools/info.sh
index a8f9544..433206e 100755
--- a/tools/info.sh
+++ b/tools/info.sh
@@ -2,7 +2,7 @@
 
 # **info.sh**
 
-# Produce a report on the state of devstack installs
+# Produce a report on the state of DevStack installs
 #
 # Output fields are separated with '|' chars
 # Output types are git,localrc,os,pip,pkg:
@@ -14,7 +14,7 @@
 #   pkg|<package>|<version>
 
 function usage {
-    echo "$0 - Report on the devstack configuration"
+    echo "$0 - Report on the DevStack configuration"
     echo ""
     echo "Usage: $0"
     exit 1
diff --git a/tools/install_pip.sh b/tools/install_pip.sh
index b7b40c7..0f7c962 100755
--- a/tools/install_pip.sh
+++ b/tools/install_pip.sh
@@ -16,7 +16,7 @@
 TOOLS_DIR=$(cd $(dirname "$0") && pwd)
 TOP_DIR=`cd $TOOLS_DIR/..; pwd`
 
-# Change dir to top of devstack
+# Change dir to top of DevStack
 cd $TOP_DIR
 
 # Import common functions
@@ -42,11 +42,11 @@
 
 
 function install_get_pip {
-    # the openstack gate and others put a cached version of get-pip.py
+    # The OpenStack gate and others put a cached version of get-pip.py
     # for this to find, explicitly to avoid download issues.
     #
-    # However, if devstack *did* download the file, we want to check
-    # for updates; people can leave thier stacks around for a long
+    # However, if DevStack *did* download the file, we want to check
+    # for updates; people can leave their stacks around for a long
     # time and in the mean-time pip might get upgraded.
     #
     # Thus we use curl's "-z" feature to always check the modified
@@ -74,7 +74,7 @@
         touch $PIP_CONFIG_FILE
     fi
     if ! ini_has_option "$PIP_CONFIG_FILE" "global" "index-url"; then
-        #it means that the index-url does not exist
+        # It means that the index-url does not exist
         iniset "$PIP_CONFIG_FILE" "global" "index-url" "$PYPI_OVERRIDE"
     fi
 
diff --git a/tools/install_prereqs.sh b/tools/install_prereqs.sh
index 303cc63..a07e58d 100755
--- a/tools/install_prereqs.sh
+++ b/tools/install_prereqs.sh
@@ -18,10 +18,10 @@
     esac
 done
 
-# If TOP_DIR is set we're being sourced rather than running stand-alone
+# If ``TOP_DIR`` is set we're being sourced rather than running stand-alone
 # or in a sub-shell
 if [[ -z "$TOP_DIR" ]]; then
-    # Keep track of the devstack directory
+    # Keep track of the DevStack directory
     TOP_DIR=$(cd $(dirname "$0")/.. && pwd)
 
     # Import common functions
@@ -62,8 +62,10 @@
 
 # Install package requirements
 PACKAGES=$(get_packages general $ENABLED_SERVICES)
+PACKAGES="$PACKAGES $(get_plugin_packages)"
+
 if is_ubuntu && echo $PACKAGES | grep -q dkms ; then
-    # ensure headers for the running kernel are installed for any DKMS builds
+    # Ensure headers for the running kernel are installed for any DKMS builds
     PACKAGES="$PACKAGES linux-headers-$(uname -r)"
 fi
 
diff --git a/tools/ironic/scripts/create-node b/tools/ironic/scripts/create-node
index 25b53d4..b018acd 100755
--- a/tools/ironic/scripts/create-node
+++ b/tools/ironic/scripts/create-node
@@ -6,13 +6,13 @@
 
 set -ex
 
-# Keep track of the devstack directory
+# Keep track of the DevStack directory
 TOP_DIR=$(cd $(dirname "$0")/.. && pwd)
 
 NAME=$1
 CPU=$2
 MEM=$(( 1024 * $3 ))
-# extra G to allow fuzz for partition table : flavor size and registered size
+# Extra G to allow fuzz for partition table: flavor size and registered size
 # need to be different to actual size.
 DISK=$(( $4 + 1))
 
diff --git a/tools/ironic/scripts/setup-network b/tools/ironic/scripts/setup-network
index e326bf8..83308ed 100755
--- a/tools/ironic/scripts/setup-network
+++ b/tools/ironic/scripts/setup-network
@@ -9,7 +9,7 @@
 
 LIBVIRT_CONNECT_URI=${LIBVIRT_CONNECT_URI:-"qemu:///system"}
 
-# Keep track of the devstack directory
+# Keep track of the DevStack directory
 TOP_DIR=$(cd $(dirname "$0")/.. && pwd)
 BRIDGE_SUFFIX=${1:-''}
 BRIDGE_NAME=brbm$BRIDGE_SUFFIX
@@ -19,7 +19,7 @@
 # Only add bridge if missing
 (sudo ovs-vsctl list-br | grep ${BRIDGE_NAME}$) || sudo ovs-vsctl add-br ${BRIDGE_NAME}
 
-# remove bridge before replacing it.
+# Remove bridge before replacing it.
 (virsh net-list | grep "${BRIDGE_NAME} ") && virsh net-destroy ${BRIDGE_NAME}
 (virsh net-list --inactive  | grep "${BRIDGE_NAME} ") && virsh net-undefine ${BRIDGE_NAME}
 
diff --git a/tools/outfilter.py b/tools/outfilter.py
index 9686a38..f82939b 100755
--- a/tools/outfilter.py
+++ b/tools/outfilter.py
@@ -14,8 +14,8 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
-# This is an output filter to filter and timestamp the logs from grenade and
-# devstack. Largely our awk filters got beyond the complexity level which were
+# This is an output filter to filter and timestamp the logs from Grenade and
+# DevStack. Largely our awk filters got beyond the complexity level which were
 # sustainable, so this provides us much more control in a single place.
 #
 # The overhead of running python should be less than execing `date` a million
@@ -32,7 +32,7 @@
 
 def get_options():
     parser = argparse.ArgumentParser(
-        description='Filter output by devstack and friends')
+        description='Filter output by DevStack and friends')
     parser.add_argument('-o', '--outfile',
                         help='Output file for content',
                         default=None)
@@ -52,7 +52,7 @@
     if opts.outfile:
         outfile = open(opts.outfile, 'a', 0)
 
-    # otherwise fileinput reprocess args as files
+    # Otherwise fileinput reprocess args as files
     sys.argv = []
     while True:
         line = sys.stdin.readline()
@@ -63,9 +63,9 @@
         if skip_line(line):
             continue
 
-        # this prevents us from nesting date lines, because
-        # we'd like to pull this in directly in grenade and not double
-        # up on devstack lines
+        # This prevents us from nesting date lines, because
+        # we'd like to pull this in directly in Grenade and not double
+        # up on DevStack lines
         if HAS_DATE.search(line) is None:
             now = datetime.datetime.utcnow()
             line = ("%s | %s" % (
diff --git a/tools/peakmem_tracker.sh b/tools/peakmem_tracker.sh
new file mode 100755
index 0000000..0d5728a
--- /dev/null
+++ b/tools/peakmem_tracker.sh
@@ -0,0 +1,96 @@
+#!/bin/bash
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+set -o errexit
+
+# Time to sleep between checks, in seconds
+SLEEP_TIME=20
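+# (override with the -s option parsed below)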
+
+# MemAvailable is the best estimation and has built-in heuristics
+# around reclaimable memory.  However, it is not available until 3.14
+# kernel (i.e. Ubuntu LTS Trusty misses it).  In that case, we fall
+# back to free+buffers+cache as the available memory.
+USE_MEM_AVAILABLE=0
+if grep -q '^MemAvailable:' /proc/meminfo; then
+    USE_MEM_AVAILABLE=1
+fi
+
+function get_mem_available {
+    if [[ $USE_MEM_AVAILABLE -eq 1 ]]; then
+        awk '/^MemAvailable:/ {print $2}' /proc/meminfo
+    else
+        awk '/^MemFree:/ {free=$2}
+            /^Buffers:/ {buffers=$2}
+            /^Cached:/  {cached=$2}
+            END { print free+buffers+cached }' /proc/meminfo
+    fi
+}
+
+# whenever we see less memory available than last time, dump the
+# snapshot of current usage; i.e. checking the latest entry in the
+# file will give the peak-memory usage
+function tracker {
+    local low_point=$(get_mem_available)
+    while true; do
+
+        local mem_available=$(get_mem_available)
+
+        if [[ $mem_available -lt $low_point ]]; then
+            low_point=$mem_available
+            echo "[[["
+            date
+            echo "---"
+            # Always output a greppable summary, given the difference
+            # in meminfo output described above.
+            echo "peakmem_tracker low_point: $mem_available"
+            echo "---"
+            cat /proc/meminfo
+            echo "---"
+            # would a hierarchical view (-H) be more useful?  output is
+            # not sorted by usage then, however, and the first
+            # question is "what's using up the memory"
+            #
+            # there are a lot of kernel threads, especially on an 8-CPU
+            # system.  do a best-effort removal to improve
+            # signal/noise ratio of output.
+            ps --sort=-pmem -eo pid:10,pmem:6,rss:15,ppid:10,cputime:10,nlwp:8,wchan:25,args:100 |
+                grep -v ']$'
+            echo "]]]"
+        fi
+
+        sleep $SLEEP_TIME
+    done
+}
+
+function usage {
+    echo "Usage: $0 [-x] [-s N]" 1>&2
+    exit 1
+}
+
+while getopts ":s:x" opt; do
+    case $opt in
+        s)
+            SLEEP_TIME=$OPTARG
+            ;;
+        x)
+            set -o xtrace
+            ;;
+        *)
+            usage
+            ;;
+    esac
+done
+shift $((OPTIND-1))
+
+tracker
diff --git a/tools/ping_neutron.sh b/tools/ping_neutron.sh
new file mode 100755
index 0000000..d36b7f6
--- /dev/null
+++ b/tools/ping_neutron.sh
@@ -0,0 +1,65 @@
+#!/bin/bash
+#
+# Copyright 2015 Hewlett-Packard Development Company, L.P.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+# Ping a neutron guest using a network namespace probe
+
+set -o errexit
+set -o pipefail
+
+TOP_DIR=$(cd $(dirname "$0")/.. && pwd)
+
+# This *must* be run as the admin tenant
+source $TOP_DIR/openrc admin admin
+
+function usage {
+    cat - <<EOF
+ping_neutron.sh <net_name> [ping args]
+
+This provides a wrapper to ping neutron guests that are on isolated
+tenant networks that the caller can't normally reach. It does so by
+creating a network namespace probe.
+
+It takes arguments like ping, except the first arg must be the network
+name.
+
+Note: in environments with duplicate network names, the results are
+non-deterministic.
+
+This should *really* be in the neutron cli.
+
+EOF
+    exit 1
+}
+
+NET_NAME=$1
+
+if [[ -z "$NET_NAME" ]]; then
+    echo "Error: net_name is required"
+    usage
+fi
+
+REMAINING_ARGS="${@:2}"
+
+# BUG: with duplicate network names, this fails pretty hard.
+NET_ID=$(neutron net-list $NET_NAME | grep "$NET_NAME" | awk '{print $2}')
+PROBE_ID=$(neutron-debug probe-list -c id -c network_id | grep "$NET_ID" | awk '{print $2}' | head -n 1)
+
+# This runs a command inside the specific netns
+NET_NS_CMD="ip netns exec qprobe-$PROBE_ID"
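+# e.g. this expands to "ip netns exec qprobe-<probe-id>", where the
+# probe id comes from neutron-debug above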
+
+PING_CMD="sudo $NET_NS_CMD ping $REMAING_ARGS"
+echo "Running $PING_CMD"
+$PING_CMD
diff --git a/tools/upload_image.sh b/tools/upload_image.sh
index 5d23f31..19c6b71 100755
--- a/tools/upload_image.sh
+++ b/tools/upload_image.sh
@@ -32,7 +32,7 @@
 fi
 
 # Get a token to authenticate to glance
-TOKEN=$(keystone token-get | grep ' id ' | get_field 2)
+TOKEN=$(openstack token issue -c id -f value)
 die_if_not_set $LINENO TOKEN "Keystone fail to get token"
 
 # Glance connection info.  Note the port must be specified.
diff --git a/tools/worlddump.py b/tools/worlddump.py
index 9a62c0d..d846f10 100755
--- a/tools/worlddump.py
+++ b/tools/worlddump.py
@@ -18,6 +18,7 @@
 
 import argparse
 import datetime
+import fnmatch
 import os
 import os.path
 import sys
@@ -41,12 +42,24 @@
     print "WARN: %s" % msg
 
 
+def _dump_cmd(cmd):
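+    # Print the command underlined with dashes, then its output.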
+    print cmd
+    print "-" * len(cmd)
+    print
+    print os.popen(cmd).read()
+
+
+def _header(name):
+    print
+    print name
+    print "=" * len(name)
+    print
+
+
 def disk_space():
     # the df output
-    print """
-File System Summary
-===================
-"""
+    _header("File System Summary")
+
     dfraw = os.popen("df -Ph").read()
     df = [s.split() for s in dfraw.splitlines()]
     for fs in df:
@@ -61,13 +74,36 @@
     print dfraw
 
 
+def iptables_dump():
+    tables = ['filter', 'nat', 'mangle']
+    _header("IP Tables Dump")
+
+    for table in tables:
+        _dump_cmd("sudo iptables --line-numbers -L -nv -t %s" % table)
+
+
+def network_dump():
+    _header("Network Dump")
+
+    _dump_cmd("brctl show")
+    _dump_cmd("arp -n")
+    _dump_cmd("ip addr")
+    _dump_cmd("ip link")
+    _dump_cmd("ip route")
+
+
 def process_list():
-    print """
-Process Listing
-===============
-"""
-    psraw = os.popen("ps auxw").read()
-    print psraw
+    _header("Process Listing")
+    _dump_cmd("ps axo "
+              "user,ppid,pid,pcpu,pmem,vsz,rss,tty,stat,start,time,args")
+
+
+def compute_consoles():
+    _header("Compute consoles")
+    for root, dirnames, filenames in os.walk('/opt/stack'):
+        for filename in fnmatch.filter(filenames, 'console.log'):
+            fullpath = os.path.join(root, filename)
+            _dump_cmd("sudo cat %s" % fullpath)
 
 
 def main():
@@ -79,6 +115,9 @@
         os.dup2(f.fileno(), sys.stdout.fileno())
         disk_space()
         process_list()
+        network_dump()
+        iptables_dump()
+        compute_consoles()
 
 
 if __name__ == '__main__':
diff --git a/tools/xen/README.md b/tools/xen/README.md
index c8f47be..61694e9 100644
--- a/tools/xen/README.md
+++ b/tools/xen/README.md
@@ -97,7 +97,7 @@
     # Download a vhd and a uec image
     IMAGE_URLS="\
     https://github.com/downloads/citrix-openstack/warehouse/cirros-0.3.0-x86_64-disk.vhd.tgz,\
-    http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-uec.tar.gz"
+    http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-uec.tar.gz"
 
     # Explicitly set virt driver
     VIRT_DRIVER=xenserver
diff --git a/tools/xen/xenrc b/tools/xen/xenrc
index 43a6ce8..be6c5ca 100644
--- a/tools/xen/xenrc
+++ b/tools/xen/xenrc
@@ -14,12 +14,12 @@
 # Size of image
 VDI_MB=${VDI_MB:-5000}
 
-# Devstack now contains many components.  3GB ram is not enough to prevent
+# DevStack now contains many components.  4GB RAM is not enough to prevent
 # swapping and memory fragmentation - the latter of which can cause failures
 # such as blkfront failing to plug a VBD and lead to random test fails.
 #
-# Set to 4GB so an 8GB XenServer VM can have a 1GB Dom0 and leave 3GB for VMs
-OSDOMU_MEM_MB=4096
+# Set to 6GB so an 8GB XenServer VM can have a 1GB Dom0 and leave 1GB for VMs
+OSDOMU_MEM_MB=6144
 OSDOMU_VDI_GB=8
 
 # Network mapping. Specify bridge names or network names. Network names may
diff --git a/tox.ini b/tox.ini
index bc84928..788fea9 100644
--- a/tox.ini
+++ b/tox.ini
@@ -10,19 +10,20 @@
 [testenv:bashate]
 deps = bashate
 whitelist_externals = bash
-commands = bash -c "find {toxinidir}          \
-         -not \( -type d -name .?\* -prune \) \ # prune all 'dot' dirs
-         -not \( -type d -name doc -prune \)  \ # skip documentation
-         -type f                              \ # only files
-         -not -name \*~                       \ # skip editors, readme, etc
-         -not -name \*.md                     \
-         \(                                   \
-          -name \*.sh -or                     \
-          -name \*rc -or                      \
-          -name functions\* -or               \
-          -wholename \*/inc/\*                \ # /inc files and
-          -wholename \*/lib/\*                \ # /lib files are shell, but
-         \)                                   \ #   have no extension
+commands = bash -c "find {toxinidir}             \
+         -not \( -type d -name .?\* -prune \)    \ # prune all 'dot' dirs
+         -not \( -type d -name doc -prune \)     \ # skip documentation
+         -not \( -type d -name shocco -prune \)  \ # skip shocco
+         -type f                                 \ # only files
+         -not -name \*~                          \ # skip editors, readme, etc
+         -not -name \*.md                        \
+         \(                                      \
+          -name \*.sh -or                        \
+          -name \*rc -or                         \
+          -name functions\* -or                  \
+          -wholename \*/inc/\* -or               \ # /inc files and
+          -wholename \*/lib/\*                   \ # /lib files are shell, but
+         \)                                      \ #   have no extension
          -print0 | xargs -0 bashate -v"
 
 [testenv:docs]
@@ -32,6 +33,10 @@
    sphinx>=1.1.2,<1.2
    pbr>=0.6,!=0.7,<1.0
    oslosphinx
+   nwdiag
+   blockdiag
+   sphinxcontrib-blockdiag
+   sphinxcontrib-nwdiag
 whitelist_externals = bash
 setenv =
   TOP_DIR={toxinidir}
diff --git a/unstack.sh b/unstack.sh
index a6aeec5..f0da971 100755
--- a/unstack.sh
+++ b/unstack.sh
@@ -19,7 +19,7 @@
     esac
 done
 
-# Keep track of the current devstack directory.
+# Keep track of the current DevStack directory.
 TOP_DIR=$(cd $(dirname "$0") && pwd)
 FILES=$TOP_DIR/files
 
@@ -45,6 +45,10 @@
 # Configure Projects
 # ==================
 
+# Plugin Phase 0: override_defaults - allow plugins to override
+# defaults before other services are run
+run_phase override_defaults
+
 # Import apache functions
 source $TOP_DIR/lib/apache
 
@@ -63,7 +67,7 @@
 source $TOP_DIR/lib/swift
 source $TOP_DIR/lib/ceilometer
 source $TOP_DIR/lib/heat
-source $TOP_DIR/lib/neutron
+source $TOP_DIR/lib/neutron-legacy
 source $TOP_DIR/lib/ldap
 source $TOP_DIR/lib/dstat
 
@@ -169,12 +173,10 @@
     cleanup_neutron
 fi
 
-if is_service_enabled trove; then
-    cleanup_trove
+if is_service_enabled dstat; then
+    stop_dstat
 fi
 
-stop_dstat
-
 # Clean up the remainder of the screen processes
 SCREEN=$(which screen)
 if [[ -n "$SCREEN" ]]; then
@@ -186,3 +188,4 @@
 
 # BUG: maybe it doesn't exist? We should isolate this further down.
 clean_lvm_volume_group $DEFAULT_VOLUME_GROUP_NAME || /bin/true
+clean_lvm_filter