Merge "Increase Swift disk size up to 2GB if Glance is enabled"
diff --git a/.gitignore b/.gitignore
index 67ab722..c6900c8 100644
--- a/.gitignore
+++ b/.gitignore
@@ -14,7 +14,7 @@
 files/*.qcow2
 files/images
 files/pip-*
-files/get-pip.py
+files/get-pip.py*
 local.conf
 local.sh
 localrc
diff --git a/HACKING.rst b/HACKING.rst
index b3c82a3..a40af54 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -25,23 +25,63 @@
 __ lp_
 .. _lp: https://launchpad.net/~devstack
 
+The `Gerrit review
+queue <https://review.openstack.org/#/q/project:openstack-dev/devstack,n,z>`__
+is used for all commits.
+
 The primary script in DevStack is ``stack.sh``, which performs the bulk of the
 work for DevStack's use cases.  There is a subscript ``functions`` that contains
 generally useful shell functions and is used by a number of the scripts in
 DevStack.
 
-The ``lib`` directory contains sub-scripts for projects or packages that ``stack.sh``
-sources to perform much of the work related to those projects.  These sub-scripts
-contain configuration defaults and functions to configure, start and stop the project
-or package.  These variables and functions are also used by related projects,
-such as Grenade, to manage a DevStack installation.
-
 A number of additional scripts can be found in the ``tools`` directory that may
 be useful in supporting DevStack installations.  Of particular note are ``info.sh``
 to collect and report information about the installed system, and ``install_prereqs.sh``
 that handles installation of the prerequisite packages for DevStack.  It is
 suitable, for example, to pre-load a system for making a snapshot.
 
+Repo Layout
+-----------
+
+The DevStack repo generally keeps all of the primary scripts at the root
+level.
+
+``doc`` - Contains the Sphinx source for the documentation.
+``tools/build_docs.sh`` is used to generate the HTML versions of the
+DevStack scripts.  A complete doc build can be run with ``tox -edocs``.
+
+``exercises`` - Contains the test scripts used to sanity-check and
+demonstrate some OpenStack functions. These scripts know how to exit
+early or skip services that are not enabled.
+
+``extras.d`` - Contains the dispatch scripts called by the hooks in
+``stack.sh``, ``unstack.sh`` and ``clean.sh``. See :doc:`the plugins
+docs <plugins>` for more information.
+
+``files`` - Contains a variety of otherwise lost files used in
+configuring and operating DevStack. This includes templates for
+configuration files and the system dependency information. This is also
+where image files are downloaded and expanded if necessary.
+
+``lib`` - Contains the sub-scripts specific to each project. This is
+where the work of managing a project's services is located. Each
+top-level project (Keystone, Nova, etc) has a file here. Additionally
+there are some for system services and project plugins.  These
+variables and functions are also used by related projects, such as
+Grenade, to manage a DevStack installation.
+
+``samples`` - Contains a sample of the local files not included in the
+DevStack repo.
+
+``tests`` - The DevStack test suite is rather sparse, mostly consisting
+of tests of specific fragile functions in the ``functions`` and
+``functions-common`` files.
+
+``tools`` - Contains a collection of stand-alone scripts. While these
+may reference the top-level DevStack configuration they can generally be
+run alone. There are also some sub-directories to support specific
+environments such as XenServer.
+
 
 Scripts
 -------
@@ -249,6 +289,7 @@
 
 Control Structure Rules
 -----------------------
+
 - then should be on the same line as the if
 - do should be on the same line as the for
 
@@ -270,6 +311,7 @@
 
 Variables and Functions
 -----------------------
+
 - functions should be used whenever possible for clarity
 - functions should use ``local`` variables as much as possible to
   ensure they are isolated from the rest of the environment
@@ -278,3 +320,48 @@
 - function names should_have_underscores, NotCamelCase.
 - functions should be declared as per the regex ^function foo {$
   with code starting on the next line
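+
+For example, a declaration following these rules (the name and body here
+are purely illustrative)::
+
+    function foo_bar {
+        local value=$1
+        echo "$value"
+    }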
+
+
+Review Criteria
+===============
+
+There are some broad criteria that will be followed when reviewing
+your change:
+
+* **Is it passing tests** -- your change will not be reviewed
+  thoroughly unless the official CI has run successfully against it.
+
+* **Does this belong in DevStack** -- DevStack reviewers have a
+  default position of "no" but are ready to be convinced by your
+  change.
+
+  For very large changes, you should consider :doc:`the plugins system
+  <plugins>` to see if your code is better abstracted from the main
+  repository.
+
+  For smaller changes, you should always consider if the change can be
+  encapsulated by per-user settings in ``local.conf``.  A common example
+  is adding a simple config-option to an ``ini`` file (see the
+  ``local.conf`` sketch after this list).  Specific flags are not usually
+  required for this, although adding documentation about how to achieve a
+  larger goal (which might include turning on various settings, etc.) is
+  always welcome.
+
+* **Work-arounds** -- often things get broken and DevStack can be in a
+  position to fix them.  Work-arounds are fine, but should be
+  presented in the context of fixing the root-cause of the problem.
+  This means it is well-commented in the code and the change-log and
+  most likely includes links to changes or bugs that fix the
+  underlying problem.
+
+* **Should this be upstream** -- DevStack generally does not override
+  default choices provided by projects and attempts to not
+  unexpectedly modify behaviour.
+
+* **Context in commit messages** -- DevStack touches many different
+  areas and reviewers need context around changes to make good
+  decisions.  We also always want it to be clear to someone -- perhaps
+  even years from now -- why we were motivated to make a change at the
+  time.
+
+* **Reviewers** -- please see ``MAINTAINERS.rst`` for a list of people
+  that should be added to reviews of various sub-systems.
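+
+As noted in the review criteria above, a simple configuration option can
+usually be handled entirely from ``local.conf`` rather than by a DevStack
+change.  A minimal sketch (the option and value are purely illustrative)::
+
+    [[post-config|$NOVA_CONF]]
+    [DEFAULT]
+    my_hypothetical_option = True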
diff --git a/MAINTAINERS.rst b/MAINTAINERS.rst
index a376eb0..20e8655 100644
--- a/MAINTAINERS.rst
+++ b/MAINTAINERS.rst
@@ -90,3 +90,7 @@
 
 * Flavio Percoco <flaper87@gmail.com>
 * Malini Kamalambal <malini.kamalambal@rackspace.com>
+
+Oracle Linux
+~~~~~~~~~~~~
+
+* Wiekus Beukes <wiekus.beukes@oracle.com>
diff --git a/README.md b/README.md
index c5e7f55..53de970 100644
--- a/README.md
+++ b/README.md
@@ -249,14 +249,17 @@
     Variable Name                    Notes
     ----------------------------------------------------------------------------
     Q_AGENT                          This specifies which agent to run with the
-                                     ML2 Plugin (either `openvswitch` or `linuxbridge`).
+                                     ML2 Plugin (typically either `openvswitch`
+                                     or `linuxbridge`).
+                                     Defaults to `openvswitch`.
     Q_ML2_PLUGIN_MECHANISM_DRIVERS   The ML2 MechanismDrivers to load. The default
-                                     is none. Note, ML2 will work with the OVS
-                                     and LinuxBridge agents by default.
+                                     is `openvswitch,linuxbridge`.
     Q_ML2_PLUGIN_TYPE_DRIVERS        The ML2 TypeDrivers to load. Defaults to
                                      all available TypeDrivers.
-    Q_ML2_PLUGIN_GRE_TYPE_OPTIONS    GRE TypeDriver options. Defaults to none.
-    Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS  VXLAN TypeDriver options. Defaults to none.
+    Q_ML2_PLUGIN_GRE_TYPE_OPTIONS    GRE TypeDriver options. Defaults to
+                                     `tunnel_id_ranges=1:1000`.
+    Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS  VXLAN TypeDriver options. Defaults to
+                                     `vni_ranges=1001:2000`.
     Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS   VLAN TypeDriver options. Defaults to none.
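+
+    Example `local.conf` settings (values are illustrative only):
+
+        Q_AGENT=linuxbridge
+        Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS="vni_ranges=3001:4000"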
 
 # Heat
diff --git a/clean.sh b/clean.sh
index ad4525b..035489c 100755
--- a/clean.sh
+++ b/clean.sh
@@ -49,7 +49,7 @@
 source $TOP_DIR/lib/swift
 source $TOP_DIR/lib/ceilometer
 source $TOP_DIR/lib/heat
-source $TOP_DIR/lib/neutron
+source $TOP_DIR/lib/neutron-legacy
 source $TOP_DIR/lib/ironic
 source $TOP_DIR/lib/trove
 
diff --git a/doc/source/configuration.rst b/doc/source/configuration.rst
index fe3e2c2..7d06658 100644
--- a/doc/source/configuration.rst
+++ b/doc/source/configuration.rst
@@ -378,6 +378,18 @@
       can be configured with any valid IPv6 prefix. The default values make
       use of an auto-generated ``IPV6_GLOBAL_ID`` to comply with RFC 4193.*
 
+Unit test dependencies
+----------------------
+
+    | *Default: ``INSTALL_TESTONLY_PACKAGES=False``*
+    |  In order to run unit tests with the ``run_tests.sh`` script, the
+       required package dependencies need to be installed.
+       Set the following option to install them:
+
+    ::
+
+        INSTALL_TESTONLY_PACKAGES=True
+
 Examples
 ========
 
diff --git a/doc/source/contributing.rst b/doc/source/contributing.rst
deleted file mode 100644
index 50c0100..0000000
--- a/doc/source/contributing.rst
+++ /dev/null
@@ -1,94 +0,0 @@
-============
-Contributing
-============
-
-DevStack uses the standard OpenStack contribution process as outlined in
-`the OpenStack developer
-guide <http://docs.openstack.org/infra/manual/developers.html>`__. This
-means that you will need to meet the requirements of the Contribututors
-License Agreement (CLA). If you have already done that for another
-OpenStack project you are good to go.
-
-Things To Know
-==============
-
-|
-| **Where Things Are**
-
-The official DevStack repository is located at
-``git://git.openstack.org/openstack-dev/devstack.git``, replicated from
-the repo maintained by Gerrit. GitHub also has a mirror at
-``git://github.com/openstack-dev/devstack.git``.
-
-The `blueprint <https://blueprints.launchpad.net/devstack>`__ and `bug
-trackers <https://bugs.launchpad.net/devstack>`__ are on Launchpad. It
-should be noted that DevStack generally does not use these as strongly
-as other projects, but we're trying to change that.
-
-The `Gerrit review
-queue <https://review.openstack.org/#/q/project:openstack-dev/devstack,n,z>`__
-is, however, used for all commits except for the text of this website.
-That should also change in the near future.
-
-|
-| **HACKING.rst**
-
-Like most OpenStack projects, DevStack includes a ``HACKING.rst`` file
-that describes the layout, style and conventions of the project. Because
-``HACKING.rst`` is in the main DevStack repo it is considered
-authoritative. Much of the content on this page is taken from there.
-
-|
-| **bashate Formatting**
-
-Around the time of the OpenStack Havana release we added a tool to do
-style checking in DevStack similar to what pep8/flake8 do for Python
-projects. It is still \_very\_ simplistic, focusing mostly on stray
-whitespace to help prevent -1 on reviews that are otherwise acceptable.
-Oddly enough it is called ``bashate``. It will be expanded to enforce
-some of the documentation rules in comments that are used in formatting
-the script pages for devstack.org and possibly even simple code
-formatting. Run it on the entire project with ``./run_tests.sh``.
-
-Code
-====
-
-|
-| **Repo Layout**
-
-The DevStack repo generally keeps all of the primary scripts at the root
-level.
-
-``doc`` - Contains the Sphinx source for the documentation.
-``tools/build_docs.sh`` is used to generate the HTML versions of the
-DevStack scripts.  A complete doc build can be run with ``tox -edocs``.
-
-``exercises`` - Contains the test scripts used to sanity-check and
-demonstrate some OpenStack functions. These scripts know how to exit
-early or skip services that are not enabled.
-
-``extras.d`` - Contains the dispatch scripts called by the hooks in
-``stack.sh``, ``unstack.sh`` and ``clean.sh``. See :doc:`the plugins
-docs <plugins>` for more information.
-
-``files`` - Contains a variety of otherwise lost files used in
-configuring and operating DevStack. This includes templates for
-configuration files and the system dependency information. This is also
-where image files are downloaded and expanded if necessary.
-
-``lib`` - Contains the sub-scripts specific to each project. This is
-where the work of managing a project's services is located. Each
-top-level project (Keystone, Nova, etc) has a file here. Additionally
-there are some for system services and project plugins.
-
-``samples`` - Contains a sample of the local files not included in the
-DevStack repo.
-
-``tests`` - the DevStack test suite is rather sparse, mostly consisting
-of test of specific fragile functions in the ``functions`` and
-``functions-common`` files.
-
-``tools`` - Contains a collection of stand-alone scripts. While these
-may reference the top-level DevStack configuration they can generally be
-run alone. There are also some sub-directories to support specific
-environments such as XenServer.
diff --git a/doc/source/guides/devstack-with-lbaas-v2.rst b/doc/source/guides/devstack-with-lbaas-v2.rst
new file mode 100644
index 0000000..f679783
--- /dev/null
+++ b/doc/source/guides/devstack-with-lbaas-v2.rst
@@ -0,0 +1,99 @@
+Configure Load-Balancer in Kilo
+=================================
+
+The Kilo release of OpenStack will support Version 2 of the neutron load balancer. Until now, using OpenStack `LBaaS V2 <http://docs.openstack.org/api/openstack-network/2.0/content/lbaas_ext.html>`_ has required a good understanding of neutron and LBaaS architecture and several manual steps.
+
+
+Phase 1: Create DevStack + 2 nova instances
+--------------------------------------------
+
+First, set up a VM of your choice with at least 8 GB RAM and 16 GB of disk space, and make sure it is updated. Install git and any other developer tools you find useful.
+
+Install DevStack:
+
+  ::
+
+    git clone https://git.openstack.org/openstack-dev/devstack
+    cd devstack
+
+
+Edit your `local.conf` to look like this:
+
+  ::
+
+    [[local|localrc]]
+    # Load the external LBaaS plugin.
+    enable_plugin neutron-lbaas https://git.openstack.org/openstack/neutron-lbaas
+
+    # ===== BEGIN localrc =====
+    DATABASE_PASSWORD=password
+    ADMIN_PASSWORD=password
+    SERVICE_PASSWORD=password
+    SERVICE_TOKEN=password
+    RABBIT_PASSWORD=password
+    # Enable Logging
+    LOGFILE=$DEST/logs/stack.sh.log
+    VERBOSE=True
+    LOG_COLOR=True
+    SCREEN_LOGDIR=$DEST/logs
+    # Pre-requisite
+    ENABLED_SERVICES=rabbit,mysql,key
+    # Horizon
+    ENABLED_SERVICES+=,horizon
+    # Nova
+    ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch
+    IMAGE_URLS+=",https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img"
+    # Glance
+    ENABLED_SERVICES+=,g-api,g-reg
+    # Neutron
+    ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta
+    # Enable LBaaS V2
+    ENABLED_SERVICES+=,q-lbaasv2
+    # Cinder
+    ENABLED_SERVICES+=,c-api,c-vol,c-sch
+    # Tempest
+    ENABLED_SERVICES+=,tempest
+    # ===== END localrc =====
+
+Run `stack.sh` and do some sanity checks:
+
+  ::
+
+    ./stack.sh
+    . ./openrc
+
+    neutron net-list  # should show public and private networks
+
+Create two nova instances that we can use as test HTTP servers:
+
+  ::
+
+    #create nova instances on private network
+    nova boot --image $(nova image-list | awk '/ cirros-0.3.0-x86_64-disk / {print $2}') --flavor 1 --nic net-id=$(neutron net-list | awk '/ private / {print $2}') node1
+    nova boot --image $(nova image-list | awk '/ cirros-0.3.0-x86_64-disk / {print $2}') --flavor 1 --nic net-id=$(neutron net-list | awk '/ private / {print $2}') node2
+    nova list # should show the nova instances just created
+
+    #add secgroup rule to allow ssh etc..
+    neutron security-group-rule-create default --protocol icmp
+    neutron security-group-rule-create default --protocol tcp --port-range-min 22 --port-range-max 22
+    neutron security-group-rule-create default --protocol tcp --port-range-min 80 --port-range-max 80
+
+Set up a simple web server on each of these instances. SSH into each instance (username 'cirros', password 'cubswin:)') and run:
+
+ ::
+
+    MYIP=$(ifconfig eth0|grep 'inet addr'|awk -F: '{print $2}'| awk '{print $1}')
+    while true; do echo -e "HTTP/1.0 200 OK\r\n\r\nWelcome to $MYIP" | sudo nc -l -p 80 ; done&
+
+Phase 2: Create your load balancers
+------------------------------------
+
+ ::
+
+    neutron lbaas-loadbalancer-create --name lb1 private-subnet
+    neutron lbaas-listener-create --loadbalancer lb1 --protocol HTTP --protocol-port 80 --name listener1
+    neutron lbaas-pool-create --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP --name pool1
+    neutron lbaas-member-create  --subnet private-subnet --address 10.0.0.3 --protocol-port 80 pool1
+    neutron lbaas-member-create  --subnet private-subnet --address 10.0.0.5 --protocol-port 80 pool1
+
+Note that the "10.0.0.3" and "10.0.0.5" in the above commands are the IPs
+of the two nodes (in my test run-through they were actually 10.2 and 10.4).
+The address of the created load balancer is reported as "vip_address" by
+``lbaas-loadbalancer-create``.  A quick test of the load balancer is to
+``curl`` that VIP address, which should alternate between showing the IPs
+of the two nodes.
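+
+For example, assuming a purely hypothetical VIP address of 10.0.0.9::
+
+    curl http://10.0.0.9/   # "Welcome to 10.0.0.3"
+    curl http://10.0.0.9/   # "Welcome to 10.0.0.5"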
diff --git a/doc/source/guides/devstack-with-nested-kvm.rst b/doc/source/guides/devstack-with-nested-kvm.rst
index 58ec3d3..610300b 100644
--- a/doc/source/guides/devstack-with-nested-kvm.rst
+++ b/doc/source/guides/devstack-with-nested-kvm.rst
@@ -129,7 +129,7 @@
     LIBVIRT_TYPE=kvm
 
 
-Once DevStack is configured succesfully, verify if the Nova instances
+Once DevStack is configured successfully, verify if the Nova instances
 are using KVM by noticing the QEMU CLI invoked by Nova is using the
 parameter `accel=kvm`, e.g.:
 
diff --git a/doc/source/guides/single-machine.rst b/doc/source/guides/single-machine.rst
index 70287a9..236ece9 100644
--- a/doc/source/guides/single-machine.rst
+++ b/doc/source/guides/single-machine.rst
@@ -67,7 +67,7 @@
 
 ::
 
-    sudo apt-get install git -y || yum install -y git
+    sudo apt-get install git -y || sudo yum install -y git
     git clone https://git.openstack.org/openstack-dev/devstack
     cd devstack
 
diff --git a/doc/source/hacking.rst b/doc/source/hacking.rst
new file mode 100644
index 0000000..a2bcf4f
--- /dev/null
+++ b/doc/source/hacking.rst
@@ -0,0 +1 @@
+.. include:: ../../HACKING.rst
diff --git a/doc/source/index.rst b/doc/source/index.rst
index cfde991..b701237 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -12,7 +12,7 @@
    plugins
    faq
    changes
-   contributing
+   hacking
 
 Quick Start
 -----------
@@ -68,6 +68,7 @@
    guides/neutron
    guides/devstack-with-nested-kvm
    guides/nova
+   guides/devstack-with-lbaas-v2
 
 All-In-One Single VM
 --------------------
@@ -139,7 +140,7 @@
 Contributing
 ------------
 
-:doc:`Pitching in to make DevStack a better place <contributing>`
+:doc:`Pitching in to make DevStack a better place <hacking>`
 
 Code
 ====
@@ -156,7 +157,6 @@
 * `lib/ceilometer <lib/ceilometer.html>`__
 * `lib/ceph <lib/ceph.html>`__
 * `lib/cinder <lib/cinder.html>`__
-* `lib/config <lib/config.html>`__
 * `lib/database <lib/database.html>`__
 * `lib/dstat <lib/dstat.html>`__
 * `lib/glance <lib/glance.html>`__
@@ -166,7 +166,7 @@
 * `lib/ironic <lib/ironic.html>`__
 * `lib/keystone <lib/keystone.html>`__
 * `lib/ldap <lib/ldap.html>`__
-* `lib/neutron <lib/neutron.html>`__
+* `lib/neutron-legacy <lib/neutron-legacy.html>`__
 * `lib/nova <lib/nova.html>`__
 * `lib/oslo <lib/oslo.html>`__
 * `lib/rpc\_backend <lib/rpc_backend.html>`__
@@ -188,6 +188,12 @@
 * `extras.d/70-zaqar.sh <extras.d/70-zaqar.sh.html>`__
 * `extras.d/80-tempest.sh <extras.d/80-tempest.sh.html>`__
 
+* `inc/ini-config <inc/ini-config.html>`__
+* `inc/meta-config <inc/meta-config.html>`__
+* `inc/python <inc/python.html>`__
+
+* `pkg/elasticsearch.sh <pkg/elasticsearch.sh.html>`__
+
 Configuration
 -------------
 
diff --git a/doc/source/plugins.rst b/doc/source/plugins.rst
index 5d6d3f1..5a61063 100644
--- a/doc/source/plugins.rst
+++ b/doc/source/plugins.rst
@@ -136,6 +136,31 @@
 
   enable_plugin ec2api git://git.openstack.org/stackforge/ec2api
 
+Plugins for gate jobs
+---------------------
+
+All OpenStack plugins that wish to be used as gate jobs need to exist
+in OpenStack's gerrit. Both ``openstack`` namespace and ``stackforge``
+namespace are fine. This allows testing of the plugin as well as
+provides network isolation against upstream git repository failures
+(which we see often enough to be an issue).
+
+Ideally plugins will be implemented as a ``devstack`` directory inside
+the project they are testing. For example, the stackforge/ec2-api
+project has its plugin support in its own tree.
+
+In cases where there is no "project tree" per se (for example, when
+integrating a backend storage configuration such as Ceph or GlusterFS),
+it is also acceptable to build a dedicated
+``stackforge/devstack-plugin-FOO`` project to house the plugin.
+
+Note that jobs must not clone repositories during the test itself.
+Instead, a test must list its repository in the ``PROJECTS`` variable for
+`devstack-gate
+<https://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/devstack-vm-gate-wrap.sh>`_
+so that the repository is available to the test.  Further information
+is provided in the project creator's guide.
+
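+A job definition would make a (hypothetical) plugin repository available
+with something along the lines of::
+
+  export PROJECTS="stackforge/devstack-plugin-foo $PROJECTS"
+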
 Hypervisor
 ==========
 
@@ -154,3 +179,24 @@
 -  ``start_nova_hypervisor`` - start any external services
 -  ``stop_nova_hypervisor`` - stop any external services
 -  ``cleanup_nova_hypervisor`` - remove transient data and cache
+
+System Packages
+===============
+
+DevStack provides a framework for getting packages installed at an early
+phase of its execution. These packages may be defined in a plugin as files
+that contain newline-separated lists of packages required by the plugin.
+
+Supported packaging systems include apt and yum across multiple distributions.
+To enable a plugin to hook into this and install package dependencies, packages
+may be listed at the following locations in the top-level of the plugin
+repository:
+
+- ``./devstack/files/debs/$plugin_name`` - Packages to install when running
+  on Ubuntu, Debian or Linux Mint.
+
+- ``./devstack/files/rpms/$plugin_name`` - Packages to install when running
+  on Red Hat, Fedora, CentOS or XenServer.
+
+- ``./devstack/files/rpms-suse/$plugin_name`` - Packages to install when
+  running on SUSE Linux or openSUSE.
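+
+For illustration, a hypothetical plugin named ``foo`` might ship
+``./devstack/files/debs/foo`` containing::
+
+  foo-agent
+  libfoo-dev
+  foo-dashboard # dist:trusty
+  foo-server # NOPRIME
+
+The same ``# NOPRIME`` and ``# dist:DISTRO`` annotations used by the main
+DevStack prerequisite files apply to these plugin files as well.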
diff --git a/exercises/boot_from_volume.sh b/exercises/boot_from_volume.sh
index a2ae275..aa34830 100755
--- a/exercises/boot_from_volume.sh
+++ b/exercises/boot_from_volume.sh
@@ -32,7 +32,7 @@
 
 # Import project functions
 source $TOP_DIR/lib/cinder
-source $TOP_DIR/lib/neutron
+source $TOP_DIR/lib/neutron-legacy
 
 # Import configuration
 source $TOP_DIR/openrc
diff --git a/exercises/euca.sh b/exercises/euca.sh
index f9c4752..df5e233 100755
--- a/exercises/euca.sh
+++ b/exercises/euca.sh
@@ -37,7 +37,7 @@
 source $TOP_DIR/exerciserc
 
 # Import project functions
-source $TOP_DIR/lib/neutron
+source $TOP_DIR/lib/neutron-legacy
 
 # If nova api is not enabled we exit with exitcode 55 so that
 # the exercise is skipped
diff --git a/exercises/floating_ips.sh b/exercises/floating_ips.sh
index 57f48e0..59444e1 100755
--- a/exercises/floating_ips.sh
+++ b/exercises/floating_ips.sh
@@ -31,7 +31,7 @@
 source $TOP_DIR/openrc
 
 # Import project functions
-source $TOP_DIR/lib/neutron
+source $TOP_DIR/lib/neutron-legacy
 
 # Import exercise configuration
 source $TOP_DIR/exerciserc
diff --git a/exercises/horizon.sh b/exercises/horizon.sh
deleted file mode 100755
index 4020580..0000000
--- a/exercises/horizon.sh
+++ /dev/null
@@ -1,45 +0,0 @@
-#!/usr/bin/env bash
-
-# **horizon.sh**
-
-# Sanity check that horizon started if enabled
-
-echo "*********************************************************************"
-echo "Begin DevStack Exercise: $0"
-echo "*********************************************************************"
-
-# This script exits on an error so that errors don't compound and you see
-# only the first error that occurred.
-set -o errexit
-
-# Print the commands being run so that we can see the command that triggers
-# an error.  It is also useful for following allowing as the install occurs.
-set -o xtrace
-
-
-# Settings
-# ========
-
-# Keep track of the current directory
-EXERCISE_DIR=$(cd $(dirname "$0") && pwd)
-TOP_DIR=$(cd $EXERCISE_DIR/..; pwd)
-
-# Import common functions
-source $TOP_DIR/functions
-
-# Import configuration
-source $TOP_DIR/openrc
-
-# Import exercise configuration
-source $TOP_DIR/exerciserc
-
-is_service_enabled horizon || exit 55
-
-# can we get the front page
-$CURL_GET http://$SERVICE_HOST 2>/dev/null | grep -q '<h3.*>Log In</h3>' || die $LINENO "Horizon front page not functioning!"
-
-set +o xtrace
-echo "*********************************************************************"
-echo "SUCCESS: End DevStack Exercise: $0"
-echo "*********************************************************************"
-
diff --git a/exercises/neutron-adv-test.sh b/exercises/neutron-adv-test.sh
index 5b3281b..9230587 100755
--- a/exercises/neutron-adv-test.sh
+++ b/exercises/neutron-adv-test.sh
@@ -49,7 +49,7 @@
 source $TOP_DIR/openrc
 
 # Import neutron functions
-source $TOP_DIR/lib/neutron
+source $TOP_DIR/lib/neutron-legacy
 
 # If neutron is not enabled we exit with exitcode 55, which means exercise is skipped.
 neutron_plugin_check_adv_test_requirements || exit 55
diff --git a/exercises/volumes.sh b/exercises/volumes.sh
index 504fba1..3ac2016 100755
--- a/exercises/volumes.sh
+++ b/exercises/volumes.sh
@@ -32,7 +32,7 @@
 
 # Import project functions
 source $TOP_DIR/lib/cinder
-source $TOP_DIR/lib/neutron
+source $TOP_DIR/lib/neutron-legacy
 
 # Import exercise configuration
 source $TOP_DIR/exerciserc
diff --git a/files/debs/general b/files/debs/general
index 84d4302..5f10a20 100644
--- a/files/debs/general
+++ b/files/debs/general
@@ -6,7 +6,7 @@
 gcc
 g++
 git
-graphviz # testonly - docs
+graphviz # needed for docs
 lsof # useful when debugging
 openssh-server
 openssl
diff --git a/files/debs/glance b/files/debs/glance
index 9fda6a6..37877a8 100644
--- a/files/debs/glance
+++ b/files/debs/glance
@@ -1,6 +1,6 @@
-libmysqlclient-dev  # testonly
-libpq-dev           # testonly
-libssl-dev          # testonly
+libmysqlclient-dev
+libpq-dev
+libssl-dev
 libxml2-dev
-libxslt1-dev        # testonly
-zlib1g-dev           # testonly
+libxslt1-dev
+zlib1g-dev
diff --git a/files/debs/neutron b/files/debs/neutron
index aa3d709..2d69a71 100644
--- a/files/debs/neutron
+++ b/files/debs/neutron
@@ -1,12 +1,12 @@
-acl     # testonly
+acl
 ebtables
 iptables
 iputils-ping
 iputils-arping
-libmysqlclient-dev  # testonly
+libmysqlclient-dev
 mysql-server #NOPRIME
 sudo
-postgresql-server-dev-all       # testonly
+postgresql-server-dev-all
 python-mysqldb
 python-mysql.connector
 python-qpid # NOPRIME
diff --git a/files/debs/nova b/files/debs/nova
index 0c31385..9d9acde 100644
--- a/files/debs/nova
+++ b/files/debs/nova
@@ -4,7 +4,7 @@
 kpartx
 parted
 iputils-arping
-libmysqlclient-dev  # testonly
+libmysqlclient-dev
 mysql-server # NOPRIME
 python-mysqldb
 python-mysql.connector
diff --git a/files/debs/trema b/files/debs/trema
deleted file mode 100644
index f685ca5..0000000
--- a/files/debs/trema
+++ /dev/null
@@ -1,15 +0,0 @@
-# Trema
-make
-ruby1.8
-rubygems1.8
-ruby1.8-dev
-libpcap-dev
-libsqlite3-dev
-libglib2.0-dev
-
-# Sliceable Switch
-sqlite3
-libdbi-perl
-libdbd-sqlite3-perl
-apache2
-libjson-perl
diff --git a/files/debs/trove b/files/debs/trove
index 09dcee8..96f8f29 100644
--- a/files/debs/trove
+++ b/files/debs/trove
@@ -1 +1 @@
-libxslt1-dev   # testonly
+libxslt1-dev
diff --git a/files/rpms-suse/general b/files/rpms-suse/general
index 7f4bbfb..2219426 100644
--- a/files/rpms-suse/general
+++ b/files/rpms-suse/general
@@ -6,7 +6,7 @@
 gcc
 gcc-c++
 git-core
-graphviz # testonly - docs
+graphviz # docs
 iputils
 libopenssl-devel # to rebuild pyOpenSSL if needed
 lsof # useful when debugging
@@ -16,7 +16,6 @@
 psmisc
 python-cmd2 # dist:opensuse-12.3
 python-pylint
-python-unittest2
 screen
 tar
 tcpdump
diff --git a/files/rpms-suse/neutron b/files/rpms-suse/neutron
index 66d6e4c..d278363 100644
--- a/files/rpms-suse/neutron
+++ b/files/rpms-suse/neutron
@@ -1,11 +1,11 @@
-acl     # testonly
+acl
 dnsmasq
 dnsmasq-utils # dist:opensuse-12.3,opensuse-13.1
 ebtables
 iptables
 iputils
 mariadb # NOPRIME
-postgresql-devel        # testonly
+postgresql-devel
 python-eventlet
 python-greenlet
 python-iso8601
diff --git a/files/rpms-suse/trove b/files/rpms-suse/trove
index 09dcee8..96f8f29 100644
--- a/files/rpms-suse/trove
+++ b/files/rpms-suse/trove
@@ -1 +1 @@
-libxslt1-dev   # testonly
+libxslt1-dev
diff --git a/files/rpms/ceilometer-collector b/files/rpms/ceilometer-collector
index 9cf580d..b139ed2 100644
--- a/files/rpms/ceilometer-collector
+++ b/files/rpms/ceilometer-collector
@@ -1,4 +1,3 @@
 selinux-policy-targeted
 mongodb-server #NOPRIME
-pymongo # NOPRIME
 mongodb # NOPRIME
diff --git a/files/rpms/cinder b/files/rpms/cinder
index 082a35a..9f1359f 100644
--- a/files/rpms/cinder
+++ b/files/rpms/cinder
@@ -3,4 +3,3 @@
 qemu-img
 postgresql-devel
 iscsi-initiator-utils
-python-lxml
diff --git a/files/rpms/general b/files/rpms/general
index cf40632..d74ecc6 100644
--- a/files/rpms/general
+++ b/files/rpms/general
@@ -5,7 +5,7 @@
 gcc
 gcc-c++
 git-core
-graphviz # testonly - docs
+graphviz # needed only for docs
 openssh-server
 openssl
 openssl-devel # to rebuild pyOpenSSL if needed
@@ -14,7 +14,6 @@
 libxslt-devel
 psmisc
 pylint
-python-unittest2
 python-devel
 screen
 tar
diff --git a/files/rpms/glance b/files/rpms/glance
index a09b669..479194f 100644
--- a/files/rpms/glance
+++ b/files/rpms/glance
@@ -1,14 +1,6 @@
-libxml2-devel       # testonly
-libxslt-devel       # testonly
-mysql-devel         # testonly
-openssl-devel       # testonly
-postgresql-devel    # testonly
-python-argparse
-python-eventlet
-python-greenlet
-python-lxml
-python-paste-deploy
-python-routes
-python-sqlalchemy
-pyxattr
-zlib-devel          # testonly
+libxml2-devel
+libxslt-devel
+mysql-devel
+openssl-devel
+postgresql-devel
+zlib-devel
diff --git a/files/rpms/horizon b/files/rpms/horizon
index 585c36c..8d7f037 100644
--- a/files/rpms/horizon
+++ b/files/rpms/horizon
@@ -2,20 +2,5 @@
 httpd # NOPRIME
 mod_wsgi  # NOPRIME
 pylint
-python-anyjson
-python-BeautifulSoup
-python-coverage
-python-dateutil
-python-eventlet
-python-greenlet
-python-httplib2
-python-migrate
-python-mox
-python-nose
-python-paste
-python-paste-deploy
-python-routes
-python-sqlalchemy
-python-webob
 pyxattr
 pcre-devel  # pyScss
diff --git a/files/rpms/ironic b/files/rpms/ironic
index 0a46314..2bf8bb3 100644
--- a/files/rpms/ironic
+++ b/files/rpms/ironic
@@ -8,7 +8,6 @@
 net-tools
 openssh-clients
 openvswitch
-python-libguestfs
 sgabios
 syslinux
 tftp-server
diff --git a/files/rpms/keystone b/files/rpms/keystone
index 45492e0..8074119 100644
--- a/files/rpms/keystone
+++ b/files/rpms/keystone
@@ -1,14 +1,4 @@
 MySQL-python
-python-greenlet
 libxslt-devel
-python-lxml
-python-paste
-python-paste-deploy
-python-paste-script
-python-routes
-python-sqlalchemy
-python-webob
 sqlite
 mod_ssl
-
-# Deps installed via pip for RHEL
diff --git a/files/rpms/ldap b/files/rpms/ldap
index 2f7ab5d..d89c4cf 100644
--- a/files/rpms/ldap
+++ b/files/rpms/ldap
@@ -1,3 +1,2 @@
 openldap-servers
 openldap-clients
-python-ldap
diff --git a/files/rpms/n-api b/files/rpms/n-api
index 6f59e60..0928cd5 100644
--- a/files/rpms/n-api
+++ b/files/rpms/n-api
@@ -1,2 +1 @@
-python-dateutil
 fping
diff --git a/files/rpms/n-cpu b/files/rpms/n-cpu
index 32b1546..c1a8e8f 100644
--- a/files/rpms/n-cpu
+++ b/files/rpms/n-cpu
@@ -4,4 +4,4 @@
 genisoimage
 sysfsutils
 sg3_utils
-python-libguestfs # NOPRIME
+
diff --git a/files/rpms/neutron b/files/rpms/neutron
index d11dab7..8292e7b 100644
--- a/files/rpms/neutron
+++ b/files/rpms/neutron
@@ -1,24 +1,15 @@
 MySQL-python
-acl     # testonly
+acl
 dnsmasq # for q-dhcp
 dnsmasq-utils # for dhcp_release
 ebtables
 iptables
 iputils
 mysql-connector-python
-mysql-devel  # testonly
+mysql-devel
 mysql-server # NOPRIME
 openvswitch # NOPRIME
-postgresql-devel        # testonly
-python-eventlet
-python-greenlet
-python-iso8601
-python-paste
-python-paste-deploy
-python-qpid # NOPRIME
-python-routes
-python-sqlalchemy
-python-suds
+postgresql-devel
 rabbitmq-server # NOPRIME
 qpid-cpp-server        # NOPRIME
 sqlite
diff --git a/files/rpms/nova b/files/rpms/nova
index 557de90..ebd6674 100644
--- a/files/rpms/nova
+++ b/files/rpms/nova
@@ -17,26 +17,10 @@
 numpy # needed by websockify for spice console
 m2crypto
 mysql-connector-python
-mysql-devel  # testonly
+mysql-devel
 mysql-server # NOPRIME
 parted
 polkit
-python-cheetah
-python-eventlet
-python-feedparser
-python-greenlet
-python-iso8601
-python-lockfile
-python-migrate
-python-mox
-python-paramiko
-python-paste
-python-paste-deploy
-python-qpid # NOPRIME
-python-routes
-python-sqlalchemy
-python-suds
-python-tempita
 rabbitmq-server # NOPRIME
 qpid-cpp-server # NOPRIME
 sqlite
diff --git a/files/rpms/qpid b/files/rpms/qpid
index c5e2699..41dd2f6 100644
--- a/files/rpms/qpid
+++ b/files/rpms/qpid
@@ -1,4 +1,3 @@
 qpid-proton-c-devel # NOPRIME
-python-qpid-proton # NOPRIME
 cyrus-sasl-lib # NOPRIME
 cyrus-sasl-plain # NOPRIME
diff --git a/files/rpms/swift b/files/rpms/swift
index 0fcdb0f..1bf57cc 100644
--- a/files/rpms/swift
+++ b/files/rpms/swift
@@ -1,15 +1,7 @@
 curl
 memcached
-python-configobj
-python-coverage
-python-eventlet
-python-greenlet
-python-netifaces
-python-nose
-python-paste-deploy
-python-simplejson
-python-webob
 pyxattr
 sqlite
 xfsprogs
 xinetd
+rsync-daemon # dist:f22,f23
diff --git a/files/rpms/trove b/files/rpms/trove
index c5cbdea..e7bbd43 100644
--- a/files/rpms/trove
+++ b/files/rpms/trove
@@ -1 +1 @@
-libxslt-devel   # testonly
+libxslt-devel
diff --git a/files/rpms/zaqar-server b/files/rpms/zaqar-server
index 69e8bfa..78806fb 100644
--- a/files/rpms/zaqar-server
+++ b/files/rpms/zaqar-server
@@ -1,5 +1,5 @@
 selinux-policy-targeted
+mongodb
 mongodb-server
 pymongo
 redis # NOPRIME
-python-redis # NOPRIME
diff --git a/files/venv-requirements.txt b/files/venv-requirements.txt
index e473a2f..73d0579 100644
--- a/files/venv-requirements.txt
+++ b/files/venv-requirements.txt
@@ -1,10 +1,11 @@
+# Once we can prebuild wheels before a devstack run, uncomment the skipped libraries
 cryptography
-lxml
+# lxml # still installed from packages
 MySQL-python
-netifaces
+# netifaces # still installed from packages
 #numpy    # slowest wheel by far, stop building until we are actually using the output
 posix-ipc
-psycopg2
+# psycopg2 # still installed from packages
 pycrypto
 pyOpenSSL
 PyYAML
diff --git a/functions b/functions
index 79b2b37..9adbfe7 100644
--- a/functions
+++ b/functions
@@ -13,6 +13,7 @@
 # Include the common functions
 FUNC_DIR=$(cd $(dirname "${BASH_SOURCE:-$0}") && pwd)
 source ${FUNC_DIR}/functions-common
+source ${FUNC_DIR}/inc/ini-config
 source ${FUNC_DIR}/inc/python
 
 # Save trace setting
diff --git a/functions-common b/functions-common
index df69cba..48e400d 100644
--- a/functions-common
+++ b/functions-common
@@ -43,197 +43,6 @@
 
 TRACK_DEPENDS=${TRACK_DEPENDS:-False}
 
-# Config Functions
-# ================
-
-# Append a new option in an ini file without replacing the old value
-# iniadd config-file section option value1 value2 value3 ...
-function iniadd {
-    local xtrace=$(set +o | grep xtrace)
-    set +o xtrace
-    local file=$1
-    local section=$2
-    local option=$3
-    shift 3
-
-    local values="$(iniget_multiline $file $section $option) $@"
-    iniset_multiline $file $section $option $values
-    $xtrace
-}
-
-# Comment an option in an INI file
-# inicomment config-file section option
-function inicomment {
-    local xtrace=$(set +o | grep xtrace)
-    set +o xtrace
-    local file=$1
-    local section=$2
-    local option=$3
-
-    sed -i -e "/^\[$section\]/,/^\[.*\]/ s|^\($option[ \t]*=.*$\)|#\1|" "$file"
-    $xtrace
-}
-
-# Get an option from an INI file
-# iniget config-file section option
-function iniget {
-    local xtrace=$(set +o | grep xtrace)
-    set +o xtrace
-    local file=$1
-    local section=$2
-    local option=$3
-    local line
-
-    line=$(sed -ne "/^\[$section\]/,/^\[.*\]/ { /^$option[ \t]*=/ p; }" "$file")
-    echo ${line#*=}
-    $xtrace
-}
-
-# Get a multiple line option from an INI file
-# iniget_multiline config-file section option
-function iniget_multiline {
-    local xtrace=$(set +o | grep xtrace)
-    set +o xtrace
-    local file=$1
-    local section=$2
-    local option=$3
-    local values
-
-    values=$(sed -ne "/^\[$section\]/,/^\[.*\]/ { s/^$option[ \t]*=[ \t]*//gp; }" "$file")
-    echo ${values}
-    $xtrace
-}
-
-# Determinate is the given option present in the INI file
-# ini_has_option config-file section option
-function ini_has_option {
-    local xtrace=$(set +o | grep xtrace)
-    set +o xtrace
-    local file=$1
-    local section=$2
-    local option=$3
-    local line
-
-    line=$(sed -ne "/^\[$section\]/,/^\[.*\]/ { /^$option[ \t]*=/ p; }" "$file")
-    $xtrace
-    [ -n "$line" ]
-}
-
-# Add another config line for a multi-line option.
-# It's normally called after iniset of the same option and assumes
-# that the section already exists.
-#
-# Note that iniset_multiline requires all the 'lines' to be supplied
-# in the argument list. Doing that will cause incorrect configuration
-# if spaces are used in the config values.
-#
-# iniadd_literal config-file section option value
-function iniadd_literal {
-    local xtrace=$(set +o | grep xtrace)
-    set +o xtrace
-    local file=$1
-    local section=$2
-    local option=$3
-    local value=$4
-
-    [[ -z $section || -z $option ]] && return
-
-    # Add it
-    sed -i -e "/^\[$section\]/ a\\
-$option = $value
-" "$file"
-
-    $xtrace
-}
-
-function inidelete {
-    local xtrace=$(set +o | grep xtrace)
-    set +o xtrace
-    local file=$1
-    local section=$2
-    local option=$3
-
-    [[ -z $section || -z $option ]] && return
-
-    # Remove old values
-    sed -i -e "/^\[$section\]/,/^\[.*\]/ { /^$option[ \t]*=/ d; }" "$file"
-
-    $xtrace
-}
-
-# Set an option in an INI file
-# iniset config-file section option value
-function iniset {
-    local xtrace=$(set +o | grep xtrace)
-    set +o xtrace
-    local file=$1
-    local section=$2
-    local option=$3
-    local value=$4
-
-    [[ -z $section || -z $option ]] && return
-
-    if ! grep -q "^\[$section\]" "$file" 2>/dev/null; then
-        # Add section at the end
-        echo -e "\n[$section]" >>"$file"
-    fi
-    if ! ini_has_option "$file" "$section" "$option"; then
-        # Add it
-        sed -i -e "/^\[$section\]/ a\\
-$option = $value
-" "$file"
-    else
-        local sep=$(echo -ne "\x01")
-        # Replace it
-        sed -i -e '/^\['${section}'\]/,/^\[.*\]/ s'${sep}'^\('${option}'[ \t]*=[ \t]*\).*$'${sep}'\1'"${value}"${sep} "$file"
-    fi
-    $xtrace
-}
-
-# Set a multiple line option in an INI file
-# iniset_multiline config-file section option value1 value2 valu3 ...
-function iniset_multiline {
-    local xtrace=$(set +o | grep xtrace)
-    set +o xtrace
-    local file=$1
-    local section=$2
-    local option=$3
-
-    shift 3
-    local values
-    for v in $@; do
-        # The later sed command inserts each new value in the line next to
-        # the section identifier, which causes the values to be inserted in
-        # the reverse order. Do a reverse here to keep the original order.
-        values="$v ${values}"
-    done
-    if ! grep -q "^\[$section\]" "$file"; then
-        # Add section at the end
-        echo -e "\n[$section]" >>"$file"
-    else
-        # Remove old values
-        sed -i -e "/^\[$section\]/,/^\[.*\]/ { /^$option[ \t]*=/ d; }" "$file"
-    fi
-    # Add new ones
-    for v in $values; do
-        sed -i -e "/^\[$section\]/ a\\
-$option = $v
-" "$file"
-    done
-    $xtrace
-}
-
-# Uncomment an option in an INI file
-# iniuncomment config-file section option
-function iniuncomment {
-    local xtrace=$(set +o | grep xtrace)
-    set +o xtrace
-    local file=$1
-    local section=$2
-    local option=$3
-    sed -i -e "/^\[$section\]/,/^\[.*\]/ s|[^ \t]*#[ \t]*\($option[ \t]*=.*$\)|\1|" "$file"
-    $xtrace
-}
 
 # Normalize config values to True or False
 # Accepts as False: 0 no No NO false False FALSE
@@ -253,14 +62,6 @@
     $xtrace
 }
 
-function isset {
-    nounset=$(set +o | grep nounset)
-    set +o nounset
-    [[ -n "${!1+x}" ]]
-    result=$?
-    $nounset
-    return $result
-}
 
 # Control Functions
 # =================
@@ -445,6 +246,7 @@
         # CentOS Linux release 6.0 (Final)
         # Fedora release 16 (Verne)
         # XenServer release 6.2.0-70446c (xenenterprise)
+        # Oracle Linux release 7
         os_CODENAME=""
         for r in "Red Hat" CentOS Fedora XenServer; do
             os_VENDOR=$r
@@ -458,6 +260,9 @@
             fi
             os_VENDOR=""
         done
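+        # Oracle Linux ships a Red Hat-style /etc/redhat-release, so the loop
+        # above identifies it as "Red Hat"; /etc/oracle-release tells them apart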
+        if [ "$os_VENDOR" = "Red Hat" ] && [[ -r /etc/oracle-release ]]; then
+            os_VENDOR=OracleLinux
+        fi
         os_PACKAGE="rpm"
     elif [[ -r /etc/SuSE-release ]]; then
         for r in openSUSE "SUSE Linux"; do
@@ -509,7 +314,7 @@
         fi
     elif [[ "$os_VENDOR" =~ (Red Hat) || \
         "$os_VENDOR" =~ (CentOS) || \
-        "$os_VENDOR" =~ (OracleServer) ]]; then
+        "$os_VENDOR" =~ (OracleLinux) ]]; then
         # Drop the . release as we assume it's compatible
         DISTRO="rhel${os_RELEASE::1}"
     elif [[ "$os_VENDOR" =~ (XenServer) ]]; then
@@ -527,6 +332,17 @@
     [[ "$(uname -m)" == "$1" ]]
 }
 
+# Determine if current distribution is an Oracle distribution
+# is_oraclelinux
+function is_oraclelinux {
+    if [[ -z "$os_VENDOR" ]]; then
+        GetOSVersion
+    fi
+
+    [ "$os_VENDOR" = "OracleLinux" ]
+}
+
+
 # Determine if current distribution is a Fedora-based distribution
 # (Fedora, RHEL, CentOS, etc).
 # is_fedora
@@ -536,7 +352,7 @@
     fi
 
     [ "$os_VENDOR" = "Fedora" ] || [ "$os_VENDOR" = "Red Hat" ] || \
-        [ "$os_VENDOR" = "CentOS" ] || [ "$os_VENDOR" = "OracleServer" ]
+        [ "$os_VENDOR" = "CentOS" ] || [ "$os_VENDOR" = "OracleLinux" ]
 }
 
 
@@ -741,11 +557,11 @@
     local host_ip_iface=$3
     local host_ip=$4
 
-    # Find the interface used for the default route
-    host_ip_iface=${host_ip_iface:-$(ip route | sed -n '/^default/{ s/.*dev \(\w\+\)\s\+.*/\1/; p; }' | head -1)}
     # Search for an IP unless an explicit is set by ``HOST_IP`` environment variable
     if [ -z "$host_ip" -o "$host_ip" == "dhcp" ]; then
         host_ip=""
+        # Find the interface used for the default route
+        host_ip_iface=${host_ip_iface:-$(ip route | awk '/default/ {print $5}' | head -1)}
         local host_ips=$(LC_ALL=C ip -f inet addr show ${host_ip_iface} | awk '/inet/ {split($2,parts,"/");  print parts[1]}')
         local ip
         for ip in $host_ips; do
@@ -787,6 +603,28 @@
     done
 }
 
+# install default policy
+# copy over a default policy.json and policy.d for projects
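+# ``install_default_policy project`` -- expects ``<PROJECT>_DIR`` and
+# ``<PROJECT>_CONF_DIR`` to already be set for the named project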
+function install_default_policy {
+    local project=$1
+    local project_uc=$(echo $1|tr a-z A-Z)
+    local conf_dir="${project_uc}_CONF_DIR"
+    # eval conf dir to get the variable
+    conf_dir="${!conf_dir}"
+    local project_dir="${project_uc}_DIR"
+    # eval project dir to get the variable
+    project_dir="${!project_dir}"
+    local sample_conf_dir="${project_dir}/etc/${project}"
+    local sample_policy_dir="${project_dir}/etc/${project}/policy.d"
+
+    # first copy any policy.json
+    cp -p $sample_conf_dir/policy.json $conf_dir
+    # then optionally copy over policy.d
+    if [[ -d $sample_policy_dir ]]; then
+        cp -r $sample_policy_dir $conf_dir/policy.d
+    fi
+}
+
 # Add a policy to a policy.json file
 # Do nothing if the policy already exists
 # ``policy_add policy_file policy_name policy_permissions``
@@ -973,13 +811,18 @@
 
 # _get_package_dir
 function _get_package_dir {
+    local base_dir=$1
     local pkg_dir
+
+    if [[ -z "$base_dir" ]]; then
+        base_dir=$FILES
+    fi
     if is_ubuntu; then
-        pkg_dir=$FILES/debs
+        pkg_dir=$base_dir/debs
     elif is_fedora; then
-        pkg_dir=$FILES/rpms
+        pkg_dir=$base_dir/rpms
     elif is_suse; then
-        pkg_dir=$FILES/rpms-suse
+        pkg_dir=$base_dir/rpms-suse
     else
         exit_distro_not_supported "list of packages"
     fi
@@ -1005,84 +848,14 @@
         apt-get --option "Dpkg::Options::=--force-confold" --assume-yes "$@"
 }
 
-# get_packages() collects a list of package names of any type from the
-# prerequisite files in ``files/{debs|rpms}``.  The list is intended
-# to be passed to a package installer such as apt or yum.
-#
-# Only packages required for the services in 1st argument will be
-# included.  Two bits of metadata are recognized in the prerequisite files:
-#
-# - ``# NOPRIME`` defers installation to be performed later in `stack.sh`
-# - ``# dist:DISTRO`` or ``dist:DISTRO1,DISTRO2`` limits the selection
-#   of the package to the distros listed.  The distro names are case insensitive.
-function get_packages {
-    local xtrace=$(set +o | grep xtrace)
-    set +o xtrace
-    local services=$@
-    local package_dir=$(_get_package_dir)
-    local file_to_parse=""
-    local service=""
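+# Helper shared by get_packages() and get_plugin_packages(): parse the given
+# prerequisite files and echo the package names they contain, honoring the
+# ``# NOPRIME`` and ``# dist:DISTRO`` annotations.
+# _parse_package_files file1 [file2 ...]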
+function _parse_package_files {
+    local files_to_parse=$@
 
-    INSTALL_TESTONLY_PACKAGES=$(trueorfalse False INSTALL_TESTONLY_PACKAGES)
-
-    if [[ -z "$package_dir" ]]; then
-        echo "No package directory supplied"
-        return 1
-    fi
     if [[ -z "$DISTRO" ]]; then
         GetDistro
     fi
-    for service in ${services//,/ }; do
-        # Allow individual services to specify dependencies
-        if [[ -e ${package_dir}/${service} ]]; then
-            file_to_parse="${file_to_parse} $service"
-        fi
-        # NOTE(sdague) n-api needs glance for now because that's where
-        # glance client is
-        if [[ $service == n-api ]]; then
-            if [[ ! $file_to_parse =~ nova ]]; then
-                file_to_parse="${file_to_parse} nova"
-            fi
-            if [[ ! $file_to_parse =~ glance ]]; then
-                file_to_parse="${file_to_parse} glance"
-            fi
-        elif [[ $service == c-* ]]; then
-            if [[ ! $file_to_parse =~ cinder ]]; then
-                file_to_parse="${file_to_parse} cinder"
-            fi
-        elif [[ $service == ceilometer-* ]]; then
-            if [[ ! $file_to_parse =~ ceilometer ]]; then
-                file_to_parse="${file_to_parse} ceilometer"
-            fi
-        elif [[ $service == s-* ]]; then
-            if [[ ! $file_to_parse =~ swift ]]; then
-                file_to_parse="${file_to_parse} swift"
-            fi
-        elif [[ $service == n-* ]]; then
-            if [[ ! $file_to_parse =~ nova ]]; then
-                file_to_parse="${file_to_parse} nova"
-            fi
-        elif [[ $service == g-* ]]; then
-            if [[ ! $file_to_parse =~ glance ]]; then
-                file_to_parse="${file_to_parse} glance"
-            fi
-        elif [[ $service == key* ]]; then
-            if [[ ! $file_to_parse =~ keystone ]]; then
-                file_to_parse="${file_to_parse} keystone"
-            fi
-        elif [[ $service == q-* ]]; then
-            if [[ ! $file_to_parse =~ neutron ]]; then
-                file_to_parse="${file_to_parse} neutron"
-            fi
-        elif [[ $service == ir-* ]]; then
-            if [[ ! $file_to_parse =~ ironic ]]; then
-                file_to_parse="${file_to_parse} ironic"
-            fi
-        fi
-    done
 
-    for file in ${file_to_parse}; do
-        local fname=${package_dir}/${file}
+    for fname in ${files_to_parse}; do
         local OIFS line package distros distro
         [[ -e $fname ]] || continue
 
@@ -1110,22 +883,108 @@
                 fi
             fi
 
-            # Look for # testonly in comment
-            if [[ $line =~ (.*)#.*testonly.* ]]; then
-                package=${BASH_REMATCH[1]}
-                # Are we installing test packages? (test for the default value)
-                if [[ $INSTALL_TESTONLY_PACKAGES = "False" ]]; then
-                    # If not installing test packages the skip this package
-                    inst_pkg=0
-                fi
-            fi
-
             if [[ $inst_pkg = 1 ]]; then
                 echo $package
             fi
         done
         IFS=$OIFS
     done
+}
+
+# get_packages() collects a list of package names of any type from the
+# prerequisite files in ``files/{debs|rpms}``.  The list is intended
+# to be passed to a package installer such as apt or yum.
+#
+# Only packages required for the services in 1st argument will be
+# included.  Two bits of metadata are recognized in the prerequisite files:
+#
+# - ``# NOPRIME`` defers installation to be performed later in `stack.sh`
+# - ``# dist:DISTRO`` or ``dist:DISTRO1,DISTRO2`` limits the selection
+#   of the package to the distros listed.  The distro names are case insensitive.
+function get_packages {
+    local xtrace=$(set +o | grep xtrace)
+    set +o xtrace
+    local services=$@
+    local package_dir=$(_get_package_dir)
+    local file_to_parse=""
+    local service=""
+
+    INSTALL_TESTONLY_PACKAGES=$(trueorfalse False INSTALL_TESTONLY_PACKAGES)
+
+    if [[ -z "$package_dir" ]]; then
+        echo "No package directory supplied"
+        return 1
+    fi
+    for service in ${services//,/ }; do
+        # Allow individual services to specify dependencies
+        if [[ -e ${package_dir}/${service} ]]; then
+            file_to_parse="${file_to_parse} ${package_dir}/${service}"
+        fi
+        # NOTE(sdague) n-api needs glance for now because that's where
+        # glance client is
+        if [[ $service == n-api ]]; then
+            if [[ ! $file_to_parse =~ $package_dir/nova ]]; then
+                file_to_parse="${file_to_parse} ${package_dir}/nova"
+            fi
+            if [[ ! $file_to_parse =~ $package_dir/glance ]]; then
+                file_to_parse="${file_to_parse} ${package_dir}/glance"
+            fi
+        elif [[ $service == c-* ]]; then
+            if [[ ! $file_to_parse =~ $package_dir/cinder ]]; then
+                file_to_parse="${file_to_parse} ${package_dir}/cinder"
+            fi
+        elif [[ $service == ceilometer-* ]]; then
+            if [[ ! $file_to_parse =~ $package_dir/ceilometer ]]; then
+                file_to_parse="${file_to_parse} ${package_dir}/ceilometer"
+            fi
+        elif [[ $service == s-* ]]; then
+            if [[ ! $file_to_parse =~ $package_dir/swift ]]; then
+                file_to_parse="${file_to_parse} ${package_dir}/swift"
+            fi
+        elif [[ $service == n-* ]]; then
+            if [[ ! $file_to_parse =~ $package_dir/nova ]]; then
+                file_to_parse="${file_to_parse} ${package_dir}/nova"
+            fi
+        elif [[ $service == g-* ]]; then
+            if [[ ! $file_to_parse =~ $package_dir/glance ]]; then
+                file_to_parse="${file_to_parse} ${package_dir}/glance"
+            fi
+        elif [[ $service == key* ]]; then
+            if [[ ! $file_to_parse =~ $package_dir/keystone ]]; then
+                file_to_parse="${file_to_parse} ${package_dir}/keystone"
+            fi
+        elif [[ $service == q-* ]]; then
+            if [[ ! $file_to_parse =~ $package_dir/neutron ]]; then
+                file_to_parse="${file_to_parse} ${package_dir}/neutron"
+            fi
+        elif [[ $service == ir-* ]]; then
+            if [[ ! $file_to_parse =~ $package_dir/ironic ]]; then
+                file_to_parse="${file_to_parse} ${package_dir}/ironic"
+            fi
+        fi
+    done
+    echo "$(_parse_package_files $file_to_parse)"
+    $xtrace
+}
+
+# get_plugin_packages() collects a list of package names of any type from a
+# plugin's prerequisite files in ``$PLUGIN/devstack/files/{debs|rpms}``.  The
+# list is intended to be passed to a package installer such as apt or yum.
+#
+# Only packages required for enabled and collected plugins will be included.
+#
+# The same metadata used in the main devstack prerequisite files may be used
+# in these prerequisite files, see get_packages() for more info.
+function get_plugin_packages {
+    local xtrace=$(set +o | grep xtrace)
+    set +o xtrace
+    local files_to_parse=""
+    local package_dir=""
+    for plugin in ${DEVSTACK_PLUGINS//,/ }; do
+        local package_dir="$(_get_package_dir ${GITDIR[$plugin]}/devstack/files)"
+        # note the leading space: it keeps entries for multiple plugins separated
+        files_to_parse+=" $package_dir/$plugin"
+    done
+    echo "$(_parse_package_files $files_to_parse)"
     $xtrace
 }
 
@@ -1218,8 +1077,8 @@
     # The manual check for missing packages is because yum -y assumes
     # missing packages are OK.  See
     # https://bugzilla.redhat.com/show_bug.cgi?id=965567
-    $sudo http_proxy=$http_proxy https_proxy=$https_proxy \
-        no_proxy=$no_proxy \
+    $sudo http_proxy="${http_proxy:-}" https_proxy="${https_proxy:-}" \
+        no_proxy="${no_proxy:-}" \
         ${YUM:-yum} install -y "$@" 2>&1 | \
         awk '
             BEGIN { fail=0 }
@@ -1241,7 +1100,8 @@
     [[ "$OFFLINE" = "True" ]] && return
     local sudo="sudo"
     [[ "$(id -u)" = "0" ]] && sudo="env"
-    $sudo http_proxy=$http_proxy https_proxy=$https_proxy \
+    $sudo http_proxy="${http_proxy:-}" https_proxy="${https_proxy:-}" \
+        no_proxy="${no_proxy:-}" \
         zypper --non-interactive install --auto-agree-with-licenses "$@"
 }
 
diff --git a/inc/ini-config b/inc/ini-config
new file mode 100644
index 0000000..0d6d169
--- /dev/null
+++ b/inc/ini-config
@@ -0,0 +1,223 @@
+#!/bin/bash
+#
+# **inc/ini-config** - Configuration/INI functions
+#
+# Support for manipulating INI-style configuration files
+#
+# These functions have no external dependencies and no side-effects
+
+# Save trace setting
+INC_CONF_TRACE=$(set +o | grep xtrace)
+set +o xtrace
+
+
+# Config Functions
+# ================
+
+# Append a new option in an ini file without replacing the old value
+# iniadd config-file section option value1 value2 value3 ...
+function iniadd {
+    local xtrace=$(set +o | grep xtrace)
+    set +o xtrace
+    local file=$1
+    local section=$2
+    local option=$3
+    shift 3
+
+    local values="$(iniget_multiline $file $section $option) $@"
+    iniset_multiline $file $section $option $values
+    $xtrace
+}
+
+# Comment an option in an INI file
+# inicomment config-file section option
+function inicomment {
+    local xtrace=$(set +o | grep xtrace)
+    set +o xtrace
+    local file=$1
+    local section=$2
+    local option=$3
+
+    sed -i -e "/^\[$section\]/,/^\[.*\]/ s|^\($option[ \t]*=.*$\)|#\1|" "$file"
+    $xtrace
+}
+
+# Get an option from an INI file
+# iniget config-file section option
+function iniget {
+    local xtrace=$(set +o | grep xtrace)
+    set +o xtrace
+    local file=$1
+    local section=$2
+    local option=$3
+    local line
+
+    line=$(sed -ne "/^\[$section\]/,/^\[.*\]/ { /^$option[ \t]*=/ p; }" "$file")
+    echo ${line#*=}
+    $xtrace
+}
+
+# Get a multiple line option from an INI file
+# iniget_multiline config-file section option
+function iniget_multiline {
+    local xtrace=$(set +o | grep xtrace)
+    set +o xtrace
+    local file=$1
+    local section=$2
+    local option=$3
+    local values
+
+    values=$(sed -ne "/^\[$section\]/,/^\[.*\]/ { s/^$option[ \t]*=[ \t]*//gp; }" "$file")
+    echo ${values}
+    $xtrace
+}
+
+# Determine if the given option is present in the INI file
+# ini_has_option config-file section option
+function ini_has_option {
+    local xtrace=$(set +o | grep xtrace)
+    set +o xtrace
+    local file=$1
+    local section=$2
+    local option=$3
+    local line
+
+    line=$(sed -ne "/^\[$section\]/,/^\[.*\]/ { /^$option[ \t]*=/ p; }" "$file")
+    $xtrace
+    [ -n "$line" ]
+}
+
+# Add another config line for a multi-line option.
+# It's normally called after iniset of the same option and assumes
+# that the section already exists.
+#
+# Note that iniset_multiline requires all the 'lines' to be supplied
+# in the argument list. Doing that will cause incorrect configuration
+# if spaces are used in the config values.
+#
+# iniadd_literal config-file section option value
+function iniadd_literal {
+    local xtrace=$(set +o | grep xtrace)
+    set +o xtrace
+    local file=$1
+    local section=$2
+    local option=$3
+    local value=$4
+
+    [[ -z $section || -z $option ]] && return
+
+    # Add it
+    sed -i -e "/^\[$section\]/ a\\
+$option = $value
+" "$file"
+
+    $xtrace
+}
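+# Illustrative usage (not part of this change): append one value verbatim,
+# spaces included, under an existing section:
+#   iniadd_literal /tmp/example.conf DEFAULT instance_name_template "vm %08x"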
+
+# Remove an option from an INI file
+# inidelete config-file section option
+function inidelete {
+    local xtrace=$(set +o | grep xtrace)
+    set +o xtrace
+    local file=$1
+    local section=$2
+    local option=$3
+
+    [[ -z $section || -z $option ]] && return
+
+    # Remove old values
+    sed -i -e "/^\[$section\]/,/^\[.*\]/ { /^$option[ \t]*=/ d; }" "$file"
+
+    $xtrace
+}
+
+# Set an option in an INI file
+# iniset config-file section option value
+function iniset {
+    local xtrace=$(set +o | grep xtrace)
+    set +o xtrace
+    local file=$1
+    local section=$2
+    local option=$3
+    local value=$4
+
+    [[ -z $section || -z $option ]] && return
+
+    if ! grep -q "^\[$section\]" "$file" 2>/dev/null; then
+        # Add section at the end
+        echo -e "\n[$section]" >>"$file"
+    fi
+    if ! ini_has_option "$file" "$section" "$option"; then
+        # Add it
+        sed -i -e "/^\[$section\]/ a\\
+$option = $value
+" "$file"
+    else
+        local sep=$(echo -ne "\x01")
+        # Replace it
+        sed -i -e '/^\['${section}'\]/,/^\[.*\]/ s'${sep}'^\('${option}'[ \t]*=[ \t]*\).*$'${sep}'\1'"${value}"${sep} "$file"
+    fi
+    $xtrace
+}
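+# Illustrative usage (not part of this change): the section is created on
+# demand and an existing value is replaced in place:
+#   iniset /tmp/example.conf DEFAULT verbose True
+#   iniset /tmp/example.conf DEFAULT verbose False   # overwrites the first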
+
+# Set a multiple line option in an INI file
+# iniset_multiline config-file section option value1 value2 value3 ...
+function iniset_multiline {
+    local xtrace=$(set +o | grep xtrace)
+    set +o xtrace
+    local file=$1
+    local section=$2
+    local option=$3
+
+    shift 3
+    local values
+    for v in $@; do
+        # The sed command below inserts each new value on the line directly
+        # after the section identifier, which leaves the values in reverse
+        # order.  Reverse them here to preserve the original order.
+        values="$v ${values}"
+    done
+    if ! grep -q "^\[$section\]" "$file"; then
+        # Add section at the end
+        echo -e "\n[$section]" >>"$file"
+    else
+        # Remove old values
+        sed -i -e "/^\[$section\]/,/^\[.*\]/ { /^$option[ \t]*=/ d; }" "$file"
+    fi
+    # Add new ones
+    for v in $values; do
+        sed -i -e "/^\[$section\]/ a\\
+$option = $v
+" "$file"
+    done
+    $xtrace
+}
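+# Illustrative usage (not part of this change): replace an option with one
+# line per supplied value, preserving the argument order:
+#   iniset_multiline /tmp/example.conf DEFAULT enabled_apis ec2 osapi_compute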
+
+# Uncomment an option in an INI file
+# iniuncomment config-file section option
+function iniuncomment {
+    local xtrace=$(set +o | grep xtrace)
+    set +o xtrace
+    local file=$1
+    local section=$2
+    local option=$3
+    sed -i -e "/^\[$section\]/,/^\[.*\]/ s|[^ \t]*#[ \t]*\($option[ \t]*=.*$\)|\1|" "$file"
+    $xtrace
+}
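+# Illustrative usage (not part of this change): inicomment and iniuncomment
+# form a round trip on an existing option:
+#   inicomment /tmp/example.conf DEFAULT log_file     # -> "#log_file = ..."
+#   iniuncomment /tmp/example.conf DEFAULT log_file   # -> "log_file = ..."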
+
+function isset {
+    nounset=$(set +o | grep nounset)
+    set +o nounset
+    [[ -n "${!1+x}" ]]
+    result=$?
+    $nounset
+    return $result
+}
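+# Illustrative usage (not part of this change): check a variable safely even
+# when running under "set -o nounset":
+#   if isset REGION_NAME; then echo "REGION_NAME=$REGION_NAME"; fi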
+
+
+# Restore xtrace
+$INC_CONF_TRACE
+
+# Local variables:
+# mode: shell-script
+# End:
diff --git a/lib/config b/inc/meta-config
similarity index 96%
rename from lib/config
rename to inc/meta-config
index 31c6fa6..c8789bf 100644
--- a/lib/config
+++ b/inc/meta-config
@@ -1,7 +1,9 @@
 #!/bin/bash
 #
-# lib/config - Configuration file manipulation functions
-
+# **lib/meta-config** - Configuration file manipulation functions
+#
+# Support for DevStack's local.conf meta-config sections
+#
 # These functions have no external dependencies and the following side-effects:
 #
 # CONFIG_AWK_CMD is defined, default is ``awk``
@@ -18,7 +20,7 @@
 # file-name is the destination of the config file
 
 # Save trace setting
-C_XTRACE=$(set +o | grep xtrace)
+INC_META_XTRACE=$(set +o | grep xtrace)
 set +o xtrace
 
 
@@ -176,7 +178,7 @@
 
 
 # Restore xtrace
-$C_XTRACE
+$INC_META_XTRACE
 
 # Local variables:
 # mode: shell-script
diff --git a/inc/python b/inc/python
index d72c3c9..2d76081 100644
--- a/inc/python
+++ b/inc/python
@@ -94,25 +94,24 @@
 
     $xtrace
     $sudo_pip \
-        http_proxy=${http_proxy:-} \
-        https_proxy=${https_proxy:-} \
-        no_proxy=${no_proxy:-} \
+        http_proxy="${http_proxy:-}" \
+        https_proxy="${https_proxy:-}" \
+        no_proxy="${no_proxy:-}" \
         PIP_FIND_LINKS=$PIP_FIND_LINKS \
         $cmd_pip install \
         $@
 
-    INSTALL_TESTONLY_PACKAGES=$(trueorfalse False INSTALL_TESTONLY_PACKAGES)
-    if [[ "$INSTALL_TESTONLY_PACKAGES" == "True" ]]; then
-        local test_req="$@/test-requirements.txt"
-        if [[ -e "$test_req" ]]; then
-            $sudo_pip \
-                http_proxy=${http_proxy:-} \
-                https_proxy=${https_proxy:-} \
-                no_proxy=${no_proxy:-} \
-                PIP_FIND_LINKS=$PIP_FIND_LINKS \
-                $cmd_pip install \
-                -r $test_req
-        fi
+    # Also install test requirements
+    local test_req="$@/test-requirements.txt"
+    if [[ -e "$test_req" ]]; then
+        echo "Installing test-requirements for $test_req"
+        $sudo_pip \
+            http_proxy="${http_proxy:-}" \
+            https_proxy="${https_proxy:-}" \
+            no_proxy="${no_proxy:-}" \
+            PIP_FIND_LINKS=$PIP_FIND_LINKS \
+            $cmd_pip install \
+            -r $test_req
     fi
 }
 
diff --git a/lib/ceilometer b/lib/ceilometer
index 9db0640..7b2215c 100644
--- a/lib/ceilometer
+++ b/lib/ceilometer
@@ -163,13 +163,9 @@
 
 # configure_ceilometer() - Set config files, create data dirs, etc
 function configure_ceilometer {
-    [ ! -d $CEILOMETER_CONF_DIR ] && sudo mkdir -m 755 -p $CEILOMETER_CONF_DIR
-    sudo chown $STACK_USER $CEILOMETER_CONF_DIR
+    sudo install -d -o $STACK_USER -m 755 $CEILOMETER_CONF_DIR $CEILOMETER_API_LOG_DIR
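+    # Note: "install -d -o $STACK_USER -m 755 <dirs>" creates each directory
+    # with that owner and mode in a single step, replacing the separate
+    # mkdir/chown pair it supersedes here.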
 
-    [ ! -d $CEILOMETER_API_LOG_DIR ] &&  sudo mkdir -m 755 -p $CEILOMETER_API_LOG_DIR
-    sudo chown $STACK_USER $CEILOMETER_API_LOG_DIR
-
-    iniset_rpc_backend ceilometer $CEILOMETER_CONF DEFAULT
+    iniset_rpc_backend ceilometer $CEILOMETER_CONF
 
     iniset $CEILOMETER_CONF DEFAULT notification_topics "$CEILOMETER_NOTIFICATION_TOPICS"
     iniset $CEILOMETER_CONF DEFAULT verbose True
@@ -267,8 +263,7 @@
 # init_ceilometer() - Initialize etc.
 function init_ceilometer {
     # Create cache dir
-    sudo mkdir -p $CEILOMETER_AUTH_CACHE_DIR
-    sudo chown $STACK_USER $CEILOMETER_AUTH_CACHE_DIR
+    sudo install -d -o $STACK_USER $CEILOMETER_AUTH_CACHE_DIR
     rm -f $CEILOMETER_AUTH_CACHE_DIR/*
 
     if is_service_enabled mysql postgresql; then
@@ -322,6 +317,8 @@
     if use_library_from_git "ceilometermiddleware"; then
         git_clone_by_name "ceilometermiddleware"
         setup_dev_lib "ceilometermiddleware"
+    else
+        pip_install ceilometermiddleware
     fi
 }
 
diff --git a/lib/cinder b/lib/cinder
index 0d157dd..ef68d8d 100644
--- a/lib/cinder
+++ b/lib/cinder
@@ -174,16 +174,15 @@
     if [[ -d $CINDER_CONF_DIR/rootwrap.d ]]; then
         sudo rm -rf $CINDER_CONF_DIR/rootwrap.d
     fi
+
     # Deploy filters to /etc/cinder/rootwrap.d
-    sudo mkdir -m 755 $CINDER_CONF_DIR/rootwrap.d
-    sudo cp $CINDER_DIR/etc/cinder/rootwrap.d/*.filters $CINDER_CONF_DIR/rootwrap.d
-    sudo chown -R root:root $CINDER_CONF_DIR/rootwrap.d
-    sudo chmod 644 $CINDER_CONF_DIR/rootwrap.d/*
+    sudo install -d -o root -g root -m 755 $CINDER_CONF_DIR/rootwrap.d
+    sudo install -o root -g root -m 644 $CINDER_DIR/etc/cinder/rootwrap.d/*.filters $CINDER_CONF_DIR/rootwrap.d
+
     # Set up rootwrap.conf, pointing to /etc/cinder/rootwrap.d
-    sudo cp $CINDER_DIR/etc/cinder/rootwrap.conf $CINDER_CONF_DIR/
+    sudo install -o root -g root -m 644 $CINDER_DIR/etc/cinder/rootwrap.conf $CINDER_CONF_DIR
     sudo sed -e "s:^filters_path=.*$:filters_path=$CINDER_CONF_DIR/rootwrap.d:" -i $CINDER_CONF_DIR/rootwrap.conf
-    sudo chown root:root $CINDER_CONF_DIR/rootwrap.conf
-    sudo chmod 0644 $CINDER_CONF_DIR/rootwrap.conf
+
     # Specify rootwrap.conf as first parameter to rootwrap
     ROOTWRAP_CSUDOER_CMD="$cinder_rootwrap $CINDER_CONF_DIR/rootwrap.conf *"
 
@@ -197,10 +196,7 @@
 
 # configure_cinder() - Set config files, create data dirs, etc
 function configure_cinder {
-    if [[ ! -d $CINDER_CONF_DIR ]]; then
-        sudo mkdir -p $CINDER_CONF_DIR
-    fi
-    sudo chown $STACK_USER $CINDER_CONF_DIR
+    sudo install -d -o $STACK_USER -m 755 $CINDER_CONF_DIR
 
     cp -p $CINDER_DIR/etc/cinder/policy.json $CINDER_CONF_DIR
 
@@ -228,20 +224,21 @@
     iniset $CINDER_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
     iniset $CINDER_CONF DEFAULT verbose True
 
-    iniset $CINDER_CONF DEFAULT my_ip "$CINDER_SERVICE_HOST"
     iniset $CINDER_CONF DEFAULT iscsi_helper tgtadm
-    iniset $CINDER_CONF DEFAULT sql_connection `database_connection_url cinder`
+    iniset $CINDER_CONF database connection `database_connection_url cinder`
     iniset $CINDER_CONF DEFAULT api_paste_config $CINDER_API_PASTE_INI
     iniset $CINDER_CONF DEFAULT rootwrap_config "$CINDER_CONF_DIR/rootwrap.conf"
     iniset $CINDER_CONF DEFAULT osapi_volume_extension cinder.api.contrib.standard_extensions
     iniset $CINDER_CONF DEFAULT state_path $CINDER_STATE_PATH
-    iniset $CINDER_CONF DEFAULT lock_path $CINDER_STATE_PATH
+    iniset $CINDER_CONF oslo_concurrency lock_path $CINDER_STATE_PATH
     iniset $CINDER_CONF DEFAULT periodic_interval $CINDER_PERIODIC_INTERVAL
     # NOTE(thingee): Cinder V1 API is deprecated and defaults to off as of
     # Juno. Keep it enabled so we can continue testing while it's still
     # supported.
     iniset $CINDER_CONF DEFAULT enable_v1_api true
 
+    iniset $CINDER_CONF DEFAULT os_region_name "$REGION_NAME"
+
     if is_service_enabled c-vol && [[ -n "$CINDER_ENABLED_BACKENDS" ]]; then
         local enabled_backends=""
         local default_name=""
@@ -280,7 +277,7 @@
         iniset $CINDER_CONF DEFAULT use_syslog True
     fi
 
-    iniset_rpc_backend cinder $CINDER_CONF DEFAULT
+    iniset_rpc_backend cinder $CINDER_CONF
 
     if [[ "$CINDER_SECURE_DELETE" == "False" ]]; then
         iniset $CINDER_CONF DEFAULT secure_delete False
@@ -350,8 +347,7 @@
 # create_cinder_cache_dir() - Part of the init_cinder() process
 function create_cinder_cache_dir {
     # Create cache dir
-    sudo mkdir -p $CINDER_AUTH_CACHE_DIR
-    sudo chown $STACK_USER $CINDER_AUTH_CACHE_DIR
+    sudo install -d -o $STACK_USER $CINDER_AUTH_CACHE_DIR
     rm -f $CINDER_AUTH_CACHE_DIR/*
 }
 
@@ -371,15 +367,9 @@
 
     if is_service_enabled c-vol && [[ -n "$CINDER_ENABLED_BACKENDS" ]]; then
         local be be_name be_type
-        local has_lvm=0
         for be in ${CINDER_ENABLED_BACKENDS//,/ }; do
             be_type=${be%%:*}
             be_name=${be##*:}
-
-            if [[ $be_type == 'lvm' ]]; then
-                has_lvm=1
-            fi
-
             if type init_cinder_backend_${be_type} >/dev/null 2>&1; then
                 # Always init the default volume group for lvm.
                 if [[ "$be_type" == "lvm" ]]; then
@@ -390,17 +380,6 @@
         done
     fi
 
-    # Keep it simple, set a marker if there's an LVM backend
-    # use the created VG's to setup lvm filters
-    if [[ $has_lvm == 1 ]]; then
-        # Order matters here, not only obviously to make
-        # sure the VG's are created, but also some distros
-        # do some customizations to lvm.conf on init, we
-        # want to make sure we copy those over
-        sudo cp /etc/lvm/lvm.conf /etc/cinder/lvm.conf
-        configure_cinder_backend_conf_lvm
-    fi
-
     mkdir -p $CINDER_STATE_PATH/volumes
     create_cinder_cache_dir
 }
diff --git a/lib/cinder_backends/lvm b/lib/cinder_backends/lvm
index 52fc6fb..f210578 100644
--- a/lib/cinder_backends/lvm
+++ b/lib/cinder_backends/lvm
@@ -19,7 +19,6 @@
 # clean_cinder_backend_lvm - called from clean_cinder()
 # configure_cinder_backend_lvm - called from configure_cinder()
 # init_cinder_backend_lvm - called from init_cinder()
-# configure_cinder_backend_conf_lvm - called from configure_cinder()
 
 
 # Save trace setting
@@ -66,36 +65,6 @@
     init_lvm_volume_group $VOLUME_GROUP_NAME-$be_name $VOLUME_BACKING_FILE_SIZE
 }
 
-# configure_cinder_backend_conf_lvm - Sets device filter in /etc/cinder/lvm.conf
-# init_cinder_backend_lvm
-function configure_cinder_backend_conf_lvm {
-    local filter_suffix='"r/.*/" ]'
-    local filter_string="filter = [ "
-    local conf_entries=$(grep volume_group /etc/cinder/cinder.conf | sed "s/ //g")
-    local pv
-    local vg
-    local line
-
-    for pv_info in $(sudo pvs --noheadings -o name,vg_name --separator ';'); do
-        echo_summary "Evaluate PV info for Cinder lvm.conf: $pv_info"
-        IFS=';' read pv vg <<< "$pv_info"
-        for line in ${conf_entries}; do
-            IFS='=' read label group <<< "$line"
-            group=$(echo $group|sed "s/^ *//g")
-            if [[ "$vg" == "$group" ]]; then
-                new="\"a$pv/\", "
-                filter_string=$filter_string$new
-            fi
-        done
-    done
-    filter_string=$filter_string$filter_suffix
-
-    # FIXME(jdg): Possible odd case that the lvm.conf file has been modified
-    # and doesn't have a filter entry to search/replace.  For devstack don't
-    # know that we care, but could consider adding a check and add
-    sudo sed -i "s#^[ \t]*filter.*#    $filter_string#g" /etc/cinder/lvm.conf
-    echo "set LVM filter_strings: $filter_string"
-}
 # Restore xtrace
 $MY_XTRACE
 
diff --git a/lib/databases/mysql b/lib/databases/mysql
index 70073c4..dabd7d0 100644
--- a/lib/databases/mysql
+++ b/lib/databases/mysql
@@ -16,7 +16,7 @@
 
 # Linux distros, thank you for being incredibly consistent
 MYSQL=mysql
-if is_fedora; then
+if is_fedora && ! is_oraclelinux; then
     MYSQL=mariadb
 fi
 
@@ -32,12 +32,12 @@
         sudo rm -rf /var/lib/mysql
         sudo rm -rf /etc/mysql
         return
+    elif is_suse || is_oraclelinux; then
+        uninstall_package mysql-community-server
+        sudo rm -rf /var/lib/mysql
     elif is_fedora; then
         uninstall_package mariadb-server
         sudo rm -rf /var/lib/mysql
-    elif is_suse; then
-        uninstall_package mysql-community-server
-        sudo rm -rf /var/lib/mysql
     else
         return
     fi
@@ -56,12 +56,12 @@
     if is_ubuntu; then
         my_conf=/etc/mysql/my.cnf
         mysql=mysql
+    elif is_suse || is_oraclelinux; then
+        my_conf=/etc/my.cnf
+        mysql=mysql
     elif is_fedora; then
         mysql=mariadb
         my_conf=/etc/my.cnf
-    elif is_suse; then
-        my_conf=/etc/my.cnf
-        mysql=mysql
     else
         exit_distro_not_supported "mysql configuration"
     fi
@@ -140,14 +140,14 @@
         chmod 0600 $HOME/.my.cnf
     fi
     # Install mysql-server
-    if is_fedora; then
-        install_package mariadb-server
-    elif is_ubuntu; then
-        install_package mysql-server
-    elif is_suse; then
+    if is_suse || is_oraclelinux; then
         if ! is_package_installed mariadb; then
             install_package mysql-community-server
         fi
+    elif is_fedora; then
+        install_package mariadb-server
+    elif is_ubuntu; then
+        install_package mysql-server
     else
         exit_distro_not_supported "mysql installation"
     fi
diff --git a/lib/glance b/lib/glance
index eb1df2e..d781056 100755
--- a/lib/glance
+++ b/lib/glance
@@ -90,15 +90,7 @@
 
 # configure_glance() - Set config files, create data dirs, etc
 function configure_glance {
-    if [[ ! -d $GLANCE_CONF_DIR ]]; then
-        sudo mkdir -p $GLANCE_CONF_DIR
-    fi
-    sudo chown $STACK_USER $GLANCE_CONF_DIR
-
-    if [[ ! -d $GLANCE_METADEF_DIR ]]; then
-        sudo mkdir -p $GLANCE_METADEF_DIR
-    fi
-    sudo chown $STACK_USER $GLANCE_METADEF_DIR
+    sudo install -d -o $STACK_USER $GLANCE_CONF_DIR $GLANCE_METADEF_DIR
 
     # Copy over our glance configurations and update them
     cp $GLANCE_DIR/etc/glance-registry.conf $GLANCE_REGISTRY_CONF
@@ -112,7 +104,7 @@
     if is_service_enabled qpid || [ -n "$RABBIT_HOST" ] && [ -n "$RABBIT_PASSWORD" ]; then
         iniset $GLANCE_REGISTRY_CONF DEFAULT notification_driver messaging
     fi
-    iniset_rpc_backend glance $GLANCE_REGISTRY_CONF DEFAULT
+    iniset_rpc_backend glance $GLANCE_REGISTRY_CONF
 
     cp $GLANCE_DIR/etc/glance-api.conf $GLANCE_API_CONF
     iniset $GLANCE_API_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
@@ -125,7 +117,7 @@
     if is_service_enabled qpid || [ -n "$RABBIT_HOST" ] && [ -n "$RABBIT_PASSWORD" ]; then
         iniset $GLANCE_API_CONF DEFAULT notification_driver messaging
     fi
-    iniset_rpc_backend glance $GLANCE_API_CONF DEFAULT
+    iniset_rpc_backend glance $GLANCE_API_CONF
     if [ "$VIRT_DRIVER" = 'xenserver' ]; then
         iniset $GLANCE_API_CONF DEFAULT container_formats "ami,ari,aki,bare,ovf,tgz"
         iniset $GLANCE_API_CONF DEFAULT disk_formats "ami,ari,aki,vhd,raw,iso"
@@ -185,8 +177,8 @@
 
     # Format logging
     if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ]; then
-        setup_colorized_logging $GLANCE_API_CONF DEFAULT "project_id" "user_id"
-        setup_colorized_logging $GLANCE_REGISTRY_CONF DEFAULT "project_id" "user_id"
+        setup_colorized_logging $GLANCE_API_CONF DEFAULT tenant user
+        setup_colorized_logging $GLANCE_REGISTRY_CONF DEFAULT tenant user
     fi
 
     cp -p $GLANCE_DIR/etc/glance-registry-paste.ini $GLANCE_REGISTRY_PASTE_INI
@@ -263,12 +255,8 @@
 # create_glance_cache_dir() - Part of the init_glance() process
 function create_glance_cache_dir {
     # Create cache dir
-    sudo mkdir -p $GLANCE_AUTH_CACHE_DIR/api
-    sudo chown $STACK_USER $GLANCE_AUTH_CACHE_DIR/api
-    rm -f $GLANCE_AUTH_CACHE_DIR/api/*
-    sudo mkdir -p $GLANCE_AUTH_CACHE_DIR/registry
-    sudo chown $STACK_USER $GLANCE_AUTH_CACHE_DIR/registry
-    rm -f $GLANCE_AUTH_CACHE_DIR/registry/*
+    sudo install -d -o $STACK_USER $GLANCE_AUTH_CACHE_DIR/api $GLANCE_AUTH_CACHE_DIR/registry
+    rm -f $GLANCE_AUTH_CACHE_DIR/api/* $GLANCE_AUTH_CACHE_DIR/registry/*
 }
 
 # init_glance() - Initialize databases, etc.
diff --git a/lib/heat b/lib/heat
index a088e82..c7abd3b 100644
--- a/lib/heat
+++ b/lib/heat
@@ -36,6 +36,7 @@
 HEAT_CFNTOOLS_DIR=$DEST/heat-cfntools
 HEAT_TEMPLATES_REPO_DIR=$DEST/heat-templates
 OCC_DIR=$DEST/os-collect-config
+DIB_UTILS_DIR=$DEST/dib-utils
 ORC_DIR=$DEST/os-refresh-config
 OAC_DIR=$DEST/os-apply-config
 
@@ -49,13 +50,19 @@
 HEAT_CONF=$HEAT_CONF_DIR/heat.conf
 HEAT_ENV_DIR=$HEAT_CONF_DIR/environment.d
 HEAT_TEMPLATES_DIR=$HEAT_CONF_DIR/templates
-HEAT_STACK_DOMAIN=$(trueorfalse True HEAT_STACK_DOMAIN)
 HEAT_API_HOST=${HEAT_API_HOST:-$HOST_IP}
 HEAT_API_PORT=${HEAT_API_PORT:-8004}
 
 
 # other default options
-HEAT_DEFERRED_AUTH=${HEAT_DEFERRED_AUTH:-trusts}
+if [[ "$HEAT_STANDALONE" = "True" ]]; then
+    # for standalone, use defaults which require no service user
+    HEAT_STACK_DOMAIN=`trueorfalse False $HEAT_STACK_DOMAIN`
+    HEAT_DEFERRED_AUTH=${HEAT_DEFERRED_AUTH:-password}
+else
+    HEAT_STACK_DOMAIN=`trueorfalse True $HEAT_STACK_DOMAIN`
+    HEAT_DEFERRED_AUTH=${HEAT_DEFERRED_AUTH:-trusts}
+fi
 
 # Tell Tempest this project is present
 TEMPEST_SERVICES+=,heat
@@ -77,18 +84,13 @@
     sudo rm -rf $HEAT_AUTH_CACHE_DIR
     sudo rm -rf $HEAT_ENV_DIR
     sudo rm -rf $HEAT_TEMPLATES_DIR
+    sudo rm -rf $HEAT_CONF_DIR
 }
 
 # configure_heat() - Set config files, create data dirs, etc
 function configure_heat {
-    if [[ "$HEAT_STANDALONE" = "True" ]]; then
-        setup_develop $HEAT_DIR/contrib/heat_keystoneclient_v2
-    fi
 
-    if [[ ! -d $HEAT_CONF_DIR ]]; then
-        sudo mkdir -p $HEAT_CONF_DIR
-    fi
-    sudo chown $STACK_USER $HEAT_CONF_DIR
+    sudo install -d -o $STACK_USER $HEAT_CONF_DIR
     # remove old config files
     rm -f $HEAT_CONF_DIR/heat-*.conf
 
@@ -105,7 +107,7 @@
     cp $HEAT_DIR/etc/heat/policy.json $HEAT_POLICY_FILE
 
     # common options
-    iniset_rpc_backend heat $HEAT_CONF DEFAULT
+    iniset_rpc_backend heat $HEAT_CONF
     iniset $HEAT_CONF DEFAULT heat_metadata_server_url http://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT
     iniset $HEAT_CONF DEFAULT heat_waitcondition_server_url http://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT/v1/waitcondition
     iniset $HEAT_CONF DEFAULT heat_watch_server_url http://$HEAT_API_CW_HOST:$HEAT_API_CW_PORT
@@ -127,24 +129,22 @@
     # auth plugin setup. This should be fixed in heat.  Heat is also the only
     # service that requires the auth_uri to include a /v2.0. Remove this custom
     # setup when bug #1300246 is resolved.
-    iniset $HEAT_CONF keystone_authtoken identity_uri $KEYSTONE_AUTH_URI
     iniset $HEAT_CONF keystone_authtoken auth_uri $KEYSTONE_SERVICE_URI/v2.0
-    iniset $HEAT_CONF keystone_authtoken admin_user heat
-    iniset $HEAT_CONF keystone_authtoken admin_password $SERVICE_PASSWORD
-    iniset $HEAT_CONF keystone_authtoken admin_tenant_name $SERVICE_TENANT_NAME
-    iniset $HEAT_CONF keystone_authtoken cafile $SSL_BUNDLE_FILE
-    iniset $HEAT_CONF keystone_authtoken signing_dir $HEAT_AUTH_CACHE_DIR
+    if [[ "$HEAT_STANDALONE" = "True" ]]; then
+        iniset $HEAT_CONF paste_deploy flavor standalone
+        iniset $HEAT_CONF clients_heat url "http://$HEAT_API_HOST:$HEAT_API_PORT/v1/%(tenant_id)s"
+    else
+        iniset $HEAT_CONF keystone_authtoken identity_uri $KEYSTONE_AUTH_URI
+        iniset $HEAT_CONF keystone_authtoken admin_user heat
+        iniset $HEAT_CONF keystone_authtoken admin_password $SERVICE_PASSWORD
+        iniset $HEAT_CONF keystone_authtoken admin_tenant_name $SERVICE_TENANT_NAME
+        iniset $HEAT_CONF keystone_authtoken cafile $SSL_BUNDLE_FILE
+        iniset $HEAT_CONF keystone_authtoken signing_dir $HEAT_AUTH_CACHE_DIR
+    fi
 
     # ec2authtoken
     iniset $HEAT_CONF ec2authtoken auth_uri $KEYSTONE_SERVICE_URI/v2.0
 
-    # paste_deploy
-    if [[ "$HEAT_STANDALONE" = "True" ]]; then
-        iniset $HEAT_CONF paste_deploy flavor standalone
-        iniset $HEAT_CONF DEFAULT keystone_backend heat_keystoneclient_v2.client.KeystoneClientV2
-        iniset $HEAT_CONF clients_heat url "http://$HEAT_API_HOST:$HEAT_API_PORT/v1/%(tenant_id)s"
-    fi
-
     # OpenStack API
     iniset $HEAT_CONF heat_api bind_port $HEAT_API_PORT
     iniset $HEAT_CONF heat_api workers "$API_WORKERS"
@@ -172,15 +172,11 @@
         iniset $HEAT_CONF DEFAULT enable_stack_abandon true
     fi
 
-    # heat environment
-    sudo mkdir -p $HEAT_ENV_DIR
-    sudo chown $STACK_USER $HEAT_ENV_DIR
+    sudo install -d -o $STACK_USER $HEAT_ENV_DIR $HEAT_TEMPLATES_DIR
+
     # copy the default environment
     cp $HEAT_DIR/etc/heat/environment.d/* $HEAT_ENV_DIR/
 
-    # heat template resources.
-    sudo mkdir -p $HEAT_TEMPLATES_DIR
-    sudo chown $STACK_USER $HEAT_TEMPLATES_DIR
     # copy the default templates
     cp $HEAT_DIR/etc/heat/templates/* $HEAT_TEMPLATES_DIR/
 
@@ -199,8 +195,7 @@
 # create_heat_cache_dir() - Part of the init_heat() process
 function create_heat_cache_dir {
     # Create cache dirs
-    sudo mkdir -p $HEAT_AUTH_CACHE_DIR
-    sudo chown $STACK_USER $HEAT_AUTH_CACHE_DIR
+    sudo install -d -o $STACK_USER $HEAT_AUTH_CACHE_DIR
 }
 
 # install_heatclient() - Collect source and prepare
@@ -222,6 +217,10 @@
 function install_heat_other {
     git_clone $HEAT_CFNTOOLS_REPO $HEAT_CFNTOOLS_DIR $HEAT_CFNTOOLS_BRANCH
     git_clone $HEAT_TEMPLATES_REPO $HEAT_TEMPLATES_REPO_DIR $HEAT_TEMPLATES_BRANCH
+    git_clone $OAC_REPO $OAC_DIR $OAC_BRANCH
+    git_clone $OCC_REPO $OCC_DIR $OCC_BRANCH
+    git_clone $ORC_REPO $ORC_DIR $ORC_BRANCH
+    git_clone $DIB_UTILS_REPO $DIB_UTILS_DIR $DIB_UTILS_BRANCH
 }
 
 # start_heat() - Start running processes, including screen
@@ -243,30 +242,33 @@
 
 # create_heat_accounts() - Set up common required heat accounts
 function create_heat_accounts {
-    create_service_user "heat" "admin"
+    if [[ "$HEAT_STANDALONE" != "True" ]]; then
 
-    if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
+        create_service_user "heat" "admin"
 
-        local heat_service=$(get_or_create_service "heat" \
-                "orchestration" "Heat Orchestration Service")
-        get_or_create_endpoint $heat_service \
-            "$REGION_NAME" \
-            "$SERVICE_PROTOCOL://$HEAT_API_HOST:$HEAT_API_PORT/v1/\$(tenant_id)s" \
-            "$SERVICE_PROTOCOL://$HEAT_API_HOST:$HEAT_API_PORT/v1/\$(tenant_id)s" \
-            "$SERVICE_PROTOCOL://$HEAT_API_HOST:$HEAT_API_PORT/v1/\$(tenant_id)s"
+        if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
 
-        local heat_cfn_service=$(get_or_create_service "heat-cfn" \
-                "cloudformation" "Heat CloudFormation Service")
-        get_or_create_endpoint $heat_cfn_service \
-            "$REGION_NAME" \
-            "$SERVICE_PROTOCOL://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT/v1" \
-            "$SERVICE_PROTOCOL://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT/v1" \
-            "$SERVICE_PROTOCOL://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT/v1"
+            local heat_service=$(get_or_create_service "heat" \
+                    "orchestration" "Heat Orchestration Service")
+            get_or_create_endpoint $heat_service \
+                "$REGION_NAME" \
+                "$SERVICE_PROTOCOL://$HEAT_API_HOST:$HEAT_API_PORT/v1/\$(tenant_id)s" \
+                "$SERVICE_PROTOCOL://$HEAT_API_HOST:$HEAT_API_PORT/v1/\$(tenant_id)s" \
+                "$SERVICE_PROTOCOL://$HEAT_API_HOST:$HEAT_API_PORT/v1/\$(tenant_id)s"
+
+            local heat_cfn_service=$(get_or_create_service "heat-cfn" \
+                    "cloudformation" "Heat CloudFormation Service")
+            get_or_create_endpoint $heat_cfn_service \
+                "$REGION_NAME" \
+                "$SERVICE_PROTOCOL://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT/v1" \
+                "$SERVICE_PROTOCOL://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT/v1" \
+                "$SERVICE_PROTOCOL://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT/v1"
+        fi
+
+        # heat_stack_user role is for users created by Heat
+        get_or_create_role "heat_stack_user"
     fi
 
-    # heat_stack_user role is for users created by Heat
-    get_or_create_role "heat_stack_user"
-
     if [[ $HEAT_DEFERRED_AUTH == trusts ]]; then
         iniset $HEAT_CONF DEFAULT deferred_auth_method trusts
     fi
@@ -299,7 +301,7 @@
 
 # build_heat_pip_mirror() - Build a pip mirror containing heat agent projects
 function build_heat_pip_mirror {
-    local project_dirs="$OCC_DIR $OAC_DIR $ORC_DIR $HEAT_CFNTOOLS_DIR"
+    local project_dirs="$OCC_DIR $OAC_DIR $ORC_DIR $HEAT_CFNTOOLS_DIR $DIB_UTILS_DIR"
     local projpath proj package
 
     rm -rf $HEAT_PIP_REPO
diff --git a/lib/ironic b/lib/ironic
index bc30cdb..58cc2fa 100644
--- a/lib/ironic
+++ b/lib/ironic
@@ -63,6 +63,7 @@
 IRONIC_BAREMETAL_BASIC_OPS=$(trueorfalse False IRONIC_BAREMETAL_BASIC_OPS)
 IRONIC_ENABLED_DRIVERS=${IRONIC_ENABLED_DRIVERS:-fake,pxe_ssh,pxe_ipmitool}
 IRONIC_SSH_USERNAME=${IRONIC_SSH_USERNAME:-`whoami`}
+IRONIC_SSH_TIMEOUT=${IRONIC_SSH_TIMEOUT:-15}
 IRONIC_SSH_KEY_DIR=${IRONIC_SSH_KEY_DIR:-$IRONIC_DATA_DIR/ssh_keys}
 IRONIC_SSH_KEY_FILENAME=${IRONIC_SSH_KEY_FILENAME:-ironic_key}
 IRONIC_KEY_FILE=${IRONIC_KEY_FILE:-$IRONIC_SSH_KEY_DIR/$IRONIC_SSH_KEY_FILENAME}
@@ -233,22 +234,14 @@
 # configure_ironic_dirs() - Create all directories required by Ironic and
 # associated services.
 function configure_ironic_dirs {
-    if [[ ! -d $IRONIC_CONF_DIR ]]; then
-        sudo mkdir -p $IRONIC_CONF_DIR
-    fi
+    sudo install -d -o $STACK_USER $IRONIC_CONF_DIR $IRONIC_DATA_DIR \
+        $IRONIC_STATE_PATH $IRONIC_TFTPBOOT_DIR $IRONIC_TFTPBOOT_DIR/pxelinux.cfg
+    sudo chown -R $STACK_USER:$LIBVIRT_GROUP $IRONIC_TFTPBOOT_DIR
 
     if [[ "$IRONIC_IPXE_ENABLED" == "True" ]] ; then
-        sudo mkdir -p $IRONIC_HTTP_DIR
-        sudo chown -R $STACK_USER:$LIBVIRT_GROUP $IRONIC_HTTP_DIR
+        sudo install -d -o $STACK_USER -g $LIBVIRT_GROUP $IRONIC_HTTP_DIR
     fi
 
-    sudo mkdir -p $IRONIC_DATA_DIR
-    sudo mkdir -p $IRONIC_STATE_PATH
-    sudo mkdir -p $IRONIC_TFTPBOOT_DIR
-    sudo chown -R $STACK_USER $IRONIC_DATA_DIR $IRONIC_STATE_PATH
-    sudo chown -R $STACK_USER:$LIBVIRT_GROUP $IRONIC_TFTPBOOT_DIR
-    mkdir -p $IRONIC_TFTPBOOT_DIR/pxelinux.cfg
-
     if [ ! -f $IRONIC_PXE_BOOT_IMAGE ]; then
         die $LINENO "PXE boot file $IRONIC_PXE_BOOT_IMAGE not found."
     fi
@@ -267,13 +260,12 @@
 # configure_ironic() - Set config files, create data dirs, etc
 function configure_ironic {
     configure_ironic_dirs
-    sudo chown $STACK_USER $IRONIC_CONF_DIR
 
     # Copy over ironic configuration file and configure common parameters.
     cp $IRONIC_DIR/etc/ironic/ironic.conf.sample $IRONIC_CONF_FILE
     iniset $IRONIC_CONF_FILE DEFAULT debug True
     inicomment $IRONIC_CONF_FILE DEFAULT log_file
-    iniset $IRONIC_CONF_FILE DEFAULT sql_connection `database_connection_url ironic`
+    iniset $IRONIC_CONF_FILE database connection `database_connection_url ironic`
     iniset $IRONIC_CONF_FILE DEFAULT state_path $IRONIC_STATE_PATH
     iniset $IRONIC_CONF_FILE DEFAULT use_syslog $SYSLOG
     # Configure Ironic conductor, if it was enabled.
@@ -313,7 +305,7 @@
     iniset $IRONIC_CONF_FILE keystone_authtoken cafile $SSL_BUNDLE_FILE
     iniset $IRONIC_CONF_FILE keystone_authtoken signing_dir $IRONIC_AUTH_CACHE_DIR/api
 
-    iniset_rpc_backend ironic $IRONIC_CONF_FILE DEFAULT
+    iniset_rpc_backend ironic $IRONIC_CONF_FILE
     iniset $IRONIC_CONF_FILE api port $IRONIC_SERVICE_PORT
 
     cp -p $IRONIC_DIR/etc/ironic/policy.json $IRONIC_POLICY_JSON
@@ -343,13 +335,24 @@
     iniset $IRONIC_CONF_FILE pxe tftp_server $IRONIC_TFTPSERVER_IP
     iniset $IRONIC_CONF_FILE pxe tftp_root $IRONIC_TFTPBOOT_DIR
     iniset $IRONIC_CONF_FILE pxe tftp_master_path $IRONIC_TFTPBOOT_DIR/master_images
+
+    local pxe_params=""
     if [[ "$IRONIC_VM_LOG_CONSOLE" == "True" ]] ; then
-        local pxe_params="nofb nomodeset vga=normal console=ttyS0"
+        pxe_params+="nofb nomodeset vga=normal console=ttyS0"
         if is_deployed_with_ipa_ramdisk; then
             pxe_params+=" systemd.journald.forward_to_console=yes"
         fi
+    fi
+    # When booting nodes with less than 1GB of RAM, we need to switch from
+    # the default tmpfs to ramfs so the ramdisks can decompress successfully.
+    if (is_ironic_hardware && [[ "$IRONIC_HW_NODE_RAM" -lt 1024 ]]) ||
+        (! is_ironic_hardware && [[ "$IRONIC_VM_SPECS_RAM" -lt 1024 ]]); then
+        pxe_params+=" rootfstype=ramfs"
+    fi
+    if [[ -n "$pxe_params" ]]; then
         iniset $IRONIC_CONF_FILE pxe pxe_append_params "$pxe_params"
     fi
+
     if is_deployed_by_agent; then
         if [[ "$SWIFT_ENABLE_TEMPURLS" == "True" ]] ; then
             iniset $IRONIC_CONF_FILE glance swift_temp_url_key $SWIFT_TEMPURL_KEY
@@ -416,6 +419,11 @@
 
 # init_ironic() - Initialize databases, etc.
 function init_ironic {
+    # Save private network as cleaning network
+    local cleaning_network_uuid
+    cleaning_network_uuid=$(neutron net-list | grep private | get_field 1)
+    iniset $IRONIC_CONF_FILE neutron cleaning_network_uuid ${cleaning_network_uuid}
+
     # (Re)create  ironic database
     recreate_database ironic
 
@@ -471,9 +479,8 @@
 
 # stop_ironic() - Stop running processes
 function stop_ironic {
-    # Kill the Ironic screen windows
-    screen -S $SCREEN_NAME -p ir-api -X kill
-    screen -S $SCREEN_NAME -p ir-cond -X kill
+    stop_process ir-api
+    stop_process ir-cond
 
     # Cleanup the WSGI files
     if [[ "$IRONIC_IPXE_ENABLED" == "True" ]] ; then
@@ -616,7 +623,7 @@
             $node_options \
             | grep " uuid " | get_field 2)
 
-        ironic port-create --address $mac_address --node_uuid $node_id
+        ironic port-create --address $mac_address --node $node_id
 
         total_nodes=$((total_nodes+1))
         total_cpus=$((total_cpus+$ironic_node_cpu))
@@ -693,7 +700,7 @@
 
 function configure_ironic_auxiliary {
     configure_ironic_ssh_keypair
-    ironic_ssh_check $IRONIC_KEY_FILE $IRONIC_VM_SSH_ADDRESS $IRONIC_VM_SSH_PORT $IRONIC_SSH_USERNAME 10
+    ironic_ssh_check $IRONIC_KEY_FILE $IRONIC_VM_SSH_ADDRESS $IRONIC_VM_SSH_PORT $IRONIC_SSH_USERNAME $IRONIC_SSH_TIMEOUT
 }
 
 function build_ipa_coreos_ramdisk {
diff --git a/lib/keystone b/lib/keystone
index 0968445..23773fa 100644
--- a/lib/keystone
+++ b/lib/keystone
@@ -175,14 +175,10 @@
 
 # configure_keystone() - Set config files, create data dirs, etc
 function configure_keystone {
-    if [[ ! -d $KEYSTONE_CONF_DIR ]]; then
-        sudo mkdir -p $KEYSTONE_CONF_DIR
-    fi
-    sudo chown $STACK_USER $KEYSTONE_CONF_DIR
+    sudo install -d -o $STACK_USER $KEYSTONE_CONF_DIR
 
     if [[ "$KEYSTONE_CONF_DIR" != "$KEYSTONE_DIR/etc" ]]; then
-        cp -p $KEYSTONE_DIR/etc/keystone.conf.sample $KEYSTONE_CONF
-        chmod 600 $KEYSTONE_CONF
+        install -m 600 $KEYSTONE_DIR/etc/keystone.conf.sample $KEYSTONE_CONF
         cp -p $KEYSTONE_DIR/etc/policy.json $KEYSTONE_CONF_DIR
         if [[ -f "$KEYSTONE_DIR/etc/keystone-paste.ini" ]]; then
             cp -p "$KEYSTONE_DIR/etc/keystone-paste.ini" "$KEYSTONE_PASTE_INI"
@@ -226,36 +222,26 @@
         iniset $KEYSTONE_CONF assignment driver "keystone.assignment.backends.$KEYSTONE_ASSIGNMENT_BACKEND.Assignment"
     fi
 
-    # Configure rabbitmq credentials
-    if is_service_enabled rabbit; then
-        iniset $KEYSTONE_CONF DEFAULT rabbit_userid $RABBIT_USERID
-        iniset $KEYSTONE_CONF DEFAULT rabbit_password $RABBIT_PASSWORD
-        iniset $KEYSTONE_CONF DEFAULT rabbit_host $RABBIT_HOST
-    fi
+    iniset_rpc_backend keystone $KEYSTONE_CONF
 
     # Set the URL advertised in the ``versions`` structure returned by the '/' route
-    if is_service_enabled tls-proxy; then
-        iniset $KEYSTONE_CONF DEFAULT public_endpoint "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_SERVICE_PORT/"
-        iniset $KEYSTONE_CONF DEFAULT admin_endpoint "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_AUTH_PORT/"
-    else
-        iniset $KEYSTONE_CONF DEFAULT public_endpoint "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:%(public_port)s/"
-        iniset $KEYSTONE_CONF DEFAULT admin_endpoint "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:%(admin_port)s/"
-    fi
-    iniset $KEYSTONE_CONF DEFAULT admin_bind_host "$KEYSTONE_ADMIN_BIND_HOST"
+    iniset $KEYSTONE_CONF DEFAULT public_endpoint "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_SERVICE_PORT/"
+    iniset $KEYSTONE_CONF DEFAULT admin_endpoint "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_AUTH_PORT/"
+    iniset $KEYSTONE_CONF eventlet_server admin_bind_host "$KEYSTONE_ADMIN_BIND_HOST"
 
     # Register SSL certificates if provided
     if is_ssl_enabled_service key; then
         ensure_certificates KEYSTONE
 
-        iniset $KEYSTONE_CONF ssl enable True
-        iniset $KEYSTONE_CONF ssl certfile $KEYSTONE_SSL_CERT
-        iniset $KEYSTONE_CONF ssl keyfile $KEYSTONE_SSL_KEY
+        iniset $KEYSTONE_CONF eventlet_server_ssl enable True
+        iniset $KEYSTONE_CONF eventlet_server_ssl certfile $KEYSTONE_SSL_CERT
+        iniset $KEYSTONE_CONF eventlet_server_ssl keyfile $KEYSTONE_SSL_KEY
     fi
 
     if is_service_enabled tls-proxy; then
         # Set the service ports for a proxy to take the originals
-        iniset $KEYSTONE_CONF DEFAULT public_port $KEYSTONE_SERVICE_PORT_INT
-        iniset $KEYSTONE_CONF DEFAULT admin_port $KEYSTONE_AUTH_PORT_INT
+        iniset $KEYSTONE_CONF eventlet_server public_port $KEYSTONE_SERVICE_PORT_INT
+        iniset $KEYSTONE_CONF eventlet_server admin_port $KEYSTONE_AUTH_PORT_INT
     fi
 
     iniset $KEYSTONE_CONF DEFAULT admin_token "$SERVICE_TOKEN"
@@ -331,7 +317,7 @@
 
     iniset $KEYSTONE_CONF DEFAULT max_token_size 16384
 
-    iniset $KEYSTONE_CONF DEFAULT admin_workers "$API_WORKERS"
+    iniset $KEYSTONE_CONF eventlet_server admin_workers "$API_WORKERS"
     # Public workers will use the server default, typically number of CPU.
 }
 
@@ -490,8 +476,7 @@
         $KEYSTONE_DIR/bin/keystone-manage pki_setup
 
         # Create cache dir
-        sudo mkdir -p $KEYSTONE_AUTH_CACHE_DIR
-        sudo chown $STACK_USER $KEYSTONE_AUTH_CACHE_DIR
+        sudo install -d -o $STACK_USER $KEYSTONE_AUTH_CACHE_DIR
         rm -f $KEYSTONE_AUTH_CACHE_DIR/*
     fi
 }
diff --git a/lib/lvm b/lib/lvm
index 39eed00..d0322c7 100644
--- a/lib/lvm
+++ b/lib/lvm
@@ -138,6 +138,31 @@
     fi
 }
 
+# set_lvm_filter() - Gather all devices currently in use as LVM PVs and
+# build a global_filter entry for /etc/lvm/lvm.conf from them.  Note that
+# this uses every PV currently known to LVM on the system to build its filter.
+#
+# Usage: set_lvm_filter()
+function set_lvm_filter {
+    local filter_suffix='"r|.*|" ]'
+    local filter_string="global_filter = [ "
+    local pv
+    local vg
+    local line
+
+    for pv_info in $(sudo pvs --noheadings -o name); do
+        pv=$(echo -e "${pv_info}" | sed 's/ //g' | sed 's/\/dev\///g')
+        new="\"a|$pv|\", "
+        filter_string=$filter_string$new
+    done
+    filter_string=$filter_string$filter_suffix
+
+    sudo sed -i "/# global_filter = \[*\]/a\    $filter_string" /etc/lvm/lvm.conf
+    echo_summary "set lvm.conf device global_filter to: $filter_string"
+}
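+# Illustrative example (not part of this change): with /dev/sdb and /dev/sdc
+# in use as PVs, the generated line would look like:
+#   global_filter = [ "a|sdb|", "a|sdc|", "r|.*|" ]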
 
 # Restore xtrace
 $MY_XTRACE
diff --git a/lib/neutron b/lib/neutron
new file mode 120000
index 0000000..00cd722
--- /dev/null
+++ b/lib/neutron
@@ -0,0 +1 @@
+neutron-legacy
\ No newline at end of file
diff --git a/lib/neutron b/lib/neutron-legacy
similarity index 95%
rename from lib/neutron
rename to lib/neutron-legacy
index a7aabc5..5ff3921 100755
--- a/lib/neutron
+++ b/lib/neutron-legacy
@@ -153,6 +153,7 @@
 # RHEL's support for namespaces requires using veths with ovs
 Q_OVS_USE_VETH=${Q_OVS_USE_VETH:-False}
 Q_USE_ROOTWRAP=${Q_USE_ROOTWRAP:-True}
+Q_USE_ROOTWRAP_DAEMON=$(trueorfalse True Q_USE_ROOTWRAP_DAEMON)
 # Meta data IP
 Q_META_DATA_IP=${Q_META_DATA_IP:-$SERVICE_HOST}
 # Allow Overlapping IP among subnets
@@ -226,6 +227,9 @@
 else
     NEUTRON_ROOTWRAP=$(get_rootwrap_location neutron)
     Q_RR_COMMAND="sudo $NEUTRON_ROOTWRAP $Q_RR_CONF_FILE"
+    if [[ "$Q_USE_ROOTWRAP_DAEMON" == "True" ]]; then
+        Q_RR_DAEMON_COMMAND="sudo $NEUTRON_ROOTWRAP-daemon $Q_RR_CONF_FILE"
+    fi
 fi
 
 
@@ -422,7 +426,7 @@
 # Set common config for all neutron server and agents.
 function configure_neutron {
     _configure_neutron_common
-    iniset_rpc_backend neutron $NEUTRON_CONF DEFAULT
+    iniset_rpc_backend neutron $NEUTRON_CONF
 
     # goes before q-svc to init Q_SERVICE_PLUGIN_CLASSES
     if is_service_enabled q-lbaas; then
@@ -495,8 +499,7 @@
 # create_neutron_cache_dir() - Part of the _neutron_setup_keystone() process
 function create_neutron_cache_dir {
     # Create cache dir
-    sudo mkdir -p $NEUTRON_AUTH_CACHE_DIR
-    sudo chown $STACK_USER $NEUTRON_AUTH_CACHE_DIR
+    sudo install -d -o $STACK_USER $NEUTRON_AUTH_CACHE_DIR
     rm -f $NEUTRON_AUTH_CACHE_DIR/*
 }
 
@@ -800,10 +803,7 @@
 
 function _create_neutron_conf_dir {
     # Put config files in ``NEUTRON_CONF_DIR`` for everyone to find
-    if [[ ! -d $NEUTRON_CONF_DIR ]]; then
-        sudo mkdir -p $NEUTRON_CONF_DIR
-    fi
-    sudo chown $STACK_USER $NEUTRON_CONF_DIR
+    sudo install -d -o $STACK_USER $NEUTRON_CONF_DIR
 }
 
 # _configure_neutron_common()
@@ -871,7 +871,7 @@
     fi
 
     if is_ssl_enabled_service "nova"; then
-        iniset $NEUTRON_CONF DEFAULT nova_ca_certificates_file "$SSL_BUNDLE_FILE"
+        iniset $NEUTRON_CONF nova cafile $SSL_BUNDLE_FILE
     fi
 
     if is_ssl_enabled_service "neutron"; then
@@ -896,6 +896,9 @@
     iniset $NEUTRON_TEST_CONFIG_FILE DEFAULT debug False
     iniset $NEUTRON_TEST_CONFIG_FILE DEFAULT use_namespaces $Q_USE_NAMESPACE
     iniset $NEUTRON_TEST_CONFIG_FILE agent root_helper "$Q_RR_COMMAND"
+    if [[ "$Q_USE_ROOTWRAP_DAEMON" == "True" ]]; then
+        iniset $NEUTRON_TEST_CONFIG_FILE agent root_helper_daemon "$Q_RR_DAEMON_COMMAND"
+    fi
 
     _neutron_setup_interface_driver $NEUTRON_TEST_CONFIG_FILE
 
@@ -910,6 +913,9 @@
     iniset $Q_DHCP_CONF_FILE DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
     iniset $Q_DHCP_CONF_FILE DEFAULT use_namespaces $Q_USE_NAMESPACE
     iniset $Q_DHCP_CONF_FILE DEFAULT root_helper "$Q_RR_COMMAND"
+    if [[ "$Q_USE_ROOTWRAP_DAEMON" == "True" ]]; then
+        iniset $Q_DHCP_CONF_FILE agent root_helper_daemon "$Q_RR_DAEMON_COMMAND"
+    fi
 
     if ! is_service_enabled q-l3; then
         if [[ "$ENABLE_ISOLATED_METADATA" = "True" ]]; then
@@ -943,6 +949,9 @@
     iniset $Q_L3_CONF_FILE DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
     iniset $Q_L3_CONF_FILE DEFAULT use_namespaces $Q_USE_NAMESPACE
     iniset $Q_L3_CONF_FILE DEFAULT root_helper "$Q_RR_COMMAND"
+    if [[ "$Q_USE_ROOTWRAP_DAEMON" == "True" ]]; then
+        iniset $Q_L3_CONF_FILE agent root_helper_daemon "$Q_RR_DAEMON_COMMAND"
+    fi
 
     _neutron_setup_interface_driver $Q_L3_CONF_FILE
 
@@ -956,6 +965,9 @@
     iniset $Q_META_CONF_FILE DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
     iniset $Q_META_CONF_FILE DEFAULT nova_metadata_ip $Q_META_DATA_IP
     iniset $Q_META_CONF_FILE DEFAULT root_helper "$Q_RR_COMMAND"
+    if [[ "$Q_USE_ROOTWRAP_DAEMON" == "True" ]]; then
+        iniset $Q_META_CONF_FILE agent root_helper_daemon "$Q_RR_DAEMON_COMMAND"
+    fi
 
     # Configures keystone for metadata_agent
     # The third argument "True" sets auth_url needed to communicate with keystone
@@ -1008,6 +1020,9 @@
     # Specify the default root helper prior to agent configuration to
     # ensure that an agent's configuration can override the default
     iniset /$Q_PLUGIN_CONF_FILE agent root_helper "$Q_RR_COMMAND"
+    if [[ "$Q_USE_ROOTWRAP_DAEMON" == "True" ]]; then
+        iniset /$Q_PLUGIN_CONF_FILE agent root_helper_daemon "$Q_RR_DAEMON_COMMAND"
+    fi
     iniset $NEUTRON_CONF DEFAULT verbose True
     iniset $NEUTRON_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
 
@@ -1045,13 +1060,15 @@
     # Configuration for neutron notifations to nova.
     iniset $NEUTRON_CONF DEFAULT notify_nova_on_port_status_changes $Q_NOTIFY_NOVA_PORT_STATUS_CHANGES
     iniset $NEUTRON_CONF DEFAULT notify_nova_on_port_data_changes $Q_NOTIFY_NOVA_PORT_DATA_CHANGES
-    iniset $NEUTRON_CONF DEFAULT nova_url "$NOVA_SERVICE_PROTOCOL://$NOVA_SERVICE_HOST:$NOVA_SERVICE_PORT/v2"
-    iniset $NEUTRON_CONF DEFAULT nova_region_name $REGION_NAME
-    iniset $NEUTRON_CONF DEFAULT nova_admin_username nova
-    iniset $NEUTRON_CONF DEFAULT nova_admin_password $SERVICE_PASSWORD
-    ADMIN_TENANT_ID=$(openstack project list | awk "/ service / { print \$2 }")
-    iniset $NEUTRON_CONF DEFAULT nova_admin_tenant_id $ADMIN_TENANT_ID
-    iniset $NEUTRON_CONF DEFAULT nova_admin_auth_url  "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_AUTH_PORT/v2.0"
+
+    iniset $NEUTRON_CONF nova auth_plugin password
+    iniset $NEUTRON_CONF nova auth_url $KEYSTONE_AUTH_URI
+    iniset $NEUTRON_CONF nova username nova
+    iniset $NEUTRON_CONF nova password $SERVICE_PASSWORD
+    iniset $NEUTRON_CONF nova user_domain_id default
+    iniset $NEUTRON_CONF nova project_name $SERVICE_TENANT_NAME
+    iniset $NEUTRON_CONF nova project_domain_id default
+    iniset $NEUTRON_CONF nova region_name $REGION_NAME
 
     # Configure plugin
     neutron_plugin_configure_service
@@ -1073,10 +1090,8 @@
 # _neutron_deploy_rootwrap_filters() - deploy rootwrap filters to $Q_CONF_ROOTWRAP_D (owned by root).
 function _neutron_deploy_rootwrap_filters {
     local srcdir=$1
-    mkdir -p -m 755 $Q_CONF_ROOTWRAP_D
-    sudo cp -pr $srcdir/etc/neutron/rootwrap.d/* $Q_CONF_ROOTWRAP_D/
-    sudo chown -R root:root $Q_CONF_ROOTWRAP_D
-    sudo chmod 644 $Q_CONF_ROOTWRAP_D/*
+    sudo install -d -o root -m 755 $Q_CONF_ROOTWRAP_D
+    sudo install -o root -m 644 $srcdir/etc/neutron/rootwrap.d/* $Q_CONF_ROOTWRAP_D/
 }
 
 # _neutron_setup_rootwrap() - configure Neutron's rootwrap
@@ -1095,25 +1110,28 @@
     # Set up ``rootwrap.conf``, pointing to ``$NEUTRON_CONF_DIR/rootwrap.d``
     # location moved in newer versions, prefer new location
     if test -r $NEUTRON_DIR/etc/neutron/rootwrap.conf; then
-        sudo cp -p $NEUTRON_DIR/etc/neutron/rootwrap.conf $Q_RR_CONF_FILE
+        sudo install -o root -g root -m 644 $NEUTRON_DIR/etc/neutron/rootwrap.conf $Q_RR_CONF_FILE
     else
-        sudo cp -p $NEUTRON_DIR/etc/rootwrap.conf $Q_RR_CONF_FILE
+        sudo install -o root -g root -m 644 $NEUTRON_DIR/etc/rootwrap.conf $Q_RR_CONF_FILE
     fi
     sudo sed -e "s:^filters_path=.*$:filters_path=$Q_CONF_ROOTWRAP_D:" -i $Q_RR_CONF_FILE
-    sudo chown root:root $Q_RR_CONF_FILE
-    sudo chmod 0644 $Q_RR_CONF_FILE
     # Specify ``rootwrap.conf`` as first parameter to neutron-rootwrap
     ROOTWRAP_SUDOER_CMD="$NEUTRON_ROOTWRAP $Q_RR_CONF_FILE *"
+    ROOTWRAP_DAEMON_SUDOER_CMD="$NEUTRON_ROOTWRAP-daemon $Q_RR_CONF_FILE"
 
     # Set up the rootwrap sudoers for neutron
     TEMPFILE=`mktemp`
     echo "$STACK_USER ALL=(root) NOPASSWD: $ROOTWRAP_SUDOER_CMD" >$TEMPFILE
+    echo "$STACK_USER ALL=(root) NOPASSWD: $ROOTWRAP_DAEMON_SUDOER_CMD" >>$TEMPFILE
     chmod 0440 $TEMPFILE
     sudo chown root:root $TEMPFILE
     sudo mv $TEMPFILE /etc/sudoers.d/neutron-rootwrap
 
     # Update the root_helper
     iniset $NEUTRON_CONF agent root_helper "$Q_RR_COMMAND"
+    if [[ "$Q_USE_ROOTWRAP_DAEMON" == "True" ]]; then
+        iniset $NEUTRON_CONF agent root_helper_daemon "$Q_RR_DAEMON_COMMAND"
+    fi
 }
 
 # Configures keystone integration for neutron service and agents
diff --git a/lib/neutron_plugins/README.md b/lib/neutron_plugins/README.md
index 7192a05..4b220d3 100644
--- a/lib/neutron_plugins/README.md
+++ b/lib/neutron_plugins/README.md
@@ -13,7 +13,7 @@
 
 functions
 ---------
-``lib/neutron`` calls the following functions when the ``$Q_PLUGIN`` is enabled
+``lib/neutron-legacy`` calls the following functions when the ``$Q_PLUGIN`` is enabled
 
 * ``neutron_plugin_create_nova_conf`` :
   set ``NOVA_VIF_DRIVER`` and optionally set options in nova_conf
diff --git a/lib/neutron_plugins/nec b/lib/neutron_plugins/nec
index 3b1a257..9ea7338 100644
--- a/lib/neutron_plugins/nec
+++ b/lib/neutron_plugins/nec
@@ -1,131 +1,10 @@
 #!/bin/bash
-#
-# Neutron NEC OpenFlow plugin
-# ---------------------------
 
-# Save trace setting
-NEC_XTRACE=$(set +o | grep xtrace)
-set +o xtrace
+# This file is needed so Q_PLUGIN=nec will work.
 
-# Configuration parameters
-OFC_HOST=${OFC_HOST:-127.0.0.1}
-OFC_PORT=${OFC_PORT:-8888}
-
-OFC_API_HOST=${OFC_API_HOST:-$OFC_HOST}
-OFC_API_PORT=${OFC_API_PORT:-$OFC_PORT}
-OFC_OFP_HOST=${OFC_OFP_HOST:-$OFC_HOST}
-OFC_OFP_PORT=${OFC_OFP_PORT:-6633}
-OFC_DRIVER=${OFC_DRIVER:-trema}
-OFC_RETRY_MAX=${OFC_RETRY_MAX:-0}
-OFC_RETRY_INTERVAL=${OFC_RETRY_INTERVAL:-1}
-
-# Main logic
-# ---------------------------
-
-source $TOP_DIR/lib/neutron_plugins/ovs_base
-
-function neutron_plugin_create_nova_conf {
-    _neutron_ovs_base_configure_nova_vif_driver
-}
-
-function neutron_plugin_install_agent_packages {
-    # SKIP_OVS_INSTALL is useful when we want to use Open vSwitch whose
-    # version is different from the version provided by the distribution.
-    if [[ "$SKIP_OVS_INSTALL" = "True" ]]; then
-        echo "You need to install Open vSwitch manually."
-        return
-    fi
-    _neutron_ovs_base_install_agent_packages
-}
-
-function neutron_plugin_configure_common {
-    Q_PLUGIN_CONF_PATH=etc/neutron/plugins/nec
-    Q_PLUGIN_CONF_FILENAME=nec.ini
-    Q_PLUGIN_CLASS="neutron.plugins.nec.nec_plugin.NECPluginV2"
-}
-
-function neutron_plugin_configure_debug_command {
-    _neutron_ovs_base_configure_debug_command
-}
-
-function neutron_plugin_configure_dhcp_agent {
-    :
-}
-
-function neutron_plugin_configure_l3_agent {
-    _neutron_ovs_base_configure_l3_agent
-}
-
-function _quantum_plugin_setup_bridge {
-    if [[ "$SKIP_OVS_BRIDGE_SETUP" = "True" ]]; then
-        return
-    fi
-    # Set up integration bridge
-    _neutron_ovs_base_setup_bridge $OVS_BRIDGE
-    # Generate datapath ID from HOST_IP
-    local dpid=$(printf "%07d%03d%03d%03d\n" ${HOST_IP//./ })
-    sudo ovs-vsctl --no-wait set Bridge $OVS_BRIDGE other-config:datapath-id=$dpid
-    sudo ovs-vsctl --no-wait set-fail-mode $OVS_BRIDGE secure
-    sudo ovs-vsctl --no-wait set-controller $OVS_BRIDGE tcp:$OFC_OFP_HOST:$OFC_OFP_PORT
-    if [ -n "$OVS_INTERFACE" ]; then
-        sudo ovs-vsctl --no-wait -- --may-exist add-port $OVS_BRIDGE $OVS_INTERFACE
-    fi
-    _neutron_setup_ovs_tunnels $OVS_BRIDGE
-}
-
-function neutron_plugin_configure_plugin_agent {
-    _quantum_plugin_setup_bridge
-
-    AGENT_BINARY="$NEUTRON_BIN_DIR/neutron-nec-agent"
-
-    _neutron_ovs_base_configure_firewall_driver
-}
-
-function neutron_plugin_configure_service {
-    iniset $NEUTRON_CONF DEFAULT api_extensions_path neutron/plugins/nec/extensions/
-    iniset /$Q_PLUGIN_CONF_FILE ofc host $OFC_API_HOST
-    iniset /$Q_PLUGIN_CONF_FILE ofc port $OFC_API_PORT
-    iniset /$Q_PLUGIN_CONF_FILE ofc driver $OFC_DRIVER
-    iniset /$Q_PLUGIN_CONF_FILE ofc api_retry_max OFC_RETRY_MAX
-    iniset /$Q_PLUGIN_CONF_FILE ofc api_retry_interval OFC_RETRY_INTERVAL
-
-    _neutron_ovs_base_configure_firewall_driver
-}
-
-function neutron_plugin_setup_interface_driver {
-    local conf_file=$1
-    iniset $conf_file DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
-    iniset $conf_file DEFAULT ovs_use_veth True
-}
-
-# Utility functions
-# ---------------------------
-
-# Setup OVS tunnel manually
-function _neutron_setup_ovs_tunnels {
-    local bridge=$1
-    local id=0
-    GRE_LOCAL_IP=${GRE_LOCAL_IP:-$HOST_IP}
-    if [ -n "$GRE_REMOTE_IPS" ]; then
-        for ip in ${GRE_REMOTE_IPS//:/ }; do
-            if [[ "$ip" == "$GRE_LOCAL_IP" ]]; then
-                continue
-            fi
-            sudo ovs-vsctl --no-wait add-port $bridge gre$id -- \
-                set Interface gre$id type=gre options:remote_ip=$ip
-            id=`expr $id + 1`
-        done
-    fi
-}
-
+# FIXME(amotoki): This function should not be here, but unfortunately
+# devstack calls it before the external plugins are fetched
 function has_neutron_plugin_security_group {
     # 0 means True here
     return 0
 }
-
-function neutron_plugin_check_adv_test_requirements {
-    is_service_enabled q-agt && is_service_enabled q-dhcp && return 0
-}
-
-# Restore xtrace
-$NEC_XTRACE
diff --git a/lib/neutron_thirdparty/README.md b/lib/neutron_thirdparty/README.md
index 5655e0b..905ae77 100644
--- a/lib/neutron_thirdparty/README.md
+++ b/lib/neutron_thirdparty/README.md
@@ -10,7 +10,7 @@
 
 functions
 ---------
-``lib/neutron`` calls the following functions when the ``<third_party>`` is enabled
+``lib/neutron-legacy`` calls the following functions when the ``<third_party>`` is enabled
 
 functions to be implemented
 * ``configure_<third_party>``:
diff --git a/lib/neutron_thirdparty/trema b/lib/neutron_thirdparty/trema
deleted file mode 100644
index 075f013..0000000
--- a/lib/neutron_thirdparty/trema
+++ /dev/null
@@ -1,119 +0,0 @@
-#!/bin/bash
-#
-# Trema Sliceable Switch
-# ----------------------
-
-# Trema is a Full-Stack OpenFlow Framework in Ruby and C
-# https://github.com/trema/trema
-#
-# Trema Sliceable Switch is an OpenFlow controller which provides
-# virtual layer-2 network slices.
-# https://github.com/trema/apps/wiki
-
-# Trema Sliceable Switch (OpenFlow Controller)
-TREMA_APPS_REPO=${TREMA_APPS_REPO:-https://github.com/trema/apps.git}
-TREMA_APPS_BRANCH=${TREMA_APPS_BRANCH:-master}
-
-# Save trace setting
-TREMA3_XTRACE=$(set +o | grep xtrace)
-set +o xtrace
-
-TREMA_DIR=${TREMA_DIR:-$DEST/trema}
-TREMA_SS_DIR="$TREMA_DIR/apps/sliceable_switch"
-
-TREMA_DATA_DIR=${TREMA_DATA_DIR:-$DATA_DIR/trema}
-TREMA_SS_ETC_DIR=$TREMA_DATA_DIR/sliceable_switch/etc
-TREMA_SS_DB_DIR=$TREMA_DATA_DIR/sliceable_switch/db
-TREMA_SS_SCRIPT_DIR=$TREMA_DATA_DIR/sliceable_switch/script
-TREMA_TMP_DIR=$TREMA_DATA_DIR/trema
-
-TREMA_LOG_LEVEL=${TREMA_LOG_LEVEL:-info}
-
-TREMA_SS_CONFIG=$TREMA_SS_ETC_DIR/sliceable.conf
-TREMA_SS_APACHE_CONFIG=$(apache_site_config_for sliceable_switch)
-
-# configure_trema - Set config files, create data dirs, etc
-function configure_trema {
-    # prepare dir
-    for d in $TREMA_SS_ETC_DIR $TREMA_SS_DB_DIR $TREMA_SS_SCRIPT_DIR; do
-        sudo mkdir -p $d
-        sudo chown -R `whoami` $d
-    done
-    sudo mkdir -p $TREMA_TMP_DIR
-}
-
-# init_trema - Initialize databases, etc.
-function init_trema {
-    local _pwd=$(pwd)
-
-    # Initialize databases for Sliceable Switch
-    cd $TREMA_SS_DIR
-    rm -f filter.db slice.db
-    ./create_tables.sh
-    mv filter.db slice.db $TREMA_SS_DB_DIR
-    # Make sure that apache cgi has write access to the databases
-    sudo chown -R www-data.www-data $TREMA_SS_DB_DIR
-    cd $_pwd
-
-    # Setup HTTP Server for sliceable_switch
-    cp $TREMA_SS_DIR/{Slice.pm,Filter.pm,config.cgi} $TREMA_SS_SCRIPT_DIR
-    sed -i -e "s|/home/sliceable_switch/db|$TREMA_SS_DB_DIR|" \
-        $TREMA_SS_SCRIPT_DIR/config.cgi
-
-    sudo cp $TREMA_SS_DIR/apache/sliceable_switch $TREMA_SS_APACHE_CONFIG
-    sudo sed -i -e "s|/home/sliceable_switch/script|$TREMA_SS_SCRIPT_DIR|" \
-        $TREMA_SS_APACHE_CONFIG
-    # TODO(gabriel-bezerra): use some function from lib/apache to enable these modules
-    sudo a2enmod rewrite actions
-    enable_apache_site sliceable_switch
-
-    cp $TREMA_SS_DIR/sliceable_switch_null.conf $TREMA_SS_CONFIG
-    sed -i -e "s|^\$apps_dir.*$|\$apps_dir = \"$TREMA_DIR/apps\"|" \
-        -e "s|^\$db_dir.*$|\$db_dir = \"$TREMA_SS_DB_DIR\"|" \
-        $TREMA_SS_CONFIG
-}
-
-function gem_install {
-    [[ "$OFFLINE" = "True" ]] && return
-    [ -n "$RUBYGEMS_CMD" ] || get_gem_command
-
-    local pkg=$1
-    $RUBYGEMS_CMD list | grep "^${pkg} " && return
-    sudo $RUBYGEMS_CMD install $pkg
-}
-
-function get_gem_command {
-    # Trema requires ruby 1.8, so gem1.8 is checked first
-    RUBYGEMS_CMD=$(which gem1.8 || which gem)
-    if [ -z "$RUBYGEMS_CMD" ]; then
-        echo "Warning: ruby gems command not found."
-    fi
-}
-
-function install_trema {
-    # Trema
-    gem_install trema
-    # Sliceable Switch
-    git_clone $TREMA_APPS_REPO $TREMA_DIR/apps $TREMA_APPS_BRANCH
-    make -C $TREMA_DIR/apps/topology
-    make -C $TREMA_DIR/apps/flow_manager
-    make -C $TREMA_DIR/apps/sliceable_switch
-}
-
-function start_trema {
-    restart_apache_server
-
-    sudo LOGGING_LEVEL=$TREMA_LOG_LEVEL TREMA_TMP=$TREMA_TMP_DIR \
-        trema run -d -c $TREMA_SS_CONFIG
-}
-
-function stop_trema {
-    sudo TREMA_TMP=$TREMA_TMP_DIR trema killall
-}
-
-function check_trema {
-    :
-}
-
-# Restore xtrace
-$TREMA3_XTRACE
diff --git a/lib/nova b/lib/nova
index e9e78c7..502bb35 100644
--- a/lib/nova
+++ b/lib/nova
@@ -81,7 +81,7 @@
 
 # Option to enable/disable config drive
 # NOTE: Set FORCE_CONFIG_DRIVE="False" to turn OFF config drive
-FORCE_CONFIG_DRIVE=${FORCE_CONFIG_DRIVE:-"always"}
+FORCE_CONFIG_DRIVE=${FORCE_CONFIG_DRIVE:-"True"}
 
 # Nova supports pluggable schedulers.  The default ``FilterScheduler``
 # should work in most cases.
@@ -232,16 +232,15 @@
     if [[ -d $NOVA_CONF_DIR/rootwrap.d ]]; then
         sudo rm -rf $NOVA_CONF_DIR/rootwrap.d
     fi
+
     # Deploy filters to /etc/nova/rootwrap.d
-    sudo mkdir -m 755 $NOVA_CONF_DIR/rootwrap.d
-    sudo cp $NOVA_DIR/etc/nova/rootwrap.d/*.filters $NOVA_CONF_DIR/rootwrap.d
-    sudo chown -R root:root $NOVA_CONF_DIR/rootwrap.d
-    sudo chmod 644 $NOVA_CONF_DIR/rootwrap.d/*
+    sudo install -d -o root -g root -m 755 $NOVA_CONF_DIR/rootwrap.d
+    sudo install -o root -g root -m 644  $NOVA_DIR/etc/nova/rootwrap.d/*.filters $NOVA_CONF_DIR/rootwrap.d
+
     # Set up rootwrap.conf, pointing to /etc/nova/rootwrap.d
-    sudo cp $NOVA_DIR/etc/nova/rootwrap.conf $NOVA_CONF_DIR/
+    sudo install -o root -g root -m 644 $NOVA_DIR/etc/nova/rootwrap.conf $NOVA_CONF_DIR
     sudo sed -e "s:^filters_path=.*$:filters_path=$NOVA_CONF_DIR/rootwrap.d:" -i $NOVA_CONF_DIR/rootwrap.conf
-    sudo chown root:root $NOVA_CONF_DIR/rootwrap.conf
-    sudo chmod 0644 $NOVA_CONF_DIR/rootwrap.conf
+
     # Specify rootwrap.conf as first parameter to nova-rootwrap
     local rootwrap_sudoer_cmd="$NOVA_ROOTWRAP $NOVA_CONF_DIR/rootwrap.conf *"
 
@@ -256,12 +255,9 @@
 # configure_nova() - Set config files, create data dirs, etc
 function configure_nova {
     # Put config files in ``/etc/nova`` for everyone to find
-    if [[ ! -d $NOVA_CONF_DIR ]]; then
-        sudo mkdir -p $NOVA_CONF_DIR
-    fi
-    sudo chown $STACK_USER $NOVA_CONF_DIR
+    sudo install -d -o $STACK_USER $NOVA_CONF_DIR
 
-    cp -p $NOVA_DIR/etc/nova/policy.json $NOVA_CONF_DIR
+    install_default_policy nova
 
     configure_nova_rootwrap
 
@@ -318,8 +314,7 @@
         # ----------------
 
         # Nova stores each instance in its own directory.
-        sudo mkdir -p $NOVA_INSTANCES_PATH
-        sudo chown -R $STACK_USER $NOVA_INSTANCES_PATH
+        sudo install -d -o $STACK_USER $NOVA_INSTANCES_PATH
 
         # You can specify a different disk to be mounted and used for backing the
         # virtual machines.  If there is a partition labeled nova-instances we
@@ -437,7 +432,7 @@
     iniset $NOVA_CONF DEFAULT s3_host "$SERVICE_HOST"
     iniset $NOVA_CONF DEFAULT s3_port "$S3_SERVICE_PORT"
     iniset $NOVA_CONF DEFAULT my_ip "$HOST_IP"
-    iniset $NOVA_CONF DEFAULT sql_connection `database_connection_url nova`
+    iniset $NOVA_CONF database connection `database_connection_url nova`
     iniset $NOVA_CONF DEFAULT instance_name_template "${INSTANCE_NAME_PREFIX}%08x"
     iniset $NOVA_CONF osapi_v3 enabled "True"
 
@@ -471,7 +466,7 @@
 
     if [ -n "$NOVA_STATE_PATH" ]; then
         iniset $NOVA_CONF DEFAULT state_path "$NOVA_STATE_PATH"
-        iniset $NOVA_CONF DEFAULT lock_path "$NOVA_STATE_PATH"
+        iniset $NOVA_CONF oslo_concurrency lock_path "$NOVA_STATE_PATH"
     fi
     if [ -n "$NOVA_INSTANCES_PATH" ]; then
         iniset $NOVA_CONF DEFAULT instances_path "$NOVA_INSTANCES_PATH"
@@ -537,13 +532,15 @@
 
     iniset $NOVA_CONF DEFAULT ec2_dmz_host "$EC2_DMZ_HOST"
     iniset $NOVA_CONF DEFAULT keystone_ec2_url $KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_SERVICE_PORT/v2.0/ec2tokens
-    iniset_rpc_backend nova $NOVA_CONF DEFAULT
+    iniset_rpc_backend nova $NOVA_CONF
     iniset $NOVA_CONF glance api_servers "${GLANCE_SERVICE_PROTOCOL}://${GLANCE_HOSTPORT}"
 
     iniset $NOVA_CONF DEFAULT osapi_compute_workers "$API_WORKERS"
     iniset $NOVA_CONF DEFAULT ec2_workers "$API_WORKERS"
     iniset $NOVA_CONF DEFAULT metadata_workers "$API_WORKERS"
 
+    iniset $NOVA_CONF cinder os_region_name "$REGION_NAME"
+
     if [[ "$NOVA_BACKEND" == "LVM" ]]; then
         iniset $NOVA_CONF libvirt images_type "lvm"
         iniset $NOVA_CONF libvirt images_volume_group $DEFAULT_VOLUME_GROUP_NAME
@@ -575,7 +572,7 @@
 function init_nova_cells {
     if is_service_enabled n-cell; then
         cp $NOVA_CONF $NOVA_CELLS_CONF
-        iniset $NOVA_CELLS_CONF DEFAULT sql_connection `database_connection_url $NOVA_CELLS_DB`
+        iniset $NOVA_CELLS_CONF database connection `database_connection_url $NOVA_CELLS_DB`
         iniset $NOVA_CELLS_CONF DEFAULT rabbit_virtual_host child_cell
         iniset $NOVA_CELLS_CONF DEFAULT dhcpbridge_flagfile $NOVA_CELLS_CONF
         iniset $NOVA_CELLS_CONF cells enable True
@@ -601,8 +598,7 @@
 # create_nova_cache_dir() - Part of the init_nova() process
 function create_nova_cache_dir {
     # Create cache dir
-    sudo mkdir -p $NOVA_AUTH_CACHE_DIR
-    sudo chown $STACK_USER $NOVA_AUTH_CACHE_DIR
+    sudo install -d -o $STACK_USER $NOVA_AUTH_CACHE_DIR
     rm -f $NOVA_AUTH_CACHE_DIR/*
 }
 
@@ -619,8 +615,7 @@
 # create_nova_keys_dir() - Part of the init_nova() process
 function create_nova_keys_dir {
     # Create keys dir
-    sudo mkdir -p ${NOVA_STATE_PATH}/keys
-    sudo chown -R $STACK_USER ${NOVA_STATE_PATH}
+    sudo install -d -o $STACK_USER ${NOVA_STATE_PATH} ${NOVA_STATE_PATH}/keys
 }
 
 # init_nova() - Initialize databases, etc.
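A recurring cleanup in this change collapses mkdir/chown/chmod sequences into a single install(1) call. A minimal sketch of the equivalence, with an illustrative path and user rather than anything taken from the patch:

    # before: create, then fix ownership and mode in separate steps
    sudo mkdir -p /etc/example
    sudo chown stack /etc/example
    sudo chmod 755 /etc/example
    # after: one command; -d creates the directory, -o sets the owner, -m the mode
    sudo install -d -o stack -m 755 /etc/example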
diff --git a/lib/rpc_backend b/lib/rpc_backend
index ff22bbf..3033cbe 100644
--- a/lib/rpc_backend
+++ b/lib/rpc_backend
@@ -233,11 +233,20 @@
     fi
 }
 
+# Build the transport URL string for the enabled messaging backend
+function get_transport_url {
+    if is_service_enabled qpid || [ -n "$QPID_HOST" ]; then
+        echo "qpid://$QPID_USERNAME:$QPID_PASSWORD@$QPID_HOST:5672/"
+    elif is_service_enabled rabbit || { [ -n "$RABBIT_HOST" ] && [ -n "$RABBIT_PASSWORD" ]; }; then
+        echo "rabbit://$RABBIT_USERID:$RABBIT_PASSWORD@$RABBIT_HOST:5672/"
+    fi
+}
+
 # iniset configuration
 function iniset_rpc_backend {
     local package=$1
     local file=$2
-    local section=$3
+    local section=${3:-DEFAULT}
     if is_service_enabled zeromq; then
         iniset $file $section rpc_backend "zmq"
         iniset $file $section rpc_zmq_host `hostname`
@@ -263,9 +272,9 @@
         fi
     elif is_service_enabled rabbit || { [ -n "$RABBIT_HOST" ] && [ -n "$RABBIT_PASSWORD" ]; }; then
         iniset $file $section rpc_backend "rabbit"
-        iniset $file $section rabbit_hosts $RABBIT_HOST
-        iniset $file $section rabbit_password $RABBIT_PASSWORD
-        iniset $file $section rabbit_userid $RABBIT_USERID
+        iniset $file oslo_messaging_rabbit rabbit_hosts $RABBIT_HOST
+        iniset $file oslo_messaging_rabbit rabbit_password $RABBIT_PASSWORD
+        iniset $file oslo_messaging_rabbit rabbit_userid $RABBIT_USERID
     fi
 }
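A usage sketch for the two helpers above, assuming a RabbitMQ backend with RABBIT_HOST, RABBIT_USERID and RABBIT_PASSWORD already set (values illustrative):

    # emits something like rabbit://stackrabbit:secret@10.0.0.5:5672/
    url=$(get_transport_url)
    # the section argument now defaults to DEFAULT, and the rabbit
    # credentials are written to [oslo_messaging_rabbit] instead of [DEFAULT]
    iniset_rpc_backend nova $NOVA_CONF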
 
diff --git a/lib/sahara b/lib/sahara
index 9b2e9c4..a965f55 100644
--- a/lib/sahara
+++ b/lib/sahara
@@ -101,23 +101,14 @@
 
 # configure_sahara() - Set config files, create data dirs, etc
 function configure_sahara {
-
-    if [[ ! -d $SAHARA_CONF_DIR ]]; then
-        sudo mkdir -p $SAHARA_CONF_DIR
-    fi
-    sudo chown $STACK_USER $SAHARA_CONF_DIR
+    sudo install -d -o $STACK_USER $SAHARA_CONF_DIR
 
     if [[ -f $SAHARA_DIR/etc/sahara/policy.json ]]; then
         cp -p $SAHARA_DIR/etc/sahara/policy.json $SAHARA_CONF_DIR
     fi
 
-    # Copy over sahara configuration file and configure common parameters.
-    cp $SAHARA_DIR/etc/sahara/sahara.conf.sample $SAHARA_CONF_FILE
-
     # Create auth cache dir
-    sudo mkdir -p $SAHARA_AUTH_CACHE_DIR
-    sudo chown $STACK_USER $SAHARA_AUTH_CACHE_DIR
-    sudo chmod 700 $SAHARA_AUTH_CACHE_DIR
+    sudo install -d -o $STACK_USER -m 700 $SAHARA_AUTH_CACHE_DIR
     rm -rf $SAHARA_AUTH_CACHE_DIR/*
 
     configure_auth_token_middleware $SAHARA_CONF_FILE sahara $SAHARA_AUTH_CACHE_DIR
@@ -127,7 +118,7 @@
     if is_service_enabled ceilometer; then
         iniset $SAHARA_CONF_FILE DEFAULT enable_notifications "true"
         iniset $SAHARA_CONF_FILE DEFAULT notification_driver "messaging"
-        iniset_rpc_backend sahara $SAHARA_CONF_FILE DEFAULT
+        iniset_rpc_backend sahara $SAHARA_CONF_FILE
     fi
 
     iniset $SAHARA_CONF_FILE DEFAULT verbose True
@@ -139,14 +130,12 @@
 
     if is_service_enabled neutron; then
         iniset $SAHARA_CONF_FILE DEFAULT use_neutron true
-        iniset $SAHARA_CONF_FILE DEFAULT use_floating_ips true
 
         if is_ssl_enabled_service "neutron" || is_service_enabled tls-proxy; then
             iniset $SAHARA_CONF_FILE neutron ca_file $SSL_BUNDLE_FILE
         fi
     else
         iniset $SAHARA_CONF_FILE DEFAULT use_neutron false
-        iniset $SAHARA_CONF_FILE DEFAULT use_floating_ips false
     fi
 
     if is_service_enabled heat; then
diff --git a/lib/swift b/lib/swift
index 5005ba0..28ef7de 100644
--- a/lib/swift
+++ b/lib/swift
@@ -314,8 +314,8 @@
     # Make sure to kill all swift processes first
     swift-init --run-dir=${SWIFT_DATA_DIR}/run all stop || true
 
-    sudo mkdir -p ${SWIFT_CONF_DIR}/{object,container,account}-server
-    sudo chown -R ${STACK_USER}: ${SWIFT_CONF_DIR}
+    sudo install -d -o ${STACK_USER} ${SWIFT_CONF_DIR}
+    sudo install -d -o ${STACK_USER} ${SWIFT_CONF_DIR}/{object,container,account}-server
 
     if [[ "$SWIFT_CONF_DIR" != "/etc/swift" ]]; then
         # Some swift tools are hard-coded to use ``/etc/swift`` and are apparently not going to be fixed.
@@ -386,7 +386,11 @@
     # Configure Ceilometer
     if is_service_enabled ceilometer; then
         iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer "set log_level" "WARN"
-        iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer use "egg:ceilometer#swift"
+        iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer paste.filter_factory "ceilometermiddleware.swift:filter_factory"
+        iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer control_exchange "swift"
+        iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer url $(get_transport_url)
+        iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer driver "messaging"
+        iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer topic "notifications"
         SWIFT_EXTRAS_MIDDLEWARE_LAST="${SWIFT_EXTRAS_MIDDLEWARE_LAST} ceilometer"
     fi
 
@@ -423,16 +427,8 @@
     # IDs will be included in all of its log messages.
     iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken log_name swift
 
-    # NOTE(jamielennox): swift cannot use the regular configure_auth_token_middleware function because swift
-    # doesn't use oslo.config which is the only way to configure auth plugins with the middleare.
     iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken paste.filter_factory keystonemiddleware.auth_token:filter_factory
-    iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken identity_uri $KEYSTONE_AUTH_URI
-    iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken admin_user swift
-    iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken admin_password $SERVICE_PASSWORD
-    iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken admin_tenant_name $SERVICE_TENANT_NAME
-    iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken auth_uri $KEYSTONE_SERVICE_URI
-    iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken cafile $SSL_BUNDLE_FILE
-    iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken signing_dir $SWIFT_AUTH_CACHE_DIR
+    configure_auth_token_middleware $SWIFT_CONFIG_PROXY_SERVER swift $SWIFT_AUTH_CACHE_DIR filter:authtoken
     iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken delay_auth_decision 1
     iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken cache swift.cache
     iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:authtoken include_service_catalog False
@@ -449,16 +445,15 @@
 
     if is_service_enabled swift3; then
         cat <<EOF >>${SWIFT_CONFIG_PROXY_SERVER}
-# NOTE(chmou): s3token middleware is not updated yet to use only
-# username and password.
 [filter:s3token]
 paste.filter_factory = keystoneclient.middleware.s3_token:filter_factory
 auth_port = ${KEYSTONE_AUTH_PORT}
 auth_host = ${KEYSTONE_AUTH_HOST}
 auth_protocol = ${KEYSTONE_AUTH_PROTOCOL}
 cafile = ${SSL_BUNDLE_FILE}
-auth_token = ${SERVICE_TOKEN}
-admin_token = ${SERVICE_TOKEN}
+admin_user = swift
+admin_tenant_name = ${SERVICE_TENANT_NAME}
+admin_password = ${SERVICE_PASSWORD}
 
 [filter:swift3]
 use = egg:swift3#swift3
@@ -543,8 +538,7 @@
     # changing the permissions so we can run it as our user.
 
     local user_group=$(id -g ${STACK_USER})
-    sudo mkdir -p ${SWIFT_DATA_DIR}/{drives,cache,run,logs}
-    sudo chown -R ${STACK_USER}:${user_group} ${SWIFT_DATA_DIR}
+    sudo install -d -o ${STACK_USER} -g ${user_group} ${SWIFT_DATA_DIR}/{drives,cache,run,logs}
 
     # Create a loopback disk and format it to XFS.
     if [[ -e ${SWIFT_DISK_IMAGE} ]]; then
@@ -684,8 +678,7 @@
     } && popd >/dev/null
 
     # Create cache dir
-    sudo mkdir -p $SWIFT_AUTH_CACHE_DIR
-    sudo chown $STACK_USER $SWIFT_AUTH_CACHE_DIR
+    sudo install -d -o ${STACK_USER} $SWIFT_AUTH_CACHE_DIR
     rm -f $SWIFT_AUTH_CACHE_DIR/*
 }
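With the ceilometer hunk above, the proxy-server config ends up with roughly the following filter section; the url value comes from get_transport_url and is illustrative here:

    [filter:ceilometer]
    paste.filter_factory = ceilometermiddleware.swift:filter_factory
    control_exchange = swift
    url = rabbit://stackrabbit:secret@10.0.0.5:5672/
    driver = messaging
    topic = notifications
    set log_level = WARN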
 
diff --git a/lib/tempest b/lib/tempest
index f856ce0..4ece349 100644
--- a/lib/tempest
+++ b/lib/tempest
@@ -66,7 +66,7 @@
 # This must be False on stable branches, as master tempest
 # deps do not match stable branch deps. Set this to True to
 # have tempest installed in devstack by default.
-INSTALL_TEMPEST=${INSTALL_TEMPEST:-"False"}
+INSTALL_TEMPEST=${INSTALL_TEMPEST:-"True"}
 
 
 BOTO_MATERIALS_PATH="$FILES/images/s3-materials/cirros-${CIRROS_VERSION}"
@@ -170,12 +170,8 @@
 
     # Create tempest.conf from tempest.conf.sample
     # copy every time, because the image UUIDs are going to change
-    if [[ ! -d $TEMPEST_CONFIG_DIR ]]; then
-        sudo mkdir -p $TEMPEST_CONFIG_DIR
-    fi
-    sudo chown $STACK_USER $TEMPEST_CONFIG_DIR
-    cp $TEMPEST_DIR/etc/tempest.conf.sample $TEMPEST_CONFIG
-    chmod 644 $TEMPEST_CONFIG
+    sudo install -d -o $STACK_USER $TEMPEST_CONFIG_DIR
+    install -m 644 $TEMPEST_DIR/etc/tempest.conf.sample $TEMPEST_CONFIG
 
     password=${ADMIN_PASSWORD:-secrete}
 
@@ -275,7 +271,7 @@
 
     iniset $TEMPEST_CONFIG DEFAULT use_syslog $SYSLOG
     # Oslo
-    iniset $TEMPEST_CONFIG DEFAULT lock_path $TEMPEST_STATE_PATH
+    iniset $TEMPEST_CONFIG oslo_concurrency lock_path $TEMPEST_STATE_PATH
     mkdir -p $TEMPEST_STATE_PATH
     iniset $TEMPEST_CONFIG DEFAULT use_stderr False
     iniset $TEMPEST_CONFIG DEFAULT log_file tempest.log
@@ -315,6 +311,7 @@
 
     # Auth
     iniset $TEMPEST_CONFIG auth allow_tenant_isolation ${TEMPEST_ALLOW_TENANT_ISOLATION:-True}
+    iniset $TEMPEST_CONFIG auth tempest_roles "Member"
 
     # Compute
     iniset $TEMPEST_CONFIG compute ssh_user ${DEFAULT_INSTANCE_USER:-cirros} # DEPRECATED
@@ -349,6 +346,8 @@
     iniset $TEMPEST_CONFIG compute-feature-enabled change_password False
     iniset $TEMPEST_CONFIG compute-feature-enabled block_migration_for_live_migration ${USE_BLOCK_MIGRATION_FOR_LIVE_MIGRATION:-False}
     iniset $TEMPEST_CONFIG compute-feature-enabled api_extensions $compute_api_extensions
+    # TODO(mriedem): Remove the preserve_ports flag when Juno is end of life.
+    iniset $TEMPEST_CONFIG compute-feature-enabled preserve_ports True
 
     # Compute admin
     iniset $TEMPEST_CONFIG "compute-admin" username $ADMIN_USERNAME
@@ -396,6 +395,7 @@
         fi
         iniset $TEMPEST_CONFIG orchestration instance_type "m1.heat"
         iniset $TEMPEST_CONFIG orchestration build_timeout 900
+        iniset $TEMPEST_CONFIG orchestration stack_owner_role "_member_"
     fi
 
     # Scenario
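A sketch of where the moved and added tempest options land after this hunk; the lock path follows TEMPEST_STATE_PATH and is only illustrative:

    [oslo_concurrency]
    lock_path = /opt/stack/data/tempest

    [auth]
    tempest_roles = Member

    [compute-feature-enabled]
    preserve_ports = True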
diff --git a/lib/trove b/lib/trove
index d777983..5dd4f23 100644
--- a/lib/trove
+++ b/lib/trove
@@ -33,19 +33,19 @@
 GITDIR["python-troveclient"]=$DEST/python-troveclient
 
 TROVE_DIR=$DEST/trove
-TROVE_CONF_DIR=/etc/trove
-TROVE_CONF=$TROVE_CONF_DIR/trove.conf
-TROVE_TASKMANAGER_CONF=$TROVE_CONF_DIR/trove-taskmanager.conf
-TROVE_CONDUCTOR_CONF=$TROVE_CONF_DIR/trove-conductor.conf
-TROVE_GUESTAGENT_CONF=$TROVE_CONF_DIR/trove-guestagent.conf
-TROVE_API_PASTE_INI=$TROVE_CONF_DIR/api-paste.ini
+TROVE_CONF_DIR=${TROVE_CONF_DIR:-/etc/trove}
+TROVE_CONF=${TROVE_CONF:-$TROVE_CONF_DIR/trove.conf}
+TROVE_TASKMANAGER_CONF=${TROVE_TASKMANAGER_CONF:-$TROVE_CONF_DIR/trove-taskmanager.conf}
+TROVE_CONDUCTOR_CONF=${TROVE_CONDUCTOR_CONF:-$TROVE_CONF_DIR/trove-conductor.conf}
+TROVE_GUESTAGENT_CONF=${TROVE_GUESTAGENT_CONF:-$TROVE_CONF_DIR/trove-guestagent.conf}
+TROVE_API_PASTE_INI=${TROVE_API_PASTE_INI:-$TROVE_CONF_DIR/api-paste.ini}
 
 TROVE_LOCAL_CONF_DIR=$TROVE_DIR/etc/trove
 TROVE_LOCAL_API_PASTE_INI=$TROVE_LOCAL_CONF_DIR/api-paste.ini
 TROVE_AUTH_CACHE_DIR=${TROVE_AUTH_CACHE_DIR:-/var/cache/trove}
 TROVE_DATASTORE_TYPE=${TROVE_DATASTORE_TYPE:-"mysql"}
-TROVE_DATASTORE_VERSION=${TROVE_DATASTORE_VERSION:-"5.5"}
-TROVE_DATASTORE_PACKAGE=${TROVE_DATASTORE_PACKAGE:-"mysql-server-5.5"}
+TROVE_DATASTORE_VERSION=${TROVE_DATASTORE_VERSION:-"5.6"}
+TROVE_DATASTORE_PACKAGE=${TROVE_DATASTORE_PACKAGE:-"mysql-server-5.6"}
 
 # Support entry points installation of console scripts
 if [[ -d $TROVE_DIR/bin ]]; then
@@ -121,10 +121,7 @@
     setup_develop $TROVE_DIR
 
     # Create the trove conf dir and cache dirs if they don't exist
-    sudo mkdir -p ${TROVE_CONF_DIR}
-    sudo mkdir -p ${TROVE_AUTH_CACHE_DIR}
-    sudo chown -R $STACK_USER: ${TROVE_CONF_DIR}
-    sudo chown -R $STACK_USER: ${TROVE_AUTH_CACHE_DIR}
+    sudo install -d -o $STACK_USER ${TROVE_CONF_DIR} ${TROVE_AUTH_CACHE_DIR}
 
     # Copy api-paste file over to the trove conf dir
     cp $TROVE_LOCAL_API_PASTE_INI $TROVE_API_PASTE_INI
@@ -136,7 +133,7 @@
 
     iniset $TROVE_CONF DEFAULT rabbit_userid $RABBIT_USERID
     iniset $TROVE_CONF DEFAULT rabbit_password $RABBIT_PASSWORD
-    iniset $TROVE_CONF DEFAULT sql_connection `database_connection_url trove`
+    iniset $TROVE_CONF database connection `database_connection_url trove`
     iniset $TROVE_CONF DEFAULT default_datastore $TROVE_DATASTORE_TYPE
     setup_trove_logging $TROVE_CONF
     iniset $TROVE_CONF DEFAULT trove_api_workers "$API_WORKERS"
@@ -149,7 +146,7 @@
 
         iniset $TROVE_TASKMANAGER_CONF DEFAULT rabbit_userid $RABBIT_USERID
         iniset $TROVE_TASKMANAGER_CONF DEFAULT rabbit_password $RABBIT_PASSWORD
-        iniset $TROVE_TASKMANAGER_CONF DEFAULT sql_connection `database_connection_url trove`
+        iniset $TROVE_TASKMANAGER_CONF database connection `database_connection_url trove`
         iniset $TROVE_TASKMANAGER_CONF DEFAULT taskmanager_manager trove.taskmanager.manager.Manager
         iniset $TROVE_TASKMANAGER_CONF DEFAULT nova_proxy_admin_user radmin
         iniset $TROVE_TASKMANAGER_CONF DEFAULT nova_proxy_admin_tenant_name trove
@@ -162,7 +159,7 @@
     if is_service_enabled tr-cond; then
         iniset $TROVE_CONDUCTOR_CONF DEFAULT rabbit_userid $RABBIT_USERID
         iniset $TROVE_CONDUCTOR_CONF DEFAULT rabbit_password $RABBIT_PASSWORD
-        iniset $TROVE_CONDUCTOR_CONF DEFAULT sql_connection `database_connection_url trove`
+        iniset $TROVE_CONDUCTOR_CONF database connection `database_connection_url trove`
         iniset $TROVE_CONDUCTOR_CONF DEFAULT nova_proxy_admin_user radmin
         iniset $TROVE_CONDUCTOR_CONF DEFAULT nova_proxy_admin_tenant_name trove
         iniset $TROVE_CONDUCTOR_CONF DEFAULT nova_proxy_admin_pass $RADMIN_USER_PASS
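The sql_connection edits in this file (and in lib/nova above) move the database URL out of [DEFAULT]; as a config sketch with an illustrative connection string:

    # old
    [DEFAULT]
    sql_connection = mysql://root:secret@127.0.0.1/trove?charset=utf8
    # new
    [database]
    connection = mysql://root:secret@127.0.0.1/trove?charset=utf8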
diff --git a/lib/zaqar b/lib/zaqar
index c9321b9..34f1915 100644
--- a/lib/zaqar
+++ b/lib/zaqar
@@ -37,8 +37,6 @@
 ZAQARCLIENT_DIR=$DEST/python-zaqarclient
 ZAQAR_CONF_DIR=/etc/zaqar
 ZAQAR_CONF=$ZAQAR_CONF_DIR/zaqar.conf
-ZAQAR_API_LOG_DIR=/var/log/zaqar
-ZAQAR_API_LOG_FILE=$ZAQAR_API_LOG_DIR/queues.log
 ZAQAR_AUTH_CACHE_DIR=${ZAQAR_AUTH_CACHE_DIR:-/var/cache/zaqar}
 
 # Support potential entry-points console scripts
@@ -107,17 +105,12 @@
 function configure_zaqar {
     setup_develop $ZAQAR_DIR
 
-    [ ! -d $ZAQAR_CONF_DIR ] && sudo mkdir -m 755 -p $ZAQAR_CONF_DIR
-    sudo chown $USER $ZAQAR_CONF_DIR
-
-    [ ! -d $ZAQAR_API_LOG_DIR ] &&  sudo mkdir -m 755 -p $ZAQAR_API_LOG_DIR
-    sudo chown $USER $ZAQAR_API_LOG_DIR
+    sudo install -d -o $STACK_USER -m 755 $ZAQAR_CONF_DIR
 
     iniset $ZAQAR_CONF DEFAULT debug True
     iniset $ZAQAR_CONF DEFAULT verbose True
     iniset $ZAQAR_CONF DEFAULT admin_mode True
     iniset $ZAQAR_CONF DEFAULT use_syslog $SYSLOG
-    iniset $ZAQAR_CONF DEFAULT log_file $ZAQAR_API_LOG_FILE
     iniset $ZAQAR_CONF 'drivers:transport:wsgi' bind $ZAQAR_SERVICE_HOST
 
     configure_auth_token_middleware $ZAQAR_CONF zaqar $ZAQAR_AUTH_CACHE_DIR
@@ -139,7 +132,7 @@
         iniset $ZAQAR_CONF DEFAULT notification_driver messaging
         iniset $ZAQAR_CONF DEFAULT control_exchange zaqar
     fi
-    iniset_rpc_backend zaqar $ZAQAR_CONF DEFAULT
+    iniset_rpc_backend zaqar $ZAQAR_CONF
 
     cleanup_zaqar
 }
@@ -174,8 +167,7 @@
 # init_zaqar() - Initialize etc.
 function init_zaqar {
     # Create cache dir
-    sudo mkdir -p $ZAQAR_AUTH_CACHE_DIR
-    sudo chown $STACK_USER $ZAQAR_AUTH_CACHE_DIR
+    sudo install -d -o $STACK_USER $ZAQAR_AUTH_CACHE_DIR
     rm -f $ZAQAR_AUTH_CACHE_DIR/*
 }
 
diff --git a/pkg/elasticsearch.sh b/pkg/elasticsearch.sh
index 239d6b9..f53c7f2 100755
--- a/pkg/elasticsearch.sh
+++ b/pkg/elasticsearch.sh
@@ -77,6 +77,7 @@
 }
 
 function install_elasticsearch {
+    pip_install elasticsearch
     if is_package_installed elasticsearch; then
         echo "Note: elasticsearch was already installed."
         return
diff --git a/samples/local.conf b/samples/local.conf
index 9e0b540..63000b6 100644
--- a/samples/local.conf
+++ b/samples/local.conf
@@ -3,7 +3,7 @@
 # NOTE: Copy this file to the root ``devstack`` directory for it to
 # work properly.
 
-# ``local.conf`` is a user-maintained setings file that is sourced from ``stackrc``.
+# ``local.conf`` is a user-maintained settings file that is sourced from ``stackrc``.
 # This gives it the ability to override any variables set in ``stackrc``.
 # Also, most of the settings in ``stack.sh`` are written to only be set if no
 # value has already been set; this lets ``local.conf`` effectively override the
@@ -98,4 +98,4 @@
 # -------
 
 # Install the tempest test suite
-enable_service tempest
\ No newline at end of file
+enable_service tempest
diff --git a/stack.sh b/stack.sh
index bf9fc01..9069367 100755
--- a/stack.sh
+++ b/stack.sh
@@ -21,6 +21,13 @@
 
 # Learn more and get the most recent version at http://devstack.org
 
+# check if someone has invoked with "sh"
+if [[ "${POSIXLY_CORRECT}" == "y" ]]; then
+    echo "You appear to be running bash in POSIX compatibility mode."
+    echo "devstack uses bash features. \"./stack.sh\" should do the right thing"
+    exit 1
+fi
+
 # Make sure custom grep options don't get in the way
 unset GREP_OPTIONS
 
@@ -92,7 +99,7 @@
 source $TOP_DIR/functions
 
 # Import config functions
-source $TOP_DIR/lib/config
+source $TOP_DIR/inc/meta-config
 
 # Import 'public' stack.sh functions
 source $TOP_DIR/lib/stack
@@ -271,6 +278,10 @@
             die $LINENO "Error installing RDO repo, cannot continue"
     fi
 
+    if is_oraclelinux; then
+        sudo yum-config-manager --enable ol7_optional_latest ol7_addons ol7_MySQL56
+    fi
+
 fi
 
 
@@ -514,7 +525,7 @@
 source $TOP_DIR/lib/swift
 source $TOP_DIR/lib/ceilometer
 source $TOP_DIR/lib/heat
-source $TOP_DIR/lib/neutron
+source $TOP_DIR/lib/neutron-legacy
 source $TOP_DIR/lib/ldap
 source $TOP_DIR/lib/dstat
 
@@ -739,6 +750,9 @@
 fi
 
 if is_service_enabled s-proxy; then
+    if is_service_enabled ceilometer; then
+        install_ceilometermiddleware
+    fi
     stack_install_service swift
     configure_swift
 
@@ -968,7 +982,7 @@
         create_swift_accounts
     fi
 
-    if is_service_enabled heat && [[ "$HEAT_STANDALONE" != "True" ]]; then
+    if is_service_enabled heat; then
         create_heat_accounts
     fi
 
@@ -1306,6 +1320,15 @@
 # Prepare bash completion for OSC
 openstack complete | sudo tee /etc/bash_completion.d/osc.bash_completion > /dev/null
 
+# If cinder is configured, set global_filter for PV devices
+if is_service_enabled cinder; then
+    if is_ubuntu; then
+        echo_summary "Configuring lvm.conf global device filter"
+        set_lvm_filter
+    else
+        echo_summary "Skip setting lvm filters for non-Ubuntu systems"
+    fi
+fi
 
 # Fin
 # ===
diff --git a/stackrc b/stackrc
index 02b12a3..bca434e 100644
--- a/stackrc
+++ b/stackrc
@@ -427,6 +427,10 @@
 #
 ##################
 
+# run-parts script required by os-refresh-config
+DIB_UTILS_REPO=${DIB_UTILS_REPO:-${GIT_BASE}/openstack/dib-utils.git}
+DIB_UTILS_BRANCH=${DIB_UTILS_BRANCH:-master}
+
 # os-apply-config configuration template tool
 OAC_REPO=${OAC_REPO:-${GIT_BASE}/openstack/os-apply-config.git}
 OAC_BRANCH=${OAC_BRANCH:-master}
@@ -556,18 +560,6 @@
         IMAGE_URLS=${IMAGE_URLS:-"http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-uec.tar.gz"};;
 esac
 
-# Use 64bit fedora image if heat is enabled
-if [[ "$ENABLED_SERVICES" =~ 'h-api' ]]; then
-    case "$VIRT_DRIVER" in
-        libvirt|ironic)
-            HEAT_CFN_IMAGE_URL=${HEAT_CFN_IMAGE_URL:-"https://download.fedoraproject.org/pub/alt/openstack/20/x86_64/Fedora-x86_64-20-20140618-sda.qcow2"}
-            IMAGE_URLS+=",$HEAT_CFN_IMAGE_URL"
-            ;;
-        *)
-            ;;
-    esac
-fi
-
 # Trove needs a custom image for its work
 if [[ "$ENABLED_SERVICES" =~ 'tr-api' ]]; then
     case "$VIRT_DRIVER" in
@@ -580,17 +572,6 @@
     esac
 fi
 
-# Staging Area for New Images, have them here for at least 24hrs for nodepool
-# to cache them otherwise the failure rates in the gate are too high
-PRECACHE_IMAGES=$(trueorfalse False PRECACHE_IMAGES)
-if [[ "$PRECACHE_IMAGES" == "True" ]]; then
-    # staging in update for nodepool
-    IMAGE_URL="https://download.fedoraproject.org/pub/alt/openstack/20/x86_64/Fedora-x86_64-20-20140618-sda.qcow2"
-    if ! [[ "$IMAGE_URLS"  =~ "$IMAGE_URL" ]]; then
-        IMAGE_URLS+=",$IMAGE_URL"
-    fi
-fi
-
 # 10Gb default volume backing file size
 VOLUME_BACKING_FILE_SIZE=${VOLUME_BACKING_FILE_SIZE:-10250M}
 
@@ -611,9 +592,6 @@
 # Set default screen name
 SCREEN_NAME=${SCREEN_NAME:-stack}
 
-# Do not install packages tagged with 'testonly' by default
-INSTALL_TESTONLY_PACKAGES=${INSTALL_TESTONLY_PACKAGES:-False}
-
 # Undo requirements changes by global requirements
 UNDO_REQUIREMENTS=${UNDO_REQUIREMENTS:-True}
 
diff --git a/tests/test_ini.sh b/tests/test_ini_config.sh
similarity index 98%
rename from tests/test_ini.sh
rename to tests/test_ini_config.sh
index 106cc95..4a0ae33 100755
--- a/tests/test_ini.sh
+++ b/tests/test_ini_config.sh
@@ -4,8 +4,8 @@
 
 TOP=$(cd $(dirname "$0")/.. && pwd)
 
-# Import common functions
-source $TOP/functions
+# Import config functions
+source $TOP/inc/ini-config
 
 
 echo "Testing INI functions"
diff --git a/tests/test_config.sh b/tests/test_meta_config.sh
similarity index 98%
rename from tests/test_config.sh
rename to tests/test_meta_config.sh
index 3252104..9d65280 100755
--- a/tests/test_config.sh
+++ b/tests/test_meta_config.sh
@@ -4,11 +4,9 @@
 
 TOP=$(cd $(dirname "$0")/.. && pwd)
 
-# Import common functions
-source $TOP/functions
-
 # Import config functions
-source $TOP/lib/config
+source $TOP/inc/ini-config
+source $TOP/inc/meta-config
 
 # check_result() tests and reports the result values
 # check_result "actual" "expected"
diff --git a/tools/build_docs.sh b/tools/build_docs.sh
index 929d1e0..2aa0a0a 100755
--- a/tools/build_docs.sh
+++ b/tools/build_docs.sh
@@ -81,7 +81,7 @@
     mkdir -p $FQ_HTML_BUILD/`dirname $f`;
     $SHOCCO $f > $FQ_HTML_BUILD/$f.html
 done
-for f in $(find functions functions-common lib samples -type f -name \*); do
+for f in $(find functions functions-common inc lib pkg samples -type f -name \*); do
     echo $f
     FILES+="$f "
     mkdir -p $FQ_HTML_BUILD/`dirname $f`;
diff --git a/tools/install_pip.sh b/tools/install_pip.sh
index 73d0947..b7b40c7 100755
--- a/tools/install_pip.sh
+++ b/tools/install_pip.sh
@@ -42,9 +42,21 @@
 
 
 function install_get_pip {
-    if [[ ! -r $LOCAL_PIP ]]; then
-        curl --retry 6 --retry-delay 5 -o $LOCAL_PIP $PIP_GET_PIP_URL || \
+    # the openstack gate and others put a cached version of get-pip.py
+    # for this to find, explicitly to avoid download issues.
+    #
+    # However, if devstack *did* download the file, we want to check
+    # for updates; people can leave their stacks around for a long
+    # time and in the meantime pip might get upgraded.
+    #
+    # Thus we use curl's "-z" feature to send an If-Modified-Since check
+    # and only download when a newer version is available -- but only if
+    # it seems we downloaded the file originally.
+    if [[ ! -r $LOCAL_PIP || -r $LOCAL_PIP.downloaded ]]; then
+        curl --retry 6 --retry-delay 5 \
+            -z $LOCAL_PIP -o $LOCAL_PIP $PIP_GET_PIP_URL || \
             die $LINENO "Download of get-pip.py failed"
+        touch $LOCAL_PIP.downloaded
     fi
     sudo -H -E python $LOCAL_PIP
 }
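A minimal sketch of the update check this function now performs; $LOCAL_PIP and $PIP_GET_PIP_URL are the variables used above, and the .downloaded marker is what distinguishes a devstack-fetched copy from one pre-seeded by the gate:

    # -z uses the local file's mtime as an If-Modified-Since condition, so the
    # body is transferred only when the server has a newer get-pip.py
    curl --retry 6 --retry-delay 5 -z $LOCAL_PIP -o $LOCAL_PIP $PIP_GET_PIP_URL
    # record that devstack itself downloaded the file; later runs re-check
    touch $LOCAL_PIP.downloaded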
diff --git a/tools/install_prereqs.sh b/tools/install_prereqs.sh
index 303cc63..917980c 100755
--- a/tools/install_prereqs.sh
+++ b/tools/install_prereqs.sh
@@ -62,6 +62,8 @@
 
 # Install package requirements
 PACKAGES=$(get_packages general $ENABLED_SERVICES)
+PACKAGES="$PACKAGES $(get_plugin_packages)"
+
 if is_ubuntu && echo $PACKAGES | grep -q dkms ; then
     # ensure headers for the running kernel are installed for any DKMS builds
     PACKAGES="$PACKAGES linux-headers-$(uname -r)"
diff --git a/tools/xen/install_os_domU.sh b/tools/xen/install_os_domU.sh
index 082c27e..b49347e 100755
--- a/tools/xen/install_os_domU.sh
+++ b/tools/xen/install_os_domU.sh
@@ -227,7 +227,7 @@
         -n "$UBUNTU_INST_BRIDGE_OR_NET_NAME" \
         -l "$GUEST_NAME"
 
-    set_vm_memory "$GUEST_NAME" "$OSDOMU_MEM_MB"
+    set_vm_memory "$GUEST_NAME" "1024"
 
     xe vm-start vm="$GUEST_NAME"
 
diff --git a/tox.ini b/tox.ini
index a958ae7..bc84928 100644
--- a/tox.ini
+++ b/tox.ini
@@ -20,6 +20,7 @@
           -name \*.sh -or                     \
           -name \*rc -or                      \
           -name functions\* -or               \
+          -wholename \*/inc/\*                \ # /inc files and
           -wholename \*/lib/\*                \ # /lib files are shell, but
          \)                                   \ #   have no extension
          -print0 | xargs -0 bashate -v"
diff --git a/unstack.sh b/unstack.sh
index a6aeec5..a66370b 100755
--- a/unstack.sh
+++ b/unstack.sh
@@ -63,7 +63,7 @@
 source $TOP_DIR/lib/swift
 source $TOP_DIR/lib/ceilometer
 source $TOP_DIR/lib/heat
-source $TOP_DIR/lib/neutron
+source $TOP_DIR/lib/neutron-legacy
 source $TOP_DIR/lib/ldap
 source $TOP_DIR/lib/dstat
 
@@ -173,7 +173,9 @@
     cleanup_trove
 fi
 
-stop_dstat
+if is_service_enabled dstat; then
+    stop_dstat
+fi
 
 # Clean up the remainder of the screen processes
 SCREEN=$(which screen)