Merge "Use Keystone v3 API for user creation"
diff --git a/doc/source/faq.rst b/doc/source/faq.rst
index 0437ec2..b09d386 100644
--- a/doc/source/faq.rst
+++ b/doc/source/faq.rst
@@ -2,99 +2,135 @@
 FAQ
 ===
 
--  `General Questions <#general>`__
--  `Operation and Configuration <#ops_conf>`__
--  `Miscellaneous <#misc>`__
+.. contents::
+   :local:
 
 General Questions
 =================
 
-Q: Can I use DevStack for production?
+Can I use DevStack for production?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-    A: DevStack is targeted at developers and CI systems to use the
-    raw upstream code.  It makes many choices that are not appropriate
-    for production systems.
+DevStack is targeted at developers and CI systems to use the raw
+upstream code.  It makes many choices that are not appropriate for
+production systems.
 
-    Your best choice is probably to choose a `distribution of
-    OpenStack
-    <https://www.openstack.org/marketplace/distros>`__.
+Your best choice is probably to choose a `distribution of OpenStack
+<https://www.openstack.org/marketplace/distros/distribution>`__.
 
-Q: Why a shell script, why not chef/puppet/...
-    A: The script is meant to be read by humans (as well as ran by
-    computers); it is the primary documentation after all. Using a
-    recipe system requires everyone to agree and understand chef or
-    puppet.
+Why a shell script, why not chef/puppet/...
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Q: I'd like to help!
-    A: That isn't a question, but please do! The source for DevStack is
-    at
-    `git.openstack.org <https://git.openstack.org/cgit/openstack-dev/devstack>`__
-    and bug reports go to
-    `LaunchPad <http://bugs.launchpad.net/devstack/>`__. Contributions
-    follow the usual process as described in the `developer
-    guide <http://docs.openstack.org/infra/manual/developers.html>`__. This Sphinx
-    documentation is housed in the doc directory.
+The script is meant to be read by humans (as well as run by
+computers); it is the primary documentation after all. Using a recipe
+system requires everyone to agree on and understand chef or puppet.
 
-Q: Why not use packages?
-    A: Unlike packages, DevStack leaves your cloud ready to develop -
-    checkouts of the code and services running in screen. However, many
-    people are doing the hard work of packaging and recipes for
-    production deployments.
+I'd like to help!
+~~~~~~~~~~~~~~~~~
 
-Q: Why isn't $MY\_FAVORITE\_DISTRO supported?
-    A: DevStack is meant for developers and those who want to see how
-    OpenStack really works. DevStack is known to run on the
-    distro/release combinations listed in ``README.md``. DevStack is
-    only supported on releases other than those documented in
-    ``README.md`` on a best-effort basis.
+That isn't a question, but please do! The source for DevStack is at
+`git.openstack.org
+<https://git.openstack.org/cgit/openstack-dev/devstack>`__ and bug
+reports go to `LaunchPad
+<http://bugs.launchpad.net/devstack/>`__. Contributions follow the
+usual process as described in the `developer guide
+<http://docs.openstack.org/infra/manual/developers.html>`__. This
+Sphinx documentation is housed in the doc directory.
 
-Q: Are there any differences between Ubuntu and Centos/Fedora support?
-    A: Both should work well and are tested by DevStack CI.
+Why not use packages?
+~~~~~~~~~~~~~~~~~~~~~
 
-Q: Why can't I use another shell?
-    A: DevStack now uses some specific bash-ism that require Bash 4, such
-    as associative arrays. Simple compatibility patches have been accepted
-    in the past when they are not complex, at this point no additional
-    compatibility patches will be considered except for shells matching
-    the array functionality as it is very ingrained in the repo and project
-    management.
+Unlike packages, DevStack leaves your cloud ready to develop -
+checkouts of the code and services running in screen. However, many
+people are doing the hard work of packaging and recipes for production
+deployments.
 
-Q: Can I test on OS/X?
-   A: Some people have success with bash 4 installed via
-   homebrew to keep running tests on OS/X.
+Why isn't $MY\_FAVORITE\_DISTRO supported?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+DevStack is meant for developers and those who want to see how
+OpenStack really works. DevStack is known to run on the distro/release
+combinations listed in ``README.md``. DevStack is only supported on
+releases other than those documented in ``README.md`` on a best-effort
+basis.
+
+Are there any differences between Ubuntu and CentOS/Fedora support?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Both should work well and are tested by DevStack CI.
+
+Why can't I use another shell?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+DevStack now uses some specific bash-isms that require Bash 4, such as
+associative arrays. Simple compatibility patches were accepted in the
+past when they were not complex, but at this point no additional
+compatibility patches will be considered except for shells matching
+the array functionality, as it is deeply ingrained in the repo and
+project management.
+
+Can I test on OS/X?
+~~~~~~~~~~~~~~~~~~~
+
+Some people have success with bash 4 installed via homebrew to keep
+running tests on OS/X.
+
+Can I at least source ``openrc`` with ``zsh``?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+People have reported success with a special function to run ``openrc``
+through bash for this:
+
+.. code-block:: bash
+
+   function sourceopenrc {
+       pushd ~/devstack >/dev/null
+       eval $(bash -c ". openrc $1 $2;env|sed -n '/OS_/ { s/^/export /;p}'")
+       popd >/dev/null
+   }
+
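+You can then load credentials from zsh with, for example,
+``sourceopenrc admin admin``.
+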
 
 Operation and Configuration
 ===========================
 
-Q: Can DevStack handle a multi-node installation?
-    A: Yes, see :doc:`multinode lab guide <guides/multinode-lab>`
+Can DevStack handle a multi-node installation?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Q: How can I document the environment that DevStack is using?
-    A: DevStack includes a script (``tools/info.sh``) that gathers the
-    versions of the relevant installed apt packages, pip packages and
-    git repos. This is a good way to verify what Python modules are
-    installed.
+Yes, see the :doc:`multinode lab guide <guides/multinode-lab>`.
 
-Q: How do I turn off a service that is enabled by default?
-    A: Services can be turned off by adding ``disable_service xxx`` to
-    ``local.conf`` (using ``n-vol`` in this example):
+How can I document the environment that DevStack is using?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+DevStack includes a script (``tools/info.sh``) that gathers the
+versions of the relevant installed apt packages, pip packages and git
+repos. This is a good way to verify what Python modules are
+installed.
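+
+For example, run it from the top of the DevStack tree::
+
+    ./tools/info.sh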
+
+How do I turn off a service that is enabled by default?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Services can be turned off by adding ``disable_service xxx`` to
+``local.conf`` (using ``n-vol`` in this example):
 
     ::
 
         disable_service n-vol
 
-Q: Is enabling a service that defaults to off done with the reverse of the above?
-    A: Of course!
+Is enabling a service that defaults to off done with the reverse of the above?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Of course!
 
     ::
 
         enable_service qpid
 
-Q: How do I run a specific OpenStack milestone?
-   A: OpenStack milestones have tags set in the git repo. Set the
-   appropriate tag in the ``*_BRANCH`` variables in ``local.conf``.
-   Swift is on its own release schedule so pick a tag in the Swift repo
-   that is just before the milestone release. For example:
+How do I run a specific OpenStack milestone?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+OpenStack milestones have tags set in the git repo. Set the
+appropriate tag in the ``*_BRANCH`` variables in ``local.conf``.
+Swift is on its own release schedule so pick a tag in the Swift repo
+that is just before the milestone release. For example:
 
     ::
 
@@ -107,16 +143,16 @@
         NEUTRON_BRANCH=stable/kilo
         SWIFT_BRANCH=2.3.0
 
-Q: What can I do about RabbitMQ not wanting to start on my fresh new VM?
-    A: This is often caused by ``erlang`` not being happy with the
-    hostname resolving to a reachable IP address. Make sure your
-    hostname resolves to a working IP address; setting it to 127.0.0.1
-    in ``/etc/hosts`` is often good enough for a single-node
-    installation. And in an extreme case, use ``clean.sh`` to eradicate
-    it and try again.
+What can I do about RabbitMQ not wanting to start on my fresh new VM?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Q: How can I set up Heat in stand-alone configuration?
-    A: Configure ``local.conf`` thusly:
+This is often caused by ``erlang`` not being happy with the hostname
+resolving to a reachable IP address. Make sure your hostname resolves
+to a working IP address; setting it to 127.0.0.1 in ``/etc/hosts`` is
+often good enough for a single-node installation. And in an extreme
+case, use ``clean.sh`` to eradicate it and try again.
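+
+For example, an ``/etc/hosts`` entry along these lines (the hostname
+here is illustrative) is often sufficient::
+
+    127.0.0.1   localhost mydevstackhost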
+
+How can I set up Heat in stand-alone configuration?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure ``local.conf`` as follows:
 
     ::
 
@@ -126,22 +162,25 @@
         KEYSTONE_SERVICE_HOST=<keystone-host>
         KEYSTONE_AUTH_HOST=<keystone-host>
 
-Q: Why are my configuration changes ignored?
-    A: You may have run into the package prerequisite installation
-    timeout. ``tools/install_prereqs.sh`` has a timer that skips the
-    package installation checks if it was run within the last
-    ``PREREQ_RERUN_HOURS`` hours (default is 2). To override this, set
-    ``FORCE_PREREQ=1`` and the package checks will never be skipped.
+Why are my configuration changes ignored?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+You may have run into the package prerequisite installation
+timeout. ``tools/install_prereqs.sh`` has a timer that skips the
+package installation checks if it was run within the last
+``PREREQ_RERUN_HOURS`` hours (default is 2). To override this, set
+``FORCE_PREREQ=1`` and the package checks will never be skipped.
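+
+For example, adding this to the localrc section of ``local.conf``
+should make the package prerequisite checks run on every invocation::
+
+    FORCE_PREREQ=1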
 
 Miscellaneous
 =============
 
-Q: ``tools/fixup_stuff.sh`` is broken and shouldn't 'fix' just one version of packages.
-    A: [Another not-a-question] No it isn't. Stuff in there is to
-    correct problems in an environment that need to be fixed elsewhere
-    or may/will be fixed in a future release. In the case of
-    ``httplib2`` and ``prettytable`` specific problems with specific
-    versions are being worked around. If later releases have those
-    problems than we'll add them to the script. Knowing about the broken
-    future releases is valuable rather than polling to see if it has
-    been fixed.
+``tools/fixup_stuff.sh`` is broken and shouldn't 'fix' just one version of packages.
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Stuff in there is to correct problems in an environment that need to
+be fixed elsewhere or may/will be fixed in a future release. In the
+case of ``httplib2`` and ``prettytable``, specific problems with
+specific versions are being worked around. If later releases have
+those problems then we'll add them to the script. Knowing about the
+broken releases ahead of time is more valuable than polling to see
+whether they have been fixed.
diff --git a/doc/source/guides/neutron.rst b/doc/source/guides/neutron.rst
index bdfd3a4..40a5632 100644
--- a/doc/source/guides/neutron.rst
+++ b/doc/source/guides/neutron.rst
@@ -261,15 +261,18 @@
 
         ## Neutron Networking options used to create Neutron Subnets
 
-        FIXED_RANGE="10.1.1.0/24"
+        FIXED_RANGE="203.0.113.0/24"
         PROVIDER_SUBNET_NAME="provider_net"
         PROVIDER_NETWORK_TYPE="vlan"
         SEGMENTATION_ID=2010
 
 In this configuration we are defining FIXED_RANGE to be a
-subnet that exists in the private RFC1918 address space - however
-in a real setup FIXED_RANGE would be a public IP address range, so
-that you could access your instances from the public internet.
+publicly routed IPv4 subnet. In this specific instance we are using
+the special TEST-NET-3 subnet defined in `RFC 5737 <http://tools.ietf.org/html/rfc5737>`_,
+which is used for documentation.  In your DevStack setup, FIXED_RANGE
+would be a public IP address range allocated to you or your
+organization, so that you can access your instances from the public
+internet.
 
 The following is a snippet of the DevStack configuration on the
 compute node.
diff --git a/doc/source/index.rst b/doc/source/index.rst
index e0c3f3a..f15c306 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -10,6 +10,7 @@
    overview
    configuration
    plugins
+   plugin-registry
    faq
    changes
    hacking
@@ -19,9 +20,9 @@
 
 #. Select a Linux Distribution
 
-   Only Ubuntu 14.04 (Trusty), Fedora 20 and CentOS/RHEL 7 are
-   documented here. OpenStack also runs and is packaged on other flavors
-   of Linux such as OpenSUSE and Debian.
+   Only Ubuntu 14.04 (Trusty), Fedora 21 (or Fedora 22) and CentOS/RHEL
+   7 are documented here. OpenStack also runs and is packaged on other
+   flavors of Linux such as OpenSUSE and Debian.
 
 #. Install Selected OS
 
diff --git a/doc/source/plugin-registry.rst b/doc/source/plugin-registry.rst
new file mode 100644
index 0000000..2dd70d8
--- /dev/null
+++ b/doc/source/plugin-registry.rst
@@ -0,0 +1,73 @@
+..
+  Note to reviewers: the intent of this file is to be easy for
+  community members to update. As such fast approving (single core +2)
+  is fine as long as you've identified that the plugin listed actually exists.
+
+==========================
+ DevStack Plugin Registry
+==========================
+
+Since we've created the external plugin mechanism, it's gotten used by
+a lot of projects. The following is a list of plugins that currently
+exist. Any project that wishes to list its plugin here is welcome to
+do so.
+
+Official OpenStack Projects
+===========================
+
+The following are plugins that exist for official OpenStack projects.
+
++--------------------+-------------------------------------------+--------------------+
+|Plugin Name         |URL                                        |Comments            |
++--------------------+-------------------------------------------+--------------------+
+|magnum              |git://git.openstack.org/openstack/magnum   |                    |
++--------------------+-------------------------------------------+--------------------+
+|trove               |git://git.openstack.org/openstack/trove    |                    |
++--------------------+-------------------------------------------+--------------------+
+|zaqar               |git://git.openstack.org/openstack/zaqar    |                    |
++--------------------+-------------------------------------------+--------------------+
+
+
+
+Drivers
+=======
+
++--------------------+-------------------------------------------------+------------------+
+|Plugin Name         |URL                                              |Comments          |
++--------------------+-------------------------------------------------+------------------+
+|dragonflow          |git://git.openstack.org/openstack/dragonflow     |[d1]_             |
++--------------------+-------------------------------------------------+------------------+
+|odl                 |git://git.openstack.org/openstack/networking-odl |[d2]_             |
++--------------------+-------------------------------------------------+------------------+
+
+.. [d1] demonstrates example of installing 3rd party SDN controller
+.. [d2] demonstrates a pretty advanced set of modes that allow
+        one to run OpenDaylight either from a pre-existing install or
+        from source
+
+Alternate Configs
+=================
+
++-------------+------------------------------------------------------------+------------+
+| Plugin Name | URL                                                        | Comments   |
+|             |                                                            |            |
++-------------+------------------------------------------------------------+------------+
+|glusterfs    |git://git.openstack.org/stackforge/devstack-plugin-glusterfs|            |
++-------------+------------------------------------------------------------+------------+
+|             |                                                            |            |
++-------------+------------------------------------------------------------+------------+
+
+Additional Services
+===================
+
++-------------+------------------------------------------+------------+
+| Plugin Name | URL                                      | Comments   |
+|             |                                          |            |
++-------------+------------------------------------------+------------+
+|ec2-api      |git://git.openstack.org/stackforge/ec2api |[as1]_      |
++-------------+------------------------------------------+------------+
+|             |                                          |            |
++-------------+------------------------------------------+------------+
+
+.. [as1] first functional devstack plugin, which is why it is used in
+         most of the examples.
diff --git a/doc/source/plugins.rst b/doc/source/plugins.rst
index c4ed228..b166936 100644
--- a/doc/source/plugins.rst
+++ b/doc/source/plugins.rst
@@ -2,103 +2,21 @@
 Plugins
 =======
 
-DevStack has a couple of plugin mechanisms to allow easily adding
-support for additional projects and features.
+The OpenStack ecosystem is wide and deep, and only growing more so
+every day. The value of DevStack is that it's simple enough that you
+can clearly understand what it's doing. Yet we'd like to support as
+much of the OpenStack ecosystem as possible. We do that with plugins.
 
-Extras.d Hooks
-==============
+DevStack plugins are bits of bash code that live outside the DevStack
+tree. They are called through a strong contract, so these plugins can
+be sure that they will continue to work in the future as DevStack
+evolves.
 
-These hooks are an extension of the service calls in
-``stack.sh`` at specific points in its run, plus ``unstack.sh`` and
-``clean.sh``. A number of the higher-layer projects are implemented in
-DevStack using this mechanism.
+Plugin Interface
+================
 
-The script in ``extras.d`` is expected to be mostly a dispatcher to
-functions in a ``lib/*`` script. The scripts are named with a
-zero-padded two digits sequence number prefix to control the order that
-the scripts are called, and with a suffix of ``.sh``. DevStack reserves
-for itself the sequence numbers 00 through 09 and 90 through 99.
-
-Below is a template that shows handlers for the possible command-line
-arguments:
-
-::
-
-    # template.sh - DevStack extras.d dispatch script template
-
-    # check for service enabled
-    if is_service_enabled template; then
-
-        if [[ "$1" == "source" ]]; then
-            # Initial source of lib script
-            source $TOP_DIR/lib/template
-        fi
-
-        if [[ "$1" == "stack" && "$2" == "pre-install" ]]; then
-            # Set up system services
-            echo_summary "Configuring system services Template"
-            install_package cowsay
-
-        elif [[ "$1" == "stack" && "$2" == "install" ]]; then
-            # Perform installation of service source
-            echo_summary "Installing Template"
-            install_template
-
-        elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
-            # Configure after the other layer 1 and 2 services have been configured
-            echo_summary "Configuring Template"
-            configure_template
-
-        elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
-            # Initialize and start the template service
-            echo_summary "Initializing Template"
-            ##init_template
-        fi
-
-        if [[ "$1" == "unstack" ]]; then
-            # Shut down template services
-            # no-op
-            :
-        fi
-
-        if [[ "$1" == "clean" ]]; then
-            # Remove state and transient data
-            # Remember clean.sh first calls unstack.sh
-            # no-op
-            :
-        fi
-    fi
-
-The arguments are:
-
--  **source** - Called by each script that utilizes ``extras.d`` hooks;
-   this replaces directly sourcing the ``lib/*`` script.
--  **stack** - Called by ``stack.sh`` three times for different phases
-   of its run:
-
-   -  **pre-install** - Called after system (OS) setup is complete and
-      before project source is installed.
-   -  **install** - Called after the layer 1 and 2 projects source and
-      their dependencies have been installed.
-   -  **post-config** - Called after the layer 1 and 2 services have
-      been configured. All configuration files for enabled services
-      should exist at this point.
-   -  **extra** - Called near the end after layer 1 and 2 services have
-      been started. This is the existing hook and has not otherwise
-      changed.
-
--  **unstack** - Called by ``unstack.sh`` before other services are shut
-   down.
--  **clean** - Called by ``clean.sh`` before other services are cleaned,
-   but after ``unstack.sh`` has been called.
-
-
-Externally Hosted Plugins
-=========================
-
-Based on the extras.d hooks, DevStack supports a standard mechansim
-for including plugins from external repositories. The plugin interface
-assumes the following:
+DevStack supports a standard mechanism for including plugins from
+external repositories. The plugin interface assumes the following:
 
 An external git repository that includes a ``devstack/`` top level
 directory. Inside this directory there can be 2 files.
@@ -118,11 +36,10 @@
   default value only if the variable is unset or empty; e.g. in bash
   syntax ``FOO=${FOO:-default}``.
 
-- ``plugin.sh`` - the actual plugin. It will be executed by devstack
-  during it's run. The run order will be done in the registration
-  order for these plugins, and will occur immediately after all in
-  tree extras.d dispatch at the phase in question.  The plugin.sh
-  looks like the extras.d dispatcher above.
+- ``plugin.sh`` - the actual plugin. It is executed by devstack at
+  well-defined points during a ``stack.sh`` run. The plugin.sh
+  internal structure is discussed below.
+
 
 Plugins are registered by adding the following to the localrc section
 of ``local.conf``.
@@ -141,49 +58,121 @@
 
   enable_plugin ec2api git://git.openstack.org/stackforge/ec2api
 
-Plugins for gate jobs
----------------------
+plugin.sh contract
+==================
 
-All OpenStack plugins that wish to be used as gate jobs need to exist
-in OpenStack's gerrit. Both ``openstack`` namespace and ``stackforge``
-namespace are fine. This allows testing of the plugin as well as
-provides network isolation against upstream git repository failures
-(which we see often enough to be an issue).
+``plugin.sh`` is a bash script that will be called at specific points
+during ``stack.sh``, ``unstack.sh``, and ``clean.sh``. It will be
+called in the following way::
 
-Ideally plugins will be implemented as ``devstack`` directory inside
-the project they are testing. For example, the stackforge/ec2-api
-project has it's pluggin support in it's tree.
+  source $PATH/TO/plugin.sh <mode> [phase]
 
-In the cases where there is no "project tree" per say (like
-integrating a backend storage configuration such as ceph or glusterfs)
-it's also allowed to build a dedicated
-``stackforge/devstack-plugin-FOO`` project to house the plugin.
+``mode`` can be thought of as the major mode being called, currently
+one of: ``stack``, ``unstack``, ``clean``. ``phase`` is used by modes
+which have multiple points during their run where it's necessary to
+be able to execute code. All existing ``mode`` and ``phase`` points
+are considered **strong contracts** and won't be removed without a
+reasonable deprecation period. Additional new ``mode`` or ``phase``
+points may be added at any time if we discover we need them to support
+additional kinds of plugins in devstack.
 
-Note jobs must not require cloning of repositories during tests.
-Tests must list their repository in the ``PROJECTS`` variable for
-`devstack-gate
-<https://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/devstack-vm-gate-wrap.sh>`_
-for the repository to be available to the test.  Further information
-is provided in the project creator's guide.
+The current full list of ``mode`` and ``phase`` values is:
 
-Hypervisor
-==========
+-  **stack** - Called by ``stack.sh`` four times for different phases
+   of its run:
 
-Hypervisor plugins are fairly new and condense most hypervisor
-configuration into one place.
+   -  **pre-install** - Called after system (OS) setup is complete and
+      before project source is installed.
+   -  **install** - Called after the layer 1 and 2 projects source and
+      their dependencies have been installed.
+   -  **post-config** - Called after the layer 1 and 2 services have
+      been configured. All configuration files for enabled services
+      should exist at this point.
+   -  **extra** - Called near the end after layer 1 and 2 services have
+      been started.
 
-The initial plugin implemented was for Docker support and is a useful
-template for the required support. Plugins are placed in
-``lib/nova_plugins`` and named ``hypervisor-<name>`` where ``<name>`` is
-the value of ``VIRT_DRIVER``. Plugins must define the following
-functions:
+-  **unstack** - Called by ``unstack.sh`` before other services are shut
+   down.
+-  **clean** - Called by ``clean.sh`` before other services are cleaned,
+   but after ``unstack.sh`` has been called.
 
--  ``install_nova_hypervisor`` - install any external requirements
--  ``configure_nova_hypervisor`` - make configuration changes, including
-   those to other services
--  ``start_nova_hypervisor`` - start any external services
--  ``stop_nova_hypervisor`` - stop any external services
--  ``cleanup_nova_hypervisor`` - remove transient data and cache
+Example plugin
+====================
+
+An example plugin would look something like the following.
+
+``devstack/settings``::
+
+    # settings file for template
+    enable_service template
+
+
+``devstack/plugin.sh``::
+
+    # plugin.sh - DevStack plugin.sh dispatch script template
+
+    function install_template {
+        ...
+    }
+
+    function init_template {
+        ...
+    }
+
+    function configure_template {
+        ...
+    }
+
+    # check for service enabled
+    if is_service_enabled template; then
+
+        if [[ "$1" == "stack" && "$2" == "pre-install" ]]; then
+            # Set up system services
+            echo_summary "Configuring system services Template"
+            install_package cowsay
+
+        elif [[ "$1" == "stack" && "$2" == "install" ]]; then
+            # Perform installation of service source
+            echo_summary "Installing Template"
+            install_template
+
+        elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
+            # Configure after the other layer 1 and 2 services have been configured
+            echo_summary "Configuring Template"
+            configure_template
+
+        elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
+            # Initialize and start the template service
+            echo_summary "Initializing Template"
+            init_template
+        fi
+
+        if [[ "$1" == "unstack" ]]; then
+            # Shut down template services
+            # no-op
+            :
+        fi
+
+        if [[ "$1" == "clean" ]]; then
+            # Remove state and transient data
+            # Remember clean.sh first calls unstack.sh
+            # no-op
+            :
+        fi
+    fi
+
+Plugin Execution Order
+======================
+
+Plugins are run after in-tree services at each of the stages
+above. For example, if you need something to happen before Keystone
+starts, you should do that at the ``post-config`` phase.
+
+Multiple plugins can be specified in your ``local.conf``. When that
+happens the plugins will be executed **in order** at each phase. This
+allows plugins to conceptually depend on each other by documenting to
+the user the order in which they must be declared. A formal dependency
+mechanism is beyond the scope of the current work.
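+
+For example, if plugin-b documents that it must be declared after
+plugin-a, the localrc section of ``local.conf`` would list them in
+that order (the plugin names and URLs here are purely illustrative)::
+
+  enable_plugin plugin-a git://git.openstack.org/stackforge/devstack-plugin-a
+  enable_plugin plugin-b git://git.openstack.org/stackforge/devstack-plugin-b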
 
 System Packages
 ===============
@@ -205,3 +194,47 @@
 
 - ``./devstack/files/rpms-suse/$plugin_name`` - Packages to install when
   running on SUSE Linux or openSUSE.
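+
+These files use the same format as DevStack's own package lists: one
+package name per line, optionally followed by a comment. For example,
+a plugin that needed ``conntrack-tools`` and ``keepalived`` on SUSE
+could ship a ``./devstack/files/rpms-suse/$plugin_name`` file
+containing just::
+
+  conntrack-tools
+  keepalived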
+
+
+Using Plugins in the OpenStack Gate
+===================================
+
+For everyday use, DevStack plugins can exist in any git tree that's
+accessible on the internet. However, when using DevStack plugins in
+the OpenStack gate, they must live in projects in OpenStack's
+gerrit. Both ``openstack`` namespace and ``stackforge`` namespace are
+fine. This allows testing of the plugin as well as providing network
+isolation against upstream git repository failures (which we see often
+enough to be an issue).
+
+Ideally a plugin will be included within the ``devstack`` directory of
+the project being tested. For example, the stackforge/ec2-api
+project has its plugin support in its own tree.
+
+However, sometimes a DevStack plugin might be used solely to
+configure a backend service that will be used by the rest of
+OpenStack, so there is no "project tree" per se. Good examples
+include: integration of back-end storage (e.g. ceph or glusterfs),
+integration of SDN controllers (e.g. ovn, OpenDaylight), or
+integration of alternate RPC systems (e.g. zmq, qpid). In these cases
+the best practice is to build a dedicated
+``stackforge/devstack-plugin-FOO`` project.
+
+To enable a plugin to be used in a gate job, the following lines will
+be needed in your project.yaml definition::
+
+  # Because we are testing a non-standard project, add our
+  # project repository. This makes zuul do the right
+  # reference magic for testing changes.
+  export PROJECTS="stackforge/ec2-api $PROJECTS"
+
+  # note the actual url here is somewhat irrelevant because it
+  # caches in nodepool, however make it a valid url for
+  # documentation purposes.
+  export DEVSTACK_LOCAL_CONFIG="enable_plugin ec2-api git://git.openstack.org/stackforge/ec2-api"
+
+See Also
+========
+
+For additional inspiration on devstack plugins you can check out the
+`Plugin Registry <plugin-registry.html>`_.
diff --git a/exercises/sahara.sh b/exercises/sahara.sh
index 2589e28..8cad945 100755
--- a/exercises/sahara.sh
+++ b/exercises/sahara.sh
@@ -35,7 +35,13 @@
 
 is_service_enabled sahara || exit 55
 
-$CURL_GET http://$SERVICE_HOST:8386/ 2>/dev/null | grep -q 'Auth' || die $LINENO "Sahara API isn't functioning!"
+if is_ssl_enabled_service "sahara" || is_service_enabled tls-proxy; then
+    SAHARA_SERVICE_PROTOCOL="https"
+fi
+
+SAHARA_SERVICE_PROTOCOL=${SAHARA_SERVICE_PROTOCOL:-$SERVICE_PROTOCOL}
+
+$CURL_GET $SAHARA_SERVICE_PROTOCOL://$SERVICE_HOST:8386/ 2>/dev/null | grep -q 'Auth' || die $LINENO "Sahara API isn't functioning!"
 
 set +o xtrace
 echo "*********************************************************************"
diff --git a/files/apache-ceilometer.template b/files/apache-ceilometer.template
index 1c57b32..79f14c3 100644
--- a/files/apache-ceilometer.template
+++ b/files/apache-ceilometer.template
@@ -1,7 +1,7 @@
 Listen %PORT%
 
 <VirtualHost *:%PORT%>
-    WSGIDaemonProcess ceilometer-api processes=2 threads=10 user=%USER% display-name=%{GROUP}
+    WSGIDaemonProcess ceilometer-api processes=2 threads=10 user=%USER% display-name=%{GROUP} %VIRTUALENV%
     WSGIProcessGroup ceilometer-api
     WSGIScriptAlias / %WSGIAPP%
     WSGIApplicationGroup %{GLOBAL}
diff --git a/files/apache-nova-api.template b/files/apache-nova-api.template
index 70ccedd..301a3bd 100644
--- a/files/apache-nova-api.template
+++ b/files/apache-nova-api.template
@@ -1,7 +1,7 @@
 Listen %PUBLICPORT%
 
 <VirtualHost *:%PUBLICPORT%>
-    WSGIDaemonProcess nova-api processes=5 threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
+    WSGIDaemonProcess nova-api processes=%APIWORKERS% threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
     WSGIProcessGroup nova-api
     WSGIScriptAlias / %PUBLICWSGI%
     WSGIApplicationGroup %{GLOBAL}
@@ -13,4 +13,4 @@
     %SSLENGINE%
     %SSLCERTFILE%
     %SSLKEYFILE%
-</VirtualHost>
\ No newline at end of file
+</VirtualHost>
diff --git a/files/apache-nova-ec2-api.template b/files/apache-nova-ec2-api.template
index ae4cf94..235d958 100644
--- a/files/apache-nova-ec2-api.template
+++ b/files/apache-nova-ec2-api.template
@@ -1,7 +1,7 @@
 Listen %PUBLICPORT%
 
 <VirtualHost *:%PUBLICPORT%>
-    WSGIDaemonProcess nova-ec2-api processes=5 threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
+    WSGIDaemonProcess nova-ec2-api processes=%APIWORKERS% threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
     WSGIProcessGroup nova-ec2-api
     WSGIScriptAlias / %PUBLICWSGI%
     WSGIApplicationGroup %{GLOBAL}
@@ -13,4 +13,4 @@
     %SSLENGINE%
     %SSLCERTFILE%
     %SSLKEYFILE%
-</VirtualHost>
\ No newline at end of file
+</VirtualHost>
diff --git a/files/rpms-suse/devlibs b/files/rpms-suse/devlibs
index bdb630a..54d13a3 100644
--- a/files/rpms-suse/devlibs
+++ b/files/rpms-suse/devlibs
@@ -1,6 +1,5 @@
 libffi-devel  # pyOpenSSL
 libopenssl-devel  # pyOpenSSL
-libxml2-devel  # lxml
 libxslt-devel  # lxml
 postgresql-devel  # psycopg2
 libmysqlclient-devel # MySQL-python
diff --git a/files/rpms-suse/glance b/files/rpms-suse/glance
index 0e58425..bf512de 100644
--- a/files/rpms-suse/glance
+++ b/files/rpms-suse/glance
@@ -1,2 +1 @@
-libxml2-devel
 python-devel
diff --git a/files/rpms-suse/q-l3 b/files/rpms-suse/q-l3
new file mode 100644
index 0000000..a7a190c
--- /dev/null
+++ b/files/rpms-suse/q-l3
@@ -0,0 +1,2 @@
+conntrack-tools
+keepalived
diff --git a/files/rpms-suse/trove b/files/rpms-suse/trove
deleted file mode 100644
index 96f8f29..0000000
--- a/files/rpms-suse/trove
+++ /dev/null
@@ -1 +0,0 @@
-libxslt1-dev
diff --git a/files/rpms/general b/files/rpms/general
index 7b2c00a..c3f3de8 100644
--- a/files/rpms/general
+++ b/files/rpms/general
@@ -25,6 +25,7 @@
 libyaml-devel
 gettext  # used for compiling message catalogs
 net-tools
-java-1.7.0-openjdk-headless  # NOPRIME rhel7,f20
+java-1.7.0-openjdk-headless  # NOPRIME rhel7
 java-1.8.0-openjdk-headless  # NOPRIME f21,f22
 pyOpenSSL # version in pip uses too much memory
+iptables-services  # NOPRIME f21,f22
diff --git a/files/rpms/nova b/files/rpms/nova
index ebd6674..d32c332 100644
--- a/files/rpms/nova
+++ b/files/rpms/nova
@@ -10,6 +10,7 @@
 iputils
 kpartx
 kvm # NOPRIME
+qemu-kvm # NOPRIME
 libvirt-bin # NOPRIME
 libvirt-devel # NOPRIME
 libvirt-python # NOPRIME
diff --git a/functions-common b/functions-common
index 48c0c75..6ab567a 100644
--- a/functions-common
+++ b/functions-common
@@ -43,6 +43,25 @@
 
 TRACK_DEPENDS=${TRACK_DEPENDS:-False}
 
+# Save these variables to .stackenv
+STACK_ENV_VARS="BASE_SQL_CONN DATA_DIR DEST ENABLED_SERVICES HOST_IP \
+    KEYSTONE_AUTH_PROTOCOL KEYSTONE_AUTH_URI KEYSTONE_SERVICE_URI \
+    LOGFILE OS_CACERT SERVICE_HOST SERVICE_PROTOCOL STACK_USER TLS_IP"
+
+
+# Saves significant environment variables to .stackenv for later use
+# Refers to a lot of globals; only TOP_DIR and STACK_ENV_VARS are required to
+# function. The rest are simply saved and do not cause problems if they are undefined.
+# save_stackenv [tag]
+function save_stackenv {
+    local tag=${1:-""}
+    # Save some values we generated for later use
+    time_stamp=$(date "+$TIMESTAMP_FORMAT")
+    echo "# $time_stamp $tag" >$TOP_DIR/.stackenv
+    for i in $STACK_ENV_VARS; do
+        echo $i=${!i} >>$TOP_DIR/.stackenv
+    done
+}
 
 # Normalize config values to True or False
 # Accepts as False: 0 no No NO false False FALSE
@@ -68,6 +87,7 @@
     [[ -v "$1" ]]
 }
 
+
 # Control Functions
 # =================
 
@@ -1726,7 +1746,6 @@
         [[ ${service} == n-cell-* && ${ENABLED_SERVICES} =~ "n-cell" ]] && enabled=0
         [[ ${service} == n-cpu-* && ${ENABLED_SERVICES} =~ "n-cpu" ]] && enabled=0
         [[ ${service} == "nova" && ${ENABLED_SERVICES} =~ "n-" ]] && enabled=0
-        [[ ${service} == "cinder" && ${ENABLED_SERVICES} =~ "c-" ]] && enabled=0
         [[ ${service} == "ceilometer" && ${ENABLED_SERVICES} =~ "ceilometer-" ]] && enabled=0
         [[ ${service} == "glance" && ${ENABLED_SERVICES} =~ "g-" ]] && enabled=0
         [[ ${service} == "ironic" && ${ENABLED_SERVICES} =~ "ir-" ]] && enabled=0
@@ -1939,6 +1958,19 @@
     fi
 }
 
+# Test with a finite retry loop.
+#
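+# test_with_retry testcmd failmsg [until] [sleep]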
+function test_with_retry {
+    local testcmd=$1
+    local failmsg=$2
+    local until=${3:-10}
+    local sleep=${4:-0.5}
+
+    if ! timeout $until sh -c "while ! $testcmd; do sleep $sleep; done"; then
+        die $LINENO "$failmsg"
+    fi
+}
+
 
 # Restore xtrace
 $XTRACE
diff --git a/inc/python b/inc/python
index 3d329b5..07a811e 100644
--- a/inc/python
+++ b/inc/python
@@ -216,18 +216,14 @@
     local update_requirements=$(cd $project_dir && git diff --exit-code >/dev/null || echo "changed")
 
     if [[ $update_requirements != "changed" ]]; then
-        if [[ "$REQUIREMENTS_MODE" == "soft" ]]; then
-            if is_in_projects_txt $project_dir; then
-                (cd $REQUIREMENTS_DIR; \
-                    python update.py $project_dir)
-            else
-                # soft update projects not found in requirements project.txt
-                (cd $REQUIREMENTS_DIR; \
-                    python update.py -s $project_dir)
-            fi
-        else
+        if is_in_projects_txt $project_dir; then
             (cd $REQUIREMENTS_DIR; \
-                python update.py $project_dir)
+                ./.venv/bin/python update.py $project_dir)
+        else
+            # soft update projects not found in requirements project.txt
+            echo "$project_dir not a constrained repository, soft enforcing requirements"
+            (cd $REQUIREMENTS_DIR; \
+                ./.venv/bin/python update.py -s $project_dir)
         fi
     fi
 
diff --git a/lib/ceilometer b/lib/ceilometer
index 1f72187..d7888d9 100644
--- a/lib/ceilometer
+++ b/lib/ceilometer
@@ -78,8 +78,13 @@
 CEILOMETER_AUTH_CACHE_DIR=${CEILOMETER_AUTH_CACHE_DIR:-/var/cache/ceilometer}
 CEILOMETER_WSGI_DIR=${CEILOMETER_WSGI_DIR:-/var/www/ceilometer}
 
-# Support potential entry-points console scripts
-CEILOMETER_BIN_DIR=$(get_python_exec_prefix)
+# Support potential entry-points console scripts in VENV or not
+if [[ ${USE_VENV} = True ]]; then
+    PROJECT_VENV["ceilometer"]=${CEILOMETER_DIR}.venv
+    CEILOMETER_BIN_DIR=${PROJECT_VENV["ceilometer"]}/bin
+else
+    CEILOMETER_BIN_DIR=$(get_python_exec_prefix)
+fi
 
 # Set up database backend
 CEILOMETER_BACKEND=${CEILOMETER_BACKEND:-mysql}
@@ -165,16 +170,22 @@
 
     local ceilometer_apache_conf=$(apache_site_config_for ceilometer)
     local apache_version=$(get_apache_version)
+    local venv_path=""
 
     # Copy proxy vhost and wsgi file
     sudo cp $CEILOMETER_DIR/ceilometer/api/app.wsgi $CEILOMETER_WSGI_DIR/app
 
+    if [[ ${USE_VENV} = True ]]; then
+        venv_path="python-path=${PROJECT_VENV["ceilometer"]}/lib/$(python_version)/site-packages"
+    fi
+
     sudo cp $FILES/apache-ceilometer.template $ceilometer_apache_conf
     sudo sed -e "
         s|%PORT%|$CEILOMETER_SERVICE_PORT|g;
         s|%APACHE_NAME%|$APACHE_NAME|g;
         s|%WSGIAPP%|$CEILOMETER_WSGI_DIR/app|g;
-        s|%USER%|$STACK_USER|g
+        s|%USER%|$STACK_USER|g;
+        s|%VIRTUALENV%|$venv_path|g
     " -i $ceilometer_apache_conf
 }
 
@@ -232,12 +243,14 @@
         iniset $CEILOMETER_CONF DEFAULT collector_workers $API_WORKERS
         ${TOP_DIR}/pkg/elasticsearch.sh start
         cleanup_ceilometer
-    else
+    elif [ "$CEILOMETER_BACKEND" = 'mongodb' ] ; then
         iniset $CEILOMETER_CONF database alarm_connection mongodb://localhost:27017/ceilometer
         iniset $CEILOMETER_CONF database event_connection mongodb://localhost:27017/ceilometer
         iniset $CEILOMETER_CONF database metering_connection mongodb://localhost:27017/ceilometer
         configure_mongodb
         cleanup_ceilometer
+    else
+        die $LINENO "Unable to configure unknown CEILOMETER_BACKEND $CEILOMETER_BACKEND"
     fi
 
     if [[ "$VIRT_DRIVER" = 'vsphere' ]]; then
@@ -263,10 +276,8 @@
     local packages=mongodb-server
 
     if is_fedora; then
-        # mongodb client + python bindings
-        packages="${packages} mongodb pymongo"
-    else
-        packages="${packages} python-pymongo"
+        # mongodb client
+        packages="${packages} mongodb"
     fi
 
     install_package ${packages}
@@ -319,6 +330,21 @@
         install_redis
     fi
 
+    if [ "$CEILOMETER_BACKEND" = 'mongodb' ] ; then
+        pip_install_gr pymongo
+    fi
+
+    # Only install virt drivers if we're running nova compute
+    if is_service_enabled n-cpu ; then
+        if [[ "$VIRT_DRIVER" = 'libvirt' ]]; then
+            pip_install_gr libvirt-python
+        fi
+
+        if [[ "$VIRT_DRIVER" = 'vsphere' ]]; then
+            pip_install_gr oslo.vmware
+        fi
+    fi
+
     if [ "$CEILOMETER_BACKEND" = 'es' ] ; then
         ${TOP_DIR}/pkg/elasticsearch.sh download
         ${TOP_DIR}/pkg/elasticsearch.sh install
@@ -340,22 +366,19 @@
         git_clone_by_name "ceilometermiddleware"
         setup_dev_lib "ceilometermiddleware"
     else
-        # BUG: this should be a pip_install_gr except it was never
-        # included in global-requirements. Needs to be fixed by
-        # https://bugs.launchpad.net/ceilometer/+bug/1441655
-        pip_install ceilometermiddleware
+        pip_install_gr ceilometermiddleware
     fi
 }
 
 # start_ceilometer() - Start running processes, including screen
 function start_ceilometer {
-    run_process ceilometer-acentral "ceilometer-agent-central --config-file $CEILOMETER_CONF"
-    run_process ceilometer-anotification "ceilometer-agent-notification --config-file $CEILOMETER_CONF"
-    run_process ceilometer-collector "ceilometer-collector --config-file $CEILOMETER_CONF"
-    run_process ceilometer-aipmi "ceilometer-agent-ipmi --config-file $CEILOMETER_CONF"
+    run_process ceilometer-acentral "$CEILOMETER_BIN_DIR/ceilometer-agent-central --config-file $CEILOMETER_CONF"
+    run_process ceilometer-anotification "$CEILOMETER_BIN_DIR/ceilometer-agent-notification --config-file $CEILOMETER_CONF"
+    run_process ceilometer-collector "$CEILOMETER_BIN_DIR/ceilometer-collector --config-file $CEILOMETER_CONF"
+    run_process ceilometer-aipmi "$CEILOMETER_BIN_DIR/ceilometer-agent-ipmi --config-file $CEILOMETER_CONF"
 
     if [[ "$CEILOMETER_USE_MOD_WSGI" == "False" ]]; then
-        run_process ceilometer-api "ceilometer-api -d -v --log-dir=$CEILOMETER_API_LOG_DIR --config-file $CEILOMETER_CONF"
+        run_process ceilometer-api "$CEILOMETER_BIN_DIR/ceilometer-api -d -v --log-dir=$CEILOMETER_API_LOG_DIR --config-file $CEILOMETER_CONF"
     else
         enable_apache_site ceilometer
         restart_apache_server
@@ -367,10 +390,10 @@
     # Start the compute agent last to allow time for the collector to
     # fully wake up and connect to the message bus. See bug #1355809
     if [[ "$VIRT_DRIVER" = 'libvirt' ]]; then
-        run_process ceilometer-acompute "ceilometer-agent-compute --config-file $CEILOMETER_CONF" $LIBVIRT_GROUP
+        run_process ceilometer-acompute "$CEILOMETER_BIN_DIR/ceilometer-agent-compute --config-file $CEILOMETER_CONF" $LIBVIRT_GROUP
     fi
     if [[ "$VIRT_DRIVER" = 'vsphere' ]]; then
-        run_process ceilometer-acompute "ceilometer-agent-compute --config-file $CEILOMETER_CONF"
+        run_process ceilometer-acompute "$CEILOMETER_BIN_DIR/ceilometer-agent-compute --config-file $CEILOMETER_CONF"
     fi
 
     # Only die on API if it was actually intended to be turned on
@@ -381,8 +404,8 @@
         fi
     fi
 
-    run_process ceilometer-alarm-notifier "ceilometer-alarm-notifier --config-file $CEILOMETER_CONF"
-    run_process ceilometer-alarm-evaluator "ceilometer-alarm-evaluator --config-file $CEILOMETER_CONF"
+    run_process ceilometer-alarm-notifier "$CEILOMETER_BIN_DIR/ceilometer-alarm-notifier --config-file $CEILOMETER_CONF"
+    run_process ceilometer-alarm-evaluator "$CEILOMETER_BIN_DIR/ceilometer-alarm-evaluator --config-file $CEILOMETER_CONF"
 }
 
 # stop_ceilometer() - Stop running processes
diff --git a/lib/ceph b/lib/ceph
index 4d6ca4a..cbdc3b8 100644
--- a/lib/ceph
+++ b/lib/ceph
@@ -110,7 +110,7 @@
 
 # check_os_support_ceph() - Check if the operating system provides a decent version of Ceph
 function check_os_support_ceph {
-    if [[ ! ${DISTRO} =~ (trusty|f20|f21|f22) ]]; then
+    if [[ ! ${DISTRO} =~ (trusty|f21|f22) ]]; then
         echo "WARNING: your distro $DISTRO does not provide (at least) the Firefly release. Please use Ubuntu Trusty or Fedora 20 (and higher)"
         if [[ "$FORCE_CEPH_INSTALL" != "yes" ]]; then
             die $LINENO "If you wish to install Ceph on this distribution anyway run with FORCE_CEPH_INSTALL=yes"
diff --git a/lib/cinder b/lib/cinder
index b8cf809..8117447 100644
--- a/lib/cinder
+++ b/lib/cinder
@@ -66,6 +66,10 @@
 CINDER_SERVICE_PORT_INT=${CINDER_SERVICE_PORT_INT:-18776}
 CINDER_SERVICE_PROTOCOL=${CINDER_SERVICE_PROTOCOL:-$SERVICE_PROTOCOL}
 
+# What type of LVM device should Cinder use for the LVM backend.
+# Defaults to "default", which is thick provisioning; the other valid
+# choice is "thin", which as the name implies utilizes lvm thin
+# provisioning.
+CINDER_LVM_TYPE=${CINDER_LVM_TYPE:-default}
 
 # Default backends
 # The backend format is type:name where type is one of the supported backend
@@ -432,12 +436,13 @@
             _configure_tgt_for_config_d
             if is_ubuntu; then
                 sudo service tgt restart
-            elif is_fedora || is_suse; then
-                restart_service tgtd
+            elif is_suse; then
+                # NOTE(dmllr): workaround restart bug
+                # https://bugzilla.suse.com/show_bug.cgi?id=934642
+                stop_service tgtd
+                start_service tgtd
             else
-                # note for other distros: unstack.sh also uses the tgt/tgtd service
-                # name, and would need to be adjusted too
-                exit_distro_not_supported "restarting tgt"
+                restart_service tgtd
             fi
             # NOTE(gfidente): ensure tgtd is running in debug mode
             sudo tgtadm --mode system --op update --name debug --value on
diff --git a/lib/cinder_backends/lvm b/lib/cinder_backends/lvm
index 35ad209..411b82c 100644
--- a/lib/cinder_backends/lvm
+++ b/lib/cinder_backends/lvm
@@ -51,6 +51,7 @@
     iniset $CINDER_CONF $be_name volume_driver "cinder.volume.drivers.lvm.LVMVolumeDriver"
     iniset $CINDER_CONF $be_name volume_group $VOLUME_GROUP_NAME-$be_name
     iniset $CINDER_CONF $be_name iscsi_helper "$CINDER_ISCSI_HELPER"
+    iniset $CINDER_CONF $be_name lvm_type "$CINDER_LVM_TYPE"
 
     if [[ "$CINDER_SECURE_DELETE" == "False" ]]; then
         iniset $CINDER_CONF $be_name volume_clear none
diff --git a/lib/databases/mysql b/lib/databases/mysql
index 7b01771..0e477ca 100644
--- a/lib/databases/mysql
+++ b/lib/databases/mysql
@@ -11,7 +11,7 @@
 MY_XTRACE=$(set +o | grep xtrace)
 set +o xtrace
 
-MYSQL_DRIVER=${MYSQL_DRIVER:-MySQL-python}
+MYSQL_DRIVER=${MYSQL_DRIVER:-PyMySQL}
 # Force over to pymysql driver by default if we are using it.
 if is_service_enabled mysql; then
     if [[ "$MYSQL_DRIVER" == "PyMySQL" ]]; then
@@ -95,7 +95,10 @@
     sudo bash -c "source $TOP_DIR/functions && \
         iniset $my_conf mysqld bind-address 0.0.0.0 && \
         iniset $my_conf mysqld sql_mode STRICT_ALL_TABLES && \
-        iniset $my_conf mysqld default-storage-engine InnoDB"
+        iniset $my_conf mysqld default-storage-engine InnoDB && \
+        iniset $my_conf mysqld max_connections 1024 && \
+        iniset $my_conf mysqld query_cache_type OFF && \
+        iniset $my_conf mysqld query_cache_size 0"
 
 
     if [[ "$DATABASE_QUERY_LOGGING" == "True" ]]; then
diff --git a/lib/glance b/lib/glance
index 2fae002..f2c5e99 100644
--- a/lib/glance
+++ b/lib/glance
@@ -56,6 +56,7 @@
 GLANCE_CACHE_CONF=$GLANCE_CONF_DIR/glance-cache.conf
 GLANCE_POLICY_JSON=$GLANCE_CONF_DIR/policy.json
 GLANCE_SCHEMA_JSON=$GLANCE_CONF_DIR/schema-image.json
+GLANCE_SWIFT_STORE_CONF=$GLANCE_CONF_DIR/glance-swift-store.conf
 
 if is_ssl_enabled_service "glance" || is_service_enabled tls-proxy; then
     GLANCE_SERVICE_PROTOCOL="https"
@@ -145,11 +146,20 @@
     # Store the images in swift if enabled.
     if is_service_enabled s-proxy; then
         iniset $GLANCE_API_CONF glance_store default_store swift
-        iniset $GLANCE_API_CONF glance_store swift_store_auth_address $KEYSTONE_SERVICE_URI/v2.0/
-        iniset $GLANCE_API_CONF glance_store swift_store_user $SERVICE_TENANT_NAME:glance-swift
-        iniset $GLANCE_API_CONF glance_store swift_store_key $SERVICE_PASSWORD
         iniset $GLANCE_API_CONF glance_store swift_store_create_container_on_put True
+
+        iniset $GLANCE_API_CONF glance_store swift_store_config_file $GLANCE_SWIFT_STORE_CONF
+        iniset $GLANCE_API_CONF glance_store default_swift_reference ref1
         iniset $GLANCE_API_CONF glance_store stores "file, http, swift"
+
+        iniset $GLANCE_SWIFT_STORE_CONF ref1 user $SERVICE_TENANT_NAME:glance-swift
+        iniset $GLANCE_SWIFT_STORE_CONF ref1 key $SERVICE_PASSWORD
+        iniset $GLANCE_SWIFT_STORE_CONF ref1 auth_address $KEYSTONE_SERVICE_URI/v2.0/
+
+        # commenting is not strictly necessary but it's confusing to have bad values in conf
+        inicomment $GLANCE_API_CONF glance_store swift_store_user
+        inicomment $GLANCE_API_CONF glance_store swift_store_key
+        inicomment $GLANCE_API_CONF glance_store swift_store_auth_address
     fi
 
     if is_service_enabled tls-proxy; then
diff --git a/lib/infra b/lib/infra
index c825b4e..3d68e45 100644
--- a/lib/infra
+++ b/lib/infra
@@ -29,8 +29,17 @@
 
 # install_infra() - Collect source and prepare
 function install_infra {
+    local PIP_VIRTUAL_ENV="$REQUIREMENTS_DIR/.venv"
     # bring down global requirements
     git_clone $REQUIREMENTS_REPO $REQUIREMENTS_DIR $REQUIREMENTS_BRANCH
+    [ ! -d $PIP_VIRTUAL_ENV ] && virtualenv $PIP_VIRTUAL_ENV
+    # We don't care about testing git pbr in the requirements venv.
+    PIP_VIRTUAL_ENV=$PIP_VIRTUAL_ENV pip_install -U pbr
+    PIP_VIRTUAL_ENV=$PIP_VIRTUAL_ENV pip_install $REQUIREMENTS_DIR
+
+    # Unset the PIP_VIRTUAL_ENV so that PBR does not end up trapped
+    # down the VENV well
+    unset PIP_VIRTUAL_ENV
 
     # Install pbr
     if use_library_from_git "pbr"; then
diff --git a/lib/neutron-legacy b/lib/neutron-legacy
index 3ac76a2..acc2851 100644
--- a/lib/neutron-legacy
+++ b/lib/neutron-legacy
@@ -696,9 +696,10 @@
     if is_ssl_enabled_service "neutron"; then
         ssl_ca="--ca-certificate=${SSL_BUNDLE_FILE}"
     fi
-    if ! timeout $SERVICE_TIMEOUT sh -c "while ! wget ${ssl_ca} --no-proxy -q -O- $service_protocol://$Q_HOST:$service_port; do sleep 1; done"; then
-        die $LINENO "Neutron did not start"
-    fi
+
+    local testcmd="wget ${ssl_ca} --no-proxy -q -O- $service_protocol://$Q_HOST:$service_port"
+    test_with_retry "$testcmd" "Neutron did not start" $SERVICE_TIMEOUT
+
     # Start proxy if enabled
     if is_service_enabled tls-proxy; then
         start_tls_proxy '*' $Q_PORT $Q_HOST $Q_PORT_INT &
@@ -721,7 +722,7 @@
                 sudo ip addr del $IP dev $PUBLIC_INTERFACE
                 sudo ip addr add $IP dev $OVS_PHYSICAL_BRIDGE
             done
-            sudo route add -net $FIXED_RANGE gw $NETWORK_GATEWAY dev $OVS_PHYSICAL_BRIDGE
+            sudo ip route replace $FIXED_RANGE via $NETWORK_GATEWAY dev $OVS_PHYSICAL_BRIDGE
         fi
     fi
 
@@ -1266,16 +1267,26 @@
     # This logic is specific to using the l3-agent for layer 3
     if is_service_enabled q-l3; then
         # Configure and enable public bridge
+        local ext_gw_interface="none"
         if is_neutron_ovs_base_plugin && [[ "$Q_USE_NAMESPACE" = "True" ]]; then
-            local ext_gw_interface=$(_neutron_get_ext_gw_interface)
+            ext_gw_interface=$(_neutron_get_ext_gw_interface)
+        elif [[ "$Q_AGENT" = "linuxbridge" ]]; then
+            # Search for the brq device the neutron router and network for $FIXED_RANGE
+            # will be using.
+            # e.x. brq3592e767-da for NET_ID 3592e767-da66-4bcb-9bec-cdb03cd96102
+            ext_gw_interface=brq${EXT_NET_ID:0:11}
+        fi
+        if [[ "$ext_gw_interface" != "none" ]]; then
             local cidr_len=${FLOATING_RANGE#*/}
+            local testcmd="ip -o link | grep -q $ext_gw_interface"
+            test_with_retry "$testcmd" "$ext_gw_interface creation failed"
             if [[ $(ip addr show dev $ext_gw_interface | grep -c $ext_gw_ip) == 0 && ( $Q_USE_PROVIDERNET_FOR_PUBLIC == "False" || $Q_USE_PUBLIC_VETH == "True" ) ]]; then
                 sudo ip addr add $ext_gw_ip/$cidr_len dev $ext_gw_interface
                 sudo ip link set $ext_gw_interface up
             fi
             ROUTER_GW_IP=`neutron port-list -c fixed_ips -c device_owner | grep router_gateway | awk -F '"' -v subnet_id=$PUB_SUBNET_ID '$4 == subnet_id { print $8; }'`
             die_if_not_set $LINENO ROUTER_GW_IP "Failure retrieving ROUTER_GW_IP"
-            sudo route add -net $FIXED_RANGE gw $ROUTER_GW_IP
+            sudo ip route replace $FIXED_RANGE via $ROUTER_GW_IP
         fi
         _neutron_set_router_id
     fi
@@ -1310,7 +1321,7 @@
 
             # Configure interface for public bridge
             sudo ip -6 addr add $ipv6_ext_gw_ip/$ipv6_cidr_len dev $ext_gw_interface
-            sudo ip -6 route add $FIXED_RANGE_V6 via $IPV6_ROUTER_GW_IP dev $ext_gw_interface
+            sudo ip -6 route replace $FIXED_RANGE_V6 via $IPV6_ROUTER_GW_IP dev $ext_gw_interface
         fi
         _neutron_set_router_id
     fi
@@ -1380,9 +1391,8 @@
     local timeout_sec=$5
     local probe_cmd = ""
     probe_cmd=`_get_probe_cmd_prefix $from_net`
-    if ! timeout $timeout_sec sh -c "while ! $probe_cmd ssh -o StrictHostKeyChecking=no -i $key_file ${user}@$ip echo success; do sleep 1; done"; then
-        die $LINENO "server didn't become ssh-able!"
-    fi
+    local testcmd="$probe_cmd ssh -o StrictHostKeyChecking=no -i $key_file ${user}@$ip echo success"
+    test_with_retry "$testcmd" "server $ip didn't become ssh-able" $timeout_sec
 }
 
 # Neutron 3rd party programs
diff --git a/lib/neutron_plugins/linuxbridge_agent b/lib/neutron_plugins/linuxbridge_agent
old mode 100644
new mode 100755
index b348af9..fefc1c3
--- a/lib/neutron_plugins/linuxbridge_agent
+++ b/lib/neutron_plugins/linuxbridge_agent
@@ -9,6 +9,20 @@
 
 function neutron_lb_cleanup {
     sudo brctl delbr $PUBLIC_BRIDGE
+
+    if [[ "$Q_ML2_TENANT_NETWORK_TYPE" = "vxlan" ]]; then
+        for port in $(sudo brctl show | grep -o -e [a-zA-Z\-]*tap[0-9a-f\-]* -e vxlan-[0-9a-f\-]*); do
+            sudo ip link delete $port
+        done
+    elif [[ "$Q_ML2_TENANT_NETWORK_TYPE" = "vlan" ]]; then
+        for port in $(sudo brctl show | grep -o -e [a-zA-Z\-]*tap[0-9a-f\-]* -e ${LB_PHYSICAL_INTERFACE}\.[0-9a-f\-]*); do
+            sudo ip link delete $port
+        done
+    fi
+    for bridge in $(sudo brctl show |grep -o -e brq[0-9a-f\-]*); do
+        sudo ip link set $bridge down
+        sudo brctl delbr $bridge
+    done
 }
 
 function is_neutron_ovs_base_plugin {
diff --git a/lib/neutron_plugins/openvswitch_agent b/lib/neutron_plugins/openvswitch_agent
index 1d24f3b..2a05e2d 100644
--- a/lib/neutron_plugins/openvswitch_agent
+++ b/lib/neutron_plugins/openvswitch_agent
@@ -59,7 +59,7 @@
         OVS_BRIDGE_MAPPINGS=$PHYSICAL_NETWORK:$OVS_PHYSICAL_BRIDGE
 
         # Configure bridge manually with physical interface as port for multi-node
-        sudo ovs-vsctl --no-wait -- --may-exist add-br $OVS_PHYSICAL_BRIDGE
+        _neutron_ovs_base_add_bridge $OVS_PHYSICAL_BRIDGE
     fi
     if [[ "$OVS_BRIDGE_MAPPINGS" != "" ]]; then
         iniset /$Q_PLUGIN_CONF_FILE ovs bridge_mappings $OVS_BRIDGE_MAPPINGS
@@ -92,7 +92,7 @@
         # Set up domU's L2 agent:
 
         # Create a bridge "br-$GUEST_INTERFACE_DEFAULT"
-        sudo ovs-vsctl --no-wait -- --may-exist add-br "br-$GUEST_INTERFACE_DEFAULT"
+        _neutron_ovs_base_add_bridge "br-$GUEST_INTERFACE_DEFAULT"
         # Add $GUEST_INTERFACE_DEFAULT to that bridge
         sudo ovs-vsctl add-port "br-$GUEST_INTERFACE_DEFAULT" $GUEST_INTERFACE_DEFAULT
 
diff --git a/lib/neutron_plugins/ovs_base b/lib/neutron_plugins/ovs_base
index 81561d3..4e750f0 100644
--- a/lib/neutron_plugins/ovs_base
+++ b/lib/neutron_plugins/ovs_base
@@ -15,13 +15,21 @@
     return 0
 }
 
+function _neutron_ovs_base_add_bridge {
+    local bridge=$1
+    local addbr_cmd="sudo ovs-vsctl --no-wait -- --may-exist add-br $bridge"
+
+    if [ "$OVS_DATAPATH_TYPE" != "" ] ; then
+        addbr_cmd="$addbr_cmd -- set Bridge $bridge datapath_type=${OVS_DATAPATH_TYPE}"
+    fi
+
+    $addbr_cmd
+}
+
 function _neutron_ovs_base_setup_bridge {
     local bridge=$1
     neutron-ovs-cleanup
-    sudo ovs-vsctl --no-wait -- --may-exist add-br $bridge
-    if [[ $OVS_DATAPATH_TYPE != "" ]]; then
-        sudo ovs-vsctl set Bridge $bridge datapath_type=${OVS_DATAPATH_TYPE}
-    fi
+    _neutron_ovs_base_add_bridge $bridge
     sudo ovs-vsctl --no-wait br-set-external-id $bridge bridge-id $bridge
 }
 
@@ -92,7 +100,7 @@
         sudo ip link set $Q_PUBLIC_VETH_EX up
         sudo ip addr flush dev $Q_PUBLIC_VETH_EX
     else
-        sudo ovs-vsctl -- --may-exist add-br $PUBLIC_BRIDGE
+        _neutron_ovs_base_add_bridge $PUBLIC_BRIDGE
         sudo ovs-vsctl br-set-external-id $PUBLIC_BRIDGE bridge-id $PUBLIC_BRIDGE
     fi
 }
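
With a datapath type set (e.g. OVS_DATAPATH_TYPE=netdev, used here only as an illustration), the new helper builds a single compound ovs-vsctl call; with the variable empty it degenerates to the old command:

    # OVS_DATAPATH_TYPE=netdev, bridge br-ex
    sudo ovs-vsctl --no-wait -- --may-exist add-br br-ex -- set Bridge br-ex datapath_type=netdev
    # OVS_DATAPATH_TYPE unset or empty
    sudo ovs-vsctl --no-wait -- --may-exist add-br br-ex
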
diff --git a/lib/neutron_plugins/vmware_dvs b/lib/neutron_plugins/vmware_dvs
new file mode 100644
index 0000000..587d5a6
--- /dev/null
+++ b/lib/neutron_plugins/vmware_dvs
@@ -0,0 +1,10 @@
+#!/bin/bash
+
+# This file is needed so Q_PLUGIN=vmware_dvs will work.
+
+# FIXME(salv-orlando): This function should not be here, but unfortunately
+# devstack calls it before the external plugins are fetched
+function has_neutron_plugin_security_group {
+    # 0 means True here
+    return 0
+}
diff --git a/lib/neutron_plugins/vmware_nsx_v3 b/lib/neutron_plugins/vmware_nsx_v3
new file mode 100644
index 0000000..6d8a6e6
--- /dev/null
+++ b/lib/neutron_plugins/vmware_nsx_v3
@@ -0,0 +1,10 @@
+#!/bin/bash
+
+# This file is needed so Q_PLUGIN=vmware_nsx_v3 will work.
+
+# FIXME(salv-orlando): This function should not be here, but unfortunately
+# devstack calls it before the external plugins are fetched
+function has_neutron_plugin_security_group {
+    # 0 means True here
+    return 0
+}
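
Because a shell return status of 0 counts as success, callers can use the stub directly in a conditional, e.g.:

    if has_neutron_plugin_security_group; then
        echo "security groups are handled by the neutron plugin"
    fi
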
diff --git a/lib/nova b/lib/nova
index a9f3351..88b336a 100644
--- a/lib/nova
+++ b/lib/nova
@@ -232,6 +232,10 @@
     #if is_service_enabled n-cpu && [[ -r $NOVA_PLUGINS/hypervisor-$VIRT_DRIVER ]]; then
     #    cleanup_nova_hypervisor
     #fi
+
+    if [ "$NOVA_USE_MOD_WSGI" == "True" ]; then
+        _cleanup_nova_apache_wsgi
+    fi
 }
 
 # _cleanup_nova_apache_wsgi() - Remove wsgi files, disable and remove apache vhost file
@@ -277,6 +281,7 @@
         s|%SSLKEYFILE%|$nova_keyfile|g;
         s|%USER%|$STACK_USER|g;
         s|%VIRTUALENV%|$venv_path|g
+        s|%APIWORKERS%|$API_WORKERS|g
     " -i $nova_apache_conf
 
     sudo cp $FILES/apache-nova-ec2-api.template $nova_ec2_apache_conf
@@ -289,6 +294,7 @@
         s|%SSLKEYFILE%|$nova_keyfile|g;
         s|%USER%|$STACK_USER|g;
         s|%VIRTUALENV%|$venv_path|g
+        s|%APIWORKERS%|$API_WORKERS|g
     " -i $nova_ec2_apache_conf
 }
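
Assuming the apache templates carry a %APIWORKERS% placeholder on their WSGIDaemonProcess lines (an assumption about the templates, which are not shown in this diff), the added sed expression rewrites it to the configured worker count:

    API_WORKERS=4
    echo "WSGIDaemonProcess nova-api processes=%APIWORKERS%" | sed "s|%APIWORKERS%|$API_WORKERS|g"
    # -> WSGIDaemonProcess nova-api processes=4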
 
@@ -761,8 +767,8 @@
         enable_apache_site nova-api
         enable_apache_site nova-ec2-api
         restart_apache_server
-        tail_log nova /var/log/$APACHE_NAME/nova-api.log
-        tail_log nova /var/log/$APACHE_NAME/nova-ec2-api.log
+        tail_log nova-api /var/log/$APACHE_NAME/nova-api.log
+        tail_log nova-ec2-api /var/log/$APACHE_NAME/nova-ec2-api.log
     else
         run_process n-api "$NOVA_BIN_DIR/nova-api"
     fi
diff --git a/lib/nova_plugins/functions-libvirt b/lib/nova_plugins/functions-libvirt
index 96d8a44..5525cfd 100755
--- a/lib/nova_plugins/functions-libvirt
+++ b/lib/nova_plugins/functions-libvirt
@@ -28,16 +28,21 @@
         else
             install_package qemu-kvm
             install_package libguestfs0
-            install_package python-guestfs
         fi
         install_package libvirt-bin libvirt-dev
         pip_install_gr libvirt-python
         #pip_install_gr <there-si-no-guestfs-in-pypi>
     elif is_fedora || is_suse; then
         install_package kvm
+        # there is a dependency issue with kvm (which is really just a
+        # wrapper to qemu-system-x86) that leaves some bios files out,
+        # so install qemu-kvm (which shouldn't strictly be needed, as
+        # everything has been merged into qemu-system-x86) to bring in
+        # the right packages. see
+        # https://bugzilla.redhat.com/show_bug.cgi?id=1235890
+        install_package qemu-kvm
         install_package libvirt libvirt-devel
         pip_install_gr libvirt-python
-        install_package python-libguestfs
     fi
 }
 
diff --git a/lib/nova_plugins/hypervisor-libvirt b/lib/nova_plugins/hypervisor-libvirt
index a6a87f9..f70b21a 100644
--- a/lib/nova_plugins/hypervisor-libvirt
+++ b/lib/nova_plugins/hypervisor-libvirt
@@ -26,7 +26,7 @@
 # --------
 
 # File injection is disabled by default in Nova.  This will turn it back on.
-ENABLE_FILE_INJECTION=${ENABLE_FILE_INJECTION:-False}
+ENABLE_FILE_INJECTION=$(trueorfalse False ENABLE_FILE_INJECTION)
 
 
 # Entry Points
@@ -60,7 +60,6 @@
         iniset $NOVA_CONF DEFAULT vnc_enabled "false"
     fi
 
-    ENABLE_FILE_INJECTION=$(trueorfalse False ENABLE_FILE_INJECTION)
     if [[ "$ENABLE_FILE_INJECTION" = "True" ]] ; then
         # When libguestfs is available for file injection, enable using
         # libguestfs to inspect the image and figure out the proper
@@ -97,6 +96,14 @@
             yum_install libcgroup-tools
         fi
     fi
+
+    if [[ "$ENABLE_FILE_INJECTION" = "True" ]] ; then
+        if is_ubuntu; then
+            install_package python-guestfs
+        elif is_fedora || is_suse; then
+            install_package python-libguestfs
+        fi
+    fi
 }
 
 # start_nova_hypervisor - Start any required external services
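
For reference, trueorfalse (from devstack's functions-common) normalizes the many spellings of a boolean and falls back to the given default when the variable is unset; roughly:

    # assumes devstack's functions-common has been sourced
    ENABLE_FILE_INJECTION=yes
    ENABLE_FILE_INJECTION=$(trueorfalse False ENABLE_FILE_INJECTION)
    echo $ENABLE_FILE_INJECTION                        # -> True

    unset ENABLE_FILE_INJECTION
    echo $(trueorfalse False ENABLE_FILE_INJECTION)    # -> False (the default)
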
diff --git a/lib/oslo b/lib/oslo
index d9688a0..be935bb 100644
--- a/lib/oslo
+++ b/lib/oslo
@@ -22,8 +22,11 @@
 
 # Defaults
 # --------
+GITDIR["automaton"]=$DEST/automaton
 GITDIR["cliff"]=$DEST/cliff
 GITDIR["debtcollector"]=$DEST/debtcollector
+GITDIR["futurist"]=$DEST/futurist
+GITDIR["oslo.cache"]=$DEST/oslo.cache
 GITDIR["oslo.concurrency"]=$DEST/oslo.concurrency
 GITDIR["oslo.config"]=$DEST/oslo.config
 GITDIR["oslo.context"]=$DEST/oslo.context
@@ -35,6 +38,7 @@
 GITDIR["oslo.policy"]=$DEST/oslo.policy
 GITDIR["oslo.rootwrap"]=$DEST/oslo.rootwrap
 GITDIR["oslo.serialization"]=$DEST/oslo.serialization
+GITDIR["oslo.service"]=$DEST/oslo.service
 GITDIR["oslo.utils"]=$DEST/oslo.utils
 GITDIR["oslo.versionedobjects"]=$DEST/oslo.versionedobjects
 GITDIR["oslo.vmware"]=$DEST/oslo.vmware
@@ -62,6 +66,8 @@
 function install_oslo {
     _do_install_oslo_lib "cliff"
     _do_install_oslo_lib "debtcollector"
+    _do_install_oslo_lib "futurist"
+    _do_install_oslo_lib "oslo.cache"
     _do_install_oslo_lib "oslo.concurrency"
     _do_install_oslo_lib "oslo.config"
     _do_install_oslo_lib "oslo.context"
@@ -73,6 +79,7 @@
     _do_install_oslo_lib "oslo.policy"
     _do_install_oslo_lib "oslo.rootwrap"
     _do_install_oslo_lib "oslo.serialization"
+    _do_install_oslo_lib "oslo.service"
     _do_install_oslo_lib "oslo.utils"
     _do_install_oslo_lib "oslo.versionedobjects"
     _do_install_oslo_lib "oslo.vmware"
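
Each new GITDIR entry is consumed the same way as the existing libraries; a rough sketch of what the install path does with it (the real logic lives in lib/oslo and inc/python, and assumes a devstack environment):

    name=oslo.service
    if use_library_from_git "$name"; then
        git_clone ${GITREPO[$name]} ${GITDIR[$name]} ${GITBRANCH[$name]}
        setup_dev_lib "$name"
    fi
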
diff --git a/lib/tempest b/lib/tempest
index f3703a0..5ea217f 100644
--- a/lib/tempest
+++ b/lib/tempest
@@ -329,6 +329,7 @@
     if [[ ! -z "$TEMPEST_HTTP_IMAGE" ]]; then
         iniset $TEMPEST_CONFIG image http_image $TEMPEST_HTTP_IMAGE
     fi
 
     # Image Features
     iniset $TEMPEST_CONFIG image-feature-enabled deactivate_image True
@@ -451,6 +452,9 @@
     iniset $TEMPEST_CONFIG object-storage-feature-enabled discoverable_apis $object_storage_api_extensions
 
     # Volume
+    # TODO(dkranz): Remove the bootable flag when Juno is end of life.
+    iniset $TEMPEST_CONFIG volume-feature-enabled bootable True
+
     local volume_api_extensions=${VOLUME_API_EXTENSIONS:-"all"}
     if [[ ! -z "$DISABLE_VOLUME_API_EXTENSIONS" ]]; then
         # Enabled extensions are either the ones explicitly specified or those available on the API endpoint
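
The added iniset calls end up as plain options in tempest.conf; a quick way to sanity-check them is iniget, the read-side counterpart of iniset (illustrative, run from a devstack checkout with $TOP_DIR set):

    source $TOP_DIR/functions
    iniget $TEMPEST_CONFIG image-feature-enabled deactivate_image   # -> True
    iniget $TEMPEST_CONFIG volume-feature-enabled bootable          # -> True
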
diff --git a/stack.sh b/stack.sh
index 489fbe4..591c0dc 100755
--- a/stack.sh
+++ b/stack.sh
@@ -173,7 +173,7 @@
 
 # Warn users who aren't on an explicitly supported distro, but allow them to
 # override check and attempt installation with ``FORCE=yes ./stack``
-if [[ ! ${DISTRO} =~ (precise|trusty|utopic|vivid|7.0|wheezy|sid|testing|jessie|f20|f21|f22|rhel7) ]]; then
+if [[ ! ${DISTRO} =~ (precise|trusty|utopic|vivid|7.0|wheezy|sid|testing|jessie|f21|f22|rhel7) ]]; then
     echo "WARNING: this script has not been tested on $DISTRO"
     if [[ "$FORCE" != "yes" ]]; then
         die $LINENO "If you wish to run this script anyway run with FORCE=yes"
@@ -263,6 +263,7 @@
 EOF
     # Enable a bootstrap repo.  It is removed after finishing
     # the epel-release installation.
+    is_package_installed yum-utils || install_package yum-utils
     sudo yum-config-manager --enable epel-bootstrap
     yum_install epel-release || \
         die $LINENO "Error installing EPEL repo, cannot continue"
@@ -270,7 +271,6 @@
     sudo rm -f /etc/yum.repos.d/epel-bootstrap.repo
 
     # ... and also optional to be enabled
-    is_package_installed yum-utils || install_package yum-utils
     sudo yum-config-manager --enable rhel-7-server-optional-rpms
 
     RHEL_RDO_REPO_RPM=${RHEL7_RDO_REPO_RPM:-"https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm"}
@@ -669,6 +669,9 @@
     fi
 fi
 
+# Save configuration values
+save_stackenv $LINENO
+
 
 # Install Packages
 # ================
@@ -950,6 +953,9 @@
 # Initialize the directory for service status check
 init_service_check
 
+# Save configuration values
+save_stackenv $LINENO
+
 
 # Start Services
 # ==============
@@ -1290,35 +1296,44 @@
 
 
 # Save some values we generated for later use
-CURRENT_RUN_TIME=$(date "+$TIMESTAMP_FORMAT")
-echo "# $CURRENT_RUN_TIME" >$TOP_DIR/.stackenv
-for i in BASE_SQL_CONN ENABLED_SERVICES HOST_IP LOGFILE \
-    SERVICE_HOST SERVICE_PROTOCOL STACK_USER TLS_IP KEYSTONE_AUTH_PROTOCOL OS_CACERT; do
-    echo $i=${!i} >>$TOP_DIR/.stackenv
-done
+save_stackenv
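
save_stackenv replaces the inline loop removed above; a sketch of such a helper based on those removed lines, with the optional argument used as a tag in the header comment (as in the $LINENO calls added earlier):

    function save_stackenv {
        local tag=${1:-""}
        local time_stamp=$(date "+$TIMESTAMP_FORMAT")
        echo "# $time_stamp $tag" >$TOP_DIR/.stackenv
        for i in BASE_SQL_CONN ENABLED_SERVICES HOST_IP LOGFILE \
            SERVICE_HOST SERVICE_PROTOCOL STACK_USER TLS_IP KEYSTONE_AUTH_PROTOCOL OS_CACERT; do
            echo $i=${!i} >>$TOP_DIR/.stackenv
        done
    }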
 
-# Write out a clouds.yaml file
-# putting the location into a variable to allow for easier refactoring later
-# to make it overridable. There is current no usecase where doing so makes
-# sense, so I'm not actually doing it now.
+# Update/create user clouds.yaml file.
+# clouds.yaml will have
+# - A `devstack` entry for the `demo` user for the `demo` project.
+# - A `devstack-admin` entry for the `admin` user for the `admin` project.
+
+# The location is a variable to allow for easier refactoring later to make it
+# overridable. There is currently no use case where doing so makes sense, so
+# it is not configurable.
 CLOUDS_YAML=~/.config/openstack/clouds.yaml
-if [ ! -e $CLOUDS_YAML ]; then
-    mkdir -p $(dirname $CLOUDS_YAML)
-    cat >"$CLOUDS_YAML" <<EOF
-clouds:
-  devstack:
-    auth:
-      auth_url: $KEYSTONE_AUTH_URI/v$IDENTITY_API_VERSION
-      username: demo
-      project_name: demo
-      password: $ADMIN_PASSWORD
-    region_name: $REGION_NAME
-    identity_api_version: $IDENTITY_API_VERSION
-EOF
-    if [ -f "$SSL_BUNDLE_FILE" ]; then
-        echo "    cacert: $SSL_BUNDLE_FILE" >>"$CLOUDS_YAML"
-    fi
+
+mkdir -p $(dirname $CLOUDS_YAML)
+
+CA_CERT_ARG=''
+if [ -f "$SSL_BUNDLE_FILE" ]; then
+    CA_CERT_ARG="--os-cacert $SSL_BUNDLE_FILE"
 fi
+$TOP_DIR/tools/update_clouds_yaml.py \
+    --file $CLOUDS_YAML \
+    --os-cloud devstack \
+    --os-region-name $REGION_NAME \
+    --os-identity-api-version $IDENTITY_API_VERSION \
+    $CA_CERT_ARG \
+    --os-auth-url $KEYSTONE_AUTH_URI/v$IDENTITY_API_VERSION \
+    --os-username demo \
+    --os-password $ADMIN_PASSWORD \
+    --os-project-name demo
+$TOP_DIR/tools/update_clouds_yaml.py \
+    --file $CLOUDS_YAML \
+    --os-cloud devstack-admin \
+    --os-region-name $REGION_NAME \
+    --os-identity-api-version $IDENTITY_API_VERSION \
+    $CA_CERT_ARG \
+    --os-auth-url $KEYSTONE_AUTH_URI/v$IDENTITY_API_VERSION \
+    --os-username admin \
+    --os-password $ADMIN_PASSWORD \
+    --os-project-name admin
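
With both entries in place, any client built on os-client-config can select them by name, e.g.:

    openstack --os-cloud devstack image list
    OS_CLOUD=devstack-admin openstack project list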
 
 
 # Wrapup configuration
diff --git a/stackrc b/stackrc
index f8add4b..f2aafe9 100644
--- a/stackrc
+++ b/stackrc
@@ -149,17 +149,6 @@
 # Zero disables timeouts
 GIT_TIMEOUT=${GIT_TIMEOUT:-0}
 
-# Requirements enforcing mode
-#
-# - strict (default) : ensure all project requirements files match
-#   what's in global requirements.
-#
-# - soft : enforce requirements on everything in
-#   requirements/projects.txt, but do soft updates on all other
-#   repositories (i.e. sync versions for requirements that are in g-r,
-#   but pass through any extras)
-REQUIREMENTS_MODE=${REQUIREMENTS_MODE:-strict}
-
 
 # Repositories
 # ------------
@@ -326,10 +315,22 @@
 GITREPO["cliff"]=${CLIFF_REPO:-${GIT_BASE}/openstack/cliff.git}
 GITBRANCH["cliff"]=${CLIFF_BRANCH:-master}
 
+# async framework/helpers
+GITREPO["futurist"]=${FUTURIST_REPO:-${GIT_BASE}/openstack/futurist.git}
+GITBRANCH["futurist"]=${FUTURIST_BRANCH:-master}
+
 # debtcollector deprecation framework/helpers
 GITREPO["debtcollector"]=${DEBTCOLLECTOR_REPO:-${GIT_BASE}/openstack/debtcollector.git}
 GITBRANCH["debtcollector"]=${DEBTCOLLECTOR_BRANCH:-master}
 
+# helpful state machines
+GITREPO["automaton"]=${AUTOMATON_REPO:-${GIT_BASE}/openstack/automaton.git}
+GITBRANCH["automaton"]=${AUTOMATON_BRANCH:-master}
+
+# oslo.cache
+GITREPO["oslo.cache"]=${OSLOCACHE_REPO:-${GIT_BASE}/openstack/oslo.cache.git}
+GITBRANCH["oslo.cache"]=${OSLOCACHE_BRANCH:-master}
+
 # oslo.concurrency
 GITREPO["oslo.concurrency"]=${OSLOCON_REPO:-${GIT_BASE}/openstack/oslo.concurrency.git}
 GITBRANCH["oslo.concurrency"]=${OSLOCON_BRANCH:-master}
@@ -374,6 +375,10 @@
 GITREPO["oslo.serialization"]=${OSLOSERIALIZATION_REPO:-${GIT_BASE}/openstack/oslo.serialization.git}
 GITBRANCH["oslo.serialization"]=${OSLOSERIALIZATION_BRANCH:-master}
 
+# oslo.service
+GITREPO["oslo.service"]=${OSLOSERVICE_REPO:-${GIT_BASE}/openstack/oslo.service.git}
+GITBRANCH["oslo.service"]=${OSLOSERVICE_BRANCH:-master}
+
 # oslo.utils
 GITREPO["oslo.utils"]=${OSLOUTILS_REPO:-${GIT_BASE}/openstack/oslo.utils.git}
 GITBRANCH["oslo.utils"]=${OSLOUTILS_BRANCH:-master}
diff --git a/tests/test_libs_from_pypi.sh b/tests/test_libs_from_pypi.sh
index 336a213..1f7169c 100755
--- a/tests/test_libs_from_pypi.sh
+++ b/tests/test_libs_from_pypi.sh
@@ -39,7 +39,8 @@
 ALL_LIBS+=" python-openstackclient oslo.rootwrap oslo.i18n"
 ALL_LIBS+=" python-ceilometerclient oslo.utils python-swiftclient"
 ALL_LIBS+=" python-neutronclient tooz ceilometermiddleware oslo.policy"
-ALL_LIBS+=" debtcollector os-brick"
+ALL_LIBS+=" debtcollector os-brick automaton futurist oslo.service"
+ALL_LIBS+=" oslo.cache"
 
 # Generate the above list with
 # echo ${!GITREPO[@]}
diff --git a/tools/fixup_stuff.sh b/tools/fixup_stuff.sh
index 31258d1..4fff57f 100755
--- a/tools/fixup_stuff.sh
+++ b/tools/fixup_stuff.sh
@@ -126,6 +126,9 @@
         # [4] http://docs.openstack.org/developer/devstack/guides/neutron.html
         if is_package_installed firewalld; then
             sudo systemctl disable firewalld
+            # The iptables service files are no longer included by default,
+            # at least on a baremetal Fedora 21 Server install.
+            install_package iptables-services
             sudo systemctl enable iptables
             sudo systemctl stop firewalld
             sudo systemctl start iptables
diff --git a/tools/update_clouds_yaml.py b/tools/update_clouds_yaml.py
new file mode 100755
index 0000000..0862135
--- /dev/null
+++ b/tools/update_clouds_yaml.py
@@ -0,0 +1,95 @@
+#!/usr/bin/env python
+
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+# Update the clouds.yaml file.
+
+
+import argparse
+import os.path
+
+import yaml
+
+
+class UpdateCloudsYaml(object):
+    def __init__(self, args):
+        if args.file:
+            self._clouds_path = args.file
+            self._create_directory = False
+        else:
+            self._clouds_path = os.path.expanduser(
+                '~/.config/openstack/clouds.yaml')
+            self._create_directory = True
+        self._clouds = {}
+
+        self._cloud = args.os_cloud
+        self._cloud_data = {
+            'region_name': args.os_region_name,
+            'identity_api_version': args.os_identity_api_version,
+            'auth': {
+                'auth_url': args.os_auth_url,
+                'username': args.os_username,
+                'password': args.os_password,
+                'project_name': args.os_project_name,
+            },
+        }
+        if args.os_cacert:
+            self._cloud_data['cacert'] = args.os_cacert
+
+    def run(self):
+        self._read_clouds()
+        self._update_clouds()
+        self._write_clouds()
+
+    def _read_clouds(self):
+        try:
+            with open(self._clouds_path) as clouds_file:
+                self._clouds = yaml.load(clouds_file)
+        except IOError:
+            # The user doesn't have a clouds.yaml file.
+            print("The user clouds.yaml file didn't exist.")
+            self._clouds = {}
+
+    def _update_clouds(self):
+        self._clouds.setdefault('clouds', {})[self._cloud] = self._cloud_data
+
+    def _write_clouds(self):
+
+        if self._create_directory:
+            clouds_dir = os.path.dirname(self._clouds_path)
+            os.makedirs(clouds_dir)
+
+        with open(self._clouds_path, 'w') as clouds_file:
+            yaml.dump(self._clouds, clouds_file, default_flow_style=False)
+
+
+def main():
+    parser = argparse.ArgumentParser('Update clouds.yaml file.')
+    parser.add_argument('--file')
+    parser.add_argument('--os-cloud', required=True)
+    parser.add_argument('--os-region-name', default='RegionOne')
+    parser.add_argument('--os-identity-api-version', default='3')
+    parser.add_argument('--os-cacert')
+    parser.add_argument('--os-auth-url', required=True)
+    parser.add_argument('--os-username', required=True)
+    parser.add_argument('--os-password', required=True)
+    parser.add_argument('--os-project-name', required=True)
+
+    args = parser.parse_args()
+
+    update_clouds_yaml = UpdateCloudsYaml(args)
+    update_clouds_yaml.run()
+
+
+if __name__ == "__main__":
+    main()
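
The tool can also be run standalone against a scratch file; the required arguments follow the argparse definition above (values here are placeholders):

    python tools/update_clouds_yaml.py \
        --file /tmp/clouds.yaml \
        --os-cloud devstack \
        --os-auth-url http://127.0.0.1:5000/v3 \
        --os-username demo \
        --os-password secret \
        --os-project-name demo
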
diff --git a/tools/worlddump.py b/tools/worlddump.py
index 7acfb5e..0f1a6a1 100755
--- a/tools/worlddump.py
+++ b/tools/worlddump.py
@@ -23,6 +23,7 @@
 import os.path
 import sys
 
+import subprocess
 
 def get_options():
     parser = argparse.ArgumentParser(
@@ -46,7 +47,7 @@
     print cmd
     print "-" * len(cmd)
     print
-    print os.popen(cmd).read()
+    subprocess.Popen(cmd, shell=True).wait()
 
 
 def _header(name):
diff --git a/unstack.sh b/unstack.sh
index f0da971..10e5958 100755
--- a/unstack.sh
+++ b/unstack.sh
@@ -187,5 +187,10 @@
 fi
 
 # BUG: maybe it doesn't exist? We should isolate this further down.
-clean_lvm_volume_group $DEFAULT_VOLUME_GROUP_NAME || /bin/true
-clean_lvm_filter
+# NOTE: Cinder automatically installs the lvm2 package, independently of the
+# enabled backends. So if Cinder is enabled, we are sure lvm (lvremove,
+# /etc/lvm/lvm.conf, etc.) is here.
+if is_service_enabled cinder; then
+    clean_lvm_volume_group $DEFAULT_VOLUME_GROUP_NAME || /bin/true
+    clean_lvm_filter
+fi