Merge "Add file creation test"
diff --git a/HACKING.rst b/HACKING.rst
index a40af54..6bd24b0 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -355,7 +355,7 @@
 
 * **Should this be upstream** -- DevStack generally does not override
   default choices provided by projects and attempts to not
-  unexpectedly modify behaviour.
+  unexpectedly modify behavior.
 
 * **Context in commit messages** -- DevStack touches many different
   areas and reviewers need context around changes to make good
diff --git a/README.md b/README.md
index acc3e5a..ee7f0e7 100644
--- a/README.md
+++ b/README.md
@@ -77,361 +77,21 @@
 of your hypervisor of choice to reduce testing cycle times.  You might even save
 enough time to write one more feature before the next feature freeze...
 
-``stack.sh`` needs to have root access for a lot of tasks, but uses ``sudo``
-for all of those tasks.  However, it needs to be not-root for most of its
-work and for all of the OpenStack services.  ``stack.sh`` specifically
-does not run if started as root.
+``stack.sh`` needs to have root access for a lot of tasks, but uses
+``sudo`` for all of those tasks.  However, it needs to be not-root for
+most of its work and for all of the OpenStack services.  ``stack.sh``
+specifically does not run if started as root.
 
-This is a recent change (Oct 2013) from the previous behaviour of
-automatically creating a ``stack`` user.  Automatically creating
-user accounts is not the right response to running as root, so
-that bit is now an explicit step using ``tools/create-stack-user.sh``.
-Run that (as root!) or just check it out to see what DevStack's
-expectations are for the account it runs under.  Many people simply
-use their usual login (the default 'ubuntu' login on a UEC image
-for example).
+DevStack will not automatically create a user account to run under,
+but provides a helper script in ``tools/create-stack-user.sh``.  Run
+that (as root!) or just
+check it out to see what DevStack's expectations are for the account
+it runs under.  Many people simply use their usual login (the default
+'ubuntu' login on a UEC image for example).
 
 # Customizing
 
-You can override environment variables used in `stack.sh` by creating file
-name `local.conf` with a ``localrc`` section as shown below.  It is likely
-that you will need to do this to tweak your networking configuration should
-you need to access your cloud from a different host.
-
-    [[local|localrc]]
-    VARIABLE=value
-
-See the **Local Configuration** section below for more details.
-
-# Database Backend
-
-Multiple database backends are available. The available databases are defined
-in the lib/databases directory.
-`mysql` is the default database, choose a different one by putting the
-following in the `localrc` section:
-
-    disable_service mysql
-    enable_service postgresql
-
-`mysql` is the default database.
-
-# RPC Backend
-
-Support for a RabbitMQ RPC backend is included. Additional RPC backends may
-be available via external plugins.  Enabling or disabling RabbitMQ is handled
-via the usual service functions and ``ENABLED_SERVICES``.
-
-Example disabling RabbitMQ in ``local.conf``:
-
-    disable_service rabbit
-
-# Apache Frontend
-
-Apache web server can be enabled for wsgi services that support being deployed
-under HTTPD + mod_wsgi. By default, services that recommend running under
-HTTPD + mod_wsgi are deployed under Apache. To use an alternative deployment
-strategy (e.g. eventlet) for services that support an alternative to HTTPD +
-mod_wsgi set ``ENABLE_HTTPD_MOD_WSGI_SERVICES`` to ``False`` in your
-``local.conf``.
-
-Each service that can be run under HTTPD + mod_wsgi also has an override
-toggle available that can be set in your ``local.conf``.
-
-Keystone is run under HTTPD + mod_wsgi by default.
-
-Example (Keystone):
-
-    KEYSTONE_USE_MOD_WSGI="True"
-
-Example (Nova):
-
-    NOVA_USE_MOD_WSGI="True"
-
-Example (Swift):
-
-    SWIFT_USE_MOD_WSGI="True"
-
-# Swift
-
-Swift is disabled by default.  When enabled, it is configured with
-only one replica to avoid being IO/memory intensive on a small
-vm. When running with only one replica the account, container and
-object services will run directly in screen. The others services like
-replicator, updaters or auditor runs in background.
-
-If you would like to enable Swift you can add this to your `localrc` section:
-
-    enable_service s-proxy s-object s-container s-account
-
-If you want a minimal Swift install with only Swift and Keystone you
-can have this instead in your `localrc` section:
-
-    disable_all_services
-    enable_service key mysql s-proxy s-object s-container s-account
-
-If you only want to do some testing of a real normal swift cluster
-with multiple replicas you can do so by customizing the variable
-`SWIFT_REPLICAS` in your `localrc` section (usually to 3).
-
-# Swift S3
-
-If you are enabling `swift3` in `ENABLED_SERVICES` DevStack will
-install the swift3 middleware emulation. Swift will be configured to
-act as a S3 endpoint for Keystone so effectively replacing the
-`nova-objectstore`.
-
-Only Swift proxy server is launched in the screen session all other
-services are started in background and managed by `swift-init` tool.
-
-# Neutron
-
-Basic Setup
-
-In order to enable Neutron in a single node setup, you'll need the
-following settings in your `local.conf`:
-
-    disable_service n-net
-    enable_service q-svc
-    enable_service q-agt
-    enable_service q-dhcp
-    enable_service q-l3
-    enable_service q-meta
-    enable_service q-metering
-
-Then run `stack.sh` as normal.
-
-DevStack supports setting specific Neutron configuration flags to the
-service, ML2 plugin, DHCP and L3 configuration files:
-
-    [[post-config|/$Q_PLUGIN_CONF_FILE]]
-    [ml2]
-    mechanism_drivers=openvswitch,l2population
-
-    [[post-config|$NEUTRON_CONF]]
-    [DEFAULT]
-    quota_port=42
-
-    [[post-config|$Q_L3_CONF_FILE]]
-    [DEFAULT]
-    agent_mode=legacy
-
-    [[post-config|$Q_DHCP_CONF_FILE]]
-    [DEFAULT]
-    dnsmasq_dns_servers = 8.8.8.8,8.8.4.4
-
-The ML2 plugin can run with the OVS, LinuxBridge, or Hyper-V agents on compute
-hosts. This is a simple way to configure the ml2 plugin:
-
-    # VLAN configuration
-    ENABLE_TENANT_VLANS=True
-
-    # GRE tunnel configuration
-    ENABLE_TENANT_TUNNELS=True
-
-    # VXLAN tunnel configuration
-    Q_ML2_TENANT_NETWORK_TYPE=vxlan
-
-The above will default in DevStack to using the OVS on each compute host.
-To change this, set the `Q_AGENT` variable to the agent you want to run
-(e.g. linuxbridge).
-
-    Variable Name                    Notes
-    ----------------------------------------------------------------------------
-    Q_AGENT                          This specifies which agent to run with the
-                                     ML2 Plugin (Typically either `openvswitch`
-                                     or `linuxbridge`).
-                                     Defaults to `openvswitch`.
-    Q_ML2_PLUGIN_MECHANISM_DRIVERS   The ML2 MechanismDrivers to load. The default
-                                     is `openvswitch,linuxbridge`.
-    Q_ML2_PLUGIN_TYPE_DRIVERS        The ML2 TypeDrivers to load. Defaults to
-                                     all available TypeDrivers.
-    Q_ML2_PLUGIN_GRE_TYPE_OPTIONS    GRE TypeDriver options. Defaults to
-                                     `tunnel_id_ranges=1:1000'.
-    Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS  VXLAN TypeDriver options. Defaults to
-                                     `vni_ranges=1001:2000`
-    Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS   VLAN TypeDriver options. Defaults to none.
-
-# Heat
-
-Heat is disabled by default (see `stackrc` file). To enable it explicitly
-you'll need the following settings in your `localrc` section:
-
-    enable_service heat h-api h-api-cfn h-api-cw h-eng
-
-Heat can also run in standalone mode, and be configured to orchestrate
-on an external OpenStack cloud. To launch only Heat in standalone mode
-you'll need the following settings in your `localrc` section:
-
-    disable_all_services
-    enable_service rabbit mysql heat h-api h-api-cfn h-api-cw h-eng
-    HEAT_STANDALONE=True
-    KEYSTONE_SERVICE_HOST=...
-    KEYSTONE_AUTH_HOST=...
-
-# Tempest
-
-If tempest has been successfully configured, a basic set of smoke
-tests can be run as follows:
-
-    $ cd /opt/stack/tempest
-    $ tox -efull  tempest.scenario.test_network_basic_ops
-
-By default tempest is downloaded and the config file is generated, but the
-tempest package is not installed in the system's global site-packages (the
-package install includes installing dependences). So tempest won't run
-outside of tox. If you would like to install it add the following to your
-``localrc`` section:
-
-    INSTALL_TEMPEST=True
-
-# DevStack on Xenserver
-
-If you would like to use Xenserver as the hypervisor, please refer
-to the instructions in `./tools/xen/README.md`.
-
-# Additional Projects
-
-DevStack has a hook mechanism to call out to a dispatch script at specific
-points in the execution of `stack.sh`, `unstack.sh` and `clean.sh`.  This
-allows upper-layer projects, especially those that the lower layer projects
-have no dependency on, to be added to DevStack without modifying the core
-scripts.  Tempest is built this way as an example of how to structure the
-dispatch script, see `extras.d/80-tempest.sh`.  See `extras.d/README.md`
-for more information.
-
-# Multi-Node Setup
-
-A more interesting setup involves running multiple compute nodes, with Neutron
-networks connecting VMs on different compute nodes.
-You should run at least one "controller node", which should have a `stackrc`
-that includes at least:
-
-    disable_service n-net
-    enable_service q-svc
-    enable_service q-agt
-    enable_service q-dhcp
-    enable_service q-l3
-    enable_service q-meta
-    enable_service neutron
-
-You likely want to change your `localrc` section to run a scheduler that
-will balance VMs across hosts:
-
-    SCHEDULER=nova.scheduler.filter_scheduler.FilterScheduler
-
-You can then run many compute nodes, each of which should have a `stackrc`
-which includes the following, with the IP address of the above controller node:
-
-    ENABLED_SERVICES=n-cpu,rabbit,neutron,q-agt
-    SERVICE_HOST=[IP of controller node]
-    MYSQL_HOST=$SERVICE_HOST
-    RABBIT_HOST=$SERVICE_HOST
-    Q_HOST=$SERVICE_HOST
-    MATCHMAKER_REDIS_HOST=$SERVICE_HOST
-
-# Multi-Region Setup
-
-We want to setup two devstack (RegionOne and RegionTwo) with shared keystone
-(same users and services) and horizon.
-Keystone and Horizon will be located in RegionOne.
-Full spec is available at:
-https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat.
-
-In RegionOne:
-
-    REGION_NAME=RegionOne
-
-In RegionTwo:
-
-    disable_service horizon
-    KEYSTONE_SERVICE_HOST=<KEYSTONE_IP_ADDRESS_FROM_REGION_ONE>
-    KEYSTONE_AUTH_HOST=<KEYSTONE_IP_ADDRESS_FROM_REGION_ONE>
-    REGION_NAME=RegionTwo
-
-# Cells
-
-Cells is a new scaling option with a full spec at:
-http://wiki.openstack.org/blueprint-nova-compute-cells.
-
-To setup a cells environment add the following to your `localrc` section:
-
-    enable_service n-cell
-
-Be aware that there are some features currently missing in cells, one notable
-one being security groups.  The exercises have been patched to disable
-functionality not supported by cells.
-
-# IPv6
-
-By default, most Openstack services are bound to 0.0.0.0
-and service endpoints are registered as IPv4 addresses.
-A new variable was created to control this behavior, and to
-allow for operation over IPv6 instead of IPv4.
-
-For this, add the following to `local.conf`:
-
-    SERVICE_IP_VERSION=6
-
-When set to "6" devstack services will open listen sockets on ::
-and service endpoints will be registered using HOST_IPV6 as the
-address.  The default value for this setting is `4`.  Dual-mode
-support, for example `4+6` is not currently supported.
-
-
-# Local Configuration
-
-Historically DevStack has used ``localrc`` to contain all local configuration
-and customizations. More and more of the configuration variables available for
-DevStack are passed-through to the individual project configuration files.
-The old mechanism for this required specific code for each file and did not
-scale well.  This is handled now by a master local configuration file.
-
-# local.conf
-
-The new config file ``local.conf`` is an extended-INI format that introduces
-a new meta-section header that provides some additional information such
-as a phase name and destination config filename:
-
-    [[ <phase> | <config-file-name> ]]
-
-where ``<phase>`` is one of a set of phase names defined by ``stack.sh``
-and ``<config-file-name>`` is the configuration filename.  The filename is
-eval'ed in the ``stack.sh`` context so all environment variables are
-available and may be used.  Using the project config file variables in
-the header is strongly suggested (see the ``NOVA_CONF`` example below).
-If the path of the config file does not exist it is skipped.
-
-The defined phases are:
-
-* **local** - extracts ``localrc`` from ``local.conf`` before ``stackrc`` is sourced
-* **post-config** - runs after the layer 2 services are configured
-                    and before they are started
-* **extra** - runs after services are started and before any files
-              in ``extra.d`` are executed
-* **post-extra** - runs after files in ``extra.d`` are executed
-
-The file is processed strictly in sequence; meta-sections may be specified more
-than once but if any settings are duplicated the last to appear in the file
-will be used.
-
-    [[post-config|$NOVA_CONF]]
-    [DEFAULT]
-    use_syslog = True
-
-    [osapi_v3]
-    enabled = False
-
-A specific meta-section ``local|localrc`` is used to provide a default
-``localrc`` file (actually ``.localrc.auto``).  This allows all custom
-settings for DevStack to be contained in a single file.  If ``localrc``
-exists it will be used instead to preserve backward-compatibility.
-
-    [[local|localrc]]
-    FIXED_RANGE=10.254.1.0/24
-    ADMIN_PASSWORD=speciale
-    LOGFILE=$DEST/logs/stack.sh.log
-
-Note that ``Q_PLUGIN_CONF_FILE`` is unique in that it is assumed to *NOT*
-start with a ``/`` (slash) character.  A slash will need to be added:
-
-    [[post-config|/$Q_PLUGIN_CONF_FILE]]
+DevStack can be extensively configured via the configuration file
+`local.conf`.  It is likely that you will need to provide and modify
+this file if you want anything other than the most basic setup.  Start
+by reading the [configuration guide](doc/source/configuration.rst) for
+details of the configuration file and the many available options.
\ No newline at end of file
diff --git a/doc/source/configuration.rst b/doc/source/configuration.rst
index 6052576..fe23d6c 100644
--- a/doc/source/configuration.rst
+++ b/doc/source/configuration.rst
@@ -2,32 +2,19 @@
 Configuration
 =============
 
-DevStack has always tried to be mostly-functional with a minimal amount
-of configuration. The number of options has ballooned as projects add
-features, new projects added and more combinations need to be tested.
-Historically DevStack obtained all local configuration and
-customizations from a ``localrc`` file. The number of configuration
-variables that are simply passed-through to the individual project
-configuration files is also increasing. The old mechanism for this
-(``EXTRAS_OPTS`` and friends) required specific code for each file and
-did not scale well.
-
-In Oct 2013 a new configuration method was introduced (in `review
-46768 <https://review.openstack.org/#/c/46768/>`__) to hopefully
-simplify this process and meet the following goals:
-
--  contain all non-default local configuration in a single file
--  be backward-compatible with ``localrc`` to smooth the transition
-   process
--  allow settings in arbitrary configuration files to be changed
+.. contents::
+   :local:
+   :depth: 1
 
 local.conf
 ==========
 
-The new configuration file is ``local.conf`` and resides in the root
-DevStack directory like the old ``localrc`` file. It is a modified INI
-format file that introduces a meta-section header to carry additional
-information regarding the configuration files to be changed.
+DevStack configuration is modified via the file ``local.conf``.  It is
+a modified INI format file that introduces a meta-section header to
+carry additional information regarding the configuration files to be
+changed.
+
+A sample configuration file is provided in ``devstack/samples``.
 
 The new header is similar to a normal INI section header but with double
 brackets (``[[ ... ]]``) and two internal fields separated by a pipe
@@ -142,36 +129,185 @@
 Setting it here also makes it available for ``openrc`` to set ``OS_AUTH_URL``.
 ``HOST_IPV6`` is not set by default.
 
-Common Configuration Variables
-==============================
+Historical Notes
+================
+
+Historically DevStack obtained all local configuration and
+customizations from a ``localrc`` file.  In Oct 2013 the
+``local.conf`` configuration method was introduced (in `review 46768
+<https://review.openstack.org/#/c/46768/>`__) to simplify this
+process.
+
+Configuration Notes
+===================
+
+.. contents::
+   :local:
 
 Installation Directory
 ----------------------
 
-    | *Default: ``DEST=/opt/stack``*
-    |  The DevStack install directory is set by the ``DEST`` variable.
-    |  By setting it early in the ``localrc`` section you can reference it
-       in later variables. It can be useful to set it even though it is not
-       changed from the default value.
-    |
+The DevStack install directory is set by the ``DEST`` variable.  By
+default it is ``/opt/stack``.
+
+By setting it early in the ``localrc`` section you can reference it in
+later variables.  It can be useful to set it even though it is not
+changed from the default value.
 
     ::
 
         DEST=/opt/stack
 
+Logging
+-------
+
+Enable Logging
+~~~~~~~~~~~~~~
+
+By default ``stack.sh`` output is only written to the console where it
+runs. It can be sent to a file in addition to the console by setting
+``LOGFILE`` to the fully-qualified name of the destination log file. A
+timestamp will be appended to the given filename for each run of
+``stack.sh``.
+
+    ::
+
+        LOGFILE=$DEST/logs/stack.sh.log
+
+Old log files are cleaned automatically if ``LOGDAYS`` is set to the
+number of days of old log files to keep.
+
+    ::
+
+        LOGDAYS=1
+
+Some of the project logs (Nova, Cinder, etc.) will be colorized by
+default (if ``SYSLOG`` is not set below); this can be turned off by
+setting ``LOG_COLOR`` to ``False``.
+
+    ::
+
+        LOG_COLOR=False
+
+Logging the Service Output
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+DevStack will log the ``stdout`` output of the services it starts.
+When using ``screen`` this logs the output in the screen windows to a
+file.  Without ``screen`` this simply redirects stdout of the service
+process to a file in ``LOGDIR``.
+
+    ::
+
+        LOGDIR=$DEST/logs
+
+*Note the use of ``DEST`` to locate the main install directory; this
+is why we suggest setting it in ``local.conf``.*
+
+Enabling Syslog
+~~~~~~~~~~~~~~~
+
+Logging all services to a single syslog can be convenient. Enable
+syslogging by setting ``SYSLOG`` to ``True``. If the destination log
+host is not localhost ``SYSLOG_HOST`` and ``SYSLOG_PORT`` can be used
+to direct the message stream to the log host.
+
+    ::
+
+        SYSLOG=True
+        SYSLOG_HOST=$HOST_IP
+        SYSLOG_PORT=516
+
+
+Example Logging Configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+For example, for a non-interactive install you probably want to save
+the output to a file, keep the service logs and disable color in the
+stored files.
+
+   ::
+
+       [[local|localrc]]
+       DEST=/opt/stack/
+       LOGDIR=$DEST/logs
+       LOGFILE=$LOGDIR/stack.sh.log
+       LOG_COLOR=False
+
+Database Backend
+----------------
+
+Multiple database backends are available; they are defined in the
+``lib/databases`` directory.  ``mysql`` is the default database; to
+choose a different one, put the following in the ``localrc`` section:
+
+   ::
+
+      disable_service mysql
+      enable_service postgresql
+
+RPC Backend
+-----------
+
+Support for a RabbitMQ RPC backend is included. Additional RPC
+backends may be available via external plugins.  Enabling or disabling
+RabbitMQ is handled via the usual service functions and
+``ENABLED_SERVICES``.
+
+Example disabling RabbitMQ in ``local.conf``:
+
+::
+
+    disable_service rabbit
+
+
+Apache Frontend
+---------------
+
+The Apache web server can be enabled for wsgi services that support
+being deployed under HTTPD + mod_wsgi. By default, services that
+recommend running under HTTPD + mod_wsgi are deployed under Apache. To
+use an alternative deployment strategy (e.g. eventlet) for services
+that support an alternative to HTTPD + mod_wsgi, set
+``ENABLE_HTTPD_MOD_WSGI_SERVICES`` to ``False`` in your
+``local.conf``.
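+
+For example, to opt out of the Apache deployment for all such
+services, add this to your ``local.conf``:
+
+::
+
+    ENABLE_HTTPD_MOD_WSGI_SERVICES=False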
+
+Each service that can be run under HTTPD + mod_wsgi also has an
+override toggle available that can be set in your ``local.conf``.
+
+Keystone is run under Apache with ``mod_wsgi`` by default.
+
+Example (Keystone):
+
+::
+
+    KEYSTONE_USE_MOD_WSGI="True"
+
+Example (Nova):
+
+::
+
+    NOVA_USE_MOD_WSGI="True"
+
+Example (Swift):
+
+::
+
+    SWIFT_USE_MOD_WSGI="True"
+
 Libraries from Git
 ------------------
 
-   | *Default: ``LIBS_FROM_GIT=""``*
-
-   | By default devstack installs OpenStack server components from
-     git, however it installs client libraries from released versions
-     on pypi. This is appropriate if you are working on server
-     development, but if you want to see how an unreleased version of
-     the client affects the system you can have devstack install it
-     from upstream, or from local git trees.
-   | Multiple libraries can be specified as a comma separated list.
-   |
+By default DevStack installs OpenStack server components from git;
+however, it installs client libraries from released versions on PyPI.
+This is appropriate if you are working on server development, but if
+you want to see how an unreleased version of a client affects the
+system you can have DevStack install it from upstream, or from local
+git trees, by specifying it in ``LIBS_FROM_GIT``.  Multiple libraries
+can be specified as a comma-separated list.
 
    ::
 
@@ -180,99 +316,37 @@
 Virtual Environments
 --------------------
 
-  | *Default: ``USE_VENV=False``*
-  |   Enable the use of Python virtual environments by setting ``USE_VENV``
-      to ``True``.  This will enable the creation of venvs for each project
-      that is defined in the ``PROJECT_VENV`` array.
+Enable the use of Python virtual environments by setting ``USE_VENV``
+to ``True``.  This will enable the creation of venvs for each project
+that is defined in the ``PROJECT_VENV`` array.
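+
+For example:
+
+  ::
+
+    USE_VENV=True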
 
-  | *Default: ``PROJECT_VENV['<project>']='<project-dir>.venv'*
-  |   Each entry in the ``PROJECT_VENV`` array contains the directory name
-      of a venv to be used for the project.  The array index is the project
-      name.  Multiple projects can use the same venv if desired.
+Each entry in the ``PROJECT_VENV`` array contains the directory name
+of a venv to be used for the project.  The array index is the project
+name.  Multiple projects can use the same venv if desired.
 
   ::
 
     PROJECT_VENV["glance"]=${GLANCE_DIR}.venv
 
-  | *Default: ``ADDITIONAL_VENV_PACKAGES=""``*
-  |   A comma-separated list of additional packages to be installed into each
-      venv.  Often projects will not have certain packages listed in its
-      ``requirements.txt`` file because they are 'optional' requirements,
-      i.e. only needed for certain configurations.  By default, the enabled
-      databases will have their Python bindings added when they are enabled.
+``ADDITIONAL_VENV_PACKAGES`` is a comma-separated list of additional
+packages to be installed into each venv.  Often projects will not have
+certain packages listed in their ``requirements.txt`` files because
+they are 'optional' requirements, i.e. only needed for certain
+configurations.  By default, the enabled databases will have their
+Python bindings added when they are enabled.
 
-Enable Logging
---------------
+  ::
 
-    | *Defaults: ``LOGFILE="" LOGDAYS=7 LOG_COLOR=True``*
-    |  By default ``stack.sh`` output is only written to the console
-       where it runs. It can be sent to a file in addition to the console
-       by setting ``LOGFILE`` to the fully-qualified name of the
-       destination log file. A timestamp will be appended to the given
-       filename for each run of ``stack.sh``.
-    |
+     ADDITIONAL_VENV_PACKAGES="python-foo, python-bar"
 
-    ::
-
-        LOGFILE=$DEST/logs/stack.sh.log
-
-    Old log files are cleaned automatically if ``LOGDAYS`` is set to the
-    number of days of old log files to keep.
-
-    ::
-
-        LOGDAYS=1
-
-    The some of the project logs (Nova, Cinder, etc) will be colorized
-    by default (if ``SYSLOG`` is not set below); this can be turned off
-    by setting ``LOG_COLOR`` False.
-
-    ::
-
-        LOG_COLOR=False
-
-Logging the Service Output
---------------------------
-
-    | *Default: ``LOGDIR=""``*
-    |  DevStack will log the stdout output of the services it starts.
-       When using ``screen`` this logs the output in the screen windows
-       to a file.  Without ``screen`` this simply redirects stdout of
-       the service process to a file in ``LOGDIR``.
-    |
-
-    ::
-
-        LOGDIR=$DEST/logs
-
-    *Note the use of ``DEST`` to locate the main install directory; this
-    is why we suggest setting it in ``local.conf``.*
-
-Enabling Syslog
----------------
-
-    | *Default: ``SYSLOG=False SYSLOG_HOST=$HOST_IP SYSLOG_PORT=516``*
-    |  Logging all services to a single syslog can be convenient. Enable
-       syslogging by setting ``SYSLOG`` to ``True``. If the destination log
-       host is not localhost ``SYSLOG_HOST`` and ``SYSLOG_PORT`` can be
-       used to direct the message stream to the log host.
-    |
-
-    ::
-
-        SYSLOG=True
-        SYSLOG_HOST=$HOST_IP
-        SYSLOG_PORT=516
 
 A clean install every time
 --------------------------
 
-    | *Default: ``RECLONE=""``*
-    |  By default ``stack.sh`` only clones the project repos if they do
-       not exist in ``$DEST``. ``stack.sh`` will freshen each repo on each
-       run if ``RECLONE`` is set to ``yes``. This avoids having to manually
-       remove repos in order to get the current branch from ``$GIT_BASE``.
-    |
+By default ``stack.sh`` only clones the project repos if they do not
+exist in ``$DEST``. ``stack.sh`` will freshen each repo on each run if
+``RECLONE`` is set to ``yes``. This avoids having to manually remove
+repos in order to get the current branch from ``$GIT_BASE``.
 
     ::
 
@@ -281,139 +355,50 @@
 Upgrade packages installed by pip
 ---------------------------------
 
-    | *Default: ``PIP_UPGRADE=""``*
-    |  By default ``stack.sh`` only installs Python packages if no version
-       is currently installed or the current version does not match a specified
-       requirement. If ``PIP_UPGRADE`` is set to ``True`` then existing required
-       Python packages will be upgraded to the most recent version that
-       matches requirements.
-    |
+By default ``stack.sh`` only installs Python packages if no version is
+currently installed or the current version does not match a specified
+requirement. If ``PIP_UPGRADE`` is set to ``True`` then existing
+required Python packages will be upgraded to the most recent version
+that matches requirements.
 
     ::
 
         PIP_UPGRADE=True
 
-Swift
------
-
-    | Default: SWIFT_HASH=""
-    | SWIFT_REPLICAS=1
-    | SWIFT_DATA_DIR=$DEST/data/swift
-
-    | Swift is now used as the back-end for the S3-like object store.
-      When enabled Nova's objectstore (n-obj in ENABLED_SERVICES) is
-      automatically disabled. Enable Swift by adding it services to
-      ENABLED_SERVICES: enable_service s-proxy s-object s-container
-      s-account
-
-    Setting Swift's hash value is required and you will be prompted for
-    it if Swift is enabled so just set it to something already:
-
-    ::
-
-        SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5
-
-    For development purposes the default number of replicas is set to
-    ``1`` to reduce the overhead required. To better simulate a
-    production deployment set this to ``3`` or more.
-
-    ::
-
-        SWIFT_REPLICAS=3
-
-    The data for Swift is stored in the source tree by default (in
-    ``$DEST/swift/data``) and can be moved by setting
-    ``SWIFT_DATA_DIR``. The specified directory will be created if it
-    does not exist.
-
-    ::
-
-        SWIFT_DATA_DIR=$DEST/data/swift
-
-    *Note: Previously just enabling ``swift`` was sufficient to start
-    the Swift services. That does not provide proper service
-    granularity, particularly in multi-host configurations, and is
-    considered deprecated. Some service combination tests now check for
-    specific Swift services and the old blanket acceptance will longer
-    work correctly.*
 
 Service Catalog Backend
 -----------------------
 
-    | *Default: ``KEYSTONE_CATALOG_BACKEND=sql``*
-    |  DevStack uses Keystone's ``sql`` service catalog backend. An
-       alternate ``template`` backend is also available. However, it does
-       not support the ``service-*`` and ``endpoint-*`` commands of the
-       ``keystone`` CLI. To do so requires the ``sql`` backend be enabled:
-    |
+By default DevStack uses Keystone's ``sql`` service catalog backend.
+An alternate ``template`` backend is also available; however, it does
+not support the ``service-*`` and ``endpoint-*`` commands of the
+``keystone`` CLI, which require the ``sql`` backend.  The backend is
+selected with ``KEYSTONE_CATALOG_BACKEND``:
 
     ::
 
         KEYSTONE_CATALOG_BACKEND=template
 
-    DevStack's default configuration in ``sql`` mode is set in
-    ``files/keystone_data.sh``
+DevStack's default configuration in ``sql`` mode is set in
+``files/keystone_data.sh``.
 
-Cinder
-------
-
-    | Default:
-    | VOLUME_GROUP="stack-volumes" VOLUME_NAME_PREFIX="volume-" VOLUME_BACKING_FILE_SIZE=10250M
-    |  The logical volume group used to hold the Cinder-managed volumes
-       is set by ``VOLUME_GROUP``, the logical volume name prefix is set
-       with ``VOLUME_NAME_PREFIX`` and the size of the volume backing file
-       is set with ``VOLUME_BACKING_FILE_SIZE``.
-    |
-
-    ::
-
-        VOLUME_GROUP="stack-volumes"
-        VOLUME_NAME_PREFIX="volume-"
-        VOLUME_BACKING_FILE_SIZE=10250M
-
-Multi-host DevStack
--------------------
-
-    | *Default: ``MULTI_HOST=False``*
-    |  Running DevStack with multiple hosts requires a custom
-       ``local.conf`` section for each host. The master is the same as a
-       single host installation with ``MULTI_HOST=True``. The slaves have
-       fewer services enabled and a couple of host variables pointing to
-       the master.
-    |  **Master**
-
-    ::
-
-        MULTI_HOST=True
-
-    **Slave**
-
-    ::
-
-        MYSQL_HOST=w.x.y.z
-        RABBIT_HOST=w.x.y.z
-        GLANCE_HOSTPORT=w.x.y.z:9292
-        ENABLED_SERVICES=n-vol,n-cpu,n-net,n-api
 
 IP Version
 ----------
 
-    | Default: ``IP_VERSION=4+6``
-    | This setting can be used to configure DevStack to create either an IPv4,
-      IPv6, or dual stack tenant data network by setting ``IP_VERSION`` to
-      either ``IP_VERSION=4``, ``IP_VERSION=6``, or ``IP_VERSION=4+6``
-      respectively. This functionality requires that the Neutron networking
-      service is enabled by setting the following options:
-    |
+``IP_VERSION`` can be used to configure DevStack to create either an
+IPv4, IPv6, or dual-stack tenant data network by setting it to
+``IP_VERSION=4``, ``IP_VERSION=6``, or ``IP_VERSION=4+6``
+respectively.  This functionality requires that the Neutron networking
+service is enabled by setting the following options:
 
     ::
 
         disable_service n-net
         enable_service q-svc q-agt q-dhcp q-l3
 
-    | The following optional variables can be used to alter the default IPv6
-      behavior:
-    |
+The following optional variables can be used to alter the default IPv6
+behavior:
 
     ::
 
@@ -422,52 +407,190 @@
         FIXED_RANGE_V6=fd$IPV6_GLOBAL_ID::/64
         IPV6_PRIVATE_NETWORK_GATEWAY=fd$IPV6_GLOBAL_ID::1
 
-    | *Note: ``FIXED_RANGE_V6`` and ``IPV6_PRIVATE_NETWORK_GATEWAY``
-      can be configured with any valid IPv6 prefix. The default values make
-      use of an auto-generated ``IPV6_GLOBAL_ID`` to comply with RFC 4193.*
-    |
+*Note*: ``FIXED_RANGE_V6`` and ``IPV6_PRIVATE_NETWORK_GATEWAY`` can be
+configured with any valid IPv6 prefix. The default values make use of
+an auto-generated ``IPV6_GLOBAL_ID`` to comply with RFC 4193.
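
As an illustration (a sketch only -- DevStack generates the ID internally, and this is not its exact code), an RFC 4193 Global ID is 40 random bits, which could be produced like so:

```shell
# Sketch: build a random 40-bit RFC 4193 Global ID and the resulting
# ULA prefix. Illustrative only -- not DevStack's actual implementation.
IPV6_GLOBAL_ID=$(od -An -tx1 -N5 /dev/urandom | tr -d ' \n' | \
    sed 's/^\(..\)\(....\)\(....\)$/\1:\2:\3/')
FIXED_RANGE_V6=fd$IPV6_GLOBAL_ID::/64
echo $FIXED_RANGE_V6
```

The `fd` prefix marks the locally-assigned half of the `fc00::/7` unique-local range.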
 
-    | Default: ``SERVICE_IP_VERSION=4``
-    | This setting can be used to configure DevStack to enable services to
-      operate over either IPv4 or IPv6, by setting ``SERVICE_IP_VERSION`` to
-      either ``SERVICE_IP_VERSION=4`` or ``SERVICE_IP_VERSION=6`` respectively.
-      When set to ``4`` devstack services will open listen sockets on 0.0.0.0
-      and service endpoints will be registered using ``HOST_IP`` as the address.
-      When set to ``6`` devstack services will open listen sockets on :: and
-      service endpoints will be registered using ``HOST_IPV6`` as the address.
-      The default value for this setting is ``4``.  Dual-mode support, for
-      example ``4+6`` is not currently supported.
-    | The following optional variable can be used to alter the default IPv6
-      address used:
-    |
+Service Version
+~~~~~~~~~~~~~~~
+
+DevStack can enable service operation over either IPv4 or IPv6 by
+setting ``SERVICE_IP_VERSION`` to either ``SERVICE_IP_VERSION=4`` or
+``SERVICE_IP_VERSION=6`` respectively.
+
+When set to ``4``, DevStack services will open listen sockets on
+``0.0.0.0`` and service endpoints will be registered using ``HOST_IP``
+as the address.
+
+When set to ``6``, DevStack services will open listen sockets on ``::``
+and service endpoints will be registered using ``HOST_IPV6`` as the
+address.
+
+The default value for this setting is ``4``.  Dual-mode support, for
+example ``4+6``, is not currently supported.  ``HOST_IPV6`` can
+optionally be used to alter the default IPv6 address:
 
     ::
 
         HOST_IPV6=${some_local_ipv6_address}
 
-Examples
-========
+Multi-node setup
+~~~~~~~~~~~~~~~~
 
--  Eliminate a Cinder pass-through (``CINDER_PERIODIC_INTERVAL``):
+See the :doc:`multi-node lab guide<guides/multinode-lab>`
 
-   ::
+Projects
+--------
 
-       [[post-config|$CINDER_CONF]]
-       [DEFAULT]
-       periodic_interval = 60
+Neutron
+~~~~~~~
 
--  Sample ``local.conf`` with screen logging enabled:
+See the :doc:`neutron configuration guide<guides/neutron>` for
+details on configuring Neutron.
 
-   ::
 
-       [[local|localrc]]
-       FIXED_RANGE=10.254.1.0/24
-       NETWORK_GATEWAY=10.254.1.1
-       LOGDAYS=1
-       LOGDIR=$DEST/logs
-       LOGFILE=$LOGDIR/stack.sh.log
-       ADMIN_PASSWORD=quiet
-       DATABASE_PASSWORD=$ADMIN_PASSWORD
-       RABBIT_PASSWORD=$ADMIN_PASSWORD
-       SERVICE_PASSWORD=$ADMIN_PASSWORD
-       SERVICE_TOKEN=a682f596-76f3-11e3-b3b2-e716f9080d50
+Swift
+~~~~~
+
+Swift is disabled by default.  When enabled, it is configured with
+only one replica to avoid being IO/memory intensive on a small
+VM. When running with only one replica the account, container and
+object services will run directly in screen. The other services,
+such as the replicator, updaters and auditors, run in the background.
+
+If you would like to enable Swift you can add this to your `localrc`
+section:
+
+::
+
+    enable_service s-proxy s-object s-container s-account
+
+If you want a minimal Swift install with only Swift and Keystone you
+can have this instead in your `localrc` section:
+
+::
+
+    disable_all_services
+    enable_service key mysql s-proxy s-object s-container s-account
+
+If you want to test a more realistic Swift cluster with multiple
+replicas, you can do so by customizing the `SWIFT_REPLICAS` variable
+in your `localrc` section (usually setting it to 3).
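
For instance, a minimal Keystone-plus-Swift deployment with three replicas might combine the options above (an illustrative sketch; `SWIFT_REPLICAS` is the only addition over the minimal example):

```shell
# Illustrative localrc fragment: minimal Swift with three replicas
disable_all_services
enable_service key mysql s-proxy s-object s-container s-account
SWIFT_REPLICAS=3
```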
+
+Swift S3
+++++++++
+
+If you are enabling `swift3` in `ENABLED_SERVICES` DevStack will
+install the swift3 middleware emulation. Swift will be configured to
+act as an S3 endpoint for Keystone, effectively replacing
+`nova-objectstore`.
+
+Only the Swift proxy server is launched in the screen session; all
+other services are started in the background and managed by the
+`swift-init` tool.
+
+Heat
+~~~~
+
+Heat is disabled by default (see the `stackrc` file). To enable it
+explicitly you'll need the following settings in your `localrc`
+section:
+
+::
+
+    enable_service heat h-api h-api-cfn h-api-cw h-eng
+
+Heat can also run in standalone mode, and be configured to orchestrate
+on an external OpenStack cloud. To launch only Heat in standalone mode
+you'll need the following settings in your `localrc` section:
+
+::
+
+    disable_all_services
+    enable_service rabbit mysql heat h-api h-api-cfn h-api-cw h-eng
+    HEAT_STANDALONE=True
+    KEYSTONE_SERVICE_HOST=...
+    KEYSTONE_AUTH_HOST=...
+
+Tempest
+~~~~~~~
+
+If tempest has been successfully configured, a basic set of smoke
+tests can be run as follows:
+
+::
+
+    $ cd /opt/stack/tempest
+    $ tox -efull tempest.scenario.test_network_basic_ops
+
+By default tempest is downloaded and the config file is generated, but the
+tempest package is not installed in the system's global site-packages
+(installing the package also installs its dependencies), so tempest won't
+run outside of tox. If you would like to install it add the following to
+your ``localrc`` section:
+
+::
+
+    INSTALL_TEMPEST=True
+
+
+XenServer
+~~~~~~~~~
+
+If you would like to use XenServer as the hypervisor, please refer to
+the instructions in `./tools/xen/README.md`.
+
+Cells
+~~~~~
+
+`Cells <http://wiki.openstack.org/blueprint-nova-compute-cells>`__ is
+an alternative scaling option.  To set up a cells environment add the
+following to your `localrc` section:
+
+::
+
+    enable_service n-cell
+
+Be aware that there are some features currently missing in cells, one
+notable example being security groups.  The exercises have been patched
+to disable functionality not supported by cells.
+
+Cinder
+~~~~~~
+
+The logical volume group used to hold the Cinder-managed volumes is
+set by ``VOLUME_GROUP``, the logical volume name prefix is set with
+``VOLUME_NAME_PREFIX`` and the size of the volume backing file is set
+with ``VOLUME_BACKING_FILE_SIZE``.
+
+::
+
+    VOLUME_GROUP="stack-volumes"
+    VOLUME_NAME_PREFIX="volume-"
+    VOLUME_BACKING_FILE_SIZE=10250M
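
Conceptually (a sketch under the assumption that DevStack's real commands differ), a sparse backing file of ``VOLUME_BACKING_FILE_SIZE`` is created and then looped back as the physical volume behind ``VOLUME_GROUP``:

```shell
# Sketch only: how a backing file of VOLUME_BACKING_FILE_SIZE could back
# the stack-volumes volume group. Not DevStack's actual code.
VOLUME_BACKING_FILE_SIZE=10250M
backing_file=$(mktemp /tmp/stack-volumes-backing.XXXXXX)
# Sparse allocation: no disk is consumed until volumes are written.
truncate -s $VOLUME_BACKING_FILE_SIZE $backing_file
# Root-only steps, shown as comments for illustration:
#   sudo losetup -f --show $backing_file    # attach as /dev/loopN
#   sudo vgcreate stack-volumes /dev/loopN  # create the VG on the loop device
rm -f $backing_file
```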
+
+
+Keystone
+~~~~~~~~
+
+Multi-Region Setup
+++++++++++++++++++
+
+We want to set up two DevStack instances (RegionOne and RegionTwo) with a
+shared Keystone (same users and services) and Horizon.  Keystone and
+Horizon will be located in RegionOne.  The full spec is available at
+`<https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat>`__.
+
+In RegionOne:
+
+::
+
+    REGION_NAME=RegionOne
+
+In RegionTwo:
+
+::
+
+    disable_service horizon
+    KEYSTONE_SERVICE_HOST=<KEYSTONE_IP_ADDRESS_FROM_REGION_ONE>
+    KEYSTONE_AUTH_HOST=<KEYSTONE_IP_ADDRESS_FROM_REGION_ONE>
+    REGION_NAME=RegionTwo
diff --git a/doc/source/guides/multinode-lab.rst b/doc/source/guides/multinode-lab.rst
index 27d71f1..1530a84 100644
--- a/doc/source/guides/multinode-lab.rst
+++ b/doc/source/guides/multinode-lab.rst
@@ -175,15 +175,20 @@
     SERVICE_TOKEN=xyzpdqlazydog
     DATABASE_TYPE=mysql
     SERVICE_HOST=192.168.42.11
-    MYSQL_HOST=192.168.42.11
-    RABBIT_HOST=192.168.42.11
-    GLANCE_HOSTPORT=192.168.42.11:9292
-    ENABLED_SERVICES=n-cpu,n-net,n-api,c-vol
+    MYSQL_HOST=$SERVICE_HOST
+    RABBIT_HOST=$SERVICE_HOST
+    GLANCE_HOSTPORT=$SERVICE_HOST:9292
+    ENABLED_SERVICES=n-cpu,n-net,n-api-meta,c-vol
     NOVA_VNC_ENABLED=True
-    NOVNCPROXY_URL="http://192.168.42.11:6080/vnc_auto.html"
+    NOVNCPROXY_URL="http://$SERVICE_HOST:6080/vnc_auto.html"
     VNCSERVER_LISTEN=$HOST_IP
     VNCSERVER_PROXYCLIENT_ADDRESS=$VNCSERVER_LISTEN
 
+**Note:** the ``n-api-meta`` service is a version of the API server
+that only serves the metadata service. It's needed because the
+compute nodes created here won't have a routing path to the metadata
+service on the controller.
+
 Fire up OpenStack:
 
 ::
@@ -263,7 +268,7 @@
 -----
 
 Swift, OpenStack Object Storage, requires a significant amount of resources
-and is disabled by default in DevStack. The support in DevStack is geared 
+and is disabled by default in DevStack. The support in DevStack is geared
 toward a minimal installation but can be used for testing. To implement a
 true multi-node test of swift, additional steps will be required. Enabling it is as
 simple as enabling the ``swift`` service in ``local.conf``:
diff --git a/doc/source/plugins.rst b/doc/source/plugins.rst
index 1b6f5e3..803dd08 100644
--- a/doc/source/plugins.rst
+++ b/doc/source/plugins.rst
@@ -15,7 +15,7 @@
 Plugin Interface
 ================
 
-DevStack supports a standard mechansim for including plugins from
+DevStack supports a standard mechanism for including plugins from
 external repositories. The plugin interface assumes the following:
 
 An external git repository that includes a ``devstack/`` top level
@@ -49,7 +49,7 @@
   [[local|localrc]]
   enable_plugin <NAME> <GITURL> [GITREF]
 
-- ``name`` - an arbitrary name. (ex: glustfs, docker, zaqar, congress)
+- ``name`` - an arbitrary name. (ex: glusterfs, docker, zaqar, congress)
 - ``giturl`` - a valid git url that can be cloned
 - ``gitref`` - an optional git ref (branch / ref / tag) that will be
   cloned. Defaults to master.
@@ -209,7 +209,7 @@
 
 Ideally a plugin will be included within the ``devstack`` directory of
 the project they are being tested. For example, the stackforge/ec2-api
-project has its pluggin support in its own tree.
+project has its plugin support in its own tree.
 
 However, some times a DevStack plugin might be used solely to
 configure a backend service that will be used by the rest of
diff --git a/exercises/aggregates.sh b/exercises/aggregates.sh
index 01d548d..808ef76 100755
--- a/exercises/aggregates.sh
+++ b/exercises/aggregates.sh
@@ -31,18 +31,13 @@
 EXERCISE_DIR=$(cd $(dirname "$0") && pwd)
 TOP_DIR=$(cd $EXERCISE_DIR/..; pwd)
 
-# Import common functions
-source $TOP_DIR/functions
-
-# Import configuration
-source $TOP_DIR/openrc
+# Test as the admin user
+# note this imports stackrc/functions, etc
+. $TOP_DIR/openrc admin admin
 
 # Import exercise configuration
 source $TOP_DIR/exerciserc
 
-# Test as the admin user
-. $TOP_DIR/openrc admin admin
-
 # If nova api is not enabled we exit with exitcode 55 so that
 # the exercise is skipped
 is_service_enabled n-api || exit 55
diff --git a/files/apache-horizon.template b/files/apache-horizon.template
index 6883898..bfd7567 100644
--- a/files/apache-horizon.template
+++ b/files/apache-horizon.template
@@ -1,5 +1,5 @@
 <VirtualHost *:80>
-    WSGIScriptAlias / %HORIZON_DIR%/openstack_dashboard/wsgi/django.wsgi
+    WSGIScriptAlias %WEBROOT% %HORIZON_DIR%/openstack_dashboard/wsgi/django.wsgi
     WSGIDaemonProcess horizon user=%USER% group=%GROUP% processes=3 threads=10 home=%HORIZON_DIR% display-name=%{GROUP}
     WSGIApplicationGroup %{GLOBAL}
 
@@ -8,7 +8,10 @@
     WSGIProcessGroup horizon
 
     DocumentRoot %HORIZON_DIR%/.blackhole/
-    Alias /media %HORIZON_DIR%/openstack_dashboard/static
+    Alias %WEBROOT%/media %HORIZON_DIR%/openstack_dashboard/static
+    Alias %WEBROOT%/static %HORIZON_DIR%/static
+
+    RedirectMatch "^/$" "%WEBROOT%/"
 
     <Directory />
         Options FollowSymLinks
diff --git a/files/debs/n-cpu b/files/debs/n-cpu
index 5d5052a..ffc947a 100644
--- a/files/debs/n-cpu
+++ b/files/debs/n-cpu
@@ -5,3 +5,4 @@
 sysfsutils
 sg3-utils
 python-guestfs # NOPRIME
+cryptsetup
diff --git a/files/rpms-suse/n-cpu b/files/rpms-suse/n-cpu
index 7040b84..b3a468d 100644
--- a/files/rpms-suse/n-cpu
+++ b/files/rpms-suse/n-cpu
@@ -4,3 +4,4 @@
 open-iscsi
 sysfsutils
 sg3_utils
+cryptsetup
diff --git a/files/rpms/n-cpu b/files/rpms/n-cpu
index c1a8e8f..81278b3 100644
--- a/files/rpms/n-cpu
+++ b/files/rpms/n-cpu
@@ -4,4 +4,4 @@
 genisoimage
 sysfsutils
 sg3_utils
-
+cryptsetup
diff --git a/functions b/functions
index 1668e16..4001e9d 100644
--- a/functions
+++ b/functions
@@ -10,6 +10,10 @@
 # - ``GLANCE_HOSTPORT``
 #
 
+# ensure we don't re-source this in the same environment
+[[ -z "$_DEVSTACK_FUNCTIONS" ]] || return 0
+declare -r _DEVSTACK_FUNCTIONS=1
+
 # Include the common functions
 FUNC_DIR=$(cd $(dirname "${BASH_SOURCE:-$0}") && pwd)
 source ${FUNC_DIR}/functions-common
@@ -219,6 +223,23 @@
         return
     fi
 
+    if [[ "$image_url" =~ '.hds' ]]; then
+        image_name="${image_fname%.hds}"
+        vm_mode=${image_name##*-}
+        if [[ $vm_mode != 'exe' && $vm_mode != 'hvm' ]]; then
+            die $LINENO "Unknown vm_mode=${vm_mode} for Virtuozzo image"
+        fi
+
+        openstack \
+            --os-token $token \
+            --os-url $GLANCE_SERVICE_PROTOCOL://$GLANCE_HOSTPORT \
+            image create \
+            "$image_name" --public \
+            --container-format=bare --disk-format=ploop \
+            --property vm_mode=$vm_mode < "${image}"
+        return
+    fi
+
     local kernel=""
     local ramdisk=""
     local disk_format=""
diff --git a/functions-common b/functions-common
index be1ebb9..f6a5253 100644
--- a/functions-common
+++ b/functions-common
@@ -28,7 +28,6 @@
 # - ``REQUIREMENTS_DIR``
 # - ``STACK_USER``
 # - ``TRACK_DEPENDS``
-# - ``UNDO_REQUIREMENTS``
 # - ``http_proxy``, ``https_proxy``, ``no_proxy``
 #
 
@@ -36,6 +35,10 @@
 XTRACE=$(set +o | grep xtrace)
 set +o xtrace
 
+# ensure we don't re-source this in the same environment
+[[ -z "$_DEVSTACK_FUNCTIONS_COMMON" ]] || return 0
+declare -r _DEVSTACK_FUNCTIONS_COMMON=1
+
 # Global Config Variables
 declare -A GITREPO
 declare -A GITBRANCH
@@ -686,9 +689,10 @@
 # Gets or creates a domain
 # Usage: get_or_create_domain <name> <description>
 function get_or_create_domain {
+    local domain_id
     local os_url="$KEYSTONE_SERVICE_URI_V3"
     # Gets domain id
-    local domain_id=$(
+    domain_id=$(
         # Gets domain id
         openstack --os-token=$OS_TOKEN --os-url=$os_url \
             --os-identity-api-version=3 domain show $1 \
@@ -707,8 +711,9 @@
 function get_or_create_group {
     local desc="${3:-}"
     local os_url="$KEYSTONE_SERVICE_URI_V3"
+    local group_id
     # Gets group id
-    local group_id=$(
+    group_id=$(
         # Creates new group with --or-show
         openstack --os-token=$OS_TOKEN --os-url=$os_url \
             --os-identity-api-version=3 group create $1 \
@@ -721,13 +726,14 @@
 # Gets or creates user
 # Usage: get_or_create_user <username> <password> <domain> [<email>]
 function get_or_create_user {
+    local user_id
     if [[ ! -z "$4" ]]; then
         local email="--email=$4"
     else
         local email=""
     fi
     # Gets user id
-    local user_id=$(
+    user_id=$(
         # Creates new user with --or-show
         openstack user create \
             $1 \
@@ -745,7 +751,8 @@
 # Gets or creates project
 # Usage: get_or_create_project <name> <domain>
 function get_or_create_project {
-    local project_id=$(
+    local project_id
+    project_id=$(
         # Creates new project with --or-show
         openstack --os-url=$KEYSTONE_SERVICE_URI_V3 \
             --os-identity-api-version=3 \
@@ -759,7 +766,8 @@
 # Gets or creates role
 # Usage: get_or_create_role <name>
 function get_or_create_role {
-    local role_id=$(
+    local role_id
+    role_id=$(
         # Creates role with --or-show
         openstack role create $1 \
             --os-url=$KEYSTONE_SERVICE_URI_V3 \
@@ -772,8 +780,9 @@
 # Gets or adds user role to project
 # Usage: get_or_add_user_project_role <role> <user> <project>
 function get_or_add_user_project_role {
+    local user_role_id
     # Gets user role id
-    local user_role_id=$(openstack role list \
+    user_role_id=$(openstack role list \
         --user $2 \
         --os-url=$KEYSTONE_SERVICE_URI_V3 \
         --os-identity-api-version=3 \
@@ -797,8 +806,9 @@
 # Gets or adds group role to project
 # Usage: get_or_add_group_project_role <role> <group> <project>
 function get_or_add_group_project_role {
+    local group_role_id
     # Gets group role id
-    local group_role_id=$(openstack role list \
+    group_role_id=$(openstack role list \
         --os-url=$KEYSTONE_SERVICE_URI_V3 \
         --os-identity-api-version=3 \
         --group $2 \
@@ -824,8 +834,9 @@
 # Gets or creates service
 # Usage: get_or_create_service <name> <type> <description>
 function get_or_create_service {
+    local service_id
     # Gets service id
-    local service_id=$(
+    service_id=$(
         # Gets service id
         openstack service show $2 -f value -c id 2>/dev/null ||
         # Creates new service if not exists
@@ -843,13 +854,19 @@
 # Create an endpoint with a specific interface
 # Usage: _get_or_create_endpoint_with_interface <service> <interface> <url> <region>
 function _get_or_create_endpoint_with_interface {
-    local endpoint_id=$(openstack endpoint list \
+    local endpoint_id
+    # TODO(dgonzalez): The check of the region name, as done in the grep
+    # statement below, exists only because keystone does currently
+    # not allow filtering the region name when listing endpoints. If keystone
+    # gets support for this, the check for the region name can be removed.
+    # Related bug in keystone: https://bugs.launchpad.net/keystone/+bug/1482772
+    endpoint_id=$(openstack endpoint list \
         --os-url $KEYSTONE_SERVICE_URI_V3 \
         --os-identity-api-version=3 \
         --service $1 \
         --interface $2 \
         --region $4 \
-        -c ID -f value)
+        -c ID -c Region -f value | grep $4 | cut -f 1 -d " ")
     if [[ -z "$endpoint_id" ]]; then
         # Creates new endpoint
         endpoint_id=$(openstack endpoint create \
diff --git a/inc/python b/inc/python
index 54e19a7..5c9dc5c 100644
--- a/inc/python
+++ b/inc/python
@@ -67,7 +67,6 @@
 # Wrapper for ``pip install`` to set cache and proxy environment variables
 # Uses globals ``OFFLINE``, ``PIP_VIRTUAL_ENV``,
 # ``PIP_UPGRADE``, ``TRACK_DEPENDS``, ``*_proxy``,
-# ``USE_CONSTRAINTS``
 # pip_install package [package ...]
 function pip_install {
     local xtrace=$(set +o | grep xtrace)
@@ -105,11 +104,8 @@
     fi
 
     cmd_pip="$cmd_pip install"
-
-    # Handle a constraints file, if needed.
-    if [[ "$USE_CONSTRAINTS" == "True" ]]; then
-        cmd_pip="$cmd_pip -c $REQUIREMENTS_DIR/upper-constraints.txt"
-    fi
+    # Always apply constraints
+    cmd_pip="$cmd_pip -c $REQUIREMENTS_DIR/upper-constraints.txt"
 
     local pip_version=$(python -c "import pip; \
                         print(pip.__version__.strip('.')[0])")
@@ -187,13 +183,13 @@
 # use this, especially *oslo* ones
 function setup_install {
     local project_dir=$1
-    setup_package_with_req_sync $project_dir
+    setup_package_with_constraints_edit $project_dir
 }
 
 # this should be used for projects which run services, like all services
 function setup_develop {
     local project_dir=$1
-    setup_package_with_req_sync $project_dir -e
+    setup_package_with_constraints_edit $project_dir -e
 }
 
 # determine if a project as specified by directory is in
@@ -209,32 +205,16 @@
 # ``pip install -e`` the package, which processes the dependencies
 # using pip before running `setup.py develop`
 #
-# Updates the dependencies in project_dir from the
-# openstack/requirements global list before installing anything.
+# Updates the constraints from REQUIREMENTS_DIR to reflect the
+# future installed state of this package. This ensures that when we
+# install this package we get the from-source version.
 #
-# Uses globals ``TRACK_DEPENDS``, ``REQUIREMENTS_DIR``, ``UNDO_REQUIREMENTS``
+# Uses globals ``REQUIREMENTS_DIR``
 # setup_develop directory
-function setup_package_with_req_sync {
+function setup_package_with_constraints_edit {
     local project_dir=$1
     local flags=$2
 
-    # Don't update repo if local changes exist
-    # Don't use buggy "git diff --quiet"
-    # ``errexit`` requires us to trap the exit code when the repo is changed
-    local update_requirements=$(cd $project_dir && git diff --exit-code >/dev/null || echo "changed")
-
-    if [[ $update_requirements != "changed" && "$USE_CONSTRAINTS" == "False" ]]; then
-        if is_in_projects_txt $project_dir; then
-            (cd $REQUIREMENTS_DIR; \
-                ./.venv/bin/python update.py $project_dir)
-        else
-            # soft update projects not found in requirements project.txt
-            echo "$project_dir not a constrained repository, soft enforcing requirements"
-            (cd $REQUIREMENTS_DIR; \
-                ./.venv/bin/python update.py -s $project_dir)
-        fi
-    fi
-
     if [ -n "$REQUIREMENTS_DIR" ]; then
         # Constrain this package to this project directory from here on out.
         local name=$(awk '/^name.*=/ {print $3}' $project_dir/setup.cfg)
@@ -245,19 +225,6 @@
 
     setup_package $project_dir $flags
 
-    # We've just gone and possibly modified the user's source tree in an
-    # automated way, which is considered bad form if it's a development
-    # tree because we've screwed up their next git checkin. So undo it.
-    #
-    # However... there are some circumstances, like running in the gate
-    # where we really really want the overridden version to stick. So provide
-    # a variable that tells us whether or not we should UNDO the requirements
-    # changes (this will be set to False in the OpenStack ci gate)
-    if [ $UNDO_REQUIREMENTS = "True" ]; then
-        if [[ $update_requirements != "changed" ]]; then
-            (cd $project_dir && git reset --hard)
-        fi
-    fi
 }
 
 # ``pip install -e`` the package, which processes the dependencies
diff --git a/lib/apache b/lib/apache
index c7d69f2..a8e9bc5 100644
--- a/lib/apache
+++ b/lib/apache
@@ -11,7 +11,6 @@
 # lib/apache exports the following functions:
 #
 # - install_apache_wsgi
-# - config_apache_wsgi
 # - apache_site_config_for
 # - enable_apache_site
 # - disable_apache_site
diff --git a/lib/ceilometer b/lib/ceilometer
index 9226d85..3df75b7 100644
--- a/lib/ceilometer
+++ b/lib/ceilometer
@@ -211,7 +211,8 @@
     cp $CEILOMETER_DIR/etc/ceilometer/event_pipeline.yaml $CEILOMETER_CONF_DIR
     cp $CEILOMETER_DIR/etc/ceilometer/api_paste.ini $CEILOMETER_CONF_DIR
     cp $CEILOMETER_DIR/etc/ceilometer/event_definitions.yaml $CEILOMETER_CONF_DIR
-    cp $CEILOMETER_DIR/etc/ceilometer/meters.yaml $CEILOMETER_CONF_DIR
+    cp $CEILOMETER_DIR/etc/ceilometer/gnocchi_archive_policy_map.yaml $CEILOMETER_CONF_DIR
+    cp $CEILOMETER_DIR/etc/ceilometer/gnocchi_resources.yaml $CEILOMETER_CONF_DIR
 
     if [ "$CEILOMETER_PIPELINE_INTERVAL" ]; then
         sed -i "s/interval:.*/interval: ${CEILOMETER_PIPELINE_INTERVAL}/" $CEILOMETER_CONF_DIR/pipeline.yaml
diff --git a/lib/ceph b/lib/ceph
index 6cf481e..8e34aa4 100644
--- a/lib/ceph
+++ b/lib/ceph
@@ -176,7 +176,9 @@
     sudo mkdir -p ${CEPH_DATA_DIR}/{bootstrap-mds,bootstrap-osd,mds,mon,osd,tmp}
 
     # create ceph monitor initial key and directory
-    sudo ceph-authtool /var/lib/ceph/tmp/keyring.mon.$(hostname) --create-keyring --name=mon. --add-key=$(ceph-authtool --gen-print-key) --cap mon 'allow *'
+    sudo ceph-authtool /var/lib/ceph/tmp/keyring.mon.$(hostname) \
+        --create-keyring --name=mon. --add-key=$(ceph-authtool --gen-print-key) \
+        --cap mon 'allow *'
     sudo mkdir /var/lib/ceph/mon/ceph-$(hostname)
 
     # create a default ceph configuration file
@@ -194,12 +196,14 @@
 EOF
 
     # bootstrap the ceph monitor
-    sudo ceph-mon -c ${CEPH_CONF_FILE} --mkfs -i $(hostname) --keyring /var/lib/ceph/tmp/keyring.mon.$(hostname)
+    sudo ceph-mon -c ${CEPH_CONF_FILE} --mkfs -i $(hostname) \
+        --keyring /var/lib/ceph/tmp/keyring.mon.$(hostname)
+
     if is_ubuntu; then
-    sudo touch /var/lib/ceph/mon/ceph-$(hostname)/upstart
+        sudo touch /var/lib/ceph/mon/ceph-$(hostname)/upstart
         sudo initctl emit ceph-mon id=$(hostname)
     else
-    sudo touch /var/lib/ceph/mon/ceph-$(hostname)/sysvinit
+        sudo touch /var/lib/ceph/mon/ceph-$(hostname)/sysvinit
         sudo service ceph start mon.$(hostname)
     fi
 
@@ -240,7 +244,9 @@
         OSD_ID=$(sudo ceph -c ${CEPH_CONF_FILE} osd create)
         sudo mkdir -p ${CEPH_DATA_DIR}/osd/ceph-${OSD_ID}
         sudo ceph-osd -c ${CEPH_CONF_FILE} -i ${OSD_ID} --mkfs
-        sudo ceph -c ${CEPH_CONF_FILE} auth get-or-create osd.${OSD_ID} mon 'allow profile osd ' osd 'allow *' | sudo tee ${CEPH_DATA_DIR}/osd/ceph-${OSD_ID}/keyring
+        sudo ceph -c ${CEPH_CONF_FILE} auth get-or-create osd.${OSD_ID} \
+            mon 'allow profile osd ' osd 'allow *' | \
+            sudo tee ${CEPH_DATA_DIR}/osd/ceph-${OSD_ID}/keyring
 
         # ceph's init script is parsing ${CEPH_DATA_DIR}/osd/ceph-${OSD_ID}/ and looking for a file
         # 'upstart' or 'sysinitv', thanks to these 'touches' we are able to control OSDs daemons
@@ -264,9 +270,13 @@
 # configure_ceph_glance() - Glance config needs to come after Glance is set up
 function configure_ceph_glance {
     sudo ceph -c ${CEPH_CONF_FILE} osd pool create ${GLANCE_CEPH_POOL} ${GLANCE_CEPH_POOL_PG} ${GLANCE_CEPH_POOL_PGP}
-    sudo ceph -c ${CEPH_CONF_FILE} auth get-or-create client.${GLANCE_CEPH_USER} mon "allow r" osd "allow class-read object_prefix rbd_children, allow rwx pool=${GLANCE_CEPH_POOL}" | sudo tee ${CEPH_CONF_DIR}/ceph.client.${GLANCE_CEPH_USER}.keyring
+    sudo ceph -c ${CEPH_CONF_FILE} auth get-or-create client.${GLANCE_CEPH_USER} \
+        mon "allow r" \
+        osd "allow class-read object_prefix rbd_children, allow rwx pool=${GLANCE_CEPH_POOL}" | \
+        sudo tee ${CEPH_CONF_DIR}/ceph.client.${GLANCE_CEPH_USER}.keyring
     sudo chown ${STACK_USER}:$(id -g -n $whoami) ${CEPH_CONF_DIR}/ceph.client.${GLANCE_CEPH_USER}.keyring
 
+    iniset $GLANCE_API_CONF DEFAULT show_image_direct_url True
     iniset $GLANCE_API_CONF glance_store default_store rbd
     iniset $GLANCE_API_CONF glance_store stores "file, http, rbd"
     iniset $GLANCE_API_CONF glance_store rbd_store_ceph_conf $CEPH_CONF_FILE
@@ -295,7 +305,10 @@
     iniset $NOVA_CONF libvirt images_rbd_ceph_conf ${CEPH_CONF_FILE}
 
     if ! is_service_enabled cinder; then
-        sudo ceph -c ${CEPH_CONF_FILE} auth get-or-create client.${CINDER_CEPH_USER} mon "allow r" osd "allow class-read object_prefix rbd_children, allow rwx pool=${CINDER_CEPH_POOL}, allow rwx pool=${NOVA_CEPH_POOL},allow rx pool=${GLANCE_CEPH_POOL}" | sudo tee ${CEPH_CONF_DIR}/ceph.client.${CINDER_CEPH_USER}.keyring > /dev/null
+        sudo ceph -c ${CEPH_CONF_FILE} auth get-or-create client.${CINDER_CEPH_USER} \
+            mon "allow r" \
+            osd "allow class-read object_prefix rbd_children, allow rwx pool=${CINDER_CEPH_POOL}, allow rwx pool=${NOVA_CEPH_POOL},allow rwx pool=${GLANCE_CEPH_POOL}" | \
+            sudo tee ${CEPH_CONF_DIR}/ceph.client.${CINDER_CEPH_USER}.keyring > /dev/null
         sudo chown ${STACK_USER}:$(id -g -n $whoami) ${CEPH_CONF_DIR}/ceph.client.${CINDER_CEPH_USER}.keyring
     fi
 }
@@ -311,7 +324,10 @@
 # configure_ceph_cinder() - Cinder config needs to come after Cinder is set up
 function configure_ceph_cinder {
     sudo ceph -c ${CEPH_CONF_FILE} osd pool create ${CINDER_CEPH_POOL} ${CINDER_CEPH_POOL_PG} ${CINDER_CEPH_POOL_PGP}
-    sudo ceph -c ${CEPH_CONF_FILE} auth get-or-create client.${CINDER_CEPH_USER} mon "allow r" osd "allow class-read object_prefix rbd_children, allow rwx pool=${CINDER_CEPH_POOL}, allow rwx pool=${NOVA_CEPH_POOL},allow rx pool=${GLANCE_CEPH_POOL}" | sudo tee ${CEPH_CONF_DIR}/ceph.client.${CINDER_CEPH_USER}.keyring
+    sudo ceph -c ${CEPH_CONF_FILE} auth get-or-create client.${CINDER_CEPH_USER} \
+        mon "allow r" \
+        osd "allow class-read object_prefix rbd_children, allow rwx pool=${CINDER_CEPH_POOL}, allow rwx pool=${NOVA_CEPH_POOL},allow rwx pool=${GLANCE_CEPH_POOL}" | \
+        sudo tee ${CEPH_CONF_DIR}/ceph.client.${CINDER_CEPH_USER}.keyring
     sudo chown ${STACK_USER}:$(id -g -n $whoami) ${CEPH_CONF_DIR}/ceph.client.${CINDER_CEPH_USER}.keyring
 }
 
diff --git a/lib/databases/mysql b/lib/databases/mysql
index fb55b60..7ae9a93 100644
--- a/lib/databases/mysql
+++ b/lib/databases/mysql
@@ -85,12 +85,12 @@
         sudo mysqladmin -u root password $DATABASE_PASSWORD || true
     fi
 
-    # Update the DB to give user ‘$DATABASE_USER’@’%’ full control of the all databases:
+    # Update the DB to give user '$DATABASE_USER'@'%' full control of all databases:
     sudo mysql -uroot -p$DATABASE_PASSWORD -h127.0.0.1 -e "GRANT ALL PRIVILEGES ON *.* TO '$DATABASE_USER'@'%' identified by '$DATABASE_PASSWORD';"
 
     # Now update ``my.cnf`` for some local needs and restart the mysql service
 
-    # Change ‘bind-address’ from localhost (127.0.0.1) to any (::) and
+    # Change bind-address from localhost (127.0.0.1) to any (::) and
     # set default db type to InnoDB
     sudo bash -c "source $TOP_DIR/functions && \
         iniset $my_conf mysqld bind-address "$SERVICE_LISTEN_ADDRESS" && \
diff --git a/lib/glance b/lib/glance
index f200dca..b1b0f32 100644
--- a/lib/glance
+++ b/lib/glance
@@ -154,7 +154,10 @@
 
         iniset $GLANCE_SWIFT_STORE_CONF ref1 user $SERVICE_TENANT_NAME:glance-swift
         iniset $GLANCE_SWIFT_STORE_CONF ref1 key $SERVICE_PASSWORD
-        iniset $GLANCE_SWIFT_STORE_CONF ref1 auth_address $KEYSTONE_SERVICE_URI/v2.0/
+        iniset $GLANCE_SWIFT_STORE_CONF ref1 auth_address $KEYSTONE_SERVICE_URI/v3
+        iniset $GLANCE_SWIFT_STORE_CONF ref1 user_domain_id default
+        iniset $GLANCE_SWIFT_STORE_CONF ref1 project_domain_id default
+        iniset $GLANCE_SWIFT_STORE_CONF ref1 auth_version 3
 
         # commenting is not strictly necessary but it's confusing to have bad values in conf
         inicomment $GLANCE_API_CONF glance_store swift_store_user
diff --git a/lib/horizon b/lib/horizon
index b0f306b..9fe0aa8 100644
--- a/lib/horizon
+++ b/lib/horizon
@@ -93,6 +93,9 @@
     local local_settings=$HORIZON_DIR/openstack_dashboard/local/local_settings.py
     cp $HORIZON_SETTINGS $local_settings
 
+    _horizon_config_set $local_settings "" WEBROOT \"$HORIZON_APACHE_ROOT/\"
+    _horizon_config_set $local_settings "" CUSTOM_THEME_PATH \"themes/webroot\"
+
     _horizon_config_set $local_settings "" COMPRESS_OFFLINE True
     _horizon_config_set $local_settings "" OPENSTACK_KEYSTONE_DEFAULT_ROLE \"Member\"
 
@@ -122,6 +125,7 @@
         s,%HORIZON_DIR%,$HORIZON_DIR,g;
         s,%APACHE_NAME%,$APACHE_NAME,g;
         s,%DEST%,$DEST,g;
+        s,%WEBROOT%,$HORIZON_APACHE_ROOT,g;
     \" $FILES/apache-horizon.template >$horizon_conf"
 
     if is_ubuntu; then
diff --git a/lib/infra b/lib/infra
index 3d68e45..89397de 100644
--- a/lib/infra
+++ b/lib/infra
@@ -22,7 +22,6 @@
 # Defaults
 # --------
 GITDIR["pbr"]=$DEST/pbr
-REQUIREMENTS_DIR=$DEST/requirements
 
 # Entry Points
 # ------------
@@ -30,8 +29,6 @@
 # install_infra() - Collect source and prepare
 function install_infra {
     local PIP_VIRTUAL_ENV="$REQUIREMENTS_DIR/.venv"
-    # bring down global requirements
-    git_clone $REQUIREMENTS_REPO $REQUIREMENTS_DIR $REQUIREMENTS_BRANCH
     [ ! -d $PIP_VIRTUAL_ENV ] && virtualenv $PIP_VIRTUAL_ENV
     # We don't care about testing git pbr in the requirements venv.
     PIP_VIRTUAL_ENV=$PIP_VIRTUAL_ENV pip_install -U pbr
diff --git a/lib/ironic b/lib/ironic
index 1323446..b3ad586 100644
--- a/lib/ironic
+++ b/lib/ironic
@@ -618,6 +618,7 @@
         local node_id=$(ironic node-create $standalone_node_uuid\
             --chassis_uuid $chassis_id \
             --driver $IRONIC_DEPLOY_DRIVER \
+            --name node-$total_nodes \
             -p cpus=$ironic_node_cpu\
             -p memory_mb=$ironic_node_ram\
             -p local_gb=$ironic_node_disk\
diff --git a/lib/keystone b/lib/keystone
index 59584b2..e2448c9 100644
--- a/lib/keystone
+++ b/lib/keystone
@@ -35,6 +35,7 @@
 # --------
 
 # Set up default directories
+GITDIR["keystoneauth"]=$DEST/keystoneauth
 GITDIR["python-keystoneclient"]=$DEST/python-keystoneclient
 GITDIR["keystonemiddleware"]=$DEST/keystonemiddleware
 KEYSTONE_DIR=$DEST/keystone
@@ -488,6 +489,14 @@
     fi
 }
 
+# install_keystoneauth() - Collect source and prepare
+function install_keystoneauth {
+    if use_library_from_git "keystoneauth"; then
+        git_clone_by_name "keystoneauth"
+        setup_dev_lib "keystoneauth"
+    fi
+}
+
 # install_keystoneclient() - Collect source and prepare
 function install_keystoneclient {
     if use_library_from_git "python-keystoneclient"; then
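The new ``install_keystoneauth`` follows the same guard pattern as the other client libraries: clone and set up the library only when it is named in ``LIBS_FROM_GIT``. A sketch of what such a membership test can look like (simplified stand-in for DevStack's ``use_library_from_git``; a literal ``.`` in a name would need escaping in a real implementation):

```shell
# use_lib_from_git: simplified membership test against the
# comma-separated LIBS_FROM_GIT list.
use_lib_from_git() {
    local name=$1
    # surround both sides with commas so a partial name cannot match
    [[ ",${LIBS_FROM_GIT}," =~ ,${name}, ]]
}
```

With ``LIBS_FROM_GIT=keystoneauth`` the installer would take the ``git_clone_by_name``/``setup_dev_lib`` path; otherwise the released PyPI version is used.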
diff --git a/lib/neutron-legacy b/lib/neutron-legacy
index 5abe55c..d0eb0c0 100644
--- a/lib/neutron-legacy
+++ b/lib/neutron-legacy
@@ -471,11 +471,21 @@
 
 function create_nova_conf_neutron {
     iniset $NOVA_CONF DEFAULT network_api_class "nova.network.neutronv2.api.API"
-    iniset $NOVA_CONF neutron admin_username "$Q_ADMIN_USERNAME"
-    iniset $NOVA_CONF neutron admin_password "$SERVICE_PASSWORD"
-    iniset $NOVA_CONF neutron admin_auth_url "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_AUTH_PORT/v2.0"
+
+
+    if [ "$ENABLE_IDENTITY_V2" == "False" ]; then
+        iniset $NOVA_CONF neutron auth_plugin "v3password"
+        iniset $NOVA_CONF neutron auth_url "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_AUTH_PORT/v3"
+        iniset $NOVA_CONF neutron username "$Q_ADMIN_USERNAME"
+        iniset $NOVA_CONF neutron password "$SERVICE_PASSWORD"
+        iniset $NOVA_CONF neutron user_domain_name "default"
+    else
+        iniset $NOVA_CONF neutron admin_username "$Q_ADMIN_USERNAME"
+        iniset $NOVA_CONF neutron admin_password "$SERVICE_PASSWORD"
+        iniset $NOVA_CONF neutron admin_auth_url "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_AUTH_PORT/v2.0"
+        iniset $NOVA_CONF neutron admin_tenant_name "$SERVICE_TENANT_NAME"
+    fi
     iniset $NOVA_CONF neutron auth_strategy "$Q_AUTH_STRATEGY"
-    iniset $NOVA_CONF neutron admin_tenant_name "$SERVICE_TENANT_NAME"
     iniset $NOVA_CONF neutron region_name "$REGION_NAME"
     iniset $NOVA_CONF neutron url "${Q_PROTOCOL}://$Q_HOST:$Q_PORT"
 
@@ -707,11 +717,10 @@
     fi
 }
 
-# Start running processes, including screen
-function start_neutron_agents {
-    # Start up the neutron agents if enabled
+# Control of the L2 agent is separated out to make it easier to test partial
+# upgrades (everything upgraded except the L2 agent)
+function start_neutron_l2_agent {
     run_process q-agt "python $AGENT_BINARY --config-file $NEUTRON_CONF --config-file /$Q_PLUGIN_CONF_FILE"
-    run_process q-dhcp "python $AGENT_DHCP_BINARY --config-file $NEUTRON_CONF --config-file=$Q_DHCP_CONF_FILE"
 
     if is_provider_network; then
         sudo ovs-vsctl --no-wait -- --may-exist add-port $OVS_PHYSICAL_BRIDGE $PUBLIC_INTERFACE
@@ -726,31 +735,41 @@
             sudo ip route replace $FIXED_RANGE via $NETWORK_GATEWAY dev $OVS_PHYSICAL_BRIDGE
         fi
     fi
+}
 
-    if is_service_enabled q-vpn; then
+function start_neutron_other_agents {
+    run_process q-dhcp "python $AGENT_DHCP_BINARY --config-file $NEUTRON_CONF --config-file=$Q_DHCP_CONF_FILE"
+
+    if is_service_enabled neutron-vpnaas; then
+        :  # Started by plugin
+    elif is_service_enabled q-vpn; then
         run_process q-vpn "$AGENT_VPN_BINARY $(determine_config_files neutron-vpn-agent)"
     else
         run_process q-l3 "python $AGENT_L3_BINARY $(determine_config_files neutron-l3-agent)"
     fi
 
     run_process q-meta "python $AGENT_META_BINARY --config-file $NEUTRON_CONF --config-file=$Q_META_CONF_FILE"
+    run_process q-lbaas "python $AGENT_LBAAS_BINARY --config-file $NEUTRON_CONF --config-file=$LBAAS_AGENT_CONF_FILENAME"
+    run_process q-metering "python $AGENT_METERING_BINARY --config-file $NEUTRON_CONF --config-file $METERING_AGENT_CONF_FILENAME"
 
     if [ "$VIRT_DRIVER" = 'xenserver' ]; then
         # For XenServer, start an agent for the domU openvswitch
         run_process q-domua "python $AGENT_BINARY --config-file $NEUTRON_CONF --config-file /$Q_PLUGIN_CONF_FILE.domU"
     fi
-
-    if is_service_enabled q-lbaas; then
-        run_process q-lbaas "python $AGENT_LBAAS_BINARY --config-file $NEUTRON_CONF --config-file=$LBAAS_AGENT_CONF_FILENAME"
-    fi
-
-    if is_service_enabled q-metering; then
-        run_process q-metering "python $AGENT_METERING_BINARY --config-file $NEUTRON_CONF --config-file $METERING_AGENT_CONF_FILENAME"
-    fi
 }
 
-# stop_neutron() - Stop running processes (non-screen)
-function stop_neutron {
+# Start running processes, including screen
+function start_neutron_agents {
+    # Start up the neutron agents if enabled
+    start_neutron_l2_agent
+    start_neutron_other_agents
+}
+
+function stop_neutron_l2_agent {
+    stop_process q-agt
+}
+
+function stop_neutron_other {
     if is_service_enabled q-dhcp; then
         stop_process q-dhcp
         pid=$(ps aux | awk '/[d]nsmasq.+interface=(tap|ns-)/ { print $2 }')
@@ -765,8 +784,6 @@
         stop_process q-meta
     fi
 
-    stop_process q-agt
-
     if is_service_enabled q-lbaas; then
         neutron_lbaas_stop
     fi
@@ -781,8 +798,15 @@
     fi
 }
 
+# stop_neutron() - Stop running processes (non-screen)
+function stop_neutron {
+    stop_neutron_other
+    stop_neutron_l2_agent
+}
+
 # _move_neutron_addresses_route() - Move the primary IP to the OVS bridge
-# on startup, or back to the public interface on cleanup
+# on startup, or back to the public interface on cleanup. If no IP is
+# configured on the interface, just add it as a port to the OVS bridge.
 function _move_neutron_addresses_route {
     local from_intf=$1
     local to_intf=$2
@@ -795,7 +819,8 @@
         # on configure we will also add $from_intf as a port on $to_intf,
         # assuming it is an OVS bridge.
 
-        local IP_BRD=$(ip -f $af a s dev $from_intf | awk '/inet/ { print $2, $3, $4; exit }')
+        local IP_ADD=""
+        local IP_DEL=""
         local DEFAULT_ROUTE_GW=$(ip r | awk "/default.+$from_intf/ { print \$3; exit }")
         local ADD_OVS_PORT=""
 
@@ -815,7 +840,12 @@
             ADD_OVS_PORT="sudo ovs-vsctl --may-exist add-port $to_intf $from_intf"
         fi
 
-        sudo ip addr del $IP_BRD dev $from_intf; sudo ip addr add $IP_BRD dev $to_intf; $ADD_OVS_PORT; $ADD_DEFAULT_ROUTE
+        if [[ "$IP_BRD" != "" ]]; then
+            IP_DEL="sudo ip addr del $IP_BRD dev $from_intf"
+            IP_ADD="sudo ip addr add $IP_BRD dev $to_intf"
+        fi
+
+        $IP_DEL; $IP_ADD; $ADD_OVS_PORT; $ADD_DEFAULT_ROUTE
     fi
 }
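The new guard matters because the extracted address string is empty when the interface has no IP configured; the ``awk`` extraction can be exercised against canned ``ip addr show`` output (the sample text below is illustrative, not from a real host):

```shell
# Extract "address brd broadcast" the way _move_neutron_addresses_route
# does, from canned `ip -f inet addr show` output.
sample='2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 172.24.4.1/24 brd 172.24.4.255 scope global eth0'
IP_BRD=$(printf '%s\n' "$sample" | awk '/inet/ { print $2, $3, $4; exit }')

# With no configured address there is no "inet" line, the variable stays
# empty, and the del/add commands above become no-ops.
no_addr='3: eth1: <BROADCAST,MULTICAST> mtu 1500'
EMPTY=$(printf '%s\n' "$no_addr" | awk '/inet/ { print $2, $3, $4; exit }')
```

Building the commands into variables that may be empty is what lets the single ``$IP_DEL; $IP_ADD; ...`` line skip steps cleanly instead of dying on a malformed ``ip addr`` invocation.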
 
@@ -823,9 +853,7 @@
 # runs that a clean run would need to clean up
 function cleanup_neutron {
 
-    if [[ $(ip -f inet a s dev "$OVS_PHYSICAL_BRIDGE" | grep -c 'global') != 0 ]]; then
-        _move_neutron_addresses_route "$OVS_PHYSICAL_BRIDGE" "$PUBLIC_INTERFACE" False "inet"
-    fi
+    _move_neutron_addresses_route "$OVS_PHYSICAL_BRIDGE" "$PUBLIC_INTERFACE" False "inet"
 
     if [[ $(ip -f inet6 a s dev "$OVS_PHYSICAL_BRIDGE" | grep -c 'global') != 0 ]]; then
         _move_neutron_addresses_route "$OVS_PHYSICAL_BRIDGE" "$PUBLIC_INTERFACE" False "inet6"
@@ -867,6 +895,12 @@
 
     cp $NEUTRON_DIR/etc/neutron.conf $NEUTRON_CONF
 
+    Q_POLICY_FILE=$NEUTRON_CONF_DIR/policy.json
+    cp $NEUTRON_DIR/etc/policy.json $Q_POLICY_FILE
+
+    # Allow the neutron user to administer neutron, to match the neutron service account
+    sed -i 's/"context_is_admin":  "role:admin"/"context_is_admin":  "role:admin or user_name:neutron"/g' $Q_POLICY_FILE
+
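The ``sed`` rewrite can be checked against a minimal one-rule stand-in for ``policy.json`` (the double space after the colon matches the formatting the expression targets):

```shell
# Apply the context_is_admin rewrite to a minimal policy.json stand-in.
policy=$(mktemp)
printf '%s\n' '{' \
    '    "context_is_admin":  "role:admin"' \
    '}' > "$policy"
sed -i 's/"context_is_admin":  "role:admin"/"context_is_admin":  "role:admin or user_name:neutron"/g' "$policy"
```

Note the substitution is exact-match: if the shipped policy file ever changes its whitespace or rule text, the ``sed`` silently does nothing, which is the usual fragility of this approach.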
     # Set plugin-specific variables ``Q_DB_NAME``, ``Q_PLUGIN_CLASS``.
     # For main plugin config file, set ``Q_PLUGIN_CONF_PATH``, ``Q_PLUGIN_CONF_FILENAME``.
     # For addition plugin config files, set ``Q_PLUGIN_EXTRA_CONF_PATH``,
@@ -949,9 +983,9 @@
     iniset $NEUTRON_TEST_CONFIG_FILE DEFAULT verbose False
     iniset $NEUTRON_TEST_CONFIG_FILE DEFAULT debug False
     iniset $NEUTRON_TEST_CONFIG_FILE DEFAULT use_namespaces $Q_USE_NAMESPACE
-    iniset $NEUTRON_TEST_CONFIG_FILE agent root_helper "$Q_RR_COMMAND"
+    iniset $NEUTRON_TEST_CONFIG_FILE AGENT root_helper "$Q_RR_COMMAND"
     if [[ "$Q_USE_ROOTWRAP_DAEMON" == "True" ]]; then
-        iniset $NEUTRON_TEST_CONFIG_FILE agent root_helper_daemon "$Q_RR_DAEMON_COMMAND"
+        iniset $NEUTRON_TEST_CONFIG_FILE AGENT root_helper_daemon "$Q_RR_DAEMON_COMMAND"
     fi
 
     _neutron_setup_interface_driver $NEUTRON_TEST_CONFIG_FILE
@@ -966,9 +1000,9 @@
     iniset $Q_DHCP_CONF_FILE DEFAULT verbose True
     iniset $Q_DHCP_CONF_FILE DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
     iniset $Q_DHCP_CONF_FILE DEFAULT use_namespaces $Q_USE_NAMESPACE
-    iniset $Q_DHCP_CONF_FILE DEFAULT root_helper "$Q_RR_COMMAND"
+    iniset $Q_DHCP_CONF_FILE AGENT root_helper "$Q_RR_COMMAND"
     if [[ "$Q_USE_ROOTWRAP_DAEMON" == "True" ]]; then
-        iniset $NEUTRON_TEST_CONFIG_FILE agent root_helper_daemon "$Q_RR_DAEMON_COMMAND"
+        iniset $Q_DHCP_CONF_FILE AGENT root_helper_daemon "$Q_RR_DAEMON_COMMAND"
     fi
 
     if ! is_service_enabled q-l3; then
@@ -988,7 +1022,6 @@
 }
 
 function _configure_neutron_l3_agent {
-    local cfg_file
     Q_L3_ENABLED=True
     # for l3-agent, only use per tenant router if we have namespaces
     Q_L3_ROUTER_PER_TENANT=$Q_USE_NAMESPACE
@@ -1002,18 +1035,16 @@
     iniset $Q_L3_CONF_FILE DEFAULT verbose True
     iniset $Q_L3_CONF_FILE DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
     iniset $Q_L3_CONF_FILE DEFAULT use_namespaces $Q_USE_NAMESPACE
-    iniset $Q_L3_CONF_FILE DEFAULT root_helper "$Q_RR_COMMAND"
+    iniset $Q_L3_CONF_FILE AGENT root_helper "$Q_RR_COMMAND"
     if [[ "$Q_USE_ROOTWRAP_DAEMON" == "True" ]]; then
-        iniset $Q_L3_CONF_FILE agent root_helper_daemon "$Q_RR_DAEMON_COMMAND"
+        iniset $Q_L3_CONF_FILE AGENT root_helper_daemon "$Q_RR_DAEMON_COMMAND"
     fi
 
     _neutron_setup_interface_driver $Q_L3_CONF_FILE
 
     neutron_plugin_configure_l3_agent
 
-    if [[ $(ip -f inet a s dev "$PUBLIC_INTERFACE" | grep -c 'global') != 0 ]]; then
-        _move_neutron_addresses_route "$PUBLIC_INTERFACE" "$OVS_PHYSICAL_BRIDGE" True "inet"
-    fi
+    _move_neutron_addresses_route "$PUBLIC_INTERFACE" "$OVS_PHYSICAL_BRIDGE" True "inet"
 
     if [[ $(ip -f inet6 a s dev "$PUBLIC_INTERFACE" | grep -c 'global') != 0 ]]; then
         _move_neutron_addresses_route "$PUBLIC_INTERFACE" "$OVS_PHYSICAL_BRIDGE" False "inet6"
@@ -1026,9 +1057,9 @@
     iniset $Q_META_CONF_FILE DEFAULT verbose True
     iniset $Q_META_CONF_FILE DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
     iniset $Q_META_CONF_FILE DEFAULT nova_metadata_ip $Q_META_DATA_IP
-    iniset $Q_META_CONF_FILE DEFAULT root_helper "$Q_RR_COMMAND"
+    iniset $Q_META_CONF_FILE AGENT root_helper "$Q_RR_COMMAND"
     if [[ "$Q_USE_ROOTWRAP_DAEMON" == "True" ]]; then
-        iniset $Q_META_CONF_FILE agent root_helper_daemon "$Q_RR_DAEMON_COMMAND"
+        iniset $Q_META_CONF_FILE AGENT root_helper_daemon "$Q_RR_DAEMON_COMMAND"
     fi
 
     # Configures keystone for metadata_agent
@@ -1096,13 +1127,7 @@
 # It is called when q-svc is enabled.
 function _configure_neutron_service {
     Q_API_PASTE_FILE=$NEUTRON_CONF_DIR/api-paste.ini
-    Q_POLICY_FILE=$NEUTRON_CONF_DIR/policy.json
-
     cp $NEUTRON_DIR/etc/api-paste.ini $Q_API_PASTE_FILE
-    cp $NEUTRON_DIR/etc/policy.json $Q_POLICY_FILE
-
-    # allow neutron user to administer neutron to match neutron account
-    sed -i 's/"context_is_admin":  "role:admin"/"context_is_admin":  "role:admin or user_name:neutron"/g' $Q_POLICY_FILE
 
     # Update either configuration file with plugin
     iniset $NEUTRON_CONF DEFAULT core_plugin $Q_PLUGIN_CLASS
diff --git a/lib/nova b/lib/nova
index a6cd651..6441a89 100644
--- a/lib/nova
+++ b/lib/nova
@@ -490,7 +490,6 @@
     iniset $NOVA_CONF database connection `database_connection_url nova`
     iniset $NOVA_CONF api_database connection `database_connection_url nova_api`
     iniset $NOVA_CONF DEFAULT instance_name_template "${INSTANCE_NAME_PREFIX}%08x"
-    iniset $NOVA_CONF osapi_v3 enabled "True"
     iniset $NOVA_CONF DEFAULT osapi_compute_listen "$NOVA_SERVICE_LISTEN_ADDRESS"
     iniset $NOVA_CONF DEFAULT ec2_listen "$NOVA_SERVICE_LISTEN_ADDRESS"
     iniset $NOVA_CONF DEFAULT metadata_listen "$NOVA_SERVICE_LISTEN_ADDRESS"
diff --git a/lib/swift b/lib/swift
index 826f233..fc736a6 100644
--- a/lib/swift
+++ b/lib/swift
@@ -46,6 +46,7 @@
 SWIFT_SERVICE_PROTOCOL=${SWIFT_SERVICE_PROTOCOL:-$SERVICE_PROTOCOL}
 SWIFT_DEFAULT_BIND_PORT_INT=${SWIFT_DEFAULT_BIND_PORT_INT:-8081}
 SWIFT_SERVICE_LOCAL_HOST=${SWIFT_SERVICE_LOCAL_HOST:-$SERVICE_LOCAL_HOST}
+SWIFT_SERVICE_LISTEN_ADDRESS=${SWIFT_SERVICE_LISTEN_ADDRESS:-$SERVICE_LISTEN_ADDRESS}
 
 # TODO: add logging to different location.
 
@@ -96,7 +97,7 @@
 # the beginning of the pipeline, before authentication middlewares.
 SWIFT_EXTRAS_MIDDLEWARE_NO_AUTH=${SWIFT_EXTRAS_MIDDLEWARE_NO_AUTH:-crossdomain}
 
-# The ring uses a configurable number of bits from a path’s MD5 hash as
+# The ring uses a configurable number of bits from a path's MD5 hash as
 # a partition index that designates a device. The number of bits kept
 # from the hash is known as the partition power, and 2 to the partition
 # power indicates the partition count. Partitioning the full MD5 hash
@@ -361,6 +362,9 @@
     iniuncomment ${SWIFT_CONFIG_PROXY_SERVER} DEFAULT log_level
     iniset ${SWIFT_CONFIG_PROXY_SERVER} DEFAULT log_level DEBUG
 
+    iniuncomment ${SWIFT_CONFIG_PROXY_SERVER} DEFAULT bind_ip
+    iniset ${SWIFT_CONFIG_PROXY_SERVER} DEFAULT bind_ip ${SWIFT_SERVICE_LISTEN_ADDRESS}
+
     iniuncomment ${SWIFT_CONFIG_PROXY_SERVER} DEFAULT bind_port
     if is_service_enabled tls-proxy; then
         iniset ${SWIFT_CONFIG_PROXY_SERVER} DEFAULT bind_port ${SWIFT_DEFAULT_BIND_PORT_INT}
@@ -463,17 +467,23 @@
         local swift_node_config=${SWIFT_CONF_DIR}/object-server/${node_number}.conf
         cp ${SWIFT_DIR}/etc/object-server.conf-sample ${swift_node_config}
         generate_swift_config_services ${swift_node_config} ${node_number} $(( OBJECT_PORT_BASE + 10 * (node_number - 1) )) object
+        iniuncomment ${swift_node_config} DEFAULT bind_ip
+        iniset ${swift_node_config} DEFAULT bind_ip ${SWIFT_SERVICE_LISTEN_ADDRESS}
         iniset ${swift_node_config} filter:recon recon_cache_path  ${SWIFT_DATA_DIR}/cache
 
         swift_node_config=${SWIFT_CONF_DIR}/container-server/${node_number}.conf
         cp ${SWIFT_DIR}/etc/container-server.conf-sample ${swift_node_config}
         generate_swift_config_services ${swift_node_config} ${node_number} $(( CONTAINER_PORT_BASE + 10 * (node_number - 1) )) container
+        iniuncomment ${swift_node_config} DEFAULT bind_ip
+        iniset ${swift_node_config} DEFAULT bind_ip ${SWIFT_SERVICE_LISTEN_ADDRESS}
         iniuncomment ${swift_node_config} app:container-server allow_versions
         iniset ${swift_node_config} app:container-server allow_versions  "true"
 
         swift_node_config=${SWIFT_CONF_DIR}/account-server/${node_number}.conf
         cp ${SWIFT_DIR}/etc/account-server.conf-sample ${swift_node_config}
         generate_swift_config_services ${swift_node_config} ${node_number} $(( ACCOUNT_PORT_BASE + 10 * (node_number - 1) )) account
+        iniuncomment ${swift_node_config} DEFAULT bind_ip
+        iniset ${swift_node_config} DEFAULT bind_ip ${SWIFT_SERVICE_LISTEN_ADDRESS}
     done
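Each replica server gets its own port, stepped in tens from a per-service base, which is why ``bind_ip`` can now be shared across nodes while the ports stay distinct. The arithmetic, with an illustrative base (not necessarily DevStack's default for ``OBJECT_PORT_BASE``):

```shell
# Per-node port assignment as used above: base + 10 * (node - 1).
object_port_base=6613  # illustrative value, not necessarily DevStack's default
ports=""
for node_number in 1 2 3 4; do
    ports="$ports $(( object_port_base + 10 * (node_number - 1) ))"
done
# ports is now " 6613 6623 6633 6643"
```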
 
     # Set new accounts in tempauth to match keystone tenant/user (to make testing easier)
@@ -600,7 +610,7 @@
 
     KEYSTONE_CATALOG_BACKEND=${KEYSTONE_CATALOG_BACKEND:-sql}
 
-    local another_role=$(openstack role list | awk "/ anotherrole / { print \$2 }")
+    local another_role=$(get_or_create_role "anotherrole")
 
     # NOTE(jroll): Swift doesn't need the admin role here, however Ironic uses
     # temp urls, which break when uploaded by a non-admin role
diff --git a/lib/tempest b/lib/tempest
index 1376c87..be24da6 100644
--- a/lib/tempest
+++ b/lib/tempest
@@ -30,6 +30,7 @@
 # - ``DEFAULT_INSTANCE_TYPE``
 # - ``DEFAULT_INSTANCE_USER``
 # - ``CINDER_ENABLED_BACKENDS``
+# - ``NOVA_ALLOW_DUPLICATE_NETWORKS``
 #
 # ``stack.sh`` calls the entry points in this order:
 #
@@ -81,6 +82,21 @@
 IPV6_ENABLED=$(trueorfalse True IPV6_ENABLED)
 IPV6_SUBNET_ATTRIBUTES_ENABLED=$(trueorfalse True IPV6_SUBNET_ATTRIBUTES_ENABLED)
 
+# Do we want to make a configuration where Tempest has admin on
+# the cloud? We don't always want to, so that we can ensure Tempest
+# would work on a public cloud.
+TEMPEST_HAS_ADMIN=$(trueorfalse True TEMPEST_HAS_ADMIN)
+
+# Credential provider configuration option variables
+TEMPEST_ALLOW_TENANT_ISOLATION=${TEMPEST_ALLOW_TENANT_ISOLATION:-$TEMPEST_HAS_ADMIN}
+TEMPEST_USE_TEST_ACCOUNTS=$(trueorfalse False TEMPEST_USE_TEST_ACCOUNTS)
+
+# The number of workers tempest is expected to be run with. This is used for
+# generating an accounts.yaml for running with test-accounts. This is also the
+# same variable that devstack-gate uses to specify the number of workers that
+# it will run tempest with.
+TEMPEST_CONCURRENCY=${TEMPEST_CONCURRENCY:-$(nproc)}
+
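``trueorfalse`` normalizes a possibly-unset variable to the literal string ``True`` or ``False``, given a default. A rough sketch of that contract (name and accepted spellings are simplified relative to DevStack's helper; requires bash for the ``${!name}`` indirection):

```shell
# true_or_false DEFAULT VARNAME: echo True/False based on the named
# variable's value, falling back to DEFAULT when unset or unrecognized.
true_or_false() {
    local default=$1
    local value=${!2:-}
    case "$(printf '%s' "$value" | tr '[:upper:]' '[:lower:]')" in
        1|yes|true|t|y) echo True ;;
        0|no|false|f|n) echo False ;;
        *) echo "$default" ;;
    esac
}
```

So ``TEMPEST_HAS_ADMIN=$(true_or_false True TEMPEST_HAS_ADMIN)`` yields a canonical ``True``/``False`` no matter how the user spelled the value, which is why later string comparisons against ``"True"`` are safe.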
 
 # Functions
 # ---------
@@ -166,18 +182,13 @@
         esac
     fi
 
-    # Create ``tempest.conf`` from ``tempest.conf.sample``
-    # Copy every time because the image UUIDS are going to change
+    # (Re)create ``tempest.conf``
+    # Create every time because the image UUIDs are going to change
     sudo install -d -o $STACK_USER $TEMPEST_CONFIG_DIR
-    install -m 644 $TEMPEST_DIR/etc/tempest.conf.sample $TEMPEST_CONFIG
+    rm -f $TEMPEST_CONFIG
 
     password=${ADMIN_PASSWORD:-secrete}
 
-    # Do we want to make a configuration where Tempest has admin on
-    # the cloud. We don't always want to so that we can ensure Tempest
-    # would work on a public cloud.
-    TEMPEST_HAS_ADMIN=$(trueorfalse True TEMPEST_HAS_ADMIN)
-
     # See ``lib/keystone`` where these users and tenants are set up
     ADMIN_USERNAME=${ADMIN_USERNAME:-admin}
     ADMIN_TENANT_NAME=${ADMIN_TENANT_NAME:-admin}
@@ -312,7 +323,7 @@
     fi
     if [ "$ENABLE_IDENTITY_V2" == "False" ]; then
         # Only Identity v3 is available; then skip Identity API v2 tests
-        iniset $TEMPEST_CONFIG identity-feature-enabled v2_api False
+        iniset $TEMPEST_CONFIG identity-feature-enabled api_v2 False
         # In addition, use v3 auth tokens for running all Tempest tests
         iniset $TEMPEST_CONFIG identity auth_version v3
     else
@@ -334,11 +345,6 @@
     # Image Features
     iniset $TEMPEST_CONFIG image-feature-enabled deactivate_image True
 
-    # Auth
-    TEMPEST_ALLOW_TENANT_ISOLATION=${TEMPEST_ALLOW_TENANT_ISOLATION:-$TEMPEST_HAS_ADMIN}
-    iniset $TEMPEST_CONFIG auth allow_tenant_isolation ${TEMPEST_ALLOW_TENANT_ISOLATION:-True}
-    iniset $TEMPEST_CONFIG auth tempest_roles "Member"
-
     # Compute
     iniset $TEMPEST_CONFIG compute ssh_user ${DEFAULT_INSTANCE_USER:-cirros} # DEPRECATED
     iniset $TEMPEST_CONFIG compute network_for_ssh $PRIVATE_NETWORK_NAME
@@ -380,8 +386,12 @@
     # TODO(gilliard): Remove the live_migrate_paused_instances flag when Juno is end of life.
     iniset $TEMPEST_CONFIG compute-feature-enabled live_migrate_paused_instances True
     iniset $TEMPEST_CONFIG compute-feature-enabled attach_encrypted_volume ${ATTACH_ENCRYPTED_VOLUME_AVAILABLE:-True}
+    # TODO(mriedem): Remove this when kilo-eol happens since the
+    # neutron.allow_duplicate_networks option was removed from nova in Liberty
+    # and is now the default behavior.
+    iniset $TEMPEST_CONFIG compute-feature-enabled allow_duplicate_networks ${NOVA_ALLOW_DUPLICATE_NETWORKS:-True}
 
-    # Network
+    # Network
     iniset $TEMPEST_CONFIG network api_version 2.0
     iniset $TEMPEST_CONFIG network tenant_networks_reachable "$tenant_networks_reachable"
     iniset $TEMPEST_CONFIG network public_network_id "$public_network_id"
@@ -452,6 +462,9 @@
     fi
     iniset $TEMPEST_CONFIG object-storage-feature-enabled discoverable_apis $object_storage_api_extensions
 
+    # Validation
+    iniset $TEMPEST_CONFIG validation run_validation ${TEMPEST_RUN_VALIDATION:-False}
+
     # Volume
     # TODO(dkranz): Remove the bootable flag when Juno is end of life.
     iniset $TEMPEST_CONFIG volume-feature-enabled bootable True
@@ -460,7 +473,7 @@
     if [[ ! -z "$DISABLE_VOLUME_API_EXTENSIONS" ]]; then
         # Enabled extensions are either the ones explicitly specified or those available on the API endpoint
         volume_api_extensions=${VOLUME_API_EXTENSIONS:-$(iniget $tmp_cfg_file volume-feature-enabled api_extensions | tr -d " ")}
-        # Remove disabled extensions
+        # Remove disabled extensions
         volume_api_extensions=$(remove_disabled_extensions $volume_api_extensions $DISABLE_VOLUME_API_EXTENSIONS)
     fi
     iniset $TEMPEST_CONFIG volume-feature-enabled api_extensions $volume_api_extensions
@@ -537,6 +550,19 @@
         sudo chown $STACK_USER $BOTO_CONF
     fi
 
+    # Auth
+    iniset $TEMPEST_CONFIG auth tempest_roles "Member"
+    if [[ $TEMPEST_USE_TEST_ACCOUNTS == "True" ]]; then
+        if [[ $TEMPEST_HAS_ADMIN == "True" ]]; then
+            tempest-account-generator -c $TEMPEST_CONFIG --os-username $ADMIN_USERNAME --os-password $ADMIN_PASSWORD --os-tenant-name $ADMIN_TENANT_NAME -r $TEMPEST_CONCURRENCY --with-admin etc/accounts.yaml
+        else
+            tempest-account-generator -c $TEMPEST_CONFIG --os-username $ADMIN_USERNAME --os-password $ADMIN_PASSWORD --os-tenant-name $ADMIN_TENANT_NAME -r $TEMPEST_CONCURRENCY etc/accounts.yaml
+        fi
+        iniset $TEMPEST_CONFIG auth allow_tenant_isolation False
+        iniset $TEMPEST_CONFIG auth test_accounts_file "etc/accounts.yaml"
+    else
+        iniset $TEMPEST_CONFIG auth allow_tenant_isolation ${TEMPEST_ALLOW_TENANT_ISOLATION:-True}
+    fi
     # Restore IFS
     IFS=$ifs
 }
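The two ``tempest-account-generator`` invocations differ only by ``--with-admin``; a bash array keeps that kind of branch small. This is a refactoring sketch with made-up helper and variable names, not how the patch is written:

```shell
# Build the generator's flag list once, appending --with-admin only
# when an admin account is available (hypothetical names).
build_generator_args() {
    local has_admin=$1 concurrency=$2
    local args=(-r "$concurrency")
    if [ "$has_admin" = "True" ]; then
        args+=(--with-admin)
    fi
    echo "${args[@]}"
}
```

The array form also avoids word-splitting surprises if a future flag value ever contains spaces.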
diff --git a/pkg/elasticsearch.sh b/pkg/elasticsearch.sh
index 79f67a0..14d13cf 100755
--- a/pkg/elasticsearch.sh
+++ b/pkg/elasticsearch.sh
@@ -6,9 +6,7 @@
 # step can probably be factored out to something nicer
 TOP_DIR=$(cd $(dirname "$0")/.. && pwd)
 FILES=$TOP_DIR/files
-source $TOP_DIR/functions
-DEST=${DEST:-/opt/stack}
-source $TOP_DIR/lib/infra
+source $TOP_DIR/stackrc
 
 # Package source and version, all pkg files are expected to have
 # something like this, as well as a way to override them.
diff --git a/stack.sh b/stack.sh
index 49f9415..accfd0a 100755
--- a/stack.sh
+++ b/stack.sh
@@ -21,6 +21,10 @@
 
 # Learn more and get the most recent version at http://devstack.org
 
+# Print the commands being run so that we can see the command that triggers
+# an error.  It is also useful for following along as the install occurs.
+set -o xtrace
+
 # Make sure custom grep options don't get in the way
 unset GREP_OPTIONS
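Moving ``set -o xtrace`` to the top means everything from the first setup step onward is echoed, not just the post-configuration phase. The effect, shown in a subshell (trace output goes to stderr, prefixed with bash's default ``PS4`` of ``+ ``):

```shell
# Capture both streams so the trace line appears alongside the output.
out=$(bash -c 'set -o xtrace; echo hello' 2>&1)
# out now contains the traced command followed by its output:
#   + echo hello
#   hello
```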
 
@@ -494,10 +498,6 @@
 # Begin trapping error exit codes
 set -o errexit
 
-# Print the commands being run so that we can see the command that triggers
-# an error.  It is also useful for following along as the install occurs.
-set -o xtrace
-
 # Print the kernel version
 uname -a
 
@@ -683,14 +683,16 @@
 
 # OpenStack uses a fair number of other projects.
 
+# Bring down global requirements before any use of pip_install. This is
+# necessary to ensure that the constraints file is in place before we
+# attempt to apply any constraints to pip installs.
+git_clone $REQUIREMENTS_REPO $REQUIREMENTS_DIR $REQUIREMENTS_BRANCH
+
 # Install package requirements
 # Source it so the entire environment is available
 echo_summary "Installing package prerequisites"
 source $TOP_DIR/tools/install_prereqs.sh
 
-# Normalise USE_CONSTRAINTS
-USE_CONSTRAINTS=$(trueorfalse False USE_CONSTRAINTS)
-
 # Configure an appropriate Python environment
 if [[ "$OFFLINE" != "True" ]]; then
     PYPI_ALTERNATIVE_URL=${PYPI_ALTERNATIVE_URL:-""} $TOP_DIR/tools/install_pip.sh
@@ -750,6 +752,7 @@
 install_oslo
 
 # Install client libraries
+install_keystoneauth
 install_keystoneclient
 install_glanceclient
 install_cinderclient
@@ -1420,7 +1423,7 @@
 # If you installed Horizon on this server you should be able
 # to access the site using your browser.
 if is_service_enabled horizon; then
-    echo "Horizon is now available at http://$SERVICE_HOST/"
+    echo "Horizon is now available at http://$SERVICE_HOST$HORIZON_APACHE_ROOT"
 fi
 
 # If Keystone is present you can point ``nova`` cli to this server
diff --git a/stackrc b/stackrc
index d16fcf6..156cb1f 100644
--- a/stackrc
+++ b/stackrc
@@ -87,6 +87,9 @@
 # Set the default Nova APIs to enable
 NOVA_ENABLED_APIS=ec2,osapi_compute,metadata
 
+# Set the root URL for Horizon
+HORIZON_APACHE_ROOT="/dashboard"
+
 # Whether to use 'dev mode' for screen windows. Dev mode works by
 # stuffing text into the screen windows so that a developer can use
 # ctrl-c, up-arrow, enter to restart the service. Starting services
@@ -149,13 +152,6 @@
 # Zero disables timeouts
 GIT_TIMEOUT=${GIT_TIMEOUT:-0}
 
-# Constraints mode
-# - False (default) : update git projects dependencies from global-requirements.
-#
-# - True : use upper-constraints.txt to constrain versions of packages intalled
-#          and do not edit projects at all.
-USE_CONSTRAINTS=$(trueorfalse False USE_CONSTRAINTS)
-
 # Repositories
 # ------------
 
@@ -163,6 +159,9 @@
 # Another option is https://git.openstack.org
 GIT_BASE=${GIT_BASE:-git://git.openstack.org}
 
+# The location of REQUIREMENTS once cloned
+REQUIREMENTS_DIR=$DEST/requirements
+
 # Which libraries should we install from git instead of using released
 # versions on pypi?
 #
@@ -280,6 +279,10 @@
 GITREPO["python-ironicclient"]=${IRONICCLIENT_REPO:-${GIT_BASE}/openstack/python-ironicclient.git}
 GITBRANCH["python-ironicclient"]=${IRONICCLIENT_BRANCH:-master}
 
+# the base authentication plugins that clients use to authenticate
+GITREPO["keystoneauth"]=${KEYSTONEAUTH_REPO:-${GIT_BASE}/openstack/keystoneauth.git}
+GITBRANCH["keystoneauth"]=${KEYSTONEAUTH_BRANCH:-master}
+
 # python keystone client library to nova that horizon uses
 GITREPO["python-keystoneclient"]=${KEYSTONECLIENT_REPO:-${GIT_BASE}/openstack/python-keystoneclient.git}
 GITBRANCH["python-keystoneclient"]=${KEYSTONECLIENT_BRANCH:-master}
@@ -623,9 +626,6 @@
 # Set default screen name
 SCREEN_NAME=${SCREEN_NAME:-stack}
 
-# Undo requirements changes by global requirements
-UNDO_REQUIREMENTS=${UNDO_REQUIREMENTS:-True}
-
 # Allow the use of an alternate protocol (such as https) for service endpoints
 SERVICE_PROTOCOL=${SERVICE_PROTOCOL:-http}
 
diff --git a/tests/test_libs_from_pypi.sh b/tests/test_libs_from_pypi.sh
index 8dc3ba3..d10cd0e 100755
--- a/tests/test_libs_from_pypi.sh
+++ b/tests/test_libs_from_pypi.sh
@@ -41,6 +41,7 @@
 ALL_LIBS+=" python-neutronclient tooz ceilometermiddleware oslo.policy"
 ALL_LIBS+=" debtcollector os-brick automaton futurist oslo.service"
 ALL_LIBS+=" oslo.cache oslo.reports"
+ALL_LIBS+=" keystoneauth"
 
 # Generate the above list with
 # echo ${!GITREPO[@]}
diff --git a/tools/install_pip.sh b/tools/install_pip.sh
index 0f7c962..7b42c8c 100755
--- a/tools/install_pip.sh
+++ b/tools/install_pip.sh
@@ -20,7 +20,7 @@
 cd $TOP_DIR
 
 # Import common functions
-source $TOP_DIR/functions
+source $TOP_DIR/stackrc
 
 FILES=$TOP_DIR/files
 
diff --git a/tools/ping_neutron.sh b/tools/ping_neutron.sh
index d36b7f6..dba7502 100755
--- a/tools/ping_neutron.sh
+++ b/tools/ping_neutron.sh
@@ -51,15 +51,15 @@
     usage
 fi
 
-REMANING_ARGS="${@:2}"
+REMAINING_ARGS="${@:2}"
 
 # BUG: with duplicate network names, this fails pretty hard.
-NET_ID=$(neutron net-list $NET_NAME | grep "$NET_NAME" | awk '{print $2}')
+NET_ID=$(neutron net-list | grep "$NET_NAME" | awk '{print $2}')
 PROBE_ID=$(neutron-debug probe-list -c id -c network_id | grep "$NET_ID" | awk '{print $2}' | head -n 1)
 
 # This runs a command inside the specific netns
 NET_NS_CMD="ip netns exec qprobe-$PROBE_ID"
 
-PING_CMD="sudo $NET_NS_CMD ping $REMAING_ARGS"
+PING_CMD="sudo $NET_NS_CMD ping $REMAINING_ARGS"
 echo "Running $PING_CMD"
 $PING_CMD
diff --git a/tools/worlddump.py b/tools/worlddump.py
index e4ba02b..1b337a9 100755
--- a/tools/worlddump.py
+++ b/tools/worlddump.py
@@ -31,12 +31,19 @@
     parser.add_argument('-d', '--dir',
                         default='.',
                         help='Output directory for worlddump')
+    parser.add_argument('-n', '--name',
+                        default='',
+                        help='Additional name to tag into file')
     return parser.parse_args()
 
 
-def filename(dirname):
+def filename(dirname, name=""):
     now = datetime.datetime.utcnow()
-    return os.path.join(dirname, now.strftime("worlddump-%Y-%m-%d-%H%M%S.txt"))
+    fmt = "worlddump-%Y-%m-%d-%H%M%S"
+    if name:
+        fmt += "-" + name
+    fmt += ".txt"
+    return os.path.join(dirname, now.strftime(fmt))
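The ``filename()`` change folds the optional ``--name`` tag into the timestamp pattern before formatting. The same construction in shell, for comparison (the helper name is made up):

```shell
# Build a worlddump-style filename, optionally tagged with a name.
dump_filename() {
    local dirname=$1 name=$2
    local fmt="worlddump-%Y-%m-%d-%H%M%S"
    if [ -n "$name" ]; then
        fmt="$fmt-$name"
    fi
    printf '%s/%s.txt\n' "$dirname" "$(date -u +"$fmt")"
}
```

Appending the tag to the format string (rather than to the finished filename) keeps the extension handling in one place, which is the same design choice the Python change makes.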
 
 
 def warn(msg):
@@ -78,6 +85,11 @@
     print dfraw
 
 
+def ebtables_dump():
+    _header("EB Tables Dump")
+    _dump_cmd("sudo ebtables -L")
+
+
 def iptables_dump():
     tables = ['filter', 'nat', 'mangle']
     _header("IP Tables Dump")
@@ -125,7 +137,7 @@
 
 def main():
     opts = get_options()
-    fname = filename(opts.dir)
+    fname = filename(opts.dir, opts.name)
     print "World dumping... see %s for details" % fname
     sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
     with open(fname, 'w') as f:
@@ -134,6 +146,7 @@
         process_list()
         network_dump()
         iptables_dump()
+        ebtables_dump()
         compute_consoles()
         guru_meditation_report()