Merge "lib/neutron: configure root_helper for agents"
diff --git a/.gitignore b/.gitignore
index d1781bc..d2c127d 100644
--- a/.gitignore
+++ b/.gitignore
@@ -23,6 +23,8 @@
 files/pip-*
 files/get-pip.py*
 files/ir-deploy*
+files/ironic-inspector*
+files/etcd*
 local.conf
 local.sh
 localrc
diff --git a/HACKING.rst b/HACKING.rst
index b76cb6c..d5d6fbc 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -20,7 +20,7 @@
 contains the usual links for blueprints, bugs, etc.
 
 __ contribute_
-.. _contribute: http://docs.openstack.org/infra/manual/developers.html
+.. _contribute: https://docs.openstack.org/infra/manual/developers.html
 
 __ lp_
 .. _lp: https://launchpad.net/~devstack
@@ -255,7 +255,7 @@
 * The ``OS_*`` environment variables should be the only ones used for all
   authentication to OpenStack clients as documented in the CLIAuth_ wiki page.
 
-.. _CLIAuth: http://wiki.openstack.org/CLIAuth
+.. _CLIAuth: https://wiki.openstack.org/CLIAuth
 
 * The exercise MUST clean up after itself if successful.  If it is not successful,
   it is assumed that state will be left behind; this allows a chance for developers
@@ -322,7 +322,7 @@
 
 
 Review Criteria
-===============
+---------------
 
 There are some broad criteria that will be followed when reviewing
 your change
@@ -364,3 +364,26 @@
 
 * **Reviewers** -- please see ``MAINTAINERS.rst`` for a list of people
   that should be added to reviews of various sub-systems.
+
+
+Making Changes, Testing, and CI
+-------------------------------
+
+Changes to Devstack are tested by automated continuous integration jobs
+that run on a variety of Linux distros using a handful of common
+configurations. This means that every change to Devstack is
+self-testing. One major benefit of this is that developers do not
+typically need to add new non-voting test jobs to add features to
+Devstack. Instead the features can be added, and if testing passes
+with the feature enabled, the change is ready to merge (pending code
+review).
+
+A concrete example of this was the switch from screen-based service
+management to systemd-based service management. No new jobs were
+created for this. Instead the features were added to devstack and
+tested locally and in CI using a change that enabled the feature;
+once the enabling change was passing and the new behavior had been
+communicated and documented, it was merged.
+
+This process has proven effective and leads to quicker
+implementation of desired features.
diff --git a/README.md b/README.rst
similarity index 84%
rename from README.md
rename to README.rst
index ff5598b..6885546 100644
--- a/README.md
+++ b/README.rst
@@ -1,6 +1,8 @@
-DevStack is a set of scripts and utilities to quickly deploy an OpenStack cloud.
+DevStack is a set of scripts and utilities to quickly deploy an OpenStack cloud
+from git source trees.
 
-# Goals
+Goals
+=====
 
 * To quickly build dev OpenStack environments in a clean Ubuntu or Fedora
   environment
@@ -13,21 +15,22 @@
 * To provide an environment for the OpenStack CI testing on every commit
   to the projects
 
-Read more at http://docs.openstack.org/developer/devstack
+Read more at https://docs.openstack.org/devstack/latest
 
 IMPORTANT: Be sure to carefully read `stack.sh` and any other scripts you
 execute before you run them, as they install software and will alter your
 networking configuration.  We strongly recommend that you run `stack.sh`
 in a clean and disposable vm when you are first getting started.
 
-# Versions
+Versions
+========
 
 The DevStack master branch generally points to trunk versions of OpenStack
 components.  For older, stable versions, look for branches named
 stable/[release] in the DevStack repo.  For example, you can do the
-following to create a Newton OpenStack cloud:
+following to create a Pike OpenStack cloud::
 
-    git checkout stable/newton
+    git checkout stable/pike
     ./stack.sh
 
 You can also pick specific OpenStack project releases by setting the appropriate
@@ -38,7 +41,8 @@
     GLANCE_REPO=git://git.openstack.org/openstack/glance.git
     GLANCE_BRANCH=milestone-proposed
 
-# Start A Dev Cloud
+Start A Dev Cloud
+=================
 
 Installing in a dedicated disposable VM is safer than installing on your
 dev machine!  Plus you can pick one of the supported Linux distros for
@@ -51,17 +55,18 @@
 endpoints, like so:
 
 * Horizon: http://myhost/
-* Keystone: http://myhost:5000/v2.0/
+* Keystone: http://myhost/identity/v2.0/
 
 We also provide an environment file that you can use to interact with your
-cloud via CLI:
+cloud via CLI::
 
     # source openrc file to load your environment with OpenStack CLI creds
     . openrc
     # list instances
-    nova list
+    openstack server list
 
-# DevStack Execution Environment
+DevStack Execution Environment
+==============================
 
 DevStack runs rampant over the system it runs on, installing things and
 uninstalling other things.  Running this on a system you care about is a recipe
@@ -81,10 +86,12 @@
 it runs under.  Many people simply use their usual login (the default
 'ubuntu' login on a UEC image for example).
 
-# Customizing
+Customizing
+===========
 
 DevStack can be extensively configured via the configuration file
 `local.conf`.  It is likely that you will need to provide and modify
 this file if you want anything other than the most basic setup.  Start
-by reading the [configuration guide](doc/source/configuration.rst) for
-details of the configuration file and the many available options.
+by reading the `configuration guide
+<https://docs.openstack.org/devstack/latest/configuration.html>`_
+for details of the configuration file and the many available options.
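+
+A minimal `local.conf` might look like the following (a sketch; see the
+configuration guide for all options)::
+
+    [[local|localrc]]
+    ADMIN_PASSWORD=secret
+    DATABASE_PASSWORD=$ADMIN_PASSWORD
+    RABBIT_PASSWORD=$ADMIN_PASSWORD
+    SERVICE_PASSWORD=$ADMIN_PASSWORD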
diff --git a/clean.sh b/clean.sh
index 90b21eb..2333596 100755
--- a/clean.sh
+++ b/clean.sh
@@ -64,13 +64,8 @@
     done
 fi
 
-# See if there is anything running...
-# need to adapt when run_service is merged
-SESSION=$(screen -ls | awk '/[0-9].stack/ { print $1 }')
-if [[ -n "$SESSION" ]]; then
-    # Let unstack.sh do its thing first
-    $TOP_DIR/unstack.sh --all
-fi
+# Let unstack.sh do its thing first
+$TOP_DIR/unstack.sh --all
 
 # Run extras
 # ==========
@@ -93,6 +88,7 @@
 cleanup_glance
 cleanup_keystone
 cleanup_nova
+cleanup_placement
 cleanup_neutron
 cleanup_swift
 cleanup_horizon
@@ -130,6 +126,13 @@
     sudo rm -rf $SCREEN_LOGDIR
 fi
 
+# Clean out the systemd user unit files if systemd was used.
+if [[ "$USE_SYSTEMD" = "True" ]]; then
+    sudo find $SYSTEMD_DIR -type f -name '*devstack@*service' -delete
+    # Make systemd aware of the deletion.
+    $SYSTEMCTL daemon-reload
+fi
+
 # Clean up venvs
 DIRS_TO_CLEAN="$WHEELHOUSE ${PROJECT_VENV[@]} .config/openstack"
 rm -rf $DIRS_TO_CLEAN
diff --git a/doc/source/conf.py b/doc/source/conf.py
index 6e3ec02..780237f 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -26,7 +26,13 @@
 
 # Add any Sphinx extension module names here, as strings. They can be extensions
 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
-extensions = [ 'oslosphinx', 'sphinxcontrib.blockdiag', 'sphinxcontrib.nwdiag' ]
+extensions = [ 'openstackdocstheme', 'sphinxcontrib.blockdiag', 'sphinxcontrib.nwdiag' ]
+
+# openstackdocstheme options
+repository_name = 'openstack-dev/devstack'
+bug_project = 'devstack'
+bug_tag = ''
+html_last_updated_fmt = '%Y-%m-%d %H:%M'
 
 todo_include_todos = True
 
@@ -87,7 +93,7 @@
 
 # The theme to use for HTML and HTML Help pages.  See the documentation for
 # a list of builtin themes.
-html_theme = 'nature'
+html_theme = 'openstackdocs'
 
 # Theme options are theme-specific and customize the look and feel of a theme
 # further.  For a list of options available for each theme, see the
diff --git a/doc/source/configuration.rst b/doc/source/configuration.rst
index 53ae82f..23f680a 100644
--- a/doc/source/configuration.rst
+++ b/doc/source/configuration.rst
@@ -136,7 +136,7 @@
 
     ::
 
-        OS_AUTH_URL=http://$SERVICE_HOST:5000/v2.0
+        OS_AUTH_URL=http://$SERVICE_HOST:5000/v3
 
 KEYSTONECLIENT\_DEBUG, NOVACLIENT\_DEBUG
     Set command-line client log level to ``DEBUG``. These are commented
@@ -195,6 +195,9 @@
 Setting it here also makes it available for ``openrc`` to set ``OS_AUTH_URL``.
 ``HOST_IPV6`` is not set by default.
 
+For architecture-specific configurations which differ from the x86
+defaults described here, see `arch-configuration`_.
+
 Historical Notes
 ================
 
@@ -278,43 +281,22 @@
 
         LOGDAYS=1
 
-The some of the project logs (Nova, Cinder, etc) will be colorized by
-default (if ``SYSLOG`` is not set below); this can be turned off by
-setting ``LOG_COLOR`` to ``False``.
-
-    ::
+Some coloring is used during the DevStack runs to make it easier to
+see what is going on. This can be disabled with::
 
         LOG_COLOR=False
 
 Logging the Service Output
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-DevStack will log the ``stdout`` output of the services it starts.
-When using ``screen`` this logs the output in the screen windows to a
-file.  Without ``screen`` this simply redirects stdout of the service
-process to a file in ``LOGDIR``.
+By default, services run under ``systemd`` and log natively to
+the systemd journal.
 
-    ::
+To query the logs, use the ``journalctl`` command, for example::
 
-        LOGDIR=$DEST/logs
+  journalctl --unit devstack@*
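+
+For example, to follow the logs of just the Nova API service as they
+are written (a sketch assuming the default unit naming)::
+
+  journalctl -f --unit devstack@n-api.service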
 
-Note the use of ``DEST`` to locate the main install directory; this
-is why we suggest setting it in ``local.conf``.
-
-Enabling Syslog
-~~~~~~~~~~~~~~~
-
-Logging all services to a single syslog can be convenient. Enable
-syslogging by setting ``SYSLOG`` to ``True``. If the destination log
-host is not localhost ``SYSLOG_HOST`` and ``SYSLOG_PORT`` can be used
-to direct the message stream to the log host.
-
-    ::
-
-        SYSLOG=True
-        SYSLOG_HOST=$HOST_IP
-        SYSLOG_PORT=516
-
+More examples can be found in :ref:`journalctl-examples`.
 
 Example Logging Configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -326,7 +308,6 @@
 
        [[local|localrc]]
        DEST=/opt/stack/
-       LOGDIR=$DEST/logs
        LOGFILE=$LOGDIR/stack.sh.log
        LOG_COLOR=False
 
@@ -587,9 +568,7 @@
 
 Swift is disabled by default.  When enabled, it is configured with
 only one replica to avoid being IO/memory intensive on a small
-VM. When running with only one replica the account, container and
-object services will run directly in screen. The others services like
-replicator, updaters or auditor runs in background.
+VM.
 
 If you would like to enable Swift you can add this to your ``localrc``
 section:
@@ -630,32 +609,9 @@
 act as a S3 endpoint for Keystone so effectively replacing the
 ``nova-objectstore``.
 
-Only Swift proxy server is launched in the screen session all other
+Only the Swift proxy server is launched as a systemd unit; all other
 services are started in background and managed by ``swift-init`` tool.
 
-Heat
-~~~~
-
-Heat is disabled by default (see ``stackrc`` file). To enable it
-explicitly you'll need the following settings in your ``localrc``
-section
-
-::
-
-    enable_service heat h-api h-api-cfn h-api-cw h-eng
-
-Heat can also run in standalone mode, and be configured to orchestrate
-on an external OpenStack cloud. To launch only Heat in standalone mode
-you'll need the following settings in your ``localrc`` section
-
-::
-
-    disable_all_services
-    enable_service rabbit mysql heat h-api h-api-cfn h-api-cw h-eng
-    HEAT_STANDALONE=True
-    KEYSTONE_SERVICE_HOST=...
-    KEYSTONE_AUTH_HOST=...
-
 Tempest
 ~~~~~~~
 
@@ -796,3 +752,69 @@
     ::
 
         TERMINATE_TIMEOUT=30
+
+
+.. _arch-configuration:
+
+Architectures
+-------------
+
+The upstream CI runs exclusively on nodes with x86 architectures, but
+OpenStack supports additional architectures as well. Some of them
+require Devstack to be configured in a specific way.
+
+KVM on s390x (IBM z Systems)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+KVM on s390x (IBM z Systems) is supported since the *Kilo* release. For
+an all-in-one setup, these minimal settings in the ``local.conf`` file
+are needed::
+
+    [[local|localrc]]
+    ADMIN_PASSWORD=secret
+    DATABASE_PASSWORD=$ADMIN_PASSWORD
+    RABBIT_PASSWORD=$ADMIN_PASSWORD
+    SERVICE_PASSWORD=$ADMIN_PASSWORD
+
+    DOWNLOAD_DEFAULT_IMAGES=False
+    IMAGE_URLS="https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-s390x-disk1.img"
+
+    # Provide a custom etcd3 binary download URL and its sha256.
+    # The binary must be located under '/<etcd version>/etcd-<etcd-version>-linux-s390x.tar.gz'
+    # on this URL.
+    # Build instructions for etcd3: https://github.com/linux-on-ibm-z/docs/wiki/Building-etcd
+    ETCD_DOWNLOAD_URL=<your-etcd-download-url>
+    ETCD_SHA256=<your-etcd3-sha256>
+
+    enable_service n-sproxy
+    disable_service n-novnc
+
+    [[post-config|$NOVA_CONF]]
+
+    [serial_console]
+    base_url=ws://$HOST_IP:6083/  # optional
+
+Reasoning:
+
+* The default image of Devstack is x86 only, so we deactivate the
+  download with ``DOWNLOAD_DEFAULT_IMAGES=False``. The referenced guest
+  image in the code above (``IMAGE_URLS``) serves only as an example;
+  the list of possible s390x guest images is not limited to it.
+
+* This platform doesn't support a graphical console like VNC or SPICE.
+  The technical reason is the missing framebuffer on the platform. This
+  means we rely on the substitute feature *serial console*, which needs
+  the proxy service ``n-sproxy``. We also disable VNC's proxy ``n-novnc``
+  for that reason. The configuration in the ``post-config`` section is
+  only needed if you want to use the *serial console* outside of the
+  all-in-one setup.
+
+* A link to an etcd3 binary and its sha256 needs to be provided as the
+  binary for s390x is not hosted on GitHub like it is for other
+  architectures. For more details see
+  https://bugs.launchpad.net/devstack/+bug/1693192. Etcd3 can easily be
+  built by following https://github.com/linux-on-ibm-z/docs/wiki/Building-etcd.
+
+.. note:: To run *Tempest* against this *Devstack* all-in-one, you'll need
+   to use a guest image which is smaller than 1GB when uncompressed.
+   The example image from above is bigger than that!
diff --git a/doc/source/development.rst b/doc/source/development.rst
index 776ac6c..957de9b 100644
--- a/doc/source/development.rst
+++ b/doc/source/development.rst
@@ -8,56 +8,33 @@
 Inspecting Services
 ===================
 
-By default most services in DevStack are running in a `screen
-<https://www.gnu.org/software/screen/manual/screen.html>`_
-session.
+By default most services in DevStack run as ``systemd`` units
+named ``devstack@$servicename.service``. You can see running services
+with:
 
 .. code-block:: bash
 
-   os3:~> screen -list
-   There is a screen on:
-        28994.stack	(08/10/2016 09:01:33 PM)	(Detached)
-   1 Socket in /var/run/screen/S-sdague.
+   sudo systemctl status "devstack@*"
 
-You can attach to this screen session using ``screen -r`` which gives
-you a view of the services in action.
-
-.. image:: assets/images/screen_session_1.png
-   :width: 100%
-
-Basic Screen Commands
----------------------
-
-The following minimal commands will be useful to using screen:
-
-* ``ctrl-a n`` - go to next window. Next is assumed to be right of
-  current window.
-* ``ctrl-a p`` - go to previous window. Previous is assumed to be left
-  of current window.
-* ``ctrl-a [`` - entry copy/scrollback mode. This allows you to
-  navigate back through the logs with the up arrow.
-* ``ctrl-a d`` - detach from screen. Gets you back to a normal
-  terminal, while leaving everything running.
-
-For more about using screen, see the excellent `screen manual
-<https://www.gnu.org/software/screen/manual/screen.html>`_.
+To learn more about the basics of systemd, see :doc:`/systemd`.
 
 Patching a Service
 ==================
 
 If you want to make a quick change to a running service the easiest
-way to do this is:
+way to do that is to change the code directly in ``/opt/stack/$service``
+and then restart the affected daemons.
 
-* attach to screen
-* navigate to the window in question
-* ``ctrl-c`` to kill the service
-* make appropriate changes to the code
-* ``up arrow`` in the screen window to display the command used to run
-  that service
-* ``enter`` to restart the service
+.. code-block:: bash
 
-This works for services, except those running under Apache (currently
-just ``keystone`` by default).
+   sudo systemctl restart devstack@n-cpu.service
+
+If your change impacts more than one daemon, you can restart them by
+wildcard as well.
+
+.. code-block:: bash
+
+   sudo systemctl restart "devstack@n-*"
 
 .. warning::
 
@@ -102,14 +79,6 @@
    NOVA_BRANCH=refs/changes/10/353710/1
 
 
-Testing Changes to Apache Based Services
-========================================
-
-When testing changes to Apache based services, such as ``keystone``,
-you can either use the Testing a Patch Series approach above, or make
-changes in the code tree and issue an apache restart.
-
-
 Testing Changes to Libraries
 ============================
 
@@ -132,9 +101,17 @@
    OSLOPOLICY_REPO=/home/sdague/oslo.policy
    OSLOPOLICY_BRANCH=better_exception
 
-Because libraries are used by many services, library changes really
-need to go through a full ``./unstack.sh && ./stack.sh`` to see your
-changes in action.
+As libraries are not installed in *editable* mode by pip, after you
+make any local changes you will need to:
 
-To figure out the repo / branch names for every library that's
-supported, you'll need to read the devstack source.
+* cd to the top of the library directory
+* run ``sudo pip install -U .``
+* restart all services that should use the new library
+
+You can do that with wildcards such as
+
+.. code-block:: bash
+
+   sudo systemctl restart "devstack@n-*"
+
+which will restart all nova services.
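+
+For example, assuming the ``oslo.policy`` checkout from above lives in
+/home/sdague/oslo.policy, the full update might look like:
+
+.. code-block:: bash
+
+   cd /home/sdague/oslo.policy
+   sudo pip install -U .
+   sudo systemctl restart "devstack@n-*"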
diff --git a/doc/source/faq.rst b/doc/source/faq.rst
index 7793d8e..ed9b4da 100644
--- a/doc/source/faq.rst
+++ b/doc/source/faq.rst
@@ -32,17 +32,18 @@
 `git.openstack.org
 <https://git.openstack.org/cgit/openstack-dev/devstack>`__ and bug
 reports go to `LaunchPad
-<http://bugs.launchpad.net/devstack/>`__. Contributions follow the
+<https://bugs.launchpad.net/devstack/>`__. Contributions follow the
 usual process as described in the `developer guide
-<http://docs.openstack.org/infra/manual/developers.html>`__. This
+<https://docs.openstack.org/infra/manual/developers.html>`__. This
 Sphinx documentation is housed in the doc directory.
 
 Why not use packages?
 ~~~~~~~~~~~~~~~~~~~~~
 
 Unlike packages, DevStack leaves your cloud ready to develop -
-checkouts of the code and services running in screen. However, many
-people are doing the hard work of packaging and recipes for production
+checkouts of the code and services running locally under systemd,
+making it easy to hack on and test new patches. However, many people
+are doing the hard work of packaging and recipes for production
 deployments.
 
 Why isn't $MY\_FAVORITE\_DISTRO supported?
@@ -130,8 +131,8 @@
 DevStack master tracks the upstream master of all the projects. If you
 would like to run a stable branch of OpenStack, you should use the
 corresponding stable branch of DevStack as well. For instance the
-``stable/kilo`` version of DevStack will already default to all the
-projects running at ``stable/kilo`` levels.
+``stable/ocata`` version of DevStack will already default to all the
+projects running at ``stable/ocata`` levels.
 
 Note: it's also possible to manually adjust the ``*_BRANCH`` variables
 further if you would like to test specific milestones, or even custom
@@ -158,16 +159,6 @@
 often good enough for a single-node installation. And in an extreme
 case, use ``clean.sh`` to eradicate it and try again.
 
-Configure ``local.conf`` thusly:
-
-    ::
-
-        [[local|localrc]]
-        HEAT_STANDALONE=True
-        ENABLED_SERVICES=rabbit,mysql,heat,h-api,h-api-cfn,h-api-cw,h-eng
-        KEYSTONE_SERVICE_HOST=<keystone-host>
-        KEYSTONE_AUTH_HOST=<keystone-host>
-
 Why are my configuration changes ignored?
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/doc/source/guides/devstack-with-lbaas-v2.rst b/doc/source/guides/devstack-with-lbaas-v2.rst
index 21bea99..3592844 100644
--- a/doc/source/guides/devstack-with-lbaas-v2.rst
+++ b/doc/source/guides/devstack-with-lbaas-v2.rst
@@ -39,13 +39,12 @@
     LOGFILE=$DEST/logs/stack.sh.log
     VERBOSE=True
     LOG_COLOR=True
-    SCREEN_LOGDIR=$DEST/logs
     # Pre-requisite
     ENABLED_SERVICES=rabbit,mysql,key
     # Horizon
     ENABLED_SERVICES+=,horizon
     # Nova
-    ENABLED_SERVICES+=,n-api,n-crt,n-cpu,n-cond,n-sch
+    ENABLED_SERVICES+=,n-api,n-cpu,n-cond,n-sch
     # Glance
     ENABLED_SERVICES+=,g-api,g-reg
     # Neutron
diff --git a/doc/source/guides/multinode-lab.rst b/doc/source/guides/multinode-lab.rst
index dfc9936..b4e2891 100644
--- a/doc/source/guides/multinode-lab.rst
+++ b/doc/source/guides/multinode-lab.rst
@@ -73,8 +73,7 @@
 
 ::
 
-    groupadd stack
-    useradd -g stack -s /bin/bash -d /opt/stack -m stack
+    useradd -s /bin/bash -d /opt/stack -m stack
 
 This user will be making many changes to your system during installation
 and operation so it needs to have sudo privileges to root without a
@@ -176,7 +175,7 @@
     MYSQL_HOST=$SERVICE_HOST
     RABBIT_HOST=$SERVICE_HOST
     GLANCE_HOSTPORT=$SERVICE_HOST:9292
-    ENABLED_SERVICES=n-cpu,n-net,n-api-meta,c-vol
+    ENABLED_SERVICES=n-cpu,q-agt,n-api-meta,c-vol,placement-client
     NOVA_VNC_ENABLED=True
     NOVNCPROXY_URL="http://$SERVICE_HOST:6080/vnc_auto.html"
     VNCSERVER_LISTEN=$HOST_IP
@@ -198,6 +197,22 @@
 to poke at your shiny new OpenStack. The most recent log file is
 available in ``stack.sh.log``.
 
+Starting in the Ocata release, Nova requires a `Cells v2`_ deployment. Compute
+node services must be mapped to a cell before they can be used.
+
+After each compute node is stacked, verify it shows up in the
+``nova service-list --binary nova-compute`` output. The compute service is
+registered in the cell database asynchronously so this may require polling.
+
+Once the compute node services shows up, run the ``./tools/discover_hosts.sh``
+script from the control node to map compute hosts to the single cell.
+
+The compute service running on the primary control node will be
+discovered automatically when the control node is stacked, so this
+really only needs to be performed for subnodes.
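+
+For example, from the control node (a sketch; exact output will vary)::
+
+    # poll until the new compute service appears
+    nova service-list --binary nova-compute
+    # then map the compute hosts to the cell
+    ./tools/discover_hosts.sh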
+
+.. _Cells v2: https://docs.openstack.org/nova/latest/user/cells.html
+
 Cleaning Up After DevStack
 --------------------------
 
diff --git a/doc/source/guides/nova.rst b/doc/source/guides/nova.rst
index a91e0d1..0f105d7 100644
--- a/doc/source/guides/nova.rst
+++ b/doc/source/guides/nova.rst
@@ -13,7 +13,7 @@
 <http://specs.openstack.org/openstack/nova-specs/specs/juno/implemented/serial-ports.html>`_
 to allow read/write access to the serial console of an instance via
 `nova-serialproxy
-<http://docs.openstack.org/developer/nova/man/nova-serialproxy.html>`_.
+<https://docs.openstack.org/nova/latest/cli/nova-serialproxy.html>`_.
 
 The service can be enabled by adding ``n-sproxy`` to
 ``ENABLED_SERVICES``.  Further options can be enabled via
@@ -62,11 +62,9 @@
 
 Enabling the service is enough to be functional for a single machine DevStack.
 
-These config options are defined in `nova.console.serial
-<https://github.com/openstack/nova/blob/master/nova/console/serial.py#L33-L52>`_
-and `nova.cmd.serialproxy
-<https://github.com/openstack/nova/blob/master/nova/cmd/serialproxy.py#L26-L33>`_.
+These config options are defined in `nova.conf.serial_console
+<https://github.com/openstack/nova/blob/master/nova/conf/serial_console.py>`_.
 
 For more information on OpenStack configuration see the `OpenStack
-Configuration Reference
-<http://docs.openstack.org/trunk/config-reference/content/list-of-compute-config-options.html>`_
+Compute Service Configuration Reference
+<https://docs.openstack.org/nova/latest/admin/configuration/index.html>`_
diff --git a/doc/source/guides/single-machine.rst b/doc/source/guides/single-machine.rst
index 011c41f..48a4fa8 100644
--- a/doc/source/guides/single-machine.rst
+++ b/doc/source/guides/single-machine.rst
@@ -47,7 +47,7 @@
 
 ::
 
-    adduser stack
+    useradd -s /bin/bash -d /opt/stack -m stack
 
 Since this user will be making many changes to your system, it will need
 to have sudo privileges:
diff --git a/doc/source/index.rst b/doc/source/index.rst
index edd6595..47087c5 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -39,7 +39,7 @@
 -------------
 
 Start with a clean and minimal install of a Linux system. Devstack
-attempts to support Ubuntu 14.04/16.04, Fedora 23/24, CentOS/RHEL 7,
+attempts to support Ubuntu 16.04/17.04, Fedora 24/25, CentOS/RHEL 7,
 as well as Debian and OpenSUSE.
 
 If you do not have a preference, Ubuntu 16.04 is the most tested, and
@@ -56,14 +56,14 @@
 
 ::
 
-   $ sudo adduser stack
+   $ sudo useradd -s /bin/bash -d /opt/stack -m stack
 
 Since this user will be making many changes to your system, it should
 have sudo privileges:
 
 ::
 
-    $ sudo tee <<<"stack ALL=(ALL) NOPASSWD: ALL" /etc/sudoers
+    $ echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack
     $ sudo su - stack
 
 Download DevStack
@@ -142,3 +142,12 @@
 Get :doc:`the big picture <overview>` of what we are trying to do
 with devstack, and help us by :doc:`contributing to the project
 <hacking>`.
+
+Contents
+--------
+
+.. toctree::
+   :glob:
+   :maxdepth: 2
+
+   *
diff --git a/doc/source/networking.rst b/doc/source/networking.rst
index bdbeaaa..74010cd 100644
--- a/doc/source/networking.rst
+++ b/doc/source/networking.rst
@@ -69,7 +69,7 @@
 
    This is not a recommended configuration. Because of interactions
    between ovs and bridging, if you reboot your box with active
-   networking you may loose network connectivity to your system.
+   networking you may lose network connectivity to your system.
 
 If you need your guests accessible on the network, but only have 1
 interface (using something like a NUC), you can share your one
diff --git a/doc/source/overview.rst b/doc/source/overview.rst
index d245035..c07a8e6 100644
--- a/doc/source/overview.rst
+++ b/doc/source/overview.rst
@@ -20,11 +20,11 @@
 
 *The OpenStack Technical Committee (TC) has defined the current CI
 strategy to include the latest Ubuntu release and the latest RHEL
-release (for Python 2.6 testing).*
+release.*
 
 -  Ubuntu: current LTS release plus current development release
 -  Fedora: current release plus previous release
--  RHEL: current major release
+-  RHEL/CentOS: current major release
 -  Other OS platforms may continue to be included but the maintenance of
    those platforms shall not be assumed simply due to their presence.
    Having a listed point-of-contact for each additional OS will greatly
@@ -38,7 +38,6 @@
 *As packaged by the host OS*
 
 -  MySQL
--  PostgreSQL
 
 Queues
 ------
@@ -46,7 +45,6 @@
 *As packaged by the host OS*
 
 -  Rabbit
--  Qpid
 
 Web Server
 ----------
@@ -58,9 +56,6 @@
 OpenStack Network
 -----------------
 
-*Defaults to nova network, optionally use neutron*
-
--  Nova Network: FlatDHCP
 -  Neutron: A basic configuration approximating the original FlatDHCP
    mode using linuxbridge or OpenVSwitch.
 
@@ -68,9 +63,8 @@
 --------
 
 The default services configured by DevStack are Identity (keystone),
-Object Storage (swift), Image Service (glance), Block Storage (cinder),
-Compute (nova), Networking (nova), Dashboard (horizon), Orchestration
-(heat)
+Object Storage (swift), Image Service (glance), Block Storage
+(cinder), Compute (nova), Networking (neutron), Dashboard (horizon).
 
 Additional services not included directly in DevStack can be tied in to
 ``stack.sh`` using the :doc:`plugin mechanism <plugins>` to call
@@ -80,8 +74,7 @@
 -------------------
 
 -  single node
--  multi-node is not tested regularly by the core team, and even then
-   only minimal configurations are reviewed
+-  multi-node configurations as tested by the gate
 
 Exercises
 ---------
diff --git a/doc/source/plugin-registry.rst b/doc/source/plugin-registry.rst
index 17da67b..6aa2e93 100644
--- a/doc/source/plugin-registry.rst
+++ b/doc/source/plugin-registry.rst
@@ -39,18 +39,22 @@
 collectd-ceilometer-plugin             `git://git.openstack.org/openstack/collectd-ceilometer-plugin <https://git.openstack.org/cgit/openstack/collectd-ceilometer-plugin>`__
 congress                               `git://git.openstack.org/openstack/congress <https://git.openstack.org/cgit/openstack/congress>`__
 cue                                    `git://git.openstack.org/openstack/cue <https://git.openstack.org/cgit/openstack/cue>`__
+cyborg                                 `git://git.openstack.org/openstack/cyborg <https://git.openstack.org/cgit/openstack/cyborg>`__
 designate                              `git://git.openstack.org/openstack/designate <https://git.openstack.org/cgit/openstack/designate>`__
 devstack-plugin-additional-pkg-repos   `git://git.openstack.org/openstack/devstack-plugin-additional-pkg-repos <https://git.openstack.org/cgit/openstack/devstack-plugin-additional-pkg-repos>`__
 devstack-plugin-amqp1                  `git://git.openstack.org/openstack/devstack-plugin-amqp1 <https://git.openstack.org/cgit/openstack/devstack-plugin-amqp1>`__
 devstack-plugin-bdd                    `git://git.openstack.org/openstack/devstack-plugin-bdd <https://git.openstack.org/cgit/openstack/devstack-plugin-bdd>`__
 devstack-plugin-ceph                   `git://git.openstack.org/openstack/devstack-plugin-ceph <https://git.openstack.org/cgit/openstack/devstack-plugin-ceph>`__
+devstack-plugin-container              `git://git.openstack.org/openstack/devstack-plugin-container <https://git.openstack.org/cgit/openstack/devstack-plugin-container>`__
 devstack-plugin-glusterfs              `git://git.openstack.org/openstack/devstack-plugin-glusterfs <https://git.openstack.org/cgit/openstack/devstack-plugin-glusterfs>`__
 devstack-plugin-hdfs                   `git://git.openstack.org/openstack/devstack-plugin-hdfs <https://git.openstack.org/cgit/openstack/devstack-plugin-hdfs>`__
 devstack-plugin-kafka                  `git://git.openstack.org/openstack/devstack-plugin-kafka <https://git.openstack.org/cgit/openstack/devstack-plugin-kafka>`__
+devstack-plugin-libvirt-qemu           `git://git.openstack.org/openstack/devstack-plugin-libvirt-qemu <https://git.openstack.org/cgit/openstack/devstack-plugin-libvirt-qemu>`__
 devstack-plugin-mariadb                `git://git.openstack.org/openstack/devstack-plugin-mariadb <https://git.openstack.org/cgit/openstack/devstack-plugin-mariadb>`__
 devstack-plugin-nfs                    `git://git.openstack.org/openstack/devstack-plugin-nfs <https://git.openstack.org/cgit/openstack/devstack-plugin-nfs>`__
 devstack-plugin-pika                   `git://git.openstack.org/openstack/devstack-plugin-pika <https://git.openstack.org/cgit/openstack/devstack-plugin-pika>`__
 devstack-plugin-sheepdog               `git://git.openstack.org/openstack/devstack-plugin-sheepdog <https://git.openstack.org/cgit/openstack/devstack-plugin-sheepdog>`__
+devstack-plugin-vmax                   `git://git.openstack.org/openstack/devstack-plugin-vmax <https://git.openstack.org/cgit/openstack/devstack-plugin-vmax>`__
 devstack-plugin-zmq                    `git://git.openstack.org/openstack/devstack-plugin-zmq <https://git.openstack.org/cgit/openstack/devstack-plugin-zmq>`__
 dragonflow                             `git://git.openstack.org/openstack/dragonflow <https://git.openstack.org/cgit/openstack/dragonflow>`__
 drbd-devstack                          `git://git.openstack.org/openstack/drbd-devstack <https://git.openstack.org/cgit/openstack/drbd-devstack>`__
@@ -61,13 +65,14 @@
 fuxi                                   `git://git.openstack.org/openstack/fuxi <https://git.openstack.org/cgit/openstack/fuxi>`__
 gce-api                                `git://git.openstack.org/openstack/gce-api <https://git.openstack.org/cgit/openstack/gce-api>`__
 glare                                  `git://git.openstack.org/openstack/glare <https://git.openstack.org/cgit/openstack/glare>`__
-gnocchi                                `git://git.openstack.org/openstack/gnocchi <https://git.openstack.org/cgit/openstack/gnocchi>`__
 group-based-policy                     `git://git.openstack.org/openstack/group-based-policy <https://git.openstack.org/cgit/openstack/group-based-policy>`__
 heat                                   `git://git.openstack.org/openstack/heat <https://git.openstack.org/cgit/openstack/heat>`__
 horizon-mellanox                       `git://git.openstack.org/openstack/horizon-mellanox <https://git.openstack.org/cgit/openstack/horizon-mellanox>`__
 ironic                                 `git://git.openstack.org/openstack/ironic <https://git.openstack.org/cgit/openstack/ironic>`__
 ironic-inspector                       `git://git.openstack.org/openstack/ironic-inspector <https://git.openstack.org/cgit/openstack/ironic-inspector>`__
 ironic-staging-drivers                 `git://git.openstack.org/openstack/ironic-staging-drivers <https://git.openstack.org/cgit/openstack/ironic-staging-drivers>`__
+ironic-ui                              `git://git.openstack.org/openstack/ironic-ui <https://git.openstack.org/cgit/openstack/ironic-ui>`__
+k8s-cloud-provider                     `git://git.openstack.org/openstack/k8s-cloud-provider <https://git.openstack.org/cgit/openstack/k8s-cloud-provider>`__
 karbor                                 `git://git.openstack.org/openstack/karbor <https://git.openstack.org/cgit/openstack/karbor>`__
 karbor-dashboard                       `git://git.openstack.org/openstack/karbor-dashboard <https://git.openstack.org/cgit/openstack/karbor-dashboard>`__
 keystone                               `git://git.openstack.org/openstack/keystone <https://git.openstack.org/cgit/openstack/keystone>`__
@@ -84,15 +89,18 @@
 mistral                                `git://git.openstack.org/openstack/mistral <https://git.openstack.org/cgit/openstack/mistral>`__
 mixmatch                               `git://git.openstack.org/openstack/mixmatch <https://git.openstack.org/cgit/openstack/mixmatch>`__
 mogan                                  `git://git.openstack.org/openstack/mogan <https://git.openstack.org/cgit/openstack/mogan>`__
+mogan-ui                               `git://git.openstack.org/openstack/mogan-ui <https://git.openstack.org/cgit/openstack/mogan-ui>`__
 monasca-analytics                      `git://git.openstack.org/openstack/monasca-analytics <https://git.openstack.org/cgit/openstack/monasca-analytics>`__
 monasca-api                            `git://git.openstack.org/openstack/monasca-api <https://git.openstack.org/cgit/openstack/monasca-api>`__
 monasca-ceilometer                     `git://git.openstack.org/openstack/monasca-ceilometer <https://git.openstack.org/cgit/openstack/monasca-ceilometer>`__
+monasca-events-api                     `git://git.openstack.org/openstack/monasca-events-api <https://git.openstack.org/cgit/openstack/monasca-events-api>`__
 monasca-log-api                        `git://git.openstack.org/openstack/monasca-log-api <https://git.openstack.org/cgit/openstack/monasca-log-api>`__
 monasca-transform                      `git://git.openstack.org/openstack/monasca-transform <https://git.openstack.org/cgit/openstack/monasca-transform>`__
 murano                                 `git://git.openstack.org/openstack/murano <https://git.openstack.org/cgit/openstack/murano>`__
 networking-6wind                       `git://git.openstack.org/openstack/networking-6wind <https://git.openstack.org/cgit/openstack/networking-6wind>`__
 networking-arista                      `git://git.openstack.org/openstack/networking-arista <https://git.openstack.org/cgit/openstack/networking-arista>`__
 networking-bagpipe                     `git://git.openstack.org/openstack/networking-bagpipe <https://git.openstack.org/cgit/openstack/networking-bagpipe>`__
+networking-baremetal                   `git://git.openstack.org/openstack/networking-baremetal <https://git.openstack.org/cgit/openstack/networking-baremetal>`__
 networking-bgpvpn                      `git://git.openstack.org/openstack/networking-bgpvpn <https://git.openstack.org/cgit/openstack/networking-bgpvpn>`__
 networking-brocade                     `git://git.openstack.org/openstack/networking-brocade <https://git.openstack.org/cgit/openstack/networking-brocade>`__
 networking-calico                      `git://git.openstack.org/openstack/networking-calico <https://git.openstack.org/cgit/openstack/networking-calico>`__
@@ -101,15 +109,17 @@
 networking-dpm                         `git://git.openstack.org/openstack/networking-dpm <https://git.openstack.org/cgit/openstack/networking-dpm>`__
 networking-fortinet                    `git://git.openstack.org/openstack/networking-fortinet <https://git.openstack.org/cgit/openstack/networking-fortinet>`__
 networking-generic-switch              `git://git.openstack.org/openstack/networking-generic-switch <https://git.openstack.org/cgit/openstack/networking-generic-switch>`__
+networking-hpe                         `git://git.openstack.org/openstack/networking-hpe <https://git.openstack.org/cgit/openstack/networking-hpe>`__
 networking-huawei                      `git://git.openstack.org/openstack/networking-huawei <https://git.openstack.org/cgit/openstack/networking-huawei>`__
+networking-hyperv                      `git://git.openstack.org/openstack/networking-hyperv <https://git.openstack.org/cgit/openstack/networking-hyperv>`__
 networking-infoblox                    `git://git.openstack.org/openstack/networking-infoblox <https://git.openstack.org/cgit/openstack/networking-infoblox>`__
 networking-l2gw                        `git://git.openstack.org/openstack/networking-l2gw <https://git.openstack.org/cgit/openstack/networking-l2gw>`__
 networking-midonet                     `git://git.openstack.org/openstack/networking-midonet <https://git.openstack.org/cgit/openstack/networking-midonet>`__
 networking-mlnx                        `git://git.openstack.org/openstack/networking-mlnx <https://git.openstack.org/cgit/openstack/networking-mlnx>`__
 networking-nec                         `git://git.openstack.org/openstack/networking-nec <https://git.openstack.org/cgit/openstack/networking-nec>`__
 networking-odl                         `git://git.openstack.org/openstack/networking-odl <https://git.openstack.org/cgit/openstack/networking-odl>`__
-networking-ofagent                     `git://git.openstack.org/openstack/networking-ofagent <https://git.openstack.org/cgit/openstack/networking-ofagent>`__
 networking-onos                        `git://git.openstack.org/openstack/networking-onos <https://git.openstack.org/cgit/openstack/networking-onos>`__
+networking-opencontrail                `git://git.openstack.org/openstack/networking-opencontrail <https://git.openstack.org/cgit/openstack/networking-opencontrail>`__
 networking-ovn                         `git://git.openstack.org/openstack/networking-ovn <https://git.openstack.org/cgit/openstack/networking-ovn>`__
 networking-ovs-dpdk                    `git://git.openstack.org/openstack/networking-ovs-dpdk <https://git.openstack.org/cgit/openstack/networking-ovs-dpdk>`__
 networking-plumgrid                    `git://git.openstack.org/openstack/networking-plumgrid <https://git.openstack.org/cgit/openstack/networking-plumgrid>`__
@@ -120,19 +130,26 @@
 neutron                                `git://git.openstack.org/openstack/neutron <https://git.openstack.org/cgit/openstack/neutron>`__
 neutron-dynamic-routing                `git://git.openstack.org/openstack/neutron-dynamic-routing <https://git.openstack.org/cgit/openstack/neutron-dynamic-routing>`__
 neutron-fwaas                          `git://git.openstack.org/openstack/neutron-fwaas <https://git.openstack.org/cgit/openstack/neutron-fwaas>`__
+neutron-fwaas-dashboard                `git://git.openstack.org/openstack/neutron-fwaas-dashboard <https://git.openstack.org/cgit/openstack/neutron-fwaas-dashboard>`__
 neutron-lbaas                          `git://git.openstack.org/openstack/neutron-lbaas <https://git.openstack.org/cgit/openstack/neutron-lbaas>`__
 neutron-lbaas-dashboard                `git://git.openstack.org/openstack/neutron-lbaas-dashboard <https://git.openstack.org/cgit/openstack/neutron-lbaas-dashboard>`__
 neutron-vpnaas                         `git://git.openstack.org/openstack/neutron-vpnaas <https://git.openstack.org/cgit/openstack/neutron-vpnaas>`__
+neutron-vpnaas-dashboard               `git://git.openstack.org/openstack/neutron-vpnaas-dashboard <https://git.openstack.org/cgit/openstack/neutron-vpnaas-dashboard>`__
 nova-dpm                               `git://git.openstack.org/openstack/nova-dpm <https://git.openstack.org/cgit/openstack/nova-dpm>`__
 nova-lxd                               `git://git.openstack.org/openstack/nova-lxd <https://git.openstack.org/cgit/openstack/nova-lxd>`__
 nova-mksproxy                          `git://git.openstack.org/openstack/nova-mksproxy <https://git.openstack.org/cgit/openstack/nova-mksproxy>`__
 nova-powervm                           `git://git.openstack.org/openstack/nova-powervm <https://git.openstack.org/cgit/openstack/nova-powervm>`__
 oaktree                                `git://git.openstack.org/openstack/oaktree <https://git.openstack.org/cgit/openstack/oaktree>`__
 octavia                                `git://git.openstack.org/openstack/octavia <https://git.openstack.org/cgit/openstack/octavia>`__
+octavia-dashboard                      `git://git.openstack.org/openstack/octavia-dashboard <https://git.openstack.org/cgit/openstack/octavia-dashboard>`__
+omni                                   `git://git.openstack.org/openstack/omni <https://git.openstack.org/cgit/openstack/omni>`__
 os-xenapi                              `git://git.openstack.org/openstack/os-xenapi <https://git.openstack.org/cgit/openstack/os-xenapi>`__
 osprofiler                             `git://git.openstack.org/openstack/osprofiler <https://git.openstack.org/cgit/openstack/osprofiler>`__
+oswin-tempest-plugin                   `git://git.openstack.org/openstack/oswin-tempest-plugin <https://git.openstack.org/cgit/openstack/oswin-tempest-plugin>`__
 panko                                  `git://git.openstack.org/openstack/panko <https://git.openstack.org/cgit/openstack/panko>`__
+patrole                                `git://git.openstack.org/openstack/patrole <https://git.openstack.org/cgit/openstack/patrole>`__
 picasso                                `git://git.openstack.org/openstack/picasso <https://git.openstack.org/cgit/openstack/picasso>`__
+qinling                                `git://git.openstack.org/openstack/qinling <https://git.openstack.org/cgit/openstack/qinling>`__
 rally                                  `git://git.openstack.org/openstack/rally <https://git.openstack.org/cgit/openstack/rally>`__
 sahara                                 `git://git.openstack.org/openstack/sahara <https://git.openstack.org/cgit/openstack/sahara>`__
 sahara-dashboard                       `git://git.openstack.org/openstack/sahara-dashboard <https://git.openstack.org/cgit/openstack/sahara-dashboard>`__
@@ -141,15 +158,19 @@
 searchlight-ui                         `git://git.openstack.org/openstack/searchlight-ui <https://git.openstack.org/cgit/openstack/searchlight-ui>`__
 senlin                                 `git://git.openstack.org/openstack/senlin <https://git.openstack.org/cgit/openstack/senlin>`__
 solum                                  `git://git.openstack.org/openstack/solum <https://git.openstack.org/cgit/openstack/solum>`__
+stackube                               `git://git.openstack.org/openstack/stackube <https://git.openstack.org/cgit/openstack/stackube>`__
 tacker                                 `git://git.openstack.org/openstack/tacker <https://git.openstack.org/cgit/openstack/tacker>`__
 tap-as-a-service                       `git://git.openstack.org/openstack/tap-as-a-service <https://git.openstack.org/cgit/openstack/tap-as-a-service>`__
+tap-as-a-service-dashboard             `git://git.openstack.org/openstack/tap-as-a-service-dashboard <https://git.openstack.org/cgit/openstack/tap-as-a-service-dashboard>`__
 tricircle                              `git://git.openstack.org/openstack/tricircle <https://git.openstack.org/cgit/openstack/tricircle>`__
 trio2o                                 `git://git.openstack.org/openstack/trio2o <https://git.openstack.org/cgit/openstack/trio2o>`__
 trove                                  `git://git.openstack.org/openstack/trove <https://git.openstack.org/cgit/openstack/trove>`__
 trove-dashboard                        `git://git.openstack.org/openstack/trove-dashboard <https://git.openstack.org/cgit/openstack/trove-dashboard>`__
+valet                                  `git://git.openstack.org/openstack/valet <https://git.openstack.org/cgit/openstack/valet>`__
 vitrage                                `git://git.openstack.org/openstack/vitrage <https://git.openstack.org/cgit/openstack/vitrage>`__
 vitrage-dashboard                      `git://git.openstack.org/openstack/vitrage-dashboard <https://git.openstack.org/cgit/openstack/vitrage-dashboard>`__
 vmware-nsx                             `git://git.openstack.org/openstack/vmware-nsx <https://git.openstack.org/cgit/openstack/vmware-nsx>`__
+vmware-vspc                            `git://git.openstack.org/openstack/vmware-vspc <https://git.openstack.org/cgit/openstack/vmware-vspc>`__
 watcher                                `git://git.openstack.org/openstack/watcher <https://git.openstack.org/cgit/openstack/watcher>`__
 watcher-dashboard                      `git://git.openstack.org/openstack/watcher-dashboard <https://git.openstack.org/cgit/openstack/watcher-dashboard>`__
 zaqar                                  `git://git.openstack.org/openstack/zaqar <https://git.openstack.org/cgit/openstack/zaqar>`__
diff --git a/doc/source/plugins.rst b/doc/source/plugins.rst
index 5b3c6cf..fae1a1d 100644
--- a/doc/source/plugins.rst
+++ b/doc/source/plugins.rst
@@ -12,6 +12,15 @@
 be sure that they will continue to work in the future as DevStack
 evolves.
 
+Prerequisites
+=============
+
+If you are planning to create a plugin that is going to host a service in the
+service catalog (that is, your plugin will use the command
+``get_or_create_service``) please make sure that you apply to the `service
+types authority`_ to reserve a valid service-type. This helps ensure
+that all deployments of your service use the same service-type.
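+
+For illustration, such a plugin would eventually register its service
+along these lines (a sketch with a hypothetical service name and
+service-type)::
+
+  get_or_create_service "myservice" "my-service-type" "My Example Service"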
+
 Plugin Interface
 ================
 
@@ -250,3 +259,5 @@
 
 For additional inspiration on devstack plugins you can check out the
 `Plugin Registry <plugin-registry.html>`_.
+
+.. _service types authority: https://specs.openstack.org/openstack/service-types-authority/
diff --git a/doc/source/site-map.rst b/doc/source/site-map.rst
deleted file mode 100644
index 801fc66..0000000
--- a/doc/source/site-map.rst
+++ /dev/null
@@ -1,23 +0,0 @@
-:orphan:
-
-.. the TOC on the front page actually makes the document a lot more
-   confusing. This lets us bury a toc which we can link in when
-   appropriate.
-
-==========
- Site Map
-==========
-
-.. toctree::
-   :glob:
-   :maxdepth: 3
-
-   overview
-   configuration
-   networking
-   plugins
-   plugin-registry
-   faq
-   development
-   hacking
-   guides
diff --git a/doc/source/systemd.rst b/doc/source/systemd.rst
new file mode 100644
index 0000000..c1d2944
--- /dev/null
+++ b/doc/source/systemd.rst
@@ -0,0 +1,227 @@
+===========================
+ Using Systemd in DevStack
+===========================
+
+By default DevStack runs all of its services as systemd units.
+Systemd is now the default init system for nearly every Linux
+distro, and it solves many of the problems related to managing
+poorly running processes.
+
+Why this instead of screen?
+===========================
+
+The screen model for DevStack was invented when the number of services
+that a DevStack user was going to run was typically < 10. This made
+jumping around with screen hot keys very easy. However, the landscape
+has changed: not all services are stoppable in screen (some run under
+Apache), and there are typically at least 20 of them.
+
+There is also a common developer workflow of changing code in more
+than one service, and needing to restart a bunch of services for that
+to take effect.
+
+Unit Structure
+==============
+
+.. note::
+
+   Originally we actually wanted to do this as user units, however
+   there are issues with running this under non-interactive
+   shells. For now, we'll be running as system units. Some user unit
+   code is left in place in case we can switch back later.
+
+All DevStack units are created as part of the DevStack slice and
+given the name ``devstack@$servicename.service``. This makes it easy
+to understand which services are part of the devstack run, and lets us
+disable / stop them in a single command.
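+
+For illustration, a generated unit might look roughly like this (a
+hypothetical sketch, not the exact content DevStack writes)::
+
+  [Unit]
+  Description=Devstack devstack@n-cpu.service
+
+  [Service]
+  ExecStart=/usr/local/bin/nova-compute --config-file /etc/nova/nova.conf
+  User=stack
+
+  [Install]
+  WantedBy=multi-user.target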
+
+Manipulating Units
+==================
+
+The following examples assume the ``n-cpu`` unit, to keep them concrete.
+
+Enable a unit (allows it to be started)::
+
+  sudo systemctl enable devstack@n-cpu.service
+
+Disable a unit::
+
+  sudo systemctl disable devstack@n-cpu.service
+
+Start a unit::
+
+  sudo systemctl start devstack@n-cpu.service
+
+Stop a unit::
+
+  sudo systemctl stop devstack@n-cpu.service
+
+Restart a unit::
+
+  sudo systemctl restart devstack@n-cpu.service
+
+See status of a unit::
+
+  sudo systemctl status devstack@n-cpu.service
+
+Operating on more than one unit at a time
+-----------------------------------------
+
+Systemd supports wildcarding for unit operations. To restart every
+service in devstack you can do the following::
+
+  sudo systemctl restart devstack@*
+
+Or to see the status of all Nova processes you can do::
+
+  sudo systemctl status devstack@n-*
+
+We'll eventually make the unit names a bit more meaningful so that
+it's easier to understand what you are restarting.
+
+.. _journalctl-examples:
+
+Querying Logs
+=============
+
+One of the other major things that comes with systemd is journald, a
+consolidated way to access logs (including querying through structured
+metadata). Logs are accessed via the ``journalctl`` command, which has
+powerful query facilities. We'll start with some common options.
+
+Follow logs for a specific service::
+
+  journalctl -f --unit devstack@n-cpu.service
+
+Follow logs for multiple services simultaneously::
+
+  journalctl -f --unit devstack@n-cpu.service --unit devstack@n-cond.service
+
+Or use wildcards to follow all the nova services::
+
+  journalctl -f --unit devstack@n-*
+
+Use higher precision time stamps::
+
+  journalctl -f -o short-precise --unit devstack@n-cpu.service
+
+By default, journalctl strips out "unprintable" characters, including
+ASCII color codes. To keep the color codes (which can be interpreted by
+an appropriate terminal/pager - e.g. ``less``, the default)::
+
+  journalctl -a --unit devstack@n-cpu.service
+
+When outputting to the terminal using the default pager, long lines
+appear to be truncated, but horizontal scrolling is supported via the
+left/right arrow keys.
+
+See ``man 1 journalctl`` for more.
+
+Debugging
+=========
+
+Using pdb
+---------
+
+In order to break into a regular pdb session on a systemd-controlled
+service, you need to invoke the process manually - that is, take it out
+of systemd's control.
+
+Discover the command systemd is using to run the service::
+
+  systemctl show devstack@n-sch.service -p ExecStart --no-pager
+
+Stop the systemd service::
+
+  sudo systemctl stop devstack@n-sch.service
+
+Inject your breakpoint in the source, e.g.::
+
+  import pdb; pdb.set_trace()
+
+Invoke the command manually::
+
+  /usr/local/bin/nova-scheduler --config-file /etc/nova/nova.conf
+
+Using remote-pdb
+----------------
+
+`remote-pdb`_ works while the process is under systemd control.
+
+Make sure you have remote-pdb installed::
+
+  sudo pip install remote-pdb
+
+Inject your breakpoint in the source, e.g.::
+
+  import remote_pdb; remote_pdb.set_trace()
+
+Restart the relevant service::
+
+  sudo systemctl restart devstack@n-api.service
+
+The remote-pdb code configures the telnet port when ``set_trace()`` is
+invoked.  Do whatever it takes to hit the instrumented code path, and
+inspect the logs for a message displaying the listening port::
+
+  Sep 07 16:36:12 p8-100-neo devstack@n-api.service[772]: RemotePdb session open at 127.0.0.1:46771, waiting for connection ...
+
+Telnet to that port to enter the pdb session::
+
+  telnet 127.0.0.1 46771
+
+See the `remote-pdb`_ home page for more options.
+
+.. _`remote-pdb`: https://pypi.python.org/pypi/remote-pdb
+
+Known Issues
+============
+
+Be careful about systemd python libraries. There are three of them on
+PyPI, and they are all very different. They unfortunately all install
+into the ``systemd`` namespace, which can cause some issues.
+
+- ``systemd-python`` - this is the upstream maintained library, it has
+  a version number like systemd itself (currently ``234``). This is
+  the one you want.
+- ``systemd`` - a python 3 only library, not what you want.
+- ``python-systemd`` - another library you don't want. Installing it
+  on a system will break ansible's ability to run.
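+
+To make sure you end up with the upstream library, install it
+explicitly by its PyPI name, for example::
+
+  sudo pip install systemd-python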
+
+
+If we were using user units, the ``[Service]`` ``Group=`` parameter
+would be a problem: it doesn't seem to work with user units, even
+though the documentation says that it should. This means we would
+need to do an explicit ``/usr/bin/sg``, which has the downside of
+making the SYSLOG_IDENTIFIER be ``sg``. We could explicitly set that
+with ``SyslogIdentifier=``, but it's really unfortunate that we would
+need this workaround. This is currently not a problem because we're
+only using system units.
+
+Future Work
+===========
+
+user units
+----------
+
+It would be great if we could run services as user units, so that
+there is a clear separation of code run as non-root, ensuring that
+running as root never accidentally gets baked into services as an
+assumption. However, user units interact poorly with devstack-gate
+and the way that commands are run as users with ansible and su.
+
+Maybe someday we can figure that out.
+
+References
+==========
+
+- Arch Linux Wiki - https://wiki.archlinux.org/index.php/Systemd/User
+- Python interface to journald -
+  https://www.freedesktop.org/software/systemd/python-systemd/journal.html
+- Systemd documentation on service files -
+  https://www.freedesktop.org/software/systemd/man/systemd.service.html
+- Systemd documentation on exec (can be used to impact service runs) -
+  https://www.freedesktop.org/software/systemd/man/systemd.exec.html
diff --git a/files/apache-keystone.template b/files/apache-keystone.template
index 84dc273..1284360 100644
--- a/files/apache-keystone.template
+++ b/files/apache-keystone.template
@@ -7,7 +7,7 @@
 </Directory>
 
 <VirtualHost *:%PUBLICPORT%>
-    WSGIDaemonProcess keystone-public processes=5 threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
+    WSGIDaemonProcess keystone-public processes=3 threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
     WSGIProcessGroup keystone-public
     WSGIScriptAlias / %KEYSTONE_BIN%/keystone-wsgi-public
     WSGIApplicationGroup %{GLOBAL}
@@ -21,7 +21,7 @@
 </VirtualHost>
 
 <VirtualHost *:%ADMINPORT%>
-    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
+    WSGIDaemonProcess keystone-admin processes=3 threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
     WSGIProcessGroup keystone-admin
     WSGIScriptAlias / %KEYSTONE_BIN%/keystone-wsgi-admin
     WSGIApplicationGroup %{GLOBAL}
diff --git a/files/debs/dstat b/files/debs/dstat
index 2b643b8..0d9da44 100644
--- a/files/debs/dstat
+++ b/files/debs/dstat
@@ -1 +1,2 @@
 dstat
+python-psutil
diff --git a/files/debs/general b/files/debs/general
index c121770..8e0018d 100644
--- a/files/debs/general
+++ b/files/debs/general
@@ -1,3 +1,5 @@
+apache2
+apache2-dev
 bc
 bridge-utils
 bsdmainutils
@@ -9,11 +11,13 @@
 git
 graphviz # needed for docs
 iputils-ping
+libapache2-mod-proxy-uwsgi
 libffi-dev # for pyOpenSSL
 libjpeg-dev # Pillow 3.0.0
 libmysqlclient-dev  # MySQL-python
 libpq-dev  # psycopg2
 libssl-dev # for pyOpenSSL
+libsystemd-dev # for systemd-python
 libxml2-dev  # lxml
 libxslt1-dev  # lxml
 libyaml-dev
@@ -25,7 +29,6 @@
 python2.7
 python-dev
 python-gdbm # needed for testr
-screen
 tar
 tcpdump
 unzip
diff --git a/files/debs/n-api b/files/debs/n-api
deleted file mode 100644
index 0928cd5..0000000
--- a/files/debs/n-api
+++ /dev/null
@@ -1 +0,0 @@
-fping
diff --git a/files/debs/n-cpu b/files/debs/n-cpu
index 69ac430..d8bbf59 100644
--- a/files/debs/n-cpu
+++ b/files/debs/n-cpu
@@ -2,6 +2,7 @@
 genisoimage
 gir1.2-libosinfo-1.0
 lvm2 # NOPRIME
+netcat-openbsd
 open-iscsi
 python-guestfs # NOPRIME
 qemu-utils
diff --git a/files/debs/q-agt b/files/debs/neutron-agent
similarity index 100%
rename from files/debs/q-agt
rename to files/debs/neutron-agent
diff --git a/files/debs/neutron b/files/debs/neutron-common
similarity index 79%
rename from files/debs/neutron
rename to files/debs/neutron-common
index 2307fa5..e30f678 100644
--- a/files/debs/neutron
+++ b/files/debs/neutron-common
@@ -2,6 +2,7 @@
 dnsmasq-base
 dnsmasq-utils # for dhcp_release only available in dist:precise
 ebtables
+haproxy # to serve as metadata proxy inside router/dhcp namespaces
 iptables
 iputils-arping
 iputils-ping
diff --git a/files/debs/q-l3 b/files/debs/neutron-l3
similarity index 100%
rename from files/debs/q-l3
rename to files/debs/neutron-l3
diff --git a/files/debs/nova b/files/debs/nova
index 58dad41..5e14aec 100644
--- a/files/debs/nova
+++ b/files/debs/nova
@@ -10,7 +10,9 @@
 kpartx
 libjs-jquery-tablesorter # Needed for coverage html reports
 libmysqlclient-dev
-libvirt-bin # NOPRIME
+libvirt-bin # dist:xenial NOPRIME
+libvirt-clients # not:xenial NOPRIME
+libvirt-daemon-system # not:xenial NOPRIME
 libvirt-dev # NOPRIME
 mysql-server # NOPRIME
 parted
diff --git a/files/debs/q-agt b/files/debs/q-agt
new file mode 120000
index 0000000..99fe353
--- /dev/null
+++ b/files/debs/q-agt
@@ -0,0 +1 @@
+neutron-agent
\ No newline at end of file
diff --git a/files/debs/q-l3 b/files/debs/q-l3
new file mode 120000
index 0000000..0a5ca2a
--- /dev/null
+++ b/files/debs/q-l3
@@ -0,0 +1 @@
+neutron-l3
\ No newline at end of file
diff --git a/files/debs/zookeeper b/files/debs/zookeeper
deleted file mode 100644
index f41b559..0000000
--- a/files/debs/zookeeper
+++ /dev/null
@@ -1 +0,0 @@
-zookeeperd
diff --git a/files/ebtables.workaround b/files/ebtables.workaround
deleted file mode 100644
index c8af51f..0000000
--- a/files/ebtables.workaround
+++ /dev/null
@@ -1,23 +0,0 @@
-#!/bin/bash
-#
-# Copyright 2015 Hewlett-Packard Development Company, L.P.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-#
-# This is a terrible, terrible, truly terrible work around for
-# environments that have libvirt < 1.2.11. ebtables requires that you
-# specifically tell it you would like to not race and get punched in
-# the face when 2 run at the same time with a --concurrent flag.
-
-flock -w 300 /var/lock/ebtables.nova /sbin/ebtables.real $@
diff --git a/files/ldap/user.ldif.in b/files/ldap/user.ldif.in
new file mode 100644
index 0000000..16a9807
--- /dev/null
+++ b/files/ldap/user.ldif.in
@@ -0,0 +1,23 @@
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+# implied. See the License for the specific language governing
+# permissions and limitations under the License.
+
+# Demo LDAP user
+dn: cn=demo,ou=Users,${BASE_DN}
+cn: demo
+displayName: demo
+givenName: demo
+mail: demo@openstack.org
+objectClass: inetOrgPerson
+objectClass: top
+sn: demo
+uid: demo
+userPassword: demo
diff --git a/files/rpms-suse/general b/files/rpms-suse/general
index 1044c25..0c1a281 100644
--- a/files/rpms-suse/general
+++ b/files/rpms-suse/general
@@ -1,3 +1,5 @@
+apache2
+apache2-devel
 bc
 bridge-utils
 ca-certificates-mozilla
@@ -22,10 +24,11 @@
 python-cmd2 # dist:opensuse-12.3
 python-devel  # pyOpenSSL
 python-xml
-screen
+systemd-devel # for systemd-python
 tar
 tcpdump
 unzip
 util-linux
 wget
+which
 zlib-devel
diff --git a/files/rpms-suse/n-api b/files/rpms-suse/n-api
index af5ac2f..0f08daa 100644
--- a/files/rpms-suse/n-api
+++ b/files/rpms-suse/n-api
@@ -1,2 +1 @@
-fping
 python-dateutil
diff --git a/files/rpms-suse/q-agt b/files/rpms-suse/neutron-agent
similarity index 100%
rename from files/rpms-suse/q-agt
rename to files/rpms-suse/neutron-agent
diff --git a/files/rpms-suse/neutron b/files/rpms-suse/neutron-common
similarity index 70%
rename from files/rpms-suse/neutron
rename to files/rpms-suse/neutron-common
index e9abc6e..d1cc73f 100644
--- a/files/rpms-suse/neutron
+++ b/files/rpms-suse/neutron-common
@@ -2,6 +2,7 @@
 dnsmasq
 dnsmasq-utils # dist:opensuse-12.3,opensuse-13.1
 ebtables
+haproxy # to serve as metadata proxy inside router/dhcp namespaces
 iptables
 iputils
 mariadb # NOPRIME
diff --git a/files/rpms-suse/q-l3 b/files/rpms-suse/neutron-l3
similarity index 100%
rename from files/rpms-suse/q-l3
rename to files/rpms-suse/neutron-l3
diff --git a/files/rpms-suse/q-agt b/files/rpms-suse/q-agt
new file mode 120000
index 0000000..99fe353
--- /dev/null
+++ b/files/rpms-suse/q-agt
@@ -0,0 +1 @@
+neutron-agent
\ No newline at end of file
diff --git a/files/rpms-suse/q-l3 b/files/rpms-suse/q-l3
new file mode 120000
index 0000000..0a5ca2a
--- /dev/null
+++ b/files/rpms-suse/q-l3
@@ -0,0 +1 @@
+neutron-l3
\ No newline at end of file
diff --git a/files/rpms/cinder b/files/rpms/cinder
index 0274642..3bc4e7a 100644
--- a/files/rpms/cinder
+++ b/files/rpms/cinder
@@ -1,4 +1,5 @@
 iscsi-initiator-utils
 lvm2
 qemu-img
-scsi-target-utils # NOPRIME
+scsi-target-utils # not:rhel7,f24,f25,f26 NOPRIME
+targetcli # dist:rhel7,f24,f25,f26 NOPRIME
diff --git a/files/rpms/dstat b/files/rpms/dstat
index 2b643b8..0d9da44 100644
--- a/files/rpms/dstat
+++ b/files/rpms/dstat
@@ -1 +1,2 @@
 dstat
+python-psutil
diff --git a/files/rpms/general b/files/rpms/general
index 77d2fa5..f3f8708 100644
--- a/files/rpms/general
+++ b/files/rpms/general
@@ -7,9 +7,11 @@
 gettext  # used for compiling message catalogs
 git-core
 graphviz # needed only for docs
-iptables-services  # NOPRIME f23,f24,f25
+httpd
+httpd-devel
+iptables-services  # NOPRIME f23,f24,f25,f26
 java-1.7.0-openjdk-headless  # NOPRIME rhel7
-java-1.8.0-openjdk-headless  # NOPRIME f23,f24,f25
+java-1.8.0-openjdk-headless  # NOPRIME f23,f24,f25,f26
 libffi-devel
 libjpeg-turbo-devel # Pillow 3.0.0
 libxml2-devel # lxml
@@ -26,7 +28,7 @@
 pyOpenSSL # version in pip uses too much memory
 python-devel
 redhat-rpm-config # missing dep for gcc hardening flags, see rhbz#1217376
-screen
+systemd-devel # for systemd-python
 tar
 tcpdump
 unzip
diff --git a/files/rpms/n-api b/files/rpms/n-api
deleted file mode 100644
index 0928cd5..0000000
--- a/files/rpms/n-api
+++ /dev/null
@@ -1 +0,0 @@
-fping
diff --git a/files/rpms/q-agt b/files/rpms/neutron-agent
similarity index 100%
rename from files/rpms/q-agt
rename to files/rpms/neutron-agent
diff --git a/files/rpms/neutron b/files/rpms/neutron-common
similarity index 75%
rename from files/rpms/neutron
rename to files/rpms/neutron-common
index 2e49a0c..a4e029a 100644
--- a/files/rpms/neutron
+++ b/files/rpms/neutron-common
@@ -2,6 +2,7 @@
 dnsmasq # for q-dhcp
 dnsmasq-utils # for dhcp_release
 ebtables
+haproxy # to serve as metadata proxy inside router/dhcp namespaces
 iptables
 iputils
 mysql-devel
diff --git a/files/rpms/q-l3 b/files/rpms/neutron-l3
similarity index 100%
rename from files/rpms/q-l3
rename to files/rpms/neutron-l3
diff --git a/files/rpms/nova b/files/rpms/nova
index a368c55..632e796 100644
--- a/files/rpms/nova
+++ b/files/rpms/nova
@@ -7,7 +7,7 @@
 genisoimage # required for config_drive
 iptables
 iputils
-kernel-modules # dist:f23,f24,f25
+kernel-modules # dist:f23,f24,f25,f26
 kpartx
 libxml2-python
 m2crypto
diff --git a/files/rpms/q-agt b/files/rpms/q-agt
new file mode 120000
index 0000000..99fe353
--- /dev/null
+++ b/files/rpms/q-agt
@@ -0,0 +1 @@
+neutron-agent
\ No newline at end of file
diff --git a/files/rpms/q-l3 b/files/rpms/q-l3
new file mode 120000
index 0000000..0a5ca2a
--- /dev/null
+++ b/files/rpms/q-l3
@@ -0,0 +1 @@
+neutron-l3
\ No newline at end of file
diff --git a/files/rpms/swift b/files/rpms/swift
index 2f12df0..2e09cec 100644
--- a/files/rpms/swift
+++ b/files/rpms/swift
@@ -2,7 +2,7 @@
 liberasurecode-devel
 memcached
 pyxattr
-rsync-daemon # dist:f23,f24,f25
+rsync-daemon # dist:f23,f24,f25,f26
 sqlite
 xfsprogs
 xinetd
diff --git a/files/rpms/zookeeper b/files/rpms/zookeeper
deleted file mode 100644
index 1bfac53..0000000
--- a/files/rpms/zookeeper
+++ /dev/null
@@ -1 +0,0 @@
-zookeeper
diff --git a/files/zookeeper/environment b/files/zookeeper/environment
deleted file mode 100644
index afa2d2f..0000000
--- a/files/zookeeper/environment
+++ /dev/null
@@ -1,36 +0,0 @@
-#
-# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-# Modified from http://packages.ubuntu.com/saucy/zookeeperd
-NAME=zookeeper
-ZOOCFGDIR=/etc/zookeeper/conf
-
-# seems, that log4j requires the log4j.properties file to be in the classpath
-CLASSPATH="$ZOOCFGDIR:/usr/share/java/jline.jar:/usr/share/java/log4j-1.2.jar:/usr/share/java/xercesImpl.jar:/usr/share/java/xmlParserAPIs.jar:/usr/share/java/netty.jar:/usr/share/java/slf4j-api.jar:/usr/share/java/slf4j-log4j12.jar:/usr/share/java/zookeeper.jar"
-
-ZOOCFG="$ZOOCFGDIR/zoo.cfg"
-ZOO_LOG_DIR=/var/log/zookeeper
-USER=$NAME
-GROUP=$NAME
-PIDDIR=/var/run/$NAME
-PIDFILE=$PIDDIR/$NAME.pid
-SCRIPTNAME=/etc/init.d/$NAME
-JAVA=/usr/bin/java
-ZOOMAIN="org.apache.zookeeper.server.quorum.QuorumPeerMain"
-ZOO_LOG4J_PROP="INFO,ROLLINGFILE"
-JMXLOCALONLY=false
-JAVA_OPTS=""
diff --git a/files/zookeeper/log4j.properties b/files/zookeeper/log4j.properties
deleted file mode 100644
index 6c45a4a..0000000
--- a/files/zookeeper/log4j.properties
+++ /dev/null
@@ -1,69 +0,0 @@
-#
-# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-# From http://packages.ubuntu.com/saucy/zookeeperd
-
-# ZooKeeper Logging Configuration
-#
-
-# Format is "<default threshold> (, <appender>)+
-
-log4j.rootLogger=${zookeeper.root.logger}
-
-# Example: console appender only
-# log4j.rootLogger=INFO, CONSOLE
-
-# Example with rolling log file
-#log4j.rootLogger=DEBUG, CONSOLE, ROLLINGFILE
-
-# Example with rolling log file and tracing
-#log4j.rootLogger=TRACE, CONSOLE, ROLLINGFILE, TRACEFILE
-
-#
-# Log INFO level and above messages to the console
-#
-log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
-log4j.appender.CONSOLE.Threshold=INFO
-log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
-log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n
-
-#
-# Add ROLLINGFILE to rootLogger to get log file output
-#    Log DEBUG level and above messages to a log file
-log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender
-log4j.appender.ROLLINGFILE.Threshold=WARN
-log4j.appender.ROLLINGFILE.File=${zookeeper.log.dir}/zookeeper.log
-
-# Max log file size of 10MB
-log4j.appender.ROLLINGFILE.MaxFileSize=10MB
-# uncomment the next line to limit number of backup files
-#log4j.appender.ROLLINGFILE.MaxBackupIndex=10
-
-log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
-log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n
-
-
-#
-# Add TRACEFILE to rootLogger to get log file output
-#    Log DEBUG level and above messages to a log file
-log4j.appender.TRACEFILE=org.apache.log4j.FileAppender
-log4j.appender.TRACEFILE.Threshold=TRACE
-log4j.appender.TRACEFILE.File=${zookeeper.log.dir}/zookeeper_trace.log
-
-log4j.appender.TRACEFILE.layout=org.apache.log4j.PatternLayout
-### Notice we are including log4j's NDC here (%x)
-log4j.appender.TRACEFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L][%x] - %m%n
diff --git a/files/zookeeper/myid b/files/zookeeper/myid
deleted file mode 100644
index c227083..0000000
--- a/files/zookeeper/myid
+++ /dev/null
@@ -1 +0,0 @@
-0
\ No newline at end of file
diff --git a/files/zookeeper/zoo.cfg b/files/zookeeper/zoo.cfg
deleted file mode 100644
index b8f5582..0000000
--- a/files/zookeeper/zoo.cfg
+++ /dev/null
@@ -1,74 +0,0 @@
-#
-# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-# http://hadoop.apache.org/zookeeper/docs/current/zookeeperAdmin.html
-
-# The number of milliseconds of each tick
-tickTime=2000
-# The number of ticks that the initial
-# synchronization phase can take
-initLimit=10
-# The number of ticks that can pass between
-# sending a request and getting an acknowledgement
-syncLimit=5
-# the directory where the snapshot is stored.
-dataDir=/var/lib/zookeeper
-# Place the dataLogDir to a separate physical disc for better performance
-# dataLogDir=/disk2/zookeeper
-
-# the port at which the clients will connect
-clientPort=2181
-
-# Maximum number of clients that can connect from one client
-maxClientCnxns=60
-
-# specify all zookeeper servers
-# The fist port is used by followers to connect to the leader
-# The second one is used for leader election
-
-server.0=127.0.0.1:2888:3888
-
-# To avoid seeks ZooKeeper allocates space in the transaction log file in
-# blocks of preAllocSize kilobytes. The default block size is 64M. One reason
-# for changing the size of the blocks is to reduce the block size if snapshots
-# are taken more often. (Also, see snapCount).
-#preAllocSize=65536
-
-# Clients can submit requests faster than ZooKeeper can process them,
-# especially if there are a lot of clients. To prevent ZooKeeper from running
-# out of memory due to queued requests, ZooKeeper will throttle clients so that
-# there is no more than globalOutstandingLimit outstanding requests in the
-# system. The default limit is 1,000.ZooKeeper logs transactions to a
-# transaction log. After snapCount transactions are written to a log file a
-# snapshot is started and a new transaction log file is started. The default
-# snapCount is 10,000.
-#snapCount=1000
-
-# If this option is defined, requests will be will logged to a trace file named
-# traceFile.year.month.day.
-#traceFile=
-
-# Leader accepts client connections. Default value is "yes". The leader machine
-# coordinates updates. For higher update throughput at thes slight expense of
-# read throughput the leader can be configured to not accept clients and focus
-# on coordination.
-#leaderServes=yes
-
-# Autopurge every hour to avoid using lots of disk in bursts
-# Order of the next 2 properties matters.
-# autopurge.snapRetainCount must be before autopurge.purgeInterval.
-autopurge.snapRetainCount=3
-autopurge.purgeInterval=1
\ No newline at end of file
diff --git a/functions b/functions
index f262fbc..33a0e6a 100644
--- a/functions
+++ b/functions
@@ -12,7 +12,7 @@
 
 # ensure we don't re-source this in the same environment
 [[ -z "$_DEVSTACK_FUNCTIONS" ]] || return 0
-declare -r _DEVSTACK_FUNCTIONS=1
+declare -r -g _DEVSTACK_FUNCTIONS=1
 
 # Include the common functions
 FUNC_DIR=$(cd $(dirname "${BASH_SOURCE:-$0}") && pwd)
@@ -45,6 +45,37 @@
 # export it so child shells have access to the 'short_source' function also.
 export -f short_source
 
+# Download a file from a URL
+#
+# Will check cache (in $FILES) or download given URL.
+#
+# Argument is the URL to the remote file
+#
+# Will echo the local path to the file as the output.  Will die on
+# failure to download.
+#
+# Files can be pre-cached for CI environments, see EXTRA_CACHE_URLS
+# and tools/image_list.sh
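+#
+# Example (hypothetical URL):
+#
+#   local image_path
+#   image_path=$(get_extra_file http://example.com/cirros.img)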
+function get_extra_file {
+    local file_url=$1
+
+    local file_name
+    file_name=$(basename "$file_url")
+    if [[ $file_url != file* ]]; then
+        # If the file isn't cached, download it
+        if [[ ! -f $FILES/$file_name ]]; then
+            wget --progress=dot:giga -c $file_url -O $FILES/$file_name
+            if [[ $? -ne 0 ]]; then
+                die "$file_url could not be downloaded"
+            fi
+        fi
+        echo "$FILES/$file_name"
+        return
+    else
+        # just strip the file:// bit and that's the path to the file
+        echo $file_url | sed 's#^file://##'
+    fi
+}
+
 
 # Retrieve an image from a URL and upload into Glance.
 # Uses the following variables:
@@ -310,6 +341,11 @@
             disk_format=qcow2
             container_format=bare
             ;;
+        *.raw)
+            image_name=$(basename "$image" ".raw")
+            disk_format=raw
+            container_format=bare
+            ;;
         *.iso)
             image_name=$(basename "$image" ".iso")
             disk_format=iso
@@ -318,7 +354,7 @@
         *.vhd|*.vhdx|*.vhd.gz|*.vhdx.gz)
             local extension="${image_fname#*.}"
             image_name=$(basename "$image" ".$extension")
-            disk_format=vhd
+            disk_format=$(echo $image_fname | grep -oP '(?<=\.)vhdx?(?=\.|$)')
             container_format=bare
             if [ "${image_fname##*.}" == "gz" ]; then
                 unpack=zcat
@@ -402,6 +438,26 @@
     return $rval
 }
 
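+# Wait, for up to $1 seconds, for this host's nova-compute service to
+# register itself with the API. Returns non-zero on timeout.
+#
+# wait_for_compute timeout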
+function wait_for_compute {
+    local timeout=$1
+    local rval=0
+    time_start "wait_for_service"
+    timeout $timeout bash -x <<EOF || rval=$?
+        ID=""
+        while [[ "\$ID" == "" ]]; do
+            sleep 1
+            ID=\$(openstack --os-cloud devstack-admin --os-region "$REGION_NAME" compute service list --host `hostname` --service nova-compute -c ID -f value)
+        done
+EOF
+    time_stop "wait_for_service"
+    # Figure out what's happening on platforms where this doesn't work
+    if [[ "$rval" != 0 ]]; then
+        echo "Didn't find service registered by hostname after $timeout seconds"
+        openstack --os-cloud devstack-admin --os-region "$REGION_NAME" compute service list
+    fi
+    return $rval
+}
+
 
 # ping check
 # Uses globals ``ENABLED_SERVICES``, ``TOP_DIR``, ``MULTI_HOST``, ``PRIVATE_NETWORK``
@@ -575,7 +631,9 @@
 function setup_logging {
     local conf_file=$1
     local other_cond=${2:-"False"}
-    if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ] && [ "$other_cond" == "False" ]; then
+    if [[ "$USE_SYSTEMD" == "True" ]]; then
+        setup_systemd_logging $conf_file
+    elif [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ] && [ "$other_cond" == "False" ]; then
         setup_colorized_logging $conf_file
     else
         setup_standard_logging_identity $conf_file
@@ -601,6 +659,28 @@
     iniset $conf_file $conf_section logging_exception_prefix "%(color)s%(asctime)s.%(msecs)03d TRACE %(name)s %(instance)s"
 }
 
+function setup_systemd_logging {
+    local conf_file=$1
+    local conf_section="DEFAULT"
+    # NOTE(sdague): journal logging is a nice to have, and means
+    # we're using the native systemd path, which provides for things
+    # like search on request-id. However, there may be an eventlet
+    # interaction here, so it is off by default for now.
+    USE_JOURNAL=$(trueorfalse False USE_JOURNAL)
+    local pidstr=""
+    if [[ "$USE_JOURNAL" == "True" ]]; then
+        iniset $conf_file $conf_section use_journal "True"
+        # if we are using the journal directly, our process id is already correct
+    else
+        pidstr="(pid=%(process)d) "
+    fi
+    iniset $conf_file $conf_section logging_debug_format_suffix "{{${pidstr}%(funcName)s %(pathname)s:%(lineno)d}}"
+
+    iniset $conf_file $conf_section logging_context_format_string "%(color)s%(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(project_name)s %(user_name)s%(color)s] %(instance)s%(color)s%(message)s"
+    iniset $conf_file $conf_section logging_default_format_string "%(color)s%(levelname)s %(name)s [-%(color)s] %(instance)s%(color)s%(message)s"
+    iniset $conf_file $conf_section logging_exception_prefix "ERROR %(name)s %(instance)s"
+}
+
 function setup_standard_logging_identity {
     local conf_file=$1
     iniset $conf_file DEFAULT logging_user_identity_format "%(project_name)s %(user_name)s"
@@ -666,11 +746,7 @@
 
 # running_in_container - Returns true otherwise false
 function running_in_container {
-    if grep -q lxc /proc/1/cgroup; then
-        return 0
-    fi
-
-    return 1
+    [[ $(systemd-detect-virt --container) != 'none' ]]
 }
 
 
@@ -692,6 +768,52 @@
 }
 
 
+# Set a systemd system override
+#
+# This sets a system-side override in system.conf. A per-service
+# override would be /etc/systemd/system/${service}.service/override.conf
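+#
+# Example (illustrative key and value):
+#   set_systemd_override DefaultLimitNOFILE 2048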
+function set_systemd_override {
+    local key="$1"
+    local value="$2"
+
+    local sysconf="/etc/systemd/system.conf"
+    iniset -sudo "${sysconf}" "Manager" "$key" "$value"
+    echo "Set systemd system override for ${key}=${value}"
+
+    sudo systemctl daemon-reload
+}
+
+# Get a random port from the local port range
+#
+# This function returns an available port in the local port range. The
+# search order is not truly random; it depends on the state of your
+# local system, so callers should treat the result as arbitrary.
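+#
+# Usage: port=$(get_random_port)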
+function get_random_port {
+    read lower_port upper_port < /proc/sys/net/ipv4/ip_local_port_range
+    while true; do
+        for (( port = upper_port ; port >= lower_port ; port-- )); do
+            sudo lsof -i ":$port" &> /dev/null
+            if [[ $? -gt 0 ]] ; then
+                break 2
+            fi
+        done
+    done
+    echo $port
+}
+
+# Save some state information
+#
+# Write out various useful state information to /etc/devstack-version
+function write_devstack_version {
+    cat - > /tmp/devstack-version <<EOF
+DevStack Version: ${DEVSTACK_SERIES}
+Change: $(git log --format="%H %s %ci" -1)
+OS Version: ${os_VENDOR} ${os_RELEASE} ${os_CODENAME}
+EOF
+    sudo install -m 644 /tmp/devstack-version /etc/devstack-version
+    rm /tmp/devstack-version
+}
+
 # Restore xtrace
 $_XTRACE_FUNCTIONS
 
diff --git a/functions-common b/functions-common
index 7e9e200..1b8ca96 100644
--- a/functions-common
+++ b/functions-common
@@ -37,19 +37,19 @@
 
 # ensure we don't re-source this in the same environment
 [[ -z "$_DEVSTACK_FUNCTIONS_COMMON" ]] || return 0
-declare -r _DEVSTACK_FUNCTIONS_COMMON=1
+declare -r -g _DEVSTACK_FUNCTIONS_COMMON=1
 
 # Global Config Variables
-declare -A GITREPO
-declare -A GITBRANCH
-declare -A GITDIR
+declare -A -g GITREPO
+declare -A -g GITBRANCH
+declare -A -g GITDIR
 
 TRACK_DEPENDS=${TRACK_DEPENDS:-False}
 
 # Save these variables to .stackenv
 STACK_ENV_VARS="BASE_SQL_CONN DATA_DIR DEST ENABLED_SERVICES HOST_IP \
-    KEYSTONE_AUTH_PROTOCOL KEYSTONE_AUTH_URI KEYSTONE_SERVICE_URI \
-    LOGFILE OS_CACERT SERVICE_HOST SERVICE_PROTOCOL STACK_USER TLS_IP \
+    KEYSTONE_AUTH_URI KEYSTONE_SERVICE_URI \
+    LOGFILE OS_CACERT SERVICE_HOST STACK_USER TLS_IP \
     HOST_IPV6 SERVICE_IP_VERSION"
 
 
@@ -93,7 +93,7 @@
         --os-region-name $REGION_NAME \
         --os-identity-api-version 3 \
         $CA_CERT_ARG \
-        --os-auth-url $KEYSTONE_AUTH_URI \
+        --os-auth-url $KEYSTONE_SERVICE_URI \
         --os-username demo \
         --os-password $ADMIN_PASSWORD \
         --os-project-name demo
@@ -105,7 +105,7 @@
         --os-region-name $REGION_NAME \
         --os-identity-api-version 3 \
         $CA_CERT_ARG \
-        --os-auth-url $KEYSTONE_AUTH_URI \
+        --os-auth-url $KEYSTONE_SERVICE_URI \
         --os-username alt_demo \
         --os-password $ADMIN_PASSWORD \
         --os-project-name alt_demo
@@ -117,7 +117,7 @@
         --os-region-name $REGION_NAME \
         --os-identity-api-version 3 \
         $CA_CERT_ARG \
-        --os-auth-url $KEYSTONE_AUTH_URI \
+        --os-auth-url $KEYSTONE_SERVICE_URI \
         --os-username admin \
         --os-password $ADMIN_PASSWORD \
         --os-project-name admin
@@ -306,7 +306,7 @@
 # ``os_PACKAGE`` - package type: ``deb`` or ``rpm``
 # ``os_CODENAME`` - vendor's codename for release: ``xenial``
 
-declare os_VENDOR os_RELEASE os_PACKAGE os_CODENAME
+declare -g os_VENDOR os_RELEASE os_PACKAGE os_CODENAME
 
 # Make a *best effort* attempt to install lsb_release packages for the
 # user if not available.  Note can't use generic install_package*
@@ -361,7 +361,7 @@
 
 # Translate the OS version values into common nomenclature
 # Sets global ``DISTRO`` from the ``os_*`` values
-declare DISTRO
+declare -g DISTRO
 
 function GetDistro {
     GetOSVersion
@@ -519,7 +519,7 @@
         if [[ ! -d $git_dest ]]; then
             if [[ "$ERROR_ON_CLONE" = "True" ]]; then
                 echo "The $git_dest project was not found; if this is a gate job, add"
-                echo "the project to the \$PROJECTS variable in the job definition."
+                echo "the project to 'required-projects' in the job definition."
                 die $LINENO "Cloning not allowed in this configuration"
             fi
             git_timed clone $git_clone_flags $git_remote $git_dest
@@ -864,10 +864,11 @@
 
     # Gets user role id
     user_role_id=$(openstack role assignment list \
+        --role $1 \
         --user $2 \
         --project $3 \
         $domain_args \
-        | grep " $1 " | get_field 1)
+        | grep '^|\s[a-f0-9]\+' | get_field 1)
     if [[ -z "$user_role_id" ]]; then
         # Adds role to user and get it
         openstack role add $1 \
@@ -875,10 +876,11 @@
             --project $3 \
             $domain_args
         user_role_id=$(openstack role assignment list \
+            --role $1 \
             --user $2 \
             --project $3 \
             $domain_args \
-            | grep " $1 " | get_field 1)
+            | grep '^|\s[a-f0-9]\+' | get_field 1)
     fi
     echo $user_role_id
 }
@@ -889,46 +891,20 @@
     local user_role_id
     # Gets user role id
     user_role_id=$(openstack role assignment list \
+        --role $1 \
         --user $2 \
         --domain $3 \
-        | grep " $1 " | get_field 1)
+        | grep '^|\s[a-f0-9]\+' | get_field 1)
     if [[ -z "$user_role_id" ]]; then
         # Adds role to user and get it
         openstack role add $1 \
             --user $2 \
             --domain $3
         user_role_id=$(openstack role assignment list \
+            --role $1 \
             --user $2 \
             --domain $3 \
-            | grep " $1 " | get_field 1)
-    fi
-    echo $user_role_id
-}
-
-# Gets or adds user role to domain
-# Usage: get_or_add_user_domain_role <role> <user> <domain>
-function get_or_add_user_domain_role {
-    local user_role_id
-    # Gets user role id
-    user_role_id=$(openstack role assignment list \
-        --user $2 \
-        --os-url=$KEYSTONE_SERVICE_URI_V3 \
-        --os-identity-api-version=3 \
-        --domain $3 \
-        | grep " $1 " | get_field 1)
-    if [[ -z "$user_role_id" ]]; then
-        # Adds role to user and get it
-        openstack role add $1 \
-            --user $2 \
-            --domain $3 \
-            --os-url=$KEYSTONE_SERVICE_URI_V3 \
-            --os-identity-api-version=3
-        user_role_id=$(openstack role assignment list \
-            --user $2 \
-            --os-url=$KEYSTONE_SERVICE_URI_V3 \
-            --os-identity-api-version=3 \
-            --domain $3 \
-            | grep " $1 " | get_field 1)
+            | grep '^|\s[a-f0-9]\+' | get_field 1)
     fi
     echo $user_role_id
 }
@@ -939,6 +915,7 @@
     local group_role_id
     # Gets group role id
     group_role_id=$(openstack role assignment list \
+        --role $1 \
         --group $2 \
         --project $3 \
         -f value)
@@ -948,6 +925,7 @@
             --group $2 \
             --project $3
         group_role_id=$(openstack role assignment list \
+            --role $1 \
             --group $2 \
             --project $3 \
             -f value)
@@ -1148,6 +1126,19 @@
                 fi
             fi
 
+            # Look for # not:xxx in comment
+            if [[ $line =~ (.*)#.*not:([^ ]*) ]]; then
+                # We are using BASH regexp matching feature.
+                package=${BASH_REMATCH[1]}
+                distros=${BASH_REMATCH[2]}
+                # In bash ${VAR,,} will lowercase VAR
+                # Look for a match in the distro list
+                if [[ ${distros,,} =~ ${DISTRO,,} ]]; then
+                    # If match then skip this package
+                    inst_pkg=0
+                fi
+            fi
+
             if [[ $inst_pkg = 1 ]]; then
                 echo $package
             fi
@@ -1166,6 +1157,8 @@
 # - ``# NOPRIME`` defers installation to be performed later in `stack.sh`
 # - ``# dist:DISTRO`` or ``dist:DISTRO1,DISTRO2`` limits the selection
 #   of the package to the distros listed.  The distro names are case insensitive.
+# - ``# not:DISTRO`` or ``not:DISTRO1,DISTRO2`` limits the selection
+#   of the package to the distros not listed. The distro names are case insensitive.
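+#   For example, ``scsi-target-utils # not:rhel7 NOPRIME`` skips the
+#   package on RHEL 7 but selects it everywhere else.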
 function get_packages {
     local xtrace
     xtrace=$(set +o | grep xtrace)
@@ -1218,9 +1211,9 @@
             if [[ ! $file_to_parse =~ $package_dir/keystone ]]; then
                 file_to_parse="${file_to_parse} ${package_dir}/keystone"
             fi
-        elif [[ $service == q-* ]]; then
-            if [[ ! $file_to_parse =~ $package_dir/neutron ]]; then
-                file_to_parse="${file_to_parse} ${package_dir}/neutron"
+        elif [[ $service == q-* || $service == neutron-* ]]; then
+            if [[ ! $file_to_parse =~ $package_dir/neutron-common ]]; then
+                file_to_parse="${file_to_parse} ${package_dir}/neutron-common"
             fi
         elif [[ $service == ir-* ]]; then
             if [[ ! $file_to_parse =~ $package_dir/ironic ]]; then
@@ -1387,75 +1380,105 @@
         zypper --non-interactive install --auto-agree-with-licenses "$@"
 }
 
-
-# Process Functions
-# =================
-
-# _run_process() is designed to be backgrounded by run_process() to simulate a
-# fork.  It includes the dirty work of closing extra filehandles and preparing log
-# files to produce the same logs as screen_it().  The log filename is derived
-# from the service name.
-# Uses globals ``CURRENT_LOG_TIME``, ``LOGDIR``, ``SCREEN_LOGDIR``, ``SCREEN_NAME``, ``SERVICE_DIR``
-# If an optional group is provided sg will be used to set the group of
-# the command.
-# _run_process service "command-line" [group]
-function _run_process {
-    # disable tracing through the exec redirects, it's just confusing in the logs.
-    xtrace=$(set +o | grep xtrace)
-    set +o xtrace
-
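+# Write a systemd unit file for a devstack service. The resulting
+# unit ends up looking roughly like (illustrative values):
+#
+#   [Unit]
+#   Description = Devstack devstack@n-cpu.service
+#   [Service]
+#   User = stack
+#   ExecStart = /usr/local/bin/nova-compute
+#   KillMode = process
+#   [Install]
+#   WantedBy = multi-user.target
+#
+# write_user_unit_file service command group user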
+function write_user_unit_file {
     local service=$1
     local command="$2"
     local group=$3
-
-    # Undo logging redirections and close the extra descriptors
-    exec 1>&3
-    exec 2>&3
-    exec 3>&-
-    exec 6>&-
-
-    local logfile="${service}.log.${CURRENT_LOG_TIME}"
-    local real_logfile="${LOGDIR}/${logfile}"
-    if [[ -n ${LOGDIR} ]]; then
-        exec 1>&"$real_logfile" 2>&1
-        bash -c "cd '$LOGDIR' && ln -sf '$logfile' ${service}.log"
-        if [[ -n ${SCREEN_LOGDIR} ]]; then
-            # Drop the backward-compat symlink
-            ln -sf "$real_logfile" ${SCREEN_LOGDIR}/screen-${service}.log
-        fi
-
-        # TODO(dtroyer): Hack to get stdout from the Python interpreter for the logs.
-        export PYTHONUNBUFFERED=1
-    fi
-
-    # reenable xtrace before we do *real* work
-    $xtrace
-
-    # Run under ``setsid`` to force the process to become a session and group leader.
-    # The pid saved can be used with pkill -g to get the entire process group.
+    local user=$4
+    local extra=""
     if [[ -n "$group" ]]; then
-        setsid sg $group "$command" & echo $! >$SERVICE_DIR/$SCREEN_NAME/$service.pid
-    else
-        setsid $command & echo $! >$SERVICE_DIR/$SCREEN_NAME/$service.pid
+        extra="Group=$group"
     fi
+    local unitfile="$SYSTEMD_DIR/$service"
+    mkdir -p $SYSTEMD_DIR
 
-    # Just silently exit this process
-    exit 0
+    iniset -sudo $unitfile "Unit" "Description" "Devstack $service"
+    iniset -sudo $unitfile "Service" "User" "$user"
+    iniset -sudo $unitfile "Service" "ExecStart" "$command"
+    iniset -sudo $unitfile "Service" "KillMode" "process"
+    iniset -sudo $unitfile "Service" "TimeoutStopSec" "infinity"
+    if [[ -n "$group" ]]; then
+        iniset -sudo $unitfile "Service" "Group" "$group"
+    fi
+    iniset -sudo $unitfile "Install" "WantedBy" "multi-user.target"
+
+    # changes to existing units sometimes need a refresh
+    $SYSTEMCTL daemon-reload
 }
 
-# Helper to remove the ``*.failure`` files under ``$SERVICE_DIR/$SCREEN_NAME``.
-# This is used for ``service_check`` when all the ``screen_it`` are called finished
-# Uses globals ``SCREEN_NAME``, ``SERVICE_DIR``
-# init_service_check
-function init_service_check {
-    SCREEN_NAME=${SCREEN_NAME:-stack}
-    SERVICE_DIR=${SERVICE_DIR:-${DEST}/status}
+function write_uwsgi_user_unit_file {
+    local service=$1
+    local command="$2"
+    local group=$3
+    local user=$4
+    local unitfile="$SYSTEMD_DIR/$service"
+    mkdir -p $SYSTEMD_DIR
 
-    if [[ ! -d "$SERVICE_DIR/$SCREEN_NAME" ]]; then
-        mkdir -p "$SERVICE_DIR/$SCREEN_NAME"
+    iniset -sudo $unitfile "Unit" "Description" "Devstack $service"
+    iniset -sudo $unitfile "Service" "SyslogIdentifier" "$service"
+    iniset -sudo $unitfile "Service" "User" "$user"
+    iniset -sudo $unitfile "Service" "ExecStart" "$command"
+    iniset -sudo $unitfile "Service" "Type" "notify"
+    iniset -sudo $unitfile "Service" "KillMode" "process"
+    iniset -sudo $unitfile "Service" "Restart" "always"
+    iniset -sudo $unitfile "Service" "NotifyAccess" "all"
+    iniset -sudo $unitfile "Service" "RestartForceExitStatus" "100"
+
+    if [[ -n "$group" ]]; then
+        iniset -sudo $unitfile "Service" "Group" "$group"
+    fi
+    iniset -sudo $unitfile "Install" "WantedBy" "multi-user.target"
+
+    # changes to existing units sometimes need a refresh
+    $SYSTEMCTL daemon-reload
+}
+
+function _common_systemd_pitfalls {
+    local cmd=$1
+    # do some sanity checks on $cmd to see things we don't expect to work
+
+    if [[ "$cmd" =~ "sudo" ]]; then
+        local msg
+        msg=$(cat <<EOF
+You are trying to use run_process with sudo, this is not going to work under systemd.
+
+If you need to run a service as a user other than $STACK_USER call it with:
+
+   run_process \$name \$cmd \$group \$user
+EOF
+)
+        die $LINENO "$msg"
     fi
 
-    rm -f "$SERVICE_DIR/$SCREEN_NAME"/*.failure
+    if [[ ! "$cmd" =~ ^/ ]]; then
+        local msg
+        msg=$(cat <<EOF
+The cmd="$cmd" does not start with an absolute path. It will fail to
+start under systemd.
+
+Please update your run_process stanza to have an absolute path.
+EOF
+)
+        die $LINENO "$msg"
+    fi
+
+}
+
+# Helper function to build a basic unit file and run it under systemd.
+function _run_under_systemd {
+    local service=$1
+    local command="$2"
+    local cmd=$command
+    # sanity check the command
+    _common_systemd_pitfalls "$cmd"
+
+    local systemd_service="devstack@$service.service"
+    local group=$3
+    local user=${4:-$STACK_USER}
+    if [[ "$command" =~ "uwsgi" ]] ; then
+        write_uwsgi_user_unit_file $systemd_service "$cmd" "$group" "$user"
+    else
+        write_user_unit_file $systemd_service "$cmd" "$group" "$user"
+    fi
+
+    $SYSTEMCTL enable $systemd_service
+    $SYSTEMCTL start $systemd_service
 }
 
 # Find out if a process exists by partial name.
@@ -1473,138 +1496,22 @@
-# If an optional group is provided sg will be used to run the
-# command as that group.
-# Uses globals ``USE_SCREEN``
+# If an optional group is provided, the service runs as that group
+# via the systemd unit's Group= setting.
-# run_process service "command-line" [group]
+# run_process service "command-line" [group] [user]
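+#
+# Example (illustrative; variables as used elsewhere in devstack):
+#   run_process n-cpu "$NOVA_BIN_DIR/nova-compute --config-file $NOVA_CONF" $LIBVIRT_GROUP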
 function run_process {
     local service=$1
     local command="$2"
     local group=$3
-    local subservice=$4
+    local user=$4
 
-    local name=${subservice:-$service}
+    local name=$service
 
     time_start "run_process"
     if is_service_enabled $service; then
-        if [[ "$USE_SCREEN" = "True" ]]; then
-            screen_process "$name" "$command" "$group"
-        else
-            # Spawn directly without screen
-            _run_process "$name" "$command" "$group" &
-        fi
+        _run_under_systemd "$name" "$command" "$group" "$user"
     fi
     time_stop "run_process"
 }
 
-# Helper to launch a process in a named screen
-# Uses globals ``CURRENT_LOG_TIME``, ```LOGDIR``, ``SCREEN_LOGDIR``, `SCREEN_NAME``,
-# ``SERVICE_DIR``, ``SCREEN_IS_LOGGING``
-# screen_process name "command-line" [group]
-# Run a command in a shell in a screen window, if an optional group
-# is provided, use sg to set the group of the command.
-function screen_process {
-    local name=$1
-    local command="$2"
-    local group=$3
-
-    SCREEN_NAME=${SCREEN_NAME:-stack}
-    SERVICE_DIR=${SERVICE_DIR:-${DEST}/status}
-
-    screen -S $SCREEN_NAME -X screen -t $name
-
-    local logfile="${name}.log.${CURRENT_LOG_TIME}"
-    local real_logfile="${LOGDIR}/${logfile}"
-    echo "LOGDIR: $LOGDIR"
-    echo "SCREEN_LOGDIR: $SCREEN_LOGDIR"
-    echo "log: $real_logfile"
-    if [[ -n ${LOGDIR} ]]; then
-        if [[ "$SCREEN_IS_LOGGING" == "True" ]]; then
-            screen -S $SCREEN_NAME -p $name -X logfile "$real_logfile"
-            screen -S $SCREEN_NAME -p $name -X log on
-        fi
-        # If logging isn't active then avoid a broken symlink
-        touch "$real_logfile"
-        bash -c "cd '$LOGDIR' && ln -sf '$logfile' ${name}.log"
-        if [[ -n ${SCREEN_LOGDIR} ]]; then
-            # Drop the backward-compat symlink
-            ln -sf "$real_logfile" ${SCREEN_LOGDIR}/screen-${1}.log
-        fi
-    fi
-
-    # sleep to allow bash to be ready to be send the command - we are
-    # creating a new window in screen and then sends characters, so if
-    # bash isn't running by the time we send the command, nothing
-    # happens.  This sleep was added originally to handle gate runs
-    # where we needed this to be at least 3 seconds to pass
-    # consistently on slow clouds. Now this is configurable so that we
-    # can determine a reasonable value for the local case which should
-    # be much smaller.
-    sleep ${SCREEN_SLEEP:-3}
-
-    NL=`echo -ne '\015'`
-    # This fun command does the following:
-    # - the passed server command is backgrounded
-    # - the pid of the background process is saved in the usual place
-    # - the server process is brought back to the foreground
-    # - if the server process exits prematurely the fg command errors
-    # and a message is written to stdout and the process failure file
-    #
-    # The pid saved can be used in stop_process() as a process group
-    # id to kill off all child processes
-    if [[ -n "$group" ]]; then
-        command="sg $group '$command'"
-    fi
-
-    # Append the process to the screen rc file
-    screen_rc "$name" "$command"
-
-    screen -S $SCREEN_NAME -p $name -X stuff "$command & echo \$! >$SERVICE_DIR/$SCREEN_NAME/${name}.pid; fg || echo \"$name failed to start. Exit code: \$?\" | tee \"$SERVICE_DIR/$SCREEN_NAME/${name}.failure\"$NL"
-}
-
-# Screen rc file builder
-# Uses globals ``SCREEN_NAME``, ``SCREENRC``, ``SCREEN_IS_LOGGING``
-# screen_rc service "command-line"
-function screen_rc {
-    SCREEN_NAME=${SCREEN_NAME:-stack}
-    SCREENRC=$TOP_DIR/$SCREEN_NAME-screenrc
-    if [[ ! -e $SCREENRC ]]; then
-        # Name the screen session
-        echo "sessionname $SCREEN_NAME" > $SCREENRC
-        # Set a reasonable statusbar
-        echo "hardstatus alwayslastline '$SCREEN_HARDSTATUS'" >> $SCREENRC
-        # Some distributions override PROMPT_COMMAND for the screen terminal type - turn that off
-        echo "setenv PROMPT_COMMAND /bin/true" >> $SCREENRC
-        echo "screen -t shell bash" >> $SCREENRC
-    fi
-    # If this service doesn't already exist in the screenrc file
-    if ! grep $1 $SCREENRC 2>&1 > /dev/null; then
-        NL=`echo -ne '\015'`
-        echo "screen -t $1 bash" >> $SCREENRC
-        echo "stuff \"$2$NL\"" >> $SCREENRC
-
-        if [[ -n ${LOGDIR} ]] && [[ "$SCREEN_IS_LOGGING" == "True" ]]; then
-            echo "logfile ${LOGDIR}/${1}.log.${CURRENT_LOG_TIME}" >>$SCREENRC
-            echo "log on" >>$SCREENRC
-        fi
-    fi
-}
-
-# Stop a service in screen
-# If a PID is available use it, kill the whole process group via TERM
-# If screen is being used kill the screen window; this will catch processes
-# that did not leave a PID behind
-# Uses globals ``SCREEN_NAME``, ``SERVICE_DIR``
-# screen_stop_service service
-function screen_stop_service {
-    local service=$1
-
-    SCREEN_NAME=${SCREEN_NAME:-stack}
-    SERVICE_DIR=${SERVICE_DIR:-${DEST}/status}
-
-    if is_service_enabled $service; then
-        # Clean up the screen window
-        screen -S $SCREEN_NAME -p $service -X kill || true
-    fi
-}
-
 # Stop a service process
 # If a PID is available use it, kill the whole process group via TERM
 # If screen is being used kill the screen window; this will catch processes
@@ -1617,149 +1524,34 @@
     SERVICE_DIR=${SERVICE_DIR:-${DEST}/status}
 
     if is_service_enabled $service; then
-        # Kill via pid if we have one available
-        if [[ -r $SERVICE_DIR/$SCREEN_NAME/$service.pid ]]; then
-            pkill -g $(cat $SERVICE_DIR/$SCREEN_NAME/$service.pid)
-            # oslo.service tends to stop actually shutting down
-            # reliably in between releases because someone believes it
-            # is dying too early due to some inflight work they
-            # have. This is a tension. It happens often enough we're
-            # going to just account for it in devstack and assume it
-            # doesn't work.
-            #
-            # Set OSLO_SERVICE_WORKS=True to skip this block
-            if [[ -z "$OSLO_SERVICE_WORKS" ]]; then
-                # TODO(danms): Remove this double-kill when we have
-                # this fixed in all services:
-                # https://bugs.launchpad.net/oslo-incubator/+bug/1446583
-                sleep 1
-                # /bin/true because pkill on a non existent process returns an error
-                pkill -g $(cat $SERVICE_DIR/$SCREEN_NAME/$service.pid) || /bin/true
-            fi
-            rm $SERVICE_DIR/$SCREEN_NAME/$service.pid
-        fi
-        if [[ "$USE_SCREEN" = "True" ]]; then
-            # Clean up the screen window
-            screen_stop_service $service
+        # Only act on units which appear enabled; this also skips,
+        # without failing, units that don't really exist, as is the
+        # case for keystone.
+        if $SYSTEMCTL is-enabled devstack@$service.service; then
+            $SYSTEMCTL stop devstack@$service.service
+            $SYSTEMCTL disable devstack@$service.service
         fi
     fi
 }
 
-# Helper to get the status of each running service
-# Uses globals ``SCREEN_NAME``, ``SERVICE_DIR``
-# service_check
+# use systemctl to check service status
 function service_check {
     local service
-    local failures
-    SCREEN_NAME=${SCREEN_NAME:-stack}
-    SERVICE_DIR=${SERVICE_DIR:-${DEST}/status}
-
-
-    if [[ ! -d "$SERVICE_DIR/$SCREEN_NAME" ]]; then
-        echo "No service status directory found"
-        return
-    fi
-
-    # Check if there is any failure flag file under $SERVICE_DIR/$SCREEN_NAME
-    # make this -o errexit safe
-    failures=`ls "$SERVICE_DIR/$SCREEN_NAME"/*.failure 2>/dev/null || /bin/true`
-
-    for service in $failures; do
-        service=`basename $service`
-        service=${service%.failure}
-        echo "Error: Service $service is not running"
-    done
-
-    if [ -n "$failures" ]; then
-        die $LINENO "More details about the above errors can be found with screen"
-    fi
-}
-
-# Tail a log file in a screen if USE_SCREEN is true.
-# Uses globals ``USE_SCREEN``
-function tail_log {
-    local name=$1
-    local logfile=$2
-
-    if [[ "$USE_SCREEN" = "True" ]]; then
-        screen_process "$name" "sudo tail -f $logfile | sed -u 's/\\\\\\\\x1b/\o033/g'"
-    fi
-}
-
-
-# Deprecated Functions
-# --------------------
-
-# _old_run_process() is designed to be backgrounded by old_run_process() to simulate a
-# fork.  It includes the dirty work of closing extra filehandles and preparing log
-# files to produce the same logs as screen_it().  The log filename is derived
-# from the service name and global-and-now-misnamed ``SCREEN_LOGDIR``
-# Uses globals ``CURRENT_LOG_TIME``, ``SCREEN_LOGDIR``, ``SCREEN_NAME``, ``SERVICE_DIR``
-# _old_run_process service "command-line"
-function _old_run_process {
-    local service=$1
-    local command="$2"
-
-    # Undo logging redirections and close the extra descriptors
-    exec 1>&3
-    exec 2>&3
-    exec 3>&-
-    exec 6>&-
-
-    if [[ -n ${SCREEN_LOGDIR} ]]; then
-        exec 1>&${SCREEN_LOGDIR}/screen-${1}.log.${CURRENT_LOG_TIME} 2>&1
-        ln -sf ${SCREEN_LOGDIR}/screen-${1}.log.${CURRENT_LOG_TIME} ${SCREEN_LOGDIR}/screen-${1}.log
-
-        # TODO(dtroyer): Hack to get stdout from the Python interpreter for the logs.
-        export PYTHONUNBUFFERED=1
-    fi
-
-    exec /bin/bash -c "$command"
-    die "$service exec failure: $command"
-}
-
-# old_run_process() launches a child process that closes all file descriptors and
-# then exec's the passed in command.  This is meant to duplicate the semantics
-# of screen_it() without screen.  PIDs are written to
-# ``$SERVICE_DIR/$SCREEN_NAME/$service.pid`` by the spawned child process.
-# old_run_process service "command-line"
-function old_run_process {
-    local service=$1
-    local command="$2"
-
-    # Spawn the child process
-    _old_run_process "$service" "$command" &
-    echo $!
-}
-
-# Compatibility for existing start_XXXX() functions
-# Uses global ``USE_SCREEN``
-# screen_it service "command-line"
-function screen_it {
-    if is_service_enabled $1; then
-        # Append the service to the screen rc file
-        screen_rc "$1" "$2"
-
-        if [[ "$USE_SCREEN" = "True" ]]; then
-            screen_process "$1" "$2"
-        else
-            # Spawn directly without screen
-            old_run_process "$1" "$2" >$SERVICE_DIR/$SCREEN_NAME/$1.pid
+    for service in ${ENABLED_SERVICES//,/ }; do
+        # some services were renamed (e.g. key => keystone) and don't
+        # have a unit of the same name, so only check enabled units
+        if $SYSTEMCTL is-enabled devstack@$service.service; then
+            # no-pager is needed because otherwise status dumps to a
+            # pager when in interactive mode, which will stop a manual
+            # devstack run.
+            $SYSTEMCTL status devstack@$service.service --no-pager
         fi
-    fi
+    done
 }
 
-# Compatibility for existing stop_XXXX() functions
-# Stop a service in screen
-# If a PID is available use it, kill the whole process group via TERM
-# If screen is being used kill the screen window; this will catch processes
-# that did not leave a PID behind
-# screen_stop service
-function screen_stop {
-    # Clean up the screen window
-    stop_process $1
-}
 
+function tail_log {
+    deprecated "With the removal of screen support, tail_log is deprecated and will be removed after Queens"
+}
 
 # Plugin Functions
 # =================
@@ -1775,7 +1567,7 @@
     local name=$1
     local url=$2
     local branch=${3:-master}
-    if [[ ",${DEVSTACK_PLUGINS}," =~ ,${name}, ]]; then
+    if is_plugin_enabled $name; then
         die $LINENO "Plugin attempted to be enabled twice: ${name} ${url} ${branch}"
     fi
     DEVSTACK_PLUGINS+=",$name"
@@ -1784,6 +1576,19 @@
     GITBRANCH[$name]=$branch
 }
 
+# is_plugin_enabled <name>
+#
+# Check if the plugin was enabled, e.g. using enable_plugin
+#
+# ``name`` The name with which the plugin was enabled
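+#
+# Example (hypothetical plugin name):
+#   if is_plugin_enabled my-plugin; then echo "enabled"; fi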
+function is_plugin_enabled {
+    local name=$1
+    if [[ ",${DEVSTACK_PLUGINS}," =~ ",${name}," ]]; then
+        return 0
+    fi
+    return 1
+}
+
 # fetch_plugins
 #
 # clones all plugins
@@ -2273,13 +2078,31 @@
 }
 
 
+# Return just the <major>.<minor> for the given python interpreter
+function _get_python_version {
+    local interp=$1
+    local version
+    # disable erroring out here, otherwise if python 3 doesn't exist we fail hard.
+    if [[ -x $(which $interp 2> /dev/null) ]]; then
+        version=$($interp -c 'import sys; print("%s.%s" % sys.version_info[0:2])')
+    fi
+    echo ${version}
+}
+
 # Return the current python as "python<major>.<minor>"
 function python_version {
     local python_version
-    python_version=$(python -c 'import sys; print("%s.%s" % sys.version_info[0:2])')
+    python_version=$(_get_python_version python2)
     echo "python${python_version}"
 }
 
+function python3_version {
+    local python3_version
+    python3_version=$(_get_python_version python3)
+    echo "python${python_version}"
+}
+
+
 # Service wrapper to restart services
 # restart_service service-name
 function restart_service {
@@ -2376,9 +2199,9 @@
-# Resolution is only in whole seconds, so should be used for long
-# running activities.
+# Time is recorded in milliseconds, but totals are reported in whole
+# seconds, so this is best suited to long running activities.
 
-declare -A _TIME_TOTAL
-declare -A _TIME_START
-declare -r _TIME_BEGIN=$(date +%s)
+declare -A -g _TIME_TOTAL
+declare -A -g _TIME_START
+declare -r -g _TIME_BEGIN=$(date +%s)
 
 # time_start $name
 #
@@ -2390,7 +2213,7 @@
     if [[ -n "$start_time" ]]; then
         die $LINENO "Trying to start the clock on $name, but it's already been started"
     fi
-    _TIME_START[$name]=$(date +%s)
+    _TIME_START[$name]=$(date +%s%3N)
 }
 
 # time_stop $name
@@ -2411,7 +2234,7 @@
     if [[ -z "$start_time" ]]; then
         die $LINENO "Trying to stop the clock on $name, but it was never started"
     fi
-    end_time=$(date +%s)
+    end_time=$(date +%s%3N)
     elapsed_time=$(($end_time - $start_time))
     total=${_TIME_TOTAL[$name]:-0}
     # reset the clock so we can start it in the future
@@ -2419,16 +2242,61 @@
     _TIME_TOTAL[$name]=$(($total + $elapsed_time))
 }
 
+function oscwrap {
+    local out
+    local rc
+    local start
+    local end
+    # Cannot use time_start and time_stop as we run in subshells and
+    # those rely on modifying vars in the same process (which cannot
+    # happen from a subshell).
+    start=$(date +%s%3N)
+    out=$(command openstack "$@")
+    rc=$?
+    end=$(date +%s%3N)
+    echo $((end - start)) >> $OSCWRAP_TIMER_FILE
+
+    echo "$out"
+    return $rc
+}
+
+function install_oscwrap {
+    # File to accumulate our timing data
+    OSCWRAP_TIMER_FILE=$(mktemp)
+    # Bash doesn't expand aliases in non-interactive shells by
+    # default; allow it for the aliases we want to whitelist.
+    shopt -s expand_aliases
+    # Remove all aliases that might be expanded to preserve old unexpanded
+    # behavior
+    unalias -a
+    # Add only the alias we want for openstack
+    alias openstack=oscwrap
+}
+
+function cleanup_oscwrap {
+    local total=0
+    if python3_enabled ; then
+        local python=python3
+    else
+        local python=python
+    fi
+    total=$(cat $OSCWRAP_TIMER_FILE | $python -c "import sys; print(sum(int(l) for l in sys.stdin))")
+    _TIME_TOTAL["osc"]=$total
+    rm $OSCWRAP_TIMER_FILE
+}
+
 # time_totals
 #  Print out total time summary
 function time_totals {
     local elapsed_time
     local end_time
-    local len=15
+    local len=20
     local xtrace
+    local unaccounted_time
 
     end_time=$(date +%s)
     elapsed_time=$(($end_time - $_TIME_BEGIN))
+    unaccounted_time=$elapsed_time
 
     # pad 1st column this far
     for t in ${!_TIME_TOTAL[*]}; do
@@ -2437,20 +2305,27 @@
         fi
     done
 
+    cleanup_oscwrap
+
     xtrace=$(set +o | grep xtrace)
     set +o xtrace
 
     echo
     echo "========================="
     echo "DevStack Component Timing"
+    echo " (times are in seconds)  "
     echo "========================="
-    printf "%-${len}s %3d\n" "Total runtime" "$elapsed_time"
-    echo
     for t in ${!_TIME_TOTAL[*]}; do
         local v=${_TIME_TOTAL[$t]}
+        # because we're recording in milliseconds
+        v=$(($v / 1000))
         printf "%-${len}s %3d\n" "$t" "$v"
+        unaccounted_time=$(($unaccounted_time - $v))
     done
+    echo "-------------------------"
+    printf "%-${len}s %3d\n" "Unaccounted time" "$unaccounted_time"
     echo "========================="
+    printf "%-${len}s %3d\n" "Total runtime" "$elapsed_time"
 
     $xtrace
 }
diff --git a/inc/python b/inc/python
index 2bdc097..5e7f742 100644
--- a/inc/python
+++ b/inc/python
@@ -19,7 +19,7 @@
 
 # PROJECT_VENV contains the name of the virtual environment for each
 # project.  A null value installs to the system Python directories.
-declare -A PROJECT_VENV
+declare -A -g PROJECT_VENV
 
 
 # Python Functions
@@ -320,6 +320,14 @@
     fi
 
     $xtrace
+
+    # Also install test requirements
+    local install_test_reqs=""
+    local test_req="${!#}/test-requirements.txt"
+    if [[ -e "$test_req" ]]; then
+        install_test_reqs="-r $test_req"
+    fi
+
     # adding SETUPTOOLS_SYS_PATH_TECHNIQUE is a workaround to keep
     # the same behaviour of setuptools before version 25.0.0.
     # related issue: https://github.com/pypa/pip/issues/3874
@@ -329,28 +337,31 @@
         no_proxy="${no_proxy:-}" \
         PIP_FIND_LINKS=$PIP_FIND_LINKS \
         SETUPTOOLS_SYS_PATH_TECHNIQUE=rewrite \
-        $cmd_pip $upgrade \
+        $cmd_pip $upgrade $install_test_reqs \
         $@
     result=$?
 
-    # Also install test requirements
-    local test_req="${!#}/test-requirements.txt"
-    if [[ $result == 0 ]] && [[ -e "$test_req" ]]; then
-        echo "Installing test-requirements for $test_req"
-        $sudo_pip \
-            http_proxy=${http_proxy:-} \
-            https_proxy=${https_proxy:-} \
-            no_proxy=${no_proxy:-} \
-            PIP_FIND_LINKS=$PIP_FIND_LINKS \
-            $cmd_pip $upgrade \
-            -r $test_req
-        result=$?
-    fi
-
     time_stop "pip_install"
     return $result
 }
 
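+# Uninstall a python package, from the PIP_VIRTUAL_ENV virtualenv if
+# one is set, otherwise from the system.
+#
+# Usage: pip_uninstall <package name>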
+function pip_uninstall {
+    # Skip uninstall if offline
+    [[ "${OFFLINE}" = "True" ]] && return
+
+    local name=$1
+    if [[ -n ${PIP_VIRTUAL_ENV:=} && -d ${PIP_VIRTUAL_ENV} ]]; then
+        local cmd_pip=$PIP_VIRTUAL_ENV/bin/pip
+        local sudo_pip="env"
+    else
+        local cmd_pip
+        cmd_pip=$(get_pip_command $PYTHON2_VERSION)
+        local sudo_pip="sudo -H"
+    fi
+    # don't error if we can't uninstall, it might not be there
+    $sudo_pip $cmd_pip uninstall -y $name || /bin/true
+}
+
 # get version of a package from global requirements file
 # get_from_global_requirements <package>
 function get_from_global_requirements {
@@ -433,7 +444,7 @@
 # project_dir: directory of project repo (e.g., /opt/stack/keystone)
 # extras: comma-separated list of optional dependencies to install
 #         (e.g., ldap,memcache).
-#         See http://docs.openstack.org/developer/pbr/#extra-requirements
+#         See https://docs.openstack.org/pbr/latest/user/using.html#extra-requirements
 # The command is like "pip install <project_dir>[<extras>]"
 function setup_install {
     local project_dir=$1
@@ -447,7 +458,7 @@
 # project_dir: directory of project repo (e.g., /opt/stack/keystone)
 # extras: comma-separated list of optional dependencies to install
 #         (e.g., ldap,memcache).
-#         See http://docs.openstack.org/developer/pbr/#extra-requirements
+#         See https://docs.openstack.org/pbr/latest/user/using.html#extra-requirements
 # The command is like "pip install -e <project_dir>[<extras>]"
 function setup_develop {
     local project_dir=$1
@@ -479,7 +490,7 @@
 # flags: pip CLI options/flags
 # extras: comma-separated list of optional dependencies to install
 #         (e.g., ldap,memcache).
-#         See http://docs.openstack.org/developer/pbr/#extra-requirements
+#         See https://docs.openstack.org/pbr/latest/user/using.html#extra-requirements
 # The command is like "pip install <flags> <project_dir>[<extras>]"
 function _setup_package_with_constraints_edit {
     local project_dir=$1
@@ -515,7 +526,7 @@
 # flags: pip CLI options/flags
 # extras: comma-separated list of optional dependencies to install
 #         (e.g., ldap,memcache).
-#         See http://docs.openstack.org/developer/pbr/#extra-requirements
+#         See https://docs.openstack.org/pbr/latest/user/using.html#extra-requirements
 # The command is like "pip install <flags> <project_dir>[<extras>]"
 function setup_package {
     local project_dir=$1
@@ -553,6 +564,8 @@
 function install_python3 {
     if is_ubuntu; then
         apt_get install python${PYTHON3_VERSION} python${PYTHON3_VERSION}-dev
+    elif is_suse; then
+        install_package python3-devel python3-dbm
     fi
 }
 
diff --git a/lib/apache b/lib/apache
index d1a11ae..39d5b7b 100644
--- a/lib/apache
+++ b/lib/apache
@@ -53,8 +53,15 @@
 function enable_apache_mod {
     local mod=$1
     # Apache installation, because we mark it NOPRIME
-    if is_ubuntu || is_suse ; then
-        if ! a2query -m $mod ; then
+    if is_ubuntu; then
+        # Skip mod_version as it is not a valid mod to enable
+        # on Debian/Ubuntu; there it is built in.
+        if [[ "$mod" != "version" ]] && ! a2query -m $mod ; then
+            sudo a2enmod $mod
+            restart_apache_server
+        fi
+    elif is_suse; then
+        if ! a2enmod -q $mod ; then
             sudo a2enmod $mod
             restart_apache_server
         fi
@@ -66,6 +73,48 @@
     fi
 }
 
+# NOTE(sdague): Install uwsgi, including the apache module; we need
+# 2.0.6+ to get a working mod_proxy_uwsgi. We can probably build a
+# check for that and do it differently on different platforms.
+function install_apache_uwsgi {
+    local apxs="apxs2"
+    if is_fedora; then
+        apxs="apxs"
+    fi
+
+    # Ubuntu xenial ships an older uwsgi, so the proxy doesn't
+    # actually work. Hence we have to build from source for now.
+    #
+    # Centos 7 actually has the module in epel, but there was a big
+    # push to disable epel by default. As such, compile from source
+    # there as well.
+
+    local dir
+    dir=$(mktemp -d)
+    pushd $dir
+    pip_install uwsgi
+    pip download uwsgi -c $REQUIREMENTS_DIR/upper-constraints.txt
+    local uwsgi
+    uwsgi=$(ls uwsgi*)
+    tar xvf $uwsgi
+    cd uwsgi*/apache2
+    sudo $apxs -i -c mod_proxy_uwsgi.c
+    popd
+    # delete the temp directory
+    sudo rm -rf $dir
+
+    if is_ubuntu || is_suse ; then
+        # we've got to enable proxy and proxy_uwsgi for this to work
+        sudo a2enmod proxy
+        sudo a2enmod proxy_uwsgi
+    elif is_fedora; then
+        # redhat is missing a nice way to turn on/off modules
+        echo "LoadModule proxy_uwsgi_module modules/mod_proxy_uwsgi.so" \
+            | sudo tee /etc/httpd/conf.modules.d/02-proxy-uwsgi.conf
+    fi
+    restart_apache_server
+}
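
For anyone debugging this path, a quick way to verify the freshly built module was actually picked up (command name varies by distro):

    apache2ctl -M | grep proxy_uwsgi    # Debian/Ubuntu/SUSE
    httpd -M | grep proxy_uwsgi         # Fedora/CentOS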
+
 # install_apache_wsgi() - Install Apache server and wsgi module
 function install_apache_wsgi {
     # Apache installation, because we mark it NOPRIME
@@ -83,6 +132,10 @@
     elif is_fedora; then
         sudo rm -f /etc/httpd/conf.d/000-*
         install_package httpd mod_wsgi
+        # For consistency with Ubuntu, switch to the worker mpm, as
+        # the default is prefork
+        sudo sed -i '/mod_mpm_prefork.so/s/^/#/g' /etc/httpd/conf.modules.d/00-mpm.conf
+        sudo sed -i '/mod_mpm_worker.so/s/^#//g' /etc/httpd/conf.modules.d/00-mpm.conf
     elif is_suse; then
         install_package apache2 apache2-mod_wsgi
     else
@@ -90,49 +143,15 @@
     fi
     # WSGI isn't enabled by default, enable it
     enable_apache_mod wsgi
-
-    # ensure mod_version enabled for <IfVersion ...>.  This is
-    # built-in statically on anything recent, but precise (2.2)
-    # doesn't have it enabled
-    sudo a2enmod version || true
-}
-
-# get_apache_version() - return the version of Apache installed
-# This function is used to determine the Apache version installed. There are
-# various differences between Apache 2.2 and 2.4 that warrant special handling.
-function get_apache_version {
-    if is_ubuntu; then
-        local version_str
-        version_str=$(sudo /usr/sbin/apache2ctl -v | awk '/Server version/ {print $3}' | cut -f2 -d/)
-    elif is_fedora; then
-        local version_str
-        version_str=$(rpm -qa --queryformat '%{VERSION}' httpd)
-    elif is_suse; then
-        local version_str
-        version_str=$(rpm -qa --queryformat '%{VERSION}' apache2)
-    else
-        exit_distro_not_supported "cannot determine apache version"
-    fi
-    if [[ "$version_str" =~ ^2\.2\. ]]; then
-        echo "2.2"
-    elif [[ "$version_str" =~ ^2\.4\. ]]; then
-        echo "2.4"
-    else
-        exit_distro_not_supported "apache version not supported"
-    fi
 }
 
 # apache_site_config_for() - The filename of the site's configuration file.
 # This function uses the global variables APACHE_NAME and APACHE_CONF_DIR.
 #
-# On Ubuntu 14.04, the site configuration file must have a .conf suffix for a2ensite and a2dissite to
+# On Ubuntu 14.04+, the site configuration file must have a .conf suffix for a2ensite and a2dissite to
 # recognise it. a2ensite and a2dissite ignore the .conf suffix used as parameter. The default sites'
 # files are 000-default.conf and default-ssl.conf.
 #
-# On Ubuntu 12.04, the site configuration file may have any format, as long as it is in
-# /etc/apache2/sites-available/. a2ensite and a2dissite need the entire file name to work. The default
-# sites' files are default and default-ssl.
-#
 # On Fedora and openSUSE, any file in /etc/httpd/conf.d/ whose name ends with .conf is enabled.
 #
 # On RHEL and CentOS, things should hopefully work as in Fedora.
@@ -141,22 +160,14 @@
 # +----------------------+--------------------+--------------------------+--------------------------+
 # | Distribution         | File name          | Site enabling command    | Site disabling command   |
 # +----------------------+--------------------+--------------------------+--------------------------+
-# | Ubuntu 12.04         | site               | a2ensite site            | a2dissite site           |
 # | Ubuntu 14.04         | site.conf          | a2ensite site            | a2dissite site           |
 # | Fedora, RHEL, CentOS | site.conf.disabled | mv site.conf{.disabled,} | mv site.conf{,.disabled} |
 # +----------------------+--------------------+--------------------------+--------------------------+
 function apache_site_config_for {
     local site=$@
     if is_ubuntu; then
-        local apache_version
-        apache_version=$(get_apache_version)
-        if [[ "$apache_version" == "2.2" ]]; then
-            # Ubuntu 12.04 - Apache 2.2
-            echo $APACHE_CONF_DIR/${site}
-        else
-            # Ubuntu 14.04 - Apache 2.4
-            echo $APACHE_CONF_DIR/${site}.conf
-        fi
+        # Ubuntu 14.04 - Apache 2.4
+        echo $APACHE_CONF_DIR/${site}.conf
     elif is_fedora || is_suse; then
         # fedora conf.d is only imported if it ends with .conf so this is approx the same
         local enabled_site_file="$APACHE_CONF_DIR/${site}.conf"
@@ -171,6 +182,8 @@
 # enable_apache_site() - Enable a particular apache site
 function enable_apache_site {
     local site=$@
+    # Many of our sites use mod_version. Just enable it.
+    enable_apache_mod version
     if is_ubuntu; then
         sudo a2ensite ${site}
     elif is_fedora || is_suse; then
@@ -186,7 +199,7 @@
 function disable_apache_site {
     local site=$@
     if is_ubuntu; then
-        sudo a2dissite ${site}
+        sudo a2dissite ${site} || true
     elif is_fedora || is_suse; then
         local enabled_site_file="$APACHE_CONF_DIR/${site}.conf"
         # Do nothing if no site config exists
@@ -215,16 +228,136 @@
     # Apache can be slow to stop, doing an explicit stop, sleep, start helps
     # to mitigate issues where apache will claim a port it's listening on is
     # still in use and fail to start.
-    time_start "restart_apache_server"
-    stop_service $APACHE_NAME
-    sleep 3
-    start_service $APACHE_NAME
-    time_stop "restart_apache_server"
+    restart_service $APACHE_NAME
 }
 
-# reload_apache_server
-function reload_apache_server {
-    reload_service $APACHE_NAME
+function write_uwsgi_config {
+    local file=$1
+    local wsgi=$2
+    local url=$3
+    local http=$4
+    local name=""
+    name=$(basename $wsgi)
+
+    # create a home for the sockets; note don't use /tmp -- apache has
+    # a private view of it on some platforms.
+    local socket_dir='/var/run/uwsgi'
+
+    # /var/run will be empty on ubuntu after reboot, so we can use systemd-tmpfiles
+    # to automatically create $socket_dir.
+    sudo mkdir -p /etc/tmpfiles.d/
+    echo "d $socket_dir 0755 $STACK_USER root" | sudo tee /etc/tmpfiles.d/uwsgi.conf
+    sudo systemd-tmpfiles --create /etc/tmpfiles.d/uwsgi.conf
+
+    local socket="$socket_dir/${name}.socket"
+
+    # always cleanup given that we are using iniset here
+    rm -rf $file
+    iniset "$file" uwsgi wsgi-file "$wsgi"
+    iniset "$file" uwsgi processes $API_WORKERS
+    # This is running standalone
+    iniset "$file" uwsgi master true
+    # Set die-on-term & exit-on-reload so that uwsgi shuts down
+    iniset "$file" uwsgi die-on-term true
+    iniset "$file" uwsgi exit-on-reload true
+    # Set worker-reload-mercy so that workers will not exit until the
+    # configured time after a graceful shutdown
+    iniset "$file" uwsgi worker-reload-mercy $WORKER_TIMEOUT
+    iniset "$file" uwsgi enable-threads true
+    iniset "$file" uwsgi plugins python
+    # uwsgi recommends this to prevent thundering herd on accept.
+    iniset "$file" uwsgi thunder-lock true
+    # Set hook to trigger graceful shutdown on SIGTERM
+    iniset "$file" uwsgi hook-master-start "unix_signal:15 gracefully_kill_them_all"
+    # Override the default 4k buffer size for headers.
+    iniset "$file" uwsgi buffer-size 65535
+    # Make sure the client doesn't try to re-use the connection.
+    iniset "$file" uwsgi add-header "Connection: close"
+    # This ensures that file descriptors aren't shared between processes.
+    iniset "$file" uwsgi lazy-apps true
+
+    # If we said bind directly to http, then do that and don't start the apache proxy
+    if [[ -n "$http" ]]; then
+        iniset "$file" uwsgi http $http
+    else
+        local apache_conf=""
+        apache_conf=$(apache_site_config_for $name)
+        echo "SetEnv proxy-sendcl 1" | sudo tee $apache_conf
+        iniset "$file" uwsgi socket "$socket"
+        iniset "$file" uwsgi chmod-socket 666
+        echo "ProxyPass \"${url}\" \"unix:${socket}|uwsgi://uwsgi-uds-${name}/\" retry=0 " | sudo tee -a $apache_conf
+        enable_apache_site $name
+        restart_apache_server
+    fi
+}
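
Roughly, a call like write_uwsgi_config "$CINDER_UWSGI_CONF" "$CINDER_UWSGI" "/volume" (as lib/cinder does below) yields an ini and vhost fragment along these lines; the paths and worker count are illustrative:

    [uwsgi]
    wsgi-file = /usr/local/bin/cinder-wsgi
    processes = 4
    master = true
    socket = /var/run/uwsgi/cinder-wsgi.socket
    chmod-socket = 666

    # and in the apache site config:
    SetEnv proxy-sendcl 1
    ProxyPass "/volume" "unix:/var/run/uwsgi/cinder-wsgi.socket|uwsgi://uwsgi-uds-cinder-wsgi/" retry=0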
+
+# For services using chunked encoding (currently only Glance and Swift
+# are known to use it), we need to use an http proxy instead of
+# mod_proxy_uwsgi because the chunked encoding gets dropped. See:
+# https://github.com/unbit/uwsgi/issues/1540. You can work around this
+# on python2, but that involves having apache buffer the request before
+# sending it to uwsgi.
+function write_local_uwsgi_http_config {
+    local file=$1
+    local wsgi=$2
+    local url=$3
+    local name=""
+    name=$(basename $wsgi)
+
+    # Unlike write_uwsgi_config, this binds uwsgi to a random local
+    # http port and lets apache proxy to it, so no socket dir is needed.
+
+    # always cleanup given that we are using iniset here
+    rm -rf $file
+    iniset "$file" uwsgi wsgi-file "$wsgi"
+    local port=""
+    port=$(get_random_port)
+    iniset "$file" uwsgi http-socket "127.0.0.1:$port"
+    iniset "$file" uwsgi processes $API_WORKERS
+    # This is running standalone
+    iniset "$file" uwsgi master true
+    # Set die-on-term & exit-on-reload so that uwsgi shuts down
+    iniset "$file" uwsgi die-on-term true
+    iniset "$file" uwsgi exit-on-reload true
+    iniset "$file" uwsgi enable-threads true
+    iniset "$file" uwsgi plugins python
+    # uwsgi recommends this to prevent thundering herd on accept.
+    iniset "$file" uwsgi thunder-lock true
+    # Set hook to trigger graceful shutdown on SIGTERM
+    iniset "$file" uwsgi hook-master-start "unix_signal:15 gracefully_kill_them_all"
+    # Set worker-reload-mercy so that workers will not exit until the
+    # configured time after a graceful shutdown
+    iniset "$file" uwsgi worker-reload-mercy $WORKER_TIMEOUT
+    # Override the default 4k buffer size for headers.
+    iniset "$file" uwsgi buffer-size 65535
+    # Make sure the client doesn't try to re-use the connection.
+    iniset "$file" uwsgi add-header "Connection: close"
+    # This ensures that file descriptors aren't shared between processes.
+    iniset "$file" uwsgi lazy-apps true
+    iniset "$file" uwsgi chmod-socket 666
+    iniset "$file" uwsgi http-raw-body true
+    iniset "$file" uwsgi http-chunked-input true
+    iniset "$file" uwsgi http-auto-chunked true
+    iniset "$file" uwsgi http-keepalive false
+    # Increase socket timeout for slow chunked uploads
+    iniset "$file" uwsgi socket-timeout 30
+
+    enable_apache_mod proxy
+    enable_apache_mod proxy_http
+    local apache_conf=""
+    apache_conf=$(apache_site_config_for $name)
+    echo "KeepAlive Off" | sudo tee $apache_conf
+    echo "ProxyPass \"${url}\" \"http://127.0.0.1:$port\" retry=0 " | sudo tee -a $apache_conf
+    enable_apache_site $name
+    restart_apache_server
+}
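
Usage mirrors write_uwsgi_config, minus the unix socket; for example lib/glance below wires up its API with:

    write_local_uwsgi_http_config "$GLANCE_UWSGI_CONF" "$GLANCE_UWSGI" "/image"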
+
+function remove_uwsgi_config {
+    local file=$1
+    local wsgi=$2
+    local name=""
+    name=$(basename $wsgi)
+
+    rm -rf $file
+    disable_apache_site $name
 }
 
 # Restore xtrace
diff --git a/lib/cinder b/lib/cinder
index c17cea0..674787c 100644
--- a/lib/cinder
+++ b/lib/cinder
@@ -55,10 +55,12 @@
 
 CINDER_CONF_DIR=/etc/cinder
 CINDER_CONF=$CINDER_CONF_DIR/cinder.conf
+CINDER_UWSGI=$CINDER_BIN_DIR/cinder-wsgi
+CINDER_UWSGI_CONF=$CINDER_CONF_DIR/cinder-api-uwsgi.ini
 CINDER_API_PASTE_INI=$CINDER_CONF_DIR/api-paste.ini
 
 # Public facing bits
-if is_ssl_enabled_service "cinder" || is_service_enabled tls-proxy; then
+if is_service_enabled tls-proxy; then
     CINDER_SERVICE_PROTOCOL="https"
 fi
 CINDER_SERVICE_HOST=${CINDER_SERVICE_HOST:-$SERVICE_HOST}
@@ -68,12 +70,11 @@
 CINDER_SERVICE_LISTEN_ADDRESS=${CINDER_SERVICE_LISTEN_ADDRESS:-$SERVICE_LISTEN_ADDRESS}
 
 # What type of LVM device should Cinder use for LVM backend
-# Defaults to default, which is thick, the other valid choice
-# is thin, which as the name implies utilizes lvm thin provisioning.
-# Thinly provisioned LVM volumes may be more efficient when using the Cinder
-# image cache, but there are also known race failures with volume snapshots
-# and thinly provisioned LVM volumes, see bug 1642111 for details.
-CINDER_LVM_TYPE=${CINDER_LVM_TYPE:-default}
+# Defaults to auto, which will do thin provisioning if it's a fresh
+# volume group, otherwise it will do thick. The other valid choices are
+# default, which is thick, or thin, which as the name implies utilizes lvm
+# thin provisioning.
+CINDER_LVM_TYPE=${CINDER_LVM_TYPE:-auto}
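
To pin a specific provisioning mode instead of relying on auto-detection, set the type explicitly in local.conf:

    CINDER_LVM_TYPE=thin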
 
 # Default backends
 # The backend format is type:name where type is one of the supported backend
@@ -84,20 +85,6 @@
 # CINDER_ENABLED_BACKENDS=${CINDER_ENABLED_BACKENDS:-lvm:lvmdriver-1,lvm:lvmdriver-2}
 CINDER_ENABLED_BACKENDS=${CINDER_ENABLED_BACKENDS:-lvm:lvmdriver-1}
 
-
-# Should cinder perform secure deletion of volumes?
-# Defaults to zero. Can also be set to none or shred.
-# This was previously CINDER_SECURE_DELETE (True or False).
-# Equivalents using CINDER_VOLUME_CLEAR are zero and none, respectively.
-# Set to none to avoid this bug when testing:
-# https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1023755
-if [[ -n $CINDER_SECURE_DELETE ]]; then
-    CINDER_SECURE_DELETE=$(trueorfalse True CINDER_SECURE_DELETE)
-    if [[ $CINDER_SECURE_DELETE == "False" ]]; then
-        CINDER_VOLUME_CLEAR_DEFAULT="none"
-    fi
-    deprecated "Configure secure Cinder volume deletion using CINDER_VOLUME_CLEAR instead of CINDER_SECURE_DELETE."
-fi
 CINDER_VOLUME_CLEAR=${CINDER_VOLUME_CLEAR:-${CINDER_VOLUME_CLEAR_DEFAULT:-zero}}
 CINDER_VOLUME_CLEAR=$(echo ${CINDER_VOLUME_CLEAR} | tr '[:upper:]' '[:lower:]')
 
@@ -109,10 +96,20 @@
 # https://bugs.launchpad.net/cinder/+bug/1180976
 CINDER_PERIODIC_INTERVAL=${CINDER_PERIODIC_INTERVAL:-60}
 
-CINDER_ISCSI_HELPER=${CINDER_ISCSI_HELPER:-tgtadm}
+# Centos7 switched to using LIO and that's all that's supported there;
+# although the tgt bits are in EPEL, we don't want that for CI
+if is_fedora; then
+    CINDER_ISCSI_HELPER=${CINDER_ISCSI_HELPER:-lioadm}
+    if [[ ${CINDER_ISCSI_HELPER} != "lioadm" ]]; then
+        die "lioadm is the only valid Cinder iscsi_helper config on this platform"
+    fi
+else
+    CINDER_ISCSI_HELPER=${CINDER_ISCSI_HELPER:-tgtadm}
+fi
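
On non-Fedora platforms the helper remains overridable from local.conf; for example, to exercise the LIO path on Ubuntu (install_cinder below installs targetcli for it):

    CINDER_ISCSI_HELPER=lioadm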
 
-# Toggle for deploying Cinder under HTTPD + mod_wsgi
-CINDER_USE_MOD_WSGI=${CINDER_USE_MOD_WSGI:-False}
+# Toggle for deploying Cinder under a wsgi server. Legacy mod_wsgi
+# reference should be cleaned up to more accurately refer to uwsgi.
+CINDER_USE_MOD_WSGI=${CINDER_USE_MOD_WSGI:-True}
 
 # Source the enabled backends
 if is_service_enabled c-vol && [[ -n "$CINDER_ENABLED_BACKENDS" ]]; then
@@ -143,6 +140,7 @@
 # Test if any Cinder services are enabled
 # is_cinder_enabled
 function is_cinder_enabled {
+    [[ ,${DISABLED_SERVICES} =~ ,"cinder" ]] && return 1
     [[ ,${ENABLED_SERVICES} =~ ,"c-" ]] && return 0
     return 1
 }
@@ -200,43 +198,8 @@
         done
     fi
 
-    if [ "$CINDER_USE_MOD_WSGI" == "True" ]; then
-        _cinder_cleanup_apache_wsgi
-    fi
-}
-
-# _cinder_config_apache_wsgi() - Set WSGI config files
-function _cinder_config_apache_wsgi {
-    local cinder_apache_conf
-    cinder_apache_conf=$(apache_site_config_for osapi-volume)
-    local cinder_ssl=""
-    local cinder_certfile=""
-    local cinder_keyfile=""
-    local cinder_api_port=$CINDER_SERVICE_PORT
-    local venv_path=""
-
-    if is_ssl_enabled_service c-api; then
-        cinder_ssl="SSLEngine On"
-        cinder_certfile="SSLCertificateFile $CINDER_SSL_CERT"
-        cinder_keyfile="SSLCertificateKeyFile $CINDER_SSL_KEY"
-    fi
-    if [[ ${USE_VENV} = True ]]; then
-        venv_path="python-path=${PROJECT_VENV["cinder"]}/lib/python2.7/site-packages"
-    fi
-
-    # copy proxy vhost file
-    sudo cp $FILES/apache-cinder-api.template $cinder_apache_conf
-    sudo sed -e "
-        s|%PUBLICPORT%|$cinder_api_port|g;
-        s|%APACHE_NAME%|$APACHE_NAME|g;
-        s|%APIWORKERS%|$API_WORKERS|g
-        s|%CINDER_BIN_DIR%|$CINDER_BIN_DIR|g;
-        s|%SSLENGINE%|$cinder_ssl|g;
-        s|%SSLCERTFILE%|$cinder_certfile|g;
-        s|%SSLKEYFILE%|$cinder_keyfile|g;
-        s|%USER%|$STACK_USER|g;
-        s|%VIRTUALENV%|$venv_path|g
-    " -i $cinder_apache_conf
+    stop_process "c-api"
+    remove_uwsgi_config "$CINDER_UWSGI_CONF" "$CINDER_UWSGI"
 }
 
 # configure_cinder() - Set config files, create data dirs, etc
@@ -249,6 +212,10 @@
 
     configure_rootwrap cinder
 
+    if [[ -f "$CINDER_DIR/etc/cinder/resource_filters.json" ]]; then
+        cp -p "$CINDER_DIR/etc/cinder/resource_filters.json" "$CINDER_CONF_DIR/resource_filters.json"
+    fi
+
     cp $CINDER_DIR/etc/cinder/api-paste.ini $CINDER_API_PASTE_INI
 
     inicomment $CINDER_API_PASTE_INI filter:authtoken auth_host
@@ -288,6 +255,8 @@
 
     iniset $CINDER_CONF DEFAULT os_region_name "$REGION_NAME"
 
+    iniset $CINDER_CONF key_manager api_class cinder.keymgr.conf_key_mgr.ConfKeyManager
+
     if is_service_enabled c-vol && [[ -n "$CINDER_ENABLED_BACKENDS" ]]; then
         local enabled_backends=""
         local default_name=""
@@ -302,6 +271,9 @@
                 default_name=$be_name
             fi
             enabled_backends+=$be_name,
+
+            iniset $CINDER_CONF $be_name volume_clear $CINDER_VOLUME_CLEAR
+
         done
         iniset $CINDER_CONF DEFAULT enabled_backends ${enabled_backends%,*}
         if [[ -n "$default_name" ]]; then
@@ -319,10 +291,17 @@
     fi
 
     if is_service_enabled tls-proxy; then
-        # Set the service port for a proxy to take the original
-        iniset $CINDER_CONF DEFAULT osapi_volume_listen_port $CINDER_SERVICE_PORT_INT
-        iniset $CINDER_CONF DEFAULT public_endpoint $CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT
-        iniset $CINDER_CONF DEFAULT osapi_volume_base_URL $CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT
+        if [[ "$ENABLED_SERVICES" =~ "c-api" ]]; then
+            # Set the service port for a proxy to take the original
+            if [ "$CINDER_USE_MOD_WSGI" == "True" ]; then
+                iniset $CINDER_CONF DEFAULT osapi_volume_listen_port $CINDER_SERVICE_PORT_INT
+                iniset $CINDER_CONF oslo_middleware enable_proxy_headers_parsing True
+            else
+                iniset $CINDER_CONF DEFAULT osapi_volume_listen_port $CINDER_SERVICE_PORT_INT
+                iniset $CINDER_CONF DEFAULT public_endpoint $CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT
+                iniset $CINDER_CONF DEFAULT osapi_volume_base_URL $CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT
+            fi
+        fi
     fi
 
     if [ "$SYSLOG" != "False" ]; then
@@ -331,14 +310,10 @@
 
     iniset_rpc_backend cinder $CINDER_CONF
 
-    iniset $CINDER_CONF DEFAULT volume_clear $CINDER_VOLUME_CLEAR
-
     # Format logging
     setup_logging $CINDER_CONF $CINDER_USE_MOD_WSGI
 
-    if [ "$CINDER_USE_MOD_WSGI" == "True" ]; then
-        _cinder_config_apache_wsgi
-    fi
+    write_uwsgi_config "$CINDER_UWSGI_CONF" "$CINDER_UWSGI" "/volume"
 
     if [[ -r $CINDER_PLUGINS/$CINDER_DRIVER ]]; then
         configure_cinder_driver
@@ -346,8 +321,8 @@
 
     iniset $CINDER_CONF DEFAULT osapi_volume_workers "$API_WORKERS"
 
-    iniset $CINDER_CONF DEFAULT glance_api_servers "${GLANCE_SERVICE_PROTOCOL}://${GLANCE_HOSTPORT}"
-    if is_ssl_enabled_service glance || is_service_enabled tls-proxy; then
+    iniset $CINDER_CONF DEFAULT glance_api_servers "$GLANCE_URL"
+    if is_service_enabled tls-proxy; then
         iniset $CINDER_CONF DEFAULT glance_protocol https
         iniset $CINDER_CONF DEFAULT glance_ca_certificates_file $SSL_BUNDLE_FILE
     fi
@@ -356,25 +331,16 @@
         iniset $CINDER_CONF DEFAULT glance_api_version 2
     fi
 
-    # Register SSL certificates if provided
-    if is_ssl_enabled_service cinder; then
-        ensure_certificates CINDER
-
-        iniset $CINDER_CONF DEFAULT ssl_cert_file "$CINDER_SSL_CERT"
-        iniset $CINDER_CONF DEFAULT ssl_key_file "$CINDER_SSL_KEY"
-    fi
-
     # Set os_privileged_user credentials (used for os-assisted-snapshots)
     iniset $CINDER_CONF DEFAULT os_privileged_user_name nova
     iniset $CINDER_CONF DEFAULT os_privileged_user_password "$SERVICE_PASSWORD"
     iniset $CINDER_CONF DEFAULT os_privileged_user_tenant "$SERVICE_PROJECT_NAME"
     iniset $CINDER_CONF DEFAULT graceful_shutdown_timeout "$SERVICE_GRACEFUL_SHUTDOWN_TIMEOUT"
 
-    # Set the backend url according to the configured dlm backend
-    if is_dlm_enabled; then
-        if [[ "$(dlm_backend)" == "zookeeper" ]]; then
-            iniset $CINDER_CONF coordination backend_url "zookeeper://${SERVICE_HOST}:2181"
-        fi
+    if [[ ! -z "$CINDER_COORDINATION_URL" ]]; then
+        iniset $CINDER_CONF coordination backend_url "$CINDER_COORDINATION_URL"
+    elif is_service_enabled etcd3; then
+        iniset $CINDER_CONF coordination backend_url "etcd3+http://${SERVICE_HOST}:2379"
     fi
 }
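
Since CINDER_COORDINATION_URL wins over the etcd3 default, any tooz-supported backend can be selected from local.conf; a zookeeper example, for illustration:

    CINDER_COORDINATION_URL="zookeeper://${SERVICE_HOST}:2181"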
 
@@ -386,29 +352,47 @@
 
 # Migrated from keystone_data.sh
 function create_cinder_accounts {
-
     # Cinder
     if [[ "$ENABLED_SERVICES" =~ "c-api" ]]; then
 
         create_service_user "cinder"
 
         get_or_create_service "cinder" "volume" "Cinder Volume Service"
-        get_or_create_endpoint \
-            "volume" \
-            "$REGION_NAME" \
-            "$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v1/\$(project_id)s"
+        if [ "$CINDER_USE_MOD_WSGI" == "False" ]; then
+            get_or_create_endpoint \
+                "volume" \
+                "$REGION_NAME" \
+                "$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v1/\$(project_id)s"
 
-        get_or_create_service "cinderv2" "volumev2" "Cinder Volume Service V2"
-        get_or_create_endpoint \
-            "volumev2" \
-            "$REGION_NAME" \
-            "$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v2/\$(project_id)s"
+            get_or_create_service "cinderv2" "volumev2" "Cinder Volume Service V2"
+            get_or_create_endpoint \
+                "volumev2" \
+                "$REGION_NAME" \
+                "$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v2/\$(project_id)s"
 
-        get_or_create_service "cinderv3" "volumev3" "Cinder Volume Service V3"
-        get_or_create_endpoint \
-            "volumev3" \
-            "$REGION_NAME" \
-            "$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v3/\$(project_id)s"
+            get_or_create_service "cinderv3" "volumev3" "Cinder Volume Service V3"
+            get_or_create_endpoint \
+                "volumev3" \
+                "$REGION_NAME" \
+                "$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v3/\$(project_id)s"
+        else
+            get_or_create_endpoint \
+                "volume" \
+                "$REGION_NAME" \
+                "$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST/volume/v1/\$(project_id)s"
+
+            get_or_create_service "cinderv2" "volumev2" "Cinder Volume Service V2"
+            get_or_create_endpoint \
+                "volumev2" \
+                "$REGION_NAME" \
+                "$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST/volume/v2/\$(project_id)s"
+
+            get_or_create_service "cinderv3" "volumev3" "Cinder Volume Service V3"
+            get_or_create_endpoint \
+                "volumev3" \
+                "$REGION_NAME" \
+                "$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST/volume/v3/\$(project_id)s"
+        fi
 
         configure_cinder_internal_tenant
     fi
@@ -427,8 +411,10 @@
         # (Re)create cinder database
         recreate_database cinder
 
+        time_start "dbsync"
         # Migrate cinder database
         $CINDER_BIN_DIR/cinder-manage --config-file $CINDER_CONF db sync
+        time_stop "dbsync"
     fi
 
     if is_service_enabled c-vol && [[ -n "$CINDER_ENABLED_BACKENDS" ]]; then
@@ -454,19 +440,10 @@
 function install_cinder {
     git_clone $CINDER_REPO $CINDER_DIR $CINDER_BRANCH
     setup_develop $CINDER_DIR
-    if [ "$CINDER_ISCSI_HELPER" = "tgtadm" ]; then
-        if is_fedora; then
-            install_package scsi-target-utils
-        else
-            install_package tgt
-        fi
-    fi
-
-    if [ "$CINDER_USE_MOD_WSGI" == "True" ]; then
-        install_apache_wsgi
-        if is_ssl_enabled_service "c-api"; then
-            enable_mod_ssl
-        fi
+    if [[ "$CINDER_ISCSI_HELPER" == "tgtadm" ]]; then
+        install_package tgt
+    elif [[ "$CINDER_ISCI_HELPER" == "lioadm" ]]; then
+        install_package targetcli
     fi
 }
 
@@ -494,11 +471,12 @@
     fi
 }
 
-# start_cinder() - Start running processes, including screen
+# start_cinder() - Start running processes
 function start_cinder {
     local service_port=$CINDER_SERVICE_PORT
     local service_protocol=$CINDER_SERVICE_PROTOCOL
-    if is_service_enabled tls-proxy; then
+    local cinder_url
+    if is_service_enabled tls-proxy && [ "$CINDER_USE_MOD_WSGI" == "False" ]; then
         service_port=$CINDER_SERVICE_PORT_INT
         service_protocol="http"
     fi
@@ -522,18 +500,25 @@
         fi
     fi
 
-    if [ "$CINDER_USE_MOD_WSGI" == "True" ]; then
-        enable_apache_site osapi-volume
-        restart_apache_server
-        tail_log c-api /var/log/$APACHE_NAME/c-api.log
-    else
-        run_process c-api "$CINDER_BIN_DIR/cinder-api --config-file $CINDER_CONF"
-        echo "Waiting for Cinder API to start..."
-        if ! wait_for_service $SERVICE_TIMEOUT $service_protocol://$CINDER_SERVICE_HOST:$service_port; then
-            die $LINENO "c-api did not start"
+    if [[ "$ENABLED_SERVICES" =~ "c-api" ]]; then
+        if [ "$CINDER_USE_MOD_WSGI" == "False" ]; then
+            run_process c-api "$CINDER_BIN_DIR/cinder-api --config-file $CINDER_CONF"
+            cinder_url=$service_protocol://$SERVICE_HOST:$service_port
+            # Start proxy if tls enabled
+            if is_service_enabled tls-proxy; then
+                start_tls_proxy cinder '*' $CINDER_SERVICE_PORT $CINDER_SERVICE_HOST $CINDER_SERVICE_PORT_INT
+            fi
+        else
+            run_process "c-api" "$CINDER_BIN_DIR/uwsgi --procname-prefix cinder-api --ini $CINDER_UWSGI_CONF"
+            cinder_url=$service_protocol://$SERVICE_HOST/volume/v3
         fi
     fi
 
+    echo "Waiting for Cinder API to start..."
+    if ! wait_for_service $SERVICE_TIMEOUT $cinder_url; then
+        die $LINENO "c-api did not start"
+    fi
+
     run_process c-sch "$CINDER_BIN_DIR/cinder-scheduler --config-file $CINDER_CONF"
     run_process c-bak "$CINDER_BIN_DIR/cinder-backup --config-file $CINDER_CONF"
     run_process c-vol "$CINDER_BIN_DIR/cinder-volume --config-file $CINDER_CONF"
@@ -541,27 +526,14 @@
     # NOTE(jdg): For cinder, startup order matters.  To ensure that report_capabilities is received
     # by the scheduler, start the cinder-volume service last (or restart it) after the scheduler
     # has started.  This is a quick fix for lp bug/1189595
-
-    # Start proxies if enabled
-    if is_service_enabled c-api && is_service_enabled tls-proxy; then
-        start_tls_proxy cinder '*' $CINDER_SERVICE_PORT $CINDER_SERVICE_HOST $CINDER_SERVICE_PORT_INT
-    fi
 }
 
 # stop_cinder() - Stop running processes
 function stop_cinder {
-    if [ "$CINDER_USE_MOD_WSGI" == "True" ]; then
-        disable_apache_site osapi-volume
-        restart_apache_server
-    else
-        stop_process c-api
-    fi
-
-    # Kill the cinder screen windows
-    local serv
-    for serv in c-bak c-sch c-vol; do
-        stop_process $serv
-    done
+    stop_process c-api
+    stop_process c-bak
+    stop_process c-sch
+    stop_process c-vol
 }
 
 # create_volume_types() - Create Cinder's configured volume types
diff --git a/lib/cinder_backends/lvm b/lib/cinder_backends/lvm
index d927f9c..03e1880 100644
--- a/lib/cinder_backends/lvm
+++ b/lib/cinder_backends/lvm
@@ -53,9 +53,6 @@
     iniset $CINDER_CONF $be_name iscsi_helper "$CINDER_ISCSI_HELPER"
     iniset $CINDER_CONF $be_name lvm_type "$CINDER_LVM_TYPE"
 
-    if [[ "$CINDER_SECURE_DELETE" == "False" ]]; then
-        iniset $CINDER_CONF $be_name volume_clear none
-    fi
 }
 
 # init_cinder_backend_lvm - Initialize volume group
diff --git a/lib/databases/mysql b/lib/databases/mysql
index 7bbcace..a0cf7a4 100644
--- a/lib/databases/mysql
+++ b/lib/databases/mysql
@@ -71,6 +71,10 @@
     elif is_fedora; then
         mysql=mariadb
         my_conf=/etc/my.cnf
+        local cracklib_conf=/etc/my.cnf.d/cracklib_password_check.cnf
+        if [ -f "$cracklib_conf" ]; then
+            inicomment -sudo "$cracklib_conf" "mariadb" "plugin-load-add"
+        fi
     else
         exit_distro_not_supported "mysql configuration"
     fi
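
The inicomment call neutralizes the cracklib password plugin by commenting the option out in place; roughly as follows (the exact plugin value is distro-dependent and shown only as an example):

    # before
    plugin-load-add = cracklib_password_check.so
    # after
    #plugin-load-add = cracklib_password_check.so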
diff --git a/lib/dlm b/lib/dlm
deleted file mode 100644
index b5ac0f5..0000000
--- a/lib/dlm
+++ /dev/null
@@ -1,111 +0,0 @@
-#!/bin/bash
-#
-# lib/dlm
-#
-# Functions to control the installation and configuration of software
-# that provides a dlm (and possibly other functions). The default is
-# **zookeeper**, and is going to be the only backend supported in the
-# devstack tree.
-
-# Dependencies:
-#
-# - ``functions`` file
-
-# ``stack.sh`` calls the entry points in this order:
-#
-# - is_dlm_enabled
-# - install_dlm
-# - configure_dlm
-# - cleanup_dlm
-
-# Save trace setting
-_XTRACE_DLM=$(set +o | grep xtrace)
-set +o xtrace
-
-
-# Defaults
-# --------
-
-# <define global variables here that belong to this project>
-
-# Set up default directories
-ZOOKEEPER_DATA_DIR=$DEST/data/zookeeper
-ZOOKEEPER_CONF_DIR=/etc/zookeeper
-
-
-# Entry Points
-# ------------
-#
-# NOTE(sdague): it is expected that when someone wants to implement
-# another one of these out of tree, they'll implement the following
-# functions:
-#
-# - dlm_backend
-# - install_dlm
-# - configure_dlm
-# - cleanup_dlm
-
-# This should be declared in the settings file of any plugin or
-# service that needs to have a dlm in their environment.
-function use_dlm {
-    enable_service $(dlm_backend)
-}
-
-# A function to return the name of the backend in question, some users
-# are going to need to know this.
-function dlm_backend {
-    echo "zookeeper"
-}
-
-# Test if a dlm is enabled (defaults to a zookeeper specific check)
-function is_dlm_enabled {
-    [[ ,${ENABLED_SERVICES}, =~ ,"$(dlm_backend)", ]] && return 0
-    return 1
-}
-
-# cleanup_dlm() - Remove residual data files, anything left over from previous
-# runs that a clean run would need to clean up
-function cleanup_dlm {
-    # NOTE(sdague): we don't check for is_enabled here because we
-    # should just delete this regardless. Some times users updated
-    # their service list before they run cleanup.
-    sudo rm -rf $ZOOKEEPER_DATA_DIR
-}
-
-# configure_dlm() - Set config files, create data dirs, etc
-function configure_dlm {
-    if is_dlm_enabled; then
-        sudo cp $FILES/zookeeper/* $ZOOKEEPER_CONF_DIR
-        sudo sed -i -e 's|.*dataDir.*|dataDir='$ZOOKEEPER_DATA_DIR'|' $ZOOKEEPER_CONF_DIR/zoo.cfg
-        # clean up from previous (possibly aborted) runs
-        # create required data files
-        sudo rm -rf $ZOOKEEPER_DATA_DIR
-        sudo mkdir -p $ZOOKEEPER_DATA_DIR
-        # restart after configuration, there is no reason to make this
-        # another step, because having data files that don't match the
-        # zookeeper running is just going to cause tears.
-        restart_service zookeeper
-    fi
-}
-
-# install_dlm() - Collect source and prepare
-function install_dlm {
-    if is_dlm_enabled; then
-        pip_install_gr_extras tooz zookeeper
-        if is_ubuntu; then
-            install_package zookeeperd
-        elif is_fedora; then
-            install_package zookeeper
-        else
-            die $LINENO "Don't know how to install zookeeper on this platform"
-        fi
-    fi
-}
-
-# Restore xtrace
-$_XTRACE_DLM
-
-# Tell emacs to use shell-script-mode
-## Local variables:
-## mode: shell-script
-## End:
diff --git a/lib/dstat b/lib/dstat
index b705948..fe38d75 100644
--- a/lib/dstat
+++ b/lib/dstat
@@ -16,21 +16,27 @@
 _XTRACE_DSTAT=$(set +o | grep xtrace)
 set +o xtrace
 
-# start_dstat() - Start running processes, including screen
+# start_dstat() - Start running processes
 function start_dstat {
     # A better kind of sysstat, with the top process per time slice
     run_process dstat "$TOP_DIR/tools/dstat.sh $LOGDIR"
 
-    # To enable peakmem_tracker add:
-    #    enable_service peakmem_tracker
+    # To enable memory_tracker add:
+    #    enable_service memory_tracker
     # to your localrc
-    run_process peakmem_tracker "$TOP_DIR/tools/peakmem_tracker.sh"
+    run_process memory_tracker "$TOP_DIR/tools/memory_tracker.sh" "" "root"
+
+    # remove support for the old name when it's no longer used (sometime in Queens)
+    if is_service_enabled peakmem_tracker; then
+        deprecated "Use of peakmem_tracker in devstack is deprecated, use memory_tracker instead"
+        run_process peakmem_tracker "$TOP_DIR/tools/memory_tracker.sh" "" "root"
+    fi
 }
 
 # stop_dstat() stop dstat process
 function stop_dstat {
     stop_process dstat
-    stop_process peakmem_tracker
+    stop_process memory_tracker
 }
 
 # Restore xtrace
diff --git a/lib/etcd3 b/lib/etcd3
new file mode 100644
index 0000000..60e827a
--- /dev/null
+++ b/lib/etcd3
@@ -0,0 +1,118 @@
+#!/bin/bash
+#
+# lib/etcd3
+#
+# Functions to control the installation and configuration of etcd 3.x
+# that provides a key-value store (and possibly other functions).
+
+# Dependencies:
+#
+# - ``functions`` file
+
+# ``stack.sh`` calls the entry points in this order:
+#
+# - start_etcd3
+# - stop_etcd3
+# - cleanup_etcd3
+
+# Save trace setting
+_XTRACE_ETCD3=$(set +o | grep xtrace)
+set +o xtrace
+
+
+# Defaults
+# --------
+
+# Set up default values for etcd
+ETCD_DATA_DIR="$DATA_DIR/etcd"
+ETCD_SYSTEMD_SERVICE="devstack@etcd.service"
+ETCD_BIN_DIR="$DEST/bin"
+ETCD_PORT=2379
+
+if is_ubuntu ; then
+    UBUNTU_RELEASE_BASE_NUM=$(lsb_release -r | awk '{print $2}' | cut -d '.' -f 1)
+fi
+
+# start_etcd3() - Starts to run the etcd process
+function start_etcd3 {
+    local cmd="$ETCD_BIN_DIR/etcd"
+    cmd+=" --name $HOSTNAME --data-dir $ETCD_DATA_DIR"
+    cmd+=" --initial-cluster-state new --initial-cluster-token etcd-cluster-01"
+    cmd+=" --initial-cluster $HOSTNAME=http://$SERVICE_HOST:2380"
+    cmd+=" --initial-advertise-peer-urls http://$SERVICE_HOST:2380"
+    cmd+=" --advertise-client-urls http://${HOST_IP}:$ETCD_PORT"
+    cmd+=" --listen-peer-urls http://0.0.0.0:2380 "
+    cmd+=" --listen-client-urls http://${HOST_IP}:$ETCD_PORT"
+
+    local unitfile="$SYSTEMD_DIR/$ETCD_SYSTEMD_SERVICE"
+    write_user_unit_file $ETCD_SYSTEMD_SERVICE "$cmd" "" "root"
+
+    iniset -sudo $unitfile "Unit" "After" "network.target"
+    iniset -sudo $unitfile "Service" "Type" "notify"
+    iniset -sudo $unitfile "Service" "Restart" "on-failure"
+    iniset -sudo $unitfile "Service" "LimitNOFILE" "65536"
+    if is_arch "aarch64"; then
+        iniset -sudo $unitfile "Service" "Environment" "ETCD_UNSUPPORTED_ARCH=arm64"
+    fi
+
+    $SYSTEMCTL daemon-reload
+    $SYSTEMCTL enable $ETCD_SYSTEMD_SERVICE
+    $SYSTEMCTL start $ETCD_SYSTEMD_SERVICE
+}
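
Assuming write_user_unit_file emits the usual ExecStart/User pair, the resulting $SYSTEMD_DIR/devstack@etcd.service looks approximately like:

    [Unit]
    After = network.target

    [Service]
    ExecStart = /opt/stack/bin/etcd --name <hostname> --data-dir /opt/stack/data/etcd ...
    User = root
    Type = notify
    Restart = on-failure
    LimitNOFILE = 65536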
+
+# stop_etcd3() stops the etcd3 process
+function stop_etcd3 {
+    # Don't install in sub nodes (multinode scenario)
+    if [ "$SERVICE_HOST" != "$HOST_IP" ]; then
+        return
+    fi
+
+    $SYSTEMCTL stop $ETCD_SYSTEMD_SERVICE
+}
+
+function cleanup_etcd3 {
+    # Don't install in sub nodes (multinode scenario)
+    if [ "$SERVICE_HOST" != "$HOST_IP" ]; then
+        return
+    fi
+
+    $SYSTEMCTL disable $ETCD_SYSTEMD_SERVICE
+
+    local unitfile="$SYSTEMD_DIR/$ETCD_SYSTEMD_SERVICE"
+    sudo rm -f $unitfile
+
+    $SYSTEMCTL daemon-reload
+
+    sudo rm -rf $ETCD_DATA_DIR
+}
+
+function install_etcd3 {
+    echo "Installing etcd"
+
+    # Create the necessary directories
+    sudo mkdir -p $ETCD_BIN_DIR
+    sudo mkdir -p $ETCD_DATA_DIR
+
+    # Download and cache the etcd tgz for subsequent use
+    local etcd_file
+    etcd_file="$(get_extra_file $ETCD_DOWNLOAD_LOCATION)"
+    if [ ! -f "$FILES/etcd-$ETCD_VERSION-linux-$ETCD_ARCH/etcd" ]; then
+        echo "${ETCD_SHA256} $etcd_file" > $FILES/etcd.sha256sum
+        # NOTE(sdague): this should go fatal if this fails
+        sha256sum -c $FILES/etcd.sha256sum
+
+        tar xzvf $etcd_file -C $FILES
+        sudo cp $FILES/$ETCD_NAME/etcd $ETCD_BIN_DIR/etcd
+    fi
+    if [ ! -f "$ETCD_BIN_DIR/etcd" ]; then
+        sudo cp $FILES/$ETCD_NAME/etcd $ETCD_BIN_DIR/etcd
+    fi
+}
+
+# Restore xtrace
+$_XTRACE_ETCD3
+
+# Tell emacs to use shell-script-mode
+## Local variables:
+## mode: shell-script
+## End:
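
A quick liveness check against the defaults above, assuming the etcdctl binary shipped in the same release tarball is on PATH:

    ETCDCTL_API=3 etcdctl --endpoints "http://${HOST_IP}:2379" endpoint health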
diff --git a/lib/glance b/lib/glance
index 0ba2cfa..74734c7 100644
--- a/lib/glance
+++ b/lib/glance
@@ -43,6 +43,7 @@
 
 GLANCE_CACHE_DIR=${GLANCE_CACHE_DIR:=$DATA_DIR/glance/cache}
 GLANCE_IMAGE_DIR=${GLANCE_IMAGE_DIR:=$DATA_DIR/glance/images}
+GLANCE_LOCK_DIR=${GLANCE_LOCK_DIR:=$DATA_DIR/glance/locks}
 GLANCE_AUTH_CACHE_DIR=${GLANCE_AUTH_CACHE_DIR:-/var/cache/glance}
 
 GLANCE_CONF_DIR=${GLANCE_CONF_DIR:-/etc/glance}
@@ -55,11 +56,9 @@
 GLANCE_POLICY_JSON=$GLANCE_CONF_DIR/policy.json
 GLANCE_SCHEMA_JSON=$GLANCE_CONF_DIR/schema-image.json
 GLANCE_SWIFT_STORE_CONF=$GLANCE_CONF_DIR/glance-swift-store.conf
-GLANCE_GLARE_CONF=$GLANCE_CONF_DIR/glance-glare.conf
-GLANCE_GLARE_PASTE_INI=$GLANCE_CONF_DIR/glance-glare-paste.ini
-GLANCE_V1_ENABLED=${GLANCE_V1_ENABLED:-True}
+GLANCE_V1_ENABLED=${GLANCE_V1_ENABLED:-False}
 
-if is_ssl_enabled_service "glance" || is_service_enabled tls-proxy; then
+if is_service_enabled tls-proxy; then
     GLANCE_SERVICE_PROTOCOL="https"
 fi
 
@@ -72,8 +71,16 @@
 GLANCE_SERVICE_PROTOCOL=${GLANCE_SERVICE_PROTOCOL:-$SERVICE_PROTOCOL}
 GLANCE_REGISTRY_PORT=${GLANCE_REGISTRY_PORT:-9191}
 GLANCE_REGISTRY_PORT_INT=${GLANCE_REGISTRY_PORT_INT:-19191}
-GLANCE_GLARE_PORT=${GLANCE_GLARE_PORT:-9494}
-GLANCE_GLARE_HOSTPORT=${GLANCE_GLARE_HOSTPORT:-$GLANCE_SERVICE_HOST:$GLANCE_GLARE_PORT}
+GLANCE_UWSGI=$GLANCE_BIN_DIR/glance-wsgi-api
+GLANCE_UWSGI_CONF=$GLANCE_CONF_DIR/glance-uwsgi.ini
+# If wsgi mode is uwsgi, run glance under uwsgi; otherwise default to eventlet
+# TODO(mtreinish): Remove the eventlet path here and in all the similar
+# conditionals below after the Pike release
+if [[ "$WSGI_MODE" == "uwsgi" ]]; then
+    GLANCE_URL="$GLANCE_SERVICE_PROTOCOL://$GLANCE_SERVICE_HOST/image"
+else
+    GLANCE_URL="$GLANCE_SERVICE_PROTOCOL://$GLANCE_HOSTPORT"
+fi
 
 # Functions
 # ---------
@@ -81,6 +88,7 @@
 # Test if any Glance services are enabled
 # is_glance_enabled
 function is_glance_enabled {
+    [[ ,${DISABLED_SERVICES} =~ ,"glance" ]] && return 1
     [[ ,${ENABLED_SERVICES} =~ ,"g-" ]] && return 0
     return 1
 }
@@ -98,9 +106,6 @@
     sudo install -d -o $STACK_USER $GLANCE_CONF_DIR $GLANCE_METADEF_DIR
 
     # Copy over our glance configurations and update them
-    if is_service_enabled g-glare; then
-        cp $GLANCE_DIR/etc/glance-glare.conf $GLANCE_GLARE_CONF
-    fi
     cp $GLANCE_DIR/etc/glance-registry.conf $GLANCE_REGISTRY_CONF
     iniset $GLANCE_REGISTRY_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
     iniset $GLANCE_REGISTRY_CONF DEFAULT bind_host $GLANCE_SERVICE_LISTEN_ADDRESS
@@ -109,20 +114,18 @@
     dburl=`database_connection_url glance`
     iniset $GLANCE_REGISTRY_CONF database connection $dburl
     iniset $GLANCE_REGISTRY_CONF DEFAULT use_syslog $SYSLOG
-    iniset $GLANCE_REGISTRY_CONF DEFAULT workers "$API_WORKERS"
     iniset $GLANCE_REGISTRY_CONF paste_deploy flavor keystone
     configure_auth_token_middleware $GLANCE_REGISTRY_CONF glance $GLANCE_AUTH_CACHE_DIR/registry
     iniset $GLANCE_REGISTRY_CONF oslo_messaging_notifications driver messagingv2
     iniset_rpc_backend glance $GLANCE_REGISTRY_CONF
     iniset $GLANCE_REGISTRY_CONF DEFAULT graceful_shutdown_timeout "$SERVICE_GRACEFUL_SHUTDOWN_TIMEOUT"
 
-    cp $GLANCE_DIR/etc/glance-api.conf $GLANCE_API_CONF
     iniset $GLANCE_API_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
-    iniset $GLANCE_API_CONF DEFAULT bind_host $GLANCE_SERVICE_LISTEN_ADDRESS
     inicomment $GLANCE_API_CONF DEFAULT log_file
     iniset $GLANCE_API_CONF database connection $dburl
     iniset $GLANCE_API_CONF DEFAULT use_syslog $SYSLOG
     iniset $GLANCE_API_CONF DEFAULT image_cache_dir $GLANCE_CACHE_DIR/
+    iniset $GLANCE_API_CONF DEFAULT lock_path $GLANCE_LOCK_DIR
     iniset $GLANCE_API_CONF paste_deploy flavor keystone+cachemanagement
     configure_auth_token_middleware $GLANCE_API_CONF glance $GLANCE_AUTH_CACHE_DIR/api
     iniset $GLANCE_API_CONF oslo_messaging_notifications driver messagingv2
@@ -143,13 +146,8 @@
 
     # Store specific configs
     iniset $GLANCE_API_CONF glance_store filesystem_store_datadir $GLANCE_IMAGE_DIR/
-    if is_service_enabled g-glare; then
-        iniset $GLANCE_GLARE_CONF glance_store filesystem_store_datadir $GLANCE_IMAGE_DIR/
-    fi
     iniset $GLANCE_API_CONF DEFAULT registry_host $GLANCE_SERVICE_HOST
 
-    iniset $GLANCE_API_CONF DEFAULT workers "$API_WORKERS"
-
     # CORS feature support - to allow calls from Horizon by default
     if [ -n "$GLANCE_CORS_ALLOWED_ORIGIN" ]; then
         iniset $GLANCE_API_CONF cors allowed_origin "$GLANCE_CORS_ALLOWED_ORIGIN"
@@ -172,22 +170,6 @@
 
         iniset $GLANCE_SWIFT_STORE_CONF ref1 user $SERVICE_PROJECT_NAME:glance-swift
 
-        # Store the glare in swift if enabled.
-        if is_service_enabled g-glare; then
-            iniset $GLANCE_GLARE_CONF glance_store default_store swift
-            iniset $GLANCE_GLARE_CONF glance_store swift_store_create_container_on_put True
-
-            iniset $GLANCE_GLARE_CONF glance_store swift_store_config_file $GLANCE_SWIFT_STORE_CONF
-            iniset $GLANCE_GLARE_CONF glance_store default_swift_reference ref1
-            iniset $GLANCE_GLARE_CONF glance_store stores "file, http, swift"
-            iniset $GLANCE_GLARE_CONF DEFAULT graceful_shutdown_timeout "$SERVICE_GRACEFUL_SHUTDOWN_TIMEOUT"
-
-            # commenting is not strictly necessary but it's confusing to have bad values in conf
-            inicomment $GLANCE_GLARE_CONF glance_store swift_store_user
-            inicomment $GLANCE_GLARE_CONF glance_store swift_store_key
-            inicomment $GLANCE_GLARE_CONF glance_store swift_store_auth_address
-        fi
-
         iniset $GLANCE_SWIFT_STORE_CONF ref1 key $SERVICE_PASSWORD
         if python3_enabled; then
             # NOTE(dims): Currently the glance_store+swift does not support either an insecure flag
@@ -204,27 +186,19 @@
         inicomment $GLANCE_API_CONF glance_store swift_store_auth_address
     fi
 
+    # We need to tell glance what its public endpoint is so that the version
+    # discovery document will be correct
+    iniset $GLANCE_API_CONF DEFAULT public_endpoint $GLANCE_URL
+
     if is_service_enabled tls-proxy; then
         iniset $GLANCE_API_CONF DEFAULT bind_port $GLANCE_SERVICE_PORT_INT
-        iniset $GLANCE_API_CONF DEFAULT public_endpoint $GLANCE_SERVICE_PROTOCOL://$GLANCE_HOSTPORT
         iniset $GLANCE_REGISTRY_CONF DEFAULT bind_port $GLANCE_REGISTRY_PORT_INT
 
         iniset $GLANCE_API_CONF keystone_authtoken identity_uri $KEYSTONE_AUTH_URI
         iniset $GLANCE_REGISTRY_CONF keystone_authtoken identity_uri $KEYSTONE_AUTH_URI
     fi
 
-    # Register SSL certificates if provided
-    if is_ssl_enabled_service glance; then
-        ensure_certificates GLANCE
-
-        iniset $GLANCE_API_CONF DEFAULT cert_file "$GLANCE_SSL_CERT"
-        iniset $GLANCE_API_CONF DEFAULT key_file "$GLANCE_SSL_KEY"
-
-        iniset $GLANCE_REGISTRY_CONF DEFAULT cert_file "$GLANCE_SSL_CERT"
-        iniset $GLANCE_REGISTRY_CONF DEFAULT key_file "$GLANCE_SSL_KEY"
-    fi
-
-    if is_ssl_enabled_service glance || is_service_enabled tls-proxy; then
+    if is_service_enabled tls-proxy; then
         iniset $GLANCE_API_CONF DEFAULT registry_client_protocol https
     fi
 
@@ -233,7 +207,6 @@
     setup_logging $GLANCE_REGISTRY_CONF
 
     cp -p $GLANCE_DIR/etc/glance-registry-paste.ini $GLANCE_REGISTRY_PASTE_INI
-
     cp -p $GLANCE_DIR/etc/glance-api-paste.ini $GLANCE_API_PASTE_INI
 
     cp $GLANCE_DIR/etc/glance-cache.conf $GLANCE_CACHE_CONF
@@ -242,7 +215,7 @@
     iniset $GLANCE_CACHE_CONF DEFAULT use_syslog $SYSLOG
     iniset $GLANCE_CACHE_CONF DEFAULT image_cache_dir $GLANCE_CACHE_DIR/
     iniuncomment $GLANCE_CACHE_CONF DEFAULT auth_url
-    iniset $GLANCE_CACHE_CONF DEFAULT auth_url $KEYSTONE_AUTH_URI/v3
+    iniset $GLANCE_CACHE_CONF DEFAULT auth_url $KEYSTONE_AUTH_URI
     iniuncomment $GLANCE_CACHE_CONF DEFAULT auth_tenant_name
     iniset $GLANCE_CACHE_CONF DEFAULT admin_tenant_name $SERVICE_PROJECT_NAME
     iniuncomment $GLANCE_CACHE_CONF DEFAULT auth_user
@@ -259,7 +232,7 @@
 
     cp -p $GLANCE_DIR/etc/metadefs/*.json $GLANCE_METADEF_DIR
 
-    if is_ssl_enabled_service "cinder" || is_service_enabled tls-proxy; then
+    if is_service_enabled tls-proxy; then
         CINDER_SERVICE_HOST=${CINDER_SERVICE_HOST:-$SERVICE_HOST}
         CINDER_SERVICE_PORT=${CINDER_SERVICE_PORT:-8776}
 
@@ -267,27 +240,11 @@
         iniset $GLANCE_CACHE_CONF DEFAULT cinder_endpoint_template "https://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v1/%(project_id)s"
     fi
 
-    # Configure GLANCE_GLARE (Glance Glare)
-    if is_service_enabled g-glare; then
-        local dburl
-        dburl=`database_connection_url glance`
-        setup_logging $GLANCE_GLARE_CONF
-        iniset $GLANCE_GLARE_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
-        iniset $GLANCE_GLARE_CONF DEFAULT bind_host $GLANCE_SERVICE_LISTEN_ADDRESS
-        iniset $GLANCE_GLARE_CONF DEFAULT bind_port $GLANCE_GLARE_PORT
-        inicomment $GLANCE_GLARE_CONF DEFAULT log_file
-        iniset $GLANCE_GLARE_CONF DEFAULT workers "$API_WORKERS"
-
-        iniset $GLANCE_GLARE_CONF database connection $dburl
-        iniset $GLANCE_GLARE_CONF paste_deploy flavor keystone
-        configure_auth_token_middleware $GLANCE_GLARE_CONF glare $GLANCE_AUTH_CACHE_DIR/artifact
-        # Register SSL certificates if provided
-        if is_ssl_enabled_service glance; then
-            ensure_certificates GLANCE
-            iniset $GLANCE_GLARE_CONF DEFAULT cert_file "$GLANCE_SSL_CERT"
-            iniset $GLANCE_GLARE_CONF DEFAULT key_file "$GLANCE_SSL_KEY"
-        fi
-        cp $GLANCE_DIR/etc/glance-glare-paste.ini $GLANCE_GLARE_PASTE_INI
+    if [[ "$WSGI_MODE" == "uwsgi" ]]; then
+        write_local_uwsgi_http_config "$GLANCE_UWSGI_CONF" "$GLANCE_UWSGI" "/image"
+    else
+        iniset $GLANCE_API_CONF DEFAULT bind_host $GLANCE_SERVICE_LISTEN_ADDRESS
+        iniset $GLANCE_API_CONF DEFAULT workers "$API_WORKERS"
     fi
 }
 
@@ -298,7 +255,6 @@
 # SERVICE_PROJECT_NAME  glance          service
 # SERVICE_PROJECT_NAME  glance-swift    ResellerAdmin (if Swift is enabled)
 # SERVICE_PROJECT_NAME  glance-search   search (if Search is enabled)
-# SERVICE_PROJECT_NAME  glare           service (if enabled)
 
 function create_glance_accounts {
     if is_service_enabled g-api; then
@@ -314,23 +270,13 @@
         get_or_create_endpoint \
             "image" \
             "$REGION_NAME" \
-            "$GLANCE_SERVICE_PROTOCOL://$GLANCE_HOSTPORT"
+            "$GLANCE_URL"
 
         # Note(frickler): Crude workaround for https://bugs.launchpad.net/glance-store/+bug/1620999
         service_domain_id=$(get_or_create_domain $SERVICE_DOMAIN_NAME)
         iniset $GLANCE_SWIFT_STORE_CONF ref1 project_domain_id $service_domain_id
         iniset $GLANCE_SWIFT_STORE_CONF ref1 user_domain_id $service_domain_id
     fi
-
-    # Add glance-glare service and endpoints
-    if is_service_enabled g-glare; then
-        create_service_user "glare"
-        get_or_create_service "glare" "artifact" "Glance Artifact Service"
-
-        get_or_create_endpoint "artifact" \
-            "$REGION_NAME" \
-            "$GLANCE_SERVICE_PROTOCOL://$GLANCE_GLARE_HOSTPORT"
-    fi
 }
 
 # create_glance_cache_dir() - Part of the init_glance() process
@@ -353,11 +299,13 @@
     # (Re)create glance database
     recreate_database glance
 
+    time_start "dbsync"
     # Migrate glance database
     $GLANCE_BIN_DIR/glance-manage --config-file $GLANCE_CONF_DIR/glance-api.conf db_sync
 
     # Load metadata definitions
     $GLANCE_BIN_DIR/glance-manage --config-file $GLANCE_CONF_DIR/glance-api.conf db_load_metadefs
+    time_stop "dbsync"
 
     create_glance_cache_dir
 }
@@ -385,41 +333,33 @@
     setup_develop $GLANCE_DIR
 }
 
-# start_glance() - Start running processes, including screen
+# start_glance() - Start running processes
 function start_glance {
     local service_protocol=$GLANCE_SERVICE_PROTOCOL
     if is_service_enabled tls-proxy; then
-        start_tls_proxy glance-service '*' $GLANCE_SERVICE_PORT $GLANCE_SERVICE_HOST $GLANCE_SERVICE_PORT_INT
+        if [[ "$WSGI_MODE" != "uwsgi" ]]; then
+            start_tls_proxy glance-service '*' $GLANCE_SERVICE_PORT $GLANCE_SERVICE_HOST $GLANCE_SERVICE_PORT_INT
+        fi
         start_tls_proxy glance-registry '*' $GLANCE_REGISTRY_PORT $GLANCE_SERVICE_HOST $GLANCE_REGISTRY_PORT_INT
     fi
 
     run_process g-reg "$GLANCE_BIN_DIR/glance-registry --config-file=$GLANCE_CONF_DIR/glance-registry.conf"
-    run_process g-api "$GLANCE_BIN_DIR/glance-api --config-file=$GLANCE_CONF_DIR/glance-api.conf"
-
-    echo "Waiting for g-api ($GLANCE_HOSTPORT) to start..."
-    if ! wait_for_service $SERVICE_TIMEOUT $GLANCE_SERVICE_PROTOCOL://$GLANCE_HOSTPORT; then
-        die $LINENO "g-api did not start"
+    if [[ "$WSGI_MODE" == "uwsgi" ]]; then
+        run_process g-api "$GLANCE_BIN_DIR/uwsgi --procname-prefix glance-api --ini $GLANCE_UWSGI_CONF"
+    else
+        run_process g-api "$GLANCE_BIN_DIR/glance-api --config-file=$GLANCE_CONF_DIR/glance-api.conf"
     fi
 
-    #Start g-glare after g-reg/g-api
-    if is_service_enabled g-glare; then
-        run_process g-glare "$GLANCE_BIN_DIR/glance-glare --config-file=$GLANCE_CONF_DIR/glance-glare.conf"
-        echo "Waiting for Glare [g-glare] ($GLANCE_GLARE_HOSTPORT) to start..."
-        if ! wait_for_service $SERVICE_TIMEOUT $GLANCE_SERVICE_PROTOCOL://$GLANCE_GLARE_HOSTPORT; then
-            die $LINENO " Glare [g-glare] did not start"
-        fi
+    echo "Waiting for g-api ($GLANCE_SERVICE_HOST) to start..."
+    if ! wait_for_service $SERVICE_TIMEOUT $GLANCE_URL; then
+        die $LINENO "g-api did not start"
     fi
 }
 
 # stop_glance() - Stop running processes
 function stop_glance {
-    # Kill the Glance screen windows
     stop_process g-api
     stop_process g-reg
-
-    if is_service_enabled g-glare; then
-        stop_process g-glare
-    fi
 }
 
 # Restore xtrace
diff --git a/lib/horizon b/lib/horizon
index 9c7ec00..3d2f68d 100644
--- a/lib/horizon
+++ b/lib/horizon
@@ -106,6 +106,10 @@
         _horizon_config_set $local_settings "" OPENSTACK_SSL_CACERT \"${SSL_BUNDLE_FILE}\"
     fi
 
+    if is_service_enabled ldap; then
+        _horizon_config_set $local_settings "" OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT "True"
+    fi
+
     # Create an empty directory that apache uses as docroot
     sudo mkdir -p $HORIZON_DIR/.blackhole
 
@@ -177,13 +181,12 @@
     git_clone $HORIZON_REPO $HORIZON_DIR $HORIZON_BRANCH
 }
 
-# start_horizon() - Start running processes, including screen
+# start_horizon() - Start running processes
 function start_horizon {
     restart_apache_server
-    tail_log horizon /var/log/$APACHE_NAME/horizon_error.log
 }
 
-# stop_horizon() - Stop running processes (non-screen)
+# stop_horizon() - Stop running processes
 function stop_horizon {
     stop_apache_server
 }
diff --git a/lib/keystone b/lib/keystone
index 530f3b4..f4df635 100644
--- a/lib/keystone
+++ b/lib/keystone
@@ -50,22 +50,18 @@
 KEYSTONE_CONF_DIR=${KEYSTONE_CONF_DIR:-/etc/keystone}
 KEYSTONE_CONF=$KEYSTONE_CONF_DIR/keystone.conf
 KEYSTONE_PASTE_INI=${KEYSTONE_PASTE_INI:-$KEYSTONE_CONF_DIR/keystone-paste.ini}
-
-# Toggle for deploying Keystone under HTTPD + mod_wsgi
-# Deprecated in Mitaka, use KEYSTONE_DEPLOY instead.
-KEYSTONE_USE_MOD_WSGI=${KEYSTONE_USE_MOD_WSGI:-${ENABLE_HTTPD_MOD_WSGI_SERVICES}}
+KEYSTONE_PUBLIC_UWSGI_CONF=$KEYSTONE_CONF_DIR/keystone-uwsgi-public.ini
+KEYSTONE_ADMIN_UWSGI_CONF=$KEYSTONE_CONF_DIR/keystone-uwsgi-admin.ini
+KEYSTONE_PUBLIC_UWSGI=$KEYSTONE_BIN_DIR/keystone-wsgi-public
+KEYSTONE_ADMIN_UWSGI=$KEYSTONE_BIN_DIR/keystone-wsgi-admin
 
 # KEYSTONE_DEPLOY defines how keystone is deployed, allowed values:
 # - mod_wsgi : Run keystone under Apache HTTPd mod_wsgi
 # - uwsgi : Run keystone under uwsgi
-if [ -z "$KEYSTONE_DEPLOY" ]; then
-    if [ -z "$KEYSTONE_USE_MOD_WSGI" ]; then
-        KEYSTONE_DEPLOY=mod_wsgi
-    elif [ "$KEYSTONE_USE_MOD_WSGI" == True ]; then
-        KEYSTONE_DEPLOY=mod_wsgi
-    else
-        KEYSTONE_DEPLOY=uwsgi
-    fi
+if [[ "$WSGI_MODE" == "uwsgi" ]]; then
+    KEYSTONE_DEPLOY=uwsgi
+else
+    KEYSTONE_DEPLOY=mod_wsgi
 fi
 
 # Select the token persistence backend driver
@@ -112,20 +108,14 @@
 SERVICE_TENANT_NAME=${SERVICE_PROJECT_NAME:-service}
 
 # if we are running with SSL use https protocols
-if is_ssl_enabled_service "key" || is_service_enabled tls-proxy; then
+if is_service_enabled tls-proxy; then
     KEYSTONE_AUTH_PROTOCOL="https"
     KEYSTONE_SERVICE_PROTOCOL="https"
 fi
 
-# complete URIs
-if [ "$KEYSTONE_DEPLOY" == "mod_wsgi" ]; then
-    # If running in Apache, use path access rather than port.
-    KEYSTONE_AUTH_URI=${KEYSTONE_AUTH_PROTOCOL}://${KEYSTONE_AUTH_HOST}/identity_admin
-    KEYSTONE_SERVICE_URI=${KEYSTONE_SERVICE_PROTOCOL}://${KEYSTONE_SERVICE_HOST}/identity
-else
-    KEYSTONE_AUTH_URI=${KEYSTONE_AUTH_PROTOCOL}://${KEYSTONE_AUTH_HOST}:${KEYSTONE_AUTH_PORT}
-    KEYSTONE_SERVICE_URI=${KEYSTONE_SERVICE_PROTOCOL}://${KEYSTONE_SERVICE_HOST}:${KEYSTONE_SERVICE_PORT}
-fi
+KEYSTONE_SERVICE_URI=${KEYSTONE_SERVICE_PROTOCOL}://${KEYSTONE_SERVICE_HOST}/identity
+# for compat
+KEYSTONE_AUTH_URI=$KEYSTONE_SERVICE_URI
 
 # V3 URIs
 KEYSTONE_AUTH_URI_V3=$KEYSTONE_AUTH_URI/v3
@@ -134,9 +124,15 @@
 # Security compliance
 KEYSTONE_SECURITY_COMPLIANCE_ENABLED=${KEYSTONE_SECURITY_COMPLIANCE_ENABLED:-True}
 KEYSTONE_LOCKOUT_FAILURE_ATTEMPTS=${KEYSTONE_LOCKOUT_FAILURE_ATTEMPTS:-2}
-KEYSTONE_LOCKOUT_DURATION=${KEYSTONE_LOCKOUT_DURATION:-5}
+KEYSTONE_LOCKOUT_DURATION=${KEYSTONE_LOCKOUT_DURATION:-10}
 KEYSTONE_UNIQUE_LAST_PASSWORD_COUNT=${KEYSTONE_UNIQUE_LAST_PASSWORD_COUNT:-2}
 
+# Number of bcrypt hashing rounds. Increasing the round count exponentially
+# increases the resources required to generate a password hash, which is a
+# very effective defence against brute-force attacks. 4 is the minimum value
+# bcrypt accepts and is far faster than the default of 12; that minimum is
+# great for CI and development but may not be suitable for real production.
+KEYSTONE_PASSWORD_HASH_ROUNDS=${KEYSTONE_PASSWORD_HASH_ROUNDS:-4}
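+# Deployments that want production-strength hashing can override this in
+# ``local.conf``, e.g. ``KEYSTONE_PASSWORD_HASH_ROUNDS=12`` (the bcrypt
+# default).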
 
 # Functions
 # ---------
@@ -144,6 +140,7 @@
 # Test if Keystone is enabled
 # is_keystone_enabled
 function is_keystone_enabled {
+    [[ ,${DISABLED_SERVICES} =~ ,"keystone" ]] && return 1
     [[ ,${ENABLED_SERVICES}, =~ ,"key", ]] && return 0
     return 1
 }
@@ -151,8 +148,18 @@
 # cleanup_keystone() - Remove residual data files, anything left over from previous
 # runs that a clean run would need to clean up
 function cleanup_keystone {
-    disable_apache_site keystone
-    sudo rm -f $(apache_site_config_for keystone)
+    if [ "$KEYSTONE_DEPLOY" == "mod_wsgi" ]; then
+        # These files will be created if we are running WSGI_MODE="mod_wsgi"
+        disable_apache_site keystone
+        sudo rm -f $(apache_site_config_for keystone)
+    else
+        stop_process "keystone"
+        # TODO: remove admin at pike-2
+        remove_uwsgi_config "$KEYSTONE_PUBLIC_UWSGI_CONF" "$KEYSTONE_PUBLIC_UWSGI"
+        remove_uwsgi_config "$KEYSTONE_ADMIN_UWSGI_CONF" "$KEYSTONE_ADMIN_UWSGI"
+        sudo rm -f $(apache_site_config_for keystone-wsgi-public)
+        sudo rm -f $(apache_site_config_for keystone-wsgi-admin)
+    fi
 }
 
 # _config_keystone_apache_wsgi() - Set WSGI config files of Keystone
@@ -167,12 +174,6 @@
     local keystone_auth_port=$KEYSTONE_AUTH_PORT
     local venv_path=""
 
-    if is_ssl_enabled_service key; then
-        keystone_ssl_listen=""
-        keystone_ssl="SSLEngine On"
-        keystone_certfile="SSLCertificateFile $KEYSTONE_SSL_CERT"
-        keystone_keyfile="SSLCertificateKeyFile $KEYSTONE_SSL_KEY"
-    fi
     if is_service_enabled tls-proxy; then
         keystone_service_port=$KEYSTONE_SERVICE_PORT_INT
         keystone_auth_port=$KEYSTONE_AUTH_PORT_INT
@@ -202,7 +203,6 @@
 
     if [[ "$KEYSTONE_CONF_DIR" != "$KEYSTONE_DIR/etc" ]]; then
         install -m 600 $KEYSTONE_DIR/etc/keystone.conf.sample $KEYSTONE_CONF
-        cp -p $KEYSTONE_DIR/etc/policy.json $KEYSTONE_CONF_DIR
         if [[ -f "$KEYSTONE_DIR/etc/keystone-paste.ini" ]]; then
             cp -p "$KEYSTONE_DIR/etc/keystone-paste.ini" "$KEYSTONE_PASTE_INI"
         fi
@@ -221,18 +221,12 @@
     fi
 
     # Rewrite stock ``keystone.conf``
-
     if is_service_enabled ldap; then
-        #Set all needed ldap values
-        iniset $KEYSTONE_CONF ldap password $LDAP_PASSWORD
-        iniset $KEYSTONE_CONF ldap user $LDAP_MANAGER_DN
-        iniset $KEYSTONE_CONF ldap suffix $LDAP_BASE_DN
-        iniset $KEYSTONE_CONF ldap user_tree_dn "ou=Users,$LDAP_BASE_DN"
-        iniset $KEYSTONE_CONF DEFAULT member_role_id "9fe2ff9ee4384b1894a90878d3e92bab"
-        iniset $KEYSTONE_CONF DEFAULT member_role_name "_member_"
+        iniset $KEYSTONE_CONF identity domain_config_dir "$KEYSTONE_CONF_DIR/domains"
+        iniset $KEYSTONE_CONF identity domain_specific_drivers_enabled "True"
     fi
-
     iniset $KEYSTONE_CONF identity driver "$KEYSTONE_IDENTITY_BACKEND"
+    iniset $KEYSTONE_CONF identity password_hash_rounds $KEYSTONE_PASSWORD_HASH_ROUNDS
     iniset $KEYSTONE_CONF assignment driver "$KEYSTONE_ASSIGNMENT_BACKEND"
     iniset $KEYSTONE_CONF role driver "$KEYSTONE_ROLE_BACKEND"
     iniset $KEYSTONE_CONF resource driver "$KEYSTONE_RESOURCE_BACKEND"
@@ -244,11 +238,6 @@
 
     iniset_rpc_backend keystone $KEYSTONE_CONF
 
-    # Register SSL certificates if provided
-    if is_ssl_enabled_service key; then
-        ensure_certificates KEYSTONE
-    fi
-
     local service_port=$KEYSTONE_SERVICE_PORT
     local auth_port=$KEYSTONE_AUTH_PORT
 
@@ -264,10 +253,8 @@
     # work when you want to use a different port (in the case of proxy), or you
     # don't want the port (in the case of putting keystone on a path in
     # apache).
-    if is_service_enabled tls-proxy || [ "$KEYSTONE_DEPLOY" == "mod_wsgi" ]; then
-        iniset $KEYSTONE_CONF DEFAULT public_endpoint $KEYSTONE_SERVICE_URI
-        iniset $KEYSTONE_CONF DEFAULT admin_endpoint $KEYSTONE_AUTH_URI
-    fi
+    iniset $KEYSTONE_CONF DEFAULT public_endpoint $KEYSTONE_SERVICE_URI
+    iniset $KEYSTONE_CONF DEFAULT admin_endpoint $KEYSTONE_AUTH_URI
 
     if [[ "$KEYSTONE_TOKEN_FORMAT" != "" ]]; then
         iniset $KEYSTONE_CONF token provider $KEYSTONE_TOKEN_FORMAT
@@ -283,9 +270,7 @@
     fi
 
     # Format logging
-    if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ] && [ "$KEYSTONE_DEPLOY" != "mod_wsgi" ] ; then
-        setup_colorized_logging $KEYSTONE_CONF
-    fi
+    setup_logging $KEYSTONE_CONF
 
     iniset $KEYSTONE_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
 
@@ -293,45 +278,8 @@
         iniset $KEYSTONE_CONF DEFAULT logging_exception_prefix "%(asctime)s.%(msecs)03d %(process)d TRACE %(name)s %(instance)s"
         _config_keystone_apache_wsgi
     else # uwsgi
-        # iniset creates these files when it's called if they don't exist.
-        KEYSTONE_PUBLIC_UWSGI_FILE=$KEYSTONE_CONF_DIR/keystone-uwsgi-public.ini
-        KEYSTONE_ADMIN_UWSGI_FILE=$KEYSTONE_CONF_DIR/keystone-uwsgi-admin.ini
-
-        rm -f "$KEYSTONE_PUBLIC_UWSGI_FILE"
-        rm -f "$KEYSTONE_ADMIN_UWSGI_FILE"
-
-        if is_ssl_enabled_service key; then
-            iniset "$KEYSTONE_PUBLIC_UWSGI_FILE" uwsgi https $KEYSTONE_SERVICE_HOST:$service_port,$KEYSTONE_SSL_CERT,$KEYSTONE_SSL_KEY
-            iniset "$KEYSTONE_ADMIN_UWSGI_FILE" uwsgi https $KEYSTONE_ADMIN_BIND_HOST:$auth_port,$KEYSTONE_SSL_CERT,$KEYSTONE_SSL_KEY
-        else
-            iniset "$KEYSTONE_PUBLIC_UWSGI_FILE" uwsgi http $KEYSTONE_SERVICE_HOST:$service_port
-            iniset "$KEYSTONE_ADMIN_UWSGI_FILE" uwsgi http $KEYSTONE_ADMIN_BIND_HOST:$auth_port
-        fi
-
-        iniset "$KEYSTONE_PUBLIC_UWSGI_FILE" uwsgi wsgi-file "$KEYSTONE_BIN_DIR/keystone-wsgi-public"
-        iniset "$KEYSTONE_PUBLIC_UWSGI_FILE" uwsgi processes $(nproc)
-
-        iniset "$KEYSTONE_ADMIN_UWSGI_FILE" uwsgi wsgi-file "$KEYSTONE_BIN_DIR/keystone-wsgi-admin"
-        iniset "$KEYSTONE_ADMIN_UWSGI_FILE" uwsgi processes $API_WORKERS
-
-        # Common settings
-        for file in "$KEYSTONE_PUBLIC_UWSGI_FILE" "$KEYSTONE_ADMIN_UWSGI_FILE"; do
-            # This is running standalone
-            iniset "$file" uwsgi master true
-            # Set die-on-term & exit-on-reload so that uwsgi shuts down
-            iniset "$file" uwsgi die-on-term true
-            iniset "$file" uwsgi exit-on-reload true
-            iniset "$file" uwsgi enable-threads true
-            iniset "$file" uwsgi plugins python
-            # uwsgi recommends this to prevent thundering herd on accept.
-            iniset "$file" uwsgi thunder-lock true
-            # Override the default size for headers from the 4k default.
-            iniset "$file" uwsgi buffer-size 65535
-            # Make sure the client doesn't try to re-use the connection.
-            iniset "$file" uwsgi add-header "Connection: close"
-            # This ensures that file descriptors aren't shared between processes.
-            iniset "$file" uwsgi lazy-apps true
-        done
+        write_uwsgi_config "$KEYSTONE_PUBLIC_UWSGI_CONF" "$KEYSTONE_PUBLIC_UWSGI" "/identity"
+        write_uwsgi_config "$KEYSTONE_ADMIN_UWSGI_CONF" "$KEYSTONE_ADMIN_UWSGI" "/identity_admin"
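+        # NOTE: write_uwsgi_config (defined in lib/apache) emits a standard
+        # [uwsgi] ini for the given wsgi script and wires the URL prefix
+        # into Apache via mod_proxy_uwsgi, replacing the hand-rolled
+        # per-option iniset block that used to live here.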
     fi
 
     iniset $KEYSTONE_CONF DEFAULT max_token_size 16384
@@ -404,7 +352,7 @@
     # The Member role is used by Horizon and Swift so we need to keep it:
     local member_role="member"
 
-    # Captial Member role is legacy hard coded in Horizon / Swift
+    # Capital Member role is legacy hard coded in Horizon / Swift
     # configs. Keep it around.
     get_or_create_role "Member"
 
@@ -457,6 +405,10 @@
     get_or_add_group_project_role $member_role $non_admin_group $alt_demo_project
     get_or_add_group_project_role $another_role $non_admin_group $alt_demo_project
     get_or_add_group_project_role $admin_role $admin_group $admin_project
+
+    if is_service_enabled ldap; then
+        create_ldap_domain
+    fi
 }
 
 # Create a user that is capable of verifying keystone tokens for use with auth_token middleware.
@@ -489,14 +441,13 @@
     local section=${4:-keystone_authtoken}
 
     iniset $conf_file $section auth_type password
-    iniset $conf_file $section auth_url $KEYSTONE_AUTH_URI
+    iniset $conf_file $section auth_url $KEYSTONE_SERVICE_URI
     iniset $conf_file $section username $admin_user
     iniset $conf_file $section password $SERVICE_PASSWORD
     iniset $conf_file $section user_domain_name "$SERVICE_DOMAIN_NAME"
     iniset $conf_file $section project_name $SERVICE_PROJECT_NAME
     iniset $conf_file $section project_domain_name "$SERVICE_DOMAIN_NAME"
 
-    iniset $conf_file $section auth_uri $KEYSTONE_SERVICE_URI
     iniset $conf_file $section cafile $SSL_BUNDLE_FILE
     iniset $conf_file $section signing_dir $signing_dir
     iniset $conf_file $section memcached_servers $SERVICE_HOST:11211
@@ -513,8 +464,10 @@
         recreate_database keystone
     fi
 
+    time_start "dbsync"
     # Initialize keystone database
     $KEYSTONE_BIN_DIR/keystone-manage --config-file $KEYSTONE_CONF db_sync
+    time_stop "dbsync"
 
     if [[ "$KEYSTONE_TOKEN_FORMAT" == "pki" || "$KEYSTONE_TOKEN_FORMAT" == "pkiz" ]]; then
         # Set up certificates
@@ -578,15 +531,12 @@
 
     if [ "$KEYSTONE_DEPLOY" == "mod_wsgi" ]; then
         install_apache_wsgi
-        if is_ssl_enabled_service "key"; then
-            enable_mod_ssl
-        fi
     elif [ "$KEYSTONE_DEPLOY" == "uwsgi" ]; then
         pip_install uwsgi
     fi
 }
 
-# start_keystone() - Start running processes, including screen
+# start_keystone() - Start running processes
 function start_keystone {
     # Get right service port for testing
     local service_port=$KEYSTONE_SERVICE_PORT
@@ -599,11 +549,8 @@
     if [ "$KEYSTONE_DEPLOY" == "mod_wsgi" ]; then
         enable_apache_site keystone
         restart_apache_server
-        tail_log key /var/log/$APACHE_NAME/keystone.log
-        tail_log key-access /var/log/$APACHE_NAME/keystone_access.log
     else # uwsgi
-        run_process key "$KEYSTONE_BIN_DIR/uwsgi $KEYSTONE_PUBLIC_UWSGI_FILE" "" "key-p"
-        run_process key "$KEYSTONE_BIN_DIR/uwsgi $KEYSTONE_ADMIN_UWSGI_FILE" "" "key-a"
+        run_process keystone "$KEYSTONE_BIN_DIR/uwsgi --procname-prefix keystone --ini $KEYSTONE_PUBLIC_UWSGI_CONF" ""
     fi
 
     echo "Waiting for keystone to start..."
@@ -612,10 +559,7 @@
     # unencryted traffic at this point.
     # If running in Apache, use the path rather than port.
 
-    local service_uri=$auth_protocol://$KEYSTONE_SERVICE_HOST:$service_port/v$IDENTITY_API_VERSION/
-    if [ "$KEYSTONE_DEPLOY" == "mod_wsgi" ]; then
-        service_uri=$auth_protocol://$KEYSTONE_SERVICE_HOST/identity/v$IDENTITY_API_VERSION/
-    fi
+    local service_uri=$auth_protocol://$KEYSTONE_SERVICE_HOST/identity/v$IDENTITY_API_VERSION/
 
     if ! wait_for_service $SERVICE_TIMEOUT $service_uri; then
         die $LINENO "keystone did not start"
@@ -636,9 +580,9 @@
     if [ "$KEYSTONE_DEPLOY" == "mod_wsgi" ]; then
         disable_apache_site keystone
         restart_apache_server
+    else
+        stop_process keystone
     fi
-    # Kill the Keystone screen window
-    stop_process key
 }
 
 # bootstrap_keystone() - Initialize user, role and project
@@ -663,6 +607,57 @@
         --bootstrap-public-url "$KEYSTONE_SERVICE_URI"
 }
 
+# create_ldap_domain() - Create domain file and initialize domain with a user
+function create_ldap_domain {
+    # Create a domain named 'Users'
+    openstack --os-identity-api-version=3 domain create --description "LDAP domain" Users
+
+    # Create domain file inside etc/keystone/domains
+    KEYSTONE_LDAP_DOMAIN_FILE=$KEYSTONE_CONF_DIR/domains/keystone.Users.conf
+    mkdir -p "$KEYSTONE_CONF_DIR/domains"
+    touch "$KEYSTONE_LDAP_DOMAIN_FILE"
+
+    # Set identity driver 'ldap'
+    iniset $KEYSTONE_LDAP_DOMAIN_FILE identity driver "ldap"
+
+    # LDAP settings for Users domain
+    iniset $KEYSTONE_LDAP_DOMAIN_FILE ldap user_tree_dn "ou=Users,$LDAP_BASE_DN"
+    iniset $KEYSTONE_LDAP_DOMAIN_FILE ldap user_objectclass "inetOrgPerson"
+    iniset $KEYSTONE_LDAP_DOMAIN_FILE ldap user_name_attribute "cn"
+    iniset $KEYSTONE_LDAP_DOMAIN_FILE ldap user_mail_attribute "mail"
+    iniset $KEYSTONE_LDAP_DOMAIN_FILE ldap user_id_attribute "uid"
+    iniset $KEYSTONE_LDAP_DOMAIN_FILE ldap user "cn=Manager,dc=openstack,dc=org"
+    iniset $KEYSTONE_LDAP_DOMAIN_FILE ldap url "ldap://localhost"
+    iniset $KEYSTONE_LDAP_DOMAIN_FILE ldap suffix $LDAP_BASE_DN
+    iniset $KEYSTONE_LDAP_DOMAIN_FILE ldap password $LDAP_PASSWORD
+    iniset $KEYSTONE_LDAP_DOMAIN_FILE ldap group_tree_dn "ou=Groups,$LDAP_BASE_DN"
+    iniset $KEYSTONE_LDAP_DOMAIN_FILE ldap group_objectclass "groupOfNames"
+    iniset $KEYSTONE_LDAP_DOMAIN_FILE ldap group_name_attribute "cn"
+    iniset $KEYSTONE_LDAP_DOMAIN_FILE ldap group_id_attribute "cn"
+
+    # Reload Apache and restart the identity service so the new domain's
+    # config file is picked up
+    sudo service apache2 reload
+    sudo systemctl restart devstack@keystone
+
+    # Create LDAP user.ldif and add user to LDAP backend
+    local tmp_ldap_dir
+    tmp_ldap_dir=$(mktemp -d -t ldap.$$.XXXXXXXXXX)
+
+    _ldap_varsubst $FILES/ldap/user.ldif.in $slappass >$tmp_ldap_dir/user.ldif
+    sudo ldapadd -x -w $LDAP_PASSWORD -D "$LDAP_MANAGER_DN" -H $LDAP_URL -c -f $tmp_ldap_dir/user.ldif
+    rm -rf $tmp_ldap_dir
+
+    local admin_project
+    admin_project=$(get_or_create_project "admin" default)
+    local ldap_user
+    ldap_user=$(openstack user show --domain=Users demo -f value -c id)
+    local admin_role="admin"
+    get_or_create_role $admin_role
+
+    # Grant demo LDAP user access to project and role
+    get_or_add_user_project_role $admin_role $ldap_user $admin_project
+}
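+
+# A quick way to sanity-check the LDAP wiring after stacking (assumes the
+# demo LDAP user created above):
+#   openstack --os-identity-api-version 3 user list --domain Users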
+
 # Restore xtrace
 $_XTRACE_KEYSTONE
 
diff --git a/lib/ldap b/lib/ldap
index 4cea812..5a53d0e 100644
--- a/lib/ldap
+++ b/lib/ldap
@@ -119,8 +119,7 @@
 
     printf "installing OpenLDAP"
     if is_ubuntu; then
-        # Ubuntu automatically starts LDAP so no need to call start_ldap()
-        :
+        configure_ldap
     elif is_fedora; then
         start_ldap
     elif is_suse; then
@@ -148,6 +147,27 @@
     rm -rf $tmp_ldap_dir
 }
 
+# configure_ldap() - Configure LDAP by preseeding debconf and reconfiguring slapd
+function configure_ldap {
+    sudo debconf-set-selections <<EOF
+    slapd slapd/internal/generated_adminpw password $LDAP_PASSWORD
+    slapd slapd/internal/adminpw password $LDAP_PASSWORD
+    slapd slapd/password2 password $LDAP_PASSWORD
+    slapd slapd/password1 password $LDAP_PASSWORD
+    slapd slapd/dump_database_destdir string /var/backups/slapd-VERSION
+    slapd slapd/domain string Users
+    slapd shared/organization string $LDAP_DOMAIN
+    slapd slapd/backend string HDB
+    slapd slapd/purge_database boolean true
+    slapd slapd/move_old_database boolean true
+    slapd slapd/allow_ldap_v2 boolean false
+    slapd slapd/no_configuration boolean false
+    slapd slapd/dump_database select when needed
+EOF
+    sudo apt-get install -y slapd ldap-utils
+    sudo dpkg-reconfigure -f noninteractive $LDAP_SERVICE_NAME
+}
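+
+# NOTE: the preseeded answers above correspond to what an interactive
+# ``dpkg-reconfigure slapd`` would prompt for; ``debconf-get-selections``
+# (from the debconf-utils package) can be used to inspect them afterwards.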
+
 # start_ldap() - Start LDAP
 function start_ldap {
     sudo service $LDAP_SERVICE_NAME restart
diff --git a/lib/libraries b/lib/libraries
new file mode 100644
index 0000000..6d52f64
--- /dev/null
+++ b/lib/libraries
@@ -0,0 +1,145 @@
+#!/bin/bash
+#
+# lib/libraries
+#
+# Functions to install libraries from git
+#
+# We need this because projects may want to run against pre-release
+# versions of the oslo libraries.
+
+# Dependencies:
+#
+# - ``functions`` file
+
+# ``stack.sh`` calls the entry points in this order:
+#
+# - install_libraries
+
+# Save trace setting
+_XTRACE_LIB_LIBRARIES=$(set +o | grep xtrace)
+set +o xtrace
+
+
+# Defaults
+# --------
+GITDIR["automaton"]=$DEST/automaton
+GITDIR["castellan"]=$DEST/castellan
+GITDIR["cliff"]=$DEST/cliff
+GITDIR["cursive"]=$DEST/cursive
+GITDIR["debtcollector"]=$DEST/debtcollector
+GITDIR["futurist"]=$DEST/futurist
+GITDIR["os-client-config"]=$DEST/os-client-config
+GITDIR["osc-lib"]=$DEST/osc-lib
+GITDIR["osc-placement"]=$DEST/osc-placement
+GITDIR["oslo.cache"]=$DEST/oslo.cache
+GITDIR["oslo.concurrency"]=$DEST/oslo.concurrency
+GITDIR["oslo.config"]=$DEST/oslo.config
+GITDIR["oslo.context"]=$DEST/oslo.context
+GITDIR["oslo.db"]=$DEST/oslo.db
+GITDIR["oslo.i18n"]=$DEST/oslo.i18n
+GITDIR["oslo.log"]=$DEST/oslo.log
+GITDIR["oslo.messaging"]=$DEST/oslo.messaging
+GITDIR["oslo.middleware"]=$DEST/oslo.middleware
+GITDIR["oslo.policy"]=$DEST/oslo.policy
+GITDIR["oslo.privsep"]=$DEST/oslo.privsep
+GITDIR["oslo.reports"]=$DEST/oslo.reports
+GITDIR["oslo.rootwrap"]=$DEST/oslo.rootwrap
+GITDIR["oslo.serialization"]=$DEST/oslo.serialization
+GITDIR["oslo.service"]=$DEST/oslo.service
+GITDIR["oslo.utils"]=$DEST/oslo.utils
+GITDIR["oslo.versionedobjects"]=$DEST/oslo.versionedobjects
+GITDIR["oslo.vmware"]=$DEST/oslo.vmware
+GITDIR["osprofiler"]=$DEST/osprofiler
+GITDIR["pycadf"]=$DEST/pycadf
+GITDIR["python-openstacksdk"]=$DEST/python-openstacksdk
+GITDIR["stevedore"]=$DEST/stevedore
+GITDIR["taskflow"]=$DEST/taskflow
+GITDIR["tooz"]=$DEST/tooz
+
+# Non-oslo libraries are welcome below as well; keeping them in one
+# place prevents duplicating this code.
+GITDIR["os-brick"]=$DEST/os-brick
+GITDIR["os-traits"]=$DEST/os-traits
+
+# Support entry points installation of console scripts
+OSLO_BIN_DIR=$(get_python_exec_prefix)
+
+
+# Functions
+# ---------
+
+function _install_lib_from_source {
+    local name=$1
+    if use_library_from_git "$name"; then
+        git_clone_by_name "$name"
+        setup_dev_lib "$name"
+    fi
+}
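+
+# A library is installed from source only when it is named in
+# ``LIBS_FROM_GIT`` (which is what use_library_from_git consults), e.g. in
+# ``local.conf``:
+#   LIBS_FROM_GIT=oslo.messaging,os-brick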
+
+# install_oslo - install libraries that oslo needs
+function install_oslo {
+    install_libs
+}
+
+# install_libs() - Install additional libraries that we need and want
+# in all environments. Most entries are installed here only when built
+# from source via git; the etcd clients at the end are always installed.
+function install_libs {
+    _install_lib_from_source "automaton"
+    _install_lib_from_source "castellan"
+    _install_lib_from_source "cliff"
+    _install_lib_from_source "cursive"
+    _install_lib_from_source "debtcollector"
+    _install_lib_from_source "futurist"
+    _install_lib_from_source "osc-lib"
+    _install_lib_from_source "osc-placement"
+    _install_lib_from_source "os-client-config"
+    _install_lib_from_source "oslo.cache"
+    _install_lib_from_source "oslo.concurrency"
+    _install_lib_from_source "oslo.config"
+    _install_lib_from_source "oslo.context"
+    _install_lib_from_source "oslo.db"
+    _install_lib_from_source "oslo.i18n"
+    _install_lib_from_source "oslo.log"
+    _install_lib_from_source "oslo.messaging"
+    _install_lib_from_source "oslo.middleware"
+    _install_lib_from_source "oslo.policy"
+    _install_lib_from_source "oslo.privsep"
+    _install_lib_from_source "oslo.reports"
+    _install_lib_from_source "oslo.rootwrap"
+    _install_lib_from_source "oslo.serialization"
+    _install_lib_from_source "oslo.service"
+    _install_lib_from_source "oslo.utils"
+    _install_lib_from_source "oslo.versionedobjects"
+    _install_lib_from_source "oslo.vmware"
+    _install_lib_from_source "osprofiler"
+    _install_lib_from_source "pycadf"
+    _install_lib_from_source "python-openstacksdk"
+    _install_lib_from_source "stevedore"
+    _install_lib_from_source "taskflow"
+    _install_lib_from_source "tooz"
+    # installation of additional libraries
+    #
+    # os-brick for cinder/nova, os-traits for nova
+    _install_lib_from_source "os-brick"
+    _install_lib_from_source "os-traits"
+    #
+    # python client libraries we might need from git can go here
+    _install_lib_from_source "python-barbicanclient"
+
+
+    # etcd (because tooz does not have a hard dependency on these)
+    #
+    # NOTE(sdague): this is currently a workaround because tooz
+    # doesn't pull in etcd3 itself.
+    pip_install etcd3
+    pip_install etcd3gw
+}
+
+# Restore xtrace
+$_XTRACE_LIB_LIBRARIES
+
+# Tell emacs to use shell-script-mode
+## Local variables:
+## mode: shell-script
+## End:
diff --git a/lib/lvm b/lib/lvm
index 0cebd92..f047181 100644
--- a/lib/lvm
+++ b/lib/lvm
@@ -35,7 +35,7 @@
 
 # _clean_lvm_volume_group removes all default LVM volumes
 #
-# Usage: clean_lvm_volume_group $vg
+# Usage: _clean_lvm_volume_group $vg
 function _clean_lvm_volume_group {
     local vg=$1
 
@@ -43,6 +43,16 @@
     sudo lvremove -f $vg
 }
 
+# _remove_lvm_volume_group removes the volume group
+#
+# Usage: _remove_lvm_volume_group $vg
+function _remove_lvm_volume_group {
+    local vg=$1
+
+    # Remove the volume group
+    sudo vgremove -f $vg
+}
+
 # _clean_lvm_backing_file() removes the backing file of the
 # volume group
 #
@@ -69,6 +79,7 @@
     local vg=$1
 
     _clean_lvm_volume_group $vg
+    _remove_lvm_volume_group $vg
     # if there is no logical volume left, it's safe to attempt a cleanup
     # of the backing file
     if [[ -z "$(sudo lvs --noheadings -o lv_name $vg 2>/dev/null)" ]]; then
diff --git a/lib/neutron b/lib/neutron
index e72c9fe..645d68c 100644
--- a/lib/neutron
+++ b/lib/neutron
@@ -61,7 +61,7 @@
 NEUTRON_METERING_BINARY=${NEUTRON_METERING_BINARY:-neutron-metering-agent}
 
 # Public facing bits
-if is_ssl_enabled_service "neutron" || is_service_enabled tls-proxy; then
+if is_service_enabled tls-proxy; then
     NEUTRON_SERVICE_PROTOCOL="https"
 fi
 NEUTRON_SERVICE_HOST=${NEUTRON_SERVICE_HOST:-$SERVICE_HOST}
@@ -75,8 +75,16 @@
 NEUTRON_ROOTWRAP_CMD="$NEUTRON_ROOTWRAP $NEUTRON_ROOTWRAP_CONF_FILE"
 NEUTRON_ROOTWRAP_DAEMON_CMD="$NEUTRON_ROOTWRAP-daemon $NEUTRON_ROOTWRAP_CONF_FILE"
 
+# This is needed because _neutron_ovs_base_configure_l3_agent will set
+# external_network_bridge
+Q_USE_PROVIDERNET_FOR_PUBLIC=${Q_USE_PROVIDERNET_FOR_PUBLIC:-True}
+# This is needed because _neutron_ovs_base_configure_l3_agent uses it to create
+# an external network bridge
+PUBLIC_BRIDGE=${PUBLIC_BRIDGE:-br-ex}
+PUBLIC_BRIDGE_MTU=${PUBLIC_BRIDGE_MTU:-1500}
+
 # Additional neutron api config files
-declare -a _NEUTRON_SERVER_EXTRA_CONF_FILES_ABS
+declare -a -g _NEUTRON_SERVER_EXTRA_CONF_FILES_ABS
 
 # Functions
 # ---------
@@ -84,6 +92,7 @@
 # Test if any Neutron services are enabled
 # is_neutron_enabled
 function is_neutron_enabled {
+    [[ ,${DISABLED_SERVICES} =~ ,"neutron" ]] && return 1
     [[ ,${ENABLED_SERVICES} =~ ,"neutron-" || ,${ENABLED_SERVICES} =~ ,"q-" ]] && return 0
     return 1
 }
@@ -91,6 +100,7 @@
 # Test if any Neutron services are enabled
 # is_neutron_enabled
 function is_neutron_legacy_enabled {
+    [[ ,${DISABLED_SERVICES} =~ ,"neutron" ]] && return 1
     [[ ,${ENABLED_SERVICES} =~ ,"q-" ]] && return 0
     return 1
 }
@@ -135,7 +145,11 @@
 
     mkdir -p $NEUTRON_CORE_PLUGIN_CONF_PATH
 
-    cp $NEUTRON_DIR/etc/neutron/plugins/$NEUTRON_CORE_PLUGIN/$NEUTRON_CORE_PLUGIN_CONF_FILENAME.sample $NEUTRON_CORE_PLUGIN_CONF
+    # NOTE(yamamoto): A decomposed plugin should prepare the config file in
+    # its devstack plugin.
+    if [ -f $NEUTRON_DIR/etc/neutron/plugins/$NEUTRON_CORE_PLUGIN/$NEUTRON_CORE_PLUGIN_CONF_FILENAME.sample ]; then
+        cp $NEUTRON_DIR/etc/neutron/plugins/$NEUTRON_CORE_PLUGIN/$NEUTRON_CORE_PLUGIN_CONF_FILENAME.sample $NEUTRON_CORE_PLUGIN_CONF
+    fi
 
     iniset $NEUTRON_CONF database connection `database_connection_url neutron`
     iniset $NEUTRON_CONF DEFAULT state_path $NEUTRON_STATE_PATH
@@ -171,7 +185,7 @@
         iniset $NEUTRON_CORE_PLUGIN_CONF ml2_type_vxlan vni_ranges 1001:2000
         iniset $NEUTRON_CORE_PLUGIN_CONF ml2_type_flat flat_networks public
         if [[ "$NEUTRON_PORT_SECURITY" = "True" ]]; then
-            iniset $NEUTRON_CORE_PLUGIN_CONF ml2 extension_drivers port_security
+            neutron_ml2_extension_driver_add port_security
         fi
     fi
 
@@ -228,29 +242,17 @@
         configure_root_helper_options $NEUTRON_META_CONF
 
         # TODO(dtroyer): remove the v2.0 hard code below
-        iniset $NEUTRON_META_CONF DEFAULT auth_url $KEYSTONE_SERVICE_URI/v2.0
+        iniset $NEUTRON_META_CONF DEFAULT auth_url $KEYSTONE_SERVICE_URI
         configure_auth_token_middleware $NEUTRON_META_CONF neutron $NEUTRON_AUTH_CACHE_DIR DEFAULT
     fi
 
     # Format logging
-    if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ]; then
-        setup_colorized_logging $NEUTRON_CONF DEFAULT project_id
-    else
-        # Show user_name and project_name by default
-        iniset $NEUTRON_CONF DEFAULT logging_context_format_string "%(asctime)s.%(msecs)03d %(levelname)s %(name)s [%(request_id)s %(user_name)s %(project_name)s] %(instance)s%(message)s"
-    fi
+    setup_logging $NEUTRON_CONF
 
     if is_service_enabled tls-proxy; then
         # Set the service port for a proxy to take the original
         iniset $NEUTRON_CONF DEFAULT bind_port "$NEUTRON_SERVICE_PORT_INT"
-    fi
-
-    if is_ssl_enabled_service "neutron"; then
-        ensure_certificates NEUTRON
-
-        iniset $NEUTRON_CONF DEFAULT use_ssl True
-        iniset $NEUTRON_CONF DEFAULT ssl_cert_file "$NEUTRON_SSL_CERT"
-        iniset $NEUTRON_CONF DEFAULT ssl_key_file "$NEUTRON_SSL_KEY"
+        iniset $NEUTRON_CONF oslo_middleware enable_proxy_headers_parsing True
     fi
 
     # Metering
@@ -289,7 +291,7 @@
 function configure_neutron_nova_new {
     iniset $NOVA_CONF DEFAULT use_neutron True
     iniset $NOVA_CONF neutron auth_type "password"
-    iniset $NOVA_CONF neutron auth_url "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_AUTH_PORT/v3"
+    iniset $NOVA_CONF neutron auth_url "$KEYSTONE_SERVICE_URI"
     iniset $NOVA_CONF neutron username neutron
     iniset $NOVA_CONF neutron password "$SERVICE_PASSWORD"
     iniset $NOVA_CONF neutron user_domain_name "Default"
@@ -340,8 +342,10 @@
 
     recreate_database neutron
 
+    time_start "dbsync"
     # Run Neutron db migrations
     $NEUTRON_BIN_DIR/neutron-db-manage upgrade heads
+    time_stop "dbsync"
 
     create_neutron_cache_dir
 }
@@ -401,24 +405,17 @@
     # TODO(sc68cal) Stop hard coding this
     run_process neutron-api "$NEUTRON_BIN_DIR/neutron-server $opts"
 
-    if is_ssl_enabled_service "neutron"; then
-        ssl_ca="--ca-certificate=${SSL_BUNDLE_FILE}"
-        local testcmd="wget ${ssl_ca} --no-proxy -q -O- $service_protocol://$NEUTRON_SERVICE_HOST:$service_port"
-        test_with_retry "$testcmd" "Neutron did not start" $SERVICE_TIMEOUT
-    else
-        if ! wait_for_service $SERVICE_TIMEOUT $service_protocol://$NEUTRON_SERVICE_HOST:$service_port; then
-            die $LINENO "neutron-api did not start"
-        fi
+    if ! wait_for_service $SERVICE_TIMEOUT $service_protocol://$NEUTRON_SERVICE_HOST:$service_port; then
+        die $LINENO "neutron-api did not start"
     fi
 
-
     # Start proxy if enabled
     if is_service_enabled tls-proxy; then
         start_tls_proxy neutron '*' $NEUTRON_SERVICE_PORT $NEUTRON_SERVICE_HOST $NEUTRON_SERVICE_PORT_INT
     fi
 }
 
-# start_neutron() - Start running processes, including screen
+# start_neutron() - Start running processes
 function start_neutron_new {
     # Start up the neutron agents if enabled
     # TODO(sc68cal) Make this pluggable so different DevStack plugins for different Neutron plugins
@@ -451,11 +448,11 @@
     fi
 
     if is_service_enabled neutron-metering; then
-        run_process neutron-metering "$NEUTRON_METERING_BINARY --config-file $NEUTRON_CONF --config-file $NEUTRON_METERING_AGENT_CONF"
+        run_process neutron-metering "$NEUTRON_BIN_DIR/$NEUTRON_METERING_BINARY --config-file $NEUTRON_CONF --config-file $NEUTRON_METERING_AGENT_CONF"
     fi
 }
 
-# stop_neutron() - Stop running processes (non-screen)
+# stop_neutron() - Stop running processes
 function stop_neutron_new {
     for serv in neutron-api neutron-agent neutron-l3; do
         stop_process $serv
@@ -486,10 +483,29 @@
     iniset $NEUTRON_CONF DEFAULT service_plugins $plugins
 }
 
+function _neutron_ml2_extension_driver_add {
+    local driver=$1
+    local drivers=""
+
+    drivers=$(iniget $NEUTRON_CORE_PLUGIN_CONF ml2 extension_drivers)
+    if [ -n "$drivers" ]; then
+        drivers+=","
+    fi
+    drivers+="${driver}"
+    iniset $NEUTRON_CORE_PLUGIN_CONF ml2 extension_drivers $drivers
+}
+
 function neutron_server_config_add_new {
     _NEUTRON_SERVER_EXTRA_CONF_FILES_ABS+=($1)
 }
 
+# neutron_deploy_rootwrap_filters() - deploy rootwrap filters
+function neutron_deploy_rootwrap_filters_new {
+    local srcdir=$1
+    sudo install -d -o root -g root -m 755 $NEUTRON_CONF_DIR/rootwrap.d
+    sudo install -o root -g root -m 644 $srcdir/etc/neutron/rootwrap.d/*.filters $NEUTRON_CONF_DIR/rootwrap.d
+}
+
 # Dispatch functions
 # These are needed for compatibility between the old and new implementations
 # where there are function name overlaps.  These will be removed when
@@ -558,6 +574,15 @@
     fi
 }
 
+function neutron_ml2_extension_driver_add {
+    if is_neutron_legacy_enabled; then
+        # Call back to old function
+        _neutron_ml2_extension_driver_add_old "$@"
+    else
+        _neutron_ml2_extension_driver_add "$@"
+    fi
+}
+
 function install_neutron_agent_packages {
     if is_neutron_legacy_enabled; then
         # Call back to old function
@@ -595,5 +620,14 @@
     fi
 }
 
+function neutron_deploy_rootwrap_filters {
+    if is_neutron_legacy_enabled; then
+        # Call back to old function
+        _neutron_deploy_rootwrap_filters "$@"
+    else
+        neutron_deploy_rootwrap_filters_new "$@"
+    fi
+}
+
 # Restore xtrace
 $XTRACE
diff --git a/lib/neutron-legacy b/lib/neutron-legacy
index 1a16a44..0ccb17c 100644
--- a/lib/neutron-legacy
+++ b/lib/neutron-legacy
@@ -20,6 +20,7 @@
 # - init_neutron_third_party
 # - start_neutron_third_party
 # - create_nova_conf_neutron
+# - configure_neutron_after_post_config
 # - start_neutron_service_and_check
 # - check_neutron_third_party_integration
 # - start_neutron_agents
@@ -61,7 +62,7 @@
 
 deprecated "Using lib/neutron-legacy is deprecated, and it will be removed in the future"
 
-if is_ssl_enabled_service "neutron" || is_service_enabled tls-proxy; then
+if is_service_enabled tls-proxy; then
     Q_PROTOCOL="https"
 fi
 
@@ -141,10 +142,10 @@
 # These config files are relative to ``/etc/neutron``.  The above
 # example would specify ``--config-file /etc/neutron/file1`` for
 # neutron server.
-declare -a Q_PLUGIN_EXTRA_CONF_FILES
+declare -a -g Q_PLUGIN_EXTRA_CONF_FILES
 
 # same as Q_PLUGIN_EXTRA_CONF_FILES, but with absolute path.
-declare -a _Q_PLUGIN_EXTRA_CONF_FILES_ABS
+declare -a -g _Q_PLUGIN_EXTRA_CONF_FILES_ABS
 
 
 Q_RR_CONF_FILE=$NEUTRON_CONF_DIR/rootwrap.conf
@@ -167,7 +168,7 @@
 #
 Q_DVR_MODE=${Q_DVR_MODE:-legacy}
 if [[ "$Q_DVR_MODE" != "legacy" ]]; then
-    Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,linuxbridge,l2population
+    Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,l2population
 fi
 
 # Provider Network Configurations
@@ -331,7 +332,6 @@
     _configure_neutron_common
     iniset_rpc_backend neutron $NEUTRON_CONF
 
-    # goes before q-svc to init Q_SERVICE_PLUGIN_CLASSES
     if is_service_enabled q-metering; then
         _configure_neutron_metering
     fi
@@ -368,7 +368,7 @@
 function create_nova_conf_neutron {
     iniset $NOVA_CONF DEFAULT use_neutron True
     iniset $NOVA_CONF neutron auth_type "password"
-    iniset $NOVA_CONF neutron auth_url "$KEYSTONE_AUTH_URI/v3"
+    iniset $NOVA_CONF neutron auth_url "$KEYSTONE_AUTH_URI"
     iniset $NOVA_CONF neutron username "$Q_ADMIN_USERNAME"
     iniset $NOVA_CONF neutron password "$SERVICE_PASSWORD"
     iniset $NOVA_CONF neutron user_domain_name "$SERVICE_DOMAIN_NAME"
@@ -417,8 +417,10 @@
 # init_mutnauq() - Initialize databases, etc.
 function init_mutnauq {
     recreate_database $Q_DB_NAME
+    time_start "dbsync"
     # Run Neutron db migrations
     $NEUTRON_BIN_DIR/neutron-db-manage --config-file $NEUTRON_CONF --config-file /$Q_PLUGIN_CONF_FILE upgrade head
+    time_stop "dbsync"
 }
 
 # install_mutnauq() - Collect source and prepare
@@ -432,24 +434,6 @@
 
     git_clone $NEUTRON_REPO $NEUTRON_DIR $NEUTRON_BRANCH
     setup_develop $NEUTRON_DIR
-
-    if [ "$VIRT_DRIVER" == 'xenserver' ]; then
-        local dom0_ip
-        dom0_ip=$(echo "$XENAPI_CONNECTION_URL" | cut -d "/" -f 3-)
-
-        local ssh_dom0
-        ssh_dom0="sudo -u $DOMZERO_USER ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@$dom0_ip"
-
-        # Find where the plugins should go in dom0
-        local xen_functions
-        xen_functions=$(cat $TOP_DIR/tools/xen/functions)
-        local plugin_dir
-        plugin_dir=$($ssh_dom0 "$xen_functions; set -eux; xapi_plugin_location")
-
-        # install neutron plugins to dom0
-        tar -czf - -C $NEUTRON_DIR/neutron/plugins/ml2/drivers/openvswitch/agent/xenapi/etc/xapi.d/plugins/ ./ |
-            $ssh_dom0 "tar -xzf - -C $plugin_dir && chmod a+x $plugin_dir/*"
-    fi
 }
 
 # install_neutron_agent_packages() - Collect source and prepare
@@ -464,7 +448,14 @@
     fi
 }
 
-# Start running processes, including screen
+# Finish neutron configuration
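+# NOTE: called after the plugins' post-config phase so that devstack
+# plugins get a chance to append to ``Q_SERVICE_PLUGIN_CLASSES`` first.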
+function configure_neutron_after_post_config {
+    if [[ $Q_SERVICE_PLUGIN_CLASSES != '' ]]; then
+        iniset $NEUTRON_CONF DEFAULT service_plugins $Q_SERVICE_PLUGIN_CLASSES
+    fi
+}
+
+# Start running processes
 function start_neutron_service_and_check {
     local service_port=$Q_PORT
     local service_protocol=$Q_PROTOCOL
@@ -479,9 +470,6 @@
     # Start the Neutron service
     run_process q-svc "$NEUTRON_BIN_DIR/neutron-server $cfg_file_options"
     echo "Waiting for Neutron to start..."
-    if is_ssl_enabled_service "neutron"; then
-        ssl_ca="--ca-certificate=${SSL_BUNDLE_FILE}"
-    fi
 
     local testcmd="wget ${ssl_ca} --no-proxy -q -O- $service_protocol://$Q_HOST:$service_port"
     test_with_retry "$testcmd" "Neutron did not start" $SERVICE_TIMEOUT
@@ -523,11 +511,6 @@
 
     run_process q-meta "$AGENT_META_BINARY --config-file $NEUTRON_CONF --config-file $Q_META_CONF_FILE"
     run_process q-metering "$AGENT_METERING_BINARY --config-file $NEUTRON_CONF --config-file $METERING_AGENT_CONF_FILENAME"
-
-    if [ "$VIRT_DRIVER" = 'xenserver' ]; then
-        # For XenServer, start an agent for the domU openvswitch
-        run_process q-domua "$AGENT_BINARY --config-file $NEUTRON_CONF --config-file /$Q_PLUGIN_CONF_FILE.domU"
-    fi
 }
 
 # Start running processes, including screen
@@ -539,13 +522,9 @@
 
 function stop_mutnauq_l2_agent {
     stop_process q-agt
-
-    if [ "$VIRT_DRIVER" = 'xenserver' ]; then
-        stop_process q-domua
-    fi
 }
 
-# stop_mutnauq_other() - Stop running processes (non-screen)
+# stop_mutnauq_other() - Stop running processes
 function stop_mutnauq_other {
     if is_service_enabled q-dhcp; then
         stop_process q-dhcp
@@ -600,7 +579,7 @@
         local IP_DEL=""
         local IP_UP=""
         local DEFAULT_ROUTE_GW
-        DEFAULT_ROUTE_GW=$(ip -f $af r | awk "/default.+$from_intf/ { print \$3; exit }")
+        DEFAULT_ROUTE_GW=$(ip -f $af r | awk "/default.+$from_intf\s/ { print \$3; exit }")
         local ADD_OVS_PORT=""
         local DEL_OVS_PORT=""
         local ARP_CMD=""
@@ -739,18 +718,7 @@
     if is_service_enabled tls-proxy; then
         # Set the service port for a proxy to take the original
         iniset $NEUTRON_CONF DEFAULT bind_port "$Q_PORT_INT"
-    fi
-
-    if is_ssl_enabled_service "nova"; then
-        iniset $NEUTRON_CONF nova cafile $SSL_BUNDLE_FILE
-    fi
-
-    if is_ssl_enabled_service "neutron"; then
-        ensure_certificates NEUTRON
-
-        iniset $NEUTRON_CONF DEFAULT use_ssl True
-        iniset $NEUTRON_CONF DEFAULT ssl_cert_file "$NEUTRON_SSL_CERT"
-        iniset $NEUTRON_CONF DEFAULT ssl_key_file "$NEUTRON_SSL_KEY"
+        iniset $NEUTRON_CONF oslo_middleware enable_proxy_headers_parsing True
     fi
 
     _neutron_setup_rootwrap
@@ -836,10 +804,6 @@
     # Update either configuration file with plugin
     iniset $NEUTRON_CONF DEFAULT core_plugin $Q_PLUGIN_CLASS
 
-    if [[ $Q_SERVICE_PLUGIN_CLASSES != '' ]]; then
-        iniset $NEUTRON_CONF DEFAULT service_plugins $Q_SERVICE_PLUGIN_CLASSES
-    fi
-
     iniset $NEUTRON_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
     iniset $NEUTRON_CONF oslo_policy policy_file $Q_POLICY_FILE
     iniset $NEUTRON_CONF DEFAULT allow_overlapping_ips $Q_ALLOW_OVERLAPPING_IP
@@ -870,6 +834,16 @@
     fi
 }
 
+# _neutron_ml2_extension_driver_add_old() - add ML2 extension driver
+function _neutron_ml2_extension_driver_add_old {
+    local extension=$1
+    if [[ $Q_ML2_PLUGIN_EXT_DRIVERS == '' ]]; then
+        Q_ML2_PLUGIN_EXT_DRIVERS=$extension
+    elif [[ ! ,${Q_ML2_PLUGIN_EXT_DRIVERS}, =~ ,${extension}, ]]; then
+        Q_ML2_PLUGIN_EXT_DRIVERS="$Q_ML2_PLUGIN_EXT_DRIVERS,$extension"
+    fi
+}
+
 # mutnauq_server_config_add() - add server config file
 function mutnauq_server_config_add {
     _Q_PLUGIN_EXTRA_CONF_FILES_ABS+=($1)
diff --git a/lib/neutron_plugins/openvswitch_agent b/lib/neutron_plugins/openvswitch_agent
index acab582..b65a258 100644
--- a/lib/neutron_plugins/openvswitch_agent
+++ b/lib/neutron_plugins/openvswitch_agent
@@ -11,12 +11,6 @@
 
 function neutron_plugin_create_nova_conf {
     _neutron_ovs_base_configure_nova_vif_driver
-    if [ "$VIRT_DRIVER" == 'xenserver' ]; then
-        iniset $NOVA_CONF xenserver vif_driver nova.virt.xenapi.vif.XenAPIOpenVswitchDriver
-        iniset $NOVA_CONF xenserver ovs_integration_bridge $XEN_INTEGRATION_BRIDGE
-        # Disable nova's firewall so that it does not conflict with neutron
-        iniset $NOVA_CONF DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
-    fi
 }
 
 function neutron_plugin_install_agent_packages {
@@ -58,65 +52,6 @@
     fi
     AGENT_BINARY="$NEUTRON_BIN_DIR/neutron-openvswitch-agent"
 
-    if [ "$VIRT_DRIVER" == 'xenserver' ]; then
-        # Make a copy of our config for domU
-        sudo cp /$Q_PLUGIN_CONF_FILE "/$Q_PLUGIN_CONF_FILE.domU"
-
-        # change domU's config file to STACK_USER
-        sudo chown $STACK_USER:$STACK_USER /$Q_PLUGIN_CONF_FILE.domU
-
-        # Deal with Dom0's L2 Agent:
-        Q_RR_DOM0_COMMAND="$NEUTRON_BIN_DIR/neutron-rootwrap-xen-dom0 $Q_RR_CONF_FILE"
-
-        # For now, duplicate the xen configuration already found in nova.conf
-        iniset $Q_RR_CONF_FILE xenapi xenapi_connection_url "$XENAPI_CONNECTION_URL"
-        iniset $Q_RR_CONF_FILE xenapi xenapi_connection_username "$XENAPI_USER"
-        iniset $Q_RR_CONF_FILE xenapi xenapi_connection_password "$XENAPI_PASSWORD"
-
-        # Under XS/XCP, the ovs agent needs to target the dom0
-        # integration bridge.  This is enabled by using a root wrapper
-        # that executes commands on dom0 via a XenAPI plugin.
-        # XenAPI does not support daemon rootwrap now, so set root_helper_daemon empty
-        iniset "/$Q_PLUGIN_CONF_FILE.domU" agent root_helper ""
-        iniset "/$Q_PLUGIN_CONF_FILE.domU" agent root_helper_daemon "xenapi_root_helper"
-        iniset "/$Q_PLUGIN_CONF_FILE.domU" xenapi connection_url "$XENAPI_CONNECTION_URL"
-        iniset "/$Q_PLUGIN_CONF_FILE.domU" xenapi connection_username "$XENAPI_USER"
-        iniset "/$Q_PLUGIN_CONF_FILE.domU" xenapi connection_password "$XENAPI_PASSWORD"
-
-        # Disable minimize polling, so that it can always detect OVS and Port changes
-        # This is a problem of xenserver + neutron, bug has been reported
-        # https://bugs.launchpad.net/neutron/+bug/1495423
-        iniset "/$Q_PLUGIN_CONF_FILE.domU" agent minimize_polling False
-
-        # Set "physical" mapping
-        iniset "/$Q_PLUGIN_CONF_FILE.domU" ovs bridge_mappings "physnet1:$FLAT_NETWORK_BRIDGE"
-
-        # XEN_INTEGRATION_BRIDGE is the integration bridge in dom0
-        iniset "/$Q_PLUGIN_CONF_FILE.domU" ovs integration_bridge $XEN_INTEGRATION_BRIDGE
-
-        # Set OVS native interface for ovs-agent in compute node
-        XEN_DOM0_IP=$(echo "$XENAPI_CONNECTION_URL" | cut -d "/" -f 3)
-        iniset /$Q_PLUGIN_CONF_FILE.domU ovs ovsdb_connection tcp:$XEN_DOM0_IP:6640
-        iniset /$Q_PLUGIN_CONF_FILE.domU ovs of_listen_address $HOST_IP
-
-        # Set up domU's L2 agent:
-
-        # Create a bridge "br-$VLAN_INTERFACE"
-        _neutron_ovs_base_add_bridge "br-$VLAN_INTERFACE"
-        # Add $VLAN_INTERFACE to that bridge
-        sudo ovs-vsctl -- --may-exist add-port "br-$VLAN_INTERFACE" $VLAN_INTERFACE
-
-        # Create external bridge and add port
-        _neutron_ovs_base_add_public_bridge
-        sudo ovs-vsctl -- --may-exist add-port $PUBLIC_BRIDGE $PUBLIC_INTERFACE
-
-        # Set bridge mappings to "physnet1:br-$GUEST_INTERFACE_DEFAULT"
-        iniset /$Q_PLUGIN_CONF_FILE ovs bridge_mappings "physnet1:br-$VLAN_INTERFACE,physnet-ex:$PUBLIC_BRIDGE"
-        # Set integration bridge to domU's
-        iniset /$Q_PLUGIN_CONF_FILE ovs integration_bridge $OVS_BRIDGE
-        # Set root wrap
-        iniset /$Q_PLUGIN_CONF_FILE agent root_helper "$Q_RR_COMMAND"
-    fi
     iniset /$Q_PLUGIN_CONF_FILE agent tunnel_types $Q_TUNNEL_TYPES
     iniset /$Q_PLUGIN_CONF_FILE ovs datapath_type $OVS_DATAPATH_TYPE
 }
diff --git a/lib/neutron_plugins/services/l3 b/lib/neutron_plugins/services/l3
index e87a30c..98315b7 100644
--- a/lib/neutron_plugins/services/l3
+++ b/lib/neutron_plugins/services/l3
@@ -87,7 +87,8 @@
 
 # Subnetpool defaults
 USE_SUBNETPOOL=${USE_SUBNETPOOL:-True}
-SUBNETPOOL_NAME=${SUBNETPOOL_NAME:-"shared-default-subnetpool"}
+SUBNETPOOL_NAME_V4=${SUBNETPOOL_NAME_V4:-"shared-default-subnetpool-v4"}
+SUBNETPOOL_NAME_V6=${SUBNETPOOL_NAME_V6:-"shared-default-subnetpool-v6"}
 
 SUBNETPOOL_PREFIX_V4=${SUBNETPOOL_PREFIX_V4:-$IPV4_ADDRS_SAFE_TO_USE}
 SUBNETPOOL_PREFIX_V6=${SUBNETPOOL_PREFIX_V6:-$IPV6_ADDRS_SAFE_TO_USE}
@@ -169,10 +170,10 @@
     if is_networking_extension_supported "auto-allocated-topology"; then
         if [[ "$USE_SUBNETPOOL" == "True" ]]; then
             if [[ "$IP_VERSION" =~ 4.* ]]; then
-                SUBNETPOOL_V4_ID=$(openstack --os-cloud devstack-admin --os-region "$REGION_NAME" subnet pool create $SUBNETPOOL_NAME --default-prefix-length $SUBNETPOOL_SIZE_V4 --pool-prefix $SUBNETPOOL_PREFIX_V4 --share --default | grep ' id ' | get_field 2)
+                SUBNETPOOL_V4_ID=$(openstack --os-cloud devstack-admin --os-region "$REGION_NAME" subnet pool create $SUBNETPOOL_NAME_V4 --default-prefix-length $SUBNETPOOL_SIZE_V4 --pool-prefix $SUBNETPOOL_PREFIX_V4 --share --default -f value -c id)
             fi
             if [[ "$IP_VERSION" =~ .*6 ]]; then
-                SUBNETPOOL_V6_ID=$(openstack --os-cloud devstack-admin --os-region "$REGION_NAME" subnet pool create $SUBNETPOOL_NAME --default-prefix-length $SUBNETPOOL_SIZE_V6 --pool-prefix $SUBNETPOOL_PREFIX_V6 --share --default | grep ' id ' | get_field 2)
+                SUBNETPOOL_V6_ID=$(openstack --os-cloud devstack-admin --os-region "$REGION_NAME" subnet pool create $SUBNETPOOL_NAME_V6 --default-prefix-length $SUBNETPOOL_SIZE_V6 --pool-prefix $SUBNETPOOL_PREFIX_V6 --share --default -f value -c id)
             fi
         fi
     fi
@@ -197,8 +198,8 @@
             if [ -z $SUBNETPOOL_V6_ID ]; then
                 fixed_range_v6=$IPV6_PROVIDER_FIXED_RANGE
             fi
-            SUBNET_V6_ID=$(openstack --os-cloud devstack-admin --os-region "$REGION_NAME" subnet create --project $project_id --ip-version 6 --ipv6-address-mode $IPV6_ADDRESS_MODE --gateway $IPV6_PROVIDER_NETWORK_GATEWAY $IPV6_PROVIDER_SUBNET_NAME ${SUBNETPOOL_V6_ID:+--subnet-pool $SUBNETPOOL_V6_ID} --network $NET_ID $fixed_range_v6 | grep 'id' | get_field 2)
-            die_if_not_set $LINENO SUBNET_V6_ID "Failure creating SUBNET_V6_ID for $IPV6_PROVIDER_SUBNET_NAME $project_id"
+            IPV6_SUBNET_ID=$(openstack --os-cloud devstack-admin --os-region "$REGION_NAME" subnet create --project $project_id --ip-version 6 --gateway $IPV6_PROVIDER_NETWORK_GATEWAY $IPV6_PROVIDER_SUBNET_NAME ${SUBNETPOOL_V6_ID:+--subnet-pool $SUBNETPOOL_V6_ID} --network $NET_ID --subnet-range $fixed_range_v6 | grep ' id ' | get_field 2)
+            die_if_not_set $LINENO IPV6_SUBNET_ID "Failure creating IPV6_SUBNET_ID for $IPV6_PROVIDER_SUBNET_NAME $project_id"
         fi
 
         if [[ $Q_AGENT == "openvswitch" ]]; then
diff --git a/lib/nova b/lib/nova
index 4c9f30f..1112f29 100644
--- a/lib/nova
+++ b/lib/nova
@@ -17,7 +17,6 @@
 #
 # - install_nova
 # - configure_nova
-# - _config_nova_apache_wsgi
 # - create_nova_conf
 # - init_nova
 # - start_nova
@@ -28,7 +27,6 @@
 _XTRACE_LIB_NOVA=$(set +o | grep xtrace)
 set +o xtrace
 
-
 # Defaults
 # --------
 
@@ -53,22 +51,34 @@
 NOVA_CONF_DIR=/etc/nova
 NOVA_CONF=$NOVA_CONF_DIR/nova.conf
 NOVA_CELLS_CONF=$NOVA_CONF_DIR/nova-cells.conf
+NOVA_COND_CONF=$NOVA_CONF_DIR/nova.conf
+NOVA_CPU_CONF=$NOVA_CONF_DIR/nova-cpu.conf
 NOVA_FAKE_CONF=$NOVA_CONF_DIR/nova-fake.conf
 NOVA_CELLS_DB=${NOVA_CELLS_DB:-nova_cell}
 NOVA_API_DB=${NOVA_API_DB:-nova_api}
+NOVA_UWSGI=$NOVA_BIN_DIR/nova-api-wsgi
+NOVA_METADATA_UWSGI=$NOVA_BIN_DIR/nova-metadata-wsgi
+NOVA_UWSGI_CONF=$NOVA_CONF_DIR/nova-api-uwsgi.ini
+NOVA_METADATA_UWSGI_CONF=$NOVA_CONF_DIR/nova-metadata-uwsgi.ini
+
+# The total number of cells we expect; cell0 is not counted. Must be at
+# least one.
+NOVA_NUM_CELLS=${NOVA_NUM_CELLS:-1}
+# Our cell index, so we know what rabbit vhost to connect to.
+# This should be in the range of 1-$NOVA_NUM_CELLS
+NOVA_CPU_CELL=${NOVA_CPU_CELL:-1}
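+# (For example, a hypothetical two-cell deployment would set
+# NOVA_NUM_CELLS=2 on all nodes and NOVA_CPU_CELL=2 on the computes that
+# belong to the second cell.)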
 
 NOVA_API_PASTE_INI=${NOVA_API_PASTE_INI:-$NOVA_CONF_DIR/api-paste.ini}
 
-if is_suse; then
-    NOVA_WSGI_DIR=${NOVA_WSGI_DIR:-/srv/www/htdocs/nova}
-else
-    NOVA_WSGI_DIR=${NOVA_WSGI_DIR:-/var/www/nova}
-fi
+# Toggle for deploying Nova-API under a wsgi server. We default to True
+# to use uwsgi, but allow False so that grenade runs can fall back to
+# the eventlet server.
+# NOTE(cdent): We can remove the eventlet-based api service after pike,
+# at which point we can stop using NOVA_USE_MOD_WSGI to mean "use uwsgi"
+# because we'll always be using uwsgi.
+NOVA_USE_MOD_WSGI=${NOVA_USE_MOD_WSGI:-True}
 
-# Toggle for deploying Nova-API under HTTPD + mod_wsgi
-NOVA_USE_MOD_WSGI=${NOVA_USE_MOD_WSGI:-False}
-
-if is_ssl_enabled_service "nova" || is_service_enabled tls-proxy; then
+if is_service_enabled tls-proxy; then
     NOVA_SERVICE_PROTOCOL="https"
 fi
 
@@ -91,7 +101,7 @@
 
 # The following FILTERS contains SameHostFilter and DifferentHostFilter with
 # the default filters.
-FILTERS="RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,SameHostFilter,DifferentHostFilter"
+FILTERS="RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,SameHostFilter,DifferentHostFilter"
 
 QEMU_CONF=/etc/libvirt/qemu.conf
 
@@ -175,6 +185,7 @@
 # Test if any Nova services are enabled
 # is_nova_enabled
 function is_nova_enabled {
+    [[ ,${DISABLED_SERVICES} =~ ,"nova" ]] && return 1
     [[ ,${ENABLED_SERVICES} =~ ,"n-" ]] && return 0
     return 1
 }
@@ -210,7 +221,10 @@
         instances=`sudo virsh list --all | grep $INSTANCE_NAME_PREFIX | sed "s/.*\($INSTANCE_NAME_PREFIX[0-9a-fA-F]*\).*/\1/g"`
         if [ ! "$instances" = "" ]; then
             echo $instances | xargs -n1 sudo virsh destroy || true
-            echo $instances | xargs -n1 sudo virsh undefine --managed-save || true
+            if ! xargs -n1 sudo virsh undefine --managed-save --nvram <<< $instances; then
+                # If undefine fails with the --nvram flag, retry without it
+                xargs -n1 sudo virsh undefine --managed-save <<< $instances
+            fi
         fi
 
         # Logout and delete iscsi sessions
@@ -235,71 +249,10 @@
     #    cleanup_nova_hypervisor
     #fi
 
-    if [ "$NOVA_USE_MOD_WSGI" == "True" ]; then
-        _cleanup_nova_apache_wsgi
-    fi
-}
-
-# _cleanup_nova_apache_wsgi() - Remove wsgi files, disable and remove apache vhost file
-function _cleanup_nova_apache_wsgi {
-    sudo rm -f $NOVA_WSGI_DIR/*
-    sudo rm -f $(apache_site_config_for nova-api)
-    sudo rm -f $(apache_site_config_for nova-metadata)
-}
-
-# _config_nova_apache_wsgi() - Set WSGI config files of Nova API
-function _config_nova_apache_wsgi {
-    sudo mkdir -p $NOVA_WSGI_DIR
-
-    local nova_apache_conf
-    nova_apache_conf=$(apache_site_config_for nova-api)
-    local nova_metadata_apache_conf
-    nova_metadata_apache_conf=$(apache_site_config_for nova-metadata)
-    local nova_ssl=""
-    local nova_certfile=""
-    local nova_keyfile=""
-    local nova_api_port=$NOVA_SERVICE_PORT
-    local nova_metadata_port=$METADATA_SERVICE_PORT
-    local venv_path=""
-
-    if is_ssl_enabled_service nova-api; then
-        nova_ssl="SSLEngine On"
-        nova_certfile="SSLCertificateFile $NOVA_SSL_CERT"
-        nova_keyfile="SSLCertificateKeyFile $NOVA_SSL_KEY"
-    fi
-    if [[ ${USE_VENV} = True ]]; then
-        venv_path="python-path=${PROJECT_VENV["nova"]}/lib/$(python_version)/site-packages"
-    fi
-
-    # copy proxy vhost and wsgi helper files
-    sudo cp $NOVA_DIR/nova/wsgi/nova-api.py $NOVA_WSGI_DIR/nova-api
-    sudo cp $NOVA_DIR/nova/wsgi/nova-metadata.py $NOVA_WSGI_DIR/nova-metadata
-
-    sudo cp $FILES/apache-nova-api.template $nova_apache_conf
-    sudo sed -e "
-        s|%PUBLICPORT%|$nova_api_port|g;
-        s|%APACHE_NAME%|$APACHE_NAME|g;
-        s|%PUBLICWSGI%|$NOVA_WSGI_DIR/nova-api|g;
-        s|%SSLENGINE%|$nova_ssl|g;
-        s|%SSLCERTFILE%|$nova_certfile|g;
-        s|%SSLKEYFILE%|$nova_keyfile|g;
-        s|%USER%|$STACK_USER|g;
-        s|%VIRTUALENV%|$venv_path|g
-        s|%APIWORKERS%|$API_WORKERS|g
-    " -i $nova_apache_conf
-
-    sudo cp $FILES/apache-nova-metadata.template $nova_metadata_apache_conf
-    sudo sed -e "
-        s|%PUBLICPORT%|$nova_metadata_port|g;
-        s|%APACHE_NAME%|$APACHE_NAME|g;
-        s|%PUBLICWSGI%|$NOVA_WSGI_DIR/nova-metadata|g;
-        s|%SSLENGINE%|$nova_ssl|g;
-        s|%SSLCERTFILE%|$nova_certfile|g;
-        s|%SSLKEYFILE%|$nova_keyfile|g;
-        s|%USER%|$STACK_USER|g;
-        s|%VIRTUALENV%|$venv_path|g
-        s|%APIWORKERS%|$API_WORKERS|g
-    " -i $nova_metadata_apache_conf
+    stop_process "n-api"
+    stop_process "n-api-meta"
+    remove_uwsgi_config "$NOVA_UWSGI_CONF" "$NOVA_UWSGI"
+    remove_uwsgi_config "$NOVA_METADATA_UWSGI_CONF" "$NOVA_METADATA_UWSGI"
 }
 
 # configure_nova() - Set config files, create data dirs, etc
@@ -458,8 +411,8 @@
     fi
     iniset $NOVA_CONF wsgi api_paste_config "$NOVA_API_PASTE_INI"
     iniset $NOVA_CONF DEFAULT rootwrap_config "$NOVA_CONF_DIR/rootwrap.conf"
-    iniset $NOVA_CONF DEFAULT scheduler_driver "$SCHEDULER"
-    iniset $NOVA_CONF DEFAULT scheduler_default_filters "$FILTERS"
+    iniset $NOVA_CONF scheduler driver "$SCHEDULER"
+    iniset $NOVA_CONF filter_scheduler enabled_filters "$FILTERS"
     iniset $NOVA_CONF DEFAULT default_floating_pool "$PUBLIC_NETWORK_NAME"
     if [[ $SERVICE_IP_VERSION == 6 ]]; then
         iniset $NOVA_CONF DEFAULT my_ip "$HOST_IPV6"
@@ -471,6 +424,8 @@
     iniset $NOVA_CONF DEFAULT osapi_compute_listen "$NOVA_SERVICE_LISTEN_ADDRESS"
     iniset $NOVA_CONF DEFAULT metadata_listen "$NOVA_SERVICE_LISTEN_ADDRESS"
 
+    iniset $NOVA_CONF key_manager api_class nova.keymgr.conf_key_mgr.ConfKeyManager
+
     if is_fedora || is_suse; then
         # nova defaults to /usr/local/bin, but fedora and suse pip like to
         # install things in /usr/bin
@@ -481,7 +436,19 @@
     # require them running on the host. The ensures that n-cpu doesn't
     # leak a need to use the db in a multinode scenario.
     if is_service_enabled n-api n-cond n-sched; then
-        iniset $NOVA_CONF database connection `database_connection_url nova`
+        # If we're in superconductor (multi-cell) mode, we want our control
+        # services pointing at cell0 instead of cell1 to ensure isolation.
+        # If not, we point everything at the main cell1 database as normal.
+        if [[ "$CELLSV2_SETUP" == "singleconductor" ]]; then
+            local db="nova_cell1"
+        else
+            local db="nova_cell0"
+            # When in superconductor mode, nova-compute can't send instance
+            # info updates to the scheduler, so just disable it.
+            iniset $NOVA_CONF filter_scheduler track_instance_changes False
+        fi
+
+        iniset $NOVA_CONF database connection `database_connection_url $db`
         iniset $NOVA_CONF api_database connection `database_connection_url nova_api`
     fi
 
@@ -491,7 +458,7 @@
             NOVA_ENABLED_APIS=$(echo $NOVA_ENABLED_APIS | sed "s/,metadata//")
         fi
         iniset $NOVA_CONF DEFAULT enabled_apis "$NOVA_ENABLED_APIS"
-        if is_service_enabled tls-proxy; then
+        if is_service_enabled tls-proxy && [ "$NOVA_USE_MOD_WSGI" == "False" ]; then
             # Set the service port for a proxy to take the original
             iniset $NOVA_CONF DEFAULT osapi_compute_listen_port "$NOVA_SERVICE_PORT_INT"
             iniset $NOVA_CONF DEFAULT osapi_compute_link_prefix $NOVA_SERVICE_PROTOCOL://$NOVA_SERVICE_HOST:$NOVA_SERVICE_PORT
@@ -501,7 +468,7 @@
     fi
 
     if is_service_enabled cinder; then
-        if is_ssl_enabled_service "cinder" || is_service_enabled tls-proxy; then
+        if is_service_enabled tls-proxy; then
             CINDER_SERVICE_HOST=${CINDER_SERVICE_HOST:-$SERVICE_HOST}
             CINDER_SERVICE_PORT=${CINDER_SERVICE_PORT:-8776}
             iniset $NOVA_CONF cinder cafile $SSL_BUNDLE_FILE
@@ -526,11 +493,10 @@
         iniset $NOVA_CONF DEFAULT force_config_drive "$FORCE_CONFIG_DRIVE"
     fi
     # Format logging
-    setup_logging $NOVA_CONF $NOVA_USE_MOD_WSGI
+    setup_logging $NOVA_CONF
 
-    if [ "$NOVA_USE_MOD_WSGI" == "True" ]; then
-        _config_nova_apache_wsgi
-    fi
+    write_uwsgi_config "$NOVA_UWSGI_CONF" "$NOVA_UWSGI" "/compute"
+    write_uwsgi_config "$NOVA_METADATA_UWSGI_CONF" "$NOVA_METADATA_UWSGI" "" ":${METADATA_SERVICE_PORT}"
 
     if is_service_enabled ceilometer; then
         iniset $NOVA_CONF DEFAULT instance_usage_audit "True"
@@ -576,8 +542,9 @@
     # Set the oslo messaging driver to the typical default. This does not
     # enable notifications, but it will allow them to function when enabled.
     iniset $NOVA_CONF oslo_messaging_notifications driver "messagingv2"
+    iniset $NOVA_CONF oslo_messaging_notifications transport_url $(get_notification_url)
     iniset_rpc_backend nova $NOVA_CONF
-    iniset $NOVA_CONF glance api_servers "${GLANCE_SERVICE_PROTOCOL}://${GLANCE_HOSTPORT}"
+    iniset $NOVA_CONF glance api_servers "$GLANCE_URL"
 
     iniset $NOVA_CONF DEFAULT osapi_compute_workers "$API_WORKERS"
     iniset $NOVA_CONF DEFAULT metadata_workers "$API_WORKERS"
@@ -586,18 +553,9 @@
 
     iniset $NOVA_CONF cinder os_region_name "$REGION_NAME"
 
-    if is_ssl_enabled_service glance || is_service_enabled tls-proxy; then
+    if is_service_enabled tls-proxy; then
         iniset $NOVA_CONF DEFAULT glance_protocol https
-    fi
-
-    # Register SSL certificates if provided
-    if is_ssl_enabled_service nova; then
-        ensure_certificates NOVA
-
-        iniset $NOVA_CONF DEFAULT ssl_cert_file "$NOVA_SSL_CERT"
-        iniset $NOVA_CONF DEFAULT ssl_key_file "$NOVA_SSL_KEY"
-
-        iniset $NOVA_CONF DEFAULT enabled_ssl_apis "$NOVA_ENABLED_APIS"
+        iniset $NOVA_CONF oslo_middleware enable_proxy_headers_parsing True
     fi
 
     if is_service_enabled n-sproxy; then
@@ -609,29 +567,48 @@
     # Setup logging for nova-dhcpbridge command line
     sudo cp "$NOVA_CONF" "$NOVA_CONF_DIR/nova-dhcpbridge.conf"
 
-    local service="n-dhcp"
-    local logfile="${service}.log.${CURRENT_LOG_TIME}"
-    local real_logfile="${LOGDIR}/${logfile}"
-    if [[ -n ${LOGDIR} ]]; then
-        bash -c "cd '$LOGDIR' && ln -sf '$logfile' ${service}.log"
-        iniset "$NOVA_CONF_DIR/nova-dhcpbridge.conf" DEFAULT log_file "$real_logfile"
-        if [[ -n ${SCREEN_LOGDIR} ]]; then
-            # Drop the backward-compat symlink
-            ln -sf "$real_logfile" ${SCREEN_LOGDIR}/screen-${service}.log
+    if is_service_enabled n-net; then
+        local service="n-dhcp"
+        local logfile="${service}.log.${CURRENT_LOG_TIME}"
+        local real_logfile="${LOGDIR}/${logfile}"
+        if [[ -n ${LOGDIR} ]]; then
+            bash -c "cd '$LOGDIR' && ln -sf '$logfile' ${service}.log"
+            iniset "$NOVA_CONF_DIR/nova-dhcpbridge.conf" DEFAULT log_file "$real_logfile"
         fi
-    fi
 
-    iniset $NOVA_CONF DEFAULT dhcpbridge_flagfile "$NOVA_CONF_DIR/nova-dhcpbridge.conf"
+        iniset $NOVA_CONF DEFAULT dhcpbridge_flagfile "$NOVA_CONF_DIR/nova-dhcpbridge.conf"
+    fi
 
     if [ "$NOVA_USE_SERVICE_TOKEN" == "True" ]; then
         init_nova_service_user_conf
     fi
+
+    if is_service_enabled n-cond; then
+        for i in $(seq 1 $NOVA_NUM_CELLS); do
+            local conf
+            local vhost
+            conf=$(conductor_conf $i)
+            vhost="nova_cell${i}"
+            iniset $conf database connection `database_connection_url nova_cell${i}`
+            iniset $conf conductor workers "$API_WORKERS"
+            iniset $conf DEFAULT debug "$ENABLE_DEBUG_LOG_LEVEL"
+            # If we have a singleconductor, we don't have per-cell message queue vhosts.
+            if [[ "${CELLSV2_SETUP}" == "singleconductor" ]]; then
+                iniset_rpc_backend nova $conf DEFAULT
+            else
+                rpc_backend_add_vhost $vhost
+                iniset_rpc_backend nova $conf DEFAULT $vhost
+            fi
+            # Format logging
+            setup_logging $conf
+        done
+    fi
 }
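A hypothetical nova_cell1.conf as produced by the n-cond loop above, with illustrative values; the [DEFAULT] transport_url is what iniset_rpc_backend writes for the per-cell vhost:

    [database]
    connection = mysql+pymysql://root:secret@127.0.0.1/nova_cell1?charset=utf8

    [conductor]
    workers = 4

    [DEFAULT]
    debug = True
    transport_url = rabbit://stackrabbit:secret@127.0.0.1:5672/nova_cell1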
 
 function init_nova_service_user_conf {
     iniset $NOVA_CONF service_user send_service_user_token True
     iniset $NOVA_CONF service_user auth_type password
-    iniset $NOVA_CONF service_user auth_url "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_AUTH_PORT"
+    iniset $NOVA_CONF service_user auth_url "$KEYSTONE_SERVICE_URI"
     iniset $NOVA_CONF service_user username nova
     iniset $NOVA_CONF service_user password "$SERVICE_PASSWORD"
     iniset $NOVA_CONF service_user user_domain_name "$SERVICE_DOMAIN_NAME"
@@ -640,6 +617,11 @@
     iniset $NOVA_CONF service_user auth_strategy keystone
 }
 
+function conductor_conf {
+    local cell="$1"
+    echo "${NOVA_CONF_DIR}/nova_cell${cell}.conf"
+}
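A quick usage sketch, assuming the stock /etc/nova configuration directory:

    $ conductor_conf 2
    /etc/nova/nova_cell2.conf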
+
 function init_nova_cells {
     if is_service_enabled n-cell; then
         cp $NOVA_CONF $NOVA_CELLS_CONF
@@ -661,7 +643,12 @@
             iniset $NOVA_CELLS_CONF DEFAULT enabled_apis metadata
         fi
 
+        # The cells v1 conductor should use nova-cells.conf
+        NOVA_COND_CONF=$NOVA_CELLS_CONF
+
+        time_start "dbsync"
         $NOVA_BIN_DIR/nova-manage --config-file $NOVA_CELLS_CONF db sync
+        time_stop "dbsync"
         $NOVA_BIN_DIR/nova-manage --config-file $NOVA_CELLS_CONF cell create --name=region --cell_type=parent --username=$RABBIT_USERID --hostname=$RABBIT_HOST --port=5672 --password=$RABBIT_PASSWORD --virtual_host=/ --woffset=0 --wscale=1
         $NOVA_BIN_DIR/nova-manage cell create --name=child --cell_type=child --username=$RABBIT_USERID --hostname=$RABBIT_HOST --port=5672 --password=$RABBIT_PASSWORD --virtual_host=child_cell --woffset=0 --wscale=1
 
@@ -704,8 +691,6 @@
         recreate_database $NOVA_API_DB
         $NOVA_BIN_DIR/nova-manage --config-file $NOVA_CONF api_db sync
 
-        # (Re)create nova databases
-        recreate_database nova
         recreate_database nova_cell0
 
         # map_cell0 will create the cell mapping record in the nova_api DB so
@@ -714,6 +699,12 @@
         # and nova_cell0 databases.
         nova-manage cell_v2 map_cell0 --database_connection `database_connection_url nova_cell0`
 
+        # (Re)create nova databases
+        for i in $(seq 1 $NOVA_NUM_CELLS); do
+            recreate_database nova_cell${i}
+            $NOVA_BIN_DIR/nova-manage --config-file $(conductor_conf $i) db sync
+        done
+
         # Migrate nova and nova_cell0 databases.
         $NOVA_BIN_DIR/nova-manage --config-file $NOVA_CONF db sync
 
@@ -726,8 +717,9 @@
         $NOVA_BIN_DIR/nova-manage --config-file $NOVA_CONF db online_data_migrations
 
         # create the cell1 cell for the main nova db where the hosts live
-        nova-manage cell_v2 create_cell --transport-url $(get_transport_url) \
-            --name 'cell1'
+        for i in $(seq 1 $NOVA_NUM_CELLS); do
+            nova-manage --config-file $NOVA_CONF --config-file $(conductor_conf $i) cell_v2 create_cell --name "cell$i"
+        done
     fi
 
     create_nova_cache_dir
@@ -787,13 +779,6 @@
     git_clone $NOVA_REPO $NOVA_DIR $NOVA_BRANCH
     setup_develop $NOVA_DIR
     sudo install -D -m 0644 -o $STACK_USER {$NOVA_DIR/tools/,/etc/bash_completion.d/}nova-manage.bash_completion
-
-    if [ "$NOVA_USE_MOD_WSGI" == "True" ]; then
-        install_apache_wsgi
-        if is_ssl_enabled_service "nova-api"; then
-            enable_mod_ssl
-        fi
-    fi
 }
 
 # start_nova_api() - Start the API process ahead of other things
@@ -801,6 +786,7 @@
     # Get right service port for testing
     local service_port=$NOVA_SERVICE_PORT
     local service_protocol=$NOVA_SERVICE_PROTOCOL
+    local nova_url
     if is_service_enabled tls-proxy; then
         service_port=$NOVA_SERVICE_PORT_INT
         service_protocol="http"
@@ -810,32 +796,36 @@
     local old_path=$PATH
     export PATH=$NOVA_BIN_DIR:$PATH
 
-    # If the site is not enabled then we are in a grenade scenario
-    local enabled_site_file
-    enabled_site_file=$(apache_site_config_for nova-api)
-    if [ -f ${enabled_site_file} ] && [ "$NOVA_USE_MOD_WSGI" == "True" ]; then
-        enable_apache_site nova-api
-        enable_apache_site nova-metadata
-        restart_apache_server
-        tail_log nova-api /var/log/$APACHE_NAME/nova-api.log
-        tail_log nova-metadata /var/log/$APACHE_NAME/nova-metadata.log
-    else
+    if [ "$NOVA_USE_MOD_WSGI" == "False" ]; then
         run_process n-api "$NOVA_BIN_DIR/nova-api"
+        nova_url=$service_protocol://$SERVICE_HOST:$service_port
+        # Start the tls proxy if enabled
+        if is_service_enabled tls-proxy; then
+            start_tls_proxy nova '*' $NOVA_SERVICE_PORT $NOVA_SERVICE_HOST $NOVA_SERVICE_PORT_INT
+        fi
+    else
+        run_process "n-api" "$NOVA_BIN_DIR/uwsgi --procname-prefix nova-api --ini $NOVA_UWSGI_CONF"
+        nova_url=$service_protocol://$SERVICE_HOST/compute/v2.1/
     fi
 
     echo "Waiting for nova-api to start..."
-    if ! wait_for_service $SERVICE_TIMEOUT $service_protocol://$SERVICE_HOST:$service_port; then
+    if ! wait_for_service $SERVICE_TIMEOUT $nova_url; then
         die $LINENO "nova-api did not start"
     fi
 
-    # Start proxies if enabled
-    if is_service_enabled tls-proxy; then
-        start_tls_proxy nova '*' $NOVA_SERVICE_PORT $NOVA_SERVICE_HOST $NOVA_SERVICE_PORT_INT
-    fi
-
     export PATH=$old_path
 }
 
+# Detect and set up the conditions under which a singleconductor setup
+# is needed, notably cells v1.
+function _set_singleconductor {
+    # NOTE(danms): Don't setup conductor fleet for cellsv1
+    if is_service_enabled n-cell; then
+        CELLSV2_SETUP="singleconductor"
+    fi
+}
+
+
 # start_nova_compute() - Start the compute process
 function start_nova_compute {
     # Hack to set the path for rootwrap
@@ -848,15 +838,31 @@
         local compute_cell_conf=$NOVA_CONF
     fi
 
+    if [[ "${CELLSV2_SETUP}" == "singleconductor" ]]; then
+        # NOTE(danms): Grenade doesn't set up multi-cell rabbit, so
+        # skip these bits and use the normal config.
+        NOVA_CPU_CONF=$compute_cell_conf
+        echo "Skipping multi-cell conductor fleet setup"
+    else
+        # "${CELLSV2_SETUP}" is "superconductor"
+        cp $compute_cell_conf $NOVA_CPU_CONF
+        # FIXME(danms): Should this be configurable?
+        iniset $NOVA_CPU_CONF workarounds disable_group_policy_check_upcall True
+        # Since the nova-compute service cannot reach nova-scheduler over
+        # RPC, we also disable track_instance_changes.
+        iniset $NOVA_CPU_CONF filter_scheduler track_instance_changes False
+        iniset_rpc_backend nova $NOVA_CPU_CONF DEFAULT "nova_cell${NOVA_CPU_CELL}"
+    fi
+
     if [[ "$VIRT_DRIVER" = 'libvirt' ]]; then
         # The group **$LIBVIRT_GROUP** is added to the current user in this script.
         # ``sg`` is used in run_process to execute nova-compute as a member of the
         # **$LIBVIRT_GROUP** group.
-        run_process n-cpu "$NOVA_BIN_DIR/nova-compute --config-file $compute_cell_conf" $LIBVIRT_GROUP
+        run_process n-cpu "$NOVA_BIN_DIR/nova-compute --config-file $NOVA_CPU_CONF" $LIBVIRT_GROUP
     elif [[ "$VIRT_DRIVER" = 'lxd' ]]; then
-        run_process n-cpu "$NOVA_BIN_DIR/nova-compute --config-file $compute_cell_conf" $LXD_GROUP
+        run_process n-cpu "$NOVA_BIN_DIR/nova-compute --config-file $NOVA_CPU_CONF" $LXD_GROUP
     elif [[ "$VIRT_DRIVER" = 'docker' || "$VIRT_DRIVER" = 'zun' ]]; then
-        run_process n-cpu "$NOVA_BIN_DIR/nova-compute --config-file $compute_cell_conf" $DOCKER_GROUP
+        run_process n-cpu "$NOVA_BIN_DIR/nova-compute --config-file $NOVA_CPU_CONF" $DOCKER_GROUP
     elif [[ "$VIRT_DRIVER" = 'fake' ]]; then
         local i
         for i in `seq 1 $NUMBER_FAKE_NOVA_COMPUTE`; do
@@ -865,19 +871,19 @@
             # gets its own configuration and own log file.
             local fake_conf="${NOVA_FAKE_CONF}-${i}"
             iniset $fake_conf DEFAULT nhost "${HOSTNAME}${i}"
-            run_process "n-cpu-${i}" "$NOVA_BIN_DIR/nova-compute --config-file $compute_cell_conf --config-file $fake_conf"
+            run_process "n-cpu-${i}" "$NOVA_BIN_DIR/nova-compute --config-file $NOVA_CPU_CONF --config-file $fake_conf"
         done
     else
         if is_service_enabled n-cpu && [[ -r $NOVA_PLUGINS/hypervisor-$VIRT_DRIVER ]]; then
             start_nova_hypervisor
         fi
-        run_process n-cpu "$NOVA_BIN_DIR/nova-compute --config-file $compute_cell_conf"
+        run_process n-cpu "$NOVA_BIN_DIR/nova-compute --config-file $NOVA_CPU_CONF"
     fi
 
     export PATH=$old_path
 }
 
-# start_nova() - Start running processes, including screen
+# start_nova() - Start running processes
 function start_nova_rest {
     # Hack to set the path for rootwrap
     local old_path=$PATH
@@ -891,10 +897,8 @@
     fi
 
     # ``run_process`` checks ``is_service_enabled``, it is not needed here
-    run_process n-cond "$NOVA_BIN_DIR/nova-conductor --config-file $compute_cell_conf"
     run_process n-cell-region "$NOVA_BIN_DIR/nova-cells --config-file $api_cell_conf"
     run_process n-cell-child "$NOVA_BIN_DIR/nova-cells --config-file $compute_cell_conf"
-    run_process n-crt "$NOVA_BIN_DIR/nova-cert --config-file $api_cell_conf"
 
     if is_service_enabled n-net; then
         if ! running_in_container; then
@@ -904,7 +908,11 @@
     run_process n-net "$NOVA_BIN_DIR/nova-network --config-file $compute_cell_conf"
 
     run_process n-sch "$NOVA_BIN_DIR/nova-scheduler --config-file $compute_cell_conf"
-    run_process n-api-meta "$NOVA_BIN_DIR/nova-api-metadata --config-file $compute_cell_conf"
+    if [ "$NOVA_USE_MOD_WSGI" == "False" ]; then
+        run_process n-api-meta "$NOVA_BIN_DIR/nova-api-metadata --config-file $compute_cell_conf"
+    else
+        run_process n-api-meta "$NOVA_BIN_DIR/uwsgi --procname-prefix nova-api-meta --ini $NOVA_METADATA_UWSGI_CONF"
+    fi
 
     run_process n-novnc "$NOVA_BIN_DIR/nova-novncproxy --config-file $api_cell_conf --web $NOVNC_WEB_DIR"
     run_process n-xvnc "$NOVA_BIN_DIR/nova-xvpvncproxy --config-file $api_cell_conf"
@@ -915,9 +923,68 @@
     export PATH=$old_path
 }
 
+function enable_nova_fleet {
+    if is_service_enabled n-cond; then
+        enable_service n-super-cond
+        for i in $(seq 1 $NOVA_NUM_CELLS); do
+            enable_service n-cond-cell${i}
+        done
+    fi
+}
+
+function start_nova_conductor {
+    if [[ "${CELLSV2_SETUP}" == "singleconductor" ]]; then
+        echo "Starting nova-conductor in a cellsv1-compatible way"
+        run_process n-cond "$NOVA_BIN_DIR/nova-conductor --config-file $NOVA_COND_CONF"
+        return
+    fi
+
+    enable_nova_fleet
+    if is_service_enabled n-super-cond; then
+        run_process n-super-cond "$NOVA_BIN_DIR/nova-conductor --config-file $NOVA_COND_CONF"
+    fi
+    for i in $(seq 1 $NOVA_NUM_CELLS); do
+        if is_service_enabled n-cond-cell${i}; then
+            local conf
+            conf=$(conductor_conf $i)
+            run_process n-cond-cell${i} "$NOVA_BIN_DIR/nova-conductor --config-file $conf"
+        fi
+    done
+}
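Assuming NOVA_NUM_CELLS=2 and NOVA_COND_CONF left at the main nova.conf, the resulting conductor fleet would look roughly like:

    n-super-cond   nova-conductor --config-file /etc/nova/nova.conf
    n-cond-cell1   nova-conductor --config-file /etc/nova/nova_cell1.conf
    n-cond-cell2   nova-conductor --config-file /etc/nova/nova_cell2.conf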
+
+function is_nova_ready {
+    # NOTE(sdague): with cells v2 all the compute services must be up
+    # and checked into the database before discover_hosts is run. This
+    # happens in all-in-one installs by accident, because more than 30
+    # seconds pass between here and the end of the script. However, in
+    # multinode tests this is very often not the case, so ensure that the
+    # compute is up before we move on.
+    if is_service_enabled n-cell; then
+        # cells v1 can't complete the check below because it munges
+        # hostnames with cell information (grumble grumble).
+        return
+    fi
+    # TODO(sdague): honestly, this probably should be a plug point for
+    # an external system.
+    if [[ "$VIRT_DRIVER" == 'xenserver' ]]; then
+        # xenserver encodes information in the hostname of the compute
+        # because of the dom0/domU split. Just ignore for now.
+        return
+    fi
+    wait_for_compute 60
+}
+
 function start_nova {
+    # this catches the cells v1 case early
+    _set_singleconductor
     start_nova_rest
+    start_nova_conductor
     start_nova_compute
+    if is_service_enabled n-api; then
+        # dump the cell mapping to ensure life is good
+        echo "Dumping cells_v2 mapping"
+        nova-manage cell_v2 list_cells --verbose
+    fi
 }
 
 function stop_nova_compute {
@@ -935,24 +1002,30 @@
 }
 
 function stop_nova_rest {
-    if [ "$NOVA_USE_MOD_WSGI" == "True" ]; then
-        disable_apache_site nova-api
-        disable_apache_site nova-metadata
-        restart_apache_server
-    else
-        stop_process n-api
-    fi
-    # Kill the nova screen windows
-    # Some services are listed here twice since more than one instance
-    # of a service may be running in certain configs.
-    for serv in n-api n-crt n-net n-sch n-novnc n-xvnc n-cauth n-spice n-cond n-cell n-cell n-api-meta n-sproxy; do
+    # Kill the non-compute nova processes
+    for serv in n-api n-api-meta n-net n-sch n-novnc n-xvnc n-cauth n-spice n-cell n-cell n-sproxy; do
         stop_process $serv
     done
 }
 
-# stop_nova() - Stop running processes (non-screen)
+function stop_nova_conductor {
+    if [[ "${CELLSV2_SETUP}" == "singleconductor" ]]; then
+        stop_process n-cond
+        return
+    fi
+
+    enable_nova_fleet
+    for srv in n-super-cond $(seq -f n-cond-cell%0.f 1 $NOVA_NUM_CELLS); do
+        if is_service_enabled $srv; then
+            stop_process $srv
+        fi
+    done
+}
+
+# stop_nova() - Stop running processes
 function stop_nova {
     stop_nova_rest
+    stop_nova_conductor
     stop_nova_compute
 }
 
diff --git a/lib/nova_plugins/functions-libvirt b/lib/nova_plugins/functions-libvirt
index 56bb6bd..8d74c77 100644
--- a/lib/nova_plugins/functions-libvirt
+++ b/lib/nova_plugins/functions-libvirt
@@ -20,42 +20,77 @@
 # extremely verbose.)
 DEBUG_LIBVIRT=$(trueorfalse True DEBUG_LIBVIRT)
 
+# Try to enable coredumps for libvirt
+# Currently fairly specific to OpenStackCI hosts
+DEBUG_LIBVIRT_COREDUMPS=$(trueorfalse False DEBUG_LIBVIRT_COREDUMPS)
+
+# Only Xenial is left with libvirt-bin.  Everywhere else is libvirtd
+if is_ubuntu && [ ! -f /etc/init.d/libvirtd ]; then
+    LIBVIRT_DAEMON=libvirt-bin
+else
+    LIBVIRT_DAEMON=libvirtd
+fi
+
+# Enable coredumps for libvirt
+#  Bug: https://bugs.launchpad.net/nova/+bug/1643911
+function _enable_coredump {
+    local confdir=/etc/systemd/system/${LIBVIRT_DAEMON}.service.d
+    local conffile=${confdir}/coredump.conf
+
+    # Create a coredump directory, and instruct the kernel to write
+    # cores there
+    sudo mkdir -p /var/core
+    sudo chmod a+wrx /var/core
+    echo '/var/core/core.%e.%p.%h.%t' | \
+        sudo tee /proc/sys/kernel/core_pattern
+
+    # Drop a config file to up the core ulimit
+    sudo mkdir -p ${confdir}
+    sudo tee ${conffile} <<EOF
+[Service]
+LimitCORE=infinity
+EOF
+
+    # Tell systemd to reload the unit (service restarts later after
+    # config anyway)
+    sudo systemctl daemon-reload
+}
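To verify the drop-in took effect after a run with DEBUG_LIBVIRT_COREDUMPS=True, something like the following should work:

    cat /proc/sys/kernel/core_pattern
    # expected: /var/core/core.%e.%p.%h.%t
    systemctl show $LIBVIRT_DAEMON --property=LimitCORE
    # expected: LimitCORE=infinity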
+
+
 # Installs required distro-specific libvirt packages.
 function install_libvirt {
+
     if is_ubuntu; then
         install_package qemu-system
-        install_package libvirt-bin libvirt-dev
-        pip_install_gr libvirt-python
-        if [[ ${DISTRO} == "trusty" && ${EBTABLES_RACE_FIX} == "True" ]]; then
-            # Work around for bug #1501558. We can remove this once we
-            # get to a version of Ubuntu that has new enough libvirt.
-            TOP_DIR=$TOP_DIR $TOP_DIR/tools/install_ebtables_workaround.sh
+        if [[ ${DISTRO} == "xenial" ]]; then
+            install_package libvirt-bin libvirt-dev
+        else
+            install_package libvirt-clients libvirt-daemon-system libvirt-dev
         fi
+        # uninstall in case the libvirt version changed
+        pip_uninstall libvirt-python
+        pip_install_gr libvirt-python
         #pip_install_gr <there-is-no-guestfs-in-pypi>
     elif is_fedora || is_suse; then
         # On "KVM for IBM z Systems", kvm does not have its own package
-        if [[ ! ${DISTRO} =~ "kvmibm1" && ! ${DISTRO} =~ "rhel7" ]]; then
-            install_package kvm
-        fi
-
-        if [[ ${DISTRO} =~ "rhel7" ]]; then
-            # This should install the latest qemu-kvm build,
-            # which is called qemu-kvm-ev in centos7
-            # (as the default OS qemu-kvm package is usually rather old,
-            # and should be updated by above)
+        if [[ ! ${DISTRO} =~ "kvmibm1" ]]; then
             install_package qemu-kvm
         fi
 
         install_package libvirt libvirt-devel
+        pip_uninstall libvirt-python
         pip_install_gr libvirt-python
+    fi
 
+    if [[ $DEBUG_LIBVIRT_COREDUMPS == True ]]; then
+        _enable_coredump
     fi
 }
 
 # Configures the installed libvirt system so that is accessible by
 # STACK_USER via qemu:///system with management capabilities.
 function configure_libvirt {
-    if is_service_enabled neutron && is_neutron_ovs_base_plugin && ! sudo grep -q '^cgroup_device_acl' $QEMU_CONF; then
+    if is_service_enabled neutron && ! sudo grep -q '^cgroup_device_acl' $QEMU_CONF; then
         # Add /dev/net/tun to cgroup_device_acls, needed for type=ethernet interfaces
         cat <<EOF | sudo tee -a $QEMU_CONF
 cgroup_device_acl = [
@@ -68,14 +103,6 @@
 EOF
     fi
 
-    # Since the release of Debian Wheezy the libvirt init script is libvirtd
-    # and not libvirtd-bin anymore.
-    if is_ubuntu && [ ! -f /etc/init.d/libvirtd ]; then
-        LIBVIRT_DAEMON=libvirt-bin
-    else
-        LIBVIRT_DAEMON=libvirtd
-    fi
-
     if is_fedora || is_suse; then
         # Starting with fedora 18 and opensuse-12.3 enable stack-user to
         # virsh -c qemu:///system by creating a policy-kit rule for
diff --git a/lib/nova_plugins/hypervisor-fake b/lib/nova_plugins/hypervisor-fake
index f9b95c1..49c8dee 100644
--- a/lib/nova_plugins/hypervisor-fake
+++ b/lib/nova_plugins/hypervisor-fake
@@ -49,7 +49,7 @@
     iniset $NOVA_CONF DEFAULT quota_security_groups -1
     iniset $NOVA_CONF DEFAULT quota_security_group_rules -1
     iniset $NOVA_CONF DEFAULT quota_key_pairs -1
-    iniset $NOVA_CONF DEFAULT scheduler_default_filters "RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,RamFilter,DiskFilter"
+    iniset $NOVA_CONF filter_scheduler enabled_filters "RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,RamFilter,DiskFilter"
 }
 
 # install_nova_hypervisor() - Install external components
diff --git a/lib/nova_plugins/hypervisor-ironic b/lib/nova_plugins/hypervisor-ironic
index 7ffd14d..034e403 100644
--- a/lib/nova_plugins/hypervisor-ironic
+++ b/lib/nova_plugins/hypervisor-ironic
@@ -42,13 +42,19 @@
     iniset $NOVA_CONF DEFAULT compute_driver ironic.IronicDriver
     iniset $NOVA_CONF DEFAULT firewall_driver $LIBVIRT_FIREWALL_DRIVER
     iniset $NOVA_CONF DEFAULT scheduler_host_manager ironic_host_manager
-    iniset $NOVA_CONF DEFAULT ram_allocation_ratio 1.0
-    iniset $NOVA_CONF DEFAULT reserved_host_memory_mb 0
+
+    if [[ "$IRONIC_USE_RESOURCE_CLASSES" == "False" ]]; then
+        iniset $NOVA_CONF filter_scheduler use_baremetal_filters True
+        iniset $NOVA_CONF filter_scheduler host_subset_size 999
+        iniset $NOVA_CONF DEFAULT ram_allocation_ratio 1.0
+        iniset $NOVA_CONF DEFAULT reserved_host_memory_mb 0
+    fi
+
     # ironic section
     iniset $NOVA_CONF ironic auth_type password
     iniset $NOVA_CONF ironic username admin
     iniset $NOVA_CONF ironic password $ADMIN_PASSWORD
-    iniset $NOVA_CONF ironic auth_url $KEYSTONE_AUTH_URI/v3
+    iniset $NOVA_CONF ironic auth_url $KEYSTONE_AUTH_URI
     iniset $NOVA_CONF ironic project_domain_id default
     iniset $NOVA_CONF ironic user_domain_id default
     iniset $NOVA_CONF ironic project_name demo
diff --git a/lib/nova_plugins/hypervisor-libvirt b/lib/nova_plugins/hypervisor-libvirt
index f3c8add..0c08a0f 100644
--- a/lib/nova_plugins/hypervisor-libvirt
+++ b/lib/nova_plugins/hypervisor-libvirt
@@ -115,7 +115,10 @@
                     sudo dpkg-statoverride --add --update $STAT_OVERRIDE
                 fi
             done
-        elif is_fedora || is_suse; then
+        elif is_suse; then
+            # Workaround for missing dependencies in python-libguestfs
+            install_package python-libguestfs guestfs-data augeas augeas-lenses
+        elif is_fedora; then
             install_package python-libguestfs
         fi
     fi
diff --git a/lib/nova_plugins/hypervisor-xenserver b/lib/nova_plugins/hypervisor-xenserver
index 0046a36..6f79e4f 100644
--- a/lib/nova_plugins/hypervisor-xenserver
+++ b/lib/nova_plugins/hypervisor-xenserver
@@ -26,10 +26,6 @@
 
 # Allow ``build_domU.sh`` to specify the flat network bridge via kernel args
 FLAT_NETWORK_BRIDGE_DEFAULT=$(sed -e 's/.* flat_network_bridge=\([[:alnum:]]*\).*$/\1/g' /proc/cmdline)
-if is_service_enabled neutron; then
-    XEN_INTEGRATION_BRIDGE_DEFAULT=$(sed -e 's/.* xen_integration_bridge=\([[:alnum:]]*\).*$/\1/g' /proc/cmdline)
-    XEN_INTEGRATION_BRIDGE=${XEN_INTEGRATION_BRIDGE:-$XEN_INTEGRATION_BRIDGE_DEFAULT}
-fi
 
 VNCSERVER_PROXYCLIENT_ADDRESS=${VNCSERVER_PROXYCLIENT_ADDRESS=169.254.0.1}
 
@@ -88,28 +84,6 @@
 * * * * * /root/rotate_xen_guest_logs.sh >/dev/null 2>&1
 CRONTAB
 
-    # Create directories for kernels and images
-    {
-        echo "set -eux"
-        cat $TOP_DIR/tools/xen/functions
-        echo "create_directory_for_images"
-        echo "create_directory_for_kernels"
-        echo "install_conntrack_tools"
-    } | $ssh_dom0
-
-    if is_service_enabled neutron; then
-        # Remove restriction on linux bridge in Dom0 when neutron is enabled
-        $ssh_dom0 "rm -f /etc/modprobe.d/blacklist-bridge*"
-
-        count=`$ssh_dom0 "iptables -t filter -L XenServerDevstack |wc -l"`
-        if [ "$count" = "0" ]; then
-        {
-            echo "iptables -t filter --new XenServerDevstack"
-            echo "iptables -t filter -I INPUT -j XenServerDevstack"
-            echo "iptables -t filter -I XenServerDevstack -p tcp --dport 6640 -j ACCEPT"
-        } | $ssh_dom0
-        fi
-    fi
 }
 
 # install_nova_hypervisor() - Install external components
diff --git a/lib/os_brick b/lib/os_brick
deleted file mode 100644
index d1cca4a..0000000
--- a/lib/os_brick
+++ /dev/null
@@ -1,32 +0,0 @@
-#!/bin/bash
-#
-# lib/os_brick
-# Install **os-brick** python module from source
-
-# Dependencies:
-#
-# - functions
-# - DEST, DATA_DIR must be defined
-
-# stack.sh
-# ---------
-# - install_os_brick
-
-# Save trace setting
-_XTRACE_OS_BRICK=$(set +o | grep xtrace)
-set +o xtrace
-
-
-GITDIR["os-brick"]=$DEST/os-brick
-
-# Install os_brick from git only if requested, otherwise it will be pulled from
-# pip repositories by requirements of projects that need it.
-function install_os_brick {
-    if use_library_from_git "os-brick"; then
-        git_clone_by_name "os-brick"
-        setup_dev_lib "os-brick"
-    fi
-}
-
-# Restore xtrace
-$_XTRACE_OS_BRICK
\ No newline at end of file
diff --git a/lib/oslo b/lib/oslo
index e34e48a..3ae64c8 100644
--- a/lib/oslo
+++ b/lib/oslo
@@ -6,104 +6,6 @@
 #
 # We need this to handle the fact that projects would like to use
 # pre-released versions of oslo libraries.
-
-# Dependencies:
 #
-# - ``functions`` file
-
-# ``stack.sh`` calls the entry points in this order:
-#
-# - install_oslo
-
-# Save trace setting
-_XTRACE_LIB_OSLO=$(set +o | grep xtrace)
-set +o xtrace
-
-
-# Defaults
-# --------
-GITDIR["automaton"]=$DEST/automaton
-GITDIR["cliff"]=$DEST/cliff
-GITDIR["debtcollector"]=$DEST/debtcollector
-GITDIR["futurist"]=$DEST/futurist
-GITDIR["os-client-config"]=$DEST/os-client-config
-GITDIR["osc-lib"]=$DEST/osc-lib
-GITDIR["oslo.cache"]=$DEST/oslo.cache
-GITDIR["oslo.concurrency"]=$DEST/oslo.concurrency
-GITDIR["oslo.config"]=$DEST/oslo.config
-GITDIR["oslo.context"]=$DEST/oslo.context
-GITDIR["oslo.db"]=$DEST/oslo.db
-GITDIR["oslo.i18n"]=$DEST/oslo.i18n
-GITDIR["oslo.log"]=$DEST/oslo.log
-GITDIR["oslo.messaging"]=$DEST/oslo.messaging
-GITDIR["oslo.middleware"]=$DEST/oslo.middleware
-GITDIR["oslo.policy"]=$DEST/oslo.policy
-GITDIR["oslo.privsep"]=$DEST/oslo.privsep
-GITDIR["oslo.reports"]=$DEST/oslo.reports
-GITDIR["oslo.rootwrap"]=$DEST/oslo.rootwrap
-GITDIR["oslo.serialization"]=$DEST/oslo.serialization
-GITDIR["oslo.service"]=$DEST/oslo.service
-GITDIR["oslo.utils"]=$DEST/oslo.utils
-GITDIR["oslo.versionedobjects"]=$DEST/oslo.versionedobjects
-GITDIR["oslo.vmware"]=$DEST/oslo.vmware
-GITDIR["osprofiler"]=$DEST/osprofiler
-GITDIR["pycadf"]=$DEST/pycadf
-GITDIR["stevedore"]=$DEST/stevedore
-GITDIR["taskflow"]=$DEST/taskflow
-GITDIR["tooz"]=$DEST/tooz
-
-# Support entry points installation of console scripts
-OSLO_BIN_DIR=$(get_python_exec_prefix)
-
-
-# Functions
-# ---------
-
-function _do_install_oslo_lib {
-    local name=$1
-    if use_library_from_git "$name"; then
-        git_clone_by_name "$name"
-        setup_dev_lib "$name"
-    fi
-}
-
-# install_oslo() - Collect source and prepare
-function install_oslo {
-    _do_install_oslo_lib "automaton"
-    _do_install_oslo_lib "cliff"
-    _do_install_oslo_lib "debtcollector"
-    _do_install_oslo_lib "futurist"
-    _do_install_oslo_lib "osc-lib"
-    _do_install_oslo_lib "os-client-config"
-    _do_install_oslo_lib "oslo.cache"
-    _do_install_oslo_lib "oslo.concurrency"
-    _do_install_oslo_lib "oslo.config"
-    _do_install_oslo_lib "oslo.context"
-    _do_install_oslo_lib "oslo.db"
-    _do_install_oslo_lib "oslo.i18n"
-    _do_install_oslo_lib "oslo.log"
-    _do_install_oslo_lib "oslo.messaging"
-    _do_install_oslo_lib "oslo.middleware"
-    _do_install_oslo_lib "oslo.policy"
-    _do_install_oslo_lib "oslo.privsep"
-    _do_install_oslo_lib "oslo.reports"
-    _do_install_oslo_lib "oslo.rootwrap"
-    _do_install_oslo_lib "oslo.serialization"
-    _do_install_oslo_lib "oslo.service"
-    _do_install_oslo_lib "oslo.utils"
-    _do_install_oslo_lib "oslo.versionedobjects"
-    _do_install_oslo_lib "oslo.vmware"
-    _do_install_oslo_lib "osprofiler"
-    _do_install_oslo_lib "pycadf"
-    _do_install_oslo_lib "stevedore"
-    _do_install_oslo_lib "taskflow"
-    _do_install_oslo_lib "tooz"
-}
-
-# Restore xtrace
-$_XTRACE_LIB_OSLO
-
-# Tell emacs to use shell-script-mode
-## Local variables:
-## mode: shell-script
-## End:
+# Included for compatibility with grenade, remove in Queens
+source $TOP_DIR/lib/libraries
diff --git a/lib/placement b/lib/placement
index e7ffe33..d3fb8c8 100644
--- a/lib/placement
+++ b/lib/placement
@@ -32,7 +32,15 @@
 PLACEMENT_CONF_DIR=/etc/nova
 PLACEMENT_CONF=$PLACEMENT_CONF_DIR/nova.conf
 PLACEMENT_AUTH_STRATEGY=${PLACEMENT_AUTH_STRATEGY:-placement}
-
+# Nova virtual environment
+if [[ ${USE_VENV} = True ]]; then
+    PROJECT_VENV["nova"]=${NOVA_DIR}.venv
+    PLACEMENT_BIN_DIR=${PROJECT_VENV["nova"]}/bin
+else
+    PLACEMENT_BIN_DIR=$(get_python_exec_prefix)
+fi
+PLACEMENT_UWSGI=$PLACEMENT_BIN_DIR/nova-placement-api
+PLACEMENT_UWSGI_CONF=$PLACEMENT_CONF_DIR/placement-uwsgi.ini
 
 # The placement service can optionally use a separate database
 # connection. Set PLACEMENT_DB_ENABLED to True to use it.
@@ -40,7 +48,7 @@
 # yet merged in nova but is coming soon.
 PLACEMENT_DB_ENABLED=$(trueorfalse False PLACEMENT_DB_ENABLED)
 
-if is_ssl_enabled_service "placement-api" || is_service_enabled tls-proxy; then
+if is_service_enabled tls-proxy; then
     PLACEMENT_SERVICE_PROTOCOL="https"
 fi
 
@@ -61,6 +69,7 @@
 # cleanup_placement() - Remove residual data files, anything left over from previous
 # runs that a clean run would need to clean up
 function cleanup_placement {
+    sudo rm -f $(apache_site_config_for nova-placement-api)
     sudo rm -f $(apache_site_config_for placement-api)
 }
 
@@ -72,12 +81,6 @@
     nova_bin_dir=$(get_python_exec_prefix)
     placement_api_apache_conf=$(apache_site_config_for placement-api)
 
-    # reuse nova's cert if a cert is being used
-    if is_ssl_enabled_service "placement-api"; then
-        placement_ssl="SSLEngine On"
-        placement_certfile="SSLCertificateFile $NOVA_SSL_CERT"
-        placement_keyfile="SSLCertificateKeyFile $NOVA_SSL_KEY"
-    fi
     # reuse nova's venv if there is one as placement code lives
     # there
     if [[ ${USE_VENV} = True ]]; then
@@ -100,7 +103,7 @@
 
 function configure_placement_nova_compute {
     iniset $NOVA_CONF placement auth_type "password"
-    iniset $NOVA_CONF placement auth_url "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_AUTH_PORT/v3"
+    iniset $NOVA_CONF placement auth_url "$KEYSTONE_SERVICE_URI"
     iniset $NOVA_CONF placement username placement
     iniset $NOVA_CONF placement password "$SERVICE_PASSWORD"
     iniset $NOVA_CONF placement user_domain_name "$SERVICE_DOMAIN_NAME"
@@ -120,7 +123,12 @@
     if [ "$PLACEMENT_DB_ENABLED" != False ]; then
         iniset $PLACEMENT_CONF placement_database connection `database_connection_url placement`
     fi
-    _config_placement_apache_wsgi
+
+    if [[ "$WSGI_MODE" == "uwsgi" ]]; then
+        write_uwsgi_config "$PLACEMENT_UWSGI_CONF" "$PLACEMENT_UWSGI" "/placement"
+    else
+        _config_placement_apache_wsgi
+    fi
 }
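The generated placement-uwsgi.ini is roughly of the following shape; the option names are standard uwsgi ones, but the exact set (and the socket path) is determined by write_uwsgi_config in lib/apache, so treat this as an illustrative sketch only:

    [uwsgi]
    wsgi-file = /usr/local/bin/nova-placement-api
    processes = 4
    master = true
    plugins = python
    socket = /var/run/uwsgi/nova-placement-api.socket

Apache then proxies the /placement URL prefix through to that socket, which is how the service stays reachable on the standard ports without its own listener.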
 
 # create_placement_accounts() - Set up required placement accounts
@@ -141,7 +149,9 @@
 function init_placement {
     if [ "$PLACEMENT_DB_ENABLED" != False ]; then
         recreate_database placement
+        time_start "dbsync"
         $NOVA_BIN_DIR/nova-manage --config-file $NOVA_CONF api_db sync
+        time_stop "dbsync"
     fi
     create_placement_accounts
 }
@@ -149,16 +159,20 @@
 # install_placement() - Collect source and prepare
 function install_placement {
     install_apache_wsgi
-    if is_ssl_enabled_service "placement-api"; then
-        enable_mod_ssl
-    fi
+    # Install the openstackclient placement client plugin for CLI
+    # TODO(mriedem): Use pip_install_gr once osc-placement is in g-r.
+    pip_install osc-placement
 }
 
 # start_placement_api() - Start the API processes ahead of other things
 function start_placement_api {
-    enable_apache_site placement-api
-    restart_apache_server
-    tail_log placement-api /var/log/$APACHE_NAME/placement-api.log
+    if [[ "$WSGI_MODE" == "uwsgi" ]]; then
+        run_process "placement-api" "$PLACEMENT_BIN_DIR/uwsgi --procname-prefix placement --ini $PLACEMENT_UWSGI_CONF"
+    else
+        enable_apache_site placement-api
+        restart_apache_server
+        tail_log placement-api /var/log/$APACHE_NAME/placement-api.log
+    fi
 
     echo "Waiting for placement-api to start..."
     if ! wait_for_service $SERVICE_TIMEOUT $PLACEMENT_SERVICE_PROTOCOL://$PLACEMENT_SERVICE_HOST/placement; then
@@ -172,8 +186,13 @@
 
 # stop_placement() - Disable the api service and stop it.
 function stop_placement {
-    disable_apache_site placement-api
-    restart_apache_server
+    if [[ "$WSGI_MODE" == "uwsgi" ]]; then
+        stop_process "placement-api"
+        remove_uwsgi_config "$PLACEMENT_UWSGI_CONF" "$PLACEMENT_UWSGI"
+    else
+        disable_apache_site placement-api
+        restart_apache_server
+    fi
 }
 
 # Restore xtrace
diff --git a/lib/rpc_backend b/lib/rpc_backend
index 3c1404e..fb1cf73 100644
--- a/lib/rpc_backend
+++ b/lib/rpc_backend
@@ -114,7 +114,7 @@
     fi
 }
 
-# builds transport url string
+# Returns the address of the RPC backend in URL format.
 function get_transport_url {
     local virtual_host=$1
     if is_service_enabled rabbit || { [ -n "$RABBIT_HOST" ] && [ -n "$RABBIT_PASSWORD" ]; }; then
@@ -122,6 +122,16 @@
     fi
 }
 
+# Returns the address of the notification backend in URL format.  This
+# should be used to set the transport_url option in the
+# oslo_messaging_notifications group.
+function get_notification_url {
+    local virtual_host=$1
+    if is_service_enabled rabbit || { [ -n "$RABBIT_HOST" ] && [ -n "$RABBIT_PASSWORD" ]; }; then
+        echo "rabbit://$RABBIT_USERID:$RABBIT_PASSWORD@$RABBIT_HOST:5672/$virtual_host"
+    fi
+}
+
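get_notification_url currently emits the same rabbit URL as get_transport_url; the separate helper presumably leaves room for the notification transport to diverge from RPC later. A hypothetical result with illustrative credentials, followed by the typical consumer pattern used elsewhere in this change:

    $ get_notification_url
    rabbit://stackrabbit:secret@10.0.0.5:5672/
    iniset $NOVA_CONF oslo_messaging_notifications transport_url $(get_notification_url)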
 # iniset configuration
 function iniset_rpc_backend {
     local package=$1
diff --git a/lib/stack b/lib/stack
index f09ddce..bada26f 100644
--- a/lib/stack
+++ b/lib/stack
@@ -33,5 +33,8 @@
         if [[ ${USE_VENV} = True && -n ${PROJECT_VENV[$service]:-} ]]; then
             unset PIP_VIRTUAL_ENV
         fi
+    else
+        echo "No function declared with name 'install_${service}'."
+        exit 1
     fi
 }
diff --git a/lib/swift b/lib/swift
index 5b510e5..1601e2b 100644
--- a/lib/swift
+++ b/lib/swift
@@ -7,7 +7,7 @@
 #
 # - ``functions`` file
 # - ``apache`` file
-# - ``DEST``, ``SCREEN_NAME``, `SWIFT_HASH` must be defined
+# - ``DEST``, `SWIFT_HASH` must be defined
 # - ``STACK_USER`` must be defined
 # - ``SWIFT_DATA_DIR`` or ``DATA_DIR`` must be defined
 # - ``lib/keystone`` file
@@ -31,13 +31,22 @@
 # Defaults
 # --------
 
-if is_ssl_enabled_service "s-proxy" || is_service_enabled tls-proxy; then
+if is_service_enabled tls-proxy; then
     SWIFT_SERVICE_PROTOCOL="https"
 fi
 
 # Set up default directories
 GITDIR["python-swiftclient"]=$DEST/python-swiftclient
 
 SWIFT_DIR=$DEST/swift
+
+# Swift virtual environment (SWIFT_DIR must be set before this block)
+if [[ ${USE_VENV} = True ]]; then
+    PROJECT_VENV["swift"]=${SWIFT_DIR}.venv
+    SWIFT_BIN_DIR=${PROJECT_VENV["swift"]}/bin
+else
+    SWIFT_BIN_DIR=$(get_python_exec_prefix)
+fi
+
 SWIFT_AUTH_CACHE_DIR=${SWIFT_AUTH_CACHE_DIR:-/var/cache/swift}
 SWIFT_APACHE_WSGI_DIR=${SWIFT_APACHE_WSGI_DIR:-/var/www/swift}
@@ -119,6 +128,11 @@
 SWIFT_REPLICAS=${SWIFT_REPLICAS:-1}
 SWIFT_REPLICAS_SEQ=$(seq ${SWIFT_REPLICAS})
 
+# Set ``SWIFT_START_ALL_SERVICES`` to control whether all Swift
+# services (including the *-auditor, *-replicator, *-reconstructor, etc.
+# daemons) should be started.
+SWIFT_START_ALL_SERVICES=$(trueorfalse True SWIFT_START_ALL_SERVICES)
+
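For example, a memory-constrained dev environment could skip the background daemons with a local.conf entry like:

    [[local|localrc]]
    SWIFT_START_ALL_SERVICES=False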
 # Set ``SWIFT_LOG_TOKEN_LENGTH`` to configure how many characters of an auth
 # token should be placed in the logs. When keystone is used with PKI tokens,
 # the token values can be huge, seemingly larger the 2K, at the least. We
@@ -160,6 +174,7 @@
 # Test if any Swift services are enabled
 # is_swift_enabled
 function is_swift_enabled {
+    [[ ,${DISABLED_SERVICES} =~ ,"swift" ]] && return 1
     [[ ,${ENABLED_SERVICES} =~ ,"s-" ]] && return 0
     return 1
 }
@@ -384,13 +399,6 @@
         iniset ${SWIFT_CONFIG_PROXY_SERVER} DEFAULT bind_port ${SWIFT_DEFAULT_BIND_PORT}
     fi
 
-    if is_ssl_enabled_service s-proxy; then
-        ensure_certificates SWIFT
-
-        iniset ${SWIFT_CONFIG_PROXY_SERVER} DEFAULT cert_file "$SWIFT_SSL_CERT"
-        iniset ${SWIFT_CONFIG_PROXY_SERVER} DEFAULT key_file "$SWIFT_SSL_KEY"
-    fi
-
     # DevStack is commonly run in a small slow environment, so bump the timeouts up.
     # ``node_timeout`` is the node read operation response time to the proxy server
     # ``conn_timeout`` is how long it takes a connect() system call to return
@@ -405,7 +413,7 @@
         iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer "set log_level" "WARN"
         iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer paste.filter_factory "ceilometermiddleware.swift:filter_factory"
         iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer control_exchange "swift"
-        iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer url $(get_transport_url)
+        iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer url $(get_notification_url)
         iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer driver "messaging"
         iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer topic "notifications"
         SWIFT_EXTRAS_MIDDLEWARE_LAST="${SWIFT_EXTRAS_MIDDLEWARE_LAST} ceilometer"
@@ -456,6 +464,9 @@
     iniuncomment ${SWIFT_CONFIG_PROXY_SERVER} filter:tempauth account_autocreate
     iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:tempauth reseller_prefix "TEMPAUTH"
 
+    # Allow both reseller prefixes to be used with domain_remap
+    iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:domain_remap reseller_prefixes "AUTH, TEMPAUTH"
+
     if is_service_enabled swift3; then
         cat <<EOF >>${SWIFT_CONFIG_PROXY_SERVER}
 [filter:s3token]
@@ -523,11 +534,20 @@
         local auth_vers
         auth_vers=$(iniget ${testfile} func_test auth_version)
         iniset ${testfile} func_test auth_host ${KEYSTONE_SERVICE_HOST}
-        iniset ${testfile} func_test auth_port ${KEYSTONE_AUTH_PORT}
-        if [[ $auth_vers == "3" ]]; then
-            iniset ${testfile} func_test auth_prefix /v3/
+        if [[ "$KEYSTONE_AUTH_PROTOCOL" == "https" ]]; then
+            iniset ${testfile} func_test auth_port 443
         else
-            iniset ${testfile} func_test auth_prefix /v2.0/
+            iniset ${testfile} func_test auth_port 80
+        fi
+        iniset ${testfile} func_test auth_uri ${KEYSTONE_AUTH_URI}
+        if [[ "$auth_vers" == "3" ]]; then
+            iniset ${testfile} func_test auth_prefix /identity/v3/
+        else
+            iniset ${testfile} func_test auth_prefix /identity/v2.0/
+        fi
+        if is_service_enabled tls-proxy; then
+            iniset ${testfile} func_test cafile ${SSL_BUNDLE_FILE}
+            iniset ${testfile} func_test web_front_end apache2
         fi
     fi
 
@@ -542,6 +562,7 @@
     if [[ $SYSLOG != "False" ]]; then
         sed "s,%SWIFT_LOGDIR%,${swift_log_dir}," $FILES/swift/rsyslog.conf | sudo \
             tee /etc/rsyslog.d/10-swift.conf
+        echo "MaxMessageSize 6k" | sudo tee /etc/rsyslog.d/99-maxsize.conf
         # restart syslog to take the changes
         sudo killall -HUP rsyslogd
     fi
@@ -590,15 +611,13 @@
     # create all of the directories needed to emulate a few different servers
     local node_number
     for node_number in ${SWIFT_REPLICAS_SEQ}; do
-        sudo ln -sf ${SWIFT_DATA_DIR}/drives/sdb1/$node_number ${SWIFT_DATA_DIR}/$node_number;
-        local drive=${SWIFT_DATA_DIR}/drives/sdb1/${node_number}
-        local node=${SWIFT_DATA_DIR}/${node_number}/node
-        local node_device=${node}/sdb1
-        [[ -d $node ]] && continue
-        [[ -d $drive ]] && continue
-        sudo install -o ${STACK_USER} -g $user_group -d $drive
-        sudo install -o ${STACK_USER} -g $user_group -d $node_device
-        sudo chown -R ${STACK_USER}: ${node}
+        # node_devices must match *.conf devices option
+        local node_devices=${SWIFT_DATA_DIR}/${node_number}
+        local real_devices=${SWIFT_DATA_DIR}/drives/sdb1/$node_number
+        sudo ln -sf $real_devices $node_devices;
+        local device=${real_devices}/sdb1
+        [[ -d $device ]] && continue
+        sudo install -o ${STACK_USER} -g $user_group -d $device
     done
 }
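With the default SWIFT_DATA_DIR of /opt/stack/data/swift and one replica, the loop above yields a layout along these lines:

    /opt/stack/data/swift/1 -> /opt/stack/data/swift/drives/sdb1/1
    /opt/stack/data/swift/drives/sdb1/1/sdb1    # device dir the *.conf files reference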
 
@@ -762,7 +781,7 @@
     fi
 }
 
-# start_swift() - Start running processes, including screen
+# start_swift() - Start running processes
 function start_swift {
     # (re)start memcached to make sure we have a clean memcache.
     restart_service memcached
@@ -777,41 +796,56 @@
     fi
 
     if [ "$SWIFT_USE_MOD_WSGI" == "True" ]; then
+        # Apache should serve the "PACO" (proxy, account, container, object), a.k.a. "main", services
         restart_apache_server
+        # The rest of the services should be started in the background
         swift-init --run-dir=${SWIFT_DATA_DIR}/run rest start
-        tail_log s-proxy /var/log/$APACHE_NAME/proxy-server
-        if [[ ${SWIFT_REPLICAS} == 1 ]]; then
-            for type in object container account; do
-                tail_log s-${type} /var/log/$APACHE_NAME/${type}-server-1
-            done
-        fi
         return 0
     fi
 
-    # By default with only one replica we are launching the proxy,
-    # container, account and object server in screen in foreground and
-    # other services in background. If we have ``SWIFT_REPLICAS`` set to something
-    # greater than one we first spawn all the Swift services then kill the proxy
-    # service so we can run it in foreground in screen.  ``swift-init ...
-    # {stop|restart}`` exits with '1' if no servers are running, ignore it just
-    # in case
-    local todo type
-    swift-init --run-dir=${SWIFT_DATA_DIR}/run all restart || true
+
+    # By default, with only one replica, we launch the proxy, container,
+    # account and object servers in screen in the foreground. The rest of
+    # the services are then optionally started.
+    #
+    # If we have ``SWIFT_REPLICAS`` set to something greater than one
+    # we first spawn *all* the Swift services then kill the proxy service
+    # so we can run it in foreground in screen.
+    #
+    # ``swift-init ... {stop|restart}`` exits with '1' if no servers are
+    # running; ignore it just in case.
     if [[ ${SWIFT_REPLICAS} == 1 ]]; then
-        todo="object container account"
+        local foreground_services type
+
+        foreground_services="object container account"
+        for type in ${foreground_services}; do
+            run_process s-${type} "$SWIFT_BIN_DIR/swift-${type}-server ${SWIFT_CONF_DIR}/${type}-server/1.conf -v"
+        done
+
+        if [[ "$SWIFT_START_ALL_SERVICES" == "True" ]]; then
+            swift-init --run-dir=${SWIFT_DATA_DIR}/run rest start
+        else
+            # The container-sync daemon is strictly needed to pass the container
+            # sync Tempest tests.
+            swift-init --run-dir=${SWIFT_DATA_DIR}/run container-sync start
+        fi
+    else
+        swift-init --run-dir=${SWIFT_DATA_DIR}/run all restart || true
+        swift-init --run-dir=${SWIFT_DATA_DIR}/run proxy stop || true
     fi
-    for type in proxy ${todo}; do
-        swift-init --run-dir=${SWIFT_DATA_DIR}/run ${type} stop || true
-    done
+
     if is_service_enabled tls-proxy; then
         local proxy_port=${SWIFT_DEFAULT_BIND_PORT}
-        start_tls_proxy swift '*' $proxy_port $SERVICE_HOST $SWIFT_DEFAULT_BIND_PORT_INT
+        start_tls_proxy swift '*' $proxy_port $SERVICE_HOST $SWIFT_DEFAULT_BIND_PORT_INT $SWIFT_MAX_HEADER_SIZE
     fi
-    run_process s-proxy "swift-proxy-server ${SWIFT_CONF_DIR}/proxy-server.conf -v"
-    if [[ ${SWIFT_REPLICAS} == 1 ]]; then
-        for type in object container account; do
-            run_process s-${type} "swift-${type}-server ${SWIFT_CONF_DIR}/${type}-server/1.conf -v"
-        done
+    run_process s-proxy "$SWIFT_BIN_DIR/swift-proxy-server ${SWIFT_CONF_DIR}/proxy-server.conf -v"
+
+    # The storage services were started as well, but the proxy started
+    # last and takes the longest to come up, so by the time it responds
+    # the rest should be ready too.
+    echo "Waiting for swift proxy to start..."
+    if ! wait_for_service $SERVICE_TIMEOUT $SWIFT_SERVICE_PROTOCOL://$SERVICE_HOST:$SWIFT_DEFAULT_BIND_PORT/info; then
+        die $LINENO "swift proxy did not start"
     fi
 
     if [[ "$SWIFT_ENABLE_TEMPURLS" == "True" ]]; then
@@ -819,7 +853,7 @@
     fi
 }
 
-# stop_swift() - Stop running processes (non-screen)
+# stop_swift() - Stop running processes
 function stop_swift {
     local type
 
diff --git a/lib/tempest b/lib/tempest
index d95a9f5..f086f9a 100644
--- a/lib/tempest
+++ b/lib/tempest
@@ -11,13 +11,14 @@
 #   - ``DEST``, ``FILES``
 #   - ``ADMIN_PASSWORD``
 #   - ``DEFAULT_IMAGE_NAME``
+#   - ``DEFAULT_IMAGE_FILE_NAME``
 #   - ``S3_SERVICE_PORT``
 #   - ``SERVICE_HOST``
 #   - ``BASE_SQL_CONN`` ``lib/database`` declares
 #   - ``PUBLIC_NETWORK_NAME``
 #   - ``VIRT_DRIVER``
 #   - ``LIBVIRT_TYPE``
-#   - ``KEYSTONE_SERVICE_PROTOCOL``, ``KEYSTONE_SERVICE_HOST`` from lib/keystone
+#   - ``KEYSTONE_SERVICE_URI``, ``KEYSTONE_SERVICE_URI_V3`` from lib/keystone
 #
 # Optional Dependencies:
 #
@@ -223,7 +224,7 @@
             # Ensure ``flavor_ref`` and ``flavor_ref_alt`` have different values.
             # Some resize instance in tempest tests depends on this.
             for f in ${flavors[@]:1}; do
-                if [[ $f -ne $flavor_ref ]]; then
+                if [[ "$f" != "$flavor_ref" ]]; then
                     flavor_ref_alt=$f
                     break
                 fi
@@ -257,7 +258,7 @@
     iniset $TEMPEST_CONFIG volume build_timeout $BUILD_TIMEOUT
 
     # Identity
-    iniset $TEMPEST_CONFIG identity uri "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:5000/v2.0/"
+    iniset $TEMPEST_CONFIG identity uri "$KEYSTONE_SERVICE_URI/v2.0/"
     iniset $TEMPEST_CONFIG identity uri_v3 "$KEYSTONE_SERVICE_URI_V3"
     iniset $TEMPEST_CONFIG identity user_lockout_failure_attempts $KEYSTONE_LOCKOUT_FAILURE_ATTEMPTS
     iniset $TEMPEST_CONFIG identity user_lockout_duration $KEYSTONE_LOCKOUT_DURATION
@@ -273,15 +274,17 @@
     if [ "$ENABLE_IDENTITY_V2" == "True" ]; then
         # Run Identity API v2 tests ONLY if needed
         iniset $TEMPEST_CONFIG identity-feature-enabled api_v2 True
-        iniset $TEMPEST_CONFIG identity auth_version ${TEMPEST_AUTH_VERSION:-v2}
     else
         # Skip Identity API v2 tests by default
         iniset $TEMPEST_CONFIG identity-feature-enabled api_v2 False
-        # Use v3 auth tokens for running all Tempest tests
-        iniset $TEMPEST_CONFIG identity auth_version v3
+    fi
+    iniset $TEMPEST_CONFIG identity auth_version ${TEMPEST_AUTH_VERSION:-v3}
+    if [[ "$TEMPEST_AUTH_VERSION" != "v2.0" ]]; then
+        # we're going to disable v2 admin unless we're using v2.0 by default.
+        iniset $TEMPEST_CONFIG identity-feature-enabled api_v2_admin False
     fi
 
-    if is_ssl_enabled_service "key" || is_service_enabled tls-proxy; then
+    if is_service_enabled tls-proxy; then
         iniset $TEMPEST_CONFIG identity ca_certificates_file $SSL_BUNDLE_FILE
     fi
 
@@ -294,6 +297,12 @@
     # Newton and Ocata. This option can be removed after Mitaka is end of life.
     iniset $TEMPEST_CONFIG identity-feature-enabled forbid_global_implied_dsr True
 
+    # When LDAP is enabled domain specific drivers are also enabled and the users
+    # and groups identity tests must adapt to this scenario
+    if is_service_enabled ldap; then
+        iniset $TEMPEST_CONFIG identity-feature-enabled domain_specific_drivers True
+    fi
+
     # Image
     # We want to be able to override this variable in the gate to avoid
     # doing an external HTTP fetch for this test.
@@ -358,6 +367,7 @@
     iniset $TEMPEST_CONFIG compute-feature-enabled live_migration ${LIVE_MIGRATION_AVAILABLE:-False}
     iniset $TEMPEST_CONFIG compute-feature-enabled change_password False
     iniset $TEMPEST_CONFIG compute-feature-enabled block_migration_for_live_migration ${USE_BLOCK_MIGRATION_FOR_LIVE_MIGRATION:-False}
+    iniset $TEMPEST_CONFIG compute-feature-enabled live_migrate_back_and_forth ${LIVE_MIGRATE_BACK_AND_FORTH:-False}
     iniset $TEMPEST_CONFIG compute-feature-enabled attach_encrypted_volume ${ATTACH_ENCRYPTED_VOLUME_AVAILABLE:-True}
     if is_service_enabled n-cell; then
         # Cells doesn't support shelving/unshelving
@@ -390,24 +400,6 @@
     iniset $TEMPEST_CONFIG network-feature-enabled ipv6_subnet_attributes "$IPV6_SUBNET_ATTRIBUTES_ENABLED"
     iniset $TEMPEST_CONFIG network-feature-enabled port_security $NEUTRON_PORT_SECURITY
 
-    # Orchestration Tests
-    if is_service_enabled heat; then
-        if [[ ! -z "$HEAT_CFN_IMAGE_URL" ]]; then
-            iniset $TEMPEST_CONFIG orchestration image_ref $(basename "${HEAT_CFN_IMAGE_URL%.*}")
-        fi
-        # Nova might not be enabled, especially when we want to test tempest scenario/API that only create Neutron resources
-        if is_service_enabled nova; then
-            # build a specialized heat flavor
-            available_flavors=$(nova flavor-list)
-            if [[ ! ( $available_flavors =~ 'm1.heat' ) ]]; then
-                openstack flavor create --id 451 --ram 512 --disk 0 --vcpus 1 m1.heat
-            fi
-            iniset $TEMPEST_CONFIG orchestration instance_type "m1.heat"
-        fi
-        iniset $TEMPEST_CONFIG orchestration build_timeout 900
-        iniset $TEMPEST_CONFIG orchestration stack_owner_role Member
-    fi
-
     # Scenario
     if [ "$VIRT_DRIVER" = "xenserver" ]; then
         SCENARIO_IMAGE_DIR=${SCENARIO_IMAGE_DIR:-$FILES}
@@ -416,7 +408,7 @@
         iniset $TEMPEST_CONFIG scenario img_container_format ovf
     else
         SCENARIO_IMAGE_DIR=${SCENARIO_IMAGE_DIR:-$FILES}
-        SCENARIO_IMAGE_FILE=$DEFAULT_IMAGE_NAME
+        SCENARIO_IMAGE_FILE=$DEFAULT_IMAGE_FILE_NAME
     fi
     iniset $TEMPEST_CONFIG scenario img_dir $SCENARIO_IMAGE_DIR
     iniset $TEMPEST_CONFIG scenario img_file $SCENARIO_IMAGE_FILE
@@ -427,7 +419,7 @@
         TEMPEST_SSH_NETWORK_NAME=$PHYSICAL_NETWORK
     fi
     # Validation
-    iniset $TEMPEST_CONFIG validation run_validation ${TEMPEST_RUN_VALIDATION:-False}
+    iniset $TEMPEST_CONFIG validation run_validation ${TEMPEST_RUN_VALIDATION:-True}
     iniset $TEMPEST_CONFIG validation ip_version_for_ssh 4
     iniset $TEMPEST_CONFIG validation ssh_timeout $BUILD_TIMEOUT
     iniset $TEMPEST_CONFIG validation image_ssh_user ${DEFAULT_INSTANCE_USER:-cirros}
@@ -439,7 +431,11 @@
         TEMPEST_VOLUME_MANAGE_SNAPSHOT=${TEMPEST_VOLUME_MANAGE_SNAPSHOT:-True}
     fi
     iniset $TEMPEST_CONFIG volume-feature-enabled manage_snapshot $(trueorfalse False TEMPEST_VOLUME_MANAGE_SNAPSHOT)
-
+    # Only turn on TEMPEST_VOLUME_MANAGE_VOLUME by default for "lvm" backends
+    if [[ "$CINDER_ENABLED_BACKENDS" == *"lvm"* ]]; then
+        TEMPEST_VOLUME_MANAGE_VOLUME=${TEMPEST_VOLUME_MANAGE_VOLUME:-True}
+    fi
+    iniset $TEMPEST_CONFIG volume-feature-enabled manage_volume $(trueorfalse False TEMPEST_VOLUME_MANAGE_VOLUME)
     # TODO(ameade): Remove the api_v3 flag when Mitaka and Liberty are end of life.
     iniset $TEMPEST_CONFIG volume-feature-enabled api_v3 True
     iniset $TEMPEST_CONFIG volume-feature-enabled api_v1 $(trueorfalse False TEMPEST_VOLUME_API_V1)
@@ -584,6 +580,11 @@
         DISABLE_NETWORK_API_EXTENSIONS+=", metering"
     fi
 
+    # disable l3_agent_scheduler if we didn't enable L3 agent
+    if ! is_service_enabled q-l3; then
+        DISABLE_NETWORK_API_EXTENSIONS+=", l3_agent_scheduler"
+    fi
+
     local network_api_extensions=${NETWORK_API_EXTENSIONS:-"all"}
     if [[ ! -z "$DISABLE_NETWORK_API_EXTENSIONS" ]]; then
         # Enabled extensions are either the ones explicitly specified or those available on the API endpoint
@@ -618,9 +619,9 @@
 # install_tempest() - Collect source and prepare
 function install_tempest {
     git_clone $TEMPEST_REPO $TEMPEST_DIR $TEMPEST_BRANCH
-    pip_install tox
+    pip_install 'tox!=2.8.0'
     pushd $TEMPEST_DIR
-    tox --notest -efull
+    tox -r --notest -efull
     # NOTE(mtreinish) Respect constraints in the tempest full venv, things that
     # are using a tox job other than full will not be respecting constraints but
     # running pip install -U on tempest requirements
diff --git a/lib/template b/lib/template
index b92fb40..e6d0032 100644
--- a/lib/template
+++ b/lib/template
@@ -41,6 +41,7 @@
 # Test if any XXXX services are enabled
 # is_XXXX_enabled
 function is_XXXX_enabled {
+    [[ ,${DISABLED_SERVICES} =~ ,"XXXX" ]] && return 1
     [[ ,${ENABLED_SERVICES} =~ ,"XX-" ]] && return 0
     return 1
 }
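Note the asymmetry in the two tests: the disabled check matches the full service name, while the enabled check matches the per-process prefix. For swift (see lib/swift above) the concrete pair is:

    [[ ,${DISABLED_SERVICES} =~ ,"swift" ]] && return 1    # disable_service swift wins
    [[ ,${ENABLED_SERVICES} =~ ,"s-" ]] && return 0        # any s-* process enables it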
@@ -80,7 +81,7 @@
     :
 }
 
-# start_XXXX() - Start running processes, including screen
+# start_XXXX() - Start running processes
 function start_XXXX {
     # The quoted command must be a single command and not include an
     # shell metacharacters, redirections or shell builtins.
@@ -88,7 +89,7 @@
     :
 }
 
-# stop_XXXX() - Stop running processes (non-screen)
+# stop_XXXX() - Stop running processes
 function stop_XXXX {
     # for serv in serv-a serv-b; do
     #     stop_process $serv
diff --git a/lib/tls b/lib/tls
index f9ef554..0baf86c 100644
--- a/lib/tls
+++ b/lib/tls
@@ -113,11 +113,11 @@
 certificate             = \$dir/cacert.pem
 private_key             = \$dir/private/cacert.key
 RANDFILE                = \$dir/private/.rand
-default_md              = default
+default_md              = sha256
 
 [ req ]
-default_bits            = 1024
-default_md              = sha1
+default_bits            = 2048
+default_md              = sha256
 
 prompt                  = no
 distinguished_name      = ca_distinguished_name
@@ -212,6 +212,9 @@
     if is_fedora; then
         sudo cp $INT_CA_DIR/ca-chain.pem /usr/share/pki/ca-trust-source/anchors/devstack-chain.pem
         sudo update-ca-trust
+    elif is_suse; then
+        sudo cp $INT_CA_DIR/ca-chain.pem /usr/share/pki/trust/anchors/devstack-chain.pem
+        sudo update-ca-certificates
     elif is_ubuntu; then
         sudo cp $INT_CA_DIR/ca-chain.pem /usr/local/share/ca-certificates/devstack-int.crt
         sudo cp $ROOT_CA_DIR/cacert.pem /usr/local/share/ca-certificates/devstack-root.crt
@@ -343,9 +346,10 @@
 # one. If the value for the CA is not rooted in /etc then we know
 # we need to change it.
 function fix_system_ca_bundle_path {
-    if is_service_enabled tls-proxy || [ "$USE_SSL" == "True" ]; then
+    if is_service_enabled tls-proxy; then
         local capath
-        capath=$(python -c $'try:\n from requests import certs\n print certs.where()\nexcept ImportError: pass')
+        local python_cmd=${1:-python}
+        capath=$($python_cmd -c $'try:\n from requests import certs\n print (certs.where())\nexcept ImportError: pass')
 
         if [[ ! $capath == "" && ! $capath =~ ^/etc/.* && ! -L $capath ]]; then
             if is_fedora; then
@@ -354,6 +358,9 @@
             elif is_ubuntu; then
                 sudo rm -f $capath
                 sudo ln -s /etc/ssl/certs/ca-certificates.crt $capath
+            elif is_suse; then
+                sudo rm -f $capath
+                sudo ln -s /etc/ssl/ca-bundle.pem $capath
             else
                 echo "Don't know how to set the CA bundle, expect the install to fail."
             fi
@@ -362,27 +369,14 @@
 }
 
 
+# Kept only for compatibility: report whether the tls-proxy service is
+# enabled. Note that bash "return" takes a numeric status, not a command,
+# so just let the function exit with the status of is_service_enabled.
+function is_ssl_enabled_service {
+    is_service_enabled tls-proxy
+}
+
 # Certificate Input Configuration
 # ===============================
 
-# check to see if the service(s) specified are to be SSL enabled.
-#
-# Multiple services specified as arguments are ``OR``'ed together; the test
-# is a short-circuit boolean, i.e it returns on the first match.
-#
-# Uses global ``SSL_ENABLED_SERVICES``
-function is_ssl_enabled_service {
-    local services=$@
-    local service=""
-    if [ "$USE_SSL" == "False" ]; then
-        return 1
-    fi
-    for service in ${services}; do
-        [[ ,${SSL_ENABLED_SERVICES}, =~ ,${service}, ]] && return 0
-    done
-    return 1
-}
-
 # Ensure that the certificates for a service are in place. This function does
 # not check that a service is SSL enabled, this should already have been
 # completed.
@@ -429,6 +423,9 @@
 
     if is_ubuntu; then
         sudo a2enmod ssl
+    elif is_suse; then
+        sudo a2enmod ssl
+        sudo a2enflag SSL
     elif is_fedora; then
         # Fedora enables mod_ssl by default
         :
@@ -457,29 +454,30 @@
 # MaxClients: maximum number of simultaneous client connections
 # MaxRequestsPerChild: maximum number of requests a server process serves
 #
-# The apache defaults are too conservative if we want reliable tempest
-# testing. Bump these values up from ~400 max clients to 1024 max clients.
+# We want to be thrifty with memory, so tune Apache down to allow 256
+# total connections. This should still be plenty for a dev env, yet
+# lighter than the Apache defaults.
 <IfModule mpm_worker_module>
 # Note that the next three conf values must be changed together.
 # MaxClients = ServerLimit * ThreadsPerChild
-ServerLimit          32
+ServerLimit           8
 ThreadsPerChild      32
-MaxClients         1024
-StartServers          3
-MinSpareThreads      96
-MaxSpareThreads     192
+MaxClients          256
+StartServers          2
+MinSpareThreads      32
+MaxSpareThreads      96
 ThreadLimit          64
 MaxRequestsPerChild   0
 </IfModule>
 <IfModule mpm_event_module>
 # Note that the next three conf values must be changed together.
 # MaxClients = ServerLimit * ThreadsPerChild
-ServerLimit          32
+ServerLimit           8
 ThreadsPerChild      32
-MaxClients         1024
-StartServers          3
-MinSpareThreads      96
-MaxSpareThreads     192
+MaxClients          256
+StartServers          2
+MinSpareThreads      32
+MaxSpareThreads      96
 ThreadLimit          64
 MaxRequestsPerChild   0
 </IfModule>
@@ -489,13 +487,15 @@
 }
 
 # Starts the TLS proxy for the given IP/ports
-# start_tls_proxy front-host front-port back-host back-port
+# start_tls_proxy service-name front-host front-port back-host back-port
 function start_tls_proxy {
     local b_service="$1-tls-proxy"
     local f_host=$2
     local f_port=$3
     local b_host=$4
     local b_port=$5
+    # 8190 is the Apache default for LimitRequestFieldSize.
+    local f_header_size=${6:-8190}
 
     tune_apache_connections
 
@@ -523,27 +523,30 @@
     # ('Connection aborted.', BadStatusLine("''",)) error
     KeepAlive Off
 
+    # Increasing the allowed request header size is required for swift
+    # functional testing to work with TLS enabled; the value used there
+    # is 2 bytes larger than the Apache default of 8190.
+    LimitRequestFieldSize $f_header_size
+    RequestHeader set X-Forwarded-Proto "https"
+
     <Location />
-        ProxyPass http://$b_host:$b_port/ retry=5 nocanon
+        ProxyPass http://$b_host:$b_port/ retry=0 nocanon
         ProxyPassReverse http://$b_host:$b_port/
     </Location>
     ErrorLog $APACHE_LOG_DIR/tls-proxy_error.log
-    ErrorLogFormat "[%{u}t] [%-m:%l] [pid %P:tid %T] %7F: %E: [client\ %a] [frontend\ %A] %M% ,\ referer\ %{Referer}i"
+    ErrorLogFormat "%{cu}t [%-m:%l] [pid %P:tid %T] %7F: %E: [client\ %a] [frontend\ %A] %M% ,\ referer\ %{Referer}i"
     LogLevel info
-    CustomLog $APACHE_LOG_DIR/tls-proxy_access.log common
-    LogFormat "%v %h %l %u %t \"%r\" %>s %b"
+    CustomLog $APACHE_LOG_DIR/tls-proxy_access.log "%{%Y-%m-%d}t %{%T}t.%{msec_frac}t [%l] %a \"%r\" %>s %b"
 </VirtualHost>
 EOF
-    for mod in ssl proxy proxy_http; do
+    if is_suse ; then
+        sudo a2enflag SSL
+    fi
+    for mod in headers ssl proxy proxy_http; do
         enable_apache_mod $mod
     done
     enable_apache_site $b_service
-    # Only a reload is required to pull in new vhosts
-    # Note that a restart reliably fails on centos7 and trusty
-    # because apache can't open port 80 because the old apache
-    # still has it open. Using reload fixes trusty but centos7
-    # still doesn't work.
-    reload_apache_server
+    restart_apache_server
 }
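+# Example invocation (illustrative; service name, hosts and ports are
+# hypothetical):
+#
+#     start_tls_proxy glance-api $SERVICE_HOST 9292 127.0.0.1 19292
+#
+# This fronts a plain-http backend on 127.0.0.1:19292 with a TLS
+# listener on $SERVICE_HOST:9292.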
 
 # Follow TLS proxy
diff --git a/openrc b/openrc
index 483b5af..37724c5 100644
--- a/openrc
+++ b/openrc
@@ -72,19 +72,23 @@
     GLANCE_HOST=${GLANCE_HOST:-$HOST_IP}
 fi
 
-SERVICE_PROTOCOL=${SERVICE_PROTOCOL:-http}
-KEYSTONE_AUTH_PROTOCOL=${KEYSTONE_AUTH_PROTOCOL:-$SERVICE_PROTOCOL}
-KEYSTONE_AUTH_HOST=${KEYSTONE_AUTH_HOST:-$SERVICE_HOST}
-
 # Identity API version
 export OS_IDENTITY_API_VERSION=${IDENTITY_API_VERSION:-3}
 
+# Ask keystoneauth1 to use keystone
+export OS_AUTH_TYPE=password
+
 # Authenticating against an OpenStack cloud using Keystone returns a **Token**
 # and **Service Catalog**.  The catalog contains the endpoints for all services
 # the user/project has access to - including nova, glance, keystone, swift, ...
 # We currently recommend using the version 3 *identity api*.
 #
-export OS_AUTH_URL=$KEYSTONE_AUTH_PROTOCOL://$KEYSTONE_AUTH_HOST:5000/v${OS_IDENTITY_API_VERSION}
+
+# If you don't have a working .stackenv, this is the backup position
+KEYSTONE_BACKUP=$SERVICE_PROTOCOL://$SERVICE_HOST:5000
+KEYSTONE_AUTH_URI=${KEYSTONE_AUTH_URI:-$KEYSTONE_BACKUP}
+
+export OS_AUTH_URL=${OS_AUTH_URL:-$KEYSTONE_AUTH_URI}
 
 # Currently, in order to use openstackclient with Identity API v3,
 # we need to set the domain which the user and project belong to.
diff --git a/samples/local.conf b/samples/local.conf
index 6d5351f..8b76137 100644
--- a/samples/local.conf
+++ b/samples/local.conf
@@ -10,7 +10,7 @@
 
 # This is a collection of some of the settings we have found to be useful
 # in our DevStack development environments. Additional settings are described
-# in http://docs.openstack.org/developer/devstack/configuration.html#local-conf
+# in https://docs.openstack.org/devstack/latest/configuration.html#local-conf
 # These should be considered as samples and are unsupported DevStack code.
 
 # The ``localrc`` section replaces the old ``localrc`` configuration file.
diff --git a/setup.cfg b/setup.cfg
index e4b2888..fcd2b13 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -2,10 +2,10 @@
 name = DevStack
 summary = OpenStack DevStack
 description-file =
-    README.md
+    README.rst
 author = OpenStack
 author-email = openstack-dev@lists.openstack.org
-home-page = http://docs.openstack.org/developer/devstack
+home-page = https://docs.openstack.org/devstack/latest
 classifier =
     Intended Audience :: Developers
     License :: OSI Approved :: Apache Software License
@@ -15,6 +15,7 @@
 all_files = 1
 build-dir = doc/build
 source-dir = doc/source
+warning-is-error = 1
 
 [pbr]
 warnerrors = True
diff --git a/stack.sh b/stack.sh
index 4cee385..2bd9da9 100755
--- a/stack.sh
+++ b/stack.sh
@@ -2,7 +2,7 @@
 
 # ``stack.sh`` is an opinionated OpenStack developer installation.  It
 # installs and configures various combinations of **Cinder**, **Glance**,
-# **Heat**, **Horizon**, **Keystone**, **Nova**, **Neutron**, and **Swift**
+# **Horizon**, **Keystone**, **Nova**, **Neutron**, and **Swift**
 
 # This script's options can be changed by setting appropriate environment
 # variables.  You can configure things like which git repositories to use,
@@ -27,11 +27,37 @@
 # Make sure custom grep options don't get in the way
 unset GREP_OPTIONS
 
-# Sanitize language settings to avoid commands bailing out
-# with "unsupported locale setting" errors.
+# NOTE(sdague): why do we explicitly set locale when running stack.sh?
+#
+# Devstack is written in bash, and many functions used throughout
+# devstack process text coming off a command (like the ip command)
+# and do transforms using grep, sed, cut, awk on the strings that are
+# returned. Many of these programs are internationalized, which is
+# great for end users, but means that the strings that devstack
+# functions depend upon might not be there in other locales. We thus
+# need to pin the world to an English basis during the runs.
+#
+# Previously we used the C locale for this, every system has it, and
+# it gives us a stable sort order. It does however mean that we
+# effectively drop unicode support.... boo!  :(
+#
+# With python3 being more unicode aware by default, that's not the
+# right option. While there is a C.utf8 locale, some distros are
+# shipping it as C.UTF8 for extra confusion, and its support
+# isn't entirely clear across distros. This is made more challenging when
+# trying to support both out of the box distros, and the gate which
+# uses diskimage builder to build disk images in a different way than
+# the distros do.
+#
+# So... en_US.utf8 it is. That locale has existed for a very long
+# time. It is a compromise position, but it is the least-bad option
+# as of this comment.
+#
+# We also have to unset other variables that might impact LC_ALL
+# taking effect.
 unset LANG
 unset LANGUAGE
-LC_ALL=C
+LC_ALL=en_US.utf8
 export LC_ALL
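+# A quick illustration of the problem (not executed by stack.sh):
+#
+#     $ printf 'a\nB\n' | LC_ALL=en_US.utf8 sort    # => a, B
+#     $ printf 'a\nB\n' | LC_ALL=C sort             # => B, a
+#
+# Any function that greps or sorts command output can be tripped up by
+# collation differences like this, or by translated messages, in other
+# locales.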
 
 # Make sure umask is sane
@@ -161,16 +187,16 @@
 extract_localrc_section $TOP_DIR/local.conf $TOP_DIR/localrc $TOP_DIR/.localrc.auto
 
 # ``stack.sh`` is customizable by setting environment variables.  Override a
-# default setting via export::
+# default setting via export:
 #
 #     export DATABASE_PASSWORD=anothersecret
 #     ./stack.sh
 #
-# or by setting the variable on the command line::
+# or by setting the variable on the command line:
 #
 #     DATABASE_PASSWORD=simple ./stack.sh
 #
-# Persistent variables can be placed in a ``local.conf`` file::
+# Persistent variables can be placed in a ``local.conf`` file:
 #
 #     [[local|localrc]]
 #     DATABASE_PASSWORD=anothersecret
@@ -190,25 +216,18 @@
 fi
 source $TOP_DIR/stackrc
 
+# write /etc/devstack-version
+write_devstack_version
+
 # Warn users who aren't on an explicitly supported distro, but allow them to
 # override check and attempt installation with ``FORCE=yes ./stack``
-if [[ ! ${DISTRO} =~ (xenial|yakkety|zesty|sid|testing|jessie|f24|f25|rhel7|kvmibm1) ]]; then
+if [[ ! ${DISTRO} =~ (xenial|yakkety|zesty|stretch|jessie|f24|f25|f26|opensuse-42.2|opensuse-42.3|rhel7|kvmibm1) ]]; then
     echo "WARNING: this script has not been tested on $DISTRO"
     if [[ "$FORCE" != "yes" ]]; then
         die $LINENO "If you wish to run this script anyway run with FORCE=yes"
     fi
 fi
 
-# Check to see if we are already running DevStack
-# Note that this may fail if USE_SCREEN=False
-if type -p screen > /dev/null && screen -ls | egrep -q "[0-9]\.$SCREEN_NAME"; then
-    echo "You are already running a stack.sh session."
-    echo "To rejoin this session type 'screen -x stack'."
-    echo "To destroy this session, type './unstack.sh'."
-    exit 1
-fi
-
-
 # Local Settings
 # --------------
 
@@ -328,6 +347,7 @@
 DATA_DIR=${DATA_DIR:-${DEST}/data}
 sudo mkdir -p $DATA_DIR
 safe_chown -R $STACK_USER $DATA_DIR
+safe_chmod 0755 $DATA_DIR
 
 # Configure proper hostname
 # Certain services such as rabbitmq require that the local hostname resolves
@@ -347,6 +367,10 @@
 # is pre-installed.
 if [[ -f /etc/nodepool/provider ]]; then
     SKIP_EPEL_INSTALL=True
+    if is_fedora; then
+        # However, EPEL is not enabled by default.
+        sudo yum-config-manager --enable epel
+    fi
 fi
 
 if is_fedora && [[ $DISTRO == "rhel7" ]] && \
@@ -457,24 +481,6 @@
     exec 6> >( $TOP_DIR/tools/outfilter.py -v >&3 )
 fi
 
-# Set up logging of screen windows
-# Set ``SCREEN_LOGDIR`` to turn on logging of screen windows to the
-# directory specified in ``SCREEN_LOGDIR``, we will log to the file
-# ``screen-$SERVICE_NAME-$TIMESTAMP.log`` in that dir and have a link
-# ``screen-$SERVICE_NAME.log`` to the latest log file.
-# Logs are kept for as long specified in ``LOGDAYS``.
-# This is deprecated....logs go in ``LOGDIR``, only symlinks will be here now.
-if [[ -n "$SCREEN_LOGDIR" ]]; then
-
-    # We make sure the directory is created.
-    if [[ -d "$SCREEN_LOGDIR" ]]; then
-        # We cleanup the old logs
-        find $SCREEN_LOGDIR -maxdepth 1 -name screen-\*.log -mtime +$LOGDAYS -exec rm {} \;
-    else
-        mkdir -p $SCREEN_LOGDIR
-    fi
-fi
-
 # Basic test for ``$DEST`` path permissions (fatal on error unless skipped)
 check_path_perm_sanity ${DEST}
 
@@ -493,6 +499,11 @@
         kill 2>&1 $jobs
     fi
 
+    # Remove timing data file
+    if [ -f "$OSCWRAP_TIMER_FILE" ] ; then
+        rm "$OSCWRAP_TIMER_FILE"
+    fi
+
     # Kill the last spinner process
     kill_spinner
 
@@ -538,13 +549,6 @@
 source $TOP_DIR/lib/database
 source $TOP_DIR/lib/rpc_backend
 
-# Service to enable with SSL if ``USE_SSL`` is True
-SSL_ENABLED_SERVICES="key,nova,cinder,glance,s-proxy,neutron"
-
-if is_service_enabled tls-proxy && [ "$USE_SSL" == "True" ]; then
-    die $LINENO "tls-proxy and SSL are mutually exclusive"
-fi
-
 # Configure Projects
 # ==================
 
@@ -563,7 +567,7 @@
 
 # Source project function libraries
 source $TOP_DIR/lib/infra
-source $TOP_DIR/lib/oslo
+source $TOP_DIR/lib/libraries
 source $TOP_DIR/lib/lvm
 source $TOP_DIR/lib/horizon
 source $TOP_DIR/lib/keystone
@@ -575,8 +579,7 @@
 source $TOP_DIR/lib/neutron
 source $TOP_DIR/lib/ldap
 source $TOP_DIR/lib/dstat
-source $TOP_DIR/lib/dlm
-source $TOP_DIR/lib/os_brick
+source $TOP_DIR/lib/etcd3
 
 # Extras Source
 # --------------
@@ -748,6 +751,13 @@
 # Do the ugly hacks for broken packages and distros
 source $TOP_DIR/tools/fixup_stuff.sh
 
+if [[ "$USE_SYSTEMD" == "True" ]]; then
+    pip_install_gr systemd-python
+    # the default rate limit of 1000 messages / 30 seconds is not
+    # sufficient given how verbose our logging is.
+    iniset -sudo /etc/systemd/journald.conf "Journal" "RateLimitBurst" "0"
+    sudo systemctl restart systemd-journald
+fi
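+# (Illustrative check, not executed by stack.sh: after the restart,
+# "grep RateLimitBurst /etc/systemd/journald.conf" should show the
+# option set to 0.)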
 
 # Virtual Environment
 # -------------------
@@ -760,13 +770,12 @@
 # Phase: pre-install
 run_phase stack pre-install
 
+# NOTE(danms): Set global limits before installing anything
+set_systemd_override DefaultLimitNOFILE ${ULIMIT_NOFILE}
+
 install_rpc_backend
 restart_rpc_backend
 
-# NOTE(sdague): dlm install is conditional on one being enabled by configuration
-install_dlm
-configure_dlm
-
 if is_service_enabled $DATABASE_BACKENDS; then
     install_database
 fi
@@ -778,13 +787,20 @@
     install_neutron_agent_packages
 fi
 
+if is_service_enabled etcd3; then
+    install_etcd3
+fi
+
 # Check Out and Install Source
 # ----------------------------
 
 echo_summary "Installing OpenStack project source"
 
-# Install Oslo libraries
-install_oslo
+# Install additional libraries
+install_libs
+
+# Install uwsgi
+install_apache_uwsgi
 
 # Install client libraries
 install_keystoneauth
@@ -799,13 +815,8 @@
     install_neutronclient
 fi
 
-# Install shared libraries
-if is_service_enabled cinder nova; then
-    install_os_brick
-fi
-
 # Setup TLS certs
-if is_service_enabled tls-proxy || [ "$USE_SSL" == "True" ]; then
+if is_service_enabled tls-proxy; then
     configure_CA
     init_CA
     init_cert
@@ -857,14 +868,12 @@
 if is_service_enabled nova; then
     # Compute service
     stack_install_service nova
-    cleanup_nova
     configure_nova
 fi
 
 if is_service_enabled placement; then
     # placement api
     stack_install_service placement
-    cleanup_placement
     configure_placement
 fi
 
@@ -885,8 +894,11 @@
     stack_install_service horizon
 fi
 
-if is_service_enabled tls-proxy || [ "$USE_SSL" == "True" ]; then
+if is_service_enabled tls-proxy; then
     fix_system_ca_bundle_path
+    if python3_enabled ; then
+        fix_system_ca_bundle_path python3
+    fi
 fi
 
 # Extras Install
@@ -903,6 +915,10 @@
     pip_install_gr python-openstackclient
 fi
 
+# Installs alias for osc so that we can collect timing for all
+# osc commands. Alias dies with stack.sh.
+install_oscwrap
+
 if [[ $TRACK_DEPENDS = True ]]; then
     $DEST/.venv/bin/pip freeze > $DEST/requires-post-pip
     if ! diff -Nru $DEST/requires-pre-pip $DEST/requires-post-pip > $DEST/requires.diff; then
@@ -971,41 +987,25 @@
     configure_database
 fi
 
-
-# Configure screen
-# ----------------
-
-USE_SCREEN=$(trueorfalse True USE_SCREEN)
-if [[ "$USE_SCREEN" == "True" ]]; then
-    # Create a new named screen to run processes in
-    screen -d -m -S $SCREEN_NAME -t shell -s /bin/bash
-    sleep 1
-
-    # Set a reasonable status bar
-    SCREEN_HARDSTATUS=${SCREEN_HARDSTATUS:-}
-    if [ -z "$SCREEN_HARDSTATUS" ]; then
-        SCREEN_HARDSTATUS='%{= .} %-Lw%{= .}%> %n%f %t*%{= .}%+Lw%< %-=%{g}(%{d}%H/%l%{g})'
-    fi
-    screen -r $SCREEN_NAME -X hardstatus alwayslastline "$SCREEN_HARDSTATUS"
-    screen -r $SCREEN_NAME -X setenv PROMPT_COMMAND /bin/true
-
-    if is_service_enabled tls-proxy; then
-        follow_tls_proxy
-    fi
-fi
-
-# Clear ``screenrc`` file
-SCREENRC=$TOP_DIR/$SCREEN_NAME-screenrc
-if [[ -e $SCREENRC ]]; then
-    rm -f $SCREENRC
-fi
-
-# Initialize the directory for service status check
-init_service_check
-
 # Save configuration values
 save_stackenv $LINENO
 
+# Kernel Samepage Merging (KSM)
+# -----------------------------
+
+# Processes that mark their memory as mergeable can share identical memory
+# pages if KSM is enabled. This is particularly useful for nova + libvirt
+# backends but any other setup that marks its memory as mergeable can take
+# advantage. The drawback is there is higher cpu load; however, we tend to
+# be memory bound not cpu bound so enable KSM by default but allow people
+# to opt out if the CPU time is more important to them.
+
+if [[ "ENABLE_KSM" == "True" ]] ; then
+    if [[ -f /sys/kernel/mm/ksm/run ]] ; then
+        sudo sh -c "echo 1 > /sys/kernel/mm/ksm/run"
+    fi
+fi
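+# To gauge whether KSM is actually merging anything on a host, inspect
+# the counters under /sys/kernel/mm/ksm (illustrative only, not run by
+# stack.sh):
+#
+#     grep . /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing
+#
+# A high pages_sharing to pages_shared ratio indicates effective sharing.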
+
 
 # Start Services
 # ==============
@@ -1016,6 +1016,13 @@
 # A better kind of sysstat, with the top process per time slice
 start_dstat
 
+# Etcd
+# -----
+
+# etcd is a distributed key-value store that provides a reliable way to
+# store data across a cluster of machines.
+if is_service_enabled etcd3; then
+    start_etcd3
+fi
 
 # Keystone
 # --------
@@ -1057,11 +1064,18 @@
     fi
 
     create_keystone_accounts
-    create_nova_accounts
-    create_glance_accounts
-    create_cinder_accounts
-    create_neutron_accounts
-
+    if is_service_enabled nova; then
+        create_nova_accounts
+    fi
+    if is_service_enabled glance; then
+        create_glance_accounts
+    fi
+    if is_service_enabled cinder; then
+        create_cinder_accounts
+    fi
+    if is_service_enabled neutron; then
+        create_neutron_accounts
+    fi
     if is_service_enabled swift; then
         create_swift_accounts
     fi
@@ -1222,8 +1236,15 @@
 fi
 
 # Create a randomized default value for the key manager's fixed_key
+# NOTE(lyarwood): This is currently set to 36 as a workaround for the
+# following libvirt bug, which incorrectly pads passphrases that are a
+# multiple of 16 bytes in length.
+# Unable to use LUKS passphrase that is exactly 16 bytes long
+# https://bugzilla.redhat.com/show_bug.cgi?id=1447297
 if is_service_enabled nova; then
-    iniset $NOVA_CONF key_manager fixed_key $(generate_hex_string 32)
+    key=$(generate_hex_string 36)
+    iniset $NOVA_CONF key_manager fixed_key "$key"
+    iniset $NOVA_CPU_CONF key_manager fixed_key "$key"
 fi
 
 # Launch the nova-api and wait for it to answer before continuing
@@ -1237,6 +1258,7 @@
     start_neutron_api
 elif is_service_enabled q-svc; then
     echo_summary "Starting Neutron"
+    configure_neutron_after_post_config
     start_neutron_service_and_check
 elif is_service_enabled $DATABASE_BACKENDS && is_service_enabled n-net; then
     NM_CONF=${NOVA_CONF}
@@ -1254,6 +1276,13 @@
     $NOVA_BIN_DIR/nova-manage --config-file $NM_CONF floating create --ip_range=$TEST_FLOATING_RANGE --pool=$TEST_FLOATING_POOL
 fi
 
+# Start placement before any of the services that are likely to want
+# to use it to manage resource providers.
+if is_service_enabled placement; then
+    echo_summary "Starting Placement"
+    start_placement
+fi
+
 if is_service_enabled neutron; then
     start_neutron
 fi
@@ -1268,10 +1297,6 @@
     start_nova
     create_flavors
 fi
-if is_service_enabled placement; then
-    echo_summary "Starting Placement"
-    start_placement
-fi
 if is_service_enabled cinder; then
     echo_summary "Starting Cinder"
     start_cinder
@@ -1300,10 +1325,6 @@
         USERRC_PARAMS="$USERRC_PARAMS --os-cacert $SSL_BUNDLE_FILE"
     fi
 
-    if [[ "$HEAT_STANDALONE" = "True" ]]; then
-        USERRC_PARAMS="$USERRC_PARAMS --heat-url http://$HEAT_API_HOST:$HEAT_API_PORT/v1"
-    fi
-
     $TOP_DIR/tools/create_userrc.sh $USERRC_PARAMS
 fi
 
@@ -1350,6 +1371,13 @@
 # Sanity checks
 # =============
 
+# Check that computes are all ready
+#
+# TODO(sdague): there should be some generic phase here.
+if is_service_enabled n-cpu; then
+    is_nova_ready
+fi
+
 # Check the status of running services
 service_check
 
@@ -1443,12 +1471,28 @@
 
 # Warn that a deprecated feature was used
 if [[ -n "$DEPRECATED_TEXT" ]]; then
-    echo_summary "WARNING: $DEPRECATED_TEXT"
+    echo
+    echo -e "WARNING: $DEPRECATED_TEXT"
+    echo
 fi
 
+# If USE_SYSTEMD is enabled, tell the user about using it.
+if [[ "$USE_SYSTEMD" == "True" ]]; then
+    echo
+    echo "Services are running under systemd unit files."
+    echo "For more information see: "
+    echo "https://docs.openstack.org/devstack/latest/systemd.html"
+    echo
+fi
+
+# Useful info on current state
+cat /etc/devstack-version
+echo
+
 # Indicate how long this took to run (bash maintained variable ``SECONDS``)
 echo_summary "stack.sh completed in $SECONDS seconds."
 
+
 # Restore/close logging file descriptors
 exec 1>&3
 exec 2>&3
diff --git a/stackrc b/stackrc
index 46b8747..0ffcb67 100644
--- a/stackrc
+++ b/stackrc
@@ -5,7 +5,7 @@
 
 # ensure we don't re-source this in the same environment
 [[ -z "$_DEVSTACK_STACKRC" ]] || return 0
-declare -r _DEVSTACK_STACKRC=1
+declare -r -g _DEVSTACK_STACKRC=1
 
 # Find the other rc files
 RC_DIR=$(cd $(dirname "${BASH_SOURCE:-$0}") && pwd)
@@ -53,7 +53,7 @@
     # Keystone - nothing works without keystone
     ENABLED_SERVICES=key
     # Nova - services to support libvirt based openstack clouds
-    ENABLED_SERVICES+=,n-api,n-cpu,n-cond,n-sch,n-novnc,n-cauth
+    ENABLED_SERVICES+=,n-api,n-cpu,n-cond,n-sch,n-novnc,n-cauth,n-api-meta
     # Placement service needed for Nova
     ENABLED_SERVICES+=,placement-api,placement-client
     # Glance services needed for Nova
@@ -65,7 +65,7 @@
     # Dashboard
     ENABLED_SERVICES+=,horizon
     # Additional services
-    ENABLED_SERVICES+=,rabbit,tempest,mysql,dstat
+    ENABLED_SERVICES+=,rabbit,tempest,mysql,etcd3,dstat
 fi
 
 # Global toggle for enabling services under mod_wsgi. If this is set to
@@ -77,25 +77,37 @@
 # Set the default Nova APIs to enable
 NOVA_ENABLED_APIS=osapi_compute,metadata
 
+# CELLSV2_SETUP - how we should configure services with cells v2
+#
+# - superconductor - one conductor for the api services, and one per
+#   cell managing the compute services. This is preferred.
+# - singleconductor - one conductor for the whole deployment. This is
+#   not recommended and will be removed in the future.
+CELLSV2_SETUP=${CELLSV2_SETUP:-"superconductor"}
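+# For example, to opt into the legacy layout while it still exists, set
+# this in local.conf (illustrative): CELLSV2_SETUP="singleconductor"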
+
 # Set the root URL for Horizon
 HORIZON_APACHE_ROOT="/dashboard"
 
-# Whether to use 'dev mode' for screen windows. Dev mode works by
-# stuffing text into the screen windows so that a developer can use
-# ctrl-c, up-arrow, enter to restart the service. Starting services
-# this way is slightly unreliable, and a bit slower, so this can
-# be disabled for automated testing by setting this value to False.
-USE_SCREEN=$(trueorfalse True USE_SCREEN)
+# Whether to use systemd to manage services; we only do this from
+# Queens forward.
+USE_SYSTEMD="True"
+USER_UNITS=$(trueorfalse False USER_UNITS)
+if [[ "$USER_UNITS" == "True" ]]; then
+    SYSTEMD_DIR="$HOME/.local/share/systemd/user"
+    SYSTEMCTL="systemctl --user"
+else
+    SYSTEMD_DIR="/etc/systemd/system"
+    SYSTEMCTL="sudo systemctl"
+fi
 
-# When using screen, should we keep a log file on disk?  You might
-# want this False if you have a long-running setup where verbose logs
-# can fill-up the host.
-# XXX: Ideally screen itself would be configured to log but just not
-# activate.  This isn't possible with the screerc syntax.  Temporary
-# logging can still be used by a developer with:
-#    C-a : logfile foo
-#    C-a : log on
-SCREEN_IS_LOGGING=$(trueorfalse True SCREEN_IS_LOGGING)
+
+# Whether or not to enable Kernel Samepage Merging (KSM) if available.
+# This allows programs that mark their memory as mergeable to share
+# memory pages if they are identical. This is particularly useful with
+# libvirt backends. This reduces memory usage at the cost of CPU overhead
+# to scan memory. We default to enabling it because we tend to be more
+# memory constrained than CPU bound.
+ENABLE_KSM=$(trueorfalse True ENABLE_KSM)
 
 # Passwords generated by interactive devstack runs
 if [[ -r $RC_DIR/.localrc.password ]]; then
@@ -118,10 +130,12 @@
 # When Python 3 is supported by an application, adding the specific
 # version of Python 3 to this variable will install the app using that
 # version of the interpreter instead of 2.7.
-export PYTHON3_VERSION=${PYTHON3_VERSION:-3.5}
+_DEFAULT_PYTHON3_VERSION="$(_get_python_version python3)"
+export PYTHON3_VERSION=${PYTHON3_VERSION:-${_DEFAULT_PYTHON3_VERSION:-3.5}}
 
 # Just to be more explicit on the Python 2 version to use.
-export PYTHON2_VERSION=${PYTHON2_VERSION:-2.7}
+_DEFAULT_PYTHON2_VERSION="$(_get_python_version python2)"
+export PYTHON2_VERSION=${PYTHON2_VERSION:-${_DEFAULT_PYTHON2_VERSION:-2.7}}
 
 # allow local overrides of env variables, including repo config
 if [[ -f $RC_DIR/localrc ]]; then
@@ -200,6 +214,12 @@
 # Zero disables timeouts
 GIT_TIMEOUT=${GIT_TIMEOUT:-0}
 
+# How we should handle WSGI deployments. By default we allow two
+# modes: "uwsgi", which runs the service under uwsgi with an apache
+# proxy in front of it, and "mod_wsgi", which runs the service inside
+# apache. mod_wsgi is deprecated; don't use it.
+WSGI_MODE=${WSGI_MODE:-"uwsgi"}
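+# For example, to fall back to the deprecated mode (illustrative, not
+# recommended), set in local.conf: WSGI_MODE="mod_wsgi"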
+
 # Repositories
 # ------------
 
@@ -226,6 +246,7 @@
 # Setting the variable to 'ALL' will activate the download for all
 # libraries.
 
+DEVSTACK_SERIES="pike"
 
 ##############
 #
@@ -295,6 +316,11 @@
 GITREPO["python-brick-cinderclient-ext"]=${BRICK_CINDERCLIENT_REPO:-${GIT_BASE}/openstack/python-brick-cinderclient-ext.git}
 GITBRANCH["python-brick-cinderclient-ext"]=${BRICK_CINDERCLIENT_BRANCH:-master}
 
+# python barbican client library
+GITREPO["python-barbicanclient"]=${BARBICANCLIENT_REPO:-${GIT_BASE}/openstack/python-barbicanclient.git}
+GITBRANCH["python-barbicanclient"]=${BARBICANCLIENT_BRANCH:-master}
+GITDIR["python-barbicanclient"]=$DEST/python-barbicanclient
+
 # python glance client library
 GITREPO["python-glanceclient"]=${GLANCECLIENT_REPO:-${GIT_BASE}/openstack/python-glanceclient.git}
 GITBRANCH["python-glanceclient"]=${GLANCECLIENT_BRANCH:-master}
@@ -331,6 +357,10 @@
 # this doesn't exist in a lib file, so set it here
 GITDIR["python-openstackclient"]=$DEST/python-openstackclient
 
+# placement-api CLI
+GITREPO["osc-placement"]=${OSC_PLACEMENT_REPO:-${GIT_BASE}/openstack/osc-placement.git}
+GITBRANCH["osc-placement"]=${OSC_PLACEMENT_BRANCH:-master}
+
 
 ###################
 #
@@ -339,6 +369,10 @@
 #
 ###################
 
+# castellan key manager interface
+GITREPO["castellan"]=${CASTELLAN_REPO:-${GIT_BASE}/openstack/castellan.git}
+GITBRANCH["castellan"]=${CASTELLAN_BRANCH:-master}
+
 # cliff command line framework
 GITREPO["cliff"]=${CLIFF_REPO:-${GIT_BASE}/openstack/cliff.git}
 GITBRANCH["cliff"]=${CLIFF_BRANCH:-master}
@@ -458,18 +492,14 @@
 #
 ##################
 
+# cursive library
+GITREPO["cursive"]=${CURSIVE_REPO:-${GIT_BASE}/openstack/cursive.git}
+GITBRANCH["cursive"]=${CURSIVE_BRANCH:-master}
+
 # glance store library
 GITREPO["glance_store"]=${GLANCE_STORE_REPO:-${GIT_BASE}/openstack/glance_store.git}
 GITBRANCH["glance_store"]=${GLANCE_STORE_BRANCH:-master}
 
-# heat-cfntools server agent
-HEAT_CFNTOOLS_REPO=${HEAT_CFNTOOLS_REPO:-${GIT_BASE}/openstack/heat-cfntools.git}
-HEAT_CFNTOOLS_BRANCH=${HEAT_CFNTOOLS_BRANCH:-master}
-
-# heat example templates and elements
-HEAT_TEMPLATES_REPO=${HEAT_TEMPLATES_REPO:-${GIT_BASE}/openstack/heat-templates.git}
-HEAT_TEMPLATES_BRANCH=${HEAT_TEMPLATES_BRANCH:-master}
-
 # django openstack_auth library
 GITREPO["django_openstack_auth"]=${HORIZONAUTH_REPO:-${GIT_BASE}/openstack/django_openstack_auth.git}
 GITBRANCH["django_openstack_auth"]=${HORIZONAUTH_BRANCH:-master}
@@ -504,6 +534,10 @@
 GITREPO["osc-lib"]=${OSC_LIB_REPO:-${GIT_BASE}/openstack/osc-lib.git}
 GITBRANCH["osc-lib"]=${OSC_LIB_BRANCH:-master}
 
+# python-openstacksdk OpenStack Python SDK
+GITREPO["python-openstacksdk"]=${OPENSTACKSDK_REPO:-${GIT_BASE}/openstack/python-openstacksdk.git}
+GITBRANCH["python-openstacksdk"]=${OPENSTACKSDK_BRANCH:-master}
+
 # ironic common lib
 GITREPO["ironic-lib"]=${IRONIC_LIB_REPO:-${GIT_BASE}/openstack/ironic-lib.git}
 GITBRANCH["ironic-lib"]=${IRONIC_LIB_BRANCH:-master}
@@ -520,6 +554,10 @@
 GITBRANCH["neutron-lib"]=${NEUTRON_LIB_BRANCH:-master}
 GITDIR["neutron-lib"]=$DEST/neutron-lib
 
+# os-traits library for resource provider traits in the placement service
+GITREPO["os-traits"]=${OS_TRAITS_REPO:-${GIT_BASE}/openstack/os-traits.git}
+GITBRANCH["os-traits"]=${OS_TRAITS_BRANCH:-master}
+
 ##################
 #
 #  TripleO / Heat Agent Components
@@ -556,8 +594,8 @@
 IRONIC_PYTHON_AGENT_BRANCH=${IRONIC_PYTHON_AGENT_BRANCH:-master}
 
 # a websockets/html5 or flash powered VNC console for vm instances
-NOVNC_REPO=${NOVNC_REPO:-https://github.com/kanaka/noVNC.git}
-NOVNC_BRANCH=${NOVNC_BRANCH:-master}
+NOVNC_REPO=${NOVNC_REPO:-https://github.com/novnc/noVNC.git}
+NOVNC_BRANCH=${NOVNC_BRANCH:-stable/v0.6}
 
 # a websockets/html5 or flash powered SPICE console for vm instances
 SPICE_REPO=${SPICE_REPO:-http://anongit.freedesktop.org/git/spice/spice-html5.git}
@@ -574,8 +612,12 @@
 case "$VIRT_DRIVER" in
     ironic|libvirt)
         LIBVIRT_TYPE=${LIBVIRT_TYPE:-kvm}
-        if [[ "$os_VENDOR" =~ (Debian) ]]; then
-            LIBVIRT_GROUP=libvirt
+        if [[ "$os_VENDOR" =~ (Debian|Ubuntu) ]]; then
+            # The group changes with newer libvirt. Older Ubuntu used
+            # 'libvirtd', but newer versions use 'libvirt' like Debian. Do a
+            # quick check to see if the libvirtd group already exists, to
+            # handle grenade's case.
+            LIBVIRT_GROUP=$(cut -d ':' -f 1 /etc/group | grep 'libvirtd$' || true)
+            LIBVIRT_GROUP=${LIBVIRT_GROUP:-libvirt}
         else
             LIBVIRT_GROUP=libvirtd
         fi
@@ -601,6 +643,8 @@
         ;;
 esac
 
+# By default, devstack will use Ubuntu Cloud Archive.
+ENABLE_UBUNTU_CLOUD_ARCHIVE=$(trueorfalse True ENABLE_UBUNTU_CLOUD_ARCHIVE)
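+# For example, if you need the stock Xenial libvirt (say, for nested
+# virtualization), set in local.conf:
+# ENABLE_UBUNTU_CLOUD_ARCHIVE="False"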
 
 # Images
 # ------
@@ -640,39 +684,71 @@
             case "$LIBVIRT_TYPE" in
                 lxc) # the cirros root disk in the uec tarball is empty, so it will not work for lxc
                     DEFAULT_IMAGE_NAME=${DEFAULT_IMAGE_NAME:-cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-rootfs}
-                    IMAGE_URLS+="http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-rootfs.img.gz";;
+                    DEFAULT_IMAGE_FILE_NAME=${DEFAULT_IMAGE_FILE_NAME:-cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-rootfs.img.gz}
+                    IMAGE_URLS+="http://download.cirros-cloud.net/${CIRROS_VERSION}/${DEFAULT_IMAGE_FILE_NAME}";;
                 *) # otherwise, use the qcow image
-                    DEFAULT_IMAGE_NAME=${DEFAULT_IMAGE_NAME:-cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-disk.img}
-                    IMAGE_URLS+="http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-disk.img";;
+                    DEFAULT_IMAGE_NAME=${DEFAULT_IMAGE_NAME:-cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-disk}
+                    DEFAULT_IMAGE_FILE_NAME=${DEFAULT_IMAGE_FILE_NAME:-cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-disk.img}
+                    IMAGE_URLS+="http://download.cirros-cloud.net/${CIRROS_VERSION}/${DEFAULT_IMAGE_FILE_NAME}";;
                 esac
             ;;
         vsphere)
             DEFAULT_IMAGE_NAME=${DEFAULT_IMAGE_NAME:-cirros-0.3.2-i386-disk.vmdk}
-            IMAGE_URLS+="http://partnerweb.vmware.com/programs/vmdkimage/cirros-0.3.2-i386-disk.vmdk";;
+            DEFAULT_IMAGE_FILE_NAME=${DEFAULT_IMAGE_FILE_NAME:-$DEFAULT_IMAGE_NAME}
+            IMAGE_URLS+="http://partnerweb.vmware.com/programs/vmdkimage/${DEFAULT_IMAGE_FILE_NAME}";;
         xenserver)
-            DEFAULT_IMAGE_NAME=${DEFAULT_IMAGE_NAME:-cirros-0.3.4-x86_64-disk}
-            IMAGE_URLS+="http://ca.downloads.xensource.com/OpenStack/cirros-0.3.4-x86_64-disk.vhd.tgz"
+            DEFAULT_IMAGE_NAME=${DEFAULT_IMAGE_NAME:-cirros-0.3.5-x86_64-disk}
+            DEFAULT_IMAGE_FILE_NAME=${DEFAULT_IMAGE_FILE_NAME:-cirros-0.3.5-x86_64-disk.vhd.tgz}
+            IMAGE_URLS+="http://ca.downloads.xensource.com/OpenStack/cirros-0.3.5-x86_64-disk.vhd.tgz"
             IMAGE_URLS+=",http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-x86_64-uec.tar.gz";;
     esac
     DOWNLOAD_DEFAULT_IMAGES=False
 fi
 
-# Staging area for new images.  These images are cached by a run of
-# ./tools/image_list.sh during CI image build (see
-# project-config:nodepool/elements/cache-devstack/extra-data.d/55-cache-devstack-repos).
-#
-# To avoid CI failures grabbing the images, new images should be here
-# for at least 24hrs (nodepool builds images at 14:00UTC) so the they
-# are in the cache.
-PRECACHE_IMAGES=$(trueorfalse False PRECACHE_IMAGES)
-if [[ "$PRECACHE_IMAGES" == "True" ]]; then
-    # required for trove devstack tests; see
-    #  git.openstack.org/cgit/openstack/trove/tree/devstack/plugin.sh
-    IMAGE_URL="http://tarballs.openstack.org/trove/images/ubuntu/mysql.qcow2"
-    if ! [[ "$IMAGE_URLS"  =~ "$IMAGE_URL" ]]; then
-        IMAGE_URLS+=",$IMAGE_URL"
+# This is a comma-separated list of extra URLs to be listed for
+# download by the tools/image_list.sh script.  CI environments can
+# pre-download these URLs and place them in $FILES.  Later scripts can
+# then use "get_extra_file <url>", which will print out the path to the
+# file; it will either be downloaded on demand or acquired from the
+# cache if there.
+EXTRA_CACHE_URLS=""
+
+# etcd3 defaults
+ETCD_VERSION=${ETCD_VERSION:-v3.1.7}
+ETCD_SHA256_AMD64="4fde194bbcd259401e2b5c462dfa579ee7f6af539f13f130b8f5b4f52e3b3c52"
+# NOTE(sdague): etcd v3.1.7 doesn't have anything for these architectures, though 3.2.0 does.
+ETCD_SHA256_ARM64=""
+ETCD_SHA256_PPC64=""
+ETCD_SHA256_S390X=""
+# Make sure etcd3 downloads the correct architecture
+if is_arch "x86_64"; then
+    ETCD_ARCH="amd64"
+    ETCD_SHA256=${ETCD_SHA256:-$ETCD_SHA256_AMD64}
+elif is_arch "aarch64"; then
+    ETCD_ARCH="arm64"
+    ETCD_SHA256=${ETCD_SHA256:-$ETCD_SHA256_ARM64}
+elif is_arch "ppc64le"; then
+    ETCD_ARCH="ppc64le"
+    ETCD_SHA256=${ETCD_SHA256:-$ETCD_SHA256_PPC64}
+elif is_arch "s390x"; then
+    # An etcd3 binary for s390x is not available on github like it is
+    # for other arches. Only continue if a custom download URL was
+    # provided.
+    if [[ -n "${ETCD_DOWNLOAD_URL}" ]]; then
+        ETCD_ARCH="s390x"
+        ETCD_SHA256=${ETCD_SHA256:-$ETCD_SHA256_S390X}
+    else
+        exit_distro_not_supported "etcd3. No custom ETCD_DOWNLOAD_URL provided."
     fi
+else
+    exit_distro_not_supported "invalid hardware type - $ETCD_ARCH"
 fi
+ETCD_DOWNLOAD_URL=${ETCD_DOWNLOAD_URL:-https://github.com/coreos/etcd/releases/download}
+ETCD_NAME=etcd-$ETCD_VERSION-linux-$ETCD_ARCH
+ETCD_DOWNLOAD_FILE=$ETCD_NAME.tar.gz
+ETCD_DOWNLOAD_LOCATION=$ETCD_DOWNLOAD_URL/$ETCD_VERSION/$ETCD_DOWNLOAD_FILE
+# etcd is always required, so place it into the list of pre-cached downloads
+EXTRA_CACHE_URLS+=",$ETCD_DOWNLOAD_LOCATION"
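+# Illustrative consumer of the caching contract described above
+# (hypothetical plugin code, not part of stackrc):
+#
+#     etcd_tarball=$(get_extra_file $ETCD_DOWNLOAD_LOCATION)
+#     tar xzvf $etcd_tarball -C $FILES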
 
 # Detect duplicate values in IMAGE_URLS
 for image_url in ${IMAGE_URLS//,/ }; do
@@ -697,9 +773,6 @@
 
 PUBLIC_INTERFACE=${PUBLIC_INTERFACE:-""}
 
-# Set default screen name
-SCREEN_NAME=${SCREEN_NAME:-stack}
-
 # Allow the use of an alternate protocol (such as https) for service endpoints
 SERVICE_PROTOCOL=${SERVICE_PROTOCOL:-http}
 
@@ -720,6 +793,9 @@
 # Service graceful shutdown timeout
 SERVICE_GRACEFUL_SHUTDOWN_TIMEOUT=${SERVICE_GRACEFUL_SHUTDOWN_TIMEOUT:-5}
 
+# Timeout for service worker processes
+WORKER_TIMEOUT=${WORKER_TIMEOUT:-90}
+
 # Support alternative yum -- in future Fedora 'dnf' will become the
 # only supported installer, but for now 'yum' and 'dnf' are both
 # available in parallel with compatible CLIs.  Allow manual switching
@@ -813,35 +889,12 @@
 # Set to 0 to disable shallow cloning
 GIT_DEPTH=${GIT_DEPTH:-0}
 
-# Use native SSL for servers in ``SSL_ENABLED_SERVICES``
-USE_SSL=$(trueorfalse False USE_SSL)
-
 # We may not need to recreate database in case 2 Keystone services
 # sharing the same database. It would be useful for multinode Grenade tests.
 RECREATE_KEYSTONE_DB=$(trueorfalse True RECREATE_KEYSTONE_DB)
 
-# ebtables is inherently racey. If you run it by two or more processes
-# simultaneously it will collide, badly, in the kernel and produce
-# failures or corruption of ebtables. The only way around it is for
-# all tools running ebtables to only ever do so with the --concurrent
-# flag. This requires libvirt >= 1.2.11.
-#
-# If you don't have this then the following work around will replace
-# ebtables with a wrapper script so that it is safe to run without
-# that flag.
-EBTABLES_RACE_FIX=$(trueorfalse False EBTABLES_RACE_FIX)
-
 # Following entries need to be last items in file
 
-# Compatibility bits required by other callers like Grenade
-
-# Old way was using SCREEN_LOGDIR to locate those logs and LOGFILE for the stack.sh trace log.
-# LOGFILE       SCREEN_LOGDIR       output
-# not set       not set             no log files
-# set           not set             stack.sh log to LOGFILE
-# not set       set                 screen logs to SCREEN_LOGDIR
-# set           set                 stack.sh log to LOGFILE, screen logs to SCREEN_LOGDIR
-
 # New way is LOGDIR for all logs and LOGFILE for stack.sh trace log, but if not fully-qualified will be in LOGDIR
 # LOGFILE       LOGDIR              output
 # not set       not set             (new) set LOGDIR from default
@@ -849,9 +902,6 @@
 # not set       set                 screen logs to LOGDIR
 # set           set                 stack.sh log to LOGFILE, screen logs to LOGDIR
 
-# For compat, if SCREEN_LOGDIR is set, it will be used to create back-compat symlinks to the LOGDIR
-# symlinks to SCREEN_LOGDIR (compat)
-
 # Set up new logging defaults
 if [[ -z "${LOGDIR:-}" ]]; then
     default_logdir=$DEST/logs
@@ -866,12 +916,6 @@
             # LOGFILE had no path, set a default
             LOGDIR="$default_logdir"
         fi
-
-        # Check for duplication
-        if [[ "${SCREEN_LOGDIR:-}" == "${LOGDIR}" ]]; then
-            # We don't need the symlinks since it's the same directory
-            unset SCREEN_LOGDIR
-        fi
     fi
     unset default_logdir logfile
 fi
@@ -879,6 +923,9 @@
 # ``LOGDIR`` is always set at this point so it is not useful as a 'enable' for service logs
 # ``SCREEN_LOGDIR`` may be set, it is useful to enable the compat symlinks
 
+# System-wide ulimit file descriptors override
+ULIMIT_NOFILE=${ULIMIT_NOFILE:-2048}
+
 # Local variables:
 # mode: shell-script
 # End:
diff --git a/tests/run-process.sh b/tests/run-process.sh
deleted file mode 100755
index 301b9a0..0000000
--- a/tests/run-process.sh
+++ /dev/null
@@ -1,109 +0,0 @@
-#!/bin/bash
-# tests/exec.sh - Test DevStack run_process() and stop_process()
-#
-# exec.sh start|stop|status
-#
-# Set USE_SCREEN True|False to change use of screen.
-#
-# This script emulates the basic exec environment in ``stack.sh`` to test
-# the process spawn and kill operations.
-
-if [[ -z $1 ]]; then
-    echo "$0 start|stop"
-    exit 1
-fi
-
-TOP_DIR=$(cd $(dirname "$0")/.. && pwd)
-source $TOP_DIR/functions
-
-USE_SCREEN=${USE_SCREEN:-False}
-
-ENABLED_SERVICES=fake-service
-
-SERVICE_DIR=/tmp
-SCREEN_NAME=test
-SCREEN_LOGDIR=${SERVICE_DIR}/${SCREEN_NAME}
-
-
-# Kill background processes on exit
-trap clean EXIT
-clean() {
-    local r=$?
-    jobs -p
-    kill >/dev/null 2>&1 $(jobs -p)
-    exit $r
-}
-
-
-# Exit on any errors so that errors don't compound
-trap failed ERR
-failed() {
-    local r=$?
-    jobs -p
-    kill >/dev/null 2>&1 $(jobs -p)
-    set +o xtrace
-    [ -n "$LOGFILE" ] && echo "${0##*/} failed: full log in $LOGFILE"
-    exit $r
-}
-
-function status {
-    if [[ -r $SERVICE_DIR/$SCREEN_NAME/fake-service.pid ]]; then
-        pstree -pg $(cat $SERVICE_DIR/$SCREEN_NAME/fake-service.pid)
-    fi
-    ps -ef | grep fake
-}
-
-function setup_screen {
-if [[ ! -d $SERVICE_DIR/$SCREEN_NAME ]]; then
-    rm -rf $SERVICE_DIR/$SCREEN_NAME
-    mkdir -p $SERVICE_DIR/$SCREEN_NAME
-fi
-
-if [[ "$USE_SCREEN" == "True" ]]; then
-    # Create a new named screen to run processes in
-    screen -d -m -S $SCREEN_NAME -t shell -s /bin/bash
-    sleep 1
-
-    # Set a reasonable status bar
-    if [ -z "$SCREEN_HARDSTATUS" ]; then
-        SCREEN_HARDSTATUS='%{= .} %-Lw%{= .}%> %n%f %t*%{= .}%+Lw%< %-=%{g}(%{d}%H/%l%{g})'
-    fi
-    screen -r $SCREEN_NAME -X hardstatus alwayslastline "$SCREEN_HARDSTATUS"
-fi
-
-# Clear screen rc file
-SCREENRC=$TOP_DIR/tests/$SCREEN_NAME-screenrc
-if [[ -e $SCREENRC ]]; then
-    echo -n > $SCREENRC
-fi
-}
-
-# Mimic logging
-    # Set up output redirection without log files
-    # Copy stdout to fd 3
-    exec 3>&1
-    if [[ "$VERBOSE" != "True" ]]; then
-        # Throw away stdout and stderr
-        #exec 1>/dev/null 2>&1
-        :
-    fi
-    # Always send summary fd to original stdout
-    exec 6>&3
-
-
-if [[ "$1" == "start" ]]; then
-    echo "Start service"
-    setup_screen
-    run_process fake-service "$TOP_DIR/tests/fake-service.sh"
-    sleep 1
-    status
-elif [[ "$1" == "stop" ]]; then
-    echo "Stop service"
-    stop_process fake-service
-    status
-elif [[ "$1" == "status" ]]; then
-    status
-else
-    echo "Unknown command"
-    exit 1
-fi
diff --git a/tests/test_functions.sh b/tests/test_functions.sh
index 8aae23d..adf20cd 100755
--- a/tests/test_functions.sh
+++ b/tests/test_functions.sh
@@ -224,7 +224,7 @@
 
 # test against removed package...was a bug on Ubuntu
 if is_ubuntu; then
-    PKG=cowsay
+    PKG=cowsay-off
     if ! (dpkg -s $PKG >/dev/null 2>&1); then
         # it was never installed...set up the condition
         sudo apt-get install -y cowsay >/dev/null 2>&1
diff --git a/tests/test_libs_from_pypi.sh b/tests/test_libs_from_pypi.sh
index 415fec5..0bd8d49 100755
--- a/tests/test_libs_from_pypi.sh
+++ b/tests/test_libs_from_pypi.sh
@@ -36,13 +36,15 @@
 ALL_LIBS+=" python-cinderclient glance_store oslo.concurrency oslo.db"
 ALL_LIBS+=" oslo.versionedobjects oslo.vmware keystonemiddleware"
 ALL_LIBS+=" oslo.serialization django_openstack_auth"
-ALL_LIBS+=" python-openstackclient osc-lib os-client-config oslo.rootwrap"
-ALL_LIBS+=" oslo.i18n oslo.utils python-swiftclient"
+ALL_LIBS+=" python-openstackclient osc-lib osc-placement"
+ALL_LIBS+=" os-client-config oslo.rootwrap"
+ALL_LIBS+=" oslo.i18n oslo.utils python-openstacksdk python-swiftclient"
 ALL_LIBS+=" python-neutronclient tooz ceilometermiddleware oslo.policy"
-ALL_LIBS+=" debtcollector os-brick automaton futurist oslo.service"
-ALL_LIBS+=" oslo.cache oslo.reports osprofiler"
+ALL_LIBS+=" debtcollector os-brick os-traits automaton futurist oslo.service"
+ALL_LIBS+=" oslo.cache oslo.reports osprofiler cursive"
 ALL_LIBS+=" keystoneauth ironic-lib neutron-lib oslo.privsep"
 ALL_LIBS+=" diskimage-builder os-vif python-brick-cinderclient-ext"
+ALL_LIBS+=" castellan python-barbicanclient"
 
 # Generate the above list with
 # echo ${!GITREPO[@]}
diff --git a/tests/test_refs.sh b/tests/test_refs.sh
index bccca5d..65848cd 100755
--- a/tests/test_refs.sh
+++ b/tests/test_refs.sh
@@ -15,7 +15,7 @@
 
 echo "Ensuring we don't have crazy refs"
 
-REFS=`grep BRANCH stackrc | grep -v -- '-master'`
+REFS=`grep BRANCH stackrc | grep -v -- '-master' | grep -v 'NOVNC_BRANCH'`
 rc=$?
 if [[ $rc -eq 0 ]]; then
     echo "Branch defaults must be master. Found:"
diff --git a/tools/dstat.sh b/tools/dstat.sh
index 1c80fb7..01c6d9b 100755
--- a/tools/dstat.sh
+++ b/tools/dstat.sh
@@ -9,14 +9,14 @@
 # Assumes:
 #  - dstat command is installed
 
-# Retreive log directory as argument from calling script.
+# Retrieve log directory as argument from calling script.
 LOGDIR=$1
 
 # Command line arguments for primary DStat process.
-DSTAT_OPTS="-tcmndrylpg --top-cpu-adv --top-io-adv --top-mem --swap"
+DSTAT_OPTS="-tcmndrylpg --top-cpu-adv --top-io-adv --top-mem --swap --tcp"
 
 # Command-line arguments for secondary background DStat process.
-DSTAT_CSV_OPTS="-tcmndrylpg --output $LOGDIR/dstat-csv.log"
+DSTAT_CSV_OPTS="-tcmndrylpg --tcp --output $LOGDIR/dstat-csv.log"
 
 # Execute and background the secondary dstat process and discard its output.
 dstat $DSTAT_CSV_OPTS >& /dev/null &
diff --git a/tools/fixup_stuff.sh b/tools/fixup_stuff.sh
index 4dec95e..f1552ab 100755
--- a/tools/fixup_stuff.sh
+++ b/tools/fixup_stuff.sh
@@ -67,6 +67,40 @@
     echo_summary "WARNING: unable to reserve keystone ports"
 fi
 
+# Ubuntu Cloud Archive
+#---------------------
+# We've found that Libvirt on Xenial is flaky and crashes enough to be
+# a regular top e-r bug. Opt into Ubuntu Cloud Archive if on Xenial to
+# get newer Libvirt.
+# Make it possible to switch this off via an environment variable,
+# because libvirt 2.5.0 doesn't handle nested virtualization very well
+# and being able to opt out is required for the trove development
+# environment.
+if [[ "${ENABLE_UBUNTU_CLOUD_ARCHIVE}" == "True" && "$DISTRO" = "xenial" ]]; then
+    # This pulls in apt-add-repository
+    install_package "software-properties-common"
+    # Use UCA for newer libvirt. Should give us libvirt 2.5.0.
+    if [[ -f /etc/ci/mirror_info.sh ]] ; then
+        # If we are on a nodepool provided host and it has told us about where
+        # we can find local mirrors then use that mirror.
+        source /etc/ci/mirror_info.sh
+
+        sudo apt-add-repository -y "deb $NODEPOOL_UCA_MIRROR xenial-updates/ocata main"
+    else
+        # Otherwise use upstream UCA
+        sudo add-apt-repository -y cloud-archive:ocata
+    fi
+
+    # Disable use of libvirt wheel since a cached wheel build might be
+    # against older libvirt binary.  Particularly a problem if using
+    # the openstack wheel mirrors, but can hit locally too.
+    # TODO(clarkb) figure out how to use upstream wheel again.
+    iniset -sudo /etc/pip.conf "global" "no-binary" "libvirt-python"
+
+    # Force update our APT repos, since we added UCA above.
+    REPOS_UPDATED=False
+    apt_get_update
+fi
+
 
 # Python Packages
 # ---------------
@@ -123,7 +157,7 @@
         # [1] https://bugzilla.redhat.com/show_bug.cgi?id=1099031
         # [2] https://bugs.launchpad.net/neutron/+bug/1455303
         # [3] https://github.com/redhat-openstack/openstack-puppet-modules/blob/master/firewall/manifests/linux/redhat.pp
-        # [4] http://docs.openstack.org/developer/devstack/guides/neutron.html
+        # [4] https://docs.openstack.org/devstack/latest/guides/neutron.html
         if is_package_installed firewalld; then
             sudo systemctl disable firewalld
             # The iptables service files are no longer included by default,
@@ -168,5 +202,22 @@
 # on python-virtualenv), first install the distro python-virtualenv
 # to satisfy any dependencies then use pip to overwrite it.
 
-install_package python-virtualenv
-pip_install -U --force-reinstall virtualenv
+# ... but, for infra builds, the pip-and-virtualenv [1] element has
+# already done this to ensure the latest pip, virtualenv and
+# setuptools on the base image for all platforms.  It has also added
+# the packages to the yum/dnf ignore list to prevent them being
+# overwritten with old versions.  F26 and dnf 2.0 has changed
+# behaviour that means re-installing python-virtualenv fails [2].
+# Thus we do a quick check if we're in the infra environment by
+# looking for the mirror config script before doing this, and just
+# skip it if so.
+
+# [1] https://git.openstack.org/cgit/openstack/diskimage-builder/tree/ \
+#        diskimage_builder/elements/pip-and-virtualenv/ \
+#            install.d/pip-and-virtualenv-source-install/04-install-pip
+# [2] https://bugzilla.redhat.com/show_bug.cgi?id=1477823
+
+if [[ ! -f /etc/ci/mirror_info.sh ]]; then
+    install_package python-virtualenv
+    pip_install -U --force-reinstall virtualenv
+fi
diff --git a/tools/image_list.sh b/tools/image_list.sh
index 27b3d46..3a27c4a 100755
--- a/tools/image_list.sh
+++ b/tools/image_list.sh
@@ -1,5 +1,14 @@
 #!/bin/bash
 
+# Print out a list of image and other files to download for caching.
+# This is mostly used by the OpenStack infrastructure during daily
+# image builds to save the large images to /opt/cache/files (see [1]).
+#
+# The two lists of URLs downloaded are IMAGE_URLS and
+# EXTRA_CACHE_URLS, which are set up in stackrc.
+#
+# [1] project-config:nodepool/elements/cache-devstack/extra-data.d/55-cache-devstack-repos
+
 # Keep track of the DevStack directory
 TOP_DIR=$(cd $(dirname "$0")/.. && pwd)
 
@@ -31,12 +40,20 @@
     ALL_IMAGES+=$URLS
 done
 
-# Make a nice list
-echo $ALL_IMAGES | tr ',' '\n' | sort | uniq
-
 # Sanity check - ensure we have a minimum number of images
 num=$(echo $ALL_IMAGES | tr ',' '\n' | sort | uniq | wc -l)
-if [[ "$num" -lt 5 ]]; then
+if [[ "$num" -lt 4 ]]; then
     echo "ERROR: We only found $num images in $ALL_IMAGES, which can't be right."
     exit 1
 fi
+
+# These are extra non-image files that we want pre-cached.  They are
+# kept in a separate list because devstack loops over the IMAGE_LIST to
+# upload files to glance and these aren't images.  (This was a bit of
+# an afterthought, which is why the naming around this is very
+# image-centric.)
+URLS=$(source $TOP_DIR/stackrc && echo $EXTRA_CACHE_URLS)
+ALL_IMAGES+=$URLS
+
+# Make a nice combined list
+echo $ALL_IMAGES | tr ',' '\n' | sort | uniq
diff --git a/tools/install_ebtables_workaround.sh b/tools/install_ebtables_workaround.sh
deleted file mode 100755
index 45ced87..0000000
--- a/tools/install_ebtables_workaround.sh
+++ /dev/null
@@ -1,31 +0,0 @@
-#!/bin/bash -eu
-#
-# Copyright 2015 Hewlett-Packard Development Company, L.P.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-#
-# This replaces the ebtables on your system with a wrapper script that
-# does implicit locking. This is needed if libvirt < 1.2.11 on your platform.
-
-EBTABLES=/sbin/ebtables
-EBTABLESREAL=/sbin/ebtables.real
-FILES=$TOP_DIR/files
-
-if [[ -f "$EBTABLES" ]]; then
-    if file $EBTABLES | grep ELF; then
-        sudo mv $EBTABLES $EBTABLESREAL
-        sudo install -m 0755 $FILES/ebtables.workaround $EBTABLES
-        echo "Replaced ebtables with locking workaround"
-    fi
-fi
diff --git a/tools/install_prereqs.sh b/tools/install_prereqs.sh
index da59093..6189085 100755
--- a/tools/install_prereqs.sh
+++ b/tools/install_prereqs.sh
@@ -88,6 +88,15 @@
     export PYTHON=$(which python 2>/dev/null)
 fi
 
+if is_suse; then
+    # Now reinstall cryptography from source, in order to rebuild it against
+    # the system libssl rather than the bundled OpenSSL 1.1, which segfaults
+    # when combined with a system-provided OpenSSL 1.0.
+    # see https://github.com/pyca/cryptography/issues/3804 and followup issues
+    sudo pip install cryptography --no-binary :all:
+fi
+
+
 # Mark end of run
 # ---------------
 
diff --git a/tools/memory_tracker.sh b/tools/memory_tracker.sh
new file mode 100755
index 0000000..63f25ca
--- /dev/null
+++ b/tools/memory_tracker.sh
@@ -0,0 +1,120 @@
+#!/bin/bash
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+set -o errexit
+
+PYTHON=${PYTHON:-python}
+
+# time to sleep between checks
+SLEEP_TIME=20
+
+# MemAvailable is the best estimation and has built-in heuristics
+# around reclaimable memory.  However, it is not available until 3.14
+# kernel (i.e. Ubuntu LTS Trusty misses it).  In that case, we fall
+# back to free+buffers+cache as the available memory.
+USE_MEM_AVAILABLE=0
+if grep -q '^MemAvailable:' /proc/meminfo; then
+    USE_MEM_AVAILABLE=1
+fi
+
+function get_mem_unevictable {
+    awk '/^Unevictable:/ {print $2}' /proc/meminfo
+}
+
+function get_mem_available {
+    if [[ $USE_MEM_AVAILABLE -eq 1 ]]; then
+        awk '/^MemAvailable:/ {print $2}' /proc/meminfo
+    else
+        awk '/^MemFree:/ {free=$2}
+            /^Buffers:/ {buffers=$2}
+            /^Cached:/  {cached=$2}
+            END { print free+buffers+cached }' /proc/meminfo
+    fi
+}
+
+function tracker {
+    local low_point
+    local unevictable_point
+    low_point=$(get_mem_available)
+    # log mlocked memory at least on first iteration
+    unevictable_point=0
+    while true; do
+
+        local mem_available
+        mem_available=$(get_mem_available)
+
+        local unevictable
+        unevictable=$(get_mem_unevictable)
+
+        if [ $mem_available -lt $low_point -o $unevictable -ne $unevictable_point ]; then
+            echo "[[["
+            date
+
+            # whenever we see less memory available than last time, dump the
+            # snapshot of current usage; i.e. checking the latest entry in the file
+            # will give the peak-memory usage
+            if [[ $mem_available -lt $low_point ]]; then
+                low_point=$mem_available
+                echo "---"
+                # always available greppable output; given difference in
+                # meminfo output as described above...
+                echo "memory_tracker low_point: $mem_available"
+                echo "---"
+                cat /proc/meminfo
+                echo "---"
+                # would a hierarchical view be more useful (-H)?  output is
+                # not sorted by usage then, however, and the first
+                # question is "what's using up the memory"
+                #
+                # there are a lot of kernel threads, especially on an 8-cpu
+                # system.  do a best-effort removal to improve
+                # signal/noise ratio of output.
+                ps --sort=-pmem -eo pid:10,pmem:6,rss:15,ppid:10,cputime:10,nlwp:8,wchan:25,args:100 |
+                    grep -v ']$'
+            fi
+            echo "---"
+
+            # list processes that lock memory from swap
+            if [[ $unevictable -ne $unevictable_point ]]; then
+                unevictable_point=$unevictable
+                ${PYTHON} $(dirname $0)/mlock_report.py
+            fi
+
+            echo "]]]"
+        fi
+        sleep $SLEEP_TIME
+    done
+}
+
+function usage {
+    echo "Usage: $0 [-x] [-s N]" 1>&2
+    exit 1
+}
+
+while getopts ":s:x" opt; do
+    case $opt in
+        s)
+            SLEEP_TIME=$OPTARG
+            ;;
+        x)
+            set -o xtrace
+            ;;
+        *)
+            usage
+            ;;
+    esac
+done
+shift $((OPTIND-1))
+
+tracker
diff --git a/tools/mlock_report.py b/tools/mlock_report.py
new file mode 100755
index 0000000..07716b0
--- /dev/null
+++ b/tools/mlock_report.py
@@ -0,0 +1,53 @@
+#!/usr/bin/env python
+
+# This tool lists processes that lock memory pages from swapping to disk.
+
+import re
+
+import psutil
+
+
+LCK_SUMMARY_REGEX = re.compile(
+    r"^VmLck:\s+(?P<locked>[\d]+)\s+kB", re.MULTILINE)
+
+
+def main():
+    try:
+        print(_get_report())
+    except Exception as e:
+        print("Failure listing processes locking memory: %s" % str(e))
+        raise
+
+
+def _get_report():
+    mlock_users = []
+    for proc in psutil.process_iter():
+        # sadly psutil does not expose locked pages info, that's why we
+        # iterate over the /proc/%pid/status files manually
+        try:
+            s = open("%s/%d/status" % (psutil.PROCFS_PATH, proc.pid), 'r')
+        except EnvironmentError:
+            continue
+        with s:
+            for line in s:
+                result = LCK_SUMMARY_REGEX.search(line)
+                if result:
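+                    # VmLck is reported in kB; only processes that
+                    # actually hold locked memory are recorded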
+                    locked = int(result.group('locked'))
+                    if locked:
+                        mlock_users.append({'name': proc.name(),
+                                            'pid': proc.pid,
+                                            'locked': locked})
+
+    # produce a single line log message with per process mlock stats
+    if mlock_users:
+        return "; ".join(
+            "[%(name)s (pid:%(pid)s)]=%(locked)dKB" % args
+            # log heavy users first
+            for args in sorted(mlock_users, key=lambda d: d['locked'],
+                               reverse=True)
+        )
+    else:
+        return "no locked memory"
+
+
+if __name__ == "__main__":
+    main()
diff --git a/tools/peakmem_tracker.sh b/tools/peakmem_tracker.sh
deleted file mode 100755
index ecbd79a..0000000
--- a/tools/peakmem_tracker.sh
+++ /dev/null
@@ -1,98 +0,0 @@
-#!/bin/bash
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-set -o errexit
-
-# time to sleep between checks
-SLEEP_TIME=20
-
-# MemAvailable is the best estimation and has built-in heuristics
-# around reclaimable memory.  However, it is not available until 3.14
-# kernel (i.e. Ubuntu LTS Trusty misses it).  In that case, we fall
-# back to free+buffers+cache as the available memory.
-USE_MEM_AVAILBLE=0
-if grep -q '^MemAvailable:' /proc/meminfo; then
-    USE_MEM_AVAILABLE=1
-fi
-
-function get_mem_available {
-    if [[ $USE_MEM_AVAILABLE -eq 1 ]]; then
-        awk '/^MemAvailable:/ {print $2}' /proc/meminfo
-    else
-        awk '/^MemFree:/ {free=$2}
-            /^Buffers:/ {buffers=$2}
-            /^Cached:/  {cached=$2}
-            END { print free+buffers+cached }' /proc/meminfo
-    fi
-}
-
-# whenever we see less memory available than last time, dump the
-# snapshot of current usage; i.e. checking the latest entry in the
-# file will give the peak-memory usage
-function tracker {
-    local low_point
-    low_point=$(get_mem_available)
-    while [ 1 ]; do
-
-        local mem_available
-        mem_available=$(get_mem_available)
-
-        if [[ $mem_available -lt $low_point ]]; then
-            low_point=$mem_available
-            echo "[[["
-            date
-            echo "---"
-            # always available greppable output; given difference in
-            # meminfo output as described above...
-            echo "peakmem_tracker low_point: $mem_available"
-            echo "---"
-            cat /proc/meminfo
-            echo "---"
-            # would hierarchial view be more useful (-H)?  output is
-            # not sorted by usage then, however, and the first
-            # question is "what's using up the memory"
-            #
-            # there are a lot of kernel threads, especially on a 8-cpu
-            # system.  do a best-effort removal to improve
-            # signal/noise ratio of output.
-            ps --sort=-pmem -eo pid:10,pmem:6,rss:15,ppid:10,cputime:10,nlwp:8,wchan:25,args:100 |
-                grep -v ']$'
-            echo "]]]"
-        fi
-
-        sleep $SLEEP_TIME
-    done
-}
-
-function usage {
-    echo "Usage: $0 [-x] [-s N]" 1>&2
-    exit 1
-}
-
-while getopts ":s:x" opt; do
-    case $opt in
-        s)
-            SLEEP_TIME=$OPTARG
-            ;;
-        x)
-            set -o xtrace
-            ;;
-        *)
-            usage
-            ;;
-    esac
-done
-shift $((OPTIND-1))
-
-tracker
diff --git a/tools/worlddump.py b/tools/worlddump.py
index eb109b9..6fff149 100755
--- a/tools/worlddump.py
+++ b/tools/worlddump.py
@@ -223,6 +223,15 @@
         print("guru meditation report in %s log" % service)
 
 
+def var_core():
+    if os.path.exists('/var/core'):
+        _header("/var/core dumps")
+        # NOTE(ianw): see DEBUG_LIBVIRT_COREDUMPS.  We could think
+        # about getting backtraces out of these.  There are other
+        # tools out there that can do that sort of thing though.
+        _dump_cmd("ls -ltrah /var/core")
+
+
 def main():
     opts = get_options()
     fname = filename(opts.dir, opts.name)
@@ -238,6 +246,7 @@
         ebtables_dump()
         compute_consoles()
         guru_meditation_reports()
+        var_core()
 
 
 if __name__ == '__main__':
diff --git a/tools/xen/README.md b/tools/xen/README.md
index 7062ecb..9559e77 100644
--- a/tools/xen/README.md
+++ b/tools/xen/README.md
@@ -171,8 +171,3 @@
     umount "$mountdir"
     rm -rf "$mountdir"
 
-### Migrate OpenStack DomU to another host
-
-Given you need to migrate your DomU with OpenStack installed to another host,
-you need to set `XEN_INTEGRATION_BRIDGE` in localrc if neutron network is used.
-It is the bridge for `XEN_INT_BRIDGE_OR_NET_NAME` network created in Dom0
diff --git a/tools/xen/functions b/tools/xen/functions
index 93f3413..bc0c515 100644
--- a/tools/xen/functions
+++ b/tools/xen/functions
@@ -294,6 +294,18 @@
     # Assert it has a numeric nonzero value
     expr "$cpu_count" + 0
 
+    # 8 VCPUs should be enough for the devstack VM; avoid using too
+    # many VCPUs:
+    # 1. too many VCPUs may trigger a kernel bug which prevents the
+    #    VM from booting:
+    #    https://kernel.googlesource.com/pub/scm/linux/kernel/git/wsa/linux/+/e2e004acc7cbe3c531e752a270a74e95cde3ea48
+    # 2. the remaining CPUs can be used for other purposes, e.g.
+    #    booting test VMs.
+    MAX_VCPUS=8
+    if [ $cpu_count -ge $MAX_VCPUS ]; then
+        cpu_count=$MAX_VCPUS
+    fi
+
     xe vm-param-set uuid=$vm VCPUs-max=$cpu_count
     xe vm-param-set uuid=$vm VCPUs-at-startup=$cpu_count
 }
diff --git a/tools/xen/install_os_domU.sh b/tools/xen/install_os_domU.sh
index d2e2c57..f4ca71a 100755
--- a/tools/xen/install_os_domU.sh
+++ b/tools/xen/install_os_domU.sh
@@ -66,10 +66,6 @@
 setup_network "$MGT_BRIDGE_OR_NET_NAME"
 setup_network "$PUB_BRIDGE_OR_NET_NAME"
 
-# With neutron, one more network is required, which is internal to the
-# hypervisor, and used by the VMs
-setup_network "$XEN_INT_BRIDGE_OR_NET_NAME"
-
 if parameter_is_specified "FLAT_NETWORK_BRIDGE"; then
     if [ "$(bridge_for "$VM_BRIDGE_OR_NET_NAME")" != "$(bridge_for "$FLAT_NETWORK_BRIDGE")" ]; then
         cat >&2 << EOF
@@ -292,16 +288,6 @@
 #
 $THIS_DIR/build_xva.sh "$GUEST_NAME"
 
-# Attach a network interface for the integration network (so that the bridge
-# is created by XenServer). This is required for Neutron. Also pass that as a
-# kernel parameter for DomU
-attach_network "$XEN_INT_BRIDGE_OR_NET_NAME"
-
-XEN_INTEGRATION_BRIDGE_DEFAULT=$(bridge_for "$XEN_INT_BRIDGE_OR_NET_NAME")
-append_kernel_cmdline \
-    "$GUEST_NAME" \
-    "xen_integration_bridge=${XEN_INTEGRATION_BRIDGE_DEFAULT}"
-
 FLAT_NETWORK_BRIDGE="${FLAT_NETWORK_BRIDGE:-$(bridge_for "$VM_BRIDGE_OR_NET_NAME")}"
 append_kernel_cmdline "$GUEST_NAME" "flat_network_bridge=${FLAT_NETWORK_BRIDGE}"
 
diff --git a/tools/xen/xenrc b/tools/xen/xenrc
index 60be02f..169e042 100644
--- a/tools/xen/xenrc
+++ b/tools/xen/xenrc
@@ -29,7 +29,6 @@
 # Get the management network from the XS installation
 VM_BRIDGE_OR_NET_NAME="OpenStack VM Network"
 PUB_BRIDGE_OR_NET_NAME="OpenStack Public Network"
-XEN_INT_BRIDGE_OR_NET_NAME="OpenStack VM Integration Network"
 
 # VM Password
 GUEST_PASSWORD=${GUEST_PASSWORD:-secret}
diff --git a/tox.ini b/tox.ini
index 55a06d0..46b15f4 100644
--- a/tox.ini
+++ b/tox.ini
@@ -37,9 +37,9 @@
 deps =
    Pygments
    docutils
-   sphinx>=1.1.2,<1.2
-   pbr>=0.6,!=0.7,<1.0
-   oslosphinx
+   sphinx>=1.6.2
+   pbr>=2.0.0,!=2.1.0
+   openstackdocstheme>=1.11.0
    nwdiag
    blockdiag
    sphinxcontrib-blockdiag
@@ -52,9 +52,9 @@
 
 [testenv:venv]
 deps =
-   pbr>=0.6,!=0.7,<1.0
-   sphinx>=1.1.2,<1.2
-   oslosphinx
+   pbr>=2.0.0,!=2.1.0
+   sphinx>=1.6.2
+   openstackdocstheme>=1.11.0
    blockdiag
    sphinxcontrib-blockdiag
    sphinxcontrib-nwdiag
diff --git a/unstack.sh b/unstack.sh
index b0ebaf7..5d3672e 100755
--- a/unstack.sh
+++ b/unstack.sh
@@ -69,7 +69,7 @@
 source $TOP_DIR/lib/neutron
 source $TOP_DIR/lib/ldap
 source $TOP_DIR/lib/dstat
-source $TOP_DIR/lib/dlm
+source $TOP_DIR/lib/etcd3
 
 # Extras Source
 # --------------
@@ -129,9 +129,6 @@
     stop_tls_proxy
     cleanup_CA
 fi
-if [ "$USE_SSL" == "True" ]; then
-    cleanup_CA
-fi
 
 SCSI_PERSIST_DIR=$CINDER_STATE_PATH/volumes/*
 
@@ -165,17 +162,13 @@
     cleanup_neutron
 fi
 
-if is_service_enabled dstat; then
-    stop_dstat
+if is_service_enabled etcd3; then
+    stop_etcd3
+    cleanup_etcd3
 fi
 
-# Clean up the remainder of the screen processes
-SCREEN=$(which screen)
-if [[ -n "$SCREEN" ]]; then
-    SESSION=$(screen -ls | awk "/[0-9]+.${SCREEN_NAME}/"'{ print $1 }')
-    if [[ -n "$SESSION" ]]; then
-        screen -X -S $SESSION quit
-    fi
+if is_service_enabled dstat; then
+    stop_dstat
 fi
 
 # NOTE: Cinder automatically installs the lvm2 package, independently of the