Merge "Remove neutron ryu-plugin support"
diff --git a/doc/source/changes.rst b/doc/source/changes.rst
index f4a326d..7b75375 100644
--- a/doc/source/changes.rst
+++ b/doc/source/changes.rst
@@ -3,7 +3,7 @@
 =======
 
 Recent Changes What's been happening?
--------------------------------------
+=====================================
 
 These are the commits to DevStack for the last six months. For the
 complete list see `the DevStack project in
diff --git a/doc/source/configuration.rst b/doc/source/configuration.rst
index eba2956..5157622 100644
--- a/doc/source/configuration.rst
+++ b/doc/source/configuration.rst
@@ -22,7 +22,7 @@
 -  allow settings in arbitrary configuration files to be changed
 
 local.conf
-~~~~~~~~~~
+==========
 
 The new configuration file is ``local.conf`` and resides in the root
 DevStack directory like the old ``localrc`` file. It is a modified INI
@@ -74,7 +74,7 @@
 ``localrc`` file (actually ``.localrc.auto``). This allows all custom
 settings for DevStack to be contained in a single file. If ``localrc``
 exists it will be used instead to preserve backward-compatibility. More
-details on the `contents of localrc <localrc.html>`__ are available.
+details on the :doc:`contents of local.conf <local.conf>` are available.
 
 ::
 
@@ -96,7 +96,7 @@
 whitespace around ``=`` (equals).
 
 Minimal Configuration
-~~~~~~~~~~~~~~~~~~~~~
+=====================
 
 While ``stack.sh`` is happy to run without a ``localrc`` section in
 ``local.conf``, devlife is better when there are a few minimal variables
@@ -136,9 +136,11 @@
 by default.
 
 Common Configuration Variables
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+==============================
 
-Set DevStack install directory
+Installation Directory
+----------------------
+
     | *Default: ``DEST=/opt/stack``*
     |  The DevStack install directory is set by the ``DEST`` variable.
     |  By setting it early in the ``localrc`` section you can reference it
@@ -150,7 +152,27 @@
 
         DEST=/opt/stack
 
-stack.sh logging
+Libraries from Git
+------------------
+
+   | *Default: ``LIBS_FROM_GIT=""``*
+
+   | By default DevStack installs OpenStack server components from
+     git; however, it installs client libraries from released versions
+     on PyPI. This is appropriate if you are working on server
+     development, but if you want to see how an unreleased version of
+     the client affects the system you can have DevStack install it
+     from upstream or from local git trees.
+   | Multiple libraries can be specified as a comma-separated list.
+   |
+
+   ::
+
+      LIBS_FROM_GIT=python-keystoneclient,oslo.config
+
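+   | For example, to install the Cinder client library from git, the
+     setting can be placed in the ``localrc`` section of ``local.conf``
+     as in the following sketch:
+   |
+
+   ::
+
+      [[local|localrc]]
+      LIBS_FROM_GIT=python-cinderclient
+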
+Enable Logging
+--------------
+
     | *Defaults: ``LOGFILE="" LOGDAYS=7 LOG_COLOR=True``*
     |  By default ``stack.sh`` output is only written to the console
        where is runs. It can be sent to a file in addition to the console
@@ -178,7 +200,9 @@
 
         LOG_COLOR=False
 
-Screen logging
+Logging the Screen Output
+-------------------------
+
     | *Default: ``SCREEN_LOGDIR=""``*
     |  By default DevStack runs the OpenStack services using ``screen``
        which is useful for watching log and debug output. However, in
@@ -196,7 +220,9 @@
     *Note the use of ``DEST`` to locate the main install directory; this
     is why we suggest setting it in ``local.conf``.*
 
-One syslog to bind them all
+Enabling Syslog
+---------------
+
     | *Default: ``SYSLOG=False SYSLOG_HOST=$HOST_IP SYSLOG_PORT=516``*
     |  Logging all services to a single syslog can be convenient. Enable
        syslogging by setting ``SYSLOG`` to ``True``. If the destination log
@@ -211,6 +237,8 @@
         SYSLOG_PORT=516
 
 A clean install every time
+--------------------------
+
     | *Default: ``RECLONE=""``*
     |  By default ``stack.sh`` only clones the project repos if they do
        not exist in ``$DEST``. ``stack.sh`` will freshen each repo on each
@@ -222,10 +250,18 @@
 
         RECLONE=yes
 
-                    Swift
-                    Default: SWIFT_HASH="" SWIFT_REPLICAS=1 SWIFT_DATA_DIR=$DEST/data/swift
-                    Swift is now used as the back-end for the S3-like object store.  When enabled Nova's objectstore (n-obj in ENABLED_SERVICES) is automatically disabled. Enable Swift by adding it services to ENABLED_SERVICES:
-                    enable_service s-proxy s-object s-container s-account
+Swift
+-----
+
+    | Default: SWIFT_HASH=""
+    | SWIFT_REPLICAS=1
+    | SWIFT_DATA_DIR=$DEST/data/swift
+
+    | Swift is now used as the back-end for the S3-like object store.
+      When enabled, Nova's objectstore (n-obj in ENABLED_SERVICES) is
+      automatically disabled. Enable Swift by adding its services to
+      ENABLED_SERVICES: enable_service s-proxy s-object s-container
+      s-account
 
     Setting Swift's hash value is required and you will be prompted for
     it if Swift is enabled so just set it to something already:
@@ -259,6 +295,8 @@
     work correctly.*
 
 Service Catalog Backend
+-----------------------
+
     | *Default: ``KEYSTONE_CATALOG_BACKEND=sql``*
     |  DevStack uses Keystone's ``sql`` service catalog backend. An
        alternate ``template`` backend is also available. However, it does
@@ -274,6 +312,8 @@
     ``files/keystone_data.sh``
 
 Cinder
+------
+
     | Default:
     | VOLUME_GROUP="stack-volumes" VOLUME_NAME_PREFIX="volume-" VOLUME_BACKING_FILE_SIZE=10250M
     |  The logical volume group used to hold the Cinder-managed volumes
@@ -289,6 +329,8 @@
         VOLUME_BACKING_FILE_SIZE=10250M
 
 Multi-host DevStack
+-------------------
+
     | *Default: ``MULTI_HOST=False``*
     |  Running DevStack with multiple hosts requires a custom
        ``local.conf`` section for each host. The master is the same as a
@@ -311,6 +353,8 @@
         ENABLED_SERVICES=n-vol,n-cpu,n-net,n-api
 
 API rate limits
+---------------
+
     | Default: ``API_RATE_LIMIT=True``
     | Integration tests such as Tempest will likely run afoul of the
       default rate limits configured for Nova. Turn off rate limiting
@@ -321,8 +365,37 @@
 
         API_RATE_LIMIT=False
 
+IP Version
+----------
+
+    | Default: ``IP_VERSION=4``
+    | This setting can be used to configure DevStack to create either an IPv4,
+      IPv6, or dual stack tenant data network by setting ``IP_VERSION`` to
+      either ``IP_VERSION=4``, ``IP_VERSION=6``, or ``IP_VERSION=4+6``
+      respectively. This functionality requires that the Neutron networking
+      service is enabled by setting the following options:
+    |
+
+    ::
+
+        disable_service n-net
+        enable_service q-svc q-agt q-dhcp q-l3
+
+    | The following optional variables can be used to alter the default IPv6
+      behavior:
+    |
+
+    ::
+
+        IPV6_RA_MODE=slaac
+        IPV6_ADDRESS_MODE=slaac
+        FIXED_RANGE_V6=fd$IPV6_GLOBAL_ID::/64
+        IPV6_PRIVATE_NETWORK_GATEWAY=fd$IPV6_GLOBAL_ID::1
+
+    | *Note: ``FIXED_RANGE_V6`` and ``IPV6_PRIVATE_NETWORK_GATEWAY``
+      can be configured with any valid IPv6 prefix. The default values make
+      use of an auto-generated ``IPV6_GLOBAL_ID`` to comply with RFC 4193.*
+
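+    | Putting these together, a dual-stack tenant network can be
+      sketched in the ``localrc`` section of ``local.conf`` as follows;
+      the optional IPv6 variables above may be added as needed:
+    |
+
+    ::
+
+        [[local|localrc]]
+        disable_service n-net
+        enable_service q-svc q-agt q-dhcp q-l3
+        IP_VERSION=4+6
+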
 Examples
-~~~~~~~~
+========
 
 -  Eliminate a Cinder pass-through (``CINDER_PERIODIC_INTERVAL``):
 
diff --git a/doc/source/contributing.rst b/doc/source/contributing.rst
index b4f9f37..8c9e658 100644
--- a/doc/source/contributing.rst
+++ b/doc/source/contributing.rst
@@ -10,9 +10,9 @@
 OpenStack project you are good to go.
 
 Things To Know
-~~~~~~~~~~~~~~
+==============
 
-| 
+|
 | **Where Things Are**
 
 The official DevStack repository is located at
@@ -30,7 +30,7 @@
 is, however, used for all commits except for the text of this website.
 That should also change in the near future.
 
-| 
+|
 | **HACKING.rst**
 
 Like most OpenStack projects, DevStack includes a ``HACKING.rst`` file
@@ -38,7 +38,7 @@
 ``HACKING.rst`` is in the main DevStack repo it is considered
 authoritative. Much of the content on this page is taken from there.
 
-| 
+|
 | **bashate Formatting**
 
 Around the time of the OpenStack Havana release we added a tool to do
@@ -51,24 +51,25 @@
 formatting. Run it on the entire project with ``./run_tests.sh``.
 
 Code
-~~~~
+====
 
-| 
+|
 | **Repo Layout**
 
 The DevStack repo generally keeps all of the primary scripts at the root
 level.
 
-``docs`` - Contains the source for this website. It is built using
-``tools/build_docs.sh``.
+``doc`` - Contains the Sphinx source for the documentation.
+``tools/build_docs.sh`` is used to generate the HTML versions of the
+DevStack scripts.  A complete doc build can be run with ``tox -edocs``.
 
-``exercises`` - Contains the test scripts used to validate and
+``exercises`` - Contains the test scripts used to sanity-check and
 demonstrate some OpenStack functions. These scripts know how to exit
 early or skip services that are not enabled.
 
 ``extras.d`` - Contains the dispatch scripts called by the hooks in
-``stack.sh``, ``unstack.sh`` and ``clean.sh``. See `the plugins
-docs <plugins.html>`__ for more information.
+``stack.sh``, ``unstack.sh`` and ``clean.sh``. See :doc:`the plugins
+docs <plugins>` for more information.
 
 ``files`` - Contains a variety of otherwise lost files used in
 configuring and operating DevStack. This includes templates for
@@ -84,10 +85,10 @@
 DevStack repo.
 
 ``tests`` - the DevStack test suite is rather sparse, mostly consisting
-of test of specific fragile functions in the ``functions`` file.
+of tests of specific fragile functions in the ``functions`` and
+``functions-common`` files.
 
-``tools`` - Contains a collection of stand-alone scripts, some of which
-have aged a bit (does anyone still do ramdisk installs?). While these
+``tools`` - Contains a collection of stand-alone scripts. While these
 may reference the top-level DevStack configuration they can generally be
 run alone. There are also some sub-directories to support specific
 environments such as XenServer.
diff --git a/doc/source/faq.rst b/doc/source/faq.rst
index 7b33b41..f39471c 100644
--- a/doc/source/faq.rst
+++ b/doc/source/faq.rst
@@ -7,7 +7,7 @@
 -  `Miscellaneous <#misc>`__
 
 General Questions
-~~~~~~~~~~~~~~~~~
+=================
 
 Q: Can I use DevStack for production?
     A: No. We mean it. Really. DevStack makes some implementation
@@ -47,11 +47,8 @@
     and bug reports go to
     `LaunchPad <http://bugs.launchpad.net/devstack/>`__. Contributions
     follow the usual process as described in the `OpenStack
-    wiki <http://wiki.openstack.org/HowToContribute>`__ even though
-    DevStack is not an official OpenStack project. This site is housed
-    in the CloudBuilder's
-    `github <http://github.com/cloudbuilders/devstack>`__ in the
-    gh-pages branch.
+    wiki <http://wiki.openstack.org/HowToContribute>`__. This Sphinx
+    documentation is housed in the doc directory.
 Q: Why not use packages?
     A: Unlike packages, DevStack leaves your cloud ready to develop -
     checkouts of the code and services running in screen. However, many
@@ -80,7 +77,7 @@
     is valuable so we do it...
 
 Operation and Configuration
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
+===========================
 
 Q: Can DevStack handle a multi-node installation?
     A: Indirectly, yes. You run DevStack on each node with the
@@ -160,7 +157,7 @@
     ``FORCE_PREREQ=1`` and the package checks will never be skipped.
 
 Miscellaneous
-~~~~~~~~~~~~~
+=============
 
 Q: ``tools/fixup_stuff.sh`` is broken and shouldn't 'fix' just one version of packages.
     A: [Another not-a-question] No it isn't. Stuff in there is to
diff --git a/doc/source/guides/multinode-lab.rst b/doc/source/guides/multinode-lab.rst
index 1c53227..44601d8 100644
--- a/doc/source/guides/multinode-lab.rst
+++ b/doc/source/guides/multinode-lab.rst
@@ -6,19 +6,19 @@
 physical servers.
 
 Prerequisites Linux & Network
------------------------------
+=============================
 
 Minimal Install
-~~~~~~~~~~~~~~~
+---------------
 
 You need to have a system with a fresh install of Linux. You can
 download the `Minimal
 CD <https://help.ubuntu.com/community/Installation/MinimalCD>`__ for
 Ubuntu releases since DevStack will download & install all the
 additional dependencies. The netinstall ISO is available for
-`Fedora <http://mirrors.kernel.org/fedora/releases/18/Fedora/x86_64/iso/Fedora-20-x86_64-netinst.iso>`__
+`Fedora <http://mirrors.kernel.org/fedora/releases/>`__
 and
-`CentOS/RHEL <http://mirrors.kernel.org/centos/6.5/isos/x86_64/CentOS-6.5-x86_64-netinstall.iso>`__.
+`CentOS/RHEL <http://mirrors.kernel.org/centos/>`__.
 
 Install a couple of packages to bootstrap configuration:
 
@@ -27,7 +27,7 @@
     apt-get install -y git sudo || yum install -y git sudo
 
 Network Configuration
-~~~~~~~~~~~~~~~~~~~~~
+---------------------
 
 The first iteration of the lab uses OpenStack's FlatDHCP network
 controller so only a single network will be required. It should be on
@@ -60,10 +60,10 @@
     GATEWAY=192.168.42.1
 
 Installation shake and bake
----------------------------
+===========================
 
 Add the DevStack User
-~~~~~~~~~~~~~~~~~~~~~
+---------------------
 
 OpenStack runs as a non-root user that has sudo access to root. There is
 nothing special about the name, we'll use ``stack`` here. Every node
@@ -88,7 +88,7 @@
 ``stack`` user.
 
 Set Up Ssh
-~~~~~~~~~~
+----------
 
 Set up the stack user on each node with an ssh key for access:
 
@@ -98,7 +98,7 @@
     echo "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCyYjfgyPazTvGpd8OaAvtU2utL8W6gWC4JdRS1J95GhNNfQd657yO6s1AH5KYQWktcE6FO/xNUC2reEXSGC7ezy+sGO1kj9Limv5vrvNHvF1+wts0Cmyx61D2nQw35/Qz8BvpdJANL7VwP/cFI/p3yhvx2lsnjFE3hN8xRB2LtLUopUSVdBwACOVUmH2G+2BWMJDjVINd2DPqRIA4Zhy09KJ3O1Joabr0XpQL0yt/I9x8BVHdAx6l9U0tMg9dj5+tAjZvMAFfye3PJcYwwsfJoFxC8w/SLtqlFX7Ehw++8RtvomvuipLdmWCy+T9hIkl+gHYE4cS3OIqXH7f49jdJf jesse@spacey.local" > ~/.ssh/authorized_keys
 
 Download DevStack
-~~~~~~~~~~~~~~~~~
+-----------------
 
 Grab the latest version of DevStack:
 
@@ -112,7 +112,7 @@
 (aka 'head node') and the compute nodes.
 
 Configure Cluster Controller
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+----------------------------
 
 The cluster controller runs all OpenStack services. Configure the
 cluster controller's DevStack in ``local.conf``:
@@ -153,7 +153,7 @@
 available in ``stack.sh.log``.
 
 Configure Compute Nodes
-~~~~~~~~~~~~~~~~~~~~~~~
+-----------------------
 
 The compute nodes only run the OpenStack worker services. For additional
 machines, create a ``local.conf`` with:
@@ -196,7 +196,7 @@
 available in ``stack.sh.log``.
 
 Cleaning Up After DevStack
-~~~~~~~~~~~~~~~~~~~~~~~~~~
+--------------------------
 
 Shutting down OpenStack is now as simple as running the included
 ``unstack.sh`` script:
@@ -223,10 +223,10 @@
     sudo virsh list | grep inst | awk '{print $1}' | xargs -n1 virsh destroy
 
 Options pimp your stack
------------------------
+=======================
 
 Additional Users
-~~~~~~~~~~~~~~~~
+----------------
 
 DevStack creates two OpenStack users (``admin`` and ``demo``) and two
 tenants (also ``admin`` and ``demo``). ``admin`` is exactly what it
@@ -242,7 +242,7 @@
 
     # Get admin creds
     . openrc admin admin
-            
+
     # List existing tenants
     keystone tenant-list
 
@@ -260,7 +260,7 @@
     # keystone role-list
 
 Swift
-~~~~~
+-----
 
 Swift requires a significant amount of resources and is disabled by
 default in DevStack. The support in DevStack is geared toward a minimal
@@ -280,11 +280,11 @@
 it...) ``local.conf``.
 
 Volumes
-~~~~~~~
+-------
 
 DevStack will automatically use an existing LVM volume group named
 ``stack-volumes`` to store cloud-created volumes. If ``stack-volumes``
-doesn't exist, DevStack will set up a 5Gb loop-mounted file to contain
+doesn't exist, DevStack will set up a 10Gb loop-mounted file to contain
 it. This obviously limits the number and size of volumes that can be
 created inside OpenStack. The size can be overridden by setting
 ``VOLUME_BACKING_FILE_SIZE`` in ``local.conf``.
@@ -305,7 +305,7 @@
     vgcreate stack-volumes /dev/sdc
 
 Syslog
-~~~~~~
+------
 
 DevStack is capable of using ``rsyslog`` to aggregate logging across the
 cluster. It is off by default; to turn it on set ``SYSLOG=True`` in
@@ -319,7 +319,7 @@
     SYSLOG_HOST=192.168.42.11
 
 Using Alternate Repositories/Branches
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-------------------------------------
 
 The git repositories for all of the OpenStack services are defined in
 ``stackrc``. Since this file is a part of the DevStack package changes
@@ -349,10 +349,10 @@
     GLANCE_REPO=https://github.com/mcuser/glance.git
 
 Notes stuff you might need to know
-----------------------------------
+==================================
 
 Reset the Bridge
-~~~~~~~~~~~~~~~~
+----------------
 
 How to reset the bridge configuration:
 
@@ -363,7 +363,7 @@
     sudo brctl delbr br100
 
 Set MySQL Password
-~~~~~~~~~~~~~~~~~~
+------------------
 
 If you forgot to set the root password you can do this:
 
diff --git a/doc/source/guides/single-machine.rst b/doc/source/guides/single-machine.rst
index 6059511..17e9b9e 100644
--- a/doc/source/guides/single-machine.rst
+++ b/doc/source/guides/single-machine.rst
@@ -1,31 +1,31 @@
-==========
-All-In-One
-==========
+=========================
+All-In-One Single Machine
+=========================
 
 Things are about to get real! Using OpenStack in containers or VMs is
 nice for kicking the tires, but doesn't compare to the feeling you get
 with hardware.
 
 Prerequisites Linux & Network
------------------------------
+=============================
 
 Minimal Install
-~~~~~~~~~~~~~~~
+---------------
 
 You need to have a system with a fresh install of Linux. You can
 download the `Minimal
 CD <https://help.ubuntu.com/community/Installation/MinimalCD>`__ for
 Ubuntu releases since DevStack will download & install all the
 additional dependencies. The netinstall ISO is available for
-`Fedora <http://mirrors.kernel.org/fedora/releases/18/Fedora/x86_64/iso/Fedora-20-x86_64-netinst.iso>`__
+`Fedora <http://mirrors.kernel.org/fedora/releases/>`__
 and
-`CentOS/RHEL <http://mirrors.kernel.org/centos/6.5/isos/x86_64/CentOS-6.5-x86_64-netinstall.iso>`__.
+`CentOS/RHEL <http://mirrors.kernel.org/centos/>`__.
 You may be tempted to use a desktop distro on a laptop, it will probably
 work but you may need to tell Network Manager to keep its fingers off
 the interface(s) that OpenStack uses for bridging.
 
 Network Configuration
-~~~~~~~~~~~~~~~~~~~~~
+---------------------
 
 Determine the network configuration on the interface used to integrate
 your OpenStack cloud with your existing network. For example, if the IPs
@@ -36,10 +36,10 @@
 of DHCP (i.e. 192.168.1.201).
 
 Installation shake and bake
----------------------------
+===========================
 
 Add your user
-~~~~~~~~~~~~~
+-------------
 
 We need to add a user to install DevStack. (if you created a user during
 install you can skip this step and just give the user sudo privileges
@@ -61,7 +61,7 @@
 **login** as that user.
 
 Download DevStack
-~~~~~~~~~~~~~~~~~
+-----------------
 
 We'll grab the latest version of DevStack via https:
 
@@ -72,7 +72,7 @@
     cd devstack
 
 Run DevStack
-~~~~~~~~~~~~
+------------
 
 Now to configure ``stack.sh``. DevStack includes a sample in
 ``devstack/samples/local.conf``. Create ``local.conf`` as shown below to
@@ -120,7 +120,7 @@
 accounts and passwords to poke at your shiny new OpenStack.
 
 Using OpenStack
-~~~~~~~~~~~~~~~
+---------------
 
 At this point you should be able to access the dashboard from other
 computers on the local network. In this example that would be
diff --git a/doc/source/guides/single-vm.rst b/doc/source/guides/single-vm.rst
index d296db6..a41c4e1 100644
--- a/doc/source/guides/single-vm.rst
+++ b/doc/source/guides/single-vm.rst
@@ -1,6 +1,6 @@
-=============
-Cloud in a VM
-=============
+====================
+All-In-One Single VM
+====================
 
 Use the cloud to build the cloud! Use your cloud to launch new versions
 of OpenStack in about 5 minutes. When you break it, start over! The VMs
@@ -9,16 +9,16 @@
 operation. Speed not required.
 
 Prerequisites Cloud & Image
----------------------------
+===========================
 
 Virtual Machine
-~~~~~~~~~~~~~~~
+---------------
 
 DevStack should run in any virtual machine running a supported Linux
 release. It will perform best with 2Gb or more of RAM.
 
 OpenStack Deployment & cloud-init
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+---------------------------------
 
 If the cloud service has an image with ``cloud-init`` pre-installed, use
 it. You can get one from `Ubuntu's Daily
@@ -33,10 +33,10 @@
 bare-bones server installation.
 
 Installation shake and bake
----------------------------
+===========================
 
 Launching With Cloud-Init
-~~~~~~~~~~~~~~~~~~~~~~~~~
+-------------------------
 
 This cloud config grabs the latest version of DevStack via git, creates
 a minimal ``local.conf`` file and kicks off ``stack.sh``. It should be
@@ -79,13 +79,13 @@
 to create a non-root user and run the ``start.sh`` script as that user.
 
 Launching By Hand
-~~~~~~~~~~~~~~~~~
+-----------------
 
 Using a hypervisor directly, launch the VM and either manually perform
 the steps in the embedded shell script above or copy it into the VM.
 
 Using OpenStack
-~~~~~~~~~~~~~~~
+---------------
 
 At this point you should be able to access the dashboard. Launch VMs and
 if you give them floating IPs access those VMs from other machines on
diff --git a/doc/source/index.rst b/doc/source/index.rst
index 2128620..dbefdec 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -12,11 +12,8 @@
    changes
    contributing
 
-   guides/*
-
-
-Quick Start This ain't your first rodeo
----------------------------------------
+Quick Start
+-----------
 
 #. Select a Linux Distribution
 
@@ -59,40 +56,36 @@
 
 Walk through various setups used by stackers
 
-OpenStack on VMs
-----------------
+.. toctree::
+   :glob:
+   :maxdepth: 1
 
-These guides tell you how to virtualize your OpenStack cloud in virtual
-machines. This means that you can get started without having to purchase
-any hardware.
+   guides/single-vm
+   guides/single-machine
+   guides/multinode-lab
 
-Virtual Machine
-~~~~~~~~~~~~~~~
+All-In-One Single VM
+--------------------
 
-:doc:`Run OpenStack in a VM <guides/single-vm>`. The VMs launched in your cloud will be slow as
+Run :doc:`OpenStack in a VM <guides/single-vm>`. The VMs launched in your cloud will be slow as
 they are running in QEMU (emulation), but it is useful if you don't have
 spare hardware laying around. :doc:`[Read] <guides/single-vm>`
 
-OpenStack on Hardware
----------------------
+All-In-One Single Machine
+-------------------------
 
-These guides tell you how to deploy a development environment on real
-hardware. Guides range from running OpenStack on a single laptop to
-running a multi-node deployment on datacenter hardware.
+Run :doc:`OpenStack on dedicated hardware <guides/single-machine>`. This can include a
+server-class machine or a laptop at home.
+:doc:`[Read] <guides/single-machine>`
 
-All-In-One
-~~~~~~~~~~
+Multi-Node Lab
+--------------
 
-:doc:`Run OpenStack on dedicated hardware <guides/single-machine>` to get real performance in your VMs.
-This can include a server-class machine or a laptop at home. :doc:`[Read] <guides/single-machine>`
+Set up a :doc:`multi-node cluster <guides/multinode-lab>` with dedicated VLANs for VMs & Management.
+:doc:`[Read] <guides/multinode-lab>`
 
-Multi-Node + VLANs
-~~~~~~~~~~~~~~~~~~
-
-:doc:`Setup a multi-node cluster <guides/multinode-lab>` with dedicated VLANs for VMs & Management. :doc:`[Read] <guides/multinode-lab>`
-
-Documentation
-=============
+DevStack Documentation
+======================
 
 Overview
 --------
@@ -127,187 +120,102 @@
 Code
 ====
 
-A look at the bits that make it all go
+*A look at the bits that make it all go*
 
 Scripts
 -------
 
-Generated documentation of DevStack scripts.
+* `stack.sh <stack.sh.html>`__ - The main script
+* `functions <functions.html>`__ - DevStack-specific functions
+* `functions-common <functions-common.html>`__ - Functions shared with other projects
+* `lib/apache <lib/apache.html>`__
+* `lib/baremetal <lib/baremetal.html>`__
+* `lib/ceilometer <lib/ceilometer.html>`__
+* `lib/ceph <lib/ceph.html>`__
+* `lib/cinder <lib/cinder.html>`__
+* `lib/config <lib/config.html>`__
+* `lib/database <lib/database.html>`__
+* `lib/dib <lib/dib.html>`__
+* `lib/dstat <lib/dstat.html>`__
+* `lib/glance <lib/glance.html>`__
+* `lib/heat <lib/heat.html>`__
+* `lib/horizon <lib/horizon.html>`__
+* `lib/infra <lib/infra.html>`__
+* `lib/ironic <lib/ironic.html>`__
+* `lib/keystone <lib/keystone.html>`__
+* `lib/ldap <lib/ldap.html>`__
+* `lib/neutron <lib/neutron.html>`__
+* `lib/nova <lib/nova.html>`__
+* `lib/opendaylight <lib/opendaylight.html>`__
+* `lib/oslo <lib/oslo.html>`__
+* `lib/rpc\_backend <lib/rpc_backend.html>`__
+* `lib/sahara <lib/sahara.html>`__
+* `lib/stackforge <lib/stackforge.html>`__
+* `lib/swift <lib/swift.html>`__
+* `lib/tempest <lib/tempest.html>`__
+* `lib/tls <lib/tls.html>`__
+* `lib/trove <lib/trove.html>`__
+* `lib/zaqar <lib/zaqar.html>`__
+* `unstack.sh <unstack.sh.html>`__
+* `clean.sh <clean.sh.html>`__
+* `run\_tests.sh <run_tests.sh.html>`__
 
-+-------------------------------+----------------------------------------------+
-| Filename                      | Link                                         |
-+===============================+==============================================+
-| stack.sh                      | `Read » <stack.sh.html>`__                   |
-+-------------------------------+----------------------------------------------+
-| functions                     | `Read » <functions.html>`__                  |
-+-------------------------------+----------------------------------------------+
-| functions-common              | `Read » <functions-common.html>`__           |
-+-------------------------------+----------------------------------------------+
-| lib/apache                    | `Read » <lib/apache.html>`__                 |
-+-------------------------------+----------------------------------------------+
-| lib/baremetal                 | `Read » <lib/baremetal.html>`__              |
-+-------------------------------+----------------------------------------------+
-| lib/ceilometer                | `Read » <lib/ceilometer.html>`__             |
-+-------------------------------+----------------------------------------------+
-| lib/cinder                    | `Read » <lib/cinder.html>`__                 |
-+-------------------------------+----------------------------------------------+
-| lib/config                    | `Read » <lib/config.html>`__                 |
-+-------------------------------+----------------------------------------------+
-| lib/database                  | `Read » <lib/database.html>`__               |
-+-------------------------------+----------------------------------------------+
-| lib/glance                    | `Read » <lib/glance.html>`__                 |
-+-------------------------------+----------------------------------------------+
-| lib/heat                      | `Read » <lib/heat.html>`__                   |
-+-------------------------------+----------------------------------------------+
-| lib/horizon                   | `Read » <lib/horizon.html>`__                |
-+-------------------------------+----------------------------------------------+
-| lib/infra                     | `Read » <lib/infra.html>`__                  |
-+-------------------------------+----------------------------------------------+
-| lib/ironic                    | `Read » <lib/ironic.html>`__                 |
-+-------------------------------+----------------------------------------------+
-| lib/keystone                  | `Read » <lib/keystone.html>`__               |
-+-------------------------------+----------------------------------------------+
-| lib/ldap                      | `Read » <lib/ldap.html>`__                   |
-+-------------------------------+----------------------------------------------+
-| lib/zaqar                     | `Read » <lib/zaqar.html>`__                  |
-+-------------------------------+----------------------------------------------+
-| lib/neutron                   | `Read » <lib/neutron.html>`__                |
-+-------------------------------+----------------------------------------------+
-| lib/nova                      | `Read » <lib/nova.html>`__                   |
-+-------------------------------+----------------------------------------------+
-| lib/oslo                      | `Read » <lib/oslo.html>`__                   |
-+-------------------------------+----------------------------------------------+
-| lib/rpc\_backend              | `Read » <lib/rpc_backend.html>`__            |
-+-------------------------------+----------------------------------------------+
-| lib/sahara                    | `Read » <lib/sahara.html>`__                 |
-+-------------------------------+----------------------------------------------+
-| lib/savanna                   | `Read » <lib/savanna.html>`__                |
-+-------------------------------+----------------------------------------------+
-| lib/stackforge                | `Read » <lib/stackforge.html>`__             |
-+-------------------------------+----------------------------------------------+
-| lib/swift                     | `Read » <lib/swift.html>`__                  |
-+-------------------------------+----------------------------------------------+
-| lib/tempest                   | `Read » <lib/tempest.html>`__                |
-+-------------------------------+----------------------------------------------+
-| lib/tls                       | `Read » <lib/tls.html>`__                    |
-+-------------------------------+----------------------------------------------+
-| lib/trove                     | `Read » <lib/trove.html>`__                  |
-+-------------------------------+----------------------------------------------+
-| unstack.sh                    | `Read » <unstack.sh.html>`__                 |
-+-------------------------------+----------------------------------------------+
-| clean.sh                      | `Read » <clean.sh.html>`__                   |
-+-------------------------------+----------------------------------------------+
-| run\_tests.sh                 | `Read » <run_tests.sh.html>`__               |
-+-------------------------------+----------------------------------------------+
-| extras.d/50-ironic.sh         | `Read » <extras.d/50-ironic.html>`__         |
-+-------------------------------+----------------------------------------------+
-| extras.d/70-zaqar.sh          | `Read » <extras.d/70-zaqar.html>`__          |
-+-------------------------------+----------------------------------------------+
-| extras.d/70-sahara.sh         | `Read » <extras.d/70-sahara.html>`__         |
-+-------------------------------+----------------------------------------------+
-| extras.d/70-savanna.sh        | `Read » <extras.d/70-savanna.html>`__        |
-+-------------------------------+----------------------------------------------+
-| extras.d/70-trove.sh          | `Read » <extras.d/70-trove.html>`__          |
-+-------------------------------+----------------------------------------------+
-| extras.d/80-opendaylight.sh   | `Read » <extras.d/80-opendaylight.html>`__   |
-+-------------------------------+----------------------------------------------+
-| extras.d/80-tempest.sh        | `Read » <extras.d/80-tempest.html>`__        |
-+-------------------------------+----------------------------------------------+
+* `extras.d/40-dib.sh <extras.d/40-dib.sh.html>`__
+* `extras.d/50-ironic.sh <extras.d/50-ironic.sh.html>`__
+* `extras.d/60-ceph.sh <extras.d/60-ceph.sh.html>`__
+* `extras.d/70-sahara.sh <extras.d/70-sahara.sh.html>`__
+* `extras.d/70-trove.sh <extras.d/70-trove.sh.html>`__
+* `extras.d/70-zaqar.sh <extras.d/70-zaqar.sh.html>`__
+* `extras.d/80-opendaylight.sh <extras.d/80-opendaylight.sh.html>`__
+* `extras.d/80-tempest.sh <extras.d/80-tempest.sh.html>`__
 
 Configuration
 -------------
 
-+--------------+--------------------------------+
-| Filename     | Link                           |
-+==============+================================+
-| local.conf   | `Read » <local.conf.html>`__   |
-+--------------+--------------------------------+
-| stackrc      | `Read » <stackrc.html>`__      |
-+--------------+--------------------------------+
-| openrc       | `Read » <openrc.html>`__       |
-+--------------+--------------------------------+
-| exerciserc   | `Read » <exerciserc.html>`__   |
-+--------------+--------------------------------+
-| eucarc       | `Read » <eucarc.html>`__       |
-+--------------+--------------------------------+
-
-Tools
------
-
-+-----------------------------+----------------------------------------------+
-| Filename                    | Link                                         |
-+=============================+==============================================+
-| tools/info.sh               | `Read » <tools/info.sh.html>`__              |
-+-----------------------------+----------------------------------------------+
-| tools/build\_docs.sh        | `Read » <tools/build_docs.sh.html>`__        |
-+-----------------------------+----------------------------------------------+
-| tools/create\_userrc.sh     | `Read » <tools/create_userrc.sh.html>`__     |
-+-----------------------------+----------------------------------------------+
-| tools/fixup\_stuff.sh       | `Read » <tools/fixup_stuff.sh.html>`__       |
-+-----------------------------+----------------------------------------------+
-| tools/install\_prereqs.sh   | `Read » <tools/install_prereqs.sh.html>`__   |
-+-----------------------------+----------------------------------------------+
-| tools/install\_pip.sh       | `Read » <tools/install_pip.sh.html>`__       |
-+-----------------------------+----------------------------------------------+
-| tools/upload\_image.sh      | `Read » <tools/upload_image.sh.html>`__      |
-+-----------------------------+----------------------------------------------+
-
-Samples
--------
-
-Generated documentation of DevStack sample files.
-
-+------------+--------------------------------------+
-| Filename   | Link                                 |
-+============+======================================+
-| local.sh   | `Read » <samples/local.sh.html>`__   |
-+------------+--------------------------------------+
-| localrc    | `Read » <samples/localrc.html>`__    |
-+------------+--------------------------------------+
-
-Exercises
----------
-
-+---------------------------------+-------------------------------------------------+
-| Filename                        | Link                                            |
-+=================================+=================================================+
-| exercise.sh                     | `Read » <exercise.sh.html>`__                   |
-+---------------------------------+-------------------------------------------------+
-| exercises/aggregates.sh         | `Read » <exercises/aggregates.sh.html>`__       |
-+---------------------------------+-------------------------------------------------+
-| exercises/boot\_from\_volume.sh | `Read » <exercises/boot_from_volume.sh.html>`__ |
-+---------------------------------+-------------------------------------------------+
-| exercises/bundle.sh             | `Read » <exercises/bundle.sh.html>`__           |
-+---------------------------------+-------------------------------------------------+
-| exercises/client-args.sh        | `Read » <exercises/client-args.sh.html>`__      |
-+---------------------------------+-------------------------------------------------+
-| exercises/client-env.sh         | `Read » <exercises/client-env.sh.html>`__       |
-+---------------------------------+-------------------------------------------------+
-| exercises/euca.sh               | `Read » <exercises/euca.sh.html>`__             |
-+---------------------------------+-------------------------------------------------+
-| exercises/floating\_ips.sh      | `Read » <exercises/floating_ips.sh.html>`__     |
-+---------------------------------+-------------------------------------------------+
-| exercises/horizon.sh            | `Read » <exercises/horizon.sh.html>`__          |
-+---------------------------------+-------------------------------------------------+
-| exercises/neutron-adv-test.sh   | `Read » <exercises/neutron-adv-test.sh.html>`__ |
-+---------------------------------+-------------------------------------------------+
-| exercises/sahara.sh             | `Read » <exercises/sahara.sh.html>`__           |
-+---------------------------------+-------------------------------------------------+
-| exercises/savanna.sh            | `Read » <exercises/savanna.sh.html>`__          |
-+---------------------------------+-------------------------------------------------+
-| exercises/sec\_groups.sh        | `Read » <exercises/sec_groups.sh.html>`__       |
-+---------------------------------+-------------------------------------------------+
-| exercises/swift.sh              | `Read » <exercises/swift.sh.html>`__            |
-+---------------------------------+-------------------------------------------------+
-| exercises/trove.sh              | `Read » <exercises/trove.sh.html>`__            |
-+---------------------------------+-------------------------------------------------+
-| exercises/volumes.sh            | `Read » <exercises/volumes.sh.html>`__          |
-+---------------------------------+-------------------------------------------------+
-| exercises/zaqar.sh              | `Read » <exercises/zaqar.sh.html>`__            |
-+---------------------------------+-------------------------------------------------+
-
 .. toctree::
    :glob:
    :maxdepth: 1
 
-   *
+   local.conf
+   stackrc
+   openrc
+   exerciserc
+   eucarc
+
+Tools
+-----
+
+* `tools/build\_docs.sh <tools/build_docs.sh.html>`__
+* `tools/create-stack-user.sh <tools/create-stack-user.sh.html>`__
+* `tools/create\_userrc.sh <tools/create_userrc.sh.html>`__
+* `tools/fixup\_stuff.sh <tools/fixup_stuff.sh.html>`__
+* `tools/info.sh <tools/info.sh.html>`__
+* `tools/install\_pip.sh <tools/install_pip.sh.html>`__
+* `tools/install\_prereqs.sh <tools/install_prereqs.sh.html>`__
+* `tools/make\_cert.sh <tools/make_cert.sh.html>`__
+* `tools/upload\_image.sh <tools/upload_image.sh.html>`__
+
+Samples
+-------
+
+* `local.sh <samples/local.sh.html>`__
+
+Exercises
+---------
+
+* `exercise.sh <exercise.sh.html>`__
+* `exercises/aggregates.sh <exercises/aggregates.sh.html>`__
+* `exercises/boot\_from\_volume.sh <exercises/boot_from_volume.sh.html>`__
+* `exercises/bundle.sh <exercises/bundle.sh.html>`__
+* `exercises/client-args.sh <exercises/client-args.sh.html>`__
+* `exercises/client-env.sh <exercises/client-env.sh.html>`__
+* `exercises/euca.sh <exercises/euca.sh.html>`__
+* `exercises/floating\_ips.sh <exercises/floating_ips.sh.html>`__
+* `exercises/horizon.sh <exercises/horizon.sh.html>`__
+* `exercises/neutron-adv-test.sh <exercises/neutron-adv-test.sh.html>`__
+* `exercises/sahara.sh <exercises/sahara.sh.html>`__
+* `exercises/sec\_groups.sh <exercises/sec_groups.sh.html>`__
+* `exercises/swift.sh <exercises/swift.sh.html>`__
+* `exercises/trove.sh <exercises/trove.sh.html>`__
+* `exercises/volumes.sh <exercises/volumes.sh.html>`__
+* `exercises/zaqar.sh <exercises/zaqar.sh.html>`__
diff --git a/doc/source/local.conf.rst b/doc/source/local.conf.rst
index a9dfcb0..b2f7557 100644
--- a/doc/source/local.conf.rst
+++ b/doc/source/local.conf.rst
@@ -4,6 +4,6 @@
 
 ``local.conf`` is a user-maintained setings file that is sourced in
 ``stackrc``. It contains a section that replaces the historical
-``localrc`` file. See `the description of
-local.conf <configuration.html>`__ for more details about the mechanics
+``localrc`` file. See the description of
+:doc:`local.conf <configuration>` for more details about the mechanics
 of the file.
diff --git a/doc/source/localrc.rst b/doc/source/localrc.rst
deleted file mode 100644
index 98f3083..0000000
--- a/doc/source/localrc.rst
+++ /dev/null
@@ -1,9 +0,0 @@
-=====================
-localrc - The Old Way
-=====================
-
-``localrc`` is the old file used to configure DevStack. It is deprecated
-and has been replaced by ```local.conf`` <local.conf.html>`__. DevStack
-will continue to use ``localrc`` if it is present and ignore the
-``localrc`` section in ``local.conf.``. Remove ``localrc`` to switch to
-using the new file.
diff --git a/doc/source/openrc.rst b/doc/source/openrc.rst
index dc12f76..56ff5c2 100644
--- a/doc/source/openrc.rst
+++ b/doc/source/openrc.rst
@@ -8,29 +8,30 @@
 ``local.conf``) in order to pick up ``HOST_IP`` and/or ``SERVICE_HOST``
 to use in the endpoints. The values shown below are the default values.
 
-OS\_TENANT\_NAME
-    The introduction of Keystone to the OpenStack ecosystem has
-    standardized the term *tenant* as the entity that owns resources. In
-    some places references still exist to the original Nova term
-    *project* for this use. Also, *tenant\_name* is preferred to
-    *tenant\_id*.
+OS\_PROJECT\_NAME (OS\_TENANT\_NAME)
+    Keystone has standardized the term *project* as the entity that owns
+    resources. In some places references still exist to the previous
+    term *tenant* for this use. Also, *project\_name* is preferred to
+    *project\_id*. OS\_TENANT\_NAME remains supported for compatibility
+    with older tools.
 
     ::
 
-        OS_TENANT_NAME=demo
+        OS_PROJECT_NAME=demo
 
 OS\_USERNAME
-    In addition to the owning entity (tenant), Nova stores the entity
-    performing the action as the *user*.
+    In addition to the owning entity (project), OpenStack calls the entity
+    performing the action the *user*.
 
     ::
 
         OS_USERNAME=demo
 
 OS\_PASSWORD
-    With Keystone you pass the keystone password instead of an api key.
-    Recent versions of novaclient use OS\_PASSWORD instead of
-    NOVA\_API\_KEYs or NOVA\_PASSWORD.
+    Keystone's default authentication requires a password be provided.
+    The usual cautions about putting passwords in environment variables
+    apply; for most DevStack uses this may be an acceptable tradeoff.
 
     ::
 
@@ -39,7 +40,7 @@
 HOST\_IP, SERVICE\_HOST
     Set API endpoint host using ``HOST_IP``. ``SERVICE_HOST`` may also
     be used to specify the endpoint, which is convenient for some
-    ``localrc`` configurations. Typically, ``HOST_IP`` is set in the
+    ``local.conf`` configurations. Typically, ``HOST_IP`` is set in the
     ``localrc`` section.
 
     ::
@@ -57,15 +58,6 @@
 
         OS_AUTH_URL=http://$SERVICE_HOST:5000/v2.0
 
-GLANCE\_HOST
-    Some exercises call Glance directly. On a single-node installation,
-    Glance should be listening on ``HOST_IP``. If its running elsewhere
-    it can be set here.
-
-    ::
-
-        GLANCE_HOST=$HOST_IP
-
 KEYSTONECLIENT\_DEBUG, NOVACLIENT\_DEBUG
     Set command-line client log level to ``DEBUG``. These are commented
     out by default.
diff --git a/doc/source/overview.rst b/doc/source/overview.rst
index e3cf75d..23ccf27 100644
--- a/doc/source/overview.rst
+++ b/doc/source/overview.rst
@@ -13,10 +13,10 @@
 "tested") going forward.
 
 Supported Components
---------------------
+====================
 
 Base OS
-~~~~~~~
+-------
 
 *The OpenStack Technical Committee (TC) has defined the current CI
 strategy to include the latest Ubuntu release and the latest RHEL
@@ -33,7 +33,7 @@
    side-effects on other OS platforms.
 
 Databases
-~~~~~~~~~
+---------
 
 *As packaged by the host OS*
 
@@ -41,7 +41,7 @@
 -  PostgreSQL
 
 Queues
-~~~~~~
+------
 
 *As packaged by the host OS*
 
@@ -49,14 +49,14 @@
 -  Qpid
 
 Web Server
-~~~~~~~~~~
+----------
 
 *As packaged by the host OS*
 
 -  Apache
 
 OpenStack Network
-~~~~~~~~~~~~~~~~~
+-----------------
 
 *Default to Nova Network, optionally use Neutron*
 
@@ -65,7 +65,7 @@
    mode using linuxbridge or OpenVSwitch.
 
 Services
-~~~~~~~~
+--------
 
 The default services configured by DevStack are Identity (Keystone),
 Object Storage (Swift), Image Storage (Glance), Block Storage (Cinder),
@@ -73,18 +73,18 @@
 (Heat)
 
 Additional services not included directly in DevStack can be tied in to
-``stack.sh`` using the `plugin mechanism <plugins.html>`__ to call
+``stack.sh`` using the :doc:`plugin mechanism <plugins>` to call
 scripts that perform the configuration and startup of the service.
 
 Node Configurations
-~~~~~~~~~~~~~~~~~~~
+-------------------
 
 -  single node
 -  multi-node is not tested regularly by the core team, and even then
    only minimal configurations are reviewed
 
 Exercises
-~~~~~~~~~
+---------
 
 The DevStack exercise scripts are no longer used as integration and gate
 testing as that job has transitioned to Tempest. They are still
diff --git a/doc/source/plugins.rst b/doc/source/plugins.rst
index 282c1a4..485cd0f 100644
--- a/doc/source/plugins.rst
+++ b/doc/source/plugins.rst
@@ -6,10 +6,10 @@
 support for additional projects and features.
 
 Extras.d Hooks
-~~~~~~~~~~~~~~
+==============
 
-These relatively new hooks are an extension of the existing calls from
-``stack.sh`` at the end of its run, plus ``unstack.sh`` and
+These hooks are an extension of the service calls in
+``stack.sh`` at specific points in its run, plus ``unstack.sh`` and
 ``clean.sh``. A number of the higher-layer projects are implemented in
 DevStack using this mechanism.
 
@@ -93,7 +93,7 @@
    but after ``unstack.sh`` has been called.
 
 Hypervisor
-~~~~~~~~~~
+==========
 
 Hypervisor plugins are fairly new and condense most hypervisor
 configuration into one place.
diff --git a/doc/source/stackrc.rst b/doc/source/stackrc.rst
index 0faab45..b21f74f 100644
--- a/doc/source/stackrc.rst
+++ b/doc/source/stackrc.rst
@@ -15,12 +15,12 @@
     Specify which services to launch. These generally correspond to
     screen tabs. The default includes: Glance (API and Registry),
     Keystone, Nova (API, Certificate, Object Store, Compute, Network,
-    Scheduler, VNC proxies, Certificate Authentication), Cinder
+    Scheduler, Certificate Authentication), Cinder
     (Scheduler, API, Volume), Horizon, MySQL, RabbitMQ, Tempest.
 
     ::
 
-        ENABLED_SERVICES=g-api,g-reg,key,n-api,n-crt,n-obj,n-cpu,n-net,n-cond,cinder,c-sch,c-api,c-vol,n-sch,n-novnc,n-xvnc,n-cauth,horizon,rabbit,tempest,$DATABASE_TYPE
+        ENABLED_SERVICES=g-api,g-reg,key,n-api,n-crt,n-obj,n-cpu,n-net,n-cond,c-sch,c-api,c-vol,n-sch,n-cauth,horizon,rabbit,tempest,$DATABASE_TYPE
 
     Other services that are not enabled by default can be enabled in
     ``localrc``. For example, to add Swift, use the following service
diff --git a/exercises/trove.sh b/exercises/trove.sh
deleted file mode 100755
index 053f872..0000000
--- a/exercises/trove.sh
+++ /dev/null
@@ -1,49 +0,0 @@
-#!/usr/bin/env bash
-
-# **trove.sh**
-
-# Sanity check that trove started if enabled
-
-echo "*********************************************************************"
-echo "Begin DevStack Exercise: $0"
-echo "*********************************************************************"
-
-# This script exits on an error so that errors don't compound and you see
-# only the first error that occurred.
-set -o errexit
-
-# Print the commands being run so that we can see the command that triggers
-# an error.  It is also useful for following allowing as the install occurs.
-set -o xtrace
-
-
-# Settings
-# ========
-
-# Keep track of the current directory
-EXERCISE_DIR=$(cd $(dirname "$0") && pwd)
-TOP_DIR=$(cd $EXERCISE_DIR/..; pwd)
-
-# Import common functions
-source $TOP_DIR/functions
-
-# Import configuration
-source $TOP_DIR/openrc
-
-# Import exercise configuration
-source $TOP_DIR/exerciserc
-
-is_service_enabled trove || exit 55
-
-# can try to get datastore id
-DSTORE_ID=$(trove datastore-list | tail -n +4 |head -3 | get_field 1)
-die_if_not_set $LINENO  DSTORE_ID "Trove API not functioning!"
-
-DV_ID=$(trove datastore-version-list $DSTORE_ID | tail -n +4 | get_field 1)
-die_if_not_set $LINENO DV_ID "Trove API not functioning!"
-
-set +o xtrace
-echo "*********************************************************************"
-echo "SUCCESS: End DevStack Exercise: $0"
-echo "*********************************************************************"
-
diff --git a/extras.d/70-trove.sh b/extras.d/70-trove.sh
index a4dc7fb..f284354 100644
--- a/extras.d/70-trove.sh
+++ b/extras.d/70-trove.sh
@@ -11,7 +11,6 @@
         cleanup_trove
     elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
         echo_summary "Configuring Trove"
-        configure_troveclient
         configure_trove
 
         if is_service_enabled key; then
diff --git a/files/apts/ironic b/files/apts/ironic
index 45fdecc..f6c7b74 100644
--- a/files/apts/ironic
+++ b/files/apts/ironic
@@ -12,6 +12,7 @@
 qemu
 qemu-kvm
 qemu-utils
+sgabios
 syslinux
 tftpd-hpa
 xinetd
diff --git a/files/rpms-suse/general b/files/rpms-suse/general
index 0a4746f..f1f7e8f 100644
--- a/files/rpms-suse/general
+++ b/files/rpms-suse/general
@@ -22,3 +22,4 @@
 tcpdump
 unzip
 wget
+net-tools
diff --git a/files/rpms/cinder b/files/rpms/cinder
index ce6181e..eedff18 100644
--- a/files/rpms/cinder
+++ b/files/rpms/cinder
@@ -3,4 +3,4 @@
 qemu-img
 postgresql-devel
 iscsi-initiator-utils
-python-lxml         #dist:f19,f20,rhel7
+python-lxml         #dist:f19,f20,f21,rhel7
diff --git a/files/rpms/general b/files/rpms/general
index d4a9fcb..d7ace9b 100644
--- a/files/rpms/general
+++ b/files/rpms/general
@@ -27,6 +27,7 @@
 bc
 libyaml-devel
 gettext  # used for compiling message catalogs
+net-tools
 
 # [1] : some of installed tools have unversioned dependencies on this,
 # but others have versioned (<=0.7).  So if a later version (0.7.1)
diff --git a/files/rpms/glance b/files/rpms/glance
index 5a7f073..d2792cf 100644
--- a/files/rpms/glance
+++ b/files/rpms/glance
@@ -6,10 +6,10 @@
 python-argparse
 python-eventlet
 python-greenlet
-python-lxml         #dist:f19,f20,rhel7
-python-paste-deploy #dist:f19,f20,rhel7
+python-lxml         #dist:f19,f20,f21,rhel7
+python-paste-deploy #dist:f19,f20,f21,rhel7
 python-routes
 python-sqlalchemy
-python-wsgiref      #dist:f18,f19,f20
+python-wsgiref      #dist:f18,f19,f20,f21
 pyxattr
 zlib-devel          # testonly
diff --git a/files/rpms/horizon b/files/rpms/horizon
index 7add23a..1d06ac2 100644
--- a/files/rpms/horizon
+++ b/files/rpms/horizon
@@ -12,8 +12,8 @@
 python-migrate
 python-mox
 python-nose
-python-paste        #dist:f19,f20
-python-paste-deploy #dist:f19,f20
+python-paste        #dist:f19,f20,f21
+python-paste-deploy #dist:f19,f20,f21
 python-routes
 python-sphinx
 python-sqlalchemy
diff --git a/files/rpms/ironic b/files/rpms/ironic
index e646f3a..0a46314 100644
--- a/files/rpms/ironic
+++ b/files/rpms/ironic
@@ -9,6 +9,7 @@
 openssh-clients
 openvswitch
 python-libguestfs
+sgabios
 syslinux
 tftp-server
 xinetd
diff --git a/files/rpms/keystone b/files/rpms/keystone
index ce41ee5..8b0953d 100644
--- a/files/rpms/keystone
+++ b/files/rpms/keystone
@@ -1,10 +1,10 @@
 MySQL-python
 python-greenlet
-libxslt-devel       # dist:f20
-python-lxml         #dist:f19,f20
-python-paste        #dist:f19,f20
-python-paste-deploy #dist:f19,f20
-python-paste-script #dist:f19,f20
+libxslt-devel       # dist:f20,f21
+python-lxml         #dist:f19,f20,f21
+python-paste        #dist:f19,f20,f21
+python-paste-deploy #dist:f19,f20,f21
+python-paste-script #dist:f19,f20,f21
 python-routes
 python-sqlalchemy
 python-webob
diff --git a/files/rpms/neutron b/files/rpms/neutron
index 2c9dd3d..f2473fb 100644
--- a/files/rpms/neutron
+++ b/files/rpms/neutron
@@ -12,8 +12,8 @@
 python-greenlet
 python-iso8601
 #rhel6 gets via pip
-python-paste        # dist:f19,f20,rhel7
-python-paste-deploy # dist:f19,f20,rhel7
+python-paste        # dist:f19,f20,f21,rhel7
+python-paste-deploy # dist:f19,f20,f21,rhel7
 python-qpid # NOPRIME
 python-routes
 python-sqlalchemy
diff --git a/files/rpms/nova b/files/rpms/nova
index f3261c6..07f13c7 100644
--- a/files/rpms/nova
+++ b/files/rpms/nova
@@ -29,11 +29,11 @@
 python-lockfile
 python-migrate
 python-mox
-python-paramiko # dist:f19,f20,rhel7
+python-paramiko # dist:f19,f20,f21,rhel7
 # ^ on RHEL6, brings in python-crypto which conflicts with version from
 # pip we need
-python-paste        # dist:f19,f20,rhel7
-python-paste-deploy # dist:f19,f20,rhel7
+python-paste        # dist:f19,f20,f21,rhel7
+python-paste-deploy # dist:f19,f20,f21,rhel7
 python-qpid # NOPRIME
 python-routes
 python-sqlalchemy
diff --git a/files/rpms/swift b/files/rpms/swift
index 9ec4aab..ccda22b 100644
--- a/files/rpms/swift
+++ b/files/rpms/swift
@@ -6,7 +6,7 @@
 python-greenlet
 python-netifaces
 python-nose
-python-paste-deploy # dist:f19,f20,rhel7
+python-paste-deploy # dist:f19,f20,f21,rhel7
 python-simplejson
 python-webob
 pyxattr
diff --git a/functions-common b/functions-common
index 9f8476e..e890b75 100644
--- a/functions-common
+++ b/functions-common
@@ -25,7 +25,6 @@
 # - ``FILES``
 # - ``OFFLINE``
 # - ``PIP_DOWNLOAD_CACHE``
-# - ``PIP_USE_MIRRORS``
 # - ``RECLONE``
 # - ``REQUIREMENTS_DIR``
 # - ``STACK_USER``
@@ -1559,7 +1558,7 @@
 }
 
 # Wrapper for ``pip install`` to set cache and proxy environment variables
-# Uses globals ``OFFLINE``, ``PIP_DOWNLOAD_CACHE``, ``PIP_USE_MIRRORS``,
+# Uses globals ``OFFLINE``, ``PIP_DOWNLOAD_CACHE``,
 # ``TRACK_DEPENDS``, ``*_proxy``
 # pip_install package [package ...]
 function pip_install {
@@ -1585,21 +1584,13 @@
         local sudo_pip="sudo"
     fi
 
-    # Mirror option not needed anymore because pypi has CDN available,
-    # but it's useful in certain circumstances
-    PIP_USE_MIRRORS=${PIP_USE_MIRRORS:-False}
-    local pip_mirror_opt=""
-    if [[ "$PIP_USE_MIRRORS" != "False" ]]; then
-        pip_mirror_opt="--use-mirrors"
-    fi
-
     $xtrace
     $sudo_pip PIP_DOWNLOAD_CACHE=${PIP_DOWNLOAD_CACHE:-/var/cache/pip} \
         http_proxy=$http_proxy \
         https_proxy=$https_proxy \
         no_proxy=$no_proxy \
         $cmd_pip install \
-        $pip_mirror_opt $@
+        $@
 
     INSTALL_TESTONLY_PACKAGES=$(trueorfalse False $INSTALL_TESTONLY_PACKAGES)
     if [[ "$INSTALL_TESTONLY_PACKAGES" == "True" ]]; then
@@ -1610,7 +1601,7 @@
                 https_proxy=$https_proxy \
                 no_proxy=$no_proxy \
                 $cmd_pip install \
-                $pip_mirror_opt -r $test_req
+                -r $test_req
         fi
     fi
 }
@@ -1634,6 +1625,17 @@
     setup_install $dir
 }
 
+# Set up a library by name in editable mode. If we are trying to use
+# the library from git, we'll do a git-based install; otherwise we'll
+# punt and the library should be installed by a requirements pull from
+# another project.
+#
+# Use this for non-namespaced libraries.
+function setup_dev_lib {
+    local name=$1
+    local dir=${GITDIR[$name]}
+    setup_develop $dir
+}
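+
+# Example usage (a sketch; mirrors the pattern used by lib/ceilometer and
+# lib/cinder, assuming the calling lib file has registered GITDIR for the
+# library):
+#
+#   if use_library_from_git "python-cinderclient"; then
+#       git_clone_by_name "python-cinderclient"
+#       setup_dev_lib "python-cinderclient"
+#   fi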
 
 # this should be used if you want to install globally, all libraries should
 # use this, especially *oslo* ones
diff --git a/lib/ceilometer b/lib/ceilometer
index 483cd27..c4377e0 100644
--- a/lib/ceilometer
+++ b/lib/ceilometer
@@ -35,8 +35,9 @@
 # --------
 
 # Set up default directories
+GITDIR["python-ceilometerclient"]=$DEST/python-ceilometerclient
+
 CEILOMETER_DIR=$DEST/ceilometer
-CEILOMETERCLIENT_DIR=$DEST/python-ceilometerclient
 CEILOMETER_CONF_DIR=/etc/ceilometer
 CEILOMETER_CONF=$CEILOMETER_CONF_DIR/ceilometer.conf
 CEILOMETER_API_LOG_DIR=/var/log/ceilometer-api
@@ -268,9 +269,11 @@
 
 # install_ceilometerclient() - Collect source and prepare
 function install_ceilometerclient {
-    git_clone $CEILOMETERCLIENT_REPO $CEILOMETERCLIENT_DIR $CEILOMETERCLIENT_BRANCH
-    setup_develop $CEILOMETERCLIENT_DIR
-    sudo install -D -m 0644 -o $STACK_USER {$CEILOMETERCLIENT_DIR/tools/,/etc/bash_completion.d/}ceilometer.bash_completion
+    if use_library_from_git "python-ceilometerclient"; then
+        git_clone_by_name "python-ceilometerclient"
+        setup_dev_lib "python-ceilometerclient"
+        sudo install -D -m 0644 -o $STACK_USER {${GITDIR["python-ceilometerclient"]}/tools/,/etc/bash_completion.d/}ceilometer.bash_completion
+    fi
 }
 
 # start_ceilometer() - Start running processes, including screen
diff --git a/lib/ceph b/lib/ceph
index e55738c..2ddf5db 100644
--- a/lib/ceph
+++ b/lib/ceph
@@ -71,6 +71,11 @@
 # Functions
 # ------------
 
+# get_ceph_version() - Report the version of the Ceph monitor running locally
+function get_ceph_version {
+    local ceph_version_str=$(sudo ceph daemon mon.$(hostname) version | cut -d '"' -f 4)
+    echo $ceph_version_str
+}
+
 # import_libvirt_secret_ceph() - Imports Cinder user key into libvirt
 # so it can connect to the Ceph cluster while attaching a Cinder block device
 function import_libvirt_secret_ceph {
@@ -154,10 +159,16 @@
         sleep 5
     done
 
+    # The 'data' and 'metadata' pools were removed in the Giant release,
+    # so apply different commands depending on the Ceph version
+    local ceph_version=$(get_ceph_version)
     # change pool replica size according to the CEPH_REPLICAS set by the user
-    sudo ceph -c ${CEPH_CONF_FILE} osd pool set data size ${CEPH_REPLICAS}
-    sudo ceph -c ${CEPH_CONF_FILE} osd pool set rbd size ${CEPH_REPLICAS}
-    sudo ceph -c ${CEPH_CONF_FILE} osd pool set metadata size ${CEPH_REPLICAS}
+    if [[ ${ceph_version%.*} -eq 0 ]] && [[ ${ceph_version##*.} -lt 87 ]]; then
+        sudo ceph -c ${CEPH_CONF_FILE} osd pool set rbd size ${CEPH_REPLICAS}
+        sudo ceph -c ${CEPH_CONF_FILE} osd pool set data size ${CEPH_REPLICAS}
+        sudo ceph -c ${CEPH_CONF_FILE} osd pool set metadata size ${CEPH_REPLICAS}
+    else
+        sudo ceph -c ${CEPH_CONF_FILE} osd pool set rbd size ${CEPH_REPLICAS}
+    fi
 
     # create a simple rule to take OSDs instead of host with CRUSH
     # then apply this rules to the default pool
diff --git a/lib/cinder b/lib/cinder
index 29cda42..611e1ca 100644
--- a/lib/cinder
+++ b/lib/cinder
@@ -36,8 +36,9 @@
 fi
 
 # set up default directories
+GITDIR["python-cinderclient"]=$DEST/python-cinderclient
+
 CINDER_DIR=$DEST/cinder
-CINDERCLIENT_DIR=$DEST/python-cinderclient
 CINDER_STATE_PATH=${CINDER_STATE_PATH:=$DATA_DIR/cinder}
 CINDER_AUTH_CACHE_DIR=${CINDER_AUTH_CACHE_DIR:-/var/cache/cinder}
 
@@ -257,7 +258,7 @@
     fi
 
     if is_service_enabled swift; then
-        iniset $CINDER_CONF DEFAULT backup_swift_url "http://$SERVICE_HOST:8080/v1/AUTH_"
+        iniset $CINDER_CONF DEFAULT backup_swift_url "$SWIFT_SERVICE_PROTOCOL://$SERVICE_HOST:8080/v1/AUTH_"
     fi
 
     if is_service_enabled ceilometer; then
@@ -402,9 +403,11 @@
 
 # install_cinderclient() - Collect source and prepare
 function install_cinderclient {
-    git_clone $CINDERCLIENT_REPO $CINDERCLIENT_DIR $CINDERCLIENT_BRANCH
-    setup_develop $CINDERCLIENT_DIR
-    sudo install -D -m 0644 -o $STACK_USER {$CINDERCLIENT_DIR/tools/,/etc/bash_completion.d/}cinder.bash_completion
+    if use_library_from_git "python-cinderclient"; then
+        git_clone_by_name "python-cinderclient"
+        setup_dev_lib "python-cinderclient"
+        sudo install -D -m 0644 -o $STACK_USER {${GITDIR["python-cinderclient"]}/tools/,/etc/bash_completion.d/}cinder.bash_completion
+    fi
 }
 
 # apply config.d approach for cinder volumes directory
diff --git a/lib/databases/mysql b/lib/databases/mysql
index 67bf85a..bbf2fd0 100644
--- a/lib/databases/mysql
+++ b/lib/databases/mysql
@@ -26,10 +26,10 @@
         sudo rm -rf /etc/mysql
         return
     elif is_fedora; then
-        if [[ $DISTRO =~ (rhel7) ]]; then
-            MYSQL=mariadb
-        else
+        if [[ $DISTRO =~ (rhel6) ]]; then
             MYSQL=mysqld
+        else
+            MYSQL=mariadb
         fi
     elif is_suse; then
         MYSQL=mysql
@@ -54,10 +54,10 @@
         my_conf=/etc/mysql/my.cnf
         mysql=mysql
     elif is_fedora; then
-        if [[ $DISTRO =~ (rhel7) ]]; then
-            mysql=mariadb
-        else
+        if [[ $DISTRO =~ (rhel6) ]]; then
             mysql=mysqld
+        else
+            mysql=mariadb
         fi
         my_conf=/etc/my.cnf
     elif is_suse; then
@@ -142,10 +142,10 @@
     fi
     # Install mysql-server
     if is_ubuntu || is_fedora; then
-        if [[ $DISTRO =~ (rhel7) ]]; then
-            install_package mariadb-server
-        else
+        if [[ $DISTRO =~ (rhel6|precise) ]]; then
             install_package mysql-server
+        else
+            install_package mariadb-server
         fi
     elif is_suse; then
         if ! is_package_installed mariadb; then
diff --git a/lib/dstat b/lib/dstat
index a2c522c..4ec10dc 100644
--- a/lib/dstat
+++ b/lib/dstat
@@ -1,4 +1,4 @@
-# lib/apache
+# lib/dstat
 # Functions to start and stop dstat
 
 # Dependencies:
@@ -24,7 +24,7 @@
 # start_dstat() - Start running processes, including screen
 function start_dstat {
     # A better kind of sysstat, with the top process per time slice
-    DSTAT_OPTS="-tcmndrylp --top-cpu-adv"
+    DSTAT_OPTS="-tcmndrylpg --top-cpu-adv"
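+    # ('g' adds page/paging statistics to the output)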
     if [[ -n ${SCREEN_LOGDIR} ]]; then
         screen_it dstat "cd $TOP_DIR; dstat $DSTAT_OPTS | tee $SCREEN_LOGDIR/$DSTAT_FILE"
     else
diff --git a/lib/glance b/lib/glance
index 4194842..04c088a 100644
--- a/lib/glance
+++ b/lib/glance
@@ -27,9 +27,10 @@
 # --------
 
 # Set up default directories
+GITDIR["python-glanceclient"]=$DEST/python-glanceclient
+GIRDIR["glance_store"]=$DEST/glance_store
+
 GLANCE_DIR=$DEST/glance
-GLANCE_STORE_DIR=$DEST/glance_store
-GLANCECLIENT_DIR=$DEST/python-glanceclient
 GLANCE_CACHE_DIR=${GLANCE_CACHE_DIR:=$DATA_DIR/glance/cache}
 GLANCE_IMAGE_DIR=${GLANCE_IMAGE_DIR:=$DATA_DIR/glance/images}
 GLANCE_AUTH_CACHE_DIR=${GLANCE_AUTH_CACHE_DIR:-/var/cache/glance}
@@ -286,16 +287,20 @@
 
 # install_glanceclient() - Collect source and prepare
 function install_glanceclient {
-    git_clone $GLANCECLIENT_REPO $GLANCECLIENT_DIR $GLANCECLIENT_BRANCH
-    setup_develop $GLANCECLIENT_DIR
+    if use_library_from_git "python-glanceclient"; then
+        git_clone_by_name "python-glanceclient"
+        setup_dev_lib "python-glanceclient"
+    fi
 }
 
 # install_glance() - Collect source and prepare
 function install_glance {
     # Install glance_store from git so we make sure we're testing
     # the latest code.
-    git_clone $GLANCE_STORE_REPO $GLANCE_STORE_DIR $GLANCE_STORE_BRANCH
-    setup_develop $GLANCE_STORE_DIR
+    if use_library_from_git "glance_store"; then
+        git_clone_by_name "glance_store"
+        setup_dev_lib "glance_store"
+    fi
 
     git_clone $GLANCE_REPO $GLANCE_DIR $GLANCE_BRANCH
     setup_develop $GLANCE_DIR
diff --git a/lib/heat b/lib/heat
index 53eca25..2b55cf0 100644
--- a/lib/heat
+++ b/lib/heat
@@ -29,8 +29,9 @@
 # --------
 
 # set up default directories
+GITDIR["python-heatclient"]=$DEST/python-heatclient
+
 HEAT_DIR=$DEST/heat
-HEATCLIENT_DIR=$DEST/python-heatclient
 HEAT_CFNTOOLS_DIR=$DEST/heat-cfntools
 HEAT_TEMPLATES_REPO_DIR=$DEST/heat-templates
 HEAT_AUTH_CACHE_DIR=${HEAT_AUTH_CACHE_DIR:-/var/cache/heat}
@@ -183,9 +184,11 @@
 
 # install_heatclient() - Collect source and prepare
 function install_heatclient {
-    git_clone $HEATCLIENT_REPO $HEATCLIENT_DIR $HEATCLIENT_BRANCH
-    setup_develop $HEATCLIENT_DIR
-    sudo install -D -m 0644 -o $STACK_USER {$HEATCLIENT_DIR/tools/,/etc/bash_completion.d/}heat.bash_completion
+    if use_library_from_git "python-heatclient"; then
+        git_clone_by_name "python-heatclient"
+        setup_dev_lib "python-heatclient"
+        sudo install -D -m 0644 -o $STACK_USER {${GITDIR["python-heatclient"]}/tools/,/etc/bash_completion.d/}heat.bash_completion
+    fi
 }
 
 # install_heat() - Collect source and prepare
diff --git a/lib/horizon b/lib/horizon
index 0213948..9fd1b85 100644
--- a/lib/horizon
+++ b/lib/horizon
@@ -25,8 +25,9 @@
 # --------
 
 # Set up default directories
+GITDIR["django_openstack_auth"]=$DEST/django_openstack_auth
+
 HORIZON_DIR=$DEST/horizon
-HORIZONAUTH_DIR=$DEST/django_openstack_auth
 
 # local_settings.py is used to customize Dashboard settings.
 # The example file in Horizon repo is used by default.
@@ -89,9 +90,7 @@
     # Horizon is installed as develop mode, so we can compile here.
     # Message catalog compilation is handled by Django admin script,
     # so compiling them after the installation avoids Django installation twice.
-    cd $HORIZON_DIR
-    ./run_tests.sh -N --compilemessages
-    cd -
+    (cd $HORIZON_DIR; ./run_tests.sh -N --compilemessages)
 }
 
 # init_horizon() - Initialize databases, etc.
@@ -100,6 +99,8 @@
     local local_settings=$HORIZON_DIR/openstack_dashboard/local/local_settings.py
     cp $HORIZON_SETTINGS $local_settings
 
+    _horizon_config_set $local_settings "" COMPRESS_OFFLINE True
+
     _horizon_config_set $local_settings "" OPENSTACK_HOST \"${KEYSTONE_SERVICE_HOST}\"
     _horizon_config_set $local_settings "" OPENSTACK_KEYSTONE_URL "\"${KEYSTONE_SERVICE_PROTOCOL}://${KEYSTONE_SERVICE_HOST}:${KEYSTONE_SERVICE_PORT}/v2.0\""
     if [[ -n "$KEYSTONE_TOKEN_HASH_ALGORITHM" ]]; then
@@ -141,19 +142,31 @@
     # and run_process
     sudo rm -f /var/log/$APACHE_NAME/horizon_*
 
+    # Set up an alias for django-admin, which can differ depending on the distro
+    local django_admin
+    if type -p django-admin > /dev/null; then
+        django_admin=django-admin
+    else
+        django_admin=django-admin.py
+    fi
+
+    DJANGO_SETTINGS_MODULE=openstack_dashboard.settings $django_admin collectstatic --noinput
+    DJANGO_SETTINGS_MODULE=openstack_dashboard.settings $django_admin compress --force
+
 }
 
 # install_django_openstack_auth() - Collect source and prepare
 function install_django_openstack_auth {
-    git_clone $HORIZONAUTH_REPO $HORIZONAUTH_DIR $HORIZONAUTH_BRANCH
-
-    # Compile message catalogs before installation
-    _prepare_message_catalog_compilation
-    cd $HORIZONAUTH_DIR
-    python setup.py compile_catalog
-    cd -
-
-    setup_install $HORIZONAUTH_DIR
+    if use_library_from_git "django_openstack_auth"; then
+        local dir=${GITDIR["django_openstack_auth"]}
+        git_clone_by_name "django_openstack_auth"
+        # Compile message catalogs before installation
+        _prepare_message_catalog_compilation
+        (cd $dir; python setup.py compile_catalog)
+        setup_dev_lib "django_openstack_auth"
+    fi
+    # if we aren't using this library from git, then we just let it
+    # get dragged in by the horizon setup.
 }
 
 # install_horizon() - Collect source and prepare
diff --git a/lib/ironic b/lib/ironic
index cf005a7..f2b1fb2 100644
--- a/lib/ironic
+++ b/lib/ironic
@@ -28,17 +28,30 @@
 # --------
 
 # Set up default directories
+GITDIR["python-ironicclient"]=$DEST/python-ironicclient
+
 IRONIC_DIR=$DEST/ironic
 IRONIC_PYTHON_AGENT_DIR=$DEST/ironic-python-agent
 IRONIC_DATA_DIR=$DATA_DIR/ironic
 IRONIC_STATE_PATH=/var/lib/ironic
-IRONICCLIENT_DIR=$DEST/python-ironicclient
 IRONIC_AUTH_CACHE_DIR=${IRONIC_AUTH_CACHE_DIR:-/var/cache/ironic}
 IRONIC_CONF_DIR=${IRONIC_CONF_DIR:-/etc/ironic}
 IRONIC_CONF_FILE=$IRONIC_CONF_DIR/ironic.conf
 IRONIC_ROOTWRAP_CONF=$IRONIC_CONF_DIR/rootwrap.conf
 IRONIC_POLICY_JSON=$IRONIC_CONF_DIR/policy.json
 
+# Deploy to hardware platform
+IRONIC_HW_NODE_CPU=${IRONIC_HW_NODE_CPU:-1}
+IRONIC_HW_NODE_RAM=${IRONIC_HW_NODE_RAM:-512}
+IRONIC_HW_NODE_DISK=${IRONIC_HW_NODE_DISK:-10}
+IRONIC_HW_EPHEMERAL_DISK=${IRONIC_HW_EPHEMERAL_DISK:-0}
+# The file is composed of multiple lines; each line includes four fields
+# separated by white space: IPMI address, MAC address, IPMI username
+# and IPMI password.
+# An example:
+#   192.168.110.107 00:1e:67:57:50:4c root otc123
+IRONIC_IPMIINFO_FILE=${IRONIC_IPMIINFO_FILE:-$IRONIC_DATA_DIR/hardware_info}
+
 # Set up defaults for functional / integration testing
 IRONIC_SCRIPTS_DIR=${IRONIC_SCRIPTS_DIR:-$TOP_DIR/tools/ironic/scripts}
 IRONIC_TEMPLATES_DIR=${IRONIC_TEMPLATES_DIR:-$TOP_DIR/tools/ironic/templates}
@@ -50,6 +63,7 @@
 IRONIC_KEY_FILE=$IRONIC_SSH_KEY_DIR/$IRONIC_SSH_KEY_FILENAME
 IRONIC_SSH_VIRT_TYPE=${IRONIC_SSH_VIRT_TYPE:-virsh}
 IRONIC_TFTPBOOT_DIR=${IRONIC_TFTPBOOT_DIR:-$IRONIC_DATA_DIR/tftpboot}
+IRONIC_TFTPSERVER_IP=${IRONIC_TFTPSERVER_IP:-$HOST_IP}
 IRONIC_VM_SSH_PORT=${IRONIC_VM_SSH_PORT:-22}
 IRONIC_VM_SSH_ADDRESS=${IRONIC_VM_SSH_ADDRESS:-$HOST_IP}
 IRONIC_VM_COUNT=${IRONIC_VM_COUNT:-1}
@@ -79,7 +93,7 @@
 IRONIC_AGENT_RAMDISK_URL=${IRONIC_AGENT_RAMDISK_URL:-http://tarballs.openstack.org/ironic-python-agent/coreos/files/coreos_production_pxe_image-oem.cpio.gz}
 
 # Which deploy driver to use - valid choices right now
-# are 'pxe_ssh' and 'agent_ssh'.
+# are 'pxe_ssh', 'pxe_ipmitool', 'agent_ssh' and 'agent_ipmitool'.
 IRONIC_DEPLOY_DRIVER=${IRONIC_DEPLOY_DRIVER:-pxe_ssh}
 
 #TODO(agordeev): replace 'ubuntu' with host distro name getting
@@ -90,7 +104,8 @@
 
 # Ironic connection info.  Note the port must be specified.
 IRONIC_SERVICE_PROTOCOL=http
-IRONIC_HOSTPORT=${IRONIC_HOSTPORT:-$SERVICE_HOST:6385}
+IRONIC_SERVICE_PORT=${IRONIC_SERVICE_PORT:-6385}
+IRONIC_HOSTPORT=${IRONIC_HOSTPORT:-$SERVICE_HOST:$IRONIC_SERVICE_PORT}
 
 # Tell Tempest this project is present
 TEMPEST_SERVICES+=,ironic
@@ -132,6 +147,16 @@
     return 1
 }
 
+function is_ironic_hardware {
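+    # True when Ironic is enabled and the deploy driver is not one of the
+    # *_ssh (virtual) drivers, i.e. we are deploying to real hardware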
+    is_ironic_enabled && [[ -n "${IRONIC_DEPLOY_DRIVER##*_ssh}" ]] && return 0
+    return 1
+}
+
+function is_deployed_by_agent {
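+    # True when the deploy driver starts with "agent" (e.g. agent_ssh, agent_ipmitool)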
+    [[ -z "${IRONIC_DEPLOY_DRIVER%%agent*}" ]] && return 0
+    return 1
+}
+
 # install_ironic() - Collect source and prepare
 function install_ironic {
     # make sure all needed service were enabled
@@ -146,13 +171,26 @@
     if [[ "$IRONIC_IPXE_ENABLED" == "True" ]] ; then
         install_apache_wsgi
     fi
+
+    if [[ "$IRONIC_VM_LOG_CONSOLE" == "True" ]] && is_ubuntu; then
+        # Ubuntu packaging+apparmor issue prevents libvirt from loading
+        # the ROM from /usr/share/misc.  Workaround by installing it directly
+        # to a directory that it can read from. (LP: #1393548)
+        sudo rm -rf /usr/share/qemu/sgabios.bin
+        sudo cp /usr/share/misc/sgabios.bin /usr/share/qemu/sgabios.bin
+    fi
 }
 
 # install_ironicclient() - Collect sources and prepare
 function install_ironicclient {
-    git_clone $IRONICCLIENT_REPO $IRONICCLIENT_DIR $IRONICCLIENT_BRANCH
-    setup_develop $IRONICCLIENT_DIR
-    sudo install -D -m 0644 -o $STACK_USER {$IRONICCLIENT_DIR/tools/,/etc/bash_completion.d/}ironic.bash_completion
+    if use_library_from_git "python-ironicclient"; then
+        git_clone_by_name "python-ironicclient"
+        setup_dev_lib "python-ironicclient"
+        sudo install -D -m 0644 -o $STACK_USER {${GITDIR["python-ironicclient"]}/tools/,/etc/bash_completion.d/}ironic.bash_completion
+    else
+        # nothing actually "requires" ironicclient, so force an install from pypi
+        pip_install python-ironicclient
+    fi
 }
 
 # _cleanup_ironic_apache_wsgi() - Remove wsgi files, disable and remove apache vhost file
@@ -177,7 +215,7 @@
 # cleanup_ironic() - Remove residual data files, anything left over from previous
 # runs that would need to clean up.
 function cleanup_ironic {
-    sudo rm -rf $IRONIC_AUTH_CACHE_DIR
+    sudo rm -rf $IRONIC_AUTH_CACHE_DIR $IRONIC_CONF_DIR
 }
 
 # configure_ironic_dirs() - Create all directories required by Ironic and
@@ -245,6 +283,7 @@
     iniset $IRONIC_CONF_FILE DEFAULT policy_file $IRONIC_POLICY_JSON
     configure_auth_token_middleware $IRONIC_CONF_FILE ironic $IRONIC_AUTH_CACHE_DIR/api
     iniset_rpc_backend ironic $IRONIC_CONF_FILE DEFAULT
+    iniset $IRONIC_CONF_FILE api port $IRONIC_SERVICE_PORT
 
     cp -p $IRONIC_DIR/etc/ironic/policy.json $IRONIC_POLICY_JSON
 }
@@ -266,14 +305,14 @@
 
     iniset $IRONIC_CONF_FILE DEFAULT rootwrap_config $IRONIC_ROOTWRAP_CONF
     iniset $IRONIC_CONF_FILE DEFAULT enabled_drivers $IRONIC_ENABLED_DRIVERS
-    iniset $IRONIC_CONF_FILE conductor api_url http://$HOST_IP:6385
-    iniset $IRONIC_CONF_FILE pxe tftp_server $HOST_IP
+    iniset $IRONIC_CONF_FILE conductor api_url $IRONIC_SERVICE_PROTOCOL://$HOST_IP:$IRONIC_SERVICE_PORT
+    iniset $IRONIC_CONF_FILE pxe tftp_server $IRONIC_TFTPSERVER_IP
     iniset $IRONIC_CONF_FILE pxe tftp_root $IRONIC_TFTPBOOT_DIR
     iniset $IRONIC_CONF_FILE pxe tftp_master_path $IRONIC_TFTPBOOT_DIR/master_images
     if [[ "$IRONIC_VM_LOG_CONSOLE" == "True" ]] ; then
         iniset $IRONIC_CONF_FILE pxe pxe_append_params "nofb nomodeset vga=normal console=ttyS0"
     fi
-    if [[ "$IRONIC_DEPLOY_DRIVER" == "agent_ssh" ]] ; then
+    if is_deployed_by_agent; then
         if [[ "$SWIFT_ENABLE_TEMPURLS" == "True" ]] ; then
             iniset $IRONIC_CONF_FILE glance swift_temp_url_key $SWIFT_TEMPURL_KEY
         else
@@ -387,7 +426,7 @@
 function start_ironic_api {
     run_process ir-api "$IRONIC_BIN_DIR/ironic-api --config-file=$IRONIC_CONF_FILE"
     echo "Waiting for ir-api ($IRONIC_HOSTPORT) to start..."
-    if ! timeout $SERVICE_TIMEOUT sh -c "while ! wget --no-proxy -q -O- http://$IRONIC_HOSTPORT; do sleep 1; done"; then
+    if ! timeout $SERVICE_TIMEOUT sh -c "while ! wget --no-proxy -q -O- $IRONIC_SERVICE_PROTOCOL://$IRONIC_HOSTPORT; do sleep 1; done"; then
         die $LINENO "ir-api did not start"
     fi
 }
@@ -469,46 +508,81 @@
     create_ovs_taps
 }
 
-function enroll_vms {
+function enroll_nodes {
     local chassis_id=$(ironic chassis-create -d "ironic test chassis" | grep " uuid " | get_field 2)
     local idx=0
 
     if [[ "$IRONIC_DEPLOY_DRIVER" == "pxe_ssh" ]] ; then
         local _IRONIC_DEPLOY_KERNEL_KEY=pxe_deploy_kernel
         local _IRONIC_DEPLOY_RAMDISK_KEY=pxe_deploy_ramdisk
-    elif [[ "$IRONIC_DEPLOY_DRIVER" == "agent_ssh" ]] ; then
+    elif is_deployed_by_agent; then
         local _IRONIC_DEPLOY_KERNEL_KEY=deploy_kernel
         local _IRONIC_DEPLOY_RAMDISK_KEY=deploy_ramdisk
     fi
 
-    while read MAC; do
-
-        local node_id=$(ironic node-create --chassis_uuid $chassis_id \
-            --driver $IRONIC_DEPLOY_DRIVER \
+    if ! is_ironic_hardware; then
+        local ironic_node_cpu=$IRONIC_VM_SPECS_CPU
+        local ironic_node_ram=$IRONIC_VM_SPECS_RAM
+        local ironic_node_disk=$IRONIC_VM_SPECS_DISK
+        local ironic_ephemeral_disk=$IRONIC_VM_EPHEMERAL_DISK
+        local ironic_hwinfo_file=$IRONIC_VM_MACS_CSV_FILE
+        local node_options="\
             -i $_IRONIC_DEPLOY_KERNEL_KEY=$IRONIC_DEPLOY_KERNEL_ID \
             -i $_IRONIC_DEPLOY_RAMDISK_KEY=$IRONIC_DEPLOY_RAMDISK_ID \
             -i ssh_virt_type=$IRONIC_SSH_VIRT_TYPE \
             -i ssh_address=$IRONIC_VM_SSH_ADDRESS \
             -i ssh_port=$IRONIC_VM_SSH_PORT \
             -i ssh_username=$IRONIC_SSH_USERNAME \
-            -i ssh_key_filename=$IRONIC_SSH_KEY_DIR/$IRONIC_SSH_KEY_FILENAME \
-            -p cpus=$IRONIC_VM_SPECS_CPU \
-            -p memory_mb=$IRONIC_VM_SPECS_RAM \
-            -p local_gb=$IRONIC_VM_SPECS_DISK \
+            -i ssh_key_filename=$IRONIC_SSH_KEY_DIR/$IRONIC_SSH_KEY_FILENAME"
+    else
+        local ironic_node_cpu=$IRONIC_HW_NODE_CPU
+        local ironic_node_ram=$IRONIC_HW_NODE_RAM
+        local ironic_node_disk=$IRONIC_HW_NODE_DISK
+        local ironic_ephemeral_disk=$IRONIC_HW_EPHEMERAL_DISK
+        if [[ -z "${IRONIC_DEPLOY_DRIVER##*_ipmitool}" ]]; then
+            local ironic_hwinfo_file=$IRONIC_IPMIINFO_FILE
+        fi
+    fi
+
+    while read hardware_info; do
+        if ! is_ironic_hardware; then
+            local mac_address=$hardware_info
+        elif [[ -z "${IRONIC_DEPLOY_DRIVER##*_ipmitool}" ]]; then
+            local ipmi_address=$(echo $hardware_info |awk  '{print $1}')
+            local mac_address=$(echo $hardware_info |awk '{print $2}')
+            local ironic_ipmi_username=$(echo $hardware_info |awk '{print $3}')
+            local ironic_ipmi_passwd=$(echo $hardware_info |awk '{print $4}')
+            # Currently we require all hardware platforms to have the same CPU/RAM/DISK info.
+            # In the future this can be enhanced to support different types, and then
+            # the bare metal flavor can be created with the minimum values.
+            local node_options="-i ipmi_address=$ipmi_address -i ipmi_password=$ironic_ipmi_passwd\
+                -i ipmi_username=$ironic_ipmi_username"
+            if is_deployed_by_agent; then
+                node_options+=" -i $_IRONIC_DEPLOY_KERNEL_KEY=$IRONIC_DEPLOY_KERNEL_ID"
+                node_options+=" -i $_IRONIC_DEPLOY_RAMDISK_KEY=$IRONIC_DEPLOY_RAMDISK_ID"
+            fi
+        fi
+
+        local node_id=$(ironic node-create --chassis_uuid $chassis_id \
+            --driver $IRONIC_DEPLOY_DRIVER \
+            -p cpus=$ironic_node_cpu \
+            -p memory_mb=$ironic_node_ram \
+            -p local_gb=$ironic_node_disk \
             -p cpu_arch=x86_64 \
+            $node_options \
             | grep " uuid " | get_field 2)
 
-        ironic port-create --address $MAC --node_uuid $node_id
+        ironic port-create --address $mac_address --node_uuid $node_id
 
         idx=$((idx+1))
-    done < $IRONIC_VM_MACS_CSV_FILE
+    done < $ironic_hwinfo_file
 
     # create the nova flavor
     # NOTE(adam_g): Attempting to use an autogenerated UUID for flavor id here uncovered
     # bug (LP: #1333852) in Trove.  This can be changed to use an auto flavor id when the
     # bug is fixed in Juno.
-    local adjusted_disk=$(($IRONIC_VM_SPECS_DISK - $IRONIC_VM_EPHEMERAL_DISK))
-    nova flavor-create --ephemeral $IRONIC_VM_EPHEMERAL_DISK baremetal 551 $IRONIC_VM_SPECS_RAM $adjusted_disk $IRONIC_VM_SPECS_CPU
+    local adjusted_disk=$(($ironic_node_disk - $ironic_ephemeral_disk))
+    nova flavor-create --ephemeral $ironic_ephemeral_disk baremetal 551 $ironic_node_ram $adjusted_disk $ironic_node_cpu
 
     # TODO(lucasagomes): Remove the 'baremetal:deploy_kernel_id'
     # and 'baremetal:deploy_ramdisk_id' parameters
@@ -523,8 +597,8 @@
     sudo modprobe nf_nat_tftp
     # nodes boot from TFTP and callback to the API server listening on $HOST_IP
     sudo iptables -I INPUT -d $HOST_IP -p udp --dport 69 -j ACCEPT || true
-    sudo iptables -I INPUT -d $HOST_IP -p tcp --dport $IRONIC_HOSTPORT -j ACCEPT || true
-    if [ "$IRONIC_DEPLOY_DRIVER" == "agent_ssh" ]; then
+    sudo iptables -I INPUT -d $HOST_IP -p tcp --dport $IRONIC_SERVICE_PORT -j ACCEPT || true
+    if is_deployed_by_agent; then
         # agent ramdisk gets instance image from swift
         sudo iptables -I INPUT -d $HOST_IP -p tcp --dport ${SWIFT_DEFAULT_BIND_PORT:-8080} -j ACCEPT || true
     fi
@@ -600,8 +674,8 @@
     fi
 
     if [ -z "$IRONIC_DEPLOY_KERNEL" -o -z "$IRONIC_DEPLOY_RAMDISK" ]; then
-        local IRONIC_DEPLOY_KERNEL_PATH=$TOP_DIR/files/ir-deploy.kernel
-        local IRONIC_DEPLOY_RAMDISK_PATH=$TOP_DIR/files/ir-deploy.initramfs
+        local IRONIC_DEPLOY_KERNEL_PATH=$TOP_DIR/files/ir-deploy-$IRONIC_DEPLOY_DRIVER.kernel
+        local IRONIC_DEPLOY_RAMDISK_PATH=$TOP_DIR/files/ir-deploy-$IRONIC_DEPLOY_DRIVER.initramfs
     else
         local IRONIC_DEPLOY_KERNEL_PATH=$IRONIC_DEPLOY_KERNEL
         local IRONIC_DEPLOY_RAMDISK_PATH=$IRONIC_DEPLOY_RAMDISK
@@ -612,17 +686,17 @@
         if [ "$IRONIC_BUILD_DEPLOY_RAMDISK" = "True" ]; then
             # we can build them only if we're not offline
             if [ "$OFFLINE" != "True" ]; then
-                if [ "$IRONIC_DEPLOY_DRIVER" == "agent_ssh" ]; then
+                if is_deployed_by_agent; then
                     build_ipa_coreos_ramdisk $IRONIC_DEPLOY_KERNEL_PATH $IRONIC_DEPLOY_RAMDISK_PATH
                 else
                     ramdisk-image-create $IRONIC_DEPLOY_FLAVOR \
-                        -o $TOP_DIR/files/ir-deploy
+                        -o $TOP_DIR/files/ir-deploy-$IRONIC_DEPLOY_DRIVER
                 fi
             else
                 die $LINENO "Deploy kernel+ramdisk files don't exist and cannot be build in OFFLINE mode"
             fi
         else
-            if [ "$IRONIC_DEPLOY_DRIVER" == "agent_ssh" ]; then
+            if is_deployed_by_agent; then
                 # download the agent image tarball
                 wget "$IRONIC_AGENT_KERNEL_URL" -O $IRONIC_DEPLOY_KERNEL_PATH
                 wget "$IRONIC_AGENT_RAMDISK_URL" -O $IRONIC_DEPLOY_RAMDISK_PATH
@@ -656,11 +730,15 @@
 
 function prepare_baremetal_basic_ops {
     upload_baremetal_ironic_deploy
-    create_bridge_and_vms
-    enroll_vms
+    if ! is_ironic_hardware; then
+        create_bridge_and_vms
+    fi
+    enroll_nodes
     configure_tftpd
     configure_iptables
-    configure_ironic_auxiliary
+    if ! is_ironic_hardware; then
+        configure_ironic_auxiliary
+    fi
 }
 
 function cleanup_baremetal_basic_ops {
@@ -681,8 +759,8 @@
     sudo rm -rf /etc/xinetd.d/tftp /etc/init/tftpd-hpa.override
     restart_service xinetd
     sudo iptables -D INPUT -d $HOST_IP -p udp --dport 69 -j ACCEPT || true
-    sudo iptables -D INPUT -d $HOST_IP -p tcp --dport 6385 -j ACCEPT || true
-    if [ "$IRONIC_DEPLOY_DRIVER" == "agent_ssh" ]; then
+    sudo iptables -D INPUT -d $HOST_IP -p tcp --dport $IRONIC_SERVICE_PORT -j ACCEPT || true
+    if is_deployed_by_agent; then
         # agent ramdisk gets instance image from swift
         sudo iptables -D INPUT -d $HOST_IP -p tcp --dport ${SWIFT_DEFAULT_BIND_PORT:-8080} -j ACCEPT || true
     fi
diff --git a/lib/keystone b/lib/keystone
index 276e971..72a79be 100644
--- a/lib/keystone
+++ b/lib/keystone
@@ -33,6 +33,9 @@
 # --------
 
 # Set up default directories
+GITDIR["python-keystoneclient"]=$DEST/python-keystoneclient
+GITDIR["keystonemiddleware"]=$DEST/keystonemiddleware
+
 KEYSTONE_DIR=$DEST/keystone
 KEYSTONE_CONF_DIR=${KEYSTONE_CONF_DIR:-/etc/keystone}
 KEYSTONE_CONF=$KEYSTONE_CONF_DIR/keystone.conf
@@ -44,9 +47,6 @@
     KEYSTONE_WSGI_DIR=${KEYSTONE_WSGI_DIR:-/var/www/keystone}
 fi
 
-KEYSTONEMIDDLEWARE_DIR=$DEST/keystonemiddleware
-KEYSTONECLIENT_DIR=$DEST/python-keystoneclient
-
 # Set up additional extensions, such as oauth1, federation
 # Example of KEYSTONE_EXTENSIONS=oauth1,federation
 KEYSTONE_EXTENSIONS=${KEYSTONE_EXTENSIONS:-}
@@ -479,15 +479,19 @@
 
 # install_keystoneclient() - Collect source and prepare
 function install_keystoneclient {
-    git_clone $KEYSTONECLIENT_REPO $KEYSTONECLIENT_DIR $KEYSTONECLIENT_BRANCH
-    setup_develop $KEYSTONECLIENT_DIR
-    sudo install -D -m 0644 -o $STACK_USER {$KEYSTONECLIENT_DIR/tools/,/etc/bash_completion.d/}keystone.bash_completion
+    if use_library_from_git "python-keystoneclient"; then
+        git_clone_by_name "python-keystoneclient"
+        setup_dev_lib "python-keystoneclient"
+        sudo install -D -m 0644 -o $STACK_USER {${GITDIR["python-keystoneclient"]}/tools/,/etc/bash_completion.d/}keystone.bash_completion
+    fi
 }
 
 # install_keystonemiddleware() - Collect source and prepare
 function install_keystonemiddleware {
-    git_clone $KEYSTONEMIDDLEWARE_REPO $KEYSTONEMIDDLEWARE_DIR $KEYSTONEMIDDLEWARE_BRANCH
-    setup_install $KEYSTONEMIDDLEWARE_DIR
+    if use_library_from_git "keystonemiddleware"; then
+        git_clone_by_name "keystonemiddleware"
+        setup_dev_lib "keystonemiddleware"
+    fi
 }
 
 # install_keystone() - Collect source and prepare
diff --git a/lib/neutron b/lib/neutron
index eb07f40..09d9354 100644
--- a/lib/neutron
+++ b/lib/neutron
@@ -51,10 +51,22 @@
 #
 # With Neutron networking the NETWORK_MANAGER variable is ignored.
 
+# Settings
+# --------
+
+# Timeout value in seconds to wait for IPv6 gateway configuration
+GATEWAY_TIMEOUT=30
+
 
 # Neutron Network Configuration
 # -----------------------------
 
+# Subnet IP version
+IP_VERSION=${IP_VERSION:-4}
+# Validate IP_VERSION
+if [[ $IP_VERSION != "4" ]] && [[ $IP_VERSION != "6" ]] && [[ $IP_VERSION != "4+6" ]]; then
+    die $LINENO "IP_VERSION must be either 4, 6, or 4+6"
+fi
 # Gateway and subnet defaults, in case they are not customized in localrc
 NETWORK_GATEWAY=${NETWORK_GATEWAY:-10.0.0.1}
 PUBLIC_NETWORK_GATEWAY=${PUBLIC_NETWORK_GATEWAY:-172.24.4.1}
@@ -65,10 +77,28 @@
     Q_PROTOCOL="https"
 fi
 
+# Generate 40-bit IPv6 Global ID to comply with RFC 4193
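+# (For example, a UUID ending in ...05e82c3301 becomes "05:e82c:3301",
+# so FIXED_RANGE_V6 below defaults to fd05:e82c:3301::/64)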
+IPV6_GLOBAL_ID=`uuidgen | sed s/-//g | cut -c 23- | sed -e "s/\(..\)\(....\)\(....\)/\1:\2:\3/"`
+
+# IPv6 gateway and subnet defaults, in case they are not customized in localrc
+IPV6_RA_MODE=${IPV6_RA_MODE:-slaac}
+IPV6_ADDRESS_MODE=${IPV6_ADDRESS_MODE:-slaac}
+IPV6_PUBLIC_SUBNET_NAME=${IPV6_PUBLIC_SUBNET_NAME:-ipv6-public-subnet}
+IPV6_PRIVATE_SUBNET_NAME=${IPV6_PRIVATE_SUBNET_NAME:-ipv6-private-subnet}
+FIXED_RANGE_V6=${FIXED_RANGE_V6:-fd$IPV6_GLOBAL_ID::/64}
+IPV6_PRIVATE_NETWORK_GATEWAY=${IPV6_PRIVATE_NETWORK_GATEWAY:-fd$IPV6_GLOBAL_ID::1}
+IPV6_PUBLIC_RANGE=${IPV6_PUBLIC_RANGE:-fe80:cafe:cafe::/64}
+IPV6_PUBLIC_NETWORK_GATEWAY=${IPV6_PUBLIC_NETWORK_GATEWAY:-fe80:cafe:cafe::2}
+# IPV6_ROUTER_GW_IP must be defined when IP_VERSION=4+6 as it cannot be
+# obtained conventionally until the l3-agent has support for dual-stack
+# TODO (john-davidge) Remove once l3-agent supports dual-stack
+IPV6_ROUTER_GW_IP=${IPV6_ROUTER_GW_IP:-fe80:cafe:cafe::1}
 
 # Set up default directories
+GITDIR["python-neutronclient"]=$DEST/python-neutronclient
+
+
 NEUTRON_DIR=$DEST/neutron
-NEUTRONCLIENT_DIR=$DEST/python-neutronclient
 NEUTRON_AUTH_CACHE_DIR=${NEUTRON_AUTH_CACHE_DIR:-/var/cache/neutron}
 
 # Support entry points installation of console scripts
@@ -131,6 +161,8 @@
 Q_NOTIFY_NOVA_PORT_DATA_CHANGES=${Q_NOTIFY_NOVA_PORT_DATA_CHANGES:-True}
 VIF_PLUGGING_IS_FATAL=${VIF_PLUGGING_IS_FATAL:-True}
 VIF_PLUGGING_TIMEOUT=${VIF_PLUGGING_TIMEOUT:-300}
+# Specify if the initial private and external networks should be created
+NEUTRON_CREATE_INITIAL_NETWORKS=${NEUTRON_CREATE_INITIAL_NETWORKS:-True}
 
 ## Provider Network Information
 PROVIDER_SUBNET_NAME=${PROVIDER_SUBNET_NAME:-"provider_net"}
@@ -519,7 +551,7 @@
         die_if_not_set $LINENO PHYSICAL_NETWORK "You must specify the PHYSICAL_NETWORK"
         die_if_not_set $LINENO PROVIDER_NETWORK_TYPE "You must specifiy the PROVIDER_NETWORK_TYPE"
         NET_ID=$(neutron net-create $PHYSICAL_NETWORK --tenant_id $TENANT_ID --provider:network_type $PROVIDER_NETWORK_TYPE --provider:physical_network "$PHYSICAL_NETWORK" ${SEGMENTATION_ID:+--provider:segmentation_id $SEGMENTATION_ID} --shared | grep ' id ' | get_field 2)
-        SUBNET_ID=$(neutron subnet-create --tenant_id $TENANT_ID --ip_version 4 ${ALLOCATION_POOL:+--allocation-pool $ALLOCATION_POOL} --name $PROVIDER_SUBNET_NAME $NET_ID $FIXED_RANGE | grep ' id ' | get_field 2)
+        SUBNET_ID=$(neutron subnet-create --tenant_id $TENANT_ID --ip_version 4 ${ALLOCATION_POOL:+--allocation-pool $ALLOCATION_POOL} --name $PROVIDER_SUBNET_NAME --gateway $NETWORK_GATEWAY $NET_ID $FIXED_RANGE | grep ' id ' | get_field 2)
         SUBNET_V6_ID=$(neutron subnet-create --tenant_id $TENANT_ID --ip_version 6 --ipv6-address-mode slaac --gateway $V6_NETWORK_GATEWAY --name $PROVIDER_SUBNET_NAME_V6 $NET_ID $FIXED_RANGE_V6 | grep 'id' | get_field 2)
         sudo ip link set $OVS_PHYSICAL_BRIDGE up
         sudo ip link set br-int up
@@ -527,8 +559,16 @@
     else
         NET_ID=$(neutron net-create --tenant-id $TENANT_ID "$PRIVATE_NETWORK_NAME" | grep ' id ' | get_field 2)
         die_if_not_set $LINENO NET_ID "Failure creating NET_ID for $PHYSICAL_NETWORK $TENANT_ID"
-        SUBNET_ID=$(neutron subnet-create --tenant-id $TENANT_ID --ip_version 4 --gateway $NETWORK_GATEWAY --name $PRIVATE_SUBNET_NAME $NET_ID $FIXED_RANGE | grep ' id ' | get_field 2)
-        die_if_not_set $LINENO SUBNET_ID "Failure creating SUBNET_ID for $TENANT_ID"
+
+        if [[ "$IP_VERSION" =~ 4.* ]]; then
+            # Create IPv4 private subnet
+            SUBNET_ID=$(_neutron_create_private_subnet_v4)
+        fi
+
+        if [[ "$IP_VERSION" =~ .*6 ]]; then
+            # Create IPv6 private subnet
+            IPV6_SUBNET_ID=$(_neutron_create_private_subnet_v6)
+        fi
     fi
 
     if [[ "$Q_L3_ENABLED" == "True" ]]; then
@@ -542,7 +582,7 @@
             ROUTER_ID=$(neutron router-create $Q_ROUTER_NAME | grep ' id ' | get_field 2)
             die_if_not_set $LINENO ROUTER_ID "Failure creating ROUTER_ID for $Q_ROUTER_NAME"
         fi
-        neutron router-interface-add $ROUTER_ID $SUBNET_ID
+
         # Create an external network, and a subnet. Configure the external network as router gw
         if [ "$Q_USE_PROVIDERNET_FOR_PUBLIC" = "True" ]; then
             EXT_NET_ID=$(neutron net-create "$PUBLIC_NETWORK_NAME" -- --router:external=True --provider:network_type=flat --provider:physical_network=${PUBLIC_PHYSICAL_NETWORK} | grep ' id ' | get_field 2)
@@ -550,35 +590,15 @@
             EXT_NET_ID=$(neutron net-create "$PUBLIC_NETWORK_NAME" -- --router:external=True | grep ' id ' | get_field 2)
         fi
         die_if_not_set $LINENO EXT_NET_ID "Failure creating EXT_NET_ID for $PUBLIC_NETWORK_NAME"
-        EXT_GW_IP=$(neutron subnet-create --ip_version 4 ${Q_FLOATING_ALLOCATION_POOL:+--allocation-pool $Q_FLOATING_ALLOCATION_POOL} --gateway $PUBLIC_NETWORK_GATEWAY --name $PUBLIC_SUBNET_NAME $EXT_NET_ID $FLOATING_RANGE -- --enable_dhcp=False | grep 'gateway_ip' | get_field 2)
-        die_if_not_set $LINENO EXT_GW_IP "Failure creating EXT_GW_IP"
-        neutron router-gateway-set $ROUTER_ID $EXT_NET_ID
 
-        if is_service_enabled q-l3; then
-            # logic is specific to using the l3-agent for l3
-            if is_neutron_ovs_base_plugin && [[ "$Q_USE_NAMESPACE" = "True" ]]; then
-                local ext_gw_interface
+        if [[ "$IP_VERSION" =~ 4.* ]]; then
+            # Configure router for IPv4 public access
+            _neutron_configure_router_v4
+        fi
 
-                if [[ "$Q_USE_PUBLIC_VETH" = "True" ]]; then
-                    ext_gw_interface=$Q_PUBLIC_VETH_EX
-                else
-                    # Disable in-band as we are going to use local port
-                    # to communicate with VMs
-                    sudo ovs-vsctl set Bridge $PUBLIC_BRIDGE \
-                        other_config:disable-in-band=true
-                    ext_gw_interface=$PUBLIC_BRIDGE
-                fi
-                CIDR_LEN=${FLOATING_RANGE#*/}
-                sudo ip addr add $EXT_GW_IP/$CIDR_LEN dev $ext_gw_interface
-                sudo ip link set $ext_gw_interface up
-                ROUTER_GW_IP=`neutron port-list -c fixed_ips -c device_owner | grep router_gateway | awk -F '"' '{ print $8; }'`
-                die_if_not_set $LINENO ROUTER_GW_IP "Failure retrieving ROUTER_GW_IP"
-                sudo route add -net $FIXED_RANGE gw $ROUTER_GW_IP
-            fi
-            if [[ "$Q_USE_NAMESPACE" == "False" ]]; then
-                # Explicitly set router id in l3 agent configuration
-                iniset $Q_L3_CONF_FILE DEFAULT router_id $ROUTER_ID
-            fi
+        if [[ "$IP_VERSION" =~ .*6 ]]; then
+            # Configure router for IPv6 public access
+            _neutron_configure_router_v6
         fi
     fi
 }
@@ -616,9 +636,11 @@
 
 # install_neutronclient() - Collect source and prepare
 function install_neutronclient {
-    git_clone $NEUTRONCLIENT_REPO $NEUTRONCLIENT_DIR $NEUTRONCLIENT_BRANCH
-    setup_develop $NEUTRONCLIENT_DIR
-    sudo install -D -m 0644 -o $STACK_USER {$NEUTRONCLIENT_DIR/tools/,/etc/bash_completion.d/}neutron.bash_completion
+    if use_library_from_git "python-neutronclient"; then
+        git_clone_by_name "python-neutronclient"
+        setup_dev_lib "python-neutronclient"
+        sudo install -D -m 0644 -o $STACK_USER {${GITDIR["python-neutronclient"]}/tools/,/etc/bash_completion.d/}neutron.bash_completion
+    fi
 }
 
 # install_neutron_agent_packages() - Collect source and prepare
@@ -672,6 +694,13 @@
         sudo ip link set $OVS_PHYSICAL_BRIDGE up
         sudo ip link set br-int up
         sudo ip link set $PUBLIC_INTERFACE up
+        if is_ironic_hardware; then
+            for IP in $(ip addr show dev $PUBLIC_INTERFACE | grep ' inet ' | awk '{print $2}'); do
+                sudo ip addr del $IP dev $PUBLIC_INTERFACE
+                sudo ip addr add $IP dev $OVS_PHYSICAL_BRIDGE
+            done
+            sudo route add -net $FIXED_RANGE gw $NETWORK_GATEWAY dev $OVS_PHYSICAL_BRIDGE
+        fi
     fi
 
     if is_service_enabled q-vpn; then
@@ -723,6 +752,14 @@
 # cleanup_neutron() - Remove residual data files, anything left over from previous
 # runs that a clean run would need to clean up
 function cleanup_neutron {
+    if is_provider_network && is_ironic_hardware; then
+        for IP in $(ip addr show dev $OVS_PHYSICAL_BRIDGE | grep ' inet ' | awk '{print $2}'); do
+            sudo ip addr del $IP dev $OVS_PHYSICAL_BRIDGE
+            sudo ip addr add $IP dev $PUBLIC_INTERFACE
+        done
+        sudo route del -net $FIXED_RANGE gw $NETWORK_GATEWAY dev $OVS_PHYSICAL_BRIDGE
+    fi
+
     if is_neutron_ovs_base_plugin; then
         neutron_ovs_base_cleanup
     fi
@@ -764,7 +801,7 @@
 
     iniset $NEUTRON_CONF database connection `database_connection_url $Q_DB_NAME`
     iniset $NEUTRON_CONF DEFAULT state_path $DATA_DIR/neutron
-
+    iniset $NEUTRON_CONF DEFAULT use_syslog $SYSLOG
     # If addition config files are set, make sure their path name is set as well
     if [[ ${#Q_PLUGIN_EXTRA_CONF_FILES[@]} > 0 && $Q_PLUGIN_EXTRA_CONF_PATH == '' ]]; then
         die $LINENO "Neutron additional plugin config not set.. exiting"
@@ -1044,6 +1081,172 @@
     neutron_plugin_setup_interface_driver $1
 }
 
+# Create private IPv4 subnet
+function _neutron_create_private_subnet_v4 {
+    local subnet_params="--tenant-id $TENANT_ID "
+    subnet_params+="--ip_version 4 "
+    subnet_params+="--gateway $NETWORK_GATEWAY "
+    subnet_params+="--name $PRIVATE_SUBNET_NAME "
+    subnet_params+="$NET_ID $FIXED_RANGE"
+    local subnet_id=$(neutron subnet-create $subnet_params | grep ' id ' | get_field 2)
+    die_if_not_set $LINENO subnet_id "Failure creating private IPv4 subnet for $TENANT_ID"
+    echo $subnet_id
+}
+
+# Create private IPv6 subnet
+function _neutron_create_private_subnet_v6 {
+    die_if_not_set $LINENO IPV6_RA_MODE "IPV6 RA Mode not set"
+    die_if_not_set $LINENO IPV6_ADDRESS_MODE "IPV6 Address Mode not set"
+    local ipv6_modes="--ipv6-ra-mode $IPV6_RA_MODE --ipv6-address-mode $IPV6_ADDRESS_MODE"
+    local subnet_params="--tenant-id $TENANT_ID "
+    subnet_params+="--ip_version 6 "
+    subnet_params+="--gateway $IPV6_PRIVATE_NETWORK_GATEWAY "
+    subnet_params+="--name $IPV6_PRIVATE_SUBNET_NAME "
+    subnet_params+="$NET_ID $FIXED_RANGE_V6 $ipv6_modes"
+    local ipv6_subnet_id=$(neutron subnet-create $subnet_params | grep ' id ' | get_field 2)
+    die_if_not_set $LINENO ipv6_subnet_id "Failure creating private IPv6 subnet for $TENANT_ID"
+    echo $ipv6_subnet_id
+}
+
+# Create public IPv4 subnet
+function _neutron_create_public_subnet_v4 {
+    local subnet_params="--ip_version 4 "
+    subnet_params+="${Q_FLOATING_ALLOCATION_POOL:+--allocation-pool $Q_FLOATING_ALLOCATION_POOL} "
+    subnet_params+="--gateway $PUBLIC_NETWORK_GATEWAY "
+    subnet_params+="--name $PUBLIC_SUBNET_NAME "
+    subnet_params+="$EXT_NET_ID $FLOATING_RANGE "
+    subnet_params+="-- --enable_dhcp=False"
+    local id_and_ext_gw_ip=$(neutron subnet-create $subnet_params | grep -e 'gateway_ip' -e ' id ')
+    die_if_not_set $LINENO id_and_ext_gw_ip "Failure creating public IPv4 subnet"
+    echo $id_and_ext_gw_ip
+}
+
+# Create public IPv6 subnet
+function _neutron_create_public_subnet_v6 {
+    local subnet_params="--ip_version 6 "
+    subnet_params+="--gateway $IPV6_PUBLIC_NETWORK_GATEWAY "
+    subnet_params+="--name $IPV6_PUBLIC_SUBNET_NAME "
+    subnet_params+="$EXT_NET_ID $IPV6_PUBLIC_RANGE "
+    subnet_params+="-- --enable_dhcp=False"
+    local ipv6_id_and_ext_gw_ip=$(neutron subnet-create $subnet_params | grep -e 'gateway_ip' -e ' id ')
+    die_if_not_set $LINENO ipv6_id_and_ext_gw_ip "Failure creating an IPv6 public subnet"
+    echo $ipv6_id_and_ext_gw_ip
+}
+
+# Configure neutron router for IPv4 public access
+function _neutron_configure_router_v4 {
+    neutron router-interface-add $ROUTER_ID $SUBNET_ID
+    # Create a public subnet on the external network
+    local id_and_ext_gw_ip=$(_neutron_create_public_subnet_v4 $EXT_NET_ID)
+    local ext_gw_ip=$(echo $id_and_ext_gw_ip  | get_field 2)
+    PUB_SUBNET_ID=$(echo $id_and_ext_gw_ip | get_field 5)
+    # Configure the external network as the default router gateway
+    neutron router-gateway-set $ROUTER_ID $EXT_NET_ID
+
+    # This logic is specific to using the l3-agent for layer 3
+    if is_service_enabled q-l3; then
+        # Configure and enable public bridge
+        if is_neutron_ovs_base_plugin && [[ "$Q_USE_NAMESPACE" = "True" ]]; then
+            local ext_gw_interface=$(_neutron_get_ext_gw_interface)
+            local cidr_len=${FLOATING_RANGE#*/}
+            sudo ip addr add $ext_gw_ip/$cidr_len dev $ext_gw_interface
+            sudo ip link set $ext_gw_interface up
+            ROUTER_GW_IP=`neutron port-list -c fixed_ips -c device_owner | grep router_gateway | awk -F '"' -v subnet_id=$PUB_SUBNET_ID '$4 == subnet_id { print $8; }'`
+            die_if_not_set $LINENO ROUTER_GW_IP "Failure retrieving ROUTER_GW_IP"
+            sudo route add -net $FIXED_RANGE gw $ROUTER_GW_IP
+        fi
+        _neutron_set_router_id
+    fi
+}
+
+# Configure neutron router for IPv6 public access
+function _neutron_configure_router_v6 {
+    neutron router-interface-add $ROUTER_ID $IPV6_SUBNET_ID
+    # Create a public subnet on the external network
+    local ipv6_id_and_ext_gw_ip=$(_neutron_create_public_subnet_v6 $EXT_NET_ID)
+    local ipv6_ext_gw_ip=$(echo $ipv6_id_and_ext_gw_ip | get_field 2)
+    local ipv6_pub_subnet_id=$(echo $ipv6_id_and_ext_gw_ip | get_field 5)
+
+    # If the external network has not already been set as the default router
+    # gateway when configuring an IPv4 public subnet, do so now
+    if [[ "$IP_VERSION" == "6" ]]; then
+        neutron router-gateway-set $ROUTER_ID $EXT_NET_ID
+    fi
+
+    # This logic is specific to using the l3-agent for layer 3
+    if is_service_enabled q-l3; then
+        local ipv6_router_gw_port
+        # Ensure IPv6 forwarding is enabled on the host
+        sudo sysctl -w net.ipv6.conf.all.forwarding=1
+        # Configure and enable public bridge
+        if [[ "$IP_VERSION" = "6" ]]; then
+            # Override global IPV6_ROUTER_GW_IP with the true value from neutron
+            IPV6_ROUTER_GW_IP=`neutron port-list -c fixed_ips -c device_owner | grep router_gateway | awk -F '"' -v subnet_id=$ipv6_pub_subnet_id '$4 == subnet_id { print $8; }'`
+            die_if_not_set $LINENO IPV6_ROUTER_GW_IP "Failure retrieving IPV6_ROUTER_GW_IP"
+            ipv6_router_gw_port=`neutron port-list -c id -c fixed_ips -c device_owner | grep router_gateway | awk -F '"' -v subnet_id=$ipv6_pub_subnet_id '$4 == subnet_id { print $1; }' | awk -F ' | ' '{ print $2; }'`
+            die_if_not_set $LINENO ipv6_router_gw_port "Failure retrieving ipv6_router_gw_port"
+        else
+            ipv6_router_gw_port=`neutron port-list -c id -c fixed_ips -c device_owner | grep router_gateway | awk -F '"' -v subnet_id=$PUB_SUBNET_ID '$4 == subnet_id { print $1; }' | awk -F ' | ' '{ print $2; }'`
+            die_if_not_set $LINENO ipv6_router_gw_port "Failure retrieving ipv6_router_gw_port"
+        fi
+
+        # The ovs_base_configure_l3_agent function flushes the public
+        # bridge's ip addresses, so turn IPv6 support in the host off
+        # and then on to recover the public bridge's link local address
+        sudo sysctl -w net.ipv6.conf.${PUBLIC_BRIDGE}.disable_ipv6=1
+        sudo sysctl -w net.ipv6.conf.${PUBLIC_BRIDGE}.disable_ipv6=0
+        if is_neutron_ovs_base_plugin && [[ "$Q_USE_NAMESPACE" = "True" ]]; then
+            local ext_gw_interface=$(_neutron_get_ext_gw_interface)
+            local ipv6_cidr_len=${IPV6_PUBLIC_RANGE#*/}
+
+            # Define router_ns based on whether DVR is enabled
+            local router_ns=qrouter
+            if [[ "$Q_DVR_MODE" == "dvr_snat" ]]; then
+                router_ns=snat
+            fi
+
+            # Configure interface for public bridge
+            sudo ip -6 addr add $ipv6_ext_gw_ip/$ipv6_cidr_len dev $ext_gw_interface
+
+            # Wait until layer 3 agent has configured the gateway port on
+            # the public bridge, then add gateway address to the interface
+            # TODO (john-davidge) Remove once l3-agent supports dual-stack
+            if [[ "$IP_VERSION" == "4+6" ]]; then
+                if ! timeout $GATEWAY_TIMEOUT sh -c "until sudo ip netns exec $router_ns-$ROUTER_ID ip addr show qg-${ipv6_router_gw_port:0:11} | grep $ROUTER_GW_IP; do sleep 1; done"; then
+                    die $LINENO "Timeout retrieving ROUTER_GW_IP"
+                fi
+                # Configure the gateway port with the public IPv6 address
+                sudo ip netns exec $router_ns-$ROUTER_ID ip -6 addr add $IPV6_ROUTER_GW_IP/$ipv6_cidr_len dev qg-${ipv6_router_gw_port:0:11}
+                # Add a default IPv6 route to the neutron router as the
+                # l3-agent does not add one in the dual-stack case
+                sudo ip netns exec $router_ns-$ROUTER_ID ip -6 route replace default via $ipv6_ext_gw_ip dev qg-${ipv6_router_gw_port:0:11}
+            fi
+            sudo ip -6 route add $FIXED_RANGE_V6 via $IPV6_ROUTER_GW_IP dev $ext_gw_interface
+        fi
+        _neutron_set_router_id
+    fi
+}
+
+# Explicitly set router id in l3 agent configuration
+function _neutron_set_router_id {
+    if [[ "$Q_USE_NAMESPACE" == "False" ]]; then
+        iniset $Q_L3_CONF_FILE DEFAULT router_id $ROUTER_ID
+    fi
+}
+
+# Get ext_gw_interface depending on value of Q_USE_PUBLIC_VETH
+function _neutron_get_ext_gw_interface {
+    if [[ "$Q_USE_PUBLIC_VETH" == "True" ]]; then
+        echo $Q_PUBLIC_VETH_EX
+    else
+        # Disable in-band as we are going to use local port
+        # to communicate with VMs
+        sudo ovs-vsctl set Bridge $PUBLIC_BRIDGE \
+            other_config:disable-in-band=true
+        echo $PUBLIC_BRIDGE
+    fi
+}
+
 # Functions for Neutron Exercises
 #--------------------------------
 
diff --git a/lib/neutron_plugins/ml2 b/lib/neutron_plugins/ml2
index 44b947f..f9a9774 100644
--- a/lib/neutron_plugins/ml2
+++ b/lib/neutron_plugins/ml2
@@ -84,6 +84,11 @@
         fi
     fi
 
+
+    # Allow setting up a flat type network
+    if [[ -z "$Q_ML2_PLUGIN_FLAT_TYPE_OPTIONS" && -n "$PHYSICAL_NETWORK" ]]; then
+        Q_ML2_PLUGIN_FLAT_TYPE_OPTIONS="flat_networks=$Q_ML2_FLAT_PHYSNET_OPTIONS"
+    fi
     # REVISIT(rkukura): Setting firewall_driver here for
     # neutron.agent.securitygroups_rpc.is_firewall_enabled() which is
     # used in the server, in case no L2 agent is configured on the
@@ -110,6 +115,8 @@
 
     populate_ml2_config /$Q_PLUGIN_CONF_FILE ml2_type_vxlan $Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS
 
+    populate_ml2_config /$Q_PLUGIN_CONF_FILE ml2_type_flat $Q_ML2_PLUGIN_FLAT_TYPE_OPTIONS
+
     populate_ml2_config /$Q_PLUGIN_CONF_FILE ml2_type_vlan $Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS
 
     if [[ "$Q_DVR_MODE" != "legacy" ]]; then
diff --git a/lib/neutron_plugins/nuage b/lib/neutron_plugins/nuage
index 52d85a2..70de8fa 100644
--- a/lib/neutron_plugins/nuage
+++ b/lib/neutron_plugins/nuage
@@ -7,7 +7,7 @@
 
 function neutron_plugin_create_nova_conf {
     NOVA_OVS_BRIDGE=${NOVA_OVS_BRIDGE:-"br-int"}
-    iniset $NOVA_CONF DEFAULT neutron_ovs_bridge $NOVA_OVS_BRIDGE
+    iniset $NOVA_CONF neutron ovs_bridge $NOVA_OVS_BRIDGE
     NOVA_VIF_DRIVER=${NOVA_VIF_DRIVER:-"nova.virt.libvirt.vif.LibvirtGenericVIFDriver"}
     LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver
     iniset $NOVA_CONF DEFAULT firewall_driver $LIBVIRT_FIREWALL_DRIVER
diff --git a/lib/nova b/lib/nova
index 0f83807..78906f7 100644
--- a/lib/nova
+++ b/lib/nova
@@ -29,8 +29,10 @@
 # --------
 
 # Set up default directories
+GITDIR["python-novaclient"]=$DEST/python-novaclient
+
+
 NOVA_DIR=$DEST/nova
-NOVACLIENT_DIR=$DEST/python-novaclient
 NOVA_STATE_PATH=${NOVA_STATE_PATH:=$DATA_DIR/nova}
 # INSTANCES_PATH is the previous name for this
 NOVA_INSTANCES_PATH=${NOVA_INSTANCES_PATH:=${INSTANCES_PATH:=$NOVA_STATE_PATH/instances}}
@@ -637,9 +639,11 @@
 
 # install_novaclient() - Collect source and prepare
 function install_novaclient {
-    git_clone $NOVACLIENT_REPO $NOVACLIENT_DIR $NOVACLIENT_BRANCH
-    setup_develop $NOVACLIENT_DIR
-    sudo install -D -m 0644 -o $STACK_USER {$NOVACLIENT_DIR/tools/,/etc/bash_completion.d/}nova.bash_completion
+    if use_library_from_git "python-novaclient"; then
+        git_clone_by_name "python-novaclient"
+        setup_dev_lib "python-novaclient"
+        sudo install -D -m 0644 -o $STACK_USER {${GITDIR["python-novaclient"]}/tools/,/etc/bash_completion.d/}nova.bash_completion
+    fi
 }
 
 # install_nova() - Collect source and prepare
diff --git a/lib/nova_plugins/hypervisor-ironic b/lib/nova_plugins/hypervisor-ironic
index 4004cc9..4209503 100644
--- a/lib/nova_plugins/hypervisor-ironic
+++ b/lib/nova_plugins/hypervisor-ironic
@@ -47,7 +47,7 @@
     iniset $NOVA_CONF ironic admin_password $ADMIN_PASSWORD
     iniset $NOVA_CONF ironic admin_url $KEYSTONE_AUTH_URI/v2.0
     iniset $NOVA_CONF ironic admin_tenant_name demo
-    iniset $NOVA_CONF ironic api_endpoint http://$SERVICE_HOST:6385/v1
+    iniset $NOVA_CONF ironic api_endpoint $IRONIC_SERVICE_PROTOCOL://$IRONIC_HOSTPORT/v1
 }
 
 # install_nova_hypervisor() - Install external components
diff --git a/lib/oslo b/lib/oslo
index a20aa14..d00580b 100644
--- a/lib/oslo
+++ b/lib/oslo
@@ -23,6 +23,7 @@
 GITDIR["cliff"]=$DEST/cliff
 GITDIR["oslo.config"]=$DEST/oslo.config
 GITDIR["oslo.concurrency"]=$DEST/oslo.concurrency
+GITDIR["oslo.context"]=$DEST/oslo.context
 GITDIR["oslo.db"]=$DEST/oslo.db
 GITDIR["oslo.i18n"]=$DEST/oslo.i18n
 GITDIR["oslo.log"]=$DEST/oslo.log
diff --git a/lib/sahara b/lib/sahara
index 6d1bef5..4f1ba22 100644
--- a/lib/sahara
+++ b/lib/sahara
@@ -22,15 +22,10 @@
 # --------
 
 # Set up default repos
-SAHARA_REPO=${SAHARA_REPO:-${GIT_BASE}/openstack/sahara.git}
-SAHARA_BRANCH=${SAHARA_BRANCH:-master}
-
-SAHARA_PYTHONCLIENT_REPO=${SAHARA_PYTHONCLIENT_REPO:-${GIT_BASE}/openstack/python-saharaclient.git}
-SAHARA_PYTHONCLIENT_BRANCH=${SAHARA_PYTHONCLIENT_BRANCH:-master}
 
 # Set up default directories
+GITDIR["python-saharaclient"]=$DEST/python-saharaclient
 SAHARA_DIR=$DEST/sahara
-SAHARA_PYTHONCLIENT_DIR=$DEST/python-saharaclient
 
 SAHARA_CONF_DIR=${SAHARA_CONF_DIR:-/etc/sahara}
 SAHARA_CONF_FILE=${SAHARA_CONF_DIR}/sahara.conf
@@ -158,8 +153,10 @@
 
 # install_python_saharaclient() - Collect source and prepare
 function install_python_saharaclient {
-    git_clone $SAHARA_PYTHONCLIENT_REPO $SAHARA_PYTHONCLIENT_DIR $SAHARA_PYTHONCLIENT_BRANCH
-    setup_develop $SAHARA_PYTHONCLIENT_DIR
+    if use_library_from_git "python-saharaclient"; then
+        git_clone_by_name "python-saharaclient"
+        setup_dev_lib "python-saharaclient"
+    fi
 }
 
 # start_sahara() - Start running processes, including screen
diff --git a/lib/swift b/lib/swift
index 7ef4496..ae0874e 100644
--- a/lib/swift
+++ b/lib/swift
@@ -34,8 +34,10 @@
 fi
 
 # Set up default directories
+GITDIR["python-swiftclient"]=$DEST/python-swiftclient
+
+
 SWIFT_DIR=$DEST/swift
-SWIFTCLIENT_DIR=$DEST/python-swiftclient
 SWIFT_AUTH_CACHE_DIR=${SWIFT_AUTH_CACHE_DIR:-/var/cache/swift}
 SWIFT_APACHE_WSGI_DIR=${SWIFT_APACHE_WSGI_DIR:-/var/www/swift}
 SWIFT3_DIR=$DEST/swift3
@@ -675,8 +677,10 @@
 }
 
 function install_swiftclient {
-    git_clone $SWIFTCLIENT_REPO $SWIFTCLIENT_DIR $SWIFTCLIENT_BRANCH
-    setup_develop $SWIFTCLIENT_DIR
+    if use_library_from_git "python-swiftclient"; then
+        git_clone_by_name "python-swiftclient"
+        setup_dev_lib "python-swiftclient"
+    fi
 }
 
 # start_swift() - Start running processes, including screen
diff --git a/lib/tempest b/lib/tempest
index 66f1a78..d3fb9fb 100644
--- a/lib/tempest
+++ b/lib/tempest
@@ -45,11 +45,12 @@
 # --------
 
 # Set up default directories
+GITDIR["tempest-lib"]=$DEST/tempest-lib
+
 TEMPEST_DIR=$DEST/tempest
 TEMPEST_CONFIG_DIR=${TEMPEST_CONFIG_DIR:-$TEMPEST_DIR/etc}
 TEMPEST_CONFIG=$TEMPEST_CONFIG_DIR/tempest.conf
 TEMPEST_STATE_PATH=${TEMPEST_STATE_PATH:=$DATA_DIR/tempest}
-TEMPEST_LIB_DIR=$DEST/tempest-lib
 
 NOVA_SOURCE_DIR=$DEST/nova
 
@@ -110,34 +111,36 @@
     # ... Also ensure we only take active images, so we don't get snapshots in process
     declare -a images
 
-    while read -r IMAGE_NAME IMAGE_UUID; do
-        if [ "$IMAGE_NAME" = "$DEFAULT_IMAGE_NAME" ]; then
-            image_uuid="$IMAGE_UUID"
-            image_uuid_alt="$IMAGE_UUID"
-        fi
-        images+=($IMAGE_UUID)
-    # TODO(stevemar): update this command to use openstackclient's `openstack image list`
-    # when it supports listing by status.
-    done < <(glance image-list --status=active | awk -F'|' '!/^(+--)|ID|aki|ari/ { print $3,$2 }')
+    if is_service_enabled glance; then
+        while read -r IMAGE_NAME IMAGE_UUID; do
+            if [ "$IMAGE_NAME" = "$DEFAULT_IMAGE_NAME" ]; then
+                image_uuid="$IMAGE_UUID"
+                image_uuid_alt="$IMAGE_UUID"
+            fi
+            images+=($IMAGE_UUID)
+        # TODO(stevemar): update this command to use openstackclient's `openstack image list`
+        # when it supports listing by status.
+        done < <(glance image-list --status=active | awk -F'|' '!/^(+--)|ID|aki|ari/ { print $3,$2 }')
 
-    case "${#images[*]}" in
-        0)
-            echo "Found no valid images to use!"
-            exit 1
-            ;;
-        1)
-            if [ -z "$image_uuid" ]; then
-                image_uuid=${images[0]}
-                image_uuid_alt=${images[0]}
-            fi
-            ;;
-        *)
-            if [ -z "$image_uuid" ]; then
-                image_uuid=${images[0]}
-                image_uuid_alt=${images[1]}
-            fi
-            ;;
-    esac
+        case "${#images[*]}" in
+            0)
+                echo "Found no valid images to use!"
+                exit 1
+                ;;
+            1)
+                if [ -z "$image_uuid" ]; then
+                    image_uuid=${images[0]}
+                    image_uuid_alt=${images[0]}
+                fi
+                ;;
+            *)
+                if [ -z "$image_uuid" ]; then
+                    image_uuid=${images[0]}
+                    image_uuid_alt=${images[1]}
+                fi
+                ;;
+        esac
+    fi
 
     # Create tempest.conf from tempest.conf.sample
     # copy every time, because the image UUIDS are going to change
@@ -161,63 +164,65 @@
     ALT_TENANT_NAME=${ALT_TENANT_NAME:-alt_demo}
     ADMIN_TENANT_ID=$(openstack project list | awk "/ admin / { print \$2 }")
 
-    # If the ``DEFAULT_INSTANCE_TYPE`` not declared, use the new behavior
-    # Tempest creates instane types for himself
-    if  [[ -z "$DEFAULT_INSTANCE_TYPE" ]]; then
-        available_flavors=$(nova flavor-list)
-        if [[ ! ( $available_flavors =~ 'm1.nano' ) ]]; then
-            if is_arch "ppc64"; then
-                # qemu needs at least 128MB of memory to boot on ppc64
-                nova flavor-create m1.nano 42 128 0 1
-            else
-                nova flavor-create m1.nano 42 64 0 1
+    if is_service_enabled nova; then
+        # If ``DEFAULT_INSTANCE_TYPE`` is not declared, use the new behavior:
+        # Tempest creates instance types for itself
+        if  [[ -z "$DEFAULT_INSTANCE_TYPE" ]]; then
+            available_flavors=$(nova flavor-list)
+            if [[ ! ( $available_flavors =~ 'm1.nano' ) ]]; then
+                if is_arch "ppc64"; then
+                    # qemu needs at least 128MB of memory to boot on ppc64
+                    nova flavor-create m1.nano 42 128 0 1
+                else
+                    nova flavor-create m1.nano 42 64 0 1
+                fi
             fi
-        fi
-        flavor_ref=42
-        boto_instance_type=m1.nano
-        if [[ ! ( $available_flavors =~ 'm1.micro' ) ]]; then
-            if is_arch "ppc64"; then
-                nova flavor-create m1.micro 84 256 0 1
-            else
-                nova flavor-create m1.micro 84 128 0 1
+            flavor_ref=42
+            boto_instance_type=m1.nano
+            if [[ ! ( $available_flavors =~ 'm1.micro' ) ]]; then
+                if is_arch "ppc64"; then
+                    nova flavor-create m1.micro 84 256 0 1
+                else
+                    nova flavor-create m1.micro 84 128 0 1
+                fi
             fi
-        fi
-        flavor_ref_alt=84
-    else
-        # Check Nova for existing flavors and, if set, look for the
-        # ``DEFAULT_INSTANCE_TYPE`` and use that.
-        boto_instance_type=$DEFAULT_INSTANCE_TYPE
-        flavor_lines=`nova flavor-list`
-        IFS=$'\r\n'
-        flavors=""
-        for line in $flavor_lines; do
-            f=$(echo $line | awk "/ $DEFAULT_INSTANCE_TYPE / { print \$2 }")
-            flavors="$flavors $f"
-        done
+            flavor_ref_alt=84
+        else
+            # Check Nova for existing flavors and, if set, look for the
+            # ``DEFAULT_INSTANCE_TYPE`` and use that.
+            boto_instance_type=$DEFAULT_INSTANCE_TYPE
+            flavor_lines=`nova flavor-list`
+            IFS=$'\r\n'
+            flavors=""
+            for line in $flavor_lines; do
+                f=$(echo $line | awk "/ $DEFAULT_INSTANCE_TYPE / { print \$2 }")
+                flavors="$flavors $f"
+            done
 
-        for line in $flavor_lines; do
-            flavors="$flavors `echo $line | grep -v "^\(|\s*ID\|+--\)" | cut -d' ' -f2`"
-        done
+            for line in $flavor_lines; do
+                flavors="$flavors `echo $line | grep -v "^\(|\s*ID\|+--\)" | cut -d' ' -f2`"
+            done
 
-        IFS=" "
-        flavors=($flavors)
-        num_flavors=${#flavors[*]}
-        echo "Found $num_flavors flavors"
-        if [[ $num_flavors -eq 0 ]]; then
-            echo "Found no valid flavors to use!"
-            exit 1
-        fi
-        flavor_ref=${flavors[0]}
-        flavor_ref_alt=$flavor_ref
-
-        # ensure flavor_ref and flavor_ref_alt have different values
-        # some resize instance in tempest tests depends on this.
-        for f in ${flavors[@]:1}; do
-            if [[ $f -ne $flavor_ref ]]; then
-                flavor_ref_alt=$f
-                break
+            IFS=" "
+            flavors=($flavors)
+            num_flavors=${#flavors[*]}
+            echo "Found $num_flavors flavors"
+            if [[ $num_flavors -eq 0 ]]; then
+                echo "Found no valid flavors to use!"
+                exit 1
             fi
-        done
+            flavor_ref=${flavors[0]}
+            flavor_ref_alt=$flavor_ref
+
+            # Ensure flavor_ref and flavor_ref_alt have different values;
+            # some instance resize tests in Tempest depend on this.
+            for f in ${flavors[@]:1}; do
+                if [[ $f -ne $flavor_ref ]]; then
+                    flavor_ref_alt=$f
+                    break
+                fi
+            done
+        fi
     fi
 
     if [ "$Q_USE_NAMESPACE" != "False" ]; then
@@ -242,6 +247,7 @@
         fi
     fi
 
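+    # Have Tempest log via syslog only when DevStack itself does (SYSLOG=True)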
+    iniset $TEMPEST_CONFIG DEFAULT use_syslog $SYSLOG
     # Oslo
     iniset $TEMPEST_CONFIG DEFAULT lock_path $TEMPEST_STATE_PATH
     mkdir -p $TEMPEST_STATE_PATH
@@ -298,6 +304,7 @@
     iniset $TEMPEST_CONFIG compute-feature-enabled change_password False
     iniset $TEMPEST_CONFIG compute-feature-enabled block_migration_for_live_migration ${USE_BLOCK_MIGRATION_FOR_LIVE_MIGRATION:-False}
     iniset $TEMPEST_CONFIG compute-feature-enabled api_extensions ${COMPUTE_API_EXTENSIONS:-"all"}
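+    # TEMPEST_ENABLE_NOVA_XML_API=False turns off the Nova XML (v2) API tests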
+    iniset $TEMPEST_CONFIG compute-feature-enabled xml_api_v2 ${TEMPEST_ENABLE_NOVA_XML_API:-True}
     iniset $TEMPEST_CONFIG compute-feature-disabled api_extensions ${DISABLE_COMPUTE_API_EXTENSIONS}
 
     # Compute admin
@@ -441,8 +448,10 @@
 
 # install_tempest_lib() - Collect source, prepare, and install tempest-lib
 function install_tempest_lib {
-    git_clone $TEMPEST_LIB_REPO $TEMPEST_LIB_DIR $TEMPEST_LIB_BRANCH
-    setup_develop $TEMPEST_LIB_DIR
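+    # Install tempest-lib from git only when it is listed in LIBS_FROM_GIT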
+    if use_library_from_git "tempest-lib"; then
+        git_clone_by_name "tempest-lib"
+        setup_develop "tempest-lib"
+    fi
 }
 
 # install_tempest() - Collect source and prepare
@@ -460,20 +469,22 @@
     local kernel="$image_dir/${base_image_name}-vmlinuz"
     local ramdisk="$image_dir/${base_image_name}-initrd"
     local disk_image="$image_dir/${base_image_name}-blank.img"
-    # if the cirros uec downloaded and the system is uec capable
-    if [ -f "$kernel" -a -f "$ramdisk" -a -f "$disk_image" -a  "$VIRT_DRIVER" != "openvz" \
-        -a \( "$LIBVIRT_TYPE" != "lxc" -o "$VIRT_DRIVER" != "libvirt" \) ]; then
-        echo "Prepare aki/ari/ami Images"
-        mkdir -p $BOTO_MATERIALS_PATH
-        ( #new namespace
-            # tenant:demo ; user: demo
-            source $TOP_DIR/accrc/demo/demo
-            euca-bundle-image -r ${CIRROS_ARCH} -i "$kernel" --kernel true -d "$BOTO_MATERIALS_PATH"
-            euca-bundle-image -r ${CIRROS_ARCH} -i "$ramdisk" --ramdisk true -d "$BOTO_MATERIALS_PATH"
-            euca-bundle-image -r ${CIRROS_ARCH} -i "$disk_image" -d "$BOTO_MATERIALS_PATH"
-        ) 2>&1 </dev/null | cat
-    else
-        echo "Boto materials are not prepared"
+    if is_service_enabled nova; then
+        # If the CirrOS UEC image was downloaded and the system is UEC capable
+        if [ -f "$kernel" -a -f "$ramdisk" -a -f "$disk_image" -a  "$VIRT_DRIVER" != "openvz" \
+            -a \( "$LIBVIRT_TYPE" != "lxc" -o "$VIRT_DRIVER" != "libvirt" \) ]; then
+            echo "Prepare aki/ari/ami Images"
+            mkdir -p $BOTO_MATERIALS_PATH
+            ( #new namespace
+                # tenant:demo ; user: demo
+                source $TOP_DIR/accrc/demo/demo
+                euca-bundle-image -r ${CIRROS_ARCH} -i "$kernel" --kernel true -d "$BOTO_MATERIALS_PATH"
+                euca-bundle-image -r ${CIRROS_ARCH} -i "$ramdisk" --ramdisk true -d "$BOTO_MATERIALS_PATH"
+                euca-bundle-image -r ${CIRROS_ARCH} -i "$disk_image" -d "$BOTO_MATERIALS_PATH"
+            ) 2>&1 </dev/null | cat
+        else
+            echo "Boto materials are not prepared"
+        fi
     fi
 }
 
diff --git a/lib/trove b/lib/trove
index 4ac7293..60b2bdb 100644
--- a/lib/trove
+++ b/lib/trove
@@ -28,8 +28,9 @@
 fi
 
 # Set up default configuration
+GITDIR["python-troveclient"]=$DEST/python-troveclient
+
 TROVE_DIR=$DEST/trove
-TROVECLIENT_DIR=$DEST/python-troveclient
 TROVE_CONF_DIR=/etc/trove
 TROVE_LOCAL_CONF_DIR=$TROVE_DIR/etc/trove
 TROVE_AUTH_CACHE_DIR=${TROVE_AUTH_CACHE_DIR:-/var/cache/trove}
@@ -109,10 +110,6 @@
     rm -fr $TROVE_CONF_DIR/*
 }
 
-# configure_troveclient() - Set config files, create data dirs, etc
-function configure_troveclient {
-    setup_develop $TROVECLIENT_DIR
-}
 
 # configure_trove() - Set config files, create data dirs, etc
 function configure_trove {
@@ -184,7 +181,10 @@
 
 # install_troveclient() - Collect source and prepare
 function install_troveclient {
-    git_clone $TROVECLIENT_REPO $TROVECLIENT_DIR $TROVECLIENT_BRANCH
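+    # Install python-troveclient from git only when listed in LIBS_FROM_GIT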
+    if use_library_from_git "python-troveclient"; then
+        git_clone_by_name "python-troveclient"
+        setup_dev_lib "python-troveclient"
+    fi
 }
 
 # install_trove() - Collect source and prepare
diff --git a/run_tests.sh b/run_tests.sh
index bf90332..3ba7e10 100755
--- a/run_tests.sh
+++ b/run_tests.sh
@@ -18,47 +18,18 @@
 PASSES=""
 FAILURES=""
 
-# Check the return code and add the test to PASSES or FAILURES as appropriate
-# pass_fail <result> <expected> <name>
-function pass_fail {
-    local result=$1
-    local expected=$2
-    local test_name=$3
-
-    if [[ $result -ne $expected ]]; then
-        FAILURES="$FAILURES $test_name"
-    else
-        PASSES="$PASSES $test_name"
-    fi
-}
-
-if [[ -n $@ ]]; then
-    FILES=$@
-else
-    LIBS=`find lib -type f | grep -v \.md`
-    SCRIPTS=`find . -type f -name \*\.sh`
-    EXTRA="functions functions-common stackrc openrc exerciserc eucarc"
-    FILES="$SCRIPTS $LIBS $EXTRA"
-fi
-
-echo "Running bash8..."
-
-tox -ebashate
-pass_fail $? 0 bash8
-
-
 # Test that no one is trying to land crazy refs as branches
 
-echo "Ensuring we don't have crazy refs"
+for testfile in tests/test_*.sh; do
+    $testfile
+    if [[ $? -eq 0 ]]; then
+        PASSES="$PASSES $testfile"
+    else
+        FAILURES="$FAILURES $testfile"
+    fi
+done
 
-REFS=`grep BRANCH stackrc | grep -v -- '-master'`
-rc=$?
-pass_fail $rc 1 crazy-refs
-if [[ $rc -eq 0 ]]; then
-    echo "Branch defaults must be master. Found:"
-    echo $REFS
-fi
-
+# Summary display now that all is said and done
 echo "====================================================================="
 for script in $PASSES; do
     echo PASS $script
diff --git a/stack.sh b/stack.sh
index 38ecceb..93e4541 100755
--- a/stack.sh
+++ b/stack.sh
@@ -143,7 +143,7 @@
 
 # Warn users who aren't on an explicitly supported distro, but allow them to
 # override check and attempt installation with ``FORCE=yes ./stack``
-if [[ ! ${DISTRO} =~ (precise|trusty|7.0|wheezy|sid|testing|jessie|f19|f20|rhel6|rhel7) ]]; then
+if [[ ! ${DISTRO} =~ (precise|trusty|7.0|wheezy|sid|testing|jessie|f19|f20|f21|rhel6|rhel7) ]]; then
     echo "WARNING: this script has not been tested on $DISTRO"
     if [[ "$FORCE" != "yes" ]]; then
         die $LINENO "If you wish to run this script anyway run with FORCE=yes"
@@ -209,18 +209,10 @@
     echo 'APT::Acquire::Retries "20";' | sudo tee /etc/apt/apt.conf.d/80retry  >/dev/null
 fi
 
-# upstream Rackspace centos7 images have an issue where cloud-init is
-# installed via pip because there were not official packages when the
-# image was created (fix in the works).  Remove all pip packages
-# before we do anything else
-if [[ $DISTRO = "rhel7" && is_rackspace ]]; then
-    (sudo pip freeze | xargs sudo pip uninstall -y) || true
-fi
-
 # Some distros need to add repos beyond the defaults provided by the vendor
 # to pick up required packages.
 
-if [[ is_fedora && $DISTRO == "rhel6" ]]; then
+if is_fedora && [[ $DISTRO == "rhel6" ]]; then
     # Installing Open vSwitch on RHEL requires enabling the RDO repo.
     RHEL6_RDO_REPO_RPM=${RHEL6_RDO_REPO_RPM:-"http://rdo.fedorapeople.org/openstack-icehouse/rdo-release-icehouse.rpm"}
     RHEL6_RDO_REPO_ID=${RHEL6_RDO_REPO_ID:-"openstack-icehouse"}
@@ -231,7 +223,7 @@
     fi
 fi
 
-if [[ is_fedora && ( $DISTRO == "rhel6" || $DISTRO == "rhel7" ) ]]; then
+if is_fedora && [[ $DISTRO == "rhel6" || $DISTRO == "rhel7" ]]; then
     # RHEL requires EPEL for many Open Stack dependencies
 
     # note we always remove and install latest -- some environments
@@ -355,7 +347,7 @@
     echo $@ >&3
 }
 
-if [[ is_fedora && $DISTRO == "rhel6" ]]; then
+if is_fedora && [[ $DISTRO == "rhel6" ]]; then
     # poor old python2.6 doesn't have argparse by default, which
     # outfilter.py uses
     is_package_installed python-argparse || install_package python-argparse
@@ -584,7 +576,7 @@
 fi
 
 # Set the destination directories for other OpenStack projects
-OPENSTACKCLIENT_DIR=$DEST/python-openstackclient
+GITDIR["python-openstackclient"]=$DEST/python-openstackclient
 
 # Interactive Configuration
 # -------------------------
@@ -787,8 +779,14 @@
 # Install middleware
 install_keystonemiddleware
 
-git_clone $OPENSTACKCLIENT_REPO $OPENSTACKCLIENT_DIR $OPENSTACKCLIENT_BRANCH
-setup_develop $OPENSTACKCLIENT_DIR
+# install the OpenStack client, needed for most setup commands
+if use_library_from_git "python-openstackclient"; then
+    git_clone_by_name "python-openstackclient"
+    setup_dev_lib "python-openstackclient"
+else
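+    # Otherwise take the released client from pypi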
+    pip_install python-openstackclient
+fi
+
 
 if is_service_enabled key; then
     if [ "$KEYSTONE_AUTH_HOST" == "$SERVICE_HOST" ]; then
@@ -1274,7 +1272,7 @@
     start_neutron_agents
 fi
 # Once neutron agents are started setup initial network elements
-if is_service_enabled q-svc; then
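+# Setting NEUTRON_CREATE_INITIAL_NETWORKS=False skips creating the default networks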
+if is_service_enabled q-svc && [[ "$NEUTRON_CREATE_INITIAL_NETWORKS" == "True" ]]; then
     echo_summary "Creating initial neutron network elements"
     create_neutron_initial_network
     setup_neutron_debug
diff --git a/stackrc b/stackrc
index 15b0951..e0e886d 100644
--- a/stackrc
+++ b/stackrc
@@ -134,6 +134,19 @@
 # Another option is https://git.openstack.org
 GIT_BASE=${GIT_BASE:-git://git.openstack.org}
 
+# Which libraries should we install from git instead of using released
+# versions on pypi?
+#
+# By default devstack now installs libraries from pypi rather than from
+# git repositories. This works well if you are developing server
+# components, but if you want to develop libraries and see them live
+# in devstack you need to tell devstack to install them from git.
+#
+# ex: LIBS_FROM_GIT=python-keystoneclient,oslo.config
+#
+# That will install those two libraries from git and the rest from pypi.
+
 ##############
 #
 #  OpenStack Server Components
@@ -144,7 +157,7 @@
 CEILOMETER_REPO=${CEILOMETER_REPO:-${GIT_BASE}/openstack/ceilometer.git}
 CEILOMETER_BRANCH=${CEILOMETER_BRANCH:-master}
 
-# volume service
+# block storage service
 CINDER_REPO=${CINDER_REPO:-${GIT_BASE}/openstack/cinder.git}
 CINDER_BRANCH=${CINDER_BRANCH:-master}
 
@@ -176,7 +189,11 @@
 NOVA_REPO=${NOVA_REPO:-${GIT_BASE}/openstack/nova.git}
 NOVA_BRANCH=${NOVA_BRANCH:-master}
 
-# storage service
+# data processing service
+SAHARA_REPO=${SAHARA_REPO:-${GIT_BASE}/openstack/sahara.git}
+SAHARA_BRANCH=${SAHARA_BRANCH:-master}
+
+# object storage service
 SWIFT_REPO=${SWIFT_REPO:-${GIT_BASE}/openstack/swift.git}
 SWIFT_BRANCH=${SWIFT_BRANCH:-master}
 
@@ -199,8 +216,8 @@
 TEMPEST_BRANCH=${TEMPEST_BRANCH:-master}
 
 # TODO(sdague): this should end up as a library component like below
-TEMPEST_LIB_REPO=${TEMPEST_LIB_REPO:-${GIT_BASE}/openstack/tempest-lib.git}
-TEMPEST_LIB_BRANCH=${TEMPEST_LIB_BRANCH:-master}
+GITREPO["tempest-lib"]=${TEMPEST_LIB_REPO:-${GIT_BASE}/openstack/tempest-lib.git}
+GITBRANCH["tempest-lib"]=${TEMPEST_LIB_BRANCH:-master}
 
 
 ##############
@@ -210,48 +227,52 @@
 ##############
 
 # ceilometer client library
-CEILOMETERCLIENT_REPO=${CEILOMETERCLIENT_REPO:-${GIT_BASE}/openstack/python-ceilometerclient.git}
-CEILOMETERCLIENT_BRANCH=${CEILOMETERCLIENT_BRANCH:-master}
+GITREPO["python-ceilometerclient"]=${CEILOMETERCLIENT_REPO:-${GIT_BASE}/openstack/python-ceilometerclient.git}
+GITBRANCH["python-ceilometerclient"]=${CEILOMETERCLIENT_BRANCH:-master}
 
 # volume client
-CINDERCLIENT_REPO=${CINDERCLIENT_REPO:-${GIT_BASE}/openstack/python-cinderclient.git}
-CINDERCLIENT_BRANCH=${CINDERCLIENT_BRANCH:-master}
+GITREPO["python-cinderclient"]=${CINDERCLIENT_REPO:-${GIT_BASE}/openstack/python-cinderclient.git}
+GITBRANCH["python-cinderclient"]=${CINDERCLIENT_BRANCH:-master}
 
 # python glance client library
-GLANCECLIENT_REPO=${GLANCECLIENT_REPO:-${GIT_BASE}/openstack/python-glanceclient.git}
-GLANCECLIENT_BRANCH=${GLANCECLIENT_BRANCH:-master}
+GITREPO["python-glanceclient"]=${GLANCECLIENT_REPO:-${GIT_BASE}/openstack/python-glanceclient.git}
+GITBRANCH["python-glanceclient"]=${GLANCECLIENT_BRANCH:-master}
 
 # python heat client library
-HEATCLIENT_REPO=${HEATCLIENT_REPO:-${GIT_BASE}/openstack/python-heatclient.git}
-HEATCLIENT_BRANCH=${HEATCLIENT_BRANCH:-master}
+GITREPO["python-heatclient"]=${HEATCLIENT_REPO:-${GIT_BASE}/openstack/python-heatclient.git}
+GITBRANCH["python-heatclient"]=${HEATCLIENT_BRANCH:-master}
 
 # ironic client
-IRONICCLIENT_REPO=${IRONICCLIENT_REPO:-${GIT_BASE}/openstack/python-ironicclient.git}
-IRONICCLIENT_BRANCH=${IRONICCLIENT_BRANCH:-master}
+GITREPO["python-ironicclient"]=${IRONICCLIENT_REPO:-${GIT_BASE}/openstack/python-ironicclient.git}
+GITBRANCH["python-ironicclient"]=${IRONICCLIENT_BRANCH:-master}
 
 # python keystone client library to nova that horizon uses
-KEYSTONECLIENT_REPO=${KEYSTONECLIENT_REPO:-${GIT_BASE}/openstack/python-keystoneclient.git}
-KEYSTONECLIENT_BRANCH=${KEYSTONECLIENT_BRANCH:-master}
+GITREPO["python-keystoneclient"]=${KEYSTONECLIENT_REPO:-${GIT_BASE}/openstack/python-keystoneclient.git}
+GITBRANCH["python-keystoneclient"]=${KEYSTONECLIENT_BRANCH:-master}
 
 # neutron client
-NEUTRONCLIENT_REPO=${NEUTRONCLIENT_REPO:-${GIT_BASE}/openstack/python-neutronclient.git}
-NEUTRONCLIENT_BRANCH=${NEUTRONCLIENT_BRANCH:-master}
+GITREPO["python-neutronclient"]=${NEUTRONCLIENT_REPO:-${GIT_BASE}/openstack/python-neutronclient.git}
+GITBRANCH["python-neutronclient"]=${NEUTRONCLIENT_BRANCH:-master}
 
 # python client library to nova that horizon (and others) use
-NOVACLIENT_REPO=${NOVACLIENT_REPO:-${GIT_BASE}/openstack/python-novaclient.git}
-NOVACLIENT_BRANCH=${NOVACLIENT_BRANCH:-master}
+GITREPO["python-novaclient"]=${NOVACLIENT_REPO:-${GIT_BASE}/openstack/python-novaclient.git}
+GITBRANCH["python-novaclient"]=${NOVACLIENT_BRANCH:-master}
+
+# python saharaclient
+GITREPO["python-saharaclient"]=${SAHARACLIENT_REPO:-${GIT_BASE}/openstack/python-saharaclient.git}
+GITBRANCH["python-saharaclient"]=${SAHARACLIENT_BRANCH:-master}
 
 # python swift client library
-SWIFTCLIENT_REPO=${SWIFTCLIENT_REPO:-${GIT_BASE}/openstack/python-swiftclient.git}
-SWIFTCLIENT_BRANCH=${SWIFTCLIENT_BRANCH:-master}
+GITREPO["python-swiftclient"]=${SWIFTCLIENT_REPO:-${GIT_BASE}/openstack/python-swiftclient.git}
+GITBRANCH["python-swiftclient"]=${SWIFTCLIENT_BRANCH:-master}
 
 # trove client library test
-TROVECLIENT_REPO=${TROVECLIENT_REPO:-${GIT_BASE}/openstack/python-troveclient.git}
-TROVECLIENT_BRANCH=${TROVECLIENT_BRANCH:-master}
+GITREPO["python-troveclient"]=${TROVECLIENT_REPO:-${GIT_BASE}/openstack/python-troveclient.git}
+GITBRANCH["python-troveclient"]=${TROVECLIENT_BRANCH:-master}
 
 # consolidated openstack python client
-OPENSTACKCLIENT_REPO=${OPENSTACKCLIENT_REPO:-${GIT_BASE}/openstack/python-openstackclient.git}
-OPENSTACKCLIENT_BRANCH=${OPENSTACKCLIENT_BRANCH:-master}
+GITREPO["python-openstackclient"]=${OPENSTACKCLIENT_REPO:-${GIT_BASE}/openstack/python-openstackclient.git}
+GITBRANCH["python-openstackclient"]=${OPENSTACKCLIENT_BRANCH:-master}
 
 ###################
 #
@@ -271,6 +292,10 @@
 GITREPO["oslo.config"]=${OSLOCFG_REPO:-${GIT_BASE}/openstack/oslo.config.git}
 GITBRANCH["oslo.config"]=${OSLOCFG_BRANCH:-master}
 
+# oslo.context
+GITREPO["oslo.context"]=${OSLOCTX_REPO:-${GIT_BASE}/openstack/oslo.context.git}
+GITBRANCH["oslo.context"]=${OSLOCTX_BRANCH:-master}
+
 # oslo.db
 GITREPO["oslo.db"]=${OSLODB_REPO:-${GIT_BASE}/openstack/oslo.db.git}
 GITBRANCH["oslo.db"]=${OSLODB_BRANCH:-master}
@@ -330,8 +355,8 @@
 ##################
 
 # glance store library
-GLANCE_STORE_REPO=${GLANCE_STORE_REPO:-${GIT_BASE}/openstack/glance_store.git}
-GLANCE_STORE_BRANCH=${GLANCE_STORE_BRANCH:-master}
+GITREPO["glance_store"]=${GLANCE_STORE_REPO:-${GIT_BASE}/openstack/glance_store.git}
+GITBRANCH["glance_store"]=${GLANCE_STORE_BRANCH:-master}
 
 # heat-cfntools server agent
 HEAT_CFNTOOLS_REPO=${HEAT_CFNTOOLS_REPO:-${GIT_BASE}/openstack/heat-cfntools.git}
@@ -342,12 +367,12 @@
 HEAT_TEMPLATES_BRANCH=${HEAT_TEMPLATES_BRANCH:-master}
 
 # django openstack_auth library
-HORIZONAUTH_REPO=${HORIZONAUTH_REPO:-${GIT_BASE}/openstack/django_openstack_auth.git}
-HORIZONAUTH_BRANCH=${HORIZONAUTH_BRANCH:-master}
+GITREPO["django_openstack_auth"]=${HORIZONAUTH_REPO:-${GIT_BASE}/openstack/django_openstack_auth.git}
+GITBRANCH["django_openstack_auth"]=${HORIZONAUTH_BRANCH:-master}
 
 # keystone middleware
-KEYSTONEMIDDLEWARE_REPO=${KEYSTONEMIDDLEWARE_REPO:-${GIT_BASE}/openstack/keystonemiddleware.git}
-KEYSTONEMIDDLEWARE_BRANCH=${KEYSTONEMIDDLEWARE_BRANCH:-master}
+GITREPO["keystonemiddleware"]=${KEYSTONEMIDDLEWARE_REPO:-${GIT_BASE}/openstack/keystonemiddleware.git}
+GITBRANCH["keystonemiddleware"]=${KEYSTONEMIDDLEWARE_BRANCH:-master}
 
 # s3 support for swift
 SWIFT3_REPO=${SWIFT3_REPO:-${GIT_BASE}/stackforge/swift3.git}
diff --git a/tests/test_refs.sh b/tests/test_refs.sh
new file mode 100755
index 0000000..bccca5d
--- /dev/null
+++ b/tests/test_refs.sh
@@ -0,0 +1,24 @@
+#!/bin/bash
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+
+echo "Ensuring we don't have crazy refs"
+
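+# Any *_BRANCH default in stackrc that does not fall back to master is an error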
+REFS=`grep BRANCH stackrc | grep -v -- '-master'`
+rc=$?
+if [[ $rc -eq 0 ]]; then
+    echo "Branch defaults must be master. Found:"
+    echo $REFS
+    exit 1
+fi
diff --git a/tools/fixup_stuff.sh b/tools/fixup_stuff.sh
index b8beb01..ca46533 100755
--- a/tools/fixup_stuff.sh
+++ b/tools/fixup_stuff.sh
@@ -18,7 +18,6 @@
 #   - (re)start messagebus daemon
 #   - remove distro packages python-crypto and python-lxml
 #   - pre-install hgtools to work around a bug in RHEL6 distribute
-#   - install nose 1.1 from EPEL
 
 # If TOP_DIR is set we're being sourced rather than running stand-alone
 # or in a sub-shell
@@ -179,14 +178,6 @@
     # Note we do this before the track-depends in ``stack.sh``.
     pip_install hgtools
 
-
-    # RHEL6's version of ``python-nose`` is incompatible with Tempest.
-    # Install nose 1.1 (Tempest-compatible) from EPEL
-    install_package python-nose1.1
-    # Add a symlink for the new nosetests to allow tox for Tempest to
-    # work unmolested.
-    sudo ln -sf /usr/bin/nosetests1.1 /usr/local/bin/nosetests
-
     # workaround for https://code.google.com/p/unittest-ext/issues/detail?id=79
     install_package python-unittest2 patch
     pip_install discover
diff --git a/tools/ironic/scripts/configure-vm b/tools/ironic/scripts/configure-vm
index 4c42c49..378fcb8 100755
--- a/tools/ironic/scripts/configure-vm
+++ b/tools/ironic/scripts/configure-vm
@@ -78,8 +78,10 @@
             params['emulator'] = "/usr/bin/qemu-kvm"
 
     if args.console_log:
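+        # Send BIOS output to the serial console so it appears in the console log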
+        params['bios_serial'] = "<bios useserial='yes'/>"
         params['console_log'] = CONSOLE_LOG % {'console_log': args.console_log}
     else:
+        params['bios_serial'] = ''
         params['console_log'] = ''
     libvirt_template = source_template % params
     conn = libvirt.open("qemu:///system")
diff --git a/tools/ironic/templates/vm.xml b/tools/ironic/templates/vm.xml
index 4f40334..ae7d685 100644
--- a/tools/ironic/templates/vm.xml
+++ b/tools/ironic/templates/vm.xml
@@ -6,6 +6,7 @@
     <type arch='%(arch)s' machine='pc-1.0'>hvm</type>
     <boot dev='%(bootdev)s'/>
     <bootmenu enable='no'/>
+    %(bios_serial)s
   </os>
   <features>
     <acpi/>