Merge "Cleanup some of the deb/rpm installs"
diff --git a/doc/source/configuration.rst b/doc/source/configuration.rst
index d70d3da..22841f6 100644
--- a/doc/source/configuration.rst
+++ b/doc/source/configuration.rst
@@ -202,8 +202,8 @@
 
         LOGDIR=$DEST/logs
 
-*Note the use of ``DEST`` to locate the main install directory; this
-is why we suggest setting it in ``local.conf``.*
+Note the use of ``DEST`` to locate the main install directory; this
+is why we suggest setting it in ``local.conf``.
 
 Enabling Syslog
 ~~~~~~~~~~~~~~~
@@ -211,7 +211,7 @@
 Logging all services to a single syslog can be convenient. Enable
 syslogging by setting ``SYSLOG`` to ``True``. If the destination log
 host is not localhost, ``SYSLOG_HOST`` and ``SYSLOG_PORT`` can be used
-to direct the message stream to the log host.  |
+to direct the message stream to the log host.
 
     ::
 
@@ -239,15 +239,15 @@
 
 Multiple database backends are available. The available databases are defined
 in the ``lib/databases`` directory.
-`mysql` is the default database, choose a different one by putting the
-following in the `localrc` section:
+``mysql`` is the default database; choose a different one by putting the
+following in the ``localrc`` section:
 
    ::
 
       disable_service mysql
       enable_service postgresql
 
-`mysql` is the default database.
+``mysql`` is the default database.
 
 RPC Backend
 -----------
@@ -260,6 +260,7 @@
 Example disabling RabbitMQ in ``local.conf``:
 
 ::
+
     disable_service rabbit
 
 
@@ -511,7 +512,7 @@
 object services will run directly in screen. The other services like
 the replicator, updaters or auditor run in the background.
 
-If you would like to enable Swift you can add this to your `localrc`
+If you would like to enable Swift you can add this to your ``localrc``
 section:
 
 ::
@@ -519,7 +520,7 @@
     enable_service s-proxy s-object s-container s-account
 
 If you want a minimal Swift install with only Swift and Keystone you
-can have this instead in your `localrc` section:
+can have this instead in your ``localrc`` section:
 
 ::
 
@@ -528,24 +529,24 @@
 
 If you only want to do some testing of a normal swift cluster
 with multiple replicas you can do so by customizing the variable
-`SWIFT_REPLICAS` in your `localrc` section (usually to 3).
+``SWIFT_REPLICAS`` in your ``localrc`` section (usually to 3).
 
 Swift S3
 ++++++++
 
-If you are enabling `swift3` in `ENABLED_SERVICES` DevStack will
+If you are enabling ``swift3`` in ``ENABLED_SERVICES`` DevStack will
 install the swift3 middleware emulation. Swift will be configured to
 act as an S3 endpoint for Keystone, effectively replacing the
-`nova-objectstore`.
+``nova-objectstore``.
 
 Only the Swift proxy server is launched in the screen session; all other
-services are started in background and managed by `swift-init` tool.
+services are started in the background and managed by the ``swift-init`` tool.
 
 Heat
 ~~~~
 
-Heat is disabled by default (see `stackrc` file). To enable it
-explicitly you'll need the following settings in your `localrc`
+Heat is disabled by default (see the ``stackrc`` file). To enable it
+explicitly you'll need the following settings in your ``localrc``
 section
 
 ::
@@ -554,7 +555,7 @@
 
 Heat can also run in standalone mode, and be configured to orchestrate
 on an external OpenStack cloud. To launch only Heat in standalone mode
-you'll need the following settings in your `localrc` section
+you'll need the following settings in your ``localrc`` section
 
 ::
 
@@ -590,14 +591,14 @@
 ~~~~~~~~~
 
 If you would like to use XenServer as the hypervisor, please refer to
-the instructions in `./tools/xen/README.md`.
+the instructions in ``./tools/xen/README.md``.
 
 Cells
 ~~~~~
 
 `Cells <http://wiki.openstack.org/blueprint-nova-compute-cells>`__ is
 an alternative scaling option.  To set up a cells environment, add the
-following to your `localrc` section:
+following to your ``localrc`` section:
 
 ::
 
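
A hypothetical ``local.conf`` pulling together several of the options
discussed in this file (addresses and passwords are illustrative, not
taken from the patch) might look like:

```ini
[[local|localrc]]
ADMIN_PASSWORD=supersecret
DEST=/opt/stack
LOGDIR=$DEST/logs
# Log all services to syslog on a remote host (hypothetical address)
SYSLOG=True
SYSLOG_HOST=192.168.42.11
# Swap the default mysql backend for PostgreSQL
disable_service mysql
enable_service postgresql
# Minimal Swift alongside the default services, with 3 replicas
enable_service s-proxy s-object s-container s-account
SWIFT_REPLICAS=3
```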
diff --git a/doc/source/faq.rst b/doc/source/faq.rst
index 3562bfa..7aca8d0 100644
--- a/doc/source/faq.rst
+++ b/doc/source/faq.rst
@@ -54,7 +54,7 @@
 releases other than those documented in ``README.md`` on a best-effort
 basis.
 
-Are there any differences between Ubuntu and Centos/Fedora support?
+Are there any differences between Ubuntu and CentOS/Fedora support?
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Both should work well and are tested by DevStack CI.
@@ -146,7 +146,7 @@
 
 
 Upstream DevStack is only tested with master and stable
-branches. Setting custom BRANCH definitions is not guarunteed to
+branches. Setting custom BRANCH definitions is not guaranteed to
 produce working results.
 
 What can I do about RabbitMQ not wanting to start on my fresh new VM?
diff --git a/doc/source/guides/devstack-with-lbaas-v2.rst b/doc/source/guides/devstack-with-lbaas-v2.rst
index f679783..4e5f874 100644
--- a/doc/source/guides/devstack-with-lbaas-v2.rst
+++ b/doc/source/guides/devstack-with-lbaas-v2.rst
@@ -17,7 +17,7 @@
     cd devstack
 
 
-Edit your `local.conf` to look like
+Edit your ``local.conf`` to look like
 
   ::
 
diff --git a/doc/source/guides/devstack-with-nested-kvm.rst b/doc/source/guides/devstack-with-nested-kvm.rst
index c652bac..85a5656 100644
--- a/doc/source/guides/devstack-with-nested-kvm.rst
+++ b/doc/source/guides/devstack-with-nested-kvm.rst
@@ -50,7 +50,7 @@
     parm:           nested:bool
 
 Start your VM; now it should have KVM capabilities -- you can verify
-that by ensuring `/dev/kvm` character device is present.
+that by ensuring the ``/dev/kvm`` character device is present.
 
 
 Configure Nested KVM for AMD-based Machines
@@ -97,7 +97,7 @@
 Expose Virtualization Extensions to DevStack VM
 -----------------------------------------------
 
-Edit the VM's libvirt XML configuration via `virsh` utility:
+Edit the VM's libvirt XML configuration via ``virsh`` utility:
 
 ::
 
@@ -115,10 +115,10 @@
 -------------------------------
 
 Before invoking ``stack.sh`` in the VM, ensure that KVM is enabled. This
-can be verified by checking for the presence of the file `/dev/kvm` in
+can be verified by checking for the presence of the file ``/dev/kvm`` in
 your VM. If it is present, DevStack will default to using the config
-attribute `virt_type = kvm` in `/etc/nova.conf`; otherwise, it'll fall
-back to `virt_type=qemu`, i.e. plain QEMU emulation.
+attribute ``virt_type = kvm`` in ``/etc/nova.conf``; otherwise, it'll fall
+back to ``virt_type=qemu``, i.e. plain QEMU emulation.
 
 Optionally, to explicitly set the virtualization type to KVM for the
 libvirt driver in nova, the below config attribute can be used in
@@ -131,7 +131,7 @@
 
 Once DevStack is configured successfully, verify that the Nova instances
 are using KVM by checking that the QEMU CLI invoked by Nova uses the
-parameter `accel=kvm`, e.g.:
+parameter ``accel=kvm``, e.g.:
 
 ::
 
diff --git a/doc/source/guides/neutron.rst b/doc/source/guides/neutron.rst
index 5891f68..996c7d1 100644
--- a/doc/source/guides/neutron.rst
+++ b/doc/source/guides/neutron.rst
@@ -35,7 +35,7 @@
                 network hardware_network {
                         address = "172.18.161.0/24"
                         router [ address = "172.18.161.1" ];
-                        devstack_laptop [ address = "172.18.161.6" ];
+                        devstack-1 [ address = "172.18.161.6" ];
                 }
         }
 
@@ -43,9 +43,13 @@
 DevStack Configuration
 ----------------------
 
+The following is a complete ``local.conf`` for the host named
+``devstack-1``. It will run all the API and supporting services, as
+well as serve as a hypervisor for guest instances.
 
 ::
 
+        [[local|localrc]]
         HOST_IP=172.18.161.6
         SERVICE_HOST=172.18.161.6
         MYSQL_HOST=172.18.161.6
@@ -57,6 +61,12 @@
         SERVICE_PASSWORD=secrete
         SERVICE_TOKEN=secrete
 
+        # Do not use Nova-Network
+        disable_service n-net
+        # Enable Neutron
+        ENABLED_SERVICES+=,q-svc,q-dhcp,q-meta,q-agt,q-l3
+
+
         ## Neutron options
         Q_USE_SECGROUP=True
         FLOATING_RANGE="172.18.161.0/24"
@@ -71,6 +81,166 @@
         OVS_BRIDGE_MAPPINGS=public:br-ex
 
 
+Adding Additional Compute Nodes
+-------------------------------
+
+Let's suppose that after installing DevStack on the first host, you
+also want to do multinode testing and networking.
+
+Physical Network Setup
+~~~~~~~~~~~~~~~~~~~~~~
+
+.. nwdiag::
+
+        nwdiag {
+                inet [ shape = cloud ];
+                router;
+                inet -- router;
+
+                network hardware_network {
+                        address = "172.18.161.0/24"
+                        router [ address = "172.18.161.1" ];
+                        devstack-1 [ address = "172.18.161.6" ];
+                        devstack-2 [ address = "172.18.161.7" ];
+                }
+        }
+
+
+After DevStack installs and configures Neutron, traffic from guest VMs
+flows out of ``devstack-2`` (the compute node) and is encapsulated in a
+VXLAN tunnel back to ``devstack-1`` (the control node) where the L3
+agent is running.
+
+::
+
+    stack@devstack-2:~/devstack$ sudo ovs-vsctl show
+    8992d965-0ba0-42fd-90e9-20ecc528bc29
+        Bridge br-int
+            fail_mode: secure
+            Port br-int
+                Interface br-int
+                    type: internal
+            Port patch-tun
+                Interface patch-tun
+                    type: patch
+                    options: {peer=patch-int}
+        Bridge br-tun
+            fail_mode: secure
+            Port "vxlan-c0a801f6"
+                Interface "vxlan-c0a801f6"
+                    type: vxlan
+                    options: {df_default="true", in_key=flow, local_ip="172.18.161.7", out_key=flow, remote_ip="172.18.161.6"}
+            Port patch-int
+                Interface patch-int
+                    type: patch
+                    options: {peer=patch-tun}
+            Port br-tun
+                Interface br-tun
+                    type: internal
+        ovs_version: "2.0.2"
+
+Open vSwitch on the control node, where the L3 agent runs, is
+configured to de-encapsulate traffic from compute nodes, then forward
+it over the ``br-ex`` bridge, where ``eth0`` is attached.
+
+::
+
+    stack@devstack-1:~/devstack$ sudo ovs-vsctl show
+    422adeea-48d1-4a1f-98b1-8e7239077964
+        Bridge br-tun
+            fail_mode: secure
+            Port br-tun
+                Interface br-tun
+                    type: internal
+            Port patch-int
+                Interface patch-int
+                    type: patch
+                    options: {peer=patch-tun}
+            Port "vxlan-c0a801d8"
+                Interface "vxlan-c0a801d8"
+                    type: vxlan
+                    options: {df_default="true", in_key=flow, local_ip="172.18.161.6", out_key=flow, remote_ip="172.18.161.7"}
+        Bridge br-ex
+            Port phy-br-ex
+                Interface phy-br-ex
+                    type: patch
+                    options: {peer=int-br-ex}
+            Port "eth0"
+                Interface "eth0"
+            Port br-ex
+                Interface br-ex
+                    type: internal
+        Bridge br-int
+            fail_mode: secure
+            Port "tapce66332d-ea"
+                tag: 1
+                Interface "tapce66332d-ea"
+                    type: internal
+            Port "qg-65e5a4b9-15"
+                tag: 2
+                Interface "qg-65e5a4b9-15"
+                    type: internal
+            Port "qr-33e5e471-88"
+                tag: 1
+                Interface "qr-33e5e471-88"
+                    type: internal
+            Port "qr-acbe9951-70"
+                tag: 1
+                Interface "qr-acbe9951-70"
+                    type: internal
+            Port br-int
+                Interface br-int
+                    type: internal
+            Port patch-tun
+                Interface patch-tun
+                    type: patch
+                    options: {peer=patch-int}
+            Port int-br-ex
+                Interface int-br-ex
+                    type: patch
+                    options: {peer=phy-br-ex}
+        ovs_version: "2.0.2"
+
+``br-int`` is a bridge that the Open vSwitch mechanism driver creates,
+which is used as the "integration bridge" where ports are created and
+plugged into the virtual switching fabric. ``br-ex`` is an OVS bridge
+used to connect physical ports (like ``eth0``), so that floating IP
+traffic for tenants can be received from the physical network
+infrastructure (and the internet) and routed to tenant network ports.
+``br-tun`` is a tunnel bridge used to connect OpenStack nodes (like
+``devstack-2``) together, so that tenant network traffic, using the
+VXLAN tunneling protocol, flows between the compute nodes where tenant
+instances run.
+
+
+
+DevStack Compute Configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The host ``devstack-2`` has a very minimal ``local.conf``.
+
+::
+
+    [[local|localrc]]
+    HOST_IP=172.18.161.7
+    SERVICE_HOST=172.18.161.6
+    MYSQL_HOST=172.18.161.6
+    RABBIT_HOST=172.18.161.6
+    GLANCE_HOSTPORT=172.18.161.6:9292
+    ADMIN_PASSWORD=secrete
+    MYSQL_PASSWORD=secrete
+    RABBIT_PASSWORD=secrete
+    SERVICE_PASSWORD=secrete
+    SERVICE_TOKEN=secrete
+
+    ## Neutron options
+    PUBLIC_INTERFACE=eth0
+    ENABLED_SERVICES=n-cpu,rabbit,q-agt
+
+Network traffic from ``eth0`` on the compute nodes is then NAT'd by the
+controller node that runs Neutron's ``neutron-l3-agent`` and provides L3
+connectivity.
+
 
 Neutron Networking with Open vSwitch and Provider Networks
 ==========================================================
@@ -170,8 +340,8 @@
 **Compute Nodes**
 
 In this example, the nodes that will host guest instances will run
-the `neutron-openvswitch-agent` for network connectivity, as well as
-the compute service `nova-compute`.
+the ``neutron-openvswitch-agent`` for network connectivity, as well as
+the compute service ``nova-compute``.
 
 DevStack Configuration
 ----------------------
@@ -256,16 +426,16 @@
         Q_L3_ENABLED=False
 
 Compute node 2's configuration will be exactly the same, except
-`HOST_IP` will be `10.0.0.4`
+``HOST_IP`` will be ``10.0.0.4``
 
 When DevStack is configured to use provider networking (via
-`Q_USE_PROVIDER_NETWORKING` is True and `Q_L3_ENABLED` is False) -
+``Q_USE_PROVIDER_NETWORKING`` is True and ``Q_L3_ENABLED`` is False) -
 DevStack will automatically add the network interface defined in
-`PUBLIC_INTERFACE` to the `OVS_PHYSICAL_BRIDGE`
+``PUBLIC_INTERFACE`` to the ``OVS_PHYSICAL_BRIDGE``
 
 For example, with the above  configuration, a bridge is
-created, named `br-ex` which is managed by Open vSwitch, and the
-second interface on the compute node, `eth1` is attached to the
+created, named ``br-ex`` which is managed by Open vSwitch, and the
+second interface on the compute node, ``eth1`` is attached to the
 bridge, to forward traffic sent by guest VMs.
 
 Miscellaneous Tips
@@ -307,7 +477,7 @@
 ------------------------------------------------
 
 Extension drivers for the ML2 plugin are set with the variable
-`Q_ML2_PLUGIN_EXT_DRIVERS`, and includes the 'port_security' extension
+``Q_ML2_PLUGIN_EXT_DRIVERS``, which includes the 'port_security' extension
 by default. If you want to remove all the extension drivers (even
-'port_security'), set `Q_ML2_PLUGIN_EXT_DRIVERS` to blank.
+'port_security'), set ``Q_ML2_PLUGIN_EXT_DRIVERS`` to blank.
 
diff --git a/exercises/neutron-adv-test.sh b/exercises/neutron-adv-test.sh
index a8fbd86..9bcb766 100755
--- a/exercises/neutron-adv-test.sh
+++ b/exercises/neutron-adv-test.sh
@@ -235,7 +235,7 @@
     local NET_ID
     NET_ID=$(neutron net-create --tenant-id $TENANT_ID $NET_NAME $EXTRA| grep ' id ' | awk '{print $4}' )
     die_if_not_set $LINENO NET_ID "Failure creating NET_ID for $TENANT_ID $NET_NAME $EXTRA"
-    neutron subnet-create --ip-version 4 --tenant-id $TENANT_ID --gateway $GATEWAY $NET_ID $CIDR
+    neutron subnet-create --ip-version 4 --tenant-id $TENANT_ID --gateway $GATEWAY --subnetpool None $NET_ID $CIDR
     neutron_debug_admin probe-create --device-owner compute $NET_ID
     source $TOP_DIR/openrc demo demo
 }
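
The ``grep ' id ' | awk '{print $4}'`` pipeline in the hunk above pulls
the ID column out of the client's table output. A standalone sketch with
faked output (the UUID and table rows are invented, since the real
command needs a live Neutron endpoint) shows the field positions:

```shell
# Fake two rows of the table the client prints.
client_table='| id   | 3f2a-fake-uuid |
| name | mynet          |'

# awk splits on whitespace, so on the matching row the fields are
# "|", "id", "|", "<uuid>"; field 4 is the value we want.
NET_ID=$(echo "$client_table" | grep ' id ' | awk '{print $4}')
echo "$NET_ID"
```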
diff --git a/exercises/swift.sh b/exercises/swift.sh
index afcede8..4a41e0f 100755
--- a/exercises/swift.sh
+++ b/exercises/swift.sh
@@ -2,7 +2,7 @@
 
 # **swift.sh**
 
-# Test swift via the ``swift`` command line from ``python-swiftclient``
+# Test swift via the ``python-openstackclient`` command line
 
 echo "*********************************************************************"
 echo "Begin DevStack Exercise: $0"
@@ -39,26 +39,29 @@
 
 # Container name
 CONTAINER=ex-swift
+OBJECT=/etc/issue
 
 
 # Testing Swift
 # =============
 
 # Check if we have to swift via keystone
-swift stat || die $LINENO "Failure getting status"
+openstack object store account show || die $LINENO "Failure getting account status"
 
 # We start by creating a test container
 openstack container create $CONTAINER || die $LINENO "Failure creating container $CONTAINER"
 
-# add some files into it.
-openstack object create $CONTAINER /etc/issue || die $LINENO "Failure uploading file to container $CONTAINER"
+# add a file into it.
+openstack object create $CONTAINER $OBJECT || die $LINENO "Failure uploading file to container $CONTAINER"
 
-# list them
+# list the objects
 openstack object list $CONTAINER || die $LINENO "Failure listing contents of container $CONTAINER"
 
-# And we may want to delete them now that we have tested that
-# everything works.
-swift delete $CONTAINER || die $LINENO "Failure deleting container $CONTAINER"
+# delete the object first
+openstack object delete $CONTAINER $OBJECT || die $LINENO "Failure deleting object $OBJECT in container $CONTAINER"
+
+# delete the container
+openstack container delete $CONTAINER || die $LINENO "Failure deleting container $CONTAINER"
 
 set +o xtrace
 echo "*********************************************************************"
diff --git a/files/apache-nova-metadata.template b/files/apache-nova-metadata.template
new file mode 100644
index 0000000..6231c1c
--- /dev/null
+++ b/files/apache-nova-metadata.template
@@ -0,0 +1,25 @@
+Listen %PUBLICPORT%
+
+<VirtualHost *:%PUBLICPORT%>
+    WSGIDaemonProcess nova-metadata processes=%APIWORKERS% threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
+    WSGIProcessGroup nova-metadata
+    WSGIScriptAlias / %PUBLICWSGI%
+    WSGIApplicationGroup %{GLOBAL}
+    WSGIPassAuthorization On
+    <IfVersion >= 2.4>
+      ErrorLogFormat "%M"
+    </IfVersion>
+    ErrorLog /var/log/%APACHE_NAME%/nova-metadata.log
+    %SSLENGINE%
+    %SSLCERTFILE%
+    %SSLKEYFILE%
+</VirtualHost>
+
+Alias /metadata %PUBLICWSGI%
+<Location /metadata>
+    SetHandler wsgi-script
+    Options +ExecCGI
+    WSGIProcessGroup nova-metadata
+    WSGIApplicationGroup %{GLOBAL}
+    WSGIPassAuthorization On
+</Location>
diff --git a/files/debs/general b/files/debs/general
index 9b27156..1215147 100644
--- a/files/debs/general
+++ b/files/debs/general
@@ -8,6 +8,7 @@
 graphviz # needed for docs
 iputils-ping
 libffi-dev # for pyOpenSSL
+libjpeg-dev # Pillow 3.0.0
 libmysqlclient-dev  # MySQL-python
 libpq-dev  # psycopg2
 libssl-dev # for pyOpenSSL
diff --git a/files/rpms-suse/general b/files/rpms-suse/general
index 651243d..34a2955 100644
--- a/files/rpms-suse/general
+++ b/files/rpms-suse/general
@@ -9,6 +9,7 @@
 graphviz # docs
 iputils
 libffi-devel  # pyOpenSSL
+libjpeg8-devel # Pillow 3.0.0
 libmysqlclient-devel # MySQL-python
 libopenssl-devel # to rebuild pyOpenSSL if needed
 libxslt-devel  # lxml
@@ -26,3 +27,4 @@
 tcpdump
 unzip
 wget
+zlib-devel
diff --git a/files/rpms/general b/files/rpms/general
index cfd9479..40b06f4 100644
--- a/files/rpms/general
+++ b/files/rpms/general
@@ -12,6 +12,7 @@
 java-1.7.0-openjdk-headless  # NOPRIME rhel7
 java-1.8.0-openjdk-headless  # NOPRIME f21,f22
 libffi-devel
+libjpeg-turbo-devel # Pillow 3.0.0
 libxml2-devel # lxml
 libxslt-devel # lxml
 libyaml-devel
diff --git a/files/rpms/nova b/files/rpms/nova
index e70f138..00e7596 100644
--- a/files/rpms/nova
+++ b/files/rpms/nova
@@ -7,6 +7,7 @@
 genisoimage # required for config_drive
 iptables
 iputils
+kernel-modules # dist:f21,f22,f23
 kpartx
 kvm # NOPRIME
 libvirt-bin # NOPRIME
diff --git a/functions b/functions
index ca5955e..34da1ba 100644
--- a/functions
+++ b/functions
@@ -410,7 +410,7 @@
     ip=$(echo "$nova_result" | grep "$network_name" | get_field 2)
     if [[ $ip = "" ]];then
         echo "$nova_result"
-        die $LINENO "[Fail] Coudn't get ipaddress of VM"
+        die $LINENO "[Fail] Couldn't get IP address of VM"
     fi
     echo $ip
 }
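
``get_field`` is a DevStack helper defined in ``functions-common``
rather than in this hunk; an approximate reimplementation, consistent
with how the code above uses it (this sketch is an assumption, not the
real helper), extracts the Nth pipe-delimited column:

```shell
# Approximation of DevStack's get_field helper: column N of a
# "| a | b |" table row is the Nth value between the pipes.
function get_field {
    local data field
    while read data; do
        field="\$$(($1 + 1))"
        echo "$data" | awk -F'[ \t]*\|[ \t]*' "{print $field}"
    done
}

# Mirrors the hunk above: grep the row for the network, take column 2.
nova_result='| private | 10.0.0.4 |'
ip=$(echo "$nova_result" | grep "private" | get_field 2)
echo "$ip"
```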
diff --git a/functions-common b/functions-common
index e252139..6a065ba 100644
--- a/functions-common
+++ b/functions-common
@@ -73,42 +73,39 @@
 # - A `devstack-admin` entry for the `admin` user for the `admin` project.
 # write_clouds_yaml
 function write_clouds_yaml {
-    local clouds_yaml
+    # The location is a variable to allow for easier refactoring later to
+    # make it overridable. There is currently no use case where doing so
+    # makes sense, so it is not yet configurable.
 
-    sudo mkdir -p /etc/openstack
+    CLOUDS_YAML=/etc/openstack/clouds.yaml
+
+    sudo mkdir -p $(dirname $CLOUDS_YAML)
     sudo chown -R $STACK_USER /etc/openstack
-    # XXX: to be removed, see https://review.openstack.org/237149/
-    # careful not to sudo this, incase ~ is NFS mounted
-    mkdir -p ~/.config/openstack
 
-    for clouds_path in /etc/openstack ~/.config/openstack ; do
-        clouds_yaml=$clouds_path/clouds.yaml
-
-        CA_CERT_ARG=''
-        if [ -f "$SSL_BUNDLE_FILE" ]; then
-            CA_CERT_ARG="--os-cacert $SSL_BUNDLE_FILE"
-        fi
-        $TOP_DIR/tools/update_clouds_yaml.py \
-            --file $clouds_yaml \
-            --os-cloud devstack \
-            --os-region-name $REGION_NAME \
-            --os-identity-api-version 3 \
-            $CA_CERT_ARG \
-            --os-auth-url $KEYSTONE_AUTH_URI \
-            --os-username demo \
-            --os-password $ADMIN_PASSWORD \
-            --os-project-name demo
-        $TOP_DIR/tools/update_clouds_yaml.py \
-            --file $clouds_yaml \
-            --os-cloud devstack-admin \
-            --os-region-name $REGION_NAME \
-            --os-identity-api-version 3 \
-            $CA_CERT_ARG \
-            --os-auth-url $KEYSTONE_AUTH_URI \
-            --os-username admin \
-            --os-password $ADMIN_PASSWORD \
-            --os-project-name admin
-    done
+    CA_CERT_ARG=''
+    if [ -f "$SSL_BUNDLE_FILE" ]; then
+        CA_CERT_ARG="--os-cacert $SSL_BUNDLE_FILE"
+    fi
+    $TOP_DIR/tools/update_clouds_yaml.py \
+        --file $CLOUDS_YAML \
+        --os-cloud devstack \
+        --os-region-name $REGION_NAME \
+        --os-identity-api-version 3 \
+        $CA_CERT_ARG \
+        --os-auth-url $KEYSTONE_AUTH_URI \
+        --os-username demo \
+        --os-password $ADMIN_PASSWORD \
+        --os-project-name demo
+    $TOP_DIR/tools/update_clouds_yaml.py \
+        --file $CLOUDS_YAML \
+        --os-cloud devstack-admin \
+        --os-region-name $REGION_NAME \
+        --os-identity-api-version 3 \
+        $CA_CERT_ARG \
+        --os-auth-url $KEYSTONE_AUTH_URI \
+        --os-username admin \
+        --os-password $ADMIN_PASSWORD \
+        --os-project-name admin
 }
 
 # trueorfalse <True|False> <VAR>
@@ -913,16 +910,11 @@
 # Usage: _get_or_create_endpoint_with_interface <service> <interface> <url> <region>
 function _get_or_create_endpoint_with_interface {
     local endpoint_id
-    # TODO(dgonzalez): The check of the region name, as done in the grep
-    # statement below, exists only because keystone does currently
-    # not allow filtering the region name when listing endpoints. If keystone
-    # gets support for this, the check for the region name can be removed.
-    # Related bug in keystone: https://bugs.launchpad.net/keystone/+bug/1482772
     endpoint_id=$(openstack endpoint list \
         --service $1 \
         --interface $2 \
         --region $4 \
-        -c ID -c Region -f value | grep $4 | cut -f 1 -d " ")
+        -c ID -f value)
     if [[ -z "$endpoint_id" ]]; then
         # Creates new endpoint
         endpoint_id=$(openstack endpoint create \
@@ -1039,7 +1031,7 @@
                 # We are using BASH regexp matching feature.
                 package=${BASH_REMATCH[1]}
                 distros=${BASH_REMATCH[2]}
-                # In bash ${VAR,,} will lowecase VAR
+                # In bash ${VAR,,} will lowercase VAR
                 # Look for a match in the distro list
                 if [[ ! ${distros,,} =~ ${DISTRO,,} ]]; then
                     # If no match then skip this package
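
The ``${VAR,,}`` lowercasing noted in the comment above is what makes
the distro match case-insensitive. A standalone sketch (the package
tag and distro names are invented):

```shell
# ${VAR,,} is a bash 4 expansion that lowercases the value; the parser
# uses it so a "dist:f21,f22" tag matches DISTRO=F22 regardless of case.
distros="f21,f22,f23"   # hypothetical dist: tag from a package file
DISTRO="F22"
if [[ ${distros,,} =~ ${DISTRO,,} ]]; then
    match=yes
else
    match=no
fi

# A distro not in the list must not match.
DISTRO="trusty"
if [[ ${distros,,} =~ ${DISTRO,,} ]]; then
    other=yes
else
    other=no
fi
echo "$match $other"
```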
@@ -1355,6 +1347,7 @@
 # If the command includes shell metacharacters (;<>*) it must be run using a shell
 # If an optional group is provided sg will be used to run the
 # command as that group.
+# Uses globals ``USE_SCREEN``
 # run_process service "command-line" [group]
 function run_process {
     local service=$1
@@ -1373,7 +1366,7 @@
 
 # Helper to launch a process in a named screen
 # Uses globals ``CURRENT_LOG_TIME``, ``LOGDIR``, ``SCREEN_LOGDIR``, ``SCREEN_NAME``,
-# ``SERVICE_DIR``, ``USE_SCREEN``
+# ``SERVICE_DIR``, ``SCREEN_IS_LOGGING``
 # screen_process name "command-line" [group]
 # Run a command in a shell in a screen window, if an optional group
 # is provided, use sg to set the group of the command.
@@ -1384,7 +1377,6 @@
 
     SCREEN_NAME=${SCREEN_NAME:-stack}
     SERVICE_DIR=${SERVICE_DIR:-${DEST}/status}
-    USE_SCREEN=$(trueorfalse True USE_SCREEN)
 
     screen -S $SCREEN_NAME -X screen -t $name
 
@@ -1393,8 +1385,12 @@
     echo "SCREEN_LOGDIR: $SCREEN_LOGDIR"
     echo "log: $real_logfile"
     if [[ -n ${LOGDIR} ]]; then
-        screen -S $SCREEN_NAME -p $name -X logfile "$real_logfile"
-        screen -S $SCREEN_NAME -p $name -X log on
+        if [[ "$SCREEN_IS_LOGGING" == "True" ]]; then
+            screen -S $SCREEN_NAME -p $name -X logfile "$real_logfile"
+            screen -S $SCREEN_NAME -p $name -X log on
+        fi
+        # If logging isn't active, touch the file so the symlink isn't broken
+        touch "$real_logfile"
         ln -sf "$real_logfile" ${LOGDIR}/${name}.log
         if [[ -n ${SCREEN_LOGDIR} ]]; then
             # Drop the backward-compat symlink
@@ -1433,7 +1429,7 @@
 }
 
 # Screen rc file builder
-# Uses globals ``SCREEN_NAME``, ``SCREENRC``
+# Uses globals ``SCREEN_NAME``, ``SCREENRC``, ``SCREEN_IS_LOGGING``
 # screen_rc service "command-line"
 function screen_rc {
     SCREEN_NAME=${SCREEN_NAME:-stack}
@@ -1453,7 +1449,7 @@
         echo "screen -t $1 bash" >> $SCREENRC
         echo "stuff \"$2$NL\"" >> $SCREENRC
 
-        if [[ -n ${LOGDIR} ]]; then
+        if [[ -n ${LOGDIR} ]] && [[ "$SCREEN_IS_LOGGING" == "True" ]]; then
             echo "logfile ${LOGDIR}/${1}.log.${CURRENT_LOG_TIME}" >>$SCREENRC
             echo "log on" >>$SCREENRC
         fi
@@ -1464,14 +1460,13 @@
 # If a PID is available use it, kill the whole process group via TERM
 # If screen is being used kill the screen window; this will catch processes
 # that did not leave a PID behind
-# Uses globals ``SCREEN_NAME``, ``SERVICE_DIR``, ``USE_SCREEN``
+# Uses globals ``SCREEN_NAME``, ``SERVICE_DIR``
 # screen_stop_service service
 function screen_stop_service {
     local service=$1
 
     SCREEN_NAME=${SCREEN_NAME:-stack}
     SERVICE_DIR=${SERVICE_DIR:-${DEST}/status}
-    USE_SCREEN=$(trueorfalse True USE_SCREEN)
 
     if is_service_enabled $service; then
         # Clean up the screen window
@@ -1489,7 +1484,6 @@
     local service=$1
 
     SERVICE_DIR=${SERVICE_DIR:-${DEST}/status}
-    USE_SCREEN=$(trueorfalse True USE_SCREEN)
 
     if is_service_enabled $service; then
         # Kill via pid if we have one available
@@ -1508,7 +1502,7 @@
                 # this fixed in all services:
                 # https://bugs.launchpad.net/oslo-incubator/+bug/1446583
                 sleep 1
-                # /bin/true becakse pkill on a non existant process returns an error
+                # /bin/true because pkill on a non existent process returns an error
                 pkill -g $(cat $SERVICE_DIR/$SCREEN_NAME/$service.pid) || /bin/true
             fi
             rm $SERVICE_DIR/$SCREEN_NAME/$service.pid
@@ -1551,11 +1545,11 @@
 }
 
 # Tail a log file in a screen if USE_SCREEN is true.
+# Uses globals ``USE_SCREEN``
 function tail_log {
     local name=$1
     local logfile=$2
 
-    USE_SCREEN=$(trueorfalse True USE_SCREEN)
     if [[ "$USE_SCREEN" = "True" ]]; then
         screen_process "$name" "sudo tail -f $logfile"
     fi
@@ -1716,7 +1710,7 @@
         if [[ -f $dir/devstack/override-defaults ]]; then
             # be really verbose that an override is happening, as it
             # may not be obvious if things fail later.
-            echo "$plugin has overriden the following defaults"
+            echo "$plugin has overridden the following defaults"
             cat $dir/devstack/override-defaults
             source $dir/devstack/override-defaults
         fi
diff --git a/lib/dlm b/lib/dlm
new file mode 100644
index 0000000..f68ee26
--- /dev/null
+++ b/lib/dlm
@@ -0,0 +1,108 @@
+#!/bin/bash
+#
+# lib/dlm
+#
+# Functions to control the installation and configuration of software
+# that provides a dlm (and possibly other functions). The default is
+# **zookeeper**, and is going to be the only backend supported in the
+# devstack tree.
+
+# Dependencies:
+#
+# - ``functions`` file
+
+# ``stack.sh`` calls the entry points in this order:
+#
+# - is_dlm_enabled
+# - install_dlm
+# - configure_dlm
+# - cleanup_dlm
+
+# Save trace setting
+XTRACE=$(set +o | grep xtrace)
+set +o xtrace
+
+
+# Defaults
+# --------
+
+# <define global variables here that belong to this project>
+
+# Set up default directories
+ZOOKEEPER_DATA_DIR=$DEST/data/zookeeper
+ZOOKEEPER_CONF_DIR=/etc/zookeeper
+
+
+# Entry Points
+# ------------
+#
+# NOTE(sdague): it is expected that when someone wants to implement
+# another one of these out of tree, they'll implement the following
+# functions:
+#
+# - dlm_backend
+# - install_dlm
+# - configure_dlm
+# - cleanup_dlm
+
+# This should be declared in the settings file of any plugin or
+# service that needs to have a dlm in its environment.
+function use_dlm {
+    enable_service $(dlm_backend)
+}
+
+# A function to return the name of the backend in question; some users
+# are going to need to know this.
+function dlm_backend {
+    echo "zookeeper"
+}
+
+# Test if a dlm is enabled (defaults to a zookeeper specific check)
+function is_dlm_enabled {
+    [[ ,${ENABLED_SERVICES}, =~ ,"$(dlm_backend)", ]] && return 0
+    return 1
+}
+
+# cleanup_dlm() - Remove residual data files, anything left over from previous
+# runs that a clean run would need to clean up
+function cleanup_dlm {
+    # NOTE(sdague): we don't check for is_enabled here because we
+    # should just delete this regardless. Sometimes users update
+    # their service list before they run cleanup.
+    sudo rm -rf $ZOOKEEPER_DATA_DIR
+}
+
+# configure_dlm() - Set config files, create data dirs, etc
+function configure_dlm {
+    if is_dlm_enabled; then
+        sudo cp $FILES/zookeeper/* $ZOOKEEPER_CONF_DIR
+        sudo sed -i -e 's|.*dataDir.*|dataDir='$ZOOKEEPER_DATA_DIR'|' $ZOOKEEPER_CONF_DIR/zoo.cfg
+        # clean up from previous (possibly aborted) runs
+        # create required data files
+        sudo rm -rf $ZOOKEEPER_DATA_DIR
+        sudo mkdir -p $ZOOKEEPER_DATA_DIR
+        # restart after configuration; there is no reason to make this
+        # another step, because having data files that don't match the
+        # running zookeeper is just going to cause tears.
+        restart_service zookeeper
+    fi
+}
+
+# install_dlm() - Collect source and prepare
+function install_dlm {
+    if is_dlm_enabled; then
+        if is_ubuntu; then
+            install_package zookeeperd
+        else
+            die $LINENO "Don't know how to install zookeeper on this platform"
+        fi
+    fi
+}
+
+# Restore xtrace
+$XTRACE
+
+# Tell emacs to use shell-script-mode
+## Local variables:
+## mode: shell-script
+## End:
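The entry points above are the contract an out-of-tree backend is expected to implement. A minimal sketch of such a plugin settings file, assuming a hypothetical "etcd" backend and package name (illustrative only, not a supported plugin):

```shell
# Hypothetical out-of-tree DLM settings; backend and package names
# here are assumptions for illustration.

# Override the backend name so use_dlm enables the right service
function dlm_backend {
    echo "etcd"
}

# Install the backend package only when a dlm is enabled
function install_dlm {
    if is_dlm_enabled; then
        install_package etcd
    fi
}
```

A plugin shipping this would then simply call ``use_dlm`` from its settings file.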
diff --git a/lib/horizon b/lib/horizon
index 6ecd755..ff63b06 100644
--- a/lib/horizon
+++ b/lib/horizon
@@ -99,13 +99,8 @@
 
     _horizon_config_set $local_settings "" OPENSTACK_HOST \"${KEYSTONE_SERVICE_HOST}\"
 
-    if [ "$ENABLE_IDENTITY_V2" == "False" ]; then
-        # Only Identity v3 API is available; then use it with v3 auth tokens
-        _horizon_config_set $local_settings "" OPENSTACK_API_VERSIONS {\"identity\":3}
-        _horizon_config_set $local_settings "" OPENSTACK_KEYSTONE_URL "\"${KEYSTONE_SERVICE_PROTOCOL}://${KEYSTONE_SERVICE_HOST}:${KEYSTONE_SERVICE_PORT}/v3\""
-    else
-        _horizon_config_set $local_settings "" OPENSTACK_KEYSTONE_URL "\"${KEYSTONE_SERVICE_PROTOCOL}://${KEYSTONE_SERVICE_HOST}:${KEYSTONE_SERVICE_PORT}/v2.0\""
-    fi
+    _horizon_config_set $local_settings "" OPENSTACK_API_VERSIONS {\"identity\":3}
+    _horizon_config_set $local_settings "" OPENSTACK_KEYSTONE_URL "\"${KEYSTONE_SERVICE_PROTOCOL}://${KEYSTONE_SERVICE_HOST}:${KEYSTONE_SERVICE_PORT}/v3\""
 
     if [ -f $SSL_BUNDLE_FILE ]; then
         _horizon_config_set $local_settings "" OPENSTACK_SSL_CACERT \"${SSL_BUNDLE_FILE}\"
diff --git a/lib/ironic b/lib/ironic
index d786870..016e639 100644
--- a/lib/ironic
+++ b/lib/ironic
@@ -672,6 +672,8 @@
     # enable tftp natting for allowing connections to HOST_IP's tftp server
     sudo modprobe nf_conntrack_tftp
     sudo modprobe nf_nat_tftp
+    # explicitly allow DHCP - packets are occasionally being dropped here
+    sudo iptables -I INPUT -p udp --dport 67:68 --sport 67:68 -j ACCEPT || true
     # nodes boot from TFTP and callback to the API server listening on $HOST_IP
     sudo iptables -I INPUT -d $HOST_IP -p udp --dport 69 -j ACCEPT || true
     sudo iptables -I INPUT -d $HOST_IP -p tcp --dport $IRONIC_SERVICE_PORT -j ACCEPT || true
diff --git a/lib/neutron-legacy b/lib/neutron-legacy
index c244e54..79c3140 100644
--- a/lib/neutron-legacy
+++ b/lib/neutron-legacy
@@ -486,7 +486,6 @@
     # optionally set options in nova_conf
     neutron_plugin_create_nova_conf
 
-    iniset $NOVA_CONF DEFAULT linuxnet_interface_driver "$LINUXNET_VIF_DRIVER"
     if is_service_enabled q-meta; then
         iniset $NOVA_CONF neutron service_metadata_proxy "True"
     fi
@@ -542,12 +541,12 @@
         die_if_not_set $LINENO NET_ID "Failure creating NET_ID for $PHYSICAL_NETWORK $TENANT_ID"
 
         if [[ "$IP_VERSION" =~ 4.* ]]; then
-            SUBNET_ID=$(neutron subnet-create --tenant_id $TENANT_ID --ip_version 4 ${ALLOCATION_POOL:+--allocation-pool $ALLOCATION_POOL} --name $PROVIDER_SUBNET_NAME --gateway $NETWORK_GATEWAY $NET_ID $FIXED_RANGE | grep ' id ' | get_field 2)
+            SUBNET_ID=$(neutron subnet-create --tenant_id $TENANT_ID --ip_version 4 ${ALLOCATION_POOL:+--allocation-pool $ALLOCATION_POOL} --name $PROVIDER_SUBNET_NAME --gateway $NETWORK_GATEWAY --subnetpool None $NET_ID $FIXED_RANGE | grep ' id ' | get_field 2)
             die_if_not_set $LINENO SUBNET_ID "Failure creating SUBNET_ID for $PROVIDER_SUBNET_NAME $TENANT_ID"
         fi
 
         if [[ "$IP_VERSION" =~ .*6 ]]; then
-            SUBNET_V6_ID=$(neutron subnet-create --tenant_id $TENANT_ID --ip_version 6 --ipv6-address-mode $IPV6_ADDRESS_MODE --gateway $V6_NETWORK_GATEWAY --name $PROVIDER_SUBNET_NAME_V6 $NET_ID $FIXED_RANGE_V6 | grep 'id' | get_field 2)
+            SUBNET_V6_ID=$(neutron subnet-create --tenant_id $TENANT_ID --ip_version 6 --ipv6-address-mode $IPV6_ADDRESS_MODE --gateway $V6_NETWORK_GATEWAY --name $PROVIDER_SUBNET_NAME_V6 --subnetpool_id None $NET_ID $FIXED_RANGE_V6 | grep 'id' | get_field 2)
             die_if_not_set $LINENO SUBNET_V6_ID "Failure creating SUBNET_V6_ID for $PROVIDER_SUBNET_NAME_V6 $TENANT_ID"
         fi
 
@@ -799,7 +798,7 @@
         local IP_ADD=""
         local IP_DEL=""
         local DEFAULT_ROUTE_GW
-        DEFAULT_ROUTE_GW=$(ip r | awk "/default.+$from_intf/ { print \$3; exit }")
+        DEFAULT_ROUTE_GW=$(ip -f $af r | awk "/default.+$from_intf/ { print \$3; exit }")
         local ADD_OVS_PORT=""
 
         if [[ $af == "inet" ]]; then
@@ -811,7 +810,7 @@
         fi
 
         if [ "$DEFAULT_ROUTE_GW" != "" ]; then
-            ADD_DEFAULT_ROUTE="sudo ip r replace default via $DEFAULT_ROUTE_GW dev $to_intf"
+            ADD_DEFAULT_ROUTE="sudo ip -f $af r replace default via $DEFAULT_ROUTE_GW dev $to_intf"
         fi
 
         if [[ "$add_ovs_port" == "True" ]]; then
@@ -1236,6 +1235,7 @@
     subnet_params+="--ip_version 4 "
     subnet_params+="--gateway $NETWORK_GATEWAY "
     subnet_params+="--name $PRIVATE_SUBNET_NAME "
+    subnet_params+="--subnetpool None "
     subnet_params+="$NET_ID $FIXED_RANGE"
     local subnet_id
     subnet_id=$(neutron subnet-create $subnet_params | grep ' id ' | get_field 2)
@@ -1252,6 +1252,7 @@
     subnet_params+="--ip_version 6 "
     subnet_params+="--gateway $IPV6_PRIVATE_NETWORK_GATEWAY "
     subnet_params+="--name $IPV6_PRIVATE_SUBNET_NAME "
+    subnet_params+="--subnetpool None "
     subnet_params+="$NET_ID $FIXED_RANGE_V6 $ipv6_modes"
     local ipv6_subnet_id
     ipv6_subnet_id=$(neutron subnet-create $subnet_params | grep ' id ' | get_field 2)
@@ -1265,6 +1266,7 @@
     subnet_params+="${Q_FLOATING_ALLOCATION_POOL:+--allocation-pool $Q_FLOATING_ALLOCATION_POOL} "
     subnet_params+="--gateway $PUBLIC_NETWORK_GATEWAY "
     subnet_params+="--name $PUBLIC_SUBNET_NAME "
+    subnet_params+="--subnetpool None "
     subnet_params+="$EXT_NET_ID $FLOATING_RANGE "
     subnet_params+="-- --enable_dhcp=False"
     local id_and_ext_gw_ip
@@ -1278,6 +1280,7 @@
     local subnet_params="--ip_version 6 "
     subnet_params+="--gateway $IPV6_PUBLIC_NETWORK_GATEWAY "
     subnet_params+="--name $IPV6_PUBLIC_SUBNET_NAME "
+    subnet_params+="--subnetpool None "
     subnet_params+="$EXT_NET_ID $IPV6_PUBLIC_RANGE "
     subnet_params+="-- --enable_dhcp=False"
     local ipv6_id_and_ext_gw_ip
diff --git a/lib/neutron_plugins/ibm b/lib/neutron_plugins/ibm
deleted file mode 100644
index dd5cfa6..0000000
--- a/lib/neutron_plugins/ibm
+++ /dev/null
@@ -1,133 +0,0 @@
-#!/bin/bash
-#
-# Neutron IBM SDN-VE plugin
-# ---------------------------
-
-# Save trace setting
-IBM_XTRACE=$(set +o | grep xtrace)
-set +o xtrace
-
-source $TOP_DIR/lib/neutron_plugins/ovs_base
-
-function neutron_plugin_install_agent_packages {
-    _neutron_ovs_base_install_agent_packages
-}
-
-function _neutron_interface_setup {
-    # Setup one interface on the integration bridge if needed
-    # The plugin agent to be used if more than one interface is used
-    local bridge=$1
-    local interface=$2
-    sudo ovs-vsctl --no-wait -- --may-exist add-port $bridge $interface
-}
-
-function neutron_setup_integration_bridge {
-    # Setup integration bridge if needed
-    if [[ "$SDNVE_INTEGRATION_BRIDGE" != "" ]]; then
-        neutron_ovs_base_cleanup
-        _neutron_ovs_base_setup_bridge $SDNVE_INTEGRATION_BRIDGE
-        if [[ "$SDNVE_INTERFACE_MAPPINGS" != "" ]]; then
-            interfaces=(${SDNVE_INTERFACE_MAPPINGS//[,:]/ })
-            _neutron_interface_setup $SDNVE_INTEGRATION_BRIDGE ${interfaces[1]}
-        fi
-    fi
-
-    # Set controller to SDNVE controller (1st of list) if exists
-    if [[ "$SDNVE_CONTROLLER_IPS" != "" ]]; then
-        # Get the first controller
-        controllers=(${SDNVE_CONTROLLER_IPS//[\[,\]]/ })
-        SDNVE_IP=${controllers[0]}
-        sudo ovs-vsctl set-controller $SDNVE_INTEGRATION_BRIDGE tcp:$SDNVE_IP
-    fi
-}
-
-function neutron_plugin_create_nova_conf {
-    # if n-cpu is enabled, then setup integration bridge
-    if is_service_enabled n-cpu; then
-        neutron_setup_integration_bridge
-    fi
-}
-
-function is_neutron_ovs_base_plugin {
-    if [[ "$SDNVE_INTEGRATION_BRIDGE" != "" ]]; then
-        # Yes, we use OVS.
-        return 0
-    else
-        # No, we do not use OVS.
-        return 1
-    fi
-}
-
-function neutron_plugin_configure_common {
-    Q_PLUGIN_CONF_PATH=etc/neutron/plugins/ibm
-    Q_PLUGIN_CONF_FILENAME=sdnve_neutron_plugin.ini
-    Q_PLUGIN_CLASS="neutron.plugins.ibm.sdnve_neutron_plugin.SdnvePluginV2"
-}
-
-function neutron_plugin_configure_service {
-    # Define extra "SDNVE" configuration options when q-svc is configured
-
-    iniset /$Q_PLUGIN_CONF_FILE securitygroup firewall_driver neutron.agent.firewall.NoopFirewallDriver
-
-    if [[ "$SDNVE_CONTROLLER_IPS" != "" ]]; then
-        iniset /$Q_PLUGIN_CONF_FILE sdnve controller_ips $SDNVE_CONTROLLER_IPS
-    fi
-
-    if [[ "$SDNVE_INTEGRATION_BRIDGE" != "" ]]; then
-        iniset /$Q_PLUGIN_CONF_FILE sdnve integration_bridge $SDNVE_INTEGRATION_BRIDGE
-    fi
-
-    if [[ "$SDNVE_RESET_BRIDGE" != "" ]]; then
-        iniset /$Q_PLUGIN_CONF_FILE sdnve reset_bridge $SDNVE_RESET_BRIDGE
-    fi
-
-    if [[ "$SDNVE_OUT_OF_BAND" != "" ]]; then
-        iniset /$Q_PLUGIN_CONF_FILE sdnve out_of_band $SDNVE_OUT_OF_BAND
-    fi
-
-    if [[ "$SDNVE_INTERFACE_MAPPINGS" != "" ]]; then
-        iniset /$Q_PLUGIN_CONF_FILE sdnve interface_mappings $SDNVE_INTERFACE_MAPPINGS
-    fi
-
-    if [[ "$SDNVE_FAKE_CONTROLLER" != "" ]]; then
-        iniset /$Q_PLUGIN_CONF_FILE sdnve use_fake_controller $SDNVE_FAKE_CONTROLLER
-    fi
-
-
-    iniset $NEUTRON_CONF DEFAULT notification_driver neutron.openstack.common.notifier.no_op_notifier
-
-}
-
-function neutron_plugin_configure_plugin_agent {
-    AGENT_BINARY="$NEUTRON_BIN_DIR/neutron-ibm-agent"
-}
-
-function neutron_plugin_configure_debug_command {
-    :
-}
-
-function neutron_plugin_setup_interface_driver {
-    return 0
-}
-
-function has_neutron_plugin_security_group {
-    # Does not support Security Groups
-    return 1
-}
-
-function neutron_ovs_base_cleanup {
-    if [[ "$SDNVE_RESET_BRIDGE" != False ]]; then
-        # remove all OVS ports that look like Neutron created ports
-        for port in $(sudo ovs-vsctl list port | grep -o -e tap[0-9a-f\-]* -e q[rg]-[0-9a-f\-]*); do
-            sudo ovs-vsctl del-port ${port}
-        done
-
-        # remove integration bridge created by Neutron
-        for bridge in $(sudo ovs-vsctl list-br | grep -o -e ${SDNVE_INTEGRATION_BRIDGE}); do
-            sudo ovs-vsctl del-br ${bridge}
-        done
-    fi
-}
-
-# Restore xtrace
-$IBM_XTRACE
diff --git a/lib/nova b/lib/nova
index 47c43bd..ba05f53 100644
--- a/lib/nova
+++ b/lib/nova
@@ -7,6 +7,7 @@
 #
 # - ``functions`` file
 # - ``DEST``, ``DATA_DIR``, ``STACK_USER`` must be defined
+# - ``FILES``
 # - ``SERVICE_{TENANT_NAME|PASSWORD}`` must be defined
 # - ``LIBVIRT_TYPE`` must be defined
 # - ``INSTANCE_NAME_PREFIX``, ``VOLUME_NAME_PREFIX`` must be defined
@@ -87,6 +88,7 @@
 NOVA_SERVICE_LISTEN_ADDRESS=${NOVA_SERVICE_LISTEN_ADDRESS:-$SERVICE_LISTEN_ADDRESS}
 EC2_SERVICE_PORT=${EC2_SERVICE_PORT:-8773}
 EC2_SERVICE_PORT_INT=${EC2_SERVICE_PORT_INT:-18773}
+METADATA_SERVICE_PORT=${METADATA_SERVICE_PORT:-8775}
 
 # Option to enable/disable config drive
 # NOTE: Set ``FORCE_CONFIG_DRIVE="False"`` to turn OFF config drive
@@ -241,6 +243,7 @@
     sudo rm -f $NOVA_WSGI_DIR/*
     sudo rm -f $(apache_site_config_for nova-api)
     sudo rm -f $(apache_site_config_for nova-ec2-api)
+    sudo rm -f $(apache_site_config_for nova-metadata)
 }
 
 # _config_nova_apache_wsgi() - Set WSGI config files of Keystone
@@ -251,11 +254,14 @@
     nova_apache_conf=$(apache_site_config_for nova-api)
     local nova_ec2_apache_conf
     nova_ec2_apache_conf=$(apache_site_config_for nova-ec2-api)
+    local nova_metadata_apache_conf
+    nova_metadata_apache_conf=$(apache_site_config_for nova-metadata)
     local nova_ssl=""
     local nova_certfile=""
     local nova_keyfile=""
     local nova_api_port=$NOVA_SERVICE_PORT
     local nova_ec2_api_port=$EC2_SERVICE_PORT
+    local nova_metadata_port=$METADATA_SERVICE_PORT
     local venv_path=""
 
     if is_ssl_enabled_service nova-api; then
@@ -270,6 +276,7 @@
     # copy proxy vhost and wsgi helper files
     sudo cp $NOVA_DIR/nova/wsgi/nova-api.py $NOVA_WSGI_DIR/nova-api
     sudo cp $NOVA_DIR/nova/wsgi/nova-ec2-api.py $NOVA_WSGI_DIR/nova-ec2-api
+    sudo cp $NOVA_DIR/nova/wsgi/nova-metadata.py $NOVA_WSGI_DIR/nova-metadata
 
     sudo cp $FILES/apache-nova-api.template $nova_apache_conf
     sudo sed -e "
@@ -296,6 +303,19 @@
         s|%VIRTUALENV%|$venv_path|g
         s|%APIWORKERS%|$API_WORKERS|g
     " -i $nova_ec2_apache_conf
+
+    sudo cp $FILES/apache-nova-metadata.template $nova_metadata_apache_conf
+    sudo sed -e "
+        s|%PUBLICPORT%|$nova_metadata_port|g;
+        s|%APACHE_NAME%|$APACHE_NAME|g;
+        s|%PUBLICWSGI%|$NOVA_WSGI_DIR/nova-metadata|g;
+        s|%SSLENGINE%|$nova_ssl|g;
+        s|%SSLCERTFILE%|$nova_certfile|g;
+        s|%SSLKEYFILE%|$nova_keyfile|g;
+        s|%USER%|$STACK_USER|g;
+        s|%VIRTUALENV%|$venv_path|g
+        s|%APIWORKERS%|$API_WORKERS|g
+    " -i $nova_metadata_apache_conf
 }
 
 # configure_nova() - Set config files, create data dirs, etc
@@ -797,9 +817,11 @@
     if [ -f ${enabled_site_file} ] && [ "$NOVA_USE_MOD_WSGI" == "True" ]; then
         enable_apache_site nova-api
         enable_apache_site nova-ec2-api
+        enable_apache_site nova-metadata
         restart_apache_server
         tail_log nova-api /var/log/$APACHE_NAME/nova-api.log
         tail_log nova-ec2-api /var/log/$APACHE_NAME/nova-ec2-api.log
+        tail_log nova-metadata /var/log/$APACHE_NAME/nova-metadata.log
     else
         run_process n-api "$NOVA_BIN_DIR/nova-api"
     fi
@@ -915,6 +937,7 @@
     if [ "$NOVA_USE_MOD_WSGI" == "True" ]; then
         disable_apache_site nova-api
         disable_apache_site nova-ec2-api
+        disable_apache_site nova-metadata
         restart_apache_server
     else
         stop_process n-api
diff --git a/lib/swift b/lib/swift
index 3a8e80d..ee0238d 100644
--- a/lib/swift
+++ b/lib/swift
@@ -813,11 +813,13 @@
 }
 
 function swift_configure_tempurls {
+    # note we are using swift credentials!
     OS_USERNAME=swift \
-        OS_PROJECT_NAME=$SERVICE_TENANT_NAME \
-        OS_PASSWORD=$SERVICE_PASSWORD \
-        OS_AUTH_URL=$SERVICE_ENDPOINT \
-        swift post --auth-version 3 -m "Temp-URL-Key: $SWIFT_TEMPURL_KEY"
+    OS_PASSWORD=$SERVICE_PASSWORD \
+    OS_PROJECT_NAME=$SERVICE_TENANT_NAME \
+    OS_AUTH_URL=$SERVICE_ENDPOINT \
+    openstack object store account \
+        set --property "Temp-URL-Key=$SWIFT_TEMPURL_KEY"
 }
 
 # Restore xtrace
diff --git a/lib/tempest b/lib/tempest
index 32630db..691ad86 100644
--- a/lib/tempest
+++ b/lib/tempest
@@ -377,6 +377,15 @@
         iniset $TEMPEST_CONFIG compute-feature-enabled shelve False
         # Cells doesn't support hot-plugging virtual interfaces.
         iniset $TEMPEST_CONFIG compute-feature-enabled interface_attach False
+
+        if [[ -z "$DEFAULT_INSTANCE_TYPE" ]]; then
+            # Cells supports resize but does not currently work with devstack
+            # because of the custom flavors created for Tempest runs which are
+            # not in the cells database.
+            # TODO(mriedem): work on adding a nova-manage command to sync
+            # flavors into the cells database.
+            iniset $TEMPEST_CONFIG compute-feature-enabled resize False
+        fi
     fi
 
     # Network
diff --git a/lib/zookeeper b/lib/zookeeper
deleted file mode 100644
index e62ba8a..0000000
--- a/lib/zookeeper
+++ /dev/null
@@ -1,86 +0,0 @@
-#!/bin/bash
-#
-# lib/zookeeper
-# Functions to control the installation and configuration of **zookeeper**
-
-# Dependencies:
-#
-# - ``functions`` file
-
-# ``stack.sh`` calls the entry points in this order:
-#
-# - is_zookeeper_enabled
-# - install_zookeeper
-# - configure_zookeeper
-# - init_zookeeper
-# - start_zookeeper
-# - stop_zookeeper
-# - cleanup_zookeeper
-
-# Save trace setting
-XTRACE=$(set +o | grep xtrace)
-set +o xtrace
-
-
-# Defaults
-# --------
-
-# <define global variables here that belong to this project>
-
-# Set up default directories
-ZOOKEEPER_DATA_DIR=$DEST/data/zookeeper
-ZOOKEEPER_CONF_DIR=/etc/zookeeper
-
-
-# Entry Points
-# ------------
-
-# Test if any zookeeper service us enabled
-# is_zookeeper_enabled
-function is_zookeeper_enabled {
-    [[ ,${ENABLED_SERVICES}, =~ ,"zookeeper", ]] && return 0
-    return 1
-}
-
-# cleanup_zookeeper() - Remove residual data files, anything left over from previous
-# runs that a clean run would need to clean up
-function cleanup_zookeeper {
-    sudo rm -rf $ZOOKEEPER_DATA_DIR
-}
-
-# configure_zookeeper() - Set config files, create data dirs, etc
-function configure_zookeeper {
-    sudo cp $FILES/zookeeper/* $ZOOKEEPER_CONF_DIR
-    sudo sed -i -e 's|.*dataDir.*|dataDir='$ZOOKEEPER_DATA_DIR'|' $ZOOKEEPER_CONF_DIR/zoo.cfg
-}
-
-# init_zookeeper() - Initialize databases, etc.
-function init_zookeeper {
-    # clean up from previous (possibly aborted) runs
-    # create required data files
-    sudo rm -rf $ZOOKEEPER_DATA_DIR
-    sudo mkdir -p $ZOOKEEPER_DATA_DIR
-}
-
-# install_zookeeper() - Collect source and prepare
-function install_zookeeper {
-    install_package zookeeperd
-}
-
-# start_zookeeper() - Start running processes, including screen
-function start_zookeeper {
-    start_service zookeeper
-}
-
-# stop_zookeeper() - Stop running processes (non-screen)
-function stop_zookeeper {
-    stop_service zookeeper
-}
-
-# Restore xtrace
-$XTRACE
-
-# Tell emacs to use shell-script-mode
-## Local variables:
-## mode: shell-script
-## End:
diff --git a/samples/local.conf b/samples/local.conf
index cb293b6..b92097d 100644
--- a/samples/local.conf
+++ b/samples/local.conf
@@ -93,9 +93,3 @@
 # moved by setting ``SWIFT_DATA_DIR``. The directory will be created
 # if it does not exist.
 SWIFT_DATA_DIR=$DEST/data
-
-# Tempest
-# -------
-
-# Install the tempest test suite
-enable_service tempest
diff --git a/stack.sh b/stack.sh
index 103e7e9..9b811b7 100755
--- a/stack.sh
+++ b/stack.sh
@@ -93,6 +93,20 @@
     exit 1
 fi
 
+# OpenStack is designed to run at a system level, with system level
+# installation of python packages. It does not support running under a
+# virtual env, and will fail in really odd ways if you do this. Make
+# this explicit as it has come up on the mailing list.
+if [[ -n "$VIRTUAL_ENV" ]]; then
+    echo "You appear to be running under a python virtualenv."
+    echo "DevStack does not support this, as we may break the"
+    echo "virtualenv you are currently in by modifying "
+    echo "external system-level components the virtualenv relies on."
+    echo "We recommend you use a separate virtual machine if "
+    echo "you are worried about DevStack taking over your system."
+    exit 1
+fi
+
 # Provide a safety switch for devstack. If you do a lot of devstack,
 # on a lot of different environments, you sometimes run it on the
 # wrong box. This makes there be a way to prevent that.
@@ -178,7 +192,7 @@
 
 # Warn users who aren't on an explicitly supported distro, but allow them to
 # override check and attempt installation with ``FORCE=yes ./stack``
-if [[ ! ${DISTRO} =~ (precise|trusty|vivid|7.0|wheezy|sid|testing|jessie|f21|f22|rhel7) ]]; then
+if [[ ! ${DISTRO} =~ (precise|trusty|vivid|wily|7.0|wheezy|sid|testing|jessie|f21|f22|rhel7) ]]; then
     echo "WARNING: this script has not been tested on $DISTRO"
     if [[ "$FORCE" != "yes" ]]; then
         die $LINENO "If you wish to run this script anyway run with FORCE=yes"
@@ -321,6 +335,10 @@
     sudo sed -i "s/\(^127.0.0.1.*\)/\1 $LOCAL_HOSTNAME/" /etc/hosts
 fi
 
+# Ensure python is installed
+# --------------------------
+is_package_installed python || install_package python
+
 
 # Configure Logging
 # -----------------
@@ -539,7 +557,7 @@
 source $TOP_DIR/lib/neutron-legacy
 source $TOP_DIR/lib/ldap
 source $TOP_DIR/lib/dstat
-source $TOP_DIR/lib/zookeeper
+source $TOP_DIR/lib/dlm
 
 # Extras Source
 # --------------
@@ -724,11 +742,10 @@
 
 install_rpc_backend
 
-if is_service_enabled zookeeper; then
-    cleanup_zookeeper
-    configure_zookeeper
-    init_zookeeper
-fi
+# NOTE(sdague): dlm install is conditional on one being enabled by configuration
+install_dlm
+configure_dlm
+
 if is_service_enabled $DATABASE_BACKENDS; then
     install_database
 fi
@@ -968,15 +985,6 @@
 start_dstat
 
 
-# Zookeeper
-# -----
-
-# zookeeper for use with tooz for Distributed Lock Management capabilities etc.,
-if is_service_enabled zookeeper; then
-    start_zookeeper
-fi
-
-
 # Keystone
 # --------
 
diff --git a/stackrc b/stackrc
index 3033b27..f400047 100644
--- a/stackrc
+++ b/stackrc
@@ -69,7 +69,7 @@
     # Dashboard
     ENABLED_SERVICES+=,horizon
     # Additional services
-    ENABLED_SERVICES+=,rabbit,tempest,mysql,dstat,zookeeper
+    ENABLED_SERVICES+=,rabbit,tempest,mysql,dstat
 fi
 
 # SQLAlchemy supports multiple database drivers for each database server
@@ -101,7 +101,17 @@
 # ctrl-c, up-arrow, enter to restart the service. Starting services
 # this way is slightly unreliable, and a bit slower, so this can
 # be disabled for automated testing by setting this value to False.
-USE_SCREEN=True
+USE_SCREEN=$(trueorfalse True USE_SCREEN)
+
+# When using screen, should we keep a log file on disk?  You might
+# want this False if you have a long-running setup where verbose logs
+# can fill up the host.
+# XXX: Ideally screen itself would be configured to log but just not
+# activate.  This isn't possible with the screenrc syntax.  Temporary
+# logging can still be used by a developer with:
+#    C-a : logfile foo
+#    C-a : log on
+SCREEN_IS_LOGGING=$(trueorfalse True SCREEN_IS_LOGGING)
 
 # Passwords generated by interactive devstack runs
 if [[ -r $RC_DIR/.localrc.password ]]; then
@@ -641,9 +651,6 @@
 PRIVATE_NETWORK_NAME=${PRIVATE_NETWORK_NAME:-"private"}
 PUBLIC_NETWORK_NAME=${PUBLIC_NETWORK_NAME:-"public"}
 
-# Compatibility until it's eradicated from CI
-USE_SCREEN=${SCREEN_DEV:-$USE_SCREEN}
-
 # Set default screen name
 SCREEN_NAME=${SCREEN_NAME:-stack}
 
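With both knobs now run through ``trueorfalse``, they can be set like any other boolean in the ``localrc`` section of ``local.conf``; for example, a long-running host might use (a sketch, not a required configuration):

```shell
# Keep the screen session but skip per-service log files on disk
USE_SCREEN=True
SCREEN_IS_LOGGING=False
```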
diff --git a/tools/install_pip.sh b/tools/install_pip.sh
index 13c1786..ab5efb2 100755
--- a/tools/install_pip.sh
+++ b/tools/install_pip.sh
@@ -110,7 +110,11 @@
 # Do pip
 
 # Eradicate any and all system packages
-uninstall_package python-pip
+
+# python in f23 depends on the python-pip package
+if ! { is_fedora && [[ $DISTRO == "f23" ]]; }; then
+    uninstall_package python-pip
+fi
 
 install_get_pip
 
diff --git a/tools/worlddump.py b/tools/worlddump.py
index 1b337a9..97e4d94 100755
--- a/tools/worlddump.py
+++ b/tools/worlddump.py
@@ -86,8 +86,10 @@
 
 
 def ebtables_dump():
+    tables = ['filter', 'nat', 'broute']
     _header("EB Tables Dump")
-    _dump_cmd("sudo ebtables -L")
+    for table in tables:
+        _dump_cmd("sudo ebtables -t %s -L" % table)
 
 
 def iptables_dump():
diff --git a/unstack.sh b/unstack.sh
index 0cace32..8eded83 100755
--- a/unstack.sh
+++ b/unstack.sh
@@ -69,7 +69,7 @@
 source $TOP_DIR/lib/neutron-legacy
 source $TOP_DIR/lib/ldap
 source $TOP_DIR/lib/dstat
-source $TOP_DIR/lib/zookeeper
+source $TOP_DIR/lib/dlm
 
 # Extras Source
 # --------------