Merge "VMware: add support for simple DVS"
diff --git a/MAINTAINERS.rst b/MAINTAINERS.rst
index d3e8c67..eeb1f21 100644
--- a/MAINTAINERS.rst
+++ b/MAINTAINERS.rst
@@ -45,6 +45,13 @@
 Neutron
 ~~~~~~~
 
+MidoNet
+~~~~~~~
+
+* Jaume Devesa <devvesa@gmail.com>
+* Ryu Ishimoto <ryu@midokura.com>
+* YAMAMOTO Takashi <yamamoto@midokura.com>
+
 OpenDaylight
 ~~~~~~~~~~~~
 
diff --git a/doc/source/conf.py b/doc/source/conf.py
index 3e9aa45..6e3ec02 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -26,7 +26,7 @@
 
 # Add any Sphinx extension module names here, as strings. They can be extensions
 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
-extensions = [ 'oslosphinx' ]
+extensions = [ 'oslosphinx', 'sphinxcontrib.blockdiag', 'sphinxcontrib.nwdiag' ]
 
 todo_include_todos = True
 
diff --git a/doc/source/configuration.rst b/doc/source/configuration.rst
index 1cc7083..8e2e7ff 100644
--- a/doc/source/configuration.rst
+++ b/doc/source/configuration.rst
@@ -391,7 +391,7 @@
         ENABLED_SERVICES=n-vol,n-cpu,n-net,n-api
 
 IP Version
-    | Default: ``IP_VERSION=4``
+    | Default: ``IP_VERSION=4+6``
     | This setting can be used to configure DevStack to create either an IPv4,
       IPv6, or dual stack tenant data network by setting ``IP_VERSION`` to
       either ``IP_VERSION=4``, ``IP_VERSION=6``, or ``IP_VERSION=4+6``
diff --git a/doc/source/faq.rst b/doc/source/faq.rst
index d3b491f..87f8469 100644
--- a/doc/source/faq.rst
+++ b/doc/source/faq.rst
@@ -2,151 +2,142 @@
 FAQ
 ===
 
--  `General Questions <#general>`__
--  `Operation and Configuration <#ops_conf>`__
--  `Miscellaneous <#misc>`__
+.. contents::
+   :local:
 
 General Questions
 =================
 
-Q: Can I use DevStack for production?
-    A: No. We mean it. Really. DevStack makes some implementation
-    choices that are not appropriate for production deployments. We
-    warned you!
-Q: Then why selinux in enforcing mode?
-    A: That is the default on current Fedora and RHEL releases. DevStack
-    has (rightly so) a bad reputation for its security practices; it has
-    always been meant as a development tool first and system integration
-    later. This is changing as the security issues around OpenStack's
-    use of root (for example) have been tightened and developers need to
-    be better equipped to work in these environments. ``stack.sh``'s use
-    of root is primarily to support the activities that would be handled
-    by packaging in "real" deployments. To remove additional protections
-    that will be desired/required in production would be a step
-    backward.
-Q: But selinux is disabled in RHEL!
-    A: Today it is, yes. That is a specific exception that certain
-    DevStack contributors fought strongly against. The primary reason it
-    was allowed was to support using RHEL6 as the Python 2.6 test
-    platform and that took priority time-wise. This will not be the case
-    with RHEL 7.
-Q: Why a shell script, why not chef/puppet/...
-    A: The script is meant to be read by humans (as well as ran by
-    computers); it is the primary documentation after all. Using a
-    recipe system requires everyone to agree and understand chef or
-    puppet.
-Q: Why not use Crowbar?
-    A: DevStack is optimized for documentation & developers. As some of
-    us use `Crowbar <https://github.com/dellcloudedge/crowbar>`__ for
-    production deployments, we hope developers documenting how they
-    setup systems for new features supports projects like Crowbar.
-Q: I'd like to help!
-    A: That isn't a question, but please do! The source for DevStack is
-    at
-    `git.openstack.org <https://git.openstack.org/cgit/openstack-dev/devstack>`__
-    and bug reports go to
-    `LaunchPad <http://bugs.launchpad.net/devstack/>`__. Contributions
-    follow the usual process as described in the `developer
-    guide <http://docs.openstack.org/infra/manual/developers.html>`__. This Sphinx
-    documentation is housed in the doc directory.
-Q: Why not use packages?
-    A: Unlike packages, DevStack leaves your cloud ready to develop -
-    checkouts of the code and services running in screen. However, many
-    people are doing the hard work of packaging and recipes for
-    production deployments. We hope this script serves as a way to
-    communicate configuration changes between developers and packagers.
-Q: Why isn't $MY\_FAVORITE\_DISTRO supported?
-    A: DevStack is meant for developers and those who want to see how
-    OpenStack really works. DevStack is known to run on the
-    distro/release combinations listed in ``README.md``. DevStack is
-    only supported on releases other than those documented in
-    ``README.md`` on a best-effort basis.
-Q: What about Fedora/RHEL/CentOS?
-    A: Fedora and CentOS/RHEL are supported via rpm dependency files and
-    specific checks in ``stack.sh``. Support will follow the pattern set
-    with the Ubuntu testing, i.e. only a single release of the distro
-    will receive regular testing, others will be handled on a
-    best-effort basis.
-Q: Are there any differences between Ubuntu and Fedora support?
-    A: Neutron is not fully supported prior to Fedora 18 due lack of
-    OpenVSwitch packages.
-Q: Why can't I use another shell?
-    A: DevStack now uses some specific bash-ism that require Bash 4, such
-    as associative arrays. Simple compatibility patches have been accepted
-    in the past when they are not complex, at this point no additional
-    compatibility patches will be considered except for shells matching
-    the array functionality as it is very ingrained in the repo and project
-    management.
-Q: But, but, can't I test on OS/X?
-   A: Yes, even you, core developer who complained about this, needs to
-   install bash 4 via homebrew to keep running tests on OS/X.  Get a Real
-   Operating System.   (For most of you who don't know, I am referring to
-   myself.)
+Can I use DevStack for production?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+DevStack is targeted at developers and CI systems to use the raw
+upstream code.  It makes many choices that are not appropriate for
+production systems.
+
+Your best choice is probably to choose a `distribution of OpenStack
+<https://www.openstack.org/marketplace/distros/distribution>`__.
+
+Why a shell script, why not chef/puppet/...
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The script is meant to be read by humans (as well as run by
+computers); it is the primary documentation after all. Using a recipe
+system requires everyone to agree and understand chef or puppet.
+
+I'd like to help!
+~~~~~~~~~~~~~~~~~
+
+That isn't a question, but please do! The source for DevStack is at
+`git.openstack.org
+<https://git.openstack.org/cgit/openstack-dev/devstack>`__ and bug
+reports go to `LaunchPad
+<http://bugs.launchpad.net/devstack/>`__. Contributions follow the
+usual process as described in the `developer guide
+<http://docs.openstack.org/infra/manual/developers.html>`__. This
+Sphinx documentation is housed in the doc directory.
+
+Why not use packages?
+~~~~~~~~~~~~~~~~~~~~~
+
+Unlike packages, DevStack leaves your cloud ready to develop -
+checkouts of the code and services running in screen. However, many
+people are doing the hard work of packaging and recipes for production
+deployments.
+
+Why isn't $MY\_FAVORITE\_DISTRO supported?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+DevStack is meant for developers and those who want to see how
+OpenStack really works. DevStack is known to run on the distro/release
+combinations listed in ``README.md``. Releases other than those
+documented in ``README.md`` are supported only on a best-effort
+basis.
+
+Are there any differences between Ubuntu and CentOS/Fedora support?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Both should work well and are tested by DevStack CI.
+
+Why can't I use another shell?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+DevStack now uses some specific bash-isms that require Bash 4, such as
+associative arrays. Simple compatibility patches were accepted in the
+past when they were not complex, but at this point no additional
+compatibility patches will be considered except for shells matching
+the array functionality, as it is very ingrained in the repo and
+project management.
+
+Can I test on OS/X?
+~~~~~~~~~~~~~~~~~~~
+
+Some people have success with bash 4 installed via homebrew to keep
+running tests on OS/X.
 
 Operation and Configuration
 ===========================
 
-Q: Can DevStack handle a multi-node installation?
-    A: Indirectly, yes. You run DevStack on each node with the
-    appropriate configuration in ``local.conf``. The primary
-    considerations are turning off the services not required on the
-    secondary nodes, making sure the passwords match and setting the
-    various API URLs to the right place.
-Q: How can I document the environment that DevStack is using?
-    A: DevStack includes a script (``tools/info.sh``) that gathers the
-    versions of the relevant installed apt packages, pip packages and
-    git repos. This is a good way to verify what Python modules are
-    installed.
-Q: How do I turn off a service that is enabled by default?
-    A: Services can be turned off by adding ``disable_service xxx`` to
-    ``local.conf`` (using ``n-vol`` in this example):
+Can DevStack handle a multi-node installation?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Yes, see the :doc:`multinode lab guide <guides/multinode-lab>`.
+
+How can I document the environment that DevStack is using?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+DevStack includes a script (``tools/info.sh``) that gathers the
+versions of the relevant installed apt packages, pip packages and git
+repos. This is a good way to verify what Python modules are
+installed.
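+
+For example, run it from the top of the DevStack tree and capture the
+output (the file name here is arbitrary):
+
+    ::
+
+        ./tools/info.sh > devstack-env.txt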
+
+How do I turn off a service that is enabled by default?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Services can be turned off by adding ``disable_service xxx`` to
+``local.conf`` (using ``n-vol`` in this example):
 
     ::
 
         disable_service n-vol
 
-Q: Is enabling a service that defaults to off done with the reverse of the above?
-    A: Of course!
+Is enabling a service that defaults to off done with the reverse of the above?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Of course!
 
     ::
 
         enable_service qpid
 
-Q: How do I run a specific OpenStack milestone?
-    A: OpenStack milestones have tags set in the git repo. Set the appropriate tag in the ``*_BRANCH`` variables in ``local.conf``.  Swift is on its own release schedule so pick a tag in the Swift repo that is just before the milestone release. For example:
+How do I run a specific OpenStack milestone?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+OpenStack milestones have tags set in the git repo. Set the
+appropriate tag in the ``*_BRANCH`` variables in ``local.conf``.
+Swift is on its own release schedule so pick a tag in the Swift repo
+that is just before the milestone release. For example:
 
     ::
 
         [[local|localrc]]
-        GLANCE_BRANCH=stable/juno
-        HORIZON_BRANCH=stable/juno
-        KEYSTONE_BRANCH=stable/juno
-        NOVA_BRANCH=stable/juno
-        GLANCE_BRANCH=stable/juno
-        NEUTRON_BRANCH=stable/juno
-        SWIFT_BRANCH=2.2.1
+        GLANCE_BRANCH=stable/kilo
+        HORIZON_BRANCH=stable/kilo
+        KEYSTONE_BRANCH=stable/kilo
+        NOVA_BRANCH=stable/kilo
+        GLANCE_BRANCH=stable/kilo
+        NEUTRON_BRANCH=stable/kilo
+        SWIFT_BRANCH=2.3.0
 
-Q: Why not use [STRIKEOUT:``tools/pip-requires``]\ ``requirements.txt`` to grab project dependencies?
-    [STRIKEOUT:The majority of deployments will use packages to install
-    OpenStack that will have distro-based packages as dependencies.
-    DevStack installs as many of these Python packages as possible to
-    mimic the expected production environment.] Certain Linux
-    distributions have a 'lack of workaround' in their Python
-    configurations that installs vendor packaged Python modules and
-    pip-installed modules to the SAME DIRECTORY TREE. This is causing
-    heartache and moving us in the direction of installing more modules
-    from PyPI than vendor packages. However, that is only being done as
-    necessary as the packaging needs to catch up to the development
-    cycle anyway so this is kept to a minimum.
-Q: What can I do about RabbitMQ not wanting to start on my fresh new VM?
-    A: This is often caused by ``erlang`` not being happy with the
-    hostname resolving to a reachable IP address. Make sure your
-    hostname resolves to a working IP address; setting it to 127.0.0.1
-    in ``/etc/hosts`` is often good enough for a single-node
-    installation. And in an extreme case, use ``clean.sh`` to eradicate
-    it and try again.
-Q: How can I set up Heat in stand-alone configuration?
-    A: Configure ``local.conf`` thusly:
+What can I do about RabbitMQ not wanting to start on my fresh new VM?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This is often caused by ``erlang`` not being happy with the hostname
+resolving to a reachable IP address. Make sure your hostname resolves
+to a working IP address; setting it to 127.0.0.1 in ``/etc/hosts`` is
+often good enough for a single-node installation. And in an extreme
+case, use ``clean.sh`` to eradicate it and try again.
+
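+A single-node example of such a hosts entry (the hostname here is
+illustrative) might be:
+
+    ::
+
+        # /etc/hosts
+        127.0.0.1   localhost mydevstack
+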
+How can I set up Heat in stand-alone configuration?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure ``local.conf`` thusly:
 
     ::
 
@@ -156,22 +147,25 @@
         KEYSTONE_SERVICE_HOST=<keystone-host>
         KEYSTONE_AUTH_HOST=<keystone-host>
 
-Q: Why are my configuration changes ignored?
-    A: You may have run into the package prerequisite installation
-    timeout. ``tools/install_prereqs.sh`` has a timer that skips the
-    package installation checks if it was run within the last
-    ``PREREQ_RERUN_HOURS`` hours (default is 2). To override this, set
-    ``FORCE_PREREQ=1`` and the package checks will never be skipped.
+Why are my configuration changes ignored?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+You may have run into the package prerequisite installation
+timeout. ``tools/install_prereqs.sh`` has a timer that skips the
+package installation checks if it was run within the last
+``PREREQ_RERUN_HOURS`` hours (default is 2). To override this, set
+``FORCE_PREREQ=1`` and the package checks will never be skipped.
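+
+For example, one way to do this is via the ``localrc`` section of
+``local.conf``:
+
+    ::
+
+        [[local|localrc]]
+        FORCE_PREREQ=1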
 
 Miscellaneous
 =============
 
-Q: ``tools/fixup_stuff.sh`` is broken and shouldn't 'fix' just one version of packages.
-    A: [Another not-a-question] No it isn't. Stuff in there is to
-    correct problems in an environment that need to be fixed elsewhere
-    or may/will be fixed in a future release. In the case of
-    ``httplib2`` and ``prettytable`` specific problems with specific
-    versions are being worked around. If later releases have those
-    problems than we'll add them to the script. Knowing about the broken
-    future releases is valuable rather than polling to see if it has
-    been fixed.
+``tools/fixup_stuff.sh`` is broken and shouldn't 'fix' just one version of packages.
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Stuff in there is to correct problems in an environment that need to
+be fixed elsewhere or may/will be fixed in a future release. In the
+case of ``httplib2`` and ``prettytable``, specific problems with
+specific versions are being worked around. If later releases have
+those problems then we'll add them to the script. Knowing about the
+broken future releases is valuable rather than polling to see if it
+has been fixed.
diff --git a/doc/source/guides/multinode-lab.rst b/doc/source/guides/multinode-lab.rst
index b2617c9..27d71f1 100644
--- a/doc/source/guides/multinode-lab.rst
+++ b/doc/source/guides/multinode-lab.rst
@@ -178,7 +178,7 @@
     MYSQL_HOST=192.168.42.11
     RABBIT_HOST=192.168.42.11
     GLANCE_HOSTPORT=192.168.42.11:9292
-    ENABLED_SERVICES=n-cpu,n-net,n-api,c-sch,c-api,c-vol
+    ENABLED_SERVICES=n-cpu,n-net,n-api,c-vol
     NOVA_VNC_ENABLED=True
     NOVNCPROXY_URL="http://192.168.42.11:6080/vnc_auto.html"
     VNCSERVER_LISTEN=$HOST_IP
diff --git a/doc/source/guides/neutron.rst b/doc/source/guides/neutron.rst
index 3030c7b..bdfd3a4 100644
--- a/doc/source/guides/neutron.rst
+++ b/doc/source/guides/neutron.rst
@@ -5,11 +5,77 @@
 This guide will walk you through using OpenStack neutron with the ML2
 plugin and the Open vSwitch mechanism driver.
 
-Network Interface Configuration
-===============================
 
-To use neutron, it is suggested that two network interfaces be present
-in the host operating system.
+Using Neutron with a Single Interface
+=====================================
+
+In some instances, like on a developer laptop, there is only one
+network interface that is available. In this scenario, the physical
+interface is added to the Open vSwitch bridge, and the IP address of
+the laptop is migrated onto the bridge interface. That way, the
+physical interface can be used to transmit tenant network traffic,
+the OpenStack API traffic, and management traffic.
+
+
+Physical Network Setup
+----------------------
+
+In most cases where DevStack is being deployed with a single
+interface, there is a hardware router that is being used for external
+connectivity and DHCP. The developer machine is connected to this
+network and is on a shared subnet with other machines.
+
+.. nwdiag::
+
+        nwdiag {
+                inet [ shape = cloud ];
+                router;
+                inet -- router;
+
+                network hardware_network {
+                        address = "172.18.161.0/24"
+                        router [ address = "172.18.161.1" ];
+                        devstack_laptop [ address = "172.18.161.6" ];
+                }
+        }
+
+
+DevStack Configuration
+----------------------
+
+
+::
+
+        HOST_IP=172.18.161.6
+        SERVICE_HOST=172.18.161.6
+        MYSQL_HOST=172.18.161.6
+        RABBIT_HOST=172.18.161.6
+        GLANCE_HOSTPORT=172.18.161.6:9292
+        ADMIN_PASSWORD=secrete
+        MYSQL_PASSWORD=secrete
+        RABBIT_PASSWORD=secrete
+        SERVICE_PASSWORD=secrete
+        SERVICE_TOKEN=secrete
+
+        ## Neutron options
+        Q_USE_SECGROUP=True
+        FLOATING_RANGE="172.18.161.1/24"
+        FIXED_RANGE="10.0.0.0/24"
+        Q_FLOATING_ALLOCATION_POOL=start=172.18.161.250,end=172.18.161.254
+        PUBLIC_NETWORK_GATEWAY="172.18.161.1"
+        Q_L3_ENABLED=True
+        PUBLIC_INTERFACE=eth0
+        Q_USE_PROVIDERNET_FOR_PUBLIC=True
+        OVS_PHYSICAL_BRIDGE=br-ex
+        PUBLIC_BRIDGE=br-ex
+        OVS_BRIDGE_MAPPINGS=public:br-ex
+
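+After ``stack.sh`` completes, you can confirm that the physical
+interface was moved onto the Open vSwitch bridge. A quick sanity
+check, using the bridge and interface names from the configuration
+above, might look like:
+
+::
+
+        sudo ovs-vsctl list-ports br-ex    # eth0 should appear as a port
+        ip addr show br-ex                 # the host IP now lives on the bridge
+
+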
+Using Neutron with Multiple Interfaces
+======================================
 
 The first interface, eth0 is used for the OpenStack management (API,
 message bus, etc) as well as for ssh for an administrator to access
@@ -131,6 +197,11 @@
 subnet that exists in the private RFC1918 address space - however in
 a real setup FLOATING_RANGE would be a public IP address range.
 
+Note that the extension drivers for the ML2 plugin are set by
+``Q_ML2_PLUGIN_EXT_DRIVERS``, which includes 'port_security' by default. If
+you want to remove all the extension drivers (even 'port_security'), set
+``Q_ML2_PLUGIN_EXT_DRIVERS`` to blank.
+
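+For example, an illustrative ``local.conf`` fragment that disables all
+extension drivers would be:
+
+::
+
+        [[local|localrc]]
+        Q_ML2_PLUGIN_EXT_DRIVERS=
+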
 Neutron Networking with Open vSwitch and Provider Networks
 ==========================================================
 
diff --git a/files/apache-ceilometer.template b/files/apache-ceilometer.template
index 1c57b32..79f14c3 100644
--- a/files/apache-ceilometer.template
+++ b/files/apache-ceilometer.template
@@ -1,7 +1,7 @@
 Listen %PORT%
 
 <VirtualHost *:%PORT%>
-    WSGIDaemonProcess ceilometer-api processes=2 threads=10 user=%USER% display-name=%{GROUP}
+    WSGIDaemonProcess ceilometer-api processes=2 threads=10 user=%USER% display-name=%{GROUP} %VIRTUALENV%
     WSGIProcessGroup ceilometer-api
     WSGIScriptAlias / %WSGIAPP%
     WSGIApplicationGroup %{GLOBAL}
diff --git a/files/apache-nova-api.template b/files/apache-nova-api.template
index 70ccedd..301a3bd 100644
--- a/files/apache-nova-api.template
+++ b/files/apache-nova-api.template
@@ -1,7 +1,7 @@
 Listen %PUBLICPORT%
 
 <VirtualHost *:%PUBLICPORT%>
-    WSGIDaemonProcess nova-api processes=5 threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
+    WSGIDaemonProcess nova-api processes=%APIWORKERS% threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
     WSGIProcessGroup nova-api
     WSGIScriptAlias / %PUBLICWSGI%
     WSGIApplicationGroup %{GLOBAL}
@@ -13,4 +13,4 @@
     %SSLENGINE%
     %SSLCERTFILE%
     %SSLKEYFILE%
-</VirtualHost>
\ No newline at end of file
+</VirtualHost>
diff --git a/files/apache-nova-ec2-api.template b/files/apache-nova-ec2-api.template
index ae4cf94..235d958 100644
--- a/files/apache-nova-ec2-api.template
+++ b/files/apache-nova-ec2-api.template
@@ -1,7 +1,7 @@
 Listen %PUBLICPORT%
 
 <VirtualHost *:%PUBLICPORT%>
-    WSGIDaemonProcess nova-ec2-api processes=5 threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
+    WSGIDaemonProcess nova-ec2-api processes=%APIWORKERS% threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
     WSGIProcessGroup nova-ec2-api
     WSGIScriptAlias / %PUBLICWSGI%
     WSGIApplicationGroup %{GLOBAL}
@@ -13,4 +13,4 @@
     %SSLENGINE%
     %SSLCERTFILE%
     %SSLKEYFILE%
-</VirtualHost>
\ No newline at end of file
+</VirtualHost>
diff --git a/files/rpms-suse/ceilometer-collector b/files/rpms-suse/ceilometer-collector
index c76454f..5e4dfcc 100644
--- a/files/rpms-suse/ceilometer-collector
+++ b/files/rpms-suse/ceilometer-collector
@@ -1,4 +1,3 @@
 # Not available in openSUSE main repositories, but can be fetched from OBS
 # (devel:languages:python and server:database projects)
 mongodb
-python-pymongo
diff --git a/files/rpms-suse/devlibs b/files/rpms-suse/devlibs
index c923825..bdb630a 100644
--- a/files/rpms-suse/devlibs
+++ b/files/rpms-suse/devlibs
@@ -3,4 +3,5 @@
 libxml2-devel  # lxml
 libxslt-devel  # lxml
 postgresql-devel  # psycopg2
+libmysqlclient-devel # MySQL-python
 python-devel  # pyOpenSSL
diff --git a/files/rpms-suse/glance b/files/rpms-suse/glance
index 9b962f9..0e58425 100644
--- a/files/rpms-suse/glance
+++ b/files/rpms-suse/glance
@@ -1,11 +1,2 @@
 libxml2-devel
-python-PasteDeploy
-python-Routes
-python-SQLAlchemy
-python-argparse
 python-devel
-python-eventlet
-python-greenlet
-python-iso8601
-python-pyOpenSSL
-python-xattr
diff --git a/files/rpms-suse/horizon b/files/rpms-suse/horizon
index c45eae6..77f7c34 100644
--- a/files/rpms-suse/horizon
+++ b/files/rpms-suse/horizon
@@ -1,16 +1,2 @@
 apache2  # NOPRIME
 apache2-mod_wsgi  # NOPRIME
-python-CherryPy # why? (coming from apts)
-python-Paste
-python-PasteDeploy
-python-Routes
-python-SQLAlchemy
-python-WebOb
-python-anyjson
-python-beautifulsoup
-python-coverage
-python-dateutil
-python-eventlet
-python-mox
-python-sqlalchemy-migrate
-python-xattr
diff --git a/files/rpms-suse/keystone b/files/rpms-suse/keystone
index 4c37ade..c838b41 100644
--- a/files/rpms-suse/keystone
+++ b/files/rpms-suse/keystone
@@ -1,15 +1,4 @@
 cyrus-sasl-devel
 openldap2-devel
-python-Paste
-python-PasteDeploy
-python-PasteScript
-python-Routes
-python-SQLAlchemy
-python-WebOb
 python-devel
-python-greenlet
-python-lxml
-python-mysql
-python-mysql-connector-python
-python-pysqlite
 sqlite3
diff --git a/files/rpms-suse/neutron b/files/rpms-suse/neutron
index d278363..e75db89 100644
--- a/files/rpms-suse/neutron
+++ b/files/rpms-suse/neutron
@@ -6,17 +6,6 @@
 iputils
 mariadb # NOPRIME
 postgresql-devel
-python-eventlet
-python-greenlet
-python-iso8601
-python-mysql
-python-mysql-connector-python
-python-Paste
-python-PasteDeploy
-python-pyudev
-python-Routes
-python-SQLAlchemy
-python-suds
 rabbitmq-server # NOPRIME
 sqlite3
 sudo
@@ -24,5 +13,4 @@
 radvd # NOPRIME
 
 # FIXME: qpid is not part of openSUSE, those names are tentative
-python-qpid # NOPRIME
 qpidd # NOPRIME
diff --git a/files/rpms-suse/nova b/files/rpms-suse/nova
index b1c4f6a..6f8aef1 100644
--- a/files/rpms-suse/nova
+++ b/files/rpms-suse/nova
@@ -16,29 +16,7 @@
 mariadb # NOPRIME
 parted
 polkit
-python-M2Crypto
-python-m2crypto # dist:sle11sp2
-python-Paste
-python-PasteDeploy
-python-Routes
-python-SQLAlchemy
-python-Tempita
-python-cheetah
-python-eventlet
-python-feedparser
-python-greenlet
-python-iso8601
-python-libxml2
-python-lockfile
-python-lxml # needed for glance which is needed for nova --- this shouldn't be here
-python-mox
-python-mysql
-python-mysql-connector-python
-python-numpy # needed by websockify for spice console
-python-paramiko
-python-sqlalchemy-migrate
-python-suds
-python-xattr # needed for glance which is needed for nova --- this shouldn't be here
+python-devel
 rabbitmq-server # NOPRIME
 socat
 sqlite3
diff --git a/files/rpms-suse/q-l3 b/files/rpms-suse/q-l3
new file mode 100644
index 0000000..a7a190c
--- /dev/null
+++ b/files/rpms-suse/q-l3
@@ -0,0 +1,2 @@
+conntrack-tools
+keepalived
diff --git a/files/rpms-suse/swift b/files/rpms-suse/swift
index 9c0d188..6a824f9 100644
--- a/files/rpms-suse/swift
+++ b/files/rpms-suse/swift
@@ -1,15 +1,6 @@
 curl
 memcached
-python-PasteDeploy
-python-WebOb
-python-configobj
-python-coverage
 python-devel
-python-eventlet
-python-greenlet
-python-netifaces
-python-simplejson
-python-xattr
 sqlite3
 xfsprogs
 xinetd
diff --git a/files/rpms/devlibs b/files/rpms/devlibs
index 834a4b6..385ed3b 100644
--- a/files/rpms/devlibs
+++ b/files/rpms/devlibs
@@ -1,8 +1,7 @@
 libffi-devel  # pyOpenSSL
 libxml2-devel  # lxml
 libxslt-devel  # lxml
-mariadb-devel  # MySQL-python  f20,f21,rhel7
-mysql-devel  # MySQL-python  rhel6
+mariadb-devel  # MySQL-python
 openssl-devel  # pyOpenSSL
 postgresql-devel  # psycopg2
 python-devel  # pyOpenSSL
diff --git a/files/venv-requirements.txt b/files/venv-requirements.txt
index 73d0579..b9a55b4 100644
--- a/files/venv-requirements.txt
+++ b/files/venv-requirements.txt
@@ -1,7 +1,6 @@
 # Once we can prebuild wheels before a devstack run, uncomment the skipped libraries
 cryptography
 # lxml # still install from from packages
-MySQL-python
 # netifaces # still install from packages
 #numpy    # slowest wheel by far, stop building until we are actually using the output
 posix-ipc
diff --git a/functions-common b/functions-common
index 52d80fb..3a2f5f7 100644
--- a/functions-common
+++ b/functions-common
@@ -1629,7 +1629,6 @@
 function disable_negated_services {
     local to_remove=""
     local remaining=""
-    local enabled=""
     local service
 
     # build up list of services that should be removed; i.e. they
@@ -1644,21 +1643,7 @@
 
     # go through the service list.  if this service appears in the "to
     # be removed" list, drop it
-    for service in ${remaining//,/ }; do
-        local remove
-        local add=1
-        for remove in ${to_remove//,/ }; do
-            if [[ ${remove} == ${service} ]]; then
-                add=0
-                break
-            fi
-        done
-        if [[ $add == 1 ]]; then
-            enabled="${enabled},$service"
-        fi
-    done
-
-    ENABLED_SERVICES=$(_cleanup_service_list "$enabled")
+    ENABLED_SERVICES=$(remove_disabled_services "$remaining" "$to_remove")
 }
 
 # disable_service() removes the services passed as argument to the
@@ -1762,6 +1747,30 @@
     return $enabled
 }
 
+# remove specified list from the input string
+# remove_disabled_services service-list remove-list
+function remove_disabled_services {
+    local service_list=$1
+    local remove_list=$2
+    local service
+    local enabled=""
+
+    for service in ${service_list//,/ }; do
+        local remove
+        local add=1
+        for remove in ${remove_list//,/ }; do
+            if [[ ${remove} == ${service} ]]; then
+                add=0
+                break
+            fi
+        done
+        if [[ $add == 1 ]]; then
+            enabled="${enabled},$service"
+        fi
+    done
+    _cleanup_service_list "$enabled"
+}
+
 # Toggle enable/disable_service for services that must run exclusive of each other
 #  $1 The name of a variable containing a space-separated list of services
 #  $2 The name of a variable in which to store the enabled service's name
@@ -1833,16 +1842,7 @@
     local user=$1
     local group=$2
 
-    if [[ -z "$os_VENDOR" ]]; then
-        GetOSVersion
-    fi
-
-    # SLE11 and openSUSE 12.2 don't have the usual usermod
-    if ! is_suse || [[ "$os_VENDOR" = "openSUSE" && "$os_RELEASE" != "12.2" ]]; then
-        sudo usermod -a -G "$group" "$user"
-    else
-        sudo usermod -A "$group" "$user"
-    fi
+    sudo usermod -a -G "$group" "$user"
 }
 
 # Convert CIDR notation to a IPv4 netmask
diff --git a/lib/ceilometer b/lib/ceilometer
index 1f72187..f6f605b 100644
--- a/lib/ceilometer
+++ b/lib/ceilometer
@@ -78,8 +78,13 @@
 CEILOMETER_AUTH_CACHE_DIR=${CEILOMETER_AUTH_CACHE_DIR:-/var/cache/ceilometer}
 CEILOMETER_WSGI_DIR=${CEILOMETER_WSGI_DIR:-/var/www/ceilometer}
 
-# Support potential entry-points console scripts
-CEILOMETER_BIN_DIR=$(get_python_exec_prefix)
+# Support potential entry-points console scripts in VENV or not
+if [[ ${USE_VENV} = True ]]; then
+    PROJECT_VENV["ceilometer"]=${CEILOMETER_DIR}.venv
+    CEILOMETER_BIN_DIR=${PROJECT_VENV["ceilometer"]}/bin
+else
+    CEILOMETER_BIN_DIR=$(get_python_exec_prefix)
+fi
 
 # Set up database backend
 CEILOMETER_BACKEND=${CEILOMETER_BACKEND:-mysql}
@@ -151,6 +156,8 @@
 # runs that a clean run would need to clean up
 function cleanup_ceilometer {
     if [ "$CEILOMETER_BACKEND" = 'mongodb' ] ; then
         mongo ceilometer --eval "db.dropDatabase();"
     elif [ "$CEILOMETER_BACKEND" = 'es' ] ; then
         curl -XDELETE "localhost:9200/events_*"
@@ -165,16 +172,22 @@
 
     local ceilometer_apache_conf=$(apache_site_config_for ceilometer)
     local apache_version=$(get_apache_version)
+    local venv_path=""
 
     # Copy proxy vhost and wsgi file
     sudo cp $CEILOMETER_DIR/ceilometer/api/app.wsgi $CEILOMETER_WSGI_DIR/app
 
+    if [[ ${USE_VENV} = True ]]; then
+        venv_path="python-path=${PROJECT_VENV["ceilometer"]}/lib/$(python_version)/site-packages"
+    fi
+
     sudo cp $FILES/apache-ceilometer.template $ceilometer_apache_conf
     sudo sed -e "
         s|%PORT%|$CEILOMETER_SERVICE_PORT|g;
         s|%APACHE_NAME%|$APACHE_NAME|g;
         s|%WSGIAPP%|$CEILOMETER_WSGI_DIR/app|g;
-        s|%USER%|$STACK_USER|g
+        s|%USER%|$STACK_USER|g;
+        s|%VIRTUALENV%|$venv_path|g
     " -i $ceilometer_apache_conf
 }
 
@@ -232,12 +245,14 @@
         iniset $CEILOMETER_CONF DEFAULT collector_workers $API_WORKERS
         ${TOP_DIR}/pkg/elasticsearch.sh start
         cleanup_ceilometer
-    else
+    elif [ "$CEILOMETER_BACKEND" = 'mongodb' ] ; then
         iniset $CEILOMETER_CONF database alarm_connection mongodb://localhost:27017/ceilometer
         iniset $CEILOMETER_CONF database event_connection mongodb://localhost:27017/ceilometer
         iniset $CEILOMETER_CONF database metering_connection mongodb://localhost:27017/ceilometer
         configure_mongodb
         cleanup_ceilometer
+    else
+        die $LINENO "Unable to configure unknown CEILOMETER_BACKEND $CEILOMETER_BACKEND"
     fi
 
     if [[ "$VIRT_DRIVER" = 'vsphere' ]]; then
@@ -263,10 +278,8 @@
     local packages=mongodb-server
 
     if is_fedora; then
-        # mongodb client + python bindings
-        packages="${packages} mongodb pymongo"
-    else
-        packages="${packages} python-pymongo"
+        # mongodb client
+        packages="${packages} mongodb"
     fi
 
     install_package ${packages}
@@ -319,6 +332,18 @@
         install_redis
     fi
 
+    if [ "$CEILOMETER_BACKEND" = 'mongodb' ] ; then
+        pip_install_gr pymongo
+    fi
+
+    if [[ "$VIRT_DRIVER" = 'libvirt' ]]; then
+        pip_install_gr libvirt-python
+    fi
+
+    if [[ "$VIRT_DRIVER" = 'vsphere' ]]; then
+        pip_install_gr oslo.vmware
+    fi
+
     if [ "$CEILOMETER_BACKEND" = 'es' ] ; then
         ${TOP_DIR}/pkg/elasticsearch.sh download
         ${TOP_DIR}/pkg/elasticsearch.sh install
@@ -349,13 +374,13 @@
 
 # start_ceilometer() - Start running processes, including screen
 function start_ceilometer {
-    run_process ceilometer-acentral "ceilometer-agent-central --config-file $CEILOMETER_CONF"
-    run_process ceilometer-anotification "ceilometer-agent-notification --config-file $CEILOMETER_CONF"
-    run_process ceilometer-collector "ceilometer-collector --config-file $CEILOMETER_CONF"
-    run_process ceilometer-aipmi "ceilometer-agent-ipmi --config-file $CEILOMETER_CONF"
+    run_process ceilometer-acentral "$CEILOMETER_BIN_DIR/ceilometer-agent-central --config-file $CEILOMETER_CONF"
+    run_process ceilometer-anotification "$CEILOMETER_BIN_DIR/ceilometer-agent-notification --config-file $CEILOMETER_CONF"
+    run_process ceilometer-collector "$CEILOMETER_BIN_DIR/ceilometer-collector --config-file $CEILOMETER_CONF"
+    run_process ceilometer-aipmi "$CEILOMETER_BIN_DIR/ceilometer-agent-ipmi --config-file $CEILOMETER_CONF"
 
     if [[ "$CEILOMETER_USE_MOD_WSGI" == "False" ]]; then
-        run_process ceilometer-api "ceilometer-api -d -v --log-dir=$CEILOMETER_API_LOG_DIR --config-file $CEILOMETER_CONF"
+        run_process ceilometer-api "$CEILOMETER_BIN_DIR/ceilometer-api -d -v --log-dir=$CEILOMETER_API_LOG_DIR --config-file $CEILOMETER_CONF"
     else
         enable_apache_site ceilometer
         restart_apache_server
@@ -367,10 +392,10 @@
     # Start the compute agent last to allow time for the collector to
     # fully wake up and connect to the message bus. See bug #1355809
     if [[ "$VIRT_DRIVER" = 'libvirt' ]]; then
-        run_process ceilometer-acompute "ceilometer-agent-compute --config-file $CEILOMETER_CONF" $LIBVIRT_GROUP
+        run_process ceilometer-acompute "$CEILOMETER_BIN_DIR/ceilometer-agent-compute --config-file $CEILOMETER_CONF" $LIBVIRT_GROUP
     fi
     if [[ "$VIRT_DRIVER" = 'vsphere' ]]; then
-        run_process ceilometer-acompute "ceilometer-agent-compute --config-file $CEILOMETER_CONF"
+        run_process ceilometer-acompute "$CEILOMETER_BIN_DIR/ceilometer-agent-compute --config-file $CEILOMETER_CONF"
     fi
 
     # Only die on API if it was actually intended to be turned on
@@ -381,8 +406,8 @@
         fi
     fi
 
-    run_process ceilometer-alarm-notifier "ceilometer-alarm-notifier --config-file $CEILOMETER_CONF"
-    run_process ceilometer-alarm-evaluator "ceilometer-alarm-evaluator --config-file $CEILOMETER_CONF"
+    run_process ceilometer-alarm-notifier "$CEILOMETER_BIN_DIR/ceilometer-alarm-notifier --config-file $CEILOMETER_CONF"
+    run_process ceilometer-alarm-evaluator "$CEILOMETER_BIN_DIR/ceilometer-alarm-evaluator --config-file $CEILOMETER_CONF"
 }
 
 # stop_ceilometer() - Stop running processes
diff --git a/lib/ceph b/lib/ceph
index 4068e26..4d6ca4a 100644
--- a/lib/ceph
+++ b/lib/ceph
@@ -110,7 +110,7 @@
 
 # check_os_support_ceph() - Check if the operating system provides a decent version of Ceph
 function check_os_support_ceph {
-    if [[ ! ${DISTRO} =~ (trusty|f20|f21) ]]; then
+    if [[ ! ${DISTRO} =~ (trusty|f20|f21|f22) ]]; then
         echo "WARNING: your distro $DISTRO does not provide (at least) the Firefly release. Please use Ubuntu Trusty or Fedora 20 (and higher)"
         if [[ "$FORCE_CEPH_INSTALL" != "yes" ]]; then
             die $LINENO "If you wish to install Ceph on this distribution anyway run with FORCE_CEPH_INSTALL=yes"
diff --git a/lib/cinder b/lib/cinder
index da22e29..ade3b82 100644
--- a/lib/cinder
+++ b/lib/cinder
@@ -39,6 +39,7 @@
 
 # set up default directories
 GITDIR["python-cinderclient"]=$DEST/python-cinderclient
+GITDIR["os-brick"]=$DEST/os-brick
 CINDER_DIR=$DEST/cinder
 
 # Cinder virtual environment
@@ -381,6 +382,13 @@
 
 # install_cinder() - Collect source and prepare
 function install_cinder {
+    # Install os-brick from git so we make sure we're testing
+    # the latest code.
+    if use_library_from_git "os-brick"; then
+        git_clone_by_name "os-brick"
+        setup_dev_lib "os-brick"
+    fi
+
     git_clone $CINDER_REPO $CINDER_DIR $CINDER_BRANCH
     setup_develop $CINDER_DIR
     if [ "$CINDER_ISCSI_HELPER" = "tgtadm" ]; then
@@ -424,12 +432,13 @@
             _configure_tgt_for_config_d
             if is_ubuntu; then
                 sudo service tgt restart
-            elif is_fedora || is_suse; then
-                restart_service tgtd
+            elif is_suse; then
+                # NOTE(dmllr): workaround restart bug
+                # https://bugzilla.suse.com/show_bug.cgi?id=934642
+                stop_service tgtd
+                start_service tgtd
             else
-                # note for other distros: unstack.sh also uses the tgt/tgtd service
-                # name, and would need to be adjusted too
-                exit_distro_not_supported "restarting tgt"
+                restart_service tgtd
             fi
             # NOTE(gfidente): ensure tgtd is running in debug mode
             sudo tgtadm --mode system --op update --name debug --value on
diff --git a/lib/databases/mysql b/lib/databases/mysql
index 1b9a081..f097fb2 100644
--- a/lib/databases/mysql
+++ b/lib/databases/mysql
@@ -11,6 +11,13 @@
 MY_XTRACE=$(set +o | grep xtrace)
 set +o xtrace
 
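+# MYSQL_DRIVER selects which Python MySQL client library DevStack installs.
+# It defaults to the pure-Python PyMySQL driver; set MYSQL_DRIVER=MySQL-python
+# (for example in local.conf) to keep the legacy C-based driver instead.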
+MYSQL_DRIVER=${MYSQL_DRIVER:-PyMySQL}
+# Force over to pymysql driver by default if we are using it.
+if is_service_enabled mysql; then
+    if [[ "$MYSQL_DRIVER" == "PyMySQL" ]]; then
+        SQLALCHEMY_DATABASE_DRIVER=${SQLALCHEMY_DATABASE_DRIVER:-"pymysql"}
+    fi
+fi
 
 register_database mysql
 
@@ -155,8 +162,12 @@
 
 function install_database_python_mysql {
     # Install Python client module
-    pip_install_gr MySQL-python
-    ADDITIONAL_VENV_PACKAGES+=",MySQL-python"
+    pip_install_gr $MYSQL_DRIVER
+    if [[ "$MYSQL_DRIVER" == "MySQL-python" ]]; then
+        ADDITIONAL_VENV_PACKAGES+=",MySQL-python"
+    elif [[ "$MYSQL_DRIVER" == "PyMySQL" ]]; then
+        ADDITIONAL_VENV_PACKAGES+=",PyMySQL"
+    fi
 }
 
 function database_connection_url_mysql {
diff --git a/lib/glance b/lib/glance
index 4e1bd24..016ade3 100644
--- a/lib/glance
+++ b/lib/glance
@@ -154,6 +154,7 @@
 
     if is_service_enabled tls-proxy; then
         iniset $GLANCE_API_CONF DEFAULT bind_port $GLANCE_SERVICE_PORT_INT
+        iniset $GLANCE_API_CONF DEFAULT public_endpoint $GLANCE_SERVICE_PROTOCOL://$GLANCE_HOSTPORT
         iniset $GLANCE_REGISTRY_CONF DEFAULT bind_port $GLANCE_REGISTRY_PORT_INT
     fi
 
diff --git a/lib/horizon b/lib/horizon
index f953f5c..b0f306b 100644
--- a/lib/horizon
+++ b/lib/horizon
@@ -97,7 +97,14 @@
     _horizon_config_set $local_settings "" OPENSTACK_KEYSTONE_DEFAULT_ROLE \"Member\"
 
     _horizon_config_set $local_settings "" OPENSTACK_HOST \"${KEYSTONE_SERVICE_HOST}\"
-    _horizon_config_set $local_settings "" OPENSTACK_KEYSTONE_URL "\"${KEYSTONE_SERVICE_PROTOCOL}://${KEYSTONE_SERVICE_HOST}:${KEYSTONE_SERVICE_PORT}/v2.0\""
+
+    if [ "$ENABLE_IDENTITY_V2" == "False" ]; then
+        # Only the Identity v3 API is available, so use it with v3 auth tokens
+        _horizon_config_set $local_settings "" OPENSTACK_API_VERSIONS {\"identity\":3}
+        _horizon_config_set $local_settings "" OPENSTACK_KEYSTONE_URL "\"${KEYSTONE_SERVICE_PROTOCOL}://${KEYSTONE_SERVICE_HOST}:${KEYSTONE_SERVICE_PORT}/v3\""
+    else
+        _horizon_config_set $local_settings "" OPENSTACK_KEYSTONE_URL "\"${KEYSTONE_SERVICE_PROTOCOL}://${KEYSTONE_SERVICE_HOST}:${KEYSTONE_SERVICE_PORT}/v2.0\""
+    fi
 
     if [ -f $SSL_BUNDLE_FILE ]; then
         _horizon_config_set $local_settings "" OPENSTACK_SSL_CACERT \"${SSL_BUNDLE_FILE}\"
diff --git a/lib/ironic b/lib/ironic
index 7493c3c..4984be1 100644
--- a/lib/ironic
+++ b/lib/ironic
@@ -569,14 +569,6 @@
 function enroll_nodes {
     local chassis_id=$(ironic chassis-create -d "ironic test chassis" | grep " uuid " | get_field 2)
 
-    if [[ "$IRONIC_DEPLOY_DRIVER" == "pxe_ssh" ]] ; then
-        local _IRONIC_DEPLOY_KERNEL_KEY=pxe_deploy_kernel
-        local _IRONIC_DEPLOY_RAMDISK_KEY=pxe_deploy_ramdisk
-    elif is_deployed_by_agent; then
-        local _IRONIC_DEPLOY_KERNEL_KEY=deploy_kernel
-        local _IRONIC_DEPLOY_RAMDISK_KEY=deploy_ramdisk
-    fi
-
     if ! is_ironic_hardware; then
         local ironic_node_cpu=$IRONIC_VM_SPECS_CPU
         local ironic_node_ram=$IRONIC_VM_SPECS_RAM
@@ -584,8 +576,8 @@
         local ironic_ephemeral_disk=$IRONIC_VM_EPHEMERAL_DISK
         local ironic_hwinfo_file=$IRONIC_VM_MACS_CSV_FILE
         local node_options="\
-            -i $_IRONIC_DEPLOY_KERNEL_KEY=$IRONIC_DEPLOY_KERNEL_ID \
-            -i $_IRONIC_DEPLOY_RAMDISK_KEY=$IRONIC_DEPLOY_RAMDISK_ID \
+            -i deploy_kernel=$IRONIC_DEPLOY_KERNEL_ID \
+            -i deploy_ramdisk=$IRONIC_DEPLOY_RAMDISK_ID \
             -i ssh_virt_type=$IRONIC_SSH_VIRT_TYPE \
             -i ssh_address=$IRONIC_VM_SSH_ADDRESS \
             -i ssh_port=$IRONIC_VM_SSH_PORT \
@@ -616,8 +608,8 @@
             # we create the bare metal flavor with minimum value
             local node_options="-i ipmi_address=$ipmi_address -i ipmi_password=$ironic_ipmi_passwd\
                 -i ipmi_username=$ironic_ipmi_username"
-            node_options+=" -i $_IRONIC_DEPLOY_KERNEL_KEY=$IRONIC_DEPLOY_KERNEL_ID"
-            node_options+=" -i $_IRONIC_DEPLOY_RAMDISK_KEY=$IRONIC_DEPLOY_RAMDISK_ID"
+            node_options+=" -i deploy_kernel=$IRONIC_DEPLOY_KERNEL_ID"
+            node_options+=" -i deploy_ramdisk=$IRONIC_DEPLOY_RAMDISK_ID"
         fi
 
         # First node created will be used for testing in ironic w/o glance
diff --git a/lib/keystone b/lib/keystone
index b0907c7..7a949cf 100644
--- a/lib/keystone
+++ b/lib/keystone
@@ -197,6 +197,12 @@
         KEYSTONE_PASTE_INI="$KEYSTONE_CONF"
     fi
 
+    if [ "$ENABLE_IDENTITY_V2" == "False" ]; then
+        # Only the Identity v3 API should be available, so disable the v2 pipelines
+        inidelete $KEYSTONE_PASTE_INI composite:main \\/v2.0
+        inidelete $KEYSTONE_PASTE_INI composite:admin \\/v2.0
+    fi
+
     configure_keystone_extensions
 
     # Rewrite stock ``keystone.conf``
diff --git a/lib/lvm b/lib/lvm
index 1fe2683..8afd543 100644
--- a/lib/lvm
+++ b/lib/lvm
@@ -78,7 +78,7 @@
 }
 
 
-# _create_volume_group creates default volume group
+# _create_lvm_volume_group creates default volume group
 #
 # Usage: _create_lvm_volume_group() $vg $size
 function _create_lvm_volume_group {
diff --git a/lib/neutron-legacy b/lib/neutron-legacy
index 18b0942..3ac76a2 100644
--- a/lib/neutron-legacy
+++ b/lib/neutron-legacy
@@ -168,6 +168,10 @@
 ## Provider Network Information
 PROVIDER_SUBNET_NAME=${PROVIDER_SUBNET_NAME:-"provider_net"}
 
+# Define the public bridge that will transmit traffic from VMs to the
+# physical network - used by both the OVS and Linux Bridge drivers.
+PUBLIC_BRIDGE=${PUBLIC_BRIDGE:-br-ex}
+
 # Use flat providernet for public network
 #
 # If Q_USE_PROVIDERNET_FOR_PUBLIC=True, use a flat provider network
@@ -459,6 +463,8 @@
     fi
 
     _configure_neutron_debug_command
+
+    iniset $NEUTRON_CONF DEFAULT api_workers "$API_WORKERS"
 }
 
 function create_nova_conf_neutron {
@@ -820,6 +826,10 @@
         neutron_ovs_base_cleanup
     fi
 
+    if [[ $Q_AGENT == "linuxbridge" ]]; then
+        neutron_lb_cleanup
+    fi
+
     # delete all namespaces created by neutron
     for ns in $(sudo ip netns list | grep -o -E '(qdhcp|qrouter|qlbaas|fip|snat)-[0-9a-f-]*'); do
         sudo ip netns delete ${ns}
@@ -1294,17 +1304,6 @@
         IPV6_ROUTER_GW_IP=`neutron port-list -c fixed_ips | grep $ipv6_pub_subnet_id | awk -F '"' -v subnet_id=$ipv6_pub_subnet_id '$4 == subnet_id { print $8; }'`
         die_if_not_set $LINENO IPV6_ROUTER_GW_IP "Failure retrieving IPV6_ROUTER_GW_IP"
 
-        # The ovs_base_configure_l3_agent function flushes the public
-        # bridge's ip addresses, so turn IPv6 support in the host off
-        # and then on to recover the public bridge's link local address
-        sudo sysctl -w net.ipv6.conf.${PUBLIC_BRIDGE}.disable_ipv6=1
-        sudo sysctl -w net.ipv6.conf.${PUBLIC_BRIDGE}.disable_ipv6=0
-        if ! ip -6 addr show dev $PUBLIC_BRIDGE | grep 'scope global'; then
-            # Create an IPv6 ULA address for PUBLIC_BRIDGE if one is not present
-            IPV6_BRIDGE_ULA=`uuidgen | sed s/-//g | cut -c 23- | sed -e "s/\(..\)\(....\)\(....\)/\1:\2:\3/"`
-            sudo ip -6 addr add fd$IPV6_BRIDGE_ULA::1 dev $PUBLIC_BRIDGE
-        fi
-
         if is_neutron_ovs_base_plugin && [[ "$Q_USE_NAMESPACE" = "True" ]]; then
             local ext_gw_interface=$(_neutron_get_ext_gw_interface)
             local ipv6_cidr_len=${IPV6_PUBLIC_RANGE#*/}
diff --git a/lib/neutron_plugins/linuxbridge_agent b/lib/neutron_plugins/linuxbridge_agent
index c9ea1ca..b348af9 100644
--- a/lib/neutron_plugins/linuxbridge_agent
+++ b/lib/neutron_plugins/linuxbridge_agent
@@ -7,6 +7,10 @@
 PLUGIN_XTRACE=$(set +o | grep xtrace)
 set +o xtrace
 
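+# neutron_lb_cleanup() - Remove the Linux Bridge public bridge that
+# neutron_plugin_configure_l3_agent creates, used during cleanup/unstack.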
+function neutron_lb_cleanup {
+    sudo brctl delbr $PUBLIC_BRIDGE
+}
+
 function is_neutron_ovs_base_plugin {
     # linuxbridge doesn't use OVS
     return 1
@@ -29,6 +33,7 @@
 }
 
 function neutron_plugin_configure_l3_agent {
+    sudo brctl addbr $PUBLIC_BRIDGE
     iniset $Q_L3_CONF_FILE DEFAULT external_network_bridge
     iniset $Q_L3_CONF_FILE DEFAULT l3_agent_manager neutron.agent.l3_agent.L3NATAgentWithStateReport
 }
diff --git a/lib/neutron_plugins/ml2 b/lib/neutron_plugins/ml2
index 8853777..2733f1f 100644
--- a/lib/neutron_plugins/ml2
+++ b/lib/neutron_plugins/ml2
@@ -31,6 +31,9 @@
 Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS=${Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS:-vni_ranges=1001:2000}
 # Default VLAN TypeDriver options
 Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS=${Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS:-}
+# List of extension drivers to load, use '-' instead of ':-' to allow people to
+# explicitly override this to blank
+Q_ML2_PLUGIN_EXT_DRIVERS=${Q_ML2_PLUGIN_EXT_DRIVERS-port_security}
 
 # L3 Plugin to load for ML2
 ML2_L3_PLUGIN=${ML2_L3_PLUGIN:-neutron.services.l3_router.l3_router_plugin.L3RouterPlugin}
@@ -113,6 +116,8 @@
 
     populate_ml2_config /$Q_PLUGIN_CONF_FILE ml2 type_drivers=$Q_ML2_PLUGIN_TYPE_DRIVERS
 
+    populate_ml2_config /$Q_PLUGIN_CONF_FILE ml2 extension_drivers=$Q_ML2_PLUGIN_EXT_DRIVERS
+
     populate_ml2_config /$Q_PLUGIN_CONF_FILE ml2 $Q_SRV_EXTRA_OPTS
 
     populate_ml2_config /$Q_PLUGIN_CONF_FILE ml2_type_gre $Q_ML2_PLUGIN_GRE_TYPE_OPTIONS
diff --git a/lib/neutron_plugins/openvswitch_agent b/lib/neutron_plugins/openvswitch_agent
index 1d24f3b..2a05e2d 100644
--- a/lib/neutron_plugins/openvswitch_agent
+++ b/lib/neutron_plugins/openvswitch_agent
@@ -59,7 +59,7 @@
         OVS_BRIDGE_MAPPINGS=$PHYSICAL_NETWORK:$OVS_PHYSICAL_BRIDGE
 
         # Configure bridge manually with physical interface as port for multi-node
-        sudo ovs-vsctl --no-wait -- --may-exist add-br $OVS_PHYSICAL_BRIDGE
+        _neutron_ovs_base_add_bridge $OVS_PHYSICAL_BRIDGE
     fi
     if [[ "$OVS_BRIDGE_MAPPINGS" != "" ]]; then
         iniset /$Q_PLUGIN_CONF_FILE ovs bridge_mappings $OVS_BRIDGE_MAPPINGS
@@ -92,7 +92,7 @@
         # Set up domU's L2 agent:
 
         # Create a bridge "br-$GUEST_INTERFACE_DEFAULT"
-        sudo ovs-vsctl --no-wait -- --may-exist add-br "br-$GUEST_INTERFACE_DEFAULT"
+        _neutron_ovs_base_add_bridge "br-$GUEST_INTERFACE_DEFAULT"
         # Add $GUEST_INTERFACE_DEFAULT to that bridge
         sudo ovs-vsctl add-port "br-$GUEST_INTERFACE_DEFAULT" $GUEST_INTERFACE_DEFAULT
 
diff --git a/lib/neutron_plugins/ovs_base b/lib/neutron_plugins/ovs_base
index 51999c6..4e750f0 100644
--- a/lib/neutron_plugins/ovs_base
+++ b/lib/neutron_plugins/ovs_base
@@ -8,7 +8,6 @@
 set +o xtrace
 
 OVS_BRIDGE=${OVS_BRIDGE:-br-int}
-PUBLIC_BRIDGE=${PUBLIC_BRIDGE:-br-ex}
 OVS_DATAPATH_TYPE=${OVS_DATAPATH_TYPE:-""}
 
 function is_neutron_ovs_base_plugin {
@@ -16,13 +15,21 @@
     return 0
 }
 
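+# _neutron_ovs_base_add_bridge() - Create an OVS bridge if it does not already
+# exist, requesting the datapath type when OVS_DATAPATH_TYPE is non-empty.
+# Usage: _neutron_ovs_base_add_bridge bridge-name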
+function _neutron_ovs_base_add_bridge {
+    local bridge=$1
+    local addbr_cmd="sudo ovs-vsctl --no-wait -- --may-exist add-br $bridge"
+
+    if [ "$OVS_DATAPATH_TYPE" != "" ] ; then
+        addbr_cmd="$addbr_cmd -- set Bridge $bridge datapath_type=${OVS_DATAPATH_TYPE}"
+    fi
+
+    $addbr_cmd
+}
+
 function _neutron_ovs_base_setup_bridge {
     local bridge=$1
     neutron-ovs-cleanup
-    sudo ovs-vsctl --no-wait -- --may-exist add-br $bridge
-    if [[ $OVS_DATAPATH_TYPE != "" ]]; then
-        sudo ovs-vsctl set Bridge $bridge datapath_type=${OVS_DATAPATH_TYPE}
-    fi
+    _neutron_ovs_base_add_bridge $bridge
     sudo ovs-vsctl --no-wait br-set-external-id $bridge bridge-id $bridge
 }
 
@@ -93,7 +100,7 @@
         sudo ip link set $Q_PUBLIC_VETH_EX up
         sudo ip addr flush dev $Q_PUBLIC_VETH_EX
     else
-        sudo ovs-vsctl -- --may-exist add-br $PUBLIC_BRIDGE
+        _neutron_ovs_base_add_bridge $PUBLIC_BRIDGE
         sudo ovs-vsctl br-set-external-id $PUBLIC_BRIDGE bridge-id $PUBLIC_BRIDGE
     fi
 }
diff --git a/lib/neutron_plugins/vmware_nsx_v3 b/lib/neutron_plugins/vmware_nsx_v3
new file mode 100644
index 0000000..6d8a6e6
--- /dev/null
+++ b/lib/neutron_plugins/vmware_nsx_v3
@@ -0,0 +1,10 @@
+#!/bin/bash
+
+# This file is needed so Q_PLUGIN=vmware_nsx_v3 will work.
+
+# FIXME(salv-orlando): This function should not be here, but unfortunately
+# devstack calls it before the external plugins are fetched
+function has_neutron_plugin_security_group {
+    # 0 means True here
+    return 0
+}
diff --git a/lib/nova b/lib/nova
index da288d3..88b336a 100644
--- a/lib/nova
+++ b/lib/nova
@@ -53,6 +53,7 @@
 NOVA_CELLS_CONF=$NOVA_CONF_DIR/nova-cells.conf
 NOVA_FAKE_CONF=$NOVA_CONF_DIR/nova-fake.conf
 NOVA_CELLS_DB=${NOVA_CELLS_DB:-nova_cell}
+NOVA_API_DB=${NOVA_API_DB:-nova_api}
 
 NOVA_API_PASTE_INI=${NOVA_API_PASTE_INI:-$NOVA_CONF_DIR/api-paste.ini}
 # NOVA_API_VERSION valid options
@@ -231,6 +232,10 @@
     #if is_service_enabled n-cpu && [[ -r $NOVA_PLUGINS/hypervisor-$VIRT_DRIVER ]]; then
     #    cleanup_nova_hypervisor
     #fi
+
+    if [ "$NOVA_USE_MOD_WSGI" == "True" ]; then
+        _cleanup_nova_apache_wsgi
+    fi
 }
 
 # _cleanup_nova_apache_wsgi() - Remove wsgi files, disable and remove apache vhost file
@@ -276,6 +281,7 @@
         s|%SSLKEYFILE%|$nova_keyfile|g;
         s|%USER%|$STACK_USER|g;
         s|%VIRTUALENV%|$venv_path|g
+        s|%APIWORKERS%|$API_WORKERS|g
     " -i $nova_apache_conf
 
     sudo cp $FILES/apache-nova-ec2-api.template $nova_ec2_apache_conf
@@ -288,6 +294,7 @@
         s|%SSLKEYFILE%|$nova_keyfile|g;
         s|%USER%|$STACK_USER|g;
         s|%VIRTUALENV%|$venv_path|g
+        s|%APIWORKERS%|$API_WORKERS|g
     " -i $nova_ec2_apache_conf
 }
 
@@ -471,6 +478,7 @@
     iniset $NOVA_CONF DEFAULT s3_port "$S3_SERVICE_PORT"
     iniset $NOVA_CONF DEFAULT my_ip "$HOST_IP"
     iniset $NOVA_CONF database connection `database_connection_url nova`
+    iniset $NOVA_CONF api_database connection `database_connection_url nova_api`
     iniset $NOVA_CONF DEFAULT instance_name_template "${INSTANCE_NAME_PREFIX}%08x"
     iniset $NOVA_CONF osapi_v3 enabled "True"
 
@@ -489,6 +497,7 @@
         if is_service_enabled tls-proxy; then
             # Set the service port for a proxy to take the original
             iniset $NOVA_CONF DEFAULT osapi_compute_listen_port "$NOVA_SERVICE_PORT_INT"
+            iniset $NOVA_CONF DEFAULT osapi_compute_link_prefix $NOVA_SERVICE_PROTOCOL://$NOVA_SERVICE_HOST:$NOVA_SERVICE_PORT
         fi
 
         configure_auth_token_middleware $NOVA_CONF nova $NOVA_AUTH_CACHE_DIR
@@ -674,6 +683,9 @@
         if is_service_enabled n-cell; then
             recreate_database $NOVA_CELLS_DB
         fi
+
+        recreate_database $NOVA_API_DB
+        $NOVA_BIN_DIR/nova-manage api_db sync
     fi
 
     create_nova_cache_dir
@@ -755,8 +767,8 @@
         enable_apache_site nova-api
         enable_apache_site nova-ec2-api
         restart_apache_server
-        tail_log nova /var/log/$APACHE_NAME/nova-api.log
-        tail_log nova /var/log/$APACHE_NAME/nova-ec2-api.log
+        tail_log nova-api /var/log/$APACHE_NAME/nova-api.log
+        tail_log nova-ec2-api /var/log/$APACHE_NAME/nova-ec2-api.log
     else
         run_process n-api "$NOVA_BIN_DIR/nova-api"
     fi
diff --git a/lib/rpc_backend b/lib/rpc_backend
index 297ebac..33ab03d 100644
--- a/lib/rpc_backend
+++ b/lib/rpc_backend
@@ -194,13 +194,22 @@
         # NOTE(bnemec): Retry initial rabbitmq configuration to deal with
         # the fact that sometimes it fails to start properly.
         # Reference: https://bugzilla.redhat.com/show_bug.cgi?id=1144100
+        # NOTE(tonyb): Extend the original retry logic to only restart rabbitmq
+        # every second time around the loop.
+        # See: https://bugs.launchpad.net/devstack/+bug/1449056 for details on
+        # why this is needed.  This can be seen on vivid and Debian unstable
+        # (May 2015)
+        # TODO(tonyb): Remove this when Debian and Ubuntu have a fixed systemd
+        # service file.
         local i
-        for i in `seq 10`; do
+        for i in `seq 20`; do
             local rc=0
 
-            [[ $i -eq "10" ]] && die $LINENO "Failed to set rabbitmq password"
+            [[ $i -eq "20" ]] && die $LINENO "Failed to set rabbitmq password"
 
-            restart_service rabbitmq-server
+            if [[ $(( i % 2 )) == "0" ]] ; then
+                restart_service rabbitmq-server
+            fi
 
             rabbit_setuser "$RABBIT_USERID" "$RABBIT_PASSWORD" || rc=$?
             if [ $rc -ne 0 ]; then
diff --git a/lib/sahara b/lib/sahara
index 6d4e864..51e431a 100644
--- a/lib/sahara
+++ b/lib/sahara
@@ -186,7 +186,7 @@
 
     if is_service_enabled tls-proxy; then
         # Set the service port for a proxy to take the original
-        iniset $SAHARA_CONF DEFAULT port $SAHARA_SERVICE_PORT_INT
+        iniset $SAHARA_CONF_FILE DEFAULT port $SAHARA_SERVICE_PORT_INT
     fi
 
     recreate_database sahara
diff --git a/lib/tempest b/lib/tempest
index 211ff35..c4ae05f 100644
--- a/lib/tempest
+++ b/lib/tempest
@@ -91,10 +91,7 @@
     local extensions_list=$1
     shift
     local disabled_exts=$*
-    for ext_to_remove in ${disabled_exts//,/ } ; do
-        extensions_list=${extensions_list/$ext_to_remove","}
-    done
-    echo $extensions_list
+    remove_disabled_services "$extensions_list" "$disabled_exts"
 }
 
 # configure_tempest() - Set config files, create data dirs, etc
@@ -313,7 +310,15 @@
         iniset $TEMPEST_CONFIG identity admin_tenant_id $ADMIN_TENANT_ID
         iniset $TEMPEST_CONFIG identity admin_domain_name $ADMIN_DOMAIN_NAME
     fi
-    iniset $TEMPEST_CONFIG identity auth_version ${TEMPEST_AUTH_VERSION:-v2}
+    if [ "$ENABLE_IDENTITY_V2" == "False" ]; then
+        # Only Identity v3 is available, so skip the Identity API v2 tests
+        iniset $TEMPEST_CONFIG identity-feature-enabled v2_api False
+        # In addition, use v3 auth tokens for running all Tempest tests
+        iniset $TEMPEST_CONFIG identity auth_version v3
+    else
+        iniset $TEMPEST_CONFIG identity auth_version ${TEMPEST_AUTH_VERSION:-v2}
+    fi
+
     if is_ssl_enabled_service "key" || is_service_enabled tls-proxy; then
         iniset $TEMPEST_CONFIG identity ca_certificates_file $SSL_BUNDLE_FILE
     fi
@@ -325,6 +330,9 @@
         iniset $TEMPEST_CONFIG image http_image $TEMPEST_HTTP_IMAGE
     fi
 
+    # Image Features
+    iniset $TEMPEST_CONFIG image-feature-enabled deactivate_image True
+
     # Auth
     TEMPEST_ALLOW_TENANT_ISOLATION=${TEMPEST_ALLOW_TENANT_ISOLATION:-$TEMPEST_HAS_ADMIN}
     iniset $TEMPEST_CONFIG auth allow_tenant_isolation ${TEMPEST_ALLOW_TENANT_ISOLATION:-True}
@@ -430,6 +438,7 @@
     # Ceilometer API optimization happened in Juno that allows to run more tests in tempest.
     # Once Tempest retires support for icehouse this flag can be removed.
     iniset $TEMPEST_CONFIG telemetry too_slow_to_test "False"
+    iniset $TEMPEST_CONFIG telemetry-feature-enabled events "True"
 
     # Object Store
     local object_storage_api_extensions=${OBJECT_STORAGE_API_EXTENSIONS:-"all"}
diff --git a/stack.sh b/stack.sh
index dea5643..dc79fa9 100755
--- a/stack.sh
+++ b/stack.sh
@@ -173,7 +173,7 @@
 
 # Warn users who aren't on an explicitly supported distro, but allow them to
 # override check and attempt installation with ``FORCE=yes ./stack``
-if [[ ! ${DISTRO} =~ (precise|trusty|7.0|wheezy|sid|testing|jessie|f20|f21|rhel7) ]]; then
+if [[ ! ${DISTRO} =~ (precise|trusty|utopic|vivid|7.0|wheezy|sid|testing|jessie|f20|f21|f22|rhel7) ]]; then
     echo "WARNING: this script has not been tested on $DISTRO"
     if [[ "$FORCE" != "yes" ]]; then
         die $LINENO "If you wish to run this script anyway run with FORCE=yes"
diff --git a/stackrc b/stackrc
index 938a09a..f8add4b 100644
--- a/stackrc
+++ b/stackrc
@@ -87,9 +87,6 @@
 # Set the default Nova APIs to enable
 NOVA_ENABLED_APIS=ec2,osapi_compute,metadata
 
-# Configure Identity API version: 2.0, 3
-IDENTITY_API_VERSION=2.0
-
 # Whether to use 'dev mode' for screen windows. Dev mode works by
 # stuffing text into the screen windows so that a developer can use
 # ctrl-c, up-arrow, enter to restart the service. Starting services
@@ -106,6 +103,22 @@
     source $RC_DIR/.localrc.auto
 fi
 
+# Configure Identity API version: 2.0, 3
+IDENTITY_API_VERSION=${IDENTITY_API_VERSION:-2.0}
+
+# The option ENABLE_IDENTITY_V2 (default ``True``) defines whether the DevStack
+# deployment will deploy the Identity v2 pipelines. If this option is set to
+# ``False``, DevStack will: i) disable Identity v2; ii) configure Tempest to
+# skip Identity v2 specific tests; and iii) configure Horizon to use Identity
+# v3. When this option is set to ``False``, the option IDENTITY_API_VERSION
+# will be set to ``3`` so that DevStack registers the Identity endpoint as v3.
+# This flag is experimental and will be used as a basis to identify the
+# projects that still have issues operating with Identity v3.
+ENABLE_IDENTITY_V2=$(trueorfalse True ENABLE_IDENTITY_V2)
+if [ "$ENABLE_IDENTITY_V2" == "False" ]; then
+    IDENTITY_API_VERSION=3
+fi
+
 # Enable use of Python virtual environments.  Individual project use of
 # venvs are controlled by the PROJECT_VENV array; every project with
 # an entry in the array will be installed into the named venv.
@@ -428,6 +441,10 @@
 GITREPO["ceilometermiddleware"]=${CEILOMETERMIDDLEWARE_REPO:-${GIT_BASE}/openstack/ceilometermiddleware.git}
 GITBRANCH["ceilometermiddleware"]=${CEILOMETERMIDDLEWARE_BRANCH:-master}
 
+# os-brick library to manage local volume attachments
+GITREPO["os-brick"]=${OS_BRICK_REPO:-${GIT_BASE}/openstack/os-brick.git}
+GITBRANCH["os-brick"]=${OS_BRICK_BRANCH:-master}
+
 
 ##################
 #
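
Because IDENTITY_API_VERSION is now defaulted after .localrc.auto is sourced, a user setting survives, and disabling Identity v2 becomes a one-line switch. A sketch of the corresponding local.conf entry (the [[local|localrc]] meta-section is the usual DevStack convention):

    [[local|localrc]]
    # Deploy only the Identity v3 pipelines; stackrc then forces
    # IDENTITY_API_VERSION=3 and Tempest skips the v2 tests.
    ENABLE_IDENTITY_V2=False
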
diff --git a/tests/test_functions.sh b/tests/test_functions.sh
index 1d82792..f555de8 100755
--- a/tests/test_functions.sh
+++ b/tests/test_functions.sh
@@ -137,6 +137,31 @@
 test_disable_negated_services 'a,av2,-a,a' 'av2'
 test_disable_negated_services 'a,-a,av2' 'av2'
 
+echo "Testing remove_disabled_services()"
+
+function test_remove_disabled_services {
+    local service_list="$1"
+    local remove_list="$2"
+    local expected="$3"
+
+    results=$(remove_disabled_services "$service_list" "$remove_list")
+    if [ "$results" = "$expected" ]; then
+        passed "OK: '$service_list' - '$remove_list' -> '$results'"
+    else
+        failed "getting '$expected' from '$service_list' - '$remove_list' failed: '$results'"
+    fi
+}
+
+test_remove_disabled_services 'a,b,c' 'a,c' 'b'
+test_remove_disabled_services 'a,b,c' 'b' 'a,c'
+test_remove_disabled_services 'a,b,c,d' 'a,c d' 'b'
+test_remove_disabled_services 'a,b c,d' 'a d' 'b,c'
+test_remove_disabled_services 'a,b,c' 'a,b,c' ''
+test_remove_disabled_services 'a,b,c' 'd' 'a,b,c'
+test_remove_disabled_services 'a,b,c' '' 'a,b,c'
+test_remove_disabled_services '' 'a,b,c' ''
+test_remove_disabled_services '' '' ''
+
 echo "Testing is_package_installed()"
 
 if [[ -z "$os_PACKAGE" ]]; then
diff --git a/tests/test_ip.sh b/tests/test_ip.sh
index c53e80d..da939f4 100755
--- a/tests/test_ip.sh
+++ b/tests/test_ip.sh
@@ -12,106 +12,80 @@
 
 echo "Testing IP addr functions"
 
-if [[ $(cidr2netmask 4) == 240.0.0.0 ]]; then
-    passed "cidr2netmask(): /4...OK"
-else
-    failed "cidr2netmask(): /4...failed"
-fi
-if [[ $(cidr2netmask 8) == 255.0.0.0 ]]; then
-    passed "cidr2netmask(): /8...OK"
-else
-    failed "cidr2netmask(): /8...failed"
-fi
-if [[ $(cidr2netmask 12) == 255.240.0.0 ]]; then
-    passed "cidr2netmask(): /12...OK"
-else
-    failed "cidr2netmask(): /12...failed"
-fi
-if [[ $(cidr2netmask 16) == 255.255.0.0 ]]; then
-    passed "cidr2netmask(): /16...OK"
-else
-    failed "cidr2netmask(): /16...failed"
-fi
-if [[ $(cidr2netmask 20) == 255.255.240.0 ]]; then
-    passed "cidr2netmask(): /20...OK"
-else
-    failed "cidr2netmask(): /20...failed"
-fi
-if [[ $(cidr2netmask 24) == 255.255.255.0 ]]; then
-    passed "cidr2netmask(): /24...OK"
-else
-    failed "cidr2netmask(): /24...failed"
-fi
-if [[ $(cidr2netmask 28) == 255.255.255.240 ]]; then
-    passed "cidr2netmask(): /28...OK"
-else
-    failed "cidr2netmask(): /28...failed"
-fi
-if [[ $(cidr2netmask 30) == 255.255.255.252 ]]; then
-    passed "cidr2netmask(): /30...OK"
-else
-    failed "cidr2netmask(): /30...failed"
-fi
-if [[ $(cidr2netmask 32) == 255.255.255.255 ]]; then
-    passed "cidr2netmask(): /32...OK"
-else
-    failed "cidr2netmask(): /32...failed"
-fi
+function test_cidr2netmask {
+    local mask=0
+    local ips="128 192 224 240 248 252 254 255"
+    local ip
+    local msg
 
-if [[ $(maskip 169.254.169.254 240.0.0.0) == 160.0.0.0 ]]; then
-    passed "maskip(): /4...OK"
-else
-    failed "maskip(): /4...failed"
-fi
-if [[ $(maskip 169.254.169.254 255.0.0.0) == 169.0.0.0 ]]; then
-    passed "maskip(): /8...OK"
-else
-    failed "maskip(): /8...failed"
-fi
-if [[ $(maskip 169.254.169.254 255.240.0.0) == 169.240.0.0 ]]; then
-    passed "maskip(): /12...OK"
-else
-    failed "maskip(): /12...failed"
-fi
-if [[ $(maskip 169.254.169.254 255.255.0.0) == 169.254.0.0 ]]; then
-    passed "maskip(): /16...OK"
-else
-    failed "maskip(): /16...failed"
-fi
-if [[ $(maskip 169.254.169.254 255.255.240.0) == 169.254.160.0 ]]; then
-    passed "maskip(): /20...OK"
-else
-    failed "maskip(): /20...failed"
-fi
-if [[ $(maskip 169.254.169.254 255.255.255.0) == 169.254.169.0 ]]; then
-    passed "maskip(): /24...OK"
-else
-    failed "maskip(): /24...failed"
-fi
-if [[ $(maskip 169.254.169.254 255.255.255.240) == 169.254.169.240 ]]; then
-    passed "maskip(): /28...OK"
-else
-    failed "maskip(): /28...failed"
-fi
-if [[ $(maskip 169.254.169.254 255.255.255.255) == 169.254.169.254 ]]; then
-    passed "maskip(): /32...OK"
-else
-    failed "maskip(): /32...failed"
-fi
+    msg="cidr2netmask(/0) == 0.0.0.0"
+    assert_equal "0.0.0.0" $(cidr2netmask $mask) "$msg"
+
+    for ip in $ips; do
+        mask=$(( mask + 1 ))
+        msg="cidr2netmask(/$mask) == $ip.0.0.0"
+        assert_equal "$ip.0.0.0" $(cidr2netmask $mask) "$msg"
+    done
+
+    for ip in $ips; do
+        mask=$(( mask + 1 ))
+        msg="cidr2netmask(/$mask) == 255.$ip.0.0"
+        assert_equal "255.$ip.0.0" $(cidr2netmask $mask) "$msg"
+    done
+
+    for ip in $ips; do
+        mask=$(( mask + 1 ))
+        msg="cidr2netmask(/$mask) == 255.255.$ip.0"
+        assert_equal "255.255.$ip.0" $(cidr2netmask $mask) "$msg"
+    done
+
+    for ip in $ips; do
+        mask=$(( mask + 1 ))
+        msg="cidr2netmask(/$mask) == 255.255.255.$ip"
+        assert_equal "255.255.255.$ip" $(cidr2netmask $mask) "$msg"
+    done
+}
+
+test_cidr2netmask
+
+msg="maskip(169.254.169.254 240.0.0.0) == 160.0.0.0"
+assert_equal $(maskip 169.254.169.254 240.0.0.0) 160.0.0.0 "$msg"
+
+msg="maskip(169.254.169.254 255.0.0.0) == 169.0.0.0"
+assert_equal $(maskip 169.254.169.254 255.0.0.0) 169.0.0.0 "$msg"
+
+msg="maskip(169.254.169.254 255.240.0.0) == 169.240.0.0"
+assert_equal $(maskip 169.254.169.254 255.240.0.0) 169.240.0.0 "$msg"
+
+msg="maskip(169.254.169.254 255.255.0.0) == 169.254.0.0"
+assert_equal $(maskip 169.254.169.254 255.255.0.0) 169.254.0.0 "$msg"
+
+msg="maskip(169.254.169.254 255.255.240.0) == 169.254.160.0"
+assert_equal $(maskip 169.254.169.254 255.255.240.0) 169.254.160.0 "$msg"
+
+msg="maskip(169.254.169.254 255.255.255.0) == 169.254.169.0"
+assert_equal $(maskip 169.254.169.254 255.255.255.0) 169.254.169.0 "$msg"
+
+msg="maskip(169.254.169.254 255.255.255.240) == 169.254.169.240"
+assert_equal $(maskip 169.254.169.254 255.255.255.240) 169.254.169.240 "$msg"
+
+msg="maskip(169.254.169.254 255.255.255.255) == 169.254.169.254"
+assert_equal $(maskip 169.254.169.254 255.255.255.255) 169.254.169.254 "$msg"
+
 
 for mask in 8 12 16 20 24 26 28; do
-    echo -n "address_in_net(): in /$mask..."
+    msg="address_in_net($10.10.10.1 10.10.10.0/$mask)"
     if address_in_net 10.10.10.1 10.10.10.0/$mask; then
-        passed "OK"
+        passed "$msg"
     else
-        failed "address_in_net() failed on /$mask"
+        failed "$msg"
     fi
 
-    echo -n "address_in_net(): not in /$mask..."
+    msg="! address_in_net($10.10.10.1 11.11.11.0/$mask)"
     if ! address_in_net 10.10.10.1 11.11.11.0/$mask; then
-        passed "OK"
+        passed "$msg"
     else
-        failed "address_in_net() failed on /$mask"
+        failed "$msg"
     fi
 done
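
The loop-based rewrite leans on the fact that each additional prefix bit fills the octets left to right through 128, 192, 224, 240, 248, 252, 254, 255. A worked example for one of the masks exercised above:

    # /20 = 8 + 8 + 4 bits: two full octets, then 4 leading one-bits in the
    #   third octet (11110000b = 240), so:
    # cidr2netmask 20                      -> 255.255.240.0
    # maskip 169.254.169.254 255.255.240.0 -> 169.254.160.0
    #   (third octet: 169 AND 240 = 10101001b AND 11110000b = 10100000b = 160)
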
 
diff --git a/tests/test_libs_from_pypi.sh b/tests/test_libs_from_pypi.sh
index 8210d0a..336a213 100755
--- a/tests/test_libs_from_pypi.sh
+++ b/tests/test_libs_from_pypi.sh
@@ -39,7 +39,7 @@
 ALL_LIBS+=" python-openstackclient oslo.rootwrap oslo.i18n"
 ALL_LIBS+=" python-ceilometerclient oslo.utils python-swiftclient"
 ALL_LIBS+=" python-neutronclient tooz ceilometermiddleware oslo.policy"
-ALL_LIBS+=" debtcollector"
+ALL_LIBS+=" debtcollector os-brick"
 
 # Generate the above list with
 # echo ${!GITREPO[@]}
diff --git a/tests/test_truefalse.sh b/tests/test_truefalse.sh
index ebd9650..2689589 100755
--- a/tests/test_truefalse.sh
+++ b/tests/test_truefalse.sh
@@ -19,7 +19,8 @@
 
     for default in True False; do
         for name in one captrue lowtrue uppertrue capyes lowyes upperyes; do
-                assert_equal "True" $(trueorfalse $default $name) "\$(trueorfalse $default $name)"
+            local msg="trueorfalse($default $name)"
+            assert_equal "True" $(trueorfalse $default $name) "$msg"
         done
     done
 
@@ -33,7 +34,8 @@
 
     for default in True False; do
         for name in zero capfalse lowfalse upperfalse capno lowno upperno; do
-            assert_equal "False" $(trueorfalse $default $name) "\$(trueorfalse $default $name)"
+            local msg="trueorfalse($default $name)"
+            assert_equal "False" $(trueorfalse $default $name) "$msg"
         done
     done
 }
diff --git a/tests/unittest.sh b/tests/unittest.sh
index 69f19b7..93aa5fc 100644
--- a/tests/unittest.sh
+++ b/tests/unittest.sh
@@ -17,6 +17,8 @@
 PASS=0
 FAILED_FUNCS=""
 
+# pass a test, printing out MSG
+#  usage: passed message
 function passed {
     local lineno=$(caller 0 | awk '{print $1}')
     local function=$(caller 0 | awk '{print $2}')
@@ -25,9 +27,11 @@
         msg="OK"
     fi
     PASS=$((PASS+1))
-    echo $function:L$lineno $msg
+    echo "PASS: $function:L$lineno $msg"
 }
 
+# fail a test, printing out MSG
+#  usage: failed message
 function failed {
     local lineno=$(caller 0 | awk '{print $1}')
     local function=$(caller 0 | awk '{print $2}')
@@ -38,10 +42,16 @@
     ERROR=$((ERROR+1))
 }
 
+# assert that string val1 equals val2, printing out msg
+#  usage: assert_equal val1 val2 msg
 function assert_equal {
     local lineno=`caller 0 | awk '{print $1}'`
     local function=`caller 0 | awk '{print $2}'`
     local msg=$3
+
+    if [ -z "$msg" ]; then
+        msg="OK"
+    fi
     if [[ "$1" != "$2" ]]; then
         FAILED_FUNCS+="$function:L$lineno\n"
         echo "ERROR: $1 != $2 in $function:L$lineno!"
@@ -49,10 +59,13 @@
         ERROR=$((ERROR+1))
     else
         PASS=$((PASS+1))
-        echo "$function:L$lineno - ok"
+        echo "PASS: $function:L$lineno - $msg"
     fi
 }
 
+# print a summary of passing and failing tests, exiting
+# with an error if we have failed tests
+#  usage: report_results
 function report_results {
     echo "$PASS Tests PASSED"
     if [[ $ERROR -gt 1 ]]; then
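
With the docstrings and the PASS:/ERROR: prefixes added above, the harness output is easier to scan in gate logs. A minimal test script built on it might look like this (the my_add helper and the relative path to unittest.sh are illustrative assumptions):

    #!/usr/bin/env bash
    # Tiny sketch of a test using tests/unittest.sh.
    TOP=$(cd $(dirname "$0")/.. && pwd)
    source $TOP/tests/unittest.sh

    function my_add {
        echo $(( $1 + $2 ))
    }

    assert_equal "4" "$(my_add 2 2)" "my_add(2 2) == 4"
    report_results
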
diff --git a/tools/build_docs.sh b/tools/build_docs.sh
index fda86c0..fa84343 100755
--- a/tools/build_docs.sh
+++ b/tools/build_docs.sh
@@ -75,7 +75,7 @@
 
 # Build list of scripts to process
 FILES=""
-for f in $(find . -name .git -prune -o \( -type f -name \*.sh -not -path \*shocco/\* -print \)); do
+for f in $(find . \( -name .git -o -name .tox \) -prune -o \( -type f -name \*.sh -not -path \*shocco/\* -print \)); do
     echo $f
     FILES+="$f "
     mkdir -p $FQ_HTML_BUILD/`dirname $f`;
diff --git a/tools/worlddump.py b/tools/worlddump.py
index d846f10..7acfb5e 100755
--- a/tools/worlddump.py
+++ b/tools/worlddump.py
@@ -106,6 +106,12 @@
             _dump_cmd("sudo cat %s" % fullpath)
 
 
+def guru_meditation_report():
+    _header("nova-compute Guru Meditation Report")
+    _dump_cmd("kill -s USR1 `pgrep nova-compute`")
+    print "guru meditation report in nova-compute log"
+
+
 def main():
     opts = get_options()
     fname = filename(opts.dir)
@@ -118,6 +124,7 @@
         network_dump()
         iptables_dump()
         compute_consoles()
+        guru_meditation_report()
 
 
 if __name__ == '__main__':
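
The new helper asks nova-compute for a Guru Meditation Report by sending it SIGUSR1; the report goes wherever that process logs (under DevStack typically the n-cpu screen window). The same thing can be triggered by hand, assuming a single nova-compute process:

    # Ask the running nova-compute for a Guru Meditation Report.
    kill -s USR1 $(pgrep nova-compute)
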
diff --git a/tools/xen/xenrc b/tools/xen/xenrc
index 43a6ce8..be6c5ca 100644
--- a/tools/xen/xenrc
+++ b/tools/xen/xenrc
@@ -14,12 +14,12 @@
 # Size of image
 VDI_MB=${VDI_MB:-5000}
 
-# Devstack now contains many components.  3GB ram is not enough to prevent
+# Devstack now contains many components.  4GB ram is not enough to prevent
 # swapping and memory fragmentation - the latter of which can cause failures
 # such as blkfront failing to plug a VBD and lead to random test fails.
 #
-# Set to 4GB so an 8GB XenServer VM can have a 1GB Dom0 and leave 3GB for VMs
-OSDOMU_MEM_MB=4096
+# Set to 6GB so an 8GB XenServer VM can have a 1GB Dom0 and leave 1GB for VMs
+OSDOMU_MEM_MB=6144
 OSDOMU_VDI_GB=8
 
 # Network mapping. Specify bridge names or network names. Network names may
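
For the memory bump above, the budget on an 8 GB XenServer host works out roughly as follows (Dom0 size comes from the comment, not from this file):

    # 8192 MB host - 1024 MB Dom0 - 6144 MB OSDOMU_MEM_MB = 1024 MB left for test VMs
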
diff --git a/tox.ini b/tox.ini
index 279dcd4..788fea9 100644
--- a/tox.ini
+++ b/tox.ini
@@ -10,19 +10,20 @@
 [testenv:bashate]
 deps = bashate
 whitelist_externals = bash
-commands = bash -c "find {toxinidir}          \
-         -not \( -type d -name .?\* -prune \) \ # prune all 'dot' dirs
-         -not \( -type d -name doc -prune \)  \ # skip documentation
-         -type f                              \ # only files
-         -not -name \*~                       \ # skip editors, readme, etc
-         -not -name \*.md                     \
-         \(                                   \
-          -name \*.sh -or                     \
-          -name \*rc -or                      \
-          -name functions\* -or               \
-          -wholename \*/inc/\* -or            \ # /inc files and
-          -wholename \*/lib/\*                \ # /lib files are shell, but
-         \)                                   \ #   have no extension
+commands = bash -c "find {toxinidir}             \
+         -not \( -type d -name .?\* -prune \)    \ # prune all 'dot' dirs
+         -not \( -type d -name doc -prune \)     \ # skip documentation
+         -not \( -type d -name shocco -prune \)  \ # skip shocco
+         -type f                                 \ # only files
+         -not -name \*~                          \ # skip editors, readme, etc
+         -not -name \*.md                        \
+         \(                                      \
+          -name \*.sh -or                        \
+          -name \*rc -or                         \
+          -name functions\* -or                  \
+          -wholename \*/inc/\* -or               \ # /inc files and
+          -wholename \*/lib/\*                   \ # /lib files are shell, but
+         \)                                      \ #   have no extension
          -print0 | xargs -0 bashate -v"
 
 [testenv:docs]
@@ -32,6 +33,10 @@
    sphinx>=1.1.2,<1.2
    pbr>=0.6,!=0.7,<1.0
    oslosphinx
+   nwdiag
+   blockdiag
+   sphinxcontrib-blockdiag
+   sphinxcontrib-nwdiag
 whitelist_externals = bash
 setenv =
   TOP_DIR={toxinidir}
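
The two tox environments touched here can be exercised locally with a standard tox invocation; nothing DevStack-specific is assumed:

    # Run the shell style check and the docs build.
    tox -e bashate
    tox -e docs
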