Merge "add gating up/down script for devstack"
diff --git a/FUTURE.rst b/FUTURE.rst
new file mode 100644
index 0000000..11bea30
--- /dev/null
+++ b/FUTURE.rst
@@ -0,0 +1,113 @@
+=============
+ Quo Vadimus
+=============
+
+Where are we going?
+
+This is a document in Devstack to outline where we are headed in the
+future. The future might be near or far, but this is where we'd like
+to be.
+
+This is intended to help people contribute, because it will be a
+little clearer whether a contribution takes us closer to or further
+from our end goal.
+
+==================
+ Default Services
+==================
+
+Devstack is designed as a development environment first. There are a
+lot of ways to compose the OpenStack services, but we do need one
+default.
+
+That should be the Compute Layer (currently Glance + Nova + Cinder +
+Neutron Core (not advanced services) + Keystone). It should be the
+base building block going forward, and the introduction point of
+people to OpenStack via Devstack.
+
+================
+ Service Howtos
+================
+
+Starting from the base building block all services included in
+OpenStack should have an overview page in the Devstack
+documentation. That should include the following:
+
+- A helpful high level overview of that service
+- What it depends on (both other OpenStack services and other system
+  components)
+- Which new daemons need to be started, including where they
+  should live
+
+This provides a map for people doing multinode testing to understand
+which portions are control plane and which should live on worker nodes.
+
+Service howto pages will start with an ugly "This team has provided
+no information about this service" notice until someone provides that
+information.
+
+===================
+ Included Services
+===================
+
+Devstack doesn't need to eat the world. Given the existence of the
+external devstack plugin architecture, the future direction is to move
+the bulk of the support code out of devstack itself and into external
+plugins.
+
+This will also promote a cleaner separation between services.
+
+=============================
+ Included Backends / Drivers
+=============================
+
+Upstream Devstack should only include Open Source backends / drivers;
+its intent is to support Open Source development of OpenStack. Proprietary
+drivers should be supported via external plugins.
+
+Just being Open Source doesn't mean it should be in upstream Devstack
+if it's not required for base development of OpenStack
+components. When in doubt, external plugins should be used.
+
+========================================
+ OpenStack Services vs. System Services
+========================================
+
+ENABLED_SERVICES is currently far too overloaded. We should have a
+separation between actual OpenStack services that you have to run
+(n-cpu, g-api) and required backends like mysql and rabbitmq.
+
+===========================
+ Splitting up of Functions
+===========================
+
+The functions-common file has grown over time, and needs to be split
+up into smaller libraries that handle specific domains.
+
+======================
+ Testing of Functions
+======================
+
+Every function in a functions file should get tests. The devstack
+testing framework is young, but we do have some unit tests for the
+tree, and those should be enhanced.
+
+==============================
+ Not Co-Gating with the World
+==============================
+
+As projects spin up functional test jobs, Devstack should not be
+co-gated with every single one of them. The Devstack team has one of
+the fastest turnarounds for blocking bugs of any OpenStack
+project.
+
+Basic service validation should be included as part of Devstack
+installation to mitigate this.
+
+============================
+ Documenting all the things
+============================
+
+Devstack started off as an explanation as much as an install
+script. We would love contributions that further enhance the
+comments and explanations about what is happening, even if it seems a
+little pedantic at times.
diff --git a/HACKING.rst b/HACKING.rst
index dcde141..b3c82a3 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -6,7 +6,7 @@
 -------
 
 DevStack is written in UNIX shell script.  It uses a number of bash-isms
-and so is limited to Bash (version 3 and up) and compatible shells.
+and so is limited to Bash (version 4 and up) and compatible shells.
 Shell script was chosen because it best illustrates the steps used to
 set up and interact with OpenStack components.
 
diff --git a/doc/source/faq.rst b/doc/source/faq.rst
index fd9c736..a449f49 100644
--- a/doc/source/faq.rst
+++ b/doc/source/faq.rst
@@ -70,6 +70,18 @@
 Q: Are there any differences between Ubuntu and Fedora support?
    A: Neutron is not fully supported prior to Fedora 18 due to a lack
    of OpenVSwitch packages.
+Q: Why can't I use another shell?
+    A: DevStack now uses some specific bash-isms that require Bash 4,
+    such as associative arrays. Simple compatibility patches were
+    accepted in the past when they were not complex, but at this point
+    no additional compatibility patches will be considered except for
+    shells providing the same array functionality, as it is deeply
+    ingrained in the repo and project management.
+Q: But, but, can't I test on OS/X?
+   A: Yes, even you, core developer who complained about this, need to
+   install bash 4 via homebrew to keep running tests on OS/X.  Get a Real
+   Operating System.  (For most of you who don't know, I am referring to
+   myself.)
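
The Bash 4 dependency called out above comes down to associative arrays.
A minimal sketch of the construct that breaks on Bash 3 (the table name
here mirrors typical DevStack usage but the entry is illustrative):

```shell
#!/usr/bin/env bash
# Associative arrays are a Bash 4 feature; on Bash 3 (e.g. the stock
# /bin/bash shipped on OS/X) "declare -A" fails with an invalid-option error.
declare -A GITDIR
GITDIR[oslo.config]=/opt/stack/oslo.config
echo "${GITDIR[oslo.config]}"
```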
 
 Operation and Configuration
 ===========================
diff --git a/doc/source/guides/devstack-with-nested-kvm.rst b/doc/source/guides/devstack-with-nested-kvm.rst
new file mode 100644
index 0000000..2538c8d
--- /dev/null
+++ b/doc/source/guides/devstack-with-nested-kvm.rst
@@ -0,0 +1,139 @@
+=======================================================
+Configure DevStack with KVM-based Nested Virtualization
+=======================================================
+
+When using virtualization technologies like KVM, one can take advantage
+of "Nested VMX" (i.e. the ability to run KVM on KVM) so that the VMs in
+cloud (Nova guests) can run relatively faster than with plain QEMU
+emulation.
+
+Kernels shipped with Linux distributions don't have this enabled by
+default. This guide outlines the configuration details to enable nested
+virtualization in KVM-based environments, and how to set up DevStack
+(which will run in a VM) to take advantage of it.
+
+
+Nested Virtualization Configuration
+===================================
+
+Configure Nested KVM for Intel-based Machines
+---------------------------------------------
+
+Procedure to enable nested KVM virtualization on Intel-based machines.
+
+Check if the nested KVM Kernel parameter is enabled:
+
+::
+
+    cat /sys/module/kvm_intel/parameters/nested
+    N
+
+Temporarily remove the KVM intel Kernel module, enable nested
+virtualization to be persistent across reboots and add the Kernel
+module back:
+
+::
+
+    sudo rmmod kvm-intel
+    sudo sh -c "echo 'options kvm-intel nested=y' >> /etc/modprobe.d/dist.conf"
+    sudo modprobe kvm-intel
+
+Ensure the Nested KVM Kernel module parameter for Intel is enabled on
+the host:
+
+::
+
+    cat /sys/module/kvm_intel/parameters/nested
+    Y
+
+    modinfo kvm_intel | grep nested
+    parm:           nested:bool
+
+Start your VM; it should now have KVM capabilities -- you can verify
+that by ensuring the `/dev/kvm` character device is present.
+
+
+Configure Nested KVM for AMD-based Machines
+--------------------------------------------
+
+Procedure to enable nested KVM virtualization on AMD-based machines.
+
+Check if the nested KVM Kernel parameter is enabled:
+
+::
+
+    cat /sys/module/kvm_amd/parameters/nested
+    0
+
+
+Temporarily remove the KVM AMD Kernel module, enable nested
+virtualization to be persistent across reboots and add the Kernel module
+back:
+
+::
+
+    sudo rmmod kvm-amd
+    sudo sh -c "echo 'options kvm-amd nested=1' >> /etc/modprobe.d/dist.conf"
+    sudo modprobe kvm-amd
+
+Ensure the Nested KVM Kernel module parameter for AMD is enabled on the
+host:
+
+::
+
+    cat /sys/module/kvm_amd/parameters/nested
+    1
+
+    modinfo kvm_amd | grep -i nested
+    parm:           nested:int
+
+To make the above value persistent across reboots, add an entry to
+/etc/modprobe.d/dist.conf so it looks as below::
+
+    cat /etc/modprobe.d/dist.conf
+    options kvm-amd nested=y
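
The Intel and AMD checks above differ only in the module name and the
reported value (Y/N vs. 1/0). A small helper can cover both; this is a
sketch rather than part of the guide's commands, and the sysfs root is
parameterized purely so it can be exercised without nested-KVM hardware:

```shell
#!/usr/bin/env bash
# Print the nested-virtualization flag for whichever KVM module is loaded.
check_nested() {
    local root=${1:-/sys/module}
    local mod param
    for mod in kvm_intel kvm_amd; do
        param=$root/$mod/parameters/nested
        # kvm_intel reports Y/N; kvm_amd reports 1/0
        [ -r "$param" ] && echo "$mod nested=$(cat "$param")"
    done
}

check_nested   # on a nested-enabled Intel host, prints e.g. "kvm_intel nested=Y"
```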
+
+
+Expose Virtualization Extensions to DevStack VM
+-----------------------------------------------
+
+Edit the VM's libvirt XML configuration via `virsh` utility:
+
+::
+
+    sudo virsh edit devstack-vm
+
+Add the below snippet to expose the host CPU features to the VM:
+
+::
+
+    <cpu mode='host-passthrough'>
+    </cpu>
+
+
+Ensure DevStack VM is Using KVM
+-------------------------------
+
+Before invoking ``stack.sh`` in the VM, ensure that KVM is enabled. This
+can be verified by checking for the presence of the file `/dev/kvm` in
+your VM. If it is present, DevStack will default to using the config
+attribute `virt_type = kvm` in `/etc/nova.conf`; otherwise, it'll fall
+back to `virt_type=qemu`, i.e. plain QEMU emulation.
+
+Optionally, to explicitly set the virtualization type used by the
+libvirt driver in Nova to KVM, the below config attribute can be set in
+DevStack's ``local.conf``:
+
+::
+
+    LIBVIRT_TYPE=kvm
+
+
+Once DevStack is configured successfully, verify that the Nova instances
+are using KVM by checking that the QEMU command line invoked by Nova
+includes the parameter `accel=kvm`, e.g.:
+
+::
+
+    ps -ef | grep -i qemu
+    root     29773     1  0 11:24 ?        00:00:00 /usr/bin/qemu-system-x86_64 -machine accel=kvm [. . .]
diff --git a/doc/source/guides/single-machine.rst b/doc/source/guides/single-machine.rst
index 17e9b9e..70287a9 100644
--- a/doc/source/guides/single-machine.rst
+++ b/doc/source/guides/single-machine.rst
@@ -108,6 +108,7 @@
     MYSQL_PASSWORD=iheartdatabases
     RABBIT_PASSWORD=flopsymopsy
     SERVICE_PASSWORD=iheartksl
+    SERVICE_TOKEN=xyzpdqlazydog
 
 Run DevStack:
 
diff --git a/doc/source/index.rst b/doc/source/index.rst
index 0763fb8..855a2d6 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -66,6 +66,7 @@
    guides/single-machine
    guides/multinode-lab
    guides/neutron
+   guides/devstack-with-nested-kvm
 
 All-In-One Single VM
 --------------------
@@ -94,6 +95,13 @@
 This guide is meant for building lab environments with a dedicated
 control node and multiple compute nodes.
 
+DevStack with KVM-based Nested Virtualization
+---------------------------------------------
+
+Procedure to set up :doc:`DevStack with KVM-based Nested Virtualization
+<guides/devstack-with-nested-kvm>`. With this setup, Nova instances
+will be more performant than with plain QEMU emulation.
+
 DevStack Documentation
 ======================
 
@@ -155,11 +163,9 @@
 * `lib/ldap <lib/ldap.html>`__
 * `lib/neutron <lib/neutron.html>`__
 * `lib/nova <lib/nova.html>`__
-* `lib/opendaylight <lib/opendaylight.html>`__
 * `lib/oslo <lib/oslo.html>`__
 * `lib/rpc\_backend <lib/rpc_backend.html>`__
 * `lib/sahara <lib/sahara.html>`__
-* `lib/stackforge <lib/stackforge.html>`__
 * `lib/swift <lib/swift.html>`__
 * `lib/tempest <lib/tempest.html>`__
 * `lib/tls <lib/tls.html>`__
@@ -176,7 +182,6 @@
 * `extras.d/70-trove.sh <extras.d/70-trove.sh.html>`__
 * `extras.d/70-tuskar.sh <extras.d/70-tuskar.sh.html>`__
 * `extras.d/70-zaqar.sh <extras.d/70-zaqar.sh.html>`__
-* `extras.d/80-opendaylight.sh <extras.d/80-opendaylight.sh.html>`__
 * `extras.d/80-tempest.sh <extras.d/80-tempest.sh.html>`__
 
 Configuration
diff --git a/doc/source/plugins.rst b/doc/source/plugins.rst
index d1f7377..5d6d3f1 100644
--- a/doc/source/plugins.rst
+++ b/doc/source/plugins.rst
@@ -16,7 +16,7 @@
 The script in ``extras.d`` is expected to be mostly a dispatcher to
 functions in a ``lib/*`` script. The scripts are named with a
 zero-padded two digits sequence number prefix to control the order that
-the scripts are called, and with a suffix of ``.sh``. DevSack reserves
+the scripts are called, and with a suffix of ``.sh``. DevStack reserves
 for itself the sequence numbers 00 through 09 and 90 through 99.
 
 Below is a template that shows handlers for the possible command-line
@@ -107,19 +107,24 @@
   sourced very early in the process. This is helpful if other plugins
   might depend on this one, and need access to global variables to do
   their work.
+
+  Your settings should include any ``enable_service`` lines required
+  by your plugin. This is especially important if you are kicking off
+  services using ``run_process`` as it only works with enabled
+  services.
+
- ``plugin.sh`` - the actual plugin. It will be executed by devstack
  during its run. The run order will follow the registration
  order of these plugins, and will occur immediately after all
  in-tree extras.d dispatch at the phase in question.  The plugin.sh
-  looks like the extras.d dispatcher above **except** it should not
-  include the is_service_enabled conditional. All external plugins are
-  always assumed to be enabled.
+  looks like the extras.d dispatcher above.
 
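
As a concrete illustration of the ``settings`` requirements above, a
plugin's ``settings`` file might look like the following (the ``foo``
service names and variables are purely hypothetical, and
``enable_service`` is a DevStack-internal helper):

```shell
# settings for a hypothetical "foo" plugin
# enable_service is required here so run_process will start these services
enable_service foo-api foo-worker

# defaults that other plugins or local.conf may override
FOO_DIR=$DEST/foo
FOO_CONF_DIR=/etc/foo
```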
 Plugins are registered by adding the following to the localrc section
 of ``local.conf``.
 
 They are added in the following format::
 
+  [[local|localrc]]
   enable_plugin <NAME> <GITURL> [GITREF]
 
- ``name`` - an arbitrary name. (ex: glusterfs, docker, zaqar, congress)
@@ -129,7 +134,7 @@
 
 An example would be as follows::
 
-  enable_plugin glusterfs https://github.com/sdague/devstack-plugins glusterfs
+  enable_plugin ec2api git://git.openstack.org/stackforge/ec2api
 
 Hypervisor
 ==========
diff --git a/extras.d/70-gantt.sh b/extras.d/70-gantt.sh
deleted file mode 100644
index ac1efba..0000000
--- a/extras.d/70-gantt.sh
+++ /dev/null
@@ -1,31 +0,0 @@
-# gantt.sh - Devstack extras script to install Gantt
-
-if is_service_enabled n-sch; then
-    disable_service gantt
-fi
-
-if is_service_enabled gantt; then
-    if [[ "$1" == "source" ]]; then
-        # Initial source
-        source $TOP_DIR/lib/gantt
-    elif [[ "$1" == "stack" && "$2" == "install" ]]; then
-        echo_summary "Installing Gantt"
-        install_gantt
-        cleanup_gantt
-    elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
-        echo_summary "Configuring Gantt"
-        configure_gantt
-
-    elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
-        # Initialize gantt
-        init_gantt
-
-        # Start gantt
-        echo_summary "Starting Gantt"
-        start_gantt
-    fi
-
-    if [[ "$1" == "unstack" ]]; then
-        stop_gantt
-    fi
-fi
diff --git a/extras.d/70-tuskar.sh b/extras.d/70-tuskar.sh
deleted file mode 100644
index 6e26db2..0000000
--- a/extras.d/70-tuskar.sh
+++ /dev/null
@@ -1,205 +0,0 @@
-# Install and start the **Tuskar** service
-#
-# To enable, add the following to your localrc
-#
-# enable_service tuskar
-# enable_service tuskar-api
-
-
-if is_service_enabled tuskar; then
-    if [[ "$1" == "source" ]]; then
-        # Initial source, do nothing as functions sourced
-        # are below rather than in lib/tuskar
-        echo_summary "source extras tuskar"
-    elif [[ "$1" == "stack" && "$2" == "install" ]]; then
-        echo_summary "Installing Tuskar"
-        install_tuskarclient
-        install_tuskar
-    elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
-        echo_summary "Configuring Tuskar"
-        configure_tuskar
-        configure_tuskarclient
-
-        if is_service_enabled key; then
-            create_tuskar_accounts
-        fi
-
-    elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
-        echo_summary "Initializing Tuskar"
-        init_tuskar
-        start_tuskar
-    fi
-
-    if [[ "$1" == "unstack" ]]; then
-        stop_tuskar
-    fi
-fi
-
-# library code (equivalent to lib/tuskar)
-# ---------
-# - install_tuskarclient
-# - install_tuskar
-# - configure_tuskarclient
-# - configure_tuskar
-# - init_tuskar
-# - start_tuskar
-# - stop_tuskar
-# - cleanup_tuskar
-
-# Save trace setting
-XTRACE=$(set +o | grep xtrace)
-set +o xtrace
-
-
-# Defaults
-# --------
-
-# tuskar repos
-TUSKAR_REPO=${TUSKAR_REPO:-${GIT_BASE}/openstack/tuskar.git}
-TUSKAR_BRANCH=${TUSKAR_BRANCH:-master}
-
-TUSKARCLIENT_REPO=${TUSKARCLIENT_REPO:-${GIT_BASE}/openstack/python-tuskarclient.git}
-TUSKARCLIENT_BRANCH=${TUSKARCLIENT_BRANCH:-master}
-
-# set up default directories
-TUSKAR_DIR=$DEST/tuskar
-TUSKARCLIENT_DIR=$DEST/python-tuskarclient
-TUSKAR_AUTH_CACHE_DIR=${TUSKAR_AUTH_CACHE_DIR:-/var/cache/tuskar}
-TUSKAR_STANDALONE=$(trueorfalse False TUSKAR_STANDALONE)
-TUSKAR_CONF_DIR=/etc/tuskar
-TUSKAR_CONF=$TUSKAR_CONF_DIR/tuskar.conf
-TUSKAR_API_HOST=${TUSKAR_API_HOST:-$HOST_IP}
-TUSKAR_API_PORT=${TUSKAR_API_PORT:-8585}
-
-# Tell Tempest this project is present
-TEMPEST_SERVICES+=,tuskar
-
-# Functions
-# ---------
-
-# Test if any Tuskar services are enabled
-# is_tuskar_enabled
-function is_tuskar_enabled {
-    [[ ,${ENABLED_SERVICES} =~ ,"tuskar-" ]] && return 0
-    return 1
-}
-
-# cleanup_tuskar() - Remove residual data files, anything left over from previous
-# runs that a clean run would need to clean up
-function cleanup_tuskar {
-    sudo rm -rf $TUSKAR_AUTH_CACHE_DIR
-}
-
-# configure_tuskar() - Set config files, create data dirs, etc
-function configure_tuskar {
-    setup_develop $TUSKAR_DIR
-
-    if [[ ! -d $TUSKAR_CONF_DIR ]]; then
-        sudo mkdir -p $TUSKAR_CONF_DIR
-    fi
-    sudo chown $STACK_USER $TUSKAR_CONF_DIR
-    # remove old config files
-    rm -f $TUSKAR_CONF_DIR/tuskar-*.conf
-
-    TUSKAR_POLICY_FILE=$TUSKAR_CONF_DIR/policy.json
-
-    cp $TUSKAR_DIR/etc/tuskar/policy.json $TUSKAR_POLICY_FILE
-    cp $TUSKAR_DIR/etc/tuskar/tuskar.conf.sample $TUSKAR_CONF
-
-    # common options
-    iniset $TUSKAR_CONF database connection `database_connection_url tuskar`
-
-    # logging
-    iniset $TUSKAR_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
-    iniset $TUSKAR_CONF DEFAULT use_syslog $SYSLOG
-    if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ]; then
-        # Add color to logging output
-        setup_colorized_logging $TUSKAR_CONF DEFAULT tenant user
-    fi
-
-    configure_auth_token_middleware $TUSKAR_CONF tuskar $TUSKAR_AUTH_CACHE_DIR
-
-    if is_ssl_enabled_service "key"; then
-        iniset $TUSKAR_CONF clients_keystone ca_file $SSL_BUNDLE_FILE
-    fi
-
-    iniset $TUSKAR_CONF tuskar_api bind_port $TUSKAR_API_PORT
-
-}
-
-# init_tuskar() - Initialize database
-function init_tuskar {
-
-    # (re)create tuskar database
-    recreate_database tuskar
-
-    tuskar-dbsync --config-file $TUSKAR_CONF
-    create_tuskar_cache_dir
-}
-
-# create_tuskar_cache_dir() - Part of the init_tuskar() process
-function create_tuskar_cache_dir {
-    # Create cache dirs
-    sudo mkdir -p $TUSKAR_AUTH_CACHE_DIR
-    sudo chown $STACK_USER $TUSKAR_AUTH_CACHE_DIR
-}
-
-# install_tuskarclient() - Collect source and prepare
-function install_tuskarclient {
-    git_clone $TUSKARCLIENT_REPO $TUSKARCLIENT_DIR $TUSKARCLIENT_BRANCH
-    setup_develop $TUSKARCLIENT_DIR
-}
-
-# configure_tuskarclient() - Set config files, create data dirs, etc
-function configure_tuskarclient {
-    setup_develop $TUSKARCLIENT_DIR
-}
-
-# install_tuskar() - Collect source and prepare
-function install_tuskar {
-    git_clone $TUSKAR_REPO $TUSKAR_DIR $TUSKAR_BRANCH
-}
-
-# start_tuskar() - Start running processes, including screen
-function start_tuskar {
-    run_process tuskar-api "tuskar-api --config-file=$TUSKAR_CONF"
-}
-
-# stop_tuskar() - Stop running processes
-function stop_tuskar {
-    # Kill the screen windows
-    local serv
-    for serv in tuskar-api; do
-        stop_process $serv
-    done
-}
-
-# create_tuskar_accounts() - Set up common required tuskar accounts
-function create_tuskar_accounts {
-    # migrated from files/keystone_data.sh
-    local service_tenant=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
-    local admin_role=$(openstack role list | awk "/ admin / { print \$2 }")
-
-    local tuskar_user=$(get_or_create_user "tuskar" \
-        "$SERVICE_PASSWORD" $service_tenant)
-    get_or_add_user_role $admin_role $tuskar_user $service_tenant
-
-    if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
-
-        local tuskar_service=$(get_or_create_service "tuskar" \
-                "management" "Tuskar Management Service")
-        get_or_create_endpoint $tuskar_service \
-            "$REGION_NAME" \
-            "$SERVICE_PROTOCOL://$TUSKAR_API_HOST:$TUSKAR_API_PORT" \
-            "$SERVICE_PROTOCOL://$TUSKAR_API_HOST:$TUSKAR_API_PORT" \
-            "$SERVICE_PROTOCOL://$TUSKAR_API_HOST:$TUSKAR_API_PORT"
-    fi
-}
-
-# Restore xtrace
-$XTRACE
-
-# Tell emacs to use shell-script-mode
-## Local variables:
-## mode: shell-script
-## End:
diff --git a/extras.d/80-opendaylight.sh b/extras.d/80-opendaylight.sh
deleted file mode 100644
index b673777..0000000
--- a/extras.d/80-opendaylight.sh
+++ /dev/null
@@ -1,76 +0,0 @@
-# opendaylight.sh - DevStack extras script
-
-if is_service_enabled odl-server odl-compute; then
-    # Initial source
-    [[ "$1" == "source" ]] && source $TOP_DIR/lib/opendaylight
-fi
-
-if is_service_enabled odl-server; then
-    if [[ "$1" == "source" ]]; then
-        # no-op
-        :
-    elif [[ "$1" == "stack" && "$2" == "install" ]]; then
-        install_opendaylight
-        configure_opendaylight
-        init_opendaylight
-    elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
-        configure_ml2_odl
-        # This has to start before Neutron
-        start_opendaylight
-    elif [[ "$1" == "stack" && "$2" == "post-extra" ]]; then
-        # no-op
-        :
-    fi
-
-    if [[ "$1" == "unstack" ]]; then
-        stop_opendaylight
-        cleanup_opendaylight
-    fi
-
-    if [[ "$1" == "clean" ]]; then
-        # no-op
-        :
-    fi
-fi
-
-if is_service_enabled odl-compute; then
-    if [[ "$1" == "source" ]]; then
-        # no-op
-        :
-    elif [[ "$1" == "stack" && "$2" == "install" ]]; then
-        install_opendaylight-compute
-    elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
-        if is_service_enabled nova; then
-            create_nova_conf_neutron
-        fi
-    elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
-        echo_summary "Initializing OpenDaylight"
-        ODL_LOCAL_IP=${ODL_LOCAL_IP:-$HOST_IP}
-        ODL_MGR_PORT=${ODL_MGR_PORT:-6640}
-        read ovstbl <<< $(sudo ovs-vsctl get Open_vSwitch . _uuid)
-        sudo ovs-vsctl set-manager tcp:$ODL_MGR_IP:$ODL_MGR_PORT
-        if [[ -n "$ODL_PROVIDER_MAPPINGS" ]] && [[ "$ENABLE_TENANT_VLANS" == "True" ]]; then
-            sudo ovs-vsctl set Open_vSwitch $ovstbl \
-                other_config:provider_mappings=$ODL_PROVIDER_MAPPINGS
-        fi
-        sudo ovs-vsctl set Open_vSwitch $ovstbl other_config:local_ip=$ODL_LOCAL_IP
-    elif [[ "$1" == "stack" && "$2" == "post-extra" ]]; then
-        # no-op
-        :
-    fi
-
-    if [[ "$1" == "unstack" ]]; then
-        sudo ovs-vsctl del-manager
-        BRIDGES=$(sudo ovs-vsctl list-br)
-        for bridge in $BRIDGES ; do
-            sudo ovs-vsctl del-controller $bridge
-        done
-
-        stop_opendaylight-compute
-    fi
-
-    if [[ "$1" == "clean" ]]; then
-        # no-op
-        :
-    fi
-fi
diff --git a/files/debs/general b/files/debs/general
index e824d23..4050191 100644
--- a/files/debs/general
+++ b/files/debs/general
@@ -27,3 +27,4 @@
 libffi-dev
 libssl-dev # for pyOpenSSL
 gettext  # used for compiling message catalogs
+openjdk-7-jre-headless  # NOPRIME
diff --git a/files/debs/neutron b/files/debs/neutron
index 5a59b22..3f4b6d2 100644
--- a/files/debs/neutron
+++ b/files/debs/neutron
@@ -6,6 +6,7 @@
 libmysqlclient-dev  # testonly
 mysql-server #NOPRIME
 sudo
+postgresql-server-dev-all       # testonly
 python-iso8601
 python-paste
 python-routes
diff --git a/files/rpms-suse/neutron b/files/rpms-suse/neutron
index 50ee145..66d6e4c 100644
--- a/files/rpms-suse/neutron
+++ b/files/rpms-suse/neutron
@@ -5,6 +5,7 @@
 iptables
 iputils
 mariadb # NOPRIME
+postgresql-devel        # testonly
 python-eventlet
 python-greenlet
 python-iso8601
diff --git a/files/rpms/general b/files/rpms/general
index 13c8a87..6f22391 100644
--- a/files/rpms/general
+++ b/files/rpms/general
@@ -26,3 +26,4 @@
 libyaml-devel
 gettext  # used for compiling message catalogs
 net-tools
+java-1.7.0-openjdk-headless  # NOPRIME
diff --git a/files/rpms/neutron b/files/rpms/neutron
index 59152d6..d11dab7 100644
--- a/files/rpms/neutron
+++ b/files/rpms/neutron
@@ -9,6 +9,7 @@
 mysql-devel  # testonly
 mysql-server # NOPRIME
 openvswitch # NOPRIME
+postgresql-devel        # testonly
 python-eventlet
 python-greenlet
 python-iso8601
diff --git a/files/rpms/qpid b/files/rpms/qpid
index 9e3f10a..c5e2699 100644
--- a/files/rpms/qpid
+++ b/files/rpms/qpid
@@ -1,4 +1,4 @@
 qpid-proton-c-devel # NOPRIME
 python-qpid-proton # NOPRIME
 cyrus-sasl-lib # NOPRIME
-
+cyrus-sasl-plain # NOPRIME
diff --git a/functions b/functions
index 5b3a8ea..2f976cf 100644
--- a/functions
+++ b/functions
@@ -13,6 +13,7 @@
 # Include the common functions
 FUNC_DIR=$(cd $(dirname "${BASH_SOURCE:-$0}") && pwd)
 source ${FUNC_DIR}/functions-common
+source ${FUNC_DIR}/inc/python
 
 # Save trace setting
 XTRACE=$(set +o | grep xtrace)
diff --git a/functions-common b/functions-common
index b92fa55..6beb670 100644
--- a/functions-common
+++ b/functions-common
@@ -15,7 +15,6 @@
 # - OpenStack Functions
 # - Package Functions
 # - Process Functions
-# - Python Functions
 # - Service Functions
 # - System Functions
 #
@@ -860,17 +859,17 @@
 }
 
 # Gets or creates user
-# Usage: get_or_create_user <username> <password> <project> [<email> [<domain>]]
+# Usage: get_or_create_user <username> <password> [<email> [<domain>]]
 function get_or_create_user {
-    if [[ ! -z "$4" ]]; then
-        local email="--email=$4"
+    if [[ ! -z "$3" ]]; then
+        local email="--email=$3"
     else
         local email=""
     fi
     local os_cmd="openstack"
     local domain=""
-    if [[ ! -z "$5" ]]; then
-        domain="--domain=$5"
+    if [[ ! -z "$4" ]]; then
+        domain="--domain=$4"
         os_cmd="$os_cmd --os-url=$KEYSTONE_SERVICE_URI_V3 --os-identity-api-version=3"
     fi
     # Gets user id
@@ -879,7 +878,6 @@
         $os_cmd user create \
             $1 \
             --password "$2" \
-            --project $3 \
             $email \
             $domain \
             --or-show \
@@ -915,9 +913,9 @@
     echo $role_id
 }
 
-# Gets or adds user role
-# Usage: get_or_add_user_role <role> <user> <project>
-function get_or_add_user_role {
+# Gets or adds user role to project
+# Usage: get_or_add_user_project_role <role> <user> <project>
+function get_or_add_user_project_role {
     # Gets user role id
     local user_role_id=$(openstack role list \
         --user $2 \
@@ -1208,7 +1206,7 @@
     if is_ubuntu; then
         apt_get purge "$@"
     elif is_fedora; then
-        sudo $YUM remove -y "$@" ||:
+        sudo ${YUM:-yum} remove -y "$@" ||:
     elif is_suse; then
         sudo zypper rm "$@"
     else
@@ -1229,7 +1227,7 @@
     # https://bugzilla.redhat.com/show_bug.cgi?id=965567
     $sudo http_proxy=$http_proxy https_proxy=$https_proxy \
         no_proxy=$no_proxy \
-        $YUM install -y "$@" 2>&1 | \
+        ${YUM:-yum} install -y "$@" 2>&1 | \
         awk '
             BEGIN { fail=0 }
             /No package/ { fail=1 }
@@ -1239,7 +1237,7 @@
 
     # also ensure we catch a yum failure
     if [[ ${PIPESTATUS[0]} != 0 ]]; then
-        die $LINENO "$YUM install failure"
+        die $LINENO "${YUM:-yum} install failure"
     fi
 }
 
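
The switch from ``$YUM`` to ``${YUM:-yum}`` above relies on Bash
default-value expansion, so these helpers keep working even when ``YUM``
was never set. A minimal illustration of the pattern:

```shell
#!/usr/bin/env bash
# ${VAR:-default} expands to $VAR when it is set and non-empty,
# otherwise to "default" -- without assigning anything to VAR.
unset YUM
echo "${YUM:-yum}"    # -> yum
YUM=dnf               # e.g. a host that overrides the package tool
echo "${YUM:-yum}"    # -> dnf
```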
@@ -1590,204 +1588,6 @@
 }
 
 
-# Python Functions
-# ================
-
-# Get the path to the pip command.
-# get_pip_command
-function get_pip_command {
-    which pip || which pip-python
-
-    if [ $? -ne 0 ]; then
-        die $LINENO "Unable to find pip; cannot continue"
-    fi
-}
-
-# Get the path to the direcotry where python executables are installed.
-# get_python_exec_prefix
-function get_python_exec_prefix {
-    if is_fedora || is_suse; then
-        echo "/usr/bin"
-    else
-        echo "/usr/local/bin"
-    fi
-}
-
-# Wrapper for ``pip install`` to set cache and proxy environment variables
-# Uses globals ``OFFLINE``, ``TRACK_DEPENDS``, ``*_proxy``
-# pip_install package [package ...]
-function pip_install {
-    local xtrace=$(set +o | grep xtrace)
-    set +o xtrace
-    local offline=${OFFLINE:-False}
-    if [[ "$offline" == "True" || -z "$@" ]]; then
-        $xtrace
-        return
-    fi
-
-    if [[ -z "$os_PACKAGE" ]]; then
-        GetOSVersion
-    fi
-    if [[ $TRACK_DEPENDS = True && ! "$@" =~ virtualenv ]]; then
-        # TRACK_DEPENDS=True installation creates a circular dependency when
-        # we attempt to install virtualenv into a virualenv, so we must global
-        # that installation.
-        source $DEST/.venv/bin/activate
-        local cmd_pip=$DEST/.venv/bin/pip
-        local sudo_pip="env"
-    else
-        local cmd_pip=$(get_pip_command)
-        local sudo_pip="sudo -H"
-    fi
-
-    local pip_version=$(python -c "import pip; \
-                        print(pip.__version__.strip('.')[0])")
-    if (( pip_version<6 )); then
-        die $LINENO "Currently installed pip version ${pip_version} does not" \
-            "meet minimum requirements (>=6)."
-    fi
-
-    $xtrace
-    $sudo_pip \
-        http_proxy=${http_proxy:-} \
-        https_proxy=${https_proxy:-} \
-        no_proxy=${no_proxy:-} \
-        $cmd_pip install \
-        $@
-
-    INSTALL_TESTONLY_PACKAGES=$(trueorfalse False INSTALL_TESTONLY_PACKAGES)
-    if [[ "$INSTALL_TESTONLY_PACKAGES" == "True" ]]; then
-        local test_req="$@/test-requirements.txt"
-        if [[ -e "$test_req" ]]; then
-            $sudo_pip \
-                http_proxy=${http_proxy:-} \
-                https_proxy=${https_proxy:-} \
-                no_proxy=${no_proxy:-} \
-                $cmd_pip install \
-                -r $test_req
-        fi
-    fi
-}
-
-# should we use this library from their git repo, or should we let it
-# get pulled in via pip dependencies.
-function use_library_from_git {
-    local name=$1
-    local enabled=1
-    [[ ,${LIBS_FROM_GIT}, =~ ,${name}, ]] && enabled=0
-    return $enabled
-}
-
-# setup a library by name. If we are trying to use the library from
-# git, we'll do a git based install, otherwise we'll punt and the
-# library should be installed by a requirements pull from another
-# project.
-function setup_lib {
-    local name=$1
-    local dir=${GITDIR[$name]}
-    setup_install $dir
-}
-
-# setup a library by name in editiable mode. If we are trying to use
-# the library from git, we'll do a git based install, otherwise we'll
-# punt and the library should be installed by a requirements pull from
-# another project.
-#
-# use this for non namespaced libraries
-function setup_dev_lib {
-    local name=$1
-    local dir=${GITDIR[$name]}
-    setup_develop $dir
-}
-
-# this should be used if you want to install globally, all libraries should
-# use this, especially *oslo* ones
-function setup_install {
-    local project_dir=$1
-    setup_package_with_req_sync $project_dir
-}
-
-# this should be used for projects which run services, like all services
-function setup_develop {
-    local project_dir=$1
-    setup_package_with_req_sync $project_dir -e
-}
-
-# determine if a project as specified by directory is in
-# projects.txt. This will not be an exact match because we throw away
-# the namespacing when we clone, but it should be good enough in all
-# practical ways.
-function is_in_projects_txt {
-    local project_dir=$1
-    local project_name=$(basename $project_dir)
-    return grep "/$project_name\$" $REQUIREMENTS_DIR/projects.txt >/dev/null
-}
-
-# ``pip install -e`` the package, which processes the dependencies
-# using pip before running `setup.py develop`
-#
-# Updates the dependencies in project_dir from the
-# openstack/requirements global list before installing anything.
-#
-# Uses globals ``TRACK_DEPENDS``, ``REQUIREMENTS_DIR``, ``UNDO_REQUIREMENTS``
-# setup_develop directory
-function setup_package_with_req_sync {
-    local project_dir=$1
-    local flags=$2
-
-    # Don't update repo if local changes exist
-    # Don't use buggy "git diff --quiet"
-    # ``errexit`` requires us to trap the exit code when the repo is changed
-    local update_requirements=$(cd $project_dir && git diff --exit-code >/dev/null || echo "changed")
-
-    if [[ $update_requirements != "changed" ]]; then
-        if [[ "$REQUIREMENTS_MODE" == "soft" ]]; then
-            if is_in_projects_txt $project_dir; then
-                (cd $REQUIREMENTS_DIR; \
-                    python update.py $project_dir)
-            else
-                # soft update projects not found in requirements project.txt
-                (cd $REQUIREMENTS_DIR; \
-                    python update.py -s $project_dir)
-            fi
-        else
-            (cd $REQUIREMENTS_DIR; \
-                python update.py $project_dir)
-        fi
-    fi
-
-    setup_package $project_dir $flags
-
-    # We've just gone and possibly modified the user's source tree in an
-    # automated way, which is considered bad form if it's a development
-    # tree because we've screwed up their next git checkin. So undo it.
-    #
-    # However... there are some circumstances, like running in the gate
-    # where we really really want the overridden version to stick. So provide
-    # a variable that tells us whether or not we should UNDO the requirements
-    # changes (this will be set to False in the OpenStack ci gate)
-    if [ $UNDO_REQUIREMENTS = "True" ]; then
-        if [[ $update_requirements != "changed" ]]; then
-            (cd $project_dir && git reset --hard)
-        fi
-    fi
-}
-
-# ``pip install -e`` the package, which processes the dependencies
-# using pip before running `setup.py develop`
-# Uses globals ``STACK_USER``
-# setup_develop_no_requirements_update directory
-function setup_package {
-    local project_dir=$1
-    local flags=$2
-
-    pip_install $flags $project_dir
-    # ensure that further actions can do things like setup.py sdist
-    if [[ "$flags" == "-e" ]]; then
-        safe_chown -R $STACK_USER $1/*.egg-info
-    fi
-}
-
 # Plugin Functions
 # =================
 
diff --git a/inc/python b/inc/python
new file mode 100644
index 0000000..0348cb3
--- /dev/null
+++ b/inc/python
@@ -0,0 +1,223 @@
+#!/bin/bash
+#
+# **inc/python** - Python-related functions
+#
+# Support for pip/setuptools interfaces and virtual environments
+#
+# External functions used:
+# - GetOSVersion
+# - is_fedora
+# - is_suse
+# - safe_chown
+
+# Save trace setting
+INC_PY_TRACE=$(set +o | grep xtrace)
+set +o xtrace
+
+
+# Python Functions
+# ================
+
+# Get the path to the pip command.
+# get_pip_command
+function get_pip_command {
+    which pip || which pip-python
+
+    if [ $? -ne 0 ]; then
+        die $LINENO "Unable to find pip; cannot continue"
+    fi
+}
+
+# Get the path to the directory where python executables are installed.
+# get_python_exec_prefix
+function get_python_exec_prefix {
+    if is_fedora || is_suse; then
+        echo "/usr/bin"
+    else
+        echo "/usr/local/bin"
+    fi
+}
+
+# Wrapper for ``pip install`` to set cache and proxy environment variables
+# Uses globals ``INSTALL_TESTONLY_PACKAGES``, ``OFFLINE``, ``TRACK_DEPENDS``,
+# ``*_proxy``
+# pip_install package [package ...]
+function pip_install {
+    local xtrace=$(set +o | grep xtrace)
+    set +o xtrace
+    local offline=${OFFLINE:-False}
+    if [[ "$offline" == "True" || -z "$@" ]]; then
+        $xtrace
+        return
+    fi
+
+    if [[ -z "$os_PACKAGE" ]]; then
+        GetOSVersion
+    fi
+    if [[ $TRACK_DEPENDS = True && ! "$@" =~ virtualenv ]]; then
+    # TRACK_DEPENDS=True installation creates a circular dependency when
+    # we attempt to install virtualenv into a virtualenv, so we must
+    # install it globally in that case.
+        source $DEST/.venv/bin/activate
+        local cmd_pip=$DEST/.venv/bin/pip
+        local sudo_pip="env"
+    else
+        local cmd_pip=$(get_pip_command)
+        local sudo_pip="sudo -H"
+    fi
+
+    local pip_version=$(python -c "import pip; \
+                        print(pip.__version__.split('.')[0])")
+    if (( pip_version<6 )); then
+        die $LINENO "Currently installed pip version ${pip_version} does not" \
+            "meet minimum requirements (>=6)."
+    fi
+
+    $xtrace
+    $sudo_pip \
+        http_proxy=${http_proxy:-} \
+        https_proxy=${https_proxy:-} \
+        no_proxy=${no_proxy:-} \
+        $cmd_pip install \
+        $@
+
+    INSTALL_TESTONLY_PACKAGES=$(trueorfalse False INSTALL_TESTONLY_PACKAGES)
+    if [[ "$INSTALL_TESTONLY_PACKAGES" == "True" ]]; then
+        local test_req="$@/test-requirements.txt"
+        if [[ -e "$test_req" ]]; then
+            $sudo_pip \
+                http_proxy=${http_proxy:-} \
+                https_proxy=${https_proxy:-} \
+                no_proxy=${no_proxy:-} \
+                $cmd_pip install \
+                -r $test_req
+        fi
+    fi
+}
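As an aside, the major-version extraction above round-trips through python; a minimal pure-shell sketch (illustrative only, not part of this patch) gives the same result and, unlike slicing the first character of the version string, also handles multi-digit majors:

```shell
# Extract the major component of a pip-style version string.
# ${ver%%.*} removes everything from the first '.' onward.
ver="6.1.1"
echo "${ver%%.*}"

ver="10.0.1"
echo "${ver%%.*}"    # a first-character slice would wrongly give 1 here
```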
+
+# should we use this library from its git repo, or let it get pulled
+# in via pip dependencies?
+function use_library_from_git {
+    local name=$1
+    local enabled=1
+    [[ ,${LIBS_FROM_GIT}, =~ ,${name}, ]] && enabled=0
+    return $enabled
+}
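The comma-wrapping trick above turns a substring match into an exact-element match against the comma-separated ``LIBS_FROM_GIT`` list. A standalone sketch of the same test (the ``in_comma_list`` helper is a hypothetical name, for illustration only):

```shell
# Exact-element membership test for a comma-separated list: wrapping both
# the list and the candidate name in commas prevents partial-name matches.
function in_comma_list {
    [[ ,${1}, =~ ,${2}, ]]
}

LIBS_FROM_GIT="oslo.config,python-novaclient"
in_comma_list "$LIBS_FROM_GIT" "oslo.config" && echo "from git"
in_comma_list "$LIBS_FROM_GIT" "oslo.db" || echo "from pip"
```

Note that dots in the name are regex wildcards inside ``[[ =~ ]]``, which is good enough in practice for library names.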
+
+# setup a library by name. If we are trying to use the library from
+# git, we'll do a git based install, otherwise we'll punt and the
+# library should be installed by a requirements pull from another
+# project.
+function setup_lib {
+    local name=$1
+    local dir=${GITDIR[$name]}
+    setup_install $dir
+}
+
+# setup a library by name in editable mode. If we are trying to use
+# the library from git, we'll do a git based install, otherwise we'll
+# punt and the library should be installed by a requirements pull from
+# another project.
+#
+# use this for non namespaced libraries
+function setup_dev_lib {
+    local name=$1
+    local dir=${GITDIR[$name]}
+    setup_develop $dir
+}
+
+# this should be used if you want to install globally, all libraries should
+# use this, especially *oslo* ones
+function setup_install {
+    local project_dir=$1
+    setup_package_with_req_sync $project_dir
+}
+
+# this should be used for projects which run services, like all services
+function setup_develop {
+    local project_dir=$1
+    setup_package_with_req_sync $project_dir -e
+}
+
+# determine if a project as specified by directory is in
+# projects.txt. This will not be an exact match because we throw away
+# the namespacing when we clone, but it should be good enough in all
+# practical ways.
+function is_in_projects_txt {
+    local project_dir=$1
+    local project_name=$(basename $project_dir)
+    grep -q "/$project_name\$" $REQUIREMENTS_DIR/projects.txt
+}
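The trailing ``\$`` anchor in that grep is what keeps a short name such as ``nova`` from matching a longer project like ``python-novaclient``. A self-contained sketch against a throwaway projects list (the temp file stands in for projects.txt):

```shell
# Anchored grep: "/<name>$" must match at end of line, immediately after
# a slash, so short names do not match longer project names.
projects_txt=$(mktemp)
printf '%s\n' "openstack/nova" "openstack/python-novaclient" > "$projects_txt"

grep -q "/nova\$" "$projects_txt" && echo "nova listed"
grep -q "/client\$" "$projects_txt" || echo "client not listed"

rm -f "$projects_txt"
```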
+
+# ``pip install -e`` the package, which processes the dependencies
+# using pip before running `setup.py develop`
+#
+# Updates the dependencies in project_dir from the
+# openstack/requirements global list before installing anything.
+#
+# Uses globals ``TRACK_DEPENDS``, ``REQUIREMENTS_DIR``, ``UNDO_REQUIREMENTS``
+# setup_develop directory
+function setup_package_with_req_sync {
+    local project_dir=$1
+    local flags=$2
+
+    # Don't update repo if local changes exist
+    # Don't use buggy "git diff --quiet"
+    # ``errexit`` requires us to trap the exit code when the repo is changed
+    local update_requirements=$(cd $project_dir && git diff --exit-code >/dev/null || echo "changed")
+
+    if [[ $update_requirements != "changed" ]]; then
+        if [[ "$REQUIREMENTS_MODE" == "soft" ]]; then
+            if is_in_projects_txt $project_dir; then
+                (cd $REQUIREMENTS_DIR; \
+                    python update.py $project_dir)
+            else
+                # soft update projects not found in requirements project.txt
+                (cd $REQUIREMENTS_DIR; \
+                    python update.py -s $project_dir)
+            fi
+        else
+            (cd $REQUIREMENTS_DIR; \
+                python update.py $project_dir)
+        fi
+    fi
+
+    setup_package $project_dir $flags
+
+    # We've just gone and possibly modified the user's source tree in an
+    # automated way, which is considered bad form if it's a development
+    # tree because we've screwed up their next git checkin. So undo it.
+    #
+    # However... there are some circumstances, like running in the gate
+    # where we really really want the overridden version to stick. So provide
+    # a variable that tells us whether or not we should UNDO the requirements
+    # changes (this will be set to False in the OpenStack ci gate)
+    if [ $UNDO_REQUIREMENTS = "True" ]; then
+        if [[ $update_requirements != "changed" ]]; then
+            (cd $project_dir && git reset --hard)
+        fi
+    fi
+}
+
+# ``pip install -e`` the package, which processes the dependencies
+# using pip before running `setup.py develop`
+# Uses globals ``STACK_USER``
+# setup_develop_no_requirements_update directory
+function setup_package {
+    local project_dir=$1
+    local flags=$2
+
+    pip_install $flags $project_dir
+    # ensure that further actions can do things like setup.py sdist
+    if [[ "$flags" == "-e" ]]; then
+        safe_chown -R $STACK_USER $1/*.egg-info
+    fi
+}
+
+
+# Restore xtrace
+$INC_PY_TRACE
+
+# Local variables:
+# mode: shell-script
+# End:
diff --git a/lib/ceilometer b/lib/ceilometer
index 5d5b987..698e8b0 100644
--- a/lib/ceilometer
+++ b/lib/ceilometer
@@ -105,14 +105,10 @@
 # SERVICE_TENANT_NAME  ceilometer   ResellerAdmin (if Swift is enabled)
 function create_ceilometer_accounts {
 
-    local service_tenant=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
-    local admin_role=$(openstack role list | awk "/ admin / { print \$2 }")
-
     # Ceilometer
     if [[ "$ENABLED_SERVICES" =~ "ceilometer-api" ]]; then
-        local ceilometer_user=$(get_or_create_user "ceilometer" \
-            "$SERVICE_PASSWORD" $service_tenant)
-        get_or_add_user_role $admin_role $ceilometer_user $service_tenant
+
+        create_service_user "ceilometer"
 
         if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
             local ceilometer_service=$(get_or_create_service "ceilometer" \
@@ -125,7 +121,7 @@
         fi
         if is_service_enabled swift; then
             # Ceilometer needs ResellerAdmin role to access swift account stats.
-            get_or_add_user_role "ResellerAdmin" "ceilometer" $SERVICE_TENANT_NAME
+            get_or_add_user_project_role "ResellerAdmin" "ceilometer" $SERVICE_TENANT_NAME
         fi
     fi
 }
@@ -190,6 +186,7 @@
     iniset $CEILOMETER_CONF DEFAULT policy_file $CEILOMETER_CONF_DIR/policy.json
 
     cp $CEILOMETER_DIR/etc/ceilometer/pipeline.yaml $CEILOMETER_CONF_DIR
+    cp $CEILOMETER_DIR/etc/ceilometer/event_pipeline.yaml $CEILOMETER_CONF_DIR
     cp $CEILOMETER_DIR/etc/ceilometer/api_paste.ini $CEILOMETER_CONF_DIR
     cp $CEILOMETER_DIR/etc/ceilometer/event_definitions.yaml $CEILOMETER_CONF_DIR
 
diff --git a/lib/ceph b/lib/ceph
index 77b5726..a6b8cc8 100644
--- a/lib/ceph
+++ b/lib/ceph
@@ -142,8 +142,8 @@
 }
 
 function cleanup_ceph_embedded {
-    sudo pkill -f ceph-mon
-    sudo pkill -f ceph-osd
+    sudo killall -w -9 ceph-mon
+    sudo killall -w -9 ceph-osd
     sudo rm -rf ${CEPH_DATA_DIR}/*/*
     if egrep -q ${CEPH_DATA_DIR} /proc/mounts; then
         sudo umount ${CEPH_DATA_DIR}
diff --git a/lib/cinder b/lib/cinder
index 08f5874..17a0cc3 100644
--- a/lib/cinder
+++ b/lib/cinder
@@ -330,15 +330,10 @@
 # Migrated from keystone_data.sh
 function create_cinder_accounts {
 
-    local service_tenant=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
-    local admin_role=$(openstack role list | awk "/ admin / { print \$2 }")
-
     # Cinder
     if [[ "$ENABLED_SERVICES" =~ "c-api" ]]; then
 
-        local cinder_user=$(get_or_create_user "cinder" \
-            "$SERVICE_PASSWORD" $service_tenant)
-        get_or_add_user_role $admin_role $cinder_user $service_tenant
+        create_service_user "cinder"
 
         if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
 
@@ -454,10 +449,7 @@
         _configure_tgt_for_config_d
         if is_ubuntu; then
             sudo service tgt restart
-        elif is_fedora; then
-            # bypass redirection to systemctl during restart
-            sudo /sbin/service --skip-redirect tgtd restart
-        elif is_suse; then
+        elif is_fedora || is_suse; then
             restart_service tgtd
         else
             # note for other distros: unstack.sh also uses the tgt/tgtd service
diff --git a/lib/gantt b/lib/gantt
deleted file mode 100644
index 5bd28c2..0000000
--- a/lib/gantt
+++ /dev/null
@@ -1,98 +0,0 @@
-#!/bin/bash
-#
-# lib/gantt
-# Install and start **Gantt** scheduler service
-
-# Dependencies:
-#
-# - functions
-# - DEST, DATA_DIR, STACK_USER must be defined
-
-# stack.sh
-# ---------
-# - install_gantt
-# - configure_gantt
-# - init_gantt
-# - start_gantt
-# - stop_gantt
-# - cleanup_gantt
-
-# Save trace setting
-XTRACE=$(set +o | grep xtrace)
-set +o xtrace
-
-# Defaults
-# --------
-
-# set up default directories
-GANTT_DIR=$DEST/gantt
-GANTT_STATE_PATH=${GANTT_STATE_PATH:=$DATA_DIR/gantt}
-GANTT_REPO=${GANTT_REPO:-${GIT_BASE}/openstack/gantt.git}
-GANTT_BRANCH=${GANTT_BRANCH:-master}
-
-GANTTCLIENT_DIR=$DEST/python-ganttclient
-GANTTCLIENT_REPO=${GANTT_REPO:-${GIT_BASE}/openstack/python-ganttclient.git}
-GANTTCLIENT_BRANCH=${GANTT_BRANCH:-master}
-
-# eventually we will have a separate gantt config
-# file but for compatibility reasone stick with
-# nova.conf for now
-GANTT_CONF_DIR=${GANTT_CONF_DIR:-/etc/nova}
-GANTT_CONF=$GANTT_CONF_DIR/nova.conf
-
-# Support entry points installation of console scripts
-GANTT_BIN_DIR=$(get_python_exec_prefix)
-
-
-# Functions
-# ---------
-
-# cleanup_gantt() - Remove residual data files, anything left over from previous
-# runs that a clean run would need to clean up
-function cleanup_gantt {
-    echo "Cleanup Gantt"
-}
-
-# configure_gantt() - Set config files, create data dirs, etc
-function configure_gantt {
-    echo "Configure Gantt"
-}
-
-# init_gantt() - Initialize database and volume group
-function init_gantt {
-    echo "Initialize Gantt"
-}
-
-# install_gantt() - Collect source and prepare
-function install_gantt {
-    git_clone $GANTT_REPO $GANTT_DIR $GANTT_BRANCH
-    setup_develop $GANTT_DIR
-}
-
-# install_ganttclient() - Collect source and prepare
-function install_ganttclient {
-    echo "Install Gantt Client"
-#    git_clone $GANTTCLIENT_REPO $GANTTCLIENT_DIR $GANTTCLIENT_BRANCH
-#    setup_develop $GANTTCLIENT_DIR
-}
-
-# start_gantt() - Start running processes, including screen
-function start_gantt {
-    if is_service_enabled gantt; then
-        run_process gantt "$GANTT_BIN_DIR/gantt-scheduler --config-file $GANTT_CONF"
-    fi
-}
-
-# stop_gantt() - Stop running processes
-function stop_gantt {
-    echo "Stop Gantt"
-    stop_process gantt
-}
-
-# Restore xtrace
-$XTRACE
-
-# Tell emacs to use shell-script-mode
-## Local variables:
-## mode: shell-script
-## End:
diff --git a/lib/glance b/lib/glance
old mode 100644
new mode 100755
index 8768761..5bd0b8c
--- a/lib/glance
+++ b/lib/glance
@@ -70,7 +70,6 @@
 # Tell Tempest this project is present
 TEMPEST_SERVICES+=,glance
 
-
 # Functions
 # ---------
 
@@ -232,16 +231,14 @@
 function create_glance_accounts {
     if is_service_enabled g-api; then
 
-        local glance_user=$(get_or_create_user "glance" \
-            "$SERVICE_PASSWORD" $SERVICE_TENANT_NAME)
-        get_or_add_user_role service $glance_user $SERVICE_TENANT_NAME
+        create_service_user "glance"
 
         # required for swift access
         if is_service_enabled s-proxy; then
 
             local glance_swift_user=$(get_or_create_user "glance-swift" \
-                "$SERVICE_PASSWORD" $SERVICE_TENANT_NAME "glance-swift@example.com")
-            get_or_add_user_role "ResellerAdmin" $glance_swift_user $SERVICE_TENANT_NAME
+                "$SERVICE_PASSWORD" "glance-swift@example.com")
+            get_or_add_user_project_role "ResellerAdmin" $glance_swift_user $SERVICE_TENANT_NAME
         fi
 
         if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
@@ -310,6 +307,10 @@
 
     git_clone $GLANCE_REPO $GLANCE_DIR $GLANCE_BRANCH
     setup_develop $GLANCE_DIR
+    if is_service_enabled g-graffiti; then
+        ${TOP_DIR}/pkg/elasticsearch.sh download
+        ${TOP_DIR}/pkg/elasticsearch.sh install
+    fi
 }
 
 # start_glance() - Start running processes, including screen
@@ -323,6 +324,9 @@
     run_process g-reg "$GLANCE_BIN_DIR/glance-registry --config-file=$GLANCE_CONF_DIR/glance-registry.conf"
     run_process g-api "$GLANCE_BIN_DIR/glance-api --config-file=$GLANCE_CONF_DIR/glance-api.conf"
 
+    if is_service_enabled g-graffiti; then
+        ${TOP_DIR}/pkg/elasticsearch.sh start
+    fi
     echo "Waiting for g-api ($GLANCE_HOSTPORT) to start..."
     if ! wait_for_service $SERVICE_TIMEOUT $GLANCE_SERVICE_PROTOCOL://$GLANCE_HOSTPORT; then
         die $LINENO "g-api did not start"
@@ -336,7 +340,6 @@
     stop_process g-reg
 }
 
-
 # Restore xtrace
 $XTRACE
 
diff --git a/lib/heat b/lib/heat
index bbef08c..c102163 100644
--- a/lib/heat
+++ b/lib/heat
@@ -134,10 +134,6 @@
     iniset $HEAT_CONF keystone_authtoken cafile $SSL_BUNDLE_FILE
     iniset $HEAT_CONF keystone_authtoken signing_dir $HEAT_AUTH_CACHE_DIR
 
-    if is_ssl_enabled_service "key"; then
-        iniset $HEAT_CONF clients_keystone ca_file $SSL_BUNDLE_FILE
-    fi
-
     # ec2authtoken
     iniset $HEAT_CONF ec2authtoken auth_uri $KEYSTONE_SERVICE_URI/v2.0
 
@@ -246,13 +242,7 @@
 
 # create_heat_accounts() - Set up common required heat accounts
 function create_heat_accounts {
-    # migrated from files/keystone_data.sh
-    local service_tenant=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
-    local admin_role=$(openstack role list | awk "/ admin / { print \$2 }")
-
-    local heat_user=$(get_or_create_user "heat" \
-        "$SERVICE_PASSWORD" $service_tenant)
-    get_or_add_user_role $admin_role $heat_user $service_tenant
+    create_service_user "heat" "admin"
 
     if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
 
diff --git a/lib/ironic b/lib/ironic
index 2075a9c..bed816e 100644
--- a/lib/ironic
+++ b/lib/ironic
@@ -358,16 +358,11 @@
 # service              ironic     admin        # if enabled
 function create_ironic_accounts {
 
-    local service_tenant=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
-    local admin_role=$(openstack role list | awk "/ admin / { print \$2 }")
-
     # Ironic
     if [[ "$ENABLED_SERVICES" =~ "ir-api" ]]; then
         # Get ironic user if exists
 
-        local ironic_user=$(get_or_create_user "ironic" \
-            "$SERVICE_PASSWORD" $service_tenant)
-        get_or_add_user_role $admin_role $ironic_user $service_tenant
+        create_service_user "ironic"
 
         if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
 
diff --git a/lib/keystone b/lib/keystone
index afa7f00..2da2d1b 100644
--- a/lib/keystone
+++ b/lib/keystone
@@ -309,8 +309,9 @@
         setup_colorized_logging $KEYSTONE_CONF DEFAULT
     fi
 
+    iniset $KEYSTONE_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
+
     if [ "$KEYSTONE_USE_MOD_WSGI" == "True" ]; then
-        iniset $KEYSTONE_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
         # Eliminate the %(asctime)s.%(msecs)03d from the log format strings
         iniset $KEYSTONE_CONF DEFAULT logging_context_format_string "%(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s"
         iniset $KEYSTONE_CONF DEFAULT logging_default_format_string "%(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s"
@@ -362,10 +363,9 @@
 
     # admin
     local admin_tenant=$(get_or_create_project "admin")
-    local admin_user=$(get_or_create_user "admin" \
-        "$ADMIN_PASSWORD" "$admin_tenant")
+    local admin_user=$(get_or_create_user "admin" "$ADMIN_PASSWORD")
     local admin_role=$(get_or_create_role "admin")
-    get_or_add_user_role $admin_role $admin_user $admin_tenant
+    get_or_add_user_project_role $admin_role $admin_user $admin_tenant
 
     # Create service project/role
     get_or_create_project "$SERVICE_TENANT_NAME"
@@ -392,12 +392,12 @@
     # demo
     local demo_tenant=$(get_or_create_project "demo")
     local demo_user=$(get_or_create_user "demo" \
-        "$ADMIN_PASSWORD" "$demo_tenant" "demo@example.com")
+        "$ADMIN_PASSWORD" "demo@example.com")
 
-    get_or_add_user_role $member_role $demo_user $demo_tenant
-    get_or_add_user_role $admin_role $admin_user $demo_tenant
-    get_or_add_user_role $another_role $demo_user $demo_tenant
-    get_or_add_user_role $member_role $demo_user $invis_tenant
+    get_or_add_user_project_role $member_role $demo_user $demo_tenant
+    get_or_add_user_project_role $admin_role $admin_user $demo_tenant
+    get_or_add_user_project_role $another_role $demo_user $demo_tenant
+    get_or_add_user_project_role $member_role $demo_user $invis_tenant
 
     get_or_create_group "developers" "default" "openstack developers"
     get_or_create_group "testers" "default"
@@ -415,6 +415,20 @@
     fi
 }
 
+# Create a user that is capable of verifying keystone tokens for use with auth_token middleware.
+#
+# create_service_user <name> [role]
+#
+# The role defaults to the service role. The role argument is optional because
+# historically many projects have configured this user with the admin (or some
+# other) role when using it for purposes beyond auth_token middleware.
+function create_service_user {
+    local role=${2:-service}
+
+    local user=$(get_or_create_user "$1" "$SERVICE_PASSWORD")
+    get_or_add_user_project_role "$role" "$user" "$SERVICE_TENANT_NAME"
+}
+
 # Configure the service to use the auth token middleware.
 #
 # configure_auth_token_middleware conf_file admin_user signing_dir [section]
@@ -533,12 +547,8 @@
         tail_log key /var/log/$APACHE_NAME/keystone.log
         tail_log key-access /var/log/$APACHE_NAME/keystone_access.log
     else
-        local EXTRA_PARAMS=""
-        if [ "$ENABLE_DEBUG_LOG_LEVEL" == "True" ]; then
-            EXTRA_PARAMS="--debug"
-        fi
         # Start Keystone in a screen window
-        run_process key "$KEYSTONE_DIR/bin/keystone-all --config-file $KEYSTONE_CONF $EXTRA_PARAMS"
+        run_process key "$KEYSTONE_DIR/bin/keystone-all --config-file $KEYSTONE_CONF"
     fi
 
     echo "Waiting for keystone to start..."
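The ``${2:-service}`` default in ``create_service_user`` above is what lets callers such as cinder and glance omit the role entirely, while heat and nova pass ``admin`` explicitly. A minimal sketch of just that defaulting pattern (``pick_role`` is a made-up name for illustration):

```shell
# ${2:-service} substitutes "service" only when $2 is unset or empty.
function pick_role {
    local role=${2:-service}
    echo "$1 -> $role"
}

pick_role "glance"          # prints: glance -> service
pick_role "nova" "admin"    # prints: nova -> admin
```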
diff --git a/lib/neutron b/lib/neutron
index 0fb8d00..15a5f00 100755
--- a/lib/neutron
+++ b/lib/neutron
@@ -10,24 +10,25 @@
 
 # ``stack.sh`` calls the entry points in this order:
 #
-# - install_neutron
-# - install_neutronclient
 # - install_neutron_agent_packages
+# - install_neutronclient
+# - install_neutron
 # - install_neutron_third_party
 # - configure_neutron
 # - init_neutron
 # - configure_neutron_third_party
 # - init_neutron_third_party
 # - start_neutron_third_party
-# - create_neutron_cache_dir
 # - create_nova_conf_neutron
 # - start_neutron_service_and_check
+# - check_neutron_third_party_integration
 # - start_neutron_agents
 # - create_neutron_initial_network
 # - setup_neutron_debug
 #
 # ``unstack.sh`` calls the entry points in this order:
 #
+# - teardown_neutron_debug
 # - stop_neutron
 # - stop_neutron_third_party
 # - cleanup_neutron
@@ -507,15 +508,9 @@
 
 # Migrated from keystone_data.sh
 function create_neutron_accounts {
-
-    local service_tenant=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
-    local service_role=$(openstack role list | awk "/ service / { print \$2 }")
-
     if [[ "$ENABLED_SERVICES" =~ "q-svc" ]]; then
 
-        local neutron_user=$(get_or_create_user "neutron" \
-            "$SERVICE_PASSWORD" $service_tenant)
-        get_or_add_user_role $service_role $neutron_user $service_tenant
+        create_service_user "neutron"
 
         if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
 
@@ -749,13 +744,21 @@
 # stop_neutron() - Stop running processes (non-screen)
 function stop_neutron {
     if is_service_enabled q-dhcp; then
+        stop_process q-dhcp
         pid=$(ps aux | awk '/[d]nsmasq.+interface=(tap|ns-)/ { print $2 }')
         [ ! -z "$pid" ] && sudo kill -9 $pid
     fi
+
+    stop_process q-svc
+    stop_process q-l3
+
     if is_service_enabled q-meta; then
         sudo pkill -9 -f neutron-ns-metadata-proxy || :
+        stop_process q-meta
     fi
 
+    stop_process q-agt
+
     if is_service_enabled q-lbaas; then
         neutron_lbaas_stop
     fi
diff --git a/lib/neutron_plugins/services/metering b/lib/neutron_plugins/services/metering
index 51123e2..37ba019 100644
--- a/lib/neutron_plugins/services/metering
+++ b/lib/neutron_plugins/services/metering
@@ -23,7 +23,7 @@
 }
 
 function neutron_metering_stop {
-    :
+    stop_process q-metering
 }
 
 # Restore xtrace
diff --git a/lib/neutron_plugins/services/vpn b/lib/neutron_plugins/services/vpn
index 7e80b5b..5912eab 100644
--- a/lib/neutron_plugins/services/vpn
+++ b/lib/neutron_plugins/services/vpn
@@ -28,6 +28,7 @@
     if [ -n "$pids" ]; then
         sudo kill $pids
     fi
+    stop_process q-vpn
 }
 
 # Restore xtrace
diff --git a/lib/nova b/lib/nova
index a4b1bb1..a5033f7 100644
--- a/lib/nova
+++ b/lib/nova
@@ -353,15 +353,12 @@
 # SERVICE_TENANT_NAME  nova         ResellerAdmin (if Swift is enabled)
 function create_nova_accounts {
 
-    local service_tenant=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
-    local admin_role=$(openstack role list | awk "/ admin / { print \$2 }")
-
     # Nova
     if [[ "$ENABLED_SERVICES" =~ "n-api" ]]; then
 
-        local nova_user=$(get_or_create_user "nova" \
-            "$SERVICE_PASSWORD" $service_tenant)
-        get_or_add_user_role $admin_role $nova_user $service_tenant
+        # NOTE(jamielennox): Nova doesn't need the admin role here; however,
+        # neutron uses this service user when notifying nova of changes, and
+        # that requires the admin role.
+        create_service_user "nova" "admin"
 
         if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
 
@@ -388,7 +385,7 @@
         if is_service_enabled swift; then
             # Nova needs ResellerAdmin role to download images when accessing
             # swift through the s3 api.
-            get_or_add_user_role ResellerAdmin nova $SERVICE_TENANT_NAME
+            get_or_add_user_project_role ResellerAdmin nova $SERVICE_TENANT_NAME
         fi
 
         # EC2
diff --git a/lib/opendaylight b/lib/opendaylight
deleted file mode 100644
index 6518673..0000000
--- a/lib/opendaylight
+++ /dev/null
@@ -1,215 +0,0 @@
-#!/bin/bash
-#
-# lib/opendaylight
-# Functions to control the configuration and operation of the opendaylight service
-
-# Dependencies:
-#
-# ``functions`` file
-# ``DEST`` must be defined
-# ``STACK_USER`` must be defined
-
-# ``stack.sh`` calls the entry points in this order:
-#
-# - is_opendaylight_enabled
-# - is_opendaylight-compute_enabled
-# - install_opendaylight
-# - install_opendaylight-compute
-# - configure_opendaylight
-# - init_opendaylight
-# - start_opendaylight
-# - stop_opendaylight-compute
-# - stop_opendaylight
-# - cleanup_opendaylight
-
-# Save trace setting
-XTRACE=$(set +o | grep xtrace)
-set +o xtrace
-
-
-# For OVS_BRIDGE and PUBLIC_BRIDGE
-source $TOP_DIR/lib/neutron_plugins/ovs_base
-
-# Defaults
-# --------
-
-# The IP address of ODL. Set this in local.conf.
-# ODL_MGR_IP=
-ODL_MGR_IP=${ODL_MGR_IP:-$SERVICE_HOST}
-
-# The ODL endpoint URL
-ODL_ENDPOINT=${ODL_ENDPOINT:-http://${ODL_MGR_IP}:8080/controller/nb/v2/neutron}
-
-# The ODL username
-ODL_USERNAME=${ODL_USERNAME:-admin}
-
-# The ODL password
-ODL_PASSWORD=${ODL_PASSWORD:-admin}
-
-# Short name of ODL package
-ODL_NAME=${ODL_NAME:-distribution-karaf-0.2.1-Helium-SR1.1}
-
-# <define global variables here that belong to this project>
-ODL_DIR=$DEST/opendaylight
-
-# The OpenDaylight Package, currently using 'Hydrogen' release
-ODL_PKG=${ODL_PKG:-distribution-karaf-0.2.1-Helium-SR1.1.zip}
-
-# The OpenDaylight URL
-ODL_URL=${ODL_URL:-https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/}
-
-# Default arguments for OpenDaylight. This is typically used to set
-# Java memory options.
-# ``ODL_ARGS=Xmx1024m -XX:MaxPermSize=512m``
-ODL_ARGS=${ODL_ARGS:-"-XX:MaxPermSize=384m"}
-
-# How long to pause after ODL starts to let it complete booting
-ODL_BOOT_WAIT=${ODL_BOOT_WAIT:-20}
-
-# The physical provider network to device mapping
-ODL_PROVIDER_MAPPINGS=${ODL_PROVIDER_MAPPINGS:-physnet1:eth1}
-
-# Enable OpenDaylight l3 forwarding
-ODL_L3=${ODL_L3:-False}
-
-# Enable debug logs for odl ovsdb
-ODL_NETVIRT_DEBUG_LOGS=${ODL_NETVIRT_DEBUG_LOGS:-False}
-
-# The logging config file in ODL
-ODL_LOGGING_CONFIG=${ODL_LOGGING_CONFIG:-${ODL_DIR}/${ODL_NAME}/etc/org.ops4j.pax.logging.cfg}
-
-# Entry Points
-# ------------
-
-# Test if OpenDaylight is enabled
-# is_opendaylight_enabled
-function is_opendaylight_enabled {
-    [[ ,${ENABLED_SERVICES} =~ ,"odl-" ]] && return 0
-    return 1
-}
-
-# cleanup_opendaylight() - Remove residual data files, anything left over from previous
-# runs that a clean run would need to clean up
-function cleanup_opendaylight {
-    :
-}
-
-# configure_opendaylight() - Set config files, create data dirs, etc
-function configure_opendaylight {
-    # Add odl-ovsdb-openstack if it's not already there
-    local ODLOVSDB=$(cat $ODL_DIR/$ODL_NAME/etc/org.apache.karaf.features.cfg | grep featuresBoot= | grep odl)
-    if [ "$ODLOVSDB" == "" ]; then
-        sed -i '/^featuresBoot=/ s/$/,odl-ovsdb-openstack/' $ODL_DIR/$ODL_NAME/etc/org.apache.karaf.features.cfg
-    fi
-
-    # Configure OpenFlow 1.3 if it's not there
-    local OFLOW13=$(cat $ODL_DIR/$ODL_NAME/etc/custom.properties | grep ^of.version)
-    if [ "$OFLOW13" == "" ]; then
-        echo "ovsdb.of.version=1.3" >> $ODL_DIR/$ODL_NAME/etc/custom.properties
-    fi
-
-    # Configure L3 if the user wants it
-    if [ "${ODL_L3}" == "True" ]; then
-        # Configure L3 FWD if it's not there
-        local L3FWD=$(cat $ODL_DIR/$ODL_NAME/etc/custom.properties | grep ^ovsdb.l3.fwd.enabled)
-        if [ "$L3FWD" == "" ]; then
-            echo "ovsdb.l3.fwd.enabled=yes" >> $ODL_DIR/$ODL_NAME/etc/custom.properties
-        fi
-    fi
-
-    # Configure DEBUG logs for network virtualization in odl, if the user wants it
-    if [ "${ODL_NETVIRT_DEBUG_LOGS}" == "True" ]; then
-        local OVSDB_DEBUG_LOGS=$(cat $ODL_LOGGING_CONFIG | grep ^log4j.logger.org.opendaylight.ovsdb)
-        if [ "${OVSDB_DEBUG_LOGS}" == "" ]; then
-            echo 'log4j.logger.org.opendaylight.ovsdb = TRACE' >> $ODL_LOGGING_CONFIG
-            echo 'log4j.logger.org.opendaylight.ovsdb.lib = INFO' >> $ODL_LOGGING_CONFIG
-            echo 'log4j.logger.org.opendaylight.ovsdb.openstack.netvirt.impl.NeutronL3Adapter = DEBUG' >> $ODL_LOGGING_CONFIG
-            echo 'log4j.logger.org.opendaylight.ovsdb.openstack.netvirt.impl.TenantNetworkManagerImpl = DEBUG' >> $ODL_LOGGING_CONFIG
-            echo 'log4j.logger.org.opendaylight.ovsdb.plugin.md.OvsdbInventoryManager = INFO' >> $ODL_LOGGING_CONFIG
-        fi
-        local ODL_NEUTRON_DEBUG_LOGS=$(cat $ODL_LOGGING_CONFIG | grep ^log4j.logger.org.opendaylight.controller.networkconfig.neutron)
-        if [ "${ODL_NEUTRON_DEBUG_LOGS}" == "" ]; then
-            echo 'log4j.logger.org.opendaylight.controller.networkconfig.neutron = TRACE' >> $ODL_LOGGING_CONFIG
-        fi
-    fi
-}
-
-function configure_ml2_odl {
-    populate_ml2_config /$Q_PLUGIN_CONF_FILE ml2_odl url=$ODL_ENDPOINT
-    populate_ml2_config /$Q_PLUGIN_CONF_FILE ml2_odl username=$ODL_USERNAME
-    populate_ml2_config /$Q_PLUGIN_CONF_FILE ml2_odl password=$ODL_PASSWORD
-}
-
-# init_opendaylight() - Initialize databases, etc.
-function init_opendaylight {
-    # clean up from previous (possibly aborted) runs
-    # create required data files
-    :
-}
-
-# install_opendaylight() - Collect source and prepare
-function install_opendaylight {
-    local _pwd=$(pwd)
-
-    if is_ubuntu; then
-        install_package maven openjdk-7-jre openjdk-7-jdk
-    else
-        yum_install maven java-1.7.0-openjdk
-    fi
-
-    # Download OpenDaylight
-    mkdir -p $ODL_DIR
-    cd $ODL_DIR
-    wget -N $ODL_URL/$ODL_PKG
-    unzip -u $ODL_PKG
-}
-
-# install_opendaylight-compute - Make sure OVS is installed
-function install_opendaylight-compute {
-    # packages are the same as for Neutron OVS agent
-    _neutron_ovs_base_install_agent_packages
-}
-
-# start_opendaylight() - Start running processes, including screen
-function start_opendaylight {
-    if is_ubuntu; then
-        JHOME=/usr/lib/jvm/java-1.7.0-openjdk-amd64
-    else
-        JHOME=/usr/lib/jvm/java-1.7.0-openjdk
-    fi
-
-    # The flags to ODL have the following meaning:
-    #   -of13: runs ODL using OpenFlow 1.3 protocol support.
-    #   -virt ovsdb: Runs ODL in "virtualization" mode with OVSDB support
-
-    run_process odl-server "cd $ODL_DIR/$ODL_NAME && JAVA_HOME=$JHOME bin/karaf"
-
-    # Sleep a bit to let OpenDaylight finish starting up
-    sleep $ODL_BOOT_WAIT
-}
-
-# stop_opendaylight() - Stop running processes (non-screen)
-function stop_opendaylight {
-    stop_process odl-server
-}
-
-# stop_opendaylight-compute() - Remove OVS bridges
-function stop_opendaylight-compute {
-    # remove all OVS ports that look like Neutron created ports
-    for port in $(sudo ovs-vsctl list port | grep -o -e tap[0-9a-f\-]* -e q[rg]-[0-9a-f\-]*); do
-        sudo ovs-vsctl del-port ${port}
-    done
-
-    # remove all OVS bridges created by Neutron
-    for bridge in $(sudo ovs-vsctl list-br | grep -o -e ${OVS_BRIDGE} -e ${PUBLIC_BRIDGE}); do
-        sudo ovs-vsctl del-br ${bridge}
-    done
-}
-
-# Restore xtrace
-$XTRACE
-
-# Tell emacs to use shell-script-mode
-## Local variables:
-## mode: shell-script
-## End:
diff --git a/lib/rpc_backend b/lib/rpc_backend
index ec821f1..899748c 100644
--- a/lib/rpc_backend
+++ b/lib/rpc_backend
@@ -343,6 +343,7 @@
             install_package sasl2-bin
         elif is_fedora; then
             install_package cyrus-sasl-lib
+            install_package cyrus-sasl-plain
         fi
         local sasl_conf_file=/etc/sasl2/qpidd.conf
         sudo sed -i.bak '/PLAIN/!s/mech_list: /mech_list: PLAIN /' $sasl_conf_file
diff --git a/lib/sahara b/lib/sahara
index 5720c20..da4fbcd 100644
--- a/lib/sahara
+++ b/lib/sahara
@@ -61,12 +61,7 @@
 # service     sahara    admin
 function create_sahara_accounts {
 
-    local service_tenant=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
-    local admin_role=$(openstack role list | awk "/ admin / { print \$2 }")
-
-    local sahara_user=$(get_or_create_user "sahara" \
-        "$SERVICE_PASSWORD" $service_tenant)
-    get_or_add_user_role $admin_role $sahara_user $service_tenant
+    create_service_user "sahara"
 
     if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
 
@@ -129,6 +124,10 @@
     if is_service_enabled neutron; then
         iniset $SAHARA_CONF_FILE DEFAULT use_neutron true
         iniset $SAHARA_CONF_FILE DEFAULT use_floating_ips true
+
+        if is_ssl_enabled_service "neutron" || is_service_enabled tls-proxy; then
+            iniset $SAHARA_CONF_FILE neutron ca_file $SSL_BUNDLE_FILE
+        fi
     else
         iniset $SAHARA_CONF_FILE DEFAULT use_neutron false
         iniset $SAHARA_CONF_FILE DEFAULT use_floating_ips false
@@ -136,10 +135,30 @@
 
     if is_service_enabled heat; then
         iniset $SAHARA_CONF_FILE DEFAULT infrastructure_engine heat
+
+        if is_ssl_enabled_service "heat" || is_service_enabled tls-proxy; then
+            iniset $SAHARA_CONF_FILE heat ca_file $SSL_BUNDLE_FILE
+        fi
     else
         iniset $SAHARA_CONF_FILE DEFAULT infrastructure_engine direct
     fi
 
+    if is_ssl_enabled_service "cinder" || is_service_enabled tls-proxy; then
+        iniset $SAHARA_CONF_FILE cinder ca_file $SSL_BUNDLE_FILE
+    fi
+
+    if is_ssl_enabled_service "nova" || is_service_enabled tls-proxy; then
+        iniset $SAHARA_CONF_FILE nova ca_file $SSL_BUNDLE_FILE
+    fi
+
+    if is_ssl_enabled_service "swift" || is_service_enabled tls-proxy; then
+        iniset $SAHARA_CONF_FILE swift ca_file $SSL_BUNDLE_FILE
+    fi
+
+    if is_ssl_enabled_service "key" || is_service_enabled tls-proxy; then
+        iniset $SAHARA_CONF_FILE keystone ca_file $SSL_BUNDLE_FILE
+    fi
+
     iniset $SAHARA_CONF_FILE DEFAULT use_syslog $SYSLOG
 
     # Format logging
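The hunks above repeat one guard-and-set pattern per client section: if the target service terminates SSL itself or sits behind the TLS proxy, point sahara's client config at the CA bundle. A standalone sketch of that loop, using a stub `iniset` (the real `iniset` in devstack's functions-common rewrites existing options in place; the stub here only appends, which is enough to show the shape):

```shell
#!/bin/bash
# Stand-in values; the real paths come from stackrc and lib/tls.
SSL_BUNDLE_FILE=/opt/stack/data/ca-bundle.pem
CONF=/tmp/sahara.conf.demo
: > "$CONF"

# Stub iniset: naively appends "[section]\noption = value" to the file.
# The real devstack iniset is idempotent and edits options in place.
function iniset {
    printf '[%s]\n%s = %s\n' "$2" "$3" "$4" >> "$1"
}

# One ca_file per client section, mirroring the sahara hunks above.
for svc in cinder nova swift keystone; do
    iniset $CONF $svc ca_file $SSL_BUNDLE_FILE
done
grep -c 'ca_file' $CONF   # -> 4
```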
diff --git a/lib/stackforge b/lib/stackforge
deleted file mode 100644
index cc3a689..0000000
--- a/lib/stackforge
+++ /dev/null
@@ -1,56 +0,0 @@
-#!/bin/bash
-#
-# lib/stackforge
-#
-# Functions to install stackforge libraries that we depend on so
-# that we can try their git versions during devstack gate.
-#
-# This is appropriate for python libraries that release to pypi and are
-# expected to be used beyond OpenStack like, but are requirements
-# for core services in global-requirements.
-#
-#     * wsme
-#     * pecan
-#
-# This is not appropriate for stackforge projects which are early stage
-# OpenStack tools
-
-# Dependencies:
-# ``functions`` file
-
-# ``stack.sh`` calls the entry points in this order:
-#
-# install_stackforge
-
-# Save trace setting
-XTRACE=$(set +o | grep xtrace)
-set +o xtrace
-
-
-# Defaults
-# --------
-WSME_DIR=$DEST/wsme
-PECAN_DIR=$DEST/pecan
-SQLALCHEMY_MIGRATE_DIR=$DEST/sqlalchemy-migrate
-
-# Entry Points
-# ------------
-
-# install_stackforge() - Collect source and prepare
-function install_stackforge {
-    git_clone $WSME_REPO $WSME_DIR $WSME_BRANCH
-    setup_package $WSME_DIR
-
-    git_clone $PECAN_REPO $PECAN_DIR $PECAN_BRANCH
-    setup_package $PECAN_DIR
-
-    git_clone $SQLALCHEMY_MIGRATE_REPO $SQLALCHEMY_MIGRATE_DIR $SQLALCHEMY_MIGRATE_BRANCH
-    setup_package $SQLALCHEMY_MIGRATE_DIR
-}
-
-# Restore xtrace
-$XTRACE
-
-# Local variables:
-# mode: shell-script
-# End:
diff --git a/lib/swift b/lib/swift
index e6e1212..e4d8b5f 100644
--- a/lib/swift
+++ b/lib/swift
@@ -601,13 +601,9 @@
 
     KEYSTONE_CATALOG_BACKEND=${KEYSTONE_CATALOG_BACKEND:-sql}
 
-    local service_tenant=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
-    local admin_role=$(openstack role list | awk "/ admin / { print \$2 }")
     local another_role=$(openstack role list | awk "/ anotherrole / { print \$2 }")
 
-    local swift_user=$(get_or_create_user "swift" \
-        "$SERVICE_PASSWORD" $service_tenant)
-    get_or_add_user_role $admin_role $swift_user $service_tenant
+    create_service_user "swift"
 
     if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
 
@@ -622,33 +618,30 @@
 
     local swift_tenant_test1=$(get_or_create_project swifttenanttest1)
     die_if_not_set $LINENO swift_tenant_test1 "Failure creating swift_tenant_test1"
-    SWIFT_USER_TEST1=$(get_or_create_user swiftusertest1 $swiftusertest1_password \
-        "$swift_tenant_test1" "test@example.com")
+    SWIFT_USER_TEST1=$(get_or_create_user swiftusertest1 $swiftusertest1_password "test@example.com")
     die_if_not_set $LINENO SWIFT_USER_TEST1 "Failure creating SWIFT_USER_TEST1"
-    get_or_add_user_role $admin_role $SWIFT_USER_TEST1 $swift_tenant_test1
+    get_or_add_user_project_role admin $SWIFT_USER_TEST1 $swift_tenant_test1
 
-    local swift_user_test3=$(get_or_create_user swiftusertest3 $swiftusertest3_password \
-        "$swift_tenant_test1" "test3@example.com")
+    local swift_user_test3=$(get_or_create_user swiftusertest3 $swiftusertest3_password "test3@example.com")
     die_if_not_set $LINENO swift_user_test3 "Failure creating swift_user_test3"
-    get_or_add_user_role $another_role $swift_user_test3 $swift_tenant_test1
+    get_or_add_user_project_role $another_role $swift_user_test3 $swift_tenant_test1
 
     local swift_tenant_test2=$(get_or_create_project swifttenanttest2)
     die_if_not_set $LINENO swift_tenant_test2 "Failure creating swift_tenant_test2"
 
-    local swift_user_test2=$(get_or_create_user swiftusertest2 $swiftusertest2_password \
-        "$swift_tenant_test2" "test2@example.com")
+    local swift_user_test2=$(get_or_create_user swiftusertest2 $swiftusertest2_password "test2@example.com")
     die_if_not_set $LINENO swift_user_test2 "Failure creating swift_user_test2"
-    get_or_add_user_role $admin_role $swift_user_test2 $swift_tenant_test2
+    get_or_add_user_project_role admin $swift_user_test2 $swift_tenant_test2
 
     local swift_domain=$(get_or_create_domain swift_test 'Used for swift functional testing')
     die_if_not_set $LINENO swift_domain "Failure creating swift_test domain"
 
     local swift_tenant_test4=$(get_or_create_project swifttenanttest4 $swift_domain)
     die_if_not_set $LINENO swift_tenant_test4 "Failure creating swift_tenant_test4"
-    local swift_user_test4=$(get_or_create_user swiftusertest4 $swiftusertest4_password \
-        $swift_tenant_test4 "test4@example.com" $swift_domain)
+
+    local swift_user_test4=$(get_or_create_user swiftusertest4 $swiftusertest4_password "test4@example.com" $swift_domain)
     die_if_not_set $LINENO swift_user_test4 "Failure creating swift_user_test4"
-    get_or_add_user_role $admin_role $swift_user_test4 $swift_tenant_test4
+    get_or_add_user_project_role admin $swift_user_test4 $swift_tenant_test4
 }
 
 # init_swift() - Initialize rings
diff --git a/lib/tempest b/lib/tempest
index 1ae9457..5ca217e 100644
--- a/lib/tempest
+++ b/lib/tempest
@@ -65,7 +65,7 @@
 
 
 BOTO_MATERIALS_PATH="$FILES/images/s3-materials/cirros-${CIRROS_VERSION}"
-BOTO_CONF=$TEMPEST_DIR/boto.cfg
+BOTO_CONF=/etc/boto.cfg
 
 # Cinder/Volume variables
 TEMPEST_VOLUME_DRIVER=${TEMPEST_VOLUME_DRIVER:-default}
@@ -95,7 +95,8 @@
 
 # configure_tempest() - Set config files, create data dirs, etc
 function configure_tempest {
-    setup_develop $TEMPEST_DIR
+    # install testr since it's used to process tempest logs
+    pip_install `grep -h testrepository $REQUIREMENTS_DIR/global-requirements.txt | cut -d\# -f1`
     local image_lines
     local images
     local num_images
@@ -291,6 +292,9 @@
     iniset $TEMPEST_CONFIG identity admin_tenant_id $ADMIN_TENANT_ID
     iniset $TEMPEST_CONFIG identity admin_domain_name $ADMIN_DOMAIN_NAME
     iniset $TEMPEST_CONFIG identity auth_version ${TEMPEST_AUTH_VERSION:-v2}
+    if is_ssl_enabled_service "key" || is_service_enabled tls-proxy; then
+        iniset $TEMPEST_CONFIG identity ca_certificates_file $SSL_BUNDLE_FILE
+    fi
 
     # Image
     # for the gate we want to be able to override this variable so we aren't
@@ -319,7 +323,8 @@
     # Run verify_tempest_config -ur to retrieve enabled extensions on API endpoints
     # NOTE(mtreinish): This must be done after auth settings are added to the tempest config
     local tmp_cfg_file=$(mktemp)
-    $TEMPEST_DIR/tempest/cmd/verify_tempest_config.py -uro $tmp_cfg_file
+    cd $TEMPEST_DIR
+    tox -evenv -- verify-tempest-config -uro $tmp_cfg_file
 
     local compute_api_extensions=${COMPUTE_API_EXTENSIONS:-"all"}
     if [[ ! -z "$DISABLE_COMPUTE_API_EXTENSIONS" ]]; then
@@ -480,7 +485,7 @@
         fi
     done
 
-    if is_ssl_enabled_service "keystone" || is_service_enabled tls-proxy; then
+    if is_ssl_enabled_service "key" || is_service_enabled tls-proxy; then
         # Use the BOTO_CONFIG environment variable to point to this file
         iniset $BOTO_CONF Boto ca_certificates_file $SSL_BUNDLE_FILE
         sudo chown $STACK_USER $BOTO_CONF
@@ -502,8 +507,8 @@
         # Tempest has some tests that validate various authorization checks
         # between two regular users in separate tenants
         get_or_create_project alt_demo
-        get_or_create_user alt_demo "$ADMIN_PASSWORD" alt_demo "alt_demo@example.com"
-        get_or_add_user_role Member alt_demo alt_demo
+        get_or_create_user alt_demo "$ADMIN_PASSWORD" "alt_demo@example.com"
+        get_or_add_user_project_role Member alt_demo alt_demo
     fi
 }
 
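The `pip_install` line in the tempest hunk above pins testrepository to whatever global-requirements allows: it greps the requirement line out of the file and uses `cut` to strip the trailing `#`-comment so pip receives a clean specifier. A sketch of that extraction against a hypothetical requirements fragment (the real file lives in the openstack/requirements repo):

```shell
#!/bin/bash
# Hypothetical global-requirements.txt fragment for illustration.
cat > /tmp/global-requirements.txt <<'EOF'
testtools>=0.9.36,!=1.2.0
testrepository>=0.0.18  # Apache-2.0
testscenarios>=0.4
EOF

# Same pipeline as the tempest change: select the line, drop the
# trailing license comment, leaving a specifier pip can install.
spec=$(grep -h testrepository /tmp/global-requirements.txt | cut -d\# -f1)
echo "$spec"
```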
diff --git a/lib/trove b/lib/trove
index 3249ce0..e1b307a 100644
--- a/lib/trove
+++ b/lib/trove
@@ -79,14 +79,9 @@
 # service              trove     admin        # if enabled
 
 function create_trove_accounts {
-    local service_tenant=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
-    local service_role=$(openstack role list | awk "/ admin / { print \$2 }")
-
     if [[ "$ENABLED_SERVICES" =~ "trove" ]]; then
 
-        local trove_user=$(get_or_create_user "trove" \
-            "$SERVICE_PASSWORD" $service_tenant)
-        get_or_add_user_role $service_role $trove_user $service_tenant
+        create_service_user "trove"
 
         if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
 
diff --git a/lib/zaqar b/lib/zaqar
index dfa3452..4a24415 100644
--- a/lib/zaqar
+++ b/lib/zaqar
@@ -215,12 +215,7 @@
 }
 
 function create_zaqar_accounts {
-    local service_tenant=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
-    ADMIN_ROLE=$(openstack role list | awk "/ admin / { print \$2 }")
-
-    local zaqar_user=$(get_or_create_user "zaqar" \
-        "$SERVICE_PASSWORD" $service_tenant)
-    get_or_add_user_role $ADMIN_ROLE $zaqar_user $service_tenant
+    create_service_user "zaqar"
 
     if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
 
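This zaqar hunk is the same consolidation applied to sahara, swift, and trove above: the per-service project/role lookups plus `get_or_create_user` and role-grant boilerplate collapse into a single `create_service_user` call. A dry-run sketch of the shape of that helper, with stub functions that only echo instead of invoking the `openstack` CLI (the real helper lives in devstack's lib/keystone, and the admin-role-in-service-project grant is assumed here because that is what the replaced code did):

```shell
#!/bin/bash
SERVICE_TENANT_NAME=service
SERVICE_PASSWORD=secret

# Stubs: the real helpers wrap 'openstack user create' and
# 'openstack role add' and pass around real IDs.
function get_or_create_user {
    echo "user:$1"
}
function get_or_add_user_project_role {
    echo "grant role=$1 user=$2 project=$3"
}

# Sketch of the consolidated helper each create_*_accounts
# function now calls.
function create_service_user {
    local user=$(get_or_create_user "$1" "$SERVICE_PASSWORD")
    get_or_add_user_project_role admin $user $SERVICE_TENANT_NAME
}

create_service_user "zaqar"
```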
diff --git a/pkg/elasticsearch.sh b/pkg/elasticsearch.sh
new file mode 100755
index 0000000..15e1b2b
--- /dev/null
+++ b/pkg/elasticsearch.sh
@@ -0,0 +1,126 @@
+#!/bin/bash -xe
+
+# basic reference point for things like filecache
+#
+# TODO(sdague): once we have a few of these I imagine the download
+# step can probably be factored out to something nicer
+TOP_DIR=$(cd $(dirname "$0")/.. && pwd)
+FILES=$TOP_DIR/files
+source $TOP_DIR/functions
+
+# Package source and version, all pkg files are expected to have
+# something like this, as well as a way to override them.
+ELASTICSEARCH_VERSION=${ELASTICSEARCH_VERSION:-1.4.2}
+ELASTICSEARCH_BASEURL=${ELASTICSEARCH_BASEURL:-https://download.elasticsearch.org/elasticsearch/elasticsearch}
+
+# Elasticsearch implementation
+function wget_elasticsearch {
+    local file=${1}
+
+    if [ ! -f ${FILES}/${file} ]; then
+        wget $ELASTICSEARCH_BASEURL/${file} -O ${FILES}/${file}
+    fi
+
+    if [ ! -f ${FILES}/${file}.sha1.txt ]; then
+        wget $ELASTICSEARCH_BASEURL/${file}.sha1.txt -O ${FILES}/${file}.sha1.txt
+    fi
+
+    pushd ${FILES};  sha1sum ${file} > ${file}.sha1.gen;  popd
+
+    if ! diff ${FILES}/${file}.sha1.gen ${FILES}/${file}.sha1.txt; then
+        echo "Invalid elasticsearch download. Could not install."
+        return 1
+    fi
+    return 0
+}
+
+function download_elasticsearch {
+    if is_ubuntu; then
+        wget_elasticsearch elasticsearch-${ELASTICSEARCH_VERSION}.deb
+    elif is_fedora; then
+        wget_elasticsearch elasticsearch-${ELASTICSEARCH_VERSION}.noarch.rpm
+    fi
+}
+
+function configure_elasticsearch {
+    # currently a no-op
+    :
+}
+
+function start_elasticsearch {
+    if is_ubuntu; then
+        sudo /etc/init.d/elasticsearch start
+    elif is_fedora; then
+        sudo /bin/systemctl start elasticsearch.service
+    else
+        echo "Unsupported distro: cannot start elasticsearch."
+    fi
+}
+
+function stop_elasticsearch {
+    if is_ubuntu; then
+        sudo /etc/init.d/elasticsearch stop
+    elif is_fedora; then
+        sudo /bin/systemctl stop elasticsearch.service
+    else
+        echo "Unsupported distro: cannot stop elasticsearch."
+    fi
+}
+
+function install_elasticsearch {
+    if is_package_installed elasticsearch; then
+        echo "Note: elasticsearch was already installed."
+        return
+    fi
+    if is_ubuntu; then
+        is_package_installed openjdk-7-jre-headless || install_package openjdk-7-jre-headless
+
+        sudo dpkg -i ${FILES}/elasticsearch-${ELASTICSEARCH_VERSION}.deb
+        sudo update-rc.d elasticsearch defaults 95 10
+    elif is_fedora; then
+        is_package_installed java-1.7.0-openjdk-headless || install_package java-1.7.0-openjdk-headless
+        yum_install ${FILES}/elasticsearch-${ELASTICSEARCH_VERSION}.noarch.rpm
+        sudo /bin/systemctl daemon-reload
+        sudo /bin/systemctl enable elasticsearch.service
+    else
+        echo "Unsupported distro: cannot install elasticsearch."
+    fi
+}
+
+function uninstall_elasticsearch {
+    if is_package_installed elasticsearch; then
+        if is_ubuntu; then
+            sudo apt-get purge elasticsearch
+        elif is_fedora; then
+            sudo yum remove elasticsearch
+        else
+            echo "Unsupported distro: cannot uninstall elasticsearch."
+        fi
+    fi
+}
+
+# The PHASE dispatcher. All pkg files are expected to basically cargo
+# cult the case statement.
+PHASE=$1
+echo "Phase is $PHASE"
+
+case $PHASE in
+    download)
+        download_elasticsearch
+        ;;
+    install)
+        install_elasticsearch
+        ;;
+    configure)
+        configure_elasticsearch
+        ;;
+    start)
+        start_elasticsearch
+        ;;
+    stop)
+        stop_elasticsearch
+        ;;
+    uninstall)
+        uninstall_elasticsearch
+        ;;
+esac
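Each pkg/ script is driven by its phase argument, so one file serves both download caching (e.g. on image-build or mirror nodes) and install/start at stack time: `./pkg/elasticsearch.sh download`, then `./pkg/elasticsearch.sh install`. A minimal, self-contained copy of the dispatch pattern with stub phase functions, plus a fallback arm for unrecognized phases (the script above has no fallback, so a mistyped phase is silently ignored):

```shell
#!/bin/bash
# Stub phase implementations standing in for the real
# download_/install_/start_ functions.
function download_example { echo "downloading"; }
function install_example  { echo "installing"; }

function dispatch {
    case $1 in
        download)
            download_example
            ;;
        install)
            install_example
            ;;
        *)
            echo "unknown phase: $1" >&2
            return 1
            ;;
    esac
}

dispatch download   # -> downloading
dispatch install    # -> installing
```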
diff --git a/stack.sh b/stack.sh
index eaecea0..c8905a7 100755
--- a/stack.sh
+++ b/stack.sh
@@ -500,7 +500,6 @@
 # Source project function libraries
 source $TOP_DIR/lib/infra
 source $TOP_DIR/lib/oslo
-source $TOP_DIR/lib/stackforge
 source $TOP_DIR/lib/lvm
 source $TOP_DIR/lib/horizon
 source $TOP_DIR/lib/keystone
@@ -585,7 +584,7 @@
 # The available database backends are listed in ``DATABASE_BACKENDS`` after
 # ``lib/database`` is sourced. ``mysql`` is the default.
 
-initialize_database_backends && echo "Using $DATABASE_TYPE database backend" || die $LINENO "No database enabled"
+initialize_database_backends && echo "Using $DATABASE_TYPE database backend" || echo "No database enabled"
 
 
 # Queue Configuration
@@ -699,11 +698,6 @@
 # Install oslo libraries that have graduated
 install_oslo
 
-# Install stackforge libraries for testing
-if is_service_enabled stackforge_libs; then
-    install_stackforge
-fi
-
 # Install clients libraries
 install_keystoneclient
 install_glanceclient
diff --git a/stackrc b/stackrc
index 99748ce..7bb4cc2 100644
--- a/stackrc
+++ b/stackrc
@@ -32,11 +32,15 @@
 # ``disable_service`` functions in ``local.conf``.
 # For example, to enable Swift add this to ``local.conf``:
 #  enable_service s-proxy s-object s-container s-account
-# In order to enable nova-networking add the following settings in
-# `` local.conf ``:
+# In order to enable Neutron (a single node setup) add the following
+# settings in ``local.conf``:
 #  [[local|localrc]]
-#  disable_service q-svc q-agt q-dhcp q-l3 q-meta
-#  enable_service n-net
+#  disable_service n-net
+#  enable_service q-svc
+#  enable_service q-agt
+#  enable_service q-dhcp
+#  enable_service q-l3
+#  enable_service q-meta
 #  # Optional, to enable tempest configuration as part of devstack
 #  enable_service tempest
 function isset {
@@ -50,16 +54,14 @@
 
 # this allows us to pass ENABLED_SERVICES
 if ! isset ENABLED_SERVICES ; then
-    # core compute (glance / keystone / nova)
-    ENABLED_SERVICES=g-api,g-reg,key,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-xvnc,n-cauth
+    # core compute (glance / keystone / nova (+ nova-network))
+    ENABLED_SERVICES=g-api,g-reg,key,n-api,n-crt,n-obj,n-cpu,n-net,n-cond,n-sch,n-xvnc,n-cauth
     # cinder
     ENABLED_SERVICES+=,c-sch,c-api,c-vol
     # heat
     ENABLED_SERVICES+=,h-eng,h-api,h-api-cfn,h-api-cw
     # dashboard
     ENABLED_SERVICES+=,horizon
-    # neutron
-    ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta
     # additional services
     ENABLED_SERVICES+=,rabbit,tempest,mysql
 fi
@@ -434,26 +436,6 @@
 
 #################
 #
-#  Additional Libraries
-#
-#################
-
-# stackforge libraries that are used by OpenStack core services
-# wsme
-WSME_REPO=${WSME_REPO:-${GIT_BASE}/stackforge/wsme.git}
-WSME_BRANCH=${WSME_BRANCH:-master}
-
-# pecan
-PECAN_REPO=${PECAN_REPO:-${GIT_BASE}/stackforge/pecan.git}
-PECAN_BRANCH=${PECAN_BRANCH:-master}
-
-# sqlalchemy-migrate
-SQLALCHEMY_MIGRATE_REPO=${SQLALCHEMY_MIGRATE_REPO:-${GIT_BASE}/stackforge/sqlalchemy-migrate.git}
-SQLALCHEMY_MIGRATE_BRANCH=${SQLALCHEMY_MIGRATE_BRANCH:-master}
-
-
-#################
-#
 #  3rd Party Components (non pip installable)
 #
 #  NOTE(sdague): these should be converted to release version installs or removed
diff --git a/tests/test_ip.sh b/tests/test_ip.sh
index e9cbcca..add8d1a 100755
--- a/tests/test_ip.sh
+++ b/tests/test_ip.sh
@@ -8,9 +8,6 @@
 # Import common functions
 source $TOP/functions
 
-# Import configuration
-source $TOP/openrc
-
 
 echo "Testing IP addr functions"
 
diff --git a/tests/test_libs_from_pypi.sh b/tests/test_libs_from_pypi.sh
index 7e96bae..6e1b515 100755
--- a/tests/test_libs_from_pypi.sh
+++ b/tests/test_libs_from_pypi.sh
@@ -17,6 +17,8 @@
 
 export TOP_DIR=$TOP
 
+# we don't actually care about the HOST_IP
+HOST_IP="don't care"
 # Import common functions
 source $TOP/functions
 source $TOP/stackrc
diff --git a/tools/xen/xenrc b/tools/xen/xenrc
index 0cbf861..43a6ce8 100644
--- a/tools/xen/xenrc
+++ b/tools/xen/xenrc
@@ -70,8 +70,8 @@
 # XenServer 6.1 and later or XCP 1.6 or later
 # 11.10 is only really supported with XenServer 6.0.2 and later
 UBUNTU_INST_ARCH="amd64"
-UBUNTU_INST_HTTP_HOSTNAME="mirror.anl.gov"
-UBUNTU_INST_HTTP_DIRECTORY="/pub/ubuntu"
+UBUNTU_INST_HTTP_HOSTNAME="archive.ubuntu.com"
+UBUNTU_INST_HTTP_DIRECTORY="/ubuntu"
 UBUNTU_INST_HTTP_PROXY=""
 UBUNTU_INST_LOCALE="en_US"
 UBUNTU_INST_KEYBOARD="us"
diff --git a/unstack.sh b/unstack.sh
index 9a283ba..6deeba2 100755
--- a/unstack.sh
+++ b/unstack.sh
@@ -54,7 +54,6 @@
 # Source project function libraries
 source $TOP_DIR/lib/infra
 source $TOP_DIR/lib/oslo
-source $TOP_DIR/lib/stackforge
 source $TOP_DIR/lib/lvm
 source $TOP_DIR/lib/horizon
 source $TOP_DIR/lib/keystone
@@ -133,6 +132,9 @@
     stop_tls_proxy
     cleanup_CA
 fi
+if [ "$USE_SSL" == "True" ]; then
+    cleanup_CA
+fi
 
 SCSI_PERSIST_DIR=$CINDER_STATE_PATH/volumes/*