Merge "Fix error detection & exit in report_results"
diff --git a/HACKING.rst b/HACKING.rst
index 6bd24b0..d66687e 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -250,8 +250,7 @@
   database access from the exercise itself.
 
 * If specific configuration needs to be present for the exercise to complete,
-  it should be staged in ``stack.sh``, or called from ``stack.sh`` (see
-  ``files/keystone_data.sh`` for an example of this).
+  it should be staged in ``stack.sh``, or called from ``stack.sh``.
 
 * The ``OS_*`` environment variables should be the only ones used for all
   authentication to OpenStack clients as documented in the CLIAuth_ wiki page.
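
For illustration only (the values below are placeholders, not part of the tree), an exercise that follows this rule authenticates purely through variables such as:

::

    # hypothetical credentials; real runs source them from openrc
    export OS_AUTH_URL=http://192.168.42.1:5000/v2.0
    export OS_USERNAME=demo
    export OS_PASSWORD=secret
    export OS_TENANT_NAME=demo

    # any OpenStack client picks these up without extra flags
    openstack token issue
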
diff --git a/doc/source/configuration.rst b/doc/source/configuration.rst
index d70d3da..d87809a 100644
--- a/doc/source/configuration.rst
+++ b/doc/source/configuration.rst
@@ -202,8 +202,8 @@
 
         LOGDIR=$DEST/logs
 
-*Note the use of ``DEST`` to locate the main install directory; this
-is why we suggest setting it in ``local.conf``.*
+Note the use of ``DEST`` to locate the main install directory; this
+is why we suggest setting it in ``local.conf``.
 
 Enabling Syslog
 ~~~~~~~~~~~~~~~
@@ -211,7 +211,7 @@
 Logging all services to a single syslog can be convenient. Enable
 syslogging by setting ``SYSLOG`` to ``True``. If the destination log
 host is not localhost ``SYSLOG_HOST`` and ``SYSLOG_PORT`` can be used
-to direct the message stream to the log host.  |
+to direct the message stream to the log host.
 
     ::
 
@@ -239,15 +239,15 @@
 
 Multiple database backends are available. The available databases are defined
 in the lib/databases directory.
-`mysql` is the default database, choose a different one by putting the
-following in the `localrc` section:
+``mysql`` is the default database; choose a different one by putting the
+following in the ``localrc`` section:
 
    ::
 
       disable_service mysql
       enable_service postgresql
 
-`mysql` is the default database.
+``mysql`` is the default database.
 
 RPC Backend
 -----------
@@ -260,6 +260,7 @@
 Example disabling RabbitMQ in ``local.conf``:
 
 ::
+
     disable_service rabbit
 
 
@@ -393,7 +394,7 @@
         KEYSTONE_CATALOG_BACKEND=template
 
 DevStack's default configuration in ``sql`` mode is set in
-``files/keystone_data.sh``
+``lib/keystone``
 
 
 Guest Images
@@ -511,7 +512,7 @@
 object services will run directly in screen. The others services like
 replicator, updaters or auditor runs in background.
 
-If you would like to enable Swift you can add this to your `localrc`
+If you would like to enable Swift you can add this to your ``localrc``
 section:
 
 ::
@@ -519,7 +520,7 @@
     enable_service s-proxy s-object s-container s-account
 
 If you want a minimal Swift install with only Swift and Keystone you
-can have this instead in your `localrc` section:
+can have this instead in your ``localrc`` section:
 
 ::
 
@@ -528,24 +529,24 @@
 
 If you only want to do some testing of a real normal swift cluster
 with multiple replicas you can do so by customizing the variable
-`SWIFT_REPLICAS` in your `localrc` section (usually to 3).
+``SWIFT_REPLICAS`` in your ``localrc`` section (usually to 3).
 
 Swift S3
 ++++++++
 
-If you are enabling `swift3` in `ENABLED_SERVICES` DevStack will
+If you are enabling ``swift3`` in ``ENABLED_SERVICES`` DevStack will
 install the swift3 middleware emulation. Swift will be configured to
 act as a S3 endpoint for Keystone so effectively replacing the
-`nova-objectstore`.
+``nova-objectstore``.
 
 Only Swift proxy server is launched in the screen session all other
-services are started in background and managed by `swift-init` tool.
+services are started in the background and managed by the ``swift-init`` tool.
 
 Heat
 ~~~~
 
-Heat is disabled by default (see `stackrc` file). To enable it
-explicitly you'll need the following settings in your `localrc`
+Heat is disabled by default (see ``stackrc`` file). To enable it
+explicitly you'll need the following settings in your ``localrc``
 section
 
 ::
@@ -554,7 +555,7 @@
 
 Heat can also run in standalone mode, and be configured to orchestrate
 on an external OpenStack cloud. To launch only Heat in standalone mode
-you'll need the following settings in your `localrc` section
+you'll need the following settings in your ``localrc`` section
 
 ::
 
@@ -590,14 +591,14 @@
 ~~~~~~~~~
 
 If you would like to use Xenserver as the hypervisor, please refer to
-the instructions in `./tools/xen/README.md`.
+the instructions in ``./tools/xen/README.md``.
 
 Cells
 ~~~~~
 
 `Cells <http://wiki.openstack.org/blueprint-nova-compute-cells>`__ is
 an alternative scaling option.  To setup a cells environment add the
-following to your `localrc` section:
+following to your ``localrc`` section:
 
 ::
 
diff --git a/doc/source/guides/devstack-with-lbaas-v2.rst b/doc/source/guides/devstack-with-lbaas-v2.rst
index f679783..f3bd2fe 100644
--- a/doc/source/guides/devstack-with-lbaas-v2.rst
+++ b/doc/source/guides/devstack-with-lbaas-v2.rst
@@ -1,13 +1,17 @@
-Configure Load-Balancer in Kilo
+Configure Load-Balancer Version 2
 =================================
 
-The Kilo release of OpenStack will support Version 2 of the neutron load balancer. Until now, using OpenStack `LBaaS V2 <http://docs.openstack.org/api/openstack-network/2.0/content/lbaas_ext.html>`_ has required a good understanding of neutron and LBaaS architecture and several manual steps.
+As of the OpenStack Liberty release, the
+`neutron LBaaS v2 API <http://developer.openstack.org/api-ref-networking-v2-ext.html>`_
+is stable, while the LBaaS v1 API has been deprecated. The LBaaS v2 reference
+driver is based on Octavia.
 
 
 Phase 1: Create DevStack + 2 nova instances
 --------------------------------------------
 
-First, set up a vm of your choice with at least 8 GB RAM and 16 GB disk space, make sure it is updated. Install git and any other developer tools you find useful.
+First, set up a VM of your choice with at least 8 GB RAM and 16 GB disk space,
+and make sure it is updated. Install git and any other developer tools you find useful.
 
 Install devstack
 
@@ -17,13 +21,14 @@
     cd devstack
 
 
-Edit your `local.conf` to look like
+Edit your ``local.conf`` to look like
 
   ::
 
     [[local|localrc]]
     # Load the external LBaaS plugin.
     enable_plugin neutron-lbaas https://git.openstack.org/openstack/neutron-lbaas
+    enable_plugin octavia https://git.openstack.org/openstack/octavia
 
     # ===== BEGIN localrc =====
     DATABASE_PASSWORD=password
@@ -42,13 +47,13 @@
     ENABLED_SERVICES+=,horizon
     # Nova
     ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch
-    IMAGE_URLS+=",https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img"
     # Glance
     ENABLED_SERVICES+=,g-api,g-reg
     # Neutron
     ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta
-    # Enable LBaaS V2
+    # Enable LBaaS v2
     ENABLED_SERVICES+=,q-lbaasv2
+    ENABLED_SERVICES+=,octavia,o-cw,o-hk,o-hm,o-api
     # Cinder
     ENABLED_SERVICES+=,c-api,c-vol,c-sch
     # Tempest
@@ -69,11 +74,11 @@
   ::
 
     #create nova instances on private network
-    nova boot --image $(nova image-list | awk '/ cirros-0.3.0-x86_64-disk / {print $2}') --flavor 1 --nic net-id=$(neutron net-list | awk '/ private / {print $2}') node1
-    nova boot --image $(nova image-list | awk '/ cirros-0.3.0-x86_64-disk / {print $2}') --flavor 1 --nic net-id=$(neutron net-list | awk '/ private / {print $2}') node2
+    nova boot --image $(nova image-list | awk '/ cirros-.*-x86_64-uec / {print $2}') --flavor 1 --nic net-id=$(neutron net-list | awk '/ private / {print $2}') node1
+    nova boot --image $(nova image-list | awk '/ cirros-.*-x86_64-uec / {print $2}') --flavor 1 --nic net-id=$(neutron net-list | awk '/ private / {print $2}') node2
     nova list # should show the nova instances just created
 
-    #add secgroup rule to allow ssh etc..
+    #add secgroup rules to allow ssh, icmp and http
     neutron security-group-rule-create default --protocol icmp
     neutron security-group-rule-create default --protocol tcp --port-range-min 22 --port-range-max 22
     neutron security-group-rule-create default --protocol tcp --port-range-min 80 --port-range-max 80
@@ -91,9 +96,16 @@
  ::
 
     neutron lbaas-loadbalancer-create --name lb1 private-subnet
+    neutron lbaas-loadbalancer-show lb1  # Wait for the provisioning_status to be ACTIVE.
     neutron lbaas-listener-create --loadbalancer lb1 --protocol HTTP --protocol-port 80 --name listener1
+    sleep 10  # Sleep since LBaaS actions can take a few seconds depending on the environment.
     neutron lbaas-pool-create --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP --name pool1
+    sleep 10
     neutron lbaas-member-create  --subnet private-subnet --address 10.0.0.3 --protocol-port 80 pool1
+    sleep 10
     neutron lbaas-member-create  --subnet private-subnet --address 10.0.0.5 --protocol-port 80 pool1
 
-Please note here that the "10.0.0.3" and "10.0.0.5" in the above commands are the IPs of the nodes (in my test run-thru, they were actually 10.2 and 10.4), and the address of the created LB will be reported as "vip_address" from the lbaas-loadbalancer-create, and a quick test of that LB is "curl that-lb-ip", which should alternate between showing the IPs of the two nodes.
+Note that the "10.0.0.3" and "10.0.0.5" in the above commands are the IPs of the two nodes
+(in one test run they were actually 10.2 and 10.4). The address of the created load balancer
+is reported as "vip_address" by lbaas-loadbalancer-create. A quick test of that LB is
+"curl that-lb-ip", which should alternate between showing the IPs of the two nodes.
diff --git a/doc/source/guides/devstack-with-nested-kvm.rst b/doc/source/guides/devstack-with-nested-kvm.rst
index c652bac..85a5656 100644
--- a/doc/source/guides/devstack-with-nested-kvm.rst
+++ b/doc/source/guides/devstack-with-nested-kvm.rst
@@ -50,7 +50,7 @@
     parm:           nested:bool
 
 Start your VM, now it should have KVM capabilities -- you can verify
-that by ensuring `/dev/kvm` character device is present.
+that by ensuring ``/dev/kvm`` character device is present.
 
 
 Configure Nested KVM for AMD-based Machines
@@ -97,7 +97,7 @@
 Expose Virtualization Extensions to DevStack VM
 -----------------------------------------------
 
-Edit the VM's libvirt XML configuration via `virsh` utility:
+Edit the VM's libvirt XML configuration via ``virsh`` utility:
 
 ::
 
@@ -115,10 +115,10 @@
 -------------------------------
 
 Before invoking ``stack.sh`` in the VM, ensure that KVM is enabled. This
-can be verified by checking for the presence of the file `/dev/kvm` in
+can be verified by checking for the presence of the file ``/dev/kvm`` in
 your VM. If it is present, DevStack will default to using the config
-attribute `virt_type = kvm` in `/etc/nova.conf`; otherwise, it'll fall
-back to `virt_type=qemu`, i.e. plain QEMU emulation.
+attribute ``virt_type = kvm`` in ``/etc/nova.conf``; otherwise, it'll fall
+back to ``virt_type=qemu``, i.e. plain QEMU emulation.
 
 Optionally, to explicitly set the type of virtualization, to KVM, by the
 libvirt driver in nova, the below config attribute can be used in
@@ -131,7 +131,7 @@
 
 Once DevStack is configured successfully, verify if the Nova instances
 are using KVM by noticing the QEMU CLI invoked by Nova is using the
-parameter `accel=kvm`, e.g.:
+parameter ``accel=kvm``, e.g.:
 
 ::
 
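
A small sketch of that verification from inside the DevStack VM (paths and process names may differ slightly between distributions):

::

    # the KVM character device must exist before stack.sh runs
    test -c /dev/kvm && echo "KVM available"

    # after stacking, nova should have selected kvm rather than plain qemu
    # (in a stock devstack the file lives at /etc/nova/nova.conf)
    grep virt_type /etc/nova/nova.conf

    # a booted instance's qemu command line should include accel=kvm
    ps aux | grep -o 'accel=kvm' | sort -u
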
diff --git a/doc/source/guides/neutron.rst b/doc/source/guides/neutron.rst
index ee29087..996c7d1 100644
--- a/doc/source/guides/neutron.rst
+++ b/doc/source/guides/neutron.rst
@@ -340,8 +340,8 @@
 **Compute Nodes**
 
 In this example, the nodes that will host guest instances will run
-the `neutron-openvswitch-agent` for network connectivity, as well as
-the compute service `nova-compute`.
+the ``neutron-openvswitch-agent`` for network connectivity, as well as
+the compute service ``nova-compute``.
 
 DevStack Configuration
 ----------------------
@@ -426,16 +426,16 @@
         Q_L3_ENABLED=False
 
 Compute node 2's configuration will be exactly the same, except
-`HOST_IP` will be `10.0.0.4`
+``HOST_IP`` will be ``10.0.0.4``.
 
 When DevStack is configured to use provider networking (via
-`Q_USE_PROVIDER_NETWORKING` is True and `Q_L3_ENABLED` is False) -
+``Q_USE_PROVIDER_NETWORKING`` set to True and ``Q_L3_ENABLED`` set to False),
 DevStack will automatically add the network interface defined in
-`PUBLIC_INTERFACE` to the `OVS_PHYSICAL_BRIDGE`
+``PUBLIC_INTERFACE`` to the ``OVS_PHYSICAL_BRIDGE``.
 
 For example, with the above  configuration, a bridge is
-created, named `br-ex` which is managed by Open vSwitch, and the
-second interface on the compute node, `eth1` is attached to the
+created, named ``br-ex``, which is managed by Open vSwitch, and the
+second interface on the compute node, ``eth1``, is attached to the
 bridge, to forward traffic sent by guest VMs.
 
 Miscellaneous Tips
@@ -477,7 +477,7 @@
 ------------------------------------------------
 
 Extension drivers for the ML2 plugin are set with the variable
-`Q_ML2_PLUGIN_EXT_DRIVERS`, and includes the 'port_security' extension
+``Q_ML2_PLUGIN_EXT_DRIVERS``, which includes the 'port_security' extension
 by default. If you want to remove all the extension drivers (even
-'port_security'), set `Q_ML2_PLUGIN_EXT_DRIVERS` to blank.
+'port_security'), set ``Q_ML2_PLUGIN_EXT_DRIVERS`` to blank.
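
For example, a hedged ``local.conf`` fragment that clears the extension drivers entirely:

::

    # in the [[local|localrc]] section of local.conf:
    # no ML2 extension drivers at all, not even 'port_security'
    Q_ML2_PLUGIN_EXT_DRIVERS=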
 
diff --git a/exercises/swift.sh b/exercises/swift.sh
index afcede8..4a41e0f 100755
--- a/exercises/swift.sh
+++ b/exercises/swift.sh
@@ -2,7 +2,7 @@
 
 # **swift.sh**
 
-# Test swift via the ``swift`` command line from ``python-swiftclient``
+# Test swift via the ``python-openstackclient`` command line
 
 echo "*********************************************************************"
 echo "Begin DevStack Exercise: $0"
@@ -39,26 +39,29 @@
 
 # Container name
 CONTAINER=ex-swift
+OBJECT=/etc/issue
 
 
 # Testing Swift
 # =============
 
 # Check if we have to swift via keystone
-swift stat || die $LINENO "Failure getting status"
+openstack object store account show || die $LINENO "Failure getting account status"
 
 # We start by creating a test container
 openstack container create $CONTAINER || die $LINENO "Failure creating container $CONTAINER"
 
-# add some files into it.
-openstack object create $CONTAINER /etc/issue || die $LINENO "Failure uploading file to container $CONTAINER"
+# add a file into it.
+openstack object create $CONTAINER $OBJECT || die $LINENO "Failure uploading file to container $CONTAINER"
 
-# list them
+# list the objects
 openstack object list $CONTAINER || die $LINENO "Failure listing contents of container $CONTAINER"
 
-# And we may want to delete them now that we have tested that
-# everything works.
-swift delete $CONTAINER || die $LINENO "Failure deleting container $CONTAINER"
+# delete the object first
+openstack object delete $CONTAINER $OBJECT || die $LINENO "Failure deleting object $OBJECT in container $CONTAINER"
+
+# delete the container
+openstack container delete $CONTAINER || die $LINENO "Failure deleting container $CONTAINER"
 
 set +o xtrace
 echo "*********************************************************************"
diff --git a/files/debs/cinder b/files/debs/cinder
index 48b8d0f..3595e01 100644
--- a/files/debs/cinder
+++ b/files/debs/cinder
@@ -1,4 +1,3 @@
-libpq-dev
 lvm2
 open-iscsi
 open-iscsi-utils # Deprecated since quantal dist:precise
diff --git a/files/rpms-suse/cinder b/files/rpms-suse/cinder
index 56b1bb5..189a232 100644
--- a/files/rpms-suse/cinder
+++ b/files/rpms-suse/cinder
@@ -1,6 +1,4 @@
 lvm2
 open-iscsi
-postgresql-devel
-python-devel
 qemu-tools
 tgt # NOPRIME
diff --git a/files/rpms-suse/glance b/files/rpms-suse/glance
deleted file mode 100644
index bf512de..0000000
--- a/files/rpms-suse/glance
+++ /dev/null
@@ -1 +0,0 @@
-python-devel
diff --git a/files/rpms-suse/keystone b/files/rpms-suse/keystone
index c838b41..46832c7 100644
--- a/files/rpms-suse/keystone
+++ b/files/rpms-suse/keystone
@@ -1,4 +1,3 @@
 cyrus-sasl-devel
 openldap2-devel
-python-devel
 sqlite3
diff --git a/files/rpms-suse/neutron b/files/rpms-suse/neutron
index 4b0eefa..e9abc6e 100644
--- a/files/rpms-suse/neutron
+++ b/files/rpms-suse/neutron
@@ -5,7 +5,6 @@
 iptables
 iputils
 mariadb # NOPRIME
-postgresql-devel
 rabbitmq-server # NOPRIME
 radvd # NOPRIME
 sqlite3
diff --git a/files/rpms-suse/nova b/files/rpms-suse/nova
index 2f3ad21..ae115d2 100644
--- a/files/rpms-suse/nova
+++ b/files/rpms-suse/nova
@@ -14,7 +14,6 @@
 mariadb # NOPRIME
 parted
 polkit
-python-devel
 # qemu as fallback if kvm cannot be used
 qemu # NOPRIME
 rabbitmq-server # NOPRIME
diff --git a/files/rpms-suse/swift b/files/rpms-suse/swift
index 6a824f9..52e0a99 100644
--- a/files/rpms-suse/swift
+++ b/files/rpms-suse/swift
@@ -1,6 +1,5 @@
 curl
 memcached
-python-devel
 sqlite3
 xfsprogs
 xinetd
diff --git a/files/rpms/cinder b/files/rpms/cinder
index f28f04d..0274642 100644
--- a/files/rpms/cinder
+++ b/files/rpms/cinder
@@ -1,5 +1,4 @@
 iscsi-initiator-utils
 lvm2
-postgresql-devel
 qemu-img
 scsi-target-utils # NOPRIME
diff --git a/files/rpms/neutron b/files/rpms/neutron
index b3f79ed..9683475 100644
--- a/files/rpms/neutron
+++ b/files/rpms/neutron
@@ -9,7 +9,6 @@
 MySQL-python
 mysql-server # NOPRIME
 openvswitch # NOPRIME
-postgresql-devel
 rabbitmq-server # NOPRIME
 radvd # NOPRIME
 sqlite
diff --git a/functions-common b/functions-common
index 2fcd9f8..6a065ba 100644
--- a/functions-common
+++ b/functions-common
@@ -910,16 +910,11 @@
 # Usage: _get_or_create_endpoint_with_interface <service> <interface> <url> <region>
 function _get_or_create_endpoint_with_interface {
     local endpoint_id
-    # TODO(dgonzalez): The check of the region name, as done in the grep
-    # statement below, exists only because keystone does currently
-    # not allow filtering the region name when listing endpoints. If keystone
-    # gets support for this, the check for the region name can be removed.
-    # Related bug in keystone: https://bugs.launchpad.net/keystone/+bug/1482772
     endpoint_id=$(openstack endpoint list \
         --service $1 \
         --interface $2 \
         --region $4 \
-        -c ID -c Region -f value | grep $4 | cut -f 1 -d " ")
+        -c ID -f value)
     if [[ -z "$endpoint_id" ]]; then
         # Creates new endpoint
         endpoint_id=$(openstack endpoint create \
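
For reference, a hypothetical invocation of this helper, following the ``<service> <interface> <url> <region>`` usage shown above (service, URL and region are placeholders):

::

    _get_or_create_endpoint_with_interface glance public http://10.0.0.2:9292 RegionOne
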
diff --git a/lib/neutron-legacy b/lib/neutron-legacy
index 79c3140..85f7fc0 100644
--- a/lib/neutron-legacy
+++ b/lib/neutron-legacy
@@ -256,7 +256,7 @@
 
 # If using GRE tunnels for tenant networks, specify the range of
 # tunnel IDs from which tenant networks are allocated. Can be
-# overriden in ``localrc`` in necesssary.
+# overridden in ``localrc`` if necessary.
 TENANT_TUNNEL_RANGES=${TENANT_TUNNEL_RANGES:-1:1000}
 
 # To use VLANs for tenant networks, set to True in localrc. VLANs
@@ -536,7 +536,7 @@
 
     if is_provider_network; then
         die_if_not_set $LINENO PHYSICAL_NETWORK "You must specify the PHYSICAL_NETWORK"
-        die_if_not_set $LINENO PROVIDER_NETWORK_TYPE "You must specifiy the PROVIDER_NETWORK_TYPE"
+        die_if_not_set $LINENO PROVIDER_NETWORK_TYPE "You must specify the PROVIDER_NETWORK_TYPE"
         NET_ID=$(neutron net-create $PHYSICAL_NETWORK --tenant_id $TENANT_ID --provider:network_type $PROVIDER_NETWORK_TYPE --provider:physical_network "$PHYSICAL_NETWORK" ${SEGMENTATION_ID:+--provider:segmentation_id $SEGMENTATION_ID} --shared | grep ' id ' | get_field 2)
         die_if_not_set $LINENO NET_ID "Failure creating NET_ID for $PHYSICAL_NETWORK $TENANT_ID"
 
@@ -638,7 +638,7 @@
         plugin_dir=$($ssh_dom0 "$xen_functions; set -eux; xapi_plugin_location")
 
         # install neutron plugins to dom0
-        tar -czf - -C $NEUTRON_DIR/neutron/plugins/openvswitch/agent/xenapi/etc/xapi.d/plugins/ ./ |
+        tar -czf - -C $NEUTRON_DIR/neutron/plugins/ml2/drivers/openvswitch/agent/xenapi/etc/xapi.d/plugins/ ./ |
             $ssh_dom0 "tar -xzf - -C $plugin_dir && chmod a+x $plugin_dir/*"
     fi
 }
@@ -806,7 +806,7 @@
         fi
 
         if [[ $af == "inet6" ]]; then
-            IP_BRD=$(ip -f $af a s dev $from_intf | grep inet6 | awk '{ print $2, $3, $4; exit }')
+            IP_BRD=$(ip -f $af a s dev $from_intf | grep 'scope global' | sed '/temporary/d' | awk '{ print $2, $3, $4; exit }')
         fi
 
         if [ "$DEFAULT_ROUTE_GW" != "" ]; then
@@ -834,6 +834,10 @@
         _move_neutron_addresses_route "$OVS_PHYSICAL_BRIDGE" "$PUBLIC_INTERFACE" False "inet"
 
         if [[ $(ip -f inet6 a s dev "$OVS_PHYSICAL_BRIDGE" | grep -c 'global') != 0 ]]; then
+            # ip(8) wants the prefix length when deleting
+            local v6_gateway
+            v6_gateway=$(ip -6 a s dev $OVS_PHYSICAL_BRIDGE | grep $IPV6_PUBLIC_NETWORK_GATEWAY | awk '{ print $2 }')
+            sudo ip -6 addr del $v6_gateway dev $OVS_PHYSICAL_BRIDGE
             _move_neutron_addresses_route "$OVS_PHYSICAL_BRIDGE" "$PUBLIC_INTERFACE" False "inet6"
         fi
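
The prefix-length requirement mentioned in the new comment is plain ``ip(8)`` behaviour; as a standalone sketch (address and device are placeholders):

::

    # addresses are deleted with the same prefix length they were added with,
    # which is why $v6_gateway keeps the "addr/prefixlen" form from `ip -6 a s`
    sudo ip -6 addr del 2001:db8::1/64 dev br-ex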
 
@@ -1122,7 +1126,7 @@
     iniset $NEUTRON_CONF DEFAULT auth_strategy $Q_AUTH_STRATEGY
     _neutron_setup_keystone $NEUTRON_CONF keystone_authtoken
 
-    # Configuration for neutron notifations to nova.
+    # Configuration for neutron notifications to nova.
     iniset $NEUTRON_CONF DEFAULT notify_nova_on_port_status_changes $Q_NOTIFY_NOVA_PORT_STATUS_CHANGES
     iniset $NEUTRON_CONF DEFAULT notify_nova_on_port_data_changes $Q_NOTIFY_NOVA_PORT_DATA_CHANGES
 
diff --git a/lib/neutron_plugins/openvswitch_agent b/lib/neutron_plugins/openvswitch_agent
index 5a843ff..6a33393 100644
--- a/lib/neutron_plugins/openvswitch_agent
+++ b/lib/neutron_plugins/openvswitch_agent
@@ -71,6 +71,9 @@
         # Make a copy of our config for domU
         sudo cp /$Q_PLUGIN_CONF_FILE "/$Q_PLUGIN_CONF_FILE.domU"
 
+        # change ownership of domU's config file to STACK_USER
+        sudo chown $STACK_USER:$STACK_USER /$Q_PLUGIN_CONF_FILE.domU
+
         # Deal with Dom0's L2 Agent:
         Q_RR_DOM0_COMMAND="$NEUTRON_BIN_DIR/neutron-rootwrap-xen-dom0 $Q_RR_CONF_FILE"
 
@@ -82,7 +85,14 @@
         # Under XS/XCP, the ovs agent needs to target the dom0
         # integration bridge.  This is enabled by using a root wrapper
         # that executes commands on dom0 via a XenAPI plugin.
+        # XenAPI does not currently support the rootwrap daemon, so set root_helper_daemon empty
         iniset /$Q_PLUGIN_CONF_FILE agent root_helper "$Q_RR_DOM0_COMMAND"
+        iniset /$Q_PLUGIN_CONF_FILE agent root_helper_daemon ""
+
+        # Disable minimize_polling so the agent can always detect OVS and port changes.
+        # This works around a XenServer + neutron issue; the bug has been reported at
+        # https://bugs.launchpad.net/neutron/+bug/1495423
+        iniset /$Q_PLUGIN_CONF_FILE agent minimize_polling False
 
         # Set "physical" mapping
         iniset /$Q_PLUGIN_CONF_FILE ovs bridge_mappings "physnet1:$FLAT_NETWORK_BRIDGE"
@@ -95,10 +105,14 @@
         # Create a bridge "br-$GUEST_INTERFACE_DEFAULT"
         _neutron_ovs_base_add_bridge "br-$GUEST_INTERFACE_DEFAULT"
         # Add $GUEST_INTERFACE_DEFAULT to that bridge
-        sudo ovs-vsctl add-port "br-$GUEST_INTERFACE_DEFAULT" $GUEST_INTERFACE_DEFAULT
+        sudo ovs-vsctl -- --may-exist add-port "br-$GUEST_INTERFACE_DEFAULT" $GUEST_INTERFACE_DEFAULT
+
+        # Create external bridge and add port
+        _neutron_ovs_base_add_bridge $PUBLIC_BRIDGE
+        sudo ovs-vsctl -- --may-exist add-port $PUBLIC_BRIDGE $PUBLIC_INTERFACE_DEFAULT
 
         # Set bridge mappings to "physnet1:br-$GUEST_INTERFACE_DEFAULT"
-        iniset "/$Q_PLUGIN_CONF_FILE.domU" ovs bridge_mappings "physnet1:br-$GUEST_INTERFACE_DEFAULT"
+        iniset "/$Q_PLUGIN_CONF_FILE.domU" ovs bridge_mappings "physnet1:br-$GUEST_INTERFACE_DEFAULT,physnet-ex:$PUBLIC_BRIDGE"
         # Set integration bridge to domU's
         iniset "/$Q_PLUGIN_CONF_FILE.domU" ovs integration_bridge $OVS_BRIDGE
         # Set root wrap
diff --git a/lib/rpc_backend b/lib/rpc_backend
index 03eacd8..298dcb6 100644
--- a/lib/rpc_backend
+++ b/lib/rpc_backend
@@ -58,7 +58,7 @@
         # NOTE(bnemec): Retry initial rabbitmq configuration to deal with
         # the fact that sometimes it fails to start properly.
         # Reference: https://bugzilla.redhat.com/show_bug.cgi?id=1144100
-        # NOTE(tonyb): Extend the orginal retry logic to only restart rabbitmq
+        # NOTE(tonyb): Extend the original retry logic to only restart rabbitmq
         # every second time around the loop.
         # See: https://bugs.launchpad.net/devstack/+bug/1449056 for details on
         # why this is needed.  This can bee seen on vivid and Debian unstable
@@ -106,7 +106,7 @@
     fi
 }
 
-# iniset cofiguration
+# iniset configuration
 function iniset_rpc_backend {
     local package=$1
     local file=$2
diff --git a/lib/stack b/lib/stack
index 47e8ce2..7d98604 100644
--- a/lib/stack
+++ b/lib/stack
@@ -14,7 +14,7 @@
 # Functions
 # ---------
 
-# Generic service install handles venv creation if confgured for service
+# Generic service install handles venv creation if configured for service
 # stack_install_service service
 function stack_install_service {
     local service=$1
diff --git a/lib/swift b/lib/swift
index ee0238d..d7ccc24 100644
--- a/lib/swift
+++ b/lib/swift
@@ -123,13 +123,13 @@
 # trace through the logs when looking for its use.
 SWIFT_LOG_TOKEN_LENGTH=${SWIFT_LOG_TOKEN_LENGTH:-12}
 
-# Set ``SWIFT_MAX_HEADER_SIZE`` to configure the maximun length of headers in
+# Set ``SWIFT_MAX_HEADER_SIZE`` to configure the maximum length of headers in
 # Swift API
 SWIFT_MAX_HEADER_SIZE=${SWIFT_MAX_HEADER_SIZE:-16384}
 
 # Set ``OBJECT_PORT_BASE``, ``CONTAINER_PORT_BASE``, ``ACCOUNT_PORT_BASE``
-# Port bases used in port number calclution for the service "nodes"
-# The specified port number will be used, the additinal ports calculated by
+# Port bases used in port number calculation for the service "nodes"
+# The specified port number will be used, the additional ports calculated by
 # base_port + node_num * 10
 OBJECT_PORT_BASE=${OBJECT_PORT_BASE:-6613}
 CONTAINER_PORT_BASE=${CONTAINER_PORT_BASE:-6611}
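
A quick worked example of that arithmetic (the node numbering here is only illustrative):

::

    OBJECT_PORT_BASE=6613
    for node_num in 1 2 3; do
        echo "object server $node_num listens on $(( OBJECT_PORT_BASE + node_num * 10 ))"
    done
    # prints 6623, 6633 and 6643
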
diff --git a/lib/tempest b/lib/tempest
index 32630db..76fd6ca 100644
--- a/lib/tempest
+++ b/lib/tempest
@@ -23,7 +23,7 @@
 #
 # Optional Dependencies:
 #
-# - ``ALT_*`` (similar vars exists in keystone_data.sh)
+# - ``ALT_*``
 # - ``LIVE_MIGRATION_AVAILABLE``
 # - ``USE_BLOCK_MIGRATION_FOR_LIVE_MIGRATION``
 # - ``DEFAULT_INSTANCE_TYPE``
@@ -377,6 +377,15 @@
         iniset $TEMPEST_CONFIG compute-feature-enabled shelve False
         # Cells doesn't support hot-plugging virtual interfaces.
         iniset $TEMPEST_CONFIG compute-feature-enabled interface_attach False
+
+        if [[ -z "$DEFAULT_INSTANCE_TYPE" ]]; then
+            # Cells supports resize but does not currently work with devstack
+            # because of the custom flavors created for Tempest runs which are
+            # not in the cells database.
+            # TODO(mriedem): work on adding a nova-manage command to sync
+            # flavors into the cells database.
+            iniset $TEMPEST_CONFIG compute-feature-enabled resize False
+        fi
     fi
 
     # Network
diff --git a/samples/local.conf b/samples/local.conf
index cb293b6..b92097d 100644
--- a/samples/local.conf
+++ b/samples/local.conf
@@ -93,9 +93,3 @@
 # moved by setting ``SWIFT_DATA_DIR``. The directory will be created
 # if it does not exist.
 SWIFT_DATA_DIR=$DEST/data
-
-# Tempest
-# -------
-
-# Install the tempest test suite
-enable_service tempest
diff --git a/stack.sh b/stack.sh
index 9b811b7..a3d943a 100755
--- a/stack.sh
+++ b/stack.sh
@@ -925,8 +925,8 @@
 restart_rpc_backend
 
 
-# Export Certicate Authority Bundle
-# ---------------------------------
+# Export Certificate Authority Bundle
+# -----------------------------------
 
 # If certificates were used and written to the SSL bundle file then these
 # should be exported so clients can validate their connections.
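
On the client side, validating against such a bundle usually just means pointing ``OS_CACERT`` at it; a hedged illustration (the path is a placeholder, not necessarily where ``stack.sh`` writes the bundle):

::

    export OS_CACERT=/opt/stack/data/ca-bundle.pem
    openstack catalog list   # TLS connections are now verified against the bundle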