Merge "Set fixed-key key manager"
diff --git a/SYSTEMD.rst b/SYSTEMD.rst
new file mode 100644
index 0000000..729fdf4
--- /dev/null
+++ b/SYSTEMD.rst
@@ -0,0 +1,189 @@
+===========================
+ Using Systemd in DevStack
+===========================
+
+.. note::
+
+   This is an in-progress document as we work out the way forward
+   with DevStack and systemd.
+
+DevStack can be run with all of its services as systemd unit
+files. Systemd is now the default init system for nearly every Linux
+distro, and it solves many of the problems related to managing
+poorly behaved processes.
+
+Why this instead of screen?
+===========================
+
+The screen model for DevStack was invented when the number of services
+a DevStack user would run was typically fewer than 10. This made
+jumping around with screen hot keys very easy. However, the landscape
+has changed: not all services can be stopped in screen (some run under
+Apache), and there are typically at least 20 services.
+
+There is also a common developer workflow of changing code in more
+than one service, and needing to restart a bunch of services for that
+to take effect.
+
+To enable this, add the following to your ``local.conf``::
+
+ USE_SYSTEMD=True
+
+Unit Structure
+==============
+
+.. note::
+
+   Originally we wanted to do this as user units; however, there are
+   issues with running those under non-interactive shells. For now,
+   we'll be running as system units. Some user unit code is left in
+   place in case we can switch back later.
+
+All DevStack services are created as systemd units named
+``devstack@$servicename.service``, as part of the DevStack slice. This
+lets us do certain operations at the slice level.
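+
+For example, to see every DevStack unit and its state at once::
+
+   sudo systemctl list-units 'devstack@*'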
+
+Manipulating Units
+==================
+
+The following examples use the ``n-cpu`` unit to make them clearer.
+
+Enable a unit (allows it to be started)::
+
+ sudo systemctl enable devstack@n-cpu.service
+
+Disable a unit::
+
+ sudo systemctl disable devstack@n-cpu.service
+
+Start a unit::
+
+ sudo systemctl start devstack@n-cpu.service
+
+Stop a unit::
+
+ sudo systemctl stop devstack@n-cpu.service
+
+Restart a unit::
+
+ sudo systemctl restart devstack@n-cpu.service
+
+See status of a unit::
+
+ sudo systemctl status devstack@n-cpu.service
+
+Operating on more than one unit at a time
+-----------------------------------------
+
+Systemd supports wildcards for unit operations. To restart every
+service in DevStack, you can do the following::
+
+ sudo systemctl restart devstack@*
+
+Or to see the status of all Nova processes you can do::
+
+ sudo systemctl status devstack@n-*
+
+We'll eventually make the unit names a bit more meaningful so that
+it's easier to understand what you are restarting.
+
+Querying Logs
+=============
+
+One of the other major things that comes with systemd is journald, a
+consolidated way to access logs (including querying through structured
+metadata). Logs are accessed via the ``journalctl`` command, which has
+powerful query facilities. We'll start with some common options.
+
+Follow logs for a specific service::
+
+ journalctl -f --unit devstack@n-cpu.service
+
+Follow logs for multiple services simultaneously::
+
+   journalctl -f --unit devstack@n-cpu.service \
+       --unit devstack@n-cond.service
+
+Or you can even use wildcards to follow all of the Nova services::
+
+ journalctl -f --unit devstack@n-*
+
+Use higher-precision timestamps::
+
+ journalctl -f -o short-precise --unit devstack@n-cpu.service
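+
+Limit output to a recent time window (``--since`` accepts the usual
+systemd time specifications)::
+
+   journalctl --unit devstack@n-cpu.service --since "10 minutes ago"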
+
+
+Known Issues
+============
+
+Be careful about systemd python libraries. There are three of them on
+PyPI, and they are all very different. Unfortunately, they all install
+into the ``systemd`` namespace, which can cause some issues (a quick
+sanity check follows the list).
+
+- ``systemd-python`` - this is the upstream-maintained library; it has
+  a version number like systemd itself (currently ``233``). This is
+  the one you want.
+- ``systemd`` - a python 3 only library, not what you want.
+- ``python-systemd`` - another library you don't want. Installing it
+ on a system will break ansible's ability to run.
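+
+A quick sanity check that the right library is installed (the upstream
+library provides the ``systemd.journal`` module)::
+
+   python -c "from systemd import journal; print(journal.__file__)"
+   pip freeze | grep -i systemd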
+
+
+If we were using user units, we would hit the problem that the
+``Group=`` parameter in the ``[Service]`` section doesn't seem to work
+with user units, even though the documentation says it should. That
+would mean an explicit ``/usr/bin/sg``, which has the downside of
+making the SYSLOG_IDENTIFIER be ``sg``. We could explicitly set that
+with ``SyslogIdentifier=``, but it's unfortunate that such a
+workaround would be needed. This is currently not a problem because
+we're only using system units.
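+
+For reference, a sketch of what that workaround might look like in a
+unit file (the service name and command are purely illustrative)::
+
+   [Service]
+   # Group= is ignored for user units, so wrap the command in sg
+   ExecStart=/usr/bin/sg libvirtd -c '/usr/local/bin/nova-compute'
+   # without this, journald attributes every message to "sg"
+   SyslogIdentifier=n-cpu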
+
+Future Work
+===========
+
+oslo.log journald
+-----------------
+
+Journald has an extremely rich mechanism for direct logging including
+structured metadata. We should enhance oslo.log to take advantage of
+that. It would let us do things like::
+
+ journalctl REQUEST_ID=......
+
+ journalctl INSTANCE_ID=......
+
+And get all lines related to the request id or instance id. (Note:
+this work has been started at https://review.openstack.org/#/c/451525/)
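+
+For illustration, the upstream ``systemd.journal`` binding can already
+send arbitrary structured fields, which is the mechanism oslo.log
+would build on (the field value here is a placeholder)::
+
+   python -c "from systemd import journal; journal.send('test', REQUEST_ID='req-abc')"
+   journalctl REQUEST_ID=req-abc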
+
+log colorizing
+--------------
+
+We lose log colorization through this process. We might want to build
+a custom colorizer that people could optionally pipe ``journalctl``
+output through.
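+
+As a stopgap, even grep's coloring can highlight log levels while
+passing every line through (the trailing ``|$`` alternative matches
+all lines)::
+
+   journalctl -f --unit devstack@n-cpu.service | \
+       grep --color=always -E 'ERROR|WARNING|$'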
+
+user units
+----------
+
+It would be great if we could run services as user units, so that
+there is a clear separation between code that runs as root and code
+that does not, ensuring that running as root never accidentally gets
+baked into services as an assumption. However, user units interact
+poorly with devstack-gate and the way that commands are run as users
+with ansible and su.
+
+Maybe someday we can figure that out.
+
+References
+==========
+
+- Arch Linux Wiki - https://wiki.archlinux.org/index.php/Systemd/User
+- Python interface to journald -
+ https://www.freedesktop.org/software/systemd/python-systemd/journal.html
+- Systemd documentation on service files -
+ https://www.freedesktop.org/software/systemd/man/systemd.service.html
+- Systemd documentation on exec (can be used to impact service runs) -
+ https://www.freedesktop.org/software/systemd/man/systemd.exec.html
diff --git a/clean.sh b/clean.sh
index bace3f5..90b21eb 100755
--- a/clean.sh
+++ b/clean.sh
@@ -49,7 +49,6 @@
source $TOP_DIR/lib/placement
source $TOP_DIR/lib/cinder
source $TOP_DIR/lib/swift
-source $TOP_DIR/lib/heat
source $TOP_DIR/lib/neutron
source $TOP_DIR/lib/neutron-legacy
@@ -108,7 +107,7 @@
fi
# Clean out /etc
-sudo rm -rf /etc/keystone /etc/glance /etc/nova /etc/cinder /etc/swift /etc/heat /etc/neutron /etc/openstack/
+sudo rm -rf /etc/keystone /etc/glance /etc/nova /etc/cinder /etc/swift /etc/neutron /etc/openstack/
# Clean out tgt
sudo rm -f /etc/tgt/conf.d/*
@@ -147,3 +146,13 @@
done
rm -rf ~/.config/openstack
+
+# Clean up all *.pyc files
+if [[ -n "$DEST" ]] && [[ -d "$DEST" ]]; then
+ find_version=`find --version | awk '{ print $NF; exit}'`
+ if vercmp "$find_version" "<" "4.2.3" ; then
+ sudo find $DEST -name "*.pyc" -print0 | xargs -0 rm
+ else
+ sudo find $DEST -name "*.pyc" -delete
+ fi
+fi
diff --git a/doc/source/configuration.rst b/doc/source/configuration.rst
index bc3f558..53ae82f 100644
--- a/doc/source/configuration.rst
+++ b/doc/source/configuration.rst
@@ -703,13 +703,13 @@
~~~~~~
The logical volume group used to hold the Cinder-managed volumes is
-set by ``VOLUME_GROUP``, the logical volume name prefix is set with
+set by ``VOLUME_GROUP_NAME``, the logical volume name prefix is set with
``VOLUME_NAME_PREFIX`` and the size of the volume backing file is set
with ``VOLUME_BACKING_FILE_SIZE``.
::
- VOLUME_GROUP="stack-volumes"
+ VOLUME_GROUP_NAME="stack-volumes"
VOLUME_NAME_PREFIX="volume-"
VOLUME_BACKING_FILE_SIZE=10250M
diff --git a/doc/source/faq.rst b/doc/source/faq.rst
index 7793d8e..f03304f 100644
--- a/doc/source/faq.rst
+++ b/doc/source/faq.rst
@@ -130,8 +130,8 @@
DevStack master tracks the upstream master of all the projects. If you
would like to run a stable branch of OpenStack, you should use the
corresponding stable branch of DevStack as well. For instance the
-``stable/kilo`` version of DevStack will already default to all the
-projects running at ``stable/kilo`` levels.
+``stable/ocata`` version of DevStack will already default to all the
+projects running at ``stable/ocata`` levels.
Note: it's also possible to manually adjust the ``*_BRANCH`` variables
further if you would like to test specific milestones, or even custom
diff --git a/doc/source/guides/devstack-with-nested-kvm.rst b/doc/source/guides/devstack-with-nested-kvm.rst
index 85a5656..3732f06 100644
--- a/doc/source/guides/devstack-with-nested-kvm.rst
+++ b/doc/source/guides/devstack-with-nested-kvm.rst
@@ -73,7 +73,7 @@
::
sudo rmmod kvm-amd
- sudo sh -c "echo 'options amd nested=1' >> /etc/modprobe.d/dist.conf"
+ sudo sh -c "echo 'options kvm-amd nested=1' >> /etc/modprobe.d/dist.conf"
sudo modprobe kvm-amd
Ensure the Nested KVM Kernel module parameter for AMD is enabled on the
diff --git a/doc/source/guides/multinode-lab.rst b/doc/source/guides/multinode-lab.rst
index 8751eb8..1a8ddbc 100644
--- a/doc/source/guides/multinode-lab.rst
+++ b/doc/source/guides/multinode-lab.rst
@@ -73,8 +73,7 @@
::
- groupadd stack
- useradd -g stack -s /bin/bash -d /opt/stack -m stack
+ useradd -s /bin/bash -d /opt/stack -m stack
This user will be making many changes to your system during installation
and operation so it needs to have sudo privileges to root without a
@@ -176,7 +175,7 @@
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
- ENABLED_SERVICES=n-cpu,n-net,n-api-meta,c-vol
+ ENABLED_SERVICES=n-cpu,q-agt,n-api-meta,c-vol,placement-client
NOVA_VNC_ENABLED=True
NOVNCPROXY_URL="http://$SERVICE_HOST:6080/vnc_auto.html"
VNCSERVER_LISTEN=$HOST_IP
@@ -294,10 +293,10 @@
``stack-volumes`` can be pre-created on any physical volume supported by
Linux's LVM. The name of the volume group can be changed by setting
-``VOLUME_GROUP`` in ``localrc``. ``stack.sh`` deletes all logical
-volumes in ``VOLUME_GROUP`` that begin with ``VOLUME_NAME_PREFIX`` as
+``VOLUME_GROUP_NAME`` in ``localrc``. ``stack.sh`` deletes all logical
+volumes in ``VOLUME_GROUP_NAME`` that begin with ``VOLUME_NAME_PREFIX`` as
part of cleaning up from previous runs. It is recommended to not use the
-root volume group as ``VOLUME_GROUP``.
+root volume group as ``VOLUME_GROUP_NAME``.
The details of creating the volume group depends on the server hardware
involved but looks something like this:
@@ -400,6 +399,10 @@
ssh-keyscan -H DEST_HOSTNAME | sudo tee -a /root/.ssh/known_hosts
+3. Verify that login via ssh works without a password::
+
+   ssh -i /root/.ssh/id_rsa stack@DESTINATION
+
In essence, this means that every compute node's root user's public RSA key
must exist in every other compute node's stack user's authorized_keys file and
every compute node's public ECDSA key needs to be in every other compute
diff --git a/doc/source/guides/single-machine.rst b/doc/source/guides/single-machine.rst
index 011c41f..48a4fa8 100644
--- a/doc/source/guides/single-machine.rst
+++ b/doc/source/guides/single-machine.rst
@@ -47,7 +47,7 @@
::
- adduser stack
+ useradd -s /bin/bash -d /opt/stack -m stack
Since this user will be making many changes to your system, it will need
to have sudo privileges:
diff --git a/doc/source/index.rst b/doc/source/index.rst
index 435011b..cbd6971 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -39,37 +39,12 @@
-------------
Start with a clean and minimal install of a Linux system. Devstack
-attempts to support Ubuntu 14.04/16.04, Fedora 23/24, CentOS/RHEL 7,
+attempts to support Ubuntu 16.04/17.04, Fedora 24/25, CentOS/RHEL 7,
as well as Debian and OpenSUSE.
If you do not have a preference, Ubuntu 16.04 is the most tested, and
will probably go the smoothest.
-Download DevStack
------------------
-
-::
-
- git clone https://git.openstack.org/openstack-dev/devstack
-
-The ``devstack`` repo contains a script that installs OpenStack and
-templates for configuration files
-
-Create a local.conf
--------------------
-
-Create a ``local.conf`` file with 4 passwords preset
-
-::
-
- [[local|localrc]]
- ADMIN_PASSWORD=secret
- DATABASE_PASSWORD=$ADMIN_PASSWORD
- RABBIT_PASSWORD=$ADMIN_PASSWORD
- SERVICE_PASSWORD=$ADMIN_PASSWORD
-
-This is the minimum required config to get started with DevStack.
-
Add Stack User
--------------
@@ -81,14 +56,48 @@
::
- devstack/tools/create-stack-user.sh; su stack
+ $ sudo useradd -s /bin/bash -d /opt/stack -m stack
+
+Since this user will be making many changes to your system, it should
+have sudo privileges:
+
+::
+
+ $ echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack
+ $ sudo su - stack
+
+Download DevStack
+-----------------
+
+::
+
+ $ git clone https://git.openstack.org/openstack-dev/devstack
+ $ cd devstack
+
+The ``devstack`` repo contains a script that installs OpenStack and
+templates for configuration files.
+
+Create a local.conf
+-------------------
+
+Create a ``local.conf`` file with 4 passwords preset at the root of the
+devstack git repo.
+
+::
+
+ [[local|localrc]]
+ ADMIN_PASSWORD=secret
+ DATABASE_PASSWORD=$ADMIN_PASSWORD
+ RABBIT_PASSWORD=$ADMIN_PASSWORD
+ SERVICE_PASSWORD=$ADMIN_PASSWORD
+
+This is the minimum required config to get started with DevStack.
Start the install
-----------------
::
- cd devstack; ./stack.sh
+ ./stack.sh
This will take 15 - 20 minutes, largely depending on the speed of
your internet connection. Many git trees and packages will be
diff --git a/doc/source/networking.rst b/doc/source/networking.rst
index 2301a2e..bdbeaaa 100644
--- a/doc/source/networking.rst
+++ b/doc/source/networking.rst
@@ -4,7 +4,7 @@
An important part of the DevStack experience is networking that works
by default for created guests. This might not be optimal for your
-particular testing environment, so this document tries it's best to
+particular testing environment, so this document tries its best to
explain what's going on.
Defaults
@@ -17,9 +17,9 @@
* a floating ip range of 172.24.4.0/24 with the gateway of 172.24.4.1
* the demo project configured with fixed ips on a subnet allocated from
the 10.0.0.0/22 range
-* a ``br-ex`` interface controlled by neutron for all it's networking
+* a ``br-ex`` interface controlled by neutron for all its networking
(this is not connected to any physical interfaces).
-* DNS resolution for guests based on the resolv.conf for you host
+* DNS resolution for guests based on the resolv.conf for your host
* an ip masq rule that allows created guests to route out
This creates an environment which is isolated to the single
@@ -40,7 +40,7 @@
Locally Accessible Guests
=========================
-If you want to make you guests accessible other machines on your
+If you want to make your guests accessible from other machines on your
network, we have to connect ``br-ex`` to a physical interface.
Dedicated Guest Interface
@@ -109,8 +109,8 @@
For IPv4, ``FIXED_RANGE`` and ``SUBNETPOOL_PREFIX_V4`` will just default to
the value of ``IPV4_ADDRS_SAFE_TO_USE`` directly.
-For IPv6, ``FIXED_RANGE`` will default to the first /64 of the value of
+For IPv6, ``FIXED_RANGE_V6`` will default to the first /64 of the value of
``IPV6_ADDRS_SAFE_TO_USE``. If ``IPV6_ADDRS_SAFE_TO_USE`` is /64 or smaller,
-``FIXED_RANGE`` will just use the value of that directly.
+``FIXED_RANGE_V6`` will just use the value of that directly.
``SUBNETPOOL_PREFIX_V6`` will just default to the value of
``IPV6_ADDRS_SAFE_TO_USE`` directly.
diff --git a/doc/source/plugin-registry.rst b/doc/source/plugin-registry.rst
index 6ece997..96a2733 100644
--- a/doc/source/plugin-registry.rst
+++ b/doc/source/plugin-registry.rst
@@ -44,6 +44,7 @@
devstack-plugin-amqp1 `git://git.openstack.org/openstack/devstack-plugin-amqp1 <https://git.openstack.org/cgit/openstack/devstack-plugin-amqp1>`__
devstack-plugin-bdd `git://git.openstack.org/openstack/devstack-plugin-bdd <https://git.openstack.org/cgit/openstack/devstack-plugin-bdd>`__
devstack-plugin-ceph `git://git.openstack.org/openstack/devstack-plugin-ceph <https://git.openstack.org/cgit/openstack/devstack-plugin-ceph>`__
+devstack-plugin-container `git://git.openstack.org/openstack/devstack-plugin-container <https://git.openstack.org/cgit/openstack/devstack-plugin-container>`__
devstack-plugin-glusterfs `git://git.openstack.org/openstack/devstack-plugin-glusterfs <https://git.openstack.org/cgit/openstack/devstack-plugin-glusterfs>`__
devstack-plugin-hdfs `git://git.openstack.org/openstack/devstack-plugin-hdfs <https://git.openstack.org/cgit/openstack/devstack-plugin-hdfs>`__
devstack-plugin-kafka `git://git.openstack.org/openstack/devstack-plugin-kafka <https://git.openstack.org/cgit/openstack/devstack-plugin-kafka>`__
@@ -58,6 +59,7 @@
freezer `git://git.openstack.org/openstack/freezer <https://git.openstack.org/cgit/openstack/freezer>`__
freezer-api `git://git.openstack.org/openstack/freezer-api <https://git.openstack.org/cgit/openstack/freezer-api>`__
freezer-web-ui `git://git.openstack.org/openstack/freezer-web-ui <https://git.openstack.org/cgit/openstack/freezer-web-ui>`__
+fuxi `git://git.openstack.org/openstack/fuxi <https://git.openstack.org/cgit/openstack/fuxi>`__
gce-api `git://git.openstack.org/openstack/gce-api <https://git.openstack.org/cgit/openstack/gce-api>`__
glare `git://git.openstack.org/openstack/glare <https://git.openstack.org/cgit/openstack/glare>`__
gnocchi `git://git.openstack.org/openstack/gnocchi <https://git.openstack.org/cgit/openstack/gnocchi>`__
@@ -67,6 +69,8 @@
ironic `git://git.openstack.org/openstack/ironic <https://git.openstack.org/cgit/openstack/ironic>`__
ironic-inspector `git://git.openstack.org/openstack/ironic-inspector <https://git.openstack.org/cgit/openstack/ironic-inspector>`__
ironic-staging-drivers `git://git.openstack.org/openstack/ironic-staging-drivers <https://git.openstack.org/cgit/openstack/ironic-staging-drivers>`__
+ironic-ui `git://git.openstack.org/openstack/ironic-ui <https://git.openstack.org/cgit/openstack/ironic-ui>`__
+k8s-cloud-provider `git://git.openstack.org/openstack/k8s-cloud-provider <https://git.openstack.org/cgit/openstack/k8s-cloud-provider>`__
karbor `git://git.openstack.org/openstack/karbor <https://git.openstack.org/cgit/openstack/karbor>`__
karbor-dashboard `git://git.openstack.org/openstack/karbor-dashboard <https://git.openstack.org/cgit/openstack/karbor-dashboard>`__
keystone `git://git.openstack.org/openstack/keystone <https://git.openstack.org/cgit/openstack/keystone>`__
@@ -76,9 +80,14 @@
magnum `git://git.openstack.org/openstack/magnum <https://git.openstack.org/cgit/openstack/magnum>`__
magnum-ui `git://git.openstack.org/openstack/magnum-ui <https://git.openstack.org/cgit/openstack/magnum-ui>`__
manila `git://git.openstack.org/openstack/manila <https://git.openstack.org/cgit/openstack/manila>`__
+manila-ui `git://git.openstack.org/openstack/manila-ui <https://git.openstack.org/cgit/openstack/manila-ui>`__
masakari `git://git.openstack.org/openstack/masakari <https://git.openstack.org/cgit/openstack/masakari>`__
+meteos `git://git.openstack.org/openstack/meteos <https://git.openstack.org/cgit/openstack/meteos>`__
+meteos-ui `git://git.openstack.org/openstack/meteos-ui <https://git.openstack.org/cgit/openstack/meteos-ui>`__
mistral `git://git.openstack.org/openstack/mistral <https://git.openstack.org/cgit/openstack/mistral>`__
mixmatch `git://git.openstack.org/openstack/mixmatch <https://git.openstack.org/cgit/openstack/mixmatch>`__
+mogan `git://git.openstack.org/openstack/mogan <https://git.openstack.org/cgit/openstack/mogan>`__
+mogan-ui `git://git.openstack.org/openstack/mogan-ui <https://git.openstack.org/cgit/openstack/mogan-ui>`__
monasca-analytics `git://git.openstack.org/openstack/monasca-analytics <https://git.openstack.org/cgit/openstack/monasca-analytics>`__
monasca-api `git://git.openstack.org/openstack/monasca-api <https://git.openstack.org/cgit/openstack/monasca-api>`__
monasca-ceilometer `git://git.openstack.org/openstack/monasca-ceilometer <https://git.openstack.org/cgit/openstack/monasca-ceilometer>`__
@@ -92,6 +101,8 @@
networking-brocade `git://git.openstack.org/openstack/networking-brocade <https://git.openstack.org/cgit/openstack/networking-brocade>`__
networking-calico `git://git.openstack.org/openstack/networking-calico <https://git.openstack.org/cgit/openstack/networking-calico>`__
networking-cisco `git://git.openstack.org/openstack/networking-cisco <https://git.openstack.org/cgit/openstack/networking-cisco>`__
+networking-cumulus `git://git.openstack.org/openstack/networking-cumulus <https://git.openstack.org/cgit/openstack/networking-cumulus>`__
+networking-dpm `git://git.openstack.org/openstack/networking-dpm <https://git.openstack.org/cgit/openstack/networking-dpm>`__
networking-fortinet `git://git.openstack.org/openstack/networking-fortinet <https://git.openstack.org/cgit/openstack/networking-fortinet>`__
networking-generic-switch `git://git.openstack.org/openstack/networking-generic-switch <https://git.openstack.org/cgit/openstack/networking-generic-switch>`__
networking-huawei `git://git.openstack.org/openstack/networking-huawei <https://git.openstack.org/cgit/openstack/networking-huawei>`__
@@ -101,7 +112,6 @@
networking-mlnx `git://git.openstack.org/openstack/networking-mlnx <https://git.openstack.org/cgit/openstack/networking-mlnx>`__
networking-nec `git://git.openstack.org/openstack/networking-nec <https://git.openstack.org/cgit/openstack/networking-nec>`__
networking-odl `git://git.openstack.org/openstack/networking-odl <https://git.openstack.org/cgit/openstack/networking-odl>`__
-networking-ofagent `git://git.openstack.org/openstack/networking-ofagent <https://git.openstack.org/cgit/openstack/networking-ofagent>`__
networking-onos `git://git.openstack.org/openstack/networking-onos <https://git.openstack.org/cgit/openstack/networking-onos>`__
networking-ovn `git://git.openstack.org/openstack/networking-ovn <https://git.openstack.org/cgit/openstack/networking-ovn>`__
networking-ovs-dpdk `git://git.openstack.org/openstack/networking-ovs-dpdk <https://git.openstack.org/cgit/openstack/networking-ovs-dpdk>`__
@@ -116,15 +126,17 @@
neutron-lbaas `git://git.openstack.org/openstack/neutron-lbaas <https://git.openstack.org/cgit/openstack/neutron-lbaas>`__
neutron-lbaas-dashboard `git://git.openstack.org/openstack/neutron-lbaas-dashboard <https://git.openstack.org/cgit/openstack/neutron-lbaas-dashboard>`__
neutron-vpnaas `git://git.openstack.org/openstack/neutron-vpnaas <https://git.openstack.org/cgit/openstack/neutron-vpnaas>`__
-nimble `git://git.openstack.org/openstack/nimble <https://git.openstack.org/cgit/openstack/nimble>`__
-nova-docker `git://git.openstack.org/openstack/nova-docker <https://git.openstack.org/cgit/openstack/nova-docker>`__
+nova-dpm `git://git.openstack.org/openstack/nova-dpm <https://git.openstack.org/cgit/openstack/nova-dpm>`__
nova-lxd `git://git.openstack.org/openstack/nova-lxd <https://git.openstack.org/cgit/openstack/nova-lxd>`__
nova-mksproxy `git://git.openstack.org/openstack/nova-mksproxy <https://git.openstack.org/cgit/openstack/nova-mksproxy>`__
nova-powervm `git://git.openstack.org/openstack/nova-powervm <https://git.openstack.org/cgit/openstack/nova-powervm>`__
oaktree `git://git.openstack.org/openstack/oaktree <https://git.openstack.org/cgit/openstack/oaktree>`__
octavia `git://git.openstack.org/openstack/octavia <https://git.openstack.org/cgit/openstack/octavia>`__
+octavia-dashboard `git://git.openstack.org/openstack/octavia-dashboard <https://git.openstack.org/cgit/openstack/octavia-dashboard>`__
+os-xenapi `git://git.openstack.org/openstack/os-xenapi <https://git.openstack.org/cgit/openstack/os-xenapi>`__
osprofiler `git://git.openstack.org/openstack/osprofiler <https://git.openstack.org/cgit/openstack/osprofiler>`__
panko `git://git.openstack.org/openstack/panko <https://git.openstack.org/cgit/openstack/panko>`__
+picasso `git://git.openstack.org/openstack/picasso <https://git.openstack.org/cgit/openstack/picasso>`__
rally `git://git.openstack.org/openstack/rally <https://git.openstack.org/cgit/openstack/rally>`__
sahara `git://git.openstack.org/openstack/sahara <https://git.openstack.org/cgit/openstack/sahara>`__
sahara-dashboard `git://git.openstack.org/openstack/sahara-dashboard <https://git.openstack.org/cgit/openstack/sahara-dashboard>`__
@@ -142,6 +154,7 @@
vitrage `git://git.openstack.org/openstack/vitrage <https://git.openstack.org/cgit/openstack/vitrage>`__
vitrage-dashboard `git://git.openstack.org/openstack/vitrage-dashboard <https://git.openstack.org/cgit/openstack/vitrage-dashboard>`__
vmware-nsx `git://git.openstack.org/openstack/vmware-nsx <https://git.openstack.org/cgit/openstack/vmware-nsx>`__
+vmware-vspc `git://git.openstack.org/openstack/vmware-vspc <https://git.openstack.org/cgit/openstack/vmware-vspc>`__
watcher `git://git.openstack.org/openstack/watcher <https://git.openstack.org/cgit/openstack/watcher>`__
watcher-dashboard `git://git.openstack.org/openstack/watcher-dashboard <https://git.openstack.org/cgit/openstack/watcher-dashboard>`__
zaqar `git://git.openstack.org/openstack/zaqar <https://git.openstack.org/cgit/openstack/zaqar>`__
diff --git a/doc/source/plugins.rst b/doc/source/plugins.rst
index 31987bc..5b3c6cf 100644
--- a/doc/source/plugins.rst
+++ b/doc/source/plugins.rst
@@ -99,7 +99,7 @@
should exist at this point.
- **extra** - Called near the end after layer 1 and 2 services have
been started.
- - **test-config** - Called at the end of devstack used to configure tempest
+ - **test-config** - Called at the end of devstack used to configure tempest
or any other test environments
- **unstack** - Called by ``unstack.sh`` before other services are shut
diff --git a/exercises/neutron-adv-test.sh b/exercises/neutron-adv-test.sh
index bfd45ec..e8c8f62 100755
--- a/exercises/neutron-adv-test.sh
+++ b/exercises/neutron-adv-test.sh
@@ -156,7 +156,7 @@
function get_network_id {
local NETWORK_NAME="$1"
local NETWORK_ID
- NETWORK_ID=`openstack network list | grep $NETWORK_NAME | awk '{print $2}'`
+ NETWORK_ID=`openstack network show -f value -c id $NETWORK_NAME`
echo $NETWORK_ID
}
diff --git a/extras.d/80-tempest.sh b/extras.d/80-tempest.sh
index 6a3d121..15ecfe3 100644
--- a/extras.d/80-tempest.sh
+++ b/extras.d/80-tempest.sh
@@ -11,13 +11,16 @@
# Tempest config must come after layer 2 services are running
:
elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
+ # Tempest config must come after all other plugins are run
+ :
+ elif [[ "$1" == "stack" && "$2" == "post-extra" ]]; then
+ # local.conf Tempest option overrides
+ :
+ elif [[ "$1" == "stack" && "$2" == "test-config" ]]; then
echo_summary "Initializing Tempest"
configure_tempest
echo_summary "Installing Tempest Plugins"
install_tempest_plugins
- elif [[ "$1" == "stack" && "$2" == "post-extra" ]]; then
- # local.conf Tempest option overrides
- :
fi
if [[ "$1" == "unstack" ]]; then
diff --git a/files/apache-heat-api-cfn.template b/files/apache-heat-api-cfn.template
deleted file mode 100644
index ab33c66..0000000
--- a/files/apache-heat-api-cfn.template
+++ /dev/null
@@ -1,27 +0,0 @@
-Listen %PUBLICPORT%
-
-<VirtualHost *:%PUBLICPORT%>
- WSGIDaemonProcess heat-api-cfn processes=2 threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
- WSGIProcessGroup heat-api-cfn
- WSGIScriptAlias / %HEAT_BIN_DIR%/heat-wsgi-api-cfn
- WSGIApplicationGroup %{GLOBAL}
- WSGIPassAuthorization On
- AllowEncodedSlashes On
- <IfVersion >= 2.4>
- ErrorLogFormat "%{cu}t %M"
- </IfVersion>
- ErrorLog /var/log/%APACHE_NAME%/heat-api-cfn.log
- %SSLENGINE%
- %SSLCERTFILE%
- %SSLKEYFILE%
-
- <Directory %HEAT_BIN_DIR%>
- <IfVersion >= 2.4>
- Require all granted
- </IfVersion>
- <IfVersion < 2.4>
- Order allow,deny
- Allow from all
- </IfVersion>
- </Directory>
-</VirtualHost>
diff --git a/files/apache-heat-api-cloudwatch.template b/files/apache-heat-api-cloudwatch.template
deleted file mode 100644
index 06c91bb..0000000
--- a/files/apache-heat-api-cloudwatch.template
+++ /dev/null
@@ -1,27 +0,0 @@
-Listen %PUBLICPORT%
-
-<VirtualHost *:%PUBLICPORT%>
- WSGIDaemonProcess heat-api-cloudwatch processes=2 threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
- WSGIProcessGroup heat-api-cloudwatch
- WSGIScriptAlias / %HEAT_BIN_DIR%/heat-wsgi-api-cloudwatch
- WSGIApplicationGroup %{GLOBAL}
- WSGIPassAuthorization On
- AllowEncodedSlashes On
- <IfVersion >= 2.4>
- ErrorLogFormat "%{cu}t %M"
- </IfVersion>
- ErrorLog /var/log/%APACHE_NAME%/heat-api-cloudwatch.log
- %SSLENGINE%
- %SSLCERTFILE%
- %SSLKEYFILE%
-
- <Directory %HEAT_BIN_DIR%>
- <IfVersion >= 2.4>
- Require all granted
- </IfVersion>
- <IfVersion < 2.4>
- Order allow,deny
- Allow from all
- </IfVersion>
- </Directory>
-</VirtualHost>
diff --git a/files/apache-heat-api.template b/files/apache-heat-api.template
deleted file mode 100644
index 4924b39..0000000
--- a/files/apache-heat-api.template
+++ /dev/null
@@ -1,27 +0,0 @@
-Listen %PUBLICPORT%
-
-<VirtualHost *:%PUBLICPORT%>
- WSGIDaemonProcess heat-api processes=3 threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
- WSGIProcessGroup heat-api
- WSGIScriptAlias / %HEAT_BIN_DIR%/heat-wsgi-api
- WSGIApplicationGroup %{GLOBAL}
- WSGIPassAuthorization On
- AllowEncodedSlashes On
- <IfVersion >= 2.4>
- ErrorLogFormat "%{cu}t %M"
- </IfVersion>
- ErrorLog /var/log/%APACHE_NAME%/heat-api.log
- %SSLENGINE%
- %SSLCERTFILE%
- %SSLKEYFILE%
-
- <Directory %HEAT_BIN_DIR%>
- <IfVersion >= 2.4>
- Require all granted
- </IfVersion>
- <IfVersion < 2.4>
- Order allow,deny
- Allow from all
- </IfVersion>
- </Directory>
-</VirtualHost>
diff --git a/files/apache-heat-pip-repo.template b/files/apache-heat-pip-repo.template
deleted file mode 100644
index d88ac3e..0000000
--- a/files/apache-heat-pip-repo.template
+++ /dev/null
@@ -1,15 +0,0 @@
-Listen %HEAT_PIP_REPO_PORT%
-
-<VirtualHost *:%HEAT_PIP_REPO_PORT%>
- DocumentRoot %HEAT_PIP_REPO%
- <Directory %HEAT_PIP_REPO%>
- DirectoryIndex index.html
- Require all granted
- Order allow,deny
- allow from all
- </Directory>
-
- ErrorLog /var/log/%APACHE_NAME%/heat_pip_repo_error.log
- LogLevel warn
- CustomLog /var/log/%APACHE_NAME%/heat_pip_repo_access.log combined
-</VirtualHost>
diff --git a/files/apache-keystone.template b/files/apache-keystone.template
index 428544f..1284360 100644
--- a/files/apache-keystone.template
+++ b/files/apache-keystone.template
@@ -7,7 +7,7 @@
</Directory>
<VirtualHost *:%PUBLICPORT%>
- WSGIDaemonProcess keystone-public processes=5 threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
+ WSGIDaemonProcess keystone-public processes=3 threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
WSGIProcessGroup keystone-public
WSGIScriptAlias / %KEYSTONE_BIN%/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
@@ -21,7 +21,7 @@
</VirtualHost>
<VirtualHost *:%ADMINPORT%>
- WSGIDaemonProcess keystone-admin processes=5 threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
+ WSGIDaemonProcess keystone-admin processes=3 threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
WSGIProcessGroup keystone-admin
WSGIScriptAlias / %KEYSTONE_BIN%/keystone-wsgi-admin
WSGIApplicationGroup %{GLOBAL}
@@ -34,6 +34,12 @@
%SSLKEYFILE%
</VirtualHost>
+%SSLLISTEN%<VirtualHost *:443>
+%SSLLISTEN% %SSLENGINE%
+%SSLLISTEN% %SSLCERTFILE%
+%SSLLISTEN% %SSLKEYFILE%
+%SSLLISTEN%</VirtualHost>
+
Alias /identity %KEYSTONE_BIN%/keystone-wsgi-public
<Location /identity>
SetHandler wsgi-script
diff --git a/files/apache-placement-api.template b/files/apache-placement-api.template
index b89ef96..011abb9 100644
--- a/files/apache-placement-api.template
+++ b/files/apache-placement-api.template
@@ -1,6 +1,8 @@
-Listen %PUBLICPORT%
-
-<VirtualHost *:%PUBLICPORT%>
+# NOTE(sbauza): This virtualhost is only here because some directives can
+# only be set by a virtualhost or server context, so that's why the port is not bound.
+# TODO(sbauza): Find a better way to identify a free port that is not corresponding to an existing
+# vhost.
+<VirtualHost *:8780>
WSGIDaemonProcess placement-api processes=%APIWORKERS% threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
WSGIProcessGroup placement-api
WSGIScriptAlias / %PUBLICWSGI%
diff --git a/files/debs/general b/files/debs/general
index a1f2a4b..20490c6 100644
--- a/files/debs/general
+++ b/files/debs/general
@@ -1,13 +1,17 @@
+apache2
+apache2-dev
bc
bridge-utils
bsdmainutils
curl
+default-jre-headless # NOPRIME
g++
gcc
gettext # used for compiling message catalogs
git
graphviz # needed for docs
iputils-ping
+libapache2-mod-proxy-uwsgi
libffi-dev # for pyOpenSSL
libjpeg-dev # Pillow 3.0.0
libmysqlclient-dev # MySQL-python
@@ -17,14 +21,15 @@
libxslt1-dev # lxml
libyaml-dev
lsof # useful when debugging
-openjdk-7-jre-headless # NOPRIME
openssh-server
openssl
pkg-config
psmisc
python2.7
+python3-systemd
python-dev
python-gdbm # needed for testr
+python-systemd
screen
tar
tcpdump
diff --git a/files/debs/heat b/files/debs/heat
deleted file mode 100644
index 1ecbc78..0000000
--- a/files/debs/heat
+++ /dev/null
@@ -1 +0,0 @@
-gettext # dist:trusty
diff --git a/files/debs/n-cpu b/files/debs/n-cpu
index 69ac430..d8bbf59 100644
--- a/files/debs/n-cpu
+++ b/files/debs/n-cpu
@@ -2,6 +2,7 @@
genisoimage
gir1.2-libosinfo-1.0
lvm2 # NOPRIME
+netcat-openbsd
open-iscsi
python-guestfs # NOPRIME
qemu-utils
diff --git a/files/debs/neutron b/files/debs/neutron
index 2307fa5..e30f678 100644
--- a/files/debs/neutron
+++ b/files/debs/neutron
@@ -2,6 +2,7 @@
dnsmasq-base
dnsmasq-utils # for dhcp_release only available in dist:precise
ebtables
+haproxy # to serve as metadata proxy inside router/dhcp namespaces
iptables
iputils-arping
iputils-ping
diff --git a/files/debs/q-agt b/files/debs/neutron-agent
similarity index 100%
rename from files/debs/q-agt
rename to files/debs/neutron-agent
diff --git a/files/debs/q-l3 b/files/debs/neutron-l3
similarity index 100%
rename from files/debs/q-l3
rename to files/debs/neutron-l3
diff --git a/files/debs/nova b/files/debs/nova
index 58dad41..5e14aec 100644
--- a/files/debs/nova
+++ b/files/debs/nova
@@ -10,7 +10,9 @@
kpartx
libjs-jquery-tablesorter # Needed for coverage html reports
libmysqlclient-dev
-libvirt-bin # NOPRIME
+libvirt-bin # dist:xenial NOPRIME
+libvirt-clients # not:xenial NOPRIME
+libvirt-daemon-system # not:xenial NOPRIME
libvirt-dev # NOPRIME
mysql-server # NOPRIME
parted
diff --git a/files/debs/q-agt b/files/debs/q-agt
new file mode 120000
index 0000000..99fe353
--- /dev/null
+++ b/files/debs/q-agt
@@ -0,0 +1 @@
+neutron-agent
\ No newline at end of file
diff --git a/files/debs/q-l3 b/files/debs/q-l3
new file mode 120000
index 0000000..0a5ca2a
--- /dev/null
+++ b/files/debs/q-l3
@@ -0,0 +1 @@
+neutron-l3
\ No newline at end of file
diff --git a/files/ebtables.workaround b/files/ebtables.workaround
deleted file mode 100644
index c8af51f..0000000
--- a/files/ebtables.workaround
+++ /dev/null
@@ -1,23 +0,0 @@
-#!/bin/bash
-#
-# Copyright 2015 Hewlett-Packard Development Company, L.P.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-#
-# This is a terrible, terrible, truly terrible work around for
-# environments that have libvirt < 1.2.11. ebtables requires that you
-# specifically tell it you would like to not race and get punched in
-# the face when 2 run at the same time with a --concurrent flag.
-
-flock -w 300 /var/lock/ebtables.nova /sbin/ebtables.real $@
diff --git a/files/rpms-suse/general b/files/rpms-suse/general
index 3b19071..1044c25 100644
--- a/files/rpms-suse/general
+++ b/files/rpms-suse/general
@@ -21,6 +21,7 @@
psmisc
python-cmd2 # dist:opensuse-12.3
python-devel # pyOpenSSL
+python-xml
screen
tar
tcpdump
diff --git a/files/rpms-suse/neutron b/files/rpms-suse/neutron
index e9abc6e..d1cc73f 100644
--- a/files/rpms-suse/neutron
+++ b/files/rpms-suse/neutron
@@ -2,6 +2,7 @@
dnsmasq
dnsmasq-utils # dist:opensuse-12.3,opensuse-13.1
ebtables
+haproxy # to serve as metadata proxy inside router/dhcp namespaces
iptables
iputils
mariadb # NOPRIME
diff --git a/files/rpms-suse/q-agt b/files/rpms-suse/neutron-agent
similarity index 100%
rename from files/rpms-suse/q-agt
rename to files/rpms-suse/neutron-agent
diff --git a/files/rpms-suse/q-l3 b/files/rpms-suse/neutron-l3
similarity index 100%
rename from files/rpms-suse/q-l3
rename to files/rpms-suse/neutron-l3
diff --git a/files/rpms-suse/q-agt b/files/rpms-suse/q-agt
new file mode 120000
index 0000000..99fe353
--- /dev/null
+++ b/files/rpms-suse/q-agt
@@ -0,0 +1 @@
+neutron-agent
\ No newline at end of file
diff --git a/files/rpms-suse/q-l3 b/files/rpms-suse/q-l3
new file mode 120000
index 0000000..0a5ca2a
--- /dev/null
+++ b/files/rpms-suse/q-l3
@@ -0,0 +1 @@
+neutron-l3
\ No newline at end of file
diff --git a/files/rpms/general b/files/rpms/general
index d0ceb56..106aa6a 100644
--- a/files/rpms/general
+++ b/files/rpms/general
@@ -7,9 +7,11 @@
gettext # used for compiling message catalogs
git-core
graphviz # needed only for docs
-iptables-services # NOPRIME f23,f24
+httpd
+httpd-devel
+iptables-services # NOPRIME f23,f24,f25
java-1.7.0-openjdk-headless # NOPRIME rhel7
-java-1.8.0-openjdk-headless # NOPRIME f23,f24
+java-1.8.0-openjdk-headless # NOPRIME f23,f24,f25
libffi-devel
libjpeg-turbo-devel # Pillow 3.0.0
libxml2-devel # lxml
@@ -27,6 +29,7 @@
python-devel
redhat-rpm-config # missing dep for gcc hardening flags, see rhbz#1217376
screen
+systemd-python
tar
tcpdump
unzip
diff --git a/files/rpms/neutron b/files/rpms/neutron
index 2e49a0c..a4e029a 100644
--- a/files/rpms/neutron
+++ b/files/rpms/neutron
@@ -2,6 +2,7 @@
dnsmasq # for q-dhcp
dnsmasq-utils # for dhcp_release
ebtables
+haproxy # to serve as metadata proxy inside router/dhcp namespaces
iptables
iputils
mysql-devel
diff --git a/files/rpms/q-agt b/files/rpms/neutron-agent
similarity index 100%
rename from files/rpms/q-agt
rename to files/rpms/neutron-agent
diff --git a/files/rpms/q-l3 b/files/rpms/neutron-l3
similarity index 100%
rename from files/rpms/q-l3
rename to files/rpms/neutron-l3
diff --git a/files/rpms/nova b/files/rpms/nova
index a883ec4..a368c55 100644
--- a/files/rpms/nova
+++ b/files/rpms/nova
@@ -7,12 +7,8 @@
genisoimage # required for config_drive
iptables
iputils
-kernel-modules # dist:f23,f24
+kernel-modules # dist:f23,f24,f25
kpartx
-kvm # NOPRIME
-libvirt-bin # NOPRIME
-libvirt-devel # NOPRIME
-libvirt-python # NOPRIME
libxml2-python
m2crypto
mysql-devel
@@ -21,7 +17,6 @@
numpy # needed by websockify for spice console
parted
polkit
-qemu-kvm # NOPRIME
rabbitmq-server # NOPRIME
sqlite
sudo
diff --git a/files/rpms/q-agt b/files/rpms/q-agt
new file mode 120000
index 0000000..99fe353
--- /dev/null
+++ b/files/rpms/q-agt
@@ -0,0 +1 @@
+neutron-agent
\ No newline at end of file
diff --git a/files/rpms/q-l3 b/files/rpms/q-l3
new file mode 120000
index 0000000..0a5ca2a
--- /dev/null
+++ b/files/rpms/q-l3
@@ -0,0 +1 @@
+neutron-l3
\ No newline at end of file
diff --git a/files/rpms/swift b/files/rpms/swift
index bd249ee..2f12df0 100644
--- a/files/rpms/swift
+++ b/files/rpms/swift
@@ -2,7 +2,7 @@
liberasurecode-devel
memcached
pyxattr
-rsync-daemon # dist:f23,f24
+rsync-daemon # dist:f23,f24,f25
sqlite
xfsprogs
xinetd
diff --git a/files/swift/rsyncd.conf b/files/swift/rsyncd.conf
index c670531..c49f716 100644
--- a/files/swift/rsyncd.conf
+++ b/files/swift/rsyncd.conf
@@ -4,76 +4,76 @@
pid file = %SWIFT_DATA_DIR%/run/rsyncd.pid
address = 127.0.0.1
-[account6012]
+[account6612]
max connections = 25
path = %SWIFT_DATA_DIR%/1/node/
read only = false
-lock file = %SWIFT_DATA_DIR%/run/account6012.lock
+lock file = %SWIFT_DATA_DIR%/run/account6612.lock
-[account6022]
+[account6622]
max connections = 25
path = %SWIFT_DATA_DIR%/2/node/
read only = false
-lock file = %SWIFT_DATA_DIR%/run/account6022.lock
+lock file = %SWIFT_DATA_DIR%/run/account6622.lock
-[account6032]
+[account6632]
max connections = 25
path = %SWIFT_DATA_DIR%/3/node/
read only = false
-lock file = %SWIFT_DATA_DIR%/run/account6032.lock
+lock file = %SWIFT_DATA_DIR%/run/account6632.lock
-[account6042]
+[account6642]
max connections = 25
path = %SWIFT_DATA_DIR%/4/node/
read only = false
-lock file = %SWIFT_DATA_DIR%/run/account6042.lock
+lock file = %SWIFT_DATA_DIR%/run/account6642.lock
-[container6011]
+[container6611]
max connections = 25
path = %SWIFT_DATA_DIR%/1/node/
read only = false
-lock file = %SWIFT_DATA_DIR%/run/container6011.lock
+lock file = %SWIFT_DATA_DIR%/run/container6611.lock
-[container6021]
+[container6621]
max connections = 25
path = %SWIFT_DATA_DIR%/2/node/
read only = false
-lock file = %SWIFT_DATA_DIR%/run/container6021.lock
+lock file = %SWIFT_DATA_DIR%/run/container6621.lock
-[container6031]
+[container6631]
max connections = 25
path = %SWIFT_DATA_DIR%/3/node/
read only = false
-lock file = %SWIFT_DATA_DIR%/run/container6031.lock
+lock file = %SWIFT_DATA_DIR%/run/container6631.lock
-[container6041]
+[container6641]
max connections = 25
path = %SWIFT_DATA_DIR%/4/node/
read only = false
-lock file = %SWIFT_DATA_DIR%/run/container6041.lock
+lock file = %SWIFT_DATA_DIR%/run/container6641.lock
-[object6010]
+[object6613]
max connections = 25
path = %SWIFT_DATA_DIR%/1/node/
read only = false
-lock file = %SWIFT_DATA_DIR%/run/object6010.lock
+lock file = %SWIFT_DATA_DIR%/run/object6613.lock
-[object6020]
+[object6623]
max connections = 25
path = %SWIFT_DATA_DIR%/2/node/
read only = false
-lock file = %SWIFT_DATA_DIR%/run/object6020.lock
+lock file = %SWIFT_DATA_DIR%/run/object6623.lock
-[object6030]
+[object6633]
max connections = 25
path = %SWIFT_DATA_DIR%/3/node/
read only = false
-lock file = %SWIFT_DATA_DIR%/run/object6030.lock
+lock file = %SWIFT_DATA_DIR%/run/object6633.lock
-[object6040]
+[object6643]
max connections = 25
path = %SWIFT_DATA_DIR%/4/node/
read only = false
-lock file = %SWIFT_DATA_DIR%/run/object6040.lock
+lock file = %SWIFT_DATA_DIR%/run/object6643.lock
diff --git a/functions b/functions
index 6a0ac67..c99e435 100644
--- a/functions
+++ b/functions
@@ -12,7 +12,7 @@
# ensure we don't re-source this in the same environment
[[ -z "$_DEVSTACK_FUNCTIONS" ]] || return 0
-declare -r _DEVSTACK_FUNCTIONS=1
+declare -r -g _DEVSTACK_FUNCTIONS=1
# Include the common functions
FUNC_DIR=$(cd $(dirname "${BASH_SOURCE:-$0}") && pwd)
@@ -569,6 +569,21 @@
esac
}
+# This sets up the logging defaults we like in devstack for tracking
+# down issues, and makes sure logging is configured the same way
+# across projects.
+function setup_logging {
+ local conf_file=$1
+ local other_cond=${2:-"False"}
+ if [[ "$USE_SYSTEMD" == "True" ]]; then
+ setup_systemd_logging $conf_file
+ elif [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ] && [ "$other_cond" == "False" ]; then
+ setup_colorized_logging $conf_file
+ else
+ setup_standard_logging_identity $conf_file
+ fi
+}
+
# This function sets log formatting options for colorizing log
# output to stdout. It is meant to be called by lib modules.
# The last two parameters are optional and can be used to specify
@@ -578,16 +593,34 @@
# setup_colorized_logging something.conf SOMESECTION
function setup_colorized_logging {
local conf_file=$1
- local conf_section=$2
- local project_var=${3:-"project_name"}
- local user_var=${4:-"user_name"}
+ local conf_section="DEFAULT"
+ local project_var="project_name"
+ local user_var="user_name"
# Add color to logging output
- iniset $conf_file $conf_section logging_context_format_string "%(asctime)s.%(msecs)03d %(color)s%(levelname)s %(name)s [[01;36m%(request_id)s [00;36m%("$user_var")s %("$project_var")s%(color)s] [01;35m%(instance)s%(color)s%(message)s[00m"
+ iniset $conf_file $conf_section logging_context_format_string "%(asctime)s.%(msecs)03d %(color)s%(levelname)s %(name)s [[01;36m%(request_id)s [00;36m%("$project_var")s %("$user_var")s%(color)s] [01;35m%(instance)s%(color)s%(message)s[00m"
iniset $conf_file $conf_section logging_default_format_string "%(asctime)s.%(msecs)03d %(color)s%(levelname)s %(name)s [[00;36m-%(color)s] [01;35m%(instance)s%(color)s%(message)s[00m"
iniset $conf_file $conf_section logging_debug_format_suffix "[00;33mfrom (pid=%(process)d) %(funcName)s %(pathname)s:%(lineno)d[00m"
iniset $conf_file $conf_section logging_exception_prefix "%(color)s%(asctime)s.%(msecs)03d TRACE %(name)s [01;35m%(instance)s[00m"
}
+function setup_systemd_logging {
+ local conf_file=$1
+ local conf_section="DEFAULT"
+ iniset $conf_file $conf_section use_journal "True"
+ iniset $conf_file $conf_section logging_context_format_string \
+ "%(levelname)s %(name)s [%(request_id)s %(project_name)s %(user_name)s] %(instance)s%(message)s"
+ iniset $conf_file $conf_section logging_default_format_string \
+ "%(levelname)s %(name)s [-] %(instance)s%(color)s%(message)s"
+ iniset $conf_file $conf_section logging_debug_format_suffix \
+ "from (pid=%(process)d) %(funcName)s %(pathname)s:%(lineno)d"
+ iniset $conf_file $conf_section logging_exception_prefix "ERROR %(name)s %(instance)s"
+}
+
+function setup_standard_logging_identity {
+ local conf_file=$1
+ iniset $conf_file DEFAULT logging_user_identity_format "%(project_name)s %(user_name)s"
+}
+
# These functions are provided for basic fall-back functionality for
# projects that include parts of DevStack (Grenade). stack.sh will
# override these with more specific versions for DevStack (with fancy
@@ -646,6 +679,12 @@
}
+# running_in_container - Returns true if running inside a container, false otherwise
+function running_in_container {
+ [[ $(systemd-detect-virt --container) != 'none' ]]
+}
+
+
# enable_kernel_bridge_firewall - Enable kernel support for bridge firewalling
function enable_kernel_bridge_firewall {
# Load bridge module. This module provides access to firewall for bridged
@@ -658,7 +697,7 @@
# Enable bridge firewalling in case it's disabled in kernel (upstream
# default is enabled, but some distributions may decide to change it).
# This is at least needed for RHEL 7.2 and earlier releases.
- for proto in arp ip ip6; do
+ for proto in ip ip6; do
sudo sysctl -w net.bridge.bridge-nf-call-${proto}tables=1
done
}
diff --git a/functions-common b/functions-common
index d5014fd..35b4860 100644
--- a/functions-common
+++ b/functions-common
@@ -37,12 +37,12 @@
# ensure we don't re-source this in the same environment
[[ -z "$_DEVSTACK_FUNCTIONS_COMMON" ]] || return 0
-declare -r _DEVSTACK_FUNCTIONS_COMMON=1
+declare -r -g _DEVSTACK_FUNCTIONS_COMMON=1
# Global Config Variables
-declare -A GITREPO
-declare -A GITBRANCH
-declare -A GITDIR
+declare -A -g GITREPO
+declare -A -g GITBRANCH
+declare -A -g GITDIR
TRACK_DEPENDS=${TRACK_DEPENDS:-False}
@@ -87,7 +87,7 @@
CA_CERT_ARG="--os-cacert $SSL_BUNDLE_FILE"
fi
# demo -> devstack
- $TOP_DIR/tools/update_clouds_yaml.py \
+ $PYTHON $TOP_DIR/tools/update_clouds_yaml.py \
--file $CLOUDS_YAML \
--os-cloud devstack \
--os-region-name $REGION_NAME \
@@ -99,7 +99,7 @@
--os-project-name demo
# alt_demo -> devstack-alt
- $TOP_DIR/tools/update_clouds_yaml.py \
+ $PYTHON $TOP_DIR/tools/update_clouds_yaml.py \
--file $CLOUDS_YAML \
--os-cloud devstack-alt \
--os-region-name $REGION_NAME \
@@ -111,7 +111,7 @@
--os-project-name alt_demo
# admin -> devstack-admin
- $TOP_DIR/tools/update_clouds_yaml.py \
+ $PYTHON $TOP_DIR/tools/update_clouds_yaml.py \
--file $CLOUDS_YAML \
--os-cloud devstack-admin \
--os-region-name $REGION_NAME \
@@ -216,7 +216,7 @@
function deprecated {
local text=$1
DEPRECATED_TEXT+="\n$text"
- echo "WARNING: $text"
+ echo "WARNING: $text" >&2
}
# Prints line number and "message" in error format
@@ -302,11 +302,11 @@
# such as "install_package" further abstract things in better ways.
#
# ``os_VENDOR`` - vendor name: ``Ubuntu``, ``Fedora``, etc
-# ``os_RELEASE`` - major release: ``14.04`` (Ubuntu), ``20`` (Fedora)
+# ``os_RELEASE`` - major release: ``16.04`` (Ubuntu), ``23`` (Fedora)
# ``os_PACKAGE`` - package type: ``deb`` or ``rpm``
-# ``os_CODENAME`` - vendor's codename for release: ``trusty``
+# ``os_CODENAME`` - vendor's codename for release: ``xenial``
-declare os_VENDOR os_RELEASE os_PACKAGE os_CODENAME
+declare -g os_VENDOR os_RELEASE os_PACKAGE os_CODENAME
# Make a *best effort* attempt to install lsb_release packages for the
# user if not available. Note can't use generic install_package*
@@ -361,7 +361,7 @@
# Translate the OS version values into common nomenclature
# Sets global ``DISTRO`` from the ``os_*`` values
-declare DISTRO
+declare -g DISTRO
function GetDistro {
GetOSVersion
@@ -534,10 +534,8 @@
echo "the project to the \$PROJECTS variable in the job definition."
die $LINENO "Cloning not allowed in this configuration"
fi
- git_timed clone $git_clone_flags $git_remote $git_dest
- cd $git_dest
- # This checkout syntax works for both branches and tags
- git checkout $git_ref
+ # '--branch' can also take tags
+ git_timed clone $git_clone_flags $git_remote $git_dest --branch $git_ref
elif [[ "$RECLONE" = "True" ]]; then
# if it does exist then simulate what clone does if asked to RECLONE
cd $git_dest
@@ -907,34 +905,6 @@
echo $user_role_id
}
-# Gets or adds user role to domain
-# Usage: get_or_add_user_domain_role <role> <user> <domain>
-function get_or_add_user_domain_role {
- local user_role_id
- # Gets user role id
- user_role_id=$(openstack role assignment list \
- --user $2 \
- --os-url=$KEYSTONE_SERVICE_URI_V3 \
- --os-identity-api-version=3 \
- --domain $3 \
- | grep " $1 " | get_field 1)
- if [[ -z "$user_role_id" ]]; then
- # Adds role to user and get it
- openstack role add $1 \
- --user $2 \
- --domain $3 \
- --os-url=$KEYSTONE_SERVICE_URI_V3 \
- --os-identity-api-version=3
- user_role_id=$(openstack role assignment list \
- --user $2 \
- --os-url=$KEYSTONE_SERVICE_URI_V3 \
- --os-identity-api-version=3 \
- --domain $3 \
- | grep " $1 " | get_field 1)
- fi
- echo $user_role_id
-}
-
# Gets or adds group role to project
# Usage: get_or_add_group_project_role <role> <group> <project>
function get_or_add_group_project_role {
@@ -994,7 +964,7 @@
}
# Gets or creates endpoint
-# Usage: get_or_create_endpoint <service> <region> <publicurl> <adminurl> <internalurl>
+# Usage: get_or_create_endpoint <service> <region> <publicurl> [adminurl] [internalurl]
function get_or_create_endpoint {
# NOTE(jamielennnox): when converting to v3 endpoint creation we go from
# creating one endpoint with multiple urls to multiple endpoints each with
@@ -1006,9 +976,13 @@
# endpoints they need.
local public_id
public_id=$(_get_or_create_endpoint_with_interface $1 public $3 $2)
- _get_or_create_endpoint_with_interface $1 admin $4 $2
- _get_or_create_endpoint_with_interface $1 internal $5 $2
-
+ # only create admin/internal urls if provided content for them
+ if [[ -n "$4" ]]; then
+ _get_or_create_endpoint_with_interface $1 admin $4 $2
+ fi
+ if [[ -n "$5" ]]; then
+ _get_or_create_endpoint_with_interface $1 internal $5 $2
+ fi
# return the public id to indicate success, and this is the endpoint most likely wanted
echo $public_id
}
@@ -1146,6 +1120,19 @@
fi
fi
+ # Look for # not:xxx in comment
+ if [[ $line =~ (.*)#.*not:([^ ]*) ]]; then
+ # We are using BASH regexp matching feature.
+ package=${BASH_REMATCH[1]}
+ distros=${BASH_REMATCH[2]}
+ # In bash ${VAR,,} will lowercase VAR
+ # Look for a match in the distro list
+ if [[ ${distros,,} =~ ${DISTRO,,} ]]; then
+ # If match then skip this package
+ inst_pkg=0
+ fi
+ fi
+
if [[ $inst_pkg = 1 ]]; then
echo $package
fi
@@ -1164,6 +1151,8 @@
# - ``# NOPRIME`` defers installation to be performed later in `stack.sh`
# - ``# dist:DISTRO`` or ``dist:DISTRO1,DISTRO2`` limits the selection
# of the package to the distros listed. The distro names are case insensitive.
+# - ``# not:DISTRO`` or ``not:DISTRO1,DISTRO2`` limits the selection
+# of the package to the distros not listed. The distro names are case insensitive.
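+#   For example, ``libvirt-clients  # not:xenial NOPRIME`` (from
+#   ``files/debs/nova``) installs libvirt-clients everywhere except Xenial.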
function get_packages {
local xtrace
xtrace=$(set +o | grep xtrace)
@@ -1441,6 +1430,89 @@
exit 0
}
+function write_user_unit_file {
+ local service=$1
+ local command="$2"
+ local group=$3
+ local user=$4
+ local extra=""
+ if [[ -n "$group" ]]; then
+ extra="Group=$group"
+ fi
+ local unitfile="$SYSTEMD_DIR/$service"
+ mkdir -p $SYSTEMD_DIR
+
+ iniset -sudo $unitfile "Unit" "Description" "Devstack $service"
+ iniset -sudo $unitfile "Service" "User" "$user"
+ iniset -sudo $unitfile "Service" "ExecStart" "$command"
+ if [[ -n "$group" ]]; then
+ iniset -sudo $unitfile "Service" "Group" "$group"
+ fi
+ iniset -sudo $unitfile "Install" "WantedBy" "multi-user.target"
+
+ # changes to existing units sometimes need a refresh
+ $SYSTEMCTL daemon-reload
+}
+
+function write_uwsgi_user_unit_file {
+ local service=$1
+ local command="$2"
+ local group=$3
+ local user=$4
+ local unitfile="$SYSTEMD_DIR/$service"
+ mkdir -p $SYSTEMD_DIR
+
+ iniset -sudo $unitfile "Unit" "Description" "Devstack $service"
+ iniset -sudo $unitfile "Service" "User" "$user"
+ iniset -sudo $unitfile "Service" "ExecStart" "$command"
+ iniset -sudo $unitfile "Service" "Type" "notify"
+ iniset -sudo $unitfile "Service" "KillSignal" "SIGQUIT"
+ iniset -sudo $unitfile "Service" "Restart" "Always"
+ iniset -sudo $unitfile "Service" "NotifyAccess" "all"
+ iniset -sudo $unitfile "Service" "RestartForceExitStatus" "100"
+
+ if [[ -n "$group" ]]; then
+ iniset -sudo $unitfile "Service" "Group" "$group"
+ fi
+ iniset -sudo $unitfile "Install" "WantedBy" "multi-user.target"
+
+ # changes to existing units sometimes need a refresh
+ $SYSTEMCTL daemon-reload
+}
+
+function _run_under_systemd {
+ local service=$1
+ local command="$2"
+ local cmd=$command
+ local systemd_service="devstack@$service.service"
+ local group=$3
+ local user=${4:-$STACK_USER}
+ if [[ "$command" =~ "uwsgi" ]] ; then
+ write_uwsgi_user_unit_file $systemd_service "$cmd" "$group" "$user"
+ else
+ write_user_unit_file $systemd_service "$cmd" "$group" "$user"
+ fi
+
+ $SYSTEMCTL enable $systemd_service
+ $SYSTEMCTL start $systemd_service
+ _journal_log $service $systemd_service
+}
+
+function _journal_log {
+ local service=$1
+ local unit=$2
+ local logfile="${service}.log.${CURRENT_LOG_TIME}"
+ local real_logfile="${LOGDIR}/${logfile}"
+ if [[ -n ${LOGDIR} ]]; then
+        $JOURNALCTL_F $unit > "$real_logfile" &
+ bash -c "cd '$LOGDIR' && ln -sf '$logfile' ${service}.log"
+ if [[ -n ${SCREEN_LOGDIR} ]]; then
+ # Drop the backward-compat symlink
+ ln -sf "$real_logfile" ${SCREEN_LOGDIR}/screen-${service}.log
+ fi
+ fi
+}
+
# Helper to remove the ``*.failure`` files under ``$SERVICE_DIR/$SCREEN_NAME``.
# This is used for ``service_check`` when all the ``screen_it`` are called finished
# Uses globals ``SCREEN_NAME``, ``SERVICE_DIR``
@@ -1476,16 +1548,24 @@
local service=$1
local command="$2"
local group=$3
- local subservice=$4
+ local user=$4
- local name=${subservice:-$service}
+ local name=$service
time_start "run_process"
if is_service_enabled $service; then
- if [[ "$USE_SCREEN" = "True" ]]; then
+ if [[ "$USE_SYSTEMD" = "True" ]]; then
+ _run_under_systemd "$name" "$command" "$group" "$user"
+ elif [[ "$USE_SCREEN" = "True" ]]; then
+ if [[ "$user" == "root" ]]; then
+ command="sudo $command"
+ fi
screen_process "$name" "$command" "$group"
else
# Spawn directly without screen
+ if [[ "$user" == "root" ]]; then
+ command="sudo $command"
+ fi
_run_process "$name" "$command" "$group" &
fi
fi
@@ -1554,7 +1634,7 @@
# Append the process to the screen rc file
screen_rc "$name" "$command"
- screen -S $SCREEN_NAME -p $name -X stuff "$command & echo \$! >$SERVICE_DIR/$SCREEN_NAME/${name}.pid; fg || echo \"$name failed to start\" | tee \"$SERVICE_DIR/$SCREEN_NAME/${name}.failure\"$NL"
+ screen -S $SCREEN_NAME -p $name -X stuff "$command & echo \$! >$SERVICE_DIR/$SCREEN_NAME/${name}.pid; fg || echo \"$name failed to start. Exit code: \$?\" | tee \"$SERVICE_DIR/$SCREEN_NAME/${name}.failure\"$NL"
}
# Screen rc file builder
@@ -1616,6 +1696,14 @@
if is_service_enabled $service; then
# Kill via pid if we have one available
+ if [[ "$USE_SYSTEMD" == "True" ]]; then
+ # Only do this for units which appear enabled, this also
+ # catches units that don't really exist for cases like
+ # keystone without a failure.
+ $SYSTEMCTL stop devstack@$service.service
+ $SYSTEMCTL disable devstack@$service.service
+ fi
+
if [[ -r $SERVICE_DIR/$SCREEN_NAME/$service.pid ]]; then
pkill -g $(cat $SERVICE_DIR/$SCREEN_NAME/$service.pid)
# oslo.service tends to stop actually shutting down
@@ -1680,7 +1768,7 @@
local logfile=$2
if [[ "$USE_SCREEN" = "True" ]]; then
- screen_process "$name" "sudo tail -f $logfile | sed 's/\\\\\\\\x1b/\o033/g'"
+ screen_process "$name" "sudo tail -f $logfile | sed -u 's/\\\\\\\\x1b/\o033/g'"
fi
}
@@ -1773,6 +1861,9 @@
local name=$1
local url=$2
local branch=${3:-master}
+ if [[ ",${DEVSTACK_PLUGINS}," =~ ,${name}, ]]; then
+ die $LINENO "Plugin attempted to be enabled twice: ${name} ${url} ${branch}"
+ fi
DEVSTACK_PLUGINS+=",$name"
GITREPO[$name]=$url
GITDIR[$name]=$DEST/$name
@@ -2260,6 +2351,14 @@
echo $subnet
}
+function is_provider_network {
+ if [ "$Q_USE_PROVIDER_NETWORKING" == "True" ]; then
+ return 0
+ fi
+ return 1
+}
+
+
# Return the current python as "python<major>.<minor>"
function python_version {
local python_version
@@ -2310,11 +2409,12 @@
fi
}
-# Service wrapper to stop services
+# Service wrapper to reload services
+# If the service is not running, it will be started instead.
# reload_service service-name
function reload_service {
if [ -x /bin/systemctl ]; then
- sudo /bin/systemctl reload $1
+ sudo /bin/systemctl reload-or-restart $1
else
sudo service $1 reload
fi
@@ -2362,9 +2462,9 @@
# Resolution is only in whole seconds, so should be used for long
# running activities.
-declare -A _TIME_TOTAL
-declare -A _TIME_START
-declare -r _TIME_BEGIN=$(date +%s)
+declare -A -g _TIME_TOTAL
+declare -A -g _TIME_START
+declare -r -g _TIME_BEGIN=$(date +%s)
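+# NOTE: the ``-g`` flag matters because these declarations may now be
+# executed from inside a function (when this file is re-sourced); without
+# it, ``declare -A`` inside a function creates a function-local variable.
+# A minimal illustration, assuming nothing beyond stock bash:
+#
+#   function f { declare -A -g global_map; declare -A local_map; }
+#   f
+#   declare -p global_map    # prints the declaration
+#   declare -p local_map     # fails: local_map went away with f's scope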
# time_start $name
#
diff --git a/inc/meta-config b/inc/meta-config
index 6eb7a00..be73b60 100644
--- a/inc/meta-config
+++ b/inc/meta-config
@@ -40,12 +40,10 @@
$CONFIG_AWK_CMD -v matchgroup=$matchgroup -v configfile=$configfile '
BEGIN { group = "" }
/^\[\[.+\|.*\]\]/ {
- if (group == "") {
- gsub("[][]", "", $1);
- split($1, a, "|");
- if (a[1] == matchgroup && a[2] == configfile) {
- group=a[1]
- }
+ gsub("[][]", "", $1);
+ split($1, a, "|");
+ if (a[1] == matchgroup && a[2] == configfile) {
+ group=a[1]
} else {
group=""
}
@@ -183,7 +181,8 @@
realconfigfile=$(eval "echo $configfile")
if [[ -z $realconfigfile ]]; then
- die $LINENO "bogus config file specification: $configfile is undefined"
+ warn $LINENO "unknown config file specification: $configfile is undefined"
+ break
fi
dir=$(dirname $realconfigfile)
if [[ -d $dir ]]; then
diff --git a/inc/python b/inc/python
index e4cfab8..2443c4d 100644
--- a/inc/python
+++ b/inc/python
@@ -19,7 +19,7 @@
# PROJECT_VENV contains the name of the virtual environment for each
# project. A null value installs to the system Python directories.
-declare -A PROJECT_VENV
+declare -A -g PROJECT_VENV
# Python Functions
@@ -69,6 +69,20 @@
pip_install $clean_name
}
+# Wrapper for ``pip install`` that installs a library, with extras, at the
+# version pinned in the global-requirements specification.
+#
+# Uses globals ``REQUIREMENTS_DIR``
+#
+# pip_install_gr_extras packagename extra1,extra2,...
+function pip_install_gr_extras {
+ local name=$1
+ local extras=$2
+ local clean_name
+ clean_name=$(get_from_global_requirements $name)
+ pip_install $clean_name[$extras]
+}
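+# For example, lib/dlm below uses this to install tooz with its
+# zookeeper extra:
+#
+#   pip_install_gr_extras tooz zookeeper
+#
+# which expands to roughly ``pip install 'tooz[zookeeper]==<pinned>'``,
+# with the version taken from global-requirements.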
+
# Determine the python versions supported by a package
function get_python_versions_for_package {
local name=$1
@@ -76,6 +90,132 @@
| grep 'Language' | cut -f5 -d: | grep '\.' | tr '\n' ' '
}
+# Check for python3 classifier in local directory
+function check_python3_support_for_package_local {
+ local name=$1
+ local classifier
+ cd $name
+ set +e
+ classifier=$(python setup.py --classifiers \
+ | grep 'Programming Language :: Python :: 3$')
+ set -e
+ echo $classifier
+}
+
+# Check for python3 classifier on pypi
+function check_python3_support_for_package_remote {
+ local name=$1
+ local classifier
+ set +e
+ classifier=$(curl -s -L "https://pypi.python.org/pypi/$name/json" \
+ | grep '"Programming Language :: Python :: 3"')
+ set -e
+ echo $classifier
+}
+
+# python3_enabled_for() checks if the service(s) specified as arguments are
+# enabled by the user in ``ENABLED_PYTHON3_PACKAGES``.
+#
+# Multiple services specified as arguments are ``OR``'ed together; the
+# test succeeds if any of them matches.
+#
+# Uses global ``ENABLED_PYTHON3_PACKAGES``
+# python3_enabled_for dir [dir ...]
+function python3_enabled_for {
+ local xtrace
+ xtrace=$(set +o | grep xtrace)
+ set +o xtrace
+
+ local enabled=1
+ local dirs=$@
+ local dir
+ for dir in ${dirs}; do
+ [[ ,${ENABLED_PYTHON3_PACKAGES}, =~ ,${dir}, ]] && enabled=0
+ done
+
+ $xtrace
+ return $enabled
+}
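+# Illustrative example: with ENABLED_PYTHON3_PACKAGES="swift,nova",
+# ``python3_enabled_for swift glance`` succeeds because at least one
+# argument is in the list, while ``python3_enabled_for glance`` fails.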
+
+# python3_disabled_for() checks if the service(s) specified as arguments are
+# disabled by the user in ``DISABLED_PYTHON3_PACKAGES``.
+#
+# Multiple services specified as arguments are ``OR``'ed together; the
+# test succeeds if any of them matches.
+#
+# Uses global ``DISABLED_PYTHON3_PACKAGES``
+# python3_disabled_for dir [dir ...]
+function python3_disabled_for {
+ local xtrace
+ xtrace=$(set +o | grep xtrace)
+ set +o xtrace
+
+ local enabled=1
+ local dirs=$@
+ local dir
+ for dir in ${dirs}; do
+ [[ ,${DISABLED_PYTHON3_PACKAGES}, =~ ,${dir}, ]] && enabled=0
+ done
+
+ $xtrace
+ return $enabled
+}
+
+# enable_python3_package() adds the repositories passed as arguments to the
+# ``ENABLED_PYTHON3_PACKAGES`` list, if they are not already present.
+#
+# For example:
+# enable_python3_package nova
+#
+# Uses global ``ENABLED_PYTHON3_PACKAGES``
+# enable_python3_package dir [dir ...]
+function enable_python3_package {
+ local xtrace
+ xtrace=$(set +o | grep xtrace)
+ set +o xtrace
+
+ local tmpsvcs="${ENABLED_PYTHON3_PACKAGES}"
+ local python3
+ for dir in $@; do
+ if [[ ,${DISABLED_PYTHON3_PACKAGES}, =~ ,${dir}, ]]; then
+ warn $LINENO "Attempt to enable_python3_package ${dir} when it has been disabled"
+ continue
+ fi
+ if ! python3_enabled_for $dir; then
+ tmpsvcs+=",$dir"
+ fi
+ done
+ ENABLED_PYTHON3_PACKAGES=$(_cleanup_service_list "$tmpsvcs")
+
+ $xtrace
+}
+
+# disable_python3_package() adds the services passed as arguments to the
+# ``DISABLED_PYTHON3_PACKAGES`` list and removes them from the
+# ``ENABLED_PYTHON3_PACKAGES`` list, if present.
+#
+# For example:
+# disable_python3_package swift
+#
+# Uses globals ``ENABLED_PYTHON3_PACKAGES`` and ``DISABLED_PYTHON3_PACKAGES``
+# disable_python3_package dir [dir ...]
+function disable_python3_package {
+ local xtrace
+ xtrace=$(set +o | grep xtrace)
+ set +o xtrace
+
+ local disabled_svcs="${DISABLED_PYTHON3_PACKAGES}"
+ local enabled_svcs=",${ENABLED_PYTHON3_PACKAGES},"
+ local dir
+ for dir in $@; do
+ disabled_svcs+=",$dir"
+ if python3_enabled_for $dir; then
+ enabled_svcs=${enabled_svcs//,$dir,/,}
+ fi
+ done
+ DISABLED_PYTHON3_PACKAGES=$(_cleanup_service_list "$disabled_svcs")
+ ENABLED_PYTHON3_PACKAGES=$(_cleanup_service_list "$enabled_svcs")
+
+ $xtrace
+}
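+# Illustrative example: after ``disable_python3_package swift``, swift is
+# appended to DISABLED_PYTHON3_PACKAGES and dropped from
+# ENABLED_PYTHON3_PACKAGES; a subsequent ``enable_python3_package swift``
+# only warns and leaves both lists unchanged.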
+
# Wrapper for ``pip install`` to set cache and proxy environment variables
# Uses globals ``OFFLINE``, ``PIP_VIRTUAL_ENV``,
# ``PIP_UPGRADE``, ``TRACK_DEPENDS``, ``*_proxy``,
@@ -123,9 +263,41 @@
# default pip
local package_dir=${!#}
local python_versions
- if [[ -d "$package_dir" ]]; then
+
+ # Special case some services that have experimental
+ # support for python3 in progress, but don't claim support
+ # in their classifier
+ echo "Check python version for : $package_dir"
+ if python3_disabled_for ${package_dir##*/}; then
+ echo "Explicitly using $PYTHON2_VERSION version to install $package_dir based on DISABLED_PYTHON3_PACKAGES"
+ elif python3_enabled_for ${package_dir##*/}; then
+ echo "Explicitly using $PYTHON3_VERSION version to install $package_dir based on ENABLED_PYTHON3_PACKAGES"
+ sudo_pip="$sudo_pip LC_ALL=en_US.UTF-8"
+ cmd_pip=$(get_pip_command $PYTHON3_VERSION)
+ elif [[ -d "$package_dir" ]]; then
python_versions=$(get_python_versions_for_package $package_dir)
if [[ $python_versions =~ $PYTHON3_VERSION ]]; then
+ echo "Automatically using $PYTHON3_VERSION version to install $package_dir based on classifiers"
+ sudo_pip="$sudo_pip LC_ALL=en_US.UTF-8"
+ cmd_pip=$(get_pip_command $PYTHON3_VERSION)
+ else
+ # The package may not yet have advertised python3.5
+ # support, so check for just the python3 classifier and log
+ # a warning.
+ python3_classifier=$(check_python3_support_for_package_local $package_dir)
+ if [[ ! -z "$python3_classifier" ]]; then
+ echo "Automatically using $PYTHON3_VERSION version to install $package_dir based on local package settings"
+ sudo_pip="$sudo_pip LC_ALL=en_US.UTF-8"
+ cmd_pip=$(get_pip_command $PYTHON3_VERSION)
+ fi
+ fi
+ else
+ # Check pypi as we don't have the package on disk
+ package=$(echo $package_dir | grep -o '^[.a-zA-Z0-9_-]*')
+ python3_classifier=$(check_python3_support_for_package_remote $package)
+ if [[ ! -z "$python3_classifier" ]]; then
+ echo "Automatically using $PYTHON3_VERSION version to install $package based on remote package settings"
+ sudo_pip="$sudo_pip LC_ALL=en_US.UTF-8"
cmd_pip=$(get_pip_command $PYTHON3_VERSION)
fi
fi
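+ # To summarize the selection order above: an entry in
+ # DISABLED_PYTHON3_PACKAGES forces $PYTHON2_VERSION, an entry in
+ # ENABLED_PYTHON3_PACKAGES forces $PYTHON3_VERSION, otherwise the
+ # classifiers of the local source tree (or, failing that, the ones
+ # published on pypi) decide.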
@@ -241,6 +413,16 @@
function setup_dev_lib {
local name=$1
local dir=${GITDIR[$name]}
+ if python3_enabled; then
+ # Temporarily turn off Python 3 mode and install the package
+ # under Python 2 as well. This ensures that all libs
+ # being used for development are installed under both versions
+ # of Python.
+ echo "Installing $name again without Python 3 enabled"
+ USE_PYTHON3=False
+ setup_develop $dir
+ USE_PYTHON3=True
+ fi
setup_develop $dir
}
@@ -371,9 +553,18 @@
function install_python3 {
if is_ubuntu; then
apt_get install python${PYTHON3_VERSION} python${PYTHON3_VERSION}-dev
+ elif is_suse; then
+ install_package python3-devel python3-dbm
fi
}
+function install_devstack_tools {
+ # intentionally old to ensure devstack-gate has control
+ local dstools_version=${DSTOOLS_VERSION:-0.1.2}
+ install_python3
+ sudo pip3 install -U devstack-tools==${dstools_version}
+}
+
# Restore xtrace
$INC_PY_TRACE
diff --git a/lib/apache b/lib/apache
index 8a38cc4..afeac15 100644
--- a/lib/apache
+++ b/lib/apache
@@ -29,15 +29,20 @@
# Set up apache name and configuration directory
+# Note that APACHE_CONF_DIR is really, more accurately, apache's vhost
+# configuration dir, but we can't just change it because it is a public interface.
if is_ubuntu; then
APACHE_NAME=apache2
APACHE_CONF_DIR=${APACHE_CONF_DIR:-/etc/$APACHE_NAME/sites-available}
+ APACHE_SETTINGS_DIR=${APACHE_SETTINGS_DIR:-/etc/$APACHE_NAME/conf-enabled}
elif is_fedora; then
APACHE_NAME=httpd
APACHE_CONF_DIR=${APACHE_CONF_DIR:-/etc/$APACHE_NAME/conf.d}
+ APACHE_SETTINGS_DIR=${APACHE_SETTINGS_DIR:-/etc/$APACHE_NAME/conf.d}
elif is_suse; then
APACHE_NAME=apache2
APACHE_CONF_DIR=${APACHE_CONF_DIR:-/etc/$APACHE_NAME/vhosts.d}
+ APACHE_SETTINGS_DIR=${APACHE_SETTINGS_DIR:-/etc/$APACHE_NAME/conf.d}
fi
APACHE_LOG_DIR="/var/log/${APACHE_NAME}"
@@ -61,12 +66,62 @@
fi
}
+# NOTE(sdague): Install uwsgi including the apache module; we need to get
+# to 2.0.6+ to get a working mod_proxy_uwsgi. We can probably build a
+# check for that and do it differently on different platforms.
+function install_apache_uwsgi {
+ local apxs="apxs2"
+ if is_fedora; then
+ apxs="apxs"
+ fi
+
+ # Ubuntu xenial ships an older uwsgi, so the proxy doesn't
+ # actually work. Hence we have to build from source for now.
+ #
+ # Centos 7 actually has the module in epel, but there was a big
+ # push to disable epel by default. As such, compile from source
+ # there as well.
+
+ local dir
+ dir=$(mktemp -d)
+ pushd $dir
+ pip_install uwsgi
+ pip download uwsgi -c $REQUIREMENTS_DIR/upper-constraints.txt
+ local uwsgi
+ uwsgi=$(ls uwsgi*)
+ tar xvf $uwsgi
+ cd uwsgi*/apache2
+ sudo $apxs -i -c mod_proxy_uwsgi.c
+ popd
+ # delete the temp directory
+ sudo rm -rf $dir
+
+ if is_ubuntu; then
+ # we've got to enable proxy and proxy_uwsgi for this to work
+ sudo a2enmod proxy
+ sudo a2enmod proxy_uwsgi
+ elif is_fedora; then
+ # redhat is missing a nice way to turn on/off modules
+ echo "LoadModule proxy_uwsgi_module modules/mod_proxy_uwsgi.so" \
+ | sudo tee /etc/httpd/conf.modules.d/02-proxy-uwsgi.conf
+ fi
+ restart_apache_server
+}
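+# A quick sanity check after this runs (illustrative, not part of the
+# install path) is to confirm apache actually loaded the module:
+#
+#   apachectl -M | grep proxy_uwsgi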
+
# install_apache_wsgi() - Install Apache server and wsgi module
function install_apache_wsgi {
# Apache installation, because we mark it NOPRIME
if is_ubuntu; then
# Install apache2, which is NOPRIME'd
- install_package apache2 libapache2-mod-wsgi
+ install_package apache2
+ if python3_enabled; then
+ if is_package_installed libapache2-mod-wsgi; then
+ uninstall_package libapache2-mod-wsgi
+ fi
+ install_package libapache2-mod-wsgi-py3
+ else
+ install_package libapache2-mod-wsgi
+ fi
elif is_fedora; then
sudo rm -f /etc/httpd/conf.d/000-*
install_package httpd mod_wsgi
@@ -77,49 +132,15 @@
fi
# WSGI isn't enabled by default, enable it
enable_apache_mod wsgi
-
- # ensure mod_version enabled for <IfVersion ...>. This is
- # built-in statically on anything recent, but precise (2.2)
- # doesn't have it enabled
- sudo a2enmod version || true
-}
-
-# get_apache_version() - return the version of Apache installed
-# This function is used to determine the Apache version installed. There are
-# various differences between Apache 2.2 and 2.4 that warrant special handling.
-function get_apache_version {
- if is_ubuntu; then
- local version_str
- version_str=$(sudo /usr/sbin/apache2ctl -v | awk '/Server version/ {print $3}' | cut -f2 -d/)
- elif is_fedora; then
- local version_str
- version_str=$(rpm -qa --queryformat '%{VERSION}' httpd)
- elif is_suse; then
- local version_str
- version_str=$(rpm -qa --queryformat '%{VERSION}' apache2)
- else
- exit_distro_not_supported "cannot determine apache version"
- fi
- if [[ "$version_str" =~ ^2\.2\. ]]; then
- echo "2.2"
- elif [[ "$version_str" =~ ^2\.4\. ]]; then
- echo "2.4"
- else
- exit_distro_not_supported "apache version not supported"
- fi
}
# apache_site_config_for() - The filename of the site's configuration file.
# This function uses the global variables APACHE_NAME and APACHE_CONF_DIR.
#
-# On Ubuntu 14.04, the site configuration file must have a .conf suffix for a2ensite and a2dissite to
+# On Ubuntu 14.04+, the site configuration file must have a .conf suffix for a2ensite and a2dissite to
# recognise it. a2ensite and a2dissite ignore the .conf suffix when used as a parameter. The default sites'
# files are 000-default.conf and default-ssl.conf.
#
-# On Ubuntu 12.04, the site configuration file may have any format, as long as it is in
-# /etc/apache2/sites-available/. a2ensite and a2dissite need the entire file name to work. The default
-# sites' files are default and default-ssl.
-#
# On Fedora and openSUSE, any file in /etc/httpd/conf.d/ whose name ends with .conf is enabled.
#
# On RHEL and CentOS, things should hopefully work as in Fedora.
@@ -128,22 +149,14 @@
# +----------------------+--------------------+--------------------------+--------------------------+
# | Distribution | File name | Site enabling command | Site disabling command |
# +----------------------+--------------------+--------------------------+--------------------------+
-# | Ubuntu 12.04 | site | a2ensite site | a2dissite site |
# | Ubuntu 14.04 | site.conf | a2ensite site | a2dissite site |
# | Fedora, RHEL, CentOS | site.conf.disabled | mv site.conf{.disabled,} | mv site.conf{,.disabled} |
# +----------------------+--------------------+--------------------------+--------------------------+
function apache_site_config_for {
local site=$@
if is_ubuntu; then
- local apache_version
- apache_version=$(get_apache_version)
- if [[ "$apache_version" == "2.2" ]]; then
- # Ubuntu 12.04 - Apache 2.2
- echo $APACHE_CONF_DIR/${site}
- else
- # Ubuntu 14.04 - Apache 2.4
- echo $APACHE_CONF_DIR/${site}.conf
- fi
+ # Ubuntu 14.04 - Apache 2.4
+ echo $APACHE_CONF_DIR/${site}.conf
elif is_fedora || is_suse; then
# fedora conf.d is only imported if it ends with .conf so this is approx the same
local enabled_site_file="$APACHE_CONF_DIR/${site}.conf"
@@ -173,7 +186,7 @@
function disable_apache_site {
local site=$@
if is_ubuntu; then
- sudo a2dissite ${site}
+ sudo a2dissite ${site} || true
elif is_fedora || is_suse; then
local enabled_site_file="$APACHE_CONF_DIR/${site}.conf"
# Do nothing if no site config exists
@@ -202,11 +215,7 @@
# Apache can be slow to stop, doing an explicit stop, sleep, start helps
# to mitigate issues where apache will claim a port it's listening on is
# still in use and fail to start.
- time_start "restart_apache_server"
- stop_service $APACHE_NAME
- sleep 3
- start_service $APACHE_NAME
- time_stop "restart_apache_server"
+ restart_service $APACHE_NAME
}
# reload_apache_server
@@ -214,6 +223,64 @@
reload_service $APACHE_NAME
}
+function write_uwsgi_config {
+ local file=$1
+ local wsgi=$2
+ local url=$3
+ local http=$4
+ local name=""
+ name=$(basename $wsgi)
+
+ # create a home for the sockets; note don't use /tmp -- apache has
+ # a private view of it on some platforms.
+ local socket_dir='/var/run/uwsgi'
+ sudo install -d -o $STACK_USER -m 755 $socket_dir
+ local socket="$socket_dir/${name}.socket"
+
+ # always clean up first, given that we are using iniset here
+ rm -rf $file
+ iniset "$file" uwsgi wsgi-file "$wsgi"
+ iniset "$file" uwsgi socket "$socket"
+ iniset "$file" uwsgi processes $API_WORKERS
+ # This is running standalone
+ iniset "$file" uwsgi master true
+ # Set die-on-term & exit-on-reload so that uwsgi shuts down
+ iniset "$file" uwsgi die-on-term true
+ iniset "$file" uwsgi exit-on-reload true
+ iniset "$file" uwsgi enable-threads true
+ iniset "$file" uwsgi plugins python
+ # uwsgi recommends this to prevent thundering herd on accept.
+ iniset "$file" uwsgi thunder-lock true
+ # Raise the header buffer size above the 4k default.
+ iniset "$file" uwsgi buffer-size 65535
+ # Make sure the client doesn't try to re-use the connection.
+ iniset "$file" uwsgi add-header "Connection: close"
+ # This ensures that file descriptors aren't shared between processes.
+ iniset "$file" uwsgi lazy-apps true
+ iniset "$file" uwsgi chmod-socket 666
+
+ # If we were asked to bind directly to http, do that and don't start the apache proxy
+ if [[ -n "$http" ]]; then
+ iniset "$file" uwsgi http $http
+ else
+ local apache_conf=""
+ apache_conf=$(apache_site_config_for $name)
+ echo "ProxyPass \"${url}\" \"unix:${socket}|uwsgi://uwsgi-uds-${name}/\" retry=0 " | sudo tee $apache_conf
+ enable_apache_site $name
+ reload_apache_server
+ fi
+}
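+# As a rough sketch (names illustrative), a call like
+#
+#   write_uwsgi_config "/etc/glance/glance-uwsgi.ini" "$GLANCE_BIN_DIR/glance-wsgi-api" "/image"
+#
+# writes an ini whose [uwsgi] section points wsgi-file at the script and
+# binds the socket at /var/run/uwsgi/glance-wsgi-api.socket, then drops
+# an apache site consisting of just the single ProxyPass line above,
+# mapping /image onto that socket.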
+
+function remove_uwsgi_config {
+ local file=$1
+ local wsgi=$2
+ local name=""
+ name=$(basename $wsgi)
+
+ rm -rf $file
+ disable_apache_site $name
+}
+
# Restore xtrace
$_XTRACE_LIB_APACHE
diff --git a/lib/cinder b/lib/cinder
index c7ce0f5..9fc25c7 100644
--- a/lib/cinder
+++ b/lib/cinder
@@ -58,7 +58,7 @@
CINDER_API_PASTE_INI=$CINDER_CONF_DIR/api-paste.ini
# Public facing bits
-if is_ssl_enabled_service "cinder" || is_service_enabled tls-proxy; then
+if is_service_enabled tls-proxy; then
CINDER_SERVICE_PROTOCOL="https"
fi
CINDER_SERVICE_HOST=${CINDER_SERVICE_HOST:-$SERVICE_HOST}
@@ -68,8 +68,12 @@
CINDER_SERVICE_LISTEN_ADDRESS=${CINDER_SERVICE_LISTEN_ADDRESS:-$SERVICE_LISTEN_ADDRESS}
# What type of LVM device should Cinder use for LVM backend
-# Defaults to thin. For thick provisioning change to 'default'
-CINDER_LVM_TYPE=${CINDER_LVM_TYPE:-thin}
+# Defaults to 'default', which is thick provisioning; the other valid
+# choice is 'thin', which as the name implies uses LVM thin provisioning.
+# Thinly provisioned LVM volumes may be more efficient when using the Cinder
+# image cache, but there are also known race failures with volume snapshots
+# and thinly provisioned LVM volumes, see bug 1642111 for details.
+CINDER_LVM_TYPE=${CINDER_LVM_TYPE:-default}
# Default backends
# The backend format is type:name where type is one of the supported backend
@@ -121,12 +125,6 @@
done
fi
-# Change the default nova_catalog_info and nova_catalog_admin_info values in
-# cinder so that the service name cinder is searching for matches that set for
-# nova in keystone.
-CINDER_NOVA_CATALOG_INFO=${CINDER_NOVA_CATALOG_INFO:-compute:nova:publicURL}
-CINDER_NOVA_CATALOG_ADMIN_INFO=${CINDER_NOVA_CATALOG_ADMIN_INFO:-compute:nova:adminURL}
-
# Environment variables to configure the image-volume cache
CINDER_IMG_CACHE_ENABLED=${CINDER_IMG_CACHE_ENABLED:-True}
@@ -217,11 +215,6 @@
local cinder_api_port=$CINDER_SERVICE_PORT
local venv_path=""
- if is_ssl_enabled_service c-api; then
- cinder_ssl="SSLEngine On"
- cinder_certfile="SSLCertificateFile $CINDER_SSL_CERT"
- cinder_keyfile="SSLCertificateKeyFile $CINDER_SSL_KEY"
- fi
if [[ ${USE_VENV} = True ]]; then
venv_path="python-path=${PROJECT_VENV["cinder"]}/lib/python2.7/site-packages"
fi
@@ -264,8 +257,15 @@
configure_auth_token_middleware $CINDER_CONF cinder $CINDER_AUTH_CACHE_DIR
- iniset $CINDER_CONF DEFAULT nova_catalog_info $CINDER_NOVA_CATALOG_INFO
- iniset $CINDER_CONF DEFAULT nova_catalog_admin_info $CINDER_NOVA_CATALOG_ADMIN_INFO
+ # Change the default nova_catalog_info and nova_catalog_admin_info values
+ # in cinder so that the service name cinder searches for matches the one
+ # set for nova in keystone.
+ if [[ -n "$CINDER_NOVA_CATALOG_INFO" ]]; then
+ iniset $CINDER_CONF DEFAULT nova_catalog_info $CINDER_NOVA_CATALOG_INFO
+ fi
+ if [[ -n "$CINDER_NOVA_CATALOG_ADMIN_INFO" ]]; then
+ iniset $CINDER_CONF DEFAULT nova_catalog_admin_info $CINDER_NOVA_CATALOG_ADMIN_INFO
+ fi
iniset $CINDER_CONF DEFAULT auth_strategy keystone
iniset $CINDER_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
@@ -312,7 +312,7 @@
fi
if is_service_enabled ceilometer; then
- iniset $CINDER_CONF oslo_messaging_notifications driver "messaging"
+ iniset $CINDER_CONF oslo_messaging_notifications driver "messagingv2"
fi
if is_service_enabled tls-proxy; then
@@ -331,12 +331,7 @@
iniset $CINDER_CONF DEFAULT volume_clear $CINDER_VOLUME_CLEAR
# Format logging
- if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ] && [ "$CINDER_USE_MOD_WSGI" == "False" ]; then
- setup_colorized_logging $CINDER_CONF DEFAULT "project_id" "user_id"
- else
- # Set req-id, project-name and resource in log format
- iniset $CINDER_CONF DEFAULT logging_context_format_string "%(asctime)s.%(msecs)03d %(levelname)s %(name)s [%(request_id)s %(project_name)s] %(resource)s%(message)s"
- fi
+ setup_logging $CINDER_CONF $CINDER_USE_MOD_WSGI
if [ "$CINDER_USE_MOD_WSGI" == "True" ]; then
_cinder_config_apache_wsgi
@@ -349,7 +344,7 @@
iniset $CINDER_CONF DEFAULT osapi_volume_workers "$API_WORKERS"
iniset $CINDER_CONF DEFAULT glance_api_servers "${GLANCE_SERVICE_PROTOCOL}://${GLANCE_HOSTPORT}"
- if is_ssl_enabled_service glance || is_service_enabled tls-proxy; then
+ if is_service_enabled tls-proxy; then
iniset $CINDER_CONF DEFAULT glance_protocol https
iniset $CINDER_CONF DEFAULT glance_ca_certificates_file $SSL_BUNDLE_FILE
fi
@@ -358,19 +353,18 @@
iniset $CINDER_CONF DEFAULT glance_api_version 2
fi
- # Register SSL certificates if provided
- if is_ssl_enabled_service cinder; then
- ensure_certificates CINDER
-
- iniset $CINDER_CONF DEFAULT ssl_cert_file "$CINDER_SSL_CERT"
- iniset $CINDER_CONF DEFAULT ssl_key_file "$CINDER_SSL_KEY"
- fi
-
# Set os_privileged_user credentials (used for os-assisted-snapshots)
iniset $CINDER_CONF DEFAULT os_privileged_user_name nova
iniset $CINDER_CONF DEFAULT os_privileged_user_password "$SERVICE_PASSWORD"
iniset $CINDER_CONF DEFAULT os_privileged_user_tenant "$SERVICE_PROJECT_NAME"
iniset $CINDER_CONF DEFAULT graceful_shutdown_timeout "$SERVICE_GRACEFUL_SHUTDOWN_TIMEOUT"
+
+ # Set the backend url according to the configured dlm backend
+ if is_dlm_enabled; then
+ if [[ "$(dlm_backend)" == "zookeeper" ]]; then
+ iniset $CINDER_CONF coordination backend_url "zookeeper://${SERVICE_HOST}:2181"
+ fi
+ fi
}
# create_cinder_accounts() - Set up common required cinder accounts
@@ -391,24 +385,18 @@
get_or_create_endpoint \
"volume" \
"$REGION_NAME" \
- "$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v1/\$(project_id)s" \
- "$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v1/\$(project_id)s" \
"$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v1/\$(project_id)s"
get_or_create_service "cinderv2" "volumev2" "Cinder Volume Service V2"
get_or_create_endpoint \
"volumev2" \
"$REGION_NAME" \
- "$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v2/\$(project_id)s" \
- "$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v2/\$(project_id)s" \
"$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v2/\$(project_id)s"
get_or_create_service "cinderv3" "volumev3" "Cinder Volume Service V3"
get_or_create_endpoint \
"volumev3" \
"$REGION_NAME" \
- "$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v3/\$(project_id)s" \
- "$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v3/\$(project_id)s" \
"$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v3/\$(project_id)s"
configure_cinder_internal_tenant
@@ -423,11 +411,7 @@
}
# init_cinder() - Initialize database and volume group
-# Uses global ``NOVA_ENABLED_APIS``
function init_cinder {
- # Force nova volumes off
- NOVA_ENABLED_APIS=$(echo $NOVA_ENABLED_APIS | sed "s/osapi_volume,//")
-
if is_service_enabled $DATABASE_BACKENDS; then
# (Re)create cinder database
recreate_database cinder
@@ -469,9 +453,6 @@
if [ "$CINDER_USE_MOD_WSGI" == "True" ]; then
install_apache_wsgi
- if is_ssl_enabled_service "c-api"; then
- enable_mod_ssl
- fi
fi
}
@@ -533,10 +514,11 @@
tail_log c-api /var/log/$APACHE_NAME/c-api.log
else
run_process c-api "$CINDER_BIN_DIR/cinder-api --config-file $CINDER_CONF"
- echo "Waiting for Cinder API to start..."
- if ! wait_for_service $SERVICE_TIMEOUT $service_protocol://$CINDER_SERVICE_HOST:$service_port; then
- die $LINENO "c-api did not start"
- fi
+ fi
+
+ echo "Waiting for Cinder API to start..."
+ if ! wait_for_service $SERVICE_TIMEOUT $service_protocol://$CINDER_SERVICE_HOST:$service_port; then
+ die $LINENO "c-api did not start"
fi
run_process c-sch "$CINDER_BIN_DIR/cinder-scheduler --config-file $CINDER_CONF"
diff --git a/lib/cinder_backends/ceph b/lib/cinder_backends/ceph
index ba86ccf..00a0bb3 100644
--- a/lib/cinder_backends/ceph
+++ b/lib/cinder_backends/ceph
@@ -48,7 +48,7 @@
iniset $CINDER_CONF $be_name rbd_ceph_conf "$CEPH_CONF_FILE"
iniset $CINDER_CONF $be_name rbd_pool "$CINDER_CEPH_POOL"
iniset $CINDER_CONF $be_name rbd_user "$CINDER_CEPH_USER"
- iniset $CINDER_CONF $be_name rbd_uuid "$CINDER_CEPH_UUID"
+ iniset $CINDER_CONF $be_name rbd_secret_uuid "$CINDER_CEPH_UUID"
iniset $CINDER_CONF $be_name rbd_flatten_volume_from_snapshot False
iniset $CINDER_CONF $be_name rbd_max_clone_depth 5
iniset $CINDER_CONF DEFAULT glance_api_version 2
diff --git a/lib/cinder_backends/fake b/lib/cinder_backends/fake
new file mode 100644
index 0000000..4749ace
--- /dev/null
+++ b/lib/cinder_backends/fake
@@ -0,0 +1,47 @@
+#!/bin/bash
+#
+# lib/cinder_backends/fake
+# Configure the Fake backend
+
+# Enable with:
+#
+# CINDER_ENABLED_BACKENDS+=,fake:fake
+
+# Dependencies:
+#
+# - ``functions`` file
+# - ``cinder`` configurations
+
+# CINDER_CONF
+
+# cleanup_cinder_backend_fake - called from cleanup_cinder()
+# configure_cinder_backend_fake - called from configure_cinder()
+# init_cinder_backend_fake - called from init_cinder()
+
+
+# Save trace setting
+_XTRACE_CINDER_FAKE=$(set +o | grep xtrace)
+set +o xtrace
+
+
+function cleanup_cinder_backend_fake {
+ local be_name=$1
+}
+
+function configure_cinder_backend_fake {
+ local be_name=$1
+
+ iniset $CINDER_CONF $be_name volume_backend_name $be_name
+ iniset $CINDER_CONF $be_name volume_driver "cinder.tests.fake_driver.FakeLoggingVolumeDriver"
+
+}
+
+function init_cinder_backend_fake {
+ local be_name=$1
+}
+
+# Restore xtrace
+$_XTRACE_CINDER_FAKE
+
+# mode: shell-script
+# End:
diff --git a/lib/cinder_backends/fake_gate b/lib/cinder_backends/fake_gate
new file mode 100644
index 0000000..6b1f848
--- /dev/null
+++ b/lib/cinder_backends/fake_gate
@@ -0,0 +1,74 @@
+#!/bin/bash
+#
+# lib/cinder_backends/fake_gate
+# Configure the fake gate backend (an LVM-backed fake driver)
+
+# Enable with:
+#
+# CINDER_ENABLED_BACKENDS+=,fake_gate:lvmname
+
+# Dependencies:
+#
+# - ``functions`` file
+# - ``cinder`` configurations
+
+# CINDER_CONF
+# DATA_DIR
+# VOLUME_GROUP_NAME
+
+# cleanup_cinder_backend_lvm - called from cleanup_cinder()
+# configure_cinder_backend_lvm - called from configure_cinder()
+# init_cinder_backend_lvm - called from init_cinder()
+
+
+# Save trace setting
+_XTRACE_CINDER_LVM=$(set +o | grep xtrace)
+set +o xtrace
+
+
+# TODO: resurrect backing device...need to know how to set values
+#VOLUME_BACKING_DEVICE=${VOLUME_BACKING_DEVICE:-}
+
+# Entry Points
+# ------------
+
+# cleanup_cinder_backend_lvm - Delete volume group and remove backing file
+# cleanup_cinder_backend_lvm $be_name
+function cleanup_cinder_backend_lvm {
+ local be_name=$1
+
+ # Campsite rule: leave behind a volume group at least as clean as we found it
+ clean_lvm_volume_group $VOLUME_GROUP_NAME-$be_name
+ clean_lvm_filter
+}
+
+# configure_cinder_backend_lvm - Set config files, create data dirs, etc
+# configure_cinder_backend_lvm $be_name
+function configure_cinder_backend_lvm {
+ local be_name=$1
+
+ iniset $CINDER_CONF $be_name volume_backend_name $be_name
+ iniset $CINDER_CONF $be_name volume_driver "cinder.tests.fake_driver.FakeGateDriver"
+ iniset $CINDER_CONF $be_name volume_group $VOLUME_GROUP_NAME-$be_name
+ iniset $CINDER_CONF $be_name iscsi_helper "$CINDER_ISCSI_HELPER"
+ iniset $CINDER_CONF $be_name lvm_type "$CINDER_LVM_TYPE"
+
+ if [[ "$CINDER_VOLUME_CLEAR" == "non" ]]; then
+ iniset $CINDER_CONF $be_name volume_clear none
+ fi
+}
+
+# init_cinder_backend_lvm - Initialize volume group
+# init_cinder_backend_lvm $be_name
+function init_cinder_backend_lvm {
+ local be_name=$1
+
+ # Start with a clean volume group
+ init_lvm_volume_group $VOLUME_GROUP_NAME-$be_name $VOLUME_BACKING_FILE_SIZE
+}
+
+# Restore xtrace
+$_XTRACE_CINDER_LVM
+
+# mode: shell-script
+# End:
diff --git a/lib/databases/mysql b/lib/databases/mysql
index f6cc922..7bbcace 100644
--- a/lib/databases/mysql
+++ b/lib/databases/mysql
@@ -82,10 +82,9 @@
fi
# Set the root password - only works the first time. For Ubuntu, we already
- # did that with debconf before installing the package.
- if ! is_ubuntu; then
- sudo mysqladmin -u root password $DATABASE_PASSWORD || true
- fi
+ # did that with debconf before installing the package, but we still try,
+ # because the package might have been installed already.
+ sudo mysqladmin -u root password $DATABASE_PASSWORD || true
# Update the DB to give user '$DATABASE_USER'@'%' full control of the all databases:
sudo mysql -uroot -p$DATABASE_PASSWORD -h127.0.0.1 -e "GRANT ALL PRIVILEGES ON *.* TO '$DATABASE_USER'@'%' identified by '$DATABASE_PASSWORD';"
@@ -95,7 +94,7 @@
# Change bind-address from localhost (127.0.0.1) to any (::) and
# set default db type to InnoDB
iniset -sudo $my_conf mysqld bind-address "$SERVICE_LISTEN_ADDRESS"
- iniset -sudo $my_conf mysqld sql_mode STRICT_ALL_TABLES
+ iniset -sudo $my_conf mysqld sql_mode TRADITIONAL
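+ # TRADITIONAL is roughly a superset of STRICT_ALL_TABLES: it keeps
+ # strict value checking and additionally rejects things like zero
+ # dates and division by zero, so this tightens validation rather
+ # than relaxing it.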
iniset -sudo $my_conf mysqld default-storage-engine InnoDB
iniset -sudo $my_conf mysqld max_connections 1024
iniset -sudo $my_conf mysqld query_cache_type OFF
diff --git a/lib/databases/postgresql b/lib/databases/postgresql
index 14425a5..618834b 100644
--- a/lib/databases/postgresql
+++ b/lib/databases/postgresql
@@ -47,7 +47,7 @@
}
function configure_database_postgresql {
- local pg_conf pg_dir pg_hba root_roles version
+ local pg_conf pg_dir pg_hba check_role version
echo_summary "Configuring and starting PostgreSQL"
if is_fedora; then
pg_hba=/var/lib/pgsql/data/pg_hba.conf
@@ -85,8 +85,8 @@
restart_service postgresql
# Create the role if it's not here or else alter it.
- root_roles=$(sudo -u root sudo -u postgres -i psql -t -c "SELECT 'HERE' from pg_roles where rolname='root'")
- if [[ ${root_roles} == *HERE ]];then
+ check_role=$(sudo -u root sudo -u postgres -i psql -t -c "SELECT 'HERE' from pg_roles where rolname='$DATABASE_USER'")
+ if [[ ${check_role} == *HERE ]];then
sudo -u root sudo -u postgres -i psql -c "ALTER ROLE $DATABASE_USER WITH SUPERUSER LOGIN PASSWORD '$DATABASE_PASSWORD'"
else
sudo -u root sudo -u postgres -i psql -c "CREATE ROLE $DATABASE_USER WITH SUPERUSER LOGIN PASSWORD '$DATABASE_PASSWORD'"
@@ -95,6 +95,7 @@
function install_database_postgresql {
echo_summary "Installing postgresql"
+ deprecated "Use of postgresql in devstack is deprecated, and will be removed during the Pike cycle"
local pgpass=$HOME/.pgpass
if [[ ! -e $pgpass ]]; then
cat <<EOF > $pgpass
diff --git a/lib/dlm b/lib/dlm
index e391535..b5ac0f5 100644
--- a/lib/dlm
+++ b/lib/dlm
@@ -91,6 +91,7 @@
# install_dlm() - Collect source and prepare
function install_dlm {
if is_dlm_enabled; then
+ pip_install_gr_extras tooz zookeeper
if is_ubuntu; then
install_package zookeeperd
elif is_fedora; then
diff --git a/lib/dstat b/lib/dstat
index b705948..982b703 100644
--- a/lib/dstat
+++ b/lib/dstat
@@ -21,16 +21,22 @@
# A better kind of sysstat, with the top process per time slice
run_process dstat "$TOP_DIR/tools/dstat.sh $LOGDIR"
- # To enable peakmem_tracker add:
- # enable_service peakmem_tracker
+ # To enable memory_tracker add:
+ # enable_service memory_tracker
# to your localrc
- run_process peakmem_tracker "$TOP_DIR/tools/peakmem_tracker.sh"
+ run_process memory_tracker "$TOP_DIR/tools/memory_tracker.sh" "" "root"
+
+ # remove support for the old name when it's no longer used (sometime in Queens)
+ if is_service_enabled peakmem_tracker; then
+ deprecated "Use of peakmem_tracker in devstack is deprecated, use memory_tracker instead"
+ run_process peakmem_tracker "$TOP_DIR/tools/memory_tracker.sh" "" "root"
+ fi
}
# stop_dstat() stop dstat process
function stop_dstat {
stop_process dstat
- stop_process peakmem_tracker
+ stop_process memory_tracker
}
# Restore xtrace
diff --git a/lib/glance b/lib/glance
index 5259174..23a1cbf 100644
--- a/lib/glance
+++ b/lib/glance
@@ -55,11 +55,9 @@
GLANCE_POLICY_JSON=$GLANCE_CONF_DIR/policy.json
GLANCE_SCHEMA_JSON=$GLANCE_CONF_DIR/schema-image.json
GLANCE_SWIFT_STORE_CONF=$GLANCE_CONF_DIR/glance-swift-store.conf
-GLANCE_GLARE_CONF=$GLANCE_CONF_DIR/glance-glare.conf
-GLANCE_GLARE_PASTE_INI=$GLANCE_CONF_DIR/glance-glare-paste.ini
-GLANCE_V1_ENABLED=${GLANCE_V1_ENABLED:-True}
+GLANCE_V1_ENABLED=${GLANCE_V1_ENABLED:-False}
-if is_ssl_enabled_service "glance" || is_service_enabled tls-proxy; then
+if is_service_enabled tls-proxy; then
GLANCE_SERVICE_PROTOCOL="https"
fi
@@ -72,8 +70,6 @@
GLANCE_SERVICE_PROTOCOL=${GLANCE_SERVICE_PROTOCOL:-$SERVICE_PROTOCOL}
GLANCE_REGISTRY_PORT=${GLANCE_REGISTRY_PORT:-9191}
GLANCE_REGISTRY_PORT_INT=${GLANCE_REGISTRY_PORT_INT:-19191}
-GLANCE_GLARE_PORT=${GLANCE_GLARE_PORT:-9494}
-GLANCE_GLARE_HOSTPORT=${GLANCE_GLARE_HOSTPORT:-$GLANCE_SERVICE_HOST:$GLANCE_GLARE_PORT}
# Functions
# ---------
@@ -98,9 +94,6 @@
sudo install -d -o $STACK_USER $GLANCE_CONF_DIR $GLANCE_METADEF_DIR
# Copy over our glance configurations and update them
- if is_service_enabled g-glare; then
- cp $GLANCE_DIR/etc/glance-glare.conf $GLANCE_GLARE_CONF
- fi
cp $GLANCE_DIR/etc/glance-registry.conf $GLANCE_REGISTRY_CONF
iniset $GLANCE_REGISTRY_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
iniset $GLANCE_REGISTRY_CONF DEFAULT bind_host $GLANCE_SERVICE_LISTEN_ADDRESS
@@ -112,7 +105,7 @@
iniset $GLANCE_REGISTRY_CONF DEFAULT workers "$API_WORKERS"
iniset $GLANCE_REGISTRY_CONF paste_deploy flavor keystone
configure_auth_token_middleware $GLANCE_REGISTRY_CONF glance $GLANCE_AUTH_CACHE_DIR/registry
- iniset $GLANCE_REGISTRY_CONF oslo_messaging_notifications driver messaging
+ iniset $GLANCE_REGISTRY_CONF oslo_messaging_notifications driver messagingv2
iniset_rpc_backend glance $GLANCE_REGISTRY_CONF
iniset $GLANCE_REGISTRY_CONF DEFAULT graceful_shutdown_timeout "$SERVICE_GRACEFUL_SHUTDOWN_TIMEOUT"
@@ -125,7 +118,7 @@
iniset $GLANCE_API_CONF DEFAULT image_cache_dir $GLANCE_CACHE_DIR/
iniset $GLANCE_API_CONF paste_deploy flavor keystone+cachemanagement
configure_auth_token_middleware $GLANCE_API_CONF glance $GLANCE_AUTH_CACHE_DIR/api
- iniset $GLANCE_API_CONF oslo_messaging_notifications driver messaging
+ iniset $GLANCE_API_CONF oslo_messaging_notifications driver messagingv2
iniset_rpc_backend glance $GLANCE_API_CONF
if [ "$VIRT_DRIVER" = 'xenserver' ]; then
iniset $GLANCE_API_CONF DEFAULT container_formats "ami,ari,aki,bare,ovf,tgz"
@@ -143,9 +136,6 @@
# Store specific configs
iniset $GLANCE_API_CONF glance_store filesystem_store_datadir $GLANCE_IMAGE_DIR/
- if is_service_enabled g-glare; then
- iniset $GLANCE_GLARE_CONF glance_store filesystem_store_datadir $GLANCE_IMAGE_DIR/
- fi
iniset $GLANCE_API_CONF DEFAULT registry_host $GLANCE_SERVICE_HOST
iniset $GLANCE_API_CONF DEFAULT workers "$API_WORKERS"
@@ -161,6 +151,9 @@
if is_service_enabled s-proxy; then
iniset $GLANCE_API_CONF glance_store default_store swift
iniset $GLANCE_API_CONF glance_store swift_store_create_container_on_put True
+ if python3_enabled; then
+ iniset $GLANCE_API_CONF glance_store swift_store_auth_insecure True
+ fi
iniset $GLANCE_API_CONF glance_store swift_store_config_file $GLANCE_SWIFT_STORE_CONF
iniset $GLANCE_API_CONF glance_store default_swift_reference ref1
@@ -169,24 +162,14 @@
iniset $GLANCE_SWIFT_STORE_CONF ref1 user $SERVICE_PROJECT_NAME:glance-swift
- # Store the glare in swift if enabled.
- if is_service_enabled g-glare; then
- iniset $GLANCE_GLARE_CONF glance_store default_store swift
- iniset $GLANCE_GLARE_CONF glance_store swift_store_create_container_on_put True
-
- iniset $GLANCE_GLARE_CONF glance_store swift_store_config_file $GLANCE_SWIFT_STORE_CONF
- iniset $GLANCE_GLARE_CONF glance_store default_swift_reference ref1
- iniset $GLANCE_GLARE_CONF glance_store stores "file, http, swift"
- iniset $GLANCE_GLARE_CONF DEFAULT graceful_shutdown_timeout "$SERVICE_GRACEFUL_SHUTDOWN_TIMEOUT"
-
- # commenting is not strictly necessary but it's confusing to have bad values in conf
- inicomment $GLANCE_GLARE_CONF glance_store swift_store_user
- inicomment $GLANCE_GLARE_CONF glance_store swift_store_key
- inicomment $GLANCE_GLARE_CONF glance_store swift_store_auth_address
- fi
-
iniset $GLANCE_SWIFT_STORE_CONF ref1 key $SERVICE_PASSWORD
- iniset $GLANCE_SWIFT_STORE_CONF ref1 auth_address $KEYSTONE_SERVICE_URI/v3
+ if python3_enabled; then
+ # NOTE(dims): Currently glance_store+swift does not support either an insecure flag
+ # or the ability to specify a CACERT, so fall back to an http:// url
+ iniset $GLANCE_SWIFT_STORE_CONF ref1 auth_address ${KEYSTONE_SERVICE_URI/https/http}/v3
+ else
+ iniset $GLANCE_SWIFT_STORE_CONF ref1 auth_address $KEYSTONE_SERVICE_URI/v3
+ fi
iniset $GLANCE_SWIFT_STORE_CONF ref1 auth_version 3
# commenting is not strictly necessary but it's confusing to have bad values in conf
@@ -204,26 +187,13 @@
iniset $GLANCE_REGISTRY_CONF keystone_authtoken identity_uri $KEYSTONE_AUTH_URI
fi
- # Register SSL certificates if provided
- if is_ssl_enabled_service glance; then
- ensure_certificates GLANCE
-
- iniset $GLANCE_API_CONF DEFAULT cert_file "$GLANCE_SSL_CERT"
- iniset $GLANCE_API_CONF DEFAULT key_file "$GLANCE_SSL_KEY"
-
- iniset $GLANCE_REGISTRY_CONF DEFAULT cert_file "$GLANCE_SSL_CERT"
- iniset $GLANCE_REGISTRY_CONF DEFAULT key_file "$GLANCE_SSL_KEY"
- fi
-
- if is_ssl_enabled_service glance || is_service_enabled tls-proxy; then
+ if is_service_enabled tls-proxy; then
iniset $GLANCE_API_CONF DEFAULT registry_client_protocol https
fi
# Format logging
- if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ]; then
- setup_colorized_logging $GLANCE_API_CONF DEFAULT tenant user
- setup_colorized_logging $GLANCE_REGISTRY_CONF DEFAULT tenant user
- fi
+ setup_logging $GLANCE_API_CONF
+ setup_logging $GLANCE_REGISTRY_CONF
cp -p $GLANCE_DIR/etc/glance-registry-paste.ini $GLANCE_REGISTRY_PASTE_INI
@@ -235,7 +205,7 @@
iniset $GLANCE_CACHE_CONF DEFAULT use_syslog $SYSLOG
iniset $GLANCE_CACHE_CONF DEFAULT image_cache_dir $GLANCE_CACHE_DIR/
iniuncomment $GLANCE_CACHE_CONF DEFAULT auth_url
- iniset $GLANCE_CACHE_CONF DEFAULT auth_url $KEYSTONE_AUTH_URI/v2.0
+ iniset $GLANCE_CACHE_CONF DEFAULT auth_url $KEYSTONE_AUTH_URI/v3
iniuncomment $GLANCE_CACHE_CONF DEFAULT auth_tenant_name
iniset $GLANCE_CACHE_CONF DEFAULT admin_tenant_name $SERVICE_PROJECT_NAME
iniuncomment $GLANCE_CACHE_CONF DEFAULT auth_user
@@ -252,36 +222,13 @@
cp -p $GLANCE_DIR/etc/metadefs/*.json $GLANCE_METADEF_DIR
- if is_ssl_enabled_service "cinder" || is_service_enabled tls-proxy; then
+ if is_service_enabled tls-proxy; then
CINDER_SERVICE_HOST=${CINDER_SERVICE_HOST:-$SERVICE_HOST}
CINDER_SERVICE_PORT=${CINDER_SERVICE_PORT:-8776}
iniset $GLANCE_API_CONF DEFAULT cinder_endpoint_template "https://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v1/%(project_id)s"
iniset $GLANCE_CACHE_CONF DEFAULT cinder_endpoint_template "https://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v1/%(project_id)s"
fi
-
- # Configure GLANCE_GLARE (Glance Glare)
- if is_service_enabled g-glare; then
- local dburl
- dburl=`database_connection_url glance`
- setup_colorized_logging $GLANCE_GLARE_CONF DEFAULT tenant user
- iniset $GLANCE_GLARE_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
- iniset $GLANCE_GLARE_CONF DEFAULT bind_host $GLANCE_SERVICE_LISTEN_ADDRESS
- iniset $GLANCE_GLARE_CONF DEFAULT bind_port $GLANCE_GLARE_PORT
- inicomment $GLANCE_GLARE_CONF DEFAULT log_file
- iniset $GLANCE_GLARE_CONF DEFAULT workers "$API_WORKERS"
-
- iniset $GLANCE_GLARE_CONF database connection $dburl
- iniset $GLANCE_GLARE_CONF paste_deploy flavor keystone
- configure_auth_token_middleware $GLANCE_GLARE_CONF glare $GLANCE_AUTH_CACHE_DIR/artifact
- # Register SSL certificates if provided
- if is_ssl_enabled_service glance; then
- ensure_certificates GLANCE
- iniset $GLANCE_GLARE_CONF DEFAULT cert_file "$GLANCE_SSL_CERT"
- iniset $GLANCE_GLARE_CONF DEFAULT key_file "$GLANCE_SSL_KEY"
- fi
- cp $GLANCE_DIR/etc/glance-glare-paste.ini $GLANCE_GLARE_PASTE_INI
- fi
}
# create_glance_accounts() - Set up common required glance accounts
@@ -291,7 +238,6 @@
# SERVICE_PROJECT_NAME glance service
# SERVICE_PROJECT_NAME glance-swift ResellerAdmin (if Swift is enabled)
# SERVICE_PROJECT_NAME glance-search search (if Search is enabled)
-# SERVICE_PROJECT_NAME glare service (if enabled)
function create_glance_accounts {
if is_service_enabled g-api; then
@@ -307,8 +253,6 @@
get_or_create_endpoint \
"image" \
"$REGION_NAME" \
- "$GLANCE_SERVICE_PROTOCOL://$GLANCE_HOSTPORT" \
- "$GLANCE_SERVICE_PROTOCOL://$GLANCE_HOSTPORT" \
"$GLANCE_SERVICE_PROTOCOL://$GLANCE_HOSTPORT"
# Note(frickler): Crude workaround for https://bugs.launchpad.net/glance-store/+bug/1620999
@@ -316,18 +260,6 @@
iniset $GLANCE_SWIFT_STORE_CONF ref1 project_domain_id $service_domain_id
iniset $GLANCE_SWIFT_STORE_CONF ref1 user_domain_id $service_domain_id
fi
-
- # Add glance-glare service and endpoints
- if is_service_enabled g-glare; then
- create_service_user "glare"
- get_or_create_service "glare" "artifact" "Glance Artifact Service"
-
- get_or_create_endpoint "artifact" \
- "$REGION_NAME" \
- "$GLANCE_SERVICE_PROTOCOL://$GLANCE_GLARE_HOSTPORT" \
- "$GLANCE_SERVICE_PROTOCOL://$GLANCE_GLARE_HOSTPORT" \
- "$GLANCE_SERVICE_PROTOCOL://$GLANCE_GLARE_HOSTPORT"
- fi
}
# create_glance_cache_dir() - Part of the init_glance() process
@@ -397,15 +329,6 @@
if ! wait_for_service $SERVICE_TIMEOUT $GLANCE_SERVICE_PROTOCOL://$GLANCE_HOSTPORT; then
die $LINENO "g-api did not start"
fi
-
- #Start g-glare after g-reg/g-api
- if is_service_enabled g-glare; then
- run_process g-glare "$GLANCE_BIN_DIR/glance-glare --config-file=$GLANCE_CONF_DIR/glance-glare.conf"
- echo "Waiting for Glare [g-glare] ($GLANCE_GLARE_HOSTPORT) to start..."
- if ! wait_for_service $SERVICE_TIMEOUT $GLANCE_SERVICE_PROTOCOL://$GLANCE_GLARE_HOSTPORT; then
- die $LINENO " Glare [g-glare] did not start"
- fi
- fi
}
# stop_glance() - Stop running processes
@@ -413,10 +336,6 @@
# Kill the Glance screen windows
stop_process g-api
stop_process g-reg
-
- if is_service_enabled g-glare; then
- stop_process g-glare
- fi
}
# Restore xtrace
diff --git a/lib/heat b/lib/heat
deleted file mode 100644
index 0863128..0000000
--- a/lib/heat
+++ /dev/null
@@ -1,467 +0,0 @@
-#!/bin/bash
-#
-# lib/heat
-# Install and start **Heat** service
-
-# To enable, add the following to localrc
-#
-# ENABLED_SERVICES+=,heat,h-api,h-api-cfn,h-api-cw,h-eng
-
-# Dependencies:
-# (none)
-
-# stack.sh
-# ---------
-# - install_heatclient
-# - install_heat
-# - configure_heatclient
-# - configure_heat
-# - _config_heat_apache_wsgi
-# - init_heat
-# - start_heat
-# - stop_heat
-# - cleanup_heat
-
-# Save trace setting
-_XTRACE_HEAT=$(set +o | grep xtrace)
-set +o xtrace
-
-
-# Defaults
-# --------
-
-# set up default directories
-GITDIR["python-heatclient"]=$DEST/python-heatclient
-
-# Toggle for deploying Heat-API under HTTPD + mod_wsgi
-HEAT_USE_MOD_WSGI=${HEAT_USE_MOD_WSGI:-False}
-
-HEAT_DIR=$DEST/heat
-HEAT_CFNTOOLS_DIR=$DEST/heat-cfntools
-HEAT_TEMPLATES_REPO_DIR=$DEST/heat-templates
-OCC_DIR=$DEST/os-collect-config
-ORC_DIR=$DEST/os-refresh-config
-OAC_DIR=$DEST/os-apply-config
-
-HEAT_PIP_REPO=$DATA_DIR/heat-pip-repo
-HEAT_PIP_REPO_PORT=${HEAT_PIP_REPO_PORT:-8899}
-
-HEAT_AUTH_CACHE_DIR=${HEAT_AUTH_CACHE_DIR:-/var/cache/heat}
-HEAT_STANDALONE=$(trueorfalse False HEAT_STANDALONE)
-HEAT_ENABLE_ADOPT_ABANDON=$(trueorfalse False HEAT_ENABLE_ADOPT_ABANDON)
-HEAT_CONF_DIR=/etc/heat
-HEAT_CONF=$HEAT_CONF_DIR/heat.conf
-HEAT_ENV_DIR=$HEAT_CONF_DIR/environment.d
-HEAT_TEMPLATES_DIR=$HEAT_CONF_DIR/templates
-HEAT_API_HOST=${HEAT_API_HOST:-$HOST_IP}
-HEAT_API_PORT=${HEAT_API_PORT:-8004}
-HEAT_SERVICE_USER=${HEAT_SERVICE_USER:-heat}
-HEAT_TRUSTEE_USER=${HEAT_TRUSTEE_USER:-$HEAT_SERVICE_USER}
-HEAT_TRUSTEE_PASSWORD=${HEAT_TRUSTEE_PASSWORD:-$SERVICE_PASSWORD}
-HEAT_TRUSTEE_DOMAIN=${HEAT_TRUSTEE_DOMAIN:-default}
-
-# Support entry points installation of console scripts
-HEAT_BIN_DIR=$(get_python_exec_prefix)
-
-# other default options
-if [[ "$HEAT_STANDALONE" = "True" ]]; then
- # for standalone, use defaults which require no service user
- HEAT_STACK_DOMAIN=$(trueorfalse False HEAT_STACK_DOMAIN)
- HEAT_DEFERRED_AUTH=${HEAT_DEFERRED_AUTH:-password}
- if [[ ${HEAT_DEFERRED_AUTH} != "password" ]]; then
- # Heat does not support keystone trusts when deployed in
- # standalone mode
- die $LINENO \
- 'HEAT_DEFERRED_AUTH can only be set to "password" when HEAT_STANDALONE is True.'
- fi
-else
- HEAT_STACK_DOMAIN=$(trueorfalse True HEAT_STACK_DOMAIN)
- HEAT_DEFERRED_AUTH=${HEAT_DEFERRED_AUTH:-}
-fi
-HEAT_PLUGIN_DIR=${HEAT_PLUGIN_DIR:-$DATA_DIR/heat/plugins}
-ENABLE_HEAT_PLUGINS=${ENABLE_HEAT_PLUGINS:-}
-
-# Functions
-# ---------
-
-# Test if any Heat services are enabled
-# is_heat_enabled
-function is_heat_enabled {
- [[ ,${ENABLED_SERVICES} =~ ,"h-" ]] && return 0
- return 1
-}
-
-# cleanup_heat() - Remove residual data files, anything left over from previous
-# runs that a clean run would need to clean up
-function cleanup_heat {
- sudo rm -rf $HEAT_AUTH_CACHE_DIR
- sudo rm -rf $HEAT_ENV_DIR
- sudo rm -rf $HEAT_TEMPLATES_DIR
- sudo rm -rf $HEAT_CONF_DIR
-}
-
-# configure_heat() - Set config files, create data dirs, etc
-function configure_heat {
-
- sudo install -d -o $STACK_USER $HEAT_CONF_DIR
- # remove old config files
- rm -f $HEAT_CONF_DIR/heat-*.conf
-
- HEAT_API_CFN_HOST=${HEAT_API_CFN_HOST:-$HOST_IP}
- HEAT_API_CFN_PORT=${HEAT_API_CFN_PORT:-8000}
- HEAT_ENGINE_HOST=${HEAT_ENGINE_HOST:-$SERVICE_HOST}
- HEAT_ENGINE_PORT=${HEAT_ENGINE_PORT:-8001}
- HEAT_API_CW_HOST=${HEAT_API_CW_HOST:-$HOST_IP}
- HEAT_API_CW_PORT=${HEAT_API_CW_PORT:-8003}
- HEAT_API_PASTE_FILE=$HEAT_CONF_DIR/api-paste.ini
- HEAT_POLICY_FILE=$HEAT_CONF_DIR/policy.json
-
- cp $HEAT_DIR/etc/heat/api-paste.ini $HEAT_API_PASTE_FILE
- cp $HEAT_DIR/etc/heat/policy.json $HEAT_POLICY_FILE
-
- # common options
- iniset_rpc_backend heat $HEAT_CONF
- iniset $HEAT_CONF DEFAULT heat_metadata_server_url http://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT
- iniset $HEAT_CONF DEFAULT heat_waitcondition_server_url http://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT/v1/waitcondition
- iniset $HEAT_CONF DEFAULT heat_watch_server_url http://$HEAT_API_CW_HOST:$HEAT_API_CW_PORT
- iniset $HEAT_CONF database connection `database_connection_url heat`
- iniset $HEAT_CONF DEFAULT auth_encryption_key $(generate_hex_string 16)
-
- iniset $HEAT_CONF DEFAULT region_name_for_services "$REGION_NAME"
-
- # logging
- iniset $HEAT_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
- iniset $HEAT_CONF DEFAULT use_syslog $SYSLOG
- if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ] && [ "$HEAT_USE_MOD_WSGI" == "False" ] ; then
- # Add color to logging output
- setup_colorized_logging $HEAT_CONF DEFAULT tenant user
- fi
-
- if [ ! -z "$HEAT_DEFERRED_AUTH" ]; then
- iniset $HEAT_CONF DEFAULT deferred_auth_method $HEAT_DEFERRED_AUTH
- fi
-
- if [ "$HEAT_USE_MOD_WSGI" == "True" ]; then
- _config_heat_apache_wsgi
- fi
-
- if [[ "$HEAT_STANDALONE" = "True" ]]; then
- iniset $HEAT_CONF paste_deploy flavor standalone
- iniset $HEAT_CONF clients_heat url "http://$HEAT_API_HOST:$HEAT_API_PORT/v1/%(tenant_id)s"
- else
- configure_auth_token_middleware $HEAT_CONF heat $HEAT_AUTH_CACHE_DIR
- fi
-
- # If HEAT_DEFERRED_AUTH is unset or explicitly set to trusts, configure
- # the section for the client plugin associated with the trustee
- if [ -z "$HEAT_DEFERRED_AUTH" -o "trusts" == "$HEAT_DEFERRED_AUTH" ]; then
- iniset $HEAT_CONF trustee auth_type password
- iniset $HEAT_CONF trustee auth_url $KEYSTONE_AUTH_URI
- iniset $HEAT_CONF trustee username $HEAT_TRUSTEE_USER
- iniset $HEAT_CONF trustee password $HEAT_TRUSTEE_PASSWORD
- iniset $HEAT_CONF trustee user_domain_id $HEAT_TRUSTEE_DOMAIN
- fi
-
- # clients_keystone
- iniset $HEAT_CONF clients_keystone auth_uri $KEYSTONE_AUTH_URI
-
- # OpenStack API
- iniset $HEAT_CONF heat_api bind_port $HEAT_API_PORT
- iniset $HEAT_CONF heat_api workers "$API_WORKERS"
-
- # Cloudformation API
- iniset $HEAT_CONF heat_api_cfn bind_port $HEAT_API_CFN_PORT
-
- # Cloudwatch API
- iniset $HEAT_CONF heat_api_cloudwatch bind_port $HEAT_API_CW_PORT
-
- if is_ssl_enabled_service "key" || is_service_enabled tls-proxy; then
- iniset $HEAT_CONF clients_keystone ca_file $SSL_BUNDLE_FILE
- fi
-
- if is_ssl_enabled_service "nova" || is_service_enabled tls-proxy; then
- iniset $HEAT_CONF clients_nova ca_file $SSL_BUNDLE_FILE
- fi
-
- if is_ssl_enabled_service "cinder" || is_service_enabled tls-proxy; then
- iniset $HEAT_CONF clients_cinder ca_file $SSL_BUNDLE_FILE
- fi
-
- if [[ "$HEAT_ENABLE_ADOPT_ABANDON" = "True" ]]; then
- iniset $HEAT_CONF DEFAULT enable_stack_adopt true
- iniset $HEAT_CONF DEFAULT enable_stack_abandon true
- fi
-
- iniset $HEAT_CONF cache enabled "True"
- iniset $HEAT_CONF cache backend "dogpile.cache.memory"
-
- sudo install -d -o $STACK_USER $HEAT_ENV_DIR $HEAT_TEMPLATES_DIR
-
- # copy the default environment
- cp $HEAT_DIR/etc/heat/environment.d/* $HEAT_ENV_DIR/
-
- # copy the default templates
- cp $HEAT_DIR/etc/heat/templates/* $HEAT_TEMPLATES_DIR/
-
- # Enable heat plugins.
- # NOTE(nic): The symlink nonsense is necessary because when
- # plugins are installed in "developer mode", the final component
- # of their target directory is always "resources", which confuses
- # Heat's plugin loader into believing that all plugins are named
- # "resources", and therefore are all the same plugin; so it
- # will only load one of them. Linking them all to a common
- # location with unique names avoids that type of collision,
- # while still allowing the plugins to be edited in-tree.
- local err_count=0
-
- if [ -n "$ENABLE_HEAT_PLUGINS" ]; then
- mkdir -p $HEAT_PLUGIN_DIR
- # Clean up cruft from any previous runs
- rm -f $HEAT_PLUGIN_DIR/*
- iniset $HEAT_CONF DEFAULT plugin_dirs $HEAT_PLUGIN_DIR
- fi
-
- for heat_plugin in $ENABLE_HEAT_PLUGINS; do
- if [ -d $HEAT_DIR/contrib/$heat_plugin ]; then
- setup_package $HEAT_DIR/contrib/$heat_plugin -e
- ln -s $HEAT_DIR/contrib/$heat_plugin/$heat_plugin/resources $HEAT_PLUGIN_DIR/$heat_plugin
- else
- : # clear retval on the test so that we can roll up errors
- err $LINENO "Requested Heat plugin(${heat_plugin}) not found."
- err_count=$(($err_count + 1))
- fi
- done
- [ $err_count -eq 0 ] || die $LINENO "$err_count of the requested Heat plugins could not be installed."
-}
-
-# init_heat() - Initialize database
-function init_heat {
-
- # (re)create heat database
- recreate_database heat
-
- $HEAT_BIN_DIR/heat-manage --config-file $HEAT_CONF db_sync
- create_heat_cache_dir
-}
-
-# create_heat_cache_dir() - Part of the init_heat() process
-function create_heat_cache_dir {
- # Create cache dirs
- sudo install -d -o $STACK_USER $HEAT_AUTH_CACHE_DIR
-}
-
-# install_heatclient() - Collect source and prepare
-function install_heatclient {
- if use_library_from_git "python-heatclient"; then
- git_clone_by_name "python-heatclient"
- setup_dev_lib "python-heatclient"
- sudo install -D -m 0644 -o $STACK_USER {${GITDIR["python-heatclient"]}/tools/,/etc/bash_completion.d/}heat.bash_completion
- fi
-}
-
-# install_heat() - Collect source and prepare
-function install_heat {
- git_clone $HEAT_REPO $HEAT_DIR $HEAT_BRANCH
- setup_develop $HEAT_DIR
- if [ "$HEAT_USE_MOD_WSGI" == "True" ]; then
- install_apache_wsgi
- fi
-}
-
-# install_heat_other() - Collect source and prepare
-function install_heat_other {
- git_clone $HEAT_CFNTOOLS_REPO $HEAT_CFNTOOLS_DIR $HEAT_CFNTOOLS_BRANCH
- git_clone $HEAT_TEMPLATES_REPO $HEAT_TEMPLATES_REPO_DIR $HEAT_TEMPLATES_BRANCH
- git_clone $OAC_REPO $OAC_DIR $OAC_BRANCH
- git_clone $OCC_REPO $OCC_DIR $OCC_BRANCH
- git_clone $ORC_REPO $ORC_DIR $ORC_BRANCH
-}
-
-# start_heat() - Start running processes, including screen
-function start_heat {
- run_process h-eng "$HEAT_BIN_DIR/heat-engine --config-file=$HEAT_CONF"
-
- # If the site is not enabled then we are in a grenade scenario
- local enabled_site_file
- enabled_site_file=$(apache_site_config_for heat-api)
- if [ -f ${enabled_site_file} ] && [ "$HEAT_USE_MOD_WSGI" == "True" ]; then
- enable_apache_site heat-api
- enable_apache_site heat-api-cfn
- enable_apache_site heat-api-cloudwatch
- restart_apache_server
- tail_log heat-api /var/log/$APACHE_NAME/heat-api.log
- tail_log heat-api-cfn /var/log/$APACHE_NAME/heat-api-cfn.log
- tail_log heat-api-cloudwatch /var/log/$APACHE_NAME/heat-api-cloudwatch.log
- else
- run_process h-api "$HEAT_BIN_DIR/heat-api --config-file=$HEAT_CONF"
- run_process h-api-cfn "$HEAT_BIN_DIR/heat-api-cfn --config-file=$HEAT_CONF"
- run_process h-api-cw "$HEAT_BIN_DIR/heat-api-cloudwatch --config-file=$HEAT_CONF"
- fi
-}
-
-# stop_heat() - Stop running processes
-function stop_heat {
- # Kill the screen windows
- stop_process h-eng
-
- if [ "$HEAT_USE_MOD_WSGI" == "True" ]; then
- disable_apache_site heat-api
- disable_apache_site heat-api-cfn
- disable_apache_site heat-api-cloudwatch
- restart_apache_server
- else
- local serv
- for serv in h-api h-api-cfn h-api-cw; do
- stop_process $serv
- done
- fi
-
-}
-
-# _cleanup_heat_apache_wsgi() - Remove wsgi files, disable and remove apache vhost file
-function _cleanup_heat_apache_wsgi {
- sudo rm -f $(apache_site_config_for heat-api)
- sudo rm -f $(apache_site_config_for heat-api-cfn)
- sudo rm -f $(apache_site_config_for heat-api-cloudwatch)
-}
-
-# _config_heat_apache_wsgi() - Set WSGI config files of Heat
-function _config_heat_apache_wsgi {
-
- local heat_apache_conf
- heat_apache_conf=$(apache_site_config_for heat-api)
- local heat_cfn_apache_conf
- heat_cfn_apache_conf=$(apache_site_config_for heat-api-cfn)
- local heat_cloudwatch_apache_conf
- heat_cloudwatch_apache_conf=$(apache_site_config_for heat-api-cloudwatch)
- local heat_ssl=""
- local heat_certfile=""
- local heat_keyfile=""
- local heat_api_port=$HEAT_API_PORT
- local heat_cfn_api_port=$HEAT_API_CFN_PORT
- local heat_cw_api_port=$HEAT_API_CW_PORT
- local venv_path=""
-
- sudo cp $FILES/apache-heat-api.template $heat_apache_conf
- sudo sed -e "
- s|%PUBLICPORT%|$heat_api_port|g;
- s|%APACHE_NAME%|$APACHE_NAME|g;
- s|%HEAT_BIN_DIR%|$HEAT_BIN_DIR|g;
- s|%SSLENGINE%|$heat_ssl|g;
- s|%SSLCERTFILE%|$heat_certfile|g;
- s|%SSLKEYFILE%|$heat_keyfile|g;
- s|%USER%|$STACK_USER|g;
- s|%VIRTUALENV%|$venv_path|g
- " -i $heat_apache_conf
-
- sudo cp $FILES/apache-heat-api-cfn.template $heat_cfn_apache_conf
- sudo sed -e "
- s|%PUBLICPORT%|$heat_cfn_api_port|g;
- s|%APACHE_NAME%|$APACHE_NAME|g;
- s|%HEAT_BIN_DIR%|$HEAT_BIN_DIR|g;
- s|%SSLENGINE%|$heat_ssl|g;
- s|%SSLCERTFILE%|$heat_certfile|g;
- s|%SSLKEYFILE%|$heat_keyfile|g;
- s|%USER%|$STACK_USER|g;
- s|%VIRTUALENV%|$venv_path|g
- " -i $heat_cfn_apache_conf
-
- sudo cp $FILES/apache-heat-api-cloudwatch.template $heat_cloudwatch_apache_conf
- sudo sed -e "
- s|%PUBLICPORT%|$heat_cw_api_port|g;
- s|%APACHE_NAME%|$APACHE_NAME|g;
- s|%HEAT_BIN_DIR%|$HEAT_BIN_DIR|g;
- s|%SSLENGINE%|$heat_ssl|g;
- s|%SSLCERTFILE%|$heat_certfile|g;
- s|%SSLKEYFILE%|$heat_keyfile|g;
- s|%USER%|$STACK_USER|g;
- s|%VIRTUALENV%|$venv_path|g
- " -i $heat_cloudwatch_apache_conf
-}
-
-
-# create_heat_accounts() - Set up common required heat accounts
-function create_heat_accounts {
- if [[ "$HEAT_STANDALONE" != "True" ]]; then
-
- create_service_user "heat" "admin"
- get_or_create_service "heat" "orchestration" "Heat Orchestration Service"
- get_or_create_endpoint \
- "orchestration" \
- "$REGION_NAME" \
- "$SERVICE_PROTOCOL://$HEAT_API_HOST:$HEAT_API_PORT/v1/\$(project_id)s" \
- "$SERVICE_PROTOCOL://$HEAT_API_HOST:$HEAT_API_PORT/v1/\$(project_id)s" \
- "$SERVICE_PROTOCOL://$HEAT_API_HOST:$HEAT_API_PORT/v1/\$(project_id)s"
-
- get_or_create_service "heat-cfn" "cloudformation" "Heat CloudFormation Service"
- get_or_create_endpoint \
- "cloudformation" \
- "$REGION_NAME" \
- "$SERVICE_PROTOCOL://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT/v1" \
- "$SERVICE_PROTOCOL://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT/v1" \
- "$SERVICE_PROTOCOL://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT/v1"
-
- # heat_stack_user role is for users created by Heat
- get_or_create_role "heat_stack_user"
- fi
-
- if [[ "$HEAT_STACK_DOMAIN" == "True" ]]; then
- # domain -> heat and user -> heat_domain_admin
- domain_id=$(get_or_create_domain heat 'Owns users and projects created by heat')
- iniset $HEAT_CONF DEFAULT stack_user_domain_id ${domain_id}
- get_or_create_user heat_domain_admin $SERVICE_PASSWORD heat
- get_or_add_user_domain_role admin heat_domain_admin heat
- iniset $HEAT_CONF DEFAULT stack_domain_admin heat_domain_admin
- iniset $HEAT_CONF DEFAULT stack_domain_admin_password $SERVICE_PASSWORD
- fi
-}
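
When the stack domain is enabled, the net effect (a sketch; the domain id is
whatever ``get_or_create_domain`` returns) is a heat.conf along these lines::

    [DEFAULT]
    stack_user_domain_id = <uuid of the heat domain>
    stack_domain_admin = heat_domain_admin
    stack_domain_admin_password = $SERVICE_PASSWORD
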
-
-# build_heat_pip_mirror() - Build a pip mirror containing heat agent projects
-function build_heat_pip_mirror {
- local project_dirs="$OCC_DIR $OAC_DIR $ORC_DIR $HEAT_CFNTOOLS_DIR"
- local projpath proj package
-
- rm -rf $HEAT_PIP_REPO
- mkdir -p $HEAT_PIP_REPO
-
- echo "<html><body>" > $HEAT_PIP_REPO/index.html
- for projpath in $project_dirs; do
- proj=$(basename $projpath)
- mkdir -p $HEAT_PIP_REPO/$proj
- pushd $projpath
- rm -rf dist
- python setup.py sdist
- pushd dist
- package=$(ls *)
- mv $package $HEAT_PIP_REPO/$proj/$package
- popd
-
- echo "<html><body><a href=\"$package\">$package</a></body></html>" > $HEAT_PIP_REPO/$proj/index.html
- echo "<a href=\"$proj\">$proj</a><br/>" >> $HEAT_PIP_REPO/index.html
-
- popd
- done
-
- echo "</body></html>" >> $HEAT_PIP_REPO/index.html
-
- local heat_pip_repo_apache_conf
- heat_pip_repo_apache_conf=$(apache_site_config_for heat_pip_repo)
-
- sudo cp $FILES/apache-heat-pip-repo.template $heat_pip_repo_apache_conf
- sudo sed -e "
- s|%HEAT_PIP_REPO%|$HEAT_PIP_REPO|g;
- s|%HEAT_PIP_REPO_PORT%|$HEAT_PIP_REPO_PORT|g;
- s|%APACHE_NAME%|$APACHE_NAME|g;
- " -i $heat_pip_repo_apache_conf
- enable_apache_site heat_pip_repo
- restart_apache_server
- sudo iptables -I INPUT -d $HOST_IP -p tcp --dport $HEAT_PIP_REPO_PORT -j ACCEPT || true
-}
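
The mirror is a bare "simple index" HTML tree, so pip can install straight
from it. A usage sketch (host and port values are illustrative)::

    pip install --index-url http://$HOST_IP:$HEAT_PIP_REPO_PORT/ heat-cfntools
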
-
-# Restore xtrace
-$_XTRACE_HEAT
-
-# Tell emacs to use shell-script-mode
-## Local variables:
-## mode: shell-script
-## End:
diff --git a/lib/horizon b/lib/horizon
index 5089650..9c7ec00 100644
--- a/lib/horizon
+++ b/lib/horizon
@@ -81,7 +81,7 @@
# Horizon is installed as develop mode, so we can compile here.
# Message catalog compilation is handled by Django admin script,
# so compiling them after the installation avoids Django installation twice.
- (cd $HORIZON_DIR; ./run_tests.sh -N --compilemessages)
+ (cd $HORIZON_DIR; $PYTHON manage.py compilemessages)
# ``local_settings.py`` is used to override horizon default settings.
local local_settings=$HORIZON_DIR/openstack_dashboard/local/local_settings.py
@@ -100,7 +100,7 @@
# note(trebskit): if HOST_IP points at non-localhost ip address, horizon cannot be accessed
# from outside the virtual machine. This fixes is meant primarily for local development
# purpose
- _horizon_config_set $local_settings "" ALLOWED_HOSTS [\"$HOST_IP\"]
+ _horizon_config_set $local_settings "" ALLOWED_HOSTS [\"*\"]
if [ -f $SSL_BUNDLE_FILE ]; then
_horizon_config_set $local_settings "" OPENSTACK_SSL_CACERT \"${SSL_BUNDLE_FILE}\"
@@ -126,9 +126,7 @@
if is_ubuntu; then
disable_apache_site 000-default
sudo touch $horizon_conf
- elif is_fedora; then
- sudo sed '/^Listen/s/^.*$/Listen 0.0.0.0:80/' -i /etc/httpd/conf/httpd.conf
- elif is_suse; then
+ elif is_fedora || is_suse; then
: # nothing to do
else
exit_distro_not_supported "horizon apache configuration"
@@ -164,7 +162,7 @@
git_clone_by_name "django_openstack_auth"
# Compile message catalogs before installation
_prepare_message_catalog_compilation
- (cd $dir; python setup.py compile_catalog)
+ (cd $dir; $PYTHON setup.py compile_catalog)
setup_dev_lib "django_openstack_auth"
fi
# if we aren't using this library from git, then we just let it
diff --git a/lib/keystone b/lib/keystone
index 9a0fdad..a26ef8a 100644
--- a/lib/keystone
+++ b/lib/keystone
@@ -50,25 +50,18 @@
KEYSTONE_CONF_DIR=${KEYSTONE_CONF_DIR:-/etc/keystone}
KEYSTONE_CONF=$KEYSTONE_CONF_DIR/keystone.conf
KEYSTONE_PASTE_INI=${KEYSTONE_PASTE_INI:-$KEYSTONE_CONF_DIR/keystone-paste.ini}
-
-# NOTE(sdague): remove in Newton
-KEYSTONE_CATALOG_BACKEND="sql"
-
-# Toggle for deploying Keystone under HTTPD + mod_wsgi
-# Deprecated in Mitaka, use KEYSTONE_DEPLOY instead.
-KEYSTONE_USE_MOD_WSGI=${KEYSTONE_USE_MOD_WSGI:-${ENABLE_HTTPD_MOD_WSGI_SERVICES}}
+KEYSTONE_PUBLIC_UWSGI_CONF=$KEYSTONE_CONF_DIR/keystone-uwsgi-public.ini
+KEYSTONE_ADMIN_UWSGI_CONF=$KEYSTONE_CONF_DIR/keystone-uwsgi-admin.ini
+KEYSTONE_PUBLIC_UWSGI=$KEYSTONE_BIN_DIR/keystone-wsgi-public
+KEYSTONE_ADMIN_UWSGI=$KEYSTONE_BIN_DIR/keystone-wsgi-admin
# KEYSTONE_DEPLOY defines how keystone is deployed, allowed values:
# - mod_wsgi : Run keystone under Apache HTTPd mod_wsgi
# - uwsgi : Run keystone under uwsgi
-if [ -z "$KEYSTONE_DEPLOY" ]; then
- if [ -z "$KEYSTONE_USE_MOD_WSGI" ]; then
- KEYSTONE_DEPLOY=mod_wsgi
- elif [ "$KEYSTONE_USE_MOD_WSGI" == True ]; then
- KEYSTONE_DEPLOY=mod_wsgi
- else
- KEYSTONE_DEPLOY=uwsgi
- fi
+if [[ "$WSGI_MODE" == "uwsgi" ]]; then
+ KEYSTONE_DEPLOY=uwsgi
+else
+ KEYSTONE_DEPLOY=mod_wsgi
fi
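
Keystone now follows the global ``WSGI_MODE`` switch, so assuming
``WSGI_MODE`` is set in localrc like any other DevStack variable, selecting
uwsgi is a one-line change::

    [[local|localrc]]
    WSGI_MODE=uwsgi
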
# Select the token persistence backend driver
@@ -115,25 +108,24 @@
SERVICE_TENANT_NAME=${SERVICE_PROJECT_NAME:-service}
# if we are running with SSL use https protocols
-if is_ssl_enabled_service "key" || is_service_enabled tls-proxy; then
+if is_service_enabled tls-proxy; then
KEYSTONE_AUTH_PROTOCOL="https"
KEYSTONE_SERVICE_PROTOCOL="https"
fi
-# complete URIs
-if [ "$KEYSTONE_DEPLOY" == "mod_wsgi" ]; then
- # If running in Apache, use path access rather than port.
- KEYSTONE_AUTH_URI=${KEYSTONE_AUTH_PROTOCOL}://${KEYSTONE_AUTH_HOST}/identity_admin
- KEYSTONE_SERVICE_URI=${KEYSTONE_SERVICE_PROTOCOL}://${KEYSTONE_SERVICE_HOST}/identity
-else
- KEYSTONE_AUTH_URI=${KEYSTONE_AUTH_PROTOCOL}://${KEYSTONE_AUTH_HOST}:${KEYSTONE_AUTH_PORT}
- KEYSTONE_SERVICE_URI=${KEYSTONE_SERVICE_PROTOCOL}://${KEYSTONE_SERVICE_HOST}:${KEYSTONE_SERVICE_PORT}
-fi
+KEYSTONE_AUTH_URI=${KEYSTONE_AUTH_PROTOCOL}://${KEYSTONE_AUTH_HOST}/identity_admin
+KEYSTONE_SERVICE_URI=${KEYSTONE_SERVICE_PROTOCOL}://${KEYSTONE_SERVICE_HOST}/identity
# V3 URIs
KEYSTONE_AUTH_URI_V3=$KEYSTONE_AUTH_URI/v3
KEYSTONE_SERVICE_URI_V3=$KEYSTONE_SERVICE_URI/v3
+# Security compliance
+KEYSTONE_SECURITY_COMPLIANCE_ENABLED=${KEYSTONE_SECURITY_COMPLIANCE_ENABLED:-True}
+KEYSTONE_LOCKOUT_FAILURE_ATTEMPTS=${KEYSTONE_LOCKOUT_FAILURE_ATTEMPTS:-2}
+KEYSTONE_LOCKOUT_DURATION=${KEYSTONE_LOCKOUT_DURATION:-5}
+KEYSTONE_UNIQUE_LAST_PASSWORD_COUNT=${KEYSTONE_UNIQUE_LAST_PASSWORD_COUNT:-2}
+
# Functions
# ---------
@@ -148,14 +140,22 @@
# cleanup_keystone() - Remove residual data files, anything left over from previous
# runs that a clean run would need to clean up
function cleanup_keystone {
- disable_apache_site keystone
- sudo rm -f $(apache_site_config_for keystone)
+ if [[ "$WSGI_MODE" == "uwsgi" ]]; then
+ remove_uwsgi_config "$KEYSTONE_PUBLIC_UWSGI_CONF" "$KEYSTONE_PUBLIC_UWSGI"
+ remove_uwsgi_config "$KEYSTONE_ADMIN_UWSGI_CONF" "$KEYSTONE_ADMIN_UWSGI"
+ sudo rm -f $(apache_site_config_for keystone-wsgi-public)
+ sudo rm -f $(apache_site_config_for keystone-wsgi-admin)
+ else
+ disable_apache_site keystone
+ sudo rm -f $(apache_site_config_for keystone)
+ fi
}
# _config_keystone_apache_wsgi() - Set WSGI config files of Keystone
function _config_keystone_apache_wsgi {
local keystone_apache_conf
keystone_apache_conf=$(apache_site_config_for keystone)
+ keystone_ssl_listen="#"
local keystone_ssl=""
local keystone_certfile=""
local keystone_keyfile=""
@@ -163,11 +163,6 @@
local keystone_auth_port=$KEYSTONE_AUTH_PORT
local venv_path=""
- if is_ssl_enabled_service key; then
- keystone_ssl="SSLEngine On"
- keystone_certfile="SSLCertificateFile $KEYSTONE_SSL_CERT"
- keystone_keyfile="SSLCertificateKeyFile $KEYSTONE_SSL_KEY"
- fi
if is_service_enabled tls-proxy; then
keystone_service_port=$KEYSTONE_SERVICE_PORT_INT
keystone_auth_port=$KEYSTONE_AUTH_PORT_INT
@@ -181,6 +176,7 @@
s|%PUBLICPORT%|$keystone_service_port|g;
s|%ADMINPORT%|$keystone_auth_port|g;
s|%APACHE_NAME%|$APACHE_NAME|g;
+ s|%SSLLISTEN%|$keystone_ssl_listen|g;
s|%SSLENGINE%|$keystone_ssl|g;
s|%SSLCERTFILE%|$keystone_certfile|g;
s|%SSLKEYFILE%|$keystone_keyfile|g;
@@ -196,7 +192,6 @@
if [[ "$KEYSTONE_CONF_DIR" != "$KEYSTONE_DIR/etc" ]]; then
install -m 600 $KEYSTONE_DIR/etc/keystone.conf.sample $KEYSTONE_CONF
- cp -p $KEYSTONE_DIR/etc/policy.json $KEYSTONE_CONF_DIR
if [[ -f "$KEYSTONE_DIR/etc/keystone-paste.ini" ]]; then
cp -p "$KEYSTONE_DIR/etc/keystone-paste.ini" "$KEYSTONE_PASTE_INI"
fi
@@ -238,11 +233,6 @@
iniset_rpc_backend keystone $KEYSTONE_CONF
- # Register SSL certificates if provided
- if is_ssl_enabled_service key; then
- ensure_certificates KEYSTONE
- fi
-
local service_port=$KEYSTONE_SERVICE_PORT
local auth_port=$KEYSTONE_AUTH_PORT
@@ -258,10 +248,8 @@
# work when you want to use a different port (in the case of proxy), or you
# don't want the port (in the case of putting keystone on a path in
# apache).
- if is_service_enabled tls-proxy || [ "$KEYSTONE_DEPLOY" == "mod_wsgi" ]; then
- iniset $KEYSTONE_CONF DEFAULT public_endpoint $KEYSTONE_SERVICE_URI
- iniset $KEYSTONE_CONF DEFAULT admin_endpoint $KEYSTONE_AUTH_URI
- fi
+ iniset $KEYSTONE_CONF DEFAULT public_endpoint $KEYSTONE_SERVICE_URI
+ iniset $KEYSTONE_CONF DEFAULT admin_endpoint $KEYSTONE_AUTH_URI
if [[ "$KEYSTONE_TOKEN_FORMAT" != "" ]]; then
iniset $KEYSTONE_CONF token provider $KEYSTONE_TOKEN_FORMAT
@@ -278,7 +266,7 @@
# Format logging
if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ] && [ "$KEYSTONE_DEPLOY" != "mod_wsgi" ] ; then
- setup_colorized_logging $KEYSTONE_CONF DEFAULT
+ setup_colorized_logging $KEYSTONE_CONF
fi
iniset $KEYSTONE_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
@@ -287,45 +275,8 @@
iniset $KEYSTONE_CONF DEFAULT logging_exception_prefix "%(asctime)s.%(msecs)03d %(process)d TRACE %(name)s %(instance)s"
_config_keystone_apache_wsgi
else # uwsgi
- # iniset creates these files when it's called if they don't exist.
- KEYSTONE_PUBLIC_UWSGI_FILE=$KEYSTONE_CONF_DIR/keystone-uwsgi-public.ini
- KEYSTONE_ADMIN_UWSGI_FILE=$KEYSTONE_CONF_DIR/keystone-uwsgi-admin.ini
-
- rm -f "$KEYSTONE_PUBLIC_UWSGI_FILE"
- rm -f "$KEYSTONE_ADMIN_UWSGI_FILE"
-
- if is_ssl_enabled_service key; then
- iniset "$KEYSTONE_PUBLIC_UWSGI_FILE" uwsgi https $KEYSTONE_SERVICE_HOST:$service_port,$KEYSTONE_SSL_CERT,$KEYSTONE_SSL_KEY
- iniset "$KEYSTONE_ADMIN_UWSGI_FILE" uwsgi https $KEYSTONE_ADMIN_BIND_HOST:$auth_port,$KEYSTONE_SSL_CERT,$KEYSTONE_SSL_KEY
- else
- iniset "$KEYSTONE_PUBLIC_UWSGI_FILE" uwsgi http $KEYSTONE_SERVICE_HOST:$service_port
- iniset "$KEYSTONE_ADMIN_UWSGI_FILE" uwsgi http $KEYSTONE_ADMIN_BIND_HOST:$auth_port
- fi
-
- iniset "$KEYSTONE_PUBLIC_UWSGI_FILE" uwsgi wsgi-file "$KEYSTONE_BIN_DIR/keystone-wsgi-public"
- iniset "$KEYSTONE_PUBLIC_UWSGI_FILE" uwsgi processes $(nproc)
-
- iniset "$KEYSTONE_ADMIN_UWSGI_FILE" uwsgi wsgi-file "$KEYSTONE_BIN_DIR/keystone-wsgi-admin"
- iniset "$KEYSTONE_ADMIN_UWSGI_FILE" uwsgi processes $API_WORKERS
-
- # Common settings
- for file in "$KEYSTONE_PUBLIC_UWSGI_FILE" "$KEYSTONE_ADMIN_UWSGI_FILE"; do
- # This is running standalone
- iniset "$file" uwsgi master true
- # Set die-on-term & exit-on-reload so that uwsgi shuts down
- iniset "$file" uwsgi die-on-term true
- iniset "$file" uwsgi exit-on-reload true
- iniset "$file" uwsgi enable-threads true
- iniset "$file" uwsgi plugins python
- # uwsgi recommends this to prevent thundering herd on accept.
- iniset "$file" uwsgi thunder-lock true
- # Override the default size for headers from the 4k default.
- iniset "$file" uwsgi buffer-size 65535
- # Make sure the client doesn't try to re-use the connection.
- iniset "$file" uwsgi add-header "Connection: close"
- # This ensures that file descriptors aren't shared between processes.
- iniset "$file" uwsgi lazy-apps true
- done
+ write_uwsgi_config "$KEYSTONE_PUBLIC_UWSGI_CONF" "$KEYSTONE_PUBLIC_UWSGI" "/identity"
+ write_uwsgi_config "$KEYSTONE_ADMIN_UWSGI_CONF" "$KEYSTONE_ADMIN_UWSGI" "/identity_admin"
fi
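
The inline uwsgi settings removed above are replaced by the shared
``write_uwsgi_config`` helper. Judging from the removed block, the generated
INI presumably carries settings along these lines (a sketch, not the helper's
exact output)::

    [uwsgi]
    wsgi-file = /usr/local/bin/keystone-wsgi-public
    master = true
    die-on-term = true
    enable-threads = true
    buffer-size = 65535
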
iniset $KEYSTONE_CONF DEFAULT max_token_size 16384
@@ -339,6 +290,12 @@
# allows policy changes in order to clarify the adminess scope.
#iniset $KEYSTONE_CONF resource admin_project_domain_name Default
#iniset $KEYSTONE_CONF resource admin_project_name admin
+
+ if [[ "$KEYSTONE_SECURITY_COMPLIANCE_ENABLED" = True ]]; then
+ iniset $KEYSTONE_CONF security_compliance lockout_failure_attempts $KEYSTONE_LOCKOUT_FAILURE_ATTEMPTS
+ iniset $KEYSTONE_CONF security_compliance lockout_duration $KEYSTONE_LOCKOUT_DURATION
+ iniset $KEYSTONE_CONF security_compliance unique_last_password_count $KEYSTONE_UNIQUE_LAST_PASSWORD_COUNT
+ fi
}
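
With the defaults declared earlier, the resulting keystone.conf section
would read::

    [security_compliance]
    lockout_failure_attempts = 2
    lockout_duration = 5
    unique_last_password_count = 2
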
# create_keystone_accounts() - Sets up common required keystone accounts
@@ -372,8 +329,7 @@
admin_project=$(openstack project show "admin" -f value -c id)
local admin_user
admin_user=$(openstack user show "admin" -f value -c id)
- local admin_role
- admin_role=$(openstack role show "admin" -f value -c id)
+ local admin_role="admin"
get_or_add_user_domain_role $admin_role $admin_user default
@@ -391,13 +347,20 @@
get_or_create_role ResellerAdmin
# The Member role is used by Horizon and Swift so we need to keep it:
- local member_role
- member_role=$(get_or_create_role "Member")
+ local member_role="member"
+
+ # The capitalized Member role is legacy hard coded in Horizon / Swift
+ # configs. Keep it around.
+ get_or_create_role "Member"
+
+ # The rest of the roles listed below should work by symbolic name.
+ get_or_create_role $member_role
# another_role demonstrates that an arbitrary role may be created and used
# TODO(sleepsonthefloor): show how this can be used for rbac in the future!
- local another_role
- another_role=$(get_or_create_role "anotherrole")
+ local another_role="anotherrole"
+ get_or_create_role $another_role
# invisible project - admin can't see this one
local invis_project
@@ -445,14 +408,16 @@
#
# create_service_user <name> [role]
#
-# The role defaults to the service role. It is allowed to be provided as optional as historically
+# We always add the service role; an additional role may also be passed, as historically
# a lot of projects have configured themselves with the admin or other role here if they are
# using this user for other purposes beyond simply auth_token middleware.
function create_service_user {
- local role=${2:-service}
-
get_or_create_user "$1" "$SERVICE_PASSWORD" "$SERVICE_DOMAIN_NAME"
- get_or_add_user_project_role "$role" "$1" "$SERVICE_PROJECT_NAME" "$SERVICE_DOMAIN_NAME" "$SERVICE_DOMAIN_NAME"
+ get_or_add_user_project_role service "$1" "$SERVICE_PROJECT_NAME" "$SERVICE_DOMAIN_NAME" "$SERVICE_DOMAIN_NAME"
+
+ if [[ -n "$2" ]]; then
+ get_or_add_user_project_role "$2" "$1" "$SERVICE_PROJECT_NAME" "$SERVICE_DOMAIN_NAME" "$SERVICE_DOMAIN_NAME"
+ fi
}
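
Call sites keep working unchanged; the service role is now unconditional and
the optional second argument adds an extra role (``glance`` below is just an
illustrative name; the heat call appears earlier in this change)::

    create_service_user "glance"           # service role only
    create_service_user "heat" "admin"     # service role plus admin
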
# Configure the service to use the auth token middleware.
@@ -488,8 +453,10 @@
init_ldap
fi
- # (Re)create keystone database
- recreate_database keystone
+ if [[ "$RECREATE_KEYSTONE_DB" == True ]]; then
+ # (Re)create keystone database
+ recreate_database keystone
+ fi
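
``RECREATE_KEYSTONE_DB`` presumably exists so jobs that must keep existing
identity data (upgrade testing, for instance) can skip the drop/create::

    [[local|localrc]]
    RECREATE_KEYSTONE_DB=False
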
# Initialize keystone database
$KEYSTONE_BIN_DIR/keystone-manage --config-file $KEYSTONE_CONF db_sync
@@ -556,9 +523,6 @@
if [ "$KEYSTONE_DEPLOY" == "mod_wsgi" ]; then
install_apache_wsgi
- if is_ssl_enabled_service "key"; then
- enable_mod_ssl
- fi
elif [ "$KEYSTONE_DEPLOY" == "uwsgi" ]; then
pip_install uwsgi
fi
@@ -580,8 +544,11 @@
tail_log key /var/log/$APACHE_NAME/keystone.log
tail_log key-access /var/log/$APACHE_NAME/keystone_access.log
else # uwsgi
- run_process key "$KEYSTONE_BIN_DIR/uwsgi $KEYSTONE_PUBLIC_UWSGI_FILE" "" "key-p"
- run_process key "$KEYSTONE_BIN_DIR/uwsgi $KEYSTONE_ADMIN_UWSGI_FILE" "" "key-a"
+ # TODO(sdague): we should really get down to a single keystone here
+ enable_service key-p
+ enable_service key-a
+ run_process key-p "$KEYSTONE_BIN_DIR/uwsgi --ini $KEYSTONE_PUBLIC_UWSGI_CONF" ""
+ run_process key-a "$KEYSTONE_BIN_DIR/uwsgi --ini $KEYSTONE_ADMIN_UWSGI_CONF" ""
fi
echo "Waiting for keystone to start..."
@@ -590,10 +557,7 @@
# unencrypted traffic at this point.
# If running in Apache, use the path rather than port.
- local service_uri=$auth_protocol://$KEYSTONE_SERVICE_HOST:$service_port/v$IDENTITY_API_VERSION/
- if [ "$KEYSTONE_DEPLOY" == "mod_wsgi" ]; then
- service_uri=$auth_protocol://$KEYSTONE_SERVICE_HOST/identity/v$IDENTITY_API_VERSION/
- fi
+ local service_uri=$auth_protocol://$KEYSTONE_SERVICE_HOST/identity/v$IDENTITY_API_VERSION/
if ! wait_for_service $SERVICE_TIMEOUT $service_uri; then
die $LINENO "keystone did not start"
@@ -614,6 +578,11 @@
if [ "$KEYSTONE_DEPLOY" == "mod_wsgi" ]; then
disable_apache_site keystone
restart_apache_server
+ else
+ stop_process key-p
+ stop_process key-a
+ remove_uwsgi_config "$KEYSTONE_PUBLIC_UWSGI_CONF" "$KEYSTONE_PUBLIC_UWSGI"
+ remove_uwsgi_config "$KEYSTONE_ADMIN_UWSGI_CONF" "$KEYSTONE_ADMIN_UWSGI"
fi
# Kill the Keystone screen window
stop_process key
@@ -638,8 +607,7 @@
--bootstrap-service-name keystone \
--bootstrap-region-id "$REGION_NAME" \
--bootstrap-admin-url "$KEYSTONE_AUTH_URI" \
- --bootstrap-public-url "$KEYSTONE_SERVICE_URI" \
- --bootstrap-internal-url "$KEYSTONE_SERVICE_URI"
+ --bootstrap-public-url "$KEYSTONE_SERVICE_URI"
}
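
After bootstrap, keystone is addressed by path rather than port; with the
URI defaults above, the catalog would contain something like (the host shown
is illustrative)::

    openstack endpoint list --service keystone
    # public -> http://10.0.0.10/identity
    # admin  -> http://10.0.0.10/identity_admin
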
# Restore xtrace
diff --git a/lib/lvm b/lib/lvm
index d35a76f..0cebd92 100644
--- a/lib/lvm
+++ b/lib/lvm
@@ -23,11 +23,7 @@
# Defaults
# --------
# Name of the lvm volume groups to use/create for iscsi volumes
-# This monkey-motion is for compatibility with icehouse-generation Grenade
-# If ``VOLUME_GROUP`` is set, use it, otherwise we'll build a VG name based
-# on ``VOLUME_GROUP_NAME`` that includes the backend name
-# Grenade doesn't use ``VOLUME_GROUP2`` so it is left out
-VOLUME_GROUP_NAME=${VOLUME_GROUP:-${VOLUME_GROUP_NAME:-stack-volumes}}
+VOLUME_GROUP_NAME=${VOLUME_GROUP_NAME:-stack-volumes}
DEFAULT_VOLUME_GROUP_NAME=$VOLUME_GROUP_NAME-default
# Backing file name is of the form $VOLUME_GROUP$BACKING_FILE_SUFFIX
@@ -105,7 +101,7 @@
# init_lvm_volume_group() initializes the volume group creating the backing
# file if necessary
#
-# Usage: init_lvm_volume_group() $vg
+# Usage: init_lvm_volume_group() $vg $size
function init_lvm_volume_group {
local vg=$1
local size=$2
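
A call sketch matching the corrected usage comment (the size argument is
illustrative)::

    init_lvm_volume_group $DEFAULT_VOLUME_GROUP_NAME 10G
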
diff --git a/lib/neutron b/lib/neutron
index 415344e..492a0ee 100644
--- a/lib/neutron
+++ b/lib/neutron
@@ -52,12 +52,16 @@
NEUTRON_CORE_PLUGIN_CONF_PATH=$NEUTRON_CONF_DIR/plugins/$NEUTRON_CORE_PLUGIN
NEUTRON_CORE_PLUGIN_CONF=$NEUTRON_CORE_PLUGIN_CONF_PATH/$NEUTRON_CORE_PLUGIN_CONF_FILENAME
+NEUTRON_METERING_AGENT_CONF_FILENAME=${NEUTRON_METERING_AGENT_CONF_FILENAME:-metering_agent.ini}
+NEUTRON_METERING_AGENT_CONF=$NEUTRON_CONF_DIR/$NEUTRON_METERING_AGENT_CONF_FILENAME
+
NEUTRON_AGENT_BINARY=${NEUTRON_AGENT_BINARY:-neutron-$NEUTRON_AGENT-agent}
NEUTRON_L3_BINARY=${NEUTRON_L3_BINARY:-neutron-l3-agent}
NEUTRON_META_BINARY=${NEUTRON_META_BINARY:-neutron-metadata-agent}
+NEUTRON_METERING_BINARY=${NEUTRON_METERING_BINARY:-neutron-metering-agent}
# Public facing bits
-if is_ssl_enabled_service "neutron" || is_service_enabled tls-proxy; then
+if is_service_enabled tls-proxy; then
NEUTRON_SERVICE_PROTOCOL="https"
fi
NEUTRON_SERVICE_HOST=${NEUTRON_SERVICE_HOST:-$SERVICE_HOST}
@@ -70,8 +74,16 @@
NEUTRON_ROOTWRAP_CONF_FILE=$NEUTRON_CONF_DIR/rootwrap.conf
NEUTRON_ROOTWRAP_DAEMON_CMD="sudo $NEUTRON_ROOTWRAP-daemon $NEUTRON_ROOTWRAP_CONF_FILE"
-# Add all enabled config files to a single config arg
-NEUTRON_CONFIG_ARG=${NEUTRON_CONFIG_ARG:-""}
+# This is needed because _neutron_ovs_base_configure_l3_agent will set
+# external_network_bridge
+Q_USE_PROVIDERNET_FOR_PUBLIC=${Q_USE_PROVIDERNET_FOR_PUBLIC:-True}
+# This is needed because _neutron_ovs_base_configure_l3_agent uses it to create
+# an external network bridge
+PUBLIC_BRIDGE=${PUBLIC_BRIDGE:-br-ex}
+PUBLIC_BRIDGE_MTU=${PUBLIC_BRIDGE_MTU:-1500}
+
+# Additional neutron api config files
+declare -a -g _NEUTRON_SERVER_EXTRA_CONF_FILES_ABS
# Functions
# ---------
@@ -90,6 +102,10 @@
return 1
}
+if is_neutron_legacy_enabled; then
+ source $TOP_DIR/lib/neutron-legacy
+fi
+
# cleanup_neutron() - Remove residual data files, anything left over from previous
# runs that a clean run would need to clean up
function cleanup_neutron_new {
@@ -146,24 +162,16 @@
iniset $NEUTRON_CONF DEFAULT auth_strategy $NEUTRON_AUTH_STRATEGY
configure_auth_token_middleware $NEUTRON_CONF neutron $NEUTRON_AUTH_CACHE_DIR keystone_authtoken
-
- iniset $NEUTRON_CONF nova auth_type password
- iniset $NEUTRON_CONF nova auth_url "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_AUTH_PORT/v3"
- iniset $NEUTRON_CONF nova username nova
- iniset $NEUTRON_CONF nova password $SERVICE_PASSWORD
- iniset $NEUTRON_CONF nova user_domain_id default
- iniset $NEUTRON_CONF nova project_name $SERVICE_TENANT_NAME
- iniset $NEUTRON_CONF nova project_domain_id default
- iniset $NEUTRON_CONF nova region_name $REGION_NAME
+ configure_auth_token_middleware $NEUTRON_CONF nova $NEUTRON_AUTH_CACHE_DIR nova
# Configure VXLAN
# TODO(sc68cal) not hardcode?
iniset $NEUTRON_CORE_PLUGIN_CONF ml2 tenant_network_types vxlan
- iniset $NEUTRON_CORE_PLUGIN_CONF ml2 type_drivers vxlan
iniset $NEUTRON_CORE_PLUGIN_CONF ml2 mechanism_drivers openvswitch,linuxbridge
iniset $NEUTRON_CORE_PLUGIN_CONF ml2_type_vxlan vni_ranges 1001:2000
+ iniset $NEUTRON_CORE_PLUGIN_CONF ml2_type_flat flat_networks public
if [[ "$NEUTRON_PORT_SECURITY" = "True" ]]; then
- iniset $NEUTRON_CORE_PLUGIN_CONF ml2 extension_drivers port_security
+ neutron_ml2_extension_driver_add port_security
fi
fi
@@ -174,14 +182,16 @@
# Configure the neutron agent
if [[ $NEUTRON_AGENT == "linuxbridge" ]]; then
- iniset $NEUTRON_CORE_PLUGIN_CONF securitygroup iptables
+ iniset $NEUTRON_CORE_PLUGIN_CONF securitygroup firewall_driver iptables
iniset $NEUTRON_CORE_PLUGIN_CONF vxlan local_ip $HOST_IP
else
- iniset $NEUTRON_CORE_PLUGIN_CONF securitygroup iptables_hybrid
+ iniset $NEUTRON_CORE_PLUGIN_CONF securitygroup firewall_driver iptables_hybrid
iniset $NEUTRON_CORE_PLUGIN_CONF ovs local_ip $HOST_IP
fi
- enable_kernel_bridge_firewall
+ if ! running_in_container; then
+ enable_kernel_bridge_firewall
+ fi
fi
# DHCP Agent
@@ -200,7 +210,7 @@
if is_service_enabled neutron-l3; then
cp $NEUTRON_DIR/etc/l3_agent.ini.sample $NEUTRON_L3_CONF
iniset $NEUTRON_L3_CONF DEFAULT interface_driver $NEUTRON_AGENT
- iniset $NEUTRON_CONF DEFAULT service_plugins router
+ neutron_service_plugin_class_add router
iniset $NEUTRON_L3_CONF agent root_helper_daemon "$NEUTRON_ROOTWRAP_DAEMON_CMD"
iniset $NEUTRON_L3_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
neutron_plugin_configure_l3_agent $NEUTRON_L3_CONF
@@ -212,6 +222,7 @@
iniset $NEUTRON_META_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
iniset $NEUTRON_META_CONF DEFAULT nova_metadata_ip $SERVICE_HOST
+ iniset $NEUTRON_META_CONF DEFAULT metadata_workers $API_WORKERS
iniset $NEUTRON_META_CONF agent root_helper_daemon "$NEUTRON_ROOTWRAP_DAEMON_CMD"
# TODO(dtroyer): remove the v2.0 hard code below
@@ -232,31 +243,11 @@
iniset $NEUTRON_CONF DEFAULT bind_port "$NEUTRON_SERVICE_PORT_INT"
fi
- if is_ssl_enabled_service "nova"; then
- iniset $NEUTRON_CONF nova cafile $SSL_BUNDLE_FILE
- fi
-
- if is_ssl_enabled_service "neutron"; then
- ensure_certificates NEUTRON
-
- iniset $NEUTRON_CONF DEFAULT use_ssl True
- iniset $NEUTRON_CONF DEFAULT ssl_cert_file "$NEUTRON_SSL_CERT"
- iniset $NEUTRON_CONF DEFAULT ssl_key_file "$NEUTRON_SSL_KEY"
- fi
-
# Metering
if is_service_enabled neutron-metering; then
- source $TOP_DIR/lib/neutron_plugins/services/metering
- neutron_agent_metering_configure_common
- neutron_agent_metering_configure_agent
- # TODO(sc68cal) hack because we don't pass around
- # $Q_SERVICE_PLUGIN_CLASSES like -legacy does
- local plugins=""
- plugins=$(iniget $NEUTRON_CONF DEFAULT service_plugins)
- plugins+=",metering"
- iniset $NEUTRON_CONF DEFAULT service_plugins $plugins
+ cp $NEUTRON_DIR/etc/metering_agent.ini.sample $NEUTRON_METERING_AGENT_CONF
+ neutron_service_plugin_class_add metering
fi
-
}
# configure_neutron_rootwrap() - configure Neutron's rootwrap
@@ -293,7 +284,7 @@
function configure_neutron_nova_new {
iniset $NOVA_CONF DEFAULT use_neutron True
iniset $NOVA_CONF neutron auth_type "password"
- iniset $NOVA_CONF neutron auth_url "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_AUTH_PORT/v3"
+ iniset $NOVA_CONF neutron auth_url "$KEYSTONE_SERVICE_URI/v3"
iniset $NOVA_CONF neutron username neutron
iniset $NOVA_CONF neutron password "$SERVICE_PASSWORD"
iniset $NOVA_CONF neutron user_domain_name "Default"
@@ -328,8 +319,6 @@
"network" "Neutron Service")
get_or_create_endpoint $neutron_service \
"$REGION_NAME" \
- "$NEUTRON_SERVICE_PROTOCOL://$NEUTRON_SERVICE_HOST:$NEUTRON_SERVICE_PORT/" \
- "$NEUTRON_SERVICE_PROTOCOL://$NEUTRON_SERVICE_HOST:$NEUTRON_SERVICE_PORT/" \
"$NEUTRON_SERVICE_PROTOCOL://$NEUTRON_SERVICE_HOST:$NEUTRON_SERVICE_PORT/"
fi
}
@@ -347,7 +336,7 @@
recreate_database neutron
# Run Neutron db migrations
- $NEUTRON_BIN_DIR/neutron-db-manage $NEUTRON_CONFIG_ARG upgrade heads
+ $NEUTRON_BIN_DIR/neutron-db-manage upgrade heads
create_neutron_cache_dir
}
@@ -395,21 +384,22 @@
service_protocol="http"
fi
+ local opts=""
+ opts+=" --config-file $NEUTRON_CONF"
+ opts+=" --config-file $NEUTRON_CORE_PLUGIN_CONF"
+ local cfg_file
+ for cfg_file in ${_NEUTRON_SERVER_EXTRA_CONF_FILES_ABS[@]}; do
+ opts+=" --config-file $cfg_file"
+ done
+
# Start the Neutron service
# TODO(sc68cal) Stop hard coding this
- run_process neutron-api "$NEUTRON_BIN_DIR/neutron-server --config-file $NEUTRON_CONF --config-file $NEUTRON_CORE_PLUGIN_CONF"
+ run_process neutron-api "$NEUTRON_BIN_DIR/neutron-server $opts"
- if is_ssl_enabled_service "neutron"; then
- ssl_ca="--ca-certificate=${SSL_BUNDLE_FILE}"
- local testcmd="wget ${ssl_ca} --no-proxy -q -O- $service_protocol://$NEUTRON_SERVICE_HOST:$service_port"
- test_with_retry "$testcmd" "Neutron did not start" $SERVICE_TIMEOUT
- else
- if ! wait_for_service $SERVICE_TIMEOUT $service_protocol://$NEUTRON_SERVICE_HOST:$service_port; then
- die $LINENO "neutron-api did not start"
- fi
+ if ! wait_for_service $SERVICE_TIMEOUT $service_protocol://$NEUTRON_SERVICE_HOST:$service_port; then
+ die $LINENO "neutron-api did not start"
fi
-
# Start proxy if enabled
if is_service_enabled tls-proxy; then
start_tls_proxy neutron '*' $NEUTRON_SERVICE_PORT $NEUTRON_SERVICE_HOST $NEUTRON_SERVICE_PORT_INT
@@ -418,37 +408,38 @@
# start_neutron() - Start running processes, including screen
function start_neutron_new {
- _set_config_files
-
# Start up the neutron agents if enabled
# TODO(sc68cal) Make this pluggable so different DevStack plugins for different Neutron plugins
# can resolve the $NEUTRON_AGENT_BINARY
if is_service_enabled neutron-agent; then
- run_process neutron-agent "$NEUTRON_BIN_DIR/$NEUTRON_AGENT_BINARY $NEUTRON_CONFIG_ARG"
+ # TODO(ihrachys) stop loading ml2_conf.ini into agents, instead load agent specific files
+ run_process neutron-agent "$NEUTRON_BIN_DIR/$NEUTRON_AGENT_BINARY --config-file $NEUTRON_CONF --config-file $NEUTRON_CORE_PLUGIN_CONF"
fi
if is_service_enabled neutron-dhcp; then
neutron_plugin_configure_dhcp_agent $NEUTRON_DHCP_CONF
- run_process neutron-dhcp "$NEUTRON_BIN_DIR/$NEUTRON_DHCP_BINARY $NEUTRON_CONFIG_ARG"
+ run_process neutron-dhcp "$NEUTRON_BIN_DIR/$NEUTRON_DHCP_BINARY --config-file $NEUTRON_CONF --config-file $NEUTRON_DHCP_CONF"
fi
if is_service_enabled neutron-l3; then
- run_process neutron-l3 "$NEUTRON_BIN_DIR/$NEUTRON_L3_BINARY $NEUTRON_CONFIG_ARG"
+ run_process neutron-l3 "$NEUTRON_BIN_DIR/$NEUTRON_L3_BINARY --config-file $NEUTRON_CONF --config-file $NEUTRON_L3_CONF"
fi
- # XXX(sc68cal) - Here's where plugins can wire up their own networks instead
- # of the code in lib/neutron_plugins/services/l3
- if type -p neutron_plugin_create_initial_networks > /dev/null; then
- neutron_plugin_create_initial_networks
- else
- # XXX(sc68cal) Load up the built in Neutron networking code and build a topology
- source $TOP_DIR/lib/neutron_plugins/services/l3
- # Create the networks using servic
- create_neutron_initial_network
+ if is_service_enabled neutron-api; then
+ # XXX(sc68cal) - Here's where plugins can wire up their own networks instead
+ # of the code in lib/neutron_plugins/services/l3
+ if type -p neutron_plugin_create_initial_networks > /dev/null; then
+ neutron_plugin_create_initial_networks
+ else
+ # XXX(sc68cal) Load up the built in Neutron networking code and build a topology
+ source $TOP_DIR/lib/neutron_plugins/services/l3
+ # Create the networks using the services/l3 code
+ create_neutron_initial_network
+ fi
fi
if is_service_enabled neutron-metadata-agent; then
- run_process neutron-metadata-agent "$NEUTRON_BIN_DIR/$NEUTRON_META_BINARY $NEUTRON_CONFIG_ARG"
+ run_process neutron-metadata-agent "$NEUTRON_BIN_DIR/$NEUTRON_META_BINARY --config-file $NEUTRON_CONF --config-file $NEUTRON_META_CONF"
fi
if is_service_enabled neutron-metering; then
- run_process neutron-metering "$AGENT_METERING_BINARY --config-file $NEUTRON_CONF --config-file $METERING_AGENT_CONF_FILENAME"
+ run_process neutron-metering "$NEUTRON_METERING_BINARY --config-file $NEUTRON_CONF --config-file $NEUTRON_METERING_AGENT_CONF"
fi
}
@@ -470,28 +461,33 @@
fi
}
-# Compile the lost of enabled config files
-function _set_config_files {
+# neutron_service_plugin_class_add() - add service plugin class
+function neutron_service_plugin_class_add_new {
+ local service_plugin_class=$1
+ local plugins=""
- NEUTRON_CONFIG_ARG+=" --config-file $NEUTRON_CONF"
-
- #TODO(sc68cal) OVS and LB agent uses settings in NEUTRON_CORE_PLUGIN_CONF (ml2_conf.ini) but others may not
- if is_service_enabled neutron-agent; then
- NEUTRON_CONFIG_ARG+=" --config-file $NEUTRON_CORE_PLUGIN_CONF"
+ plugins=$(iniget $NEUTRON_CONF DEFAULT service_plugins)
+ if [ $plugins ]; then
+ plugins+=","
fi
+ plugins+="${service_plugin_class}"
+ iniset $NEUTRON_CONF DEFAULT service_plugins $plugins
+}
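
The helper appends to the comma-separated ``service_plugins`` option instead
of overwriting it, so the calls made elsewhere in this file compose::

    neutron_service_plugin_class_add router     # service_plugins = router
    neutron_service_plugin_class_add metering   # service_plugins = router,metering
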
- if is_service_enabled neutron-dhcp; then
- NEUTRON_CONFIG_ARG+=" --config-file $NEUTRON_DHCP_CONF"
+function _neutron_ml2_extension_driver_add {
+ local driver=$1
+ local drivers=""
+
+ drivers=$(iniget $NEUTRON_CORE_PLUGIN_CONF ml2 extension_drivers)
+ if [ $drivers ]; then
+ drivers+=","
fi
+ drivers+="${driver}"
+ iniset $NEUTRON_CORE_PLUGIN_CONF ml2 extension_drivers $drivers
+}
- if is_service_enabled neutron-l3; then
- NEUTRON_CONFIG_ARG+=" --config-file $NEUTRON_L3_CONF"
- fi
-
- if is_service_enabled neutron-metadata-agent; then
- NEUTRON_CONFIG_ARG+=" --config-file $NEUTRON_META_CONF"
- fi
-
+function neutron_server_config_add_new {
+ _NEUTRON_SERVER_EXTRA_CONF_FILES_ABS+=($1)
}
# Dispatch functions
@@ -553,6 +549,42 @@
fi
}
+function neutron_service_plugin_class_add {
+ if is_neutron_legacy_enabled; then
+ # Call back to old function
+ _neutron_service_plugin_class_add "$@"
+ else
+ neutron_service_plugin_class_add_new "$@"
+ fi
+}
+
+function neutron_ml2_extension_driver_add {
+ if is_neutron_legacy_enabled; then
+ # Call back to old function
+ _neutron_ml2_extension_driver_add_old "$@"
+ else
+ _neutron_ml2_extension_driver_add "$@"
+ fi
+}
+
+function install_neutron_agent_packages {
+ if is_neutron_legacy_enabled; then
+ # Call back to old function
+ install_neutron_agent_packages_mutnauq "$@"
+ else
+ :
+ fi
+}
+
+function neutron_server_config_add {
+ if is_neutron_legacy_enabled; then
+ # Call back to old function
+ mutnauq_server_config_add "$@"
+ else
+ neutron_server_config_add_new "$@"
+ fi
+}
+
function start_neutron {
if is_neutron_legacy_enabled; then
# Call back to old function
diff --git a/lib/neutron-legacy b/lib/neutron-legacy
index 613e0f1..1dfd5fe 100644
--- a/lib/neutron-legacy
+++ b/lib/neutron-legacy
@@ -61,7 +61,7 @@
deprecated "Using lib/neutron-legacy is deprecated, and it will be removed in the future"
-if is_ssl_enabled_service "neutron" || is_service_enabled tls-proxy; then
+if is_service_enabled tls-proxy; then
Q_PROTOCOL="https"
fi
@@ -128,9 +128,23 @@
VIF_PLUGGING_IS_FATAL=${VIF_PLUGGING_IS_FATAL:-True}
VIF_PLUGGING_TIMEOUT=${VIF_PLUGGING_TIMEOUT:-300}
+# The directory which contains files for Q_PLUGIN_EXTRA_CONF_FILES.
+# /etc/neutron is assumed by many devstack plugins. Do not change.
+_Q_PLUGIN_EXTRA_CONF_PATH=/etc/neutron
+
# List of config file names in addition to the main plugin config file
-# See _configure_neutron_common() for details about setting it up
-declare -a Q_PLUGIN_EXTRA_CONF_FILES
+# To add additional plugin config files, use ``neutron_server_config_add``
+# utility function. For example:
+#
+# ``neutron_server_config_add file1``
+#
+# These config files are relative to ``/etc/neutron``. The above
+# example would specify ``--config-file /etc/neutron/file1`` for
+# neutron server.
+declare -a -g Q_PLUGIN_EXTRA_CONF_FILES
+
+# same as Q_PLUGIN_EXTRA_CONF_FILES, but with absolute path.
+declare -a -g _Q_PLUGIN_EXTRA_CONF_FILES_ABS
Q_RR_CONF_FILE=$NEUTRON_CONF_DIR/rootwrap.conf
@@ -270,9 +284,23 @@
# ---------
function _determine_config_server {
+ if [[ "$Q_PLUGIN_EXTRA_CONF_PATH" != '' ]]; then
+ if [[ "$Q_PLUGIN_EXTRA_CONF_PATH" = "$_Q_PLUGIN_EXTRA_CONF_PATH" ]]; then
+ deprecated "Q_PLUGIN_EXTRA_CONF_PATH is deprecated"
+ else
+ die $LINENO "Q_PLUGIN_EXTRA_CONF_PATH is deprecated"
+ fi
+ fi
+ if [[ ${#Q_PLUGIN_EXTRA_CONF_FILES[@]} > 0 ]]; then
+ deprecated "Q_PLUGIN_EXTRA_CONF_FILES is deprecated. Use neutron_server_config_add instead."
+ fi
+ for cfg_file in ${Q_PLUGIN_EXTRA_CONF_FILES[@]}; do
+ _Q_PLUGIN_EXTRA_CONF_FILES_ABS+=($_Q_PLUGIN_EXTRA_CONF_PATH/$cfg_file)
+ done
+
local cfg_file
local opts="--config-file $NEUTRON_CONF --config-file /$Q_PLUGIN_CONF_FILE"
- for cfg_file in ${Q_PLUGIN_EXTRA_CONF_FILES[@]}; do
+ for cfg_file in ${_Q_PLUGIN_EXTRA_CONF_FILES_ABS[@]}; do
opts+=" --config-file $cfg_file"
done
echo "$opts"
@@ -331,6 +359,10 @@
fi
iniset $NEUTRON_CONF DEFAULT api_workers "$API_WORKERS"
+ # devstack is not a tool for running uber-scale OpenStack
+ # clouds, so running without a dedicated RPC worker
+ # for state reports is more than adequate.
+ iniset $NEUTRON_CONF DEFAULT rpc_state_report_workers 0
}
function create_nova_conf_neutron {
@@ -378,8 +410,6 @@
get_or_create_endpoint \
"network" \
"$REGION_NAME" \
- "$Q_PROTOCOL://$SERVICE_HOST:$Q_PORT/" \
- "$Q_PROTOCOL://$SERVICE_HOST:$Q_PORT/" \
"$Q_PROTOCOL://$SERVICE_HOST:$Q_PORT/"
fi
}
@@ -402,28 +432,10 @@
git_clone $NEUTRON_REPO $NEUTRON_DIR $NEUTRON_BRANCH
setup_develop $NEUTRON_DIR
-
- if [ "$VIRT_DRIVER" == 'xenserver' ]; then
- local dom0_ip
- dom0_ip=$(echo "$XENAPI_CONNECTION_URL" | cut -d "/" -f 3-)
-
- local ssh_dom0
- ssh_dom0="sudo -u $DOMZERO_USER ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@$dom0_ip"
-
- # Find where the plugins should go in dom0
- local xen_functions
- xen_functions=$(cat $TOP_DIR/tools/xen/functions)
- local plugin_dir
- plugin_dir=$($ssh_dom0 "$xen_functions; set -eux; xapi_plugin_location")
-
- # install neutron plugins to dom0
- tar -czf - -C $NEUTRON_DIR/neutron/plugins/ml2/drivers/openvswitch/agent/xenapi/etc/xapi.d/plugins/ ./ |
- $ssh_dom0 "tar -xzf - -C $plugin_dir && chmod a+x $plugin_dir/*"
- fi
}
# install_neutron_agent_packages() - Collect source and prepare
-function install_neutron_agent_packages {
+function install_neutron_agent_packages_mutnauq {
# radvd doesn't come with the OS. Install it if the l3 service is enabled.
if is_service_enabled q-l3; then
install_package radvd
@@ -449,9 +461,6 @@
# Start the Neutron service
run_process q-svc "$NEUTRON_BIN_DIR/neutron-server $cfg_file_options"
echo "Waiting for Neutron to start..."
- if is_ssl_enabled_service "neutron"; then
- ssl_ca="--ca-certificate=${SSL_BUNDLE_FILE}"
- fi
local testcmd="wget ${ssl_ca} --no-proxy -q -O- $service_protocol://$Q_HOST:$service_port"
test_with_retry "$testcmd" "Neutron did not start" $SERVICE_TIMEOUT
@@ -493,11 +502,6 @@
run_process q-meta "$AGENT_META_BINARY --config-file $NEUTRON_CONF --config-file $Q_META_CONF_FILE"
run_process q-metering "$AGENT_METERING_BINARY --config-file $NEUTRON_CONF --config-file $METERING_AGENT_CONF_FILENAME"
-
- if [ "$VIRT_DRIVER" = 'xenserver' ]; then
- # For XenServer, start an agent for the domU openvswitch
- run_process q-domua "$AGENT_BINARY --config-file $NEUTRON_CONF --config-file /$Q_PLUGIN_CONF_FILE.domU"
- fi
}
# Start running processes, including screen
@@ -664,11 +668,6 @@
# Set plugin-specific variables ``Q_DB_NAME``, ``Q_PLUGIN_CLASS``.
# For main plugin config file, set ``Q_PLUGIN_CONF_PATH``, ``Q_PLUGIN_CONF_FILENAME``.
- # For additional plugin config files, set ``Q_PLUGIN_EXTRA_CONF_PATH`` and
- # ``Q_PLUGIN_EXTRA_CONF_FILES``. For example:
- #
- # ``Q_PLUGIN_EXTRA_CONF_PATH=/path/to/plugins``
- # ``Q_PLUGIN_EXTRA_CONF_FILES=(file1 file2)``
neutron_plugin_configure_common
if [[ "$Q_PLUGIN_CONF_PATH" == '' || "$Q_PLUGIN_CONF_FILENAME" == '' || "$Q_PLUGIN_CLASS" == '' ]]; then
@@ -695,20 +694,6 @@
# NOTE(freerunner): Need to adjust Region Name for nova in multiregion installation
iniset $NEUTRON_CONF nova region_name $REGION_NAME
- # If addition config files are set, make sure their path name is set as well
- if [[ ${#Q_PLUGIN_EXTRA_CONF_FILES[@]} > 0 && $Q_PLUGIN_EXTRA_CONF_PATH == '' ]]; then
- die $LINENO "Neutron additional plugin config not set.. exiting"
- fi
-
- # If additional config files exist, copy them over to neutron configuration
- # directory
- if [[ $Q_PLUGIN_EXTRA_CONF_PATH != '' ]]; then
- local f
- for (( f=0; $f < ${#Q_PLUGIN_EXTRA_CONF_FILES[@]}; f+=1 )); do
- Q_PLUGIN_EXTRA_CONF_FILES[$f]=$Q_PLUGIN_EXTRA_CONF_PATH/${Q_PLUGIN_EXTRA_CONF_FILES[$f]}
- done
- fi
-
if [ "$VIRT_DRIVER" = 'fake' ]; then
# Disable arbitrary limits
iniset $NEUTRON_CONF quotas quota_network -1
@@ -719,30 +704,13 @@
fi
# Format logging
- if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ]; then
- setup_colorized_logging $NEUTRON_CONF DEFAULT project_id
- else
- # Show user_name and project_name by default like in nova
- iniset $NEUTRON_CONF DEFAULT logging_user_identity_format "%(user_name)s %(project_name)s"
- fi
+ setup_logging $NEUTRON_CONF
if is_service_enabled tls-proxy; then
# Set the service port for a proxy to take the original
iniset $NEUTRON_CONF DEFAULT bind_port "$Q_PORT_INT"
fi
- if is_ssl_enabled_service "nova"; then
- iniset $NEUTRON_CONF nova cafile $SSL_BUNDLE_FILE
- fi
-
- if is_ssl_enabled_service "neutron"; then
- ensure_certificates NEUTRON
-
- iniset $NEUTRON_CONF DEFAULT use_ssl True
- iniset $NEUTRON_CONF DEFAULT ssl_cert_file "$NEUTRON_SSL_CERT"
- iniset $NEUTRON_CONF DEFAULT ssl_key_file "$NEUTRON_SSL_KEY"
- fi
-
_neutron_setup_rootwrap
}
@@ -780,6 +748,7 @@
iniset $Q_META_CONF_FILE DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
iniset $Q_META_CONF_FILE DEFAULT nova_metadata_ip $Q_META_DATA_IP
+ iniset $Q_META_CONF_FILE DEFAULT metadata_workers $API_WORKERS
iniset $Q_META_CONF_FILE AGENT root_helper "$Q_RR_COMMAND"
if [[ "$Q_USE_ROOTWRAP_DAEMON" == "True" ]]; then
iniset $Q_META_CONF_FILE AGENT root_helper_daemon "$Q_RR_DAEMON_COMMAND"
@@ -787,7 +756,7 @@
}
function _configure_neutron_ceilometer_notifications {
- iniset $NEUTRON_CONF oslo_messaging_notifications driver messaging
+ iniset $NEUTRON_CONF oslo_messaging_notifications driver messagingv2
}
function _configure_neutron_metering {
@@ -859,6 +828,21 @@
fi
}
+# _neutron_ml2_extension_driver_add_old() - add ML2 extension driver
+function _neutron_ml2_extension_driver_add_old {
+ local extension=$1
+ if [[ $Q_ML2_PLUGIN_EXT_DRIVERS == '' ]]; then
+ Q_ML2_PLUGIN_EXT_DRIVERS=$extension
+ elif [[ ! ,${Q_ML2_PLUGIN_EXT_DRIVERS}, =~ ,${extension}, ]]; then
+ Q_ML2_PLUGIN_EXT_DRIVERS="$Q_ML2_PLUGIN_EXT_DRIVERS,$extension"
+ fi
+}
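
The ``,${list}, =~ ,${item},`` test wraps both sides in commas so only whole
entries can match, which keeps the add idempotent (``qos`` below is just an
illustrative driver name)::

    Q_ML2_PLUGIN_EXT_DRIVERS=port_security
    _neutron_ml2_extension_driver_add_old port_security  # no-op, already present
    _neutron_ml2_extension_driver_add_old qos             # -> port_security,qos
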
+
+# mutnauq_server_config_add() - add server config file
+function mutnauq_server_config_add {
+ _Q_PLUGIN_EXTRA_CONF_FILES_ABS+=($1)
+}
+
# _neutron_deploy_rootwrap_filters() - deploy rootwrap filters to $Q_CONF_ROOTWRAP_D (owned by root).
function _neutron_deploy_rootwrap_filters {
if [[ "$Q_USE_ROOTWRAP" == "False" ]]; then
diff --git a/lib/neutron_plugins/README.md b/lib/neutron_plugins/README.md
index f03000e..ed40886 100644
--- a/lib/neutron_plugins/README.md
+++ b/lib/neutron_plugins/README.md
@@ -24,7 +24,6 @@
* ``neutron_plugin_configure_common`` :
set plugin-specific variables, ``Q_PLUGIN_CONF_PATH``, ``Q_PLUGIN_CONF_FILENAME``,
``Q_PLUGIN_CLASS``
-* ``neutron_plugin_configure_debug_command``
* ``neutron_plugin_configure_dhcp_agent``
* ``neutron_plugin_configure_l3_agent``
* ``neutron_plugin_configure_plugin_agent``
diff --git a/lib/neutron_plugins/bigswitch_floodlight b/lib/neutron_plugins/bigswitch_floodlight
index 586ded7..52c6ad5 100644
--- a/lib/neutron_plugins/bigswitch_floodlight
+++ b/lib/neutron_plugins/bigswitch_floodlight
@@ -26,10 +26,6 @@
BS_FL_CONTROLLER_TIMEOUT=${BS_FL_CONTROLLER_TIMEOUT:-10}
}
-function neutron_plugin_configure_debug_command {
- _neutron_ovs_base_configure_debug_command
-}
-
function neutron_plugin_configure_dhcp_agent {
:
}
diff --git a/lib/neutron_plugins/brocade b/lib/neutron_plugins/brocade
index 6ba0a66..310b72e 100644
--- a/lib/neutron_plugins/brocade
+++ b/lib/neutron_plugins/brocade
@@ -49,16 +49,11 @@
}
-function neutron_plugin_configure_debug_command {
- iniset $NEUTRON_TEST_CONFIG_FILE DEFAULT external_network_bridge
-}
-
function neutron_plugin_configure_dhcp_agent {
iniset $Q_DHCP_CONF_FILE DEFAULT dhcp_agent_manager neutron.agent.dhcp_agent.DhcpAgentWithStateReport
}
function neutron_plugin_configure_l3_agent {
- iniset $Q_L3_CONF_FILE DEFAULT external_network_bridge
iniset $Q_L3_CONF_FILE DEFAULT l3_agent_manager neutron.agent.l3_agent.L3NATAgentWithStateReport
}
diff --git a/lib/neutron_plugins/cisco b/lib/neutron_plugins/cisco
index fc2cb8a..b397169 100644
--- a/lib/neutron_plugins/cisco
+++ b/lib/neutron_plugins/cisco
@@ -45,7 +45,6 @@
_prefix_function neutron_plugin_create_nova_conf ovs
_prefix_function neutron_plugin_install_agent_packages ovs
_prefix_function neutron_plugin_configure_common ovs
-_prefix_function neutron_plugin_configure_debug_command ovs
_prefix_function neutron_plugin_configure_dhcp_agent ovs
_prefix_function neutron_plugin_configure_l3_agent ovs
_prefix_function neutron_plugin_configure_plugin_agent ovs
@@ -83,10 +82,6 @@
Q_PLUGIN_CLASS="neutron.plugins.cisco.network_plugin.PluginV2"
}
-function neutron_plugin_configure_debug_command {
- :
-}
-
function neutron_plugin_configure_dhcp_agent {
iniset $Q_DHCP_CONF_FILE DEFAULT dhcp_agent_manager neutron.agent.dhcp_agent.DhcpAgentWithStateReport
}
diff --git a/lib/neutron_plugins/linuxbridge_agent b/lib/neutron_plugins/linuxbridge_agent
index d0de2f5..f2302e3 100644
--- a/lib/neutron_plugins/linuxbridge_agent
+++ b/lib/neutron_plugins/linuxbridge_agent
@@ -8,6 +8,7 @@
set +o xtrace
function neutron_lb_cleanup {
+ sudo ip link set $PUBLIC_BRIDGE down
sudo brctl delbr $PUBLIC_BRIDGE
if [[ "$Q_ML2_TENANT_NETWORK_TYPE" = "vxlan" ]]; then
@@ -38,10 +39,6 @@
install_package bridge-utils
}
-function neutron_plugin_configure_debug_command {
- iniset $NEUTRON_TEST_CONFIG_FILE DEFAULT external_network_bridge
-}
-
function neutron_plugin_configure_dhcp_agent {
local conf_file=$1
:
@@ -51,7 +48,6 @@
local conf_file=$1
sudo brctl addbr $PUBLIC_BRIDGE
set_mtu $PUBLIC_BRIDGE $PUBLIC_BRIDGE_MTU
- iniset $conf_file DEFAULT external_network_bridge
}
function neutron_plugin_configure_plugin_agent {
@@ -62,14 +58,18 @@
LB_INTERFACE_MAPPINGS=$PHYSICAL_NETWORK:$LB_PHYSICAL_INTERFACE
fi
if [[ "$PUBLIC_BRIDGE" != "" ]] && [[ "$PUBLIC_PHYSICAL_NETWORK" != "" ]]; then
- iniset /$Q_PLUGIN_CONF_FILE linux_bridge bridge_mappings "$PUBLIC_PHYSICAL_NETWORK:$PUBLIC_BRIDGE"
+ if is_service_enabled q-l3 || is_service_enabled neutron-l3; then
+ iniset /$Q_PLUGIN_CONF_FILE linux_bridge bridge_mappings "$PUBLIC_PHYSICAL_NETWORK:$PUBLIC_BRIDGE"
+ fi
fi
if [[ "$LB_INTERFACE_MAPPINGS" != "" ]]; then
iniset /$Q_PLUGIN_CONF_FILE linux_bridge physical_interface_mappings $LB_INTERFACE_MAPPINGS
fi
if [[ "$Q_USE_SECGROUP" == "True" ]]; then
iniset /$Q_PLUGIN_CONF_FILE securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
- enable_kernel_bridge_firewall
+ if ! running_in_container; then
+ enable_kernel_bridge_firewall
+ fi
else
iniset /$Q_PLUGIN_CONF_FILE securitygroup firewall_driver neutron.agent.firewall.NoopFirewallDriver
fi
diff --git a/lib/neutron_plugins/ml2 b/lib/neutron_plugins/ml2
index e429714..c5a4c02 100644
--- a/lib/neutron_plugins/ml2
+++ b/lib/neutron_plugins/ml2
@@ -63,7 +63,7 @@
function neutron_plugin_configure_common {
Q_PLUGIN_CONF_PATH=etc/neutron/plugins/ml2
Q_PLUGIN_CONF_FILENAME=ml2_conf.ini
- Q_PLUGIN_CLASS="neutron.plugins.ml2.plugin.Ml2Plugin"
+ Q_PLUGIN_CLASS="ml2"
# The ML2 plugin delegates L3 routing/NAT functionality to
# the L3 service plugin which must therefore be specified.
_neutron_service_plugin_class_add $ML2_L3_PLUGIN
@@ -105,7 +105,7 @@
if [[ -n "$PHYSICAL_NETWORK" ]]; then
Q_ML2_PLUGIN_FLAT_TYPE_OPTIONS+="${PHYSICAL_NETWORK},"
fi
- if [[ -n "$PUBLIC_PHYSICAL_NETWORK" ]]; then
+ if [[ -n "$PUBLIC_PHYSICAL_NETWORK" ]] && [[ "${PHYSICAL_NETWORK}" != "$PUBLIC_PHYSICAL_NETWORK" ]]; then
Q_ML2_PLUGIN_FLAT_TYPE_OPTIONS+="${PUBLIC_PHYSICAL_NETWORK},"
fi
fi
diff --git a/lib/neutron_plugins/nuage b/lib/neutron_plugins/nuage
index 61e634e..1c04aaa 100644
--- a/lib/neutron_plugins/nuage
+++ b/lib/neutron_plugins/nuage
@@ -33,10 +33,6 @@
NUAGE_CNA_DEF_NETPART_NAME=${NUAGE_CNA_DEF_NETPART_NAME:-''}
}
-function neutron_plugin_configure_debug_command {
- :
-}
-
function neutron_plugin_configure_dhcp_agent {
:
}
diff --git a/lib/neutron_plugins/openvswitch_agent b/lib/neutron_plugins/openvswitch_agent
index e27b8a6..b65a258 100644
--- a/lib/neutron_plugins/openvswitch_agent
+++ b/lib/neutron_plugins/openvswitch_agent
@@ -11,22 +11,12 @@
function neutron_plugin_create_nova_conf {
_neutron_ovs_base_configure_nova_vif_driver
- if [ "$VIRT_DRIVER" == 'xenserver' ]; then
- iniset $NOVA_CONF xenserver vif_driver nova.virt.xenapi.vif.XenAPIOpenVswitchDriver
- iniset $NOVA_CONF xenserver ovs_integration_bridge $XEN_INTEGRATION_BRIDGE
- # Disable nova's firewall so that it does not conflict with neutron
- iniset $NOVA_CONF DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
- fi
}
function neutron_plugin_install_agent_packages {
_neutron_ovs_base_install_agent_packages
}
-function neutron_plugin_configure_debug_command {
- _neutron_ovs_base_configure_debug_command
-}
-
function neutron_plugin_configure_dhcp_agent {
local conf_file=$1
:
@@ -62,57 +52,6 @@
fi
AGENT_BINARY="$NEUTRON_BIN_DIR/neutron-openvswitch-agent"
- if [ "$VIRT_DRIVER" == 'xenserver' ]; then
- # Make a copy of our config for domU
- sudo cp /$Q_PLUGIN_CONF_FILE "/$Q_PLUGIN_CONF_FILE.domU"
-
- # change domU's config file to STACK_USER
- sudo chown $STACK_USER:$STACK_USER /$Q_PLUGIN_CONF_FILE.domU
-
- # Deal with Dom0's L2 Agent:
- Q_RR_DOM0_COMMAND="$NEUTRON_BIN_DIR/neutron-rootwrap-xen-dom0 $Q_RR_CONF_FILE"
-
- # For now, duplicate the xen configuration already found in nova.conf
- iniset $Q_RR_CONF_FILE xenapi xenapi_connection_url "$XENAPI_CONNECTION_URL"
- iniset $Q_RR_CONF_FILE xenapi xenapi_connection_username "$XENAPI_USER"
- iniset $Q_RR_CONF_FILE xenapi xenapi_connection_password "$XENAPI_PASSWORD"
-
- # Under XS/XCP, the ovs agent needs to target the dom0
- # integration bridge. This is enabled by using a root wrapper
- # that executes commands on dom0 via a XenAPI plugin.
- # XenAPI does not support daemon rootwrap now, so set root_helper_daemon empty
- iniset /$Q_PLUGIN_CONF_FILE agent root_helper "$Q_RR_DOM0_COMMAND"
- iniset /$Q_PLUGIN_CONF_FILE agent root_helper_daemon ""
-
- # Disable minimize polling, so that it can always detect OVS and Port changes
- # This is a problem of xenserver + neutron, bug has been reported
- # https://bugs.launchpad.net/neutron/+bug/1495423
- iniset /$Q_PLUGIN_CONF_FILE agent minimize_polling False
-
- # Set "physical" mapping
- iniset /$Q_PLUGIN_CONF_FILE ovs bridge_mappings "physnet1:$FLAT_NETWORK_BRIDGE"
-
- # XEN_INTEGRATION_BRIDGE is the integration bridge in dom0
- iniset /$Q_PLUGIN_CONF_FILE ovs integration_bridge $XEN_INTEGRATION_BRIDGE
-
- # Set up domU's L2 agent:
-
- # Create a bridge "br-$VLAN_INTERFACE"
- _neutron_ovs_base_add_bridge "br-$VLAN_INTERFACE"
- # Add $VLAN_INTERFACE to that bridge
- sudo ovs-vsctl -- --may-exist add-port "br-$VLAN_INTERFACE" $VLAN_INTERFACE
-
- # Create external bridge and add port
- _neutron_ovs_base_add_public_bridge
- sudo ovs-vsctl -- --may-exist add-port $PUBLIC_BRIDGE $PUBLIC_INTERFACE
-
- # Set bridge mappings to "physnet1:br-$GUEST_INTERFACE_DEFAULT"
- iniset "/$Q_PLUGIN_CONF_FILE.domU" ovs bridge_mappings "physnet1:br-$VLAN_INTERFACE,physnet-ex:$PUBLIC_BRIDGE"
- # Set integration bridge to domU's
- iniset "/$Q_PLUGIN_CONF_FILE.domU" ovs integration_bridge $OVS_BRIDGE
- # Set root wrap
- iniset "/$Q_PLUGIN_CONF_FILE.domU" agent root_helper "$Q_RR_COMMAND"
- fi
iniset /$Q_PLUGIN_CONF_FILE agent tunnel_types $Q_TUNNEL_TYPES
iniset /$Q_PLUGIN_CONF_FILE ovs datapath_type $OVS_DATAPATH_TYPE
}
diff --git a/lib/neutron_plugins/ovs_base b/lib/neutron_plugins/ovs_base
index baf7d7f..50b9ae5 100644
--- a/lib/neutron_plugins/ovs_base
+++ b/lib/neutron_plugins/ovs_base
@@ -30,7 +30,7 @@
function _neutron_ovs_base_setup_bridge {
local bridge=$1
- neutron-ovs-cleanup
+ neutron-ovs-cleanup --config-file $NEUTRON_CONF
_neutron_ovs_base_add_bridge $bridge
sudo ovs-vsctl --no-wait br-set-external-id $bridge bridge-id $bridge
}
@@ -69,35 +69,31 @@
restart_service openvswitch
sudo systemctl enable openvswitch
elif is_suse; then
- restart_service openvswitch-switch
- fi
-}
-
-function _neutron_ovs_base_configure_debug_command {
- if [ "$Q_USE_PROVIDERNET_FOR_PUBLIC" = "True" ]; then
- iniset $NEUTRON_TEST_CONFIG_FILE DEFAULT external_network_bridge ""
- else
- iniset $NEUTRON_TEST_CONFIG_FILE DEFAULT external_network_bridge $PUBLIC_BRIDGE
+ if [[ $DISTRO == "sle12" ]] && [[ $os_RELEASE -lt 12.2 ]]; then
+ restart_service openvswitch-switch
+ else
+ restart_service openvswitch
+ fi
fi
}
function _neutron_ovs_base_configure_firewall_driver {
if [[ "$Q_USE_SECGROUP" == "True" ]]; then
iniset /$Q_PLUGIN_CONF_FILE securitygroup firewall_driver iptables_hybrid
- enable_kernel_bridge_firewall
+ if ! running_in_container; then
+ enable_kernel_bridge_firewall
+ fi
else
iniset /$Q_PLUGIN_CONF_FILE securitygroup firewall_driver noop
fi
}
function _neutron_ovs_base_configure_l3_agent {
- if [ "$Q_USE_PROVIDERNET_FOR_PUBLIC" = "True" ]; then
- iniset $Q_L3_CONF_FILE DEFAULT external_network_bridge ""
- else
+ if [ "$Q_USE_PROVIDERNET_FOR_PUBLIC" != "True" ]; then
iniset $Q_L3_CONF_FILE DEFAULT external_network_bridge $PUBLIC_BRIDGE
fi
- neutron-ovs-cleanup
+ neutron-ovs-cleanup --config-file $NEUTRON_CONF
if [[ "$Q_USE_PUBLIC_VETH" = "True" ]]; then
ip link show $Q_PUBLIC_VETH_INT > /dev/null 2>&1 ||
sudo ip link add $Q_PUBLIC_VETH_INT type veth \
diff --git a/lib/neutron_plugins/services/l3 b/lib/neutron_plugins/services/l3
index 6d518e2..07974fe 100644
--- a/lib/neutron_plugins/services/l3
+++ b/lib/neutron_plugins/services/l3
@@ -157,14 +157,6 @@
}
function create_neutron_initial_network {
- if ! is_service_enabled q-svc && ! is_service_enabled neutron-api; then
- echo "Controller services not enabled. No networks configured!"
- return
- fi
- if [[ "$NEUTRON_CREATE_INITIAL_NETWORKS" == "False" ]]; then
- echo "Network creation disabled!"
- return
- fi
local project_id
project_id=$(openstack project list | grep " demo " | get_field 1)
die_if_not_set $LINENO project_id "Failure retrieving project_id for demo"
@@ -200,13 +192,13 @@
fi
if [[ "$IP_VERSION" =~ .*6 ]]; then
- die_if_not_set $LINENO IPV6_PROVIDER_FIXED_RANGE "IPV6_PROVIDER_FIXED_RANGE has not been set, but Q_USE_PROVIDERNET_FOR_PUBLIC is true and IP_VERSION includes 6"
- die_if_not_set $LINENO IPV6_PROVIDER_NETWORK_GATEWAY "IPV6_PROVIDER_NETWORK_GATEWAY has not been set, but Q_USE_PROVIDERNET_FOR_PUBLIC is true and IP_VERSION includes 6"
+ die_if_not_set $LINENO IPV6_PROVIDER_FIXED_RANGE "IPV6_PROVIDER_FIXED_RANGE has not been set, but Q_USE_PROVIDER_NETWORKING is true and IP_VERSION includes 6"
+ die_if_not_set $LINENO IPV6_PROVIDER_NETWORK_GATEWAY "IPV6_PROVIDER_NETWORK_GATEWAY has not been set, but Q_USE_PROVIDER_NETWORKING is true and IP_VERSION includes 6"
if [ -z $SUBNETPOOL_V6_ID ]; then
fixed_range_v6=$IPV6_PROVIDER_FIXED_RANGE
fi
- SUBNET_V6_ID=$(openstack --os-cloud devstack-admin --os-region "$REGION_NAME" subnet create --project $project_id --ip-version 6 --ipv6-address-mode $IPV6_ADDRESS_MODE --gateway $IPV6_PROVIDER_NETWORK_GATEWAY $IPV6_PROVIDER_SUBNET_NAME ${SUBNETPOOL_V6_ID:+--subnet-pool $SUBNETPOOL_V6_ID} --network $NET_ID $fixed_range_v6 | grep 'id' | get_field 2)
- die_if_not_set $LINENO SUBNET_V6_ID "Failure creating SUBNET_V6_ID for $IPV6_PROVIDER_SUBNET_NAME $project_id"
+ IPV6_SUBNET_ID=$(openstack --os-cloud devstack-admin --os-region "$REGION_NAME" subnet create --project $project_id --ip-version 6 --gateway $IPV6_PROVIDER_NETWORK_GATEWAY $IPV6_PROVIDER_SUBNET_NAME ${SUBNETPOOL_V6_ID:+--subnet-pool $SUBNETPOOL_V6_ID} --network $NET_ID --subnet-range $fixed_range_v6 | grep ' id ' | get_field 2)
+ die_if_not_set $LINENO IPV6_SUBNET_ID "Failure creating IPV6_SUBNET_ID for $IPV6_PROVIDER_SUBNET_NAME $project_id"
fi
if [[ $Q_AGENT == "openvswitch" ]]; then
@@ -300,8 +292,8 @@
subnet_params+="--gateway $IPV6_PRIVATE_NETWORK_GATEWAY "
fi
subnet_params+="${SUBNETPOOL_V6_ID:+--subnet-pool $SUBNETPOOL_V6_ID} "
- subnet_params+="${fixed_range_v6:+--subnet-range $fixed_range_v6 $ipv6_modes} "
- subnet_params+="--network $NET_ID $IPV6_PRIVATE_SUBNET_NAME "
+ subnet_params+="${fixed_range_v6:+--subnet-range $fixed_range_v6} "
+ subnet_params+="$ipv6_modes --network $NET_ID $IPV6_PRIVATE_SUBNET_NAME "
local ipv6_subnet_id
ipv6_subnet_id=$(openstack --os-cloud devstack-admin --os-region "$REGION_NAME" subnet create $subnet_params | grep ' id ' | get_field 2)
die_if_not_set $LINENO ipv6_subnet_id "Failure creating private IPv6 subnet for $project_id"
@@ -345,7 +337,7 @@
ext_gw_ip=$(echo $id_and_ext_gw_ip | get_field 2)
PUB_SUBNET_ID=$(echo $id_and_ext_gw_ip | get_field 5)
# Configure the external network as the default router gateway
- neutron --os-cloud devstack-admin --os-region "$REGION_NAME" router-gateway-set $ROUTER_ID $EXT_NET_ID
+ openstack --os-cloud devstack-admin --os-region "$REGION_NAME" router set --external-gateway $EXT_NET_ID $ROUTER_ID
# This logic is specific to using the l3-agent for layer 3
if is_service_enabled q-l3 || is_service_enabled neutron-l3; then
@@ -393,7 +385,7 @@
# If the external network has not already been set as the default router
# gateway when configuring an IPv4 public subnet, do so now
if [[ "$IP_VERSION" == "6" ]]; then
- neutron --os-cloud devstack-admin --os-region "$REGION_NAME" router-gateway-set $ROUTER_ID $EXT_NET_ID
+ openstack --os-cloud devstack-admin --os-region "$REGION_NAME" router set --external-gateway $EXT_NET_ID $ROUTER_ID
fi
# This logic is specific to using the l3-agent for layer 3
@@ -432,13 +424,6 @@
fi
}
-function is_provider_network {
- if [ "$Q_USE_PROVIDER_NETWORKING" == "True" ]; then
- return 0
- fi
- return 1
-}
-
function is_networking_extension_supported {
local extension=$1
# TODO(sc68cal) cache this instead of calling every time
diff --git a/lib/nova b/lib/nova
index d3e8ce8..de053ab 100644
--- a/lib/nova
+++ b/lib/nova
@@ -68,7 +68,7 @@
# Toggle for deploying Nova-API under HTTPD + mod_wsgi
NOVA_USE_MOD_WSGI=${NOVA_USE_MOD_WSGI:-False}
-if is_ssl_enabled_service "nova" || is_service_enabled tls-proxy; then
+if is_service_enabled tls-proxy; then
NOVA_SERVICE_PROTOCOL="https"
fi
@@ -85,9 +85,6 @@
# NOTE: Set ``FORCE_CONFIG_DRIVE="False"`` to turn OFF config drive
FORCE_CONFIG_DRIVE=${FORCE_CONFIG_DRIVE:-"False"}
-# Option to initialize CellsV2 environment
-NOVA_CONFIGURE_CELLSV2=$(trueorfalse False NOVA_CONFIGURE_CELLSV2)
-
# Nova supports pluggable schedulers. The default ``FilterScheduler``
# should work in most cases.
SCHEDULER=${SCHEDULER:-filter_scheduler}
@@ -164,6 +161,14 @@
TEST_FLOATING_POOL=${TEST_FLOATING_POOL:-test}
TEST_FLOATING_RANGE=${TEST_FLOATING_RANGE:-192.168.253.0/29}
+# Other Nova configurations
+# ----------------------------
+
+# ``NOVA_USE_SERVICE_TOKEN`` is a mode where a service token is passed along
+# with the user token when communicating with external REST APIs like Neutron,
+# Cinder and Glance.
+NOVA_USE_SERVICE_TOKEN=$(trueorfalse False NOVA_USE_SERVICE_TOKEN)
+
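To opt in, a minimal ``local.conf`` sketch::

    [[local|localrc]]
    NOVA_USE_SERVICE_TOKEN=True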
# Functions
# ---------
@@ -242,7 +247,7 @@
sudo rm -f $(apache_site_config_for nova-metadata)
}
-# _config_nova_apache_wsgi() - Set WSGI config files of Keystone
+# _config_nova_apache_wsgi() - Set WSGI config files of Nova API
function _config_nova_apache_wsgi {
sudo mkdir -p $NOVA_WSGI_DIR
@@ -257,11 +262,6 @@
local nova_metadata_port=$METADATA_SERVICE_PORT
local venv_path=""
- if is_ssl_enabled_service nova-api; then
- nova_ssl="SSLEngine On"
- nova_certfile="SSLCertificateFile $NOVA_SSL_CERT"
- nova_keyfile="SSLCertificateKeyFile $NOVA_SSL_KEY"
- fi
if [[ ${USE_VENV} = True ]]; then
venv_path="python-path=${PROJECT_VENV["nova"]}/lib/$(python_version)/site-packages"
fi
@@ -410,16 +410,12 @@
get_or_create_endpoint \
"compute_legacy" \
"$REGION_NAME" \
- "$nova_api_url/v2/\$(project_id)s" \
- "$nova_api_url/v2/\$(project_id)s" \
"$nova_api_url/v2/\$(project_id)s"
get_or_create_service "nova" "compute" "Nova Compute Service"
get_or_create_endpoint \
"compute" \
"$REGION_NAME" \
- "$nova_api_url/v2.1" \
- "$nova_api_url/v2.1" \
"$nova_api_url/v2.1"
fi
@@ -460,20 +456,15 @@
iniset $NOVA_CONF DEFAULT scheduler_driver "$SCHEDULER"
iniset $NOVA_CONF DEFAULT scheduler_default_filters "$FILTERS"
iniset $NOVA_CONF DEFAULT default_floating_pool "$PUBLIC_NETWORK_NAME"
- iniset $NOVA_CONF DEFAULT s3_host "$SERVICE_HOST"
- iniset $NOVA_CONF DEFAULT s3_port "$S3_SERVICE_PORT"
if [[ $SERVICE_IP_VERSION == 6 ]]; then
iniset $NOVA_CONF DEFAULT my_ip "$HOST_IPV6"
iniset $NOVA_CONF DEFAULT use_ipv6 "True"
else
iniset $NOVA_CONF DEFAULT my_ip "$HOST_IP"
fi
- iniset $NOVA_CONF database connection `database_connection_url nova`
- iniset $NOVA_CONF api_database connection `database_connection_url nova_api`
iniset $NOVA_CONF DEFAULT instance_name_template "${INSTANCE_NAME_PREFIX}%08x"
iniset $NOVA_CONF DEFAULT osapi_compute_listen "$NOVA_SERVICE_LISTEN_ADDRESS"
iniset $NOVA_CONF DEFAULT metadata_listen "$NOVA_SERVICE_LISTEN_ADDRESS"
- iniset $NOVA_CONF DEFAULT s3_listen "$NOVA_SERVICE_LISTEN_ADDRESS"
iniset $NOVA_CONF key_manager api_class nova.keymgr.conf_key_mgr.ConfKeyManager
@@ -483,6 +474,14 @@
iniset $NOVA_CONF DEFAULT bindir "/usr/bin"
fi
+ # Only set up database connections if there are services that
+ # require them running on the host. This ensures that n-cpu doesn't
+ # leak a dependency on the db in a multinode scenario.
+ if is_service_enabled n-api n-cond n-sched; then
+ iniset $NOVA_CONF database connection `database_connection_url nova`
+ iniset $NOVA_CONF api_database connection `database_connection_url nova_api`
+ fi
+
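On an all-in-one node both sections end up populated; a compute-only node
that runs just n-cpu gets neither. The result looks roughly like this
(connection URLs are illustrative, not taken from the patch)::

    [database]
    connection = mysql+pymysql://root:secret@127.0.0.1/nova?charset=utf8

    [api_database]
    connection = mysql+pymysql://root:secret@127.0.0.1/nova_api?charset=utf8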
if is_service_enabled n-api; then
if is_service_enabled n-api-meta; then
# If running n-api-meta as a separate service
@@ -499,7 +498,7 @@
fi
if is_service_enabled cinder; then
- if is_ssl_enabled_service "cinder" || is_service_enabled tls-proxy; then
+ if is_service_enabled tls-proxy; then
CINDER_SERVICE_HOST=${CINDER_SERVICE_HOST:-$SERVICE_HOST}
CINDER_SERVICE_PORT=${CINDER_SERVICE_PORT:-8776}
iniset $NOVA_CONF cinder cafile $SSL_BUNDLE_FILE
@@ -524,12 +523,8 @@
iniset $NOVA_CONF DEFAULT force_config_drive "$FORCE_CONFIG_DRIVE"
fi
# Format logging
- if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ] && [ "$NOVA_USE_MOD_WSGI" == "False" ] ; then
- setup_colorized_logging $NOVA_CONF DEFAULT
- else
- # Show user_name and project_name instead of user_id and project_id
- iniset $NOVA_CONF DEFAULT logging_user_identity_format "%(user_name)s %(project_name)s"
- fi
+ setup_logging $NOVA_CONF $NOVA_USE_MOD_WSGI
+
if [ "$NOVA_USE_MOD_WSGI" == "True" ]; then
_config_nova_apache_wsgi
fi
@@ -577,7 +572,7 @@
# Set the oslo messaging driver to the typical default. This does not
# enable notifications, but it will allow them to function when enabled.
- iniset $NOVA_CONF oslo_messaging_notifications driver "messaging"
+ iniset $NOVA_CONF oslo_messaging_notifications driver "messagingv2"
iniset_rpc_backend nova $NOVA_CONF
iniset $NOVA_CONF glance api_servers "${GLANCE_SERVICE_PROTOCOL}://${GLANCE_HOSTPORT}"
@@ -588,20 +583,10 @@
iniset $NOVA_CONF cinder os_region_name "$REGION_NAME"
- if is_ssl_enabled_service glance || is_service_enabled tls-proxy; then
+ if is_service_enabled tls-proxy; then
iniset $NOVA_CONF DEFAULT glance_protocol https
fi
- # Register SSL certificates if provided
- if is_ssl_enabled_service nova; then
- ensure_certificates NOVA
-
- iniset $NOVA_CONF DEFAULT ssl_cert_file "$NOVA_SSL_CERT"
- iniset $NOVA_CONF DEFAULT ssl_key_file "$NOVA_SSL_KEY"
-
- iniset $NOVA_CONF DEFAULT enabled_ssl_apis "$NOVA_ENABLED_APIS"
- fi
-
if is_service_enabled n-sproxy; then
iniset $NOVA_CONF serial_console serialproxy_host "$NOVA_SERVICE_LISTEN_ADDRESS"
iniset $NOVA_CONF serial_console enabled True
@@ -624,12 +609,29 @@
fi
iniset $NOVA_CONF DEFAULT dhcpbridge_flagfile "$NOVA_CONF_DIR/nova-dhcpbridge.conf"
+
+ if [ "$NOVA_USE_SERVICE_TOKEN" == "True" ]; then
+ init_nova_service_user_conf
+ fi
+}
+
+function init_nova_service_user_conf {
+ iniset $NOVA_CONF service_user send_service_user_token True
+ iniset $NOVA_CONF service_user auth_type password
+ iniset $NOVA_CONF service_user auth_url "$KEYSTONE_SERVICE_URI"
+ iniset $NOVA_CONF service_user username nova
+ iniset $NOVA_CONF service_user password "$SERVICE_PASSWORD"
+ iniset $NOVA_CONF service_user user_domain_name "$SERVICE_DOMAIN_NAME"
+ iniset $NOVA_CONF service_user project_name "$SERVICE_PROJECT_NAME"
+ iniset $NOVA_CONF service_user project_domain_name "$SERVICE_DOMAIN_NAME"
+ iniset $NOVA_CONF service_user auth_strategy keystone
}
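For reference, the resulting ``nova.conf`` section would look roughly like
the following (all values illustrative)::

    [service_user]
    send_service_user_token = True
    auth_type = password
    auth_url = http://10.0.0.10/identity
    username = nova
    password = secret
    user_domain_name = Default
    project_name = service
    project_domain_name = Default
    auth_strategy = keystone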
function init_nova_cells {
if is_service_enabled n-cell; then
cp $NOVA_CONF $NOVA_CELLS_CONF
iniset $NOVA_CELLS_CONF database connection `database_connection_url $NOVA_CELLS_DB`
+ rpc_backend_add_vhost child_cell
iniset_rpc_backend nova $NOVA_CELLS_CONF DEFAULT child_cell
iniset $NOVA_CELLS_CONF DEFAULT dhcpbridge_flagfile $NOVA_CELLS_CONF
iniset $NOVA_CELLS_CONF cells enable True
@@ -649,6 +651,10 @@
$NOVA_BIN_DIR/nova-manage --config-file $NOVA_CELLS_CONF db sync
$NOVA_BIN_DIR/nova-manage --config-file $NOVA_CELLS_CONF cell create --name=region --cell_type=parent --username=$RABBIT_USERID --hostname=$RABBIT_HOST --port=5672 --password=$RABBIT_PASSWORD --virtual_host=/ --woffset=0 --wscale=1
$NOVA_BIN_DIR/nova-manage cell create --name=child --cell_type=child --username=$RABBIT_USERID --hostname=$RABBIT_HOST --port=5672 --password=$RABBIT_PASSWORD --virtual_host=child_cell --woffset=0 --wscale=1
+
+ # Creates the single cells v2 cell for the child cell (v1) nova db.
+ nova-manage --config-file $NOVA_CELLS_CONF cell_v2 create_cell \
+ --transport-url $(get_transport_url child_cell) --name 'cell1'
fi
}
@@ -668,6 +674,7 @@
if [ -n "$FLAT_INTERFACE" ]; then
iniset $NOVA_CONF DEFAULT flat_interface "$FLAT_INTERFACE"
fi
+ iniset $NOVA_CONF DEFAULT use_neutron False
}
# create_nova_keys_dir() - Part of the init_nova() process
@@ -681,27 +688,33 @@
# All nova components talk to a central database.
# Only do this step once on the API node for an entire cluster.
if is_service_enabled $DATABASE_BACKENDS && is_service_enabled n-api; then
+ recreate_database $NOVA_API_DB
+ $NOVA_BIN_DIR/nova-manage --config-file $NOVA_CONF api_db sync
+
# (Re)create nova databases
recreate_database nova
- if [ "$NOVA_CONFIGURE_CELLSV2" != "False" ]; then
- recreate_database nova_api_cell0
- fi
+ recreate_database nova_cell0
- # Migrate nova database. If "nova-manage cell_v2 simple_cell_setup" has
- # been run this migrates the "nova" and "nova_api_cell0" database.
- # Otherwise it just migrates the "nova" database.
+ # map_cell0 will create the cell mapping record in the nova_api DB so
+ # this needs to come after the api_db sync happens. We also want to run
+ # this before the db sync below since that will migrate both the nova
+ # and nova_cell0 databases.
+ nova-manage cell_v2 map_cell0 --database_connection `database_connection_url nova_cell0`
+
+ # Migrate nova and nova_cell0 databases.
$NOVA_BIN_DIR/nova-manage --config-file $NOVA_CONF db sync
if is_service_enabled n-cell; then
recreate_database $NOVA_CELLS_DB
fi
- recreate_database $NOVA_API_DB
- $NOVA_BIN_DIR/nova-manage --config-file $NOVA_CONF api_db sync
-
# Run online migrations on the new databases
# Needed for flavor conversion
$NOVA_BIN_DIR/nova-manage --config-file $NOVA_CONF db online_data_migrations
+
+ # create the cell1 cell for the main nova db where the hosts live
+ nova-manage cell_v2 create_cell --transport-url $(get_transport_url) \
+ --name 'cell1'
fi
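Once this has run, the cell mappings can be sanity-checked (a manual step,
not part of the patch); both ``cell0`` and ``cell1`` should be listed::

    nova-manage cell_v2 list_cells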
create_nova_cache_dir
@@ -764,9 +777,6 @@
if [ "$NOVA_USE_MOD_WSGI" == "True" ]; then
install_apache_wsgi
- if is_ssl_enabled_service "nova-api"; then
- enable_mod_ssl
- fi
fi
}
@@ -829,7 +839,7 @@
run_process n-cpu "$NOVA_BIN_DIR/nova-compute --config-file $compute_cell_conf" $LIBVIRT_GROUP
elif [[ "$VIRT_DRIVER" = 'lxd' ]]; then
run_process n-cpu "$NOVA_BIN_DIR/nova-compute --config-file $compute_cell_conf" $LXD_GROUP
- elif [[ "$VIRT_DRIVER" = 'docker' ]]; then
+ elif [[ "$VIRT_DRIVER" = 'docker' || "$VIRT_DRIVER" = 'zun' ]]; then
run_process n-cpu "$NOVA_BIN_DIR/nova-compute --config-file $compute_cell_conf" $DOCKER_GROUP
elif [[ "$VIRT_DRIVER" = 'fake' ]]; then
local i
@@ -871,7 +881,9 @@
run_process n-crt "$NOVA_BIN_DIR/nova-cert --config-file $api_cell_conf"
if is_service_enabled n-net; then
- enable_kernel_bridge_firewall
+ if ! running_in_container; then
+ enable_kernel_bridge_firewall
+ fi
fi
run_process n-net "$NOVA_BIN_DIR/nova-network --config-file $compute_cell_conf"
@@ -950,15 +962,6 @@
fi
}
-# create_cell(): Group the available hosts into a cell
-function create_cell {
- if ! is_service_enabled n-cell; then
- nova-manage cell_v2 simple_cell_setup --transport-url $(get_transport_url)
- else
- echo 'Skipping cellsv2 setup for this cellsv1 configuration'
- fi
-}
-
# Restore xtrace
$_XTRACE_LIB_NOVA
diff --git a/lib/nova_plugins/functions-libvirt b/lib/nova_plugins/functions-libvirt
index 5e7695a..47605af 100644
--- a/lib/nova_plugins/functions-libvirt
+++ b/lib/nova_plugins/functions-libvirt
@@ -20,33 +20,76 @@
# extremely verbose.)
DEBUG_LIBVIRT=$(trueorfalse True DEBUG_LIBVIRT)
+# Try to enable coredumps for libvirt
+# Currently fairly specific to OpenStackCI hosts
+DEBUG_LIBVIRT_COREDUMPS=$(trueorfalse False DEBUG_LIBVIRT_COREDUMPS)
+
+# Only Xenial is left with libvirt-bin. Everywhere else is libvirtd
+if is_ubuntu && [ ! -f /etc/init.d/libvirtd ]; then
+ LIBVIRT_DAEMON=libvirt-bin
+else
+ LIBVIRT_DAEMON=libvirtd
+fi
+
+# Enable coredumps for libvirt
+# Bug: https://bugs.launchpad.net/nova/+bug/1643911
+function _enable_coredump {
+ local confdir=/etc/systemd/system/${LIBVIRT_DAEMON}.service.d
+ local conffile=${confdir}/coredump.conf
+
+ # Create a coredump directory, and instruct the kernel to save
+ # cores there
+ sudo mkdir -p /var/core
+ sudo chmod a+wrx /var/core
+ echo '/var/core/core.%e.%p.%h.%t' | \
+ sudo tee /proc/sys/kernel/core_pattern
+
+ # Drop a config file to up the core ulimit
+ sudo mkdir -p ${confdir}
+ sudo tee ${conffile} <<EOF
+[Service]
+LimitCORE=infinity
+EOF
+
+ # Tell systemd to reload the unit (service restarts later after
+ # config anyway)
+ sudo systemctl daemon-reload
+}
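A quick way to confirm the override took effect after the reload (a manual
check, not part of the patch)::

    systemctl show ${LIBVIRT_DAEMON}.service | grep LimitCORE
    cat /proc/sys/kernel/core_pattern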
+
+
# Installs required distro-specific libvirt packages.
function install_libvirt {
+
if is_ubuntu; then
install_package qemu-system
- install_package libvirt-bin libvirt-dev
- pip_install_gr libvirt-python
- if [[ "$EBTABLES_RACE_FIX" == "True" ]]; then
- # Work around for bug #1501558. We can remove this once we
- # get to a version of Ubuntu that has new enough libvirt.
- TOP_DIR=$TOP_DIR $TOP_DIR/tools/install_ebtables_workaround.sh
+ if [[ ${DISTRO} == "xenial" ]]; then
+ install_package libvirt-bin libvirt-dev
+ else
+ install_package libvirt-clients libvirt-daemon-system libvirt-dev
fi
+ pip_install_gr libvirt-python
#pip_install_gr <there-si-no-guestfs-in-pypi>
elif is_fedora || is_suse; then
# On "KVM for IBM z Systems", kvm does not have its own package
- if [[ ! ${DISTRO} =~ "kvmibm1" ]]; then
+ if [[ ! ${DISTRO} =~ "kvmibm1" && ! ${DISTRO} =~ "rhel7" ]]; then
install_package kvm
fi
- # there is a dependency issue with kvm (which is really just a
- # wrapper to qemu-system-x86) that leaves some bios files out,
- # so install qemu-kvm (which shouldn't strictly be needed, as
- # everything has been merged into qemu-system-x86) to bring in
- # the right packages. see
- # https://bugzilla.redhat.com/show_bug.cgi?id=1235890
- install_package qemu-kvm
+
+ if [[ ${DISTRO} =~ "rhel7" ]]; then
+ # This should install the latest qemu-kvm build, which is called
+ # qemu-kvm-ev on CentOS 7 (the default OS qemu-kvm package is
+ # usually rather old and is updated by the above)
+ install_package qemu-kvm
+ fi
+
install_package libvirt libvirt-devel
pip_install_gr libvirt-python
fi
+
+ if [[ $DEBUG_LIBVIRT_COREDUMPS == True ]]; then
+ _enable_coredump
+ fi
}
# Configures the installed libvirt system so that is accessible by
@@ -65,14 +108,6 @@
EOF
fi
- # Since the release of Debian Wheezy the libvirt init script is libvirtd
- # and not libvirtd-bin anymore.
- if is_ubuntu && [ ! -f /etc/init.d/libvirtd ]; then
- LIBVIRT_DAEMON=libvirt-bin
- else
- LIBVIRT_DAEMON=libvirtd
- fi
-
if is_fedora || is_suse; then
# Starting with fedora 18 and opensuse-12.3 enable stack-user to
# virsh -c qemu:///system by creating a policy-kit rule for
diff --git a/lib/nova_plugins/hypervisor-ironic b/lib/nova_plugins/hypervisor-ironic
index 7ffd14d..c9544fe 100644
--- a/lib/nova_plugins/hypervisor-ironic
+++ b/lib/nova_plugins/hypervisor-ironic
@@ -42,6 +42,7 @@
iniset $NOVA_CONF DEFAULT compute_driver ironic.IronicDriver
iniset $NOVA_CONF DEFAULT firewall_driver $LIBVIRT_FIREWALL_DRIVER
iniset $NOVA_CONF DEFAULT scheduler_host_manager ironic_host_manager
+ iniset $NOVA_CONF filter_scheduler use_baremetal_filters True
iniset $NOVA_CONF DEFAULT ram_allocation_ratio 1.0
iniset $NOVA_CONF DEFAULT reserved_host_memory_mb 0
# ironic section
diff --git a/lib/nova_plugins/hypervisor-libvirt b/lib/nova_plugins/hypervisor-libvirt
index 167ab6f..f3c8add 100644
--- a/lib/nova_plugins/hypervisor-libvirt
+++ b/lib/nova_plugins/hypervisor-libvirt
@@ -105,6 +105,16 @@
if [[ "$ENABLE_FILE_INJECTION" == "True" ]] ; then
if is_ubuntu; then
install_package python-guestfs
+ # NOTE(andreaf) The Ubuntu kernel can only be read by root, which breaks libguestfs:
+ # https://bugs.launchpad.net/ubuntu/+source/linux/+bug/759725
+ INSTALLED_KERNELS="$(ls /boot/vmlinuz-*)"
+ for kernel in $INSTALLED_KERNELS; do
+ STAT_OVERRIDE="root root 644 ${kernel}"
+ # unstack won't remove the statoverride, so make this idempotent
+ if [[ ! $(dpkg-statoverride --list | grep "$STAT_OVERRIDE") ]]; then
+ sudo dpkg-statoverride --add --update $STAT_OVERRIDE
+ fi
+ done
elif is_fedora || is_suse; then
install_package python-libguestfs
fi
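The statoverride loop above can be verified by hand (a manual check, not
part of the patch); each installed kernel should show a ``root root 644``
entry::

    dpkg-statoverride --list | grep vmlinuz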
diff --git a/lib/nova_plugins/hypervisor-xenserver b/lib/nova_plugins/hypervisor-xenserver
index e5d25da..880b87f 100644
--- a/lib/nova_plugins/hypervisor-xenserver
+++ b/lib/nova_plugins/hypervisor-xenserver
@@ -26,10 +26,6 @@
# Allow ``build_domU.sh`` to specify the flat network bridge via kernel args
FLAT_NETWORK_BRIDGE_DEFAULT=$(sed -e 's/.* flat_network_bridge=\([[:alnum:]]*\).*$/\1/g' /proc/cmdline)
-if is_service_enabled neutron; then
- XEN_INTEGRATION_BRIDGE_DEFAULT=$(sed -e 's/.* xen_integration_bridge=\([[:alnum:]]*\).*$/\1/g' /proc/cmdline)
- XEN_INTEGRATION_BRIDGE=${XEN_INTEGRATION_BRIDGE:-$XEN_INTEGRATION_BRIDGE_DEFAULT}
-fi
VNCSERVER_PROXYCLIENT_ADDRESS=${VNCSERVER_PROXYCLIENT_ADDRESS=169.254.0.1}
@@ -48,6 +44,21 @@
if [ -z "$XENAPI_CONNECTION_URL" ]; then
die $LINENO "XENAPI_CONNECTION_URL is not specified"
fi
+
+ # Check os-xenapi plugin is enabled
+ local plugins="${DEVSTACK_PLUGINS}"
+ local plugin
+ local found=0
+ for plugin in ${plugins//,/ }; do
+ if [[ "$plugin" = "os-xenapi" ]]; then
+ found=1
+ break
+ fi
+ done
+ if [[ $found -ne 1 ]]; then
+ die $LINENO "os-xenapi plugin is not specified. Please enable this plugin in local.conf"
+ fi
+
read_password XENAPI_PASSWORD "ENTER A PASSWORD TO USE FOR XEN."
iniset $NOVA_CONF DEFAULT compute_driver "xenapi.XenAPIDriver"
iniset $NOVA_CONF xenserver connection_url "$XENAPI_CONNECTION_URL"
@@ -64,14 +75,6 @@
local ssh_dom0
ssh_dom0="sudo -u $DOMZERO_USER ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@$dom0_ip"
- # Find where the plugins should go in dom0
- xen_functions=`cat $TOP_DIR/tools/xen/functions`
- PLUGIN_DIR=`$ssh_dom0 "$xen_functions; set -eux; xapi_plugin_location"`
-
- # install nova plugins to dom0
- tar -czf - -C $NOVA_DIR/plugins/xenserver/xenapi/etc/xapi.d/plugins/ ./ |
- $ssh_dom0 "tar -xzf - -C $PLUGIN_DIR && chmod a+x $PLUGIN_DIR/*"
-
# install console logrotate script
tar -czf - -C $NOVA_DIR/tools/xenserver/ rotate_xen_guest_logs.sh |
$ssh_dom0 'tar -xzf - -C /root/ && chmod +x /root/rotate_xen_guest_logs.sh && mkdir -p /var/log/xen/guest'
@@ -89,12 +92,13 @@
echo "create_directory_for_kernels"
echo "install_conntrack_tools"
} | $ssh_dom0
-
}
# install_nova_hypervisor() - Install external components
function install_nova_hypervisor {
- pip_install_gr xenapi
+ # xenapi functionality is now included in the os-xenapi library, which
+ # houses the plugin, so this function is intentionally left blank
+ :
}
# start_nova_hypervisor - Start any required external services
diff --git a/lib/oslo b/lib/oslo
index e34e48a..2895503 100644
--- a/lib/oslo
+++ b/lib/oslo
@@ -23,7 +23,9 @@
# Defaults
# --------
GITDIR["automaton"]=$DEST/automaton
+GITDIR["castellan"]=$DEST/castellan
GITDIR["cliff"]=$DEST/cliff
+GITDIR["cursive"]=$DEST/cursive
GITDIR["debtcollector"]=$DEST/debtcollector
GITDIR["futurist"]=$DEST/futurist
GITDIR["os-client-config"]=$DEST/os-client-config
@@ -48,6 +50,7 @@
GITDIR["oslo.vmware"]=$DEST/oslo.vmware
GITDIR["osprofiler"]=$DEST/osprofiler
GITDIR["pycadf"]=$DEST/pycadf
+GITDIR["python-openstacksdk"]=$DEST/python-openstacksdk
GITDIR["stevedore"]=$DEST/stevedore
GITDIR["taskflow"]=$DEST/taskflow
GITDIR["tooz"]=$DEST/tooz
@@ -70,7 +73,9 @@
# install_oslo() - Collect source and prepare
function install_oslo {
_do_install_oslo_lib "automaton"
+ _do_install_oslo_lib "castellan"
_do_install_oslo_lib "cliff"
+ _do_install_oslo_lib "cursive"
_do_install_oslo_lib "debtcollector"
_do_install_oslo_lib "futurist"
_do_install_oslo_lib "osc-lib"
@@ -95,6 +100,7 @@
_do_install_oslo_lib "oslo.vmware"
_do_install_oslo_lib "osprofiler"
_do_install_oslo_lib "pycadf"
+ _do_install_oslo_lib "python-openstacksdk"
_do_install_oslo_lib "stevedore"
_do_install_oslo_lib "taskflow"
_do_install_oslo_lib "tooz"
diff --git a/lib/placement b/lib/placement
index 165c670..4755a58 100644
--- a/lib/placement
+++ b/lib/placement
@@ -32,7 +32,15 @@
PLACEMENT_CONF_DIR=/etc/nova
PLACEMENT_CONF=$PLACEMENT_CONF_DIR/nova.conf
PLACEMENT_AUTH_STRATEGY=${PLACEMENT_AUTH_STRATEGY:-placement}
-
+# Nova virtual environment
+if [[ ${USE_VENV} = True ]]; then
+ PROJECT_VENV["nova"]=${NOVA_DIR}.venv
+ PLACEMENT_BIN_DIR=${PROJECT_VENV["nova"]}/bin
+else
+ PLACEMENT_BIN_DIR=$(get_python_exec_prefix)
+fi
+PLACEMENT_UWSGI=$PLACEMENT_BIN_DIR/nova-placement-api
+PLACEMENT_UWSGI_CONF=$PLACEMENT_CONF_DIR/placement-uwsgi.ini
# The placement service can optionally use a separate database
# connection. Set PLACEMENT_DB_ENABLED to True to use it.
@@ -40,14 +48,13 @@
# yet merged in nova but is coming soon.
PLACEMENT_DB_ENABLED=$(trueorfalse False PLACEMENT_DB_ENABLED)
-if is_ssl_enabled_service "placement-api" || is_service_enabled tls-proxy; then
+if is_service_enabled tls-proxy; then
PLACEMENT_SERVICE_PROTOCOL="https"
fi
# Public facing bits
PLACEMENT_SERVICE_PROTOCOL=${PLACEMENT_SERVICE_PROTOCOL:-$SERVICE_PROTOCOL}
PLACEMENT_SERVICE_HOST=${PLACEMENT_SERVICE_HOST:-$SERVICE_HOST}
-PLACEMENT_SERVICE_PORT=${PLACEMENT_SERVICE_PORT:-8778}
# Functions
# ---------
@@ -55,7 +62,7 @@
# Test if any placement services are enabled
# is_placement_enabled
function is_placement_enabled {
- [[ ,${ENABLED_SERVICES} =~ ,"placement-" ]] && return 0
+ [[ ,${ENABLED_SERVICES} =~ ,"placement-api" ]] && return 0
return 1
}
@@ -68,18 +75,11 @@
# _config_placement_apache_wsgi() - Set WSGI config files
function _config_placement_apache_wsgi {
local placement_api_apache_conf
- local placement_api_port=$PLACEMENT_SERVICE_PORT
local venv_path=""
local nova_bin_dir=""
nova_bin_dir=$(get_python_exec_prefix)
placement_api_apache_conf=$(apache_site_config_for placement-api)
- # reuse nova's cert if a cert is being used
- if is_ssl_enabled_service "placement-api"; then
- placement_ssl="SSLEngine On"
- placement_certfile="SSLCertificateFile $NOVA_SSL_CERT"
- placement_keyfile="SSLCertificateKeyFile $NOVA_SSL_KEY"
- fi
# reuse nova's venv if there is one as placement code lives
# there
if [[ ${USE_VENV} = True ]]; then
@@ -89,7 +89,6 @@
sudo cp $FILES/apache-placement-api.template $placement_api_apache_conf
sudo sed -e "
- s|%PUBLICPORT%|$placement_api_port|g;
s|%APACHE_NAME%|$APACHE_NAME|g;
s|%PUBLICWSGI%|$nova_bin_dir/nova-placement-api|g;
s|%SSLENGINE%|$placement_ssl|g;
@@ -101,19 +100,14 @@
" -i $placement_api_apache_conf
}
-# configure_placement() - Set config files, create data dirs, etc
-function configure_placement {
- if [ "$PLACEMENT_DB_ENABLED" != False ]; then
- iniset $PLACEMENT_CONF placement_database connection `database_connection_url placement`
- fi
-
+function configure_placement_nova_compute {
iniset $NOVA_CONF placement auth_type "password"
- iniset $NOVA_CONF placement auth_url "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_AUTH_PORT/v3"
+ iniset $NOVA_CONF placement auth_url "$KEYSTONE_SERVICE_URI/v3"
iniset $NOVA_CONF placement username placement
iniset $NOVA_CONF placement password "$SERVICE_PASSWORD"
- iniset $NOVA_CONF placement user_domain_name "Default"
+ iniset $NOVA_CONF placement user_domain_name "$SERVICE_DOMAIN_NAME"
iniset $NOVA_CONF placement project_name "$SERVICE_TENANT_NAME"
- iniset $NOVA_CONF placement project_domain_name "Default"
+ iniset $NOVA_CONF placement project_domain_name "$SERVICE_DOMAIN_NAME"
iniset $NOVA_CONF placement os_region_name "$REGION_NAME"
# TODO(cdent): auth_strategy, which is common to see in these
# blocks is not currently used here. For the time being the
@@ -121,8 +115,19 @@
# established by the nova api. This avoids, for the time, being,
# creating redundant configuration items that are just used for
# testing.
+}
- _config_placement_apache_wsgi
+# configure_placement() - Set config files, create data dirs, etc
+function configure_placement {
+ if [ "$PLACEMENT_DB_ENABLED" != False ]; then
+ iniset $PLACEMENT_CONF placement_database connection `database_connection_url placement`
+ fi
+
+ if [[ "$WSGI_MODE" == "uwsgi" ]]; then
+ write_uwsgi_config "$PLACEMENT_UWSGI_CONF" "$PLACEMENT_UWSGI" "/placement"
+ else
+ _config_placement_apache_wsgi
+ fi
}
# create_placement_accounts() - Set up required placement accounts
@@ -134,8 +139,6 @@
get_or_create_endpoint \
"placement" \
"$REGION_NAME" \
- "$placement_api_url" \
- "$placement_api_url" \
"$placement_api_url"
}
@@ -153,20 +156,17 @@
# install_placement() - Collect source and prepare
function install_placement {
install_apache_wsgi
- if is_ssl_enabled_service "placement-api"; then
- enable_mod_ssl
- fi
}
# start_placement_api() - Start the API processes ahead of other things
function start_placement_api {
- # Get right service port for testing
- local service_port=$PLACEMENT_SERVICE_PORT
- local placement_api_port=$PLACEMENT_SERVICE_PORT
-
- enable_apache_site placement-api
- restart_apache_server
- tail_log placement-api /var/log/$APACHE_NAME/placement-api.log
+ if [[ "$WSGI_MODE" == "uwsgi" ]]; then
+ run_process "placement-api" "$PLACEMENT_BIN_DIR/uwsgi --ini $PLACEMENT_UWSGI_CONF"
+ else
+ enable_apache_site placement-api
+ restart_apache_server
+ tail_log placement-api /var/log/$APACHE_NAME/placement-api.log
+ fi
echo "Waiting for placement-api to start..."
if ! wait_for_service $SERVICE_TIMEOUT $PLACEMENT_SERVICE_PROTOCOL://$PLACEMENT_SERVICE_HOST/placement; then
@@ -180,8 +180,13 @@
# stop_placement() - Disable the api service and stop it.
function stop_placement {
- disable_apache_site placement-api
- restart_apache_server
+ if [[ "$WSGI_MODE" == "uwsgi" ]]; then
+ stop_process "placement-api"
+ remove_uwsgi_config "$PLACEMENT_UWSGI_CONF" "$PLACEMENT_UWSGI"
+ else
+ disable_apache_site placement-api
+ restart_apache_server
+ fi
}
# Restore xtrace
diff --git a/lib/rpc_backend b/lib/rpc_backend
index 97b1aa4..3177e88 100644
--- a/lib/rpc_backend
+++ b/lib/rpc_backend
@@ -25,6 +25,9 @@
set +o xtrace
RABBIT_USERID=${RABBIT_USERID:-stackrabbit}
+if is_service_enabled rabbit; then
+ RABBIT_HOST=${RABBIT_HOST:-$SERVICE_HOST}
+fi
# Functions
# ---------
@@ -94,13 +97,20 @@
break
done
- if is_service_enabled n-cell; then
- # Add partitioned access for the child cell
- if [ -z `sudo rabbitmqctl list_vhosts | grep child_cell` ]; then
- sudo rabbitmqctl add_vhost child_cell
- sudo rabbitmqctl set_permissions -p child_cell $RABBIT_USERID ".*" ".*" ".*"
- fi
+ fi
+}
+
+# adds a vhost to the rpc backend
+function rpc_backend_add_vhost {
+ local vhost="$1"
+ if is_service_enabled rabbit; then
+ if [ -z `sudo rabbitmqctl list_vhosts | grep $vhost` ]; then
+ sudo rabbitmqctl add_vhost $vhost
+ sudo rabbitmqctl set_permissions -p $vhost $RABBIT_USERID ".*" ".*" ".*"
fi
+ else
+ echo 'RPC backend does not support vhosts'
+ return 1
fi
}
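A caller such as ``init_nova_cells`` above uses it as::

    rpc_backend_add_vhost child_cell
    # the vhost should now appear in:
    sudo rabbitmqctl list_vhosts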
@@ -112,6 +122,15 @@
fi
}
+# Repeat the definition, in case get_transport_url is overridden for RPC purposes.
+# get_notification_url can then be used to talk to rabbit for notifications.
+function get_notification_url {
+ local virtual_host=$1
+ if is_service_enabled rabbit || { [ -n "$RABBIT_HOST" ] && [ -n "$RABBIT_PASSWORD" ]; }; then
+ echo "rabbit://$RABBIT_USERID:$RABBIT_PASSWORD@$RABBIT_HOST:5672/$virtual_host"
+ fi
+}
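With the default DevStack rabbit settings this yields a URL of the form
(host and password illustrative)::

    rabbit://stackrabbit:secret@10.0.0.10:5672/child_cell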
+
# iniset configuration
function iniset_rpc_backend {
local package=$1
diff --git a/lib/swift b/lib/swift
index f9ea028..8fad6b8 100644
--- a/lib/swift
+++ b/lib/swift
@@ -31,13 +31,22 @@
# Defaults
# --------
-if is_ssl_enabled_service "s-proxy" || is_service_enabled tls-proxy; then
+if is_service_enabled tls-proxy; then
SWIFT_SERVICE_PROTOCOL="https"
fi
# Set up default directories
GITDIR["python-swiftclient"]=$DEST/python-swiftclient
+# Swift virtual environment
+if [[ ${USE_VENV} = True ]]; then
+ PROJECT_VENV["swift"]=${SWIFT_DIR}.venv
+ SWIFT_BIN_DIR=${PROJECT_VENV["swift"]}/bin
+else
+ SWIFT_BIN_DIR=$(get_python_exec_prefix)
+fi
+
+
SWIFT_DIR=$DEST/swift
SWIFT_AUTH_CACHE_DIR=${SWIFT_AUTH_CACHE_DIR:-/var/cache/swift}
SWIFT_APACHE_WSGI_DIR=${SWIFT_APACHE_WSGI_DIR:-/var/www/swift}
@@ -119,6 +128,11 @@
SWIFT_REPLICAS=${SWIFT_REPLICAS:-1}
SWIFT_REPLICAS_SEQ=$(seq ${SWIFT_REPLICAS})
+# Set ``SWIFT_START_ALL_SERVICES`` to control whether all Swift
+# services (including the *-auditor, *-replicator, *-reconstructor, etc.
+# daemons) should be started.
+SWIFT_START_ALL_SERVICES=$(trueorfalse True SWIFT_START_ALL_SERVICES)
+
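To run only the foreground servers and skip the background daemons, a
minimal ``local.conf`` sketch::

    [[local|localrc]]
    SWIFT_START_ALL_SERVICES=False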
# Set ``SWIFT_LOG_TOKEN_LENGTH`` to configure how many characters of an auth
# token should be placed in the logs. When keystone is used with PKI tokens,
# the token values can be huge, seemingly larger than 2K, at the least. We
@@ -384,25 +398,21 @@
iniset ${SWIFT_CONFIG_PROXY_SERVER} DEFAULT bind_port ${SWIFT_DEFAULT_BIND_PORT}
fi
- if is_ssl_enabled_service s-proxy; then
- ensure_certificates SWIFT
-
- iniset ${SWIFT_CONFIG_PROXY_SERVER} DEFAULT cert_file "$SWIFT_SSL_CERT"
- iniset ${SWIFT_CONFIG_PROXY_SERVER} DEFAULT key_file "$SWIFT_SSL_KEY"
- fi
-
# DevStack is commonly run in a small slow environment, so bump the timeouts up.
# ``node_timeout`` is the node read operation response time to the proxy server
# ``conn_timeout`` is how long it takes a connect() system call to return
iniset ${SWIFT_CONFIG_PROXY_SERVER} app:proxy-server node_timeout 120
iniset ${SWIFT_CONFIG_PROXY_SERVER} app:proxy-server conn_timeout 20
+ # Versioned Writes
+ iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:versioned_writes allow_versioned_writes true
+
# Configure Ceilometer
if is_service_enabled ceilometer; then
iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer "set log_level" "WARN"
iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer paste.filter_factory "ceilometermiddleware.swift:filter_factory"
iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer control_exchange "swift"
- iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer url $(get_transport_url)
+ iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer url $(get_notification_url)
iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer driver "messaging"
iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer topic "notifications"
SWIFT_EXTRAS_MIDDLEWARE_LAST="${SWIFT_EXTRAS_MIDDLEWARE_LAST} ceilometer"
@@ -451,7 +461,6 @@
# out. Make sure we uncomment Tempauth after we uncomment Keystoneauth
# otherwise, this code also sets the reseller_prefix for Keystoneauth.
iniuncomment ${SWIFT_CONFIG_PROXY_SERVER} filter:tempauth account_autocreate
- iniuncomment ${SWIFT_CONFIG_PROXY_SERVER} filter:tempauth reseller_prefix
iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:tempauth reseller_prefix "TEMPAUTH"
if is_service_enabled swift3; then
@@ -489,8 +498,6 @@
generate_swift_config_services ${swift_node_config} ${node_number} $(( CONTAINER_PORT_BASE + 10 * (node_number - 1) )) container
iniuncomment ${swift_node_config} DEFAULT bind_ip
iniset ${swift_node_config} DEFAULT bind_ip ${SWIFT_SERVICE_LISTEN_ADDRESS}
- iniuncomment ${swift_node_config} app:container-server allow_versions
- iniset ${swift_node_config} app:container-server allow_versions "true"
swift_node_config=${SWIFT_CONF_DIR}/account-server/${node_number}.conf
cp ${SWIFT_DIR}/etc/account-server.conf-sample ${swift_node_config}
@@ -523,11 +530,16 @@
local auth_vers
auth_vers=$(iniget ${testfile} func_test auth_version)
iniset ${testfile} func_test auth_host ${KEYSTONE_SERVICE_HOST}
- iniset ${testfile} func_test auth_port ${KEYSTONE_AUTH_PORT}
- if [[ $auth_vers == "3" ]]; then
- iniset ${testfile} func_test auth_prefix /v3/
+ if [[ "$KEYSTONE_AUTH_PROTOCOL" == "https" ]]; then
+ iniset ${testfile} func_test auth_port 443
else
- iniset ${testfile} func_test auth_prefix /v2.0/
+ iniset ${testfile} func_test auth_port 80
+ fi
+ iniset ${testfile} func_test auth_uri ${KEYSTONE_AUTH_URI}
+ if [[ "$auth_vers" == "3" ]]; then
+ iniset ${testfile} func_test auth_prefix /identity/v3/
+ else
+ iniset ${testfile} func_test auth_prefix /identity/v2.0/
fi
fi
@@ -542,6 +554,7 @@
if [[ $SYSLOG != "False" ]]; then
sed "s,%SWIFT_LOGDIR%,${swift_log_dir}," $FILES/swift/rsyslog.conf | sudo \
tee /etc/rsyslog.d/10-swift.conf
+ echo "MaxMessageSize 6k" | sudo tee /etc/rsyslog.d/99-maxsize.conf
# restart syslog to take the changes
sudo killall -HUP rsyslogd
fi
@@ -636,8 +649,7 @@
"object-store" \
"$REGION_NAME" \
"$SWIFT_SERVICE_PROTOCOL://$SERVICE_HOST:$SWIFT_DEFAULT_BIND_PORT/v1/AUTH_\$(project_id)s" \
- "$SWIFT_SERVICE_PROTOCOL://$SERVICE_HOST:$SWIFT_DEFAULT_BIND_PORT" \
- "$SWIFT_SERVICE_PROTOCOL://$SERVICE_HOST:$SWIFT_DEFAULT_BIND_PORT/v1/AUTH_\$(project_id)s"
+ "$SWIFT_SERVICE_PROTOCOL://$SERVICE_HOST:$SWIFT_DEFAULT_BIND_PORT"
local swift_project_test1
swift_project_test1=$(get_or_create_project swiftprojecttest1 default)
@@ -778,8 +790,11 @@
fi
if [ "$SWIFT_USE_MOD_WSGI" == "True" ]; then
+ # Apache should serve the "PACO" a.k.a "main" services
restart_apache_server
+ # The rest of the services should be started in the background
swift-init --run-dir=${SWIFT_DATA_DIR}/run rest start
+ # But we still want the logs of the Swift proxy in our screen session
tail_log s-proxy /var/log/$APACHE_NAME/proxy-server
if [[ ${SWIFT_REPLICAS} == 1 ]]; then
for type in object container account; do
@@ -789,31 +804,42 @@
return 0
fi
- # By default with only one replica we are launching the proxy,
- # container, account and object server in screen in foreground and
- # other services in background. If we have ``SWIFT_REPLICAS`` set to something
- # greater than one we first spawn all the Swift services then kill the proxy
- # service so we can run it in foreground in screen. ``swift-init ...
- # {stop|restart}`` exits with '1' if no servers are running, ignore it just
- # in case
- local todo type
- swift-init --run-dir=${SWIFT_DATA_DIR}/run all restart || true
+
+ # By default with only one replica we launch the proxy, container,
+ # account and object servers in screen in the foreground. Then, the
+ # rest of the services are optionally started.
+ #
+ # If we have ``SWIFT_REPLICAS`` set to something greater than one
+ # we first spawn *all* the Swift services then kill the proxy service
+ # so we can run it in foreground in screen.
+ #
+ # ``swift-init ... {stop|restart}`` exits with '1' if no servers are
+ # running, ignore it just in case
if [[ ${SWIFT_REPLICAS} == 1 ]]; then
- todo="object container account"
+ local foreground_services type
+
+ foreground_services="object container account"
+ for type in ${foreground_services}; do
+ run_process s-${type} "$SWIFT_BIN_DIR/swift-${type}-server ${SWIFT_CONF_DIR}/${type}-server/1.conf -v"
+ done
+
+ if [[ "$SWIFT_START_ALL_SERVICES" == "True" ]]; then
+ swift-init --run-dir=${SWIFT_DATA_DIR}/run rest start
+ else
+ # The container-sync daemon is strictly needed to pass the container
+ # sync Tempest tests.
+ swift-init --run-dir=${SWIFT_DATA_DIR}/run container-sync start
+ fi
+ else
+ swift-init --run-dir=${SWIFT_DATA_DIR}/run all restart || true
+ swift-init --run-dir=${SWIFT_DATA_DIR}/run proxy stop || true
fi
- for type in proxy ${todo}; do
- swift-init --run-dir=${SWIFT_DATA_DIR}/run ${type} stop || true
- done
+
if is_service_enabled tls-proxy; then
local proxy_port=${SWIFT_DEFAULT_BIND_PORT}
start_tls_proxy swift '*' $proxy_port $SERVICE_HOST $SWIFT_DEFAULT_BIND_PORT_INT
fi
- run_process s-proxy "$SWIFT_DIR/bin/swift-proxy-server ${SWIFT_CONF_DIR}/proxy-server.conf -v"
- if [[ ${SWIFT_REPLICAS} == 1 ]]; then
- for type in object container account; do
- run_process s-${type} "$SWIFT_DIR/bin/swift-${type}-server ${SWIFT_CONF_DIR}/${type}-server/1.conf -v"
- done
- fi
+ run_process s-proxy "$SWIFT_BIN_DIR/swift-proxy-server ${SWIFT_CONF_DIR}/proxy-server.conf -v"
if [[ "$SWIFT_ENABLE_TEMPURLS" == "True" ]]; then
swift_configure_tempurls
diff --git a/lib/tempest b/lib/tempest
index f43036e..f19686a 100644
--- a/lib/tempest
+++ b/lib/tempest
@@ -17,7 +17,7 @@
# - ``PUBLIC_NETWORK_NAME``
# - ``VIRT_DRIVER``
# - ``LIBVIRT_TYPE``
-# - ``KEYSTONE_SERVICE_PROTOCOL``, ``KEYSTONE_SERVICE_HOST`` from lib/keystone
+# - ``KEYSTONE_SERVICE_URI``, ``KEYSTONE_SERVICE_URI_V3`` from lib/keystone
#
# Optional Dependencies:
#
@@ -48,10 +48,6 @@
TEMPEST_CONFIG=$TEMPEST_CONFIG_DIR/tempest.conf
TEMPEST_STATE_PATH=${TEMPEST_STATE_PATH:=$DATA_DIR/tempest}
-NOVA_SOURCE_DIR=$DEST/nova
-
-BUILD_INTERVAL=1
-
# This is the timeout that tempest will wait for a VM to change state,
# spawn, delete, etc.
# The default is set to 196 seconds.
@@ -193,11 +189,11 @@
available_flavors=$(nova flavor-list)
if [[ -z "$DEFAULT_INSTANCE_TYPE" ]]; then
if [[ ! ( $available_flavors =~ 'm1.nano' ) ]]; then
- nova flavor-create m1.nano 42 64 0 1
+ openstack flavor create --id 42 --ram 64 --disk 0 --vcpus 1 m1.nano
fi
flavor_ref=42
if [[ ! ( $available_flavors =~ 'm1.micro' ) ]]; then
- nova flavor-create m1.micro 84 128 0 1
+ openstack flavor create --id 84 --ram 128 --disk 0 --vcpus 1 m1.micro
fi
flavor_ref_alt=84
else
@@ -227,7 +223,7 @@
# Ensure ``flavor_ref`` and ``flavor_ref_alt`` have different values.
# Some resize instance in tempest tests depends on this.
for f in ${flavors[@]:1}; do
- if [[ $f -ne $flavor_ref ]]; then
+ if [[ "$f" != "$flavor_ref" ]]; then
flavor_ref_alt=$f
break
fi
@@ -241,9 +237,10 @@
# the public network (for floating ip access) is only available
# if the extension is enabled.
- if is_networking_extension_supported 'external-net'; then
- public_network_id=$(openstack network list | grep $PUBLIC_NETWORK_NAME | \
- awk '{print $2}')
+ # If NEUTRON_CREATE_INITIAL_NETWORKS is not True, no networks are created
+ # and public_network_id should not be set.
+ if [[ "$NEUTRON_CREATE_INITIAL_NETWORKS" == "True" ]] && is_networking_extension_supported 'external-net'; then
+ public_network_id=$(openstack network show -f value -c id $PUBLIC_NETWORK_NAME)
fi
iniset $TEMPEST_CONFIG DEFAULT use_syslog $SYSLOG
@@ -260,8 +257,11 @@
iniset $TEMPEST_CONFIG volume build_timeout $BUILD_TIMEOUT
# Identity
- iniset $TEMPEST_CONFIG identity uri "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:5000/v2.0/"
+ iniset $TEMPEST_CONFIG identity uri "$KEYSTONE_SERVICE_URI/v2.0/"
iniset $TEMPEST_CONFIG identity uri_v3 "$KEYSTONE_SERVICE_URI_V3"
+ iniset $TEMPEST_CONFIG identity user_lockout_failure_attempts $KEYSTONE_LOCKOUT_FAILURE_ATTEMPTS
+ iniset $TEMPEST_CONFIG identity user_lockout_duration $KEYSTONE_LOCKOUT_DURATION
+ iniset $TEMPEST_CONFIG identity user_unique_last_password_count $KEYSTONE_UNIQUE_LAST_PASSWORD_COUNT
# Use domain scoped tokens for admin v3 tests, v3 dynamic credentials of v3 account generation
iniset $TEMPEST_CONFIG identity admin_domain_scope True
if [[ "$TEMPEST_HAS_ADMIN" == "True" ]]; then
@@ -270,22 +270,27 @@
iniset $TEMPEST_CONFIG auth admin_project_name $admin_project_name
iniset $TEMPEST_CONFIG auth admin_domain_name $admin_domain_name
fi
- if [ "$ENABLE_IDENTITY_V2" == "False" ]; then
- # Only Identity v3 is available; then skip Identity API v2 tests
- iniset $TEMPEST_CONFIG identity-feature-enabled api_v2 False
- # In addition, use v3 auth tokens for running all Tempest tests
- iniset $TEMPEST_CONFIG identity auth_version v3
+ if [ "$ENABLE_IDENTITY_V2" == "True" ]; then
+ # Run Identity API v2 tests ONLY if needed
+ iniset $TEMPEST_CONFIG identity-feature-enabled api_v2 True
else
- iniset $TEMPEST_CONFIG identity auth_version ${TEMPEST_AUTH_VERSION:-v2}
+ # Skip Identity API v2 tests by default
+ iniset $TEMPEST_CONFIG identity-feature-enabled api_v2 False
fi
+ iniset $TEMPEST_CONFIG identity auth_version ${TEMPEST_AUTH_VERSION:-v3}
- if is_ssl_enabled_service "key" || is_service_enabled tls-proxy; then
+ if is_service_enabled tls-proxy; then
iniset $TEMPEST_CONFIG identity ca_certificates_file $SSL_BUNDLE_FILE
fi
# Identity Features
- # TODO(rodrigods): Remove the reseller flag when Kilo and Liberty are end of life.
- iniset $TEMPEST_CONFIG identity-feature-enabled reseller True
+ if [[ "$KEYSTONE_SECURITY_COMPLIANCE_ENABLED" = True ]]; then
+ iniset $TEMPEST_CONFIG identity-feature-enabled security_compliance True
+ fi
+
+ # TODO(rodrigods): This is a feature flag for bug 1590578 which is fixed in
+ # Newton and Ocata. This option can be removed after Mitaka is end of life.
+ iniset $TEMPEST_CONFIG identity-feature-enabled forbid_global_implied_dsr True
# Image
# We want to be able to override this variable in the gate to avoid
@@ -295,7 +300,6 @@
fi
if [ "$VIRT_DRIVER" = "xenserver" ]; then
iniset $TEMPEST_CONFIG image disk_formats "ami,ari,aki,vhd,raw,iso"
- iniset $TEMPEST_CONFIG scenario img_disk_format vhd
fi
# Image Features
@@ -347,13 +351,12 @@
iniset $TEMPEST_CONFIG compute max_microversion $tempest_compute_max_microversion
fi
- # TODO(mriedem): Remove allow_port_security_disabled after liberty-eol.
- iniset $TEMPEST_CONFIG compute-feature-enabled allow_port_security_disabled True
iniset $TEMPEST_CONFIG compute-feature-enabled personality ${ENABLE_FILE_INJECTION:-False}
iniset $TEMPEST_CONFIG compute-feature-enabled resize True
iniset $TEMPEST_CONFIG compute-feature-enabled live_migration ${LIVE_MIGRATION_AVAILABLE:-False}
iniset $TEMPEST_CONFIG compute-feature-enabled change_password False
iniset $TEMPEST_CONFIG compute-feature-enabled block_migration_for_live_migration ${USE_BLOCK_MIGRATION_FOR_LIVE_MIGRATION:-False}
+ iniset $TEMPEST_CONFIG compute-feature-enabled live_migrate_back_and_forth ${LIVE_MIGRATE_BACK_AND_FORTH:-False}
iniset $TEMPEST_CONFIG compute-feature-enabled attach_encrypted_volume ${ATTACH_ENCRYPTED_VOLUME_AVAILABLE:-True}
if is_service_enabled n-cell; then
# Cells doesn't support shelving/unshelving
@@ -373,8 +376,11 @@
fi
fi
+ if is_service_enabled n-novnc; then
+ iniset $TEMPEST_CONFIG compute-feature-enabled vnc_console True
+ fi
+
# Network
- iniset $TEMPEST_CONFIG network api_version 2.0
iniset $TEMPEST_CONFIG network project_networks_reachable false
iniset $TEMPEST_CONFIG network public_network_id "$public_network_id"
iniset $TEMPEST_CONFIG network public_router_id "$public_router_id"
@@ -385,11 +391,6 @@
# Orchestration Tests
if is_service_enabled heat; then
- # Though this is not needed by heat, some tempest tests explicitly
- # try to set this role. Removing them from the tempest tests breaks
- # some non-devstack CIs.
- get_or_create_role "heat_stack_owner"
-
if [[ ! -z "$HEAT_CFN_IMAGE_URL" ]]; then
iniset $TEMPEST_CONFIG orchestration image_ref $(basename "${HEAT_CFN_IMAGE_URL%.*}")
fi
@@ -398,36 +399,53 @@
# build a specialized heat flavor
available_flavors=$(nova flavor-list)
if [[ ! ( $available_flavors =~ 'm1.heat' ) ]]; then
- nova flavor-create m1.heat 451 512 0 1
+ openstack flavor create --id 451 --ram 512 --disk 0 --vcpus 1 m1.heat
fi
iniset $TEMPEST_CONFIG orchestration instance_type "m1.heat"
fi
iniset $TEMPEST_CONFIG orchestration build_timeout 900
- iniset $TEMPEST_CONFIG orchestration stack_owner_role "heat_stack_owner"
+ iniset $TEMPEST_CONFIG orchestration stack_owner_role Member
fi
# Scenario
- SCENARIO_IMAGE_DIR=${SCENARIO_IMAGE_DIR:-$FILES/images/cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-uec}
+ if [ "$VIRT_DRIVER" = "xenserver" ]; then
+ SCENARIO_IMAGE_DIR=${SCENARIO_IMAGE_DIR:-$FILES}
+ SCENARIO_IMAGE_FILE="cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-disk.vhd.tgz"
+ iniset $TEMPEST_CONFIG scenario img_disk_format vhd
+ iniset $TEMPEST_CONFIG scenario img_container_format ovf
+ else
+ SCENARIO_IMAGE_DIR=${SCENARIO_IMAGE_DIR:-$FILES}
+ SCENARIO_IMAGE_FILE=$DEFAULT_IMAGE_NAME
+ fi
iniset $TEMPEST_CONFIG scenario img_dir $SCENARIO_IMAGE_DIR
- iniset $TEMPEST_CONFIG scenario ami_img_file "cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-blank.img"
- iniset $TEMPEST_CONFIG scenario ari_img_file "cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-initrd"
- iniset $TEMPEST_CONFIG scenario aki_img_file "cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-vmlinuz"
- iniset $TEMPEST_CONFIG scenario img_file "cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-disk.img"
+ iniset $TEMPEST_CONFIG scenario img_file $SCENARIO_IMAGE_FILE
+ # If using provider networking, use the physical network for validation rather than private
+ TEMPEST_SSH_NETWORK_NAME=$PRIVATE_NETWORK_NAME
+ if is_provider_network; then
+ TEMPEST_SSH_NETWORK_NAME=$PHYSICAL_NETWORK
+ fi
# Validation
iniset $TEMPEST_CONFIG validation run_validation ${TEMPEST_RUN_VALIDATION:-False}
iniset $TEMPEST_CONFIG validation ip_version_for_ssh 4
iniset $TEMPEST_CONFIG validation ssh_timeout $BUILD_TIMEOUT
iniset $TEMPEST_CONFIG validation image_ssh_user ${DEFAULT_INSTANCE_USER:-cirros}
- iniset $TEMPEST_CONFIG validation network_for_ssh $PRIVATE_NETWORK_NAME
+ iniset $TEMPEST_CONFIG validation network_for_ssh $TEMPEST_SSH_NETWORK_NAME
# Volume
- # TODO(obutenko): Remove snapshot_backup when liberty-eol happens.
- iniset $TEMPEST_CONFIG volume-feature-enabled snapshot_backup True
- # TODO(ynesenenko): Remove the volume_services flag when Liberty and Kilo will correct work with host info.
- iniset $TEMPEST_CONFIG volume-feature-enabled volume_services True
+ # Only turn on TEMPEST_VOLUME_MANAGE_SNAPSHOT by default for "lvm" backends
+ if [[ "$CINDER_ENABLED_BACKENDS" == *"lvm"* ]]; then
+ TEMPEST_VOLUME_MANAGE_SNAPSHOT=${TEMPEST_VOLUME_MANAGE_SNAPSHOT:-True}
+ fi
+ iniset $TEMPEST_CONFIG volume-feature-enabled manage_snapshot $(trueorfalse False TEMPEST_VOLUME_MANAGE_SNAPSHOT)
+ # Only turn on TEMPEST_VOLUME_MANAGE_VOLUME by default for "lvm" backends
+ if [[ "$CINDER_ENABLED_BACKENDS" == *"lvm"* ]]; then
+ TEMPEST_VOLUME_MANAGE_VOLUME=${TEMPEST_VOLUME_MANAGE_VOLUME:-True}
+ fi
+ iniset $TEMPEST_CONFIG volume-feature-enabled manage_volume $(trueorfalse False TEMPEST_VOLUME_MANAGE_VOLUME)
# TODO(ameade): Remove the api_v3 flag when Mitaka and Liberty are end of life.
iniset $TEMPEST_CONFIG volume-feature-enabled api_v3 True
+ iniset $TEMPEST_CONFIG volume-feature-enabled api_v1 $(trueorfalse False TEMPEST_VOLUME_API_V1)
local tempest_volume_min_microversion=${TEMPEST_VOLUME_MIN_MICROVERSION:-None}
local tempest_volume_max_microversion=${TEMPEST_VOLUME_MAX_MICROVERSION:-"latest"}
if [ "$tempest_volume_min_microversion" == "None" ]; then
@@ -478,20 +496,8 @@
iniset $TEMPEST_CONFIG volume storage_protocol "$TEMPEST_STORAGE_PROTOCOL"
fi
- # Dashboard
- iniset $TEMPEST_CONFIG dashboard dashboard_url "http://$SERVICE_HOST/"
-
- # CLI
- iniset $TEMPEST_CONFIG cli cli_dir $NOVA_BIN_DIR
-
# Baremetal
if [ "$VIRT_DRIVER" = "ironic" ] ; then
- iniset $TEMPEST_CONFIG baremetal driver_enabled True
- iniset $TEMPEST_CONFIG baremetal unprovision_timeout $BUILD_TIMEOUT
- iniset $TEMPEST_CONFIG baremetal active_timeout $BUILD_TIMEOUT
- iniset $TEMPEST_CONFIG baremetal deploywait_timeout $BUILD_TIMEOUT
- iniset $TEMPEST_CONFIG baremetal deploy_img_dir $FILES
- iniset $TEMPEST_CONFIG baremetal node_uuid $IRONIC_NODE_UUID
iniset $TEMPEST_CONFIG compute-feature-enabled change_password False
iniset $TEMPEST_CONFIG compute-feature-enabled console_output False
iniset $TEMPEST_CONFIG compute-feature-enabled interface_attach False
@@ -504,13 +510,19 @@
iniset $TEMPEST_CONFIG compute-feature-enabled suspend False
fi
- # Libvirt-LXC
- if [ "$VIRT_DRIVER" = "libvirt" ] && [ "$LIBVIRT_TYPE" = "lxc" ]; then
- iniset $TEMPEST_CONFIG compute-feature-enabled rescue False
- iniset $TEMPEST_CONFIG compute-feature-enabled resize False
- iniset $TEMPEST_CONFIG compute-feature-enabled shelve False
- iniset $TEMPEST_CONFIG compute-feature-enabled snapshot False
- iniset $TEMPEST_CONFIG compute-feature-enabled suspend False
+ # Libvirt
+ if [ "$VIRT_DRIVER" = "libvirt" ]; then
+ # Libvirt-LXC
+ if [ "$LIBVIRT_TYPE" = "lxc" ]; then
+ iniset $TEMPEST_CONFIG compute-feature-enabled rescue False
+ iniset $TEMPEST_CONFIG compute-feature-enabled resize False
+ iniset $TEMPEST_CONFIG compute-feature-enabled shelve False
+ iniset $TEMPEST_CONFIG compute-feature-enabled snapshot False
+ iniset $TEMPEST_CONFIG compute-feature-enabled suspend False
+ elif ! is_service_enabled n-cell; then
+ # cells v1 does not support swapping volumes
+ iniset $TEMPEST_CONFIG compute-feature-enabled swap_volume True
+ fi
fi
# ``service_available``
@@ -611,7 +623,7 @@
git_clone $TEMPEST_REPO $TEMPEST_DIR $TEMPEST_BRANCH
pip_install tox
pushd $TEMPEST_DIR
- tox --notest -efull
+ tox -r --notest -efull
# NOTE(mtreinish) Respect constraints in the tempest full venv, things that
# are using a tox job other than full will not be respecting constraints but
# running pip install -U on tempest requirements
diff --git a/lib/tls b/lib/tls
index 14cdf19..7a7b104 100644
--- a/lib/tls
+++ b/lib/tls
@@ -343,7 +343,7 @@
# one. If the value for the CA is not rooted in /etc then we know
# we need to change it.
function fix_system_ca_bundle_path {
- if is_service_enabled tls-proxy || [ "$USE_SSL" == "True" ]; then
+ if is_service_enabled tls-proxy; then
local capath
capath=$(python -c $'try:\n from requests import certs\n print certs.where()\nexcept ImportError: pass')
@@ -362,27 +362,14 @@
}
+# Kept only for compatibility; returns whether tls-proxy is enabled.
+# Note: ``return is_service_enabled tls-proxy`` would be a bash error
+# (return takes a numeric status), so the call's exit status is simply
+# propagated as the function's own.
+function is_ssl_enabled_service {
+ is_service_enabled tls-proxy
+}
+
# Certificate Input Configuration
# ===============================
-# check to see if the service(s) specified are to be SSL enabled.
-#
-# Multiple services specified as arguments are ``OR``'ed together; the test
-# is a short-circuit boolean, i.e it returns on the first match.
-#
-# Uses global ``SSL_ENABLED_SERVICES``
-function is_ssl_enabled_service {
- local services=$@
- local service=""
- if [ "$USE_SSL" == "False" ]; then
- return 1
- fi
- for service in ${services}; do
- [[ ,${SSL_ENABLED_SERVICES}, =~ ,${service}, ]] && return 0
- done
- return 1
-}
-
# Ensure that the certificates for a service are in place. This function does
# not check that a service is SSL enabled, this should already have been
# completed.
@@ -442,6 +429,53 @@
# Proxy Functions
# ===============
+function tune_apache_connections {
+ local tuning_file=$APACHE_SETTINGS_DIR/connection-tuning.conf
+ if ! [ -f $tuning_file ] ; then
+ sudo bash -c "cat > $tuning_file" << EOF
+# worker MPM
+# StartServers: initial number of server processes to start
+# MinSpareThreads: minimum number of worker threads which are kept spare
+# MaxSpareThreads: maximum number of worker threads which are kept spare
+# ThreadLimit: ThreadsPerChild can be changed to this maximum value during a
+# graceful restart. ThreadLimit can only be changed by stopping
+# and starting Apache.
+# ThreadsPerChild: constant number of worker threads in each server process
+# MaxClients: maximum number of simultaneous client connections
+# MaxRequestsPerChild: maximum number of requests a server process serves
+#
+# We want to be memory thrifty so tune down apache to allow 256 total
+# connections. This should still be plenty for a dev env yet lighter than
+# apache defaults.
+<IfModule mpm_worker_module>
+# Note that the next three conf values must be changed together.
+# MaxClients = ServerLimit * ThreadsPerChild
+ServerLimit 8
+ThreadsPerChild 32
+MaxClients 256
+StartServers 2
+MinSpareThreads 32
+MaxSpareThreads 96
+ThreadLimit 64
+MaxRequestsPerChild 0
+</IfModule>
+<IfModule mpm_event_module>
+# Note that the next three conf values must be changed together.
+# MaxClients = ServerLimit * ThreadsPerChild
+ServerLimit 8
+ThreadsPerChild 32
+MaxClients 256
+StartServers 2
+MinSpareThreads 32
+MaxSpareThreads 96
+ThreadLimit 64
+MaxRequestsPerChild 0
+</IfModule>
+EOF
+ restart_apache_server
+ fi
+}
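Note the arithmetic in both MPM blocks is self-consistent: MaxClients (256)
= ServerLimit (8) * ThreadsPerChild (32). To see which MPM block actually
applies on a given host (a manual check, not part of the patch)::

    apachectl -V | grep -i mpm    # apache2ctl on Ubuntu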
+
# Starts the TLS proxy for the given IP/ports
# start_tls_proxy front-host front-port back-host back-port
function start_tls_proxy {
@@ -451,6 +485,8 @@
local b_host=$4
local b_port=$5
+ tune_apache_connections
+
local config_file
config_file=$(apache_site_config_for $b_service)
local listen_string
@@ -471,8 +507,12 @@
SSLEngine On
SSLCertificateFile $DEVSTACK_CERT
+ # Disable KeepAlive to fix bug #1630664 a.k.a the
+ # ('Connection aborted.', BadStatusLine("''",)) error
+ KeepAlive Off
+
<Location />
- ProxyPass http://$b_host:$b_port/ retry=5 nocanon
+ ProxyPass http://$b_host:$b_port/ retry=0 nocanon
ProxyPassReverse http://$b_host:$b_port/
</Location>
ErrorLog $APACHE_LOG_DIR/tls-proxy_error.log
diff --git a/openrc b/openrc
index 8d8ae8b..4cdb50e 100644
--- a/openrc
+++ b/openrc
@@ -53,10 +53,6 @@
# or NOVA_PASSWORD.
export OS_PASSWORD=${ADMIN_PASSWORD:-secret}
-# Don't put the key into a keyring by default. Testing for development is much
-# easier with this off.
-export OS_NO_CACHE=${OS_NO_CACHE:-1}
-
# Region
export OS_REGION_NAME=${REGION_NAME:-RegionOne}
@@ -77,18 +73,16 @@
fi
SERVICE_PROTOCOL=${SERVICE_PROTOCOL:-http}
-KEYSTONE_AUTH_PROTOCOL=${KEYSTONE_AUTH_PROTOCOL:-$SERVICE_PROTOCOL}
-KEYSTONE_AUTH_HOST=${KEYSTONE_AUTH_HOST:-$SERVICE_HOST}
# Identity API version
-export OS_IDENTITY_API_VERSION=${IDENTITY_API_VERSION:-2.0}
+export OS_IDENTITY_API_VERSION=${IDENTITY_API_VERSION:-3}
# Authenticating against an OpenStack cloud using Keystone returns a **Token**
# and **Service Catalog**. The catalog contains the endpoints for all services
# the user/project has access to - including nova, glance, keystone, swift, ...
-# We currently recommend using the 2.0 *identity api*.
+# We currently recommend using the version 3 *identity api*.
#
-export OS_AUTH_URL=$KEYSTONE_AUTH_PROTOCOL://$KEYSTONE_AUTH_HOST:5000/v${OS_IDENTITY_API_VERSION}
+export OS_AUTH_URL=$KEYSTONE_AUTH_URI
# Currently, in order to use openstackclient with Identity API v3,
# we need to set the domain which the user and project belong to.
diff --git a/pkg/elasticsearch.sh b/pkg/elasticsearch.sh
index 856eaff..fefd454 100755
--- a/pkg/elasticsearch.sh
+++ b/pkg/elasticsearch.sh
@@ -83,7 +83,7 @@
return
fi
if is_ubuntu; then
- is_package_installed openjdk-7-jre-headless || install_package openjdk-7-jre-headless
+ is_package_installed default-jre-headless || install_package default-jre-headless
sudo dpkg -i ${FILES}/elasticsearch-${ELASTICSEARCH_VERSION}.deb
sudo update-rc.d elasticsearch defaults 95 10
diff --git a/stack.sh b/stack.sh
index f20c9d9..31ea2e1 100755
--- a/stack.sh
+++ b/stack.sh
@@ -12,7 +12,7 @@
# a multi-node developer install.
# To keep this script simple we assume you are running on a recent **Ubuntu**
-# (14.04 Trusty or newer), **Fedora** (F20 or newer), or **CentOS/RHEL**
+# (16.04 Xenial or newer), **Fedora** (F24 or newer), or **CentOS/RHEL**
# (7 or newer) machine. (It may work on other platforms but support for those
# platforms is left to those who added them to DevStack.) It should work in
# a VM or physical server. Additionally, we maintain a list of ``deb`` and
@@ -161,16 +161,16 @@
extract_localrc_section $TOP_DIR/local.conf $TOP_DIR/localrc $TOP_DIR/.localrc.auto
# ``stack.sh`` is customizable by setting environment variables. Override a
-# default setting via export::
+# default setting via export:
#
# export DATABASE_PASSWORD=anothersecret
# ./stack.sh
#
-# or by setting the variable on the command line::
+# or by setting the variable on the command line:
#
# DATABASE_PASSWORD=simple ./stack.sh
#
-# Persistent variables can be placed in a ``local.conf`` file::
+# Persistent variables can be placed in a ``local.conf`` file:
#
# [[local|localrc]]
# DATABASE_PASSWORD=anothersecret
@@ -192,7 +192,7 @@
# Warn users who aren't on an explicitly supported distro, but allow them to
# override check and attempt installation with ``FORCE=yes ./stack``
-if [[ ! ${DISTRO} =~ (trusty|xenial|yakkety|7.0|wheezy|sid|testing|jessie|f23|f24|rhel7|kvmibm1) ]]; then
+if [[ ! ${DISTRO} =~ (xenial|yakkety|zesty|stretch|jessie|f24|f25|rhel7|kvmibm1) ]]; then
echo "WARNING: this script has not been tested on $DISTRO"
if [[ "$FORCE" != "yes" ]]; then
die $LINENO "If you wish to run this script anyway run with FORCE=yes"
@@ -328,6 +328,7 @@
DATA_DIR=${DATA_DIR:-${DEST}/data}
sudo mkdir -p $DATA_DIR
safe_chown -R $STACK_USER $DATA_DIR
+safe_chmod 0755 $DATA_DIR
# Configure proper hostname
# Certain services such as rabbitmq require that the local hostname resolves
@@ -347,6 +348,10 @@
# is pre-installed.
if [[ -f /etc/nodepool/provider ]]; then
SKIP_EPEL_INSTALL=True
+ if is_fedora; then
+ # However, EPEL is not enabled by default.
+ sudo yum-config-manager --enable epel
+ fi
fi
if is_fedora && [[ $DISTRO == "rhel7" ]] && \
@@ -538,13 +543,6 @@
source $TOP_DIR/lib/database
source $TOP_DIR/lib/rpc_backend
-# Service to enable with SSL if ``USE_SSL`` is True
-SSL_ENABLED_SERVICES="key,nova,cinder,glance,s-proxy,neutron"
-
-if is_service_enabled tls-proxy && [ "$USE_SSL" == "True" ]; then
- die $LINENO "tls-proxy and SSL are mutually exclusive"
-fi
-
# Configure Projects
# ==================
@@ -572,9 +570,7 @@
source $TOP_DIR/lib/placement
source $TOP_DIR/lib/cinder
source $TOP_DIR/lib/swift
-source $TOP_DIR/lib/heat
source $TOP_DIR/lib/neutron
-source $TOP_DIR/lib/neutron-legacy
source $TOP_DIR/lib/ldap
source $TOP_DIR/lib/dstat
source $TOP_DIR/lib/dlm
@@ -665,7 +661,6 @@
# In multi node DevStack, second node needs ``RABBIT_USERID``, but rabbit
# isn't enabled.
if is_service_enabled rabbit; then
- RABBIT_HOST=${RABBIT_HOST:-$SERVICE_HOST}
read_password RABBIT_PASSWORD "ENTER A PASSWORD TO USE FOR RABBIT."
fi
@@ -764,6 +759,7 @@
run_phase stack pre-install
install_rpc_backend
+restart_rpc_backend
# NOTE(sdague): dlm install is conditional on one being enabled by configuration
install_dlm
@@ -788,6 +784,9 @@
# Install Oslo libraries
install_oslo
+# Install uwsgi
+install_apache_uwsgi
+
# Install client libraries
install_keystoneauth
install_keystoneclient
@@ -800,9 +799,6 @@
if is_service_enabled neutron nova horizon; then
install_neutronclient
fi
-if is_service_enabled heat horizon; then
- install_heatclient
-fi
# Install shared libraries
if is_service_enabled cinder nova; then
@@ -810,7 +806,7 @@
fi
# Setup TLS certs
-if is_service_enabled tls-proxy || [ "$USE_SSL" == "True" ]; then
+if is_service_enabled tls-proxy; then
configure_CA
init_CA
init_cert
@@ -873,6 +869,16 @@
configure_placement
fi
+# Create a fake placement-client service to signal that we need to
+# configure placement connectivity. We configure the placement section
+# for nova if placement-api or placement-client is active and n-cpu or
+# n-sch runs on the same box.
+if is_service_enabled placement placement-client; then
+ if is_service_enabled n-cpu || is_service_enabled n-sch; then
+ configure_placement_nova_compute
+ fi
+fi
+
if is_service_enabled horizon; then
# django openstack_auth
install_django_openstack_auth
@@ -880,14 +886,7 @@
stack_install_service horizon
fi
-if is_service_enabled heat; then
- stack_install_service heat
- install_heat_other
- cleanup_heat
- configure_heat
-fi
-
-if is_service_enabled tls-proxy || [ "$USE_SSL" == "True" ]; then
+if is_service_enabled tls-proxy; then
fix_system_ca_bundle_path
fi
@@ -955,11 +954,6 @@
fi
-# Finalize queue installation
-# ----------------------------
-restart_rpc_backend
-
-
# Export Certificate Authority Bundle
# -----------------------------------
@@ -1013,6 +1007,22 @@
# Save configuration values
save_stackenv $LINENO
+# Kernel Samepage Merging (KSM)
+# -----------------------------
+
+# Processes that mark their memory as mergeable can share identical memory
+# pages if KSM is enabled. This is particularly useful for nova + libvirt
+# backends but any other setup that marks its memory as mergeable can take
+# advantage. The drawback is higher CPU load; however, we tend to be
+# memory bound rather than CPU bound, so we enable KSM by default but
+# allow people to opt out if CPU time is more important to them.
+
+if [[ "ENABLE_KSM" == "True" ]] ; then
+ if [[ -f /sys/kernel/mm/ksm/run ]] ; then
+ sudo sh -c "echo 1 > /sys/kernel/mm/ksm/run"
+ fi
+fi
+
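+# To sanity-check the result on the host, query the standard KSM sysfs
+# interface, e.g. (a sketch; ``pages_shared`` counts merged pages):
+#
+#   cat /sys/kernel/mm/ksm/run          # 1 when KSM is scanning
+#   cat /sys/kernel/mm/ksm/pages_shared
+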
# Start Services
# ==============
@@ -1064,19 +1074,22 @@
fi
create_keystone_accounts
- create_nova_accounts
- create_glance_accounts
- create_cinder_accounts
- create_neutron_accounts
-
+ if is_service_enabled nova; then
+ create_nova_accounts
+ fi
+ if is_service_enabled glance; then
+ create_glance_accounts
+ fi
+ if is_service_enabled cinder; then
+ create_cinder_accounts
+ fi
+ if is_service_enabled neutron; then
+ create_neutron_accounts
+ fi
if is_service_enabled swift; then
create_swift_accounts
fi
- if is_service_enabled heat; then
- create_heat_accounts
- fi
-
fi
# Write a clouds.yaml file
@@ -1269,7 +1282,10 @@
start_neutron
fi
# Once neutron agents are started setup initial network elements
-create_neutron_initial_network
+if is_service_enabled q-svc && [[ "$NEUTRON_CREATE_INITIAL_NETWORKS" == "True" ]]; then
+ echo_summary "Creating initial neutron network elements"
+ create_neutron_initial_network
+fi
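+# Deployments that manage their own topology can skip the bootstrap
+# networks entirely; a minimal sketch for ``local.conf``:
+#
+#   [[local|localrc]]
+#   NEUTRON_CREATE_INITIAL_NETWORKS=False
+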
if is_service_enabled nova; then
echo_summary "Starting Nova"
@@ -1286,18 +1302,6 @@
create_volume_types
fi
-# Configure and launch Heat engine, api and metadata
-if is_service_enabled heat; then
- # Initialize heat
- echo_summary "Configuring Heat"
- init_heat
- echo_summary "Starting Heat"
- start_heat
- if [ "$HEAT_BUILD_PIP_MIRROR" = "True" ]; then
- echo_summary "Building Heat pip mirror"
- build_heat_pip_mirror
- fi
-fi
if is_service_enabled horizon; then
echo_summary "Starting Horizon"
@@ -1382,8 +1386,16 @@
# ----------------------
# Do this late because it requires compute hosts to have started
-if is_service_enabled n-api && [ "$NOVA_CONFIGURE_CELLSV2" == "True" ]; then
- create_cell
+if is_service_enabled n-api; then
+ if is_service_enabled n-cpu; then
+ $TOP_DIR/tools/discover_hosts.sh
+ else
+        # Some CI systems like Hyper-V build the control plane on
+        # Linux, and join non-Linux computes after setup. This
+        # allows them to delay host discovery until after their whole
+        # environment is up.
+ echo_summary "SKIPPING Cell setup because n-cpu is not enabled. You will have to do this manually before you have a working environment."
+ fi
fi
# Bash completion
@@ -1408,6 +1420,9 @@
# Phase: test-config
run_phase stack test-config
+# Apply late configuration from ``local.conf`` if it exists for layer 2 services
+# Phase: test-config
+merge_config_group $TOP_DIR/local.conf test-config
# Fin
# ===
diff --git a/stackrc b/stackrc
index 31f0759..ed1cf6e 100644
--- a/stackrc
+++ b/stackrc
@@ -5,7 +5,7 @@
# ensure we don't re-source this in the same environment
[[ -z "$_DEVSTACK_STACKRC" ]] || return 0
-declare -r _DEVSTACK_STACKRC=1
+declare -r -g _DEVSTACK_STACKRC=1
# Find the other rc files
RC_DIR=$(cd $(dirname "${BASH_SOURCE:-$0}") && pwd)
@@ -44,26 +44,18 @@
# Specify which services to launch. These generally correspond to
# screen tabs. To change the default list, use the ``enable_service`` and
# ``disable_service`` functions in ``local.conf``.
-# For example, to enable Swift add this to ``local.conf``:
-# enable_service s-proxy s-object s-container s-account
-# In order to enable Neutron (a single node setup) add the following
+# For example, to enable Swift as part of DevStack add the following
# settings in ``local.conf``:
# [[local|localrc]]
-# disable_service n-net
-# enable_service q-svc
-# enable_service q-agt
-# enable_service q-dhcp
-# enable_service q-l3
-# enable_service q-meta
-# # Optional, to enable tempest configuration as part of DevStack
-# enable_service tempest
-
+# enable_service s-proxy s-object s-container s-account
# This allows us to pass ``ENABLED_SERVICES``
if ! isset ENABLED_SERVICES ; then
# Keystone - nothing works without keystone
ENABLED_SERVICES=key
# Nova - services to support libvirt based openstack clouds
ENABLED_SERVICES+=,n-api,n-cpu,n-cond,n-sch,n-novnc,n-cauth
+ # Placement service needed for Nova
+ ENABLED_SERVICES+=,placement-api,placement-client
# Glance services needed for Nova
ENABLED_SERVICES+=,g-api,g-reg
# Cinder
@@ -95,6 +87,31 @@
# be disabled for automated testing by setting this value to False.
USE_SCREEN=$(trueorfalse True USE_SCREEN)
+# Whether to use SYSTEMD to manage services
+USE_SYSTEMD=$(trueorfalse False USE_SYSTEMD)
+USER_UNITS=$(trueorfalse False USER_UNITS)
+if [[ "$USER_UNITS" == "True" ]]; then
+ SYSTEMD_DIR="$HOME/.local/share/systemd/user"
+ SYSTEMCTL="systemctl --user"
+ JOURNALCTL_F="journalctl -f -o short-precise --user-unit"
+else
+ SYSTEMD_DIR="/etc/systemd/system"
+ SYSTEMCTL="sudo systemctl"
+ JOURNALCTL_F="journalctl -f -o short-precise --unit"
+fi
+
+if [[ "$USE_SYSTEMD" == "True" ]]; then
+ USE_SCREEN=False
+fi
+
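+# A sketch of what these shorthands expand to in the two modes (the
+# unit name is a placeholder):
+#
+#   USER_UNITS=True:
+#     systemctl --user status devstack@<service>.service
+#     journalctl -f -o short-precise --user-unit devstack@<service>.service
+#   USER_UNITS=False (default):
+#     sudo systemctl status devstack@<service>.service
+#     journalctl -f -o short-precise --unit devstack@<service>.service
+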
+# Whether or not to enable Kernel Samepage Merging (KSM) if available.
+# This allows programs that mark their memory as mergeable to share
+# memory pages if they are identical. This is particularly useful with
+# libvirt backends. This reduces memory usage at the cost of CPU overhead
+# to scan memory. We default to enabling it because we tend to be more
+# memory constrained than CPU bound.
+ENABLE_KSM=$(trueorfalse True ENABLE_KSM)
+
# When using screen, should we keep a log file on disk? You might
# want this False if you have a long-running setup where verbose logs
# can fill-up the host.
@@ -110,13 +127,23 @@
source $RC_DIR/.localrc.password
fi
-# Control whether Python 3 should be used.
-export USE_PYTHON3=${USE_PYTHON3:-False}
+# Control whether Python 3 should be used at all.
+export USE_PYTHON3=$(trueorfalse False USE_PYTHON3)
+
+# Control whether Python 3 is enabled for specific services by the
+# base name of the directory from which they are installed. See
+# enable_python3_package to edit this variable and use_python3_for to
+# test membership.
+export ENABLED_PYTHON3_PACKAGES="nova,glance,cinder,uwsgi,python-openstackclient"
+
+# Explicitly list services not to run under Python 3. See
+# disable_python3_package to edit this variable.
+export DISABLED_PYTHON3_PACKAGES=""
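+
+# A sketch of toggling membership from a plugin or ``local.conf``
+# using the helpers in ``inc/python`` (package names are hypothetical):
+#
+#   enable_python3_package mypackage
+#   disable_python3_package otherpackage
+#   if python3_enabled_for mypackage; then
+#       echo "mypackage installs under python3"
+#   fi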
# When Python 3 is supported by an application, adding the specific
# version of Python 3 to this variable will install the app using that
# version of the interpreter instead of 2.7.
-export PYTHON3_VERSION=${PYTHON3_VERSION:-3.4}
+export PYTHON3_VERSION=${PYTHON3_VERSION:-3.5}
# Just to be more explicit on the Python 2 version to use.
export PYTHON2_VERSION=${PYTHON2_VERSION:-2.7}
@@ -158,7 +185,7 @@
fi
# Configure Identity API version: 2.0, 3
-IDENTITY_API_VERSION=${IDENTITY_API_VERSION:-2.0}
+IDENTITY_API_VERSION=${IDENTITY_API_VERSION:-3}
# Set the option ENABLE_IDENTITY_V2 to True. It defines whether the DevStack
# deployment will be deploying the Identity v2 pipelines. If this option is set
@@ -198,6 +225,12 @@
# Zero disables timeouts
GIT_TIMEOUT=${GIT_TIMEOUT:-0}
+# How we handle WSGI deployments. By default we allow two modes:
+# "uwsgi", which runs the service under uwsgi behind an Apache proxy,
+# and "mod_wsgi", which runs the service inside Apache itself.
+# mod_wsgi is deprecated; don't use it.
+WSGI_MODE=${WSGI_MODE:-"uwsgi"}
+
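+# A sketch of opting back into the deprecated mode, should a
+# deployment still need it:
+#
+#   [[local|localrc]]
+#   WSGI_MODE=mod_wsgi
+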
# Repositories
# ------------
@@ -239,10 +272,6 @@
GLANCE_REPO=${GLANCE_REPO:-${GIT_BASE}/openstack/glance.git}
GLANCE_BRANCH=${GLANCE_BRANCH:-master}
-# heat service
-HEAT_REPO=${HEAT_REPO:-${GIT_BASE}/openstack/heat.git}
-HEAT_BRANCH=${HEAT_BRANCH:-master}
-
# django powered web control panel for openstack
HORIZON_REPO=${HORIZON_REPO:-${GIT_BASE}/openstack/horizon.git}
HORIZON_BRANCH=${HORIZON_BRANCH:-master}
@@ -301,10 +330,6 @@
GITREPO["python-glanceclient"]=${GLANCECLIENT_REPO:-${GIT_BASE}/openstack/python-glanceclient.git}
GITBRANCH["python-glanceclient"]=${GLANCECLIENT_BRANCH:-master}
-# python heat client library
-GITREPO["python-heatclient"]=${HEATCLIENT_REPO:-${GIT_BASE}/openstack/python-heatclient.git}
-GITBRANCH["python-heatclient"]=${HEATCLIENT_BRANCH:-master}
-
# ironic client
GITREPO["python-ironicclient"]=${IRONICCLIENT_REPO:-${GIT_BASE}/openstack/python-ironicclient.git}
GITBRANCH["python-ironicclient"]=${IRONICCLIENT_BRANCH:-master}
@@ -345,6 +370,10 @@
#
###################
+# castellan key manager interface
+GITREPO["castellan"]=${CASTELLAN_REPO:-${GIT_BASE}/openstack/castellan.git}
+GITBRANCH["castellan"]=${CASTELLAN_BRANCH:-master}
+
# cliff command line framework
GITREPO["cliff"]=${CLIFF_REPO:-${GIT_BASE}/openstack/cliff.git}
GITBRANCH["cliff"]=${CLIFF_BRANCH:-master}
@@ -464,6 +493,10 @@
#
##################
+# cursive library
+GITREPO["cursive"]=${CURSIVE_REPO:-${GIT_BASE}/openstack/cursive.git}
+GITBRANCH["cursive"]=${CURSIVE_BRANCH:-master}
+
# glance store library
GITREPO["glance_store"]=${GLANCE_STORE_REPO:-${GIT_BASE}/openstack/glance_store.git}
GITBRANCH["glance_store"]=${GLANCE_STORE_BRANCH:-master}
@@ -510,6 +543,10 @@
GITREPO["osc-lib"]=${OSC_LIB_REPO:-${GIT_BASE}/openstack/osc-lib.git}
GITBRANCH["osc-lib"]=${OSC_LIB_BRANCH:-master}
+# python-openstacksdk OpenStack Python SDK
+GITREPO["python-openstacksdk"]=${OPENSTACKSDK_REPO:-${GIT_BASE}/openstack/python-openstacksdk.git}
+GITBRANCH["python-openstacksdk"]=${OPENSTACKSDK_BRANCH:-master}
+
# ironic common lib
GITREPO["ironic-lib"]=${IRONIC_LIB_REPO:-${GIT_BASE}/openstack/ironic-lib.git}
GITBRANCH["ironic-lib"]=${IRONIC_LIB_BRANCH:-master}
@@ -580,8 +617,12 @@
case "$VIRT_DRIVER" in
ironic|libvirt)
LIBVIRT_TYPE=${LIBVIRT_TYPE:-kvm}
- if [[ "$os_VENDOR" =~ (Debian) ]]; then
- LIBVIRT_GROUP=libvirt
+ if [[ "$os_VENDOR" =~ (Debian|Ubuntu) ]]; then
+        # The group name changes with newer libvirt. Older Ubuntu used
+        # 'libvirtd', but newer releases use 'libvirt' like Debian. Do a
+        # quick check to see if a 'libvirtd' group already exists, to
+        # handle grenade's upgrade case.
+ LIBVIRT_GROUP=$(cut -d ':' -f 1 /etc/group | grep 'libvirtd$' || true)
+ LIBVIRT_GROUP=${LIBVIRT_GROUP:-libvirt}
else
LIBVIRT_GROUP=libvirtd
fi
@@ -589,7 +630,7 @@
lxd)
LXD_GROUP=${LXD_GROUP:-"lxd"}
;;
- docker)
+ docker|zun)
DOCKER_GROUP=${DOCKER_GROUP:-"docker"}
;;
fake)
@@ -630,7 +671,7 @@
#IMAGE_URLS="http://smoser.brickies.net/ubuntu/ttylinux-uec/ttylinux-uec-amd64-11.2_2.6.35-15_1.tar.gz" # old ttylinux-uec image
#IMAGE_URLS="http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-disk.img" # cirros full disk image
-CIRROS_VERSION=${CIRROS_VERSION:-"0.3.4"}
+CIRROS_VERSION=${CIRROS_VERSION:-"0.3.5"}
CIRROS_ARCH=${CIRROS_ARCH:-"x86_64"}
# Set default image based on ``VIRT_DRIVER`` and ``LIBVIRT_TYPE``, either of
@@ -642,17 +683,14 @@
IMAGE_URLS+=","
fi
case "$VIRT_DRIVER" in
- openvz)
- DEFAULT_IMAGE_NAME=${DEFAULT_IMAGE_NAME:-ubuntu-12.04-x86_64}
- IMAGE_URLS+="http://download.openvz.org/template/precreated/ubuntu-12.04-x86_64.tar.gz";;
libvirt)
case "$LIBVIRT_TYPE" in
lxc) # the cirros root disk in the uec tarball is empty, so it will not work for lxc
DEFAULT_IMAGE_NAME=${DEFAULT_IMAGE_NAME:-cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-rootfs}
IMAGE_URLS+="http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-rootfs.img.gz";;
- *) # otherwise, use the uec style image (with kernel, ramdisk, disk)
- DEFAULT_IMAGE_NAME=${DEFAULT_IMAGE_NAME:-cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-uec}
- IMAGE_URLS+="http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-uec.tar.gz";;
+ *) # otherwise, use the qcow image
+ DEFAULT_IMAGE_NAME=${DEFAULT_IMAGE_NAME:-cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-disk.img}
+ IMAGE_URLS+="http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-disk.img";;
esac
;;
vsphere)
@@ -662,18 +700,6 @@
DEFAULT_IMAGE_NAME=${DEFAULT_IMAGE_NAME:-cirros-0.3.4-x86_64-disk}
IMAGE_URLS+="http://ca.downloads.xensource.com/OpenStack/cirros-0.3.4-x86_64-disk.vhd.tgz"
IMAGE_URLS+=",http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-x86_64-uec.tar.gz";;
- ironic)
- # Ironic can do both partition and full disk images, depending on the driver
- if [[ -z "${IRONIC_DEPLOY_DRIVER%%agent*}" ]]; then
- DEFAULT_IMAGE_NAME=${DEFAULT_IMAGE_NAME:-cirros-${CIRROS_VERSION}-x86_64-disk}
- else
- DEFAULT_IMAGE_NAME=${DEFAULT_IMAGE_NAME:-cirros-${CIRROS_VERSION}-x86_64-uec}
- fi
- IMAGE_URLS+="http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-x86_64-uec.tar.gz"
- IMAGE_URLS+=",http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-x86_64-disk.img";;
- *) # Default to Cirros with kernel, ramdisk and disk image
- DEFAULT_IMAGE_NAME=${DEFAULT_IMAGE_NAME:-cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-uec}
- IMAGE_URLS+="http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-uec.tar.gz";;
esac
DOWNLOAD_DEFAULT_IMAGES=False
fi
@@ -834,19 +860,9 @@
# Set to 0 to disable shallow cloning
GIT_DEPTH=${GIT_DEPTH:-0}
-# Use native SSL for servers in ``SSL_ENABLED_SERVICES``
-USE_SSL=$(trueorfalse False USE_SSL)
-
-# ebtables is inherently racey. If you run it by two or more processes
-# simultaneously it will collide, badly, in the kernel and produce
-# failures or corruption of ebtables. The only way around it is for
-# all tools running ebtables to only ever do so with the --concurrent
-# flag. This requires libvirt >= 1.2.11.
-#
-# If you don't have this then the following work around will replace
-# ebtables with a wrapper script so that it is safe to run without
-# that flag.
-EBTABLES_RACE_FIX=$(trueorfalse False EBTABLES_RACE_FIX)
+# We may not need to recreate the database when two Keystone services
+# share the same one; this is useful for multinode Grenade tests.
+RECREATE_KEYSTONE_DB=$(trueorfalse True RECREATE_KEYSTONE_DB)
# Following entries need to be last items in file
diff --git a/tests/test_functions.sh b/tests/test_functions.sh
index 8aae23d..adf20cd 100755
--- a/tests/test_functions.sh
+++ b/tests/test_functions.sh
@@ -224,7 +224,7 @@
# test against removed package...was a bug on Ubuntu
if is_ubuntu; then
- PKG=cowsay
+ PKG=cowsay-off
if ! (dpkg -s $PKG >/dev/null 2>&1); then
# it was never installed...set up the condition
sudo apt-get install -y cowsay >/dev/null 2>&1
diff --git a/tests/test_libs_from_pypi.sh b/tests/test_libs_from_pypi.sh
index fb55023..608ef6a 100755
--- a/tests/test_libs_from_pypi.sh
+++ b/tests/test_libs_from_pypi.sh
@@ -32,17 +32,18 @@
ALL_LIBS="python-novaclient oslo.config pbr oslo.context"
ALL_LIBS+=" python-keystoneclient taskflow oslo.middleware pycadf"
ALL_LIBS+=" python-glanceclient python-ironicclient"
-ALL_LIBS+=" oslo.messaging oslo.log cliff python-heatclient stevedore"
+ALL_LIBS+=" oslo.messaging oslo.log cliff stevedore"
ALL_LIBS+=" python-cinderclient glance_store oslo.concurrency oslo.db"
ALL_LIBS+=" oslo.versionedobjects oslo.vmware keystonemiddleware"
ALL_LIBS+=" oslo.serialization django_openstack_auth"
ALL_LIBS+=" python-openstackclient osc-lib os-client-config oslo.rootwrap"
-ALL_LIBS+=" oslo.i18n oslo.utils python-swiftclient"
+ALL_LIBS+=" oslo.i18n oslo.utils python-openstacksdk python-swiftclient"
ALL_LIBS+=" python-neutronclient tooz ceilometermiddleware oslo.policy"
ALL_LIBS+=" debtcollector os-brick automaton futurist oslo.service"
-ALL_LIBS+=" oslo.cache oslo.reports osprofiler"
+ALL_LIBS+=" oslo.cache oslo.reports osprofiler cursive"
ALL_LIBS+=" keystoneauth ironic-lib neutron-lib oslo.privsep"
ALL_LIBS+=" diskimage-builder os-vif python-brick-cinderclient-ext"
+ALL_LIBS+=" castellan"
# Generate the above list with
# echo ${!GITREPO[@]}
diff --git a/tests/test_meta_config.sh b/tests/test_meta_config.sh
index 327fb56..087aaf4 100755
--- a/tests/test_meta_config.sh
+++ b/tests/test_meta_config.sh
@@ -29,6 +29,10 @@
exit -1
}
+# Stub out warn() so config groups that merely warn (see the test9
+# case below) do not abort the test run.
+function warn {
+    return 0
+}
+
TEST_1C_ADD="[eee]
type=new
multi = foo2"
@@ -92,7 +96,7 @@
[[test3|test-space.conf]]
[DEFAULT]
attribute=value
-
+
# the above line has a single space
[[test4|\$TEST4_DIR/\$TEST4_FILE]]
@@ -125,6 +129,14 @@
[[test10|does-not-exist-dir/test.conf]]
foo=bar
+[[test11|test-same.conf]]
+[DEFAULT]
+foo=bar
+
+[[test11|test-same.conf]]
+[some]
+random=config
+
[[test-multi-sections|test-multi-sections.conf]]
[sec-1]
cfg_item1 = abcd
@@ -147,6 +159,9 @@
cfg_item2 = efgh
cfg_item2 = \${FOO_BAR_BAZ}
+[[test11|test-same.conf]]
+[another]
+non = sense
EOF
echo -n "get_meta_section_files: test0 doesn't exist: "
@@ -367,11 +382,10 @@
echo -n "merge_config_group test9 undefined conf file: "
set +e
-# function is expected to fail and exit, running it
-# in a subprocess to let this script proceed
+# function is expected to trigger warn and continue
(merge_config_group test.conf test9)
VAL=$?
-EXPECT_VAL=255
+EXPECT_VAL=0
check_result "$VAL" "$EXPECT_VAL"
set -e
@@ -385,8 +399,24 @@
check_result "$VAL" "$EXPECT_VAL"
set -e
+echo -n "merge_config_file test11 same section: "
+rm -f test-same.conf
+merge_config_group test.conf test11
+VAL=$(cat test-same.conf)
+EXPECT_VAL='
+[DEFAULT]
+foo = bar
+
+[some]
+random = config
+
+[another]
+non = sense'
+check_result "$VAL" "$EXPECT_VAL"
+
+
rm -f test.conf test1c.conf test2a.conf \
test-space.conf test-equals.conf test-strip.conf \
test-colon.conf test-env.conf test-multiline.conf \
- test-multi-sections.conf
+ test-multi-sections.conf test-same.conf
rm -rf test-etc
diff --git a/tests/test_python.sh b/tests/test_python.sh
new file mode 100755
index 0000000..8652798
--- /dev/null
+++ b/tests/test_python.sh
@@ -0,0 +1,30 @@
+#!/usr/bin/env bash
+
+# Tests for DevStack Python 3 support functions
+
+TOP=$(cd $(dirname "$0")/.. && pwd)
+
+source $TOP/functions-common
+source $TOP/inc/python
+
+source $TOP/tests/unittest.sh
+
+echo "Testing Python 3 functions"
+
+# Initialize variables manipulated by functions under test.
+export ENABLED_PYTHON3_PACKAGES=""
+export DISABLED_PYTHON3_PACKAGES=""
+
+assert_false "should not be enabled yet" python3_enabled_for testpackage1
+
+enable_python3_package testpackage1
+assert_equal "$ENABLED_PYTHON3_PACKAGES" "testpackage1" "unexpected result"
+assert_true "should be enabled" python3_enabled_for testpackage1
+
+assert_false "should not be disabled yet" python3_disabled_for testpackage2
+
+disable_python3_package testpackage2
+assert_equal "$DISABLED_PYTHON3_PACKAGES" "testpackage2" "unexpected result"
+assert_true "should be disabled" python3_disabled_for testpackage2
+
+report_results
diff --git a/tools/create_userrc.sh b/tools/create_userrc.sh
index 30d1a01..f4a4edc 100755
--- a/tools/create_userrc.sh
+++ b/tools/create_userrc.sh
@@ -152,7 +152,7 @@
fi
if [ -z "$OS_AUTH_URL" ]; then
- export OS_AUTH_URL=http://localhost:5000/v2.0/
+ export OS_AUTH_URL=http://localhost:5000/v3/
fi
if [ -z "$OS_USER_DOMAIN_ID" -a -z "$OS_USER_DOMAIN_NAME" ]; then
diff --git a/tools/discover_hosts.sh b/tools/discover_hosts.sh
new file mode 100755
index 0000000..4ec6a40
--- /dev/null
+++ b/tools/discover_hosts.sh
@@ -0,0 +1,20 @@
+#!/usr/bin/env bash
+
+# **discover_hosts.sh**
+
+# This is just a very simple script to run the
+# "nova-manage cell_v2 discover_hosts" command
+# which is needed to discover compute nodes and
+# register them with a parent cell in Nova.
+# This assumes that /etc/nova/nova.conf exists
+# and has the following entries filled in:
+#
+# [api_database]
+# connection = This is the URL to the nova_api database
+#
+# In other words this should be run on the primary
+# (API) node in a multi-node setup.
+
+if [[ -x $(which nova-manage) ]]; then
+ nova-manage cell_v2 discover_hosts --verbose
+fi
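+
+# A sketch of manual use on the primary (API) node once computes are
+# up, mirroring what stack.sh does when n-cpu is co-located:
+#
+#   ./tools/discover_hosts.sh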
diff --git a/tools/dstat.sh b/tools/dstat.sh
index 3c0b3be..ae7306e 100755
--- a/tools/dstat.sh
+++ b/tools/dstat.sh
@@ -9,11 +9,11 @@
# Assumes:
# - dstat command is installed
-# Retreive log directory as argument from calling script.
+# Retrieve log directory as argument from calling script.
LOGDIR=$1
# Command line arguments for primary DStat process.
-DSTAT_OPTS="-tcmndrylpg --top-cpu-adv --top-io-adv --swap"
+DSTAT_OPTS="-tcmndrylpg --top-cpu-adv --top-io-adv --top-mem --swap"
# Command-line arguments for secondary background DStat process.
DSTAT_CSV_OPTS="-tcmndrylpg --output $LOGDIR/dstat-csv.log"
diff --git a/tools/fixup_stuff.sh b/tools/fixup_stuff.sh
index 4dec95e..f3ba702 100755
--- a/tools/fixup_stuff.sh
+++ b/tools/fixup_stuff.sh
@@ -67,6 +67,35 @@
echo_summary "WARNING: unable to reserve keystone ports"
fi
+# Ubuntu Cloud Archive
+#---------------------
+# We've found that Libvirt on Xenial is flaky and crashes enough to be
+# a regular top e-r bug. Opt into Ubuntu Cloud Archive if on Xenial to
+# get newer Libvirt.
+if [[ "$DISTRO" = "xenial" ]]; then
+ # This pulls in apt-add-repository
+ install_package "software-properties-common"
+ # Use UCA for newer libvirt. Should give us libvirt 2.5.0.
+ if [[ -f /etc/ci/mirror_info.sh ]] ; then
+ # If we are on a nodepool provided host and it has told us about where
+ # we can find local mirrors then use that mirror.
+ source /etc/ci/mirror_info.sh
+
+ sudo apt-add-repository -y "deb $NODEPOOL_UCA_MIRROR xenial-updates/ocata main"
+
+        # Disable use of the libvirt wheel here, as the presence of a
+        # mirror implies the presence of a cached wheel built against an
+        # older libvirt binary.
+ # TODO(clarkb) figure out how to use wheel again.
+ sudo bash -c 'echo "no-binary = libvirt-python" >> /etc/pip.conf'
+ else
+ # Otherwise use upstream UCA
+ sudo add-apt-repository -y cloud-archive:ocata
+ fi
+ # Force update our APT repos, since we added UCA above.
+ REPOS_UPDATED=False
+ apt_get_update
+fi
+
# Python Packages
# ---------------
diff --git a/tools/install_ebtables_workaround.sh b/tools/install_ebtables_workaround.sh
deleted file mode 100755
index 45ced87..0000000
--- a/tools/install_ebtables_workaround.sh
+++ /dev/null
@@ -1,31 +0,0 @@
-#!/bin/bash -eu
-#
-# Copyright 2015 Hewlett-Packard Development Company, L.P.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-#
-# This replaces the ebtables on your system with a wrapper script that
-# does implicit locking. This is needed if libvirt < 1.2.11 on your platform.
-
-EBTABLES=/sbin/ebtables
-EBTABLESREAL=/sbin/ebtables.real
-FILES=$TOP_DIR/files
-
-if [[ -f "$EBTABLES" ]]; then
- if file $EBTABLES | grep ELF; then
- sudo mv $EBTABLES $EBTABLESREAL
- sudo install -m 0755 $FILES/ebtables.workaround $EBTABLES
- echo "Replaced ebtables with locking workaround"
- fi
-fi
diff --git a/tools/install_pip.sh b/tools/install_pip.sh
index a5ccb19..dbe5278 100755
--- a/tools/install_pip.sh
+++ b/tools/install_pip.sh
@@ -144,6 +144,9 @@
fi
set -x
-pip_install -U setuptools
+
+# Note setuptools is part of requirements.txt and we want to make sure
+# we obey any versioning as described there.
+pip_install_gr setuptools
get_versions
diff --git a/tools/install_prereqs.sh b/tools/install_prereqs.sh
index 8895e1e..da59093 100755
--- a/tools/install_prereqs.sh
+++ b/tools/install_prereqs.sh
@@ -83,6 +83,9 @@
if python3_enabled; then
install_python3
+ export PYTHON=$(which python${PYTHON3_VERSION} 2>/dev/null || which python3 2>/dev/null)
+else
+ export PYTHON=$(which python 2>/dev/null)
fi
# Mark end of run
diff --git a/tools/memory_tracker.sh b/tools/memory_tracker.sh
new file mode 100755
index 0000000..cbdeb8f
--- /dev/null
+++ b/tools/memory_tracker.sh
@@ -0,0 +1,120 @@
+#!/bin/bash
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+set -o errexit
+
+PYTHON=${PYTHON:-python}
+
+# time to sleep between checks
+SLEEP_TIME=20
+
+# MemAvailable is the best estimate and has built-in heuristics
+# around reclaimable memory. However, it is not available until the
+# 3.14 kernel (i.e. Ubuntu LTS Trusty misses it). In that case, we
+# fall back to free+buffers+cached as the available memory.
+USE_MEM_AVAILABLE=0
+if grep -q '^MemAvailable:' /proc/meminfo; then
+ USE_MEM_AVAILABLE=1
+fi
+
+function get_mem_unevictable {
+ awk '/^Unevictable:/ {print $2}' /proc/meminfo
+}
+
+function get_mem_available {
+ if [[ $USE_MEM_AVAILABLE -eq 1 ]]; then
+ awk '/^MemAvailable:/ {print $2}' /proc/meminfo
+ else
+ awk '/^MemFree:/ {free=$2}
+ /^Buffers:/ {buffers=$2}
+ /^Cached:/ {cached=$2}
+ END { print free+buffers+cached }' /proc/meminfo
+ fi
+}
+
+function tracker {
+ local low_point
+ local unevictable_point
+ low_point=$(get_mem_available)
+ # log mlocked memory at least on first iteration
+ unevictable_point=0
+ while [ 1 ]; do
+
+ local mem_available
+ mem_available=$(get_mem_available)
+
+ local unevictable
+ unevictable=$(get_mem_unevictable)
+
+ if [ $mem_available -lt $low_point -o $unevictable -ne $unevictable_point ]; then
+ echo "[[["
+ date
+
+ # whenever we see less memory available than last time, dump the
+ # snapshot of current usage; i.e. checking the latest entry in the file
+ # will give the peak-memory usage
+ if [[ $mem_available -lt $low_point ]]; then
+ low_point=$mem_available
+ echo "---"
+                # always-available greppable output, given the difference
+                # in meminfo output described above...
+ echo "memory_tracker low_point: $mem_available"
+ echo "---"
+ cat /proc/meminfo
+ echo "---"
+            # would a hierarchical view be more useful (-H)? output is
+            # not sorted by usage then, however, and the first
+            # question is "what's using up the memory"
+            #
+            # there are a lot of kernel threads, especially on an 8-cpu
+            # system. do a best-effort removal to improve the
+            # signal/noise ratio of the output.
+ ps --sort=-pmem -eo pid:10,pmem:6,rss:15,ppid:10,cputime:10,nlwp:8,wchan:25,args:100 |
+ grep -v ']$'
+ fi
+ echo "---"
+
+ # list processes that lock memory from swap
+ if [[ $unevictable -ne $unevictable_point ]]; then
+ unevictable_point=$unevictable
+ ${PYTHON} ./tools/mlock_report.py
+ fi
+
+ echo "]]]"
+ fi
+ sleep $SLEEP_TIME
+ done
+}
+
+function usage {
+ echo "Usage: $0 [-x] [-s N]" 1>&2
+ exit 1
+}
+
+while getopts ":s:x" opt; do
+ case $opt in
+ s)
+ SLEEP_TIME=$OPTARG
+ ;;
+ x)
+ set -o xtrace
+ ;;
+ *)
+ usage
+ ;;
+ esac
+done
+shift $((OPTIND-1))
+
+tracker
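+
+# A sketch of running the tracker by hand with a tighter poll
+# interval; the sample values are made up, layout per tracker() above:
+#
+#   $ ./tools/memory_tracker.sh -s 5
+#   [[[
+#   Mon Feb  6 12:00:00 UTC 2017
+#   ---
+#   memory_tracker low_point: 1482332
+#   ---
+#   ...
+#   ]]]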
diff --git a/tools/mlock_report.py b/tools/mlock_report.py
new file mode 100755
index 0000000..2169cc2
--- /dev/null
+++ b/tools/mlock_report.py
@@ -0,0 +1,60 @@
+#!/usr/bin/env python
+
+# This tool lists processes that lock memory pages from swapping to disk.
+
+import re
+import subprocess
+
+import psutil
+
+
+SUMMARY_REGEX = re.compile(br".*\s+(?P<locked>[\d]+)\s+KB")
+
+
+def main():
+ try:
+ print(_get_report())
+ except Exception as e:
+ print("Failure listing processes locking memory: %s" % str(e))
+ raise
+
+
+def _get_report():
+ mlock_users = []
+ for proc in psutil.process_iter():
+ pid = proc.pid
+        # sadly psutil does not expose locked pages info, which is why
+        # we call pmap and parse its output here
+ try:
+ out = subprocess.check_output(['pmap', '-XX', str(pid)])
+ except subprocess.CalledProcessError as e:
+ # 42 means process just vanished, which is ok
+ if e.returncode == 42:
+ continue
+ raise
+ last_line = out.splitlines()[-1]
+
+ # some processes don't provide a memory map, for example those
+ # running as kernel services, so we need to skip those that don't
+ # match
+ result = SUMMARY_REGEX.match(last_line)
+ if result:
+ locked = int(result.group('locked'))
+ if locked:
+ mlock_users.append({'name': proc.name(),
+ 'pid': pid,
+ 'locked': locked})
+
+ # produce a single line log message with per process mlock stats
+ if mlock_users:
+ return "; ".join(
+ "[%(name)s (pid:%(pid)s)]=%(locked)dKB" % args
+ # log heavy users first
+            for args in sorted(mlock_users, key=lambda d: d['locked'],
+                               reverse=True)
+ )
+ else:
+ return "no locked memory"
+
+
+if __name__ == "__main__":
+ main()
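+
+# The reporter is normally driven by memory_tracker.sh via
+# "${PYTHON} ./tools/mlock_report.py", but it also runs standalone; a
+# sketch, with illustrative process names and sizes:
+#
+#   $ python tools/mlock_report.py
+#   [ovs-vswitchd (pid:1234)]=132KB; [agetty (pid:567)]=12KB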
diff --git a/tools/peakmem_tracker.sh b/tools/peakmem_tracker.sh
deleted file mode 100755
index ecbd79a..0000000
--- a/tools/peakmem_tracker.sh
+++ /dev/null
@@ -1,98 +0,0 @@
-#!/bin/bash
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-set -o errexit
-
-# time to sleep between checks
-SLEEP_TIME=20
-
-# MemAvailable is the best estimation and has built-in heuristics
-# around reclaimable memory. However, it is not available until 3.14
-# kernel (i.e. Ubuntu LTS Trusty misses it). In that case, we fall
-# back to free+buffers+cache as the available memory.
-USE_MEM_AVAILBLE=0
-if grep -q '^MemAvailable:' /proc/meminfo; then
- USE_MEM_AVAILABLE=1
-fi
-
-function get_mem_available {
- if [[ $USE_MEM_AVAILABLE -eq 1 ]]; then
- awk '/^MemAvailable:/ {print $2}' /proc/meminfo
- else
- awk '/^MemFree:/ {free=$2}
- /^Buffers:/ {buffers=$2}
- /^Cached:/ {cached=$2}
- END { print free+buffers+cached }' /proc/meminfo
- fi
-}
-
-# whenever we see less memory available than last time, dump the
-# snapshot of current usage; i.e. checking the latest entry in the
-# file will give the peak-memory usage
-function tracker {
- local low_point
- low_point=$(get_mem_available)
- while [ 1 ]; do
-
- local mem_available
- mem_available=$(get_mem_available)
-
- if [[ $mem_available -lt $low_point ]]; then
- low_point=$mem_available
- echo "[[["
- date
- echo "---"
- # always available greppable output; given difference in
- # meminfo output as described above...
- echo "peakmem_tracker low_point: $mem_available"
- echo "---"
- cat /proc/meminfo
- echo "---"
- # would hierarchial view be more useful (-H)? output is
- # not sorted by usage then, however, and the first
- # question is "what's using up the memory"
- #
- # there are a lot of kernel threads, especially on a 8-cpu
- # system. do a best-effort removal to improve
- # signal/noise ratio of output.
- ps --sort=-pmem -eo pid:10,pmem:6,rss:15,ppid:10,cputime:10,nlwp:8,wchan:25,args:100 |
- grep -v ']$'
- echo "]]]"
- fi
-
- sleep $SLEEP_TIME
- done
-}
-
-function usage {
- echo "Usage: $0 [-x] [-s N]" 1>&2
- exit 1
-}
-
-while getopts ":s:x" opt; do
- case $opt in
- s)
- SLEEP_TIME=$OPTARG
- ;;
- x)
- set -o xtrace
- ;;
- *)
- usage
- ;;
- esac
-done
-shift $((OPTIND-1))
-
-tracker
diff --git a/tools/ping_neutron.sh b/tools/ping_neutron.sh
index c755754..73fe3f3 100755
--- a/tools/ping_neutron.sh
+++ b/tools/ping_neutron.sh
@@ -54,7 +54,7 @@
REMAINING_ARGS="${@:2}"
# BUG: with duplicate network names, this fails pretty hard.
-NET_ID=$(openstack network list | grep "$NET_NAME" | awk '{print $2}')
+NET_ID=$(openstack network show -f value -c id "$NET_NAME")
PROBE_ID=$(neutron-debug probe-list -c id -c network_id | grep "$NET_ID" | awk '{print $2}' | head -n 1)
# This runs a command inside the specific netns
diff --git a/tools/worlddump.py b/tools/worlddump.py
index e1ef544..6fff149 100755
--- a/tools/worlddump.py
+++ b/tools/worlddump.py
@@ -17,6 +17,8 @@
"""Dump the state of the world for post mortem."""
+from __future__ import print_function
+
import argparse
import datetime
from distutils import spawn
@@ -34,6 +36,7 @@
'neutron-linuxbridge-agent',
'neutron-metadata-agent',
'neutron-openvswitch-agent',
+ 'cinder-volume',
)
@@ -150,7 +153,11 @@
def _netns_list():
process = subprocess.Popen(['ip', 'netns'], stdout=subprocess.PIPE)
stdout, _ = process.communicate()
- return stdout.split()
+ # NOTE(jlvillal): Sometimes 'ip netns list' can return output like:
+ # qrouter-0805fd7d-c493-4fa6-82ca-1c6c9b23cd9e (id: 1)
+ # qdhcp-bb2cc6ae-2ae8-474f-adda-a94059b872b5 (id: 0)
+ output = [x.split()[0] for x in stdout.splitlines()]
+ return output
def network_dump():
@@ -216,6 +223,14 @@
print("guru meditation report in %s log" % service)
+def var_core():
+ if os.path.exists('/var/core'):
+ _header("/var/core dumps")
+ # NOTE(ianw) : see DEBUG_LIBVIRT_COREDUMPS. We could think
+ # about getting backtraces out of these. There are other
+ # tools out there that can do that sort of thing though.
+ _dump_cmd("ls -ltrah /var/core")
+
def main():
opts = get_options()
fname = filename(opts.dir, opts.name)
@@ -231,6 +246,7 @@
ebtables_dump()
compute_consoles()
guru_meditation_reports()
+ var_core()
if __name__ == '__main__':
diff --git a/tools/xen/README.md b/tools/xen/README.md
index 7062ecb..9559e77 100644
--- a/tools/xen/README.md
+++ b/tools/xen/README.md
@@ -171,8 +171,3 @@
umount "$mountdir"
rm -rf "$mountdir"
-### Migrate OpenStack DomU to another host
-
-Given you need to migrate your DomU with OpenStack installed to another host,
-you need to set `XEN_INTEGRATION_BRIDGE` in localrc if neutron network is used.
-It is the bridge for `XEN_INT_BRIDGE_OR_NET_NAME` network created in Dom0
diff --git a/tools/xen/build_xva.sh b/tools/xen/build_xva.sh
index 25bf58c..34ef719 100755
--- a/tools/xen/build_xva.sh
+++ b/tools/xen/build_xva.sh
@@ -96,48 +96,27 @@
tar xf /tmp/devstack.tar -C $STAGING_DIR/opt/stack/devstack
cd $TOP_DIR
-# Create an upstart job (task) for devstack, which can interact with the console
-cat >$STAGING_DIR/etc/init/devstack.conf << EOF
-start on stopped rc RUNLEVEL=[2345]
+# Create a systemd service for devstack
+cat >$STAGING_DIR/etc/systemd/system/devstack.service << EOF
+[Unit]
+Description=Install OpenStack by DevStack
-console output
-task
+[Service]
+Type=oneshot
+RemainAfterExit=yes
+ExecStartPre=/bin/rm -f /opt/stack/runsh.succeeded
+ExecStart=/bin/su -c "/opt/stack/run.sh" stack
+StandardOutput=tty
+StandardError=tty
-pre-start script
- rm -f /opt/stack/runsh.succeeded
-end script
+[Install]
+WantedBy=multi-user.target
-script
- initctl stop hvc0 || true
-
- # Read any leftover characters from standard input
- while read -n 1 -s -t 0.1 -r ignored; do
- true
- done
-
- clear
-
- chown -R $STACK_USER /opt/stack
-
- su -c "/opt/stack/run.sh" $STACK_USER
-
- # Update /etc/issue
- {
- echo "OpenStack VM - Installed by DevStack"
- IPADDR=\$(ip -4 address show eth0 | sed -n 's/.*inet \\([0-9\.]\\+\\).*/\1/p')
- echo " Management IP: \$IPADDR"
- echo -n " Devstack run: "
- if [ -e /opt/stack/runsh.succeeded ]; then
- echo "SUCCEEDED"
- else
- echo "FAILED"
- fi
- echo ""
- } > /etc/issue
- initctl start hvc0 > /dev/null 2>&1
-end script
EOF
+# enable this service
+ln -s $STAGING_DIR/etc/systemd/system/devstack.service $STAGING_DIR/etc/systemd/system/multi-user.target.wants/devstack.service
+
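+# Once the guest boots, the oneshot unit can be inspected like any
+# other systemd service; a sketch from inside the DomU:
+#
+#   sudo systemctl status devstack
+#   sudo journalctl -u devstack
+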
# Configure the hostname
echo $GUEST_NAME > $STAGING_DIR/etc/hostname
@@ -178,6 +157,8 @@
(
flock -n 9 || exit 1
+ sudo chown -R stack /opt/stack
+
[ -e /opt/stack/runsh.succeeded ] && rm /opt/stack/runsh.succeeded
echo \$\$ >> /opt/stack/run_sh.pid
@@ -187,7 +168,24 @@
# Got to the end - success
touch /opt/stack/runsh.succeeded
+
+ # Update /etc/issue
+ (
+ echo "OpenStack VM - Installed by DevStack"
+ IPADDR=$(ip -4 address show eth0 | sed -n 's/.*inet \([0-9\.]\+\).*/\1/p')
+ echo " Management IP: $IPADDR"
+ echo -n " Devstack run: "
+ if [ -e /opt/stack/runsh.succeeded ]; then
+ echo "SUCCEEDED"
+ else
+ echo "FAILED"
+ fi
+ echo ""
+ ) > /opt/stack/issue
+ sudo cp /opt/stack/issue /etc/issue
+
rm /opt/stack/run_sh.pid
) 9> /opt/stack/.runsh_lock
EOF
+
chmod 755 $STAGING_DIR/opt/stack/run.sh
diff --git a/tools/xen/functions b/tools/xen/functions
index e1864eb..bc0c515 100644
--- a/tools/xen/functions
+++ b/tools/xen/functions
@@ -294,6 +294,18 @@
    # Assert it has a numeric nonzero value
expr "$cpu_count" + 0
+    # 8 VCPUs should be enough for the devstack VM; avoid using too
+    # many VCPUs:
+    # 1. too many VCPUs may trigger a kernel bug which results in the
+    # VM failing to boot:
+    # https://kernel.googlesource.com/pub/scm/linux/kernel/git/wsa/linux/+/e2e004acc7cbe3c531e752a270a74e95cde3ea48
+    # 2. the remaining CPUs can be used for other purposes,
+    # e.g. booting test VMs.
+ MAX_VCPUS=8
+ if [ $cpu_count -ge $MAX_VCPUS ]; then
+ cpu_count=$MAX_VCPUS
+ fi
+
xe vm-param-set uuid=$vm VCPUs-max=$cpu_count
xe vm-param-set uuid=$vm VCPUs-at-startup=$cpu_count
}
@@ -317,7 +329,7 @@
# Only support conntrack-tools in Dom0 with XS7.0 and above
if [ ! -f /usr/sbin/conntrackd ]; then
sed -i s/#baseurl=/baseurl=/g /etc/yum.repos.d/CentOS-Base.repo
- centos_ver=$(yum version nogroups |grep Installed | cut -d' ' -f 2 | cut -d'.' -f1-2 | tr '-' '.')
+ centos_ver=$(yum version nogroups |grep Installed | cut -d' ' -f 2 | cut -d'/' -f 1 | cut -d'-' -f 1)
yum install -y --enablerepo=base --releasever=$centos_ver conntrack-tools
# Backup conntrackd.conf after install conntrack-tools, use the one with statistic mode
mv /etc/conntrackd/conntrackd.conf /etc/conntrackd/conntrackd.conf.back
diff --git a/tools/xen/install_os_domU.sh b/tools/xen/install_os_domU.sh
index 66b9eda..f4ca71a 100755
--- a/tools/xen/install_os_domU.sh
+++ b/tools/xen/install_os_domU.sh
@@ -66,10 +66,6 @@
setup_network "$MGT_BRIDGE_OR_NET_NAME"
setup_network "$PUB_BRIDGE_OR_NET_NAME"
-# With neutron, one more network is required, which is internal to the
-# hypervisor, and used by the VMs
-setup_network "$XEN_INT_BRIDGE_OR_NET_NAME"
-
if parameter_is_specified "FLAT_NETWORK_BRIDGE"; then
if [ "$(bridge_for "$VM_BRIDGE_OR_NET_NAME")" != "$(bridge_for "$FLAT_NETWORK_BRIDGE")" ]; then
cat >&2 << EOF
@@ -292,16 +288,6 @@
#
$THIS_DIR/build_xva.sh "$GUEST_NAME"
-# Attach a network interface for the integration network (so that the bridge
-# is created by XenServer). This is required for Neutron. Also pass that as a
-# kernel parameter for DomU
-attach_network "$XEN_INT_BRIDGE_OR_NET_NAME"
-
-XEN_INTEGRATION_BRIDGE_DEFAULT=$(bridge_for "$XEN_INT_BRIDGE_OR_NET_NAME")
-append_kernel_cmdline \
- "$GUEST_NAME" \
- "xen_integration_bridge=${XEN_INTEGRATION_BRIDGE_DEFAULT}"
-
FLAT_NETWORK_BRIDGE="${FLAT_NETWORK_BRIDGE:-$(bridge_for "$VM_BRIDGE_OR_NET_NAME")}"
append_kernel_cmdline "$GUEST_NAME" "flat_network_bridge=${FLAT_NETWORK_BRIDGE}"
@@ -424,7 +410,7 @@
echo "looking at the console of your domU / checking the log files."
echo ""
echo "ssh into your domU now: 'ssh stack@$OS_VM_MANAGEMENT_ADDRESS' using your password"
- echo "and then do: 'sudo service devstack status' to check if devstack is still running."
+ echo "and then do: 'sudo systemctl status devstack' to check if devstack is still running."
echo "Check that /opt/stack/runsh.succeeded exists"
echo ""
echo "When devstack completes, you can visit the OpenStack Dashboard"
diff --git a/tools/xen/scripts/install_ubuntu_template.sh b/tools/xen/scripts/install_ubuntu_template.sh
index d80ed09..6ea3642 100755
--- a/tools/xen/scripts/install_ubuntu_template.sh
+++ b/tools/xen/scripts/install_ubuntu_template.sh
@@ -50,7 +50,7 @@
# however these need to be answered before the netinstall
# is ready to fetch the preseed file, and as such must be here
# to get a fully automated install
-pvargs="-- quiet console=hvc0 partman/default_filesystem=ext3 \
+pvargs="quiet console=hvc0 partman/default_filesystem=ext3 \
console-setup/ask_detect=false locale=${UBUNTU_INST_LOCALE} \
keyboard-configuration/layoutcode=${UBUNTU_INST_KEYBOARD} \
netcfg/choose_interface=eth0 \
diff --git a/tools/xen/xenrc b/tools/xen/xenrc
index bb27454..169e042 100644
--- a/tools/xen/xenrc
+++ b/tools/xen/xenrc
@@ -29,7 +29,6 @@
# Get the management network from the XS installation
VM_BRIDGE_OR_NET_NAME="OpenStack VM Network"
PUB_BRIDGE_OR_NET_NAME="OpenStack Public Network"
-XEN_INT_BRIDGE_OR_NET_NAME="OpenStack VM Integration Network"
# VM Password
GUEST_PASSWORD=${GUEST_PASSWORD:-secret}
@@ -63,8 +62,8 @@
PUB_NETMASK=${PUB_NETMASK:-255.255.255.0}
# Ubuntu install settings
-UBUNTU_INST_RELEASE="trusty"
-UBUNTU_INST_TEMPLATE_NAME="Ubuntu 14.04 (64-bit) for DevStack"
+UBUNTU_INST_RELEASE="xenial"
+UBUNTU_INST_TEMPLATE_NAME="Ubuntu 16.04 (64-bit) for DevStack"
# For 12.04 use "precise" and update template name
# However, for 12.04, you should be using
# XenServer 6.1 and later or XCP 1.6 or later
@@ -101,6 +100,7 @@
## Note that the lines below are coming from stackrc to support
## new-style config files
+source $RC_DIR/functions-common
# allow local overrides of env variables, including repo config
if [[ -f $RC_DIR/localrc ]]; then
diff --git a/unstack.sh b/unstack.sh
index c05d1f0..485fed7 100755
--- a/unstack.sh
+++ b/unstack.sh
@@ -66,9 +66,7 @@
source $TOP_DIR/lib/placement
source $TOP_DIR/lib/cinder
source $TOP_DIR/lib/swift
-source $TOP_DIR/lib/heat
source $TOP_DIR/lib/neutron
-source $TOP_DIR/lib/neutron-legacy
source $TOP_DIR/lib/ldap
source $TOP_DIR/lib/dstat
source $TOP_DIR/lib/dlm
@@ -99,10 +97,6 @@
# Call service stop
-if is_service_enabled heat; then
- stop_heat
-fi
-
if is_service_enabled nova; then
stop_nova
fi
@@ -135,9 +129,6 @@
stop_tls_proxy
cleanup_CA
fi
-if [ "$USE_SSL" == "True" ]; then
- cleanup_CA
-fi
SCSI_PERSIST_DIR=$CINDER_STATE_PATH/volumes/*